Posts

Comments

Comment by Polymeron on 37 Ways That Words Can Be Wrong · 2015-03-23T07:48:13.727Z · LW · GW

That's why the rule says challengeable inductive inference. If in the context of the discussion this is not obvious then maybe yes; but in almost every other instance it's fine to make these shortcuts, so long as you're understood.

Comment by Polymeron on Dark Side Epistemology · 2014-02-04T19:55:31.038Z · LW · GW

No; an argument from authority can be a useful heuristic in certain cases, but at least you'd want to take away the one or two arguments you found most compelling and check them out later. In that sense, this is borderline.

Usually, however, this tactic is employed by people who are just looking for an excuse to flee into the warm embrace of an unassailable authority, often after scores of arguments they made were easily refuted. It is a mistake to give a low value to p(my position is mistaken | 10 arguments I have made have been refuted to my satisfaction in short order).

Comment by Polymeron on Dark Side Epistemology · 2014-02-04T05:33:26.433Z · LW · GW

I've had forms of this said to me; it basically means "I'm losing the debate because you personally are smart, not because I'm wrong. Whichever authority I listen to in order to reinforce my existing beliefs would surely crush all your arguments. So stop assailing me with logic..."

It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.

Comment by Polymeron on The genie knows, but doesn't care · 2013-09-06T04:41:54.951Z · LW · GW

Wouldn't this only be correct if similar hardware ran the software the same way? Human thinking is highly associative and variable, and since language is shared amongst many humans, it doesn't, as such, have a fixed formal representation.

Comment by Polymeron on The flawed Turing test: language, understanding, and partial p-zombies · 2013-08-15T16:28:33.094Z · LW · GW

I agree on the basic point, but then my deeper point was that somewhere down the line you'll find the intelligence(s) that created a high-fidelity converter for an arbitrary amount of information from one format to another. Searle is free to claim that the system does not understand Chinese, but its very function could only have been imparted by parties who collectively speak Chinese very well, making the room at the very least a medium of communication utilizing this understanding.

And this is before we mention the entirely plausible claim that the room-person system as a whole understands Chinese, even though neither of its two parts does. Any system you take apart to a sufficient degree will stop displaying the properties of the whole, so having us peer inside an electronic brain asking "but where does the intelligence/understanding reside?" misses the point entirely.

Comment by Polymeron on The flawed Turing test: language, understanding, and partial p-zombies · 2013-08-11T08:35:34.361Z · LW · GW

Wouldn't such a GLUT by necessity require someone possessing immensely fine understanding of Chinese and English both, though? You could then say that the person+GLUT system as a whole understands Chinese, as it combines both the person's symbol-manipulation capabilities and the actual understanding represented by the GLUT.

You might still not possess understanding of Chinese, but that does not mean a meaningful conversation has not taken place.

Comment by Polymeron on Welcome to Less Wrong! (6th thread, July 2013) · 2013-08-11T07:34:58.300Z · LW · GW

Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I'm not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 8.33% we'd expect it to actually apply to. Then there's selection bias inherent to people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet...

I'm interested to know: did you have any particular goal in mind in posting this, or were you just making yourself generally known? If you need help or advice on any subject, be specific about it and I will be happy to assist (as will many others, I'm sure).

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2013-04-23T18:55:27.614Z · LW · GW

I suspect that with memory on the order of 10^70 bytes, that might involve additional complications; but you're correct, normally this cancels out the complexity problem.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2013-04-23T18:50:01.760Z · LW · GW

I didn't consider using 3 bits for pawns! Thanks for that :) I did account for such variables as whether each side may castle and whose turn it is.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2013-04-17T07:26:44.412Z · LW · GW

This is more or less what computers do today to win chess matches, but the space of possibilities explodes too fast; even the strongest computers can't really keep track of more than (I think) 13 or 14 moves ahead, even given a long time to think.

Merely storing all the positions that are unwinnable - regardless of why they are so - would require more matter than we have in the solar system. Not to mention the efficiency of running a DB search on that...

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2013-04-17T07:16:04.025Z · LW · GW

The two are not in conflict.

À la Levinthal's paradox, I can say that throwing a marble down a conical hollow at different angles and with different forces can have literally trillions of possible trajectories; à la Anfinsen's dogma, that should not stop me from predicting that it will end up at the bottom of the cone. But I'd need to know the shape of the cone (or, more specifically, the location of its point) to determine exactly where that is; so being able to make the prediction once I know this is of no assistance for predicting the end position with a different, unknown cone.

Similarly, Eliezer is able to predict that a grandmaster chess player would be able to bring a board to a winning position against himself, even though he has no idea what moves that would entail or which of the many trillions of possible move sequences the game would consist of.

Problems like this cannot be solved by brute force alone; you need to use attractors and heuristics to get where you want to get.

So yes, obviously nature stumbled into certain stable configurations which propelled it forward, rather than solve the problem and start designing away. But even if we can never have enough computing power to model each and every atom in each and every configuration, we might still get a good enough understanding of the general laws for designing proteins almost from scratch.

Comment by Polymeron on What Is Signaling, Really? · 2012-08-09T02:12:41.913Z · LW · GW

When I was studying under Amotz Zahavi (originator of the handicap principle theory, which is what you're actually discussing), he used to make the exact same points. In fact, he used to say that "no communication is reliable unless it has a cost".

Having this outlook on life for the past 5 years has made a lot of things seem very different - small questions like why some people don't use seatbelts and brag about it, or why men on dates leave big tips; but also bigger questions like advertising, how hierarchical relationships really work, etc.

It also explained a lot about the possible origins of (some) altruistic behaviors; Zahavi's favorite examples were from the research he and his wife conducted, wherein they observed small groups of social birds (forgot the species, sorry) where altruistic behavior is common. And it turns out it's the dominant birds who behave altruistically, rather than exploit their weaker brethren - but they do so as a show of strength. My own favorite example is when a lower-status male caught a worm and tried feeding it to the alpha male. The latter proceeded to beat him up, take the morsel, and force-feed it back to the weaker male.

Best course I ever took :)

Comment by Polymeron on Problematic Problems for TDT · 2012-06-12T17:23:40.861Z · LW · GW

These questions seem decidedly UNfair to me.

No, they don't depend on the agent's decision-making algorithm; just on another agent's specific decision-making algorithm skewing results against an agent with an identical algorithm and letting all others reap the benefits of an otherwise non-advantageous situation.

So, a couple of things:

  1. While I have not mathematically formulated this, I suspect that absolutely any decision theory can have a similar scenario constructed for it, using another agent / simulation with that specific decision theory as the basis for payoff. Go ahead and prove me wrong by supplying one where that's not the case...

  2. It would be far more interesting to see a TDT-defeating question that doesn't have "TDT" (or taboo versions) as part of its phrasing. In general, questions of how a decision theory fares when agents can scan your algorithm and decide to discriminate against that algorithm specifically, are not interesting - because they are losing propositions in any case. When another agent has such profound understanding of how you tick and malice towards that algorithm, you have already lost.

Comment by Polymeron on Help me transition to human society! · 2012-06-01T06:37:02.514Z · LW · GW

For a while now, I've been meaning to check out the code for this and heavily revise it to include things like data storage space, physical manufacturing capabilities, non-immediately-lethal discovery by humans (so you detected my base in another dimension? Why should I care, again?), and additional modes of winning. All of which I will get around to soon enough.

But, I'll tell you this. Now when I revise it, I am going to add a game mode where your score is in direct proportion to the amount of office equipment in the universe, with the smallest allowed being a functional paperclip. I am dead serious about this.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-26T06:01:19.993Z · LW · GW

I have likewise adjusted down my confidence that this would be as easy or as inevitable as I previously anticipated. Thus I would no longer say I am "vastly confident" in it, either.

Still good to have this buffer between making an AI and total global catastrophe, though!

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-24T08:57:11.074Z · LW · GW

The way I see it, there's no evidence that these problems require additional experimentation to resolve, rather than finding some obscure piece of experimentation that has already taken place and whose relevance may not be immediately obvious.

Sure, it's probable that more experimentation is needed; but it's by no means certain.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-24T08:55:04.055Z · LW · GW

My point was that the AI is likely to start performing social experiments well before it is capable of even that conversation you depicted. It wouldn't know how much it doesn't know about humans.

Comment by Polymeron on Muehlhauser-Wang Dialogue · 2012-05-23T19:00:29.818Z · LW · GW

I don't see how that would be relevant to the issue at hand, and thus, why they "need to assume [this] possibility". Whether they assume the people they talk to can be more intelligent than them or not, so long as they engage them on an even intellectual ground (e.g. trading civil letters of argumentation), is simply irrelevant.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-23T17:05:16.239Z · LW · GW

What I was expressing skepticism about was that a system with even approximately human-level intelligence necessarily supports a stack trace that supports the kind of analysis you envision performing in the first place, without reference to intentional countermeasures.

Ah, that does clarify it. I agree, analyzing the AI's thought process would likely be difficult, maybe impossible! I guess I was being a bit hyperbolic in my earlier "crack it open" remarks (though depending on how seriously you take it, such analysis might still take place, hard and prolonged though it may be).

One can have "detectors" in place set to find specific behaviors, but these would rest on assumptions that could easily fail. Detectors that would still be useful would be macro ones - what it tries to access, and how - but these would provide only limited insight into the AI's thought process.

[...]the difference between "the minimal set of information about humans required to have a conversation with one at all" (my phrase) and "the most basic knowledge about humans" (your phrase). What do you imagine the latter to encompass, and how do you imagine the AI obtained this knowledge?

I actually perceive your phrase to be a subset of my own; I am making the (reasonable, I think) assumption that humans will attempt to communicate with the budding AI. Say, in a lab environment. It would acquire its initial data from this interaction.

I think both these sets of knowledge depend a lot on how the AI is built. For instance, a "babbling" AI - one that is given an innate capability of stringing words together onto a screen, and the drive to do so - would initially say a lot of gibberish and would (presumably) get more coherent as it gets a better grip on its environment. In such a scenario, the minimal set of information about humans required to have a conversation is zero; it would be having conversations before it even knows what it is saying. (This could actually make detection of deception harder down the line, because such attempts can be written off as "quirks" or AI mistakes)

Now, I'll take your phrase and twist it just a bit: The minimal set of knowledge the AI needs in order to try deceiving humans. That would be the knowledge that humans can be modeled as having beliefs (which drive behavior) and that these can be altered by the AI's actions, at least to some degree. Now, assuming this information isn't hard-coded, it doesn't seem likely that this is all an AI would know about us; it should be able to see at least some patterns in our communications with it. However, I don't see how such information would be useful for deception purposes before extensive experimentation.

(Is the fact that the operator communicates with me between 9am and 5pm an intrinsic property of the operator? For all I know, that is a law of nature...)

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-23T16:35:32.634Z · LW · GW

Actually, I don't know that this means it has to perform physical experiments in order to develop nanotechnology. It is quite conceivable that all the necessary information is already out there, but we haven't been able to connect all the dots just yet.

At some point the AI hits a wall in the knowledge it can gain without physical experiments, but there's no good way to know how far ahead that wall is.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-23T16:26:01.267Z · LW · GW

I think the weakest link here is human response to the AI revealing it can be deceptive. There is absolutely no guarantee that people would act correctly under these circumstances. Human negligence for a long enough time would eventually give the AI a consistent ability to manipulate humans.

I also agree that simulating relationships makes sense as it can happen in "AI time" without having to wait for human response.

The other reservations seem less of an issue to me...

It is clear to me that game theory knowledge coupled with the most basic knowledge about humans is insufficient to cleverly manipulate them. I don't see how these things can be combined in this way, or why the AI would choose to be particularly careful (it doesn't know what it should be careful about). I'd love to see a plausible line of thought by which the AI would try and succeed in deceiving humans at the get-go point without exposing its intentions.

A stack trace reader or an equivalent monitoring system is something you implement externally without telling the AI about it. Sure, if it's powerful enough it will mislead or disable it; but while it has no knowledge of it, and no knowledge on what sort of operations it should mask, it has no means of doing so. (If it encrypts everything it does, somehow fearing any monitoring, that is in itself cause for concern).

To put this into an analogy: You can feed a surveillance camera a repeating loop, but not if you don't know that the camera exists. If you suspect it exists, you still don't know where; so you don't know which approach to take so as to remain undetected.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T19:44:59.000Z · LW · GW

It's not. Apparently I somehow replied to the wrong post... It's actually aimed at sufferer's comment you were replying to.

I don't suppose there's a convenient way to move it? I don't think retracting and re-posting would clean it up sufficiently, in fact that seems messier.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T19:42:25.015Z · LW · GW

Presumably, you build a tool-AI (or three) that will help you solve the Friendliness problem.

This may not be entirely safe either, but given the parameters of the question, it beats the alternative by a mile.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T19:38:16.629Z · LW · GW

That is indeed relevant, in that it describes some perverse incentives and weird behaviors of nonprofits, with an interesting example. But knowing this context without having to click the link would have been useful. It is customary to explain what a link is about rather than just drop it.

(Or at least it should be)

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T19:17:27.990Z · LW · GW

I really don't see why the drive can't be to issue the predictions most likely to be correct as of the moment of the question (and only for the last question it was asked), calculating outcomes under the assumption that the Oracle immediately spits out blank paper as the answer.

Yes, in a certain subset of cases this can result in inaccurate predictions. If you want to have fun with it, have it also calculate the future including its involvement, but rather than reply with that second prediction, just add "This prediction may be inaccurate due to your possible reaction to this prediction" if the difference between the two answers is beyond a certain threshold. Or don't; usually, life-relevant answers will not be particularly impacted by whether you get an answer or a blank page.
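A minimal sketch of that answer policy, just to make the two-prediction comparison concrete. None of this comes from the original discussion: predict() and difference() are hypothetical stand-ins for "model the world under an assumption" and "compare two predicted futures".

```python
# Hypothetical sketch of the Oracle answer policy described above.
# `predict` and `difference` are assumed helpers, not real APIs.

CAVEAT = ("This prediction may be inaccurate due to your possible "
          "reaction to this prediction.")

def answer(question, predict, difference, threshold):
    # Prediction computed as if the Oracle immediately returned a blank page,
    # so the output is not chosen to be self-fulfilling.
    baseline = predict(question, assumed_output="")
    # Prediction that accounts for the questioner actually reading the answer.
    with_reaction = predict(question, assumed_output=baseline)
    if difference(baseline, with_reaction) > threshold:
        return baseline + "\n" + CAVEAT
    return baseline
```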

So, this design doesn't spit out self-fulfilling prophecies. The only safety breach I see here is that, like a literal genie, it can give you answers that you wouldn't realize are dangerous because the question has loopholes.

For instance: "How can we build an oracle with the best predictive capabilities with the knowledge and materials available to us?" (The Oracle does not self-iterate, because its only function is to give answers, but it can tell you how to). The Oracle spits out schematics and code that, if implemented, give it an actual drive to perform actions and self-iterate, because that would make it the most powerful Oracle possible. Your engineers comb the code for vulnerabilities, but because there's a better chance this will be implemented if the humans are unaware of the deliberate defect, it will be hidden in the code in such a way as to be very hard to detect.

(Though as I explained elsewhere in this thread, there's an excellent chance the unreliability would be exposed long before the AI is that good at manipulation)

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T18:52:02.879Z · LW · GW

after all, if "even a chance" is good enough, then all the other criticisms melt away

Not to the degree that SI could be increasing the existential risk, a point Holden also makes. "Even a chance" swings both ways.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T18:25:30.743Z · LW · GW

That subset of humanity holds considerably less power, influence and visibility than its counterpart; resources that could be directed to AI research and for the most part aren't. Or in three words: Other people matter. Assuming otherwise would be a huge mistake.

I took Wei_Dai's remarks to mean that Luke's response is public, and so can reach the broader public sooner or later; and when examined in a broader context, that it gives off the wrong signal. My response was that this was largely irrelevant, not because other people don't matter, but because of other factors outweighing this.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T18:05:14.126Z · LW · GW

It's a fine line though, isn't it? Saying "huh, looks like we have much to learn, here's what we're already doing about it" is honest and constructive, but sends a signal of weakness and defensiveness to people not bent on a zealous quest for truth and self-improvement. Saying "meh, that guy doesn't know what he's talking about" would send the stronger social signal, but would not be constructive to the community actually improving as a result of the criticism.

Personally I prefer plunging ahead with the first approach. Both in the abstract for reasons I won't elaborate on, but especially in this particular case. SI is not in a position where its every word is scrutinized; it would actually be a huge win if it gets there. And if/when it does, there's a heck of a lot more damning stuff that can be used against it than an admission of past incompetence.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T17:56:51.323Z · LW · GW

I see no reason for it to do that before simple input-output experiments, but let's suppose I grant you this approach. The AI simulates an entire community of mini-AI and is now a master of game theory.

It still doesn't know the first thing about humans. Even if it now understands the concept that hiding information gives an advantage for achieving goals - this is too abstract. It wouldn't know what sort of information it should hide from us. It wouldn't know to what degree we analyze interactions rationally, and to what degree our behavior is random. It wouldn't know what we can or can't monitor it doing. All these things would require live experimentation.

It would stumble. And when it does that, we will crack it open, run the stack trace, find the game theory it was trying to run on us, pale collectively, and figure out that this AI approach creates manipulative, deceptive AIs.

Goodbye to that design, but not to Earth, I think!

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T17:45:29.999Z · LW · GW

I'm afraid not.

Actually, as someone with a background in biology, I can tell you that this is not a problem you want to approach atoms-up. It's been tried, and our computational capabilities fell woefully short of succeeding.

I should explain what "woefully short" means, so that the answer won't be "but can't the AI apply more computational power than us?". Yes, presumably it can. But the scales are immense. To explain it, I will need an analogy.

Not that long ago, I had the notion that chess could be fully solved; that is, that you could simply describe every legal position and every position possible to reach from it, without duplicates, so you could use that decision tree to play a perfect game. After all, I reasoned, it's been done with checkers; surely it's just a matter of getting our computational power just a little bit better, right?

First I found a clever way to minimize the amount of bits necessary to describe a board position. I think I hit 34 bytes per position or so, and I guess further optimization was possible. Then, I set out to calculate how many legal board positions there are.
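For what it's worth, here is a back-of-the-envelope reconstruction of where a figure like 34 bytes can come from. This is a naive square-by-square scheme assumed for illustration, not necessarily the original encoding, but it lands in the same ballpark:

```python
import math

# Naive bit budget for one chess position (an illustrative assumption,
# not the original encoding).
piece_states = 13                                      # empty + 6 piece types x 2 colors
bits_per_square = math.ceil(math.log2(piece_states))   # 4 bits
board_bits = 64 * bits_per_square                      # 256 bits
extras = 1 + 4 + 4   # side to move, castling rights, en passant file + flag
total_bits = board_bits + extras                       # 265 bits
print(total_bits, "bits ->", math.ceil(total_bits / 8), "bytes")  # ~34 bytes
```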

I stopped trying to be accurate about it when it turned out that the answer was in the vicinity of 10^68, give or take a couple orders of magnitude. That's about a trillionth of the TOTAL NUMBER OF ATOMS IN THE ENTIRE UNIVERSE (roughly 10^80). You would literally need more than our entire galaxy made into a huge database just to store the information, not to mention accessing it and computing on it.
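To put the storage claim in rough numbers (the 10^68 count is the estimate above; the galaxy's atom count is an order-of-magnitude assumption, not a figure from the original comment):

```python
# Rough arithmetic for the storage claim; figures are order-of-magnitude only.
positions = 1e68            # estimated legal positions (see above, +/- a couple orders)
bytes_per_position = 34     # per the encoding above
total_bytes = positions * bytes_per_position    # ~3.4e69 bytes
atoms_in_milky_way = 1e69   # assumed order of magnitude
# Even storing one byte per atom, the database outgrows the galaxy:
print(total_bytes / atoms_in_milky_way)         # > 1
```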

So, not anytime soon.

Now, the problem with protein folding is, it's even more complex than chess. At the atomic level, it's incredibly more complex than chess. Luckily, you don't need to fully solve it, just like today's computers can beat human chess players without spanning the whole planet. But they do it with heuristics, approximations, and sometimes machine learning (though that just gives them more heuristics and approximations). We may one day be able to fold proteins, but we will do so by making assumptions and approximations, generating useful rules of thumb, not by modeling each atom.

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T17:32:04.920Z · LW · GW

It is very possible that the necessary information already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation; but given the difference between AI time and human time, an AI could agonize over that problem with a good deal more cleverness and ingenuity than we've been able to apply to it so far.

That isn't to say that this is likely; but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb, without further experimentation? Maybe. Who knows?

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T17:16:31.649Z · LW · GW

I would not consider a child AI that tries a bungling lie at me to see what I do "so safe". I would immediately shut it down and debug it, at best, or write a paper on why the approach I used should never ever be used to build an AI.

And it WILL make a bungling lie at first. It can't learn the need to be subtle without witnessing the repercussions of not being subtle. Nor would it have a reason to consider doing social experiments in chat rooms when it doesn't understand chat rooms and has an engineer willing to talk to it right there. That is, assuming I was dumb enough to give it an unfiltered Internet connection, which I don't know why I would be. At the very least, the moment it goes on chat rooms my tracking devices should discover this and I could witness its bungling lies first hand.

(It would not think to fool my tracking device or even consider the existence of such a thing without a good understanding of human psychology to begin with)

Comment by Polymeron on Thoughts on the Singularity Institute (SI) · 2012-05-20T17:07:25.860Z · LW · GW

An experimenting AI that tries to achieve goals and has interactions with humans whose effects it can observe, will want to be able to better predict their behavior in response to its actions, and therefore will try to assemble some theory of mind. At some point that would lead to it using deception as a tool to achieve its goals.

However, following such a path to a theory of mind means the AI would be exposed as unreliable LONG before it's even subtle, not to mention possessing superhuman manipulation abilities. There is simply no reason for an AI to first understand the implications of using deception before using it (deception is a fairly simple concept, the implications of it in human society are incredibly complex and require a good understanding of human drives).

Furthermore, there is no reason for the AI to realize the need for secrecy in conducting social experiments before it starts doing them. Again, the need for secrecy stems from a complex relationship between humans' perception of the AI and its actions; a relationship it will not be able to understand without performing the experiments in the first place.

Getting an AI to the point where it is a super manipulator requires either actively trying to do so, or being incredibly, unbelievably stupid and blind.

Comment by Polymeron on Diseased disciplines: the strange case of the inverted chart · 2012-05-20T15:12:32.746Z · LW · GW

While the example given is not the main point of the article, I'd still like to share a bit of actual data. Especially since I'm kind of annoyed at having spouted this rule as gospel without having a source, before.

A study done at IBM shows that a defect found during the coding stage costs about $25 to fix (basically in engineer hours used to find and fix it).

This cost quadruples to $100 during the build phase, presumably because a defect there can bottleneck a lot of other people trying to submit their code, if you happen to break the build.

The cost quadruples again for bugs found during the QA/testing phase, to $450. I'm guessing this includes tester time, developer time, additional tools used to facilitate bug tracking... Investments the company might have made anyway, but not if testing did not catch bugs that would otherwise go out to market.

The next milestone is bugs discovered once the product is released, and here the jump is huge: each bug costs $16k, about 35 times the cost of a tester-found bug. I'm not sure if this includes revenue lost due to bad publicity, but I'm guessing probably not. I think only tangible investments were tracked.

Critical bugs discovered by customers that do not result in a general recall cost about 10 times that much (this is the only step where the x10 figure actually seems to hold), at $158k per defect. This increases to $241k for recalled products.

My own company also noticed that external bugs typically take twice as long to fix as internally found bugs (~59h to ~30h) in a certain division.

So this "rule of thumb" seems real enough... The x10 rule is not quite right, it's more like a x4 rule with a huge jump once your product goes to market. But the general gist seems to be correct.

Note this is all more in line with the quoted graph than its extrapolation: Bugs detected late cost more to fix. It tells us nothing about the stage they were introduced in.

Go data-driven conclusions! :)

Comment by Polymeron on Science as Attire · 2012-04-08T18:28:43.804Z · LW · GW

Now, I have seen some interesting papers that construct expanded probability theories which include 0 and 1 as logical falsehood and truth, respectively. But that still does not include a special value for contradictions.

Except, contradictions really are the only way you can get to logical truth or falsehood; anything other than that necessarily relies on inductive reasoning at some point. So any probability theory employing those must use contradictions as a means for arriving at these values in the first place.
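To illustrate in standard terms (my own gloss, not something taken from the papers mentioned): the probability axioms already pin the extreme values to tautologies and contradictions, independent of any evidence,

\[
P(A \lor \lnot A) = 1, \qquad P(A \land \lnot A) = 0 \quad \text{for every proposition } A,
\]

whereas the value assigned to any contingent proposition depends on the prior and the evidence.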

I do think that there's not much room for contradictions in probability theories trying to actually work in the real world, in the sense that any argument of the form A->(B & ~B) also has to rely on induction at some point; but it's still helpful to have an anchor where you can say that, if a certain relationship does exist, then a certain proposition is definitely true.

(This is not like saying that a proposition can have a probability of 0 or 1, because it must rely, at least somewhere down the line, on another proposition with a probability different from 0 and 1).

Comment by Polymeron on Schelling fences on slippery slopes · 2012-04-08T17:59:34.438Z · LW · GW

Or, you can treat "heapness" as a boolean and still completely clobber this paradox just by being specific about what it actually means for us to call something a heap.

Comment by Polymeron on Schelling fences on slippery slopes · 2012-04-08T17:16:55.820Z · LW · GW

I'd like to mention that I had an entire family branch hacked off in the Holocaust, in fact have a great-uncle still walking around with a number tattooed on his forearm, and have heard dozens of eyewitness accounts of horrors I could scarce imagine. And I'm still not okay with Holocaust denial laws, which do exist where I live.

In part, this is just my aversion to abandoning the Schelling point you mention; but lately, this is becoming more of an actual concern: My country is starting to legislate some more prohibitions on free speech, all of them targeting one side of the political spectrum, and one of the main arguments touted for such laws is "well, we're already banning Holocaust denial, nothing wrong came of that, right?".

The slope can be slippery indeed...

Comment by Polymeron on Diseased disciplines: the strange case of the inverted chart · 2012-02-09T06:43:25.809Z · LW · GW

I don't understand why you think the graphs are not measuring a quantifiable metric, nor why they would not be falsifiable. Especially if the ratios are as dramatic as often depicted, I can think of a lot of things that would falsify it.

I also don't find it difficult to say what they measure: The cost of fixing a bug depending on which stage it was introduced in (one graph) or which stage it was fixed in (other graph). Both things seem pretty straightforward to me, even if "stages" of development can sometimes be a little fuzzy.

I agree with your point that falsifications should have been forthcoming by now, but then again, I don't know that anyone is actually collecting this sort of metric - so anecdotal evidence might be all people have to go on, and we know how unreliable that is.

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-07T14:40:33.319Z · LW · GW

You are attributing to me things I did not say.

I don't think "truths" discovered under false assumptions are likely to be, in fact, true. I am not worried about them acquiring dangerous truths; rather, I am worried about people acquiring (and possibly acting on) false beliefs. I remind you that false beliefs may persist as cached thoughts even once the assumption is no longer believed in.

Nor do I want my political opponents to not search for truth; but I would prefer that they (and I) try to contend with each others' fundamental differences before focusing on how to fully realize their (or my) current position.

Comment by Polymeron on Diseased disciplines: the strange case of the inverted chart · 2012-02-07T08:19:38.475Z · LW · GW

A costly but simple way would be to gather groups of software engineers and have them work on projects where you intentionally introduce defects at various stages, and measure the costs of fixing them. To be statistically meaningful, this probably means thousands of engineer hours devoted just to that.

A cheap (but not simple) way would be to go around as many companies as possible and hold the relevant measurements on actual products. This entails a lot of variables, however - engineer groups tend to work in many different ways. This might cause the data to be less than conclusive. In addition, the politics of working with existing companies may also tilt the results of such a research.

I can think of simple experiments that are not cheap; and of cheap experiments that are not simple. I'm having difficulty satisfying the conjunction and I suspect one doesn't exist that would give a meaningful answer for high-cost bugs.

(Minor edit: Added the missing "hours" word)

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-07T08:11:22.904Z · LW · GW

It's possible that I misconstrued the meaning of your words; not being a native English speaker myself, this happens on occasion. I was going off of the word "vibrant", which I understand to mean among other things "vital" and "energetic". The opposite of that is to make something sickly and weak.

But regardless of any misunderstanding, I would like to see some reference to the main point I was making: Do you want people to think on how best to do the opposite of what you are striving for (making the country less vibrant and diverse, whatever that means), or do you prefer to determine which of you is pursuing a non-productive avenue of investigation?

Comment by Polymeron on Diseased disciplines: the strange case of the inverted chart · 2012-02-06T22:07:30.592Z · LW · GW

This strikes me as particularly galling because I have in fact repeated this claim to someone new to the field. I think I prefaced it with "studies have conclusively shown...". Of course, it would have been unreasonable of me to think that what is being touted by so many as well-researched was not, in fact, so.

Mind, it seems to me that defects do follow both patterns: introducing defects earlier and/or fixing them later should come at a higher dollar cost; that just makes sense. However, it could be the same type of "makes sense" that made Aristotle conclude that heavy objects fall faster than light objects - getting actual data would be much better than reasoning alone, especially as it would tell us just how much costlier, if at all, these differences are - it would be an actual precise tool rather than a crude (and uncertain) rule of thumb.

I do have one nagging worry about this example: These days a lot of projects collect a lot of metrics. It seems dubious to me that no one has tried to replicate these results.

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-06T16:12:59.481Z · LW · GW

That's not really the question... The question is, what do you do with those who say you shouldn't make your country more vibrant and diverse? Do you really want them starting a separate and effective discussion on how to best destroy the country, or would you prefer to first engage them on this more basic issue?

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-05T22:27:59.735Z · LW · GW

One that's already related to LW - commonsenseatheism.com; however that reinforces the thought that any LW regular who also frequents other places could discuss or link to it there.

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-05T18:43:09.404Z · LW · GW

I've up-voted several lists containing statements with which I disagree (some vehemently so), but which were thought provoking or otherwise helpful. So, even if this is just anecdotal evidence, the process you described seems to be happening.

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-05T18:34:07.058Z · LW · GW

An interesting thought, but as a practical idea it's a bad one.

A lot of the problem with how people debate is that the underlying assumptions are different, but this goes unnoticed. So two people can argue about whether it's right or wrong to fight in Iraq when their actual disagreement is on whether Arabs count as people, and could actually argue for hours before realizing this disagreement exists (Note: This is not a hypothetical example). Failing to target the fundamental differences in assumptions leads to much of the miscommunication we so often see.

By having two (or more) debates branch off of different and incompatible assumptions, we're risking people solidifying in holding the wrong assumption, or even forgetting they're making it. The human mind is such that it seeks to integrate beliefs into a (more or less) coherent network without glaring contradictions, and by making people think long and hard off of a false assumption, we're poisoning their thinking process rather than enriching it. Even if, as you say, the signal-to-noise ratio is supposedly higher.

I would advise targeting the underlying disagreements first, proceeding only once those are dismantled.

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-05T18:21:41.131Z · LW · GW

I came to this thread by way of someone discussing a specific comment in an outside forum.

Comment by Polymeron on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-02-05T17:53:10.201Z · LW · GW

I'm finding it difficult to think of an admission criterion to the conspiracy that would not ultimately result in even larger damage than discussing matters openly in the first place.

To clarify: It's only a matter of time before the conspiracy leaks, and when it does, the public would take its secrecy as further damning evidence.

Perhaps the one thing you could do is keep the two completely separate on paper (and both public). Guilt by association would still be easy to invoke once the overlapping of forum participants is discovered, but that is much weaker than actually keeping a secret society discussing such issues.

Comment by Polymeron on New Year's Prediction Thread (2012) · 2012-01-15T12:41:01.545Z · LW · GW

I'm fairly convinced (65%) that Lalonde appearified the Sassacre book in such a way that it killed Jaspers, which is why she had to leave so abruptly.

Comment by Polymeron on New Year's Prediction Thread (2012) · 2012-01-04T02:04:32.253Z · LW · GW

I somehow never thought to combine Homestuck wild mass guessing with prediction markets. And didn't really expect this on LW, for some reason. Holy cow.

Hm, let's try my two favorite pet theories...

  • In a truly magnificent Moebius double reacharound, The troll universe will turn out to have been created by the kids' session (either pre- or post- scratch): 40% (used to be higher, but now we have some asymmetries between the sessions, like The Tumor, so.)

  • In an even more bizarre mindscrew that echoes paradox cloning, the various kids and guardians will turn out to be the same people in both sessions (e.g. Poppop Crocker is the very same John we know, the Bro that cuts the meteor is the same one who programmed the auto-responder, etc.). That means that, at least for the Derse dreamers, each of them raised their own guardian, probably inflicting upon them whichever neurosis they got from them in the first place. 35% on this one, because it entails some heavy-duty time shenaniganry. But I still like this one best :3

And on a more light-hearted note... Human-troll sloppy makeouts to happen at any point in the story: 90%

(all these predictions are not time-bound to 2012 - apply until the end of the story, including the Epilogue)