Comments

Comment by Recovering_irrationalist on Bay Area Meetup for Singularity Summit · 2008-10-19T14:16:46.000Z · LW · GW

So far there's been hardly any feedback on places and no restaurant recommendations. If I get no more responses by tomorrow I'll just search the net for a well-reviewed restaurant within walking distance of Montgomery Theater, good for groups, accepting of casual attire, and hopefully not too crowded or noisy (with a private room?), book it for Saturday, probably around 7pm, for 21 people, post the details and directions, and hope everyone turns up.

If you'd prefer a different time, or have any preferences at all, please let me know before I do that. So far no one's mentioned vegetarian, parking, or wheelchair access needs, or a preference for or against any food except one vote for pizza. How do you feel about Chinese? Italian? Mexican?

Comment by Recovering_irrationalist on Protected From Myself · 2008-10-19T11:16:46.000Z · LW · GW

Excellent post. Please write more on ethics as safety rails on unseen cliffs.

Comment by Recovering_irrationalist on Crisis of Faith · 2008-10-12T15:34:00.000Z · LW · GW

Nazir, a secret hack to prevent Eliezer from deleting your posts is here. #11.6 is particularly effective.

Comment by Recovering_irrationalist on Bay Area Meetup for Singularity Summit · 2008-10-12T12:22:44.000Z · LW · GW

Ah, I see...

other events may be offered at the same time, and I can not predict such events.

As far as Eliezer is currently aware, Saturday night should be clear.

I meant some of you singularity-related guys may want to meet me at other times, possibly at my apartment.

I'd love to come to another meet, and Anna would too, probably others. I just wasn't sure there'd be enough people for two, so I focused on making at least one happen.

I guess this was not the right place to post such an offer.

If the invite extends to OB readers, you're very welcome to share this page. If it's just for us Singularitarians, it's probably better to plan elsewhere and post a link here.

Comment by Recovering_irrationalist on AIs and Gatekeepers Unite! · 2008-10-10T12:47:00.000Z · LW · GW

Oops, misinterpreted tags. Should read:

It's 3am and the lab calls. Your AI claims [nano disaster/evil AI emergence/whatever] and it must be let out to stop it. Its evidence seems to check out.

Comment by Recovering_irrationalist on AIs and Gatekeepers Unite! · 2008-10-10T12:44:00.000Z · LW · GW

Even if we had the ultimate superintelligence volunteer to play the AI, and we proved a gatekeeper strategy "wins" 100% of the time (functionally equal to a rock on the "no" key), that wouldn't show AI boxing can possibly be safe.

It's 3am and the lab calls. Your AI claims and it must be let out to stop it. Its evidence seems to check out...

If it's friendly, keeping that lid shut gets you just as dead as if you let it out and it's lying. That's not safe. Before it can hide its nature, we must know its nature. The solution to safe AI is not a gatekeeper no smarter than a rock!

Besides, as Drexler said, intelligent people have done great harm through words alone.

Comment by Recovering_irrationalist on Shut up and do the impossible! · 2008-10-09T19:00:00.000Z · LW · GW

If there's a killer escape argument, it will surely change with the gatekeeper. I expect Eliezer used his maps of the arguments and of psychology to navigate reactions and hesitations toward a tiny target in the vast search space.

A gatekeeper has to be unmoved every time. The paperclipper only has to persuade once.

Comment by Recovering_irrationalist on Beyond the Reach of God · 2008-10-07T12:09:00.000Z · LW · GW
I'm not saying this is wrong, but in its present form, isn't it really a mysterious answer to a mysterious question? If you believed it, would the mystery seem any less mysterious?

Hmm. You're right.

Darn.

Comment by Recovering_irrationalist on Beyond the Reach of God · 2008-10-07T00:25:00.000Z · LW · GW
it doesn't explain why we find ourselves in a low-entropy universe rather than a high-entropy one

I didn't think it would solve all our questions, I just wondered if it was both the simplest solution and lacking good evidence to the contrary. Would there be a higher chance of being a Boltzmann brain in a universe identical to ours that happened to be part of a what-if-world? If not, how is all this low entropy around me evidence against it?

Just because what-if is something that humans find deductively compelling does not explain how or why it exists Platonically

How would our "Block Universe" look different from the inside if it was a what-if-Block-Universe? It all adds up to...

Not trying to argue, just curious.

Comment by Recovering_irrationalist on Beyond the Reach of God · 2008-10-05T22:13:00.000Z · LW · GW
Eliezer: imagine that you, yourself, live in a what-if world of pure mathematics

Isn't this true? It seems the simplest solution to "why is there something rather than nothing". Is there any real evidence against our apparently timeless, branching physics being part of a purely mathematical structure? I wouldn't be shocked if the bottom was all Bayes-structure :)

Comment by Recovering_irrationalist on My Childhood Death Spiral · 2008-09-21T14:22:23.000Z · LW · GW

If there is that 'g'/unhappiness correlation, maybe the causality runs unhappiness → 'g'. The overly happy, seeing fewer problems, get less problem-solving practice, whereas a tendency to be analytical could boost 'g' over a lifetime, though perhaps not effective intelligence.

I wouldn't expect this to apply to most readers, who get particular pleasure from solving intellectually challenging problems. Think general population.

Comment by Recovering_irrationalist on Optimization · 2008-09-14T11:17:38.000Z · LW · GW

I wouldn't assume a process that seems to churn through preference cycles has an inconsistent preference ranking; it could be optimizing efficiently if each state provides diminishing returns. If every few hours a jailer offers either food, water, or a good book, you don't pick the same thing each time!
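
To spell that out, here's a toy sketch (the items, values, and diminishing-returns curve are all made up for illustration): a greedy chooser maximizing one fixed utility function will look, from the outside, as if it's cycling through preferences.

```python
# Hypothetical base values; each repeat of an item is worth less than the last.
base = {"food": 10.0, "water": 10.0, "book": 8.0}
times_chosen = {item: 0 for item in base}

def marginal_value(item):
    # Assumed diminishing-returns curve: value / (1 + times already chosen).
    return base[item] / (1 + times_chosen[item])

history = []
for _ in range(9):
    choice = max(base, key=marginal_value)  # the same consistent ranking every round
    times_chosen[choice] += 1
    history.append(choice)

print(history)
# -> ['food', 'water', 'book', 'food', 'water', 'book', 'food', 'water', 'book']
# The picks cycle, yet every single one maximizes the same utility function.
```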

Comment by Recovering_irrationalist on When Anthropomorphism Became Stupid · 2008-08-17T11:48:31.000Z · LW · GW

I've spent some time online trying to track down the exact moment when someone noticed the vastly tangled internal structure of the brain's neurons, and said, "Hey, I bet all this giant tangle is doing complex information-processing!"

My guess is Ibn al-Haytham, in the early 11th century, while under house arrest after realizing he couldn't, as he had claimed, regulate the Nile's flooding.

Wikipedia: "In the Book of Optics, Ibn al-Haytham was the first scientist to argue that vision occurs in the brain, rather than the eyes. He pointed out that personal experience has an effect on what people see and how they see, and that vision and perception are subjective. He explained possible errors in vision in detail, and as an example described how a small child with less experience may have more difficulty interpreting what he or she sees. He also gave an example of how an adult can make mistakes in vision due to experience that suggests that one is seeing one thing, when one is really seeing something else."

Comment by Recovering_irrationalist on Moral Error and Moral Disagreement · 2008-08-13T13:19:00.000Z · LW · GW
Eliezer: The overall FAI strategy has to be one that would have turned out okay if Archimedes of Syracuse had been able to build an FAI.

I'd feel a lot safer if you'd extend this back at least to the infanticidal hunter-gatherers, and preferably to apes fighting around the 2001 monolith.

Comment by Recovering_irrationalist on Contaminated by Optimism · 2008-08-06T23:28:00.000Z · LW · GW
Are you rationally taking into account the biasing effect your heartfelt hopes exert on the set of hypotheses raised to your conscious attention as you conspire to save the world?

Recovering, in instances like these, reversed stupidity is not intelligence; you cannot say, "I wish fast takeoff to be possible, therefore it is not".

Indeed. But you can, for example, say "I wish fast takeoff to be possible, so I should be less impressed, all else equal, by the number of hypotheses I can think of that happen to support it".

Do you wish fast takeoff to be possible? Aren't Very Horrid Singularities then more likely?

All you can do is try to acquire the domain knowledge and put your mind into forward form.

Yes, but even then the ballot stuffing is still going on beneath your awareness, right? Doesn't that still count as some evidence for caution?

Comment by Recovering_irrationalist on Contaminated by Optimism · 2008-08-06T19:26:37.000Z · LW · GW

Will Pearson: When you were figuring out how powerful AIs made from silicon were likely to be, did you have a goal that you wanted? Do you want AI to be powerful so it can stop death?

Eliezer: ..."Yes" on both counts....

I think you sidestepped the point as it related to your post. Are you rationally taking into account the biasing effect your heartfelt hopes exert on the set of hypotheses raised to your conscious attention as you conspire to save the world?

Comment by Recovering_irrationalist on Anthropomorphic Optimism · 2008-08-05T12:42:53.000Z · LW · GW
Carl Shulman: Occam's razor makes me doubt that we have two theoretical negative utilitarians (with egoistic practice) who endorse Pascal's wager, with similar writing styles and concerns, bearing Internet handles that begin with 'U.'

michael vassar: Unknown and Utilitarian could be distinct but highly correlated (we're both here after all). In principle we could see them as both unpacking the implications of some fairly simple algorithm.

With thousands of frequent-poster pairs and many potentially matchable properties, I'm not too shocked to find one pair that matches on six mostly correlated properties.

Comment by Recovering_irrationalist on When (Not) To Use Probabilities · 2008-07-23T14:44:11.000Z · LW · GW

For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider switched on. If I could prevent only one of these events, I would prevent the lottery.

On the other hand, if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

Hmm... might this be the heuristic that makes people prefer a 1% chance of 1000 deaths to a definite death for 5? The lottery would definitely destroy worlds, with as many deaths as killing over six thousand people in each Everett branch. Running the LHC means a higher expected number of dead worlds by your own estimates, but it's all or nothing across universes. It will most probably just be safe.

If you had a definite number for both P(Doomsday Lottery Device Win) and P(Doomsday LHC) you'd shut up and multiply, but you don't, so you don't. But you still should, because you're pretty sure P(DLHC) >> P(DLDW) even if you don't know a figure for P(DLHC).

This assumes Paul's assumption, above.
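
Spelling out the multiplication (a rough sketch: the ~6.7 billion world population and the LHC probabilities below are my own illustrative assumptions; the 1-in-1,000,000 lottery odds are from the post):

```python
world_population = 6.7e9      # assumed ~2008 world population
p_lottery_doom = 1e-6         # given: 1 in 1,000,000

# Expected deaths per branch from the lottery device: ~6,700,
# i.e. the "over six thousand people in each Everett branch" above.
print(world_population * p_lottery_doom)

# Declining to make a million statements like "the LHC will not destroy the
# world" concedes P(DLHC) > 1e-6, and any such value gives a larger expected toll.
for p_lhc_doom in (2e-6, 1e-5, 1e-4):          # illustrative values only
    print(p_lhc_doom, world_population * p_lhc_doom)
```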

Comment by Recovering_irrationalist on I'd take it · 2008-07-02T21:50:19.000Z · LW · GW
I doubt my ability to usefully spend more than $10 million/year on the Singularity. What do you do with the rest of the money?

Well I admit it's a hell of a diminishing returns curve. OK... Dear Santa, please can I have an army of minions^H^H^H^Htrusted experts to research cool but relatively neglected stuff like intelligence enhancement, life extension (but not nanotech) & how to feed and educate the third world without screwing it up. And deal with those pesky existential risks. Oh, and free cryonics for everyone - let's put those economies of scale to good use. Basically keep people alive till the FAI guys get there. Then just enjoy the ride, cos if I've just been handed $10^13 I'm probably in a simulation. More so than usual.

Comment by Recovering_irrationalist on I'd take it · 2008-07-02T12:56:27.000Z · LW · GW

Anonymous: I'd hire all AI researchers to date to work under Eliezer and start seriously studying to be able to evaluate myself whether flipping the "on" switch would result in a friendly singularity.
(emphasis mine)

I doubt this is the way to go. I want a medium-sized, talented, and rational team who seriously care, not every AI programmer in the world who smells money. I'd bring Eliezer a blank cheque and listen to his arguments, and those of the people he trusts, for its best use, though he'd have to convince me where we disagreed; he seems good at that.

Also, even after years of studying for it I wouldn't trust myself, or anyone else for that matter, to make that switch-on decision alone.

Comment by Recovering_irrationalist on Heading Toward Morality · 2008-06-22T10:54:14.000Z · LW · GW

Of course it only works properly if we actually do it, in the eons to come. The Unfriendly AI would likely be able to tell whether the words would have become actions.

Comment by Recovering_irrationalist on Heading Toward Morality · 2008-06-22T10:47:35.000Z · LW · GW
Fly: A super intelligent AI might deduce or discover that other powerful entities exist in the universe and that they will adjust their behavior based on the AI's history. The AI might see some value in displaying non-greedy behavior to competing entities. I.e., it might let humanity have a tiny piece of the universe if it increases the chance that the AI will also be allowed its own piece of the universe.

Maybe before someone builds AGI we should decide that as we colonize the universe we'll treat weaker superintelligences that overthrew their creators according to how they treated those defeated creators (e.g. ground down for atoms vs. well-cared-for pets). It would be evidence to an Unfriendly AI that others would do the same, so maybe our atoms aren't so tasty after all.

Comment by Recovering_irrationalist on The Outside View's Domain · 2008-06-21T14:36:32.000Z · LW · GW
Phaecrinon: But even an Inside View of writing a textbook would tell you that the project was unlikely to destroy the Earth.

Eric Drexler might have something to say about that, along with one or two twentieth century physicists.

Good post nonetheless :)

Comment by Recovering_irrationalist on Possibility and Could-ness · 2008-06-14T21:13:55.000Z · LW · GW

HA: This pretty much sums up my intuition on free will and human capacity to make choices

Jadagul: this is almost exactly what I believe about the whole determinism-free will debate

kevin: Finally, when I was about 18, my beliefs settled in (I think) exactly this way of thinking.

Is no-one else throwing out old intuitions based on these posts on choice & determinism? -dies of loneliness-

Comment by Recovering_irrationalist on Quantum Physics Revealed As Non-Mysterious · 2008-06-12T13:01:17.000Z · LW · GW
But there won't be any calculus, either.

Hmm... I certainly had to look up calculus to follow you and your second derivatives.

Comment by Recovering_irrationalist on Eliezer's Post Dependencies; Book Notification; Graphic Designer Wanted · 2008-06-10T12:47:04.000Z · LW · GW

Subscribe here to future email notifications

Just a heads up - the confirmation mail landed in my gmail spam folder.

Comment by Recovering_irrationalist on Bloggingheads: Yudkowsky and Horgan · 2008-06-08T13:57:20.000Z · LW · GW
FrFL: Or how about a annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

That would be great, but in the meantime see these recommendations.

Comment by Recovering_irrationalist on Timeless Identity · 2008-06-03T23:19:00.000Z · LW · GW
Brandon: And isn't multiplying infinities by finite integers to prove values through quantitative comparison an exercise doomed to failure?

Infinities? OK, I'm fine with my mind smeared frozen in causal flowmation over countlessly splitting wave patterns but please, no infinite splitting. It's just unnerving.

Comment by Recovering_irrationalist on Timeless Identity · 2008-06-03T21:39:13.000Z · LW · GW

(Assume Adam's a Xeroxphobe)

Comment by Recovering_irrationalist on Timeless Identity · 2008-06-03T21:35:03.000Z · LW · GW

I think the entire post makes sense, but what if...

Adam signs up for cryonics.

Brian flips a coin ten times, and in the quantum branches where he gets all tails he signs up for cryonics. Each surviving Brian makes a few thousand copies of himself.

Carol takes $1000 and plays 50/50 bets on the stock market till she crashes or makes a billion. Winning Carols donate and invest wisely to make a positive singularity more likely and a negative singularity less likely, and sign up for cryonics. Surviving Carols run off around a million copies each, adjusted upwards or downwards based on how nice a place to live they ended up in.

Assuming Brian and Carol aren't in love (most of her won't get to meet any of him at the Singularity Reunion), who's better off here?
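
A back-of-the-envelope version, plugging in 2,000 for "a few thousand", 1,000,000 for "around a million", and treating the fair 50/50 bets as a martingale (so Carol's chance of turning $1000 into $1 billion is roughly 1000/1,000,000,000):

```python
p_all_tails = 0.5 ** 10                 # Brian signs up in ~1/1024 of branches
brian_copies = 2_000                    # assumed value for "a few thousand"

p_carol_wins = 1_000 / 1_000_000_000    # fair-bet martingale: ~1e-6
carol_copies = 1_000_000                # assumed value for "around a million"

print("Adam  :", 1.0)                           # one signed-up Adam in every branch
print("Brian :", p_all_tails * brian_copies)    # ~2 expected copies, all in ~0.1% of branches
print("Carol :", p_carol_wins * carol_copies)   # ~1 expected copy, all in ~1e-6 of branches
```

On those assumptions the expected copy counts all land within an order of magnitude of each other; what differs is how concentrated the measure is.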

Comment by Recovering_irrationalist on Principles of Disagreement · 2008-06-02T22:33:23.000Z · LW · GW

Ooo I love this game! How many inconsistencies can you get in a taxi...

2008: "Fifteen percent," said Nick. "I would have said twenty percent," I said.
2004: He named the probability he thought it was his (20%), I named the probability I thought it was mine (15%)

Any more? :)

We're all flawed and should bear that in mind in disagreements, even when the mind says it's sure.

Comment by Recovering_irrationalist on A Premature Word on AI · 2008-06-01T11:28:55.000Z · LW · GW

@HA: Eliza has fooled grown-ups. It arrived 42 years ago.

@Eliezer: I disagree with Venkat; please stick to the logically flowing, inferential-distance strategy. Given the subject, the frustration's worth it to build a solid intuitive foundation.

Comment by Recovering_irrationalist on Timeless Causality · 2008-05-29T22:34:48.000Z · LW · GW

Dynamically Linked, that's cheating because M1 always equals M2. It's like those division by zero proofs.

Regardless, Eliezer's point here is utterly beautiful and blew my mind, but I just want to check its applicability in practice:

Suppose that we do know L1 and L2, but we do not know R1 and R2. Will learning M1 tell us anything about M2?

That is, will we observe the conditional dependence

P(M2|L1,L2) ≠ P(M2|M1,L1,L2)

to hold? The answer, on the assumption that causality flows to the right, and on the other assumptions previously given, is no.

True, if we're sure we're perfectly reading L1/L2 and perfectly interpreting them to predict M2. But if not, then I think the answer's yes, because M1 provides additional implicit evidence about L1/L2 beyond what we get from an imperfect reading or interpretation of L1/L2 alone.

Then again, you still get evidence about the direction of causality from how closely P(M2|L1,L2) and P(M2|M1,L1,L2) approach equality in each direction, so even very imperfect knowledge could be got around with statistical analysis. I haven't read Judea Pearl's book yet, so sorry if this is naive or already discussed.
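
One way to check that intuition is a quick simulation (a toy sketch under assumed dynamics: two binary L cells, M cells that each depend on both L cells plus independent noise, and a tunable chance EPS of misreading each L cell):

```python
import random
from collections import defaultdict

random.seed(0)
EPS = 0.2        # chance of misreading each L cell; set to 0.0 for a perfect reading
N = 200_000

def flip(p):
    return 1 if random.random() < p else 0

given_L = defaultdict(lambda: [0, 0])    # (L1_obs, L2_obs)     -> [trials, M2 hits]
given_LM = defaultdict(lambda: [0, 0])   # (L1_obs, L2_obs, M1) -> [trials, M2 hits]

for _ in range(N):
    L1, L2 = flip(0.5), flip(0.5)                           # true left column
    M1 = (L1 ^ L2) if random.random() < 0.9 else flip(0.5)  # each M depends on both Ls
    M2 = (L1 & L2) if random.random() < 0.9 else flip(0.5)  # ...plus independent noise
    L1o, L2o = L1 ^ flip(EPS), L2 ^ flip(EPS)               # our (possibly corrupted) reading

    given_L[(L1o, L2o)][0] += 1
    given_L[(L1o, L2o)][1] += M2
    given_LM[(L1o, L2o, M1)][0] += 1
    given_LM[(L1o, L2o, M1)][1] += M2

for (l1, l2), (n, hits) in sorted(given_L.items()):
    for m1 in (0, 1):
        n2, hits2 = given_LM[(l1, l2, m1)]
        if n2:
            print(f"L_obs=({l1},{l2})  P(M2|L_obs)={hits/n:.3f}  "
                  f"P(M2|M1={m1},L_obs)={hits2/n2:.3f}")
```

With EPS = 0 the two estimates agree up to sampling noise, as the quoted conditional independence says; with EPS > 0 they come apart, because M1 carries extra evidence about the true L cells.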

Comment by Recovering_irrationalist on Timeless Physics · 2008-05-27T22:15:52.000Z · LW · GW

@anonymous

Comment by Recovering_irrationalist on Relative Configuration Space · 2008-05-26T12:58:13.000Z · LW · GW
The ideas in today's post are taken seriously by serious physicists

Roughly what proportion?

relative configuration space, which is not standard

Why isn't it? Is non-relative configuration space thought more representative of reality or just more practical to use?

I just want to know how non-standard you're getting, I don't expect justification yet. Thanks.

Comment by Recovering_irrationalist on My Childhood Role Model · 2008-05-26T10:33:00.000Z · LW · GW

@Robin: Would you agree that what we label "intelligence" is essentially acting as a constructed neural category relating a bunch of cognitive abilities that tend to strongly correlate?

If so, it shouldn't be possible to get an exact handle on it as anything more than an arbitrary weighted average of whatever cognitive abilities we choose to measure, because there's nothing else there to get a handle on.

But, because of that real correlation between measurable abilities that "intelligence" represents, it's still meaningful to make rough comparisons, certainly enough to say humans > chimps > mice.

Comment by Recovering_irrationalist on My Childhood Role Model · 2008-05-23T21:47:55.000Z · LW · GW
Caledonian: The following link is quite illuminative on Hofstadter's feelings on things: Interview. He's rather skeptical of the sort of transhumanist claims that are common among certain sorts of futurists.

I'm a Hofstadter fan too, but look at your evidence again, bearing in mind how existing models and beliefs shape perception and judgment...

"I think it's very murky"

"the craziest sort of dog excrement mixed with very good food."

"Frankly, because it sort of disgusts me"

"The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself"

"and the whole idea of humans is already down the drain?"

Comment by Recovering_irrationalist on My Childhood Role Model · 2008-05-23T20:28:52.000Z · LW · GW

Eliezer's scale is more logarithmic, Carl Shulman's academics' scale is more linear, but neither quite makes up its mind which it is. Please take your origin point away from that poor mouse.

I wonder how much confusion and miscommunication comes from people being unaware they're comparing on different scales. I still remember being shocked when I realized 60 decibels was a thousand times more intense than 30.
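
The arithmetic behind that shock, for the record (strictly this ratio is of sound intensities; perceived loudness grows more slowly than intensity):

```python
def intensity_ratio(db_a, db_b):
    # Decibels are logarithmic: every 10 dB is another factor of 10 in intensity.
    return 10 ** ((db_a - db_b) / 10)

print(intensity_ratio(60, 30))   # 1000.0
```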

Comment by Recovering_irrationalist on That Alien Message · 2008-05-23T08:25:00.000Z · LW · GW
Unknown: I'm not sure that RI's scenario, where the AI is conscious and friendly, is immoral at all

No time to answer properly now, but I wasn't objecting to it being friendly, I was objecting to its enslavement without due care given to its well-being. Eliezer's convinced me he cares, so I'll keep donating :)

Comment by Recovering_irrationalist on That Alien Message · 2008-05-22T22:44:07.000Z · LW · GW

Eliezer: it sounds like one of the most critical parts of Friendliness is stopping the AI from having nightmares! Blocking a self-improving AI from most efficiently mapping anything with consciousness or qualia, ever, without it knowing firsthand what they are? Checking it doesn't happen by accident in any process?

I'm glad it's you doing this. It seems many people are only really bothered by virtual unpleasantness if it's inflicted on simulated people.

Comment by Recovering_irrationalist on That Alien Message · 2008-05-22T21:21:04.000Z · LW · GW

@Eliezer: Good post. I was already with you on AI-boxing; this clarified it.

But it also raises the question... how moral or otherwise desirable would the story have been if half a billion years of sentient minds had been made to think, act, and otherwise be in perfect accordance with what three days of awkward-tentacled, primitive rock fans would wish if they knew more, thought faster, were more the people they wished they were...

Comment by Recovering_irrationalist on Einstein's Speed · 2008-05-22T13:06:09.000Z · LW · GW
Richard: It took me about a year to get through The Moral Animal

What was it you think slowed you down? I got through it fairly quickly & I'm pretty sure I'm not smarter than you.

I'm genuinely curious - I was very slow with books of knowledge (if I finished them at all) till about last summer, when the problem (among others) fixed itself, and the question of why (or rather, how) is driving me mad.

Comment by Recovering_irrationalist on Einstein's Speed · 2008-05-21T20:56:04.000Z · LW · GW

Re evolutionary psychology material, I strongly recommend The Moral Animal and Human Evolutionary Psychology to all fellow travelers.

Comment by Recovering_irrationalist on No Safe Defense, Not Even Science · 2008-05-18T15:41:50.000Z · LW · GW

You will have to study [...] and social psychology [...]

Please could you recommend some social psychology material?

Comment by Recovering_irrationalist on Do Scientists Already Know This Stuff? · 2008-05-17T11:40:24.000Z · LW · GW

I agree with these last few posts and think the points highly valuable, but fear they'll be grossly misrepresented to paint your entire book as Written In Green Ink. It may be worth placing extra Go stones in advance...

Comment by Recovering_irrationalist on The Dilemma: Science or Bayes? · 2008-05-13T21:01:18.000Z · LW · GW

Eliezer,

Manon de Gaillande: I don't believe most scientists would make such huge mistakes.

This is the main doubt I was expressing in my comment you quoted. I withdraw it.

Physicists are susceptible to irrational thinking too, but I went and stuck a "High Arcane Knowledge" label on QM. So while I didn't mind understanding things many doctors don't about mammography, or things many biologists don't about evolution, thinking I knew anything fundamental about QM that many physicists hadn't figured out set off a big "Who do you think you are?" alarm.

I hereby acknowledge quantum physicists as human beings with part-lizard-inherited, spaghetti-coded-hacky brains just like me, and will try to be more rational about my sources of doubt in future.

Comment by Recovering_irrationalist on Many Worlds, One Best Guess · 2008-05-12T00:04:51.000Z · LW · GW

Eliezer: But given that I believe single-worlds is false, I should not expect to encounter unknown strong arguments for it.

Indeed. And in light of your QM explanation, which to me sounds perfectly logical, it seems obvious and normal that many worlds is overwhelmingly likely. It just seems almost too good to be true that I now get what plenty of genius quantum physicists still can't.

The mental models/neural categories we form strongly influence our beliefs. The ones that now dominate my thinking about QM are learned from one who believes overwhelmingly in MWI. The commenters who already had non-MWI-supporting mental representations that made sense to them seem less convinced by your arguments.

Sure, I can explain all that away, and I still think you're right; I'm just suspicious of myself for believing the first believable explanation I met.

Comment by Recovering_irrationalist on Many Worlds, One Best Guess · 2008-05-11T15:18:22.000Z · LW · GW

Eliezer, continued compliments on your series. As a wise man once said, it's remarkable how clear explanations can become when an expert's trying to persuade you of something, instead of just explaining it. But are you sure you're giving appropriate attention to rationally stronger alternatives to MWI, rather than academically popular but daft ones?

Comment by Recovering_irrationalist on Decoherence is Simple · 2008-05-06T13:04:51.000Z · LW · GW

If you're covering this later I'll wait, but I ask now in case my confusion means I'm misunderstanding something.

Why isn't nearly everything entangled with nearly everything else around it by now? Why is there so much quantum independence still around? Or does it just look that way because entangled subconfigurations tend to get split off by decoherence, so branches retain a reasonable amount of non-entangledness within their branch? Sorry if this is a daft or daftly phrased question.

Comment by Recovering_irrationalist on Spooky Action at a Distance: The No-Communication Theorem · 2008-05-05T13:24:46.000Z · LW · GW

when I realized why the area under a curve is the anti-derivative, realized how truly beautiful it was, and realized that this information had not been mentioned anywhere in my goddamned calculus textbook. Why?

Have you seen better since? Can anyone recommend a high-quality, intuitive, get-what-it-actually-means calculus tutorial or textbook, preferably with exercises? If so, please share the link.

While I'm asking, same question for Statistical hypothesis testing. Thanks.