Open Thread, January 2011

post by Paul Crowley (ciphergoth) · 2011-01-10T11:14:49.179Z · LW · GW · Legacy · 43 comments

Better late than never, a new open thread. Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

43 comments

Comments sorted by top scores.

comment by beriukay · 2011-01-10T12:42:55.844Z · LW(p) · GW(p)

Oh, I've been aching to announce this to people who wouldn't find it absolutely insane or unthinkable!

After being convinced that it isn't just something insane rich people do out of hubris, debating a bunch of my friends, reading all the documentation I could, listening to the horror stories from This American Life, and doing oodles of paperwork, I am now officially one of the potentially immortal. I am a pre-cryonaut.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-01-10T16:07:54.309Z · LW(p) · GW(p)

Hooray! I can't wait for the post-cryo meetups - though of course plan A is to live long enough to live forever...

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-10T17:09:02.432Z · LW(p) · GW(p)

I can't wait for the post-cryo meetups

Well, I suppose you don't have to... or at least, you don't have to experience waiting... but I rather wish you would.

Replies from: icebrand
comment by icebrand · 2011-01-11T01:15:58.505Z · LW(p) · GW(p)

Yes, plan A is definitely to wait as long as possible. :)

comment by gwern · 2011-01-10T18:02:33.474Z · LW(p) · GW(p)

Some years ago (at least 2 or 3), I read a long article somewhere in which a study or two looked at prominent figures in the arts, sciences, and public life - many politicians, Hollywood stars, that sort of thing - and mentioned that a lot of them had childhoods filled with neglect or abuse. I think the author also suggested that this was not mere correlation but causation, running from the rough childhood to the later prominence.

Unfortunately, I can't seem to remember where I read this; looking through my Evernote, I don't find it there, nor do Google searches help. I thought I might've read it in The Atlantic, but looking through a few hundred hits there, I didn't find anything.

If this rings any bells for anyone, I'd appreciate a pointer. (I was going to write a Hansonesque article arguing for systematized child abuse/neglect of smart kids.)

EDIT: Also tried asking the Sociology subreddit.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-01-11T12:10:57.290Z · LW(p) · GW(p)

Talk about creating perverse incentives!

comment by HonoreDB · 2011-01-27T19:30:32.977Z · LW(p) · GW(p)

Recently, an acquaintance asked me whether I believed in destiny. I told her I didn't, and she told me a long story boiling down to this: someone she knows was prevented, by a series of improbable accidents, from getting on a plane. The plane then crashed, killing this person's entire family.

"How do you explain that?" she asked. "I don't really see the need for an explanation," I said.

I relayed the cached wisdom that there are billions of people on Earth, all living eventful lives, and therefore we can expect one-in-a-billion experiences to occur daily.

It just occurred to me that people in this sort of situation are subject to something very similar to the red/green paradox first discussed here. Suppose 10^9 people have a commonly agreed-on model of the universe (each person assigns it a probability of about 10^-3 of being false). This model says that each time you wake up, there is precisely a 10^-9 chance of discovering you have been transformed into a giant insect, independently for each person on each day. If we read in the news that someone was transformed, we don't update against the model being true: it predicts about one transformation per day (10^9 people times a 10^-9 daily chance each).

On the other hand, if you yourself wake up to find yourself transformed into a giant insect, it is tempting to say that you should update against the model, since it is more likely that the model underestimates the chances of this happening than that you have experienced a 1 in 10^9 event. Indeed, if someone within 2 degrees of separation from you is transformed, it seems you should update.
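
To make that update concrete, here is a minimal sketch of the calculation in Python. The 10^-6 transformation rate under the "model is wrong" hypothesis is an illustrative assumption of mine, not a figure from the setup above.

# Prior: each person gives the shared model about a 10^-3 chance of being false.
p_model_false = 1e-3
p_model_true = 1 - p_model_false

p_event_if_true = 1e-9   # the model's stated per-person, per-day chance
p_event_if_false = 1e-6  # assumed chance if the model is wrong (illustrative)

# Bayes' theorem: posterior that the model is wrong, given that *you* woke up transformed.
posterior = (p_event_if_false * p_model_false) / (
    p_event_if_false * p_model_false + p_event_if_true * p_model_true)
print(posterior)  # ~0.5: your own transformation puts roughly even odds against
                  # the model, while a single news report is exactly what the
                  # model already predicts.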

Such a population could experience a long period of statistical adherence to the model, yet contain a growing population of skeptics who believe that a lot of transformations are unreported or covered up.

Is this, generalized, the situation we actually find ourselves in with respect to what's usually called "belief in the supernatural"?

comment by NMJablonski · 2011-01-13T21:21:10.870Z · LW(p) · GW(p)

Was re-reading Think Like Reality and came upon the following quote from Eliezer - emphasis mine.

The same optimization process that built your retina backward and then routed the optic cable through your field of vision, also designed your visual system to process persistent objects bouncing around in 3 spatial dimensions because that's what it took to chase down tigers. But "tigers" are leaky surface generalizations - tigers came into existence gradually over evolutionary time, and they are not all absolutely similar to each other. When you go down to the fundamental level, the level on which the laws are stable, global, and exception-free, there aren't any tigers. In fact there aren't any persistent objects bouncing around in 3 spatial dimensions. Deal with it.

Replies from: ata
comment by ata · 2011-01-13T23:49:35.285Z · LW(p) · GW(p)

For those who don't get the reference: http://knowyourmeme.com/memes/deal-with-it

(Though if you didn't already know of that and/or don't care for silly internet meme humour, then reading the linked page will probably not cause you to find the above comment funny. Deal with it.)

comment by JoshuaZ · 2011-01-10T16:51:18.731Z · LW(p) · GW(p)

I've reviewed Jason Rosenhouse's book on the Monty Hall problem. Overall summary: The book is very good and does an interesting job of discussing not just the original problem but variations of it, as well as what reactions to the problem can teach us about how humans estimate probabilities.
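
For anyone who hasn't seen the original problem, a quick simulation (my own minimal sketch, not anything from the book) reproduces the famously counterintuitive answer that switching doors wins about twice as often as staying:

import random

def play(switch, trials=100000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first choice
        # The host opens a goat door that the contestant didn't pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))  # ~0.33
print("switch:", play(switch=True))   # ~0.67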

comment by jaimeastorga2000 · 2011-01-16T17:58:41.886Z · LW(p) · GW(p)

My university recently held an event involving a presentation and book signing by Ray Kurzweil and a screening of The Singularity is Near. Fangirling about meeting Ray and getting my book signed aside, I thought I would give a short description of the movie and my opinion of it.

The film is mostly made of interviews with big names, so most of the time you will have somebody's head on-screen talking, whether it's Ray's, Aubrey de Grey's, Eric Drexler's, or Eliezer Yudkowsky's. The movie covers a wide variety of singularitarian and transhumanist topics (including a small debate on life extension), and although the material is a bit basic for someone who has already spent a lot of time reading sites like Less Wrong and The Transhumanist Wiki, I quite enjoyed it.

Some of the scenes make use of spiffy CGI, including the opening and a couple of illustrations of nanobots. The biggest use of CGI by far occurs in the B-Plot, though, which consists of several interspersed short scenes dealing with an artificial intelligence named Ramona who goes from being a mechanical paper doll to a Second Life bot to a wholly sentient virtual being as the decades pass. I thought this plot thread was the weakest part of the movie; while it was entertaining and it illustrated ideas like editing one's own thinking process (Ramona cures herself of her fear of mice), several of the scenes were rather narmful (WARNING: TVTropes) and the overall sci-fi feel may have hurt the movie's credibility with people who would have otherwise taken it seriously.

So, to conclude, I asked a friend what she thought of the movie, and she said that it had been "too flashy" for her taste.

comment by Document · 2011-01-16T12:09:06.023Z · LW(p) · GW(p)

Can anyone link a comment or quote somewhere on LW saying something like "there are things that we can't imagine, but that we can imagine imagining, and we confuse that with actually imagining them"? Possible examples would be philosophical zombies, actual infinities, uncomputable physics, mathematical inconsistency and certain forms of objective morality; I vaguely remember the original thread having to do with theology.

Question inspired by the Reddit thread containing this comment, although I probably won't end up posting there.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-01-16T12:56:29.468Z · LW(p) · GW(p)

Zombies? Zombies!

Replies from: Document
comment by Document · 2011-05-16T03:53:02.059Z · LW(p) · GW(p)

I can't find it there, and I remember it being a comment rather than a top-level post. (In hindsight I should've asked you for the specific line in the first place rather than searching through an enormous post for it myself, but that's hindsight.)

comment by Miller · 2011-01-14T05:40:58.337Z · LW(p) · GW(p)

Audience-member video of Watson taking on Ken Jennings and another contestant at Jeopardy: http://www.youtube.com/watch?v=hR528D64rpM&feature=player_embedded#!

So if 14 years ago it was chess, and approximately now it is Jeopardy, what's an example from the class of challenging, high-profile targets 14 years from now? The Turing Test?

Replies from: jaimeastorga2000, TheOtherDave, Unknowns
comment by jaimeastorga2000 · 2011-01-16T17:09:41.219Z · LW(p) · GW(p)

Here's a better quality video. And this is another video in which the people at IBM talk about developing Watson and why they chose to do Jeopardy.

Replies from: Miller
comment by Miller · 2011-01-16T20:19:53.669Z · LW(p) · GW(p)

Watson apparently refines its notion of the kinds of answers that are expected under a given category as it accumulates previous answers. The human contestants could exploit this by starting with the higher-dollar questions. I'll be curious to see if they do.

There's a detailed chart of the system's performance over time here.

comment by TheOtherDave · 2011-01-14T13:51:05.595Z · LW(p) · GW(p)

Proving novel theorems?

comment by Unknowns · 2011-01-14T12:51:27.584Z · LW(p) · GW(p)

This significantly increases my confidence that I will win my bet with Eliezer.

comment by userxp · 2011-01-12T18:58:54.737Z · LW(p) · GW(p)

I still have my doubts about cryonics. I believe people here are a bit too optimistic about the future. How confident are you that the “molecular nanotechnology” necessary to repair cells will be developed within 100 or 200 years? If Alcor had been founded in 1800, would it have survived the industrial revolution and both world wars?

As for neuropreservation: is it really that easy to grow a new body? I mean, there is a big difference between just fixing some broken cells and creating a whole body from scratch. Even if it's possible, it'll probably be much more expensive (and thus you'll be less likely to get revived). And unless the new body is exactly like the old one, your motor system will be screwed up.

And you need rejuvenation technology too. Alcor claims that "By the time it becomes possible to revive cryonics patients, especially today's cryonics patients, biological aging as we know it today will not exist". I don't know how likely that is, but there is a difference between stopping aging and rejuvenating. What if they find a simple DNA mutation that stops aging, but it can only be applied before birth? In the worst case you'll wake up and die again a few weeks later. You may be lucky and only have to spend a few decades in a 90-year-old body.

comment by Liron · 2011-01-11T06:30:48.465Z · LW(p) · GW(p)

The Big Short is a really fun book about last decade's financial crisis. It's a good followup to Eliezer's Just Lose Hope Already.

comment by Paul Crowley (ciphergoth) · 2011-01-10T11:16:32.935Z · LW(p) · GW(p)

I was asked what the mainstream thinks of AI Risk. My understanding is that the only comment on the subject from "mainstream" AI research is a conference report that says something like "Some people think there might be a risk from powerful AI, but there isn't." This was discussed here on LW, but obviously searching for it given only that information is pretty much impossible, so help would be much appreciated - thanks!

Replies from: Vladimir_Nesov, timtyler, timtyler, timtyler
comment by Vladimir_Nesov · 2011-01-10T12:18:58.092Z · LW(p) · GW(p)

Hanson and then you posted a link to AAAI Panel on Long-term AI Futures (also discussed here).

From "Interim Report" (Aug 2009):

The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of scientists in this realm, and for the need to educate people outside the AI research community about the promise of AI for enhancing the quality of human life in numerous ways, coupled with a re-focusing of attention on actionable, shorter-term challenges.

Replies from: ciphergoth, timtyler, ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-01-11T10:43:43.731Z · LW(p) · GW(p)

Pursuing this further, I emailed focus group chair Professor David McAllester to ask if there had been any progress in "sharing the rationale". He replied:

The wording you mention in the report was supported by many people. However, I personally think the possibility of an AI chain reaction in the next few decades should not be dismissed. I am trying my very hardest to make it happen.

(I have his permission to share that)

comment by timtyler · 2011-01-12T15:29:21.251Z · LW(p) · GW(p)

AAAI ex-president Eric Horvitz seems ambivalent here:

Horvitz doubts that one of these virtual receptionists could ever lead to something that takes over the world. He says that's like expecting a kite to evolve into a 747 on its own.

So does that mean he thinks the singularity is ridiculous?

Mr. HORVITZ: Well, no. I think there's been a mix of views, and I have to say that I have mixed feelings myself.

comment by Paul Crowley (ciphergoth) · 2011-01-10T13:10:09.868Z · LW(p) · GW(p)

Impressed - how did you find this? I'm also impressed I managed to forget something I myself re-posted. Thanks!

comment by timtyler · 2011-01-10T21:31:14.148Z · LW(p) · GW(p)

Why robots won't rule. See also the links here.

comment by timtyler · 2011-01-19T15:32:57.357Z · LW(p) · GW(p)

Alon Halevy, a faculty member in the University of Washington's computer science department and an editor at the Journal of Artificial Intelligence Research, said he's not worried about friendliness.

"As a practical matter, I'm not concerned at all about AI being friendly or not," Halevy said. "The challenges we face are so enormous to even get to the point where we can call a system reasonably intelligent, that whether they are friendly or not will be an issue that is relatively easy to solve."

comment by timtyler · 2011-01-10T21:26:43.044Z · LW(p) · GW(p)

"There's certainly a finite chance that the whole process will go wrong - and the robots will eat us." - Hans Moravec, here.

comment by Matt_Simpson · 2011-01-23T17:32:08.811Z · LW(p) · GW(p)

I've had an idea for a "What I've learned from PUA" post bouncing around in my head for some time now. I would talk about what I've learned about the psychological differences between men and women and what that means for the dating market, NOT specific tips/tricks or anything like that. Would this be too controversial? I didn't participate in the PUA debates before - they inspired me to learn about PUA, to be honest - so I don't know if I would be crossing a line.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-23T19:57:30.656Z · LW(p) · GW(p)

There are enough examples of people making broad general claims about how men and women think and behave, claims not particularly supported by data, that such a post faces, I expect, something of an uphill climb.

Put another way: some readers (myself included) assign a pretty high prior probability to any given post on the subject being just another attempt to support pre-existing social preconceptions with pragmatic-sounding but ultimately ungrounded assertions, so overcoming that prior with evidence is important.

That said, IMHO a post that is demonstrably grounded in actual data, genuinely relevant to questions of how people think and behave, and at least somewhat novel ought to garner more approval than disapproval.

comment by timtyler · 2011-01-18T19:36:42.044Z · LW(p) · GW(p)

Here are some LessWrong RSS feeds aggregated on a single page:

http://www.netvibes.com/lesswrong

comment by Document · 2011-01-14T15:58:43.395Z · LW(p) · GW(p)

Scripts for LW I'd like to see someone write: something that blocks the appearance of the four recent-items sidebars, in the spirit of reducing shiny distraction.

Replies from: jaimeastorga2000
comment by jaimeastorga2000 · 2011-01-17T19:35:29.643Z · LW(p) · GW(p)

Firefox's Adblock Plus add-on has a supporting add-on called Element Hiding Helper that is helpful for situations like this. Just press Ctrl+Shift+K and you can block pieces of websites as you see fit, or you can directly add the following filters under My Element Hiding Rules:

lesswrong.com###side-comments
lesswrong.com###side-posts
lesswrong.com###recent-wiki-edits

I haven't found a way to block the "New on Overcoming Bias" section without removing the right bar completely, though.

Replies from: Document
comment by Document · 2011-05-16T18:36:45.454Z · LW(p) · GW(p)

Just set it up on my laptop and it seems to work; thanks. (I think Firefox 4 changes the combination from Ctrl+Shift+K to Ctrl+Shift+S, though.)

comment by MBlume · 2011-01-11T00:10:27.635Z · LW(p) · GW(p)

Bayesian Statistics Textbook Banned in China

Replies from: wedrifid
comment by wedrifid · 2011-01-13T11:57:27.012Z · LW(p) · GW(p)

Errr... why on earth?

comment by Document · 2011-01-13T20:01:30.458Z · LW(p) · GW(p)

The latest Scenes from a Multiverse (the comic plus the news post below) discusses rationality.

comment by bentarm · 2011-01-12T18:32:40.887Z · LW(p) · GW(p)

Help request: is it possible to draw tables in top-level posts?

More generally, is it possible to get any sort of formatting in top-level posts other than that which is available from the toolbar? What sort of markup is used for top-level posts?

If the answer is no, I can probably manage, but I think it would make my life easier if the answer is yes.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-01-16T12:51:44.472Z · LW(p) · GW(p)

Top-level posts use HTML, which supported tables last time I checked.
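
For instance, a bare-bones table in plain HTML looks like the sketch below (generic placeholder content; I haven't checked exactly which tags the post editor accepts):

<table border="1">
  <tr><th>Header 1</th><th>Header 2</th></tr>
  <tr><td>Cell A</td><td>Cell B</td></tr>
  <tr><td>Cell C</td><td>Cell D</td></tr>
</table>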

Replies from: bentarm
comment by bentarm · 2011-01-16T17:11:58.675Z · LW(p) · GW(p)

Yes, thanks, the problem I was actually having was that I hadn't spotted the "edit HTML" button in the toolbar for editing top-level posts.

comment by nhamann · 2011-01-10T19:49:56.766Z · LW(p) · GW(p)

Andrew Gelman's book on Bayesian Statistics has been banned in China. Apparently Bayes' theorem is "politically sensitive material." Gelman notes that that last sentence might not be true, but I really hope it is.