Open Thread, December 1-15, 2012

post by OpenThreadGuy · 2012-12-01T05:00:57.988Z · LW · GW · Legacy · 178 comments

If it's worth saying, but not worth its own post, even in Discussion, it goes here.


comment by Matt_Caulfield · 2012-12-01T15:43:22.777Z · LW(p) · GW(p)

A couple of days ago, GiveWell updated their top charity picks. AMF is still on top, but GiveDirectly bumped SCI from #2 to #3.

They also (very) tentatively recommend splitting your donation among the three: 70% to AMF, 20% to GiveDirectly, and 10% to SCI. The arguments about this in the blog post and comments are pretty interesting. (But I wouldn't stress too much about it: harder choices matter less).

comment by advancedatheist · 2012-12-02T01:06:11.076Z · LW(p) · GW(p)

Nassim Nicholas Taleb argues: The future will not be cool

http://www.salon.com/2012/12/01/nassim_nicholas_taleb_the_future_will_not_be_cool/

Taleb's characterization of "technothinkers" as cultural ignoramuses doesn't sound quite right to me, because they tend to read and assimilate the writings of learned (in the liberal arts sense) fantasy and science fiction writers. In this way they at least get some exposure to humane culture once removed, if they don't immerse themselves in it directly. J.R.R. Tolkien taught Anglo-Saxon language and literature at Oxford, for example.

And many of my cryonicist friends have studies of history, literature and other cultures under their belts, or in some cases the experience of actually living in non-Western cultures. Thomas Donaldson spent some time living with indigenous people in New Guinea, for example. Despite what Taleb thinks, in my experience, a passion for the past can exist comfortably with a desire to build, and live in, a better future.

Replies from: aaronde
comment by aaronde · 2012-12-02T02:56:09.395Z · LW(p) · GW(p)

I think the real difference between people like Taleb and the techno-optimists is that we think the present is cool. He brags about going to dinner in minimalist shoes, and eating food cooked over a fire, whereas I think it's awesome that I can heat things up instantly in a microwave oven, and do just about anything in meticulously engineered and perfectly fitted, yet cheaply mass-produced, running shoes without worrying about damaging my feet. I also like keyboards, and access to the accumulated knowledge of humanity from anywhere, and contact lenses. And I thought it was funny when he said that condoms were one of the most important new technologies, but aren't talked about much, as if to imply that condoms aren't cool. I think that condoms are cool! I remember when I first got condoms, and took one out to play with. After testing it a couple of different ways, I thought: *how does anyone manage to break one of these!?* It's easy to extrapolate that no "cool" technology will exist in the future, if you don't acknowledge that any cool technology currently exists.

But I think Taleb's piece is valuable, because it illustrates what we are up against, as people trying to get others to take seriously the risks, and opportunities, presented by future technologies. Taleb seems very serious and respectable, precisely because he is so curmudgeonly and conservative, whereas we seem excitable and silly. And he's right that singularitarian types tend to overemphasize changes relative to everything that remains the same, and often conflate their predictions of the future with their desires for the future. I think that Less Wrong is better than most in this regard, with spokespeople for SI taking care to point out that their singularity hypothesis does not predict accelerating change, and that the consequences of disruptive technology need not be good. Still, I wonder if there's any way to present a more respectable face to the public, without pretending that we don't believe what we do.

Replies from: advancedatheist
comment by advancedatheist · 2012-12-02T03:24:53.731Z · LW(p) · GW(p)

You might get a different perspective on the present when you reach your 50's, as I have. I used Amazon's book-previewing service to read parts of W. Patrick McCray's book, The Visioneers, and I realized that I could nearly have written that book myself because my life has intersected with the story he tells at several points. McCray focuses on Gerard K. O'Neill and Eric Drexler, and in my Amazon review I pointed out that after a generation, or nearly two in O'Neill's case, we can get the impression that their respective ideas don't work. No one has gotten any closer to becoming a space colonist since the 1970's, and we haven't seen the nanomachines Drexler promised us in the 1980's which can produce abundance and make us "immortal."

So I suspect you youngsters will probably have a similar letdown waiting for you when you reach your 40's and 50's, and realize that you'll wind up aging and dying like everyone else without having any technological miracles to rescue you.

http://www.amazon.com/The-Visioneers-Scientists-Nanotechnologies-Limitless/dp/0691139830/

Replies from: Kaj_Sotala, Desrtopa, JoshuaZ, David_Gerard, gwern
comment by Kaj_Sotala · 2012-12-02T10:20:24.477Z · LW(p) · GW(p)

A lot of young people, including me, seem to be getting a lot of "man, we're really living in the future" kind of emotional reactions relatively frequently. E.g. I remember that as a kid, I imagined having a Star Trek-style combined communicator and tricorder so that if someone wanted to know where I was, I could snap them a picture of my location and send it to them instantly. To me, that felt cool and science fictiony. Today, not only can even the cheapest cell phone do that, but many phones can be set up to constantly share their location to all of one's friends.

Or back in the era of modems and dial-up Internet, the notion of having several gigabytes of e-mail storage, wireless broadband Internet, or a website hosting and streaming the videos of anyone who wanted to upload them all felt obviously unrealistic and impossible. Today everyone takes the existence of those for granted. And with Google Glass, I expect augmented reality to finally become commonplace and insert itself into our daily lives just as quickly as smartphones and YouTube did.

And since we're talking about Google, self-driving cars!

Or Planetary Resources. Or working brain implants. Or computers beating humans at Jeopardy. Or... I could go on and on.

So the point of this comment is that I'm having a hard time imagining my 40's and 50's being a letdown in terms of technological change, given that by my mid-20's I've already experienced more future shocks than I would ever have expected to. And that makes me curious about whether you experienced a similar amount of future shocks when you were my age?

Replies from: mwengler, Risto_Saarelma, thomblake
comment by mwengler · 2012-12-03T19:27:08.873Z · LW(p) · GW(p)

I'm 55 and I think the present is more shocking now than it was in the 1970s and 1980s. For me, the 70s and 80s were about presaging modern times. I think the first time I could look up the card catalog at my local library, ~1986 on Gopher, I began to believe viscerally that all this internet stuff and these computers were going to seriously matter SOON. Within a few months of that I saw my first webpage, and that literally (by which of course I mean figuratively) knocked me into the next century. I was flabbergasted.

Part of what was so shocking about being shocked was that it was, in some weird sense, exactly what I expected. I had played with HyperCard on Macs years earlier, and the early web was just essentially a networked extension of that. In my science fiction youth, I had always known or believed that knowledge would be ubiquitously available. I could summarize by saying that none of the electronics in Star Trek (the original) seemed unreasonable, from talking computers and big displays to tricorders and communicators. To me, faster-than-light travel, intelligent species all over the universe that looked and acted like made-up humans, and the transporter all seemed unreasonable.

Maybe what was shocking about the webpage is that it was so GORGEOUS. I saw it on a biggish Sun workstation screen. Text was crisp, proportionally spaced black type on a white background. Pictures were colorful and vibrant. Hypertext links worked really fast. The impact of actually seeing it was overwhelming compared to just believing that someday it would be there.

As a 55-year-old it feels to me like we are careening towards a singularity. The depth of processing power and the variety of sensors that can be used in smartphones have barely begun to be explored. Meanwhile, these continue to get more powerful, more beautiful, and with more sensors available. Google autodrive cars: of course all those ideas about building guidewires and special things into the roads are dopey. At least it's dopey if you don't have to, and Google shows you don't.

Years ago when looking at biotech I commented wonderingly to my equal-aged friend: isn't it amazing to think that we could be among the last generation to die? My only consolation in knowing I will probably not make it until the singularity is that, the way these things go, it will probably be delayed until 2091 anyway, so I won't just miss it by a little. And meanwhile, it doesn't take much to enjoy the sled ride down the event horizon as the proto-singularity continues to wind itself up.

Live long and prosper, my friends.

Replies from: NancyLebovitz, Armok_GoB, shminux
comment by NancyLebovitz · 2012-12-14T19:31:58.745Z · LW(p) · GW(p)

I'm 59. It didn't seem to me as though things changed very much until the 90's. Microwaves and transistor radios are very nice, but not the same sort of qualitative jump as getting on line.

And now we're in an era where it's routine to learn about extrasolar planets-- admittedly not as practical as access to the web, but still amazing.

I'm not sure whether we're careening towards a singularity, though I admit that self-driving cars are showing up much earlier than I expected.

Did anyone else expect that self-driving cars would be so much easier than natural language?

Replies from: gwern, None, TheOtherDave, mwengler, mwengler, CronoDAS
comment by gwern · 2012-12-15T05:24:18.115Z · LW(p) · GW(p)

Did anyone else expect that self-driving cars would be so much easier than natural language?

I was very surprised. I had been using Google Translate, and before that Babel Fish, for years, and expected them to slowly and incrementally improve as they kept on doing; self-driving cars, on the other hand, showed essentially no visible improvement to me in the 1990s and the 2000s, up to the second DARPA challenge, where (to me) they did the proverbial '0 to 60'.

comment by [deleted] · 2012-12-14T21:45:39.829Z · LW(p) · GW(p)

Did anyone else expect that self-driving cars would be so much easier than natural language?

Not I -- they seem like different kinds of messy. Self-driving cars have to deal with the messy, unpredictable natural world, but within a fairly narrow set of constraints. Many very simple organisms can find their way along a course while avoiding obstacles and harm; driving obviously isn't trivial to automate, but it just seems orders of magnitude easier than automating a system that can effectively interface with the behavior-and-communication protocols of eusocial apes, as it were.

comment by TheOtherDave · 2012-12-15T17:03:06.633Z · LW(p) · GW(p)

Did anyone else expect that self-driving cars would be so much easier than natural language?

I have always expected computers that were as able to navigate a car to a typical real-world destination as an average human driver to be easier to build than computers that were as able to manage a typical real-world conversation as an average human native speaker.

That said, there's a huge range of goalposts in the realm of "natural language", some of which I expected to be a lot easier than they seem to be.

comment by mwengler · 2012-12-14T20:19:15.649Z · LW(p) · GW(p)

It didn't seem to me as though things changed very much until the 90's.

I had access to a BASIC-programmable time-shared teletype in '73 & '74, dial-up and a local IBM (we loaded cards, got printouts of results) '74-'78 @ Swarthmore College, programmed in Fortran for radio astronomers '78-'80, and so on... I always took computers for granted and assumed through that entire time period that it was "too late" to get in on the ground floor because everybody already knew.

I never realized before now how lucky I was, how little excuse I have for not being rich.

Did anyone else expect that self-driving cars would be so much easier than natural language?

If by "expect" you mean BEFORE I knew the result? :) It is very hard to make predictions, ESPECIALLY about the future. Now I didn't anticipate this would happen, but as it happens it seems very sensible.

Stuff we were particularly evolved to do is more complex than stuff we use our neocortex for, stuff we were not particularly evolved to do. I think we systematically underestimate how hard language is because we have all sorts of evolutionarily provided "black boxes" to help us along that we seem blind to until we try to duplicate the function outside our heads. Driving, on the other hand, we are not particularly well evolved to do, so we have had to make it so simple that even a neocortex can do it. Probably the hardest part of automated driving is bringing the situational awareness into the machine driving the car: interpreting camera images to tell what a stoplight is doing, where the other cars are and how they are moving, and so on, which all recapitulate things we are well evolved to do.

But no, automated driving before relatively natural language interfaces was a shocking result to me as well.

And I can't WAIT to get one of those cars. Although my daughter getting her learner's permit in half a year is almost as good (what do I care whether Google drives me around or Julia does?).

comment by mwengler · 2012-12-15T08:42:51.779Z · LW(p) · GW(p)

In an amazing coincidence, soon after seeing your comment I came across a Hacker News link that included this quote:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.

comment by CronoDAS · 2012-12-14T20:16:46.326Z · LW(p) · GW(p)

Did anyone else expect that self-driving cars would be so much easier than natural language?

I'm not surprised that it's easier, but I also didn't expect to see self-driving cars that worked.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-15T04:27:12.356Z · LW(p) · GW(p)

I'm not surprised that it's easier, but I also didn't expect to see self-driving cars that worked.

Does this imply that you expected natural language to be impossible?

Replies from: CronoDAS
comment by CronoDAS · 2012-12-15T09:08:25.598Z · LW(p) · GW(p)

You know, I actually don't know!

It can't literally be impossible, because humans do it, but artificial natural language understanding seemed to me like the kind of thing that couldn't happen without either a major conceptual breakthrough or a ridiculous amount of grunt work done by humans, like the CYC project is seeking to do - input, by hand, into a database everything a typical 4-year-old might learn by experiencing the world. On the other hand, if by "natural language" you mean something like "really good Zork-style interactive fiction parser", that might be a bit less difficult than making a computer that can pass a high school English course. And I'm really boggled that a computer can play Jeopardy! successfully. Although, to really be a fair competition, the computer shouldn't be given any direct electronic inputs; if the humans have to use their eyes and ears to know what the categories and "answers" are, then the computer should have to use a video camera and microphone, too.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-15T11:48:24.676Z · LW(p) · GW(p)

And I'm really boggled that a computer can play Jeopardy! successfully.

The first time I used Google Translate, a couple years ago, I was astonished how good it was. Ten years earlier I thought it would be nearly impossible to do something like that within the next half century.

Replies from: CronoDAS, NancyLebovitz
comment by CronoDAS · 2012-12-15T23:34:44.301Z · LW(p) · GW(p)

Yeah, it's interesting the trick they used - they basically used translated books, rather than dictionaries, as their reference... that, and a whole lot of computing power.

If you have an algorithm that works poorly but gets better if you throw more computing power at it, then you can expect progress. If you don't have any algorithm at all that you think will give you a good answer, then what you have is a math problem, not an engineering problem, and progress in math is not something I know how to predict. Some unsolved problems stay unsolved, and some don't.
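
A toy sketch of the general idea (not Google's actual system; the corpus and scoring here are invented for illustration): given nothing but sentence-aligned translated text, co-occurrence counts alone already start to recover a dictionary, and more text makes the estimates better.

    # Toy corpus-based translation-pair extraction (Python).
    # Not Google's algorithm; just the "translated books, not
    # dictionaries" idea in miniature.
    from collections import Counter
    from itertools import product

    # Hypothetical sentence-aligned corpus: (English, French) pairs.
    corpus = [
        ("the house", "la maison"),
        ("the house is blue", "la maison est bleue"),
        ("a house", "une maison"),
        ("the car", "la voiture"),
    ]

    en_counts, fr_counts, pair_counts = Counter(), Counter(), Counter()
    for en_sent, fr_sent in corpus:
        en_words, fr_words = set(en_sent.split()), set(fr_sent.split())
        en_counts.update(en_words)
        fr_counts.update(fr_words)
        pair_counts.update(product(en_words, fr_words))

    def dice(en, fr):
        # High when the two words mostly appear in the same sentence pairs.
        return 2 * pair_counts[(en, fr)] / (en_counts[en] + fr_counts[fr])

    # "house" aligns most strongly with "maison".
    print(max(fr_counts, key=lambda fr: dice("house", fr)))

Throwing more computing power at this means more books and bigger co-occurrence tables, which is exactly the "works poorly but scales" shape described above.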

comment by NancyLebovitz · 2012-12-15T14:41:15.395Z · LW(p) · GW(p)

Is Google Translate a somewhat imperfect Chinese Room?

Also, is Google Translate getting better?

comment by Armok_GoB · 2012-12-03T23:05:57.756Z · LW(p) · GW(p)

/me points at cryonics.

comment by shminux · 2012-12-03T20:14:53.943Z · LW(p) · GW(p)

OK, I'm a bit younger than you, though I still remember having to use a slide rule in school. And I agree, it's been an exciting ride.

Part of what was so shocking about being shocked was that it was, in some weird sense, exactly what I expected.

Not my impression at all. To me the ride appears full of wild surprises around every turn. In retrospect, while I did foresee one or two things that came to pass, others were totally unexpected. That's one reason I keep pointing out on this forum that failure of imagination is one of the most pervasive and least acknowledged cognitive fallacies. There are many more black swans than we expect.

In that sense, we are living through the event horizon already. As a person trained in General Relativity, I dislike misusing this term, but there is a decent comparison here: when free-falling and crossing the event horizon of a black hole one does not notice anything special at all, it's business as usual. There is no visible "no going back" moment at all.

In that vein, I expect the surprises, both good and bad, to continue at about the same pace for some time. I am guessing that the worst problems will be those no one thinks about now, except maybe in a sci-fi story or two, or on some obscure blog. Same with x-risk. It will not be Skynet, nanobots, bioweapons, or asteroids, but something totally out of left field. Similarly, the biggest progress in life extension will not be due to cryo or WBE, but some other tech. Or maybe there won't be any at all for another century.

comment by Risto_Saarelma · 2012-12-02T14:52:33.658Z · LW(p) · GW(p)

I get the same feeling. It seems unusually hard to come up with an idea of what things will be like in ten or so years that doesn't sound like either head-in-the-sand denial of technological change or craziness.

I wonder how you could figure out just how atypical things are now. Different from most of history, sure: most people used to live in a world where you expected life parameters to be the same for your grandparents' and grandchildren's generations, and we definitely don't have that now. But we haven't had that in the first world for the last 150 years. Telegraphs, steam engines, and mass manufacture were new things that caused massive societal change. Computers, nuclear power, space rockets, and figuring out that space and time are stretchy and living cells are just chemical machines were the kind of things more likely to make onlookers go "wait, that's not supposed to happen!" than "oh, clever".

People during the space age definitely thought they were living in the future, and contemporary stuff is still a bit tinged by how their vast projections failed to materialize on schedule. Did more people in 1965 imagine they were living in the future than people in 1975? What about people doing computer science in 1985, compared to 2005?

The space program enthusiasts mostly did end up very disappointed in their 50s, as did the people who were trying to get personal computing going using unified Lisp or Smalltalk environments that were supposed to empower users with the ability to actually program the system as a routine matter.

Following the pattern, you'd expect to get a bunch of let down aging singularitarians in the 2030s, when proper machine intelligence is still getting caught up with various implementation dead ends and can't get funding, while young people are convinced that spime-interfaced DNA resequencing implants are going to be the future thing that will change absolutely everything, you just wait, and the furry subculture is a lot more disturbing than it used to be.

So I don't know which it is. There seems to be more stuff from the future in peoples' everyday lives now, but stuff from the future has been around for over a century now, so it's not instantly obvious that things should be particularly different right now.

Replies from: Richard_Kennaway, Risto_Saarelma, gwern
comment by Richard_Kennaway · 2012-12-03T16:01:55.821Z · LW(p) · GW(p)

What about people doing computer science in 1985, compared to 2005?

It may seem to have been a golden age of promise now lost, but I was there, and that isn't how it seems to me.

As examples of computer science in 1985, the linked blog post cites the Lisp machine and ALICE. The Lisp machine was built. It was sold. There are no Lisp machines now, except maybe in museums or languishing as mementos. ALICE (not notable enough to get a Wikipedia article) never went beyond a hardware demo. (I knew Mike Reeve and John Darlington back then, and knew about ALICE, although I wasn't involved with it. One of my current colleagues was, and still has an old ALICE circuit board in his office. I was involved with another alternative architecture, of which, at this remove, the less said the better.)

What killed them? Moore's Law, and this was an observation that was made even back then. There was no point in designing special purpose hardware for better performance, because general purpose hardware would have doubled its speed before long and it would outperform you before you could ever get into production. Turning up the clock made everything faster, while specialised hardware only made a few things faster.

Processors stopped getting faster in 2004 (when Intel bottled out of making 4GHz CPUs). The result? Special-purpose hardware, primarily driven not by academic research but by engineers trying to make stuff that did more within that limit: GPUs for games and server farms for the web. Another damp squib of the 1980s, the Transputer, can be seen as ancestral to those developments, but I suspect that if the Transputer had never been invented, the development of GPUs would be unaffected.

When it appears, as the blog post says, "that all you must do to turn a field upside-down is to dig out a few decades-old papers and implement the contents", well, maybe a geek encountering the past is like a physicist encountering a new subject. OTOH, he is actually trying to do something, so props to him, and I hope he succeeds at what could not be done back then.

comment by Risto_Saarelma · 2012-12-03T14:54:04.501Z · LW(p) · GW(p)

Thinking a bit more of this, I think the basic pattern I'm matching here is that in each era there's some grand technocratic narrative where an overarching first-principles design based on the current Impressive Technology (industrial production, rocket engines, internetworked computers, or artificial intelligence) will produce a clean and ordered new world order. This won't happen, and instead something a lot more organic, diffuse, confusing, low-key and wildly unexpected will show up.

On the other hand, we don't currently seem to be having the sort of unified present-day tech paradigm like there was during the space age. My guess for the next big tech paradigm thing would be radical biotechnology and biotech-based cognitive engineering, but we don't really have either of those yet. Instead, we've got Planetary Resources and Elon Musk doing the stuff of the space age folk, Bitcoin and whatnot that's something like what the 90s cypherpunks thought up, IBM Watson and Google cars that are something AI was supposed to deliver in the 80s before the AI Winter set in, and we might be seeing a bit of a return to an 80s-style diverse playing field in computing with stuff like the Raspberry Pi, 3D printing, and everybody being able to put their apps online and for sale without paying for brick & mortar shelf space.

So it's kinda like all the stuff that was supposed to happen any time now at various points of the late 20th century was starting to happen at once. But that could be just the present looking like it has a lot more stuff than the past, since I'm seeing a lot less of the past than the present.

comment by gwern · 2012-12-02T22:50:40.884Z · LW(p) · GW(p)

I get the same feeling. It seems unusually hard to come up with an idea of what things will be like in ten or so years that doesn't sound like either head-in-the-sand denial of technological change or craziness.

You know, that's a good description of my reaction to reading Brin's Existence the other day. I think 10 years is not that revolutionary, but at 50+ years, the dichotomy is getting pretty bad.

comment by thomblake · 2012-12-03T20:10:53.643Z · LW(p) · GW(p)

A lot of young people, including me, seem to be getting a lot of "man, we're really living in the future" kind of emotional reactions relatively frequently.

I'm 33, and same here.

I like to point out the gap between the time I think of something cool and the time it is invented. In general, that gap has usually been negative for a number of years now. As a trivial silly example, after hearing the Gangnam Style song, I said "I want to see the parody video called 'Gungan Style' about Star Wars." (I just assumed it already existed). While there were indeed several such videos, the top result was instead a funnier video making fun of the concept of making such a parody video.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-04T04:14:27.527Z · LW(p) · GW(p)

If we're living in the future, when is the present?

Replies from: thomblake, Oligopsony
comment by thomblake · 2012-12-04T14:58:44.738Z · LW(p) · GW(p)

We just missed it.

comment by Oligopsony · 2012-12-04T04:37:47.124Z · LW(p) · GW(p)

The time at which classical images of "the future" were generated and popularized.

comment by Desrtopa · 2012-12-02T03:45:57.816Z · LW(p) · GW(p)

No one has gotten any closer to becoming a space colonist since the 1970's, and we haven't seen the nanomachines Drexler promised us in the 1980's which can produce abundance and make us "immortal."

On the other hand, we do have nanomachines, which can do a number of interesting things, and we didn't have them a couple decades ago. We're making much more tangible progress towards versatile nanotechnology than we are towards space colonization.

comment by JoshuaZ · 2012-12-02T03:35:02.253Z · LW(p) · GW(p)

It seems that both Taleb and Aaronde are talking about a much smaller scale change than things like space colonization and general nanotech.

Replies from: aaronde
comment by aaronde · 2012-12-02T05:51:03.242Z · LW(p) · GW(p)

Yeah, that was my impression. One of the things that's interesting about the article is that many of the technologies Taleb disparages already exist. He lists space colonies and flying motorcycles right alongside mundane tennis shoes and video chat. So it's hard to tell when he's criticizing futurists for expecting certain new technologies, and when he's criticizing them for wanting those new technologies. When he says that he's going to take a cab driven by an immigrant, is he saying that robot cars won't arrive any time soon? Or that it wouldn't make a difference if they did? Or that it would be bad if they did? I think his point is a bit muddled.

One thing he gets right is that cool new technologies need not be revolutionary. Don't get me wrong; I take the possibility of truly transformative tech seriously, but futurists do overestimate technology for a simple reason. When imagining what life will be like with a given gadget, you focus on those parts of your life when you could use the gadget, and thus overestimate the positive effect of the gadget (This is also why people's kitchens get cluttered over time). For myself, I think that robot cars will be commonplace in ten years, and that will be friggin' awesome. But it won't transform our lives - it will be an incremental change. The flip side is that Taleb may underestimate the cumulative effect of many incremental changes.

comment by David_Gerard · 2013-01-02T16:32:32.153Z · LW(p) · GW(p)

I'm 45 (edit: 46) and think the modern age is simply goddamn fantastic.

comment by gwern · 2013-01-02T19:27:05.976Z · LW(p) · GW(p)

So I suspect you youngsters will probably have a similar letdown waiting for you when you reach your 40's and 50's, and realize that you'll wind up aging and dying like everyone else without having any technological miracles to rescue you.

So why don't we see an inverse Maes-Garreau effect, where predictors upon hitting their 40-50s are suddenly let down and disenchanted and start making predictions for centuries out, rather than scores of years?

And what would you predict for the LW survey results? All 3 surveys ask for the age of the respondent, so there's plenty of data to correlate against, and we should be able to see any discouragement in the 40-50yo respondents.
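
For anyone who wants to actually run that check, a minimal sketch, assuming the survey were exported as a CSV; the filename and the Age and SingularityYear column names are hypothetical stand-ins for whatever the real export uses (statistics.correlation needs Python 3.10+):

    import csv
    import statistics

    ages, predictions = [], []
    with open("lw_survey.csv", newline="") as f:  # hypothetical filename
        for row in csv.DictReader(f):
            try:
                ages.append(float(row["Age"]))  # hypothetical column names
                predictions.append(float(row["SingularityYear"]))
            except (KeyError, ValueError):
                continue  # skip blank or malformed answers

    # A positive Pearson correlation would mean older respondents place
    # the singularity further out, i.e. an inverse Maes-Garreau effect.
    print(statistics.correlation(ages, predictions))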

comment by Metus · 2012-12-01T16:29:20.149Z · LW(p) · GW(p)

On this site there is a lot of talk about x-risk like unfriendly AI, grey goo or meteorite strikes. Now x-risk is not a concept completely confined to humanity as a whole; it also applies to any individual, who is affected not only by global risks but also by local and individual events. Has anyone here researched ways to effectively reduce individual catastrophic risk and mitigate the effects of local and global catastrophic events? I am thinking of things like financial, legal and political risk, natural disasters and pandemics. So far I have found the emergency kit as designed by www.ready.gov, but I am positive that there is much much more out there.

Replies from: NancyLebovitz, EvelynM
comment by NancyLebovitz · 2012-12-01T23:24:13.441Z · LW(p) · GW(p)

Taleb recommends staying out of debt so as to increase your flexibility.

comment by EvelynM · 2012-12-02T05:05:36.914Z · LW(p) · GW(p)

National Response Teams (http://www.nrt.org) are governmental inter-agency teams formed to respond to incidents of a wide variety of sizes. That may be a place to start your research.

comment by Thomas · 2012-12-01T09:48:53.488Z · LW(p) · GW(p)

A black hole made of light only has its own name: Kugelblitz. When you have light that is dense enough, a black hole forms as it normally would, and it has this cool name - Kugelblitz!

And what is the name of a black hole made almost exclusively of neutrinos? I googled a little but haven't found anything yet.

Replies from: Manfred, Plasmon
comment by Manfred · 2012-12-01T11:03:30.831Z · LW(p) · GW(p)

By analogy, Kugelneutrino. Or maybe "kugelnichts" or "kugellangweiligkeit."

Replies from: Thomas
comment by Thomas · 2012-12-01T11:23:13.565Z · LW(p) · GW(p)

Google has no reference to Kugelneutrino until this gets indexed. Okay.

I wonder if a Kugelneutrino exists. Enough supernovas, maybe a million, spaced on a large sphere and igniting simultaneously as seen by an observer at the center, would send a lot of neutrinos in all directions. At the center of the sphere their combined mass-energy should stop them, forming a Kugelneutrino.

comment by Plasmon · 2012-12-01T12:19:24.109Z · LW(p) · GW(p)

Reminds me of this paper,

it was proposed that a black hole could be artificially created by firing a huge number of gamma rays from spherically converging lasers. The idea is to pack so much energy into such a small space that a black hole will form.
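
For reference, the back-of-the-envelope condition (a standard estimate, not taken from the paper itself): energy E, whether carried by light, gamma rays, or neutrinos, forms a black hole once it is confined within its own Schwarzschild radius,

    R \le \frac{2GE}{c^4}, \qquad \text{i.e.} \qquad E \ge \frac{c^4 R}{2G} \approx 6 \times 10^{43}\ \mathrm{J} \text{ for } R = 1\ \mathrm{m}.

Gravity doesn't care what carries the energy, which is why the same construction works for a Kugelneutrino.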

comment by Risto_Saarelma · 2012-12-07T17:49:27.581Z · LW(p) · GW(p)

People in the very theoretical end of programming language research seem to be making noise about something called homotopy type theory, which is supposed to make programming awesome and machine-provable once someone gets around to implementing it.

Like the lambda calculus before it (which is actually embedded in it), MLTT may be a fruit of mathematics that has a very real and practical impact on how programmers think about programming. It could be the unifying paradigm that eventually, finally puts an end to all the “programming language wars” of our time. Already it has begun to transform the mathematical community.

It's apparently related to intuitionistic type theory, which one researcher is also very enthused about:

Intuitionistic type theory can be described, somewhat boldly, as a fulfillment of the dream of a universal language for science. In particular, intuitionistic type theory is a foundation for mathematics and a programming language.

Does anyone who has some actual idea what's going on here have an opinion on this?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-12-10T05:27:52.101Z · LW(p) · GW(p)

You seem to be confusing intuitionistic Martin-Löf type theory (MLTT, c. 1970) and homotopy type theory (HTT, c. 2005). Your second and third links are about MLTT, not HTT. The second link does mention HTT in passing and claims that it is "a new interpretation of type theory," but this is simply false. In particular, your first quote is not about HTT.

Your first link really is about HTT, but does not claim that it is relevant to programming. HTT is an extension of type theory to reduce the impedance mismatch between logic and category theory, especially higher category theory. It is for mathematicians who think in terms of categories, not for programmers. In as much as programmers should be using category theory, HTT may be relevant. For example, Haskell encourages users to define monads, but does not require them to prove the monad laws. HTT provides a setting for implementing a true monad type, but is overkill. This is largely orthogonal to MLTT.
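
For readers who haven't seen them, the monad laws in question are the standard three equations, written here with return and Haskell's bind operator >>=:

    \text{return } a \mathbin{>\!\!>\!\!=} f = f\ a \qquad \text{(left identity)}
    m \mathbin{>\!\!>\!\!=} \text{return} = m \qquad \text{(right identity)}
    (m \mathbin{>\!\!>\!\!=} f) \mathbin{>\!\!>\!\!=} g = m \mathbin{>\!\!>\!\!=} (\lambda x.\ f\ x \mathbin{>\!\!>\!\!=} g) \qquad \text{(associativity)}

Haskell will happily accept a "monad" that breaks these; a dependently typed setting can demand the proofs alongside the implementation.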

Automated theorem proving is rarely done, but it usually uses MLTT.

Replies from: bogus
comment by bogus · 2012-12-12T16:17:56.666Z · LW(p) · GW(p)

You are of course right about the distinction between MLTT and HTT, but Risto_Saarelma's first link is a computer science blog. In my view, the claim that computer scientists are "making noise" about homotopy type theory as applicable to programming is fairly justified: in particular, HTT might make it easier to reason about and exploit isomorphisms between data types, e.g. data representations.

Also, Martin-Löf type theory is not the exclusive basis for computer-assisted theorem proving: for instance, the Calculus of Constructions is one alternative. Formal work on HTT is done in Coq, which is based on the CoC.

comment by Maelin · 2012-12-12T03:45:25.354Z · LW(p) · GW(p)

My father told me about someone he knew when he was working as a nurse at a mental hospital, who tried killing himself three times with a gun in the mouth. The first two times he used a pistol of some sort - both times, the bullet passed between the hemispheres of his brain (causing moderate but not fatal brain damage), exited through the back of his head, and all the hot gases from the gun cauterised the wounds.

The third time he used a shotgun, and that did the job. For firearm-based suicide, I think above the ear is a safer bet.

Replies from: NancyLebovitz, army1987, vi21maobk9vp, None
comment by NancyLebovitz · 2012-12-15T20:11:10.430Z · LW(p) · GW(p)

There should be a word for that kind of luck.

comment by A1987dM (army1987) · 2012-12-15T12:13:13.040Z · LW(p) · GW(p)

The first two times he used a pistol of some sort - both times, the bullet passed between the hemispheres of his brain (causing moderate but not fatal brain damage), exited through the back of his head, and all the hot gases from the gun cauterised the wounds.

o.O

comment by vi21maobk9vp · 2012-12-14T18:40:56.930Z · LW(p) · GW(p)

Pistol to the mouth seems to require a full mouth of water for a high chance of success.

comment by [deleted] · 2012-12-13T22:07:53.440Z · LW(p) · GW(p)

Shotgun's not going to have the problems of a pistol, unless you're using slugs -- and I suspect the hydrostatic shock differential will still do the trick there.

comment by [deleted] · 2012-12-10T20:27:26.055Z · LW(p) · GW(p)

So, assuming you have good evidence of eldritch abominations, what is the best suicide method? I'm guessing anything that really scrambles your information, right? Please keep in mind practicality. Really powerful explosives seem hard to obtain. Having someone dispose of your body after suicide seems an OK but risky option.

Fire?

Replies from: MixedNuts
comment by MixedNuts · 2012-12-15T13:16:57.623Z · LW(p) · GW(p)

Sufficiently clever eldritch abominations should be able to reconstruct you from very little material.

  • Your brain, of course, must be entirely destroyed.
  • It's safer to destroy the rest of your nervous system, which might also contain information.
  • Your genetic material and records of your actions (e.g. your comments on the Internet) are individually insufficient to deduce you, but I'm not so sure about the combination.

So first you want to erase as much information about yourself as you can. Take down everything you put on the Internet, burn everything you wrote, exert your right to delete personal information everywhere you have such a right.

You'll also want to distort other people's memories of you. (Or kill them too, but then we get recursive and reprehensible.) If you have sufficient time, you might do a few hugely out of character things and then isolate yourself completely. Maybe suggest a few false memories first.

There's probably nothing you can do about leaving DNA everywhere. At least try not to have kids.

Fire could work, but you're likely to burn incompletely. I suggest going out to a remote, hot area (think Amazonian jungle), obscuring your starting location as much as you can, going as far out as you can, and dying by having your head crushed or a bullet to the head. By the time someone notices your disappearance, figures out where you went, searches the area, and finds your body, you should have rotted completely.

If the eldritch abominations are coming right now and you don't have time for that, yeah, just jump into an incinerator. You should find one of these by following a garbage truck.

(Also, you okay, kid? This is just silly, Deep-Ones-dancing-on-the-head-of-a-pin musing, right? You can message me if you need to.)

Replies from: army1987, None, army1987, ArisKatsaris
comment by A1987dM (army1987) · 2012-12-16T13:02:44.776Z · LW(p) · GW(p)

I'm starting to wonder whether one of the reasons why Roko deleted all of his comments was that he didn't want to leave too many horcruxes behind.

comment by [deleted] · 2012-12-15T16:13:24.374Z · LW(p) · GW(p)

Thank you for the excellent comment.

Your genetic material and records of your actions (e.g. your comments on the Internet) are individually insufficient to deduce you, but I'm not so sure about the combination.

This is what most worries me.

comment by A1987dM (army1987) · 2012-12-15T22:02:56.660Z · LW(p) · GW(p)

Take down everything you put on the Internet, burn everything you wrote, exert your right to delete personal information everywhere you have such a right.

Unfortunately, I'm afraid that in my case I'd have to at least nuke Facebook's servers. I used not to worry about possible future eldritch abominations at all because I thought if I saw them coming I could just guillotine my head into a fireplace or something, but now that I realize that they could likely still reconstruct a sufficient-fidelity copy of me, I do worry a little about not pissing them off. Unfortunately I barely have any idea about what would piss them off, so all in all I don't behave that differently than I used to, as per the standard atheist reply to Pascal's wager. Also, I don't think such abominations are that likely.

comment by ArisKatsaris · 2012-12-15T20:24:11.226Z · LW(p) · GW(p)

I kinda think that we shouldn't make this forum into a place to give people advice about ways on how best to kill themselves.

comment by Ritalin · 2012-12-06T16:49:31.649Z · LW(p) · GW(p)

Rationality, winning, and munchkinry

I can't help but notice that, in reviews of and comments on what we like to call "rationalist fiction", detractors often call characters whose approach to problems is to seek the winning approach, rather than, say, the "reasonable" approach (like one-boxing on Newcomb's problem rather than two-boxing), "munchkins", as if it were some sort of insult.

A work of fiction that Yudkowsky recently recommended, "Harry Potter and the Natural 20", features a protagonist, Milo, who is a wizard from a D&D setting, and who shows great ingenuity and predictive power through completely "insane" thought rituals that nevertheless tend to result in the right answers and the right successions of actions. The author explicitly qualifies him as a munchkin, and says that he only plays the way the character does when he hates the DM.

I notice that I am confused by this. Why is it that using your ingenuity to the very limit, creatively finding combinations of actions and rules that were not intended by the rule makers, following paths not usually trod by society, is seen as a bad thing? What's wrong, in the non-rationalist mind, with having a clear utility function, and optimizing one's actions to achieve the maximum result? Why is it practically treated as a form of defection? Does this have something to do, on some level, with ethical injunctions?

Replies from: MixedNuts
comment by MixedNuts · 2012-12-06T17:40:19.289Z · LW(p) · GW(p)

Munchkinry is a terrible way to play a game because maximizing your character's victories and maximizing your and other players' enjoyment of the game are two very different things. (For one thing, rules-lawyering is a boring waste of time (unless you're into that, but then there are better rulesets, like the Talmud (Zing.)); for another, it's fun to let your character make stupid in-character mistakes.) It is a good way to live a life, and indeed recommended as such by writers of rationalist fiction.

Replies from: Emile, Ritalin
comment by Emile · 2012-12-06T19:02:14.026Z · LW(p) · GW(p)

There can be several ways to get enjoyment out of a roleplaying game:

  • The sheer intellectual challenge of the game (which you can also get from storyless boardgames)

  • Telling or enjoying an interesting story, with interesting situations

  • Escapism - living as someone else, in a different world

These are usually called Gamist, Narrativist, and Simulationist.

They are not mutually incompatible, and you can indeed have different people around the same table with different tastes / goals. There can be problems when one player ruins the story or the believability in order to get a game advantage while other players care about the story, etc. - this is when people complain about munchkinry.

But you can still have good game sessions where everybody is a munchkin, or where the rules and DM are good enough that the players don't get to choose between a game advantage and an interesting story (for example, I think in most versions of D&D you basically have some points you can only spend on combat-useful stuff (picking feats or powers), and some points you can only spend on combat-useless stuff (skill points)).

Replies from: blashimov
comment by blashimov · 2012-12-11T16:33:02.071Z · LW(p) · GW(p)

Anyone who thinks skill points (or any other character ability) are useless in combat gets an "F" in munchkinry. ;)

comment by Ritalin · 2012-12-06T19:03:34.713Z · LW(p) · GW(p)

Yes, but why do people seem to think that it should also apply to fictional characters (not PCs), and to people leading their actual lives?

but then there are better rulesets, like the Talmud (Zing.)

Or, you know, actual Law.

Replies from: TimS
comment by TimS · 2012-12-06T19:28:40.358Z · LW(p) · GW(p)

Or, you know, actual Law.

Hey, I resemble that remark!

Although the actual practice of law is about as rules-lawyer-y as programming a computer. More than the average person has any reason to be, but the purpose is precision, not being a jerk.

Replies from: Ritalin
comment by Ritalin · 2012-12-06T20:15:39.724Z · LW(p) · GW(p)

the purpose is precision, not being a jerk.

I contest that loophole exploitation and leaving room for doubt and interpretation is equivalent to being a jerk.

Replies from: TimS
comment by TimS · 2012-12-06T20:51:48.441Z · LW(p) · GW(p)

In real life? Legal and factual uncertainty favors the unjust (particularly those with power in the current status quo who desire more power). And even institutional players who would want to be unjust make game-theoretic decisions about whether they prefer cost certainty or greater upside (and variability).

But in an RPG environment? It depends a fair bit on whether the goals of the other players are Gamist, Narrativist, or Simulationist. Playing the munchkin in a Narrativist environment has significant jerk potential.

Replies from: Ritalin
comment by Ritalin · 2012-12-06T21:33:35.298Z · LW(p) · GW(p)

Not if, like in Harry Potter and the Natural 20, you postulate that the characters have an innate, total knowledge of the source-book, and that these are to them as the laws of physics are to us. Exploiting them to the utmost becomes a matter of common sense and enlightened self-interest. Also, their psychology becomes strangely inhuman and quite interesting; it's, essentially, xenofiction rather than fantasy.

Legal and factual uncertainty favors the unjust

Or gives legal institutions flexibility to deal with the case-by-case problems the original legislators could never have thought of. I'm thinking of the US Constitution as an instance of that, which was left deliberately as vague as possible so that it could be used for centuries and by all kinds of different ideologies. Countries that have written constitutions that were too specific have found themselves having to change them more frequently, as they became obsolete more rapidly. Am I right, so far?

Replies from: TimS
comment by TimS · 2012-12-11T19:02:17.133Z · LW(p) · GW(p)

Or gives legal institutions flexibility to deal with the case-by-case problems the original legislators could never have thought of. I'm thinking of the US Constitution as an instance of that, which was left deliberately as vague as possible so that it could be used for centuries and by all kinds of different ideologies. Countries that have written constitutions that were too specific have found themselves having to change them more frequently, as they became obsolete more rapidly. Am I right, so far?

I'm not sure what advantage deliberately unclear rules provide when there are legitimate methods to modify the rules and the processes to change rules can be invoked at any time. If your social governance lacks sufficient legitimacy to change the rules, the specificity or vagueness of the current rules is the least of your problems. And if rules can be changed, certainty about results under the current rules is a valuable thing - as multiple economists studying the economic value of the "rule of law" will attest.


the characters have an innate, total knowledge of the source-book, and that these are to them as the laws of physics are to us.

Knowing the laws of physics better is a great way to be more powerful. But be careful about distinguishing between what the player knows and what the character knows. If the character doesn't know that +1 swords aren't worth the effort but +2 swords are great values, then having the player make decisions explicitly and solely on that basis (as opposed to role-playing) can be very disruptive to the interactions between players or between player and GM.

Replies from: Ritalin
comment by Ritalin · 2012-12-12T10:06:57.167Z · LW(p) · GW(p)

Knowing the laws of physics better is a great way to be more powerful.

That is true in Real Life. But, in the world of, say, Dungeons and Dragons, believing that you can run and cast a spell at the same time, or down more than one potion in the span of six seconds, is tantamount to insanity; it just can't be done. The rules of the game are the laws of physics, or at least the most important subset thereof.

Your comment on rules is very interesting. Every time the topic has come up, the citizens of those United States of America have bashed me over the head with the common wisdom that the rules being flexible and accommodating, and therefore not requiring change or rewriting, is a wonderful thing, and that the opposite would be a source of political and legislative instability. And that's when they didn't call the much more frequently changed constitutions of European countries "toilet paper".

I think the reason US citizens care so much about keeping things the way they are is that they have allowed a great deal of regional diversity within the Federation, and creating a clearer, more modern, more specific set of rules would be a dangerous, complex process that would force outliers into convergence and create tons of resistance and a high chance for disaster. It's no coincidence that constitution changes in Europe and other places have come from fighting (especially losing) wars that involve their own territory, getting invaded by foreign powers, or having a revolution. The US haven't had any of these things since... the war with Mexico?

comment by Richard_Kennaway · 2012-12-02T11:13:11.602Z · LW(p) · GW(p)

Another paper on the low quality of much scientific evidence, here in the field of diet-related cancer risk. Just out in the American Journal of Clinical Nutrition: Is everything we eat associated with cancer? A systematic cookbook review.

Replies from: Manfred
comment by Manfred · 2012-12-15T16:24:50.689Z · LW(p) · GW(p)

Also see the oncological ontology project, which aims to separate all things into either causing or curing cancer, as determined by the Daily Mail.

comment by beoShaffer · 2012-12-01T17:47:34.969Z · LW(p) · GW(p)

New study on rationalization and the sophistication effect, courtesy of Marginal Revolution.

comment by NancyLebovitz · 2012-12-07T09:51:17.613Z · LW(p) · GW(p)

It's cute when it's just a lamp.....

Replies from: mwengler, army1987, FiftyTwo
comment by mwengler · 2012-12-15T08:59:53.493Z · LW(p) · GW(p)

I hope that got an A.

comment by A1987dM (army1987) · 2012-12-15T11:52:17.804Z · LW(p) · GW(p)

I'm reminded of this.

comment by FiftyTwo · 2012-12-09T04:50:34.366Z · LW(p) · GW(p)

It interests/unsettles me how much I anthropomorphise it based on very simple behaviours.

comment by Bakkot · 2012-12-02T21:11:10.157Z · LW(p) · GW(p)

Is anyone doing self-scoring on everyday predictions? I've been considering doing this for a while - writing down probability estimates for things like 'will have finished work by 7PM', 'will have arrived on time', and even 'will rain' or 'friend will arrive on time', so I can detect systematic errors and try to correct for them. (In particular, if you are consistently over- or under-confident, you can improve your score by correcting for this without actually needing to be any more accurate.) This seems like a pretty straightforward way of getting 'better' at predicting things, potentially very quickly, so I'm curious about others' experiences.
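
A minimal sketch of the scoring half, in case it helps anyone start (the logged predictions here are invented for illustration): bucket outcomes by stated confidence to see whether your 90% really means 90%, and track a Brier score for overall accuracy.

    from collections import defaultdict

    # Hypothetical log of (stated probability, actual outcome) pairs;
    # in practice these would accumulate in a file you append to.
    predictions = [
        (0.9, True), (0.9, True), (0.9, False),
        (0.6, True), (0.6, False),
        (0.3, False), (0.3, True),
    ]

    # Group outcomes by stated confidence to expose systematic bias.
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[p].append(outcome)

    for p in sorted(buckets):
        hits = buckets[p]
        print(f"said {p:.0%}: happened {sum(hits) / len(hits):.0%} (n={len(hits)})")

    # Brier score: mean squared error between probability and outcome
    # (0 is perfect; always answering 50% scores 0.25). Lower is better.
    brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
    print(f"Brier score: {brier:.3f}")

If your 90% bucket only comes true 70% of the time, shading those predictions down improves the score with no extra knowledge, which is exactly the correction described above.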

Replies from: beoShaffer, Ritalin
comment by beoShaffer · 2012-12-06T19:59:26.460Z · LW(p) · GW(p)

I do (sometimes) do this with private predictions on PredictionBook.

comment by Ritalin · 2012-12-06T17:47:09.801Z · LW(p) · GW(p)

I haven't. Before I imitate you, I'd like to know: what specific implementations have you tried so far, and what are your results?

comment by Vive-ut-Vivas · 2012-12-02T14:07:50.243Z · LW(p) · GW(p)

There's been some talk recently of the need for programmers and how people that are unsatisfied with their current employment can find work in that area while making a decent living. Does there exist some sort of virtual meet-up for people that are working towards becoming programmers? I'd like to form, or be part of, a support group of LW-ers that are beginning programming. There may be something like this around that I've just missed because I mostly lurk and not even that regularly anymore. (Hoping to change that, though.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-03T10:19:02.777Z · LW(p) · GW(p)

Is there a reason to believe that a LW-related environment will provide better help than existing environments, such as Stack Exchange, or one of the free online universities?

I believe there would be some advantages from the LW culture. For example, questions like "which programming language is the best?" would be processed differently in a culture which pays attention to mindkilling and values being specific.

On the other hand, LW is just a tiny subset of the world, and there is the strength in numbers. If a website is visited by thousands of programmers, you are more likely to get your answer, fast.


I could give free Skype lessons in programming (specifically Pascal, Java, JavaScript) if anyone is interested (send me a PM). There are probably more people like this, so we could have a list somewhere. Not just a list about programming, but more generally a list of LWers willing to provide professional-level advice on something, categorized by topic.

Replies from: Vive-ut-Vivas
comment by Vive-ut-Vivas · 2012-12-05T14:32:55.278Z · LW(p) · GW(p)

The main reason I am interested in a LW-related environment (other than it really being my only online "community") is because I know there's been talk here before about people switching fields to become programmers. That's a group of particular interest to me, since I'm one of them. I also know of at least one other person here who is working on becoming a programmer through self-study. There was a post a while back about encouraging more people to become computer programmers, so I'm betting that there are more of us out there.

comment by Paul_G · 2012-12-07T00:13:15.964Z · LW(p) · GW(p)

Why do LWers believe in global warming? The community's belief has changed my posterior odds significantly, but it's the only argument I have for global warming at the moment. I saw the CO2 vs. temperature graphs, and that seemed to sell it for me... Then I heard that the temperature increases preceded the CO2 increases by about 800 years...

So why does the community at large believe in it?

Thanks!

Replies from: blashimov, Bakkot, None, drnickbone, FiftyTwo, drethelin
comment by blashimov · 2012-12-11T16:28:07.675Z · LW(p) · GW(p)

As an environmental engineer engaged in atmospheric modeling, I believe it is true. Atmospheric modeling is a field in which the standard scientific method seems to be working well; that is, there is a large benefit to researchers who are right and/or can prove others wrong. This means that there is a lot of effort going into improving models that are already quite accurate, to the limits of the data you input. For example, the 1990 model of climate change does quite well if you give it better data, and at least correctly predicts the temperature trend with bad data: http://www.huffingtonpost.com/2012/12/10/1990-ipcc-report_n_2270453.html Similar to comments below, the IPCC is an enormous body, and I find invalidating their arguments to require an implausible conspiracy theory. You can look up the executive summaries of the various reports at your leisure; they are quite readable.

comment by Bakkot · 2012-12-09T10:02:23.848Z · LW(p) · GW(p)

This is one of those things you should probably just take on authority, like relativity or the standard model of particle physics. That is to say, it's an exceedingly complex topic in practice, and any argument stated for either side which can readily be understood is likely to be wrong. You have two or three options: study the field long enough to know what's going on, or trust the people who have already done so. (The third option, 'form an opinion without having any idea what's going on', is also commonly taken.)

In short: I believe it's happening because this is what scientists tell me, and it's not worth putting in the time required to understand the field well enough that I could trust my opinion over theirs.

comment by [deleted] · 2012-12-07T00:37:56.411Z · LW(p) · GW(p)

Can't speak for the community at large.

CO2 blocks some frequencies of infrared. This is known and uncontested by even the craziest deniers. Without an atmosphere the earth's average temperature would be around -20 C. You can calculate this from radiation theory (that specific number may be wrong, but it's around there). An atmosphere with CO2 (and some other major greenhouse gases I don't remember) blocks a higher proportion of the radiation from earth than from the sun (because the earth's radiation is mostly infrared, near the range blocked by CO2). With a model for that, you can recalculate the surface temperature. It will be much higher.
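
For what it's worth, a minimal sketch of that back-of-envelope calculation (the solar constant and albedo here are assumed textbook values, not numbers from this comment):

```python
# Back-of-envelope check of the "no atmosphere" temperature quoted above.
# Assumed textbook values: solar constant ~1361 W/m^2, Bond albedo ~0.3.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # sunlight arriving at Earth's orbit, W/m^2
ALBEDO = 0.30      # fraction of sunlight reflected straight back

# Absorbed sunlight is spread over the whole sphere (4x the disc area),
# so the balance is S * (1 - albedo) / 4 = SIGMA * T**4.
T = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"Equilibrium temperature: {T:.0f} K ({T - 273.15:.0f} C)")
# -> roughly 255 K, i.e. about -18 C, close to the -20 C quoted above.
```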

edit: (On the other hand, now that I think about it, I can't prove to myself that absorbent CO2 will actually cause a greenhouse effect. Maybe it's reflective, which would cause the greenhouse effect...) /edit

edit2: OK, I just read the wiki article. Everything they tell you about how the greenhouse effect works is wrong. It's not that the atmosphere somehow blocks the outgoing radiation, as that would violate the second law by allowing the earth to heat up relative to its surroundings. The real mechanism is that the absorption surface (the ground) and the emission surface (roughly the tropopause) are separated by a mechanism that enforces a temperature difference (the adiabatic lapse rate). I need to think about this more. /edit

That analysis does not include things like the effect of temperature on albedo (clouds and snow), which changes things, and other effects, but it gives you rough bounds for what must happen. The model establishes a causal link from CO2 to temperature (there are also links the other way, like forest fires and desertification).

Beyond that, though, climate science is mostly empirical, I think.

My rough belief is that global warming is a thing, but is probably hyped up a bit too much for political reasons.

Replies from: satt, Manfred
comment by satt · 2012-12-15T18:19:49.251Z · LW(p) · GW(p)

edit2: OK, I just read the wiki article. Everything they tell you about how the greenhouse effect works is wrong. It's not that the atmosphere somehow blocks the outgoing radiation, as that would violate the second law by allowing the earth to heat up relative to its surroundings.

That can't be right. The atmosphere does block most of the outgoing radiation — its transmissivity for the Earth's longwave radiation is only about 20% — and if it were transparent to radiation it couldn't exert a greenhouse effect at all. Also, a thought experiment: if we had an electric oven plugged into a solar panel orbiting the Sun, the oven could heat itself relative to the surrounding space just by using light from the Sun, and that wouldn't violate the second law.

Replies from: None
comment by [deleted] · 2012-12-15T19:15:19.511Z · LW(p) · GW(p)

Maybe the second law is the wrong way to look at it. The second law says that the sun can't cause you to heat up hotter than the sun on average. (You can do tricks with heat pumps to make parts of you hotter than the sun, though.)

It also says you can't do tricks with surface properties to change your temperature. (in the absence of heat pumps)

The atmosphere does block most of the outgoing radiation — its transmissivity for the Earth's longwave radiation is only about 20% — and if it were transparent to radiation it couldn't exert a greenhouse effect at all.

OK, I'm still a bit confused about this. I suspect that this effect alone is not enough to cause a greenhouse effect. Let's think it through:

Assume the fraction missing from the transmissivity is all absorptivity (t, a, and r add up to one), and that we model the atmosphere as simply an optical obstruction in thermal equilibrium.

The sun's radiation comes, some of it goes to the atmosphere, some to the earth. If the atmosphere magically ate heat, the earth would get less radiation. However, it does not magically eat heat; it heats up until it is emitting as much as it absorbs. The longwave from earth also gets eaten and re-emitted. About half of the emitted goes to earth, the rest out to space.

So our greenhouse layer prevents some power P1 from getting to earth. Earth emits, and P2 is also eaten. The emitted P3 = P1 + P2. Earth gets P3/2. The sun is hotter than earth, so the power at any given wavelength will be higher, so P1 > P2, therefore P3/2 > P2, which means that on net, heat is flowing from the greenhouse layer to earth. However, the earth is receiving P1 less from the sun, and P1 > P3/2. So the earth cools down relative to a similar earth without the "greenhouse" effect. This makes sense to me because the earth is effectively hiding behind a barrier.

Therefore, if a greenhouse effect exists, it cannot be explained by mere atmospheric absorption. Unless I made some mistake there...

The assumption that is not true in that model is the atmosphere being in independent thermal equilibrium.

If we instead make the atmosphere be in thermal equilibrium with the earth, there is no effect; the earth acts as a single body, and absorption by the atmosphere is the same as absorption by the ground.

If we instead model the atmosphere realistically as a compressible fluid, things become more interesting. I'm not going to do the math here, but the model goes like this: the atmosphere at ground level is in equilibrium with the ground. If a piece of air gets heated up at the ground, it expands and floats up. As it goes up, there is less pressure from the air above it, so it expands, which does work, which cools it down. It cools down at the adiabatic lapse rate, which is the temperature gradient in a well-mixed compressible fluid in a gravitational field. At the tropopause, our piece of air has reached about -40°C. The tropopause is where the atmosphere stops being opaque to greenhouse wavelengths. Therefore, the tropopause is approximately the emission/absorption surface for greenhouse radiation, and the earth gets the other stuff.

So what does this mean for the greenhouse effect? If we make the atmosphere absorb more, it shifts the average radiation surface upwards, to colder air. If the surface of our body is too cold, it must heat up to maintain thermal equilibrium. So it heats up. The lapse rate enforces a certain temperature gradient between earth and atmosphere, so you can see that if you move the equilibrium point up, the earth has to heat up. As for the second law, the atmosphere is acting as a heat pump.

Therefore global warming.

Even this model is a bit broken. If you heat up some air at the top of the atmosphere, it stays up there and stops mixing. I think this is what the tropopause is. I have no idea how to model this.
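
A rough numeric sketch of the emission-height argument above (the lapse-rate figures are standard textbook values, and the 150 m shift is an invented illustration, not a number from this thread):

```python
# Rough arithmetic for the emission-height picture sketched above.
# All numbers are illustrative; the 150 m shift is made up as an example.
g = 9.81       # m/s^2
c_p = 1004.0   # J/(kg K), specific heat of dry air at constant pressure

dry_lapse = g / c_p * 1000   # adiabatic cooling of a dry rising parcel
env_lapse = 6.5              # K/km, typical observed environmental value

print(f"Dry adiabatic lapse rate: {dry_lapse:.1f} K/km")  # ~9.8 K/km

# If extra CO2 raises the effective emission height by ~150 m, the surface
# must warm by roughly lapse_rate * dz to restore the energy balance:
dz_km = 0.15
print(f"Implied surface warming: ~{env_lapse * dz_km:.1f} K")  # ~1 K
```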

Replies from: satt
comment by satt · 2012-12-16T18:44:08.423Z · LW(p) · GW(p)

Maybe the second law is the wrong way to look at it.

I think so. In practice, changing the surface properties of a body in orbit can affect its temperature. If we coated the Moon with soot it would get hotter, and if we coated it in silver it would get colder.

So our greenhouse layer prevents some power P1 from getting to earth. Earth emits, and P2 is also eaten. The emitted P3 = P1 + P2. Earth gets P3/2. The sun is hotter than earth, so the power at any given wavelength will be higher, so P1 > P2, therefore P3/2 > P2, which means that on net, heat is flowing from the greenhouse layer to earth. However, the earth is receiving P1 less from the sun, and P1 > P3/2. So the earth cools down relative to a similar earth without the "greenhouse" effect.

Two key complications break this toy model:

  1. P1 > P2 doesn't follow from the Sun having higher spectral power. The Sun being hotter just means it emits more power per unit area at its own surface, but our planet intercepts only a tiny fraction of that power.

  2. The atmosphere likes to eat Earth's emissions much more than it likes to eat the Sun's. This allows P1 to be less than P2, and in fact it is. P2 > P1 implies P3/2 > P1, which turns the cooling into a warming.

This makes sense to me because the earth is effectively hiding behind a barrier.

The barrier metaphor's a bit dodgy because it suggests a mental picture of a wall that blocks incoming and outgoing radiation equally — or at least it does to me! (This incorrect assumption confused me when I was a kid and trying to figure out how the greenhouse effect worked.)

The assumption that is not true in that model is the atmosphere being in independent thermal equilibrium.

It's a false assumption, but it's not the assumption breaking your (first) model. It's possible to successfully model the greenhouse effect by pretending the atmosphere's a single isothermal layer with its own temperature.
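To spell that out, here is the usual single-layer toy calculation (my sketch, under the standard textbook assumptions; satt gives no specific numbers): once the layer's downward emission is counted, the surface comes out warmer, not cooler.

```python
# Toy single-isothermal-layer greenhouse model. Assumptions: the layer
# passes all sunlight, absorbs all longwave from the ground, and emits
# equally up and down.
T_e = 255.0  # effective temperature with no greenhouse layer, K

# Top of atmosphere: the layer's upward emission must balance absorbed
# sunlight, so the layer sits at T_e. The surface receives sunlight PLUS
# the layer's downward emission: sigma*T_s**4 = 2 * sigma*T_e**4.
T_layer = T_e
T_surface = 2 ** 0.25 * T_e
print(f"Layer: {T_layer:.0f} K, surface: {T_surface:.0f} K")  # ~255 K, ~303 K
```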

The second model you sketch in your last 4 paragraphs sounds basically right, although the emission/absorption surface is some way below the tropopause. That surface is about 5 km high, where the temperature's about -19°C, but the tropopause is 9-17 km high. (Also, there's mixing way beyond the top of the troposphere because of turbulence.)

comment by Manfred · 2012-12-15T16:04:06.155Z · LW(p) · GW(p)

Yeah, understanding the real reason for the greenhouse effect was tricky for me. CO2 makes the atmosphere opaque to infrared even on the scale of meters, so it's not like a regular greenhouse. If the CO2 already absorbs all the infrared emitted from the ground, why does increasing CO2 decrease the amount of energy reaching space? Because what space sees is the temperature of the last atom to emit infrared, and as you add more CO2, the last atom gets higher and higher on average, and thus colder and colder.

This is more like a "warm, clear blanket" effect than a greenhouse effect. (That is, more like diffusion than reflection).

Though note that neither greenhouses nor warm blankets violate the second law - they just can't get any warmer than the sun, which is pouring in energy at wavelengths for which the atmosphere is mostly transparent. Good ol' sun.

comment by drnickbone · 2012-12-15T15:01:05.108Z · LW(p) · GW(p)

You might want to look at Skeptical Science which lists a large number of arguments raised by skeptics of global warming, and what climate science has to say about them. "CO2 lags temperature" is number 11 on the list. Here is the basic response:

CO2 didn't initiate warming from past ice ages but it did amplify the warming. In fact, about 90% of the global warming followed the CO2 increase.

Replies from: Paul_G
comment by Paul_G · 2012-12-15T21:01:21.952Z · LW(p) · GW(p)

This is exactly what I was looking for! Thank you kindly, looking through it as soon as I find time.

comment by FiftyTwo · 2012-12-09T05:03:23.314Z · LW(p) · GW(p)

Then I heard that the temperature increases preceded the CO2 emissions by about 800 years...

Source?

I have lots of reasons for believing in climate change that I could quote at you, but they can mainly be found on the relevant Wikipedia pages (so I assume you've already looked at them). So why am I putting more credence in those arguments than you do? (Assuming we're both equally rational/sane/intelligent.)

What it comes down to, when you abstract away from individual arguments, is that those with the most domain-specific expertise strongly believe it to be true. In general it is best to trust experts in a particular domain unless you have strong reasons to believe that field is flawed. Absent improbable conspiracy theories, I have no reason to believe that in this case.

Replies from: Paul_G
comment by Paul_G · 2012-12-10T04:58:14.952Z · LW(p) · GW(p)

A teacher in a geology class, who is decidedly non-rationalist, mentioned that 800-year thing without a source. Something about the thickness of a line.

This is the first topic I've found in which I have no idea how to dissect the arguments and figure out what's going on. It appears that there are incredibly powerful arguments on both sides, and mountains of strong evidence both for and against human-caused climate change... which shouldn't be possible. A lot of the skeptics seem to have strong arguments countering many of the "alarmist" ideas...

I'm not a good enough rationalist for this, yet. If it weren't for this community's famous support of global warming, there would be no way I'd believe in it, given the data I have. Strange.

I'm not sure it's worth posting sources and the like; counter-counter-arguments become difficult to follow, and it could easily cause a kerfuffle that I would rather avoid.

Thank you all greatly!

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-12-10T06:30:58.181Z · LW(p) · GW(p)

The lag is a phenomenon of the ice age cycle, which is caused by orbital shifts but amplified by emission or absorption of carbon dioxide by the ocean. It takes the ocean about a thousand years to respond to changed atmospheric temperature.

comment by drethelin · 2012-12-09T09:44:34.224Z · LW(p) · GW(p)

I don't know if there's an official consensus in the way you seem to think there is.

My personal point of view is that it seems fairly obvious that dumping tons of shit into the atmosphere is going to have an effect, and is not good for various obvious health and pleasant-atmosphere reasons. There are also reasonable arguments about not upsetting existing equilibria.

On the other hand, speculations about disastrous scenarios seem blatantly over-specified and ridiculous to me. We've had dozens of Ice Ages and warm epochs throughout earth's history, obviously not caused by humans, and we have no idea how they worked or ended or whatnot. I think worrying about global warming as a disaster scenario is ridiculous and semi-religiously enforced for political power as well as tribal affiliation.

Replies from: blashimov, Oscar_Cunningham
comment by blashimov · 2012-12-11T16:31:00.471Z · LW(p) · GW(p)

It depends on what you mean by "disaster" and "over-specified." I will add that the IPCC, a body I accept as reputable, predicts a large range of possible outcomes with probability estimates, some of which I think can be fairly categorized as "disastrous." Global warming is a large potential human misery-causer, but not even close to an existential threat. For certain countries, such as the US, it probably won't be that bad, at least until the second half of this century.

comment by Oscar_Cunningham · 2012-12-09T10:43:10.015Z · LW(p) · GW(p)

My personal point of view is that it seems fairly obvious that dumping tons of shit into the atmosphere is going to have an effect

This is a hollow argument. You characterise CO2 (and other waste gases?) as "tons of shit", which sounds suitably negative but doesn't actually mean anything. What are you using to classify some gases as "tons of shit" that then makes it obvious they'll have an effect? Not all waste products of chemical processes are dangerous; dumping nitrogen into the atmosphere will have no effect at all.

Replies from: drethelin
comment by drethelin · 2012-12-10T07:59:00.301Z · LW(p) · GW(p)

I invite you to stand outside a coal power plant or in a large city in china.

My point was vaguely made but you're attacking it as if it said way more than it did.

comment by [deleted] · 2012-12-05T21:43:40.608Z · LW(p) · GW(p)

From wikipedia.

Principles of cosmicism

The philosophy of cosmicism states that there is no recognizable divine presence, such as a god, in the universe, and that humans are particularly insignificant in the larger scheme of intergalactic existence, and perhaps are just a small species projecting their own mental idolatries onto the vast cosmos, ever susceptible to being wiped from existence at any moment. This also suggested that the majority of undiscerning humanity are creatures with the same significance as insects and plants in a much greater struggle between greater forces which, due to humanity's small, visionless and unimportant nature, it does not recognize.

Perhaps the most prominent theme in cosmicism is the utter insignificance of humanity. Lovecraft believed that "the human race will disappear. Other races will appear and disappear in turn. The sky will become icy and void, pierced by the feeble light of half-dead stars. Which will also disappear. Everything will disappear. And what human beings do is just as free of sense as the free motion of elementary particles. Good, evil, morality, feelings? Pure 'Victorian fictions'. Only egotism exists."[2] Cosmicism shares many characteristics with nihilism, though one important difference is that cosmicism tends to emphasize the inconsequentiality of humanity and its doings, rather than summarily rejecting the possible existence of some higher purpose (or purposes). For example, in Lovecraft's Cthulhu stories, it is not so much the absence of meaning that causes terror for the protagonists as it is their discovery that they have absolutely no power to effect any change in the vast, indifferent, and ultimately incomprehensible universe that surrounds them. Whatever meaning or purpose may or may not be invested in the actions of the cosmic beings in Lovecraft's stories is completely inaccessible to the human characters, in the way an amoeba (for example) is completely unequipped to grasp the concepts that drive human behavior.

Lovecraft's cosmicism was a result of his complete disdain for all things religious, his feeling of humanity's existential helplessness in the face of what he called the "infinite spaces" opened up by scientific thought, and his belief that humanity was fundamentally at the mercy of the vastness and emptiness of the cosmos.[3] In his fictional works, these ideas are often explored humorously ("Herbert West–Reanimator," 1922), through fantastic dreamlike narratives ("The Dream Quest of Unknown Kadath," 1927), or through his well-known Cthulhu Mythos ("The Call of Cthulhu," 1928, and others). Common themes related to cosmicism in Lovecraft's fiction are the insignificance of humanity in the universe[4] and the search for knowledge ending in disaster.[5]

"Cosmic indifference"

Though cosmicism appears deeply pessimistic, H.P. Lovecraft thought of himself as neither a pessimist nor an optimist but rather an "indifferentist,"[citation needed] a theme expressed in his fiction. In Lovecraft's work, human beings are often subject to powerful beings and other cosmic forces, but these forces are not so much malevolent as they are indifferent toward humanity.[6] This indifference is an important theme in cosmicism. The noted Lovecraft scholar S. T. Joshi points out that "Lovecraft constantly engaged in (more or less) genial debates on religion with several colleagues, notably the pious writer and teacher Maurice W. Moe. Lovecraft made no bones about being a strong and antireligious atheist; he considered religion not merely false but dangerous to social and political progress."[7] As such, Lovecraft's cosmicism is not religious at all, but rather a version of his mechanistic materialism. Lovecraft thus embraced a philosophy of cosmic indifferentism. He believed in a meaningless, mechanical, and uncaring universe that human beings, with their naturally limited faculties, could never fully understand. His viewpoint made no allowance for religious beliefs which could not be supported scientifically. The incomprehensible, cosmic forces of his tales have as little regard for humanity as humans have for insects.[8]

Though hostile to religion, Lovecraft used various "gods" in his stories, particularly the Cthulhu related tales, to expound cosmicism. However, Lovecraft never conceived of them as supernatural; they are merely extraterrestrials who understand and obey a set of natural laws, which to the limited human understanding seem magical. These beings (the Great Old Ones, Outer Gods and others)—though dangerous to humankind—are neither good nor evil, and human notions of morality have no meaning for these beings. Indeed, they exist in cosmic realms beyond human understanding. As a symbol, they represent the kind of universe that Lovecraft believed in, a universe in which humanity is an insignificant blot, fated to come and go, its appearance unnoticed and its passing unmourned.[9]

Yeah.

Replies from: Ritalin, Multiheaded
comment by Ritalin · 2012-12-06T17:28:23.521Z · LW(p) · GW(p)

“There is no justice in the laws of nature, no term for fairness in the equations of motion. The Universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky.

But they don't have to! WE care! There IS light in the world, and it is US!” ― Eliezer Yudkowsky, Harry Potter and the Methods of Rationality

Yes.

Replies from: Zaine
comment by Zaine · 2012-12-10T07:46:56.545Z · LW(p) · GW(p)

While you may not err here, do keep in mind that not all characters are extensions of their author.

Replies from: Ritalin
comment by Ritalin · 2012-12-10T11:26:57.495Z · LW(p) · GW(p)

I think he made that same point in other words somewhere in the sequences. And I couldn't agree more.

Lovecraftian horror always struck me as a rather unwise way of looking at things; so what if incomprehensible forces in the universe could walk over us at any time and obliterate us? If we can't stop them, and can't predict them, why should we possibly even think about them or let their existence get us down? They're, essentially, irrelevant.

I also take issue with all the "drives you mad" reactions in Lovecraftian stories. PTSD drives you mad. Seeing seemingly-impossible things confuses you, because it messes with your epistemic models, but why should it mess with your capacity for rational thought?

comment by Multiheaded · 2012-12-10T14:49:34.239Z · LW(p) · GW(p)

The curtain rises on an ordinary street scene, with actors coming and going rapidly. There are bits of ordinary conversation ("Wines ... windowglass ... gold’s going down"), suggestions of violence and insanity ("He’s undressing me. Help, he’s ripping my dress off..." "I’m on fire, I’m burning, I’m going to jump") and, finally, the word "Sirius" repeated in every tone of voice and every pitch of the scale:

SIRIUS ... SIRIUS ... SIRIUS ... SIRIUS ... Then a loudspeaker thunders: THE GOVERNMENT URGES YOU TO REMAIN CALM.

Actors rush about claiming that the sun is getting bigger, the plague has broken out, there is thunder without lightning, etc. A reasonable voice tries to explain, "It was a magnetic phenomenon...." Then the loudspeaker tells us:

STUPENDOUS DISCOVERY - SKY PHYSICALLY ABOLISHED - EARTH ONLY A MINUTE AWAY FROM SIRIUS - NO MORE FIRMAMENT

(Antonin Artaud, There Is No More Firmament, 1933. He never knew Lovecraft.)

comment by [deleted] · 2012-12-03T22:45:12.617Z · LW(p) · GW(p)

Would a current or former Carnegie Mellon student be interested in talking to me, a high school senior, about the school? I intend to major in physics. Please private message me if you are.

comment by [deleted] · 2012-12-15T05:28:12.123Z · LW(p) · GW(p)

Someone is doing (and documenting on video) 100 days of rejection therapy. He's currently up to day 26.

comment by pleeppleep · 2012-12-12T16:10:07.152Z · LW(p) · GW(p)

My friend just asked me how many people typically attend our meetups. I don't know the answer. How do I find out?

Replies from: None
comment by [deleted] · 2012-12-12T16:46:55.143Z · LW(p) · GW(p)

Do you mean just your (regional) group, or some kind of average over all meetup groups?

Replies from: pleeppleep
comment by pleeppleep · 2012-12-12T18:02:17.919Z · LW(p) · GW(p)

just in general

Replies from: drethelin
comment by drethelin · 2012-12-13T10:42:02.370Z · LW(p) · GW(p)

6.5 people and 1/3 dogs

Replies from: None
comment by [deleted] · 2012-12-13T14:55:10.965Z · LW(p) · GW(p)

I see your 1/3 dog and raise you half a cat.

comment by moridinamael · 2012-12-01T23:56:50.629Z · LW(p) · GW(p)

I've recently become aware of the existence of the Lending Club, which appears to be a peer-to-peer framework for borrowers and lenders. I find myself intrigued by the interest rates claimed, but most of what I've found in my own research indicates that these interest-rate computations involve a lot of convenient assumptions. Also, apparently if the Lending Club itself goes bankrupt, there is no expectation that you will get your investment back.

It seems at least conceivable that the interest rates are actually that high, since it is a new, weird type of investment, and thus underexposed relative to the market at large.

I was wondering if anyone more monetarily savvy than me could look into this, or warn me to stop wasting my time, if called for.

Replies from: EvelynM
comment by EvelynM · 2012-12-02T05:09:07.438Z · LW(p) · GW(p)

The interest rates for that sort of peer-to-peer lending are high, because the default rates are high. That is, you have a lower probability of getting all of your money back.

comment by aaronsw · 2012-12-01T14:18:40.922Z · LW(p) · GW(p)

Someone smart recently argued that there's no empirical evidence that young earth creationists are wrong, because all the evidence we have of the Earth's age is consistent with the hypothesis that God created the earth 4000 years ago but designed it to look like it was much older. Is there a good one-page explanation of the core LessWrong idea that your beliefs need to be shifted by evidence even when the evidence isn't dispositive, versus the standard scientific notion of devastating proof? Right now the idea seems smeared across the Sequences.

Replies from: MinibearRex, TrE, Vaniver, DanielLC, mwengler, None, MugaSofer
comment by MinibearRex · 2012-12-02T02:16:10.866Z · LW(p) · GW(p)

Prior probabilities seem to me to be the key idea. Essentially, young earth creationists want P(evidence|hypothesis) = ~1. The problem is that to do this, you have to make P(hypothesis) very small. In effect, they're overfitting the data. The "no god" and "deceitful god" hypotheses may have identical likelihood functions, but the second one is a conjunction of a lot of statements (god exists, god created the world, god created the world 4000 years ago, god wants people to believe he created the world 4000 years ago, god wants people to believe he created the world 4000 years ago despite evidence to the contrary, etc.). Each of these statements further shrinks the prior probability that goes into the Bayesian update.
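
A toy version of that update with made-up numbers, just to show the mechanics (none of these probabilities come from the comment):

```python
# Both hypotheses predict the evidence perfectly, so the likelihoods are
# equal and the posterior odds just equal the prior odds.
p_E_given_no_god = 1.0
p_E_given_deceitful_god = 1.0

# The deceitful-god prior is a product over conjoined claims (exists,
# created the world, created it 4000 years ago, wants us to believe
# that despite contrary evidence, ...); all factors invented here:
prior_deceitful_god = 0.1 * 0.5 * 0.5 * 0.2 * 0.2   # = 0.001
prior_no_god = 0.5

posterior_odds = (prior_no_god * p_E_given_no_god) / (
    prior_deceitful_god * p_E_given_deceitful_god)
print(f"Posterior odds against the deceitful god: {posterior_odds:.0f}:1")
# -> 500:1, purely because of the conjunction in the prior.
```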

comment by TrE · 2012-12-01T16:27:23.961Z · LW(p) · GW(p)

IIRC the main post about this concept is conservation of expected evidence.

comment by DanielLC · 2012-12-01T18:50:59.368Z · LW(p) · GW(p)

He's not entirely wrong. Essentially, the more evidence you find of the Earth being more than 4000 years old, the more evidence you have against a non-deceiving god having created it 4000 years ago. If there's a 0.1% chance that a god will erase all evidence of his existence, then we can only get about 10 bits of evidence against him (the likelihood ratio is capped at 1000:1).

The problem is most likely that he's overestimating the probability of a god being deceitful (conjunction fallacy), and that he's forgetting that it's equally impossible to find evidence for such a god (conservation of expected evidence).
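
That cap, spelled out (a sketch; the 0.1% figure is the hypothetical from the comment above):

```python
# The evidence cap implied by the numbers above (illustrative only).
import math

p_erase = 0.001  # hypothetical chance a god hides all evidence of himself
# However old the world looks, the likelihood ratio against such a god
# is bounded by 1/p_erase, so the evidence caps out at:
max_bits = math.log2(1 / p_erase)
print(f"At most {max_bits:.1f} bits of evidence")  # ~10 bits
```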

comment by mwengler · 2012-12-01T17:27:39.838Z · LW(p) · GW(p)

If you are trying to explain the fossil, geological, and astronomical record, you might consider two hypotheses:

1) the details reflect the process that put these in place and current physical constants put the time for that to happen based on that record in the billions of years

2) somebody or something ("God"), for which we have little evidence other than the world and universe themselves, created it all about 4000 years ago and made it look like a billions-of-years project.

In the 2nd case, you take on the additional burden of explaining the existence and physics of God. Explaining why God would want to trick us is probably easier than explaining God's existence and physics in the first place.

I am reminded of Wg's statement "Believing you are in a sim is not distinguishable from believing in an omnipotent god (of any type)." Certainly, a sim would have the property that it would be much younger than it appeared to be, that the "history" built in to it would not be consistent with what actually appeared to have happened. Indeed, a sim seems to mean a reality which appears to be one thing but is actually another quite different thing created by powerful consciousnesses that are hiding their existence from us.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-01T23:22:31.807Z · LW(p) · GW(p)

Also, supposing that God created the world 6000 years ago or whenever and added a detailed past for artistic verisimilitude (a kinder explanation than the idea that the deep past is a way of tempting people into lack of faith), what would the difference be between God imagining the past in such detail and the past actually having happened?

Replies from: mwengler, MugaSofer
comment by mwengler · 2012-12-02T15:59:11.562Z · LW(p) · GW(p)

The difference is that in one situation we are conscious actors learning about our world and in the other we are simulations of meat puppets with experiences that are completely unreliable for indicating something about the world.

Further, if I can be deluded enough to think that dinosaur bones imply dinosaurs and star formation regions imply stars are formed in star formation regions, then God could be deluded too; she could be part of a higher-level simulation, set up to simulate a God that believed it was omnipotent, omniscient, and omnigood.

The difference is that in one case we are finite intelligences in a MUCH larger universe, evolving and adapting, with an intelligence that imperfectly but simply reflects reality. In the other case, we are prisoners in a nightmarish experiment of deception where the rules/physics could be changed at any moment, and in deeply incomprehensible ways, by either our God or God's God.

I suppose the problem of induction means we can never know that the persistence of the laws of physics for thousands of miles and hundreds of years implies they will be the same tomorrow. But induction is not just our best bet, it is really our ONLY bet in predicting the future; in a world where we accept a God, predictability is purely at the whim of the programmer (God).

The only sense in which there is no difference is the sense in which God deceives us perfectly.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-02T17:18:10.698Z · LW(p) · GW(p)

I may have been imagining a God not obviously worse than the one that (hypothetically) is running this universe-- the occasional miracle at most, but with the laws of physics applying almost all the time to almost everything.

Does it make sense to think of people surviving a substantial change in the laws of physics? That's probably close to one of those "can God defy the laws of logic?" questions.

Replies from: mwengler
comment by mwengler · 2012-12-03T16:11:20.889Z · LW(p) · GW(p)

Does it make sense to think of people surviving a substantial change in the laws of physics? That's probably close to one of those "can God defy the laws of logic?" questions.

As I understand both God and anybody running a sim, at any point, with the proper programming skills, they can cause essentially ANYTHING to happen. God could blow up the earth with blue heavenly fire, or convert all the Oxygen to Iron, or change the Iron in our hemoglobin so it no longer grabbed oxygen for delivery to our cells. To the extent that the God in our universe doesn't interfere, I am put in mind of "Black Swans": God is out getting coffee for a few thousand years so we think he is a good guy, but then his coffee break is over, he sees where his sim got 10 billion people with super high tech, and he becomes interested in trimming us back down to biblical proportions. Or who knows what. The point is that if these are not the REAL rules of physics, we are at the whim of a god. And indeed the evidence of what "our" benign (for now) God might do is not promising: He seems in the past to have sent clever and annoying plagues, flooded everything, cast us out of Eden, and he has certainly communicated to us the idea of the end of the world.

It makes sense to think of people surviving a substantial change in the laws of physics if that is what God or the Simulator wants to happen. The essence of being unconstrained by Physics is that it is entirely up to the simulator what happens.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-10T15:41:23.028Z · LW(p) · GW(p)

Being unconstrained by physics isn't the same as being unconstrained by logic.

Replies from: mwengler
comment by mwengler · 2013-01-10T16:03:07.105Z · LW(p) · GW(p)

Certainly if you are in a simulated world, and I am controlling the simulation, I can pull the floor out from you instantly, I can transport you not only to any different location instantly, but change the orientation of your body, even how your body feels to you instantly. Indeed, if I am running the sim of you, I can create sensations in you by directly stimulating parts of the sim of your brain which in your real brain would be purely internal. I could make you feel intensely afraid every time you saw yourself in a mirror, I could make you see whatever I wanted as you looked at your hand, I could trigger face recognizers and have you recognizing your mother as you gazed at a robot or a pussy cat or a dinner plate.

Being less intrusive into your brain, I can make things move in any fashion I choose at any time. I could create a billiards game where balls curved through the air, expanded and contracted, bounced off each other with extra energy, exploded, multiplied on the table, whatever. Your car could speed through the street at 10,000 mph.

I think the only constraint on the sim is temporary: it is what I can make your brain perceive by stimulating its simulated bits wherever I wish. And I can distort your wiring slowly enough that you would have the sensation of continuity, but your simulated nerves would appear to control extra limbs, mechanical objects, whatever. I could grow your intelligence in the sim by expanding the sim of your neocortex; you would feel yourself getting smarter.

I am not constrained to present to you a world which has any internal logic, or which is even consistent from moment to moment. Object permanence requires a continuity editor; it is easier to make a sim which doesn't have object permanence, for example.

Just what constraints do you think logic imposes that I may have been violating in my comment above?

comment by MugaSofer · 2013-01-10T13:22:31.161Z · LW(p) · GW(p)

That would depend on whether God's thoughts contain conscious beings, wouldn't it?

comment by [deleted] · 2012-12-01T14:34:52.418Z · LW(p) · GW(p)

Try this and let me know if it's what you're looking for.

Replies from: aaronsw
comment by aaronsw · 2012-12-01T14:40:54.457Z · LW(p) · GW(p)

That's a good explanation of how to do Solomonoff Induction, but it doesn't really explain why. Why is a Kolmogorov complexity prior better than any other prior?

comment by MugaSofer · 2013-01-10T13:20:11.701Z · LW(p) · GW(p)

Personally, I always argue that if God created the world recently, he specifically designed it to look old; he included light from distant stars, fossils implying evolution, and even created radioactive elements pre-aged. Thus, while technically the Earth may be young, evolution etc. predict what God did with remarkable accuracy, and thus we should use them to make predictions. Furthermore, if God is so determined to deceive us, shouldn't we do as he wants? :P

comment by BerryPick6 · 2012-12-09T05:14:22.257Z · LW(p) · GW(p)

Could someone please break down the exact difference between a 'preference' and a 'bias' for me?

Replies from: Alicorn
comment by Alicorn · 2012-12-09T06:04:32.318Z · LW(p) · GW(p)

Biases cause you to systematically get less of what you want, or be less likely to get what you want. Preferences are the content of "what you want".

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-09T23:46:20.439Z · LW(p) · GW(p)

So is wanting to satisfy other people's preferences a 'preference' or a 'bias'?

Replies from: Alicorn
comment by Alicorn · 2012-12-10T00:25:56.571Z · LW(p) · GW(p)

That's a preference.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-10T01:02:37.921Z · LW(p) · GW(p)

Even though it causes one to systematically get less of what ze wants?

Replies from: Alicorn
comment by Alicorn · 2012-12-10T01:06:52.813Z · LW(p) · GW(p)

It doesn't. If you want other people to get what they want, then when that happens, you get something you want. You have to trade it off against other wants, but everybody has to do that, even people whose only conflict is deciding what to have for dinner.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-10T01:10:48.807Z · LW(p) · GW(p)

Do all preferences work this way, or are there some which don't have to be traded off at all?

These questions really should go in the "stupid questions open thread", but I can't seem to find a recent one. Thanks for taking the time to answer me.

Replies from: Alicorn
comment by Alicorn · 2012-12-10T01:24:01.443Z · LW(p) · GW(p)

No problem. You can only have a preference that doesn't get traded off if it happens never to conflict with anything - for instance, my preference that there be a moon has yet to interact with any other preferences I could act towards fulfilling; the moon just goes on being regardless - or if it's your only preference. Even if you have only one preference, there can be tradeoffs among instrumental subgoals. You might have to decide between a 50% chance of ten units of preference-fulfillment and a guarantee of five units, even if you'd really like to have both at once, even if the units are the only thing you care about.
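
For what it's worth, the arithmetic behind that last example (taking the invented units at face value):

```python
# Expected value of the two options in the example above:
gamble = 0.5 * 10     # 50% chance of ten units -> expected 5.0
sure_thing = 1.0 * 5  # guaranteed five units   -> expected 5.0
print(gamble, sure_thing)
# Equal expected value, so the choice turns on risk attitude, e.g.
# whether preference-fulfillment has diminishing returns for you.
```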

comment by MileyCyrus · 2012-12-15T07:24:17.358Z · LW(p) · GW(p)

Has anyone used one of those pay-for-a-doctor's-opinion websites? How do you know if it's a scam?

comment by Ritalin · 2012-12-14T23:43:10.819Z · LW(p) · GW(p)

Do we have a LessWrong Steam group?

comment by FiftyTwo · 2012-12-12T02:58:53.314Z · LW(p) · GW(p)

Can anyone give me a source/citation for the idea that more intelligent people are better at rationalisation? I've seen it mentioned several times but without a link to experimental evidence.

comment by blashimov · 2012-12-11T16:13:03.522Z · LW(p) · GW(p)

Book recommendation; fiction; AI. While this might be the kind of scifi book to merely annoy experts, I found it enjoyable. It concerns the military use of potentially-FOOMing AIs which are wiped periodically to prevent the foom. Spoiler: vg snvyf. It is also part of a series, in which some overlapping events are told from different perspectives, which I also found enjoyable. http://www.amazon.com/Insidious-Michael-McCloskey/dp/1440192529

comment by [deleted] · 2012-12-09T18:32:10.746Z · LW(p) · GW(p)

Can anyone think of any good sci-fi written about a world in which time travel is a commonplace (like something everyone has access to and uses in accomplishing everyday tasks)? It occurs to me that 1) this might be interesting to try to sort out, and 2) I can't even imagine how it would work.

Replies from: drethelin
comment by drethelin · 2012-12-10T07:55:56.843Z · LW(p) · GW(p)

The closest example I can think of is some of the Company novels by Kage Baker. Time travel is never for everyone, but you do end up with characters that have trivial access to it. On the other hand, there are a lot of limitations.

comment by FiftyTwo · 2012-12-09T04:23:24.467Z · LW(p) · GW(p)

How should one distinguish disagreement on empirical grounds vs disagreement about values? I'm increasingly convinced I'm miscalibrated on this.

Replies from: TheOtherDave, Nisan
comment by TheOtherDave · 2012-12-09T06:30:07.131Z · LW(p) · GW(p)

My usual approach is to keep asking variations on "What would you expect to experience if that were false?"
If that question has an answer, we're still in the realm of the empirical.
If it doesn't, it's possible we've transitioned into the realm of disagreement about values.

comment by Nisan · 2012-12-09T04:55:10.092Z · LW(p) · GW(p)

Can you give an example of a disagreement?

Replies from: FiftyTwo
comment by FiftyTwo · 2012-12-09T05:05:48.284Z · LW(p) · GW(p)

Friend proposes policy A; I think policy A is obviously bad. How do I most efficiently determine whether we have fundamentally different values or believe different facts to be the case?

Replies from: Nisan
comment by Nisan · 2012-12-09T16:57:23.415Z · LW(p) · GW(p)

"I think policy A is bad because it would cause B."
"But policy A wouldn't cause B. Also it would cause C which is good."
"If policy A did turn out to cause B, would A still be good?"

comment by [deleted] · 2012-12-07T22:27:40.714Z · LW(p) · GW(p)

FOR THE EMPEROR! HE IS THE ONLY VIABLE SCHELLING POINT!

Beware the anthropic implications of aliens, the selection pressure behind mutants, the institutional damage of heresy.

-- Sanctus Muflax of Holy Terra

In the grim dark future of our hostile multi-verse, past the dark age of technology when the men of iron were crushed by those who would be less wrong, as the Emperor sits a hundred centuries undying on the golden throne, there is only war.

Coming soon.

Appropriate context. Fanfiction, you know you want it.

Replies from: gwern
comment by gwern · 2012-12-07T23:03:26.310Z · LW(p) · GW(p)

The Emperor is an attractive Singleton compared to the em hell of the Necrons...

Replies from: FiftyTwo
comment by FiftyTwo · 2012-12-09T04:55:38.637Z · LW(p) · GW(p)

The Necrons seem to have failed entirely to self-improve. Ironically, the Tyranids are probably closest to an unfriendly AI despite being organic. They improve themselves in response to problems. They don't love you or hate you; they're just hungry, and you're made of organic matter they'd like to use for something else.

comment by [deleted] · 2012-12-02T12:26:56.900Z · LW(p) · GW(p)

The Worst-Run Big City in the U.S.

A very interesting autopsy of institutional dysfunction related to government and non-profits. I recommend reading the whole thing.

Minus the alleged harassment, city government is filled with Yomi Agunbiades — and they're hardly ever disciplined, let alone fired. When asked, former Board of Supervisors President Aaron Peskin couldn't remember the last time a higher-up in city government was removed for incompetence. "There must have been somebody," he said at last, vainly searching for a name.

Accordingly, millions of taxpayer dollars are wasted on good ideas that fail for stupid reasons, and stupid ideas that fail for good reasons, and hardly anyone is taken to task.

The intrusion of politics into government pushes the city to enter long-term labor contracts it obviously can't afford, and no one is held accountable. A belief that good intentions matter more than results leads to inordinate amounts of government responsibility being shunted to nonprofits whose only documented achievement is to lobby the city for money. Meanwhile, piles of reports on how to remedy these problems go unread. There's no outrage, and nobody is disciplined, so things don't get fixed.

You don't say?

In 2007, the Department of Children, Youth, and Families (DCYF) held a seminar for the nonprofits vying for a piece of $78 million in funding. Grant seekers were told that in the next funding cycle, they would be required — for the first time — to provide quantifiable proof their programs were accomplishing something.

The room exploded with outrage. This wasn't fair. "What if we can bring in a family we've helped?" one nonprofit asked. Another offered: "We can tell you stories about the good work we do!" Not every organization is capable of demonstrating results, a nonprofit CEO complained. He suggested the city's funding process should actually penalize nonprofits able to measure results, so as to put everyone on an even footing. Heads nodded: This was a popular idea.

Reading this I had to bite my hand in frustration.

There are two lessons here. First, many San Francisco nonprofits believe they're entitled to money without having to prove that their programs work. Second, until 2007, the city agreed. Actually, most of the city still agrees. DCYF is the only city department that even attempts to track results. It's the model other departments are told to aspire to.

But Maria Su, DCYF's director, admitted that accountability is something her department still struggles with. It can track "output" — what a nonprofit does, how often, and with how many people — but it can't track "outcomes." It can't demonstrate that these outputs — the very things it pays nonprofits to do — are actually helping anyone.

"Believe me, there is still hostility to the idea that outcomes should be tracked," Su says. "I think we absolutely need to be able to provide that level of information. But it's still a work in progress." In the meantime, the city is spending about $500 million a year on programs that might or might not work.

What the efficient charity movement has done so far looks much more impressive in light of this.

San Francisco historian Charles Fracchia recalls Mayor George Christopher's ploy after his plan to lure the New York Giants to San Francisco hit a snag in the late 1950s. It all hinged on building Candlestick Park, and doing that hinged on buying land in Hunters Point from real estate magnate Charlie Harney for $65,000 an acre. The trouble was, the city had sold that same land to Harney only five years previously for a fraction of the price.

"There was opposition to this from high-minded people in San Francisco," Fracchia says. "So Christopher got his opponents as well as his proponents together, and had 10 cases of scotch delivered up to this meeting at the Pacific Union Club. The scotch was drunk, and everyone came to the conclusion — yes, keep Candlestick Park."

When it comes to mismanaging a city, San Francisco has pulled a 180 — in half a century, we've gone from "city fathers" (if you liked them) or "oligarchs" (if you didn't) operating with limited input from the people to a hyperdemocracy. Overpaying for a Candlestick-like bad land deal today wouldn't be settled during a drunken soirée, but via years of high-decibel public meetings, developers being made to bleed funds to nonprofits of city supervisors' choosing, and any number of bond measures or other trips to the ballot box — all of which, when put together, could conceivably cost as much as the bad land deal itself. Maybe more.

For all its scotch-soaked flaws, the city of yore did not suffer from these problems. While archaic and stridently antidemocratic by today's standards, the system of government cobbled together by a citizens' commission in 1931 largely did what our forebears wanted it to do — mind the store and eliminate rampant corruption.

From 1932 until 1996, much of city government was handled by a powerful chief administrative officer (CAO), appointed to a 10-year term and tasked with overseeing the city's largest departments. The job was to take politics out of city management. (Today's San Francisco is so intensely saturated with politics down to the minutiae that the supervisors' recent appointment of a transit expert to a transit board — and not a union plumber — was seen as a deeply political move and an affront to organized labor.) The CAO was charged with making the city's largest decisions in an apolitical manner; the major portion of the job was keeping the books on the most vital departments and making sure they were running smoothly. In a manner of speaking, the CAO was a living, breathing accountability measure. The city certainly made its share of lousy calls, but the sloth, waste, and dysfunction emblematic of today's city government would have been shocking.

Over time, however, the CAO's purview was replaced by that hyperdemocracy. The reasonable notion that the people of San Francisco should have input into how things are run has turned into the democratic equivalent of death by a thousand cuts; as everybody gets a voice, democracy votes accountability down. When everyone's in charge, no one is. "In the old days, they ran roughshod over opposing views," Fracchia says. "Today, all ya got is opposing views. Pick your poison."

Wait, maybe he has been reading Moldbug?

San Franciscans' appetite for voting is voracious; ours may be the only city that has had to ponder what to name ballot propositions after all the letters of the alphabet have been used up. "It is extraordinary, the number of things we ask our voters to vote on," Harrington confirms. "And somebody must like it, because we keep doing it."

Voters have demonstrated a jarring mixture of selflessness and selfishness. We greenlight billions of dollars in bonds, even when the city's inability to deliver projects on time or within budget has been rendered painfully clear. Yet we also repeatedly enshrine the wishes of single-issue activists and labor unions into law, and that carries ominous long-term consequences. There's a reason in times like the present that organizations such as the Department of Public Health are always targeted for deep cuts, while the notion of downsizing librarians, cops, or firefighters is inconceivable. The latter have gone to the voters to enshrine their standing in the city charter. No one has done so for the DPH — yet.

Special interests "go to the voters and say, 'Do you like libraries? Do you like children?' Well, of course they do," Harrington says. And if voters don't care to think through the fiscal ramifications — well, neither do their elected representatives. "The board likes children, too — so does the mayor. Next year in the budget they'll say, 'Oh, shit! Children get $30 million more — what doesn't?'" If the city ran its finances this way 30 years ago, the former controller notes, the money to respond to the AIDS crisis would have been locked up and unavailable. If such a need arises in the future — well, what then? Today's city can't even pay for the things it wants to pay for.

comment by aaronsw · 2012-12-01T14:36:57.598Z · LW(p) · GW(p)

I agree with EY that collapse interpretations of QM are ridiculous, but are there any arguments against the Bohm interpretation better than the ones canvassed in the SEP article?

http://plato.stanford.edu/entries/qm-bohm/#o

Replies from: Manfred, Vaniver
comment by Manfred · 2012-12-02T00:19:01.278Z · LW(p) · GW(p)

Conflict with special relativity is the most common decisive reason for rejecting Bohmian mechanics - which is oddly not covered in the SEP article. Bohmian mechanics is nonlocal, which in the context of relativity means time travel paradoxes. When you try to make a relativistic version of it, instead of elegant quantum field theory, you get janky bad stuff.

comment by Vaniver · 2012-12-01T15:01:32.929Z · LW(p) · GW(p)

Not that I know of; but in my preference ordering over interpretations, Bohm is beaten only by "shut up and calculate," so I may not be the most informed source.

comment by mwengler · 2012-12-01T15:02:15.244Z · LW(p) · GW(p)

The Many Worlds Interpretation (MWI) is favored by EY as having a shorter message length than the others.

However, the short-message version of MWI does not include a theory as to how my particular stream of consciousness winds up in one branch or another. So Copenhagen (wave function collapse) is a theory of what I will experience; MWI is not.

Further, I have always thought MWI was motivated by the ideas behind Einstein's "God does not play dice with the universe." That is, a non-deterministic theory is no theory at all. MWI, then, would be a theory without wave function collapse, and so a theory with no randomness. But of course, it is NOT a theory of what a particular observer will experience. To go from MWI to a theory of what I will experience, it seems I still need a random function. I suspect some will answer, "no, there is one of you in every branch, so MWI predicts you will experience it all, but in separate non-interacting branches. No randomness." To which I would reply: we still need a theory that accounts for my subjective experiences, for how this me, the one I actually wound up as, "chose" between the various branches. To me it would seem essentially theological to say that, because some me I can't see, hear, or interact with in any way experiences all the other possibilities, there is no randomness in the universe. It sure seems random that I wound up experiencing this particular version, in the absence of a non-random theory of that.

Please take this as an invitation to educate me or discuss the conclusions I reach. I am interested in sorting out just what MWI really gains you when leaving Copenhagen, and as competing theories of my own personal experience, they both seem to have, essentially, a random choosing event at their core: one calls it wave function collapse, the other one tries not to talk about it.

Replies from: David_Gerard, Nominull, endoself, khafra
comment by David_Gerard · 2012-12-01T15:06:46.270Z · LW(p) · GW(p)

how my particular stream of consciousness winds up in one branch or another

This assumes there is such a thing as a particular stream of consciousness, rather than your brain retconning a stream of consciousness to you when you bother to ask it (as is what appears to happen).

Replies from: mwengler
comment by mwengler · 2012-12-01T16:52:42.087Z · LW(p) · GW(p)

This assumes there is such a thing as a particular stream of consciousness, rather than your brain retconning a stream of consciousness to you when you bother to ask it (as is what appears to happen).

Yes, it does assume that. However, we have plenty of evidence for this hypothesis.

My memory, and the memory of humans and higher mammals alike, has tremendous predictive power. For example, I remember a particular National Lampoon magazine cartoon, from about 40 years ago, with a topless boxer chanting "I am the queen of england, I like to sing and dance, and if you don't believe me, I will punch you in the pants." I recently saw a DVD purporting to have every issue of National Lampoon recorded digitally on it; I bought it and, sure enough, the cartoon was there.

It seems clear to me that if conscious memory is predictive of future physical experience, it is drawn from something local to the Everett Branch my consciousness is in.

Let me design an experiment to test this. Set up a Schrodinger's cat experiment, and include a time display which will show the time at which the cat was killed, if in fact the cat is killed. Once I open the lid of the box and find the cat, I look at the time it was killed, record the time on a piece of paper which I put in a box on the table next to me, and then close the lid. I reopen it many subsequent times, and each time I record the time on a piece of paper and put it in the box, or I record "N/A" on the paper if the cat is still alive.

My prediction is that every time I open the box with the memory of seeing the dead cat, I will still see the dead cat. Further, I predict that the time on the decay timer will be the same every time I reopen the box. This, in my opinion, proves that memory sticks with the branch my consciousness is in. Even if we only saw the same time 99 times out of 100, it would still prove that memory sticks, but not perfectly, with the branch my consciousness is in, which would then be a fact that any physics explaining what I experience of the world would have to explain.

Having not explicitly done this experiment, I cannot claim for sure that we will conclude my consciousness is "collapsing" on an Everett Branch just as in Copenhagen interpretation it was the wave function that collapsed. But I will bet $100 against $10,000 if anybody wants to do the experiment. The terms of the bet are if you have a set-up that shows the counter result, that consciousness apparently dredges up memories of different nearby Everett branches by seeing different times on the timer, then I will come to where you are with your set-up and if you can show me it working for both you and I you get the $10,000, otherwise I get the $100 to defray my travel expenses. I'll reserve the right to pass on checking your set-up out if travel costs would be over $600, but for me that covers a good fraction of the world (I am in Sandy Eggo in this Everett Branch).

Fortunately for you cat lovers, the experiment can be done without the cat. You simply need to measure the time of radioactive decay; killing a cat with cyanide on detection of the decay is not necessary to win or lose the bet (or to prove the point).

Note the box of papers with recorded times in it can also be used as evidence. If I open that box and all the papers have the same time written on them, and that is the time I remember, then I take this as strong evidence that my memory has been returning memories from only the current Everett branch. If my memory were unhooked from this Everett branch, then one would expect the physical evidence of what I had previously remembered (which is in this Everett branch) to include times from other Everett branches. If it does not, then I think we can conclude that human consciousness, including its memories, is branch-local: that a "collapse" occurs in MWI when we attempt to use it to predict what we will experience in this universe.
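
For concreteness, the prediction can be written as a trivial simulation (my sketch; it simply hard-codes the branch-local claim rather than simulating any quantum mechanics):

```python
# Toy encoding of the repeated-opening protocol described above.
import random

# One branch's decay outcome, fixed at the first observation:
decay_time = round(random.uniform(0, 60), 2) if random.random() < 0.5 else None

slips = []
for _ in range(100):  # reopen the box 100 times, writing a slip each time
    slips.append("N/A" if decay_time is None else str(decay_time))

# The prediction: every slip of paper in the box shows the same value.
assert len(set(slips)) == 1
print(slips[0])
```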

And indeed, I think predicting what we will experience is the hallmark of all good theories of how the universe works. We may say we want to predict "what will happen," but I believe by this we mean "what I will see happen."

Replies from: gjm, David_Gerard, aleksiL
comment by gjm · 2012-12-01T22:17:46.478Z · LW(p) · GW(p)

It seems clear to me that if conscious memory is predictive of future physical experience, it is drawn from something local to the Everett Branch my consciousness is in.

Whatever makes you think that your consciousness is in only one Everett branch? (And what do you think is happening on all those other branches that look so much like this one but that lack your consciousness?)

Surely the right account of this, conditional on MWI, is not that your consciousness is on a particular branch but that each branch has its own version of your consciousness, and each branch has its own version of your memory, and each branch has its own version of what actually happened, and -- not at all by coincidence -- these match up with one another.

What happens to your consciousness and your memories is much more like splitting than like collapse.

(It sounds as if you think that this ought to mean that you'd have conscious memories in one branch from other branches, but I can't see why. Am I missing something?)

Replies from: mwengler
comment by mwengler · 2012-12-02T15:49:33.772Z · LW(p) · GW(p)

(It sounds as if you think that this ought to mean that you'd have conscious memories in one branch from other branches, but I can't see why. Am I missing something?)

I misunderstood what David Gerard was suggesting and went off on a long riff, proposing an experiment to address something he wasn't saying.

The tricky part for me is the extremely clear conscious experience I have of being on only one branch. It is now clearer to me that there are other consciousnesses NEARLY identical to mine on other nearby Everett branches, each presumably having the same strong awareness that it is on only one Everett branch, with no direct evidence of any other branch. MWI truly seems to be an interpretation, not a theory, with apparently no Popperian experiment that could ever distinguish it from wave function collapse theories.

Replies from: Nisan
comment by Nisan · 2012-12-09T05:10:24.129Z · LW(p) · GW(p)

You can upload a person into a quantum computer and do Schrödinger's cat experiments on them. If you have a computational theory of mind, this should falsify at least some informal collapse theories.

comment by David_Gerard · 2012-12-01T19:29:11.102Z · LW(p) · GW(p)

You could have that predictive power without actually having a continuous stream of awareness. Consider sleepwalkers who can do things and have conversations (if not very good ones) with no conscious awareness. You're using philosophy to object to observed reality.

Replies from: mwengler
comment by mwengler · 2012-12-02T15:44:19.376Z · LW(p) · GW(p)

OK, I misunderstood what you were implying in your previous post. So there are multiple streams of consciousness, one on each Everett branch, and the memories returned on each Everett branch are the ones in the (conscious + unconscious) brain that exists on that branch.

So I experience my mind always returning memories consistent with my branch, even as other branch-mwenglers experience memories consistent with their branches and, like me, use that as evidence for their uniqueness.

So it really is an interpretation, predicting nothing different in experience from Copenhagen.

Replies from: endoself
comment by aleksiL · 2012-12-01T17:41:52.474Z · LW(p) · GW(p)

I haven't seen a single precise definition of what constitutes an "observation" that's supposed to collapse the wavefunction in the Copenhagen interpretation. Decoherence, OTOH, seems to perfectly describe the observed effects, including the consistency of macro-scale history.

This in my opinion proves that memory sticks with the branch my consciousness is in.

Actually it just proves that memory sticks with the branch it's consistent with. For all we know, our consciousnesses are flitting from branch to branch all the time and we just don't remember because the memories stay put.

We may say we want to predict "what will happen," but I believe by this we mean "what I will see happen."

Yeah, settling these kinds of questions would be much easier if we weren't limited to the data that manages to reach our senses.

In MWI the definition of "I" is not quite straightforward: the constant branching of the wavefunction creates multiple versions of everyone inside it, which produces indexical uncertainty that we experience as randomness.

comment by Nominull · 2012-12-02T07:24:20.737Z · LW(p) · GW(p)

Your mistake lies in using the word "I" like it means something. There is some mwengler-stuff; it has some properties; then there is a split, and the mwengler-stuff is in two separate chunks. Both experience their "stream of consciousness" showing up in their particular branch, and both wonder how it is that they ended up in the one branch rather than the other.

comment by endoself · 2012-12-02T01:14:40.988Z · LW(p) · GW(p)

So Copenhagen (wave function collapse) is a theory of what I will experience, MWI is not.

Copenhagen is not a theory of what you will experience either; there are multiple minds even in Copenhagen's single world.

Replies from: mwengler, mwengler, mwengler
comment by mwengler · 2012-12-03T20:06:41.839Z · LW(p) · GW(p)

Copenhagen is an interpretation where I have one mind, you have one mind, and each of us has one thread of experience. There are numerous places along that thread where the physics governing its time evolution is not deterministic, where a random choice has been made.

MWI is an interpretation where I have many minds, as opposed to the one mind I have in Copenhagen. In the MWI interpretation, each of my minds exists in a separate and non-interacting universe from all the other versions of my mind. If I wonder as I type this why this version of me is the one in THIS branch, MWI has no theory for that. MWI tries to make the question seem less interesting by pointing out that there are lots of versions of me asking the same question, as if blurring the me-ness of the me in this branch together with the me-ness of all these other similar but not identical me's in other branches rendered the question meaningless.

But since the two interpretations have no observable experimental differences, MWI and Copenhagen presumably involve the same number of random events dictating how things unfold. In MWI, the randomness is isolated to just one of many me's, which of course is still quite unique and interesting to me; but that is not as bad as Copenhagen, where the entire universe gets changed by each random wave function collapse.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-12-03T23:57:06.755Z · LW(p) · GW(p)

In the MWI interpretation, each of my minds exists in a separate and non-interacting universe from all the other versions of my mind. If I wonder as I type this why this version of me is the one in THIS branch, MWI has no theory for that.

How is this different to wondering why you are THIS mind in THIS branch rather than THIS OTHER mind in THIS branch? Why you are you rather than someone else?

comment by mwengler · 2012-12-03T15:52:52.182Z · LW(p) · GW(p)

Do I have multiple minds even in Copenhagen? And by "I" I mean the flesh-and-blood me.

Replies from: endoself
comment by endoself · 2012-12-03T19:19:07.992Z · LW(p) · GW(p)

I mean that there are other minds in the world, in the sense of other people. Neither Copenhagen nor many worlds chooses a preferred mind, but people don't notice it as strongly in Copenhagen since they're already used to the idea of other conscious beings.

comment by mwengler · 2012-12-02T16:01:28.568Z · LW(p) · GW(p)

Copenhagen is not a theory of what you will experience either; there are multiple minds even in Copenhagen's single world.

If I understand correctly, Copenhagen has only one mind for me, and the reality experienced by that mind is shaped by fundamentally random wave function collapses. MWI creates a new mind for me at each split, so there are many minds for me, one in each Everett branch. Did I miss something?

Replies from: endoself, Viliam_Bur
comment by endoself · 2012-12-02T19:01:18.350Z · LW(p) · GW(p)

I'm not sure what you're getting at here. Even under Copenhagen, one can duplicate an upload as it's running.

comment by Viliam_Bur · 2012-12-03T10:36:25.630Z · LW(p) · GW(p)

Let's suppose that your mind is a function of your brain, and that your brain is composed of atoms.

In MWI there are many branches with many configurations of atoms; that means many branches of your brain, which means many branches of your mind. In every branch your mind is entangled with the other atoms of the same branch. So, for example, in the branch with the atoms of a dead cat, your mind is in the "poor kitty" state, and in the branch with the atoms of a live cat, your mind is in the "kitty, you are so lucky, I promise I will never try this cruel experiment on you again" state.

In Copenhagen, on a tiny time scale there are many branches of atoms, but it is believed that on a larger scale there are not. At some unspecified moment there is supposed to be a collapse, where many branches of atoms become a single branch again (through a process of random selection). Nobody knows when this happens. On large scales, we are not able to run a precise enough experiment to tell either way. On smaller scales, where we can run the experiment, the result has always been that the collapse had not occurred yet. So after the collapse there is only one branch, and therefore one mind. Before the collapse... I would say that there is a superposition of minds (because there is a superposition of brains, because there is a superposition of the atoms the brain is composed of), which should become one mind again at the moment of the collapse. But it is believed that this superposition exists only for a tiny fraction of a second, so it's not as if the different minds in the superposition have enough time to think significantly different thoughts. Neurons work at a limited speed, and sending a signal from one neuron to another requires dozens of chemical reactions.
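For what it's worth, here is a rough back-of-envelope on those timescales. The decoherence number is Max Tegmark's much-debated estimate (Tegmark 2000, "Importance of quantum decoherence in brain processes"), not settled physics, and the neural figure is a textbook order of magnitude.

```python
# Orders of magnitude only; both numbers are rough assumptions (see above).
decoherence_time_s = 1e-13  # Tegmark's estimate: ~1e-13 down to 1e-20 s
neural_signal_s = 1e-3      # typical neuron-to-neuron signaling timescale

print(f"ratio: {neural_signal_s / decoherence_time_s:.0e}")  # 1e+10
# Any superposition of brain states would decohere some ten billion times
# faster than neurons signal, supporting the point that the superposed
# minds never get to think different thoughts.
```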

comment by khafra · 2012-12-03T17:59:13.097Z · LW(p) · GW(p)

Copenhagen:

  1. You bounce a photon off a half-silvered mirror and don't look at the results: no universe split.

  2. You bounce a photon off a half-silvered mirror and look at the results: Bam! Split universe.

MWI:

  1. You bounce a photon off a half-silvered mirror and don't look at the results. Since the physical state of your brain is not causally dependent on the destination of the photon, you don't branch into two mwenglers in any noticeable way.

  2. You bounce a photon off a half-silvered mirror and look at the results. Since you've made the state of your brain causally dependent on an event with quantum randomness, you branch into two mwenglers which differ on a macroscopic level: two persons who happen to share a causal history up to the moment of looking at the experimental outcome. (A toy sketch of the difference follows below.)
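Here is a toy state-vector sketch of that difference. It's a cartoon, of course: the "brain" is a single two-level system, and the amplitudes are illustrative, not a simulation of real optics.

```python
import numpy as np

# Photon basis: |reflected>, |transmitted>. "Brain" basis: |saw-refl>, |saw-trans>.

photon = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition after the mirror

# Case 1: nobody looks. The brain stays uncorrelated with the photon,
# so the joint state is a simple product state (no branching of you).
brain_blank = np.array([1.0, 0.0])
joint_unobserved = np.kron(photon, brain_blank)

# Case 2: you look, correlating your brain with the photon's path.
# Amplitudes in the basis |refl,saw-refl>, |refl,saw-trans>,
# |trans,saw-refl>, |trans,saw-trans>:
joint_observed = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # entangled

# A two-level/two-level state is unentangled iff its 2x2 amplitude
# matrix has rank 1; rank 2 means two macroscopically distinct branches.
print(np.linalg.matrix_rank(joint_unobserved.reshape(2, 2)))  # 1: one mwengler
print(np.linalg.matrix_rank(joint_observed.reshape(2, 2)))    # 2: two branches
```

Rank 2 is what "branching into two mwenglers" cashes out to; decoherence then makes the two components effectively non-interacting.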

Replies from: mwengler
comment by mwengler · 2012-12-03T19:04:15.371Z · LW(p) · GW(p)

Copenhagen:

... You bounce a photon off a half-silvered mirror and look at the results: Bam! Split universe.

The Copenhagen interpretation never splits universes. Instead, you have a wave function collapse in the one and only universe.

You bounce a photon off a half-silvered mirror and don't look at the results. Since the physical state of your brain is not causally dependent on the destination of the photon, you don't branch into two mwenglers in any noticeable way.

In MWI, you NEVER branch into two anythings in a "noticeable" way. All the myriads of branches have no interactions; there is nothing noticeable about any of the other branches from within the branch we are in. If there were something noticeable about other branches, then an experiment could be designed to test the hypothesis of branching, and we would start to gather evidence for or against it. Until such a hypothesis is created and tested and shows evidence for branches, MWI is an interpretation, and not a theory.

So why does it even matter? Thinking it through, I realize that an interpretation is in some way a pre-theory. As we sit with the idea of MWI, maybe one of us develops hypotheses about experiments which might show evidence for the other branches, or not. Without the interpretation of MWI, that hypothetical progress might never be available.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-04T15:20:31.493Z · LW(p) · GW(p)

All the myriads of branches have no interactions

They do interact. This is how quantum physics was discovered.

The problem is that the magnitude of the interaction gets very small very quickly, so after a few microseconds it becomes technically impossible to measure. This is what allows people to say "yeah, for a few microseconds there is something mathematically equivalent to branches, but then it disappears completely", and you can't experimentally prove them wrong.

One side believes that the interaction keeps getting smaller but never reaches exactly zero. The other side believes that the interaction gets smaller and then, at some unspecified moment, all branches except one disappear. Experimental data say that the interaction gets smaller until it becomes too small to see... and then, well, it is too small to see what happens. So essentially the two sides disagree about who has the burden of proof, i.e. about the exact meaning of "fewest assumptions" in Occam's razor. One side says that "the extra branches disappearing" is the extra assumption. The other side says that "the extra branches not disappearing, even when their interaction becomes too small to measure" is the extra assumption.

More precisely, the magnitude of the interaction depends on how much the particles in the two branches differ. Therefore the branches we have measurable interaction with are those almost identical to our own. The interaction is largest when the two branches are exactly alike except for one particle. This is the famous double-slit experiment -- the two branches of the universe in which the particle goes through different slits interact with each other. The branches are there. The question is not whether multiple branches exist, but whether they disappear later, when their interaction becomes very small.
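As a toy illustration of that scaling (with invented numbers): if the two branches differ slightly in each of k environment particles, the interference cross-term is suppressed roughly by the product of the per-particle overlaps.

```python
# Toy model: per-particle overlap c between the two branches' environments;
# interference visibility then scales like c**k for k differing particles.
# c = 0.9 is invented for illustration; real overlaps depend on the physics.
c = 0.9
for k in [0, 1, 10, 100, 1000]:
    print(f"{k:5d} differing particles -> visibility ~ {c**k:.3e}")

# k = 0 or 1: branches nearly identical, interference easy to see (double slit).
# k = 1000: visibility ~ 1.7e-46, never exactly zero but hopeless to measure,
# which is exactly where the burden-of-proof dispute above lives.
```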

maybe one of us develops hypotheses about experiments which might show evidence for the other branches, or not.

How do you prove experimentally that the other branches do not disappear, especially if your opponents refuse to specify when they should disappear? If you run an experiment proving that "after N seconds, the branches still exist", your opponents can say "yeah, but maybe after N+1 seconds they disappear". Repeat for any value of N.

comment by DefinitelyNotTenoke · 2012-12-13T11:09:00.718Z · LW(p) · GW(p)

You guys suck

Replies from: DefinitelyNotTenoke
comment by DefinitelyNotTenoke · 2012-12-13T11:10:29.645Z · LW(p) · GW(p)

Nah, I was kidding, I love you guys.