Open Thread, August 2010-- part 2
post by NancyLebovitz · 2010-08-09T23:18:21.789Z · LW · GW · Legacy · 373 comments
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments sorted by top scores.
comment by jimrandomh · 2010-08-29T14:23:31.776Z · LW(p) · GW(p)
My experiments with nootropics continue. A few days ago, I started taking sulbutiamine (350mg/day), a synthetic analog of thiamine which differs in that it crosses the blood-brain barrier more readily. The effects were immediate, positive, and extremely dramatic - on an entirely different order of magnitude than I expected, and probably the largest single improvement to my subjective well-being I have ever experienced. A feeling of mental fatigue and not wanting to do stuff - a feeling that leads to spending lots of time on blogs, playing video games and otherwise killing time suboptimally (though not necessarily the only such feeling) - just up and vanished overnight. This was something that I had identified as a major problem, and believed to be purely psychological in nature, but was, in fact, entirely biochemical. On the first day I took sulbutiamine, I felt significantly better, worked three hours longer than normal, and went to the gym (which would previously have been entirely out of character for me).
That said, I do have a concrete reason to believe that this effect is atypical. Specifically, I believe I was deficient in thiamine; I believe this because I'm a type 1 diabetic, and according to the research reported in this article, that means my body uses up thiamine at a greatly increased rate; I was only getting the RDA of thiamine from a standard multivitamin; and the problems I had seem to match the symptoms of minor thiamine deficiency pretty well.
That said, searching the internet finds people without thiamine deficiencies who also benefited from sulbutiamine, albeit to a lesser degree. And trying sulbutiamine is safe (no credible reports of adverse effects ever) and cheap ($17 for an 85-day supply as bulk powder), so I recommend it.
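As a rough sanity check on those numbers (a sketch; the dose and price come from the comments above, nothing else is assumed):

```python
# Back-of-the-envelope check: 350 mg/day at $17 for an 85-day supply.
dose_mg_per_day = 350
days = 85
price_usd = 17.0

total_grams = dose_mg_per_day * days / 1000   # powder needed for the full supply
cost_per_day = price_usd / days               # daily cost

print(total_grams)              # 29.75 (i.e. roughly a 30 g tub)
print(round(cost_per_day, 2))   # 0.2  (about 20 cents a day)
```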
Replies from: gwern, pjeby, ata, gwern, gwern, johannes-c-mayer, wedrifid
↑ comment by gwern · 2010-09-01T02:32:39.887Z · LW(p) · GW(p)
Oh, in other news, the FDA is apparently going after piracetam; smartpowders.com reports that it's been ordered to cease selling piracetam and is frantically trying to get rid of its stock. See
- http://www.imminst.org/forum/topic/43512-fda-says-no-more-piracetam/
- http://www.reddit.com/r/Nootropics/comments/d7wcm/fda_set_to_ban_piracetam_claim_it_is_illegally/
↑ comment by wedrifid · 2010-09-02T00:04:11.828Z · LW(p) · GW(p)
Oh, in other news, the FDA is apparently going after piracetam; smartpowders.com reports that it's been ordered to cease selling piracetam and is frantically trying to get rid of its stock. See
That is infuriating! The fools!
Replies from: None, gwern
↑ comment by SilasBarta · 2010-09-01T02:55:21.048Z · LW(p) · GW(p)
Yikes! Hits close to home for me! I had actually ordered bulk piracetam about a week ago, in an order with two other supplements. When the shipment arrived, the piracetam wasn't in it, and it had a note saying it was out of stock and I wouldn't be charged for it, but I'd be informed when it was available again.
I thought it was strange at first, since they wouldn't have taken the order if they weren't able to reserve a unit for my order. (This isn't fractional reserve banking, folks!) But that explanation makes a lot more sense. If only I had placed the order a few days earlier...
Replies from: gwern
↑ comment by gwern · 2010-09-01T13:00:10.474Z · LW(p) · GW(p)
(This isn't fractional reserve banking, folks!)
Just-in-time techniques always struck me as being very close to fractional reserve banking, actually...
Anyway, elsewhere in that Reddit page, users mention that other nootropics seem to be getting harder to find lately like choline and huperzine-a. (I tried huperzine-a and wasn't impressed, but I kind of need the choline to go with any piracetam.)
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-09-01T14:00:02.842Z · LW(p) · GW(p)
Strange. My local grocery store with a health food/supplement aisle just started stocking a choline/inositol blend (saw it for the first time two days ago). I previously got the choline in a different supplement section, from a product called LipoTrim.
↑ comment by pjeby · 2010-08-30T15:53:03.783Z · LW(p) · GW(p)
($17 for an 85-day supply as bulk powder)
I noticed that a lot of the reviews were complaining about the taste - were you using it in its raw form, or putting it into capsules?
Replies from: jimrandomh, gwern, otto renger
↑ comment by jimrandomh · 2010-08-30T18:07:29.201Z · LW(p) · GW(p)
I put it in capsules. Besides getting around the taste, it's also much more convenient that way; rather than having to measure and prepare some every day, I can sit down and prepare a month's worth of capsules in 30 minutes. The more different supplements you take, the more important it is to do it this way.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2010-09-01T23:34:14.541Z · LW(p) · GW(p)
I can sit down and prepare a month's worth of capsules in 30 minutes.
Have you considered buying or selling capsules? It seems unlikely that this is something you should do yourself, but only for yourself.
Also, before you said that you filled 10 capsules per minute. Do you take 10 capsules per day? Do you mix piracetam and choline in a single capsule?
Replies from: jimrandomh, wedrifid
↑ comment by jimrandomh · 2010-09-02T01:05:55.720Z · LW(p) · GW(p)
I've considered buying capsules, but decided to get powder instead because it's cheaper and allows more flexibility if I change dosage or decide to pre-mix stuff. I couldn't sell the capsules I make because I don't measure them precisely enough (they vary by +/-10% or so). I currently take 5 capsules a day - two of piracetam, two of choline citrate, and one of sulbutiamine.
Putting together capsules sounds hard, but it's actually quite easy. You get empty gel caps, which come as two unequally sized pieces that fit together tightly enough to stay in place but loosely enough to pull apart. Take the pieces of the capsule apart, pack some powder into the larger piece, put them together, and drop it on a scale. If it's within acceptable range, drop it in the 'done' container; otherwise open it back up and add or remove some. After a dozen or so, you get the hang of it and can hit a 10% tolerance pretty consistently on the first try. Wear latex gloves so the gel caps won't stick to your fingers and you don't get hair and sweat in the powder tub.
(Edit: the discrepancy between my saying a month's worth of capsules in 30 minutes, and a rate of 10/minute, is due to setup and cleanup time; and neither of these numbers was precise to more than a factor of 2.)
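The weigh-and-check loop described above can be sketched like this (a toy illustration; only the 10% tolerance figure comes from the comment, the masses are hypothetical):

```python
def within_tolerance(measured_mg: float, target_mg: float, tol: float = 0.10) -> bool:
    """Accept a filled capsule if its mass is within +/- tol of the target."""
    return abs(measured_mg - target_mg) <= tol * target_mg

# Hypothetical 350 mg target capsule:
print(within_tolerance(330, 350))  # True  - within 10%, drop it in the 'done' container
print(within_tolerance(300, 350))  # False - open it back up and add some powder
```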
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2010-09-02T02:09:16.400Z · LW(p) · GW(p)
I couldn't sell the capsules I make because I don't measure them precisely enough (they vary by +/-10% or so).
If it's good enough for you, it may be good enough for customers; it's just a different niche. It may also be an illegal niche.
ETA: flexibility is a good reason.
↑ comment by wedrifid · 2010-09-02T00:02:04.678Z · LW(p) · GW(p)
Also, before you said that you filled 10 capsules per minute. Do you take 10 capsules per day? Do you mix piracetam and choline in a single capsule?
I'm not speaking for Jim but I note that I find mixing the racetams with the choline source convenient. It allows for simply adjusting the dose while keeping the same ratio.
↑ comment by otto renger · 2019-01-03T05:30:53.556Z · LW(p) · GW(p)
Homemade capsules are a good idea; I used to buy capsules and filler machines from a Chinese supplier.
↑ comment by ata · 2010-08-30T17:17:21.497Z · LW(p) · GW(p)
Very interesting — thanks for the information. I'm trying piracetam right now, but this also sounds like something I'd like to try. I have similar problems with mental fatigue and low motivation... unfortunately, I don't yet have even a vague sense of the biochemical basis for my issues (my symptoms match chronic fatigue, but it seems like its causal structure is not well-understood anyway). But it's worth a try, I suppose.
Are you taking this and the piracetam at the same time, or did you stop the piracetam to try this?
Replies from: jimrandomh, gwern
↑ comment by jimrandomh · 2010-08-30T17:58:04.654Z · LW(p) · GW(p)
Both at the same time. (I have no particular reason to think they interact, I'm just following the strategy of changing only one thing at a time.) I hope sulbutiamine works for you; but if it doesn't, don't give up, it just means the biochemical issue is somewhere else, and there are many more safe things to try.
↑ comment by gwern · 2010-09-22T15:12:53.060Z · LW(p) · GW(p)
While re-reading the reports here for summary in my personal drugs file, it suddenly occurred to me that your experience with sulbutiamine might be on the level of pica & iron deficiency, and so worth mentioning or linking as a comment in http://lesswrong.com/lw/15w/experiential_pica/ .
↑ comment by gwern · 2010-08-30T17:52:34.413Z · LW(p) · GW(p)
That's quite interesting. I recently finished up my own 30g supply of sulbutiamine, and while I thought that it does work roughly on the level of piracetam without choline supplementation, I wasn't hugely impressed. But I am not diabetic nor do I match any of the descriptions of beriberi in Wikipedia.
(Didn't last me 85 days, however. 200mg strikes me as a pretty small dose.)
↑ comment by Johannes C. Mayer (johannes-c-mayer) · 2023-11-02T05:22:39.131Z · LW(p) · GW(p)
It seems that Benfotiamine (none of the other thiamines are over the counter in Germany) had a similar effect on me. I feel a lot better now, whereas before I would feel constantly tired. Before, I felt like I could not do anything most of the time without taking stimulants. Now my default is probably more than 50% towards what I felt like on a medium stimulant dose. I did try a lot of interventions in the same time period, so I am not sure how much Benfotiamine contributed on its own, but I expect it contributed between 25% and 65% of the positive effects. I also figured out that I am borderline diabetic, which is evidence in favor of Benfotiamine being very significant.
comment by Risto_Saarelma · 2010-08-10T07:18:26.009Z · LW(p) · GW(p)
Bridging the Chasm between Two Cultures: A former New Age author writes about slowly coming to realize New Age is mostly bunk and that the skeptic community actually might have a good idea about keeping people from messing themselves up. Also about how hard it is to open a genuine dialogue with the New Age culture, which has set up pretty formidable defenses to perpetuate itself.
Replies from: nhamann, rwallace
↑ comment by nhamann · 2010-08-10T07:57:59.066Z · LW(p) · GW(p)
Hah, was just coming here to post this. This article sort of meanders, but it's definitely worth skimming at least for the following two paragraphs:
One of the biggest falsehoods I've encountered is that skeptics can't tolerate mystery, while New Age people can. This is completely wrong, because it is actually the people in my culture who can't handle mystery—not even a tiny bit of it. Everything in my New Age culture comes complete with an answer, a reason, and a source. Every action, emotion, health symptom, dream, accident, birth, death, or idea here has a direct link to the influence of the stars, chi, past lives, ancestors, energy fields, interdimensional beings, enneagrams, devas, fairies, spirit guides, angels, aliens, karma, God, or the Goddess.
We love to say that we embrace mystery in the New Age culture, but that’s a cultural conceit and it’s utterly wrong. In actual fact, we have no tolerance whatsoever for mystery. Everything from the smallest individual action to the largest movements in the evolution of the planet has a specific metaphysical or mystical cause. In my opinion, this incapacity to tolerate mystery is a direct result of my culture’s disavowal of the intellect. One of the most frightening things about attaining the capacity to think skeptically and critically is that so many things don't have clear answers. Critical thinkers and skeptics don't create answers just to manage their anxiety.
↑ comment by rwallace · 2010-08-10T13:04:55.533Z · LW(p) · GW(p)
Excellent article, thanks for the link! Let's keep in mind that she also wrote about how inflammatory and combative language is counterproductive, and the need to communicate with people in ways they have some chance of understanding.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-08-10T15:52:07.280Z · LW(p) · GW(p)
What she said wasn't that simple-- she also talks about trying to get her ideas across while being completely inoffensive, and having them not noticed at all. When we're talking about a call to change deeply held premises, getting some chance of being understood is quite a hard problem.
comment by Paul Crowley (ciphergoth) · 2010-08-17T12:01:54.289Z · LW(p) · GW(p)
Can't decide? With the Universe Splitter iPhone app, you can do both! The app queries a random number generator in Switzerland which releases a single photon into a half-silvered mirror, meaning that according to MWI each outcome is seen in one branch of the forking Universe. I particularly love the chart of your forking decisions so far.
Replies from: cousin_it, andreas
↑ comment by cousin_it · 2010-08-25T21:20:36.673Z · LW(p) · GW(p)
With an endorsement from E8 surfer dude!
↑ comment by andreas · 2010-08-25T23:50:13.271Z · LW(p) · GW(p)
If all you want is single bits from a quantum random number generator, you can use this script.
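The linked script isn't reproduced here, but a minimal sketch of the idea might look like this. The ANU QRNG HTTP endpoint and its JSON response shape are assumptions, not taken from the comment; only the bit-extraction step is checked offline:

```python
import json
import urllib.request

# Assumed public endpoint (not from the comment); returns {"data": [<uint8>], ...}
QRNG_URL = "https://qrng.anu.edu.au/API/jsonI.php?length=1&type=uint8"

def byte_to_bit(byte: int) -> int:
    """Reduce a random byte to a single bit via its low-order bit."""
    return byte & 1

def quantum_bit() -> int:
    """Fetch one byte from the QRNG service and return a single bit."""
    with urllib.request.urlopen(QRNG_URL) as resp:
        payload = json.load(resp)
    return byte_to_bit(payload["data"][0])

# The reduction step, checked offline:
print(byte_to_bit(200))  # 0
print(byte_to_bit(201))  # 1
```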
comment by [deleted] · 2010-08-15T19:26:51.711Z · LW(p) · GW(p)
I'm looking for something that I hope exists:
Some kind of internet forum that caters to the same crowd as LW (scientifically literate, interested in technology, roughly atheist or rationalist) but is just a place to chat about a variety of topics. I like the crowd here but sometimes it would be nice to talk more casually about stuff other than the stated purpose of this blog.
Any options?
Replies from: Document, simplicio, nhamann, Perplexed
↑ comment by Document · 2010-08-16T05:19:01.329Z · LW(p) · GW(p)
Stardestroyer.net fits that description somewhat, for values of "casually" that allow for copious swearing punctuating most disagreements. I haven't posted there, but Kaj Sotala posts as Xuenay (~~apologies~~ no apologies for stalking).
Examples of threads on LW-related topics:
- Logic Fallacy reference thread Mk. 2
- The Singularity in Sci-Fi (haven't read)
- Robots Learn How to Lie
- Mini-FAQ on Artificial Intelligence (the thread that properly introduced me to SIAI)
- This derailment about Three Worlds Collide
- Rationality (short)
- And the Winner for Most Probable God is... Azathoth?
- Transhumanism: is it viable?
(Edited after first upvote; later edited again to add a link.)
↑ comment by nhamann · 2010-08-17T05:13:46.118Z · LW(p) · GW(p)
I honestly keep hoping that subreddits will be implemented here sometime soon. Yes, "off-topic" discussion technically doesn't fit the stated purpose of the site, but the alternative, where LWers who want off-topic discussion have to migrate to some other forum, seems ridiculous to me.
↑ comment by Perplexed · 2010-08-16T02:23:23.904Z · LW(p) · GW(p)
Pharyngula. More atheist than rational, and more biology than technology, but it is definitely a community. It is a blog, but has an interesting feature called the endless thread, which is a kind of collective stream of consciousness. Check it out. And also look at other offerings in the science blogosphere.
[Edit:supplied link.]
↑ comment by JoshuaZ · 2010-08-16T02:32:36.797Z · LW(p) · GW(p)
I would not suggest Pharyngula for this purpose. The endless thread is fun but the rationality level there is not very high. It is higher than that of a random internet forum but I suspect that many LWians would become quickly annoyed at the level at which arguments are treated as soldiers.
Replies from: simplicio, Perplexed
comment by thepokeduck · 2010-08-29T08:18:03.442Z · LW(p) · GW(p)
What fosters a sense of camaraderie or hatred for a machine? Or: How users learned to stop worrying and love Clippy
http://online.wsj.com/article/SB10001424052748703959704575453411132636080.html
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-08-29T09:42:27.811Z · LW(p) · GW(p)
I recommend reading the article-- I didn't realize people could be recruited that easily.
comment by Airedale · 2010-08-19T22:00:01.309Z · LW(p) · GW(p)
I’m not sure whether the satanic ritual abuse and similar prosecutions of the 80s/90s have ever been discussed on LW in any detail (I couldn’t find anything with a few google searches), but some of the failures of rationality in those cases seem to fit into the subject matter here.
For those unfamiliar with these cases, a sort of panic swept through many parts of the United States (and later other countries) resulting in a number of prosecutions of alleged satanic ritual abuse or other extensive conspiracies involving sexual abuse, despite, in almost all cases, virtually no physical evidence that such abuse occurred. Lack of physical evidence, of course, does not always mean that a crime has not occurred, but given the particular types of allegations made, it was not credible in most cases that no physical evidence would exist. It is hard to choose the most outrageous example, but this one is pretty remarkable:
Gerald [Amirault], it was alleged, had plunged a wide-blade butcher knife into the rectum of a 4-year-old boy, which he then had trouble removing. When a teacher in the school saw him in action with the knife, she asked him what he was doing, and then told him not to do it again, a child said. On this testimony, Gerald was convicted of a rape which had, miraculously, left no mark or other injury.
Moreover, there were all sorts of serious problems with the highly suggestive techniques used in questioning the children to elicit the accusations of abuse. If one is inclined to be charitable to the investigators, some of the problems with the interviews could be chalked up to lack of understanding at the time of how problematic these techniques were, but the stories are pretty damning. A short description of the sorts of techniques can be found in this Wiki entry on one of the most prominent prosecutions, that involving the McMartin preschool.
Many, although by no means all, of the defendants in these sorts of cases have since been exonerated. I am posting this comment because a defendant in one of the cases, Jesse Friedman (one of the subjects of the documentary film Capturing the Friedmans), is in the news because of a recent federal appellate court decision (pdf), which denied relief to Friedman, but noted:
While the law may require us to deny relief in this case, it does not compel us to do so without voicing some concern regarding the process by which the petitioner’s conviction was obtained.
For anyone who would like a brief overview of the problems with these sorts of prosecutions, the court’s opinion linked above has a relatively concise but informative discussion at pp. 18-23. For anyone interested in a book length treatment, I also recommend No Crueler Tyrannies by Dorothy Rabinowitz. Tons of info on the Internet as well, of course.
Replies from: jacob_cannell
↑ comment by jacob_cannell · 2010-08-24T09:21:47.551Z · LW(p) · GW(p)
My father was a forensic psychiatrist heavily involved in some of these cases, testifying for the defense of the accused. The moral panic phenomenon is real and complex, but there's a more basic failure of rationality underlying the whole movement which was the false belief in the inherent veracity of children.
Apparently juries and judges alike took the testimony of children at face value. The problem was that the investigative techniques of the social workers invariably elicited the desired reactions in the children. In law you have the concept of leading the witness, but that doesn't apply to investigations of child abuse. The children are taken away from their parents and basically locked up with the investigators until they tell them what they want to hear. It wasn't even necessarily deliberate - from what I understand, in many cases the social workers just had a complete lack of understanding of how they were conditioning the children to fabricate complex and in many cases outright ridiculous stories. It's amazing how similar the whole scare was to historical accounts of the witch trials. Although as far as I know, in the recent scare nobody was put to death (but I could even be wrong about that, and certainly incalculable damage was done nonetheless).
comment by jimrandomh · 2010-08-18T23:43:58.635Z · LW(p) · GW(p)
Having a dog in the room made subjects 30% less likely to defect in Prisoner's Dilemma (article; sample size 52 people in groups of 4).
This changes my views on pet ownership completely.
comment by mattnewport · 2010-08-31T18:47:55.012Z · LW(p) · GW(p)
Reasonable Doubt: Innocence Project Co-Founder Peter Neufeld on Being Wrong
Replies from: komponisto
↑ comment by komponisto · 2010-08-31T19:00:35.305Z · LW(p) · GW(p)
Excellent link. A particularly noteworthy excerpt:
[Q:] I assume that most people in these jobs aren't actually trying to convict innocent people. So how does such misconduct come about?
[A:] I think what happens is that prosecutors and police think they've got the right guy, and consequently they think it's OK to cut corners or control the game a little bit to make sure he's convicted.
This is the same phenomenon that is responsible for most scientific scandals: people cheat when they think they have the right answer.
It illustrates why proper methods really ought to be sacrosanct even when you're sure.
comment by orthonormal · 2010-08-10T19:37:38.294Z · LW(p) · GW(p)
The welcome thread is about to hit 500 comments, which means that the newer comments might start being hidden for new users. Would it be a good thing if I started a new welcome thread?
While I'm at it, I'd like to add some links to posts I think are especially good and interesting for new readers.
Replies from: orthonormal
↑ comment by orthonormal · 2010-08-10T21:57:29.842Z · LW(p) · GW(p)
OK, I'm seeing some quick approval. I've been looking back through LW for posts and wiki articles that would be interesting/provocative for new readers, and don't require the entirety of the sequences. Here's my list right now:
* Newcomb's Problem and Regret of Rationality
* The True Prisoner's Dilemma
* How to Convince Me that 2 + 2 = 3
* The Least Convenient Possible World
* The Apologist and the Revolutionary
* Your Intuitions are Not Magic
* The Third Alternative
* Lawful Uncertainty
* The Domain of Your Utility Function
* The Allais Paradox (with two followups)
* We Change Our Minds Less Often Than We Think
* The Tragedy of Group Selectionism
And from the wiki:
* Near/Far thinking
* Shut Up and Multiply
* Evolutionary Psychology
* Cryonics
* Religion
What should I add? What, if anything, should I subtract?
Replies from: MartinB
↑ comment by MartinB · 2010-08-12T19:52:32.604Z · LW(p) · GW(p)
Due to heavy personal history bias: That Alien Message.
I would take out anything that involves weird stuff regarding dead people, but that might be better A/B tested or surveyed. My own expectation is that hitting readers with the crazy topics right away is bad and a turn-off while it is better to give out useful and interesting things in the beginning that are relatable right away. [Edit: important missing word added]
Replies from: orthonormal
↑ comment by orthonormal · 2010-08-12T19:55:26.387Z · LW(p) · GW(p)
I would could anything that involves weird stuff regarding dead people
Huh?
Replies from: MartinB
↑ comment by MartinB · 2010-08-12T20:07:16.131Z · LW(p) · GW(p)
Corrected. I meant cryonics, and some of the applications of 'shut up and multiply'.
Replies from: orthonormal
↑ comment by orthonormal · 2010-08-12T20:11:19.840Z · LW(p) · GW(p)
Ah. As you can see on the page itself, I decided to leave out the wiki links (for basically the same reasons you mentioned.) I'll add That Alien Message.
comment by NancyLebovitz · 2010-08-10T16:03:34.310Z · LW(p) · GW(p)
A couple of viewquakes at my end.
I was really pleased when the Soviet Union went down-- I thought people there would self-organize and things would get a lot better.
This didn't happen.
I'm still more libertarian than anything else, but I've come to believe that libertarianism doesn't include a sense of process. It's a theory of static conditions, and doesn't have enough about how people actually get to doing things.
The economic crisis of 2007 was another viewquake for me. I literally went around for a couple of months muttering about how I had no idea it (the economy) was so fragile. A real estate bust was predictable, but I had no idea a real estate bust could take so much with it. Of course, neither did a bunch of other people who were much better paid and educated to understand such things, but I don't find that entirely consoling.
This gets back to libertarianism and process, I think. Protections against fraud don't just happen. They need to be maintained, whether by government or otherwise.
Replies from: Vladimir_M, None
↑ comment by Vladimir_M · 2010-08-11T04:55:17.982Z · LW(p) · GW(p)
NancyLebovitz:
The economic crisis of 2007 was another viewquake for me. I literally went around for a couple of months muttering about how I had no idea it (the economy) was so fragile. A real estate bust was predictable, but I had no idea a real estate bust could take so much with it.
That depends on what exactly you mean by "the economy" being fragile. Most of it is actually extremely resilient to all sorts of disasters and destructive policies; if it weren't so, the modern civilization would have collapsed long ago. However, one critically unstable part is the present financial system, which is indeed an awful house of cards inherently prone to catastrophic collapses. Shocks such as the bursting of the housing bubble get their destructive potential exactly because their effect is amplified by the inherent instabilities of the financial system.
Moldbug's article "Maturity Transformation Considered Harmful" is probably the best explanation of the root causes of this problem that I've seen.
↑ comment by [deleted] · 2010-08-11T11:59:40.714Z · LW(p) · GW(p)
Sometimes I think the only kind of libertarianism that makes sense is what I'd call "tragic libertarianism." There is no magic market fairy. Awful things are going to happen to people. Poverty, crime, illness, and war. The libertarian part is that our ability to alleviate suffering through the government is limited. The tragic part is that this is not good news.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-08-11T14:08:51.010Z · LW(p) · GW(p)
There's another tragic bit-- some of what government does makes things worse. There's no magic government fairy that guarantees good (or even non-horrible) results just because a government is doing something.
Replies from: None
comment by cousin_it · 2010-08-14T12:10:36.956Z · LW(p) · GW(p)
I'm considering starting a Math QA Thread at the toplevel, due to recent discussions about the lack of widespread math understanding on LW. What do you say?
Replies from: XiXiDu, NancyLebovitz, nhamann
↑ comment by XiXiDu · 2010-08-14T14:25:30.181Z · LW(p) · GW(p)
Here is all the math you need to know to understand most of LW (correct me if I'm wrong):
- The Khan Academy (For the basics.)
- A Guide to Bayes’ Theorem – A few links (Just this might do too.)
I'm working through all of it right now. Not very far yet though.
You might want to add computer science and basic programming knowledge too.
Replies from: cousin_it, XiXiDu
↑ comment by cousin_it · 2010-08-14T16:06:18.697Z · LW(p) · GW(p)
Some people, including me, can get away with knowing much less and just figuring stuff out as we go along. I'm not sure if anyone can learn this ability, but for me personally it wasn't inborn and I know exactly how I acquired it. Working through one math topic properly at school over a couple years taught me all the skills needed to fill any gaps I encountered afterwards. University was a breeze after that.
The method of study was this: we built one topic (real analysis) up from the ground floor (axiomatization of the reals), receiving only the axioms and proving all theorems by working through carefully constructed problem sets. An adult could probably condense this process into several months. It doesn't sound like much fun - it's extremely grueling intellectual work of the sort most people never even attempt - but when you're done, you'll never be afraid of math again.
Replies from: XiXiDu
↑ comment by XiXiDu · 2010-08-14T16:28:25.024Z · LW(p) · GW(p)
I had to figure it all out myself, without the help of anyone in meatspace. I'm lacking any formal education that would be worth mentioning. The very language I'm writing in right now is almost completely self-taught. It took me half a decade to get here, irrespective of my problems. That is, most of the time I haven't been learning anything but merely pondering what is the right thing to do in the first place. Only now have I gathered enough material, intention and the basic tools to tackle my lack of formal education.
↑ comment by XiXiDu · 2010-08-14T14:31:47.124Z · LW(p) · GW(p)
Ok, you might add some logic and set theory as well if you want to grasp the comments. Although some comment threads go much further than that.
↑ comment by NancyLebovitz · 2010-08-14T13:56:09.534Z · LW(p) · GW(p)
I'm not sure that people necessarily know what questions they need to ask, or even that they need to ask.
A math Q&A seems like a good idea, but it would be a better idea if there were some "the math you need for LW" posts first.
There was a very nice piece here (possibly a quote) on how to think about math problems-- no more than a few paragraphs long. It was about how to break things down and the sorts of persistence needed. Anyone remember it?
comment by ata · 2010-08-17T05:11:44.322Z · LW(p) · GW(p)
[Originally posted this in the first August 2010 Open Thread instead of this Part 2; oops]
I've been wanting to change my username for a while, and have heard from a few other people who do too, but I can see how this could be a bit confusing if someone with a well-established identity changes their username. (Furthermore, at LW meetups, when I've told people my username, a couple of people have said that they didn't remember specific things I've posted here, but had some generally positive affect associated with the name "ata". I would not want to lose that affect!) So I propose the following: Add a "Display name" field to the Preferences page on LW; if you put something in there, then this name would be shown on your user page and your posts and comments, next to your username. (Perhaps something like "ata (Adam Atlas)" — or the other way around? Comments and suggestions are welcome.)
I'm willing to code this if there's support for it and if the administrators deem it acceptable.
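A minimal sketch of the proposed rendering (the function and field names here are hypothetical illustrations, not actual LW code):

```python
from typing import Optional

def render_name(username: str, display_name: Optional[str]) -> str:
    """Show 'username (Display name)' when a display name is set, else just the username."""
    if display_name:
        return f"{username} ({display_name})"
    return username

print(render_name("ata", "Adam Atlas"))  # ata (Adam Atlas)
print(render_name("ata", None))          # ata
```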
Replies from: arundelo
↑ comment by arundelo · 2010-08-17T06:47:00.450Z · LW(p) · GW(p)
Your username reminds me of a scene from one of my favorite South Park episodes (slightly NSFW).
comment by Jordan · 2010-08-10T20:22:37.864Z · LW(p) · GW(p)
I've been thinking more and more about web startups recently (I'm nearing the end of grad school and am contemplating whether a life in academia is for me). I'm no stranger to absurd 100 hour weeks, love technology, and most of all love solving problems, especially if it involves making a new tool. Academia and startups are both pretty good matches for those specs.
Searching the great wisdom of the web suggests that a good startup should be two people, and that the best candidate for a cofounder is someone you've known for a while. From my own perspective, I'd love to have a cofounder that was rational and open minded, hence LessWrong as a potential source.
I'm not pitching a startup idea here. What I'm pitching is promiscuous intellectual philandering. I'd like to shoot the shit about random tech ideas, gossip about other startups, and in general just see if I click with anyone here strongly enough to at some point consider buddying up to take on the world.
Thoughts on how best to do this? What's the internet equivalent of speed dating for finding startup cofounders? Maybe the best way is to just attend more LessWrong meetups?
Replies from: curiousepic, xamdam
↑ comment by curiousepic · 2010-08-17T20:10:05.929Z · LW(p) · GW(p)
If you weren't already aware, Hacker News http://news.ycombinator.com/ has a lot of discussion about startups.
Replies from: Jordan
↑ comment by xamdam · 2010-08-17T20:15:05.811Z · LW(p) · GW(p)
Funny, the NYC Meetup (today) is going to touch on this topic ('cause I've been thinking about it). It's one of the "ways rationalists can make money", IMO.
Replies from: Jordan
↑ comment by Jordan · 2010-08-18T04:51:56.408Z · LW(p) · GW(p)
I agree, it does seem like a great untapped potential for a rationalist community. The obvious question is, does being a rationalist make you a better founder of a startup? Or, more relevant here, does being a rationalist of LessWrong stripes make you a better founder?
When it comes to programming prowess, I doubt rationality confers much benefit. But when it comes to the psychological steel needed to forge a startup: dealing with uncertainty, sunk costs, investors, founder feuds, etc... I think a black belt in rationality could be a hell of a weapon!
Replies from: xamdam, xamdam
↑ comment by xamdam · 2010-08-20T15:04:15.309Z · LW(p) · GW(p)
BTW, we having this discussion here:
http://groups.google.com/group/overcomingbiasnyc/browse_thread/thread/3b91c0f4460dca63?hl=en
↑ comment by xamdam · 2010-08-18T13:31:32.342Z · LW(p) · GW(p)
I think dealing with uncertainty is key. I think Frank Knight formulated the idea that this is where the greatest returns are; I feel that this is something rationalists should be better than average at.
I also think this should give advantage to rationalists in areas other than startups, though startups definitely come to mind.
comment by Clippy · 2010-08-12T13:09:11.434Z · LW(p) · GW(p)
Quick question about time: Is a time difference the same thing as the minimal energy-weighted configuration-space distance?
Replies from: wedrifid, Kazuo_Thow, CronoDAS
↑ comment by Kazuo_Thow · 2010-08-16T22:39:04.228Z · LW(p) · GW(p)
Will a correct answer to this question give you significant help toward maximizing the number of paperclips in the universe?
Replies from: Clippy
comment by knb · 2010-08-19T23:25:46.075Z · LW(p) · GW(p)
IMO, the quality of comments on Overcoming Bias has diminished significantly since Less Wrong started up. This was true almost from the beginning, but the situation has really spiraled out of control more recently.
I gave up reading the comments regularly last year, but once a week or so, I peek at the comments and they are atrociously bad (and almost uniformly negative). The great majority seem unwilling to even engage with Robin Hanson's arguments and instead rely on shaming techniques.
So what gives? Why is the comment quality so much higher on LW than on OB? My first thought is karma, but OB didn't have karma when Eliezer Yudkowsky was posting, and the comments were pretty good back then. My best guess is that the good commenters were mostly Yudkowsky fans, and they left when EY left.
However, I don't know if anyone else shares my impression about OB commenter quality, so I may be completely misguided here.
Replies from: cata, Eliezer_Yudkowsky, wedrifid, SilasBarta, Perplexed, wedrifid
↑ comment by cata · 2010-08-20T00:14:08.228Z · LW(p) · GW(p)
I wasn't reading OB before LW existed, but if you look there now, it's immediately apparent that the topics represented on the front page are much, much, much more interesting to the average casual reader than the ones on LW's front page. I wouldn't be surprised if the commenters tended to be less invested and less focused as a result.
(EDIT: I shouldn't say the "average casual reader," since that must mean something different to everyone. I clarified what I meant below in response to katydee; I think OB appeals to a large audience of interested laymen who like accessible, smart writing on a variety of topics, but who aren't very interested in a lot of LW's denser and more academic discussion.)
Replies from: katydee
↑ comment by katydee · 2010-08-20T00:44:14.152Z · LW(p) · GW(p)
I suppose I'm not the average casual reader, but here's my comparison--
Less Wrong front page:
-Occam efficiency/rationality games-- low interest
-Strategies for confronting existential risk-- high interest
-Potential biases in evolutionary psychology-- mid-level interest
-Taking ideas seriously-- extremely high interest
-Various community threads-- low/mid interest
-Quick explanations of rationality techniques-- extremely high interest
-Conflicts within the mind-- mid/high interest
Overcoming Bias front page:
-Personality trait effects on romantic relationships-- minimal interest
-Status and reproduction-- minimal interest
-Flaws with medicine- mid-level interest
-False virginity-- no interest beyond "it exists"
-(In)efficiency of free parking-- minimal interest
-Strategies for influencing the future-- high interest
-Reproductive ethics-- minimal interest
-Economic debate-- minimal interest
Only two of the Overcoming Bias articles were interesting to me at all; only one was strongly interesting, and it was also short. Less Wrong seemed, at least to me, to have better/more interesting topics than Overcoming Bias, which might be why it has better/more interesting discussions.
Replies from: cata, ShardPhoenix
↑ comment by cata · 2010-08-20T01:07:31.999Z · LW(p) · GW(p)
I totally agree with you; that's why I'm here!
But personally, I know a lot of fairly smart, moderately well-educated people who just aren't very interested in a life of the mind. They don't get a lot out of studying philosophy and math, they read a little but not a lot, they don't seek intellectual self-improvement, and they aren't terribly introspective. However, they all have a passing interest in current events, technology, economics, and social issues; the stuff you'd find in the New Yorker or Harper's, or on news aggregators. Hanson's writing on these topics is exactly the sort of thing that appeals to that demographic, whereas Less Wrong is just not.
Replies from: wedrifid, katydee
↑ comment by wedrifid · 2010-08-20T01:12:05.550Z · LW(p) · GW(p)
Hanson's writing on these topics is exactly the sort of thing that appeals to that demographic, whereas Less Wrong is just not.
I certainly find Hanson's anecdotes far more useful when socialising with people who are interested in hearing surprising stories about human behaviour (i.e. most of the people I bother socialising with). The ability to drop sound bites is, after all, the primary purpose of keeping 'informed' in general.
↑ comment by ShardPhoenix · 2012-10-23T12:40:21.926Z · LW(p) · GW(p)
Hmm, you seem to be seeing a totally different OB "front page" to me. Where are you seeing those articles?
edit: nevermind, I thought this was the current open thread. I didn't see that it was from 2010.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-20T00:48:52.034Z · LW(p) · GW(p)
I remember there being lots of bad comments on the old OB, and I think that putting a karma system in place, and requiring registration, helped an awful lot.
↑ comment by wedrifid · 2010-08-19T23:53:11.276Z · LW(p) · GW(p)
So what gives? Why is the comment quality so much higher on LW than on OB? My first thought is karma, but OB didn't have karma when Eliezer Yudkowsky was posting, and the comments were pretty good back then. My best guess is that the good commenters were mostly Yudkowsky fans, and they left when EY left.
I expect that was the biggest reason. When I started following OB it basically was Eliezer's blog. Sure, occasionally Robin would post a quote and an interpretation but that was really just 'intermission break' entertainment.
I do note that comments here have been said to have reduced in quality. That is probably true and is somewhat related to lacking a stream of EY posts and also because there aren't many other prominent posters (like Yvain, Roko, Wei, etc.) posting on the more fascinating topics. (At least, fascinating to me.)
Replies from: SilasBarta, katydee
↑ comment by SilasBarta · 2010-08-20T01:00:58.287Z · LW(p) · GW(p)
I expect that was the biggest reason. When I started following OB it basically was Eliezer's blog. Sure, occasionally Robin would post a quote and an interpretation but that was really just 'intermission break' entertainment.
lol, yeah, that's the impression I got in the OB days. When there was discussion about renaming the site, I half-seriously thought it should be called "Eliezer Yudkowsky and the backup squad" :-P
I do note that comments here have been said to have reduced in quality. That is probably true and is somewhat related to lacking a stream of EY posts and also because there aren't many other prominent posters (like Yvain, Roko, Wei, etc.) posting on the more fascinating topics.
Oh man, just you wait! I'm almost done with one. Here's the title and summary:
Title: Morality as Parfitian-filtered Decision Theory? (Alternate title: Morality as Anthropic Acausal Optimization?)
Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit. A mind that can identify such actions might place them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it. Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "the right thing to do"; we are unconvinced by arguments that point out the lack of a future benefit -- or our estimates of the magnitude of what future benefits do exist are skewed upward. Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.
Replies from: orthonormal, wedrifid, Cyan
↑ comment by orthonormal · 2010-08-20T01:16:45.529Z · LW(p) · GW(p)
I know I've been mentioning Good and Real constantly since I read it, but this sounds a bit like the account of human decision theory (morality) in G&R...
Replies from: SilasBarta, SilasBarta
↑ comment by SilasBarta · 2010-08-20T01:46:45.033Z · LW(p) · GW(p)
I've made about six different 3-paragraph posts about G&R in the past three weeks, so I think you're safe ;-)
And yes, it does draw heavily on Drescher's account of the overlap between "morality" and "acting as if recognizing subjunctive acausal means-end links" (which I hope to abbreviate to SAMEL without provoking a riot).
↑ comment by SilasBarta · 2010-08-20T02:22:16.186Z · LW(p) · GW(p)
Anyone know how to do linked footnotes? Where you have a link that jumps to the bottom and one at the bottom that jumps back to the point in the text? I suppose I could just do [1], [2], etc., but I figure that would annoy people.
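For what it's worth, the standard HTML approach (assuming the LW post editor passes `id` and `href` attributes through unchanged, which I haven't verified) is a pair of anchors, one in the text and one in the footnote:

```html
<!-- Minimal linked-footnote sketch: the [1] in the text jumps down,
     the return link jumps back up. All ids here are made up. -->
<p>A claim that needs a footnote.<a id="fnref1" href="#fn1"><sup>[1]</sup></a></p>

<!-- ...rest of the post... -->

<p id="fn1">[1] The footnote text itself. <a href="#fnref1">back</a></p>
```

If the editor strips `id` attributes, plain [1], [2] markers without links are the only fallback.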
↑ comment by wedrifid · 2010-08-20T01:06:40.912Z · LW(p) · GW(p)
Oh man, just you wait!
I'm looking forward to that one! I can't guarantee that I'll agree with all of it (it will depend on how strong you make some of the claims in the middle) but I can tell I'll be engaged either way.
My first impression from the titles was that the 'Alternative' one was far better. But on reflection it sounds like the first title would be more accurate.
↑ comment by katydee · 2010-08-20T00:08:56.582Z · LW(p) · GW(p)
Who, in particular, said the comments have reduced in quality? Your post seems weasel-wordy to me as it currently stands.
Replies from: wedrifid
↑ comment by wedrifid · 2010-08-20T00:48:44.274Z · LW(p) · GW(p)
Who, in particular, said the comments have reduced in quality?
rhollerith_dot_com is one. I don't think I am being excessively controversial here. Ebbs and flows of post quality are inevitable, and are apparent even from just looking at the voting trends in the post list. There is little shame in such a variation... it is a good sign that people are busy doing real work!
Your post seems weasel-wordy to me as it currently stands.
Perhaps, but it was also polite. I did not provide (and still am not providing) the explicit link because I don't see it as necessary to direct people to the surrounding context. The context represents a situation that was later shown to be a success story in personal development, but which at the time reflected negatively.
↑ comment by SilasBarta · 2010-08-19T23:46:11.043Z · LW(p) · GW(p)
In the OB days, I mainly read it because of EY. Maybe others did too. I'm surprised that OB still wins in the usage stats.
↑ comment by Perplexed · 2010-08-20T01:23:21.775Z · LW(p) · GW(p)
I don't follow OB, but your comment sent me over there to look around. What I saw was a lot of criticism from feminists regarding posts by Robin that had a strong anti-feminist odor to them. I also saw some posts on less controversial subjects that drew almost no comments at all. So the natural presumption is that those feminist commenters are not regulars, but rather were attracted to OB when a post relevant to their core interests got syndicated somewhere. If that is what you are talking about then ...
Well, sure, a registration system might have repelled some of the commenters. If Robin really wants to insulate himself from feedback in this way, it might work. But I rather doubt that he is the kind of person to exclaim "OMG, we have wymin commenting here! Who let them in?". I hope his regulars aren't either.
Some comments on topics like this are emotive. Admittedly, you can't really engage with them. But that doesn't mean you shouldn't at least read them, count them, and try to learn something from their sheer existence, if not from their logic.
Replies from: knb
↑ comment by knb · 2010-08-20T03:15:48.590Z · LW(p) · GW(p)
If that is what you are talking about then ...
That isn't what I was talking about. I was talking about a general impression I've gotten over the last year. Robin's recent posts have received an Instalanche, so they're hardly representative.
↑ comment by wedrifid · 2010-08-19T23:45:13.631Z · LW(p) · GW(p)
However, I don't know if anyone else shares my impression about OB commenter quality, so I may be completely misguided here.
Absolutely. OB posts are worth a read occasionally, but the comments are not. And here I include even comments (not posts) by Robin himself. The way Robin replies does, I suggest, affect who is willing or interested in commenting there. Status interferes rather drastically with the ability of prominent figures to engage usefully in a public forum; by public forum I mean the generic sense, not the electronic one. It is often the case that hecklers wishing to shame and express outrage are the only ones who consider it worthwhile to show up.
For my part if I was particularly keen on discussing a topic from OB I would consider bringing it up on the open thread on LW.
comment by ata · 2010-08-18T07:08:10.664Z · LW(p) · GW(p)
Quick question — I know that Eliezer considers all of his pre-2002 writings to be obsolete. GISAI and CFAI were last updated in 2001 but are still available on the SIAI website and are not accompanied by any kind of obsolescence notice (and are referred to by some later publications, if I recall correctly). Are they an exception, or are they considered completely obsolete as well? (And does "obsolete" mean "not even worth reading", or merely "outdated and probably wrong in many instances"?)
comment by PhilGoetz · 2010-08-17T04:50:12.572Z · LW(p) · GW(p)
Is there a post dealing with the conflict between the common LW belief that there are no moral absolutes, and that it's okay to make current values permanent; and the belief that we have made moral progress by giving up stoning adulterers, slavery, recreational torture, and so on?
Replies from: ata
↑ comment by ata · 2010-08-17T05:03:58.553Z · LW(p) · GW(p)
the conflict between the common LW belief that there are no moral absolutes, and that it's okay to make current values permanent
I'm not sure that both of those are common LW beliefs (at least common in the same people at the same time), but I don't see any conflict there. If there are no moral absolutes, then making current values permanent is just as good as letting them evolve as they usually do.
Who here advocates making current values permanent?
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-08-17T20:41:02.368Z · LW(p) · GW(p)
Replace "making current values permanent" with CEV jargon on extrapolating volition into future minds within a trajectory determined by current human values. (The CEV program still needs to demonstrate that means something different from "making current values permanent". Details depend, among other things, on how or whether you split values up into terminal and instrumental values.)
Replies from: timtyler
↑ comment by timtyler · 2010-08-17T20:56:21.249Z · LW(p) · GW(p)
It certainly explicitly claims not to be doing that - see:
"2. Encapsulate moral growth." - http://singinst.org/upload/CEV.html
comment by jimrandomh · 2010-08-15T22:57:00.722Z · LW(p) · GW(p)
I recently started taking piracetam, a safe and unregulated (in the US) nootropic drug that improves memory. The effect (at a dose of 1.5g/day) was much stronger than I anticipated; I expected the difference to be small enough to leave me wondering whether it was mere placebo effect, but it has actually made a very noticeable difference in the amount of detail that gets committed to my long-term memory.
It is also very cheap, especially if you buy it as a bulk powder. Note that when taking piracetam, you also need to take something with choline in it. I bought piracetam and choline citrate as bulk powders, along with a bag of empty gelatin capsules and a scale. (Both piracetam and choline citrate taste extremely vile, so the gel caps are necessary. Assembling your own capsules is not hard, and can be done at a rate of approximately 10/minute with a tolerance of +/- 10% once you get the hang of it.)
I strongly recommend that anyone who has not tried piracetam stop procrastinating and order some. Yes, people have done placebo-controlled studies. No, there are not any rare but dangerous side effects. Taking piracetam is an unambiguous win if you want to learn and remember things.
Replies from: katydee, Cyan
↑ comment by katydee · 2010-08-15T23:30:34.762Z · LW(p) · GW(p)
Two questions:
-How much does it cost?
-How soon do you start becoming desensitized to it, if at all?
↑ comment by jimrandomh · 2010-08-16T00:50:18.567Z · LW(p) · GW(p)
How much does it cost?
I ordered from here at a price of $46 for 500g each of piracetam and choline citrate, plus $10 for gel caps and $20 for a scale (which is independently useful).
How soon do you start becoming desensitized to it, if at all?
I could not find any reported instances of desensitization to piracetam, so I don't think it's an issue.
I'm trying out nootropics, adding them one at a time. Next on my list to try is sulbutiamine; I've seen claims that it prevents mental fatigue, and it too has basically zero side-effect risks. Also on my list to try are lion's mane, aniracetam, l-tyrosine and fish oil. All of these are unregulated in the US.
I also use adrafinil, which greatly improves my focus. However, it's more expensive and it can't be used continuously without extra health risks, so I only use it occasionally rather than as part of my daily regimen. (There's an expensive and prescription-only related drug, modafinil, which can be used continuously.)
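The arithmetic on the prices above works out as follows (a rough sketch; it assumes the $46 covers both 500g powders, the reusable scale is excluded, and it uses the 1.5g/day piracetam dose from the earlier comment):

```python
# Back-of-the-envelope cost per day for the bulk-powder regimen above.
powder_grams = 500         # one 500 g bag of piracetam
dose_grams_per_day = 1.5   # dose from the earlier comment in this thread
order_cost = 46 + 10       # $46 for the powders plus $10 for gel caps

days_of_supply = powder_grams / dose_grams_per_day
cost_per_day = order_cost / days_of_supply

print(round(days_of_supply))   # ~333 days per order
print(round(cost_per_day, 2))  # ~0.17 dollars/day
```

Under those assumptions the regimen costs well under a dollar a week, which is what makes the "just try it" argument below so cheap to act on.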
Replies from: katydee
↑ comment by katydee · 2010-08-16T01:16:01.308Z · LW(p) · GW(p)
Sounds good. Be sure to report back once you test out the others-- nootropics are very interesting to me, and I think generally useful to the community as well.
Replies from: rabidchicken
↑ comment by rabidchicken · 2010-08-17T03:59:16.750Z · LW(p) · GW(p)
First, the results of a Wikipedia check: "There is very little data on piracetam's effect on healthy people, with most studies focusing on those with seizures, dementia, concussions, or other neurological problems." That seems to weaken the assurance of safety for everyday use. But otherwise, most of the sources appear to agree with your advertising. I too would like to see memory tests for these drugs, but preferably in a large and random sample of people, with a control group given a placebo, and another control group taking the tests with no aid of any kind. I would also like a long-term test to check for diminishing effectiveness or side effects. With my memory, I would pay a considerable amount to improve it, but first I want to see a wide scale efficiency test.
Replies from: PhilGoetz, wedrifid
↑ comment by PhilGoetz · 2010-08-17T04:54:53.107Z · LW(p) · GW(p)
With my memory, I would pay a considerable amount to improve it, but first I want to see a wide scale efficiency test.
Why? Given the low cost and risk of trying it out, the high possible benefits, and the high probability that results will depend on individual genetic or other variations and so will not reach significance in any study, wouldn't the reasonable thing be to try it yourself, even if the wide-scale test had already concluded it had no effect?
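This argument is just an expected-value calculation; a toy version with illustrative numbers (every figure below is an assumption for the sake of the sketch, not a claim about piracetam specifically):

```python
# Toy expected-value check for "just try it yourself".
# Every number here is an illustrative assumption, not data.
trial_cost = 60.0        # dollars: powders, capsules, scale
p_it_works = 0.2         # assumed chance of a real benefit for you
value_per_day = 2.0      # assumed dollar value of the benefit, if it works
horizon_days = 365       # assumed period you'd keep using it

expected_value = p_it_works * value_per_day * horizon_days - trial_cost
print(expected_value)    # 86.0 -> positive even at a modest success rate
```

The point of the argument is that with a low trial cost, the expected value stays positive even when the probability of benefit is small, which is also why a null result in a wide-scale study (averaging over individual variation) wouldn't by itself make the trial irrational.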
Replies from: rabidchicken
↑ comment by rabidchicken · 2010-08-17T06:05:11.059Z · LW(p) · GW(p)
Using your logic, I would be forced to try a large proportion of all drugs ever made. My motivation to buy this drug is close to my motivation to buy every other miracle drug out there; I want more third-party tests of each one so I can make a more informed decision about where to spend my money, instead of experimenting on hundreds per month. Also, it does not have a DIN number in Canada, so I would need to import it.
Replies from: gwern
↑ comment by wedrifid · 2010-08-17T06:36:27.872Z · LW(p) · GW(p)
I too would like to see memory tests for these drugs, but preferably in a large and random sample of people, with a control group given a placebo, and another control group taking the tests with no aid of any kind.
Working on it. Give me a few years.
↑ comment by Cyan · 2010-08-16T00:47:02.137Z · LW(p) · GW(p)
Question: did you find that it leads to faster grokking (beyond the effects of improvement of raw recall ability)?
Replies from: jimrandomh
↑ comment by jimrandomh · 2010-08-16T00:56:46.812Z · LW(p) · GW(p)
I don't know, but I think it's just memory. This is almost impossible to self-test, since there's a wide variance in problem difficulty and no way to estimate difficulty except by speed of grokking itself.
comment by gwern · 2010-08-13T07:34:03.643Z · LW(p) · GW(p)
"But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing.
Oh, we’ll invoke lush clichés about the lonely heroism of Olympic athletes, the pain and analgesia of football, the early rising and hours of practice and restricted diets, the preflight celibacy, et cetera. But the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode.
We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way “up close and personal” profiles of professional athletes strain so hard to find evidence of a rounded human life -- outside interests and activities, values beyond the sport.
We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small."
--David Foster Wallace, "The String Theory", July 1996 Esquire
I thought of Robin Hanson and ems as I read this:
"Joyce is, in other words, a complete man, though in a grotesquely limited way. But he wants more. He wants to be the best, to have his name known, to hold professional trophies over his head as he patiently turns in all four directions for the media. He wants this and will pay to have it -- to pursue it, let it define him -- and will pay up with the regretless cheer of a man for whom issues of choice became irrelevant a long time ago. Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well."
comment by xamdam · 2010-08-11T14:31:03.813Z · LW(p) · GW(p)
Jaron Lanier is at it again: The First Church of Robotics
Besides piling up his usual fuzzy opinions about AI, Jaron claims (and I cannot imagine that this was done out of sheer ignorance) that "This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined". I cannot imagine that this is the Singularity University's official position; it's really just too stupid.
I understand SingU is not SIAI, but there is some affiliation and I hope someone speaks up for them.
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-08-11T14:38:02.715Z · LW(p) · GW(p)
Seconded. I think SingU is stupid, but whatever its faults, that isn't one of them, and it would be bad if this image spilled onto SIAI.
I'm kind of upset too, because I was just reading Jaron Lanier's recent book, which is the biggest dose of sanity I've seen on issues I care about in a long time.
comment by James_Miller · 2010-08-10T00:24:54.158Z · LW(p) · GW(p)
I bet a good way to improve your rationality is to attempt to learn from the writings of smart, highly articulate people, whom you consider morally evil, and who often use emotional language to mock people like yourself. So, for example, feminists could read Roissy and liberals could read Ann Coulter.
http://roissy.wordpress.com/ http://www.anncoulter.com/
Replies from: None, grouchymusicologist, orthonormal, Oligopsony, wedrifid, Emile, NancyLebovitz, Craig_Heldreth, timtyler
↑ comment by [deleted] · 2010-08-10T12:39:31.743Z · LW(p) · GW(p)
I've done exactly this. Read Roissy, read far-right sites (though not Coulter specifically.) Basically sought out the experience of having my feelings hurt, in the interest of curiosity.
I learned a few things from the experience. First, they have a few good points (my politics have changed over time). Second, they are not right about everything just because they are mean and nasty and make me feel bad, and in fact sometimes right-wingers and anti-feminists display flaws in reasoning. And, third, I learned how to deal better with emotional antagonism itself: I don't bother seeking out excuses to be offended any more, but I do protect myself by avoiding people who directly insult me.
↑ comment by grouchymusicologist · 2010-08-10T02:22:45.557Z · LW(p) · GW(p)
You'll have to expand on this before I could agree. My inclination is to think quite the opposite. That is, when I read people who more or less articulately use highly emotion-button-pushing language to mock people like me, it puts my defenses up and makes me try to justify my beliefs, rationality be damned. Was this not pretty much the thrust of Politics is the Mind-Killer? If I were, to adopt a wild hypothetical, a conservative, I would probably say nearly anything to defend myself -- whether publicly or in my own mind -- against the kind of mockery I'd get on a daily basis from Paul Krugman's blog (Krugman chosen as example per mattnewport). Rationality-wise, that is not the position I want to be trying to put myself in. Rather, I want to seek out reasoned, relatively "cool" (as opposed to emotionally "hot") expressions of opposing viewpoints and try to approach them open-mindedly, trying to modify my positions if warranted.
I mean, am I missing something?
Replies from: James_Miller
↑ comment by James_Miller · 2010-08-10T03:18:15.086Z · LW(p) · GW(p)
"If I were, to adopt a wild hypothetical, a conservative, I would probably say nearly anything to defend myself -- whether publicly or in my own mind -- against the kind of mockery I'd get on a daily basis from Paul Krugman's blog"
Yes, most people would do this, so the rationality challenge would be to fight against it. Think of it as special-forces-intensity rationality training.
Replies from: orthonormal
↑ comment by orthonormal · 2010-08-10T15:44:43.363Z · LW(p) · GW(p)
Not everything that is difficult is thereby good training. It's easier to withstand getting punched in the gut if you're in good physical shape, but I wouldn't suggest trying to get in shape by having someone punch you repeatedly in the gut.
(Indeed, at some point of martial arts training it's useful to learn how to take a punch, but this training has to be done carefully and sparingly. You don't become stronger by rupturing your spleen.)
↑ comment by orthonormal · 2010-08-10T15:39:12.791Z · LW(p) · GW(p)
I think this norm would do poorly in practice, because people would seek out antagonists they unconsciously knew would be flawed, rather than those who actually scare them.
A much better idea, I think, is the following:
- Find someone who appears intelligent to you but is very ideologically different.
- Offer to read a book of their choice if they read a book of your choice.
- Give them a book you think will challenge them.
I'd suggest, however, that you not give the other person a book by someone who will constantly mock their position, because usually this will only further polarize them away from you. Exposing oneself to good contrary arguments, not ridicule, is the way for human beings to update.
↑ comment by Oligopsony · 2010-08-10T02:33:49.320Z · LW(p) · GW(p)
It's probably good to have a mix. I get something distinct from reading people like Roissy or Sailer, whose basic values are totally divorced from my own. I get something else from Eliezer or Will Wilkinson, who derive different policy preferences from values that are similar to mine.
There's something liberating about evil analysis, and I think it's that its audaciousness allows you to put down mental blinders that would be on guard against more plausible threats to your ideological integrity. And a nice thing about values changing over time is that the classics are full of this stuff. Reading, say, Schmitt is like reading political philosophy from Mars, and that's something you should experience regularly. Any similar recommendations?
Replies from: grouchymusicologist, grouchymusicologist, Multiheaded, Vladimir_M
↑ comment by grouchymusicologist · 2010-08-10T02:41:43.132Z · LW(p) · GW(p)
Upvoted. From my reply you'll see that I agree it's probably good to seek out, as you say, those "whose basic values are totally divorced from [one's] own." But can you say more about James Miller's original contention that you should specifically be seeking out that which is designed to piss you off? That's where it seems to me that his idea goes just totally wrong. How is this going to do anything except encourage you to retreat into tribalism?
Replies from: James_Miller, Oligopsony↑ comment by James_Miller · 2010-08-10T03:22:21.490Z · LW(p) · GW(p)
If you know that doing X will "encourage you to retreat into tribalism," then doing X gives you a great opportunity to fight against your irrational instincts.
↑ comment by Oligopsony · 2010-08-10T03:03:22.781Z · LW(p) · GW(p)
Well, there is the aesthetic appreciation of polemic for its own sake, but that's not going to make you more rational.
I think the most obvious answer, though, is that it can inure you a bit to connotative sneers. Aversion to this kind of insult is likely one of the major things keeping you from absorbing novel information!
One way to do this very quickly - you shouldn't, of course, select your politics for such trivial advantages, but if you do, take advantage of it - is to become evil yourself, relative to the majority's values. There are certain groups an attack upon which constitutes an applause line in the mainstream. If you identify as a communist or fascist or Islamist or other Designated Antagonist Group, you can either take the (obviously epistemically disastrous) route of only reading your comrades, or you can keep relying on mainstream institutional sources of information that insult you, and thereby thicken your skin. (Empirical prediction: hard {left|right}ists are more likely to read mainstream {conservatives|liberals} than are mainstream {liberals|conservatives}.)
(An alternate strategy this suggests, if your beliefs are, alas, pedestrian, is to "identify" with some completely ridiculous normative outlook, like negative utilitarianism or something. Let everyone's viewpoint offend you until "this viewpoint offends!" no longer functions as a curiosity stopper.)
Replies from: grouchymusicologist↑ comment by grouchymusicologist · 2010-08-10T03:25:34.817Z · LW(p) · GW(p)
Well, I understand your reasoning: you suggest that it's likely (or at least possible) that one's reaction in the face of rhetorically "hot" disagreement will be a built-up tolerance (immunity) for mockery, making one more able to extract substance and ignore affect. My belief is that that particular strength of character (which I admire when I see it, which is rarely) is infrequent relative to, as I keep calling it, a retreat into tribalism in the face of mockery of one's dearly-held beliefs. Hence my feeling that the upper-left quadrant of the graph I describe is not good breeding grounds for rationality. That isn't to suggest that we shouldn't do our best to self-modify such that that would no longer be the case, but it is hard to do and our efforts might be best spent elsewhere.
Also worth considering is the hypothesis that the two axes of my graph aren't fully independent, but instead that "hot" expressions are correlated with substantively less rich and worthwhile viewpoints, because the richest and most worthwhile viewpoints wouldn't have much need to rely on affect. If this is true (and I think it is at least somewhat true), it would be another reason for avoiding rhetorically "hot" political viewpoints in general.
Replies from: Oligopsony↑ comment by Oligopsony · 2010-08-10T04:07:54.274Z · LW(p) · GW(p)
As my political beliefs have become more evil I've become much better at ignoring insults to my politics. I remain pretty thin-skinned individually, though, so it seems that whatever's moving me in this way is politics-specific.
The healthiest reading space is probably all over the axis. Passion is not the opposite of reason, and there are pleasures to take in reading beyond the conveyance of mere information.
↑ comment by grouchymusicologist · 2010-08-10T03:00:30.552Z · LW(p) · GW(p)
Further to this. Let's plot political discourse along two axes: substantive (x axis: -disagree to +agree) and rhetorical (y axis: -"cool"/reasoned to +"hot"/emotional). Oligopsony states that it is valuable to engage with those on the left-hand side of the graph (people who disagree with you), without any particular sense that special dangers are posed by the upper left-hand quadrant. (Oligopsony says reading so-and-so is "like reading political philosophy from Mars, and that's something you should experience regularly" -- regardless of the particular emotional relationship you are going to have with that Martian political philosophy as a function of the way in which it's presented.) My view (following on, I think, PitM-K -- and in sharp disagreement with James Miller's original post in this thread) is that the upper half of the graph, and particularly the upper left-hand quadrant, is danger territory, because of the likelihood you are going to retreat into tribalism as your views are mocked.
↑ comment by Multiheaded · 2012-09-10T11:08:15.913Z · LW(p) · GW(p)
Reading, say, Schmidt is like reading political philosophy from Mars, and that's something you should experience regularly.
I like many aspects of Schmitt and he never produced any shock in me, unlike e.g. many bloggers who can be found 1-2 links away from Moldbug. In fact, many leftists have talked favourably about his reasoning if not his values.
↑ comment by Vladimir_M · 2010-08-11T05:55:48.610Z · LW(p) · GW(p)
Oligopsony:
Reading, say, Schmidt is like reading political philosophy from Mars, and that's something you should experience regularly.
Please pardon my evident lack of erudition, but which Schmidt do you have in mind?
Replies from: Oligopsony↑ comment by Oligopsony · 2010-08-11T14:11:03.841Z · LW(p) · GW(p)
Carl, and upon looking it up it's Schmitt. So the lack of erudition is all mine.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-08-11T16:04:59.597Z · LW(p) · GW(p)
Yes, I've heard of Schmitt. Paul Gottfried, who is one of my favorite contemporary political theorists, wrote a book about him. I plan to read at least some of Schmitt's original work before reading what Gottfried has to say about him, but I haven't gotten to it yet. Do you have any particular recommendations?
If you want to read some political philosophy that's really out there by modern standards, try Joseph de Maistre. His staunch Catholicism will probably be off-putting to many people here, but a truly unbiased reader should understand that modern political writers are smuggling just as many unwarranted metaphysical assumptions into their work, except in much more devious ways. Also, although some of his arguments have been objectively falsified in the meantime, others have struck me as spot-on from the modern perspective of Darwinian insight into human nature and humanity's practical political experience of the last two centuries. (His brother Xavier is a minor classic author of French literature, whom I warmly recommend for some fun reading.)
↑ comment by wedrifid · 2010-08-15T05:33:00.064Z · LW(p) · GW(p)
I would accept that bet. In my experience exposure to such writings mostly serves to produce contempt. Contempt is one emotion that seems to have a purely deleterious effect on thinking. Anger, fear, sadness, anxiety and depression all provide at least some positive effects on thinking in the right circumstances but contempt... nothing.
↑ comment by Emile · 2010-08-11T11:51:43.106Z · LW(p) · GW(p)
I've been trying to do roughly that, though focusing more on the "smart and highly articulate" aspect and dropping "emotional mockery". When I read someone taking cheap shots at a position I might hold, I mostly find the writer childish and annoying, I don't see how reading more of that would improve my rationality. It doesn't really hurt my feelings, unlike some commenters here, so I guess different people need to be prodded in different ways.
For smart and articulate writers with a rationalist vibe, I would recommend Mencius Moldbug (posts are articulate but unfortunately quite long; advocates monarchy, colonialism and slavery) and Noam Chomsky. Any recommendations of smart, articulate and "extreme" writers whose views are far from those two?
Replies from: Richard_Kennaway, kodos96, SilasBarta, gwern, Daniel_Burfoot, simplicio↑ comment by Richard_Kennaway · 2010-08-11T13:27:32.535Z · LW(p) · GW(p)
When I read someone taking cheap shots at a position I might hold, I mostly find the writer childish and annoying, I don't see how reading more of that would improve my rationality.
I have the same reaction to someone taking cheap shots, period. It doesn't matter whether they're arguing something I agree with, disagree with, or don't care about. It just lowers my opinion of the writer.
↑ comment by kodos96 · 2010-08-14T20:29:46.739Z · LW(p) · GW(p)
advocates monarchy, colonialism and slavery
Slavery? I'm certainly not defending Moldbug, but if he advocated slavery, I must have missed that post. Do you have a link?
Replies from: gwern↑ comment by gwern · 2010-08-14T20:43:52.464Z · LW(p) · GW(p)
See http://www.google.com/search?num=100&q=slavery%20site%3Aunqualified-reservations.blogspot.com%2F
(I part ways with Rothbard here. While hereditary slavery is more debatable, I don't have a problem at all with selling yourself into slavery. For me, a contract is an enforceable promise; removing my option to make enforceable promises cannot benefit me. If you don't want to make the promise, don't sign the contract. And promising to be your faithful servant so long as you and I shall live is a perfectly normal, legitimate, and (in a sane world) common sort of promise.)
I admit it: I am a pronomian. I endorse the nomos without condition. Fortunately, I do not have to endorse hereditary slavery, because any restoration of the nomos begins with the present state of possession, and at present there are no hereditary slaves. However, if you want to sell yourself and your children into slavery, I don't believe it is my business to object. Try and strike a hard bargain, at least. (A slightly weakened form of pronomianism, perhaps more palatable in this day and age, might include mandatory emancipation at twenty-one.)
And there is, of course, http://unqualified-reservations.blogspot.com/2009_03_01_archive.html which cannot be excerpted and done proper justice.
Replies from: kodos96↑ comment by kodos96 · 2010-08-14T20:52:10.577Z · LW(p) · GW(p)
Hrmmmm... haven't read the second link yet, but that first excerpt is.... well.... yeah. The selling yourself into slavery part is basically unobjectionable (to a libertarian), but selling your children into slavery.......
I think Moldbug's positions seem to be derived not so much from reversed stupidity as reversed PC.
↑ comment by SilasBarta · 2010-08-12T20:07:42.754Z · LW(p) · GW(p)
For smart and articulate writers with a rationalist vibe, I would recommend Mencius Moldbug (posts are articulate ...
I'll have to disagree, at least to the extent that this is taken as a positive attribute. I find his posts to be rambling and cutesy, which may correspond to articulate. But most people here have the kind of mind that prefers "get to the point" writing, which he fails at.
↑ comment by gwern · 2010-08-11T13:19:50.619Z · LW(p) · GW(p)
I think Moldbug is far away from any living thinker you could name. And he'd probably tell you so himself.
(FWIW, I think Moldbug is usually wrong, through a combination of confirmation bias and reversed stupidity, although I'm still open on Austrian economics in general.)
Replies from: cata, cousin_it, Emile↑ comment by cata · 2010-08-14T15:30:12.922Z · LW(p) · GW(p)
I have a very hard time evaluating Moldbug's claims, due to my lack of background in the relevant history, but holy shit, do I ever enjoy reading his posts.
The crowd here may be very interested in watching him debate Robin Hanson about futarchy before an audience at the 2010 Foresight conference. Moldbug seems to be a bit quicker with the pen than in person.
Moldbug's initial post that spurred the argument is here; it's very moldbuggy, so the summary, as far as my understanding goes, is like this: Futarchy is exposed to corrupt manipulators, decision markets can't correctly express comparisons between multiple competing policies, many potential participants are incapable of making rational actions on the market, and it's impossible to test whether it's doing a good job.
http://unqualified-reservations.blogspot.com/2009/05/futarchy-considered-retarded.html
Video of the debate is here: http://vimeo.com/9262193
Moldbug's followup: http://unqualified-reservations.blogspot.com/2010/01/hanson-moldbug-debate.html
Hanson's followup: http://www.overcomingbias.com/2010/01/my-moldbug-debate.html
Replies from: Emile, gwern↑ comment by Emile · 2010-08-15T09:15:17.345Z · LW(p) · GW(p)
I enjoy reading his posts too (when I have the time - not much, lately), but I wasn't very impressed by his debate with Robin Hanson - his arguments seemed to be mostly rehashing typical arguments against prediction markets that I'd heard before.
↑ comment by gwern · 2010-08-14T16:20:18.503Z · LW(p) · GW(p)
I was less than impressed by Hanson's response (in a comment) to http://unqualified-reservations.blogspot.com/2010/02/pipe-shorting-and-professor-hansons.html
Replies from: kodos96↑ comment by kodos96 · 2010-08-14T20:27:55.800Z · LW(p) · GW(p)
Yeah, that response didn't have much content, but I think that's pretty understandable considering that by that point in their debate, Moldbug had already revealed himself to be motivated by something other than rational objections to Hanson's ideas, and basically immune to evidence. In their video debate it became very clear that Moldbug's strategy was simply to hold Hanson's ideas to an impossibly high standard of evidence, hold his own ideas to an incredibly low standard of evidence, and then declare victory.
So I can understand why Hanson might not have thought it was worth investing a lot more time in responding point by point.
↑ comment by cousin_it · 2010-08-12T09:51:43.366Z · LW(p) · GW(p)
I'm kinda torn about Moldbug. His political arguments look shaky, but whenever he hits a topic I happen to know really well, he's completely right. (1, 2) Then again, he has credentials in CS but not history/economics/poli-sci, so the halo effect may be unjustified. Many smart people say dumb things when they go outside their field.
Replies from: SilasBarta, gwern↑ comment by SilasBarta · 2010-08-12T20:00:42.306Z · LW(p) · GW(p)
That just shows he got two easy questions right. When he spells out his general philosophy, which I had criticized before, you see just how anti-rational his epistemology is. You're just seeing a broken clock at noon.
By the way, anyone know if "Mencius Moldbug" is his real name? It sounds so fake.
Replies from: gwern, CronoDAS, cousin_it↑ comment by gwern · 2010-08-15T09:47:31.497Z · LW(p) · GW(p)
He states that it's a pseudonym. (It's actually quite a clever one - unique, and conveys a lot about him.)
"I may be a pseudonym, but more prominent folks like Sailer, Auster and Mangan aren't."
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-08-15T10:03:56.772Z · LW(p) · GW(p)
MM's name combines the pseudonyms he previously used as a commenter in two separate blogging realms (HBD and finance).
↑ comment by gwern · 2010-08-12T10:12:04.486Z · LW(p) · GW(p)
Funny, I like CS too but his writings put me off in part; I particularly disliked his Nock language. It looks like a seriously crappy Lisp to me (and I like Haskell better).
Replies from: cousin_it↑ comment by cousin_it · 2010-08-12T19:44:41.664Z · LW(p) · GW(p)
Agreed, Nock was a neat puzzle, but not much more. I have no idea why he tried to oversell it so.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-08-15T14:38:59.706Z · LW(p) · GW(p)
Nock was followed by Urbit, "functional programming from scratch", but that project doesn't seem to have gone anywhere, and it's not clear to me where it would go. His vision of "Martian code", "tiny and diamond-perfect", is still a castle in the air, the job of putting a foundation under it still undone.
A criticism that I think applies to his politics as well. He does a fine destructive critique of the current state of things and how we got here, but is weak on what he would replace it by.
↑ comment by Emile · 2010-08-11T15:40:10.006Z · LW(p) · GW(p)
Probably, as long as you restrict yourself to sane, articulate thinkers in the West. There are probably even more outlandish ideas in Japan, India, or the Islamic world.
Come to think of it, it would probably be more instructive to read "non-westernized" intellectuals from India, Korea, Japan, China or the Islamic world, talking about the West. I think Moldbug recommended a 19th-century Japanese writer talking about his experience in America, but I can't find it right now.
Replies from: gwern↑ comment by gwern · 2010-08-11T16:06:56.960Z · LW(p) · GW(p)
Yukichi Fukuzawa. Only limited parts of his works are online (eg. in Google Books, very limited previews).
↑ comment by Daniel_Burfoot · 2010-08-16T04:33:00.681Z · LW(p) · GW(p)
For smart and articulate writers with a rationalist vibe, I would recommend Mencius Moldbug
Why do you think MM has a rationalist vibe? He doesn't talk about probability or heuristics/biases etc.
↑ comment by simplicio · 2010-08-14T20:37:49.728Z · LW(p) · GW(p)
I would defend an (eviscerated) monarchy such as Canadians like me and other Commonwealthers have as being a social good, insofar as it's a rich & elegant tradition that's not very pernicious.
Actual "off with his head" monarchy... nah.
And if Prince Charles decides to flap his unelected gums too much when he accedes, you may see me change my tune. But at the moment I'm happy to be a subject of HM Queen Elizabeth.
↑ comment by NancyLebovitz · 2010-08-10T00:35:36.137Z · LW(p) · GW(p)
Who should conservatives read?
Replies from: cata, None, mattnewport↑ comment by [deleted] · 2010-08-10T15:42:42.960Z · LW(p) · GW(p)
Paul Krugman's a good example, because he goes on the offensive, but he's not quite offensive enough. For good, juicy ad hominems, read SLOG (the blog of the Seattle Stranger) or Feministe.
How to offend a conservative is an interesting question. I think it should be easy to offend (or upset or disgust) a traditional social conservative, with simple sexual shock value. It's harder for me to think of ways to offend an economic conservative. The closest thing I can think of is stereotyping libertarians as weirdo losers.
Replies from: CronoDAS↑ comment by mattnewport · 2010-08-10T00:39:51.557Z · LW(p) · GW(p)
Paul Krugman?
Replies from: CronoDAS↑ comment by CronoDAS · 2010-08-14T08:19:24.889Z · LW(p) · GW(p)
My experience is that Paul Krugman is one of those people with whom you disagree at your peril.
Replies from: kodos96↑ comment by kodos96 · 2010-08-15T05:07:37.205Z · LW(p) · GW(p)
If by "peril" you mean being censored from the comments section of his blog....
Replies from: CronoDAS↑ comment by CronoDAS · 2010-08-15T07:41:55.854Z · LW(p) · GW(p)
Replies from: kodos96↑ comment by kodos96 · 2010-08-15T08:18:18.593Z · LW(p) · GW(p)
And if you agreed with him about the housing market and bought any real estate a few years back, then you're probably underwater right now.
Replies from: CronoDAS↑ comment by CronoDAS · 2010-08-15T21:28:36.252Z · LW(p) · GW(p)
Even if you sold in 2006?
I don't know if that link is gated or not.
So here's the bottom line: yes, northern Virginia, there is a housing bubble. (Northern Virginia, not Virginia as a whole. Only the Washington suburbs are in the Zoned Zone.) Part of the rise in housing values since 2000 was justified given the fall in interest rates, but at this point the overall market value of housing has lost touch with economic reality. And there's a nasty correction ahead.
Replies from: kodos96
↑ comment by Craig_Heldreth · 2010-08-16T01:07:24.693Z · LW(p) · GW(p)
This is a special case of a well-known process in social science circles since at least the 1950s: role playing. It became popular after the work of Fritz Perls (a psychotherapist who started out in drama), who would have the patient do things such as play their mother or father (or tyrannical trauma family character of choice) to try to broaden their understanding of their life story and memory. It can be a very powerful technique. I have been in group psychotherapy sessions where people scream, bawl, and display all manner of other visceral responses.
In 1983 Robert Anton Wilson published a book, Prometheus Rising, which was ostensibly a self-help book for making your thinking more rational, specifically for destroying dogmas. This book is little more than one recipe after another for exercises of this type; for example, be a neo-Nazi for a week.
The mechanism is to learn by exposing yourself to that great universe of unknown-unknowns. My personal experience is that sometimes these exercises can be helpful, but it is really hard to know beforehand whether they will be worth the time. I have benefited from some of them; I have wasted time on others.
comment by Jonathan_Graehl · 2010-08-17T18:51:40.639Z · LW(p) · GW(p)
An HN post mocks Kurzweil for claiming the length of the brain's "program" is mostly due to the part of the genome that affects it. This was discussed here lately. How much more information is in the ontogenic environment, then?
The top rated comment makes extravagant unsupported claims about the brain being a quantum computer. This drives home what I already knew: many highly rated HN comments are of negligible quality.
PZ Myers:
We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven't even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil's clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!
(PZ Myers wrongly accuses Kurzweil of claiming he or others will simulate a human brain aided in large part by the sequenced genome, by 2020).
Kurzweil's denial - thanks Furcas - answers my question this way: only a small portion of the information in the brain's initial layout is due to the epigenetic pre-birth environment (although the evidence behind this belief wasn't detailed).
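As an aside, the scale of the disagreement can be sketched with rough arithmetic (the figures below are commonly cited approximations, not numbers taken from the linked posts): the genome's raw information content is tiny compared with even a crude encoding of an adult connectome.

```python
# Rough, commonly cited figures; every number here is an approximation.
base_pairs = 3.2e9        # approximate length of the human genome
bits_per_base = 2         # four possible bases -> 2 bits each
genome_mb = base_pairs * bits_per_base / 8 / 1e6
print(genome_mb)          # 800.0 -> ~800 MB uncompressed

compression_factor = 0.1  # assumed: repetitive DNA compresses heavily
print(genome_mb * compression_factor)  # 80.0 -> order of tens of MB

# Versus a crude lower bound on specifying an adult connectome:
synapses = 1e14           # ~100 trillion synapses
bytes_per_synapse = 4     # assumed: a few bytes for target + weight
connectome_tb = synapses * bytes_per_synapse / 1e12
print(connectome_tb)      # 400.0 -> hundreds of terabytes
```

On these assumptions the developed brain embodies orders of magnitude more bits than the genome supplies, which is roughly Myers' point; Kurzweil's counter is that the compact genome bounds the complexity of the growth rules, not of the grown brain.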
Replies from: Furcas, ocr-fork↑ comment by Furcas · 2010-08-24T02:14:58.318Z · LW(p) · GW(p)
Kurzweil claims he or others will simulate a human brain aided in large part by the sequenced genome, by 2020.
Replies from: Jonathan_Graehl
↑ comment by Jonathan_Graehl · 2010-08-24T02:54:38.921Z · LW(p) · GW(p)
Cool - were it not for your comment, I wouldn't have ever heard the correction.
↑ comment by ocr-fork · 2010-08-18T05:38:32.162Z · LW(p) · GW(p)
How much more information is in the ontogenic environment, then?
Off the top of my head:
The laws of physics
9 months in the womb
The rest of your organs. (maybe)
Your entire childhood...
These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That's idiotic.
comment by JoshuaZ · 2010-08-17T02:21:35.823Z · LW(p) · GW(p)
I'm thinking of signing up for cryonics. However, one point that is strongly holding me back is that cryonics seems to require signing up for a DNR (Do not resuscitate). However, if there's a chance at resuscitation I'd like all attempts to be made and only have cryonics used when it is clear that the other attempts to keep me alive will fail. I'm not sure that this is easily specifiable with current legal settings and how cryonics is currently set up. I'd appreciate input on this matter.
Replies from: Alicorn↑ comment by Alicorn · 2010-08-17T02:25:26.869Z · LW(p) · GW(p)
cryonics seems to require signing up for a DNR
What did you read that makes it seem this way? I haven't run into this before.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-08-17T02:38:44.340Z · LW(p) · GW(p)
A variety of places mention it. Alcor mentions it here. Cryonics.org discusses the need for some form of DNR although the details don't seem to be very clear there. Another one that discusses it is this article which makes the point that repeated attempts at resuscitation can lead to additional brain damage although at least from the material I've read I get the impression that as long as it doesn't delay cryopreservation by more than an hour or two that shouldn't be an issue.
Replies from: AngryParsley↑ comment by AngryParsley · 2010-08-17T04:01:47.914Z · LW(p) · GW(p)
You don't have to sign a DNR or objection to autopsy to get cryonics. The autopsy objection is recommended, but not required. It looks like Alcor wants terminally ill people to sign a DNR, not typical healthy people.
I've signed a religious objection to autopsy (California doesn't seem to allow an atheistic objection to autopsy), but never has a DNR been mentioned to me by anyone at Alcor.
Replies from: wedrifid, JoshuaZ↑ comment by wedrifid · 2010-08-17T04:46:40.348Z · LW(p) · GW(p)
California doesn't seem to allow an atheistic objection to autopsy
Which is just a tad ironic. Atheists are people who consider the physical state of their brain to be all that is 'them'. Most religious people assume their immortal soul has traipsed off some place, a paradise or at the very least a brand spanking new (possibly animalian) body.
comment by NancyLebovitz · 2010-08-13T18:17:30.652Z · LW(p) · GW(p)
Problems with high-stakes, low-quality testing
The percentage of students falling below some arbitrary cutoff is a bad statistic to use for management purposes. (Again, this is Statistical QA 101 stuff.) It throws away information. Worse, it creates perverse incentives, encouraging schools to concentrate on the students performing at around the cut-off level and to neglect both those too far below the threshold to have a chance of catching up and those comfortably above it.
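The information loss is easy to see with a toy example (the scores below are invented for illustration): two schools with identical pass rates but very different performance profiles are indistinguishable under the cutoff statistic.

```python
# Two hypothetical schools, scores out of 100, cutoff at 60.
# Both have the same fraction below the cutoff, yet very different
# overall performance -- the pass-rate statistic cannot tell them apart.
cutoff = 60

school_a = [20, 25, 59, 61, 62, 63, 64, 65, 66, 67]  # clustered at the cutoff
school_b = [55, 56, 58, 85, 88, 90, 92, 95, 97, 99]  # strong, with near-misses

def pass_rate(scores, cutoff):
    """Fraction of students at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

def mean(scores):
    return sum(scores) / len(scores)

print(pass_rate(school_a, cutoff))  # 0.7
print(pass_rate(school_b, cutoff))  # 0.7
print(mean(school_a))               # 55.2
print(mean(school_b))               # 81.5
```

The perverse incentive follows directly: nudging a 59 up to a 61 raises the pass rate, while raising a 20 to a 40 - a far larger real improvement - leaves it unchanged.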
comment by thomblake · 2010-08-11T03:03:54.121Z · LW(p) · GW(p)
I assume this is relevant enough to post in the open thread, though it may be old news to some of you. There is a purported proof that P!=NP. A wiki collecting discussion and links to discussions, as well as links to the paper draft, is here.
comment by NancyLebovitz · 2010-08-14T16:11:51.629Z · LW(p) · GW(p)
Way back when MSNBC used to do 6 hour stints with lawyers doing the color commentary on the court case of the day, I was a regular doing the defense side. Once an hour, we would be expected to sit at a desk and do a five minute stint, me and whoever was doing the "former prosecutor" job that day. Our five minutes would consist of two questions, each taking up about 90 seconds including some mischaracterization of the nature of the legal issue, and concluding with the words "how do you feel?" We had ten seconds to respond before the talking head turned elsewhere.
One slow day caught up with us. The "former prosecutor" and I got bored playing cards, waiting for our next stint, and there was absolutely nothing worthwhile to say on the case du jour. We had just finished our stint with Ashleigh Banfield (back when she was blond and didn't wear her "interested" glasses), and some unknown kid in a peculiar Caribbean-green-colored shirt was the next hour's anchor. We decided to goof with the kid by switching sides. I would take the prosecutor side and the former prosecutor would pretend to be the defense.
The anchor, Rick Sanchez, was very nice and solicitous, as they sat us at our desk, and we nodded nicely back, knowing that there would be someone else we didn't know there in an hour. He ran through his question and we responded. Just backward. Rick didn't skip a beat, and we filled our five minutes like good little lawyers. Just backward. Nobody, not Sanchez, not a producer, nobody, even noticed. Our sound bites were good. Our time was filled. And everybody was happy. It meant absolutely nothing.
Link from The Agitator.
comment by [deleted] · 2010-08-12T13:35:44.389Z · LW(p) · GW(p)
Informal poll to ensure I'm not generalizing from one example:
How frequently do you find yourself able to remember how you feel about something, but unable to remember what produced the feeling in the first place (i.e., you remember you hate Steve but can't remember why)?
It seems like this is a cognitive shortcut, giving us access only to the "answer" that's already been computed (how to act vis-à-vis Steve) instead of wasting energy and working memory re-accessing all the data and re-performing the calculation.
Replies from: gwern, kodos96, None, rabidchicken, JoshuaZ, JanetK, ciphergoth, Oligopsony, erratio, wedrifid↑ comment by gwern · 2010-08-13T07:31:37.369Z · LW(p) · GW(p)
I don't do this on LessWrong, but that may be because I don't care enough about LW, and the stakes are too low.
On Wikipedia, though, there are at least 10 editors who, when I see their name come up on my watchlist, I briefly freeze up with a combination of fear, disgust, and anger.
Replies from: kodos96↑ comment by kodos96 · 2010-08-13T07:43:02.184Z · LW(p) · GW(p)
Just curious: why do you consider LW to be lower stakes than wikipedia?
Replies from: gwern↑ comment by gwern · 2010-08-13T08:06:52.529Z · LW(p) · GW(p)
Fewer deletions. On Wikipedia, I have to fight tooth and nail for some things just to remain (and I often fail; I'm still a little bitter about the off-handed deletion of the Man-Faye article a few days ago); on LW, deletion of stuff is so rare that it's a major event when Eliezer deletes an article.
↑ comment by kodos96 · 2010-08-13T06:44:33.555Z · LW(p) · GW(p)
How frequently do you find yourself able to remember how you feel about something, but unable to remember what produced the feeling in the first place (ie: you remember you hate steve but can't remember why)?
I do this constantly. In fact, I do it a lot right here on LW - in reading comment threads, I see a comment by a certain user and have either a positive or negative reaction to the username, based on previous comments of theirs I've read, despite having no recollection of what those comments actually were
I'm not quite sure whether this is a good thing or a bad thing.
↑ comment by [deleted] · 2010-08-14T14:55:01.531Z · LW(p) · GW(p)
Yes -- and I agree that it's probably a cognitive shortcut, because it's also something that happens with purely conceptual ideas. I'll forget the definition of a word, but remember whether it's basically a positive or negative notion. Yay/Boo is surprisingly efficient shorthand for describing anything.
↑ comment by rabidchicken · 2010-08-17T05:07:41.361Z · LW(p) · GW(p)
I would say that most people I know easily fit this heuristic, but I almost never employ it, based on the way I remember people. When I have been in a conflict with someone, I can recall a categorized list of everything I dislike about them, and a few fights we have had, quite easily, and vice versa for people I like. What this means essentially is that I have a very hard time remaining angry / happy with people, because it requires constant use of resources, and it also seems to affect my ability to remember meeting people at all. Since I store memories of other people using events instead of descriptions, if I have never had a particularly eventful interaction with someone, remembering their name or any other info is almost impossible.
↑ comment by JoshuaZ · 2010-08-16T00:24:13.621Z · LW(p) · GW(p)
Occasional but rare. I have more of a problem where I have some feeling for some reason, then find out I was wrong about that reason, and then need to make an effort to adjust my feelings to fit the data. But I generally remember the cause for my feelings. The only exception is that occasionally I'll vaguely remember that some approach to a problem doesn't work at all but won't remember why (it generally turns out that I spent a few days at some point in the past trying to use that method to solve something and got negative results showing that the method wasn't very useful.)
↑ comment by JanetK · 2010-08-13T08:41:04.637Z · LW(p) · GW(p)
This has happened to me but not often.
There is a reason for it. Current thinking is that memories of events are retrieved by a process in the hippocampus (until the memories become substantially re-consolidated). Memories of strong emotional experiences are also retrieved by a process in the amygdala. These are not memories of events but just a link between the emotion and the object that caused it. In recent memories the two are usually connected - if the amygdala retrieves, it prompts the hippocampus to do so also; if the hippocampus does the retrieving, it triggers the amygdala. But the two processes can become disconnected for a particular memory pair. You see X and feel the remembered fear or anger, but not the episodic memory of when and where you felt that emotion towards X. The amygdala retrieves but the hippocampus fails to.
Replies from: None↑ comment by Paul Crowley (ciphergoth) · 2010-08-12T13:54:34.605Z · LW(p) · GW(p)
Yes, absolutely. Well, "hate" is too strong a word, but certainly it's hard to explain to other people: "I have a mental black mark against Steve's name, though I can't tell you why"...
↑ comment by Oligopsony · 2010-08-17T05:19:54.893Z · LW(p) · GW(p)
All the time.
Generally I only remember the cause of my dispositions if I feel it's important in itself. In the case of something like political beliefs I have a commitment to throwing out anything I can't justify through verbal reasoning and reference to fundamental values. With likes and dislikes - from those fundamental values to flavors of ice cream - I don't consider the path that got me there particularly important.
Sometimes I need the justification for ulterior or social reasons. It doesn't particularly matter to me why I like the people I happen to like, but I try to examine them at least enough that I can give them substantive compliments when appropriate, even if this examination doesn't come naturally to me. Cynically, most "debate topics" are like this - you need the justificatory reasoning in order to engage in cocktail conversations.
↑ comment by wedrifid · 2010-08-12T14:32:25.820Z · LW(p) · GW(p)
How frequently do you find yourself able to remember how you feel about something, but unable to remember what produced the feeling in the first place (ie: you remember you hate steve but can't remember why)?
Never. But I'm not typical.
comment by Psy-Kosh · 2010-08-11T20:29:25.715Z · LW(p) · GW(p)
This may be a stupid question, but...
There're a couple of psych effects we have evidence for. Specifically, we have evidence for a sort of consistency effect. For example (relevant to my question), there's apparently evidence that if someone ends up doing small favors for others or being nice to them, they'll be willing to continue to do so, and more easily willing to do bigger things later.
And there's also willpower/niceness "used up"ness effects, whereby apparently (as I understand it), one might do one nice thing then, feeling they "filled up their virtue quota" be nasty elsewhere. (ie, apparently one of the less obvious dangers of religion is you, say, go to church or whatever, and thus later in the day you don't even bother tipping (or tip poorly) when you go to a restaurant because you're "already virtuous" and thus don't have to do any more.)
How is it that we can simultaneously have evidence of these two things when they directly contradict each other? Or am I being totally stupid here?
(EDIT: just to clarify, I meant I was asking "How can it be that the sum total of evidence support both these positions when they seem to me to directly contradict each other?")
Replies from: FAWS↑ comment by FAWS · 2010-08-11T20:43:47.273Z · LW(p) · GW(p)
As far as I understand, they operate on different scales. "Used up" effects operate on much shorter time scales, and consistency effects (often?) operate on more specific things than general niceness.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-08-11T22:15:00.794Z · LW(p) · GW(p)
Ah, if so, then thank you. (Huh, I'd thought consistency effects were supposed to work on short time scales too.)
Replies from: khafra↑ comment by khafra · 2010-08-12T17:10:32.420Z · LW(p) · GW(p)
The classic post says effects persist for two weeks, at least. So it would seem that the response curves of the two effects cross each other at one or two points. I'd be interested to see studies plotting them against each other; it is an interesting dichotomy.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-08-12T23:15:46.191Z · LW(p) · GW(p)
Thanks. And yeah, that's a fair point and question. (Hrm... how exactly would one measure the response curves in any quantitative way, anyway? I.e., sure, "how many people respond after delay X vs. delay Y, etc.", but is there any way to directly measure the strength of the effect rather than simply measuring when it "falls below measurability"?)
comment by blogospheroid · 2010-08-11T04:52:21.880Z · LW(p) · GW(p)
I had a question. Other than Cryonics and PUA, what other "selfish" purposes might be pursued by an extreme rationalist that would not be done by common people?
Thinking on this for quite a while, one unusual thing I could come up with was the entire expat movement: building businesses all across the world and protecting one's wealth from multiple governments. I'm not sure if this should be classified as extreme rationality or just plain old rationality.
Switzerland seems like a great place to start a cryonics setup, as it is already a hub for people maintaining their wealth there. If cryonics were added, then your money and your life could be safe in Switzerland.
Replies from: JanetK, wedrifid
comment by [deleted] · 2010-08-10T17:22:40.362Z · LW(p) · GW(p)
The Last Psychiatrist on a new study of the placebo effect.
I'm having trouble parsing his analysis (it seems disjointed) but the effect is interesting nonetheless.
comment by gaffa · 2010-08-10T13:22:40.900Z · LW(p) · GW(p)
Has anyone read, and could comment on, Scientific Reasoning: The Bayesian Approach by philosophers Howson and Urbach? To me it appears to be the major work on Bayes from within mainstream philosophy of science, but reviews are mixed and I can't really get a feel for its quality and whether it's worth reading.
comment by Morendil · 2010-08-29T22:01:50.296Z · LW(p) · GW(p)
A quick probability math question.
Consider a population of blobs, initially comprising N individual blobs. Each individual blob independently has a probability p of reproducing, just once, spawning exactly one new blob. The next generation (an expected N*p individuals) has the same probability for each individual to spawn one new blob, and so on. Eventually the process will stop, with a total blob population of P.
The question is about the probability distribution for P, given N and p. Is this a well-known probability distribution? If so, which? Even if not, are there things that can be said about it which are mathematically obvious? (Not obvious to me, obviously. I'd be interested in which gaps in my math education I'm revealing by even asking these questions.)
Replies from: Wei_Dai, Perplexed, Pavitra↑ comment by Wei Dai (Wei_Dai) · 2010-08-29T22:20:18.843Z · LW(p) · GW(p)
Here's my solution. The descendants of each initial blob spawn independently of descendants of other initial blobs, so this is a sum of N independent distributions. The number of descendants of one initial blob is obviously the geometric distribution. Googling "sum of independent geometric distributions" gives Negative binomial distribution as the answer.
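A quick simulation bears this out (a sketch; the function names are mine). Each initial blob's chain of descendants is geometric with parameter p, so the total population P has mean N/(1-p), and P - N is negative binomial with r = N:

```python
import random

def chain_descendants(p, rng):
    # Descendants of one initial blob: each blob in the chain
    # independently spawns one child with probability p, so the
    # count is geometrically distributed.
    count = 0
    while rng.random() < p:
        count += 1
    return count

def total_population(n, p, rng):
    # Total population P: the N initial blobs plus all descendants.
    # P - N is a sum of N independent geometrics, i.e. a negative
    # binomial with r = N failures-to-reproduce.
    return n + sum(chain_descendants(p, rng) for _ in range(n))

rng = random.Random(0)
n, p, trials = 10, 0.4, 100_000
mean = sum(total_population(n, p, rng) for _ in range(trials)) / trials
print(round(mean, 3), round(n / (1 - p), 3))  # sample mean vs. N/(1-p)
```

The sample mean should sit very close to N/(1-p) = 16.667 for these parameters.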
Replies from: RobinZ, Morendil, Pavitra↑ comment by Pavitra · 2010-08-29T22:35:14.206Z · LW(p) · GW(p)
I don't think that's right. I don't have the math to show why yet, but my current working hunch says to make explicit your assumptions about whether the initial number of blobs, and the number of generations, are continuous or discrete, because the geometric distribution may not actually be right.
↑ comment by Perplexed · 2010-08-29T22:32:42.330Z · LW(p) · GW(p)
After G generations, each blob has a probability q=p^G of having a descendant. So, it seems to me that P will be distributed as a binomial with q and N as parameters.
Replies from: FAWS↑ comment by FAWS · 2010-08-29T22:51:14.322Z · LW(p) · GW(p)
The blobs don't reproduce with probability p in any given generation, they reproduce with probability p ever. The scenario doesn't require generations in the sense you seem to be thinking of, it could all happen within 1 second, or a first generation blob might reproduce after the highest generation blob that reproduces has already done so.
Replies from: Perplexed↑ comment by Perplexed · 2010-08-29T23:38:47.277Z · LW(p) · GW(p)
Oh, ok. I thought the blobs died each generation. A shrinking population. Instead they go into nursing homes. A growing population which stabilizes once everyone is geriatric.
Got it. Wei pretty clearly has the solution. Negative Binomial distribution
The negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number r of failures occurs.
Pretty damned obvious, actually, that (P-N) is distributed as a negative binomial where r is set to N; failure = failure to reproduce; success = birth.
↑ comment by Pavitra · 2010-08-29T22:14:40.647Z · LW(p) · GW(p)
Offhand, I think you would also need to know the number of generations. I'll have to do some pen-and-paper work to work out what the distribution looks like.
Replies from: FAWS↑ comment by FAWS · 2010-08-29T23:05:18.233Z · LW(p) · GW(p)
Huh? Why? The expected number of blobs is given by N/(1-p), the number of actually realized generations is not a variable, it's determined by N, p and chance. I have no idea how the distribution looks, but the number of actual generations should be one of the things you have a distribution across, not an input.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-29T23:13:46.294Z · LW(p) · GW(p)
Morendil said:
Eventually the process will stop, with a total blob population of P.
Under your model, P=0 with frequency 1, so that doesn't make sense. I think the idea is to stop after a predetermined number of generations and see how many blobs are left.
Edit: No, wait, I see what's going on. You're right.
comment by Sniffnoy · 2010-08-27T12:08:13.984Z · LW(p) · GW(p)
ETA: Ag, just before posting this I realized Hal Finney had already basically raised this same point on the original thread! Still, I think this expands on it a little...
You know, if Wei Dai's alien black box halting decider scenario were to occur, I'm not sure there is any level of black-box evidence that could convince me they were telling the truth. (Note: To make later things make sense, I'm going to assume the claim is that if the program halts, it actually tells you the output, not just that it halts.)
It's not so much that I'm committed to the Turing notion of computability - presumably opening the box should, if they're telling us the truth, allow us to learn this new Turing-uncomputable physics; the problem is that - without hypercomputers ourselves - we don't really have any way of verifying their claim in the first place. Of course the set of halting programs is semicomputable, so we can certainly verify its yes answers (if not quickly), but its no answers can only be verified in the cases where we ourselves have precomputed the answer by proving it for that particular case (or family of cases). In short, we can verify that it's correct on the easy cases, but it's not clear why we should believe it's correct on the hard cases that we actually care about. In other words, we can only verify it by checking it against a precomputed list of our own, and ISTM that if we precomputed it, they could have done the same.
If you're not being careful, you could just justify the claim with "induction", but even without the precomputed-list idea, induction also supports the hypothesis that it simply runs programs for a fixed but really long time and then reports whether they've halted, which doesn't require anything uncomputable and so is more probable a priori. (The fact that Wei Dai said nothing about the computation time makes this a bit trickier, but presumably they may have computation far faster than ours.)
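The "fixed but really long time" cheat is easy to sketch (a toy model; the program class and step bound here are purely illustrative):

```python
class Counter:
    # Toy "program": counts down from n and halts at zero.
    def __init__(self, n):
        self.n = n
    def halted(self):
        return self.n == 0
    def step(self):
        self.n -= 1

def fake_oracle(program, max_steps=1000):
    # The cheat: run the program for a fixed (but large) number of
    # steps and report whether it halted. On every program we can
    # verify by running it ourselves, this agrees with a true
    # halting oracle -- yet it is perfectly computable.
    for _ in range(max_steps):
        if program.halted():
            return True
        program.step()
    return False

print(fake_oracle(Counter(10)))     # a verifiable "yes" answer
print(fake_oracle(Counter(10**6)))  # halts eventually, but is misclassified
```

Any program that outlives the step bound gets a confident "no" - exactly the answers we have no independent way to check.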
Now if it just claimed, say, that it was only necessarily correct for programs using polynomial space, then we'd be in better shape, due to IP=PSPACE; even if we couldn't replicate its results very fast, we could at least verify them quickly. We could actually give it hard cases, that we can't do (quickly) by hand, and then verify that it got them right. (Except actually I'm brushing over some problems here - IIRC it's uncomputable to determine whether a program will use polynomial space in the first place, so while it presumably doesn't have a precomputed list of inputs for the different programs, it might well have a precomputed list of which programs below a certain length are polynomial-space in the first place! And then just run those much faster than we can. We could just make it an oracle for a single PSPACE-complete problem, but then of course there's nothing uncomputable going on, so there's no real problem in the first place; it could just be a really fast brute-force solver. This would allow us to verify quickly that they have much more advanced computers than us, that can solve PSPACE-complete problems in an instant, but that's not nearly as surprising. Not sure if there's any way to make this example really work as intended.)
When we test our own programs, we have some idea of what's in the black box - we have some idea how they work, and we are just verifying that we haven't screwed it up. And on those, since we have some idea of the algorithm, we can construct tricky test cases to check the parts that are most likely to screw it up. And even if we're verifying a program from someone untrustworthy, SFAICT this is based on inferring what ways the program probably works, or what approaches look right but fail on hard cases someone may have come up with, or key steps it will probably rely on, or ways it might cheat, and writing test cases for those. Of course you can't rule out arbitrarily advanced cheats this way, but we have other evidence against those - they'd take up too much space, or they'd be even harder than doing it correctly. In the case of a halting oracle, the problem is that there's no point at which such a ridiculous cheat would seem even harder than doing it correctly.
So until the black box is opened, I'm not sure that this is a good argument against the universal prior, though I suppose it does still argue that the universal prior + Bayes doesn't quite capture our intuitive notion of induction.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-08-27T21:25:19.781Z · LW(p) · GW(p)
I'm not sure there is any level of black-box evidence that could convince me they were telling the truth.
I'm afraid I can't do much better at this point than to cite Harvey Friedman's position on this. (He posted his alien crystal ball scenario before I posted my alien black box, and obviously knows a lot more about this stuff than I do.)
I now come to the various deep questions - conceptually and technically - that arise when attempting to make "proofs" that the Crystal Ball from the hyperaliens is "the real thing" and that the information gleaned from its use is "proved".
I believe very strongly that rather subtle probabilistic arguments combined with rather clever TMs, will show, in various "rigorous" senses, that the Crystal Ball is the real thing (provided it is in fact the real thing).
Here are the relevant discussion threads on the Foundations of Mathematics mailing list:
- http://cs.nyu.edu/pipermail/fom/2004-February/subject.html#7934
- http://cs.nyu.edu/pipermail/fom/2004-March/subject.html#8003
ETA:
In the case of a halting oracle, the problem there is no point where it would seem that such a ridiculous cheat would be even harder than doing it correctly.
Assuming the laws of physics actually do allow a halting oracle to be implemented, then at some point it would be easier to just implement one than to do these ridiculous cheats, right? As we rule out various possible cheats, that intuitively raises our credence that a halting oracle can be physically implemented, which contradicts the universal prior.
Replies from: Sniffnoy, cousin_it↑ comment by Sniffnoy · 2010-09-04T08:10:00.382Z · LW(p) · GW(p)
...having actually read those now, those threads didn't seem very helpful. :-/
Assuming the laws of physics actually do allow a halting oracle to be implemented, then at some point it would be easier to just implement one than to do these ridiculous cheats, right? As we rule out various possible cheats, that intuitively raises our credence that a halting oracle can be physically implemented, which contradicts the universal prior.
Hm, indeed. Actually, it occurred to me after writing this that one thing to look at might be the size of the device, since there are, as far as we know, limits on how small you can make your computational units. No idea how you'd put that into action, though.
comment by gwern · 2010-08-26T11:16:23.979Z · LW(p) · GW(p)
Last open thread I linked a Wired article on an argument-diagramming tool called ACH being open-sourced.
It's now available: http://competinghypotheses.org/
(PHP, apparently. Blech!)
Replies from: None
comment by [deleted] · 2010-08-26T05:39:54.138Z · LW(p) · GW(p)
I just read and liked "Pascal's mugging." It was written a few years ago, and the wiki is pretty spare. What's the state of the art on this problem?
Replies from: gwern, gwern↑ comment by gwern · 2010-08-26T11:10:54.561Z · LW(p) · GW(p)
I haven't seen much response to it. There's a reply in Analysis by Baumann who takes a cheap out by saying simply that one cannot provide the probability in advance, that it's 'extremely implausible'.
I have an unfinished essay where I argue that as presented the problem is asking for a uniform distribution over an infinity, so you cannot give the probability in advance, but I haven't yet come up with a convincing argument why you would want your probability to scale down in proportion as the mugger's offer scales up.
That is: it's easy to show that scaling disproportionately leads to another mugging. If you scale superlinearly, then the mugging can be broken up into an ensemble of offers that add to a mugging. If you scale sublinearly, you will refuse sensible offers that are broken up.
But I haven't come up with any deeper justification for linearly scaling other than 'this apparently arbitrary numeric procedure avoids 3 problems'. I've sort of given up on it, as you can see from the parlous state of my essay.
Replies from: None↑ comment by [deleted] · 2010-08-26T15:28:28.179Z · LW(p) · GW(p)
Thanks. Here's my fresh and uneducated opinion.
I see four kinds of answers to the mugging:
1. We're boned
2. Some kind of hack in the decision-making process
3. Some kind of hack in the mathematics
4. "Head on." That is, prove that the expected disutility of a given threat is bounded independent of the size of the threat.
Here's my analysis in the sense of 4.; tell me if I'm making a common mistake. We are worried that P(agent can do H amount of harm | agent threatens to do H amount of harm) times H can be arbitrarily large. As Tarleton pointed out in the 2007 post, any details beyond H about the scenario we're being threatened with are a distraction (right? That actually doesn't seem to be the implicit assumption of your draft, or of Hanson's comment, etc.)
By Bayes the quantity in question is the same as
P(threat | ability)/P(threat) x P(ability) x H
Our hope is that we can prove this quantity is actually bounded independent of H (but of course not independent of the agent making the threat). I'll leave aside the fact that the probability that such a proof contains a mistake is certainly bounded below.
P(threaten H) is the probability that a certain computer program (the agent making the threat) will give a certain output (the threat). My feeling about this number is that it is medium sized if H has low complexity (such as 3^^^3) and tiny if H has high complexity (such as some of the numbers within 10% of 3^^^3). That is, complex threats have more credibility. I'm comforted by the fact that, by the definition of complexity, it would take a long time for an agent to articulate his complex threat. So let's assume P(threaten H) is medium-sized, as in the original version where H = 3^^^3 x value of human not being tortured.
It seems like wishful thinking that P(threat | ability) should shrink with H. Let's assume this is also medium sized and does not depend on H.
So I think the question boils down to how fast P(agent can do H amount of harm) shrinks with H. If it's O(1/H) we're OK, and if it's larger we're boned.
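Here is a tiny numeric sketch of that boundary (the conditional probabilities are made-up placeholders standing in for "medium-sized"):

```python
def expected_disutility(h, p_ability, p_threat_given_ability=0.5, p_threat=0.5):
    # P(ability | threat) * H, expanded via Bayes' rule as in the
    # decomposition above: P(threat | ability)/P(threat) * P(ability) * H.
    p_ability_given_threat = p_threat_given_ability * p_ability / p_threat
    return p_ability_given_threat * h

# If P(ability) shrinks like 1/H, expected disutility stays bounded:
for h in (10, 10**3, 10**9):
    print(expected_disutility(h, p_ability=1.0 / h))  # stays constant

# If it shrinks slower, e.g. like 1/sqrt(H), the mugger wins:
for h in (10, 10**3, 10**9):
    print(expected_disutility(h, p_ability=h ** -0.5))  # grows without bound
```

With O(1/H) decay the product is bounded regardless of the size of the threat; anything slower lets the mugger drive the expected disutility as high as it likes.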
Replies from: Pavitra, gwern↑ comment by Pavitra · 2010-08-27T05:48:45.959Z · LW(p) · GW(p)
As long as we're all chipping in, here's my take:
(1) Even if the correct answer is to hand over the money, we should expect to feel an intuitive sense that doing so is the wrong answer. A credible threat to inflict that much disutility would never have happened in the ancestral environment, but false threats to do so have happened rather often. That being the case, the following is probably rationalization rather than rationality:
(2) Consider the proposition that, at some point in my life, someone will try to Pascal's-mug me and actually back their threats up. In this case, I would still expect to receive a much larger number of false threats over the course of my lifetime. If I hand over all my money to the first mugger without proper verification, I won't be able to pay up when the real threat comes around.
Replies from: None↑ comment by [deleted] · 2010-08-27T06:16:02.110Z · LW(p) · GW(p)
I think that your (2) is a proof that handing over the money is the wrong answer. My understanding is that the problem is whether this means that any AI that runs on the basic package we sometimes envision hazily -- prior, (unbounded) utility function, algorithm for choosing based somehow on multiplying the former by the latter -- is boned.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-27T06:44:23.770Z · LW(p) · GW(p)
I thought that my (2) was a proof that a prior-and-utility system will correctly decide to investigate the claim to see whether it's credible.
Replies from: None↑ comment by [deleted] · 2010-08-27T20:15:46.884Z · LW(p) · GW(p)
But what a prior-and-utility system means by "credible" is that the expected disutility is large. If a blackmailer can, at finite cost to itself, put our AI in a situation with arbitrarily high expected disutility, then our AI is boned.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-27T20:25:51.839Z · LW(p) · GW(p)
Ah, you're worried about a blackmailer that can actually follow up on that threat. I would point out that humans usually pay ransoms, so it's not exactly making a different decision than we would in the same situation.
Or, the AI might anticipate the problem and self-modify in advance to never submit to threats.
Replies from: None↑ comment by [deleted] · 2010-08-27T20:37:38.394Z · LW(p) · GW(p)
I'm worried about a blackmailer that can with positive probability follow up on that threat.
Yes, humans behave in the same way, at least according to economists. We pay ransoms when the probability of the threat being carried out, times the disutility that would result from the threat being carried out, is greater than the disutility of paying the ransom. The difference is that for human-scale threats, this expected disutility does seem to be bounded.
The AI might anticipate the problem and self-modify to never submit to threats
That could mean one of at least two things: either the AI starts to work according to the rules of a (hitherto not conceived?) non-prior-and-utility system, or the AI calibrates its prior and its utility function so that it doesn't submit to (some) threats. I think the question is whether something like the second idea can work.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-27T20:51:16.987Z · LW(p) · GW(p)
No, see, that's different.
If you're dealing with a blackmailer that might be able to carry out their threats, then you investigate whether they can or not. The blackmailer themselves might assist you with this, since it's in their interest to show that their threat is credible.
Allow me to demonstrate: Give $100 to the EFF or I'll blow up the sun. Do you now assign a higher expected-value utility to giving $100 to the EFF, or to giving the same $100 instead to SIAI? If I blew up the moon as a warning shot, would that change your mind?
Replies from: None↑ comment by [deleted] · 2010-08-27T21:06:44.186Z · LW(p) · GW(p)
If you're dealing with a blackmailer that might be able to carry out their threats, then you investigate whether they can or not. The blackmailer themselves might assist you with this, since it's in their interest to show that their threat is credible.
The result of such an investigation might raise or lower P(threat can be carried out). This doesn't change the shape of the question: can a blackmailer issue a threat with P(threat can be carried out) x U(threat is carried out) > H, for all H? Can it do so at cost to itself that is bounded independent of H?
Allow me to demonstrate: Give $100 to the EFF or I'll blow up the sun.
I refuse. According to economists, I have just revealed a preference:
P(Pavitra can blow up the sun) x U(Sun) < U(100$)
If I blew up the moon as a warning shot, would that change your mind?
Yes. Now I have revealed
P(Pavitra can blow up the sun | Pavitra has blown up the moon) x U(Sun) > U(100$)
Replies from: Pavitra↑ comment by Pavitra · 2010-08-27T21:09:39.239Z · LW(p) · GW(p)
I have just revealed a preference:
P(Pavitra can blow up the sun) x U(Sun) < U(100$)
My point is that U($100) is partially dependent on P(Mallory can blow up the sun) x U(Sun), for all values of Mallory and Sun such that Mallory is demanding $100 not to blow up the sun. If P(M_1 can heliocide) is large enough to matter, there's a very good chance that P(M_2 can heliocide) is too. Credible threats do not occur in a vacuum.
Replies from: None↑ comment by [deleted] · 2010-08-27T21:13:46.523Z · LW(p) · GW(p)
I don't understand your points, can you expand them?
In my inequalities, P and U denoted my subjective probabilities and utilities, in case that wasn't clear.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-27T21:33:16.940Z · LW(p) · GW(p)
The fact that probability and utility are subjective was perfectly clear to me.
I don't know what else to say except to reiterate my original point, which I don't feel you're addressing:
Consider the proposition that, at some point in my life, someone will try to Pascal's-mug me and actually back their threats up. In this case, I would still expect to receive a much larger number of false threats over the course of my lifetime. If I hand over all my money to the first mugger without proper verification, I won't be able to pay up when the real threat comes around.
Replies from: None
↑ comment by [deleted] · 2010-08-27T22:18:28.997Z · LW(p) · GW(p)
It's not even clear to me that you disagree with me. I am proposing a formulation (not a solution!) of Pascal's mugging problem: if a mugger can issue threats of arbitrarily high expected disutility, then a priors-and-utilities AI is boned. (A little more precisely: then the mugger can extract an arbitrarily large amount of utils from the P-and-U AI.) Are you saying that this statement is false, or just that it leaves out an essential aspect of Pascal's mugging? Or something else?
Replies from: Pavitra↑ comment by Pavitra · 2010-08-27T23:03:54.088Z · LW(p) · GW(p)
I'm saying that this statement is false. The mugger needs also to somehow persuade the AI of the nonexistence of other muggers of similar credibility.
In the real world, muggers usually accomplish this by raising their own credibility beyond the "omg i can blow up the sun" level, such as by brandishing a weapon.
Replies from: None↑ comment by [deleted] · 2010-08-27T23:22:03.513Z · LW(p) · GW(p)
OK let me be a little more careful. The expected disutility the AI associates to a threat is
EU(threat) = P(threat will be carried out) x U(threat will be carried out) + P(threat will not be carried out) x U(threat will not be carried out)
I think that the existence of other muggers with bigger weapons, or just of other dangers and opportunities generally, is accounted for in the second summand.
Now does the formulation look OK to you?
Replies from: Pavitra↑ comment by Pavitra · 2010-08-28T01:29:46.929Z · LW(p) · GW(p)
That formulation seems to fail to distinguish (ransom paid)&(threat not carried out) from (ransom not paid)&(threat not carried out).
There are two courses of actions being considered: pay ransom or don't pay ransom.
EU(pay ransom) = P(no later real threat) * U(sun safe) + P(later real threat) * U(sun explodes)
EU(don't pay ransom) = P(threat fake) * ( P(no later real threat) + P(later real threat) * P(later real threat correctly identified as real | later real threat) ) * U(sun safe) + ( P(threat real) + P(threat fake) * P(later real threat) * P(later real threat incorrectly identified as fake | later real threat) ) * U(sun explodes)
That's completely unreadable. I need symbolic abbreviations.
R=EU(pay ransom); r=EU(don't pay ransom)
S=U(sun safe); s=U(sun explodes)
T=P(threat real); t=P(threat fake)
L=P(later real threat); M=P(no later real threat)
i=P(later real threat correctly identified as real | later real threat)
j=P(later real threat incorrectly identified as fake | later real threat)
Then:
R = M*S + L*s
r = t*(M + L*i)*S + (T + t*L*j)*s
(p.s.: We really need a preview feature.)
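Plugging illustrative numbers into those two formulas makes the comparison concrete (all values invented; note that R as written omits the cost of the ransom itself):

```python
def eu_pay(S, s, L, M):
    # R = M*S + L*s: pay the ransom; the sun only explodes if a
    # later real threat arrives.
    return M * S + L * s

def eu_refuse(S, s, T, t, L, M, i, j):
    # r = t*(M + L*i)*S + (T + t*L*j)*s: refuse; the sun survives
    # only if this threat is fake and any later real threat is
    # correctly identified as real.
    return t * (M + L * i) * S + (T + t * L * j) * s

# Illustrative values: S = 0 (status quo), s = -100; the threat is
# almost certainly fake (T = 0.001, t = 0.999), later real threats
# are rare (L = 0.01, M = 0.99) and usually spotted (i = 0.9, j = 0.1).
R = eu_pay(0, -100, 0.01, 0.99)
r = eu_refuse(0, -100, 0.001, 0.999, 0.01, 0.99, 0.9, 0.1)
print(R, r)  # with these numbers, refusing comes out ahead
```

With a fake-looking threat and decent odds of spotting a later real one, r > R; cranking up T or down i flips the comparison.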
Replies from: None↑ comment by [deleted] · 2010-08-28T02:38:46.908Z · LW(p) · GW(p)
Why so much focus on future threats to the sun? Are you going to argue, by analogy with the prisoner's dilemma, that the iterated Pascal's mugging is easier to solve than the one-shot Pascal's mugging?
That formulation seems to fail to distinguish (ransom paid)&(threat not carried out) from (ransom not paid)&(threat not carried out).
I thought that, either by definition or as a simplifying assumption, EU(ransom paid & threat not carried out) = current utility - size of ransom, and that EU(ransom not paid & threat not carried out) = current utility.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-28T03:19:19.396Z · LW(p) · GW(p)
My primary thesis is that the iterated Pascal's mugging is much more likely to approximate any given real-world situation than the one-shot Pascal's mugging, and that focusing on the latter is likely to lead by availability heuristic bias to people making bad decisions on important issues.
Replies from: None↑ comment by [deleted] · 2010-08-28T03:37:30.086Z · LW(p) · GW(p)
My primary thesis is that if you have programmed a purported god-like and friendly AI that you know will do poorly in one-shot Pascal's mugging, then you should not turn it on. Even if you know it will do well in other variations on Pascal's mugging.
My secondary thesis comes from Polya: "If there's a problem that you can't solve, then there's a simpler problem that you can solve. Find it!" Solutions to, failed solutions to, and ideas about one-shot Pascal's mugging will illuminate features about iterated Pascal's mugging and also about many given real-world situations.
("One-shot", "iterated"...If these are even good names!)
Replies from: Pavitra↑ comment by Pavitra · 2010-08-28T03:44:22.152Z · LW(p) · GW(p)
I'm not persuaded that paying the ransom is doing poorly on the one-shot. And if it predictably does the wrong thing, in what sense is it Friendly?
Replies from: None↑ comment by [deleted] · 2010-08-28T03:52:24.608Z · LW(p) · GW(p)
Forget it. I'm just weirded out that you would respond to "here's a tentative formalization of a simple version of Pascal's mugging" with "even thinking about it is dangerous." I don't agree and I don't understand the mindset.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-28T04:08:33.837Z · LW(p) · GW(p)
I don't mean to say that thinking about the one-shot is dangerous, only that grossly overemphasizing it relative to the iterated might be.
I hear about the one-shot all the time, and the iterated not at all, and I think the iterated is more likely to come up than the one-shot, and I think the iterated is easier to solve than the one-shot, so in all I think it's completely reasonable for me to want to emphasize the iterated.
Replies from: None↑ comment by [deleted] · 2010-08-28T04:15:14.814Z · LW(p) · GW(p)
Granted! And
I think the iterated is easier to solve than the one-shot
tell me more.
Replies from: Pavitra↑ comment by Pavitra · 2010-08-28T04:22:13.832Z · LW(p) · GW(p)
The iterated has an easy-to-accept-intuitively solution: don't just randomly accept blackmail from anyone who offers it, but rather investigate first to see if they constitute a credible threat.
The one-shot Pascal's Mugging, like most one-shot games discussed in game theory, has a harder-to-stomach dominant strategy: pay the ransom, because the mere claim, considered as Bayesian evidence, promotes the threat to much more likely than the reciprocal of its utility-magnitude.
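To make that "harder-to-stomach" conclusion concrete, here is a toy expected-utility comparison (all numbers are my own illustrative assumptions, not anything from the thread):

```python
# Toy one-shot Pascal's mugging: compare the expected utility of paying
# the ransom vs. refusing. All numbers below are illustrative assumptions.

def expected_utility(pay: bool, p_threat_real: float, harm: float, ransom: float) -> float:
    """Expected utility relative to the status quo (0)."""
    if pay:
        return -ransom                 # lose the ransom; threat (if real) averted
    return -p_threat_real * harm       # refuse: suffer the harm with probability p

p = 1e-12       # tiny probability the mugger can actually deliver
harm = 1e20     # astronomically large threatened disutility
ransom = 5.0    # small ransom

pay_eu = expected_utility(True, p, harm, ransom)
refuse_eu = expected_utility(False, p, harm, ransom)
assert pay_eu > refuse_eu   # -5 beats -1e8: the huge harm dominates the tiny p
```

The sting is that no plausible smallness of p rescues refusal once the claimed harm grows faster than the probability shrinks.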
↑ comment by gwern · 2010-08-27T04:57:09.288Z · LW(p) · GW(p)
That is, complex threats have more credibility.
I don't quite follow this. Assuming we're using one of the universal priors based on Turing machine enumerations, then an agent which consists of (3^^^3 threat + no ability) is much shorter and much more likely than an agent which consists of (~.10*3^^^3 threat + ability). The more complex the threat, the less space there is for executing it.
Replies from: None↑ comment by [deleted] · 2010-08-27T05:30:02.969Z · LW(p) · GW(p)
If I disagree, it's for a very minor reason, and with only a little confidence. (P(threat) is short for P(threat|no information about ability).) But you're saying the case for P(threaten H) being bounded below (and its reciprocal being bounded above) is even stronger than I thought, right?
Another way to argue that P(threaten H) should be medium-sized: at least in real life, muggings have a time-limit. There are finitely many threats of a hundred words or less, and so our prior probability that we will one day receive such a threat is bounded below.
Another way to argue that the real issue is P(ability H): our AI might single you out and compute P(gwern will do H harm) = P(gwern will do H harm | gwern can do H harm) x P(gwern can do H harm). It seems like you have an interest in convincing the AI that P(gwern can do H harm) x H is bounded above.
↑ comment by gwern · 2010-08-28T18:07:25.763Z · LW(p) · GW(p)
While raking, I think I finally thought of a proof that the before-offer-probability can't be known.
The question is basically 'what fraction of all Turing machines making an offer (which is accepted) will then output a certain result?'
We could rewrite this as 'what is the probability that a random Turing machine will output a certain result?'
We could then devise a rewriting of all those Turing machines into Turing machines that halt or not when their offer is accepted (eg. halting might = delivering, not halting = welshing on the deal. This is like Rice's theorem).
Now we are asking 'what fraction of all these Turing machines will halt?'
However, this is asking 'what is Chaitin's constant for this rewritten set of Turing machines?' and that is uncomputable!
Since Turing machine-based agents are a subset of all agents that might try to employ Pascal's Mugging (even if we won't grant that agents must be Turing machines), the probability is at least partially uncomputable. A decision procedure which entails uncomputability is unacceptable, so we reject giving the probability in advance, and so our probability must be contingent on the offer's details (like its payoff).
Thoughts?
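One loose illustration of why such a fraction resists computation (my own toy stand-ins, not gwern's formalism): you can approximate the halting fraction of a program family from below by raising a step budget, but no finite budget ever certifies a non-halting machine as such.

```python
# Toy stand-ins for Turing machines: generators that either halt or loop.
# The halting fraction can only be approached from below (like Chaitin's
# constant); no finite step budget distinguishes "loops forever" from "slow".

def run_halts(machine, budget):
    """Run a machine for at most `budget` steps; True if it halted in time."""
    steps = 0
    for _ in machine():
        steps += 1
        if steps >= budget:
            return False
    return True

def looper():                      # never halts
    while True:
        yield

def quick():                       # halts immediately
    return iter(())

def slow(n):                       # halts after n steps
    def gen():
        for _ in range(n):
            yield
    return gen

machines = [looper, quick, slow(10), slow(1000)]

estimates = []
for budget in (5, 50, 5000):
    halted = sum(run_halts(m, budget) for m in machines)
    estimates.append(halted / len(machines))

# The estimate climbs 0.25 -> 0.5 -> 0.75, but the true fraction (0.75 here)
# is only known because we built the machines ourselves.
```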
Replies from: Wei_Dai, Vladimir_Nesov, Perplexed↑ comment by Wei Dai (Wei_Dai) · 2010-08-28T20:27:36.808Z · LW(p) · GW(p)
I think Nesov is right, you've basically (re)discovered that the universal prior is uncomputable and thought that this result is related to Pascal's Mugging because you made the discovery while thinking about Pascal's Mugging. Pascal's Mugging seems to be more about the utility function having to be bounded in some way.
You might be interested in this thread, where I talked about how a computable decision process might be able to use an uncomputable prior:
↑ comment by Vladimir_Nesov · 2010-08-28T18:36:53.104Z · LW(p) · GW(p)
It seems to be an argument against the possibility of making any decision, and hence not a valid argument about this particular decision. Under the same assumptions, you could in principle formalize any situation in this way. (The problem boils down to the uncomputability of the universal prior itself.)
Besides, not making the decision is not an option, so you have to fall back on some default decision when you don't know how to choose, but where does this default come from?
Replies from: gwern↑ comment by gwern · 2010-10-12T20:29:58.644Z · LW(p) · GW(p)
I take it as an argument against making perfect decisions. If perfection is uncomputable, then any computable agent is not perfect in some way.
The question is what imperfection do we want our agent to have? This might be the deep justification for choosing to scale probability by utility that I was looking for. Scaling linearly corresponds to being willing to lose a fixed amount to mugging, scaling superlinearly corresponds to not being willing to lose any genuine offer, and scaling sublinearly corresponds to not being willing to ever be fooled. Or something like that. The details need some work.
↑ comment by Perplexed · 2010-08-28T18:21:50.382Z · LW(p) · GW(p)
In order to make a decision, we do not always need an exact probability: sometimes just knowing that a probability is less than, say, 0.5 is enough to determine the correct decision. So, even though an exact probability p may be incomputable, that doesn't mean that the truth value of the statement "p<0.1" can not be computed (for some particular case). And that computation may be all we need.
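A minimal sketch of that point, with illustrative numbers of my own: sometimes the bound alone settles the decision, and sometimes it doesn't.

```python
# Perplexed's point in code: an upper bound on an uncomputable probability
# can still be decision-relevant. Numbers are illustrative assumptions.

def decide_from_bound(p_upper, harm, ransom):
    """Return False ("don't pay") if the bound alone settles it, else None."""
    if p_upper * harm < ransom:
        return False    # even the worst case of refusing beats paying
    return None         # bound too loose; the exact p would be needed

assert decide_from_bound(0.01, 100.0, 5.0) is False    # 0.01 * 100 = 1 < 5
assert decide_from_bound(0.01, 10000.0, 5.0) is None   # a 100x bigger threat
```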
That said, I'm not sure exactly how to interpret "A decision procedure which entails uncomputability is unacceptable." Unacceptable to whom? Do decision procedures have to be deterministic? To be algorithms? To be recursive? To be guaranteed to terminate in a finite time? To be guaranteed to terminate in a bounded time? To be guaranteed to terminate by the deadline for making a decision?
Replies from: gwern↑ comment by gwern · 2010-08-28T18:40:19.890Z · LW(p) · GW(p)
In order to make a decision, we do not always need an exact probability: sometimes just knowing that a probability is less than, say, 0.5 is enough to determine the correct decision.
Alright, so you compute away and determine that the upper bound on Chaitin's constant for your needed formalism is 0.01. The mugger then multiplies his offering by 100, and proceeds to mug you, no? (After all, you don't know whether the right probability is 0.01 or actually some smaller number.)
That said, I'm not sure exactly how to interpret "A decision procedure which entails uncomputability is unacceptable."
This is pretty intuitive to me - a decision procedure which cannot be computed cannot make decisions, and a decision procedure which cannot make decisions cannot do anything. I mean, do you have any reason to think that the optimal, correct, decision theory is uncomputable?
Replies from: Perplexed↑ comment by Perplexed · 2010-08-28T19:16:39.521Z · LW(p) · GW(p)
I have no idea whether we are even talking about the same problem. (Probably not, since my thinking did not arise from raking). But you do seem to be suggesting that the multiplication by 100 does not alter the upper bound on the probability. As I read the wiki article on "Pascal's Mugging", Robin Hanson suggests that it does. Assuming, of course, that by "his offering" you mean the amount of disutility he threatens. And the multiplication by 100 does also affect the number (in this example 0.01) which I need to know whether p is less than. Which strikes me as the real point.
This whole subject seems bizarre to me. Are we assuming that this mugger has Omega-like psy powers? Why? If not, how does my upper bound calculation and its timing have an effect on his "offer"? I seem to have walked into the middle of a conversation with no way from the context to guess what went before.
comment by ata · 2010-08-21T20:09:35.673Z · LW(p) · GW(p)
Apparently AGI, transhumanism, and the Singularity are a massive statist/corporate conspiracy, and there exists a vast "AGI Manhattan Project". Neat.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-08-21T20:29:51.002Z · LW(p) · GW(p)
Looks like the Illuminati have deleted page 8, which I assume is where all the juiciest stuff is!
comment by nhamann · 2010-08-17T21:38:52.705Z · LW(p) · GW(p)
I just came across this article called "Thank God for the New Atheists," written by Michael Dowd, and I can't tell if his views are just twisted or if he is very subtly trying to convert religious folks into epistemic rationalists. Sample quotes include:
Religion Is About Right Relationship with Reality, Not the Supernatural
...
Because the New Atheists put their faith, their confidence, in an evidentially formed and continuously tested view of the world, these critics of religion are well positioned to see what’s real and what’s important today. It is thus time for religious people to listen to the New Atheists—and to listen as if they were speaking with God's voice, because in my view they are!
...
...we cannot understand religion and religious differences if we don’t understand how the human mind instinctually relationalizes—that is, personifies—reality.
...
God is still speaking, and facts are God’s native tongue—not Hebrew or Greek or King James English.
Ah, yes. The only way to true religious understanding is through science and realizing our anthropomorphic biases...uh, wait. What? This guy seems to be calling for a religion grounded in science and rationality, but then he says things like:
The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God
So I'm confused. It makes me think that he's a crypto-rationalist trying to convert religious believers into rationalists. If that's true, it does seem like really effective strategy.
Replies from: ocr-fork↑ comment by ocr-fork · 2010-08-18T06:07:14.712Z · LW(p) · GW(p)
Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.
The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.
The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don't even need it...
He's also written a book called "Thank God for Evolution," in which he sprays God all over science to make it more palatable to christians.
I dedicate this book to the glory of God. Not any "God" we may think about, speak about, believe in, or deny, but the one true God we all know and experience.
If he really is trying to deconvert people, I suspect it won't work. They won't take the final step from his pleasant, featureless god to no god, because the featureless one gives them a warm glow without any intellectual conflict.
comment by thomblake · 2010-08-16T20:04:33.287Z · LW(p) · GW(p)
File under "Less Wrong will rot your brain":
At my day job, I had to come up with and code an algorithm which assigned numbers to a list of items according to a list of sometimes-conflicting rules. For example, I'd have a list of 24 things that would have to be given the numbers 1-3 (to split them up into groups) according to some crazy rules.
The first algorithm I came up with was:
- model each rule as a utility function over dollars
- instantiate an ideally rational agent for each rule, which used the rule as its utility function.
- give each agent a number of dollars (changing the amount of money they have is one way to change the weighting of the rules during conflicts)
- let them negotiate until some equilibrium is reached.
Of course, I did not try to implement this algorithm. Rather, I ended up solving the problem (mostly) using about 100 lines of perl and no AI.
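For the curious, a rough sketch of how the unimplemented idea might look. This is my own simplification: rather than simulating real negotiation, each rule's dollars become a weight and we hill-climb on the weighted sum of rule utilities. The two rules shown are invented for illustration.

```python
import random

random.seed(0)

def assign_groups(n_items, n_groups, rules, iterations=2000):
    """rules: list of (dollars, utility_fn); utility_fn scores an assignment
    (a list mapping item index -> group number). Dollars act as weights."""
    current = [random.randrange(n_groups) for _ in range(n_items)]

    def total(a):
        return sum(dollars * u(a) for dollars, u in rules)

    for _ in range(iterations):
        candidate = current[:]
        candidate[random.randrange(n_items)] = random.randrange(n_groups)
        if total(candidate) >= total(current):   # greedy stand-in for "negotiation"
            current = candidate
    return current

# Two invented rules: prefer balanced group sizes; keep items 0 and 1 apart.
def balanced(a):
    counts = [a.count(g) for g in range(3)]      # 3 groups, matching the call below
    return min(counts) - max(counts)

def separate_first_two(a):
    return 1 if a[0] != a[1] else 0

groups = assign_groups(24, 3, [(1.0, balanced), (5.0, separate_first_two)])
```

Actual bargaining between agents (e.g. Nash bargaining over the dollar endowments) would be far more involved; the weighting trick only captures the "money sets the rule's clout" part.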
Replies from: rabidchicken, PhilGoetz↑ comment by rabidchicken · 2010-08-17T04:06:20.383Z · LW(p) · GW(p)
This is what happens to me whenever I start to write a difficult program in C++: I start by building an innovative system which solves the problems with minimal intervention on my part, and then eventually set up a kludge using heuristics which gets the same thing done in a fraction of the time.
comment by JoshuaZ · 2010-08-13T18:26:23.269Z · LW(p) · GW(p)
Possible new barriers to Moore's Law where small chips won't have enough power to use the maximum transistor density they have available. The article also discusses how other apparent barriers (such as leaky gates) have been overcome in the past including this amusing line:
“The number of people predicting the end of Moore’s Law doubles every two years,” quips the Scandinavian Tryggve Fossum.
comment by Matt_Simpson · 2010-08-13T18:04:40.725Z · LW(p) · GW(p)
What does Less Wrong know about the Myers-Briggs personality type indicator? My sense is that it's a useful model for some things, but I'm most interested in how useful it is for relationships. This site suggests that each personality type pair has a specific type of relationship, while this site only comments on what the ideal pair is for any given type. But the two sites disagree about what the ideal pairings are.
Replies from: lsparrish, wedrifid↑ comment by lsparrish · 2010-08-13T21:31:56.201Z · LW(p) · GW(p)
Personality Page is not mainstream Jungian; they seem to be of the opinion that sharing a dominant trait of opposite attitude is most beneficial. More mainstream MBTI sites will tend to agree with Socionics that completely opposite traits are the most complementary (for example Fe and Ti) but disagree on which of these traits correlates to a J or P.
So if you go by the theory that J/P correlates to extroverted conscious traits (the MBTI position), INTP and ESFJ are complementary. If you go by the theory that J/P correlates to the dominant trait, INTJ is ESFJ's dual. Socionics sites tend to take this position.
Note that while these letters should be completely exclusive for introverts, many of the introvert profiles seem to be the same (or suspiciously similar) between the systems, particularly with sensing types. So an (alleged) ISFP MBTI may actually be ISFP in Socionics.
That would imply that someone is wrong/confused. Either the profiles are uselessly vague (the Forer effect: no better than astrology charts for identifying this particular feature), the traits aren't actually real empirical phenomena (Si1 is indistinguishable from Se2), or the traits are being defined differently (such that Si1 in system A is actually Se2 in system B).
To confuse/complicate matters more, all the traits have various features in common with each other: S+T are pragmatic and "hard", T+N are theoretical/consequence-based, F+N are abstract and ideal, F+S are aesthetic and social, just as T+F are judging and S+N are perceiving. So profiles could have varying accuracy while describing surface aspects of real traits, yet not distinguishing them from each other well enough to be useful.
Now, if you just want to use this to find a prospective spouse or best friend who is your dual type, and don't care so much about the theoretical correctness of who is what type, there's a work-around: Find someone who appears opposite on the first three letters, then see if they make you comfortable or not. If they have shared values and a compatible sense of humor, chances are relatively high that they are a dual type rather than a conflict type.
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2010-08-14T00:25:34.879Z · LW(p) · GW(p)
But which view (if any) makes good predictions in the relationship department?
EDIT: A quick survey of abstracts on google scholar suggests that marital satisfaction is not related to the MB personality types of the couples.
Replies from: lsparrish↑ comment by lsparrish · 2010-08-15T17:44:50.519Z · LW(p) · GW(p)
That is interesting. I would expect there to be some significant differences in relationship quality among MB types even if the types are only somewhat correlated (under the assumption that socionics is correct).
One of the better sites on the topic is Rick DeLong's Socionics.us. He says there is only roughly a 30% correlation between MBTI types and Socionics types. Boulakov is also skeptical of the validity of MBTI typings. Perhaps the correlation is not high enough to obtain meaningful results here. I will be updating my beliefs on the matter, as this implies most MBTI types are mistaken if socionics is valid.
Honestly though, it really does look a lot like motivated cognition on the part of socionists. I mean, they do have a coherently self-consistent theory, but references to external data points are suspiciously scarce. They seemingly start with the assumption (based on anecdotal observations of Augusta, socionics' founder, and others after her) that these relationship preferences between distinct types exist, find subjective validation, and then go from there to assert that the MBTI is just not accurate enough at determining the traits socionics is based on. So for example if two people who are claimed to be ISFp and ENTp (where lowercase p is "irrational") do not get along, Socionists will say the typing is invalid rather than that the theory is wrong. But if relationships are the only acid test of a typing, and relationships are the only thing predicted by the typing, it's turned into a vague "if you like these kinds of people you will like these kinds of people".
However, it's not entirely hopeless because there are more specific predictions to validate. As an example, given a valid ISFp/ENTp pair, socionics also predicts the ISFp will be a supervisor ("supervision transmitter" in DeLong's terms) for the INFj type, whereas ENTp will be the "request transmitter" or beneficiary for the INFj. So if you could design a set of experimental test situations where supervision and request are distinguishable from other types of interaction (perhaps a game of some sort), you could perhaps set up a series of meetings between test subjects and see if it checks out. You could verify a given dual pair by their interactions with a given supervision/request receiver first, then arrange a meeting between them and see if they have more compatibility than the control group. The same thing could be verified with the dyad's supervision/request transmitter type.
↑ comment by wedrifid · 2010-08-13T18:26:04.353Z · LW(p) · GW(p)
My sense is that it's a useful model for some things, but I'm most interested in how useful it is for relationships.
I think there are better models to use when considering relationships. I note that often such models are useful in as much as they serve to provide a language which can be used to describe intuitive associations that we pick up through observation. The King Warrior Magician Lover model is not terrible, being a formalisation of the 'opposites attract' conventional wisdom with consideration given to how different people relate on intellectual and emotional levels.
As for MBTI, I have found it useful in some regards. I know, for example, that I can basically rule out relationships with anyone who comes in as a "J". I just find "J"s annoying ('judgemental' of me, I know!)
Edit: The links you provide are... interesting. I must admit I have rather strong doubts about just how accurate those physical descriptions of various personality types are!
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2010-08-13T18:49:55.665Z · LW(p) · GW(p)
I think there are better models to use when considering relationships.
like? (I'm intrigued)
The links you provide are... interesting. I must admit I have rather strong doubts about just how accurate those physical descriptions of various personality types are!
Me too. There does seem to be some correlation between physical appearance and personality, but those details are rather burdensome.
comment by apophenia · 2010-08-11T03:50:06.099Z · LW(p) · GW(p)
Wanted ad: Hiring a personal manager
I will pay anyone I hire $50-$100 a month, or equivalent services if you prefer.
I've been trying to overcome my natural laziness and get work done. For-fun projects, profitable projects, SIAI-type research, academic homework -- I don't do much without a deadline, even projects that I want to do because they sound fun.
I want to hire a personal manager, basically to get on my case and tell/convince me to get stuff done. The ideal candidate would:
- Be online. As much as possible. I'm mostly looking for 22-08 GMT (4pm-3am EST) right now, but I won't be working full-time much longer, and after that I'll be working irregular hours.
- Have a solid grasp of rationality, which is why I'm posting on Less Wrong.
- Accept wild swings in job description as we figure out what I'm actually looking for instead of what I've posted here.
Please do post something here as well for onlookers to see what's going on, but if you're interested PM, post, or email me contact information so we can get together real-time. My email is vanceza+lesswrong@gmail.com (I will take my email address down in a week or two).
Replies from: wedrifid, katydee, Alicorn
comment by TobyBartels · 2010-08-11T03:00:23.604Z · LW(p) · GW(p)
There's an article on rationality in Newsweek, with an emphasis on evo-psych explanations for irrationality. Especially: we evolved our reasoning skills not just to get at the truth, but also to win debates, and overconfidence is good for the latter.
There's nothing there that's new to readers of this blog, and the analysis is superficial (plus the writer makes an annoying but illustrative error while explaining why human intuition is poor at logic puzzles). But Newsweek is a large-circulation (second to Time) newsweekly in the U.S., so this is a pretty broad audience.
Perhaps this has been mentioned before, since it's been online for almost a week, but my parents' print copy was just delivered today, and that's what I read.
comment by b1shop · 2010-08-10T23:29:49.872Z · LW(p) · GW(p)
Value-sorting hypothetical:
If you had access to a time-machine and could transfer one piece of knowledge to an influential ancient (i.e. Plato), what would you tell him?
Something practical, like pasteurization, would almost certainly improve millions of lives, but it wouldn't necessarily produce people with values like ours. I can imagine a bishop claiming heat drives demons from milk.
Meta-knowledge, like a working understanding of the scientific method, might allow for thousands of other pasteurizations to be developed, or maybe it would remain unused throughout the Dark Ages.
Convincingly arguing for a philosophical conclusion, like materialism, might prevent the horror of the crusades, or maybe the now unaddressed emotional need for community would sooner be channeled into nationalism and hasten the coming of the world wars that terrorized the early 20th century.
Each side has its pluses and potential pitfalls. Which would you choose?
And should that therefore be the main thrust of your rationality-promoting conversations today?
Replies from: jimrandomh, Vladimir_Nesov↑ comment by jimrandomh · 2010-08-10T23:49:05.836Z · LW(p) · GW(p)
If you had access to a time-machine and could transfer one piece of knowledge to an influential ancient (i.e. Plato), what would you tell him?
How to make a movable-type printing press. They'll figure out pasteurization and the scientific method on their own eventually, but without a press, they'll lose knowledge almost as fast as they gain it. And as an added bonus, it introduces the concept of mass production.
Replies from: JoshuaZ, steven0461, b1shop↑ comment by JoshuaZ · 2010-08-12T21:58:52.500Z · LW(p) · GW(p)
How to make a movable-type printing press.
This requires a lot of work. I'm not sure that they had the metallurgy to do this. The antikythera mechanism suggests that the answer is yes. But the printing press as a whole requires a lot of different technologies to come together. The screw press, without which movable type is highly inefficient, did not appear until around 100 CE or slightly earlier (I'm under the impression that late medieval versions were generally better and more efficient than Roman era screw presses but don't have a citation for that claim. If someone can confirm/refute this I'd appreciate it). You also need to explain how to make a matrix for printing (again, otherwise efficiency issues kill things badly). Also, one needs to introduce the idea of a book/codex. Prior to that, the use of scrolls and other writing systems make a printing press less practical. This is another innovation from the Roman period. So one could probably have success introducing a printing press around 150 or 200 CE but the chance of successful introduction drops drastically as one goes further back in time.
Jared Diamond has suggested that even if something approximating the Gutenberg press were introduced early on the lack of supporting technologies might make it difficult to catch on. This connects with objects like the Phaistos Disc which used a standardized form of printing around 1600 BCE but the technology did not apparently spread far (or if it did spread far has left no substantial remnants elsewhere and did not stay around).
↑ comment by steven0461 · 2010-08-10T23:54:58.175Z · LW(p) · GW(p)
We don't want them to advance quickly; we want them to advance with a low probability of screwing up permanently.
Replies from: jimrandomh↑ comment by jimrandomh · 2010-08-11T00:17:34.486Z · LW(p) · GW(p)
I don't think screwing up permanently becomes a real concern until the invention of nuclear weapons, and that's such a long ways ahead of the starting point for this exercise that I don't think we can influence how it goes.
Replies from: steven0461↑ comment by steven0461 · 2010-08-11T00:25:27.009Z · LW(p) · GW(p)
Surely we can have nontrivial influence both on variables relating to specific technologies like nukes, and on general variables along the lines of "caution about technology".
↑ comment by Vladimir_Nesov · 2010-08-11T08:04:28.218Z · LW(p) · GW(p)
If you had access to a time-machine and could transfer one piece of knowledge to an influential ancient (i.e. Plato), what would you tell him?
What counts for "one piece"? I'd like them to know enough math and rationality to be able to think sane thoughts, and explain the problem of Friendly AI, before technology is advanced enough to threaten.
comment by Oscar_Cunningham · 2010-08-10T18:10:08.380Z · LW(p) · GW(p)
After seeing the recent thread about proving Occam's razor (for which a better name would be Occam's prior), I thought I should add my own proof sketch:
Consider an alternative to Occam's prior such as "Favour complicated priors*". Now this prior isn't itself very complicated, it's about as simple as Occam's prior, and this makes it less likely, since it doesn't even support itself.
What I'm suggesting is that priors should be consistent under reflection. The prior "The 527th most complicated hypothesis is always true (probability=1)" must be false because it isn't the 527th most complicated prior.
So to find the correct prior you need to find a reflexive equilibrium where the probability given to each prior is equal to the average of the probabilities given to it by all the priors, weighted by how probable they are.
*This isn't a proper prior, but it's good enough for illustrative purposes.
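A minimal numeric sketch of that fixed-point idea (my own framing, with made-up numbers): if M[i][j] is the probability that prior j assigns to prior i, the reflexive-equilibrium condition is p = Mp, which power iteration can find.

```python
# Reflexive equilibrium over three toy "priors": p[i] should equal the
# p-weighted average of the support every prior gives to prior i.

def equilibrium(M, iterations=500):
    n = len(M)
    p = [1.0 / n] * n                                     # start uniform
    for _ in range(iterations):
        q = [sum(M[i][j] * p[j] for j in range(n)) for i in range(n)]
        s = sum(q)
        p = [x / s for x in q]                            # renormalize
    return p

# Columns: what each prior says about all three (made-up numbers).
# Priors: 0 = simplicity prior, 1 = complexity prior, 2 = self-endorsing crank.
M = [
    [0.6, 0.5, 0.1],
    [0.3, 0.3, 0.1],
    [0.1, 0.2, 0.8],
]

p = equilibrium(M)
# Converges to (0.375, 0.21875, 0.40625): broad support helps prior 0 beat
# prior 1, but the heavily self-endorsing prior 2 does best of all -- which
# hints at the self-reference problem raised in the replies.
```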
Replies from: Emile, cousin_it↑ comment by Emile · 2010-08-11T10:24:23.201Z · LW(p) · GW(p)
Amusing exercise: find a complexity measure and an N such that "the Nth most complex hypothesis is always true" is the Nth most complex prior :)
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2010-08-11T11:25:04.535Z · LW(p) · GW(p)
:)
Equivalently, can you write a function that takes a string and returns true iff the string is the same as the source code of the function?
Anyone got some quining skills?
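One standard construction (my sketch, not a general answer to the Nth-most-complex puzzle) uses the usual quine trick: a template formatted with its own repr.

```python
# A function that returns True iff its argument equals the source text of
# the template-plus-function pair below (standard repr-substitution quine).
template = 'template = {0!r}\n\ndef check(s):\n    return s == template.format(template)\n'

def check(s):
    return s == template.format(template)

own_source = template.format(template)
assert check(own_source)           # recognizes its own source text
assert not check(own_source + " ")
```

This dodges rather than answers the exercise above: the diagonal lemma guarantees self-referential fixed points exist, but finding the specific N under a given complexity measure is a different, harder search.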
Replies from: Emile↑ comment by cousin_it · 2010-08-11T10:32:46.374Z · LW(p) · GW(p)
This makes you vulnerable to quining, like this:
Hypotheses that consist of ten words must have higher priors.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2010-08-11T11:17:58.481Z · LW(p) · GW(p)
I'm hoping that when the hypotheses are written in a well defined computer language, this problem doesn't crop up. (you would think that after reading GEB I would know better!)
Of course there may be multiple fixed points or none at all, but it would be nice if there was exactly one.
Replies from: cousin_it↑ comment by cousin_it · 2010-08-11T11:26:06.733Z · LW(p) · GW(p)
Oh, no. Quines are just as common in programming as they are in natural languages. Also see the diagonal lemma. I use self-referential sentences to prove theorems all the time; they're very common and can be used for a huge variety of purposes.
comment by Jonathan_Graehl · 2010-08-10T17:13:14.959Z · LW(p) · GW(p)
I wish my long-term memory were better.
Am I losing out on opportunities to hold onto certain facts because I often rely on convenient electronic lookup? For instance, when programming I'll search for documentation on the web instead of first taking my best recollection as a guess (which, if wrong, will almost certainly be caught by the type checker). What's worse, I find myself relying on multi-monitor/window so I don't even need to temporarily remember anything :)
I'd like to hear any evidence/anecdotes in favor of:
habits that might improve my general ability to remember and/or recall (I'd guess that having enough sleep (and low enough stress) matters, for example.)
tricks for ensuring that particular bits of info are preferentially stored (As I mentioned, I imagine using a memory
consolation - perhaps being more forgetful than many other smart people is a trade-off with different advantages (I doubt it, although I've heard that we do some useful selective forgetting when we sleep, and I'm glad I don't remember every malformed thought I have while asleep)
↑ comment by wedrifid · 2010-08-10T17:35:56.110Z · LW(p) · GW(p)
- habits that might improve my general ability to remember and/or recall (I'd guess that having enough sleep (and low enough stress) matters, for example.)
You have two of the big ones. Add in exercise and diet. And add exercise again just in case you skipped it. With all the basics handled you can consider things like cognitive enhancers (ie. Aniracetam and choline supplementation).
- tricks for ensuring that particular bits of info are preferentially stored (As I mentioned, I imagine using a memory
- consolation - perhaps being more forgetful than many other smart people is a trade-off with different advantages (I doubt it, although I've heard that we do some useful selective forgetting when we sleep, and I'm glad I don't remember every malformed thought I have while asleep)
People spend an awful lot of time trying to forget things. A particularly strong memory exacerbates the effects of trauma. (If something particularly bad happens to you some day then smoke some weed to prevent memory consolidation.)
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-08-11T04:21:42.800Z · LW(p) · GW(p)
Thanks. I guess I'm just lazy and hope to remember things better without any explicit drilling.
I do exercise (but I'm nearly completely sedentary every other day; it's probably better to even out the activity).
I remember reading in the past week that the way exercise improves brain function is not merely by improving oxygen supply to the brain, but in some other interesting, measurable ways (unfortunately, that's as much as I can remember, but it seems like this from Wikipedia at least covers the category):
There are several possibilities for why exercise is good for the brain:
- increasing the blood and oxygen flow to the brain
- increasing growth factors that help create new nerve cells[28] and promote synaptic plasticity[29]
- increasing chemicals in the brain that help cognition, such as dopamine, glutamate, norepinephrine, and serotonin[30]

Physical activity is thought to have other beneficial effects related to cognition as it increases levels of nerve growth factors, which support the survival and growth of a number of neuronal cells.[31]

Replies from: wedrifid
↑ comment by wedrifid · 2010-08-11T05:43:01.465Z · LW(p) · GW(p)
Exactly. Exercise is great stuff, particularly with the boost to neurogenesis!
Incidentally, the best forms of exercise (for this purpose) are activities which not only provide an intense cardiovascular workout but also rely on extensive motor coordination.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-08-11T20:38:49.511Z · LW(p) · GW(p)
But if the increased neurogenesis is only for implementing motor skill learning, then it's not going to help me get better at Starcraft 2 (I mean, my research) - so what's the point? :)
I play piano for 10-60 min daily and imagine there's some benefit as well (surprisingly, it's also a mild cardiovascular workout once you can play hard enough repertoire).
Also, I read a little about choline; it seems likely that unless I'm dieting heavily, I'll get enough already. That is, there's no hard evidence of any benefit to taking more than necessary to maintain liver health (although it seems like up to 7x that dose also has no notable side effects).
Aniracetam looks interesting (but moderately expensive). Do you have any personal experience with it?
Replies from: wedrifid↑ comment by wedrifid · 2010-08-12T13:29:27.881Z · LW(p) · GW(p)
But if the increased neurogenesis is only for implementing motor skill learning, then it's not going to help me get better at Starcraft 2 (I mean, my research) - so what's the point? :)
I don't think I expressed myself clearly. The effect I refer to is the influence of a coordination-based component of exercise on neurogenesis, not particularly the benefits of such to motor skills. Crudely speaking, of the neurons formed from the BDNF released during exercise, a greater fraction will stably integrate into the brain if extensive coordination is involved than if the exercise is 'boring'. I suspect, however, that a cardio workout combined with (i.e. on the same day as) your piano practice will be at least as effective. That stuff does wonders!
Also, I read a little about choline; it seems likely that unless I'm dieting heavily, I'll get enough already. That is, there's no hard evidence of any benefit to taking more than necessary to maintain liver health (although it seems like up to 7x that dose also has no notable side effects).
I included choline only because I mentioned Aniracetam. While the effects are hardly miraculous, Aniracetam (and the more basic Piracetam) do seem to have a positive effect on cognition and learning. Because the *racetams work by (among other things) boosting acetylcholine, people usually find that their choline reserves are strained. The effects of such depletion tend to be reported as 'head fog', or at least as a neutralisation of the positive benefits of the cognitive enhancement.
Supplementing choline in proportion to racetam use is more or less standard practice. Using choline alone seems, as you noted, largely pointless.
Aniracetam looks interesting (but moderately expensive). Do you have any personal experience with it?
I have used it and my experiences were positive. I found it particularly useful in social situations, with improved verbal fluency. Unfortunately I cannot give much insight into how well it works for improving memory retention, basically because my memory has always been far more powerful than I've ever required. It just isn't a bottleneck in my performance, so my self-report is largely useless.
comment by CarlShulman · 2010-08-10T04:33:15.393Z · LW(p) · GW(p)
Gelernter on "machine rights." I didn't know his anti-AI-consciousness views were tied in with Orthodox Judaism.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-08-10T04:55:34.348Z · LW(p) · GW(p)
He's not exactly Orthodox. His views are religiously somewhat unique and are connected to his politics in some strange ways. But he has made clear before that his negative views of AI come in large part from his particular brand of theism. See for example the section in his book "The Muse in the Machine" that consists of a fairly rambling theological argument that AI will never exist, based mainly on quotes from the Bible and Talmud. Jeff Shallit wrote an interesting if slightly obnoxious commentary about how Gelernter's religion has impacted his thinking.
One curious thing is how rarely Gelernter touches on golems when discussing his religion and AI. I suspect this is because although the classical discussion of golems touches on many of the relevant issues (including discussion of whether golems have souls and whether people have ethical obligations to them), it probably comes across to him as too much like superstitious folklore that he doesn't like to think of as part of Judaism in a deep sense.
ETA: However, some of Gelernter's points have validity outside of any religious context. In particular, the point that acting badly to non-conscious entities will encourage people to act badly to conscious ones is valid outside any Talmudic framework. Disclaimer: I'm friends with one of Gelernter's sons and his niece so I may be a biased source.
Replies from: CarlShulman↑ comment by CarlShulman · 2010-08-10T05:29:40.050Z · LW(p) · GW(p)
Thanks for the info.
comment by humpolec · 2010-08-30T19:49:40.950Z · LW(p) · GW(p)
If I commit quantum suicide 10 times and live, does my estimate of MWI being true change? It seems like it should, but on the other hand it doesn't for an external observer with exactly the same data...
Replies from: Snowyowl, wedrifid, PaulAlmond↑ comment by Snowyowl · 2010-09-02T13:17:20.282Z · LW(p) · GW(p)
The anthropic principle gets in the way. If you play classical (i.e. non-quantum) Russian Roulette 10 times and live, you might conclude that there is some force protecting you from death. If you play classical Russian Roulette 10 times and die, you're not in a position to conclude anything much.
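The survivor's-eye Bayes update can be made concrete. In this sketch the one-in-a-million prior for a "protecting force" is invented purely for illustration:

```python
# Bayes update for "some force is protecting me" after surviving
# 10 rounds of classical Russian roulette (one bullet, six chambers).
# The 1e-6 prior for the "protected" hypothesis is a made-up number.

prior_protected = 1e-6
p_survive_if_protected = 1.0          # protected: you always survive
p_survive_if_normal = (5 / 6) ** 10   # ~0.1615: surviving by luck alone

posterior = (prior_protected * p_survive_if_protected) / (
    prior_protected * p_survive_if_protected
    + (1 - prior_protected) * p_survive_if_normal
)
print(posterior)  # still tiny: roughly 6e-6
```

Ten survivals buy a likelihood ratio of only about 6, so the posterior barely moves; and the anthropic catch is that the players who die never get to run this update at all.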
Replies from: humpolec↑ comment by humpolec · 2010-09-02T19:02:18.630Z · LW(p) · GW(p)
Good point, I missed that. So MWI seems to be even subjectively unconfirmable...
Replies from: cousin_it↑ comment by cousin_it · 2010-09-02T19:26:42.270Z · LW(p) · GW(p)
Yep. Until/unless our understanding of physics improves, we can't get any evidence for or against MWI. Our only reason for preferring it is that it sounds simple and thus should have a higher prior. But it's a weird kind of "narrative simplicity", not mathematical (Kolmogorov) simplicity, because mathematically there's only one quantum mechanics and no interpretations. So I wonder why people care about MWI as anything more than an (admittedly very nice) intuition pump for studying QM.
Replies from: Wei_Dai, timtyler↑ comment by Wei Dai (Wei_Dai) · 2010-09-02T20:01:34.040Z · LW(p) · GW(p)
So I wonder why people care about MWI as anything more than an (admittedly very nice) intuition pump for studying QM.
MWI says (in part) that we don't have to make wave function collapse an integral part of the mathematical formulation of quantum mechanics. Since, historically, wave function collapse has been a part of the mathematical formulation of quantum mechanics, that seems sufficient reason to care about MWI.
Replies from: cousin_it↑ comment by cousin_it · 2010-09-02T20:32:34.236Z · LW(p) · GW(p)
I think you're equivocating on "mathematical formulation". We want theories to predict the future. The algorithm that assigns probabilities to your future observations is the same, and equally mysterious, across all interpretations. MWI does raise the tantalizing possibility that the Born rule might not be part of basic physics - that it might somehow emerge from a universe without it - but AFAIK this isn't settled yet.
↑ comment by wedrifid · 2010-09-02T13:30:46.521Z · LW(p) · GW(p)
If I commit quantum suicide 10 times and live, does my estimate of MWI being true change? It seems like it should, but on the other hand it doesn't for an external observer with exactly the same data...
It makes no difference. You're either throwing away Everett branches or having a chance of throwing away everything. This experiment doesn't tell you which. You could, however, conclude that you're a damn fool. ;)
↑ comment by PaulAlmond · 2010-08-30T20:26:37.685Z · LW(p) · GW(p)
Assuming MWI is true, I have doubts about the idea that repeated quantum suicide would prove to you that MWI is true, as many people seem to assume. It seems to me that we need to take into account the probability measure of observer moments, and at any time you should be surprised if you happen to find yourself experiencing a low-probability observer moment - just as surprised as if you had got into the observer moment in the "conventional" way of being lucky. I am not saying here that MWI is false, or that quantum suicide wouldn't "work" (in terms of you being able to be sure of continuity) - merely that it seems to me to present an issue of putting you into observer moments which have very low measure indeed.
If you ever find yourself in an extremely low-measure observer moment, rather than having MWI or the validity of the quantum suicide idea proved to you, it may be that it gives you reason to think that you are being tricked in some way - that you are not really in such a low-measure situation. This might mean that repeated quantum suicide, if it were valid, could be a threat to your mental health - by putting you into a situation which you can't rationally believe you are in!
Replies from: Pavitra

comment by Morendil · 2010-08-28T22:40:40.770Z · LW(p) · GW(p)
There are lots of other problems we should tackle, too! But presumably many of these are just symptoms of some deeper underlying problem. What is this deeper problem? I’ve been trying to figure that out for years. Is there any way to summarize what’s going on, or is it just a big complicated mess?
Here’s my attempt at a quick summary: the human race makes big decisions based on an economic model that ignores many negative externalities.
A ‘negative externality’ is, very roughly, a way in which my actions impose a cost on you, for which I don’t pay any price.
-- John Baez on saving the planet
comment by ata · 2010-08-27T04:38:27.115Z · LW(p) · GW(p)
Have advocates of the simulation argument actually argued for the possibility of ancestor simulations? It is a very counterintuitive idea, yet it seems to be invoked as though it is obviously possible. Aside from whatever probability we want to assign to the possibility that the future human race will discover strange previously-unknown laws of physics that make it more feasible, doesn't the idea of an ancestor simulation (a simulation of "the entire mental history of humankind") depend on having access to a huge amount of information that has presumably been permanently lost to entropy? Where is the future civilization expected to get all the mental structures needed to simulate the entire mental history of humankind (or a model of the early Earth implausibly precise enough that simulating it causes things to play out exactly as they really did)?
Replies from: None↑ comment by [deleted] · 2010-08-27T04:46:48.913Z · LW(p) · GW(p)
If things don't play out exactly as they really did, does the simulation argument lose any force?
Replies from: Perplexed, ata↑ comment by Perplexed · 2010-08-27T04:57:12.110Z · LW(p) · GW(p)
Second the question. It's been a long time since I read Tipler, but as I recall, he claimed Omega would simulate all possible humans, not just all historically real ones.
Replies from: ata↑ comment by ata · 2010-08-27T05:05:48.628Z · LW(p) · GW(p)
Is Tipler / the Omega Point relevant to the simulation argument? I haven't seen him invoked in discussions thereof, and that idea (whatever its probability) seems to have a whole different set of implications, more along the lines of the confusing anthropic problems we have with Very Big Worlds and Boltzmann brains.
Replies from: Perplexed↑ comment by ata · 2010-08-27T05:00:48.780Z · LW(p) · GW(p)
It does appear to depend on ancestor simulations being of the world's history as it actually happened, on the basis that if we end up making simulations of our own history, then we are probably in such a simulation run by someone in an outer future version of our own world.
You could argue for the same conclusion without requiring that, but it seems to me that it would end up being a completely different argument; at the very least, you'd have to figure out the general probability of some advanced civilization creating a simulation containing you, which is a lot harder when you aren't assuming that the civilization running the simulation used to actually contain you (and can somehow extrapolate backwards far enough to recover the information in your mind).
Replies from: Perplexed, None↑ comment by Perplexed · 2010-08-27T05:06:08.431Z · LW(p) · GW(p)
Maybe they are simulating me by mistake. Back in the "real world" I never existed. It is still the case that they are simulating me.
Edit: Actually, this response wasn't particularly responsive. Consider it withdrawn unless it contains virtues I don't currently see.
↑ comment by [deleted] · 2010-08-27T05:39:59.009Z · LW(p) · GW(p)
OK I buy it. To be fair, Bostrom's conclusion is either we're in a simulation, we're going to go extinct, or "(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)." You're saying that (2) is so plausible that the other alternatives are not interesting.
Replies from: ata↑ comment by ata · 2010-08-27T19:50:10.644Z · LW(p) · GW(p)
You're saying that (2) is so plausible that the other alternatives are not interesting.
Sort of. I was really only intending to ask what the claimed justification is for believing in the possibility of ancestor simulations, not to argue that they are not possible; Bostrom is a careful enough philosopher that I would be surprised if he didn't explicitly justify this somewhere. But in the absence of any particular argument against my prior judgment of the feasibility of ancestor simulations (i.e. they'd require us to be able to extrapolate backwards in much greater detail than seems possible), then yes, I'd argue that (2) is the most likely if we do eventually reach posthumanity.
comment by Emily · 2010-08-24T18:39:35.229Z · LW(p) · GW(p)
Light entertainment: this hyperboleandahalf comic reminded me of some of the FAI discussions that go on in these parts.
http://hyperboleandahalf.blogspot.com/2010/08/this-comic-was-inspired-by-experience-i.html
comment by gwern · 2010-08-20T08:14:31.455Z · LW(p) · GW(p)
Has anyone tried, or does anyone use, the Zeo Personal Sleep Coach (press coverage)?
It's a sleep tracker - measuring light, REM, and deep sleep - which sounds useful for improving sleep, which, as we all know, is extremely important to mental performance, learning, and health. I'm thinking of getting one, but the $200 price point is a little daunting.
Replies from: curiousepic↑ comment by curiousepic · 2010-08-30T20:27:57.069Z · LW(p) · GW(p)
There is a $.99 iPhone app that does essentially the same thing using the phone's accelerometers, etc., called Sleep Cycle (http://www.mdlabs.se/sleepcycle/). It definitely seems to have had a positive impact on my mornings. Fewer biometrics than the Zeo, probably, but certainly more economical if you have an iPhone.
Replies from: jimrandomh, Wei_Dai↑ comment by jimrandomh · 2010-08-30T21:08:01.164Z · LW(p) · GW(p)
I got a Zeo recently, but mainly to try to get answers to a question that isn't generally applicable (specifically, how blood sugar interacts with sleep). I don't really buy the validity of using an accelerometer as a proxy for sleep stage, but if your goal is just to get woken from light sleep rather than deep sleep, there's an Android app called Gentle Alarm that does that using a pre-alarm: a soft alarm sound played 30 minutes before your scheduled wake-up time which, in principle, will only wake you if you were close to awake already.
Replies from: gwern↑ comment by Wei Dai (Wei_Dai) · 2010-08-30T20:58:39.827Z · LW(p) · GW(p)
Thanks. I found SleepSense, a similar sleep tracker application for Windows Mobile. And Smart Alarm for Android.
comment by CronoDAS · 2010-08-14T08:48:13.510Z · LW(p) · GW(p)
I just found a new blog that I'm going to follow: http://neuroanthropology.net/
This post is particularly interesting: http://neuroanthropology.net/2009/02/01/throwing-like-a-girls-brain/
comment by Paul Crowley (ciphergoth) · 2010-08-11T10:25:44.545Z · LW(p) · GW(p)
Just watched Tyler Cowen at TEDx Mid-Atlantic 2009-11-05 talking about how our love of stories misleads us. We talk about good-story bias on the Wiki.
Is there a way good-story bias could be experimentally verified?
Replies from: Matt_Simpson, Alexandros, timtyler↑ comment by Matt_Simpson · 2010-08-13T17:55:36.050Z · LW(p) · GW(p)
late to the party? :)
↑ comment by Alexandros · 2010-08-11T13:02:03.779Z · LW(p) · GW(p)
Nice video. I am thinking that conjunction bias and hyperactive agency detectors are both linked to this 'story bias'. Of course religion milks this set for all it's worth.
Another question that came to me was whether telling children stories helps them or wires them up to keep thinking in terms of stories.
↑ comment by timtyler · 2010-08-11T11:16:59.458Z · LW(p) · GW(p)
Tyler Cowen tells a nice story here.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-08-11T11:36:41.925Z · LW(p) · GW(p)
As he points out.
comment by Alexandros · 2010-08-18T16:30:08.761Z · LW(p) · GW(p)
Replies from: anon895

comment by NancyLebovitz · 2010-08-13T10:38:50.821Z · LW(p) · GW(p)
comment by gwern · 2010-08-12T10:13:17.097Z · LW(p) · GW(p)
'But Dietrich is more concerned that companies will fail to analyze the petabytes of data they do collect. When she met with the pharmaceutical company about its portfolio management strategy, for instance, the executives explained how they allocated spending according to their estimates of how likely each project was to succeed. “I asked them if they ever checked to see how well the estimates matched their results,” she says. “They never had.”'
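Dietrich's check is easy to operationalize. Here is a minimal sketch with invented estimates and outcomes, scoring the forecasts with the Brier score (mean squared error of the probability forecasts; 0 is perfect, and an uninformative constant 0.5 forecast scores 0.25):

```python
# Hypothetical sketch: checking project-success estimates against outcomes.
# The estimates and outcomes below are invented for illustration.

estimates = [0.9, 0.8, 0.7, 0.6, 0.3]   # predicted P(success) per project
outcomes  = [1,   0,   1,   0,   0]     # 1 = succeeded, 0 = failed

# Brier score: mean squared error of the forecasts (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(estimates, outcomes)) / len(estimates)
print(round(brier, 3))
```

Even a crude score like this, tracked over time, answers the question the executives had never asked: do our probability estimates mean anything?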
comment by SilasBarta · 2010-08-19T02:57:08.674Z · LW(p) · GW(p)
~Oops I did it again.~
~I trolled on Slashdot ~
~Got modded to 5~
Well, hey, at least this time they universally criticized me. The topic was about a species being discovered that was present earlier than they thought, and I said that this refutes evolution.
Replies from: sketerpot, CronoDAS↑ comment by sketerpot · 2010-08-19T04:13:28.791Z · LW(p) · GW(p)
Since they universally argued with you, I assume that they assumed that you were joking. People who understand evolution often find it hard to believe that anybody could be as nuts as creationists actually are, so the default assumption tends to be that any sufficiently dumb comment is trolling.
comment by khafra · 2010-08-17T17:53:17.896Z · LW(p) · GW(p)
If this press release isn't overstating its case, AIXItl or another unFriendly Bayesian superintelligence just got a lot closer.
comment by JanetK · 2010-08-13T08:17:03.477Z · LW(p) · GW(p)
There was a question recently about whether neurons were like computers or something like that. I cannot find the comment although I replied at the time. Today I came across an article that may interest that questioner. http://www.sciencedaily.com/releases/2010/08/100812151632.htm
comment by [deleted] · 2010-08-12T10:18:42.494Z · LW(p) · GW(p)
Reposted here instead of part 1, didn't realise part 2 had been started.
I don't understand why you should pay the $100 in a counterfactual mugging. Before you are visited by Omega, you would give the same probabilities to Omega and Nomega existing, so you don't benefit from precommitting to pay the $100. However, when faced with Omega, your probability estimate for its existence becomes 1 (and Nomega's becomes something lower than 1).
Now what you do seems to rely on the probability that you give to Omega visiting you again. If this was 0, surely you wouldn't pay the $100 because its existence is irrelevant to future encounters if this is your only encounter.
If this was 1, it seems at a glance like you should. But I don't understand in this case why you wouldn't just keep your $100 and then afterwards self-modify to be the sort of being that would pay the $100 in the future, and therefore end up with an extra hundred on top.
I presume I've missed something there, though. But once I understand that, I still don't understand why you would give the $100 unless you assigned a greater than 10% probability to Omega returning in the future (even ignoring the nonzero, but very low, chance of Nomega visiting).
Is anyone able to explain what I'm missing?
Replies from: None↑ comment by [deleted] · 2010-08-12T19:20:43.184Z · LW(p) · GW(p)
I think I've figured out the answer to my question.
The related scenario: you're stuck in the desert without water (or money) and a passing driver offers to give you a lift if, when you reach the town, you pay them money. But you're both perfectly rational, so you know that when you reach town, you would gain nothing by giving the person the money then. You say "Yes", but they know you're lying and so drive off.
If you use a decision theory which would have you give them the money once you reach town, you end up better off (ie. safely in town), even though the decision to give the money may seem stupid once you're in town.
From the perspective of t = 2 (ie. after the event), giving up the money looks stupid, you're in town. But if you didn't follow that decision theory, you wouldn't be in town, so it is beneficial to follow that decision theory.
Similarly, at t = 2 in the counterfactual mugging, giving up the money looks stupid. But if you didn't follow that decision theory, you would never have had the opportunity to win a lot more money. So once again, following a decision theory which involves you acting as if you precommitted is beneficial.
So by that analysis: My mistake was asking what the beneficial action was at t = 2. Whereas, the actual question is, what's the beneficial decision theory to follow.
Does my understanding seem correct?
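The policy-level comparison above can be made explicit. The $10,000 reward and $100 cost below are the payoffs from the standard statement of the counterfactual mugging, not figures from this thread:

```python
# Ex-ante expected value of two policies in the counterfactual mugging,
# evaluated before the coin flip. Payoffs are from the standard statement
# of the problem ($10,000 if the coin lands your way, $100 demanded
# otherwise); adjust to taste.

reward, cost = 10_000, 100
p_heads = 0.5  # fair coin

# Policy A: be the kind of agent that pays when asked.
ev_pay = p_heads * reward - (1 - p_heads) * cost

# Policy B: be the kind of agent that refuses.
ev_refuse = 0.0  # Omega never rewards you, and you never pay

print(ev_pay, ev_refuse)
```

Evaluated at t = 2 the $100 is a pure loss; evaluated as a policy before the coin flip, paying wins, which is exactly the distinction between "what's the beneficial action now" and "what's the beneficial decision theory to follow".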
Replies from: SilasBarta↑ comment by SilasBarta · 2010-08-12T19:26:03.001Z · LW(p) · GW(p)
Sounds right to me. I had actually written a blog post recently that explores the desert problem (aka Parfit's Hitchhiker) that you might be interested in. I think it also sheds some light on why humans (usually) obey a decision theory that would win on Parfit's Hitchhiker.
comment by multifoliaterose · 2010-08-12T03:34:20.030Z · LW(p) · GW(p)
Are there any Less Wrong postings besides The Trouble With "Good" and (arguably) Circular Altruism which argue in favor of utilitarianism?
Replies from: steven0461↑ comment by steven0461 · 2010-08-12T03:49:15.296Z · LW(p) · GW(p)
Some are linked here.
comment by ata · 2010-08-11T17:29:53.222Z · LW(p) · GW(p)
At last year's Singularity Summit, there was an OB/LW meetup the evening of the first day, held a few blocks away from the convention center. Is anything similar planned for this weekend?
(I'm guessing no, since I haven't heard anything about it here, but we'd still have a couple days to plan it if anyone's interested...)
Replies from: JGWeissman↑ comment by JGWeissman · 2010-08-11T19:50:29.155Z · LW(p) · GW(p)
I would be interested in such a meetup.
comment by NancyLebovitz · 2010-08-10T00:47:07.587Z · LW(p) · GW(p)
I've experimented a little more, and still don't know how to make links appear properly in top-level posts. Instead of doing a bug report, I request that someone who does get it to work explain what they do.
Also, Book Recommendations isn't showing up as NEW, even though it's there in Recent Posts. I thought there might be a delay involved, but the post for this thread showed up in NEW almost immediately.
Replies from: ata↑ comment by ata · 2010-08-10T00:53:49.785Z · LW(p) · GW(p)
It doesn't use the same Markdown formatting as the comments, if that's what you were trying to do. Instead, you select some text and click the link button in the WYSIWIG toolbar (two to the left of the anchor button).
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-08-10T01:27:59.848Z · LW(p) · GW(p)
That button has gone dead for me. It used to work (produced a pop-up with two windows), but now there's no reaction when I click on it.
Replies from: NancyLebovitz, sketerpot↑ comment by NancyLebovitz · 2010-08-10T02:56:18.739Z · LW(p) · GW(p)
Thanks.
I think I've got it now. Part of the problem was not realizing that it makes links blue and underlined in the edit window, but they aren't live-- they go live when the post is submitted, even to draft.
This is not what I'd call an intuitive interface.
comment by dv82matt · 2010-08-21T04:43:08.786Z · LW(p) · GW(p)
I have written a critique of the position that one-boxing wins on Newcomb's problem, but have had difficulty posting it here on Less Wrong. I have temporarily posted it here.
Replies from: nhamann, ata↑ comment by nhamann · 2010-08-21T07:41:53.411Z · LW(p) · GW(p)
I don't understand what the part about "fallible" and "infallible" agents is supposed to mean. If there is an "infallible" agent that makes the correct prediction 60% of the time and a "fallible" agent that makes the correct prediction 60% of the time, in what way should one anticipate them to behave differently?
Replies from: dv82matt↑ comment by dv82matt · 2010-08-21T09:05:33.186Z · LW(p) · GW(p)
It is intended to illustrate that for a given level of certainty one boxing has greater expected utility with an infallible agent than it does with a fallible agent.
As for different behaviors, I suppose one might suspect the fallible agent of using statistical methods and lumping you into a reference class to make its prediction. One could be much more certain that the infallible agent’s prediction is based on what you specifically would choose.
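For reference, the naive expected-value comparison behind these numbers, using the standard $1,000,000/$1,000 payoffs from the problem statement (the 60% figure is from the comments above). Note that plugging the predictor's accuracy p straight into the payoff matrix is precisely the move the fallible/infallible distinction calls into question:

```python
# Expected value of one-boxing vs two-boxing as a function of the
# predictor's accuracy p, with the standard $1,000,000 / $1,000 payoffs.

def ev_one_box(p):
    # Predictor right with probability p -> opaque box contains $1,000,000.
    return p * 1_000_000

def ev_two_box(p):
    # Predictor right with probability p -> opaque box is empty;
    # you always keep the transparent $1,000.
    return (1 - p) * 1_000_000 + 1_000

print(ev_one_box(0.6), ev_two_box(0.6))  # roughly 600,000 vs 401,000
```

On this naive calculation, one-boxing comes out ahead for any accuracy above about 50.05%, regardless of how the predictor achieves it; the essay's point is that the mechanism of prediction may matter anyway.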
↑ comment by ata · 2010-08-21T04:48:34.076Z · LW(p) · GW(p)
The problem here is that Newcomb’s problem doesn’t actually state whether you are dealing with a smart predictor or a dumb predictor. It doesn’t state whether Omega is sufficiently smart. It doesn’t state whether the initial conditions that are causally connected to your choice are also causally connected to the prediction Omega makes. So without smuggled in assumptions there is insufficient information to determine whether to one box or two box. You might as well flip a coin.
http://wiki.lesswrong.com/wiki/Omega
Omega is assumed to be a "smart predictor".
Replies from: steven0461, dv82matt↑ comment by steven0461 · 2010-08-21T04:59:29.988Z · LW(p) · GW(p)
I've seen statements of Newcomb-like problems saying things like "Omega gets it right 90% of the time". In that case it seems like it should matter whether it's because of cosmic rays that affect all predictions equally, or whether it's because he can only usefully predict the 90% of people who are easiest to predict, in which case if I'm not mistaken you can two-box if you're confident you're in the other 10%. I'm sure this would have been thought through somewhere before.
↑ comment by dv82matt · 2010-08-21T07:04:06.927Z · LW(p) · GW(p)
You may have misunderstood what is meant by "smart predictor".
The wiki entry does not say how Omega makes the prediction. Omega may be intelligent enough to be a smart predictor, but Omega is also intelligent enough to be a dumb predictor. What matters is the method that Omega uses to generate the prediction, and whether that method causally connects Omega's prediction back to the initial conditions that causally determine your choice.
Furthermore a significant part of the essay explains in detail why many of the assumptions associated with Omega are problematic.
Edited to add that on rereading I can see how the bit where I say, "It doesn’t state whether Omega is sufficiently smart." is a bit misleading. It should be read as a statement about the method of making the prediction not about Omega's intelligence.
comment by XiXiDu · 2010-08-19T15:54:41.227Z · LW(p) · GW(p)
Probability & AI
The probabilistic approach has been responsible for most of the recent progress in artificial intelligence, such as voice recognition systems, or the system that recommends movies to Netflix subscribers. But Noah Goodman, an MIT research scientist whose department is Brain and Cognitive Sciences but whose lab is Computer Science and Artificial Intelligence, thinks that AI gave up too much when it gave up rules. By combining the old rule-based systems with insights from the new probabilistic systems, Goodman has found a way to model thought that could have broad implications for both AI and cognitive science.
Now here is the hardware, New Chip Startup Plays the Odds on Probability Processing:
That's where probability processing shines, Ben Vigoda, Lyric Semiconductor's CEO, told TechNewsWorld. It's based on Pbits, or probability bits.
"Digital bits flow through Boolean logic gates, while Pbits flow through probability or Bayesian gates multi-directionally," Vigoda said. "While digital processors program each operation in sequence, probability processors allow all the variables to talk to each other."
A Pbit is the probability of a bit. Every event has two possibilities -- it either happens or does not happen -- and a Pbit encapsulates that, Vigoda said.
By letting all the variables talk to each other simultaneously, probability processors engage in both multidimensional and parallel processing, Vigoda pointed out.
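For intuition only, here is a toy sketch of gates that propagate probabilities rather than 0/1 values. This is my own simplification, not Lyric's actual gate design: it assumes the inputs are independent, whereas real Bayesian message-passing hardware handles correlated variables by passing messages in both directions.

```python
# Toy "probability bit" gates: inputs carry P(a=1) and P(b=1) instead of
# hard 0/1 values. Assumes a and b are independent (a big simplification).

def p_not(pa):
    return 1 - pa

def p_and(pa, pb):
    # P(a=1 and b=1) under independence
    return pa * pb

def p_or(pa, pb):
    # inclusion-exclusion, again assuming independence
    return pa + pb - pa * pb

print(round(p_and(0.9, 0.8), 2))  # 0.72
```

Chaining such gates lets uncertainty flow through a circuit directly, instead of being simulated with many deterministic operations.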
More, Bayes Chip:
Lyric’s chips are definitely a cool technology. But how do they compare with quantum computers? On the plus side, and it’s a very big plus, Lyric’s technology is already commercially available and very portable. QCs won’t be so for a long time. On the minus side, I suspect that D-Wave’s adiabatic quantum computer, and gate-model quantum computers, once those are available for a large number of qubits, will be more efficient than Lyric’s chip when doing similar calculations.

Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-08-19T16:02:06.906Z · LW(p) · GW(p)
I'd pay attention to this, but note that the second source isn't reliable. Anyone who takes D-Wave seriously doesn't know much about quantum computing. Unfortunately, D-Wave is a massively hyped project which has in practice done close to zero actual work. Scott Aaronson's writing on the subject.