"Walking on the moon is power! Being a great wizard is power! There are kinds of power that don't require me to spend the rest of my life pandering to morons!"
If you are solving an equation, debugging a software system, designing an algorithm, or doing any number of other cognitive tasks, understanding the methods of rationality involved in interacting with other people will be of no use to you (unless some of that material happens to apply across domains). These are things you have to do entirely within yourself.
It appears that the majority of the activities and the primary focus of this boot camp are on rationality when interacting with others: social rationality training. While some of this may apply across domains, my interest is strictly in "selfish" rationality, the kind of rationality one uses internally and entirely on one's own. So I don't really know whether this would be worth the grandiose expense of 10 "all-consuming" weeks. Maybe it would help if I had more information on the exact curriculum you are proposing.
Where the hell would you find a group like that!?
Well, in that case, can you explain that emoticon (:3)? I have yet to hear any explanation that makes sense :)
Is this really relevant ...
Does anyone know if Blink: The Power of Thinking Without Thinking is a good book?
http://www.amazon.com/Blink-Power-Thinking-Without/dp/0316172324
Amazon.com Review
Blink is about the first two seconds of looking--the decisive glance that knows in an instant. Gladwell, the best-selling author of The Tipping Point, campaigns for snap judgments and mind reading with a gift for translating research into splendid storytelling. Building his case with scenes from a marriage, heart attack triage, speed dating, choking on the golf course, selling cars, and military maneuvers, he persuades readers to think small and focus on the meaning of "thin slices" of behavior. The key is to rely on our "adaptive unconscious"--a 24/7 mental valet--that provides us with instant and sophisticated information to warn of danger, read a stranger, or react to a new idea.
Gladwell includes caveats about leaping to conclusions: marketers can manipulate our first impressions, high arousal moments make us "mind blind," focusing on the wrong cue leaves us vulnerable to "the Warren Harding Effect" (i.e., voting for a handsome but hapless president). In a provocative chapter that exposes the "dark side of blink," he illuminates the failure of rapid cognition in the tragic stakeout and murder of Amadou Diallo in the Bronx. He underlines studies about autism, facial reading and cardio uptick to urge training that enhances high-stakes decision-making. In this brilliant, cage-rattling book, one can only wish for a thicker slice of Gladwell's ideas about what Blink Camp might look like. --Barbara Mackoff
If you actually look a little deeper into cryonics, you can find some more useful reference classes than "things promising eternal (or very long) life":
http://www.alcor.org/FAQs/faq01.html#evidence
Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest near 0°C (+32°F) (Cryobiology 23, 483-494 (1986)). There is no basic reason why such states of "suspended animation" could not be extended indefinitely at even lower temperatures (although the technical obstacles are enormous).
Existing cryopreservation techniques, while not yet reversible, can preserve the fine structure of the brain with remarkable fidelity. This is especially true for cryopreservation by vitrification. The observations of point (a) make clear that survival of structure, not function, determines survival of the organism.
It is now possible to foresee specific future technologies (molecular nanotechnology and nanomedicine) that will one day be able to diagnose and treat injuries right down to the molecular level. Such technology could repair and/or regenerate every cell and tissue in the body if necessary. For such a technology, any patient retaining basic brain structure (the physical basis of their mind) will be viable and recoverable.
I up-voted the post because you talked about two good, basic thinking skills. Paying attention to the weight of priors is a good thinking technique in general, and your examples of cryonics and AI are good points, but your conclusion fails: the argument you made does not mean they have zero chance of happening. A more useful takeaway would be, for example, that any given person claiming to have created AI has close to zero chance of having actually done it (unless you have some incredibly good evidence:
"Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon." - Dan Clemmensen
). The thinking technique of abstracting, of "stepping back from" or "outside of" your current situation, or of using "reference class forecasting" on it, also works very generally. Short post, though; I was hoping you would expand more.
"To anyone who would assert that intelligence, science or rationality is the Ultimate Power, not just on the level of a species or civilization, but on the level of an individual or small group, let them show that their belief is based in reality."
What is this, flame bait?
As a question for everyone (and as a counterargument to CEV):
Is it okay to take an individual human's rights of life and property by force as opposed to volitionally through a signed contract?
Here the use of force does include imposing on them, without their signed volitional consent, such optimizations as the coherent extrapolated volition of humanity, though it might (?) not include their individual extrapolated volition.
A) Yes B) No
I would tentatively categorize this as one possible empirical test for Friendly AI. If the AI chooses A, this could point to an Unfriendly AI which stomps on human rights, which would be Really, Really Bad.
Also Peter de Blanc
http://yudkowsky.net/rational/bayes - link error
Whatever happened to Nick Hay? Wasn't he doing some kind of FAI-related research?
reminds me of http://yudkowsky.net/rational/virtues and http://lesswrong.com/lw/16d/working_mantras/
http://lesswrong.com/lw/1hh/rationality_quotes_november_2009/1ai9
Sure, but it's also reasonable for him to think that contributing something much harder would be that much more of a contribution to his goals (whatever those selfish or non-selfish goals are); after all, something hard for him would be much harder, or impossible, for someone less capable.
I don't see how this reveals his motive at all. He could easily be a person motivated to make the best contributions to science that he can, for entirely altruistic reasons. His reasoning was that he could make better contributions elsewhere, and it's entirely plausible that he left the field for ultimately altruistic, purely non-selfish reasons.
And what is it about selfishness exactly that is so bad?
And this is a great follow-up:
"Very recently - in just the last few decades - the human species has acquired a great deal of new knowledge about human rationality. The most salient example would be the heuristics and biases program in experimental psychology. There is also the Bayesian systematization of probability theory and statistics; evolutionary psychology; social psychology. Experimental investigations of empirical human psychology; and theoretical probability theory to interpret what our experiments tell us; and evolutionary theory to explain the conclusions. These fields give us new focusing lenses through which to view the landscape of our own minds. With their aid, we may be able to see more clearly the muscles of our brains, the fingers of thought as they move. We have a shared vocabulary in which to describe problems and solutions. Humanity may finally be ready to synthesize the martial art of mind: to refine, share, systematize, and pass on techniques of personal rationality."
"But goodness alone is never enough. A hard, cold wisdom is required for goodness to accomplish good. Goodness without wisdom always accomplishes evil." - Robert Heinlein (SISL)
That reminds me of "counting doubles" from Ender's Game: 2, 4, 8, 16 ... etc until you lose track.
==Re comments on "Singularity Paper"== Re comments, I had been given to understand that the point of the page was to summarize and cite Eliezer's arguments for the audience of ''Minds and Machines''. Do you think this was just a bad idea from the start? (That's a serious question; it might very well be.) Or do you think the endeavor is a good one, but the writing on the page is just lame? --User:Zack M. Davis 20:19, 21 November 2009 (UTC)
(This is about my opinion of the writing on the wiki page.)
No; just use his writing as much as possible, directly in the text of the paper. Whole articles/posts in sequence for the whole paper would be best, or try to copy-paste together some of the key points of a series of articles/posts (but do you really want to do that and leave out the rich, coherent, consistent explanation that surrounds those points?).
My comments may seem to imply that we would essentially be putting together a book. That would be an AWESOME book... we could call it "Intelligence Explosion".
If someone ended up doing a book like that, they might as well include a section on FAI. If SIAI produces a relevant FAI paper, it could be included in (or merged into) the FAI section.
SEE THIS:
Eliezer is arguing for one view of the Singularity, though there are others. This is one reason I thought to include http://yudkowsky.net/singularity/schools on the wiki. If leaders/proponents of the other two schools could acknowledge this model Eliezer has described, of there being three schools of the Singularity, I think that might lend it more of the authority you are describing.
I found the two SIAI introductory pages very compelling the first time I read them. This was back before I knew what SIAI or the Singularity really was; as soon as I read through them, I just had to find out more.
I thought similarly about LOGI part 3 (Seed AI). I actually thought of that immediately and put a link to it on the wiki page.
http://news.ycombinator.com/item?id=195959
"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...
All right, this much of a hint:
There's no super-clever special trick to it. I just did it the hard way.
Something of an entrepreneurial lesson there, I guess."
Well, I wasn't really going overboard with praise. This is the best book ever written, as far as I know. This is an awesome thread, as I would love to find something that can outclass Atlas Shrugged.
For now, though, it is by far the best book ever written. Many people agree, and not just the cultish fanatics. I've had many instances where random, everyday people express exactly the same sentiment, at bookstores, etc.
Really?
I mean, come on, that's a cheap, weak analogy. I haven't finished it yet, but I'm compiling all of the good quotes from Atlas Shrugged. The book is full of awesome quotes and truths that are portable to many other subjects of rationality.
It is far more real and relevant than you are giving it credit for.
What the hell?
What does the cultish behavior of followers have to do with the actual content? Affective death spirals can characterize virtually any group. Idiots and crazies are everywhere.
Why is this so heavily down-voted?
I realize that you didn't vote it down, but using this logic to vote it down would be something like a reverse affective death spiral: you let the visibly obvious ADS cast a negative halo on the entire philosophy, and thus become irrationally biased against the legitimate value at the center of the ADS that the over-zealous crazies and idiots blew out of proportion.
Atlas Shrugged by Ayn Rand.
The greatest book yet written, Atlas Shrugged is a foundational text of the philosophy of life, reason, and reality. It is dedicated to those who want to "win" and "do the right thing" in general.
The book's philosophy is expressed in a deep, self-consistent context, and the rationalist reader will find that much of the material is consistent with many other things rationalists will read, on completely different subjects, along their journey.
It is a book all about rationality, in the LW sense of "rationality", and about the philosophies antithetical to it, and it draws out the logical conclusions of those opposing philosophies in the full context of the world at large, through a terrific story.
This book is a classic. If you are a rationalist, you would be crazy to ignore it if you haven't read it yet.