Posts

Comments

Comment by Hopefully_Anonymous on Good Idealistic Books are Rare · 2009-02-18T03:30:39.000Z · LW · GW

I recommend being wary of a point that needs to exist as part of a dialectical pair. What's orthogonal to cynicism vs. idealism? What's completely outside the set? What encompasses both? What has elements of both? What subversive idea or analytical framework is muted by discussing cynicism vs. idealism instead? In general, I think these types of questions are a good starting point when a dialectic is promoted.

Comment by Hopefully_Anonymous on Cynical About Cynicism · 2009-02-17T13:16:54.000Z · LW · GW

Although I think you're overstating and misapplying your case, Eliezer (as Robin implies, a "cynical" critique of both cynicism and idealism seems to me to yield more fruit than an idealist critique of both), I agree with Richard that cynicism is a poorer epistemological framework than skepticism.

I think it's also worth noting that it's a common play for status to admonish people not to be so cynical, I think because (1) the crowd seems to award higher status to people who perform optimism as a general rule, and (2) there's an element of power alignment, and (if one is powerful) power maintenance to convincing less powerful people not to be cynical about the reasons for power variance in a social group.

Comment by Hopefully_Anonymous on BHTV: Yudkowsky / Wilkinson · 2009-01-26T02:28:56.000Z · LW · GW

I encourage you to do one with Koch on consciousness, free will, and zombies. Be aggressive with him, like you were with that AI researcher with the dreads (not deferential, like you were with Aubrey de Grey). I think it'll be very useful.

Comment by Hopefully_Anonymous on Investing for the Long Slump · 2009-01-23T21:26:41.000Z · LW · GW

"Here's an odd bias I notice among the AI and singularity crowd: a lot of us seem to only plan for science-fictional emergencies, and not for mundane ones like economic collapse. Why is that?"

One hypothesis: because the value in the planning (and this may be rational, if nontransparent) is primarily for entertainment purposes.

Comment by Hopefully_Anonymous on BHTV: de Grey and Yudkowsky · 2008-12-14T09:26:17.000Z · LW · GW

Very good job, Eliezer. I recommend you do a BHTV tour of all the big blogging names in cryonics, life extension, and existential risk minimization. Kurzweil, Bostrom, and Hanson too, of course. They're probably asking you to do this already.

Comment by Hopefully_Anonymous on Is That Your True Rejection? · 2008-12-07T09:34:59.000Z · LW · GW

"However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy - just write the thesis and defend it - then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written - if anyone has the power to let me do things the old-fashioned way."

I think this is a good idea for you. But don't be surprised if finding the right one takes more work than an occasional bleg. And I do recommend getting it at Harvard or the equivalent. And if I'm not mistaken, you may still have to do a bachelor's and a master's?

Comment by Hopefully_Anonymous on Disappointment in the Future · 2008-12-02T03:41:42.000Z · LW · GW

His predictions were much better than I expected. Your headline is misleading given this data point.

Comment by Hopefully_Anonymous on Hanging Out My Speaker's Shingle · 2008-11-07T11:00:21.000Z · LW · GW

Wow, there are some haters in this thread. You can tell when Caledonian feels compelled to defend Eliezer. Peter? Apt name.

Comment by Hopefully_Anonymous on Crisis of Faith · 2008-10-10T23:50:35.000Z · LW · GW

Some interesting, useful stuff in this post, minus the status-cocaine of declaring that you're smarter than Robert Aumann about his performed religious beliefs and the mechanics of his internal mental state. In that area, I think Michael Vassar's model for how nerds interpret the behavior of others is your God. There are probably some 10-year-olds who can see through it (look, everybody, the emperor has no conception that people can believe one thing and perform another). Unless this is a performance on your part too, and there's shimshammery all the way down!

Comment by Hopefully_Anonymous on Beyond the Reach of God · 2008-10-04T22:59:57.000Z · LW · GW

There's a corollary mystery category which most of you fall into: why are so few smart people fighting, even anonymously, against policy grounded in repugnancy bias that'll likely reduce their persistence odds? Where's the fight against a global ban on reproductive human cloning? Where's the fight to increase legal organ markets? Where's the defense of the right of China (and other illiberal nations) to use prisoners (including political prisoners) for medical experimentation? Until you square away your own repugnancy-bias-based inaction, criticisms of that of the rest of the population on topics like cryonics read as incoherently to me as debating angels dancing on the heads of pins. My blog shouldn't be so anomalous in seeking to overcome repugnancy bias to maximize persistence odds. Where are the other anonymous advocates? Our reality is the Titanic; who wants to go down with the ship for the sake of a genetic aesthetic? Your repugnancy-bias memes are likely to persist in the form of future generations only if you choose to value them over your personal persistence odds.

Comment by Hopefully_Anonymous on Beyond the Reach of God · 2008-10-04T19:13:10.000Z · LW · GW

Don't get bored with the small shit. Cancers, heart disease, stroke, safety engineering, suicidal depression, neurodegenerations, improved cryonic tech. In the next few decades I'm probably going to see most of you die from that shit (and that's if I'm lucky enough to persist as an observer), when you could've done a lot more to prevent it, if you didn't get bored so easily of dealing with the basics.

Comment by Hopefully_Anonymous on How Many LHC Failures Is Too Many? · 2008-09-21T00:06:47.000Z · LW · GW

Eliezer, it's a good question and a good thought experiment, except for the last sentence, which assumes a conservation of us as subjective conscious entities that the anthropic principle doesn't seem to me to endorse.

You can also add into your anthropic-principle mix the odds that increasing numbers of experts think we can solve biological aging within our lifetime. Or perhaps that should be called the solipsistic principle, which may be more relevant for us as persisting observers.

Comment by Hopefully_Anonymous on Ban the Bear · 2008-09-20T09:14:24.000Z · LW · GW

MZ, I disagree to a limited extent, for reasons I explained on my blog. I think Intrade may have specifically predicted McCain's temporary lead in the electoral college before a reasonable expert could (about one week in advance of its occurrence). Being able to predict events accurately one week in advance is about as good as our best weather prediction. It's not trivial.

Eliezer, whatever you're doing here with this post, it's not enlightenment. In my opinion you're pretending to an understanding that you don't have. That's not to say that your position is wrong (I doubt either of us knows enough to know conclusively), but that it's presented in an overreductionist, unhelpful way. Take the best arguments for the short-sell ban seriously (some of them seem to be presented in the comments here). I feel intellectually dirty after reading your post as written.

Comment by Hopefully_Anonymous on Rationality Quotes 15 · 2008-09-06T19:49:44.000Z · LW · GW

It's ironic that Murray is largely a myth-promoter posing as a politically incorrect empiricist, attacked by PC myth-promoters. This quote is a good illustration of that.

Comment by Hopefully_Anonymous on Dreams of Friendliness · 2008-09-03T03:25:53.000Z · LW · GW

Unsurprisingly I agree with Carl, especially the tax-farming angle. I think it's unlikely wet-brained humans would be part of a winning coalition that included self-improving human+ level digital intelligences for long. Humorously, because of the whole exponential nature of this stuff, the timeline may be something like 2025 -> functional biological immortality, 2030 -> whole brain emulation, 2030 -> brain on a nanocomputer, 2030 -> earth transformed into computronium, end of human existence.

Comment by Hopefully_Anonymous on Rationality Quotes 13 · 2008-09-03T03:08:34.000Z · LW · GW

I'm not saying the irony is intentional (although I would claim it if I was Eliezer) but note who the soldier quote is from, and also note the content of the quote it succeeds.

Comment by Hopefully_Anonymous on Rationality Quotes 12 · 2008-09-02T08:43:23.000Z · LW · GW

Michael, well-articulated. BTW I encourage you to start up your blog again.

Comment by Hopefully_Anonymous on Rationality Quotes 12 · 2008-09-01T21:50:23.000Z · LW · GW

Those quotes seem rather weak to me, especially the last one. Armchair psychology: you're worried about your own propensity toward irrationality, so you seek to master it by focusing on irrationality external to you, as if by seeking to wipe it out. It's kind of analogous to evangelical Christianity. I'm not sure rational heroes and irrational villains in a morality play are as valuable to those of us trying to build our best models of the world, including of various irrationalities as natural phenomena. Whether we should expend effort to convince people not to engage in various irrationalities is an empirical question, and maybe one that has a different answer in each instance.

Comment by Hopefully_Anonymous on Against Modal Logics · 2008-08-28T03:59:07.000Z · LW · GW

What do you think of the philosophy faculty of MIT and Caltech? I ask because I suspect the faculty there selects for philosophers who would be most useful to hard scientists and engineers (and to hard science and engineering students).

http://www.mit.edu/~philos/faculty.html

http://www.hss.caltech.edu/humanities/faculty

Comment by Hopefully_Anonymous on Magical Categories · 2008-08-25T20:24:40.000Z · LW · GW

"I await the proper timing and forum in which to elaborate my skepticism that we should focus on trying to design a God to rule us all. Sure, have a contingency plan in case we actually face that problem, but it seems not the most likely or important case to consider."

I agree with Robin. Although I'm disappointed that he thinks he lacks an adequate forum to pound the podium on this more forcefully.

Comment by Hopefully_Anonymous on Magical Categories · 2008-08-24T22:50:08.000Z · LW · GW

There's this weird hero-worship codependency that emerges between Eliezer and some of his readers that I don't get, but I have to admit, it diminishes (in my eyes) the stature of all parties involved.

Comment by Hopefully_Anonymous on When Anthropomorphism Became Stupid · 2008-08-17T03:52:49.000Z · LW · GW

To the degree "thinking" or "deciding" actually exists, it's not clear to me that we as individuals are the actual agents, rather than observer subcomponents with an inflated sense of agency, perhaps a lot like the neurons but with a deluded/hallucinated sense of agency.

Comment by Hopefully_Anonymous on Hot Air Doesn't Disagree · 2008-08-16T02:09:29.000Z · LW · GW

"much the" should read "much like the"

Comment by Hopefully_Anonymous on Hot Air Doesn't Disagree · 2008-08-16T02:08:17.000Z · LW · GW

J Thomas, whether or not foxes or rabbits think about morality seems to me to be the less interesting aspect of Tim Tyler's comments.

As far as I can tell this is more about algorithms and persistence. I aspire to value the persistence of my own algorithm as a subjective conscious entity. I can conceive of someone else who values above all maximizing the persistence odds of any subjective conscious entity that has ever existed. A third values above all maximizing the persistence odds of any human who has ever lived. Eliezer seems to value maximizing the persistence of a certain algorithm of morality above all (even if it deoptimizes the persistence odds of all humans who have ever lived). Optimizing the persistence odds of these various algorithms seems to me to put them in conflict with each other, much like the algorithm of the fox having the rabbit in its belly is in conflict with the algorithm of the rabbit eating grass, outside of the fox's belly. It's an interesting problem, although I do of course have my own preferred solution to it.

Comment by Hopefully_Anonymous on The Bedrock of Morality: Arbitrary? · 2008-08-15T11:14:19.000Z · LW · GW

Ben, you write "Do you strive for the condition of perfect, empty, value-less ghost in the machine, just for its own sake...?".

But my previous post clearly answered that question: "I'd sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that "dilemma" was presented to me."

Comment by Hopefully_Anonymous on The Bedrock of Morality: Arbitrary? · 2008-08-14T22:31:02.000Z · LW · GW

I'm fine with a galaxy without humor, music, or art. I'd sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that "dilemma" was presented to me.

Comment by Hopefully_Anonymous on Is Fairness Arbitrary? · 2008-08-14T07:57:32.000Z · LW · GW

Daniel Reeves, I checked out your bio. Very impressive stuff, and best of success with your work and research!

Comment by Hopefully_Anonymous on Abstracted Idealized Dynamics · 2008-08-12T06:09:16.000Z · LW · GW

Richard, Thanks, the SEP article on moral psychology was an enlightening read.

Comment by Hopefully_Anonymous on Abstracted Idealized Dynamics · 2008-08-12T03:09:07.000Z · LW · GW

"Someone sees a slave being whipped, and it doesn't occur to them right away that slavery is wrong. But they go home and think about it, and imagine themselves in the slave's place, and finally think, "No.""

I think lines like this epitomize how messy your approach to understanding human morality as a natural phenomenon is. Richard (the pro), what resources do you recommend I look into to find people taking a more rigorous approach to understanding the phenomenon of human morality (as opposed to promoting a certain type of morality uncritically)?

Comment by Hopefully_Anonymous on Moral Error and Moral Disagreement · 2008-08-11T03:17:50.000Z · LW · GW

Weird, jsalvati is not my sock puppet, but the 11:16pm post above is mine.

Comment by Hopefully_Anonymous on Moral Error and Moral Disagreement · 2008-08-11T03:16:44.000Z · LW · GW

Frame it defensively rather than offensively and a whole heck of a lot of people would take that pill. Of course some of us would also take the pill that negates the effects of our friends taking the first pill, hehehe.

Comment by Hopefully_Anonymous on Inseparably Right; or, Joy in the Merely Good · 2008-08-10T00:44:43.000Z · LW · GW

should read: (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like "do not murder children").

Comment by Hopefully_Anonymous on Inseparably Right; or, Joy in the Merely Good · 2008-08-09T22:58:52.000Z · LW · GW

I think the child-on-train-tracks/orphan-in-burning-building tropes you reference back to prey on bias, rather than seek to overcome it. And I think you've been running from hard questions rather than dealing with them forthrightly (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like "do not murder children"). To me this sums up to you picking positions for personal status enhancement rather than for solving the challenges we face. I understand why that would be salient for a non-anonymous blogger. I hope you at least do your best to address them anonymously. Otherwise we could be left with a tragedy of the future-outcomes commons, with all the thinkers vying for status rather than maximizing our future outcomes.

Comment by Hopefully_Anonymous on Hiroshima Day · 2008-08-07T22:54:50.000Z · LW · GW

Mark, I think you over-identify with whoever controls the nuclear weapons in the US arsenal. I think their existence is a complex phenomenon, and I'm not sure it can be reduced to "I am an American citizen and voter, therefore I exert partial control and ownership of the weapons in the nuclear arsenal".

Beyond that, I think a major source of bias is people who let the status quo and power/hegemony alignment do a lot of their argumentative legwork for them. I think you're doing that here, but it's a much bigger problem warping our models of reality than this instance.

Comment by Hopefully_Anonymous on Hiroshima Day · 2008-08-07T19:28:52.000Z · LW · GW

Frelkins, you shifted rather quickly from what I think is the stronger argument against MAD (greater catastrophic risk due to human error and irrationality) to what I think is a weaker argument against MAD (a claim that some states are suicidal). I think you should focus on the stronger argument.

Also, the claim that a world without the type of MAD one gets from nukes is a world where all politics is solved through war is, I think, inaccurate. Some politics seems to be solved through war and some doesn't, both before and after MAD. It may be true that there's never been direct conflict on sovereign territory between two nations that both have nuclear strike capability against each other, but that's a small swath of history.

I'm not arguing against MAD, or against the concept that nuclear proliferation results in a more peaceful world. But I'm not sold on it yet either. It's worth more study, it seems to me.

Comment by Hopefully_Anonymous on Hiroshima Day · 2008-08-07T18:19:05.000Z · LW · GW

Frelkins, I think the main perceived flaw in this line of reasoning is that error and irrational decision making are possible, and with viable MAD set up, the results could be catastrophic.

Comment by Hopefully_Anonymous on Hiroshima Day · 2008-08-07T03:34:11.000Z · LW · GW

I'm with James Miller and Caledonian on this one, and I want it taken further. Caledonian, I think the cognitive bias is good old repugnancy bias. How I'd like it taken further: I think what we want to avoid is not (1) horrific outcomes due to war from a specific type of technology, nor (2) horrific outcomes due to war generally, but (3) horrific outcomes generally. As such, beyond using nuclear weapons (which I'm not convinced prevents any of the three, though it may), how about greatly increasing the variety of human medical experimentation we engage in, including medical experimentation without consent, and breeding and cloning people, and making genetic knockout people and disease models of people to the extent that there will be a net decrease in horrific outcomes (death, suffering, etc.)? Sort of Dr. Ishii meets Jonas Salk.

Comment by Hopefully_Anonymous on Anthropomorphic Optimism · 2008-08-04T22:43:52.000Z · LW · GW

"And because we can more persuasively argue, for what we honestly believe, we have evolved an instinct to honestly believe that other people's goals, and our tribe's moral code, truly do imply that they should do things our way for their benefit."

Great post overall, but I'm skeptical of this often-repeated element in OB posts and comments. I'm not sure honest believers always, or even usually, have a persuasion advantage. This reminds me of some of Michael Vassar's criticism of nerds thinking of everyone else as a defective nerd (nerd defined as people who value truth-telling/sincerity over more political/tactful forms of communication).

Comment by Hopefully_Anonymous on No Logical Positivist I · 2008-08-04T09:11:47.000Z · LW · GW

I haven't gotten through your whole post yet, but the "postmodernist literature professor" jogged my memory about a trend I've noticed in your posts. Postmodernists, and perhaps postmodernist literature professors in particular, seem to be a recurring foil. What's going on there? Is there a way to break out of that analytically? I sense that as a deeper writer and thinker you'll go beyond cartoonish representations of foils, if nothing else to reflect a deeper understanding of things like postmodernist literature professors as natural phenomena. As it stands, it seems to me to be more a barrier to knowledge and understanding than an accurate summation of something in our reality (postmodernist literature professors).

Comment by Hopefully_Anonymous on The Comedy of Behaviorism · 2008-08-03T22:20:32.000Z · LW · GW

Caledonian, you make some good posts, but here I think your latest post falls into the category of anti-knowledge. I recommend trying to stay away from heroic narratives and morality plays (Watson, Skinner GOOD; Freud BAD) and easy targets, like those who express the wish-fulfilling belief that the mind mystically survives the death of the body.

Whether the mind does survive the death of the body in a sufficiently large universe/multiverse (with multiple "exact" iterations of us) is a more complicated question, in that black box/"magic" area of why our internal narrative sense of personal identity apparently survives over a punctuated swath of timespace configurations, in a changing variety of material compositions/blobs of amplitude probability distribution in the first place.

I jotted it off messy, but I think the point remains that although in principle our existence as minds may be perfectly normal since it's part of reality, it seems pretty damn weird compared to our evolved intuitions.

Comment by Hopefully_Anonymous on The Comedy of Behaviorism · 2008-08-03T18:26:19.000Z · LW · GW

Eliezer, I think you've given ample proof that Watson has written some things as cartoonish as your OP suggests. I don't think this has been shown to be generalizable across all of the behaviorist scientists of his era. Ian Maxwell's description of behaviorists sounds like a reasonable way for science to be done pre-MRIs, etc. But your criticism, in your OP, of Watson's approach (or at least his rhetoric) hits the bull's-eye and is a perfect contribution to the mission of this blog.

Comment by Hopefully_Anonymous on The Comedy of Behaviorism · 2008-08-03T13:27:21.000Z · LW · GW

The description of behaviorists does seem a bit cartoonish, but it's still a great post and an interesting, thought-provoking read. Good to see a commenter of the calibre of Richard Kennaway in the thread, too.

Comment by Hopefully_Anonymous on Setting Up Metaethics · 2008-07-29T08:11:15.000Z · LW · GW

Who cares if Caledonian is banned from here? Hopefully he'll post more on my blog as a result. I've never edited or deleted a post from Caledonian or anyone else (except to protect my anonymity). Neither has TGGP, to my knowledge. As I've posted before on TGGP's blog, I think there's a hierarchy of blogs, and blogs that ban and delete for something other than stuff that's illegal, can bring liability, or is botspam aren't at the top of the hierarchy.

If no post of Caledonian's had ever been edited or deleted from here (except perhaps for excessive length), this blog would be just as good. Maybe even better.

Comment by Hopefully_Anonymous on Leave a Line of Retreat · 2008-07-27T22:25:46.000Z · LW · GW

Post what you want to post most. The advice that you should go against your own instincts and pander is bad, in my opinion. The only things you should force yourself to do are: (1) try to post something every day, and (2) try to edit and delete comments as little as possible. I believe the result will be an excellent and authentic blog with the types of readers you want most (and that are most useful to you).

Comment by Hopefully_Anonymous on When (Not) To Use Probabilities · 2008-07-23T12:59:39.000Z · LW · GW

Great post.

Comment by Hopefully_Anonymous on Fake Norms, or "Truth" vs. Truth · 2008-07-22T11:01:32.000Z · LW · GW

I don't think promoting truth (or "truth") will serve an aim of a better understanding of the world as much as promoting transparency. There seems to me to be something more naturally subversive to anti-rationality about promoting transparency than promoting "truth".

Comment by Hopefully_Anonymous on Should We Ban Physics? · 2008-07-21T09:20:10.000Z · LW · GW

In the last similar thread someone pointed out that we're just talking about increasing existential risk in the tiny zone where we observe (or reasonably extrapolate) each other existing, not the entire universe. It confuses the issue to talk about destruction of the universe.

Really this is all recursive to Joy's "Grey goo" argument. I think what needs to be made explicit is weighing our existential risk if we do or don't engage in a particular activity. And since we're not constrained to binary choices, there's no reason for that to be a starting point, unless it's nontransparent propaganda to encourage selection of a particular unnuanced choice.

A ban on the production of all novel physics situations seems more extreme than necessary (although the best arguments for that should probably be heard and analyzed). But unregulated, unreviewed freedom to produce novel physics situations also seems like it would be a bit extreme. At the least, I'd like to see more analysis of the risks of not engaging in such experimentation. This stuff is probably very hard to get right, and at some point we'll probably get it fatally wrong in one way or another and all die. But let's play the long odds with all the strategy we can, because the alternative seems like a recursive end state (almost) no matter what we do.

Comment by Hopefully_Anonymous on Touching the Old · 2008-07-20T22:31:06.000Z · LW · GW

Oxford is a little different from the Wailing Wall: it's one of the world's earliest universities, and it's been one of the world's great universities for centuries. Eliezer, you would love Florence. In England and in other old countries, I'm most impressed by ancient pubs. One can see how an important church or castle can remain for centuries. But for a little old pub to eke it out for that long? There's something special about that, IMO.

Comment by Hopefully_Anonymous on Existential Angst Factory · 2008-07-20T02:28:48.000Z · LW · GW

pdf, no I don't mean the FAI project. I mean the things Eliezer discussed specifically in the OP and follow-up comments. He gives a long catalog of recommended actions to solve individual unhappiness. I'm pointing out that in many instances pharmaceutical or other solutions might be cheaper.

Comment by Hopefully_Anonymous on Could Anything Be Right? · 2008-07-19T21:47:49.000Z · LW · GW

"No Hopefully, just think about it as math instead of anthropomorphizing here. This is kids stuff in terms of understanding intelligence."

I disagree. It seems to me that you're imagining closed systems that don't seem to exist in the reality we live in.