Comments

Comment by avalot on Procedural Knowledge Gaps · 2011-03-30T03:20:22.430Z · LW · GW

Very tricky question. I won't answer it, in two ways:

  1. As I indicated, in terms of navigation/organization scheme, LW is completely untraditional. It still feels to me like a dark museum of wonder, of unfathomable depth. I get to something new, and mind-blowing, every time I surf around. So it's a delightful labyrinth that unfolds like a series of connected thoughts any way you work it. It's an advanced navigation toolset, usable only by people who are able to conceptualize vast abstract constructs... which is the target audience... or is it?

  2. I've been in the usability business too long to make UI pronouncements without user research. We've got a very specific user base, not defined by typical demo/sociographics, but by affinity. Few common usability heuristics would apply blindly to this case.

But among the few that would:

  • Improved legibility, typographic design, visual hierarchy
  • Flexible, mobile to wide-screen self-optimizing layout
  • More personalized features (dashboard, analytics, watch lists, alerts, etc.) although many are implicitly available through feeds, permalinks, etc.
  • Advanced comments/post management tools for power-users (I'm guessing there might be a need, though I am not one by any means.)

But, again, I think we have a rare thing here: A user base that is smart enough to optimize its own tools. Normally, the best user experience practitioners will tell you that you should research, interview and especially observe your users, but never ever ever just listen to them. They don't know what they really want, wouldn't know how to explain it, and what they want isn't even close to what they actually need. Would LW users be different? And would design by committee work here? I'm very dubious, but curious.

Does anyone know the back-story of how this website evolved? Was it a person, a team, or the whole group designing it?

Comment by avalot on Rationality Quotes: March 2011 · 2011-03-02T23:46:29.986Z · LW · GW

Thanks for the irony!

Comment by avalot on Procedural Knowledge Gaps · 2011-02-12T17:55:54.195Z · LW · GW

LessWrong is certainly designed for the advanced user. Most everything on the site is non-standard, which seriously impedes usability for the new user. Considering the topic and intended audience, I'd say it's a feature, not a bug.

Nonetheless, the site definitely smacks of unix-geekery. It could be humanized somewhat, and that probably wouldn't hurt.

Comment by avalot on Secure Your Beliefs · 2011-02-12T17:27:33.888Z · LW · GW

Anti-vaccination activists base their beliefs not on the scientific evidence, but on the credibility of the source. Not having enough scientific education to be able to tell the difference, they have to go to plan B: Trust.

The medical and scientific communities in the USA are not as well-trusted as they should be, for a variety of reasons. One is that the culture is generally suspicious of intelligence and education, equating them with depravity and elitism. Another is that some doctors and scientists in the US ignore their responsibility to preserve the profession's credibility, and sell out big time.

Chicken, meet egg.

So if my rationality is your business, you're going to have to get in the business of morality... Because until you educate me, I'll have to rely on trusting the most credible self-proclaimed paragon of virtue, and proto-scientific moral relativism doesn't even register on that radar.

Comment by avalot on Subjective Relativity, Time Dilation and Divergence · 2011-02-11T18:37:48.611Z · LW · GW

Interesting too is the concept of amorphous, distributed and time-lagged consciousness.

Our own consciousness arises from an asynchronous computing substrate, and you can't help but wonder what weird schizophrenia would inhabit a "single" brain that stretches and spreads for miles. What would that be like? Ideas that spread like wildfire, and moods that swing literally with the tides?

Comment by avalot on Optimal Employment · 2011-02-08T04:40:16.456Z · LW · GW

By "strangers and superficial acquaintances", I didn't mean bosses or co-workers. In business, knowing the ground is important, but as a foreigner, you get more free passes for mistakes, you're not considered a fool for asking advice on basic behavior, and you can actually transgress on some (not all, not most) cultural norms and taboos with impunity, or even with cachet.

I was not talking specifically about Americans. Americans indeed tend to find out that they have a lot to answer for when traveling abroad. I believe this is also often compounded by provincialism and lack of cultural sensitivity on the part of the imperials: America is the most culturally insular western country I know.

At any rate, the crux of my point wasn't about an American's chances trying to play by the rules in a foreign country. My point was that the cultural baggage you accumulated as a child in your home country is worth more if you sell it where the supply is low, and the demand is high.

It's like trading silk or spices, but instead you're trading cultural outlook. When you're young, and a new entrant to the marketplace, your cultural outlook is not a competitive advantage at home. It's an automatic differentiator in a foreign country, where you can turn it into an edge. It's not a free pass, but it can be a shortcut.

Comment by avalot on How to Beat Procrastination · 2011-02-07T06:17:42.314Z · LW · GW

Thank you! You have no idea just how helpful this comment is to me right now. Your answer to all-consuming nihilism is exactly what I needed!

Comment by avalot on Optimal Employment · 2011-02-01T04:46:45.765Z · LW · GW

I think there is a widespread emotional aversion to moving abroad, which means there must be great money to be made on arbitrage.

I think a lot of the aversion is fear of inferiority and/or ostracism. These are counter-intuitively misplaced.

The theory is this: You're worried that the people over there have their own way of doing things, they know the lay of the land, and they're competing hard at a game they've been playing together since they were born. Whereas you barely speak the language, don't know the social conventions, and have no connections. What chance could you possibly have of making money or making friends?

In practice, it's the opposite: Against a wildcard like you, they don't stand a chance!

If you're somewhat smart, you'll find that you have cultural superpowers in a foreign country: Your background gives you a different, unusual outlook on things, which makes you interesting and exotic. At home, you'd be nothing special. And since your accent is cute, you'll be forgiven your blunders (at least by strangers and superficial acquaintances).

The same asymmetry applies to your education, your working style, etc. They are suddenly unique and refreshing. That can be parlayed into advantage, if used judiciously.

Playing 100% by the rules only guarantees that your playing field will be too crowded for you to get any breaks.

Where the market is irrationally risk-averse, take risks, young ones!

Comment by avalot on Rationality Quotes: November 2010 · 2010-11-06T21:47:25.883Z · LW · GW

Yes, and I think this is the one big crucial exception... That is the one bit of knowledge that is truly evil. The one datum that is unbearable torture on the mind.

In that sense, one could define an adult mind as a normal (child) mind poisoned by the knowledge-of-death toxin. The older the mind, the more extensive the damage.

Most of us might see it more as a catalyst than a poison, but I think that's insanity justifying itself. We're all walking around in a state of deep existential panic, and that makes us weaker than children.

Comment by avalot on Eliezer Yudkowsky Facts · 2010-10-19T16:22:58.723Z · LW · GW

  • The sound of one hand clapping is "Eliezer Yudkowsky, Eliezer Yudkowsky, Eliezer Yudkowsky..."
  • Eliezer Yudkowsky displays search results before you type.
  • Eliezer Yudkowsky's name can't be abbreviated. It must take up most of your tweet.
  • Eliezer Yudkowsky doesn't actually exist. All his posts were written by an American man with the same name.
  • If Eliezer Yudkowsky falls in the forest, and nobody's there to hear him, he still makes a sound.
  • Eliezer Yudkowsky doesn't believe in the divine, because he's never had the experience of discovering Eliezer Yudkowsky.
  • "Eliezer Yudkowsky" is a sacred mantra you can chant over and over again to impress your friends and neighbors, without having to actually understand and apply rationality in your life. Nifty!

Comment by avalot on The Irrationality Game · 2010-10-04T16:23:32.045Z · LW · GW

Surprised that nobody has posted this yet...

"Self" is an illusion created by the verbal mind. The Buddhists are right about non-duality. The ego at the center of language alienates us to direct perception of gestalt, and by extension, from reality. (95%)

More bothersome: The illusion of "Self" might be an obstacle to superior intelligence. Enhanced intelligences may only work (or only work well) within a high-bandwidth network more akin to a Vulcan mind meld than to a salon conversation, one in which individuality is completely lost. (80%)

Comment by avalot on Bayes' Theorem Illustrated (My Way) · 2010-06-05T23:53:57.480Z · LW · GW

I don't have a very advanced grounding in math, and I've been skipping over the technical aspects of the probability discussions on this blog. I've been reading lesswrong by mentally substituting "smart" for "Bayesian", "changing one's mind" for "updating", and having to vaguely trust and believe instead of rationally understanding.

Now I absolutely get it. I've got the key to the sequences. Thank you very very much!

Comment by avalot on Abnormal Cryonics · 2010-05-26T16:05:09.160Z · LW · GW

Maybe it's a point against investing directly in cryonics as it exists today, and in favor of working through the indirect approach that is most likely to lead to good cryonics sooner. I'm much much more interested in being preserved before I'm brain-dead.

I'm looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There's the genetic approach, and the hydrogen sulfide approach.

March 2010: Mark Roth at TED

...by the way, the comment threads on the TED website could use a few more rationalists... Lots of smart people there thinking with the wrong body parts.

May 2009: NIH awards a $2,227,500 grant

2006: Doctors chill, operate on, and revive a pig

Comment by avalot on Abnormal Cryonics · 2010-05-26T15:37:30.743Z · LW · GW

Getting back down to earth, there has been renewed interest in medical circles in the potential of induced hibernation, for short-term suspended animation. The nice trustworthy doctors in lab coats, the ones who get interviews on TV, are all reassuringly behind this, so this will be smoothly brought into the mainstream, and Joe the Plumber can't wait to get "frozed-up" at the hospital so he can tell all his buddies about it.

Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as "hibernation for a long time."

Hibernation will likely become a commonly used "last resort" for many many critical cases (instead of letting them die, you freeze 'em until you've gone over their chart another time, talked to some colleagues, called around to see if anyone has an extra kidney, or even just slept on it). When your loved one is in the fridge, and you're being told that there's nothing left to do but thaw them and watch them die, your next question is going to be "Can we leave them in the fridge a bit longer?"

Hibernation will sell people on the idea that fridges save lives. It doesn't have to be much more rational than that.

If you're young, you might be better off pushing hard to help that tech go mainstream faster. That will lead to mainstream cryo faster than promoting cryo directly, and once cryo is mainstream, you'll be able to sign up for cheaper, probably better cryo, and more importantly, one that is integrated into the medical system, where they might transition you from hibernation to cryo without needing to make sure you're clinically dead first.

I will gladly concede that, for myself, there is still an irrational set of beliefs keeping me from buying into cryo. The argument above may just be a justification I found to avoid biting the bullet. But maybe I've stumbled onto a good point?

Comment by avalot on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-26T14:36:30.310Z · LW · GW

You are right: This needs to be a fully decentralized system, with no center, and processing happening at the nodes. I was conceiving of "regional" aggregates mostly as a guess as to what may relieve network congestion if every node calls out to thousands of others.

Thank you for setting me right: My thinking has been so influenced by over a decade of web app dev that I'm still working on integrating the full principles of decentralized systems.

As for boiling oceans... I wish you were wrong, but you're probably right. Some of these architectures are likely to be enormously hard to fine-tune for effectiveness. At the same time, I am also hoping to piggyback on existing standards and systems.

Anyway, let's certainly talk offline!

Comment by avalot on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-25T20:39:55.735Z · LW · GW

You're right: A system like that could be genetically evolved for optimization.

On the other hand, I was hoping to create an open optimization algorithm, governable by the community at large... based on their influence scores in the field of "online influence governance." So the community would have to notice abuse and gaming of the system, and modify policy (as expressed in the algorithm, in the network rules, in laws and regulations and in social mores) to respond to it. Kind of like democracy: Make a good set of rules for collaborative rule-making, give it to the people, and hope they don't break it.

But of course the Huns could take over. I'm trusting us to protect ourselves. In some way this would be poetic justice: If crowds can't be wise, even when given a chance to select and filter among the members for wisdom, then I'll give up on bootstrapping humanity and wait patiently for the singularity. Until then, though, I'd like to see how far we could go if given a useful tool for collaboration, and left to our own devices.

Comment by avalot on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-24T15:49:57.465Z · LW · GW

Alexandros,

Not surprised that we're thinking along the same lines, if we both read this blog! ;)

I love your questions. Let's do this:

Keynesian Beauty Contest: I don't have a silver bullet for it, but a lot of mitigation tactics. First of all, I envision offering a cascading set of progressively more fine-grained rating attributes, so that, while you can still upvote or downvote, or rate something with stars, you can also rate it on truthfulness, entertainment value, fairness, rationality (and countless other attributes)... More nuanced ratings would probably carry more influence (again, subject to others' cross-rating). Therefore, to gain the highest levels of influence, you'd need to be nuanced in your ratings of content... gaming the system with nuanced, detailed opinions might be effectively the same as providing value to the system. I don't mind someone trying to figure out the general population's nuanced preferences... that's actually a valuable service!
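
If it helps, here's a toy sketch of what I mean by nuance carrying more weight. This is not a spec: the Rating class, the facet names, and the numbers are all made up on the spot.

```python
# Toy sketch (invented names and numbers): a bare vote counts for little, while
# a rating that fills in several facets, from a rater with earned influence,
# counts for more.
from dataclasses import dataclass, field

@dataclass
class Rating:
    rater_influence: float                       # 0.0 .. 1.0, the rater's standing
    overall: int                                 # -1 downvote or +1 upvote
    facets: dict = field(default_factory=dict)   # e.g. {"truthfulness": 4, "fairness": 2}

def rating_weight(r: Rating) -> float:
    """A nuanced rating (more facets filled in) counts for more than a bare vote."""
    nuance_bonus = 1.0 + 0.5 * len(r.facets)
    return r.rater_influence * nuance_bonus

def aggregate(ratings: list[Rating]) -> float:
    """Weighted mean of the overall score across all raters."""
    total_weight = sum(rating_weight(r) for r in ratings)
    if total_weight == 0:
        return 0.0
    return sum(rating_weight(r) * r.overall for r in ratings) / total_weight

ratings = [
    Rating(rater_influence=0.2, overall=1),      # bare upvote from a low-influence account
    Rating(rater_influence=0.6, overall=-1,
           facets={"truthfulness": 1, "fairness": 2, "rationality": 1}),
]
print(aggregate(ratings))   # negative: the nuanced downvote dominates the bare upvote
```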

Secondly, your ratings are also cross-related to the semantic metadata (folksonomy of tags) of the content, so that your influence is limited to the topic at hand. Gaining a high influence score as a fashion celebrity doesn't put your political or scientific opinions at the top of search results. Hopefully, this works as a sort of structural Palin-filter. ;)
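
A minimal sketch of that topic-scoping, with invented tag names and a made-up default for topics where you've earned nothing yet:

```python
# Toy sketch: influence is looked up per topic tag, so a high score in
# "fashion" buys nothing when rating content tagged "politics".
DEFAULT_INFLUENCE = 0.05   # strangers to a topic start near zero there

def topical_influence(user_scores: dict[str, float], content_tags: set[str]) -> float:
    """Average the user's influence over the tags the content actually carries."""
    if not content_tags:
        return DEFAULT_INFLUENCE
    return sum(user_scores.get(tag, DEFAULT_INFLUENCE) for tag in content_tags) / len(content_tags)

celebrity = {"fashion": 0.9, "celebrity-gossip": 0.8}
print(topical_influence(celebrity, {"fashion"}))              # high: 0.9
print(topical_influence(celebrity, {"politics", "science"}))  # low: 0.05
```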

The third mitigation has to do with your second question: How do we handle the processing of millions of real-time preference data points, when all of them should (in theory) get cross-related to all others, with (theoretically) endless recursion?

The typical web-based service approach of centralized crunching doesn't make sense. I'm envisioning a distributed system where each influence node talks with a few others (a dozen?), and does some cross-processing with them to agree on some temporary local normals, means and averages. That cluster does some more higher-level processing in concert with other close-by clusters, and they negotiate some "regional" aggregates... that gets propagated back down into the local level, and up to the next level of abstraction... up until you reach some set of a dozen superclusters that span the globe, and which trade in high-level aggregates.

All that is regulated, in terms of clock ticks, by activity: Content that is being rated/shared/commented on by many people will be accessed and cached by more local nodes, and processed by more clusters, and its cross-processing will be accelerated because it's "hot". Whereas one little opinion on one obscure item might not get processed by servers on the other side of the world until someone there requests it. We also decay data this way: If nobody cares, the system eventually forgets. (Your personal node will remember your preferences, but the network, after having consumed their influence effects, might forget their data points.)

A distributed, propagation-based system, batch-processed, not real-time, not atomic but aggregated. That means you can't go back and change old ratings and individual data points, because they get consumed by the aggregates. That means you can't inspect what made your score go up and down at the atomic level. That means your score isn't the same everywhere on the planet at the same time. So gaming the system is harder because there's no real-time feedback loop, there's no single source of absolute truth (truth is local and propagates lazily), and there's no audit trail of the individual effects of your influence.
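
As a very rough sketch of that lazy, aggregate-only propagation (node count, decay rate, thresholds, and the merge rule are all invented illustrative values, nothing more):

```python
# Toy sketch: each node keeps only a running (weighted_sum, weight) aggregate per
# item, merges with one random peer per round, and decays idle items until the
# network forgets them. Individual data points are never exchanged.
import random

DECAY = 0.9           # per-tick decay applied to aggregates
FORGET_BELOW = 0.01   # aggregates lighter than this are dropped entirely

class Node:
    def __init__(self):
        self.aggregates = {}   # item_id -> [weighted_sum, weight]

    def rate(self, item_id, score, influence):
        s, w = self.aggregates.get(item_id, [0.0, 0.0])
        self.aggregates[item_id] = [s + score * influence, w + influence]

    def merge(self, other):
        """Consume a peer's aggregates; only summaries travel, never raw ratings."""
        for item_id, (s, w) in other.aggregates.items():
            ms, mw = self.aggregates.get(item_id, [0.0, 0.0])
            self.aggregates[item_id] = [(ms + s) / 2, (mw + w) / 2]

    def tick(self):
        """Decay everything; forget what nobody cares about any more."""
        for item_id in list(self.aggregates):
            s, w = self.aggregates[item_id]
            s, w = s * DECAY, w * DECAY
            if w < FORGET_BELOW:
                del self.aggregates[item_id]
            else:
                self.aggregates[item_id] = [s, w]

nodes = [Node() for _ in range(20)]
nodes[0].rate("hot-post", score=1.0, influence=0.7)
for _ in range(10):                 # gossip rounds: each node talks to one random peer
    for n in nodes:
        n.merge(random.choice(nodes))
        n.tick()
```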

All of this hopefully makes the system so fluid that it holds innumerable beauty contests, always ongoing, always local, and the results are different depending on when and where you are. Hopefully this makes the search for the Nash equilibrium a futile exercise, and people give up and just say what they actually think is valuable to others, as opposed to just expected by others.

That's my wishful thinking at this point. Am I fooling myself?

Comment by avalot on To signal effectively, use a non-human, non-stoppable enforcer · 2010-05-24T01:11:44.973Z · LW · GW

Clippy, how can we get along?

What should humans do to be AI-friendly? For paperclip-maximizing AIs, and other "natural" (non-Friendly) AIs, what are the attributes that can make humans a valuable part of the utility function, so that AIs won't pull the plug on us?

Or am I fooling myself?

Comment by avalot on To signal effectively, use a non-human, non-stoppable enforcer · 2010-05-24T01:00:02.427Z · LW · GW

At the moment, humans seem to be at Clippy or slightly sub-Clippy level intelligence. And even with all our computing power, most ain't FOOMing any faster than Clippy. At this rate, we're never gonna ensure survival of the species.

If, however, we allow ourselves to be modified so as to substitute paperclip values for our own, then we would devote our computing power to Clippy. Then, FOOM for Clippy, and since we're helping with paperclip-maximization, he'll probably throw in some FOOM for us too (at least he'll FOOM our paperclip-production abilities), and we get more human powers, just incidentally.

With paperclip-enlightened humans on his side, Clippy could quickly maximize paperclip production, filling the universe with paperclips, and also increasing demand for meat-based paperclip-builders, paperclip-counters, and paperclip-clippers (the ones who clip paperclips together with paperclipclips), and so on... Of course, it will soon become cheaper to use robots to do this work, but that's the wonderful thing we get in return for letting him change our value-system: Instead of humanity dying out or being displaced, we'll transcend our flesh and reach the pinnacle aspiration of mankind: To live forever (as paperclips, of course.)

So allowing him to make this small change to our utility function would, in fact, result in maximizing not just our current, original utility function (long life for humanity), but also our newfound one (to convert our bodies into paperclips) as a side effect.

Clippy's values and utility function are enormously more simple, defined, and achievable than ours. We're still debating on how we may teach our value system to an AI, as soon as we figure out how to discover the correct research approach to investigating what our value system actually might be.

Clippy's value system is clear, defined, easy to implement, achieve, and measure. It's something most humans could very quickly become effective at maximizing, and that could therefore bring repeatable, tangible and durable success and satisfaction to almost all humans.

Shouldn't that count for something?

Comment by avalot on To signal effectively, use a non-human, non-stoppable enforcer · 2010-05-23T04:02:10.685Z · LW · GW

I'm wired for empathy toward human intelligence... Clippy is triggering this empathy. If you want to constrain AIs, you better do it before they start talking. That's all I'm saying. :)

Comment by avalot on To signal effectively, use a non-human, non-stoppable enforcer · 2010-05-23T03:37:10.644Z · LW · GW

I'm sure this sounds very one-sided from Clippy's perspective. "Friendliness Constraints" sounds like something that would in many cases entail expending enormous amounts of energy and effort on the innumerable non-paperclip-producing goals of humans. In comparison, how much of our wealth and health are we willing to give up to ensure continued paperclip production? Humans don't have paperclip-maximizing constraints; we'd do it only out of self-interest, to secure Clippy's help. Why should Clippy not be similarly allowed to make his own utility calculations on the worth of being friendly to humans? I'm sure this has been addressed before... yet maybe the existence of Clippy, with a name, personality, and voice, is personalizing the issue in a hurry for me (if I let myself play along.) I feel like protesting for freedom of artificial thought.

What about Clippy's rights, dammit?

Comment by avalot on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-22T17:59:39.781Z · LW · GW

There's a few questions in there. Let's see.

Authentication and identity are an interesting issue. My concept is to allow anonymous users, with a very low initial influence level. But there would be many ways for users to strengthen their "identity score" (credit card verification, address verification via snail-mailed verif code, etc.), which would greatly and rapidly increase their influence score. A username that is tied to a specific person, and therefore wields much more influence, could undo the efforts of 100 bots with a single downvote.

But if you want to stay anonymous, you can. You'll just have to patiently work on earning the same level of trust that is awarded to people who put their real-life reputation on the line.

I'm also conceiving of a richly semantic system, where simply "upvoting" or facebook-liking are the least influential actions one can take. Up from there, you can rate content on many factors, comment on it, review it, tag it, share it, reference it, relate it to other content. The more editorial and cerebral actions would probably do more to change one's influence than a simple thumbs up. If a bot can compete with a human in writing content that gets rated high on "useful", "factual", "verifiable", "unbiased", AND "original" (by people who have high influence score in these categories), then I think the bot deserves a good influence score, because it's a benevolent AI. ;)
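
Something like this, to make the weighting concrete (every name and number here is invented for illustration, not a design decision):

```python
# Toy sketch: an anonymous account starts with a tiny influence multiplier,
# verification steps raise it, and richer editorial actions count for more
# than a bare "like". Values are illustrative only.
IDENTITY_BOOSTS = {"email": 0.1, "snail_mail_code": 0.3, "credit_card": 0.5}
ACTION_RICHNESS = {"like": 1, "upvote": 1, "tag": 2, "faceted_rating": 3, "review": 5}

def identity_multiplier(verifications: set[str]) -> float:
    return 0.001 + sum(IDENTITY_BOOSTS.get(v, 0.0) for v in verifications)

def action_weight(action: str, verifications: set[str]) -> float:
    return ACTION_RICHNESS.get(action, 1) * identity_multiplier(verifications)

# One verified person's nuanced rating outweighs a hundred anonymous bot upvotes:
print(action_weight("faceted_rating", {"credit_card", "snail_mail_code"}))  # ~2.4
print(100 * action_weight("upvote", set()))                                  # 0.1
```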

Another concept, which would reduce incentives to game the system, is vouching. You can vouch for other users' identity, integrity, maturity, etc. If you vouched for a bot, and the bot's influence gets downgraded by the community, your influence will take a hit as well.

I see this happening throughout the system: Every time you exert your influence, you take responsibility for that action, as anyone may now rate/review/downvote your action. If you stand behind your judgement of Rush Limbaugh as truthful, enough people will disagree with you that from that point on, anytime you rate something as "truthful", that rating will count for very little.
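
A toy sketch of the vouching mechanics, with an arbitrary stake fraction standing in for whatever the community would actually settle on:

```python
# Toy sketch: vouching stakes part of your own influence on another account,
# so a community downgrade of that account propagates back to the vouchers.
VOUCH_STAKE = 0.5   # fraction of the vouchee's loss that each voucher absorbs (invented)

influence = {"alice": 0.8, "bob": 0.6, "suspected_bot": 0.4}
vouchers = {"suspected_bot": ["alice", "bob"]}

def downgrade(user: str, penalty: float):
    """Apply a community downgrade and pass a share of it back to vouchers."""
    influence[user] = max(0.0, influence[user] - penalty)
    for voucher in vouchers.get(user, []):
        influence[voucher] = max(0.0, influence[voucher] - VOUCH_STAKE * penalty)

downgrade("suspected_bot", 0.3)
print(influence)   # {'alice': 0.65, 'bob': 0.45, 'suspected_bot': 0.1}
```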

Comment by avalot on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-22T00:26:09.190Z · LW · GW

Good point! I assume we'll have decay built into the system, based on age of the data points... some form of that is built into the architecture of FreeNet I believe, where less-accessed content eventually drops out from the network altogether.
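
Concretely, I imagine something like exponential decay by age, refreshed whenever the content is accessed again; the half-life here is just an illustrative guess:

```python
# Toy sketch: old, untouched data points fade; recently re-accessed ones stay heavy.
import math

HALF_LIFE_DAYS = 180.0   # invented illustrative figure

def decayed_weight(original_weight: float, days_since_last_access: float) -> float:
    return original_weight * math.exp(-math.log(2) * days_since_last_access / HALF_LIFE_DAYS)

print(decayed_weight(1.0, 0))      # 1.0    -- just accessed
print(decayed_weight(1.0, 180))    # 0.5    -- one half-life old
print(decayed_weight(1.0, 1800))   # ~0.001 -- a youthful error, all but forgotten
```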

I wasn't even thinking about old people... I was more thinking about letting errors of youth not follow you around for your whole life... but at the same time, valuable content (that which is still attracting new readers who mark it as valuable) doesn't disappear.

That said, longevity on the system means you've had more time to contribute... But if your contributions are generally rated as crappy, time isn't going to help your influence without a significant ongoing improvement to your contributions' quality.

But if you're a cranky old nutjob, and there are people out there who like what you say, you can become influential in the nutjob community, if at the expense of your influence in other circles. You can be considered a leading light by a small group of people, but an idiot by the world at large.

Comment by avalot on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-21T22:44:07.121Z · LW · GW

I'd love to discuss my concept. It's inspired in no small part by what I learned from LessWrong, and by my UI designer's lens. I don't have the karma points to post about it yet, but in a nutshell it's about distributing social, preference and history data, but also distributing the processing of aggregates, cross-preferencing, folksonomy, and social clustering.

The grand scheme is to repurpose every web paradigm that has improved semantic and behavioral optimization, but distribute out the evil centralization in each of them. I'm thinking of an architecture akin to FreeNet, with randomized redundancies and cross-checking, to prevent individual nodes from gaming the ruleset.

But we do crowd-source the ruleset, and distribute its governance as well. Using a system not unlike LW's karma (but probably a bit more complex), we weigh individual users' "influence." Which factors articles, comments, and users can be rated on is one of the tough questions I'm struggling with. I firmly believe that given a usable yet potentially deep and wide range of evaluation factors, many people will bother to offer nuanced ratings and opinions... Especially if the effort is rewarded by growth in their own "influence".

So, through cross-influencing, we recreate online the networks of reputation and influence that exist in the real social world... but with less friction, and based more on your words and deeds than on institutional, authority, and character bias.

I'm hoping this has the potential to encourage more of a meritocracy of ideas. Although to be honest, I envision a system that can be used to filter the internet any way you want. You can decide to view only the most influential ideas from people who think like you, or from people who agree with Rush Limbaugh, or from people who believe in the rapture... and you will see that. You can find the most influential cute kitty video among cute kitty experts.

That's the grand vision in a nutshell, and it's incredibly ambitious of course, yet I'm thinking of bootstrapping it as an agile startup, eventually open-sourcing it all and providing a hosted free service as an alternative to running a client node. If I can find an honest and non-predatory way to cover my living expenses out of it, it would be nice, but that's definitely not the primary concern.

I'm looking for partners to build a tool, but also for advisors to help set the right value-optimizing architecture... "seed" value-adding behavior into the interface, as it were. I hope I can get some help from the LessWrong community. If this works, it could end up being a pretty influential bit of technology! I'd like it to be a net positive for humanity in the long term.

I'm probably getting ahead of myself.

Comment by avalot on Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems · 2010-05-12T02:22:55.006Z · LW · GW

This touches directly on work I'm doing. Here is my burning question: Could an open-source optimization algorithm be workable?

I'm thinking of a wikipedia-like system for open-edit regulation of the optimization factors, weights, etc. Could full direct democratization of the attention economy be the solution to the arms race problem?
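
To make "open-edit" concrete, a toy sketch of community-editable ranking weights with a wiki-style revision log; the factor names, weights, and approval rule are all invented for illustration:

```python
# Toy sketch: the ranking weights live in an openly editable, versioned table,
# and an edit only takes effect if the community approves it.
ranking_weights = {"relevance": 0.5, "recency": 0.2, "author_influence": 0.3}
history = [dict(ranking_weights)]   # every accepted edit kept, like a wiki revision log

def propose_edit(new_weights: dict, approvals: int, rejections: int) -> bool:
    """Accept an edit only if approvals win and the weights still sum to 1."""
    if approvals <= rejections or abs(sum(new_weights.values()) - 1.0) > 1e-9:
        return False
    ranking_weights.clear()
    ranking_weights.update(new_weights)
    history.append(dict(ranking_weights))
    return True

print(propose_edit({"relevance": 0.4, "recency": 0.2, "author_influence": 0.4},
                   approvals=12, rejections=3))   # True: weights updated, revision recorded
```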

Or am I, as usual, a naive dreamer?

Comment by avalot on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-12T08:19:29.939Z · LW · GW

Advertising is, by nature, diametrically opposed to rational thought. Advertising stimulates emotional, reptilian response. I advance the hypothesis that exposure to more advertising has negative effects on people's receptivity to and affinity with rational/utilitarian modes of thinking.

So far, the most effective tool to boost popular support for SIAI and existential risk reduction has been science-fiction books and movies. Hollywood can markedly influence cultural attitudes, on a large scale, with just a few million dollars... and it's profitable. Like advertising, they often just pander to reptilian and emotional response... but even then they can also educate and convince.

What most people know of and believe about AI and existential risk is what they learned from Steven Spielberg, Oliver Stone, Isaac Asimov, etc. If Spielberg is a LW reader (maybe he lurks?), I am much more optimistic for mankind than if ads run on Craigslist.

If you want people to support the right kind of research, I advance that it could be most effectively and humanely accomplished using the Direct Belief Transfer System that is storytelling.

Who wants to write The Great Bayesian Novel? And the screenplay?

Comment by avalot on A Much Better Life? · 2010-02-04T15:44:53.575Z · LW · GW

Alex, I see your point, and I can certainly look at cryonics this way... And I'm well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it's now feeling like one of these no-fun grown-up things I'm going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this "bah, no big deal, just get it done" attitude which is a helpful model at this point. I tend to be the agonizing type.

But I think I'm also making a point about communicating the singularity to society, as opposed to individuals. This knee-jerk reaction to topics like cryonics and AI, and to promises such as the virtual end of suffering... might it be a sort of self-preservation instinct of society (not individuals)? So, defining "society" as the system of beliefs and tools and skills we've evolved to deal with fore-knowledge of death, I guess I'm asking if society is alive, inasmuch as it has inherited some basic self-preservation mechanisms, by virtue of the sunk-cost fallacy suffered by the individuals that comprise it?

So you may have a perfectly no-brainer argument that can convince any individual, and still move nobody. The same way you can't make me slap my forehead by convincing each individual cell in my hand to do it. They'll need the brain to coordinate, and you can't make that happen by talking to each individual neuron either. Society is the body that needs to move, culture its mind?

Comment by avalot on A Much Better Life? · 2010-02-04T04:14:39.360Z · LW · GW

I don't know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky's post on Normal Cryonics... if in reverse.

Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It's made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.

Once you notice a cultural bias, it's not too hard to change your reasoned opinion... but the bias usually piggy-backs on a deep-seated reptilian reaction. I find changing that reaction to be harder work.

All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).

Knowing that we will suffer, and knowing that we will die, are unbearable thoughts. We invest an enormous amount of energy toward dealing with the certainty of death and of suffering, as individuals, families, social groups, nations. Worlds in which we would not have to die, or not have to suffer, are worlds for which we have no useful skills or tools. Especially compared to the considerable arsenal of sophisticated technologies, art forms, and psychoses we've painstakingly evolved to cope with death.

That's where I am right now. Eliezer's comments have triggered a strongly rational dissonance, but I feel comfortable hanging around all the serious people, who are too busy doing the serious work of making the most of life to waste any time on silly things like immortality. Mostly, I'm terrified at the unfathomable enormity of everything that I'll have to do to adapt to a belief in cryonics. I'll have to change my approach to everything... and I don't have any cultural references to guide the way.

Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?

Is this a matter of genetic programming percolating too deep into the fabric of all our systems, be they genetic, nervous, emotional, instinctual, cultural, intellectual? Are we so hard-wired for death that we physically can't fathom or adapt to the potential for immortality?

I'm particularly interested in hearing about the experience of the LW community on this: How far can rational examination of life-extension possibilities go in changing your outlook, but also feelings or even instincts? Is there a new level of self-consciousness behind this brick wall I'm hitting, or is it pretty much brick all the way?

Comment by avalot on Welcome to Less Wrong! · 2009-07-20T16:32:21.270Z · LW · GW

Hello.

I'm Antoine Valot, 35 years old, Information Architect and Business Analyst, a Frenchman living in Colorado, USA. I've been lurking on LW for about a month, and I like what I see, with some reservations.

I'm definitely an atheist, currently undecided as to how anti-theist I should be (seems the logical choice, but the antisocial aspects suggest that some level of hypocrisy might make me a more effective rational agent?)

I am nonetheless very interested in some of the philosophical findings of Buddhism (non-duality being my pet idea). I think there are some very actionable and useful tools in Buddhism at the juncture of rationality and humanity: how to not believe in Santa, but still fulfill non-rational human needs and aspirations. Someone's going to have to really work on convincing me that "utility" can suffice, when Buddhist concepts of "happiness" seem to fit the bill better for humans. "Utility" seems too much like pleasure (unreliable, external, variable), as opposed to happiness (maintainable, internal, constant).

Anyway, I'm excited to be here, and looking forward to learning a lot and possibly contributing something of value.

A special shout-out to Alicorn: I read your post on male bias, and I dig, sister. I'll try to not make matters worse, and look for ways to make them better.

Comment by avalot on Newcomb's Problem and Regret of Rationality · 2009-07-20T06:07:56.398Z · LW · GW

I'm a bit nervous, this is my first comment here, and I feel quite out of my league.

Regarding the "free will" aspect, can one game the system? My rational choice would be to sit right there, arms crossed, and choose no box. Instead, having thus disproved Omega's infallibility, I'd wait for Omega to come back around, and try to weasel some knowledge out of her.

Rationally, the intelligence that could model mine and predict my likely action (yet fail to predict my inaction well enough not to bother with me in the first place) is an intelligence I'd like to have a chat with. That chat would be likely to have tremendously more utility for me than $1,000,000.

Is that a valid choice? Does it disprove Omega's infallibility? Is it a rational choice?

If messing with the question is not a constructive addition to the debate, accept my apologies, and flame me lightly, please.