Comments

Comment by LauraABJ on [deleted post] 2023-11-13T22:13:33.991Z

Thank you for saying this outright. I was appalled by Scott's lack of epistemic rigor and by how irresponsible it was to use his widely read platform and his trust as a physician to fool people into thinking that cutting out a major organ has very little risk. Maybe he really did just fool himself, but I don't think that is an excuse when your whole deal is being the guy with good epistemics who looks at medical research. A comment he made later about guilting 40,000 randomly selected Americans into donating indicates clearly that he has an Agenda. He does not have your best interests at heart at all and thinks this is obligatory and not supererogatory. If people understand that there are large, not necessarily quantified risks here, and still want to donate, then go right ahead. I think you are right that this is more about donors purifying themselves through self-sacrifice than it is about actually improving the world, but hey, a lot of people seem to report that donation has long-term improved their mental well-being. What I object to is minimizing the risks and guilting people who second-guess the BS data. People really need to go into this with their eyes open and make this choice for themselves. I don't think Scott's article is written in good faith.

And what effect would receiving this letter have on any of your patients, Mr. Scott? Do you think they will be better off? Or should we convince suicidal people to stick around because they have so many useful organs? Hey, don't feel like you're a burden on your parents - they might need your kidney one day!

Comment by LauraABJ on How I Lost 100 Pounds Using TDT · 2011-03-15T15:52:57.788Z · LW · GW

It seems to me that one needs to place a large amount of trust in one's future self to implement such a strategy. It also requires that you be able to predict your future self's utility function. If you have a difficult time predicting what you will want and how you will feel, it becomes difficult to calculate the utility of any given precommitment. For example, I would be unconvinced that deciding to eat a donut now means that I will eat a donut every day, or that not eating a donut now means I will not eat a donut every day. Knowing that I want a donut now and will be satisfied with that seems like an immediate win, while I do not know that I will be fat later. To me this seems like trading a definite win for a definite loss plus a potential bigger win. Also, it is not clear that there wouldn't be other effects. Not eating the donut now might make me dissatisfied and want to eat twice as much later in the day to compensate. If I knew exactly what the effects of action EAT DONUT vs. NOT EAT DONUT were (including mental duress, alternative pitfalls to avoid, etc.), then I would be better able to pick a strategy. The more predictable you are, the more you can plan a strategy that makes sense in the long term. In the absence of this information, most of us just 'wing it' and do what seems best at the given moment. It would seem that deciding to be a TDT agent is deciding to always be predictable in certain ways. But that also requires trusting that future you will want to stick to that decision.
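To make the "definite win vs. potential bigger win" trade-off concrete, here is a toy expected-utility sketch. All utilities, probabilities, and names below are made-up placeholders chosen for illustration, not anything stated in the comment; the point is only the shape of the calculation.

```python
# Toy sketch: eating the donut is a certain small win now; the case against it
# rests on how strongly today's choice predicts the long-run habit.
# All numbers are illustrative placeholders.

u_donut_now = 1.0    # immediate enjoyment of today's donut
u_fat_later = -50.0  # long-run cost of the everyday-donut habit

def ev_eat(p_habit_if_eat: float) -> float:
    # Eat today: certain win now, plus the habit cost weighted by how
    # predictive today's choice is of the daily habit.
    return u_donut_now + p_habit_if_eat * u_fat_later

def ev_skip(p_habit_if_skip: float) -> float:
    # Skip today: forgo the certain win, avoid most (but maybe not all) of the risk.
    return p_habit_if_skip * u_fat_later

# If today's choice barely predicts the habit, eating wins; if it is strongly
# predictive (the TDT-style "I am a consistent agent" assumption), skipping wins.
for p_eat, p_skip in [(0.02, 0.01), (0.5, 0.05)]:
    print(f"P(habit|eat)={p_eat:.2f}: eat EV={ev_eat(p_eat):+.2f}, skip EV={ev_skip(p_skip):+.2f}")
```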

Comment by LauraABJ on Ability to react · 2011-02-21T07:47:26.046Z · LW · GW

I know that feeling, but I don't know how conscious it is. Basically, when the outcome matters in a real, immediate way and is heavily dependent on my actions, I get calm and go into 'I must do what needs to be done' mode. When my car lost traction in the rain and spun on the highway, I probably saved my life by reasoning out how best to get control of it, pumping the brake, and getting it into a clearing away from other vehicles/trees, all within a time frame that was under a minute. Immediately afterwards the thoughts running through my head were not 'Oh fuck, I could have died!' but 'How could I have handled that better?' and 'Oh fuck, I think the car is trashed.' It was only after I climbed out of the car that I realized I was physically shaking.

Likewise, when a man collapsed at synagogue after most people had left (there were only 6 of us) and hit his head on the table, leaving a not unimpressive pool of blood on the floor, I immediately went over to him, checked his vitals, and declared that someone should call an ambulance. The other people just stood around looking dumbfounded, and it turned out the problem was that no one had a cell phone on Saturday, so I called and was already giving the address by the time the man's friend realized there was something wrong and began screaming.

Doing these things did not feel like a choice. They were the necessary next action and so I did them. Period. I don't know how to describe that. "Emergency Programming"?

Comment by LauraABJ on Procedural Knowledge Gaps · 2011-02-08T06:34:01.322Z · LW · GW

Ok, folding a fitted sheet is really fucking hard! I don't think that deserves to be on the list, since it makes no difference whatsoever in life whether you properly fold a fitted sheet or just kinda bundle it up and stuff it away. Not being able to deposit a check, mail a letter, or read a bus schedule, on the other hand, can get you in trouble when you actually need to. Here's to not caring about linen care!

Comment by LauraABJ on You're in Newcomb's Box · 2011-02-06T03:42:19.863Z · LW · GW

That's kind of my point-- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend not to involve 'perfect' predictors, but rather other flawed agents. The decision to cooperate or not cooperate is thus dependent on the calculated utility of doing so.

Comment by LauraABJ on You're in Newcomb's Box · 2011-02-06T00:56:07.115Z · LW · GW

"I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand."

Agreed. It would be like opening the first box, finding the million dollars, and then having someone explain Newcomb's problem to you as you consider whether or not to open the second. I would think, "Ha! Omega was WRONG!!!!" and laugh as I dove into the second box.

edit: Because no contract was made between TDT agents before the first box was opened, there seems to be no reason to honor a contract drawn up only afterwards.

Comment by LauraABJ on You're in Newcomb's Box · 2011-02-06T00:33:14.103Z · LW · GW

Ok, so as I understand timeless decision theory, one wants to honor the precommitments one would have made if the outcome actually depended on the answer, regardless of whether the outcome actually does depend on it. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless decision theoretical agents (including your future selves), and therefore big wins can be had all around, especially when trying to predict your own future behavior.

So, if you buy the idea that there are multiple universes, and multiple instantiations of this problem, and you somehow care about the results in these other universes, and your actions indicate probabilistically how other instantiations of your predicted self will act, then by all means, One Box on problem #1.

However, if you do NOT care about other universes, and believe this is in fact a single instantiation, and you are not totally freaked out by the idea of disobeying the desires of your just-revealed creator (or actually get some pleasure out of the idea), then please Two Box. You as you are in this universe will NOT unexist if you do so. You know that going into it. So, calculate the utility you gain from getting a million dollars this one time vs. the utility you lose from being an imperfect timeless decision theoretical agent. Sure, there's some loss, but at a high enough payout, it becomes a worthy trade.

I think Newcomb's problem would be more interesting if the 1st box contained 1/2 million and the 2nd box contained 1 million, and Omega was only right, say, 75% of the time... See how fast answers start changing. What if Omega thought you were a dirty two-boxer and left the second box empty? Then you would be screwed if you one-boxed! Try telling your wife that you gave the correct 'timeless decision theoretical' answer when you come home with nothing.
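A minimal sketch of the expected-value arithmetic behind that variant, assuming the naive evidential reading (you simply condition on your own choice), that the 1st box always holds the $500k, and the 75% accuracy named in the comment; the function name and the extra accuracies printed are mine:

```python
# Modified Newcomb: box 1 always holds $500k; box 2 holds $1M only if Omega
# predicted one-boxing; Omega is right with the given accuracy.
# This is the naive "condition on your own choice" scoring, not the only analysis.

def expected_value(accuracy: float, box1: float = 500_000, box2: float = 1_000_000):
    """Return (EV of one-boxing, EV of two-boxing) given Omega's accuracy."""
    # One-boxer: gets box 2 only when Omega correctly predicted one-boxing.
    ev_one_box = accuracy * box2
    # Two-boxer: always gets box 1; gets box 2 too only when Omega was wrong.
    ev_two_box = box1 + (1 - accuracy) * box2
    return ev_one_box, ev_two_box

for acc in (0.75, 0.80, 0.99):
    one, two = expected_value(acc)
    print(f"accuracy={acc:.2f}  one-box EV=${one:,.0f}  two-box EV=${two:,.0f}")

# At exactly 75% accuracy the two strategies tie at $750,000; a few points of
# accuracy either way flips which answer this naive EV calculation favors.
```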

Comment by LauraABJ on Fast Minds and Slow Computers · 2011-02-05T19:34:22.881Z · LW · GW

This is a truly excellent post. You bring the problem that we are dealing with into a completely graspable inferential distance and set up a mental model that essentially asks us to think like an AI and succeeds. I haven't read anything that has made me feel the urgency of the problem as much as this has in a really long time...

Comment by LauraABJ on Scientific Self-Help: The State of Our Knowledge · 2011-01-23T05:26:44.138Z · LW · GW

Yes.

Comment by LauraABJ on Scientific Self-Help: The State of Our Knowledge · 2011-01-23T05:13:53.277Z · LW · GW

This is true. We were (and are) in the same social group, so I didn't need to go out of my way for repeated interaction. Had I met him once and he failed to pick up my sigs, then NO, we would NOT be together now... This reminds me of a conversation I had with Silas, in which he asked me, "How many dates until....?" And I stared at him for a moment and said, "What makes you think there would be a second if the first didn't go so well?"

Comment by LauraABJ on Scientific Self-Help: The State of Our Knowledge · 2011-01-23T05:06:58.204Z · LW · GW

Self help usually fails because people are terrible at identifying what their actual problems are. Even when they are told! (Ahh, sweet, sweet denial.) As a regular member of the (increasingly successful) OB-NYC meetup, I have witnessed a great deal of 'rationalist therapy,' and frequently we end up talking about something completely different from what the person originally asked for therapy for (myself included). The outside view of other people (preferably rationalists) is required to move forward on the vast majority of problems. We should also not underestimate the importance of social support and social accountability in general as positive motivating factors. Another reason that self-help might fail is that the people reading these particular techniques are trying to help themselves by themselves. I really hope others from this site take the initiative in forming supportive groups, like the one we have running in NYC.

Comment by LauraABJ on Scientific Self-Help: The State of Our Knowledge · 2011-01-23T04:49:33.214Z · LW · GW

You are very unusual. I love nerds too, and am currently in an amazing relationship with one, but even I have my limits. He needed to pursue me or I wouldn't have bothered. I was quite explicitly testing, and once he realized the game was on, he exceeded expectations. But yeah, there were a couple of months there when I thought, 'To hell with this! If he's not going to make a move at this point, he can't know what he's doing, and he certainly won't be any good at the business...'

Comment by LauraABJ on Open Thread June 2010, Part 2 · 2010-06-09T17:03:58.176Z · LW · GW

Are you intending to do this online or meet in person? If you are actually meeting, what city is this taking place in? Thanks.

Comment by LauraABJ on Virtue Ethics for Consequentialists · 2010-06-04T16:31:53.082Z · LW · GW

I agree that these virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend at lesswrong in which popular modes of thinking are first shunned as being irrational and not based on truth, only to be readopted later as being more functional for achieving one's stated goals. I think this process is important, because it allows one to rationally evaluate which 'irrational' models lead to the best outcome.

Comment by LauraABJ on Diseased thinking: dissolving questions about disease · 2010-06-01T15:09:23.579Z · LW · GW

It seems that one way society tries to avoid the issue of 'preemptive imprisonment' is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.

Comment by LauraABJ on On Enjoying Disagreeable Company · 2010-05-26T15:27:58.470Z · LW · GW

Dear Tech Support, Might I suggest that the entire Silas-Alicorn debate be moved to some meta-section. It has taken over the comments section of an instrumentally useful post, and may be preventing topical discussion.

Comment by LauraABJ on Q&A with Harpending and Cochran · 2010-05-12T16:46:36.928Z · LW · GW

I have always been curious about the effects of mass death on human genetics. Is large-scale death from plague, war, or natural disaster likely to have much effect on the genetics of cognitive architecture, or are outcomes generally too random? Is there evidence for what traits are selected for by these events?

Comment by LauraABJ on What are our domains of expertise? A marketplace of insights and issues · 2010-04-30T21:43:18.348Z · LW · GW

Most people commenting seem to be involved in science and technology (myself included), with a few in business. Are there any artists or people doing something entirely different out there?

To answer the main question, I am an MD/PhD student in neurobiology.

Comment by LauraABJ on Attention Lurkers: Please say hi · 2010-04-17T05:00:12.569Z · LW · GW

Aww, this made my night! Welcome to all!

Comment by LauraABJ on The two insights of materialism · 2010-03-25T17:06:39.403Z · LW · GW

Sure, one can always look at the positive aspects of reality, and many materialists have even tried to put a positive spin on the inevitability of death without an afterlife. But it should not be surprising that what is real is not always what is most beautiful. There is a panoply of reasons not to believe things that are not true, but greater aesthetic value does not seem to be one of them. There is an aesthetic value in the idea of 'The Truth,' but I would not say that this outweighs all the ways in which fantasy can be appealing for most people. And the 'fantasies' of which I am speaking are not completely random untruths, like "Hey, I'm gonna believe in Hobbits, because that would be cool!", but rather ideas that spring from the natural emotional experiences of humanity. They feel correct. Even if they are not.

Comment by LauraABJ on The two insights of materialism · 2010-03-25T00:29:53.069Z · LW · GW

Good post, but I think what people are often seeking in the non-material is not so much an explanation of what they are, but a further connection with other people, deities, spirits, etc. In a crude sense, the Judeo-Christian God gives people an ever-present friend who understands everything about them and always loves them. Materialism would tell them, 'There is no God. You have found that talking to yourself makes you feel that you are unconditionally loved, but it's all in your head.'

On a non-religious note, two lovers may feel that they have bonded such that they are communicating on another level. Which explanation seems more aesthetically pleasing: 1) Your 'souls' are entwined, your 'minds' are one, he/she really does deeply understand you such that words are no longer necessary, you are sharing the same experience. 2) You have found a trigger to an evolutionarily developed emotion that makes you feel as if you are communing. Your lover may or may not have found the same switch. You are each experiencing this in your own way in your own head. You will need to discuss to compare.

And yes, I do think that verbal and physical communication is still pretty great (I mean, that's what we got), but there is a large attraction to believe one's transcendent feelings really do, well, transcend, and that we are not as alone in our minds as we really are.

Comment by LauraABJ on The scourge of perverse-mindedness · 2010-03-23T00:58:29.174Z · LW · GW

While not everyone experiences the 'god-shaped hole,' it would be dense of us not to acknowledge the ubiquity of spirituality across cultures just because we feel no need for it ourselves (feel free to replace 'us' and 'we' with 'many of the readers of this blog'). Spirituality seems to be an aesthetic imperative for much of humanity, and it will probably take a lot of teasing apart to determine which aspects of it are essential to human happiness and which parts are culturally inculcated.

Comment by LauraABJ on The scourge of perverse-mindedness · 2010-03-22T17:27:05.857Z · LW · GW

Ok, so I am not a student of literature or religion, but I believe there are fundamental human aesthetic principles that non-materialist religious and holistic ideas satisfy in our psychology. They try to explain things in large concepts that humans have evolved to easily grasp, rather than the minutiae and logical puzzles of reality. If materialists want these memes to be given up, they will need to create equally compelling human metaphor, which is a tall order if we want everything to convey reality correctly. Compelling metaphor is frequently incorrect. My atheist Jewish husband loves to talk about the beauty of scripture and parables in the Christian bible and stands firm against my insistence that any number of novels are both better written and provide better moral guidance. I personally have a disgust reaction whenever he points out a flowery passage about morality and humanity that doesn't make any actual sense. HOW CAN YOU BE TAKEN IN BY THAT? But unlike practicing religious people, he doesn't 'believe' any of it; he's just attracted to it aesthetically, as an idea, as a beautiful outgrowth of the human spirit. Basically, it presses all the right psychological buttons. This is not to say that materialists cannot produce equally compelling metaphors, but it may be a very difficult task, and the spiritualists have a good, I don't know, 10,000 years on us in homing in on what appeals to our primitive psychology.

Comment by LauraABJ on Rational feelings: a crucial disambiguation · 2010-03-13T20:14:50.726Z · LW · GW

" The negative consequences if I turn out to be wrong seem insignificant - oh no, I tried to deceive myself about my ability to feel differently than I do!"

Repression anyone? I think directly telling yourself, "I don't feel that way, I feel this way!" can be extremely harmful, since you are ignoring important information in the original feeling. You are likely to express your original feelings in some less direct, more destructive, and of course less rational way if you do this. A stereotypical example is that of a man deciding that he should not feel angry that he did not get a promotion at work and then blowing up at his wife for not doing the dishes properly. Maybe there is nothing to actually be angry about, and screaming at his boss certainly wouldn't accomplish anything, but ignoring the feeling as invalid is almost certain to end badly.

I think Alicorn is suggesting that if you attempt to understand why you have the feelings you do, and if these reasons don't make sense, your feelings will likely change naturally without the need to artificially apply different ones.

Comment by LauraABJ on Priors and Surprise · 2010-03-06T23:05:43.622Z · LW · GW

We discussed a similar idea in reference to Godzilla, namely what kind of evidence we would need to believe that 'magical' elements existed in the world. The point you made then was that even something as far outside our scientific understanding as Godzilla would be insufficient evidence to change our basic scientific world view, and that such evidence might not exist even in theory. I think this post could be easily improved by an introduction explaining this point, which you currently leave as an open question at the end.

Comment by LauraABJ on Babies and Bunnies: A Caution About Evo-Psych · 2010-02-23T16:00:22.671Z · LW · GW

Monroe, NY (though he is not a Hasid!)

It's not that they have a strict prohibition on pets; it's more a general disapproval stemming from an appeal to cleanliness. I don't know how the super-orthodox interpret the Torah on this matter.

Comment by LauraABJ on Babies and Bunnies: A Caution About Evo-Psych · 2010-02-22T02:50:29.178Z · LW · GW

I would find this argument much more convincing if it were supported by people who actually have children. My mother goes berserk over a smiling infant in a way I cannot begin to comprehend (I am usually afraid I will accidentally hurt them). My husband, likewise, has an instant affinity for babies and always tries to communicate and play with them. He was raised Jewish with the idea that it is unclean to have animals in the home and does not find animals particularly adorable. In our culture we are inundated with anthropomorphised images of animals on television and are given stuffed toys and pets that we take care of like children. It's not that surprising that we find animals cute when we focus so much attention on them as if they were little people. I do not know that such evaluations of 'cuteness' would hold cross-culturally, especially in cultures where people do kill and eat 'cute' animals on a regular basis.

Comment by LauraABJ on Med Patient Social Networks Are Better Scientific Institutions · 2010-02-19T22:30:38.839Z · LW · GW

Something like this is useful for the types of data points patients would have no reason to self-deceive over; however, I worry that the general tendency for people to make their 'data' fit the stories they've written about themselves in their minds will promote superstitions. For example, a friend of mine is convinced that the aspartame in diet soda caused her rosacea/lupus. She's sent me links to chat rooms that have blamed aspartame for everything from diabetes to Alzheimer's, and it's disturbing to see the kind of positive feedback loops that are created from anecdotes in which chat members state that a clear link exists between symptoms and usage. One says, "I got symptom X after drinking diet soda," and another says, "I have symptom X, it must be from drinking diet soda!" and another says, "Thanks, after reading your comments, I stopped drinking diet soda and symptom X went away!" In spite of chat rooms dedicated to blaming diet soda for every conceivable health problem and the fall of American values, no scientific study to date has shown ANY negative side effect of aspartame even at the upper bounds of current human consumption.

Another example of hysterical positive feedback would be the proliferation of insane allegations that the MMR vaccine causes autism. I would guess angry parents who wanted to believe MMR caused their child's autism would plot their 'data points' for the onset of their child's symptoms right after vaccination.

A site like this one may allow certain trends to rise out of the noise, but we must not forget the tendency people have to lie to themselves for a convenient story.

Comment by LauraABJ on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-15T21:52:06.875Z · LW · GW

I think the key is that most people don't care whether or not AGW is occurring unless they can expect it to affect them. Since changing policy will negatively affect them immediately via increased taxes, decreased manufacture, etc., it's easier to just say they don't believe in AGW period. If the key counter-AGW measure on the table were funding for carbon-capture research, I think many fewer people would claim that they didn't believe in AGW.

My take on global warming is that no policy that has significant impact on the problem will be implemented until the frequency of droughts/hurricanes/floods/fires increases to obvious levels in the western world (fuck-Bengali policy is already in place, and I don't think more famines will change that). And by obvious, I mean obvious to a layman, as in 'when I was young we only had 1 hurricane per year, and now we have 10!' By this time, the only option will probably be technological.

Comment by LauraABJ on A survey of anti-cryonics writing · 2010-02-08T17:06:03.711Z · LW · GW

There were some fantastic links here. Thank you!

Does anyone here know what the breakdown is among cryonics advocates between believing that A) in the future cryopreserved patients will be physically rejuvenated in their bodies and B) in the future cryopreserved patients will be brain-scanned and uploaded?

I think there is a reasonable probability of effective cryopreservation and rejuvenation of a mammal (at least a mouse) in the next 25 years, but I think our ability to 'rejuvenate' will be largely dependent on the specific cryonics technologies developed at that time, and that it is very unlikely cryonics methods developed before then will be acceptable for rejuvenation. Realize that once an effective cryopreservation method has been developed, socially there will be much more interest in perfecting it than in going back to the old technology used to freeze past generations and figuring out how to get that to work for their sake.

Comment by LauraABJ on A problem with Timeless Decision Theory (TDT) · 2010-02-05T00:37:57.001Z · LW · GW

Yes- but your two-boxing didn't cause i=0; rather, the million was there because i=0. I'm saying that if (D or E) = true and you get a million dollars, and you two-box, then you haven't caused E=0. E=0 held before you two-boxed, or, if it did not, then Omega was wrong and thought D = one-box when in fact you are a two-boxer.

Comment by LauraABJ on A problem with Timeless Decision Theory (TDT) · 2010-02-05T00:16:27.535Z · LW · GW

No, I still don't get why adding in the ith-digit-of-pi clause changes Newcomb's problem at all. If Omega says you'll one-box and you two-box, then Omega was wrong, plain and simple. The ith digit of pi is an independent clause. I don't see how one's desire to make i=0 by two-boxing after already getting the million is any different from wanting to make Omega wrong by two-boxing after getting the million. If you are the type of person who, after getting the million, thinks, "Gee, I want i=0! I'll two-box!" then Omega wouldn't have given you the million to begin with. After determining that he would not give you the million, he'd look at the ith digit of pi and either put the million in or not. Your two-boxing has nothing to do with i.

Comment by LauraABJ on A problem with Timeless Decision Theory (TDT) · 2010-02-04T23:39:36.729Z · LW · GW

I'm not clear at all on what the problem is, but it seems to be semantic. It's disturbing that this post can get 17 upvotes with almost no (2?) comments actually referring to what you're saying, indicating that no one else here really gets the point either.

It seems you have an issue with the word 'dependent' and the definition that Eliezer provided. Under that definition, E (the ith digit of pi) would be dependent on C (our decision to one- or two-box) if we two-boxed and got a million dollars, because then we would know that E = 0, and we would not have known this if we had not two-boxed. So we can infer E from C, thus dependency. By Eliezer's definition, which seems to be a special information-theoretic definition, I see no problem with this conclusion. The problem only seems to arise if you then take the intuitive definition of the word 'dependent' as meaning 'contingent upon,' as in 'Breaking the egg is contingent upon my dropping it.' Your semantic complaint goes beyond Newcomb: by Eliezer's definition of 'dependent,' the pH of water (E) is dependent upon our litmus-testing it, since the result of the litmus test (C) allows us to infer the water's actual pH. C lets us infer E, thus dependency.

Comment by LauraABJ on "Put It To The Test" · 2010-02-04T02:23:38.830Z · LW · GW

Would kids these days even recognize the old 8-bit graphics?

Comment by LauraABJ on My Failed Situation/Action Belief System · 2010-02-03T17:05:25.902Z · LW · GW

The model you present seems to explain a lot of human behavior, though I admit it might just be broad enough to explain anything (which is why I was interested to see it applied and tested). There have been comments referencing the idea that many people don't reason or think but just do, and the world appears magical to them. Your model does seem to explain how these people can get by in the world without much need for thinking: just green-go, red-stop. If you really just meant to model yourself, that is fine, but not as interesting to me as the more general idea.

Comment by LauraABJ on My Failed Situation/Action Belief System · 2010-02-03T15:51:19.832Z · LW · GW

I think an important point missing from your post is that this is how many (most?) people model the world. 'Causality' doesn't necessarily enter into most people's computation of true and false. It would be nice to see this idea expanded with examples of how other people are using this model, why it gives them the opinions (output) that it does, and how we can begin to approach reasoning with people who model the world in this way.

Comment by LauraABJ on My Failed Situation/Action Belief System · 2010-02-02T21:07:02.940Z · LW · GW

Having a functional model of what will be approved of by other people is very useful. I would hardly say that it "has nothing to do with reality." I think much of the trauma of my own childhood would have been completely avoided if I had been able to pull that off. Alas! Pity my 9-year-old self, trying to convince the other children they were wrong.

Comment by LauraABJ on The AI in a box boxes you · 2010-02-02T15:07:47.164Z · LW · GW

Pascal's mugging...

Anyway, if you are sure you are going to hit the reset button every time, then there's no reason to worry, since the torture will end as soon as the real copy of you hits reset. If you don't, then the whole world is absolutely screwed (including you), so you're a stupid bastard anyway.

Comment by LauraABJ on Strong moral realism, meta-ethics and pseudo-questions. · 2010-02-01T03:32:52.178Z · LW · GW

Ah, so moral justifications are better justifications because they feel good to think about. Ah, happy children playing... Ah, lovers reuniting... Ah, the Magababga's chief warrior being roasted as dinner by our chief warrior who slew him nobly in combat...

I really don't see why we should expect 'morality' to extrapolate to the same mathematical axioms if we applied CEV to different subsets of the population. Sure, you can just define the word morality to include the sum total of all human brains/minds/wills/opinions, but that wouldn't change the fact that these people, given their druthers and their own algorithms, would morally disagree. Evolutionary psychology is a very fine just-so story for many things that people do, but people's, dare I say, aesthetic sense of right and wrong is largely driven by culture and circumstance. What would you say if Omega looked at the people of Earth and said, "Yes, there is enough agreement on what 'morality' is that we need only define 80,000 separate logically consistent moral algorithms to cover everybody!"

Comment by LauraABJ on You cannot be mistaken about (not) wanting to wirehead · 2010-01-26T16:44:10.229Z · LW · GW

Your examples of getting tired after sex or satisfied after eating are based on current human physiology and neurochemistry, which I think most people here are assuming will no longer confine our drives after AI/uploading. How can you be sure what you would do if you didn't get tired?

I also disagree with the idea that 'pleasure' is what is central to 'wireheading.' (I acknowledge that I may need a new term.) I take the broader view that wireheading is getting stuck in a positive feedback loop that excludes all other activity, and for this to occur, anything positively reinforcing will do.* For example, let's say Jane Doe wants to want to exercise, and so modifies her preferences. Now let's say this modification is not calibrated correctly, and she ends up on the treadmill 24/7, never wanting to get off of it. Though the activity is not pleasurable, she is still stuck in the loop. Even if we would not make a mistake quite this mundane, it is not difficult to imagine similar problems occurring after a few rounds of 'preference modification' by free transhumans. If someone has a drive to be satisfied, then satisfied he shall be, one way or another. Simple solutions, like putting in a preference for complexity, may not be sufficient safeguards either. Imagine an entity that spends all of its time computing and tracing infinite fractals. Pinnacle of human evolution or wirehead?

*Disclaimer: I haven't yet defined the time parameters. For example, if the loop takes 24 hours to complete as opposed to a few seconds, is it still wireheading? What about 100 years? But I think the general idea is important to consider.

Comment by LauraABJ on Normal Cryonics · 2010-01-25T16:38:19.966Z · LW · GW

I'd be interested in seeing your reasoning written out in a top-level post. 2:1 seems beyond optimistic to me, especially if you give AI-before-uploading 9:1, but I'm sure you have your reasons. Explaining a few of these 'personally credible stories,' and what classes you place them in such that they sum to 10% total, may be helpful. This goes for why you think FAI has such a high chance of succeeding as well.

Also, I believe I used the phrase 'outside view' incorrectly, since I didn't mean reference classes. I was interested to know if there are people who are not part of your community that help you with number crunching on the tech-side. An 'unbiased' source of probabilities, if you will.

Comment by LauraABJ on Simon Conway Morris: "Aliens are likely to look and behave like us". · 2010-01-25T16:28:35.468Z · LW · GW

I don't see why Darwinian evolution would necessarily create humanoid aliens in other environments-- sure, arguing that they are likely to have structures similar to eyes to take advantage of EM waves makes sense, and even arguing that they'll have a structure similar to a head where a centralized sensory-decision-making unit like a brain exists makes sense, but walking on two legs? Even looking at the more intelligent life-forms on our own planet we find a great diversity of structure: from apes to dolphins to elephants to octopi... All I'd say we can really gather from this argument is that aliens will look like creatures and not like flickering light rays or crystals or something incomprehensibly foreign.

Comment by LauraABJ on Normal Cryonics · 2010-01-22T21:29:25.516Z · LW · GW

Your argument is interesting, but I'm not sure if you arrived at your 1% estimate by specific reasoning about uploading/AI, or by simply arguing that paradigmatic 'surprises' occur frequently enough that we should never assign more than a 99% chance to something (theoretically possible) not happening.

I can conceive of many possible worlds (given AGI does not occur) in which the individual technologies needed to achieve uploading are all in place, and yet are never put together for that purpose due to general human revulsion. I can also conceive of global-political reasons that will throw a wrench in tech-development in general. Should I assign each of those a 1% probability just because they are possible?

Also, no offense meant to you or anyone else here, but I frequently wonder how much bias there is in this in-group of people who like to think about uploading/FAI toward believing that it will actually occur. It's a difficult thing to gauge, since it seems the people best qualified to answer questions about these topics are the ones most excited about and invested in the positive outcomes. I mean, if someone looks at the evidence and becomes convinced that the situation is hopeless, they are much less likely to get involved in bringing about a positive outcome and more likely to rationalize all this away as either crazy or likely to occur so far in the future that it won't bother them. Where do you go for an outside view?

Comment by LauraABJ on Normal Cryonics · 2010-01-21T15:55:59.706Z · LW · GW

I actually did reflect after posting that my probability estimate was 'overconfident,' but since I don't mind being embarrassed if I'm wrong, I'm placing it at where I actually believe it to be. Many posts on this blog have been dedicated to explaining how completely difficult the task of FAI is and how few people are capable of making meaningful contributions to the problem. There seems to be a panoply of ways for things to go horribly wrong in even minute ways. I think 1 in 10,000, or even 1 in a million is being generous enough with the odds that the problem is still worth looking at (given what's at stake). Perhaps you have a problem with the mind-set of low probabilities, like it's pessimistic and self-defeating? Also, do you really believe uploading could occur before AI?

Comment by LauraABJ on That Magical Click · 2010-01-20T20:50:23.383Z · LW · GW

Interesting. I remember my brother saying, "I want to be frozen when I die, so I can be brought back to life in the future," when he was child (somewhere between ages 9-14, I would guess). Probably got the idea from a cartoon show. I think the idea lost favor with him when he realized how difficult a proposition reanimating a corpse really was (he never thought about the information capture aspect of it.)

Comment by LauraABJ on Normal Cryonics · 2010-01-20T20:20:20.556Z · LW · GW

Well, I look at it this way:

I place the odds of humans actually being able to resuspend a frozen corpse near zero.

Therefore, in order for cryonics to work, we would need some form of information-capture technology that would scan the intact frozen brain and model the synaptic information in a form that could be 'played.' This is equivalent to the technology needed for uploading.

Given the complicated nature of whole brain simulations, some form of 'easier' quick and dirty AI is vastly more likely to come into being before this could take place.

I place the odds of this AI being friendly near zero. This might be where our calculations diverge.

In terms of 'Everett branches,' one can never 'experience' being dead, so if we're going to go that route, we might as well say that we all live on in some branch where FAI was developed in time to save us... needless to say, this gets a bit silly as an argument for real decisions.
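A rough sketch of how the chain of estimates above could be multiplied out. The structure (direct revival vs. scan-and-upload, conditional on whether an unfriendly AI arrives first) follows the comment, but every number below is a placeholder chosen for illustration, not the commenter's actual estimates:

```python
# Placeholder probabilities; the chain structure mirrors the comment above.
p_biological_revival   = 0.001  # "near zero": thawing and repairing the body directly
p_scan_and_upload_tech = 0.10   # scan an intact frozen brain and emulate it
p_unfriendly_ai_first  = 0.95   # "quick and dirty" AI arrives before uploading...
p_ai_is_friendly       = 0.01   # ...and is near-certainly not friendly

# Cryonics pays off either via direct revival, or via uploading in a world
# where AI either came later, or came first and happened to be friendly.
p_upload_world = p_scan_and_upload_tech * (
    (1 - p_unfriendly_ai_first) + p_unfriendly_ai_first * p_ai_is_friendly
)
p_cryonics_works = p_biological_revival + (1 - p_biological_revival) * p_upload_world

print(f"P(cryonics works) = {p_cryonics_works:.4f}")  # about 0.007 with these placeholders
```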

Comment by LauraABJ on Normal Cryonics · 2010-01-20T15:56:41.468Z · LW · GW

A question for Eliezer and anyone else with an opinion: what is your probability estimate of cryonics working? Why? An actual number is important, since otherwise cryonics is an instance of Pascal's mugging. "Well, it's infinitely more than zero, and you can multiply it by infinity if it does work" doesn't cut it for me. Since I place the probability of a positive singularity diminishingly small (p<0.0001), I don't see a point in wasting the money I could be enjoying now on lottery tickets, or spending the social capital and energy on something that will make me seem insane.

Comment by LauraABJ on The Preference Utilitarian’s Time Inconsistency Problem · 2010-01-15T17:48:06.045Z · LW · GW

This is obviously true, but I'm not suggesting that all people will become heroin junkies. I'm using heroin addiction as an example of how neurochemistry changes directly change preferences and therefore the utility function - i.e., the 'utility function' is not a static entity. Neurochemistry differences among people are vast, and heroin doesn't come close to a true 'wire-head,' and yet some percent of normal people are susceptible to having it alter their preferences to the point of death. After uploading/AI, interventions far more invasive and complete than heroin will be possible, and perhaps widely available. It is nice to think that humans will opt not to use them, and most people with their current preferences intact might not even try (as many have never tried heroin), but if preferences are constantly being changed (as we will be able to do), then it seems likely that people will eventually slide down a slippery slope toward wire-heading, since, well, it's easy.

Comment by LauraABJ on The Preference Utilitarian’s Time Inconsistency Problem · 2010-01-15T17:22:36.230Z · LW · GW

Combination of being broke, almost dying, mother-interference, naltrexone, and being institutionalized. I think there are many that do not quit though.

Comment by LauraABJ on The Preference Utilitarian’s Time Inconsistency Problem · 2010-01-15T16:18:38.451Z · LW · GW

There's a far worse problem with the concept of a 'utility function' as a static entity than that different generations have different preferences: the same person has very different preferences depending on his environment and neurochemistry. A heroin addict really does prefer heroin to a normal life (at least during his addiction). An ex-junkie friend of mine wistfully recalls how amazing heroin felt and how he realized he was failing out of school and slowly wasting away to death, but none of that mattered as long as there was still junk. Now, it's not hard to imagine how, in a few iterations of 'maximizing changing utilities,' we all end up wire-headed one way or another. I see no easy solution to this problem. If we say, "The utility function is that of unaltered, non-digital humans living today," then there will be no room for growth and change after the singularity. However, I don't see an easy way of not falling into the local maximum of wire-heading one way or another at some point... Solutions welcome.