Comments

Comment by retired_urologist on The Pascal's Wager Fallacy Fallacy · 2009-03-19T22:27:26.000Z · LW · GW

Isn't there already a good deal of experience regarding the attitudes/actions of the most intelligent entity known (in current times, humans) towards cryonically suspended potential sentient beings (frozen embryos)?

Comment by retired_urologist on Formative Youth · 2009-02-25T23:37:08.000Z · LW · GW

Eliezer: So if you say that I'm revealing insufficient virtue by walking this path instead of the path of a firefighter

I did not say that, nor did I intend that. Your post was about the etiology of your altruistic attitude, and I said it seemed to be hard-wired self-preservation.

Comment by retired_urologist on Formative Youth · 2009-02-25T20:31:13.000Z · LW · GW

@Eliezer:

None of the scenarios in Superhero Bias involve the hero saving his own life by saving the lives of the others. They instead involve the hero putting himself at risk for them. I don't see the analogy with FAI.

Comment by retired_urologist on Formative Youth · 2009-02-25T16:03:43.000Z · LW · GW

To what degree is this amenability to helping others actually hard-wired self-preservation? I mean, if you (Eliezer) hold that superhuman AI is inevitably coming, and that most forms of it will destroy mankind, isn't the desire to save others from that fate the same as the desire to save yourself? Rewrite the scenario so that you save mankind with FAI but die in the process. That sounds more like altruism.

Comment by retired_urologist on Which Parts Are "Me"? · 2008-10-23T18:32:00.000Z · LW · GW

Main post: Everything I am, is surely my brain.

It would seem that, as far as causes go, everything about any of us is contained in the zygote, long preceding any sort of "brain". Indeed, it would seem to go far more basic than that, as discussed in the Quantum Mechanics series. These recent discussions about ethics, morality, concept of self, etc. seem to be effects, rather than causes, the results of external forces interacting with the original selection and sequence of a relative few chemicals. Who can say that the eventual outcome and expression of a chemical code is analogous to, or necessary for, that of a mechanical code? None of this seems very reductionist to me, and most of the discussion could not possibly qualify as "science".

Comment by retired_urologist on Torture vs. Dust Specks · 2008-10-22T15:52:00.000Z · LW · GW

I came across this post only today, because of the current comment in the "recent comments" column. Clearly, it was an exercise that drew an unusual amount of response. It further reinforces my impression of much of the OB blog, posted in August, and denied by email.

Comment by retired_urologist on Entangled Truths, Contagious Lies · 2008-10-16T03:00:04.000Z · LW · GW

@"Oh what a tangled web we weave, when first we practise to deceive," said Shakespeare.

Hopefully, the FAI will know that the author was Sir Walter Scott.

Comment by retired_urologist on The Level Above Mine · 2008-10-04T15:09:00.000Z · LW · GW

Supporting Ben Goertzel's comment:

Michael Shermer revised his book, Why People Believe Weird Things, to contain a chapter called “Why Smart People Believe Weird Things”. In it, he quotes studies by Hudson, Getzels, and Jackson showing that “creativity and intelligence are relatively orthogonal (i.e., unrelated statistically) at high levels of intelligence. Intuitively, it seems like the more intelligent people are the more creative they will be. In fact, in almost any profession significantly affected by intelligence, once you are at a certain level among the population of practitioners (and that level appears to be an IQ score of about 125), there is no difference in intelligence between the most successful and the average in that profession. At that point other variables, independent of intelligence, take over, such as creativity, or achievement motivation and the drive to succeed.”

Comment by retired_urologist on Use the Try Harder, Luke · 2008-10-03T02:49:34.000Z · LW · GW

Aleksei:

Indeed, I did misunderstand that! No wonder I was so impressed by the actor's refined position in the debate. My gullibility is showing. However, the underlying reason for the question was the many, many references to SF over the past posts and comments, and I think I have a better understanding now. Vassar, I think, put it best for me.

Comment by retired_urologist on Use the Try Harder, Luke · 2008-10-02T16:11:16.000Z · LW · GW

@MV: Thanks, Michael.

@scott clark: George Lucas wasn't trying to teach anything more important than that Luke was a whiny brat, who was reckless, impulsive, and lazy. That's the point of my question, scott. Why is George Lucas (or the other authors whose novels he adapted in the series) to be considered an appropriate (valuable?) teacher/observer?

Comment by retired_urologist on Use the Try Harder, Luke · 2008-10-02T15:05:04.000Z · LW · GW

Not having grown up on science fiction, but being an avid reader of this blog: what is it with the reverence shown to science fiction stories and movies among OB's readers? From whence does the authority to give insight on important ideas emanate? I understand that many readers were motivated toward their current important interests by early exposure to SF. I also realize that some of the authors were/are scientists in their own right, but are they on the level of those scientific greats who are quoted (and frequently dispatched) here regularly? If so, why do we not see more quotes from the authors themselves, instead of from their characters and their story-lines? If these authors have such important insights, why is there not more discussion about the origins of those insights, and about how and why these authors have such utility in the field of important truths, such as occurs when the blog reviews EY's stories and their relationships to his actual work? I know I'm far older than most readers (or at least the commenters); is this a generational thing? It seems so out of line with the intense rationality of the group otherwise. Is it just entertainment (I'm all for that), or what am I missing?

Comment by retired_urologist on Awww, a Zebra · 2008-10-01T13:10:03.000Z · LW · GW

In medicine, the concept "zebra" represents a strange, unlikely condition or diagnosis, usually to be considered only on a lower tier, expressed thus: when one hears hoofbeats, one should think of horses rather than zebras. Spending too much time chasing zebras detracts from making the diagnosis of "horse". Coincidence? Or just another example of the medical field's poor thought process?

Comment by retired_urologist on The Magnitude of His Own Folly · 2008-09-30T15:11:20.000Z · LW · GW

@Dynamically Linked: Eliezer did reevaluate, and this blog is his human enhancement project!

I suggested a similar opinion of the blog's role here 6 weeks ago, but EY subsequently denied it. Time will tell.

Comment by retired_urologist on Competent Elites · 2008-09-27T14:00:37.000Z · LW · GW

Does the unusual tenor of this post have anything to do with the upcoming Singularity Summit and its potential for fund-raising?

Comment by retired_urologist on The Level Above Mine · 2008-09-26T14:40:04.000Z · LW · GW

@EY: We are the cards we are dealt, and intelligence is the unfairest of all those cards. More unfair than wealth or health or home country, unfairer than your happiness set-point. People have difficulty accepting that life can be that unfair, it's not a happy thought. "Intelligence isn't as important as X" is one way of turning away from the unfairness, refusing to deal with it, thinking a happier thought instead. It's a temptation, both to those dealt poor cards, and to those dealt good ones. Just as downplaying the importance of money is a temptation both to the poor and to the rich.

How could the writer of the above words be the writer of today's post? Apparently (as I'm told) you knew from the days of the Northwestern Talent Search that you weren't the smartest of those tested (not to mention all those who were not tested), but certainly one of the smartest. Apparently, you were dealt a straight flush to the king, while some in history received a royal flush. What difference does it make whether someone thinks you are the smartest person they have known, unless you are the smartest person? Does a straight flush to the king meet the threshold required to develop a method for "saving humanity"? If not, why aren't you in the camp of those who wish to improve human intelligence? awaits clap of thunder from those dealt better hands

Comment by retired_urologist on My Naturalistic Awakening · 2008-09-25T14:34:47.000Z · LW · GW

Bertrand Russell felt that such thought processes are native to humans:

What a man believes upon grossly insufficient evidence is an index into his desires -- desires of which he himself is often unconscious. If a man is offered a fact which goes against his instincts, he will scrutinize it closely, and unless the evidence is overwhelming, he will refuse to believe it. If, on the other hand, he is offered something which affords a reason for acting in accordance to his instincts, he will accept it even on the slightest evidence. The origin of myths is explained in this way.

Perhaps any reasoning one readily accepts is evidence of bias, and bears deeper examination. Could this be the value of educated criticism, the willingness of others to "give it to me straight", the impetus to fight against the unconscious tendencies of intelligence?

Comment by retired_urologist on My Childhood Death Spiral · 2008-09-15T19:43:41.000Z · LW · GW

@Carl Shulman:

Thanks Carl. Now I understand. See Teacher's Password.

Comment by retired_urologist on My Childhood Death Spiral · 2008-09-15T17:19:55.000Z · LW · GW

Thank you, Carl Shulman, for correcting my misinformation. It's difficult for one to know which references are reliable, when one is not in the field.

@Carl Shulman: The largest hypothesized effects of the disease alleles would be only a small fraction of the Ashkenazim advantage: they just aren't frequent enough.

Dr. Bostrom cites this paper (so I considered it might be reliable) in his treatise on cognitive enhancement: "Natural History of Ashkenazi Intelligence" by Gregory Cochran, Jason Hardy, and Henry Harpending. Speaking of the incidence of the Ashkenazim mutations, they state: "the probability of having at least one allele from these disorders is 59%." As I understand it, these disorders are exceedingly rare in non-Ashkenazim. Are these authors simply incorrect, or did you mean that a 59% incidence just isn't frequent enough? That incidence is very close to the intelligence distribution I read between mutated and non-mutated Ashkenazim. Coincidence? Or already discredited?

Comment by retired_urologist on My Childhood Death Spiral · 2008-09-15T13:28:15.000Z · LW · GW

... if everyone was given the ability of today's top 2% regarding IQ. What would happen, implications, economic output, happiness and so on.

This doesn't seem outlandish. In my former field, advances in gene therapy have been able (in animal models) to improve the function of tissues. Consider observations such as the association of autosomal recessive and low-penetrance dominant mutations in Ashkenazim with high intelligence: without at least heterozygosity for the health disorders associated with the mutations, Ashkenazim are no more intelligent, in the aggregate, than non-Ashkenazi Jews. See here and here. It seems reasonable that the genetic pattern of this disease/intelligence relationship will be known, the ethical concerns addressed, and a method for cognitive enhancement made available to all, perhaps sooner rather than later. So much the better if it were to be effective in adults! Please correct me if I have misunderstood this concept.

Comment by retired_urologist on Rationality Quotes 16 · 2008-09-08T12:33:10.000Z · LW · GW

"I have known more people whose lives have been ruined by getting a Ph.D. in physics than by drugs." -- Jonathan I. Katz

"Happiness in intelligent people is the rarest thing I know." -- Ernest Hemingway

Comment by retired_urologist on Harder Choices Matter Less · 2008-08-29T11:43:18.000Z · LW · GW

"When two opposite points of view are expressed with equal intensity, the truth does not necessarily lie exactly halfway between them. It is possible for one side to be simply wrong." -- Richard Dawkins

Comment by retired_urologist on Against Modal Logics · 2008-08-28T00:00:00.000Z · LW · GW

@ EY: I feel a bit awful about saying this, because it feels like I'm telling philosophers that their life's work has been a waste of time

Well, your buddy Robin Hanson has proved mathematically that my life has been a waste of time in his Doctors kill series of posts. I accept the numbers. Screw the philosophers; now it's their turn. It's all chemical neurotransmitters. Next: the lawyers.

Comment by retired_urologist on Dreams of AI Design · 2008-08-27T20:46:45.000Z · LW · GW

EY: email me. I have a donor in mind.

Comment by retired_urologist on Dreams of AI Design · 2008-08-27T20:29:34.000Z · LW · GW

EY: "Give me $5 million/year to spend on 10 promising researchers and 10 promising students, and maybe $5 million/year to spend on outside projects that might help, and then go away. If you're lucky we'll be ready to start coding in less than a decade."

I am contacting the SIAI today to see whether they have some role I can play. If my math is correct, you need $100 million, and 20 selected individuals. If the money became available, do you have the individuals in mind? Would they do it?

I'll be 72 in 10 years when the coding starts; how long will that take? Altruism be damned, remember my favorite quote: "I don't want to achieve immortality through my work. I want to achieve it through not dying." (W. Allen)

Comment by retired_urologist on Dreams of AI Design · 2008-08-27T19:39:17.000Z · LW · GW

Disclaimer: perhaps the long-standing members of this blog understand the following question and may consider it impertinent. Sincerely, I am just confused (as I think anyone going to the Singularity site would be).

When I visit this page describing the "team" at the Singularity Institute, it states that Ben Goertzel is the "Director of Research", and Eliezer Yudkowsky is the "Research Fellow". EY states (above): "I was not working with Ben on AI, then or now." What actually goes on at SIAI?

Comment by retired_urologist on Magical Categories · 2008-08-24T23:07:53.000Z · LW · GW

@EY: If I have words to emit that I don't necessarily mean, for the sake of provoking reactions, I put them into a dialogue, short story, or parable - I don't say them in my own voice.

That's what I meant when I wrote: "By making his posts quirky and difficult to understand". Sorry. Should have been more precise.

@HA: perhaps you know the parties far better than I. I'm still looking.

Comment by retired_urologist on Magical Categories · 2008-08-24T22:00:01.000Z · LW · GW

Jess Riedel,

I don't know Eliezer Yudkowsky, but I have lots of spare time, and I have laboriously read his works for the past few months. I don't think much gets past him, within his knowledge base, and I don't think he cares about the significance of blog opinions, except as they illustrate predictable responses to certain stimuli. By making his posts quirky and difficult to understand, he weeds out the readers who are more comfortable at Roissy in DC, leaving him with study subjects of greater value to his project. His posts don't ask for suggestions; they teach, seeking clues to the best methods for communicating core data. Some are specifically aimed at producing controversy, especially in particular readers. Some are intentionally in conflict with his previously stated positions, to observe the response. The comparison I've previously made is that of Jane Goodall trying to understand chimps by observing their behavior and their reactions to stimuli, and to challenges requiring innovation, but even better because EY is more than a pure observer: he manipulates the environment to suit his interest. We'll see the results in the FAI, one day, I hope. If rudeness is part of that, right on.

Comment by retired_urologist on Invisible Frameworks · 2008-08-23T02:18:47.000Z · LW · GW

@quasi-anonymous: This is exactly the kind of BS conflict that Eliezer is searching for in this blog, in order to help with his catalogue of human characteristics. Congratulations. Unfortunately, you won't get any extra pay when the FAI emerges.

Comment by retired_urologist on Invisible Frameworks · 2008-08-22T22:58:01.000Z · LW · GW

Ever notice how heated and personal the discussion gets when one person tries to explain to a third person what the second person said, especially with such complicated topics? Perhaps this should be a green button that the AI never pushes.

Comment by retired_urologist on Hot Air Doesn't Disagree · 2008-08-20T12:09:18.000Z · LW · GW

Thanks, Tim Tyler, for the insight. I am trying to learn how to think differently (more effectively), since my education and profession actually did not include any courses, or even any experience, in clear thinking, sad to say. As you can see from some of my previous comments, I don't always see the rationale of your thoughts, to the point of discarding some of them out of hand. For example, in your series of observations on this topic, you dismiss possibly the best-referenced work on the diet subject without reading it, because you felt that some of the author's previous work was inadequate, yet your own references were lame.

I know there is a strong bias on this board about the arrogance of doctors, especially given their rather well-documented failure to make a positive impact on overall health care in the USA. I abhor the "doctor arrogance" as well. Any arrogance seen in my posts is unintentional, and comes not from being an "arrogant doctor", but from the failing of being an "arrogant person", a quality that seems widespread in many of the OB commenters. The more I learn about how such "ninja-brained" people think, the less I have to be arrogant about. I'm here to learn.

Comment by retired_urologist on The Cartoon Guide to Löb's Theorem · 2008-08-20T01:17:13.000Z · LW · GW

I know nothing about math, and cannot even follow what's being argued (keep this in mind when programming the fAI), but it's really funny to see how one of the 2 guys that got the right answer is on this like white on rice. Is that part of the experiment?

Comment by retired_urologist on Hot Air Doesn't Disagree · 2008-08-19T15:54:24.000Z · LW · GW

@Tim Tyler: IMO, few would lose much sleep over the rabbit slaughter: humans value other humans far more than we value rabbits.

PETA? Vegans?

@Mario: I believe in the moon because I can see the moon, I trust my eyes, and everyone else seems to agree.

This is so far from true that jokes are made about it: One evening, two blondes in Oklahoma, planning to spend their vacations in Florida, are having cocktails and gazing at the moon. The first asks, "Which do you think is closer, the moon or Florida?" The second responds, "Helloooo. Can you see Florida?"

@J Thomas: Eliezer is now presenting us with a series of arguments that are all bias, all the time, and noticing how we buy into it.

A heartfelt thanks. As a late-comer, I was unaware of the technique, and too lame to notice it on my own. My perspective has changed. Is there any way to delete old comments, or is that similar to the desire to upload more intelligence?

Comment by retired_urologist on Rationality Quotes 7 · 2008-08-19T14:26:28.000Z · LW · GW

One should not go through an entire lifetime (soon to be infinite?) without reading something by Harry Crews, my ninth-grade English teacher. Regarding Robert Bruce Thompson's quote in the main post, "The simple fact is that non-violent means do not work against Evil," Mr. Crews's position is: "Contrary to popular belief, I'm not a violent person. But if you wrong me, I'll kill your fucking ass, and I'll spend the rest of my life in jail. I'll kill your fucking ass and you can count on it; depend on it."

Comment by retired_urologist on Dumb Deplaning · 2008-08-19T02:12:45.000Z · LW · GW

This is the species after which EY is trying to pattern fAI. Thank (God, goodness, whatever) that the Bush opponents of advances in science are unlikely to see this post, which is the dumbest I have yet come across on OB, and which confirms that EY is just a person, like the rest of us.

Comment by retired_urologist on Joy in the Merely Real · 2008-08-18T23:12:55.000Z · LW · GW

I was led to this post and thread by the recent comment list. Not having read the series before (apparently along with Mr. McCloskey, whose Keats quote is the opening feature used by Mr. Yudkowsky in the post two days prior), I find it elegant. Strangely missing, as far as I can tell, is any reference to Richard Dawkins' book, Unweaving the Rainbow, a beautifully written treatise on the joy of knowledge for its own sake.

Comment by retired_urologist on Is Fairness Arbitrary? · 2008-08-18T19:42:01.000Z · LW · GW

DJB: Dr. Hanson's post today, "Mundane Dishonesty," seems to raise the question in your scenario, "What if he's lying about how hungry he is?" There could be numerous explanations for this: power, more sex (sorry; I guess those two are the same), insurance against famine, assurance of the competition's demise, et al. In medicine, we have all sorts of tests to determine the status of one's nutritional state, whether or not Dr. Hanson accepts that their interpretation leads to any meaningful intervention on health issues. As far as I know, we have no tests to determine the level of hunger. Is that what I see being called a "meta" issue? I can begin to see why the programming of an fAI has not been accomplished yet. Keep working, please.

Comment by retired_urologist on Contaminated by Optimism · 2008-08-07T16:03:00.000Z · LW · GW

It would seem that the big development in our lifetimes has been the advent of the digital computer, the Turing Machine. Assuming all humans come into the world with no basic knowledge other than hard-wired reflexes, we must all gain our knowledge from those who have preceded us, along with our own reflections about that knowledge and reflections on our environmental observations. The entire Library of Congress is available digitally. Using the concepts of trend analysis and Bayesian probabilities (and others I don't know about), couldn't a properly programmed computer available right now analyze all human knowledge to this point, spotting trends, successful patterns, outcomes that improve the human condition, etc.? Couldn't it then use that knowledge to predict the future trends that would be beneficial? Isn't that what Eliezer and others are trying to do, albeit in a much more awkward way? I think the computer would discover the importance of Bayesian probability, if it is important, forthwith. If it is not recursively self-improving, it could at least approach the ideas that Eliezer and others are trying to describe for FAI in a much faster and more efficient way. N'est-ce pas?