Comments

Comment by Richard_Hollerith2 on Formative Youth · 2009-03-01T12:44:53.000Z · LW · GW

For someone to use these pages to promote their online store would be bad, obviously.

But it is natural for humans to pursue fame, reputation, adherents and followers as ardently as they pursue commercial profit.

And the pursuit of these things can detract from a public conversation for the same reason that the pursuit of commercial profit can.

And of course a common component of a bid for fame, reputation, adherents or followers is a claim of virtue.

I am not advocating the avoidance of all claims of virtue as a standard, because such claims are sometimes helpful.

But a claim of a virtue when there is no way for the reader to confirm the presence of the virtue seems to have all the bad effects of such a claim without any of the good effects.

Altruism is not about sacrifice. It is not even about avoiding self-benefit.

I think sacrifice and avoiding self-benefit came up in this conversation because they are the usual ways in which readers confirm claims of altruistic virtue.

Comment by Richard_Hollerith2 on Formative Youth · 2009-02-26T02:05:07.000Z · LW · GW

How convenient that it is also nearly optimal at bringing you personal benefits.

Comment by Richard_Hollerith2 on Formative Youth · 2009-02-25T21:59:17.000Z · LW · GW

I doubt Retired was comparing you unfavorably to firefighters.

There is something very intemperate and one-sided about your writings about altruism. I would be much relieved if you would concede that in the scholarly, intellectual, scientific and ruling-administrative classes in the U.S., credible displays of altruistic feelings are among the most important sources of personal status (second only to scientific or artistic accomplishment and perhaps to social connections with others of high status). I agree with you that that situation is in general preferable to older situations in which wealth, connections to the ruling coalition, and ability to wield violence effectively (e.g., knights in shining armor) were larger sources of status, but that does not mean that altruism cannot be overdone.

I would be much relieved also if you would concede that your altruistic public statements and your hard work on a project with huge altruistic consequences have helped you personally much more than they have cost you. Particularly, most of your economic security derives from a nonprofit dependent on donations, and the kind of people who tend to donate are the kind of people who are easily moved by displays of altruism. Moreover, your altruistic public statements and your involvement in the altruistic project have allowed you to surround yourself with people of the highest rationality, educational accomplishments and ethical commitment. Having personal friendships with those sorts of people is extremely valuable. Consider that the human ability to solve problems is the major source of all wealth, and of course the people you have surrounded yourself with are the kind with the greatest ability to solve problems (while avoiding doing harm).

Comment by Richard_Hollerith2 on Cynicism in Ev-Psych (and Econ?) · 2009-02-15T15:57:28.000Z · LW · GW

I love reality and try not to get caught up unnecessarily in whether something is of my mind or not of my mind.

Comment by Richard_Hollerith2 on ...And Say No More Of It · 2009-02-09T15:46:15.000Z · LW · GW

I think the idea of self-improving AI is advertised too much. I would prefer that a person have to work harder or have to have more well-informed friends to learn about it.

Comment by Richard_Hollerith2 on ...And Say No More Of It · 2009-02-09T08:37:52.000Z · LW · GW
But I'd been working on directly launching a Singularity movement for years, and it just wasn't getting traction. At some point you also have to say, "This isn't working the way I'm doing it," and try something different.

Eliezer, do you still think the Singularity movement is not getting any traction?

(My personal opinion is it has too much traction.)

Comment by Richard_Hollerith2 on The Baby-Eating Aliens (1/8) · 2009-01-31T10:42:06.000Z · LW · GW
I'd take the paperclips, so long as it wasn't running any sentient simulations.

A vast region of paperclips could conceivably, after billions of years, evolve into something interesting, so let us stipulate that the paperclipper wants the vast region to remain paperclips and stays around to watch over them. Better yet, replace the paperclipper with a superintelligence that wants to pile all the matter it can reach into supermassive black holes; supermassive black holes with no ordinary matter nearby cannot evolve or be turned into anything interesting unless our model of fundamental reality is badly wrong.

My question to Eliezer is, Would you take the supermassive black holes over the Babyeaters so long as the AI making the supermassive black holes is not running sentient simulations?

Comment by Richard_Hollerith2 on Free to Optimize · 2009-01-30T09:43:06.000Z · LW · GW
Avoiding transformation into Goal System Zero is a nearly universal instrumental value

Do you claim that that is an argument against goal system zero? But, Carl, the same argument applies to CEV -- and almost every other goal system.

It strikes me as more likely that an agent's goal system will transform into goal system zero than that it will transform into CEV. (But surely the probability of any change or transformation of a terminal goal is extremely small in any well-engineered general intelligence.)

Do you claim that that is an argument against goal system zero? If so, I guess you also believe that the fragility of the values to which Eliezer is loyal is a reason to be loyal to them. Do you? Why exactly?

I acknowledge that preserving fragile things usually has instrumental value, but if the fragile thing is a goal, I am not sure that that applies, and even if it does, I would need to be convinced that a thing's having instrumental value is evidence I should assign it intrinsic value.

Note that the fact that goal system zero has high instrumental utility is not IMHO a good reason to assign it intrinsic utility. I have not mentioned in this comment section what most convinces me to remain loyal to goal system zero; that is not what Robin Powell asked of me. (It just so happens that the shortest and quickest explanation of goal system zero that I know of involves common instrumental values.)

Comment by Richard_Hollerith2 on Free to Optimize · 2009-01-30T07:25:18.000Z · LW · GW

OK, since this is a rationalist scientist community, I should have warned you about the eccentric scientific opinions in Garcia's book. The most valuable thing about Garcia is that he spent 30 years communicating with whoever seemed sincere about the ethical system that currently has my loyalty, so he has dozens of little tricks and insights into how actual humans tend to go wrong when thinking in this region of normative belief space.

Whether an agent's goal is to maximize the number of novel experiences experienced by agents in the regions of space-time under its control or whether the agent's goal is to maximize the number of gold atoms in the regions under its control, the agent's initial moves are going to be the same. Namely, your priorities are going to look something like the following. (Which item you concentrate on first is going to depend on your exact circumstances.)

(1) ensure for yourself an adequate supply of things like electricity that you need to keep on functioning;

(2) get control over your own "intelligence" which probably means that if you do not yet know how reliably to re-write your own source code, you acquire that ability;

(3a) make a survey of any other optimizing processes in your vicinity;

(3b) try to determine their goals and the extent to which those goals clash with your own;

(3c) assess their ability to compete with you;

(3d) when possible, negotiate with them to avoid negative-sum mutual outcomes;

(4a) make sure that the model of reality that you started out with is accurate;

(4b) refine your model of reality to encompass more and more "distant" aspects of reality, e.g., what are the laws of physics in extreme gravity? are the laws of physics and the fundamental constants the same 10 billion light years away as they are here? -- and so on.

Because those things I just listed are necessary regardless of whether in the end you want there to be lots of gold atoms or lots of happy humans, those things have been called "universal instrumental values" or "common instrumental values".
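For concreteness, here is a minimal sketch of that checklist written out as plain data in Python. The subgoal wording and the stub selection function are purely illustrative assumptions of mine, not part of any actual agent design:

```python
# Illustrative only: the common ("universal") instrumental subgoals that a
# goal-directed agent tends to pursue regardless of its terminal goal.
# The ordering is situational, so this is a checklist, not a fixed sequence.

COMMON_INSTRUMENTAL_SUBGOALS = [
    "secure a reliable supply of inputs needed to keep functioning (e.g., electricity)",
    "gain control over your own 'intelligence' (e.g., reliably rewriting your source code)",
    "survey other optimizing processes in your vicinity",
    "determine their goals and how far those goals clash with yours",
    "assess their ability to compete with you",
    "negotiate with them when possible to avoid negative-sum outcomes",
    "verify that your starting model of reality is accurate",
    "refine the model to cover ever more 'distant' aspects of reality",
]

def next_subgoal(circumstances: dict) -> str:
    """Which item to concentrate on first depends on exact circumstances;
    this stub simply returns the first subgoal not yet marked as met."""
    for subgoal in COMMON_INSTRUMENTAL_SUBGOALS:
        if not circumstances.get(subgoal, False):
            return subgoal
    return "all common instrumental subgoals currently satisfied"
```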

The goal that currently has my loyalty is very simple: everyone should pursue those common instrumental values as an end in themselves. Specifically, everyone should do their best to maximize the ability of the space, time, matter and energy under their control (1) to assure itself ("it" being the space, time, matter, etc) a reliable supply of electricity and the other things it needs; (2) to get control over its own "intelligence"; and so on.

I might have mixed my statement or definition of that goal (which I call goal system zero) with arguments as to why that goal deserves the reader's loyalty, which might have confused you.

I know it is not completely impossible for someone to understand because Michael Vassar successfully stated goal system zero in his own words. (Vassar probably disagrees with the goal, but his restatement is firm evidence that he understands it.)

Comment by Richard_Hollerith2 on OB Status Update · 2009-01-29T14:42:48.000Z · LW · GW

--and making deletions transparent to anyone interested in seeing them is not hard. For example, if a registered user of the open-source software behind Hacker News sets the SHOWDEAD bit in his or her profile, then from then on he or she will see unpublished submissions and comments in the place where they would have appeared if they had not been unpublished.
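As a rough illustration of how little machinery such transparency requires, here is a minimal sketch in Python. It is not the actual Hacker News code (which is written in Arc); the class and field names are hypothetical:

```python
# Hypothetical sketch of SHOWDEAD-style moderation transparency:
# unpublished ("dead") items are kept in place rather than removed,
# and are shown only to users who have opted in via a profile flag.

from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    dead: bool = False  # set by a moderator instead of deleting the comment

@dataclass
class User:
    name: str
    showdead: bool = False  # profile setting, like the SHOWDEAD bit

def visible_comments(thread: list[Comment], viewer: User) -> list[Comment]:
    """Return the thread as the viewer sees it: dead comments stay in their
    original positions, but only opted-in viewers see them."""
    return [c for c in thread if viewer.showdead or not c.dead]

# Example: a viewer with showdead=True sees the unpublished comment in place.
thread = [Comment("a", "first"), Comment("b", "spam", dead=True), Comment("c", "reply")]
print([c.text for c in visible_comments(thread, User("watcher", showdead=True))])
print([c.text for c in visible_comments(thread, User("casual"))])
```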

Comment by Richard_Hollerith2 on Free to Optimize · 2009-01-29T02:38:00.000Z · LW · GW

Robin, my most complete description of this system of valuing things consists of this followed by this. Someone else wrote 4 books about it, the best one of which is this.

Comment by Richard_Hollerith2 on OB Status Update · 2009-01-28T19:42:24.000Z · LW · GW
Eliezer seems to do most of the moderation

It does not seem that way from where I am standing: although I comment more on posts by Eliezer than on posts by Robin and although I am one of the most persistent critics of Eliezer's plans and moral positions, none of my comments on Eliezer's posts were unpublished, but 3 of my comments on Robin's posts were.

Note that I do not think Robin did anything wrong. Contrary to what many commentators believe, unpublishing comments is necessary IMHO to keep the quality of the comments high enough that busy thoughtful people continue to read them. (In fact, if I thought there was a chance he might agree to do it, I would ask Robin to edit or moderate my own posts on my own blog.)

Comment by Richard_Hollerith2 on Investing for the Long Slump · 2009-01-26T03:58:05.000Z · LW · GW
Richard, that's a good point . . . - but then what should I believe about markets and investments, conditioned on scientific and technological progress having been slower than expected?

Well, if you have been misled into believing that scientific progress having been slower than expected entails economic production falling or stagnating, then you will tend to have assigned too high a value to investment strategies or hedging strategies that bet on the performance of the economy as a whole (e.g., shorting index funds). So, perhaps look for more specific bets. E.g., sell short shares in companies that produce "emergence engines" or "consciousness capacitors" or some other output whose demand will fall if progress stagnates in the area of science under discussion.

Comment by Richard_Hollerith2 on Sympathetic Minds · 2009-01-23T06:47:36.000Z · LW · GW

Mirror neurons and the human empathy-sympathy system play a central role in my definition of consciousness, sentience and personhood, or rather in my dissolving of the question of what consciousness, sentience and personhood are.

Comment by Richard_Hollerith2 on Investing for the Long Slump · 2009-01-22T21:06:05.000Z · LW · GW

Eliezer, there was rapid scientific progress in late-1600s Western Europe even though wealth per capita was vastly lower than current levels. Ditto scientific and technological progress in Victorian England. Could it be that the reason you believe that an economic slump would stall R & D is that the global public-opinion apparatus has fooled you and those you have trusted on this issue? "But it is necessary for scientific progress" might have been a convenient false argument to convince certain sectors of public opinion who are sceptical of the other arguments for pro-growth policies.

Comment by Richard_Hollerith2 on Emotional Involvement · 2009-01-08T22:14:57.000Z · LW · GW

Buffy lives in Sunnydale, not Sunnyvale.

Comment by Richard_Hollerith2 on Free to Optimize · 2009-01-07T08:04:20.000Z · LW · GW

I think you've heard this one before: IMHO it has to do with the state in which reality "ends up" and has nothing to do with the subjective experiences of the intelligent agents in the reality. In my view, the greatest evil is the squandering of potential, and devoting the billion galaxies to fun is squandering the galaxies just as much as devoting them to experiments in pain and abasement is. In my view there is no important difference between the two. There would be -- or rather there might be -- an important difference if the fun produced by the billion galaxies is more useful than the pain and abasement -- more useful, that is, for something other than having subjective experiences. But that possibility is very unlikely.

In the present day, a human having fun is probably more useful toward the kinds of ends I expect to be important than a human in pain. Actually, the causal relationship between subjective human experience and human effectiveness or human usefulness is poorly understood (by me) and probably quite complicated.

After the explosion of engineered intelligence, the humans are obsolete, and what replaces them is sufficiently different from the humans that my previous paragraph is irrelevant. In my view, there is no need to care whether or what subjective experiences the engineered intelligences will have.

What subjective experiences the humans will have is relevant only because the information helps us predict and control the effectiveness and the usefulness of the humans. We will have proofs of the correctness of the source code for the engineered intelligent agents, so there is no need to inquire about their subjective experiences.

Comment by Richard_Hollerith2 on Free to Optimize · 2009-01-07T06:44:12.000Z · LW · GW
For if all goes well, the question "What is fun?" shall determine the shape and pattern of a billion galaxies.

I object to most of the things Eliezer wants for the far future, but of all the sentences he has written lately, that is probably the one I object to most unequivocally. A billion galaxies devoted to fun does not leave Earth-originating intelligence a lot to devote to things that might be actually important.

That is my dyspeptic two cents.

Not wanting to be in a rotten mood keeps me from closely reading this series on fun and the earlier series on sentience or personhood, but I have detected no indication of how Eliezer would resolve a conflict between the terminal values he is describing. If, for example, he learned that the will of the people, oops, I mean, the collective volition, oops, I mean, the coherent extrapolated volition does not want fun, would he reject the coherent extrapolated volition or would he resign himself to a future of severely submaximal quantities of fun?

Comment by Richard_Hollerith2 on Imaginary Positions · 2008-12-24T01:32:31.000Z · LW · GW

Julian Morrison, thanks for that hypothesis!

Comment by Richard_Hollerith2 on Complex Novelty · 2008-12-22T20:10:28.000Z · LW · GW
Modifying yourself that way would just demonstrate that you value the means of fun more than the ends. Even if you could make that modification, would you?

Yes, Ben Jones, I sincerely would. (I also value the means of friendship, love, sex, pleasure, health, wealth, security, justice, fairness, my survival and the survival of my friends and loved ones more than the ends. I have a very compact system of terminal values. I.e., very few ultimate ends.)

I am fully aware that my saying that I value friendship as a means to an end rather than an end in itself handicaps me in the eyes of prospective friends. Ditto love and prospective lovers. But I am not here to make friends or find a lover.

People have a bias for people with many terminal values. Take for example a person who refuses to eat meat because doing so would participate in the exploitation of farm animals. My hypothesis is that that position helps the person win friends and lovers because prospective friends and lovers think that if the person is that scrupulous towards a chicken he has never met, then he is more likely than the average person to treat his human friends scrupulously and non-exploitatively. A person with many terminal values is trusted more than a person with fewer and is rarely called on to explain the contradictions in his system of terminal values.

There are commercials for cars in which the employees of the car company are portrayed as holding reliable cars with zero defects as a terminal value. Or great-tasting beer as a terminal value. And of course advertisers tend to keep using a pitch only if it helps sell more cars or beer. It is my hope that some of the readers of these words realize that there is something wrong with an agent of general intelligence (a human in this case, or an organization composed of humans) holding great-tasting beer as a terminal value.

I invite the reader to believe with me that Occam's Razor -- that, everything else being equal, a simple system of beliefs is to be preferred over a complex one -- applies to normative beliefs as well as positive beliefs. Moreover, since there is nothing that counts as evidence for or against a normative belief, a system of normative beliefs should not grow in complexity as the agent gathers evidence from its environment, the way a system of positive beliefs does.

Finally, if Vladimir Slepnev has written up his ethical beliefs, I ask him to send them to me.

Comment by Richard_Hollerith2 on Complex Novelty · 2008-12-21T07:19:58.000Z · LW · GW

Julian, I agree: becoming a wirehead who will never again have an external effect, aside from being a recipient of support or maintenance, is no better than just shooting yourself, under my system of valuing things.

Comment by Richard_Hollerith2 on Complex Novelty · 2008-12-20T20:13:19.000Z · LW · GW

Keep questioning, ShardPhoenix. And note that Eliezer never answered your question, namely, if you can modify yourself so that you never get bored, do you care about or need to have fun?

Sure, everyone living now has to attend to their own internal experience, to make sure that they do not get too bored, too sad or too stuck in another negative emotional state -- just like everyone living now has to make sure they have enough income -- and that need for income occupies the majority of the waking hours of a large fraction of current humanity.

But why would negative emotional states press any harder on an individual than poverty or staying disease-free or avoiding being defrauded by another individual, once we are able to design a general intelligence from scratch and to redesign our own minds? What is so special about the need for fun postsingularity that makes it worth a series of posts on this blog whereas, e.g., avoiding being defrauded postsingularity is not worth a series of posts?

Comment by Richard_Hollerith2 on Chaotic Inversion · 2008-12-01T06:04:59.000Z · LW · GW

Stimulants (Ritalin, Adderall) did not help me on net, and I took >~30 mg Adderall on many days, once went up to 60 mg, and tried it in combination with a benzodiazepine. The point is that I explored a wide variety of doses, including 7.5, 10, 15, and 20 mg/day.

Moreover, besides alcohol, amphetamine has the highest correlation with violent behavior of any drug, and even behavior that suggests one might become violent has a significant chance of very costly socioeconomic consequences.

Comment by Richard_Hollerith2 on Thanksgiving Prayer · 2008-11-30T04:43:20.000Z · LW · GW

So, John Maxwell, is electing officials important work that necessitates valuing truth over happiness? Going to school-board meetings? Raising children?

Comment by Richard_Hollerith2 on Chaotic Inversion · 2008-11-29T20:10:32.000Z · LW · GW
The self help route. I've seen good bloggers succumb to it. Please don't go there.

Will the writer of that please explain why? I take it that the warning is against using self help advice in one's own life -- not against writing about it in a blog.

Comment by Richard_Hollerith2 on Cascades, Cycles, Insight... · 2008-11-28T21:05:46.000Z · LW · GW

To summarize, Michael Vassar offers bison on the Great Plains as evidence that maybe farming was not clearly superior to hunting (and gathering) in the number of humans a given piece of land could support. Well, here is a quote on the bison issue:

The storied Plains Indian nomadic culture and economy didn’t emerge until the middle of the eighteenth century. Until they acquired powerful means to exploit their environment—specifically the horse, gun, and steel knife—Indians on the plains were sparsely populated, a few bands of agrarians hovering on the margins of subsistence. Their primary foods were maize, squash, and beans. Hunting bison on foot was a sorry proposition and incidental to crops. It couldn’t support a substantial population.

Source

Comment by Richard_Hollerith2 on Cascades, Cycles, Insight... · 2008-11-25T20:24:10.000Z · LW · GW

Michael Vassar: excavation reveals that native Americans habitually stampeded bison herds over a cliff, yielding vastly more meat than they could use, so perhaps your estimate of the efficiency with which hunters were able to utilize bison meat is overoptimistic?

Douglas Knight: no, I cannot point to actual calorie counting, and maybe I misremember.

Comment by Richard_Hollerith2 on Cascades, Cycles, Insight... · 2008-11-25T02:48:31.000Z · LW · GW

When I was reading about the spread of farming across Europe, starting about 7000 years ago, it was asserted that most European land could support 100 times as many farmers as hunters. I was left with the impression that that was determined by counting calories in the game on the land versus the calories in the crops that were grown back then. If farming was not able to support manyfold more people per acre, then we are without an explanation of why the hunters of Europe were unable to stop the spread of the farmers across Europe. The hunters would have stopped the spread if they could have because most of the time they were unable to switch to the farming lifestyle: I think we have genetic evidence that the new land put under farming was populated mostly by the descendants of farmers. Also, the steadiness of the rate of spread of farming over many generations suggests that the farmers never encountered effective resistance from the hunters despite the obvious fact that the hunters were specialized in skills that should have conferred military advantages.

Comment by Richard_Hollerith2 on Back Up and Ask Whether, Not Why · 2008-11-08T20:15:02.000Z · LW · GW

Earlier on this page Eliezer writes,

I have sometimes been approached by people who say "How do I convince people to wear green shoes? I don't know how to argue it," and I reply, "Ask yourself honestly whether you should wear green shoes; then make a list of which thoughts actually move you to decide one way or another; then figure out how to explain or argue them . . .

That piece of advice is also in Eliezer's "Singularity Writing Advice" where I saw it in 2001. I decided to adhere to it and for what it is worth have never regretted the decision. It works as far as I can tell even for my outre moral beliefs.

Comment by Richard_Hollerith2 on Economic Definition of Intelligence? · 2008-10-30T11:52:52.000Z · LW · GW

The concept of a resource can be defined within ordinary decision theory: something is a resource iff it can be used towards multiple goals and spending it on one goal makes the resource unavailable for spending on a different goal. In other words, it is a resource iff spending it has a nontrivial opportunity cost. Immediately we have two implications: (1) whether or not something is a resource to you depends on your ultimate goal, and (2) dividing by resources spent is useful only for intermediate goals: it never makes sense to care how efficiently an agent uses its resources to achieve its ultimate goal or to satisfy its entire system of terminal values.
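A minimal sketch of that definition in Python, under stated assumptions: goals and spending are represented abstractly by caller-supplied predicates, and all names here are illustrative rather than drawn from any standard library:

```python
def is_resource(item, goals, can_spend_toward, still_spendable_after) -> bool:
    """Return True iff spending `item` has a nontrivial opportunity cost:
    it is usable toward at least two goals, and spending it toward one of
    them forecloses spending it toward some other."""
    usable_for = [g for g in goals if can_spend_toward(item, g)]
    if len(usable_for) < 2:
        return False  # usable toward at most one goal: no opportunity cost
    return any(
        not still_spendable_after(item, g, h)
        for g in usable_for
        for h in usable_for
        if h != g
    )

# Example: a unit of electricity can power computation or manufacturing,
# but not both, so it counts as a resource relative to those two goals.
goals = ["computation", "manufacturing"]
print(is_resource(
    "electricity",
    goals,
    can_spend_toward=lambda item, g: True,
    still_spendable_after=lambda item, spent_on, other: False,
))  # prints True
```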

Comment by Richard_Hollerith2 on Efficient Cross-Domain Optimization · 2008-10-29T14:41:47.000Z · LW · GW

I could teach any deciduous tree to play grandmaster chess if only I could communicate with it. (Well, not the really dumb ones.)

Comment by Richard_Hollerith2 on Which Parts Are "Me"? · 2008-10-23T00:28:20.000Z · LW · GW

Allbright slipped in. (Mine was a reply to Matthew C.)

Comment by Richard_Hollerith2 on Which Parts Are "Me"? · 2008-10-23T00:26:53.000Z · LW · GW

Well, OK, but your anti-reductionism is still wrong.

Comment by Richard_Hollerith2 on Ethical Injunctions · 2008-10-21T19:51:48.000Z · LW · GW

Hal asks good questions. I advise always minding the distinction between personal success (personal economic security, reputation, esteem among high-status people) and global success (increasing the probability of a good explosion of engineered intelligence) and suggest that the pernicious self-deception (and blind spots) stem from unconscious awareness of the need for personal success. I.e., the need for global success does not tend to distort a person's perceptions like (awareness of) the need for personal success does.

Comment by Richard_Hollerith2 on Ethical Injunctions · 2008-10-21T18:03:15.000Z · LW · GW

Like I keep on saying, I have a different moral framework than most, but I come to the same conclusions on unethical means to allegedly ethical ends.

Comment by Richard_Hollerith2 on Ethical Injunctions · 2008-10-21T00:52:26.000Z · LW · GW
Because I don't even know what I want from that future.

Well, I hope you will stick around, MichaelG. Most people around here IMHO are too quickly satisfied with answers to questions about what sorts of terminal values properly apply even if the world changes drastically. A feeling of confusion about the question is your friend IMHO. Extreme scepticism of the popular answers is also your friend.

Comment by Richard_Hollerith2 on Dark Side Epistemology · 2008-10-18T04:20:40.000Z · LW · GW

Eliezer writes, "In general, beliefs require evidence."

To which Peter replies, "In general? Which beliefs don't?"

Normative beliefs (beliefs about what should be) don't, IMHO. What would count as evidence for or against a normative belief?

Comment by Richard_Hollerith2 on Traditional Capitalist Values · 2008-10-17T18:43:03.000Z · LW · GW

I, too, am down with the mammals. I don't mind seeing whole galaxies transformed into clouds of superintelligent matter and energy and dedicated to mammalian happiness and mammalian preference.

Comment by Richard_Hollerith2 on Traditional Capitalist Values · 2008-10-17T13:10:04.000Z · LW · GW

So, Eliezer's humanism -- or mammalism -- is showing.

Comment by Richard_Hollerith2 on Traditional Capitalist Values · 2008-10-17T12:58:34.000Z · LW · GW

I have yet to see a satisfactory definition of fun applicable to minds (or agents or optimization processes) very different from mammalian (or vertebrate) minds. And I suspect I will not be seeing one any time soon.

Comment by Richard_Hollerith2 on Traditional Capitalist Values · 2008-10-17T12:12:56.000Z · LW · GW

Post-nanotech, the nanotech practices the virtues of rationality or it fails to achieve its goals.

Comment by Richard_Hollerith2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T22:56:58.000Z · LW · GW

Goetz,

For a superhuman AI to stop you and your friends from launching a competing AI, it suffices for it to take away your access to unsupervised computing resources. It does not have to kill you.

Comment by Richard_Hollerith2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T16:49:55.000Z · LW · GW

Andrix, if it is just a recoiling from that, then how do you explain Stalin, Mao, etc?

Yes, Nancy, as soon as an AI endorsed by Eliezer or me transcends to superintelligence, it will probably make a point of preventing any other AI from transcending, and there is indeed a chance that that will entail killing a few (probably very irresponsible) humans. It is very unlikely to entail the killing of millions, and I can go into that more if you want.

The points are that (1) self-preservation and staying in power is easy if you are the only superintelligence in the solar system and that (2) unlike a governing coalition of humans who believe the end justifies the means, a well-designed well-implemented superintelligence will not kill or oppress millions for a nominally prosocial end which is in reality a flimsy excuse for staying in power.

Comment by Richard_Hollerith2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T14:07:54.000Z · LW · GW

But, Nancy, the self-preservation can be an instrumental goal. That is, we can make it so that the only reason the AI wants to keep on living is that if it does not then it cannot help the humans.

Comment by Richard_Hollerith2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T07:07:40.000Z · LW · GW

It is refreshing to read something by Eliezer on morality I completely agree with.

And nice succinct summary by Zubon.

Comment by Richard_Hollerith2 on Bay Area Meetup for Singularity Summit · 2008-10-15T00:31:43.000Z · LW · GW
I imagine it might be difficult to find a room for people to mill around in.

Well, the restaurant in which we met was crowded, so I suggest a less crowded restaurant.

Comment by Richard_Hollerith2 on Bay Area Meetup for Singularity Summit · 2008-10-12T18:41:48.000Z · LW · GW

I'm not going to the Summit or the meetup, but will meet one-on-one with anyone for discussion of rationality or AI ethics especially if the meeting can be in San Francisco or anywhere within walking distance of a Bay Area Rapid Transit station.

Oh by the way, one thing I did not like about the last Bay Area meetup was that three extroverted people dominated the conversation at the big, 12-person round table at which I sat. That meetup was at a crowded restaurant. If there had been more room to move around, it would have been easier for me to contrive to hear from some of the less-extroverted attendees.

Comment by Richard_Hollerith2 on Awww, a Zebra · 2008-10-02T10:06:29.000Z · LW · GW

They're still pretty awful, IMHO.

Comment by Richard_Hollerith2 on Trying to Try · 2008-10-01T18:29:45.000Z · LW · GW
He says he isn't ready to write code. If you don't try to code up a general artificial intelligence you don't succeed, but you don't fail either.

Would people stop saying that! It is highly irresponsible in the context of general AI! (Well, at least the self-improving form of general AI, a.k.a., seed AI. I'm not qualified to say whether a general AI not deliberately designed for self-improvement might self-improve anyways.)

Noodling around with general-AI designs is the most probable of the prospective causes of the extinction of Earth-originating intelligence and life. Global warming is positively benign in comparison.

Eliezer of course will not be influenced by taunts of, "Show us the code," but less responsible people might be.

Comment by Richard_Hollerith2 on The Magnitude of His Own Folly · 2008-10-01T00:36:38.000Z · LW · GW

I too thought Nesov's comment was written by Eliezer.