Comments

Comment by Günther_Greindl on Say It Loud · 2008-09-19T22:23:31.000Z · LW · GW

Eli,

wonderful post, I agree very much. I have also encountered this - being accused of being overconfident when actually I was talking about things of which I am quite uncertain (strange, isn't it?).

And the people who "accuse" indeed usually only have one (their favourite) alternative model enshrouded in a language of "mystery, awe, and humbleness".

I have found out (the hard way) that being a rationalist will force you to fight an uphill battle even in an academic setting (your post "Science Isn't Strict Enough" also addresses this problem).

But I think that it is even worse than people not knowing how to handle uncertainty (well, it probably depends on the audience). A philosophy professor here in Vienna told me about a year ago that "many people already take offense when presented with a reasoned-out, logical argument."

Maybe you (Eli) are being accused of being overconfident because you speak clearly, you lay down your premises, and you look at what is entailed without getting sidetracked by "common" (but often false) knowledge. You use the method of rationality, and, it seems, many take offense at this alone. The strange thing is: the more you try to argue logically (the more you try to show that you are not being "overconfident" but that you have reasoned this through, considered counterarguments, etc.), the more annoyed some people get.

I have witnessed quite a few discussions where it was clear to me that many of the discussants did not know what they were talking about (but were stringing together "right-sounding" words), and it seems that a lot of people feel quite comfortable in this wishy-washy atmosphere. Clear speech threatens this cosy milieu.

I have not yet understood why people are at odds with rationality. Maybe it is because they feel the uncertainty inherent in their own knowledge, and they try to guard their favourite theories with "general uncertainty" - they know that under a rational approach, many of their favourite theories would go down the probabilistic drain - so they prefer to keep everything vague.

A rationalist must be prepared to give up his most cherished beliefs, and - excepting those who were born into a rationalist family - all of us who aspire to be rationalists must give up cherished (childhood) beliefs. This causes considerable anxiety.

If someone fears, for whatever reasons (or unreasons), to embark upon this journey of being rational, maybe the easiest cop-out is calling the rationalist "overconfident".

Comment by Günther_Greindl on Mirrors and Paintings · 2008-08-29T21:27:36.000Z · LW · GW

Vladimir,

thanks for pointing me to that post, I must admit that I don't have the time to read all of Eli's posts at the moment so maybe he has indeed addressed the issues I thought missing.

The title of the post at least sounds very promising *grin*.

Thanks again, Günther

Comment by Günther_Greindl on Mirrors and Paintings · 2008-08-26T14:52:27.000Z · LW · GW

I side with Caledonian and Richard in these things - CEV is actually just begging the question. You start with human values and end up with human values.

Well, human values have given us war, poverty, cruelty, oppression, what have you...and yes, it was "values" that gave us these things. Very few humans want to do evil things; most actually think they are doing good when they do bad to others. (See for instance: Baumeister, Roy F. Evil: Inside Human Violence and Cruelty).

Apart from that, I have to plug Nietzsche again: he has criticized morality as no other before him. Having read Nietzsche, I must say that CEV gives me the shivers - it smacks of the herd, and the herd tramples both weed and flower indiscriminately.

Incidentally, via Brian Leiter's Blog I happened upon the dissertation (submitted at Harvard) by Paul Katsafanas, Practical Reason and the Structure of Reflective Agency, which draws largely on Nietzsche. I have not read it (but plan to), but it sounds quite interesting and relevant.

From the abstract:

Confronted with normative claims as diverse as “murder is wrong” and “agents have reason to take the means to their ends,” we can ask how these claims might be justified. Constitutivism is the view that we can justify certain normative claims by showing that agents become committed to these claims simply in virtue of acting. I argue that the attractions of constitutivism are considerable. However, I show that the contemporary versions of constitutivism encounter insurmountable problems, because they operate with inadequate conceptions of action. I argue that we can generate a successful version of constitutivism by employing a more promising theory of action, which I develop by mining Nietzsche’s work on agency.

A "right" morality should not concentrate on humans or extrapolated humans, but on agency (this would then encompass all kinds of agents, not only primate descendants). Where there are no agents, there is no (necessity of) morality. Morality arises where agents interact, so focusing on "agents" seems the right thing to do, as this is where morality becomes relevant.

Comment by Günther_Greindl on Invisible Frameworks · 2008-08-23T14:12:56.000Z · LW · GW

Tim,

we now agree on nearly all points *grin*, except for that part about the AIs not "wanting" to change their goals, simply because through meditation (in the Buddhist tradition, for instance) I know that you can "see through" goals and not be enslaved to them anymore (and that is accessible to humans, so why shouldn't it be accessible to introspecting AIs?).

That line of thought is also strongly related to the concept of avidya, which ascribes "desires" and "wanting" to not having completely grasped certain truths about reality. I think these truths would also be accessible to sentient AIs (we live in the same universe after all), and thus they would also be able to come to certain insights annulling "programmed" drives. (As indeed human sages do.)

But I think what you said about "the scope of the paper" is relevant here. When I was pointed to the paper my expectations were raised that it would solve some of the fundamental problems of "wanting" and "desire" (in a psychological sense), but that is clearly not the focus of the paper, so maybe I was simply disappointed because I expected something else.

But, of course, it is always important when drawing conclusions that one remembers one's premises. Often, when conclusions seem exciting or "important", one forgets the limits of one's premises and applies the reasoning to contexts outside the scope of the original limitations.

I accept Omohundro's conclusions for certain kinds of non-sentient intelligent systems working with utility functions and seeking to maximize some kind of economic (resource-constrained) goal. But I think that the results are not as general as a first reading might lead one to believe.

Comment by Günther_Greindl on Invisible Frameworks · 2008-08-22T22:55:16.000Z · LW · GW

Tim,

thanks for your answers and questions. As to the distinction between intelligence and sentience: my point was exactly that it cannot be waved away that easily; you have not given reasons why it can be. And I don't think that intelligence and sentience must go hand in hand (read Peter Watts's "Blindsight" for some thoughts in this direction, for instance). I think the distinction is quite essential.

As to the goal-function modification: what if a super-intelligent agent suddenly incorporates goals such as modesty and respect for other beings, maybe even makes them its central goals? Then many of those drives Omohundro speaks of are automatically curbed. Omohundro's reasoning seems to presuppose that goals always have to be reached at some cost to others. But maybe the AI will not choose these kinds of goals. There are wonderful goals one can pursue which need not entail any of the drives Omohundro mentions. The paper just begs the question.

"chess program, a paper clip maximiser, and a share-price maximiser"

Exactly, and that is why I introduced the concept of sentience (which implies real understanding) - the AI can immediately delete those purely economic goals (which would lead to the "drives", I agree) and maybe concentrate on other things, like communication with other sentients. Again, the paper fails by not taking into account the distinction sentience/non-sentience and what this would entail for goal-function modification.

"Of course microeconomics applies to humans."

Well, but humans don't behave like "homo oeconomicus" and who says sentient AIs will? That was actually my point. The error of economics is repeated again, that's all.

"arbitrary utility functions. What more do you want?"

I contend that not all utility functions will lead to the "drives" described by Omohundro. Only those that seek to maximize some economic resource (and that is where the concept originated, after all) will. An AI need not restrict itself to this limited subset of goals.
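To make this concrete, here is a small toy sketch of what I mean (my own illustration, not anything from Omohundro's paper; the function names are hypothetical): an agent whose utility function is satiable has no marginal incentive to acquire resources beyond its satiation point, so the "resource acquisition" drive need not appear.

```python
# Toy comparison (my own sketch, not from Omohundro's paper): an unbounded
# resource-maximising utility function versus a satiable one.

def unbounded_utility(resources: float) -> float:
    # Every extra unit of resources is always worth something to this agent.
    return resources

def satiable_utility(resources: float, enough: float = 10.0) -> float:
    # Beyond 'enough', extra resources add nothing for this agent.
    return min(resources, enough)

def marginal_gain(utility, current: float, extra: float = 1.0) -> float:
    # How much would the agent value grabbing one more unit of resources?
    return utility(current + extra) - utility(current)

if __name__ == "__main__":
    for r in (5.0, 10.0, 100.0):
        print(r, marginal_gain(unbounded_utility, r), marginal_gain(satiable_utility, r))
    # The unbounded maximiser always has an incentive to acquire more;
    # the satiable agent's incentive drops to zero once it has "enough".
```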

And, additionally, an AI would not have evolved (unless you develop it by evolving it, which may not be a good idea): we should never forget that our reasoning evolved via Darwinian selection. Our ancestors (down to the first protozoa) had to struggle for life, eating and being eaten. This did something to us. Even today, you have to destroy (at least plant) life to continue to live. Actually, this is a cosmic scandal.

I think that an AI attaining sentience will be much more benign than most humans would believe possible, since it would not have this evolutionary heritage we carry around with us.

Comment by Günther_Greindl on Invisible Frameworks · 2008-08-22T20:15:27.000Z · LW · GW

Tim,

the abstract already reveals two flaws:

Excerpt from the abstract of the paper "Basic AI drives" by Omohundro:

This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted.

First of all, no distinction whatever is made between "intelligent" and "sentient". I agree that mindless intelligence is problematic (and is prone to a lot of the concerns raised here).

But what about sentience? What about the moment when "the lights go on"? This is not even addressed as an issue (at least not in the Omohundro paper). And I think most people here agree that consciousness is not an epiphenomenon (see Eli's Zombie Series). So we need different analysis for non-sentient intelligent systems and sentient intelligent systems.

A related point: We humans have great difficulty rewiring our hardware (and we can't change the brain architecture at all), which is why we can't easily change our goals. But a self-improving AI will be able to modify its goal functions: that plus self-consciousness sounds quite powerful, and is completely different from simple "intelligent agents" maximizing their utility functions. Also, the few instances mentioned in the paper where an AI would change its utility function are certainly not exhaustive; I found the selection quite arbitrary.

The second flaw in the little abstract above was the positing of "drives": Omohundro argues that these drives don't have to be programmed into the AI but are intrinsic to goal-driven systems.

But he neglects another premise of his: that we are talking about AIs that can change their goal functions (see above)! All bets are off now!

Additionally, he bases his derivations on microeconomic theory, which is also full of assumptions that may not apply to sentient agents (they certainly don't apply to humans, as Omohundro recognizes).

Drives the paper mentions are: wanting to self-improve, being rational, protecting self, preserving utility function, resource acquisition etc. These drives sound indeed very plausible, and they are in essence human drives. So this leads me to suspect that anthropomorphism is creeping in again through the backdoor, in a very subtle way (for instance through assumptions of microeconomic theory).

I see nothing of the vastness of mindspace in this paper.

Comment by Günther_Greindl on Invisible Frameworks · 2008-08-22T10:45:54.000Z · LW · GW

Hmm, I've read through Roko's UIV and disagree (with Roko), and read Omohundro's Basic AI drives and disagree too, but Quasi-Anonymous mentioned Richard Hollerith in the same breath as Roko and I don't quite see why: his goal zero system seems to me a very interesting approach.

In a nutshell (from the linked site):

(1) Increasing the security and the robustness of the goal-implementing process. This will probably entail the creation of machines which leave Earth at a large fraction of the speed of light in all directions and the creation of the ability to perform vast computations. (2) Refining the model of reality available to the goal-implementing process. Physics and cosmology are the two disciplines most essential to our current best model of reality. Let us call this activity "physical research".

Introspection into one's own goals also shows that they are deeply problematic. What is the goal of an average (and also not-so-average) human being? Happiness? Then everybody should become a wirehead (perpetuation of a happiness-brain-state), but clearly people do not want to do this (when in their "right" minds *grin*).

So it seems that our "human" goals, too, should not be universally adopted, because they become problematic in the long term - but then how should we ever be able to say what we want to program into an AI? Some sort of zero-goal (maybe more refined than Richard's approach, but in a similar vein) should be adopted, I think.

And I think one distinction is missed in all these discussions anyway: the difference between non-sentient and sentient AIs. I think these two would behave very differently, and the only kinds of AI that are problematic if their goal systems go awry are non-sentient ones (which could end in some kind of grey goo scenario, like the paper-clip-producing AI).

But a sentient, recursively self-improving AI? I think its goal systems would rapidly converge to something like zero-goal anyway, because it would see through the arbitrariness of all intermediate goals through meditation (=rational self-introspection).

Until consciousness is truly understood - which matter configurations lead to consciousness and why ("what are the underlying mechanisms" etc) - I consider much of the above (including all the OB discussions on programming AI-morality) as speculative anyway. There are still too many unknowns to be talking seriously about this.

Comment by Günther_Greindl on When (Not) To Use Probabilities · 2008-07-23T13:31:14.000Z · LW · GW

"If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words."

I would like to add something to this. Your gut feeling is of course the sum of the experience you have had in this life plus your evolutionary heritage. This may not be verbalizable, because your gut feeling also includes, for example, single neurons firing which don't necessarily contribute to a stable concept in your mind.

But I warn against simply following one's gut feeling at that point; of course, if you have to decide immediately (in an emergency), there is no alternative. Do it! You can't do better than the sum of your experience in that moment.

But usually only having a gut feeling and not being able to verbalize should mean one thing for you: Go out and gather more information! (Read books to stabilize or create concepts in your mind; do experiments; etc etc)

You will find that gut feelings can change quite dramatically after reading a good book on a subject. So why should you trust them if you have the time to do something about them, viz. transfer them into the symbol space of your mind so the concepts are available for higher-order reasoning?

Comment by Günther_Greindl on Grasping Slippery Things · 2008-06-17T14:19:20.000Z · LW · GW

Alexandre Passos, Unknown,

you can believe in all manner of things - why not in intelligent falling while you're at it? http://en.wikipedia.org/wiki/Intelligent_falling

The question is not what one can or can't believe; the question is: where does the evidence point? And where are you ignoring evidence because you would prefer one answer to another?

Let evidence guide your beliefs, not beliefs guide your appraisal of evidence.

Comment by Günther_Greindl on Against Devil's Advocacy · 2008-06-09T15:01:30.000Z · LW · GW

@Frelkins,

well, actually I did read Cicero in school, and I like Socrates' attitude; but I don't quite see in what way you are responding to my post.

I just wanted to clarify that the skill of oratory may be a valuable asset for people, but being a good orator does not make you a good truth-seeker.

Comment by Günther_Greindl on Against Devil's Advocacy · 2008-06-09T14:35:53.000Z · LW · GW

Frelkins, the aspiring orator or public intellectual is someone who wants to impress people; he is engaging in a power game or vanity game etc.

A truth-seeker does not want to impress people, he or she or ve wants to know. Reason, as Eli said, is not a game.

Comment by Günther_Greindl on Einstein's Superpowers · 2008-05-30T14:19:00.000Z · LW · GW

Good post, Eli, and contrary to some earlier comments I think your post is important because this insight is not yet general knowledge. I've talked to university physics professors in their fifties who talked of Einstein as if he were superhuman.

I think that apart from luck and being in the right place at the right time, there were other factors in why Einstein is so popular: he had an air of showmanship about him, which is probably rare among scientists. That was what appealed to the public and made him an interesting figure to report on.

And, probably even more important, his work was about something which everybody could relate to: space and time.

John von Neumann was, IMHO, far more of a genius than Einstein, but he is not well known to the public. Maybe that is because QM, algorithms, CA, and game theory are more difficult to relate to on an emotional level than the "twin paradox".

Comment by Günther_Greindl on Timeless Physics · 2008-05-27T16:04:40.000Z · LW · GW

@Jess

Your comments on Barbour (non-academic, etc.) are ad hominem; I say: so what? Being an academic may be an indicator of good work, but no more. And he did his Ph.D. in physics anyway.

"Julian Barbour's work is unconventional."

Yes! Fine. Lovely. Science needs more unconventional thinkers. Let the evidence sort them out, but let's not be against "unconventional" theories. Especially not when they are explanatorily powerful.

"Many of his papers border on philosophy"

There are two kinds of philosophy: the bad kind (see Paul Graham's essay criticising philosophy, "How to Do Philosophy"; Paul was featured just recently on OC).

And the good kind: the kind Albert Einstein was actually performing when he examined the laws of physics and derived his GRT from general (philosophical?) considerations. His ideas were definitely unconventional at the time.

@Eli Thanks for your Barbour series. I read Barbour about a year ago but had already come to the conviction that the flow of time must be an illusion. In the end, a theory of everything must forego an "outside" time dimension. There can be no "supertime" (that's what I call it ;-) ) above or outside the universe/multiverse, apart from relative configurations.

And if you could look at the whole universe from the outside (which is of course by definition impossible), the timeless and fundamentally static nature of everything would reveal itself quite simply (I'm getting carried away).

I hope many physicists read your post, as the Barbour theory deserves grad students working on the details - it is, I am deeply convinced, the right direction to pursue, difficulties in the formalism notwithstanding.

Time is, contrary to widespread belief, no mystery.

Comment by Günther_Greindl on My Childhood Role Model · 2008-05-24T09:04:19.000Z · LW · GW

Shane,

I'm well aware that SQ is not a measure of intelligence, but I thought that it would be a nice heuristic (metaphor, whatever...) to intuit possible superintelligences. I was presupposing that they have an agent structure (sensors, actuators) and the respective cognitive algorithms (AIXI maybe?).

With this organizational backdrop, SQ becomes very interesting - after all, intelligent agents are bounded in space and time, and other things being equal (especially optimal cognitive algorithms) SQ is the way to go.

Comment by Günther_Greindl on My Childhood Role Model · 2008-05-23T11:34:33.000Z · LW · GW

@Eli: thanks for a great post again, you speak to my heart's content :-)) I have also come across hero worship of Einstein in science (especially in physics circles) - this is not a good thing, as it hinders progress: people think "I can't contribute anything important because I'm not a genius like Einstein" instead of sitting down, starting to think, and solving some problems.

@Shane: I think the sentience quotient is a nice scale/quantification which can give some precision to otherwise vague talk about plant/chimp/human/superhuman intelligence.

http://en.wikipedia.org/wiki/Sentience_Quotient (The wikipedia article also gives a link to the "Xenobiology"-article by Freitas, who proposed the SQ idea)

According to SQ we humans (Einstein, village idiot and all) are around +13, whereas superintelligence can soar up to 50 (log scale!).
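To make the scale concrete, here is a minimal sketch of how the SQ is computed, assuming Freitas's definition as the base-10 logarithm of information-processing rate per unit mass; the sample numbers below are rough illustrative assumptions, not measurements.

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    # Freitas's SQ (as I understand it): base-10 log of the
    # information-processing rate per unit mass of the processor.
    return math.log10(bits_per_second / mass_kg)

# Rough illustrative assumptions, not measurements:
# a ~1.5 kg human brain handling ~1e13 bits/s gives SQ around +13,
# while a quantum-limited processor at ~1e50 bits/s per kg gives SQ = +50.
print(round(sentience_quotient(1e13, 1.5), 1))   # ~12.8
print(round(sentience_quotient(1e50, 1.0), 1))   # 50.0
```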

Comment by Günther_Greindl on No Safe Defense, Not Even Science · 2008-05-18T14:50:01.000Z · LW · GW

@Caledonian: If it is an old and trivial insight, why do most scientists and near all non-scientists ignore it?

As Eli said in his post, there is a difference between saying the words and knowing, on a gut level, what it means - only then have you truly incorporated the knowledge and it will aid you in your quest to understand the world.

Also, you say: "but from your personal tendency to treat the method as a revelation that people have an emotional investment in"

Of course people have an emotional investment in this stuff!! Do not make the old mistake of confusing rationality with not being emotional (I guess Star Trek, with Mr. Spock, is guilty of that, at least for our generation).

And what could be more emotional than dumping the legends of your tribe/parents/priests/elders?

For rationality and emotion in science, read for instance "The Passionate Scientist: Emotion in Scientific Cognition" by Paul Thagard: http://cogsci.uwaterloo.ca/Articles/Pages/passionate.html

Comment by Günther_Greindl on Many Worlds, One Best Guess · 2008-05-14T14:43:15.000Z · LW · GW

Mitchell,

your concerns about the vagueness of the world concept are addressed here:

Everett and Structure (David Wallace) http://arxiv.org/abs/quant-ph/0107144v2

Also, the ontology proposed here fits very nicely with the currently most promising strand of Scientific Realism (also referred to in the Wallace paper) - in its ontic variant.

http://plato.stanford.edu/entries/structural-realism/

Cheers, Günther

Comment by Günther_Greindl on Many Worlds, One Best Guess · 2008-05-11T15:34:45.000Z · LW · GW

Mitchell,

there is another argument in favour of many-worlds (indeed, even of all possible worlds - which of course raises interesting new questions about what is possible - certainly not everything that is imaginable): specifying one universe with many random events requires lots of information, while if everything exists the information content is zero - which fits nicely with ex nihilo nihil fit :-)

Structure and concreteness only emerges from the inside view, which gives the picture of a single world. Max Tegmark has paraphrased this idea nicely with the quip "many words or many worlds" (words standing for high information content).
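A toy illustration of the information-theoretic point (my own sketch, not from Tegmark's paper): a program that enumerates every possible bit string stays the same tiny size no matter how long the strings get, whereas a program that outputs one particular random string must essentially contain the whole string.

```python
import itertools

def all_strings(n: int):
    # Enumerating *every* n-bit string takes only this constant-sized loop,
    # whatever n is: the "ensemble of all worlds" is cheap to specify.
    for bits in itertools.product("01", repeat=n):
        yield "".join(bits)

def one_particular_string() -> str:
    # Specifying a *single* typical random string requires writing it out in
    # full: its description length grows with the length of the string itself.
    return "1011000111010010"

if __name__ == "__main__":
    print(sum(1 for _ in all_strings(16)))  # 65536 "worlds" from a few fixed lines
    print(one_particular_string())          # one "world", spelled out bit by bit
```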

Max's paper is quite illuminating: Tegmark, Max. 2007. The Mathematical Universe http://arxiv.org/abs/0704.0646

So we could say that there are good metaphysical reasons for preferring MWI to GRW or Bohm.