Journal of Consciousness Studies issue on the Singularity

post by lukeprog · 2012-03-02T15:56:16.185Z · LW · GW · Legacy · 86 comments

...has finally been published.

Contents:

The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles from Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, Burt Voorhees, and a response from Chalmers.

McDermott's chapter should be supplemented with this, which he says he didn't have space for in his JCS article.

86 comments

Comments sorted by top scores.

comment by gwern · 2012-03-02T18:14:59.860Z · LW(p) · GW(p)

Tipler paper

Wow, that's all kinds of crazy. I'm not sure how much as I'm not a mathematical physicist - MWI and quantum mechanics implied by Newton? Really? - but one big flag for me is pg187-188 where he doggedly insists that the universe is closed, although as far as I know the current cosmological consensus is the opposite, and I trust them a heck of a lot more than a fellow who tries to prove his Christianity with his physics.

(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler's theories was, given that he had clearly stated they were valid only if the universe were closed and if the Higgs boson was within certain values, IIRC, but I was feeling too lazy to look it all up.)

And the extraction of a transcendent system of ethics from a Feynman quote...

A moment’s thought will convince the reader that Feynman has described not only the process of science, but the process of rationality itself. Notice that the bold-faced words are all moral imperatives. Science, in other words, is fundamentally based on ethics. More generally, rational thought itself is based on ethics. It is based on a particular ethical system. A true human level intelligence program will thus of necessity have to incorporate this particular ethical system. Our human brains do, whether we like to acknowledge it or not, and whether we want to make use of this ethical system in all circumstances. When we do not make use of this system of ethics, we generate cargo cult science rather than science.

This is just too wrong for words. This is like saying that looking both ways before crossing the street is obviously a part of rational street-crossing - a moment's thought will convince the reader (Dark Arts) - and so we can collapse Hume's fork and promote looking both ways to a universal meta-ethical principle that future AIs will obey!

An AI program must incorporate this morality, otherwise it would not be an AI at all.

Show me this morality in the AIXI equation or GTFO!
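(For reference, the AIXI action-selection rule, written here roughly as Hutter gives it, is an expectimax over all computable environments weighted by their program length; note that nothing in it refers to ethics, only to whatever reward signal the environment happens to emit:)

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \ldots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

(Here U is a universal Turing machine, q ranges over candidate environment programs, and \ell(q) is the length of q.)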

After all, what is a computer program but a series of imperative sentences?

A map from domain to range, a proof in propositional logic, or a series of lambda equations and reductions all come to mind...
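(To make the contrast concrete, here is a purely illustrative Python sketch, with function names of my own choosing, showing the same computation first as a "series of imperative sentences" and then as a single declarative definition closer to the lambda-calculus view:)

    # The same factorial computation, written two ways.

    # 1. As a "series of imperative sentences": do this, then do that.
    def factorial_imperative(n):
        result = 1
        for i in range(2, n + 1):
            result *= i          # mutate state, step by step
        return result

    # 2. As a declarative definition: no commands, no mutation,
    #    just a recursive equation that factorial satisfies.
    factorial_declarative = lambda n: 1 if n <= 1 else n * factorial_declarative(n - 1)

    assert factorial_imperative(5) == factorial_declarative(5) == 120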

In fact, I claim that an ethical system that encompasses all human actions, and more generally, all actions of any set of rational beings (in particular, artificial intelligences) can be deduced from the Feynman axioms. In particular, note that destroying other rational beings would make impossible the honesty Feynman requires.

One man's modus ponens is another man's modus tollens. That the 'honesty' requires other entities is proof that this cannot be an ethical system which encompasses all rational beings.

Hence, they will be part of the community of intelligent beings deciding whether to resurrect us or not. Do not children try to see to their parents’ health and well-being? Do they not try and see their parents survive (if it doesn’t cost too much, and in the far future, it won’t)? They do, and they will, both in the future, and in the far future.

Any argument that rests on a series of rhetorical questions is untrustworthy. Specifically, sure, I can in 5 seconds come up with a reason they would not preserve us: there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will exhaust all Y. At that point, by definition, we could choose to not be preserved. Hence, I have proven we will inevitably choose to die even if uploaded to Tipler's Singularity.

(Correct and true? Dunno. But let's say this shows Tipler is massively overreaching...)

What a terrible paper altogether. This was a peer-reviewed journal, right?

Replies from: quanticle, Pfft, None, shminux, Incorrect, TetrahedronOmega, torekp, Will_Newsome, steven0461, timtyler
comment by quanticle · 2012-03-02T22:41:23.599Z · LW(p) · GW(p)

The quote that stood out for me was the following:

The nineteenth century physicists also believed in the aether, as did Newton. There were many aether theories available, but only one was consistent with observation: H.A. Lorentz's theory, which simply asserted that the Maxwell equations were the equations for the aether. In 1904, Lorentz showed (Einstein et al., 1923) that this theory of the aether - equivalently the Maxwell equations - implied that absolute time could not exist, and he deduced the transformations between space and time that now bear his name. [...] That is, general relativity is already there in 19th century classical mechanics.

Now, all that's well and good, except for one, tiny, teensy little flaw: there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887. Tipler, in this case, appears to be basing his argument on a theory that was discredited over a century ago. Yes, some of the conclusions of aetheric theory are superficially similar to the conclusions of relativity. That, however, doesn't make the aetheric theory any less wrong.
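(For reference, the transformations in question, in their standard modern form for relative velocity v along the x axis, are:)

    x' = \gamma\,(x - v t), \qquad t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}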

Replies from: TetrahedronOmega
comment by TetrahedronOmega · 2015-06-18T18:32:45.206Z · LW(p) · GW(p)

Hi, Quanticle. You state that "there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887." For the details on how General Relativity is inherently an æther theory, see physicist and mathematician Prof. Frank J. Tipler and mathematician Maurice J. Dupré's following paper:

  • Maurice J. Dupré and Frank J. Tipler, "General Relativity as an Æther Theory", International Journal of Modern Physics D, Vol. 21, No. 2 (Feb. 2012), Art. No. 1250011, 16 pp., doi:10.1142/S0218271812500113, bibcode: 2012IJMPD..2150011D, http://webcitation.org/6FEvt2NZ8 . Also at arXiv:1007.4572, July 26, 2010, http://arxiv.org/abs/1007.4572 .
comment by Pfft · 2012-03-02T20:04:30.480Z · LW(p) · GW(p)

Argh.

Also, this makes me wonder if the SIAI's intention to publish in philosophy journals is such a good idea. Presumably part of the point was for them to gain status by being associated with respected academic thinkers. But this isn't really the kind of thinking anyone would want to be associated with...

Replies from: gwern, Bruno_Coelho
comment by gwern · 2012-03-02T20:51:26.864Z · LW(p) · GW(p)

The way I look at it, it's 'if such can survive peer review, what do people make of things whose authors either did not try to pass peer review or could not pass peer review? They probably think pretty poorly of them.'

Replies from: JohnD
comment by JohnD · 2012-03-03T15:24:02.150Z · LW(p) · GW(p)

I can't speak to this particular article, but oftentimes special editions of journals, like this one (i.e. effectively a symposium on the work of another), are not subjected to rigorous peer review. The responses are often solicited by the editors and there is minimal correction or critique of the content of the papers, certainly nothing like you'd normally get for an unsolicited article in a top philosophy journal.

But, to reiterate, I can't say whether or not the Journal of Consciousness Studies did that in this instance.

Replies from: None
comment by [deleted] · 2012-03-05T04:22:59.070Z · LW(p) · GW(p)

I can't speak to this particular article, but oftentimes special editions of journals, like this one (i.e. effectively a symposium on the work of another), are not subjected to rigorous peer review.

On the one hand, this is the cached defense that I have for the Sokal hoax, so now I have an internal conflict on my hands. If I believe that Tipler's paper shouldn't have been published, then it's unclear why Sokal's should have been.

Oh dear, oh dear. How to resolve this conflict?

Perhaps rum...

comment by Bruno_Coelho · 2012-03-04T20:16:00.490Z · LW(p) · GW(p)

Does anyone think that visibility among philosophers has a practical impact on solving technical problems? Apparently the people who could plausibly cause some harm in the near term are AI researchers, but many of those people are just riding the Internet flux or working on their own projects.

Gaining visibility is a good thing when what's needed is social acceptance, or when more people are needed to solve a problem. Publishing in peer-reviewed (philosophical) journals can bring more scholars to the cause, but more people caring about AI is not a good thing per se.

comment by [deleted] · 2012-03-02T21:04:40.285Z · LW(p) · GW(p)

What a terrible paper altogether. This was a peer-reviewed journal, right?

Some things even peer-review can't cure. I looked through a few of their back-issues and was far from impressed. On the other hand, this ranking puts them above Topoi, Nous, and Ethics. I'm not even sure what that means -- maybe their scale is broken?

Replies from: gwern
comment by gwern · 2012-03-02T21:11:38.086Z · LW(p) · GW(p)

Maybe there's some confounding factor - like sudden recent interest in Singularity/transhumanist topics forcing the cite count up?

Replies from: Jesper_Ostman
comment by Jesper_Ostman · 2012-03-13T17:18:51.473Z · LW(p) · GW(p)

Unlikely, they have been highly ranked for a long time and singularity/transhumanist topics are only a very small part of what JCS covers.

comment by shminux · 2012-03-02T20:56:50.546Z · LW(p) · GW(p)

Tipler did some excellent work in mathematical relativity before going off the rails shortly thereafter.

Replies from: None
comment by [deleted] · 2012-03-02T21:26:46.058Z · LW(p) · GW(p)

I'm very grateful to the undergraduate professor of mine that introduced me to Penrose and Tipler as a freshman. I think at that time I was on the cusp of falling into a similar failure state, and reading Shadows of the Mind and The Physics of Immortality shocked me out of what would have been a very long dogmatic slumber indeed.

comment by Incorrect · 2012-03-02T22:15:56.181Z · LW(p) · GW(p)

One man's modus ponens is another man's modus tollens. That the 'honesty' requires other entities is proof that this cannot be an ethical system which encompasses all rational beings.

And yet humans kill each other. His only possible retort is that some humans are not rational. Better hope that nobody builds an "irrational" AI.

comment by TetrahedronOmega · 2015-06-18T18:31:31.877Z · LW(p) · GW(p)

Hi, Gwern. You asked, "... MWI and quantum mechanics implied by Newton? Really?" Yes, the Hamilton-Jacobi Equation, which is the most powerful formulation of Newtonian mechanics, is, like the Schrödinger Equation, a multiverse equation. Quantum Mechanics is the unique specialization of the Hamilton-Jacobi Equation with the specification imposed that determinism is maintained. The Hamilton-Jacobi Equation by itself is indeterministic, because when particle trajectories cross paths a singularity is produced (i.e., the values in the equations become infinite), and so it is not possible to predict (even in principle) what happens after that. On the inherent multiverse nature of Quantum Mechanics, see physicist and mathematician Prof. Frank J. Tipler's following paper:

Regarding the universe necessarily being temporally closed according to the known laws of physics: all the proposed solutions to the black hole information issue except for Prof. Tipler's Omega Point cosmology share the common feature of using proposed new laws of physics that have never been experimentally confirmed--and indeed which violate the known laws of physics--such as with Prof. Stephen Hawking's paper on the black hole information issue which is dependent on the conjectured String Theory-based anti-de Sitter space/conformal field theory correspondence (AdS/CFT correspondence). (See S. W. Hawking, "Information loss in black holes", Physical Review D, Vol. 72, No. 8 [Oct. 15, 2005], Art. No. 084013, 4 pp.) Hence, the end of the universe in finite proper time via collapse before a black hole completely evaporates is required if unitarity is to remain unviolated, i.e., if General Relativity and Quantum Mechanics--which are what the proofs of Hawking radiation derive from--are true statements of how the world works.

Pertaining to your comments doubting "a universal meta-ethical principle that future AIs will obey!": Prof. Tipler is quite correct regarding his aforecited discussion on ethics. In order to understand his point here, one must keep in mind that the Omega Point cosmology is a mathematical theorem per the known physical laws (viz., the Second Law of Thermodynamics, General Relativity, and Quantum Mechanics) that requires sapient life (in the form of, e.g., immortal superintelligent human-mind computer-uploads and artificial intelligences) to take control over all matter in the universe, for said life to eventually force the collapse of the universe, and for the computational resources of the universe (in terms of both processor speed and memory space) to diverge to infinity as the universe collapses into a final singularity, termed the Omega Point. Said Omega Point cosmology is also an intrinsic component of the Feynman-DeWitt-Weinberg quantum gravity/Standard Model Theory of Everything (TOE) correctly describing and unifying all the forces in physics, of which TOE is itself mathematically forced by the aforesaid known physical laws. Thus, existence itself selects which ethics is correct in order for existence to exist. Individual actors, and individuals acting in groups, can of course go rogue, but there is a limit to how bad things can get: e.g., life collectively cannot choose to extirpate itself.

You go on to state, "there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will exhaust all Y. At that point, by definition, we could choose to not be preserved. Hence, I have proven we will inevitably choose to die even if uploaded to Tipler's Singularity." Yet if Y is infinite, then this presents no problem to literal immortality. Traditional Christian theology has maintained that Y is indeed infinite.

Interestingly, the Omega Point final singularity has all the unique properties (quiddities) claimed for God in the traditional religions. For much more on Prof. Tipler's Omega Point cosmology and the details on how it uniquely conforms to, and precisely matches, the cosmology described in the New Testament, see my following article, which also addresses the societal implications of the Omega Point cosmology:

Additionally, in the below resource are different sections which contain some helpful notes and commentary by me pertaining to multimedia wherein Prof. Tipler explains the Omega Point cosmology and the Feynman-DeWitt-Weinberg quantum gravity/Standard Model TOE.

comment by torekp · 2012-03-04T20:40:50.650Z · LW(p) · GW(p)

Not to rescue Tipler, but:

After all, what is a computer program but a series of imperative sentences?

A map from domain to range, a proof in propositional logic, or a series of lambda equations and reductions all come to mind...

None of these possibilities seem to exclude being also a series of imperative sentences.

Replies from: gwern
comment by gwern · 2012-03-05T03:08:25.796Z · LW(p) · GW(p)

In much the same way rhetorically asking 'After all, what is a computer program but a proof in an intuitionistic logic?' doesn't rule out 'a series of imperative sentences'.

comment by Will_Newsome · 2012-03-03T07:32:40.916Z · LW(p) · GW(p)

An AI program must incorporate this morality, otherwise it would not be an AI at all.

Show me this morality in the AIXI equation or GTFO!

The "AIXI equation" is not an AI in the relevant sense.

Replies from: gwern
comment by gwern · 2012-03-03T07:49:36.209Z · LW(p) · GW(p)

Fine, 'show me this morality in a computable implementation of AIXI using the speed prior or GTFO' (what was it called, AIXI-tl?).

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-03T08:04:55.102Z · LW(p) · GW(p)

That also isn't an AI in the relevant sense, as it doesn't actually exist. Tipler would simply deny that such an AI would be able to do anything, for Searlian reasons. You can't prove that an AIXI-style AI will ever work, and it's presumably part of Tipler's argument that it won't work, so simply asserting that it will work is sort of pointless. I'm just saying that if you want to engage with his argument you'll have to get closer to it 'cuz you're not yet in bowshot range. If your intention was to repeat the standard counterargument rather than show why it's correct then I misinterpreted your intention; apologies if so.

Replies from: gwern
comment by gwern · 2012-03-03T08:25:29.414Z · LW(p) · GW(p)

Tipler would simply deny that such an AI would be able to do anything, for Searlian reasons. You can't prove that an AIXI-style AI will ever work, and it's presumably part of Tipler's argument that it won't work, so simply asserting that it will work is sort of pointless.

The AIXI proofs seem pretty adequate to me. They may not be useful, but that's different from not working.

More to the point, nothing in Tipler's paper gave me the impression he had so much as heard of AIXI, and it's not clear to me that he does accept Searlian reasons - what is that, by the way? It can't be Chinese room stuff since Tipler has been gung ho on uploading for decades now.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-03T08:34:41.229Z · LW(p) · GW(p)

The AIXI proofs seem pretty adequate to me. They may not be useful, but that's different from not working.

It's really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. There have been various papers on this theme recently and it's a common LW meme ("AIXI drops an anvil on its head").

By "Searlian reasons" I mean something like emphasizing the difference between syntax and semantics and the difficulty of the grounding problem as representative of this important dichotomy between narrow and general intelligence that philosophers of mind get angry with non-philosophers of mind for ignoring.

I don't think Tipler's not having heard of AIXI is particularly damning, even if true.

Replies from: gwern
comment by gwern · 2012-03-08T03:12:13.821Z · LW(p) · GW(p)

It's really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. There have been various papers on this theme recently and it's a common LW meme ("AIXI drops an anvil on its head").

I don't think it's obvious it would self-destruct - any more than it's obvious humans will not self-destruct. (And that anvil phrase is common to Eliezer.) The papers you allude to apply just as well to humans.

I don't think Tipler's not having heard of AIXI is particularly damning, even if true.

I believe you are the one who is claiming AIXI will never work, and suggesting Tipler might think like you.

comment by steven0461 · 2012-03-02T21:19:28.110Z · LW(p) · GW(p)

You might enjoy reading this for more context.

comment by timtyler · 2012-03-02T19:01:36.418Z · LW(p) · GW(p)

Wow, that's all kinds of crazy.

Yes: nonsense.

comment by Wei Dai (Wei_Dai) · 2012-03-02T21:39:16.031Z · LW(p) · GW(p)

Daniel Dennett's "The Mystery of David Chalmers" quickly dismissed the Singularity without really saying why:

My reactions to the first thirty-odd pages did not change my mind about the topic, aside from provoking the following judgment, perhaps worth passing along: thinking about the Singularity is a singularly imprudent pastime, in spite of its air of cautious foresight, since it deflects our attention away from a much, much more serious threat, which is already upon us, and shows no sign of being an idle fantasy: we are becoming, or have become, enslaved by something much less wonderful than the Singularity: the internet.

and then spent the rest of his paper trying to figure out why Chalmers isn't a type-A materialist.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-03-03T09:03:12.285Z · LW(p) · GW(p)

By the way, procrastinating on the internet may be the #1 factor that delays the Singularity. Before we make the first machine capable of programming better machines, we may make a dozen machines capable of distracting us so much that we will never accomplish anything beyond that point.

People need cool names to treat ideas seriously, so let's call this apex of human invention "Procrastinarity". Formally: the better tools people can make, the more distraction they provide, so there is a limit for a human civilization where there is so much distraction that no one is able to focus on making better tools. (More precisely: even if some individuals can focus at this point, they will not find enough support, friends, mentors, etc., so without the necessary scientific infrastructure they cannot meaningfully contribute to human progress.) This point is called Procrastinarity, and all real human progress stops there. A natural disaster may eventually reduce humanity to pre-Procrastinarity levels, but if humans overcome these problems, they will just reach another Procrastinarity phase. We will reach the first Procrastinarity within the next 30 years with probability 50%.

Replies from: gwern, John_Maxwell_IV
comment by gwern · 2012-03-03T12:13:25.132Z · LW(p) · GW(p)

There's another such curve, incidentally - I've been reading up on scientific careers, and there's solid-looking evidence that a modern scientist makes his better discoveries about a decade later than in the early 1900s. This is a problem because productivity drops off in the 40s and is pretty small in the 50s and later, and this has remained constant (despite the small improvements in longevity over the 20th century).

So if your discoveries only really begin in your late 20s and you face a deadline of your 40s, and each century we lose a decade, this suggests within 2 centuries, most of a scientist's career will be spent being trained, learning, helping out on other experiments, and in general just catching up!

We might call this the PhDalarity - the rate at which graduate and post-graduate experience is needed before one can make a major discovery.

Replies from: Viliam_Bur, John_Maxwell_IV
comment by Viliam_Bur · 2012-03-03T12:52:05.240Z · LW(p) · GW(p)

As a former teacher I have noticed some unfortunate trends in education (it may be different in different countries), namely that it seems to be slowing down. On one end there is public pressure to make schools easier for small children, like not giving them grades in the first class. On the other end there is pressure to send everyone to university, for signalling (by having more people in universities we can pretend to be smart, even if the price is dumbing down university education) and for reducing unemployment (more people in schools, fewer people in the unemployment registry).

While I generally approve of a friendlier environment for small children and more opportunities for getting higher education, the result seems to be a shift of education to a later age. Students learn less in high school (some people claim otherwise, but e.g. the math curriculum has been reduced in recent decades) and many people think that's OK, because they can still learn the necessary things at university, can't they? So the result is a few "child prodigies" and a majority of students who are kept in school only for legal or financial reasons.

Yeah, people live longer and prolong their childhoods, but their peak productivity does not shift accordingly. We feel there is enough time, but that's because most people underestimate how much there is to learn.

Replies from: Thomas
comment by Thomas · 2012-03-03T13:44:35.369Z · LW(p) · GW(p)

OTOH there is a saying - just learn where and how to get the information you need.

And there is a big truth in that. It is easier every day to learn something (anything) when you need it.

The market value of knowledge could easily be grossly overestimated.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-03-03T15:50:38.807Z · LW(p) · GW(p)

It's easy to learn something when you need it... if the inferential distance is short. Problem is, it often isn't. Second problem: it is easy to find information, but it is more difficult to separate right from wrong information if the person has no background knowledge. Third problem: the usefulness of some things becomes obvious only after a person learns them.

I have seen smart people trying to jump across a large informational gap and fail. For example, there are many people who taught themselves programming from internet tutorials and experiments. They can do many impressive things, yet fail at something rather easy later, because they have no concept of "finite-state automata" or "context-free grammar" or "halting problem" -- the things that may seem like useless academic knowledge at university, but which allow one to quickly classify groups of problems into categories with already known, rather easy solutions (or in the last case: known to be generally unsolvable). Lack of proper abstractions slows their learning, and they invent their own bad analogies. In theory, there are enough materials online that would allow them to learn everything properly, but that would take a lot of time and someone's guidance. And that's exactly what schools are for: they select materials, offer guidance, and connect you with other people studying the same topic.

In my opinion, a good "general education" is one that makes inferential distances shorter on average. Mathematics is very important, because it takes good basic knowledge to understand statistics, and without statistics you can't understand scientific results in many fields. A recent example: in a local Mensa group there was a discussion on the web about whether IQ tests are really necessary, because most people know what their IQ is. I dropped them a link to an article saying that the correlation between self-reported IQ and the measured value is less than 0.3. I thought that would solve the problem. Well, it did, kind of... because the discussion switched to whether "correlation 0.3" means "0.3%" or "30%". I couldn't make this up. IMHO a good education should prevent such things from happening.
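(For anyone who wants the arithmetic spelled out: a correlation coefficient is a dimensionless number between -1 and 1, not a percentage, so "correlation 0.3" simply means r = 0.3. A quick illustration in Python, with entirely made-up numbers:)

    import numpy as np

    # Hypothetical data: measured IQ vs. self-reported IQ for 8 people.
    measured = np.array([95, 102, 110, 118, 125, 130, 99, 107])
    reported = np.array([110, 105, 120, 115, 140, 128, 118, 112])

    r = np.corrcoef(measured, reported)[0, 1]   # Pearson correlation coefficient
    print(round(r, 2))                          # a value in [-1, 1]; about 0.76 for this made-up data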

Though I agree that a conversion from "knowledge" to "money" is overestimated, or at least it is not very straightforward.

Replies from: Thomas
comment by Thomas · 2012-03-03T16:12:29.222Z · LW(p) · GW(p)

You are advocating a strategically devised network of knowledge which would always offer you support from the nearest base when you are wandering in previously unknown land. "Here come the marines" - you can always count on that.

Well, in science you can't. You must sometimes fight the marines as enemies, and you are often so far out that nobody even knows where you are. You are on your own, and all the heavy equipment is both useless and too expensive to carry.

This is the situation when the stakes are high, when it really matters. When it doesn't, it doesn't anyway.

comment by John_Maxwell (John_Maxwell_IV) · 2012-03-04T03:26:17.198Z · LW(p) · GW(p)

I think we can plausibly fight this by improving education to compress the time necessary to teach concepts. Hardly any modern education uses the Socratic method to teach, which in my experience is much faster than conventional methods, and could in theory be executed by semi-intelligent computer programs (the Stanford machine learning class embedding questions part way through their videos is just the first step).

Also, SENS.

Replies from: gwern
comment by gwern · 2012-03-04T04:10:05.620Z · LW(p) · GW(p)

Even better would be http://en.wikipedia.org/wiki/Bloom%27s_2_Sigma_Problem incidentally, and my own idée fixe, spaced repetition.

Like Moore's Law, at any point proponents have a stable of solutions for tackling the growth; they (or enough of them) have been successful for Moore's Law, and it has indeed continued pretty smoothly, so if they were to propose some SENS-style intervention, I'd give them decent credit for it. But in this case, the overall stylized evidence says that nothing has reversed the changes up until I guess the '80s at which point one could begin arguing that there's underestimation involved (especially for the Nobel prizes). SENS and online education are great, but reversing this trend any time soon? It doesn't seem terribly likely.

(I also wonder how big a gap between the standard courses and the 'cutting edge' there will be - if we make substantial gains in teaching the core courses, but there's a 'no mans land' of long-tail topics too niche to program and maintain a course on which extends all the way out to the actual cutting edge, then the results might be more like a one-time improvement.)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-04T05:26:07.466Z · LW(p) · GW(p)

Thanks for the two sigma problem link.

comment by John_Maxwell (John_Maxwell_IV) · 2012-03-04T03:23:42.539Z · LW(p) · GW(p)

http://arstechnica.com/web/news/2009/04/study-surfing-the-internet-at-work-boosts-productivity.ars

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-03-04T10:43:40.075Z · LW(p) · GW(p)

The article says that internet use boosts productivity only if it takes up less than 20% of one's time. How is this relevant to real life? :D

Also, the article suggests that the productivity improvement is not caused by the internet per se, but by having short breaks during work.

So I think many people are beyond the point where internet use could boost their productivity.

comment by timtyler · 2012-03-02T16:29:43.426Z · LW(p) · GW(p)

Sue's article is here: She won’t be me.

Robin's article is here: Meet the New Conflict, Same as the Old Conflict - see also O.B. blog post

Francis's article is here: A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.

Marcus Hutter: Can Intelligence Explode?.

I thought the idea that machine intelligence would be developed in virtual worlds on safety grounds was pretty daft. I explained this at the time:

IMO, people want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but it seems rather unlikely that we will keep these things under extensive restraint on grounds of sheer paranoia - that would stop us from taking advantage of them.

However, Francis's objections to virtual worlds seem even more silly to me. I've been hearing that simulations aren't real for decades now - and I still don't really understand why people get into a muddle over this issue.

Replies from: gwern
comment by gwern · 2012-03-02T17:48:47.997Z · LW(p) · GW(p)

Hanson link doesn't seem to work.

Replies from: timtyler
comment by timtyler · 2012-03-02T18:33:14.591Z · LW(p) · GW(p)

It seems to be back now.

Yes. It's still in Google's cache. Follow the links to the right once on that page.

comment by gwern · 2012-03-02T18:35:14.802Z · LW(p) · GW(p)

Schmidhuber paper

Brief overview of Goedel machines; sort of a rebuke of other authors for ignoring the optimality results for them and AIXI etc.

Simultaneously, our non-universal but still rather general fast deep/recurrent neural networks have already started to outperform traditional pre-programmed methods: they recently collected a string of 1st ranks in many important visual pattern recognition benchmarks, e.g. Graves & Schmidhuber (2009); Ciresan et al. (2011): IJCNN traffic sign competition, NORB, CIFAR10, MNIST, three ICDAR handwriting competitions. Here we greatly profit from recent advances in computing hardware, using GPUs (mini-supercomputers normally used for video games) 100 times faster than today’s CPU cores, and a million times faster than PCs of 20 years ago, complementing the recent above-mentioned progress in the theory of mathematically optimal universal problem solvers.

On falsified predictions of AI progress:

I feel that after 10,000 years of civilization there is no need to justify pessimism through comparatively recent over-optimistic and self-serving predictions (1960s: ‘only 10 instead of 100 years needed to build AIs’) by a few early AI enthusiasts in search of funding.

Pessimism:

All attempts at making sure there will be only provably friendly AIs seem doomed though. Once somebody posts the recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. The survivors will define in hindsight what’s ‘moral’, since only survivors promote their values...

The Hard Problem dissolved?

But at least we have pretty good ideas where the symbols and self-symbols underlying consciousness and sentience come from (Schmidhuber, 2009a; 2010). They may be viewed as simple by-products of data compression and problem solving. As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data histories we are observing. If the predictor/compressor is an artificial recurrent neural network (RNN) (Werbos, 1988; Williams & Zipser, 1994; Schmidhuber, 1992; Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2009), it will create feature hierarchies, lower level neurons corresponding to simple feature detectors similar to those found in human brains, higher layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole. Self-symbols may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history, it will profit from creating some sort of internal prototype symbol or code (e. g. a neural activity pattern) representing itself (Schmidhuber, 2009a; 2010). Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware. No need to see this as a mysterious process — it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.
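(A minimal, purely illustrative Python sketch of the "compression through prediction" idea Schmidhuber appeals to; this is not his RNN setup, just a bigram predictor over a made-up observation/action history in which a SELF token marks the agent's own involvement:)

    import math
    from collections import Counter, defaultdict

    # Toy observation/action history; the SELF token appears in almost every
    # step because the agent itself is involved in everything it records.
    history = ["SELF", "see_wall", "SELF", "turn", "SELF", "see_door",
               "SELF", "open", "SELF", "see_room"] * 20

    # Fit a simple bigram predictor (a stand-in for the learned compressor).
    pair_counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        pair_counts[prev][nxt] += 1

    def code_length(prev, nxt):
        """Bits needed to encode nxt given prev under the learned predictor."""
        counts = pair_counts[prev]
        return -math.log2(counts[nxt] / sum(counts.values()))

    # Predictable, frequently recurring tokens (notably SELF) get very short
    # codes: predicting the history well is the same as compressing it.
    bits = sum(code_length(p, n) for p, n in zip(history, history[1:]))
    print("average bits per token:", round(bits / (len(history) - 1), 2))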

Replies from: Wei_Dai, timtyler
comment by Wei Dai (Wei_Dai) · 2012-03-02T21:53:41.662Z · LW(p) · GW(p)

A Gödel machine, if one were to exist, surely wouldn't do something so blatantly stupid as posting to the Internet a "recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions". Why can't humanity aspire to this rather minimal standard of intelligence and rationality?

Replies from: Wei_Dai, gwern
comment by Wei Dai (Wei_Dai) · 2012-03-02T22:00:30.567Z · LW(p) · GW(p)

Similar theme from Hutter's paper:

Will AIXI replicate itself or procreate? Likely yes, if AIXI believes that clones or descendants are useful for its own goals.

If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn't build another AIXI, why should we? Because we're just too dumb?

Replies from: Will_Newsome, Luke_A_Somers
comment by Will_Newsome · 2012-03-03T07:49:48.136Z · LW(p) · GW(p)

I like lines of inquiry like this one and would like it if they showed up more.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-03-03T08:46:21.639Z · LW(p) · GW(p)

I'm not sure what you mean by "lines of inquiry like this one". Can you explain?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-03T08:57:31.828Z · LW(p) · GW(p)

I guess it's not a natural kind, it just had a few things I like all jammed together compactly:

  • Decompartmentalizes knowledge between domains, in this case between AIXI AI programmers and human AI programmers.
  • Talks about creation qua creation rather than creation as some implicit kind of self-modification.
  • Uses common sense to carve up the questionspace naturally in a way that suggests lines of investigation.
comment by Luke_A_Somers · 2012-03-03T02:53:54.390Z · LW(p) · GW(p)

An AIXI might create another AIXI if it could determine that the rewards would coincide sufficiently, and it couldn't figure out how to get as good a result with another design (under real constraints).

comment by gwern · 2012-03-02T21:57:04.222Z · LW(p) · GW(p)

Why can't humanity aspire to this rather minimal standard of intelligence and rationality?

I'm sure you can come up with several reasons for that.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-03-02T22:17:55.255Z · LW(p) · GW(p)

That was meant to be rhetorical... I'm hoping that the hypothetical person who's planning to publish the Gödel machine recipe might see my comment (ETA: or something like it if such attitude were to become common) and think "Hmm, a Gödel machine is supposed to be smart and it wouldn't publish its own recipe. Maybe I should give this a second thought."

Replies from: timtyler
comment by timtyler · 2012-03-03T02:13:29.578Z · LW(p) · GW(p)

If someone in IT is behaving monopolistically, a possible defense by the rest of the world is to obtain and publish their source code, thus reducing the original owner's power and levelling things a little. Such an act may not be irrational - if it is a form of self-defense.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-03-03T09:44:28.143Z · LW(p) · GW(p)

Suppose someone has built a self-improving AI, and it's the only one in existence (hence they have a "monopoly"). Then there might be two possibilities, either it's Friendly, or not. In the former case, how would it be rational to publish the source code and thereby allow others to build UFAIs? In the latter case, a reasonable defense might be to forcibly shut down the UFAI if it's not too late. What would publishing its source code accomplish?

Edit: Is the idea that the UFAI hasn't taken over the world yet, but for some technical or political reason it can't be shut down, and the source code is published because many UFAIs are for some reason better than a single UFAI?

Replies from: timtyler
comment by timtyler · 2012-03-03T13:46:39.587Z · LW(p) · GW(p)

I don't think the FAI / UFAI distinction is particularly helpful in this case. That framework implies that this is a property of the machine itself. Here we are talking about the widespread release of a machine with a programmable utility function. Its effects will depend on the nature and structure of the society it is released into (and the utility functions that are used with it) - rather than being solely attributes of the machine itself.

If you are dealing with a secretive monopolist, nobody on the outside is going to know what kind of machine they have built. The fact that they are a secretive monopolist doesn't bode well, though. Failing to share is surely one of the most reliable ways to signal that you don't have the interests of others at heart.

Industrial espionage or reverse engineering can't shut organisations down - but it may be able to liberate their technology for the benefit of everyone.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-03T14:00:30.180Z · LW(p) · GW(p)

Its effects will depend on the society it is released into

So we estimate based on what we anticipate about the possible state of society.

The fact that they are a secretive monopolist doesn't bode well, though.

If it's expected that sharing AGI design results in everyone dying, not sharing it can't signal bad intentions.

Replies from: timtyler
comment by timtyler · 2012-03-04T03:41:17.432Z · LW(p) · GW(p)

The fact that they are a secretive monopolist doesn't bode well, though.

If it's expected that sharing AGI design results in everyone dying, not sharing it can't signal bad intentions.

The expectations and intentions of secretive organisations are usually unknown. From outside, it will likely seem pretty clear that only a secretive elite having the technology is more likely to result in massive wealth and power inequalities than what would happen if everyone had access. Large wealth and power inequalities seem undesirable.

Secretive prospective monopolists might claim all kinds of nonsense in the hope of defending their interests. The rest of society can be expected to ignore such material.

comment by timtyler · 2012-03-02T18:52:51.952Z · LW(p) · GW(p)

Pessimism:

All attempts at making sure there will be only provably friendly AIs seem doomed though. Once somebody posts the recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. The survivors will define in hindsight what’s ‘moral’, since only survivors promote their values...

That seems more likely than a secretive monopolistic agent keeping the technology for themselves from the beginning - and obliterating all potential rivals.

Keeping the technology of general-purpose inductive inference secret seems unlikely to happen in practice. It is going to go into embedded devices - from which it will inevitably be reverse engineered and made publicly accessible. Also, it's likely to arise from a public collaborative development effort in the first place. I am inclined to doubt whether anyone can win while keeping their technology on a secure server - try to do that and you will just be overtaken - or rather, you will never be in the lead in the first place.

Not pessimism, realism, is my assessment. You have to apply your efforts where they will actually make a difference.

comment by gwern · 2012-03-02T19:29:13.296Z · LW(p) · GW(p)

Roman V Yampolskiy paper

Pretty good overview of the AI boxing problem with respect to covert channels; possibly the first time I've seen Eliezer's experiments cited, or Stuart Armstrong's Dr. Evil anthropic attack.

While the outlined informational hazards comprise over a dozen categories and are beyond the scope of this paper, it is easy to see how mental state of a person could be stressed to an unstable state. For example a religious guard could be informed of all the (unknown to him) contradictions in the main text of his religion causing him to question his beliefs and the purpose of life.

Given the length of the paper, I kind of expected there to be no mention of homomorphic encryption, as the boxing proposal that seems most viable, but to my surprise I read

The source code and hardware configuration of the system needs to be obfuscated (Yampolskiy & Govindaraju, 2007a) and important modules of the program should be provided only in the homomorphicly encrypted (Gentry, 2009) form, meaning that it could be used for computation or self-improvement (Hall, 2007), but not for self-analysis.

Important modules? Er, why not just the whole thing? If you have homomorphic encryption working and proven correct, the other measures may add a little security, but not a whole lot.
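(For the curious, here is a toy Python sketch of the basic idea of computing on data you cannot read: textbook Paillier encryption, which is only additively homomorphic and uses absurdly small primes here, so it illustrates the concept rather than Gentry-style fully homomorphic encryption, and is in no way secure:)

    import math, random

    # Textbook Paillier (toy parameters, NOT secure): additively homomorphic.
    p, q = 293, 433                          # tiny primes, illustration only
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                                # standard simple choice of generator
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)      # modular inverse (Python 3.8+)

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # Whoever holds only the ciphertexts can add the plaintexts without ever
    # seeing them: multiplying ciphertexts adds the underlying messages.
    c1, c2 = encrypt(42), encrypt(58)
    assert decrypt((c1 * c2) % n2) == 100

Fully homomorphic schemes (Gentry, 2009) extend this from addition to arbitrary computation, which is what would let an entire boxed program run on encrypted state.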

Replies from: timtyler
comment by timtyler · 2012-03-03T02:32:30.953Z · LW(p) · GW(p)

Pretty good overview of the AI boxing problem with respect to covert channels; possibly the first time I've see Eliezer's experiments cited, or Stuart Armstrong's Dr. Evil anthropic attack.

It says:

Careful analysis of the protocol used by Yudkowsky in conducting his AI-Box experiments reveals that they were unscientific and explicitly designed to show impossibility of confinement.

Replies from: gwern
comment by gwern · 2012-03-03T02:38:38.607Z · LW(p) · GW(p)

Well, weren't they? That was the whole point, I had the impression on SL4...

comment by quanticle · 2012-03-02T22:07:34.207Z · LW(p) · GW(p)

Our reason for placing the Singularity within the lifetimes of practically everyone now living who is not already retired, is the fact that our supercomputers already have sufficient power to run a Singularity level program (Tipler, 2007). We lack not the hardware, but the software. Moore’s Law insures that today’s fastest supercomputer speed will be standard laptop computer speed in roughly twenty years (Tipler, 1994).

Really? I was unaware that Moore's law was an actual physical law. Our state of the art has already hit the absolute physical limit of transistor design - we have single-atom transistors in the lab. So, if you'll forgive me, I'll be taking the claim that "Moore's law ensures that today's fastest supercomputer speed will be the standard laptop computer speed in 20 years" with a bit of salt.

Now, perhaps we'll have some other technology that allows laptops twenty years hence to be as powerful as supercomputers today. But to just handwave that enormous engineering problem away by saying, "Moore's law will take care of it," is fuzzy thinking of the worst sort.
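(To make the implied numbers explicit, a back-of-the-envelope Python calculation assuming the historical doubling time of roughly 1.5 to 2 years:)

    # "Today's fastest supercomputer = a standard laptop in ~20 years" leans
    # entirely on sustained exponential growth.  Twenty more years of Moore's
    # law at historical doubling rates would have to deliver a factor of:
    for doubling_years in (1.5, 2.0):
        print(f"{doubling_years} yr doubling: ~{2 ** (20 / doubling_years):,.0f}x")
    # -> about 10,000x (1.5 yr doubling) or about 1,000x (2 yr doubling)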

Replies from: DanielVarga
comment by DanielVarga · 2012-03-03T09:54:09.164Z · LW(p) · GW(p)

True. But this one would not make the top 20 list of most problematic statements from the Tipler paper.

Replies from: gwern
comment by gwern · 2012-03-15T17:34:52.447Z · LW(p) · GW(p)

Indeed. For example, I raised my eyebrows when I came across the 2007 claim we already have enough. But that was far from the most questionable claim in the paper, and I didn't feel like reading Tipler 2007 to see what lurked within.

comment by lukeprog · 2012-03-02T21:46:27.489Z · LW(p) · GW(p)

I like Goertzel's succinct explanation of the idea behind Moore's Law of Mad Science:

...as technology advances, it is possible for people to create more and more destruction using less and less money, education and intelligence.

Also, his succinct explanation of why Friendly AI is so hard:

The practical realization of [Friendly AI] seems likely to require astounding breakthroughs in mathematics and science — whereas it seems plausible that human-level AI, molecular assemblers and the synthesis of novel organisms can be achieved via a series of moderate-level breakthroughs alternating with ‘normal science and engineering.’

Another choice quote that succinctly makes a key point I find myself making all the time:

if the US stopped developing AI, synthetic biology and nanotech next year, China and Russia would most likely interpret this as a fantastic economic and political opportunity, rather than as an example to be imitated.

His proposal for Nanny AI, however, appears to be FAI-complete.

Also, it is strange that despite paragraphs like this:

we haven’t needed an AI Nanny so far, because we haven’t had sufficiently powerful and destructive technologies. And now, these same technologies that may necessitate the creation of an AI Nanny, also may provide the means of creating it.

...he does not anywhere cite Bostrom (2004).

Replies from: timtyler
comment by timtyler · 2012-03-05T21:56:20.883Z · LW(p) · GW(p)

His proposal for Nanny AI, however, appears to be FAI-complete.

It's a very different idea from Yudkowsky's "CEV" proposal.

It's reasonable to think that a nanny-like machine might be easier to build than other kinds - because a nanny's job description is rather limited.

comment by lukeprog · 2012-03-02T21:15:28.539Z · LW(p) · GW(p)

A quote from Dennett's article, on the topic of consciousness:

‘One central problem,’ Chalmers tells us, ‘is that consciousness seems to be a further fact about conscious systems’ (p. 43) over and above all the facts about their structure, internal processes and hence behavioral competences and weaknesses. He is right, so long as we put the emphasis on ‘seems’. There does seem to be a further fact to be determined, one way or another, about whether or not anybody is actually conscious or a perfect (philosopher’s) zombie. This is what I have called the Zombic Hunch (Dennett, 2005). I can feel it just as vividly as anybody; I just don’t credit it, any more than I credit the sometimes well-nigh irresistible hunch that the sun goes around the earth; it surely does seem to go around the earth.

This reminds me of the time I took shrooms and my intuition about whether or not Mary acquires knowledge when she is given a color TV turned out to be different when high than when sober. This was interesting, but it didn't change my judgment on qualia because I had never credited my intuitions on the matter, anyway. (Because, you know, science.)

comment by gwern · 2012-03-02T19:00:11.271Z · LW(p) · GW(p)

Damien Broderick paper

"What if, as Vernor Vinge proposed, exponentially accelerating science and technology are rushing us into a Singularity (Vinge, 1986; 1993), what I have called the Spike? Technological time will be neither an arrow nor a cycle (in Stephen Jay Gould’s phrase), but a series of upwardly accelerating logistical S-curves, each supplanting the one before it as it flattens out. Then there’s no pattern of reasoned expectation to be mapped, no knowable Chernobyl or Fukushima Daiichi to deplore in advance. Merely - opacity."

...G. Harry Stine, ‘Science Fiction is Too Conservative’ (1961); this was followed by G. Harry Stine, ‘Science Fiction is Still Too Conservative!’ (1985). In 1996, I asked Stine for his current assessment. Without commenting on his wildly optimistic and erroneous earlier projections, he replied gamely: ‘Science fiction is STILL too conservative!’ (Personal communication).

Most of the rest is summaries of various Singularity/transhuman scenarios; I did like his descriptions of Stross's Accelerando (modulo the point that obviously AI-neko is narrating the whole thing).

comment by Luke_A_Somers · 2012-03-03T03:17:59.134Z · LW(p) · GW(p)

In "Leakproofing..."

"To reiterate, only safe questions with two possible answers of even likelihood which are independently computable by people should be submitted to the AI."

Oh come ON. I can see 'independently computable', but requiring single bit responses that have been carefully balanced so we have no information to distinguish one from the other? You could always construct multiple questions to extract multiple bits, so that's no real loss; and with awareness of Bayes' theorem, getting an exact probability balance is essentially impossible on any question we'd actually care about.
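(The multiple-bits point is easy to make concrete. A toy Python sketch, where ask_ai is a hypothetical single-bit channel to the boxed AI, named purely for illustration:)

    def extract_integer(ask_ai, num_bits):
        """Recover an integer in [0, 2**num_bits) with num_bits yes/no questions.

        Each balanced question still leaks one full bit, so k questions leak
        k bits no matter how carefully the individual answers are balanced.
        """
        value = 0
        for i in reversed(range(num_bits)):
            if ask_ai(f"Is bit {i} of your answer equal to 1?"):
                value |= 1 << i
        return value

    # Demo with a stand-in "AI" that wants to communicate the number 42.
    secret = 42
    oracle = lambda question: bool((secret >> int(question.split()[2])) & 1)
    assert extract_integer(oracle, 8) == 42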

comment by haig · 2012-07-30T16:10:32.157Z · LW(p) · GW(p)

In my opinion, the most relevant article was the one from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singularity.

I'm thinking of specific concepts by Yudkowsky and others in the singularity/FAI crowd that seem uncontroversial at first glance, but upon further investigation, when analyzed in the light of computational complexity, become unconvincing. One example of this is the concept of the possibility space of minds, which is an assumption propping up many of the arguments for the negative consequences of careless AI engineering. When seen from the perspective of computability, that possibility space does represent the landscape of theoretically possible intelligent agents, and at first glance, those sensitive and wise enough to care about where in that landscape most outcomes of successful AI engineering projects will be located are alarmed at the needle in the haystack that is our target for a positive outcome. But if you put on your computational complexity hat and start to analyze not just the particular algorithms representing AI systems themselves, but the engineering processes that work towards outputting those AI agents/systems, a very different landscape takes shape, one that drastically constrains the space of possible minds that are a.) of a comparable cognitive class with humans, and b.) have a feasible engineering approach on a timescale T < heat death of our universe. (I'm including the evolution of natural history on earth within the set of engineering processes, mentioned above, that output intelligence.)

This is but one example of how the neglect of computational complexity, and, to be frank, the neglect of time as a very important factor overall, has influenced the thinking of the SIAI/LessWrong et al. crowd. This neglect leads to statements such as the one Yudkowsky made that an AI could be programmed on a desktop computer circa the early 2000s, which I am extremely incredulous of. It also leads to timeless decision theories, which I don't feel will be of much importance. Scott Aaronson has made a career out of stressing computational complexity for understanding the deep nature of quantum mechanics, and this should apply to all natural phenomena, cognition and AI among them.

comment by SiglNY · 2012-03-15T17:39:11.946Z · LW(p) · GW(p)

I wish I could read the Dennett article online. If Chalmers has a philosophical nemesis it has to be Dennett. Though he probably sees it otherwise, I contend that Dennett's hard materialism is losing ground daily in the academic and philosophical mainstream even as Chalmers' non-reductive functionalism gains in appreciation. (Look at Giulio Tononi's celebrated IIT theory of consciousness with its attendant panpsychism for just one example. And that's in the hard sciences, not philosophy.)

I'm ascertaining from the comments here that Dennett is no fan of the Singularity. I suspect that Dennett dislikes Singularity thought because of its teleological implications about evolution. A truly teleological universe with mind as a non-physical feature opens up a whole host of philosophical reevaluations that I doubt Dennett is willing to explore. (To be fair, Chalmers doesn't explore these metaphysical concerns either. Broderick's lovely essay on science fiction and the Singularity gets closest to exploring this new ontological possibility space.)

Of the articles in the journal, at least Tipler thinks big, real big, and takes his arguments to their logical conclusion. Unfortunately, Tipler is convinced he has "proved" what can only be properly seen as suggestive and interesting speculation about future evolution. He even tries to deflate Hume's entire fact/value distinction while at it, clearly biting off more than he can chew in such a brief essay. (I plan to read his book to see if he gives his Hume discussion a more complete treatment.) Separate from his arguments, there is the aura of quack about Tipler (as there is with other Singularity-celebrities like Aubrey de Grey and even Ray Kurzweil) and yet, he's a quack who still may just be right, if not in exact detail then in his general train of thought. It's a radical idea that forces even the most secular of rationalists to think of a future that may only be described as, in some sense, divine.

comment by XiXiDu · 2012-03-03T12:29:33.952Z · LW(p) · GW(p)

Many of those people are believers who are already completely sold on the idea of a technological singularity. I hope some sort of critical examination is forthcoming as well.

Schmidhuber, Hutter and Goertzel might be called experts. But I dare to argue that statements like "progress towards self-improving AIs is already substantially beyond what many futurists and philosophers are aware of" are almost certainly bullshit.

Replies from: Thomas, timtyler
comment by Thomas · 2012-03-03T13:32:10.699Z · LW(p) · GW(p)

are almost certainly bullshit.

You can be certain if you wish. I am not. As I am not sure that there isn't a supervirus somewhere, I can't be certain that there isn't a decent self-improver somewhere. Probably not, but ...

Both ARE possible, according to my best knowledge, so it wouldn't be wise to be too sure in any direction.

As you are.

Replies from: XiXiDu
comment by XiXiDu · 2012-03-03T14:45:01.346Z · LW(p) · GW(p)

Both ARE possible, according to my best knowledge, so it wouldn't be wise to be too sure in any direction.

According to the technically correct but completely useless LessWrong-style rationality, you are right that it is not wise to call it "almost certainly bullshit". What I meant to say is that, given what I know, it is unlikely enough to be true that it can safely be ignored, and that any attempt at calculating the expected utility of being wrong will be a waste of time, or even end in spectacular failure.

I currently feel that the whole business of using numerical probability estimates and calculating expected utilities is incredibly naive in most situations, and at best gives your beliefs a veneer of respectability that is completely unjustified. If you think something is almost certainly bullshit then say so and don't try to make up some number, because the number won't capture the reflective equilibrium of evidence, preferences, and intuition that is compressed into calling something almost certainly bullshit.

Replies from: Thomas
comment by Thomas · 2012-03-03T15:30:47.063Z · LW(p) · GW(p)

What I meant to say is that, given what I know, it is unlikely enough to be true

Well, given what you think you know. It is always the case, with everyone, that they estimate from the premises of what they think they know. It just can't be any different.

Somewhere in the chain of logical conclusions there might be an error. Or there might not be. There might be an error in the premises. Or there might not be.

Saying "oh, I know you are wrong based on everything I stand for" is not good enough. You should explain to us why a breakthrough in self-optimization is as unlikely as you claim. Just as the next guy, who thinks it is quite likely, should explain his view too. They do so.

P.S. I don't consider myself as a "lesswronger" at all. Disagree too often and have no "site patriotism".

Replies from: XiXiDu
comment by XiXiDu · 2012-03-03T16:59:24.902Z · LW(p) · GW(p)

You should explain to us why a breakthrough in self-optimization is as unlikely as you claim. Just as the next guy, who thinks it is quite likely, should explain his view too. They do so.

My comment was specifically aimed at the kind of optimism that people like Jürgen Schmidhuber and Ben Goertzel seem to be displaying. I have asked other AI researchers about their work, including some who worked with them, and they disagree.

There are mainly two possibilities here: either it takes a single breakthrough, or it takes a few breakthroughs, i.e. a somewhat gradual development that can be extrapolated.

In the case that the development of self-improving AIs is stepwise, I doubt that their optimism is justified, simply because they are unable to show any achievements. All achievements in AI so far are either the result of an increase in computational resources or, in the case of e.g. IBM Watson or the Netflix algorithm, the result of throwing everything we have at a problem to brute-force a solution. None of those achievements are based on a single principle like an approximation of AIXI. Therefore, if people like Schmidhuber and Goertzel have made stepwise progress and extrapolate it to conclude that more progress will amount to general intelligence, then where are the results? They should be able to market even partial achievements.

In the case that the development of self-improving AIs demands a single breakthrough or new mathematical insights, I simply doubt their optimism, because such predictions amount to pure guesswork and nobody knows when such a breakthrough will be achieved or at what point new mathematical insights will be discovered.

And regarding the proponents of a technological Singularity: 99% of their arguments consist of handwaving and of claims that physical possibility implies feasibility. In other words, bullshit.

Replies from: Thomas
comment by Thomas · 2012-03-04T11:31:12.319Z · LW(p) · GW(p)

Everybody on all sides of this discussion is suspected of being a bullshit trader or a bullshit producer.

That includes me, you, Vinge, Kurzweil, Jürgen S., Ben Goertzel - everybody is a suspect, including the investigators on any side.

Now, I'll clarify my position. The whole AI business is an Edisonian project, not an Einsteinian one. I don't see a need for some enormous scientific breakthrough before it can be done. No, to me it looks like this: we have had Maxwell's equations for some time now; can we build an electric lamp?

Edison is just one among many claiming it is almost done in his lab. It is not certain what the real situation in Menlo Park is. The fact that an apprentice who left Edison says there is no hope for a light bulb is not very informative. Nor is the fact that another apprentice still working there is euphoric. It doesn't even matter what the Royal Society back in old England has to say, or a simple peasant.

You just can't meta-judge very productively.

But you can judge: is it possible to have an object such as an electrically driven lamp? Can you build a nuclear fusion reactor? Can you build an intelligent program?

If it is possible, how hard is it to actually build one of those? It may take a long time, even if it is possible. Or it may take a short time.

The only real question is: can it be done, and if yes, how? If no, that's also fine - it just can't be done.

But you have to stay on the topic, not the meta-topic, I think.

Replies from: XiXiDu
comment by XiXiDu · 2012-03-04T12:38:50.967Z · LW(p) · GW(p)

No, to me it looks like this: we have had Maxwell's equations for some time now; can we build an electric lamp?

To me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant.

But you can judge: is it possible to have an object such as an electrically driven lamp?

It is possible to create a Matrix-style virtual reality. It is possible to create antimatter weapons. That doesn't mean that either is feasible. It also says nothing about timeframes.

The only real question is: can it be done, and if yes, how?

The real question is whether we should bother to worry about possibilities that could just as well be 500, 5,000, or 5 million years in the future, or never come about in the way we think at all.

Replies from: Thomas, timtyler
comment by Thomas · 2012-03-04T12:55:59.417Z · LW(p) · GW(p)

to me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant.

It has been done in 2,500 years (provided that the fusion is still outsourced to the Sun). What guarantees that in this case we will CERTAINLY NOT be 100 times faster?

It is possible to create a Matrix-style virtual reality. It is possible to create antimatter weapons. That doesn't mean that either is feasible. It also says nothing about timeframes.

That does not automatically mean that it is either infeasible or far, far in the future.

The real question is whether we should bother to worry about possibilities that could just as well be 500, 5,000, or 5 million years in the future, or never come about in the way we think at all.

Even if it were certain to be far, far away - and it isn't certain at all - it would still be a very important topic.

Replies from: XiXiDu
comment by XiXiDu · 2012-03-04T14:41:51.761Z · LW(p) · GW(p)

Even if it were certain to be far, far away - and it isn't certain at all - it would still be a very important topic.

I am aware of that line of reasoning and reject it. Each person has about a 1 in 12,000 chance of having an unruptured brain aneurysm that could be detected by a virtually risk-free magnetic resonance angiography and then treated. Given the utility you likely assign to your own life, it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don't do it, do you?

There are literally thousands of activities that are rational given their associated utilities. But that line of reasoning, although technically correct, is completely useless, because 1) you can't really calculate shit, 2) it's impossible for any agent that isn't computationally unbounded, and 3) you'll just end up sprinkling enough mathematics and logic over your fantasies to give them a veneer of respectability.
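
To make that concrete, here is a minimal sketch of the kind of screening calculation being gestured at. The 1-in-12,000 figure is the one quoted above; every other input (treatment success rate, value of a life, costs, follow-up risks) is a number invented purely for illustration - which is precisely the problem, since the sign of the answer flips with the choice of made-up inputs.

```python
# A minimal sketch of the expected-utility calculation for elective screening.
# The 1-in-12,000 prevalence is the figure quoted above; every other number
# (costs, risks, value of a life) is an assumption invented for illustration.

def screening_expected_gain(p_condition, p_treatment_success, value_of_life,
                            screening_cost, p_harm_from_followup, harm_cost):
    """Expected gain (same arbitrary units as value_of_life) from screening."""
    benefit = p_condition * p_treatment_success * value_of_life
    cost = screening_cost + p_harm_from_followup * harm_cost
    return benefit - cost

# Two sets of made-up inputs: one makes screening look worthwhile,
# the other makes it look like a waste, purely because of the assumptions chosen.
optimistic = screening_expected_gain(
    p_condition=1/12000, p_treatment_success=0.5,
    value_of_life=10_000_000, screening_cost=300,
    p_harm_from_followup=0.0001, harm_cost=100_000)
pessimistic = screening_expected_gain(
    p_condition=1/12000, p_treatment_success=0.1,
    value_of_life=2_000_000, screening_cost=500,
    p_harm_from_followup=0.01, harm_cost=100_000)

print(f"optimistic assumptions:  expected gain ~ {optimistic:+.0f}")
print(f"pessimistic assumptions: expected gain ~ {pessimistic:+.0f}")
```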

Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions. People on LessWrong are fooling themselves by using formalized methods to evaluate informal evidence, merely pushing the use of intuition down to a lower level.

The right thing to do is to use the absurdity heuristic and discount crazy ideas that are merely possible but can't be evaluated due to a lack of data.

Replies from: timtyler, gwern
comment by timtyler · 2012-03-05T20:21:51.943Z · LW(p) · GW(p)

Each person has about a 1 in 12,000 chance of having an unruptured brain aneurysm that could be detected by a virtually risk-free magnetic resonance angiography and then treated. Given the utility you likely assign to your own life, it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don't do it, do you?

Does this make sense? How much does the scan cost? How long does it take? What are the costs and risks of the treatment? Essentially, are the facts as you state them?

Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions.

I don't think so. Are you thinking of utilitarianism? If so, expected utility maximization != utilitarianism.

Replies from: None
comment by [deleted] · 2012-03-05T20:32:58.324Z · LW(p) · GW(p)

expected utility maximization != utilitarianism.

OK, what's the difference here? By "utilitarianism" do you mean the old straw-man version of utilitarianism with a bad utility function and no ethical injunctions?

I usually take utilitarianism to be consequentialism + max(E(U)) + sane human-value metaethics. Am I confused?

Replies from: timtyler
comment by timtyler · 2012-03-05T20:39:09.685Z · LW(p) · GW(p)

The term "utilitarianism" refers to maximising the combined happiness of all people. The page says:

Utilitarianism is an ethical theory holding that the proper course of action is the one that maximizes the overall "happiness".

So: that's a particular class of utility functions.

"Expected utility maximization" is a more general framework from decision theory. You can use any utility function with it - and you can use it to model practically any agent.

Utilitarianism is a pretty nutty personal moral philosophy, IMO. It is certainly very unnatural - due partly to its selflessness and lack of nepotism. It may have some merits as a political philosophy (but even then...).
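
A toy sketch of the distinction, with all names and numbers invented for illustration: expected utility maximization is just "pick the action with the highest probability-weighted utility", and the utility function is a free parameter; utilitarianism corresponds to plugging in one particular utility function (aggregate happiness), while a purely selfish function fits the same framework equally well.

```python
# Toy illustration: expected utility maximization is a generic decision rule;
# utilitarianism is just one choice of utility function plugged into it.
# All actions, outcomes, and numbers below are invented for illustration.
from typing import Callable, Dict, List, Tuple

Outcome = Dict[str, float]               # happiness of each party in that outcome
Lottery = List[Tuple[float, Outcome]]    # (probability, outcome) pairs

def expected_utility(lottery: Lottery, utility: Callable[[Outcome], float]) -> float:
    return sum(p * utility(outcome) for p, outcome in lottery)

def best_action(actions: Dict[str, Lottery], utility: Callable[[Outcome], float]) -> str:
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Two candidate utility functions for the same framework:
def utilitarian(outcome: Outcome) -> float:   # total happiness of everyone
    return sum(outcome.values())

def selfish(outcome: Outcome) -> float:       # only "me" counts
    return outcome["me"]

actions = {
    "share": [(1.0, {"me": 5.0, "others": 20.0})],
    "hoard": [(0.9, {"me": 9.0, "others": 1.0}),
              (0.1, {"me": 2.0, "others": 1.0})],
}

print(best_action(actions, utilitarian))  # "share": maximizes total happiness
print(best_action(actions, selfish))      # "hoard": same framework, different utility
```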

Replies from: None
comment by [deleted] · 2012-03-05T20:56:33.028Z · LW(p) · GW(p)

Thanks.

Is there a name for expected utility maximisation over a consequentialist utility function built from human value? Does "consequentialism" usually imply normal human value, or is it usually a general term?

Replies from: timtyler
comment by timtyler · 2012-03-05T21:16:08.858Z · LW(p) · GW(p)

See http://en.wikipedia.org/wiki/Consequentialism for your last question (it's a general term).

The answer to your "Is there a name..." question is "no" - AFAIK.

Replies from: None
comment by [deleted] · 2012-03-05T23:22:17.001Z · LW(p) · GW(p)

The answer to your "Is there a name..." question is "no" - AFAIK.

I get the impression that most people around here approach morality from that perspective; it seems like something that ought to have a name.

comment by gwern · 2012-03-05T04:55:17.040Z · LW(p) · GW(p)

Each person has about a 1 in 12,000 chance of having an unruptured brain aneurysm that could be detected by a virtually risk-free magnetic resonance angiography and then treated. Given the utility you likely assign to your own life, it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don't do it, do you?

My understanding from long-past reading about elective whole-body MRIs was that they were basically the perfect example of iatrogenics and of how knowing about something can harm you / the danger of testing. What makes your example different?

(Note there is no such possible danger from cryonics: you're already 'dead'.)

comment by timtyler · 2012-03-05T20:26:31.446Z · LW(p) · GW(p)

To me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant.

Really? Some have been known to exaggerate to stimulate funding. However, many people (including some non-engineers) don't put machine intelligence that far off. Do you have your own estimates yet, perhaps?

comment by timtyler · 2012-03-05T22:00:25.577Z · LW(p) · GW(p)

I dare to argue that statements like "progress towards self-improving AIs is already substantially beyond what many futurists and philosophers are aware of" are almost certainly bullshit.

That's one of those statements so vague that it is bound to be true. "Substantially" is one problem, and "many" is another.