Comments

Comment by asciilifeform on The Strangest Thing An AI Could Tell You · 2009-07-15T16:28:19.457Z · LW · GW

We can temporarily disrupt language processing through magnetically-induced electric currents in the brain. As far as anyone can tell, the study subjects suffer no permanent impairment of any kind. Would you be willing to try an anosognosia version of the experiment?

Comment by asciilifeform on Avoiding Failure: Fallacy Finding · 2009-07-09T15:38:29.062Z · LW · GW

Would you be willing to show a reference or back-of-the-envelope calculation for this?

The last time I checked, the manufacture of large photovoltaic panels was energy-intensive and low-yield (their current price suggests that these problems persist.) They were also rated for a useful life of around two decades.

I do not believe that these problems have been corrected in any panel currently on the market. There is no shortage of vaporware.

Comment by asciilifeform on Avoiding Failure: Fallacy Finding · 2009-07-09T13:57:59.794Z · LW · GW

solar panels take more energy to manufacture than they'll produce in their lifetime

Do you mean to say that this is false?

Comment by asciilifeform on The Dangers of Partial Knowledge of the Way: Failing in School · 2009-07-07T15:33:46.968Z · LW · GW

Is there any other way to become literate?

No.

Comment by asciilifeform on Open Thread: July 2009 · 2009-07-02T17:34:21.376Z · LW · GW

What are your thoughts on the recent "Etsy considered harmful" article?

Comment by asciilifeform on The Aumann's agreement theorem game (guess 2/3 of the average) · 2009-06-22T14:52:25.542Z · LW · GW

This comes to mind. The author claims that "the winner was accurate to six decimal places."

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-18T19:18:51.143Z · LW · GW

Could you give more examples about things you like about Mathematica?

1) Mathematica's programming language does not confine you to a particular style of thinking. If you are a Lisp fancier, you can write entirely Lispy code. Likewise Haskell. There is even a capability for relatively painless dataflow programming. (A short sketch of points 1-4 follows this list.)

2) Wolfram Inc. took great pains to make interfacing with the outside world from within the app as seamless as possible. For example, you can suck a spreadsheet file directly into a multidimensional array. There is import and export capability for hundreds of formats, including obscure scientific and engineering ones. In case the built-in formats do not suffice, defining custom ones is surprisingly easy.

3) A non-headache-inducing replacement for regular expressions. Enough said.

4) Graphical objects (likewise audio and other streams) are first-class data types. They are able to appear as both the inputs and outputs of functions.

5) Lastly, and most importantly: fully interactive program development. The rest of the programming universe lives a life of endlessly repeated "compile and pray" cycles. Mathematica permits you to meaningfully evaluate and edit in place every line of code you write. I am otherwise an Emacs junkie, yet I have never felt the slightest desire to touch Emacs when working on Mathematica code. The programmer's traditional need to wade through and shovel giant piles of text from one place to another while writing code is almost entirely absent when working in this language.
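To give a taste of points 1-4, here is a minimal sketch in Mathematica. The spreadsheet file name is made up for illustration, and none of this is meant as more than a flavor of the language:

    (* 1) Multi-paradigm: the same computation in a functional style and as a rewrite rule *)
    Map[#^2 &, Range[5]]                  (* -> {1, 4, 9, 16, 25} *)
    Range[5] /. n_Integer :> n^2          (* same result, expressed as a transformation rule *)

    (* 2) Import pulls a spreadsheet straight into a nested array (hypothetical file name) *)
    data = Import["measurements.xlsx"];   (* a list of sheets, each a 2-D array of cell values *)

    (* 3) String patterns in place of regular expressions *)
    StringCases["order 42, order 107",
      "order " ~~ (d : DigitCharacter ..) :> ToExpression[d]]   (* -> {42, 107} *)

    (* 4) Graphics are ordinary values: build a list of plots and lay them out like any other data *)
    plots = Table[Plot[Sin[k x], {x, 0, 2 Pi}], {k, 1, 3}];
    GraphicsRow[plots]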

The downsides of Mathematica (slow, proprietary, expensive, etc.) are widely known. Thus far, the advantages have vastly outweighed the problems for my particular kind of work. However, I have found that I now feel extremely confined when forced to work in any other programming language. Perhaps this risk should be added to the list of disadvantages.

I learned about Lisp after Mathematica, and was like, "wow, that must have been where Wolfram got the idea."

Wolfram had (at least in the early days of Mathematica) a very interesting relationship with Lisp. He seems to have initially rejected many of its ideas, but it is clear that they somehow crept back into his work as time went by.

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-18T14:32:43.155Z · LW · GW

intelligence doesn't necessarily have anything to do with our capacity to detect lies

Do you actually believe this?

Comment by asciilifeform on Rationalists lose when others choose · 2009-06-16T20:06:13.711Z · LW · GW

Regardless of exactly what the new fact was?

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-16T18:00:25.709Z · LW · GW

I do not know of a working society-wide solution. Establishing research institutes in the tradition of Bell Labs would be a good start, though.

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-16T16:50:37.060Z · LW · GW

Do you mean that organizations aren't very good at selecting the best person for each job?

Actually, no. What I mean is that human society isn't very good at realizing that it would be in its best interest to assign as many high-IQ persons as possible the job of "being themselves" full-time and freely developing their ideas - without having to justify their short-term benefit.

Hell, forget "as many as possible", we don't even have a Bell Labs any more.

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-16T16:01:05.507Z · LW · GW

How does increasing "the marginal social status payoff from an increase in IQ" help?

The implication may be that persons with high IQ are often prevented from putting it to a meaningful use due to the way societies are structured: a statement I agree with.

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-16T14:05:06.765Z · LW · GW

But there is no evidence that any pill can raise the average person's IQ by 10 points

Please read this short review of the state of the art of chemical intelligence enhancement.

We probably cannot reliably guarantee 10 added points for every subject yet. Quite far from it, in fact. But there are some promising leads.

if some simple chemical balance adjustment could have such a dramatic effect on fitness

Others have made these points before, but I will summarize: fitness in a prehistoric environment is a very different thing from fitness in the world of today; prehistoric resource constraints (let's pick, for instance, the scarcity of refined sugars) bear no resemblance to those of today; certain refinements may be trivial from the standpoint of modern engineering but inaccessible to biological evolution, or at the very least ended up unreachable from a particular local maximum. Consider, for example, the rarity of evolved wheels.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-16T00:44:36.913Z · LW · GW

I will accept that "AGI-now" proponents should carry the blame for a hypothetical Paperclip apocalypse when Friendliness proponents accept similar blame for an Earth-bound humanity flattened by a rogue asteroid (or leveled by any of the various threats a superintelligence - or, say, the output of a purely human AI research community unburdened by Friendliness worries - might be able to counter. I previously gave Orlov's petrocollapse as yet another example.)

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-16T00:38:19.079Z · LW · GW

I cannot pin down this idea as rigorously as I would like, but there seems to exist such a trait as liking to think abstractly, and that this trait is mostly orthogonal to IQ as we understand it (although a "you must be this tall to ride" effect applies.) With that in mind, I do not think that any but the most outlandishly powerful and at the same time effortless intelligence amplifier will be of much interest to the bulk of the population.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-16T00:29:38.310Z · LW · GW

ASCII - the onus is on you to give compelling arguments that the risks you are taking are worth it

Status quo bias, anyone?

I presently believe, not without justification, that we are headed for extinction-level disaster as things are; and that not populating the planet with the highest achievable intelligence is in itself an immediate existential risk. In fact, our current existence may well be looked back on as an unthinkably horrifying disaster by a superintelligent race (I'm thinking of Yudkowsky's Super-Happies.)

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-15T21:23:56.067Z · LW · GW

It's highly non-obvious that it would have significant effects

The effects may well be profound if sufficiently increased intelligence will produce changes in an individual's values and goal system, as I suspect it might.

At the risk of "argument from fictional evidence", I would like to bring up Poul Anderson's Brain Wave, an exploration of this idea (among others.)

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-15T21:17:11.676Z · LW · GW

not quite what I was aiming at

I am curious what you had in mind. Please elaborate.

Comment by asciilifeform on Intelligence enhancement as existential risk mitigation · 2009-06-15T20:23:45.130Z · LW · GW

Software programs for individuals.... prime association formation at a later time.... some short-term memory aid that works better than scratch paper

I have been obsessively researching this idea for several years. One of my conclusions is that an intelligence-amplification tool must be "incestuously" user-modifiable ("turtles all the way down", possessing what programming language designers call reflectivity) in order to be of any profound use, at least to me personally.
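To make "reflectivity" a little more concrete, here is a minimal Mathematica-flavored sketch (the function f is purely illustrative; this is shorthand for the property, not a design for such a tool): the system's own definitions are ordinary data that the running program - or its user - can inspect and rewrite on the fly.

    f[x_] := x^2          (* an ordinary definition *)
    DownValues[f]         (* the definition itself is data: {HoldPattern[f[x_]] :> x^2} *)

    DownValues[f] = {HoldPattern[f[x_]] :> x^3};   (* rewrite the machinery while it runs *)
    f[2]                  (* now evaluates to 8 *)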

Or just biting the bullet and learning Mathematica to an expert level instead of complaining about its UI

About six months ago, I resolved to do exactly that. While I would not yet claim "black belt" competence in it, Mathematica has already enabled me to perform feats which I would not have previously dared to contemplate, despite having worked in Common Lisp. Mathematica is famously proprietary and the runtime is bog-slow, but the language and development environment are currently in a class of their own (at least from the standpoint of exploratory programming in search of solutions to ultra-hard problems.)

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-15T19:20:00.091Z · LW · GW

I have located a paper describing Lenat's "Representation Language Language", in which he wrote Eurisko. Since no one has brought it up in this thread, I will assume that it is not well-known, and may be of interest to Eurisko-resurrection enthusiasts. It appears that a somewhat more detailed report on RLL is floating around public archives; I have not yet been able to track down a copy.

Comment by asciilifeform on Why safety is not safe · 2009-06-15T14:34:23.974Z · LW · GW

Fair enough. It may very well take both.

Comment by asciilifeform on Why safety is not safe · 2009-06-15T14:01:09.836Z · LW · GW

How are truly fundamental breakthroughs made?

Usually by accident, by one or a few people. This is a fine example.

ought to be more difficult than building an operating system

I personally suspect that the creation of the first artificial mind will be more akin to a mathematician's "aha!" moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Lenat's dictum that "intelligence is ten million rules." I suspect that the legendary missing "key" to AGI is something which could ultimately fit on a t-shirt.

Comment by asciilifeform on Why safety is not safe · 2009-06-15T03:30:36.347Z · LW · GW

Do you agree that you hold a small minority opinion?

Yes, of course.

Do you have any references where the arguments are spelled out in greater detail?

I was persuaded by the writings of one Dmitry Orlov. His work focuses on the impending collapse of the U.S.A. in particular, but I believe that much of what he wrote is applicable to the modern economy at large.

Comment by asciilifeform on Why safety is not safe · 2009-06-15T02:19:13.609Z · LW · GW

Please attack my arguments. I truly mean what I say. I can see how you might have read me as a troll, though.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-15T00:58:10.220Z · LW · GW

There were some extremely dedicated and obsessive people involved in Traveller, back then

How many of them made use of any kind of computer? How many had any formal knowledge applicable to this kind of optimization?

Comment by asciilifeform on Why safety is not safe · 2009-06-14T23:23:06.225Z · LW · GW

the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research

Science as priestcraft: a historic dead end, the Pythagoreans and the few genuine finds of the alchemists notwithstanding. I am astounded by the arrogance of people who consider themselves worthy of membership in such a secret club, believing themselves more qualified than "the rabble" to decide the fate of all mankind.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T23:16:07.537Z · LW · GW

if you are not dead as a result

I am profoundly skeptical of the link between Hard Takeoff and "everybody dies instantly."

ad-hoc tinkering is expected to lead to disaster

This is the assumption which I question. I also question the other major assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the "premature" development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.

Ad-hoc tinkering has given us the seed of essentially every other technology. The major disasters usually wait until large-scale application of the technology by hordes of people following received rules (rather than an ab initio understanding of how it works) begins.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T23:08:46.956Z · LW · GW

Thank you for the link.

I concede that a post-collapse society might successfully organize and attempt to resurrect civilization. However, what I have read regarding surface-mineral depletion and the mining industry's forced reliance on modern energy sources leads me to believe that if our attempt at civilization sinks, the game may be permanently over.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T23:03:43.335Z · LW · GW

I view the teenager's success as simultaneously more probable and more desirable than that of a centralized bureaucracy. I should have made that more clear. And my "goal" in this case is simply the creation of superintelligence. I believe the entire notion of pre-AGI-discovery Friendliness research to be absurd, as I already explained in other comments.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T22:58:14.282Z · LW · GW

The logic of mutually assured destruction would be clear and compelling even to the general public

When was the last time a government polled the general public before plunging the nation into war?

Now that I think about it, the American public, for instance, has already voted for petrowar: with its dollars, by purchasing SUVs and continuing to expand the familiar suburban madness which fuels the cult of the automobile.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T22:54:47.042Z · LW · GW

have 10% of the population do science

Do you actually believe that 10% of the population are capable of doing meaningful science? Or that post-collapse authority figures will see value in anything we would recognize as science?

Comment by asciilifeform on Why safety is not safe · 2009-06-14T22:50:21.703Z · LW · GW

we have nuclear, wind, solar and other fossil fuels

Petrocollapse is about more than simply energy. Much of modern industry relies on petrochemical feedstock. This includes the production and recycling of the storage batteries which wind/solar enthusiasts rely on. On top of that, do not forget modern agriculture's non-negotiable dependence on synthetic fertilizers.

Personally I think that the bulk of the coming civilization-demolishing chaos will stem from the inevitable cataclysmic warfare over the last remaining drops of oil, rather than from direct effects of the shortage itself.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T22:42:33.685Z · LW · GW

AGI is a really hard problem

It has successfully resisted solution thus far, but I suspect that it will seem laughably easy in retrospect when it finally falls.

If it ever gets accomplished, it's going to be by a team of geniuses who have been working on the project for years

This is not how truly fundamental breakthroughs are made.

Will they be so immersed in the math that they won't have read the deep philosophical tracts?

Here is where I agree with you - anyone both qualified and motivated to work on AGI will have no time or inclination to pontificate regarding some nebulous Friendliness.

But your bored teenager scenario makes no sense.

Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?

Comment by asciilifeform on Why safety is not safe · 2009-06-14T22:34:36.345Z · LW · GW

Is that your plan against intelligence stagnation?

I'll bet on the bored teenager over a sclerotic NASA-like bureaucracy any day. Especially if a computer is all that's required to play.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T22:25:03.737Z · LW · GW

The intro section of my site (Part 1, Part 2) outlines some of my thoughts regarding Engelbartian intelligence amplification. For what I regard as persuasive arguments in favor of the imminence of petrocollapse, I recommend Dmitry Orlov's blog and dead-tree book.

As for my thoughts regarding AGI/FAI, I have not spoken publicly on the issue until yesterday, so there is little to read. My current view is that Friendly AI enthusiasts are doing the equivalent of inventing the circuit breaker before discovering electricity. Yudkowsky stresses the importance of "not letting go of the steering wheel" lest humanity veer off into the maw of a paperclip optimizer or similar calamity. My position is that Friendly AI enthusiasts have invented the steering wheel and are playing with it - "vroom, vroom" - without having invented the car.

The history of technology provides no examples of a safety system being developed entirely prior to the deployment of "unsafe" versions of the technology it was designed to work with. The entire idea seems arrogant and somewhat absurd to me.

I have been reading Yudkowsky since he first appeared on the Net in the 90's, and remain especially intrigued by his pre-2001 writings - the ones he has disavowed, which detail his theories regarding how one might actually construct an AGI. It saddens me that he is now a proponent of institutionalized caution regarding AI. I believe that the man's formidable talents are now going to waste. Caution and moderation lead us straight down the road of 15th century China. They give us OSHA and the modern-day FDA. We are currently aboard a rocket carrying us to pitiful oblivion rather than a glorious SF future. I, for one, want off.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T17:56:00.744Z · LW · GW

Dying is the default.

I maintain that there will be no FAI without a cobbled-together-ASAP (before petrocollapse) AGI.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T17:53:26.107Z · LW · GW

How is blindly looking for AGI in a vast search space better than stagnation?

No amount of aimless blundering beats deliberate caution and moderation (see 15th century China example) for maintaining technological stagnation.

How does working on FAI qualify as "stagnation"?

It is a distraction from doing things which are actually useful in the creation of our successors.

You are trying to invent the circuit breaker before discovering electricity; the airbag before the horseless carriage. I firmly believe that all of the effort currently put into "Friendly AI" is wasted. The bored teenager who finally puts together an AGI in his parents' basement will not have read any of these deep philosophical tracts.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T17:47:10.326Z · LW · GW

Dmitry Orlov, and very.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T17:01:03.720Z · LW · GW

How about thinking about ways to enhance human intelligence?

I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears to be capable of at best incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and it may be a lost dream.

If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T16:45:33.677Z · LW · GW

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves to be catastrophically stupid as a mass and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement, but, for example, our continued failure to solve the protein folding problem, to create countermeasures to nuclear weapons, and to devise a universal weapon against viruses. Not to mention our failure of the ultimate planetary IQ test - space colonization.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-14T16:10:41.918Z · LW · GW

If humans manage to invent a virus that wipes us out, would you still call that intelligent?

Super-plagues and other doomsday tools are possible with current technology. Effective countermeasures are not. Ergo, we need more intelligence, ASAP.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T16:00:22.483Z · LW · GW

catastrophic social collapse seems to require something like famine

Not necessarily. When the last petroleum is refined, rest assured that the tanks and warplanes will be the very last vehicles to run out of gas. And bullets will continue to be produced long after it is no longer possible to buy a steel fork.

R&D... efficient services... economy of scale... new technologies will appear

Your belief that something like present technological advancement could continue after a cataclysmic collapse boggles my mind. The one historical precedent we have - the Dark Ages - teaches the exact opposite lesson. Reversion to barbarism - and a barbarism armed with the remnants of the finest modern weaponry, this time around - is the more likely outcome.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T15:52:41.826Z · LW · GW

permanently put us back in the stone age

Exactly. The surface-accessible minerals are entirely gone, and pre-modern mining will have no access to what remains. Even meaningful landfill harvesting requires substantial energy and may be beyond the reach of people attempting to "pick up the pieces" of totally collapsed civilization.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T15:14:16.743Z · LW · GW

resource depletion (as alluded to by RWallace) is a strong possible threat. But so is a negative singularity.

Resource depletion is as real and immediate as gravity. You can pick up a pencil and draw a line straight through present trends to a horse-and-cart world (or the smoking, depopulated ruins from a cataclysmic resource war.) The negative singularity, on the other hand, is an entirely hypothetical concept. I do not believe the two are at all comparable.

Comment by asciilifeform on Why safety is not safe · 2009-06-14T15:11:40.838Z · LW · GW

Would you have hidden it?

You cannot hide the truth forever. Nuclear weapons were an inevitable technology. Likewise, whether or not Eurisko was genuine, someone will eventually cobble together an AGI. Especially if Eurisko was genuine, and the task really is that easy. The fact that you seem persuaded of the possibility of Lenat having danced on the edge of creating hard takeoff gives me more interest than ever before in a re-implementation.

Reading "value is fragile" almost had me persuaded that blindly pursuing AGI is wrong, but shortly after, "Safety is not Safe" reverted me back to my usual position: stagnation is as real and immediate a threat as ever there was, vastly dwarfing any hypothetical existential risks from rogue AI.

For instance, bloat and out-of-control accidental complexity have essentially halted all basic progress in computer software. I believe that the lack of quality programming systems will lead (and may already have led) directly to stagnation in other fields, such as computational biology. The near-term future appears to resemble Windows Vista rather than HAL. Engelbart's Intelligence Amplification dream has been lost in the noise. I thus expect civilization to succumb to Natural Stupidity in the near term future, unless a drastic reversal in these trends takes place.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-14T05:57:33.955Z · LW · GW

I was going to reply, but it appears that someone has eloquently written the reply for me.

I'd like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms - something I believe we're headed for by default, in the very near future.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-14T05:26:01.371Z · LW · GW

Well, Lenat did. Whether, and in what capacity, a computer program was involved is an open question.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-14T04:24:31.476Z · LW · GW

Eliezer,

I am rather surprised that you accept all of the claimed achievements of Eurisko and even regard it as "dangerous", despite the fact that no one save the author has ever seen even a fragment of its source code. I firmly believe that we are dealing with a "mechanical Turk."

I am also curious why you believe that meaningful research on Friendly AI is at all possible without prior exposure to a working AGI. To me it seems a bit like trying to invent the ground fault interrupter before having discovered electricity.

Aside from that: if I had been following your writings more carefully, I might already have learned the answer to this, but just why do you prioritize formalizing Friendly AI over achieving AI in the first place? You seem to side with humanity over a hypothetical Paperclip Optimizer. Why is that? It seems to me that unaugmented human intelligence is itself an "unfriendly (non-A)I", quite efficient at laying waste to whatever it touches.

There is every reason to believe that if an AGI does not appear before the demise of cheap petroleum, our species is doomed to "go out with a whimper." I for one prefer the "bang" as a matter of principle.

I would gladly accept taking a chance at conversion to paperclips (or some similarly perverse fate at the hands of an unfriendly AGI) when the alternative appears to be the artificial squelching of the human urge to discover and invent, with the inevitable harvest of stagnation and eventually oblivion.

I accept Paperclip Optimization (and other AGI failure modes) as an honorable death, far superior to being eaten away by old age or being killed by fellow humans in a war over dwindling resources. I want to live in interesting times. Bring on the AGI. It seems to me that if any intelligence, regardless of its origin, is capable of wrenching the universe out of our control, it deserves it.

Why is the continued hegemony of Neolithic flesh-bags so precious to you?

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-14T03:56:04.010Z · LW · GW

Until Yudkowsky releases the chat transcripts for public review, the AI Box experiment proves nothing.

Comment by asciilifeform on Let's reimplement EURISKO! · 2009-06-14T03:41:49.501Z · LW · GW

EURISKO accomplished it in fits and starts

Where is the evidence that EURISKO ever accomplished anything? No one but the author has seen the source code.