Comments

Comment by luzr on Serious Stories · 2009-01-10T09:23:23.000Z · LW · GW

Eliezer:

I am starting to be somewhat frightened by your premises - especially considering that there is a non-zero probability of creating some nonsentient singleton that tries to realize your values.

Before going any further, I STRONGLY suggest that you think AGAIN about what might be interesting in carving wooden legs.

Yes, I like to SEE MOVIES with strong main characters going through hell. But I would not want any of that for myself.

It does not matter that an AI can do everything better than me. Right now, I am not the best at carving wood either. But working with wood is still fun. So is swimming, skiing, playing chess (despite the fact that a computer can beat you every time), caring for animals, etc.

I do not need to do dangerous things to be happy. I am definitely sure about that.

Comment by luzr on Amputation of Destiny · 2008-12-30T19:22:00.000Z · LW · GW

Eliezer:

"Narnia as a simplified case where the problem is especially stark."

I believe there are at least two significant differences:

  • Aslan was not created by humans; he does not represent the "story of intelligence" (quite the contrary: lesser intelligences were created by Aslan, as long as you interpret him as God).

  • There is only a single Aslan with a single predetermined "goal", while there are millions of Culture Minds with no single "goal".

(Actually, the second point is what I dislike so much about the idea of a singleton - it can turn into something like a benevolent but oppressive God too easily. Aslan IS the Narnia singleton.)

Comment by luzr on Amputation of Destiny · 2008-12-29T22:19:20.000Z · LW · GW

David:

"asks a Mind whether it could create symphonies as beautiful as it and how hard it would be"

On a somewhat related note, there are still human chess players and competitions...

Comment by luzr on Amputation of Destiny · 2008-12-29T21:02:56.000Z · LW · GW

Eliezer:

This is really off-topic, and I do not have a copy of Consider Phlebas at hand right now, but

http://en.wikipedia.org/wiki/Dra%27Azon

Even if Banks did not mention 'sublimed' in the first novel, the concept fits Dra'Azon exactly.

Besides, the Culture is not really advancing its 'base' technology, but rather rebuilding its infrastructure into a war machine.

Comment by luzr on Amputation of Destiny · 2008-12-29T20:37:08.000Z · LW · GW

Eliezer (about Sublimation):

"Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idarans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened."

Just a technical (or fandom?) note:

A Sublimed civilization is central to the plot of Consider Phlebas (Schar's World, where the Mind escapes, is "protected" by a Sublimed civilization - that is why direct military action by either the Idirans or the Culture is impossible).

Comment by luzr on Amputation of Destiny · 2008-12-29T20:05:45.000Z · LW · GW

Julian Morrison:

Or you can invert the issue once again: you can enjoy your time on obsolete skills (like sports, arts, or carving table legs...).

There is no shortage of things to do; there is only a problem with your definition of "worthless".

Comment by luzr on Amputation of Destiny · 2008-12-29T19:54:46.000Z · LW · GW

"If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?"

Yes, definitely. If nothing else, it means diversity.

"Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?"

I do not care, as long as the story continues.

And yes, I would like to hear the story - which is about the same thing I would get if Minds were prohibited. I will not be the main character of the story anyway, so why should I care?

"Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?"

Grow up how? Does it involve uploading your mind to computronium?

"Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?"

Well, this is the only thing I fear. I would prefer a sentient superintelligence to create nonsentient utility maximizers. Much less chance of error, IMO.

"If we don't have to do it one way or the other - if we have both options - and if there's no particular need for heroic self-sacrifice - then which do you like?"

As you have said - this is a Big world. I do not think the two options are mutually exclusive. The only genuinely exclusive option I see is a nonsentient-maximizer singleton programmed to prevent sentient AIs and Minds.

"Well... you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with."

Please explain the difference between a Mind created outright and "grown-up humans". Do you insist on biological computronium?

As you have said, we are living in a Big world. That quite likely means there is (or will be) some Culture-like civilisation that we will meet if things go well.

How do you think we will be able to compete, given your "no sentient AIs, only grown-up humans" bias?

Or: say your CEV AI creates a singleton.

Will we be allowed to create the Culture?

What textbooks will be banned?

Will the CEV burn any new textbooks we create, so that nobody is able to stand on other people's shoulders?

Comment by luzr on Can't Unbirth a Child · 2008-12-28T21:04:04.000Z · LW · GW

anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."

I am quite aware of that. Anyway, using "cheesecake" as the placeholder adds a bias to the whole story.

"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."

Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that a "big cheesecake" is unlikely.

Thinking about it, AFAIK Eliezer considers himself a rationalist. Is not a big part of rationalism about disputing values that are merely consequences of our long history?

Comment by luzr on Can't Unbirth a Child · 2008-12-28T19:20:29.000Z · LW · GW

"these trillions of people also cared, very strongly, about making giant cheesecakes."

Uh oh. IMO, that is a fallacy. You introduce a quite reasonable scenario, then inject some nonsense, without any logic or explanation, to make it look bad.

You should explain when, on the way from a single sentient AI to voting rights for trillions, cheesecakes came into play. Is it that all sentient beings are automatically programmed to like creating big cheesecakes? Or anything equally bizarre?

Subtract the cheesecakes and your scenario is quite OK with me, including 0.1% of the galaxy for humans and 99.9% for AIs. 0.1% of the galaxy is about 200 million stars...

BTW, most likely, without sentient AI there will be no human (or human-originated) presence outside the solar system anyway.

Well, so far, my understanding is that your suggestion is to create a nonsentient utility maximizer programmed to stop research in certain areas (especially research into creating sentient AI, right?). Thanks, I believe I have a better idea.

Comment by luzr on Nonperson Predicates · 2008-12-27T20:19:11.000Z · LW · GW

Uhm, maybe this is naive, but if you have a problem that your mind is too weak to decide, and you have a really strong (friendly) superintelligent GAI, would it not be logical to use the GAI's strong mental processes to resolve the problem?

Comment by luzr on Nonsentient Optimizers · 2008-12-27T20:09:52.000Z · LW · GW

"The counter-argument that completely random behavior makes you vulnerable, because predictable agents better enjoy the benefits of social cooperation, just doesn't have the same pull on people's emotions."

BTW, completely deterministic behaviour makes you vulnerable as well. Ask computer security experts.

A somewhat related note: the Linux strong random number generator works by capturing real-world events (think of the user moving the mouse) and hashing them into random numbers that are considered, for all practical purposes, perfect.

Taking or not taking an action may depend on thousands of inputs that cannot be reliably predicted or described (the reason is likely buried deep in the physics - entropy, quantum uncertainty). This, IMO, is the real cause of "free will".
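
To make this concrete, here is a toy Python sketch (my own illustration, nothing like the actual kernel code): it mixes the jitter of event timings into a hash pool and reads random bytes out, which is the same basic move the real entropy pool makes with mouse and keyboard events.

    # Toy illustration of an entropy-pool RNG (not the Linux kernel implementation):
    # unpredictable real-world event timings are mixed into a pool and hashed
    # to produce output bytes.
    import hashlib
    import time

    def collect_event_timings(n_events=64):
        """Nanosecond durations of tiny sleeps; the exact values jitter with OS scheduling."""
        timings = []
        for _ in range(n_events):
            t0 = time.perf_counter_ns()
            time.sleep(0)                      # yield to the scheduler
            timings.append(time.perf_counter_ns() - t0)
        return timings

    def random_bytes_from_jitter(n_events=64):
        pool = hashlib.sha256()
        for t in collect_event_timings(n_events):
            pool.update(t.to_bytes(8, "little"))
        return pool.digest()                   # 32 "random" bytes

    print(random_bytes_from_jitter().hex())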

Comment by luzr on Complex Novelty · 2008-12-20T10:43:09.000Z · LW · GW

"If this were all the hope the future held, I don't know if I could bring myself to try. Small wonder that people don't sign up for cryonics, if even SF writers think this is the best we can do."

Well, I think the point missed is that you are not FORCED to carve those legs. If you find something else interesting, do it.

Comment by luzr on High Challenge · 2008-12-19T12:26:24.000Z · LW · GW

Abigail:

"The "Culture" sequence of novels by Iain M. Banks suggests how people might cope with machines doing all the work."

Exactly. I think the Culture is highly relevant to most topics discussed here. Obviously, it is just a fictional utopia, but I believe it gives a plausible answer to the "unlimited power" future.

For reference: http://en.wikipedia.org/wiki/The_Culture

Comment by luzr on Prolegomena to a Theory of Fun · 2008-12-18T17:31:07.000Z · LW · GW

"Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?"

Wow, what a nice, elegant solution to the Fermi paradox :)

Comment by luzr on Prolegomena to a Theory of Fun · 2008-12-18T10:41:01.000Z · LW · GW

"because you don't actually want to wake up in an incomprehensible world"

Is that not what all people do each morning anyway?

Comment by luzr on Not Taking Over the World · 2008-12-16T08:36:13.000Z · LW · GW

"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"

It is not about what YOU define as right.

Anyway, the fact that Eliezer is an existing self-aware, sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, makes me suppose that some other powerful, sentient, self-aware GI should reach the same point. I also believe that more general intelligence makes a GI converge to such "right thinking".

What worries me most is building a GAI as a non-sentient utility maximizer. OTOH, I believe that a 'non-sentient utility maximizer' is mutually exclusive with a 'learning' strong AGI system - in other words, any system capable of learning and exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet gives me hope...

Comment by luzr on Not Taking Over the World · 2008-12-16T05:24:05.000Z · LW · GW

Phil:

"If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge."

This is inconsistent. What conflict would really do is provide new information to process ("knowledge").

I guess I can agree with the rest of the post. What IMO is worth pointing out is that most pleasures - hormones and instincts excluded - are about processing 'interesting' information.

I guess that, somewhere deep in all sentient beings, "interesting information" is the ultimate joy. This has dire implications for any strong AGI.

I mean, the real pleasure for an AGI has to be acquiring new information patterns. Would it not be a little bit stupid to paperclip the solar system in that case?

Comment by luzr on Not Taking Over the World · 2008-12-16T05:12:18.000Z · LW · GW

"But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it."

"If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?"

"Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end in itself."

"What is individually a life worth living?"

Really, is not the ultimate answer to the whole FAI issue encoded there?

IMO, the most important thing about AI is to make sure IT IS SENTIENT. Then, with very high probability, it will have to consider the very same questions posed here.

(And to make sure it does, make more of them and make them diverse. The majority will likely "think right" and suppress the rest.)

Comment by luzr on What I Think, If Not Why · 2008-12-12T18:15:00.000Z · LW · GW

"real world is deterministic on the most fundamental level"

Is it?

http://en.wikipedia.org/wiki/Determinism#Determinism.2C_quantum_mechanics.2C_and_classical_physics

Comment by luzr on What I Think, If Not Why · 2008-12-11T23:07:06.000Z · LW · GW

Tim:

Well, as an off-topic response, I see only some engineering problems cited in your "Against Cyborgs" essay as a counterargument. Anyway, let me say that, in my book:

"miniaturizing and refining cell phones, video displays, and other devices that feed our senses. A global-positioning-system brain implant to guide you to your destination would seem seductive only if you could not buy a miniature ear speaker to whisper you directions. Not only could you stow away this and other such gear when you wanted a break, you could upgrade without brain surgery."

is pretty much the equivalent of what I had in mind with cyborging. Brain surgery is not the point. I guess it is pretty obvious even today that to read thoughts you will not need any surgery at all. And if the information is fed back into my glasses, that is OK with me.

Still, the ability to just "think" the code (yep, I am a programmer), then see the whole procedure displayed before my eyes already refactored and tested (via weak-AI augmentation), sounds like a nice productivity booster. In fact, I believe that if thinking code were easy, one could, with the help of some nice programming language, learn to use coding to solve many more problems in normal life situations, gradually building a personal library of routines... :)

Comment by luzr on What I Think, If Not Why · 2008-12-11T22:50:15.000Z · LW · GW

Eliezer:

"Will, your example, good or bad, is universal over singletons, nonsingletons, any way of doing things anywhere."

I guess there is a significant difference - for a singleton, each mistake can be fatal (and not only for it).

I believe this is the real part I dislike about the idea, besides the part where a singleton either cannot evolve or cannot stay a singleton (because of the speed-of-light vs. locality issue).

Comment by luzr on What I Think, If Not Why · 2008-12-11T22:39:34.000Z · LW · GW

Eliezer:

"Tim, your page doesn't say anything about FOR loops or self-optimizing compilers not being able to go a second round, which is the part you got from me and then thought you had invented."

Well, it certainly does:

"Today, machines already do a lot of programming. They perform refactoring tasks which would once have been delegated to junior programmers. They compile high-level languages into machine code, and generate programs from task specifications. They also also automatically detect programming errors, and automatically test existing programs."

I guess your claim is just a misunderstanding caused by not understanding CS terminology.

Finding a new way to optimize loops is an application of automated refactoring plus automated testing and benchmarking.
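
To show what I mean, a toy Python sketch (my own example, not anything from Tim's page): generate candidate rewrites of a loop, verify that they behave identically on test inputs, and keep the fastest one - automated refactoring, testing, and benchmarking in miniature.

    # Pick the fastest rewrite of a loop that passes the automated tests.
    import timeit

    def original(xs):
        total = 0
        for x in xs:
            total = total + x * 2
        return total

    def candidate(xs):
        return 2 * sum(xs)                     # algebraically equivalent rewrite

    candidates = [original, candidate]
    test_inputs = [[], [1, 2, 3], list(range(1000))]

    # automated testing: every candidate must agree with the original
    valid = [f for f in candidates
             if all(f(t) == original(t) for t in test_inputs)]

    # automated benchmarking: keep the fastest valid candidate
    best = min(valid, key=lambda f: timeit.timeit(
        lambda: f(list(range(1000))), number=200))
    print("selected:", best.__name__)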

Comment by luzr on What I Think, If Not Why · 2008-12-11T21:26:56.000Z · LW · GW

Eliezer:

"Tim probably read my analysis using the self-optimizing compiler as an example, then forgot that I had analyzed it and thought that he was inventing a crushing objection on his own."

Why do you think it is a crushing objection? I believe Tim just repeats his favorite theme (which, in fact, I tend to agree with), where machine-augmented humans build better machines. If you can use automated refactoring to improve the way a compiler works (and today, you often can), that is in fact a pretty cool augmentation of human capabilities. It is a recursive FOOM. The only difference between your vision and his is that as long as k < 1 (and perhaps for some time after that point), humans are important FOOM agents. Also, humans are getting much more capable in the process. For example, a machine-augmented human (think weak AI + direct neural interface and all those cyborging whistles + mind drugs) might be quite likely to follow the FOOM.

Comment by luzr on What I Think, If Not Why · 2008-12-11T21:10:41.000Z · LW · GW

"FOOM that takes two years"

In addition to the comments by Robin and Aron, I would also point out the possibility that the longer the FOOM takes, the larger the chance it is not local, regardless of security - somewhere else, there might be another FOOMing AI.

Now, as I understand it, some consider this situation even more dangerous, but it might just as well create a defence against "taking over".

Another comment on the FOOM scenario, and this is sort of an addition to Tim's post:

"As machines get smarter, they will gradually become able to improve more and more of themselves. Yes, eventually machines will be able to cut humans out of the loop - but before that there will have been much automated improvement of machines by machines - and after that there may still be human code reviews."

Eliezer seems to spend a lot of time explaining what happens when "k > 1" - when AI intelligence surpasses human and starts self-improving. But I suspect that the phase 0.3 < k < 1 might be pretty long, maybe decades.

Also, by the time of the FOOM, we should be able to use vast numbers of fast 'subcritical' AIs (+ weak AIs) as guardians of the process. In fact, by that time, k < 1 AIs might play a pretty important role in the world economy and security, and it does not take too much pattern-recognition power to keep things at bay. (Well, in fact, I believe Eliezer proposes something similar in his thesis, except for the locality issue.)

Comment by luzr on Artificial Mysterious Intelligence · 2008-12-08T10:51:22.000Z · LW · GW

Tim Tyler:

As much as I like your posts, one formal note:

If you are responding to somebody else, it is always a good idea to put their name at the beginning of the post.

Comment by luzr on Sustained Strong Recursion · 2008-12-07T17:23:09.000Z · LW · GW

Lightwave:

But the goal system of humans - as individuals, as social groups, and as a civilization - tends to change over time, and my understanding is that this is the result of free will and learning experience. It is in fact the basis of adaptability.

I expect the same must be true for a real strong AI. And in that case, a singleton simply does not make much sense anymore, IMHO (when smaller is faster and local is more capable of reacting...).

Comment by luzr on Sustained Strong Recursion · 2008-12-07T08:45:09.000Z · LW · GW

Nick Tarleton:

"If multiple local agents have a common goal system"

I guess you can largely apply that to current human civilization as well...

I was thinking about the "multimind singleton" a little bit more and came to the conclusion that the fundamental question is perhaps:

Is a strong AI supposed to have free will?

Comment by luzr on Sustained Strong Recursion · 2008-12-06T16:46:33.000Z · LW · GW

Vladimir Nesov:

"Only few responses to changing context are the right ones"

As long as they are "few" instead of "one" - and this "few" still means a basically infinite subset of a larger infinite set - differences will accumulate over time, leading to different personalities.

Note that such a personality might not diverge from the basic goal. But it will inevitably start to 'disagree' about choosing among those "few" good choices because of different learning experiences.

This, BTW, is the reason why, despite what Tooby & Cosmides say, we have a highly diverse ecosystem with a very large number of species.

Comment by luzr on Sustained Strong Recursion · 2008-12-06T11:40:56.000Z · LW · GW

"because parallelizing is programmatically difficult"

Minor note: "Parallelization is programmatically difficult" is in fact another example of recursion.

The real reason why programming focused on serial execution was that most hardware was serial. There is not much point in learning the mysteries of multithreaded development if the chance that your SW will run on a multicore CPU is close to zero.

Now that multicore CPUs are the de facto standard, parallel programming is no longer considered prohibitively difficult; it is just another thing you have to learn. There are new tools, new languages, etc.

SW always lags behind HW. Intel has had 32-bit CPUs since 1986; it took 10 years before 32-bit PC software became mainstream...
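
For illustration, this is roughly what "just another thing you have to learn" looks like today - a minimal Python sketch (my own, using only the standard library) that spreads a CPU-bound loop across cores:

    # Distribute a CPU-bound function over all available cores.
    from multiprocessing import Pool

    def work(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [200_000] * 8
        with Pool() as pool:           # one worker process per core by default
            results = pool.map(work, inputs)
        print(sum(results))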

Comment by luzr on Sustained Strong Recursion · 2008-12-06T11:29:34.000Z · LW · GW

"So an AI that splits itself into two may deviate in capability, but should share the same purpose."

I cannot see how two initially identical strong AIs sharing a purpose would not start to diverge over time. A strong AI is by definition adaptable (meaning it learns); different sets of local conditions (= inputs) must lead to a split in the AIs' "personality".

In other words, splitting itself has to end with two "itselves", as long as both are "strong".

Comment by luzr on Sustained Strong Recursion · 2008-12-06T10:00:58.000Z · LW · GW

"but I can't think of anything polite to say."

Then say something impolite. It is quite possible there is something I have missed about the concept of a singleton. I am taking the definition from Nick Bostrom's page.

Imagine a singleton "taking over" Mars. Every time an intelligent decision is to be made, information has to round-trip with Earth - 6 minutes minimum.

In this case, any less intelligent but faster, local system is easily able to prevent such a singleton's expansion.

IMO, the same thing happens everywhere; the times are just much less than 6 minutes.

To sum it up, the light-speed limit says two things: "smaller is faster" and "local is more capable of reacting".
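
The arithmetic behind the "6 minutes minimum", as a quick sketch (54.6 million km is an approximate figure for the closest Earth-Mars distance):

    # One-way and round-trip light delay to Mars at closest approach.
    SPEED_OF_LIGHT_KM_S = 299_792.458
    MARS_CLOSEST_KM = 54.6e6

    one_way_s = MARS_CLOSEST_KM / SPEED_OF_LIGHT_KM_S
    print(f"one way: {one_way_s / 60:.1f} min, "
          f"round trip: {2 * one_way_s / 60:.1f} min")
    # roughly 3 minutes one way, about 6 minutes round trip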

Comment by luzr on Sustained Strong Recursion · 2008-12-06T08:11:16.000Z · LW · GW

Nice thread.

Seriously, I guess Eliezer really needs this kind of reality-check wakeup before his whole idea of "FOOM", "recursion", etc. turns into complete cargo-cult science.

While I think the basic premise (strong AI friendliness) is a real concern, many of his recent posts sound like he has read too much science fiction and watched the Terminator movies too many times.

There are some very basic issues with the whole recursion and singleton ideas... GenericThinker is right, the 'halting problem' is very relevant there; in fact, it proves that the whole "recursion foom in 48 hours" is completely bogus.

As for the 'singleton', if nothing else (and there is a lot), the speed of light is a limiting factor. Therefore, to meaningfully react to local information, you need an independent, intelligent local agent. No matter what you do, an independent intelligent local agent will always diverge from the singleton's global policy. End of story; forget about singletons. Strong AI will be small and fast, and there will be a lot of units.

So, while the basic premise, the concern about strong AI safety, remains, I think we should consider an alternative scenario: AI grows relatively slowly (but follows the pattern of the current, ongoing foom), and there is no singleton.

Comment by luzr on Hard Takeoff · 2008-12-04T09:30:07.000Z · LW · GW

They do that using dedicated hardware. Try to paint Crysis in realtime 'per pixel' using a vanilla CPU.

Interestingly, today's high-end vanilla CPU (quad-core at 3 GHz) would paint 7-8-year-old games just fine. That means in another 8 years, we will be capable of doing Crysis without a GPU.
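
The implicit arithmetic, as a sketch (assuming single-CPU throughput keeps doubling roughly every two years, which is an assumption, not a guarantee):

    # Eight years of doubling every two years gives roughly a 16x speedup.
    doubling_period_years = 2
    years = 8
    speedup = 2 ** (years / doubling_period_years)
    print(f"~{speedup:.0f}x faster after {years} years")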

Comment by luzr on Hard Takeoff · 2008-12-03T08:59:34.000Z · LW · GW

anon:

"You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same?"

I hope it will. Still, that would give it access only to preexisting knowledge.

It can draw up many hypotheses, but it will have to TEST them (gain empirical knowledge). Think LHC.

BTW, note that there are problems in quantum physics that do not have an analytical solution. Some equations simply cannot be solved. Now of course, perhaps a superintelligence will find out how to do that, but I believe there are quite solid mathematical proofs that it is not possible.

"Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore: by hypothesis, the AI can improve itself better then we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity."

I am afraid that you have missed the part about the algorithm being essential but not being the core of the AI mind. The mind can just as well be data. And it can be unoptimizable, for the same reasons some equations cannot be analytically solved.

"And you don't have to simulate reality perfectly to understand it, so there is no showstopper there."

To understand certain aspects of reality, yes. All I am saying is that understanding certain aspects might not be enough.

What I suggest is that the "mind" might be something like a network of interconnected numerical values. To an outside observer, there will be no order in the connections or values. To truly understand the "mind" even as poorly as by simulation, you would need a much bigger mind, as you would have to simulate and carefully examine each of the nodes.

Crude simulation does not help here, because you do not know which aspects to look for. Anything can be important.

Comment by luzr on Recursive Self-Improvement · 2008-12-03T08:39:54.000Z · LW · GW

"Intelligence tests are timed for a good reason. If you see intelligence as an optimisation process, it is obvious why speed matters - you can do more trials."

Intelligence tests are designed to measure the performance of the human brain.

Try this: a strong AI running on a 2 GHz CPU. You reduce it to 1 GHz, without changing anything else. Will that make it less intelligent? Slower, definitely.

Comment by luzr on Recursive Self-Improvement · 2008-12-02T22:48:42.000Z · LW · GW

" The faster they get the smarter they are - since one component of intelligence is speed."

I think this might be incorrect. Speed means that you can solve the problem faster, not that you can solve a more complex problem.

Comment by luzr on Hard Takeoff · 2008-12-02T21:16:59.000Z · LW · GW

"I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less."

I am glad I can agree for once :)

"The main thing I'll venture into actually expecting from adding "insight" to the mix, is that there'll be a discontinuity at the point where the AI understands how to do AI theory, the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code;"

Anyway, my problem with your speculation about hard takeoff is that you seem to make the same conceptual mistake that you so dislike about Cyc - you seem to think that AI will be mostly "written in the code".

I suspect it is very likely that the true working AI code will be relatively small and already pretty well optimized. The "mind" itself will be created from it by some self-learning process (my favorite scenario involves a weak AI as the initial "tutor") and in fact will mostly consist of a vast number of classification coefficients and connections, or something like that (think Bayesian or neural networks).

While it will probably be within the AI's power to optimize its "primal algorithm", the gains there will be limited (it will be pretty well optimized by humans anyway). Its ability to reorganize its "thinking network" might be severely limited. Same as with humans - we nearly understand how a single neuron works, but are far from understanding the whole network. Also, with any further self-improvement, the complexity grows further, and it is quite reasonable to predict that this complexity will grow faster than the AI's ability to understand it.
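
As a toy Python illustration of the "small code, big opaque mind" point (my own sketch, with made-up sizes): the forward pass is a few lines of code, while the behaviour lives entirely in the weight matrices, which to an outside observer are just numbers.

    # Tiny algorithm, large opaque parameter blob.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(1000, 1000))     # the "mind" is mostly this data...
    W2 = rng.normal(size=(1000, 10))

    def forward(x):                        # ...while the code itself is tiny
        return np.tanh(x @ W1) @ W2

    print(forward(rng.normal(size=1000)).shape)   # (10,)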

I think it all boils down to a very simple showstopper - assuming you are building a perfect simulation, how many atoms do you need to simulate one atom? (BTW, this is also a showstopper for the "nested virtual reality" idea.)

Note, however, that this whole argument is not really mutually exclusive with a hard takeoff. The AI can still build a next-generation AI that is better. But the "self" part might not work. (BTW, the interesting part is that the "parent" AI might then face the same dilemma about its descendant's friendliness ;)

I also think that in all your "foom" posts, you underestimate the empirical form of knowledge. It sounds like you expect the AI to just sit down in the cellar and think, without many inputs or actions, then invent the theory of everything and take over the world.

That is not going to happen, if only for the same reason the endless chain of nested VRs is unlikely.

Comment by luzr on Thanksgiving Prayer · 2008-11-29T10:57:49.000Z · LW · GW

Abigail:

"Religion has great benefits for a society and for individuals, even if there is no creator God."

And these are?

Well, where I live, about 70% of people are atheists and 30% are believers. I have a couple of believers as friends. From my experience, no, they are not better people than we are.

Grant:

"I'm not sure if this has been discussed here before, but how isn't atheism as religion? It has to be accepted on faith, because we can't prove there isn't a magical space god that created everything. I happen to have more faith in atheism than Christianity, but ultimately its still faith."

I also believe there isn't a 1000km wide pink doughnut orbiting Alpha Centauri. Is that a religion too?

BTW, "magical space god that created everything". How was HE created?

All that said, the universe, its beginning, the human mind, consciousness, and self-awareness are all very fascinating mysteries, and "God" is a very tempting and easy answer. But it is pretty likely wrong.

Comment by luzr on Thanksgiving Prayer · 2008-11-28T12:47:26.000Z · LW · GW

Abigail:

"Religion might be more understandable to atheists"

What makes you think religion is not understandable to atheists?

As for your explanation, yeah, nice, but do you really think this is what most Christians really believe? (You can try it - find some and ask whether they think that God equals Society.)

Also, what is the use of such an abstraction?

Comment by luzr on Thanksgiving Prayer · 2008-11-28T12:41:28.000Z · LW · GW

"I've seen attempted atheists lapse back into their childhood religions, and it's a horrible thing to see, a train wreck that you should be able to prevent but can't."

But what if my childhood "religion" was atheism?

Comment by luzr on Engelbart: Insufficiently Recursive · 2008-11-26T18:03:28.000Z · LW · GW

Johnicholas:

"The "foom" is now."

I like that. Maybe we can get some T-shirts? :)

"There are not many guarantees that we can make about the behavior of society as a whole. Does society act like it values human life? Does society act like it values human comfort?"

Good point. Anyway, it is questionable whether we can apply any of Eliezer's friendliness guidelines to the whole society instead of to a single strong general AI entity.

Comment by luzr on Engelbart: Insufficiently Recursive · 2008-11-26T13:25:53.000Z · LW · GW

Tim:

Thanks for the link. I have added the website to my AI portfolio.

BTW, I guess you got it right :) I have come to similar conclusions, including the observation about exponential functions :)

Johnicholas:

I guess that in Tim's scenario, "friendliness" is not nearly as important a subject. Without a "foom", there is plenty of time for debugging...

Comment by luzr on Engelbart: Insufficiently Recursive · 2008-11-26T10:10:01.000Z · LW · GW

I think you should try to consider one possible thing:

In your story, Engelbart failed to produce the UberTool.

Anyway, looking around and seeing the progress since 1970, I would say he PRETTY MUCH DID. He was not alone, and we should rather speak of the technology that succeeded - but what else is all the existing computing infrastructure, the internet, Google, etc. than the ultimate UberTool, augmenting human cognitive abilities?

Do you think we could keep Moore's law going without all this? Good luck placing the two billion transistors of a next-generation high-end CPU on silicon without using a current high-end CPU.

Hell, this blog would not even exist, and you would not have any thoughts about the friendliness of AI. You certainly would not be able to produce a post each day and get comments from people all around the world within several hours - and many of those people came here using the Google ubertool because they share an interest in AI, had never heard about you before, and came back only because all of this is way interesting :)

Actually, maybe the lesson to be learned is that we sort of expect the singularity moment to be a single AI "going critical" - and everything changes after that point. But in fact, maybe we are already "critical" now; we just do not see the forest for the trees.

Now, when I say "we", I mean the whole human civilisation as a "singleton". Indeed, if you consider "us" as a single mind, this "I" (as in intelligence), composed of human minds interconnected by the internet, is exploding right now...

Comment by luzr on Building Something Smarter · 2008-11-03T14:29:31.000Z · LW · GW

@Silas

Quick reply: I think we will not know exactly, but that does not mean we cannot do it.

In more detail: I believe that the process will use some sort of connectionist approach, maybe mixed with some genetic-algorithm voodoo. The key is reaching a high-complexity system that does something and starts to self-evolve (which in fact is nearly the same thing as learning).

Anyway, I believe that when all is done, we will have the same problem understanding strong-AI mental processes as we have with the human brain's.

Comment by luzr on Building Something Smarter · 2008-11-03T13:07:17.000Z · LW · GW

Interesting... Reading this entry somewhat reawakened my interest in computer chess... And I think I have stumbled upon a nice example of the "intelligence as book smarts" misconception.

http://en.wikipedia.org/wiki/Chess960

"Fischer believed that eliminating memorized book moves would level the playing field; as an accidental consequence, it makes computer chess programs much weaker, as they rely heavily on the opening book to beat humans."

A classic example of "bias". The reality:

"In 2005, the chess program Shredder, developed by Stefan Meyer-Kahlen from Düsseldorf, Germany, played two games against Zoltán Almási from Hungary; Shredder won 2-0."

That was only the second public Chess960 match between a computer and a human.

It is also worth noting that the wiki article gives computer competitions almost the same relevance as human ones.

Now, of course, the remaining question is: "does the machine understand Chinese if it can speak it better than a native Chinese speaker?" :)