Comments

Comment by Yasuo on Mostly silly alternatives to the word 'rationalist' · 2011-06-22T06:05:05.108Z · LW · GW

twat

Comment by Yasuo on Why No Wireheading? · 2011-06-22T03:24:00.436Z · LW · GW

So, when people pick chocolate, it illustrates that that's what they truly desire, and when they pick vanilla, it just means that they're confused and really they like chocolate but they don't know it.

Comment by Yasuo on Why No Wireheading? · 2011-06-19T17:35:06.967Z · LW · GW

Acting based on the feelings one will experience is something that already happens, so optimizing for it is sensible

I can't really pick apart your logic here, because there isn't any. This is like saying "buying cheese is something that already happens, so optimizing for it is sensible."

Comment by Yasuo on What are the best news sources to read for *insightful* discussions? · 2011-06-19T08:55:24.577Z · LW · GW

I like Marginal Revolution, if only because the comments section will usually yell at them when they post something stupid.

Comment by Yasuo on Why No Wireheading? · 2011-06-19T00:56:12.629Z · LW · GW

Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.

But aren't you just setting up a system that values states of the world based on the feelings they contain? How does that make any more sense?

You're arguing as though neurological reward maximization is the obvious goal to fall back to if other goals aren't specified coherently. But people have filled in that blank with all sorts of things. "Nothing matters, so let's do X" goes in all sorts of zany directions.

Comment by Yasuo on 1-2pm is for ??? · 2011-06-16T05:22:04.852Z · LW · GW

2:30 is a good time to go to the dentist.

Comment by Yasuo on Resetting Gandhi-Einstein · 2011-06-14T06:14:58.356Z · LW · GW

I would. I'd want to do some shorter test runs first though, to get used to the idea, and I'd want to be sure I was in a good mood for the main reset point.

It would probably be good to find a candidate who was enlightened in the Buddhist sense, not only because they'd be generally calmer and more stable, but specifically because enlightenment involves confronting the incoherent naïve concept of self and understanding the nature of impermanence. From the enlightened perspective, the peculiar topology of the resetting subjective experience would not be a source of anxiety.

Comment by Yasuo on To Whom I May Have Concerned: A standard explanation of my disagreement · 2011-06-12T06:06:11.614Z · LW · GW

Dear Buck:

All I said was "looks like rain".

Comment by Yasuo on Q&A with Stan Franklin on risks from AI · 2011-06-11T18:56:22.080Z · LW · GW

Q: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Stan Franklin: Proofs occur only in mathematics.

This seems like a good point, and something that's been kind of bugging me for a while. It seems like "proving" an AI design will be friendly is like proving a system of government won't lead to the economy going bad. I don't understand how it's supposed to be possible.

I can understand how you can prove a hello world program will print "hello world", but friendly AI designs are built around heavy interaction WITH the messy outside world: not just saying hello to it, but learning all but their most primitive values from it.
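To make that contrast concrete, here's a toy sketch (the function names and the "environment" dict are purely illustrative, not anyone's actual proposal): the first program's output is a constant you can check once and for all, while the second's behaviour depends entirely on whatever the outside world hands it.

```python
# Toy illustration only: names and structure are hypothetical.

def hello() -> str:
    # Fully determined: no facts about the outside world are needed
    # to verify the claim hello() == "hello world".
    return "hello world"

def learned_value(environment: dict) -> str:
    # Any guarantee about this function has to be stated relative to the
    # (unknown, messy) contents of `environment`.
    return environment.get("terminal_value", "unspecified")

assert hello() == "hello world"       # checkable in isolation
# assert learned_value(world) == ???  # only meaningful given a model of `world`
```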

How can we be developing 99% of our utility function by stealing it from the outside world, where we can't even "prove" that the shop won't be out of shampoo, and yet simultaneously have a "proof" that this will all work out? Even if we're not proving "friendliness" per se, but just that the AI has "consistent goals under self-modification", consistent with WHAT? If you're not programming in an opinion about abortion and gun control to start with, how can any value it eventually arrives at on those questions be "consistent" OR "inconsistent"?

Comment by Yasuo on London Meetup 05-Jun-2011 - very rough minutes · 2011-06-09T16:16:19.316Z · LW · GW

Can you give some examples of the problem?

Comment by Yasuo on Brief question about Conway's Game of Life and AI · 2011-06-02T13:41:53.399Z · LW · GW

There are no known structures in Conway's Game of Life that are robust. Even eaters, which are used to soak up excess gliders, only work when struck from specific directions.
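For anyone who wants to poke at this themselves, here's a rough sketch of the update rule (assuming numpy; the glider and eater-1 coordinates are the standard patterns, but the placements below are just illustrative, not a worked-out collision):

```python
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One generation of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbours of every cell by summing shifted copies.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)

# Standard patterns as (row, col) offsets of live cells.
GLIDER = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
EATER1 = [(0, 0), (0, 1), (1, 0), (1, 2), (2, 2), (3, 2), (3, 3)]

def place(grid: np.ndarray, cells, top: int, left: int) -> None:
    for r, c in cells:
        grid[top + r, left + c] = 1

grid = np.zeros((40, 40), dtype=np.uint8)
place(grid, GLIDER, 5, 5)    # illustrative placement, not a tuned collision
place(grid, EATER1, 20, 20)
for _ in range(60):
    grid = step(grid)
print("live cells after 60 steps:", int(grid.sum()))
```

Shifting the glider's starting square by a cell or two changes its approach lane, which is an easy way to see the "only works when struck from specific directions" point for yourself.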

If you had a Life board that was extremely sparsely populated, it's possible that a clever agent could send out salvos of gliders and other spaceships in all directions, in configurations that would stop incoming projectiles, inform it about the location of debris, and gradually remove that debris so that it would be safe to expand.

At a 50% density, the agent would need to start with a fairly large safe space around it; otherwise it would be overwhelmed. I can't imagine even the best sensing/manipulating technology in Life is capable of picking its way through even mostly static garbage at any more than a glacial pace.

Basically, you'd have to send out a probe, wait for the echo (or the lack of one), and from that recalculate the probabilities of all the different configurations of still lifes, oscillators, spaceships, puffers, and so on that the probe could have hit, and how those configurations would have been altered, destroyed, or (in most cases) expanded by the collision with your probe. Then you'd work out another probe to send, and repeat the process, until eventually you had a good enough estimate of what you were dealing with to send probes calculated to get rid of it, along with all the additional garbage you generated in the process of probing it.
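Schematically, the loop looks something like this (every name here is a placeholder passed in by the caller; the actual inference hiding inside each step is the enormously hard part):

```python
# Purely schematic sketch of the probe-and-update loop described above.
# It only pins down the control flow, not the inference each step hides.

def map_surroundings(belief, choose_probe, send_probe, observe_echo,
                     update, confident_enough):
    """Repeatedly probe the unknown region and re-weight hypotheses until the
    belief state is good enough to plan a clean-up salvo."""
    while not confident_enough(belief):
        probe = choose_probe(belief)          # pick a salvo of gliders/spaceships
        send_probe(probe)
        echo = observe_echo(probe)            # returning debris, or silence
        belief = update(belief, probe, echo)  # re-weight configurations consistent
                                              # with how the probe would have collided
    return belief
```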

It is unknown whether robust structures can exist in Life at all, even incredibly intelligent, incredibly large, and incredibly slow ones, but I would speculate that they can.

However, it's also possible that there are far simpler robust expanding patterns, in which case larger, slower structures such as intelligent agents would be hopelessly overwhelmed.