Posts

Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? 2017-05-13T08:23:21.415Z
Does Kolmogorov complexity imply a bound on self-improving AI? 2016-02-14T08:38:26.832Z
Is my brain a utility minimizer? Or, the mechanics of labeling things as "work" vs. "fun" 2015-08-28T01:12:41.091Z

Comments

Comment by contravariant on Physics has laws, the Universe might not · 2018-06-16T12:31:59.351Z · LW · GW
"exist"(whatever that means)

What are you implying here? It's clear that *we*, or at least *you*, exist, in the sense that the computation of our minds is being performed and inputs are being given to it. We can also say (with slightly less certainty) that observable external physical objects such as atoms exist, because the evolution of their states from one Planck instant to the next is being performed, even when we're not observing it: if the easiest way to get from observation t1 to observation t2 is by computing all the intermediate states between t1 and t2, it's likely that the external object exists on the entire interval [t1..t2]. This is my conception of an object's existence: that the computation of the object's state is being done. What is yours?
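A minimal sketch of this criterion, assuming a hypothetical `step()` function as a stand-in for the object's actual dynamics (all names here are illustrative, not from any real library): if the cheapest prediction of the observation at t2 from the observation at t1 runs through every intermediate state, the object counts as existing on the whole interval.

```python
# A minimal sketch of the "existence as computation" criterion above.
# step() is a hypothetical stand-in for the object's actual dynamics.

def step(state):
    """Advance the object's state by one instant (toy dynamics)."""
    return state + 1

def predict(state_t1, t1, t2):
    """Predict the observation at t2 from the observation at t1.

    If the cheapest way to make this prediction is to compute every
    intermediate state, then on the view above the object "exists"
    on the entire interval [t1..t2], observed or not.
    """
    state = state_t1
    for _ in range(t2 - t1):
        state = step(state)  # each intermediate state gets computed
    return state

print(predict(0, 0, 5))  # 5; the states at t=1..4 were computed along the way
```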

Comment by contravariant on Hidden universal expansion: stopping runaways · 2017-05-16T16:55:23.018Z · LW · GW

Why would they want to stop us from fleeing? It doesn't reduce their expansion rate, and we already established that we don't pose any serious threat to them. By fleeing, we would essentially be handing them a perfectly good planet and star, undamaged by war. (If they attacked instead, we would probably have enough time to launch at least some nuclear missiles, which likely wouldn't harm them much but would wreck the ecosystem and make the planet ill-suited for colonization by biological life.) Unless they're just sadistic and value the destruction of life as a final goal, I see no reason for them to care. Any planets and star systems colonized by the escaping humans could be taken just as easily as Earth, with only a minor delay.

Comment by contravariant on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-13T22:42:35.486Z · LW · GW

Evolution also had one chance, in the sense that the first intelligent species created would take over the world and reshape it very quickly, leaving no time for evolution to try any other mind design. I'm pretty sure no other intelligent species will evolve by pure natural selection after humanity, unless it's part of an experiment run by humans. Evolution had a lot of chances to create a functional intelligence, but on the friendliness problem it had only one: a faulty intelligence will die out soon enough, giving evolution time to design a better one, but a working paperclip maximizer is quite capable of surviving, reproducing, and eliminating any other attempts at intelligence.

Comment by contravariant on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-04-24T18:17:45.879Z · LW · GW

Evolution is smarter than you.

Could you qualify that statement? If I were given a full-time job to find the best way to increase some bacterium's fitness, I'm sure I could study the necessary microbiology and find at least some improvement well before evolution could. Yes, evolution created things that we don't yet understand, but then again, she had a planet's worth of processing power and seven orders of magnitude more time to do it, and yet we can still see many obvious errors. Evolution has much more processing power than me, sure, but I wouldn't say she is smarter than me. There's nothing evolution created over all its history that humans weren't able to overpower in an eyeblink of time. Lack of foresight and the inability to reuse knowledge or exchange it among species mean that most of this processing power is squandered.

Comment by contravariant on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-04-24T10:50:53.558Z · LW · GW

And if something as stupid as evolution (almost) solved the alignment problem, that suggests it should be much easier for humans.

Comment by contravariant on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-04-24T10:48:55.770Z · LW · GW

Replies to some points in your comment:

One could say AI is efficient cross-domain optimization, or "something that, given a mental representation of an arbitrary goal in the universe, can accomplish it on the same timescale as humans or faster", but personally I think the "A" is not really necessary here, and we all know what intelligence is. It's the trait that evolved in Homo sapiens and let them take over the planet in an evolutionary eyeblink. We can't precisely define it, and the definitions I offered are only grasping at things that might be important.

If you think of intelligence as a trait of a process, you can imagine how many different possible things with utterly alien goals might acquire intelligence, and what they might use it for. Even the ones that would be a tiny bit interesting to us are just a small minority.

You may not care about satisfying human values, but I want my preferences to be satisfied, and I hold the meta-value that we should make our best effort to satisfy the preferences of any sapient being. If we just take the easiest-to-find thing that displays intelligence, the odds of that happening are next to none. It would eat us alive to build a world of something that makes paperclips look beautiful in comparison.

And the prospect of an AI designed by the "Memetic Supercivilization" frankly terrifies me. A few minutes after an AI developer submits the last bugfix on GitHub, a script kiddie thinks "Hey, let's put a minus in front of the utility function right here and have it TORTURE PEOPLE LULZ", and thus the world ends. I think that is something best left to a small group of people. Trusting that the emergent structure of society, which has undergone little Darwinian selection and has a spectacular history of failures over a pretty short timescale, would, when handed such a dangerous technology, produce something good even for itself, let alone for humans, seems unreasonable.

Comment by contravariant on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-04-24T10:18:23.125Z · LW · GW

But how can you use complex language to express your long-term goals, then, like you're doing now? Do you get/trick S2 into doing it for you?

I mean, S2 can be used by S1; the clearest example would be someone addicted to heroin using S2 to invent reasons to take another dose. But it must be hard to do anything more long-term that way; you'd be giving up too much control.

Or is the concept of long-term goals itself also part of the alien thing you have to use as a tool? Your S2 must really be a good FAI :D

Comment by contravariant on Instrumental Rationality is a Chimera · 2017-04-24T09:50:23.774Z · LW · GW

That's a subjective value judgement from your point of view.

If you intend it to be more than that, you would have to explain why others shouldn't see it as off-putting.

Otherwise, I don't see how it contributes to the discussion beyond "there's at least one person out there who thinks masculinity isn't off-putting", which we already know; there are billions of examples.

Comment by contravariant on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-31T19:13:18.225Z · LW · GW

It seems to me that it's extremely hard to think about sociology, especially policies and social justice, without falling into this trap. When you consider a statistic about a group of people, "is this statistic accurate?" gets put in the same bucket as "does this mean discriminating against this group is justified?" or even "are these people worth less?" almost instinctively, especially if you are a part of that group yourself. Now that you've explained it this way, understanding that this is what is going on seems like a good strategy to avoid being mindkilled by such discussions.

Though, in this case, it can still be a valid concern that others may be affected by this fallacy if you publish or spread the original statistic, so if it could pose a threat to a large number of people, it may still be more ethical to avoid publicizing it. However, that is an ethical issue, not an epistemic one.

Comment by contravariant on Suffering · 2015-08-08T04:21:55.996Z · LW · GW

People who require help can be divided into those who are capable of helping themselves and those who are not. A position like yours expresses the value preference that, in all cases, sacrificing the good of the latter group is better than letting the first group get unearned rewards. For me it's not that simple: the choice depends on the proportion of the groups, the cost to me and society, and just how much good is being sacrificed. To take an extreme example, I would save someone's life even if this encourages other people to be less careful protecting theirs.

Comment by contravariant on Doublethink (Choosing to be Biased) · 2015-05-29T03:46:39.261Z · LW · GW

While you can't fool your logical brain, if you want to hold a false belief to make yourself happy, you don't need to fool it anyway. The brain is compartmentalized and often doesn't update what you feel is intuitively true, or what you base your actions on, just because you learned a fact. The sentence "You can't know the consequences of being biased, until you have already debiased yourself" strikes me as the hardest to believe. Reading about a bias and considering its consequences, especially in an academic mindframe, does NOT debias you. That requires applying it to your life and reasoning, recognizing when you are biased, sometimes even training and conditioning yourself to change how you think. If, after learning about a bias, I rationally decided that I wanted to keep it, I would just shelve it in my memory as academic trivia irrelevant to daily life, and I would stay just as biased as before in what I do and how I feel.

Comment by contravariant on Mundane Magic · 2014-01-03T07:28:04.957Z · LW · GW

The Curse of Downregulation: Sufferers of this can never live "happily ever after", for anything that gives them joy, done often enough, becomes mundane and boring. Someone who is afflicted could have the great luck to earn a million a day, and after a year they will be no happier than they would be in poverty, filled with despair and envy at the neighbor who makes two million.