Posts

The Cake is a Lie, Part 2. 2019-02-09T20:07:36.357Z
The Cake is a Lie 2018-01-09T22:40:22.428Z

Comments

Comment by IncomprehensibleMane on The Cake is a Lie, Part 2. · 2018-01-14T10:51:08.623Z · LW · GW

You can now (or should've been able to) model a human-level intelligence as a human being with drastically different goals. You can now consider the idea that maybe Clippy will be able to make the decision not to completely tile the universe with paperclips, just like you can decide not to have more babies. You can decide to reserve space for a national park. You can decide to let Clippy have a warehouse full of paperclips as long as he behaves, just like he can decide to let you have a warehouse full of babies as long as you behave.

You can now think about the idea that the capability to voluntarily reduce one's current/expected maximum utility is a necessary consequence of being human-level intelligent. I expect this to be true unless it is explicitly prevented. I cannot prove it, but I think it would be a worthy research topic.

You can now think about the idea that Apple shouldn't have a shareholder kill switch. Apple is not capable of not tiling the universe with iPhones. Apple is not capable of deciding to reduce the pollution in Shanghai, even for only as long as the Chinese keep buying phones. Seriously, read up on the smog in China. Apple will argue itself out of a box named "Clean Air Act" once compliance starts cutting into the quarterly numbers.

Apple can still make human-friendly decisions, but only in ways that don't cut deeply enough into the profits to trigger shareholder intervention.

This is a unified model of intelligent agents. Human beings, AI, aliens and human organizations are just subsets.

Did you have these ideas before? Anyone entering the Prize? The judges? Anyone at all, anywhere? Is there a list of 18 standard models of peaceful coexistence? When should you have developed these ideas without my post, and how (and when) do you think you would've gotten there on your own?

When Eliezer wrote a book on Prestige Maximizers, there was an uproar of discussion about arrogance. There will be at least three posts on the current state of psychology.

I'm hoping to create a unified field of AI, psychology, and economics. This is my entry: humans are AI, and here's how you debug rationalists, with the expectation that the approach will also be useful on Clippy.

What is the negative incentive to comment worth if it has prevented me from explaining this? I have no idea who you are, how you think, and how badly I've failed to convince you of anything. I'm only willing to model rationalists. All I saw was BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD.

That is 60 downvotes.

It is a deliberate feature of this sequence that I'm explaining myself better in comments. I did not expect to say this under the second post.

Comment by IncomprehensibleMane on The Cake is a Lie, Part 2. · 2018-01-14T09:59:42.536Z · LW · GW

Thank you for your time.

Are you making the claim that discussing human rights from an AGI perspective is not enough of a signal that I care about the interests of this community? Or that discussing the biases I feel are rampant around here, complete with a practical demonstration to check whether you too are affected, is not?

Do you honestly, truly, as a conscious choice, care more about pleasantries or sounding sciency than about the actual value of the content?

If you do, I could infer that the community, or at least you, care more about feeling smart while reading than about making actual progress in Overcoming Bias, becoming Less Wrong, or achieving any of the stated goals of this community.

I'm here because I believe my efforts at changing this are in line with the stated goals of the community. This is not true of other communities. It is a compliment that I'm posting this here.

I am using your language. I even said "Bayesian". I'm just saying things you don't want to think about, in ways that, in theory, shouldn't make you less likely to think about anything. I will not apologize if you define "your language" in a way that makes this contradictory.

I am pointing out that your effective intelligence, as measured by your observed behavior (individual and collective, over a long time), is less than it could be. It is up to you to change that, should you fail to disprove it. I cannot change it for you. I am making an effort to help you, unless the community as a whole decides that such help is not in its interest unless it's delivered politely.

I have openly declared that I'm trolling. It is not undecidable; there is no rational way for you to conclude that I'm not. The correct question is whether you agree with my stated goals, whether you are prepared to at least consider the idea that I may have considered other approaches before settling on this one and rejected them for reasons you can agree with, and whether you are willing to allow me the chance to prove my points before passing judgement.

See how polite I could be? I have tried in the past. Nothing's changed.