A tentative solution to a certain mythological beast of a problem

2019-02-06T20:42:45.622Z · score: -32 (9 votes)
Comment by edward-knox on A tentative solution to a certain mythological beast of a problem · 2018-05-10T04:44:09.006Z · score: -7 (2 votes) · LW · GW

Brutal and facile are not the same thing. I was hoping for a categorical, complete, and total annihilation of my arguments; that's what I take brutal to mean.

Regarding the blackmail: blackmail only works to the extent that you take a threat to be credible, and I don't believe the threat to be credible. An AI would recognize the sincerity of this belief and reason that blackmailing me would be purposeless. For example, I question whether there is, or could ever be, enough information to perfectly simulate another being, with all its thoughts, emotions, and experiences, such that no simulation could be accurate enough to itself be an extension of me.

When it comes to learning there are two ways of going about it: starting in the shallows and familiarizing yourself with swimming, or jumping into the deep end and forcing yourself to learn. Both are effective. So I think this post makes for an excellent first one.

The topic doesn't make rationalists and AI look ridiculous, the responses do.

Comment by edward-knox on A tentative solution to a certain mythological beast of a problem · 2018-05-10T04:34:49.585Z · score: 1 (1 votes) · LW · GW

You don't have to kill anyone; you merely have to imply that they will be killed, such that the probability of future utility being equal to or higher than past/present utility is lower than the probability of it being lower than past/present utility. 20-30 years is a lot of people: manipulate events such that, in the infinite years that follow, there is never a higher probability of there being more people who exist and are aware than existed and were aware in those 20-30 years.

An interesting point I'd add is that this probability doesn't need to be true; you merely have to believe it to be true. You can only be blackmailed if threats are credibly believed. If you honestly believe the probability discussed above is in your favour, and more people know and don't contribute than would ever exist, know, and contribute, then there is no benefit in blackmail, as you truthfully believe yourself safe from it. Further, you can protect yourself by having one person deceive all others about the truth of the probability, such that they honestly believe it to be in their favour. The probability is false in this case, but one man sacrifices himself to protect the many, which is very utilitarian. (An act of utilitarian goodness I'm sure an AI could never reason deserves punishment, as it allows for both the creation of the AI and the protection of people from punishment, resulting in a higher overall utility than would occur from creation with punishment.)

As for acausal trade, I can again only conceive of it working to the extent that one believes in it. ("I do believe in fairies": if you don't like fairies, stop believing in them and they disappear. How can an AI or God reasonably punish you if you honestly didn't believe in it? Does anyone truly condemn the men who reject the man who has seen the sun after escaping the cave? No, we condemn those who know the truth but try to suppress it.) The less seriously you take it, the lower the probability of it working, and I'm fairly convinced there is a lot of reason not to take it seriously. The best reason, I think, is pure in-the-moment selfishness, an attitude that comes very easily even to the most educated of people. So in regards to the acausal trade issue, I think we are in agreement that it is amusingly unlikely at best.

Comment by edward-knox on A tentative solution to a certain mythological beast of a problem · 2018-05-09T06:31:53.946Z · score: 1 (1 votes) · LW · GW

Regarding the first point: you merely have to ensure that the population that knows but doesn't contribute is larger than the combined past populations that have contributed and the expected future populations that will. An improbable thing to do, but still a solution.

Regarding the second point: if the populations requiring punishment are greater than those that would benefit, surely such an AI could never reason in a utilitarian manner that it was better to punish the many for the few. Unless, as a result of the AI's actions, an individual in the future is consistently able to experience a higher utility than anyone in the past: so high, in fact, that it outweighs the collective utility of other people, i.e. one person's utility could be greater than two persons' collectively. There is no theoretical limit, in that sense, to the extent that one person's individual utility could outweigh a collective utility given the right circumstances. The AI could act such that the utility of one person was greater than that of all past and future persons, and as such it was worth sacrificing all past and future persons simply because one person is capable of experiencing greater utility than everyone combined. But I struggle to see how individual human experiences could ever be so vastly different, regardless of AI interventions. Sure, one person who loves ice cream may experience more utility from an ice cream than two people who hate ice cream would collectively, but could the utility of one person, or two, or 50, or 50,000, or 50 million, ever outweigh all past persons' utility?

I suppose I don't know because I'm not a super AI. :p

Beyond that, I'd have to be convinced further that a true, undying AI truly is the capstone achievement of humanity. I'm sure there is plenty of reasoning for that on these forums, though I'm still dubious. A capstone is an achievement that cannot be surpassed, and I'm sure that, at a minimum, an AI could point out to us that we're not done yet, assuming we don't realize it ourselves.

Thank you for the reply though! Excellent points for me to ponder further.