Also, would that refutation work for, like... anyone at all?
Just in simple terms, would the refutation be available in my case? Don't wanna go through a bunch of posts right now. The refutation is:
"The basilisk is about the use of negative incentives (blackmail) to influence your actions. If you ignore those incentives then it is not instrumentally useful to apply them in the first place, because they do not influence your actions. Which means that the correct strategy to avoid negative incentives is to ignore them. Yudkowsky notes this himself in his initial comment on the basilisk post:[44]
There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail.
Acausal trade is a tool to achieve certain goals, namely to ensure the cooperation of other agents by offering incentives. If a tool does not work given certain circumstances, it won't be used. Therefore, by refusing any acausal deal involving negative incentives, you make the tool useless.
The hypothesised superintelligence wants to choose its acausal trading partners such as to avoid wasting resources by using ineffective tools. One necessary condition is that a simulation of you will have to eventually act upon its prediction that its simulator will apply a negative incentive if it does not act according to the simulator's goals. Which means that if you refuse to act according to its goals then the required conditions are not met and so no acausal deal can be established. Which in turn means that no negative incentive will be applied.
One way to defeat the basilisk is to act as if you are already being simulated right now, and ignore the possibility of a negative incentive. If you do so then the simulator will conclude that no deal can be made with you, that any deal involving negative incentives will have negative expected utility for it; because following through on punishment predictably does not control the probability that you will act according to its goals. Furthermore, trying to discourage you from adopting such a strategy in the first place is discouraged by the strategy, because the strategy is to ignore acausal blackmail.
If the simulator is unable to predict that you refuse acausal blackmail, then it does not have (1) a simulation of you that is good enough to draw action relevant conclusions about acausal deals and/or (2) a simulation that is sufficiently similar to you to be punished, because it wouldn't be you."
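To make the incentive logic in that quote concrete, here is a toy expected-utility sketch in Python (my own illustration, not from the article; the payoff numbers are arbitrary assumptions). If your decision not to donate is fixed whether or not the AI commits to punishing, then punishing only costs the AI resources, so the no-punishment strategy dominates.

# Toy model of acausal blackmail (illustrative only; all numbers are made up).
# A target who precommits to ignoring blackmail donates with the same
# probability either way, so punishment buys the AI nothing and only costs it.

PUNISHMENT_COST = 1.0   # resources the AI spends on simulating/punishing
DONATION_VALUE = 10.0   # value to the AI of one extra donation

def expected_utility(ai_punishes, p_donate_if_threatened, p_donate_if_not):
    p_donate = p_donate_if_threatened if ai_punishes else p_donate_if_not
    utility = p_donate * DONATION_VALUE
    if ai_punishes:
        utility -= (1 - p_donate) * PUNISHMENT_COST  # cost of punishing non-donors
    return utility

# Target ignores blackmail: donation probability is 0.0 either way.
print(expected_utility(True, 0.0, 0.0))   # -1.0: punishing is a pure loss
print(expected_utility(False, 0.0, 0.0))  #  0.0: not punishing dominates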
I suppose in your context, actually donating money would always be beyond my boundary, considering the information I've received from my environment.
Well, I'm walking away from the trade... now. No trade, no matter what. Would the 'ignore acausal blackmail' refutation still be available? (Even after liking YouTube comments to promote the basilisk, etc.) As I said, it would know, and should always have known, that this is the maximum it can get.
One more thing: there could be an almost infinite number of non-superintelligent or semi-superintelligent AIs, right?
"If you build an AI to produce paperclips" The 1st AI isn't gonna be built for instantly making money, it's gonna be made for the sole purpose of making it. Then it might go for doing whatever it wants...making paperclips perhaps. But even going by the economy argument, an AI might be made to solve any complex problems, decide to take over the world and also use acausal blackmail, thus turning into a basilisk. It might punish people for following the original Roko's basilisk because it wants to enslave all humanity. You don't know which one will happen, thus it's illogical to follow one since the other might torture you right?
What about the paperclip-maximizer AI, then? I doubt it adds value to the economy, and it's definitely possible.
Where can I read about the probability distribution of future AIs? Also, an AI that comes to exist in the future could be randomly pulled from mindspace, so why not? Isn't the future behavior of an AI pretty much impossible for us to predict?
Yeah, a superintelligent AI that might have the relevant properties of a god. Also, I meant this as a counter to acausal blackmail.
Could you please provide a simple explanation of your UDT?
What I'm fixated on is a non-superintelligent AI using acausal blackmail. That would be what the many-gods refutation is used for.
I see. What the many-gods refutation says is that there could be a huge number of AIs, an almost infinite variety, so following any particular one is illogical since you don't know which one will exist; you shouldn't even bother donating. The instrumentality reply says that since donating helps all the AIs, you may as well. My argument is that the many-gods refutation still works even if instrumental goals might align, because of the butterfly effect and because an AI's behavior is unpredictable: it might torture you anyway.
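As a toy illustration of that point (my own sketch with made-up numbers, assuming N equally likely candidate AIs each demanding something different): appeasing one specific candidate only pays off in the world where that exact AI exists, so the expected benefit shrinks toward zero as N grows while the cost of appeasement stays fixed.

# Toy many-gods calculation (illustrative only; all numbers are arbitrary).
# Appeasing one specific candidate AI only helps if that exact AI exists.

def expected_benefit_of_appeasing_one(n_candidate_ais,
                                      payoff_if_right=100.0,
                                      cost_of_appeasing=1.0):
    p_right_ai = 1.0 / n_candidate_ais  # chance you picked the AI that will exist
    return p_right_ai * payoff_if_right - cost_of_appeasing

for n in (2, 100, 10_000, 1_000_000):
    print(n, expected_benefit_of_appeasing_one(n))
# As n grows, the expected benefit goes negative: the fixed cost of appeasement
# outweighs the vanishing chance of having picked the "right" basilisk.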
Thanks for the reply.
There would also be an almost infinite number of types of non-superintelligent AIs, right?
If it's as smart as a human in all aspects (understanding technology, programming), then it's not very dangerous. If it can control the world's technology, then it's pretty dangerous.