Some doubts about non-superintelligent AIs

post by aditya malik (aditya-malik) · 2022-08-03T19:55:45.454Z · LW · GW · No comments

This is a question post.


Hello. I'm new to LessWrong and would appreciate some help. I've been trying to understand the basilisk, since the more you understand it, the less worried you are. Acausal trade requires a clear understanding of the other agent, which rules out trading with a superintelligent AI, so I've been trying to find an answer for non-superintelligent AIs with which you 'might' be able to acausally trade. I've arrived at two refutations:

  1. There are simply too many potential non-superintelligent AIs to care about any particular one. Also, following one could just result in another being pissed at you. (The Many Gods Refutation.)
  2. If I can imagine the AI's decision-making, then it's not smart enough to create a utopia or to make simulations of me to torture anyway. In that case it won't even attempt acausal trade.

Are my refutations valid? Any replies would be greatly appreciated.

Edit: Is there a particular reason for the downvotes? I really do need help.

Edit 2: Spelling.

Answers

answer by Raemon · 2022-08-03T23:14:27.551Z · LW(p) · GW(p)

Both of those seem reasonable to me. Putting #2 another way: you can also just acausally trade with other humans, and this usually isn't that huge a deal because most humans aren't that powerful.

comment by aditya malik (aditya-malik) · 2022-08-08T17:41:29.043Z · LW(p) · GW(p)

Thanks for the reply.

answer by Charlie Steiner · 2022-08-05T23:38:39.153Z · LW(p) · GW(p)

I mean, people could construct an AI that will acausally trade with you in a human-understandable way. I don't think this is completely wild, but I'd agree that as the probabilities get smaller, trade becomes less and less profitable/likely - you don't just have to find it, it has to find you. This is kind of like a quadratic penalty term.
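
A minimal sketch of that "quadratic penalty", assuming each party independently locates the other's decision procedure with some small probability (the probabilities and their independence are illustrative assumptions, not claims from the answer):

```python
# Toy model: an acausal trade only goes through if you find the other agent
# AND it finds you. With independent "finding" probabilities, the chance of
# trade scales like their product, i.e. roughly p**2 when both are around p.
def trade_probability(p_you_find_it: float, p_it_finds_you: float) -> float:
    return p_you_find_it * p_it_finds_you

for p in (0.5, 0.1, 0.01):
    print(f"p = {p}: trade probability ~ {trade_probability(p, p):.6f}")
```

So as each side's chance of finding the other shrinks, the chance of the trade actually happening shrinks much faster.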

Overall I think the best anti-acausal-trade-worry idea is "You know that decision-making procedure that you're worried other agents might use to take your lunch money? What would happen if you used it too, to get things to go well for yourself?"

comment by aditya malik (aditya-malik) · 2022-08-08T17:40:28.748Z · LW(p) · GW(p)

There would also be an almost infinite variety of non-superintelligent AIs, right?
