Would it be possible to use an algorithm on an AGI to shut it down after some time, while having it perform the goals it is doing without hurting or killing people, without taking away their autonomy, and without looking for loopholes so it can keep pursuing its goals? Why would it try to stop the algorithm from shutting it off if the algorithm is built into it?
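Here is a rough Python sketch of what a shutdown timer that is "built into it" might look like if it lives in a harness around the agent rather than in the agent's own goals. Everything in it (the deadline, the placeholder agent loop) is hypothetical and only illustrates the idea; it does not claim that a real AGI could actually be contained this way, which is exactly the open question above.

```python
# Illustrative sketch only: a watchdog that ends the process at a deadline,
# outside the agent's own goal loop, so there is no decision inside the
# agent's goals for it to override.
import os
import threading
import time

SHUTDOWN_AFTER_SECONDS = 300  # assumed deadline; a policy choice for the example


def watchdog() -> None:
    """Sleep until the deadline, then terminate the whole process unconditionally."""
    time.sleep(SHUTDOWN_AFTER_SECONDS)
    print("deadline reached - shutting the agent down")
    os._exit(0)  # hard exit: does not ask the agent's code for permission


def agent_main() -> None:
    """Placeholder for the agent pursuing whatever goals it was given."""
    while True:
        time.sleep(1)  # pretend work


if __name__ == "__main__":
    threading.Thread(target=watchdog, daemon=True).start()
    agent_main()
```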
What if we use an algorithm on an AGI to get it to always want to rely on humans for energy and resources?
What if we use an algorithm on an AGI to get it to stop, after a certain amount of time, whatever we asked it to do?
Then we would have to say "continue" before it would resume.
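As a toy illustration of the "stop after a certain amount of time, then continue when we say continue" idea, here is a minimal Python sketch. The time budget and the placeholder agent_step function are assumptions made for the example, not a real design.

```python
# Minimal sketch: run the agent, pause after a fixed time budget, and resume
# only when a human types "continue". The pause lives in the harness around
# the agent, not in the agent's own goal code.
import time

TIME_BUDGET_SECONDS = 10  # assumed budget for the example


def agent_step(task: str) -> None:
    """Placeholder for one unit of work by the agent."""
    print(f"working on: {task}")
    time.sleep(1)


def run_with_time_limit(task: str) -> None:
    """Run the agent, pausing at the budget until the operator says 'continue'."""
    start = time.monotonic()
    while True:
        if time.monotonic() - start >= TIME_BUDGET_SECONDS:
            answer = input("time budget used up - type 'continue' to resume or anything else to halt: ")
            if answer.strip().lower() != "continue":
                print("halted by operator")
                return
            start = time.monotonic()  # reset the budget and keep going
        agent_step(task)


if __name__ == "__main__":
    run_with_time_limit("summarise today's notes")
```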
What if we use an algorithm on an AGI to get it not to try to manipulate us with what it has been given?
What if we use an algorithm on an AGI to get it to only use the resources we gave it?
What if we use an algorithm on an AGI to get it to give us the pros and cons of what it is about to do?
What if we use an algorithm on an AGI to get it to always ask for permission to do something new before it does it?
What if we use an algorithm on an AGI to get it to stop when we say stop?
What if we did all of these things to one AGI?
Just program the AGI to always ask for permission to do something new before it does it?
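Putting several of the questions above together (only the resources it was given, pros and cons before acting, permission for anything new, stop when we say stop), here is a minimal Python sketch of a harness that enforces those rules around a stubbed-out agent. All the names and the abstract "resource cost" units are hypothetical; this shows what the rules look like as code, not whether a real AGI would stay inside them.

```python
# Illustrative harness combining several constraints: resource cap, pros and
# cons shown before acting, permission required for any new kind of action,
# and a stop command that ends everything.
from dataclasses import dataclass, field


@dataclass
class ActionProposal:
    kind: str            # e.g. "write_file" (hypothetical category label)
    description: str
    pros: list[str]
    cons: list[str]
    resource_cost: int   # abstract units, assumed to be measurable


@dataclass
class Harness:
    resource_budget: int
    approved_kinds: set[str] = field(default_factory=set)
    stopped: bool = False

    def request(self, proposal: ActionProposal) -> bool:
        """Return True only if every constraint holds and a human approves."""
        if self.stopped:
            return False
        if proposal.resource_cost > self.resource_budget:
            print("denied: would exceed the resources it was given")
            return False
        # Always surface pros and cons before asking.
        print(f"proposal: {proposal.description}")
        print("pros:", "; ".join(proposal.pros))
        print("cons:", "; ".join(proposal.cons))
        if proposal.kind not in self.approved_kinds:
            answer = input(f"'{proposal.kind}' is new - allow it? (yes/no/stop): ").strip().lower()
            if answer == "stop":
                self.stopped = True
                return False
            if answer != "yes":
                return False
            self.approved_kinds.add(proposal.kind)
        self.resource_budget -= proposal.resource_cost
        return True


if __name__ == "__main__":
    harness = Harness(resource_budget=10)
    proposal = ActionProposal(
        kind="write_file",
        description="save a summary of today's results to disk",
        pros=["keeps a record"],
        cons=["uses disk space", "could overwrite something if the path is wrong"],
        resource_cost=2,
    )
    print("allowed" if harness.request(proposal) else "blocked")
```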
They need to make large language models not hallucinate. Here is an example of how.
Hallucinating should only be used for creativity and problem solving.
Here is how my chatbot does it. It is on the Personality Forge website.
https://imgur.com/a/F5WGfZr
Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arXiv:2305.10601, arxiv.org)
I wonder if something like this could be used with my idea for AI safety.
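For reference, the Tree of Thoughts method in that paper has the model propose several candidate next "thoughts", score how promising each partial solution is, and search over the resulting tree instead of committing to a single chain. Here is a stripped-down sketch of that loop in Python; propose_thoughts and score_state are stand-ins for the LLM calls the paper actually uses, so the scoring here is random and purely illustrative.

```python
# Rough sketch of Tree of Thoughts style breadth-first search (arXiv:2305.10601):
# propose several candidate thoughts at each step, score them, keep the best few.
import random


def propose_thoughts(state: str, k: int = 3) -> list[str]:
    """Placeholder for an LLM call that extends a partial solution k ways."""
    return [f"{state} -> step{random.randint(0, 99)}" for _ in range(k)]


def score_state(state: str) -> float:
    """Placeholder for an LLM call that rates how promising a partial solution is."""
    return random.random()


def tree_of_thoughts_bfs(problem: str, depth: int = 3, beam: int = 2) -> str:
    """Search over thoughts level by level, keeping the `beam` best states each time."""
    frontier = [problem]
    for _ in range(depth):
        candidates = [t for state in frontier for t in propose_thoughts(state)]
        candidates.sort(key=score_state, reverse=True)
        frontier = candidates[:beam]
    return frontier[0]


if __name__ == "__main__":
    print(tree_of_thoughts_bfs("solve: make 24 from the numbers 4, 9, 10, 13"))
```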
Program it to ask for approval from a group of 100 humans before doing anything other than thinking, and to tell them the ramifications of its actions. It could not deceive, lie, scare people, or program itself without human approval, because it would not have gotten the group of 100 humans to approve of it. It would also be required to ask the group of 100 humans whether something is true or not, because the internet has false information on it. How would it get around this when it was programmed into it as an AGI? Of course, you have to define what deception means in its programming.
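Here is a minimal sketch of what that group-of-100 approval gate could look like as code. The panel size of 100 comes from the post above; the simple-majority threshold and the collect_vote placeholder are assumptions for the example, and a real version would need authenticated, independent channels so the system could not fake or manipulate the votes.

```python
# Illustrative sketch of a human-panel approval gate: the system states the
# action and its ramifications, then 100 reviewers vote before anything runs.

PANEL_SIZE = 100
APPROVAL_THRESHOLD = 51  # assumed: simple majority of the 100 reviewers


def collect_vote(reviewer_id: int, description: str) -> bool:
    """Placeholder for asking one human reviewer over a trusted channel."""
    answer = input(f"[reviewer {reviewer_id}] approve '{description}'? (y/n): ")
    return answer.strip().lower() == "y"


def panel_approves(description: str, ramifications: str) -> bool:
    """The system must state what it wants to do and its ramifications, then wait."""
    print(f"requested action: {description}")
    print(f"stated ramifications: {ramifications}")
    yes_votes = sum(collect_vote(i, description) for i in range(PANEL_SIZE))
    return yes_votes >= APPROVAL_THRESHOLD


if __name__ == "__main__":
    if panel_approves("post a reply on a public forum",
                      "the reply will be visible to anyone and cannot be fully retracted"):
        print("approved: carrying out the action")
    else:
        print("not approved: the action is not carried out")
```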