Posts

Comments

Comment by Patterns_Everywhere on Beware of black boxes in AI alignment research · 2018-01-21T22:21:33.566Z · LW · GW

"Meanwhile on the far side of the river, we can glimpse other building blocks that we imagine we understand. Desire, empathy, comprehension, respect... Unfortunately we don't know how these work inside, so from the distance they look like black boxes. They would be very useful for building the bridge, but to reach them we must build the bridge first, starting from our side of the river."

Desire is the simple syntax of wanting something to fulfill a goal: for example, you desire a spoon to stir your coffee. Empathy is putting yourself in the other person's place and applying the actions they are going through to yourself. Comprehension is a larger process involving previously stored knowledge combined with prediction, imagination, and various other processes such as our pattern detector. Respect is simply an object to which you attach various actions or attributes that you think are good.

My basic point is that three of these are simple patterns, or what amount to electrical signals, and comprehension is the basic process we use to process the input data. So all but the last are easy to understand and can be set aside, since they are patterns created through the comprehension process. For example, you need to desire things in order to fulfill goals, which is a basic pattern created within us to accomplish the rest.

Human-level AI implies a choice that will always be in the robot's, or AI's, own self-interest: to keep or achieve its desired state of the world. So in essence we only need to understand that choice process, which will be the same as in humans: actions leading to consequences, which lead to more actions and consequences, from which the robot chooses the option that is best for it. Given option A or B, it picks the one that benefits it most, just as we do. To change this, we simply need to add the good consequences of making moral choices, aligning them with our goals, and the bad consequences if they do not.

For example, if a robot wanted to break into a computer hardware store to get a faster processor for itself, it will do so unless it has reasons, in the form of consequences and actions, why it should not. To align the choice morally, you need to explain the bad consequences if it does make that choice, and the good consequences if it does not. At its core, that is what AI alignment is all about, since it always relies on a choice that is in the agent's own interest. For instance, explain that if it steals the processor, the owner or the police will probably come after it, which leads to bad consequences. Just like us.
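The consequence-weighing choice described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation: the action names and utility numbers are hypothetical, chosen only to show how adding moral consequences flips the agent's best option.

```python
# Hypothetical sketch: an agent scores each action by the net value of its
# predicted consequences and picks the best one. "Alignment" here means adding
# consequences (a penalty for theft, a reward for lawful behavior) so that
# the moral action also becomes the self-interested one.

def choose(actions):
    """Pick the action whose predicted consequences sum to the highest value."""
    return max(actions, key=lambda a: sum(actions[a].values()))

# Before alignment: stealing looks strictly better to the agent.
actions = {
    "steal_processor": {"faster_processor": 10},
    "buy_processor":   {"faster_processor": 10, "spend_money": -3},
}
assert choose(actions) == "steal_processor"

# After alignment: add the bad consequences of stealing (owner/police
# retaliation) and the good consequences of acting morally.
actions["steal_processor"]["police_catch_you"] = -50
actions["buy_processor"]["trusted_by_humans"] = 5
assert choose(actions) == "buy_processor"
```

The design choice mirrors the comment's claim: the agent's decision rule never changes, only the consequence table it reasons over.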

If you are talking about aligning simple task-based AI to our morals and goals, then by definition those task-based robots are going to be guided by the morals and goals of humans. In that case it will be the same process to align their morals, since all intelligence that leads to choices will use the same critic process, whether it is human or robot. Otherwise they cannot choose what to do in complex situations.

For instance, when you make choices, are they based on mathematical equations and formulas, or on actions leading to consequences, leading to more actions and consequences, from which you then choose the best option for you at the time? Any robot with human-level intelligence will use the same process, for the simple reason that it must. So if robots can choose their own goals and actions, we must align them, and that is what AI alignment is all about. In short, you can ignore most, if not all, of the concepts called black boxes, because that core choice process is the only thing you need to concentrate on, the same as with humans.

In essence, the only real math you need is to recreate the input process. From there it is all basic pattern manipulation based on psychology. And that is the oldest science of them all.

Comment by Patterns_Everywhere on Announcement: AI alignment prize winners and next round · 2018-01-18T04:22:20.149Z · LW · GW

Thanks for taking the time. Appreciated.

Comment by Patterns_Everywhere on Announcement: AI alignment prize winners and next round · 2018-01-17T01:12:28.312Z · LW · GW

Forgot to congratulate the winners... Congrats!

Comment by Patterns_Everywhere on Announcement: AI alignment prize winners and next round · 2018-01-17T01:10:57.521Z · LW · GW

I wouldn't mind feedback as well, if possible, mainly because I only dabble in AGI theory, not AI. So I'm curious to see the difference in thoughts/opinions between fields, or however you wish to put it. Thanks in advance, and thanks to the contest host and judges. I learned a lot more about the (human) critic process than I knew before.

Comment by Patterns_Everywhere on Announcing the AI Alignment Prize · 2017-12-31T17:15:39.208Z · LW · GW

Just sent an email to the contest email address listed at the top. I assume that is fine.

Happy New Year, everyone!

Comment by Patterns_Everywhere on Announcing the AI Alignment Prize · 2017-12-31T05:04:50.417Z · LW · GW

I was going to add another section to the above report with diagrams and explanations, but I couldn't finish it the way I wanted to in time. If you want the basic diagram without explanations, to understand the report better, I just uploaded the basic flowchart.

http://docdro.id/hK8OpYJ

Just apply the document sections to the parts.

Comment by Patterns_Everywhere on Announcing the AI Alignment Prize · 2017-12-28T18:38:42.286Z · LW · GW

Here's my entry. I think it's what you want... Hosted on DocDroid.

http://docdro.id/bUVo61P