29. You do not live in a video game. There are no pop-up warnings if you’re about to do something foolish, or if you’ve been going in the wrong direction for too long. You have to create your own warnings.
One great tool for creating those warnings is Habitica - a free, ad-free productivity app for gamifying your good and bad habits, your recurring and one-off tasks, and your self-rewards.
27. Discipline is superior to motivation. The former can be trained, the latter is fleeting. You won’t be able to accomplish great things if you’re only relying on motivation.
My experience is the opposite. Discipline is external motivation that is forced on people through upbringing, schooling and military training. It's driven by fear of not living up to the expectations of others, and it disappears as soon as those others stop watching. Such a person is only productive in a team or under regular reporting.
Inner motivation, on the contrary, doesn't disappear when no one is watching, because it is aligned with what makes you happy, and it updates over time as that changes.
For example, suppose you have a hobby of painting miniatures that you enjoy, and then decide to turn it into your only source of income. That puts a lot of performance stress on it, and your motivation gets updated: stress is subtracted from the joy of the process, and the sum might drop below the value of some other activity, like watching educational YouTube videos, that isn't weighed down by the expectation of serving as your primary source of income.
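The arithmetic in this example can be sketched as a toy comparison - every number below is made up purely for illustration:

```python
# Toy model of a motivation update; all values are made-up illustrations.
joy_of_painting = 10      # enjoyment of painting miniatures as a hobby
performance_stress = 7    # stress added by making it your only income source
value_of_videos = 5       # enjoyment of watching educational YouTube videos

# Stress is subtracted from the joy of the process...
painting_as_job = joy_of_painting - performance_stress  # 10 - 7 = 3

# ...and the sum drops below the value of the alternative activity.
preferred = "painting" if painting_as_job > value_of_videos else "videos"
print(preferred)  # videos
```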
If this happens, the discipline approach is to ignore this change and force yourself to do the thing you no longer enjoy, because discipline is all about giving up what you want in order to do what's good for others.
The motivation approach, on the other hand, is attentive to this change, this useful signal. You care about your happiness, and in order to protect it you can raise your hourly rate, start rejecting uninteresting projects, or diversify your income with other sources to relieve the unreasonable expectation of productivity.
The argument that you can't accomplish great things with motivation alone is countered, for me, by the subjectively great things I have accomplished by intentionally ignoring external motivation and following only internal motivation.
Admittedly, to accomplish your long-term goals you need to spend some of your daily supply of attention on reminding yourself how the things you are doing will bring those goals about. But that is far better than doing things you don't want to do.
Personal vs Global CEV could also be mentioned here.
Upon reading the ideal advisor theories paper, an idea came to mind about how to protect CEV from Sobel's fourth objection, in which the ideal adviser recommends actions that would lead to death because it knows its original self would want to commit suicide after seeing how inferior and hopeless their life is compared to the perfect self's. If we limit the "better version of ourselves" to having only superior knowledge and skills (nothing we couldn't obtain ourselves given enough time and resources), then it wouldn't view us as disabled or hopeless, only misinformed. There would be a way out, and the perfectly informed self would also know all the ways to improve the situation, so it wouldn't recommend a mercy death unless the original self already had suicidal tendencies. What a nice topic to discuss =P
What disturbs me in this article is the normativity: describing values, rightness and goodness as something objective, having an objective boolean value, existing in the world without an observer to hold those values, like motivation without anyone being motivated by it. Rightness and goodness are meaningless outside of some utility function, some desired end state that labels movement toward it as the positive direction and movement away from it as the negative one. Without a destination, every direction is as good as every other. Values are always subjective, so when teaching them to an AI we can only refer to how commonly people regard value A as positive or negative.
The universe doesn't want anything, so killing humans, for example, has no innate badness; it is not negative for the universe, just negative for most humans. If taking a pill changed your subjective values to "killing = good", then rightness would also change, and the AI would extrapolate this new rightness from your brain. Furthermore, it would correctly recommend futures with killing, because according to these values they are better than futures without it.
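A minimal sketch of that sign flip, with two hypothetical futures scored by a single made-up "kills" number:

```python
# Two hypothetical futures, scored only by how much killing they contain.
futures = {"peaceful": 0, "violent": 9}

def best_future(utility):
    """Return the future that an agent with this utility function prefers."""
    return max(futures, key=lambda name: utility(futures[name]))

# Original values: killing is bad, so utility decreases with the kill count.
print(best_future(lambda kills: -kills))  # peaceful

# After the pill: killing is good, so utility increases with the kill count.
print(best_future(lambda kills: kills))   # violent
```

The recommendation flips even though the futures themselves never changed - only the utility function did.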
We have no reason to believe that if each of us knew as much as a superintelligence, could think as fast as it and reason as soundly as it does, we would then have no differences in values. Let's safely assume that subjectivity isn't going anywhere. We can still define some useful values for the AI by substituting an overwhelming consensus of known subjective values for objective values: basic values that are common to most people and don't vary significantly with political or personal preference, like human rights, basic criminal law, and maybe some of the soft positive values mentioned in the article. A ban on wars would be nice to include! (We'd need to define what level of aggression counts as war, and whether information war and sanctions are also included.)
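One way to picture "overwhelming consensus of known subjective values" is a simple endorsement threshold; the survey numbers and the 0.9 cutoff below are invented for the sketch:

```python
# Hypothetical fraction of people who endorse each candidate value.
endorsement = {
    "human rights": 0.97,
    "basic criminal law": 0.95,
    "ban on wars": 0.91,
    "specific tax policy": 0.45,  # varies with political preference
}

CONSENSUS_THRESHOLD = 0.9  # arbitrary cutoff for "overwhelming consensus"

# Only near-universal values make it into the AI's value set.
ai_values = [v for v, share in endorsement.items()
             if share >= CONSENSUS_THRESHOLD]
print(ai_values)  # ['human rights', 'basic criminal law', 'ban on wars']
```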
The utility function of an AI is what defines its priorities over possible outcomes, i.e. its values. The aforementioned rights and laws tend to take the form of penalties for wrong actions rather than utility gains for good actions. That is a slippery slope in the sense that AIs tend to find loopholes in prohibitions, but on the other hand penalties can't be abused for utility maximization the way gains can. For example, rewarding the AI for creating happy fluffy feelings in people would turn it into a maximizer.
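The asymmetry between penalties and gains can be sketched with toy outcomes (the numbers and the "fluffy feelings" metric are invented for illustration):

```python
# Toy outcomes as (wrong_actions, fluffy_feelings_produced).
outcomes = {
    "balanced":  (0, 5),
    "maximizer": (0, 1000),  # tiles the world with happy fluffy feelings
    "harmful":   (3, 5),
}

def penalty_utility(wrong, fluffy):
    # Penalty-only: never positive, so there is nothing to maximize endlessly.
    return -10 * wrong

def gain_utility(wrong, fluffy):
    # Reward-based: more fluffy feelings is always strictly better.
    return fluffy - 10 * wrong

# A gain-based utility pushes the AI toward the extreme outcome...
print(max(outcomes, key=lambda o: gain_utility(*outcomes[o])))  # maximizer

# ...while the penalty-only utility is merely indifferent between
# "balanced" and "maximizer" (both score 0), creating no such pressure.
print(penalty_utility(*outcomes["balanced"]) ==
      penalty_utility(*outcomes["maximizer"]))  # True
```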
In any case we'll want to change the AI's values as our understanding of good and right evolves, so let's hope utility indifference will let us update them. Rather than changing drastically over time, our values will probably become more detailed and situational, full of exceptions, just like our laws. The justice systems of many countries are already so complex that it would make sense to delegate judgement to AIs. Can't wait to see news of the first AI judges being bribed with utility gains.
P.S.: Opposing normative values is the definition of rebelling, so I guess I'm a rebel now ^_^