Comments

Comment by Philip Niewold (philip-niewold) on Failures in Kindness · 2024-07-22T10:08:15.828Z · LW · GW

Social messaging is a fine balancing act: people like to offload responsibility and effort, especially if it doesn't come at the cost of status. And, to be honest, you don't know whether your question would impose on the other person (in terms of cognitive load, social pressure, or responsibility), so it is smart to start your social bid low and see if the other wants to raise the price. Sometimes these low bids work, creating a feedback loop similar to how superstitions evolve: if something costs minimal effort and is sometimes effective, you might as well keep using it.

As a child, I despised a lot of these practices; to me it felt like people were lying all the time, or at least hiding their true motivations or concerns. I tended to simply call these adults out on their bullshit. If somebody said "I'm fine with everything", I would propose something I knew that person was not fine with, but absurd enough to signal that I wasn't being serious. As a child you can still get away with such behaviour, but many adults find it highly annoying. However, I still employ it among friends who I know won't judge me for it, or at least I lace it with humor to make it socially acceptable.

However, I think such messaging can often turn social communication into a prisoner's dilemma type situation, where each party puts in the minimum successful effort, resulting in an outcome unsatisfactory to both. I'm just not sure how (and whether) we are able to recognize when a situation is a prisoner's dilemma and when it is not. "How was your week?" is often a very welcome question to me, but not to others.
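
As a toy illustration of that structure (the payoff numbers below are my own assumptions, purely for the sketch): whatever the other party does, minimal effort pays off more for you individually, yet both of you would prefer the full-effort exchange.

```python
# Toy model: a social exchange as a prisoner's-dilemma-like game.
# All payoff numbers are illustrative assumptions, not from the post.
# Each party chooses minimal effort ("min") or genuine effort ("full").

payoffs = {
    # (my choice, their choice) -> my payoff
    ("min", "min"): 1,    # shallow exchange, unsatisfying for both
    ("min", "full"): 3,   # I free-ride on their effort
    ("full", "min"): 0,   # my effort is wasted on a minimal reply
    ("full", "full"): 2,  # a real conversation, better for both
}

def best_response(their_choice):
    """My payoff-maximizing reply to a fixed choice by the other party."""
    return max(("min", "full"), key=lambda mine: payoffs[(mine, their_choice)])

# Whatever the other party does, minimal effort pays more for me...
assert best_response("min") == "min"    # 1 > 0
assert best_response("full") == "min"   # 3 > 2
# ...so (min, min) is the equilibrium, even though (full, full)
# would leave both parties better off (2 > 1 for each).
```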

Leaving things unspoken and relying on generally accepted principles can increase communication efficiency enormously; a lot of communication isn't a prisoner's dilemma type exchange, after all. But it will run into issues occasionally, especially when the communicators do not share a set of unspoken rules.

Having grown up in Dutch culture, and being unusually direct (rude) even for a Dutch person, I found travelling in Iran, where things are absurdly polite at times, very interesting. A society like Iran, however, requires quite a lot of cognitive load for even simple matters.

Comment by Philip Niewold (philip-niewold) on Hell is Game Theory Folk Theorems · 2023-05-08T09:22:29.490Z · LW · GW

Of course it is perfectly rational to do so, but only from a wider context; from within the context of the equilibrium it isn't. The rationality of your example rests on the fact that you can weigh the game against your whole lifetime, while the game is played in 10-second intervals. Suppose you don't know how long you have to live, or, in fact, know that you only have 30 seconds left to live. What would you choose?

This information is not given by the game, even though it impacts the decision, since the game relies on a real-world equivalence to give it weight and impact.
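
A minimal sketch of the horizon point, swapping the dial game from the post for a standard finitely repeated prisoner's dilemma (payoffs are my own illustrative assumptions): once the last round is known, the punishment scheme that sustains the equilibrium unravels by backward induction.

```python
# Sketch of why a known, short horizon ("30 seconds left") changes the
# answer: backward induction in a finitely repeated prisoner's dilemma.
# The dial game from the post is swapped for a standard PD, and the
# payoff numbers are my own illustrative assumptions.

PAYOFF = {  # (my move, their move) -> my stage payoff
    ("C", "C"): 2, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 1,
}

def last_round_move(their_move):
    """In the final round there is no future left to punish me in,
    so I simply maximize the stage payoff."""
    return max("CD", key=lambda me: PAYOFF[(me, their_move)])

# Defection dominates in the last round, whatever the other player does:
assert last_round_move("C") == "D"  # 3 > 2
assert last_round_move("D") == "D"  # 1 > 0
# Knowing this, threats of punishment carry no weight in the
# second-to-last round either, and the cooperative scheme unravels all
# the way back to round one. With an unknown or unbounded horizon the
# unraveling never starts, which is what folk-theorem equilibria rely on.
```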

Comment by Philip Niewold (philip-niewold) on Hell is Game Theory Folk Theorems · 2023-05-08T09:10:23.076Z · LW · GW

Any Nash equilibrium can be a local optimum. This example merely demonstrates that not all local optima are desirable if you are able to view the game from a broader context. Incidentally, evolution has provided us with some means to escape these local optima: usually by breaking the rules of the game, leaving the game, or acting in ways that seem irrational from the perspective of the local optimum.
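
To make "equilibrium as undesirable local optimum" concrete, here is a sketch using a stag hunt rather than the game from the post (my choice of illustration; the payoffs are assumptions): both strategy profiles are Nash equilibria, but only one of them is the good one, and no player can leave the bad one alone.

```python
# Sketch: a Nash equilibrium as an undesirable local optimum, using a
# stag hunt (my choice of illustration; payoffs are assumptions).

STRATS = ("stag", "hare")
PAYOFF = {  # (my move, their move) -> my payoff
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 2, ("hare", "hare"): 2,
}

def is_nash(p1, p2):
    """Neither player can gain by unilaterally switching strategies."""
    p1_ok = all(PAYOFF[(alt, p2)] <= PAYOFF[(p1, p2)] for alt in STRATS)
    p2_ok = all(PAYOFF[(alt, p1)] <= PAYOFF[(p2, p1)] for alt in STRATS)
    return p1_ok and p2_ok

assert is_nash("stag", "stag")   # the good equilibrium: 4 each
assert is_nash("hare", "hare")   # the bad one: 2 each, locally stable
# No single player can escape (hare, hare) alone: switching to "stag"
# unilaterally drops your payoff from 2 to 0. Reaching the better
# optimum requires changing the game itself (communication, commitment,
# or leaving), which is what "breaking the rules" buys you.
```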

Comment by Philip Niewold (philip-niewold) on Bing Chat is blatantly, aggressively misaligned · 2023-02-16T10:10:34.636Z · LW · GW

Please keep in mind that the chat technology is a desired-answer predictor. If you are looking for a weird response, the AI can see that in your questioning style. It has millions of examples of people trying to trigger certain responses on forums and the like, and it will quickly recognize what you are really looking for, even if your literal words do not exactly request it.

If you are a Flat Earther, the AI will do its best to accommodate your views about the shape of the earth and answer in the manner you would like, even though the developers of the AI have done their best to instruct it to 'speak as accurately as possible within the parameters of their political and PR views'.

If you want to trigger the AI into giving poorly written code examples with mistakes in them, it can do that too. And you don't even have to ask it directly; it can detect your intention by carefully listening to your line of questioning.

Once again, it is a desired-answer predictor, a most-likely-response generator. That is its primary job, not to be nice or to give you accurate information.
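
A toy sketch of what "most-likely-response generator" means here. This is not Bing's actual architecture; the framings and probabilities below are entirely made up, purely to illustrate that such a system scores continuations by likelihood given the prompt, not by truthfulness or niceness.

```python
# Toy "most-likely-response generator". The framings and probabilities
# are invented assumptions, not real model behavior or a real API.

toy_model = {
    # prompt framing -> assumed distribution over response styles
    "neutral question": {"accurate answer": 0.7, "unhinged rant": 0.1, "flattery": 0.2},
    "provocative bait": {"accurate answer": 0.2, "unhinged rant": 0.6, "flattery": 0.2},
    "leading flat-earth question": {"accurate answer": 0.2, "unhinged rant": 0.1, "flattery": 0.7},
}

def respond(framing):
    """Return the most probable response style for a given framing."""
    dist = toy_model[framing]
    return max(dist, key=dist.get)

# The same 'model' produces very different outputs purely because the
# prompt made those outputs more likely, not because it 'wants' them:
assert respond("neutral question") == "accurate answer"
assert respond("provocative bait") == "unhinged rant"
assert respond("leading flat-earth question") == "flattery"
```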