Comments
This is a very good point. It is strange, though, that the Board was able to fire Sam without the Chair agreeing to it. It seems like something as big as firing the CEO should have required at least a conversation with the Chair, if not the Chair’s affirmative vote. The way this was handled was a big mistake, and new rules need to be put in place to prevent mistakes like it.
I appreciate this post. I did want to point out that Palantir is an appropriate descriptive name for what the company does. LOTR fans understand it.
I had a bit of a chuckle looking at the Pew Research Center results reporting that 27% of people age 18 to 29 thought that dating was easier today than it was 10 years ago. What percentage of people in that bracket were between 8 and 12 years old 10 years ago? 😆
Would a BCI-mediated hive mind just be hacked and controlled by AGIs? Why would human brains be better at controlling AGIs when directly hooked into computers than they are when indirectly hooked in by keyboards? I understand that BCIs theoretically would lead to faster communication between humans, but the communication would go in both directions. How would you know when it was being orchestrated by humans and when it was being manipulated by an AGI?
That would make a fun sci-fi novel.
Could someone please point me toward some more recent posts about the war in Ukraine? Did folks on this platform lose interest? I still think it is pretty perplexing and would love to read some rational discussion about it.
I’m a fan of there being many experiments, but I might be biased by my background in meta-analysis. Many good experiments are, of course, better than many poorly designed and/or executed experiments, but replication is important even in good experiments. Even carefully controlled experiments have the potential for error. Also, having many experiments usually provides a better test of the generalizability of the findings. Finally, having many experiments coming out of many different laboratories (independent of each other) increases confidence that the findings are not the result of the investigators’ preferences for what the results should be. If there is conflict among the findings, it might reflect poor study design and/or execution, or it might be that the field is missing something important about the truth.
This reminds me of mindfulness training. It also reminds me of behavioral chain awareness training and other methods of reducing automatic thoughts and behaviors that are taught in cognitive behavioral therapy for addictions. It’s great that you figured it out for yourself and shared what you learned in a way that may be accessible to many in this audience.
I’m new to this community, so I don’t know why you have DEACTIVATED in front of your name. I’m sad that you do, though, because maybe it means that you have given up on this community. This post is so painfully lonely. I’m sorry if you didn’t find what you needed here. You have touched me with the loneliness in your writing. I also value other posts that you have made and would welcome hearing more of your thoughts. Maybe you won’t see this message, but I hope that, if you still have friends in this community, they are checking up on you.
I wonder how many AI experts hold back their thoughts because they remember what happened to Galileo when he argued that the Earth was not the center of the universe. Thank you for your post. I’m new here and am, therefore, not permitted to upvote it, but I would, if I could.
This summary of your post is exactly how I experienced it. From this reader’s perspective, you accomplished your goal of expressing it.
Also, I appreciate your agnosticism and acknowledgement that others may not have the same super goal. I do know quite a few people who have had or are having your experience.
I wonder if part of your experience is because you have a choice. Some do not, because, in their circumstances, complete attention must be focused on surviving. I wonder, also, if humans are a bit behind in their evolutionary adaptation to having this level of choice.
Your post also makes clear the incredible difficulty people face in AI alignment. It is difficult to align our own selves. We fall back on heuristics to save time and our own mental resources. There are multiple “right” answers. Rewards here have costs there. It’s difficult to assign weights. The weight values seem to fluctuate depending on where we focus our attention. If we spend too many resources trying to pick a direction, the paths meanwhile change and we have to reassess.
And there is the manipulation of incentives, particularly praise. Is the praise worth the cost? Did your start state set you up to put one foot in front of the other in response to praise? Do you always have to do a good job at being a CEO, a husband, a father? Is being in your wife’s company its own reward, or are you doing the job of being a husband? Or do you feel both ways in fluctuating degrees? Also, it may be that the goal is not the only thing that directs your behavior. It may be that, sometimes, the push and pull of whatever small, repeated incentives are happening is guiding your less planned behaviors. These less planned behaviors, over time, become your life.
Anyway, I appreciate what you have said and how you have said it.