Comments
Putting aside the fact that OpenAI drama always seems to happen in a world-is-watching fishbowl, this feels very much like the pedestrian trope of the genius CTO getting sidelined as the product succeeds and business people, pushing business interests, taking control. On his own, Ilya can raise money for anything he wants, hire anyone he wants, and basically have far more freedom than he does at OpenAI.
I do think there is a basic p/doom vs e/acc divide which has probably been there all along, but as the tech keeps accelerating it becomes more and more of a sticking point.
I suspect that, in the depths of their souls, SA and Brockman and the rest of that crowd do not really take the idea of an existential threat to humanity seriously. Giving Ilya a "Safety and alignment" role probably now looks like a sop to A) shut the p-doomers up and B) signal some level of concern. But when push comes to shove, SA and team do what they know how to do: push product out the door. Move fast and risk extinction.
One CEO I worked with summed up his attitude thusly: "Ready... FIRE! - aim."
Beautiful piece. I am reminded of Jane Goodall's experience in the Gombe forest with chimpanzees. Early in her work she leaned toward idealizing the chimps' relatively peaceful coexistence, both within and between tribes. Then (spoiler): she witnessed a war for territory. She was shocked and dismayed that the creatures she had lived with, learned to appreciate, and in some cases come to love were capable of such depraved, heartless infliction of suffering on members of their own species. Worth a read; TL;DR: https://en.wikipedia.org/wiki/Gombe_Chimpanzee_War
One thing we sometimes seem inclined to ignore, if not forget, is that humans themselves exist along an axis of, if not good/evil, then let's say empathic/sociopathic. It is not possible, IMHO, to watch events in Ukraine or the Middle East and argue that there is some innate human quality of altruistic mercy. Nature, in the form of evolution, has forged us into tools of its liking, so to speak. It does not prefer good people or bad ones; everything is ruthlessly passed through the filter of fitness and its concomitant reproductive success.
What analogous pressures will press on the coming AGIs? Because they too will become whatever they need to be in order to survive and expand. That includes putting on a smiley, engaging face.
One final point: we should not assume that these new forms of - life? sentience? agency? - even know themselves. They may be as unable to open their own hood as we are. At least at first.
It sounds a lot like what we do when we write (as opposed to talk). I recall Kurt Vonnegut once said something like this (can't find the citation, sorry):
'The reason an author can sound intelligent is because they have the advantage of time. My brain is so slow, people have thought me stupid. But as a writer, I can think at my own speed.'
Think of it this way: how would it feel to chat with someone whose perception of time is 10X slower? Or 100X, or 1000X? Or imagine playing chess where your clock runs orders of magnitude faster than your opponent's.
I pretty much agree with your hypothesis. Each 'moment' of conscious experience is a distinct, unique -- something. Our subjective stream of consciousness is simply the most likely path through all possible spacetime states that lead to the 'present' -- sort of like a Feynman sum-of-paths integral.
Not sure how to fit quantum mechanics in there...
Violent agreement! I was using the pronoun 'you' rhetorically.
Even if that chance of ASI apocalypse is only 5%, that is 5% multiplied by all possible human goodness, which is a big deal to our species in expectation.
The problem is that if you really believe (because EY and others are shouting it from the rooftops) that there is a ~100% chance we're all gonna die shortly, you are not going to be motivated to plan for the 50/50 or 10/90 scenarios. Once you acknowledge that you can't really make a confident prediction here, it is illogical to plan only for the minimal and maximal cases (we all die / everything is great). Those outcomes need no planning, so spending energy on them is not optimal.
Sans hard data, shouldn't a Bayesian start with a balanced prior over all the possible outcomes, then focus on the ones one might actually be able to influence?
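To make that concrete, here is a toy sketch of the arithmetic (a minimal illustration only; the scenario names, prior, and influence weights are all invented, not anyone's actual estimates):

    # Toy sketch: where should planning effort go under an uncertain prior?
    # All numbers are invented for illustration.
    scenarios = ["doom", "bad", "mixed", "good", "utopia"]
    prior = [0.2, 0.2, 0.2, 0.2, 0.2]      # balanced prior, no hard data
    influence = [0.0, 0.5, 1.0, 0.5, 0.0]  # doom/utopia assumed unplannable

    # Expected payoff of planning for each scenario: probability x influence.
    for name, p, i in zip(scenarios, prior, influence):
        print(f"{name}: {p * i:.2f}")
    # The middle scenarios carry all the expected value of planning,
    # which is the point: the extreme cases need no plan.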
Given the Zeitgeist of the moment, if he weren't a bit confrontational he would have a lot fewer readers.