Comments
He also said interpretability has been solved, so he's not the most calibrated when it comes to truthseeking. Similarly, his story here could be wildly exaggerated and not the full truth.
There have been comments from OAI staff that o1 is "GPT-2 level," so I wonder if it's a similar size?
It would be interesting to see which arguments the public and policymakers find most and least concerning.
So I generally think this type of incentive affecting people's views is important to consider. Though I wonder, couldn't you make counterarguments along the lines of "if they're really so great, why don't you try to sell them and make money? Because they're not great" and "if you really believed this was important, you would bet proportional amounts of money on it"?
Trump said he would cancel the executive order on Safe, Secure, and Trustworthy AI on day 1 if reelected. That seems negative, since it creates more uncertainty about how consistent any AI regulation will be, and he has proposed no alternative.
I'm not sure what you mean. The main things would have been removing him from the board and replacing him as CEO, which they did.
But the board couldn't stop a new Microsoft-backed company from threatening to scoop up all the employees. So they negotiated Altman's return as CEO, but not to the board, since the alternative seemed worse.
Maybe paying closer attention to the CEO from the start and acting earlier, or limiting or preventing commercialization and avoiding employee equity that would be at risk if the CEO changed, might have produced a different outcome.
Seems good; will repost.