LessWrong 2.0 Reader
There is an RSS feed available on the Substack page; here is the link:
https://api.substack.com/feed/podcast/2280890/s/104910.rss
No. E.g. see here
buck on Stephen Fowler's Shortform
> In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI.
From that page:
> We expect the primary benefits of this grant to stem from our partnership with OpenAI, rather than simply from contributing funding toward OpenAI’s work. While we would also expect general support for OpenAI to be likely beneficial on its own, the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.
So the case for the grant wasn't "we think it's good to make OAI go faster/better".
Why do you think the grant was bad? E.g. I don't think "OAI is bad" suffices to establish that the grant was bad.
ete on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
The space of values is large, and many people have crystallized into liking nature for fairly clear reasons (positive experiences in natural environments, memetics in many subcultures idealizing nature, etc.). Also, misaligned, optimizing AI easily maps onto the destructive side of humanity, which many memeplexes demonize.
dagon on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
It's always seemed strange to me what preferences people have for things well outside their own individual experiences, or at least outside their sympathized experiences of beings they consider similar to themselves.
Why would one particularly prefer unthinking terrestrial biology (moss, bugs, etc.) over actual thinking being(s) like a super-AI? It's not like bacteria are any more aligned than this hypothetical destroyer.
cubefox on DeepMind's "Frontier Safety Framework" is weak and unambitious
RSP = Responsible Scaling Policy
rudi-c on AI #64: Feel the Mundane Utility
Can you create a podcast of posts read by AI? It’s difficult to use otherwise.
mateusz-baginski on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
Note to the LW team: it might be worth considering making links to AI Safety Info live-previewable (like links to other LW posts/sequences/comments and Arbital pages), depending on how much effort it would take and how much linking to AISI on LW we expect in the future.
tsvibt on Stephen Fowler's Shortform
On a meta note, IF proposition 2 is true, THEN the best way to tell this would be if people had been saying so AT THE TIME. If instead, actually everyone at the time disagreed with proposition 2, then it's not clear that there's someone "we" know to hand over decision making power to instead.

Personally, I was pretty new to the area, and as a Yudkowskyite I'd probably have reflexively decried giving money to any sort of non-X-risk-pilled non-alignment-differential capabilities research. But more to the point, as a newcomer, I wouldn't have tried hard to have independent opinions about stuff that wasn't in my technical focus area, or to express those opinions with much conviction, maybe because it seemed like Many Highly Respected Community Members With Substantially Greater Decision Making Experience would know far better, and would not have the time or the non-status to let me in on the secret subtle reasons for doing counterintuitive things.

Now I think everyone's dumb and everyone should say their opinions a lot so that later they can say that they've been saying this all along. I've become extremely disagreeable in the last few years, I'm still not disagreeable enough, and approximately no one I know personally is disagreeable enough.