LessWrong 2.0 Reader
It's not an entirely unfair characterization.
vladimir_nesov on Alexander Gietelink Oldenziel's Shortform
(Re: Difficult to Parse react on the other comment [LW(p) · GW(p)])
I was confused about the relevance of your comment above [LW(p) · GW(p)] on chunky innovations. It seems to be making some point (for which what it actually says is an argument), but I can't figure out what that point is. One clue was that you might be talking about innovations needed for superintelligence, while I was previously talking about the possible absence of need for further innovations to reach autonomous researcher chatbots, an easier target. So I replied by formulating this distinction and some thoughts on the impact of and conditions for reaching innovations of both kinds. Possibly the relevance of this was confusing in turn.
Aren't these different things? Private, yes; for-profit, no. It was private in the sense that it wasn't run by the US government.
emrik-1 on Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc
The links/graphics are broken, btw. Would probably be nice to fix if it's quick.
dr_s on Stephen Fowler's Shortform
I think there's a solid case that anyone who supported funding OpenAI should be considered at best well-intentioned but very naive. The idea that we should align and develop superintelligence but, like, good, has always been a blind spot in this community: an obviously flawed but attractive goal, because it dodged the painful choice between extinction risk and abandoning hopes of personally witnessing the singularity, or at least a post-scarcity world. This is also a case where people's politics probably affected them. Plenty of others would be instinctively distrustful of corporation-driven solutions to anything - it's something of a Godzilla Strategy, after all, since aligning corporations is also an unsolved problem - but those with an above-average level of trust in free markets weren't so averse.
Such people don't necessarily have conflicts of interest (though some may, and that's another story), but they at least need to drop the fantasy-land stuff and accept the harsh reality on this before being of any use.
rom on [Linkpost] Please don't take Lumina's anticavity probiotic
The piece is unfair towards Bay Area Rationalists, but the critiques of Lumina can stand separate from what the author thinks about LW readers. "Haters gonna occasionally make some valid points," and such: sometimes people who unfairly dislike you can also make valid critiques.
I think it's a fair point to note that:
On reflection I somewhat endorse pointing the risk out after discovering it, in the spirit of open collaboration, as you did. It was just really frustrating when all my experiments suddenly broke for no apparent reason. But that's mostly on OpenAI for not announcing the change to their API (other than emails sent to a few people). Apologies for grouching in your direction.
akash-wasil on robo's Shortform
There are some conversations about policy & government response taking place. I think there are a few main reasons you don't see them on LessWrong:
If anyone here is interested in thinking about "40% agreement" scenarios, or more broadly interested in how governments should react in worlds where there is greater evidence of risk, feel free to DM me. Some of my current work focuses on the idea of "emergency preparedness": how we can improve the government's ability to detect & respond to AI-related emergencies.
jonas-hallgren on Examples of Highly Counterfactual Discoveries?
Sure! Anything more specific that you want to know about? Practical advice or more theory?
stephen-fowler on Stephen Fowler's Shortform
"So the case for the grant wasn't 'we think it's good to make OAI go faster/better'."
I agree. My intended meaning is not that the grant was bad because its purpose was to accelerate capabilities. I apologize that the original post was ambiguous.
Rather, the grant was bad for numerous reasons, including but not limited to:
This last claim seems very important. I have not been able to find data that would let me confidently estimate OpenAI's value at the time the grant was given. However, Wikipedia mentions that "In 2017 OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone." This certainly makes it seem that the grant provided OpenAI with a significant amount of capital, enough to have increased its research output.
Keep in mind that the $30 million grant needs to have generated $30 million in EV just to break even. I'm now going to suggest some other uses for the money; keep in mind these are just rough estimates, not adjusted for inflation, and I'm not claiming they are the best uses of $30 million (a back-of-envelope sketch of these comparisons follows the list below).
The money could have funded an organisation the size of MIRI for roughly a decade (basing my estimate on MIRI's 2017 fundraiser [EA · GW]; using 2020 numbers gives an estimate of ~4 years).
A 30-second Super Bowl ad cost roughly $5 million in 2017, so the money could have bought one per year for several years. Imagine the shift in public awareness if there had been an AI safety Super Bowl ad for 3-5 years.
Or it could have saved the lives of ~1300 children [EA · GW].
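To make the comparisons concrete, here's a minimal back-of-envelope sketch in Python. All figures are assumptions derived from the estimates above (MIRI budgets implied by the ~decade and ~4-year figures, a ~$5M Super Bowl ad, and the ~1300-lives figure); none are authoritative numbers.

```python
# Back-of-envelope comparison of alternative uses for the $30M grant.
# All dollar figures are rough assumptions from the estimates above.

GRANT_USD = 30_000_000

# MIRI budgets implied by "roughly a decade" on 2017 numbers and
# "~4 years" on 2020 numbers (assumed figures, not from MIRI directly).
miri_budget_2017 = 3_000_000   # ~$30M / 10 years
miri_budget_2020 = 7_500_000   # ~$30M / 4 years
print(f"MIRI-years at 2017 budget: {GRANT_USD / miri_budget_2017:.1f}")
print(f"MIRI-years at 2020 budget: {GRANT_USD / miri_budget_2020:.1f}")

# A 30-second Super Bowl ad cost roughly $5M in 2017 (assumed figure).
superbowl_ad_usd = 5_000_000
print(f"Super Bowl ads funded: {GRANT_USD // superbowl_ad_usd}")

# Cost per life implied by the ~1300-children estimate.
lives_saved = 1300
print(f"Implied cost per life saved: ${GRANT_USD / lives_saved:,.0f}")
```

None of this settles the counterfactual, of course; it only shows the scale of what $30 million buys under the stated assumptions.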
This analysis obviously becomes much worse if the grant was in fact negative EV.