Venki's Shortform
post by Venki (venki-kumar) · 2025-02-12T19:16:19.566Z · LW · GW · 5 comments
comment by Venki (venki-kumar) · 2025-02-14T21:25:23.475Z · LW(p) · GW(p)
What are all the high-level answers to "What should you, a layperson, do about AI x-risk?" Happy to receive a link to an existing list.
Mine, from 5 minutes of recalling answers I've heard:
- Don’t work for OpenAI
- Found or work for an AI lab that gains a lead on capabilities, while remaining relatively safe
- Maybe work for Anthropic, they seem least bad
- Don’t work for any AI lab
- Don’t take any action which increases revenue of any AI lab
- Mourn
- Do technical [LW · GW] AI alignment [LW · GW]
- Don’t do technical AI alignment [LW · GW]
- Do AI governance & advocacy
- Donate to AI x-risk funds
- Cope
- Don't perform domestic terrorism
↑ comment by keltan · 2025-02-14T23:36:30.138Z · LW(p) · GW(p)
- Create educational content to sway opinions of large voting demographics. Especially when you can successfully signal that you are a part of that demographic.
- Do what Zvi is doing, but for a lower-IQ audience
- Form organisations in your local area
- Try your best to avoid making AI a culture war
- Stay grounded
↑ comment by Kaarel (kh) · 2025-02-14T21:45:29.653Z · LW(p) · GW(p)
- make humans (who are) better at thinking [LW · GW] (imo maybe like continuing this way forever, not until humans can "solve AI alignment")
- think well. do math, philosophy, etc. learn stuff. become better at thinking
- live a good life
↑ comment by the gears to ascension (lahwran) · 2025-02-15T23:02:21.326Z · LW(p) · GW(p)
Your link to "don't do technical ai alignment" does not argue for that claim. In fact, it appears to be based on the assumption that the opposite is true, but that there are a lot of distractor hypotheses for how to do it that will turn out to be an expensive waste of time.
comment by Venki (venki-kumar) · 2025-02-12T14:19:42.747Z · LW(p) · GW(p)
Labor magnification as a measure of AI systems.
Cursor is Mag(Google SWE, 1.03) if Google would rather have access to Cursor than 3% more SWEs at median talent level.
A Mag(OpenAI, 10) system is one that OpenAI would rather have than 10x more employees at median talent level.
A time-based alternative is useful too, in cases where it's hard to envision adding that many more employees.
A tMag(OpenAI, 100) system is one that OpenAI would rather have than a 100x time-acceleration of its current employee pool.
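As a minimal sketch of these definitions (the production function `output_fn` and all numbers below are my own assumptions for illustration, not anything specified in the post), both tests reduce to a single indifference comparison:

```python
# A minimal sketch of the Mag/tMag indifference tests.
# `output_fn` is an assumed stand-in for the org's production function
# (value produced as a function of headcount, or of available time);
# it is not specified in the original post.

def is_mag(output_with_system: float, output_fn, workforce: int, k: float) -> bool:
    """True if the system clears Mag(org, k): the org would rather have it
    than k-times more employees at median talent level."""
    return output_with_system >= output_fn(round(k * workforce))


def is_tmag(output_with_system: float, output_fn, time_budget: float, k: float) -> bool:
    """True if the system clears tMag(org, k): the org would rather have it
    than a k-times time-acceleration of its current employee pool."""
    return output_with_system >= output_fn(k * time_budget)


# Invented example: Cursor as Mag(Google SWE, 1.03) says output with Cursor
# matches the output of a 3% larger median-talent workforce.
output_fn = lambda n: 100.0 * n ** 0.9  # assumed diminishing-returns production
print(is_mag(output_with_system=output_fn(round(1.03 * 10_000)),
             output_fn=output_fn, workforce=10_000, k=1.03))
# True (exactly at the indifference point)
```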
Given these definitions, some notes:
1. Mag(OpenAI, 100) is my preferred high-water mark for self-improving AI, one where we'd expect takeoff unless there's a sudden and exceedingly hard scaling wall.
2. I prefer this framework over t-AGI [LW · GW] in some contexts, as I expect that we'll see AI systems able to do plenty of 1-month tasks before they can do all 1-minute tasks. OpenAI's Deep Research already comes close to confirming this. Magnification is more helpful because it measures shortcomings by whether they can be cheaply shored up by human labor (a back-of-envelope illustration follows below).
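To make that last point concrete, here's a back-of-envelope sketch with invented numbers: a system can fail plenty of 1-minute tasks and still carry a real magnification, so long as patching those failures with human labor is cheap relative to the labor it saves on long tasks.

```python
# Back-of-envelope illustration with invented numbers; none of these
# figures come from the original post.

saved_hours = 1_000       # assumed: human hours the system saves on long tasks
patch_hours = 50          # assumed: human hours spent covering its short-task failures
workforce_hours = 10_000  # assumed: hours of the current median-talent pool

# Net labor contributed by the system, after cheaply shoring up its shortcomings:
net_saved = saved_hours - patch_hours

# Rough effective multiplier on the workforce, in the spirit of Mag(org, k):
mag_estimate = (workforce_hours + net_saved) / workforce_hours
print(f"Mag estimate ~ {mag_estimate:.3f}")  # ~1.095
```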