Some quotes from Tuesday's Senate hearing on AI
post by Daniel_Eth · 2023-05-17T12:13:04.449Z · LW · GW · 9 comments
Comments sorted by top scores.
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-05-17T12:44:50.067Z · LW(p) · GW(p)
It's fascinating how Gary Marcus has become one of the most prominent advocates of AI safety, and particularly what he calls long-term safety, despite being wrong on almost every prediction he has made to date.
I read a tweet that said something to the effect that GOFAI researchers remain the best AI safety researchers, since nothing they did worked out.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-05-18T22:13:29.572Z · LW(p) · GW(p)
Seriously, how did he do that? I think it's important to understand. Maybe it's as some people cynically told me years ago -- In DC, a good forecasting track record counts for less than a piece of toilet paper? Maybe it's worse than that -- maybe being active on Twitter counts for a lot? Before I cave to cynicism I'd love to hear other takes.
Replies from: alexander-gietelink-oldenziel
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-05-18T23:25:16.834Z · LW(p) · GW(p)
It must be said that he was already quite a notable/influential person before this, I think.
He's a student of Chomsky and knows a lot of the big public intellectuals. He's had a lot of time to build up a reputation.
But yeah I agree it's remarkable.
comment by the gears to ascension (lahwran) · 2023-05-17T16:39:33.359Z · LW(p) · GW(p)
Remember that this hearing is almost entirely a report to the public, communicating the existing state of the political disagreement process; almost nothing new happened, and everyone was effectively playing a game of chess over what ads to invite different people to make.
It's odd that Marcus was the only serious safety person on the stand. He's been trying somewhat, but he, like the others, has perverse capability incentives. He is also known for complaining incoherently about deep learning at every opportunity and making bad predictions even about things he is sort of right about, and he disagreed with potential allies on nuances that weren't the key point. He said solid things on object-level opinions, and if he got taken seriously by anyone that's promising, but given Altman's intense political savvy it doesn't seem like Marcus really provided much of a contrast at all.
comment by Sammy Martin (SDM) · 2023-05-17T22:39:17.218Z · LW(p) · GW(p)
One absolutely key thing got loudly promoted: that all cutting-edge models should be evaluated for potentially dangerous properties. As far as I can tell, no one objected to this.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-05-18T22:14:28.289Z · LW(p) · GW(p)
Specifically, for dangerous capabilities, which is even better.
comment by trevor (TrevorWiesinger) · 2023-05-17T19:23:09.054Z · LW(p) · GW(p)
Sam Altman: "Number 2, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list on the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world."
I think the "dangerous capability evaluations" standard makes sense in the current policy environment.
Tons of people in policy can easily understand and agree that there are clear thresholds in AI that are really, really serious problems that shouldn't be touched with a ten-foot pole, and "things like self-replication" is a good way to put it.
Senator Kennedy (R-LA): "Are there people out there that would be qualified [to administer the rules if they were implemented]?"
Sam Altman: "We would be happy to send you recommendations for people out there."
The other thing I agreed with was the revolving-door recommendations. There are parts of government that are actually surprisingly competent (especially compared to the average government office), and their secret is largely recruiting top talent from the private sector to work as advisors for a year or so and then return to their natural environment. The classic government officials stay in charge, but they have no domain expertise, and often zero interest in learning any, so they basically defer to the expert advisors and do nothing except handle the office politics so that the expert advisors don't have to (which is their area of expertise anyway). It's generally more complicated than that, but this is the basic dynamic.
It kinda sucks that OpenAI gets to be the one to do that, as a reward for defecting and being the first to accelerate, as opposed to ARC or Redwood or MIRI who credibly committed to avoid accelerating. But it's probably more complicated than that. DM me if you're interested and want to talk about this.
comment by Odd anon · 2023-11-16T09:18:07.653Z · LW(p) · GW(p)
Sam Altman (remember, the hearing was under oath): "We are not currently training what will be GPT-5; we don't have plans to do it in the next 6 months."
Interestingly, Altman confirmed that they were working on GPT-5 just three days before six months would have passed from this quote: May 16 -> November 16, and the confirmation was November 13. Unless they're measuring "six months" as half a year counted in days, in which case the confirmation came only about a day before the deadline. Or, if they just say "a month is 30 days, so 6 months = 180 days", six months after May 16 would be November 12, the day before the GPT-5 confirmation.
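For concreteness, a minimal sketch of the three readings of "six months", using the dates given above (the November 13 confirmation date is as reported in this comment; the labels and cutoffs are just illustrative):

from datetime import date, timedelta

statement = date(2023, 5, 16)      # date of the hearing quote
confirmation = date(2023, 11, 13)  # GPT-5 confirmation date given above

# Three possible readings of "the next 6 months"
deadlines = {
    "six calendar months": date(2023, 11, 16),
    "half a year counted in days (182)": statement + timedelta(days=182),
    "month = 30 days, so 180 days": statement + timedelta(days=180),
}

for label, deadline in deadlines.items():
    gap = (deadline - confirmation).days  # positive = confirmation came before the deadline
    print(f"{label}: deadline {deadline}, margin {gap:+d} day(s)")

# Prints margins of +3, +1, and -1 day(s) respectively.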
I wonder if the timing was deliberate.
Replies from: Daniel_Eth
↑ comment by Daniel_Eth · 2023-11-16T11:03:46.081Z · LW(p) · GW(p)
Seems possibly relevant that "not having plans to do it in the next 6 months" is different from "having plans to not do it in the next 6 months" (which is itself different from "having strongly committed to not do it in the next 6 months").