Being homeless sucks; it’s pretty legitimate to want to avoid that
I’ve found use of the term catastrophe/catastrophic in discussions of SB 1047 makes it harder for me to think about the issue. The scale of the harms captured by SB 1047 has a much much lower floor than what EAs/AIS people usually term catastrophic risk, like $0.5bn+ vs $100bn+. My view on the necessity of pre-harm enforcement, to take the lens of the Anthropic letter, is very different in each case. Similarly, while the Anthropic letter talks about the bill as focused on catastrophic risk, it also talks about “skeptics of catastrophic risk” - surely this is about eg not buying that AI will be used to start a major pandemic, rather than whether eg there’ll be an increase in the number of hospital systems subject to ransomware attacks bc of AI.
Perhaps when you share the post with friends you could quote some of the bits focused on progressive concerns?
a dramatic hardware shift like that is likely going to mean a significant portion of progress up until that shift in topics like interpretability and alignment may be going out the window.
Why is this the case?
The weights could be stolen as soon as the model is trained though
unless the nondisparagement provision was mutual
This could be true for most cases though
That seems like a valuable argument. It might be worth updating the wording under premise 2 to clarify this? To me it reads as saying that the configuration, rather than the aim, of OpenAI was the major red flag.
My impression is that post-board drama, they’ve de-emphasised the non-profit messaging. Also in a more recent interview Sam said basically ‘well I guess it turns out the board can’t fire me’ and that in the long term there should be democratic governance of the company. So I don’t think it’s true that #8-10 are (still) being pushed simultaneously with the others.
I also haven’t seen anything that struck me as communicating #3 or #11, though I agree it would be in OpenAI’s interest to say those things. Can you say more about where you are seeing that?
So the argument is that Open Phil should only give large sums of money to (democratic) governments? That seems too overpowered for the OpenAI case.
In that case OP’s argument would be saying that donors shouldn’t give large sums of money to any sort of group of people, which is a much bolder claim
I was more focused on the ‘company’ part. To my knowledge there is no such thing as a non-profit company?
Noting that while Sam describes the provision as being “about potential equity cancellation”, the actual wording says ‘shall be cancelled’ not ‘may be cancelled’, as per this tweet from Kelsey Piper: https://x.com/KelseyTuoc/status/1791584341669396560
Instances in history in which private companies (or any individual humans) have intentionally turned down huge profits and power are the exception, not the rule.
OpenAI wasn’t a private company (ie for-profit) at the time of the OP grant though.
Is that not what Altman is referring to when he talks about vested equity? My understanding was employees had no other form of equity besides PPUs, in which case he’s talking non-misleadingly about the non-narrow case of vested PPUs, ie the thing people were alarmed about, right?
What do you mean by pseudo-equity?
Did OpenAI have the for-profit element at that time?
Sure, but you weren’t providing reasons to not believe the argument, or reasons why your interpretation is at least as plausible
Zvi has already addressed this - arguing that if (D) was equivalent to ‘has a similar cost to >=$500m in harm’, then there would be no need for (B) and (C) detailing specific harms, you could just have a version of (D) that mentions the $500m, indicating that that’s not a sufficient condition. I find that fairly persuasive, though it would be good to hear a lawyer’s perspective
Why does that mean you shouldn’t post it?
I think calling this a strategic meaning is not that helpful. I would say the issue is that “isolated” is underspecified. It’s not like there was a fully fleshed out account that was then backtracked on, it’s more like: what was the isolation? Were they isolated from literally everyone who wasn’t Kat, Emerson or Drew, or were they isolated/pushed to isolate more than is healthy from people they didn’t need to have their ‘career face’ on for? We now know the latter was meant, but either was plausible.
This quote doesn’t say anything about the board member/s being people who are researching AI safety though - it’s Nathan’s friends who are in AI safety research not the board members.
I agree that based on this quote, it could have very well been just a subset of the board. But I believe Nathan’s wife works for CEA (and he’s previously MCed an EAG), and Tasha is (or was?) on the board of EVF US, and so idk, if it’s Tasha he spoke to and the “multiple people” was just her and Helen, I would have expected a rather different description of events/vibe. E.g. something like ‘I googled who was on the board and realised that two of them were EAs, so I reached out to discuss’. I mean maybe that is closer to what happened and it’s just being obfuscated, either way is confusing to me tbh.
Btw, by “out of date” do you mean relative to now, or to when the events took place? From what I can see, the tweet thread, the substack post and the podcast were all published the same day - Nov 22nd 2023. The link I provided is just 80k excerpting the original podcast.
I think you might be misremembering the podcast? Nathan said that he was assured that the board as a whole was serious about safety, but I don’t remember the specific board member being recommended as someone researching AI safety (or otherwise more pro safety than the rest of the board). I went back through the transcript to check and couldn’t find any reference to what you’ve said.
“And ultimately, in the end, basically everybody said, “What you should do is go talk to somebody on the OpenAI board. Don’t blow it up. You don’t need to go outside of the chain of command, certainly not yet. Just go to the board. And there are serious people on the board, people that have been chosen to be on the board of the governing nonprofit because they really care about this stuff. They’re committed to long-term AI safety, and they will hear you out. And if you have news that they don’t know, they will take it seriously.” So I was like, “OK, can you put me in touch with a board member?” And so they did that, and I went and talked to this one board member. And this was the moment where it went from like, “whoa” to “really whoa.””
They weren’t the only non employee board members though - that’s what I meant by the part about not being concerned about safety, that I took it to rule out both Toner and McCauley.
(Although if for some other reason you were only looking at Toner and McCauley, then no, I would say the person going around speaking to OAI employees is _less_ likely to be out of the loop on GPT-4’s capabilities)
Why do you think McCauley is likely to be the board member Labenz spoke to? I had inferred that it was someone not particularly concerned about safety, given that Labenz reported them saying they could easily have requested access to the model if they’d wanted to (and hadn’t). I took the point of the anecdote to be ‘here was a board member not concerned about safety’.
Could you link to some examples of “OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon”? I don’t have a twitter account so can’t search myself
In the latter case it is the 3rd party driving the article, airing the accusations in a public forum, and deciding how they are framed, rather than Avery.
If, without the 3rd party, Avery would have written an essentially identical article then the differences aren’t relevant. But in the more likely case where Avery is properly a “source” for an article for which the 3rd party is counterfactually responsible, then the 3rd party also bears more responsibility for the effect of the article on Pat’s reputation etc. Fortunately, the 3rd party, not being anonymous, can be practically judged for their choices in writing the article, in the final accounting.
And these situations are different again from someone posting under their real name but referring to sources who agreed to be sources on the condition of anonymity
I strongly think that much or even most of the commentary could have been discarded in favour of more evidence
I’m not sure, I haven’t been using Manifold for very long, sorry!
Why would attraction ruin the friendship?
You can also rate how someone resolved their market afterwards, which I assume does something
Ah I see. I took ‘recreational’, given NL’s context, to mean something like ‘ADHD medication taken recreationally’.
Thanks for checking! Have now figured out the issue: the thing I described was happening when Google Docs opened in Safari (which I knew), but I’ve now gotten it to open in the app proper.
I had thought people did push them for this?
Yeah Ben should have said illicit not illegal, because they are illegal to bring across the border except if you have a valid prescription, even if the place you purchased them didn’t require a prescription. But I wouldn’t consider it an unambiguous falsehood, like the following is mostly a sliding scale of frustrating ambiguity:
- ‘asked Alice to illegally bring Schedule II medication into the country’ [edit: entirely correct according to NL’s stating of the facts]
- ‘asked Alice to illegally bring Schedule II drugs into the country’ [some intermediate version, still completely factually correct but would be eliding the difference between meth and Adderall]
- ‘asked Alice to bring illegal drugs across the border’ [frustratingly bad choice of words that gives people a much worse impression than is accurate, from memory basically the thing that Ben said]
FYI, when I click on some proportion (possibly 100%?) of these links to the Google doc (including the links in your comment here) it just takes me to the very start of Google doc, the beginning of the contents section, and I can’t always figure out which section to click on. Possibly a mobile issue with Google docs, but thought I should let you know 🙂
Does anyone know why it is Francesca v. Harvard and not Gino v. Harvard?
What is “a good track record with respect to aging processes” referring to?
The assertion is that Sam sent the email reprimanding Helen to others at OpenAI, not to Helen herself, which is a fundamentally different move.
I can’t conceive of a situation in which the CEO of a non-profit trying to turn the other employees against the people responsible for that non-profit (ie the board) would be business-as-usual.
Most of this is interesting, and useful, speculation, but it reads as a reporting of facts…
I found this quite hard to parse fyi
Sorry yeah I could have explained what I meant further. The way I see it:
‘X is the most effective way that I know of’ = X tops your ranking of the different ways, but could still be below a minimum threshold (e.g. X doesn’t have to even properly work, it could just be less ineffective than all the rest). So one could imagine someone saying “X is the most effective of all the options I found and it still doesn’t actually do the job!”
‘X is an effective way’ = ‘X works, and it works above a certain threshold’.
‘X is Y done right’ = ‘X works and is basically the only acceptable way to do Y,’ where it’s ambiguous or contextual as to whether ‘acceptable’ means that it at least works, that it’s effective, or sth like ‘it’s so clearly the best way that anyone doing the 2nd best thing is doing something bad’.
This reads as some sort of confused motte and bailey. Are RSPs “an effective way” or “the most effective way… [you] know of”? These are different things, with each being stronger/weaker in different ways. Regardless, the title could still be made much more accurate to your beliefs, e.g. ~’RSPs are our (current) best bet on a pause’. ‘An effective way’ is definitely not “i.e … done right”, but neither is “the most effective way… that I know of”.
I understood the original comment to be making essentially the same point you’re making - that lying has a bad track record, where ‘lying has a bad track record of causing mistrust’ is a case of this. In what way do you see them as distinct reasons?
I understood NinetyThree to be talking about vegans lying about issues of health (as Elizabeth was also focusing on), not about the facts of animal suffering. If you agree with the arguments on the animal cruelty side and your uncertainty is focused on the health effects on you of a vegan diet vs your current one (which you have 1st hand data on), it doesn’t really matter what the meat industry is saying as that wasn’t a factor in the first place
Could you clarify who you are defining as carnists?
They’re talking about technical research orgs/labs, not ancillary orgs/projects
I would not consider CEA to be part of the rationality community
I definitely think Ben should be flagging anywhere in the post that he has made edits.
You say "if published as is", not "if published now". Is what you're saying in the comment that, if Ben had waited a week and then published the same post, unedited, you would not want to sue? That is not what is conveyed in the email.