Comments sorted by top scores.
comment by Tomás B. (Bjartur Tómas) · 2021-09-06T18:27:13.648Z · LW(p) · GW(p)
I think it is fine to take notes, and fine to share them with friends. I'd prefer that this not be posted publicly on the web, as the reason he did not want to be recorded was that it allowed him to speak more freely.
Replies from: p.b.
↑ comment by p.b. · 2021-09-06T18:50:16.204Z · LW(p) · GW(p)
I considered this, but
a) there is a big difference between a recording that can be quote-mined by a million journalists and some relatively sparse notes that offer all the plausible deniability he could want (and then some).
b) it was clear that this information was going to be discussed in the ML community and much of that discussion would be in writing, so he could have made a request to refrain from sharing if that was important to him.
c) he went out of his way to make this information available to the ACX community. I think it would be weird if we were now somehow not supposed to write about it.
Replies from: Benito, Bjartur Tómas
↑ comment by Ben Pace (Benito) · 2021-09-07T01:49:22.534Z · LW(p) · GW(p)
Hm, seems like an obvious disincentive for speaking at ACX events if, on requesting no recording+transcript, someone writes up notes of what you said and publishes them on the web.
Replies from: p.b.
↑ comment by p.b. · 2021-09-07T08:21:34.673Z · LW(p) · GW(p)
Depends on the notes? This is miles away from a transcript.
It also seems an obvious disincentive to do public speaking if you don't want anything of what you say discussed publicly. But he did do it.
And yes this was public speaking to a specific community. But this is also a post on a forum of that specific community.
If you don't want any notes to be published this post is a good incentive to make that explicit in the future or even retroactively.
Replies from: Bjartur Tómas
↑ comment by Tomás B. (Bjartur Tómas) · 2021-09-07T14:59:57.334Z
>If you don't want any notes to be published this post is a good incentive to make that explicit in the future or even retroactively.
I consider this point to be slightly uncivil, but we will be explicit in the future.
↑ comment by Tomás B. (Bjartur Tómas) · 2021-09-06T20:06:53.596Z · LW(p) · GW(p)
If possible on this site, perhaps a good compromise would be to make it available to LessWrong members only.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2021-09-07T21:05:22.152Z · LW(p) · GW(p)
That seems better to me, though I don't know what the technical options are. I didn't realize this was being shared so clearly against Sam's wishes; seems pretty bad.
Some options I like that come to mind include:
- p.b. deleting the post (ideally followed up by someone posting a short thing on LW summarizing what went down in this thread, so we have a record of that much).
- p.b. blanking the notes in the post, so it can only be viewed in History.
- p.b. doing one of the above, plus making a private Google Doc that contains the notes (and sharing it with people who request access).
(Obviously p.b. can do what they want, assuming the moderators don't think an intervention makes sense here. But the above is what I'd request, if it's true that Sam wanted notes to not be shared publicly.)
(Also, obviously, the cat's fairly out of the bag in terms of this post's contents spreading elsewhere on the Internet. But LW is still a pretty high-traffic site, so we can still do stuff to not add fuel to the fire. And there's value in clearly signaling our moral position here regardless.)
comment by Lukas Finnveden (Lanrian) · 2021-09-05T23:38:45.747Z · LW(p) · GW(p)
I wrote down some places where my memory disagreed with the notes. (The notes might well be more accurate than my memory, but I thought I'd flag them in case other people's memories agree with mine. Also, this list is not exhaustive, e.g. there are many things in the notes that I don't remember, but where I'd be unsurprised if I had just missed them.)
AGI will not be a binary moment. We will not agree on the moment it did happen. It will be gradual. Warning sign will be, when systems become capable of self-improvement.
I don't remember hearing that last bit as a generic warning sign, but I might well have missed it. I do remember hearing that if systems became capable of self-improvement (sooner than expected?), that could be a big update towards believing that fast take-off is more likely (as mentioned in your next point).
AGI will not be a pure language model, but language will be the interface.
I remember both these claims as being significantly more uncertain/hedged.
AGI (program able to do most economically useful tasks ...) in the first half of the 2030ies is his 50% bet, bit further out than others at OpenAI.
I remembered this as being a forecast for ~transformative AI, and as explicitly not being "AI that can do anything that humans can do", which could be quite a bit longer. (Your description of AGI is sort-of in-between those, so it's hard to tell whether it's inconsistent with my memory.)
Merging via CBI most likely path to a good outcome.
I was a bit confused about this answer in the Q&A, but I would not have summarized it like this. I remember claims that some degree of merging with AI is likely to happen conditional on a good outcome, and maybe a claim that CBI was the most likely path towards merging.
Replies from: gwern, p.b.
↑ comment by gwern · 2021-09-06T01:47:59.529Z · LW(p) · GW(p)
I do remember hearing that if systems became capable of self-improvement (sooner than expected?), that could be a big update
The way I heard that bit was that he said he expected it to go smoothly; then someone asked him what it would take to change his mind and what would be a 'fire alarm', and he said self-improvement with some sudden jumps in abilities is where he'd start to seriously worry about a hard takeoff.
Replies from: p.b.
↑ comment by p.b. · 2021-09-06T08:03:40.033Z
I remember "if the slope of the abilities graph starts changing a lot" for example via "big compute saving innovation" or "self-improvement" then he will update towards fast take-off.
Replies from: wunan
↑ comment by wunan · 2021-09-06T20:51:30.738Z · LW(p) · GW(p)
Yeah, my main takeaway from that question was that a change in the slope of the abilities graph was what would convince him of an imminent fast takeoff. Presumably the x-axis of the graph is either time (i.e. the date) or compute, but I'm not sure what he'd put on the y-axis, and there wasn't enough time to ask a follow-up question.
↑ comment by p.b. · 2021-09-06T07:59:55.456Z · LW(p) · GW(p)
I don't remember hearing that last bit as a generic warning sign, but I might well have missed it. I do remember hearing that if systems became capable of self-improvement (sooner than expected?), that could be a big update towards believing that fast take-off is more likely (as mentioned in your next point).
He mentioned the self-improvement part twice, so you probably missed the first instance.
I remember both these claims as being significantly more uncertain/hedged.
Yes, all the (far) future claims were more hedged than I express here.
I remembered this as being a forecast for ~transformative AI, and as explicitly not being "AI that can do anything that humans can do", which could be quite a bit longer. (Your description of AGI is sort-of in-between those, so it's hard to tell whether it's inconsistent with my memory.)
I think the difference between "transformative AI" and "AI that can do most economically useful tasks" is not that big? But because of his expectation of very gradual improvement (+ I guess different abilities profile compared to humans) the "when will AGI happen"-question didn't fit very well in his framework. I think he said something like "taking the question as intended" and he did mention a definition along the lines of "AI that can do x tasks y well", so I think his definition of AGI was a bit all over the place.
I was a bit confused about this answer in the Q&A, but I would not have summarized it like this. I remember claims that some degree of merging with AI is likely to happen conditional on a good outcome, and maybe a claim that CBI was the most likely path towards merging.
Yes, I think that's more precise. I guess I shortened it a bit too much.
Replies from: Lanrian
↑ comment by Lukas Finnveden (Lanrian) · 2021-09-06T11:34:04.590Z · LW(p) · GW(p)
Thanks, all this seems reasonable, except possibly:
Merging (maybe via BCI) most likely path to a good outcome.
Which in my mind still carries connotations like ~"merging is an identifiable path towards good outcomes, where the most important thing is to get the merging right, and that will solve many problems along the way". Which is quite different from the claim "merging will likely be a part of a good future", analogous to e.g. "pizza will likely be a part of a good future". My interpretation was closer to the latter (although, again, I was uncertain how to interpret this part).
Replies from: p.b.
comment by Tomás B. (Bjartur Tómas) · 2021-09-07T14:54:51.363Z · LW(p) · GW(p)
See, it is on the front page of Hacker News now, and all over Reddit. I'm the person who books guests for Joshua's meetups, and I feel like this is a sort of defection against Altman and future attendees of the meetup. As I said, I think notes are fine and sharing them privately is fine, but publishing on the open web vastly increases the probability of some journalist writing a click-bait story about your paraphrased take on what Altman said.
Actually attending the meetups was a trivial inconvenience that reduced the probability of this occurring. Perhaps the damage is now done, but I really don't feel right about this.
I take some responsibility for not being explicit about not publishing notes on the web; for whatever reason, this was not a problem last time.
Replies from: None, Taleuntum
↑ comment by [deleted] · 2021-09-07T16:24:12.123Z · LW(p) · GW(p)
For what it's worth, I took notes on the event and did not share them publicly because it was pretty clear to me that doing so would have been a defection, even though it was implicit. Obviously, just because it was clear to me doesn't mean it was clear to everyone, but I thought it still made sense to share this as a data point in favor of "very well-known person doing an unrecorded meetup" implying "don't post and promote your notes publicly."
I am also disappointed to see that this post is so highly upvoted and positively commented upon despite:
- Presumably others were aware of the fact that the meetup was not supposed to be recorded, and LW is supposedly characteristically aware of coordination problems/defection/impact on incentives. At least to me, it seems likely that in expectation this post spreading widely would make Sam and people like him less likely to speak at future events and less trusting of the community that hosted the event. This seems not worth the benefit of having the notes posted, given that people who were interested could have attended the event or asked someone about it privately.
- As Sean McCarthy and others pointed out, there were some at best misleading portrayals of what Altman said during his Q&A.
↑ comment by Ben Pace (Benito) · 2021-09-07T17:25:57.225Z · LW(p) · GW(p)
I strong upvoted it initially when I assumed the state of affairs was “nobody had done the work to make a recording+transcript, so this post is helping out”. Then I switched to strong downvote when I read Bjartur’s comment saying Sam had requested no transcript or recording.
Replies from: None
↑ comment by Taleuntum · 2021-09-07T20:26:43.681Z · LW(p) · GW(p)
I was the one who first shared the notes on Reddit. Unfortunately, I did not know that recording the speech was against the wishes of Sam Altman, but I should have checked why there was no recording (at the time I simply thought that no one had bothered to do it). I'm sorry; FWIW, I deleted the post.
comment by Ruby · 2021-09-07T16:38:24.000Z · LW(p) · GW(p)
I came here to also express concern that this post violated Sam Altman's desire to be off the record. I'm not sure what to do at this point.
Replies from: cousin_it
↑ comment by cousin_it · 2021-09-07T17:32:43.980Z · LW(p) · GW(p)
There's no way to remove it from the internet at this point, but the obvious thing to do for LW goodwill is to remove the post and replace it with an apology. I'm not sure why that hasn't happened yet.
Replies from: Bjartur Tómas, jimrandomh, Ruby
↑ comment by Tomás B. (Bjartur Tómas) · 2021-09-07T19:18:22.412Z · LW(p) · GW(p)
I would also like it to be removed.
↑ comment by jimrandomh · 2021-09-07T18:36:25.741Z · LW(p) · GW(p)
I wasn't at this meetup, but it sounds to me like he was speaking to a large group of people who did not individually agree to any sort of confidentiality. I also think the bar for moderators removing this sort of thing is very high; removing it would send a very bad signal about our trustworthiness and the trustworthiness of our ecosystem in general.
(I'm a moderator, but speaking here as an individual.)
Replies from: reverendfoom
↑ comment by reverendfoom · 2021-09-07T19:42:10.011Z · LW(p) · GW(p)
Another point about "defection" is the question of which action counts as a defection, and with respect to whom.
Sam Altman is the leader of an organization with a real chance of bringing about the literal end of the world, and I find any and all information about his thoughts and his organization to be of the highest interest for the rest of humanity.
Not disclosing whatever such information one comes into contact with, except where disclosure would speed up potentially even-less-alignment-focused competitors, is a defection against the rest of us.
If this were an off-the-record meeting with a head of state discussing plans for expanding and/or deploying nuclear weapons capabilities, nobody would dare suggest taking it down, inaccuracy and incompleteness notwithstanding.
Now, Sam Altman appeals emotionally to a lot of us (me included) as much more relatable, being an apparently prosocial nerdy tech guy, but in my opinion he's a head of state in control of WMDs and should be (and expect to be) treated as such.
Update: honestly curious about reasons for downvotes if anyone is willing to share. I have no intention to troll or harm the discussion and am willing to adapt writing style. Thank you.
Replies from: Lanrian
↑ comment by Lukas Finnveden (Lanrian) · 2021-09-07T20:17:16.676Z · LW(p) · GW(p)
The point of the defection/cooperation thing isn't just that cooperation is a kindness to Sam Altman personally, which can be overridden by the greater good. The point is that generally cooperative behavior, and generally high amounts of trust, can make everyone better off. If it was true as you said that:
he's a head of state in control of WMDs and should be (and expect to be) treated as such
and as a consequence, he e.g. expected someone to record him during the Q&A, then he would presumably not have done the Q&A in the first place, or would have shared much less information. This would have led to humanity learning less about OpenAI's plans.
And this is definitely not a single-shot interaction. This was Sam's second Q&A at an ACX meetup, and there was no reason to expect it to be the last. Moreover, there have been a lot of interactions between the alignment community (including some who write on LW) and OpenAI in the past. And given that OpenAI's decisions about alignment-related things matter a lot (as you say), it seems important to keep up good relations and high degrees of trust.
honestly curious about reasons for downvotes if anyone is willing to share
I initially downvoted, have since retracted it. Since trust/cooperation can be quite fragile and dependent on expectations about how people will behave, I became worried when I read you as basically announcing that you and other people should and will defect in the future. And I wanted a quick way to mark disagreement with that, to communicate that this isn't a generally accepted point of view. But your point was phrased in a perfectly civil manner, so I should really have just taken the time to write a response, sorry.
Replies from: reverendfoom
↑ comment by reverendfoom · 2021-09-07T21:02:47.031Z · LW(p) · GW(p)
I appreciate the response and stand corrected.
The point about it being an iterated prisoner's dilemma is a good one, and I would rather there be more such ACX instances where he shares even more of his thinking because of our cooperative/trustworthy behavior than have this be the last one, or have the next ones be filtered PR-speak.
A small number of people in the alignment community repeatedly getting access to better information and being able to act on it beats the value of this one single post staying open to the world. And even in the case of "the cat being out of the bag," hiding/removing the post would probably do good as a gesture of cooperation.
comment by Amateur · 2021-09-06T09:55:23.267Z · LW(p) · GW(p)
Thanks for taking and sharing your notes! Adding some of my own below that I haven't seen mentioned yet:
- Sam made a case that people will stop caring about the size of the models as measured by the number of parameters, but will instead care about the training compute (with models that train continuously being the ultimate target). Parameters will get outdated in the same way we don't measure CPU performance using gigahertz anymore.
- The main bottleneck on the path to AGI at the moment is algorithmic/theoretical breakthroughs. There were times when Sam was convinced compute was the bottleneck, but not anymore. OpenAI believes there's enough compute in the world to be able to run an AGI (whenever the algo breakthroughs arrive). He also shared that the most pessimistic scenarios they've modelled put the power requirements for running the hardware for an AGI at around one nuclear plant, which in his opinion is not too much, and also means that you could put that machine close to clean energy (e.g. near a volcano in Iceland or a waterfall in Canada).
- On differences between the approaches of OpenAI and DeepMind: DeepMind seems to involve a lot more neuroscientists and psychologists in their research. OpenAI studies deep learning "like people study physics".
- Sam mentioned that the name "OpenAI" is unfortunate, but they are stuck with it. The reason they don't release some of their models along with their weights and biases is so that they can keep some level of control over their usage, and can shut them down if they need to. He said that they like the current API-based approach because it lets them release those models without completely giving away control over them.
- On figuring out whether the model is conscious, Sam shared one speculation by a researcher from OpenAI: make sure to train the model on data that does not mention "self-awareness" or "consciousness" in any way, then at run time try to explain those concepts. If the model responds with something akin to "I understand exactly what you're saying", it's a worrying sign about that model's self-awareness. Also, as pointed out above, they have no idea whether intelligence can be untangled from consciousness.
- The whole discussion about the merits of leaving academia (or, more generally, an organization that does not reward thinking about AI safety) vs. staying to persuade some of the smartest people who are still part of that system.
↑ comment by Gunnar_Zarncke · 2021-09-06T22:02:04.413Z · LW(p) · GW(p)
Parameters will get outdated in the same way we don't measure CPU performance using gigahertz anymore.
This was in the context of him being asked about the number of parameters of GPT-4. He said that the big changes are not in the number of parameters but in the structure of the model. And he made a pretty excited and/or confident impression on me when he said it. I wouldn't be surprised if the next GPT-N is much better without many more parameters.
comment by Sean McCarthy (sean-mccarthy) · 2021-09-06T16:29:24.455Z · LW(p) · GW(p)
Reading through these notes I was alarmed by how much they misrepresented what Sam Altman said. I feel bad for the guy that he came on and so thoughtfully answered a ton of questions and then it gets posted online as "Sam Altman claims _____!"
An example:
GPT-5 might be able to pass the Turing test. But probably not worth the effort.
A question was asked about how far out he thought we were from being able to pass the Turing Test. Sam thought that this was technically feasible in the near term but would take a lot of effort that was better spent elsewhere, so they were quite unlikely to work on it. So "GPT-5 might be able to pass the Turing test." is technically true because "might" makes the whole sentence almost meaningless, but to the extent that it does have meaning, that meaning is giving you directionally false information.
I didn't take notes, and I don't want to try to correct the record from memory and plausibly make things worse. But just, take these all with a huge grain of salt. There's a lot where these notes say "X" but what I remember him saying was along the lines of "that's a good question, it's tricky, I'm currently leaning towards X over Y". And some things that are flat wrong.
Replies from: p.b., RobbBB
↑ comment by p.b. · 2021-09-06T17:41:28.688Z · LW(p) · GW(p)
Nowhere did I write "Sam Altman claims ... !"
Instead I wrote: "These notes are not verbatim [...]While note-taking I also tended to miss parts of further answers, so this is far from complete and might also not be 100% accurate. Corrections welcome."
Talk about badly misrepresenting ...
I fail to see how "A question was asked about how far out he thought we were from being able to pass the Turing Test. Sam thought that this was technically feasible in the near term but would take a lot of effort that was better spent elsewhere, so they were quite unlikely to work on it." is misrepresented by "GPT-5 might be able to pass the Turing test. But probably not worth the effort."
Sam Altman literally said that passing the Turing test might be feasible with GPT-5, but not worth the effort. Where is the "directionally false information"? To me your longer version is pretty much exactly what I express here.
My notes are 30+ bullet point style sentences that catch the gist of what Sam Altman said in answers that were often several minutes long. But this example is the "misrepresentation" you want to give as example? Seriously?
If some things are flat out wrong, say which and offer a more precise version.
Replies from: sean-mccarthy
↑ comment by Sean McCarthy (sean-mccarthy) · 2021-09-06T21:37:35.757Z · LW(p) · GW(p)
You've improved the summary, thank you.
The main issue is still missing context. For example, if someone asks "is x possible" and he answers that it is, summarizing that as "x is possible" is misleading, simply because there is a difference between calling out a thing unprompted and answering a question about it. The former is what I meant by "Sam claims".
His answer about the Turing test was that they were planning not to do it, though if they tried, they thought they could build that with a lot of effort. You summarized it as "GPT-5 might be able to pass it." I don't know what else to say about that; they seem pretty different to me.
Other people have mentioned some wrong things.
Replies from: p.b., gwern
↑ comment by p.b. · 2021-09-07T05:47:56.908Z · LW(p) · GW(p)
For example, if someone asks "is x possible" and he answers that it is, summarizing that as "x is possible" is misleading, simply because there is a difference between calling out a thing unprompted and answering a question about it. The former is what I meant by "Sam claims".
At the very top it says "Q&A", i.e. all of these are answers to questions; none of them are Sam Altman shouting from the rooftops.
His answer about the Turing test was that they were planning not to do it, though if they tried, they thought they could build that with a lot of effort. You summarized it as "GPT-5 might be able to pass it." I don't know what else to say about that; they seem pretty different to me.
I did not summarize it as "GPT-5 might be able to pass it". I said "GPT-5 might be able to pass it. But probably not worth the effort." Which to my mind clearly shows that a) there would be an engineering effort involved, b) this would be a big effort, and c) therefore they are not going to do it. He specifically mentioned GPT-5 as a model where this might become feasible.
Also: In one breath you complain that in Sam Altman's answers there was a lot of hedging that is missing here, and in the next you say ""might" makes the whole sentence almost meaningless". Like, what do you want? I can't simultaneously hedge more and less.
↑ comment by gwern · 2021-09-06T22:19:30.524Z · LW(p) · GW(p)
Yeah, it's a bit of a blind men/elephant thing. Like the Turing test thing was all of those, because he said something along the lines of "we don't want to aim for passing the Turing test (because that's pointless/useless and OA can only do a few things at a time) but we could if we put a few years into it, and a hypothetical GPT-5* alone could probably do it". All 3 claims ("we could solve the Turing test", "a GPT-5 would probably solve Turing", "we don't plan to solve Turing") are true and logically connected, but different people will be interested in different parts.
* undefined but presumably like a GPT-4 or GPT-3 in being another 2 OOM or so beyond the previous GPT
↑ comment by Rob Bensinger (RobbBB) · 2021-09-06T18:46:41.450Z · LW(p) · GW(p)
I didn't take notes, and I don't want to try to correct the record from memory and plausibly make things worse.
Maybe some attendees could make a private Google Doc that tries to be a little more precise about the original claims/context/vibe, then share it after enough attendees have glanced over it to make you confident in the summary?
I don't expect this would be a huge night-and-day difference from the OP, but it may matter a fair bit for a few of the points. And part of what bothers me right now is that I don't know which parts to be more skeptical of.
comment by Rob Bensinger (RobbBB) · 2021-09-06T09:29:12.748Z · LW(p) · GW(p)
If it's true that p.b. said some pretty false things about what was stated at the meet-up (per Lanrian), and that James Miller did the same in the comments (per Gwern), then I think the OP (and overall discussion) probably shouldn't be on the LW frontpage, at least until there's been more fact-checking and insertion of corrections.
I think some of the info in the OP (if true!) is pretty valuable to discuss, but I really hate the idea of LW spreading false gossip and rumors about someone's views. (Especially when it would have been relatively easy to check whether others at the meetup had the same recollection before posting.)
Replies from: p.b., gjm, Optimization Process, p.b., p.b., James_Miller
↑ comment by p.b. · 2021-09-06T10:27:01.349Z · LW(p) · GW(p)
I mean even if Lanrian's corrections are based on perfect recall, none of them would make any of my notes "pretty false". He hedged more here, that warning was more specific, the AGI definition was more like "transformative AGI" - these things don't even go beyond the imprecision in Sam Altman's answers.
The only point where I think I should have been more precise is the one about the different "loss function". That was my interpretation in the moment, but it now seems to me much more uncertain whether that was actually what he meant.
I don't care about the frontpage, but if this post is seen by some as "false gossip and rumors about someone's views" I'd rather take it down.
↑ comment by gjm · 2021-09-06T09:56:08.636Z · LW(p) · GW(p)
I don't think that whether a post should be on the frontpage should be much influenced by what's being said in its comments by a third party.
I don't think I think we should be worried that something's going to do harm by spreading less-than-perfectly-accurate recollections when it says up front "These notes are not verbatim [...] While note-taking I also tended to miss parts of further answers, so this is far from complete and might also not be 100% accurate. Corrections welcome.". Lanrian's alternate versions don't seem so different to me as to make what p.b. wrote amount to "false gossip and rumors".
↑ comment by Optimization Process · 2021-09-06T15:44:46.370Z · LW(p) · GW(p)
Everything in the OP matches my memory / my notes, within the level of noise I would expect from my memory / my notes.
↑ comment by p.b. · 2021-09-06T11:27:49.847Z · LW(p) · GW(p)
I also don't think I could easily have checked whether others at the meetup had the same recollection. I had to leave pretty much when Sam Altman did and I didn't know anybody attending.
The fact of the matter is that, of the commenters so far, gwern, NNOTM, Amateur, and James Miller seem to have attended the meetup and at least didn't express any disagreement with my recollections, while Lanrian's (well-intended and well-taken) corrections are about differences in focus or degree in a small number of statements.
↑ comment by James_Miller · 2021-09-06T12:01:10.862Z · LW(p) · GW(p)
My claims mostly relate to what Sam Altman said, in response to my question, in discussion room 1 after Altman's official talk had ended. Why are you so confident that I have said false things about what he stated? Gwern was, I believe, just referring to what Altman said in his official talk. You should have a very high standard of proof before you accuse someone of saying "pretty false things".
I think people irrationally reject evidence that UFOs are aliens. Pilots have reported that they feared disclosing what they had seen UFOs do because no one would believe them. Ironically, if my version of what Altman said is true, we have a case here where I'm being falsely accused of spreading false information for accurately reporting that a hyper-high-status person thinks UFOs are likely aliens. Something about UFOs causes normally rational people to jump to the conclusion that anyone offering evidence for the alien hypothesis is either lying, deluded, or a fool.
comment by Nnotm (NNOTM) · 2021-09-06T01:13:56.227Z · LW(p) · GW(p)
One question was whether it's worth working on anything other than AGI given that AGI will likely be able to solve these problems; he agreed, saying he used to work with 1000 companies at YC but now only does a handful of things, partially just to get a break from thinking about AGI.
comment by Gunnar_Zarncke · 2021-09-06T21:56:58.083Z · LW(p) · GW(p)
Some more notes plus additions to some of your comments (quoted):
Robotics is lagging because robot hardware is lagging. Also it's easier to iterate with bits alone.
I got this a bit differently: that it's lagging not because of lagging hardware but because robotics is hard.
He mentioned partial reprogramming as a strategy for age extension. Here is a link to what this seems to be about: https://pubmed.ncbi.nlm.nih.gov/31475896/#:~:text=Alternatively%2C%20partial%20cell%20reprogramming%20converts,cocktails%20of%20specific%20differentiation%20factors.
In the context of Codex: Don't go into programming of 20-line programs - it is a solved problem. He expects significant progress within the next year.
AGI will (likely) not be a pure language model, but language might be the interface.
He didn't specifically say that it would be the likely interface but he talked quite a bit about the power of language as an interface.
EA would benefit from a startup culture where things are built. More doing instead of thinking and strategizing.
Consciousness is an underexplored area. He had an interesting example: If you build a powerful AI, do not explicitly build in a self, and then talk with it about consciousness, does it say: "Yeah, that is what it's like for me too"?
Behavioral cloning probably much safer than evolving a bunch of agents. We can tell GPT to be empathic.
He specifically mentioned the competition between agents as a risk factor.
On the question of what questions he'd like to get asked: how to find out what to do with life. Though I'm very unsure about his exact words and what he meant by it.
He expects a concentration in power from AGI. And he seemed worried about it to me.
“I’m an ambitious 19-year-old, what should I do?”
He said that he gets asked this question often and seemed to google for his standard reply, which was quickly posted in the chat:
https://blog.samaltman.com/advice-for-ambitious-19-year-olds
ADDED: On GPT-4 he was asked about the size of the context window. He said that he thinks a window as big as a whole article should be possible. He didn't say "article" specifically, but I remember something of that size.
Replies from: p.b.
↑ comment by p.b. · 2021-09-07T05:52:14.042Z · LW(p) · GW(p)
I got this a bit differently: that it's lagging not because of lagging hardware but because robotics is hard.
When I say "robot hardware" I don't mean compute hardware. He mentioned for example how human dexterity is far ahead of robots. The "robotics is hard" bit is partly in "easier to iterate with bits".
comment by reverendfoom · 2021-09-07T15:02:18.498Z · LW(p) · GW(p)
Did he really speak that little about AI Alignment/Safety? Does anyone have additional recollections on this topic?
The only relevant parts so far seem to be these two:
Behavioral cloning probably much safer than evolving a bunch of agents. We can tell GPT to be empathic.
And:
Chat access for alignment helpers might happen.
Both of which are very concerning.
"We can tell GPT to be empathetic" assumes it can be aligned in the first place so you "can tell" it what to do, and "be empathetic" is a very vague description of what a good utility function would be assuming one would be followed at all. Of course it's all in conversational tone, not a formal paper, but it seems very dismissive to me.
GPT-based "behavioral cloning" itself has been brought up by Vitalik Buterin and criticized by Eliezer Yudkowsky in this exchange between the two:
For concreteness: One can see how AlphaFold 2 is working up towards world-ending capability. If you ask how you could integrate an AF2 setup with GPT-3 style human imitation, to embody the human desire for proteins that do nice things... the answer is roughly "Lol, what? No."
As for "chat access for alignment helpers," I mean, where to even begin? It's not hard to imagine a deceptive AI using this chat to perfectly convince human "alignment helpers" that it is whatever they want it to be while being something else entirely. Or even "aligning" the human helpers themselves into beliefs/actions that are in the AI's best interest.
Replies from: Lanrian
↑ comment by Lukas Finnveden (Lanrian) · 2021-09-07T17:29:08.333Z · LW(p) · GW(p)
As a general point, these notes should not be used to infer anything about what Sam Altman thought was important enough to talk a lot about, or what his general tone/attitude was. This is because
- The notes are filtered through what the note-takers thought was important. There's a lot of stuff that's missing.
- What Sam spoke about was mostly a function of what he was asked about (it was a Q&A after all). If you were there live you could maybe get some idea of how he was inclined to interpret questions, what he said in response to more open questions, etc. But here, the information about what questions were asked is entirely missing.
- General attitude/tone is almost completely destroyed by the compression of answers into notes.
For example, IIRC, the thing about GPT being empathic was in response to some question like "How can we make AI empathic?" (i.e., it was not his own idea to bring up empathy). The answer was obviously much longer than the summary in the notes (so less dismissive). And directionally, it is certainly the case already that GPT-3 will act more empathic if you tell it to do so.
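To make that last point concrete, here is a minimal sketch of what "telling GPT-3 to be empathic" looks like in practice through the OpenAI API as it existed at the time. The prompt wording, engine choice, and sampling settings below are illustrative assumptions on my part, not anything Sam described:

```python
# Minimal sketch: conditioning GPT-3 on an instruction to respond empathetically.
# The prompt text, engine name, and sampling settings are illustrative choices,
# not anything from the Q&A.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.Completion.create(
    engine="davinci",  # assumed engine; any GPT-3 engine works the same way
    prompt=(
        "The following is a conversation with an assistant that responds "
        "with warmth and empathy.\n\n"
        "Human: I had a really rough day at work.\n"
        "Assistant:"
    ),
    max_tokens=60,
    temperature=0.7,
    stop=["Human:"],
)
print(response["choices"][0]["text"].strip())
```

The point is only that the instruction lives in the prompt, so "telling it to be empathic" is cheap to do and observably shifts the tone of the completion.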
Did he really speak that little about AI Alignment/Safety? Does anyone have additional recollections on this topic?
He did make some general claims that it was one of his top few concerns, that he felt like OpenAI had done some promising alignment work over the last year, that it was still an important goal for OpenAI's safety work to catch up with its capabilities work, that it was good for more people to go into safety work, etc. Not very many specifics, as far as I can remember.
comment by gwern · 2021-09-06T14:45:30.382Z · LW(p) · GW(p)
One observation I found interesting was that Altman said the OA API+Codex is profitable, but not profitable enough for the 'next generation' of models.
Replies from: p.b.
↑ comment by p.b. · 2021-09-06T15:01:18.404Z · LW(p) · GW(p)
Yeah, but he said "at current revenue", which is kind of unsurprising?
Codex/Copilot is all of 6 weeks old, and where are the GPT-3-based apps that see real usage?
Replies from: gwern
↑ comment by gwern · 2021-09-06T15:50:26.097Z · LW(p) · GW(p)
It's what I would have guessed given the estimated revenue numbers, but it's good to know. It also means that they're probably going to seek more VC (from MS?) to continue upgrading & scaling, since we're seeing increasing competition. (Google alone has posted at least 2 papers so far using LaMDA, which is at a comparable scale, for coding as well, and their mysterious 'Pathways' model is apparently multimodal and even larger.)
comment by Rob Bensinger (RobbBB) · 2021-09-06T23:11:16.964Z · LW(p) · GW(p)
Thanks for making various edits to the post, p.b.
I'd suggest making such edits in a way that makes the original version stay visible (e.g., as struck text). As is, the discussion is confusing because we've all read different versions. (I told Oliver a part I objected to -- "Merging via CBI most likely path to a good outcome" -- and he had seen a later version and hadn't been aware of what the original said.)
Clearly and very-visibly noting 'this was false/misleading' is also pretty important IMO, because I think it often takes unusual effort on the part of individuals and groups to properly un-update after hearing a falsehood. If we forget the falsehood, it can often just keep spreading through the social network side-by-side with the true, fixed info.
Replies from: Raemon, p.b.
↑ comment by Raemon · 2021-09-06T23:21:31.556Z · LW(p) · GW(p)
Mod note: I just toggled the post into a "major update" mode, which makes it easier to see version history and read the diffs. (This is currently an admin-only feature). I'm not sure whether it's worth the readability-cost of including all the struck-out parts, but meanwhile at least now there's an easy way to see the changes (click on the clock icon by the date of the post)
↑ comment by p.b. · 2021-09-07T07:58:47.379Z · LW(p) · GW(p)
I have changed very little so far, actually. Mostly because the "pretty false things" and the "flat out wrong" things don't seem to be forthcoming.
I think the "merging" statement is also the only change where I didn't make it very clear in the text that it had been changed, but I will add that.
I am very OK with "you said X. I remember Y. My version is correct because Z." That's not what I get here, except from Lanrian. Instead I get allegations and then ... no follow-up.
comment by Throwaway001 · 2021-09-07T21:14:56.566Z · LW(p) · GW(p)
I'm disappointed with the LessWrong community for the mess in these comments. When I originally read the post, it seemed pretty clear that it was a condensed interpretation of a Q&A session to be taken with a grain of salt, and with the filling in by other commenters I don't see how the claims of "falsified information" are warranted.
Also, it being posted without consent from Sam Altman seems like an error made in good faith by the original poster due to a misinterpretation of the rules. The comments here seem overly hostile for what looks like an error of wanting to share good resources with the community while misinterpreting the rules (referring to OP's response to being asked about the no-recording question). The easiest solution would be to ask Sam Altman what he wants done with the post (a little late now, I know), apologize for any problems, delete it if he wants, and then make it more explicitly known in the future that notes from these Q&As won't be allowed.
The response I've seen here seems really overdramatic for how simple the solution seems, which reflects poorly on a community that claims to be using rational judgement to figure out the world.
Replies from: Bjartur Tómas
↑ comment by Tomás B. (Bjartur Tómas) · 2021-09-07T21:39:30.766Z · LW(p) · GW(p)
I think it has mostly been pretty civil. I have nothing against the OP and don't think he is malicious. I just think the situation is unfortunate. I was not blameless. We should have explicitly mentioned in the emails not to publish notes. And I should have asked OP flat out to remove it in my initial reply, rather than making my initial, slightly timid attempt.
Most of our meetups are recorded and posted publicly and obviously we are fine with summarization and notes, but about 1/10 guests prefer them to not be recorded.
Replies from: Throwaway001
↑ comment by Throwaway001 · 2021-09-07T22:59:35.784Z · LW(p) · GW(p)
I think I may have misinterpreted the intent of the comments if that is the case, which is on me. To me the situation just seemed a bit too accusatory toward one person who I didn't think meant to make a mistake and was trying to offer valuable info, and in trying to state that, I think I came across a bit too harsh as well. I'll work on that for next time. It seems you have a good plan going forward, and I appreciate your reply.
comment by James_Miller · 2021-09-05T22:03:08.164Z · LW(p) · GW(p)
Sam Altman also said that the government admitted that UFOs are real. After the talk, in the "room" discussions, Sam expressed agreement that UFOs being aliens is potentially as important as AGI, but did not feel this was an issue he had time to work on.
Replies from: p.b., NNOTM, RobbBB
↑ comment by p.b. · 2021-09-06T07:47:21.555Z · LW(p) · GW(p)
Just to be clear, at no point did Sam Altman endorse "UFOs are aliens".
Replies from: James_Miller, James_Miller, Lanrian
↑ comment by James_Miller · 2021-09-06T12:14:13.632Z · LW(p) · GW(p)
Were you in discussion room 1 to hear the question I asked of Altman about UFOs? If not, you don't have a basis to say "at no point".
Replies from: p.b.
↑ comment by p.b. · 2021-09-06T13:50:04.266Z · LW(p) · GW(p)
Yes I was.
As I remember it, he did say that it was an important question to investigate. And that he didn't have the time to do it. And there seemed to be little a civilian could do to make progress on the question.
I never read any of that as "UFOs are likely to be aliens", but rather as "this is a mysterious phenomenon that demands an explanation".
Replies from: James_Miller, James_Miller
↑ comment by James_Miller · 2021-09-06T14:12:46.006Z · LW(p) · GW(p)
Do you remember my saying that the issue was as important as AGI and him agreeing with my statement?
Replies from: p.b.
↑ comment by p.b. · 2021-09-06T14:36:34.476Z · LW(p) · GW(p)
I think it comes down to exact phrases, which I don't remember. He may well have agreed with something like "but if these are aliens isn't it as important as AGI?", which is totally reasonable.
But at no point I thought "oh, Sam Altman thinks UFOs are aliens". Could be I just missed the definitive statement. Could be my prior on "UFOs = aliens" is so low that I interpreted everything he said in that direction.
I guess we'll just ask him the next time.
Replies from: James_Miller
↑ comment by James_Miller · 2021-09-06T14:42:15.256Z · LW(p) · GW(p)
Yes, and it could also be that I was so excited that the great Sam Altman might agree with me on an issue I greatly care about that I read something into what he said that wasn't there.
↑ comment by James_Miller · 2021-09-06T13:52:45.680Z · LW(p) · GW(p)
↑ comment by James_Miller · 2021-09-06T11:55:19.840Z · LW(p) · GW(p)
I believe he did in discussion Room 1 in response to my question. This occurred after his formal talk was over.
↑ comment by Lukas Finnveden (Lanrian) · 2021-09-06T11:19:16.081Z · LW(p) · GW(p)
Were you there throughout the post-Q&A discussion? (I missed it.)
Replies from: p.b.
↑ comment by p.b. · 2021-09-06T11:44:08.518Z · LW(p) · GW(p)
I left when Sam Altman left. But my notes don't encompass that part; there was a bit more elaboration, some small talk, and little additional stuff of interest (at least to me).
Edit: James Miller brought the UFO stuff up, so you probably missed that.
↑ comment by Nnotm (NNOTM) · 2021-09-06T01:06:34.055Z · LW(p) · GW(p)
Is that to be interpreted as "finding out whether UFOs are aliens is important" or "the fact that UFOs are aliens is important"?
Replies from: James_Miller
↑ comment by James_Miller · 2021-09-06T01:21:01.097Z · LW(p) · GW(p)
The second.
↑ comment by Rob Bensinger (RobbBB) · 2021-09-05T22:37:15.846Z · LW(p) · GW(p)
Timestamp / excerpt? As described, I think you need to be missing a lot of facts (or be reasoning very badly) in order to reach a conclusion like that.
Replies from: gwern, James_Miller
↑ comment by gwern · 2021-09-06T01:51:38.728Z · LW(p) · GW(p)
The context was that he was saying, to paraphrase, "that people would adapt to the changes from pervasive cheap energy & intelligence on tap [which he forecasts as coming in the next decades], however scary and weird we might find it, because the modern context is already weird and very different from human history; an example of this sort of human ability to cope with change is that the US government announced the other day that UFOs are real, and everyone just shrugged and carried on as usual." I didn't take him as endorsing the claim "yeah, space aliens are totally real, here, and buzzing pilots for kicks", necessarily.
↑ comment by James_Miller · 2021-09-06T01:00:11.105Z · LW(p) · GW(p)
It wasn't recorded. Actually, I was the one who asked the question in the room discussion. The evidence that the UFOs seen by the military are aliens is fairly strong. I've done three podcasts on the topic, including one with Robin Hanson. See https://soundcloud.com/user-519115521
Replies from: Insub
↑ comment by Insub · 2021-09-06T02:18:14.383Z · LW(p) · GW(p)
For those of us who don't have time to listen to the podcasts, can you give a quick summary of which particular pieces of evidence are strong? I've mostly been ignoring the UFO situation due to low priors. Relatedly, when you say the evidence is strong, do you mean that the posterior probability is high? Or just that the evidence causes you to update towards there being aliens? Ie, is the evidence sufficient to outweigh the low priors/complexity penalties that the alien hypothesis seems to have?
FWIW, my current view is something like:
- I've seen plenty of videos of UFOs that seemed weird at first that turned out to have a totally normal explanation. So I treat "video looks weird" as somewhat weak Bayesian evidence.
- As for complexity penalties: If there were aliens, it would have to be explained why they mostly-but-not-always hide themselves. I don't think it would be incompetence, if they're the type of civilization that can travel stellar distances.
- It would also have to be explained why we haven't seen evidence of their (presumably pretty advanced) civilization
- And it would have to be explained why there hasn't been any real knock-down evidence, e.g. HD close-up footage of an obviously alien ship (unless this is the type of evidence you're referring to?). A bunch of inconclusive, non-repeatable, low-quality data seems much more likely in the world where UFOs are not aliens. Essentially, there's a selection effect where any sufficiently weird video will be taken as an example of a UFO. It's easier for a low-quality video to be weird, because the natural explanations are masked by the low quality. So the set of weird videos will include more low-quality data sources than the overall ratio of existing high/low-quality sources would indicate (see the toy calculation sketched below). Whereas, if the weird stuff really did exist, you'd think the incidence of weird videos would match the distribution of high/low-quality sources (which I don't think it does? As video tech has improved, have we seen corresponding improvements in the average quality of UFO videos?).
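To illustrate that selection effect with made-up numbers (a toy Bayes calculation, not data about actual UFO footage):

```python
# Toy model of the selection effect described above.
# All probabilities are made up purely for illustration.

p_high = 0.7                # assumed share of footage that is high quality
p_low = 1 - p_high          # share that is low quality
p_weird_given_high = 0.01   # high-quality footage rarely looks inexplicable
p_weird_given_low = 0.10    # low-quality footage often looks inexplicable

# Overall chance that a random video ends up looking "weird"
p_weird = p_high * p_weird_given_high + p_low * p_weird_given_low

# Bayes: among videos selected for looking weird, what fraction are low quality?
p_low_given_weird = p_low * p_weird_given_low / p_weird
print(f"{p_low_given_weird:.0%} of 'weird' videos are low quality")  # ~81%
```

Even though 70% of all footage is assumed to be high quality here, the "weird" subset ends up dominated by low-quality sources, which is exactly the skew described in the last bullet.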
↑ comment by James_Miller · 2021-09-06T02:40:29.529Z · LW(p) · GW(p)
The US military claims that on multiple occasions they have observed ships do things well beyond our capacities. There are cases where a drone is seen by multiple people and recorded by multiple systems flying in ways well beyond our current technology, to the point where it is more likely the drones are aliens than something built by SpaceX or the Chinese. The aliens are not hiding; they are making it extremely obvious that they are here, it is just that we are mostly ignoring the evidence. The aliens seem to have a preference for hanging around militaries, and militaries have a habit of classifying everything of interest. I don't understand why the aliens don't reshape the universe building things like Dyson spheres, but perhaps the aliens are like human environmentalists who like to keep everything in its natural state. Hanson's theory is that life is extremely rare in the universe but panspermia might be true. Consequently, even though our galaxy might be the only galaxy in 1 billion light years to have any life, our galaxy might have two advanced civilizations, and it would make sense that if the other civ is more than a million years in advance of us they would send ships to watch us. Panspermia makes the Bayesian prior of aliens visiting us, even given that the universe can't have too much advanced life or we would see evidence of it, not all that low, perhaps 1/1,000. I don't know why they don't use language to communicate with us, but it might be like humans sending deep sea probes to watch squids. I think the purpose of the UFOs might be for the aliens to be showing us that they are not a threat. If, say, we encounter the aliens' home planet in ten thousand years and are technologically equal to the aliens because both of us have solved science, the aliens can say, "obviously we could have wiped you out when you were primitive, so the fact that we didn't is evidence we probably don't now mean you harm."
Replies from: p.b., jack-armstrong, steven0461, Taran, jack-armstrong
↑ comment by p.b. · 2021-09-06T08:15:39.911Z · LW(p) · GW(p)
I listened to your podcasts, as I generally do (they are great ;-) ).
Correct me if I am wrong, but neither Greg Cochran nor Robin Hanson gave you anything like "there is a >1% probability UFOs are aliens".
Replies from: James_Miller
↑ comment by James_Miller · 2021-09-06T11:53:33.852Z · LW(p) · GW(p)
Thanks. Hanson, as best I recall, gave the 1/1000 Bayesian prior of aliens visiting us.
↑ comment by wickemu (jack-armstrong) · 2021-09-06T15:24:51.833Z · LW(p) · GW(p)
Despite my other comment, I'm eager to and definitely will check out your podcast.
↑ comment by steven0461 · 2021-09-07T18:54:38.792Z · LW(p) · GW(p)
perhaps the aliens are like human environmentalists who like to keep everything in its natural state
Surely if they were showing themselves to the military then that would put us in an unnatural state.
Replies from: James_Miller
↑ comment by James_Miller · 2021-09-07T21:17:47.761Z · LW(p) · GW(p)
Yes, good point. They might be doing this to set up a situation where they tell us not to build Dyson spheres. If we accept that aliens are visiting us and observe that the universe is otherwise in a natural state, we might infer that the aliens don't want us to disturb this state outside of our solar system.
Replies from: steven0461
↑ comment by steven0461 · 2021-09-07T22:31:44.497Z · LW(p) · GW(p)
Why would they want the state of the universe to be unnatural on Earth but natural outside the solar system?
edit: I think aliens that wanted to prevent us from colonizing the universe would either destroy us, or (if they cared about us) help us, or (if they had a specific weird kind of moral scruples) openly ask/force us not to colonize, or (if they had a specific weird kind of moral scruples and cared about being undetected or not disturbing the experiment) undetectably guide us away from colonization. Sending a very restricted ambiguous signal seems to require a further unlikely motivation.
↑ comment by Taran · 2021-09-07T09:34:33.210Z · LW(p) · GW(p)
Panspermia makes the Bayesian prior of aliens visiting us, even given that the universe can't have too much advanced life or we would see evidence of it, not all that low, perhaps 1/1,000.
Is this estimate written down in more detail anywhere, do you know? Accidental panspermia always seemed really unlikely to me: if you figure the frequency of rock transfer between two bodies goes with the inverse square of the distance between them, then given what we know of rock transfer between Earth and Mars you shouldn't expect much interstellar transfer at all, even a billion years ago when everything was closer together. But I have not thought about it in depth.
Replies from: James_Miller
↑ comment by James_Miller · 2021-09-07T18:27:09.364Z · LW(p) · GW(p)
I am not aware of whether Hanson has written about this. Panspermia could happen by the first replicators arising in space, perhaps on comets, and then spreading to planets. As Hanson has pointed out, if life is extremely rare it is strange that life would originate on Earth when there are almost certainly super-Earths on which you would think life would be much more likely to develop. A solution to this paradox is that life did develop on such an Eden and then spread to Earth billions of years ago from a star system that is now far away. Our sun might have been very close to the other star system when life spread, or indeed in the same system at the time.
↑ comment by wickemu (jack-armstrong) · 2021-09-06T15:20:43.862Z · LW(p) · GW(p)
"but perhaps the aliens are like human environmentalists who like to keep everything in its natural state."
This is the kind of argument that makes me most believe there are no aliens. As with humans, there may be good environmentalists who work to keep worlds and cultures as untouched as possible. But that also represents a very small portion of human impact. No portion of our planet is untouched by humans, including those areas we have explicitly set aside to remain untouched. And every environmentally-conscious nature park or similar area is teeming with those who visit and act on it, whether inside or outside of set boundaries. Unless this presumed alien culture is so effectively and unreasonably authoritarian that none but the most exclusive are permitted and capable of visitation, I can't imagine there being aliens here without it being obvious, not from military sightings and poor camera captures, but from almost everyone witnessing them with their own eyes on a frequent basis.
Replies from: James_Miller
↑ comment by James_Miller · 2021-09-06T17:53:20.369Z · LW(p) · GW(p)
You have anticipated Robin Hanson's argument. He believes that the only way the aliens would be able to avoid having some splinter group change the universe in obvious ways would be if they had a very stable and authoritarian leadership.
comment by Ben Pace (Benito) · 2021-09-05T21:10:45.947Z · LW(p) · GW(p)
Thanks very much for the notes!
Edit: Welp, turns out it might have been pretty bad to share them.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2021-09-07T01:41:10.258Z · LW(p) · GW(p)
p.b., I propose that you change the title to "Rough Notes: Sam Altman Q&A on GPT + AGI". I think this will help people not be confused about how much the notes are claiming to be accurate – it seems the notes are 'roughly' accurate but with some mistakes. As Sam's a prominent person I think you can expect small mistakes to flow out far and it's good to avoid that.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2021-09-07T17:23:40.883Z · LW(p) · GW(p)
Thanks for making the change!