LessWrong 2.0 Reader
EA has an extraordinarily bad image right now, thanks largely to FTX. EA is not a good association to have in any context other than its base.
I suspect the pushback from within NIST has more to do with the fact that their budget has been cut to pay for this, and very valuable projects have been put into indefinite suspension, for a cause that basically no one there supports.
habryka4 on Express interest in an "FHI of the West"
> To what extent would the organization be factoring in transformative AI timelines? It seems to me like the kinds of questions one would prioritize in a "normal period" look very different than the kinds of questions that one would prioritize if they place non-trivial probability on "AI may kill everyone in <10 years" or "AI may become better than humans on nearly all cognitive tasks in <10 years."
My guess is a lot, because the future of humanity sure depends on the details of how AI goes. But I do think I would want the primary optimization criterion of such an organization to be truth-seeking, with quite strong norms and guardrails against anything that would trade off communicating truths against making a short-term impact or gaining power.
As an example, one thing I would do very differently from FHI (and a thing I talked with Bostrom about somewhat recently, where we seemed to agree) is that with the world moving faster and more things happening, you really want to focus on faster OODA loops in your truth-seeking institutions.
This suggests that instead of publishing books, or going through month-long academic review processes, you want to move more towards things like blogposts and comments, and maybe in the limit even things like live panels where you analyze events right as they happen.
I do think there are lots of failure modes around becoming too news-focused (and e.g. on LW we do a lot of things to avoid becoming too news-focused), so this is a delicate balance, but it's one of the things I would do pretty differently, and it depends on transformative AI timelines.
To comment a bit more on the power stuff: a thing I am quite worried about is that as more stuff happens more quickly with AI, people will feel a strong temptation to trade the epistemic trust they have built with others for resources they can deploy directly under their own control. As more things happen, it is harder to feel in control, and by getting more resources directly under your control (as opposed to trying to improve the decisions of others by discovering and communicating important truths) you can regain some of that feeling of control. That is one dynamic I would really like to avoid with any organization like this: I would like it to continue to have a stance towards the world that is about improving sanity, not about getting resources for itself and its allies.
fractalideation on When is a mind me?
Loved the post and all the comments <3
Here is, I think, an interesting scenario / thought experiment:
Suppose the original person falls asleep in their bed; while they are unconscious, a copy of them is created, the original is moved to the sofa, and the copy is placed in the bed. At wake-up, based on their own memory of where the original person fell asleep, the original person will likely feel they are the copy and the copy will likely feel they are the original person, wouldn't they?!
Some might even argue that based on stream-of-consciousness continuity the original "me" is actually the copy (because the copy remembers falling asleep in the bed and actually wakes up in the bed as well).
Some others will argue that, based on substrate/matter continuity, the original "me" is the original person even if their stream of consciousness has experienced a discontinuity (remembering falling asleep in the bed but actually waking up on the sofa, while seeing a person identical to them waking up in the bed).
I guess it is subjective, and a matter of individual preference, whether stream-of-consciousness continuity or substrate continuity is more important in defining who the original "me" is.
Some would even argue that in this case there is not actually any firm original "me", just one "stream-of-consciousness me" and another, different "substrate me".
(The same or a similar thought experiment could be done using direct brain insertion of false memories instead of moving people around while they sleep / are unconscious: an original person could have false memories inserted suggesting they are a copy, and vice versa, to manipulate the memory / self-awareness of who the original "me" is. More generally, this could obviously be useful when someone is uploaded/copied, if they want to alter some memories of their upload/copy for some reason.)
akash-wasil on Express interest in an "FHI of the West"
To what extent would the organization be factoring in transformative AI timelines? It seems to me like the kinds of questions one would prioritize in a "normal period" look very different than the kinds of questions that one would prioritize if they place non-trivial probability on "AI may kill everyone in <10 years" or "AI may become better than humans on nearly all cognitive tasks in <10 years."
I ask partly because I personally would be more excited about a version of this that wasn't ignoring AGI timelines, but I think a version of this that's not ignoring AGI timelines would probably be quite different from the intellectual spirit/tradition of FHI.
More generally, perhaps it would be good for you to describe some ways in which you expect this to be different from FHI. I think that calling it the FHI of the West, the explicit statement that it would have the intellectual tradition of FHI, and the announcement right when FHI dissolves might make it seem like "I want to copy FHI" as opposed to "OK, obviously I don't want to copy it entirely; I just want to draw on some of its excellent intellectual/cultural components." If your vision is the latter, I'd find it helpful to see a list of things that you expect to be similar/different.
akash-wasil on peterbarnett's Shortform
I would strongly suggest considering hires who would be based in DC (or who would hop between DC and Berkeley). In my experience, being in DC (or being familiar with DC & having a network in DC) is extremely valuable for being able to shape policy discussions, know what kinds of research questions matter, know what kinds of things policymakers are paying attention to, etc.
I would go as far as to say something like "in 6 months, if MIRI's technical governance team has not achieved very much, one of my top 3 reasons why they failed would be that they did not engage enough with DC people / US policy people. As a result, they focused too much on questions that Bay Area people are interested in and too little on questions that Congressional offices and executive branch agencies are interested in. And relatedly, they didn't get enough feedback from DC people. And relatedly, even the good ideas they had didn't get communicated frequently enough or fast enough to relevant policymakers. And relatedly... etc. etc."
I do understand this trades off against everyone being in the same place, which is a significant factor, but I think the cost is worth it.
chris_leong on Express interest in an "FHI of the West"
I strongly agree with Owen's suggestions about figuring out a plan grounded in current circumstances, rather than reproducing what was.
Here are some thoughts about what this might look like:
I do think evaporative cooling is a concern, especially if everyone (or a very significant fraction of people) left. But I think on the margin more people should be leaving to work in govt.
I also suspect that a lot of systemic incentives will keep a greater-than-optimal proportion of safety-conscious people at labs as opposed to governments (labs pay more, labs are faster and have less bureaucracy, lab people are much more informed about AI, labs are more "cool/fun/fast-paced", lots of govt jobs force you to move locations, etc.)
I also think it depends on the specific lab; e.g., in light of the recent OpenAI departures, I suspect there's a stronger case for staying at OpenAI right now than for DeepMind or Anthropic.
anthonyc on Transportation as a Constraint
This works too, yeah.
gwern on Transportation as a Constraint
https://en.wikipedia.org/wiki/Jeep_problem https://en.wikipedia.org/wiki/Tsiolkovsky_rocket_equation
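Both links point at the same constraint: when a vehicle has to carry its own fuel, range grows far more slowly than the fuel supply. A minimal Python sketch of the two formulas (the 4500 m/s exhaust velocity and the 10:1 mass ratio below are illustrative assumptions, not figures from the comment):

```python
import math

def jeep_max_distance(n_units: int) -> float:
    """Maximum one-way distance (in tank-ranges) reachable with n units of fuel
    under optimal fuel caching (the "exploration" variant of the jeep problem):
    D(n) = 1 + 1/3 + 1/5 + ... + 1/(2n - 1).
    """
    return sum(1.0 / (2 * k - 1) for k in range(1, n_units + 1))

def delta_v(exhaust_velocity: float, wet_mass: float, dry_mass: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / m1)."""
    return exhaust_velocity * math.log(wet_mass / dry_mass)

print(jeep_max_distance(1))    # 1.0   -- one tank of fuel, one tank-range
print(jeep_max_distance(2))    # ~1.33 -- doubling the fuel adds only a third
print(jeep_max_distance(100))  # ~3.28 -- 100x the fuel, ~3.3x the distance

# Illustrative rocket: 4500 m/s exhaust velocity and a 10:1 wet-to-dry mass
# ratio buy only ln(10) ~= 2.3 times the exhaust velocity in delta-v.
print(delta_v(4500.0, wet_mass=10.0, dry_mass=1.0))  # ~10361.6 m/s
```

In both cases the payoff is logarithmic (or harmonic) in the resources carried, which is why transportation acts as such a hard constraint.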
gwern on Transportation as a Constraint
Twitter, probably.