> Recently, many AI safety movement-building programs have been criticized for attempting to grow the field too rapidly and thus:
Can you link to these?
This is great! Thanks for doing this.
Would you be able to add people's titles and affiliations for some context? Possibly also links to their websites, LinkedIn or similar.
You can now also subscribe to be automatically emailed when new events are added or updated. You can opt for either daily or weekly updates.
Sign up here:
https://airtable.com/shrEp75QWoCrZngXg
I have always thought of it as a vehicle blind spot, not an ocular blind spot: it has more to do with the structure of the situation than with the individual.
- How many places did you apply to before getting your current role or position?
- How much time have you spent applying for open opportunities?
- What are some things that your org has that others don’t and should?
- What are some things that other orgs have that your org should have?
- What are some boring parts of your job that you have to do?
- What are some frustrating parts of your job that you have to do?
- What aspects of your job/place of work are different from what you expected from the outside?
- Do you feel like you have good job security?
Not exactly sure what I was trying to say here. Probably using the PhD as an example of a path to credentials.
Here are some related things I believe:
- I don't think a PhD is necessary or the only way
- University credentials are not now, and should not be, the filter for people working on these problems
- There is often a gap between people's competencies and their ability to signal them
- Credentials are the default signal for competence
- Universities are incredibly inefficient ways to gain competence or to signal it
- Assessing people is expensive, so reviewers are incentivised to find cheaper-to-assess signals
- Credentials are used as signals not because they are good but because they are cheap to assess and universally understood
- Credentials are often necessary but rarely sufficient
Could do Go, poker, or some e-sports with commentary. Poker, unlike chess, has the advantage that the commentators can see all of the players' hands while each player can only see their own. Commentators will often talk about what a player must be thinking in a given situation, accounting for what is and isn't observable to that player.
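To make the information asymmetry concrete, here is a minimal sketch; the state representation, names, and card encoding are hypothetical illustrations of mine, not taken from any particular dataset or API. The commentator's view is the full state, while each player's view masks the other players' hole cards:

```python
# Hypothetical sketch of the poker information asymmetry described above.
# Names and structure are illustrative, not from any real dataset or API.
from dataclasses import dataclass

@dataclass
class FullState:
    """What the commentator sees: every hand plus the board."""
    hands: dict[str, list[str]]  # player -> hole cards
    board: list[str]             # community cards

def player_view(state: FullState, player: str) -> dict:
    """Mask everything the given player cannot observe."""
    return {
        "own_hand": state.hands[player],
        "board": state.board,
        # Opponents are visible as seats, but their cards are hidden.
        "opponents": [p for p in state.hands if p != player],
    }

state = FullState(
    hands={"alice": ["Ah", "Kd"], "bob": ["7s", "7c"]},
    board=["Kh", "7d", "2c"],
)
print(player_view(state, "alice"))  # alice cannot see bob's pocket sevens
```

The commentary then annotates the full state, while the label of interest is what a player could plausibly infer from their masked view alone.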
This would certainly be easier to scale, but the quality would not be as good.
With the plan and numbers I lay out above, you actually finish friendly AI in 2036, which is the 10% point.
Yes, if you have a solution in 2026, it isn't likely to be relevant to something used in 2050. But 2026 is the planned solution date, and 2050 is the median TAI date.
The numbers I used above are just to demonstrate the point, though. The broad idea is that coming up with a solution/theory of alignment takes longer than planned. Having a theory isn't enough; you still need some time to make it count. And then TAI might come at the early end of your probability distribution.
It's pretty optimistic to plan for TAI arriving at your median estimate and to assume you won't run into the planning fallacy.
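To make the arithmetic concrete, here is a minimal sketch. It assumes, purely for illustration, that TAI arrival is normally distributed and calibrated to the dates above (median 2050, 10% point 2036); the distribution shape is a placeholder, not a claim:

```python
# Illustrative sketch only: a normal TAI-arrival distribution fitted to
# the dates in the comment above (median 2050, 10th percentile 2036).
from statistics import NormalDist

median_tai = 2050
p10_tai = 2036

# Back out sigma from the 10th percentile of a normal distribution.
sigma = (median_tai - p10_tai) / -NormalDist().inv_cdf(0.10)
tai = NormalDist(mu=median_tai, sigma=sigma)

for label, finish_year in [("planned finish", 2026),
                           ("finish after slippage", 2036),
                           ("plan for the median", 2050)]:
    # Probability that TAI arrives before the solution is ready.
    p_too_late = tai.cdf(finish_year)
    print(f"{label} ({finish_year}): P(TAI arrives first) = {p_too_late:.0%}")
```

Under these toy numbers, finishing in 2026 leaves roughly a 1% chance of being too late, slipping to 2036 leaves about 10%, and planning for the median leaves 50% even before any slippage.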
Really excited about this! Donation on the way.