Seeking AI Alignment Tutor/Advisor: $100–150/hr

post by MrThink (ViktorThink) · 2024-10-05T21:28:16.491Z · LW · GW · 2 comments

This is a question post.

Contents

  Tutoring Goals
  About Me
  Why Become My Tutor?
  Ideal Qualifications
  Structure & Logistics
  How to Apply

I am actively looking for a tutor/advisor with expertise in AI existential risk (X-risk), with the primary goal of collaboratively determining the most effective ways I can contribute to reducing that risk.

Tutoring Goals

I suspect that I misunderstand key components of the mental models that lead some highly rational and intelligent individuals to assign a greater than 50% probability of AI-related existential catastrophe ("p-doom"). By gaining a clearer understanding of these models, I aim to refine my thinking and make better-informed decisions about how to meaningfully reduce AI X-risk.

Specifically, I want to delve deeper into why and how misaligned AGI might be developed, and why it wouldn’t be straightforward to solve alignment before it becomes a critical issue.

To clarify, I do NOT believe we could contain or control a misaligned AGI with current safety practices. What I do find likely is that we will be able to avoid such a situation arising in the first place.

In addition to improving my understanding of AI X-risk, I also want to explore strategies that I could help implement to reduce it.

About Me

- My primary motivation is effective altruism, and I believe that mitigating AI X-risk is the most important cause to work on.
- I have 7 years of experience working with machine learning, with a focus on large language models (LLMs), and possess strong technical knowledge of the field.
- My current p-doom estimate is 25%. My own model gives about 5%, but I adjust upward since some highly rational thinkers predict significantly higher p-doom. Even if my p-doom were 1%, I would still view AI X-risk as the most pressing issue and dedicate my time to it.
 
Why Become My Tutor?

- You will be directly contributing to AI safety/alignment efforts, working with someone highly committed to making an impact.
- Opportunity for highly technical 1-on-1 discussions about the cutting edge in AI alignment and X-risk reduction strategies.
- Compensation: $100–150 per hour (negotiable depending on your experience).

Ideal Qualifications

- Deep familiarity with AI existential risks and contemporary discussions surrounding AGI misalignment.
- A genuine interest in refining mental models related to AI X-risk and collaborating on solutions.
- A p-doom estimate above 25%, since I aim to understand high-p-doom perspectives.
- Strong interpersonal compatibility: It’s crucial that we both find these discussions rewarding and intellectually stimulating.

Structure & Logistics

- Weekly one-hour meetings focused on deep discussions of AI X-risk, strategic interventions, and mental model refinement.
- Flexible arrangement: you can invoice my company for the tutoring services.

How to Apply

If this opportunity sounds appealing to you, or if you know someone who may be a good fit, please DM me here on LessWrong.

Answers

2 comments


comment by Alex_Altair · 2024-10-06T00:01:47.830Z · LW(p) · GW(p)

FWIW I think this would be a lot less like "tutoring" and a lot more like "paying people to tell you their opinions". Which is a fine thing to want to do, but I just want to make sure you don't think there's any kind of objective curriculum that comprises AI alignment.

comment by habryka (habryka4) · 2024-10-06T01:40:40.127Z · LW(p) · GW(p)

Hmm, a bit confused what this means. There is I think a relatively large set of skills and declarative knowledge that is pretty verifiable and objective and associated with AI Alignment. 

It is the case that there is no consensus on what solutions to the AI Alignment problem might look like, but I think the basic arguments for why this is a thing to be concerned about are pretty straightforward and are associated with some pretty objective arguments.