How much do personal biases in risk assessment affect assessment of AI risks?

post by Gordon Seidoh Worley (gworley) · 2023-05-03T06:12:57.001Z · LW · GW · 4 comments

This is a question post.


I had a stray thought that I think is worth exploring: perhaps disagreements over how to respond to AI risks are heavily influenced by personal biases in how we assess risk in general, such that people draw very different conclusions from the same evidence and arguments.

This sort of thing isn't without precedent (e.g., Bourget & Chalmers, “What Do Philosophers Believe?”). We can roughly imagine a few possible correspondences:

For archetypical examples of people who I'd consider high, medium, and low AI risk:

I don't know if any of them fit my toy profiles. Honestly I'm not really sure what might correspond, if anything; these are just easy guesses to give some flavor to the idea.

So, if you've thought a lot about AI risks, and especially if you're actively working on AI in some capacity, I'd appreciate it if you left an answer filling out this template so we can see if the anecdata suggest anything worth exploring further. This might help identify places where disagreement persists not because of disagreements about evidence, but because of personal differences in how people weigh and evaluate the risks the evidence suggests exist. Knowing that might prove useful for bridging some disagreements.

Answer Template[1]:

Level of AI risk concern: high/medium/low

General level of risk tolerance in everyday life: high/medium/low

Brief summary of what you do in AI: mostly to help identify you if you're not famous

Anything weird about you: Are you unusually anxious, calm, whatever? Do people tell you that you're the most X person they know? Not necessarily a full psychological profile, just some key facts to give a sense of your personality.

  1. ^

    I realize this template is not very well constructed. That's because I'm not quite sure what we're looking for, if anything, so it's relatively open-ended, in the hope that the answers will help make it clearer what I should have asked.

Answers

answer by Dave Orr · 2023-05-04T02:25:45.137Z · LW(p) · GW(p)

Similar risk estimate to Christiano's, which might be medium by LessWrong standards but is extremely high compared to the general public.

High risk tolerance (used to play poker for a living, comfortable with somewhat risky sports like climbing or scuba diving). Very low neuroticism, medium conscientiousness. I spend a reasonable amount of time putting probabilities on things, decently calibrated. Very calm in emergency situations.

I'm a product manager exec mostly working on applications of language AI. Previously an ML research engineer.

answer by RussellThor · 2023-05-03T06:30:18.396Z · LW(p) · GW(p)

Level of AI risk concern: medium/high

(Similar risk estimate to Christiano's from what I can see; however, I would put him above medium compared to the public and most AI researchers.)

General level of risk tolerance in everyday life: medium

Not sure exactly how to rate this, as I started a tech company and do semi-dangerous recreational activities, but I don't think it is high.

Brief summary of what you do in AI:

Not actively working in it, however have released a product containing a DNN I trained in the past and follow the field.

Nothing unusual about me personally, especially compared to this audience.

answer by Throwaway2367 · 2023-05-03T20:16:19.983Z · LW(p) · GW(p)

Level of AI risk concern: medium

General level of risk tolerance in everyday life: low

Brief summary of what you do in AI: training NNs for this and that, not researching them, thought some amount about AI risk over a few years

Anything weird about you: I don't like to give too much information about myself online, but I do have a policy of answering polls I've interacted with even a bit (e.g., read the replies), to fight selection effects.

answer by Gordon Seidoh Worley · 2023-05-03T19:58:43.652Z · LW(p) · GW(p)

To answer my own question:

Level of AI risk concern: high

General level of risk tolerance in everyday life: low

Brief summary of what you do in AI: I first tried to formalize what alignment would mean, which led me to work on a program of deconfusing human values that reached the end of what I could do; I've now moved on to writing about epistemology that I think is critical to understand if we want to get alignment right.

Anything weird about you: prone to anxiety; previously dealt with OCD, mostly cured it with meditation, but it still pops up sometimes.

4 comments


comment by quetzal_rainbow · 2023-05-03T09:15:46.524Z · LW(p) · GW(p)

I would say that this should be done using Google Forms, so the responses are easier to use as statistics at scale.

comment by Gordon Seidoh Worley (gworley) · 2023-05-03T17:37:53.390Z · LW(p) · GW(p)

If I learn enough this way to suggest it's worth exploring and doing a real study, sure. This is a case of better done lazily to get some information than not done at all.

comment by awg · 2023-05-03T17:47:33.522Z · LW(p) · GW(p)

This took like literally two minutes to make: google form

Feel free to copy, edit, and distribute it to respondents as you see fit. I do think this thing is worth just having in a Google Form format.

comment by Gordon Seidoh Worley (gworley) · 2023-05-03T18:01:47.901Z · LW(p) · GW(p)

A form is not just a form. I also have to follow up to make sense of the responses, report back findings, etc. That's possibly worth exploring if it seems like there might be something there, but it's not the effort I want to put in now. By taking answers here, I can ignore this and others can still benefit if I do nothing else with the idea.