What features of people do you know of that might predict academic success?

post by toonalfrink · 2019-05-10T18:16:59.922Z · score: 16 (2 votes) · LW · GW · 1 comment

This is a question post.


Context: RAISE is doing some preliminary research into which features might predict a good fit for people who want to work in AI Safety.

IQ and the Big Five came to mind as candidates. We also found this list. It's a good start, but we're not satisfied: it doesn't contain non-psychological metrics (location, career history, etc.), and it's missing at least some interesting psychological metrics we can think of (RQ, IQ (!), the Big Five).

We are interested in individual metrics, in sources we might gather them from, and in meta-level observations about how to do this kind of research in general.

We won't be able to test every single metric, so we're also interested in whether you think any metric (already mentioned or not) deserves extra attention.


answer by James_Miller · 2019-05-10T18:57:52.106Z · score: 5 (3 votes) · LW(p) · GW(p)

IQ test results (or SAT scores) of close relatives. IQ tests are an imperfect measure of general intelligence. Given the large genetic component to general intelligence, knowing how someone's sibling did on an IQ test gives you additional useful information about a person's general intelligence, even if you know that person's IQ test score.
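A toy simulation can make this intuition concrete. All numbers below are hypothetical, and for simplicity the genetic component is treated as fully shared between siblings (real siblings share only part of it); the point is just that a second noisy measurement of a shared latent factor reduces your uncertainty about it, even when you already have the person's own score.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: a shared latent component g, plus independent
# measurement noise on each sibling's test score.
g = rng.normal(0, 1, n)                  # latent general intelligence (shared)
own_iq = g + rng.normal(0, 0.6, n)       # the person's own (noisy) score
sib_iq = g + rng.normal(0, 0.6, n)       # a sibling's (noisy) score

def resid_var(predictors, y):
    """Residual variance of y after a least-squares fit on the predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

v_own = resid_var([own_iq], g)            # estimate g from the own score only
v_both = resid_var([own_iq, sib_iq], g)   # add the sibling's score

print(v_own, v_both)
```

In this setup the residual variance drops noticeably once the sibling's score is included, which is the sense in which the relative's result carries "additional useful information."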

answer by rhollerith_dot_com · 2019-05-10T18:52:17.421Z · score: 2 (1 votes) · LW(p) · GW(p)

I know nothing of RAISE or its strategy, so my answer will address only the broader question of which prospective AI safety workers to encourage or to invest resources in.

Although it is not true of everything worth teaching a prospect, many of the things worth teaching (including general instrumental-rationality skills) can be used both to increase AI safety and to do irresponsible AGI research. The best way I have been able to think of to reduce the likelihood that an investment in a prospect ends up fueling irresponsible AGI research is to choose only female prospects, plus those few male prospects who show very strong signs of being better-than-average investment targets.

This is a completely serious suggestion, although I might change my mind if I knew more AGI researchers, AI safety researchers, and activists. I know of only about a dozen of them. All the irresponsible ones are male, whereas the two women in my sample, Anna Salamon and Katja Grace, are definitely responsible. My basis for that conclusion: in a previous decade I had the distinct pleasure of long conversations with both of them.

comment by toonalfrink · 2019-05-10T18:58:31.576Z · score: 5 (2 votes) · LW(p) · GW(p)

I expect there are metrics that screen off gender, so we can get better predictions and also circumvent the politics of doing anything gender-related.
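"Screening off" has a precise statistical meaning that a toy simulation (all variables and numbers hypothetical) can illustrate: if a directly measurable metric carries all of gender's predictive information about the outcome, then once you condition on that metric, gender adds essentially nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical setup: the outcome depends only on a measurable metric m;
# gender correlates with m but has no direct effect on the outcome.
gender = rng.integers(0, 2, n).astype(float)   # 0/1, purely illustrative
m = rng.normal(0, 1, n) + 0.3 * gender         # metric correlated with gender
outcome = m + rng.normal(0, 1, n)              # outcome driven by m only

def resid_var(predictors, y):
    """Residual variance of y after a least-squares fit on the predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

v_gender = resid_var([gender], outcome)    # gender alone predicts a little
v_m = resid_var([m], outcome)              # the metric predicts much more
v_both = resid_var([m, gender], outcome)   # adding gender to m: ~no gain

print(v_gender, v_m, v_both)
```

Here adding gender on top of the metric leaves the residual variance essentially unchanged, which is what it would mean, in practice, for a metric to screen off gender.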


Comments sorted by top scores.

comment by Wei_Dai · 2019-05-11T17:57:17.444Z · score: 5 (2 votes) · LW(p) · GW(p)

Giving more context might help readers to know what kind of answer you're looking for. What kind of people are you planning to run your predictions on? High school students? College students? Graduate students? AI researchers? People in non-AI fields? What kind of interventions are you planning to do on them?