Looking for non-AI people to work on AGI risks

post by otto.barten (otto-barten) · 2019-12-30T20:41:43.625Z · LW · GW · No comments

This is a question post.

I'm worried about AGI safety, and I'm looking for non-AI people to worry with. Let me explain.

A lecture by futurist Anders Sandberg, online reading, and real-life discussions with my local Effective Altruism group have given me, as a non-AI person (a 33-year-old physicist, engineer, climate activist, and startup founder), the following convictions:

- AGI (Artificial General Intelligence, Superintelligence, or the Singularity) is a realistic possibility in the next decades, say between 2030 and 2050
- AGI could well become orders of magnitude smarter than humans, fast
- If unaligned, AGI could well lead to human extinction
- If aligned ('safe'), AGI could still lead to human extinction, for example because someone's goals turn out to be faulty, or because someone removes the safety measures from the code

I'm active in two climate NGOs, where a lot of people worry about human extinction because of the climate crisis. I worry about this too, but at the same time I think the chance of human extinction due to AGI is much larger. Although that chance is much larger, I don't believe it to be 100%: we could still stop AGI development, for example (I think that makes more sense than fleeing to Mars or working on a human-machine interface). Stopping development is a novel angle for many AI safety researchers, futurists, startup founders, and the like. However, many non-AI people think this is a very sensible solution, at least if all else fails. I agree with them. It is not going to be an easy goal to achieve, and I see the costs, but I think it makes the most sense of the options we have.

Therefore, I'm looking for non-AI people who are interested in working with me on common-sense solutions to the existential risks posed by AGI.

Does anyone know where to find them?

Answers

answer by whpearson · 2019-12-31T14:48:39.754Z · LW(p) · GW(p)

I'm someone who is mainly moving in the opposite direction (from AI to climate change). I see AGI as a lot harder to achieve than most people do, mainly because the potential political ramifications will slow development, and because I think it will need experiments with novel hardware, which makes it more visible than pure coding. So I see it as relatively easy to stop, at least within a country. Multi-nationally it would be trickier.

Some advice: I would try to frame your effort as "Understanding AGI risk". While you currently think there is risk, keeping an open mind about the status of that risk is important. If AGI turns out to be free of existential risk, it could help with climate adaptation, even if it arrives too late for climate mitigation.

Edit: You could frame it simply as understanding AI, and put together independent briefs on each project so that policy makers can understand the likely impacts, both positive and negative, and the state of play. Getting a good reputation and maintaining independence might be hard, though.

comment by otto.barten (otto-barten) · 2020-01-01T14:16:15.951Z · LW(p) · GW(p)

Hi WH, thank you for the reply! I find it really heartening and encouraging to learn what others are thinking.

Could you explain what hardware you think would be needed? It's the first time I've heard someone talk about that, so of course I'm curious what you think it would take.

I agree with your point that understanding the risks of AI projects is a good way of framing things. Given the magnitude of AGI risk (as I understand it now, human extinction), an alarming tone in a policy report would still be justified in my opinion. I also agree that we should keep an open mind: I see the benefits of AI, and even more the benefits of AGI, which would be biblical if we could control the risks. Climate adaptation could indeed be carried out a lot better, as could many other tasks. However, I think that we will not be able to control AGI, and we may therefore go extinct if we develop it anyway. But agreed: let's keep an open mind about the developments.

Do you know of any reliable overview of AGI risks? It would be great to have an IPCC equivalent that's as uncontroversial as possible, to convince people that this problem needs attention. Or papers from a reliable source stating that there is a nonzero chance of human extinction. Any such information would be great!

If I can help you by the way with ideas on how to fight the climate crisis, let me know!

comment by otto.barten (otto-barten) · 2020-01-01T16:58:51.203Z · LW(p) · GW(p)

Also another thought. (Partially) switching careers comes with a large penalty, since you don't have as much prior knowledge, experience, credibility, and network in the new field. The only reason I'm considering it is that I think AGI risk is a lot more important to work on than climate risk. Since you're moving in the opposite direction:

1) Do you agree that such moving comes with a penalty?

2) Do you think that climate risk is a lot more important to work on than AGI risk?

If so, only one of us can be right. It would be nice to know who that is, so we don't make silly choices.

answer by Brian Murphy · 2019-12-31T21:03:42.770Z · LW(p) · GW(p)

The first and selfish answer (probably shared by countless others) would be: "I'm interested in working on that."

Am I qualified? Maybe; maybe not. I suspect I won't know what makes an effective AI safety planner until somebody actually starts to do it.

I'll make this observation. It looks to me like the potential emergence of AGI has two fronts. The first is raw scientific development: programmers, engineers, and cognitive scientists just "doing their thing," understanding our world by replicating and modifying parts of it. The second is the one that the vast majority of people can already see: specific-task AI devices getting stronger, faster, and better connected. If it cannot be done today, then within months a person will be able to talk to the air around them and order a cheeseburger that is cooked, assembled, delivered, and paid for entirely by automated, unconscious agents. Who am I to say that, with enough further development and integration of such automated systems, we would not see emergent automated behavior just as fantastic or dangerous as a "thinking" machine might display?

Such a watchdog group could already be useful, if it brought some economic skill to bear on current technology issues (e.g. workplace automation, and the unavoidable employment changes it causes).

This is a long-winded "I agree." We should not wait for someone else to organize our protective stance toward the agents we build specifically to be better at tasks than ourselves, be they specific or general. Multiple experienced people should always be asking: What is the driving goal of this AGI? What are its success/failure conditions? What information does it have access to? Where are the means to interrupt it if it finds an unfriendly solution to its hurdles?


comment by otto.barten (otto-barten) · 2020-01-01T15:45:26.691Z · LW(p) · GW(p)

Hi Brian, thanks for your reply! I don't think we would need very special qualifications for this; it's more a matter of reading up on the current state of AI and AI safety, citing the main conclusions from academia, and making sure they are presented well to both policy makers and ordinary people. You say you'd expect countless others to want to work on this too, but I haven't found them yet. I'm still hopeful they exist somewhere, and if you find people already doing this, I'd love to get in contact with them. Otherwise, we should start ourselves.

Interesting observation! I think your second front is especially interesting (and worrying) where AI-improvement tasks themselves are automated. For a positive feedback loop to occur, making AI smarter very fast, many imagine an AGI is necessary. But what is improving AI now, and which skills does that require? Partly it's hardware improvement: academia and industry working together to keep Moore's law going. The other part is software and algorithm improvements, also done by academics and by companies such as DeepMind. So if the tasks of those researchers were automated, that would be the point at which the singularity could take off. Their jobs tend to be analytical and focused on a single task rather than generically human and social, which I guess means AI would find them easier. That in turn means the singularity (there should be a less sci-fi name for this) could happen sooner than AGI, if policy doesn't intervene. So: also a long-winded "I agree."

So how should we go about organizing this, if no one is doing it yet? Any thoughts?

Thanks again for your reply; as I said above, it's heartening that there are people out there who are more or less on the same page!
