Some ideas for epistles to the AI ethicists

post by Charlie Steiner · 2022-09-14T09:07:14.791Z · LW · GW · 0 comments

Here are some papers, or ideas for papers, that I'd love to see published in ethics journals like Minds and Machines or Ethics and Information Technology.[1] I'm probably going to submit one of these to a 2023 AI ethics conference myself.

Why should we do this? Because we want today's grad students to see that the ethical problems of superhuman AI are a cool topic that they can publish a cool paper about. And we want to (marginally) raise the waterline for thinking about future AI, nudging the AI ethics discourse towards more mature views of the challenges of AI.

Secondarily, it would be good to leverage the existing skill sets of some ethicists for AI safety work, particularly those already working on AI governance. And having an academic forum where talking about AI safety is normalized bolsters other efforts to work on AI safety in academia.

The Ideas:

Explain the basic ideas of AI safety, and why to take them seriously.

A more specific defense of transformative-AI-focused thinking as a valid use of ethicists' time.

A worked example of "dual use" ethics - a connection between thinking about present-day problems and superhuman AI.

Attempt to defend some specific ethics question as an open problem relevant to superhuman AI.

Reports on AI safety progress.

Talk about principles for AI governance, informed by an awareness that transformative AI is important.

  1. ^

    For my overview of the state of the field, see the two Reading the Ethicists posts (1 [LW · GW], 2 [LW · GW]).

  2. ^

    Actual example from Cawthorne and van Wynsberghe, Science and Engineering Ethics, 2020.

  3. ^

    A genre of essay characterized by dishy, nontechnical speculation about which humans will benefit from superhuman AI.

  4. ^

    E.g. Floridi et al., "AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations," Minds and Machines, 2018.
