Shouldn't there be a Chinese translation of Human Compatible?

post by MakoYass · 2020-10-09T08:47:55.760Z · LW · GW · No comments

This is a question post.


Considering that China seems pretty serious about investing heavily in AI in the near future, it may be important that Stuart Russell's AI alignment advocacy book Human Compatible is made as accessible as possible to a Chinese audience, but there doesn't seem to be any Chinese translation.

Is it perhaps not necessary? Is it possible that the most influential people in the field over there are all necessarily fluent in English, so that they can engage with the research literature and use the tools?
Otherwise, what's getting in the way of producing a translation? What should be done?

Answers

answer by Bjartur Tómas · 2020-10-10T14:25:31.709Z · LW(p) · GW(p)

After looking into the PISA scores and finding that they imply China has about 20x as many 3-sigma people as America, I emailed Stuart Russell about translations. This was his reply:

 

The vast majority of Chinese CS researchers are publishing in English.

For the broader policymaking class, it might be useful to have literature in Chinese.

My book Human Compatible will appear in Chinese shortly.

Slaughterbots has already appeared with Chinese subtitles.

For English-language government and think-tank documents, I assume Chinese policy makers have access to translation resources as needed.

 

answer by Thomas Kwa · 2020-10-10T03:55:27.791Z · LW(p) · GW(p)

I've talked with someone in EA Hong Kong who follows the progress of translating effective altruism into Chinese language and culture; doing so optimally is not trivial [EA · GW], and suboptimal translations carry substantial risks. Some excerpts from the linked post:

Doing mass outreach in another language creates irreversible “lock in” [...] China faces especially high risk of lock in, because you also face the risk of government censorship

Likewise, one of the possible translations of “existential risk” (生存危机) is very close to the name of a computer game (生化危机), so doesn’t have the credibility one might want.

To do this well, we’ll need people who are both experts in the local culture and effective altruism in the West. We’ll also need people who are excellent writers and communicators in the new language.

Initial efforts to expand effective altruism into new languages should focus on making strong connections with a small number of people who have relevant expertise, via person-to-person outreach instead of mass media.

The arguments about EA being niche and difficult to communicate through low-fidelity means apply just as strongly to EA-style AI safety. However, the author also says:

If written materials are used, then it’s better to focus on books, academic articles and podcasts aimed at a niche audience.

answer by Caroline Jeanmaire · 2020-11-13T03:29:05.910Z · LW(p) · GW(p)

The Chinese translation of Human Compatible just came out last month, published by CITIC! The first chapter is here.

Let me know if you would like more information - I'm working on this at the Center for Human-Compatible AI.

answer by Kaj_Sotala · 2020-10-09T13:20:15.548Z · LW(p) · GW(p)

Book translations generally happen because a local publisher decides it would be worth it, so they buy the local-language sales rights (either from the original publisher or the original author, depending on whether the author kept those rights or sold them to the publisher) and hire a translator.

In this case, Human Compatible was published by Viking Press, who are a part of Penguin Group. According to Wikipedia, Penguin has its own division in China. They might or might not already be working on a translation of their own, or possibly negotiating with some other Chinese publisher for the sale of the rights.

If someone wanted to work on this, I would expect that the first step would be to get in contact with Viking Press and try to find out whether there's any translation effort or rights negotiation already in the works. If there isn't, getting a Chinese publisher (either Penguin's Chinese division or someone else) interested might be a good bet. That would probably require convincing them that Chinese people are interested in buying the book; I don't know what would persuade them of that.

comment by ChristianKl · 2020-10-09T17:41:38.539Z · LW(p) · GW(p)

Maybe there's a way of saying "I'm willing to buy 1000 books and gift them to people" that would persuade them that it makes sense to do the translation?

answer by Stuart Anderson · 2020-10-09T12:46:29.551Z · LW(p) · GW(p)

Find out how much a translation would cost. Then you can either make a business case for it, or offer to pay for it yourself.

The inverse question is: what is the English-speaking world missing out on? China presumably has equivalents to what we have that we never see.

That being said, it's going to take a lot more than books to stop China being China. Any AI they create will be shaped by their values. And AI is very much a weapons technology.

comment by ChristianKl · 2020-10-09T17:40:23.750Z · LW(p) · GW(p)

Any AI they create will be shaped by their values. 

Why? Why should we assume China can simply solve the alignment problem and the AI will follow their values?

comment by purge · 2020-10-10T04:46:01.344Z · LW(p) · GW(p)

"shaped by their values" != "aligned with their values".  I think Stuart is saying not that China will solve the alignment problem, but that they won't be interested in solving it because they're focused on expanding capabilities, and translating a book won't change that.

comment by MakoYass · 2020-10-12T22:37:49.948Z · LW(p) · GW(p)

If so, I think he's wrong here. The book may lead them to realize that unaligned AGI doesn't actually constitute an improvement in capabilities; it's the creation of a new enemy. A bridge that might fall down is not a useful bridge, and a successful military power, informed of that, wouldn't want to build it.

It's in no party's interests to create AGI that isn't aligned with at least the people overseeing the research project.

An AGI aligned with a few living humans is generally going to lead to better outcomes than an AGI aligned with nobody at all; there is enough shared between humans to know that, and no one, coherently extrapolated, is as crass or parochial as the people we are now. Alignment theory should be promoted to every party.

comment by ChristianKl · 2020-10-10T14:46:44.379Z · LW(p) · GW(p)

If you understand that there's an alignment problem, then "shaped by their values" = "aligned with their values". That's especially true in a country with strong central leadership.

comment by Stuart Anderson (stuart-anderson) · 2020-10-11T00:15:35.170Z · LW(p) · GW(p)

Nobody is guaranteed to solve alignment. We're all just speculating because of that.

AIs will not be perfectly aligned with all of humanity. That has logical consequences for their behaviour. What happens when we create AI and it takes sides in the same way we all do? 

For an AI to have maximum utility it must be able to choose when to cooperate and when to be self-interested. The self-interest of the creator is to convince the AI, probably via early learning, to adopt the creator's core values. If humanity's alignment varies (i.e. we fight each other), then it stands to reason that AIs' alignment will vary too.

In the case of China, it has already demonstrated values I consider unacceptable (death camps, organ harvesting, aggressive military expansion, racial supremacy; they're basically the Nazis of the East). I think it is reasonable to be concerned that an AI birthed in that environment may pick up values that are perfectly acceptable to its creators but malign to the rest of us.

comment by Stuart Anderson (stuart-anderson) · 2020-10-10T15:26:56.773Z · LW(p) · GW(p)

The issue I have here is that human alignment assumes that humanity is a contiguous blob of shared values when that's demonstrably not the case. Nations and cultures vary. Worldviews vary.

If you ask a Chinese person "What does it mean to be an ethical person?" you're going to get very different answers from what a Westerner would give, or a person from the Islamic ummah. This is going to result in a varying design spec for AIs by nation, culture, and other factors that divide humans into smaller groups.

We may all be aiming to create AI, but we're not aiming to create the same AI. And that's before we factor in deliberately creating an AI that favours one group over others.

comment by ChristianKl · 2020-10-10T17:45:06.399Z · LW(p) · GW(p)

Design specs vary, but they all include an AGI that actually values human life, which is the key AI safety consideration and why it's desirable to get the book translated.
