Posts

AIXSU - AI and X-risk Strategy Unconference 2019-09-03T11:35:39.283Z · score: 26 (21 votes)
AI Safety Research Camp - Project Proposal 2018-02-02T04:25:46.005Z · score: 64 (20 votes)
Book Review: Naive Set Theory (MIRI research guide) 2015-08-14T22:08:37.028Z · score: 15 (15 votes)

Comments

Comment by david_kristoffersson on AIXSU - AI and X-risk Strategy Unconference · 2019-09-06T00:55:46.831Z · score: 9 (10 votes) · LW · GW

I expect the event to have no particular downside risks, and to give interesting input and spark ideas in experts and novices alike. Mileage will vary, of course. Unconferences foster dynamic discussion and a living agenda. If hosting this event is risky, then I'd expect AI strategy and forecasting meetups and discussions at EAG to be risky too, and they should also not be hosted.

I and other attendees of AIXSU pay careful attention to potential downside risks. I also think it's important that we don't strangle open intellectual advancement. We need to figure out what we should talk about, not conclude that we shouldn't talk.

AISC: To clarify: AI Safety Camp is different and places greater trust in the judgement of novices, since teams are generally run entirely by novices. The person who proposed running a strategy AISC found the reactions from experts to be mixed. He also reckoned the event would overlap with the existing AI safety camps, since they already include strategy teams.

Potential negative side effects of strategy work are a very important topic. I hope to discuss them with attendees at the unconference!

Comment by david_kristoffersson on Three Stories for How AGI Comes Before FAI · 2019-08-17T14:48:16.645Z · score: 5 (3 votes) · LW · GW
We can subdivide the security story based on the ease of fixing a flaw if we're able to detect it in advance. For example, vulnerability #1 on the OWASP Top 10 is injection, which is typically easy to patch once it's discovered. Insecure systems are often right next to secure systems in program space.

Insecure systems are right next to secure systems, and many flaws are found. Yet the larger systems (the company running the software, the economy, etc.) manage to correct for this somehow. That's because there are mechanisms in those larger systems poised to patch the software when flaws are discovered. Perhaps we could adapt and optimize this flaw-exploit-patch loop from security as a technique for AI alignment.
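To make the "easy to patch once discovered" point concrete, here is a minimal sketch of my own (not from the post), assuming a SQL-backed system; the flaw is local, and so is the patch:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_vulnerable(name):
        # Flawed: user input is spliced into the SQL string, so a crafted
        # name like "' OR '1'='1" changes the meaning of the query.
        return conn.execute(
            "SELECT * FROM users WHERE name = '" + name + "'"
        ).fetchall()

    def find_user_patched(name):
        # The patch is mechanical: bind the input as a parameter
        # instead of concatenating it into the query text.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(find_user_vulnerable("' OR '1'='1"))  # returns every row
    print(find_user_patched("' OR '1'='1"))     # returns no rows

The open question is whether flaws in AI systems will admit similarly local, mechanical patches once discovered.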

If the security story is what we are worried about, it could be wise to try & develop the AI equivalent of OWASP's Cheat Sheet Series, to make it easier for people to find security problems with AI systems. Of course, many items on the cheat sheet would be speculative, since AGI doesn't actually exist yet. But it could still serve as a useful starting point for brainstorming.

This sounds like a great idea to me. Software security has a very well-developed knowledge base at this point, and since AI is software, there should be many good insights to port.

What possibilities aren't covered by the taxonomy provided?

Here's one that occurred to me quickly: Drastic technological progress (presumably involving AI) destabilizes society and causes strife. In this environment with more enmity, safety procedures are neglected and UFAI is produced.

Comment by david_kristoffersson on Project Proposal: Considerations for trading off capabilities and safety impacts of AI research · 2019-08-17T13:38:34.713Z · score: 11 (3 votes) · LW · GW

This seems like a valuable research question to me. I have a project proposal in a drawer of mine that is strongly related: "Entanglement of AI capability with AI safety".

Comment by david_kristoffersson on A case for strategy research: what it is and why we need more of it · 2019-07-12T07:08:57.094Z · score: 1 (1 votes) · LW · GW

My guess is that the ideal is to have semi-independent teams doing research: independence in order to better explore the space of questions, and some degree of plugging into each other in order to learn from each other and to coordinate.

Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?

There are info hazards. But I think if we can discuss Superintelligence publicly, then yes: we can have a public discussion about the non-hazardous parts of strategy.

Are there enough people and funding to sustain a parallel public strategy research effort and discussion?

I think you could get a pretty lively discussion even with just 10 people, if they were active enough. You'd need a core of active posters and commenters, and there would need to be enough reason for them to assemble.

Comment by david_kristoffersson on A case for strategy research: what it is and why we need more of it · 2019-06-21T18:02:59.030Z · score: 3 (2 votes) · LW · GW

Nice work, Wei Dai! I hope to read more of your posts soon.

However I haven't gotten much engagement from people who work on strategy professionally. I'm not sure if they just aren't following LW/AF, or don't feel comfortable discussing strategically relevant issues in public.

A bit of both, presumably. I would guess a lot of it comes down to incentives, perceived gain, and habits. There's no particular pressure to discuss on LessWrong or the EA Forum. LessWrong isn't perceived as your main peer group. And if you're at FHI or OpenAI, you'll already have plenty of contact with people who can provide quick feedback.

Comment by david_kristoffersson on A case for strategy research: what it is and why we need more of it · 2019-06-21T17:09:58.336Z · score: 1 (1 votes) · LW · GW
I'm very confused why you think that such research should be done publicly, and why you seem to think it's not being done privately.

I don't think the article implies this:

Research should be done publicly

The article states: "We especially encourage researchers to share their strategic insights and considerations in write ups and blog posts, unless they pose information hazards."
Which means: share more, but don't share if you think doing so could have negative consequences.
Though I guess you could mean that it's very hard to tell what might lead to negative outcomes. That's a good point, and it's why we (Convergence) are prioritizing research on information hazard handling and research shaping considerations.

it's not being done privately

The article isn't saying that strategy research isn't being done privately. What it is saying is that we need more strategy research and should increase investment in it.

Given the first sentence, I'm confused as to why you think that "strategy research" (writ large) is going to be valuable, given our fundamental lack of predictive ability in most of the domains where existential risk is a concern.

We'd argue that to get better predictive ability, we need to do strategy research. Maybe you're saying the article makes it look like we are recommending any research that looks like strategy research? That isn't our intention.

Comment by david_kristoffersson on AI Safety Research Camp - Project Proposal · 2019-01-24T11:15:00.685Z · score: 1 (1 votes) · LW · GW

Yes -- the plan is to have these on an ongoing basis. I'm writing this just after the deadline passed for the one planned for April.

Here's the web site: https://aisafetycamp.com/

The Facebook group is also a good place to keep tabs on it: https://www.facebook.com/groups/348759885529601/

Comment by david_kristoffersson on Beware Social Coping Strategies · 2018-02-05T09:42:40.043Z · score: 18 (5 votes) · LW · GW
Your relationship with other people is a macrocosm of your relationship with yourself.

I think there's something to that, but it's not that general. For example, some people are very kind to others but harsh with themselves. Some are cruel to others but lenient with themselves.

If you can't get something nice, you can at least get something predictable

The desire for the predictable is what Autism Spectrum Disorder is all about, I hear.

Comment by david_kristoffersson on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-02-02T04:32:45.847Z · score: 7 (2 votes) · LW · GW

Here's the Less Wrong post for the AI Safety Camp!

Comment by david_kristoffersson on A Fable of Science and Politics · 2016-10-26T08:57:49.936Z · score: 1 (1 votes) · LW · GW

It's bleen, without a moment's doubt.

Comment by david_kristoffersson on LessWrong 2.0 · 2016-05-08T10:06:56.257Z · score: 1 (1 votes) · LW · GW

Counterpoint: Sometimes, not moving means moving, because everyone else is moving away from you. Movement -- change -- is relative. And on the Internet, change is rapid.

Comment by david_kristoffersson on Meetup : First meetup in Stockholm · 2015-10-09T19:11:29.667Z · score: 0 (0 votes) · LW · GW

Interesting. I might show up.

Comment by david_kristoffersson on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-16T10:26:47.403Z · score: 2 (2 votes) · LW · GW

Thanks for the tip. Two other books on the subject that seem to be appreciated are Introduction to Set Theory by Karel Hrbacek and Classic Set Theory: For Guided Independent Study by Derek Goldrei.

Edit: math.se weighs in: http://math.stackexchange.com/a/264277/255573

Comment by david_kristoffersson on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-16T10:23:09.130Z · score: 2 (2 votes) · LW · GW

The author of the Teach Yourself Logic study guide agrees with you about reading multiple sources:

I very strongly recommend tackling an area of logic (or indeed any new area of mathematics) by reading a series of books which overlap in level (with the next one covering some of the same ground and then pushing on from the previous one), rather than trying to proceed by big leaps.

In fact, I probably can’t stress this advice too much, which is why I am highlighting it here. For this approach will really help to reinforce and deepen understanding as you re-encounter the same material from different angles, with different emphases.

Comment by david_kristoffersson on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-16T09:59:22.493Z · score: 0 (0 votes) · LW · GW

My two main sources of confusion in that sentence are:

  1. He says "distinct elements onto distinct elements", which suggests both injection and surjection.
  2. He says "is called one-to-one (usually a one-to-one correspondence)", which might suggest that "one-to-one" and "one-to-one correspondence" are synonyms -- since that is what he usually uses the parentheses for when naming concepts.

I find Halmos somewhat contradictory here.
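For reference, here is the standard distinction written out (my own summary, not Halmos's wording):

    $f\colon A \to B$ is injective iff $\forall x, y \in A,\ f(x) = f(y) \implies x = y$ (distinct elements map to distinct elements).
    $f$ is surjective onto $B$ iff $\forall b \in B\ \exists a \in A,\ f(a) = b$.
    $f$ is a one-to-one correspondence (a bijection) iff it is both injective and surjective.

On these definitions, "maps distinct elements onto distinct elements" pins down injectivity only; surjectivity is what the word "correspondence" adds.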

But I'm convinced you're right. I've edited the post. Thanks.

Comment by david_kristoffersson on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-16T09:53:22.549Z · score: 0 (0 votes) · LW · GW

You guys must be right. And Wikipedia corroborates. I'll edit the post. Thanks.

Comment by david_kristoffersson on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-16T22:14:54.816Z · score: 6 (6 votes) · LW · GW

Hello.

I'm currently attempting to read through the MIRI research guide in order to contribute to one of the open problems, starting from the basics. I'm emulating many of Nate's techniques. I'll post reviews of material from the research guide on LessWrong as I work through it.

I'm mostly posting here now just to note this. I can be terse at times.

See you there.

Comment by david_kristoffersson on Dark Arts of Rationality · 2015-07-11T19:26:08.177Z · score: 0 (0 votes) · LW · GW

First, appreciation: I love that calculated modification of self. These and similar techniques can be very useful if applied in the right way. I recognize myself here and there. You did well to abstract it all out this clearly.

Second, a note: You've described your techniques from the perspective of how they deviate from epistemic rationality -- "Changing your Terminal Goals", "Intentional Compartmentalization", "Willful inconsistency". I would've been more inclined to describe them from the perspective of their central effect, e.g. something in the style of "Subgoal ascension", "Channeling", "Embodying". Perhaps that's not as marketable to the LessWrong crowd. Multiple perspectives could be used as well.

Third, a question: How did you create that gut feeling of urgency?

Comment by david_kristoffersson on MIRI's technical research agenda · 2015-01-27T19:42:01.922Z · score: 3 (3 votes) · LW · GW

And boxing, by the way, means giving the AI zero power.

No, hairyfigment's answer was entirely appropriate. Zero power would mean zero effect. Any kind of interaction with the universe means some level of power. Perhaps in the future you should say nearly zero power instead, so as to avoid misunderstanding on the part of others, since taking you literally on the "zero" is apparently "legalistic".

As to the issues with nearly zero power:

  • A superintelligence with nearly zero power could turn out to have a heck of a lot more power than you expect.
  • The incentives to tap more perceived utility by unboxing the AI or building other unboxed AIs will be huge.

Mind, I'm not arguing that there is anything wrong with boxing. What I'm arguing is that it's wrong to rely only on boxing. I recommend you read some more material on AI boxing and Oracle AI. Don't miss out on the references.

Comment by david_kristoffersson on MIRI's technical research agenda · 2015-01-27T18:49:43.523Z · score: 1 (1 votes) · LW · GW

So you disagree with the premise of the orthogonality thesis. Then you know which central concept to probe in order to understand the arguments put forth here. For example, check out Stuart Armstrong's paper: General purpose intelligence: arguing the Orthogonality thesis

Comment by david_kristoffersson on MIRI's technical research agenda · 2015-01-23T19:12:35.342Z · score: 0 (0 votes) · LW · GW

There's no guarantee that boxing will ensure the safety of a soft takeoff. When your boxed AI starts to become drastically smarter than a human -- 10 times, 1000 times, 1000000 times smarter -- the sheer enormity of the mind may slip beyond human ability to understand. All the while, a seemingly small dissonance between the AI's goals and human values -- or a small misunderstanding on our part of what goals we've imbued it with -- could magnify into catastrophe as the power differential between humanity and the AI explodes post-transition.

If an AI goes through the intelligence explosion, its goals will be what orchestrates all resources (as Omohundro's point 6 implies). If the goals of this AI do not align with human values, all we value will be lost.

Comment by david_kristoffersson on MIRI's technical research agenda · 2015-01-23T17:38:29.786Z · score: 0 (0 votes) · LW · GW

Mark: So you think human-level intelligence in principle does not combine with goal stability. Aren't you simply disagreeing with the orthogonality thesis, "that an artificial intelligence can have any combination of intelligence level and goal"?

Comment by david_kristoffersson on Facing the Intelligence Explosion discussion page · 2014-08-10T20:31:14.099Z · score: 0 (0 votes) · LW · GW

http://intelligenceexplosion.com/en/2012/ai-the-problem-with-solutions/ links to http://lukeprog.com/SaveTheWorld.html - which redirects to http://lukemuehlhauser.comsavetheworld.html/ - which isn't there anymore.