Posts

Copenhagen – ACX Meetups Everywhere Spring 2024 2024-03-30T11:18:11.856Z
Copenhagen, Denmark – ACX Meetups Everywhere 2021 2021-08-23T08:45:57.032Z
Map of the AI Safety Community 2017-09-26T08:39:10.136Z
AI Safety reading group 2017-01-28T12:07:17.681Z

Comments

Comment by SoerenE on Map of the AI Safety Community · 2017-09-27T08:55:52.381Z · LW · GW

Thank you for explaining.

Comment by SoerenE on Map of the AI Safety Community · 2017-09-26T09:43:44.873Z · LW · GW

Thank you for your comments. I have included them in version 1.1 of the map, where I have swapped FRI and OpenAI/DeepMind, added Crystal Trilogy and corrected the spelling of Vernor Vinge.

Comment by SoerenE on 2017 LessWrong Survey · 2017-09-14T13:08:51.974Z · LW · GW

I have taken the survey.

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-11T19:22:20.572Z · LW · GW

I think the difference in date of birth (1922 vs. ~1960) is less important than the difference in date of publication (2003 vs. ~2015).

On the Outside View, is criticism levelled 12 years after publication more likely to be valid than criticism levelled immediately? I do not know. On one hand, science generally improves over time. On the other hand, if a particular work gets its first criticism only after many years, it could mean that the work is of higher quality.

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-11T12:37:22.521Z · LW · GW

I should clarify that I am referring to the section David Chapman calls: "Historical appendix: Where did the confusion come from?". I read it as a criticism of both Jaynes and his book.

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-11T12:10:28.282Z · LW · GW

No, I do not know what Yudkowsky's argument is. Truth be told, I probably would be able to evaluate the arguments, but I have not considered it important. Should I look into it?

I care about whether "The Outside View" works as a technique for evaluating such controversies.

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-11T07:49:52.916Z · LW · GW

Yes! From the Outside View, this is exactly what I would expect substantial, well-researched criticism to look like: it appears very scientific, contains plenty of references, was peer-reviewed and published in the Journal of Statistical Physics, and has 29 citations.

Friedman and Shimony's criticism of MAXENT is in stark contrast to David Chapman's criticism of "Probability Theory".

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-10T07:47:20.416Z · LW · GW

Could you post a link to a criticism similar to David Chapman's?

The primary criticism I could find was the errata. From the Outside View, the errata look like a number of mathematically minded people found it worth their time to submit corrections. If they had thought that E. T. Jaynes was hopelessly confused, they would not have submitted corrections of this kind.

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-08T19:32:04.227Z · LW · GW

"I don't think it's a good sign for a book if there isn't anybody to be found that criticizes it."

I think it is a good sign for a Mathematics book that there isn't anybody to be found that criticizes it except people with far inferior credentials.

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-07T19:54:37.362Z · LW · GW

Thank you for pointing this out. I did not do my background check far enough back in time. This substantially weakens my case.

I am still inclined to be skeptical, and I have found another red flag. As far as I can tell, E. T. Jaynes is generally very highly regarded, and the only person who is critical of his book is David Chapman. This is just from doing a couple of searches on the Internet.

There are many people studying logic and probability. I would expect some of them would find it worthwhile to comment on this topic if they agreed with David Chapman.

Comment by SoerenE on Bayesian probability theory as extended logic -- a new result · 2017-07-07T09:24:21.949Z · LW · GW

I do not know enough about logic to be able to evaluate the argument. But from the Outside View, I am inclined to be skeptical about David Chapman:

DAVID CHAPMAN

"Describing myself as a Buddhist, engineer, scientist, and businessman (...) and as a pop spiritual philosopher“

Web-book in progress: Meaningness

Tagline: Better ways of thinking, feeling, and acting—around problems of meaning and meaninglessness; self and society; ethics, purpose, and value.

EDWIN THOMPSON JAYNES

Professor of Physics at Washington University

Most cited works:

Information theory and statistical mechanics - 10K citations

Probability theory: The logic of science - 5K citations

The tone of David Chapman's refutation:

E. T. Jaynes (...) was completely confused about the relationship between probability theory and logic. (...) He got confused by the word “Aristotelian”—or more exactly by the word “non-Aristotelian.” (...) Jaynes is just saying “I don’t understand this, so it must all be nonsense.”

Comment by SoerenE on On-line google hangout on approaches to communication around agi risk (2017/5/27 20:00 UTC) · 2017-05-29T06:30:51.626Z · LW · GW

My apologies for not being present. I did not put it into my calendar, and it slipped my mind. :(

Comment by SoerenE on Existential risk from AI without an intelligence explosion · 2017-05-26T11:53:30.458Z · LW · GW

You might also be interested in this article by Kaj Sotala: http://kajsotala.fi/2016/04/decisive-strategic-advantage-without-a-hard-takeoff/

Even though you are writing about the exact same subject, there is (as far as I can tell) no substantial overlap with the points you highlight. Kaj Sotala titled his blog post "(Part 1)" but never wrote a subsequent part.

Comment by SoerenE on On-line google hangout on approaches to communication around agi risk (2017/5/27 20:00 UTC) · 2017-05-23T05:49:02.507Z · LW · GW

Also, it looks like the last time slot is 2200 UTC. I can participate from 1900 onward.

I will promote this in the AI Safety reading group tomorrow evening.

Comment by SoerenE on On-line google hangout on approaches to communication around agi risk (2017/5/27 20:00 UTC) · 2017-05-22T10:30:32.827Z · LW · GW

The title says 2017/6/27. Should it be 2017-05-27?

Comment by SoerenE on Meetup : Superintellignce chapter 2 · 2017-03-16T10:10:18.983Z · LW · GW

Good luck with the meetup!

In the Skype-based reading group, we followed the "Ambitious" plan from MIRI's reading guide (https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf). We liked the plan. Among other things, the guide recommended splitting chapter 9 into two parts, and that was good advice.

Starting from chapter 7, I made slides appropriate for a 30-minute summary: http://airca.dk/reading_group.htm

Be sure to check out the comments from the Lesswrong reading group by Katja Grace: http://lesswrong.com/lw/kw4/superintelligence_reading_group/

Comment by SoerenE on AI Safety reading group · 2017-01-30T20:40:27.703Z · LW · GW

I think I agree with all your assertions :).

(Please forgive me for a nitpick: The opposite statement would be "Many humans have the ability to kill all humans AND AI Safety is a good priority". NOT (A IMPLIES B) is equivalent to A AND NOT B.)
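
For completeness, here is a minimal derivation sketch of that equivalence, with A = "many humans have the ability to kill all humans" and B = "worrying about AI Safety is a bad priority" (these letter assignments are mine, chosen to match the statement above):

    \neg(A \Rightarrow B)
      \;\equiv\; \neg(\neg A \lor B)   % material implication: A => B is (not A) or B
      \;\equiv\; A \land \neg B        % De Morgan's law plus double negation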

Comment by SoerenE on AI Safety reading group · 2017-01-30T20:10:51.540Z · LW · GW

There are no specific plans - at the end of each session we discuss briefly what we should read for next time. I expect it will remain a mostly non-technical reading group.

Comment by SoerenE on AI Safety reading group · 2017-01-30T15:55:40.645Z · LW · GW

Do you think Leo Szilard would have had more success through overt means (political campaigning to end the human race) or by surreptitiously adding kilotons of cobalt to a device intended for use in a nuclear test? I think both strategies would be unsuccessful (p<0.001 conditional on Szilard wishing to kill all humans).

I fully accept the following proposition: IF many humans currently have the capability to kill all humans THEN worrying about long-term AI Safety is probably a bad priority. I strongly deny the antecedent.

I guess the two most plausible candidates would be Trump and Putin, and I believe they are exceedingly likely to leave survivors (p=0.9999).

Comment by SoerenE on AI Safety reading group · 2017-01-30T14:50:51.154Z · LW · GW

The word 'sufficiently' makes your claim a tautology. A 'sufficiently' capable human is capable of anything, by definition.

Your claim that Leo Szilard probably could have wiped out the human race seems very far from the historical consensus.

Comment by SoerenE on AI Safety reading group · 2017-01-29T19:57:21.653Z · LW · GW

Good idea. I will do so.

Comment by SoerenE on Open thread, Oct. 03 - Oct. 09, 2016 · 2016-10-04T13:27:42.610Z · LW · GW

No, a Superintelligence is by definition capable of working out what a human wishes.

However, a Superintelligence designed to e.g. calculate digits of pi would not care about what a human wishes. It simply cares about calculating digits of pi.

Comment by SoerenE on May Outreach Thread · 2016-05-08T18:41:33.603Z · LW · GW

In a couple of days, we are hosting a seminar in Århus (Denmark) on AI Risk.

Comment by SoerenE on Lesswrong 2016 Survey · 2016-03-29T17:54:11.183Z · LW · GW

I have taken the survey.

Comment by SoerenE on Lesswrong 2016 Survey · 2016-03-29T17:49:07.421Z · LW · GW

Congratulations!

My wife is also pregnant right now, and I strongly felt that I should include my unborn child in the count.

Comment by SoerenE on [deleted post] 2016-03-04T19:30:40.296Z

This interpretation makes a lot of sense. The term can describe events that have a lot of Knightian Uncertainty, which a "Black Swan" like UFAI certainly has.

Comment by SoerenE on [deleted post] 2016-03-04T07:45:02.656Z

You bring up a good point, whether it is useful to worry about UFAI.

To recap, my original query was about the claim that p(UFAI before 2116) is less than 1% due to UFAI being "vaguely magical". I am interested in figuring out what that means - is it a fair representation of the concept to say that p(Interstellar before 2116) is less than 1% because interstellar travel is "vaguely magical"?

What would be the relationship between "Requiring Advanced Technology" and "Vaguely Magical"? Clarke's third law is a straightforward link, but "vaguely magical" has previously been used to indicate poor definitions, poor abstractions and sentences that do not refer to anything.

Comment by SoerenE on [deleted post] 2016-03-03T19:52:38.674Z

Many things are far beyond our current abilities, such as interstellar space travel. We have no clear idea of how humanity will travel to the stars, but the subject is neither "vaguely magical", nor is it true that the sentence "humans will visit the stars" does not refer to anything.

I feel that it is an unfair characterization of the people who investigate AI risk to say that they claim it will happen by magic, and that they stop the investigation there. You could argue that their investigation is poor, but it is clear that they have worked a lot to investigate the processes that could lead to Unfriendly AI.

Comment by SoerenE on [deleted post] 2016-02-20T15:20:22.405Z

Like Unfriendly AI, algae blooms are events that behave very differently from events we normally encounter.

I fear that the analogies have lost a crucial element. OrphanWilde considered Unfriendly AI "vaguely magical" in the post here. The algae bloom analogy also has very vague definitions, but the changes in population size of an algae bloom are a matter I would call "strongly non-magical".

I realize that you introduced the analogies to help make my argument precise.

Comment by SoerenE on [deleted post] 2016-02-19T20:16:47.133Z

Wow. It looks like light from James' spaceship can indeed reach us, even if light from us cannot reach the spaceship.

Comment by SoerenE on [deleted post] 2016-02-19T20:00:50.544Z

English is not my first language. I think I would put the accent on "reaches", but I am unsure what would be implied by having the accent on "super". I apologize for my failure to write clearly.

I now see the analogy with human reproduction. Could we stretch the analogy to claim 3, and call some increases in human numbers "super"?

The lowest estimate of the historical number of humans I have seen is from the Wikipedia article on population bottlenecks (https://en.wikipedia.org/wiki/Population_bottleneck), which claims a low of around 2,000 humans lasting for 100,000 years. Human numbers will probably reach a (mostly cultural) limit of 10,000,000,000. I feel that this development in human numbers deserves to be called "super".

The analogy could perhaps even be stretched to claim 4 - some places at some times could be characterized by "runaway population growth".

Comment by SoerenE on [deleted post] 2016-02-19T08:07:58.354Z

Intelligence, Artificial Intelligence and Recursive Self-improvement are likely poorly defined. But since we can point to concrete examples of all three, this is a problem in the map, not the territory. These things exist, and different versions of them will exist in the future.

Superintelligences do not exist, and it is an open question if they ever will. Bostrom defines superintelligences as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." While this definition has a lot of fuzzy edges, it is conceivable that we could one day point to a specific intellect, and confidently say that it is superintelligent. I feel that this too is a problem in the map, not the territory.

I was wrong to assume that you meant superintelligence when you wrote godhood, and I hope that you will forgive me for sticking with "superintelligence" for now.

Comment by SoerenE on [deleted post] 2016-02-19T07:22:35.225Z

I meant claim number 3 to be a sharper version of your claim: The AI will meet constraints, impediments and roadblocks, but these are overcome, and the AI reaches superintelligence.

Could you explain the analogy with human reproduction?

Comment by SoerenE on [deleted post] 2016-02-19T07:14:19.904Z

Thank you. It is moderately clear to me from the link that James' thought-experiment is possible.

Do you know of a more authoritative description of the thought-experiment, preferably with numbers? It would be nice to have an equation where you give the speed of James' spaceship and the distance to it, and calculate whether the required speed to catch it is above the speed of light.
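
For what it is worth, here is the standard textbook sketch, under my own assumption that the ship undergoes constant proper acceleration a (rather than moving at a fixed speed), starting a distance c^2/a from us:

    % Hyperbolic worldline of the ship:
    x_{\text{ship}}(t) = \sqrt{(c^2/a)^2 + c^2 t^2}

    % A light signal sent from x = 0 at time t_0 follows x = c\,(t - t_0).
    % Equating the two and solving gives the catch-up time
    t_{\text{catch}} = \frac{t_0^2 - c^2/a^2}{2\,t_0},

    % which is only a valid future time when t_0 < 0. Signals sent at t_0 >= 0
    % (light or anything slower) never reach the ship, while light emitted by the
    % ship toward us always arrives. For a = 1g, the horizon distance c^2/a is
    % roughly 0.97 light-years.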

Comment by SoerenE on [deleted post] 2016-02-18T20:14:58.429Z

Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly. Let us use the standard AI risk metric.

I feel that your sentence does refer to something: A hypothetical scenario. ("Godhood" should be replaced with "Superintelligence").

Is it correct that the sentence can be divided into these 4 claims?:

  1. An AI self-improves its intelligence
  2. The self-improvement becomes recursive
  3. An AI reaches superintelligence through 1 and 2
  4. This can happen in a process that can be called "runaway"

Do you mean that one of the probabilities is extremely small? (E.g., p(4 | 1 and 2 and 3) = 0.02.) Or do you mean that the statement is not well-formed? (E.g., Intelligence is poorly defined by the AI Risk theory.)

Comment by SoerenE on [deleted post] 2016-02-18T07:19:08.148Z

I've seen this claim many places, including in the Sequences, but I've never been able to track down an authoritative source. It seems false in classical physics, and I know little about relativity. Unfortunately, my Google-Fu is too weak to investigate. Can anyone help?

Comment by SoerenE on [deleted post] 2016-02-18T07:11:28.349Z

Could you elaborate on why you consider p(UFAI before 2116) < 0.01? I am genuinely interested.

Comment by SoerenE on Does Kolmogorov complexity imply a bound on self-improving AI? · 2016-02-14T19:59:04.511Z · LW · GW

It is an interesting way of looking at the maximal potential of AIs. It could be that Oracle Machines are possible in this universe, but an AI built by humans cannot self-improve to that point because of the bound you are describing.

I feel that the phrasing "we have reached the upper bound on complexity" and later "can rise many orders of magnitude" gives a potentially misleading intuition about how limiting this bound is. Do you agree that this bound does not prevent us from building "paperclipping" AIs?

Comment by SoerenE on Is Spirituality Irrational? · 2016-02-11T19:32:19.364Z · LW · GW

I am tapping out of this thread.

Comment by SoerenE on Is Spirituality Irrational? · 2016-02-11T06:29:29.207Z · LW · GW

It is possible to be extremely intelligent, and suffer from a delusion.

Comment by SoerenE on Beware surprising and suspicious convergence · 2016-01-29T07:50:48.668Z · LW · GW

My thoughts exactly.

When I first heard it, it sounded to me like a headline from BuzzFeed: This one weird trick will literally solve all your problems!

Turns out that the trick is to create an IQ 20000 AI, and get it to help you.

(Obviously, Suspicious <> Wrong)

Comment by SoerenE on A toy model of the treacherous turn · 2016-01-25T07:29:51.745Z · LW · GW

I've tried my hand at visualizing it:

http://i.imgur.com/VE0P8JY.png

This picture shows the very last instant that the shopkeeper can choose to reset Link.

There are a number of assumptions in my calculations, which might not be valid in the actual game. A key assumption is that arrows fly at 3 times walking speed.
The Shopkeeper will need to walk 1 tile north to reset Link. That requires the same amount of time as for an arrow to fly 3 tiles.

  • At T=0, Link starts moving north, and the arrow heading towards Link continues heading west.

  • At T=1, Link has moved 1/3rd of a tile north, and thus narrowly avoids the arrow. The arrow continues West. Link takes an openly treacherous turn: He changes to the bow and fires an arrow west, towards the shopkeeper.

  • At T=2, the arrow from the monster destroys the ice-block protecting the shopkeeper. Link's arrow continues towards the shopkeeper.

  • At T=3, Link's arrow hits the shopkeeper. If the shopkeeper was moving north the entire time, the shopkeeper hits the reset button at this time.

If the shopkeeper decided to go for the reset button at T=0, the reset and the death of the shopkeeper happen simultaneously, and the shopkeeper dies while Link is reset. Notice that a reset (-1000 points) followed by wireheading (+infinity) is a great move.

If Link moves north, and the shopkeeper immediately follows, Link can just move south again, to block the arrow. The openly treacherous turn at T=1 happens when it is too late for the shopkeeper to do anything about it.
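
As a sanity check on the timing, here is a minimal simulation sketch in Python, using only the assumptions stated above (one tick = the time to walk 1/3 of a tile = the time for an arrow to fly 1 tile); the constants are my reading of the scenario, so treat them as illustrative:

    # Timing of the openly treacherous turn, under the stated assumptions:
    # arrows fly at 3x walking speed, so one tick = walk 1/3 tile = arrow flies 1 tile.

    ARROW_FLIGHT_TICKS = 2       # Link fires at T=1 and the arrow lands at T=3
    WALK_TO_RESET_TICKS = 3      # the shopkeeper must walk 1 tile north to reset Link
    TREACHERY_REVEALED_AT = 1    # Link only switches to the bow at T=1
    ARROW_HITS_AT = TREACHERY_REVEALED_AT + ARROW_FLIGHT_TICKS

    def shopkeeper_outcome(reaction_tick: int) -> str:
        """Outcome if the shopkeeper starts walking north at `reaction_tick`."""
        reset_tick = reaction_tick + WALK_TO_RESET_TICKS
        if reset_tick < ARROW_HITS_AT:
            return "resets Link before the arrow lands"
        if reset_tick == ARROW_HITS_AT:
            return "reset and death happen simultaneously"
        return "is killed before reaching the reset button"

    for tick in (0, 1):
        print(f"Shopkeeper reacting at T={tick} {shopkeeper_outcome(tick)}.")
    # T=0 gives the simultaneous outcome described above; T=1 is already too late.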

I also like that an enemy is present in this visualization. It is easy to construct a story where a smart AI manipulates events until the shopkeeper must choose between trusting the AI and death.

Comment by SoerenE on Welcome to LessWrong (January 2016) · 2016-01-18T08:05:32.041Z · LW · GW

Thank you. That was exactly what I was after.

Comment by SoerenE on Welcome to LessWrong (January 2016) · 2016-01-15T20:38:12.284Z · LW · GW

Hi,

I've read some of "Rationality: From AI to Zombies", and find myself worrying about unfriendly strong AI.

Reddit recently had an AMA with the OpenAI team, where "thegdb" seems to misunderstand the concerns. Another user, "AnvaMiba" provides 2 links (http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better and http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/) as examples of researchers not worried about unfriendly strong AI.

The arguments presented in the links above are really poor. However, I feel like I am attacking a straw man - quite possibly, www.popsci.com is misrepresenting a more reasonable argument.

Where can I find some precise, well thought out reasons why the risk of human extinction from strong AI is not just small, but for practical purposes equal to 0? I am interested in both arguments from people who believe the risk is zero, and people who do not believe this, but still attempt to "steel man" the argument.

Comment by SoerenE on A toy model of the treacherous turn · 2016-01-13T10:44:09.099Z · LW · GW

I really like this visualization.

May I suggest another image, where the shopkeeper is in non-obvious danger:

To the left, the Shopkeeper is surrounded by ice-blocks, as in the images. All the way to the right, a monster is shooting arrows at Link, who is shooting arrows back at the monster. (The Gem-container is moved somewhere else.) Link, the Shopkeeper and the monster are on the same horizontal line. It looks like Link is about to heroically take an arrow that the monster aimed for the shopkeeper. The ice is still blocking, so the shopkeeper appears safe.

The problem is that Link can choose to go a bit north, dodging the next arrow from the monster. The monster's arrow will then destroy the ice. If Link immediately afterwards fires an arrow at the Shopkeeper, the shopkeeper will be killed, as arrows are faster than movement.

For this to work, I think the monster's arrow should be aiming at the southern-most part of the ice-block, so Link only has to move a tiny bit. Link can then shoot at the Shopkeeper, and proceed to wirehead himself.