There's No Fire Alarm for Artificial General Intelligence

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-10-13T21:38:16.797Z · LW · GW · 72 comments

What is the function of a fire alarm?

One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building.

In the classic experiment by Latane and Darley in 1968, eight groups of three students each were asked to fill out a questionnaire in a room that shortly after began filling up with smoke. Five out of the eight groups didn't react or report the smoke, even as it became dense enough to make them start coughing. Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time. This and other experiments seemed to pin down that what's happening is pluralistic ignorance. We don't want to look panicky by being afraid of what isn't an emergency, so we try to look calm while glancing out of the corners of our eyes to see how others are reacting, but of course they are also trying to look calm.

(I've read a number of replications and variations on this research, and the effect size is blatant. I would not expect this to be one of the results that dies to the replication crisis, and I haven't yet heard about the replication crisis touching it. But we have to put a maybe-not marker on everything now.)

A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, and you know you won't lose face if you proceed to exit the building.

The fire alarm doesn't tell us with certainty that a fire is there. In fact, I can't recall one time in my life when, exiting a building on a fire alarm, there was an actual fire. Really, a fire alarm is weaker evidence of fire than smoke coming from under a door.

But the fire alarm tells us that it's socially okay to react to the fire. It promises us with certainty that we won't be embarrassed if we now proceed to exit in an orderly fashion.

It seems to me that this is one of the cases where people have mistaken beliefs about what they believe, like when somebody loudly endorsing their city's team to win the big game will back down as soon as asked to bet. They haven't consciously distinguished the rewarding exhilaration of shouting that the team will win, from the feeling of anticipating the team will win.

When people look at the smoke coming from under the door, I think they think their uncertain wobbling feeling comes from not assigning the fire a high-enough probability of really being there, and that they're reluctant to act for fear of wasting effort and time. If so, I think they're interpreting their own feelings mistakenly. If that was so, they'd get the same wobbly feeling on hearing the fire alarm, or even more so, because fire alarms correlate to fire less than does smoke coming from under a door. The uncertain wobbling feeling comes from the worry that others believe differently, not the worry that the fire isn't there. The reluctance to act is the reluctance to be seen looking foolish, not the reluctance to waste effort. That's why the student alone in the room does something about the fire 75% of the time, and why people have no trouble reacting to the much weaker evidence presented by fire alarms.

* * *

It's now and then proposed that we ought to start reacting later to the issues of Artificial General Intelligence (background here), because, it is said, we are so far away from it that it just isn't possible to do productive work on it today.

(For direct argument about there being things doable today, see: Soares and Fallenstein (2014/2017); Amodei, Olah, Steinhardt, Christiano, Schulman, and Mané (2016); or Taylor, Yudkowsky, LaVictoire, and Critch (2016).)

(If none of those papers existed or if you were an AI researcher who'd read them but thought they were all garbage, and you wished you could work on alignment but knew of nothing you could do, the wise next step would be to sit down and spend two hours by the clock sincerely trying to think of possible approaches. Preferably without self-sabotage that makes sure you don't come up with anything plausible; as might happen if, hypothetically speaking, you would actually find it much more comfortable to believe there was nothing you ought to be working on today, because e.g. then you could work on other things that interested you more.)

(But never mind.)

So if AGI seems far-ish away, and you think the conclusion licensed by this is that you can't do any productive work on AGI alignment yet, then the implicit alternative strategy on offer is: Wait for some unspecified future event that tells us AGI is coming near; and then we'll all know that it's okay to start working on AGI alignment.

This seems to me to be wrong on a number of grounds. Here are some of them.

One: As Stuart Russell observed, if you get radio signals from space and spot a spaceship there with your telescopes and you know the aliens are landing in thirty years, you still start thinking about that today.

You're not like, "Meh, that's thirty years off, whatever." You certainly don't casually say "Well, there's nothing we can do until they're closer." Not without spending two hours, or at least five minutes by the clock, brainstorming about whether there is anything you ought to be starting now.

If you said the aliens were coming in thirty years and you were therefore going to do nothing today... well, if these were more effective times, somebody would ask for a schedule of what you thought ought to be done, starting when, how long before the aliens arrive. If you didn't have that schedule ready, they'd know that you weren't operating according to a worked table of timed responses, but just procrastinating and doing nothing; and they'd correctly infer that you probably hadn't searched very hard for things that could be done today.

In Bryan Caplan's terms, anyone who seems quite casual about the fact that "nothing can be done now to prepare" about the aliens is missing a mood; they should be much more alarmed at not being able to think of any way to prepare. And maybe ask if somebody else has come up with any ideas? But never mind.

Two: History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.

In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.

In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that if net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.

And of course if you're not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.

Were there events that, in hindsight, today, we can see as signs that heavier-than-air flight or nuclear energy were nearing? Sure, but if you go back and read the actual newspapers from that time and see what people actually said about it then, you'll see that they did not know that these were signs, or that they were very uncertain that these might be signs. Some playing the part of Excited Futurists proclaimed that big changes were imminent, I expect, and others playing the part of Sober Scientists tried to pour cold water on all that childish enthusiasm; I expect that part was more or less exactly the same decades earlier. If somewhere in that din was a superforecaster who said "decades" when it was decades and "5 years" when it was five, good luck noticing them amid all the noise. More likely, the superforecasters were the ones who said "Could be tomorrow, could be decades" both when the big development was a day away and when it was decades away.

One of the major modes by which hindsight bias makes us feel that the past was more predictable than anyone was actually able to predict at the time, is that in hindsight we know what we ought to notice, and we fixate on only one thought as to what each piece of evidence indicates. If you look at what people actually say at the time, historically, they've usually got no clue what's about to happen three months before it happens, because they don't know which signs are which.

I mean, you could say the words “AGI is 50 years away” and have those words happen to be true. People were also saying that powered flight was decades away when it was in fact decades away, and those people happened to be right. The problem is that everything looks the same to you either way, if you are actually living history instead of reading about it afterwards.

It's not that whenever somebody says "fifty years" the thing always happens in two years. It's that this confident prediction of things being far away corresponds to an epistemic state about the technology that feels the same way internally until you are very very close to the big development. It's the epistemic state of "Well, I don't see how to do the thing" and sometimes you say that fifty years off from the big development, and sometimes you say it two years away, and sometimes you say it while the Wright Flyer is flying somewhere out of your sight.

Three: Progress is driven by peak knowledge, not average knowledge.

If Fermi and the Wrights couldn't see it coming three years out, imagine how hard it must be for anyone else to see it.

If you're not at the global peak of knowledge of how to do the thing, and looped in on all the progress being made at what will turn out to be the leading project, you aren't going to be able to see, from your own knowledge alone, that the big development is imminent. Unless you are very good at perspective-taking in a way that wasn't necessary in a hunter-gatherer tribe, and very good at realizing that other people may know techniques and ideas of which you have no inkling even that you do not know them. If you don't consciously compensate for the lessons of history in this regard, then you will promptly say the decades-off thing. Fermi wasn't still thinking that net nuclear energy was impossible or decades away by the time he got to 3 months before he built the first pile, because at that point Fermi was looped in on everything and saw how to do it. But anyone not looped in probably still felt like it was fifty years away while the actual pile was fizzing away in a squash court at the University of Chicago.

People don't seem to automatically compensate for the fact that the timing of the big development is a function of the peak knowledge in the field, a threshold touched by the people who know the most and have the best ideas; while they themselves have average knowledge; and therefore what they themselves know is not strong evidence about when the big development happens. I think they aren't thinking about that at all, and they just eyeball it using their own sense of difficulty. If they are thinking anything more deliberate and reflective than that, and incorporating real work into correcting for the factors that might bias their lenses, they haven't bothered writing down their reasoning anywhere I can read it.

To know that AGI is decades away, we would need enough understanding of AGI to know what pieces of the puzzle are missing, and how hard these pieces are to obtain; and that kind of insight is unlikely to be available until the puzzle is complete. Which is also to say that to anyone outside the leading edge, the puzzle will look more incomplete than it looks on the edge. That project may publish their theories in advance of proving them, although I hope not. But there are unproven theories now too.

And again, that's not to say that people saying "fifty years" is a certain sign that something is happening in a squash court; they were saying “fifty years” sixty years ago too. It's saying that anyone who thinks technological timelines are actually forecastable, in advance, by people who are not looped in to the leading project's progress reports and who don't share all the best ideas about exactly how to do the thing and how much effort is required for that, is learning the wrong lesson from history. In particular, from reading history books that neatly lay out lines of progress and their visible signs that we all know now were important and evidential. It's sometimes possible to say useful conditional things about the consequences of the big development whenever it happens, but it’s rarely possible to make confident predictions about the timing of those developments, beyond a one- or two-year horizon. And if you are one of the rare people who can call the timing, if people like that even exist, nobody else knows to pay attention to you and not to the Excited Futurists or Sober Skeptics.

Four: The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.

Why do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:

(A) The author does not know how to build AGI using present technology. The author does not know where to start.

(B) The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.

(C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.

We've now considered some aspects of argument A. Let's consider argument B for a moment.

Suppose I say: "It is now possible for one comp-sci grad to do in a week anything that N+ years ago the research community could do with neural networks at all." How large is N?

I got some answers to this on Twitter from people whose credentials I don't know, but the most common answer was five, which sounds about right to me based on my own acquaintance with machine learning. (Though obviously not as a literal universal, because reality is never that neat.) If you could do something in 2012, period, you can probably do it fairly straightforwardly with modern GPUs, Tensorflow, Xavier initialization, batch normalization, ReLUs, and Adam or RMSprop or just stochastic gradient descent with momentum. The modern techniques are just that much better. To be sure, there are things we can't do now with just those simple methods, things that require tons more work, but those things were not possible at all in 2012.
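
As one concrete illustration, a minimal sketch along the following lines (assuming TensorFlow's bundled Keras API; the dataset and layer sizes are arbitrary choices for illustration, not anything from the discussion above) wires together exactly the ingredients just listed--Xavier/Glorot initialization, batch normalization, ReLUs, and Adam--into a working classifier:

from tensorflow import keras

# Load a small standard dataset (MNIST digits) and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small feedforward classifier built from the "modern tools" named above.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, kernel_initializer="glorot_uniform"),  # Xavier init
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer=keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

The point is not this particular toy task, but how little of the difficulty the practitioner has to carry: the initialization, the normalization, and the optimizer are all off-the-shelf defaults.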

In machine learning, when you can do something at all, you are probably at most a few years away from being able to do it easily using the future's much superior tools. From this standpoint, argument B, "You don't understand how hard it is to do what we do," is something of a non-sequitur when it comes to timing.

Statement B sounds to me like the same sentiment voiced by Rutherford in 1933 when he called net energy from atomic fission "moonshine". If you were a nuclear physicist in 1933 then you had to split all your atoms by hand, by bombarding them with other particles, and it was a laborious business. If somebody talked about getting net energy from atoms, maybe it made you feel that you were unappreciated, that people thought your job was easy.

But of course this will always be the lived experience for AI engineers on serious frontier projects. You don't get paid big bucks to do what a grad student can do in a week (unless you're working for a bureaucracy with no clue about AI; but that's not Google or FB). Your personal experience will always be that what you are paid to spend months doing is difficult. A change in this personal experience is therefore not something you can use as a fire alarm.

Those playing the part of wiser sober skeptical scientists would obviously agree in the abstract that our tools will improve; but in the popular articles they pen, they just talk about the painstaking difficulty of this year's tools. I think that when they're in that mode they are not even trying to forecast what the tools will be like in 5 years; they haven't written down any such arguments as part of the articles I've read. I think that when they tell you that AGI is decades off, they are literally giving an estimate of how long it feels to them like it would take to build AGI using their current tools and knowledge. Which is why they emphasize how hard it is to stir the heap of linear algebra until it spits out good answers; I think they are not imagining, at all, into how this experience may change over considerably less than fifty years. If they've explicitly considered the bias of estimating future tech timelines based on their present subjective sense of difficulty, and tried to compensate for that bias, they haven't written that reasoning down anywhere I've read it. Nor have I ever heard of that forecasting method giving good results historically.

Five: Okay, let's be blunt here. I don't think most of the discourse about AGI being far away (or that it's near) is being generated by models of future progress in machine learning. I don't think we're looking at wrong models; I think we're looking at no models.

I was once at a conference where there was a panel full of famous AI luminaries, and most of the luminaries were nodding and agreeing with each other that of course AGI was very far off, except for two famous AI luminaries who stayed quiet and let others take the microphone.

I got up in Q&A and said, "Okay, you've all told us that progress won't be all that fast. But let's be more concrete and specific. I'd like to know what's the least impressive accomplishment that you are very confident cannot be done in the next two years."

There was a silence.

Eventually, two people on the panel ventured replies, spoken in a rather more tentative tone than they'd been using to pronounce that AGI was decades out. They named "A robot puts away the dishes from a dishwasher without breaking them", and Winograd schemas (pronoun-resolution problems along the lines of "The trophy doesn't fit in the suitcase because it is too big", where saying what "it" refers to requires common sense). Specifically, "I feel quite confident that the Winograd schemas--where we recently had a result that was in the 50, 60% range--in the next two years, we will not get 80, 90% on that regardless of the techniques people use."

A few months after that panel, there was unexpectedly a big breakthrough on Winograd schemas. The breakthrough didn't crack 80%, so three cheers for wide credibility intervals with error margin, but I expect the predictor might be feeling slightly more nervous now with one year left to go. (I don't think it was the breakthrough I remember reading about, but Rob turned up this paper as an example of one that could have been submitted at most 44 days after the above conference and gets up to 70%.)

But that's not the point. The point is the silence that fell after my question, and that eventually I only got two replies, spoken in tentative tones. When I asked for concrete feats that were impossible in the next two years, I think that that's when the luminaries on that panel switched to trying to build a mental model of future progress in machine learning, asking themselves what they could or couldn't predict, what they knew or didn't know. And to their credit, most of them did know their profession well enough to realize that forecasting future boundaries around a rapidly moving field is actually really hard, that nobody knows what will appear on arXiv next month, and that they needed to put wide credibility intervals with very generous upper bounds on how much progress might take place twenty-four months' worth of arXiv papers later.

(Also, Demis Hassabis was present, so they all knew that if they named something insufficiently impossible, Demis would have DeepMind go and do it.)

The question I asked was in a completely different genre from the panel discussion, requiring a mental context switch: the assembled luminaries actually had to try to consult their rough, scarce-formed intuitive models of progress in machine learning and figure out what future experiences, if any, their model of the field definitely prohibited within a two-year time horizon. Instead of, well, emitting socially desirable verbal behavior meant to kill that darned hype about AGI and get some predictable applause from the audience.

I'll be blunt: I don't think the confident long-termism has been thought out at all. If your model has the extraordinary power to say what will be impossible in ten years after another one hundred and twenty months of arXiv papers, then you ought to be able to say much weaker things that are impossible in two years, and you should have those predictions queued up and ready to go rather than falling into nervous silence after being asked.

In reality, the two-year problem is hard and the ten-year problem is laughably hard. The future is hard to predict in general, our predictive grasp on a rapidly changing and advancing field of science and engineering is very weak indeed, and it doesn't permit narrow credible intervals on what can't be done.

Grace et al. (2017) surveyed the predictions of 352 presenters at ICML and NIPS 2015. Respondents’ aggregate forecast was that the proposition “all occupations are fully automatable” (in the sense that “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”) will not reach 50% probability until 121 years hence. Except that a randomized subset of respondents were instead asked the slightly different question of “when unaided machines can accomplish every task better and more cheaply than human workers”, and in this case held that this was 50% likely to occur within 44 years.

That's what happens when you ask people to produce an estimate they can't estimate, and there's a social sense of what the desirable verbal behavior is supposed to be.

* * *

When I observe that there's no fire alarm for AGI, I'm not saying that there's no possible equivalent of smoke appearing from under a door.

What I'm saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.

There's an old trope saying that as soon as something is actually done, it ceases to be called AI. People who work in AI and are in a broad sense pro-accelerationist and techno-enthusiast, what you might call the Kurzweilian camp (of which I am not a member), will sometimes rail against this as unfairness in judgment, as moving goalposts.

This overlooks a real and important phenomenon of adverse selection against AI accomplishments: If you can do something impressive-sounding with AI in 1974, then that is because that thing turned out to be doable in some cheap cheaty way, not because 1974 was so amazingly great at AI. We are uncertain about how much cognitive effort it takes to perform tasks, and how easy it is to cheat at them, and the first "impressive" tasks to be accomplished will be those where we were most wrong about how much effort was required. There was a time when some people thought that a computer winning the world chess championship would require progress in the direction of AGI, and that this would count as a sign that AGI was getting closer. When Deep Blue beat Kasparov in 1997, in a Bayesian sense we did learn something about progress in AI, but we also learned something about chess being easy. Considering the techniques used to construct Deep Blue, most of what we learned was "It is surprisingly possible to play chess without easy-to-generalize techniques" and not much "A surprising amount of progress has been made toward AGI."

Was AlphaGo smoke under the door, a sign of AGI in 10 years or less? People had previously given Go as an example of What You See Before The End.

Looking over the paper describing AlphaGo's architecture, it seemed to me that we were mostly learning that available AI techniques were likely to go further towards generality than expected, rather than about Go being surprisingly easy to achieve with fairly narrow and ad-hoc approaches. Not that the method scales to AGI, obviously; but AlphaGo did look like a product of relatively general insights and techniques being turned on the special case of Go, in a way that Deep Blue wasn’t. I also updated significantly on "The general learning capabilities of the human cortical algorithm are less impressive, less difficult to capture with a ton of gradient descent and a zillion GPUs, than I thought," because if there were anywhere we expected an impressive hard-to-match highly-natural-selected but-still-general cortical algorithm to come into play, it would be in humans playing Go.

Maybe if we'd seen a thousand Earths undergoing similar events, we'd gather the statistics and find that a computer winning the planetary Go championship is a reliable ten-year-harbinger of AGI. But I don't actually know that. Neither do you. Certainly, anyone can publicly argue that we just learned Go was easier to achieve with strictly narrow techniques than expected, as was true many times in the past. There's no possible sign short of actual AGI, no case of smoke from under the door, for which we know that this is definitely serious fire and now AGI is 10, 5, or 2 years away. Let alone a sign where we know everyone else will believe it.

And in any case, multiple leading scientists in machine learning have already published articles telling us their criterion for a fire alarm. They will believe Artificial General Intelligence is imminent:

(A) When they personally see how to construct AGI using their current tools. This is what they are always saying is not currently true in order to castigate the folly of those who think AGI might be near.

(B) When their personal jobs do not give them a sense of everything being difficult. This, they are at pains to say, is a key piece of knowledge not possessed by the ignorant layfolk who think AGI might be near, who only believe that because they have never stayed up until 2AM trying to get a generative adversarial network to stabilize.

(C) When they are very impressed by how smart their AI is relative to a human being in respects that still feel magical to them; as opposed to the parts they do know how to engineer, which no longer seem magical to them; aka the AI seeming pretty smart in interaction and conversation; aka the AI actually being an AGI already.

So there isn't going to be a fire alarm. Period.

There is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.

* * *

So far as I can presently estimate, now that we've had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.

By saying we're probably going to be in roughly this epistemic state until almost the end, I don't mean to say we know that AGI is imminent, or that there won't be important new breakthroughs in AI in the intervening time. I mean that it's hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won't know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky. Maybe researcher enthusiasm and funding will rise further, and we'll be able to say that timelines are shortening; or maybe we’ll hit another AI winter, and we'll know that's a sign indicating that things will take longer than they would otherwise; but we still won't know how long.

At some point we might see a sudden flood of arXiv papers in which really interesting and fundamental and scary cognitive challenges seem to be getting done at an increasing pace. Whereupon, as this flood accelerates, even some who imagine themselves sober and skeptical will be unnerved to the point that they venture that perhaps AGI is only 15 years away now, maybe, possibly. The signs might become so blatant, very soon before the end, that people start thinking it is socially acceptable to say that maybe AGI is 10 years off. Though the signs would have to be pretty darned blatant, if they’re to overcome the social barrier posed by luminaries who are estimating arrival times to AGI using their personal knowledge and personal difficulties, as well as all the historical bad feelings about AI winters caused by hype.

But even if it becomes socially acceptable to say that AGI is 15 years out, in those last couple of years or months, I would still expect there to be disagreement. There will still be others protesting that, as much as associative memory and human-equivalent cerebellar coordination (or whatever) are now solved problems, they still don't know how to construct AGI. They will note that there are no AIs writing computer science papers, or holding a truly sensible conversation with a human, and castigate the senseless alarmism of those who talk as if we already knew how to do that. They will explain that foolish laypeople don't realize how much pain and tweaking it takes to get the current systems to work. (Although those modern methods can easily do almost anything that was possible in 2017, and any grad student knows how to roll a stable GAN on the first try using the tf.unsupervised module in Tensorflow 5.3.1.)

When all the pieces are ready and in place, lacking only the last piece to be assembled by the very peak of knowledge and creativity across the whole world, it will still seem to the average ML person that AGI is an enormous challenge looming in the distance, because they still won’t personally know how to construct an AGI system. Prestigious heads of major AI research groups will still be writing articles decrying the folly of fretting about the total destruction of all Earthly life and all future value it could have achieved, and saying that we should not let this distract us from real, respectable concerns like loan-approval systems accidentally absorbing human biases.

Of course, the future is very hard to predict in detail. It's so hard that not only do I confess my own inability, I make the far stronger positive statement that nobody else can do it either. The “flood of groundbreaking arXiv papers” scenario is one way things could maybe possibly go, but it's an implausibly specific scenario that I made up for the sake of concreteness. It's certainly not based on my extensive experience watching other Earthlike civilizations develop AGI. I do put a significant chunk of probability mass on "There's not much sign visible outside a Manhattan Project until Hiroshima," because that scenario is simple. Anything more complex is just one more story full of burdensome details that aren't likely to all be true.

But no matter how the details play out, I do predict in a very general sense that there will be no fire alarm that is not an actual running AGI--no unmistakable sign before then that everyone knows and agrees on, that lets people act without feeling nervous about whether they're worrying too early. That's just not how the history of technology has usually played out in much simpler cases like flight and nuclear engineering, let alone a case like this one where all the signs and models are disputed. We already know enough about the uncertainty and low quality of discussion surrounding this topic to be able to say with confidence that there will be no unarguable socially accepted sign of AGI arriving 10 years, 5 years, or 2 years beforehand. If there’s any general social panic it will be by coincidence, based on terrible reasoning, uncorrelated with real timelines except by total coincidence, set off by a Hollywood movie, and focused on relatively trivial dangers.

It's no coincidence that nobody has given any actual account of such a fire alarm, and argued convincingly about how much time it means we have left, and what projects we should only then start. If anyone does write that proposal, the next person to write one will say something completely different. And probably neither of them will succeed at convincing me that they know anything prophetic about timelines, or that they've identified any sensible angle of attack that is (a) worth pursuing at all and (b) not worth starting to work on right now.

* * *

It seems to me that the decision to delay all action until a nebulous totally unspecified future alarm goes off, implies an order of recklessness great enough that the law of continued failure comes into play.

The law of continued failure is the rule that says that if your country is incompetent enough to use a plaintext 9-numeric-digit password on all of your bank accounts and credit applications, your country is not competent enough to correct course after the next disaster in which a hundred million passwords are revealed. A civilization competent enough to correct course in response to that prod, to react to it the way you'd want them to react, is competent enough not to make the mistake in the first place. When a system fails massively and obviously, rather than subtly and at the very edges of competence, the next prod is not going to cause the system to suddenly snap into doing things intelligently.

The law of continued failure is especially important to keep in mind when you are dealing with big powerful systems or high-status people that you might feel nervous about derogating, because you may be tempted to say, "Well, it's flawed now, but as soon as a future prod comes along, everything will snap into place and everything will be all right." The systems about which this fond hope is actually warranted look like they are mostly doing all the important things right already, and only failing in one or two steps of cognition. The fond hope is almost never warranted when a person or organization or government or social subsystem is currently falling massively short.

The folly required to ignore the prospect of aliens landing in thirty years is already great enough that the other flawed elements of the debate should come as no surprise.

And with all of that going wrong simultaneously today, we should predict that the same system and incentives won't produce correct outputs after receiving an uncertain sign that maybe the aliens are landing in five years instead. The law of continued failure suggests that if existing authorities failed in enough different ways at once to think that it makes sense to try to derail a conversation about existential risk by saying the real problem is the security on self-driving cars, the default expectation is that they will still be saying silly things later.

People who make large numbers of simultaneous mistakes don’t generally have all of the incorrect thoughts subconsciously labeled as "incorrect" in their heads. Even when motivated, they can't suddenly flip to skillfully executing all-correct reasoning steps instead. Yes, we have various experiments showing that monetary incentives can reduce overconfidence and political bias, but (a) that's reduction rather than elimination, (b) it's with extremely clear short-term direct incentives, not the nebulous and politicizable incentive of "a lot being at stake", and (c) that doesn't mean a switch is flipping all the way to "carry out complicated correct reasoning". If someone's brain contains a switch that can flip to enable complicated correct reasoning at all, it's got enough internal precision and skill to think mostly-correct thoughts now instead of later--at least to the degree that some conservatism and double-checking gets built into examining the conclusions that people know will get them killed if they’re wrong about them.

There is no sign and portent, no threshold crossed, that suddenly causes people to wake up and start doing things systematically correctly. People who can react that competently to any sign at all, let alone a less-than-perfectly-certain not-totally-agreed item of evidence that is likely a wakeup call, have probably already done the timebinding thing. They've already imagined the future sign coming, and gone ahead and thought sensible thoughts earlier, like Stuart Russell saying, "If you know the aliens are landing in thirty years, it's still a big deal now."

* * *

Back in the funding-starved early days of what is now MIRI, I learned that people who donated last year were likely to donate this year, and people who last year were planning to donate "next year" would quite often this year be planning to donate "next year". Of course there were genuine transitions from zero to one; everything that happens needs to happen for a first time. There were college students who said "later" and gave nothing for a long time in a genuinely strategically wise way, and went on to get nice jobs and start donating. But I also learned well that, like many cheap and easy solaces, saying the word "later" is addictive; and that this luxury is available to the rich as well as the poor.

I don't expect it to be any different with AGI alignment work. People who are trying to get what grasp they can on the alignment problem will, in the next year, be doing a little (or a lot) better with whatever they grasped in the previous year (plus, yes, any general-field advances that have taken place in the meantime). People who want to defer that until after there's a better understanding of AI and AGI will, after the next year's worth of advancements in AI and AGI, want to defer work until a better future understanding of AI and AGI.

Some people really want alignment to get done and are therefore now trying to wrack their brains about how to get something like a reinforcement learner to reliably identify a utility function over particular elements in a model of the causal environment instead of a sensory reward term or defeat the seeming tautologicalness of updated (non-)deference. Others would rather be working on other things, and will therefore declare that there is no work that can possibly be done today, not spending two hours quietly thinking about it first before making that declaration. And this will not change tomorrow, unless perhaps tomorrow is when we wake up to some interesting newspaper headlines, and probably not even then. The luxury of saying "later" is not available only to the truly poor-in-available-options.

After a while, I started telling effective altruists in college: "If you're planning to earn-to-give later, then for now, give around $5 every three months. And never give exactly the same amount twice in a row, or give to the same organization twice in a row, so that you practice the mental habit of re-evaluating causes and re-evaluating your donation amounts on a regular basis. Don't learn the mental habit of just always saying 'later'."

Similarly, if somebody was actually going to work on AGI alignment "later", I'd tell them to, every six months, spend a couple of hours coming up with the best current scheme they can devise for aligning AGI and doing useful work on that scheme. Assuming, if they must, that AGI were somehow done with technology resembling current technology. And publishing their best-current-scheme-that-isn't-good-enough, at least in the sense of posting it to Facebook; so that they will have a sense of embarrassment about naming a scheme that does not look like somebody actually spent two hours trying to think of the best bad approach.

There are things we’ll better understand about AI in the future, and things we’ll learn that might give us more confidence that particular research approaches will be relevant to AGI. There may be more future sociological developments akin to Nick Bostrom publishing Superintelligence, Elon Musk tweeting about it and thereby heaving a rock through the Overton Window, or more respectable luminaries like Stuart Russell openly coming on board. The future will hold more AlphaGo-like events to publicly and privately highlight new ground-level advances in ML technique; and it may somehow be that this does not leave us in the same epistemic state as having already seen AlphaGo and GANs and the like. It could happen! I can't see exactly how, but the future does have the capacity to pull surprises in that regard.

But before waiting on that surprise, you should ask whether your uncertainty about AGI timelines is really uncertainty at all. If it feels to you that guessing AGI might have a 50% probability in N years is not enough knowledge to act upon, if that feels scarily uncertain and you want to wait for more evidence before making any decisions... then ask yourself how you'd feel if you believed the probability was 50% in N years, and everyone else on Earth also believed it was 50% in N years, and everyone believed it was right and proper to carry out policy P when AGI has a 50% probability of arriving in N years. If that visualization feels very different, then any nervous "uncertainty" you feel about doing P is not really about whether AGI takes much longer than N years to arrive.

And you are almost surely going to be stuck with that feeling of "uncertainty" no matter how close AGI gets; because no matter how close AGI gets, whatever signs appear will almost surely not produce common, shared, agreed-on public knowledge that AGI has a 50% chance of arriving in N years, nor any agreement that it is therefore right and proper to react by doing P.

And if all that did become common knowledge, then P is unlikely to still be a neglected intervention, or AI alignment a neglected issue; so you will have waited until sadly late to help.

But far more likely is that the common knowledge just isn't going to be there, and so it will always feel nervously "uncertain" to consider acting.

You can either act despite that, or not act. Not act until it's too late to help much, in the best case; not act at all until after it's essentially over, in the average case.

I don't think it's wise to wait on an unspecified epistemic miracle to change how we feel. In all probability, you're going to be in this mental state for a while - including any nervous-feeling "uncertainty". If you handle this mental state by saying "later", that general policy is not likely to have good results for Earth.

* * *

Further resources:

72 comments

Comments sorted by top scores.

comment by Scott Garrabrant · 2017-10-16T22:43:58.876Z · LW(p) · GW(p)

If I roll a 20 sided die until I roll a 1, the expected number of times I will need to roll the die is 20. Also, according to my current expectations, immediately before I roll the 1, I expect myself to expect to have to roll 20 more times. My future self will say it will take 20 more times in expectation, when in fact it will only take 1 more time. I can predict this in advance, but I can't do anything about it.

I think everyone should spend enough time thinking about this to see why there is nothing wrong with this picture. This is what uncertainty looks like, and it had to be this way.
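
Restating the above in standard notation, this is the memorylessness of the geometric distribution. With success probability $p = 1/20$ on each roll, and $N$ the number of rolls up to and including the first 1:

$$\Pr(N = k) = (1-p)^{k-1} p, \qquad \mathbb{E}[N] = \frac{1}{p} = 20,$$

$$\Pr(N > k + m \mid N > k) = (1-p)^m = \Pr(N > m),$$

so however many failed rolls have already happened, the expected number of additional rolls is still $1/p = 20$.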

Replies from: TheWakalix, abe-dillon
comment by TheWakalix · 2017-10-24T00:36:59.304Z · LW(p) · GW(p)

Yes, but ideally our prediction methods would allow us to predict events more accurately than flipping a coin does.

comment by Abe Dillon (abe-dillon) · 2019-08-06T03:05:32.394Z · LW(p) · GW(p)

That's not how rolling a die works. Each roll is completely independent. The expected value of rolling a 20 sided die is 10.5 but there's no logical way to assign an expected outcome of any given roll. You can calculate how many times you'd have to roll before you're more likely than not to have rolled a specific value (1-P(specific value))^n < 0.5 so log(0.5)/log(1-P(specific_value)) < n. In this case P(specific_value) is 1/20 = 0.05. So n > log(0.5)/log(0.95) = 13.513. So you're more likely than not to have rolled a "1" after 14 rolls, but that still doesn't tell you what to expect your Nth roll to be.

I don't see how your die-rolling example supports a pacifist outlook. We're not rolling dice here. This is a subject we can study and gain more information about to understand the different outcomes better. You can't do that with a die. The outcomes of rolling a die are not so dire. Probability is quite useful for making decisions in the face of uncertainty if you understand it better.
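
A quick numerical sanity check of the 13.513 figure above, as a minimal sketch in plain Python (the variable names are arbitrary):

import math
import statistics
from random import randint

p = 1 / 20  # chance of rolling a 1 on a d20

# Smallest n such that P(at least one 1 in n rolls) exceeds 1/2.
threshold = math.log(0.5) / math.log(1 - p)
print(threshold, math.ceil(threshold))  # 13.513..., so 14 rolls

# Empirical check: the median number of rolls needed to see the first 1.
samples = []
for _ in range(100000):
    rolls = 1
    while randint(1, 20) != 1:
        rolls += 1
    samples.append(rolls)
print(statistics.median(samples))  # typically 14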

Replies from: None, Ian Televan
comment by [deleted] · 2019-12-21T07:56:50.951Z · LW(p) · GW(p)

You are saying the same thing as the comment you are replying to.

Replies from: abe-dillon
comment by Abe Dillon (abe-dillon) · 2019-12-30T03:48:59.412Z · LW(p) · GW(p)

How? The person I'm responding to gets the math of probability wrong and uses it to make a confusing claim that "there's nothing wrong" as though we have no more agency over the development of AI than we do over the chaotic motion of a die.

It's foolish to liken the development of AI to a roll of the dice. Given the stakes, we must try to study, prepare for, and guide the development of AI as best we can.

This isn't hypothetical. We've already built a machine that's more intelligent than any man alive and which brutally optimizes toward a goal that's incompatible with the good of mankind. We call it "Global Capitalism". There isn't a man alive who knows how to stock the shelves of stores all over the world with #2 pencils that cost only 2 cents each, yet it happens every day because *the system* knows how. The problem is: that system operates with a sociopathic disregard for life (human or otherwise) and has exceeded all limits of sustainability without so much as slowing down. It's a short-sighted, cruel leviathan and there's no human at the reins.

At this point, it's not about waiting for the dice to settle, it's about figuring out how to wrangle such a beast and prevent the creation of more.

Replies from: TurnTrout, None
comment by TurnTrout · 2019-12-30T04:09:08.301Z · LW(p) · GW(p)

"uses it to make a confusing claim that "there's nothing wrong" as though we have no more agency over the development of AI than we do over the chaotic motion of a die."

"It's foolish to liken the development of AI to a roll of the dice. Given the stakes, we must try to study, prepare for, and guide the development of AI as best we can."

I think you're misinterpreting the original comment. Scott was talking about there being "nothing wrong" with this conception of epistemic uncertainty before the 1 arrives, where each new roll doesn't tell you anything about when the 1 will come. He isn't advocating pacifism about AI risk, though. Ironically enough, in his capacity as lead of the Agent Foundations team at MIRI, Scott is arguably one of the least AI-risk-passive people on the planet.

comment by [deleted] · 2019-12-31T00:10:24.751Z · LW(p) · GW(p)

"The person I’m responding to gets the math of probability wrong."

No, they are correctly describing a geometric distribution, which is the right model for this situation and compatible with what you say too. AFAICT they’re not saying anything about AI or morality.

comment by Ian Televan · 2021-08-02T21:38:03.843Z · LW(p) · GW(p)

from random import randint

runs = 100000
total_rolls = runs  # count the final, successful roll of each run up front
for _ in range(runs):
    # keep rolling a d20 until a 1 comes up, counting the extra rolls
    while randint(1, 20) != 1:
        total_rolls += 1
print(total_rolls / runs)  # average number of rolls until the first 1

>>> 20.05751

comment by sarahconstantin · 2017-10-16T17:22:10.167Z · LW(p) · GW(p)

I'm one of the people who think AI is probably farther away than Eliezer does, and I think I owe a reply.

So let me go point by point here.

Do I think that nothing productive can be done on AI safety now? No; I think the MIRI and OpenAI research programs probably already include work relevant to safety.

Am I personally doing safety now? Nope; I'm doing applications. Mainly because it's the job I can do. But I assume that's not very relevant to Eliezer's point.

Do I think that one ought to feel a sense of urgency about AI right now? Well, that's a weird way of thinking about it, and I get stuck on the "one ought to feel" part -- is it ever true that you should feel a certain way? I have the sense that this is the road to Crazytown.

Eliezer points out that "the timing of the big development is a function of the peak knowledge in the field" -- i.e. just because I don't see any evidence in the literature that we currently have a strong AI in a secret lab in Google or Baidu or something, doesn't mean it might not already be here. This is technically true, but doesn't seem probabilistic enough. Absence of evidence is (weak) evidence of absence. "The real state of the art is much more advanced than the public state of the art" is always technically possible, and you can get Pascal's Mugged that way.

I find that I get negatively surprised about as often as positively surprised by reading new ML papers from top conferences -- half the time I'm like "we can do that now?!" and half the time I'm like "I'm surprised we couldn't do that already!" (For example, image segmentation with occlusions is way worse than I expected it to be.) If the secret research was much more advanced than the revealed research, I don't think I'd see that distribution of outcomes, unless the secretive researchers were much more strategic than I expect human beings to actually be.

Most of my "AI is far away" intuitions don't actually come with timelines. Embarrassingly enough, when people have pressed me for timelines, my asspull numbers have turned out to be inconsistent with each other. My asspull number guesses are not very good.

My intuitions come from a few sources:

a.) When I look at AI performance on particular benchmarks where we've seen progress, I usually see performance improvement that's linear in processing power and hence exponential in time. Basically, we're getting better because we're throwing more power at the problem (and because the combination of neural nets and GPUs allows us to grow with processing power rather than lag behind it, by parallelizing the problem.) The exceptions seem to be problems in their infancy, like speech recognition and machine translation, which may be improving faster than Moore's Law, but are so young that it's hard to tell. This tells me steady progress is happening, more or less one domain at a time, but calls into question whether the algorithmic improvements since backprop have made much difference at all.

b.) For a general intelligence, you'd need generalization ability from one problem to another, and most of what I've seen on meta-learning and generalization ability is that it's very very weak.

c.) I have some roughly cog-sci-influenced intuitions that intelligence is going to require something more "conceptual" or "model-like" than the "pattern recognition" we see in neural nets right now. (Basically, I believe what Joshua Tenenbaum believes.) In other words, I think we'll need novel theoretical breakthroughs, not just scaling up the algorithms we have today, to get strong AI. Can I prognosticate how long that'll take? No, not really. But when people make the strong claim of "5-10 years to strong AI, requiring no concepts not in already-published papers as of 2017", I feel confident that this is not true.

Replies from: Dr_Manhattan, TheWakalix
comment by Dr_Manhattan · 2017-10-17T16:48:28.723Z · LW(p) · GW(p)

Hi Sarah, not sure why you felt compelled to answer. Nothing in your reply suggests a contrary logical argument to the Fire Alarm; the only thing I can think of is Eliezer vaguely implying a shorter timeline and you vaguely implying a longer (or at least more diffuse) one. I didn't get the feeling EY implied AGI is possible by scaling current state of art. The argument about peak knowledge was also to explain the Fire Alarm mechanics, rather than imply that top people at Google have "it" already.

As far as your intuitions, I feel similarly about the cogsci stuff (from a lesser base of knowledge) but it should be noted that there's some idea exchange between the graphical models people like Josh and NN people. Also it's possible that NNs can be constructed to learn graphical models. (As an aside, it would be interesting to ask Josh what his distribution is. Josh begat Noah Goodman, Noah begat Andreas Struhmuller, who is quite reachable and in the LW network.)

Replies from: sarahconstantin, Zvi, Benito
comment by sarahconstantin · 2017-10-17T17:06:08.530Z · LW(p) · GW(p)

I guess I don't disagree with the "no fire alarm" thing. I have a policy that if it looks like I might be somebody's villain, I should show up and make myself available to get smited.

Good point re: talking to Andreas, I may do that one of these days.

Replies from: countingtoten
comment by countingtoten · 2017-11-06T22:21:11.151Z · LW(p) · GW(p)

I want to pursue this slightly. Before recent evidence - which caused me to update in a vague way towards shorter timelines - my uncertainty looked like a near-uniform distribution over the next century with 5% reserved for the rest of time (conditional on us surviving to AGI). This could obviously give less than a 10% probability for the claim "5-10 years to strong AI" and the likely destruction of humanity at that time. Are you really arguing for something lower, or are you "confident" the way people were certain (~80%) Hillary Clinton would win?

comment by Zvi · 2017-10-17T17:34:31.655Z · LW(p) · GW(p)

I think Eliezer is implying here that timelines may be short, or at least that the left tail is fatter than people want to admit, but I think the thing that Sarah feels compelled to respond to is more the vibe that you have no right to think there are long timelines. He's saying that in order to be confident in no strong AI within a few years, you need lots of concrete predictions and probabilities, or else you're just pulling things out of [the air] on request, without a model and without updating on evidence; and he's implying that recent evidence should update you in favor of sooner being more likely, rather than AGI getting one day later in expectation each day. In particular, his fifth point in response to the conference.

It felt off-putting enough to me that I decided to respond at length here to the associated analysis and logic, even though I too fully agree with no fire alarm and the need to act now and the fact that most people don't have models and so on.

I don't have enough knowledge of current ML to offer short term predictions that are worth anything, which is something I want to try and change, but in the meantime I don't think that means I can't make meaningful long term predictions, just that they'll be worse than they would otherwise be.

Replies from: TheWakalix
comment by TheWakalix · 2017-10-24T00:49:11.038Z · LW(p) · GW(p)

My take is that Eliezer is saying that we should be aware of the significant probability that AGI takes us unaware, and also that people don't tend to think enough about their claims. He's not saying "be certain that it will be soon," but rather "any claim that it will almost certainly take centuries is suspect if it cannot be backed up with specific, lower-level difficulty claims expressed through estimated times for certain goals to be reached." I'm not sure if this goes against your reading of the post, though.

comment by Ben Pace (Benito) · 2017-10-17T17:02:57.159Z · LW(p) · GW(p)

Yeah, I was also confused what disagreement Sarah was pointing to, but I thought maybe she was arguing that there was in fact a fire alarm, as she currently has models of AI development that say it's very far away without a conceptual breakthrough i.e. that conceptual breakthrough would be a fire alarm.

But this seems false, given that I've not heard many others state this fire alarm in particular (with all the details regarding "performance improvement that's linear in processing power and hence exponential in time" etc). Nonetheless I'd be happy to find out that there sort of is such a consensus.

comment by TheWakalix · 2017-10-24T00:43:03.895Z · LW(p) · GW(p)

"Do I think that one ought to feel a sense of urgency about AI right now? Well, that's a weird way of thinking about it, and I get stuck on the "one ought to feel" part -- is it ever true that you should feel a certain way? I have the sense that this is the road to Crazytown."

If the room is on fire, one ought to feel at least mildly concerned. If the room has a significant chance of being set on fire, one ought to feel somewhat less concerned but still not entirely okay with the prospect of a fiery death. It seems clear that one ought to be worried about future events to a degree proportional to their likelihood and adverse effects, or else face a greater chance of knowing about but ignoring a significant danger.

comment by Larks · 2017-10-14T03:21:35.921Z · LW(p) · GW(p)

Occasionally we run surveys of ML people. Would it be worth asking them what their personal fire alarm would be, or what they are confident will not be achieved in the next N years? This would force them to take a mental stance, which might produce some useful cognitive dissonance later, and would also allow us to potentially follow up with them.

Replies from: liam-donovan, AlexMennen, PDV
comment by Liam Donovan (liam-donovan) · 2019-12-08T17:28:00.420Z · LW(p) · GW(p)

Apparently an LW user did a series of interviews with AI researchers in 2011, some of which [LW · GW] included a similar question. I know most LW users have probably seen this, but I only found it today and thought it was worth flagging here.

comment by AlexMennen · 2017-10-15T00:02:11.045Z · LW(p) · GW(p)

Would it be worth asking them what their personal fire alarm would be, or what they are confident will not be achieved in the next N years?

If you ask about what would constitute a fire alarm for AGI, it might be useful to also ask how much advance warning the thing they come up with would give.

comment by PDV · 2017-10-14T08:14:09.249Z · LW(p) · GW(p)

I know I'm going to use the next 2 years thing.

comment by Scott Garrabrant · 2017-10-16T22:22:06.914Z · LW(p) · GW(p)

My understanding (which may be factually incorrect) of the Grace et al. survey is that the respondents who were asked about "all occupations are fully automatable" were also asked a long series of questions about when increasingly difficult occupations would be automatable. My default hypothesis is that the huge difference between 121 and 44 years was caused more by the self-consistency motivation to make each answer come sufficiently later than the previous one, and less by the politicization implied by adding connotations of unemployment.

This is still strong evidence that the respondents did not think for 5 minutes before taking the survey, but it is not quite as extreme as you make it sound.

Replies from: KatjaGrace
comment by KatjaGrace · 2017-10-17T22:42:45.893Z · LW(p) · GW(p)

Scott's understanding of the survey is correct. They were asked about four occupations (with three probability-by-year, or year-reaching-probability numbers for each), then for an occupation that they thought would be fully automated especially late, and the timing of that, then all occupations. (In general, survey details can be found at https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/)

comment by Charlie Steiner · 2017-10-14T22:05:53.140Z · LW(p) · GW(p)

Why do people react to fire alarms? It's not just that they're public - smoke is public too. One big factor is that we've had reacting to fire alarms drilled into us since childhood, a policy probably formulated after a few incidents of children not responding to fire alarms.

What this suggests is that even if signals are unclear, maybe what we really need is training. If some semi-arbitrary advance is chosen, people may or may not change their behavior when that advance occurs, depending on whether they have been successfully trained to be able to change their behavior.

On the other hand, we should already be working on AI safety, and so attempting to set up a fire alarm may be pointless - we need people to already be evacuating and calling the firefighters.

comment by SoerenMind · 2017-10-16T17:42:27.324Z · LW(p) · GW(p)

People had previously given Go as an example of What You See Before The End.

Who said this? I only heard of the prediction that it'll take 10+ years, made only a few years before 2015.

Replies from: Zvi
comment by Zvi · 2017-10-17T17:38:31.661Z · LW(p) · GW(p)

I don't remember it being phrased in that way, but I certainly had it on my list of What You See Before The End, and seeing Go fall when it did definitely made me far more worried, though the increase was much smaller than for a lot of other people I've talked to. It certainly was one of the big "this won't happen soon" predictions, and it happened faster than people expected, which of course is what you'd expect if advances follow Poisson-style distributions, where Go doesn't look any closer to being solved until it is suddenly solved.
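To illustrate that last point (a toy sketch with a made-up rate, not a claim about actual AI progress): if a breakthrough arrives as a memoryless Poisson process, the probability of "solved within the next year, given it hasn't happened yet" stays flat no matter how long you've been waiting.

```python
import math

RATE = 1.0 / 20.0  # made-up hazard rate: one expected breakthrough per 20 years

def p_solved_next_year_given_not_yet(years_waited: float) -> float:
    # For an exponential waiting time, P(T <= t + 1 | T > t) = 1 - exp(-RATE),
    # which does not depend on how long we have already waited.
    return 1.0 - math.exp(-RATE)

for t in [0, 5, 10, 15, 19]:
    print(f"after {t:2d} years of no progress: {p_solved_next_year_given_not_yet(t):.4f}")
# Every line prints the same ~0.0488: under this model the problem never looks
# any closer to being solved, right up until the year it is suddenly solved.
```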

comment by Stuart_Armstrong · 2017-10-14T06:37:56.921Z · LW(p) · GW(p)

Brilliant essay, but far too long.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2017-10-15T22:39:50.152Z · LW(p) · GW(p)

Which parts do you think are not needed?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-10-24T05:45:19.758Z · LW(p) · GW(p)

It seems to me you can keep all the parts, and all the points, and cut approximately half the paragraphs. The kind of "repetition until everyone gets all the subtle nuances of the points" can work well for shorter essays, but many people are not going to wade through all of this.

comment by indecisive · 2017-10-16T21:25:27.146Z · LW(p) · GW(p)

Thank you for this post. I had been waiting for warning signs; I think I will not do that anymore.

Note that the post is possibly laced with a few too many LessWrong/Overcoming Bias concepts to work for some people without the relevant background (e.g. "looking good to other people" as a large motivation in people's reasoning), but I'm not sure that's avoidable.

But it at least significantly increased my short-timeline probabilities, from very low to plausible. And it helped my System 1 feel a bit more urgency on this topic...

comment by Sailor_Vulcan · 2017-10-14T18:41:05.993Z · LW(p) · GW(p)

I just read this whole article last night; it really gives a sense of the scope and difficulty of humanity actually surviving. In fact, I wonder how likely anyone who hasn't read this article is to come up with a way to fix this whole "no fire alarm" mess. My first thought is that the solution is either to come up with a fire alarm (or something like it) that can actually work under these circumstances, or a way to win that doesn't involve the fire alarm, or to change the circumstances.

It sounds like the amount of rationality necessary for the majority of laypeople to understand all this and respond appropriately is too high to expect them to reach, generally speaking, because this is too many inferential steps away. Of course, maybe if you get a sufficiently large minority of laypeople to have honest thoughtful discussions about AI risk, that will be enough people to make a critical difference.

In fact, just yesterday I spoke to a layperson about AI risk and not only did they understand, but they believed me and agreed with me about how serious it is. And it was easy to get them to listen and understand, and they aren’t someone who I am close with nor someone who has reason to trust me more than anyone else. They were curious and interested and did not seem to be aware of any AI risks besides unemployment, prior to the conversation.

You want to reach people who have not already invested themselves in the subject and their preconceptions, I suspect. Part of the problem here is likely that too many people who are talking about AI are already invested in their opinions. A layperson who isn't completely batshit crazy might have trouble changing their mind, and might have crazy beliefs they refuse to scrutinize on a lot of other subjects, but if they haven't yet made up their mind about AI and you approach them about it right, using good communication skills and relating to their feelings, you can still get them to come to the right conclusions the first time around. You need to make the subject interesting to them and the conversation engaging, and do it in a way that doesn't make them feel helpless and hopeless, nor puts them on the defensive, but still explains things accurately.

If beforehand they actually have a reason to care besides wanting to save the world (because that alone will just make them feel overwhelmed with crushing responsibility), then things might go a bit better, maybe. If only really brave people are able to respond appropriately to AI risk, then we either need to improve people's bravery, or make it so that they don't need to be so brave to respond appropriately, or make it so that the people who are already being brave are able to make more of an impact.

Replies from: habryka4
comment by habryka (habryka4) · 2017-10-14T22:40:15.766Z · LW(p) · GW(p)

Formatting recommendation: Use more paragraphs. I was unable to read the thing above because it was too much of a giant wall of text.

comment by quanticle · 2017-10-14T08:44:29.458Z · LW(p) · GW(p)

The problem that I have is that Eliezer, along with MIRI and many other rationalists, automatically assumes that the eventuality of artificial intelligence exceeding human intelligence equates with that happening at great speed. What evidence do we have that AI will suddenly and radically exceed human capabilities in a generalized fashion in a short period of time? The AI advancements pointed to in the piece were all "narrow AIs", which progressed past human capabilities after a significant investment of time, research effort, and computational hardware. What, beyond some nameless fear, is causing Eliezer to say that AI will suddenly progress in a generalized fashion across all fronts, when everything until now has been progress along fairly narrow fronts?

On a broader level, I see an assumption constantly made that an AGI system will be a single system. What evidence do we have of AI being a single integrated system rather than multiple specialized systems, each of which do a single thing better than all humans, but none of which do everything better?

Replies from: RobbBB, Zvi, Viliam, lahwran
comment by Rob Bensinger (RobbBB) · 2017-10-14T11:20:16.137Z · LW(p) · GW(p)

People aren't assuming that AI exceeding human intelligence "equates with that occurring with great speed"; they're arguing for the latter point separately. E.g., see:

Or, for a much quicker and more impressionistic argument, this post on FB.

Another simple argument: "Human cognition didn't evolve to do biochemistry, nuclear engineering, or computer science; those capabilities just 'came for free' with the very different set of cognitive problems human brains evolved to solve in our environment of evolutionary adaptedness. This suggests that there's such a thing as 'general intelligence' in the sense that there's a kind of reasoning that lets you learn all those sciences without needing an engineer to specially design new brains or new brain modules for each new domain; and it's the kind of capacity that a blind engineering process like natural selection was able to stumble on while 'trying' to solve a very different set of problems."

Some other threads that bear directly on this question include:

  • What's the track record within AI, or in automation in general? When engineers try to outperform biology on some specific task (and especially on cognitive tasks), how often do they hit a wall at par-biology performance; and when they don't hit a wall, how often do they quickly shoot past biological performance on the intended dimension?

  • Are humans likely to be near an intelligence ceiling, or near a point where evolution was hitting diminishing returns (for reasons that generalize to AI)?

  • How hardware-intensive is AGI likely to be? How does this vary for, e.g., 10-year versus 30-year timelines?

  • Along how many dimensions might AGI improve on human intelligence? How likely is it that early AGI systems will be able to realize some of these improvements, and to what degree; and how easy is it likely to be to leverage easier advantages to achieve harder ones?

  • How tractable is technological progress (of the kind we might use AGI to automate) in general? More broadly, if you have (e.g.) AGI systems that can do the very rough equivalent of 1000 serial years of cognitive work by 10 collaborating human scientists over the span of a couple of years, how much progress can those systems make on consequential real-world problems?

  • If large rapid capability gains are available, how likely is it that actors will be willing (and able) to go slow? Instrumental convergence and Gwern's post on tool AIs are relevant here.

Each of these is a big topic in its own right. I'm noting all these different threads because I want to be clear about how many different directions you can go in if you're curious about this; obviously feel free to pick just one thread and start the discussion there, though, since all of this can be a lot to try to cover simultaneously, and it's useful to ask questions and start hashing things out before you've read literally everything that's been written on the topic.

Replies from: Kaj_Sotala, whpearson
comment by Kaj_Sotala · 2017-10-14T14:02:27.154Z · LW(p) · GW(p)

On the same topic, see also my paper How Feasible is the Rapid Development of Artificial Superintelligence (recently accepted for publication in the 21st Century Frontiers focus issue of Physica Scripta), in which I argue that the things that we know about human expertise and intelligence seem to suggest that the process of scaling up from human-level intelligence to superhuman qualitative intelligence might be relatively fast and simple.

comment by whpearson · 2017-10-15T12:54:27.123Z · LW(p) · GW(p)

How tractable is technological progress (of the kind we might use AGI to automate) in general? More broadly, if you have (e.g.) AGI systems that can do the very rough equivalent of 1000 serial years of cognitive work by 10 collaborating human scientists over the span of a couple of years, how much progress can those systems make on consequential real-world problems?

How much science is cognitive work vs running an experiment in the real world? Have there been attempts to quantify that?

Replies from: RobbBB, ciphergoth
comment by Rob Bensinger (RobbBB) · 2017-10-15T18:25:46.914Z · LW(p) · GW(p)

MIRI and other people thinking about strategies for ending the risk period use "how much physical experimentation is needed, how fast can the experiments be run, how much can they be parallelized, how hard is it to build and operate the equipment, etc.?" as one of the key criteria for evaluating strategies. The details depend on what technologies you think are most likely to be useful for addressing existential risk with AGI (which is not completely clear, though there are plausible ideas out there). We expect a lot of speed advantages from AGI, so the time cost of experiments is an important limiting factor.

Replies from: whpearson, None
comment by whpearson · 2017-10-15T19:33:13.218Z · LW(p) · GW(p)

Are there any organisations set up to research this kind of question (going into universities and studying research)? I'm wondering if we need a specialism called something like AI prediction, which aims to get this kind of data.

Replies from: RobbBB, habryka4
comment by Rob Bensinger (RobbBB) · 2017-10-17T01:45:49.359Z · LW(p) · GW(p)

If this topic interests you, you may want to reach out to the Open Philanthropy Project, as they're interested in supporting efforts to investigate these questions in a more serious way.

Replies from: whpearson
comment by whpearson · 2017-10-17T21:12:05.795Z · LW(p) · GW(p)

Hi Rob, I had hoped to find people I could support. I am interested in the question; I'll see if I think it is more important than the other questions I am interested in.

comment by habryka (habryka4) · 2017-10-15T19:34:49.976Z · LW(p) · GW(p)

AI impacts has done some research in this area, I think.

Replies from: whpearson
comment by whpearson · 2017-10-15T20:05:06.963Z · LW(p) · GW(p)

They look like they are set up for researching existing literature and doing surveys, but they are not necessarily set up to do studies that collect data in labs.

The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence.

They are still part of the orient step, rather than the observation step.

But still lots of interesting things. Thanks for pointing me at them.

comment by [deleted] · 2019-12-21T08:12:24.628Z · LW(p) · GW(p)
We expect a lot of speed advantages from AGI

Are there any reasons for this expectation? In software development generally and machine learning specifically it often takes much longer to solve a problem for the first time than successive instances. The intuition this primes is that a proto-AGI is likely to stumble and require manual assistance a lot the first time it attempts any one Thing, and generally the Thing will take longer to do with an AI than without. The advantage of course is that afterwards similar problems are solved quickly and efficiently, which is what makes working on AI pay off.

AFAICT, the claim that any form of not-yet-superhuman AGI will quickly, efficiently, and autonomously solve the problems it encounters in solving more and more general classes of problems (aka "FOOM") is entirely ungrounded.

comment by Paul Crowley (ciphergoth) · 2017-10-15T22:17:28.070Z · LW(p) · GW(p)

Dawkins's "Middle World" idea seems relevant here. We live in Middle World, but we investigate phenomena across a wide range of scales in space and time. It would at least be a little surprising to discover that the pace at which we do it is special and hard to improve on.

Replies from: whpearson
comment by whpearson · 2017-10-16T17:40:24.823Z · LW(p) · GW(p)

I agree that research can probably be improved upon quickly and easily. Lab on a chip is obviously a way we are doing that currently. If an AGI system has got the backing of a large company or country and can get new things fabbed in secrecy it can improve on these kinds of things.
But I still think it is worth trying to quantify things. We can get ideas about stealth scenarios where the AGI is being developed by non-state/megacorp actors that can't easily fab new things. We can also get ideas about how useful things like lab on a chip are for speeding up the relevant science. Are we out of low hanging fruit and is it taking us more effort to find novel interesting chemicals/materials?

comment by Zvi · 2017-10-15T12:12:01.663Z · LW(p) · GW(p)

I note that this had been downvoted into negative karma and want to push back on that. A lot of us feel like You Should Know This Already but many don't, many others disagree, and it's a very important question that the OP skips.

This feels like an excellent place for someone to ask for the evidence, and Rob does a good job of providing it, and I don't want people to feel punished for asking such questions, nor do I want the question and Rob's answer to become hidden.

If anything, it seems odd that our front page resources don't currently include an easy pointer towards such evidence.

Replies from: dxu, Benito, TheWakalix
comment by dxu · 2017-10-15T19:15:59.486Z · LW(p) · GW(p)

This feels like an excellent place for someone to ask for the evidence

Was that really what the grandparent comment was doing, though? The impression I got was that the original commenter was simply using the question as a rhetorical device in order to reinforce the (false) impression that MIRI et al. "automatically assume" things relating to the growth curve of superintelligent AI, and that kind of rhetoric is certainly not something I want to encourage.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2017-10-15T19:38:07.797Z · LW(p) · GW(p)

I think Quanticle should have asked questions, rather than making strong claims like "MIRI automatically assumes p" without looking into the issue more. On the other hand, I'm glad that someone raised issues like these in this comments section (given that disagreements and misunderstandings on these issues are pretty common), and I care more about the issues getting discussed and about people sharing their current epistemic state than about punishing people who rushed to a wrong conclusion or had tone issues or whatever. (If I were the Emperor of Karma I might allocate something like 'net +2 karma' to mildly reward this level of openness and directness, without over-rewarding lack-of-scholarship etc.)

Re 'maybe this was all a ploy / rhetorical device', I'm skeptical that that's true in any strong/unusual sense. I also want to discourage treating it as normal, at the outset of a debate over some set of factual issues, to publicly speculate that the person on the other side has bad motives (in an accusatory/critical/dismissive/etc. way). There may be extreme cases where we're forced to do that at the outset of a factual debate, but it should be pretty rare, given what it can do to discussion when those sorts of accusations are commonplace.

Replies from: dxu
comment by dxu · 2017-10-16T09:54:53.139Z · LW(p) · GW(p)

Re 'maybe this was all a ploy / rhetorical device', I'm skeptical that that's true in any strong/unusual sense.

I don't think that there's anything particularly unusual about someone asking "Is there any evidence for claim X?" to imply that, no, there is not enough evidence for claim X. Rhetorical questions are such a common argumentative technique that you can sometimes employ them without even being consciously aware of it. That still doesn't make it the kind of style of discourse I approve of, however, and downvoting is a compact way of expressing that disapproval.

I also want to discourage treating it as normal, at the outset of a debate over some set of factual issues, to publicly speculate that the person on the other side has bad motives (in an accusatory/critical/dismissive/etc. way).

To be clear, I didn't reply to the original comment at all; my initial comment upthread was written purely in response to Zvi's allegation that the downvoters of Quanticle's comment were being unfair and/or petty. I disagreed with the notion that there was no valid reason to downvote, and I replied for that reason and that reason only. I certainly didn't intend my comment to be interpreted as "public speculation" regarding Quanticle's motives, only as an indication that the phrasing used could give the impression of bad motives, which I think is just as important as whether it was actually intended that way. More generally, however:

You said that the substance of a comment is more important than its tone, and I certainly don't disagree, but that still doesn't mean that issues relating to tone are unimportant. In fact, I'd go so far as to say that the way a commenter phrases certain things can very strongly shape the subsequent flow of discussion, and that in some cases the effects are strong enough to outweigh the actual substance of their comment entirely, especially when there's little to no substance to begin with (as in this case). Given that, I think voting based on such "ephemeral" considerations as tone and phrasing is just as valid as any other means of voting, and I take issue with the idea that you can't downvote and/or criticize someone for anything other than the purely denotative meaning of their statements.

comment by Ben Pace (Benito) · 2017-10-15T13:56:05.952Z · LW(p) · GW(p)

If anything, it seems odd that our front page resources don't currently include an easy pointer towards such evidence.

Working on it :-)

comment by TheWakalix · 2017-10-24T00:57:31.163Z · LW(p) · GW(p)

How many people downvoted because Grr I Disagree and How Dare They Question EY, and how many because they didn't like how the poster leaped from "I haven't seen any reason to think this" to "they must be automatically assuming this"? This wasn't so much asking for evidence as... assuming there was none. That tends to annoy people, and I think it makes sense on a productive-discourse level to discourage such posts. On a teach-the-newcomer-and-don't-drive-them-out-for-not-being-perfectly-rational-yet level, however, I think you have a point.

comment by Viliam · 2017-10-14T22:39:08.966Z · LW(p) · GW(p)

As an intuition pump, imagine that to become "fully human", whatever that means, the AI needs to have three traits, let's call them "A", "B", and "C", whatever those could be. It seems unlikely that the AI would gain all those three traits in the same version. More likely, first we will get an AI 1.0 that has the trait "A", but lacks traits "B" and "C". At some later time, we will have an AI 2.0 that has traits "A" and "B", but still lacks "C".

Now, the important thing is that if the AI 1.0 had the trait "A" at human level, the scientific progress at "A" probably still continued, so the AI 2.0 most likely already has the trait "A" at super-human level. So it has super-human "A", human-level "B", but still lacks "C".

And for the same reason, when later we have an AI 3.0 that finally has all the traits "A", "B", and "C" at least at the human level, it will likely already have traits "A" and "B" at super-human level; the trait "A" probably at insanely-super-human level. In other words, the first "fully human" AI will actually already be super-human in some aspects, simply because it is unlikely that all aspects would reach the human level at the same time.

For example, the first "fully human" AI will easily win the chess tournaments, simply because current AIs can already win them. Instead of some average Joe, the first "fully human" AI will be, at least, some kind of super-Kasparov.

How dangerous is it to have some super-human traits while being "fully human" otherwise? That depends on the trait. Notice how much smarter people can act when you simply give them more time, or better memory (e.g. having paper, or some personal wiki software), or the ability to work in groups. If the AI is otherwise like a human, except e.g. 100 times faster, or able to keep a wiki in its head, or able to do real multitasking (i.e. split into a few independent processes and explore the issue from multiple angles at the same time), that could already make it quite smart. (Imagine how your life would change if at any moment you could magically take an extra hour to think about stuff, having all relevant books in your head, and an invisible team of equally smart discussion partners.) And these are just quite boring examples of human traits; it could also be e.g. a 100 times greater ability to recognize patterns, or perhaps some trait we don't have a word for yet.

Generally, the idea is that "an average human" is a tiny dot on the intelligence scale. If you make small jumps upwards on the scale, the chance that one of your jumps will end exactly at this dot is very small. More likely, your jumps will repeatedly end below this dot, until at some moment the next jump takes you over it, to some higher place. Humans have many "hardware" limitations that keep them in a narrow interval, such as brains made of meat working at a frequency of 200 Hz, or heads small enough to allow childbirth. The AI will have none of these. It will have its own hardware limits, but those will follow different rules. So it seems possible that e.g. the AI built in 2022 will work at the human equivalent of 20 Hz, and the AI built in 2023 will work at the human equivalent of 2000 Hz, simply because someone invented a smart algorithm allowing much faster simulation of neurons, or used a network of a thousand computers instead of one supercomputer, or coded the critical parts in C++ instead of Lisp and ran them on a GPU, etc.
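A quick Monte Carlo sketch of that "jumping past the dot" intuition (all numbers invented purely for illustration): if capability grows in discrete jumps of unpredictable size, the first version to reach the human dot almost never lands exactly on it, and is typically already some way past it.

```python
import random

random.seed(0)
HUMAN_LEVEL = 100.0   # the "tiny dot" on the scale, in arbitrary units
TOLERANCE = 0.5       # how close counts as landing "exactly" on the dot

def first_version_reaching_human_level() -> float:
    level = 0.0
    while level < HUMAN_LEVEL:
        level += random.uniform(1.0, 15.0)   # one release cycle's jump, size unknown in advance
    return level

trials = 100_000
exact_hits = 0
total_overshoot = 0.0
for _ in range(trials):
    level = first_version_reaching_human_level()
    if level - HUMAN_LEVEL <= TOLERANCE:
        exact_hits += 1
    total_overshoot += level - HUMAN_LEVEL

print("fraction landing roughly on the human dot:", exact_hits / trials)
print("average overshoot of the first 'human-level' version:", total_overshoot / trials)
```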

But perhaps it is enough to accept that the first "fully human" AI could beat you at chess even in its sleep, and ask yourself what the chance is that chess would be the only such example.

Replies from: thisheavenlyconjugation
comment by thisheavenlyconjugation · 2017-10-15T13:51:34.144Z · LW(p) · GW(p)

"For example, the first "fully human" AI will easily win the chess tournaments, simply because current AIs can already win them."
No, they can't. *Chess playing programs* can easily win tournaments; self-driving cars and sentiment analysers can't. An AGI that had the ability to run a chess-playing program would be able to win, but the same applies to humans with the same ability.

comment by the gears to ascension (lahwran) · 2017-10-16T02:20:15.594Z · LW(p) · GW(p)

+1 to this disagreement. ML methods seem to indicate that it's not quite that simple - they don't soar beyond human level just from self-improvement; when pointed at themselves they do self-improve, but very slowly relative to the predictions. I do think it's possible, but it no longer seems like an obvious thing, the way it may have before ML became a real thing.

With respect to being a single system: it's consistently the case that end-to-end learned neural networks are better at their jobs than disparately trained networks plugged together, which are in turn better at their jobs than neural networks that can only communicate via one-hot vectors (e.g., agent populations communicating in English, or AI systems like the heavily multimodal Alexa bots). The latter can be much easier to make, but in turn top out much earlier.
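One mechanism behind that ordering (a rough sketch, and only one of several factors): a hard one-hot interface blocks gradient flow between modules, so the pieces can't be tuned jointly the way an end-to-end system can.

```python
import torch

# Two tiny modules that must communicate through an 8-symbol message.
enc = torch.nn.Linear(16, 8)
dec = torch.nn.Linear(8, 1)
x = torch.randn(32, 16)

# Soft (end-to-end) interface: the message is a continuous softmax vector,
# so the decoder's loss sends gradients all the way back into the encoder.
loss_soft = dec(torch.softmax(enc(x), dim=-1)).pow(2).mean()
loss_soft.backward()
print("encoder grad norm, soft interface:", enc.weight.grad.norm().item())

# Hard one-hot interface (discrete symbols, e.g. tokens): argmax is not
# differentiable, so the encoder gets no training signal from the decoder.
enc.zero_grad(set_to_none=True)
message = torch.nn.functional.one_hot(enc(x).argmax(dim=-1), num_classes=8).float()
loss_hard = dec(message).pow(2).mean()
loss_hard.backward()
got_signal = enc.weight.grad is not None and enc.weight.grad.abs().sum().item() > 0
print("encoder got any training signal through hard interface:", got_signal)  # False
```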

comment by MondSemmel · 2017-10-14T15:57:53.146Z · LW(p) · GW(p)

There's something right about seeing a new EY essay on Less Wrong. Here's hoping the LW revival will have some measure of success.

Essay feedback: I appreciate the density of the arguments, which might not have worked at a shorter length. Still wish there had been a summary as well, but I suppose something like that might already exist elsewhere*, and this essay just expands on it. It might also help to provide the various chapters / sections with headings. Finally, I wish the writing had been a bit more humorous (the Sequences felt easier to read, for me; and Scott Alexander also uses microhumor to make his long essays more hedonically rewarding to read), but I understand that could be perceived as off-putting by (part of) the actual target audience, i.e. Serious People / actual AI researchers.

* e.g. AFAIK several paper abstracts by the AI alignment community mention the general challenge of forecasting technological developments.

Replies from: Raemon
comment by Raemon · 2017-10-14T19:27:08.928Z · LW(p) · GW(p)

I had the weird experience of "at the end of each section, I felt I understood the point, but at the end of each subsequent section I was happy to have gotten additional points of clarification."

I assume the essay is oriented around something like "hopefully you keep reading until you are convinced, addressing more and more specific points as it goes along."

comment by Nisan · 2021-09-22T23:57:03.158Z · LW(p) · GW(p)

Typo: "common, share, agreed-on" should be "...shared...".

comment by dsj · 2022-12-28T19:57:02.292Z · LW(p) · GW(p)

To me, ChatGPT "seem[s] pretty smart in interaction and conversation". Does this mean it's "actually … an AGI already", or is my perception wrong?

comment by [deleted] · 2022-11-02T21:04:20.010Z · LW(p) · GW(p)

I believe the shortest route to AGI is RL; PPO is a great example. Transformer language models just try to replicate/predict what other people would do. I've worked on some PPO modifications and watched them "grow", and that was the closest I ever was to actual artificial intelligence, because you see the agent learn new, undiscovered methods of reaching goals, for example finding bugs in the physics engine. Since movement behavior is not as descriptive to most people, language models will be boring until we let agents out into the world in disguise and let them interact for endless hours with actual people. They will still be limited by the number of sensory inputs/memories, but that is something to start with.

Meanwhile, robotics and environmental agents should get as much experience as possible in well-formed environments. I think we are lacking well-formed environments, and a lot of work should be focused on creating them. Creating them manually, one by one, might be too slow; a step up would be creating an engine that can generate environments for ML agents based on our needs. And eventually we will need a number of MMORPG-like games with a built-in physics environment, probably metaverse-like projects. Agents should have the same sensory inputs as humans do, and all of them should be available in those environments.

The whole Safety thing can eventually lead to greater threats, since agents will grow up in environments alien to humans. Agents won't reach an optimal CEV state if the goals and values of their own environment do not align with human goals and values.

If we push Safety to extremes, at some point we will die out and new generations will come to a point where they will give up on Safety in order to proceed a step further.

I just joined this forum and am slowly getting a grip on the local mood; as time goes on, I will elaborate my point of view on most topics.

comment by Kustogor (savelii-kustogorov) · 2022-08-29T07:55:15.237Z · LW(p) · GW(p)

The fire alarm was created to answer a frequent and observable threat. An existential threat, by definition, is not one.

While AGI is theoretically possible, there is no certainty in that (when was the last time AGI emerged?). There is no fire alarm in the space industry against alien invasion, because until you spot the first spaceship you cannot be sure that aliens exist at all.

Therefore I don't think there can be a fire alarm for AGI in the sense of a system sending a common signal to the industry. The better comparison is with tools for preventing something that has never happened.
These kinds of systems were designed for the first nuclear reactor, but other than that, I guess mostly religions create tools for this kind of threat.

 

comment by avturchin · 2017-11-06T14:32:23.473Z · LW(p) · GW(p)

The interesting question is what such a "fire alarm event", one that would immediately change public attitudes, could be.

I see two variants:

1) The appearance of home robots able to speak, activating an uncanny-valley reaction in humans, who will start to feel the danger.

2) An AI-related catastrophic accident, like a narrow-AI virus attack on a nuclear power station. The event could be called an "AI Chernobyl".

Globally catastrophic AI could, unfortunately, appear without any such harbingers.

It may be useful to try to have a dialogue with AI sceptics about what should be regarded as an AI fire alarm.

comment by Kevin Van Horn (kevin-van-horn) · 2017-10-19T20:41:36.792Z · LW(p) · GW(p)

The relevant question is not, "How long until superhuman AI?", but "Can we solve the value alignment problem before that time?" The value alignment problem looks very difficult. It probably requires figuring out how to create bug-free software... so I don't expect a satisfactory solution within the next 50 years. Even if we knew for certain that superhuman AI wouldn't arrive for another 100 years, it would make sense to be putting some serious effort into solving the value alignment problem now.

comment by David_Bolin · 2017-10-15T17:12:21.579Z · LW(p) · GW(p)

"If that was so, they'd get the same wobbly feeling on hearing the fire alarm, or even more so, because fire alarms correlate to fire less than does smoke coming from under a door. "

I do get that feeling even more so, in exactly that situation. I therefore normally do not respond to fire alarms.

comment by Rain · 2017-10-14T03:13:20.064Z · LW(p) · GW(p)

This article is very heavy with Yudkowsky-isms and repeats of stuff he's posted before, and it needs a good summary and editing to pare it down. I'm surprised they posted it to the MIRI blog in its current form.

Edit: As stated below, I agree with all the points of the article, and consider it an important message.

Replies from: Robert Barlow, dxu
comment by Robert Barlow · 2017-10-14T03:57:04.312Z · LW(p) · GW(p)

I'll agree that it's more than a little redundant, especially when I understood the point he was getting at in the first part. But how much of that is the fault of his writing here, and how much of it is the fault of the fact that he's written about the issue before? And, more importantly, if you were to hand this article to someone who knows nothing about Yudkowsky or Less Wrong, would that extra length help them? I'd argue that a lot of the article's length comes from trying to avoid some of his most common problems - instead of referencing a post where he'd said something before, he would explain the concept from the ground up using a different metaphor (which is good for those of us who don't want to scrape through old Facebook rants).

Either way, in its current form it is definitely not out of place for rhetoric, and more than enough for Less Wrong.

Replies from: Rain
comment by Rain · 2017-10-14T15:43:55.100Z · LW(p) · GW(p)

I agree it fits well here. However, it has a very different tone from other posts on the MIRI blog, where it has also been posted.

comment by dxu · 2017-10-14T04:36:12.221Z · LW(p) · GW(p)

There is constructive criticism, and there is non-constructive criticism. My personal heuristic for determining whether a given critic is being constructive is to look at (a) how specific they are about the issues they perceive, and (b) whether they provide any specific suggestions as to how to address those issues. The parent comment does poorly on both fronts, and that in conjunction with the heavily aggressive tone therein are sufficient to convince me that it was very much written in bad faith. Please strive to do better.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2017-10-14T04:50:37.716Z · LW(p) · GW(p)

Tangent: "bad faith" can mean a lot of different things to different people (or in different contexts), ranging from "medium-severity dissimulation" to "deliberate cruelty/malice/malevolence" to "less-than-ideal self-awareness and candor". My personal suggestion would be to taboo it wherever possible and be more concrete/precise about what you think is happening.

Replies from: dxu
comment by dxu · 2017-10-14T05:02:23.377Z · LW(p) · GW(p)

I'm generally leery of ascribing motives to people who I don't know on the Internet, since I could very well be mistaken. By "bad faith", I simply meant a comment that was not (primarily) written for the purpose of accelerating progress along some axis generally agreed to be positive, e.g. understanding, knowledge, community, etc. This doesn't, of course, imply that I know what the actual motives of the commenter were, only that I'm fairly sure that they don't fall into the specific subset of motives I consider good.

That being said, if I were forced to generate a hypothesis that fits into one of the three categories you described, I would (very tentatively) nominate the third thing--"less-than-ideal self-awareness and candor"--as closest to what I think may actually be happening.

Replies from: Rain
comment by Rain · 2017-10-14T15:35:51.058Z · LW(p) · GW(p)

Laziness. Though I note Stuart_Armstrong had the same opinion as me, and offered even fewer means of improvement, and got upvoted. I should have also said I agree with all points contained herein, and that the message is an important one. That would have reduced the bite.

Replies from: Benito
comment by Ben Pace (Benito) · 2017-10-14T15:52:28.123Z · LW(p) · GW(p)

Just as a data point, you're right, your comment felt to me as though it had more 'bite' and felt a little more aggressive than Stuart's, which is why I downvoted yours and not his, even though I almost downvoted his too.

comment by [deleted] · 2017-10-14T21:56:21.413Z · LW(p) · GW(p)

Replies from: Benito
comment by Ben Pace (Benito) · 2017-10-14T22:04:58.703Z · LW(p) · GW(p)

I mean, just so you know, you can write anything you like to your personal LW blog (i.e. blog posts on LW that you don't publish to the frontpage, but get stored in your user profile. It will look more like an actual blog in future site edits, and people will be able to follow you and get notifications when you post).

I'd definitely encourage you to write up thoughts like that there.

Replies from: None
comment by [deleted] · 2017-10-15T10:30:01.316Z · LW(p) · GW(p)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2017-10-15T18:18:06.957Z · LW(p) · GW(p)

The thing closest to that might be https://agentfoundations.org, where you can post links to relevant things from your LW blog or elsewhere so they don't slip through the cracks; though more background knowledge and novelty is usually expected there, and getting full posting privileges requires proving your mettle via the link posts or other channels.

comment by [deleted] · 2017-10-14T21:54:59.327Z · LW(p) · GW(p)