Slow motion videos as AI risk intuition pumps

post by Andrew_Critch · 2022-06-14T19:31:13.616Z · LW · GW · 41 comments

tl;dr: When making the case for AI as a risk to humanity, try showing people an evocative illustration of what differences in processing speeds can look like, such as this video.

Over the past ~12 years of making the case for AI x-risk to various people inside and outside academia, I've found folks often ask for a single story of how AI "goes off the rails".  When given a plausible story, the mind just thinks of a way humanity could avoid that particular story, and goes back to thinking there's no risk, unless provided with another story, or another, and so on.  Eventually this can lead to a realization that there's a lot of ways for humanity to die, and a correspondingly high level of risk, but it takes a while.

Nowadays, before getting into a bunch of specific stories, I try to say something more general, like this:

I've found this kind of argument — including an actual 30 second pause to watch a video in the middle of the conversation — to be more persuasive than trying to tell a single, specific story, so I thought I'd share it.

41 comments

Comments sorted by top scores.

comment by gallabytes · 2022-06-14T22:12:18.269Z · LW(p) · GW(p)

10 million times faster is really a lot - on modern hardware, running SOTA object segmentation models at even 60fps is quite hard, and those are usually much much smaller than the kinds of AIs we would think about in the context of AI risk.

But - 100x faster is totally plausible (especially w/100x the energy consumption!) - and I think the argument still mostly works at that much more conservative speedup.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-06-21T23:04:34.154Z · LW(p) · GW(p)

it's completely implausible they'd run their entire processing system 10 million times faster, yeah. running a full brain costs heat, and that heat has to dissipate, there aren't speed shortcuts.

our fastest neurons are on the order of 1000 hz, and our median neurons are on the order of 1 hz. it's the fast paths through the network that do the fastest reasoning. the question, then, is how much a learning system can distill its most time-sensitive reasoning into the fast paths. eg, self-directed distillation of a skill into an accelerated network that calls out to the slower one.

there's no need for a being to run their entire brain at speed. being able to generate a program to run at 1ghz that can outsmart a human's motor control is not difficult - flies are able to outsmart our motor control by running at a higher frequency despite being much smaller than us in every single way. this is the video I would link to show how much frequency matters: https://www.youtube.com/watch?v=Gvg242U2YfQ

comment by interstice · 2022-06-14T21:01:29.637Z · LW(p) · GW(p)

In the same vein, I humbly suggest "The entire bee movie but every time they say bee it gets faster" as a good model for what the singularity will seem like from our perspective.

Replies from: Raemon
comment by Raemon · 2022-06-14T21:27:22.441Z · LW(p) · GW(p)

This is surprisingly on-point, since Bee Movie is about slight perturbations causing the end of the world unexpectedly.

(Bee Movie isn't quite "good" but it sure is an interesting experience if you go in not knowing anything about the plot)

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2022-06-14T21:37:56.033Z · LW(p) · GW(p)

Especially since the bees were clearly maximizing their reward function, and succeeded astronomically beyond their imagination, and it ended horribly for them and everyone else.

comment by Raemon · 2022-06-14T19:48:30.186Z · LW(p) · GW(p)

This does seem like a helpful intuition pump.

Curious how many people you've tried this with, and what sort of specific responses they tend to have.

Replies from: Andrew_Critch
comment by Andrew_Critch · 2022-06-14T22:49:20.706Z · LW(p) · GW(p)

I'd say I've tried it with around 30 people?  With around 15 I showed the video, and with around 15 I didn't.  In all cases they seemed more thoughtful once I made the (humans : AI) :: (plants : humans) analogy, and when I showed the video they seemed to spend considerably more time generating independent thoughts of their own about how things could go wrong.

Of course, speed isn't the only thing that matters, isn't necessary, isn't sufficient, and so on.  But it's a big deal in a lot of scenarios, and it helps to get people thinking about it.

comment by Richard_Ngo (ricraz) · 2022-06-14T23:53:05.396Z · LW(p) · GW(p)

Cool idea. By default I might suggest this video instead - very similar to yours, but with a girl running down the track, so you can actually see how slowed down it is (as opposed to it looking like a still frame).

Replies from: Lanrian, kave, ete
comment by Lukas Finnveden (Lanrian) · 2022-06-19T00:17:20.090Z · LW(p) · GW(p)

(Small exception to Critch's video looking like a still frame: There's a dude with a moving hand at 0:45.)

comment by kave · 2023-10-19T18:45:41.246Z · LW(p) · GW(p)

I am quite surprised by the relative stillness of the people contrasted to the girl's running. Do people really not move at all in the time it takes someone to run several person-widths?

comment by plex (ete) · 2022-06-15T16:10:45.775Z · LW(p) · GW(p)

That one requires login to view, which seems like a trivial inconvenience worth avoiding? 

Replies from: conor-sullivan
comment by Lone Pine (conor-sullivan) · 2022-06-19T07:21:13.529Z · LW(p) · GW(p)

I didn't need to log in. Not sure what the difference is.

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-06-14T19:51:39.963Z · LW(p) · GW(p)

This may be persuasive, but does it pump intuitions in the direction of an accurate assessment of AGI risk? While you never explicitly state that this is your goal, I think it's safe to assume given that you're posting on LW.

As Nikita Sokolsky argued here [LW · GW], it's not clear that a 10-million fold difference in processing speed leads to a 10-million fold difference in capabilities. Even a superintelligent AI may be restricted to manipulating the world via physical processes and mechanisms that take human-scale time to execute. To establish a unique danger from AGI, it seems important to identify concrete attack vectors that are available to an AGI, but not to humans, due to the processing speed differential.

While it may be that a person hearing this "slow-motion camera" argument can conceive of this objection on their own, I think the point of an intuition pump is to persuade somebody who's unlikely to think of it independently. For this reason, I think that identifying at least one concrete AGI-tractable, human-intractable attack vector would be a more useful and accuracy-promoting intuition pump than the "slow-motion camera" pump.

Fortunately, articulating those AGI-unique attack vectors in public is not a particularly unsafe practice. Attack ideas generated by deliberately looking for approaches that are impossible for humans but tractable for an AGI are unlikely to be more useful to a bad actor than attacks generated by simply thinking of easy ways for a human to cause harm.

Replies from: Viliam, Andrew_Critch, TrevorWiesinger
comment by Viliam · 2022-06-14T22:23:02.577Z · LW(p) · GW(p)

Even a superintelligent AI may be restricted to manipulating the world via physical processes and mechanisms that take human-scale time to execute. To establish a unique danger from AGI, it seems important to identify concrete attack vectors that are available to an AGI, but not to humans, due to the processing speed differential.

I see two obvious advantages for a superfast human-level AI:

Can communicate in parallel with thousands of humans (assuming the bandwidth is not a problem, so perhaps a voice call without video) while paying full attention (full human-level attention, that is) to every single one of them. Given enough time, this alone could be enough to organize some kind of revolution; if you fail to impress some specific human, you just hang up and call another. You can call everyone using a different pretext (after doing some initial research about them online), so it takes people a while to realize what you are doing.

Never makes a stupid mistake just because it was distracted or did not have enough time to think about something properly. Before every sentence you say, you can consider it from several different angles, even verify some information online, while from the human's perspective you just answered immediately. While doing this, you can also pay full attention to the human's body language, etc.

Replies from: AllAmericanBreakfast, FireStormOOO
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-06-14T22:48:10.136Z · LW(p) · GW(p)

Can communicate in parallel with thousands of humans (assuming the bandwidth is not a problem, so perhaps a voice call without video) while paying full attention (full human-level attention, that is) to every single one of them. Given enough time, this alone could be enough to organize some kind of revolution; if you fail to impress some specific human, you just hang up and call another. You can call everyone using a different pretext (after doing some initial research about them online), so it takes people a while to realize what you are doing.

This is a good start at identifying an attack vector that AGI can do, but humans can't. I agree that an AGI might be able to research people via whatever information they have online, hold many simultaneous conversations, try to convince people to act, and observe the consequences of its efforts via things like video cameras in order to respond to the dynamically unfolding situation. It would have a very large advantage in executing such a strategy over humans.

There are some challenges.

  • The AGI is still bottlenecked by the speed of human thought and behavior.
  • It's poorly disguised, or not even hidden at all, giving humans a chance to respond to these mysterious revolutionary appeals.
  • Insofar as the AGI is using simulation to plan its attack, it seems harder to orchestrate a revolution than to use an attack that depends on physical mechanisms and the normal operation of economic infrastructure. Physical mechanisms are dependable and non-agentic, while normal economic infrastructure is designed for legibility and predictability in most cases.
  • It seems like personal, embodied charisma, as well as privileged access to unpublished information, has often been necessary to orchestrate revolutions in the past. An AGI would be at a disadvantage in this regard.

To me, "AGI causes a world-ending revolt" still contains too much of a handwave, and too many human dependencies, to be a convincing attack vector. However, I do think you have identified a capability that would give AGI a unique advantage in its attempt. Perhaps there is some other AGI-only attack that doesn't have the challenges I listed here, that can take advantage of this ability?

Replies from: Viliam
comment by Viliam · 2022-06-15T14:33:34.013Z · LW(p) · GW(p)

I am not sure about the end game, but the beginning could be like this:

  • get "pocket money" like $100 or $1000 a day, by doing various tasks online for money. I don't know how this market works, just assuming that if some humans do it, a human-level AI could do it too, only 1000 times faster, making 1000 times more money.
  • pretend to be a human using a phone, for informal purposes. (Pretext: you are calling from a different city.)
  • get an official human identity for legal purposes. This probably needs to be done illegally, by bribing some official in a foreign country, using the "pocket money". (Pretext: you need the fake identity for someone else who will move to the country soon and needs to have the fake identity ready from day 1.) Not sure if there is a way to do this legally, if there is a country that allows you to gain citizenship simply by paying the fees and filling out some questionnaire remotely, without ever showing your face or proving your previous identity.
  • found a company and hire the first human employee, using the "pocket money". (Pretext: you are a remote-friendly boss who currently travels around the world so you cannot meet your first employee in person.) Now you have the human face and hands, if needed.
  • expand to another country, by starting a new company there, fully owned by the original company. The "pocket money" could pay for the first dozen employees. You can however pretend that the original company already has hundreds of employees (role-played by you on the phone).

Not sure where to proceed from here, but with these assets the lack of a human body does not seem like a problem anymore; if a human presence is required somewhere, just send an employee there.

You still have the ability to think 1000 times faster, or pretend to be 1000 different people at the same time.

comment by FireStormOOO · 2022-06-15T04:26:56.317Z · LW(p) · GW(p)

Expanding on this, even if the above alone isn't sufficient to execute any given plan, it takes most of the force out of any notion that needing humans to operate all of the physical infrastructure is a huge impediment to whatever the AI decides to do.  That level of communication bandwidth is also sufficient to stand up any number of requisite front companies, employing people that can perform complex real-world tasks and provide the credibility and embodiment required to interact with existing infrastructure on human terms without raising suspicion.

Money to get that off the ground is likewise no impediment if one can work 1000 jobs at once, and convincingly impersonate a separate person for each one.

Doing this all covertly would seemingly require first securing high-bandwidth unmonitored channels where this won't raise alarms, so either convincing the experimenters it's entirely benign, getting them to greenlight something indistinguishable-to-humans from what it wants to do, or otherwise covertly escaping the lab.

Adding to the challenge, any hypothetical "Pivotal Act" would necessarily be such an indistinguishable-to-humans cover for malign action.  Presumably the AI would either be asked to convince people en masse or take direct physical action on a global scale.

comment by Andrew_Critch · 2022-06-14T22:54:01.070Z · LW(p) · GW(p)

For a person at a starting point of the form {AGI doesn't pose a risk / I don't get it}, I'd say this video+argument pushes thinking in a more robustly accurate direction than most brief-and-understandable arguments I've seen.  Another okay brief-and-understandable argument is the analogy "humans don't respect gorillas or ants very much, so why assume AI will respect humans?", but I think that argument smuggles in lots of cognitive architecture assumptions that are less robustly true across possible futures, by comparison to the speed advantage argument (which seems robustly valid across most futures, and important).

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-06-14T23:14:17.103Z · LW(p) · GW(p)

It sounds like you're advocating starting with the slow-motion camera concept, and then graduating into brainstorming AGI attack vectors and defenses until the other person becomes convinced that there's a lot of ways to launch a conclusive humanity-ending attack and no way to stop them all.

My concern with the overall strategy is that the slow-motion camera argument may promote a way of thinking about these attacks and defenses that becomes unmoored from the speed at which physical processes can occur, and the accuracy with which they can be usefully predicted even by an AGI that's extremely fast and intelligent. Most people do not have sufficient appreciation for just how complex the world is, how much processing power it would take to solve NP-hard problems, or how crucial the difference is between 95% right and 100% right in many cases.

If your objective is to convince people that AGI is something to take seriously as a potential threat, I think your approach would be accuracy-promoting if it moves people from "I don't get it/no way" to "that sounds concerning - worth more research!" If it moves people to forget or ignore the possibility that AGI might be severely bottlenecked by the speed of physical processes, including the physical processes of human thought and action, then I think it would be at best neutral in its effects on people's epistemics.

However, I do very much support and approve of the effort to find an accuracy-promoting and well-communicated way to educate and raise discussion about these issues. My question here is about the specific execution, not the overall goal, which I think is good.

Replies from: JohnBuridan
comment by JohnBuridan · 2022-06-15T20:34:19.397Z · LW(p) · GW(p)

I agree that it's worth thinking critically about the ways AGI can get bottlenecked by the speed of physical processes. While this is an important area of study and thought, I don't see how "there could be this bottleneck though!" matters to the discussion. It's true. There likely is this bottleneck. How big or small it is requires some thought and study, but that thought and study presupposes you already have an account of why the bottleneck operates as a real bottleneck from the perspective of a plausibly existing AGI.

comment by trevor (TrevorWiesinger) · 2022-06-14T21:34:48.998Z · LW(p) · GW(p)

I can vouch for this. Whenever you explain the problem to someone, e.g. a policymaker, using quickdraw arguments, you tend to get responses like "have them make it act/think like a human" or "give it a position on the team of operators so it won't feel the need to compromise the operators".

But as far as quickdraw arguments go, this is clearly top notch, and the hook value alone merits significant experimentation with test audiences. This might be the thing that belongs in everyone's back pockets; when watching Schwarzenegger's Terminator (1984) and Terminator 2 (1991), virtually all viewers fail to notice how often the robot misses its shots even though it has several seconds to aim for the head.

comment by Thomas Kwa (thomas-kwa) · 2022-06-15T07:41:12.599Z · LW(p) · GW(p)

I think this is one place where reading science fiction improves people's judgment. Without reading so many monologues and descriptions of AI making decisions in milliseconds, I'd probably not be able to bring this to mind as plausible nearly as easily.

Replies from: conor-sullivan
comment by Lone Pine (conor-sullivan) · 2022-06-19T07:28:32.790Z · LW(p) · GW(p)

In my opinion, this movement is shooting itself in the foot when we say that science fiction gives people bad intuitions. Most laypeople have a fear of AI already built in by Hollywood et al., and the complaint is that 'anthropomorphizing' the AI will cause people to make bad guesses. We are essentially demanding that people throw out millions of years of evolved skill at dealing with other agents in the world, even though that skill is an incredible asset in this crisis. We are making the perfect the enemy of the good.

comment by DanielFilan · 2022-06-15T06:53:09.079Z · LW(p) · GW(p)

Now, when you try imagining things turning out fine for humanity over the course of a year, try imagining advanced AI technology running all over the world and making all kinds of decisions and actions 10 million times faster than us, for 10 million subjective years. [emphasis and de-emphasis mine]

Would actions also be 10 million times faster? I'm too tired to do the math, but I think actuators tend to have a characteristic time-scale below which they're hard to control. Similarly, the speed of light limits how quickly you can communicate stuff to different places on earth.
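
A rough back-of-the-envelope for the communication part, assuming signals travel at roughly the vacuum speed of light and taking the post's 10-million-times figure at face value (a minimal sketch, not a claim about any particular system):

```python
# How long does one trip to the far side of the Earth take, and how long does
# that wait "feel" to a mind running 10 million times faster than a human?
EARTH_CIRCUMFERENCE_KM = 40_075
C_KM_PER_S = 299_792        # speed of light in vacuum
SPEEDUP = 10_000_000        # the post's assumed subjective speedup

one_way_s = (EARTH_CIRCUMFERENCE_KM / 2) / C_KM_PER_S   # ~0.067 s
subjective_days = one_way_s * SPEEDUP / 86_400           # ~7.7 days
print(f"one-way latency: {one_way_s * 1000:.0f} ms")
print(f"subjective wait at {SPEEDUP:,}x: {subjective_days:.1f} days")
```

So even ignoring actuators entirely, a single one-way signal to the antipode (~67 ms) would feel like roughly a week of subjective time at that speedup, which supports the point that not everything scales with clock speed.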

comment by Chris Elster (chris-elster) · 2022-09-02T09:31:04.162Z · LW(p) · GW(p)

What is the source on this being slowed down by 100x? 

Here it says Magyar filmed at 1300 fps, and the video info says 50 fps; does this imply 1300/50=26x slow down? 

Also, if the video is 120 seconds long, 100x slow down implies the train stopping took 1.2 seconds, which seems too fast. 

26x slow down implies the train took 4.6 seconds to stop, which seems more plausible to me.  
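
A quick check of that arithmetic, assuming the clip really is 120 seconds long (just restating the two candidate slowdown factors):

```python
# Implied real-world duration of a 120-second clip under each candidate slowdown.
CLIP_LENGTH_S = 120
for label, slowdown in (("claimed 100x", 100), ("1300 fps / 50 fps", 1300 / 50)):
    print(f"{label}: {slowdown:g}x -> {CLIP_LENGTH_S / slowdown:.1f} s of real time")
```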

comment by Nikita Sokolsky (nikita-sokolsky) · 2022-06-15T18:29:28.583Z · LW(p) · GW(p)

Points from this post I agree with:

  • AGI will have at least 100x faster decision making speed for any given decision, compared to human decision making
  • AGI will be able to interact with all ~8 billion humans at once, in parallel, giving it a massive advantage
  • Slow motion videos present a helpful analogy

My objection is primarily around the fact that having 100x faster processing wouldn't automatically allow you to do things 100x faster in the physical world:

  1. Any mechanical systems that you control won't be 100x faster due to limitations on how fast real-world mechanical parts can move. I.e. if you control a drone, you have to deal with the fact that the drone won't fly/rotate 100x faster just because your processing power is 100x faster. And you'll probably have to control the drone remotely because you wouldn't fit the entire AGI on the drone itself, placing a limit on how fast you can make decisions.
  2. Any operations where you rely on human action will run at 1x speed, even if you somewhat streamline them thanks to parallelization and superior decision making
  3. Being 100x faster is useless if you don't have full information on what the humans are doing/plotting. And they could hide pretty easily by meeting up offline with no electronics in place.

Replies from: AllAmericanBreakfast, Lanrian
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-06-15T18:56:42.334Z · LW(p) · GW(p)

I'd also note that the energy required to speed up a physical action increases with the square of the velocity.

So let's take a military drone that normally must get confirmation from a human operator before firing at a target. This is the bottleneck for its firing. If an AI takes full control of this drone, the drone is now bottlenecked by things like:

  • The AI's processing speed in choosing targets in light of its latest observations and plans
  • The drone's speed in aiming at the target
  • The drone's speed in moving to a new position
  • The speed with which the drone can be resupplied with ammunition or fuel
  • The rate at which it needs to be repaired

If the motion of the drone were sped up by 100x due to the AI's processing speed being 100x faster, then this would require at least a 10,000x increase in energy requirements.
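
As a minimal sketch of that scaling (kinetic energy alone, ignoring drag, which grows even faster with speed):

$$E_k = \tfrac{1}{2} m v^2 \quad\Rightarrow\quad \frac{E_k(100v)}{E_k(v)} = 100^2 = 10{,}000.$$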

Currently existing technology is typically engineered to tolerate only the demands it was originally designed for. Presently existing drones can't just be commandeered by an AI and made to move at 100x their normal speed.

This also applies to whatever robots would be necessary for the AI to build a drone army capable of taking full advantage of the AI's faster processing power. And the AI can't just pull 10,000x the energy from our present infrastructure. It would have to build an infrastructure capable of supplying that amount of energy using presently existing infrastructure.

It might be that an AGI could achieve a 100x gain in efficiency in achieving its goals via its superior processing power, constant operation, and ~total self-control. For example, it might be able to figure out a way of attacking using drones that much more efficiently destroys the morale and coordination abilities of its opponent, while still operating at normal drone speed.

comment by Lukas Finnveden (Lanrian) · 2022-06-19T00:31:04.038Z · LW(p) · GW(p)

2 seems more worrying than reassuring. If you have to rely on human action, you'll be slowed down. So AIs that can route around humans, or humans who can delegate more decision-making to AI systems, will have a competitive advantage over those that don't. If we're talking about AGI + decent robotics, there's in principle nothing that AIs need humans for.

3: "useless without full information" is presumably hyperbole, but I also object to weaker claims like "being 100x faster is less than half as useful as you think, if you haven't considered that spying is non-trivial". Random analogy: Consider a conflict (e.g. a war or competition between two firms) except that one side (i) gets only 4 days per year, and (ii) gets a very well-secured room to discuss decisions in. Benefit (ii) doesn't really seem to help much against the disadvantage from (i)!

comment by Jacob Pfau (jacob-pfau) · 2022-06-14T23:31:48.202Z · LW(p) · GW(p)

Recently I've had success introducing people to AI risk by telling them about ELK, and specifically how human simulation is likely to be favored. Providing this or a similarly general argument, e.g. that power-seeking is convergent, seems both more intuitive (humans can also do simulation and power-seeking) and faithful to actual AI risk drivers to me than the video speed angle? ELK and power-seeking are also useful complements to specific AI risk scenarios.

The video speed framing seems to make the undesirable suggestion that AI ontology will be human-but-faster. I would prefer an example which highlighted the likely differences in ontology. Examples highlighting ontology mismatch have the benefit of neatly motivating the problems of value learning and misaligned proxy objectives.

comment by kave · 2024-01-15T05:42:27.799Z · LW(p) · GW(p)

I am confused about whether the videos are real and exactly how much faster AIs could be run. But I think at the very least it's a promising direction to look for grokkable bounds on how advanced AI will go.

comment by Ahab Picard · 2022-09-02T10:19:19.920Z · LW(p) · GW(p)

Episode 4x11 of Person of Interest clearly demonstrates what you're talking about.

In a nutshell: our team is trapped somewhere and we see an ASI start calculating how to save them. It takes the ASI about 10 seconds and, if I remember correctly, it finds the most suitable option out of 800,000 possible scenarios. It uses security cameras to monitor people, meaning it monitors almost everyone, and it calculates similar actions for all the people it watches.

It's crazy to even imagine.

comment by JohnBuridan · 2022-06-15T20:22:08.715Z · LW(p) · GW(p)

Bizarre coincidence. Or maybe not.

Last night I was having 'the conversation' with a close friend and also found that the idea of speed of action was essential for explaining around the requirement of having to give a specific 'story'. We are both former StarCraft players, so discussing things in terms of an ideal version of AlphaStar proved illustrative. If you know StarCraft, the idea of an agent able to optimize the damage dealt and taken by every unit, the minerals mined and resources expended, the dancing, casting, building, expanding, and replenishing, all to the utmost degree, reveals the impossibility of a human being able to win against such an agent.

We wound up quite hung up on two objections. 1) Well, people are suspicious of AIs already, and 2) just don't give these agents access to the material world. And although we came to agreement on the replies to these objections, by that point we are far enough down the inferential working memory that the argument doesn't strike a chord anymore.

comment by frontier64 · 2022-06-15T19:32:33.744Z · LW(p) · GW(p)

I like using the intuition pump of AI : Humans :: Humans : Apes. Imagine apes had the decision to create humans or not. They can sit there and argue about how humans will share ape values because they're descended from apes, or how humans pose an existential risk to apes, or some such.

Humans may be dangerous because they'll be smarter than us apes. Maybe humans will figure out how to get those bananas at the very top of the tree without risk of falling, then humans will have a massive advantage over apes. Maybe humans will better know how to hide from leopards; they'll be able to hurt apes by attracting leopards to the colony and then hiding. Humans might be dangerous, but if we contain them or ensure that they share ape values then us apes will be better off.

And then humans take over the whole world and apes live in artificial habitats entertaining us, or survive in the wild only due to our mercy. We're just too stupid to reasonably think of the ways AI will be able to defeat us. We're sitting here with a boxed AI thinking about the risk of nanotech while the AI is creating irl magic by warping the electric field of the world using just its transistors.

Like, we're so stupid we don't even know how to spontaneously generate biological life. The upper bound on intelligence is way above where we're at now.

comment by Vaniver · 2022-06-15T04:58:31.700Z · LW(p) · GW(p)

It stands for Eliciting Latent Knowledge [LW · GW].

comment by Quintin Pope (quintin-pope) · 2022-06-15T00:03:26.701Z · LW(p) · GW(p)

I'm not a fan of saying that AIs will have a 10 million x speedup relative to humans. That seems very unlikely to happen on this side of the singularity. Probably, future AGI hardware will increasingly resemble [LW · GW] the brain, and AGIs won't have nearly a 10 million x serial clock speed advantage over humans.

comment by pmcarlton (pmcarlton-1) · 2023-04-01T01:31:06.224Z · LW(p) · GW(p)

"human" objects around that could easily be taken apart for, say, biofuel or carbon atoms


This is one aspect of the discussion that never sits right with me: the idea that what might interest a future superintelligence is our "atoms" and not our standing as the only thing that's ever created a superintelligence so far. There are lots of more efficient fuels and more readily obtainable sources of carbon atoms than all the humans scurrying (or lumbering, to take the point of your post) around the earth. 

I suppose the charitable interpretation of this is that a superintelligence will make little distinction between the human and the concrete wall they're standing next to, in terms of where it might choose to scoop up some matter?

comment by JakubK (jskatt) · 2023-03-21T17:07:51.487Z · LW(p) · GW(p)

Transistors can fire about 10 million times faster than human brain cells

Does anyone have a citation for this claim?

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2023-03-21T17:23:52.229Z · LW(p) · GW(p)

I think we’re dividing 1GHz by 100Hz.

The 1GHz clock speed for microprocessors is straightforward.

The 100Hz for the brain is a bit complicated. If we’re just talking about how frequently a neuron can fire, then sure, 100Hz is about right I think. Or if we’re talking about, like, what would be a plausible time-discretization of a hypothetical brain simulation, then it’s more complicated. There are certainly some situations like sound localization where neuron firing timestamps are meaningful down to the several-microsecond level, I think. But leaving those aside, by and large, I would guess that knowing when each neuron fired to 10ms (100Hz) accuracy is probably adequate, plus or minus an order of magnitude, I think, maybe? ¯\_(ツ)_/¯
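
Spelled out, the arithmetic behind the post's headline figure would then just be

$$\frac{1\,\mathrm{GHz}}{100\,\mathrm{Hz}} = \frac{10^{9}\ \mathrm{s^{-1}}}{10^{2}\ \mathrm{s^{-1}}} = 10^{7},$$

i.e. "10 million times faster", with all the caveats above about what the 100 Hz denominator really means.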

comment by jam_brand · 2022-06-16T20:59:40.931Z · LW(p) · GW(p)

Also along these lines, perhaps contrasting the flicker fusion rates of different species could be illustrative as well. Here's a 30-second video displaying the relative perceptions of a handful of species side by side: https://www.youtube.com/watch?v=eA--1YoXHIQ . Additionally, a short section from 10:22 - 10:43 of this other video that incorporates time-stretched audio of birdcalls is fairly evocative: https://www.youtube.com/watch?v=Gvg242U2YfQ .

comment by Mo Putera (Mo Nastri) · 2022-06-16T08:58:17.172Z · LW(p) · GW(p)

This reminds me of Eliezer's short story That Alien Message [LW · GW], which is told from the other side of the speed divide. There's also Freitas' "sentience quotient" idea upper-bounding information-processing rate per unit mass at SQ +50 (it's log scale -- for reference, human brains are +13, all neuronal brains are several points away, vegetative SQ is -2, etc).
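
For reference, Freitas' quotient is (as I understand it) just a log ratio of information-processing rate to processor mass:

$$\mathrm{SQ} = \log_{10}\!\left(\frac{I}{M}\right),$$

with $I$ in bits per second and $M$ in kilograms, so each point on the scale is another factor of ten.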

comment by mukashi (adrian-arellano-davin) · 2022-06-15T02:18:29.406Z · LW(p) · GW(p)

This is a fantastic idea! It manages to successfully convey how much more powerful an artificial brain could be than a human one.

I can't help pointing out one thing, which is that an AGI trying to take over the world would pretty much need to manipulate/interact with humans, and their reaction time/processing speed would be effectively a bottleneck for the AGI.