[link] Speed is the New Intelligence

post by Gunnar_Zarncke · 2015-01-28T11:11:56.860Z · LW · GW · Legacy · 17 comments


From Scott Adams Blog

The article is really about speeding up government, but the key point is speed as a component of being smart:

A smart friend told me recently that speed is the new intelligence, at least for some types of technology jobs. If you are hiring an interface designer, for example, the one that can generate and test several designs gets you further than the “genius” who takes months to produce the first design to test. When you can easily test alternatives, the ability to quickly generate new things to test is a substitute for intelligence.

This shifts the focus from the ability to grasp and think through very complex topics (which includes good working memory and memory recall in general) to the ability to pick up new topics quickly (which includes quick learning and unlearning, and creativity).

Smart people in the technology world no longer believe they can think their way to success. Now the smart folks try whatever plan looks promising, test it, tweak it, and reiterate. In that environment, speed matters more than intelligence because no one has the psychic ability to pick a winner in advance. All you can do is try things that make sense and see what happens. Obviously this is easier to do when your product is software based.

This also changes the type of grit needed: the grit to push through a long topic versus the grit to try lots of new things and to learn from failures.

17 comments

Comments sorted by top scores.

comment by Viliam_Bur · 2015-01-28T14:34:28.769Z · LW(p) · GW(p)

Imagine casting a "speed ×100" spell on a dumb person. Would that make them a smart person? No.

On the other hand, if we cast a "speed ×2" spell on a smart person, it would appear to make them smarter. They would be able to solve difficult problems in half the time, right?

So... there seems to be some connection, but also a difference. Speed can make you more productive, and productivity is a signal of intelligence. But if you make systematic mistakes in thinking, you will only be making them faster.

Smart people in the technology world no longer believe they can think their way to success.

Because they already are thinking. If you are already thinking at near 100% of your capacity, telling you "think more" is not going to help. The right advice in that situation could be "instead of thinking without experimenting, try thinking and experimenting". But one should give that advice only to people who are already thinking.

Replies from: JoshuaZ, IlyaShpitser, buybuydandavis, Richard_Kennaway, Princess_Stargirl, Gunnar_Zarncke, DanielLC
comment by JoshuaZ · 2015-01-28T16:04:08.662Z · LW(p) · GW(p)

Imagine casting a "speed ×100" spell on a dumb person. Would that make them a smart person? No.

There's a line in the book WYRM that made this fact click for me many years ago. To paraphrase: "A dog that can think a hundred times as fast will take a hundredth of the time to decide that it wants to sniff your crotch."

comment by IlyaShpitser · 2015-01-28T14:45:42.859Z · LW(p) · GW(p)

The steelman here is a call for empiricism. Empiricism + thinking clearly are both needed. The secret is to do everything well :).

comment by buybuydandavis · 2015-01-29T23:21:06.517Z · LW(p) · GW(p)

But if you make systematic mistakes in thinking, you will only be making them faster.

But you can get away with more mistakes if you can loop through your test-and-improve cycle fast enough to fix them.

There was a demo that really brought this home to me. Some robotic fingers dribbling a ping pong ball at blinding speed. Fast cameras, fast actuators, brute force stupid feedback calculations. Stupid can be good enough if you're fast enough.

For more human creative processes, speeding up the design/test/evaluate loop will often beat more genius. Many things aren't to be reasoned out so much as tested out.
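
A toy sketch of the point (my illustration, not code from the demo; the function names and objective are invented): a search loop with zero cleverness per step can still land near a good answer, as long as each test cycle is cheap and the feedback is used.

```python
import random

def dumb_but_fast_search(score, candidate, steps=10_000):
    """Random hill-climbing: no insight per step, just many cheap
    test/evaluate cycles, keeping whatever scores better."""
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        tweak = [x + random.uniform(-0.1, 0.1) for x in best]
        s = score(tweak)
        if s > best_score:  # keep any improvement, discard the rest
            best, best_score = tweak, s
    return best

# Toy objective: get close to a hidden target point.
target = [3.0, -1.0]
score = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))

random.seed(0)
result = dumb_but_fast_search(score, [0.0, 0.0])
```

The loop never reasons about *why* a tweak helps; the speed of the cycle substitutes for that reasoning, which is exactly the tradeoff the comment describes.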

Replies from: Viliam_Bur
comment by Viliam_Bur · 2015-01-30T09:04:45.720Z · LW(p) · GW(p)

I have this intuition that higher intelligence "unlocks" some options, and then speed determines how many points you get from the unlocked options. For example, a ping-pong-playing robot with insane speed could easily win any ping-pong tournament, but it still couldn't conquer the world. Its intelligence only unlocks the area of playing ping-pong. If the intelligence is not general, making it faster still doesn't make it general.

For general intelligences, if we ignore the time and resources, the greatest obstacle to a mind is the mind itself, its own biases. If the mind is prone to do really stupid things, giving it more power will allow it to do stupid things with greater impact. For example, if someone chooses to ignore feedback, then having more design/test/evaluate cycles available will not help.

Now let's assume that we have an intelligence which is (a) general, and (b) willing to experiment and learn from feedback. On this level, are time and resources all that matter? Would any mind on this level, given unlimited time (immortality) and resources, sooner or later become a god? Or is the path full of dangerous destructive attractors? Would the mind be able to successfully navigate higher and higher levels of meta-thinking, or could a mistake at some level prevent it from ever getting higher? In other words, is "don't ignore the feedback" the only issue to overcome, or is it just the first of many increasingly abstract issues that an increasingly powerful mind will have to deal with, where a failure to deal with any of them could "lock" the path to godhood even given unlimited time and resources? For example, imagine a mind that is willing to consider feedback but doesn't care about developing a good theory of maths and statistics. At some point it would start drawing incorrect conclusions from the feedback.

I agree that for humans, lack of time and resources is a huge issue.

comment by Richard_Kennaway · 2015-01-30T11:53:39.022Z · LW(p) · GW(p)

The Law of the Minimum seems metaphorically relevant. "Growth is controlled not by the total amount of resources available, but by the scarcest resource."

Intelligence, speed, time, energy, charisma, money, able-bodiedness, a like-minded community, etc.: any of these may be someone's limiting factor.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-01-30T14:08:52.974Z · LW(p) · GW(p)

On the other hand, sometimes one resource can trade off for another. There are a lot of examples of this in computational complexity: one can use less memory by accepting a slower algorithm, or spend more memory to gain speed. And these aren't the only examples.
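
A standard concrete instance of that memory-for-speed trade (my illustration, not from the thread) is memoization: computing Fibonacci numbers with almost no memory takes exponential time, while spending linear memory on a cache makes it linear time.

```python
from functools import lru_cache

def fib_slow(n):
    """Tiny memory footprint, exponential time:
    every subproblem is recomputed from scratch."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Spend O(n) memory on a cache to get O(n) time:
    each subproblem is computed once and remembered."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# Same answers; wildly different cost profiles as n grows.
assert fib_slow(20) == fib_fast(20) == 6765
```

Here the limiting resource is chosen, not fixed: the cached version deliberately converts abundant memory into scarce time.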

comment by Princess_Stargirl · 2015-01-30T05:15:54.112Z · LW(p) · GW(p)

Speed ×100 would almost certainly make a person of normal intelligence very smart. Speed ×100 means one week for you is two years for them. Maybe you couldn't beat Einstein. But consider some common tests of intelligence, such as the Putnam or a standard IQ test. People get 6 hours (in two blocks) to finish the Putnam; 600 hours is 25 days, and presumably you are not sleeping during those 25 days. If a normal person gets 3 minutes to finish a problem on a certain IQ test, you get 5 hours.

comment by Gunnar_Zarncke · 2015-01-28T21:30:01.351Z · LW(p) · GW(p)

Imagine casting a "speed ×100" spell on a dumb person.

Though this is the idea behind the AI in Branches on the Tree of Time.

And a very fast dumb machine can still kill reliably. And in at least some domains, being dumb very fast can find solutions that creativity alone wouldn't.

comment by DanielLC · 2015-01-28T20:35:42.911Z · LW(p) · GW(p)

There are problems you can solve quickly, and problems you can solve at all. You want to find someone who can solve problems of a certain difficulty as fast as possible. If they can't solve it, work on making it so they can. If they can, work on making them do what they already can do, but faster.

This is particularly clear with computers. You can write better algorithms that solve more problems and get better answers, at the cost of running slower. If a program can't solve your problem, it's worthless. If it can solve your problem, making it more sophisticated will make things worse. For example, you can't stick formatting into a .txt file, but if you have no need for formatting, Notepad runs faster, takes less space, and is more reliable than Word.

comment by Richard_Kennaway · 2015-01-28T13:08:20.305Z · LW(p) · GW(p)

Is this new, or just a professional blogger weaving a few familiar concepts together into an essay that sounds new? "Quick witted" is an expression that goes back at least six centuries (esp. definition 20), and "quick/slow on the uptake" at least two. The correlation between the speed of neural signals and IQ has been known for a while. In fact, a quick grasp of new concepts is pretty much a defining characteristic of intelligence (as the latter word is generally used). And how often have we heard the standard startup wisdom of "fail early and often", "move fast and break things", etc.? There's even a whole program development methodology called "Agile".

If anything, Adams is being a bit slow on the uptake here.

Replies from: torekp
comment by torekp · 2015-02-02T21:10:12.713Z · LW(p) · GW(p)

Upvoted for snarky use+mention combo ending.

comment by Thomas · 2015-01-28T11:59:42.459Z · LW(p) · GW(p)

typo in the title - Intelligence

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2015-01-28T13:25:00.659Z · LW(p) · GW(p)

Fixed (together with blockquote formatting).

comment by mako yass (MakoYass) · 2015-02-13T07:49:18.012Z · LW(p) · GW(p)

The interface design example is classic Rationalism versus Empiricism. The empiricist consults nature, iterating and testing until they have something that observably works. Speed of work, not depth of thought, is what matters. The classical rationalist doesn't bother, because they have so much faith in their models and deductive apparatus (in other words, a very very good imagination, mental or virtual) that they believe they can tell whether something will work just by looking hard at concepts while occupying the simulated mindsets of imaginary users. While I recognize that understanding user needs and perceptions from a distance is hard, I opine that the rationalist approach, on the right set of shoulders, is extremely valuable in interface design; there are local maxima you can't get past with A/B testing and user interviews.

You need to be thoughtful as all hell to do something new without ruining it in ten different ways. IMO infinite scrolling is a good example. The design community has collectively decided that the idea is fundamentally broken for all sorts of reasons, because none of them seem to be thoughtful enough to sit down, answer each of the criticisms, and see that every single one of them can be patched, instead of just looking at what has been done and generalizing from how it was done.

You cannot create truly new things without depth of thought, without a focused, accurate enough imagination to find the flaws before building one too many failed attempts and giving up.

[1] citation pending. I'll probably push out my black swan infinite scroll implementation at some point in the first or second quarter.

comment by buybuydandavis · 2015-01-30T20:05:11.761Z · LW(p) · GW(p)

Now the smart folks try whatever plan looks promising, test it, tweak it, and reiterate.

A little nitpick, but I think that would be much better as "iterate".