Comments

Comment by timtyler on An additional problem with Solomonoff induction · 2014-01-24T22:12:05.951Z · score: 0 (0 votes) · LW · GW

We don't think it has exactly the probability of 0, do we?

It isn't a testable hypothesis. Why would anyone attempt to assign probabilities to it?

Comment by timtyler on An additional problem with Solomonoff induction · 2014-01-23T02:08:54.415Z · score: 3 (9 votes) · LW · GW

Hypercomputation doesn't exist. There's no evidence for it - nor will there ever be. It's an irrelevance that few care about. Solomonoff induction is right about this.

Comment by timtyler on A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk · 2014-01-10T00:18:08.586Z · score: -1 (5 votes) · LW · GW

Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it's (arguably) not so likely to kill everybody. MIRI appears to be focussing on the "killing everybody case". That is because - according to them - that is a really, really bad outcome.

The idea that losing 99% of humans would be acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.

Comment by timtyler on The genie knows, but doesn't care · 2014-01-09T11:25:00.858Z · score: 1 (1 votes) · LW · GW

Right. So, if we are playing the game of giving counter-intuitive technical meanings to ordinary English words, humans have thrived for millions of years - with their "UnFriendly" peers and their "UnFriendly" institutions. Evidently, "Friendliness" is not necessary for human flourishing.

Comment by timtyler on Another Critique of Effective Altruism · 2014-01-09T03:01:20.203Z · score: 7 (11 votes) · LW · GW

"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"

Comment by timtyler on The genie knows, but doesn't care · 2014-01-09T02:41:30.473Z · score: 0 (2 votes) · LW · GW

Nor does the fact that evolution 'failed' in its goals in all the people who voluntarily abstain from reproducing (and didn't, e.g., hugely benefit their siblings' reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous.

Failure is a necessary part of mapping out the area where success is possible.

Comment by timtyler on The genie knows, but doesn't care · 2014-01-09T02:36:47.772Z · score: 0 (0 votes) · LW · GW

Being Friendly is of instrumental value to barely any goals. [...]

This is not really true. See Kropotkin and Margulis on the value of mutualism and cooperation.

Comment by timtyler on A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk · 2014-01-08T11:27:32.532Z · score: 2 (6 votes) · LW · GW

Uploads first? It just seems silly to me.

The movie features a luddite group assassinating machine learning researchers - not a great meme to spread around IMHO :-(

Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.

Overall, I think I would have preferred Robopocalypse.

Comment by timtyler on Critiquing Gary Taubes, Part 4: What Causes Obesity? · 2014-01-05T23:15:30.568Z · score: 0 (2 votes) · LW · GW

One other point I should make: this isn't just about "someone" being wrong. It's about an author frequently cited by people in the LessWrong community on an important issue being wrong.

Not experts on the topic of diet. I associated with members of the Calorie Restriction Society some time ago. Many of them were experts on diet. IIRC, Taubes was generally treated as a low-grade crackpot by those folk: barely better than Atkins.

Comment by timtyler on Results from MIRI's December workshop · 2014-01-02T13:28:33.169Z · score: -1 (1 votes) · LW · GW

To learn more about this, see "Scientific Induction in Probabilistic Mathematics", written up by Jeremy Hahn

This line:

Choose a random sentence from S, with the probability that O is chosen proportional to u(O) - 2^-length(O).

...looks like a subtraction operation to the reader. Perhaps use "i.e." instead.
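On that reading - the dash meaning "i.e.", not minus - the line defines u(O) as the length penalty 2^-length(O). A minimal sketch of what the quoted line presumably intends (the sentence strings and function names here are illustrative, not from the paper):

```python
import random

def length_weight(sentence):
    """u(O) = 2**(-length(O)): shorter sentences get more weight.

    This reads the '-' in the quoted line as 'i.e.', not as subtraction.
    """
    return 2.0 ** (-len(sentence))

def sample_sentence(sentences, rng):
    """Choose a sentence with probability proportional to its weight."""
    weights = [length_weight(s) for s in sentences]
    return rng.choices(sentences, weights=weights, k=1)[0]
```

With hypothetical sentences "p" and "p&q&r", the weights are 2^-1 and 2^-5, so the shorter sentence is drawn far more often - which is the behaviour a simplicity-weighted sampler should show.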

The paper appears to be arguing against the applicability of the universal prior to mathematics.

However, why not just accept the universal prior - and then update on learning the laws of mathematics?
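The suggestion above can be sketched as a two-step procedure: start from a simplicity-weighted prior, then do an ordinary Bayesian update on whatever is learned. All names and numbers below are made up for illustration; this is a toy finite analogue of the universal prior, not the real (uncomputable) thing:

```python
def simplicity_prior(description_lengths):
    """Prior proportional to 2**(-description length), then normalized.

    'description_lengths' maps a hypothesis name to its length in bits.
    """
    weights = {h: 2.0 ** (-n) for h, n in description_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def bayes_update(prior, likelihood):
    """Update the prior on new information, e.g. learned laws of mathematics."""
    posterior = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}
```

For example, a 1-bit hypothesis starts with four times the prior weight of a 3-bit one, but evidence that strongly favours the longer hypothesis can still overturn that initial advantage.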

Comment by timtyler on Building Phenomenological Bridges · 2013-12-31T01:07:10.416Z · score: 1 (1 votes) · LW · GW

why did you bring up the 'society' topic in the first place?

A society leads to a structure with advantages of power and intelligence over individuals. It means that we'll always be able to restrain agents in test harnesses, for instance. It means that the designers will be smarter than the designed - via collective intelligence. If the designers are smarter than the designed, maybe they'll be able to stop them from wireheading themselves.

If wireheading is plausible, then it's equally plausible given an alien-fearing government, since wireheading the human race needn't get in the way of putting a smart AI in charge of neutralizing potential alien threats.

What I was talking about was "the possibility of a totalitarian world government wireheading itself". The government wireheading itself isn't really the same as humans wireheading. However, probably any wireheading increases the chances of being wiped out by less-stupid aliens. Optimizing for happiness and optimizing for survival aren't really the same thing. As Grove said, only the paranoid survive.

Comment by timtyler on Building Phenomenological Bridges · 2013-12-29T13:05:37.454Z · score: 1 (3 votes) · LW · GW

We can model induction in a monistic fashion pretty well - although at the moment the models are somewhat lacking in advanced inductive capacity/compression abilities. The models are good enough to be built and actually work.

Agents wireheading themselves or accidentally performing fatal experiments on themselves will probably be handled in much the same way that biology has handled it to date - e.g. by liberally sprinkling aversive sensors around the creature's brain. The argument that such approaches do not scale up is probably wrong - designers will always be smarter than the creatures they build - and will successfully find ways to avoid undesirable self-modifications. If there are limits, they are obviously well above the human level - since individual humans have very limited self-brain-surgery abilities. If this issue does prove to be a significant problem, we won't have to solve it without superhuman machine intelligence.

The vision of an agent improving its own brain is probably wrong: once you have one machine intelligence, you will soon have many copies of it - and a society of intelligent machines. That's the easiest way to scale up - as has been proved in biological systems again and again. Agents will be produced in factories run by many such creatures. No individual agent is likely to do much in the way of fundamental redesign on itself. Instead groups of agents will design the next generation of agent.

That still leaves the possibility of a totalitarian world government wireheading itself - or performing fatal experiments on itself. However, a farsighted organization would probably avoid such fates - in order to avoid eternal oblivion at the hands of less short-sighted aliens.

Comment by timtyler on Building Phenomenological Bridges · 2013-12-28T22:37:34.251Z · score: 1 (1 votes) · LW · GW

Naturalized induction is an open problem in Friendly Artificial Intelligence. The problem, in brief: Our current leading models of induction do not allow reasoners to treat their own computations as processes in the world.

I checked. These models of induction apparently allow reasoners to treat their own computations as modifiable processes:

Comment by timtyler on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" · 2013-12-18T11:47:17.065Z · score: 0 (4 votes) · LW · GW

Deutsch is interesting. He seems very close to the LW camp, and I think he's someone LWers should at least be familiar with.

Deutsch seems pretty clueless in the section quoted below. I don't see why students should be interested in what he has to say on this topic.

It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.

Comment by timtyler on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" · 2013-12-18T11:40:06.617Z · score: 2 (2 votes) · LW · GW

There never was a bloggingheads - AFAIK. There is: Yudkowsky vs Hanson on the Intelligence Explosion - Jane Street Debate. However, I'd be surprised if Yudkowsky makes the same silly mistake as Deutsch. Yudkowsky knows some things about machine intelligence.

Comment by timtyler on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" · 2013-12-18T00:25:45.687Z · score: 0 (2 votes) · LW · GW

But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences.

My estimate is 80% prediction, with the rest evaluation and tree pruning.

Comment by timtyler on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" · 2013-12-18T00:12:05.306Z · score: 3 (3 votes) · LW · GW

He also says confusing things about induction being inadequate for creativity which I'm guessing he couldn't support well in this short essay (perhaps he explains better in his books).

He does - but it isn't pretty.

Here is my review of The Beginning of Infinity: Explanations That Transform the World.

Comment by timtyler on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" · 2013-12-18T00:08:54.743Z · score: 1 (1 votes) · LW · GW

I remember Eliezer making the same point in a bloggingheads video with Robin Hanson.

A Hanson/Yudkowsky bloggingheads?!? Methinks you are mistaken.

Comment by timtyler on Walkthrough of the Tiling Agents for Self-Modifying AI paper · 2013-12-15T16:30:09.282Z · score: -2 (4 votes) · LW · GW

So:

  • What most humans tell you about their goals should be interpreted as public relations material;
  • Most humans are victims of memetic hijacking;

To give an example of a survivalist, here's an individual who proposes that we should be highly prioritizing species-level survival:

As you say, this is not a typical human being - since Nick says he is highly concerned about others.

There are many other survivalists out there, many of whom are much more concerned with personal survival.

Comment by timtyler on Walkthrough of the Tiling Agents for Self-Modifying AI paper · 2013-12-14T23:54:25.381Z · score: -1 (1 votes) · LW · GW

If you're dealing with creatures good enough at modeling the world to predict the future and transfer skills, then you're dealing with memetic factors as well as genetic. That's rather beyond the scope of natural selection as typically defined.

What?!? Natural selection applies to both genes and memes.

I suppose there are theoretical situations where that argument wouldn't apply

I don't think you presented a supporting argument. You referenced "typical" definitions of natural selection. I don't know of any definitions that exclude culture. Here's a classic one from 1970 - which explicitly includes cultural variation. Even Darwin recognised this, saying: "The survival or preservation of certain favoured words in the struggle for existence is natural selection."

If anyone tells you that natural selection doesn't apply to cultural variation, they are simply mistaken.

I'm having trouble imagining an animal smart enough to make decisions based on projected consequences more than one selection round out, but too dumb to talk about it.

I recommend not pursuing this avenue.

Comment by timtyler on Walkthrough of the Tiling Agents for Self-Modifying AI paper · 2013-12-14T22:46:04.214Z · score: 1 (1 votes) · LW · GW

The question's more about what function's generating the fitness landscape you're looking at (using "fitness" now in the sense of "fitness function"). "Survival" isn't a bad way to characterize that fitness function -- more than adequate for eighth-grade science, for example. But it's a short-term and highly specialized kind of survival [...]

Evolution is only as short-sighted as the creatures that compose its populations. If organisms can do better by predicting the future (and sometimes they can) then the whole process is a foresightful one. Evolution is often characterised as 'blind to the future' - but that's just a mistake.

Comment by timtyler on Walkthrough of the Tiling Agents for Self-Modifying AI paper · 2013-12-14T21:07:00.087Z · score: 0 (0 votes) · LW · GW

Even to the extent that natural selection can be said to care about anything, saying that survival is that thing is kind of misleading.

Well, I have gone into more details elsewhere.

It's perfectly normal for populations to hill-climb themselves into a local optimum and then get wiped out when it's invalidated by changing environmental conditions that a more basal but less specialized species would have been able to handle, for example.

Sure. Optimization involves going uphill - but you might be climbing a mountain that is sinking into the sea. However, that doesn't mean that you weren't really optimizing - or that you were optimizing something other than altitude.

Comment by timtyler on Walkthrough of the Tiling Agents for Self-Modifying AI paper · 2013-12-14T20:25:17.860Z · score: -6 (6 votes) · LW · GW

Nature only "cares" about survival. However, that's also exactly what we should care about - assuming that our main priority is avoiding eternal obliteration.

Comment by timtyler on Walkthrough of the Tiling Agents for Self-Modifying AI paper · 2013-12-14T19:42:43.454Z · score: -3 (5 votes) · LW · GW

In biology, the "how can you trust your descendants?" question is rarely much of an issue - typically, you can't.

The issue of how to ensure your descendants don't get overrun by parasites is more of a real problem.

Nature's most common solution involves sexual reproduction - and not "tiling". It's not necessarily a good thing to rule out the most common solution in the statement of a problem.

Comment by timtyler on Walkthrough of the Tiling Agents for Self-Modifying AI paper · 2013-12-14T16:34:03.588Z · score: -3 (5 votes) · LW · GW

We want to be able to consider agents which build slightly better versions of themselves, which build slightly better versions of themselves, and so on. This is referred to as an agent "tiling" itself. This introduces a question: how can the parent agent trust its descendants?

It also raises other questions - such as: how will such a monoculture resist exploitation by parasites?

Comment by timtyler on LINK: AI Researcher Yann LeCun on AI function · 2013-12-14T16:18:36.870Z · score: 1 (1 votes) · LW · GW

Here is one of my efforts to explain the links: Machine Forecasting.

Comment by timtyler on LINK: AI Researcher Yann LeCun on AI function · 2013-12-14T16:16:19.608Z · score: 0 (0 votes) · LW · GW

I'm pretty sure that we suck at prediction - compared to evaluation and tree-pruning. Prediction is where our machines need to improve the most.

Comment by timtyler on LINK: AI Researcher Yann LeCun on AI function · 2013-12-14T03:01:56.918Z · score: 0 (2 votes) · LW · GW

search is not the same problem as prediction

It is when what you are predicting is the result of a search. Prediction covers searching.

Comment by timtyler on LINK: AI Researcher Yann LeCun on AI function · 2013-12-14T02:50:45.010Z · score: 0 (0 votes) · LW · GW

It is interesting that his view of AI is apparently that of a prediction tool [...] rather than of a world optimizer.

If you can predict well enough, you can pass the Turing test - with a little training data.

Comment by timtyler on International cooperation vs. AI arms race · 2013-12-10T11:56:08.147Z · score: -1 (1 votes) · LW · GW

If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.

Note that humans haven't "taken over the world" in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts - and by other creatures.

Machine intelligence probably won't be a "secret" technology for long - due to the economic pressure to embed it.

While it's true that things will go faster in the future, that applies about equally to all players - in a phenomenon commonly known as "internet time".

Comment by timtyler on International cooperation vs. AI arms race · 2013-12-10T02:30:57.016Z · score: -3 (5 votes) · LW · GW

Standard? Invoking reference classes is a form of arguing by analogy. It's a basic thinking tool. Don't knock it if you don't know how to use it.

Comment by timtyler on International cooperation vs. AI arms race · 2013-12-10T00:11:26.121Z · score: 0 (0 votes) · LW · GW

Doesn't someone have to hit the ball back for it to be "tennis"? If anyone does so, we can then compare reference classes - and see who has the better set. Are you suggesting this sort of thing is not productive? On what grounds?

Comment by timtyler on International cooperation vs. AI arms race · 2013-12-09T11:15:17.907Z · score: -2 (2 votes) · LW · GW

As has been pointed out numerous times on lesswrong, history is not a very good guide for dealing with AI since it is likely to be a singular (if you'll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself [...]

What, a new thinking technology? You can't be serious.

Comment by timtyler on International cooperation vs. AI arms race · 2013-12-09T02:21:50.572Z · score: -2 (4 votes) · LW · GW

The first OS didn't take over the world. The first search engine didn't take over the world. The first government didn't take over the world. The first agent of some type taking over the world is dramatic - but there's no good reason to think that it will happen. History better supports models where pioneers typically get their lunch eaten by bigger fish coming up from behind them.

Comment by timtyler on International cooperation vs. AI arms race · 2013-12-08T20:10:19.630Z · score: 0 (2 votes) · LW · GW

whoever builds the first AI can take over the world, which makes building AI the ultimate arms race.

As the Wikipedians often say, "citation needed". The first "AI" was built decades ago. It evidently failed to "take over the world". Possibly someday a machine will take over the world - but it may not be the first one built.

Comment by timtyler on Questions and comments about Eliezer's Dec. 2 2013 Oxford speech · 2013-12-08T00:04:33.591Z · score: -1 (1 votes) · LW · GW

I didn't buy the alleged advantage of a noise-free environment. We've known since von Neumann's paper titled:

Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components

...that you can use unreliable computing components to perform reliable computation - with whatever level of precision and reliability that you like.
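Von Neumann's trick can be illustrated with a toy simulation: run several copies of an unreliable gate and take a majority vote, which drives the error rate well below that of any single component. A hedged sketch - the gate, error rate, and fan-out here are made-up illustrations, not anything from the paper:

```python
import random

def unreliable_not(bit, error_rate, rng):
    """A NOT gate that gives the wrong answer with probability error_rate."""
    correct = 1 - bit
    return correct if rng.random() > error_rate else bit

def majority_not(bit, error_rate, copies, rng):
    """Run several unreliable copies of the gate and take a majority vote."""
    votes = sum(unreliable_not(bit, error_rate, rng) for _ in range(copies))
    return 1 if votes * 2 > copies else 0

# With per-gate error 0.1, a 5-way vote errs far less often than one gate.
rng = random.Random(0)
trials = 10_000
single_errors = sum(unreliable_not(0, 0.1, rng) != 1 for _ in range(trials))
voted_errors = sum(majority_not(0, 0.1, 5, rng) != 1 for _ in range(trials))
```

A single gate here errs about 10% of the time; the 5-way majority errs less than 1% of the time, and stacking more redundancy pushes the figure as low as you like - which is the paper's point.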

Plus the costs of attaining global synchrony and determinism are large and massively limit the performance of modern CPU cores. Parallel systems are the only way to attain large computing capacities - and you can't guarantee every component in a large parallel system will behave deterministically. So: most of the future is likely to lie with asynchronous systems and hardware indeterminism, rather contrary to Yudkowsky's claims.

Comment by timtyler on A model of AI development · 2013-11-29T13:20:18.783Z · score: 4 (4 votes) · LW · GW

The point I was trying to make was more along the lines that choosing which parameters to model allows you to control the outcome you get. Those who want to recruit people to causes associated with preventing the coming robot apocalypse can selectively include competitive factors, and ignore factors leading to cooperation - in order to obtain their desired outcome.

Today, machines are instrumental in killing lots of people, but many of them also have features like air bags and bumpers, which show that the manufacturers and their customers are interested in safety features - and not just retail costs. Skipping safety features has disadvantages - as well as advantages - to the manufacturers involved.

Comment by timtyler on A model of AI development · 2013-11-29T00:50:40.990Z · score: 2 (10 votes) · LW · GW

People have predicted that corporations will be amoral, ruthless psychopaths too. This is what you get when you leave things like reputations out of your models.

Skimping on safety features can save you money. However, a reputation for privacy breaches, security problems and accidents doesn't do you much good. Why model the first effect while ignoring the second one? Oh yes: the axe that needs grinding.

Comment by timtyler on Gelman Against Parsimony · 2013-11-26T11:03:39.927Z · score: -1 (1 votes) · LW · GW

Bayesian methods certainly require relative parsimony, in the sense that the model complexity needs to be small compared to the quantity of information being modeled.

Not really. Bayesian methods can model random noise. Then the model is of the same size as the data being modeled.

Comment by timtyler on Gelman Against Parsimony · 2013-11-26T10:51:21.482Z · score: 0 (0 votes) · LW · GW

I often use simple models–because they are less effort to fit and, especially, to understand. But I don’t kid myself that they’re better than more complicated efforts!

Recommended reading: Boyd and Richerson's Simple Models of Complex Phenomena.

Comment by timtyler on Gelman Against Parsimony · 2013-11-26T10:46:25.285Z · score: 0 (0 votes) · LW · GW

The reason the Solomonoff prior doesn't apply to social sciences is because knowing the area of applicability gives you more information.

That doesn't mean it doesn't apply! "Knowing the area of applicability" is just some information you can update on after starting with a prior.

Comment by timtyler on Gelman Against Parsimony · 2013-11-26T10:42:19.903Z · score: 2 (2 votes) · LW · GW

Losing information isn't a crime. The virtues of simple models go beyond Occam's razor. Often, replacing a complex world with a complex model barely counts as progress - since complex models are hard to use and hard to understand.

Comment by timtyler on Gelman Against Parsimony · 2013-11-25T11:30:56.112Z · score: 0 (0 votes) · LW · GW

Parsimony is good except when it loses information, but if you're losing information you're not being parsimonious correctly.

So: Hamilton's rule is not being parsimonious "correctly"?
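For readers who don't know it, Hamilton's rule is exactly the kind of deliberately lossy compression at issue here: it collapses the full population-genetic bookkeeping of kin selection into a single inequality,

```latex
rB > C
```

where r is the coefficient of relatedness, B the fitness benefit to the recipient, and C the fitness cost to the actor. It discards plenty of detail - and is useful precisely because it does.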

Comment by timtyler on [Video Link] PostHuman: An Introduction to Transhumanism · 2013-11-15T11:31:16.318Z · score: 2 (2 votes) · LW · GW

Shane Legg prepared this graph.

It was enough to convince him that there was some super-exponential synergy:

Comment by timtyler on The Evolutionary Heuristic and Rationality Techniques · 2013-11-10T18:40:49.757Z · score: 4 (4 votes) · LW · GW

There's also a broader point to be made about why evolution would've built humans to be able to benefit from better software in the first place, that involves the cognitive niche hypothesis.

I think we understand why humans are built like that. Slow-reproducing organisms often use rapidly-reproducing symbionts to help them adapt to local environments. Humans using cultural symbionts to adapt to local regions of space-time is a special case of this general principle.

Instead of the cognitive niche, the cultural niche seems more relevant to humans.

Comment by timtyler on The Evolutionary Heuristic and Rationality Techniques · 2013-11-10T12:12:39.858Z · score: 9 (9 votes) · LW · GW

On the other hand, I think the evolutionary heuristic casts doubt on the value of many other proposals for improving rationality. Many such proposals seem like things that, if they worked, humans could have evolved to do already. So why haven't we?

Most such things would have had to evolve by cultural evolution. Organic evolution makes our hardware, cultural evolution makes our software. Rationality is mostly software - evolution can't program such things in at the hardware level very easily.

Cultural evolution has only just got started. Education is still showing good progress - as manifested in the Flynn effect. Our rationality software isn't up to speed yet - partly because it hasn't had enough time to culturally evolve its adaptations.

Comment by timtyler on Is the orthogonality thesis at odds with moral realism? · 2013-11-07T00:27:41.979Z · score: 2 (2 votes) · LW · GW

I usually try to avoid the term "moral realism" - due to associated ambiguities - and abuse of the term "realism".

Comment by timtyler on Is the orthogonality thesis at odds with moral realism? · 2013-11-06T11:04:20.770Z · score: 0 (2 votes) · LW · GW

The thesis says:

more or less any level of intelligence could in principle be combined with more or less any final goal.

The "in principle" still allows for the possibility of a naturalistic view of morality grounding moral truths. For example, we could have the concept of: the morality that advanced evolutionary systems tend to converge on - despite the orthogonality thesis.

It doesn't say what is likely to happen. It says what might happen in principle. It's a big difference.

Comment by timtyler on Lone Genius Bias and Returns on Additional Researchers · 2013-11-04T00:22:34.822Z · score: 1 (1 votes) · LW · GW

We're just saying that AGI is an incredibly powerful weapon, and FAI is incredibly difficult. As for "baseless", well... we've spent hundreds of pages arguing this view, and an even better 400-page summary of the arguments is forthcoming in Bostrom's Superintelligence book.

It's not mudslinging, it's Leo Szilard pointing out that nuclear chain reactions have huge destructive potential even if they could also be useful for power plants.

Machine intelligence is important. Who gets to build it using what methodology is also likely to have a significant effect. Similarly, operating systems were important. Their development produced large power concentrations - and a big mountain of F.U.D. from predatory organizations. The outcome set much of the IT industry back many years. I'm not suggesting that the stakes are small.

Comment by timtyler on Lone Genius Bias and Returns on Additional Researchers · 2013-11-03T23:55:46.133Z · score: 8 (8 votes) · LW · GW

It is true that there might not be all that much insight needed to get to AGI on top of the insight needed to build a chimpanzee. The problem that Deutsch is neglecting is that we have no idea about how to build a chimpanzee.