Posts

Comments

Comment by will_pearson on Markets are Anti-Inductive · 2009-02-27T15:04:09.000Z · score: 2 (2 votes) · LW · GW

All competitive situations against ideal learning agents are anti-inductive in this sense, because they can note regularities in their own actions and avoid them in the future just as well as you can note those regularities and exploit them. The usefulness of induction depends on the relative speeds of induction of the learning agents.

As such, anti-induction appears in situations like bacterial resistance to antibiotics. We spot a chink in the bacteria's armour, and we can predict that that chink will become less prevalent and our strategy less useful.

So I wouldn't mark markets as special, just the most extreme example.

Comment by will_pearson on Wise Pretensions v.0 · 2009-02-21T13:01:39.000Z · score: 0 (0 votes) · LW · GW

I find neither that convincing. Justice is not a terminal value for me, so I might sacrifice it for Winning. I preferred reading the first, but that is no indication of what a random person may prefer.

Comment by will_pearson on Pretending to be Wise · 2009-02-19T23:19:19.000Z · score: 1 (1 votes) · LW · GW

With international affairs, isn't stopping the aggression the main priority? That is, stopping the death and suffering of humans on both sides. Sure, it would be good to punish the aggressors rather than the retaliators, but if that doesn't stop the fighting it just means more people dying.

Also, there is a difference between the adult and the child: the adult relies on the law of the land for retaliation, while the child takes it upon himself when he continues the fight. That is, the child is a vigilante, and he may punish disproportionately, e.g. breaking a leg in return for a dead leg.

Comment by will_pearson on Three Worlds Decide (5/8) · 2009-02-03T11:50:20.000Z · score: 2 (2 votes) · LW · GW

I don't really have a good enough grasp on the world to predict what is possible; it all seems too unreal.

One possibility is to jump one star away back towards earth and then blow up that star, if that is the only link to the new star.

Comment by will_pearson on The Baby-Eating Aliens (1/8) · 2009-01-31T01:11:28.000Z · score: 1 (1 votes) · LW · GW

Re: "MST3K Mantra"

Illustrative fiction is a tricky business. If this is to be part of your message to the world, it should be as coherent as possible, so you aren't accidentally lying to make a better story.

If it is just a bit of fun, I'll relax.

Comment by will_pearson on The Baby-Eating Aliens (1/8) · 2009-01-30T19:24:13.000Z · score: 1 (1 votes) · LW · GW

I wonder why the babies don't eat each other. There must be a huge selective pressure to winnow down your fellows to the point where you don't need to be winnowed. This would in turn select for being small-brained, large and quick-growing, at the least. There might also be selective pressure to be partially distrusting of your fellows (assuming there was some cooperation), which might carry over into adulthood.

I also agree with the points Carl raised. It doesn't seem very evolutionarily plausible.

Comment by will_pearson on Value is Fragile · 2009-01-29T10:32:01.000Z · score: 2 (4 votes) · LW · GW

"Except to remark on how many different things must be known to constrain the final answer."

What probability would you estimate for each of those things being correct?

Comment by will_pearson on Failed Utopia #4-2 · 2009-01-24T15:05:00.000Z · score: 1 (1 votes) · LW · GW

Reformulate to least regret after a certain time period, if you really want to worry about the resource usage of the genie.

Comment by will_pearson on Investing for the Long Slump · 2009-01-22T10:37:43.000Z · score: 1 (1 votes) · LW · GW

Personally I believe in the long slump. However, I also believe in the human optimism that will make people rally the market every so often. The very fact that most people believe the stock market will rise will make it rise at least once or twice before people start to get the message that we are in the long slump.

Comment by will_pearson on Failed Utopia #4-2 · 2009-01-22T01:15:00.000Z · score: 7 (7 votes) · LW · GW

Eliezer, didn't you say that humans weren't designed as optimizers? That we satisfice? The reaction you got is probably a reflection of that. The scenario ticks most of the boxes humans have: existence, self-determination, happiness and meaningful goals. The paperclipper scenario ticks none. It makes complete sense for a satisficer to pick it instead of annihilation. I would expect that some people would even be satisfied by a singularity scenario that kept death, as long as it removed the chance of existential risk.

Comment by will_pearson on Failed Utopia #4-2 · 2009-01-22T00:39:19.000Z · score: 1 (1 votes) · LW · GW

Dognab, your arguments apply equally well to any planner. Planners have to consider the possible futures and pick the best one (using a form of predicate), and if you give them infinite horizons they may have trouble. Consider a paperclip maximizer: every second it fails to use its full ability to paperclip things in its vicinity, it is losing potentially useful paperclipping energy to entropy (solar fusion etc.). However, if it sits and thinks for a bit it might discover a way to hop between galaxies with minimal energy. So what decision should it make? Obviously it would want to run some simulations, to see if there are gaps in its knowledge. How detailed should those simulations be, so it can be sure it has ruled out the galaxy-hopping path?

I'll admit I was abusing the genie trope somewhat. But then I am sceptical of FOOMing anyway, so when asked to think about genies/utopias, I tend to suspend all disbelief in what can be done.

Oh, and Belldandy is not annoying because she has broken down in tears (perfectly natural), but because she bases her happiness too much on what Stephen Grass thinks of her. A perfect mate for me would tell me straight what was going on, and if I hated her for it (when it was not her fault at all), she'd find someone else, because then I wouldn't be worth falling in love with. I'd want someone with standards for me to meet, not unconditional creepy fawning.

Comment by will_pearson on Failed Utopia #4-2 · 2009-01-21T16:04:39.000Z · score: 2 (2 votes) · LW · GW

Bogdan Butnaru:

What I meant is that the AI would keep inside it a predicate Will_Pearson_would_regret_wish (based on what I would regret), and apply that to the universes it envisages while planning. A metaphor for what I mean is the AI telling a virtual copy of me all the stories of the future, from various viewpoints, and the virtual me not regretting the wish. Of course I would expect it to be able to distill a non-sentient version of the regret predicate.

So if it invented a scenario where it killed the real me, the predicate would still exist and say false. It would be able to predict this, and so not carry out this plan.
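
A minimal sketch of how I picture that filtering step (my own illustration; all the names here are hypothetical):

```python
def modelled_will_does_not_regret(future: str) -> bool:
    """Stand-in for the distilled, non-sentient regret predicate: it returns
    False for any envisaged future the present-day wisher would regret."""
    return "the real Will Pearson is killed" not in future

def choose_plan(candidate_futures, desirability):
    """Drop every future the predicate rejects, then pick the best remainder."""
    acceptable = [f for f in candidate_futures if modelled_will_does_not_regret(f)]
    return max(acceptable, key=desirability, default=None)

# The plan that kills the real wisher is predicted to fail the predicate,
# so it is never carried out, even if it would otherwise score highest.
futures = ["the real Will Pearson is killed, resources recycled",
           "everyone consulted and uploaded with consent"]
print(choose_plan(futures, desirability=len))
```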

If you want to, generalize to humanity. This is not quite the same as CEV, as the AI is not trying to figure out what we want when we would be smarter, but what we don't want when we are dumb. Call it coherent no regret, if you wish.

CNR might be equivalent to CEV if humanity wishes not to feel regret in the future for the choice. That is, if we would regret being in a future where people regret the decision, even though current people wouldn't.

Comment by will_pearson on Failed Utopia #4-2 · 2009-01-21T12:30:20.000Z · score: 3 (3 votes) · LW · GW

I don't believe in trying to make utopias, but in the interest of rounding out your failed-utopia series, how about giving a scenario against this wish:

I wish that the future will turn out in such a way that I do not regret making this wish. Where "I" is the entity standing here right now, informed about the many different aspects of the future, in parallel if need be (i.e. if I am not capable of grokking it fully, then many versions of me would be focused on different parts, in order to understand each sub-part).

I'm reminded by this story that while we may share large parts of psychology, what makes a mate have an attractive personality is not something universal. I found the cat girl very annoying.

Comment by will_pearson on Building Weirdtopia · 2009-01-13T21:08:24.000Z · score: 3 (3 votes) · LW · GW

Personally I don't find the scientific weirdtopia strangely appealing. Finding knowledge for me is about sharing it later.

Utopia originally meant "no-place"; I have a hard time forgetting that meaning when people talk about them.

I'd personally prefer to work towards negated dystopias, which is not necessarily the same thing as working towards Utopia, depending on how broad your class of dystopias is. For example, rather than trying to maximise Fun, I would want to minimize the chance that humanity and all its work were lost to extinction. If there is time and energy to devote to Fun while humanity survives, then people can figure it out for themselves.

Comment by will_pearson on Growing Up is Hard · 2009-01-04T15:55:35.000Z · score: 0 (0 votes) · LW · GW

Time scaling is not unproblematic. We don't have a single clock in the brain; clocks must be approximated by neurons and by neural firing. Speeding up the clocks may affect the ability to learn from the real world (if we have a certain time interval for associating stimuli).

We might be able to adapt, but I wouldn't expect it to be straightforward.

Comment by will_pearson on The Uses of Fun (Theory) · 2009-01-02T23:26:22.000Z · score: 1 (3 votes) · LW · GW

A random utility function will do fine, iff the agent has perfect knowledge.

Imagine, if you will, a stabber: something that wants to turn the world into things that have been stabbed. If it knows that stabbing itself will kill itself, it will know to stab itself last. If it doesn't know that stabbing itself will leave it no longer able to stab things, then it may not do well in actually achieving its stabbing goal, by stabbing itself too early.
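
A toy sketch of that ordering problem (my own illustration, nothing from the post):

```python
import random

def stab_order(targets, knows_self_stab_is_fatal):
    """Return the order in which the stabber stabs its targets ('self' included)."""
    if knows_self_stab_is_fatal:
        others = [t for t in targets if t != "self"]
        return others + ["self"]   # with a correct self-model, the fatal stab goes last
    order = list(targets)
    random.shuffle(order)          # without that knowledge the ordering is arbitrary,
    return order                   # so "self" may come too early and end the stabbing

print(stab_order(["self", "rock", "tree"], knows_self_stab_is_fatal=True))
print(stab_order(["self", "rock", "tree"], knows_self_stab_is_fatal=False))
```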

Comment by will_pearson on Can't Unbirth a Child · 2008-12-28T23:19:45.000Z · score: 0 (0 votes) · LW · GW

I'd agree with the sentiment in this post. I'm interested in building artificial brain stuff, more than building Artificial People. That is, a computational substrate that allows the range of purpose-oriented adaptation shown in the brain, but with different modalities. Not neurally based, because simulating neural systems on hardware where processing and memory are split defeats most of the point of them for me.

Comment by will_pearson on Nonperson Predicates · 2008-12-27T10:02:59.000Z · score: 1 (1 votes) · LW · GW

Don't you need a person predicate as well? If the RPOP is going to upload us all or something similar, doesn't ve need to be sure that the uploads will still be people?

Comment by will_pearson on Living By Your Own Strength · 2008-12-22T15:08:40.000Z · score: 0 (0 votes) · LW · GW

I suspect the knowledge you get from reading someone's writings is very different from the knowledge you get from working with them or being taught by them. When you work or learn closely with someone, they can see your reasoning processes and correct them when they go astray, at the right point, while they are still newly formed and not too ingrained. Otherwise it relies too much on luck. When in someone's intellectual career should you read OB? Too early and it won't mean much, lacking the necessary background; too late and you will be inured against it (assuming it is the right way to go!).

Autodidacts are going to be most intellectually useful when you need to break new ground and the methodologies of the past aren't the way to solve the problems that need to be solved.

Comment by will_pearson on What I Think, If Not Why · 2008-12-13T17:27:00.000Z · score: 0 (0 votes) · LW · GW

Are you saying "snakes are often deadly poisonous to humans" is an instrumental value?

I'd agree that dying is bad, therefore avoid deadly poisonous things. But I still don't see that snakes have little XML tags saying "keep away, might be harmful"... I don't see that as a value of any sort.

Comment by will_pearson on What I Think, If Not Why · 2008-12-13T15:29:00.000Z · score: 0 (0 votes) · LW · GW
Morality does not compress; it's not something you can learn just by looking at the (nonhuman) environment or by doing logic; if you want to get all the details correct, you have to look at human brains.

Why? Why can't you rewrite this as "complexity and morality"?

You may talk about the difference between mathematical and moral insights, which is true, but then mathematical insights aren't sufficient for intelligence. Maths doesn't tell you whether a snake is poisonous and will kill you or not...

Comment by will_pearson on You Only Live Twice · 2008-12-12T22:28:34.000Z · score: 1 (3 votes) · LW · GW

The number of people living today because their ancestors invested their money in themselves/their status and their children: all of us.

The number of people living today because they or someone else invested their money in cryonics or another scheme to live forever: 0.

Not saying that things won't change in the future, but there is a tremendously strong bias to spend your resources on ambulatory people and new people, because that has been what has worked previously.

Women might have stronger instincts in this respect as they have been more strongly selected for the ability to care for their children (unlike men).

If you want to change this state of affairs, swiftly at least, you have to tap into our common psyche as successful replicators and have it pass the "useful for fitness" test. This would be as easy as making it fashionable or a symbol of high status: get Obama to sign up publicly and I think you would see a lot more interest.

High status has been something sought after because it gets you better mates and more of them (perhaps illicitly).

Comment by will_pearson on What I Think, If Not Why · 2008-12-11T22:42:22.000Z · score: -1 (1 votes) · LW · GW
Will, your example, good or bad, is universal over singletons, nonsingletons, any way of doing things anywhere.

My point was not that non-singletons can see it coming. But if one non-singleton tries self-modification in a certain way and it doesn't work out, then other non-singletons can learn from the mistake (or, in the worst evolutionary case, the descendants of people curious in a certain way would be outcompeted by those that instinctively didn't try the dangerous activity). Less so with the physics experiments, depending on the dispersal of non-singletons and the range of the physical destruction.

Comment by will_pearson on What I Think, If Not Why · 2008-12-11T22:08:13.000Z · score: 2 (2 votes) · LW · GW

There are some types of knowledge that seem hard to come by (especially for singletons). The type of knowledge is knowing what destroys you. As all knowledge is just an imperfect map, there are some things you need to know a priori to avoid. The archetypal example is the in-built fear of snakes in humans/primates. If we hadn't had this while it was important, we would have experimented with snakes the same way we experiment with stones/twigs etc., and generally gotten ourselves killed. In a social system you can see what destroys other things like you, but the knowledge of what can kill you is still hard won.

If you don't have this type of knowledge you may step into an unsafe region, and it doesn't matter how much processing power you have or how correctly you use your previous data. Examples that might threaten singletons:

1) Physics experiments: the model says you should be okay, but you don't trust your model under these circumstances, which is the reason to do the experiment.

2) Self-change: your model says that the change will be better, but the model is wrong. It disables the system into a state it can't recover from, i.e. not an obvious error but something that renders it ineffectual.

3) Physical self-change: large-scale unexpected effects from feedback loops at different levels of analysis, e.g. things like the swinging/vibrating bridge problem, but deadly.

Comment by will_pearson on Disjunctions, Antipredictions, Etc. · 2008-12-09T16:18:43.000Z · score: 3 (5 votes) · LW · GW
No diminishing returns on complexity in the region of the transition to human intelligence: "We're so similar to chimps in brain design, and yet so much more powerful; the upward slope must be really steep."

Or there is no curve and it is a random landscape with software being very important...

Scalability of hardware: "Humans have only four times the brain volume of chimps - now imagine an AI suddenly acquiring a thousand times as much power."

Bottlenose dolphins have twice the brain volume of normal dolphins (and comparable to our brain volume), yet aren't massively more powerful compared to them. Asian elephants have 5 times the weight...

Comment by will_pearson on True Sources of Disagreement · 2008-12-09T01:00:49.000Z · score: 0 (0 votes) · LW · GW

I personally find the comparison between spike frequency and clock speed unconvincing. It glosses over all sorts of questions, such as whether the system can keep all the working memory it needs in the 2MB or whatever of processor cache it has. Neurons have the advantage of having local memory, with no need for the round trip off-chip.

We also have no idea how neurons really work; there has been recent work on the role of DNA methylation in memory. Perhaps it would be better to view neural firing as communication between mini-computers, rather than as processing in itself.

I'm also unimpressed with large numbers: 10^15 operations is not enough to process the positions of 1 gram of hydrogen atoms; in fact it would take 20 million years for it to do so (assuming one op per atom). So this is what we have to worry about planning to atomically change our world to the optimal form. Sure, it is far more than we can consciously do, and quite possibly a lot more than we can do unconsciously as well. But it is not mind-bogglingly huge compared to the real world.

Comment by will_pearson on Artificial Mysterious Intelligence · 2008-12-07T20:56:18.000Z · score: -1 (1 votes) · LW · GW

The universe doesn't have to be kind and make all problems amenable to insight....

There are only a certain number of short programs, and once a program gets above a certain length it is hard to compress (I can't remember the reference for this, so it may be wrong; can anyone help?). We can of course reorder things, but then we have to make things that are currently simple complex.
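
For what it's worth, the standard counting argument from Kolmogorov complexity (my addition, in case it helps pin down the reference) goes roughly:

```latex
% Fewer than 2^{n-c} binary programs are shorter than n - c bits, while there are
% 2^n strings of length n, so at most a 2^{-c} fraction of n-bit strings can be
% compressed by c or more bits:
\#\{\, p : |p| < n - c \,\} \;\le\; 2^{\,n-c} - 1
\quad\Longrightarrow\quad
\frac{\#\{\, x \in \{0,1\}^n : K(x) < n - c \,\}}{2^n} \;<\; 2^{-c}.
```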

That said, I do think insight will play some small part in the development of AI, but that there may well be a hell of a lot of parameter tweaking that we don't understand or know the reasons for.

Comment by will_pearson on Sustained Strong Recursion · 2008-12-06T11:00:49.000Z · score: 1 (1 votes) · LW · GW

You are right about smaller being faster and local being more capable of reacting. But Eliezer's arguments are predicated on there being a type of AI that can change itself without deviating from a purpose. So an AI that splits itself into two may deviate in capability, but should share the same purpose.

Whether such an AI is possible or would be effective in the world is another matter.

Comment by will_pearson on Sustained Strong Recursion · 2008-12-05T23:54:56.000Z · score: 0 (0 votes) · LW · GW
We also suppose that the technology feeding Moore's Law has not yet hit physical limits. And that, as human brains are already highly parallel, we can speed them up even if Moore's Law is manifesting in increased parallelism instead of faster serial speeds - we suppose the uploads aren't yet being run on a fully parallelized machine, and so their actual serial speed goes up with Moore's Law. Etcetera.

Moore's Law in its canonical form says nothing about speed. You should probably define exactly which variant you are using.

Comment by will_pearson on Underconstrained Abstractions · 2008-12-05T07:02:49.000Z · score: 0 (0 votes) · LW · GW

Consider the following. Chimpanzees make tools. The first hominid tools were simple chipped stone from 2.5 million years ago. Nothing changed for a million years. Then Homo erectus came along with Acheulian tech, and nothing happened for another million years. Then around two hundred thousand years ago H. sapiens appeared and tool use really diversified. Brains had been swelling since 3 million years ago.

If brains were getting more generally intelligent during that time, as they increased in size, it doesn't show. They may have been getting better at wooing women and looking attractive to men.

This info has been cribbed from the Red Queen page 313 hardback edition.

I would say this shows a discontinuous improvement in intelligence, where intelligence is defined as the ability to generally hit a small target in search space about the world, rather than the ability to get into another hominid's pants.

Comment by will_pearson on Recursive Self-Improvement · 2008-12-03T14:46:38.000Z · score: 0 (0 votes) · LW · GW
Because it shows that with constant optimization pressure from natural selection and no intelligent insight, there were no diminishing returns to a search for better brain designs up to at least the human level. There were probably accelerating returns (with a low acceleration factor). There are no visible speedbumps, so far as I know.

Were the brain designs better because they were more powerful or more intelligent?

That is, how many of the improvements were adding more resources to the brain (because that paid off in this evolutionary case), rather than adding more efficient programs/systems?

Comment by will_pearson on Recursive Self-Improvement · 2008-12-02T20:15:03.000Z · score: 1 (1 votes) · LW · GW

Pearson: I think you have a tendency to overlook our lack of knowledge of how the brain works. You talk of constant brain circuitry, when people add new hippocampal cells through their life. We also expand the brain areas devoted to fingers if we are born blind and use braille.

Pearson, "constant brains" means "brains with constant adaptation-algorithms, such as an adaptation-algorithm for rewiring via reinforcement" not "brains with constant synaptic networks". I think a bit of interpretive charity would have been in order here. We don't know how deep the rabbit hole of adaptation goes. Are there constant adaptation-algortihms? Constant adaptation algorithms are not a prerequisite for an optimization process, evolution being the cannonical example. It gets by with a very changeable adaptation-algorithm embodied in the varieties of genetic transfer, reproduction rates etc. We have barely scratched the surface of adaption systems, assuming a constant adaptation-algorithm for intelligence is premature, as far as I am concerned.

Comment by will_pearson on Recursive Self-Improvement · 2008-12-01T23:58:32.000Z · score: 0 (2 votes) · LW · GW

I think you have a tendency to overlook our lack of knowledge of how the brain works. You talk of constant brain circuitry, when people add new hippocampal cells through their life. We also expand the brain areas devoted to fingers if we are born blind and use braille.

We don't know how else the brain rewires itself. In some sense all knowledge is wiring in the brain... I mean, what else is it going to be? This is all invisible to us, and may throw a spanner in the works of any intelligence trying to improve a moving target.

Comment by will_pearson on Total Nano Domination · 2008-11-27T15:29:02.000Z · score: 0 (0 votes) · LW · GW

By nanocomputer I meant rod-logic or whatever the state of the art in hypothetical computing is. I want to see how it compares to standard computing.

I think the lure of nanocomputing is supposed to be low power consumption and the easy 3D stackability that that entails as well. It is not sufficient to have small components if they are in a 2D design and you can't pack too many together without overheating.

Some numbers would be nice though.

Comment by will_pearson on Total Nano Domination · 2008-11-27T11:21:26.000Z · score: 0 (0 votes) · LW · GW

How much of current R&D time is humans thinking, and how much is compiling projects, running computer simulations or doing physical experiments?

E.g. would having faster-than-human-speed uploads speed up getting results from the LHC by the ratio of their speed to ours?

Also, do you have some FLOPS-per-cubic-centimeter estimates for nanocomputers? I looked at this briefly, and I couldn't find anything. It references a previous page that I can't find.

Comment by will_pearson on ...Recursion, Magic · 2008-11-25T10:01:21.000Z · score: 1 (1 votes) · LW · GW

Do you have any evidence that insight is applicable to understanding and creating intelligences? Because without that, recursion isn't powerful and the magic doesn't get off the ground.

Comment by will_pearson on Cascades, Cycles, Insight... · 2008-11-24T22:54:33.000Z · score: 0 (0 votes) · LW · GW

Another interesting innovation of evolution is the Hox genes. Basically, evolution invented modularity: it saved the fact that 6 legs were useful and allowed the space of different lengths for all the legs to be searched, rather than tweaking each leg individually. So I would say the homeobox system would count as an insight, for genes.

Comment by will_pearson on Surprised by Brains · 2008-11-23T12:11:36.000Z · score: -1 (1 votes) · LW · GW

Why not? The space we search has been very useful, except for finding the solution to the creation of a mind. Perhaps the space of minds is outside the space of Turing machines we are currently searching. It would certainly explain why no one has been very successful so far.

Not to say that we could never find a mind. Just that we might have trouble using a compressible search space.

Comment by will_pearson on Surprised by Brains · 2008-11-23T10:11:24.000Z · score: 3 (5 votes) · LW · GW
Believer: The search space is compressible -

The space of behaviors of Turing machines is not compressible; subspaces are, but not the whole lot. What space do you expect the SeedAIs to be searching? If you could show that it is compressible and bound to have an uncountable number of better versions of the SeedAI, then you could convince me that I should worry about Fooming.

As such, when I think of self-modifiers I think of them searching the space of Turing machines, which just seems hard.

Comment by will_pearson on Life's Story Continues · 2008-11-22T12:29:40.000Z · score: -1 (1 votes) · LW · GW

I'm sure I have shown this to you before, Eliezer, but other people might be interested in the very high rate of mutation in the cone shell poison genes.

Presumably this is under genetic control, so it counts as another meta-innovation. How many there are throughout the world, we really don't know. I don't think reasoning about a process we don't really understand (evolution) will help us much in trying to predict what happens when things go recursive.

Comment by will_pearson on Observing Optimization · 2008-11-21T14:43:21.000Z · score: 1 (1 votes) · LW · GW

Bacteria have a form of sex. It is horizontal rather than vertical biological information transfer though. I don't know what the limits of the transfer are, but it might make sense to talk about groups of bacteria that can transfer genes between them as a species. Evolution is not a simple thing.

Comment by will_pearson on Observing Optimization · 2008-11-21T10:37:57.000Z · score: 1 (3 votes) · LW · GW
Should you expect more speciation after the invention of sex, or less

What meaning of species are you using? Under some definitions speciation doesn't make sense without sex... as a species is defined as a group of organisms capable of interbreeding.

Comment by will_pearson on The First World Takeover · 2008-11-19T22:06:34.000Z · score: 0 (0 votes) · LW · GW
to say nothing of a whole multicellular C. elegans earthworm

It would probably be better if you had said nothing of it. It eats dead rotting vegetable matter, which you have just hypothetically removed. Plants are replicators too! So it would die in short order and time would be reversed.

But the significant thing was not how much material was recruited into the world of replication; the significant thing was the search, and the material just carried out that search.

Search is significant, but it was not the only significant thing. What was searched was also significant. If by chance a brain that searched the space of good chess strategies had spontaneously appeared, it would not be important. What was searched was "what patterns are good for survival to date", not "what patterns are expected to be good for survival". This is important: it is real first-hand information. We cannot exist because of some delusive part of nature's mind that thinks we are good at surviving so far; we have to be!

Our existence is first-order information about the world. Our mental models are only second-order; they are one step removed from reality. We try to update them by testing reality against what the models predict, but there might always be black swans the models don't see. The machinery that creates them has to have been useful for surviving to date and for carrying out many tasks that help that survival, but the models themselves do not necessarily have to be useful, right, correct or true. I think there will always be a flow of information from the first-order bodies to the second-order models.

Comment by will_pearson on The Nature of Logic · 2008-11-17T12:09:24.000Z · score: -3 (3 votes) · LW · GW

Sadly, Eliezer does seem to think you can get more knowledge about the world from pure deductive systems, as long as the axioms are the laws of a universe and you have infinite computing power.

http://www.overcomingbias.com/2008/11/complexity-and.html#more

Comment by will_pearson on Logical or Connectionist AI? · 2008-11-17T10:25:03.000Z · score: 0 (2 votes) · LW · GW

It was necessary for people doing AI to disassociate themselves from previous attempts at doing AI to get funding (see the various AI winters), as it came into disrepute for promising too much. Hence terms like GOFAI and the connectionist/logical dichotomy.

You are lucky not to be on that treadmill. Sadly nowadays you have to market your speculative research to be successful.

Comment by will_pearson on The Weighted Majority Algorithm · 2008-11-16T20:43:00.000Z · score: 0 (0 votes) · LW · GW

Don, you missed my comment saying you could bound the randomized algorithm in the same way to n/4+1, by keeping track of where you pick: if you find n/4+1 0s in one half, you conclude it is in the other half.

I wouldn't say that a randomized algorithm is better per se, just more generally useful than a particular deterministic one. You don't have to worry about the types of inputs given to it.

This case matters because I am a software creator and I want to reuse my software components. In most cases I don't care too much about the performance of every little sub-component. Sure, it is not "optimal", but spending my time optimizing a process that only happens once is not worth it in the real world!

Obsessing about optimization is not always the way to win.

Comment by will_pearson on The Weighted Majority Algorithm · 2008-11-14T21:15:56.000Z · score: 1 (1 votes) · LW · GW

Don, how well a deterministic algorithm does depends upon what the deterministic algorithm is and what the input is; it cannot always get away with O(1) queries.

E.g. in a 20-bit case:

If the deterministic algorithm starts its queries from the beginning, querying each bit in turn, and the pattern is always 00000111110000000000, it will always take n/4+1 queries. Only when the input is random will it take O(1) queries on average.

The randomized one will take O(1) queries on average on every type of input.
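
A quick sketch of that contrast (my own reconstruction of the setup in this thread; the helper names and the input generator are mine):

```python
import random

def adversarial_input(n):
    """Generalise the 20-bit example above: n/4 zeros, then n/4 ones, then n/2
    zeros (n assumed divisible by 4)."""
    return [0] * (n // 4) + [1] * (n // 4) + [0] * (n // 2)

def deterministic_queries(bits):
    """Scan from the beginning, one bit per query, until a 1 is found."""
    for queries, b in enumerate(bits, start=1):
        if b:
            return queries
    return len(bits)

def randomized_queries(bits):
    """Query positions uniformly at random without repetition until a 1 is found."""
    for queries, i in enumerate(random.sample(range(len(bits)), len(bits)), start=1):
        if bits[i]:
            return queries
    return len(bits)

n = 20
bits = adversarial_input(n)
print(deterministic_queries(bits))                                  # always n/4 + 1 = 6 here
print(sum(randomized_queries(bits) for _ in range(10000)) / 10000)  # ~3.5, whatever the layout
```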

Comment by will_pearson on The Weighted Majority Algorithm · 2008-11-14T17:22:04.000Z · score: 2 (2 votes) · LW · GW

Isn't the probability of finding a 1 bit 1/4 per sample, because you have to sample both sides? If you randomly sample without repetition you can do even better than that. You can even bound the worst case for the random algorithm at n/4+1 as well, by seeing if you ever pick n/4+1 0s from one side. So there is no downside to doing it randomly as such.

However, if it is an important problem and you think you might be able to find some regularities, the best bet would be to do Bayesian updates on which positions are most likely to be ones and preferentially choose those. You might be bitten by a superintelligence predicting what you think most probable and doing the opposite, but as you have made the system more complex, the world has to be more complex to outsmart you, which may be less likely. You can even check this by seeing whether you are having to do n/4+1 samples, or close to it, on average.

Comment by will_pearson on Selling Nonapples · 2008-11-13T21:22:23.000Z · score: 0 (0 votes) · LW · GW

Was the DARPA Grand Challenge winner written using CES or a successor? I see no mention of it in the DARPA paper.

If not, why not? Perhaps neither of these approaches is good in the real world.

I am also guilty of wanting to toss people back to the Turing tarpit to get to AI, but I don't advocate staying there for long. I just think we have the wrong foundation for resource management and have to redo security and resource allocation at the architectural level, then rebuild a more adaptive system from there. I have a few ideas, and they do have a fair amount of centralized modeling. But those methods of centralized modeling should be able to be changed in extremis, if what we thought about the world was wrong.

Think about it this way: you advocate sometimes radically rethinking your fundamental ways of conceiving the world; should we not allow our AI systems to sometimes do the same, rather than constrain them with our preconceptions for eternity?

Comment by will_pearson on The Weighted Majority Algorithm · 2008-11-13T00:06:44.000Z · score: 0 (0 votes) · LW · GW

It is getting late, so this may be way off, and I have not had time to read the paper.

This is also assuming finite trials, right? Because over infinite trials, if you have a non-zero probability of siding with the wrong group of classifiers, you will make infinitely many mistakes, no matter how small the probabilities get.

It seems it is trading off a better expectation for a worse real worst case.

Also are you thinking of formalising an alternative to the infinite SI worst case you describe?