Posts

How should negative externalities be handled? (Warning: politics) 2013-05-08T21:40:34.745Z
A Series of Increasingly Perverse and Destructive Games 2013-02-14T09:22:23.380Z
Inferring Values from Imperfect Optimizers 2012-12-29T22:22:14.209Z
AI "Boxing" and Utility Functions 2012-12-05T23:44:21.496Z

Comments

Comment by nigerweiss on Steelmanning Young Earth Creationism · 2014-02-24T01:57:36.987Z · LW · GW

It's going to be really hard to come up with any models that don't run deeply and profoundly afoul of the Occam prior.

Comment by nigerweiss on [LINK] Slashdot interview with David Ettinger of the Cryonics Institute · 2013-08-20T20:16:44.020Z · LW · GW

When asked a simple question about broad and controversial assertions, it is rude to link to outside resources tangentially related to the issue without providing (at minimum) a brief explanation of what those resources are intended to indicate.

Comment by nigerweiss on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T09:37:26.156Z · LW · GW

I don't speak Old English, unfortunately. Could someone who does please provide me with a rough translation of the provided passage?

Comment by nigerweiss on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-29T09:46:55.871Z · LW · GW

It isn't the sort of bad argument that gets refuted. The best someone can do is point out that there's no guarantee that MNT is possible. In which case, the response is 'Are you prepared to bet the human species on that? Besides, it doesn't actually matter, because [insert more sophisticated argument about optimization power here].' It doesn't hurt you, and with the overwhelming majority of semi-literate audiences, it helps.

Comment by nigerweiss on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-28T01:04:21.584Z · LW · GW

Of course there is. For starters, most of the good arguments are much more difficult to concisely explain, or invite more arguments from flawed intuitions. Remember, we're not trying to feel smug in our rational superiority here; we're trying to save the world.

Comment by nigerweiss on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-26T08:10:34.020Z · LW · GW

That's... not a strong criticism. There are compelling reasons not to believe that God is going to be a major force in steering the direction the future takes. The exact opposite is true for MNT - I'd bet at better-than-even odds that MNT will be a major factor in how things play out basically no matter what happens.

All we're doing is providing people with a plausible scenario that contradicts flawed intuitions that they might have, in an effort to get them to revisit those intuitions and reconsider them. There's nothing wrong with that. Would we need to do it if people were rational agents? No - but, as you may be aware, we definitely don't live in that universe.

Comment by nigerweiss on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-25T10:43:50.008Z · LW · GW

I don't have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like "well, the machine won't be able to do anything on its own because it's just a computer - it'll need humanity, therefore it'll never kill us all." Even if MNT is impossible, that's still true - but bringing up MNT provides people with an obvious, intuitive path to the apocalypse. It isn't guaranteed to happen, but it's also not unlikely, and it's a powerful educational tool for showing people the sorts of things that strong AI may be capable of.

Comment by nigerweiss on Mahatma Armstrong: CEVed to death. · 2013-06-07T22:02:18.169Z · LW · GW

There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't the choices we would make ourselves right now. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.

One obvious way to solve the problem you raise is to treat 'modifying your current value approximation' as an object-level action by the AI, and one that requires it to compute your current EV - meaning that, if the logical consequences of the change (including all the future changes that the AI predicts will result from that change) don't look palatable to you, the AI won't make the first change. In other words, the AI will never assign you a value set that you find objectionable right now. This is safe in some sense, but not ideal. The profoundly racist will never accept a version of their values which, because of its exposure to more data and fewer cognitive biases, isn't racist. Ditto for the devoutly religious. This model of CEV doesn't offer the opportunity for growth.

It might be wise to compromise by locking the maximum number of edges in the graph between you and your EV to some small number, like two or three - a small enough number that value drift can't take you somewhere horrifying, but not so tightly bound up that things can never change. If your CEV says it's okay under this schema, then you can increase or decrease that number later.
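A minimal sketch of the contrast, assuming hypothetical stand-ins extrapolate_once (one step of idealization) and palatable (does the present person accept a candidate value set?) - neither corresponds to anything real, and this is only meant to make the two schemas concrete:

def conservative_cev(current, extrapolate_once, palatable):
    # Schema 1: never adopt a value set the present person finds objectionable.
    candidate = extrapolate_once(current)
    return candidate if palatable(current, candidate) else current

def bounded_cev(current, extrapolate_once, max_edges=2):
    # Schema 2: allow drift, but cap the path length in the extrapolation
    # graph between current values and extrapolated values at max_edges.
    values = current
    for _ in range(max_edges):
        values = extrapolate_once(values)
    return values

Under the second schema, max_edges itself becomes something the extrapolated values could later vote to raise or lower.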

Comment by nigerweiss on Rationality Quotes June 2013 · 2013-06-06T09:39:32.274Z · LW · GW

I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.

Comment by nigerweiss on Who thinks quantum computing will be necessary for AI? · 2013-05-30T01:34:19.934Z · LW · GW

Yeah, the glia seem to serve some pretty crucial functions as information-carriers and network support infrastructure - and if you don't track hormonal regulation properly, you're going to be in for a world of hurt. Still, I think the point stands.

Comment by nigerweiss on Who thinks quantum computing will be necessary for AI? · 2013-05-30T01:31:45.244Z · LW · GW

Last I checked scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain.

Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists who are, in my opinion, allowing some very suspect intuitions to dominate their thinking. I don't have any money right now to propose a bet, but if it turns out that the brain can't be simulated on a sufficient supply of classical hardware, I will boil, shred, and eat my entire (rather expensive) hat.

Consciousness is a much thornier nut to crack. I don't know that anyone has a good handle on that yet.

Daniel Dennett's papers on the subject seem to be making a lot of sense to me. The details are still fuzzy, but I find that, having read them, I am less confused on the subject, and I can begin to see how a deterministic system might be designed that would naturally begin to have behavior that would cause it to say the sorts of things about consciousness that I do.

Comment by nigerweiss on Who thinks quantum computing will be necessary for AI? · 2013-05-30T01:21:05.853Z · LW · GW

When I was younger, I picked up 'The Emperor's New Mind' in a used bookstore for about a dollar, because I was interested in AI and it looked like an exciting, iconoclastic take on the idea. I was gravely disappointed when it took a sharp right turn into nonsense right out of the starting gate.

Comment by nigerweiss on Who thinks quantum computing will be necessary for AI? · 2013-05-29T08:31:43.857Z · LW · GW

Building a whole brain emulation right now is completely impractical. In ten or twenty years, though... well, let's just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain.

I'd also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There's no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.

Comment by nigerweiss on Who thinks quantum computing will be necessary for AI? · 2013-05-29T08:28:24.586Z · LW · GW

Evidence?

EDIT: Sigh. Post has changed contents to something reasonable. Ignore and move on.

Reply edit: I don't have a copy of your original comment handy, so I can't accurately comment on what I was thinking when I read it. However, I don't recall it striking me as a joke, or even an exceptionally dumb thing for someone on the internet to profess belief in.

Comment by nigerweiss on [LINK]s: Who says Watson is only a narrow AI? · 2013-05-23T03:52:11.862Z · LW · GW

Watson is pretty clearly narrow AI, in the sense that if you called it General AI, you'd be wrong. There are simple cognitive tasks (like making a plan to solve a novel problem, modelling a new system, or even just playing Parcheesi) that it just can't do - at least, not without a human writing a bunch of new code to add a module that does that new thing. It's not powerful in the way that a true GAI would be.

That said, Watson is a good deal less narrow than, say, Deep Blue. Watson has a great deal of analytic depth in a reasonably broad domain (structured knowledge extraction from unformatted English), which is a major leap forward. You might say that Watson is a rough analog to a language center connected to a memory system, sitting in a box. It's not a GAI by itself, but it could be a substantial component of one down the line.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-20T00:12:37.567Z · LW · GW

Zero? Why?

At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn't ruled by a power singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no power singleton, I'd be very surprised if nobody decided to play god like that. People go to Renaissance fairs, for goodness' sake. Do you think that nobody would take the opportunity to bring back whole lost eras of humanity in bottle-worlds?

As for the other point, if we decide that our simulators don't resemble us, then calling them 'people' is spurious. We know nothing about them. We have no reason to believe that they'd tend to produce simulations containing observers like us (the vast majority of computable functions won't). Any speculation, if you take that approach, that we might be living in a simulation is entirely baseless and unfounded. There is no reason to privilege that cosmological hypothesis over simpler ones.

Comment by nigerweiss on Open thread, May 17-31 2013 · 2013-05-19T20:42:59.575Z · LW · GW

I know some hardcore C'ers in real life who are absolutely convinced that centrally-planned Marxist/Leninist Communism is a great idea, and they're sure we can get the kinks out if we just give it another shot.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-19T19:17:53.098Z · LW · GW

Unless P=NP, I don't think it's obvious that such a simulation could be built to be perfectly (to the limits of human science) indistinguishable from the original system being simulated. There are a lot of results which are easy to verify but arbitrarily hard to compute, and we encounter plenty of them in nature and physics. I suppose the simulators could be futzing with our brains to make us think we were verifying incorrect results, but now we're alarmingly close to solipsism again.

I guess one way to test this hypothesis would be to try to construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior ("Project: Piss Off God"), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn't be practically computable in our universe. If nothing interesting happens, then we have evidence that either we aren't in a simulation, or P=NP.
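To make the verify-versus-compute asymmetry concrete, here is a toy sketch using integer factoring as a stand-in hard problem; the actual experiment described above would involve a physical system whose dynamics have this property, not a Python script:

def factor_naive(n):
    # Expensive direction: recover a nontrivial factor by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

def verify_factorization(n, p, q):
    # Cheap direction: checking a claimed answer costs one multiplication.
    return 1 < p < n and 1 < q < n and p * q == n

if __name__ == "__main__":
    n = 999983 * 999979          # quick to verify a claimed factorization,
    p, q = factor_naive(n)       # noticeably slower to recover the factors
    assert verify_factorization(n, p, q)

Scaling n up widens the gap between the two directions, which is the property the proposed experiment would try to exploit against the simulators' compute budget.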

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-19T19:06:18.000Z · LW · GW

We can be a simulation without being a simulation created by our descendants.

We can, but there's no reason to think that we are. The simulation argument isn't just 'whoa, we could be living in a simulation' - it's 'here's a compelling anthropic argument that we're living in a simulation'. If we disregard the idea that we're being simulated by close analogues of our own descendants, we lose any reason to think that we're in a simulation, because we can no longer speculate on the motives of our simulators.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-19T19:03:34.985Z · LW · GW

That doesn't actually solve the problem: if you're simulating fewer people, that weakens the anthropic argument proportionately. You've still only got so much processor time to go around.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-19T02:09:29.106Z · LW · GW

There's a sliding scale of trade-offs you can make between efficiency and Kolmogorov complexity of the underlying world structure. The higher the level your model is, the more special cases you have to implement to make it work approximately like the system you're trying to model. Suffice to say that it'll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation - at least, in the domain that we're talking about.

And, you're right - we rely on Solomonoff priors to come to conclusions in science, and a universe of that type would be harder to do science in, and history would play out differently. However, I don't think there's a good way to get around that (that doesn't rely on simulator-backed conspiracies). There are never going to be very many fully detailed ancestor simulations in our future - not when you'd have to be throwing the computational mass equivalents of multiple stars at each simulation to run them at a small fraction of real time. Reality is hugely expensive. The system of equations that, to the best of our knowledge, describes a single hydrogen atom in a vacuum is essentially computationally intractable.

To sum up:

If our descendants are willing to run fully detailed simulations, they won't be able to run very many for economic reasons - possibly none at all, depending on how many optimizations to the world equations wind up being possible.

If our descendants are unwilling to run fully detailed simulations, then we would either be in the past, or there would be a worldwide simulator-backed conspiracy, or we'd notice the discrepancy, none of which seem true or satisfying.

Either way, I don't see a strong argument that we're living in a simulation.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-18T22:07:58.133Z · LW · GW

I can see a case that we're more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn't have a close resemblance to anyone's past.

That seems... problematic. If your argument depends on the future of people like us being likely to generate lots of simulations, and of us looking nothing like the past of the people doing the simulating, that's contradictory. If you simply think that every possible agency in the top level of reality is likely to run enough simulations that people like us emerge accidentally, that seems like a difficult thesis to defend.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-18T21:56:34.849Z · LW · GW

Not for the simulations to work - only for the simulations to look exactly like the universe we now find ourselves in. 95% of human history could have played out, unchanged, in a universe without relativistic effects or quantum weirdness, far more inexpensively. We simply wouldn't have had the tools to measure the difference.

Even after the advent of things like particle accelerators, we could still be living in a very similar but-less-expensive universe, and things would be mostly unchanged. Our experiments would tell us that Newtonian mechanics are perfectly correct to as many decimal places as we can measure, and that atoms are distinct, discrete point objects with a well-defined mass, position, and velocity, and that would be fine. That'd just be the way things are. Very few non-physicist people would be strongly impacted by the change.

In other words, if they're interested in simulating humans, there are very simple approximations that would save an enormous quantity of computing power per second. The fact that we don't see those approximations in place (and, in fact, are living in such a computationally lavish universe) is evidence that we are not living in a simulation.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-18T21:05:18.388Z · LW · GW

The original form of the Bostrom thesis is that, because we know that our descendants will probably be interested in running ancestor simulations, we can predict that, eventually, a very large number of these simulations will exist. Thus, we are more likely to be living in an ancestor simulation than in the actual, authentic history that they're based on.

If we take our simulators to be incomprehensible, computationally-rich aliens, then that argument is gone completely. We have no reason to believe they'd run many simulations that look like our universe, nor do we have a reason to believe that they exist at all. In short, the crux of the Bostrom argument is gone.

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-18T19:59:31.502Z · LW · GW

In that case, you've lost the anthropic argument entirely, and whether or not we're a simulation relies on your probability distributions over possible simulating agents, which is... weird.

Comment by nigerweiss on Post ridiculous munchkin ideas! · 2013-05-18T19:52:58.193Z · LW · GW

Could also be a temporary effect. Your gut flora adjusts to what you're eating, and a sudden shift in composition can cause digestive distress.

Comment by nigerweiss on LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' · 2013-05-18T19:31:18.639Z · LW · GW

Once you have an intelligent AI, it doesn't really matter how you got there - at some point, you either take humans out of the loop because using slow, functionally-retarded bags of twitching meat as computational components is dumb, or you're out-competed by imitator projects that do. Then you've just got an AI with goals, and bootstrapping tends to follow. Then we all die. Their approach isn't any safer; they just have different ideas about how to get a seed AI (and ideas, I'd note, that make it much harder to define a utility function that we like).

Comment by nigerweiss on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-18T18:57:49.778Z · LW · GW

I think a slightly sturdier argument is that we live in an unbelievably computationally expensive universe, and we really don't need to. We could easily be supplied with a far, far grainier simulation and never know the difference. If you're interested in humans, you'd certainly rather run many orders of magnitude more simulations than run a single, imperceptibly more accurate simulation far more slowly.

There are two obvious answers to this criticism: the first is to raise the possibility that the top level universe has so much computing power that they simply don't care. However, if we're imagining a top level universe so vastly different from our own, the anthropic argument behind the Bostrom hypothesis sort of falls apart. We need to start looking at confidence distributions over simulating universes, and I don't know of a good way to do that.

The other answer is that we are living in a much grainier simulation, and either there are super-intelligent demons flitting around between ticks of the world clock, falsifying the results of physics experiments and making smoke detectors work, or there is a global conspiracy of some kind, orchestrated by the simulators and to which most of science is party, to convince the bulk of the population that we are living in a more computationally expensive universe. From that perspective, the Simulation Argument starts to look more like some hybrid of solipsism and a conspiracy theory, and seems substantially less convincing.

Comment by nigerweiss on How should negative externalities be handled? (Warning: politics) · 2013-05-09T00:04:43.925Z · LW · GW

Anti-trust law hasn't (yet!) destroyed Google - however splitting up monopolists like Standard Oil or various cartels seems a clear win.

This has more to do with failure to enforce anti-trust laws in a meaningful way, though. In the case of oil and most major cartels, these are not natural monopolies: they are monopolies built and maintained with the express help of various world states, which is a somewhat different matter.

Inherited wealth certainly does harm you. You and I are not on a level playing field with the son of some Saudi prince. We cannot compete fairly for jobs, or wealth. Its not 'caring for them after they're gone' its giving them an unfair advantage over the rest of us.

Economics is not a zero sum game. I doubt I'm going to find myself competing for a promotion with a Saudi prince. Rich people do not harm the poor with their wealth: they simply may not help (aside from, say, putting their money into banks and loaning it out to less wealthy people).

The education point follows from this, since purchasing better education is perhaps the primary way people inherit privelege. Education in our present society is a positional good - its distribution is zero sum. Some rich woman buys an extra qualification for her daughter, your parents can't buy it for you, she gets the job - not because shes better, but because her parents are richer than yours. Certainly hurts you.

Knowledge is certainly not a zero sum game! Besides, if you genuinely want everyone to have an identical education, you can't simply provide public education - you must also outlaw private schools, home schooling, and any sort of parental involvement in education: after all, why should the child of a college professor have an unfair advantage over the child of a ditch digger? That hardly seems fair.

My perspective here is colored because my public school experience was more or less entirely catastrophic, and I am predominantly self-educated. If we were to force everyone to be educated at only modern public school levels, it would be an economic and cultural disaster of hard-to-register proportions.

So perhaps the type of information campaign needs the public backing of government, as this carries the legitimacy of collective action. Also, if we start from now, the 'private organisations' with disproportionate wealth and power will be able to produce more propaganda and preserve the status quo (that benefits them)

Large charities also have the 'legitimacy of collective action.' Also, if you think the government won't use its propagandizing power to preserve the status quo that benefits it, you've never sat through the Pledge of Allegiance.

Comment by nigerweiss on How should negative externalities be handled? (Warning: politics) · 2013-05-08T23:33:27.334Z · LW · GW

I've heard this sort of thing before, and I've never been totally sold on the idea of post-scarcity economics. Mostly because I think that if you give me molecular nanotechnology, I, personally, can make good use of basically as much matter and energy (the only real resources) as I can get my hands on, with only moderately diminishing returns. If that's true for even a significant minority of the population, then there's no such thing as a post-scarcity economy, merely an extremely wealthy one.

In practice, I expect us all to be dead or under the watchful eye of some kind of Friendly power singleton by then, so the point is rather moot anyway.

Comment by nigerweiss on How should negative externalities be handled? (Warning: politics) · 2013-05-08T23:26:30.837Z · LW · GW

This seems intuitively likely, but, on the other hand, we thought the same thing about telecommunications, and our early move to nationalize that under the Bell corporation was thoroughly disastrous, and continues to haunt us to this day. I... honestly don't know. I suspect that some level of intervention is optimal here, but I'm not sure exactly how much.

In the case of water, if we were required to move water in tanks rather than pipes, water would be more expensive and traffic would be worse, but we'd also probably see far less wasted water and more water conservation.

Comment by nigerweiss on How should negative externalities be handled? (Warning: politics) · 2013-05-08T23:14:37.025Z · LW · GW

anti-trust law, laws against false advertising, corruption laws.

I'll give you the false advertising. Anti-trust laws do not seem like an obvious win in the case of natural monopolies; for example, destroying Google and giving an equal share of their resources and employees to Bing, Yahoo, and Ask.com does not seem obviously likely to improve the quality of search for consumers. As for anti-corruption laws, I'd need to see a much clearer definition before I gave you an opinion.

Your mention of wanting to "preclude blackmail, theft, and slavery" implies that you would like to see cooperative production between genuine equals, not relations of dominance or exploitation.

That's true, but I suspect you're using those words in a substantially different sense than I would intend. Let's clarify: I believe that it is undesirable for someone's freedom (in a Libertarian theoretical sense of free use of their own body, autonomy, and property) to be dependent on the amount of violent physical force that other economic players are able to muster. This does not imply most of the other things you suggest.

As it happens, I'm on the fence about public education. It seems like a good idea in principle, but having been through the process myself, I cannot endorse it with good conscience. Furthermore, I am uneasy granting the government a monopoly on indoctrination of youth, which is a major component of any education.

As for inheritance tax, I don't think that's either enforceable or desirable. If people want to care for their children after they are dead, that doesn't harm me at all, and I wish them well. As for information campaigns, government propaganda, however much I may agree with this generation of ideals, is not something I particularly want to fund. Governments do not generally give power back once it's been given to them, and, when you give a government a power, you must trust not only the current administration, but every future one for the lifespan of the country. On the whole, it seems wiser to allow private organizations to blanket the world in propaganda as they see fit.

Oh dear. We seem to have strayed quite far from libertarianism.

You certainly seem to. I think I'm comfortable where I am, thank you. This specifically is actually my complaint about much political discussion. Whenever anyone expresses any doubt about the details of their political beliefs, people from other camps flock like carrion-birds to try to convert them to the righteous path. This leads to people not discussing doubts for fear of appearing weak to the damn, dirty greens, and forces people into unnecessarily extreme positions.

Comment by nigerweiss on How should negative externalities be handled? (Warning: politics) · 2013-05-08T22:21:30.652Z · LW · GW

That does seem like a better idea, ignoring issues of price setting. Unfortunately, nation states are extremely bad at game theory, and it's difficult to achieve international agreement on these issues, especially when it will impact one nation disproportionately (China would be much harder hit, economically, by cap-and-trade legislation than the US).

I'd disagree pretty strongly with the energy issue, at least for now - but that's a discussion for another time. In politics, as in fighting couples, it is crucial to keep your peas separate from your pudding - one issue at a time.

Comment by nigerweiss on Pascal's Muggle (short version) · 2013-05-07T00:33:20.910Z · LW · GW

Here's a point of consideration: if you take Kurzweil's solution, then you can avoid Pascal's mugging when you are an agent and your utility function is defined over similar agents. However, this solution wouldn't work on, for example, a paperclip maximizer, which would still be vulnerable - anthropic reasoning does not apply over paperclips.

While it might be useful to have Friendly-style AIs be more resilient to P-mugging than simple maximizers, it's not exactly satisfying as an epistemological device.

Comment by nigerweiss on You only need faith in two things · 2013-03-11T08:48:09.227Z · LW · GW

I figured it out from context. But, sure, that could probably be clearer.

Comment by nigerweiss on Recommended reading on the ethics of Animal Cognitive Enhancement · 2013-02-22T21:27:41.656Z · LW · GW

So, in general, trying to dramatically increase the intelligence of species who lack our specific complement of social instincts and values seems like an astoundingly, overwhelmingly Bad Idea. The responsibilities to whatever it is that you wind up creating are overwhelming, as is the danger, especially if they can reproduce independently. It's seriously just a horrible, dangerous, irresponsible idea.


Comment by nigerweiss on Open thread, February 15-28, 2013 · 2013-02-19T00:43:26.662Z · LW · GW

The baseline inconvenience cost associated with using bitcoins is also really high for conducting normal commerce with them.

Comment by nigerweiss on Open thread, February 15-28, 2013 · 2013-02-18T22:07:21.685Z · LW · GW

The bitcoin market value is predicated mostly upon drug use, pedophilia, nerd paranoia, and rampant amateur speculation. Basically, break out the tea leaves.

Comment by nigerweiss on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T08:21:04.320Z · LW · GW

That would definitely make you one of those tricky people.

Comment by nigerweiss on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T03:17:17.974Z · LW · GW

That's fair.

Actually, my secret preferred solution to GAME3 is to immediately give up, write a program that uses all of us working together for arbitrary amounts of time (possibly with periodic archival and resets to avoid senescence and insanity) to create an FAI, then plug our minds into an infinitely looping function in which the FAI makes a universe for us, populates it with agreeable people, and fulfills all of our values forever. The program never halts, the return value is taken to be 0, Niger0 is instantly and painlessly killed, and Niger1 (the simulation) eventually gets to go live in paradise for eternity.

Comment by nigerweiss on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T03:07:06.469Z · LW · GW

How does your proposed solution for Game 1 stack up against the brute-force metastrategy?

Game 2 is a bit tricky. An answer to your described strategy would be to write a large number generator f(1), which produces some R that does not depend on your opponents' programs, create a virtual machine that runs your opponents' programs for R steps and, if they haven't halted, swaps the final recursive entry on the call stack with some number (say, R, for simplicity), and iterates upwards to produce real numbers for their function values. Then you just return the max of all three values. This strategy wins against any naive strategy, wins if your opponents are stuck in infinite loops, and, if taken by all players symmetrically, reduces the game to who has a larger R - i.e. the game simplifies to GAME1, and there is still (probably) one survivor.
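A rough sketch of that strategy in the game's hypothetical setting - run_for_steps(program, max_steps, fallback) is an assumed step-limited interpreter that either returns the program's output or the value obtained by substituting fallback for its deepest unfinished recursive call; nothing here corresponds to real tooling, it just makes the shape of the strategy explicit:

def big_number():
    # R is fixed in advance and does not depend on the opponents' programs.
    return 3 ** (3 ** (3 ** 3))

def my_entry(opponent_a, opponent_b, run_for_steps):
    R = big_number()
    # Evaluate each opponent, cutting any non-halting recursion off at R steps.
    a_val = run_for_steps(opponent_a, max_steps=R, fallback=R)
    b_val = run_for_steps(opponent_b, max_steps=R, fallback=R)
    # Return at least as much as either opponent, reducing the game to GAME1.
    return max(R, a_val, b_val)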

Comment by nigerweiss on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T02:55:29.588Z · LW · GW

Note that the code is being run on a halting-oracle hypercomputer, which simplifies your strategy to strategy number two.

Comment by nigerweiss on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T02:40:45.486Z · LW · GW

So, there are compelling reasons that halting oracles can't actually exist. Quite aside from your solution, it's straightforward to write programs with undefined behavior. Ex:

def undef():
    # undef() halts iff the oracle says it doesn't, so ORACLE_HALT(undef)
    # has no consistent answer.
    if ORACLE_HALT(undef):
        while True:
            print("looping forever")
    else:
        print("halting")
        return 0

For the sake of the gedankenexperiment, can we just assume that Omega has a well-established policy of horribly killing tricky people who try to set up recursive hypercomputational functions whose halting behavior depends on their own halting behavior?

Comment by nigerweiss on A Series of Increasingly Perverse and Destructive Games · 2013-02-14T17:28:36.300Z · LW · GW

So I guess I should have specified which model of hypercomputation Omega is using. Omega's computer can resolve ANY infinite trawl in constant time (assume time travel and an enormous bucket of phlebotinum are involved) - including programs which generate programs. So, the players also have the power to resolve any infinite computation in constant time. Were they feeling charitable, in an average-utilitarian sense, they could add a parasitic clause to their program that simply created a few million copies of themselves who would work together to implement FAI, allowed the FAI to reverse-engineer humanity by talking to all three of the contestants, and then created arbitrarily large numbers of value-fulfilled people and simulated them forever. But I digress.

In short, take it as a given that anyone, on any level, has a halting oracle for arbitrary programs, subprograms, and metaprograms, and that non-returning programs are treated as producing no output.
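A minimal sketch of that convention, reusing the hypothetical ORACLE_HALT from the code a few comments up - none of this is implementable on real hardware; it only pins down 'non-returning programs are treated as producing no output':

def run_with_oracle(program, *args):
    # Ask the (hypothetical) halting oracle first; a non-halting program is
    # treated as producing no output, represented here as 0.
    if ORACLE_HALT(lambda: program(*args)):
        return program(*args)
    return 0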

Comment by nigerweiss on What are you working on? February 2013 · 2013-02-06T19:08:22.402Z · LW · GW

Playing around with search-space heuristics for more efficiently approximating Solomonoff induction.

Which sounds a lot more impressive than the actual thing itself, which mostly consists of reading Wikipedia articles on information theory, then writing Python code that writes brainfuck (a decent universal language).
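For concreteness, here is a toy sketch of the kind of enumeration involved, under the assumption that 'approximating Solomonoff induction' here means weighting every short brainfuck program that reproduces the observed output by 2^(-length). The interpreter and the brute-force search are purely illustrative; the real work is in the search heuristics this sketch entirely lacks:

from itertools import product

OPS = "><+-.[]"  # brainfuck without the input command, since we only predict

def run_bf(code, max_steps=2000, tape_len=64):
    # Step-limited brainfuck interpreter; returns output bytes, or None for
    # unbalanced brackets or programs that exceed the step budget.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    for _ in range(max_steps):
        if pc >= len(code):
            return bytes(out)
        c = code[pc]
        if c == ">":
            ptr = (ptr + 1) % tape_len
        elif c == "<":
            ptr = (ptr - 1) % tape_len
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return None

def posterior_weights(observed, max_len=6):
    # Weight every program (up to max_len symbols) whose output starts with
    # the observed bytes by 2^(-length), Solomonoff-style.
    weights = {}
    for n in range(1, max_len + 1):
        for prog in map("".join, product(OPS, repeat=n)):
            out = run_bf(prog)
            if out is not None and out.startswith(observed):
                weights[prog] = 2.0 ** (-n)
    return weights

Calling posterior_weights(b"\x01\x02") then yields a crude, prior-weighted set of candidate generators for the observed prefix - roughly the quantity a Solomonoff predictor marginalizes over.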

EDIT: Also writing a novel, which is languishing at about the 20,000 word mark, and developing an indie videogame parody of Pokemon. Engine is basically done, getting started on content creation.

Comment by nigerweiss on Update on Kim Suozzi (cancer patient in want of cryonics) · 2013-01-22T22:25:27.294Z · LW · GW

That's got to be close to a best case suspension. I wish her nothing but the best.

Comment by nigerweiss on Inferring Values from Imperfect Optimizers · 2012-12-31T01:38:24.509Z · LW · GW

That would make sense. I assume the problem is lotus-eating - the system, given the choice between a large cost to optimize whatever you care about, or a small cost to just optimize its own sense experiences, will prefer the latter.

I find this stuff extremely interesting. I mean, when we talk about value modelling, what we're really talking about is isolating some subset of the causal mechanics driving human behavior (our values) from those elements we don't consider valuable. And, since we don't know if that subset is a natural category (or how to define it if it is), we've got a choice of how much we want to remove. Asking people to make a list of their values would be an example of the extreme sparse end of the spectrum, where we almost certainly don't model as much as we want to, and we know the features we're missing are important. On the other extreme end, we're just naively modelling the behaviors of humans and letting the models vote. Which definitely captures all of our values, but also captures a bunch of extraneous stuff that we don't really want our system optimizing for. The target you're trying to hit is somewhere in the middle. It seems to me that it's probably best to err on the side of including too much rather than too little, since, if we get close enough, the optimizer will likely remove a certain amount of cruft on its own.

Comment by nigerweiss on [SEQ RERUN] What Core Argument? · 2012-12-31T01:29:08.564Z · LW · GW

The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point, that abstraction becomes pretty meaningless, but in the early days, a powerful, bootstrapping optimization agent could still incorporate, hire or persuade people to do things for it, make rapid innovations in various fields, have various kinds of machines made, and generally wind up running the place fairly quickly, even if the problem of bootstrapping versatile nanomachines from current technology turns out to be time-consuming for a superintelligence. I would imagine that nanotech would be where it'd go in the longer run, but that might take time -- I don't know enough about the subject. But even without strong Drexlerian nanotechnology, it's still possible to get an awful lot done.

Comment by nigerweiss on [SEQ RERUN] What Core Argument? · 2012-12-30T22:53:27.049Z · LW · GW

Much of intelligent behavior consists of search space problems, which tend to parallelize well. At the bare minimum, it ought to be able to run more copies of itself as its access to hardware increases, which is still pretty scary. I do suspect that there's a logarithmic component to intelligence, as at some point you've already sampled the future outcome space thoroughly enough that most of the new bits of prediction you're getting back are redundant -- but the point of diminishing returns could be very, very high.

Comment by nigerweiss on Inferring Values from Imperfect Optimizers · 2012-12-30T18:42:48.848Z · LW · GW

I believe I saw a post a while back in which Anja discussed creating a variant on AIXI with a true utility function, though I may have misunderstood it. Some of the math this stuff involves I'm still not completely comfortable with, which is something I'm trying to fix.

In any case, what you'd actually want to do is to model your agents using whatever general AI architecture you're using in the first place - plus whatever set of handicaps you've calibrated into it - which presumably has a formal utility function and is an efficient optimizer.