Posts

[Link] The Bayesian argument against induction. 2011-07-18T21:52:19.895Z

Comments

Comment by Peterdjones on How to Convince Me That 2 + 2 = 3 · 2014-10-09T03:32:14.316Z · LW · GW

Mathematics are so firmly grounded in the physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not because it is separate from it.

On the other hand...

http://en.m.wikipedia.org/wiki/Is_logic_empirical%3F

Comment by Peterdjones on General purpose intelligence: arguing the Orthogonality thesis · 2014-03-13T20:08:50.177Z · LW · GW

I could add: Objective punishments and rewards need objective justification.

Comment by Peterdjones on General purpose intelligence: arguing the Orthogonality thesis · 2014-03-13T19:35:20.306Z · LW · GW

From my perspective, treating rationality as always instrumental, and never a terminal value, is playing around with its traditional meaning. (And indiscriminately teaching instrumental rationality is like indiscriminately handing out weapons. The traditional idea, going back to at least Plato, is that teaching someone to be rational improves them...changes their values.)

Comment by Peterdjones on The Problem with AIXI · 2014-03-13T18:43:47.691Z · LW · GW

I am aware that humans have a non-zero level of life-threatening behaviour. If we wanted it to be lower, we could make it lower, at the expense of various costs. We don't, which seems to mean we are happy with the current cost-benefit ratio. Arguing, as you have, that the risk of AI self-harm can't be reduced to zero doesn't mean we can't hit an actuarial optimum.

It is not clear to me why you think safety training would limit intelligence.

Comment by Peterdjones on The Problem with AIXI · 2014-03-12T18:37:51.658Z · LW · GW

Regarding the anvil problem: you have argued with great thoroughness that one can't perfectly prevent an AIXI from dropping an anvil on its head. However, I can't see the necessity. We would need to get the probability of a dangerously unfriendly SAI as close to zero as possible, because it poses an existential threat. But a suicidally foolish AIXI is only a waste of money.

Humans have a negative reinforcement channel relating to bodily harm called pain. It isn't perfect, but it's good enough to train most humans to avoid doing suicidally stupid things. Why would an AIXI need anything better? You might want to answer that there is some danger related to an AIXI's intelligence, but its clock speed, or whatever, could be throttled during training.

Also, any seriously intelligent AI made with the technology of today, or the near future, is going to require a huge farm of servers. The only way it could physically interact with the world is through a remote-controlled body...and if it drops an anvil on that, it actually will survive as a mind!

Comment by Peterdjones on Arguing Orthogonality, published form · 2014-03-12T14:09:43.267Z · LW · GW

An entity that has contradictory beliefs will be a poor instrumental rationalist. It looks like you would need to engineer a distinction between instrumental beliefs and terminal beliefs. While we're on the subject, you might need a firewall to stop an AI acting on intrinsically motivating ideas, if they exist. In any case, orthogonality is an architecture choice, not an ineluctable fact about minds.

The OT has multiple forms, as Armstrong notes. An OT that says you could make arbitrary combinations of preference and power if you really wanted to can't plug into an argument that future AI will, with high probability, be a Lovecraftian horror, at least not unless you also argue that an orthogonal architecture will be chosen, with high probability.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-12T12:48:15.994Z · LW · GW

something previously deemed "impossible"

It's clearly possible for some values of "gatekeeper", since some people fall for 419 scams. The test is a bit meaningless without information about the gatekeepers.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-12T10:37:25.363Z · LW · GW

The problem is that I don't see much evidence that Mr. Loosemore is correct. I can quite easily conceive of a superhuman intelligence that was built with the specification of "human pleasure = brain dopamine levels", not least of all because there are people who'd want to be wireheads and there's a massive amount of physiological research showing human pleasure to be caused by dopamine levels.

I don't think Loosemore was addressing deliberately unfriendly AI, and for that matter EY hasn't been either. Both are addressing intentionally friendly or neutral AI that goes wrong.

I can quite easily conceive of a superhuman intelligence that knows humans prefer more complicated enjoyment, and even do complex modeling of how it would have to manipulate people away from those more complicated enjoyments, and still have that superhuman intelligence not care.

Wouldn't it care about getting things right?

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-12T08:24:20.525Z · LW · GW

Trying to think this out in terms of levels of smartness alone is very unlikely to be helpful.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-11T08:07:03.476Z · LW · GW

Then solve semantics in a seed.

Comment by Peterdjones on Arguing Orthogonality, published form · 2013-09-10T18:21:22.914Z · LW · GW

To be a good instrumental rationalist, an entity must be a good epistemic rationalist, because knowledge is instrumentally useful. But to be a good epistemic rationalist, an entity must value certain things, like consistency and lack of contradiction. IR is not walled off from ER, which itself is not walled off from values. The orthogonality thesis is false. You can't have any combination of values and instrumental efficacy, because an entity that thinks contradictions are valuable will be a poor epistemic rationalist and therefore a poor instrumental rationalist.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T18:14:37.099Z · LW · GW

That's not very realistic. If you trained an AI to parse natural language, you would naturally reward it for interpreting instructions the way you want it to.

We want to select AIs that are friendly and understand us, and this has already started happening.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T17:47:38.262Z · LW · GW

My answer: who knows? We've given it a deliberately vague goal statement (even more vague than the last one), we've given it lots of admittedly contradictory literature, and we've given it plenty of time to self-modify before giving it the goal of self-modifying to be Friendly.

Humans generally manage with those constraints. You seem to be doing something that is kind of the opposite of anthropomorphising -- treating an entity that is stipulated as having at least human intelligence as if it were as literal and rigid as a non-AI computer.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T17:25:46.233Z · LW · GW

Semantics isn't optional. Nothing could qualify as an AGI, let alone a super one, unless it could hack natural language. So Loosemore architectures don't make anything harder, since semantics has to be solved anyway.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T17:06:45.809Z · LW · GW

"code in the high-level sentence, and let the AI figure it out."

http://lesswrong.com/lw/rf/ghosts_in_the_machine/

So it's impossible to directly or indirectly code in the complex thing called semantics, but possible to directly or indirectly code in the complex thing called morality? What? What is your point? You keep talking as if I am suggesting there is something that can be had for free, without coding. I never even remotely said that.

If the AI is too dumb to understand 'make us happy', then why should we expect it to be smart enough to understand 'figure out how to correctly understand "make us happy", and then follow that instruction'? We have to actually code 'correctly understand' into the AI. Otherwise, even when it does have the right understanding, that understanding won't be linked to its utility function.

I know. A Loosemore architecture AI has to treat its directives as directives. I never disputed that. But coding "follow these plain English instructions" isn't obviously harder or more fragile than coding "follow <>". And it isn't trivial, and I didn't say it was.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T16:53:48.734Z · LW · GW

Yes, but that's stupidity on the part of the human programmer, and/or on the part of the seed AI if we ask it for advice.

That depends on the architecture. In a Loosemore architecture, the AI interprets high-level directives itself, so if it gets them wrong, that's its mistake.

Comment by Peterdjones on Honesty: Beyond Internal Truth · 2013-09-10T16:42:35.594Z · LW · GW

There is no theorem which proves a rationalist must be honest - must speak aloud their probability estimates.

Speaking what you believe may be frankness, candour, or tactlessness, but it isn't honesty. Honesty is not lying. It involves no requirement to call people Fatty or Shorty.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T12:33:44.932Z · LW · GW

Goertzel appears to be a respected figure in the field. Could you point the interested reader to your critique of his work?

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T12:21:05.372Z · LW · GW

Almost everything he said has been civil, well informed and on topic. He has made one complaint about downvoting, and EY has made an ad hominem against him. EY's behaviour has been worse.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T11:29:07.743Z · LW · GW

Richard, please don't be bullied off the site. It is LW that needs to learn how to handle debate and disagreement, since they are basic to rationality.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T11:12:44.409Z · LW · GW

But our interesting disagreement seems to be over (c). Interesting because it illuminates general differences between the basic idea of a domain-general optimization process (intelligence) and the (not-so-)basic idea of Everything Humans Like. One important difference is that if an AGI optimizes for anything, it will have strong reason to steer clear of possible late intelligence defeaters. Late Friendliness defeaters, on the other hand, won't scare optimization-process-optimizers in general.

But it will scare friendly ones, which will want to keep their values stable.

But, once again, it doesn't take any stupidity on the AI's part to disvalue physically injuring a human,

It takes stupidity to misinterpret friendliness.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T10:55:17.914Z · LW · GW

Now, why exactly should we expect the superintelligence that grows out of the seed to value what we really mean by 'pleasure', when all we programmed it to do was X, our probably-failed attempt at summarizing our values?

  • Maybe we didn't do it that way. Maybe we did it Loosemore's way, where you code in the high-level sentence, and let the AI figure it out. Maybe that would avoid the problem. Maybe Loosemore has solved FAI much more straightforwardly than EY.

  • Maybe we told it to. Maybe we gave it the low-level expansion of "happy" that we or our seed AI came up with, together with an instruction that it is meant to capture the meaning of the high-level statement, that the HL statement is the Prime Directive, and that if the AI judges that the expansion is wrong, then it should reject the expansion.

  • Maybe the AI will value getting things right because it is rational.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T10:31:04.643Z · LW · GW

Yes, but the AI was told, "make humans happy." Not, "give humans what they actually want."

And, you assume, it is not intelligent enough to realise that the intended meaning of "make people happy" is "give people what they actually want" -- although you and I can see that. You are assuming that it is a subintelligence. You have proven Loosemore's point.

You say things like "'Make humans happy' implies that..." and "subtleties implicit in..." You seem to think these implications are simple, but they really aren't. They really, really aren't.

We are smart enough to see that the Dopamine Drip isn't intended. The AI is smarter than us. So....

This is why I say you're anthropomorphizing.

I say that you are assuming the AI is dumber than us, when it is stipulated as being smarter.

Comment by Peterdjones on The genie knows, but doesn't care · 2013-09-10T10:19:48.179Z · LW · GW

He's achieved about what Ayn Rand achieved, and almost everyone thinks she was a crank.

Comment by Peterdjones on Arguments Against Speciesism · 2013-08-18T20:55:43.574Z · LW · GW

And to be fair, you'd have to give ten or a hundred votes to people with PhDs in political science.

Comment by Peterdjones on Why Are Individual IQ Differences OK? · 2013-08-17T22:25:57.569Z · LW · GW

It is almost always bad Bayes, or any other kind of reasoning, to make judgements about individuals based on group characteristics, since there is almost always information about them as individuals available, which is almost always more reliable.

Comment by Peterdjones on Why Are Individual IQ Differences OK? · 2013-08-05T16:04:52.583Z · LW · GW

As evidence for this, he used the fact that right here in America, test scores have gone up over the past several decades. This clearly isn't caused by some genetic change, so the most likely explanation is cultural change.

Is that actually more likely than environmental change?

Comment by Peterdjones on Guardians of Ayn Rand · 2013-08-05T14:32:06.612Z · LW · GW

With permission from both their spouses, which counts for a lot in my view. If you want to turn that into a "problem", you have to specify that the spouses were unhappy—and then it's still not a matter for outsiders.

I dare say many a guru or cult leader has similar "permission". It often isn't taken to excuse their actions, because people recognise that such permission can be browbeaten out of people by someone who seems to them to be an authority figure.

Comment by Peterdjones on Guardians of Ayn Rand · 2013-08-05T14:05:57.972Z · LW · GW

Rand herself didn't understand emergence (she cast a biologist as the embodiment of scientific corruption, because there is too much complexity in his area of study for any one human brain to be familiar with), and also didn't understand much about cybernetics, etc.

That's hardly the start of it. She opposed relativity and QM, and fence-sat on Evolution.

ETA:

I don't think "1957" is much of an excuse either, particularly about evolution. For another thing, she never wavered till her death in the 80s. It makes no sense to focus on Bayes, unless you are a Bayes cultist. Rand was unaware that a realistic, rational, science-orientated form of philosophy had arisen since she was spoon-fed Hegelianism in the early 20th century, and remained unwilling to connect with it even after John Hospers painfully explained it to her. That's the acid test of whether you are interested in promoting ideas or yourself.

Comment by Peterdjones on Making Rationality General-Interest · 2013-07-25T03:55:22.979Z · LW · GW

Suggestion: teach rationality as an open spirit of enquiry, not as a secular religion that will turn you into a clone of Richard Dawkins.

Comment by Peterdjones on Making Rationality General-Interest · 2013-07-25T03:41:52.287Z · LW · GW

There are already precise terms for most of the concepts LW discusses. It's just that LW uses its own jargon.

Comment by Peterdjones on Making Rationality General-Interest · 2013-07-25T03:31:05.957Z · LW · GW

You want to teach philosophy as rationality?

Philosophy includes epistemology, which is kind of important to epistemic rationality.

Philosophy is a toolbox as well as a set of doctrines.

Comment by Peterdjones on Making Rationality General-Interest · 2013-07-25T01:13:58.311Z · LW · GW

Or maybe there's a lot of utility in not coming across as geeky and selfish, so they are already being instrumentally rational.

Comment by Peterdjones on Seeking examples of people smarter than me who got hung up · 2013-07-24T23:58:02.084Z · LW · GW

Einstein backed local realism and the ensemble interpretation, both of which have been "thrown out".

Comment by Peterdjones on Seeking examples of people smarter than me who got hung up · 2013-07-24T23:56:41.593Z · LW · GW

It may be correct, but not for the reasons usually given: the non-existence of ontological randomness is in no way entailed by the existence of epistemic indeterminism.

Comment by Peterdjones on low stress employment/ munchkin income thread · 2013-07-24T16:57:54.426Z · LW · GW

As a former boss of mine used to say: "Bloody five o'clocker."

Comment by Peterdjones on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T15:20:40.563Z · LW · GW

At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do.

But presumably, productivity would rise as well, increasing the real value of wages at a given face value.

Comment by Peterdjones on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T15:10:38.036Z · LW · GW

It's plausible we'll never see a city with a high-speed all-robotic all-electric car fleet because the government, after lobbying from various industries, will require human attendants on every car - for safety reasons, of course!

I believe I have already pointed out that automatic trains already exist. Putting a human superintendent onto a train with nothing to do except watch it drive itself would be quite ineffective, because the job is so boring they are unlikely to concentrate. I believe existing driverless trains are monitored by CCTV, which is more effective since the monitors actually have something to do in flicking between channels, and the same approach could be applied to driverless cars.

Comment by Peterdjones on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T14:45:18.110Z · LW · GW

The US educational system is either getting worse at training people to handle new jobs, or getting so much more expensive that people can't afford retraining, for various other reasons. (Plus, we are really stunningly stupid about matching educational supply to labor demand. How completely ridiculous is it to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so? Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices, with salary charts and projections and probabilities of graduating that subject given their test scores? The more so considering this is a central allocation question for the entire economy? But I have no particular reason to believe this part has gotten worse since 1960.)

I always think teachers giving careers advice is a bit like priests giving sex advice -- useless at best, dangerous at worst.

Comment by Peterdjones on A case study in fooling oneself · 2013-07-22T13:15:23.435Z · LW · GW

the question of how many worlds can be answered pretty well by ~2 to the power of the average number of decoherence events since the beginning.

Decoherence events aren't well defined... they are always FAPP (for all practical purposes). That's the source of the problem.

Comment by Peterdjones on Problems of the Deutsch-Wallace version of Many Worlds · 2013-07-21T13:25:15.286Z · LW · GW

The Copenhagen interpretation suggests there's a privileged branch in some way, which is the one we actually perceive. Why should there be? This privileged branch idea is adding something that we don't need to add.

MWI adds a privileged basis that is also unnecessary.

Many worlds is pretty much that view.

MWI adds a universal quantum state that is not, and cannot be, observed.

Comment by Peterdjones on Problems of the Deutsch-Wallace version of Many Worlds · 2013-07-21T13:05:30.330Z · LW · GW

There are interpretations simpler than both CI and MWI which EY has not had time to study.

Comment by Peterdjones on If Many-Worlds Had Come First · 2013-07-21T12:52:28.209Z · LW · GW

A very quick but sufficient refutation is that the same math taken as a description of an objectively existing causal process gives us MWI, hence there is no reason to complicate our epistemology beyond this

Or MWI could be said to be complicating the ontology unnecessarily. To be sure, rQM answers epistemologically some questions that MWI answers ontologically, but that isn't obviously a Bad Thing. A realistic interpretation of the WF is a positive metaphysical assumption, not some neutral default. A realistic quantum state of the universe is a further assumption that buys problems other interpretations don't have.

Comment by Peterdjones on Problems of the Deutsch-Wallace version of Many Worlds · 2013-07-21T12:43:12.116Z · LW · GW

The arguments don't apply to interpretations that don't require a real WF or real collapse, and EY has struggled with them.

Comment by Peterdjones on Why Many-Worlds Is Not The Rationally Favored Interpretation · 2013-07-20T22:29:37.281Z · LW · GW

The basic wrong assumption being made is that quantum superposition by default equals multiplicity - that because the wavefunction in the double-slit experiment has two branches, one for each slit, there must be two of something there - and that a single-world interpretation has to add an extra postulate to this picture, such as a collapse process which removes one branch. But superposition-as-multiplicity really is just another hypothesis. When you use ordinary probabilities, you are not rationally obligated to believe that every outcome exists somewhere; and an electron wavefunction really may be describing a single object in a single state, rather than a multiplicity of them.

Another wrinkle that is too often overlooked is that superposition is observer-dependent.

Comment by Peterdjones on Why Many-Worlds Is Not The Rationally Favored Interpretation · 2013-07-20T21:08:39.580Z · LW · GW

on a large scale

Which is to say, MWI is what you get if you assume there is a universal state without an observer to observe the state or fix the basis. As it happens, it is possible to reject Universal State AND real collapse.

Comment by Peterdjones on ESR's New Take on Qualia · 2013-04-12T15:41:27.341Z · LW · GW

But we don't know that qualia aren't anything, and we don't know that about free will either.

Comment by Peterdjones on Welcome to Less Wrong! (July 2012) · 2013-03-09T09:51:37.623Z · LW · GW

Yep. A morality that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken morality.

Comment by Peterdjones on Welcome to Less Wrong! (July 2012) · 2013-03-09T09:40:22.984Z · LW · GW

It is counterintuitive that you should slave for people you don't know, perhaps because you can't be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle, there still seems to be a problem, because the prospect of one generation slaving to create paradise for another isn't fair. The simple version of utilitarianism being addressed here only sums individual utilities, and is blind to things that can only be defined at the group level, like justice and equality.

Comment by Peterdjones on We Change Our Minds Less Often Than We Think · 2013-03-03T19:23:11.710Z · LW · GW

Of course it is unworkable for politicians to stick rigidly to their manifestos. It is also unworkable for them to discard their manifestos on day one.