Posts

11 minute TED talk is about instrumental rationality 2012-08-29T07:33:42.237Z
Video: Getting Things Done Author at DO Lectures 2010-10-11T08:33:26.600Z
Counterintuitive World - Good intro to some topics 2010-10-02T03:32:21.842Z
Purposefulness on Mars 2010-08-08T09:23:52.117Z
False Majorities 2010-02-03T18:43:25.281Z

Comments

Comment by JamesAndrix on On saving the world · 2014-02-06T04:21:51.593Z · LW · GW

Anecdote: I think I've had better responses summarizing LW articles in a few paragraphs without linking, than linking to them with short explanations.

It does take a lot to cross those inferential distances, but I don't think quite that much.

To be fair, my discussions may not cover a whole sequence; I have the opportunity to pick out what is needed in a particular instance.

Comment by JamesAndrix on Morality is Awesome · 2013-01-15T21:35:46.843Z · LW · GW

Sucks less sucks less.

Comment by JamesAndrix on Evaluating the feasibility of SI's plan · 2013-01-12T08:39:02.032Z · LW · GW

One trouble is that that is essentially tacking mind enslavement onto the WBE proposition. Nobody wants that. Uploads wouldn't volunteer. Even if a customer paid enough of a premium for an employee with loyalty modifications, that only rolls us back to relying on the good intent of the customer.

This comes down to the exact same arms race between 'friendly' and 'just do it', with extra ethical and reverse-engineering hurdles. (I think we're pretty much stuck with testing and filtering based on behavior, and some modifications will only be testable after uploading is available.)

Mind you, I'm not saying don't do work on this; I'm saying not much work will be done on it.

Comment by JamesAndrix on Evaluating the feasibility of SI's plan · 2013-01-10T23:37:50.978Z · LW · GW

I think we're going to get WBEs before AGI.

If we viewed this as a form of heuristic AI, it follows from your argument that we should look for ways to ensure the friendliness of WBEs. (Ignoring the ethical issues here.)

Now, maybe this is because most real approaches would consider ethical issues, but it seems like figuring out how to modify a human brain so that it doesn't act against your interests, even if it is powerful, and without hampering its intellect, is a big 'intractable' problem.

I suspect no one is working on it and no one is going to, even though we have working models of these intellects today. A new design might be easier to work with, but it will still be a lot harder than it will seem to be worth, as long as the AIs are doing near-human-level work.

Aim for an AI design whose safety is easy enough to work on that people actually will work on safety... and it will start to look a lot like SIAI ideas.

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-21T19:09:40.860Z · LW · GW

Moody set it as a condition for being able to speak as an equal.

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-21T19:00:52.996Z · LW · GW

There is some time resolution.

Albus said heavily, "A person who looked like Madam McJorgenson told us that a single Legilimens had lightly touched Miss Granger's mind some months ago. That is from January, Harry, when I communicated with Miss Granger about the matter of a certain Dementor. That was expected; but what I did not expect was the rest of what Sophie found."

Comment by JamesAndrix on Reply to Holden on 'Tool AI' · 2012-06-16T22:47:44.282Z · LW · GW

"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

http://en.wikipedia.org/wiki/Clarke%27s_three_laws

Comment by JamesAndrix on [draft] Concepts are Difficult, and Unfriendliness is the Default: A Scary Idea Summary · 2012-04-04T03:06:27.517Z · LW · GW

Tell me from China.

Comment by JamesAndrix on [draft] Concepts are Difficult, and Unfriendliness is the Default: A Scary Idea Summary · 2012-04-01T17:02:32.877Z · LW · GW

That would make (human[s] + predictor) into an optimization process that was powerful beyond the humans' ability to steer. You might see a nice-looking prediction, but you won't understand the value of the details, or the value of the means used to achieve it. (These would be called trade-offs in a goal-directed mind, but nothing weighs them here.)

It also won't be reliable to look for models in which you are predicted not to hit the Emergency Regret Button, as that may just find models in which your regret evaluator is modified.

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-15T20:38:36.053Z · LW · GW

For example, a hat and a cloak may be a uniform in a secret society, to be worn in special circumstances.

I quite like the idea of this being a standard spell, as that provides further cover for your identity.

The Guy Fawkes mask is the modern equivalent.

Comment by JamesAndrix on POSITION: Design and Write Rationality Curriculum · 2012-01-20T18:46:25.441Z · LW · GW

Almost any human existential risk is also a paperclip risk.

Comment by JamesAndrix on Knowledge is Worth Paying For · 2011-09-22T02:37:02.942Z · LW · GW

Foundations of Neuroeconomic Analysis

Without getting into the legal or moral issues involved, there is a """library""" 'assigned to the island state of Niue', it's pretty damned good, and that's all I have to say about that.

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 8 · 2011-09-09T04:49:21.131Z · LW · GW

and secondly, a medievalesque public school is such a stereotypically British environment that one expects the language to match.

During the Revolution, Salem witches were considerably more adept at battle magic than those taught at the institution that had been sucking magical knowledge out of the world for the previous 600 years. They also had the advantage of being able to train in the open since most Puritans were self-obliviating.

It wasn't until the 1890s that the school returned fully to Ministry control, after the retirement of Headmaster Teetonka. Over a century of American control left its mark on the language and culture of Wizarding Britain; unfortunately, the basis of powerful aboriginal magics remains restricted by edict to the students of the Salem Institute, El Dorado, or the University of Phoenix®.

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 8 · 2011-09-09T04:17:56.275Z · LW · GW

So does Dumbledore know that Snape is putting the Sorcerer's Stone back into Gringotts?

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 8 · 2011-09-09T03:39:59.804Z · LW · GW

by the time year one ends, she and Harry will be participating side by side against serious, life threatening issues.

Absolutely not.

Draco will be in between them.

Comment by JamesAndrix on Prisoner's Dilemma Tournament Results · 2011-09-07T02:47:10.746Z · LW · GW

A cheap-talk round would favor CliqueBots.

That O only took off once other variants were eliminated suggests a rock-paper-scissors relationship. But I suspect O only lost early on because parts of its ruleset are too vulnerable to the wider bot population. So which rules was O following most when it lost against early opponents, and which rules did it use to beat I and C4?
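For concreteness, here is a minimal sketch of how a cheap-talk round lets CliqueBots find each other before any payoffs are at stake. It is in Python and entirely hypothetical: the bot class, passphrase, and match driver are illustrations, not the actual tournament code.

    COOPERATE, DEFECT = "C", "D"

    class CliqueBot:
        """Cooperates only with bots that announced the clique passphrase."""
        PASSPHRASE = "clique-42"  # hypothetical shared secret

        def __init__(self):
            self.partner_in_clique = False

        def cheap_talk(self):
            # Message sent during the free signaling round, before any payoffs.
            return self.PASSPHRASE

        def hear(self, message):
            self.partner_in_clique = (message == self.PASSPHRASE)

        def move(self, opponent_history):
            # Cooperate with recognized clique members, defect against outsiders.
            return COOPERATE if self.partner_in_clique else DEFECT

    def play_match(bot_a, bot_b, rounds=10):
        """Cheap-talk round first, then the iterated game."""
        bot_a.hear(bot_b.cheap_talk())
        bot_b.hear(bot_a.cheap_talk())
        history_a, history_b = [], []
        for _ in range(rounds):
            move_a = bot_a.move(history_b)
            move_b = bot_b.move(history_a)
            history_a.append(move_a)
            history_b.append(move_b)
        return history_a, history_b

    print(play_match(CliqueBot(), CliqueBot()))  # two clique members: all-C

Two CliqueBots exchange the passphrase, then cooperate with each other for the whole match while defecting against everyone else, which is why a pre-game signaling round tilts the field toward them.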

Comment by JamesAndrix on Hacking on LessWrong Just Got Easier · 2011-09-06T22:51:18.528Z · LW · GW

Is there an easy way to change the logo/name?

It would be good to have a more generic default name and header; as this takes off, there will be half-finished sites turning up in Google.

Comment by JamesAndrix on Hacking on LessWrong Just Got Easier · 2011-09-06T21:49:52.180Z · LW · GW

I will try to get a torrent up shortly. (I've never created a torrent before.)

--Posted from the lesswrong VM

Edit: am I doing this right? Will seed with fiber.

Comment by JamesAndrix on Hacking on LessWrong Just Got Easier · 2011-09-04T07:14:53.823Z · LW · GW

You should all attribute this event to my wishing for it earlier today.

Comment by JamesAndrix on Is Rationality Teachable? · 2011-09-01T14:41:40.037Z · LW · GW

Please paraphrase the conclusion in the introduction. This should be something more like an abstract, so I can get an answer with minimal digging.

The opposite end of this spectrum has network news teasers: "Will your children's hyperbolic discounting affect your retirement? Find out at 11."

Comment by JamesAndrix on Consistently Inconsistent · 2011-08-09T08:05:46.627Z · LW · GW

When I saw that, I thought it was going to be an example of a nonsensical question, like "When did you stop beating your wife?".

Comment by JamesAndrix on Rationalist approach to developing Writing skills · 2011-07-17T01:26:56.607Z · LW · GW

I get writer's block, or can't get past a simple explanation of an idea, unless I'm conversing online (usually in some form of debate), in which case I can write pages and pages with no special effort.

Comment by JamesAndrix on The Blue-Minimizing Robot · 2011-07-09T04:18:22.926Z · LW · GW

I generally go with cross-domain optimization power: http://wiki.lesswrong.com/wiki/Optimization_process

Note that an optimization target is not the same thing as a goal, and the process doesn't need to exist within obvious boundaries. Evolution is goalless and disembodied.

If an algorithm is smart because a programmer has encoded everything that needs to be known to solve a problem, great. That probably reduces the potential for error, especially in well-defined environments. But this is not what's going on in translation programs, or even the voting system here (which is based on Reddit's). As systems like this creep up in complexity, their errors and biases become more subtle (especially since we 'fix' them so that they usually work well). If an algorithm happens to be powerful in multiple domains, then the errors themselves might be optimized for something entirely different, and perhaps unrecognizable.

By your definition I would tend to agree that they are not dangerous, so long as their generalized capabilities are below human level (which seems to be the case for everything so far), with some complex caveats. For example, 'non-self-modifying' is likely a false sense of security. If an AI has access to a medium which can be used to do computations, and the AI is good at making algorithms, then it could build a powerful, if not superintelligent, program.

Also, my concern in this thread has never been about the translation algorithm, the tax program, or even the paperclipper. It's about some sub-process which happens to be a powerful optimizer (in a hypothetical situation where we do more AI research on the premise that it is safe if it is in a goalless program).

Comment by JamesAndrix on The Blue-Minimizing Robot · 2011-07-08T21:58:03.423Z · LW · GW

Making it more accurate is not the same as making it more intelligent. The question is: how does making something "more intelligent" change the nature of the inaccuracies? In translation especially there can be a bias without any real inaccuracy.

Goallessness at the level of the program is not what makes translators safe. They are safe because neither they nor any component is intelligent.

Comment by JamesAndrix on The Blue-Minimizing Robot · 2011-07-06T22:04:35.907Z · LW · GW

It seems that the narrative of unfriendly AI is only a risk if an AI were to have a true goal function, and many useful advances in artificial intelligence (defined in the broad sense) carry no risk of this kind.

What does it mean for a program to have intelligence if it does not have a goal (or components that have goals)?

The point of any incremental intelligence increase is to let the program make more choices, and perhaps choices at higher levels of abstraction. Even at low intelligence levels, the AI will only 'do a good job' if the basis of those choices adequately matches the basis we would use to make the same choice (a close match at some level of abstraction below the choice, not at the substrate or in basic algorithms).

Creating 'goal-less' AI still has the machine making more choices for more complex reasons, and allows for non-obvious mismatches between what it does and what we intended it to do.

Yes, you can look at paperclip-manufacturing software and see that it is not a paperclipper, but some component might still be optimizing for something else entirely. We can reject the anthropomorphically obvious goal and there can still be a powerful optimization process that affects the total system, at the expense of both human values and produced paperclips.

Comment by JamesAndrix on The Blue-Minimizing Robot · 2011-07-04T17:25:03.380Z · LW · GW

I suspect Richard would say that the robot's goal is minimizing its perception of blue. That's the PCT perspective on the behavior of biological systems in such scenarios.

This 'minimization' goal would require a brain that is powerful enough to believe that lasers destroy or discolor what they hit.

If this post were read by blue aliens that thrive on laser energy, they'd wonder why we were so confused as to the purpose of an automatic baby feeder.

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 7 · 2011-07-01T21:11:05.674Z · LW · GW

Hypothesis: Quirrell is positioning Harry to be forced to figure out how to dissolve the wards at Hogwarts. (or at least that's the branch of the Xanatos pileup we're on.)

Comment by JamesAndrix on The True Rejection Challenge · 2011-06-25T17:12:16.407Z · LW · GW

I have two reasons not to use your system:

One: If you're committed to doing the action if you yourself can find a way to avoid the problems, then as you come up with such solutions, your instinct to flinch away will declare the list 'not done yet' and add more problems, perhaps ones more unsolvable in style, until the list is an adequate defense against doing the thing.

One way to possibly mitigate this is to try not to think of any solutions until the list is done, and perhaps to put some scope restrictions on the allowable conditions. Despite this, there is another problem:

Two: The sun is too big.

Comment by JamesAndrix on Advice for AI makers · 2011-06-25T08:00:18.149Z · LW · GW

No, not learning. And the 'do nothing else' parts can't be left out.

This shouldn't be a general automatic programming method, just something that goes through the motions of solving this one problem. It should already 'know' whatever principles lead to that solution. The outcome should be obvious to the programmer, and I suspect realistically hand-traceable. My goal is a solid understanding of a toy program exactly one meta-level above Hanoi.

This does seem like something Prolog could do well; if there is already a static program that does this, I'd love to see it.
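As a concreteness check, here is a minimal sketch of what "one meta-level above Hanoi" might look like, in Python rather than the Prolog suggested above; the template and function names are hypothetical illustrations, not an existing program. The meta-program never moves a disk itself; it only emits the source of a program that does, using the recursive decomposition it already 'knows'.

    # Hypothetical sketch: a program one meta-level above Hanoi.
    # It writes out the source of a Hanoi solver instead of solving Hanoi itself.

    HANOI_TEMPLATE = '''\
    def hanoi(n, source, target, spare):
        """Print the moves that transfer n disks from source to target."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)
        print(f"move disk {{n}} from {{source}} to {{target}}")
        hanoi(n - 1, spare, target, source)

    hanoi({n}, "A", "C", "B")
    '''

    def write_hanoi_solver(n, path="hanoi_solver.py"):
        """Emit a standalone script that solves the n-disk puzzle when run."""
        with open(path, "w") as f:
            f.write(HANOI_TEMPLATE.format(n=n))
        return path

    print("wrote", write_hanoi_solver(3), "- run it with python")

Because the meta-level is just filling in a fixed template, the whole thing stays obvious to the programmer and hand-traceable, which seems to be the point of the exercise.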

Comment by JamesAndrix on Drive-less AIs and experimentation · 2011-06-18T01:20:19.561Z · LW · GW

With two differences: CEV tries to correct any mistakes in the initial formulation of the wish (aiming for an attractor), and it doesn't force the designers to specify details like whether making bacteria is OK or not.

It's the difference between painting a specific scene and making an auto-focus camera.

I do currently think it is possible to create a powerful cross-domain optimizer that is not a person and will not create persons or unbox itself or look at our universe or tile the universe with anything or make AI that doesn't comply with this. But I approach this line of thought with extreme caution, and really only to accelerate whatever it takes to get to CEV, because AI can't safely make changes to the real world without some knowledge of human volition, even if it wants to.

What if I missed something that's on the scale of the nonperson predicate? My AI works, creatively paints the apple, but somehow its solution is morally awful. Even staying within pure math could be bad for unforeseen reasons.

Comment by JamesAndrix on How much friendliness is enough? · 2011-06-17T21:06:32.901Z · LW · GW

Minor correction: It may need a hack if it remains unsolved.

Comment by JamesAndrix on Drive-less AIs and experimentation · 2011-06-17T20:34:52.981Z · LW · GW

There seem to be several orders of magnitude of difference between the two solutions for coloring a ball. You should have better predictions than that for what it can do. Obviously you shouldn't run anything remotely capable of engineering bacteria without a much better theory about what it will do.

I suspect "avoiding changing the world" actually has some human-values baked into it.

This seems to be trying to box an AI with its own goal system, which I think puts it in the tricky-wish category.

Comment by JamesAndrix on St. Petersburg Mugging Implies You Have Bounded Utility · 2011-06-08T16:40:04.653Z · LW · GW

I simply must get into the habit of asking for money.

Not doing this is probably my greatest failing.

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-31T22:32:35.039Z · LW · GW

Well, through seeing red, yes ;-)

Through study, no. I think the knowledge postulated is beyond what we currently have, and must include how the algorithm feels from the inside. (edit: Mary does know through study.)

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-31T17:44:17.649Z · LW · GW

I definitely welcome the series, though I have not finished it yet, and will need more time to digest it in any case.

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-29T00:41:25.611Z · LW · GW

If there's a difference in the experience, then there's information about the difference,

The information about the difference is included in Mary's education. That is what was given.

Thus, there's a difference in my state, and thus, something to be surprised about.

Are you surprised all the time? If the change in Mary's mental state is what Mary expected it to be, then there is no surprise.

The word "red" is not equal to red, no matter how precisely you define that word.

How do you know?

If "red" is truly a material subject -- something that exists only in the form of a certain set of neurons firing (or analagous physical processes)

Isn't a mind that knows every fact about a process itself an analogous physical process?

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-28T20:04:03.012Z · LW · GW

No matter how much information is on the menu, it's not going to make you feel full.

"Feeling full" and "seeing red" also jumbles up the question. It is not "would she see red"

In which case, we're using different definitions of what it means to know what something is like. In mine, knowing what something is "like" is not the same as actually experiencing it -- which means there is room to be surprised, no matter how much specificity there is.

But isn't your "knowing what something is like" based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of, and expectation of, qualia.

This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it.

Nor is the question "can she imagine red".

The question is: does she get new information upon seeing red (something to surprise her)? To phrase it slightly differently: if you showed her a green apple, would she be fooled?

This is a matter-of-fact question about a hypothetical agent looking at its own algorithms.

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-28T17:07:36.570Z · LW · GW

However, materialism does not require us to believe that looking at a menu can make you feel full.

Looking at a menu is a rather pale imitation of the level of knowledge given to Mary.

In order for her to know what red actually feels like, she'd need to be able to create the experience -- i.e., have a neural architecture that lets her go, "ah, so it's that neuron that does 'red'... let me go ahead and trigger that."

That is the conclusion you're asserting. I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire. She does not say "oh wow"; she says "ha, nailed it."

If she has enough memory to store a physical simulation of the relevant parts of her brain, and can trigger that simulation's red neurons, and can understand the chains of causality, then she already knows what red will look like when she does see it.

Now you might say that in that case Mary has already experienced red, just using a different part of her brain, but I think it's an automatic consequence of knowing all the physical facts.

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-28T06:12:28.574Z · LW · GW

I think the idea that "what it actually feels like" is knowledge beyond "every physical fact on various levels" is just asserting the conclusion.

I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We've never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we're also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.

I think the Chinese Room has a similar problem: a human is not a reliable substrate for computation. We instinctively know that a human can choose to ignore the scribbles on paper, so the Chinese-speaking entity never happens.

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-28T05:00:45.322Z · LW · GW

What is it that she's surprised about?

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-28T02:32:25.465Z · LW · GW

From what you quoted I thought you were arguing that there was something for her to be surprised about.

Comment by JamesAndrix on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-27T03:16:11.714Z · LW · GW

Not being able to make the neurons fire doesn't mean you don't know how it would feel if they did.

I hate this whole scenario for this kind of "this knowledge is a given, but wait, no it isn't" thinking.

Whether or not all the physical knowledge is enough to know qualia is the question, and as such it should not be answered in the conclusion of a hypothetical story and then taken as evidence.

Comment by JamesAndrix on Designing Rationalist Projects · 2011-05-14T02:30:59.793Z · LW · GW

There is a definition-of-terms confusion here between "inherently evil" and "processing data absolutely wrong".

I also get the impression that much of Europe is an extremely secular society that does OK.

There is confusion for individuals transitioning, and perhaps specific questions that need to be dealt with by societies that are transitioning. But in general there is already a good, tested answer for what religion can be replaced by. Getting that information to the people who may transition is trickier.

Comment by JamesAndrix on The 5-Second Level · 2011-05-09T15:42:20.672Z · LW · GW

Rationalists should also strive to be precise, but you should not try to express precisely what time it was that you stopped beating your wife.

Much of rationality is choosing what to think about. We've seen this before in the form of righting a wrong question, correcting logical fallacies (as above), using one method of reasoning about probabilities in favor of another, and culling non-productive search paths (which might be the most general form here).

The proper meta-rule is not 'jump past warning signs'. I'm not yet ready to propose a good phrasing of the proper rule.

Comment by JamesAndrix on Offense versus harm minimization · 2011-04-17T06:35:40.208Z · LW · GW

Are the effects of the alien practical joke curable?

Comment by JamesAndrix on Toronto Less Wrong Meetup - Thursday Feb 17 · 2011-02-14T16:17:51.252Z · LW · GW

This Buffalonian should be able to go in the future, but the more notice the better.

james.andrix@gmail.com

Comment by JamesAndrix on Secure Your Beliefs · 2011-02-12T17:07:51.554Z · LW · GW

So when is your book on rationality coming out?

Comment by JamesAndrix on Harry Potter and the Methods of Rationality discussion thread, part 7 · 2011-01-28T07:06:05.676Z · LW · GW

Would it also be moral to genetically engineer a human so that it becomes suicidal as a teenager?

Comment by JamesAndrix on Theists are wrong; is theism? · 2011-01-20T12:54:34.992Z · LW · GW

Imagine two Universes, both containing intelligent beings simulating the other Universe.

I don't see how that can really happen. I've never heard of a non-hierarchical simulation hypothesis.

Comment by JamesAndrix on Anti-akrasia remote monitoring experiment · 2011-01-18T02:48:31.128Z · LW · GW

http://www.safefamilies.org/pastorindividualstep2.php

http://www.safefamilies.org/SoftwareTools.php#accountability

"Accountability Partner" is their key phrase I should have mentioned.