Anecdote: I think I've had better responses summarizing LW articles in a few paragraphs without linking, than linking to them with short explanations.
It does take a lot to cross those inferential distances, but I don't think quite that much.
To be fair, my discussions may not cover a whole sequence; I have the opportunity to pick out what is needed in a particular instance.
Sucks less sucks less.
One trouble is that that is essentially tacking mind enslavement onto the WBE proposition. Nobody wants that. Uploads wouldn't volunteer. Even if a customer paid enough of a premium for an employee with loyalty modifications, that only rolls us back to relying on the good intent of the customer.
This comes down to the exact same arms race between 'friendly' and 'just do it', with extra ethical and reverse-engineering hurdles. (I think we're pretty much stuck with testing and filtering based on behavior, and some modifications will only be testable after uploading is available.)
Mind you I'm not saying don't do work on this, I'm saying not much work will be done on it.
I think we're going to get WBEs before AGI.
If we viewed this as a form of heuristic AI, it follows from your argument that we should look for ways to ensure the friendliness of WBEs. (Ignoring the ethical issues here.)
Now, maybe this is because most real approaches would consider ethical issues, but it seems like figuring out how to modify a human brain so that it doesn't act against your interests, even if it is powerful, and without hampering its intellect, is a big 'intractable' problem.
I suspect no one is working on it and no one is going to, even though we have working models of these intellects today. A new design might be easier to work with, but it will still be a lot harder than it will seem to be worth, as long as the AIs are doing near-human-level work.
Aim for an AI design that's easy enough to do safety work on that people actually will work on safety... and it will start to look a lot like SIAI ideas.
Moody set it as a condition for being able to speak as an equal.
There is some time resolution.
Albus said heavily, "A person who looked like Madam McJorgenson told us that a single Legilimens had lightly touched Miss Granger's mind some months ago. That is from January, Harry, when I communicated with Miss Granger about the matter of a certain Dementor. That was expected; but what I did not expect was the rest of what Sophie found."
"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
Tell me from China.
That would make (human[s] + predictor) into an optimization process that was powerful beyond the human[s]' ability to steer. You might see a nice-looking prediction, but you won't understand the value of the details, or the value of the means used to achieve it. (Which would be called trade-offs in a goal-directed mind, but nothing weighs them here.)
It also won't be reliable to look for models in which you are predicted not to hit the Emergency Regret Button, as that may just find models in which your regret evaluator is modified.
For example, a hat and a cloak may be a uniform in a secret society, to be worn in special circumstances.
I very much like the idea of this being a standard spell, as that provides further cover for your identity.
The Guy Fawkes mask is the modern equivalent.
Almost any human existential risk is also a paperclip risk.
Foundations of Neuroeconomic Analysis
Without getting into the legal or moral issues involved, there is a """library""" 'assigned to the island state of Niue', it's pretty damned good, and that's all I have to say about that.
and secondly, a medievalesque public school is such a stereotypically British environment that one expects the language to match.
During the Revolution, Salem witches were considerably more adept at battle magic than those taught at the institution that had been sucking magical knowledge out of the world for the previous 600 years. They also had the advantage of being able to train in the open since most Puritans were self-obliviating.
It wasn't until the 1890s that the school returned fully to Ministry control, after the retirement of Headmaster Teetonka. Over a century of American control left its mark on the language and culture of Wizarding Britain; unfortunately, the basis of powerful aboriginal magics remains restricted by edict to the students of the Salem Institute, El Dorado, or the University of Phoenix®.
So does Dumbledore know that Snape is putting the Sorcerer's Stone back into Gringotts?
By the time year one ends, she and Harry will be facing serious, life-threatening issues side by side.
Absolutely not.
Draco will be in between them.
A cheap talk round would favor CliqueBots.
That O only took off once other variants were eliminated suggests a rock-paper-scissors relationship. But I suspect O only lost early on because parts of its ruleset are too vulnerable to the wider bot population. So which rules was O following most when it lost against early opponents, and which rules did it use to beat I and C4?
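On the cheap-talk point, here's a rough sketch of how a CliqueBot could exploit such a round. The passphrase, the bot interface, and the round structure below are my own assumptions for illustration, not the tournament's actual protocol:

```python
# Minimal CliqueBot sketch: use the cheap-talk round to recognize clique
# members, then cooperate only with bots that echoed the passphrase.
# All names and the interface here are hypothetical.

PASSPHRASE = "clique-7f3a"

class CliqueBot:
    def __init__(self):
        self.partner_in_clique = False

    def cheap_talk(self, opponent_message=None):
        # Send the passphrase; note whether the opponent sent it too.
        if opponent_message is not None:
            self.partner_in_clique = (opponent_message == PASSPHRASE)
        return PASSPHRASE

    def move(self, history):
        # 'C' = cooperate, 'D' = defect.
        return "C" if self.partner_in_clique else "D"
```

Against anything outside the clique it simply defects, which is why adding a cheap-talk round would shift the balance toward this kind of strategy.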
Is there an easy way to change the logo/name?
It would be good to have a more generic default name and header; as this takes off, there will be half-finished sites turning up in Google.
I will try to get a torrent up shortly (I've never created a torrent before).
--Posted from the lesswrong VM
Edit: am I doing this right? Will seed with fiber.
You should all attribute this event to my wishing for it earlier today.
Please paraphrase the conclusion in the introduction. This should be something more like an abstract, so I can get an answer with minimal digging.
The opposite end of this spectrum has network news teasers: "Will your children's hyperbolic discounting affect your retirement? Find out at 11."
When I saw that, I thought it was going to be an example of a nonsensical question, like "When did you stop beating your wife?".
I get writer's block, or can't get past a simple explanation of an idea, unless I'm conversing online (usually some form of debate), in which case I can write pages and pages with no special effort.
I generally go with cross-domain optimization power. http://wiki.lesswrong.com/wiki/Optimization_process Note that an optimization target is not the same thing as a goal, and the process doesn't need to exist within obvious boundaries. Evolution is goalless and disembodied.
If an algorithm is smart because a programmer has encoded everything that needs to be known to solve a problem, great. That probably reduces potential for error, especially in well-defined environments. This is not what's going on in translation programs, or even the voting system here (based on Reddit). As systems like this creep up in complexity, their errors and biases become more subtle (especially since we 'fix' them so that they usually work well). If an algorithm happens to be powerful in multiple domains, then the errors themselves might be optimized for something entirely different, and perhaps unrecognizable.
By your definition I would tend to agree that they are not dangerous, so long as their generalized capabilities are below human level (which seems to be the case for everything so far), with some complex caveats. For example, 'non-self-modifying' is likely a false sense of security. If an AI has access to a medium which can be used to do computations, and the AI is good at making algorithms, then it could build a powerful, if not superintelligent, program.
Also, my concern in this thread has never been about the translation algorithm, the tax program, or even the paperclipper. It's about some sub-process which happens to be a powerful optimizer (in a hypothetical situation where we do more AI research on the premise that it is safe if it is in a goalless program).
Making it more accurate is not the same as making it more intelligent. The question is: How does making something "more intelligent" change the nature of the inaccuracies? In translation especially, there can be a bias without any real inaccuracy.
Goallessness at the level of the program is not what makes translators safe. They are safe because neither they nor any component is intelligent.
It seems that the narrative of unfriendly AI is only a risk if an AI were to have a true goal function, and many useful advances in artificial intelligence (defined in the broad sense) carry no risk of this kind.
What does it mean for a program to have intelligence if it does not have a goal? (or have components that have goals)
The point of any incremental intelligence increase is to let the program make more choices, and perhaps choices at higher levels of abstraction. Even at low intelligence levels, the AI will only 'do a good job' if the basis of those choices adequately matches the basis we would use to make the same choice. (a close match at some level of abstraction below the choice, not the substrate and not basic algorithms)
Creating 'goal-less' AI still has the machine making more choices for more complex reasons, and allows for non-obvious mismatches between what it does and what we intended it to do.
Yes, you can look at paperclip-manufacturing software and see that it is not a paper-clipper, but some component might still be optimizing for something else entirely. We can reject the anthropomorphically obvious goal and there can still be a powerful optimization process that affects the total system, at the expense of both human values and produced paperclips.
I suspect Richard would say that the robot's goal is minimizing its perception of blue. That's the PCT perspective on the behavior of biological systems in such scenarios.
This 'minimization' goal would require a brain that is powerful enough to believe that lasers destroy or discolor what they hit.
If this post were read by blue aliens that thrive on laser energy, they'd wonder why we were so confused as to the purpose of an automatic baby feeder.
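To make the PCT point above concrete, here is a minimal sketch (the sensor and actuator names are my own assumptions) of the kind of loop the robot could be running: it emits laser power in proportion to currently perceived blue, and nothing in it models the laser reducing future blue perception, which is what a genuine 'minimization' goal would require.

```python
# Sketch of a blue-triggered control loop in the PCT style.
# The names are illustrative; the point is that no part of the loop
# represents lasers destroying or discoloring anything.

def control_step(perceived_blue, reference=0.0, gain=1.0):
    """Return laser intensity proportional to the error between perception and reference."""
    error = perceived_blue - reference
    return max(0.0, gain * error)  # fire only when perceived blue exceeds the reference

# Example readings: the loop "acts against blue" with no model of consequences.
for reading in [0.0, 0.4, 0.9, 0.2]:
    print(control_step(reading))
```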
Hypothesis: Quirrell is positioning Harry to be forced to figure out how to dissolve the wards at Hogwarts. (or at least that's the branch of the Xanatos pileup we're on.)
I have two reasons not to use your system:
One: If you're committed to doing the action if you yourself can find a way to avoid the problems, then as you come to such solutions your instinct to flinch away will declare the list 'not done yet' and add more problems, and perhaps problems more unsolvable in style, until the list is an adequate defense against doing the thing.
One way to possibly mitigate this is to try not to think of any solutions until the list is done, and perhaps to put some scope restrictions on the allowable conditions. Despite this, there is another problem:
Two: The sun is too big.
No, not learning. And the 'do nothing else' parts can't be left out.
This shouldn't be a general automatic programming method, just something that goes through the motions of solving this one problem. It should already 'know' whatever principles lead to that solution. The outcome should be obvious to the programmer, and I suspect realistically hand-traceable. My goal is a solid understanding of a toy program exactly one meta-level above Hanoi.
This does seem like something Prolog could do well; if there is already a static program that does this, I'd love to see it.
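For concreteness, here is one way I picture 'one meta-level above Hanoi': a small static program whose output is itself a Hanoi-solving program rather than the moves. This is just my own sketch in Python (the names are made up), not an existing implementation:

```python
# Toy meta-level program: instead of solving Towers of Hanoi, it emits the
# source code of a program that solves it. Illustrative only.

def generate_hanoi_solver(n_disks):
    """Return Python source for a program that prints the moves for n_disks."""
    return f'''
def hanoi(n, src, dst, spare):
    if n == 0:
        return
    hanoi(n - 1, src, spare, dst)
    print(f"move disk {{n}} from {{src}} to {{dst}}")
    hanoi(n - 1, spare, dst, src)

hanoi({n_disks}, "A", "C", "B")
'''

if __name__ == "__main__":
    solver_source = generate_hanoi_solver(3)
    print(solver_source)  # the generated solver
    exec(solver_source)   # running it prints the 7 moves for 3 disks
```

Everything the generated program 'knows' is already baked into the generator, which I think matches the 'goes through the motions' constraint.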
With two differences: CEV tries to correct any mistakes in the initial formulation of the wish (aiming for an attractor), and it doesn't force the designers to specify details like whether making bacteria is ok or not-ok.
It's the difference between painting a painting of a specific scene, and making an auto-focus camera.
I do currently think it is possible to create a powerful cross-domain optimizer that is not a person and will not create persons or unbox itself or look at our universe or tile the universe with anything or make AI that doesn't comply with this. But I approach this line of thought with extreme caution, and really only to accelerate whatever it takes to get to CEV, because AI can't safely make changes to the real world without some knowledge of human volition, even if it wants to.
What if I missed something that's on the scale of the nonperson predicate? My AI works, creatively paints the apple, but somehow its solution is morally awful. Even staying within pure math could be bad for unforeseen reasons.
Minor correction: It may need a hack if it remains unsolved.
There seems to be a difference of several orders of magnitude between the two solutions for coloring a ball. You should have better predictions than that for what it can do. Obviously you shouldn't run anything remotely capable of engineering bacteria without a much better theory about what it will do.
I suspect "avoiding changing the world" actually has some human-values baked into it.
This seems to be trying to box an AI with its own goal system, which I think puts it in the tricky-wish category.
I simply must get into the habit of asking for money.
Not doing this is probably my greatest failing.
Well, through seeing red, yes ;-)
Through study, no. I think the knowledge postulated is beyond what we currently have, and must include how the algorithm feels from the inside. (edit: Mary does know through study.)
I definitely welcome the series, though I have not finished it yet, and will need more time to digest it in any case.
If there's a difference in the experience, then there's information about the difference,
The information about the difference is included in Mary's education. That is what was given.
Thus, there's a difference in my state, and thus, something to be surprised about.
Are you surprised all the time? If the change in Mary's mental state is what Mary expected it to be, then there is no surprise.
The word "red" is not equal to red, no matter how precisely you define that word.
How do you know?
If "red" is truly a material subject -- something that exists only in the form of a certain set of neurons firing (or analagous physical processes)
Isn't a mind that knows every fact about a process itself an analogous physical process?
No matter how much information is on the menu, it's not going to make you feel full.
"Feeling full" and "seeing red" also jumbles up the question. It is not "would she see red"
In which case, we're using different definitions of what it means to know what something is like. In mine, knowing what something is "like" is not the same as actually experiencing it -- which means there is room to be surprised, no matter how much specificity there is.
But isn't your "knowing what something is like" based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it.
Nor is the question "can she imagine red".
The question is: Does she get new information upon seeing red? (something to surprise her.) To phrase it slightly differently: if you showed her a green apple, would she be fooled?
This is a matter-of-fact question about a hypothetical agent looking at its own algorithms.
However, materialism does not require us to believe that looking at a menu can make you feel full.
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
In order for her to know what red actually feels like, she'd need to be able to create the experience -- i.e., have a neural architecture that lets her go, "ah, so it's that neuron that does 'red'... let me go ahead and trigger that."
That is the conclusion you're asserting. I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire. She does not say "oh wow"; she says "ha, nailed it."
If she has enough memory to store a physical simulation of the relevant parts of her brain, and can trigger that simulation's red neurons, and can understand the chains of causality, then she already knows what red will look like when she does see it.
Now you might say that in that case Mary has already experienced red, just using a different part of her brain, but I think it's an automatic consequence of knowing all the physical facts.
I think the idea that "what it actually feels like" is knowledge beyond "every physical fact on various levels" is just asserting the conclusion.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We've never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we're also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
I think the Chinese room has a similar problem: a human is not a reliable substrate for computation. We instinctively know that a human can choose to ignore the scribbles on paper, so the Chinese-speaking entity never happens.
What is it that she's surprised about?
From what you quoted I thought you were arguing that there was something for her to be surprised about.
Not being able to make the neurons fire doesn't mean you don't know how it would feel if they did.
I hate this whole scenario for this kind of "this knowledge is a given, but wait, no it is not" thinking.
Whether or not all the physical knowledge is enough to know qualia is the question, and as such it should not be answered in the conclusion of a hypothetical story and then taken as evidence.
There is a definition of terms confusion here between "inherently evil" and "processing data absolutely wrong".
I also get the impression that much of Europe is an extremely secular society that does OK.
There is confusion for individuals transitioning and perhaps specific questions that need to be dealt with by societies that are transitioning. But in general there is already a good tested answer for what religion can be replaced by. Getting that information to the people who may transition is trickier.
Rationalists should also strive to be precise, but you should not try to express precisely what time it was that you stopped beating your wife.
Much of rationality is choosing what to think about. We've seen this before in the form of righting a wrong question, correcting logical fallacies (as above), using one method to reason about probabilities in favor of another, and culling non-productive search paths (which might be the most general form here).
The proper meta-rule is not 'jump past warning signs'. I'm not yet ready to propose a good phrasing of the proper rule.
Are the effects of the alien practical joke curable?
This Buffalonian should be able to go in the future, but the more notice the better.
james.andrix@gmail.com
So when is your book on rationality coming out?
Would it also be moral to genetically engineer a human so that it becomes suicidal as a teenager?
Imagine two Universes, both containing intelligent beings simulating the other Universe.
I don't see how that can really happen. I've never heard of a non-hierarchical simulation hypothesis.
http://www.safefamilies.org/pastorindividualstep2.php
http://www.safefamilies.org/SoftwareTools.php#accountability
"Accountability Partner" is their key phrase I should have mentioned.