Most of those who haven't ever been on Less Wrong will provide data for that distinction. It isn't noise.
This is a diaspora survey, for the pan-rationalist community.
I have taken the survey. I did not treat the metaphysical probabilities as though I had a measure over them, because I don't.
I guess the rejection is based more on the fact that his message seems to violate deep-seated values on your end about how reality should work than on his work being bullshit.
Lumifer rejects him because he thinks Simon Anhold simply isn't a serious person, just a hippie.
How about you let Lumifer speak for Lumifer's rejection, rather than tilting at straw windmills?
The equivocation on 'created' in those four points is enough reason to ignore them entirely.
I'm curious why this was downvoted. Was it the last statement, which has political context?
Are there any egoist arguments for (EA) aid in Africa? Does investment in Africa's stability and economic performance offer any instrumental benefit to a US citizen that does not care about the welfare of Africans terminally?
We don't need to describe the scenarios in precise physical terms. All we need to do is describe them in terms of the agent's epistemology, with the same sort of causal surgery as described in Eliezer's TDT. Full epistemological control means you can test your AI's decision system.
This is a more specific form of the simulational AI box. The rejections of simulational boxing I've seen rely on treating the AI as a black box, free to act and sense with no observation possible, somehow gaining knowledge of the parent world through inconsistencies and probabilities, and escaping using bugs in its containment program. White-box simulational boxing can completely compromise the AI's apparent reality and actual abilities.
Stagnation is actually a stable condition. It's "yay stability" vs. "boo instability," and "yay growth" vs. "boo stagnation."
(Ve could also be copied, but it would require copying the whole world.)
Why would that be the case? And if it were the case, why would that be a problem?
Resurrect one individual, filling gaps with random quantum noise.
Resurrect all possible individuals with all combinations of noise.
That is a false trichotomy. You're perfectly capable of deciding to resurrect some sparse coverage of the distribution, and those differences are not useless. In addition, "the subject is almost exactly resurrected in one of the universes" is true of both two and three, and you don't have to refer to spooky alternate histories to do it in the first place.
Quals are the GRE, right?
...Okay? One in ten sampled individuals will be gay. You can do that. Does it really matter when you're resurrecting the dead?
Your own proposal is to only sample one, and call the inaccuracy "acausal trade," which isn't even necessary in this case. The AI is missing 100 bits. You're already admitting many-worlds. So the AI can simply draw those 100 bits out of quantum randomness, and in each Everett branch, there will be a different individual. The incorrect ones you could call "acausal travelers," even though you're just wrong. There will still be the "correct" individual, the exact descendant of this reality's instance, in one of the Everett branches. The fact that it is "correct" doesn't even matter, there is only ever "close enough," but the "correct" one is there.
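To put a rough number on that (my own arithmetic, not part of the original exchange): with 100 bits missing, the candidate set has

$$2^{100} \approx 1.27 \times 10^{30}$$

equally likely completions, each carrying prior weight $2^{-100}$, and drawing the bits from quantum randomness realizes every completion in some branch, including the one that exactly matches this reality's instance.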
What's wrong with gaps? This is probabilistic in the first place.
No, that's easy to grasp. I just wonder what the point is. Conservation of resources?
The evidence available about any dead person produces a distribution over human brains, given enough computation. The more evidence there is, the more focused the distribution. Given post-scarcity, the FAI could simply produce many samples from each distribution.
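As a toy illustration of that narrowing (my own sketch, with a single real-valued parameter standing in for a brain, which is of course a drastic simplification): treat each surviving piece of evidence as a noisy observation and watch the posterior tighten as evidence accumulates.

```python
import numpy as np

# Toy model (hypothetical, for illustration only): the "brain" is one unknown
# scalar theta, and each piece of surviving evidence is a noisy observation of
# it. A standard normal-normal conjugate update shows the posterior narrowing
# as more evidence comes in.

rng = np.random.default_rng(0)
theta_true = 1.7       # stand-in for the person being reconstructed
obs_noise = 2.0        # std. dev. of each piece of evidence
prior_mean, prior_var = 0.0, 10.0

for n in (1, 10, 100, 1000):
    evidence = theta_true + obs_noise * rng.standard_normal(n)
    post_var = 1.0 / (1.0 / prior_var + n / obs_noise**2)
    post_mean = post_var * (prior_mean / prior_var + evidence.sum() / obs_noise**2)
    print(f"{n:4d} pieces of evidence -> posterior mean {post_mean:.2f}, std {post_var**0.5:.3f}")
```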
This is certainly a clever way of producing mind-neighbors. I find problems with these sorts of schemes for resurrection, though. Socioeconomic privilege, tragedy of the commons, and data rot, to be precise.
You're confusing the intuitive notion of "simple" with "low Kolmogorov complexity."
I am using the word "simple" to refer to "low K-complexity." That is the context of this discussion.
It does if you look at the rest of my argument.
The rest of your argument is fundamentally misinformed.
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe, "Is this an agent?"
Simulating the universe to identify an agent is the exact opposite of a short referent. And even if simulating a universe were tractable, it would not provide a low-complexity way of identifying agents in the first place. Once you're done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are left with the concept of universe-filling AGIs, not agents.
You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well "reducing entropy" as a concept does have low Kolmogorov complexity.
It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. "For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible." That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that such a recognizer is both possible and simple, you have to give a simple algorithm, and your universe-filling algorithm is not one.
It reminded me of reading Simpsons comics, is all.
Krusty's Komplexity Kalkulator!
Doesn't that undermine the premise of the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make?
Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.
But the AGI can. Agentiness is going to be a very important concept for it. Thus it's likely to have a short referent to it.
What do you mean by "short referent"? Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that "agentiness" is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition of an agent detector, and show that it doesn't fail on any conceivable edge cases.
Saying that it's important doesn't mean it's simple. "For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity."
An AGI has low Kolmogorov complexity since it can be specified as "run this low Kolmogorov complexity universe for a sufficiently long period of time".
That's a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on them can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not the same as specifying those intelligences, or intelligence in general.
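A toy version of that distinction (my own illustration, not anything from the exchange above): Conway's Game of Life has a transition rule that fits in a few lines of code, yet pinning down one particular 64x64 board still takes up to 4096 bits, and nothing about the rule's simplicity shortens that description.

```python
import numpy as np

# Conway's Game of Life as a stand-in for "simple laws, complex states".
# The transition rule below is tiny (low Kolmogorov complexity); an arbitrary
# 64x64 configuration still needs up to 64*64 = 4096 bits to specify.

def step(board: np.ndarray) -> np.ndarray:
    """One Game of Life update on a toroidal grid of 0/1 cells."""
    neighbors = sum(
        np.roll(np.roll(board, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    alive = board.astype(bool)
    return ((neighbors == 3) | (alive & (neighbors == 2))).astype(np.uint8)

rng = np.random.default_rng(0)
board = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)  # 4096 bits of state
for _ in range(100):
    board = step(board)
print("live cells after 100 steps:", int(board.sum()))
```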
Also, for the AGI to be successful it is going to have to be good at detecting agents so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.
The convenience of some hypothetical property of intelligence does not act as a proof of that property. Please note that we are in a highly specific environment, where humans are the only sapients around and animals are the only immediately recognizable agents. There are sci-fi stories about your "necessary" condition being exactly false, in which humans do not recognize some intelligence because it is not structured in a way humans are capable of recognizing.
I could stand to meet a real-life human. I've heard they exist, but I've had such a hard time finding one!
I don't think mind designs are dependent on their underlying physics. The physics is a substrate, and as long as it provides general computation, intelligence is achievable in some configuration of that physics. The specifics of those designs may depend on how those worlds function, the way jellyfish-like minds may differ from bird-like minds, but not the common elements of induction, analysis of inputs, and selection of outputs. That would mean the simplest a priori mind would have to be computed by the simplest provision of general computation, however. An infinitely divine Turing Machine, if you will.
That doesn't mean a mind is more basic than physics, though. That's an entirely separate issue. I haven't ever seen a coherent model of God in the first place, so I couldn't begin to judge the complexity of its unproposed existence. If God is a mind, then what substrate does it rest on?
Two or three people confused about K-complexity doesn't herald the death of LW.
The non-triviality arises from technical considerations
The laws of physics as we know them are very simple, and we believe that they may actually be even simpler. Meanwhile, a mind existing outside of physics is somehow a more consistent and simple explanation than humans having hardware in the brain that promotes hypotheses involving human-like agents behind everything, which explains away every religion ever? Minds are not simpler than physics. This is not a technical controversy.
I especially like the question "Is it ethical to steal truffle mushrooms and champagne to feed your family?" That's an intuitive concept fairly voiced. Calculating the damage to the trolley is somewhat ridiculous, however.
The Open Thread is for all posts that don't necessitate their own thread. The media thread is for recommendations on entertainment. I don't see why comments should be necessary to bring a paper to LW's attention, especially in the Open Thread, and others clearly disagree with your opinion on this matter.
I believe in this instance he was reasoning alethically. De facto you are not necessarily correct.
I'm not claiming this is conclusive evidence of danger; I'm just concerned.
Thanks. I only saw this press release and I was concerned that there might be danger.
As an example of number 10, consider the Optimalverse. The friendliest death of self-determination I ever did see.
Unfortunately, I'm not quite sure of the point of this post, considering you're posting a reply to news articles on a forum filled with people who already understand the mistakes those articles made. Perhaps it's meant as a repository of rebuttals to common misconceptions that will be posited in the future?
This is no such advance for AI research. It only provides the possibility of typechecking your AI, which is neither necessary nor sufficient for self-optimizing AI programs.
She is preserving paperclip-valuing intelligence by protecting herself from the potential threat of non-paperclip-valuing intelligent life, and can develop interstellar travel herself.
It's a lonely job, but someone has to make the maximum possible amount of paperclips. Someone, and only one. Anyone else would be a waste of paperclip-material.
It is not irrational, because preferences are arational. Now, Gal might be mistaken about her preferences, but she is the current best source of evidence on her preferences, so I don't see how her actions in this case are irrational either. She's an engine specialist and a mechanic, so it's perfectly understandable that she would want something she knew how to maintain and repair.
That doesn't get rid of randomness, it pushes it into the observer.
You seem to be opposed to the nature of your species. This can't be very good for your self-esteem.
What use is such an AI? You can't even use the behavior of its utility function to predict a real-world agent because it would have such a different ontology. Not to mention the fact that GoL boards of the complexity needed for anything interesting would be massively intractable.
Hardcoding has nothing to do with it.
Well, actually, I think it could. Given that we want the AI to function as a problem solver for the real world, it would necessarily have to learn about aspects of the real world, including human behavior, in order to create solutions that account for everything in the real world that might throw off the accuracy of a lesser model.
Thanks for the explanation. Do you have any alternatives?
I disagree with John Rawls's veil-of-ignorance theory and even find it borderline disgusting (he is just assuming everybody is a risk-averse coward)
Um, what? What's wrong with risk-aversion? And what's wrong with the Veil of Ignorance? How does that assumption make the concept disgusting?
I think usernames would have to be anonymized, as well.
This was very informative, thank you.
Not so. I'm trying to figure out how to find the maximum entropy distribution for simple types, and recursively defined types are part of that. This doesn't only apply to strings; it applies to sequences of all sorts, and I'm attempting to allow for the possibility of error correction in these techniques. What is the point of doing statistics on coin flips? Once you learn something about the flip result, you basically just know it.
The result of some built-in string function length(s), which, depending on the implementation of the string type, either returns the header integer stating the size or counts the length until the terminator symbol and returns that integer.
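For concreteness, a minimal sketch of the two conventions just mentioned (hypothetical helper functions, not anyone's actual implementation):

```python
# Two common string representations: length-prefixed vs. terminator-ended.

TERMINATOR = 0x00

def length_prefixed(data: bytes) -> int:
    """First byte is a header integer stating the size of the payload."""
    return data[0]

def terminator_ended(data: bytes) -> int:
    """Count bytes until the terminator symbol is reached."""
    n = 0
    while data[n] != TERMINATOR:
        n += 1
    return n

print(length_prefixed(bytes([5]) + b"hello"))   # 5
print(terminator_ended(b"hello" + bytes([0])))  # 5
```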
If this post doesn't get answered, I'll repost in the next open thread. A test to see if more frequent threads are actually necessary.
I'm trying to make a prior probability mass distribution for the length of a binary string, and then generalize to strings over alphabets with any number of symbols. I'm struggling to find one with the right properties under the log-odds transformation that still obeys the laws of probability. The one I like the most is P(len = x) = 1/(x+2): under log-odds it requires log2(x+1) bits of evidence, roughly the number of bits in x written in binary, for a string of length x to reach even odds. For a length of 15, that's the 4 bits of 1111, so its cost is 4 bits.
The problem is that the sum of 1/(x+2) over all lengths diverges, so it can't be normalized, making it an improper prior. Are there restrictions under which I can still use this improper prior, or is there a way to find a proper prior with similarly desirable qualities?
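One candidate, offered as a suggestion of my own rather than anything from the thread: P(len = x) = 1/((x+1)(x+2)). The sum telescopes to exactly 1, so it is a proper prior, and the evidence needed to reach even odds still grows only logarithmically, roughly 2*log2(x) bits instead of log2(x+1). A quick numerical check:

```python
import math

# Hypothetical alternative prior: P(len = x) = 1/((x+1)(x+2)).
# The sum telescopes: sum over x < N equals 1 - 1/(N+1) -> 1, so it is proper,
# and the bits of evidence needed to reach even odds grow like 2*log2(x).

def p(x: int) -> float:
    return 1.0 / ((x + 1) * (x + 2))

partial = sum(p(x) for x in range(10**6))
print(f"partial sum up to 10^6: {partial:.6f}")  # ~0.999999

for x in (1, 15, 255, 4095):
    odds = p(x) / (1.0 - p(x))
    print(f"len {x:5d}: {math.log2(1.0 / odds):5.2f} bits to even odds")
```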
Or 3141 May 9.
The dark side is Voldemort's thought patterns. In other words, Voldemort is constantly in the dark side.
Ah, so about as large as it takes for a fanfic to be good. :P