Comments
Robin, or anyone who agrees with Robin:
What evidence can you imagine would convince you that AGI would go FOOM?
I don't think it being unfalsifiable is a problem. I think this is more of a definition than a derivation. Morality is a fuzzy concept that we have intuitions about, and we like to formalize these sorts of things into definitions. This can't be disproven any more than the definition of a triangle can be disproven.
What needs to be done instead is show the definition to be incoherent or that it doesn't match our intuition.
Can you explain why that's a misconception? Or at least point me to a source that explains it?
I've started working with neural networks lately and I don't know too much yet, but the idea that they recreate the generative process behind a system, at least implicitly, seems almost obvious. If I train a neural network on a simple linear function, the weights on the network will probably change to reflect the coefficients of that function. Does this not generalize?
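For what it's worth, here is a minimal sketch of that intuition (my own illustration, not from the original discussion): fit a single linear "neuron" to data generated by y = 3x + 2 using plain gradient descent, then check that the learned weight and bias approach the generating coefficients.

```python
# Toy check (my own sketch): train a single linear "neuron" on data from
# y = 3x + 2 and see whether the learned weight and bias converge to the
# coefficients of the generating function.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x + 2.0                      # the generative process: w = 3, b = 2

w, b, lr = 0.0, 0.0, 0.1               # start from zero, plain gradient descent
for _ in range(2000):
    err = (w * x + b) - y
    w -= lr * np.mean(err * x)         # gradient of mean squared error w.r.t. w
    b -= lr * np.mean(err)             # gradient of mean squared error w.r.t. b

print(round(w, 3), round(b, 3))        # should print values close to 3.0 and 2.0
```

Whether this picture carries over to deep, nonlinear networks is exactly the question, but in the linear case the weights really do recover the generative coefficients.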
It fits with the idea of the universe having an orderly underlying structure. The simulation hypothesis is just one way that can be true. Physics being true is another, simpler explanation.
Neural networks may very well turn out to be the easiest way to create a general intelligence, but whether they're the easiest way to create a friendly general intelligence is another question altogether.
Many civilizations may fear AI, but maybe there's a super-complicated but persuasive proof of friendliness that convinces most AI researchers, but has a well-hidden flaw. That's probably a similar thing to what you're saying about unpredictable physics though, and the universe might look the same to us in either case.
Not necessarily all instances. Just enough instances to allow our observations to not be incredibly unlikely. I wouldn't be too surprised if, out of a sample of 100,000 AIs, none of them managed to produce a successful von Neumann probe before crashing. In addition to the previous points, the von Neumann probes would have to leave the solar system fast enough to avoid the AI's "crash radius" of destruction.
Regarding your second point, if it turns out that most organic races can't produce a stable AI, then I doubt an insane AI would be able to make a sane intelligence. Even if it had the knowledge to, its own unstable value system could cause the von Neumann probes to have a really unstable value system too.
It might be the case that the space of self-modifying unstable AI has attractor zones that cause unstable AIs of different designs to converge on similar behaviors, none of which produce von Neumann probes before crashing.
Your last point is an interesting idea though.
That's a good point. Possible solutions:
1. AIs just don't create them in the first place. Most utility functions don't need non-evolving von Neumann probes; instead, the AI itself leads the expansion.
2. AIs crash before creating von Neumann probes. There are lots of destructive technologies an AI could reach before being able to build such probes. An unstable AI that isn't in the attractor zone of self-correcting fooms would probably become more and more unstable with each modification, meaning that the more powerful it becomes, the more likely it is to destroy itself. Von Neumann probes may simply be far beyond this point.
3. Any von Neumann probes that could successfully colonize the universe would have to be intelligent enough to risk falling into the same trap as their parent AI.
It would only take one exception, but the second and third possibilities are probably strong enough to handle it. A successful von Neumann probe would be really advanced, while an increasingly insane AI might get ahold of destructive nanotech and nukes and all kinds of things before then.
Infinity is really confusing.
My statement itself isn't something I believe with certainty, but adding that qualifier to everything I say would be a pointless hassle, especially for things I believe with near enough certainty that my mind treats them as certain. The "ALL" is itself part of the statement I believe with near certainty, not a qualifier applied to it. Sorry I didn't make that clearer.
The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.
Suppose I'm destructively uploaded. Let's assume also that my consciousness is destroyed, a new consciousness is created for the upload, and there is no continuity. The upload of me will continue to think what I would've thought, feel what I would've felt, choose what I would've chosen, and generally optimize the world in the way I would've. The only thing it would lack is my "original consciousness", which doesn't seem to have any observable effect in the world. Saying that there's no conscious continuity doesn't seem meaningful. The only actual observation we could make is that the process I tend to label "me" is made of different matter, but who cares?
I think a lot of the confusion about this is treating consciousness as an actual entity separate from the process it's identified with, which somehow fails to transfer over. I think that if consciousness is something worth talking about, then it's a property of that process itself, and is agnostic toward what's running the process.
I expect that most people are biased when it comes to judging how attractive they are. Asking people probably doesn't help too much, since people are likely to be nice, and close friends probably also have a biased view of one's attractiveness. So is there a good way to calibrate your perception of how good you look?
One thing that helped me a lot was doing some soul-searching. It's not so much about finding something to protect as about realizing what I already care about, even if there are some layers of distance between my current feelings and that thing. I think that a lot of that listless feeling of not having something to protect is just being distracted from what we actually care about. I would recommend looking for anything you care about at all, even slightly, and focusing on that feeling.
At least that makes sense and works for me.
There are a lot of ways to be irrational, and if enough people are being irrational in different ways, at least some of them are bound to pay off. Using your example, some of the people with blind idealism may latch onto an idea that they can actually accomplish, but most of them fail. The point of trying to be rational isn't to do everything perfectly, but to systematically increase your chances of succeeding, even though in some cases you might get unlucky.
I think the biggest reason we have to assume that the universe is empty is that the earth hasn't already been colonized.
Ah I see. I was thinking of motte and bailey as something like a fallacy or a singular argument tactic, not a description of a general behavior. The name makes much more sense now. Thank you. Also, you said it's called that "everywhere except the Scottosphere". Could you elaborate on that?
What does the term "doctrine" mean in this context anyway? It's not exactly a belief or anything, just a type of argument. I've seen that it's called that but I don't understand why.
Is this the same thing as the motte and bailey argument?
You cite the language's tendency to borrow foreign terms as a positive thing. Wouldn't that require an inconsistent orthography?
Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.
This is probably true. I think a lot of people feel uncomfortable with the possibility of us living in a simulation, because we'd be in a "less real" universe or we'd be under the complete control of the simulators, or various other complaints. But if such super-Turing machines are possible, then the simulated nature of the universe wouldn't really matter. Unless the simulators intervened to prevent it, we could "escape" by running an infinite simulation of ourselves. It would almost be like entering an ontologically separate reality.
I always thought that the "most civilizations just upload and live in a simulated utopia instead of colonizing the universe" response to the Fermi Paradox was obviously wrong, because it would only take ONE civilization breaking this trend to be visible, and regardless of what the aliens are doing, a galaxy of resources is always useful to have. But I was reading somewhere (I don't remember where) about an interesting idea of a super-Turing computer that could calculate anything, regardless of time constraints and ignoring the halting problem. I think the proposal was to use closed timelike curves or something.
This, of course, seemed very far-fetched, but the implications are fascinating. It would be possible to use such a device to simulate an eternity in a moment. We could upload and have an eternity of eudaimonia, without ever having to worry about running out of resources or the heat death of the universe or alien superintelligences. Even if the computer were to be destroyed an instant later, it wouldn't matter to us. If such a thing were possible, then that would be an obvious solution to the Fermi Paradox.
I strongly suspect that the effectiveness of capitalism as a system of economic organization is proportional to how rational agents participating in it are. I expect that capitalism only optimizes against the general welfare when people in a capitalist society make decisions that go against their own long-term values. The more rational a capitalist society is, the more it begins to resemble an economist's paradise.
Thank you! That's the first in-depth presentation of someone actually benefiting from MBTI that I've ever seen, and it's really interesting. I'll mull over it. I guess the main thing to keep in mind is that other people are different from me.
I've noticed that a lot of my desire to be rational is social. I was raised as the local "smart kid" and continue to feel associated with that identity. I get all the stuff about how rationality should be approached like "I have this thing I care about, and therefore I become rational to protect it," but I just don't feel that way. I'm not sure how I feel about that.
Of the three reasons to be rational that are described, I'm most motivated by the moral reason. This is probably because of the aforementioned identity. I feel very offended at anything I perceive as "irrational" in others, kinda like it's an attack on my tribe. This has negative effects on my social life and causes me to be very arrogant to others. Does anybody have any advice for that?
I'd have to be stronger than the group in order to get more food than the entire group, but depending on their ability to cooperate, I may be able to steal plenty for myself, an amount that would seem tiny compared to the large amount needed for the whole group.
I think the example I chose was a somewhat bad one, though, because the villagers would have a defender's advantage in protecting their food. You can substitute "abstract, uncontrolled resource" for "food" to clarify my point.
an armed group such as occupiers or raiders who kept forcibly taking resources from the native population would be high status among the population, which seems clearly untrue.
Maybe that's still the same kind of status, but it applies to a different domain. Perhaps an effective understanding of status acknowledges that groups overlap and may be formed around different resources. In your example, there is a group (raiders and natives) which forms around literal physical resources, perhaps food. In this group, status is determined by military might, so the raiders have a higher status-as-it-relates-to-food.
Within this group, there is another subgroup of just the villagers, which the raiders are either not a part of or are very low-status in. This group distributes social support or other nice things like that, as the resource to compete over. The group norms dictate that pro-social behavior is how you raise status. So you can be high-status in the group of natives, but low status in the group of (natives and raiders).
In our daily lives, we are all part of many different groups, which are all aligned along different resources. We constantly exchange status in some groups for status in others. For instance, suppose I'm a pretty tough guy, and I'm inserted into the previously discussed status system. I obviously want food, but I'm not stronger than the raiders. I am, however, stronger than most of the villagers, and could take some of the food that the raiders don't scavenge for. If strength was my biggest comparative advantage, and food was all I wanted, then this would definitely be the way to go.
Suppose though that I don't just want food, or I have an even larger comparative advantage in another area, such as basketweaving. I could join the group of the villagers and raise my status within the group. Other villagers would be willing to sacrifice their status in the (raiders and villagers) system in exchange for something they need, like my baskets. This would be me bartering my baskets for food. Here, we can see the primary resource of the (raiders and villagers) group thrown under the bus for other values.
If I raise my status in the group far enough by making good enough baskets, then in terms of the (raiders and villagers) system I will be getting a larger piece of a smaller pie, but it might still be larger than the amount I would get otherwise. Or maybe I'm not even too concerned about the (raiders and villagers) system, and view status within the village group as a terminal value. Or maybe I want to collect villager status to trade for something even more valuable.
tl;dr: There are a lot of different groups optimizing for different things. We can be part of many of these groups at once and trade status between them to further our own goals.
When making Anki cards, is it more effective to ask the meaning of a term, or to ask what term describes a concept?
Would a boxed AI be able to affect the world in any important way using the computer hardware itself? Like, make electrons move in funky patterns or affect air flow with cooling fans? If so, would it be able to do anything significant?
Regarding point 2, while it would be epistemologically risky and borderline dark arts, I think the idea is more about what to emphasize and openly signal, not what to actually believe.
Thank you to those who commented here. It helped!
Hmm it seems obvious in retrospect, but it didn't occur to me that biochemistry would relate to nanotech. I suppose I compartmentalized "biological" from "super-cool high-tech stuff." Thank you very much for that point!
I'm at the point in life where I have to make a lot of choices about my future. I'm considering doing a double major in biochemistry and computer science. I find both of these topics fascinating, but I'm not sure if that's the most effective way to help the world. I am comfortable in my skills as an autodidact, and I find myself interested in comp sci, biochemistry, physics, and mathematics. I believe that regardless of which I actually major in, I could learn any of the others quite well. I have a nagging voice in my head saying that I shouldn't bother learning biochemistry, because it won't be useful in the long term, since everything will be based on nanotech and we will all be uploads. Is that a valid point? Or should I just focus on the world as it is now? And should I study something else, or does biochem have potential to help the world? I find myself very confused about this subject and humbly request any advice.
I guess what I'm saying is that since simpler ones are run more, they are more important. That would be true if every simulation was individually important, but I think one thing about this is that the mathematical entity itself is important, regardless of the number of times it's instantiated. But it still intuitively feels as though there would be more "weight" to the ones run more often. Things that happen in such universes would have more "influence" over reality as a whole.
What I mean though, is that the more complicated universes can't be less significant, because they are contained within this simple universe. All universes would have to be at least as morally significant as this universe, would they not?
Another thought: Wouldn't one of the simplest universes be a universal Turing machine that runs through every possible tape? All other universes would be contained within this universe, making them all "simple."
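The usual way to make "runs through every possible tape" concrete is dovetailing: interleave the execution of every program so each one gets unboundedly many steps, even though some never halt. Here is a hedged sketch of that idea (my own illustration; the "programs" are stand-in counters, not a real universal Turing machine):

```python
# Sketch of dovetailing: give every "program" unboundedly many steps by
# interleaving them, so no single non-halting program blocks the rest.
from itertools import count

def program(n):
    """Stand-in for program number n: an endless computation yielding its steps."""
    for step in count():
        yield (n, step)

def dovetail():
    running = []
    for k in count():
        running.append(program(k))   # stage k: start program k...
        for p in running:            # ...then give every started program one more step
            yield next(p)

gen = dovetail()
print([next(gen) for _ in range(10)])
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]
```

Every program, and so every "universe" it computes, eventually gets arbitrarily far in this single enumeration.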
Instead of saying that you care about simpler universes more, couldn't a similar preference arise out of pure utilitarianism? Simpler universes would be more important because things that happen within them will be more likely to also happen within more complicated universes that end up creating a similar series of states. For every universe, isn't there an infinite number of more complicated universes that end up with the simpler universe existing within part of it?
Ones I've noticed are "lazy" or "stupid" or other words that are used to describe people. Sure, it can be good to have such models so that one can predict the behavior of a person, like "This person isn't likely to do his work." or "She might have trouble understanding that." The thing is, these are often treated as fundamental properties of an ontologically fundamental thing, which the human mind is not.
Why is this person lazy? Do they fall victim to hyperbolic discounting? Is there an ugh field related to their work? Do they not know what to do? Maybe they simply don't have a good reason to work? Why is this person "stupid"? Do they lack the prerequisite knowledge to understand what you're saying? Are they interested in learning it? Do they have any experience with it?
I really would like a chronological order.
I really like the "cute little story," as you call it, but agree that it isn't effective where it is. Maybe include it at the end as a sort of appendix?
Three shall be Peverell's sons and three their devices by which Death shall be defeated.
What is meant by the three sons? Harry, Draco, and someone else? Quirrell perhaps? Using the three Deathly Hallows?
I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I'm beginning to see that things aren't so simple.
I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.
It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.
Well I certainly feel very confused. I generally do feel that way when pondering anything related to morality. The whole concept of what is the right thing to do feels like a complete mess and any attempts to figure it out just seem to add to the mess. Yet I still feel very strongly compelled to understand it. It's hard to resist the urge to just give up and wait until we have a detailed neurological model of a human brain and are able to construct a mathematical model from that which would explain exactly what I am asking when I ask what is right and what the answer is.
I am VERY confused. I suspect that some people can value some things differently, but it seems as though there should be a universal value system among humans as well. The thing that distinguishes "person" from "object" seems to belong to the latter.
My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
I would very much like to attend this, having never attended a meetup before. However, I am currently a minor who lacks transportation ability and have had little luck convincing my guardians to drive me to it. Is there anybody who is attending and is coming from the Birmingham, AL area who would be willing to drive me? I am willing to pay for the service.
I've noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?
Some scientists think they have a method to test the Simulation Argument.
Is it probable for intelligent life to evolve?