Posts

AI as a resolution to the Fermi Paradox. 2016-03-02T20:45:36.383Z
No Value 2012-05-05T22:38:31.741Z

Comments

Comment by Raiden on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-22T20:50:45.582Z · LW · GW

Robin, or anyone who agrees with Robin:

What evidence can you imagine would convince you that AGI would go FOOM?

Comment by Raiden on An attempt in layman's language to explain the metaethics sequence in a single post. · 2016-10-15T09:24:48.827Z · LW · GW

I don't think its being unfalsifiable is a problem. I think this is more of a definition than a derivation. Morality is a fuzzy concept that we have intuitions about, and we like to formalize these sorts of things into definitions. This can't be disproven any more than the definition of a triangle can be disproven.

What needs to be done instead is show the definition to be incoherent or that it doesn't match our intuition.

Comment by Raiden on The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe · 2016-09-14T15:48:04.153Z · LW · GW

Can you explain why that's a misconception? Or at least point me to a source that explains it?

I've started working with neural networks lately and don't know too much yet, but the idea that they recreate the generative process behind a system, at least implicitly, seems almost obvious. If I train a neural network on a simple linear function, the weights of the network will probably change to reflect the coefficients of that function. Does this not generalize?
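A minimal sketch of the linear-function case (an illustration, not part of the original comment; the data, learning rate, and iteration count are arbitrary choices): a single linear unit trained by gradient descent ends up with its weight and bias close to the coefficients of the target function.

```python
import numpy as np

# Toy data from the linear function y = 3x + 2, with a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = 3 * x + 2 + 0.01 * rng.normal(size=(200, 1))

# A "network" consisting of a single linear unit: prediction = w * x + b.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    pred = w * x + b
    err = pred - y
    # Gradient descent on mean squared error (up to a constant factor).
    w -= lr * float(np.mean(err * x))
    b -= lr * float(np.mean(err))

print(w, b)  # should end up close to 3 and 2 respectively
```

Whether this intuition carries over to deep networks trained on richer generative processes is exactly the question the comment is asking.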

Comment by Raiden on "Free will" being a illusion fits pretty well with the simulation hypothesis. · 2016-05-10T11:16:12.995Z · LW · GW

It fits with the idea of the universe having an orderly underlying structure. The simulation hypothesis is just one way that can be true. Physics being true is another, simpler explanation.

Comment by Raiden on AlphaGo versus Lee Sedol · 2016-03-09T17:56:06.671Z · LW · GW

Neural networks may very well turn out to be the easiest way to create a general intelligence, but whether they're the easiest way to create a friendly general intelligence is another question altogether.

Comment by Raiden on AI as a resolution to the Fermi Paradox. · 2016-03-03T17:04:44.292Z · LW · GW

Many civilizations may fear AI, but maybe there's a super-complicated yet persuasive proof of friendliness that convinces most AI researchers while having a well-hidden flaw. That's probably similar to what you're saying about unpredictable physics, though, and the universe might look the same to us in either case.

Comment by Raiden on AI as a resolution to the Fermi Paradox. · 2016-03-02T22:01:19.983Z · LW · GW

Not necessarily all instances. Just enough instances to allow our observations to not be incredibly unlikely. I wouldn't be too surprised if, out of a sample of 100,000 AIs, none of them managed to produce a successful vNP before crashing. In addition to the previous points, the vNP would have to leave the solar system fast enough to avoid the AI's "crash radius" of destruction.

Regarding your second point, if it turns out that most organic races can't produce a stable AI, then I doubt an insane AI would be able to make a sane intelligence. Even if it had the knowledge to, its own unstable value system could cause the vNP to have a really unstable value system too.

It might be the case that the space of self-modifying unstable AIs has attractor zones that cause unstable AIs of different designs to converge on similar behaviors, none of which produce vNPs before crashing.

Your last point is an interesting idea though.

Comment by Raiden on AI as a resolution to the Fermi Paradox. · 2016-03-02T21:23:26.535Z · LW · GW

That's a good point. Possible solutions:

  1. AIs just don't create them in the first place. Most utility functions don't need non-evolving von Neumann probes, and instead the AI itself leads the expansion.

  2. AIs crash before creating von Neumann probes. There are lots of destructive technologies an AI could get to before being able to build such probes. An unstable AI that isn't in the attractor zone of self-correcting fooms would probably become more and more unstable with each modification, meaning that the more powerful it becomes, the more likely it is to destroy itself. Von Neumann probes may simply be far beyond this point.

  3. Any von Neumann probes that could successfully colonize the universe would have to be intelligent enough that they risk falling into the same trap as their parent AI.

It would only take one exception, but the second and third possibilities are probably strong enough to handle it. A successful von Neumann probe would be really advanced, while an increasingly insane AI might get ahold of destructive nanotech and nukes and all kinds of things before then.

Comment by Raiden on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-28T13:46:12.525Z · LW · GW

Infinity is really confusing.

Comment by Raiden on If there was one element of statistical literacy that you could magically implant in every head, what would it be? · 2016-02-24T21:16:40.809Z · LW · GW

My statement itself isn't something I believe with certainty, but adding that qualifier to everything I say would be a pointless hassle, especially for things I believe with near enough certainty that my mind treats them as certain. The "ALL" part is itself part of the statement I believe with near certainty, not a qualifier on the statement I believe. Sorry I didn't make that clearer.

Comment by Raiden on If there was one element of statistical literacy that you could magically implant in every head, what would it be? · 2016-02-22T21:22:33.129Z · LW · GW

The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.
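A tiny worked example of what this looks like in practice (my own illustration; the numbers are arbitrary): observing evidence shifts a belief part of the way along the continuum rather than toggling it between belief and disbelief.

```python
# Prior probability that a coin is biased toward heads (70% heads): 50/50.
p_biased = 0.5

# Observe one flip: heads. Update with Bayes' rule.
p_heads_given_biased = 0.7
p_heads_given_fair = 0.5

posterior = (p_heads_given_biased * p_biased) / (
    p_heads_given_biased * p_biased + p_heads_given_fair * (1 - p_biased)
)
print(posterior)  # ~0.583 -- the belief moved along the continuum, not flipped to "true"
```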

Comment by Raiden on Your transhuman copy is of questionable value to your meat self. · 2016-01-09T07:14:44.578Z · LW · GW

Suppose I'm destructively uploaded. Let's assume also that my consciousness is destroyed, a new consciousness is created for the upload, and there is no continuity. The upload of me will continue to think what I would've thought, feel what I would've felt, choose what I would've chosen, and generally optimize the world in the way I would've. The only thing it would lack is my "original consciousness", which doesn't seem to have any observable effect in the world. Saying that there's no conscious continuity doesn't seem meaningful. The only actual observation we could make is that the process I tend to label "me" is made of different matter, but who cares?

I think a lot of the confusion about this is treating consciousness as an actual entity separate from the process it's identified with, which somehow fails to transfer over. I think that if consciousness is something worth talking about, then it's a property of that process itself, and is agnostic toward what's running the process.

Comment by Raiden on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-09-29T15:58:00.849Z · LW · GW

I expect that most people are biased when it comes to judging how attractive they are. Asking people probably doesn't help too much, since people are likely to be nice, and close friends probably also have a biased view of one's attractiveness. So is there a good way to calibrate your perception of how good you look?

Comment by Raiden on What is the best way to develop a strong sense of having something to protect · 2015-09-11T14:27:06.165Z · LW · GW

One thing that helped me a lot was doing some soul-searching. It's not so much about finding something to protect as it is about realizing what I already care about, even if there are some layers of distance between my current feelings and that thing. I think that a lot of that listless feeling of not having something to protect is just sort of being distracted from what we actually care about. I would recommend just looking for anything you care about at all, even slightly, and focusing on that feeling.

At least that makes sense and works for me.

Comment by Raiden on Crazy Ideas Thread, Aug. 2015 · 2015-08-12T18:54:44.859Z · LW · GW

There are a lot of ways to be irrational, and if enough people are being irrational in different ways, at least some of them are bound to pay off. Using your example, some of the people with blind idealism may latch onto an idea that they can actually accomplish, but most of them fail. The point of trying to be rational isn't to do everything perfectly, but to systematically increase your chances of succeeding, even though in some cases you might get unlucky.

Comment by Raiden on Catastrophe Engines: A possible resolution to the Fermi Paradox · 2015-07-26T07:41:12.867Z · LW · GW

I think the biggest reason we have to assume that the universe is empty is that the earth hasn't already been colonized.

Comment by Raiden on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-07-14T17:37:03.705Z · LW · GW

Ah I see. I was thinking of motte and bailey as something like a fallacy or a singular argument tactic, not a description of a general behavior. The name makes much more sense now. Thank you. Also, you said it's called that "everywhere except the Scottosphere". Could you elaborate on that?

Comment by Raiden on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-07-11T17:23:19.080Z · LW · GW

What does the term "doctrine" mean in this context, anyway? It's not exactly a belief or anything, just a type of argument. I've seen that it's called that, but I don't understand why.

Comment by Raiden on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-07-11T09:22:01.649Z · LW · GW

Is this the same thing as the motte and bailey argument?

Comment by Raiden on Crazy Ideas Thread · 2015-07-11T09:08:38.635Z · LW · GW

You cite the language's tendency to borrow foreign terms as a positive thing. Wouldn't that require an inconsistent orthography?

Comment by Raiden on Crazy Ideas Thread · 2015-07-11T08:57:07.278Z · LW · GW

Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.

This is probably true. I think a lot of people feel uncomfortable with the possibility of us living in a simulation, because we'd be in a "less real" universe or we'd be under the complete control of the simulators, or various other complaints. But if such super-Turing machines are possible, then the simulated nature of the universe wouldn't really matter. Unless the simulators intervened to prevent it, we could "escape" by running an infinite simulation of ourselves. It would almost be like entering an ontologically separate reality.

Comment by Raiden on Crazy Ideas Thread · 2015-07-08T20:27:14.015Z · LW · GW

I always thought that the "most civilizations just upload and live in a simulated utopia instead of colonizing the universe" response to the Fermi Paradox was obviously wrong, because it would only take ONE civilization breaking this trend to be visible, and regardless of what the aliens are doing, a galaxy of resources is always useful to have. But I was reading somewhere (I don't remember where) about an interesting idea of a super-Turing computer that could calculate anything, regardless of time constraints and ignoring the halting problem. I think the proposal was to use closed timelike curves or something.

This, of course, seemed very far-fetched, but the implications are fascinating. It would be possible to use such a device to simulate an eternity in a moment. We could upload and have an eternity of eudaimonia, without ever having to worry about running out of resources or the heat death of the universe or alien superintelligences. Even if the computer were to be destroyed an instant later, it wouldn't matter to us. If such a thing were possible, then that would be an obvious solution to the Fermi Paradox.

Comment by Raiden on Why capitalism? · 2015-05-09T22:56:03.361Z · LW · GW

I strongly suspect that the effectiveness of capitalism as a system of economic organization is proportional to how rational the agents participating in it are. I expect that capitalism only optimizes against the general welfare when people in a capitalist society make decisions that go against their own long-term values. The more rational a capitalist society is, the more it begins to resemble an economist's paradise.

Comment by Raiden on Rationality Reading Group: Introduction and A: Predictably Wrong · 2015-04-18T00:53:36.868Z · LW · GW

Thank you! That's the first in-depth presentation of someone actually benefiting from MBTI that I've ever seen, and it's really interesting. I'll mull over it. I guess the main thing to keep in mind is that other people are different from me.

Comment by Raiden on Rationality Reading Group: Introduction and A: Predictably Wrong · 2015-04-17T02:01:28.690Z · LW · GW

I've noticed that a lot of my desire to be rational is social. I was raised as the local "smart kid" and continue to feel associated with that identity. I get all the stuff about how rationality should be approached like "I have this thing I care about, and therefore I become rational to protect it," but I just don't feel that way. I'm not sure how I feel about that.

Of the three reasons to be rational that are described, I'm most motivated by the moral reason. This is probably because of the aforementioned identity. I feel very offended by anything I perceive as "irrational" in others, kinda like it's an attack on my tribe. This has negative effects on my social life and causes me to be very arrogant toward others. Does anybody have any advice for that?

Comment by Raiden on Status - is it what we think it is? · 2015-04-01T05:03:39.932Z · LW · GW

I'd have to be stronger than the group in order to get more food than the entire group, but depending on their ability to cooperate I may be able to steal plenty for myself, an amount that would seem tiny compared to the large amount needed for the whole group.

I think the example I chose was a somewhat bad one, though, because the villagers would have a defender's advantage in protecting their food. You can substitute "abstract, uncontrolled resource" for "food" to clarify my point.

Comment by Raiden on Status - is it what we think it is? · 2015-04-01T00:18:17.896Z · LW · GW

an armed group such as occupiers or raiders who kept forcibly taking resources from the native population would be high status among the population, which seems clearly untrue.

Maybe that's still the same kind of status, but it's in regard to a different domain. Perhaps an effective understanding of status acknowledges that groups overlap and may be formed around different resources. In your example, there is a group (raiders and natives) that forms around literal physical resources, perhaps food. In this group, status is determined by military might, so the raiders have higher status-as-it-relates-to-food.

Within this group, there is another subgroup of just the villagers, which the raiders are either not a part of or are very low-status in. This group distributes social support and other nice things like that as the resource to compete over. The group norms dictate that pro-social behavior is how you raise status. So you can be high-status in the group of natives, but low-status in the group of (natives and raiders).

In our daily lives, we are all part of many different groups, which are all aligned along different resources. We constantly exchange status in some groups for status in others. For instance, suppose I'm a pretty tough guy, and I'm inserted into the previously discussed status system. I obviously want food, but I'm not stronger than the raiders. I am, however, stronger than most of the villagers, and could take some of the food that the raiders don't scavenge for. If strength was my biggest comparative advantage, and food was all I wanted, then this would definitely be the way to go.

Suppose though that I don't just want food, or I have an even larger comparative advantage in another area, such as basketweaving. I could join the group of the villagers and raise my status within the group. Other villagers would be willing to sacrifice their status in the (raiders and villagers) system in exchange for something they need, like my baskets. This would be me bartering my baskets for food. Here, we can see the primary resource of the (raiders and villagers) group thrown under the bus for other values.

If I raise my status in the group far enough by making good enough baskets, then in terms of the (raiders and villagers) system I will be getting a larger piece of a smaller pie, but it might still be larger than the amount I would get otherwise. Or maybe I'm not even too concerned about the (raiders and villagers) system, and view status within the village group as a terminal value. Or maybe I want to collect villager status to trade for something even more valuable.

tl;dr: There are a lot of different groups optimizing for different things. We can be part of many of these groups at once and trade status between them to further our own goals.

Comment by Raiden on Stupid Questions February 2015 · 2015-02-21T19:31:37.150Z · LW · GW

When making Anki cards, is it more effective to ask the meaning of a term, or to ask what term describes a concept?

Comment by Raiden on Stupid Questions February 2015 · 2015-02-04T03:15:14.567Z · LW · GW

Would a boxed AI be able to affect the world in any important way using the computer hardware itself? Like, make electrons move in funky patterns or affect air flow with cooling fans? If so, would it be able to do anything significant?

Comment by Raiden on You have a set amount of "weirdness points". Spend them wisely. · 2014-11-28T04:44:56.295Z · LW · GW

Regarding point 2, while it would be epistemologically risky and borderline dark arts, I think the idea is more about what to emphasize and openly signal, not what to actually believe.

Comment by Raiden on Open Thread April 8 - April 14 2014 · 2014-04-11T18:37:17.038Z · LW · GW

Thank you to those who commented here. It helped!

Comment by Raiden on Open Thread April 8 - April 14 2014 · 2014-04-11T18:36:47.776Z · LW · GW

Hmm, it seems obvious in retrospect, but it didn't occur to me that biochemistry would relate to nanotech. I suppose I compartmentalized "biological" from "super-cool high-tech stuff." Thank you very much for that point!

Comment by Raiden on Open Thread April 8 - April 14 2014 · 2014-04-10T16:03:32.874Z · LW · GW

I'm at that point in life where I have to make a lot of choices about my future. I'm considering doing a double major in biochemistry and computer science. I find both of these topics fascinating, but I'm not sure if that's the most effective way to help the world. I am comfortable in my skills as an autodidact, and I find myself interested in comp sci, biochemistry, physics, and mathematics. I believe that regardless of which I actually major in, I could learn any of the others quite well. I have a nagging voice in my head saying that I shouldn't bother learning biochemistry, because it won't be useful in the long term, since everything will be based on nanotech and we will all be uploads. Is that a valid point? Or should I just focus on the world as it is now? And should I study something else, or does biochem have the potential to help the world? I find myself very confused about this subject and humbly request any advice.

Comment by Raiden on Preferences without Existence · 2014-02-09T04:34:54.443Z · LW · GW

I guess what I'm saying is that since simpler ones are run more, they are more important. That would be true if every simulation were individually important, but I think one thing about this is that the mathematical entity itself is important, regardless of the number of times it's instantiated. But it still intuitively feels as though there would be more "weight" to the ones run more often. Things that happen in such universes would have more "influence" over reality as a whole.

Comment by Raiden on Preferences without Existence · 2014-02-09T04:28:12.751Z · LW · GW

What I mean though, is that the more complicated universes can't be less significant, because they are contained within this simple universe. All universes would have to be at least as morally significant as this universe, would they not?

Comment by Raiden on Preferences without Existence · 2014-02-09T03:33:54.166Z · LW · GW

Another thought: Wouldn't one of the simplest universes be a universal Turing machine that runs through every possible tape? All other universes would be contained within this universe, making them all "simple."
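A rough sketch (my own illustration, not from the original comment) of how a single machine can "run through every possible tape" without getting stuck on any one of them: the standard dovetailing trick of interleaving ever-larger step budgets across all finite tapes. The `step` function here is a placeholder for whatever machine semantics you have in mind, not a real universal machine.

```python
from itertools import count, product

def step(tape, n_steps):
    """Placeholder: 'run this tape on the universal machine for n_steps steps'.
    A real implementation would interpret the tape; here we just record the call."""
    return (tape, n_steps)

def dovetail():
    """Interleave execution of every finite binary tape so that each tape
    eventually receives arbitrarily many steps, even if some never halt."""
    for budget in count(1):
        for length in range(1, budget + 1):
            for bits in product("01", repeat=length):
                yield step("".join(bits), budget)

gen = dovetail()
for _ in range(10):
    print(next(gen))  # (tape, step budget) pairs, each tape revisited with larger budgets
```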

Comment by Raiden on Preferences without Existence · 2014-02-09T03:24:52.643Z · LW · GW

Instead of saying that you care about simpler universes more, couldn't a similar preference arise out of pure utilitarianism? Simpler universes would be more important because things that happen within them will be more likely to also happen within more complicated universes that end up creating a similar series of states. For every universe, isn't there an infinite number of more complicated universes that end up containing the simpler universe as a part?

Comment by Raiden on Personal examples of semantic stopsigns · 2013-12-11T14:16:07.034Z · LW · GW

Ones I've noticed are "lazy" or "stupid" or other words that are used to describe people. Sure, it can be good to have such models so that one can predict the behavior of a person, like "This person isn't likely to do his work." or "She might have trouble understanding that." The thing is, these are often treated as fundamental properties of an ontologically fundamental thing, which the human mind is not.

Why is this person lazy? Do they fall victim to hyperbolic discounting? Is there an ugh field related to their work? Do they not know what to do? Maybe they simply don't have good reason to work? Why is this person "stupid?" Do they lack the prerequisite knowledge to understand what you're saying? Are they interested in learning it? Do they have any experience with it?

Comment by Raiden on Help us Optimize the Contents of the Sequences eBook · 2013-09-20T14:12:58.919Z · LW · GW

I really would like a chronological order.

Comment by Raiden on Help us Optimize the Contents of the Sequences eBook · 2013-09-20T14:11:14.245Z · LW · GW

I really like the "cute little story," as you call it, but agree that it isn't effective where it is. Maybe include it at the end as a sort of appendix?

Comment by Raiden on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T16:41:42.551Z · LW · GW

Three shall be Peverell's sons and three their devices by which Death shall be defeated.

What is meant by the three sons? Harry, Draco, and someone else? Quirrell perhaps? Using the three Deathly Hallows?

Comment by Raiden on "Stupid" questions thread · 2013-07-16T03:15:23.033Z · LW · GW

I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I'm beginning to see that things aren't so simple.

Comment by Raiden on "Stupid" questions thread · 2013-07-16T03:13:35.314Z · LW · GW

I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.

It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.

Comment by Raiden on "Stupid" questions thread · 2013-07-16T03:08:10.991Z · LW · GW

Well, I certainly feel very confused. I generally do feel that way when pondering anything related to morality. The whole concept of what is the right thing to do feels like a complete mess, and any attempts to figure it out just seem to add to the mess. Yet I still feel very strongly compelled to understand it. It's hard to resist the urge to just give up and wait until we have a detailed neurological model of a human brain and can construct from it a mathematical model that would explain exactly what I am asking when I ask what is right, and what the answer is.

Comment by Raiden on "Stupid" questions thread · 2013-07-15T01:44:31.929Z · LW · GW

I am VERY confused. I suspect that some people can value some things differently, but it seems as though there should be a universal value system among humans as well. The thing that distinguishes "person" from "object" seems to belong to the latter.

Comment by Raiden on "Stupid" questions thread · 2013-07-14T22:45:55.732Z · LW · GW

My current view is that most animals are not people, in the sense that they are not subjects of moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it seems to me that they are still just running a program that is "below" that of humans. I think I feel that "reacting to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?

Comment by Raiden on Meetup : Atlanta Lesswrong's May Meetup: The Rationality of Social Relationships, Friendship, Love, and Family. · 2013-05-04T00:54:13.429Z · LW · GW

I would very much like to attend this, having never attended a meetup before. However, I am currently a minor who lacks transportation ability and have had little luck convincing my guardians to drive me to it. Is there anybody who is attending and is coming from the Birmingham, AL area who would be willing to drive me? I am willing to pay for the service.

Comment by Raiden on Open thread, March 17-31, 2013 · 2013-03-25T19:17:23.845Z · LW · GW

I've noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?

Comment by Raiden on Open Thread, December 16-31, 2012 · 2012-12-16T17:19:17.123Z · LW · GW

Some scientists think they have a method to test the Simulation Argument.

Comment by Raiden on Most Likely Cause of an Apocalypse on December 21 · 2012-12-03T23:55:04.678Z · LW · GW

Is it probable for intelligent life to evolve?