Comments

Comment by woodchopper on Duplication versus probability · 2018-06-26T00:12:50.191Z · LW · GW

If an exact copy of you were to be created, it would have to be stuck in the hole as well. If the 'copy' is not in the hole, then it is not you, because it is experiencing different inputs and has a different brain state.

Comment by woodchopper on Wirehead your Chickens · 2018-06-25T12:23:43.945Z · LW · GW

> identify and surgically or chemically remove the part of the brain that is responsible for suffering,

There is no part of the brain responsible for consciousness. Consciousness is a process, and it involves the entire system: from the inputs to your brain (like me telling you that you're ignorant), to the peripheral nerves, to the complex sub-sectors of the brain.

> breed animals who enjoy pain, not suffer from it

You cannot enjoy pain. That's quite literally a contradiction.

> Many of these are probably way easier and more practical than shaming people into giving up tasty steak

None of the ideas you have posited are easy or practical, and none of them make any sense whatsoever. Shaming people into giving up tasty steak is a weird way to frame the problem. Shaming people for treating the momentary experience steak gives their taste buds as worth torturing cows to death for is a viable and important strategy, because it is fundamentally sound.

> Because most people do not truly care about reducing animal suffering, they care about a different metric altogether, a visible human proxy for animal suffering that they find immediately relatable.

The best way of reducing animal suffering would be to reduce the number of animals currently in existence and reduce the number brought into existence. Ending factory farming is a very effective way of doing this, considering that an extremely large proportion of the most sentient creatures on the planet (mainly mammals with very complex brains) are brought into existence by the direct action of humans, for meat consumption.

One of your ideas, shrinking or even removing the brain, is already being developed. We are making meat without the animal, which means without the brain: this is cultured meat. We are also replicating most of the properties of meat in plant-based meat (see Impossible Foods, Beyond Meat). Both of these approaches are effective and practical.

Is it practical to wirehead tens of billions of chickens every year? No, it's not. It's impossible with current technology. We could surgically implant carfentanil-secreting devices in the spinal cords of every chicken, but doing so would drive the cost of chicken meat up so high that the world would just go vegan instead of paying for it.

I urge you to think more clearly about this issue, instead of trying to find ways to justify your current lifestyle.

Comment by woodchopper on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-05-06T17:59:18.977Z · LW · GW

> and more specifically you should not find yourself personally living in a universe where the history of your experience is lost. I say this because this is evidence that we will likely avoid a failure in AI alignment that destroys us, or at least not find ourselves in a universe where AI destroys us all, because alignment will turn out to be practically easier than we expect it to be in theory.

Can you elaborate on this idea? What do you mean by 'the history of your experience is lost'? Can you supply some links to read on this whole theory?

Comment by woodchopper on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-05-06T17:57:17.148Z · LW · GW

Could you qualify that statement?

Can you make an AGI given only primordial soup?

Comment by woodchopper on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-05-06T17:55:06.649Z · LW · GW

An AI will have a utility function. What utility function do you propose to give it?

What values would we give an AI if not human ones? Giving it human values doesn't necessarily mean giving it the values of our current society. It will probably mean distilling our most core moral beliefs.

If you take issue with that, all you are saying is that you want an AI to have your values rather than humanity's as a whole.

Comment by woodchopper on AI arms race · 2017-05-06T17:04:59.581Z · LW · GW

Developing an AGI (and then an ASI) will likely involve a series of steps involving lower intelligences. There's already an AI arms race between several large technology companies, and keeping your nose in front is already standard practice because there's a lot of utility in having the best AI so far.

So it isn't true to say that it's simply a race without important intermediate steps. You don't just want to get to the destination first, you want to make sure your AI is the best for most of the race for a whole heap of reasons.

Comment by woodchopper on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-04-30T05:21:46.007Z · LW · GW

That's a partial list. It also takes good universities, a culture that produces a willingness to take risks, a sufficient market for good products, and I suspect a litany of other things.

I think once you've got a genuinely innovative society started, it can be hard to kill that off, but it can be done and has been done. The problem is, as you mentioned, that very few societies have ever been particularly innovative.

It's easy to use established technology to build a very prosperous first-world society: Australia, Canada and Sweden, for example. But it's much harder for a society to genuinely drive humanity forwards, and in the history of humanity it has only happened a few times. We forget that for a very long time, very little invention happened anywhere in human society.

Comment by woodchopper on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-04-28T11:03:44.805Z · LW · GW

I think it's an interesting point that innovation is actually very rare, and I agree. It takes a special combination of things for it to happen, and that combination doesn't come around much. Britain was extremely innovative a few hundred years ago; in fact, it started the Industrial Revolution, literally revolutionising humanity. But today Britain does not strike me as particularly innovative, even with that history behind it.

I don't think America's ability to innovate is coming to an end all that soon. But even if America continues to prosper, will that mean it continues to innovate? It takes more than prosperity for innovation to happen. It takes a combination of factors that nobody really understands: a particular culture, a particular legal system, and much more.

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-28T14:05:25.382Z · LW · GW

You have failed to answer my question. Why does anything at all matter? Why does anything care about anything at all? Why don't I want my dog to die? Obviously, when I'm actually dead, I won't want anything at all. But there is no reason I cannot have preferences now regarding events that will occur after I am dead. And I do.

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-27T00:04:37.194Z · LW · GW

Why does anything at all matter?

Comment by woodchopper on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-26T10:54:41.672Z · LW · GW

In Australia we currently produce enough food for 60 million people, without any intensive farming techniques at all. This could be scaled up by a factor of ten if it were really necessary, but quality of life per capita would suffer.

I think smaller nations are, as a general rule, governed much better, so I don't see any positives in increasing our population beyond the current 24 million people.

Comment by woodchopper on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-26T10:50:51.661Z · LW · GW

Each human differs in their values. So it is impossible to build the machine of which you speak.

Comment by woodchopper on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-26T10:48:58.304Z · LW · GW

Raid Google and shut them down immediately. Start a Manhattan project of AI safety research.

Comment by woodchopper on The map of agents which may create x-risks · 2016-10-26T10:41:12.065Z · LW · GW

I really like that you mention world government as an existential risk. It's one of the biggest ones. Competition is a very good risk-reduction process. It has been said before that if we all lived in North Korea, the future of humanity might well be quite bleak indeed. North Korea is less stable now than it would be if it were the world's government, because all sorts of outside pressures contribute to its instability (technology created by freer nations, pressure from foreign governments, etc).

No organisation can ever get it right all the time. Even knowing what 'right' is is pretty hard to do, and the main way humans do it is through competition. We know certain things work and certain things don't simply because of policy diversity between nations: we can look and see which countries are successful and which aren't. A world government would destroy this. Under a world government I would totally write off humanity; I suspect we would all be doomed to die on this rock. People very much forget how precarious our current civilisation is. For thousands of years humanity floundered until Britain hit upon the ability to create continued progress through the chance development of certain institutions (rule of law, property rights, contracts, education, reading, writing, etc).

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-26T10:29:04.265Z · LW · GW

You might not care, but a lot of humans do care, and will continue to care. That's why we're discussing it.

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-26T10:27:05.607Z · LW · GW

There have been wars over land since humans have existed. And non-interaction, even if initially widespread, clearly stopped once it became clear that the world wasn't infinite and that particular parts had special value and were contested by multiple tribes. Australia being huge and largely empty didn't stop the European tribes from fighting a series of wars of increasing intensity, culminating in WW1 and WW2, unfathomably violent and huge clashes over ideology and resources. That is what happened in Europe, where multiple tribes of comparable strength grew up near each other over a long period. In America, settlers simply neutralised the Native Americans while the settlers' technological superiority was overwhelming, a much better idea than letting them grow powerful enough to eventually challenge you.

Comment by woodchopper on [deleted post] 2016-10-26T10:19:16.202Z

Remember also that viruses that kill lots of people tend to rapidly mutate into less lethal strains due to evolutionary pressures. This is what happened with the 1918 pandemic.

Comment by woodchopper on [deleted post] 2016-10-26T03:21:54.574Z

Extremely low. I have never believed any sort of pathogen could come close to wiping us out. They can be defeated by basic respirator and biohazard technology. But the key point is that with improved and more accessible biotechnology, our ability to create vaccines and other defences against pathogens is greatly enhanced. I actually think the better biotechnology gets, the less likely any pathogen is to wipe us out, even given the fact that terrorists will be able to misuse it more easily.

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-26T02:45:14.579Z · LW · GW

Kicking the can down the road doesn't seem to be a likely action of an intelligent civilisation.

Best to control us while they still can, or while the resulting war would not cause unparalleled destruction.

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-26T02:43:14.729Z · LW · GW

The development of Native Americans has been stunted and they simply exist within the controlled conditions imposed by the new civilization now. They aren't all dead, but they can't actually control their own destiny as a people. Native American reservations seem like exactly the sort of thing aliens might put us in. Very limited control over our own affairs in desolate parts of the universe with the addition of welfare payments to give us some sort of quality of life.

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-26T02:38:38.258Z · LW · GW

If we were rational, we would stop their continued self-directed development, because having a rapidly advancing alien civilisation with goals different to ours is a huge liability.

So maybe we would not wipe them out, but we would not let them continue on as normal.

Comment by woodchopper on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-25T12:11:06.638Z · LW · GW

Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least not permanently stunting our continued development, should it become aware of us?

As has come to light with research on superintelligences, an actor does not have to hate us to destroy us; it only has to realise that we conflict, even in a very minor way, with its goals. As a rapidly advancing intelligent civilisation, our continued growth and existence will likely hamper the goals of other intelligent civilisations, so it will be in their interests to either stunt our growth or wipe us out. They don't have to hate us. They might be very empathetic. But if their goals are not exactly the same as ours, it seems a huge liability to leave us free to challenge their power. I know that I would stop the development of any other rapidly advancing intelligent species if I could, simply because struggles over our inevitably conflicting goals would be best avoided.

So, my question is, can you see any realistic value system a superintelligent alien civilisation might hold that would result in them not stopping us from going on growing and developing our power as a civilisation in a self-directed way? I cannot.

Given this, why is it in any way legal to broadcast our existence and location? There have been efforts in the past to send radio signals to distant solar systems. A superintelligent civilisation may well pick these up and come on the hunt for us. I think that this is one of the biggest existential threats we face, and our only real advantage is the element of stealth and surprise, which several incomprehensibly stupid individuals seem to threaten with their attempts to contact other actors in the universe. Should the military physically bomb and attack installations that attempt to broadcast our location? How do we get the people doing this stuff to stop?

Comment by woodchopper on Newcomb versus dust specks · 2016-05-17T08:59:17.172Z · LW · GW

This doesn't seem very coherent.

> As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.

OK. Then that means if I choose torture, I am alone. If I choose the dust specks, I am not alone. I don't want to be tortured, and don't really care about 3^^^3 people getting dust specks in their eyes, even if they're all 'perfect copies of me'. I am not a perfect utilitarian.

A perfect utilitarian would choose torture, though, because one person getting tortured is technically not as bad, from a utilitarian point of view, as 3^^^3 dust specks in eyes.
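A minimal sketch of that utilitarian bookkeeping (the symbols are stand-ins, not real estimates): if a single dust speck carries some tiny but nonzero disutility $\varepsilon$ and fifty years of torture carries disutility $T$, then a total utilitarian compares

$$3\uparrow\uparrow\uparrow 3 \cdot \varepsilon \quad \text{versus} \quad T,$$

and since $3\uparrow\uparrow\uparrow 3$ dwarfs $T/\varepsilon$ for any finite values, the specks are the larger total harm, so torture is the 'correct' utilitarian pick.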

Comment by woodchopper on Information Hazards and Community Hazards · 2016-05-15T12:49:19.025Z · LW · GW

I think a very interesting trait of humans is that we can, for the most part, collaboratively truth-seek on most issues, except those defined as 'politics', where a large proportion of the population, of varying IQs, some extremely intelligent, believe things that are quite obviously wrong to anyone who has spent any amount of time seeking the truth on those issues without prior bias.

The ability for humans to totally turn off their rationality, to organise the 'facts' as they see them to confirm their biases, is nothing short of incredible. If humans treated everything like politics, we would certainly get nowhere.

I think a community hazard would, unfortunately, be trying to collaboratively truth-seek about political issues on a forum like LessWrong. People would not be able to get over their biases, despite being very open to changing their mind on all other issues.

Comment by woodchopper on Open Thread May 2 - May 8, 2016 · 2016-05-03T17:05:38.542Z · LW · GW

> (1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are "true" I mean, for example in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing, my mistake.

> You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their descendants.

I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument. I think the core of his argument is that if simulated minds outnumber "real" minds, then it's likely we are all simulated. I'm not really sure how us being "accurately simulated" minds changes things. It does make it easier to reason outside of our little box - if we are highly accurate simulations then we can actually know a lot about the real universe, and in fact studying our little box is pretty much akin to studying the real universe.
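To make explicit the step I'm attributing to him (this is my reading, assuming the Self-Sampling Assumption and a single reference class for simulated and unsimulated minds): if there are $N_{\text{sim}}$ simulated minds and $N_{\text{real}}$ unsimulated ones, then

$$P(\text{I am simulated}) \approx \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}},$$

which goes to 1 as simulated minds come to vastly outnumber unsimulated ones. My objection below is about whether I can legitimately estimate $N_{\text{sim}}$ and $N_{\text{real}}$ from inside the box in the first place.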

> This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

Let's assume I'm trying to make conclusions about the universe. I could be a brain in a vat, but there's not really anything to be gained in assuming that. Whether it's true or not, I may as well act as if the universe can be understood. Let's say I conclude, from my observations about the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because my reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on me not being a simulated mind. I would almost say it's impossible to reason your way into believing you're in a simulation. It's self-referential.

I'm going to have to think about this harder, but try and criticise what I'm saying as you have been doing because it certainly helps flesh things out in my mind.

Comment by woodchopper on Open Thread May 2 - May 8, 2016 · 2016-05-03T15:23:36.374Z · LW · GW

We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent's memory.

There is no limit to how perverted a view of the world a simulated agent could have.
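A toy sketch of what I mean, with an entirely made-up agent and environment (none of this is meant as a real simulation design, just an illustration of how arbitrary the agent's "physics" can be):

```python
import random

class SimulatedAgent:
    def __init__(self):
        self.memory = []

    def observe(self, observation):
        self.memory.append(observation)

    def corrupt_memory(self, flip_probability=0.01):
        # The simulator can silently rewrite what the agent "remembers".
        self.memory = [m if random.random() > flip_probability else "???"
                       for m in self.memory]

def enter_new_room(agent):
    # Geometry is re-rolled on every entry, so the agent's "laws of space"
    # are whatever the simulator's RNG happens to produce.
    geometry = random.choice(["euclidean", "hyperbolic", "spherical", "impossible"])
    agent.observe(f"this room is {geometry}")

def group_objects(agent, a, b):
    # The simulator reports that grouping two things yields three.
    agent.observe(f"putting {a} and {b} together yields {a + b + 1} things")

agent = SimulatedAgent()
enter_new_room(agent)
group_objects(agent, 1, 1)
agent.corrupt_memory()
print(agent.memory)
```

Any "universal truths" the agent infers from those observations are just artefacts of the simulator's whims.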

Comment by woodchopper on Open Thread May 2 - May 8, 2016 · 2016-05-03T15:10:00.624Z · LW · GW

I am taking issue with the conclusion that we are living in a simulation, even given premises (1) and (2) being true.

So I am struggling to understand his reply to my argument. In some ways it simply looks like he's saying that either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which either are not reliable (if we are in a simulation) or support a conclusion that is obviously wrong (if we aren't).

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

> If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Comment by woodchopper on Open Thread May 2 - May 8, 2016 · 2016-05-03T03:07:40.937Z · LW · GW

No. Think about what sort of conclusions an AI in a game we make would come to about reality. Pretty twisted, right?

Comment by woodchopper on Open Thread May 2 - May 8, 2016 · 2016-05-02T18:16:47.852Z · LW · GW

The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write up of this if anyone agrees with me, but basically, you cannot reason about without our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example, simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be or already are simulated only applies in this reality/universe. If we are in a simulation, all of our logic will not be universal but instead will be a reaction to the perverted rules set up by the simulation's creators. If we're not in a simulation, we're not in a simulation. Either way, the simulation argument is flawed.

Comment by woodchopper on My Kind of Moral Responsibility · 2016-05-02T18:04:30.949Z · LW · GW

I think I agree with what you're saying for the most part. If your goal is, say, reducing suffering, then you have to consider the best way of convincing others to share your goal. If you started killing people who ran factory farms, you would probably turn a lot of the world against you, and so fail in your goal. And you have to consider the best way of convincing yourself to keep pursuing your goal, now and into the future, since human goals can change depending on circumstances and experiences.

In terms of guilt, finding little tricks to rid yourself of guilt for various things probably isn't a good way to make you continue caring and doing as much as you can for a certain issue. I can know that something is wrong, but if I don't feel guilty about doing nothing, I'm probably not going to exert myself as hard in trying to fix it. If I can tell myself "I didn't do it, therefore it's none of my concern, even though it is technically a bad thing" and absolve myself of guilt, it's simply going to make me less likely to do anything about the issue.

Comment by woodchopper on My Kind of Moral Responsibility · 2016-05-02T17:37:49.445Z · LW · GW

You have to consider that humans don't have perfect utility functions. Even if I want to be a moral utilitarian, it is a fact that I am not. So I have to structure my life around keeping myself as morally utilitarian as possible. Brian Tomasik talks about this. It might be true that I could reduce more suffering by not eating an extra donut, but I'm going to give up on the entire task of being a utilitarian if I can't allow myself some luxuries.

Comment by woodchopper on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-04-30T18:34:28.989Z · LW · GW

What you are saying doesn't follow from the premises, and is about as accurate as me saying that magic exists and Harry Potter casts a spell on too-advanced civilisations.

Comment by woodchopper on Crazy Ideas Thread, December 2015 · 2016-04-27T15:55:23.421Z · LW · GW

Why would us launching a simulation use more processing power? It seems more likely that the universe does a set amount of information processing and all we are doing is manipulating that in constructive ways. Running a computer doesn't process more information than the wind blowing against a tree does; in fact, it processes far less.

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-27T15:41:44.185Z · LW · GW

So, the graph model of identity sort of works, but I feel it doesn't quite get to the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked. I don't think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn't meld with intuition. A person's brain is a complex machine; it's being modified all the time as one learns new information, has new experiences, takes new substances, etc. But imagine it were (using some extremely advanced technology) modified very dramatically while the person was still conscious: so much so that over the course of a few minutes, a person who once had the personality and memories of, say, you, ended up with the rough personality and memories of Barack Obama. Could it really be said that it's still the same identity?

Why is an uploaded mind necessarily linked by an edge to the original mind? If the uploaded mind is less than perfect (and it probably will be; even if it's off by one neuron, one bit, one atom) and you can still link that with an edge to the original mind, what's to say you couldn't link a very, very dodgy 'clone' mind, like for example the mind of a completely different human, via an edge, to the original mind/vertex?
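To make the worry concrete, here is a toy version of the graph model; the similarity measure and the threshold are invented for illustration, and the arbitrariness of that threshold is exactly what I'm pointing at:

```python
# Toy identity graph: vertices are mind-states, edges mean "same identity".
# The similarity function and the 0.99 threshold are arbitrary stand-ins.

def similarity(mind_a, mind_b):
    # Fraction of "features" (memories, traits) the two mind-states share.
    shared = len(mind_a & mind_b)
    total = len(mind_a | mind_b)
    return shared / total if total else 1.0

def add_edge_if_continuous(graph, a, b, minds, threshold=0.99):
    # Link two vertices only if the mind-states are "similar enough".
    if similarity(minds[a], minds[b]) >= threshold:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

minds = {
    "you_t0":      {"memory_1", "memory_2", "trait_calm"},
    "you_t1":      {"memory_1", "memory_2", "trait_calm"},  # a few moments later
    "upload":      {"memory_1", "memory_2"},                # imperfect copy
    "other_human": {"memory_9", "trait_loud"},              # clearly not you?
}

graph = {}
for a in minds:
    for b in minds:
        if a < b:
            add_edge_if_continuous(graph, a, b, minds)

print(graph)  # which vertices end up linked depends entirely on the threshold
```

Lower the threshold and the imperfect upload, or even the completely different human, gets an edge to you; raise it and even moment-to-moment continuity might fail. Nothing in the model itself tells you where to draw the line.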

Some other notes: firstly, an exact clone of a mind is the same mind. This pretty much makes sense. So you can get away from issues like 'if I clone your mind, but then torture the clone, do you feel it?' Well, if you've modified the state of the cloned mind by torturing it, it can no longer be said to be the same mind, and we would both presumably agree that me cloning your mind in a far away world and then torturing the clone does not make you experience anything.

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-27T13:30:58.140Z · LW · GW

Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?

If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.

So go back to the scenario - you're killed, there are some exact copies made of your brain and some inexact copies. It has been shown that it is possible to torture an exact copy of your brain while not torturing 'you', so surely you could torture one or all of these reconstructed brains and you would have no reason to fear?

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-25T14:59:17.736Z · LW · GW

So, let's say you die, but a super intelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that 'you'?

If it is, let's say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is 'you'?

Let's say the second one is 'you', and the first one isn't. What happens when the computer reconstructs yet another exact copy of your brain?

If the computer told you it was going to torture the slightly-wrong copy of you (the one with a few atoms missing), would that scare you?

What if it was going to torture the exact copy of you, but only one of the exact copies? There's a version of you not being tortured, what's to say that won't be the real 'you'?

Comment by woodchopper on Roughly you · 2016-04-25T11:13:56.036Z · LW · GW

Why would something that is not atom to atom exactly what you are now be 'you'?

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-25T10:48:07.036Z · LW · GW

I think consciousness arises from physical processes (as Dennett says), but that's not really solving the problem or proving it doesn't exist.

Anyway, I think you are right that if you think being mind-uploaded does or does not constitute continuing your personal identity, it's hard to say you are wrong. However, what if I don't actually know whether it does, yet I want to be immortal? Then we have to study the question to figure out which actions keep the real 'us' existing and which don't.

What if the persistence of personal identity is a meaningless pursuit?

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-25T07:35:49.611Z · LW · GW

If there's no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of 'you' is not actually 'you', would seeking immortality mean we can't upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around?

If we found out that there's a new 'you' every time you go to sleep and wake up, wouldn't it make sense to abandon the quest for immortality as we already die every night?

(Note, I don't actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-24T17:38:30.583Z · LW · GW

Can you elaborate on the concept of a connection through "moment-to-moment identity"? Would for example "mind uploading" break such a thing?

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-24T17:31:58.589Z · LW · GW

The thing is, I'm just not sure it's even reasonable to talk about 'immortality', because I don't know what it means for one personal identity ('soul') to persist. I couldn't be sure that if a computer simulated my mind it would be 'me', for example. Immortality will likely involve serious changes to the physical form our minds take, and once you start talking about that you get into the realm of thought experiments like this: if you put someone under a general anaesthetic, take one atom out of their brain, then wake them up, you have a similar person but not the one who originally went under the anaesthetic. So from the perspective of the original person, undergoing the operation was pointless, because they are dead anyway. The person who wakes from the operation is someone else entirely.

I guess I'm just trying to say that immortality makes heaps of sense if we can somehow solve the question of personal identity, but if we can't, then 'immortality' may be pretty nonsensical to talk about, simply because if we cannot say what it takes for a single 'soul' to persist over time, the very concept of 'immortality' may be ill-defined.

I like your post about the heat death of the universe. If you ever figure anything out regarding the persistence of personal identity, I'd like you to message me or something.

Comment by woodchopper on A Roadmap: How to Survive the End of the Universe · 2016-04-23T12:01:42.461Z · LW · GW

Currently it's pretty commonly believed that the end state of the universe is decayed particles, each receding from every other particle faster than the speed of light and therefore existing in an eternal and inescapable void. If you only have one particle you can't do calculations.

Comment by woodchopper on Does immortality imply eternal existence in linear time? · 2016-04-23T11:58:00.085Z · LW · GW

What does it mean to be immortal? We haven't solved key questions of personal identity yet. What is it for one personal identity to persist?

Comment by woodchopper on Consider having sparse insides · 2016-04-03T07:35:23.656Z · LW · GW

If you define yourself by the formal definition of a general intelligence then you're probably not going to go too far wrong.

That's what your theory ultimately entails. You are saying that you should go from specific labels ("I am a Democrat") to more general labels ("I am a seeker of accurate world models") because it is easier to conform to a more general specification. The most general label would be a formal definition of what it means to think and act on an environment for the attainment of goals.

I don't think your theory is particularly useful.