Comments

Comment by Myron Hedderson (myron-hedderson) on sarahconstantin's Shortform · 2024-10-11T14:14:28.051Z · LW · GW

> We can't steer the future

What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we're 100% screwed, because I can't do that. But I do have some influence: a great deal of influence over my own actions (I'm resisting the temptation to go down a sidetrack about determinism, assuming you're modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on down to very little (but not zero) influence over humanity as a whole. I also note that you use the word "we", but I don't know who the "we" is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we can coordinate. Admittedly, we're not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to "we can't steer the future" is "not yet we can't, at least not very well"?
 

>   • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
>     • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.

Agree, mostly. The steering I would aim for would be setting up systems wherein the locally self-interested and non-violent things people are incentivized to do have positive effects for humanity's future. In other words, setting up society such that individual and humanity-wide effects are in the same direction with respect to some notion of "goodness", rather than individual actions harming the group, or group actions harming or stifling the individual.

We live in a society where we can collectively decide the rules of the game, which is a way of "steering" a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good. Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). And similarly for groups stifling individuals' ability to do things that seem to them to be good for them in the short term. And rules that have perverse incentive effects that are harmful to the individual, the group, or both? Definitely out.

This type of system design is like a haiku - very restricted in what design choices are permissible, but not impossible in principle. It seems worth trying, because if successful, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good. And the right local/individual move to influence the systems of which you are a part towards that state, as a cognitively-limited individual who can't hold the whole of complex systems in their mind and accurately predict the effect of proposed changes out into the far future, might be as simple as saying "in this instance, you're stifling the individual" and "in this instance you're harming the group/long-term future" wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.

Comment by Myron Hedderson (myron-hedderson) on [Completed] The 2024 Petrov Day Scenario · 2024-09-26T20:02:39.962Z · LW · GW

This is fun! I don't know which place I'm a citizen of, though, it just says "hello citizen"... I feel John Rawls would be pleased...

Comment by myron-hedderson on [deleted post] 2024-08-26T13:41:47.266Z

I think a search for a rule that:

  1. Is simple enough to teach to children, and ideally can be stated in a single sentence
  2. Can be applied consistently across many or all situations
  3. Cannot be gamed by an intelligent agent who is not following the rule and is willing to self-modify in order to exploit loopholes or edge cases.

is likely to find no such rule exists.

The platinum rule is pretty good as a simple-to-teach-to-children rule, I think. They will already intuitively understand that you don't have to play fair with someone who isn't following the rules, which patches most of the holes. (After all, if I'm following the platinum rule, creating a bunch of drama for someone else is not treating them how they'd like to be treated. Putting them in an awkward situation where they're forced to do what I want or I'll be sad, and they are now a puppet hostage to my emotional state, is likewise un-fun for the counterparty, even if the sadness is genuine.)

Comment by Myron Hedderson (myron-hedderson) on When is a mind me? · 2024-07-19T13:08:34.623Z · LW · GW

I think maybe the root of the confusion here might be a matter of language. We haven't had copier technology, and so our language doesn't have a common sense way of talking about different versions of ourselves. So when one asks "is this copy me?", it's easy to get confused. With versioning, it becomes clearer. I imagine once we have copier technology for a while, we'll come up with linguistic conventions for talking about different versions of ourselves that aren't clunky, but let me suggest a clunky convention to at least get the point across:

I, as I am currently, am Myron.1. If I were copied, I would remain Myron.1, and the copy would be Myron.1.1. If two copies were made of me at that same instant, they would be Myron.1.1 and Myron.1.2. If a copy was later made of Myron.1.2, he would be Myron.1.2.1. And so on.
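To make the convention concrete, here is a toy sketch in Python. It is only my own illustration of the naming scheme described above (the class and its behaviour are hypothetical, not any standard notation):

```python
# Toy illustration of the versioning convention above: the original keeps its
# identifier, and each new copy gets the next sub-version number appended.

class PersonVersion:
    def __init__(self, version: str):
        self.version = version       # e.g. "Myron.1"
        self._copies_made = 0        # copies spawned directly from this version

    def copy(self) -> "PersonVersion":
        """The original stays at its current version; the copy gets a new suffix."""
        self._copies_made += 1
        return PersonVersion(f"{self.version}.{self._copies_made}")

myron = PersonVersion("Myron.1")
copy_a = myron.copy()            # Myron.1.1
copy_b = myron.copy()            # Myron.1.2
later = copy_b.copy()            # Myron.1.2.1
print(myron.version, copy_a.version, copy_b.version, later.version)
```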

With that convention in mind, I would answer the questions you pose up top as follows:

> Rather, I assume xlr8harder cares about more substantive questions like:

  1. If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? No. Maybe similarly to the way I'd care about a close relative.
  2. Should I anticipate experiencing what my upload experiences? No. I should anticipate experiencing a continuation of Myron.1's existence if the process is nondestructive, or the end of my (Myron.1's) existence if it is destructive. Myron.1.1's experiences will be separate and distinct from Myron.1's.
  3. If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure? Depends. Sometimes suicide is OK, and you could value the continuation of a mind like your own even if your mind goes away. Or, not. That's a values question, not a fact question.

I'll add a fourth, because you've discussed it:

4. After the scanning and copying process, will I feel like me? Yep. But, if the copying process was nondestructive, you will be able to look out and see that there is a copy of you. There will be a fact of the matter about who entered the copying machine and how the second copy was made, a point in time before which the second copy did not exist and after which it did exist, so one of you will be Rob.1, and the other will be Rob.1.1. It might not be easy to tell which version you are in the instant after the copy is made, but "the copy is the original" will be a statement that both you and the other version evaluate as logically false, same with "both of you are the same person". "Both of you are you", once we have linguistic conventions around versioning, will be a confusing and ambiguous statement and people will ask you what you mean by that.

And another interesting one: 

5. After the scanning process, if it's destructive, if I'm the surviving copy, should I consider the destruction of the original to be bad? I mean, yeah, a person was killed. It might not be you.currentversion, exactly, but it's where you came from, so probably you feel some kinship with that person. In the same way I would feel a loss if a brother I grew up with was killed, I'd feel a loss if a past version of me was killed. We could have gone through life together with lots of shared history in a way very few people can, and now we can't.

Comment by Myron Hedderson (myron-hedderson) on What Other Lines of Work are Safe from AI Automation? · 2024-07-12T01:24:22.177Z · LW · GW

Ok. I thought, after I posted my first answer, that one of the things that would be really quite valuable during the turbulent transition is understanding what's going on and translating it for people who are less able to keep up, because they lack the background knowledge or temperament. While it will be the case after a certain point that AI can give people reliable information, there will be a segment of the population that will want to hear the interpretation of a trustworthy human. Also, the cognitive flexibility to deal with a complex and rapidly changing environment and provide advice to people based on their specific circumstances will be a comparative advantage that lasts longer than most.

Acting as a consultant to help others navigate the transition could be valuable, particularly if that incorporates other expertise you have. There may be a better generic advice-giver in the world, and you're not likely to be able to compete with Zvi in terms of synthesizing and summarizing information, but if you're, for example, well enough versed in the current situation, plus you have some professional specialty, plus you have local knowledge of the laws or business conditions in your geographic area, you could be the best consultant in the world with that combination of skills.

Also, generic advice for turbulent times: learn to live on as little as possible, stay flexible and willing to move, and save up as much as you can. That way you have some capital to deploy when it could be very useful (if interest rates go sky-high because suddenly everyone wants money to build chip fabs or mine metals for robots or something, having some extra cash pre-transition could mean having plenty post-transition), and you also have some free cash in case things go sideways and a well-placed wad of cash can get you out of a jam on short notice, let you quit your job and pivot, or do something else that has a short-term financial cost but that you think is good under the circumstances. Basically, make yourself more resilient, knowing turbulence is coming, and prepare to help others navigate the situation. Make friends and broaden your social network, so that you can call on them if needed and vice versa.

Comment by Myron Hedderson (myron-hedderson) on What Other Lines of Work are Safe from AI Automation? · 2024-07-11T20:52:06.997Z · LW · GW

I think my answer would depend on your answer to "Why do you want a job?". I think that when AI and robotics have advanced to the point where all physical and intellectual tasks can be done better by AI/robots, we've reached a situation where things change very rapidly, and "what is a safe line of work long term?" is hard to answer because we could see rapid changes over a period of a few years, and who knows what the end-state will look like? Also, any line of work which at time X it is economically valuable to have humans do will carry a lot of built-in incentive to automate it, so "what is it humans can make money at because people value the labour?" could change rapidly. For example, you suggest that sex work is one possibility, but if you have 100,000 genius-level AIs devising the best possible sex-robot, pretty quickly they'd be able to come up with something where the people who are currently paying for sex would feel like they're getting better value for money out of the sex-robot than out of a human they could pay for sex. Of course people will still want to have sex with people they like who like them back, but that isn't typically done for money.

We'll live in a world where the economy is much larger and people are much richer, so subsistence isn't a concern, provided that there are decent redistributive mechanisms of some sort in place. Say we keep the tax rate the same but GDP has gone up 1,000x - then the amount of tax revenue has gone up 1,000x, and UBI is easy. If we can't coordinate to get a UBI in place, it would still only take 1 in 1,000 people who somehow lucked into resources saying "I wish everyone had a decent standard of living" to set up a charitable organization that gave out free food and shelter with the resources under their command. So you won't need a job. Meaning, any work people got other people to do for them would have to pay an awful lot if it was something a worker didn't intrinsically want to do (if someone wanted a ditch dug for them by humans who didn't like digging ditches, they'd have to make those humans a financial offer that made it worthwhile when all of their needs are already met - how much would you have to pay a billionaire to dig you a ditch? There's a price, but it's probably a lot). Otherwise, you can just do whatever "productive" thing you want because you want to, you enjoy the challenge, it's a growth experience for you, or whatever, and it likely pays zero, but that doesn't matter because you value it for reasons other than the pay.

I guess it could feel like a status or dignity thing, to know that other people value the things you can do, enough to keep you alive with the products of your own labour? And so you're like "nah, I don't want the UBI, I want to earn my living". In that case, keep in mind that "enough to keep you alive with the products of your own labour" will be very little, as a percentage of people's income. So you can busk on a street corner, and people can occasionally throw the equivalent of a few hundred thousand of today's dollars of purchasing power into your hat because you made a noise they liked, in the same way that I can put $5 down for a busker now because that amount of money isn't particularly significant to me. And then you're set for a few years at least, instead of being able to get yourself a cup of coffee as is the case now.

Or, do you want to make a significant amount of money, such that you can do things most people can't do because you have more money than them? In that case, I think you'd need to be pushing the frontier somehow - maybe investing (with AI guidance, or not) instead of spending in non-investy ways would do it. If the economy is doubling every few years, and you decide to live on a small percentage of your available funds and invest the rest, that should compound to a huge sum within a short time, sufficient for you to, I dunno, play a key role in building the first example of whatever new technology the AI has invented recently which you think is neat, and get into the history books?
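As a rough back-of-the-envelope sketch of that compounding point (all the specific numbers below are made-up assumptions, just to show the arithmetic):

```python
# Back-of-the-envelope compounding (all numbers are made-up assumptions):
# if invested savings roughly track an economy that doubles every 3 years,
# a modest sum grows dramatically over a fairly short period.

initial_savings = 100_000          # hypothetical starting amount
doubling_time_years = 3            # assumed economic doubling time
years = 15

growth_factor = 2 ** (years / doubling_time_years)   # 2^5 = 32x
final_value = initial_savings * growth_factor
print(f"{initial_savings:,} grows to about {final_value:,.0f} in {years} years ({growth_factor:.0f}x)")
```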

Or do you just want to do something that other people value? There will be plenty of opportunities to do that. When you're not constrained by a need to do something to survive, you could, if you wanted, make it your goal to give your friends really good and thoughtful gifts - do things for them that they really appreciate, which yes they could probably train an AI agent to do, but it's nice that you care enough to do that, the fact that you put in the thought and effort matters. And so your relationships with them are strengthened, and they appreciate you, and you feel good about your efforts, and that's your life.

Of course, there are a lot of problems in the world that won't magically get fixed overnight even if we create genius-level AIs and highly dexterous robots and for whatever reason that transition causes 0 unexpected problems. Making it so that everybody's life, worldwide, is at least OK, and we don't cause a bunch of nonhuman animal suffering, is a heavy lift to do from where we are, even with AI assistance. So if your goal is to make the lives of the people around you better, it'll be a while before you have a real struggle to find a problem worth solving because everything worthwhile has already been done, I'd think. If everything goes very well, we might get there in the natural un-extended lifetimes of people alive today, but there will be work to do for at least a decade or two even in the best case that doesn't involve a total loss of human control over the future, I'd think. The only way all problems get solved in the short term and you're really stuck for something worthwhile to do, involves a loss of human control over the situation, and that loss of control somehow going well instead of very badly.

Comment by Myron Hedderson (myron-hedderson) on Self responsibility · 2024-06-19T13:05:01.976Z · LW · GW

> When a person isn't of a sound mind, they are still expected to maintain their responsibility but they may simply be unwell. Unable to be responsible for themselves or their actions.

 

We have various ways of dealing with this. Contracts are not enforceable (or can be deemed unenforceable) against people who were not of sound mind when entering into them, meaning if you are contracting with someone, you have an incentive to make sure they are of sound mind at the time. There are a bunch of bars you have to clear in terms of mental capacity, non-coercion, and in some cases having obtained independent legal advice, in order to be able to enter into a valid contract, depending on the context.

As well, if someone is unwell but self-aware enough to know something is wrong, they have the option of granting someone else power of attorney, if they can demonstrate enough capacity to make that decision. If someone is unwell but not aware of the extent of their incapacity, it is possible for a relative to obtain power of attorney or guardianship. If a medical professional deems someone not to have capacity but there is no one able or willing to take on decision-making authority for that person, many polities have something equivalent to a Public Trustee, a government agency empowered to manage affairs on their behalf. And if you show up in hospital and you are not well enough to communicate your preferences and/or a medical professional believes you are not of sound mind, they may make medical decisions based on what they believe to be in your best interest.

These may not be perfect solutions (far from it) but "you're drunk" or "you're a minor" are not the only times when society understands that people may not have capacity to make their own decisions, and puts in place measures to get good decisions made on behalf of those who lack capacity.

Of course (and possibly, correctly) many of these measures require clear demonstration of a fairly extreme/severe lack of capacity, before they can be used, so there's a gap for those who are adult in age but childish in outlook.

Comment by Myron Hedderson (myron-hedderson) on Just admit that you’ve zoned out · 2024-06-10T23:01:59.994Z · LW · GW

Just during lectures or work/volunteer organization meetings. I don't tend to zone out much during 1:1 or very small group conversations, and if I do, asking someone to repeat what they said only inconveniences one or a few people - who would also be inconvenienced by my not being able to participate because I've stopped following - so I just ask for clarification. I find zoning out happens most often when no response is required from me for an extended period of time.

I occasionally do feel a little qualmy, but whenever I have asked, the answer has always been yes, and I keep the recordings confidential, reasoning that I do have a level of permission to hear/know the information and that the main concern people will have is that it not be shared in ways they didn't anticipate.

Comment by Myron Hedderson (myron-hedderson) on Just admit that you’ve zoned out · 2024-06-07T10:12:17.262Z · LW · GW

My solution is to use the voice recorder app on my phone, so I can review any points I missed after the fact, and to take notes with timestamps about where I zoned out so that I don't have to review the whole thing. If you have a wristwatch you can use the watch time rather than recorder time and sync up later, and it's not very obvious.

Comment by Myron Hedderson (myron-hedderson) on Power Law Policy · 2024-05-23T13:13:46.642Z · LW · GW

It would be cool if we could get more than 1% of the working population into the top 1% of earners, for sure. But we cannot. The question then becomes: how much of what a top-1% earner earns is because they are productive in an absolute sense (they generate $x in revenue for their employer or business), and how much is paid to them because they are the (relative) best at what they do and so have more bargaining power?

Increasing people's productivity will likely raise earnings. Helping people get into the top 1% relative to others just means someone else, who counterfactually would have been there, is not in the top 1%. Your post conflates the two a bit, and doesn't make the distinction between returns to relative position vs. returns to absolute productivity, measuring a "home run" as getting into the top x% rather than as achieving a specified earnings level.

If I look past that, I agree with the ideas you present for increasing productivity and focusing on making sure high-potential individuals achieve more of their potential. But... I am somewhat leery of having a government bureaucracy decide who is high potential and only invest in them. It might make more sense, given the returns on each high potential individual and the relatively small costs to making sure everyone has access to the things they would need to realize their potential, to just invest in everyone, as a strategy aimed at not missing any home runs.

Comment by Myron Hedderson (myron-hedderson) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-16T16:58:52.955Z · LW · GW

I've skimmed the answers so far and I don't think I'm repeating things that have already been said, but please feel free to let me know if I am and skip re-reading.

> What I know about science and philosophy suggests that determinism is probably true

What I know about science and philosophy suggests we shouldn't be really sure that the understanding we believe to be accurate now won't be overturned later. There are problems with our physics sufficient to potentially require a full paradigm shift to something else, as yet unknown. So if "determinism is true" is demotivating for you, then consider adding an "according to my current understanding" and a "but that may be incorrect" to that statement.

I also read in some of the discussion below that determinism isn't always demotivating for you - only in some cases, like when the task is hard and the reward, if any, is temporally distant. So I wonder how much determinism is a cause of your demotivation, rather than a rationalization of demotivation whose main cause is something else. If someone convinced you that determinism was false, how much more motivated do you expect you would be, to do hard things with long delays before reward? If the answer comes back "determinism is a minor factor" then focusing on the major factors will get you most of the way to where you want to be.

But, suppose determinism is definitely true, and is, on further reflection, confirmed as a major cause of your demotivation. What then?

This has actually been said in a few different ways below, but I'm going to try to rephrase. It's a matter of perspective. Let me give you a different example of something with a similar structure, that I have at times found demotivating. It is basically the case, as far as I understand, that slightly changing the timing of when people have sex with each other will mean a different sperm fertilizes a given egg. So our actions - for example, accidentally causing someone to pause while walking - ripple out and change the people who would otherwise have been born a generation hence, in very unpredictable ways whose effects probably dominate the fact that I might have been trying to be nice by opening a door for someone. It was nice of me to open the door, but whether changing the set of which billions of people will be born is a net good or a net bad is not something I can know.

One response to this is something like "focus on your circle of control - the consequences you can't control and predict aren't your responsibility, but slamming the door in someone's face would be bad even if the net effect including all the consequences that are unknowable to you could be either very good or very bad".

This is similar in structure to the determinism problem - the universe might be deterministic, but even if so, you don't and can't know what the determined state at each point in time is. What lies within your circle of control, as an incredibly cognitively limited tiny part of the universe, is only to make what feels like a choice to you about whether to hold a door open for someone or slam it in their face. From your perspective as a cognitively-bounded agent with the appearance of choice, making a choice makes sense. Don't try to take on the perspective of a cognitively-unbounded non-agent looking at the full state of the universe at all points in time from the outside and going "yep, no choices exist here" - you don't have the cognitive capacity to model such a being correctly, and letting how such a being might feel if it had feelings influence how you feel is a mistake. In my opinion, anyway.

Comment by Myron Hedderson (myron-hedderson) on How do high-trust societies form? · 2024-02-09T03:14:39.111Z · LW · GW

I'm unclear why you consider low-trust societies to be natural and require no explanation. To me it makes intuitive sense that small high-trust groups would form naturally at times, and sometimes those groups would, by virtue of cooperation being an advantage, grow over time to be big and successful enough to be classed as "societies".

I picture a high trust situation like a functional family unit or small village where everyone knows everyone, to start. A village a few kilometers away is low trust. Over time, both groups grow, but there's less murdering and thievery and expense spent on various forms of protection against adversarial behaviour in the high-trust group, so they grow faster. Eventually the two villages interact, and some members of the low-trust group defect against their neighbours and help the outsiders to gain some advantage for themselves, while the high trust group operates with a unified goal, such that even if they were similarly sized, the high trust group would be more effective. Net result, the high trust group wins and expands, the low trust group shrinks or is exterminated. More generally, I think in a lot of different forms of competition, the high trust group is going to win because they can coordinate better. So all that is needed is for a high-trust seed to exist in a small functional group, and it may grow to arbitrary size (provided mechanisms for detecting and punishing defectors and free-riders, of course).

I don't claim that this is a well-grounded explanation with the backing of any anthropological research, which is why I'm putting it as a comment rather than an answer. But I do know that children often grow up assuming that whatever environment they grew up in is typical for everyone everywhere. So if a child grows up in a functional family that cooperates with and supports each other, they're going to generalize that and expect others outside of their family to cooperate and support each other as well, unless and until they learn this isn't always the case. This becomes the basis for forming high-trust cooperative relationships with non-kin, where the opportunity exists. Seems to me a high-trust society is just one where those small seeds of cooperation have grown to a group of societal size.

Taking it back a step, it seems like we have a lot of instincts that aid us in cooperating with each other. Probably because those with those instincts did better than those without, because a human by itself is puny and weak and can only look in one direction at once and sometimes needs to sleep, but ten humans working together are not subject to those same constraints. And it is those cooperative instincts, like reciprocity, valuing fairness, punishment of defectors, and rewarding generosity with status, which help us easily form trusting cooperative relationships ("easily" relative to how hard it would be if we were fully selfish agents aiming only to maximize some utility function in each interaction, and we further knew that this was true of everyone we interacted with as well), which in turn are the basis for trust within larger-scale groups.

I mean, you're asking this question with the well-founded hope that someone is going to take their own time to give you a good answer, without being paid to do so or any credible promise of another form of reward. You would be surprised, I think, if the response to this request was an attempt to harm you in order to gain some advantage at your expense. If 10 people with a similar disposition were trapped on an island for a few generations, you could start a high-trust society, could you not?

Comment by Myron Hedderson (myron-hedderson) on 2023 Unofficial LessWrong Census/Survey · 2023-12-04T17:20:38.413Z · LW · GW

My brain froze up on that question. In order for there to be mysterious old wizards, there have to be wizards, and in order for the words "mysterious" and "old" to be doing useful work in that question the wizards would have to vary in their age and mysteriousness, and I'm very unsure how the set of potential worlds that implies compares to this one.

I'm probably taking the question too literally... :D

And, um, done.

Comment by Myron Hedderson (myron-hedderson) on Rational retirement plans · 2023-05-16T19:45:14.604Z · LW · GW

Generically, having more money in the bank gives you more options, being cash-constrained means you have fewer options. And, also generically, when the future is very uncertain, it is important to have options for how to deal with it. 

If how the world currently works changes drastically in the next few decades, I'd like to have the option to just stop what I'm doing and do something else that pays no money or costs some money, if that seems like the situationally-appropriate response. Maybe that's taking some time to think and plan my next move after losing a job to automation, rather than having to crash-train myself in something new that will disappear next year. Maybe it's changing my location and not caring how much my house sells for. Maybe it's doing different work. Maybe it's paying people to do things for me. Maybe it's also useful to be invested in the right companies when the economy goes through a massive upswing before the current system collapses, so that I for a brief time have a lot of wealth and can direct it towards goals that are aligned with my values rather than someone else's - hence index funds that buy me into a lot of companies.

Even if we eventually get to a utopia, the path to that destination could be rocky, and having some slack is likely to be helpful in riding that time out.

Another form of slack is learning to live on much less than you make - so the discipline required to accumulate savings could also pay off in terms of not being psychologically attached to a lifestyle that stops you from making appropriate changes as the world changes around you.

Of course "accumulate money so you have options when the world changes" is a different mindset than "save money so you can go live on a beach in 40 years". But money is sort of like fungible power, an instrumentally useful thing to have for many different possible goals in many different scenarios, and a useless thing to have in only a few.

Side note: "the amount a dollar can do goes up, the value of a dollar collapses" strikes me as implausible. Your story for how that could happen is people hit a point of diminishing returns in terms of their own happiness... but there are plenty of things dollars can be used for aside from buying more personal happiness. If things go well, we're just at the start of earth-originating intelligence's story, and there are plenty of ways for an investment made at the right time to ripple out across the universe. If I was a trillionaire (or a 2023-hundred-thousandaire where the utility of a dollar has gone up by a factor of 10 million, whatever), I could set up a utopia suited to my tastes and understanding of the good, for others, and that seems worth doing even if my subjective day-to-day experience doesn't improve as a result. As just one example. In any case, being at the beginning of a large expansion in the power of earth-originating intelligence, seems like just the sort of time when you'd like to have the ability to make a careful investment.

Comment by Myron Hedderson (myron-hedderson) on Mental Models Of People Can Be People · 2023-04-26T18:15:36.352Z · LW · GW

To be clear, I do not endorse the argument that mental models embedded in another person are necessarily that person. It makes sense that a sufficiently intelligent person with the right neural hardware would be able to simulate another person in sufficient detail that that simulated person should count, morally.

I appreciate your addendum, as well, and acknowledge that yes, given a situation like that it would be possible for a conscious entity which we should treat as a person to exist in the mind of another conscious entity we should treat as a person, without the former's conscious experience being accessible to the latter.

What I'm trying to express (mostly in other comments) is that, given the particular neural architecture I think I have, I'm pretty sure that the process of simulating a character requires use of scarce resources, such that I can only do it by being that character (feeling what it feels, seeing in my mind's eye what it sees, etc.), not by running the character in some separate thread.

Some testable predictions: if I could run two separate consciousnesses simultaneously in my brain (me plus one other, call this person B) and then have a conversation with B, I would expect the experience of interacting with B to be more like the experience of interacting with other people, in specific ways that you haven't mentioned in your posts. Examples: I would expect B to misunderstand me occasionally, to mis-hear what I was saying and need me to repeat, to become distracted by its own thoughts, to occasionally actively resist interacting with me. Whereas the experience I have is consistent with the idea that in order to simulate a character, I have to be that character temporarily - I feel what they feel, think what they think, see what they see, their conscious experience is my conscious experience, etc. - and when I'm not being them, they aren't being. In that sense, "the character I imagine" and "me" are one. There is only one stream of consciousness, anyway.

If I stop imagining a character, and then later pick back up where I left off, it doesn't seem like they've been living their lives outside of my awareness and have grown and developed, in the way a non-imagined person would grow and change and have new thoughts if I stopped talking to them and came back and resumed the conversation in a week. Rather, we just pick up right where we left off, perhaps with some increased insight (in the same sort of way that I can have some increased insight after a night's rest, because my subconscious is doing some things in the background) but not to the level of change I would expect from a separate person having its own conscious experiences.

I was thinking about this overnight, and an analogy occurs to me. Suppose in the future we know how to run minds on silicon and store them in digital form. Further suppose we build a robot with processing power sufficient to run one human-level mind. In its backpack, it has 10 solid state drives, each with a different personality and set of memories, some of which are backups, plus one solid state drive plugged into its processor, which it is running as "itself" at this time. In that case, would you say the robot + the drives in its backpack = 11 people, or 1?

I'm not firm on this, but I'm leaning toward 1, particularly if the question is something like "how many people are having a good/bad life?" - what matters is how many conscious experiencers there are, not how many stored models there are. And my internal experience is kind of like being that robot, only able to load one personality at a time. But sometimes able to switch out, when I get really invested in simulating someone different from my normal self.

EDIT to add: I'd like to clarify why I think the distinction between "able to create many models of people, but only able to run one at a time" and "able to run many models of people simultaneously" is important in your particular situation. You're worried that by imagining other people vividly enough, you could create a person with moral value who you are then obligated to protect and not cause to suffer. But: If you can only run one person at a time in your brain (regardless of what someone else's brain/CPU might be able to do) then you know exactly what that person is experiencing, because you're experiencing it too. There is no risk that it will wander off and suffer outside of your awareness, and if it's suffering too much, you can just... stop imagining it suffering.

Comment by Myron Hedderson (myron-hedderson) on Mental Models Of People Can Be People · 2023-04-26T00:46:25.275Z · LW · GW

I elaborated on this a little elsewhere, but the feature I would point to would be "ability to have independent subjective experiences". A chicken has its own brain and can likely have a separate experience of life which I don't share, and so although I wouldn't call it a person, I'd call it a being which I ought to care about and do what I can to see that it doesn't suffer. By contrast, if I imagine a character, and what that character feels or thinks or sees or hears, I am the one experiencing that character's (imagined) sensorium and thoughts - and for a time, my consciousness of some of my own sense-inputs and ability to think about other things is taken up by the simulation and unavailable for being consciously aware of what's going on around me. Because my brain lacks duplicates of certain features, in order to do this imagining, I have to pause/repurpose certain mental processes that were ongoing when I began imagining. The subjective experience of "being a character" is my subjective experience, not a separate set of experiences/separate consciousness that runs alongside mine the way a chicken's consciousness would run alongside mine if one was nearby. Metaphorically, I enter into the character's mindstate, rather than having two mindstates running in parallel.

Two sets of simultaneous subjective experiences: Two people/beings of potential moral importance. One set of subjective experiences: One person/being of potential moral importance. In the latter case, the experience of entering into the imagined mindstate of a character is just another experience that a person is having, not the creation of a second person.

Comment by Myron Hedderson (myron-hedderson) on Mental Models Of People Can Be People · 2023-04-25T19:24:38.095Z · LW · GW

Having written the above, I went away and came back with a clearer way to express it: For suffering-related (or positive experience related) calculations, one person = one stream of conscious experience, two people = two streams of conscious experience. My brain can only do one stream of conscious experience at a time, so I'm not worried that by imagining characters, I've created a bunch of people. But I would worry that something with different hardware than me could.

Comment by Myron Hedderson (myron-hedderson) on Mental Models Of People Can Be People · 2023-04-25T18:55:00.522Z · LW · GW

I have a question related to the "Not the same person" part, the answer to which is a crux for me.

Let's suppose you are imagining a character who is experiencing some feeling. Can that character be feeling what it feels, while you feel something different? Can you be sad while your character is happy, or vice versa?

I find that I can't - if I imagine someone happy, I feel what I imagine they are feeling - this is the appeal of daydreams. If I imagine someone angry during an argument, I myself feel that feeling. There is no other person in my mind having a separate feeling. I don't think I have the hardware to feel two people's worth of feelings at once, I think what's happening is that my neural hardware is being hijacked to run a simulation of a character, and while this is happening I enter into the mental state of that character, and in important respects my other thoughts and feelings on my own behalf stop.

So for me, I think my mental powers are not sufficient to create a moral patient separate from myself. I can set my mind to simulating what someone different from real-me would be like, and have the thoughts and feelings of that character follow different paths than my thoughts would, but I understand "having a conversation between myself and an imagined character", which you treat as evidence there are two people involved, as a kind of task-switching, processor-sharing arrangement - there are bottlenecks in my brain that prevent me from running two people at once, and the closest I can come is thinking as one conversation partner, then the next, and then back to the first. I can't, for example, have one conversation partner saying something while the other is not paying attention because they're thinking of what to say next, and so only catches half of what was said and responds inappropriately - which is a thing that I hear is not uncommon in real conversations between two people. And if the imagined conversation involves a pause which, in a conversation between two people, would involve two internal mental monologues, I can't have those two mental monologues at once. I fully inhabit each simulation/imagined character as it is speaking, and only one at a time as it is thinking.

If this is true for you as well, then in a morally relevant respect I would say that you and whatever characters you create are only one person. If you create a character who is suffering, and inhabit that character mentally such that you are suffering, that's bad because you are suffering, but it's not 2x bad because you and your character are both suffering - in that moment of suffering, you and your character are one person, not two.

I can imagine a future AI with the ability to create and run multiple independent human-level simulations of minds and watch them interact and learn from that interaction, and perhaps go off and do something in the world while those simulations persist without it being aware of their experiences any more. And for such an AI, I would say it ought not to create entities that have bad lives. And if you can honestly say that your brain is different than mine in such a way that you can imagine a character and you have the mental bandwidth to run it fully independently from yourself, with its own feelings that you know somehow other than having it hijack the feeling-bits of your brain and use them to generate feelings which you feel while what you were feeling before is temporarily on pause (which is how I experience the feelings of characters I imagine), and because of this separation you could wander off and do other things with your life and have that character suffer horribly with no ill effects to you except the feeling that you'd done something wrong... then yeah, don't do that. If you could do it for more than one imagined character at a time, that's worse, definitely don't.

But if you're like me, I think "you imagined a character and that character suffered" is functionally/morally equivalent to "you imagined a character and one person (call it you or your character, doesn't matter) suffered" - which, in principle that's bad unless there's some greater good to be had from it, but it's not worse than you suffering for some other reason.

Comment by Myron Hedderson (myron-hedderson) on You Don't Exist, Duncan · 2023-02-03T14:09:45.488Z · LW · GW

 I think there are at least two levels where you want change to happen - on an individual level, you want people to stop doing a thing they're doing that hurts you, and on a social level, you want society to be structured so that you and others don't keep having that same/similar experience. 

The second thing is going to be hard, and likely impossible to do completely. But the first thing... responding to this: 

> It wouldn't be so bad, if I only heard it fifty times a month.  It wouldn't be so bad, if I didn't hear it from friends, family, teachers, colleagues.  It wouldn't be so bad, if there were breaks sometimes.

I think it would be healthy and good and enable you to be more effective at creating the change you want in society, if you could arrange for there to be some breaks sometimes. I see in the comments that you don't want to solve things on your individual level completely yet because there's a societal problem to solve and you don't want to lose your motivation, and I get that. (EDIT: I realize that I'm projecting/guessing here a bit, which is dangerous if I guess wrong and you feel erased as a result... so I'm going to flag this as a guess and not something I know. But my guess is the something precious you would lose by caring less about these papercuts has to do with a motivation to fix the underlying problem for a broader group of people). But if you are suffering emotional hurt to the extent that it's beyond your ability to cope with and you're responding to people in ways you don't like or retrospectively endorse, then taking some action to dial the papercut/poke-the-wound frequency back a bit among the people you interact with the most is probably called for.

With that said, it seems to me that while it may be hard to fix society, the few trusted and I assume mostly fairly smart people who you interact with most frequently can be guided to avoid this error, by learning the things about you that don't fit into their models of "everyone", and that it would really help if they said "almost all" rather than "all". People in general may have to rely on models and heuristics into which you don't fit, but your close friends and family can learn who you are and how to stop poking your sore spots. This gives you a core group of people who you can go be with when you want a break from society in general, and some time to recharge so you can better reengage with changing that society.

As for fixing society, I said above that it may be impossible to do completely, but if I was trying for most good for the greatest number, my angle of attack would be, make a list of the instances where people are typical-minding you, and order that list based on how uncommon the attribute they're assuming doesn't exist is. Some aspects of your cognition or personality may be genuinely and literally unique, while others that get elided may be shared by 30% of the population that the person you're speaking to at the moment just doesn't have in their social bubble. The things that are least uncommon are both going to be easiest to build a constituency around and get society to adjust to, and have the most people benefit from the change when it happens.