I think that people making more top-level posts makes the community better off. A new post that someone has put work into tends to be much better content overall than a top comment that might just be stating everyone's immediate reaction. Top-level posts are also important for generating discussion, and for that reason they can be valuable even when they are wrong (though obviously they are better when they are right).
I've noticed that for many LW posts - and EA posts to a lesser extent - it's very common for a comment to get more upvotes than the post itself. Since it's usually much harder to write a post than to comment on one, it seems like this isn't incentivising posting strongly enough.
This also seems to apply to Facebook posts and likes in the LW and EA groups.
I'm a 22-year-old male and about 97% introverted. I've lived with a number of different roommates at work sites, and I am living with my parents right now. Living alone would have been preferable in every case. I might enjoy living in an EA or rationalist household, though.
That's a very good point about radiology being replaceable.
Hmm, would you say there is still less social interaction in surgery than most other specialties?
Many surgeries are quite long and require you to stand for hours at a time, not necessarily in a very comfortable position.
I can take physical discomfort.
To become good, you'll have to specialize, so you'll be doing the same procedures over and over again.
Yeah, I guess doing the same procedure over and over again might not be super interesting or educational.
That's a useful datapoint, thanks.
I think social skills tend to improve over time, and having good social skills makes social interaction more fun.
I'm the guy eggman is referring to :) Thanks for all the info!
No, I do not like working with people. I would aim for surgery or radiology for this reason. I currently do not perform well under social pressure, but my anxiety should diminish with time. Yes, I think I am good at explaining things in simple terms. I prefer less social interaction. I could tolerate a strict hierarchy. I don't handle sleep deprivation well. I do not handle uncertainty particularly well. Yes, I think I could handle accidents better than most people.
Med school isn't generally about that. Would it be agony for him to memorize loads of facts without questioning/understanding them too much, then forget them because he doesn't need them for anything? Also, much of the stuff you have to memorize after the first 1-2 years has nothing to do with human biology. There are some challenging moments with complicated patients, but the work is mostly quite simple and algorithmic.
That's bad news but not a deal breaker.
For reasons I'd rather not get into it's been repeatedly shown my revealed preference for torture is not much less than other kinds of time consuming distractions,
Most people are at an extremely low risk of actually getting tortured, so looking at their revealed preferences for it would be hard. The odd attitudes people have toward low-risk, high-impact events would also confound that analysis.
It seems like a good portion of people's long-term plans are also the things that make them happy. The way I think about this is to ask whether I would still want to want to do something if it would not satisfy my wanting or liking systems when I performed it. The answer is usually no.
and I can't recall ever having a strong hedonic experience "as myself" rather than immersed in some fictional character.
I'm not quite sure what you mean here.
Our preferences after endless iterations of self-improvement and extrapolation are probably entirely uncorrelated with what they appear to be for current humans.
It seems to me that there would be problems like the voting paradox for CEV, so the process would involve judgment calls, and I am not sure I would agree with the judgment calls someone else made for me if that was how CEV was to work. Being given superhuman intelligence to help me decide my values would be great, though.
I also have some of the other problems with CEV that are discussed in this thread: http://lesswrong.com/lw/gh4/cev_a_utilitarian_critique/
Hmm, well, we could just differ in fundamental values. Based on the behavior of most people in their everyday lives, it seems strange to me that they wouldn't value experiential things very highly. And it seems that, if they did, their values about what to do with the universe would share this focal point.
I'll share the intuition pumps and thought experiments that led to my values, because that should make them seem less alien.
So when I reflect on what my strongest self-regarding values are, it's pretty clear to me that "not getting tortured" is at the top of my preferences. I do seem to value some non-experiential things, such as truth and wanting to remain the same sort of person I currently am, but these just pale in comparison to my preference for not_torture. I don't think that most people on LW really consider torture when they reflect on what they value.
I also really strongly value the peak hedonic experiences I have had, but I haven't experienced any with an intensity that could compare directly to what I imagine real torture would be like, so I use torture as an example instead. The strongest hedonic experiences I have had are nights where I successfully met interesting, hot women and had sex with them. I would certainly trade away a number of these nights to avoid a night of real torture, so they can be described on the same scale.
My other-regarding desires are straightforwardly about the well-being of other beings: I want their desires satisfied in the same way that I would want mine satisfied if I had the same desires they have. So if they have desires A, B, and C, I would want the same thing to happen for them as I would want for myself if I had that exact same set of desires.
Trying to maximize things other than happiness and suffering involves trading off against these two things, and it just does not seem worth it to do that. The action that maximizes hedons is also the action that the most beings care the most about happening, and it feels kind of arbitrary and selfish to do something else instead.
I accept these intuition pumps and that leads me to hedonium. If it's unclear how exactly this follows, I can elaborate.
This may well be true but I don't see how you can be very certain about it in our current state of knowledge. Reducing this uncertainty seems to require philosophical progress rather than scientific progress.
Yeah, I think that making more philosophical or conceptual progress is higher value relative to cost than doing more experimental work.
Suppose I told you that your specially designed collection of atoms optimized for hedon production can't feel happiness because it's not conscious. Can you conduct an experiment to disprove this?
The question seems like it probably comes down to 'how similar is the algorithm this thing is running to the algorithms that cause happiness in humans (and, I'm very sure, in some other animals as well)?' And if running the exact same algorithm in a human would produce happiness, and that person could tell us so, that would be pretty conclusive.
If Omega was concerned about this sort of thing (and didn't know it already), it could test exactly which changes in physical conditions led to changes or lapses in its own consciousness and find out that way. That seems like a potential near-solution to the hard problem of consciousness that I think you are talking about.
What kind of scientific progress are you envisioning, that would eventually tell us how much hedonic value a given collection of atoms represents? Generally scientific theories can be experimentally tested, but I can't see how one could experimentally test whether such a hedonic value theory is correct or not.
You apply your moral sentiments to the facts to determine what to do. As you suggest, you don't look for them in other objects. I wouldn't be testing my moral sentiments per se; rather, what I want to do with the world depends on exactly how it works, so testing that so I can best achieve my goals would be great.
Figuring out more about what can suffer and what can feel happiness would be necessary, and some other questions would be useful to answer.
Moral realism vs. non-realism can be a long debate, but hopefully this will at least tell you where we are coming from, even if you disagree.
I currently have a general sense of what it would look like, but definitely not a naturalistic definition of what I value. I can think of a couple of different ways that suffering and happiness could turn out to work that would alter what sort of wireheading or hedonium I would want to implement, but not drastically; i.e., it would not make me reject the idea.
I'm not sure that people would generally start wanting the same sorts of things I do if they had this knowledge, and encouraging other people to do research so that I could later get access to it would have a poor rate of return. So it seems like a better idea to encourage people to implement these somewhat emotionally salient things when they are able to, rather than working on very expensive science myself. I'm not sure I'd be around to see the time when it might be applied, and even then I'm not sure how likely most people would be to implement it.
Having said that, since many scientists don't have these values, there will be some low-hanging fruit in looking at and applying previous research, and I intend to do that. I just won't make a career out of it.
I think that moral realism is a non-starter, so I ignored that part of your question, but I can go into detail on that if you would like.
Would you be up for creating wireheaded minds if they didn't care about interacting with other people?
I'm not sure that interacting with people is the most important part of my life, and I'd be fine living a life without that feature, provided it was otherwise good.
Those two babies differ in that they have different futures, so it would not be wrong to treat them differently such that suffering is minimized (and you should). And it would not be speciesist to do so, because there is that difference.
I'm sorry, I'm confused. Which two situations?
A) Being tortured as you are now
B) Having your IQ and cognitive abilities lowered, then being tortured.
EDIT:
I am asking because it is useful to consider pure self-interest: it seems like a failure of a moral theory if it suggests people act against their self-interest without some compensating good. If I want to eat an apple but my moral theory says I shouldn't, even though doing so wouldn't harm anyone else, that seems like a point against that moral theory.
I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I object to being tortured, but it's not the only way.
Different cognitive abilities would matter in some ways for how much suffering is actually experienced, but not as much as most people think. There are also situations where lower cognitive ability could increase the amount an animal suffers: while a chicken is being tortured, it would not really be able to hope that the situation will change.
If I had the mental capacity of a chicken, it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.
If not morally, do the two situations not seem equivalent in terms of your non-moral preference for either? In other words, would you prefer one over the other in purely self-interested terms?
I would strongly prefer B. Is that what you're asking?
I was just making the point that if your only reason for thinking it would be worse for you to be tortured now was that you would suffer more overall through long-term memories, we could just stipulate that you would be killed afterward in both situations, so long-term memories wouldn't be a factor.
I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.
Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?
Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.
What if you were killed immediately afterwards, so long term memories wouldn't come into play?
Chickens and cattle who are raised ethically (which can still produce decent yields, though obviously less than factory farms) have lower levels of stress hormones than comparable wild animals.
Do you happen to have a source for this? Not that I particularly doubt this, but it would be useful information.
They are also bred to mature faster, and I think this can lead to similar problems. Manipulating the lighting to affect their circadian rhythm also helps make them mature faster.
If you found that you cared much more about your present self than your future self, you might reflect on that and decide that, because those two selves are broadly similar, you would want to change your mind about this case, even if your sentiments don't count them as similar right now.
This article is trying to get you to undertake similar reflections about pets and humans vs. other animals.
It could be that the vegetarian food you are eating doesn't have much protein in it, or that the protein source doesn't have all the amino acids. There is certainly vegetarian food that does have these things; it just takes more knowledge and meal planning than a meat diet does.
Protein powder can also be helpful for vegetarians (and everyone). I recommend pea protein powder.
I don't think that very many people would accept extreme harm to have these things, though. I used to think that I valued some non-experiential things very strongly, but I don't think I was taking seriously how strong my preference not to be tortured is. And for most people, I don't think there are peak levels of those three things that could outweigh torture.
Thrustvectoring said:
I don't think that some animals are capable of suffering
From what Thrust has said, I think it's ambiguous whether he thinks animals can't suffer and doesn't care about them for that reason, or whether he just doesn't care about animal suffering as you describe. Or, more likely, he is in some middle state.
As to your second point, yes, that's the approach. And it seems that is largely what is happening when it comes up in discussion here.
For animals that are r-selected - in other words, those having many offspring in the hope that some will survive - the vast majority of the offspring die very quickly. Most species of fish and amphibians and many less complex animals do this; 99.9% of them dying before reaching adulthood might be a good approximation for some species. A painful death doesn't seem worth a brief life as a wild animal.
It's true that most people wouldn't function optimally if they were not somewhat happy, and extrapolating this to other animals that seem similar to us in basic emotions, I would agree that adult wild animals seem like they would live an alright life.
Most people believe that chickens suffer. They seem to have all the right parts of the brain and the indicative behaviors and everything. What's your theory that says that humans suffer but chickens don't?
So just kill all the farm animals painlessly now? Sure, that sounds good. But if farm animals will still be raised, then it seems there is still a problem. Or if you are just talking about ways of making slaughter painless while continuing to factory farm, that sounds better than nothing.
Very interesting. What were they experts in? And how many people responded?
I mostly do it by thinking about what I would accept as evidence of pain in more complex animals and seeing if it is present in insects. Complex pain behavior and evolutionary and functional homology relating to pain are things to look for.
There is quite a bit of research on complex pain behavior in crabs by Robert Elwood. I'd link his site, but it doesn't seem to be up right now. You should be able to find the articles, though. Crabs have about 100,000 neurons, which is around what many insects have.
Here is a PDF of a paper finding that a number of common human mind-altering drugs affect crawfish and fruit flies.
Yes. Bees and cockroaches both have about a million neurons, compared with maybe 100,000 for most insects.
I think there are good arguments for suffering not being weighted by number of neurons, and if you assign even a 10% probability to that being the case, you end up with insects (and maybe nematodes and zooplankton) dominating the utility function because of their overwhelming numbers.
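To make the arithmetic explicit (with rough, made-up round numbers rather than real estimates): suppose there are around 10^18 insects and around 10^10 humans. Even at only 10% credence in equal per-individual weighting, the expected number of equally-weighted insect minds is 0.1 × 10^18 = 10^17, which exceeds the human total by about seven orders of magnitude.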
Having said that, ways of increasing the well-being of these animals may be quite a bit different from increasing it for larger animals. In particular, because so many of them die within the first few days of life, their average life quality seems like it would be terrible. So reducing their populations looks like the current best option.
There may be good instrumental reasons for focusing on less controversial animals and hoping that they promote the kind of antispeciesism that spills over to concern about insects and does work for improving similar situations in the future.
Well, if vegan/vegetarian outreach is particularly effective, then it may do more to develop lab meat than just donating to lab meat causes directly (because there would be more people interested in this and similar technologies). Additionally, making people vegan/vegetarian may have a stronger effect in promoting antispeciesism in general, which seems like it would be of larger overall benefit than just ending factory farming. This seems like it would happen because thoughts follow actions.
Yeah, I would like a ride and it's Max.
Wow, this sounds awesome. I hope you drive then, I'd come for sure.
There is a lot of diversity of opinion among philosophers, and while that may be true of the discipline as a whole, there is some good stuff to be found there. I'd recommend staying here for the most part rather than wading through philosophy elsewhere, though.
Also, many moral philosophers may have very different moral sentiments from you, and maybe that makes them seem more like idiots than they actually are. These differences in moral sentiment concern whether to accept consequentialism at all, not just disputes within consequentialism, among other things.
I think I am less productive when someone else is around. It seems like I am less effective at my current job when coworkers are around, and certainly when I am being evaluated at work. This may be because I am socially anxious and socially maladjusted, though.
No, though I admit it has felt like that for me at some points in my life. Even if I did, there are a bunch of reasons why I would not trust that intuition.
I like certain things and dislike certain things, and in a certain sense I would be mistaken if I were doing things that reliably caused me pain. That certain sense is that if I were better informed, I would not take that action. If, however, I liked pain, I would still take that action, and so I would not be mistaken. I could go through the same process to explain why a sadist is not mistaken.
I do not know what else to say except that this is just an appeal to intuition, and that specific intuitions are worthless unless they are proven to reliably point towards the truth.
I don't directly apprehend anything as being "good" or "bad" in the moral realist sense, and I don't count other people's accounts of directly apprehending such things as evidence (especially since schizophrenics and theists exist).
I'm curious to know whether there's actually somebody who picks “Neither, both are good.”
I saw someone who answered that way, but it must have been as a joke. Not a good thing to do for matching purposes...
You could reduce human suffering to 0 by reducing the number of humans to 0, so there's got to be another value greater than reducing suffering.
Almost all hedonistic utilitarians are concerned with maximizing happiness as well as minimizing suffering, including Brian. The reason he talks about suffering so much is that most people rank a unit of suffering as, say, a -3 experience and a unit of happiness as, say, a +1 experience, and he thinks that there is much more suffering than happiness in the world and that it is easier to prevent.
(Sorry if I got any of this wrong, Brian.)
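To spell out what those illustrative weights imply: with suffering at -3 per unit and happiness at +1 per unit, preventing one unit of suffering is worth as much as creating three units of happiness, so an intervention that averts 10 units of suffering (worth +30) beats one that creates 25 units of happiness (worth +25), even though the latter touches more raw experience.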
It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person
I am not sure that the hedonistic utilitarian agenda is high status. The most plausible cynical/psychological critique of hedonistic utilitarians is that they are too worried about ethical consistency and about coherently extrapolating a simple principle from their values.
I tried briefly to find some similar books but couldn't see any others.
Some non-fiction books I really liked recently that might interest LessWrong:
Ubersleep: Nap-Based Sleep Schedules and the Polyphasic Lifestyle
Procrastination: Why You Do It, What To Do About It
The 10,000 Year Explosion: How Civilization Accelerated Human Evolution
I've rated about 500 as well, and I also think the recommendation system sucks. The most common explanation it gives for a book recommendation is that I've added some other individual book. I would want it to give me recommendations based on multiple books, i.e., on what other people on the site who liked those same books also liked. It also almost never updates.
Seems like cloning ancient humans wouldn't really do any good in preventing a second apocalypse. Unless you were positing that there would still be a preserved body around and you were talking about that.
I like many of his essays. In any case, he doesn't discuss this or the evolution thing in many of them, so it's fairly irrelevant.
I thought the first Saw film was awesome. It was a cool gory story about making the most of life. It's fiction, so nobody actually got hurt and there is no secondary consideration of awesomeness there.
Some people think that the prospect of making disabled kids commit suicide is awesome; fewer people think that actually doing so is awesome. I don't think that people who actually do so are awesome.
I think that's a relatively standard use of "awesome".
They trigger the ingroup fuzzies really well for me. I think quotes inspire me as well sometimes and it's otherwise hard to find quotes that inspire in the right direction.
I'd say something bad, because the money could be better spent. But if they weren't going to do effective altruism stuff with it, it's probably just neutral so far as I can tell.
There are a lot of things that that didn't cover. Go for it!
I have like 53% - 55% in the 50% category. 60% seems high. Since I have some knowledge of the questions, I would expect to answer above 50% correctly.
Well, I notice strong fears of being judged (and other emotions that I have determined to come from status concerns) surrounding the correct use of grammar and such, so I thought there was a chance that other people would share them. That seems like a point in its favor.
I was also thinking that if I were actually not concerned about status I probably would have stopped capitalizing things at all and I would have considered making other changes. That probably isn't true, though, because the instrumental value of grammar is still really high if it's a status concern for other people.
I would.