Will there be a policy on banned topics, such as e.g. politics, or will that be left to author discretion as part of moderation? Perhaps topics that are banned from promotion / front page (regardless of upvotes and comments) but are fine otherwise?
If certain things are banned, can they please be listed and defined more explicitly? This came up recently in another thread and I wasn't answered there.
Thanks! I'm trying to return to a more active commenting lifestyle.
Yours is a reasonable definition of "is it political?" but I think it's a very different sense from that which was forbidden on LW 1.0, as I understood it. The idea there was to avoid discussing any subject that was a live political debate (implicitly: in the US), because those are the debates that seemed most likely to become mindkilling.
So it was fine to say slavery and Nazism are bad, because (in the US) these are politically settled subjects, even though they're very political in themselves. And it was also fine to argue for very far-outside-the-mainstream ideas like UBI or cryonics, because they are so fringe that there's no politically or culturally active movement attacking them. But it probably wouldn't be fine to argue about abortion rights or open borders.
I'd like a clearer definition of what counts as politics. Some examples are easy to classify, but this post doesn't feel that way to me.
There was a recent post about the ethics of eating meat, and earlier posts on EA. Presumably these didn't count as "politics". But those two subjects are some of the examples given in the current post, and some of the others are uncontroversial (e.g. boo slavery).
ETA: ChristianKI's comment does seem clearly more politicised than the OP. But I wouldn't have predicted the discussion would go there just from reading the OP. And the thread I'm commenting on is older than that comment.
I came up with many reasons why this approach might fail. The fact that there are so many suggests that I don't have a very good model and/or may be engaging in motivated reasoning.
In the general case, the recipients of the signals may not understand what is being signalled, or that signalling is involved at all, so they won't accept a substitute signal. E.g., most people are unaware of, or disbelieve, the claim that education serves more for signalling than for teaching. They would not hire people who were merely accepted to MIT and would have received good grades, because they think MIT teaches important job skills.
There are several other potential problems with the given example, which may not be problems with the general approach:
- Most employers don't want to innovate in recruiting strategies. They're already trying to innovate in R&D or product design or marketing. It makes sense to be conservative elsewhere, due to limited resources (you need good HR to execute an unconventional recruitment strategy) and to hedge risk. They will not want to be the first to adopt a new strategy unless they think it will be wildly better than the standard one. But hiring non-graduates is only better in that it saves money, and that's not usually a big enough advantage unless the company simply can't afford to hire graduates. (See: startups founded and staffed by college dropouts.)
- There are many potential principal-agent problems. An individual recruiter or project manager may not want to make unconventional choices because they'll be blamed personally if it turns out badly ("no-one gets fired for buying IBM"). A team or division lead may not want to hire non-grads because their peers and bosses don't understand their logic, so they'll be looked down on for having a team with more "junior" people. College graduates on the regular career track may be hostile to non-grads because they perceive them as unfairly being allowed to skip the hard work that the graduates put in.
- Large companies (and government agencies, etc.) often regulate the positions offered to company employees in terms of job title, compensation, job progression, and requirements like diplomas. A large company may be the best place to experiment with new, nonstandard approaches to hiring, because it can survive a small experiment failing. But it may be less able to do so in practice, because HR and related departments are vertically integrated, and a software programming team doesn't have the formal authority to create a new type of position with customized entry criteria and salary.
seeds and placentas won’t exist for a long age yet, and there’s no other way to reproduce outside of standing water bodies.
Did you mean to write 'eggs' rather than placentas? Land-dwelling animals evolved long before placentas did. And long before tetrapods did - arthropods came onto land first, as did many other ("worm") phyla such as nematodes and annelids - land ecology wouldn't be the same without insects and earthworms!
Plants, too, can reproduce on land without seeds, and for a long time they did: witness the bryophytes (liverworts, mosses) and seedless vascular plants such as horsetails.
In any case, needing water temporarily for reproduction does not prevent adults from living on land. All life needs water in the end.
Aleksei, do you mean they would have sex with the children once and then ask them if they'd like to leave their parents and have sex every day for the rest of their lives? :-)
Anyway, it takes too long for unmodified human children to develop proper minds in order to consent to anything like this. What do you do about pain incurred at the age of a few months? A year?
I'm also bothered that nobody has mentioned non-human animals. Why should cats and chimps and dolphins have to suffer pain and romantic disappointment? The Super Happy People should modify all the higher life forms and completely reshape the ecology.
How the fluff would they be able to feel all possible types of feelings?
They seem to be limited to mapping others' experiences to their own feelings for analogous experiences. For instance, they first mapped giving birth to pleasure. Hardly epic angelic universal empathy powers.
What else is there, though? How do you define the feelings "pleasure" and "pain", distinctly from "goals sought" and "things avoided"? How do you empathize with a really alien intelligence without mapping its behavior and experiences to your own?
Do babies realize what will happen to them?
I asked about this in yesterday's comments thread, but I guess everyone's moved here since then :-)
My intuition is that selection pressure on young aliens (to do whatever it takes not to get eaten) would be stronger than most selection pressure adults experience: most adults produce hundreds of offspring, which means only one offspring out of several hundred survives; and in a technological society most if not all adults live to reproduce.
We should see children evolving to escape being eaten. If running faster doesn't work, then by hurting other children to make them run slower. Or by children eating one another themselves. Or by a social organization that lets a few bullies/rulers/... send other children to be eaten in their stead. Or by evolving to be poisonous or at least tasting really bad and having orange-black striping to warn your parents :-)
Also, the period of time from birth to the beginning of the (post-winnowing) growth spurt would be compressed to the utter minimum their physiology allows. (The faster you grow up, the smaller the window during which you can be eaten.) On that basis, the pre-winnowing children may not have much time to philosophize about being eaten.
Eventually what you get is a creature that's born sentient, manages to learn about the day/night cycle (plus whatever inborn "instinct" provides), and then the winnowing takes place at the age of two days, before the growth spurt can begin. Very stylized, of course.
I don't think the Pilot is really taking the time to think through all the logical consequences of what he's saying.
Indeed, even if he wants to make war, the logical next step would still be to keep talking to the aliens and learning as much as possible about them. Then maybe trying to capture or infiltrate their ship. Or asking for escort to their system and returning with strategic knowledge about that. Preparing a surprise attack. Things like that.
Destroying the first contact alien ship would be stupid.
Also, if some people care so much about this crusade that they're willing to go against the rest of human society and risk a huge war, then logically they ought to have mounted a huge operation long ago to sweep the galaxy looking for morally unsuitable aliens, killing or forcefully transforming any alien species that (1) they judge to be sufficiently intelligent and (2) whose behavior doesn't conform to human morals.
Or they might realize there's no real upper bound on the amount of suffering that might potentially be taking place somewhere out of sight. Especially if you give more weight to the suffering or death of more intelligent individuals. In which case they might want to make an alliance with the Baby Eaters to search the galaxy for cultures so alien that they would be abominations to both species. And only exterminate the Baby Eaters once the galaxy has been swept clean.
Put like that, it seems to me a really bad idea. But isn't that what follows from the Pilot's argument, if stopping the Baby Eating is so important that they're willing to risk the extermination of humanity over it? (And there's no way they could be sure of the Baby Eaters' potential in a species-wide war just from reading one badly translated and possibly censored alien library for a day. So they're proposing to go to war without being sure of victory.)
Sentience DOES make a difference. You don't frown on your cat for hunting mice, but you do frown on your dog for doing it with children.
That's at least partly due to speciesism. How many people have gone on crusades to stop leopards from eating chimpanzees? For that matter, how many people devote their lives to stopping other humans from eating chimpanzees?
As for cannibalism, it seems to me that its role in Eliezer's story is to trigger a purely illogical revulsion in the humans who anthropomorphise the aliens.
Imagine two completely different alien species living in one (technological) society, where each eats and "winnows" the other's children. This is the natural, evolved behavior of both species, just as big cats eat apes and (human) apes eat antelopes.
No cannibalism takes place, but the same amount of death and suffering is present as in Eliezer's scenario. Should we be less or more revolted at this? Which scenario has the greater moral weight? Should we say the two-species configuration is morally superior because they've developed a peaceful, stable society with two intelligent species coexisting instead of warring and hunting each other?