In the above examples, there may well be more net harm than gain from staying in an unpleasant relationship or firing a problematic employee. It's pretty case-by-case in nature, and you're not required to ignore your own feelings entirely. If not, yes, utilitarianism would say you'd be "wrong" for indulging yourself at the expense of others.
The same reason fat people can derail trolleys and businesspeople have lifeguard abilities, I'd imagine.
You pretty much got it. Eliezer's predicting that response and saying, no, they're really not the same thing. (Tu quoque)
EDIT: Never mind, I thought it was a literal question.
We encourage you to downvote any comment that you'd rather not see more of - please don't feel that this requires being able to give an elaborate justification. -LW Wiki Deletion Policy
Folks are encouraged to downvote liberally on LW, but the flip side is that people will downvote where they might otherwise just move on out of fear of offending someone or getting into an argument that doesn't interest them. You might want to be less sensitive if someone brings one of your posts to -1 - it's not really an act of aggression.
I sympathize. One of my professors jokes about having discovered a new optical illusion, then going to the literature and having the incredible good luck that, for once, nobody else had discovered it first.
This all seems to have more to do with rule consequentialism than deontology. This isn't necessarily a bad thing, and rule consequentialism has indeed been considered a halfway point between deontology and act consequentialism, but it's worth noting.
Disliking meetings and disliking reading in a crowded environment doesn't seem like much evidence that you're neither introverted nor extroverted (except that you're not one of Those Nasty Extraverts who supposedly keep fawning over meetings), which in turn doesn't seem like much evidence that the introvert/extrovert split isn't helpful. I can't enjoy parties or meetings, and I prefer to read in silence and work alone.
In accordance with ancient tradition, I took the survey.
If I unpacked "disbelieves in God" to "has not encountered a concept of God they both believed ("did not disbelieve", if you prefer) and did not consider a silly conception of God", would atheism still be meaningless? Would that be a horrible misconception of atheism?
Are you sure there's nothing bundled in with "God is Reality" beyond what you state? Let's say I said "God is Reality. Reality is not sapient and has never given explicit instructions on anything." Would you consider that consistent with your belief that God equals Reality?
I'm not trying for Socratic Method arguing here, I'm just not quite sure where you're coming from.
As a psychology student, I can say with some certainty that Watson is a behaviorist poster boy.
I figured it was because it was a surprising and more-or-less unsupported statement of fact (that turned out to be, according to the only authority anyone cited, false). When I read 'poor people are better long-term planners than rich people due to necessity' I kind of expect the writer to back it up. I would have considered downvoting it if it hadn't already been downvoted, and my preferences are much closer to socialist than libertarian.
I don't have an explanation for the parent getting upvoted beyond a 'planning is important' moral and some ideological wiggle room for being a quote, so I guess it could still be hypocrisy. Of course, as of the 2011 survey LW is 32% libertarian (compared to 26% socialist and 34% liberal), so if there is ideological bias it's of the 'vocal minority' kind.
Explain?
Caledonian hasn't posted anything since 2009, if you said that in hopes of him responding.
Depends on whether you're hallucinating everything or your vision has at least some bearing on the real world. I mean, I'd rather see spiders crawling on everything than be blind, since I could still see what they were crawling on.
It was grammar nitpicking. "The authors where wrong".
Unless you expect some factual, objective truth to arise about how one should define oneself, it seems fair game for defining in the most beneficial way. It's physics all the way down, so I don't see a factual reason not to define yourself down to nothing, nor do I see a factual reason to do so.
Good point.
I'm not talking about SI (which I've never donated money to), I'm talking about you. And you're starting to repeat yourself.
You guys are only being supposedly 'accurate' when it feels good. I have not said, 'all outsiders', that's your interpretation which you can subsequently disagree with.
You're misusing language by not realizing that most people treat "members of group A think X" as "a sizable majority of members of group A think X", or not caring and blaming the reader when they parse it the standard way. We don't say "LWers are religious" or even "US citizens vote Democrat", even though there's certainly more than one religious person on this site or Democrat voter in the US.
And if you did intend to say that, you're putting words into Manfred's mouth by assuming he's talking about 'all' instead.
If you know of any illusions that give inevitably ceasing to exist negative utility to someone leading a positive-utility life, I would love to have them dispelled for me.
I know I'll probably trigger a flamewar...
Nitpick: LW doesn't actually have a large proportion of cryonicists, so you're not that likely to get angry opposition. As of the 2011 survey, 47 LWers (or 4.3% of respondents) claimed to have signed up. There were another 583 (53.5%) 'considering it', but comparing that to the current proportion makes me skeptical they'll sign up.
A decision tree (the entirety of my game theory experience has been a few online videos, so I likely have the terminology wrong), with decision 1 at the top and the end outcomes at the bottom. The sections marked 'max' have the decider trying to pick the highest-value end outcome, and the sections marked 'min' have the decider trying to pick the lowest-value end outcome. The numbers at every level except the bottom propagate up depending on which option will be picked by whoever is currently doing the picking, so if Max and Min maximize and minimize properly, the tree's value is 6. I don't quite remember how the three branches being pruned off work.
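If I'm remembering the videos right, the pruning is alpha-beta pruning. Here's a minimal sketch of the idea in Python; the tree shape and leaf values are made up for illustration (they're not the tree from the video), though I've picked them so the propagated value comes out to 6:

```python
# Minimax with alpha-beta pruning over a small two-level tree.
# Leaves are plain numbers; internal nodes are lists of children.
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):  # leaf: its value is just the number
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # Min already has a better option elsewhere,
                break           # so the remaining children get pruned
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:   # Max already has a better option elsewhere
                break
        return value

# Max picks a branch, then Min picks a leaf within it.
tree = [[3, 6], [6, 9], [1, 2]]
print(minimax(tree, maximizing=True))  # -> 6; the second leaf of [1, 2] is never examined
```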
I'm pretty sure we do see everyone doing it. Randomly selecting a few posts: in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments, and the Rationally Irrational comments also have more upvoted than not.
It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value.
EDIT: Just in case this comes off as disparaging LW's upvote generosity or average comment quality, it's not.
He also notes that the experts who'd made failed predictions and employed strong defenses tended to update their confidence, while the experts who'd made failed predictions but didn't employ strong defenses did update.
I assume there's a 'not' missing in one of those.
Given humanity's complete lack of experience with absolute power, it seems like you can't even take that cliche for weak evidence. Having glanced through the article and comments again, I also don't see where Eliezer said "rejection of power is less corrupt". The bit about Eliezer sighing and saying the null-actor did the right thing?
(No, I wasn't the one who downvoted)
And would newer readers know what "EY" meant?
Given it's right after an anecdote about someone whose name starts with "E", I think they could make an educated guess.
That's one hell of a grant proposal/foundation.
Judging by the recent survey, your cryonics beliefs are pretty normal, with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn't a very hive-mindey community, unless you count atheism.
(The singularity, yes, you're very much in the minority with the most skeptical quartile expecting it in 2150)
In other words, why didn't the story mention its (wealthy, permissive, libertarian) society having other arrangements in such a contentious matter - including, with statistical near-certainty, one of the half-dozen characters on the bridge of the Impossible Possible World?
It was such a contentious issue centuries (if I'm reading properly) ago, when ancients were still numerous enough to hold a lot of political power and the culture was different enough that Akon can't even wrap his head around the question. That's plenty of time for cultural drift to pull everyone together, especially if libertarianism remains widespread as the world gets more and more upbeat, and especially if anti-rapers are enough of a part of the mainstream culture to "statistically-near-certainly" have a seat on the Impossible Possible World.
It's not framed as an irreconcilable ideological difference (to the extent those exist at all in the setting). The ancients were against it because they remembered it being something basically objectively horrible, and that became more and more outdated as the world became nicer.
On a similar note, the link that should go to 13.9's solution points to 13.8's solution.
I'm also finding this really interesting and approachable. Thanks very much.
I recall another article about optimization processes or probability pumps being used to rig elections; I would imagine it's a lighthearted reference to that, but I can't turn it up by searching. I'm not even sure if it came before this comment.
(Richard_Hollerith2 hasn't commented for over 2.5 years, so you're not likely to get a response from him)
Take for example an agent that is facing the Prisoner’s dilemma. Such an agent might originally tend to cooperate and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent to achieve its goal or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?
The agent's goals aren't changing due to increased rationality, but just because the agent confused him/herself. Even if this is a payment-in-utilons and no-secondary-consequences Dilemma, it can still be rational to cooperate if you expect the other agent will be spending the utilons in much the same way. If this is a more down-to-earth Prisoner's Dilemma, shooting for cooperate/cooperate to avoid dicking over the other agent is a perfectly rational solution that no amount of game theory can dissuade you from. Knowledge of game theory here can only change your mind if it shows you a better way to get what you already want, or if you confuse yourself reading it and think defecting is the 'rational' thing to do without entirely understanding why.
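To make that last point concrete, here's a toy sketch with made-up payoff numbers (the usual illustrative Prisoner's Dilemma values) and a hypothetical 'care' weight standing in for how much you value what the other agent does with its utilons. With care = 0 defection dominates, as the textbook analysis says; with care = 1, cooperating against a cooperator is straightforwardly the payoff-maximizing move, no confusion required:

```python
# Toy Prisoner's Dilemma: (my_move, their_move) -> (my_utilons, their_utilons).
# Payoff numbers are illustrative placeholders, not anything from the post.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def effective_value(my_move, their_move, care):
    """My payoff, plus `care` times the other agent's payoff."""
    mine, theirs = PAYOFFS[(my_move, their_move)]
    return mine + care * theirs

for care in (0.0, 1.0):
    best = max("CD", key=lambda move: effective_value(move, "C", care))
    print(f"care={care}: best response to a cooperator is {best}")
# care=0.0: best response to a cooperator is D
# care=1.0: best response to a cooperator is C
```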
You describe a lot of goals as terminal that I would describe as instrumental, even in their limited context. While it's true that our ideals will be susceptible to culture up until (if ever) we can trace and order every evolutionary desire in an objective way, not many mathematicians would say "I want to determine if a sufficiently-large randomized Conway board would converge to an all-off state so I will have determined if a sufficiently-large randomized Conway board would converge to an all-off state". Perhaps they find it an interesting puzzle or want status from publishing it, but there's certainly a higher reason than 'because they feel it's the right thing to do'. No fundamental change in priorities need occur between feeding one's tribe and solving abstract mathematical problems.
I won't extrapolate my arguments farther than this, since I really don't have the philosophical background it needs.