Comments

Comment by Ben_Welchner on A question about utilitarianism and selfishness. · 2013-09-29T01:28:45.579Z · LW · GW

In the above examples, there may well be more net harm than gain from staying in an unpleasant relationship or firing a problematic employee. It's pretty much case-by-case, and you're not required to ignore your own feelings entirely. But if the selfless option really does come out ahead, then yes, utilitarianism would say you'd be "wrong" for indulging yourself at the expense of others.

Comment by Ben_Welchner on Some reservations about Singer's child-in-the-pond argument · 2013-06-20T01:26:29.867Z · LW · GW

The same reason fat people can derail trolleys and businesspeople have lifeguard abilities, I'd imagine.

Comment by Ben_Welchner on You only need faith in two things · 2013-03-11T02:52:23.134Z · LW · GW

You pretty much got it. Eliezer's predicting that response and saying, no, they're really not the same thing. (Tu quoque)

EDIT: Never mind, I thought it was a literal question.

Comment by Ben_Welchner on Amputation of Destiny · 2013-03-06T16:30:08.713Z · LW · GW

> We encourage you to downvote any comment that you'd rather not see more of - please don't feel that this requires being able to give an elaborate justification. (LW Wiki Deletion Policy)

Folks are encouraged to downvote liberally on LW; the flip-side is that people will downvote where they might otherwise just move on, whether out of fear of offending someone or of getting into an argument that doesn't interest them. You might want to be less sensitive if someone brings one of your posts to -1 - it's not really an act of aggression.

Comment by Ben_Welchner on What Deontology gets right · 2013-02-26T03:35:11.937Z · LW · GW

I sympathize. One of my professors jokes about having discovered a new optical illusion, then going to the literature and having the incredible good luck that for once nobody else discovered it first.

Comment by Ben_Welchner on What Deontology gets right · 2013-02-25T23:23:23.000Z · LW · GW

This all seems to have more to do with rule consequentialism than deontology. This isn't necessarily a bad thing, and rule consequentialism has indeed been considered a halfway point between deontology and act consequentialism, but it's worth noting.

Comment by Ben_Welchner on [Link] Hey Extraverts: Enough is Enough · 2013-01-03T18:08:09.964Z · LW · GW

Disliking meetings and reading in a crowded environment doesn't seem like much evidence that you're neither introverted nor extroverted (except that you're not one of Those Nasty Extraverts who supposedly keep fawning over meetings), and that in turn doesn't seem like much evidence that the introvert/extrovert split isn't helpful. I can't enjoy parties or meetings, and I prefer to read in silence and work alone.

Comment by Ben_Welchner on 2012 Less Wrong Census/Survey · 2012-11-06T20:26:42.568Z · LW · GW

In accordance with ancient tradition, I took the survey.

Comment by Ben_Welchner on Uncritical Supercriticality · 2012-11-03T05:30:02.703Z · LW · GW

If I unpacked "disbelieves in God" to "has not encountered a concept of God they both believed ("did not disbelieve", if you prefer) and did not consider a silly conception of God", would atheism still be meaningless? Would that be a horrible misconception of atheism?

Are you sure there's nothing bundled in with "God is Reality" beyond what you state? Let's say I said "God is Reality. Reality is not sapient and has never given explicit instructions on anything." Would you consider that consistent with your belief that God equals Reality?

I'm not trying for Socratic Method arguing here, I'm just not quite sure where you're coming from.

Comment by Ben_Welchner on The Comedy of Behaviorism · 2012-09-19T00:56:10.583Z · LW · GW

As a psychology student, I can say with some certainty that Watson is a behaviorist poster boy.

Comment by Ben_Welchner on Rationality Quotes June 2012 · 2012-06-11T17:59:12.469Z · LW · GW

I figured it was because it was a surprising and more-or-less unsupported statement of fact (one that turned out to be, according to the only authority anyone cited, false). When I read 'poor people are better long-term planners than rich people due to necessity', I kind of expect the writer to back it up. I would have considered downvoting it if it hadn't already been downvoted, and my preferences are much closer to socialist than libertarian.

I don't have an explanation for the parent getting upvoted beyond a 'planning is important' moral and some ideological wiggle room for being a quote, so I guess it could still be hypocrisy. Of course, as of the 2011 survey LW is 32% libertarian (compared to 26% socialist and 34% liberal), so if there is ideological bias it's of the 'vocal minority' kind.

Comment by Ben_Welchner on Rationality Quotes June 2012 · 2012-06-09T16:33:29.367Z · LW · GW

Explain?

Comment by Ben_Welchner on Guardians of the Gene Pool · 2012-06-09T03:16:02.836Z · LW · GW

Caledonian hasn't posted anything since 2009, if you said that in hopes of him responding.

Comment by Ben_Welchner on Rationality Quotes June 2012 · 2012-06-07T16:39:46.764Z · LW · GW

Depends on whether you're hallucinating everything or your vision has at least some bearing on the real world. I mean, I'd rather see spiders crawling on everything than be blind, since I could still see what they were crawling on.

Comment by Ben_Welchner on Rationality Quotes June 2012 · 2012-06-06T16:21:08.866Z · LW · GW

It was grammar nitpicking. "The authors where wrong".

Comment by Ben_Welchner on Rationality Quotes June 2012 · 2012-06-05T14:25:02.468Z · LW · GW

Unless you expect some factual, objective truth to arise about how one should define oneself, it seems fair game for defining in the most beneficial way. It's physics all the way down, so I don't see a factual reason not to define yourself down to nothing, nor do I see a factual reason to do so.

Comment by Ben_Welchner on Thoughts on the Singularity Institute (SI) · 2012-05-27T17:24:05.160Z · LW · GW

Good point.

Comment by Ben_Welchner on Thoughts on the Singularity Institute (SI) · 2012-05-27T16:54:24.571Z · LW · GW

I'm not talking about SI (which I've never donated money to), I'm talking about you. And you're starting to repeat yourself.

Comment by Ben_Welchner on Thoughts on the Singularity Institute (SI) · 2012-05-26T20:53:28.838Z · LW · GW

> You guys are only being supposedly 'accurate' when it feels good. I have not said, 'all outsiders', that's your interpretation which you can subsequently disagree with.

You're misusing language by not realizing that most people treat "members of group A think X" as "a sizable majority of members of group A think X", or not caring and blaming the reader when they parse it the standard way. We don't say "LWers are religious" or even "US citizens vote Democrat", even though there's certainly more than one religious person on this site or Democrat voter in the US.

And if you did intend to say that, you're putting words into Manfred's mouth by assuming he's talking about 'all' instead.

Comment by Ben_Welchner on Rationality Quotes April 2012 · 2012-04-05T17:10:32.208Z · LW · GW

If you know of any illusions that make inevitably ceasing to exist carry negative utility for someone leading a positive-utility life, I would love to have them dispelled for me.

Comment by Ben_Welchner on But There's Still A Chance, Right? · 2012-04-03T03:42:06.164Z · LW · GW

> I know I'll probably trigger a flamewar...

Nitpick: LW doesn't actually have a large proportion of cryonicists, so you're not that likely to get angry opposition. As of the 2011 survey, 47 LWers (or 4.3% of respondents) claimed to have signed up. There were another 583 (53.5%) 'considering it', but comparing that to the current proportion makes me skeptical they'll sign up.

Comment by Ben_Welchner on Decision Theories: A Less Wrong Primer · 2012-03-17T00:56:30.288Z · LW · GW

A decision tree (the entirety of my game theory experience has been a few online videos, so I likely have the terminology wrong), with the first decision at the top and the end outcomes at the bottom. The sections marked 'max' have the decider trying to pick the highest-value end outcome, and the sections marked 'min' have the decider trying to pick the lowest-value end outcome. The numbers in every level except the bottom propagate up depending on which option will be picked by whoever is currently doing the picking, so if Max and Min maximize and minimize properly, the tree's value is 6. I don't quite remember how the pruning of the three branches works.
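
(In case it helps: what's described above sounds like minimax, and the branch-pruning is presumably alpha-beta pruning. Below is a minimal Python sketch under those assumptions; the three-branch tree is a made-up example that happens to evaluate to 6, not the one from the videos.)

```python
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    # Leaves are plain numbers: the end outcomes at the bottom of the tree.
    if isinstance(node, (int, float)):
        return node
    if maximizing:  # Max picks the highest-value child
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # Min above would never let play get here,
                break          # so the remaining branches are pruned
        return value
    else:           # Min picks the lowest-value child
        value = float("inf")
        for child in node:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:  # Max above already has a better guarantee
                break
        return value

# Max moves at the root; each inner list is a Min node over end outcomes.
tree = [[6, 8], [2, 9], [5, 7]]
print(minimax(tree, True))  # 6 - the 9 and the 7 are never examined
```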

Comment by Ben_Welchner on Rationally Irrational · 2012-03-12T02:36:02.874Z · LW · GW

I'm pretty sure we do see everyone doing it. Randomly selecting a few posts: in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments, and the Rationally Irrational comments also have more upvoted than not.

It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value.

EDIT: Just in case this comes off as disparaging LW's upvote generosity or average comment quality, it's not.

Comment by Ben_Welchner on I Was Not Almost Wrong But I Was Almost Right: Close-Call Counterfactuals and Bias · 2012-03-06T03:03:55.829Z · LW · GW

> He also notes that the experts who'd made failed predictions and employed strong defenses tended to update their confidence, while the experts who'd made failed predictions but didn't employ strong defenses did update.

I assume there's a 'not' missing in one of those.

Comment by Ben_Welchner on Not Taking Over the World · 2012-01-28T02:32:57.992Z · LW · GW

Given humanity's complete lack of experience with absolute power, it seems like you can't even take that cliche for weak evidence. Having glided through the article and comments again, I also don't see where Eliezer said "rejection of power is less corrupt". The bit about Eliezer sighing and saying the null-actor did the right thing?

(No, I wasn't the one who downvoted)

Comment by Ben_Welchner on Urges vs. Goals: The analogy to anticipation and belief · 2012-01-27T06:52:00.935Z · LW · GW

> And would newer readers know what "EY" meant?

Given it's right after an anecdote about someone whose name starts with "E", I think they could make an educated guess.

Comment by Ben_Welchner on A call for solutions and a tease of mine · 2012-01-16T06:43:43.097Z · LW · GW

That's one hell of a grant proposal/foundation.

Comment by Ben_Welchner on Welcome to Less Wrong! · 2012-01-10T05:05:33.697Z · LW · GW

Judging by the recent survey, your cryonics beliefs are pretty normal with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn't a very hive-mindey community, unless you count atheism.

(On the singularity, yes, you're very much in the minority; even the most skeptical quartile expects it in 2150.)

Comment by Ben_Welchner on Why would a free human society be in agreement on how to alter itself? · 2011-12-29T19:47:35.565Z · LW · GW

> In other words, why didn't the story mention its (wealthy, permissive, libertarian) society having other arrangements in such a contentious matter - including, with statistical near-certainty, one of the half-dozen characters on the bridge of the Impossible Possible World?

It was such a contentious issue centuries (if I'm reading properly) ago, when ancients were still numerous enough to hold a lot of political power and the culture was different enough that Akon can't even wrap his head around the question. That's plenty of time for cultural drift to pull everyone together, especially if libertarianism remains widespread as the world gets more and more upbeat, and especially if anti-rapers are enough a part of mainstream culture to "statistically-near-certainly" have a seat on the Impossible Possible World.

It's not framed as an irreconcilable ideological difference (to the extent those exist at all in the setting). The ancients were against it because they remembered it being something basically objectively horrible, and that became more and more outdated as the world became nicer.

Comment by Ben_Welchner on [Link] A gentle video introduction to game theory · 2011-12-14T02:37:13.263Z · LW · GW

On a similar note, what should be 13.9's solution links to 13.8's solution.

I'm also finding this really interesting and approachable. Thanks very much.

Comment by Ben_Welchner on Taboo Your Words · 2011-11-29T02:42:53.943Z · LW · GW

I recall another article about optimization processes or probability pumps being used to rig elections; I would imagine it's a lighthearted reference to that, but I can't turn it up by searching. I'm not even sure if it came before this comment.

(Richard_Hollerith2 hasn't commented for over 2.5 years, so you're not likely to get a response from him)

Comment by Ben_Welchner on Objections to Coherent Extrapolated Volition · 2011-11-23T00:53:18.032Z · LW · GW

> Take for example an agent that is facing the Prisoner’s dilemma. Such an agent might originally tend to cooperate and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent to achieve its goal or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?

The agent's goals aren't changing due to increased rationality, but just because the agent confused him/herself. Even if this is a payment-in-utilons and no-secondary-consequences Dilemma, it can still be rational to cooperate if you expect the other agent will be spending the utilons in much the same way. If this is a more down-to-earth Prisoner's Dilemma, shooting for cooperate/cooperate to avoid dicking over the other agent is a perfectly rational solution that no amount of game theory can dissuade you from. Knowledge of game theory here can only change your mind if it shows you a better way to get what you already want, or if you confuse yourself reading it and think defecting is the 'rational' thing to do without entirely understanding why.
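
(A minimal sketch, with hypothetical payoffs, of the structure being argued about: in the one-shot game defection dominates no matter what the other agent does, yet mutual cooperation beats mutual defection - which is why what you value, not how much game theory you know, settles whether to shoot for cooperate/cooperate.)

```python
# Hypothetical one-shot Prisoner's Dilemma payoffs for "me" (the row player);
# any numbers ordered T > R > P > S produce the same structure.
payoff = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

for other in ("C", "D"):
    best = max(("C", "D"), key=lambda me: payoff[(me, other)])
    print(f"If the other agent plays {other}, the payoff-maximizing reply is {best}")

# Defect is the best reply either way, yet (C, C) pays both players more than
# (D, D) - so caring about the other agent's utilons can rationally change the
# answer, while merely learning more game theory shouldn't.
```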

You describe a lot of goals as terminal that I would describe as instrumental, even in their limited context. While it's true that our ideals will be susceptible to culture up until (if ever) we can trace and order every evolutionary desire in an objective way, not many mathematicians would say "I want to determine if a sufficiently-large randomized Conway board would converge to an all-off state so I will have determined if a sufficiently-large randomized Conway board would converge to an all-off state". Perhaps they find it an interesting puzzle or want status from publishing it, but there's certainly a higher reason than 'because they feel it's the right thing to do'. No fundamental change in priorities need occur between feeding one's tribe and solving abstract mathematical problems.

I won't extrapolate my arguments farther than this, since I really don't have the philosophical background it needs.