Posts

Play for a Cause 2010-01-28T20:52:24.091Z · score: 7 (8 votes)
Paper: Testing ecological models 2009-08-27T22:12:58.541Z · score: 0 (1 votes)
Pract: A Guessing and Testing Game 2009-07-31T09:13:47.353Z · score: 5 (8 votes)

Comments

Comment by brian_jaress on Self-fulfilling correlations · 2010-09-05T19:44:58.892Z · score: 1 (1 votes) · LW · GW

Auto insurance is broken down into different types of coverage, with injuries separate from damage to the car. In fact, I'm pretty sure your coverage makes a distinction between injuries to you and injuries to other people that are your fault. Every time I renew my insurance, they ask me if I want to change how much of each type of coverage I have.

The safety indicator that most car buyers look at is the crash test rating, usually done by a government or an insurance industry group. Maybe it's no longer part of the culture, but I remember when car ads would often show crash tests. I think there was one where the crash test dummies (like mannequins full of sensors) talked about which car they liked.

The Insurance Institute for Highway Safety has information on crash tests and statistics on accidents and payouts.

Comment by brian_jaress on Less Wrong: Open Thread, September 2010 · 2010-09-05T18:11:02.917Z · score: 1 (3 votes) · LW · GW

Maybe you shouldn't relax.

Regardless of official definitions, there is in practice a heavy emphasis on conceptual rigor over evidence.

There's still room for people who don't quite fit in.

Comment by brian_jaress on Less Wrong: Open Thread, September 2010 · 2010-09-05T10:11:18.714Z · score: 3 (3 votes) · LW · GW

I've seen "moral indignation," which might fit (though I think "indignation" still implies anger). I've also heard people who feel that way describe the object of their feelings as "disgusting" or "offensive," so you could call it "disgust" or "being offended." Of course, those people also seemed angry. Maybe the non-angry version would be called "bitterness."

As soon as I wrote the paragraph above, I felt sure that I'd heard "moral disgust" before. I googled it and the second link was this. I don't know about the quality of the study, but you could use the term.

Comment by brian_jaress on Beauty quips, "I'd shut up and multiply!" · 2010-05-12T22:09:25.670Z · score: 0 (0 votes) · LW · GW

I agree that more information would help the beauty, but I'm more interested in the issue of whether or not the question, as stated, is ill-posed.

One of the Bayesian vs. frequentist examples that I found most interesting was the case of the coin with unknown bias -- a Bayesian would say it has a 50% chance of coming up heads, but a frequentist would refuse to assign a probability. I was wondering if perhaps this is an analogous case for Bayesians.

That wouldn't necessarily mean anything is wrong with Bayesianism. Everyone has to draw the line somewhere, and it's good to know where.

Comment by brian_jaress on Beauty quips, "I'd shut up and multiply!" · 2010-05-10T03:44:58.975Z · score: 0 (0 votes) · LW · GW

That's fine. I guess I'm just not a Bayesian epistemologist.

If Sleeping Beauty is a Bayesian epistemologist, does that mean she refuses to answer the question as asked?

Comment by brian_jaress on Beauty quips, "I'd shut up and multiply!" · 2010-05-10T02:28:24.984Z · score: 1 (1 votes) · LW · GW

It illustrates fairly clearly how probabilities are defined in terms of the payoff structure (which things will have payoffs assigned to them and which things are considered "the same" for the purposes of assigning payoffs).

I've felt for a while that probabilities are more closely tied to the payoff structure than beliefs are, and this discussion underlined that for me. I guess you could say that using beliefs (instead of probabilities) to make decisions is a heuristic that ignores, or at least downplays, the payoff structure.

Comment by brian_jaress on Beauty quips, "I'd shut up and multiply!" · 2010-05-09T23:22:41.828Z · score: 1 (1 votes) · LW · GW

We know she will have the same credence on monday as she does on tuesday (if awakened), because of the amnesia. There is no reason to double count those.

Well, she does say it twice. That seems like at least a potential reason to count it as two answers.

You could say that 1/3 of the times the question is asked, the coin came up heads. You could also say that 1/2 of the beauties are asked about a coin that came up heads.

To me, this reinforces my doubt that probabilities and beliefs are the same thing.

EDIT: reworded for clarity
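Both counting claims above can be checked with a quick Monte Carlo sketch (hypothetical code, assuming the standard setup: asked once on heads, twice with amnesia on tails):

```python
import random

random.seed(0)
experiments = 100_000
total_askings = 0    # every time Beauty is asked the question
heads_askings = 0    # askings that follow a heads flip
heads_beauties = 0   # experiments in which the coin came up heads

for _ in range(experiments):
    heads = random.random() < 0.5
    askings = 1 if heads else 2   # heads: asked Monday only; tails: Monday and Tuesday
    total_askings += askings
    if heads:
        heads_askings += 1        # the single heads asking
        heads_beauties += 1

print(heads_askings / total_askings)   # close to 1/3: per-asking count
print(heads_beauties / experiments)    # close to 1/2: per-beauty count
```

The two fractions really do come out differently, which is the sense in which 1/3 and 1/2 are each the answer to a different question.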

Comment by brian_jaress on Jinnetic Engineering, by Richard Stallman · 2010-05-02T18:29:06.220Z · score: 1 (1 votes) · LW · GW

I agree, but I upvoted it anyway because I thought it was interesting and funny.

I read it as a commentary on how, when we daydream about "breaking the rules" (or discovering a fundamental rule that changes the way we live) all the myths have trained us to think selfishly. She wants to use her three wishes to end disease for everyone, and it's like she asked to accept an Academy Award in a clown suit.

EDIT: grammar

Comment by brian_jaress on Free copy of Feynman's autobiography for best corny rationalist joke · 2010-04-06T07:45:51.622Z · score: 2 (10 votes) · LW · GW

A theologian, a lawyer, and a rationalist meet at a cocktail party.

"Theology is the most intellectually demanding field," says the theologian. "The concepts are so abstract, and many key texts are obscurely written."

"Oh please," says the lawyer. "I once knew a bright fellow who became a theologian because he couldn't make it as a lawyer. He read and studied and tore his hair out, but he just couldn't get how the law works."

"I've got you both beat," says the rationalist. "Rationalism is so hard, no one's figured it out!"

EDIT: Too bad there's no prize for the lowest rated joke. Sorry if this joke offended people. It wasn't meant to reflect badly on any of the characters or anyone in real life.

Comment by brian_jaress on Compartmentalization as a passive phenomenon · 2010-03-27T07:30:21.647Z · score: 1 (1 votes) · LW · GW

And it certainly doesn't help that most people's knowledge of non-Earth gravity comes entirely from television, where, since zero-gravity filming is impractical, the writers invariably come up with some sort of confusing phlebotinum (most commonly magnetic boots) to make them behave more like regular-gravity environments.

I think you're on to something. I was wondering why the "heavy boots" people singled out the boots. Why not say "heavy suits" or that the astronauts themselves were heavier than pens? Didn't 2001: A Space Odyssey start the first zero-gravity scene with a floating pen and a flight attendant walking up the wall?

Comment by brian_jaress on Undiscriminating Skepticism · 2010-03-15T09:20:47.655Z · score: 11 (11 votes) · LW · GW

You should offer a reward for the best top-level anti-cryonics post. Something to entice quiet dissenters to stick their necks out.

You can post it together with a pro-cryonics reading list, so people know what they're up against and only post arguments that haven't already been refuted.

EDIT: reworded for clarity, punctuation

Comment by brian_jaress on The fallacy of work-life compartmentalization · 2010-03-07T07:10:19.091Z · score: 3 (3 votes) · LW · GW

I'm telling it to give the reader the feeling of what it's like to see a smart person fail at something basic because they fail to cross domains, but when writing I couldn't actually come up with a real example that was simple enough to fit in one paragraph.

I would suggest the example of someone not getting the evil bit joke.

It's good because it works both ways. You only need common sense to understand it, but laypeople can be intimidated by the context into not applying common sense, and you'll sometimes see domain experts try to implement essentially the same thing because they turn off common sense while in their domain.
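For readers who haven't seen it: the "evil bit" is an April Fools' RFC (RFC 3514) proposing that malicious packets identify themselves by setting a flag. A minimal sketch of why that can't work (names and types hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    evil: bool = False  # per RFC 3514, senders with malicious intent MUST set this


def firewall_allows(packet: Packet) -> bool:
    # The entire "security check": trust the sender's self-report.
    return not packet.evil


# An attacker simply doesn't set the flag, so the check is useless.
attack = Packet(payload=b"malware", evil=False)
print(firewall_allows(attack))  # the malicious packet sails through
```

The joke works only when you remember that the flag is set by the very party you're trying to defend against, which is exactly the common-sense step domain context can switch off.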

Comment by brian_jaress on Case study: abuse of frequentist statistics · 2010-02-23T18:02:49.360Z · score: 3 (3 votes) · LW · GW

I think that, in this case, the underlying problem was not caused by the way frequentist statistics are commonly taught and practiced by working scientists:

In the present case, the null hypothesis is that the old method and the new method produce data from the same distribution; the authors would like to see data that do not lead to rejection of the null hypothesis.

I'm no statistician, but I'm pretty sure you're not supposed to make your favored hypothesis the null hypothesis. That's a pretty simple rule and I think it's drilled into students and enforced in peer review.

I see that as the underlying problem because it reverses the burden of proof. If they had done it the right way around, six data points would not have been enough to support their method, instead of not being enough to reject it. Making your favored hypothesis the null hypothesis can allow you, in the extreme, to rely on a single data point.
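The weakness of six data points is easy to demonstrate with a simulation (a hypothetical sketch, not the paper's actual data): even when two methods genuinely differ by a full standard deviation, a two-sample t-test with n = 6 per group usually fails to reject the null, so "failure to reject" is very weak evidence of equivalence.

```python
import random
import statistics

random.seed(0)
n, trials, rejections = 6, 2000, 0
for _ in range(trials):
    # Two methods that really do differ, by one standard deviation.
    old = [random.gauss(0.0, 1.0) for _ in range(n)]
    new = [random.gauss(1.0, 1.0) for _ in range(n)]
    # Two-sample t statistic (pooled variance, equal group sizes).
    pooled = (statistics.variance(old) + statistics.variance(new)) / 2
    t = (statistics.mean(new) - statistics.mean(old)) / (2 * pooled / n) ** 0.5
    if abs(t) > 2.228:  # two-sided critical value for alpha = 0.05, df = 10
        rejections += 1

print(rejections / trials)  # well below 1: the test usually misses a real difference
```

With the favored hypothesis as the null, that low power works in the authors' favor, which is the reversed burden of proof described above.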

Comment by brian_jaress on Case study: abuse of frequentist statistics · 2010-02-21T19:01:04.197Z · score: 3 (3 votes) · LW · GW

I too would like to see a good explanation of frequentist techniques, especially one that also explains their relationships (if any) to Bayesian techniques.

Based on the tiny bit I know of both approaches, I think one appealing feature of frequentist techniques (which may or may not make up for their drawbacks) is that your initial assumptions are easier to dislodge the more wrong they are.

It seems to be the other way around with Bayesian techniques because of a stronger built-in assumption that your assumptions are justified. You can immunize yourself against any particular evidence by having a sufficiently wrong prior.

EDIT: Grammar
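The "sufficiently wrong prior" point can be made concrete with a single Bayes update (numbers hypothetical):

```python
def posterior(prior, likelihood_ratio):
    """P(H|E) from the prior P(H) and the likelihood ratio P(E|H)/P(E|not H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)


# Evidence that favors "not H" a hundred to one...
lr = 1 / 100

print(posterior(0.5, lr))        # ...moves a neutral prior down to about 0.01
print(posterior(0.999999, lr))   # ...but a near-certain prior barely budges (about 0.9999)
```

The same evidence that demolishes a neutral prior leaves an extreme one essentially intact, which is the sense in which a wrong enough prior immunizes you against any particular piece of evidence.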

Comment by brian_jaress on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-10T21:40:04.703Z · score: 0 (0 votes) · LW · GW

I'd rather give a lot of money to GiveWell, earmarked for international charities.

OK, let's do that. You win.

We can probably still use "Save babies on Craigslist" or something similar as the slogan if we make some baby-oriented charity the "poster child."

EDIT: spelling

Comment by brian_jaress on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-10T21:17:34.122Z · score: 0 (0 votes) · LW · GW

With staff they hire. Certain kinds of problems are both inevitable and fixable once money is in the pipeline.

When you add that much money, you're giving it to the planners, not the plan. If what they're doing doesn't scale to the money they get (though I think it will) they'll do something else. Treat it like one of those business plan contests. Their success so far shows that they know how to do charity work.

It will also get people to join on Facebook, without which there will be no money for anyone.

But I'm not married to that particular charity. I just think that with so much money waiting to be claimed, we're having a little too much fun seeing who can predict the smallest nitty-gritties the farthest away.

Comment by brian_jaress on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-10T20:45:47.676Z · score: 0 (0 votes) · LW · GW

They do separate, regional projects, and that number is what they need to carry out the projects they've already committed to.

If they get on Craigslist and start seeing steady money out of it, they can start a bunch of new projects in new areas.

Comment by brian_jaress on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-10T14:15:34.333Z · score: 7 (9 votes) · LW · GW

Maybe they're not trying very hard.

I'm actually seriously disappointed in how hard we're trying. I saw the discussion start in the comments of the "shut up and divide" thread. I came here expecting people to be all over it like ants on a picnic. Instead, there appears to be more thought going into spinning theories about why it would be hard than into planning how to do it, and none of it really compares to all the serious thinking about TDT, MWI, or "Free Will."

Of course it's hard. The point is not that it's easy, but that it's relatively easy considering how much money is involved.

Here's my own halfhearted stab:

This meme needs

  1. A specific cause that moves people.
  2. A charity that uses money effectively.
  3. A good slogan.

GiveWell shows four charities with its top rating:

  • Village Reach: Vaccines for babies in Africa
  • Stop TB Partnership (Stop TB): tuberculosis treatments
  • Nurse-Family Partnership: Early Childhood Care (USA)
  • Knowledge is Power Program (KIPP): K-12 Education (USA)

Village Reach is the winner, as far as the cause moving people. Saving babies in Africa trumps treating TB worldwide and educating mothers or children in the US. (Nurse-Family Partnership sends nurses to teach mothers how to be mothers.)

For the slogan, how about: "Save babies on Craigslist."

EDIT: links, spelling

Comment by brian_jaress on Rationality Quotes: February 2010 · 2010-02-07T08:07:14.535Z · score: 6 (6 votes) · LW · GW

What is it about us, the public, and what is it about conformity itself that causes us all to require it of our neighbors and of our artists and then, with consummate fickleness, to forget those who fall into line and eternally celebrate those who do not?

-- Ben Shahn, "The Shape of Content"

Comment by brian_jaress on Rationality Quotes: February 2010 · 2010-02-03T08:30:25.351Z · score: 10 (10 votes) · LW · GW

Your friend must be pretty hungry by now.

Comment by brian_jaress on Deontology for Consequentialists · 2010-02-01T08:42:25.967Z · score: 3 (3 votes) · LW · GW

I'm pretty sure the standard reply is, "Sometimes there is no right answer." These are rules for classifying actions as moral or immoral, not rules that describe the behavior of an always moral actor. If every possible action (including inaction) is immoral, then your actions are immoral.

Comment by brian_jaress on Play for a Cause · 2010-01-30T18:16:43.057Z · score: 0 (0 votes) · LW · GW

I enjoy Go, but I'm an absolute beginner. If I could remember exactly how many games I've played, I'm pretty sure I could count them on one hand.

I've been meaning to try out Dave Peck's Go, which is said to have a nice interface and doesn't require you to sign up for an account. You start a game by entering both players' email addresses.

I have an email account at gmail.com under the user name bjaress if anyone wants to play.

Comment by brian_jaress on Play for a Cause · 2010-01-28T21:23:57.578Z · score: 1 (1 votes) · LW · GW

I guess that bit about "mutual consent" was sort of a cryptic remark on my part.

What I was trying to say is that I generally feel everyone except the players should butt out unless there's a dispute. If I suggest that a particular game be played or offer "official" rules as a third party, I won't mind at all if the players agree to do it differently or plug a loophole. I think it's important for everyone involved to have that attitude.

Comment by brian_jaress on That Magical Click · 2010-01-21T11:06:48.787Z · score: 36 (38 votes) · LW · GW

There's this magical click that some people get and some people don't, and I don't understand what's in the click. There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.

I think it's a mistake to put all the opinions you agree with in a special category. Why do some people come quickly to beliefs you agree with? There is no reason, except that sometimes people come quickly to beliefs, and some beliefs happen to match yours.

People who share one belief with you are more likely to share others, so you're anecdotally finding people who agree with you about non-cryonics things at a cryonics conference. Young people might be more likely to change their mind quickly because they're more likely to hear something for the first time.

Comment by brian_jaress on Reference class of the unclassreferenceable · 2010-01-10T07:49:13.073Z · score: 3 (3 votes) · LW · GW

I don't know if those are the right reference classes for prediction, but those two beliefs definitely fall into those two categories. That should set off some warning signals.

Most people seem to have a strong need to believe in life after death and godlike beings. Anything less than ironclad disproof leads them to strong belief. If you challenge their beliefs, they'll often vigorously demonstrate that these things are not impossible and declare victory. They ignore the distinction between "not impossible" and "highly likely" even when trying to persuade a known skeptic because, for them on those issues, the distinction does not exist.

Not that I see anyone doing that here.

It's just a warning sign that the topics invite bias. Proceed with caution.

Comment by brian_jaress on Open Thread: January 2010 · 2010-01-08T09:16:54.956Z · score: 1 (1 votes) · LW · GW

This might not be the best place to ask because so many people here prefer science fiction to regular fiction. I've noticed that people who prefer science fiction have a very different idea of what makes good science fiction than people who have no preference or who prefer regular fiction.

Most of what I see in the other comments is on the "prefers science fiction" side, except for things by LeGuin and maybe Dune.

Of course, you might turn out to prefer science fiction and just not have realized it. Then all would be well.

Comment by brian_jaress on The Contrarian Status Catch-22 · 2009-12-20T19:16:48.955Z · score: 4 (4 votes) · LW · GW

After a bit of searching, I think peteshnick is talking about the Afshar experiment. The Wikipedia article is fascinating, but I don't really understand the issue. It only mentions many-worlds briefly, but includes a link to the creator of another interpretation saying that the experiment exposes a failure of both MWI and Copenhagen to match the math.

Comment by brian_jaress on I'm Not Saying People Are Stupid · 2009-10-11T21:11:16.209Z · score: 1 (11 votes) · LW · GW

Yes.

Be careful about asking me to call people who are wrong about many-worlds "crazy." You're one of them.

Comment by brian_jaress on I'm Not Saying People Are Stupid · 2009-10-11T20:40:09.266Z · score: 3 (3 votes) · LW · GW

What ever happened to just thinking people who disagreed with you were wrong?

Comment by brian_jaress on Avoiding doomsday: a "proof" of the self-indication assumption · 2009-09-24T15:22:53.583Z · score: 0 (0 votes) · LW · GW

They'll be wrong about the generation part only.

But that's the important part. It's called the "Doomsday Argument" for a reason: it concludes that doomsday is imminent.

Of course the last 2/3 is still going to be 2/3 of the total. So is the first 2/3.

Imminent doomsday is the only non-trivial conclusion, and it relies on the assumption that exponential growth will continue right up to a doomsday.

Comment by brian_jaress on Avoiding doomsday: a "proof" of the self-indication assumption · 2009-09-24T07:37:17.115Z · score: -1 (1 votes) · LW · GW

Only because of the assumption that the colony is wiped out suddenly. If, for example, the decline mirrors the rise, about two-thirds will be wrong.

ETA: I mean that 2/3 will apply the argument and be wrong. The other 1/3 won't apply the argument because they won't have exponential growth. (Of course they might think some other wrong thing.)

Comment by brian_jaress on Outlawing Anthropics: An Updateless Dilemma · 2009-09-13T19:38:57.700Z · score: 0 (0 votes) · LW · GW

Well, we might be saying the same thing but coming from different points of view about what it means. I'm not actually a bayesian, so when I talk about assigning probabilities and updating them, I just mean doing equations.

What I'm saying here is that you should set up the equations in a way that reflects the group's point of view because you're telling the group what to do. That involves plugging some probabilities of one into Bayes' Law and getting a final answer equal to one of the starting numbers.

Comment by brian_jaress on Outlawing Anthropics: An Updateless Dilemma · 2009-09-13T19:23:38.552Z · score: 0 (0 votes) · LW · GW

I agree that changes the answer. I was assuming a scheme like that in my two marble example. In a more typical situation, I would also say 2/3.

To me, it's not a drastic (or magical) change, just getting a different answer to a different question.

Comment by brian_jaress on Outlawing Anthropics: An Updateless Dilemma · 2009-09-11T16:05:09.375Z · score: 1 (1 votes) · LW · GW

OK, but I think Psy-Kosh was talking about something to do with the payoffs. I'm just not sure if he means the voting or the dollar amounts or what.

Comment by brian_jaress on Outlawing Anthropics: An Updateless Dilemma · 2009-09-11T08:30:55.910Z · score: 0 (0 votes) · LW · GW

What kind of funny business?

Comment by brian_jaress on Outlawing Anthropics: An Updateless Dilemma · 2009-09-11T07:59:31.811Z · score: 6 (5 votes) · LW · GW

I guess that does need a lot of explaining.

I would say:

P(green|mostly green bucket) = 1

P(green|mostly red bucket) = 1

P(green) = 1

because P(green) is not the probability that you will get a green marble, it's the probability that someone will get a green marble. From the perspective of the priors, all the marbles are drawn, and no one draw is different from any other. If you don't draw a green marble, you're discarded, and the people who did draw green vote. For the purposes of figuring out the priors for a group strategy, your draw being green is not an event.

Of course, you know that you've drawn green. But the only thing you can translate it into that has a prior is "someone got green."

That probably sounds contrived. Maybe it is. But consider a slightly different example:

  • Two marbles and two people instead of twenty.
  • One marble is green, the other will be red or green based on a coin flip (green on heads, red on tails).

I like this example because it combines the two conflicting intuitions in the same problem. Only a fool would draw a red marble and remain uncertain about the coin flip. But someone who draws a green marble is in a situation similar to the twenty marble scenario.

If you were to plan ahead of time how the greens should vote, you would tell them to assume 50%. But a person holding a green marble might think it's 2/3 in favor of double green.

To avoid embarrassing paradoxes, you can base everything on the four events "heads," "tails," "someone gets green," and "someone gets red." Update as normal.
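The two-marble example can be checked with a short simulation (a hypothetical sketch of the setup above: one marble always green, the second green on heads and red on tails):

```python
import random

random.seed(0)
runs = 100_000
green_draws = 0        # total green marbles drawn, across all runs
heads_given_green = 0  # green draws that happened in a heads world
heads_runs = 0         # runs in which the coin came up heads

for _ in range(runs):
    heads = random.random() < 0.5
    marbles = ["green", "green"] if heads else ["green", "red"]
    random.shuffle(marbles)
    heads_runs += heads
    for marble in marbles:          # each of the two people draws one marble
        if marble == "green":
            green_draws += 1
            heads_given_green += heads

print(heads_given_green / green_draws)  # ~2/3: the green-marble holder's count
print(heads_runs / runs)                # ~1/2: the per-experiment (group) count
```

Both numbers are correct answers to different questions, which is why the group-strategy planner and the individual green-marble holder can disagree without either making an arithmetic mistake.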

Comment by brian_jaress on Outlawing Anthropics: An Updateless Dilemma · 2009-09-10T18:58:40.960Z · score: 0 (0 votes) · LW · GW

anyone who draws a green marble should indeed be assigning a 90% probability to there being a mostly-green bucket.

I don't think so. I think the answer to both these problems is that if you update correctly, you get 0.5.

Comment by brian_jaress on Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives · 2009-09-07T16:02:53.476Z · score: 4 (2 votes) · LW · GW

Thanks, that makes sense. I was thinking that the diagrams represented all the nodes that the agents looked at, and that based on what nodes they saw they would pick one to surgically set. I didn't realize they represented the result of setting a node.

Follow-up stupid questions:

  1. Do all the agents start with the same graph and just pick different surgery points, or is it a combination of starting with different nodes and picking different nodes?
  2. If you put "innards" and "platonic" on the same graph (for any reason) what does that look like?
Comment by brian_jaress on Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives · 2009-09-07T08:00:04.625Z · score: 6 (4 votes) · LW · GW

Stupid question time: Why are all the agents only "surgically setting" nodes with no parents? Is that a coincidence, or is it different in a significant way from the sunniness example?

Comment by brian_jaress on Rationality Quotes - September 2009 · 2009-09-07T02:24:28.792Z · score: 0 (0 votes) · LW · GW

I don't have the context for that particular wording, but it's a recurring theme of his essays. He felt that wrong ideas could still be instructive, and he would often write essays explaining ideas that he clearly referred to as incorrect.

His point here seems to be that the theory is already wrong, so don't destroy the remaining value by cutting it up to extract the bits you could get from current theory. I don't think you need to worry that he's calling for a return to something you dislike.

Comment by brian_jaress on The Sword of Good · 2009-09-04T08:24:11.831Z · score: 1 (3 votes) · LW · GW

In writing it's even simpler - the author gets to create the whole social universe, and the readers are immersed in the hero's own internal perspective. And so anything the heroes do, which no character notices as wrong, won't be noticed by the readers as unheroic. Genocide, mind-rape, eternal torture, anything.

I don't think you give readers enough credit. The author has some influence, but not that much. Some of what appears to be acceptance of the social norms depicted is really just acceptance that the characters live within those norms.

For the influence that does exist, there's a whole body of criticism, controversy, and alternative versions taking on various uses of it. It's so well known, I didn't even realize you were trying to call attention to it. I read the story as straightforward propaganda for your work on an artificial BDFL.

Comment by brian_jaress on Rationality Quotes - September 2009 · 2009-09-03T08:25:27.204Z · score: 1 (1 votes) · LW · GW

This is a lesswrong quote, but I think it belongs in this discussion because it's remarkably apropos:

I remember when I finally picked up and started reading through my copy of the Feynman Lectures on Physics, even though I couldn't think of any realistic excuse for how this was going to help my AI work, because I just got fed up with not knowing physics. And - you can guess how this story ends - it gave me a new way of looking at the world, which all my earlier reading in popular physics (including Feynman's QED) hadn't done. Did that help inspire my AI research? Hell yes. (Though it's a good thing I studied neuroscience, evolutionary psychology, evolutionary biology, Bayes, and physics in that order - physics alone would have been terrible inspiration for AI research.)

-- Eliezer Yudkowsky

Comment by brian_jaress on Rationality Quotes - September 2009 · 2009-09-03T08:07:54.824Z · score: 2 (2 votes) · LW · GW

Great thinkers build their edifices with subtle consistency. We do our intellectual forebears an enormous disservice when we dismember their visions and scan their systems in order to extract a few disembodied “gems”—thoughts or claims still accepted as true. These disarticulated pieces then become the entire legacy of our ancestors, and we lose the beauty and coherence of older systems that might enlighten us by their unfamiliarity—and their consequent challenge—in our fallible (and complacent) modern world.

-- Stephen Jay Gould

Comment by brian_jaress on Rationality Quotes - September 2009 · 2009-09-02T21:38:37.712Z · score: 0 (0 votes) · LW · GW

Our pride is often increased by what we retrench from our other faults.

-- La Rochefoucauld

Comment by brian_jaress on Rationality Quotes - September 2009 · 2009-09-02T17:05:50.164Z · score: 2 (2 votes) · LW · GW

Elpinice was skeptical. She likes evidence. That means a well-made argument. For Greeks, the only evidence that matters is words. They are masters of making the fantastic sound plausible.

-- Gore Vidal, "Creation" (narrator Cyrus Spitama)

Comment by brian_jaress on Ingredients of Timeless Decision Theory · 2009-08-23T17:43:49.346Z · score: 0 (0 votes) · LW · GW

in our own timeless and deterministic (though branching) universe.

That's the part I don't buy. I'm not saying it's false, but I don't see any good reason to think it's true. (I think I read the posts where you explained why you believe it, but I might have missed some.)

Comment by brian_jaress on Ingredients of Timeless Decision Theory · 2009-08-21T23:26:29.973Z · score: 0 (0 votes) · LW · GW

Yes. I started writing my reply before Alicorn said anything, took a short break, posted it, and was a bit surprised to see a whole discussion had happened under my nose.

But I don't see how what you originally said is the same as what you ended up saying.

At first, you said not to consider f because there's no point. My response was that the equation correctly includes f regardless of your ability to choose based on the solution.

Now you are saying that Fb is different from (inferior to?) fB.

Comment by brian_jaress on Ingredients of Timeless Decision Theory · 2009-08-21T18:33:10.957Z · score: 0 (0 votes) · LW · GW

If you can't choose whether you believe, then you don't choose whether you believe. You just believe or not. The full equation still captures the correctness of your belief, however you arrived at it. There's nothing inconsistent about thinking that you are forced to not believe and that seeing the equation is (part of) what forced you.

(I avoid the phrase "free will" because there are so many different definitions. You seem to be using one that involves choice, while Eliezer uses one based on control. As I understand it, the two of you would disagree about whether a TV remote in a deterministic universe has free will.)

edit: missing word, extra word

Comment by brian_jaress on Utilons vs. Hedons · 2009-08-10T23:58:38.805Z · score: 2 (2 votes) · LW · GW

"Lots of people who want to will get really, really high" is only very rarely touted as a major argument.

In public policy discussions, that's true. In private conversations with individuals, I've heard that reason more than any other.

Comment by brian_jaress on Guess Again · 2009-08-10T07:20:51.465Z · score: 0 (0 votes) · LW · GW

I just checked the source of my post against my saved local copy. The entities that were given by name were converted to hexadecimal and look fine (’ became “ ) but the entities that were in decimal were mangled into multiple hexadecimal entities (↩ became ↊).

I didn't have any that were originally in hex, and I don't know if this difference is the reason it was mangled. It looks like all of your Chinese characters are currently hex entities (the answer in the footnote is 有).