On not diversifying charity 2014-03-14T05:14:08.909Z
Life hack request: I want to want to work. 2013-06-11T19:41:01.252Z
Trust in God, or, The Riddle of Kyon Fan Visual Novel 2013-03-07T05:48:39.956Z
How do you not be a hater? 2013-02-25T05:31:28.981Z
Timeless Physics Question 2012-04-28T20:33:19.923Z
Yet another Sleeping Beauty 2011-11-22T05:00:08.722Z
Felicifia: a Utilitarianism Forum 2011-11-07T13:37:54.826Z
Yehweh and the Methods of Rationality 2011-09-28T03:06:53.284Z
Looking for proof of conditional probability 2011-07-28T02:24:00.286Z
Against improper priors 2011-07-26T23:50:44.020Z
Unconditionally Convergent Expected Utility 2011-06-11T20:00:23.355Z
The Difference Between Classical, Evidential, and Timeless Decision Theories 2011-03-26T21:27:23.846Z
Sleeping Beauty 2011-02-01T22:13:32.013Z
Varying amounts of subjective experience 2010-12-16T03:02:15.107Z
It's Not About Efficiency 2010-12-06T04:12:41.198Z
Evidential Decision Theory and Mass Mind Control 2010-10-23T23:26:42.124Z
Bayesian Doomsday Argument 2010-10-17T22:14:17.440Z


Comment by DanielLC on Stupid Questions November 2015 · 2015-11-20T06:34:33.380Z · LW · GW

Kind of reminds me of a discussion about making a utilitarian emblem. We never really settled on anything, but I think the best one was Σ☺.

Comment by DanielLC on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-18T07:49:36.560Z · LW · GW

Alternately, learn to upload people. Which is still probably going to require nanotech. This way, you're not dependent on ecosystems because you don't need anything organic. You can also modify computers to be resistant to radiation more easily than you can people.

If we can't thrive on a wrecked Earth, the stars aren't for us.

Comment by DanielLC on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-16T05:29:17.264Z · LW · GW

I admit that a Dyson sphere seems like an arbitrary place to stop, but I think my basic argument stands either way. If any intelligent life was that common, some of it would spread.

Comment by DanielLC on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-16T05:27:56.935Z · LW · GW

And that's why my conclusion is "that wasn't made by aliens."

Comment by DanielLC on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-15T06:14:26.191Z · LW · GW

But that's just the prior probability. I can still say that we have strong evidence that the probability of a given solar system having intelligent life is much, much lower than one in 150,000.

Comment by DanielLC on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-15T06:11:29.067Z · LW · GW

They're in that interval, or there isn't easy space travel.

But that's a lot of information. It's a very short interval. Since it's so unlikely to land in that interval by chance, this is strong evidence against easy space travel.

We can argue it's unlikely, sure

It's a probabilistic argument. But what isn't? There's no argument that allows infinite certainty. At least, I'm pretty sure there isn't.

Comment by DanielLC on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-15T01:29:09.384Z · LW · GW

According to Wikipedia, in Malaysia sale and importation of sex toys is illegal, but it doesn't sound like there's any law against using a vibrator you made yourself.

Comment by DanielLC on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-15T01:24:13.774Z · LW · GW

But if those are aliens, then aliens must be common. And if aliens are common, then there should have been tons of them that got to the space travel point long enough ago to have reached us by now.

Comment by DanielLC on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-15T01:23:18.191Z · LW · GW

But how often does that have to happen? They only looked at about 150,000 stars. There are hundreds of billions in our galaxy alone, and if an alien civilization developed even 1% earlier than ours, it would have had time to colonize the entire Virgo supercluster, so long as it started near the center.
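The timing claim above can be sanity-checked with a rough back-of-the-envelope calculation. This is a minimal sketch, not a careful cosmological model: the universe's age, the supercluster's radius, and the 0.5c colonization-wavefront speed are all assumed round numbers for illustration.

```python
# Rough check: does a 1%-earlier start leave time to cross the Virgo supercluster?
universe_age_yr = 13.8e9                  # approximate age of the universe, years
head_start_yr = 0.01 * universe_age_yr    # a 1% head start is ~138 million years
virgo_radius_ly = 55e6                    # rough radius of the Virgo supercluster, light-years
speed_fraction_c = 0.5                    # assumed colonization speed as a fraction of c

travel_time_yr = virgo_radius_ly / speed_fraction_c  # years to cross the radius
print(head_start_yr > travel_time_yr)                # → True: the head start suffices
```

Even at these conservative assumed numbers, a 138-million-year head start comfortably exceeds the ~110 million years needed, which is the point of the argument.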

Comment by DanielLC on Stupid questions thread, October 2015 · 2015-10-14T02:32:47.292Z · LW · GW

basically a negative income tax for the working poor in the US

That would increase incentive to work for the poor, but decrease the incentive to work hard enough to stop being considered poor. They can't have the income tax be negative for everyone.

Comment by DanielLC on Stupid questions thread, October 2015 · 2015-10-14T00:38:51.825Z · LW · GW

Taxes would increase to pay for the universal basic income. You could fund it with the money we currently spend on welfare, but that includes things like Medicare. Either we need to keep that, or we need to give people extra money to pay for medical insurance.

Supply of labor could decrease. This is a necessary consequence of any effort to help the poor. But since we already have a welfare system, it's just a question of which causes labor to decrease less.

Comment by DanielLC on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-06T03:49:22.414Z · LW · GW

MWI doesn't work that way. Universes are close iff the particles are in about the same place.

Comment by DanielLC on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-09-29T05:01:02.143Z · LW · GW

The link is broken. You need to escape your underscores. Write it as "[love languages](\_Five\_Love\_Languages)". That way it will print as "love languages".

Comment by DanielLC on Help me test out my Bayes Academy game · 2015-09-23T02:53:33.959Z · LW · GW

I tried it on Ubuntu. The game is practically unplayable. I only see the last line of the text unless I scroll, and most of the bottom box is covered. Is the text supposed to be so huge?

Comment by DanielLC on Median utility rather than mean? · 2015-09-18T20:16:34.660Z · LW · GW

If you're a psychologist and you care about describing people, change the axioms. If you're a rationalist and you care about getting things done, change yourself.

Comment by DanielLC on The virtual AI within its virtual world · 2015-08-30T01:04:40.740Z · LW · GW

I don't mean you can feasibly program an AI to do that. I just mean that it's something you can tell a human to do and they'd know what you mean. I'm talking about deontological ethics, not programming a safe AI.

Comment by DanielLC on Open Thread - Aug 24 - Aug 30 · 2015-08-27T06:15:44.578Z · LW · GW

The same reasoning would suggest that bisexuals should only get into same-sex relationships. Would you say that as well?

I disagree with the idea that they can't have kids. They can adopt. The girl can go to a sperm bank.

Comment by DanielLC on The virtual AI within its virtual world · 2015-08-26T22:04:06.562Z · LW · GW

Safe AI sounds like it does what you say as long as it isn't stupid. Friendly AIs are supposed to do whatever's best.

Comment by DanielLC on The virtual AI within its virtual world · 2015-08-26T22:02:58.396Z · LW · GW

Once AI exists, in the public, it isn't containable.

You mean like the knowledge of how it was made is public and anyone can do it? Definitely not. But if you keep it all proprietary it might be possible to contain.

But if we get to AI first, and we figure out how to box it and get it to do useful work, then we can use it to help solve FAI. Maybe.

I suppose what we should do is figure out how to make friendly AI, figure out how to create boxed AI, and then build an AI that's probably friendly and probably boxed, and it's more likely that everything won't go horribly wrong.

You would need some assurance that the AI would not try to manipulate the output.

Manipulate it to do what? The idea behind mine is that the AI only cares about answering the questions you pose it given that it has no inputs and everything operates to spec. I suppose it might try to do things to guarantee that it operates to spec, but it's supposed to be assuming that.

Comment by DanielLC on The virtual AI within its virtual world · 2015-08-26T21:58:25.345Z · LW · GW

There's a difference between creating someone with certain values and altering someone's values. For one thing, it's possible to prohibit messing with someone's values, but you can't create someone without creating them with values. It's not like you can create an ideal philosophy student of perfect emptiness.

Comment by DanielLC on Open Thread - Aug 24 - Aug 30 · 2015-08-25T04:10:57.928Z · LW · GW

There are certainly ways you can usefully modify yourself. For example, giving yourself a heads-up display. However, I'm not sure how much it would end up increasing your intelligence. You could get runaway superintelligence if every improvement increases the best mind current!you can make by at least that much, but if it increases it by less than that, it won't run away.

Comment by DanielLC on Open Thread - Aug 24 - Aug 30 · 2015-08-25T04:05:29.548Z · LW · GW

I would.

Comment by DanielLC on Open Thread - Aug 24 - Aug 30 · 2015-08-25T04:04:41.775Z · LW · GW

The money that's "at stake" is the amount you spend to play the game. Once the game begins, you get 2^(n) dollars, where n is the number of successive heads you flip.
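The payoff rule described above (this is the St. Petersburg game) can be sketched as a quick simulation. A minimal sketch with an assumed fair coin; the function name is mine, not from the original discussion.

```python
import random

def st_petersburg_payoff(rng=random):
    """Flip a fair coin until tails; pay out 2**n dollars,
    where n is the number of successive heads flipped first."""
    n = 0
    while rng.random() < 0.5:  # treat < 0.5 as heads
        n += 1
    return 2 ** n

# Each possible run length contributes (1/2)**(n+1) * 2**n = $0.50 to the
# expected value, so the expectation diverges even though most payoffs are small.
print(st_petersburg_payoff())
```

The divergent expectation is exactly why the amount you'd pay to play "at stake" matters: any finite entry fee is less than the expected payout.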

Comment by DanielLC on Yudkowsky's brain is the pinnacle of evolution · 2015-08-25T04:02:08.491Z · LW · GW

That adds up to 100%. You need to leave room for other things, like they're trolling us for the fun of it.

Comment by DanielLC on The virtual AI within its virtual world · 2015-08-25T03:57:24.717Z · LW · GW

"Slave" makes it sound like we're making it do something against its will. "Benevolent AI" would be better.

Comment by DanielLC on The virtual AI within its virtual world · 2015-08-24T20:03:33.908Z · LW · GW

I have thought about something similar with respect to an oracle AI. You program it to try to answer the question assuming no new inputs and everything works to spec. Since spec doesn't include things like the AI escaping and converting the world to computronium to deliver the answer to the box, it won't bother trying that.

I kind of feel like anything short of friendly AI is living on borrowed time. Sure the AI won't take over the world to convert it to paperclips, but that won't stop some idiot from asking it how to make paperclips. I suppose it could still be helpful. It could at the very least confirm that AIs are dangerous and get people to worry about them. But people might be too quick to ask for something that they'd say is a good idea after asking about it for a while or something like that.

Comment by DanielLC on Magic and the halting problem · 2015-08-24T07:04:58.459Z · LW · GW

I think that the first universe is sufficiently more likely than the second that you shouldn't assume it's a coincidence, and you should expect wingardium leviosa to keep working.

Comment by DanielLC on Magic and the halting problem · 2015-08-23T20:25:25.241Z · LW · GW

Let me make a simpler form of this problem. Suppose I flip a fair coin a thousand times, and it just happens to land on heads every time. How do I find out that this is a fair coin, and that I don't actually have a trick coin that always lands on heads? The answer is that I can't. Any algorithm that tells me that it's fair is going to fail in the much more likely circumstance that I have a coin that always lands on heads. The best I can do is show that I have 1000 bits of evidence in favor of a trick coin, update my priors accordingly, and use this information when betting.

The good news is that you will only get a fair coin that lands on heads a thousand times about 9.33 × 10^−300 percent of the time (that's 2^−1000), so you won't be this wrong by chance very often. In general, you can calculate how likely you are to be wrong, and hedge your bets accordingly.
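The two numbers in this argument, the probability of the run and the 1000 bits of evidence, can be checked in a couple of lines. A minimal sketch of the arithmetic:

```python
from math import log2

p_all_heads = 0.5 ** 1000              # chance a fair coin gives 1000 straight heads
bits_of_evidence = -log2(p_all_heads)  # log-likelihood ratio vs. an always-heads coin

print(p_all_heads)       # ≈ 9.33e-302 (as a percentage, ≈ 9.33e-300 %)
print(bits_of_evidence)  # 1000.0
```

Each head is one bit of evidence for the trick-coin hypothesis, so a thousand heads is a thousand bits, which is the update the comment describes.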

Comment by DanielLC on Fragile Universe Hypothesis and the Continual Anthropic Principle - How crazy am I? · 2015-08-19T17:52:56.965Z · LW · GW

Obviously it would distort our view of how quickly the universe decays into a true vacuum. There's also the mangled worlds idea to explain the Born rule.

Comment by DanielLC on Fragile Universe Hypothesis and the Continual Anthropic Principle - How crazy am I? · 2015-08-19T07:44:04.493Z · LW · GW

I'm pretty sure I've seen this before, with the example of our universe being a false vacuum with a short half-life.

Comment by DanielLC on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-17T08:02:42.389Z · LW · GW

I once had a homework problem where I was supposed to use some kind of optimization algorithm to solve the knapsack problem. The teacher said that, while it's technically NP-complete, you can generally solve it pretty easily. Although the homework used such a small instance that the algorithm pretty much came down to checking every combination.
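The "checking every combination" approach mentioned above looks like this for a tiny instance. A minimal sketch; the weights and values are made up for illustration.

```python
from itertools import combinations

def brute_force_knapsack(items, capacity):
    """items: list of (weight, value) pairs. Tries every subset --
    fine for tiny instances, hopeless in general (2**n subsets)."""
    best_value = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, v in subset)
            if weight <= capacity:
                best_value = max(best_value, sum(v for w, v in subset))
    return best_value

# Four items, capacity 7: the best pick is (2, 3) + (5, 8) for value 11.
print(brute_force_knapsack([(2, 3), (3, 4), (4, 5), (5, 8)], 7))  # → 11
```

With only a handful of items the exponential blowup never bites, which is why the homework version felt easy despite the problem's NP-completeness.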

Comment by DanielLC on Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels · 2015-08-16T03:47:30.933Z · LW · GW

TL;DR: Soylent contains safe levels of those heavy metals, but enough that they are required to warn people in the state of California. It's not uncommon for food to have heavy metals at that level.

Comment by DanielLC on Crazy Ideas Thread, Aug. 2015 · 2015-08-12T21:29:50.402Z · LW · GW

There are two major problems with how the earth is currently set up. Only the surface is habitable, and it's a sphere, which is known for having the minimum possible surface area for its volume. A Matrioshka brain would be a much more optimal environment. Although that depends on your definition of "human being".

Comment by DanielLC on Crazy Ideas Thread, Aug. 2015 · 2015-08-12T17:47:35.416Z · LW · GW

In other words, laziness and overconfidence bias cancel each other out, and getting rid of the second without getting rid of the first will cause problems?

Comment by DanielLC on Crazy Ideas Thread, Aug. 2015 · 2015-08-12T07:47:43.426Z · LW · GW

You assume people will commit suicide if their life is not worth living. People have a strong instinct against suicide, so I doubt they'd do it unless their life is not worth living by a wide margin.

Comment by DanielLC on Crazy Ideas Thread, Aug. 2015 · 2015-08-11T22:28:58.792Z · LW · GW

We'll make it a double territory.

Comment by DanielLC on Crazy Ideas Thread, Aug. 2015 · 2015-08-11T22:27:05.357Z · LW · GW

I think drinking is also about the idea that it might cause problems to people who aren't fully grown. I don't know if that's true, but I don't think that matters politically.

Comment by DanielLC on Crazy Ideas Thread, Aug. 2015 · 2015-08-11T22:25:47.445Z · LW · GW

Deontology is funny like that. Imposing a one-in-a-million chance of death on each of a million people is fine, but killing one is not. Not even if you make it a lottery so each of them has a one-in-a-million chance of dying, since you're still killing them.

Comment by DanielLC on Crazy Ideas Thread, Aug. 2015 · 2015-08-11T22:13:33.121Z · LW · GW

Is that actually illegal or just against the rules? I would expect it would be perfectly legal to start your own, although I could see why people might object if you don't at least limit it to make sure it stays at safe levels. And if you do limit it, you'll have all those advantages you said, but not the obvious one of not having cheaters. It's just as hard to tell if someone's doping more than they should as it is to tell if they're doing it at all.

Comment by DanielLC on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-10T23:45:22.094Z · LW · GW

I think babies are more person-like than the animals we eat for food. I'm not an expert in that though. They're still above someone in a coma.

Comment by DanielLC on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-10T23:42:36.979Z · LW · GW

It's not about communication. It's not even about sensing. It's about subjective experience. If your mind worked properly but you just couldn't sense anything or do anything, you'd have moral worth. It would probably be negative and it would be a mercy to kill you, but that's another issue entirely. From what I understand, if you're in a coma, your brain isn't entirely inactive. It's doing something. But it's more comparable to what a fish does than a conscious mammal.

Someone in a coma is not a person anymore. In the same sense that someone who is dead is not a person anymore. The problem with killing someone is that they stop being a person. There's nothing wrong with taking them from not a person to a slightly different not a person.

If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them

A mass murderer is still a person. They think and feel like you do, except probably with less empathy or something. The world is better off without them, and getting rid of them is a net gain. But it's not a Pareto improvement. There's still one person that gets the short end of the stick.

Comment by DanielLC on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-10T19:53:52.087Z · LW · GW

If they really don't care about humans, then the AI will use all the resources at its disposal to make sure the paradise is as paradisaical as possible. Humans are made of atoms, and atoms can be used to do calculations to figure out what paradise is best.

Although I find it unlikely that the S team would be that selfish. That's a really tiny incentive to murder everyone.

Comment by DanielLC on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-10T19:49:58.676Z · LW · GW

There are reasons not to kill someone in a coma who didn't want to be killed while comatose, even if you disagree with them about what makes life have moral value. If they agreed to have the plug pulled once it became clear they wouldn't wake up, then it seems pretty reasonable to take out the organs before pulling the plug. And given what's at stake, with their permission, you should be able to take out their organs early and hasten their death by a short time in exchange for a better chance of saving someone else.

And why are you already conjecturing about what we would have wanted? We're not dead yet. Just ask us what we want.

Comment by DanielLC on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-10T19:44:49.200Z · LW · GW

A person in solitary still has experiences. They just don't interact with the outside world. People in a coma are, as far as we can tell, not conscious. There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.

Comment by DanielLC on Some concepts are like Newton's Gravity, others are like... Luminiferous Aether? · 2015-08-09T18:30:35.898Z · LW · GW

I wouldn't call luminiferous aether just plain wrong. Asking what it's made from doesn't make a lot of sense, but saying that that means it doesn't exist would be like saying electrons don't exist because they don't have a volume.

Personally, I don't trust the concept of values. It's already so complex and fragile, I'm afraid it doesn't actually exist.

It's something of a simplification. People are not ideal utility-maximizers. But they're close enough that it works well.

Comment by DanielLC on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-04T17:55:00.448Z · LW · GW

There are various ways to get infinite and infinitesimal utility, but they don't matter in practice. Everything but the largest potential source of infinite utility will only matter as a tie-breaker, which occurs with probability zero.

Cardinal numbers also wouldn't work well even as infinite numbers go. You can't have a set with half an element, or with a negative number of elements. And is there a difference between a 50% chance of uncountable utilons and a 100% chance?

Comment by DanielLC on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-04T17:47:23.357Z · LW · GW

How badly could a reasonably intelligent follower of the selfish creed, "Maximize my QALYs", be manhandled into some unpleasant parallel to a Pascal's Mugging?

They'd be just as subject to it as anyone else. It's just that instead of killing 3^^^3 people, they threaten to torture you for 3^^^3 years. Or offer 3^^^3 years of life or something. It comes from having an unbounded utility function. Not from any particular utility function.

Comment by DanielLC on Rationality Quotes Thread August 2015 · 2015-08-04T17:44:57.428Z · LW · GW

The first is certainly good for teaching math, but in general they both have advantages and disadvantages. It's good to have a lot of methods for solving problems, but it's also important to have general methods that can each solve many problems.

Comment by DanielLC on On stopping rules · 2015-08-03T05:44:22.746Z · LW · GW

Here's how I look at it. Suppose you want to prove A, so you look for evidence until either you can prove it for p = 0.05, or it's definitely false. Let E be this experiment proving A, and !E be disproving it. P(A|E) = 0.95, and P(A|!E) = 0. Let's assume the prior for A is P(A) = 0.5.

P(A|E) = 0.95

P(A|!E) = 0

P(A) = 0.5

By conservation of expected evidence, P(A|E)P(E) + P(A|!E)P(!E) = P(A) = 0.5

0.95 P(E) = 0.5

P(E) = 0.526

So the experiment is more likely to succeed than fail. Even though A has even odds of being true, you can prove it more than half the time. It sounds like you're cheating somehow, but the thing to remember is that there are false positives but no false negatives. All you're doing is proving probably A more than definitely not A, and probably A is more likely.

But P(A|E) = 0.95 was an assumption here. Had that probability been different, P(E) would have been different.
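The numbers in this derivation can be verified mechanically. A minimal sketch of the conservation-of-expected-evidence calculation, using the same values as above:

```python
p_A = 0.5             # prior on A
p_A_given_E = 0.95    # the experiment "proves" A at p = 0.05
p_A_given_notE = 0.0  # otherwise A is definitely false

# Conservation of expected evidence:
#   P(A|E) P(E) + P(A|!E) P(!E) = P(A)
# With P(A|!E) = 0 this reduces to P(E) = P(A) / P(A|E).
p_E = p_A / p_A_given_E
print(round(p_E, 3))  # → 0.526
```

So the experiment "succeeds" about 52.6% of the time even though A is only 50% likely, which is the apparent paradox the comment dissolves.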

Comment by DanielLC on Stupid Questions August 2015 · 2015-08-02T21:52:52.677Z · LW · GW

We have values besides inclusive genetic fitness.