Comments

Comment by aprilsr on An optimal stopping paradox · 2019-11-16T05:21:23.649Z · score: 0 (2 votes) · LW · GW

Reminds me of the thought experiment where you’re in hell and there’s a button that will either condemn you permanently or, with probability increasing over time, allow you to escape. Since permanent hell is infinitely bad, any decreased chance of it is infinitely good, so you either wait forever or make an arbitrary, unjustifiable decision.
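
A minimal sketch of the dominance argument, in notation I’m adding rather than taking from the original thought experiment: let $p(t)$ be the probability of escape if you press the button at time $t$, with $p$ strictly increasing, and let permanent hell have unboundedly negative utility. Then

$$\Pr[\text{permanent hell} \mid \text{press at } t+1] = 1 - p(t+1) < 1 - p(t) = \Pr[\text{permanent hell} \mid \text{press at } t],$$

and since any reduction in the probability of an infinitely bad outcome outweighs any finite cost of waiting, pressing at $t+1$ dominates pressing at $t$ for every $t$. No finite stopping time is optimal, yet never pressing means never escaping.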

Comment by aprilsr on The Simulation Epiphany Problem · 2019-11-02T21:00:55.415Z · score: 1 (1 votes) · LW · GW

Do we need it to predict people with high accuracy? Humans do well enough at our level of prediction.

Comment by AprilSR on [deleted post] 2019-10-31T02:49:43.542Z

I don’t understand what “an illness like DZV” means. Depending on how similar it has to be to qualify as “like,” it might be extremely unlikely purely on the basis of there being so many conjunctions, even putting aside that many parts of it are implausible.
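
The conjunction point can be made precise with a standard inequality (my gloss, not part of the original comment): the probability of a conjunction is bounded by its least likely part, and shrinks roughly geometrically when the parts are independent.

$$\Pr(A_1 \wedge \dots \wedge A_n) \le \min_i \Pr(A_i), \qquad \Pr(A_1 \wedge \dots \wedge A_n) = \prod_{i=1}^{n} \Pr(A_i) \text{ if the } A_i \text{ are independent,}$$

so even if each required feature of “an illness like DZV” is individually plausible, requiring all of them at once can drive the total probability very low.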

Comment by aprilsr on Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces · 2019-08-24T03:37:26.458Z · score: 1 (1 votes) · LW · GW

I believe there is some number of broken arms over the course of my life that would be worse than losing a toe, even though the broken arms are non-permanent and the loss of the toe is permanent.

Comment by aprilsr on Troll Bridge · 2019-08-24T03:03:52.488Z · score: 4 (3 votes) · LW · GW

"(It makes sense that) A proof-based agent can't cross a bridge whose safety is dependent on the agent's own logic being consistent, since proof-based agents can't know whether their logic is consistent."

If the agent crosses the bridge, then the agent knows itself to be consistent.

The agent cannot know whether it is consistent.

Therefore, crossing the bridge implies an inconsistency (the agent would know itself to be consistent, even though that’s impossible).
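
Written compactly, with $K(\varphi)$ for “the agent can prove $\varphi$” and $\mathrm{Con}$ for “the agent’s logic is consistent” (notation I’m adding; this compresses the three steps above, not the post’s full proof):

$$\text{Cross} \rightarrow K(\mathrm{Con}), \qquad \neg K(\mathrm{Con}), \qquad \therefore\ \neg\,\text{Cross}.$$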

The counterfactual reasoning seems quite reasonable to me.

Comment by aprilsr on Odds are not easier · 2019-08-21T15:53:43.844Z · score: 2 (2 votes) · LW · GW

If they didn’t need exactly the same amount of information I would be very interested in what kind of math wizardry is involved.
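
They do in fact need exactly the same amount of information: probability and odds are related by an invertible map, so each determines the other exactly. Writing it out for concreteness,

$$o = \frac{p}{1-p}, \qquad p = \frac{o}{1+o}, \qquad \text{e.g. } p = 0.75 \iff o = 3 \text{ (i.e. } 3\!:\!1\text{)}.$$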

Comment by aprilsr on Predicted AI alignment event/meeting calendar · 2019-08-15T20:48:09.103Z · score: 1 (1 votes) · LW · GW

If both of those things happened I would be very interested in hearing about the person who decided to make a paperclip maximizer despite having an explicit model of the human utility function they could implement.

Actually, I wouldn’t be interested in anything. I would be paperclips.

Comment by aprilsr on Eli's shortform feed · 2019-08-14T00:11:18.010Z · score: 2 (2 votes) · LW · GW

I’ve definitely experienced mental exhaustion from video games before - particularly when trying to do an especially difficult task.

Comment by aprilsr on Raemon's Scratchpad · 2019-08-10T01:54:45.419Z · score: 1 (1 votes) · LW · GW

I think 1000 people being struck by lightning would register as a gigantic surprise, not a less-than-1-signal-confusion.

Comment by aprilsr on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-07-31T21:59:34.865Z · score: 1 (1 votes) · LW · GW

This is a nitpick, but I contest the $10,000 figure. If I had an incentive as strong as building an (aligned) AGI, I'm sure I could find a way to obtain upwards of a million dollars worth of compute.

Comment by aprilsr on Arguments for the existence of qualia · 2019-07-28T21:17:37.097Z · score: 23 (7 votes) · LW · GW

I’m pretty sure “qualia do not exist” is an extreme fringe position. You seem to be under the impression that materialists deny qualia, which is not the case.

That said, this is a decent argument against the position that qualia do not exist.

Comment by aprilsr on Has "politics is the mind-killer" been a mind-killer? · 2019-03-17T05:05:30.032Z · score: 8 (7 votes) · LW · GW

I feel like “Politics is the Mind-Killer” made two points that came out pretty clearly to me and, I’d assume, most other people.

  1. It is very hard to discuss politics rationally.
  2. Therefore, avoid political examples (or use historical ones) when discussing rationality.

For example, Eliezer would advocate against saying “Hey, those stupid [political party] people made a huge mistake in supporting [candidate] in the 20XX election. Let’s learn from their mistake,” unless you were quite confident people could discuss the rationality and not the politics.

I think a lot of the “might”s and “could”s were avoided mainly for emphasis. Unless you have a strong reason to believe that someone will be able to be rational about politics, you can very safely assume they won’t be. “You have to support every argument on one side,” for example, is basically saying that most people don’t grasp the nuance of calling an argument flawed while agreeing with its conclusion. I very commonly see people make horribly incorrect arguments for positions I strongly support, but pointing out those flaws is rarely looked upon kindly by people who lack rationality skills.

While the conclusions you drew from the post were obviously harmful, I feel like very few people interpreted it that way.

Comment by aprilsr on In what ways are holidays good? · 2018-12-28T06:38:48.986Z · score: 4 (3 votes) · LW · GW

I think you've summarized the question we're trying to answer pretty well. Does Daniel want to go on vacations? We don't know. How would one go about deciding whether they want to go on vacations? You seem to be missing the fact that one might be unsure about their preferences.

Comment by aprilsr on Quantum immortality: Is decline of measure compensated by merging timelines? · 2018-12-12T00:45:13.550Z · score: 1 (1 votes) · LW · GW

This assumes that there's some point where things sharply cut off between being me and not being me. I think it makes more sense for my utility function to care more about something the more similar it is to me. The existence of a single additional memory means pretty much nothing, and I still care a lot about most human minds. Something entirely alien I might not care about at all.

Even if this actually raises my utility, it does so by changing my utility function. Instead of helping the people I care about, it makes me care about different people.