Posts

I'm looking for alternative funding strategies for cryonics. 2019-06-30T03:22:40.919Z

Comments

Comment by Walker Vargas on Multinational corporations as optimizers: a case for reaching across the aisle · 2023-12-10T13:35:57.955Z · LW · GW

Do they think it's a hardware/cost issue? Or do they think that "true" intelligence is beyond our abilities?

Comment by Walker Vargas on Multinational corporations as optimizers: a case for reaching across the aisle · 2023-12-09T17:24:43.677Z · LW · GW

This is also a plausible route for spreading awareness of AI safety issues to the left. The downside is that it might make AI safety a "leftist" issue if a conservative analogy is not introduced at the same time.

Comment by Walker Vargas on Bids To Defer On Value Judgements · 2023-10-01T16:10:38.242Z · LW · GW

I think of it as deferring to future me vs. deferring to someone else.

Comment by Walker Vargas on riceissa's Shortform · 2023-08-15T03:48:55.835Z · LW · GW

Another consideration is how much money someone has on hand. If someone only makes $1,000 a month, they may choose $25 shoes that will last a year over $100 shoes that will last 5 years. Essentially, it is the complementary idea to economies of scale.

Comment by Walker Vargas on A case for gamete personhood (reductio ad absurdum) · 2023-07-21T14:51:15.619Z · LW · GW

Personhood is a legal category and an assumed moral category that policies can point to. Usually, the rules being argued about are about the acceptability of killing something. The category is used differently depending on the moral framework, but it is usually assumed to point at the same objects. Therefore disagreements are interpreted as mistakes.

Personally, I have my doubts that there is an exact point in development you can point to where a human becomes a person. If there is, it might be weeks after birth.

Comment by Walker Vargas on Criticism of Eliezer's irrational moral beliefs · 2023-06-18T03:20:24.740Z · LW · GW

If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet. It doesn't care what the "right" thing is. Humans probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" references. The argument is more of a trick to get people to recognize that than a formal argument.

Comment by Walker Vargas on Criticism of Eliezer's irrational moral beliefs · 2023-06-17T20:53:20.703Z · LW · GW

I think the point is that people try to point to things like God's will in order to appear to have a source of authority. Eliezer is trying to lead them to conclude that any such tablet being authoritative just by its nature is absurd, and that it only seems right because they expect the tablet to agree with them. Another method is asking why the tablet says what it does: ask whether God's decrees are arbitrary or whether there is a good reason for them, and if there is a good reason, ask why not just follow those reasons directly.

Comment by Walker Vargas on What if we Align the AI and nobody cares? · 2023-04-20T02:41:55.451Z · LW · GW

While I see a lot of concern about the big one, I think the whole AI environment being unaligned is the more likely, but not any better, outcome: a society that is doing really well by some metrics that just happen to be the wrong ones. I'm thinking of the idea of freedom of contract that was popular at the beginning of the 20th century, and how hard it was to dig ourselves out of that hole.

Comment by Walker Vargas on [deleted post] 2023-04-01T18:04:29.518Z

Highly positive outcomes are assumed to be more specific and complex than highly bad outcomes. Another assumption I think is common is that the utility of a maximally good life is smaller in magnitude than the utility of a maximally bad life. Is there a life good enough that you would take a bet with a 50% chance of that life and a 50% chance of the worst life of torture?
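As a rough formalization of that bet (my own notation, not from the comment): with the status quo normalized to zero utility, taking the bet is rational only if

$$E[\text{bet}] = \tfrac{1}{2}U_{\text{best}} + \tfrac{1}{2}U_{\text{worst}} > 0,$$

i.e. only if $U_{\text{best}} > |U_{\text{worst}}|$. Under the asymmetry assumption above, no life is good enough, so the bet is always refused.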

Comment by Walker Vargas on God vs AI scientifically · 2023-03-24T20:38:37.987Z · LW · GW

I don't think the fundamental ought works as a default position, partly because there will always be a possibility of being wrong about what that fundamental ought is, no matter how long the AI looks. So the real choice is about how sure it should be before it starts acting on its best-known option.

The right side can't be NULL, because that would make the expected value of both actions NULL. To do meaningful math with these possibilities, there has to be a way of comparing utilities across the scenarios.
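A minimal sketch of that point in Python (the probabilities and utilities are hypothetical, purely for illustration):

```python
def expected_value(outcomes):
    """Probability-weighted sum of utilities over (probability, utility) pairs.

    If any branch's utility is NULL (None), the whole expectation is undefined.
    """
    if any(u is None for _, u in outcomes):
        return None  # NULL propagates through the expectation
    return sum(p * u for p, u in outcomes)

# If the "fundamental ought exists" branch carries no comparable utility,
# every action's expected value comes out NULL and nothing can be preferred.
act_now        = expected_value([(0.9, 10.0), (0.1, None)])
keep_searching = expected_value([(0.9,  8.0), (0.1, None)])
print(act_now, keep_searching)  # None None
```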

Comment by Walker Vargas on Harry Potter in The World of Path Semantics · 2023-03-23T17:18:19.598Z · LW · GW

No, if you are contributing to a preexisting discussion, there should be some older work you can cite. For example, you learned about the theory of path semantics from something that wasn't written by you. Cite that source.

Comment by Walker Vargas on God vs AI scientifically · 2023-03-23T17:03:55.896Z · LW · GW

I don't think that matrix is right; I think it describes a different scenario. Suppose an AI's utility function is defined referentially as being equal to some unknown function written in a letter on Mt. Everest. It also has a given utility function that it has little reason to think is correlated with the real one. Then it would be very important to find out what that true function is, and the expected value of any action would be NULL if that letter doesn't exist.

But an AI that only assigns a probability that that scenario is the case might still have most of its expected value tied to following its current utility function, at least given some way of comparing the two functions. Without that, there's no way to weigh the choice.
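Spelled out (again my own formalization, not from the comment): if the AI assigns probability $p$ to the letter scenario, the expected utility of an action $a$ is

$$E[U(a)] = p \cdot E[U_{\text{letter}}(a)] + (1 - p) \cdot U_{\text{given}}(a),$$

which is only meaningful if $U_{\text{letter}}$ and $U_{\text{given}}$ are expressed on a common scale; otherwise the two terms can't be added.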

Comment by Walker Vargas on Seeing Red: Dissolving Mary's Room and Qualia · 2022-08-20T17:17:37.670Z · LW · GW

I just had a thought. If Mary were presented with a red, a blue, and a green tile on a white background, could she identify which was which without additional visual context clues, like comparing them to her nails? If not, I would expect a p-zombie to have the same issue, implying that the failure has nothing to do with consciousness.

Comment by Walker Vargas on Why will AI be dangerous? · 2022-02-06T05:26:06.532Z · LW · GW

Depending on who you are talking to, for-profit corporations are a good analogy for what is meant by "misaligned". You can then point out that those same organizations are likely to make AI with profit maximization in mind, and might skimp on moral restraint in favor of being superhumanly good at PR.

Use that comparison with the wrong person and they'll call you a communist.

Comment by Walker Vargas on Why will AI be dangerous? · 2022-02-06T05:09:01.722Z · LW · GW

I want to add that the AI probably does not know, for a while, that it is misaligned.

Comment by Walker Vargas on niplav's Shortform · 2021-10-28T20:07:18.130Z · LW · GW

This sounds similar to the replication crisis, in terms of the incentivization issues.

Comment by Walker Vargas on Unification seems to be a way to avoid infinitarian paralysis in consequentialist aggregative ethics. · 2021-04-06T16:19:43.790Z · LW · GW

Under unification, wouldn't it make sense to consider ourselves to be every instance of our mind state? So there's no fact of the matter about what your surroundings are like until they affect your mind state. Similarly, every past and future that is compatible with your current mind state happened and will happen, respectively.

Comment by Walker Vargas on Would a more deadly virus have induced greater compliance with US lockdown restrictions? · 2020-12-21T17:19:07.350Z · LW · GW

This isn't the flu. America has had 318,000 deaths so far. That's ~8.5 years' worth of flu deaths, one of those years from the last 26 days alone. If the world had America's mortality rate of almost 1 death per 1,000 people, that would be about 7.8 million deaths. There are 1.7 million deaths globally. That's 6 million people spared! And frankly, America is in at least a half-baked lockdown.
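A quick check of that arithmetic (the annual flu-death figure and both population figures are my approximations, not from the comment):

```python
us_deaths        = 318_000            # US COVID deaths cited above (Dec 2020)
us_population    = 330_000_000        # approximate US population
flu_deaths_year  = 37_000             # rough average annual US flu deaths
world_population = 7_800_000_000      # approximate 2020 world population

print(us_deaths / flu_deaths_year)                    # ~8.6 years of flu deaths
print(1000 * us_deaths / us_population)               # ~0.96 deaths per 1,000
print(world_population * us_deaths / us_population)   # ~7.5 million at the US rate
```

Rounding the US rate up to 1 per 1,000 gives the comment's 7.8 million figure.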

If your country has almost no cases, that isn't something to complain about. Mass graves would mean that your country had failed to the point of having difficulty managing all of the corpses. That point will vary country to country, but it is a lot harder for a first-world country to hit it than you seem to think.

Comment by Walker Vargas on Not Even Evidence · 2020-10-10T17:08:15.470Z · LW · GW

This doesn't require faster-than-light signaling. Suppose you and the copy are sent away with identical letters that you open after crossing each other's event horizons. You learn what was packed with your clone when you open your letter, which lets you predict what your clone will find.

Nothing here would require the event of your clone seeing the letter to affect you. You are affected by the initial set up. If the clone counterfactually saw something else, this wouldn't affect you according to SIA. It would require some assumptions about the setup to be wrong for that to happen to your clone though.

Another example: if you learn that a star which has crossed your cosmic event horizon was 100 solar masses, it's fair to infer that it will become a black hole and not a white dwarf.

Comment by Walker Vargas on What do the baby eaters tell us about ethics? · 2020-02-22T15:31:04.360Z · LW · GW

Sorry this is so late; I haven't been on the site for a while. My last post was in reply to the claim that non-interference is always better than fighting it out. Most of the characters seem to think that stopping the baby eaters has more utility than letting the superhappies do the same thing to us would cost.

Comment by Walker Vargas on What do the baby eaters tell us about ethics? · 2020-01-26T17:34:01.481Z · LW · GW

The story brings up the possibility that the disutility of the babyeaters might outweigh the utility of humanity. There's certainly nothing logically impossible about this.

Comment by Walker Vargas on Counterfactual Mugging: Why should you pay? · 2019-12-19T01:29:50.197Z · LW · GW

Just ask which algorithm wins, then. At least in these kinds of situations, UDT does better. The only downside is that the algorithm has to check whether it's in this kind of situation, which might not be worth the effort in practice.
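With the stakes in the usual statement of the problem (a fair coin; pay $100 on tails; receive $10,000 on heads if you are the sort of agent who pays), evaluating the policy from before the flip gives

$$E[\text{pay}] = \tfrac{1}{2}(10{,}000) + \tfrac{1}{2}(-100) = 4{,}950 > 0 = E[\text{refuse}],$$

which is the sense in which the paying algorithm wins.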

Comment by Walker Vargas on Bayesian Probability is for things that are Space-like Separated from You · 2019-12-14T03:31:48.345Z · LW · GW

It's a variant of the liar's paradox. If you say the statement is unlikely, you're agreeing with what it says. If you agree with it, you clearly don't think it's unlikely, so it's wrong.

Comment by Walker Vargas on What do the baby eaters tell us about ethics? · 2019-10-11T00:24:59.011Z · LW · GW

Vigilantism has been found to be lacking. If I wanted to help with that problem in particular, I'd become a cop or vote for politicians to put a higher priority on it. That seems directly comparable to what the humans in the story intended to do for most of it.

What the baby eaters are doing is worse by most people's standards than anything in our history, at least if scale counts for something. Humans don't even need a shared utility function; there just needs to be a cluster around what most people would reflectively endorse. Paperclip maximizers might fight each other over the exact definition, but a pencil maximizer is clearly not helping by any of their standards.

Also, the baby eaters aren't Spartans. If you gave the Spartans a cure-all for birth defects, they would stop killing their kids, and they certainly wouldn't complain about the cure.