Comments

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Allais Paradox · 2023-02-05T22:41:49.084Z · LW · GW

That's beside the point. In the first case you'd take 1A in the first game and 2A in the second game (a 34% chance of living is better than 33%). In the second case, if you bothered to play at all, you'd probably take 1B/2B. What doesn't make sense is taking 1A and 2B. That policy is inconsistent no matter how you value different amounts of money (unless you don't care about money at all, in which case do whatever; the paradox is better illustrated with something you do care about), so things like risk, capital cost, and diminishing returns are beside the point.
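To spell out why (a quick sketch, writing u for whatever utility you assign to each outcome):

$$
EU(1A) = u(\$24{,}000), \qquad EU(1B) = \tfrac{33}{34}\,u(\$27{,}000) + \tfrac{1}{34}\,u(\$0),
$$
$$
EU(2A) = 0.34\,u(\$24{,}000) + 0.66\,u(\$0), \qquad EU(2B) = 0.33\,u(\$27{,}000) + 0.67\,u(\$0),
$$

so

$$
EU(2A) - EU(2B) = 0.34\,\bigl(EU(1A) - EU(1B)\bigr).
$$

Whatever u is, the two differences have the same sign, so no expected-utility maximizer prefers 1A in one game and 2B in the other.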

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Allais Paradox · 2022-10-30T21:15:01.510Z · LW · GW

In this case the only reason the money pumping doesn't work is that Omega is unable to choose its policy based on its prediction of your second decision: if it could, you would want to switch back to b, because if you chose a, Omega would know that and you'd get 0 payoff. This makes the situation after the coinflip different from the original problem, where Omega is able to see your decision and make its decision based on that.

In the Allais problem as stated, there's no particular reason why the situation where you get to choose between $24,000, or $27,000 with 33/34 chance, differs depending on whether someone just offered it to you, or offered it to you only after you rolled 34 or less on a d100.
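Indeed, the d100 version reproduces the second game exactly: if the offer only happens on a roll of 34 or less,

$$
P(\$24{,}000) = 0.34 \times 1 = 0.34, \qquad P(\$27{,}000) = 0.34 \times \tfrac{33}{34} = 0.33,
$$

which are just options 2A and 2B.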

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on love, not competition · 2022-10-30T20:58:42.109Z · LW · GW

My worry with automation isn't that it will destroy the intrinsic value of human endeavors, but rather that it will destroy the economic value of the average person's endeavors. I agree that human art is still valuable even if AI can make better art. My concern is that under the current system of production, where people must contribute to society in a competitive way in order to secure an income and a living for themselves, full automation will be materially harmful to everyone who doesn't own the automated systems.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Darwin Game · 2020-10-10T17:07:26.859Z · LW · GW

Is everybody's code going to be in Python?

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Darwin Game · 2020-10-09T18:22:10.151Z · LW · GW

What are the rules about program runtime?

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Brainstorming positive visions of AI · 2020-10-08T05:57:33.677Z · LW · GW

A common concern around here seems to be that, without massive and delicate breakthroughs in our understanding of human values, any superintelligence will destroy all value by becoming some sort of paperclip optimizer. This is what Eliezer claims in Value is Fragile. Therefore, any vision of the future that manages to do better than this without requiring huge philosophical breakthroughs (in particular, a future that doesn’t know how to implement CEV before the Singularity happens) is encouraging to me as a proof of concept for how the future might be more likely to go well.

In a future where uploading minds into virtual worlds becomes possible before an AI takeover, there might well be a way to salvage quite a lot of human value with a very comparatively simple utility function: simply create a big virtual world and upload lots of people into it, then have the AI’s whole goal be to run this simulation for as long as possible.

This idea of “just run this program” seems a lot more robust, more likely to work, and less likely to be exploited than attempting to maximize some utility function meant to represent human values, and the result would probably be better than what would happen if the latter went wrong. I suspect it would be well within the capability of a society which can upload minds to create a virtual world for these minds where the only scarce resource is computation cycles and there is no way to forcibly detain someone, so this virtual world would not have many of the problems our current world has.

This is far from a perfect outcome, of course. The AI would likely destroy everything it touches for resources, killing everyone not fortunate enough to get uploaded. And there are certainly other problems with any idea of “virtual utopia” we could come up with. But this idea gives me hope, because it might be improved upon, and because it is a way that we don't lose everything even if CEV proves too hard a problem to solve before the Singularity.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Troll Bridge · 2020-09-16T20:04:08.748Z · LW · GW

Thanks for the link, I will check it out!

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on War and/or Peace (2/8) · 2020-09-16T06:19:53.612Z · LW · GW
As for cannibalism, it seems to me that its role in Eliezer's story is to trigger a purely illogical revulsion in the humans who anthropomorphise the aliens.

I dunno about you, but my problem with the aliens isn't that it's cannibalism but that the vast majority of them die slow and horribly painful deaths.

No cannibalism takes place, but the same amount of death and suffering is present as in Eliezer's scenario. Should we be less or more revolted at this?

The same.

Which scenario has the greater moral weight?

Neither. They are both horrible.

Should we say the two-species configuration is morally superior because they've developed a peaceful, stable society with two intelligent species coexisting instead of warring and hunting each other?

Not really because most of them still die slow and horribly painful deaths.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Troll Bridge · 2020-09-16T03:20:04.595Z · LW · GW

Sorry to necro this here, but I find this topic extremely interesting and I keep coming back to this page to stare at it and tie my brain in knots. Thanks for your notes on how it works in the logically uncertain case. I found a different objection based on the assumption of logical omniscience:

Regarding this you say:

Perhaps you think that the problem with the above version is that I assumed logical omniscience. It is unrealistic to suppose that agents have beliefs which perfectly respect logic. (Un)Fortunately, the argument doesn't really depend on this; it only requires that the agent respects proofs which it can see, and eventually sees the Löbian proof referenced.

However, this assumes that the Löbian proof exists. We show that the Löbian proof of A=cross→U=−10 exists by showing that the agent can prove □(A=cross→U=−10)→(A=cross→U=−10), and the agent's proof seems to assume logical omniscience:

Examining the agent, either crossing had higher expected utility, or P(cross)=0. But we assumed □(A=cross→U=−10), so it must be the latter. So the bridge gets blown up.

If □ here means "provable in PA", the logic does not follow through if the agent is not logically omniscient: the agent might find crossing to have a higher expected utility regardless, because it may not have seen the proof. If □ here instead means "discoverable by the agent's proof search" or something to that effect, then the logic here seems to follow through (making the reasonable assumption that if the agent can discover a proof of A=cross→U=−10, then it will set its expected value for crossing to −10). However, that would mean we are talking about provability in a system which can only prove finitely many things, which in particular cannot contain PA, and so Löb's theorem does not apply.
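For reference, the Löbian step being invoked is Löb's theorem,

$$
\text{if } \vdash (\Box P \rightarrow P) \text{ then } \vdash P, \qquad \text{here with } P := (A{=}\text{cross} \rightarrow U{=}{-}10),
$$

so a proof of □P → P (the quoted step) yields a proof of P. My point above is that if □ is a bounded proof search rather than PA-provability, this theorem need not apply.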

I am still trying to wrap my head around exactly what this means, since your logic seems unassailable in the logically omniscient case. It is counterintuitive to me that the logically omniscient agent would be susceptible to trolling but the more limited one would not. Perhaps there is a clever way for the troll to get around this issue? I dunno. I certainly have no proof that such an agent cannot be trolled in such a way.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Strangest Thing An AI Could Tell You · 2020-05-08T05:40:55.849Z · LW · GW

That's what I was thinking. Garbage in, garbage out.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Stanford Encyclopedia of Philosophy on AI ethics and superintelligence · 2020-05-04T03:50:44.956Z · LW · GW

What do you mean by that?

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Is this viable physics? · 2020-04-16T07:45:53.231Z · LW · GW

This seems equivalent to the Tegmark Level IV Multiverse to me. Very simple, and probably our universe is somewhere in there, but it doesn't have enough explanatory power to be considered a Theory of Everything in the physical sense.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Two Alternatives to Logical Counterfactuals · 2020-04-03T01:37:28.191Z · LW · GW

From an omniscient point of view, yes. From my point of view, probably not, but there are still problems relating to this that can cause logic-based agents to get very confused.

Let A be an agent considering options X and not-X. Suppose A |- Action=not-X -> Utility=0. The naive approach to this would be to say: if A |- Action=X -> Utility<0, A will do not-X, and if A |- Action=X -> Utility>0, A will do X. Suppose further that A knows its source code, so it knows this is the case.
Consider the statement G = (A |- G) -> (Action=X -> Utility<0). It can be constructed using Gödel numbering and quines. Present A with the following argument:

Suppose for the sake of argument that A |- G. Then A |- (A |- G), since A knows its source code. Also, by definition of G, A |- (A |- G) -> (Action=X -> Utility<0). By modus ponens, A |- (Action=X -> Utility<0). Therefore, by our assumption about A, A will do not-X: Action!=X. But, vacuously, this means that (Action=X -> Utility<0). Since we have proved this by assuming A |- G, we know that (A |- G) -> (Action=X -> Utility<0), in other words, we know G.

The argument then goes, similarly to above:
A |- G
A |- (A |- G)
A |- (A |- G) -> (Action=X -> Utility<0)
A |- (Action=X -> Utility<0)
Action=Not-X

We proved this without knowing anything about X. This shows that naive logical implication can easily lead one astray. The standard solution to this problem is the chicken rule: make it so that if A ever proves which action it will take, it immediately takes the opposite action. This avoids the argument presented above, but it is defeated by Troll Bridge, even when the agent has good logical uncertainty.
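A minimal sketch of that rule (the names `proves` and `expected_utility` are hypothetical stubs for the agent's proof search and utility estimates):

```python
def decide(actions, proves, expected_utility):
    """Toy decision procedure illustrating the chicken rule described above."""
    # Chicken rule: if the agent can prove which action it will take,
    # it immediately takes a different one, so any such proof would be unsound.
    for a in actions:
        if proves(f"Action = {a}"):
            return next(x for x in actions if x != a)
    # Otherwise fall back to the naive comparison from the argument above.
    return max(actions, key=expected_utility)
```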

These problems seem to me to show that logical uncertainty about the action one will take, paired with logical implications about what the result will be if you take a particular action, are insufficient to describe a good decision theory.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Two Alternatives to Logical Counterfactuals · 2020-04-02T05:41:02.061Z · LW · GW
Suppose you learn about physics and find that you are a robot. You learn that your source code is "A". You also believe that you have free will; in particular, you may decide to take either action X or action Y.

My motivation for talking about logical counterfactuals has little to do with free will, even if the philosophical analysis of logical counterfactuals does.

The reason I want to talk about logical counterfactuals is as follows: suppose as above that I learn that I am a robot, and that my source code is "A" (which is presumed to be deterministic in this scenario), and that I have a decision to make between action X and action Y. In order to make that decision, I want to know which decision has better expected utility. The problem is that, in fact, I will either choose X or Y. Suppose without loss of generality that I will end up choosing action X. Then worlds in which I choose Y are logically incoherent, so how am I supposed to reason about the expected utility of choosing Y?

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on "No evidence" as a Valley of Bad Rationality · 2020-03-29T20:16:51.773Z · LW · GW

It's hard to tell, since while common sense is sometimes wrong, it's right more often than not. An idea being common sense shouldn't count against it, even though, like the article said, it's not conclusive.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on How to Measure Anything · 2020-03-28T00:23:20.835Z · LW · GW

Seems to me that before a philosophical problem is solved, it becomes a problem in some other field of study. Atomism used to be a philosophical theory. Now that we know how to objectively confirm it, it (or rather, something similar but more accurate) is a scientific theory.

It seems that philosophy (at least, the parts of philosophy that are actively trying to progress) is about taking concepts that we have intuitive notions of and figuring out what, if anything, those concepts actually refer to, until we succeed at this well enough that we can study them in more precise ways than, well, philosophy.

So, how many examples can we find where some vague but important-seeming idea has been philosophically studied until we learn what the idea refers to in concrete reality, and how to observe and measure it to some degree?

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on A Priori · 2020-03-25T05:36:18.250Z · LW · GW
When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

I mean, yeah? You can still do that in your armchair, without looking at anything outside of yourself. Mathematical facts are indeed "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe," if you modify the statement a little to say "anywhere else existent" in order to acknowledge that the operation of thought indeed exists in the universe. Do mathematical facts exist independently of the universe? Maybe, maybe not; it probably depends on what you mean by "exist," and it doesn't really matter to anyone, since either way you can't discover any mathematical facts without using your brain, which is in the universe. So there's no observable difference between whether Platonic math exists or not.


"free will" is a useful concept which should be kept, even though it has been used to refer to nonsensical things. Just because one can't will what he wills, doesn't mean we shouldn't be able to talk about willing what you do. Similarly, just because you can't get knowledge without thinking, doesn't mean we shouldn't be able to use "a priori knowledge" to talk about getting knowledge without looking.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on It "wanted" ... · 2020-02-16T08:43:27.524Z · LW · GW

Perhaps in many cases, "X wants Y" means that X will do or bring about Y unless it is prevented by something external. In some cases X is an unconscious optimization procedure, which therefore "wants" the thing it is optimizing; in other cases X is the output of some optimization procedure, as with a program that "wants" to complete its task or a microorganism that "wants" to reproduce. But optimization is not always involved, as illustrated by "high-pressure gas wants to expand".

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Catastrophic Convergence Conjecture · 2020-02-15T04:49:26.533Z · LW · GW

I think an important consideration is the degree of catastrophe. Even the asteroid strike, which is catastrophic to many agents on many metrics, is not catastrophic on every metric, not even every metric humans actually care about. An easy example of this is prevention of torture, which the asteroid impact accomplishes quite smoothly, along with almost every other negative goal. The asteroid strike is still very bad for most agents affected, but it could be much, much worse, as with the "evil" utility function you alluded to, which is very bad for humans on every metric, not just positive ones. Calling both of these things a "catastrophe" seems to sweep that difference under the rug.

With this in mind, "catastrophe" as defined here seems to be less about negative impact on utility and more about wresting control of the utility function away from humans, which seems bound to happen even in the best case, where an FAI takes over. It seems a useful concept if that is what you are getting at, but "catastrophe" has confusing connotations, as if a "catastrophe" is necessarily the worst thing possible and should be avoided at all costs. If an anti-aligned "evil" AI were about to be released with high probability, and you had a paperclip maximizer in a box, releasing the paperclip maximizer would be the best option, even though that moves the chance of catastrophe from high probability to indistinguishable from certainty.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Reasonable Effectiveness of Mathematics or: AI vs sandwiches · 2020-02-15T01:44:20.722Z · LW · GW
But, over the lifetime of civilization, our accumulated experience led us to update this prior, and single out the complexity measure suggested by math.

I may be picking nits here, but what exactly does it mean to "update a prior"?

And as a mathematical consideration, is it in general possible to switch your probabilities from one (limit computable) universal prior to another with a finite amount of evidence?

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on What are the risks of having your genome publicly available? · 2020-02-13T07:45:10.052Z · LW · GW

No way I'd take that bet on even odds. Though I do think it's better than even odds. It's kind of hard to figure out how I feel about this.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on What are the risks of having your genome publicly available? · 2020-02-12T07:47:44.808Z · LW · GW

Uh, if you're worried about UFAI, I'd be more concerned about your digital footprint. The concern with UFAI is that it might decide to torture a clone of you (who isn't the same as you unless the UFAI has a ton of other information about you, which is a separate thing) instead of somebody else. It doesn't seem that much worse from a selfless or selfish point of view.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on A rant against robots · 2020-01-18T00:11:50.240Z · LW · GW

Funny you mention AlphaGo, since the first time AlphaGo (or indeed any computer) beat a professional Go player (Fan Hui), it was distributed across multiple computers. Only later did it become strong enough to beat top players with only a single computer.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on A rant against robots · 2020-01-16T06:50:43.691Z · LW · GW

This is one of those things that seems obvious but it did cause some things to click for me that I hadn't thought of before. Previously my idea of AGI becoming uncontrollable was basically that somebody would make a superintelligent AGI in a box, and we would be able to unplug it anytime we wanted, and the real danger would be the AGI tricking us into not unplugging it and letting it out of the box instead. What changed this view was this line: "Try to unplug Bitcoin." Once you think of it that way it does seem pretty obvious that the most powerful algorithms, the ones that would likely first become superintelligent, would be distributed and fault-tolerant, as you say, and therefore would not be in a box of any kind to begin with.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on But exactly how complex and fragile? · 2019-11-03T23:39:37.996Z · LW · GW

I think that fully specifying human values may not be the best approach to an AI utopia. Rather, I think it would be easier and safer to tell the AI to upload humans and run an Archipelago-esque simulated society in which humans are free to construct and search for the society they want, free from many practical problems in the world today such as resource scarcity.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Deducing Impact · 2019-09-29T15:15:13.606Z · LW · GW

We're talking about the impact of an event though. The very question is only asking about worlds where the event actually happens.

If I don't know whether an event is going to happen and I want to know the impact it will have on me, I compare futures where the event happens to my current idea of the future, based on observation (which also includes some probability mass for the event in question, but not certainty).

In summary, I'm not updating to "X happened with certainty"; rather, I am estimating the utility in that counterfactual case.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Deducing Impact · 2019-09-25T05:44:39.748Z · LW · GW

Rot13:

Gur vzcnpg bs na rirag ba lbh vf gur qvssrerapr orgjrra gur rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba tvira pregnvagl gung gur rirag jvyy unccra, naq gur pheerag rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba.

Zber sbeznyyl, jr fnl gung gur rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba vf gur fhz, bire nyy cbffvoyr jbeyqfgngrf K, bs C(K)*H(K), juvyr gur rkcrpgrq inyhr bs lbhe hgvyvgl shapgvba tvira pregnvagl gung n fgngrzrag R nobhg gur jbeyq vf gehr vf gur fhz bire nyy cbffvoyr jbeyqfgngrf K bs C(K|R)*H(K). Gur vzcnpg bs R orvat gehr, gura, vf gur nofbyhgr inyhr bs gur qvssrerapr bs gubfr gjb dhnagvgvrf.
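In symbols (this just restates the rot13 above, so it's a spoiler):

$$
\mathrm{Impact}(E) = \Bigl|\,\sum_X P(X \mid E)\,U(X) \;-\; \sum_X P(X)\,U(X)\Bigr|
$$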

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on The Cartoon Guide to Löb's Theorem · 2019-09-15T21:31:03.819Z · LW · GW

Because assuming Provable(C)->C as a hypothesis doesn't allow you to prove C. Rather, the fact that a proof exists of Provable(C)->C allows you to construct a proof of C.
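In symbols, with □ for "provable": Löb's theorem is the rule

$$
\text{if } \vdash (\Box C \rightarrow C) \text{ then } \vdash C,
$$

not the object-level schema (□C → C) → C, which is not provable in general; so taking Provable(C)->C as a mere hypothesis doesn't get you to C.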

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Troll Bridge · 2019-09-15T19:11:25.065Z · LW · GW

The proof doesn't work on a logically uncertain agent. The logic fails here:

Examining the source code of the agent, because we're assuming the agent crosses, either PA proved that crossing implies U=+10, or it proved that crossing implies U=0.

A logically uncertain agent does not need a proof of either of those things in order to cross; it simply needs a positive expectation of utility, for example a heuristic which says there's a 99% chance that crossing implies U=+10.

Though you did say there's a version which still works for logical induction. Do you have a link to where I can see that version of the argument?

Edit: Now I think I do see the logic. On the assumption that the agent crosses but also proves that crossing implies U=-10, the agent must have a contradiction somewhere, and the logical-uncertainty agents I'm aware of do have a contradiction upon proving crossing implies U=-10, because they prove that they will not cross and then immediately cross, in a maneuver meant to prevent exactly this kind of problem.

Wait, but proving that crossing implies U=-10 does not mean that they prove they will not cross, precisely because they might still cross if they have a contradiction.

God this stuff is confusing. I still don't think the logic holds though.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Anthropic answers to logical uncertainties? · 2019-09-15T19:03:58.886Z · LW · GW

The Riemann argument seems to differ from the Great Filter argument in this way: the Riemann argument depends only on the sheer number of observers, i.e. the only thing you're taking into account is the fact that you exist. Whereas in the great filter argument you're updating based on what kind of observer you are, i.e. you're intelligent but not a space-travelling, uploaded posthuman.


The first kind of argument doesn't work because somebody exists either way: if the RH or whatever is false then you are one of a small number, and if it's true then you are one of a large number; you are in a typical position either way, and the other situation simply isn't possible. But the second kind of argument seems to hold more merit: if the great filter is behind us then you are part of the extreme minority of normal humans, but if the great filter is ahead then you are rather typical of intelligent lifeforms. This might count as evidence, and it seems to be the same kind of evidence which suggests that a great filter even exists in the first place: if it doesn't, then we are very exceptional, not only in being the very first humans but the very first intelligent life as well.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Anthropic answers to logical uncertainties? · 2019-09-15T18:49:42.748Z · LW · GW

That's the funniest thing I've seen all day.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Troll Bridge · 2019-08-24T07:15:37.741Z · LW · GW

Seems to me that if an agent with a reasonable heuristic for logical uncertainty came upon this problem, and was confident but not certain of its own consistency, it would simply cross, because the expected utility would be above zero, which is a reason that doesn't betray an inconsistency. (Besides, if it survived, it would have good third-party validation of its own consistency, which would probably be pretty useful.)
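Roughly, with the payoffs from the post and p the agent's credence that it is inconsistent (and so gets blown up):

$$
EU(\text{cross}) = p\,(-10) + (1-p)\,(+10) = 10 - 20p > 0 \quad \text{for } p < \tfrac{1}{2},
$$

versus 0 for staying put.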

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces · 2019-08-24T04:48:15.823Z · LW · GW

Regarding your comments on SPECKS being preferable to TORTURE, I think that misses the argument they made. The reason you have to prefer 10N at X to N at X' at some point is that a speck counts as a level of torture. That's exactly what the OP was arguing against.

Comment by Andrew Jacob Sauer (andrew-jacob-sauer) on Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces · 2019-08-24T04:46:21.274Z · LW · GW

Non-Archimedean utility functions seem kind of useless to me. Since no action is going to avoid moving the probability of any outcome by more than 1/3^^^3, absolutely any action is important only insomuch as it impacts the highest lexical level of utility. So you might as well just call that your utility function.