Posts

Simulating Problems 2013-01-30T13:14:28.476Z
Can anyone explain to me why CDT two-boxes? 2012-07-02T06:06:36.198Z
Another Iterated Prisoner's Dilemma Tournament? 2012-05-25T14:16:30.529Z
Fixed-Length Selective Iterative Prisoner's Dilemma Mechanics 2011-09-13T03:24:59.358Z

Comments

Comment by Andreas_Giger on Open Thread, October 7 - October 12, 2013 · 2013-10-08T04:24:19.106Z · LW · GW

There used to be a thread on LW that dealt with interesting ways to make small sums of money and ways to reduce expenditure. I think among other things going to Australia for a year was discussed. Does anyone know which thread I'm talking about? If so, could you provide the link? I can't seem to find it.

Comment by Andreas_Giger on The Ultimate Sleeping Beauty Problem · 2013-10-02T08:54:59.950Z · LW · GW

The "many more days that include them" is the 3^n part in my expression that is missing from any per day series. This 3^n is the sum of all interviews in that coin flip sequence ("coin flip sequence" = "all the interviews that are done because one coin flip showed up tails", right?) and in the per day (aka per interview) series the exact same sum exists, just as 3^n summands.

In both cases, the weight of the later coin flip sequences increases, because the number of interviews (3^n) increases faster than the probabilistic weight of the coin flip (1/2^n) decreases.

However, this doesn't mean that there exists no Cesàro sum. In fact the existence of such a sum can be proven for my original expression, because the quotient of the last two numerators (if we include both odd and even coin flips) of the isomorphic series is always 3:1, regardless of whether the last coin flip was even or odd. (The same thing can be said for the quotient of the last 3^n and 3^(n-1) summands of your series. Basically, the per-day series is just a dragged-out per-coin-flip series.)

The reason my estimate for the Cesàro sum is 0.5 is that if we express that quotient so that the same coin state is always written first, it alternates between 3:1 and 1:3, which averages out to 1:1, i.e. 0.5. Obviously this is not exact maths, but it's a good way to get a quick estimate. (Alternatively, you could intuitively infer that if a Cesàro sum exists, it must be 0.5, because whether you look for even or odd coin flips becomes increasingly irrelevant as the series approaches infinity.)

Also, since I haven't previously touched upon the subject of the isomorphic series: If we call my original expression f, then we can construct a function g where g(n) = f(n)-f(n-1) with f(-1) = 0, and a series a = g(0) + g(1) + g(2) + ...

Does that all make sense?

Comment by Andreas_Giger on Open Thread, September 30 - October 6, 2013 · 2013-10-02T01:46:39.966Z · LW · GW

I hold the belief that Newcomb, regardless of Omega's accuracy, is impossible in the universe I currently live in. Also, this is not what this discussion is about, so please refrain from derailing it further.

Comment by Andreas_Giger on Open Thread, September 30 - October 6, 2013 · 2013-10-02T01:42:02.845Z · LW · GW

I think people have slightly misunderstood what I was referring to with this:

  • There exist no problems shown to be possible in real life for which CDT yields superior results.
  • There exists at least one problem shown to be possible in real life for which TDT yields superior results.

My question was whether there is a conclusive, formal proof for this, not whether this is widely accepted on this site (I already realized TDT is popular). If someone thinks such a proof is given somewhere in an article (this one?), then please direct me to the point in the article where I can find that proof. I'm very suspicious about this though, since the wiki makes blatantly false claims, e.g. that TDT performs better in one-shot PD than CDT, while in fact it can only perform better if access to source code is given. So the wiki article feels more like promotion than anything else.

Also, I would be very interested to hear about what kind of reaction from the scientific community TDT has received. Like, very very interested.

Comment by Andreas_Giger on The Ultimate Sleeping Beauty Problem · 2013-10-02T00:15:10.130Z · LW · GW

I take it that my approach was not discussed in the heated debate you had? Because it seems a good exercise for grad students.

Also, I don't understand why you think a per-interview series would yield fundamentally different results from a per-coin-toss series. I'd be interested in your reports after you (or your colleagues) have done the math.

Comment by Andreas_Giger on The Ultimate Sleeping Beauty Problem · 2013-10-01T22:04:54.262Z · LW · GW

I could have said that the beauty was simulated floor(5^x) times where x is a random real between 0 and n

Ah, I see now what you mean. Disregarding this new problem for the moment, you can still formulate my original expression on a per-interview basis, and it will still have the same Cesàro sum because it still diverges in the same manner; it just does so more continuously. If you envision a graph of an isomorphic series of my original expression, it will have "saw teeth" where it alternates between even and odd coin flips, and if you formulate that series on a per-interview basis, those saw teeth just get increasingly longer, which has no impact on the Cesàro sum (because the series alternates between those saw teeth).

Concerning your new problem, it can still be expressed as a series with a Cesàro sum; it's just a lot more complicated. If I were you, I'd first try to find the simplest variant of that problem with the same properties. Still, the fact that this is solvable in an analogous way should be clear, because you can essentially solve the "floor(5^x) times where x is a random real between 0 and n" part with a series for x (similar to the one for the original problem) and then have a series of those series for n. Basically you're adding another dimension (or recursion level), but not doing anything fundamentally different.

Comment by Andreas_Giger on The Ultimate Sleeping Beauty Problem · 2013-10-01T21:00:46.915Z · LW · GW

What do you mean by "time" in this case? It sounds like you want to interrupt the interviews at an arbitrary point even though Beauty knows that interviews are quantised in a 3^n fashion.

Comment by Andreas_Giger on The Ultimate Sleeping Beauty Problem · 2013-10-01T20:40:52.081Z · LW · GW

(1/2 * 3^0 + 1/8 * 3^2 + ...) / (1/2 * 3^0 + 1/4 * 3^1 + 1/8 * 3^2 + ...)

... which can be transformed into an infinite series with a Cesàro sum of 0.5, so that's my answer.
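
A quick numeric check of that 0.5 claim (my own sketch in Python, not part of the original comment; it uses floats rather than exact fractions and assumes the numerator collects the even-indexed terms of the expression above). The partial ratios f(0), ..., f(N) are the partial sums of the transformed series, so averaging them should tend towards its Cesàro sum:

    def f(N):
        num = sum(3**n / 2**(n + 1) for n in range(N + 1) if n % 2 == 0)
        den = sum(3**n / 2**(n + 1) for n in range(N + 1))
        return num / den

    for N in (10, 100, 1000):
        partial_ratios = [f(n) for n in range(N + 1)]
        print(N, sum(partial_ratios) / len(partial_ratios))

    # f(N) oscillates between roughly 0.6 (even N) and 0.4 (odd N), and the
    # running average of the partial ratios tends towards 0.5 as N grows.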

Comment by Andreas_Giger on Open Thread, September 30 - October 6, 2013 · 2013-10-01T19:58:38.280Z · LW · GW

Parfit's hitchhiker looks like a thinly veiled Omega problem to me. At the very least, considering the lack of scientific rigour in Ekman's research, it should count as quite dubious, so adopting a new decision theory on the basis of that particular problem does not seem rational to me.

Comment by Andreas_Giger on Open Thread, September 30 - October 6, 2013 · 2013-10-01T19:43:17.406Z · LW · GW

I don't like the notion of using different decision theories depending on the situation, because the very idea of a decision theory is that it is consistent and comprehensive. Now if TDT were formulated as a plugin that seamlessly integrated into CDT in such a way that the resulting decision theory could be applied to any and all problems and would always yield optimal results, then that would be a reason for me to learn about TDT. However, from what I've gathered, this doesn't seem to be the case?

Comment by Andreas_Giger on Open Thread, September 30 - October 6, 2013 · 2013-10-01T11:07:48.727Z · LW · GW

I saw this post from EY a while ago and felt kind of repulsed by it:

I no longer feel much of a need to engage with the hypothesis that rational agents mutually defect in the oneshot or iterated PD. Perhaps you meant to analyze causal-decision-theory agents?

Never mind the factual shortcomings; I'm mostly interested in the rejection of CDT as rational. I've been away from LW for a while and wasn't keeping up on the currently popular beliefs on this site, and I'm considering learning a bit more about TDT (or UDT or whatever the current iteration is called). I have a feeling this might be a huge waste of time though, so before I dive into the subject I would like to confirm that TDT has objectively been proven to be clearly superior to CDT, by which I (intuitively) mean:

  • There exist no problems shown to be possible in real life for which CDT yields superior results.
  • There exists at least one problem shown to be possible in real life for which TDT yields superior results.

"Shown to be possible in real life" excludes Omega, many-worlds, or anything of similar dubiousness. So has this been proven? Also, is there any kind of reaction from the scientific community in regards to TDT/UDT?

Comment by Andreas_Giger on Making Fun of Things is Easy · 2013-09-28T19:32:51.286Z · LW · GW

How many people actually did the exercises katydee suggested? I know I didn't.

I did, but I don't think people realised it.

Comment by Andreas_Giger on Why aren't there more forum-blogs like LW? · 2013-09-28T10:26:50.584Z · LW · GW

There are forums with popular blog sections, e.g. teamliquid.net, which also features a wiki. There are also forums that treat top-level posts differently, e.g. by displaying them prominently at the top of each thread page. None of this is really new.

On the other hand, I feel that in some regards LW is too different from traditional forums. For example, threads are sorted by OP time rather than by the time of the last reply, which makes it very difficult to have sustained discussions: threads stay hot for a few days, but afterwards people simply stop replying, and at best you have two or three people continuing to post without anyone else reading what they write.

Comment by Andreas_Giger on Intelligence Amplification and Friendly AI · 2013-09-28T00:51:21.206Z · LW · GW

You should probably edit your post then, because it currently suggests an IQ-atheism correlation that just isn't supported by the cited article.

Comment by Andreas_Giger on Intelligence Amplification and Friendly AI · 2013-09-27T23:27:44.393Z · LW · GW

Where in the linked article does it say that atheism correlates with IQ past 140? I cannot find this.

Comment by Andreas_Giger on A game of angels and devils · 2013-09-27T23:12:11.026Z · LW · GW

The current education system in Europe does a much better job at making education unpopular than at actually preventing those who may positively impact technology and society in the future from acquiring the necessary education to do so. Turning education into a chore is merely an annoyance for everyone involved, but doesn't actually hold back technological advance in any way.

If I were the devil, I would try to restrict internet access for as many people as possible. As long as you have internet access, traditional education isn't really needed for humanity to advance technologically.

Also, does the devil win if humanity goes extinct? Because in that case I would instead try to make the best education available for free to all children, and focus on getting a few Satanists into positions where you get to push red buttons. Since the devil is traditionally depicted as persuasive and manipulative to the point that intelligent and well-educated people tend to be more receptive to his offers than normal folk, that shouldn't be too much of a problem. Just imagine a few Hitlers with modern nuclear ICBMs.

Hanlon’s razor: “Never attribute to malice that which is adequately explained by stupidity.”

Never heard of Hanlon's razor before, but I think it makes much more sense if you replace stupidity with indifference.

Comment by Andreas_Giger on Making Fun of Things is Easy · 2013-09-27T22:20:29.305Z · LW · GW

I'm not sure if this post is meant to be taken seriously. It's always "easy" to make fun of X; what's difficult is to spread your opinion about X by making fun of X. Obviously this requires a target audience that doesn't already share your opinion about X, and if you look at people making fun of things (e.g. on the net), usually the audience they're catering to already shares their views. This is because the most common objective of making fun of things is not to convince people of anything, but to create a group identity, raise team morale, and so on. There is no point in talking about the difficulty of that, because there isn't any.

Someone would have to be very susceptible to be influenced by people making fun of things. I guess rationality doesn't have all that much to do with how influenceable you are, but this post strikes me as overly naïve concerning the intentions of people. If someone makes fun of X, they're clearly not interested in an objective discussion about X, so why would you be swayed by their arguments?

Whether or not people are making fun of it is not necessarily a good signal as to whether or not it's actually good.

Gee, you think?

Comment by Andreas_Giger on Prisoner's Dilemma vs the Afterlife · 2013-09-27T21:44:39.164Z · LW · GW

I have a dream that one day, people will stop bringing up the (Iterated) Prisoner's Dilemma whenever decisions involve consequences. IPD is a symmetrical two-player game with known payouts, rational agents, and no persistent memory (in tournaments). Real life is something completely different, and equating TFT with superficially similar real life strategies is just plain wrong.

The possibility of the existence of immortality/afterlife/reincarnation certainly affects how people behave in certain situations; this is hardly a revelation. Running PD-like simulations with the intent of gaining insight into the real-life behaviour of humans in society is a bad idea usually proposed by people who don't know much about game theory but like some of the terms commonly associated with PD.

Please stop using the words "cooperate" and "defect" as if they referred to comparable things in real life and in PD. It will make you much less confused.

I don't have a problem with the proposition of adding uncertainty about the match length to IPD, and it is hardly a new idea. Just please don't talk about PD/IPD when you're talking about real life and vice versa, and don't make inferences about one based on the other.

Comment by Andreas_Giger on Prisoner's dilemma tournament results · 2013-07-12T15:03:54.908Z · LW · GW

I think there would be more people interested in playing if strategies could be submitted in pseudocode, so that would be great.

Comment by Andreas_Giger on Prisoner's dilemma tournament results · 2013-07-12T14:58:49.207Z · LW · GW

Am I the only one who sees a problem in that we're turning a non-zero-sum game into a winner-take-all tournament? Perhaps instead of awarding a limited resource like bitcoins to the "winner", each player should be awarded an unlimited resource such as karma or funny cat pictures according to their strategy's performance.

Comment by Andreas_Giger on Prisoner's dilemma tournament results · 2013-07-11T01:39:36.236Z · LW · GW

Considering this was an experimental tournament, learning how certain strategies perform against others seems far more interesting to me than winning, and I can't imagine any strategy I would label as a troll submission. Even strategies solely designed to be obstacles are valid and valuable contributions, and the fact that random strategies skew the results is a fault of the tournament rules and not of the strategies themselves.

Comment by Andreas_Giger on Prisoner's dilemma tournament results · 2013-07-09T21:38:58.189Z · LW · GW

A tournament like this would be much more interesting if it involved multiple generations. Here, the results heavily depended upon the pool of submitted strategies, regardless of their actual competitiveness, while a multiple-generations tournament would measure success as performance against other successful strategies.
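
To make the idea concrete, here is a minimal sketch of such a setup (my own toy code with made-up payoffs and example strategies, not the actual tournament rules): each generation, a strategy's fitness is its average score against the current population mix, and population shares are then updated in proportion to fitness.

    import random

    PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
              ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(own_history, their_history):
        return their_history[-1] if their_history else 'C'

    def always_defect(own_history, their_history):
        return 'D'

    def random_move(own_history, their_history):
        return random.choice('CD')

    def average_score(strategy_a, strategy_b, rounds=100):
        history_a, history_b, score_a = [], [], 0
        for _ in range(rounds):
            move_a = strategy_a(history_a, history_b)
            move_b = strategy_b(history_b, history_a)
            score_a += PAYOFF[(move_a, move_b)][0]
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a / rounds

    def next_generation(shares, strategies):
        # fitness = expected score against the current population mix
        fitness = {}
        for name, strategy in strategies.items():
            fitness[name] = sum(shares[other] * average_score(strategy, strategies[other])
                                for other in strategies)
        total = sum(shares[name] * fitness[name] for name in strategies)
        return {name: shares[name] * fitness[name] / total for name in strategies}

    strategies = {'TFT': tit_for_tat, 'AllD': always_defect, 'Random': random_move}
    shares = {name: 1 / len(strategies) for name in strategies}
    for _ in range(30):
        shares = next_generation(shares, strategies)
    print(shares)

With these particular payoffs, the random and always-defect strategies fade out over the generations and tit-for-tat takes over, which is the sense in which success ends up meaning performance against other successful strategies.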

Comment by Andreas_Giger on [suggestion] New Meetup Tab · 2013-02-04T04:46:18.667Z · LW · GW

What do you think about this?

Let's find out!

[pollid:402]

Comment by Andreas_Giger on Pinpointing Utility · 2013-02-03T01:29:18.121Z · LW · GW

Indeed, already figured that out here.

Comment by Andreas_Giger on Rationality Quotes February 2013 · 2013-02-02T15:47:46.032Z · LW · GW

Can't make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.

Comment by Andreas_Giger on Rationality Quotes February 2013 · 2013-02-02T15:45:27.260Z · LW · GW

I think that's mostly because money is too abstract, and as long as you get by you don't even realize what you've lost. Survival is much more real.

Comment by Andreas_Giger on Rationality Quotes February 2013 · 2013-02-02T15:40:32.987Z · LW · GW

You don't "judge" a book by its cover; you use the cover as additional evidence to more accurately predict what's in the book. Knowing what the publisher wants you to assume about the book is preferable to not knowing.

Comment by Andreas_Giger on Naturalism versus unbounded (or unmaximisable) utility options · 2013-02-02T15:28:12.128Z · LW · GW

You can't calculate utilities anyway; there's no reason to assume that u(n days) should be 0.5 * (u(n+m days) + u(n-m days)) for any n or m. If you want to include immortality, you can't assign utilities linearly, although you can get arbitrarily close by picking a factor higher than 0.5, as long as it's < 1.

Comment by Andreas_Giger on Rationality Quotes February 2013 · 2013-02-02T04:29:35.081Z · LW · GW

Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they'll figure it out by themselves. No one really believes God will protect them from harm...

Comment by Andreas_Giger on The Blue-Minimizing Robot · 2013-02-02T04:15:23.339Z · LW · GW

A really smart 'shoot lasers at "blue" things' robot will shoot at blue things if there are any, and will move in a programmed way if there aren't. All its actions are triggered by the situation it is in; and if you want to make it smarter by giving it an ability to better distinguish actually-blue from blue-looking things, then any such activity must be triggered as well. If you program it to shoot at projectors that project blue things, it won't become smarter; it will just shoot at some non-blue things. If you paint it blue and put a mirror in front of it, it will shoot at itself, and if you program it to not shoot at blue things that look like itself, it won't become smarter; it will just shoot at fewer blue things. If anything it shoots at doesn't cease to be blue, or you give it a blue laser or camera lens, it will just continue shooting, because it doesn't care about blue things or shooting; it just shoots when it sees blue. It certainly won't create blue things to shoot at.

A really dumb 'minimize blue' robot with a laser will shoot at anything blue it sees, but if shooting at something doesn't make it stop being blue, it will stop shooting at it. If there's nothing blue around, it will search for blue things. If you paint it blue and put a mirror in front of it, it will shoot at itself. If you give it a blue camera lens, it will shoot at something, stop shooting, shoot at something different, stop shooting, move around, shoot at something, stop shooting, etc., and eventually stop moving and shooting altogether and weep. If instead of the camera lens you give it a blue laser, it will become terribly confused.
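
The structural difference can be summed up in a few lines of toy code (my own sketch, obviously not from the original post): the reflex robot maps its current percept straight to an action, while the 'minimize blue' robot also conditions on whether its last action actually reduced the amount of blue.

    def reflex_robot_step(sees_blue):
        # the action is triggered purely by the current percept;
        # there is no notion of success or failure
        return 'shoot' if sees_blue else 'patrol'

    def minimizer_robot_step(sees_blue, last_action, blue_decreased):
        # the action also depends on whether the last action changed anything
        if not sees_blue:
            return 'search'
        if last_action == 'shoot' and not blue_decreased:
            return 'stop_shooting'  # shooting didn't help, so give up on this target
        return 'shoot'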

Comment by Andreas_Giger on Naturalism versus unbounded (or unmaximisable) utility options · 2013-02-02T00:25:36.551Z · LW · GW

Actually, it seems you can solve the immortality problem in ℝ after all; you just need to do it counterintuitively: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc., immortality is 2, and then you can add quality. Not very surprising in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|.
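
Spelled out as a closed form (my own phrasing of the same assignment):

    u(n days) = 2 - 2^(1-n), so u(1 day) = 1, u(2 days) = 1.5, u(3 days) = 1.75, ...
    u(immortality) = 2, which is the limit of u(n days) as n goes to infinity.

Every finite lifetime stays strictly below 2, longer is always strictly better, and the whole thing fits into ℝ with room left over for quality adjustments.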

Comment by Andreas_Giger on Naturalism versus unbounded (or unmaximisable) utility options · 2013-02-02T00:05:19.732Z · LW · GW

This isn't a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions.

I believe it's actually a problem about how to do utility-maximising when there's no maximum utility, like the other problems. It's easy to find examples of problems in which there are infinitely many decisions as well as a maximum utility, and none of those I came up with are in any way paradoxical or even difficult.

Comment by Andreas_Giger on Pinpointing Utility · 2013-02-01T23:55:19.915Z · LW · GW

Yes, I am aware of that. The biggest trouble, as you have elaborately explained in your post, is that people think they can perform mathematical operations in VNM-utility-space to calculate utilities they have not explicitly defined in their system of ethics. I believe Eliezer has fallen into this trap; the sequences are full of that kind of thinking (e.g. torture vs dust specks), and while I realize it's not supposed to be taken literally, "shut up and multiply" is symptomatic.

Another problem is that you can only use VNM when talking about complete world states. A day where you get a tasty sandwich might be better than a normal day, or it might not be, depending on the world state. If you know there's a wizard who'll give you immortality for $1, you'll choose $1 over any probability < 1 of $2, and if the wizard wants $2, the opposite applies.

VNM isn't bad, it's just far, far, far too limited. It's somewhat useful when probabilities are involved, but otherwise it's literally just the concept of well-ordering your options by preferability.

Assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real number.

Turns out this is not actually true: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc, immortality is 2, and then you can add quality. Not very surprising in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|. Still, I'm pretty sure the set of all possible world states is of higher cardinality than ℝ, so...

(Also, it's a good illustration of why simply assigning utility to 1 day of life and then scaling up is not a bright idea.)

Comment by Andreas_Giger on Naturalism versus unbounded (or unmaximisable) utility options · 2013-02-01T22:13:02.700Z · LW · GW

This is a very good post. The real question that has not explicitly been asked is the following:

How can utility be maximised when there is no maximum utility?

The answer of course is that it can't.

Some of the ideas that are offered as solutions or approximations of solutions are quite clever, but because for any agent you can trivially construct another agent that performs better, and there is no metric other than utility itself for determining how much better one agent is than another, solutions aren't even interesting here. Trying to find limits such as storage capacity or computing power is only avoiding the real problem.

These are simply problems that have no solutions, just as the problem of finding the largest integer has no solution. You can get arbitrarily close, but that's it.

And while I'm at it, let me quote another limitation of utility I very recently wrote about in a comment to Pinpointing Utility:

Assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real number.

Comment by Andreas_Giger on Pinpointing Utility · 2013-02-01T21:14:09.931Z · LW · GW

Do you mean it's not universally solvable in the sense that there is no "I always prefer the $1"-type solution? Of course there isn't. That doesn't break VNM, it just means you aren't factoring outcomes properly.

That's what I mean, and while it doesn't "break" VNM, it means I can't apply VNM to situations I would like to, such as torture vs dust specks. If I know the utility of 1000 people getting dust specks in their eyes, I still don't know the utility of 1001 people getting dust specks in their eyes, except that it's probably higher. I can't quantify the difference between 49 and 50 years of torture, which means I have no idea whether it's less than, equal to, or greater than the difference between 50 and 51 years. Likewise, I have no idea how much I would pay to avoid one dust speck (or 1000 dust specks), because there's no ratio of u($) to u(dust speck), and I have absolutely no concept of how to compare dust specks with torture, and even if I had, it wouldn't be scalable.

Comment by Andreas_Giger on Naturalism versus unbounded (or unmaximisable) utility options · 2013-02-01T20:47:53.150Z · LW · GW

You're taking this too literally. The point is that you're immortal, u(day in heaven) > u(day in neither heaven nor hell) > u(day in hell), and u(2 days in heaven and 1 day in hell) > u(3 days in neither heaven nor hell).

You don't even need hell for this sort of problem; suppose God offers to let you either cash in on your days in heaven (0 at the beginning) right now, or wait a day, after which he will add 1 day to your bank and offer you the same deal again. How long will you wait? What if God halved the additional time with each deal, so you couldn't even spend 2 days in heaven, but could get arbitrarily close to it?
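
For concreteness (my own arithmetic, not part of the original comment): under the halving variant, accepting k deals banks

    1 + 1/2 + 1/4 + ... + 1/2^(k-1) = 2 - 1/2^(k-1) < 2 days in heaven,

so waiting longer always strictly improves your bank, yet no finite stopping point ever reaches 2 days.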

Comment by Andreas_Giger on If it were morally correct to kill everyone on earth, would you do it? · 2013-02-01T16:44:17.020Z · LW · GW

That's not bias, it's subjective morals.

Comment by Andreas_Giger on Pinpointing Utility · 2013-02-01T16:40:17.881Z · LW · GW

Most of [?] agree that the VNM axioms are reasonable

My problem with VNM-utility is that while in theory it is simple and elegant, it isn't applicable to real life because you can only assign utility to complex world states (a non-trivial task) and not to limited outcomes. If you have to choose between $1 and a 10% chance of $2, then this isn't universally solvable in real life because $2 doesn't necessarily have twice the value of $1, so the completeness axiom doesn't hold.

Also, assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real number.

Neither of these problems is solved by replacing utility with awesomeness.

Comment by Andreas_Giger on Simulating Problems · 2013-02-01T14:27:24.675Z · LW · GW

I'm not sure what you mean by an infinite transition system. Are you referring to circular causality such as in Newcomb, or to an actually infinite number of states such as a variant of Sleeping Beauty in which on each day the coin is tossed anew and the experiment only ends once the coin lands heads?

Regardless, I think I have already disproven the conjecture I made above in another comment:

Omega predicting an otherwise irrelevant random factor such as a fair coin toss can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence can easily be proven because regardless of whether we allow for backwards causality and whatnot, a fair coin is always fair and even if we assume that Omega may be wrong, the probability of error must still be the same for either side of the coin, so in the end Omega is exactly as random as the coin itself no matter Omega's actual accuracy.

Comment by Andreas_Giger on Simulating Problems · 2013-01-31T22:11:15.964Z · LW · GW

You mean, if an agent loses money. And that's the point; if the only thing you know is that an agent loses money in a simulation of poker, how can you prove the same is true for real poker?

Comment by Andreas_Giger on The Zeroth Skillset · 2013-01-31T16:52:26.895Z · LW · GW

I can attest that I have personally saved the lives of friends on two occasions thanks to good situational awareness, and have saved myself from serious injury or death many times more.

It is not my impression that I lead a very dangerous life.

These two statements seem contradictory to me. Maybe you ought to specify what you mean by "saved from death". If I consider crossing the street, notice an approaching car, and proceed to not cross the street until the car has passed, did I just save myself from death? Describing the particular incidents and pointing out exactly how SA helped you to stay alive where others would have died would be much more convincing.

Comment by Andreas_Giger on If it were morally correct to kill everyone on earth, would you do it? · 2013-01-31T16:29:23.625Z · LW · GW

If you would oppose an AI attempting to enforce a CEV that would be detrimental to you, but still classify it as FAI and not evil, then wouldn't that make you evil?

Obviously this is a matter of definitions, but it still seems to be the logical conclusion.

Comment by Andreas_Giger on Singularity Institute is now Machine Intelligence Research Institute · 2013-01-31T15:34:51.535Z · LW · GW

Looks like an attempt to get rid of the negative image associated with the name Singularity Institute. I wonder if it isn't already too late to take PR seriously.

Comment by Andreas_Giger on Simulating Problems · 2013-01-31T15:12:41.106Z · LW · GW

I'm not sure I understand what you mean by 'failing' in regard to simulations. Could you elaborate?

Comment by Andreas_Giger on Essay-Question Poll: Dietary Choices · 2013-01-31T15:07:43.977Z · LW · GW

I think the word you're looking for is pet -- the standard meaning of domesticated also includes livestock, whose meat, if anything, I guess is seen as less ethically problematic than game by many people. (From your username, I'm guessing you're not a native speaker. FWIW, neither am I.)

You're right, it's not exactly a matter of domestication, but it's not only pets, either; horses fall into that category just as well. As I said, it's too fuzzy and arbitrary.

You know, you could decide not to eat certain kinds of meat for reasons other than “taboo”; for example, that it's too expensive (either in terms of money or of energy) or that you don't like the way it tastes or for signalling reasons or for health reasons or because you'd be uncomfortable with the idea of eating it for purely emotional reasons or whatever. Just because oysters don't feel pain doesn't mean I'm obligated to eat them, if I know better ways to spend my money or if I prefer the taste of different food.

But that's exactly the point: I was deliberately looking to find some general system that would allow me to classify food into two categories. Of course I don't eat something I don't like or that's otherwise undesirable if it can be avoided; that's not the issue here. This is purely about the moral part, and the problem is that there's some meat I have moral objections to eating, and other meat I don't, and there's a very slippery slope in between. If I object to eating human meat, where's the watershed? How about the Homo sapiens species in general, such as the extinct subspecies H. sapiens idaltu? How about other species of the Homo genus? Apes? Monkeys? Aliens?

A collection of ad-hoc rules isn't a system of ethics.

Comment by Andreas_Giger on Essay-Question Poll: Dietary Choices · 2013-01-31T08:23:51.841Z · LW · GW

Last but not least, I started it out of curiosity, in order to obtain answers to specific questions about vegetarians' decision procedures; that's what I'm still interested in learning about

If you're really still interested in this...

I started my vegetarian diet shortly after I decided to adopt some definite policy in terms of which kinds of meat were OK to eat and which were not, because the common policy of excluding all meat from domesticated animals such as cats and dogs was too fuzzy for me. I experimented with different Schelling points for a while, but it all seemed very arbitrary, even the Schelling point right between humans and non-human animals, so I decided I had to either taboo all kinds of meat, or none.

Then it occurred to me that there were some people around me I quite liked and really wouldn't want to eat or see eaten, so I'd have needed a Schelling point anyway to determine which humans were fair game and which were not, and a very subjective one at that, and that was when I settled on vegetarianism.

A year or so thereafter I considered veganism for a while, but it restricted my options too much and I was actually quite happy with the Schelling point I had established, so that experiment was abandoned quickly.

Perhaps the whole thing becomes more understandable if I say that at the time I was generally aiming for more intrinsic consistency, and I also regarded religious people who actually lived their lives according to their beliefs much more highly than lukewarm atheists who read horoscopes. In a way, my switch to vegetarianism was a side effect of my effort to develop a unified personal system of ethics.

None of this is related to human or animal suffering in any way, I'm afraid.

Comment by Andreas_Giger on Simulating Problems · 2013-01-31T06:56:39.318Z · LW · GW

If you substitute Omega with a repeated coin toss, there is no Omega, and there is no concept of Omega being always right. Instead of repeating the problem, you can also run several instances of the simulation with several agents simultaneously, and only count those instances in which the prediction matches the decision.

For this simulation, it is completely irrelevant whether the multiple agents are actually identical human beings, as long as their decision-making process is identical (and deterministic).
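
A minimal sketch of that filtering procedure (my own code; the payoffs are the usual Newcomb numbers, which is my assumption about the problem being simulated): the "prediction" is literally a coin toss, and every instance in which the toss fails to match the agent's decision is discarded, so within the surviving instances the predictor is right by construction.

    import random

    def simulate(decide, runs=100000):
        payoffs = []
        for _ in range(runs):
            prediction = random.choice(['one-box', 'two-box'])  # the coin toss
            decision = decide()
            if prediction != decision:
                continue  # discard instances where the "prediction" is wrong
            box_b = 1000000 if prediction == 'one-box' else 0
            payoffs.append(box_b if decision == 'one-box' else box_b + 1000)
        return len(payoffs), sum(payoffs) / len(payoffs)

    print(simulate(lambda: 'one-box'))   # about half the runs survive, average ~1,000,000
    print(simulate(lambda: 'two-box'))   # about half the runs survive, average ~1,000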

Comment by Andreas_Giger on If it were morally correct to kill everyone on earth, would you do it? · 2013-01-31T06:31:28.078Z · LW · GW

This is yet another poorly phrased, factually inaccurate post containing some unorthodox viewpoints that are unlikely to be taken seriously because people around here are vastly better at deconstructing others' arguments than fixing them for them.

Ignoring any formal and otherwise irrelevant errors such as what utilitarianism actually is, I'll try to address the crucial questions, both to make Bundle_Gerbe's viewpoints more accessible to LW members and to make it clearer to him why they're not as obvious as he seems to think.

1: How does creating new life compare to preserving existing life in terms of utility or value?

Bundle_Gerbe seems to be of the view that they are of identical value. That's not a view I share, mostly because I don't assign any value to the creation of new life, but I must admit that I am somewhat confused (or undecided) about the value of existing human life, both in general and as a function of parameters such as remaining life expectancy. Maybe there's some kind of LW consensus I'm not aware of, but the whole issue seems like a matter of axioms to me rather than anything that could objectively be inferred from some sort of basic truth.

2: If creation of life has some positive value, does this value increase if creation happens earlier?

Not a question relevant to me, but it seems that this would partly depend on whether earlier creation implied a higher total number of lives, or just earlier saturation, for example because humans live forever and ultimately the only constraint will be space. I'm not entirely certain I correctly understand Bundle_Gerbe's position on this, but it seems that his utility function is actually based on total lifetime as opposed to the actual number of human lives, meaning that two humans existing for one second each would be equivalent to one human existing for two seconds. That's kind of an interesting approach with lots of implied questions, such as whether travelling at high speeds would reduce value because of relativistic effects.

3: Is sacrificing personal lifetime to increase humanity's total lifetime a good idea?

If your utility function is based on humanity's total lifetime, and you're completely altruistic, sure. Most people don't seem to be all that altruistic, though. If I had to choose between saving one or two human beings, I would choose the latter option, but I'd never sacrifice myself to save a measly two humans. I would be very surprised if CEV turned out to require my death after 20 years, and in fact I would immediately reclassify the FAI in question as UFAI. Sounds like an interesting setup for an SF story, though.

For what it's worth, I upvoted the post. Not because the case was particularly well presented, obviously, but because I think it's not completely uninteresting, and because I perceived some of the comments, such as Vladimir_Nesov's, which got quite a few upvotes, as rather unfair.

That being said, the title is badly phrased and not very relevant, either.

Comment by Andreas_Giger on Simulating Problems · 2013-01-31T04:09:13.308Z · LW · GW

That's because it's not, strictly speaking, a problem in GT/DT; it's a problem (or meta-problem, if you want to call it that) about GT/DT. It's not "which decision should agent X make", but "how can we prove that problems A and B are identical".

Concerning the matter of rudeness: suppose you write a post and however many comments about a mathematical issue, only for someone who doesn't even read what you write, and who says he has no idea what you're talking about, to conclude that you're not talking about mathematics. I find that rude.

Comment by Andreas_Giger on Simulating Problems · 2013-01-31T03:58:45.143Z · LW · GW

What do you mean by "analogous"?

I'm not surprised you don't understand what I'm asking when you don't read what I write.