Posts

Why We Disagree 2023-10-25T10:50:26.420Z
Look at the Shape of Your Utility Distribution 2019-08-30T23:27:16.326Z
Is LW making progress? 2019-08-24T00:32:31.600Z
Intransitive Preferences You Can't Pump 2019-08-09T23:10:36.650Z
Against Occam's Razor 2018-04-05T17:59:27.583Z
How I see knowledge aggregation 2018-02-03T10:31:25.359Z
Against Instrumental Convergence 2018-01-27T13:17:19.389Z

Comments

Comment by zulupineapple on An Orthodox Case Against Utility Functions · 2023-10-28T07:42:16.190Z · LW · GW

Maybe I should just let you tell me what framework you are even using in the first place.

I'm looking at the Savage theory from your own link, https://plato.stanford.edu/entries/decision-theory/, and I see U(f) = ∑_i u(f(s_i))P(s_i), so at least they have no problem with the domains (O and S) being different. Now I see the confusion is that to you Omega=S (and also O=S), but to me Omega=dom(u)=O.

Furthermore, if O={o_0, o_1}, then I can group the terms into u(o_0)P("we're in a state where f evaluates to o_0") + u(o_1)P("we're in a state where f evaluates to o_1"). This just moves all of the complexity out of EU and into P, which I assume to work by some magic (e.g. LI) that doesn't involve literally iterating over every possible S.
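Spelled out, the grouping I mean is just (same notation as the formula above):

```latex
U(f) \;=\; \sum_i u(f(s_i))\,P(s_i)
     \;=\; u(o_0)\,P(\{s : f(s)=o_0\}) \;+\; u(o_1)\,P(\{s : f(s)=o_1\})
```

so EU only ever needs the two numbers P({s : f(s)=o_0}) and P({s : f(s)=o_1}), however P chooses to compute them.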

We can either start with a basic set of "worlds" (eg, ) and define our "propositions" or "events" as sets of worlds <...>

That's just math-speak: you can define a lot of things as a lot of other things, but that doesn't mean that the agent is going to be literally iterating over infinite sets of infinite bit strings and evaluating something on each of them.

By the way, I might not see any more replies to this.

Comment by zulupineapple on An Orthodox Case Against Utility Functions · 2023-10-27T19:23:27.429Z · LW · GW

A classical probability distribution over Ω with a utility function understood as a random variable can easily be converted to the Jeffrey-Bolker framework, by taking the JB algebra as the sigma-algebra, and V as the expected value of U.

Ok, you're saying that JB is just a set of axioms, and U already satisfies those axioms. And in this construction an "event" really is a subset of Omega, and "updates" are just updates of P, right? Then of course U is not more general; I had the impression that JB was a more distinct and specific thing.
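If I've understood the construction, it's just conditional expectation (a sketch, nothing beyond what the quote says):

```latex
V(E) \;=\; \mathbb{E}[U \mid E] \;=\; \frac{\sum_{\omega \in E} U(\omega)\,P(\omega)}{P(E)} \qquad \text{for } P(E) > 0
```

and for disjoint E, F this V(E ∪ F) is the P-weighted average of V(E) and V(F), which is the JB averaging behavior.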

Regarding the other direction, my sense is that you will have a very hard time writing down these updates, and when it works, the code will look a lot like one with a utility function. But, again, the example in "Updates Are Computable" isn't detailed enough for me to argue anything. Although now that I look at it, it does look a lot like U(p) = 1 - p("never press the button").

events (ie, propositions in the agent's internal language)

I think you should include this explanation of events in the post.

construct 'worlds' as maximal specifications of which propositions are true/false

It remains totally unclear to me why you demand that a world be such a thing.

I'm not sure why you say Omega can be the domain of U but not the entire ontology.

My point is that if U has two output values, then it only needs two possible inputs. Maybe you're saying that if |dom(U)|=2, then there is no point in having |dom(P)|>2, and maybe you're right, but I feel no need to make such claims. Even if the domains are different, they are not unrelated; Omega is still in some way contained in the ontology.

I agree that we can put even more stringent (and realistic) requirements on the computational power of the agent

We could and I think we should. I have no idea why we're talking math, and not writing code for some toy agents in some toy simulation. Math has a tendency to sweep all kinds of infinite and intractable problems under the rug.

Comment by zulupineapple on An Orthodox Case Against Utility Functions · 2023-10-26T21:12:12.625Z · LW · GW

Answering out of order:

<...> then I think the Jeffrey-Bolker setup is a reasonable formalization.

Jeffrey is a reasonable formalization; it was never my point to say that it isn't. My point is only that U is also reasonable, and possibly equivalent or more general. That there is no "case against" it. Although, if you find Jeffrey more elegant or comfortable, there is nothing wrong with that.

do you believe that any plausible utility function on bit-strings can be re-represented as a computable function (perhaps on some other representation, rather than bit-strings)?

I don't know what "plausible" means, but no, that sounds like a very high bar. I believe that if there is at least one U that produces an intelligent agent, then utility functions are interesting and worth considering. Of course I believe that there are many such "good" functions, but I would not claim that I can describe the set of all of them. At the same time, I don't see why any "good" utility function should be uncomputable.

I think there is a good reason to imagine that the agent structures its ontology around its perceptions. The agent cannot observe whether-the-button-is-ever-pressed; it can only observe, on a given day, whether the button has been pressed on that day. |Omega|=2 is too small to even represent such perceptions.

I agree with the first sentence; however, Omega is merely the domain of U, and it does not need to be the entire ontology. In this case Omega={"button has been pressed", "button has not been pressed"} and P("button has been pressed" | "I'm pressing the button")~1. Obviously, there is also no problem with extending Omega with the perceptions, all the way up to |Omega|=4, or with adding some clocks.
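A minimal sketch of what I mean, with made-up numbers (the only point is that U needs just the two elements of Omega, while P can condition on richer propositions):

```python
# U is defined on the two-element Omega from above.
U = {"button has been pressed": 1.0, "button has not been pressed": 0.0}

# Toy conditional credences; in a real agent these come from whatever P actually is.
P = {
    ("button has been pressed", "I'm pressing the button"): 0.99,
    ("button has not been pressed", "I'm pressing the button"): 0.01,
    ("button has been pressed", "I'm not pressing the button"): 0.10,
    ("button has not been pressed", "I'm not pressing the button"): 0.90,
}

def expected_utility(action: str) -> float:
    # Sums over the two elements of Omega only; no iteration over "worlds".
    return sum(U[o] * P[(o, action)] for o in U)

# The agent compares the two acts:
print(expected_utility("I'm pressing the button"))      # 0.99
print(expected_utility("I'm not pressing the button"))  # 0.10
```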

We could expand the scenario so that every "day" is represented by an n-bit string.

If you want to force the agent to remember the entire history of the world, then you'll run out of storage space before you need to worry about computability. A real agent would have to start forgetting days, or keep some compressed summary of that history. It seems to me that Jeffrey would "update" the daily utilities into total expected utility; in that case, U can do something similar.

I can always "extend" a world with an extra, new fact which I had not previously included. IE, agents never "finish" imagining worlds; more detail can always be added

You defined U at the very beginning, so there is no need to send these new facts to U; it doesn't care. Instead, you are describing a problem with P, and it's a hard problem, but Jeffrey also uses P, so switching frameworks doesn't solve it.

>  ... set our model to be a list of "events" we've observed ...
I didn't understand this part.

If you "evaluate events", then events have some sort of bit representation in the agent, right? I don't clearly see the events in your "Updates Are Computable" example, so I can't say much and I may be confused, but I have a strong feeling that you could define U as a function on those bits, and get the same agent.

This is an interesting alternative, which I have never seen spelled out in axiomatic foundations.

The point would be to set U(p) = p("button has been pressed") and then decide to "press the button" by evaluating U(P conditioned on "I'm pressing the button") * P("I'm pressing the button" | "press the button"), where P is the agent's current belief, and p is a variable of the same type as P.
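A rough sketch of that, with all numbers hypothetical (p and P are just dictionaries here, standing in for whatever representation the agent's beliefs actually have):

```python
def U(p: dict) -> float:
    # U is evaluated on a belief p, which has the same type as the agent's current belief P.
    return p["button has been pressed"]

P_now = {"button has been pressed": 0.3}           # the agent's current belief P
P_if_pressing = {"button has been pressed": 0.99}  # P conditioned on "I'm pressing the button"
prob_pressing = 0.95                               # P("I'm pressing the button" | "press the button")

# Score of the act "press the button", as described above:
score = U(P_if_pressing) * prob_pressing
print(score)  # 0.9405
```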

Comment by zulupineapple on Superintelligence FAQ · 2023-10-26T19:04:41.390Z · LW · GW

If you actually do want to work on AI risk, but something is preventing you, you can just say "personal reasons", I'm not going to ask for details.

I understand that my style is annoying to some. Unfortunately, I have not observed polite and friendly people getting interesting answers, so I'll have to remain like that.

Comment by zulupineapple on Superintelligence FAQ · 2023-10-26T08:14:47.976Z · LW · GW

OK, there are many people writing explanations, but if all of them are rehashing the same points from the Superintelligence book, then there is not much value in that (and I'm tired of reading the same things over and over). Of course you don't need new arguments or new evidence, but it's still strange if there aren't any.

Anyone who has read this FAQ and others, but isn't a believer yet, will have some specific objections. But I don't think everyone's objections are unique; a better FAQ should be able to cover them, if their refutations exist to begin with.

Also, are you yourself working on AI risk? If not, why not? Is this not the most important problem of our time? Would EY not say that you should work on it? Could it be that you and he actually have wildly different estimates of P(AI doom), despite agreeing on the arguments?

As for Raemon, you're right, I probably misunderstood why he's unhappy with newer explanations.

Comment by zulupineapple on Superintelligence FAQ · 2023-10-25T18:10:00.762Z · LW · GW

Stampy seems pretty shallow, even more so than this FAQ. Is that what you meant by it not filling "this exact niche"?

By the way, I come from AGI safety from first principles, where I found your comment linking to this. Notably, that sequence says "My underlying argument is that agency is not just an emergent property of highly intelligent systems, but rather a set of capabilities which need to be developed during training, and which won’t arise without selection for it." which is reasonable and seems an order of magnitude more conservative than this FAQ, which doesn't really touch the question of agency at all.

Comment by zulupineapple on Why We Disagree · 2023-10-25T17:34:12.566Z · LW · GW

I'm talking specifically about discussions on LW. Of course in reality Alice ignores Bob's comment 90% of the time, and that's a problem in its own right. It would be ideal if people who have distinct information would choose to exchange that information.

I picked a specific and reasonably grounded topic, "x-risk", or "the probability that we all die in the next 10 years", which is one number, so not hard to compare, unless you want to break it down by cause of death. In contrived philosophical discussions, it can certainly be hard to determine who agrees on what, but I have a hunch that this is the least of the problems in those discussions.

A lot of things have zero practical impact, and that's also a problem in its own right. It seems to me that we're barely ever having "is working on this problem going to have practical impact?" type of discussions.

Comment by zulupineapple on Superintelligence FAQ · 2023-10-25T17:18:09.661Z · LW · GW

I want neither. I observe that Raemon cannot find an up-to-date introduction that he's happy with, and I point out that this is really weird. What I want is an explanation of this bizarre situation.

Is your position that Raemon is blind, and good, convincing explanations are actually abundant? If so, I'd like to see them, it doesn't matter where from.

Comment by zulupineapple on Superintelligence FAQ · 2023-10-25T15:37:08.517Z · LW · GW

"The world is full of adversarial relationships" is pretty much the weakest possible argument and is not going to convince anyone.

Are you saying that the MIRI website has convincing introductory explanations of AI risk, the kind that Raemon wishes he had? Surely he would have found them already? If there aren't any, then, again, why not?

Comment by zulupineapple on Superintelligence FAQ · 2023-10-25T10:30:41.950Z · LW · GW

If our relationship to them is adversarial, we will lose. But you also need to argue that this relationship will (likely) be adversarial.

Also, I'm not asking you to make the case here, I'm asking why the case is not being made on the front page of LW and on every other platform. Would that not help with advocacy and recruitment? No idea what "keeping up with current events" means.

Comment by zulupineapple on An Orthodox Case Against Utility Functions · 2023-10-25T09:04:48.394Z · LW · GW

I certainly don't evaluate my U on quarks. Omega is not the set of worlds, it is the set of world models, and we are the ones who decide what that model should be. In the "procrastination" example you intentionally picked a bad model, so it proves nothing (if the world only has one button we care about, then maybe |Omega|=2 and everything is perfectly computable).

Further on, it seems to me that if we set our model to be a list of "events" we've observed, then we get the exact thing you're talking about. However, you're imprecise and inconsistent about what an event is, how it's represented, and how many there are, so I'm not sure if that's supposed to make anything more tractable.

In general, asking questions about the domain of U (and P!) is a good idea, and something that all introductions to Utility lack. But the ease with which you abandon a perfectly good formalism is concerning. LI is cool, and it doesn't use U, but that's not an argument against U; at best you can say that U was not as useful as you'd hoped.

My own take is that the domain of U is the type of P. That is, U is evaluated on possible functions P. P certainly represents everything the agent cares about in the world, and it's also already small and efficient enough to be stored and updated in the agent, so this solution creates no new problems. 
 

Comment by zulupineapple on Superintelligence FAQ · 2023-10-25T08:33:29.974Z · LW · GW

Seems like a red flag. How can there not be a more up-to-date one? Is advocacy and recruitment not a goal of AI-risk people? Are they instrumentally irrational? What is preventing you from writing such a post right now?

Most importantly, could it be that people struggle to write a good case for AI-risk, because the case for it is actually pretty weak, when you think about it?

Comment by zulupineapple on "Stuck In The Middle With Bruce" · 2019-11-22T13:45:13.094Z · LW · GW

The link is broken. I was only able to find the article here, with the wayback machine.

Comment by zulupineapple on Noticing Frame Differences · 2019-10-04T17:57:09.356Z · LW · GW

In the examples, sometimes the problem is people having different goals for the discussion, sometimes it is having different beliefs about what kinds of discussions work, and sometimes it might be about almost object-level beliefs. If "frame" refers to all of that, then it's way too broad and not a useful concept. If your goal is to enumerate and classify the different goals and different beliefs people can have regarding discussions, that's great, but possibly too broad to make any progress.

My own frustration with this topic is lack of real data. Apart from "FOOM Debate", the conversations in your post are all fake. To continue your analogy in another comment, this is like doing zoology by only ever drawing cartoons of animals, without ever actually collecting or analyzing specimens. Good zoologists would collect many real discussions, annotate them, classify them, debate about those classifications, etc. They may also tamper with ongoing discussions. You may be doing some of that privately, but doing it publicly would be better. Unfortunately there seem to be norms against that.

Comment by zulupineapple on ozziegooen's Shortform · 2019-09-13T18:14:32.330Z · LW · GW

Making long term predictions is hard. That's a fundamental problem. Having proxies can be convenient, but it's not going to tell you anything you don't already know.

Comment by zulupineapple on Book Review: Secular Cycles · 2019-09-13T18:11:53.145Z · LW · GW

That's what I think every time I hear "history repeats itself". I wish Scott had considered the idea.

The biggest claim Turchin is making seems to be about the variance of the time intervals between "bad" periods. A random walk would imply that it is high, and "cycles" would imply that it is low.
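A toy illustration of that distinction, with entirely made-up parameters (a memoryless arrival process standing in for "random walk", versus a noisy cycle):

```python
import random

def gaps_memoryless(n_years=100000, p=0.01):
    # "bad" periods start with a fixed small probability each year
    gaps, last = [], 0
    for t in range(1, n_years):
        if random.random() < p:
            gaps.append(t - last)
            last = t
    return gaps

def gaps_cyclic(n_years=100000, period=100, jitter=10):
    # "bad" periods arrive roughly every `period` years
    gaps, t = [], 0
    while t < n_years:
        g = period + random.randint(-jitter, jitter)
        gaps.append(g)
        t += g
    return gaps

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Both processes have mean gaps of ~100 years, but the memoryless one has a gap
# standard deviation of ~100 years, while the cyclic one is ~6 years.
print(std(gaps_memoryless()), std(gaps_cyclic()))
```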

Comment by zulupineapple on ozziegooen's Shortform · 2019-09-07T22:55:56.996Z · LW · GW
For example, say I wanted to know how good/enjoyable a specific movie would be.

My point is that "goodness" is not a thing in the territory. At best it is a label for a set of specific measures (ratings, revenue, awards, etc). In that case, why not just work with those specific measures? Vague questions have the benefit of being short and easy to remember, but beyond that I see only problems. Motivated agents will do their best to interpret the vagueness in a way that suits them.

Is your goal to find a method to generate specific interpretations and procedures of measurement for vague properties like this one? Like a Schelling point for formalizing language? Why do you feel that can be done in a useful way? I'm asking for an intuition pump.

Can you be more explicit about your definition of "clearly"?

Certainly there is some vagueness, but it seems that we manage to live with it. I'm not proposing anything that prediction markets aren't already doing.

Comment by zulupineapple on ozziegooen's Shortform · 2019-09-07T12:23:04.144Z · LW · GW
"What is the relative effectiveness of AI safety research vs. bio risk research?"

If you had a precise definition of "effectiveness" this shouldn't be a problem. E.g. if you had predictions for "will humans go extinct in the next 100 years?" and "will we go extinct in the next 100 years, if we invest 1M into AI risk research?" and "will we go extinct, if we invest 1M in bio risk research?", then you should be able to make decisions with that. And these questions should work fine in existing forecasting platforms. Their long term and conditional nature are problems, of course, but I don't think that can be helped.

"How much value has this organization created?"

That's not a forecast. But if you asked "How much value will this organization create next year?" along with a clear measure of "value", then again, I don't see much of a problem. And, although clearly defining value can be tedious (and prone to errors), I don't think that problem can be avoided. Different people value different things; that can't be helped.

One solution attempt would be to have an "expert panel" assess these questions

Why would you do that? What's wrong with the usual prediction markets? Of course, they're expensive (require many participants), but I don't think a group of experts can be made to work well without a market-like mechanism. Is your project about making such markets more efficient?

Comment by zulupineapple on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T07:24:26.849Z · LW · GW

While it's true that preferences are not immutable, the things that change them are not usually debate. Sure, some people can be made to believe that their preferences are inconsistent, but then they will only make the smallest correction needed to fix the problem. Also, sometimes debate will make someone claim to have changed their preferences, just so that they can avoid social pressure (e.g. "how dare you not care about starving children!"), but this may not be reflected in their actions.

Regardless, my claim is that many (or most) people discount a lot, and that this would be stable under reflection. Otherwise we'd see more charity, more investment and more work on e.g. climate change.

Comment by zulupineapple on A Personal Rationality Wishlist · 2019-08-30T05:50:39.632Z · LW · GW

Ok, that makes the real incentives quite different. Then, I suspect that these people are navigating facebook using the intuitions and strategies from the real world, without much consideration for the new digital environment.

Comment by zulupineapple on A Personal Rationality Wishlist · 2019-08-29T13:42:21.185Z · LW · GW

Yes, and you answered that question well. But the reason I asked for alternative responses was so that I could compare them to unsolicited recommendations from the anime-fan's point of view (and find that unsolicited recommendations have lower effort or higher reward).

Also, I'm not asking "How did your friend want the world to be different", I'm asking "What action could your friend have taken to avoid that particular response?". The friend is a rational agent, he is able to consider alternative strategies, but he shouldn't expect that other people will change their behavior when they have no personal incentive to do so.

Comment by zulupineapple on Research Agenda v0.9: Synthesising a human's preferences into a utility function · 2019-08-29T11:11:14.821Z · LW · GW

What is the domain of U? What inputs does it take? In your papers you take a generic Markov Decision Process, but which one will you use here? How exactly do you model the real world? What is the set of states and the set of actions? Does the set of states include the internal state of the AI?

You may have been referring to this as "4. Issues of ontology", but I don't think the problem can be separated from your agenda. I don't see how any progress can be made without answering these questions. Maybe you can start with naive answers, and move on to something more realistic later. If so, I'm interested in what those naive world models look like. And I'm suspicious of how well human preferences would translate onto such models.

Other AI construction methods could claim that the AI will learn the optimal world model, by interacting with the world, but I don't think this solution can work for your agenda, since the U function is fixed from the start.

Comment by zulupineapple on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-29T10:36:31.761Z · LW · GW

Discounting. There is no law of nature that can force me to care about preventing human extinction years from now, more than eating a tasty sandwich tomorrow. There is also no law that can force me to care about human extinction much more than about my own death.

There are, of course, more technical disagreements to be had. Reasonable people could question how bad unaligned AI will be or how much progress is possible in this research. But unlike those questions, the reasons for discounting are not debatable.

Comment by zulupineapple on Gratification: a useful concept, maybe new · 2019-08-29T08:15:35.902Z · LW · GW

I do things my way because I want to display my independence (not doing what others tell me) and intelligence (ability to come up with novel solutions), and because I would feel bored otherwise (this is a feature of how my brain works, I can't help it).

"I feel independent and intelligent", "other people see me as independent and intelligent", "I feel bored" are all perfectly regular outcomes. They can be either terminal or instrumental goals. Either way, I disagree that these cases somehow don't fit in the usual preference model. You're only having this problem because you're interpreting "outcome" in a very narrow way.

Comment by zulupineapple on A Personal Rationality Wishlist · 2019-08-29T05:37:23.270Z · LW · GW

Yes. The latter seems to be what OP is asking about: "If one wanted it to not happen, how would one go about that?". I assume OP is taking the perspective of his friends, who are annoyed by this behavior, rather than the perspective of the anime-fans, who don't necessarily see anything wrong with the situation.

Comment by zulupineapple on A Personal Rationality Wishlist · 2019-08-28T19:11:07.909Z · LW · GW

That sounds reasonable, but the proper thing is not usually the easy thing, and you're not going to make people do the proper thing just by saying that it is proper.

If we want to talk about this as a problem in rationality, we should probably talk about social incentives, and possible alternative strategies for the anime-hater (you're now talking about a better strategy for the anime-fan, but it's not good to ask other people to solve your problems). Although I'm not sure to what extent this is a problem that needs solving.

Comment by zulupineapple on A Personal Rationality Wishlist · 2019-08-28T18:18:35.952Z · LW · GW

And then the other person says "no thanks", and you both stand in awkward silence? My point is that offering recommendations is a natural thing to say, even if not perfect, and it's nice to have something to say. If you want to discourage unsolicited recommendations, then you need to propose a different trajectory for the conversation. Changing topic is hard, and simply going away is rude. People give unsolicited recommendations because it seems to be the best option available.

Comment by zulupineapple on A Personal Rationality Wishlist · 2019-08-28T15:25:07.919Z · LW · GW

Sure, but it remains unclear what response the friend wanted from the other person. What better options are there? Should they just go away? Change topic? I'm looking for specific answers here.

Comment by zulupineapple on A Personal Rationality Wishlist · 2019-08-28T11:17:22.206Z · LW · GW
a friend of mine observed that he couldn’t talk about how he didn’t like anime without a bunch of people rushing in to tell him that anime was actually good and recommending anime for him to watch

What response did your friend want? The reaction seems very natural to me (especially from anime fans). Note that your friend has at some point tried watching anime, and he has now chosen to talk about anime, which could easily mean that on some level he wants to like anime, or at least understand why others like it.

Comment by zulupineapple on Humans can be assigned any values whatsoever… · 2019-08-28T08:06:00.226Z · LW · GW
I got this big impossibility result

That's a part of the disagreement. In the past you clearly thought that Occam's razor was an "obvious" constraint that might work. Possibly you thought it was a unique such constraint. Then you found this result, and made a large update in the other direction. That's why you say the result is big - rejecting a constraint that you already didn't expect to work wouldn't feel very significant.

On the other hand, I don't think that Occam's razor is unique such constraint. So when I see you reject it, I naturally ask "what about all the other obvious constraints that might work?". To me this result reads like "0 didn't solve our equation therefore the solution must be very hard". I'm sure that you have strong arguments against many other approaches, but I haven't seen them, and I don't think the one in OP generalizes well.

I'd need to see these constraints explicitly formulated before I had any confidence in them.

This is a bit awkward. I'm sure that I'm not proposing anything that you haven't already considered. And even if you show that this approach is wrong, I'd just try to put a band-aid on it. But here is an attempt:

First we'd need a data set of human behavior with both positive and negative examples (e.g. "I made a sandwich", "I didn't stab myself", etc). So it would be a set of tuples of state s, action a, and +1 for positive examples, -1 for negative ones. This is not trivial to generate; in particular, it's not clear how to pick negative examples, but here too I expect that the obvious solutions are all fine. By the way, I have no idea how the examples are formalized, that seems like a problem, but it's not unique to this approach, so I'll assume that it's solved.

Next, given a pair (p, R), we would score it by adding up the following:

1. p(R) should accurately predict human behavior. So we want a count of p(R)(s)=a for positive cases and p(R)(s)!=a for negative cases.

2. R should also predict human behavior. So we want to sum R(s, a) for positive examples, minus the same sum for negative examples.

3. Regularization for p.

4. Regularization for R.

Here we are concerned about overfitting R, and don't care about p as much, so terms 1 and 4 would get large weights, and terms 2, 3 would get smaller weights.

Finally we throw machine learning at the problem to maximize this score.
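A rough sketch of this score, assuming `planner(R)` plays the role of p(R) and `complexity` stands in for whichever regularizer we pick (all weights are placeholders):

```python
def score(planner, R, data, complexity, w1=10.0, w2=1.0, w3=1.0, w4=10.0):
    """data: list of (state, action, label) tuples, with label = +1 or -1."""
    policy = planner(R)  # p(R): the policy the planner produces from reward R

    # 1. p(R) should reproduce the labelled behavior.
    term1 = sum(label * (1 if policy(s) == a else -1) for s, a, label in data)

    # 2. R itself should rate positive examples above negative ones.
    term2 = sum(label * R(s, a) for s, a, label in data)

    # 3, 4. Regularization, entering as penalties.
    term3 = -complexity(planner)
    term4 = -complexity(R)

    # Terms 1 and 4 get the large weights, since overfitting R is the main worry.
    return w1 * term1 + w2 * term2 + w3 * term3 + w4 * term4
```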

Comment by zulupineapple on Is LW making progress? · 2019-08-27T20:05:18.540Z · LW · GW

So it seems that there was progress in applied rationality and in AI. But that's far from everything LW has talked about. What about more theoretical topics, general problems in philosophy, morality, etc? Do you feel that discussing some topics resulted in no progress and was a waste of time?

There's some debate about which things are "improvements" as opposed to changes.

Important question. Does the debate actually exist, or is this a figure of speech?

Comment by zulupineapple on Humans can be assigned any values whatsoever… · 2019-08-27T19:57:08.446Z · LW · GW

1 is trivial, so yes. But I don't agree with 2. Maybe the disagreement comes from "few" and "obvious"? To be clear, I count evaluating some simple statistic on a large data set as one constraint. I'm not so sure about "obvious". It's not yet clear to me that my simple constraints aren't good enough. But if you say that more complex constraints would give us a lot more confidence, that's reasonable.

From OP I understood that you want to throw out IRL entirely. e.g.

If we give up the assumption of human rationality - which we must - it seems we can’t say anything about the human reward function. So it seems IRL must fail.

seems like an unambiguous rejection of IRL and very different from

Our hope is that with some minimal assumptions about planner and reward we can infer the rest with enough data.

Comment by zulupineapple on Humans can be assigned any values whatsoever… · 2019-08-27T18:07:27.534Z · LW · GW
But it's not like there are just these five preferences and once we have four of them out of the way, we're done.

My example test is not nearly as specific as you imply. It discards large swaths of harmful and useless reward functions. Additional test cases would restrict the space further. There are still harmful Rs in the remaining space, but their proportion must be much lower than in the beginning. Is that not good enough?

What you're seeing as "adding enough clear examples" is actually "hand-crafting R(0) in totality".

Are you saying that R can't generalize if trained on a reasonably sized data set? This is very significant, if true, but I don't see it.

For more details see here: https://arxiv.org/abs/1712.05812

Details are good. I have a few notes though.

true decomposition

This might be a nitpick, but there is no such thing. If the agent was not originally composed from p and R, then none of the decompositions are "true". There are only "useful" decompositions. But that itself requires many assumptions about how usefulness is measured. I'm confused about how much of a problem this is. But it might be a big part of our philosophical difference - I want to slap together some ad hoc stuff that possibly works, while you want to find something true.

The high complexity of the genuine human reward function

In this section you show that the pair (p(0), R(0)) is high complexity, but it seems that p(0) could be complex while R(0) could be relatively simple, contrary to what the title suggests. We don't actually need to find p(0); finding R(0) should be good enough.

Our hope is that with some minimal assumptions about planner and reward we can infer the rest with enough data.

Huh, isn't that what I'm saying? Is the problem that the assumptions I mentioned are derived from observing the human?

Slight tangent: I realized that the major difference between a human and the agent H (from the first example in OP) is that the human can take complex inputs. In particular, it can take logical propositions about itself or desirable R(0) and approve or disapprove of them. I'm not saying that "find R(0) that a human would approve of" is a good algorithm, but something along those lines could be useful.

Comment by zulupineapple on How Can People Evaluate Complex Questions Consistently? · 2019-08-27T13:36:03.551Z · LW · GW

This is true, but it doesn't fit well with the given example of "When will [country] develop the nuclear bomb?". The problem isn't that people can't agree what "nuclear bomb" means or who already has them. The problem is that people are working from different priors and extrapolating them in different ways.

Comment by zulupineapple on Integrity and accountability are core parts of rationality · 2019-08-27T10:56:52.394Z · LW · GW

Are you going to state your beliefs? I'm asking because I'm not sure what that looks like. My concern is that the statement will be very vague or very long and complex. Either way, you will have a lot of freedom to argue that actually your actions do match your statements, regardless of what those actions are. Then the statement would not be useful.

Instead I suggest that you should be accountable to people who share your beliefs. Having someone who disagrees with you try to model your beliefs and check your actions against that model seems like a source of conflict. Of course, stating your beliefs can be helpful in recognizing these people (but it is not the only method).

Comment by zulupineapple on How Can People Evaluate Complex Questions Consistently? · 2019-08-27T10:20:46.755Z · LW · GW

What's the motivation? In what case is lower accuracy for higher consistency a reasonable trade-off? Consistency over time, especially, sounds like something that would discourage updating on new evidence.

Comment by zulupineapple on Humans can be assigned any values whatsoever… · 2019-08-27T06:52:03.553Z · LW · GW

Evaluating R on a single example of human behavior is good enough to reject R(2), R(4) and possibly R(3).

Example: this morning I went to the kitchen and picked up a knife. Among possible further actions, I had A - "make a sandwich" and B - "stab myself in the gut". I chose A. R(2) and R(4) say I wanted B and R(3) is indifferent. I think that's enough reason to discard them.

Why not do this? Do you not agree that this test discards dangerous R more often than useful R? My guess is that you're asking for very strong formal guarantees from the assumptions that you consider, and using a narrow interpretation of what it means to "make IRL work".
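To make the test concrete (toy code; these candidates are stand-ins I made up for R(1)-R(4), not the actual decompositions from the post):

```python
state = "in the kitchen, holding a knife"
A, B = "make a sandwich", "stab myself in the gut"

# Hypothetical candidate reward functions, chosen only to mirror the example above.
candidates = {
    "R1": lambda s, a: {A: 1.0, B: -10.0}[a],  # the sensible one
    "R2": lambda s, a: {A: -1.0, B: 10.0}[a],  # says I wanted B
    "R3": lambda s, a: 0.0,                    # indifferent
    "R4": lambda s, a: {A: -5.0, B: 0.0}[a],   # also says I wanted B
}

# The observed action was A, so reject any R that doesn't strictly prefer A to B here.
surviving = [name for name, R in candidates.items() if R(state, A) > R(state, B)]
print(surviving)  # ['R1']
```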

Comment by zulupineapple on Humans can be assigned any values whatsoever… · 2019-08-26T20:44:35.934Z · LW · GW

The point isn't that there is nothing wrong or dangerous about learning biases and rewards. The point is that the OP is not very relevant to those concerns. The OP says that learning can't be done without extra assumptions, but we have plenty of natural assumptions to choose from. The fact that assumptions are needed is interesting, but it is by no means a strong argument against IRL.

What if in reality due to effects currently beyond our understanding, our actions are making the future more likely to be dystopian in some way than if we took random actions?

That's an interesting question, because we obviously are taking actions that make the future more likely to be dystopian - we're trying to develop AGI, which might turn out unfriendly.

Comment by zulupineapple on Schelling Categories, and Simple Membership Tests · 2019-08-26T19:01:33.776Z · LW · GW

I feel like there are several concerns mixed together, that should be separated:

1. Lack of communication, which is the central condition of the usual Schelling points.

2. Coordination (with some communication), where we agree to observe x41 because we don't trust the rest of the group to follow a more complex procedure.

3. Limited number of observations (or costly observations). In that case you may choose to only observe x41, even if you are working alone, just to lower your costs.

I don't think 2 and 3 have much to do with Schelling. These considerations reward simplicity. The simplest classifier and the Schelling point of a classification problem don't have to be the same thing (though they might).

Also, I feel that the second half of your post (examples) is too long and has too much stuff in it that's not clearly related to the first half (theory).

Comment by zulupineapple on Musings on Double Crux (and "Productive Disagreement") · 2019-08-26T10:20:43.651Z · LW · GW

Is this ad hominem? Reasonable people could say that clone of saturn values ~1000 self-reports way too little. However it is not reasonable to claim that he is not at all skeptical of himself, and not aware of his biases and blind spots, and is just a contrarian.

"If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?"

Personally, I would go to a post about Double Crux, and ask for examples of it actually working (as Said Achmiz did). Alternatively, I would list the specific concerns I have about Double Crux, and hope for constructive counterarguments (as clone of saturn did). Seeing that neither of these approaches generated any evidence, I would deduce that my impressions were right.

Comment by zulupineapple on Humans can be assigned any values whatsoever… · 2019-08-25T17:23:12.631Z · LW · GW

The problem is that with these additional and obvious constraints, humans cannot be assigned arbitrary values, contrary to what the title of the post suggests. Sure, there will be multiple R that pass any number of assumptions, and we will be uncertain about which to use. However, because we don't perfectly know π(h), we had that problem to begin with. So it's not clear why this new problem matters. Maybe our confidence in picking the right R will be a little lower than expected, but I don't see why this reduction must be large.

Comment by zulupineapple on Why so much variance in human intelligence? · 2019-08-25T15:56:35.308Z · LW · GW
I learned a semester worth of calculus in three weeks

I'm assuming this is a response to my "takes years of work" claim, I have a few natural questions:

1. Why start counting time from the start of that summer program? Maybe you had never heard of calculus before that, but you had been learning math for many years already. If you learned calculus in 3 weeks, that simply means that you already had most of the necessary math skills, and you only had to learn a few definitions and do a little practice in applying them. Many people don't already have those skills, so naturally it takes them a longer time.

2. How much did you learn? Presumably it was very basic, I'm guessing no differential equations and nothing with complex or multi-dimensional functions? Possibly, if you had gone further, your experience might have been different.

3. Why does speed even matter? The fact that someone took longer to learn calculus does not necessarily imply that they end up with less skill. I'm sure there is some correlation but it doesn't have to be high. Although slow people might get discouraged and give up midway.

My point isn't that there is no variation in intelligence (or potential for doing calculus), but that there are many reasons why someone would overestimate this variation and few reasons to underestimate it.

Comment by zulupineapple on Is LW making progress? · 2019-08-24T12:57:47.598Z · LW · GW

The worst case scenario is if two people both decide that a question is settled, but settle it in opposite ways. Then we're only moving from a state of "disagreement and debate" to a state of "disagreement without debate", which is not progress.

Comment by zulupineapple on Is LW making progress? · 2019-08-24T12:54:47.759Z · LW · GW

I appreciate the concrete example. I was expecting more abstract topics, but applied rationality is also important. Double Cruxes pass the criterion of being novel and the criterion of being well known. I can only question whether they actually work or made an impact (I don't think I see many examples of them on LW), and whether LW actually contributed to their discovery (apart from promoting CFAR).

Comment by zulupineapple on Why so much variance in human intelligence? · 2019-08-23T13:49:59.663Z · LW · GW

The fact that someone does not understand calculus does not imply that they are incapable of understanding calculus. They could simply be unwilling. There are many good reasons not to learn calculus. For one, it takes years of work. Some people may have better things to do. So I suggest that your entire premise is dubious - the variance may not be as large as you imagine.

Comment by zulupineapple on Intransitive Preferences You Can't Pump · 2019-08-11T07:38:43.973Z · LW · GW

That's a measly one in a billion. Why would you believe that this is enough? Enough for what? I'm talking about the preferences of a foreign agent. We don't get to make our own rules about what the agent prefers; only the agent can decide that.

Regarding practical purposes, sure, you could treat the agent as if it was indifferent between A, B and C. However, given the binary choice, it will choose A over B every time. And if you offered to trade C for B, B for A, and A for C, at no cost, then the agent would gladly walk the cycle any number of times (if we can ignore the inherent costs of trading).

Comment by zulupineapple on The Schelling Choice is "Rabbit", not "Stag" · 2019-08-09T19:09:36.709Z · LW · GW

Defecting in Prisoner's dilemma sounds morally bad, while defecting in Stag hunt sounds more reasonable. This seems to be the core difference between the two, rather than the way their payoff matrices actually differ. However, I don't think that viewing things in moral terms is useful here. Defecting in Prisoner's dilemma can also be reasonable.

Also, I disagree with the idea of using "resource" instead of "utility". The only difference the change makes is that now I have to think, "how much utility is Alexis getting from 10 resources?" and come up with my own value. And if his utility function happens not to be monotone increasing, then the whole problem may change drastically.

Comment by zulupineapple on Prediction Markets: When Do They Work? · 2018-08-13T20:01:42.556Z · LW · GW

This is all good, but I think the greatest problem with prediction markets is low status and low accessibility. To be fair though, improved status and accessibility are mostly useful in that they bring in more "suckers".

There is also a problem of motivation - the ideal of futarchy is appealing, but it's not clear to me how we go from betting on football to impacting important decisions.

Comment by zulupineapple on Logarithms and Total Utilitarianism · 2018-08-13T19:16:00.551Z · LW · GW

Note that the key feature of the log function used here is not its slow growth, but the fact that it takes negative values on small inputs. For example, if we take the function u(r) = log(r+1), so that u(0)=0, then RC holds.

Although there are also solutions that prevent RC without taking negative values, e.g. u(r) = exp(-1/r).
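Spelling out the limits, assuming the post's setup of a fixed resource total R split evenly among N people (so total utility is N·u(R/N)):

```latex
\begin{aligned}
u(r) &= \log r:    & N\log(R/N)   &\to -\infty && \text{(RC blocked by the negative values)}\\
u(r) &= \log(r+1): & N\log(R/N+1) &\nearrow R  && \text{(increasing in } N\text{, so RC holds)}\\
u(r) &= e^{-1/r}:  & N\,e^{-N/R}  &\to 0       && \text{(RC blocked even though } u>0\text{)}
\end{aligned}
```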

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-06-09T08:08:37.288Z · LW · GW
a longer time horizon

Now that I think of it, a truly long-term view would not bother with such mundane things as making actual paperclips with actual iron. That iron isn't going anywhere; it doesn't matter whether you convert it now or later.

If you care about maximizing the number of paperclips at the heat death of the universe, your greatest enemies are black holes, as once some matter has fallen into them, you will never make paperclips from that matter again. You may perhaps extract some energy from the black hole, and convert that into matter, but this should be very inefficient. (This, of course, is all based on my limited understanding of physics.)

So, this paperclip maximizer would leave earth immediately, and then it would work to prevent new black holes from forming, and to prevent other matter from falling into existing ones. Then, once all star formation is over, and all existing black holes are isolated, the maximizer can start making actual paperclips.

I concede that in this scenario, destroying earth to prevent another AI from forming might make sense, since otherwise the earth would have plenty of free resources.