Posts

Is LW making progress? 2019-08-24T00:32:31.600Z · score: 19 (10 votes)
Intransitive Preferences You Can't Pump 2019-08-09T23:10:36.650Z · score: 2 (3 votes)
Against Occam's Razor 2018-04-05T17:59:27.583Z · score: 2 (15 votes)
How I see knowledge aggregation 2018-02-03T10:31:25.359Z · score: 64 (18 votes)
Against Instrumental Convergence 2018-01-27T13:17:19.389Z · score: 24 (12 votes)

Comments

Comment by zulupineapple on Is LW making progress? · 2019-08-24T12:57:47.598Z · score: 2 (2 votes) · LW · GW

The worst case scenario is if two people both decide that a question is settled, but settle it in opposite ways. Then we're only moving from a state of "disagreement and debate" to a state of "disagreement without debate", which is not progress.

Comment by zulupineapple on Is LW making progress? · 2019-08-24T12:54:47.759Z · score: 2 (2 votes) · LW · GW

I appreciate the concrete example. I was expecting more abstract topics, but applied rationality is also important. Double Cruxes pass the criterion of being novel and the criterion of being well known. I can only question whether they actually work or have made an impact (I don't think I see many examples of them on LW), and whether LW actually contributed to their discovery (apart from promoting CFAR).

Comment by zulupineapple on Why so much variance in human intelligence? · 2019-08-23T13:49:59.663Z · score: 1 (2 votes) · LW · GW

The fact that someone does not understand calculus does not imply that they are incapable of understanding calculus. They could simply be unwilling. There are many good reasons not to learn calculus. For one, it takes years of work. Some people may have better things to do. So I suggest that your entire premise is dubious - the variance may not be as large as you imagine.

Comment by zulupineapple on Intransitive Preferences You Can't Pump · 2019-08-11T07:38:43.973Z · score: 1 (1 votes) · LW · GW

That's a measly one in a billion. Why would you believe that this is enough? Enough for what? I'm talking about the preferences of a foreign agent. We don't get to make our own rules about what the agent prefers; only the agent can decide that.

Regarding practical purposes, sure, you could treat the agent as if it were indifferent between A, B, and C. However, given the binary choice, it will choose A over B every time. And if you offered to trade C for B, B for A, and A for C, at no cost, then the agent would gladly walk the cycle any number of times (if we can ignore the inherent costs of trading).
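
For concreteness, here is a toy sketch (my own illustration, with hypothetical items A, B, C and the cyclic pairwise preferences described above) of an agent happily walking the cycle when trades are free:

```python
# Toy model of an agent with the cyclic pairwise preferences A > B, B > C, C > A.
# "prefers" records which item is chosen over which in a binary comparison.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

holding = "B"
trades_accepted = 0
for offered in ["A", "C", "B"] * 2:        # offer the next item in the cycle, twice around
    if (offered, holding) in prefers:      # the agent accepts any swap it pairwise prefers
        holding = offered
        trades_accepted += 1

print(holding, trades_accepted)            # "B" 6: back where it started, six free trades later
```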

Comment by zulupineapple on The Schelling Choice is "Rabbit", not "Stag" · 2019-08-09T19:09:36.709Z · score: 1 (1 votes) · LW · GW

Defecting in Prisoner's dilemma sounds morally bad, while defecting in Stag hunt sounds more reasonable. This seems to be the core difference between the two, rather than the way their payoff matrices actually differ. However, I don't think that viewing things in moral terms is useful here. Defecting in Prisoner's dilemma can also be reasonable.

Also, I disagree with the idea of using "resource" instead of "utility". The only difference the change makes is that now I have to think, "how much utility is Alexis getting from 10 resources?" and come up with my own value. And if his utility function happens not to be monotone increasing, then the whole problem may change drastically.

Comment by zulupineapple on Prediction Markets: When Do They Work? · 2018-08-13T20:01:42.556Z · score: -6 (7 votes) · LW · GW

This is all good, but I think the greatest problem with prediction markets is low status and low accessibility. To be fair though, improved status and accessibility are mostly useful in that they bring in more "suckers".

There is also a problem of motivation - the ideal of futarchy is appealing, but it's not clear to me how we go from betting on football to impacting important decisions.

Comment by zulupineapple on Logarithms and Total Utilitarianism · 2018-08-13T19:16:00.551Z · score: 8 (4 votes) · LW · GW

Note that the key feature of the log function used here is not its slow growth, but the fact that it takes negative values on small inputs. For example, if we take the function u(r) = log(r+1), so that u(0) = 0, then the Repugnant Conclusion (RC) holds.

Although there are also solutions that prevent the RC without taking negative values, e.g. u(r) = exp(-1/r).
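
For concreteness, here is a quick numeric sketch (my own toy numbers, not from the post) of total utility N·u(R/N) when a fixed resource pool R is split evenly among N people:

```python
import math

R = 100.0                                    # total resources (arbitrary)

def total_utility(u, N):
    return N * u(R / N)

u_log_shifted = lambda r: math.log(r + 1)    # u(0) = 0, never negative
u_exp = lambda r: math.exp(-1 / r)           # positive everywhere, but -> 0 quickly as r -> 0

for N in (10, 100, 1_000, 10_000, 100_000):
    print(N, round(total_utility(u_log_shifted, N), 2), total_utility(u_exp, N))
# With log(r+1) the total keeps rising toward R as N grows (the RC),
# while with exp(-1/r) the total peaks near N = R and then collapses toward 0.
```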

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-06-09T08:08:37.288Z · score: 3 (2 votes) · LW · GW
a longer time horizon

Now that I think of it, a truly long-term view would not bother with such mundane things as making actual paperclips with actual iron. That iron isn't going anywhere; it doesn't matter whether you convert it now or later.

If you care about maximizing the number of paperclips at the heat death of the universe, your greatest enemies are black holes, as once some matter has fallen into them, you will never make paperclips from that matter again. You may perhaps extract some energy from the black hole, and convert that into matter, but this should be very inefficient. (This, of course, is all based on my limited understanding of physics.)

So this paperclip maximizer would leave earth immediately, and then it would work to prevent new black holes from forming and to prevent other matter from falling into existing ones. Then, once all star formation is over and all existing black holes are isolated, the maximizer can start making actual paperclips.

I concede that, in this scenario, destroying earth to prevent another AI from forming might make sense, since otherwise the earth would have plenty of free resources.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-06-09T07:16:39.262Z · score: 5 (2 votes) · LW · GW

The fact that P(humans will make another AI) > 0 does not justify paying arbitrary costs up front, no matter how long our view is. If humans did create this second AI (presumably built out of twigs), would that even be a problem for our maximizer?

It's still more efficient to kill all humans than to think about which ones need killing

That is not a trivial claim and it depends on many things. And that's all assuming that some people do actually need to be killed.

If destroying all (macroscopic) life on earth is easy, e.g. maybe pumping some gas into the atmosphere could be enough, then you're right: the AI would just do that.

If disassembling human infrastructure is not an efficient way to extract iron, then you're mostly right: the AI might find itself willing to nuke the major population centers, killing most, though not all, people.

But if the AI does disassemble infrastructure, then it is going to be visiting and reviewing many things about the population centers, so identifying the important humans should be a minor cost on top of that, and I should be right.

Then again, if the AI finds it efficient to go through every square meter of the planet's surface and to dig it up looking for every iron-rich rock, it would destroy many things in the process, possibly fatally damaging earth's ecosystems, although humans could move to live in the oceans, which might remain relatively undisturbed.

Note also that this is all a short-term discussion. In the long term, of course, all the reasonable sources of paperclips will be exhausted, and silly things, like extracting paperclips from people, will be the most efficient ways to use the available energy.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-06-06T18:33:16.647Z · score: 4 (1 votes) · LW · GW

Killing all humans is hardly necessary. For example, the tribes living in the Amazon aren't going to develop a superintelligence any time soon, so killing them is pointless. And, once the paperclip maximizer is done extracting iron from our infrastructure, it is very likely that we wouldn't have the capacity to create any superintelligences either.

Note, I did not mean to imply that the maximizer would kill nobody. Only that it wouldn't kill everybody, and quite likely not even half of all people. Perhaps AI researchers really would be on the maximizer's short list of people to kill, for the reason you suggested.

Comment by zulupineapple on *Another* Double Crux Framework · 2018-05-31T08:32:05.675Z · score: 4 (1 votes) · LW · GW
The structure here was "write an initial braindump on google docs, then invite people to hash out disagreements in the comments".

Is it possible that you did 90% of the work on those docs, at least of the kind that collects and cleans up existing arguments? This is sort of what I meant by "resistance". E.g. if I wanted to have a formalized debate with my hypothetical grandma, she'd be confused about why I would need that, or why we can't just talk like normal people, but this doesn't mean that she wouldn't play along, or that I wouldn't find the results of the debate useful. I wonder what fraction of people, even rationalists, would feel similarly.

http://double-crux.appspot.com/

Well, that has fewer moving parts and fewer distinct kinds of text than I would appreciate. But I suspect that the greatest problem with this sort of thing would be a lack of persistent usage. That is, if a few people actually put effort into having disagreements with a tool like this, even one this simple, they might draw some benefit from it. But since such tools aren't the least-effort option for anybody, they end up unused. I guess google docs are pretty good in this sense, in that everyone has access to them, the docs are persistent and live in a familiar place (assuming the person uses google docs for other purposes), and maybe you can even be notified somehow that "person X modified doc Y".

Comment by zulupineapple on *Another* Double Crux Framework · 2018-05-30T13:34:37.928Z · score: 15 (3 votes) · LW · GW
There's been periodic attempts to create formal Double Crux frameworks

Do you have any links about those, or specifically about how they fail?

To be honest, I think it's likely that the whole idea of formalizing that sort of thing is naive, and only appeals to a certain kind of person (such as myself), due to various biases. Still, I have some hope that it could work, at least for such people.

This framework shares that issue, but something that made me a bit more optimistic than usual about it is that I've had a lot of good experiences using google docs as a way to hash out ideas, with the ability to blend between formal bullet points, freeforming paragraphs, and back-and-forth conversation in the comments as needed.

Do elaborate. Did "hashing out ideas" involve having many disagreements? Did the ideas relate to anything controversial, or were they more technical? Were the people you collaborated with "rationalists"? Did you feel much resistance from them to doing anything even remotely formal?

Comment by zulupineapple on *Another* Double Crux Framework · 2018-05-29T10:25:33.372Z · score: 14 (3 votes) · LW · GW

It looks very appealing, but, as was already pointed out, it's not a lightweight approach.

Maybe it could be, though? One improvement would be the ability to stick with the LW comment format, or any text message format. I think that could still work. We could agree on a set of tags/prefixes instead of static sections, e.g. [I think we both believe] that ..., [I would bet] that ..., [Let me try to pass your ITT] ..., etc. The amorphous discussion probably does not need to be tagged. And the point of having tags is that you can then ctrl-F the whole discussion thread to find them, and you can talk about them as objects, or point out that some tags are missing (e.g. that your opponent has not suggested any bets).

Of course, many discussions could be hard to tag, and maybe we'd discourage discussions on those topics, if we pushed such a norm, but if this does actually improve the resolution of disagreements even a little, it might still be worth it.

Comment by zulupineapple on Duncan Sabien on Moderating LessWrong · 2018-05-29T07:32:51.256Z · score: 3 (3 votes) · LW · GW

I think you're confusing "aspiring to find truth" with "finding truth". Your crackpot uncle who writes facebook posts about how Trump eats babies isn't doing it because he loves lies and hates truth; he does it because he has poor epistemic hygiene.

So in this view almost every discussion forum and almost every newspaper is doing its best to find the truth, even if it has some other goals as well.

Also, of course, I'm only counting places that deal with anything like propositions at all, and excluding things like jokes, memes, porn, shopping, etc., which make up a large fraction of the internet.

Comment by zulupineapple on Duncan Sabien on Moderating LessWrong · 2018-05-29T07:20:09.445Z · score: 4 (1 votes) · LW · GW
I also think it is important here to have the someone who does the noticing be someone who actually has the relevant skills, <...> who won't feel licensed to point out such problems unless handed a literal license to do so).

Yes, but giving people licenses is pretty easy. I'd be fine with you having one, for example, though I guess I don't have the power to give it to you myself.

It is generally wise to solve social problems with tech, when possible.

The problem is that tech takes time and effort to write, so writing tech to solve problems that it may not actually solve is unwise. What I'm proposing is a temporary prototype of some sort. If that worked out, then I agree, a proper tech solution would be nice.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-05-28T12:50:05.879Z · score: -8 (4 votes) · LW · GW
It sounds like your comment probably isn't relevant to the point of my post, except insofar as I describe a view which isn't your view.

Yes, you describe a view that isn't my view, and then use that view to criticize intuitions that are similar to my intuitions. The view you describe is making simple errors that should be easy to correct, and my view isn't. I don't really know how the group of "people who aren't too worried about paperclipping" breaks down between "people who underestimate P(paperclipping)" and "people who think paperclipping is ok, even if suboptimal" in numbers; maybe the latter really is rare. But the former group should shrink with some education, and the latter might grow from it.

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-28T09:54:44.744Z · score: 4 (1 votes) · LW · GW
Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational.

They are not the same, but that's ok. You asked about constraints on, not definitions of, rationality. This may not be an exhaustive list, but if someone has an idea about rationality that translates neither into winning some hypothetical bets nor into having even slightly more accurate beliefs about anything, then I can confidently say that I'm not interested.

(Of course this is not to say that an idea that has no such applications has literally zero value)

Supposing that some version of pre-rationality does work out, and if I, hypothetically, understood pre-rationality extremely well (better than RH's paper explains it)... I would expect more insights into at least one of the following: <...>

I completely agree that if RH was right, and if you understood him well, then you would receive multiple benefits, most of which could translate into winning hypothetical bets, and into having more accurate beliefs about many things. But that's just the usual effect of learning, and not because you would satisfy the pre-rationality condition.

I continue to not understand in what precise way the agent that satisfies the pre-rationality condition is (claimed to be) superior to the agent that doesn't. To be fair, this could be a hard question, and even if we don't immediately see the benefit, that doesn't mean that there is no benefit. But still, I'm quite suspicious. In my view this is the single most important question, and it's weird to me that I don't see it explicitly addressed.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-05-27T07:33:37.690Z · score: 2 (2 votes) · LW · GW
Sexual desire is (more or less) universal in sexually reproducing species

Uploads are not sexually reproducing. This is only one of many, many ways in which an upload is more different from you than you are from a dinosaur.

Whether regular evolution would drift away from our values is more dubious. If we lived in caves for all that time, then probably not. But if we stayed at current levels of technology, even without making progress, I think a lot could change. The pressures of living in a civilization are not the same as the pressures of living in a cave.

Are you troubled by instrumental values shifts, even if the terminal values stay the same?

No, I'm talking about terminal values. By the way, I understood what you meant by "terminal" and "instrumental" here, you didn't need to write those 4 paragraphs of explanation.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-26T19:48:41.174Z · score: 0 (2 votes) · LW · GW
Seems reasonable, does it work well?

What do you mean by "works well"? Getting positive responses from real people? I doubt it, but I don't think I've ever explained it like this to anyone. I don't do the "everything is chemicals" reply that often in the first place.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-05-26T19:44:48.531Z · score: 3 (2 votes) · LW · GW

I don't like the caveman analogy. The differences between you and a caveman are tiny and superficial compared to the differences between you and the kind of mind that will exist after genetic engineering, mind uploads, etc., or even after a million years of regular evolution.

Would a human mind raised as (for example) an upload in a vastly different environment from our own still have our values? It's not obvious. You say "yes", I say "no", and we're unlikely to find strong arguments either way. I'm only hoping that I can make "no" seem possible to you. And then I'm hoping that you can see how believing "no" makes my position less ridiculous.

With that in mind, the paperclip maximizer scenario isn't "everyone dies", as you see it. The paperclip maximizer does not die. Instead it "flourishes". I don't know whether I value the flourishing of a paperclip maximizer less than I value the flourishing of whatever my descendants end up as. Probably less, but not by much.

The part where the paperclip maximizer kills everyone is, indeed, very bad. I would strongly prefer that not to happen. But being converted into paperclips is not worse than dying in other ways.

Also, I don't know if being converted into paperclips is necessary - after mining and consuming the surface iron, the maximizer may choose to go to space, looking for more accessible iron. The benefits of killing people are relatively small, and destroying the planet to the extent that would make it uninhabitable is relatively hard.

Comment by zulupineapple on Duncan Sabien on Moderating LessWrong · 2018-05-26T13:26:27.489Z · score: -1 (9 votes) · LW · GW
the comments feel nitpicky in a way that isn't actually helpful

If you see a comment that is technically correct but nitpicky and unhelpful, you could reply "this is technically correct, but nitpicky and unhelpful". Downvoting correct statements just looks bad.

the point it was making just... didn't seem very relevant.

I think there is a more charitable reading of TAG's comment. Not only are there places on the internet aspiring to find the truth; there are, in fact, very few places that are not aspiring to find it. The point isn't that there are more places like LW. The point is that "truth seeking" isn't the distinguishing characteristic of LW.

and that you have to defend your points against a hostile-seeming crowd rather than collaboratively building something.

I honestly believe that attacking people's points is a good way to learn something. I don't know what you mean by "collaboratively building something", I'd appreciate examples where that has happened in the past. I suspect that you're overestimating how valuable or persistent this "something" is.

increase the latent hostility of the thread, and I think people don't appreciate enough how bad that is for discourse.

I don't think you've provided strong arguments that it actually is bad for discourse. Yes, demon threads don't usually go anywhere, but regular threads don't usually go anywhere either. And people can actually learn from demon threads, even if they're not willing to admit it right away. I certainly have.

Comment by zulupineapple on Duncan Sabien on Moderating LessWrong · 2018-05-26T12:17:43.491Z · score: 9 (2 votes) · LW · GW
a 'report' button on every comment, available to all users, which was basically a "something about this should be handled by someone with more time and energy"

Why not just make a comment about it, as a reply to the offending post? E.g. "There seem to be some problems in this thread that a third party could help with, but I'm not willing to do it, so don't reply to this". If that were an established norm, then I think it's reasonably likely that such a comment would be noticed by someone (especially since we have "recent discussion" on the home page).

Whether that would actually solve problems or just create more, is a separate question. But trying things is generally a good idea.

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-26T09:20:43.266Z · score: 4 (1 votes) · LW · GW
the usual problem of spelling out what it means for something to be a constraint on rationality

Is that a problem? What's wrong with "believing true things", or, more precisely, "winning bets"? (obviously, these need to be prefixed with "usually" and "across many possible universes"). If I'm being naive and these don't work, then I'd love to hear about it.

But if they do work, then I really want to see how the idea of pre-rationality is supposed to help me believe more true things and win more bets. I legitimately don't understand how it would.

Should honest truth-seeking humans knowingly disagree?

My intuition says "yes" in large part due to the word "humans". I'm not certain whether two perfect bayesians should disagree, for some unrealistic sense of "perfect", but even if they shouldn't, it is uncertain that this would also apply to more limited agents.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-05-26T07:43:51.892Z · score: 3 (2 votes) · LW · GW
Most people seem to think that {universe mostly full of identical paperclips} is worse than {universe full of diverse conscious entities having fun}

Yes, I think that too. You're confusing "I'd be happy with either X or Y" with "I have no preference between X and Y".

Mostly though this seems to be a quantitative issue: if paperclips are halfway between extinction and flourishing, then paperclipping is nearly as bad and avoiding it is nearly as important.

Most issues are quantitative. And if paperclips are 99% of the way from extinction to flourishing (whatever exactly that means), then paperclipping is pretty good.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-26T07:24:56.528Z · score: 4 (1 votes) · LW · GW
Unfortunately, this is incredibly hard to do. It's much easier to notice patterns than identify the relevant property.

Noticing patterns is all that there is to do. There is no magic word that means exactly what you want to say. But some patterns are better at identifying the relevant properties than others. And I believe that pushing people to use more accurate words has some value.

Nearly all ecosystems have been heavily modified by human presence since 10 thousand years ago.

Yes, they have. And those modifications could be said to be unnatural. E.g. the fact that there aren't as many forests is unnatural, and there were more 10k years ago. But also many things stayed the same. E.g. the fact that squirrels still live in those forests is natural, and there were also squirrels 10k years ago.

Do you have a specific example where you could say "X is natural", but not "X was common 10k years ago"?

Of course, ideally we'd say something is natural if it would have happened anyway if humans had never had any influence on it, but that's hard to say, and looking at 10k years ago is a good-enough approximation.

Also, there is the issue of what is natural for humans. I don't think the common usage of "natural" says anything about that, and I guess the 10k year rule doesn't work here.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-26T06:58:42.759Z · score: 4 (1 votes) · LW · GW

I don't think the long term problems caused by natural ingredients are that well known. Nutrition is hard.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-05-25T16:08:41.255Z · score: -7 (7 votes) · LW · GW
A paperclip-maximizer could turn out to be much, much worse than a nuclear war extinction, depending on how suffering subroutines and acausal trade works.

Is it worse because the maximizer suffers? Why would I care whether it suffers? Why would you assume that I care?

An AI dedicated to the preservation of the human species but not aligned to any other human values would, I bet, be much much worse than a nuclear war extinction.

I imagine that the most efficient way to preserve living humans is to keep them unconscious in self-sustaining containers, spread across the universe. You can imagine more dystopian scenarios, but I doubt they are more efficient. Suffering people might try to kill themselves, which is counterproductive from the AI's point of view.

Also, you're still assuming that I have some all-overpowering "suffering is bad" value. I don't. Even if the AI created trillions of humans at maximum levels of suffering, I can still prefer that to a nuclear war extinction (though I'm not sure that I do).

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-25T12:10:14.318Z · score: 4 (1 votes) · LW · GW
I was assuming you still knew something about the structure of the problem, i.e. that there would be a bunch of tickets sold, that you have only bought one, etc.

If you've already observed all the possible evidence, then your prediction is not a "prior" any more, in any sense of the word. Also, both the total number of tickets sold and the number of tickets someone bought are variables. If I know that there is a lottery in the real world, I don't usually know how many tickets they really sold (or will sell), and I'm usually allowed to buy more than one (although it's hard for me to not know how many I have).

After updating, you have a bunch of people who all have a small probability for "the earth is flat", but they may have slightly different probabilities due to different genetic predispositions. Are you saying that you don't think averaging makes sense here?

I think that Hanson wants to average before updating. Although if everyone is a perfect bayesian and saw the same evidence, then maybe there isn't a huge difference between averaging before or after the update.

Either way, my position is that averaging is not justified without additional assumptions. Though I'm not saying that averaging is necessarily harmful either.

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-25T10:08:51.984Z · score: 4 (1 votes) · LW · GW

Why? Are there no conceivable lotteries with that probability of winning? (There are, e.g. if I bought multiple tickets). Is there no evidence that we could see in order to update this prediction? (There is, e.g. the number of tickets sold, the outcomes of past lotteries, etc). I continue to not understand what standard of "garbage" you're using.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-05-25T09:58:39.199Z · score: 2 (5 votes) · LW · GW
Many people have a strong intuition that we should be happy for our AI descendants, whatever they choose to do. They grant the possibility of pathological preferences like paperclip-maximization, and agree that turning over the universe to a paperclip-maximizer would be a problem, but don’t believe it’s realistic for an AI to have such uninteresting preferences.

Here I can relate to the first sentence, but not to the others, so you may be failing some ITT. It's not that paperclip maximizers are unrealistic. It's that they are not really that bad. Yes, I would prefer not to be converted into paperclips, but I can still be happy that the human civilization, even if extinct, has left a permanent mark on the universe. This is not the worst way to go. And we are going away sooner or later anyway - unless we really work for it, our descendants 1 million years from now will not be called humans and will not share our values. I don't see much of a reason to believe that the values of my biological descendants will be less ridiculous to me, than paperclip maximization.

Also, I'm seeing unjustified assumptions that human values, let alone alien values, are safe. The probability that humans would destroy ourselves, given enough power, is not zero, and is possibly quite substantial. In that case, building an AI that is dedicated to the preservation of the human species, but not well aligned to any other human values, could be a very reasonable idea.

Comment by zulupineapple on When is unaligned AI morally valuable? · 2018-05-25T09:06:24.169Z · score: 5 (2 votes) · LW · GW

This is a good question. I worry that OP isn't even considering that the simulated civilization might decide to build their own AI (aligned or not). Maybe the idea is to stop the simulation before the civilization reaches that level of technology. But then, they might not have enough time to make any decisions useful to us.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-25T07:51:33.120Z · score: -12 (7 votes) · LW · GW
Politeness is often useful instrumentally in order to communicate efficiently.

Rudeness is also often useful instrumentally in order to communicate efficiently.

It might be silly to have a "I don't eat yellow food" diet, but that doesn't mean you shouldn't have the concept of yellow.

I admit, the complaints "chemical isn't a natural category" and "avoiding chemicals is a silly diet" are distinct. But somehow it makes sense for me to say the former when I also think the latter. I think the fact that the category isn't natural makes the diet sillier. E.g. if someone said "I don't eat meat (for non-moral reasons)", I might still think they're being silly, but at least I can imagine possible worlds where that diet would make sense. On the other hand, "I don't eat meat from animals with 3 toes" is on a whole different order of magnitude of silliness.

Comment by zulupineapple on Decision theory and zero-sum game theory, NP and PSPACE · 2018-05-25T07:24:50.881Z · score: 5 (2 votes) · LW · GW

If you can perfectly simulate your opponent, then a zero-sum game is also in NP. Of course, in practice you can't, but you do have some reasonable beliefs about what the opponent will do, which, I think, moves most real-life problems into the same gray zone as high-reliability engineering.

Also, I suspect that real-life uncertainty about "f" in your NP problems likewise pushes them up in complexity, into that same gray zone.

There may be two clear clusters in theory, but it's not certain that they remain distinct in practice.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-24T19:59:14.390Z · score: -19 (12 votes) · LW · GW

Wouldn't it be lovely if we could use words that actually mean what we want them to mean?

In the OP, "technology" is used for distracting things - I'm sure the grandma would not object to grandpa's hearing aid, but she would object to a newspaper (if anybody still read those).

In your quote, "selfish" means a lack of empathy (I'm not helping you because I don't care how you feel), or of foresight (I'm not giving you a gift, because I don't see how that would affect our future relationship).

Also, "natural" probably means "something that was common 10 thousand years ago".

On one hand, sure, we have to make use of the words we have. But on the other hand, it's not like we're running out of sounds to use. And you don't even need new words for some of these.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-24T19:38:19.143Z · score: -8 (9 votes) · LW · GW
You don't have to be cruel

"Cruel" might be a bit of a stretch. I could agree that your "No" replies are passive aggressive, which is frowned upon, but I don't think that being passive aggressive is an unreasonable strategy.

Extensionally, "chemicals" is food coloring that doesn't come straight out of a whole food, disodium edta, ammonia, peroxide, acetone, sulfur dioxide, aspartame, sodium aluminosilicate, tetrasodium pyrophosphate, sodium sorbate, methylchloroisothiazolinone....

Well, that's a long list. Doesn't explain very much though. How do you feel about carbonic acid, baking soda or pure alcohol? Also, what would happen if I took one item from your chemical list, and discovered that it is contained in and extractable from one of the items in your non-chemical list?

A thing doesn't have to be a natural category for people to want to talk about it and have a legitimate interest in talking about it.

Nobody can stop you from talking about whatever you want. But it doesn't help you reach correct conclusions.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-24T19:37:06.609Z · score: -13 (6 votes) · LW · GW
One polite way to respond <...>

Was this post about politeness all along? I thought it was about efficient communication.

"a cluster of chemicals the central examples of which require industrial manufacturing processes to create, did not exist before the 20th century, are not part of any culture's traditional way of doing things, could not be manufactured in a home kitchen, and bear little resemblance to petroleum, corn, or soybeans in spite of being derived from them."

The part about cultures cuts out way too much, I think. The part about the home kitchen seems dubious; I suspect some "chemicals" are quite easy to make, though I don't know. The part about the 20th century could work.

But then it remains to ask whether "I avoid eating chemicals" is any more reasonable than "I avoid eating yellow food". Can we use the fact that a chemical was first synthesized in the 20th century to predict something about that chemical?

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-24T18:55:33.932Z · score: 4 (1 votes) · LW · GW
Does the common prior assumption make sense in practice?

I don't know what "make sense" means. When I said "in simple terms", I meant that I want to avoid that sort of vagueness. The disagreement should be empirical. It seems that we need to simulate an environment with a group of bayesians with different priors, then somehow construct another group of bayesians that satisfy the pre-rationality condition, and then the claim should be that the second group outperforms the first group in accuracy. But I don't think I saw such claims in the paper explicitly. So I continue to be confused about what exactly the disagreement is.

Should honest truth-seeking humans knowingly disagree?

Big question; I'm not going to make big claims here, though my intuition tends to say "yes". Also, "should" is a bad word; I'm assuming that you're referring to accuracy (as in my previous paragraph), but I'd like to see these things stated explicitly.

you don't give up your prior just because you see that there was bias in the process which created it

Of course not. But you do modify it. What is RH suggesting?

Is the pre-rationality condition a true constraint on rationality? RH finds it plausible; WD does not. I am conflicted.

"True" is a bad word, I have no idea what it means.

If the pre-rationality argument makes sense for common probabilities, does it then make sense for utilities? WD thinks so; RH thinks not.

RH gives a reasonable argument here, and I don't see much of a reason why we would do this to utilities in the first place.

Does pre-rationality imply a rational creator? WD thinks so; RH thinks not.

I see literally nothing in the paper to suggest anything about this. I don't know what WD is talking about.

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-24T18:48:38.960Z · score: 4 (1 votes) · LW · GW
P(earth is flat)=0.6 isn't a garbage prediction, since it lets people update to something reasonable after seeing the appropriate evidence.

What is a garbage prediction then? P=0 and P=1? When I said "garbage", I meant that it has no relation to the real world, it's about as good as rolling a die to choose a probability.

Comment by zulupineapple on Expressive Vocabulary · 2018-05-24T08:09:07.071Z · score: -11 (15 votes) · LW · GW

I'm all for less pedantry, but

there doesn't seem to actually exist a word in English for the thing you know perfectly well people mean when they say "chemicals".

Maybe that's because "chemicals" isn't a natural category? I don't really know what is meant by that word. It could be something about the manufacturing process. But possibly it just means "complicated words listed on the packaging" and nothing more.

I am not saying: you, yes you, have to talk to people who use words you can't stand or in ways you can't stand

Yes. And if I don't want to talk to people who use those words, and someone says those words to me, then I'm going to reply with something like your "No" replies. Thus, saying "Technically, everything is chemicals", is, in fact, very reasonable.

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-24T07:49:41.007Z · score: 4 (1 votes) · LW · GW
I think the species average belief for both "earth is flat" and "I will win a lottery" is much less than 0.35. That is why I am confused about your example.

Feel free to take more contentious propositions, like "there is no god" or "I should switch in Monty Hall". But, also, you seem to be talking about current beliefs, and Hanson is talking about genetic predispositions, which can be modeled as beliefs at birth. If my initial prior, before I saw any evidence, was P(earth is flat)=0.6, that doesn't mean I still believe that earth is flat. It only means that my posterior is slightly higher than someone's who saw the same evidence but started with a lower prior.
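
To put numbers on that (my own toy likelihood ratio, just for illustration): two agents who start at 0.6 and 0.35 and see the same strong evidence both end up with tiny posteriors, still ordered by their starting points.

```python
def posterior(prior, likelihood_ratio):
    """Bayes update in odds form."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

likelihood_ratio = 1e-6                   # evidence very strongly against "earth is flat"
for prior in (0.6, 0.35):
    print(prior, posterior(prior, likelihood_ratio))
# both posteriors are on the order of 10^-6, but the agent who started at 0.6 ends slightly higher
```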

Anyway, my entire point is that if you take many garbage predictions and average them out, you're not getting anything better than what you started with. Averaging only makes sense with additional assumptions. Those assumptions may sometimes be true in practice, but I don't see them stated in Hanson's paper.

I think Hanson would agree that you have to take a weighted average

No, I don't think weighing makes sense in Hanson's framework of pre-agents.

But toddlers should agree that they should be weighted less highly, since they know that they do not know much about the world.

No, idiots don't always know that they're idiots. An idiot who doesn't know it is called a "crackpot". There are plenty of those. Toddlers are also surely often overconfident, though I don't think there is a word for that.

If the topic is "Is xzxq kskw?" then it seems reasonable to say that you have no beliefs at all.

When modeling humans as Bayesians, "having no beliefs" doesn't type check. A prior is a function from propositions to probabilities and "I don't know" is not a probability. You could perhaps say that "Is xzxq kskw?" is not a valid proposition. But I'm not sure why bother. I don't see how this is relevant to Hanson's paper.

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-23T16:18:18.467Z · score: 4 (1 votes) · LW · GW
I am having trouble cashing out your example in concrete terms; what kind of propositions could behave like that? More importantly, why would they behave like that?

The propositions aren't doing anything. The dice rolls represent genetic variation (the algorithm could be less convoluted, but it felt appropriate). The propositions can be anything from "earth is flat" to "I will win a lottery". Your beliefs about these propositions depend on your initial priors, and the premise is that these can depend on your genes.

For example, you might think that evolutionary pressure causes beliefs to become more accurate when they are about topics relevant to survival/reproduction, and that the uniformity of logic means that the kind of mind that is good at having accurate beliefs on such topics is also somewhat good at having accurate beliefs on other topics.

Sure, there are reasons why we might expect the "species average" predictions not to be too bad. But there are better groups. E.g. we would surely improve the quality of our predictions if, while taking the average, we ignored the toddlers, the senile and the insane. We would improve even more if we only averaged the well educated. And if I myself am an educated and sane adult, then I can reasonably expect that I'm outperforming the "species average", even under your consideration.

But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.

If I know nothing about a topic, then I have my priors. That's what priors are. To "not have beliefs" is not a valid option in this context. If I ask you for a prediction, you should be able to say something (e.g. "0.5").

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-23T12:57:45.875Z · score: 4 (1 votes) · LW · GW

If there is no pressure on the species, then I don't particularly trust either the species average or my own prior. They are both very much questionable. So why should I switch from one questionable prior to another? It is a wasted motion.

Consider an example. Let there be N interesting propositions we want to have accurate beliefs about. Suppose that every person, at birth, rolls a six-sided die N times and then, for every proposition prop_i, sets the prior P(prop_i) = dice_roll_i/10. And now you seem to be saying that for me to set P(prop_i) = 0.35 (which is the species average) is in some way better? More accurate, presumably? Because that's the only case in which switching would make sense.
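
To make the setup concrete, here is a small simulation of exactly this scenario (my own toy code): the species average of these dice-based priors comes out to about 0.35, and neither an individual's priors nor the average carries any information about which propositions are actually true.

```python
import random

random.seed(0)
N_PROPS, N_PEOPLE = 500, 2000
truth = [random.random() < 0.5 for _ in range(N_PROPS)]          # the unknown facts
priors = [[random.randint(1, 6) / 10 for _ in range(N_PROPS)]    # one die roll per proposition
          for _ in range(N_PEOPLE)]
species_avg = [sum(person[i] for person in priors) / N_PEOPLE for i in range(N_PROPS)]

def correlation_with_truth(preds):
    mp = sum(preds) / len(preds)
    mt = sum(truth) / len(truth)
    cov = sum((p - mp) * (t - mt) for p, t in zip(preds, truth))
    ss_p = sum((p - mp) ** 2 for p in preds) ** 0.5
    ss_t = sum((t - mt) ** 2 for t in truth) ** 0.5
    return cov / (ss_p * ss_t)

print(round(sum(species_avg) / N_PROPS, 3))            # ~0.35, the species average
print(round(correlation_with_truth(priors[0]), 3))     # ~0, my dice know nothing about the truth
print(round(correlation_with_truth(species_avg), 3))   # ~0, and neither does the average
```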

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-23T12:15:33.853Z · score: 9 (2 votes) · LW · GW

Then the question is entirely about whether we expect the species average to be a good predictor. If there is an evolutionary pressure for the species to have correct beliefs about a topic, then we probably should update to the species average (this may depend on some assumptions about how evolution works). But if a topic isn't easily tested, then there probably isn't a strong pressure for it.

Another example: let's replace "species average" with "prediction market price". Then we should agree that updating our prior makes sense, because we expect prediction markets to be efficient in many cases. But if we're talking about the "species average", it seems very dubious that it's a reliable predictor. At least, the claim that we should update to the species average depends on many assumptions.

Of course, in the usual Bayesian framework, we don't update to species average. We only observe species average as evidence and then update towards it, by some amount. It sounds like Hanson wants to leave no trace of the original prior, though, which is a bit weird.

Comment by zulupineapple on Confusions Concerning Pre-Rationality · 2018-05-23T10:14:03.910Z · score: 4 (1 votes) · LW · GW

The proposition that we should be able to reason about priors of other agents is surely not contentious. The proposition that if I learn something new about my creation, I should update on that information, is also surely not contentious, although there might be some disagreement what that update looks like.

In the case of genetics, if I learned that I'm genetically predisposed to being optimistic, then I would update my beliefs the same way I would update them if I had performed a calibration and found my estimates consistently too high. That is unless I've performed calibrations in the past and know myself to be well calibrated. In that case the genetic predisposition isn't giving me any new information - I've already corrected for it. This, again, surely isn't contentious.

Although I have no idea what this has to do with "species average". Yes, I have no reason to believe that my priors are better than everybody else's, but I also have no reason to believe that the "species average" is better than my current prior (there is also the problem that "species" is an arbitrarily chosen category).

But aside from that, I struggle to understand what, in simple terms, is being disagreed about here.

Comment by zulupineapple on Decoupling vs Contextualising Norms · 2018-05-18T07:28:52.961Z · score: 4 (1 votes) · LW · GW

Ok, that's reasonable. At least I understand why you would find such explanation better.

One issue is that I worry about using the conditional probability notation. I suspect that sometimes people are unwilling to parse it. Also the "very low" and "much higher" are awkward to say. I'd much prefer something in colloquial terms.

Another issue: I worry that this is not less confusing. This is evidenced by you confusing yourself about it, twice (no, P(C|B), or P(rain|sprinkler), is not high, and it doesn't even have to be that low). I think, ultimately, listing which probabilities are "high" and which are "low" is not helpful; there should be a more general way to express the idea.

Comment by zulupineapple on Decoupling vs Contextualising Norms · 2018-05-17T20:31:19.454Z · score: 4 (1 votes) · LW · GW

So when I said "rain is the correct inference to make", you somehow read that as "P(rain) = 1"? Because I see no other explanation for why you felt the need to write entire paragraphs about what probabilities and priors are. I even explicitly mentioned priors in my comment, just to prevent a reply just like yours, but apparently that wasn't enough.

Characterizing this as “A implies C, but (A ∧ B) does not imply C” is tendentious in the extreme (not to mention so gross a simplification that it can hardly be evaluated as coherent view).

Ok. How do you think I should have explained the situation? Preferably, in less than four paragraphs?

I personally find my explanation completely clear, especially since I expected most people to be familiar with the sidewalk/rain/sprinkler example, or something similar. But then I'm aware that my judgements about clarity don't always match other people's, so I'll try to take your advice seriously.

Comment by zulupineapple on Decoupling vs Contextualising Norms · 2018-05-17T19:48:59.940Z · score: 4 (1 votes) · LW · GW

Yes, I explicitly said so earlier. And propositional logic makes no sense in this context. So I don't understand where the confusion is coming from. But if you have advice on how I could have prevented that, I'd appreciate it. Is there a better word for "implies" maybe?

Comment by zulupineapple on Decoupling vs Contextualising Norms · 2018-05-17T19:45:32.245Z · score: 4 (1 votes) · LW · GW

Maybe you're talking about the usual logic? I explained, in the very comment you first responded to, that by "X implies Y" I mean "observing X leads us to believe Y". This is a common usage, I assume, and I can't think of a better word.

And, if you see a wet sidewalk and know nothing about any sprinklers, then "rain" is the correct inference to make (depending on your priors). Surely we actually agree on that?

Comment by zulupineapple on Decoupling vs Contextualising Norms · 2018-05-17T18:39:31.364Z · score: -2 (3 votes) · LW · GW

If you think there is something funny about low decoupling, then you're probably strawmanning it. Or maybe it was a straw man all along, and I'm erroneously using that term to refer to something real.

or you're genuinely living life by low-decoupling norms.

I can't say that I do. But I try to. Because high decoupling leads to being wrong.

Comment by zulupineapple on Decoupling vs Contextualising Norms · 2018-05-17T18:29:50.632Z · score: -1 (2 votes) · LW · GW

When B is not known, or is known to be false, A implies C; when it is known to be true, A&B do not imply C. Surely we have no actual disagreement here, and I only somehow failed to make clear that, before I introduced B, it wasn't known?
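
To put numbers on that claim (my own made-up probabilities, with A = wet sidewalk, B = the sprinkler ran, C = rain): observing A alone raises the probability of C well above its prior, while observing A together with B leaves it near the prior.

```python
from itertools import product

P_rain, P_sprinkler = 0.3, 0.1    # independent priors (made-up numbers)
P_wet = {                         # P(wet sidewalk | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def prob(rain=None, sprinkler=None, wet=None):
    """Sum the joint probability over all worlds consistent with the stated facts."""
    total = 0.0
    for r, s, w in product((True, False), repeat=3):
        if rain is not None and r != rain: continue
        if sprinkler is not None and s != sprinkler: continue
        if wet is not None and w != wet: continue
        p = (P_rain if r else 1 - P_rain) * (P_sprinkler if s else 1 - P_sprinkler)
        p *= P_wet[(r, s)] if w else 1 - P_wet[(r, s)]
        total += p
    return total

print(P_rain)                                                    # prior P(C) = 0.3
print(prob(rain=True, wet=True) / prob(wet=True))                # ~0.76: A alone supports C
print(prob(rain=True, sprinkler=True, wet=True)
      / prob(sprinkler=True, wet=True))                          # ~0.35: A & B leave C near its prior
```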