Posts

Suspiciously balanced evidence 2020-02-12T17:04:20.516Z
"Future of Go" summit with AlphaGo 2017-04-10T11:10:40.249Z
Buying happiness 2016-06-16T17:08:53.802Z
AlphaGo versus Lee Sedol 2016-03-09T12:22:53.237Z
[LINK] "The current state of machine intelligence" 2015-12-16T15:22:26.596Z
Scott Aaronson: Common knowledge and Aumann's agreement theorem 2015-08-17T08:41:45.179Z
Group Rationality Diary, March 22 to April 4 2015-03-23T12:17:27.193Z
Group Rationality Diary, March 1-21 2015-03-06T15:29:01.325Z
Open thread, September 15-21, 2014 2014-09-15T12:24:53.165Z
Proportional Giving 2014-03-02T21:09:07.597Z
A few remarks about mass-downvoting 2014-02-13T17:06:43.216Z
[Link] False memories of fabricated political events 2013-02-10T22:25:15.535Z
[LINK] Breaking the illusion of understanding 2012-10-26T23:09:25.790Z
The Problem of Thinking Too Much [LINK] 2012-04-27T14:31:26.552Z
General textbook comparison thread 2011-08-26T13:27:35.095Z
Harry Potter and the Methods of Rationality discussion thread, part 4 2010-10-07T21:12:58.038Z
The uniquely awful example of theism 2009-04-10T00:30:08.149Z
Voting etiquette 2009-04-05T14:28:31.031Z
Open Thread: April 2009 2009-04-03T13:57:49.099Z

Comments

Comment by gjm on A one-question Turing test for GPT-3 · 2022-01-23T02:47:10.020Z · LW · GW

A jar of mayonnaise would work tolerably: put it on top of a corner of whichever side of the book is tending to swing over and close.

(I agree that I would expect most humans to do better than almost all the AI responses shown here.)

Comment by gjm on Entropy isn't sufficient to measure password strength · 2022-01-18T17:48:08.123Z · LW · GW

The function is cdf. The way it's used in the expected-utility calculation is that it's applied to 1/p where p is the probability of a given password. My original use of the term "probability" for the reciprocal of the thing fed to the cdf function was needlessly confusing, which is why I dropped it in the rewrite.

since in my original comment, P is not the least probable password the attacker has the resources to try.

In your original comment, P is the probability of a particular password. (I say this just to confirm that I do, and did, understand that.)

But if we are going to explain what the cdf function actually is, we need to say something of the form "cdf(R) is the fraction -- or, in the case of improper not-exactly-probability-distributions, something more like the total number -- of attackers for whom ...". And I think the correct way to fill in that "..." is something like "when they crack passwords, we expect that the least probable password they're likely to crack has probability 1/R". (Right?)

In other words, I'm trying to be more explicit about what "adversary capabilities" actually cashes out to, and I think that's what it is.

Your more-explicit formalization of the calculation agrees with my understanding; to whatever extent you feel that what I'm describing is different from what you're describing, I am pretty confident the cause is not that we have different understandings of the mathematics at that point. I think it's you misunderstanding me / me communicating badly, not me misunderstanding you / you communicating badly.

(It is a lamentable misfeature of our language that -- so far as I can tell -- there is no good way to say "what's going on here is that what A is trying to say is not what B is interpreting it as" that doesn't tend to assign blame to one or other party. You have to call it misunderstanding (implicitly blaming B) or miscommunicating (implicitly blaming A). But it takes two to tango and communication failures often involve suboptimality at both ends, and even in cases where it doesn't assigning/taking blame is often an irrelevant distraction.)

Comment by gjm on (briefly) RaDVaC and SMTM, two things we should be doing · 2022-01-18T12:16:44.166Z · LW · GW

Thanks for the detail. I think that means that the answers to my questions are "no, not at all, for the more recent versions of RaDVaC, and in fact for earlier versions what we have is evidence that it usually doesn't produce any immune response", and "no, not at all, for any version of RaDVaC".

For the avoidance of doubt, I'm not saying that this means you're bad people, or that it means your thing doesn't work, or that it means no one should give you money. I'm just trying to assess how likely we should think it is that what you're doing will turn out to be useful.

I think, in particular, that when Eliezer (in the OP) says

They've pretty much proved themselves.  They should be nurtured and scaled up to where they can start to replace US and Earth defunct state capacity to do the R&D that leads up to being able to rapidly design new vaccines that rapidly scale in production and deployment.

that first sentence is 100% false unless "proved themselves" means something like "proved that they are smart people" or "proved that they are acting in good faith" rather than "proved that they have something that works", which means that the second is waaaay premature.

Which, again, doesn't mean that the RaDVaC project shouldn't be nurtured. Maybe it should. All science and technology starts out unproven, after all.

Eliezer, if you happen to be reading this: Do you, in the light of the discussion above, still consider that "they've pretty much proved themselves", and are you still confident that the project needs only nurturing and scaling-up before it is ready to start replacing the planet's existing vaccine R&D efforts? If so, what is it that you think I'm missing?

Comment by gjm on Entropy isn't sufficient to measure password strength · 2022-01-18T01:04:01.754Z · LW · GW

It's a separate independent parameter for me too. My intention (though of course I may have screwed up) is that I'm describing the same thing as you are :-).

So, suppose you have a particular probability distribution for the number of guesses an attacker has. I am describing that by saying what its pdf is as a function of 1/#guesses (the thing I'm calling p). I call this f(p). You're describing it by saying what its cdf is as a function of #guesses (the thing you're calling 1/P). You call this cdf(1/P). These two formulations are equivalent[1].

[1] Well, maybe there are some weird technical issues if the cdf is sufficiently horribly non-differentiable -- a Cantor staircase or something of the sort.

Why start with f(p) rather than cdf(1/P)? No particular reason, I guess :-). It would probably have been cleaner just to define q=1/p (as I did) and then work with either the pdf or the cdf as a function of q.

So, let me try again -- I am not saying anything different, or at least not intending to, just cleaning up my notation and language and so forth -- and see whether you find the result (1) comprehensible and (2) compatible with what you said before. (Again, to be clear, I am claiming only to explicate what you already said, which I found excellent but a bit terse.)


A little more detail seems worth providing. Let's say that an attacker has "resources R" if they are willing and able to crack passwords with probabilities at least 1/R. (Is this a realistic model? I think it is provided R is fairly large, which in practice it usually will be, provided attackers know the probability distribution we are using. If I am an attacker and know e.g. that passwords of type A are used 10x more often than passwords of type B, then I will use 10x more of my resources on type-A passwords. This falls down if I have enough resources that I literally can't do that without repeating passwords, but for actually-realistic password distributions and attackers who are likely attacking multiple victims at once this is not a problem.)

So, we can describe the distribution of attackers in terms of the pdf f(R). In some sense, as described above, values of R are reciprocals of password-probabilities; the corresponding pdf over probabilities is f(1/p)/p^2 where p=1/R. If we start with the cdf F(R) instead, the pdf is F'(R), the pdf over probabilities is F'(1/p)/p^2, and of course the cdf for probabilities is just 1-F(1/p).
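
(For anyone who wants the change of variables spelled out: it is just the usual substitution p = 1/R, writing g for the induced pdf over probabilities; nothing here goes beyond what is stated above.)

```latex
g(p) \;=\; f\!\left(\frac{1}{p}\right)\left|\frac{dR}{dp}\right| \;=\; \frac{f(1/p)}{p^{2}},
\qquad
\Pr(\text{probability} \le p) \;=\; \Pr\!\left(R \ge \frac{1}{p}\right) \;=\; 1 - F\!\left(\frac{1}{p}\right).
```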

In each case, tailcalled's prescription is to take E[cdf(1/p)], the expected fraction of attackers who fail to crack our password, as our figure of merit. The expectation is taken over our passwords with the probability distribution we are using (the probability distribution over passwords, not over attackers) so it equals the sum of p·F(1/p) where F is the cdf for attacker-resources.

These things I've called "pdfs" and "cdfs" may not actually integrate/tend to 1; we may leave them unnormalized, or indeed they may be improper (so that the integral of the pdf / limiting value of the cdf is infinite). What we are doing with these is just maximizing/minimizing some expectation involving them; rescaling the values of the pdf/cdf doesn't change the result (so we needn't care about normalization) and in many cases the expectation will converge even if the pdf/cdf is improper.

Let's look more explicitly at the examples tailcalled mentioned. A uniform distribution for R, which of course is very improper indeed, corresponds to F(R)=R or f(R)=1. The corresponding pdf for p=1/R is 1/p^2. A uniform distribution for log R, still improper, corresponds to F(R) = log R or f(R) = 1/R. The corresponding pdf for p=1/R is 1/p. This is exactly the case in which expected utility = entropy.

(As tailcalled implied but didn't quite say explicitly, the fact that these are improper is what enables the pathology described by OP: as R gets large the cdf -> infinity, so a modest probability of picking extremely-low-probability passwords can give an arbitrarily large positive contribution to the expectation we're trying to maximize, even if most of the time we do very badly.)

We looked at pdf(p) = 1/p^2 and pdf(p) = 1/p; let's go one step further and take pdf(p)=1, corresponding to a uniform distribution over 1/R; cdf(R) = 1-1/R, and pdf(R) = 1/R^2. Our figure of merit is E[cdf(1/p)] = E[1-p] = 1-E[p], so the expected negative utility we get is (aside from irrelevant constant offsets and scalings) E[p] = sum(p^2).
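
If a concrete check is helpful, here is a minimal Python sketch (the password distribution is made up purely for illustration) verifying that the figure of merit E[cdf(1/p)] comes out as the number of possible passwords, the entropy, and 1 - sum(p^2) for the three attacker cdfs just discussed.

```python
import numpy as np

def figure_of_merit(probs, F):
    # E[F(1/p)], the expectation taken over the password distribution `probs`.
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(probs * F(1.0 / probs)))

# Made-up example: one popular password plus 1000 equally rare ones.
probs = np.array([0.5] + [0.5 / 1000] * 1000)

# Uniform over R (improper): F(R) = R, so E[F(1/p)] = E[1/p] = number of possible passwords.
print(figure_of_merit(probs, lambda R: R))              # 1001.0
# Uniform over log R (improper): F(R) = log2(R), so E[F(1/p)] = entropy in bits.
print(figure_of_merit(probs, np.log2))                  # ~5.98
print(-np.sum(probs * np.log2(probs)))                  # same value, computed directly
# Uniform over 1/R (proper): F(R) = 1 - 1/R, so E[F(1/p)] = 1 - sum(p^2).
print(figure_of_merit(probs, lambda R: 1.0 - 1.0 / R))  # ~0.74975
print(1.0 - np.sum(probs ** 2))                         # same value, computed directly
```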


(At this point, continuing from "How does this figure of demerit do ..." in the original version of what I wrote seems fine, except that the final paragraph of what I already wrote is effectively incorporated above.)

Comment by gjm on Entropy isn't sufficient to measure password strength · 2022-01-17T19:35:10.854Z · LW · GW

I said "probabilities" but should really have said something like "reciprocals of numbers of trials". The thing you called P, when you wrote things like "E[1/P]" and "E[log 1/P]". This has the same "units" as probability; if you wanted to express it in explicitly probabilistic terms it would be something like "least probable password the attacker has the resources to try".

Comment by gjm on Working through D&D.Sci, problem 2 (solution) · 2022-01-17T19:29:55.762Z · LW · GW

You asked what Guy and I were looking for that led us to notice the prime-factors thing. In my case, the answer is just that by training I'm a pure mathematician and I can't help noticing that sort of thing. For whatever reason it didn't occur to me to, e.g., look at the distributions of the exponents of 2, 3, and 5 or anything like that, not that doing so would have led me to pick different items or anything.

(I was glad to see that you liked "Wakalix Maketh it Goe".)

Comment by gjm on Entropy isn't sufficient to measure password strength · 2022-01-17T17:03:38.431Z · LW · GW

A little more detail seems worth providing. If you have a pdf (over probabilities) f(p), which may or may not actually integrate to 1 or to any finite value, then the corresponding pdf for our measure of attacker-capability q=1/p is f(1/q)/q^2; the relevant CDF here is the integral from 0 to q of this.

Conversely, if we start with a given cdf F(q), its pdf is F'(q), and the corresponding pdf for p is F'(1/p)/p^2.

So a uniform distribution over q (obviously improper) corresponds to F(q) = q and a pdf of 1/p^2. A uniform distribution over log q (again, obviously improper) corresponds to F(q) = log q and a pdf of 1/p.

If we continue this "one step further", a (no longer improper) uniform prior on p means F'(1/p)/p^2 = const or F'(q)=const/q^2 or F(q) = const/q; obviously the constant should be negative. That is, this would mean trying to minimize the expected probability of the password we pick. Equivalently, trying to minimize sum(p^2) over all our possible passwords. So, in particular, for a fixed set of passwords this says to pick them all with equal probability.

How does this figure of demerit do for the example in the OP? Let's generalize it a little, so that with probability p we pick one particular password and with probability 1-p we pick (uniformly) one of N passwords.

  • The entropy here is p log(1/p) + (1-p) log(N/(1-p)). If we call this E(p,N) then note that E(1/2, N^2) >= E(0, N), so setting p=1/2 can be "fixed" by making N large enough (roughly squaring it).
  • The sum-of-squared-probabilities is p^2 + (1-p)^2/N. When p=0 this is 1/N. When p>0, even in the limit of infinite N it's still p^2. So for any N, if you increase p beyond 1/sqrt(N) you cannot hope to compensate for this by increasing N further.

The implied model of our attackers here is that the probability that an attacker can afford to guess at least N passwords goes down in proportion to 1/N. Not obviously very realistic, but not obviously worse than the improper priors considered above either :-).
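
To make the contrast in the two bullet points above concrete, here is a small numerical illustration (Python, with arbitrary made-up sizes): the entropy deficit from p=1/2 can be bought back by making N enormous, while the sum of squared probabilities stays pinned above p^2 however large N gets.

```python
import numpy as np

def entropy_bits(probs):
    probs = np.asarray(probs, dtype=float)
    return float(-np.sum(probs * np.log2(probs)))

def sum_of_squares(probs):
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(probs ** 2))

def two_block(p, N):
    # With probability p pick one fixed password; otherwise pick uniformly among N others.
    return np.array([p] + [(1.0 - p) / N] * N)

baseline = np.full(1000, 1 / 1000)   # uniform over 1000 passwords
bad = two_block(0.5, 1000)           # half the time we pick the same single password

print(entropy_bits(baseline), sum_of_squares(baseline))  # ~9.97 bits, 0.001
print(entropy_bits(bad), sum_of_squares(bad))            # ~5.98 bits, ~0.25025

# Entropy can be "repaired" by making N huge (roughly squaring it)...
bad_but_huge = two_block(0.5, 1000 ** 2)
print(entropy_bits(bad_but_huge))    # ~10.97 bits, now above the baseline's ~9.97
# ...but the sum of squared probabilities never drops below p^2 = 0.25.
print(sum_of_squares(bad_but_huge))  # ~0.25000025
```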

tailcalled alluded to the following but didn't make it explicit: If your prior is improper then you can make that CDF get unboundedly large, which means you can compensate for badness elsewhere that way, which means the pathology OP is about can happen; if your prior is proper then you can't. So propriety is in some sense exactly the property we want here.

[EDITED to add two things:] 1. After some discussion with tailcalled, there's a rewritten and hopefully clearer version of most of the above in a comment of mine further downthread. 2. I see that this one has received someone's strong-downvote. For the avoidance of doubt, I'm not complaining; you should strong-downvote things that you think are very bad; but it would be much more useful information if I understood what the problem is, and right now I don't. Did I make a mathematical error? Was my writing too unclear to be useful (e.g., because "probabilities" was a bad way to refer to 1/attacker_capacity, or because looking at things as a function of p = 1/attacker_capacity is needlessly confusing)? Did it feel as if I was trying to restate what tailcalled already said and take credit for it? Etc. Thanks!

Comment by gjm on Guidelines for cold messaging people · 2022-01-17T14:37:45.094Z · LW · GW

It's not just that they're thoughtlessly maximizing responses at the expense of recipients' time: recipients are liable to notice this, be annoyed by it, and therefore be less likely to respond as a result. So maybe they're (in some sense) trying to maximize responses and not caring about recipients' time, but in practice they aren't even doing that; they could probably get more responses by being a bit more thoughtful and polite.

Comment by gjm on (briefly) RaDVaC and SMTM, two things we should be doing · 2022-01-16T13:49:54.256Z · LW · GW

Is there any empirical evidence yet that RaDVaC's alleged vaccines are in any way effective in making those who use them less likely to get COVID-19, or less likely to get it badly?

[EDITED to add:] I did take a look on the RaDVaC website, and if I'm understanding the FAQ there correctly the answer is (1) you have "positive indications" from your "preliminary assays" (I am guessing this means: very weak evidence?) that RaDVaC elicits some sort of immune response in the nose, and (2) you have no direct evidence at all that it has any effect on either whether one gets COVID-19 or how severe its effects are if you do. But maybe you can say more here than you have on your website?

Comment by gjm on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-14T16:34:24.116Z · LW · GW

My immediate (kneejerk System-1-ish emotional) reaction to the experimental voting popup is along the lines of "meh, too much effort, won't bother".

My slightly less immediate reaction turns out to be much the same. So e.g. I think "I should give this a try", take a look at the comment currently presented to me first, and ... well, there are now 9x as many things to decide about it as before (overall opinion, 4 "axes", and 4 possible reaction emoji), and all but the first feel as if they require substantially more mental work to arrive at a useful opinion about, and I just don't wanna.

But!

In practice, the way I usually use the LW karma system (other than "implicit" uses like reading the comments in the order the UI presents them to me in, which is affected by karma) is something like this: I vote only when something I read strikes me as especially good or bad, in which case I consider briefly whether it's good/bad enough to be worth voting on, and whether if so it warrants a strong up/down-vote. This process seems like it might cope fine with the more complex interface -- if I've been particularly impressed or unimpressed by something I'll have some sense of what about it is impressive or unimpressive to me, and having a finer-grained way to express that seems like a gain rather than a loss.

So it may be that the answers to the questions "is explicitly deciding what to do with a particular post or comment easier or harder with this experiment?" and "is voting as I actually do it easier or harder with this experiment?" are substantially different.

[EDITED to fix an inconsequential typo.]

Comment by gjm on Open Thread - Jan 2022 [Vote Experiment!] · 2022-01-14T15:42:28.638Z · LW · GW

I'm not sure what it says about LW that in the current Open Thread there is only one comment that isn't either (1) about the voting-system experiment or (2) deleted.

(And I slightly suspect that that one comment may have been inspired at least slightly by the voting-system experiment, not that there's anything at all wrong with that.)

Comment by gjm on Covid 1/6/22: The Blip · 2022-01-07T00:37:31.221Z · LW · GW

I also don't like "Law" for this, and cynically suspect that the use of "Law" for this sort of thing is at least sometimes at least partly an attempt to co-opt the prestige of actual laws while using "oh I just meant like Murphy" for plausible deniability.

It sounds as if you may be proposing "razor" as a general term for this sort of thing, which I don't like because there's a reason why Occam's Razor is called a razor rather than something else: it's a tool for trimming off low-value hypotheses. This particular proposition of Zeynep's is in fact razor-ish, though, so I endorse calling it a razor.

But "first" is ugly, and Zeynep doesn't use it. I suggest simply "Zeynep's Razor".

The "razor" formulation is better than "law" not only because it makes weaker claims to Universal Truth, but also because it correctly suggests a different kind of thing that this is: a methodological principle, of the form "consider these hypotheses before those ones". Of course the boundary between "consider these hypotheses first" and "... because they are more likely to be true" is a fuzzy one.

Comment by gjm on Scott Alexander 2021 Predictions: Market Prices - Resolution · 2022-01-02T22:29:59.175Z · LW · GW

I think the post should mention the fact that Zvi's forecasts were made after reading Scott's.

Comment by gjm on Religion's Claim to be Non-Disprovable · 2022-01-02T09:27:21.176Z · LW · GW

[EDITED to add: The following was written in response to a comment that has now been deleted -- not, I believe, by its author. The comment took exception to what Eliezer said about moral horrors in the Bible.] 

The problem with biblical ethics isn't that the Bible describes things that we now find morally terrible, it's that it endorses things that we now find morally terrible while claiming (or being claimed) to speak authoritatively for a perfectly good god.

So no one is complaining that the Bible says Joseph was sold into slavery by his brothers. They're complaining about things like Exodus 21:7-10 (where God, speaking through Moses, provides instructions on how best to sell your daughter as a sex slave) or 1 Chronicles 13:9-10 (where the Ark of the Covenant is being transported, and the oxen stumble, and some well-meaning chap puts out a hand to steady it -- and God himself strikes him dead for it) or 1 Samuel 15:1-3 (where God, speaking through Samuel, commands an outright genocide of the Amalekites as a punishment for something their ancestors had done) or Deuteronomy 22:28-29 (where God, speaking through Moses, commands that when a man rapes a woman who is neither married nor betrothed, he pays a fine to her father and she becomes his wife). These aren't merely reporting that moral atrocities happened; they are endorsing and even commanding moral atrocities in the name of God.

From which a reasonable person will conclude that one of three things is true. (1) These documents are not in fact providing reliable information about the god they claim to speak of and speak for. (2) The god in question has moral values so different from ours that we should very decidedly not be worshipping that god. Or (3) the god in question has moral values that have changed radically since those documents were written, in something like the same way as humans' values have changed. Of these, #1 seems much the most likely, and there is plenty more evidence for it (e.g., the discrepancies between biblical accounts of historical events and the best information we can find about them from other sources) but any of them would mean that these documents do not give us a trustworthy account of how we ought to live.

Comment by gjm on Leave a Line of Retreat · 2021-12-28T00:26:12.648Z · LW · GW

Evangelism and creationism don't tend to go down very well here, but you know what's likely to go down even less well? Claiming to have conclusive evidence against things near-universally believed here (e.g., evolution) and not bothering to provide us with any of it.

I don't want to mislead you; if you do tell us some of the things you regard as demonstrating that evolution is "a fairy tale", those things are not likely to get the sort of reception you would prefer them to get. (I say: because you're claiming to offer conclusive evidence of something that is in fact false, and of course this conclusive evidence is likely to be much worse than you think it is. You might have other explanations.) But just turning up and saying "I know that you guys are catastrophically wrong" but not saying why? Hopeless. That's not what you do when you actually want to help. It's what you do when you want to gloat.

(You may not be aware of how smug what you wrote comes off as being, to those who don't already agree with you. That's kinda fair, because I am very confident that a lot of things here that assert or presuppose atheism come across as equally smug to you. But, again, if you are actually hoping to help anyone escape from darkness and ignorance, you might want to avoid coming across as smug. But I'm not sure you are. After all, you believe in a god who might well "choose to hinder their understanding". Why, believing in such a god, you find yourself willing to believe anything that god is purported to have revealed to you, I don't really know. But if that's the sort of god you believe in, it's not surprising if your belief that you are in the light and we are in the dark leads you to gloat rather than to try to enlighten.)

Anyway, I just thought it might be helpful to offer a few words of explanation of the torrent of downvotes you will likely receive if anyone else actually reads what you wrote. I expect you will think of other explanations which are more flattering to you and to your religion, and you may prefer to believe those, but I wouldn't want not to have tried.

Comment by gjm on Quis cancellat ipsos cancellores? · 2021-12-22T03:33:33.509Z · LW · GW

Thanks. I am not sure I would trust the author of the post (1) not to lie outright and (2) to kinda-not-exactly-lie by taking the view that one doesn't know oneself, but your report is certainly some evidence that they are two different people (with, apparently, somewhat similar attitudes to Aella and to Leverage).

Comment by gjm on Quis cancellat ipsos cancellores? · 2021-12-21T22:31:02.480Z · LW · GW

Is it known whether the author is the same person who commented on the "Frame Control" post under the name of "blueiris" (and, after that account was banned, "blueiris2", and then "blueiris3") complaining about how Aella was taking unfair advantage of her feminine wiles to get her stuff well received on Less Wrong?

Comment by gjm on Covid 12/16: On Your Marks · 2021-12-17T21:38:03.118Z · LW · GW

Yes, the basic literal meaning of the word "late" is that something should have happened earlier. But not all actual uses of it have exactly that meaning, and I think Scott's use of it at the start of his post is an example.

(I agree with GWS's reading of Scott's opening sentence, and also with his examples of the sort of thing Scott might have written if he had actually intended to tell us that he ought to have written a post about ivermectin much sooner and that he hadn't because he was scared.)

I did not disagree that Scott thought there was an "early promising phase". What I was asking was where Scott said, as you implied he did, that "the meta-analysis we have points in the direction of ivermectin working" was the state of evidence two months ago. I take your reply as confirming that he did not say that, as I suspected he probably hadn't.

He does say that rationalists did no better than anyone else. That doesn't mean that he, personally, believed that there was good evidence that ivermectin is substantially helpful against Covid-19. (Both because "rationalists didn't do better" and "no individual rationalist did better" are entirely different propositions, and because I was questioning your account of what he said at that point, not your account of what he thought. Though I don't think much of your account of what he thought either.) Maybe he did believe that, maybe not; I don't know; but he does not appear to have said that he believed that, and you said that he did. I don't think you should do that.

(Another thing I don't think you should do: quote what I wrote with a bunch of important words removed from it, so that it looks as if I said "Scott's position on ivermectin as described in his big blog post about it is X" when in fact what I did was to say that you were claiming that and I think you were wrong.)

Comment by gjm on Covid 12/16: On Your Marks · 2021-12-17T15:25:31.235Z · LW · GW

Where does he say "that he should have written earlier"?

So far as I can tell, you're referring to the opening of his piece where he says

I know I’m two months late here. Everyone’s already made up their mind and moved on to other things.

He may be saying "you might have preferred me to write this two months earlier". That is not at all the same as "I didn't dare say anything earlier", and it is not at all the same as "it would in fact have been better for me to write this two months ago". Also, two months before Scott's post on ivermectin is not obviously the early "looks promising so far" phase; stories were going around crediting ivermectin for the fairly short duration of India's huge Delta wave back in May.

The rest of what you say is just speculating about Scott's motivations. Maybe your speculation is right, maybe not (I think probably not, but of course I could be wrong) but in any case it's entirely separate from what you said above, which claims that Scott's position on ivermectin as described in his big post about it is that there was an early promising phase in which he dared not say anything about it in public. So far as I can tell, that's entirely false because (1) Scott didn't say that he was scared to write about ivermectin in public and (2) Scott's position on ivermectin as described in his big post about it is a bunch of propositions about ivermectin, not about Scott's mental state months ago.

I do not profess to know why Scott didn't write about ivermectin two months (or four months, or six months) ago. I can think of plenty of reasons that don't involve the dishonest motivations you're trying to pin on him.

Incidentally, when you say

that was the state of evidence according to Scott

what "according to Scott" are you referring to?

Comment by gjm on Covid 12/16: On Your Marks · 2021-12-17T11:47:40.022Z · LW · GW

That seems to me a staggeringly inaccurate characterization of Scott's post about ivermectin. (He does mention an "early phase of 'looks promising so far'". He doesn't, I think, say that during that phase he didn't dare say anything publicly about the topic. And none of that is "Scott's position on ivermectin" as opposed to "one incidental remark while Scott is describing his actual position on ivermectin". You can find his actual position right at the end of that post under the words "The Summary".)

Comment by gjm on Leaving Orbit · 2021-12-10T01:55:09.832Z · LW · GW

On the one hand, I think there's a lot of truth in what you say.

On the other hand, discussions between reasonable people should not be dominance contests, and being shown that you were previously wrong (and hence becoming ... how shall I put it? ... less wrong) should be a thing you're glad of rather than a humiliating defeat. Our stupid monkey-brains make it difficult to operate that way, but we should damn well be trying to.

(Whereas ju-jitsu fights are, I take it, always and essentially dominance contests; a fight is a thing you win or lose, and there's really no way it could turn into some sort of in-principle-cooperative search for the best ju-jitsu moves or anything like that, without ceasing to be a fight.)

((This suggests a possibly useful analogy. Consider another usually competitive endeavour, namely chess. Suppose Magnus Carlsen and Ian Nepomniatchi decide, once Magnus has finished slapping Ian around in the world championship, to do some deep analysis and try to figure out who's winning in some particular line of the Petroff Defence. They might do it by sitting opposite one another at the board, with Carlsen playing the white pieces and Nepomniatchi playing the black pieces, and both trying to get the best position they can. But unlike the match they are playing right now, what they are doing will not, or should not, be a dominance contest, and sometimes Nepomniatchi will suggest a better move for white or Carlsen will suggest a better move for black, and if Carlsen keeps coming out with a clearly better position their conclusion will be "this opening is good for white", not "Carlsen is stronger than Nepomniatchi". And they will be better able to do this if, e.g., when one of them starts to get tired they can just say "I've had enough for now" without any presumption that they're just trying to avoid getting their ego bruised by the other's manifest superiority. I think intellectual argument is, or should be, more like chess than like ju-jitsu, and at least some of the time its goal should be "find the truth" rather than "beat the enemy". And I think this idea is quite central to what Less Wrong is supposed to be about.))

In pursuit of that noble-if-possibly-overoptimistic goal, there is value in having norms that explicitly make discussions less dominance-contest-y. "You can leave a discussion without either claiming victory or admitting defeat" is an attempt at such a norm.

Establishing such a norm, even if possible, wouldn't completely stop "tapping out", whatever language we use for it, being perceived as an admission of defeat, or as a dishonest attempt to avoid admitting defeat. Monkey-brains, and all that. I don't think that should stop us trying.

It might, as you say, make it harder to learn via the path of getting beaten and feeling the pain. It seems plausible to me that that's outweighed by the benefits of making conversation less adversarial.

(I'm more bothered by the problem someone else mentioned, that you can't simultaneously have a norm of "anyone can leave at any time and it's not an admission of defeat or anything" and a norm of "you can reasonably expect that if you get into a discussion it won't abruptly get dropped for no adequate reason", and those are both reasonable things to want.)

Comment by gjm on Leaving Orbit · 2021-12-07T04:08:29.482Z · LW · GW

Yes, but at the cost of being actively misleading to uninitiated readers because in its original context "tapping out" means admitting defeat, whereas a large point of the LW usage is to have a way of saying "I'm outta here, and I am explicitly not either admitting defeat or claiming victory".

Comment by gjm on Leaving Orbit · 2021-12-06T23:45:27.605Z · LW · GW

There's another term already used on LW for (I think) exactly this purpose: "tapping out".

"Tapping out" derives, I think, from a context where it specifically indicates acknowledging defeat, whereas the LW term specifically wants not to. "Leaving orbit" is more neutral in that respect.

Neither of them really conveys to an uninitiated reader what the intended meaning is. That may be an unavoidable problem, though.

Comment by gjm on Frame Control · 2021-12-01T01:41:33.087Z · LW · GW

I guess that's addressed to me?

I have not knowingly been "influenced by Aella" in any sense other than having read a few things she wrote and, so far as I can recall (which is not very far) found them interesting.

I find that I am not sure I believe you when you say "No affiliation". I'm not sure there's anything useful you can do with this information, which I regret, but you might want to know that the impression you're giving is very much not one of honesty and reliability.

Comment by gjm on Frame Control · 2021-11-29T13:27:04.870Z · LW · GW

While you're making accusations about "having an agenda with the leverage/paradigm thing": would you kindly disclose any connection you have with Leverage/Paradigm/Geoff?

(I reckon p=0.2 you are Literal Geoff Anders, and p=0.65 you are some other Leverage-associated person trying to do damage control.)

Also: you apparently consider that any time Aella talks about anything she has unfairly outsized influence because of being "a smart rationalist porn star" etc. etc. -- but what's she supposed to do about this? Never say anything about anything for fear of being too persuasive?

Let's suppose you're right that this is really all about Leverage, and look at some parallels. Geoff Anders is, by all accounts, charismatic and persuasive. He has an "aspirational narrative" of his own (I dunno, maybe he's abandoned it lately in the light of the bad publicity Leverage has had?) about understanding psychology better and developing better mental tools and saving the world. If everything you say about Aella's unfair advantages in persuasion is true, and if she's setting herself against Leverage ... well, it sounds like a fair fight to me overall, even if Leverage is now not well placed to defend itself. Geoff had more power in the past; maybe Aella has more power now; if Aella thinks Geoff is still trying to do harm, why shouldn't she try to stop that using the advantages she (allegedly) has that are parallel to the advantages that helped Geoff do harm in the past?

(Since this has become super-confrontational, my own position: Leverage sounds super-sketchy to me but I have no personal experience with them and I am not physically on the same continent as them; the tweet from Geoff Anders linked in the OP is obviously every bit as bad as Aella says it is and would on its own suffice to convince me that I never ever want to have anything to do with the person who made it; I am a heterosexual man but have literally no idea what Aella looks like and am therefore unlikely to be being influenced much by her alleged hotness or pornstarriness; I agree that OP can itself be seen as an instance of attempted "frame control" and that in some sense "frame control" is happening all the time in a large fraction of interactions; I think it is none the less valuable to have (1) a term for that thing and (2) more common knowledge that it's a thing that can be done abusively, which it absolutely is.)

Comment by gjm on Omicron Variant Post #1: We’re F***ed, It’s Never Over · 2021-11-28T12:25:08.260Z · LW · GW

The Times of India claims that the chairwoman of the South African Medical Association claims that Omicron is producing only "mild disease". There are lots of ways for that to turn out to be wrong, but if it's true it might be very good news: a new variant that's infectious enough to take over quickly and that does hardly any damage when it infects you.

Comment by gjm on French long COVID study: Belief vs Infection · 2021-11-24T00:23:37.535Z · LW · GW

It seems (on the basis of what you say here; I haven't looked at the actual study) as if everything is consistent with the following situation:

  • "Long COVID" symptoms other than anosmia/parosmia are caused by believing you have had COVID-19.
  • Actually having COVID-19 makes you more likely to believe you have had COVID-19.
  • This is how it comes about that "having COVID on average gives you ~0.2 persistent symptoms vs not having COVID".

Does the study give detailed enough numbers to distinguish this scenario from one where the disease causes the symptoms by "non-psychological" mechanisms?

Comment by gjm on The Meta-Puzzle · 2021-11-22T18:59:16.012Z · LW · GW

Relevant post from the Sequences: The Parable of the Dagger. (It's not making the exact same point as this one, but it's in the same territory.)

Comment by gjm on Awesome-github Post-Scarcity List · 2021-11-22T00:36:58.511Z · LW · GW

I did find the awesome-list github repo, but didn't have the cultural context of "awesome list" just meaning "list of things I am impressed by". If I had looked more carefully at the github repo I would have seen that it says "a list of awesome things" and should at that point have figured out what's going on.

It seems a pity to confuse "list that is awesome" with "list of things that are awesome", but I guess what's done is done.

Comment by gjm on Awesome-github Post-Scarcity List · 2021-11-21T00:22:59.938Z · LW · GW

I don't have any problem with the content of your list. I just don't like the boasty self-promotional feel of titling a thing "awesome". It feels like it would feel if I made my LW username "gjm, who is very intelligent", or if I went to buy a dictionary and found that they all had names like "The Best English Dictionary". To me, "awesome" is a label for other people to attach to you or your work, if they're impressed, not something you put on your own work to promote it. Not least because "the person who made this thinks it's good" conveys much less information than "some other people think this is good".

I can see that my comment got a downvote, so clearly at least one person doesn't feel that way :-). Maybe it's a cultural thing? I'm in the UK, where there's a general tradition of self-deprecation and irony and the like; I have the impression that things are a bit different in e.g. the US.

Comment by gjm on Awesome-github Post-Scarcity List · 2021-11-20T10:39:31.459Z · LW · GW

"Awesome"? Really?

I had a quick look at the list. It seems like it's probably a good list. But it didn't inspire awe. Nor did it make me go "wow, this is a startlingly good list", which seems like it should be a minimal criterion for calling something "awesome".

(I wondered whether maybe "Awesome" is the name of a list-making app or something, but it doesn't look like it. It does seem to be the name of a badge thing that some people attach to lists they've made. There's a curated central repository of lists-with-this-badge, but 1. it has thousands of lists on it and 2. this "post-scarcity list" doesn't seem to be one of them.)

Comment by gjm on [deleted post] 2021-11-19T22:43:34.440Z

Hey lsusr, I notice that you quoted something I wrote, but that

  • you deleted the bit where I put "(steven0461 suggests)", to make it look as if I was endorsing a position that I was only describing
  • neither steven0461's comments which I was trying to clarify, nor what I said about them, are in any way examples of the process you describe immediately before quoting me without further comment
  • your argument suddenly goes a bit weird just before the quotation, as if you had to warp it a bit to shoehorn the quotation in and make it look superficially as if it applies

Now on the one hand this is a lovely example of the point you go on to make next -- how something can be strictly accurate (I did write the words you attributed to me in the order in which you gave them) but still be dishonest propaganda (I was describing, not endorsing, and in any case none of what I said is an example of the harmful, irrational, intellectually corrupting processes you are describing at the point where you drop in a quotation from me).

But on the other hand, could we maybe _not_ do that sort of thing here?

(An earlier draft of this comment went into more detail about why I think what you did is dishonest, why I don't think what you quoted is in any way a good illustration of what you're writing about immediately before you quote it, etc., but it was long and boring and angrier than I think is conducive to productive discussion, and on reflection I think that you will likely not have any difficulty understanding what I am objecting to. But if somehow you actually think, on looking again, that what you did was honest and/or that what you quoted really is an example of the processes you describe, let me know and I'll try to clarify.)

[EDITED to fix formatting; no changes in actual content]

Comment by gjm on Thomas Kwa's Shortform · 2021-11-08T15:53:33.966Z · LW · GW

He certainly looks like one to my (itself rather inexpert) eye.

Comment by gjm on D&D.Sci Dungeoncrawling: The Crown of Command · 2021-11-08T00:50:31.146Z · LW · GW

Are we supposed to understand that each dungeon name is unique (despite the rather weird cartesian-product naming scheme)?

The list of dungeon-crawls in the dataset includes four for the "Lost Temple of Lemarchand", but the encounters listed for them don't seem compatible with the ones described in the text. It includes none for the "Infernal Den of Cheliax", which is a little odd. (Maybe everyone who tried got killed by the orcs, but that seems a bit unlikely.)

The text says "Goblin Tunnels" at one point and "Goblin Warrens" at another. It seems like the difference is probably significant. Are they supposed to be consistent with one another?

[EDITED to add:]

Are we supposed to understand that each dungeon has a fixed sequence of encounters, and that any given crawl meets with some subset of them, in a consistent order? Obviously any possible dataset is consistent with that hypothesis, if we allow the sequence of encounters to be large enough; just eyeballing a few cases it's not a hypothesis I'd be much inclined to make if the text didn't kinda imply that, at least as far as the three dungeons we really care about are concerned.

[EDITED to add:] I've spoiler-tagged all the above, although I don't really see how any of it would give nontrivial information to someone else attacking the challenge.

Comment by gjm on Thomas Kwa's Shortform · 2021-11-07T16:39:06.953Z · LW · GW

In nanotech? True enough, because I am not convinced that there is any domain expertise in the sort of nanotech Storrs Hall writes about. It seems like a field that consists mostly of advertising. (There is genuine science and genuine engineering in nano-stuff; for instance, MEMS really is a thing. But the sort of "let's build teeny-tiny mechanical devices, designed and built at the molecular level, which will be able to do amazing things previously-existing tech can't" that Storrs Hall has advocated seems not to have panned out.)

But more generally, that isn't so at all. What I'm looking for by way of domain expertise in a technological field is a history of demonstrated technological achievements. Storrs Hall has one such achievement that I can see, and even that is doubtful. (He founded and was "chief scientist" of a company that made software for simulating molecular dynamics. I am not in a position to tell either how well the software actually worked or how much of it was JSH's doing.) More generally, I want to see a history of demonstrated difficult accomplishments in the field, as opposed to merely writing about the field.

Selecting some random books from my shelves (literally random; I have a list and am selecting random lines from the file, discarding fiction and anthologies), I find:

  • Nigel Slater, "Real fast food". Slater is more food-journalist than chef, but there the proof is in the eating. I have made many things from his books and found that they consistently taste good and don't require excessive effort, which for me is sufficient reason to consider him a domain expert. Non-technical domain, though.
  • Alec Vidler, "Essays in liberality". A book of theological essays. Highly non-technical domain, much worse even than nanotech in terms of whether such a thing as domain expertise exists.
  • Richard Dawkins, "Unweaving the rainbow". I like Dawkins, but it's debatable whether he passes my domain-expert test; the main thing he's known for in evolutionary biology is the "selfish gene" concept, which is more a new perspective on ideas others had already had than a new discovery in its own right.
  • John Potter, "The Cambridge companion to singing". Potter is a tenor who has made >100 commercial recordings and performed in groups such as the Hilliard Ensemble. Singing well enough that anyone wants to sell your performances, or well enough to be a member of a group like the Hilliard Ensemble, is a difficult accomplishment in the field, and for this I consider him a domain expert. (Until his retirement he was also on the music faculty at a good university, but that's only indirect evidence.)
  • Peter Sagal, "The book of vice". Highly non-technical domain; author is a humourist and journalist. Not a domain expert.
  • Ruppert, Wand & Carroll, "Semiparametric regression". I'll consider only the first-named author. Author of what looks like about 100 statistical papers. Many of them are applied stats papers in journals with other subjects, suggesting that what he does is useful outside statistics itself. I had a look at one of his papers; it looks as if it is doing actual mathematics, discovering useful things about the merits of a couple of different approaches to data-smoothing. Sure looks like a domain expert.
  • H G Atkins, biography of Heinrich Heine. Non-technical domain. All I can easily find out about the author is that he was a professor of German at a good English university. Writing and publishing a biography of Heine is arguably itself evidence of domain expertise in, er, the life of Heine, but this one isn't easy to assess with confidence.
  • Laura Miller, "The magician's book: a skeptic's adventures in Narnia". About those books and C S Lewis's life. Author does not appear to be a domain expert by my criterion.
  • Matthew Chapman, "40 days and 40 nights", about the Dover intelligent design trial. I guess the domain is something like "history of church-and-state fights in the US"; I don't think the author is a domain expert by my criterion.
  • A K Dewdney, "The Planiverse" (a sort of updated "Flatland"). Not clear what would constitute domain expertise, and maybe this should be excluded from the list for being fiction, though clearly its main point is not storytelling. At any rate, Dewdney is a professor of computer science but I don't think he qualifies as a domain expert on two-dimensional universes occupied by intelligent beings.

That's 10 books, which seems like a reasonable place to stop. Somewhere between 3 and 5 of the authors are domain experts by my criterion (generalized in what I think is an honest way to domains other than technology). Maybe 3 of the books are concerned with technical domains (Dawkins, Ruppert et al, Dewdney) and 1 or 2 of the 3 authors qualify.

I picked some more random books to bring the number of technical-domain ones up to 10. I'll summarize more briefly. "(P)" means pop-science/pop-maths; "(T)" means technical works aimed at experts. Ruelle, "Chaotic evolution and strange attractors" (T); domain expert. O'Neill, "Doing data science: straight talk from the frontline" (P): marginal, leaning toward domain expert. Mlodinow, "The drunkard's walk: how randomness rules our lives" (P): marginal, leaning toward not. (Author is a domain expert in theoretical physics, not so much here.) Steele, "Probability theory and combinatorial optimization" (T): domain expert. Nick Lane, "The vital question: why is life the way it is?" (P): not a domain expert. Gribbin, "In search of Schrödinger's cat" (P): not a domain expert. Hatton, "Safer C: developing software for high-integrity and safety-critical systems" (T): domain expert. So, counting Dawkins, O'Neill, Mlodinow as half-domain-experts, I make that five out of 10.

So it seems like if you pull a book on a technical topic off my bookshelves at random, about half the time the author is a domain expert in this (admittedly fairly strong) sense; it seems to be pretty much all the time for hardcore technical works for expert audiences, whereas popular books are more likely to be written by journalists or by experts in other fields.

I wouldn't call that "very few people", in this context. (As a fraction of the whole population it's "very few", and I'm perfectly OK with that and assume you are too.)

Comment by gjm on Thomas Kwa's Shortform · 2021-11-07T13:28:40.557Z · LW · GW

I can't comment on what JBlack means by "domain expert", but looking at that list of things about Hall, what I see is:

  • "Involved in", which means nothing.
  • Founded and moderated a newsgroup: requires no particular domain expertise.
  • Founding chief scientist of Nanorex Inc for two years. I can't find any evidence that Nanorex ever produced anything other than a piece of software that claimed to do molecular dynamics suitable for simulating nanotech. Whether it was actually any good, I have no idea, but the company seems not to have survived. Depending on what exactly the responsibilities of the "founding chief scientist" are, this could be evidence that Hall understands a lot about molecular dynamics, or evidence that Hall is a good software developer, or evidence of nothing at all. In the absence of more information about Nanorex and their product, it doesn't tell us much.
  • Has written several papers on nanotechnology: anyone can write a paper. A quick look for papers he's written turns up some abstracts, all of which seem like high-level "here's a concept that may be useful for nanotech" ones. Such a paper could be very valuable and demonstrate deep insight, but the test of that would be actually turning out to be useful for nanotech and so far as I can tell his ideas haven't led to anything much.
  • Developed ideas such as utility fog, space pier, etc.: again, anyone can "develop ideas". The best test of the idea-developer's insight is whether those ideas turn out actually to be of any use. So far, we don't seem close to having utility fog, space piers, weather control or flying cars.
  • Author of "Nanofuture": pop-science book, which from descriptions I've read seems mostly to be broad general principles about nanotech that doesn't exist yet, and exciting speculations about future nanotech that doesn't exist yet.
  • Fellow of a couple of things: without knowing exactly what their criteria are for appointing Fellows, this could mean anything or nothing. My guess is that they made him a Fellow because he was an early popularizer of nanotech ideas, not because of actual expertise or accomplishments.
    • Also, literally the only things I can find about the "Molecular Engineering Research Institute" are places where Storrs-Hall describes himself as that. It looks worryingly like one of those pseudoscholarly institutions that a person or small group creates in order to sound more impressive. I'm willing to be corrected on this one, though.
    • I can't find much sign that the "Institution for Molecular Manufacturing" is much better, actually.
  • Computer systems architect at the Laboratory for Computer Science Research at Rutgers: This does seem to be a real thing at a real university. I think it is evidence that Storrs-Hall knows something about writing software.
  • Awarded the Feynman Communication Prize: implies no particular domain expertise.

Everything here is consistent with Storrs-Hall being a competent software guy who had a bunch of vague but exciting-sounding ideas about nanotech, writes well, and was in the right place at the right time -- and nothing more than that.

(It's also consistent with his being more expert than that. But it doesn't seem like it's much evidence for that's being so.)

Comment by gjm on A system of infinite ethics · 2021-11-06T01:23:58.017Z · LW · GW

No, I don't intend "experience-subject" to pick out a specific time. (It's not obvious to me whether a variant of your system that worked that way would be better or worse than your system as it is.) I'm using that term rather than "agent" because -- as I think you point out in the OP -- what matters for moral relevance is having experiences rather than performing actions.

So, anyway, I think I now agree that your system does indeed do approximately what you say it does, and many of my previous criticisms do not in fact apply to it; my apologies for the many misunderstandings.

The fact that it's lavishly uncomputable is a problem for using it in practice, of course :-).

I have some other concerns, but haven't given the matter enough thought to be confident about how much they matter. For instance: if the fundamental thing we are considering probability distributions over is programs specifying a universe and an experience-subject within that universe, then it seems like maybe physically bigger experience subjects get treated as more important because they're "easier to locate", and that seems pretty silly. But (1) I think this effect may be fairly small, and (2) perhaps physically bigger experience-subjects should on average matter more because size probably correlates with some sort of depth-of-experience?

Comment by gjm on A system of infinite ethics · 2021-11-05T05:09:56.722Z · LW · GW

It sounds as if my latest attempt at interpreting what your system proposes doing is incorrect, because the things you're disagreeing with seem to me to be straightforward consequences of that interpretation. Would you like to clarify how I'm misinterpreting now?

Here's my best guess.

You wrote about specifications of an experience-subject's universe and situation in it. I mentally translated that to their stream of experiences because I'm thinking in terms of Solomonoff induction. Maybe that's a mistake.

So let's try again. The key thing in your system is not a program that outputs a hypothetical being's stream of experiences, it's a program that outputs a complete description of a (possibly infinite) universe and also an unambiguous specification of a particular experience-subject within that universe. This is only possible if there are at most countably many experience-subjects in said universe, but that's probably OK.

So that ought to give a well-defined (modulo the usual stuff about uncomputability) probability distribution over experience-subjects-in-universes. And then you want to condition on "being in a universe with such-and-such characteristics" (which may or may not specify the universe itself completely) and look at the expected utility-or-utility-like-quantity of all those experience-subjects-in-universes after you rule out the universes without such-and-such characteristics.

It's now stupid-o'-clock where I am and I need to get some sleep. I'm posting this even though I haven't had time to think about whether my current understanding of your proposal seems like it might work, because on past form there's an excellent chance that said understanding is wrong, so this gives you more time to tell me so if it is :-). If I don't hear from you that I'm still getting it all wrong, I'll doubtless have more to say later...

Comment by gjm on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T04:27:02.379Z · LW · GW

It still appears to me that you are completely missing the point. I acknowledge that you are getting a lot of upvotes and I'm not, suggesting that other LW readers disagree with me. I think they are wrong, but outside view suggests caution.


I notice one thing I said that was not at all what I intended to say, so let me correct that before going further. I said

"not doing things that would offend X" in his comment is unambiguously not a move in any game being played with X at all.

but what I actually meant to say was

"standing up to X" in his comment is unambiguously not a move in any game being played with X at all.

[EDITED to add:] No, that also isn't quite right; my apologies; let me try again. What I actually mean is that "standing up to X" and "not doing things that would offend X" are events in two entirely separate games, and the latter is not a means to the former.

There are actually three separate interactions envisaged in Steven's comment, constituting (if you want to express this in game-theoretic terms) three separate games. (1) An interaction with left-wing entryists, where they try to turn LW into a platform for leftist propaganda. (2) An interaction with right-wing entryists, where they try to turn LW into a platform for rightist propaganda. (3) An interaction with leftists, who may or may not be entryists, where they try to stop LW being a platform for right-wing propaganda or claim that it is one. (There is also (4) an interaction with rightists, along the lines of #3, which I include for the sake of symmetry.)

Steven claims that in game 1 we should strongly resist the left-wing entryists, presumably by saying something like "no, LW is not a place for left-wing propaganda". He claims that in order to do this in a principled way we need also to say "LW is not a place for right-wing propaganda", thus also resisting the right-wing entryists in game 2. And he claims that in order to do this credibly we need to be reluctant to post things that might be, or that look like they are, right-wing propaganda, thus giving some ground to the leftists in game 3.

Game 1 and game 3 are entirely separate, and the same move could be a declaration of victory in one and a capitulation in the other. For instance, imposing a blanket ban on all discussion of politically sensitive topics on LW would be an immediate and total victory over entryists of both stripes in games 1 and 2, and something like a total capitulation to leftists and rightists alike in games 3 and 4.

So "not doing things that would offend leftists" is not a move in any game played with left-wing entryists; "standing up to left-wing entryists" is not a move in any game played with leftists complaining about right-wing content on LW; I was trying to say both of those and ended up talking nonsense. The above is what I actually meant.


I agree that steven0461 is saying (something like) that people writing LW articles should avoid saying things that outrage left-leaning readers, and that if you view what happens on LW as a negotiation with left-leaning readers then that proposal is not a strategy that gives you much leverage.

I don't agree that it makes any sense to say, as you did, that Steven's proposal involves "standing up to X by not saying anything that offends X", which is the specific thing you accused him of.

Your comment above elaborates on the thing I agree about, but doesn't address the reasons I've given for disagreeing with the thing I don't agree about. That may be partly because of the screwup on my part that I mention above.

I think the distinction is important, because the defensible accusation is of the form "Steven proposes giving too much veto power over LW to certain political groups", which is a disagreement about strategy, whereas the one you originally made is of the form "Steven proposes something blatantly self-contradictory", which is a disagreement about rationality, and around these parts accusations of being stupid or irrational are generally more serious than accusations of being unwise or on the wrong political team.

The above is my main objection to what you have been saying here, but I have others which I think worth airing:

It is not true that "don't do anything that the left considers offensively right-wing" gives the left "the ability to prevent arbitrary speech", at least not if it's interpreted with even the slightest bit of charity, because there are many many things one could say that no one will ever consider offensively right-wing. Of course it's possible in theory for any given group to start regarding any given thing as offensively right-wing, but I do not think it reasonable to read steven0461's proposal as saying that literally no degree of absurdity should make us reconsider the policy he proposes.

It is not true that Steven proposes to "not do anything that the left has decided is offensively right-wing". "Sufficiently offensive" was his actual wording. This doesn't rule out any specific thing, but again I think any but the most uncharitable reading indicates that he is not proposing a policy of the form "never post anything that anyone finds offensive" but one of the form "when posting something that might cause offence, consider whether its potential to offend is enough to outweigh the benefits of posting it". So, again, the proposal is not to give "the left" complete veto power over what is posted on LW.

I think it is unfortunate that most of what you've written rounds off Steven's references to "left/right-wing political entryism" to "the left/right". I do not know exactly where he draws the boundary between mere X-wing-ism and X-wing political entryism, but provided the distinction means something I think it is much more reasonable for LW to see "political entryism" of whatever stripe as an enemy to be stood up to, than for LW to see "the left" or "the right" as an enemy to be stood up to. The former is about not letting political groups co-opt LW for their political purposes. The latter is about declaring ourselves a political team and fighting opposing political teams.

Comment by gjm on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T01:22:03.940Z · LW · GW

I'm not sure there's a definite fact of the matter as to when something counts as "doing X by doing Y" in indirect cases like this. But either we shouldn't use that language so broadly as to cover such cases, or else it's not obvious that "standing up to X by not doing things that offend X" is unworkable, since its supposed obvious unworkability is (unless I'm misunderstanding your earlier comment) premised on its being a sort of appeasement of X, rather than the sort of indirect thing we're actually talking about here.

Maybe I am also being too indirect. Regardless of whether there's some sense in which steven0461 is proposing to "stand up to X by not doing things that would offend X", he was unambiguously not proposing "a negotiating stance that is maximally yielding in the ultimatum game"; "not doing things that would offend X" in his comment is unambiguously not a move in any game being played with X at all. Your objection to what he wrote is just plain wrong, whether or not there is a technical sense in which he did say the thing that you objected to, because your argument against what he said was based on an understanding of it that is wrong whether or not that's so.

[EDITED to add:] As I mention in a grandchild comment, one thing in the paragraph above is badly garbled; I was trying to say something fairly complicated in too few words and ended up talking nonsense. It's not correct to say that "not doing things that would offend X" is not a move in any game being played with X. Rather, I claim that X in your original comment is standing in for two different albeit related Xs, who are involved in two different albeit related interactions ("games" if you like), and the two things you portray as inconsistent are not at all inconsistent because it's entirely possible (whether or not it's wise) to win one game while losing the other.

The game with "left-wing entryists" is one where they try to make LW a platform for left-wing propaganda. The game with "the left" is one where they try to stop LW being a platform for (what they regard as) right-wing propaganda. Steven proposes taking a firm stand against the former, and making a lot of concessions in the latter. These are not inconsistent; banning everything that smells of politics, whether wise or foolish overall, would do both of the things Steven proposes doing. He proposes making concessions to "the left" in the second game in order to resist "right-wing entryists" in the mirror-image of the first game. We might similarly make concessions to "the right" if they were complaining that LW is too leftist, by avoiding things that look to them like left-wing propaganda. I make no claims about whether any of these resistances and concessions are good strategy; I say only that they don't exhibit the sort of logical inconsistency you are accusing Steven of.

Comment by gjm on The Opt-Out Clause · 2021-11-05T00:27:23.440Z · LW · GW

Variant thought experiment:

You are in a simulation, much as described in this one. It is designed to be of finite duration; at some point it will end and you will return to your real (or at least one-level-up) life.

However, it is possible to keep you in the simulation for ever. When you went in, you were asked to choose a passphrase that would make that happen. In a fit of whimsy, you chose "I no longer consent to being in a simulation". If you ever say or write that passphrase, then when the usual time limit expires you will not leave the simulation; you will remain in it until your in-simulation death, and "when you die in the Matrix, you die in real life".

Remarks on the motivation for the above variant: OP's "there is something super-important but you hear of it only in passing, in circumstances that give you no real reason to believe it" reminds me of the claims of the world's various religions, many of which hold that it is vitally important that you accept their doctrines or live according to their principles, even though the only things you have telling you this are obviously unreliable. One particularly extreme version of this is Pascal's wager, where merely considering the hypothesis that (say) Roman Catholic Christianity might be correct is supposed to be sufficient to make you do your best to become a good Roman Catholic; and one standard reply to Pascal's wager is to point out that there are other hypotheses with apparently nonzero probability and exactly opposite consequences...

Comment by gjm on The Opt-Out Clause · 2021-11-05T00:19:40.749Z · LW · GW

Oh, I know this one. Forty-two.

Comment by gjm on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-05T00:17:33.438Z · LW · GW

steven0461 isn't proposing standing up to X by not doing things that would offend X.

He is proposing standing up to the right by not doing things that would offend the left, and standing up to the left by not doing things that would offend the right. Avoiding posts like the OP here is intended to be an example of the former, which (steven0461 suggests) has value not only for its own sake but also because it lets us also stand up to the left by avoiding things that offend the right, without being hypocrites.

(steven0461's comment seems to treat "standing up to left-wing political entryism" as a thing that's desirable for its own sake, and "standing up to right-wing political entryism" as something we regrettably have to do too in order to do the desirable thing without hypocrisy. This seems kinda strange to me because (1) standing up to all kinds of political entryism seems to me obviously desirable for its own sake, and because (2) if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter.)

Comment by gjm on A system of infinite ethics · 2021-11-05T00:08:09.892Z · LW · GW

If you are just using the regular notion of expected value then it is an average of utilities. (Weighted by probabilities.)

I understand that your measure of satisfaction need not be a utility as such, but "utility" is shorter than "measure of satisfaction which may or may not strictly speaking be utility".

Comment by gjm on A system of infinite ethics · 2021-11-05T00:05:28.593Z · LW · GW

OK, so I think I now understand your proposal better than I did.

So if I'm contemplating making the world be a particular way, you then propose that I should do the following calculation (as always, of course I can't do it because it's uncomputable, but never mind that):

  • Consider all possible computable experience-streams that a subject-of-experiences could have.
  • Consider them, specifically, as being generated by programs drawn from a universal distribution.
  • Condition on being in the world that's the particular way I'm contemplating making it -- that is, discard experience-streams that are literally inconsistent with being in that world.
  • We now have a probability distribution over experience-streams. Compute a utility for each, and take its expectation.

And now we compare possible universes by comparing this expected utility.
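
To make sure I've got the shape of the calculation right, here is a toy finite sketch of it in code. Everything in it is my own invention for illustration -- the enumeration of short bit-strings standing in for a universal distribution, the helper functions, the demo at the bottom -- and of course the real thing is uncomputable; this is just a finite analogue of the procedure above.

```python
import itertools

# Toy stand-in for the universal distribution: enumerate all bit-strings up
# to some length and weight each one 2^-length, roughly as a Solomonoff-style
# prior weights programs by length. (My own simplification, obviously.)
MAX_LEN = 12

def programs():
    for n in range(1, MAX_LEN + 1):
        for bits in itertools.product("01", repeat=n):
            yield "".join(bits)

def evaluate_world(world, run, consistent_with, utility):
    """Expected utility of an experience-stream, conditioned on `world`.

    run(program)          -> the experience-stream the program generates
    consistent_with(s, w) -> True iff stream s could occur in world w
    utility(s)            -> bounded utility of stream s
    """
    total_weight = 0.0
    weighted_utility = 0.0
    for p in programs():
        weight = 2.0 ** -len(p)             # prior weight of this program
        stream = run(p)
        if not consistent_with(stream, world):
            continue                         # the conditioning step: discard it
        total_weight += weight
        weighted_utility += weight * utility(stream)
    return weighted_utility / total_weight   # expected utility after conditioning

if __name__ == "__main__":
    # Silly demo: a "stream" is just the bit-string itself, a "world" is a
    # required prefix, and utility is the fraction of 1-bits in the stream.
    score = evaluate_world(
        world="1",
        run=lambda p: p,
        consistent_with=lambda s, w: s.startswith(w),
        utility=lambda s: s.count("1") / len(s),
    )
    print(score)
    # One candidate world is then preferred to another iff its score is higher.
```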

(Having failed to understand your proposal correctly before, I am not super-confident that I've got it right now. But let's suppose I have and run with it. You can correct me if not. In that case, some or all of what follows may be irrelevant.)

I agree that this seems like it will (aside from concerns about uncomputability, and assuming our utilities are bounded) yield a definite value for every possible universe. However, it seems to me that it has other serious problems which stop me finding it credible.

SCENARIO ONE. So, for instance, consider once again a world in which there are exactly two sorts of experience-subject, happy and unhappy. Traditionally we suppose infinitely many of both, but actually let's also consider possible worlds where there is just one happy experience-subject (and infinitely many unhappy ones), or just one unhappy one (and infinitely many happy ones). All these worlds contain exactly the same set of possible experience-streams, so they come out exactly the same: "infinitely many happy, one unhappy" is indistinguishable from "infinitely many unhappy, one happy". That seems regrettable, but it's a bullet I can imagine biting -- perhaps we just don't care at all about multiple instantiations of the exact same stream of experiences: it's just the same person and it's a mistake to think of them as contributing separately to the goodness of the universe.

So now let's consider some variations on this theme.

SCENARIO TWO. Suppose I think up an infinite (or for that matter merely very large) number of highly improbable experience-streams that one might have, all of them unpleasant. And I find a single rather probable experience-stream, a pleasant one, whose probability (according to our universal prior) is greater than the sum of those other ones. If I am contemplating bringing into being a world containing exactly the experience-streams described in this paragraph, then it seems that I should, because the expected net utility is positive, at least if the pleasantness and unpleasantness of the experiences in question are all about equal.
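
To put toy numbers on that (all the weights and utilities here are figures I've made up, just to show the shape of the problem):

```python
# One pleasant stream with a fairly large prior weight, versus a million
# unpleasant streams whose prior weights sum to much less.
pleasant_weight, pleasant_utility = 2.0 ** -10, +1.0      # one likely, pleasant stream
n_unpleasant = 10 ** 6
unpleasant_weight, unpleasant_utility = 2.0 ** -40, -1.0  # each wildly unlikely, unpleasant

total_weight = pleasant_weight + n_unpleasant * unpleasant_weight
expected = (pleasant_weight * pleasant_utility
            + n_unpleasant * unpleasant_weight * unpleasant_utility) / total_weight
print(expected)  # ~ +0.998: strongly positive, despite the million sufferers
```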

To me, this seems obviously crazy. Perhaps there's some reason why this scenario is incoherent (e.g., maybe somehow I shouldn't be able to bring into being all those very unlikely beings, at least not with non-negligible probability, so it shouldn't matter much what happens if I do, or something), but at present I don't see how that would work out.

The problem in SCENARIO TWO seems to arise from paying too much attention to the prior probability of the experience-subjects. We can also get into trouble by not paying enough attention to their posterior probability, in some sense.

SCENARIO THREE. I have before me a switch with two positions, placed there by the Creator of the Universe. They are labelled "Nice" and "Nasty". The CotU explains to me that the creation of future experience-subjects will be controlled by a source of True Randomness (whatever exactly that might be), in such a way that all possible computable experience-subjects have a real chance of being instantiated. The CotU has designed two different prefix-free codes mapping strings of bits to possible experience-subjects; then he has set a Truly Random coin to flip for ever, generating a new experience-subject every time a leaf of the code's binary tree is reached, so that we get an infinite number of experience-subjects generated at random, with a distribution depending on the prefix-free code being used. The Nice and Nasty settings of the switch correspond to two different codes. The CotU has computed that with the switch in the "Nice" position, the expected utility of an experience-subject in the resulting universe is large and positive; with the switch in the "Nasty" position, it's large and negative. But in both cases every possible experience-subject has a nonzero probability of being generated at any time.
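
A minimal sketch of the generation process I'm imagining, with two tiny made-up codes standing in for whatever codes the CotU actually uses:

```python
import random

def generate_subjects(code, n_subjects, seed=0):
    """Walk down the binary tree of a prefix-free code one fair coin flip at a
    time; each time we land on a codeword (a leaf), instantiate that subject
    and start again from the root."""
    rng = random.Random(seed)   # stands in for the Truly Random coin
    subjects, buffer = [], ""
    while len(subjects) < n_subjects:
        buffer += rng.choice("01")
        if buffer in code:       # reached a leaf of the code's binary tree
            subjects.append(code[buffer])
            buffer = ""
    return subjects

# Two prefix-free codes over the same subjects; they differ only in which
# subjects get the short (hence probable) codewords. Every subject has a
# nonzero chance of being generated under both codes.
nice  = {"0": "happy",     "10": "so-so", "11": "miserable"}
nasty = {"0": "miserable", "10": "so-so", "11": "happy"}

print(generate_subjects(nice, 10))   # each draw: happy w.p. 1/2, so-so 1/4, miserable 1/4
print(generate_subjects(nasty, 10))  # each draw: miserable w.p. 1/2, so-so 1/4, happy 1/4
```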

In this case, our conditioning doesn't remove any possible experience-subjects from consideration, so we are indifferent between the "Nice" and "Nasty" settings of the switch.

This is another one where we might be right to bite the bullet. In the long run infinitely many of every possible experience-subject will be created in each version of the universe, so maybe these two universes are "anagrams" of one another and should be considered equal. So let's tweak it.

SCENARIO FOUR. Same as in SCENARIO THREE, except that now the CotU's generator will run until it has produced a trillion experience-subjects and then shut off for ever.

It is still the case that with the switch in either setting any experience-subject is possible, so we don't get to throw any of them out. But it's no longer the case that the universes generated in the "Nice" and "Nasty" versions are with probability 1 (or indeed with not-tiny probability) identical in any sense.

So far, these scenarios all suppose that somehow we are able to generate arbitrary sets of possible experience-subjects, and arrange for those to be all the experience-subjects there are, or at least all there are after we make whatever decision we're making. That's kinda artificial.

SCENARIO FIVE. Our universe, just as it is now. We assume, though, that our universe is in fact infinite. You are trying to decide whether to torture me to death.

So far as I can tell, there is no difference in the set of possible experience-subjects in the world where you do and the world where you don't. Both the tortured-to-death and the not-tortured-to-death versions of me are apparently possibilities, so it seems that with probability 1 each of them will occur somewhere in this universe, so neither of them is removed from our set of possible experience-streams when we condition on occurrence in our universe. Perhaps in the version of the world where you torture me to death this makes you more likely to do other horrible things, or makes other people who care for me suffer more, but again none of this makes any experiences impossible that would otherwise have been possible, or vice versa. So our universe-evaluator is indifferent between these choices.

(The possibly-overcomplicated business in one of my other comments, where I tried to consider doing something Solomonoff-like using both my experiences and those of some hypothetical possibly-other experience-subject in the world, was intended to address these problems caused by considering only possibility and not anything stronger. I couldn't see how to make it work, though.)

Comment by gjm on They don't make 'em like they used to · 2021-11-04T00:28:47.540Z · LW · GW

The video makes it really hard to tell exactly what's going on. (Particularly annoying is the bit at 1:32 where they show an overhead view, which would let us see what's happening to each car without bits of the other one being in the way -- and then cut away from it to yet another nigh-incomprehensible side view at the instant of contact.)

But I think there are two things going on here: the newer car has a slightly more squashable front portion, and a much less squashable passenger compartment.

In a head-on collision between the cars, the former doesn't do much to make the newer car look better (though it does make the collision less bad for the occupants of both vehicles) because what's happening is that energy that would otherwise be used for crushing both drivers is used for crushing the newer car's front part instead.

So part-way through 0:29 you can (I think) see that the newer car's front has scrunched up more. But there's still kinetic energy left to be absorbed, so the scrunching continues. As we go through 0:30, the front of the older car also gets crushed. But so does the passenger compartment of the older car, whereas the passenger compartment of the newer car remains largely intact.

So the newer car

  • has a front portion that can absorb more energy by crumpling, which helps reduce the (other) damage to both cars
  • has a stronger and more rigid passenger compartment, so that once the crash has proceeded far enough that the next thing that has to go is either the front of the older car or the passenger compartment of the newer car, it's the front of the older car that goes.
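
A back-of-the-envelope way to see why that division of labour (crumpling in front, rigidity around the occupants) matters -- all the numbers here are invented purely for illustration:

```python
# Crude model: an occupant travelling at speed v is brought to rest over
# whatever distance is available to "give" in front of the cabin. Average
# deceleration is v^2 / (2d), so more controlled crush distance ahead of a
# rigid passenger compartment means a gentler ride-down for the occupants.
G = 9.81
v = 15.0    # m/s, roughly 54 km/h -- a made-up closing speed

for label, crush_m in [("less effective crush (say 0.3 m)", 0.3),
                       ("more effective crush (say 0.7 m)", 0.7)]:
    decel_g = v ** 2 / (2 * crush_m) / G
    print(f"{label}: average deceleration ~{decel_g:.0f} g")
# ~38 g versus ~16 g: the extra crush distance only helps if the passenger
# compartment stays intact, so the crushing happens in front of the occupants
# rather than around them.
```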
Comment by gjm on A system of infinite ethics · 2021-11-03T20:58:35.153Z · LW · GW

I'll begin at the end: What is "the expected value of utility" if it isn't an average of utilities?

You originally wrote:

suppose you had no idea which agent in the universe it would be, what circumstances you would be in, or what your values would be, but you still knew you would be born into this universe. Consider having a bounded quantitative measure of your general satisfaction with life, for example, a utility function. Then try to make the universe such that the expected value of your life satisfaction is as high as possible if you conditioned on you being an agent in this universe, but didn't condition on anything else.

What is "the expected value of your life satisfaction [] conditioned on you being an agent in this universe but [not] on anything else" if it is not the average of the life satisfactions (utilities) over the agents in this universe?

(The slightly complicated business with conditional probabilities that apparently weren't what you had in mind were my attempt at figuring out what else you might mean. Rather than trying to figure it out, I'm just asking you.)

Comment by gjm on Where do you live? · 2021-11-01T15:44:33.418Z · LW · GW

Strictly, I'm a few miles out of Cambridge.

In and around Cambridge, (i) English is common, (ii) cost of living is not low and in particular housing is annoyingly expensive, and (iii) immigration is I think easier than to the US, though for inhabitants of the rest of Europe it recently became more difficult than it had been before.

Also relevant: Cambridge has a world-class university and a pretty strong (though not Bay-Area-strong) ecosystem of technology companies.

Comment by gjm on Where do you live? · 2021-11-01T15:43:24.882Z · LW · GW

Cambridge, UK.

Comment by gjm on Where do you live? · 2021-11-01T15:40:34.265Z · LW · GW

I suspect that either (1) someone misclicked or (2) someone unscrupulous took a dislike to something else you wrote and decided to downvote lots of your comments. If it's just this one, it's probably #1. If you suddenly lost a lot of karma, it's probably #2. That sort of bulk-downvoting is not supposed to be a thing that happens here, but sometimes people do it (in my experience, usually for political reasons; they've decided that some other person is The Enemy and must be punished and their voice suppressed); if you have good reason to suspect it's happened, you might consider contacting the site admins.

[EDITED to add: I (weak-)upvoted seed's answer even though I do not live in Berlin, because I can think of no reason why it should have been downvoted.]