Comment by Ben (ben-lang) on Far-Future Commitments as a Policy Consensus Strategy · 2023-09-27T16:55:40.977Z · LW · GW

I think that governments already do this to some extent. The UK (and, I think, many other countries) has enshrined a date in law (2050) that says "net zero carbon emissions from this date".

The point is almost the same as what you describe. The government wants the votes of environmentally concerned citizens, but it also wants the votes of people who drive or work in a polluting industry.

In the most charitable case the net-zero 2050 commitment is a public statement from the government to polluting industries or car manufacturers that their clock is ticking, so they should maybe stop hiring, or develop alternative products. This push does some of the government's work for it before it actually works out which policies it needs for net zero. Then, maybe, those policies face less stiff resistance when they do start coming in, because the lead time gives many of the people who would lose out "time to move".

With any large change in tax policy, or any policy that has a big financial impact on a lot of people, it does make sense to give a fairly long lead time to warn people. Suppose you wanted to abolish state pensions: doing so tomorrow morning would be madness. But announcing that they will be abolished and that no one currently under 35 will ever get a state pension would be a more reasonable approach, giving people more space to plan.

Comment by Ben (ben-lang) on Fund Transit With Development · 2023-09-27T12:46:54.063Z · LW · GW

"Now, if the land owner are correctly calibrated, and they correctly anticipate the odds of your project being a success, your expected profit on the sale of the land must be zero."

I worry that this argument appears to apply to any business venture that involves buying land. Indeed, it applies to any business venture that involves buying anything. But many people seem to think that projects involving the purchase of goods or services can in fact be profitable.

Goods (such as steel or land) can be improved, by turning the steel into a plane, or adding a train connection to the land. Your model seems to be roughly that the current owner works out the chance that someone will buy the steel or land and turn it into a plane, or add a train connection, and then prices with that chance in mind. The problem (I think) is that there is a causal connection between the price and the process of it turning into the imagined end product. If you charge extra for your steel because you think it will turn into a plane, then it won't, because Boeing will just buy steel from someone else and turn that into a plane instead of yours.

Comment by Ben (ben-lang) on Fund Transit With Development · 2023-09-27T12:38:28.560Z · LW · GW

I think this argument is wrong. If they don't sell, there is no train (at least not to that exact place), so they gain nothing by pricing the project out of existence or driving it elsewhere.

I want to build a Disneyland resort. There are dozens of different sites where I could put it. When I go to a landowner (e.g. a farmer) to buy land for my resort, they can't sell it at the value it would have if it were a Disneyland; if they try that, I drive 10 minutes up the road to the next farmer, and eventually one of them will realise that selling the land for a little more than it is currently worth to them as a farm is still profit.

This example is no different. If there is one, and only one, place the train could possibly go, then yes, the person on that land can charge you quite a lot. But still not 100% of what the land will be worth after the train is there: a potential train connection that someone might build in the future is worth a lot less than an actual train connection now. But realistically there will be other options for where to put trains and stations, so the owners of the half-dozen best sites have to try to make better (lower) offers than one another.
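The undercutting dynamic above can be sketched as a toy reverse auction (all numbers here are made up for illustration):

```python
# Toy reverse auction: several landowners compete to sell a site.
# Each owner's reserve price is the land's value to them as a farm;
# any sale above that reserve is profit, so competition drives the
# winning price toward the best farm value, not the post-resort value.

farm_values = [100, 110, 120, 130]  # hypothetical per-site farm values
resort_value = 1000                 # value of whichever site gets the resort

def winning_price(asks, reserves):
    """Buyer takes the lowest ask; owners never ask below their reserve."""
    return min(a for a, r in zip(asks, reserves) if a >= r)

# If owners try to capture the resort value, the buyer plays them off
# against each other; undercutting bottoms out just above farm value.
epsilon = 1
asks = [v + epsilon for v in farm_values]
price = winning_price(asks, farm_values)
print(price)  # 101: a little over the cheapest farm value, far below 1000
```

The key feature of the sketch is that no single owner can hold out for the resort value, because the buyer's outside option is always the next farm up the road.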

"Why would the current owner sell the land at the historic price when, in fact, it is very clear that the price will go up once you are done with your project?" --- "Why would anyone sell you steel at the market price of steel, when, in fact, it is very clear that the value will go up once you are done turning it into an airplane?"

Comment by Ben (ben-lang) on The commenting restrictions on LessWrong seem bad · 2023-09-19T11:13:14.244Z · LW · GW

"It wouldn't have mattered to me whose name was in the title of that post, the strong-downvote button floated nearer to me just from reading the rest of the title."

I think this is right from an individual user perspective, but misses part of the dynamic. My impression from reading lesswrong posts is that something like that post, had the topic been different, maybe "Jesus was often incredibly wrong about stuff", would have been ignored by many people. It would maybe have had between zero and a dozen karma and clearly not been clicked on by many people.

But that post, in some sense, was more successful than ones that are ignored - it managed to get people to read it (which is a necessary first step of communicating anything). That it has evidently failed in the second step (persuading people) is clear from the votes.

In a sense maybe this is the system working as intended: stuff that people just ignore doesn't need downvoting, because it doesn't waste much communication bandwidth, whereas stuff that catches attention and then disappoints is where the algorithm can maybe do people a favour with downvote data. But the way that system feeds into a user's posting rights seems a little weird.

Comment by Ben (ben-lang) on Is this viable physics? · 2023-09-11T10:41:27.283Z · LW · GW

I think Wolfram is probably doing excellent maths, but is doing physics in a somewhat backwards way.

I think good physics starts with observations, things that we notice as patterns or similar, and then seeks good explanations of what we see.

In the link they start with an extremely general mathematical framework, with infinitely many different possible update rules. We have only ever (as a civilisation) seen a finite number of data points, and that will always be the case. Therefore, of this infinite number of update rules there are infinitely many that (given the right interpretational handles) can "explain" all human experiments ever performed perfectly. Of that infinite set of rules, the vast majority are about to be disproved by the very next observation.

I think that the mathematical structure Wolfram lays out is powerful enough that, without specifics, it can support anything. Any kind of universe. That may include ours, but that doesn't tell us anything useful, because it also includes all the nonsense. By starting with the maths and then trying to "work up to" the physics I worry that this is like The Library of Babel of physics theories. Something equivalent to a true theory of everything is in the Wolfram framework somewhere, just like it is in the Library of Babel somewhere.

The fundamental flaw, as I see it, comes from trying to start with the maths. Better to pick a single observation that current theories don't explain well, and try to fix that one problem. Most of the time this just reveals an error in the experiment or perhaps a misuse of existing theory. But every so often it shows a glaring problem with the models we have; that is how we make physics better.


Comment by Ben (ben-lang) on I'm still mystified by the Born rule · 2023-09-07T16:39:52.904Z · LW · GW

If you really hate the L2 norm of the Born rule, it is actually possible to do quantum physics entirely with probabilities* in phase space.

* The catch is that they have to be quasiprobabilities, which means they can be negative. There can be patches of the phase space, i.e. particular combinations of position and momentum, to which you assign a negative quasiprobability. The fact that you are limited by Heisenberg uncertainty means that you never actually predict anything you observe to occur with a negative probability.

Despite this the quasiprobabilities (or quasiprobability densities for continuous systems) are treated mathematically just like probabilities (probability densities).

I think that this alternative mathematical form for the same physics reveals that the "ughhh what?" that surrounds measurements in quantum mechanics is always there, but it takes on a different mathematical flavour in different formulations. If you are using Hilbert space, you worry about the Born rule. If you are in phase space, you are probably trying to work out what a negative quasiprobability means. It's not just ignorance anymore; something very odd is going on. The fact that these are (I think) the same problem seen from different perspectives is potentially useful in seeing the way to the solution.
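As a concrete illustration of a negative quasiprobability (my own sketch, not something from the thread): the Wigner function of the first excited state of a harmonic oscillator, in units where ħ = 1, is negative near the origin yet still normalises to 1 like an ordinary probability density.

```python
import numpy as np

def wigner_n1(x, p):
    """Wigner quasiprobability of the n=1 harmonic-oscillator state (hbar = 1):
    W_1(x, p) = (1/pi) * (2*(x^2 + p^2) - 1) * exp(-(x^2 + p^2))"""
    r2 = x**2 + p**2
    return (2.0 * r2 - 1.0) * np.exp(-r2) / np.pi

print(wigner_n1(0.0, 0.0))  # -1/pi ~ -0.318: a negative "probability density"
print(wigner_n1(2.0, 0.0))  # positive far from the origin

# Despite the negative patch, it integrates to 1 over phase space:
x = p = np.linspace(-6, 6, 401)
X, P = np.meshgrid(x, p)
dx = x[1] - x[0]
print(np.sum(wigner_n1(X, P)) * dx * dx)  # ~ 1.0
```

The negative patch sits inside a Heisenberg-cell-sized region, which is why it never shows up as a negative probability for anything you can actually measure.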

Comment by Ben (ben-lang) on My First Post · 2023-09-07T09:38:29.386Z · LW · GW

One aspect that I think is directionally correct is that a is (up to the other terms) divided by d, where a is the number of pieces of evidence that this is a good idea and d the number that it is a bad one. This (when all else is neglected) feels right: 10 points for and 5 points against seems like it should be close to 100 points for and 50 points against, rather than 10 times less relevant.

Comment by Ben (ben-lang) on Who Has the Best Food? · 2023-09-06T09:27:05.745Z · LW · GW

In the mid-90s I spent a week in Mongolia. I was a child, and a somewhat picky eater so that may have clouded my judgement, but I thought the food was very bad. I remember that every meal involved some kind of yoghurt, which I think may have been made from horse or yak milk and in hindsight I now think was mildly alcoholic. I loved yoghurt, I hated that stuff. Indeed, I ended up subsisting on crisp packets, apples and fasting for the last few days, and the fact my parents allowed this said a lot about what they thought of the food.

Comment by Ben (ben-lang) on Who Has the Best Food? · 2023-09-05T21:58:40.851Z · LW · GW

Yes, I considered this point. It is an unfair comparison to an extent, and I actually made this point during the conversation. I was assured by the Italian that they did not think it was that unfair; their home town was not much smaller than the town where we work.

Comment by Ben (ben-lang) on Who Has the Best Food? · 2023-09-05T19:51:12.823Z · LW · GW

An interesting data point. I live in the UK. Several of my work colleagues are from Italy. One informed me (and the other one present agreed) that food variety in the UK was much better than in Italy. They said that, in their home town, you could get the local specialities at high quality, but you couldn't easily get dishes associated even with other parts of Italy. In contrast, they felt that in the UK there was Indian/Chinese/etc. and lots of variety. I assume the US is similar in this regard.

Comment by Ben (ben-lang) on A Golden Age of Building? Excerpts and lessons from Empire State, Pentagon, Skunk Works and SpaceX · 2023-09-01T14:12:22.983Z · LW · GW

I did think it was odd that none of the four listed crew members was a gunner, yet it supposedly had the firepower to wipe out a Soviet force.

Comment by Ben (ben-lang) on A Golden Age of Building? Excerpts and lessons from Empire State, Pentagon, Skunk Works and SpaceX · 2023-09-01T14:10:08.056Z · LW · GW

I think there is a hidden assumption here that building "the tallest building in the world" is about as difficult in 2023 as it was in 1933. The 2023 technology and economy are better, enabling a larger building, but the competition also has those advantages, so it's a wash.

I feel this assumption is doing a lot of work throughout. Back in the 60's building (for example) a big passenger jet was the kind of engineering project a large company might pursue. In the modern world we might also want a large passenger jet, but in order for developing it to be worth anyone's time it needs to significantly outperform the jets already on sale on some important metric(s).

If you are engineering a new type of thing that has not come before, then that requires a certain type of organisation and mindset. Maybe here it's ok for the fuel efficiency to be 20% worse than it might have been, if it gets the engine design finished 3 months faster.

But if you are coming into a crowded market, and you intend to make something that is fundamentally just an improvement on machines that already exist, then it is likely that you want a completely different approach. You would take the efficiency over the time saved.

I notice that (perhaps excluding the Pentagon) the examples all appear to be of the first kind. A lesson that could be drawn is "doing something novel might be risky and expensive, but it is often faster than improving on an existing technology."

Comment by Ben (ben-lang) on Newcomb Variant · 2023-08-29T14:28:58.502Z · LW · GW

What! I had the whole thing back to front. In that case you are completely correct, and you obviously always two-box.

I should have read more carefully. Still, I would have preferred the presentation to highlight explicitly that it is the opposite way around from what anyone would assume.

Re-reading the initial post, and especially the answer it gives, I am still not sure which problem was intended. The model solution offered seems to only make sense in the setup I thought we had. (Otherwise seeing $100 with an intent to take two boxes would not make us update towards thinking we were simulated).

Comment by Ben (ben-lang) on Newcomb Variant · 2023-08-29T13:40:26.349Z · LW · GW

If the simulated me takes two boxes (for $200) before being deleted, then Omega will (for the real trial with the real me) put $0 in each box. This is why Omega is doing the simulation in the first place: to work out what I will do so they can fill the boxes correctly. So the real me gets nothing if the simulated me gets $200. This was my logic.

Comment by Ben (ben-lang) on Newcomb Variant · 2023-08-29T10:09:30.948Z · LW · GW

I am not sure I agree with your stated answer.

If I am being simulated by Omega, then no matter what decision I make I am about to be terminated. It's not like one-boxing makes the simulated me get to live a full and happy life.

Let's say Omega simulates one copy of me, and shows it $100 in box 1. If I have a policy of one-boxing, then that simulation of me one-boxes, gets terminated (to no benefit to itself), and then the real me gets the $100 due to the one-box policy. If I have a two-box policy, the simulated me gets $200 before deletion, and the real me gets nothing.

So I agree with the one-box policy, but at least to me the point of that policy is that it somehow transfers wealth between the simulated me, who won't last much longer, and the real me, who might actually have a chance to benefit from it. I don't think the simulation benefits at all in any case.
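The payoff logic above can be written out explicitly (this just encodes the comment's own assumptions about how Omega fills the boxes; the dollar amounts are the ones from the example):

```python
# Under this reading: Omega simulates you once, deletes the simulated
# copy after it chooses, and fills the real boxes based on what the
# simulation did. Only the real copy's winnings are durable.

def real_payoff(policy):
    """Real-world winnings for a deterministic policy, per the setup above."""
    if policy == "one-box":
        # Simulation one-boxes (gaining nothing durable before deletion);
        # Omega therefore fills the boxes, and the real you collects $100.
        return 100
    if policy == "two-box":
        # Simulation grabs $200 before deletion; Omega leaves the real
        # boxes empty, so the real you collects nothing.
        return 0

print(real_payoff("one-box"))  # 100
print(real_payoff("two-box"))  # 0
```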

Comment by Ben (ben-lang) on The lost millennium · 2023-08-24T09:22:21.316Z · LW · GW

On this topic (and for looking at the direction of causality) I think the book "In the Shadow of the Sword" by Tom Holland is quite informative. The book covers the "late antiquity" period; a lot of it involves the origins of Islam, and a lot of the (Eastern) Romans having trouble with it all. From my memory there were either one or two devastating plagues in the Roman Empire, which massively depopulated its cities. The non-urban populations (e.g. the horse-riding nomads) were barely affected by the plague (they were said to be immune, but it was probably just fewer transmission opportunities). The book argues this was an important factor in the Roman Empire's defeat by the new Islamic empires: a large fraction of the legionnaires the Romans should have had had never been born, because their parents had died of plague twenty years earlier. (There were a lot of other factors going on too, but this one stood out in this discussion.) I think that the Persians had the same problem when it was their turn to fight the same Arabs: a fraction of their troops had never been born.

Comment by Ben (ben-lang) on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-11T10:01:04.338Z · LW · GW

Maybe we are starting to go in circles. But while I agree the word "guess" might be problematic, I think you still have an ambiguity in what the word "probability" means in this case. Perhaps you could give the definition you would use for the word "probability".

"In everyday usage, we use "guess" when the aim is to guess correctly." Guess correctly in the largest proportion of trials, or in the largest proportion of guesses? I think my "srob" and "quob" thingies are indeed aiming to guess correctly: one in the most possible trials, the other in the most possible individual instances of making a guess.

"Having eliminated the word "guess", why would one think that Beauty's use of the strategy of randomly taking action H or action T with equal probabilities implies that she must have P(Heads)=1/2?" - I initially conjectured this as weak evidence, but no longer hold that position at all, as I explained in the post with the graph. However, I still think that in the other death scenario (guess wrong and you die) the fact that deterministically picking heads is just as good as deterministically picking tails says something. The GWYD case sets the rules of the wager such that Beauty is trying to be right in as many trials as possible, instead of in as many individual awakenings, clearly moving the goalposts to the "number of trials" denominator.

For me, the issue is that you appear to take "probability" as "obviously" meaning "proportion of awakenings". I do not think this is forced on us by anything, and both denominators (awakenings and trials) provide us with useful information that can beneficially inform our decision making, depending on whether we want to be right in as many awakenings or as many trials as possible. Perhaps you could explain your position while tabooing the word "probability"? I think we have entered the tree-falling-in-a-forest zone, and I have tried to split our problem term in two (quobability and srobability) but it hasn't helped.

Comment by Ben (ben-lang) on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-10T14:04:28.434Z · LW · GW

Yes I do, very good point!

Comment by Ben (ben-lang) on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-10T10:25:55.405Z · LW · GW

We are in complete agreement about how Beauty should strategize given each of the three games (bet a dollar on the coin flip at odds K, GRYL, and GWYD). The only difference is that you are insisting that "1/3" is Beauty's "degree of belief". (By the way, I am glad you repeated the same maths I did for GWYD; it was simple enough, but the answer felt surprising, so I am glad you got the same.)

In contrast, I think we actually have two quantities:

"Quobability" - the number of correct guesses made, divided by the total number of guesses made.

"Srobability" - the number of trials in which the correct guess was made, divided by the total number of trials.

Quobability is 1/3, srobability is 1/2. "Probability" is (I think) an under-precise term that could mean either of the two.

You say you are a Bayesian, not a frequentist. So for you "probability" is degree of belief. I would also consider myself a Bayesian, and I would say that normally I can express my degree of belief with a single number, but in this case I want to give two numbers: "quobability = 1/3, srobability = 1/2". What I like about giving two numbers is that typically the single probability value a Bayesian gives is indicative of how they would bet. In this case the two quantities are both needed to see how I would bet given slightly varied betting rules.


I was still interested in GRYL, which I had originally assumed would support the thirder position, but (for a normal coin) the optimal tactic turned out to be to pick at 50/50 odds. I have just looked at biased coins.

For a biased coin that (when flipped normally, without any sleep or amnesia) comes up heads with probability k (k on the x-axis), I assume Beauty is playing GRYL and guessing heads with some probability. The optimal probability for her strategy to take is on the y-axis (blue line); the overall chance of survival is the orange line.

You are completely correct that her guess is in no way related to the actual coin bias (k), except for k = 0.5 specifically, which is an exception rather than the rule. In fact, this graph appears to push vaguely toward some kind of thirder position, in that the value 2/3 takes on special significance: beyond this point Beauty always guesses heads. In contrast, when tails is more likely she still keeps some chance of guessing heads, because she is banking on one of her two tries coming up tails in the tails case, so she can afford some preparation for heads.



import matplotlib.pyplot as plt
import numpy as np


def p_of_k(k):
    # Optimal probability of guessing heads in GRYL for a coin with
    # heads probability k: maximises k*p + (1-k)*(1-p**2), clipped to [0, 1].
    if k == 1:
        return 1
    nominal_p = -1 * k / (2 * (k - 1))
    if nominal_p > 1:
        return 1
    elif nominal_p < 0:
        return 0
    else:
        return nominal_p


# Survival probability given coin bias k and guess-heads probability p.
p_live = lambda k, p: k * p + (1 - k) * (1 - p**2)

ks = np.linspace(0, 1, 100)

dat = []
lives = []
for k in ks:
    dat.append(p_of_k(k))
    lives.append(p_live(k, dat[-1]))

fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(ks, dat)    # blue: optimal guess-heads probability
ax.plot(ks, lives)  # orange: overall survival chance
ax.annotate("2/3", (0.6666, 1))
ax.set_xticks(np.linspace(0, 1, 5))
ax.set_yticks(np.linspace(0, 1, 5))
plt.show()

Comment by Ben (ben-lang) on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-09T14:18:55.156Z · LW · GW

"If Beauty instead thinks that P(Heads)=1/2, then P(Heads&Monday)=1/2. Trying to guess what a Halfer would think, I'll also assume that she thinks that P(Tails&Monday)=1/4 and P(Tails&Tuesday)=1/4"

This part of the analysis gets to the core of my opinion. One could argue that if the coin is flipped and comes up tails, then "Tails&Monday" and "Tails&Tuesday" are both correct, sequentially; they are not mutually exclusive outcomes. One could also argue that, on a particular waking, Beauty doesn't know which day it is and thus they are exclusive in that moment.*

Which gets me back to my point: what do you mean by "probability"? If we take a frequentist picture, do we divide by the number of timelines (or, equivalently, separate experimental runs if many are run), or do we divide by the number of times Beauty wakes up? I think either can be defended, and that you should avoid the word "probability" or the notation P(X) until after you have specified which denominator you are taking.

I like the idea of trying to evaluate the optimal policy for Beauty, given the rules of the game. Maybe you are asked to write a computer function that is going to play the game as described.


I liked the thought about nondeterministic strategies sometimes being the best. I cooked up an extreme example underneath here to (1) show how I think the "find the optimal policy" approach works in practice and (2) give another example of randomness being a real friend.

Consider two very extreme cases of the sleeping beauty game:

Guess wrong and you die! (GWYD)

Guess right and you live! (GRYL)

In both games Beauty guesses whether the coin was heads or tails on each waking. After the experiment is over her guesses are considered. In the first game, one (or more) wrong guesses get Beauty killed. In the second, a single right guess is needed, otherwise she is killed.

In the first game, GWYD, it is (I think) obvious that the deterministic strategy "guess heads" is just as good as the deterministic strategy "guess tails". Importantly, the fact that Beauty is sometimes woken twice is just not relevant in this game, because providing the same answer twice (once on each day) changes nothing. As the "guess tails" and "guess heads" policies are equally good, one could say that the revealed probability of heads is 1/2. (Although, as I said previously, I would avoid the word "probability" entirely and just talk about optimal strategies given some particular scoring system.)

However, in the second game, GRYL, it is clear that a randomised strategy is best (assuming Beauty has access to a random number generator). If the coin comes up tails then Beauty gets two guesses, and (in GRYL) it would be very valuable for those two guesses to differ from one another. However, if it is heads she gets only one try.

I drew myself a little tree diagram, expecting to find that in GRYL the probabilities revealed by the optimal policy would be thirder ones, but I actually found that the optimal strategy in this case is to guess heads with 50% probability and tails with 50% probability (giving an overall survival chance of 5/8).

I had vaguely assumed that GWYD was a "halfer's game" and GRYL a "thirder's". Odd that they both give strategies that feel halfer-y. (It remains the case that in a game where right guesses are worth +$1 and wrong guesses -$1, the thirder tactic is best: guess tails every time, and expect to be right on 2/3 of guesses.)

* One could go even crazier and argue that experiences that are, by definition (amnesia drug) indistinguishable should be treated like indistinguishable particles.
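The GRYL numbers can be checked by brute force over mixed strategies; here is a quick sketch, where the survival expression just encodes the tree described above (heads: one awakening, survive iff that guess is heads; tails: two independent guesses, survive unless both are heads):

```python
import numpy as np

def survival(p, k=0.5):
    """GRYL survival chance if Beauty guesses heads with probability p,
    for a coin with heads probability k."""
    return k * p + (1 - k) * (1 - p**2)

# Grid-search the mixed strategies for a fair coin.
ps = np.linspace(0, 1, 10001)
best = ps[np.argmax(survival(ps))]
print(best)            # 0.5: the optimal mixed strategy
print(survival(best))  # 0.625: overall survival chance of 5/8
```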

Comment by Ben (ben-lang) on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-08T09:26:06.737Z · LW · GW

Yes, the Wednesday point is a good one, as is the oral exam comparison.

I think we agree that the details of the "scoring system" completely change the approach Beauty should take. This is not true for most probability questions. Like, if she can bet $1 at some odds each time she wakes up, then it makes sense for her policy going in to more heavily weight the timeline in which she gets to bet twice. As you point out, if her sleeping self repeats bets, that changes things. If the Tuesday bet is considered to be "you can take the bet, but it will replace the one you may or may not have given on a previous day, if there was one", then things line up to a half again. If she has to guess the correct outcome of the coin flip, or else die once the experiment is over, then the strategy where she always guesses heads is just as good as the one where she always guesses tails: her possible submissions are [H, T, HH, TT], two of which result in death.

Where we differ is that you think the fact that the details of the scoring system are relevant suggests the approach is misguided. In contrast, I think the fact that scoring-system details matter is the entire heart of the problem. If I take probabilities as "how I should bet", then the details of the bet should matter. If I take probabilities as frequencies, then I need to decide whether the denominator is "per timeline" or "per guess". I don't think the situation allows one to avoid these choices, and (at least to me) once these are identified as choices, with neither option forced upon us by probability theory, the whole situation appears to make sense.


Frequentist example (if you understood me above you certainly don't need this, so skip): The experiment is repeated 100 times, with 50 heads and 50 tails on the coin flip, so a total of 150 guesses are made by Beauty. BeautyV1.0 said the probability was 50/50 every time. This means that 100 of the times she answered 50/50 it was actually tails, and 50 of the times she answered this way it was actually heads. So she is poorly calibrated with respect to the number of times she answered; by this scoring system BeautyV2.0, who says "1/3", appears better.

However, in every single trial, even the ones where Beauty is consulted twice, she only provides one unique answer. If Beauty is a simple program called in a simulation, then she is a memoryless program, in some cases evaluated twice. In half the trials it was heads, in half it was tails, and in every trial BeautyV1.0 said it was 50/50. So she is well calibrated per trial, but not per evaluation. BeautyV2.0 is the other way around.
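The two denominators can be simulated directly (my own sketch; the "per trial" and "per evaluation" frequencies below are the ones BeautyV1.0 and BeautyV2.0 are respectively calibrated against):

```python
import random

random.seed(0)
n_trials = 100_000
heads_trials = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(n_trials):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2  # tails means Beauty is woken twice
    total_awakenings += awakenings
    if heads:
        heads_trials += 1
        heads_awakenings += 1

print(heads_trials / n_trials)              # ~1/2: per-trial frequency of heads
print(heads_awakenings / total_awakenings)  # ~1/3: per-awakening frequency
```

Same coin, same experiment; only the choice of denominator moves the answer between 1/2 and 1/3.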

Comment by Ben (ben-lang) on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-07T12:55:20.922Z · LW · GW

Thank you for sharing that Bayesian Beauty paper, it's very clear.

I think the OP is onto something useful when they distinguish between (my phrasing) the approach such that "my probabilities are calibrated such that they represent the proportion of timelines in which I am right" and the approach that means "my probabilities are calibrated with respect to the number of times I answer questions about them".

I take the "timelines" (first) option. The second one seems odd to me. For example, say someone considering a multiple-choice question decides that answer (a) is correct. They are told they have to wait in the examination room while other candidates finish. During this 10-minute wait they spend 9 minutes confident that (a) was the right choice, then, with 1 minute to go, realise (b) is actually right and change their answer. Here I would only judge the correctness (or otherwise) of the final choice, (b), and it would not be relevant that they spent more time believing (a). Similarly, if Beauty is woken a second time, any guesses she makes about the coin on this second waking should be taken to replace any guesses she made on the first waking (if you take the timelines point of view). I think we should only mark "final answers"; otherwise we end up in really weird territory where, if you think something is true with 2/3 probability but you used to think it was true with 1/3 probability, you should exaggerate your newfound belief that it is more likely in order to be more right on a temporal average.

Comment by Ben (ben-lang) on video games > IQ tests · 2023-08-07T10:36:29.410Z · LW · GW

I think there is another important reason people are selected based on a degree. When I was at school there were a lot of people, some combination of disruptive, annoying, or violently "laddish", who made me (and others) uncomfortable by deducting status points for niche, weird or "nerdy" interests. A correlation (at least at my school) was that none of those people went to university, and (at least at my university) no equivalent people of that type were there. Similarly, I have not met any such people in the workplace: college/university filters them out. It overlaps with classism to some extent. To overstate it wildly, you could say that employers are trying to select so that the workplace culture is dominated by middle-class social norms.

Comment by Ben (ben-lang) on Secure Hand Holding · 2023-08-03T21:35:35.071Z · LW · GW

Yes, you could use a stop sign. In the UK I am used to them instead painting the white road markings that mean "give way". I am not sure how much value the stop sign adds, because when the car starts moving again we still need to trust the driver's vision; I suppose we need to place a little less faith in their judgement.

Zebra crossings are always well lit for that reason. But yes, a sensible pedestrian (especially at night) would not step in front of a speeding car, but instead signal their intention to cross and begin crossing when the car stops or slows. I had a quick look for statistics on zebra crossing injuries and deaths and couldn't find anything clear in 5 minutes; instead I found a news article about the country's "most dangerous zebra crossing" being turned into traffic lights. It has videos, which basically show everything you are thinking can go wrong, going wrong.

Comment by Ben (ben-lang) on Secure Hand Holding · 2023-08-03T09:51:43.908Z · LW · GW

I think the occasional little dance of "oops, I accidentally made that car stop by standing too close to it, I will get away from the zebra crossing" is a relatively small cost. 

When there is a junction where one car (let's say one turning to join the main road) has to give way to other cars, would you always put a traffic light/stop sign? Or would you let people look and go when it's clear? I think it depends on the speed of the road, and on whether traffic levels mean that a gap will actually appear in a reasonable timeframe. I would apply the same logic to zebra crossings vs traffic light crossings.

Stop signs will often (e.g. at night) cause unnecessary vehicle stops. Traffic lights are more expensive than paint, keep pedestrians waiting as they press a button (even with a fast button system, time is wasted), and keep cars waiting for the light to turn green again even after people have finished crossing. So the basic zebra has some advantages.

Comment by Ben (ben-lang) on Lack of Social Grace Is an Epistemic Virtue · 2023-07-31T17:08:05.281Z · LW · GW

This is kind of an aside, but does this Feynman story strike anyone else as off? It's kind of too perfect. Not even subtly. It strikes me as "significantly exaggerated", at the very least.

Comment by Ben (ben-lang) on Pulling the Rope Sideways: Empirical Test Results · 2023-07-28T13:41:06.040Z · LW · GW

I think the reason it doesn't work is that a tug of war is not so much about the force vectors being added together (if it were, then pulling sideways would be effective). I think it is more about which side's members are lighter or have worse shoes, and therefore slip. If you have 1 person pulling sideways (the +y direction), and another 5 each pulling in the +x and -x directions respectively, then (ignoring the x direction) we have a force (it doesn't matter who is exerting it) pulling 1 person in the -y direction and 10 in the +y direction. Which group is going to slide first? (The 1 person, I think.) And when they do, you just have the other 10 not having moved (the static friction was never overcome), and 1 person who has moved closer to the rope/everyone else, but has not moved them at all.
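The friction argument above can be put in numbers. This is just an illustrative sketch with made-up figures for the pull force and per-person friction limit; the point is only that the sideways force concentrates on the lone puller but is spread across the ten others.

```python
# Illustrative numbers of my own: 1 person pulls the rope sideways (+y),
# while 5 vs 5 pull in +x/-x and cancel. The y-tension reaction falls
# entirely on the lone puller, but is shared among the 10 others.

F_SIDEWAYS = 400.0       # N, pull of the lone sideways puller (assumed)
FRICTION_LIMIT = 600.0   # N, static friction limit per person (assumed)

# Sideways force felt by each member of the main tug of war (10 people)
per_person_main = F_SIDEWAYS / 10   # shared load: 40 N each

# Reaction force on the lone sideways puller: the full 400 N
on_lone_puller = F_SIDEWAYS

# The lone puller is far closer to their friction limit, so (with equal
# shoes and weight) they are the one who slips first.
print(per_person_main, on_lone_puller)
```

So with equal friction limits on every person, the lone sideways puller exceeds their limit roughly ten times sooner than anyone on the main rope does.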

Comment by Ben (ben-lang) on The First Room-Temperature Ambient-Pressure Superconductor · 2023-07-28T09:43:59.153Z · LW · GW

The rocks comic is funny, although it's not a great example. If the rock is in the glare of the hot Middle Eastern sun it should be heating up despite being hotter than the surrounding air. The unimpressed people have probably seen rocks out in the sun before. "Sometimes rocks get hot, for example if they are exposed to sunlight." Show me a rock that gets hot in the shade.

Comment by Ben (ben-lang) on The First Room-Temperature Ambient-Pressure Superconductor · 2023-07-28T09:29:47.826Z · LW · GW

Yes, that is the video I found on twitter.

You could be right and it is levitation, but I had a thing called a "magnetic sculpture" as a child, and to me this video just doesn't look any different from how the metal rectangles in that toy stood on the magnet, usually at some preferred angle. Google "Magnetic sculpture" to get a lot of pictures of things like this: Nuts Magnetic Sculpture | Spilsbury

On the other hand, I have also seen the demonstration where an actual superconductor is lifted out of a bath of liquid nitrogen with some tool, then set floating (maybe also spinning) above the magnet. I thought that looked quite different, although all the liquid nitrogen "smokiness" could have just been adding so much cool that I was more easily impressed.

I am not seeing the relevance of Earnshaw. I thought that a superconductor floating in a magnetic field was distinct from one magnet pushing on an induced magnet. My vague understanding is that the magnetic field lines got "locked" inside the superconductor and that this was indeed stable (flux pinning is the term I think). I mean, I have seen countless demonstrations with cold superconductors and it seemed pretty stable, you could poke it to make it spin, and the poke wouldn't make it immediately collapse or fly away.

Comment by Ben (ben-lang) on The First Room-Temperature Ambient-Pressure Superconductor · 2023-07-27T09:00:54.373Z · LW · GW

They link a video ( ) in that article. I can't get it to play for some reason, but I think that is a really positive update towards it actually being legit. I can't imagine all the ways the electrical measurements might be mistaken, but a video of "look, this stuff hovers" sounds hard to mess up.

Got the video working at last (found a version of it on twitter); now I think it's an update against the superconductor. I have played with bits of metal touching magnets before, and they often sort of "spring up" on one end, or form slightly rigid structures. The video looks just like that, not like it's levitating at all.

Other minor point: their competing interests and data statements follow the Nature template, so that is very likely where it has been submitted. If the template had suggested submission to anything other than a high-impact journal (Nature, Science) that would indicate some kind of problem.

Comment by Ben (ben-lang) on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T16:13:58.411Z · LW · GW

Thanks for clarifying, that makes sense.

I also have no idea what I am. Maybe something in the vein of what I think Hume proposed, where you are a kind of second-order utilitarian. (You use utilitarianism to determine a set of rules of thumb, then follow those rules of thumb instead of actually being a utilitarian.)

Comment by Ben (ben-lang) on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T09:45:07.713Z · LW · GW

I think the poster acknowledges that the number 20 is somewhat ad hoc and handwavy; for example, they go on to do the calculations later in their post assuming fish suffering is 100 times less important than human suffering. So they have given the number a factor-of-5 uncertainty.

Although, when I was reading the post, I saw that as more a rhetorical "trap" than a real point. As soon as the poster says "fish suffering is 20 times less important than human suffering", it invites everyone to focus on the number 20 and start trying to work out whether a ratio of 100 or 1,000 would align better with their own instincts. The trap is that anyone who accepts the real premise, that human suffering and fish suffering are somehow interchangeable up to some exchange rate, is already going to be snared by the argument, because even gigantic factors make fish farming work out as a bigger problem than, say, gun homicides or traffic accidents.
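The "even gigantic factors" point can be made concrete with back-of-envelope numbers. The figures below are round order-of-magnitude assumptions of my own, not numbers from the post; the only claim is that the conclusion is insensitive to the exchange rate once you accept any exchange rate at all.

```python
# Illustrative only: both constants are rough, assumed orders of magnitude.
FARMED_FISH_DEATHS_PER_YEAR = 100e9   # assumed, tens of billions plus
GUN_HOMICIDES_PER_YEAR = 50e3         # assumed, tens of thousands

for exchange_rate in [20, 100, 1000, 100000]:
    # How many "human-equivalents" of suffering does fish farming imply
    # at this exchange rate?
    human_equivalents = FARMED_FISH_DEATHS_PER_YEAR / exchange_rate
    print(exchange_rate, human_equivalents > GUN_HOMICIDES_PER_YEAR)
```

Even at a rate of 100,000 fish to 1 human, the fish-farming total still dwarfs the homicide figure, which is why haggling over 20 vs 100 doesn't escape the trap.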

Comment by Ben (ben-lang) on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T09:32:27.056Z · LW · GW

I don't think utilitarians are a sufficiently homogeneous group for them all to agree on any kind of specific weighting. And I don't really see why that is a problem. Each individual utilitarian might be internally coherent; that doesn't mean the group will agree on anything much or be coherent taken together.

You say you are not a utilitarian, and then you offer a utilitarian argument (my understanding of your argument: fish suffering is worth human enjoyment). Maybe we are using the words differently, but I would say anyone who is trying to weigh up the suffering/pleasure on either side of a decision to determine its morality is fundamentally a utilitarian.

Many (most?) people do not approach ethics in this way at all. They take axioms like "murder is wrong" or "eating fish is natural" and the pleasure or suffering that follows as a consequence of the actions taken is irrelevant to their morality.

Comment by Ben (ben-lang) on Problems with predictive history classes · 2023-07-22T16:13:17.849Z · LW · GW

I think you are missing the biggest problem: that the very question being asked tells you a lot about the future.

The student is up to 1939. One of the questions is "Will there be a war in Europe?". But I don't even know all the other (probably quite plentiful) examples of years in which it looked like a war was possible but one did not happen. Would spending as much time on those "could have been" wars actually be useful or interesting? Maybe, I don't know.

Comment by Ben (ben-lang) on Housing and Transit Roundup #5 · 2023-07-13T12:59:52.319Z · LW · GW

"The problem is that speed limits have for a long time been artificially low to adjust for the complete lack of enforcement, combined with the optionality this gives to police. It is often actively unsafe to drive the speed limit. What we should do, obviously, is not to stop requiring license plates or not enforce the law. We should enforce the law, and adjust the law for the expectation that it will be enforced."

I don't know if it is the same in the USA, but in the UK (where speed cameras are, I think, much more widespread) there is a widely held perception that you can exceed the speed limit by 10% and be guaranteed that nothing will come of it. However, I have noticed that satellite navigation systems consistently indicate the car's speed as 10% slower than the speed displayed on the car's speedometer. Having brought this up in conversation, it seems like everyone who has checked (with different cars and sat navs) finds the same thing.

My theory is that a car manufacturer is more likely to be in legal hot water if the speedometer under-estimates the car's speed than if it over-estimates it. So they have all clustered around a 10% exaggeration. Which in turn leads to the myth that "speed cameras only go off between 75 and 80mph, even though the limit is 70mph".

A final data point: going through average speed checks (a computer-camera reads your number plate at the beginning of the road and starts a clock, which stops when another camera reads your plate at the end of the road), all the lorries appear to be going a little over the limit (the magic 10% again). If you drive a lorry you probably know whether your speedometer is lying, and maybe your employer gives you one that is actually accurate.

So adjusting the limits and enforcing the adjusted values makes sense. I am just speculating that there is an extra 10% discrepancy between the speeds people think they are doing and what they are actually doing.
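The arithmetic of the speculated over-read is simple enough to write down. The 10% figure is my own anecdotal observation, not an official one:

```python
def true_speed(indicated_mph, over_read=0.10):
    """Convert an indicated speed to true speed, assuming the speedometer
    reads a fixed fraction high (10% assumed, from the observation above)."""
    return indicated_mph / (1 + over_read)

# A driver "doing 77 on the dial" (10% over a 70 limit) is actually at
# the limit, which would explain the magic-10% folklore.
print(true_speed(77))  # exactly 70.0
print(true_speed(70))  # about 63.6 - slower than they think
```

This is why drivers sitting at an indicated 10% over the limit never get flashed: they are, on this theory, doing the limit exactly.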

Comment by Ben (ben-lang) on Negativity enhances positivity · 2023-07-06T15:52:48.367Z · LW · GW

It varies between cultures a lot. When I check reviews of stories I have written on Amazon or Goodreads I always calibrate by clicking on the user portrait and seeing what they normally give. Many of my 5-star ratings are not much to celebrate: it turns out they have given 5 to everything ever. But it makes me smile when I see that my 4-star was the highest rating that person gave in the last 10-20 things they reviewed.

I assume that Uber and similar software already does this automatically under the hood. They know a 4-star rating from a prolific 5-star giver is a bad sign. They know a 4-star is good from that person who aims to give 3 on average because "that's obviously what a well-calibrated person does". I think the searching algorithms at least know this.
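The per-reviewer calibration described above could be sketched as scoring each rating against the reviewer's own history. This is a guess at the general idea, not Uber's (or anyone's) actual algorithm:

```python
from statistics import mean, pstdev

def calibrated_score(rating, reviewer_history):
    """Score a rating by how far it sits from this reviewer's usual rating,
    in units of their own spread (a z-score)."""
    mu = mean(reviewer_history)
    sigma = pstdev(reviewer_history) or 1.0  # guard against a zero spread
    return (rating - mu) / sigma

# A 4 from someone who gives 5 to everything is a bad sign...
print(calibrated_score(4, [5, 5, 5, 5, 5, 4]))  # negative
# ...while a 4 from a habitual 3-giver is high praise.
print(calibrated_score(4, [3, 3, 2, 3, 3, 4]))  # positive
```

The same raw number lands on opposite sides of zero depending on who gave it, which is exactly the "4-star from a prolific 5-star giver" effect.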

Comment by Ben (ben-lang) on Self-Blinded Caffeine RCT · 2023-06-28T15:17:02.700Z · LW · GW

I think they mean that beforehand you said "My prediction about the content of the pill is more accurate than random guesses: 80%", meaning that you were 80% sure you would do better than a 50/50 guess of what type of pill it was throughout the trial. Then you found that you did indeed do better than 50/50, but didn't give a number, and I think sludgepuddle thought you had guessed right 80% of the time.

Comment by Ben (ben-lang) on Why am I Me? · 2023-06-26T08:46:15.397Z · LW · GW

To me, that version of the doomsday question is extremely unconvincing for a very different reason. It is using only the most basic (single number, N) aspect of the available data. We could go one step more sophisticated and get the number of people born last year and extrapolate that number of annual births out to eternity. Or we could go yet another step more sophisticated and fit an exponential to the births per year graph to extrapolate instead. Presumably we could go much further, fitting ever more complex models to a wider set of available data. Perhaps even trying to include models of the Earth's calorific budget or the likelihood of nuclear war.

It's not clear to me why we would put any credence in the doomsday argument (take N, approximately double it) specifically, out of all the available models.
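The point that "double N" is only one of many possible extrapolations can be made mechanical. Both constants below are rough, commonly quoted orders of magnitude, used here purely as assumptions, and the "models" are deliberately as crude as the doomsday argument itself:

```python
N_BORN_SO_FAR = 100e9       # rough figure often quoted; an assumption here
BIRTHS_LAST_YEAR = 130e6    # rough figure; an assumption here

# Model A: the doomsday argument. Expect roughly N more births in total.
doomsday_total = 2 * N_BORN_SO_FAR

# Model B: extrapolate last year's birth rate flat for, say, 10,000 years.
flat_rate_total = N_BORN_SO_FAR + BIRTHS_LAST_YEAR * 10_000

# The two one-line "models" disagree by nearly an order of magnitude,
# and nothing about the data privileges model A over model B.
print(doomsday_total, flat_rate_total)
```

One could keep going with an exponential fit, a calorific-budget model, and so on; each uses strictly more of the available data than model A does.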

Comment by Ben (ben-lang) on An Intro to Anthropic Reasoning using the 'Boy or Girl Paradox' as a toy example · 2023-06-22T14:18:49.448Z · LW · GW

Nice post, very clear.

Maybe this overlaps with some of the other points, but for me it seems a sensible way of navigating this situation is to reject the entire notion that there existed a set of observers, and that "me-ness" was then injected into one of them at random. Most of the issues seem to spring from this. If my subjective experience is "bolted on" to a random observer then of course what counts as an observer matters a lot, and it makes sense to be grateful that you are not an ant.

But I can imagine worlds full of agents and observers, where none of them are me. (For example, Middle Earth is full of observers, but I don't think any of them are me.) I can also imagine worlds crammed with philosophical zombies that aren't carrying the me-ness from me or from anyone else.

I suppose if you take this position to its logical conclusion you end up with other problems. "If I were an ant, I wouldn't be me" sounds coherent. "I just rolled a 5 on that die; if it had been a 6 I wouldn't be me (I would be a slightly different person, with a 6 on their retina)" sounds like gibberish, and would result in failing to update to realise the die was weighted.

Comment by Ben (ben-lang) on What is the foundation of me experiencing the present moment being right now and not at some other point in time? · 2023-06-22T13:35:45.608Z · LW · GW

Related to this idea of space, is maybe asking "why am I me, and not someone else?".

The question in quotes is obviously nonsense, but I think it can get quite confusing, especially if we start assuming that people can be replicated (perhaps using digital copies). If you are one of 5 copies of a digital personality, does it make sense for you to be grateful you are not a different one of those copies? The world would not in any mechanical way be different if you were one of the copies and they were you. So it becomes complicated to think about because it seems to imply that two mechanically identical universes can be subjectively different for "me" (for some value of "me").

The time question in the original post is, I think, kind of equivalent. They are sort of thinking that there are many, many "me"s at different times, all with different experiences, but that I am right now only one of those "me"s. What is special about that one, such that it is the one I am experiencing right now?

Comment by Ben (ben-lang) on Are Bayesian methods guaranteed to overfit? · 2023-06-19T10:38:41.363Z · LW · GW

I wonder if your more detailed model could be included in a derivation like the one in the post above. The post assumes that every observation the model has (the previous y values) is correct. Your idea of mislabellings or sub-perfect observations might be includable as some rule that says the y's have an X% chance of just being wrong.

We can imagine two similar models. [1] A "zoomed in" model consists of two parts: first, a model of the real world, and second, a model of observation errors. [2] A "zoomed out" model just combines the real world and the observation errors and tries to fit to that data. If [2] sees errors, then the model is tweaked to predict errors. Equivalent in the maths, but importantly different in spirit, is model [1], which when it encounters an obvious error does not update the world model, but might update the model of the observation errors.

My feeling is that some of this "overfitting" discussion might be fed by people intuitively wanting the model to do [1], but actually building or studying the much simpler [2]. When [2] tries to include observation errors into the same map it uses to describe the world we cry "overfitting".
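One minimal way to write down model [1] is a mixture likelihood: each observation is either drawn from the world model, or (with some small probability) "just wrong". Everything below is an illustrative sketch with assumed numbers, not anything from the post:

```python
import math

def log_likelihood(y_obs, y_pred, sigma=1.0, p_error=0.05, error_range=100.0):
    """Mixture likelihood: (1 - p_error) * Normal(y_pred, sigma)
    plus p_error * Uniform over a broad range (the 'mislabelled' case)."""
    normal = math.exp(-0.5 * ((y_obs - y_pred) / sigma) ** 2) / (
        sigma * math.sqrt(2 * math.pi)
    )
    uniform = 1.0 / error_range  # broad "this point is just wrong" term
    return math.log((1 - p_error) * normal + p_error * uniform)

# An ordinary point is explained by the world model; a wild outlier is
# absorbed by the error term rather than dragging the fit toward it.
print(log_likelihood(0.1, 0.0))   # close to the Normal's log-density
print(log_likelihood(50.0, 0.0))  # finite, floored by the uniform term
```

Under a pure Gaussian likelihood the outlier's log-likelihood would be astronomically negative and the fit would contort to accommodate it; the mixture caps the penalty, which is the "update the error model, not the world model" behaviour of [1].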

Comment by Ben (ben-lang) on why I'm anti-YIMBY · 2023-06-15T17:48:52.001Z · LW · GW

You make a convincing case that there are forces that encourage very rich people to congregate relatively close together. I don't think it's the main force behind what is going on, but I can see that it exists. Other forces also exist, like those I outlined above. Mine is not a productivity argument, and you could, if you wanted, even lump my suggestion under "there were other rich people there to network with", where "network" here means "marry" and "rich people" here means "people with a career, not a job".

Comment by Ben (ben-lang) on Instrumental Convergence? [Draft] · 2023-06-15T11:47:57.601Z · LW · GW

Other agents are not random though. Many agents act in predictable ways. I certainly don't model the actions of people as random noise. In this sense I don't think other agents are different from any other physical system that might be more-or-less chaotic, unpredictable or difficult to control.

Comment by Ben (ben-lang) on why I'm anti-YIMBY · 2023-06-15T09:54:06.880Z · LW · GW

I think you are underselling the networking advantages of cities.

Most people are eventually part of a couple or family. Most couples make compromises, with one or the other taking a not-the-best position for their career because they want to live in the same area as their spouse. In a big city (my experience is London) there are enough jobs in enough industries close together that a typical couple can usually both pursue their ideal careers (or close to them) without being in different places.

Add to this that your job might change. If you live in Boeing town (population: high, employers: one), then you work at Boeing, and if you stop working at Boeing you move house and your children change schools, etc. If you live in a big city and you are a careerist, you can do the whole "monkey bars" thing where you keep jumping between companies whenever you think you can do better, all without moving home every 2-3 years.

Comment by Ben (ben-lang) on UFO Betting: Put Up or Shut Up · 2023-06-14T09:40:50.106Z · LW · GW

This sounds like the opening premise of a fun TV show or film.

UFO believer makes a big bet with (for the sake of TV) one very rich person, then heads out on an epic road trip in a camper van to find the alien evidence. A reporter covers the story and she starts travelling with him, sending updates back to her paper. Obviously they fall for each other.

They have various fun adventures where they keep encountering unconvincing evidence, or occasionally super-convincing evidence (a UFO flies by) that they comically fail to catch on camera. Meanwhile the rich person on the other side of the bet becomes a villain, sending a hench-person to cut the tires on their van, get them in trouble with the police and generally obstruct the process.

Comment by Ben (ben-lang) on Kelly betting vs expectation maximization · 2023-05-31T11:54:57.548Z · LW · GW

Yes, my position did indeed shift, as you changed my mind and I thought about it in more depth. My original position was very much pro-Kelly. On thinking about your points I now think it is the while my_money > 0 aspect where the problem really lies. I still stand by the distinction between the optimal global policy and the optimal action at each step, because at each step the optimal action (for Kelly or not) is to shake the dice one more time. But if this is taken as a policy, we arrive at the while my_money > 0 break condition being the only escape, which is clearly a bad policy. (It guarantees that in any world where we walk away, we walk away with nothing.)

Comment by Ben (ben-lang) on Kelly betting vs expectation maximization · 2023-05-31T09:26:25.597Z · LW · GW

I understand your point, and I think I am sort of convinced. But it's the sort of thing where minor details in the model can change things quite a lot. For example, I am sort of assuming that Bob gets no utility at all from his money until he walks out of the casino with his winnings, i.e. having the money while still being in the casino is worth nothing to him, because he can't buy stuff with it. Whereas you seem to be comparing Bob with his counterfactual at each round number, while I am only interested in Bob at the very end of the process, when he walks away with his winnings to get all that utility. But your proposed Bob never walks away from the table with any winnings (assuming no round limit). If he still has winnings, he doesn't walk away.

Let's put details on the scenario in two slightly different ways. (1) The "casino" is just a computer script where Bob can program in a strategy (bet it all every time), and then just type in the number of rounds (N). (Or, for your version of Bob, put the whole thing in a "while my_money > 0:" loop.) We could alternatively (2) imagine that Bob is in the casino playing each round one at a time, and that the time taken for one round is a fixed utility cost of some small number (say 0.1). This doesn't change anything for utility-maximising Bob, and in fact the time cost of one more round relative to his expected gains shrinks over time as his money doubles up (later rounds are a better deal in expectation).

With these models I just see a system where Bob deterministically loses all his money. The longer he goes before going bust, the more of his time he wastes as well (in (2)).
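Model (1) is short enough to actually run. I am assuming a favourable bet (say a 60% chance of doubling the stake), since the exact odds aren't restated here; the conclusion is the same for any win probability below 1:

```python
import random

def bob_plays(starting_money=1.0, p_win=0.6, seed=0):
    """Bob's 'bet everything, every round' strategy inside the only loop
    he has chosen: keep going while he has any money at all."""
    rng = random.Random(seed)
    money, rounds = starting_money, 0
    while money > 0:  # Bob's sole exit condition
        money = money * 2 if rng.random() < p_win else 0.0
        rounds += 1
    return rounds  # money is necessarily 0 when the loop ends

# Every run ends the same way: Bob walks out with nothing. The only
# thing that varies is how many rounds it took to get there.
results = [bob_plays(seed=s) for s in range(1000)]
print(min(results), max(results))
```

Each individual round has positive expected value, yet the policy as a whole loses everything with probability 1, which is the point being made above.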

Kelly betting doesn't actually fix my complaint. A Kelly-betting Bob with no point at which he says "yes, that is enough money, time to leave" actually gets minus infinity utility in model (2), where doing a round costs a small but finite amount of utility in terms of the time spent. Because the money acquired doesn't pay off until he leaves, which he never does.

I think maybe you are right that it comes down to the utility function. Any agent (even the Kelly one) will behave in a way that comes across as obviously insane if we allow their utility function to go to infinity. Although I still don't quite see how that infinity actually ever enters in this specific case. If we answer the infinite utility function with an infinite number of possible rounds then we can say with certainty that Bob never walks away with any winnings.

Comment by Ben (ben-lang) on Kelly betting vs expectation maximization · 2023-05-30T20:53:47.972Z · LW · GW

Yes, I completely agree that the main reason in real life we would recommend against that strategy is that we instinctively (and usually correctly) feel that the person's utility function is sub-linear in money, so that the huge payout with tiny probability is a bad deal. Obviously if that many dollars is needed to cure some disease that will otherwise kill them immediately, that changes things.

But there is an objection that I think runs somewhat separately to that, which is the round limit. If we are operating under an optimal, reasonable policy, then (outside commitment-tactic negotiations) I think it shouldn't really be possible for a new outside constraint to improve our performance. Because if the constraint does improve performance, then we could have adopted that constraint voluntarily, and our policy was therefore not optimal. And the N-round limit is doing a fairly important job at improving Bob's performance in this hypothetical. Otherwise Bob's strategy is equivalent to "I bet everything, every time, until I lose it all." Perhaps this second objection is just the old one in a new disguise (any agent with a finitely-bounded utility function would eventually reach a round number where they decide "actually I have enough now", and thus restore my sense of what should be), but I am not sure that it is exactly the same.

Comment by Ben (ben-lang) on Kelly betting vs expectation maximization · 2023-05-30T13:13:59.697Z · LW · GW

The problem with maximising expected utility is that Bob will sit there playing one more round, then another round, again and again, until he eventually loses everything. Each step maximised the expected utility, but the policy overall guarantees zero utility with certainty, assuming Bob never runs out of time.

But even as utility-maximising Bob is saved from self-destruction by the clock, he will think to himself "damn it! Out of time. That is really annoying, I want to keep doing this bet".

At least to me, Kelly betting fits in the same kind of space as the Newcomb paradox and (possibly) the prisoner's dilemma. They all demonstrate that the optimal policy is not necessarily given by a sequence of optimal actions at every step.

Comment by Ben (ben-lang) on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-30T08:17:38.571Z · LW · GW

I think that the situation of someone spamming all the "bad" reactions on a post they don't like is just the downvote system that already exists. If a post has a fair amount of karma, then a volley of 10 different negative reacts might not mean much.