The teapot comparison (to me) seems to be a bad one. I got carried away and wrote a wall of text. Feel free to ignore it!
First, let's think about normal probabilities in everyday life. Sometimes there are more ways for one state to come about than another: for example, if I shuffle a deck of cards, the number of orderings that look random is much larger than the number of ways (exactly one) for the cards to be perfectly in order.
However, this manner of thinking only applies to certain kinds of thing - those that are in-principle distinguishable. If you have a deck of blank cards, there is only one possible order, BBBBBB.... To take another example, an electronic bank account might display a total balance of $100. How many different ways are there for that $100 to be "arranged" in that bank account? The same number as 100 coins labelled "1" through "100"? No, of course not. It's just an integer stored on a computer, and there is only one way of picking out the integer 100. The surprising examples of this come from quantum physics, where photons act more like the bank account: there is only one way for a particular mode to contain 100 indistinguishable photons. We don't need to understand the Standard Model for this; even if we didn't have any quantum theory at all we could still observe these boson statistics in experiments.
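As a toy illustration of this counting gap, here is a quick sketch in Python (the specific numbers are my own examples, not from the comment above):

```python
from math import factorial, comb

# Orderings of a 5-card deck with distinguishable faces vs. blank cards.
distinct_orderings = factorial(5)  # 120 different-looking sequences
blank_orderings = 1                # every ordering of blanks looks the same

# Bose counting: the number of ways to put k indistinguishable photons
# into m modes is C(k + m - 1, k) ("stars and bars"); with a single mode
# there is exactly one way, matching the bank-balance intuition.
def bose_states(k, m):
    return comb(k + m - 1, k)

print(distinct_orderings)   # 120
print(bose_states(100, 1))  # 1
print(bose_states(2, 2))    # 3 states: |20>, |11>, |02> -- not 4
```

The last line is the standard boson-vs-distinguishable contrast: two labelled particles in two boxes give 4 arrangements, two indistinguishable photons give only 3.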
So now, we encounter anthropic arguments like Doomsday. These arguments are essentially positing a distribution, where we take the exact same physical universe and its entire physical history from beginning to end (which includes every atom, every synapse firing and so on). We then look at all of the "counting minds" in that universe (people count, ants probably don't, aliens, who knows), and we create a whole slew of "subjective universes" U1, U2, U3, etc., where each of them is atomically identical to the original but "I" am born as a different one of those minds (I think these are sometimes called "centred worlds"). We assume that all of these subjective universes were, in the first place, equally likely, and we start finding it a really weird coincidence that in the one we find ourselves in we are a human (instead of an ant), or that we are early in history. This is, as I understand it, The Argument. You can phrase it without explicitly mentioning the different Us, by saying "if there are trillions of people in the future, the chances of me being born in the present are very low. So, the fact I was born now should update me away from believing there will be trillions of people in the future" - but the Us are still doing all the work in the background.
The conclusion depends on treating all those different subscripted Us as distinguishable, like we would for cards that had symbols printed on them. But if all the cards in the deck are identical, there is only one sequence possible. I believe that all of the U1, U2, U3, etc. are identical in this manner. By assumption they are atomically identical at all times in history; they differ only by which one of the thinking apes gets assigned the arbitrary label "me" - which isn't physically represented in any particle. If you think they look different, then we can indeed make these arguments; but if you think they are merely different descriptions of the same exact thing, then the Doomsday argument no longer makes sense, and possibly some other anthropic arguments also fall apart. I don't think they do look different: if every "I" in the universe suddenly swapped places - but leaving all memories and personality behind in the physical synapses etc. - then how would I even know it? I would be a cyborg fighting in WWXIV and would have no memories of ever being some puny human typing on a web forum in the 21st century. Instead of imagining that I was born as someone else, I could imagine that I could wake up as someone else, and in any case I wouldn't know any different.
So, at least to me, it looks like the anthropic arguments are advancing the idea of this orbital teapot (the different subscripted Us), although it is, in fairness, a very conceptually plausible teapot. There are, to me, three possible responses:
1 - This set of different worlds doesn't logically exist. You could push for this response by arguing "I couldn't have been anyone but me, by definition." [Reject the premise entirely - there is no teapot]
2 - This set of different worlds does logically make sense, and after accepting it I see that it is a suspicious coincidence I am so early in history and I should worry about that. [accept the argument - there is a ceramic teapot orbiting Mars]
3 - This set of different worlds does logically make sense, but they should be treated like indistinguishable particles, blank playing cards or bank balances. [Accept the core premise, but question its details in a way that rejects the conclusion - there is a teapot, but it's chocolate, not ceramic.]
So, my point (after all that, sorry!) is that I don't see any reason why (2) is more convincing than (3).
[For me personally, I don't like (1) because I think it does badly in cases where I get replicated in the future (e.g. sleeping beauty problems, or mind uploads, or whatever). I reject (2) because the end result of accepting it is that I can infer information from evidence that is not causally linked to it (e.g. I discover that the historical human population was much bigger than previously reported, and as a result I conclude the apocalypse is further in the future than I previously supposed). This leads me to thinking (3) seems right-ish, although I readily admit to being unsure about all this.]
I found this post to be a really interesting discussion of why organisms that sexually reproduce have been successful, and of how the whole thing emerges. I found the writing style, which switched rapidly between relatively serious biology and silly jokes, very engaging.
Many of the sub claims seem to be well referenced (I particularly liked the swordless ancestor to the swordfish liking mates who had had artificial swords attached).
"Stock prices represent the market's best guess at a stock's future price."
But they are not the same as the market's best guess at its future price. If you have a raffle ticket that will, 100% for definite, win $100 when the raffle happens in 10 years' time, then the market's best guess of its future price is $100, but nobody is going to buy it for $100 now, because $100 now is better than $100 in 10 years.
Whatever it is that people think the stock will be worth in the future, they will pay less than that for it now (because $100 in the future isn't as good as just having the money now). So even if it were a cosmic law of the universe that all companies become more productive over time, and everyone knew this to be true, the stocks in those companies would still go up over time, like the raffle ticket approaching the payday.
Toy example:
1990 - Stocks in C cost $10. Everyone thinks they will be worth $20 by the year 2000, but 10 years is a reasonably long time to wait to double your money, so these two things (the expectation of $20 in the future, and the reality of $10 now) coexist without contradiction.
2000 - Stocks in C now cost $20, as expected. People now think that by 2010 they will be worth $40.
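The toy example can be sketched as a present-value calculation. The 7.2% annual discount rate below is an assumption chosen only because it makes money roughly double in value every 10 years; it is purely illustrative, not a claim about real markets:

```python
# Present value of a known future price under a constant discount rate.
def present_value(future_price, years, rate=0.072):
    return future_price / (1 + rate) ** years

# 1990: the market expects $20 in 2000, so it pays about $10 today.
print(present_value(20, 10))  # ~10
# 2000: expectations move to $40 by 2010, so the price is about $20 now.
print(present_value(40, 10))  # ~20
```

So the price rises steadily over time even though, at every moment, it sits below the expected future price by exactly the discount.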
Other Ant-worriers are out there!
""it turned out this way, so I guess it had to be this way" doesn't resolve my confusion"
Sorry, I mixed the position I hold (that they maybe work like bosons) and the position I was trying to argue for, which was an argument in favor of confusion.
I can't prove (or even strongly motivate) my "the imaginary mind-swap procedure works like a swap of indistinguishable bosons" assumption, but, as far as I know, no one arguing for anthropic arguments can prove (or strongly motivate) the opposite position - which is essential for many of these arguments to work. I agree with you that we don't have a standard model of minds, and without such a model the Doomsday Argument, and the related problem of being cosmically early, might not be problems at all.
Interestingly, I don't think the weird boson argument actually does anything for worries about whether we are simulations, or Boltzmann brains - those fears (I think) survive intact.
I suspect there is a large variation between countries in how safely taxi drivers drive relative to others.
In London my impression is that the taxis are driven more safely than non-taxis. In Singapore it appears obvious to casual observation that taxis are much less safely driven than most of the cars.
At least in my view, all the questions like the "Doomsday argument" and "why am I early in cosmological history?" are putting far, far too much weight on the anthropic component.
If I don't know how many X's there are, and I learn that one of them is numbered 20 billion, then sure, my best guess is that there are 40 billion total. But it's a very hazy guess.
If I don't know how many X's will be produced next year, but I know 150 million were produced this year, my best guess is 150 million next year. But it's a very hazy guess.
If I know that the population of X's has been growing exponentially with some coefficient, then my best guess for the future is to extrapolate that out to future times.
If I think I know a bunch of stuff about the amount of food the Earth can produce, the chances of asteroid impacts, nuclear wars, dangerous AIs or the end of the Mayan calendar then I can presumably update on those to make better predictions of the number of people in the future.
My take is that the Doomsday argument would be the best guess you could make if you knew literally nothing else about human beings apart from the number that came before you. If you happen to know anything else at all about the world (e.g. that humans reproduce, or that the population is growing) then you are perfectly at liberty to make use of that richer information and put forward a better guess. Someone who traces out the exponential of human population growth to the heat death of the universe is being a bit silly (let's call this the Exponentiator Argument), but on pure reasoning grounds they are miles ahead of the Doomsday argument: both of them applied a natural, but naïve, extrapolation to a dataset, but the Exponentiator extrapolated from a much richer and more detailed one.
Similarly, to answer "why are you early?" you should use all the data at your disposal. Given who your parents are, what your job is, your lack of cybernetic or genetic enhancements, how could you not be early? Sure, you might be a simulation of someone who only thinks they are in the 21st century, but you already know from what you can see and remember that you aren't a cyborg in the year 10,000, so you can't include that possibility in the imaginary dataset you are using to reason about how early you are.
As a child, I used to worry a lot about what a weird coincidence it was that I was born a human being, and not an ant, given that ants are so much more numerous. But now, when I try to imagine a world where "I" was instead born as the ant, and the ant born as me, I can't point to any physical sense in which that world is different from our own. I can't even coherently point to a metaphysical sense in which it is different. Before we can talk about probabilities as an average over possibilities, we need to know whether the different possibilities are actually different, or just different labellings of the same outcome. To me, there is a pleasing comparison to be made with how bosons work. If you think about a situation where two identical bosons have their positions swapped, it "counts as" the same situation as before the swap, and you DON'T count it again when doing statistics. Similarly, I think that if two identical minds are swapped you shouldn't treat it as a new situation to average over; it's indistinguishable. This is why the cyborgs are irrelevant: you don't have an identical set of memories.
I remember reading something about the Great Leap Forward in China (it may have been the Cultural Revolution, but I think it was the Great Leap Forward) where some communist party official recognised that the policy had killed a lot of people and ruined the lives of nearly an entire generation, but they argued it was still a net good because it would enrich future generations of people in China.
For individuals, you weigh up the risks and rewards of deferring your resources to the future. But as a society, asking individuals to give up a lot of potential utility for unborn future generations is a harder sell. It requires coercion.
I think we might be talking past each other. I will try and clarify what I meant.
Firstly, I fully agree with you that standard game theory should give you access to randomization mechanisms. I was just saying that I think that hypotheticals where you are judged on the process you use to decide, and not on your final decision are a bad way of working out which processes are good, because the hypothetical can just declare any process to be the one it rewards by fiat.
Related to the randomization mechanisms: in the kinds of problems people worry about, with predictors guessing your actions in advance, it's very important to distinguish between [1] (pseudo-)randomization processes that the predictor can predict, and [2] ones that it cannot.
[1] Randomisation that can be predicted by the predictor is (I think) a completely uncontroversial resource to give agents in these problems. In this case we don't need to make predictions like "the agent will randomise", because we can instead make the stronger prediction "the agent will randomize, and the seed of their RNG is this, so they will take one box" which is just a longer way of saying "they will one box". We don't need the predictor to show its working by mentioning the RNG intermediate step.
[2] Randomisation that is beyond the predictor's power is (I think) not the kind of thing that can sensibly be included in these thought experiments. We cannot simultaneously assume that the predictor is pretty good at predicting our actions and useless at predicting a random number generator we might use to choose our actions. The premises: "Alice has a perfect quantum random number generator that is completely beyond the power of Omega to predict. Alice uses this machine to make decisions. Omega can predict Alice's decisions with 99% accuracy" are incoherent.
So I don't see how randomization helps. The first kind, [1] doesn't change anything, and the second kind [2], seems like it cannot be consistently combined with the premise of the question. Perfect predictors and perfect random number generators cannot exist in the same universe.
There might be interesting nearby problems where you imagine the predictor is 100% effective at determining the agent's algorithm, but, because the agent has access to a perfect random number generator, it cannot predict their actions. Maybe this is what you meant? In this kind of situation I am still much happier with rules like "It will fill the box with gold if it knows there is a <50% chance of you picking it" [the closest we can get to "outcomes not processes" in probabilistic land] (or perhaps the alternative "the probability that it fills the box with gold is one minus the probability with which it predicts the agent will pick the box"). But rules like "It will fill the box with gold if the agent's process uses either randomisation or causal decision theory" seem unhelpful to me.
I see where you are coming from. But I think the reason we are interested in CDT (or any DT) in the first place is because we want to know which one works best. However, if we allow the outcomes to be judged not just on the decision we make, but also on the process used to reach that decision, then I don't think we can learn anything useful.
Or, to put it from a different angle, IF the process P is used to reach decision X, but my "score" depends not just on X but also P then that can be mapped to a different problem where my decision is "P and X", and I use some other process (P') to decide which P to use.
For example, if a student on a maths paper is told they will be marked not just on the answer they give, but on the working out they write on the paper - with points deducted for crossings-out or mistakes - we could easily imagine the student using other sheets of paper (or the inside of their head) to first work out the working they are going to show and the answer that goes with it. Here the decision problem's "output" is the entire exam paper, not just the answer.
I like this framing.
An alternative framing, which I think is also part of the answer, is that some art is supposed to hit a very large audience and give each member a small amount of utility, and other art is supposed to hit a smaller, more specialized audience very hard. This framing explains things like traditional daytime TV: stuff that no one really loves, but that a large number of bored people find kind of unobjectionable. And how that is different from the more specialist TV you might actually look forward to an episode of, but which might hit a smaller audience.
(Obviously some things can hit a big audience and be good, and others can be bad on both counts. But the interesting quadrants to compare are the other two.)
Random thoughts: you can relatively simply get a global phase factor at each timestep if you want. I don't think a global phase factor at each step really counts as meaningfully different, though. Anyway, as an example of this:
So that, at each (classical) timestep, every single element of the CA tape just moves one step to the right. (So any patterns of 1's and 0's just orbit the tape in circles forever, unchanging.) It's quite a boring CA, but a simple example.
We can take the quantum CA that is exactly the same, but with some complex phase factor:
Where the delta function is saying "1 iff its two arguments are equal, else 0."
This is exactly the same as the old classical one (everything moves one step to the right), but this time we also have a global phase factor applied to the total system. The total phase factor is the single-cell phase raised to the power N, where N is the total number of cells on the tape.
Tiny bit more interesting:
Now we only gain phase factors on values of 1, so the global phase depends on the total number of 1's on the tape, rather than its length.
To get proper quantum stuff we need phase factors that are not global (i.e. some relative phases). I feel like the equation below is a reasonable kind of place to start, but I have run out of time for now, so might return to this later.
After finding a unitary that comes from one of your classical Cellular Automata, any power of that unitary will also be a valid unitary. So, for example, in classical logic there is the "swap" gate for binary inputs, but in quantum computing the "square-root swap" gate also exists.
So you can get one of your existing unitary matrices, and (for example) take its square root. That would kind of be like a quantum system doing the classical Cellular Automata, that is interrupted halfway through the first step. (Because applying the root matrix twice is the same as applying the matrix). Similarly you can look at the 1/3rd step by applying the cube root of the matrix.
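The square-root-swap point can be checked numerically. A sketch using numpy/scipy (the 4x4 matrix and basis ordering are my own choices for illustration):

```python
import numpy as np
from scipy.linalg import sqrtm

# Classical SWAP on two bits as a permutation matrix
# (basis ordered |00>, |01>, |10>, |11>).
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

ROOT_SWAP = sqrtm(SWAP)  # the "square-root swap" gate

# Two half-steps give back the classical step...
print(np.allclose(ROOT_SWAP @ ROOT_SWAP, SWAP))                # True
# ...the half-step is itself still unitary...
print(np.allclose(ROOT_SWAP @ ROOT_SWAP.conj().T, np.eye(4)))  # True
# ...but it has genuinely complex entries, so it is not any classical
# permutation of the basis states: you are "between the steps".
print(np.allclose(ROOT_SWAP.imag, 0))                          # False
```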
So would you consider the square-root matrix a quantum elementary CA? It's not exactly equivalent to anything classical, because classically you can't look "between the steps".
[This is a long winded way of me saying that I don't "get" the question. You want a unitary, U, of the form given in that equation for <y|U|x>, but you also don't want U to be "basically equivalent" to a classical CA. How are you defining "basically equivalent", is anything satisfying your equation automatically "basically equivalent"?]
I think the limitations to radius set by material strength only apply directly to a cylinder spinning by itself, without an outer support structure. For example, I think a rotating cylinder habitat surrounded by giant ball bearings connecting it to a non-rotating outer shell can use that outer shell as a foundation, so each part of the cylinder that is "suspended" between two adjacent ball bearings is like a suspension bridge of that length, rather than the whole thing being like a suspension bridge of length equal to the total cylinder diameter. Obviously you would need really smooth, low-friction bearings for this to be a plan worth considering, although they would also help with wobble. One way of reducing the friction would be a Russian-doll configuration of nested cylinders, where each cylinder out rotates more slowly than the one inside it, which (along with bearings etc.) could maybe work.
In a similar vein, you could replace the mechanical bearings with a gas or fluid in which the cylinder is immersed. Similar advantages in damping the wobble modes and (for fluids or very high-pressure gases) helping support the cylinder against its own centrifugal weight. The big downside, again, would be friction.
If this was the setup I would bet on "hard man" fitness people swearing that running with the spin to run in a little more than earth normal gravity was great for building strength and endurance and some doctor somewhere would be warning people that the fad may not be good for your long term health.
Yes, it's a bit weird. I was replying because I thought (perhaps getting the wrong end of the stick) that you were confused about what the question was, not (as it seems now) pointing out that the question (in your view) is open to being confused.
In probability theory the phrase "given that" is very important, and it is (as far as I know) always used in the way used here. ["Given that X happens" means "X may or may not happen, but we are thinking about the cases where it does", which is very different from meaning "X always happens".]
A more common use would be "What is the probability that a person is sick, given that they are visiting a doctor right now?". This doesn't mean "everyone in the world is visiting a doctor right now", it means that the people who are not visiting a doctor right now exist, but we are not talking about them. Similarly, the original post's imagined world involves cases where odd numbers are rolled, but we are talking about the set without odds. It is weird to think about how proposing a whole set of imaginary situations (odd and even rolls) then talking only about a subset of them (only evens) is NOT the same as initially proposing the smaller set of imaginary events in the first place (your D3 labelled 2,4,6).
But yes, I can definitely see how the phrase "given that", could be interpreted the other way.
That Iran thing is weird.
If I were guessing I might say that maybe this is happening:
Right now, the more trade China has with Iran, the more America might make a fuss: either complaining politically, putting up tariffs, or calling in general favours and goodwill for it to stop. But if America starts making a fuss anyway, or burns all its goodwill, then there is suddenly no downside to trading with Iran. Now substitute "China" for any and all countries (for example the UK, France and Germany, who all stayed in the Iran Nuclear Deal even after the USA pulled out).
"Given that all rolls were even" here means "roll a normal six-sided die, but throw out all of the sequences that included odd numbers." The two are not the same, because when odd numbers can be rolled but "kill" the sequence, long sequences of rolls are much less likely to be included in the dataset at all.
As other comments explain, this is why the paradox emerges. By stealth, the question is actually "A: How long do I have to wait for two 6s in a row, vs. B: getting two 6s, not necessarily in a row, given that I am post-selecting in a way that very strongly favors short sequences of rolls".
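A quick Monte Carlo of the two readings (my own sketch; trial counts chosen arbitrarily) shows just how strongly the post-selection favors short runs:

```python
import random

random.seed(1)

def two_sixes_in_a_row():
    """Rolls a fair die until two consecutive 6s; returns the roll count."""
    n, prev = 0, 0
    while True:
        n += 1
        r = random.randint(1, 6)
        if r == 6 and prev == 6:
            return n
        prev = r

def two_sixes_given_all_even():
    """Rolls until two 6s (any positions), discarding runs with an odd."""
    while True:
        n, sixes = 0, 0
        while sixes < 2:
            n += 1
            r = random.randint(1, 6)
            if r % 2:  # an odd roll kills the whole sequence
                break
            if r == 6:
                sixes += 1
        if sixes == 2:
            return n

trials = 20_000
mean_a = sum(two_sixes_in_a_row() for _ in range(trials)) / trials
mean_b = sum(two_sixes_given_all_even() for _ in range(trials)) / trials
print(mean_a)  # ~42: the classic waiting time for "6, 6" back to back
print(mean_b)  # ~3: conditioning on all-even throws away the long runs
```

The surviving all-even runs behave like rolls of a die that only shows 2, 4 or 6 but is also heavily tilted toward ending fast, which is why the conditional mean collapses to about 3.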
I suppose it's the difference between the LW team taking responsibility for any text the feature shows people (which you are), and the LW team endorsing any text the feature shows (which you are not). I think this is Richard's issue, although the importance is not obvious to me.
Could be an interesting poll question in the next LW poll.
Something like:
How often do you use LLMs?
Never used them
Messed about with one once or twice
Monthly
Weekly
Every Day
I think a reasonable-seeming metric on which humans are doubtless the winners is "energy controlled".
Total up all the human metabolic energy, plus the output of the world's power grids, the energy of all that petrol/gas burning in cars/boilers. If you are feeling generous you could give humans a percentage of all the metabolic energy going through farm animals.
It's a bit weird, because on the one hand it's obvious that collectively humans control the planet in a way no other organism does. But you are looking for a metric where plants and single-celled organisms are allowed to participate, and they can't properly be said to control anything, even themselves.
I think this question is maybe logically flawed.
Say I have a shuffled deck of cards. You say the probability that the top card is the Ace of Spades is 1/52. I show you the top card, it is the 5 of diamonds. I then ask, knowing what you know now, what probability you should have given.
I picked a card analogy, and you picked a dice one. I think the card one is better in this case, for weird idiosyncratic reasons I give below that might just be irrelevant to the train of thought you are on.
Cards vs Dice: If we could reset the whole planet to its exact state 1 week before the election, then we would, I think, get the same result (I don't think quantum effects will mess with us in one week). What if we do a coarser-grained reset? So if there was a kettle of water at 90 degrees a week before the election, that kettle is reset to contain the same volume of water in the same part of my kitchen, and the water is still 90 degrees, but the individual water molecules have different momenta. For some value of "macro", the world is reset to the same macrostate, but not the same microstate, that it had 1 week before election day. If we imagine this experiment I still think Trump wins every (or almost every) time, given what we know now. For me to think this kind of thermal-level randomness made a difference in one week, it would have to have been much closer.
In my head, things that change on the coarse-grained reset feel more like unrolled dice, and things that don't, more like facedown cards. Although in detail the distinction is fuzzy: it is based on an arbitrary line between micro and macro, and it is time-sensitive, because cards that are going to be shuffled in the future are in the same category as dice.
EDIT: I did as asked, and replied without reading your comments on the EA forum. Reading that I think we are actually in complete agreement, although you actually know the proper terms for the things I gestured at.
This idea (without the name) is very relevant in First Aid training.
For example, if you learn CPR from some organisations they will teach you compressions-only CPR, while others will also teach you to do the breaths. I have heard it claimed by first aid teachers that the reason for this is that doing the best possible CPR requires the breaths, but that someone who learned CPR one afternoon over a year ago and hasn't practiced since is unlikely to do effective breaths, and that person would be better off keeping to compressions only.
In First Aid books a common attempt to solve this problem is to give sweeping commands at the beginning (often with the word "never" somewhat abused), and then give specific exceptions later. The aim is that if you will remember one thing, it will hopefully be the blanket rule, not the specific exception. I think that method probably has something to recommend it; it's hard to imagine how you could remember the exception without remembering the rule it is an exception to.
[For example the Life Support book, tells you 'never' to give anyone medicine or drugs, as you are a First Aider, not a Doctor. It also tells you to give aspirin to someone having a heart attack if they have not taken any other drugs. I think it also recommends antihistamines for swelling insect stings.]
I find that surprising, given that so much of your writing feels kind of crisp and minimalist. Short punchy sentences. If that is how you think, your mind is very unlike mine.
Much as I liked the book, I think it's not a good recommendation for an 11-year-old. There are definitely maths-y 11-year-olds who would really enjoy the subject matter once they get into it (stuff about formal systems and so on). But if we gave GEB to such an 11-year-old, I think the dozens of pages at the beginning on the history of music and Bach running around getting donations would repel most of them. (Urgh, mum tricked me into reading about classical music.)
I am all for giving young people a challenge, but I think GEB is challenging on too many different fronts all at once. It's loooong. It's written somewhat in academic-ese. And the subject matter is advanced. So any 11-year-old who could deal with one of that trinity also has to face the other two.
Yes, you could fix it by making the portal pay for lifting. An alternative fix would be to let gravity go through portals, so the ball feels the Earth's gravity by the direct route and also through the portal. Which I think makes the column between the two portals zero G, with gravity returning towards normal as you move radially. This solution only deals with the steady state, though; at the moment portals appear or disappear, the gravitational potential energy of objects (especially those near the portal) would change abruptly.
It's quite a fun situation to think about.
“If I’m thinking about what someone else might do and feel in situation X by analogy to what I might do and feel in situation X, and then if situation X is unpleasant than that simulation will be unpleasant, and I’ll get a generally unpleasant feeling by doing that.”
I think this is definitely true. Although, sometimes people solve that problem by just not thinking about what the other person is feeling. If the other person has ~no power, so that failing to simulate them carries ~no costs, then this option is ~free.
This kind of thing might form some kind of an explanation for Stockholm Syndrome. If you are kidnapped, and your survival potentially depends on your ability to model your kidnapper's motivations, and you have nothing else to think about all day, then any overspill from that simulating will be maximised. (Although from the Wikipedia article on Stockholm syndrome it looks like it is somewhat mythical: https://en.wikipedia.org/wiki/Stockholm_syndrome)
I agree that it's super unlikely to make any difference; if the LLM player is consistently building pylons in order to build assimilators, that is a weakness at every level of slowdown, so it has little or no implication for your results.
An interesting project. One small detail that confuses me. In the first log is the entry:
"Action failed: BUILD ASSIMILATOR, Reason: No Pylon available"
But in SC2 you don't need a pylon to build an assimilator. Perhaps something in the interface with the LLM is confused, because most Protoss buildings do need a pylon and the exception is not accounted for correctly?
I am sure that being cited by Wikipedia is very good for giving an article more exposure. There is an "Altmetric" thingy on some journals that is used to help funders see what other useful impacts an article had on the world beyond citations from other articles, and it thinks Wikipedia mentions are high-value (it also likes things like newspaper coverage).
I suspect that it is not that rare for the authors of a paper to go and put a link in wiki to their own paper. I have certainly seen wiki articles mention something with a cite which, while true, feels weirdly specific.
Thank you very much, that sounds like a fascinating wider discussion. Personally, I suspect the Abraham-Minkowski question is only unusual in the sense that it is a known unknown. I think the unknown unknowns are probably much larger in scope. Although it is probably quite dependent on where exactly you draw the physics/engineering boundary.
In my post the way I cited Lubos Motl's comment implicitly rounded it off to "Minkowski is just right" (option [6]), which is indeed his headline and emphasis. But if we are zooming in on him I should admit that his full position is a little more nuanced. My understanding is that he makes 3 points:
(1) - Option [1] is correct. (Abraham gives kinetic momentum, Minkowski the canonical momentum)
(2) - In his opinion the kinetic momentum is pointless and gross, and that true physics only concerns itself with the canonical momentum.
(3) - As a result of the kinetic momentum being worthless, it's basically correct to say Minkowski was "just right" (option [6]). This means that the paper proposing option [1] was a waste of time (much ado about nothing), because the difference between believing [1] and believing [6] only matters when doing kinetics, which he doesn't care about. Finally, having decided that Minkowski was correct in the only way that he thinks matters, he goes off into a nasty side-thing about how Abraham was supposedly incompetent.
So his actual position is sort of [1] and [6] at the same time (because he considers the difference between them inconsequential, as it only applies to kinetics). If he leans more on the [1] side he can consider 12.72 to be valid. But why would he bother? 12.72 is saying something about kinetics, it might as well be invalid. He doesn't care either way.
He goes on to explicitly say that he thinks 12.72 is invalid. Although I think his logic on this is flawed. He says the glass block breaks the symmetry, which is true for the photon. However, the composite system (photon + glass block) still has translation and boost symmetry, and it is the uniform motion of the center of mass of the composite system that is at stake.
Yes, you are certainly right it is a quasiparticle. People often use the word polariton to name it (eg https://www.sciencedirect.com/science/article/pii/S2666032620300363#bib1 ).
I think you might have muddled the numbering? It looks like you have written an argument in favor of either [2] or [3] (which both hold that the momentum of the full polariton is larger than the momentum of the photonic part alone - in the cartoon of the original post whether or not the momentum "in the water" is included), then committed to [1] instead at the end. This may be my fault, as the order I numbered the arguments in the summary at the end of the post didn't match the order they were introduced, and [2] was the first introduced. (In hindsight this was probably a bad way to structure the post, sorry about that!)
" "passing by atoms and plucking them" is a lie to children " - I personally dislike this kind of language. There is nothing wrong with having mental images that help you understand what is going on. If/when those images need to be discarded then I don't think belittling them or the people who use them is helpful. In this case the "plucking" image shows that at any one time some of the excitation is in the material, which is the same thing you conclude.
[In this case I think the image is acceptably rigorous anyway, but let's not litigate that, because which mental images are and are not compatible with a quantum process is a never-ending rabbit hole.]
Thank you very much for reading and for your thoughts. If I am correct about the numbering muddle it is good to see more fellow [2/3]'ers.
I presented the redshift calculation in terms of a single photon, but actually, the exact same derivation goes through unchanged if you replace every instance of $\hbar\omega_0$ with $E_0$ and $\hbar\omega_1$ with $E_1$, where $E_0$ and $E_1$ are the energies of a light pulse before and after it enters the glass. There is no need to specify whether the light pulse is a single photon, a big flash of classical light, or anything else.
Something linear in the distance travelled would not be a cumulatively increasing red shift, but instead an increasing loss of amplitude (essentially a higher cumulative probability of being absorbed). This is represented using a complex-valued refractive index (or dielectric constant), where the real part is how much the wave slows down and the imaginary part is how much it attenuates per distance. There is no reason in principle why the losses cannot be arbitrarily close to zero at the wavelength we are using. (Interestingly, the losses have to be nonzero at some wavelength due to the Kramers–Kronig relations, but we can assume they are negligible at our wavelength.)
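As a concrete illustration (my own sketch, with made-up numbers, not from the original discussion): writing the complex refractive index as $n + i\kappa$, a plane wave's intensity decays as $I(z) = I_0 e^{-\alpha z}$ with absorption coefficient $\alpha = 4\pi\kappa/\lambda_0$, so a tiny imaginary part gives an essentially lossless medium:

```python
import numpy as np

# Sketch with made-up numbers: attenuation from the imaginary part (kappa)
# of a complex refractive index n + i*kappa. A plane wave
# E ~ exp(i * (n + i*kappa) * 2*pi*z / lambda0) has intensity
# I(z) = I0 * exp(-alpha * z), with alpha = 4*pi*kappa / lambda0.

def intensity(z, kappa, lambda0, I0=1.0):
    """Intensity after travelling a distance z through the medium (lengths in metres)."""
    alpha = 4 * np.pi * kappa / lambda0  # absorption coefficient (1/m)
    return I0 * np.exp(-alpha * z)

# A nearly lossless medium at 500 nm: after a full metre of propagation
# the beam has barely attenuated, while the real part n (the slowdown)
# is completely unaffected by kappa.
print(intensity(z=1.0, kappa=1e-9, lambda0=500e-9))
```

The point being that the attenuation and the slowdown are independent knobs, so "slow but lossless" is perfectly consistent.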
I think the point about angular momentum is a very good way of gesturing at how it's possibly different. Angular momentum is conserved, but an isolated system can still rotate itself, by spinning up and then stopping a flywheel (moving the "center of rotation").
Thanks for finding that book and screenshot. Equation 12.72 is directly claiming that momentum is proportional to energy flow (and in the same direction). I am very curious how that intersects with claims common in metamaterials (https://journals.aps.org/pra/abstract/10.1103/PhysRevA.75.053810 ) that the two can flow in opposite directions.
"And then conservation of momentum implies uniform motion of the center of mass, right?" - This is the step I am less than 100% on. Certainly it does for a collection of billiard balls. But, as soon as light is included, things get less clear to me. It has momentum, but no inertial mass. Plus, as an admittedly weird example, the computer game "portal" has conservation of momentum, but not uniform motion of the centre of mass. Which means at the very least the two can logically decouple.
I consider momentum conservation a "big principle", and Newton's three laws indeed set out momentum conservation. However, I believe uniform centre of mass motion to be an importantly distinct principle. The drive loop thing would conserve momentum even if it were possible; indeed momentum conservation is the principle underpinning the assumed reaction forces that make it work in the first place. To take a different example, if you had a pair of portals (like from the game "portal") on board your spaceship, and ran a train between them, you could drive the train backwards, propelling your ship forwards, and thereby move while conserving total momentum, only to later put the train's brakes on and stop. I am not asking you to believe in portals, I am just trying to motivate that weird hypotheticals can be cooked up where the principle of momentum conservation decouples from the principle of uniform centre of mass motion. The two are distinct principles.
Abraham supporters do indeed think you can use conservation of momentum to work out which way the glass block moves in that thought experiment, showing that (because the photon momentum goes down) the block must move to the right. Minkowski supporters also think you can use conservation of momentum to work out how the glass block moves, but because they think the photon momentum goes up, the block must move to the left. The thing at issue is the question of what expression to use to calculate the momentum; both sides agree that whatever the momentum is, it is conserved. As a side point, a photon has nonzero momentum in all reference frames, and that is not an aspect of relativity that is sensibly ignored.
You are actually correct that the photon does have to red-shift very slightly as it enters the glass block. If the glass was initially at rest, then after the photon has entered, the photon has either gained or lost momentum (depending on Abraham or Minkowski), in either case imparting the momentum difference onto the glass block. The kinetic energy of the glass block is given by $p^2/2m$, where $p$ is the momentum the block has gained, and $m$ is the block's mass. The photon's new frequency $\omega_1$ is then given (by conservation of energy) by $\hbar\omega_1 = \hbar\omega_0 - p^2/2m$, where $\omega_0$ was its initial frequency. In practice a glass block will have a gigantic mass compared to $\hbar\omega_0/c^2$, but at least in principle the photon does red shift.
Going into the full gory detail for Abraham.
Abraham:
Photon momentum before entering glass: $p_0 = \hbar\omega_0 / c$
Photon momentum after entering glass: $p_1 = \hbar\omega_1 / (n c)$ (note, the new frequency $\omega_1$, not the old one)
Change in photon momentum: $\Delta p = \hbar\omega_0 / c - \hbar\omega_1 / (n c)$
The same momentum goes into the glass, so by conservation of energy: $\hbar\omega_0 = \hbar\omega_1 + (\Delta p)^2 / 2m$
We can re-arrange to put the $c^2$ in the denominator next to the mass of the glass block, so that the change in the frequency/energy of the photon is scaled by a term that has something to do with the refractive index, along with how the photon energy compares to the rest mass energy of the glass block ($\hbar\omega_0 / m c^2$). So, as previously said, this is negligible for a glass block that weighs any reasonable amount.
The Minkowski version is almost the same derivation, except the division by refractive index becomes a multiplication, giving: $\hbar\omega_0 = \hbar\omega_1 + (\hbar\omega_0 / c - n \hbar\omega_1 / c)^2 / 2m$
Playing with these quadratic equations to solve for $\omega_1$, you find that the Minkowski version never breaks. In contrast, if you assume it is possible to have a glass block with a reasonably high refractive index but arbitrarily small mass, then the Abraham version eventually breaks and starts giving an imaginary frequency (the lightweight block cannot soak up the required momentum without needing more kinetic energy than the photon can supply). This maybe says something vaguely negative about that equation, but a block of material with a high refractive index but negligible mass is such an unrealistic setup that I don't think failing in that case is too embarrassing.
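To put numbers on how tiny the shift is, here is a quick numerical sketch (my own, with toy values chosen so that floating point can resolve the shift; for a real glass block $mc^2$ exceeds $\hbar\omega_0$ by a factor of order $10^{34}$, making the redshift utterly invisible):

```python
import numpy as np

# My own numerical sketch (toy values, not from the original comment).
# Energy conservation: hw0 = hw1 + (dp)^2 / (2*m), with the photon's
# in-medium momentum taken as hw1/(n*c) (Abraham) or n*hw1/c (Minkowski).
# Expanding gives a quadratic in hw1; we pick the physical root near hw0.

def photon_energy_after(E0, n, m, c=1.0, convention="abraham"):
    """Photon energy E1 = hbar*omega_1 after entering a block of mass m at rest."""
    K = 1.0 / (2.0 * m * c**2)
    if convention == "abraham":
        a, b = K / n**2, 1.0 - 2.0 * K * E0 / n      # from (E0 - E1/n)^2 term
    else:  # "minkowski"
        a, b = K * n**2, 1.0 - 2.0 * n * K * E0      # from (E0 - n*E1)^2 term
    roots = np.roots([a, b, K * E0**2 - E0])
    real = roots[np.isreal(roots)].real
    return real[np.argmin(np.abs(real - E0))]        # physical root, closest to E0

# Toy units: block rest energy is only 10^4 photon energies,
# absurdly light, purely so the shift is numerically visible.
E0, n, m = 1.0, 1.5, 1e4
for conv in ("abraham", "minkowski"):
    E1 = photon_energy_after(E0, n, m, convention=conv)
    print(conv, E0 - E1)  # a tiny positive redshift in both conventions
```

Even with this absurdly light block the fractional redshift is of order $10^{-5}$; scaling $m$ up to a real block drives it to irrelevance, as the text says.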
My thoughts on this are not going to be fully coherent, because I am in the process of possibly changing my mind.
I agree that if we take the uniform motion of the centre of mass as an absolute principle then the weird light-in-circles machine does not work. However, I had never before encountered this principle, and (to me) it still carries the "I learned about this last week, how much do I trust it?" penalty. But even accepting it, that doesn't explain why the machine fails. Does it remain the case that the actual mechanical momentum and energy transport directions are opposite in the right metamaterial (as claimed in, for example: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.75.053810 ), but the machine fails for some other reason (e.g. recoils on the interfaces)? Showing the machine to be impossible while leaving this unanswered doesn't get to the root of my various related confusions: I still don't know whether energy flow and momentum can 'really' point in opposite directions, or whether it's all just an accounting trick.
The uniform motion of the centre of mass implies other things. For example, it means anything like a portal from the game "portal" is impossible, as the centre of mass would change discontinuously as something went through the portal. [We can even "re-skin" the photon loop by instead having a train with portals, so it can keep reusing the same track on our spaceship repeatedly.]
Numbering the options properly is a good idea, done.
To answer your points:
- This is interesting. Symmetry under rotations gives us conservation of angular momentum; symmetry under translations gives conservation of linear momentum. You are saying symmetry under boosts gives conservation of centre of mass velocity. Although in "normal" situations (billiard balls colliding) conservation of centre of mass velocity is a special case of conservation of linear momentum - which I suppose is why I have not heard of it before. I need to look at this more, as I find I am still confused. Intuitively I feel like if translation symmetry is doing momentum for us, boost symmetry should relate to a quantity with an extra time-derivative in it somewhere.
There is no symmetry under angular boosts, which I imagine is why fly-wheels (or gyroscopes) allow for an "internal reaction drive" for angular velocity. - I did not know that the kinetic and canonical momentum had different values in other fields. That makes option (1) more believable.
- Yes, the k-vector (wavevector) certainly extends by a factor of $n$. So if you want your definition of "momentum" to be linear in wavevector then you are stuck with Minkowski.
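On the first point, the conserved quantity from boost symmetry can be written down explicitly (a standard Noether-charge result, sketched here in the non-relativistic case), and it does indeed carry an explicit factor of time:

```latex
% Noether charge for Galilean boosts: N particles, total mass M,
% total momentum P, centre of mass R_cm, no external forces.
G = P\,t - M\,R_{\mathrm{cm}}(t)
% dG/dt = 0 together with dP/dt = 0 rearranges to
R_{\mathrm{cm}}(t) = R_{\mathrm{cm}}(0) + \frac{P}{M}\,t
% i.e. uniform centre-of-mass motion. Unlike the momentum and angular
% momentum charges, G depends explicitly on t, matching the intuition
% that boosts should involve an extra time-derivative somewhere.
```

So boost symmetry does give something momentum-like, but the conserved quantity constrains the trajectory of the centre of mass rather than being a new static number.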
I believe that at the interface between the water and the air we will have a partial reflection of the light. The reflected component of the light has an evanescent tail associated with it that tunnels into the air gap. If we had more water on the other side of the air gap then the evanescent tail would be converted back into a propagating wave, and the light would not reflect from the first water interface in the first place. As the evanescent tail has a length of the order of a wavelength, random gaps between the atoms in water or glass don't mess with the propagating light wave: the wavelength is so much longer than those tiny gaps that they do not contribute.
Applying this picture to your question, I think we would expect to interpolate smoothly between the two momentum values as the air gap size was changed, with the interpolation function an exponential with decay distance equal to the length of our evanescent wave.
Thanks for reading. Enjoy your option (1)!
This is very interesting, thank you for sharing it.
I find the 5 day limit (without approval) quite insane, even assuming that means 5 actual days (and not 20% of 5 days = 1 full day). Let's say you have an employee who has now put 5 days into their preferred passion project. You end it. They then put 5 days into their second-favourite passion project. The end result is an annoyed employee who has half-finished a train of side projects and is still putting 20% of their time to one side from core duties.
My current work (university) is thankfully very flexible, so maybe I am seeing things from the wrong perspective.
I agree with this.
In my very limited experience (which is mostly board games with some social situations thrown in), attempts to obscure publicly discernible information to influence other people's actions are often extremely counter-productive. If you don't give people the full picture, then the most likely case is not that they discover nothing, but that they discover half the picture. And you don't know in advance which half. This makes them extremely unpredictable. You want them to pick A in preference to B, but the half-picture they get drives them to pick C, which is massively worse for everyone.
In board games I have played, if a slightly prisoner's-dilemma-like situation arises, you are much more likely to get stung by someone who has either misunderstood the rules or misunderstood the equilibrium than by someone who knows what is going on. [As a concrete example, in the game Scythe a new player believed that they got mission completion points for each military victory, not just the first one. As they had already scored a victory, another player reasoned they wouldn't make a pointless attack. But they did make the pointless attack. It set them and their target back, giving the two players not involved in that battle a relative advantage.]
“The best swordsman does not fear the second best, he fears the worst since there’s no telling what that idiot is going to do.” [https://freakonomics.com/2011/10/rules-of-the-game/#:~:text=%E2%80%9CThe%20best%20swordsman%20does%20not,can%20beat%20smartness%20and%20foresight%3F]
This best swordsman wants more people to know how to sword fight, not fewer.
A very good point. Especially after reading your other comment, I wonder if this is deliberate.
The payoff matrix for the generals suggests that in a one-way attack the winning generals win more than the losers lose. Hence your coin toss plan. But for the civilians it is the other way around (+25 for winning, but -50 for losing).
I suspect it may be some kind of message about how the generals launching the nuclear war have different incentives to the civilians, as the generals may place a higher value on victory, and are more likely to access bunkers and so on.
I have no idea what the event will be, but Petrov Day itself is the 26th of September, and given that LW users are in many timezones my expectation is that there will be no specific time you need to be available on that day.
I don't know how the survey was done, but one silver lining is that if the respondents were asked those questions in the same order they are shown, then the last question was given last in that set. So the people being surveyed are to some extent being "brought towards it" by the preceding questions. See this Yes Minister clip (although the survey above is a much less dramatic example, and they published all the questions, so they are being legitimate):
You could probably cook up a set of preceding questions that would have delivered the opposite result. Something like: "Do you think Germans in WW2 should have resisted/criticised their government more?", followed by questions about a non-specific country's people criticising its government during a non-specific war, then going to that final question.
Thanks for the great article. I think that the idea of asking "how many re-uses does this thing give before wearing out" is a great way of focusing the mind away from the super fancy materials. I love the idea of all these spaceships doing a "strip the willow" type dance with one another.
In terms of dropping things from orbit to re-spool your tethers, one component of that might be old/redundant/broken satellites. It's kind of like fuel recycling: the fuel that went into accelerating a satellite 20 years ago can be re-extracted (at some efficiency) by transferring that kinetic energy to something new that is not obsolete.
To me the more natural reading is "probably loses North Carolina (57%)", with 57% being the chance that she loses North Carolina. Whereas, as it is, you say "loses NC" but give the probability that she wins it, which for me takes an extra scan to parse.
I might be misunderstanding your point. My opinion is that software brains are extremely difficult (possibly impossibly difficult) because brains are complicated. Your position, as I understand it, is that they are extremely difficult (possibly impossibly difficult) because brains are chaotic.
If it's the former (complexity) then there exists a sufficiently advanced model of the human brain that can work (where "sufficiently advanced" here means "probably always science fiction"). If brains are assumed to be chaotic then a lot of what people think and do is random, and the simulated brains will necessarily end up with a different random seed due to measurement errors. This would be important in some brain-simulating contexts; for example, it would make predicting someone's future behaviour based on a simulation of their brain impossible. (Omega from Newcomb's paradox would struggle to predict whether people would two-box or not.) However, from the point of view of chasing immortality for yourself or a loved one, the chaos doesn't seem to be an immediate problem. If my decision to one-box was fundamentally random (down to thermal fluctuations) and trivial changes on the day could have changed my mind, then it couldn't have been part of my personality. My point was, from the immortality point of view, we only really care about preserving the signal, and can accept different noise.
My position is that either (1) my brain is computationally stable, in the sense that what I think, how I think it and what I decide to do after thinking is fundamentally about my algorithm (personality/mind), and that tiny changes in the conditions (a random thermal fluctuation), are usually not important. Alternatively (2) my brain is not a reliable/robust machine, and my behaviour is very sensitive to the random thermal fluctuations of atoms in my brain.
In the first case, we wouldn't expect small errors (for some value of small) in the uploaded brain to result in significant divergence from the real person (stability). In the second case I am left wondering why I would particularly care. Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around?
So, I don't think uploaded brains can be ruled out a priori on precision grounds. There exists a non-infinite amount of precision that suffices, the necessary precision is upper bounded by the thermal randomness in a body temperature brain.
Oh, Thrawn is a Na'vi. I somehow got it into my head on first pass that he was an Ewok. It's going to take time to shift that mental image of him.
Dantooine was where she said in the film, I think.