Relativity
Recommendation: Spacetime and Geometry
Author: Sean Carroll
This is an expanded version of Carroll's lecture notes on relativity, which he has used to teach courses and which are available for free online (see the "Lecture Notes" tab on the page linked to above). I find it to be an excellent introduction to the subject, which covers the mathematical tools used, the basics of the theory, and the most common applications, all in a straightforward fashion. I have recommended this text (or its corresponding lecture notes) many times on Physics Forums as a reference for people who want a good introduction to the subject.
Other Textbooks Read:
Spacetime Physics, by Taylor & Wheeler. The text that I first learned Special Relativity from, and still a good introduction, with an emphasis on building physical intuition. However, it does not cover General Relativity. (Taylor apparently has a follow-on text covering GR at least as it applies to black holes, but I have not read it.)
Gravitation, by Misner, Thorne, & Wheeler. The classic text, and still a good comprehensive reference even though it was published in 1973. However, it is very long and detailed, has a somewhat idiosyncratic style, and can be difficult if you don't already have considerable background in the subject. It also weighs enough to seem like it might undergo gravitational collapse and become a black hole. :-)
General Relativity, by Robert Wald. Another classic, with a more abstract mathematical approach than MTW; not as comprehensive, but covering some topics in more detail and from a different viewpoint. Published in 1984, so it also covers some topics, such as quantum fields in curved spacetime, that were too new to be included when MTW was published. It is not as recent as Carroll's text, however, and it goes into topics that are probably too advanced for readers who are being introduced to the subject for the first time.
The Large Scale Structure of Space-Time, by Hawking & Ellis. The definitive text on global geometric methods and causal structure in GR. It covers the classic singularity theorems of Hawking & Penrose in detail. However, it is really a monograph, not a comprehensive GR text, and requires the reader to already have considerable background in the subject.
The Usenet Physics FAQ has a long list of relativity references here:
http://math.ucr.edu/home/baez/physics/Administrivia/rel_booklist.html
Your R is actually the Ricci tensor, not the Riemann tensor. The Riemann tensor has four indices, not two. The Ricci tensor is formed by contracting the Riemann tensor on its first and third indices.
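In index notation (just to make the contraction explicit; sign and index-ordering conventions vary between textbooks, so this follows Carroll's):

    R_{\mu\nu} = R^{\lambda}{}_{\mu\lambda\nu}

Here R^{\lambda}{}_{\mu\nu\rho} is the four-index Riemann tensor; setting the first and third indices equal and summing over them gives the two-index Ricci tensor.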
Whether that means 'out of the normal range' for any particular hormone, I don't know. Nobody seems to have the faintest idea what 'normal range' means for these things.
Yes, a better way to put it would be that the bloodstream levels of T3/T4 should be significantly higher when the person is treated and feels better than they were before treatment.
it should probably cause TSH suppression
That would be expected, yes.
What would be ideal would be to figure out the cause of the 'hormone resistance', and fix it, rather than trying to overwhelm it.
Yes, agreed. See below.
It might even be a bad idea to fix it, if it's performing some vital immune defence function.
If this is true, it might also be a bad idea to overwhelm it. The "immune defense" hypothesis says that, if a person is feeling symptoms of CFS/etc., it's because some pathogen is trying to attack their cells, so the immune defense kicks in, but as a side effect it also depresses normal cell metabolism. If thyroid therapy increases normal cell metabolism by overwhelming the immune defense, it might also increase the ability of the pathogen to infect cells. The only real fix in this case would be to find the pathogen and eliminate it.
Ok, so the "hormone resistance" hypothesis is really something more like: the rate of some key reaction involving T3/T4 is being slowed down by some unknown factor; since we don't know what the factor is, we can't fix it directly, but we can increase the reaction rate by increasing the concentration of T3/T4 in the bloodstream to above normal levels, to compensate for the damping effect of the unknown factor.
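To make the compensation logic explicit, here's a toy first-order rate model (my own illustration, not anything from the endocrinology literature): write the rate of the key reaction as

    r = f k c

where c is the bloodstream T3/T4 concentration, k is the normal rate constant, and f (with 0 < f < 1) is the damping from the unknown factor. To restore the normal rate r = k c_normal, you need

    c = c_normal / f > c_normal

i.e., an above-normal bloodstream concentration, which is exactly the testable prediction below.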
This hypothesis makes an obvious testable prediction: that when people with CFS/etc. who are treated with thyroid extract feel better, the T3/T4 levels in their bloodstream should be above normal. Or, conversely, if their bloodstream T3/T4 levels are within the normal range, they should not feel better, even though they are being treated with thyroid extract. I don't know if any existing data has this information.
I have a couple of questions about your hypothesis.
First, as I understand it, you are hypothesizing that there are people who have symptoms of CFS/etc. but normal blood levels of T3, T4, and TSH, who can nevertheless be helped by taking thyroid extract. And your hypothesized explanation for why these people are having symptoms is that, even though there are normal levels of T3 and T4 in their bloodstream, those hormones are not getting into their cells, where they are actually needed. But if that is the case, how will putting more T3 and T4 into their bloodstream help? It still won't be getting into the cells. It seems to me that, if your hypothesized cause were correct, the indicated treatment would be to somehow inject T3/T4 directly into the cells, or else to figure out what is blocking the hormones from getting into the cells and fix that; just putting more T3/T4 into the bloodstream, ISTM, should not work.
Second, as I understand it, you are taking the fact that treating these people with thyroid extract appears to help them, as evidence that your hypothesis is correct. But it seems to me that this fact is actually evidence for a different hypothesis: the hypothesis that the definition of "normal" levels of T3, T4, and TSH is incorrect. More specifically, that "normal" levels should be defined, not in a "one size fits all" fashion, but specifically for each person based on some set of factors that can vary from person to person. (Obvious candidates would be body weight/BMI and genetic factors.)
In this case, the Knightian uncertainty cancels out
Does it? You still know that you will only be able to take one of the two bets; you just don't know which one. The Knightian uncertainty only cancels out if you know you can take both bets.
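A minimal worked example (numbers invented for illustration): let bet A pay $1 if event E happens and bet B pay $1 if it doesn't, each costing $0.50, with the probability p of E entirely Knightian. Taking both bets yields a net payoff of

    1 - (0.50 + 0.50) = 0, independent of p

so the uncertainty cancels. Taking only bet A yields an expected net payoff of

    p - 0.50

which depends on the unknown p, so the uncertainty is still fully present.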
Apologies for coming to the discussion very, very late, but I just ran across this.
If I saw no need for more power, e.g. because I'm already maximally happy and there's a system to ensure sustainability, I'd happily give up everything.
How could you possibly get into this epistemic state? That is, how could you possibly be so sure of the sustainability of your maximally happy state, without any intervention from you, that you would be willing to give up all your optimization power?
(This isn't the only reason why I personally would not choose wireheading, but other reasons have already been well discussed in this thread and I haven't seen anyone else zero in on this particular point.)
when it was realized in dath ilan that business cycles were a thing, the economists probably said "This is a coordination problem", the shadarak backed them, and the serious people got together and coordinated to try to avoid business cycles.
I think this is a feature of dath ilan, not a bug.
Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie.
On the Crowne-Marlowe scale, it looks to me (having found a copy online and taken it) like most of the questions are of this form. When I answered all of the questions honestly, I scored 6, which, according to the test, indicates that I am "more willing than most people to respond to tests truthfully"; but what it indicates to me is that, for all but 6 out of 33 questions, the "nice" answer was a lie, at least for me.
The 6 questions were the ones where the answer I gave was, according to the test, the "nice" one, but just happened to be the truth in my case: for example, one of the 6 was "T F I like to gossip at times"; I answered "F", which is the "nice" answer according to the test--presumably on the assumption that most people do like to gossip but don't want to admit it--but I genuinely don't like to gossip at all, and can't stand talking to people who do. Of course, now you have the problem of deciding whether that statement is true or not. :-)
Could a rationality test be gamed by lying? I think that possibility is inevitable for a test where all you can do is ask the subject questions; you always have the issue of how to know they are answering honestly.
Is that because you think it's necessary to Wei_Dai's argument, or just because you would like people to be up front about what they think?
(although of course I don't trust your rationality either)
I'm not sure this qualifier is necessary. Your argument is sufficient to establish your point (which I agree with) even if you do trust the other's rationality.
Sorry for the late comment but I'm just running across this thread.
demand for loans decreased and this caused destruction of money via the logic of fractional reserve banking
This is an interesting comment which I haven't seen talked about much on econblogs (or other sources of information about economics, for that matter). I understand the logic: fractional reserve banking is basically using loans as a money multiplier, so fewer loans means less multiplication, hence effectively less money supply. But it makes me wonder: what happens when the loan demand goes up again? Do you then have to reverse quantitative easing and effectively retire money to keep things in balance? Do any mainstream economists talk about that?
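Here is a toy sketch in Python of the multiplier logic as I understand it (the reserve_ratio and lending_fraction parameters are my own illustration, not a claim about how real banks operate):

    # Toy model: banks re-lend a fraction of each deposit above required
    # reserves, and borrowers redeposit the proceeds at some bank.
    # "Broad money" is the sum of all deposits created from the base deposit.
    def broad_money(base_deposit, reserve_ratio, lending_fraction, rounds=10000):
        total, deposit = 0.0, base_deposit
        for _ in range(rounds):
            total += deposit
            deposit *= (1 - reserve_ratio) * lending_fraction
        return total

    print(broad_money(100.0, 0.10, 1.0))  # full lending: ~1000, the 1/r multiplier
    print(broad_money(100.0, 0.10, 0.5))  # weak loan demand: ~182

The second case is the point of the quote: with the monetary base unchanged, a drop in lending shrinks the multiplier and hence the effective money supply.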
Sorry for the late comment but I'm just running across this thread.
Prices don't go up if all the new money just ends up under Apple Computer's proverbial mattress instead of in the hands of someone who is going to spend it.
But as far as I know, the mainstream economists like the Fed did not predict that this would happen; they thought quantitative easing would get banks (and others with large cash balances) lending again. If banks had started lending again, then by your analysis (which I agree with) we would have seen significant inflation because of the growth in the money supply.
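For reference, the accounting identity behind both predictions is the standard quantity-of-money relation (textbook material, not anything specific to the Fed's models):

    M V = P Q

If the new money M just sits idle, velocity V falls in proportion and the price level P need not move; if lending had resumed, V would not have collapsed and the growth in M would have shown up in P.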
So it looks to me like the only reason the Fed got the inflation prediction right was that they got the lending prediction wrong. I don't think that counts as an instance of "we predicted critical event W".
Are there any theories about the mechanism involved here? I've done a fair bit of Googling about this but haven't found any discussion of underlying mechanisms, only the statistics. I know that CoQ10 is critical in the metabolic cycle that produces ATP, and therefore is involved in energy production everywhere in the body; but I'm not sure how to get from that to the specific result of lowering blood pressure (rather than something more general like "feel more energetic").
Sorry for the late comment, but I'm just running across this thread.
The question is not whether Google reinvests their producer surplus better than other monopolies. The question is whether Google reinvests their producer surplus more efficiently, i.e., for greater total benefit to society as a whole, than would all the consumers who would otherwise get that surplus as consumer surplus. That seems highly unlikely since the options for reinvestment open to even a large company like Google will cover a much smaller range of possibilities than the options open to the entire set of consumers who would otherwise receive the surplus.
(Admittedly, there is an effect here in the other direction: Google has much more leverage than the average consumer. But I don't think that outweighs the effect I referred to above, because Google is not being compared to the average consumer; they are being compared to the sum total of activities of all consumers--more precisely, all consumers who would otherwise receive the surplus Google is getting.)
Even if we had teleporters, would future Tyler Cowens be writing that they're not as innovative as the car - and would they be correct, in that a teleporter is just a more efficient way of solving a problem that cars and airplanes had already partially solved?
I don't think so, because there are threshold effects. For example, consider the airplane vs. the car: having airplane travel available doesn't just mean your trips are shorter; it enables many trips that otherwise would not even be considered, and therefore enables many kinds of activities that otherwise would not be considered. If I can fly to a distant city in a few hours, that enables me to have relationships, both business and personal, with people in that city that I couldn't have if I had to take days to drive there. If things can be shipped across country overnight on an airplane, many more economic activities requiring "just in time" delivery become possible. And so on.
In this particular case, I agree with you that the weatherman is far more likely to be right than the person's intuitions.
However, suppose the weatherman had said that, since it's going to be sunny tomorrow, it would be a good day to go out and murder people, and gave a logical argument to support that position? Should the woman still go with what the weatherman says, if she can't find a flaw in his argument?
The intelligent people are still humans, and can default to their intuition just like we can if they think that using unfiltered intuition would be the most accurate.
But by hypothesis, we are talking about a scenario where the intelligent person is proposing something that violently clashes with an intuition that is supposed to be common to everyone. So we're not talking about whether the intelligent person has an advantage in all situations, on average; we're talking about whether the intelligent person has an advantage, on average, in that particular class of situations.
In other words, we're talking about a situation where something has obviously gone wrong; the question is which is more likely to have gone wrong, the intuitions or the intelligent person. It doesn't seem to me that your argument addresses that question.
if the claim "the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions" actually implied that intuitions were orders of magnitude better
That's not what it implies; or at least, that's not what I'm arguing it implies. I'm only arguing that, if we already know that something has gone wrong (that is, if we have an obvious conflict between the intelligent person and the intuitions built up over the evolution of humans in general), it's more likely that the intelligent person's arguments have some mistake in them.
Also, there seems to be a bit of confusion about how the word "intuition" is being used. I'm not using it, and I don't think the OP was using it, just to refer to "unexamined beliefs" or something like that. I'm using it to refer specifically to beliefs like "mass murder is wrong", which have obvious reasonable grounds.
Not a good analogy, since the intelligent person would be able to write a program that is at least as good as yours, even if they aren't able to debug yours. It doesn't matter if the intelligent person can't debug your program if they can write a buggy program that works better than your buggy program.
We're not talking about the intelligent person being able to debug "your" program; we're talking about the intelligent person not being able to debug his own program. And if he's smarter than you, then obviously you can't either. Also, we're talking about a case where there is good reason to doubt whether the intelligent person's program "works better"--it is in conflict with some obvious intuitive principle like "mass murder is wrong".
why should we stand by our intuitions and disregard the opinions of more intelligent people?
Because no matter how intelligent the people are, the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions, as a result of evolutionary processes operating over centuries, millennia, and longer. So if there is a conflict, it's far more probable that the intelligent people have made some mistake that we haven't yet spotted.
I am reminded of a saying in programming (often attributed to Brian Kernighan) that goes something like this: it takes twice as much intelligence to debug a given program as to write it. Therefore, if you write the most complex program you are capable of writing, you are, by definition, not smart enough to debug it.