It's an interesting idea, but I feel very skeptical about the generic plan. Personally, a revulsion for organized/standardized education is what drove me to look at things like Less Wrong in the first place. I think this is fairly common in the community, with many people interested in discussing akrasia and self-improvement habits.
Also, considering the informality of ideas like "I want to be a good rationalist", I would expect this sort of thing to be much more open-ended and unstructured anyway. It doesn't fit with the idea of a rigid system or a "boot camp"; it just seems contrary to the very idea of rationality and free thinking.
I am also somewhat bemused by the character of the "application", where qualification apparently hinges on having read the Sequences and SIAI in-house literature. Not to be too cynical, but the level of self-congratulation is quite remarkable, and it seems to set the bar fairly low for a subject that has been actively discussed for thousands of years.
On the other hand I'm sure this is well intentioned and you have to start somewhere, so I apologize if my remarks seem overly caustic.
I've also read several times that physicists and scientists tend to achieve their best results by their mid-thirties. But I don't think the characterization works for physics/math/etc. the way it does for baseball and athletics. There's a major qualitative difference: athletes are forced to retire fairly young, whereas academics are very rarely forced to retire until they are nearing the end of their viable working lives. I do agree that something like physics also has a component of "mental athleticism", which naturally peaks relatively young.
Also, for a lot of subjects like physics or math, you probably won't have a decent mastery of your field until around age 25-35. So the simple fact is that you will be past your peak for the majority of your practicing career. It's a bit sad, but again, I think it shows that the concept of "peaking" may not be as broadly applicable to academic areas.
In the 419,991 times this simulation has run, players have won $1,811,922. And by "won" I mean they have won back $1,811,922 of the $419,991 they spent (431%).
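For concreteness, the arithmetic checks out (assuming each play costs $1, so total spend equals the number of plays -- the comment doesn't state the stake explicitly):

```latex
\frac{1{,}811{,}922}{419{,}991} \approx 4.31 = 431\%
```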
Mating is good. I am somewhat baffled as to why the "PUA" discussion has taken on such a strong negative connotation. As you say, there are plenty of benefits for everyone involved, and it serves as a successful, easy-to-test model for many related skill sets. Personally, I suspect the hesitancy to talk about mating and mating skills is little more than a vestigial organ of society's ancient associations with religion. It still seems "improper" in polite society to talk about how to get into someone's pants. But I see no reason why something like "pick-up artistry" must be unethical or wrong.
Yes -- I agree strongly with this analysis.
The whole "happiness limited by shyness/social awkwardness which results in no dates" stereotype does not apply to many people here.
How's that?
Hypertext reading has strong potential, but it also has drawbacks that standard books largely avoid. For example, it's much easier to get distracted or sidetracked by secondary information that may not even be very important.
It's not that books take longer to produce; it's that books tend to have higher quality, and a corollary is that they frequently take longer to produce. Personally, I feel fairly certain that the average quality of my online reading is substantially lower than that of my offline reading.
Any problem in government can only be suboptimal relative to a different set of policies, and as such, criticism of government should come with an argument that a solution is possible.
I think most criticism rests on the implicit understanding that a solution is possible; otherwise you are basically hiding behind a shield of nihilism or political anarchy. It seems overly restrictive to say that any criticism without an accompanying solution is worthless. Just because you can see a problem doesn't mean you can see a solution. It's a bit like asking all voters to also be politicians.
I think you've touched on something really important when you mention how it is easier to be a strong critic than to have a real, working solution. This is a common retort against strong criticism -- "Oh, but you don't know how to make it any better" -- and it seems to be something of a logical fallacy.
There is a certain energy and inspiration behind good criticism that I've always been fond of. This matters, because criticism is almost always non-conformist or pessimistic in a certain sense, so you need some encouragement to remind yourself that criticism generally originates from good intentions.
I would argue that charity is just plain good, and you don't need to take something simple and kind and turn it into an inconclusive exercise in societal interpretation.
This sort of brings to my mind Pirsig's discussions about problem solving in ZATAOMM. You get that feeling of confusion when you are looking at a new problem, but that feeling is actually a really natural, important part of the process. I think the strangest thing to me is that this feeling tends to occur in a kind of painful way -- there is some stress associated with the confusion. But as you say, and as Pirsig says, that stress is really a positive indication of the maturation of an understanding.
That's funny. Well, perhaps Foucault may not have been very accurate -- I'm not at all qualified to comment. But the book still stands as an amazing work of intellectual writing.
Some fiction...
The Color of Magic (Discworld series) -- Terry Pratchett -- pretty funny, from a top British author. The first book (this one) seems unmatched by at least the next five in the series, but there are around 30 books in the series total, so...
Neutron Star -- Larry Niven -- a collection of short stories set in Niven's fascinating future history.
A Fire Upon the Deep -- Vernor Vinge -- simply the best picture I have read of a future filled with AGIs.
Neuromancer -- William Gibson -- an incredible action/cyberpunk story with incredible characters, though it gets pretty boring at the end.
Some nonfiction...
Madness and Civilization -- Michel Foucault -- exquisite historical/philosophical writing. I think this book shows what it means to be a real scholar.
The Road to Reality -- Roger Penrose -- an interesting attempt to traverse the exact sciences of physics/mathematics in one singular drive. Not recommended without extensive prior experience in math/physics, since unfortunately it doesn't explain so much as shed new light on things you may have already learned. There need to be more books like this.
Die Nigger Die! -- H. Rap Brown -- this book is written with such passion and intelligent revolutionary spirit that it had a major impact on me when I read it. (Brown was an important figure in the civil rights movement of the '60s.)
Pirsig's book is brilliant... I recommend that to everyone as well...
AFAIK there are currently no major projects attempting to send contact signals across the galaxy (let alone the universe). Our signals may be reaching Vega and some of the nearest star systems, but definitely not much farther. It's not prohibitively difficult to broadcast out to, say, a 1,000-light-year-radius ball around Earth, but you're still talking about an antenna far larger than anything currently existing.
Right now the SETI program is essentially focused on detection, not broadcasting. Broadcasting is a much more expensive problem. Detection is favorable for us because if there are other broadcasting civilizations, they will tend to be more advanced, and broadcasting will be comparatively easier/cheaper for them.
Edit: If you're doing directional broadcasting, it's true that you can reach much farther. Of course, you are simply trading broadcasting distance for the amount of sky covered by the signal. Wikipedia says that Arecibo broadcast towards M13, around 25,000 light years away -- about the same as our distance from the center of the Milky Way.
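To make the tradeoff concrete, here is the standard textbook link-budget scaling (general antenna theory, not anything specific to Arecibo): an antenna of gain G concentrates its transmit power P into a solid angle of roughly 4π/G, and the flux at distance d is

```latex
S = \frac{P\,G}{4\pi d^{2}}, \qquad \Omega \approx \frac{4\pi}{G}
```

so for a fixed receiver sensitivity, the maximum range grows like the square root of G while the fraction of sky covered shrinks like 1/G.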
I don't think this is much of an insight, to be honest. The "anthropic" interpretation is just the statement that the universe requires self-consistency -- which is, let's say, not surprising.
The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)
My feeling is that this is a statement about the English language, not a statement about the universe.
Note that one could just as easily write a two-page article about a "Futuristic Life Meme" representing cryonics supporters' sense of being threatened by death.
The analysis of a new, emerging science deserves critique. But from what I can tell, this particular critique is essentially ad hominem: it attacks a belief based on the characteristics of the people who hold it rather than on their arguments.
It also trivializes the fact that there are real reasons for being reluctant to invest in cryonics. Lastly, it conflates cryonics skepticism with unwillingness to invest.
My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding.
I would LOVE to agree with this statement, as it justifies my criticism of poor teachers who IMO are (not usually maliciously) putting their students through hell. However, I don't think it's obvious; maybe you just have to take it as an axiom of your system. There seems to be some notion of individual difference missing from the system: if someone is simply terrible at learning, can you really expect to succeed in explaining things to them? Realistically, I think it's probably impossible to capture the massive concept of understanding in merely three levels, and these problems are a symptom of that fact.
As another example: in order to understand something, it's clearly necessary to be able to explain it to yourself. In your system, you additionally require that understanding means being able to explain things to other people. But to explain things to others, you have to understand those others, as has been discussed; and since understanding requires self-explanation, you would have to be able to explain other people to yourself. Why should an explanation of other individuals' behavior be necessary for understanding some unrelated area of expertise, say, mathematics? It's not clear to me.
And regarding the archetypal "deep understanding, poor teacher" you have in mind, do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield?
It certainly seems like someone with a deep understanding of their subject should be able to identify the validity or uncertainty in their assumptions about the subject. If they are a poor teacher, I think I would still believe this to be true.
Ah, OK, I read your article. I think it's an admirable task to try to classify the levels of understanding. However, I'm not sure I'm convinced by your categorization. It seems to me that many of these "Level 1 savants", as you call them, are quite capable of fitting their understanding with the rest of reality. Actually, the label "Level 1 understanding" basically trivializes that understanding. Yet many of these people who are bad teachers have a very nontrivial understanding -- otherwise I don't think this would be such a common phenomenon in, for example, academia. I would argue that these people have further complications which are not recognized in the 1-2-3 hierarchy.
That being said, you have to start somewhere, and the 0-1-2-3 hierarchy looks like a good place to start. I'd definitely be interested in hearing more about this analysis.
Suppose that inventing a recursively self-improving AI is tantamount to solving a grand mathematical problem, similar in difficulty to the Riemann hypothesis. Let's call it the RSI theorem.
This theorem would then constitute the primary obstacle to the development of a "true" strong AI. Other AI systems could still be developed -- for example, by simulating a human brain at 10,000x speed -- but such systems would not capture the spirit (or capability) of a truly recursively self-improving superintelligence.
Do you disagree? Or, how likely is this scenario, and what are the consequences? How hard would the "RSI theorem" be?
I will reply to this in the sense of
"do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?",
since I am not so familiar with the formalism of a "Level 2" understanding.
My uninteresting, simple answer is: yes.
My philosophical answer is that I find the entire question very interesting and strange. That is, the relationship between teaching and understanding is quite strange, IMO. There are many people who are poor teachers but who excel in their discipline. This seems like a contradiction, because masterful understanding would seem to be sufficient, and possibly necessary, for high-level teaching skill.
Personally, I resolve this contradiction as follows. My own limitations force me to learn a subject in very simple strokes. By the time I have reached mastery, I feel very capable of teaching the subject to others, since I have been forced to understand it myself in the simplest way possible.
Other people, who are possibly quite brilliant, are able to master a subject without having to transmute the information into a simpler form. Consequently, they are unable to make the sort of connections you describe as necessary for teaching.
Personally I feel that the latter category of people must be missing something, but I am unable to make a convincing argument for this point.
Cool article...
Cool... that's really close to where I work. I'll probably make it. Thanks for taking the initiative guys.
I'm not sure I buy that these explanations (as in the disease-testing example) are best characterized as "frequentist" -- it seems to me that they just state the problem and the data in a form more relevant to the question being asked. Without those extra statements, you have to decode the information down from a more abstract level.
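To illustrate with made-up but typical numbers (1% prevalence, 90% sensitivity, 10% false-positive rate -- my own illustrative figures, not the numbers from the original example):

```latex
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
            = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99}
            = \frac{0.009}{0.108} \approx 8.3\%
```

The so-called frequentist phrasing does the same computation in advance: out of 1,000 people, 10 are sick and 9 of them test positive; of the 990 healthy people, 99 test positive; so 9 of the 108 positives are true positives. Same information, already decoded.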
For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.
This is an understandable sentiment, but it's pretty harsh. Everybody makes mistakes -- there is no such thing as a perfect scholar, or perfect author. And I think that when Descartes is studied, there is usually a good deal of critique and rejection of his ideas. But there's still a lot of good stuff there, in the end.
What philosophical works and authors have you found especially valuable, for whatever reason?
I have found Foucault to be a very interesting modern philosopher/historian. His book "Madness and Civilization" (translated from the French) strikes me as a highly impressive analysis on many different levels. His writing style is striking, and his focus on motivation and purpose goes very, very deep.
I understand that there is work supporting the idea that cryonics/regeneration/etc. will eventually succeed. However, I don't feel the need to respond to it very directly, because that work is itself indirect, in the sense that it only makes plausibility arguments. As a cryonics skeptic, I am not attempting to rule out the plausibility or possibility of cryonics; it seems fairly plausible that this stuff will eventually get worked out, as with the usual arguments for technological advancement. I am only asserting that there is insufficient evidence that it will work for my personal freezing/revival to justify a substantial investment.
The response to this might be to claim that I am unfairly or erroneously making "demands for particular proof". I think that point is an intelligent one, but it is being somewhat overused in this context. In areas like physics or biology, it is completely standard to believe nothing except what has been shown by fairly direct evidence. You might even characterize the entirety of professional science as the area in which "demands for particular proof" are the centralizing, unifying, distinguishing feature. Seeing as cryonics is essentially an area of physics and biology, I view it in much the same way: I expect to see more concrete proof of its workability before being willing to believe in it, invest in it, or rely on it for my supposed personal immortality.
The extent of qualification necessary to clearly convey a meaning on here is absolutely unfathomable. No, it's beyond unfathomable, it's really utter rubbish, it's exasperating and despicable how this happens almost every time one starts a substantive disagreement.
It's perfectly clear from context that I am referring to the entire cryonics refrigeration and revival process, but in case that wasn't clear, let that be clearly stated now. In case that was clear and it was intentionally or subconsciously disregarded, as I must shamefully and cynically suspect, then you can simply go fuck yourself.
In my experience, it's quite fruitless to get into a discussion of evidence on this particular problem, because it's absolutely clear that cryonics (on current evidence) does not work. The entire argument in favor of cryonics is based on projections of future discoveries and technologies, as any cryonics proponent will admit. Thus their argument is not really an argument from evidence; it is an argument from expectation. Now, this expectation may well be solid and well justified, but in my experience the LW community tends to bastardize the argument and claim that the evidence is somehow solid and well worked out -- to the extent that you should dish out a lot of money, starting now. This goes as far as the IMO absurd idea in ciphergoth's post above, whereby he states that the burden of proof should be placed on the cryonics critics.
This kind of rebuttal fails absolutely, because it simply doesn't address the point; you're taking the OP completely out of context. The OP is arguing against cryonics evidence in the context of having to dish out substantial money. The pro-cryonics LW community asserts that you must pay money if you believe in cryonics, since it's the only rational decision, or some such logic. In response, critics (such as the OP) contend that the evidence for cryonics isn't sufficient to justify paying the money. That is totally different from asserting, out of context, that you don't believe in cryonics or its possibility.
In your examples, you don't have to pay out of your own wallet to believe that 1) practical fusion power, 2) a human mission to Mars, or 3) substantial life extension will happen. The examples are misleading.
You might think of the Zen idea, in which the proposing of solutions is deliberately held off, or treated differently. This is a common response to the tendency of solutions to precipitate themselves so ubiquitously.
Without any way of authenticating the donations, I find this to be rather silly.
I just saw this and realized I basically just expanded on this above.
I wasn't familiar with this description of "world states", but it sounds interesting, yes. I take it that positing "I am a thing that thinks" is the same as asserting K(E). In asserting K(K(E)), I assert that I know that I know that I am a thing that thinks. If this understanding is incorrect, the following logic doesn't apply.
I would argue that K(K(E)) is actually a necessary condition for K(E): if I don't know that I know proposition A, then I don't know proposition A.
Edit/Revised: I think all you have to do is realize that "K(K(A)) false" permits "K(A) false". At first I had a little proof but now it seems just redundant so I deleted it.
So I guess I disagree: I think the iterations K(K(...)) are actually weaker statements, which are necessary for K(A) to be achieved. Consequently, I don't see how you can learn anything beyond K(A).
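In standard epistemic-logic notation (my gloss, not anything stated in the thread), the claim that K(K(A)) is necessary for K(A) is exactly the positive-introspection axiom (axiom 4 of S4/S5), and the argument above is its contrapositive:

```latex
K(A) \rightarrow K(K(A)) \quad\Longleftrightarrow\quad \neg K(K(A)) \rightarrow \neg K(A)
```

On this reading, K(K(A)) comes for free once K(A) holds, which is why the iterated statements add no new information.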
Um, if you're a brain in a vat, then any "brain" you perceive in the real world, like on a "real world" MRI, is nothing but a fictitious sensory perception that the vat is effectively tricking you into thinking is your brain. If you're a brain in a vat, you have nothing to tell you that what you perceive as your brain is actually your brain. It may be hard to implement the brain-in-a-vat scenario, but once implemented, it's absolutely undetectable.
People don't mention anything like altering the brain itself.
Altering the brain itself? The brain itself is the only thing there is to alter. The only things that exist in the brain-in-a-vat example are the brain, the vat, and whatever controls the vat. The "human experiences" are just the outcome of alterations to the brain, e.g., by hooking up electrodes. I really have no idea how else you imagine this working.
You don't seem to be familiar with this concept.
You could posit a brain in the vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain,
This is the entire point of the brain-in-a-vat idea. It's not that "you could posit it"; you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means the vat imposes that correlation on us through its brain life-support system.
Although I don't claim to be an expert in philosophy, the brain-in-a-vat scenario is widely known to be philosophically unresolvable. The only thing we can really know is that we are a thing that thinks. This is Descartes 101.
How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.
No, there's no way of knowing that you're not being tricked. If your perception changes, and your perception of your brain changes, that just means the vat is tricking the brain into perceiving that.
The "brain in the vat" idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.
That's a flagrant misinterpretation. The OP's intention was to say that innocent people don't get put in prison intentionally.
I sometimes get ideas for inventions, but I'm not sure what to do with them, as they are often unrelated to my work, and I don't possess the craftsmanship to make prototypes and market or investigate them on my own. Does anyone have experience and/or recommendations for selling or otherwise profiting from such ideas?
This comment just seems really harsh to me... I understand what you're saying but surely the author doesn't have bad intentions here...
This seems very well written, and I'd like to compliment you in that regard. I find the shaman example amusing and also very fun to read.
For Sophie: if she has a large data set, then her theory should be able to predict a data set for the same experimental configuration, and the two data sets would then be compared. That is the obvious standard, and I'm not sure why it's not permitted here. Perhaps you were trying to emphasize Sophie's desire to go on and test her theory on different experimental parameters.
The original shaman example works very well for me; it is rather basic and doesn't make any unsubstantiated claims. In the later examples, however, there needs to be more elaboration on the method by which you go from theory --> data. In the post you say,
She immediately returns to her office and spends the next several weeks writing Matlab code, converting her theory into a compression algorithm. The resulting compressor is highly successful: it shrinks the corpus of experimental data from an initial size of 8.7e11 bits to an encoded size of 3.3e9 bits.
Without knowing the details of how you go from theory to compressed end product, it's hard to say whether this method makes sense. Actually, I would probably be fairly satisfied if you stopped after the second section. But when you introduce the third section, with the competition between colleagues, it implies some kind of unknown, nontrivial relation between the theory, its fitting parameters, the compression program, the compression program's size, and the final compressed data size.
It all seems too vague to support a conclusion like "add the compression program size and the final data size to get the final number".
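For what it's worth, here is a minimal toy sketch (my own construction, not the post's Matlab code) of the two-part scheme as I understand it: the "theory" shrinks the data down to residuals, and the score is the size of the program plus the size of the encoded data.

```python
import gzip
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10_000, dtype=np.float64)
data = 3.0 * x + 7.0 + rng.normal(0.0, 1.0, x.size)  # fake "experimental" data

# Baseline: encode the raw measurements (quantized to 0.01) directly.
raw = np.round(100 * data).astype(np.int64).tobytes()

# "Theory": data ~ a*x + b. Encode only the quantized residuals.
a, b = np.polyfit(x, data, 1)
residuals = np.round(100 * (data - (a * x + b))).astype(np.int16).tobytes()

theory_source = b"a, b = np.polyfit(x, data, 1)"  # stand-in for the program's size
score_raw = len(gzip.compress(raw))
score_theory = len(theory_source) + len(gzip.compress(residuals))

print(score_raw, score_theory)  # the theory-based encoding wins by a wide margin
```

Even granting this sketch, the objection stands: the final number depends on exactly how much of the pipeline you count as "the program".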
There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there. That said, it's clear from prior discussion that most people in this forum believe it will work, which I find slightly absurd, to be honest. You can talk a lot about uncertainties and supporting evidence and burden of proof and so on, but the simple fact remains: there is no proof that cryonics will work, either right now, or 20 or 50 years in the future. I hate to sound cynical, and I don't mean to rain on anyone's parade, but I'm just stating the facts.
Bear in mind they don't just have to prove it will work. They also need to show you can be uploaded, reverse-aged, or whatever else comes next. (Now awaiting hordes of flabbergasted replies and accusations.)
This looks somewhat similar to what I was thinking, and the attempt at formalization seems helpful. But it's hard for me to be sure, since I struggle to grasp its conceptual meaning and implications. What are your own thoughts on your formalization?
I've also recently found something interesting: people identify the criterion of mathematical existence with freedom from contradiction. This can be found on pg. 5 of Tegmark here, attributed to Hilbert.
This looks disturbingly similar to my root idea and makes me want to do some reading on this stuff. I have been unknowingly claiming the criterion for physical existence is the same as that for mathematical existence.
B meant "This rock is heavier than this pencil." So, "B or ~B" means "Either this rock is heavier than this pencil, or this rock is not heavier than this pencil." Surely that is something that I can say truthfully regardless of where the pencil's weight lies. So I don't understand why you say that we can't say "B or ~B" if the pencil's weight lies in a certain range.
My idea was that the rock weighs 1.5 plus/minus sigma. If the pencil also weighs 1.5 plus/minus sigma, then you can't compare their weights with absolute certainty. The difference in their weights is a statistical proposition; the presence of the sigma factor means that the pencil must weigh less than (1.5 minus sigma) or more than (1.5 plus sigma) for B or ~B to hold with certainty. But anyway, I might concede your point, as I didn't really intend this to be so technical.
I didn't say that the consequent can imply anything "by logical explosion". On the contrary, since the consequent is a tautology, it only implies TRUE things. Given any tautology T and false proposition P, the implication T => P is false.
Sorry, "logical explosion" is just a synonym for "ex falso quodiblet", which you originally mentioned. You originally pointed out that the consequent can imply anything because of ex falso quodiblet, when A is not true. That wasn't my intention, so I added the A true qualifier.
More generally, I don't understand the principle by which you seem to say that A => ~~A is "too simple", while other tautologies are not. Or are you now saying that all tautologies are too simple, and that you want to focus attention on certain non-tautologies, like "if A AND C, then B" ?
It initially seemed too simple to me, but maybe you are right. My original thinking was that "A => ~~A" merely says that a statement makes sense, whereas other propositions seem to have meaning beyond that. Also, the class of tautologies relating different propositions seems to generalize the class of tautologies over a single proposition.
... It doesn't seem right to think of this "obviousness" as having anything to do with the territory. It seems entirely a property of how well we can work with our map.
I hadn't really thought about this, and I'm not sure how important it is to the argument, although it is an interesting point. Maybe we should come back to this if you think this is a key point. For the moment I am going to move to the other reply...
Little note to self:
I guess my original idea (i.e., the idea behind my very first question in the open thread) was that physical systems can be phrased in the form of tautologies. Now, I don't know enough mathematical logic, but my intuition was/is telling me that if you have a system completely described by tautologies, then by (hypothetically) fine-graining these tautologies to cover all options, and then breaking them into alternative theorems, you get an entire "mathematical structure" (i.e., propositions and relations between propositions, based on logic) for the reality. And this structure would be consistent, because we had already shown that the tautologies could be formed consistently using the (hypothetically) available data. Physics would then work by seizing on these structures and attempting to figure out which theorems were true, refining the list of theorems down into results, and so on.
I'm beginning to worry I might lose the reader due to the impression that I am "moving the goalposts" or something of that nature. If that appears to be the case, I apologize and can only admit my ignorance. I wasn't entirely sure what I was thinking about to start with, which was really why I made my post. This exchange is really helping me understand what I was thinking.
Sorry, I caught that myself earlier and added a sidenote, but you must have read before I finished:
Side-note: I suppose these particular examples are all tautological so they don't quite show the full richness of a logical system. However, it would be easy to make theorems, such as "if A AND C, then B" (where C could be specified similar to A or B.) Then we would see not only tautologies but also theorems and other propositions which are all encoded as we would expect from a typical logical system.
Edit: Or, sorry, just to complete the thought, in case you had read that -- the tautology does depend on whether the pencil's weight lies in the range 1.5 plus/minus sigma. If it lies in that range, we can't say B or ~B.
In answer to (1.), I'm not using the consequent because you identified the fact that the consequent can imply anything by logical explosion. I was referring to the "A => ~~A" example not getting to the heart of the point, because that example is too simple to reveal anything of substance, as I subsequently discuss.
In answer to (2.), I am not claiming that some tautologies are "less true". I am just roughly sketching a gradation from obvious tautologies, to less obvious tautologies, to tautologies which may not even be recognizable as tautologies, to theorems, and so on.
I think Pigliucci is somewhat hung up on the technicality of whether a computer system can instantiate (a) an intelligence or (b) a human intelligence. Clearly he is gravely skeptical that it could be a human intelligence, but he seems to conflate that skepticism with skepticism about general computer intelligence. I don't think anybody really expects an AI to be exactly like a human, so I'm not that impressed by these distinctions -- whereas Pigliucci seems to think that's one of the main talking points? I wish Pigliucci read these comments so we could talk to him. Are you out there, Massimo?
Thank you for the comment, and I hope this reply isn't too long for you to read. I think your last sentence sums up your comment:
...the territory ought not to be thought of as a logical system of which the features are axioms or theorems.
In support of this, you mention:
What about a tautology such as "A => ~~A"? Tautologies do give us true statements about the territory. But, importantly, such a statement is not true in virtue of any feature of the territory. The tautology would have been true no matter what features the territory had. There is nothing in the territory making "A => ~~A" be true of it.
It seems like things are getting confused here. I take "A => ~~A" to be a necessary condition for proposition A to make sense. To make things concrete, let me use a real example. Say that proposition A is: "This particular rock weighs 1.5 pounds, with uncertainty sigma." This seems like a fairly reasonable, easily imaginable statement. Now clearly, A is simply a rendition or re-representation of the reality that is the physical system. In other words, proposition A only tells you what reality tells you by holding the rock in your hands, or throwing it through the air, or vaporizing it and measuring the output energy. The only difference is that here, the reality is encoded in human language.
For A to make sense, clearly "A => ~~A" must be true: for the rock to weigh 1.5 plus/minus sigma, it must not not-weigh 1.5 plus/minus sigma. That strikes me as more or less a requirement imposed by human language, not so much a requirement of physical reality.
For this reason, I think your "A => ~~A" example does not get to the heart of my point, which is slightly different. Consider again the proposition "A true => (if A then B) OR (if A then not B)". Take B as: "This rock is heavier than this pencil." Now, assuming that the pencil's weight does not lie in the range 1.5 plus/minus sigma, this proposition must be true. And this statement is significantly more complicated than "A => ~~A"; it implies that (under proper restrictions) you can build longer logical statements, and, continuing further, statements which are no longer trivial properties of human language.
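(To spell out the condition, with w_r and w_p as my own symbols for the rock's and pencil's weights -- notation not used in the thread:

```latex
A:\; w_r \in [1.5-\sigma,\; 1.5+\sigma], \qquad B:\; w_r > w_p
```
```latex
w_p < 1.5-\sigma \;\Rightarrow\; (A \Rightarrow B), \qquad
w_p > 1.5+\sigma \;\Rightarrow\; (A \Rightarrow \neg B)
```

Only when w_p falls inside the interval is neither implication forced, which is exactly the range excluded above.)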
Side-note: I suppose these particular examples are all tautological so they don't quite show the full richness of a logical system. However, it would be easy to make theorems, such as "if A AND C, then B" (where C could be specified similar to A or B.) Then we would see not only tautologies but also theorems and other propositions which are all encoded as we would expect from a typical logical system.
Now, the fact that this sort of statement works comes straight out of the territory. Our maps to A and B are merely re-representations of reality, and they are what reality is telling us, only encoded in human language. So we are seeing that reality appears to obey the same logical rules that we have come to expect from ordinary kinds of logical systems.
Now, I am not claiming that the physical system (the territory) is somehow naturally encoding itself into these re-representations. Clearly, the human mind is at work in realizing them. But once these re-representations are realized, it really is the territory which takes on a logical structure.
So I am not claiming that the physical system is naturally a system of axioms and theorems. My proposition is weaker and more generic: the physical system has a logical character. My real punchline, I suppose, is that this logical character of the re-representation is non-trivial. As you say, "Things are a certain way. They are not some other way." But the way in which they are is logical: they are arranged in the same way that logical statements are encoded. This is non-trivial, because physical systems at the highest level just look like a huge collection of various and vague facts. We have no a priori reason to expect physical systems to map out in this way -- but they do! And this, I claim, is what allows math to be so effective in working with reality in general.
I can see how that phrasing would strike you as being redundant or inaccurate. To try to clarify --
Two rocks occupying the same point in space is a logical contradiction in the following sense: if it weren't a logical contradiction, there wouldn't be anything preventing it. You might claim this is a "physical" contradiction, or a contradiction of "reality", but I am attempting to identify this feature as a signature example of a sort of logic of reality.