Posts

The impossibility of rationally analyzing partisan news 2023-11-16T16:19:50.485Z
AI as Super-Demagogue 2023-11-05T21:21:13.914Z
Doubt Certainty 2023-11-02T17:43:54.157Z

Comments

Comment by RationalDino on [deleted post] 2023-11-17T15:45:28.683Z

Hypothetically this is possible.

But, based on current human behavior, I would expect such simulations to focus on the great and famous, or on situations which represent fun game play.

My life does not qualify as any of those. So I heavily discount this possibility.

Comment by RationalDino on [deleted post] 2023-11-17T01:55:58.062Z

I agree with your caveats.

However I'm not egocentric enough to imagine myself as particularly interesting to potential simulators. And so that hypothetical doesn't significantly change my beliefs.

Comment by RationalDino on [deleted post] 2023-11-16T20:56:00.880Z

This line of reasoning is a lot like the Doomsday argument.

Comment by RationalDino on [deleted post] 2023-11-16T20:54:05.176Z

I am well aware that my assumptions are my assumptions. They work for me, but you may want to assume something different.

I've personally interpreted the Doomsday argument that way since I ran across it about 30 years ago. Honestly AI x-risk is pretty low on my list of things to worry about.

The simulation argument never impressed me. Every simulation that I've seen ran a lot more slowly than the underlying reality. Therefore even if you do get a lengthy regress of simulations stacked on simulations, most of the experience is in the underlying reality, not in the simulations. Therefore I've concluded that I probably exist in reality, not a simulation.

Comment by RationalDino on The impossibility of rationally analyzing partisan news · 2023-11-16T20:24:03.648Z · LW · GW

If all beliefs in a Bayesian network are bounded away from 0 and 1, then an approximate update can be done to arbitrary accuracy in polynomial time.

The pathological behavior shows up here because there are two competing but mutually exclusive belief systems. And it is hard to determine when your world view should flip.
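To make the flip dynamic concrete, here is a minimal sketch of my own (with made-up numbers) of tracking two mutually exclusive hypotheses in log-odds form. It is not the polynomial-time network update itself, just an illustration of why the flip point is awkward: evidence has to pile up before the posterior crosses over.

```python
import math

def log_odds_update(prior_log_odds, likelihood_ratios):
    """Accumulate evidence for hypothesis A over hypothesis B in log-odds form.

    Each likelihood ratio is P(evidence | A) / P(evidence | B).
    """
    log_odds = prior_log_odds
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return log_odds

def posterior(log_odds):
    """Convert log-odds back to a probability for hypothesis A."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Start mildly favoring worldview B, then see a mixed run of evidence.
evidence = [2.0, 3.0, 0.5, 4.0, 2.5]      # hypothetical likelihood ratios
lo = log_odds_update(math.log(1 / 4), evidence)
print(posterior(lo))  # crosses 0.5 only after enough evidence accumulates
```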

I hope that this makes it more interesting to you.

Comment by RationalDino on How much fraud is there in academia? · 2023-11-16T19:34:03.401Z · LW · GW

I like your second argument. But to be honest, there is a giant grey area between "non-replicable" and "fraudulent". It is hard to draw the line between, "Intellectually dishonest but didn't mean to deceive" and "fraudulent". And even if you could define the line, we lack the data to identify what falls on either side.

It is worth reading Time to assume that health research is fraudulent until proven otherwise? I believe that this case is an exception to Betteridge's law - I think that the answer is yes. Given the extraordinary efforts that the editor of Anaesthesia needed to catch some fraud, I doubt that most journals do it. And because I believe that, I'm inclined to a prior that says that non-replicability suggests at least even odds of fraud.

As a sanity check, high profile examples like the former President of Stanford demonstrate that fraudulent research is accepted in top journals, leading to prestigious positions. See also the case of Dr. Francesca Gino, formerly of Harvard.

And, finally, back to the line between intellectual dishonesty and fraud. I'm inclined to say that they amount to the same thing in practice, and we should treat them similarly. And the combined bucket is a pretty big problem.

Here is a good example. The Cargo Cult Science speech happened around 50 years ago. Psychologists have objected ever since to being called a pseudoscience by many physicists. But it took 40 years before they finally did what Feynman told them to, and tried replicating their results. They generally have not acknowledged Feynman's point, nor have they started fixing the other problems that Feynman talked about.

Given that, how much faith should we put in psychology?

Comment by RationalDino on How much fraud is there in academia? · 2023-11-16T17:02:05.827Z · LW · GW

The effect isn't large, but you'd lose that bet. See Nonreplicable publications are cited more than replicable ones.

Comment by RationalDino on [deleted post] 2023-11-15T22:27:42.055Z

I have a simple solution for Pascal muggers. I assume that, unless I have good and specific reason to believe otherwise, the probability of achieving an extreme utility u is bounded above by O(1/u). And therefore after some point, I can replace whatever extreme utility is quoted in an argument with a constant. Which might be arbitrarily close to 0.
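As a minimal sketch of that bound (base_rate is a made-up constant standing in for my prior), note that once the O(1/u) prior kicks in, the expected value of the offer stops growing no matter how large the quoted utility gets:

```python
def capped_expected_value(claimed_utility, base_rate=1.0):
    """Expected value when P(payoff of size u) is assumed to be at most min(1, base_rate / u)."""
    probability = min(1.0, base_rate / claimed_utility)
    return probability * claimed_utility

for u in [1e3, 1e6, 1e9, 1e100]:
    print(u, capped_expected_value(u))  # the product never exceeds base_rate
```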

In some contexts this is obvious. For example if someone offers you $1000 to build a fence in their yard, you might reasonably believe them. You might or might not choose to do it. If they offered you $10,000, that's suspiciously high for the job. You reasonably would worry that it is a lie, and might or might not choose the believable $1000 offer over it as having higher expected value. If you were offered $1,000,000 to build the same fence, you'd assume it was a lie and definitely wouldn't take the job.

By this reasoning, what should you think in the limit as the job stays finite, and the reward tends towards infinity? You hit that limit with the claim that for a finite amount of worship in your lifetime (and a modest tithe) you'll get an infinite reward in Heaven. This is my favorite argument against Pascal's wager.

But now let's bring it back to something more reasonable. Longtermism makes moral arguments in terms of improving the prospect of a distant future teeming with consciousness. But if you trot out the Doomsday argument, the a priori odds of this future are proportional to 1 / the amount of future consciousness. Given the many ways that humanity could go extinct, and the possibilities of future space, I don't have a strong opinion how to adjust that prior given current evidence. Therefore I treat this as a case where we hit the cap: the probability of the reward from the scenario is bounded above by a reasonably small constant.

My constant is small in this case because I also believe that large amounts of future consciousness go hand in hand with high likelihood of extreme misery from a Malthusian disaster scenario.

Comment by RationalDino on Prosthetic Intelligence · 2023-11-14T23:26:08.899Z · LW · GW

It depends on subject matter.

For math, it is already here. Several options exist, Coq is the most popular.

For philosophy, the language requirements alone need AI at the level of reasonably current LLMs. Which brings their flaws as well. Plus you need knowledge of human experience. By the time you put it together, I don't see how a mechanistic interpreter can be anything less than a (hopefully somewhat limited) AI.

Which again raises the question of how we come to trust in it enough for it not to be a leap of faith.

Comment by RationalDino on Prosthetic Intelligence · 2023-11-14T16:17:00.306Z · LW · GW

Nobody ever read the 1995 proof.

Instead they wound up reading the program. This time it was written in C - which is easier to follow. And the fact that there were now two independent proofs in different languages that ran on different computers greatly reduced the worries that one of them might have a simple bug.

I do not know that any human has ever tried to properly read any proof of the 4 color theorem.

Now to the issue. The overall flow and method of argument were obviously correct. Spot checking individual points gave results that were also correct. The basic strategy was also obviously correct. It was a basic, "We prove that if it holds in every one of these special cases, then it is true. Then we check each special case." Therefore it "made sense". The problem was the question, "Might there be a mistake somewhere?" After all, proofs do not simply have to make sense; they need to be verified. And that was what people couldn't accept.

The same thing with the Objectivist. You can in fact come up with flaws in proposed understandings of the philosophy fairly easily. It happens all the time. But Objectivists believe that, after enough thought and evidence, it will converge on the one objective version. The AI's proposed proof therefore can make sense in all of the same ways. It would even likely have a similar form. "Here is a categorization of all of the special cases which might be true. We just have to show that each one can't work." You might look at them and agree that those sound right. You can look at individual cases and accept that they don't work. But do you abandon the belief that somewhere, somehow, there is a way to make it work? As opposed to the AI saying that there is none?

As you said, it requires a leap of faith. And your answer is mechanistic interpretability. Which is exactly what happened in the end with the 4 color proof. A mechanistically interpretable proof was produced, and mechanistically interpreted by Coq. QED.

But for something as vague as a philosophy, I think it will take a long time to get to mechanistically interpretable demonstrations. And the thing which will do so is likely itself to be an AI...

Comment by RationalDino on Prosthetic Intelligence · 2023-11-13T05:33:52.067Z · LW · GW

You may be missing context on my reference to the 4 color problem. The original 1976 proof, by Appel and Haken, took over 1000 hours of computer time to check. A human lifetime is too short to verify that proof. This eliminates your first option. The Objectivist cannot, even in principle, check the proof. Life is too short.

Your first option is therefore, by hypothesis, not an option. You can believe the AI or not. But you can't actually check its reasoning.

The history of the 4 color problem proof shows this kind of debate. People argued for nearly 20 years about whether there might be a bug. Then an independent, and easier to check, computer proof came along in 1995. The debate mostly ended. More efficient computer generated proofs have since been created. The best that I'm aware of is 60,000 lines. In principle that would be verifiable by a human. But no human that I know of has actually bothered. Instead the proof was verified by the proof assistant Coq. And, today, most mathematicians trust Coq over any human.

We have literally come full circle on the 4 color problem. We started by asking whether we can trust a computer if a human can't check it. And now we accept that a computer can be more trustworthy than a human!

However it took a long time to get the proof down to such a manageable size. And it took a long time to get a computer program that is so trustworthy that most believe it over themselves.

And so the key epistemological challenge. What would it take for you to trust an AI's reasoning over your own beliefs when you're unable to actually verify the AI's reasoning?

Comment by RationalDino on Prosthetic Intelligence · 2023-11-12T05:45:37.300Z · LW · GW

Concrete example.

Let's presuppose that you are an Objectivist. If you don't know about Objectivism, I'll just give some key facts.

  1. Objectivists place great value in rationality and intellectual integrity.
  2. Objectivists believe that they have a closed philosophy. Meaning that there is a circle of fundamental ideas set out by Ayn Rand that will never change, though the consequences of those ideas certainly are not obvious and still need to be worked out.
  3. Objectivists believe that there is a single objective morality that can be achieved from Ayn Rand's ideas if we only figure out the details well enough.

Now suppose that an Objectivist used your system. And the AIs came to the conclusion that there is no single objective morality obtainable by Ayn Rand's ideas. But the conclusion required a long enumeration of different possible resolutions, only to find a problem in each one. With the enumeration, like the proof of the 4-color problem, being too long for any human to read.

What should the hypothetical Objectivist do upon obtaining the bad news? Abandon the idea of an absolute morality? Reject the result obtained by the intelligent AI? Ignore the contradiction?

Now I don't know your epistemology. There might be no such possible conflict for you. I doubt there is for me. But in the abstract, this is something that really could happen to someone who thinks of themselves as truly rational.

Comment by RationalDino on [deleted post] 2023-11-10T16:19:14.087Z

What you want sounds like Próspera. It is too early to say how that will work out.

They took some inspiration from Singapore. When Singapore became independent in 1965, it was a poverty-stricken third world place. It now has a better GDP/capita than countries like the USA. And also did things like come up with the best way of teaching math to elementary school students.

But Singapore is only libertarian in some ways. They are also a dictatorship that does not believe in, for instance, free speech. Their point is that when you cram immigrants from many cultures together, you'll get problems if you don't limit how much one group is allowed to offend another. I don't like it, but also don't have evidence that they are wrong.

And finally, most utopian experiments don't work out very well. See A Libertarian Walks Into a Bear for an amusing example.

Comment by RationalDino on The case for "Generous Tit for Tat" as the ultimate game theory strategy · 2023-11-09T23:58:15.345Z · LW · GW

As a life strategy I would recommend something I call "tit for tat with forgiveness and the option of disengaging".

Most of the time do tit for tat.

When we seem to be in a negative feedback loop, we try to reset with forgiveness.

When we decide that a particular person is not worth having in our life, we walk them out of our life in the most efficient way possible. If this requires giving them a generous settlement in a conflict, that's generally better than continuing with the conflict to try for a more even settlement.

The first two are things most social animals are adapted to do. The last is possible for us because we live in societies that are large enough for us to never interact with people we don't like. Unfortunately our emotions are pretty well adapted to life in groups below Dunbar's number. So the decision to disengage efficiently takes work.

Comment by RationalDino on Prosthetic Intelligence · 2023-11-08T20:13:17.297Z · LW · GW

Honest question about a hypothetical.

How would you respond if you set this up, and then your personal GPT concluded, from your epistemology and the information available to it, that your epistemology is fundamentally flawed and you should adopt a different one? Suppose further that when it tried to explain it to you, everything that it said made sense but you could not follow the full argument.

What should happen then? Should you no longer be the center of the prosthetic enabled "you" that has gone beyond your comprehension? Should the prosthetics do their thing, with the goal of supplying you with infinite entertainment instead of merely amplifying you? Should the prosthetics continue to be bound by the limitations of your mind? (Not necessarily crazy if you're afraid of another advanced AI hacking your agents to subvert them.)

Obviously ChatGPT does not offer sufficient capabilities that this should happen. But if your future continues to the point where the agents augmenting your capabilities have AGI, this type of challenge will arise.

Comment by RationalDino on When and why should you use the Kelly criterion? · 2023-11-07T21:47:25.594Z · LW · GW

The reason why variance matters is that high variance increases your odds of going broke. In reality, gamblers don't simply get to reinvest all of their money. They have to take money out for expenses. That process means that you can go broke in the short run, despite having a great long-term strategy.

Therefore instead of just looking at long-term returns you should also look at things like, "What are my returns after 100 trials if I'm unlucky enough to be at the 20th percentile?" There are a number of ways to calculate that. The simplest is to say that if p is your probability of winning, the expected number of times you'll win is 100p. The variance in a single trial is p(1-p). And therefore the variance of 100 trials is 100p(1-p). Your standard deviation in wins is the square root, or 10sqrt(p(1-p)). From the central limit theorem, at the 20th percentile you'll therefore win roughly 100p - 8.5sqrt(p(1-p)) times. Divide this by 100 to get the proportion q that you won. Your ideal strategy on this metric will be Kelly with p replaced by that q. This will always be less than Kelly. Then you can apply that to figure out what rate of return you'd be worrying about if you were that unlucky.
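Here is a rough sketch of that calculation, assuming even-money bets and using a z-score of about 0.85 for the 20th percentile; the function name and defaults are just illustrative:

```python
from math import sqrt

def pessimistic_kelly(p, trials=100, percentile_z=0.85, payout=1.0):
    """Kelly fraction computed from a pessimistic (e.g. 20th percentile) win rate.

    Replace the true win probability p by the win rate q you would see at the
    chosen lower percentile after `trials` bets, then plug q into the usual
    Kelly formula f = (payout * q - (1 - q)) / payout.
    """
    expected_wins = trials * p
    std_wins = sqrt(trials * p * (1 - p))
    q = (expected_wins - percentile_z * std_wins) / trials
    return max(0.0, (payout * q - (1 - q)) / payout)

p = 0.55
print(2 * p - 1)             # full Kelly for an even-money bet: 0.10
print(pessimistic_kelly(p))  # much smaller fraction at 20th-percentile luck
```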

Any individual gambler should play around with these numbers. Base it on your bankroll, what you're comfortable with losing, how frequent and risky your bets are, and so on. It takes work to figure out your risk profile. Most will decide on something less than Kelly.

Of course if your risk profile is dominated by the pleasure of the adrenaline from knowing that you could go broke, then you might think differently. But professional gamblers who think that way generally don't remain professional gamblers over the long haul.

Comment by RationalDino on When and why should you use the Kelly criterion? · 2023-11-07T20:55:36.811Z · LW · GW

I'm sorry that you are confused. I promise that I really do understand the math.

When you repeatedly add random variables, the mean, median, and mode of the sum have a close relationship. The sum is approximately normal, and the normal distribution has identical mean, median, and mode. Therefore all three are the same.

What makes Kelly tick is that the log of net worth gives you repeated addition. So with high likelihood the log of your net worth is near the mean of an approximately normal distribution, and both median and mode are very close to that. But your net worth is the exponential of that log. That creates an asymmetry that moves the mean away from the median and mode. With high probability, you will do worse than the mean.

The comment about variance is separate. You actually have to work out the distribution of returns after, say, 100 trials, and then calculate a variance from that. It turns out that for any finite n, variance monotonically increases as you increase the proportion that you bet, from a variance of 0 if you bet nothing, up to being dominated by the small chance of winning every bet if you bet everything.
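If a demonstration helps, here is a small Monte Carlo sketch (parameters invented) that shows both effects at once: the mean of final wealth pulling away from the median, and the variance growing as the fraction bet increases:

```python
import random
from statistics import mean, median, pvariance

def simulate_wealth(fraction, p=0.6, rounds=100, runs=10000, seed=0):
    """Final wealth (starting at 1) after betting a fixed fraction on even-money bets."""
    rng = random.Random(seed)
    finals = []
    for _ in range(runs):
        wealth = 1.0
        for _ in range(rounds):
            wealth *= (1 + fraction) if rng.random() < p else (1 - fraction)
        finals.append(wealth)
    return finals

for f in [0.0, 0.1, 0.2, 0.4, 0.8]:   # Kelly for p=0.6 at even money is f=0.2
    w = simulate_wealth(f)
    print(f, mean(w), median(w), pvariance(w))
```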

Comment by RationalDino on When and why should you use the Kelly criterion? · 2023-11-06T11:55:53.638Z · LW · GW

Dang it. I meant to write that as,

If you bet more than Kelly, you'll experience lower returns on average and higher variance.

That said, both median and mode are valid averages, and Kelly wins both.

Comment by RationalDino on AI as Super-Demagogue · 2023-11-06T11:53:26.556Z · LW · GW

I believe that AI safety is a real issue. There are both near term and long term issues.

I believe that the version of AI safety that will get traction is regulatory capture.

I believe that the AI safety community is too focused on what fascinating technology can do, and not enough on the human part of the equation.

On Andrew Ng, his point is that he doesn't see how exactly AI is realistically going to kill all of us. Without a concrete argument that is worth responding to, what can he really say? I disagree with him on this, I do think that there are realistic scenarios to worry about. But I do agree with him on what is happening politically with AI safety.

Comment by RationalDino on AI as Super-Demagogue · 2023-11-06T11:39:38.037Z · LW · GW

I am using Trump merely as an illustrative example of techniques.

My more immediate concern is actually the ability of China to shape US opinion through TikTok.

Comment by RationalDino on When and why should you use the Kelly criterion? · 2023-11-06T03:40:06.481Z · LW · GW

The simple reason to use Kelly is this.

With probability 1, any other strategy will lose to Kelly in the long run.

This can be shown by applying the strong law of large numbers to the random walk that is the log of your net worth.

Now what about a finite game? It takes surprisingly few rounds before Kelly, with median performance, pulls ahead of alternate strategies. It takes rather more rounds before, say, you have a 90% chance of beating another strategy. So in the short to medium run, Kelly offers the top of a plateau for median returns. You can deviate fairly far from it and still do well on average.

So should you still bet Kelly? Well, if you bet less than Kelly, you'll experience lower average returns and lower variance. If you bet more than Kelly, you'll experience lower average returns and higher variance. Variance in the real world tends to translate into, "I don't have enough left over for expenses and I'm broke." Reducing variance is generally good. That's why people buy insurance. It is a losing money bet that reduces variance. (And in a complex portfolio, can increase expected returns!) So it makes sense to bet something less than Kelly in practice.
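As a sketch of that plateau (with invented numbers), the expected growth rate of log wealth per bet is nearly flat around the Kelly fraction, stays positive below it, and goes negative well above it:

```python
from math import log

def expected_log_growth(fraction, p=0.6, payout=1.0):
    """Expected change in log wealth per bet at a fixed betting fraction."""
    return p * log(1 + payout * fraction) + (1 - p) * log(1 - fraction)

kelly = (0.6 * 1.0 - 0.4) / 1.0   # f* = 0.2 for this made-up bet
for f in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]:
    print(f, round(expected_log_growth(f), 4))
```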

There is a second reason to bet less than Kelly in practice. When we're betting, we estimate the odds. We're betting against someone else who is also estimating the odds. The average of many people betting is usually more accurate than any individual bettor. We believe that we're well-informed and have a better estimate than others. But we're still likely biased towards overconfidence in our chances. That means that betting Kelly based on what we think the odds are means we're likely betting too much.

Ideally you would have enough betting history tracked to draw a regression line to figure out the true odds based on the combination of what you think and what the market thinks. But most of us don't have enough carefully tracked history to accurately make such judgments.
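A toy version of that adjustment, with the weight as a placeholder that would ideally be fit from your tracked betting history:

```python
def blended_probability(my_estimate, market_probability, weight_on_me=0.3):
    """Shrink my estimated win probability toward the market's implied probability."""
    return weight_on_me * my_estimate + (1 - weight_on_me) * market_probability

print(blended_probability(0.60, 0.52))  # 0.544, then size the bet from this
```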

Comment by RationalDino on AI as Super-Demagogue · 2023-11-06T03:14:37.036Z · LW · GW

Social media has proven more than capable of creating effective social contexts for persuading people.

LLMs are perfectly capable of operating in these social contexts. Particularly if they have (as in the case of TikTok and China) the support of the owner of the site.

Do you have specific cause to believe that LLMs will fail to persuade in these social contexts?

Comment by RationalDino on AI as Super-Demagogue · 2023-11-06T00:20:44.066Z · LW · GW

You make a good point that Cruz et al may have different beliefs than they portray publicly. But if so, then Cruz must have had a good acting coach in late 2018.

About 70 million followers, you're right to call me out for overstating it. But according to polling, that's how many people believe that the 2020 election was stolen at the ballot box. So far he has lost dozens of election challenge cases, key members of his inner circle have admitted in court that there was no evidence, and he's facing multiple sets of felony charges in multiple jurisdictions.

I think it is reasonable to call someone a devoted follower if they continue to accept his version in the face of such evidence.

On AI safety, we can mean two different things.

My concern is with the things that are likely to actually happen. Hence my focus on what is supported by tech companies, and what politicians are likely to listen to. That part I'm sure is mostly regulatory capture.

I did acknowledge, though not loudly enough, that there are people working in AI safety who truly believe in what they are doing. But to the extent that they don't align with the vested interests, what they do will not matter. To the extent that they do align, their motivations don't matter as much as the motivations of the vested interests. And in the meantime, I wish that they would investigate questions that I consider important.

For example, how easily can an LLM hypnotize people? Given the ability to put up images and play AI-generated videos, can it hypnotize people then? Can it implant posthypnotic suggestions? In other words, how easily can an existing social network, with existing technology, be used for mass hypnosis?


Update. I forgot to mention that Andrew Ng's accomplishments in AI are quite impressive. Cofounder of Google Brain, taught machine learning to Sam Altman, and so on. I might wind up disagreeing with some of his positions, but I'll generally default to trusting his thinking over mine on anything related to machine learning.

If you're willing to pay, you can read a stronger version of his thoughts in Google Brain founder says big tech is lying about AI extinction danger.

Comment by RationalDino on We are already in a persuasion-transformed world and must take precautions · 2023-11-05T21:25:04.029Z · LW · GW

With all due respect, I see no evidence that elites are harder to fool now than they were in the past. For concrete examples, look at the ones who flipped to Trump over several years. The Corruption of Lindsey Graham gives an especially clear portrayal about how one elite went from condemning Trump to becoming a die-hard supporter.

I dislike a lot about Mr. Graham. But there is no question that he was smart and well aware of how authoritarians gain power. He saw the risk posed by Trump very clearly. However he knew himself to be smart, and thought he could ride the tiger. Instead, his mind got eaten.

Moving on, I believe that you are underestimating the mass psychology stuff. Remember, I'm suggesting it as a floor to what could already be done. New capabilities and discoveries allow us to do more. But what should already be possible is scary enough.

However that is a big topic. I went into it in AI as Super-Demagogue which you will hopefully find interesting.

Comment by RationalDino on We are already in a persuasion-transformed world and must take precautions · 2023-11-04T20:07:32.097Z · LW · GW

I disagree that people who do ML daily would be in a good position to judge the risks here. The key issue is not the capabilities of AI, but rather the level of vulnerability of the brain. Since they don't study that, they can't judge it.

It is like how scientists proved to be terrible at unmasking charlatans like Uri Geller. Nature doesn't actively try to fool us, charlatans do. The people with actual relevant expertise were people who studied how people can be fooled. Which meant magicians like James Randi. Similarly, to judge this risk, I think you should look at how dictators, cult leaders, and MLM companies operate.

A century ago Benito Mussolini figured out how to use mass media to control the minds of a mass audience. He used this to generate a mass following, and become dictator of Italy. The same vulnerabilities exploited the same way have become a staple for demagogues and would-be dictators ever since. But human brains haven't been updated. And so Donald Trump has managed to use the same basic rootkit to amass about 70 million devoted followers. As we near the end of 2023, he still has a chance of successfully overthrowing our democracy if he can avoid jail.

Your thinking about zero days is a demonstration of how thinking in terms of computers can mislead you. What matters for an attack is the availability of vulnerable potential victims. In computers there is a correlation between novelty and availability. Before anyone knows about a vulnerability, everyone is available for your attack. Then it is discovered, a patch is created, and availability goes down as people update. But humans don't simply upgrade to brain 2.1.8 to fix the vulnerabilities found in brain 2.1.7. People can be brainwashed today by the same techniques that the CIA was studying when they funded the Reverend Sun Moon back in the 1960s.

You do make an excellent point about the difficulty of building something that can work at scale in the real world. Which is why I focused my scenario on techniques that have worked, repeatedly, at scale. We know that they can work, because they have worked. We see it in operation whenever we study the propaganda techniques used by dictators like Putin.

Given these examples, the question stops being an abstract, "Can AI find vulnerabilities by which we can be exploited?" It then switches to, "Is AI capable of executing effective variants on the strategies that dictators, cult leaders, and MLM founders have already shown work at scale against human minds?"

I think that the answer is a pretty clear yes. Properly directed, ChatGPT should be more than capable of doing this. We then have the hallmark of a promising technology: we know that nothing fundamentally new is required. It is just a question of execution.

Comment by RationalDino on We are already in a persuasion-transformed world and must take precautions · 2023-11-04T18:18:48.752Z · LW · GW

Sorry, but you're overthinking what's required. Simply being able to reliably use existing techniques is more than enough to hack the minds of large groups of people, no complex new research needed.

Here is a concrete example.

First, if you want someone's attention, just make them feel listened to. ELIZA could already successfully do this in the 1970s, ChatGPT is better. The result is what therapists call transference, and causes the person to wish to please the AI.

Now the AI can use the same basic toolkit mastered by demagogues throughout history. Use simple and repetitive language to hit emotional buttons over and over again. Try to get followers to form a social group. Switch positions every so often. Those that pay insufficient attention will have the painful experience of being attacked by their friends, and it forces everyone to pay more attention.

All of this is known and effective. What AI brings is that it can use individualized techniques, at scale, to suck people into many target groups. And once they are in those groups, it can use the demagogue's techniques to erase differences and get them aligned into ever bigger groups.

The result is that, as Sam Altman predicted, LLMs will prove superhumanly persuasive. They can beat the demagogues at their own game by seeding mass persuasion techniques with individualized attention at scale.

Do you think that this isn't going to happen? Social media accidentally did a lot of this at scale. Now it is just a question of weaponizing something like TikTok.

Comment by RationalDino on Doubt Certainty · 2023-11-04T16:16:12.035Z · LW · GW

You would be amazed at what lengths many go to never learn.

Ever heard the saying (variously attributed) that A level people want to be around other A level people while B level people want to be around C level people?

A lot of those B level people are ones who stop getting better because they believe themselves to already be good. And they would prefer to surround themselves with people who confirm that belief than risk challenging themselves.

Furthermore, it is easier to maintain illusions of superior competency when it isn't competitive. It was a lot easier for me to hide from ways in which I was a bad husband than to hide from the fact that I was losing at chess. There isn't really an objective measure of being a poor husband. And continuing doing what I already did was constant evidence to me that I was a good husband. So my illusions continued until some of the same problems showed up in my next relationship.

Comment by RationalDino on Doubt Certainty · 2023-11-03T17:26:27.738Z · LW · GW

One example is the kind of person who began to learn something, worked at it, and became good at it compared to their friends. Without context for what "good" really means in the outside world, it is easy to believe that you are good.

In my blog I gave the example of myself as a teenager in chess. I could usually beat everyone in my school except my brother, so I felt like a good player.

But my competitive rating would have probably been about 1200-1400. I still remember my first encounter with a good chess player. A master was sitting in public, playing simultaneously against everyone who wanted to play him. I sat down, promptly lost, played again and lost again. He gave me some advice beginning with, "Weak players like you should focus on..."

I took offense, despite having just received evidence that he knew what he was talking about when it came to chess.

While I learned better, I've now been on the other side of this interaction in a number of areas. Including ping-pong and programming. Which suggests that my younger self was hardly unique in my overestimation of my abilities.

Comment by RationalDino on Doubt Certainty · 2023-11-03T17:15:12.534Z · LW · GW

I agree that when you feel sure of your reasoning, you are generally more likely right than when you aren't sure.

But when you cross into feeling certain, you should suspect cognitive bias. And when you encounter other people who are certain, you should question whether they might also have cognitive bias. Particularly when they are certain on topics that other smart and educated people disagree with them on.

This is not a 100% rule. But I've found it a useful guideline.

Comment by RationalDino on Do you believe "E=mc^2" is a correct and/or useful equation, and, whether yes or no, precisely what are your reasons for holding this belief (with such a degree of confidence)? · 2023-11-01T21:56:52.377Z · LW · GW

Add me to those who have been through the physics demonstration. So I'll give it odds of, let's say, 99.9999%.

But I also don't like how most physicists think about this. In The Feynman Lectures on Physics, Richard Feynman taught that energy and mass are the same thing, and c^2 is simply the conversion factor. But most physicists distinguish between rest mass and relativistic mass. And so they think in terms of converting between mass and energy, and not simply between different forms of energy, one of which is recognized to be mass.

But let's take a hydrogen atom. A hydrogen atom is an electron and a proton. But the mass of a hydrogen atom is less than the mass of an electron plus the mass of a proton. It is less by (to within measurement error) the mass of the energy required to split a hydrogen atom apart. I find this easier to think about within Feynman's formulation than within the way most physicists think about it.
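As a back-of-envelope check of the hydrogen example (standard constant values, rounded):

```python
EV_IN_JOULES = 1.602176634e-19
C = 2.99792458e8                     # speed of light, m/s
ELECTRON_MASS = 9.1093837e-31        # kg
PROTON_MASS = 1.67262192e-27         # kg

binding_energy = 13.6 * EV_IN_JOULES       # joules to ionize hydrogen
mass_deficit = binding_energy / C**2       # ~2.4e-35 kg
print(mass_deficit)
print(mass_deficit / (ELECTRON_MASS + PROTON_MASS))   # ~1.4e-8 of the total mass
```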