Posts
Should you "trust literatures, not papers"?
I replicated the literature on meritocratic promotion in China, and found that the evidence is not robust.
https://twitter.com/michael_wiebe/status/1750572525439062384
Do vaccinated children have higher income as adults?
I replicate a paper on the 1963 measles vaccine, and find that it is unable to answer the question.
https://twitter.com/michael_wiebe/status/1750197740603367689
New replication: I find that the results in Moretti (AER 2021) are caused by coding errors. The paper studies agglomeration effects for innovation (do bigger cities cause technological progress?), but the results supporting a causal interpretation don't hold up.
https://twitter.com/michael_wiebe/status/1749462957132759489
Comments

What was the effect of reservists joining the protests? This says: "Some 10,000 military reservists were so upset, they pledged to stop showing up for duty." Does that mean they were actively 'on strike' from their duties? It looks like they're now doing grassroots support (distributing aid).
Yeah, I do reanalysis of observational studies rather than rerunning experiments.
Do you have any specific papers in mind?
But isn't it problematic to start the analysis at "superhuman AGI exists"? Then we need to make assumptions about how that AGI came into being. What are those assumptions, and how robust are they?
Why start the analysis at superhuman AGI? Why not solve the problem of aligning AI for the entire trajectory from current AI to superhuman AGI?
Also came here to say that 'latter' and 'former' are mixed up.
In particular, we should be interested in how long it will take for AGIs to proceed from human-level intelligence to superintelligence, which we’ll call the takeoff period.
Why is this the right framing? Why not focus on the duration between 50% human-level and superintelligence? (Or p% human-level for general p.)
So it seems very likely to me that eventually we will be able to create AIs that can generalise well enough to produce human-level performance on a wide range of tasks, including abstract low-data tasks like running a company.
Notice how unobjectionable this claim is: it's consistent with AGI being developed in a million years.
If you're loss averse, the expected value could easily be negative: cost(voting for wrong candidate) > benefit(voting for right candidate).
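A minimal sketch of that arithmetic (the symbols are mine, not from the original comment): let p be the probability your vote is decisive, q the probability you've picked the wrong candidate, B the benefit of a right vote, C the cost of a wrong one, and λ > 1 a loss-aversion weight on losses. Then

\[
\mathrm{EV} = p\bigl[(1-q)\,B - q\,\lambda\,C\bigr] < 0 \quad\text{whenever}\quad q\,\lambda\,C > (1-q)\,B,
\]

so with B = C and λ = 2, the expected value turns negative as soon as q > 1/3.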
I was astonished to find myself having ascended to the pantheon of those who have made major contributions to human knowledge
Is this your own evaluation of your work?
If the "tear apart the stars" prophecy just refers to Harry harvesting the stars for resources, then Voldemort looks really stupid for misinterpreting it.
Now Hermione learns Patronus 2.0 and destroys Azkaban. So both the Boy-Who-Lived and the Girl-Who-Revived can kill dementors. Sounds like "surviving/defeating Voldemort" is a plausible cover for explaining the origin of the ability to destroy dementors.
Isn't Harry saying this to Draco after Draco has been obliviated? Draco has no idea what Harry's talking about.
Shouldn't Harry have fallen to his knees twenty seconds earlier, if he originally heard/saw the explosion via Voldie-simulcast?
"Harry, let me verify that your Time-Turner hasn't been used," said Professor McGonagall.
"LOOK OVER THERE!" Harry screamed, already sprinting for the door.
Why are Hermione's robes red? Does Voldie want her to be Gryffindor?
What about Harry changing Voldie's understanding of death?
Important: QQ's earlier Parseltongue-spoken plans for Harry to become ruler of the world were said before he heard the 'tear apart the stars' prophecy. So it appears V changed his mind after hearing the prophecy.
Ten hours after the deadline.
If knowledge of the True Patronus can prevent people from being able to cast Patronus 1.0, is there a way for knowledge of the True Patronus to harm Voldemort?
Does Quirrell have the Resurrection Stone? If so, that's 3/3 Deathly Hallows (he already has the Invisibility Cloak and the Elder Wand).
When Harry first entered the room wearing his cloak, he looked into the mirror and saw only the reflection. Now he is again looking into the mirror while cloaked.
Dumbledore is in the mirror. Quirrell, from Ch. 104:
I saw the Headmaster missing... but for all my magic can tell me... he could be in another... realm of existence
Snape kills Dumbledore?
But unless you bought Draco Malfoy's latest theory that Professor Sprout had been assigning and grading less homework around the time of Hermione being framed for attempted murder, thereby proving that Professor Sprout had been spending her time setting it up, the truth remained unfound.
So Imperiused-Sprout is Hat and Cloak?
He went with Lesath, not Cedric.
See also Gwern's belief tags.
We are nearing the end of the school year, after all.
Edit for clarity: referring to the curse on the Defense Against the Dark Arts teaching position.
In the future, poverty reduction EAs might also focus on economic, political, or research-infrastructure changes that might achieve poverty reduction, global health, and educational improvements more indirectly, as when Chinese economic reforms lifted hundreds of millions out of poverty.
I'd like to see more discussion of economic growth and effective altruism. Something that can lift hundreds of millions of people out of poverty is something that should definitely be investigated. (See also Lant Pritchett's distinction between linear and transformative philanthropy.)
Regarding Harry's lack of surprise: isn't it odd that he puts no effort into wearing the expression of someone who has no idea that Hermione's body is missing?
What's "Harry James Potter-Evans-Verres" an anagram for?
Is there a page that lists all of the unresolved hints/clues in MoR? For example, Remembrall-like-a-sun, Bacon's diary, etc.
He had vanished from where he was standing over the Weasley twins and come into existence beside Harry; George Weasley had discontinuously teleported from where he was sitting to be kneeling next to his brother's side
What's going on here? Is it just that Harry isn't paying attention to what's happening around him?
This is interesting. From the end of Ch. 89:
Unseen by anyone, the Defense Professor's lips curved up in a thin smile. Despite its little ups and downs, on the whole this had been a surprisingly good day.
From Ch. 46, after Harry destroys the dementor:
I must admit, Mr. Potter, that although it has had its ups and downs, on the whole, this has been a surprisingly good day.
I agree with all of this.
Do you think that the standards given in the OP are too demanding? Not demanding enough?
Good point. Do you think the ideological Turing-capability requirement helps to mitigate this danger, and if so, how much does it help?
Academic papers are what gets published, not what's true. The difference is particularly pronounced for political topics.
Right, it's a necessary condition, not a sufficient one.
As a tool for combating privileged questions, what about consciously prioritizing which issues you spend time thinking about?
Both strategies might end up producing the same outcome. Define a "Certified Political Belief" as a belief which satisfies the above standards. In my own case, I don't actually have any strong political beliefs (>90% confidence) which I would claim are Certified (except maybe "liberal democracy is a good thing").
In fact, a good exercise would be to take your strongest political beliefs, actually write down which academic articles you've read that support your position, and then go do a quick test (with a third-party referee) to see whether you're ideologically Turing-capable. This sounds like a good way to get feedback to help you calibrate.
I interpreted the signals as "this woman is interesting," yet when I got to know those women, I was not actually interested in their personality. I put a lot of effort into fixing this miscalibration, and I think it was worth the effort.
Details?
Turing-capable?
What's a good term for "being able to pass an ideological Turing test"? (Being able to pass an ITT is related to being able to argue both sides of a debate, being able to accurately explain your opponent's position, being able to summarize the strongest counterargument to your position, etc.)
Following the original analogy, is there a term for "a machine that's able to pass a Turing test"? My googling didn't turn up anything. But if there were one ("a machine is called Turing-(blank) if it can pass a Turing test"), then it seems we could adapt it fairly easily to the ITT: someone is ideologically Turing-(blank) if they can pass an ITT.
Any suggestions to fill in the blank?
On a related topic, Pinker has a very useful discussion of the case for and against open discussion of dangerous (non-technological) ideas. (Mindkiller warning)
A person is said to exhibit rational irrationality when it is instrumentally rational for him to be epistemically irrational. An instrumentally rational person chooses the best strategies to achieve his goals. An epistemically irrational person ignores and evades evidence against his beliefs, holds his beliefs without evidence or with only weak evidence, has contradictions in his thinking, employs logical fallacies in belief formation, and exhibits characteristic epistemic vices such as closed-mindedness. Epistemically irrational political beliefs can reinforce one’s self-image; boost one’s self-esteem; make one feel noble, smart, superior, safe, or comfortable; and can help achieve conformity with the group and thus facilitate social acceptance. Thus, epistemic irrationality can be instrumentally rational.
If I falsely believe the road I am crossing is free of cars, I might die. So I have a strong incentive to form beliefs about the road in a rational way. However, if I falsely believe that import quotas are good for the economy, this has no directly harmful effects. (On the contrary, the belief can have significant instrumental value. It might make me feel patriotic; serve my xenophobia; serve as an outlet to rationalize, sublimate, or redirect racist attitudes; or help me pretend to have solidarity with union workers.) … Epistemic rationality is hard and takes self-discipline.
When it comes to politics, individuals have every incentive to indulge their irrational impulses. Demand for irrational beliefs is like demand for most other goods. The lower the cost, the more will be demanded. The cost to the typical voter of voting in epistemically irrational ways is nearly zero. The cost of overcoming bias and epistemic irrationality is high. The psychological benefit of this irrationality is significant. Thus, voters demand a high amount of epistemic irrationality.
Jason Brennan, The Ethics of Voting, pp. 173-74
I would recommend skipping the section on political correctness. I do think the first two sections give a good lesson on how a little reason can be a dangerous thing.
This article seems relevant: "Clever sillies: Why high IQ people tend to be deficient in common sense."
The author argues that high IQ people solve problems by using abstract reasoning instead of evolved common sense. Moreover, general intelligence is mainly useful for solving evolutionarily novel problems, and can actually be a hindrance for problems which were a regular part of the evolutionary environment (for example, social situations). Hence, when facing problems where humans have evolved behavioral responses, smart people who apply abstract reasoning and override common sense often end up doing silly things.