Nate Showell's Shortform
post by Nate Showell · 2023-03-11T06:09:43.604Z · LW · GW · 19 comments
Comments sorted by top scores.
comment by Nate Showell · 2023-12-11T05:12:12.583Z · LW(p) · GW(p)
I've come to believe (~65%) that Twitter is anti-informative: that it makes its users' predictive calibration worse on average. On Manifold, I frequently adopt a strategy of betting against Twitter hype (e.g., on the LK-99 market), and this strategy has been profitable for me.
↑ comment by Viliam · 2023-12-11T13:07:06.836Z · LW(p) · GW(p)
Is Twitter literally worse than flipping a coin, or just worse than... someone following a non-Twitter crowd?
↑ comment by Nate Showell · 2023-12-16T03:59:15.416Z · LW(p) · GW(p)
I was comparing it to base-rate forecasting. Twitter leads people to over-update on evidence that isn't actually very strong, making their predictions worse by moving their probabilities too far from the base rates.
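A minimal sketch of the effect, with made-up numbers: suppose the base rate of some sensational event is 3%, and a circulating piece of evidence has a true likelihood ratio of about 2, but the hype treats it as if the ratio were 20. The over-updater's probability ends up far from the base rate:

```python
# Toy illustration with hypothetical numbers: over-updating means treating
# weak evidence as if it had a much larger likelihood ratio than it does.

def update(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update on the odds scale, returned as a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

base_rate = 0.03   # hypothetical base rate of a sensational event
weak_lr = 2.0      # what the evidence actually supports
hyped_lr = 20.0    # the strength the hype implicitly assigns it

print(update(base_rate, weak_lr))   # ~0.058: a modest, calibrated update
print(update(base_rate, hyped_lr))  # ~0.382: dragged far from the base rate
```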
↑ comment by Dagon · 2023-12-11T19:06:44.549Z · LW(p) · GW(p)
For hype topics, this is almost certainly true. For less-trendy ideas, probably less so. I suspect this isn't specific to Twitter, but applies to all large-scale publishing and communication mechanisms. People are mostly amplifiers rather than evaluators.
comment by Nate Showell · 2023-06-25T19:12:57.412Z · LW(p) · GW(p)
I find myself betting "no" on Manifold a lot more than I bet "yes," and it's tended to be a profitable strategy. It's common for questions on Manifold to have the form "Will [sensational event] happen by [date]?" Prices in these markets have a systematic tendency to be too high. I'm not sure how much of this bias comes from Manifold users overestimating the probabilities of sensational, low-probability events, and how much is an artifact of markets being initialized at 50%.
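A minimal sketch, with hypothetical numbers, of why "no" is profitable when such a market's price sits well above the event's realistic probability:

```python
# Toy expected-value calculation (all numbers hypothetical) for buying NO
# when a market's price is above your estimate of the true probability.

def ev_no_bet(market_prob: float, true_prob: float, stake: float = 1.0) -> float:
    """Expected profit from staking on NO at the current market price.
    A NO share costs (1 - market_prob) and pays 1 if the event doesn't happen."""
    cost_per_share = 1 - market_prob
    shares = stake / cost_per_share
    expected_payout = shares * (1 - true_prob)
    return expected_payout - stake

# A sensational market sitting near its 50% starting point, when the
# base rate suggests something closer to 10%:
print(ev_no_bet(market_prob=0.50, true_prob=0.10))  # +0.80 per unit staked
print(ev_no_bet(market_prob=0.12, true_prob=0.10))  # ~+0.02: edge mostly gone
```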
comment by Nate Showell · 2024-01-08T00:09:38.662Z · LW(p) · GW(p)
Is trade ever fully causal? Ordinary trade can be modeled as acausal trade [LW · GW] with the "no communication" condition relaxed. Even in a scenario as seemingly causal as using a vending machine, trade only occurs if the buyer believes that the vending machine will actually dispense its goods and not just take the buyer's money. Similarly, the vending machine owner's decision to set up the machine was informed by predictions about whether or not people would buy from it. The only kind of trade that seems like it might be fully causal is a self-executing contract tied to an external trigger, for which both parties have seen the source code and verified that the other party has enough resources to make the agreed-upon trade. Would a contract like that still have some acausal element anyway?
↑ comment by Vladimir_Nesov · 2024-01-08T05:50:04.221Z · LW(p) · GW(p)
Physical causality is naturally occurring acausal dependence (between physically interacting things), similarly to how a physical calculator captures something about abstract arithmetic. So the word "acausal" is unfortunate: it's the more general thing, and shouldn't be defined by exclusion of the less general special case of immense practical importance. Acausal dependence is something like logical/computational causality, and acausal trade is trade that happens within the fabric of acausal dependencies; it's how an agent existing within an acausal ontology might think about regular trade. But since a clearer formulation remains elusive, fixing the terminology seems premature.
comment by Nate Showell · 2024-07-06T20:19:18.225Z · LW(p) · GW(p)
Bug report: when I'm writing an in-line comment on a quoted block of a post, and then select text within my comment to add formatting, the formatting menu is displayed underneath the box where I'm writing the comment. For example, this prevents me from inserting links into in-line comments.
comment by Nate Showell · 2024-02-17T20:36:06.329Z · LW(p) · GW(p)
An edgy writing style is an epistemic red flag. A writing style designed to provoke a strong, usually negative, emotional response from the reader can be used to disguise the thinness of the substance behind the author's arguments. Instead of carefully considering and evaluating the author's arguments, the reader gets distracted by the disruption to their emotional state and reacts to the text in a way that more closely resembles a trauma response, with all the negative effects on their reasoning capabilities that such a response entails. Some examples of authors who do this: Friedrich Nietzsche, Grant Morrison, and The Last Psychiatrist.
↑ comment by Carl Feynman (carl-feynman) · 2024-02-18T01:14:42.130Z · LW(p) · GW(p)
Allow me to quote from Lem’s novel “Golem XIV”, which is about a superhuman AI named Golem:
Being devoid of the affective centers fundamentally characteristic of man, and therefore having no proper emotional life, Golem is incapable of displaying feelings spontaneously. It can, to be sure, imitate any emotional states it chooses—not for the sake of histrionics but, as it says itself, because simulations of feelings facilitate the formation of utterances that are understood with maximum accuracy. Golem uses this device, putting it on an "anthropocentric level," as it were, to make the best contact with us.
May not this method also be employed by human writers?
↑ comment by Ben Pace (Benito) · 2024-02-17T20:43:20.029Z · LW(p) · GW(p)
One thing to do here is to re-write their arguments in your own (ideally more neutral) language, and see whether it still seems as strong.
↑ comment by StartAtTheEnd · 2024-02-18T02:19:36.089Z · LW(p) · GW(p)
Edginess is a natural tendency toward taunting: it's meant to provoke the reader into attacking the author, who is frustrated at the lack of engagement. The more sure you are of yourself, the more provocative you tend to be, especially if you're eager to put your ideas to the test.
A thing which often follows edginess/confidence, and the two may even be causes of each other, is mania. Even hypomanic moods have a strong effect on one's behaviour. I believe this is what happened to Kanye West. If you read Nietzsche's Zarathustra, you might find that it seems to contain a lot of mood swings, and as far as I know it was written in just 10 days (and periods of high productivity are indeed characteristic of mania).
I think it makes for great reading, and while such people have a higher risk of being wrong, I also think they have more interesting ideas. But I'll admit that I'm a little biased on this topic, as I've made myself a little edgy (confidence has a positive effect on mood).
comment by Nate Showell · 2023-11-19T07:20:30.641Z · LW(p) · GW(p)
What do other people here think of quantum Bayesianism as an interpretation of quantum mechanics? I've only just started reading about it, but it seems promising to me. It lets you treat probabilities in quantum mechanics and probabilities in Bayesian statistics as having the same ontological status: both are properties of beliefs, whereas in some other interpretations of quantum mechanics, probabilities are properties of an external system. This match allows quantum mechanics and Bayesian statistics to be unified into one overarching approach, without requiring you to postulate additional entities like unobserved Everett branches.
↑ comment by RHollerith (rhollerith_dot_com) · 2023-12-11T05:45:11.646Z · LW(p) · GW(p)
My probability that quantum Bayesianism is onto something is 0.05. It went down a lot when I read Sean Carroll's book Something Deeply Hidden. 0.05 is about as extreme as my probabilities get for the parts of quantum physics that are not settled science, since I'm not an expert.
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-12-11T14:25:48.042Z · LW(p) · GW(p)
Could you summarize what Carroll says that made you update so strongly against it?
↑ comment by RHollerith (rhollerith_dot_com) · 2023-12-11T15:51:06.183Z · LW(p) · GW(p)
My memory is not that good. I do recall that it is in the chapter "Other ways: alternatives to many-worlds".
comment by Nate Showell · 2023-03-11T06:09:43.797Z · LW(p) · GW(p)
Simulacrum level 4 is more honest than level 3. Someone who speaks at level 4 explicitly asks himself "what statement will win me social approval?" Someone who speaks at level 3 asks herself the same question, but hides from herself the fact that she asked it.
↑ comment by Dagon · 2023-03-12T02:00:47.751Z · LW(p) · GW(p)
Simulacra levels aren't a particularly good model for some interactions/topics, because they blend together in idiosyncratic ways. The model leaves it unspecified whether the behavior is intentional, and whether a speaker is hiding anything from themselves or cynically understands the levels and uses whichever one they think is most effective at the time.
↑ comment by the gears to ascension (lahwran) · 2023-03-11T07:12:47.360Z · LW(p) · GW(p)
I don't think so. Simulacrum 4 has trouble making coherent reference to a physical causal trajectory. Simulacra 3 and 1 are in fact compatible in some circumstances; not so with 4.