Oh also, I am no longer surprised to find out that someone has an eloquent, insightful online presence while also being perpetually obnoxious and maladjusted in real life. Turns out lots of people have both of those.
Many projects are left undone simply because people don't step up to do them. I had heard this a lot, but I now feel it more deeply.
A number of times this year, I sharply changed the mind of a trusted advisor by arguing with them, even though I thought they knew more and should be able to change my mind. It now seems marginally more valuable to argue with people and ask them to show their work.
My antipathy toward Twitter had waned, but then I asked people about it, and did some intentional browsing, and I am back to being as anti-Twitter as ever. Twitter is harming the minds of some of my smartest friends & allies, and they seem to be unable to fully realize this, presumably due to the addiction impairing their judgment.
I have become highly uncertain about public sentiment around AI progress. I have heard multiple conflicting claims about what the median American thinks, always asserted with conviction, but never by anyone anywhere near the median.
Is this different than logical validity? If not, how do they relate?
https://en.wikipedia.org/wiki/Validity_(logic)
https://www.lesswrong.com/tag/valid-argument
I could believe that adding the word "local" might help people communicate the concept more clearly, but I'm curious if it's doing anything else here.
there’s the ever-present, gnawing worry that haunts me, whispering that I might be fundamentally mistaken about something else.
I think negative visualization is useful for this. I made a list of implications for my beliefs & actions conditioning on the totally hypothetical case in which a particular political opinion of mine (no, I won't say which one) is wrong.
I noticed that I had some bucket errors along the lines of "I will have to admit to those nasty outgroup memers that I've been evil+dumb all along, and accept their righteous judgment!" Once I had written it explicitly, the correction pretty much wrote itself: good-vs-evil is oversimplified at best, being wrong doesn't make you dumb, and hateful memers deserve no one's attention, regardless of what faction anyone is in.
I liked the length, readability, and importance; happy to spend my reading budget on this.
Here are some thoughts I had:
- You said, "the belief that persistent good faith disagreements are common would seem to be in bad faith!" and this tripped my alarm for gratuitous meta-leveling. Is that point essential to your thesis? Unless I read too quickly, it seems like you gave a bunch of reasons why that belief is wrong, then pointed out that it would seem to be in bad faith, but then didn't really flesh out what the agenda/angle was. Was that intentional? Am I just stumbling over a joke I don't understand?
- I would be interested to read a whole post about how full-contact psychoanalysis can go well or poorly. I've seen it go well, but usually in ways that are noticeably bounded, so I think I'll challenge the word "full" here. You meant this as an idealization/limiting case, right?
- I feel like there is an implicit call to action here, which may not be right for everyone. I anticipate early adopters of Assuming Bad Faith to pay noticeable extra costs, and possibly also late adopters. I don't have anything in particular in mind, just Chesterton's Fence and hidden order type heuristics, plus some experience seeing intentional norm-setting go awry.
Important but frustrating rationalist skill: getting halfway through a comment and then deleting it because you realized it was wrong
What are your thoughts about this objection to evals? Have you already addressed it somewhere?
if we can squash every scary AI that is not quite smart enough to do a treacherous turn, and we don't structurally eliminate treacherous turns, then the first deployed AI that causes major damage will do it via a treacherous turn. We have no warning shots.
Less importantly, why are these things in quotation marks?:
[...]doing things like “incorporate a company” or “exploit arbitrages in stock prices” or “design and synthesize DNA” without needing any human assistance or oversight.
A prairie is qualitatively different than a billiard table or an asteroid belt: If you tried to use basic kinematics and free body diagrams to describe a prairie ecosystem, you would find that most of the interesting action was left unexplained. To handwave away air resistance and viscosity is to handwave away all the birds. To handwave away friction is to handwave away basically every other mobile life form. And I think it only gets worse if you move from a prairie to a rainforest--floating spores, flying snakes, geckos, soft but breakable eggs, all manner of sticky appendages, etc.
Simple dynamics don’t even get you a decent first approximation of these systems unless you zoom way out and take very coarse averages. (“The biomass generally stays within roughly 10m of ground level, because of gravity.” “These tightly coupled populations of predators and prey roughly trace out this orbit in phase space every X time interval.“) (But I'm interested in counterexamples if you have them.)
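The predator-prey orbit mentioned above is the classic Lotka-Volterra picture. Here's a minimal sketch; the parameter values and starting populations are illustrative assumptions, not fitted to any real ecosystem:

```python
# Minimal Lotka-Volterra sketch: coupled predator-prey populations
# tracing out a rough orbit in phase space. All parameters here are
# made-up illustrative values.

def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                   dt=0.001, steps=20000):
    """Integrate with simple Euler steps; returns the (prey, pred) trajectory."""
    traj = [(prey, pred)]
    for _ in range(steps):
        d_prey = (alpha * prey - beta * prey * pred) * dt   # births minus predation
        d_pred = (delta * prey * pred - gamma * pred) * dt  # predation minus deaths
        prey, pred = prey + d_prey, pred + d_pred
        traj.append((prey, pred))
    return traj

traj = lotka_volterra(10.0, 5.0)
# Both populations oscillate but stay positive, cycling through
# roughly the same region of phase space.
```

The point being: even this coarse, zoomed-out regularity takes a coupled nonlinear model, not free body diagrams.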
...
Anyway, this feels related to the fact that we can’t develop good models for human interactions, either descriptive or prescriptive. When I try to do virtue ethics, I find that all my virtues turn to swiss cheese after a day’s worth of exception handling. When I try to take actions based on first principles of game theory I end up feeling like a maladjusted sociopath. When I try to incorporate the good parts of economic/evopsych cynicism into my view of human affairs, I end up with more questions than answers.
…the question does sometimes haunt me, as to whether in the alternative Everett branches of Earth, we could identify a distinct cluster of “successful” Earths, and we’re not in it.
— This Failing Earth, Eliezer Yudkowsky
Does anyone else wonder similar things about the EA/rationality scene? If we could scan across Tegmark III, would we see large clusters of nearby Earths that have rationality & EA communities that embarrass us and lay bare our own low standards?
I wonder if this post would have gotten a better reception if the stooge had been a Scientologist or a conspiracy theorist or something, instead of just a hapless normie.
I assume that the whole flat earth thing will lose its contrarian luster and fall out of style in the next few years. But suppose that's wrong. How soon until there are significant numbers of flat-earther kids enrolling in kindergarten? Will they be like existing fringe religious minorities? Will they mostly be homeschooled? My real best guess is that flat-earthers don't have kids so this won't happen.
Some smart, scrupulous, rational news junkie should write a periodical report on the state of anti-epistemology. I sort of worry that memeplexes, including anti-epistemic ones, have tipping points whereat they become popular (or dominant) very suddenly.
I followed a link to an article about how Facebook was used to facilitate a genocide in Myanmar. I got a few paragraphs into it and then thought, "Wait, the New York Times is telling me a scandalous but murky story about Big Tech and world events...and I’m just condensing that as 'known facts of public record.' Isn’t this Gell-Mann amnesia?"
So then I felt myself searching for reasons why the NYT could be trusted more about this kind of thing, but found it difficult to come up with a single specific reason that I actually believed. So then I supposed that it was worth reading anyway, since the basic facts were important, and I wasn’t at that much risk from whatever biased framing the NYT might take. But I realized that I didn't really believe that either--I imagined the future in which I turn out to have been utterly misled by the article, and that hypothetical future felt entirely plausible.
So I didn’t read it.
It was an effortful and unrewarding decision, but I endorse it, and I’m hopeful that it will be easier next time. For news stories of this sort, I expect to fall short of my own epistemic standards unless I check 3 or 4 diverse sources. But I didn’t want to do an hour of responsible research, I wanted to spend a leisurely 10 minutes on a single, highly consumable, authoritatively-voiced article and then enjoy the feeling of being informed.
Uh oh, do you really leave the news playing in your living room all the time? Don't you know it's corrosive to your epistemics and agency? Plane crashes are overrated and chronic stress is underrated!
This is pretty much my default attitude, but...SSC once wrote that smoking possibly mitigates schizophrenia, and that "[t]his should be a warning to anyone who’s too quick to tell patients that their coping strategies are maladaptive."
News does have those downsides, just like smoking does cause cancer. But it's good to remember that load-bearing bugs are the rule, not the exception.
What good thing happens if you read The Sequences?
- You see repeated examples of rigorous thought about slippery topics, very deliberately setting up the seductive cached answers and then swerving away from them.
- Exposure to a lot of carefully applied Transhumanism. Mostly in Fun Theory but also sprinkled throughout. The transhumanism is sincere and often emotionally charged, not just smug philosophical gotchas.
- The concepts & jargon are really useful. Yeah, jargon has its downsides, no doubt, but it is still overwhelmingly net positive.
- A thorough argument that you really can live a life that integrates philosophical curiosity, narrative satisfaction, frolicking artistry, vigilant truth-orientation, deep emotion, scientific rigor, and a childlike hope for The Good.
- Deep and unforced optimism. Cynicism about cynicism[1]. Generalized anti-nihilism.
- Friendly AI is permitted by the laws of physics. This is sufficient reason to try our best, even if it turns out to be too difficult for tiny mortals like us.
- We're a thousand shards of desire lashed together by evolved kludges. Human-compatible morals & æsthetics exist only in humans, and are not objectively special. So what are you gonna do about it?
- Unicorns aren't real, there is no god, and no pixies in the garden. But you know what is real? Giant squid. Electric eels. Radiotrophic fungus. Black holes. Volcanoes. Aircraft. Flamethrowers. SCUBA diving. Lightning, rainbows, aurorae. If you really think you could enjoy unicorns and levitation spells, there is no reason why you shouldn't also be able to take joy in the merely real.
[1] Cynical About Cynicism isn't in the Sequences, but the same general attitude still comes up.
When I’m walking through my daily life, it helps me to think of myself as a character in a cyberpunk weirdtopia.
- Phone anxiety ruining my nature walk? Yeah that’s cyberpunk, even if it wasn't anticipated by Neuromancer.
- Strolling over to the donut shop for a nice pastry...amid a bungled global health crisis of disputed origin? Yup, that sure counts.
- Detouring down a beautifully verdant neighborhood, past a consecution of strident culture war yard signs, presumably influenced in some part by foreign psyops like the IRA? Definitely cyberpunk.
- Scratching my head over the risks of cryptocurrency hodling vs the risks of pandemic-driven inflation? Cyberpunk af.
Hiro Protagonist, the protagonist of Snow Crash, wouldn’t complain about these things; he would launch into a sassy, sciencey, poetic monologue and appreciate it all for what it was.
I like this post; words are important.
- I certainly want something that means Tolkningsföreträde; that sounds quite useful.
- Maybe also Föreställningsvärld--it sounds like it isn't quite interchangeable with "worldview", and I find that "world model" sounds too technical.
- I love "microdictator" and I'm going to try to spread it.
- I'm not so sure about the rest. It seems like "caricature" and "mission creep" might be fine.
Often I have witnessed people encountering new information, apparently accepting it, and then carefully explaining why they are going to do exactly the same thing they planned to do previously, but with a different justification. The point of thinking is to shape our plans; if you’re going to keep the same plans anyway, why bother going to all that work to justify it? When you encounter new information, the hard part is to update, to react, rather than just letting the information disappear down a black hole.
In some contexts, this is exactly right. It is right and proper to see major, real-time belief updates in the climax of a rational fic. And one hopes that executives in a high-stakes meeting will be properly incentivized to do the same. But in many ordinary cases, the most extreme concession one should hope to hear is, "okay, you've given me something to think about," followed by a change of subject. (If this seems unambitious, consider how rarely people make even such small concessions.)
I think it's important to mind the costs--both psychological and social--of abruptly changing one's plans or attitudes. "Why bother going to all that work to justify [staying the course]?" Indeed, I wish it were more normal for people to say, "well, that's a good point but it's probably not worth the switching costs" or even just, "I don't feel like thinking that hard about it."
… yesterday, you said Q, and Q implies not P. So you were wrong yesterday or today. So you’re wrong.
I sort of want to try developing (the valid version of) this into a deliberate skill. I think that of all the mundane forms of hypocrisy, one of the most vexing might be inconsistency at the 24h+ timescale. It's just hard to say, "hey, you're trying to have it both ways!" if the violation in question is spread out over multiple days. So naturally, everyone does it all the time.
"Top Forecasting Team Says World Population in 2050 Will be Only Six Thousand!" there's a good chance that they will just write "Top Forecasting Team Says World Population will massively decrease in the middle of the century".
Ok that's probably true. This idea was meant mostly as a joke, but still...I can't help but wonder if there might be some cool Straussian tactic to push a tiny signal through the Great Distorter.
Yeah sounds right. Post edited.
Once again, I do declare: the world could really use at least 10 more John Nersts.
Insightful Articles about Politics
Slightly inspired by this post from Julia Galef. I've selected the following posts because they are insightful specifically at the meta-level.
The post is making me feel optimism about social technology again.
I think I would be a much better-trained rationalist if I did my basic rationality practices as regularly as I do physical exercise. The practices are:
- I keep a list of things I have changed my mind about. Everything from geopolitics to my personal life.
- I hardly ever go looking outside my echo chamber in search of things that can challenge/correct my beliefs, because I find it very effortful & unpleasant. (You know what else is effortful? Pushups).
- I sometimes write letters to my future or past selves. I tried giving comprehensive life advice to my 16-year-old self, and ended up learning a lot about advice and spurious counterfactuals...
- I sometimes do the Line of Retreat negative visualization. For immediate things, I tend to do it out loud while on a walk. For political beliefs, I slowly add to a private document over time and occasionally review it.
- I maintain a list of my disagreements with various public thinkers. Helps me separate tribal thinking from truth-seeking.
- I made an Anki deck for memorizing my defensive epistemology heuristics: "is this explainable by selection effects?", Proving Too Much, "is this claim consistent with their previous claim?", Reversal Test, etc.
- I notice I'm at a point where I can make surprisingly good fermi estimates if I spend a couple minutes thinking really hard, usually not otherwise. Feels like there's room for improvement.
- Hard to practice with regularity, but these days I try to restrain myself from joining an in-progress debate when I overhear one, and instead sit on the sidelines and patiently wait for openings to point out (double) cruxes.
- Prompt myself to answer, "what would a slightly improved version of me do in this situation? What would I think if I were more rested and better hydrated?" It's embarrassing how much mileage I have gotten out of role-playing as myself.
- Privately journaling about my internal conflicts or difficult feelings. Simple but underpracticed (much like sit-ups).
- I wrote down a page of predictions about the state of crypto tech in 2031, aiming for maximum specificity & minimal future embarrassment. Similar for Twitter in this post. I guess I might call this "conscientious futurism" or just "sticking my neck out".
- Pro/Con lists. They're effortful & time-intensive. But so is half-focused vacillating, which is what I do by default.
So yeah, those are my rationality exercises, and I really wish I practiced them more regularly. It's not exactly high-level SEAL-inspired training, and it's pretty hard to verify, but...it feels like it makes me more rational.
I think a world of widespread economic literacy might be even better than it is depicted here. Speculative sci-fi has traditionally suffered from issues like predicting flying cars instead of smartphones. In Optimism About Social Technology, I wrote that my pet heuristic is:
Imagine how much worse the world would be if there were a worldwide ban on e.g. standard insurance contracts--no health insurance, no auto insurance, no fire insurance.
Now imagine how much better the world would be if we had not only those things but also widespread liability insurance...or dominant assurance contracts, or prediction markets, or something that hasn't even been invented yet!
I think EY is off to a great start with Dath Ilan, but speculative fiction is hard, so I want there to be a whole genre of Dath Ilan-style world-building.
Conditional payments for paywalled content (after you pay for a piece of downloadable content and view it, you can decide after the fact if payments should go to the author or to proportionately refund previous readers)
-- Vitalik Buterin, On Radical Markets
Good post. I myself have gotten into the habit of referring to "an outside view" instead of "the outside view".
I wonder where the Spiral of Silence fits in here. I guess opposite the Respectability cascade?
society can respond to new information relatively quickly, but does so smoothly. This seems like a good thing.
This makes me think of the Concave Disposition.
I guess it shouldn't come as a surprise that these concepts are already well-known.
Well I think independent discovery is underrated anyway.
I think this remains an outstanding, top-tier problem in group rationality. I feel like I encounter it constantly. I'm surprised this post doesn't have more engagement.
I suspect that the long days break down some of your usual defences. It makes their techniques more effective, but you may not want to provide them with this power over you.
I personally feel less concerned by the long hours than by the notion of "psychological hacks" that lead to testimonials like, “What is, is; and what isn't isn't”. That stuff makes me imagine some kind of "leap of faith" maneuver, which I usually see as unreliable and prone to misfiring.
The Western focus on individuality and autonomy can be limiting as often a push is exactly what we need. This may explain part of why they were able to achieve what seemed like remarkable results - psychologists are limited by ethics in a way in which Landmark is not.
Yeah, this is plausible. It's easy to imagine scenarios where a push from a trusted friend is exactly what I want. However, I'm still wary of hiring an organization of strangers to overpower my narratives & worldview using psychological hacks.
Contrast with certain types of meditation, whereby you can directly observe evidence that challenges your narrative, without ever doing anything epistemically questionable.
Purely for completeness, I'll go ahead and represent the opposite preference: I am noticeably energized by overcast days, and I enjoy rain. Long, unbroken sequences of sunny days feel oppressive to me. I think my ideal week would be overcast 4 days, medium-light rain 2 of those days, and sunny on the remaining 3 days for evaporation & variety.
Of course, I realize that pluviophiles are a small minority, so any community/subcultural hub in a chronically cloudy place will suffer an excess SAD burden.
The seismic shift that’s occurred in the last 10 years is the ability of social media platforms to freebase user generated content and create serious behavioral addictions with very salient real world consequences. We‘re making a category error if we continue to discuss Twitter like it’s the same platform it was 10 years ago.
Important point, and well-put.
The Jaron quote was also powerful; I hadn't heard that sort of thing about Trump before, but it's not surprising. I personally think the highest reasonable hope would be for Trump to return to how he was in 2012--the birth certificate stuff was much less bad than the QAnon stuff and the Capitol insurrection. It was still bad, though, which might undermine a sanguine narrative of "Trump in Recovery"...but if it somehow didn't, then yeah, I'd be happy to see that narrative get some airtime.
Regardless of how the stories of Trump end up being told, I do hope that people start to see Twitter as the psychotoxic game that it is. I have expressed some optimism about this in Predictions for future dispositions toward Twitter. It's possible that tech companies will eventually try to sell cleaner digital ecosystems to conscientious end-users--I imagine a high-income, tech-savvy buyer paying extra for a well-integrated device-app ecosystem that tends to respect & enhance one's mental/emotional life rather than harming it. This could come to represent a high-status, post-Twitter lifestyle. Again this is optimistic, but perhaps worth hoping for.
(I use the term "full reversal" to mean going from high confidence in a belief to high confidence in the opposite belief. A "hard reversal" is when a full reversal happens quickly.)
When have you noticed and remembered peers or colleagues changing their minds?
I think the question might need some modifiers to exclude the vast number of boring examples. Obviously your question does not evoke answers as boring as "Oh, the store is closed? Okay, then we can't get milk tonight," but what about a corporate executive pivoting his strategy when he hears business-relevant news? By now I am bored of Losing-the-Faith stories, but I don't deny their relevance to human rationality.
Anyway, I think full reversals tend to happen much less frequently than moderate reductions in confidence. Much more common are things of the form "I used to be totally anti-X, but now I see that the reality is a lot more complex and I've become much less certain" or "I used to be completely convinced that Y was true and the deniers were just being silly, but I read a couple decent challenges and now I'm just pretty confused overall". One way in which this happens is when someone accepts that their strong belief actually depends on some fact that they don't know much about.
But to try to directly answer your question, I might list:
- Megan Phelps-Roper left the Westboro Baptist Church, in part due to having respectful debates on Twitter
- Bostrom's Hypothetical Apostasy never really caught on, despite sounding pretty cool on paper. Too bad.
- Rationalists have gotten some recognition for anticipating the pandemic early--you might be able to find some good examples of mind-changing there.
- Rationalist-adjacent blogger Tim Urban had a fairly sharp reversal on cryonics.
- There's that classic (boring?) example of a person quitting grad school after spending a few minutes answering reasonable questions about their motivations.
- If you want a more politically-charged example: Scott Alexander loosely identifies as libertarian, having formerly been vocally anti-libertarian. Seems like this happened via deliberate argumentation, including some email exchanges with David Friedman (son of Milton Friedman).
- I've seen some of my friends and acquaintances change their minds about psychoactive drugs.
Thanks for putting out more fiction.
the rocket began to tilt slightly east.
I interpret this as subtle world-building. A future with Jewish space lasers AND peace in the Middle East.
When I first read the post, about 50% of my reaction was, "this platform could never get traction with a major political party". But is that true? (...also, is it too meta?)
Scott writes in the piece,
There's a theory that the US party system realigns every 50-or-so years. Last time, in 1965, it switched from the Democrats being the party of the South and the Republicans being the party for blacks, to vice versa. If the theory's right, we're in the middle of an equally big switch. Wouldn't it be great if the Republicans became the racially diverse party of the working class? You can make it happen!
So I guess that's my biggest question about all this. Is the realignment theory correct? And more importantly, would a 1960s-magnitude realignment be enough to cause a major US political party to adopt a prominently anti-credentialist, pro-betting, anti-gatekeeping platform?
Thanks, this is really helpful! I'll ask more questions if I think of them.
In particular, gaining a new form of beauty mostly makes my life feel nicer, whereas gaining a new form of disgust increases the unpleasantness
This resonates for me, and I sometimes end up with an 'ignorance is bliss' attitude toward the latter.
~~~
I gained some ability to see systemization as beautiful. My sense of hufflepuff beauty became more nuanced and caveated.
Can you say more about this? Did this aesthetic shift feel good/bad/neutral, either in the moment or upon reflection? I have such shifts occasionally, and it sometimes makes me feel...tired. Like I just get weary at the thought of permanently increasing the amount of nuance that I track. Rereading the excerpt, I feel like some part of me is insisting that adding nuance and caveats is costly and unsustainable.
Chapter 15: A Scout Identity
Section: You Can Choose Your Communities Online
For all that people complain about how toxic Twitter, Facebook, and the rest of the internet can be, they don't often seem to put in much effort to crafting a better online experience for themselves. Sure, there are plenty of trolls, overconfident pundits, uncharitable talk-show hosts, and intellectually dishonest influencers, but you don't have to give them your attention. You can choose to read, follow, and engage with the exceptions to the rule instead.
Well I gotta strongly disagree with this part. While it's true that most complainers put hardly any effort in, actually doing what she suggests requires monastic dedication. Psychotoxic internet content is highly addictive for many people, and our infrastructure amplifies and spreads it.
I'm pretty concerned by things like state-sponsored polarization campaigns and the apparent memetic collapse, so I can't help but feel like the quoted passage is kind of sweeping aside some pretty big stuff.
Chapter 8: Motivation Without Deception
Is it too early to calculate Elon Musk's calibration? Tesla seems like a success by now, and you could argue that SpaceX is as well. That's at least two 10% predictions that came true, so he'll need to fail with 18 similar companies if he wants Scouts to take his opinion seriously...
EDIT: This was a joke.
I don't want to have to pay attention to everything that's out there on Twitter or Facebook, and would like a short document that gets to the point and links out to other things if I feel curious.
I was pretty happy when Ben Pace turned Eliezer's Facebook AMA into a LW post; I might like to see more stuff like that. However, I feel like wiki pages ought to be durable and newcomer-friendly, and therefore must necessarily lag the cutting edge.
Here are a few ideas I picked up from the Substack comments:
- How volatile is the Honduran government? Can you really bet on future politicians honoring current agreements?
- At any point where the Prósperan government relies on the Honduran government, won't corruption and graft leak in?
- What are conditions like for those existing resorts on Roatan? Are they protected from corruption and violence?
- It's too bad that they plan to hew closely to traditional schooling arrangements. If you're starting from scratch anyway, why not make it easy to unbundle education & childcare?
And here are my own thoughts:
- Why haven't I heard about Próspera already? If it's so promising, are professional economists buzzing about it? For example, what does Tyler Cowen think?
- What do we know about Pronomos Capital? Do they have any sort of track record? Is it reasonable to expect them to have about a 10% hit rate, as Raemon suggests?
That's a pretty good link, thanks. And yeah, the inverse had occurred to me, but I forgot to mention it except kind of in the title.
This part:
People read the Times not to find out what happened where and when, but to find out who is to be comforted and who afflicted. People just want to be on the same page as their peers.
reminds me of Scott Alexander on the phatic:
Consider a very formulaic conservative radio show. Every week, the host talks about some scandal that liberals have been involved in. Then she explains why it means the country is going to hell. I don’t think the listeners really care that a school in Vermont has banned Christmas decorations or whatever. The point is to convey this vague undercurrent of “Hey, there are other people out there who think like you, we all agree with you, you’re a good person, you can just sit here and listen and feel reassured that you’re right.” Anything vaguely conservative in content will be equally effective, regardless of whether the listener cares about the particular issue.
~~~
my best guess of the typical experience is being in social reality 99.9% of the time. The 0.1% are extreme shocks, cases when physical reality kicks someone so far off-script they are forced to confront it directly. These experiences are extremely unpleasant, and processing them appears as “depression and anxiety”. One looks at the first opportunity to dive back into the safety of social reality, in the form of a communal narrative that “makes sense” of what happened and suggests an appropriate course of action.
Really? Shouldn't "typical experience" include small business owners running sales forecasts, truckers navigating new environments, and a contractor building a staircase? It seems to me that lots of normal people contend with novel situations in objective reality on a regular basis. What really seems noteworthy to me is how domain-specific that mode of thought tends to be. A guy who builds houses can tell when some new construction regulation is not reality-based, but he will not think twice about questionable statements from the CDC.
Why is Japan called Australia??
Yeah, this seems like an important point. For me the difference between jogging and badminton is like night and day. Asking me whether I like "exercise" would be like asking me if I like "food".
In general, I think most people should put a lot more resources into shopping around for enjoyable exercise. I got really lucky that my friend talked me into taking a badminton class with him in high school; if not for that, I might conceive of myself as "not a cardio person".
All that being said, I still do force myself to jog when my preferred cardio alternatives are unavailable.
By the way, here's a Metaculus question about when Kalshi will launch.
Sometimes I scroll social media (because I am yet weak) and I see rationalists raising Concerns about various news topics and current events.
Here’s a list of concerns and potential actions, including those I see as inadequate.
- Our institutions are losing trustworthiness and competence
  - Shout about the CDC on social media
  - #GhostTheNYT
  - Try to understand what leads to institutional success/failure
  - Try to advance incentive-compatible systems and social tech that can scale
- Freedom of speech is under attack! Gatekeepers! Reality czar! The Narrative!
  - Take traditional political action
  - Less Twitter, more Substack
  - Research and post about the economics and logistics of running your own servers.
  - Experiment with ring signatures?
- Social media is harming mental health, undermining individual & group epistemics, and enabling horrible actions in meatspace
  - Complain about it on social media in between dunking on your outgroup.
  - Await humanity’s inexorable slide down the fitness gradients toward memetic collapse.
  - Promote more group activities in your group house. Establish device-free zones.
  - Coordinate to stigmatize doomscrolling, hate-reading, and contempt addiction. While we’re at it, let’s stigmatize using Twitter at all.
- Large, free, liquid prediction markets cannot exist, supposedly due to the global influence of burdensome US trading regulations.
  - Take traditional political action
  - Pay a competent YouTuber to make a really good explanatory video (series) about prediction markets.
  - Write a postmortem on Augur to help future attempts avoid the same pitfalls
  - Research and post about why prediction markets haven’t gotten big even where regulation is looser
  - Start up a competitor to Kalshi, and launch before they do
- Ongoing Uighur atrocity
  - Share an article about it once in a while
  - Take traditional political action, push for substantial international response
  - Make fewer purchases from companies that profit from the abuse of Uighurs
we also reinvent the wheel more.
Could you elaborate on this? Which wheels are you thinking of?
Hm okay. And is this a problem for prediction markets too, even though participants expect to profit from their time spent?
The way I imagine it, sloppier traders will treat a batch of nearly identical questions as identical, arbitraging among them and causing the prices to converge. Meanwhile, the more literal-minded traders will think carefully about how the small changes in the wording might imply large changes in probability, and they will occasionally profit by pushing the batch of prices apart.
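A toy numerical sketch of that convergence story; the prices and the update rule are made-up assumptions, just to illustrate the direction of the effect:

```python
# Toy sketch: a "sloppy" arbitrageur treats a batch of nearly identical
# questions as identical and trades until their prices meet. The prices
# and the linear update rule are illustrative assumptions only.

def arbitrage_batch(prices, rounds=100, rate=0.1):
    """Each round, nudge every price a fraction of the way toward the batch average."""
    prices = list(prices)
    for _ in range(rounds):
        avg = sum(prices) / len(prices)
        prices = [p + rate * (avg - p) for p in prices]
    return prices

converged = arbitrage_batch([0.55, 0.61, 0.58])
# The three prices all end up very close to the batch average of 0.58;
# a literal-minded trader who knows the questions genuinely differ can
# then profit by pushing them back apart.
```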
But maybe most traders won't be that patient, and will prefer meta-resolution or offloading.
I still feel like I'm onto something here...
None of this seems cruxy to me--I could grant that all of your claims here are true and it wouldn't much affect my argument. I'm not advocating that everyone abandon their church communities and throw out their bibles.
Don't these religions (the large incumbent ones of the western world) need to reform in light of new opportunities and challenges of the 21st century? (Not least of all anti-aging, gene editing, space colonization, psychedelic research, neuroscience, and powerful AI).
Don't the inertia and epistemic standards of incumbents like Mormonism pose an obstacle to more modern religions like Mormon Transhumanism? And isn't that bad?