Another amazing post. How long does each of these take you to make? Seems like it would be a full-time job.
Thanks :) Hmm I think all I can point you to is this tweet.
I <3 Specificity
For years, I've been aware of myself "activating my specificity powers" multiple times per day, but it's kind of a lonely power to have. "I'm going to swivel my brain around and ride it in the general→specific direction. Care to join me?" is not something you can say in most group settings. It's hard to explain to people that I'm not just asking them to be specific right now, in this one context. I wish I could make them see that specificity is just this massively under-appreciated cross-domain power. That's why I wanted this sequence to exist.
I gratuitously violated a bunch of important LW norms
- As Kaj insightfully observed last year, choosing Uber as the original post's object-level subject made it a political mind-killer.
- On top of that, the original post's only role model of a specificity-empowered rationalist was this repulsive "Liron" character who visibly got off on raising his own status by demolishing people's claims.
Many commenters took me to task on the two issues above and raised other valid issues, like whether the post implies that specificity is always the right power to activate in every situation.
The voting for this post was probably a rare combination: many upvotes, many downvotes, and presumably many conflicted non-voters who liked the core lesson but didn't want to upvote the norm violations. I'd love to go back in time and launch this again without the double norm violation self-own.
I'm revising it
Today I rewrote a big chunk of my dialogue with Steve, with the goal of making my character a better role model of a LessWrong-style rationalist, and just being overall more clearly explained. For example, in the revised version I talk about how asking Steve to clarify his specific point isn't my sneaky fully-general argument trick to prove that Steve's wrong and I'm right, but rather, it's taking the first step on the road to Double Crux.
I also changed Steve's claim to be about a fictional company called Acme, instead of talking about the politically-charged Uber.
I think it's worth sharing
Since writing this last year, I've received a dozen or so messages from people thanking me and remarking that they think about it surprisingly often in their daily lives. I'm proud to help teach the world about specificity on behalf of the LW community that taught it to me, and I'm happy to revise this further to make it something we're proud of.
Ok I finally made this edit. Wish I'd done it sooner!
Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.
Glad to hear you feel I've addressed the Combat Culture issues. I think those were the lowest-hanging fruits that everyone agreed on, including me :)
As for the first point, I guess this is the same thing we had a long comment thread about last year, and I'm not sure how much our views diverge at this point...
Let's take this paragraph you quoted: "It sounds meaningful, doesn’t it? But notice that it’s generically-worded and lacks any specific examples. This is a red flag." Do you not agree with my point that Seibel should have endeavored to be more clear in his public statement?
Zvi, I respect your opinion a lot and I've come to accept that the tone disqualifies the original version from being a good representation of LW. I'm working on a revision now.
Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.
Thanks for the feedback. I agree that the tone of the post has been undermining its content. I'm currently working on editing this post to blast away the gratuitously bad-tone parts :)
Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.
Meta-level reply
The essay gave me a yucky sense of "rationalists try to prove their superiority by creating strawmen and then beating them in arguments", sneer culture, etc. It doesn't help that some of its central examples involve hot-button issues on which many readers will have strong and yet divergent opinions, which imo makes them rather unsuited as examples for teaching most rationality techniques or concepts.
Yeah, I take your point that the post's tone and political-ish topic choice undermine the ability of readers to absorb its lessons about the power of specificity. This is a clear message I've gotten from many commenters, whether explicitly or implicitly. I shall edit the post.
Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.
Object-level reply
In the meantime, I still think it's worth pointing out where I think you are, in fact, analyzing the content wrong and not absorbing its lessons :)
For instance, I read the "Uber exploits its drivers" example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart's arguments apart
My dialogue character has various positive-affect a-priori beliefs about Uber, but having an a-priori belief state isn't the same thing as having an immutable bottom line. If Steve had put forth a coherent claim, and a shred of support for that claim, then the argument would have left me with a modified a-posteriori belief state.
In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit
My character is making a good-faith attempt at Double Crux. It's just impossible for me to ascertain Steve's claim-underlying crux until I first ascertain Steve's claim.
even if we "demolish" our counterpart's supposedly bad arguments, at best we discover that they could not shift our priors.
You seem to be objecting that selling "the power to demolish bad arguments" means that I'm selling a Fully General Counterargument, but I'm not. The way this dialogue goes isn't representative of every possible dialogue where the power of specificity is applied. If Steve's claim were coherent, then asking him to be specific would end up helping me change my own mind faster and demolish my own a-priori beliefs.
reversed stupidity is not intelligence
It doesn't seem relevant to mention this. In the dialogue, there's no instance of me creating or modifying my beliefs about Uber by reversing anything.
all the while insulting this fictitious person with asides like "By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.".
I'm making an example out of Steve because I want to teach the reader about an important and widely-applicable observation about so-called "intellectual discussions": that participants often win over a crowd by making smart-sounding general assertions whose corresponding set of possible specific interpretations is the empty set.
Curve fitting isn't Problematic. The reason it's usually a good best guess that points will keep fitting a curve (though wrong a significant fraction of the time) is that we can appeal to a deeper hypothesis that "there's a causal mechanism generating these points that is similar across time". When we take our time and do actual science on our universe, our theories tell us that the universe has time-similar causal structures all over the place. Actual science is what licenses quick-and-dirty science-like heuristics.
Just because curve fitting is one way you can produce a shallow candidate model to generate your predictions, that doesn't mean "induction is needed" in the original problematic sense, especially since a theory that doesn't rely on mere curve fitting will probably come along and beat out the curve-fitting approach.
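To make "curve fitting as a quick-and-dirty heuristic" concrete, here's a minimal sketch (made-up data, numpy assumed) of fitting a shallow candidate model and extrapolating it, with no causal story attached:

```python
import numpy as np

# Made-up observations: some quantity measured at times 0..9.
t = np.arange(10)
y = 3.0 * t + 2.0 + np.random.normal(0, 0.5, size=t.shape)  # roughly linear trend

# Quick-and-dirty heuristic: fit a low-degree polynomial and extrapolate.
# This is a shallow candidate model -- a curve, not a causal mechanism.
coeffs = np.polyfit(t, y, deg=1)
print("fitted slope, intercept:", coeffs)
print("extrapolated value at t=15:", np.polyval(coeffs, 15))
```

Whether extrapolating that curve is licensed at all is what the deeper causal theory has to tell us.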
I think at best you can say Deutsch dissolves the problem for the project of science
Ok I think I'll accept that, since "science" is broad enough to be the main thing we or a superintelligent AI cares about.
Since "no one believes that induction is the sole source of scientific explanations", and we understand that scientific theories win by improving on their competitors in compactness, then the Problem of Induction that Russell perceived is a non-problem. That's my claim. It may be an obvious claim, but the LW sequences didn't seem to get it across.
You seem to be saying that induction is relevant to curve fitting. Sure, curve fitting is one technique to generate theories, but tends to be eventually outcompeted by other techniques, so that we get superseding theories with reductionist explanations. I don't think curve fitting necessarily needs to play a major role in the discussion of dissolving the Problem of Induction.
Ah yeah. Interesting how all the commenters here are talking about how this topic is quite obvious and settled, yet not saying the same things :)
Theories of how quarks, electromagnetism and gravity produce planets with intelligent species on them are scientific accomplishments by virtue of the compression they achieve, regardless of why quarks appear to be a thing.
If we reverse-engineer an accurate compressed model of what the universe appears like to us in the past/present/future, that counts as science.
If you suspect (as I do) that we live in a simulation, then this description applies to all the science we've ever done. If you don't, you can at least imagine that intelligent beings embedded in a simulation that we build can do science to figure out the workings of their simulation, whether or not they also manage to do science on the outer universe.
Justifying that blue is an a-priori more likely concept than grue is part of the remaining problem of justifying Occam's Razor. What we don't have to justify is the wrong claim that science operates based on generalized observations of similarity.
your claim is that if we admit that the universe follows these patterns then this automatically means that these patterns will apply in the future.
Yeah. My point is that the original statement of the Problem of Induction was naive in two ways:
- It invokes "similarity", "resemblance", and "collecting a bunch of confirming observations"
- It talks about "the future resembling the past"
#1 is the more obviously naive part. #2's naivety is what I explain in this post's "Not About Past And Future" section. Once one abandons naive conceptions #1 and #2 by understanding how science actually works, one reduces the Problem of Induction to the more tractable Problem of Occam's Razor.
I don't think we know that the universe follows these patterns as opposed to appearing to follow these patterns.
Hm, I see this claim as potentially beyond the scope of a discussion of the Problem of Induction.
Well, I hope this post can be useful as a link you can give to explain the LW community's mostly shared view about how one resolves the Problem of Induction. I wrote it because I think the LW Sequences' treatment of the Problem of Induction is uncharacteristically off the mark.
If I have two different data sets and compress each of them well, I would not expect those compressions to be similar or the same.
If I drop two staplers, I can give the same compressed description of the data from their two trajectories: "uniform downward acceleration at close to 9.8 meters per second squared".
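As a toy illustration (simulated data, a constant g assumed), two different drops compress to the same two-parameter model:

```python
import numpy as np

g = 9.8  # m/s^2, assumed constant downward acceleration

def simulate_drop(height_m, n=20):
    """Simulated heights of a dropped object, sampled until it hits the ground."""
    t = np.linspace(0, np.sqrt(2 * height_m / g), n)
    return t, height_m - 0.5 * g * t**2

# Two staplers dropped from different heights...
t1, y1 = simulate_drop(1.2)
t2, y2 = simulate_drop(0.8)

# ...but the same compressed description fits both: y = y0 - 0.5 * a * t^2.
# Fitting a quadratic recovers a ~9.8 m/s^2 acceleration for each trajectory.
a1 = -2 * np.polyfit(t1, y1, deg=2)[0]
a2 = -2 * np.polyfit(t2, y2, deg=2)[0]
print(f"inferred acceleration, stapler 1: {a1:.2f} m/s^2")
print(f"inferred acceleration, stapler 2: {a2:.2f} m/s^2")
```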
But then the fence can suddenly come to an end or make an unexpected 90 degree turn. How many posts do you need to see to reasonably conclude that post #5000 exists?
If I found the blueprint for the fence lying around, I'd assign a high probability that the number of fenceposts is what's shown in the blueprint, minus any that might be knocked over or stolen. Otherwise, I'd start with my prior knowledge of the distribution of sizes of fences, and update according to any observations I make about which reference class of fence this is, and yes, how many posts I've encountered so far.
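Here's a minimal sketch of that kind of update (the prior over fence sizes is made up for illustration, and the likelihood just conditions on the fence being at least as long as what I've seen):

```python
import numpy as np

# Made-up prior over the total number of fenceposts: smaller fences more common.
lengths = np.arange(1, 10001)
prior = 1.0 / lengths
prior /= prior.sum()

def posterior_after_seeing(n_seen):
    # A fence of length L is consistent with having passed n_seen posts only if L >= n_seen.
    likelihood = (lengths >= n_seen).astype(float)
    post = prior * likelihood
    return post / post.sum()

post = posterior_after_seeing(4000)
print("P(post #5000 exists | passed 4000 posts) =", post[lengths >= 5000].sum())
```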
It seems like you haven't gotten on board with science being a reverse-engineering process that outputs predictive models. But I don't think this is a controversial point here on LW. Maybe it would help to clarify that a "predictive model" outputs probability distributions over outcomes, not predictions of single forced outcomes?
To clarify, what I think is underappreciated (and what's seemingly being missed in Eliezer's statement about his belief that the future is similar to the past) isn't that justifying an Occamian prior is necessary or equivalent to solving the original Problem of Induction, but that it's a smaller and more tractable problem which is sufficient to resolve everything that needs to be resolved.
Edit: I've expanded on the Problem of Occam's Razor section in the post:
In my view, it's a significant and under-appreciated milestone that we've reduced the original Problem of Induction to the problem of justifying Occam's Razor. We've managed to drop two confusing aspects from the original PoI:
- We don't have to justify using "similarity", "resemblance", or "collecting a bunch of confirming observations", because we know those things aren't key to how science actually works.
- We don't have to justify "the future resembling the past" per se. We only have to justify that the universe allows intelligent agents to learn probabilistic models that are better than maximum-entropy belief states.
Agree. Not only is asking “what’s an example” generally highly productive, it’s about 80% as productive as asking “what are two examples”.
I’m not a gamer. Having a ton of screen real estate makes me more productive by letting me keep a bunch of windows visible in the same fixed locations.
Re paying a premium, I don’t think I am; the Samsung monitor is one of the cheapest well-reviewed curved monitors I found at that resolution.
5. If your work is done on a computer, get a second monitor. Less time navigating between windows means more time for thinking.
Agree. I'm stacking two of these bad boys: https://www.amazon.com/gp/product/B07L9HCJ2V
For most professionals, spending $2k is cheap for even a 5% more productive computing experience
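As a rough back-of-the-envelope (the salary figure is just an assumption for illustration): a professional producing $100k/year of value who becomes 5% more productive generates about $5k/year of extra value, so a $2k setup pays for itself in roughly five months.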
I agree with your main idea about how curiosity is related to listening well.
The post’s first sentence implies that the thesis will be a refutation of a different claim:
A common piece of interacting-with-people advice goes: “often when people complain, they don’t want help, they just want you to listen!”
The claim still seems pretty true from my experience: that sometimes people have a sufficient handle on their problem, and don’t want help dealing with the problem better, but do want some empathy, appreciation, or other benefits from communicating their problem in the form of a complaint.
Technically I bought these at slightly above NAV, and brought their effective prices below NAV by selling November call options against them.
How does that work and what’s the downside of that trade?
This feature seems to be making the page wider and allowing horizontal scrolling on my mobile (iPhone), which degrades the post reading experience. I would prefer if the interface got shrunk down to fit the phone’s width.
Thanks for this post! Interesting to learn about the current state of things.
It does seem true (and funny) to me that the #1 thing in physical reality I and millions of others would like to experience in Virtual Reality is our computer screens.
Consider this analogy: Professional basketball teams are much better than hobby league teams because they have a much stronger talent pool and incentive feedback loop. Yet individual teams rise and fall within their league, because it’s a competitive ecosystem. Business is currently the pro league for brainpower, but individual companies still rise and fall within that league.
Business is also a faster-changing game than basketball because consumer preferences, supplier offerings and technological progress are all moving targets. So a company full of really smart people will still often find itself much less competitive than it used to be.
Companies like Yahoo that fall too far stop being able to generate large profits and attract top talent, and eventually go extinct. The analogy with sports teams breaks here because many sports leagues give their worst teams some advantages to rebuild themselves, while failing companies just go out of business.
GM, IBM and AT&T are teams who have fallen in the league rankings, but if they’re still operating in the market then they’re still effectively competing for talent and their higher-paid positions still probably have correspondingly higher average IQ.
The NYT is a case where the competitive ecosystem shifted drastically, and the business successfully continued optimizing for profit within the new ecosystem. Before the internet, when information was a scarce resource, the NYT’s value prop was information creation and distribution, with a broad audience, and paid for by broad-targeted ads. Now their value prop is more focused on advocacy of the viewpoints of its narrower subscriber base, paid for by that subscriber base. The governing board of the NYT may care about neutral news reporting, but they also care a lot about profit, so they consider the NYT’s changes to be good tradeoffs.
If you think of the NYT as a public service providing neutral reporting, then yes, that service has been crumbling, and no company will replace it doing that same service (the way IBM’s services are getting replaced by superior alternatives), because it wasn’t designed with the right incentive feedback loops for providing neutral reporting; it was designed as a profit-maximizing company, and profit only temporarily coincided with providing neutral reporting.
The highest-quality organizations today (not sure if they're "institutions") are the big companies like Amazon and Google. By "high quality" I mean they create lots of value, with a high value-per-(IQ-weighted)-employee ratio.
Any institution that does a big job, like government, has lots of leverage on the brainpower of its members and should be able to create lots of value. E.g. a few smart people empowered to design a new government healthcare system could potentially create a $trillion of value. But the for-profit companies are basically the only ones who actually do consistently leverage the brainpower and create $trillions of value. This is because they're the only ones who make a sustained effort to win the bidding war for brainpower, and manage it with sufficiently tight feedback cycles.
Another example of a modern high-quality institution that comes to mind, which isn't a for-profit company, is Wikipedia. Admittedly no one is bidding money for that talent, so my model would predict that Wikipedia should suffer a brain drain, and in fact I do think my model explains why the percentage of people who are motivated to edit Wikipedia is low. But it seems like there's a small handful of major Wikipedia editors addicted to doing it as a leisure activity. The key to Wikipedia working well without making its contributors rich is that the fundamental unit of value is simple enough to have a tight feedback loop, so that it can lodge in a few smart people's minds as an "addictive game". You make an edit and it's pretty clear whether you've followed the rule of "improve the article in some way". Repeat, and watch your reputation score (number of edits, number of article views) tick steadily up.
So my model is that successful institutions are powered by smart people with reward feedback loops that keep them focused on a mission, and companies are attracting almost all the smart people, but there are still a few smart people pooled in other places like Wikipedia which use a reward feedback loop to get lots of value from the brainpower they have.
Re subcultures and hobby groups: I don't know, I don't even have a sense of whether they're on an overall trend of getting better or worse.
Institutions have been suffering a massive brain drain ever since the private sector shot way ahead at providing opportunities for smart people.
Think of any highly capable person who could contribute high-quality socially-valuable work to an important institution like the New York Times, CDC, Federal Highway Administration, city council, etc. What's the highest-paying, highest-fun career opportunity for such a person? Today, it's probably the private sector.
Institutions can't pay much because they don't have feedback loops that can properly value an individual's contribution. For example, if you work for the CDC and are largely the one responsible for saving 100,000 lives, you probably won't get a meaningful raise or even much status boost, compared to someone who just thrives on office politics and doesn't save any net lives.
In past decades, the private sector had the same problem as institutions: It was unable to attribute disproportionate value to most people's work. So in past decades, a typical smart person could have a competitive job offer from an institution. In that scenario, they might pick the institution because their passion for a certain type of work, the pride of doing it well, and the pride of public service, on top of the competitive compensation and promotion opportunities, made it the most attractive career option.
But now we're in a decades-long trend where the private sector has shot way ahead of institutions in its ability to offer smart people a good job. There are many rapidly-scaling tech(-enabled) companies, and it's increasingly common for the top 10% of workers to contribute 1,000%+ as much value as the median worker in their field, and companies are increasingly better at making higher-paying job offers to people based on their level of capability.
We see institutions do increasingly stupid things because the kind of smart people who used to be there are instead doing private-sector stuff.
The coordination problem of "fixing institutions" reduces to the coordination problem of designing institutions whose pay scale is calibrated to the amount of social good that smart people do when working there, relative to private sector jobs. The past gave us this scenario accidentally, but no such luck in the present.
Cofounder of Relationship Hero here. There's a sound underlying principle of courtship that PHTG taps into: That if your partner models you as someone with a high standard that they need to put in effort to meet, then they'll be more attracted to you.
The problem with trying to apply any dating tactic, even PHTG, is that courtship is a complex game with a lot of state and context. It's very common to be uncalibrated and apply a tactic that backfires on you because you weren't aware of the overall mental model that your partner had of you. So I'd have to observe your interactions and confirm that "being too easy" is a sufficiently accurate one-dimensional projection of where you currently stand with your partner, before recommending this one particular tactic.
Instead of relying on a toolbag of one-dimensional tactics, my recommended approach is to focus on understanding your partner's mental model of you, and of their relationship with you, and of the relationship they'd want. Then you can strategize how to get the relationship you want, assuming it's compatible with a kind of relationship they'd also want.
I agree with the other answers that say climate change is a big deal and risky and worth a lot of resources and attention, but it’s already getting a lot of resources and attention, and it’s pretty low as an existential threat.
Also, my impression is that there are important facts about how climate change works that are almost never mentioned. For example, this claim that there are diminishing greenhouse effects to CO2: https://wattsupwiththat.com/2014/08/10/the-diminishing-influence-of-increasing-carbon-dioxide-on-temperature/
Also, I think most of the activism I see around climate change is dumb and counterproductive and moralizing, e.g. encouraging personal lifestyle sacrifices.
I think they mean that ad tech (or perhaps a more consensus example is nukes) is a prisoner’s dilemma, which is nonzero sum as opposed to positive/negative/constant/zero sum.
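A minimal sketch of why a prisoner's dilemma is nonzero-sum (standard textbook payoffs, nothing specific to ad tech or nukes):

```python
# Classic prisoner's dilemma payoffs as (row player, column player).
# C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# In a zero-sum (or constant-sum) game the totals would match in every cell.
# Here they don't: mutual cooperation creates more total value than mutual defection.
for outcome, (a, b) in payoffs.items():
    print(outcome, "-> total payoff:", a + b)
```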
Golden raises $14.5M. I wrote about Golden here as an example of the most common startup failure mode: lacking a single well-formed use case. I’m confused about why someone as savvy as Marc Andreessen is tripling down and joining their board. I think he’s making a mistake.
I was thinking that if the sequences and other LW classics were a high school class, we could make something like an SAT subject test to check understanding/fluency in the subject, then that could be a badge on the site and potentially a good credential to have in your career.
The kinds of questions could be like the following (a quick arithmetic sketch for question 1 comes after the list):
1.
If a US citizen has a legal way to save $500/year on their taxes, but it requires spending 1 hour/day filling out boring paperwork 5 days a week, should they do it?
a. Virtually everyone should do it
b. A significant fraction (10-90%) of the population should do it
c. Virtually no one should do it
2.
With sufficient evidence and a rational deliberation process, is it possible to become sure that the Loch Ness Monster does/doesn't exist?
a. We CAN potentially become sure either way
b. We CAN'T potentially become sure either way
c. We can only potentially become sure that it DOES exist
d. We can only potentially become sure that it DOESN'T exist
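For question 1, the intended expected-value arithmetic is presumably something like this (a quick sketch, treating the paperwork hours as pure cost):

```python
# Question 1: is $500/year worth 1 hour/day of paperwork, 5 days/week?
hours_per_year = 1 * 5 * 52            # 260 hours of boring paperwork per year
savings_per_year = 500                 # dollars saved per year
implied_hourly_rate = savings_per_year / hours_per_year
print(f"implied wage: ${implied_hourly_rate:.2f}/hour")  # about $1.92/hour
# Far below almost anyone's value of an hour, which points to answer (c).
```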
Because the driver expects that the consequences of running you over will be asymmetrically bad for them (and you), compared to the rest of humanity. Actions that take humanity down with you perversely feel less urgent to avoid.
Yeah I was off base there. The Nash Equilibrium is nontrivial because some players will challenge themselves to “win” by tricking the group with button access to push it. Plus probably other reasons I haven’t thought of.
Right now it seems like the Nash equilibrium is pretty stable at everyone not pressing the button. Maybe we can simulate adding in some lower-priority yet still compelling pressure to press the button, analogous to Petrov’s need to follow orders or the US’s need to prevent Russians from stationing nuclear missiles in Cuba.
Wow A+ answer, thanks!
Now that so much of California has burned, does that mean we’re in good shape for a few years of mild fire seasons?
Thanks for the informative and easily-readable summary! This makes me wish Blinkist would add a checkbox to enable good epistemology in their book summaries. Or more plausibly, makes me want to contribute some summaries here too.
Re examples of toy examples with moving parts:
Andy Grove’s classic book High Output Management starts with the example of a diner that has to produce breakfasts with cooked eggs, and keeps referring to it to teach management concepts.
Minute Physics introduces a “Spacetime Globe” to visualize spacetime (the way a globe visualizes the Earth’s surface) and refers to it often starting at 3:25 in this video: https://youtu.be/1rLWVZVWfdY
My favorite part was the advice to highlight what’s important, and it helped that you applied your own advice by highlighting that the most important part of your lesson is the advice to highlight the most important part of your lesson.
I’ve previously attempted to elaborate on why examples are helpful for teaching: https://www.lesswrong.com/posts/CD2kRisJcdBRLhrC5/the-power-to-teach-concepts-better
You can make money from an out-of-the-money short call even if the stock goes up
Oh so in this case you're selling a call, but you can't be said to be "shorting the stock" because you still benefit from a higher price?
Nice ones. The first is probably the one that most accounts for funds like Titan marketing themselves misleadingly (IMO), but the others are still important caveats of the definition and good to know.
You are allowed to be bearish at times, but it's better to sell calls or buy anticorrelated bonds and continue to collect the risk premium, than to short the stocks and be on the hook for the dividends or buyouts.
Doesn’t “sell calls” mean the same thing as “short the stocks”?
I’ve been wondering what the caveats are with relying on the Sharpe ratio to measure how much risk was taken to get an investment’s returns.
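For reference, the standard textbook definition (stated here just so the question is concrete) is excess return per unit of volatility:

$$\text{Sharpe ratio} = \frac{E[R_p - R_f]}{\sigma_p}$$

where $R_p$ is the portfolio return, $R_f$ the risk-free rate, and $\sigma_p$ the standard deviation of the excess returns.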
For example, Titan touts a high Sharpe ratio, and frames its marketing like it’s better than the S&P in every way with no downside: see https://www.lesswrong.com/posts/59oPYfFJjYn3BBBwi/titan-the-wealthfront-of-active-stock-picking-what-s-the
But doesn’t the EMH imply that all Sharpe ratios will tend toward the same average value in the long run, i.e. that no one can have a sufficiently replicable strategy that gives more returns without more risk?
And in the case of Titan, is the “catch” to their Sharpe ratio that they have higher downside exposure to momentum reversal and multiple contraction?
Hm ya I guess the causal chain from sex to babies (even from sex to visible pregnancy) is so stretched out in time that it’s tough to make a brain want to “make babies”.
But I don’t think the computational intractability of how actions affect inclusive genetic fitness is quite why evolution made such crude heuristics. Because if a brain understood that it was trying to maximize that quantity, I think it could figure out “have a lot of sex” as a heuristic approach without evolution hard-coding it in. And I think humans actually do have some level of in-brain goals to have more descendants beyond just having more sex. So I think these things like sex pleasure are just performance optimizations for a mentally tractable challenge.
E.g. snakes quickly triggering a fear reflex
Thanks for the ELI12, much appreciated.
evolution's objective of "maximize inclusive genetic fitness" is quite simple, but it is still not represented explicitly because figuring out how actions affect the objective is computationally hard
This doesn’t seem like the bottleneck in many situations in practice. For example, a lot of young men feel like they want to have as much sex as possible, but not father as many kids as possible. I’m not sure exactly what the reason is, but I don’t think it’s the computational difficulty of representing having kids vs. having sex, because humans already build a world model containing the concept of “my kids”.
It seems to me that one under-appreciated aspect of Inner Alignment is that, even if one had the one-true-utility-function-that-is-all-you-need-to-program-into-AI, this would not, in fact, solve the alignment problem, nor even the intent-alignment part. It would merely solve outer alignment (provided the utility function can be formalized).
Damn, yep I for one under-appreciated this for the past 12 years.
What else have people said on this subject? Do folks think that scenarios where we solve outer alignment most likely involve us not having to struggle much with inner alignment? Because fully solving outer alignment implies a lot of deep progress in alignment.