Posts

Dissolving the Problem of Induction 2020-12-27T17:58:27.536Z
Are aircraft carriers super vulnerable in a modern war? 2020-09-20T18:52:29.270Z
Titan (the Wealthfront of active stock picking) - What's the catch? 2020-08-06T01:06:04.599Z
Asset Prices Consistently Violate Efficient Market Hypothesis 2020-07-28T14:21:15.220Z
Half-Baked Products and Idea Kernels 2020-06-24T01:00:20.466Z
Liron's Shortform 2020-06-09T12:27:51.078Z
How does publishing a paper work? 2020-05-21T12:14:17.589Z
Isn't Tesla stock highly undervalued? 2020-05-18T01:56:58.415Z
How About a Remote Variolation Study? 2020-04-03T12:04:04.439Z
How to Frame Negative Feedback as Forward-Facing Guidance 2020-02-09T02:47:37.230Z
The Power to Draw Better 2019-11-18T03:06:02.832Z
The Thinking Ladder - Wait But Why 2019-09-29T18:51:00.409Z
Is Specificity a Mental Model? 2019-09-28T22:53:56.886Z
The Power to Teach Concepts Better 2019-09-23T00:21:55.849Z
The Power to Be Emotionally Mature 2019-09-16T02:41:37.604Z
The Power to Understand "God" 2019-09-12T18:38:00.438Z
The Power to Solve Climate Change 2019-09-12T18:37:32.672Z
The Power to Make Scientific Breakthroughs 2019-09-08T04:14:14.402Z
Examples of Examples 2019-09-06T14:04:07.511Z
The Power to Judge Startup Ideas 2019-09-04T15:07:25.486Z
How Specificity Works 2019-09-03T12:11:36.216Z
The Power to Demolish Bad Arguments 2019-09-02T12:57:23.341Z
Specificity: Your Brain's Superpower 2019-09-02T12:53:55.022Z
What are the biggest "moonshots" currently in progress? 2019-09-01T19:41:22.556Z
Is there a simple parameter that controls human working memory capacity, which has been set tragically low? 2019-08-23T22:10:40.154Z
Is the "business cycle" an actual economic principle? 2019-06-18T14:52:00.348Z
Is "physical nondeterminism" a meaningful concept? 2019-06-16T15:55:58.198Z
What's the most annoying part of your life/job? 2016-10-23T03:37:55.440Z
Quick puzzle about utility functions under affine transformations 2016-07-16T17:11:25.988Z
You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides] 2015-08-16T05:51:51.459Z
Wisdom for Smart Teens - my talk at SPARC 2014 2015-02-09T18:58:17.449Z
A proposed inefficiency in the Bitcoin markets 2013-12-27T03:48:56.031Z
Atkins Diet - How Should I Update? 2012-06-11T21:40:14.138Z
Quixey Challenge - Fix a bug in 1 minute, win $100. Refer a winner, win $50. 2012-01-19T19:39:58.264Z
Quixey is hiring a writer 2012-01-05T06:22:06.326Z
Quixey - startup applying LW-style rationality - hiring engineers 2011-09-28T04:50:45.130Z
Quixey Engineering Screening Questions 2010-10-09T10:33:23.188Z
Bloggingheads: Robert Wright and Eliezer Yudkowsky 2010-08-07T06:09:32.684Z
Selfishness Signals Status 2010-03-07T03:38:30.190Z
Med Patient Social Networks Are Better Scientific Institutions 2010-02-19T08:11:21.500Z
What is the Singularity Summit? 2009-09-16T07:18:06.675Z
You Are A Brain 2009-05-09T21:53:26.771Z

Comments

Comment by liron on Covid 1/14: To Launch a Thousand Shipments · 2021-01-16T18:12:48.988Z · LW · GW

Another amazing post. How long does each of these take you to make? Seems like it would be a full-time job.

Comment by liron on The Power to Teach Concepts Better · 2021-01-12T15:53:02.748Z · LW · GW

Thanks :) Hmm I think all I can point you to is this tweet.

Comment by liron on The Power to Demolish Bad Arguments · 2021-01-12T02:54:56.868Z · LW · GW

I <3 Specificity

For years, I've been aware of myself "activating my specificity powers" multiple times per day, but it's kind of a lonely power to have. "I'm going to swivel my brain around and ride it in the general→specific direction. Care to join me?" is not something you can say in most group settings. It's hard to explain to people that I'm not just asking them to be specific right now, in this one context. I wish I could make them see that specificity is just this massively under-appreciated cross-domain power. That's why I wanted this sequence to exist.

I gratuitously violated a bunch of important LW norms

  1. As Kaj insightfully observed last year, choosing Uber as the original post's object-level subject made it a political mind-killer.
  2. On top of that, the original post's only role model of a specificity-empowered rationalist was this repulsive "Liron" character who visibly got off on raising his own status by demolishing people's claims.

Many commenters took me to task on the two issues above and raised other valid issues, like whether the post implies that specificity is always the right power to activate in every situation.

The voting for this post was probably a rare combination: many upvotes, many downvotes, and presumably many conflicted non-voters who liked the core lesson but didn't want to upvote the norm violations. I'd love to go back in time and launch this again without the double norm violation self-own.

I'm revising it

Today I rewrote a big chunk of my dialogue with Steve, with the goal of making my character a better role model of a LessWrong-style rationalist and making the whole thing more clearly explained. For example, in the revised version I talk about how asking Steve to clarify his specific point isn't a sneaky fully-general argument trick to prove that Steve's wrong and I'm right, but rather the first step on the road to Double Crux.

I also changed Steve's claim to be about a fictional company called Acme, instead of talking about the politically-charged Uber.

I think it's worth sharing

Since writing this last year, I've received a dozen or so messages from people thanking me and remarking that they think about it surprisingly often in their daily lives. I'm proud to help teach the world about specificity on behalf of the LW community that taught it to me, and I'm happy to revise this further to make it something we're proud of.

Comment by liron on The Power to Demolish Bad Arguments · 2021-01-12T00:39:02.494Z · LW · GW

Ok, I finally made this edit. Wish I'd done it sooner!

Comment by liron on The Power to Demolish Bad Arguments · 2021-01-12T00:38:18.511Z · LW · GW

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Comment by liron on The Power to Demolish Bad Arguments · 2021-01-12T00:37:18.461Z · LW · GW

Glad to hear you feel I've addressed the Combat Culture issues. I think those were the lowest-hanging fruit that everyone agreed on, including me :)

As for the first point, I guess this is the same thing we had a long comment thread about last year, and I'm not sure how much our views diverge at this point...

Let's take this paragraph you quoted: "It sounds meaningful, doesn’t it? But notice that it’s generically-worded and lacks any specific examples. This is a red flag." Do you not agree with my point that Seibel should have endeavored to be more clear in his public statement?

Comment by liron on The Power to Demolish Bad Arguments · 2021-01-11T22:26:03.955Z · LW · GW

Zvi, I respect your opinion a lot and I've come to accept that the tone disqualifies the original version from being a good representation of LW. I'm working on a revision now.

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Comment by liron on The Power to Demolish Bad Arguments · 2021-01-11T21:14:02.446Z · LW · GW

Thanks for the feedback. I agree that the tone of the post has been undermining its content. I'm currently working on editing this post to blast away the gratuitously bad-tone parts :)

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Comment by liron on The Power to Demolish Bad Arguments · 2021-01-11T01:40:56.362Z · LW · GW

Meta-level reply

The essay gave me a yucky sense of "rationalists try to prove their superiority by creating strawmen and then beating them in arguments", sneer culture, etc. It doesn't help that some of its central examples involve hot-button issues on which many readers will have strong and yet divergent opinions, which imo makes them rather unsuited as examples for teaching most rationality techniques or concepts

Yeah, I take your point that the post's tone and political-ish topic choice undermine the ability of readers to absorb its lessons about the power of specificity. This is a clear message I've gotten from many commenters, whether explicitly or implicitly. I shall edit the post.

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Object-level reply

In the meantime, I still think it's worth pointing out where I think you are, in fact, analyzing the content wrong and not absorbing its lessons :)

For instance, I read the "Uber exploits its drivers" example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart's arguments apart

My dialogue character has various positive-affect a-priori beliefs about Uber, but having an a-priori belief state isn't the same thing as having an immutable bottom line. If Steve had put forth a coherent claim, and a shred of support for that claim, then the argument would have left me with a modified a-posteriori belief state.

In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit

My character is making a good-faith attempt at Double Crux. It's just impossible for me to ascertain Steve's claim-underlying crux until I first ascertain Steve's claim.

even if we "demolish" our counterpart's supposedly bad arguments, at best we discover that they could not shift our priors.

You seem to be objecting that selling "the power to demolish bad arguments" means that I'm selling a Fully General Counterargument, but I'm not. The way this dialogue goes isn't representative of every possible dialogue where the power of specificity is applied. If Steve's claim were coherent, then asking him to be specific would end up helping me change my own mind faster and demolish my own a-priori beliefs.

reversed stupidity is not intelligence

It doesn't seem relevant to mention this. In the dialogue, there's no instance of me creating or modifying my beliefs about Uber by reversing anything.

all the while insulting this fictitious person with asides like "By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.".

I'm making an example out of Steve because I want to teach the reader about an important and widely-applicable observation about so-called "intellectual discussions": that participants often win over a crowd by making smart-sounding general assertions whose corresponding set of possible specific interpretations is the empty set.

Comment by liron on Dissolving the Problem of Induction · 2020-12-29T21:13:06.544Z · LW · GW

Curve fitting isn't Problematic. The reason it's usually a good best guess that points will keep fitting a curve (though wrong a significant fraction of the time) is because we can appeal to a deeper hypothesis that "there's a causal mechanism generating these points that is similar across time". When we take our time and do actual science on our universe, our theories tell us that the universe has time-similar causal structures all over the place. Actual science is what licenses quick&dirty science-like heuristics.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T23:22:31.079Z · LW · GW

Just because curve fitting is one way you can produce a shallow candidate model to generate your predictions, that doesn't mean "induction is needed" in the original problematic sense, especially since a theory that doesn't rely on mere curve fitting will probably come along and beat out the curve-fitting approach.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T20:45:39.323Z · LW · GW

I think at best you can say Deutsch dissolves the problem for the project of science

Ok I think I'll accept that, since "science" is broad enough to be the main thing we or a superintelligent AI cares about.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T17:51:15.243Z · LW · GW

Since "no one believes that induction is the sole source of scientific explanations", and we understand that scientific theories win by improving on their competitors in compactness, then the Problem of Induction that Russell perceived is a non-problem. That's my claim. It may be an obvious claim, but the LW sequences didn't seem to get it across.

You seem to be saying that induction is relevant to curve fitting. Sure, curve fitting is one technique for generating theories, but it tends to be eventually outcompeted by other techniques, so that we get superseding theories with reductionist explanations. I don't think curve fitting necessarily needs to play a major role in the discussion of dissolving the Problem of Induction.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T13:40:23.326Z · LW · GW

Ah yeah. Interesting how all the commenters here are talking about how this topic is quite obvious and settled, yet not saying the same things :)

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T13:37:57.150Z · LW · GW

Theories of how quarks, electromagnetism and gravity produce planets with intelligent species on them are scientific accomplishments by virtue of the compression they achieve, regardless of why quarks appear to be a thing.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T12:16:46.398Z · LW · GW

If we reverse-engineer an accurate compressed model of what the universe appears like to us in the past/present/future, that counts as science.

If you suspect (as I do) that we live in a simulation, then this description applies to all the science we've ever done. If you don't, you can at least imagine that intelligent beings embedded in a simulation that we build can do science to figure out the workings of their simulation, whether or not they also manage to do science on the outer universe.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T12:10:30.362Z · LW · GW

Justifying that blue is an a-priori more likely concept than grue is part of the remaining problem of justifying Occam's Razor. What we don't have to justify is the wrong claim that science operates based on generalized observations of similarity.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T02:19:56.049Z · LW · GW

your claim is that if we admit that the universe follows these patterns then this automatically means that these patterns will apply in the future.

Yeah. My point is that the original statement of the Problem of Induction was naive in two ways:

  1. It invokes "similarity", "resemblance", and "collecting a bunch of confirming observations"
  2. It talks about "the future resembling the past"

#1 is the more obviously naive part. #2's naivety is what I explain in this post's "Not About Past And Future" section. Once one abandons naive conceptions #1 and #2 by understanding how science actually works, one reduces the Problem of Induction to the more tractable Problem of Occam's Razor.

I don't think we know that the universe follows these patterns as opposed to appearing to follow these patterns.

Hm, I see this claim as potentially beyond the scope of a discussion of the Problem of Induction.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T02:11:48.378Z · LW · GW

Well, I hope this post can be useful as a link you can give to explain the LW community's mostly shared view about how one resolves the Problem of Induction. I wrote it because I think the LW Sequences' treatment of the Problem of Induction is uncharacteristically off the mark.

Comment by liron on Dissolving the Problem of Induction · 2020-12-28T01:33:34.921Z · LW · GW

If I have two different data and compress them well among each of them I would not expect those compressions to be similar or the same.

If I drop two staplers, I can give the same compressed description of the data from their two trajectories: "uniform downward acceleration at close to 9.8 meters per second squared".
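
To make the compression point concrete, here's a minimal Python sketch; the 9.8 m/s² figure is from the example above, while the drop heights and time steps are made-up illustrative values.

```python
# One short model ("constant downward acceleration g") compresses the data from
# both stapler drops; only the initial height differs between the trajectories.
G = 9.8  # m/s^2, the shared compressed description

def height(y0, t):
    """Predicted height (metres) of a dropped object after t seconds, ignoring air resistance."""
    return y0 - 0.5 * G * t**2

for y0 in (1.0, 2.0):  # two hypothetical drop heights in metres
    print([round(height(y0, t), 3) for t in (0.0, 0.1, 0.2, 0.3)])
```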

But then the fence can suddenly come to an end or make an unexpected 90 degree turn. How many posts do you need to see to reasonably conclude that post number #5000 exists?

If I found the blueprint for the fence lying around, I'd assign a high probability that the number of fenceposts is what's shown in the blueprint, minus any that might be knocked over or stolen. Otherwise, I'd start with my prior knowledge of the distribution of sizes of fences, and update according to any observations I make about which reference class of fence this is, and yes, how many posts I've encountered so far.

It seems like you haven't gotten on board with science being a reverse-engineering process that outputs predictive models. But I don't think this is a controversial point here on LW. Maybe it would help to clarify that a "predictive model" outputs probability distributions over outcomes, not predictions of single forced outcomes?
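
To make "outputs probability distributions over outcomes" concrete, here's a toy Bayesian sketch for the fence example, with a made-up flat prior over fence sizes and small numbers standing in for #5000:

```python
# Toy predictive model: posterior over the total number of fenceposts, given the
# observation that the fence has at least 40 posts. The flat prior over 1..100
# and the numbers 40 and 50 are made-up illustrative choices.
sizes = range(1, 101)
prior = {n: 1 / 100 for n in sizes}

observed_at_least = 40
likelihood = {n: 1.0 if n >= observed_at_least else 0.0 for n in sizes}

unnormalized = {n: prior[n] * likelihood[n] for n in sizes}
z = sum(unnormalized.values())
posterior = {n: p / z for n, p in unnormalized.items()}

# The model doesn't force a single prediction; it assigns a probability that
# post #50 exists given what's been seen so far:
print(round(sum(p for n, p in posterior.items() if n >= 50), 3))  # 0.836
```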

Comment by liron on Dissolving the Problem of Induction · 2020-12-27T21:23:47.519Z · LW · GW

To clarify, what I think is underappreciated (and what's seemingly being missed in Eliezer's statement about his belief that the future is similar to the past) isn't that justifying an Occamian prior is necessary or equivalent to solving the original Problem of Induction, but that it's a smaller and more tractable problem that is sufficient to resolve everything that needs to be resolved.

Edit: I've expanded on the Problem of Occam's Razor section in the post:

In my view, it's a significant and under-appreciated milestone that we've reduced the original Problem of Induction to the problem of justifying Occam's Razor. We've managed to drop two confusing aspects from the original PoI:

  1. We don't have to justify using "similarity", "resemblance", or "collecting a bunch of confirming observations", because we know those things aren't key to how science actually works.
  2. We don't have to justify "the future resembling the past" per se. We only have to justify that the universe allows intelligent agents to learn probabilistic models that are better than maximum-entropy belief states.

Comment by liron on The First Sample Gives the Most Information · 2020-12-26T18:19:18.128Z · LW · GW

Agree. Not only is asking “what’s an example” generally highly productive, it’s about 80% as productive as asking “what are two examples”.

Comment by liron on 100 Tips for a Better Life · 2020-12-25T22:16:34.432Z · LW · GW

I’m not a gamer. Having a ton of screen real estate makes me more productive by letting me keep a bunch of windows visible in the same fixed locations.

Re paying a premium, I don’t think I am; the Samsung monitor is one of the cheapest well-reviewed curved monitors I found at that resolution.

Comment by liron on 100 Tips for a Better Life · 2020-12-23T14:50:28.113Z · LW · GW

5. If your work is done on a computer, get a second monitor. Less time navigating between windows means more time for thinking. 

Agree. I'm stacking two of these bad boys: https://www.amazon.com/gp/product/B07L9HCJ2V

For most professionals, spending $2k is cheap if it buys even a 5% more productive computing experience.
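
As a back-of-the-envelope check on that claim (the $80k fully-loaded cost figure is an assumption, not from the post):

```python
# Rough payback math; the salary figure is a made-up assumption.
annual_comp       = 80_000  # hypothetical fully-loaded annual cost of a professional
productivity_gain = 0.05    # the 5% figure mentioned above
monitor_cost      = 2_000   # the ~$2k figure mentioned above

annual_value = annual_comp * productivity_gain
print(annual_value)                 # 4000.0 of value per year
print(monitor_cost / annual_value)  # 0.5, i.e. pays for itself in about 6 months
```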

Comment by liron on To listen well, get curious · 2020-12-13T20:06:35.655Z · LW · GW

I agree with your main idea about how curiosity is related to listening well.

The post’s first sentence implies that the thesis will be a refutation of a different claim:

A common piece of interacting-with-people advice goes: “often when people complain, they don’t want help, they just want you to listen!”

The claim still seems pretty true in my experience: sometimes people have a sufficient handle on their problem and don't want help dealing with it better, but do want some empathy, appreciation, or other benefits from communicating their problem in the form of a complaint.

Comment by liron on Anti-EMH Evidence (and a plea for help) · 2020-12-06T22:59:06.013Z · LW · GW

Technically I bought these at slightly above NAV, and brought their effective prices below NAV by selling November call options against them.

How does that work and what’s the downside of that trade?
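
My rough reading of the mechanics being asked about, with made-up numbers (the NAV, purchase price, and premium below are hypothetical): buying slightly above NAV and then selling a call against the shares lowers the effective cost basis by the premium received, at the cost of capping the upside at the strike.

```python
# Hypothetical covered-call arithmetic; all dollar figures are made up.
nav            = 10.00  # net asset value per share
purchase_price = 10.05  # bought slightly above NAV
call_premium   = 0.15   # received for selling a November call against the shares

effective_price = purchase_price - call_premium
print(round(effective_price, 2))  # 9.90, i.e. below NAV
# The downside of the trade: gains above the call's strike go to the call buyer,
# while the full downside of the shares is still yours.
```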

Comment by liron on Embedded Interactive Predictions on LessWrong · 2020-11-21T14:05:26.302Z · LW · GW

This feature seems to be making the page wider and allowing horizontal scrolling on my mobile (iPhone) which degrades the post reading experience. I would prefer if the interface got shrunk down to fit the phone’s width.

Comment by liron on Working in Virtual Reality: A Review · 2020-11-21T13:56:53.502Z · LW · GW

Thanks for this post! Interesting to learn about the current state of things.

It does seem true (and funny) to me that the #1 thing in physical reality I and millions of others would like to experience in Virtual Reality is our computer screens.

Comment by liron on Where do (did?) stable, cooperative institutions come from? · 2020-11-06T11:17:39.608Z · LW · GW

Consider this analogy: Professional basketball teams are much better than hobby league teams because they have a much stronger talent pool and incentive feedback loop. Yet individual teams rise and fall within their league, because it’s a competitive ecosystem. Business is currently the pro league for brainpower, but individual companies still rise and fall within that league.

Business is also a faster-changing game than basketball because consumer preferences, supplier offerings and technological progress are all moving targets. So a company full of really smart people will still often find itself much less competitive than it used to be.

Companies like Yahoo that fall too far stop being able to generate large profits and attract top talent, and eventually go extinct. The analogy with sports teams breaks here because many sports leagues give their worst teams some advantages to rebuild themselves, while failing companies just go out of business.

GM, IBM and AT&T are teams who have fallen in the league rankings, but if they’re still operating in the market then they’re still effectively competing for talent and their higher-paid positions still probably have correspondingly higher average IQ.

The NYT is a case where the competitive ecosystem shifted drastically, and the business successfully continued optimizing for profit within the new ecosystem. Before the internet, when information was a scarce resource, the NYT’s value prop was information creation and distribution, with a broad audience, and paid for by broad-targeted ads. Now their value prop is more focused on advocacy of the viewpoints of its narrower subscriber base, paid for by that subscriber base. The governing board of the NYT may care about neutral news reporting, but they also care a lot about profit, so they consider the NYT’s changes to be good tradeoffs.

If you think of the NYT as a public service providing neutral reporting, then yes, that service has been crumbling, and no company will replace it in that role (the way IBM's services are getting replaced by superior alternatives). The NYT wasn't designed with the right incentive feedback loops for providing neutral reporting; it was designed as a profit-maximizing company, and profit only temporarily coincided with providing neutral reporting.

Comment by liron on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T02:40:55.147Z · LW · GW

The highest-quality organizations today (not sure if they're "institutions") are the big companies like Amazon and Google. By "high quality" I mean they create lots of value, with a high value-per-(IQ-weighted)-employee ratio.

Any institution that does a big job, like government, has lots of leverage on the brainpower of its members and should be able to create lots of value. E.g. a few smart people empowered to design a new government healthcare system could potentially create a $trillion of value. But the for-profit companies are basically the only ones who actually do consistently leverage the brainpower and create $trillions of value. This is because they're the only ones who make a sustained effort to win the bidding war for brainpower, and manage it with sufficiently tight feedback cycles.

Another example of a modern high-quality institution that comes to mind, which isn't a for-profit company, is Wikipedia. Admittedly no one is bidding money for that talent, so my model would predict that Wikipedia should suffer a brain drain, and in fact I do think my model explains why the percentage of people who are motivated to edit Wikipedia is low. But it seems like there's a small handful of major Wikipedia editors addicted to doing it as a leisure activity. The key to Wikipedia working well without making its contributors rich is that the fundamental unit of value is simple enough to have a tight feedback loop, so that it can lodge in a few smart people's minds as an "addictive game". You make an edit and it's pretty clear whether you've followed the rule of "improve the article in some way". Repeat, and watch your reputation score (number of edits, number of article views) tick steadily up.

So my model is that successful institutions are powered by smart people with reward feedback loops that keep them focused on a mission, and companies are attracting almost all the smart people, but there are still a few smart people pooled in other places like Wikipedia which use a reward feedback loop to get lots of value from the brainpower they have.

Re subcultures and hobby groups: I don't know, I don't even have a sense of whether they're on an overall trend of getting better or worse.

Comment by liron on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T23:04:08.217Z · LW · GW

Institutions have been suffering a massive brain drain ever since the private sector shot way ahead at providing opportunities for smart people.

Think of any highly capable person who could contribute high-quality socially-valuable work to an important institution like the New York Times, CDC, Federal Highway Administration, city council, etc. What's the highest-paying, highest-fun career opportunity for such a person? Today, it's probably the private sector.

Institutions can't pay much because they don't have feedback loops that can properly value an individual's contribution. For example, if you work for the CDC and are largely the one responsible for saving 100,000 lives, you probably won't get a meaningful raise or even much status boost, compared to someone who just thrives on office politics and doesn't save any net lives.

In past decades, the private sector had the same problem as institutions: it was unable to attribute disproportionate value to most people's work. So in past decades, a typical smart person could get a competitive job offer from an institution. In that scenario, they might pick the institution because their passion for a certain type of work, the pride of doing it well, and the pride of public service, on top of the competitive compensation and promotion opportunities, made it the most attractive career option.

But now we're in a decades-long trend where the private sector has shot way ahead of institutions in its ability to offer smart people a good job. There are many rapidly-scaling tech(-enabled) companies, it's increasingly common for the top 10% of workers to contribute 1,000%+ as much value as the median worker in their field, and companies are getting better at making higher-paying job offers to people based on their level of capability.

We see institutions do increasingly stupid things because the kind of smart people who used to be there are instead doing private-sector stuff.

The coordination problem of "fixing institutions" reduces to the coordination problem of designing institutions whose pay scale is calibrated to the amount of social good that smart people do when working there, relative to private sector jobs. The past gave us this scenario accidentally, but no such luck in the present.

Comment by liron on Does playing hard to get work? AB testing for romance · 2020-10-26T13:59:59.248Z · LW · GW

Cofounder of Relationship Hero here. There's a sound underlying principle of courtship that PHTG taps into: That if your partner models you as someone with a high standard that they need to put in effort to meet, then they'll be more attracted to you.

The problem with trying to apply any dating tactic, even PHTG, is that courtship is a complex game with a lot of state and context. It's very common to be uncalibrated and apply a tactic that backfires on you because you weren't aware of the overall mental model that your partner had of you. So I'd have to observe your interactions and confirm that "being too easy" is a sufficiently accurate one-dimensional projection of where you currently stand with your partner, before recommending this one particular tactic.

Instead of relying on a toolbag of one-dimensional tactics, my recommended approach is to focus on understanding your partner's mental model of you, and of their relationship with you, and of the relationship they'd want. Then you can strategize how to get the relationship you want, assuming it's compatible with a kind of relationship they'd also want.

Comment by liron on Rationality and Climate Change · 2020-10-06T00:43:49.043Z · LW · GW

I agree with the other answers that say climate change is a big deal and risky and worth a lot of resources and attention, but it’s already getting a lot of resources and attention, and it’s pretty low as an existential threat.

Also, my impression is that there are important facts about how climate change works that are almost never mentioned. For example, this claim that additional CO2 has a diminishing greenhouse effect: https://wattsupwiththat.com/2014/08/10/the-diminishing-influence-of-increasing-carbon-dioxide-on-temperature/

Also, I think most of the activism I see around climate change is dumb and counterproductive and moralizing, e.g. encouraging personal lifestyle sacrifices.

Comment by liron on "Zero Sum" is a misnomer. · 2020-10-01T13:30:24.083Z · LW · GW

I think they mean that ad tech (or perhaps a more consensus example is nukes) is a prisoner’s dilemma, which is nonzero sum as opposed to positive/negative/constant/zero sum.
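
A quick way to see the nonzero-sum point, using the standard textbook prisoner's-dilemma payoffs (the specific numbers aren't from the comment): the players' totals differ across outcomes, so the game isn't constant-sum, let alone zero-sum.

```python
# Classic prisoner's dilemma payoffs (row player, column player); C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# In a constant-sum (e.g. zero-sum) game these totals would all be equal.
for outcome, (a, b) in payoffs.items():
    print(outcome, a + b)  # 6, 5, 5, 2 -> not constant, so the game is nonzero-sum
```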

Comment by liron on Open & Welcome Thread - September 2020 · 2020-10-01T12:28:19.543Z · LW · GW

Golden raises $14.5M. I wrote about Golden here as an example of the most common startup failure mode: lacking a single well-formed use case. I’m confused about why someone as savvy as Marc Andreessen is tripling down and joining their board. I think he’s making a mistake.

Comment by liron on What are good rationality exercises? · 2020-09-28T15:30:20.488Z · LW · GW

I was thinking that if the sequences and other LW classics were a high school class, we could make something like an SAT subject test to check understanding/fluency in the subject. That could then be a badge on the site and potentially a good credential to have in your career.

The kinds of questions could be like:

1.

If a US citizen has a legal way to save $500/year on their taxes, but it requires spending 1 hour a day filling out boring paperwork, 5 days a week, should they do it?

a. Virtually everyone should do it

b. A significant fraction (10-90%) of the population should do it

c. Virtually no one should do it

2.

With sufficient evidence and a rational deliberation process, is it possible to become sure that the Loch Ness Monster does/doesn't exist?

a. We CAN potentially become sure either way

b. We CAN'T potentially become sure either way

c. We can only potentially become sure that it DOES exist

d. We can only potentially become sure that it DOESN'T exist

Comment by liron on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T21:57:00.453Z · LW · GW

Because the driver expects that the consequences of running you over will be asymmetrically bad for them (and you), compared to the rest of humanity. Actions that take humanity down with you perversely feel less urgent to avoid.

Comment by liron on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T21:44:45.808Z · LW · GW

Yeah, I was off base there. The Nash equilibrium is nontrivial because some players will challenge themselves to “win” by tricking the group with button access into pushing it. Plus probably other reasons I haven’t thought of.

Comment by liron on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T08:48:08.848Z · LW · GW

Right now it seems like the Nash equilibrium is pretty stable at everyone not pressing the button. Maybe we can simulate adding in some lower-priority yet still compelling pressure to press the button, analogous to Petrov’s need to follow orders or the US’s need to prevent Russians from stationing nuclear missiles in Cuba.

Comment by liron on Are aircraft carriers super vulnerable in a modern war? · 2020-09-24T14:43:38.978Z · LW · GW

Wow A+ answer, thanks!

Comment by liron on Covid 9/17: It’s Worse · 2020-09-17T22:25:07.625Z · LW · GW

Now that so much of California has burned, does that mean we’re in good shape for a few years of mild fire seasons?

Comment by liron on Book Review: Working With Contracts · 2020-09-17T10:57:04.947Z · LW · GW

Thanks for the informative and easily-readable summary! This makes me wish Blinkist would add a checkbox to enable good epistemology in their book summaries. Or more plausibly, makes me want to contribute some summaries here too.

Comment by liron on How to teach things well · 2020-08-29T18:32:32.406Z · LW · GW

Re examples of toy examples with moving parts:

Andy Grove’s classic book High Output Management starts with the example of a diner that has to produce breakfasts with cooked eggs, and keeps referring to it to teach management concepts.

Minute Physics introduces a “Spacetime Globe” to visualize spacetime (the way a globe visualizes the Earth’s surface) and refers to it often starting at 3:25 in this video: https://youtu.be/1rLWVZVWfdY

Comment by liron on How to teach things well · 2020-08-29T18:24:41.595Z · LW · GW

My favorite part was the advice to highlight what’s important, and it helped that you applied your own advice by highlighting that the most important part of your lesson is the advice to highlight the most important part of your lesson.

I’ve previously attempted to elaborate on why examples are helpful for teaching: https://www.lesswrong.com/posts/CD2kRisJcdBRLhrC5/the-power-to-teach-concepts-better

Comment by liron on The Wrong Side of Risk · 2020-08-24T18:56:37.993Z · LW · GW

You can make money from an out-of-the-money short call even if the stock goes up

Oh, so in this case you're selling a call, but you can't be said to be "shorting the stock" because you can still make money even if the price goes up?
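
To check my understanding with a toy payoff comparison at expiry (the entry price, strike, and premium below are made-up numbers):

```python
# Toy comparison of a short out-of-the-money call vs. a short stock position at expiry.
entry   = 100.0  # stock price when both positions are opened (hypothetical)
strike  = 110.0  # out-of-the-money call strike (hypothetical)
premium = 2.0    # premium collected for selling the call (hypothetical)

def short_call_pnl(price):
    return premium - max(price - strike, 0.0)

def short_stock_pnl(price):
    return entry - price

for price in (95.0, 105.0, 115.0):
    print(price, short_call_pnl(price), short_stock_pnl(price))
# At 105 the stock has gone up: the short call still keeps its full +2 premium,
# while the short stock position is down -5. So selling a call isn't the same
# thing as shorting the stock.
```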

Comment by liron on What posts on finance would your find helpful or interesting? · 2020-08-24T18:54:43.188Z · LW · GW

Nice ones. The first is probably the one that most accounts for funds like Titan marketing themselves misleadingly (IMO), but the others are still important caveats of the definition and good to know.

Comment by liron on The Wrong Side of Risk · 2020-08-24T12:31:15.525Z · LW · GW

You are allowed to be bearish at times, but it's better to sell calls or buy anticorrelated bonds and continue to collect the risk premium, than to short the stocks and be on the hook for the dividends or buyouts.

Doesn’t “sell calls” mean the same thing as “short the stocks”?

Comment by liron on What posts on finance would your find helpful or interesting? · 2020-08-23T14:08:16.932Z · LW · GW

I’ve been wondering what the caveats are with relying on the Sharpe ratio to measure how much risk was taken to get an investment’s returns.

For example, Titan touts a high Sharpe ratio, and frames its marketing like it’s better than the S&P in every way with no downside: see https://www.lesswrong.com/posts/59oPYfFJjYn3BBBwi/titan-the-wealthfront-of-active-stock-picking-what-s-the

But doesn’t EMH imply that all Sharpe ratios long term will tend to the same average value, i.e. no one can have a sufficiently replicable strategy that gives more returns without more risk?

And in the case of Titan, is the “catch” to their Sharpe ratio that they have higher downside exposure to momentum reversal and multiple contraction?
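
For reference, here's the usual way an annualized Sharpe ratio gets computed from a return series — a sketch with made-up monthly returns and an assumed risk-free rate, not Titan's actual numbers:

```python
import statistics

# Hypothetical monthly returns (fractions, e.g. 0.02 = 2%); not real fund data.
monthly_returns = [0.03, -0.01, 0.02, 0.04, -0.02, 0.01,
                   0.03, 0.00, 0.02, -0.01, 0.05, 0.01]
risk_free_monthly = 0.001  # assumed monthly risk-free rate

excess = [r - risk_free_monthly for r in monthly_returns]
sharpe_annualized = (statistics.mean(excess) / statistics.stdev(excess)) * 12 ** 0.5
print(round(sharpe_annualized, 2))

# The caveat being asked about: the denominator only reflects realized volatility,
# so a strategy with rare large drawdowns (e.g. from momentum reversal or multiple
# contraction) can show a high Sharpe ratio right up until the tail risk hits.
```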

Comment by liron on Inner Alignment: Explain like I'm 12 Edition · 2020-08-09T11:36:41.436Z · LW · GW

Hm ya I guess the causality between sex and babies (even sex and visible pregnancy) is so far away in time that it’s tough to make a brain want to “make babies”.

But I don’t think the computational intractability of how actions affect inclusive genetic fitness is quite why evolution made such crude heuristics. If a brain understood that it was trying to maximize that quantity, I think it could figure out “have a lot of sex” as a heuristic approach without evolution hard-coding it in. And I think humans actually do have some level of in-brain goals to have more descendants, beyond just having more sex. So I think things like sex pleasure are just performance optimizations for a mentally tractable challenge.

E.g., the way snakes quickly trigger a fear reflex.

Comment by liron on Inner Alignment: Explain like I'm 12 Edition · 2020-08-08T23:22:50.586Z · LW · GW

Thanks for the ELI12, much appreciated.

evolution's objective of "maximize inclusive genetic fitness" is quite simple, but it is still not represented explicitly because figuring out how actions affect the objective is computationally hard

This doesn’t seem like the bottleneck in many situations in practice. For example, a lot of young men feel like they want to have as much sex as possible, but not father as many kids as possible. I’m not sure exactly what the reason is, but I don’t think it’s the computational difficulty of representing having kids vs. having sex, because humans already build a world model containing the concept of “my kids”.

It seems to me that one under-appreciated aspect of Inner Alignment is that, even if one had the one-true-utility-function-that-is-all-you-need-to-program-into-AI, this would not, in fact, solve the alignment problem, nor even the intent-alignment part. It would merely solve outer alignment (provided the utility function can be formalized).

Damn, yep I for one under-appreciated this for the past 12 years.

What else have people said on this subject? Do folks think that scenarios where we solve outer alignment most likely involve us not having to struggle much with inner alignment? Because fully solving outer alignment implies a lot of deep progress in alignment.