Posts

Seeing Status Quo Bias 2021-03-08T00:24:19.666Z
Dissolving the Problem of Induction 2020-12-27T17:58:27.536Z
Are aircraft carriers super vulnerable in a modern war? 2020-09-20T18:52:29.270Z
Titan (the Wealthfront of active stock picking) - What's the catch? 2020-08-06T01:06:04.599Z
Asset Prices Consistently Violate Efficient Market Hypothesis 2020-07-28T14:21:15.220Z
Half-Baked Products and Idea Kernels 2020-06-24T01:00:20.466Z
Liron's Shortform 2020-06-09T12:27:51.078Z
How does publishing a paper work? 2020-05-21T12:14:17.589Z
Isn't Tesla stock highly undervalued? 2020-05-18T01:56:58.415Z
How About a Remote Variolation Study? 2020-04-03T12:04:04.439Z
How to Frame Negative Feedback as Forward-Facing Guidance 2020-02-09T02:47:37.230Z
The Power to Draw Better 2019-11-18T03:06:02.832Z
The Thinking Ladder - Wait But Why 2019-09-29T18:51:00.409Z
Is Specificity a Mental Model? 2019-09-28T22:53:56.886Z
The Power to Teach Concepts Better 2019-09-23T00:21:55.849Z
The Power to Be Emotionally Mature 2019-09-16T02:41:37.604Z
The Power to Understand "God" 2019-09-12T18:38:00.438Z
The Power to Solve Climate Change 2019-09-12T18:37:32.672Z
The Power to Make Scientific Breakthroughs 2019-09-08T04:14:14.402Z
Examples of Examples 2019-09-06T14:04:07.511Z
The Power to Judge Startup Ideas 2019-09-04T15:07:25.486Z
How Specificity Works 2019-09-03T12:11:36.216Z
The Power to Demolish Bad Arguments 2019-09-02T12:57:23.341Z
Specificity: Your Brain's Superpower 2019-09-02T12:53:55.022Z
What are the biggest "moonshots" currently in progress? 2019-09-01T19:41:22.556Z
Is there a simple parameter that controls human working memory capacity, which has been set tragically low? 2019-08-23T22:10:40.154Z
Is the "business cycle" an actual economic principle? 2019-06-18T14:52:00.348Z
Is "physical nondeterminism" a meaningful concept? 2019-06-16T15:55:58.198Z
What's the most annoying part of your life/job? 2016-10-23T03:37:55.440Z
Quick puzzle about utility functions under affine transformations 2016-07-16T17:11:25.988Z
You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides] 2015-08-16T05:51:51.459Z
Wisdom for Smart Teens - my talk at SPARC 2014 2015-02-09T18:58:17.449Z
A proposed inefficiency in the Bitcoin markets 2013-12-27T03:48:56.031Z
Atkins Diet - How Should I Update? 2012-06-11T21:40:14.138Z
Quixey Challenge - Fix a bug in 1 minute, win $100. Refer a winner, win $50. 2012-01-19T19:39:58.264Z
Quixey is hiring a writer 2012-01-05T06:22:06.326Z
Quixey - startup applying LW-style rationality - hiring engineers 2011-09-28T04:50:45.130Z
Quixey Engineering Screening Questions 2010-10-09T10:33:23.188Z
Bloggingheads: Robert Wright and Eliezer Yudkowsky 2010-08-07T06:09:32.684Z
Selfishness Signals Status 2010-03-07T03:38:30.190Z
Med Patient Social Networks Are Better Scientific Institutions 2010-02-19T08:11:21.500Z
What is the Singularity Summit? 2009-09-16T07:18:06.675Z
You Are A Brain 2009-05-09T21:53:26.771Z

Comments

Comment by Liron on The Best Software For Every Need · 2021-09-10T19:52:15.909Z · LW · GW

Ah ya, it seems to me that good posture requires a split keyboard given how far apart people's arms are. I use Moonlander.

Comment by Liron on The Best Software For Every Need · 2021-09-10T16:14:17.223Z · LW · GW

Raise your monitor. This isn't software, but probably relevant to the same readers. Consider raising your monitor so that its center is 20-27" higher than your keyboard, similar to how your head is 20-27" higher on your body than your elbows. I said "raise" instead of "raise or lower" because apparently 99% of people have their monitor too low.

I recommend buying something like this, which conveniently clamps onto the back of your desk and lets you set up your monitor at a good height. There are nicer desk-clamp monitor stands out there, but they don't seem to go up to 24"+, and you may need that much height if your keyboard is sitting on your desktop rather than a keyboard tray.

Comment by Liron on The shoot-the-moon strategy · 2021-07-23T04:12:19.925Z · LW · GW

How I operationalize crash-only code in my data generation code, given that Data Denormalization Is Broken [0]:

When operating on database data, I try to make functions whose default behavior on each invocation is to re-process large chunks of data and regenerate all the generated values, and to make them idempotent. (I would regenerate the whole database on every invocation if I could, but there's a tradeoff in how big a chunk can be while staying fast enough to reprocess.)
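
Here's a minimal sketch of the pattern (the record and field names are hypothetical, just for illustration):

```typescript
// Hypothetical example: a derived (denormalized) field `commentCount` on an
// Article record. Instead of patching the count whenever a comment changes,
// the regeneration function recomputes it from source-of-truth data for a
// whole chunk of records. Running it twice yields the same result
// (idempotent), so recovering from a crash is just "run it again".

interface Article {
  id: string;
  commentIds: string[];
  commentCount?: number; // derived value, always overwritten below
}

async function regenerateChunk(
  fetchChunk: () => Promise<Article[]>,     // e.g. "all articles updated today"
  save: (article: Article) => Promise<void>
): Promise<void> {
  const articles = await fetchChunk();
  for (const article of articles) {
    article.commentCount = article.commentIds.length; // ignore any stale value
    await save(article);
  }
}
```

The only real knob is how big a chunk `fetchChunk` returns, which is the speed tradeoff mentioned above.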

[0] https://lironshapira.medium.com/data-denormalization-is-broken-7b697352f405

Comment by Liron on Jimrandomh's Shortform · 2021-07-22T06:26:06.793Z · LW · GW

Seems very plausible to me. Thanks for sharing.

Comment by Liron on Why did no LessWrong discourse on gain of function research develop in 2013/2014? · 2021-06-20T03:40:46.596Z · LW · GW

A related question is why the topic of GoF research still didn’t get much LW discussion in 2020

Comment by Liron on Taboo "Outside View" · 2021-06-20T03:18:39.853Z · LW · GW

Bravo, this is on the meta level a great example of applying epistemic rationality to replace a vague concept with better concepts. The post uses specific examples everywhere to be clearly understandable and easy to apply. It could be part of my specificity sequence, with a title like “The Power to Clarify Concepts”. https://www.lesswrong.com/posts/XosKB3mkvmXMZ3fBQ/specificity-your-brain-s-superpower

Comment by Liron on Taboo "Outside View" · 2021-06-20T03:14:05.616Z · LW · GW

The achievement of easiness is due to the use of specific examples everywhere.

Comment by Liron on Bad names make you open the box · 2021-06-10T02:16:27.332Z · LW · GW

"Bad names make you open the box" is in multiple ways a special case of the more general principle that "Good system architecture is low-context" or "Good system architecture has a sparse understanding-graph".

If we imagine a graph diagram where each node N representing a part of the system (e.g. a function in a codebase) has edges coming in from all other nodes that one must understand in order to understand N, then a good low-context architecture is one with the fewest possible edges per node.

The post talks about how a badly-named function causes there to be an understanding-edge from the code inside that function to that function. More generally, a badly-architected function requires understanding other parts of the system in order to understand what it does. E.g.:

  • If the function mutates a global state variable, then the reader must understand outside context about that variable's meaning in order to understand the function (see the code sketch after this list)
  • If the function does a combination of work that only makes sense in the context of your program - rather than being a more program-independent reusable part - then its understanding-graph will have extra edges to various other parts of your program. Or in the best case, where your function is well-documented to avoid imposing those understanding-edges on the reader, you're still adding extra edge weight from the function to the now-longer-winded docstring.
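
Here's a minimal sketch of the first bullet's contrast (hypothetical names):

```typescript
// High-context: understanding this function requires following an edge out to
// a global variable's meaning and to every place that mutates it.
let discountRate = 0.1; // global mutable state, defined and changed elsewhere

function applyDiscountGlobal(price: number): number {
  return price * (1 - discountRate);
}

// Low-context: everything needed to understand the function is in its
// signature, so its node in the understanding-graph has no extra incoming edges.
function applyDiscount(price: number, discountRate: number): number {
  return price * (1 - discountRate);
}
```

In graph terms, the second version deletes the incoming edge from the global variable's node to the function's node.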

The "sparse understanding-graph" is also applicable to org charts of people working together. You ideally want the sparsest possible cooperation-graph.

Comment by Liron on Don't feel bad about not knowing basic things · 2021-06-08T00:46:11.865Z · LW · GW

Ya I don’t know the details even though I use NodeJS almost every day :) Maybe it does run parallel requests in separate threads.

Comment by Liron on Finite Factored Sets · 2021-06-07T23:32:45.046Z · LW · GW

Agree with #3, presenting definitions with examples first.

Congrats on this research, feels like you’re onto something huge!

Comment by Liron on Don't feel bad about not knowing basic things · 2021-06-07T22:02:32.246Z · LW · GW

Re database normalization, it's obviously good to do if you can afford the hit to speed and scalability. Unfortunately I believe the software industry currently has a big problem with a lack of capable databases to support elegant data denormalization patterns: https://lironshapira.medium.com/data-denormalization-is-broken-7b697352f405

Comment by Liron on Don't feel bad about not knowing basic things · 2021-06-07T21:59:34.512Z · LW · GW

NodeJS is mostly cool because you can use the same language and the same development tools across your whole stack. When it launched I think another selling point was that it’s reasonably good at handling multiple requests in parallel.
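
A minimal sketch of the non-blocking request handling I mean (illustrative only; the timer stands in for real I/O):

```typescript
import * as http from "http";

// Each request starts some non-blocking I/O (simulated here with a timer) and
// yields the single main thread while waiting, so many requests can be in
// flight concurrently without any one of them blocking the others.
const server = http.createServer(async (_req, res) => {
  const data = await new Promise<string>((resolve) =>
    setTimeout(() => resolve("done"), 100) // stand-in for a DB or API call
  );
  res.end(data);
});

server.listen(3000);
```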

Comment by Liron on Two Definitions of Generalization · 2021-05-31T20:03:01.139Z · LW · GW

Upvoted for teaching concepts well by using specific and concrete examples, even when the concepts are ironically "generalization" and "abstraction"

Comment by Liron on A Review and Summary of the Landmark Forum · 2021-05-30T20:27:25.758Z · LW · GW

I experienced Landmark Forum 13 years ago and this post is a good summary of it.

It seems like they’ve settled on a bunch of heuristic mental models to (1) push people to change their state to potentially break out of old patterns and make life changes and (2) perpetuate the organization.

They don’t provide good quality explanations and answers to questions. They don’t hold themselves to the standards of productive discourse. They offer a shell of pre-generated heuristics for you to “try on” (their phrase). They admit that that’s what they’re giving you, but I think for the LW crowd it wouldn’t be that hard to have a version of Landmark offering more robust concepts and tools.

Comment by Liron on Book review: "A Thousand Brains" by Jeff Hawkins · 2021-04-11T19:51:59.368Z · LW · GW

Thanks for writing this. I just read the book and I too found Part I to be profoundly interesting and potentially world-changing, while finding Parts II and III shallow and wrong compared to the AI safety discourse on LessWrong. I’m glad someone took the time to think through and write up the counterargument to Hawkins’ claims.

Comment by Liron on Why We Launched LessWrong.SubStack · 2021-04-01T19:44:30.233Z · LW · GW

Reminds me of that old LW April Fools where they ran the whole site as a Reddit fork

Comment by Liron on Clubhouse · 2021-03-21T19:11:40.902Z · LW · GW

RelationshipHero.com - convenient dating, relationship and couples coaching over Zoom

Comment by Liron on Clubhouse · 2021-03-16T07:32:48.064Z · LW · GW

Clubhouse being valued at $1B by Andreessen Horowitz in the latest funding round implies that they also think it has a >10% chance of being a major success.

The biggest signal they’re looking at is the growth rate: it has over 10M users and is still growing at 10%/wk, which is in the absolute top tier of startup growth metrics.

I think Clubhouse will probably have 50-100M users in a few months, and have acted on this prediction by dedicating a full-time marketing person to building my company’s presence on it.
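
For what it's worth, here's the compounding arithmetic behind that guess (a sketch that assumes the 10%/wk rate simply holds):

```typescript
// Weeks to grow from `start` to `target` users at a constant weekly rate:
// solve start * (1 + rate)^n >= target for n.
function weeksToReach(start: number, target: number, weeklyRate: number): number {
  return Math.ceil(Math.log(target / start) / Math.log(1 + weeklyRate));
}

console.log(weeksToReach(10e6, 50e6, 0.10));  // 17 weeks, about 4 months
console.log(weeksToReach(10e6, 100e6, 0.10)); // 25 weeks, about 6 months
```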

Comment by Liron on Seeing Status Quo Bias · 2021-03-08T03:24:09.764Z · LW · GW

It seems clear to me that the percentage of days worked remotely will never drop back below double the pre-pandemic value, at least

Comment by Liron on The Prototypical Negotiation Game · 2021-02-21T20:59:22.655Z · LW · GW

Upvoted for providing an important deepening of the popular understanding of “Schelling point”

Comment by Liron on The feeling of breaking an Overton window · 2021-02-17T14:20:59.571Z · LW · GW

More generally, “portray yourself as an empathetic character” is a social skill I find myself using often. Basically copy the way the protagonists talk on This American Life, where even the ones who’ve done crazy things tell you their side of the story in such a way that you think “sure, I guess I can relate to that”.

Comment by Liron on The feeling of breaking an Overton window · 2021-02-17T12:33:16.568Z · LW · GW

If I reply with the naive factual response, “Yes I’m stocking up to prep for the virus”, and leave it at that, there’s a palpable awkwardness because all participants and witnesses in the conversation are at some level aware that this carries the subtext, “Yes I’m smartly taking action to protect myself from a big threat while you are ignorantly exposing yourself to danger”, which means a listener has to wonder if they’re stupid or I’m crazy. Even if the listener is curious and doesn’t take any offense to the conversation, they know that I’ve made a social error in steering the conversation to this awkward state, because it’s mutual knowledge that a savvy conversationalist needs to be aware of the first-order subtext of the naive factual response. The objective social tactlessness of my naive response provides valid evidence to update them toward me being the crazy one.

I think a more tactful response is, “Yeah, I know a lot of people say it’s not a big deal and I hope they’re right, but I think there’s enough risk that extra supplies might come in handy”.

If I first acknowledge and validate or “pace” the background beliefs of mainstream society, then it’s socially graceful to segue to answering with my honest beliefs. Now I’ve portrayed myself as an empathetic character, where any listener can follow my reasoning and see that it’s potentially valid, even if it doesn’t identically match theirs.

Comment by Liron on “PR” is corrosive; “reputation” is not. · 2021-02-14T13:49:44.951Z · LW · GW

What are some examples of good PR that’s reputation-like and bad PR that’s not? It’d be interesting to analyze a failed high-budget public PR campaign.

Comment by Liron on Quadratic, not logarithmic · 2021-02-09T15:13:41.165Z · LW · GW

A lot depends on what type of “interactions” we’re considering, and how uniform the distribution is: indoor/outdoor, masks on/off, etc. If we assume that all interactions are of the identical type, then the quadratic model is useful.

But in a realistic scenario, they’re probably not identical interactions, because the 100 interactions probably divide across different life contexts, e.g. 5 different gatherings with 20 interactions each.

Therefore, contrary to what this post seems to imply, I believe the heuristic of “I’ve already interacted with 99 people so I’m not going to go out of my way to avoid 1 more” is directionally correct in most real-life scenarios, because of the Pareto (80/20) principle.

In a realistic scenario, you can probably model the cause of your risk as having one or two dominant factors, and that model probably doesn't change when you add one marginal interaction, unless that interaction is the disproportionately risky one compared to the others.

On the other hand, when going from 0 to 1 interactions, it’s more plausible to imagine that this 1 interaction is one of the most dominant risk factors in your life, because it has a better shot of changing your model of dominant risks.
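
Here's a toy model of the dominant-factor point (all the numbers are made up):

```typescript
// Total infection risk from independent interactions, each with its own
// per-interaction transmission probability.
function totalRisk(perInteractionRisks: number[]): number {
  return 1 - perInteractionRisks.reduce((acc, p) => acc * (1 - p), 1);
}

// One risky indoor gathering dominating 99 low-risk masked/outdoor interactions.
const risks: number[] = [0.05, ...Array(99).fill(0.0005)];

const before = totalRisk(risks);
const after = totalRisk([...risks, 0.0005]); // one more typical interaction
console.log(before, after, after - before);  // the marginal change is tiny

// Going from 0 interactions to 1 is a much bigger relative jump:
console.log(totalRisk([]), totalRisk([0.0005]));
```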

Comment by Liron on Technological stagnation: Why I came around · 2021-01-26T16:05:01.012Z · LW · GW

“Go into a room and subtract off all of the screens. How do you know you’re not in 1973, but for issues of design?”

At least if you’re in an average grocery store, you can tell it’s the 2000s from the greatly improved food selection

Comment by Liron on Covid 1/14: To Launch a Thousand Shipments · 2021-01-16T18:12:48.988Z · LW · GW

Another amazing post. How long does each of these take you to make? Seems like it would be a full-time job.

Comment by Liron on The Power to Teach Concepts Better · 2021-01-12T15:53:02.748Z · LW · GW

Thanks :) Hmm I think all I can point you to is this tweet.

Comment by Liron on The Power to Demolish Bad Arguments · 2021-01-12T02:54:56.868Z · LW · GW

I <3 Specificity

For years, I've been aware of myself "activating my specificity powers" multiple times per day, but it's kind of a lonely power to have. "I'm going to swivel my brain around and ride it in the general→specific direction. Care to join me?" is not something you can say in most group settings. It's hard to explain to people that I'm not just asking them to be specific right now, in this one context. I wish I could make them see that specificity is just this massively under-appreciated cross-domain power. That's why I wanted this sequence to exist.

I gratuitously violated a bunch of important LW norms

  1. As Kaj insightfully observed last year, choosing Uber as the original post's object-level subject made it a political mind-killer.
  2. On top of that, the original post's only role model of a specificity-empowered rationalist was this repulsive "Liron" character who visibly got off on raising his own status by demolishing people's claims.

Many commenters took me to task on the two issues above, as well as raising other valid issues, like whether the post implies that specificity is always the right power to activate in every situation.

The voting for this post was probably a rare combination: many upvotes, many downvotes, and presumably many conflicted non-voters who liked the core lesson but didn't want to upvote the norm violations. I'd love to go back in time and launch this again without the double norm violation self-own.

I'm revising it

Today I rewrote a big chunk of my dialogue with Steve, with the goal of making my character a better role model of a LessWrong-style rationalist, and just being overall more clearly explained. For example, in the revised version I talk about how asking Steve to clarify his specific point isn't my sneaky fully-general argument trick to prove that Steve's wrong and I'm right, but rather, it's taking the first step on the road to Double Crux.

I also changed Steve's claim to be about a fictional company called Acme, instead of talking about the politically-charged Uber.

I think it's worth sharing

Since writing this last year, I've received a dozen or so messages from people thanking me and remarking that they think about it surprisingly often in their daily lives. I'm proud to help teach the world about specificity on behalf of the LW community that taught it to me, and I'm happy to revise this further to make it something we're proud of.

Comment by Liron on The Power to Demolish Bad Arguments · 2021-01-12T00:39:02.494Z · LW · GW

Ok I finally made this edit. Wish I did it sooner!

Comment by Liron on The Power to Demolish Bad Arguments · 2021-01-12T00:38:18.511Z · LW · GW

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Comment by Liron on The Power to Demolish Bad Arguments · 2021-01-12T00:37:18.461Z · LW · GW

Glad to hear you feel I've addressed the Combat Culture issues. I think those were the lowest-hanging fruits that everyone agreed on, including me :)

As for the first point, I guess this is the same thing we had a long comment thread about last year, and I'm not sure how much our views diverge at this point...

Let's take this paragraph you quoted: "It sounds meaningful, doesn’t it? But notice that it’s generically-worded and lacks any specific examples. This is a red flag." Do you not agree with my point that Seibel should have endeavored to be more clear in his public statement?

Comment by Liron on The Power to Demolish Bad Arguments · 2021-01-11T22:26:03.955Z · LW · GW

Zvi, I respect your opinion a lot and I've come to accept that the tone disqualifies the original version from being a good representation of LW. I'm working on a revision now.

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Comment by Liron on The Power to Demolish Bad Arguments · 2021-01-11T21:14:02.446Z · LW · GW

Thanks for the feedback. I agree that the tone of the post has been undermining its content. I'm currently working on editing this post to blast away the gratuitously bad-tone parts :)

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Comment by Liron on The Power to Demolish Bad Arguments · 2021-01-11T01:40:56.362Z · LW · GW

Meta-level reply

The essay gave me a yucky sense of "rationalists try to prove their superiority by creating strawmen and then beating them in arguments", sneer culture, etc. It doesn't help that some of its central examples involve hot-button issues on which many readers will have strong and yet divergent opinions, which imo makes them rather unsuited as examples for teaching most rationality techniques or concepts

Yeah, I take your point that the post's tone and political-ish topic choice undermine the ability of readers to absorb its lessons about the power of specificity. This is a clear message I've gotten from many commenters, whether explicitly or implicitly. I shall edit the post.

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Object-level reply

In the meantime, I still think it's worth pointing out where I think you are, in fact, analyzing the content wrong and not absorbing its lessons :)

For instance, I read the "Uber exploits its drivers" example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart's arguments apart

My dialogue character has various positive-affect a-priori beliefs about Uber, but having an a-priori belief state isn't the same thing as having an immutable bottom line. If Steve had put forth a coherent claim, and a shred of support for that claim, then the argument would have left me with a modified a-posteriori belief state.

In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit

My character is making a good-faith attempt at Double Crux. It's just impossible for me to ascertain the crux underlying Steve's claim until I first ascertain the claim itself.

even if we "demolish" our counterpart's supposedly bad arguments, at best we discover that they could not shift our priors.

You seem to be objecting that selling "the power to demolish bad arguments" means that I'm selling a Fully General Counterargument, but I'm not. The way this dialogue goes isn't representative of every possible dialogue where the power of specificity is applied. If Steve's claim were coherent, then asking him to be specific would end up helping me change my own mind faster and demolish my own a-priori beliefs.

reversed stupidity is not intelligence

It doesn't seem relevant to mention this. In the dialogue, there's no instance of me creating or modifying my beliefs about Uber by reversing anything.

all the while insulting this fictitious person with asides like "By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.".

I'm making an example out of Steve because I want to teach the reader about an important and widely-applicable observation about so-called "intellectual discussions": that participants often win over a crowd by making smart-sounding general assertions whose corresponding set of possible specific interpretations is the empty set.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-29T21:13:06.544Z · LW · GW

Curve fitting isn't Problematic. The reason it's usually a good best guess that points will keep fitting a curve (though wrong a significant fraction of the time) is because we can appeal to a deeper hypothesis that "there's a causal mechanism generating these points that is similar across time". When we take our time and do actual science on our universe, our theories tell us that the universe has time-similar causal structures all over the place. Actual science is what licenses quick&dirty science-like heuristics.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T23:22:31.079Z · LW · GW

Just because curve fitting is one way you can produce a shallow candidate model to generate your predictions, that doesn't mean "induction is needed" in the original problematic sense, especially considering that what's likely to happen is that a theory that doesn't use mere curve fitting will probably come along and beat out the curve fitting approach.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T20:45:39.323Z · LW · GW

I think at best you can say Deutsch dissolves the problem for the project of science

Ok I think I'll accept that, since "science" is broad enough to be the main thing we or a superintelligent AI cares about.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T17:51:15.243Z · LW · GW

Since "no one believes that induction is the sole source of scientific explanations", and we understand that scientific theories win by improving on their competitors in compactness, then the Problem of Induction that Russell perceived is a non-problem. That's my claim. It may be an obvious claim, but the LW sequences didn't seem to get it across.

You seem to be saying that induction is relevant to curve fitting. Sure, curve fitting is one technique to generate theories, but tends to be eventually outcompeted by other techniques, so that we get superseding theories with reductionist explanations. I don't think curve fitting necessarily needs to play a major role in the discussion of dissolving the Problem of Induction.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T13:40:23.326Z · LW · GW

Ah yeah. Interesting how all the commenters here are talking about how this topic is quite obvious and settled, yet not saying the same things :)

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T13:37:57.150Z · LW · GW

Theories of how quarks, electromagnetism and gravity produce planets with intelligent species on them are scientific accomplishments by virtue of the compression they achieve, regardless of why quarks appear to be a thing.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T12:16:46.398Z · LW · GW

If we reverse-engineer an accurate compressed model of what the universe appears like to us in the past/present/future, that counts as science.

If you suspect (as I do) that we live in a simulation, then this description applies to all the science we've ever done. If you don't, you can at least imagine that intelligent beings embedded in a simulation that we build can do science to figure out the workings of their simulation, whether or not they also manage to do science on the outer universe.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T12:10:30.362Z · LW · GW

Justifying that blue is an a-priori more likely concept than grue is part of the remaining problem of justifying Occam's Razor. What we don't have to justify is the wrong claim that science operates based on generalized observations of similarity.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T02:19:56.049Z · LW · GW

your claim is that if we admit that the universe follows these patterns then this automatically means that these patterns will apply in the future.

Yeah. My point is that the original statement of the Problem of Induction was naive in two ways:

  1. It invokes "similarity", "resemblance", and "collecting a bunch of confirming observations"
  2. It talks about "the future resembling the past"

#1 is the more obviously naive part. #2's naivety is what I explain in this post's "Not About Past And Future" section. Once one abandons naive conceptions #1 and #2 by understanding how science actually works, one reduces the Problem of Induction to the more tractable Problem of Occam's Razor.

I don't think we know that the universe follows these patterns as opposed to appearing to follow these patterns.

Hm, I see this claim as potentially beyond the scope of a discussion of the Problem of Induction.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T02:11:48.378Z · LW · GW

Well, I hope this post can be useful as a link you can give to explain the LW community's mostly shared view about how one resolves the Problem of Induction. I wrote it because I think the LW Sequences' treatment of the Problem of Induction is uncharacteristically off the mark.

Comment by Liron on Dissolving the Problem of Induction · 2020-12-28T01:33:34.921Z · LW · GW

If I have two different data and compress them well among each of them I would not expect those compressions to be similar or the same.

If I drop two staplers, I can give the same compressed description of the data from their two trajectories: "uniform downward acceleration at close to 9.8 meters per second squared".

But then the fence can suddenly come to an end or make an unexpected 90 degree turn. How many posts do you need to see to reasonably conclude that post number #5000 exists?

If I found the blueprint for the fence lying around, I'd assign a high probability that the number of fenceposts is what's shown in the blueprint, minus any that might be knocked over or stolen. Otherwise, I'd start with my prior knowledge of the distribution of sizes of fences, and update according to any observations I make about which reference class of fence this is, and yes, how many posts I've encountered so far.

It seems like you haven't gotten on board with science being a reverse-engineering process that outputs predictive models. But I don't think this is a controversial point here on LW. Maybe it would help to clarify that a "predictive model" outputs probability distributions over outcomes, not predictions of single forced outcomes?
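
Here's a toy version of the fencepost update I have in mind (the prior is made up; the point is the shape of the update, and that the output is a probability, not a forced prediction):

```typescript
// Prior over total fence length L (in posts), updated on the observation
// "I've walked past N posts and the fence hasn't ended yet", which simply
// rules out every hypothesis with L <= N.
function probFenceHasAtLeast(
  target: number,
  postsSeenSoFar: number,
  prior: Map<number, number> // hypothesis: total length L -> prior probability
): number {
  let consistentMass = 0; // P(L > postsSeenSoFar)
  let targetMass = 0;     // P(L >= target, within the consistent hypotheses)
  for (const [length, p] of prior) {
    if (length > postsSeenSoFar) {
      consistentMass += p;
      if (length >= target) targetMass += p;
    }
  }
  return consistentMass > 0 ? targetMass / consistentMass : 0;
}

// Made-up prior over fence sizes:
const prior = new Map([[10, 0.5], [100, 0.3], [1000, 0.15], [10000, 0.05]]);

console.log(probFenceHasAtLeast(5000, 50, prior));   // 0.05 / 0.5  = 0.1
console.log(probFenceHasAtLeast(5000, 2000, prior)); // 0.05 / 0.05 = 1
```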

Comment by Liron on Dissolving the Problem of Induction · 2020-12-27T21:23:47.519Z · LW · GW

To clarify, what I think is underappreciated (and what's seemingly being missed in Eliezer's statement about his belief that the future is similar to the past), isn't that justifying an Occamian prior is necessary or equivalent to solving the original Problem of Induction, but that it's a smaller and more tractable problem which is sufficient to resolve everything that needs to be resolved.

Edit: I've expanded on the Problem of Occam's Razor section in the post:

In my view, it's a significant and under-appreciated milestone that we've reduced the original Problem of Induction to the problem of justifying Occam's Razor. We've managed to drop two confusing aspects from the original PoI:

  1. We don't have to justify using "similarity", "resemblance", or "collecting a bunch of confirming observations", because we know those things aren't key to how science actually works.
  2. We don't have to justify "the future resembling the past" per se. We only have to justify that the universe allows intelligent agents to learn probabilistic models that are better than maximum-entropy belief states.

Comment by Liron on The First Sample Gives the Most Information · 2020-12-26T18:19:18.128Z · LW · GW

Agree. Not only is asking “what’s an example” generally highly productive, it’s about 80% as productive as asking “what are two examples”.

Comment by Liron on 100 Tips for a Better Life · 2020-12-25T22:16:34.432Z · LW · GW

I’m not a gamer. Having a ton of screen real estate makes me more productive by letting me keep a bunch of windows visible in the same fixed locations.

Re paying a premium, I don’t think I am; the Samsung monitor is one of the cheapest well-reviewed curved monitors I found at that resolution.

Comment by Liron on 100 Tips for a Better Life · 2020-12-23T14:50:28.113Z · LW · GW

5. If your work is done on a computer, get a second monitor. Less time navigating between windows means more time for thinking. 

Agree. I'm stacking two of these bad boys: https://www.amazon.com/gp/product/B07L9HCJ2V

For most professionals, spending $2k is cheap for even a 5% more productive computing experience

Comment by Liron on To listen well, get curious · 2020-12-13T20:06:35.655Z · LW · GW

I agree with your main idea about how curiosity is related to listening well.

The post’s first sentence implies that the thesis will be a refutation of a different claim:

A common piece of interacting-with-people advice goes: “often when people complain, they don’t want help, they just want you to listen!”

The claim still seems pretty true from my experience: that sometimes people have a sufficient handle on their problem, and don’t want help dealing with the problem better, but do want some empathy, appreciation, or other benefits from communicating their problem in the form of a complaint.