The Power to Demolish Bad Arguments

post by Liron · 2019-09-02T12:57:23.341Z · score: 80 (55 votes) · LW · GW · 48 comments

Contents

  This is Part I of the Specificity Sequence
  "Uber exploits its drivers!"
  Zooming Into the Claim
  “Startups should have more impact!”

This is Part I of the Specificity Sequence

Specificity turns any argument into a game of 3D Chess. Just when it seems like your argument is a clash of two ground armies, you can use your specificity powers to take off and fly all over the conceptual landscape. Fly, I say!

"Uber exploits its drivers!"

Want to see what a 3D Chess argument looks like? Behold the conversation I had the other day with my friend “Steve”:

Steve: Uber exploits its drivers by paying them too little!

Steve’s statement was a generic one, lacking specific detail. So I shot back with my own generic counterpoint:

Liron: No, job creation is a force for good at any wage. Uber creates increased demand for labor, which drives wages up in the economy as a whole.

You can see I was showing off my mastery of basic economics. This seemed like a good move to me at the time, but I should have prioritized which of my skills to bust out. The skill of specificity is so badass that it even takes precedence over the mighty Econ 101.

When I used my Econ 101 rook to attack Steve’s defenses and put his king in check, I was merely playing 2D chess. The 3D chess move would have been to dial up the specificity.

What happens if I ask Steve to zoom into the substance of his claim? Then the conversation goes like this:

Steve: Uber exploits its drivers by paying them too little!
Liron: What do you mean by “exploits its drivers”?
Steve: Come on, you know what “exploit” means… Dictionary.com says it means “to use selfishly for one’s own ends”
Liron: You’re saying you have a beef with any company that acts “selfish”? Doesn’t every company under capitalism aim to maximize returns for its shareholders?
Steve: Capitalism can be good sometimes, but Uber has gone beyond the pale with their exploitation of workers. They’re basically ruining capitalism.

Nooooo, this is not the enlightening conversation we were hoping for. You can sense that I haven’t made much progress “pinning him down”.

But don’t worry… that wasn’t my real demonstration of 3D Chess. Psyche!

In the above conversation, I didn’t employ my specificity powers; I just showcased another failure mode. Can you figure out where I went wrong?

It was a mistake for me to ask Steve for a mere definition of the term “exploit”. I should have asked for a specific example of what he imagines “exploit” to mean. How specifically does Uber exploit its drivers?

Instead, I accepted his non-specific reply — that “exploit” means “to use selfishly” — and I tried to counterargue by tossing the big abstract concept of “capitalism” into the discussion. From Steve’s perspective, “capitalism” is yet one more generic word for him to build his generic responses out of, together with “exploitation” and “selfishness”. He loves flinging concept-words around; it makes him feel like he’s having a lively intellectual back & forth.

Steve’s mind doesn’t actually contain a structured understanding of the subject he’s making a claim about; it contains a ball pit of loosely-associated concepts. He holds up his end of the conversation by snatching a nearby ball and flinging it. And what have I done by mentioning “capitalism”? I’ve gone and tossed in another ball.

I hate to admit it, but Steve’s approach in the above dialogue works for him. By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers. What a nightmare for us.

Is there a way to reach in and pull him out of this pit? Yes, by activating our specificity powers! Here’s how it’s done:

Steve: Uber exploits its drivers by paying them too little!
Liron: Can you help me paint a specific mental picture of a driver being exploited by Uber?
Steve: Ok… A single dad whose kid gets sick. He works for Uber and he doesn’t even get health insurance, and he’s maxed out all his credit cards to pay for doctor’s visits. The next time his car breaks down, he won’t even be able to fix it. Meanwhile, Uber skims 25% of every dollar so he barely makes minimum wage. You should try living on minimum wage so you can see how hard it is!
Liron: You’re saying Uber should be blamed for this person’s unpleasant life circumstances, right?
Steve: Yes, because they have millions of drivers under these kinds of circumstances, and meanwhile they IPO for $80B.

He doesn’t realize it yet, but by making him flesh out a specific example of his claim, I’ve now pulled him out of his ball pit of loosely-associated concepts. This isn’t your average 2D argument anymore. We’re now flying like Superman.

Liron: Ok, sticking with this one specific person’s hypothetical story — what would they be doing if Uber didn’t exist?
Steve: Getting a different job
Liron: Ok, what specific job?
Steve: I don’t know, depends what their skills are
Liron: This is your specific story Steve, you get to pick any specific plausible details you want in order to support any point you want!

I have to stop and point out how crazy this is.

You’d think the way smart people argue is by supporting their claims with evidence, right? But here I’m giving Steve a handicap: he gets to make up fake evidence (any hypothetical specific story he likes) just to establish that his argument is coherent, by checking whether empirical support for it could ever meaningfully exist. This is a preschool-level standard that your average arguer can’t pass.

Let’s see how he might reply:

Steve: I guess he could instead be a cashier at McDonald’s. Because then he’d be a W2 employee and get medical insurance.
Liron: In a world where Uber exists, couldn’t this specific guy still go get a job as a cashier at McDonald’s? Plus, wouldn’t he have less competition for that cashier job because some of the other would-be applicants got recruited to be Uber drivers instead? Can we conclude that the specific person who you chose to illustrate your point is actually being *helped* by the existence of Uber?
Steve: No because he’s an Uber driver, not a McDonald’s cashier
Liron: So doesn’t that mean Uber offered him a better deal than McDonald’s, thereby improving his life?
Steve: No, they just tricked him into thinking that it’s a better deal, but it’s actually a worse deal for him.
Liron: So like, McDonald’s offered him $13/hr plus benefits, while Uber gave him an estimate of making $20/hr but it actually works out to $14/hr once you factor in all his costs like gas and depreciation, but Uber gave him no benefits, so his overall compensation value is less than making $13/hr plus benefits at McDonald’s?
Steve: Um, ya, something like that.
Liron: So if Uber did a better job of educating drivers about how much their compensation plan is really worth, would you stop saying that Uber is “exploiting its drivers by paying them too little”?
Steve: No, because Uber preys on drivers who need quick access to cash, and they also intend to automate away the drivers’ jobs as soon as they can.
Liron: It looks like you’re now making new claims that weren’t represented in the specific story you chose, right?
Steve: Yes, but I can tell other stories
Liron: But for the specific story you chose to tell that was supposed to best illustrate your claim, the “exploitation” you’re referring to only “robbed” the driver of the value of a McDonald’s cashier’s health insurance plan, which might be like a $1/hr loss? And his work schedule is so much more flexible as an Uber driver… couldn’t that easily be worth $1/hr to him, so that he wasn’t “tricked” into joining Uber but rather made a decision in rational self-interest?
Steve: Yeah maybe, but anyway that’s just one story.
Liron: No worries, we can start over and talk about a specific story that you think would illustrate your main claim. I’m listening…
Steve thinks for a little while...
Steve: I don't know all the exploitative shit Uber does ok? I just think Uber is a greedy company.
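
The concession Steve made midway through this dialogue turns on simple arithmetic. Here's a minimal sketch of that back-of-envelope comparison; all dollar figures are the hypothetical ones from the dialogue itself, and the roughly $1/hr values for health insurance and schedule flexibility are illustrative assumptions, not data:

```python
# Hypothetical figures from the dialogue above. The benefit and
# flexibility values are illustrative assumptions, not real data.
MCD_WAGE = 13.0          # McDonald's cashier wage, $/hr
MCD_BENEFITS = 1.0       # assumed value of employer health insurance, $/hr
UBER_ADVERTISED = 20.0   # Uber's advertised estimate, $/hr
UBER_EFFECTIVE = 14.0    # after gas and depreciation, $/hr
FLEXIBILITY = 1.0        # assumed value of a flexible schedule, $/hr

mcd_total = MCD_WAGE + MCD_BENEFITS        # total value of the McDonald's deal
uber_total = UBER_EFFECTIVE + FLEXIBILITY  # total value of the Uber deal

# On these numbers the driver wasn't "tricked": once flexibility is
# counted, the Uber package is worth about $1/hr more to him.
print(uber_total - mcd_total)
```

Nothing here settles which numbers are right, of course; the point is that once Steve committed to a specific story, the disagreement reduced to a comparison this small.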

In complex topics such as politics and economics, most people who think they’re making an “argument” are merely making an incoherent statement. They’re confused about their own claim.

Before you think about winning the argument, just start by drilling down into whether their point is coherent. You’ll then find that you’re often done arguing before you even really start.

In the above conversation, I hadn’t gotten to the point of trying to refute Steve’s argument; I was just trying to get specific clarity on what Steve’s argument was.

As I tried to nail down his point, his point simply collapsed down to nothing. He didn’t have a single specific example of what specific world-state could possibly be a referent of the statement “Uber exploits its drivers”.

Zooming Into the Claim

Imagine Steve shows you this map and says, “Oregon’s coastline is too straight. I wish all coastlines were less straight so that they could all have a bay!”

Resist the temptation to argue back, “You’re wrong, bays are stupid!” Hopefully, you’ve built up the habit of nailing down a claim’s specific meaning before trying to argue against it.

Steve is making a claim about “Oregon’s coastline”, which is a pretty abstract concept. In order to unpack the claim’s specific meaning, we have to zoom into the concept of a “coastline” and see it in more detail as this specific configuration of land and water:

From this perspective, a good first reply would be, “Well, Steve, what about Coos Bay over here? Are you happy with Oregon’s coastline as long as Coos Bay is part of it, or do you still think it’s too straight even though it has this bay?”

Notice that we can’t predict how Steve will answer our specific clarifying question. So we never knew what Steve’s words meant in the first place, did we? Now you can see why it wasn’t yet productive for us to start arguing against him.

When you hear a claim that sounds meaningful, but isn’t 100% concrete and specific, the first thing you want to do is zoom into its specifics. In many cases, you’ll then find yourself disambiguating between multiple valid specific interpretations, like for Steve’s claim that “Oregon’s coastline is too straight”.

In other cases, you’ll discover that there was no specific meaning in the mind of the speaker, like in the case of Steve’s claim that “Uber exploits its drivers by paying them too little” — a staggering thing to discover.

TFW a statement unexpectedly turns out to have no specific meaning

“Startups should have more impact!”

Activating our specificity powers let us demolish Steve’s claim that “Uber exploits its drivers” by showing that Steve had no specific meaning in mind for it. But Steve is just your average above-average IQ guy who went to college and smoked weed if he didn’t have a test coming up. Let’s kick it up a notch.

Consider this excerpt from a recent series of tweets by Michael Seibel, CEO of the Y Combinator startup accelerator program:

Successful tech founders would have far better lives and legacies if they competed for happiness and impact instead of wealth and users/revenue.
We need to change [the] model from build a big company, get rich, and then starting a foundation...
To build a big company, get rich, and use the company's reach and power to make the world a better place.

When I first read these tweets, my impression was that Michael was providing useful suggestions that any founder could act on to make their startup more of a force for good. But then I activated my specificity powers…

Before elaborating on what I think is the failure of specificity on Michael’s part, I want to say that I really appreciate Michael and Y Combinator engaging with this topic in the first place. It would be easy for them to keep their head down and stick to their original wheelhouse of funding successful startups and making huge financial returns, but instead, YC repeatedly pushes the envelope into new areas such as founding OpenAI and creating their Request for Carbon Removal Technologies. The Y Combinator community is an amazing group of smart and morally good people, and I’m proud to call myself a YC founder (my company Relationship Hero was in the YC Summer 2017 batch). Michael’s heart is in the right place to suggest that startup founders may have certain underused mechanisms by which to make the world a better place.

That said… is there any coherent takeaway from this series of tweets, or not?

The key phrases seem to be that startup founders should “compete for happiness and impact” and “use the company’s reach and power to make the world a better place”.

It sounds meaningful, doesn’t it? But notice that it’s generically-worded and lacks any specific examples. This is a red flag.

Remember when you first heard Steve’s claim that “Uber exploits its drivers by paying them too little”? At first, it sounded like a meaningful claim. But as we tried to nail down what it meant, it collapsed into nothing. Will the same thing happen here?

Specificity powers, activate! Form of: Tweet reply

What's a specific example, real or hypothetical, of a $1B+ founder trading off less revenue for more impact?
Cuz at the $1B+ level, competing for impact may look indistinguishable from competing for revenue.
E.g. Elon Musk companies have huge impact and huge valuations.

Let’s consider a specific example of a startup founder who is highly successful: Elon Musk and his company SpaceX, currently valued at $33B. The company’s mission statement is proudly displayed at the top of their about page:

SpaceX designs, manufactures and launches advanced rockets and spacecraft. The company was founded in 2002 to revolutionize space technology, with the ultimate goal of enabling people to live on other planets.

What I love about SpaceX is that everything they do follows from Elon Musk’s original goal of making human life multiplanetary. Check out this incredible post by Tim Urban to understand Elon’s plan in detail. Elon’s 20-year playbook is breathtaking:

  1. Identify a major problem in the world
    A single catastrophic event on Earth can permanently wipe out the human species
  2. Propose a method of fixing it
    Colonize other planets, starting with Mars
  3. Design a self-sustaining company or organization to get it done
    Invent reusable rockets to drop the price per launch, then dominate the $27B/yr market for space launches

I would enthusiastically advise any founder to follow Elon’s playbook, as long as they have the stomach to commit to it for 20+ years.

So how does this relate to Michael’s tweets? I believe my advice to “follow Elon’s playbook” constitutes a specific example of Michael’s suggestion to “use the company’s reach and power to make the world a better place”.

But here’s the thing: Elon’s playbook is something you have to do before you found the company. First you have to identify a major problem in the world, then you come up with a plan to start a certain type of company. How do you apply Michael’s advice once you’ve already got a company?

To see what I mean, let’s pick another specific example of a successful founder: Drew Houston and Dropbox ($11B market cap). We know that Michael wants Drew to “compete for happiness and impact” and to “use the company’s reach and power to make the world a better place”. But what does that mean here? What specific advice would Michael have for Drew?

Let’s brainstorm some possible ideas for specific actions that Michael might want Drew to take:

I know, these are just stabs in the dark, because we need to talk about specifics somehow. Did Michael really mean any of these? The ones about charity and employee benefits seem too obvious. Let’s explore the possibility that Michael might be recommending that Dropbox change its mission.

Here’s Dropbox’s current mission from their about page:

We’re here to unleash the world’s creative energy by designing a more enlightened way of working.

Seems like a nice mission that helps the world, right? I use Dropbox myself and can confirm that the product makes my life a little better. So would Michael say that Dropbox is an example of “competing for happiness and impact”?

If so, then it would have been really helpful if Michael had written in one of his tweets, “I mean like how Dropbox is unleashing the world’s creative energy”. Mentioning Dropbox, or any other specific example, would have really clarified what Michael is talking about.

And if Dropbox’s current mission isn’t what Michael is calling for, then how would Dropbox need to change it in order to better “compete for happiness and impact”? For instance, would it help if they tack on “and we guarantee that anyone can have access to cloud storage regardless of their ability to pay for it”, or not?

Notice how this parallels my conversation with Steve about Uber. We begin with what sounds like a meaningful exhortation: Companies should compete for happiness and impact instead of wealth and users/revenue! Uber shouldn’t exploit its drivers! But when we reach for specifics, we suddenly find ourselves grasping at straws. I showed three specific guesses of what Michael’s advice could mean for Drew, but we have no idea what it does mean, if anything.

Imagine that Dara Khosrowshahi, CEO of Uber, wanted to take Steve’s advice about how not to exploit drivers. He’d be in the same situation as Drew from Dropbox: confused about the specifics of what his company was supposedly doing wrong in the first place.

Once you’ve mastered the power of specificity, you’ll see this kind of thing everywhere: a statement that at first sounds full of substance, but then turns out to actually be empty. And the clearest warning sign is the absence of specific examples.

Next post: How Specificity Works

48 comments


comment by JenniferRM · 2019-09-02T18:09:35.915Z · score: 34 (18 votes) · LW · GW

I have a strong appreciation for the general point that "specificity is sometimes really great", but I'm wondering if this point might miss the forest for the trees with some large portion of its actual audience?

If you buy that in some sense all debates are bravery debates, then the audience can matter a lot, and perhaps this point addresses central tendencies in "global english internet discourse" while failing to address central tendencies on LW?

There is a sense in which nearly all highly general statements are technically false, because they admit of at least some counterexamples.

However, any such statement might still be useful in a structured argument of very high quality, perhaps as an illustration of a troubling central tendency, or a "lemma" in a multi-part probabilistic argument.

It might even be the case that the MEDIAN EXAMPLE of a real tendency is highly imperfect without that "demolishing" the point.

Suppose for example that someone has focused on a lot on higher level structural truths whose evidential basis was, say, a thorough exploration of many meta-analyses about a given subject.

"Mel the meta-meta-analyst" might be communicating summary claims that are important and generally true that "Sophia the specificity demander" might rhetorically "win against" in a way that does not structurally correspond to the central tendencies of the actual world.

Mel might know things about medical practice without ever having treated a patient or even talked to a single doctor or nurse. Mel might understand something about how classrooms work without being a teacher or ever having visited a classroom. Mel might know things about the behavior of congressional representatives without ever working as a congressional staffer. If forced to confabulate an exemplar patient, or exemplar classroom, or an exemplar political representative the details might be easy to challenge even as a claim about the central tendencies is correct.

Naively, I would think that for Mel to be justified in his claims (even WITHOUT having exemplars ready-to-hand during debate) Mel might need to be moderately scrupulous in his collection of meta-analytic data, and know enough about statistics to include and exclude studies or meta-analyses in appropriately weighted ways. Perhaps he would also need to be good at assessing the character of authors and scientists to be able to predict which ones are outright faking their data, or using incredibly sloppy data collection?

The core point here is that Sophia might not be led to the truth SIMPLY by demanding specificity without regard to the nature of the claims of her interlocutor.

If Sophia thinks this tactic gives her "the POWER to DEMOLISH arguments" in full generality, that might not actually be true, and it might even lower the quality of her beliefs over time, especially if she mostly converses with smart people (worth learning from, in their area(s) of expertise) rather than idiots (nearly all of whose claims might perhaps be worth demolishing on average).

It is totally possible that some people are just confused and wrong (as, indeed, many people seem to be, on many topics... which is OK because ignorance is the default and there is more information in the world now than any human can integrate within a lifetime of study). In that case, demanding specificity to demolish confused and wrong arguments might genuinely and helpfully debug many low quality abstract claims.

However, I think there's a lot to be said for first asking someone about the positive rigorous basis of any new claim, to see if the person who brought it up can articulate a constructive epistemic strategy.

If they have a constructive epistemic strategy that doesn't rely on personal knowledge of specific details, that would be reasonable, because I think such things ARE possible.

A culturally local example might be Hanson's general claim that medical insurance coverage does not appear to "cause health" on average. No single vivid patient generates this result. Vivid stories do exist here, but they don't adequately justify the broader claim. Rather, the substantiation arises from tallying many outcomes in a variety of circumstances and empirically noticing relations between circumstances and tallies.

If I was asked to offer a single specific positive example of "general arguments being worthwhile" I might nominate Visible Learning by John Hattie as a fascinating and extremely abstract synthesis of >1M students participating in >50k studies of K-12 learning. In this case a core claim of the book is that mindless teaching happens sometimes, nearly all mindful attempts to improve things work a bit, and very rarely a large number of things "go right" and unusually large effect sizes can be observed. I've never seen one of these ideal classrooms I think, but the arguments that they have a collection of general characteristics seem solid so far.

Maybe I'll change my mind by the end? I'm still in progress on this particular book, which makes it sort of "top of mind" for me, but the lack of specifics in the book present a readability challenge rather than an epistemic challenge ;-P

The book Made to Stick, by contrast, uses Stories that are Simple, Surprising, Emotional, Concrete, and Credible to argue that the best way to convince people of something is to tell them Stories that are Simple, Surprising, Emotional, Concrete, and Credible.

As near as I can tell, Made to Stick describes how to convince people of things whether or not the thing is true, which means that if these techniques work (and can in fact cause many false ideas to spread through speech communities with low epistemic hygiene, which the book arguably did not really "establish") then a useful epistemic heuristic might be to give a small evidential PENALTY to all claims illustrated merely via vivid example.

I guess one thing I would like to say here at the end is that I mean this comment in a positive spirit. I upvoted this article and the previous one, and if the rest of the sequence has similar quality I will upvote those as well.

I'm generally IN FAVOR of writing imperfect things and then unpacking and discussing them. This is a better than median post in my opinion, and deserved discussion, rather than deserving to be ignored :-)

comment by Liron · 2019-09-02T19:11:17.937Z · score: 25 (7 votes) · LW · GW

A culturally local example might be Hanson's general claim that medical insurance coverage does not appear to "cause health" on average. No single vivid patient generates this result. Vivid stories do exist here, but they don't adequately justify the broader claim. Rather, the substantiation arises from tallying many outcomes in a variety of circumstances and empirically noticing relations between circumstances and tallies.

I don't see how this is an example of a time when my specificity test shouldn't be used, because Robin Hanson would simply pass my specificity test. It's safe to say that Robin has thought through at least one specific example of what the claim that "medical insurance doesn't cause health" means. The sample dialogue with me and Robin would look like this:

Robin: Medical insurance coverage doesn't cause health on average!

Liron: What's a specific example (real or hypothetical) of someone who seeks medical insurance coverage because they think they're improving their health outcome, but who actually would have had the same health outcome without insurance?

Robin: A 50-year-old man opts for a job that has insurance benefits over one that doesn't, because he believes it will improve his expected health outcome. For the next 10 years, he's healthy, so the insurance didn't improve his outcome there, but he wouldn't expect it to. Then at age 60, he gets a heart attack and goes to the hospital and gets double bypass surgery. But in the hypothetical where this same person hadn't had health insurance, he would have had that same bypass surgery anyway, and then paid off the cost over the next few years. And then later he dies of old age. The data shows that this example I made up is a representative one - definitely not in every sense, but just in the sense that there's no delta between health outcomes in counterfactual world-states where people either have or don't have health insurance.

Liron: Okay, I guess your claim is coherent. I have no idea if it's true or false, but I'm ready to start hearing your evidence, now that you've cleared this low bar of having a coherent claim.

Robin: It was such an easy exercise though, it seemed like a pointless formality.

Liron: Right. The exercise was designed to be the kind of basic roadblock that's a pointless formality for Robin Hanson while being surprisingly difficult for the majority of humans.

Mel might know things about medical practice without ever having treated a patient or even talked to a single doctor or nurse. Mel might understand something about how classrooms work without being a teacher or ever having visited a classroom. Mel might know things about the behavior of congressional representatives without ever working as a congressional staffer. If forced to confabulate an exemplar patient, or exemplar classroom, or an exemplar political representative the details might be easy to challenge even as a claim about the central tendencies is correct.

Since I don't think the Robin Hanson example was enough to change my mind, is there another example of a general claim Mel might make where we can agree he's fundamentally right to make that claim but shouldn't expect him to furnish a specific example of it?

If he's been reading meta-analyses, couldn't he just follow their references to analyses that contain specific object-level experiments that were done, and furnish those as his examples?

I wouldn't insist that he has an example "ready to hand during debate"; it's okay if he says "if you want an example, here's where we can pull one up". I agree we should be careful to make sure that the net effect of asking for examples is to raise the level of discourse, but I don't think it's a hard problem.


I appreciate your high-quality comment. What I've done in this comment, obviously, is activated my specificity powers. I won't claim that I've demolished your argument, I just hope readers will agree that my approach was fair, respectful, productive, and not unnecessarily "adversarial"!

comment by JenniferRM · 2019-09-03T01:01:07.927Z · score: 9 (5 votes) · LW · GW
I appreciate your high-quality comment.

I likewise appreciate your prompt and generous response :-)

I think I see how you imagine a hypothetical example of "no net health from insurance" might work as a filter that "passes" Hanson's claim.

In this case, I don't think your example works super well and might almost cause more problems than not?

Differences of detail in different people's examples might SUBTRACT from attention to key facts relevant to a larger claim because people might propose different examples that hint at different larger causal models.

Like, if I was going to give the strongest possible hypothetical example to illustrate the basic idea of "no net health from insurance" I'd offers something like:

EXAMPLE: Alice has some minor symptoms of something that would clear up by itself and because she has health insurance she visits a doctor. ("Doctor visits" is one of the few things that health insurance strongly and reliably causes in many people.) While there she gets a nosocomial infection that is antibiotic resistant, lowering her life expectancy. This is more common than many people think. Done.

This example is quite different from your example. In your example medical treatment is good, and the key difference is basically just "pre-pay" vs "post-pay".

(Also, neither of our examples covers the issue where many innovative medical treatments often lower mortality due to the disease they aim at while, somehow (accidentally?) RAISING all cause mortality...)

In my mind, the substantive big picture claim rests ultimately on the sum of many positive and negative factors, each of which arguably deserves "an example of its own". (One thing that raises my confidence quite a lot is hearing the person's own best argument AGAINST their own conclusion, and then hearing an adequate argument against that critique. I trust the winning mind quite a bit more when someone is of two minds.)

No example is going to JUSTIFIABLY convince me, and the LACK of an example for one or all of the important factors wouldn't prevent me from being justifiably convinced by other methods that don't route through "specific examples".

ALSO: For that matter, I DO NOT ACTUALLY KNOW if Robin Hanson is actually right about medical insurance's net results, in the past or now. I vaguely suspect that he is right, but I'm not strongly confident. Real answers might require studies that haven't been performed? In the meantime I have insurance because "what if I get sick?!" and because "don't be a weirdo".

---

I think my key crux here has something to do with the rhetorical standards and conversational norms that "should" apply to various conversations between different kinds of people.

I assumed that having examples "ready-to-hand" (or offered early in a written argument) was something that you would actually be strongly in favor of (and below I'll offer a steelman in defense of), but then you said:

I wouldn't insist that he has an example "ready to hand during debate"; it's okay if he says "if you want an example, here's where we can pull one up".

So for me it would ALSO BE OK to say "If you want an example I'm sorry. I can't think of one right now. As a rule, I don't think in terms of fictional stories. I put effort into thinking in terms of causal models and measurables and authors with axes to grind and bridging theories and studies that rule out causal models and what observations I'd expect from differently weighted ensembles of the models not yet ruled out... Maybe I can explain more of my current working causal model and tell you some authors that care about it, and you can look up their studies and try to find one from which you can invent stories if that helps you?"

If someone said that TO ME I would experience it as a sort of a rhetorical "fuck you"... but WHAT a fuck you! {/me kisses her fingers} Then I would pump them for author recommendations!

My personal goal is often just to find out how the OTHER person feels they do their best thinking, run that process under emulation if I can, and then try to ask good questions from inside their frames. If they have lots of examples there's a certain virtue to that... but I can think of other good signs of systematically productive thought.

---

If I were going to run "example-based discussion" under emulation to try to help you understand my position, I would offer the example of John Hattie's "Visible Learning".

It is literally a meta-meta-analysis of education.

It spends the first two chapters just setting up the methodology and responding preemptively to quibbles that will predictably come when motivated thinkers (like classroom teachers whom the theory says are teaching suboptimally) try to hear what Hattie has to say.

Chapter 3 finally lays out an abstract architecture of principles for good teaching, by talking about six relevant factors and connecting them all (very, very abstractly and loosely) to tight OODA loops (though not under that name) and Popperian epistemology (explicitly).

I'll fully grant that it can take me an hour to read 5 pages of this book, and I'm stopping a lot and trying to imagine what Hattie might be saying at each step. The key point for me is that he's not filling the book with examples, but with abstract empirically authoritative statistical claims about a complex and multi-faceted domain. It doesn't feel like bullshit, it feels like extremely condensed wisdom.

Because of academic citation norms, in some sense his claims ultimately ground out in studies that are arguably "nothing BUT examples"? He's trying to condense >800 meta-analyses that cover >50k actual studies that cover >1M observed children.

I could imagine you arguing that this proves how useful examples are, because his book is based on over a million examples, but he hasn't talked about an example ONCE so far. He talks about methods and subjectively observed tendencies in meta-analyses mostly, trying to prepare the reader with a schema in which later results can land.

Plausibly, anyone could follow Hattie's citations back to an interesting meta-analysis, look at its references, track back to a likely study, look in its methods section and find the questionnaires, track back to the methods paper validating the questionnaire, then look in the supplementary materials to get specific questionnaire items... Then someone could create an imaginary kid in their head who answered that questionnaire some way (like in the study) and then imagine them getting the outcome (like in the study) and use that scenario as "the example"?

I'm not doing that as I read the book. I trust that I could do the above, "because scholarship" but I'm not doing it. When I ask myself why, it seems like it is because it would make reading the (valuable seeming) book EVEN SLOWER?

---

I keep looping back in my mind to the idea that a lot of this strongly depends on which people are talking and what kinds of communication norms are even relevant, and I'm trying to find a place where I think I strongly agree with "looking for examples"...

It makes sense to me that, if I were in the role of an angel investor, and someone wanted $200k from me, and offered 10% of their 2-month-old garage/hobby project, then asking for examples of various of their business claims would be a good way to move forward.

They might not be good at causal modeling, or good at stats, or good at scholarship, or super verbal, but if they have a "native faculty" for building stuff, and budgeting, and building things that are actually useful to actual people... then probably the KEY capacities would be detectable as a head full of examples to various key questions that could be strongly dispositive.

Like... a head full of enough good examples could be sufficient for a basically neurotypical person to build a valuable company, especially if (1) they were examples that addressed key tactical/strategic questions, and (2) no intervening bad examples were ALSO in their head?

(Like if they had terrible examples of startup governance running around in their heads, these might eventually interfere with important parts of being a functional founder down the road. Detecting the inability to give bad examples seems naively hard to me...)

As an investor, I'd be VERY interested in "pre-loaded ready-to-hand theories" that seem likely to actually work. Examples are kinda like "pre-loaded ready-to-hand theories"? Possession of these theories in this form would be a good sign in terms of the founder's readiness to execute very fast, which is a virtue in startups.

A LACK of ready-to-hand examples would suggest that even a good and feasible idea whose premises were "merely scientifically true" might not happen very fast if an angel funded it and the founder had to instantly start executing on it full time.

I would not be offended if you want to tap out. I feel like we haven't found a crux yet. I think examples and specificity are interesting and useful and important, but I merely have intuitions about why, roughly like "duh, of course you need data to train a model", not any high-church formal theory with a fancy name that I can link to on Wikipedia :-P

comment by Liron · 2019-09-03T01:50:39.457Z · score: 5 (3 votes) · LW · GW

I agree that this section of your comment is the most cruxy:

So for me it would ALSO BE OK to say "If you want an example I'm sorry. I can't think of one right now. As a rule, I don't think in terms of fictional stories. I put effort into thinking in terms of causal models and measurables and authors with axes to grind and bridging theories and studies that rule out causal models and what observations I'd expect from differently weighed ensembles of the models not yet ruled out... Maybe I can explain more of my current working causal model and tell you some authors that care about it, and you can look up their studies and try to find one from which you can invent stories if that helps you?"

Yes. Then I would say, "Ok, I've never encountered a coherent generalization for which I couldn't easily generate an example, so go ahead and tell me your causal model and I'll probably cook up an obvious example to satisfy myself in the first minute of your explanation."

Anyone talking about having a "causal model" is probably beyond the level that my specificity trick is going to demolish. The specificity trick I focus on in this post is for demolishing the coherence of the claims of the average untrained arguer, or occasionally catching oneself at thinking overly vaguely. That's it.

comment by JenniferRM · 2019-09-03T02:44:58.049Z · score: 4 (2 votes) · LW · GW
"...go ahead and tell me your causal model and I'll probably cook up an obvious example to satisfy myself in the first minute of your explanation."

I think maybe we agree... verbosely... with different emphasis? :-)

At least I think we could communicate reasonably well. I feel like the danger, if any, would arise from playing example ping pong and having the serious disagreements arise from how we "cook (instantiate?)" examples into models, and "uncook (generalize?)" models into examples.

When people just say what their model "actually is", I really like it.

When people only point to instances I feel like the instances often under-determine the hypothetical underlying idea and leave me still confused as to how to generate novel instances for myself that they would assent to as predictions consistent with the idea that they "meant to mean" with the instances.

Maybe: intensive theories > extensive theories?

comment by Liron · 2019-09-03T02:48:56.424Z · score: 4 (2 votes) · LW · GW

Indeed

comment by Slider · 2019-09-02T22:51:13.246Z · score: 1 (1 votes) · LW · GW

I feel like the same scrutiny standard is not being applied. A guy with health insurance doesn't check his health more often, catching diseases earlier? Uncertainty doesn't cause stress and workload on the circulatory system? Why are these not holes that prevent it from being coherent? Why can't Steve claim he has a friend who can be called up, who can exemplify exploitation?

If the bar is in fact low, Steve passed it upon positing McDonald's as the relevant alternative, and the argument went on to actually argue the point. Or alternatively, an opinion requires that Robin-style specification to be coherent, and a reasonable arguer could try to hold it to be incoherent.

I feel like this is a case where epistemic status breaks symmetry. A white-coat doctor and a witch doctor making the same claims requires the witch doctor to show more evidence to reach the same credibility levels. If argument truly screens off authority, the problem needs to be in the argument. Steve is required to have the specification ready at hand during debate.

comment by Liron · 2019-09-02T23:03:48.839Z · score: 3 (2 votes) · LW · GW

The difference is that we all have a much clearer picture of what Robin Hanson's claim means than what Steve means, so Robin's claim is sufficiently coherent and Steve's isn't. I agree there's a gray area on what is "sufficiently coherent", but I think we can agree there's a significant difference on this coherence spectrum between Steve's claim and Robin's claim.

For example, any listener can reasonably infer from Robin's claim that someone on medical insurance who gets cancer shouldn't be expected to survive with a higher probability than someone without medical insurance. But reasonable listeners can have differing guesses about whether or not Steve's claim would also describe a world where Uber institutes a $15/hr flat hourly wage for all drivers.

Why can't Steve claim he has a friend who can be called up, who can exemplify exploitation?

Sure, then I'd just try to understand what specific property of the friend counts as exploitation. In Robin's case, I already have good reasonable guesses about the operational definitions of "health". Yes, I can try to play "gotcha" and argue that Robin hasn't nailed down his claim, and in some cases that might actually be the right thing to do - it's up to me to determine what's sufficiently coherent for my own understanding, and nail down the claim to that standard, before moving on to arguing about the truth-value of the claim.

If the bar is in fact low, Steve passed it upon positing McDonald's as the relevant alternative, and the argument went on to actually argue the point

Ah, if Steve really meant "Yes Uber screws the employee out of $1/hr compared to McDonald's", then he would have passed the specificity bar. But the specific Steve I portray in the dialogue didn't pass the bar because that's not really the point he felt he wanted to make. The Steve that I portray found himself confused because his claim really was insufficiently specific, and therefore really was demolished, unexpectedly to him.

comment by Slider · 2019-09-03T00:30:53.675Z · score: 3 (2 votes) · LW · GW

Well, I am more familiar with settings where I have a duty to understand the world rather than the world having a duty to explain itself to me. I also hold that having unfamiliar things hit higher standards creates epistemic xenophobia. I would hold it important that one doesn't assign falsehood to a claim they don't understand. Although assigning truth to a claim one doesn't understand is dangerous to a relatively similar degree.

My go-to assumption would be that Steve understands something different by the word and might be running some sort of moon logic in his head. Rather than declare the "moon proof" to be invalid, it's more important that the translation between moon logic and my planet logic interfaces without confusion. Rather than using a word/concept I do know incorrectly, he is using a word or concept I do not know.

"Coherent" usually points to a concept where a sentence is judged on its home logic's terms. But as used here it's clearly in the eye of the beholder. So it's less "makes objective sense" and more "makes sense to whom?". The shared reality you create in a discussion or debate would be the arbiter, but if the argument relies too much on those mechanics it doesn't generalize to contexts outside of that.

comment by Liron · 2019-09-03T00:54:52.205Z · score: 2 (1 votes) · LW · GW

Sure, makes sense.

I also just think there are a lot of Steves in the world who are holding on to belief-claims that lack specific referents, who could benefit from reading this post even if no one is arguing with them.

comment by Liron · 2019-09-02T19:41:19.806Z · score: 4 (2 votes) · LW · GW

There is a sense in which nearly all highly general statements are technically false, because they admit of at least some counterexamples.

I think this is a common misunderstanding that people are having about what I'm saying. I'm not saying to hunt for a counterexample that demolishes a claim. I'm saying to ask the person making the claim for a single specific example that's consistent with the general claim.

Imagine that a general claim has 900 examples and 100 counterexamples. Then I'm just asking to see one of the 900 examples :)

comment by Gurkenglas · 2019-09-02T13:47:10.018Z · score: 8 (12 votes) · LW · GW

By telling Steve to be specific, you are trying to trick him into adopting an incoherent position. You should be trying to argue against your opponent at his strongest, in this case in the general case that he has thought the most about. If you lay out your strategy before going specific, he can choose an example that is more resilient to it. In your example, if Uber didn't exist, that job may have been a taxi driver job instead, which pays more, because there's less rent seeking stacked against you.

comment by Liron · 2019-09-02T13:53:32.903Z · score: 7 (6 votes) · LW · GW

It's not a trick, he's allowed to backtrack and amend his choice of specific example and have a re-do of my response. In this dialogue, I chose the underlying reality to be that Steve's "point" really is incoherent, because a Monte Carlo simulation of real-world arguers has a high probability of landing on this scenario.

comment by Gurkenglas · 2019-09-02T14:26:10.468Z · score: 7 (3 votes) · LW · GW

You saw coming that his position would be temporarily incoherent, that's why you went there. I expect Steve to be aware of this at some level, and update on how hostile the debate is. Minimize the amount of times you have to prove him wrong.

comment by Liron · 2019-09-03T00:58:39.444Z · score: 8 (2 votes) · LW · GW

I agree with the principle of "minimize the amount of times you have to prove him wrong". But in the dialogue, I only had to "prove him wrong" that one time, because his position was fundamentally incoherent, not temporarily incoherent.

comment by Slider · 2019-09-02T16:09:52.507Z · score: 2 (2 votes) · LW · GW

This relies very heavily on the assumption that in an adversarial context a free pick will be an optimal pick. The other arguer demonstrated that he didn't even realize he could pick, so it is reasonable to assume he doesn't know the pick should be optimal.

Doing re-dos without preannouncing them is giving free mulligans to yourself. I think he would have been safe in saying that he did not make new claims, denying the mulligan. There could have been 10 facets of exploitation in the scenario, and fixing one of them would still leave 9 open. You can't say that a forest doesn't exist just because it isn't any of the individual trees.

The new claim is also not contradictory with the old story. It could also be taken as a further specification of it.

comment by Liron · 2019-09-02T16:45:01.803Z · score: 4 (2 votes) · LW · GW

Ok but what if he truly doesn't have a point? Because as the author of the fictional situation, I'm saying that's the premise. Under these circumstances, isn't it right and proper to play the kind of "adversarial game" with him that he's likely to "lose"?

comment by Citrus · 2019-09-02T18:34:40.466Z · score: 2 (2 votes) · LW · GW

I guess the point of humanity is to achieve as much prosperity as possible. Adversarial techniques help when competition improves our chances -- helpful in physical activities, when groups compete, in markets generally. But in a conversation with someone your best bet to help humanity is to help them come around to your superior understanding, and adversarial conversation won't achieve that.

The ideal strategy looks something like the best path along which you can lead them, where you can demonstrate to them they are wrong and they will believe you, which usually involves you demonstrating a very clear and comprehensive understanding, citing information, but doing it all in a way that seems collaborative.

comment by Liron · 2019-09-02T18:18:40.367Z · score: 5 (3 votes) · LW · GW

Asking for specific examples is not a rhetorical device, it's a tool for clear thinking. What I'm illustrating with Steve is a productive approach that raises the standard of discourse. IMO.

I've personally been in the Steve role many times: I used to hang out a lot with Anna Salamon when I was still new to LessWrong-style rationality, and I distinctly remember how I would make statements that Anna would then basically just ask me to clarify, and in attempting to do so I would realize I probably don't have a coherent point, and this is what talking to a smarter person than me feels like. She was being empathetic and considerate to me like she always is, not adversarial at all, but it's still accurate to say she demolished my arguments.

comment by Kaj_Sotala · 2019-09-16T05:59:47.690Z · score: 12 (3 votes) · LW · GW

I believe that the thing which is making many of your commenters misinterpret the post is that you chose a political example [LW · GW] for the dialogue. That gives people the (reasonable, as this is a common move) suspicion that you have a motive of attacking your political enemy while disguising it as rationality advice.

Even if they don't think that, if they have any sympathies towards the side that you seem to be attacking, they will still feel it necessary to defend that side. To do otherwise would risk granting the implicit notion that the "Uber exploits its drivers" side would have no coherent arguments in general, regardless of whether or not you meant to send that message.

You mentioned that you have personal examples where Anna pointed out to you that your position was incoherent. Something like that would probably have been a better example to use: saying "here's an example of how I made this mistake" won't be suspected of having ulterior motives the way that "here's an example of a mistake made by someone whom I might reasonably be suspected to consider a political opponent" will.

comment by Liron · 2019-09-16T10:57:44.563Z · score: 5 (2 votes) · LW · GW

Ahhh right, you got me, I was unnecessarily political! It didn't pattern match to the kind of political arguing that I see in my bubble, but I get why anyone who feels skeptical or unhappy about Uber's practices won't be maximally receptive to learning about specificity using this example, and why even people who don't have an opinion about Uber have expressed feeling "uncomfortable" with the example. Thanks!

At some point I may go back and replace with another example. I'm open to ideas.

comment by assignvaluetothisb · 2019-09-06T12:00:06.667Z · score: 1 (1 votes) · LW · GW

It is pretty easy to 'win' any argument that the other side has to take a strong stance behind.

Go ahead, ask something where you take a strong stance.

I was going to say Uber has definitely done 'blank', but I'm not very Steve, and saying that Uber isn't as 'fair' as Amazon or Walmart is a position that is easy to agree with or argue against. Whoever makes out the best is going to agree with this adversarial gain.

comment by Slider · 2019-09-02T20:26:08.177Z · score: 1 (1 votes) · LW · GW

Relying on your opponent making a mistake is not a super reliable strategy. If someone reads your story and uses it as inspiration to start an argument, they might end up in a situation where the actual person doesn't make that mistake. That could feel a lot more like "shooting yourself in the face in an argument" than "demolishing an argument".

Argument methods that work because of misdirection arguably don't serve truth very well or work very indirectly (being deceptive makes it rewarding for the other to be keen).

Most people have reasons for their stances. Their point might be lousy or unimportant, but usually one exists. If he truly doesn't have a point, then there is no specific story to tell. As author you have the options of him having a story or not meaning anything with his words, but not both.

comment by Liron · 2019-09-02T20:37:13.109Z · score: 2 (1 votes) · LW · GW

Well, I'm not offering a general-purpose debate-winning tactic. I'm offering a basic sanity check that happens to demolish most arguments because humans are bad at thinking and have trouble even forming coherent claims.

comment by jmh · 2019-09-03T12:12:55.228Z · score: 3 (2 votes) · LW · GW

I find that I struggle with the rhetoric of the argument. Shouldn't the goal be to illuminate facts and truths rather than merely proving the other side wrong? Specifics certainly allow the illumination of truths (and so getting less wrong in our decisions and actions). However, it almost reads like the goal is to use specificity as some rhetorical tool in much the same way statistics can be misused to color the lens and mislead.

I'm sure that is not your goal, so perhaps one of the hidden assumptions here could be put in the title. One additional word: "The Power to Demolish BAD Arguments" might set a better tone at the start.

comment by Liron · 2019-09-03T14:28:19.748Z · score: 2 (1 votes) · LW · GW
Shouldn't the goal be to illuminate facts and truths rather than merely proving the other side wrong? Specifics certainly allow the illumination of truths (and so getting less wrong in our decisions and actions).

Yep! Since in practice, the other side of a discussion is often incoherent about the meaning of their original claim, I believe it's efficient to employ this specificity tool to illuminate the incoherence early in the conversation.

However, it almost reads like the goal is to use specificity as some rhetorical tool in much the same way statistics can be misused to color the lens and mislead.
I'm sure that is not your goal, so perhaps one of the hidden assumptions here could be put in the title. One additional word: "The Power to Demolish BAD Arguments" might set a better tone at the start.

Ah yeah, I agree. Title changed. Thanks!

comment by TAG · 2019-09-02T18:47:22.292Z · score: 3 (7 votes) · LW · GW

Here's how Steve could have demolished Liron's argument:

"If a company makes $28/h out of their workers and pays them $14/h, they are exploiting them".

comment by Liron · 2019-09-02T19:29:04.682Z · score: 2 (1 votes) · LW · GW

Yeah if Steve had said that, he would have been making progress toward potentially passing my having-a-coherent-claim test.

Do I agree or disagree with this version of Steve's claim? Neither. I'm still not done nailing down what he's talking about.

My followup question would be, "If it's impossible for Uber to take a lower share of the ride revenue without hastening their bankruptcy (because they're currently cashflow negative), would this scenario still make you claim that Uber is exploiting its drivers?"

comment by Svyatoslav Usachev (svyatoslav-usachev) · 2019-09-03T13:19:05.325Z · score: 1 (1 votes) · LW · GW

That's actually a very reasonable question to ask your interlocutor (and yourself), as are the previous "specificity" questions.

The answer to that question would be:

"If Uber was making $1/h out of their workers and paying them $14/h, that clearly would not have been exploitation, but if it makes $28/h, then it is, regardless of them being profitable. The question is how much every particular worker brings to the company, it doesn't matter whether it's enough for the viability of their business model."

I don't see what it is that you would argue about if not about these particular questions, one of which might become a double-crux at some point.

comment by Liron · 2019-09-03T14:20:35.528Z · score: 2 (1 votes) · LW · GW
I don't see what it is that you would argue about if not about these particular questions, one of which might become a double-crux at some point.

If Steve has a coherent claim for us to argue about, then you're right. But a surprisingly large number of people, such as the specific Steve I chose for this dialogue, don't have enough substance to their belief-state to qualify as a "claim", despite their ability to say an abstract sentence like "Uber exploits its workers" that sounds like it could be a meaningful claim. In this case, specificity demolishes their argument, sending them back to the drawing board to work out what claim they want to put forth, if any.

If Uber was making $1/h out of their workers and paying them $14/h, that clearly would not have been exploitation, but if it makes $28/h, then it is, regardless of them being profitable. The question is how much every particular worker brings to the company, it doesn't matter whether it's enough for the viability of their business model.

My meta-level response: This is more than what the Steve in the dialogue had in mind. The specific Steve in the dialogue just thinks "Hm, I guess I don't know in what sense my specific-example guy is being exploited. I'd have to think about this more."

My object-level response: What specifically do you mean by "Uber making $X/hr"? Contribution margin before (huge) fixed costs? Or net profit margin? Because right now Uber has a negative net profit margin, so its shareholders are subsidizing the drivers' wages and the riders' low prices.
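For concreteness, here's a toy sketch of that distinction with invented per-ride numbers (none of these are Uber's actual figures):

```python
# Invented per-ride figures, purely to illustrate the two margins.
fare = 20.00            # what the rider pays for one ride
driver_pay = 14.00      # what the driver receives
variable_costs = 4.00   # clearly per-ride costs: payment processing, insurance, support

# Contribution margin: revenue minus variable costs, BEFORE fixed costs.
contribution = fare - driver_pay - variable_costs
contribution_margin = contribution / fare          # +10% here

# Net profit margin also absorbs an amortized per-ride share of fixed
# costs (engineering, marketing, overhead).
fixed_costs_per_ride = 4.00
net_profit_margin = (contribution - fixed_costs_per_ride) / fare  # -10% here

print(f"contribution margin: {contribution_margin:+.0%}")
print(f"net profit margin:   {net_profit_margin:+.0%}")
```

With these made-up numbers the contribution margin is positive while the net profit margin is negative, which is exactly why "Uber makes $X/hr" is ambiguous until you say which margin you mean.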

comment by Svyatoslav Usachev (svyatoslav-usachev) · 2019-09-03T16:54:11.426Z · score: 1 (1 votes) · LW · GW

On the meta level I agree with you, and I am happy to see the updated title, which makes the post feel less like it's attacking this sort of question.

With regards to the object-level response, I surely mean "contribution margin before (huge) fixed costs". Net profits are not very relevant here, see Amazon, a booming business with close-to-zero net profits. It is also clear that while Uber doesn't have profits, it surely has other gains, such as market share, for example. I.e., if the company decides that it wants to essentially exchange money for other non-monetary gains, it should not affect our opinion on their relationship with their workers.

That said, I acknowledge that it is slightly more nuanced, and that simply looking at the contribution margin is not enough.

comment by Liron · 2019-09-03T17:02:57.418Z · score: 2 (1 votes) · LW · GW

Well, I'm guessing Uber's current contribution margins are hovering around slightly negative or slightly positive, and that's before accounting for fixed costs like engineering that we can probably agree should be amortized into the "fairness equation".

In my personal analysis, I don't see how Uber is being unfair in any way to their drivers. It seems like Uber is a nice shareholder-subsidized program to put drivers to work and give riders convenient point-to-point transportation.

comment by clone of saturn · 2019-09-05T03:03:49.986Z · score: 3 (2 votes) · LW · GW

Could you explain why shareholders are subsidizing Uber drivers, in your opinion?

comment by Liron · 2019-09-05T03:09:45.340Z · score: 2 (1 votes) · LW · GW

Since I'm pretty sure Uber's net margins are negative, there must be a net transfer of wealth from people who buy Uber shares to Uber's drivers, employees, vendors and/or customers (I believe it's to all of them).

comment by clone of saturn · 2019-09-05T06:15:02.164Z · score: 3 (2 votes) · LW · GW

Right, but what's their motivation for transferring their wealth in this way?

comment by Liron · 2019-09-05T11:31:27.279Z · score: 2 (1 votes) · LW · GW

Ah because they think that in the next 5-20 years, Uber will figure out how to widen their contribution margin, get leverage on their fixed costs, and eventually make so much total profit that it compensates for all the money burned along the way (currently losing $5B/quarter btw).

One way this could happen is if Uber is able to keep their prices the same but replace their human-driven cars with self-driving cars. This scenario is possible but I'd give it less than 50% chance. But we don't need to try to predict the exact win scenario, we can just assign some probability that there will be some kind of win scenario.

Facebook IPO'd in 2012 at a $104B market cap before anyone knew by what mechanism they'd generate profits. They only hit the jackpot figuring out a highly profitable advertising engine around 2014.

I personally don't think Uber is an appealing investment, but I'm not confident about that assessment, because they're in a unique position where they're embedded deeply in the lives of many millions of customers, and they might also hit some kind of unforeseen jackpot to become the next Facebook.

comment by jmh · 2019-09-05T12:23:55.404Z · score: 3 (2 votes) · LW · GW

Not invested either, but I thought its view of a future where, in general, people move from owning transportation capital (cars) to on-demand use of transportation services seems to have some sense to it. Coupled with the move toward renting out personal assets while not in use, like the AirBnB model, it looks a bit better too (perhaps as a transition state...?)

That does seem to depend on more than merely the technical and financial aspects. I suspect the whole cultural and social part (and probably the legal liability and insurance aspects for the autonomous car) will also need to shift to support that type of market shift.

Not sure if this is a move in a similar direction, but one of the big car rental companies just launched (or will launch) a new service for longer-term rental. Basically you pay a monthly fee and can drive most of the cars they offer. The market here seemed to be those who might want a different car every few weeks (BMW this month, Audi next, and maybe Lexus a bit later...). In the back of my mind I cannot help but see some kind of signaling motivation here, and wonder just how long that lasts if everyone can do it -- all the different cars you are seen driving no longer signal any real type of status. Still, there are clearly some functional aspects that make it appealing over having to own multiple vehicles.

comment by Svyatoslav Usachev (svyatoslav-usachev) · 2019-09-05T08:45:20.920Z · score: 1 (1 votes) · LW · GW

I don't quite understand, if even the contribution margin of an individual driver is negative (before fixed costs), then I don't see how this model can become viable in the future.

My understanding is that contribution margins are obviously positive (Uber gets at least a half of the trip fare on average), but there is also a cost of investment in engineering and in low fares (which buy market share), that have not yet been covered.

The viability of the business model, thus, comes from the fact that future (quite positive) income from the provided services will continue to cover investments in non-monetary gains, such as brand, market share, assets and IP.

comment by Liron · 2019-09-05T11:43:46.345Z · score: 2 (1 votes) · LW · GW

I don't quite understand, if even the contribution margin of an individual driver is negative (before fixed costs), then I don't see how this model can become viable in the future.

The hope is that they'll hit on something big like self-driving car technology that fundamentally improves Uber's marginal profit.

I run a service business with a financial model kind of similar to Uber's, and I can tell you there's not much qualitative difference between reporting -20% vs +20% "contribution margin", because it depends on how much you decide to amortize all kinds of gray-area costs, like marketing, new-driver incentives, non-driver employees, etc., into the calculation of what goes into "one ride".

I use the ambiguous term "marginal profit" to mean "contribution margin with more overhead amortized in", and I'm pretty sure Uber's is quite negative right now, maybe in the ballpark of -20%.
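To make the ambiguity concrete, here's a toy sketch (all numbers hypothetical, not Uber's actual figures) of how the same per-ride economics can report a positive or a negative margin depending purely on which overhead costs you choose to amortize into "one ride":

```python
def margin(fare, driver_payout, per_ride_overhead):
    """Margin as a fraction of the fare, after subtracting the driver's
    payout and whatever overhead we chose to amortize into this one ride."""
    profit = fare - driver_payout - per_ride_overhead
    return profit / fare

FARE = 10.00           # hypothetical average fare
DRIVER_PAYOUT = 7.00   # hypothetical driver share of that fare

# Narrow view: only direct per-ride costs amortized in.
# "Contribution margin" comes out positive.
print(margin(FARE, DRIVER_PAYOUT, 1.00))   # 0.2, i.e. +20%

# Broad view: marketing, new-driver incentives, non-driver payroll, etc.
# amortized into each ride. The same ride now shows a negative margin.
print(margin(FARE, DRIVER_PAYOUT, 5.00))   # -0.2, i.e. -20%
```

The arithmetic is trivial; the point is that nothing in the ride itself changed between the two lines -- only the accounting decision about which gray-area costs count as part of "one ride".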

comment by TAG · 2019-09-15T11:50:15.153Z · score: 1 (1 votes) · LW · GW

The hope is that they’ll hit on something big like self-driving car technology that fundamentally improves Uber’s marginal profit.

Or the old fashioned thing where you kill off competition and then raise prices.

comment by Pattern · 2019-09-02T20:37:57.186Z · score: 1 (1 votes) · LW · GW

Upvoted for having a claim with a testable component.

comment by jimmy · 2019-09-04T16:34:43.687Z · score: 0 (5 votes) · LW · GW


1) There is a risk in looking at concrete examples before understanding the relevant abstractions. Your Uber example relies on the fact that you can both look at his concrete example and know you're seeing the same thing. This condition does not always hold, as often the wrong details jump out as salient.

To give a toy example, if I were to use the examples "King cobra, Black mamba" to contrast with "Boa constrictor, Anaconda" you'd probably see "Ah, I get it! Venomous snakes vs non-venomous snakes", but that's not the distinction I'm thinking of so now I have to be more careful with my selection of examples. I could say "King cobra, Reticulated python" vs "Rattlesnake, Anaconda", but now you're just going to say "I don't get it" (or worse yet, you might notice "Ah, Asia vs the Americas!"). At some point you just have to stop the guessing game, say "live young vs laying eggs", and only get back to the concrete examples once they know where to be looking and why the other details aren't relevant.

Anything you have to teach which is sufficiently different from the person's pre-existing worldview is necessarily going to require the abstractions first. Even when you have concrete real life experiences that this person has gone through themselves, they will simply fail to recognize what is happening to them. Your conclusion "I showed three specific guesses of what Michael’s advice could mean for Drew, but we have no idea what it does mean, if anything." is kinda the point. When you're learning new ways of looking at things, you're not going to immediately be able to cash them out into specific predictions. Noticing this is an important step that must come before evaluating predictions for accuracy if you're going to evaluate reliably. You do have to be able to get specific eventually, or else the new abstractions won't have any way to provide value, but "more specificity" isn't always the best next step.


2) It seems like the main function you have for "can you give me a concrete example" is to force coherence by highlighting the gaps. Asking for concrete examples is one way of doing this, but it is not required. All you really need for that is a desire to understand how their worldview works, and you can do this in the abstract as well. You can ask "Can you give me a concrete example?", but you could also ask "What do you think of the argument that Uber workers could simply work for McDonald's instead if Uber isn't treating them right?". Their reasoning is in the abstract, and it will have holes in the abstract too.

You could even ask "What do you mean by 'exploits its workers'?", so long as it's clear that your intent is to really grok how their worldview works, and not just trying to pick it apart in order to make them look dumb. In fact, your hypothetical example was a bit jarring to me, because "what do you mean by [..]" is exactly the kind of thing I'd ask and "Come on, you know!" isn't a response I ever get.

3) Am I understanding your post correctly that you're giving a real-world example of you not using the skill you're aiming to teach, and then a purely fictional example of how you imagine the conversation would have gone if you had?

I'd be very hesitant to accept that you've drawn the right conclusion about what is actually going on in people's heads if you cannot show it with actual conversations and at the very least provoke cognitive dissonance, if not agreement and change. Otherwise, you have a "fictitious evidence" problem, and you're in essence trusting your analysis rather than actually testing your analysis.

You say "Once you’ve mastered the power of specificity, you’ll see this kind of thing everywhere: a statement that at first sounds full of substance, but then turns out to actually be empty.", but I don't see any indication anywhere that you've actually ruled out the hypothesis "they actually have something to say, but I've failed to find it".

comment by Liron · 2019-09-04T17:07:02.306Z · score: 7 (3 votes) · LW · GW

Re (3):

Well, the whole example is a fictional pastiche. I didn't force myself to make it super real because I didn't think people would doubt that it was sufficiently realistic. If you want to know a real example of a Steve, it's me a bunch of times when I first talked to Anna Salamon about various subjects.

comment by jimmy · 2019-09-05T18:34:20.975Z · score: 2 (1 votes) · LW · GW

Okay, I thought that might be the case but I wasn't sure because the way it was worded made it sound like the first interaction was real. ("You can see I was showing off my mastery of basic economics." doesn't have any "[in this hypothetical]" clarification and "This seemed like a good move to me at the time" also seems like something that could happen in real life but an unusual choice for a hypothetical).

To clarify though, it's not quite "doubt that it's sufficiently realistic". Where your simulated conversation differs from my experience is easily explained by differing subcommunication and preexisting relationships, so it's not "it doesn't work this way" but "it doesn't *have to* work this way". The other part of it is that even if the transcript was exactly something that happened, I don't see any satisfying resolution. If it ended in "Huh, I guess I didn't actually have any coherent point after all", it would be much stronger evidence that they didn't actually have a coherent point -- even if the conversation were entirely fictional but plausible.

comment by Liron · 2019-09-05T22:42:14.137Z · score: 2 (1 votes) · LW · GW

If it ended in "Huh, I guess I didn't actually have any coherent point after all", it would be much stronger evidence that they didn't actually have a coherent point -- even if the conversation were entirely fictional but plausible.

Ok I think I see your point! I've edited the dialogue to add:

Steve thinks for a little while...
Steve: I don't know all the exploitative shit Uber does ok? I just think Uber is a greedy company.