Test Driven Thinking

post by Adam Zerner (adamzerner) · 2015-07-24T18:38:46.991Z · LW · GW · Legacy · 26 comments

Programmers do something called Test Driven Development. Basically, they first write tests that say "I expect my code to do this", then write the code itself, and if any code they subsequently write breaks a test, they'll be notified.

Wouldn't it be cool if there was Test Driven Thinking?

  1. Write tests: "I expect that this is true."
  2. Think: "I claim that A is true. I claim that B is true."
  3. If A or B causes any of your tests to fail, you'd be notified.
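The three steps above can be sketched as code. This is a toy illustration only, with made-up names (`make_belief_tracker`, `add_test`, `claim` are hypothetical, not a real library): tests are predicates over your current set of beliefs, and adopting a new claim re-runs every test and reports which ones fail.

```python
# A minimal sketch of "Test Driven Thinking" (hypothetical names,
# not a real library). Tests are predicates over a belief set;
# adding a claim re-runs every test and reports failures.

def make_belief_tracker():
    beliefs = set()
    tests = []

    def add_test(name, predicate):
        """Step 1: register a test, a predicate over the belief set."""
        tests.append((name, predicate))

    def claim(belief):
        """Steps 2-3: adopt a belief, re-run all tests, report failures."""
        beliefs.add(belief)
        return [name for name, pred in tests if not pred(beliefs)]

    return add_test, claim

add_test, claim = make_belief_tracker()
# Test: I expect never to hold both a belief and its negation.
add_test("no contradictions",
         lambda bs: not any(("not " + b) in bs for b in bs))

print(claim("A"))      # []  (all tests still pass)
print(claim("not A"))  # ['no contradictions']  (a test failed)
```

The "notification" in step 3 is just the list of failed test names; a richer version would also record *why* each test was registered.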

I don't know where to run with this though. Maybe someone else will be able to take this idea further. My thoughts:
  • It'd be awesome if you could apply TDT and be notified when your tests fail, but this seems very difficult to implement.
  • I'm not sure what a lesser but still useful version would look like.
  • Maybe this idea could serve as some sort of intuition pump for intellectual hygiene ("What do you think you know, and why do you think you know it?"). I.e., having understood the idea of TDT, maybe it'd motivate/help people apply intellectual hygiene, which is sort of like a manual version of TDT, where you're the one constantly running the tests.


26 comments


comment by Dagon · 2015-07-24T21:22:16.339Z · LW(p) · GW(p)

This is an interesting restatement of beliefs paying rent - in order to qualify as a useful belief to hold, it must be testable. And if it's testable, you should test it.

Replies from: Lumifer
comment by Lumifer · 2015-07-24T23:11:40.075Z · LW(p) · GW(p)

in order to qualify as a useful belief to hold, it must be testable.

Atheism..?

And if it's testable, you should test it.

I believe that jumping off tall buildings without a parachute is a bad idea. Should I test it?

Replies from: eternal_neophyte
comment by eternal_neophyte · 2015-07-25T00:12:01.192Z · LW(p) · GW(p)

Atheism can be legitimately viewed as a lack of belief, if you properly hedge your claims about whether or not it's possible for gods or other ethereal beings to exist.

Also testing a belief doesn't necessarily mean testing it in full. You've probably tested your belief in the lethality of long drops partially by falling out of trees as a child (or at least, I did).

Replies from: Lumifer
comment by Lumifer · 2015-07-25T01:10:18.314Z · LW(p) · GW(p)

Atheism can be legitimately viewed as a lack of belief

Not quite, that goes by the name of agnosticism. An atheist answers the question "Do gods exist?" by saying "No".

You've probably tested your belief in the lethality of long drops partially by falling out of trees as a child

The results of all these tests point out that falls are not lethal, of course :-P

Replies from: eternal_neophyte, None, Dentin
comment by eternal_neophyte · 2015-07-25T01:20:32.498Z · LW(p) · GW(p)

Provisionally accepting your distinction between atheism and agnosticism, in what way is the former useful and the latter not?

The results of all these tests point out that falls are not lethal, of course :-P

That's where an untested auxiliary belief figures in - "if something hurts in proportion to variable x (i.e. the height of the drop), experiencing that thing when x is very large will probably kill you".

That's basically the Duhem-Quine spiel right? Which is why strict falsificationism doesn't quite work. But that's not to say a weaker form of falsificationism can't work: a network of ideas is useful to the degree that nodes in the network are testable. A fully isolated network (such as a system of theology) is useless.

comment by [deleted] · 2015-07-25T10:34:06.804Z · LW(p) · GW(p)

Your definition of atheism doesn't seem to reflect the way the word is used. A good portion of self-identified atheists would in fact be agnostics under your definition. In fact, every flavour of atheism I would consider compatible with general LW beliefs would be agnosticism since we can only claim that P(god) is very small.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-03T15:50:27.933Z · LW(p) · GW(p)

Very few people reason in a way that uses probabilities.

Replies from: None
comment by [deleted] · 2015-08-04T00:15:04.840Z · LW(p) · GW(p)

True, but I would consider the most common chain of reasoning for atheism (Occam's razor, therefore no God) equivalent to thinking in terms of probabilities even if probabilities aren't explicitly mentioned.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-04T07:27:56.136Z · LW(p) · GW(p)

Occam's razor has little to do with probabilities.

Replies from: None
comment by [deleted] · 2015-08-04T11:52:28.032Z · LW(p) · GW(p)

Then why accept the simplest solution instead of say, the most beautiful solution, or the most intuitive solution?

Replies from: ChristianKl, VoiceOfRa
comment by ChristianKl · 2015-08-04T20:34:42.655Z · LW(p) · GW(p)

Because you decide to accept the simplest solution. At least that's true for most people. Very few people reason with probabilities.

comment by VoiceOfRa · 2015-08-12T09:38:03.479Z · LW(p) · GW(p)

Then why accept the simplest solution instead of say, the most beautiful solution, or the most intuitive solution?

Good question. I'd argue that actually accepting the most elegant solution is a better heuristic than accepting the simplest.

comment by Dentin · 2015-07-27T20:14:39.150Z · LW(p) · GW(p)

As an atheist, I answer the question "Do gods exist?" by saying "With the evidence we have right now, it is most likely that they do not."

comment by Gunnar_Zarncke · 2015-07-24T21:45:44.672Z · LW(p) · GW(p)

For predictions there are tools such as PredictionBook or Foresight Exchange.

Maybe the programming analogy can be pushed forward by applying pair programming: make the test a cooperative thing by mutually 'running' your tests. This is also suggested by "Others' predictions of your performance are usually more accurate" and "Bet Your Friends to Be More Right".

comment by kpreid · 2015-09-05T20:05:19.626Z · LW(p) · GW(p)

Your description of TDD is slightly incomplete: the steps include, after writing the test, running it when you expect it to fail. The idea is that if it doesn't fail, you have either written an ineffective test (this is more likely than one might think) or the code under test actually already handles that case.

Then you write the code (as little code as needed) and confirm that the test passes where it didn't before to validate that work.
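The cycle kpreid describes can be sketched in miniature. `slugify` here is a made-up example function, not anything from the thread:

```python
# Red-green cycle in miniature (slugify is a hypothetical example).
# Step 1: write the test first.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Calling test_slugify() at this point would raise NameError, since
# slugify doesn't exist yet. That expected failure is the "red" step,
# and it validates that the test actually exercises something.

# Step 2: write just enough code to make the test pass ("green").
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # now passes silently
```

Seeing the test fail first is the part that guards against the "ineffective test" failure mode mentioned above.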

comment by shminux · 2015-07-26T03:30:23.635Z · LW(p) · GW(p)

I work in software, and the hardest part is explaining to people that until you finish a test plan your requirements are incomplete.

comment by Lumifer · 2015-07-24T20:36:12.590Z · LW(p) · GW(p)

I can see two interpretations.

In one, this is equivalent to

  • Write down your forecasts and explicitly list reasons for your predictions
  • Check whether the forecast was correct
  • If it was not, re-evaluate the reasons listed in step 1.

In the other, this is just an internal consistency check: if you believe A, B, and C, make sure they can co-exist and do not contradict each other.

comment by MrMind · 2015-07-27T08:49:11.802Z · LW(p) · GW(p)

TDD is a declarative/generative process: you describe first what you want, then tell the computer how to achieve it, then test that against your initial requirement.

With respect to rationality, this cannot be taken literally, because you cannot generate reality, unless it's something like goal setting. In that case it would be akin to writing a SMART goal: how would you know if you have achieved it?
But in the case of beliefs about reality, the only test you can run is: how well does my model match reality?
As Lumifer pointed out, though, there's no simple way to check beliefs against a simple test that gives you only true/false information. For basically all interesting beliefs, you need to use probabilities and test within a Bayesian framework.

comment by Gunnar_Zarncke · 2015-07-26T09:31:42.721Z · LW(p) · GW(p)

For predictions regarding stock exchanges, it is possible to state your expectations in the form of ranges over time, and you will be informed when prices leave those ranges. It seems straightforward to extend this to more complex conditions.

comment by Voltairina · 2015-07-26T05:18:05.448Z · LW(p) · GW(p)

That seems like a job for an expert system - using formal reasoning from premises (as long as you can translate them comfortably into symbols), identifying whether a new fact contradicts any old fact...
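A toy version of such a consistency check can be brute-forced over propositional facts. This is illustrative only (a real expert system would use a rule engine, not exhaustive enumeration):

```python
# Brute-force consistency check over propositional facts: a fact set is
# consistent iff some True/False assignment satisfies every clause.
from itertools import product

def consistent(clauses, variables):
    """Return True if some assignment to the variables satisfies all clauses."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(clause(assignment) for clause in clauses):
            return True
    return False

# Old facts: A is true; A implies B.
facts = [lambda m: m["A"], lambda m: (not m["A"]) or m["B"]]
assert consistent(facts, ["A", "B"])      # so far so good

# A new fact ("B is false") contradicts the old ones.
facts.append(lambda m: not m["B"])
assert not consistent(facts, ["A", "B"])  # contradiction detected
```

Detecting that a new fact breaks consistency is exactly the "test failure notification" from the original post, restricted to beliefs you can translate into symbols.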

comment by eternal_neophyte · 2015-07-24T23:55:12.733Z · LW(p) · GW(p)

What exactly separates this from general critical thinking?

Replies from: drethelin
comment by drethelin · 2015-07-26T04:11:38.237Z · LW(p) · GW(p)

critical thinking is a buzzword

Replies from: eternal_neophyte
comment by eternal_neophyte · 2015-07-26T11:20:27.170Z · LW(p) · GW(p)

How so?

Replies from: ChristianKl
comment by ChristianKl · 2015-08-03T15:48:36.199Z · LW(p) · GW(p)

Different people mean different things with it.

comment by Sarunas · 2015-07-24T20:01:22.026Z · LW(p) · GW(p)

Am I correct to rephrase your idea as "People should develop a habit of applying reductio ad absurdum and, to some extent, the absurdity heuristic more often"?

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-07-24T20:05:24.118Z · LW(p) · GW(p)

No. I think it applies to more than absurd ideas.