Posts

Asch Conformity Could Explain the Conjunction Fallacy 2020-08-06T21:46:57.227Z · score: 6 (6 votes)
Can Social Dynamics Explain Conjunction Fallacy Experimental Results? 2020-08-05T08:50:05.855Z · score: 7 (5 votes)
Irrational Resistance to Business Success 2020-08-02T19:15:14.589Z · score: 3 (2 votes)
Principles Behind Bottlenecks 2020-08-01T22:48:10.499Z · score: 1 (1 votes)
Comment Replies for Chains, Bottlenecks and Optimization 2020-07-24T18:51:32.294Z · score: 1 (1 votes)
Bottleneck Examples 2020-07-23T18:23:40.966Z · score: 12 (4 votes)
Chains, Bottlenecks and Optimization 2020-07-21T02:07:27.953Z · score: 14 (10 votes)
Open Letter to MIRI + Tons of Interesting Discussion 2017-11-22T21:16:45.231Z · score: -12 (3 votes)
Less Wrong Lacks Representatives and Paths Forward 2017-11-08T19:00:20.866Z · score: -7 (4 votes)
AGI 2017-11-05T20:20:56.338Z · score: 0 (0 votes)
Intent of Experimenters; Halting Procedures; Frequentists vs. Bayesians 2017-11-04T19:13:46.762Z · score: 1 (1 votes)
Simple refutation of the ‘Bayesian’ philosophy of science 2017-11-01T06:54:20.510Z · score: 1 (1 votes)
Questions about AGI's Importance 2017-10-31T20:50:22.094Z · score: -14 (2 votes)
Reason and Morality: Philosophy Outline with Links for Details 2017-10-30T23:33:38.496Z · score: 0 (0 votes)
David Deutsch on How To Think About The Future 2011-04-11T07:08:42.530Z · score: 4 (34 votes)
Do people think in a Bayesian or Popperian way? 2011-04-10T10:18:28.936Z · score: -22 (35 votes)
reply to benelliott about Popper issues 2011-04-07T08:11:14.351Z · score: -1 (18 votes)
Popperian Decision making 2011-04-07T06:42:38.957Z · score: -1 (22 votes)
Bayesian Epistemology vs Popper 2011-04-06T23:50:51.766Z · score: 1 (33 votes)

Comments

Comment by curi on Asch Conformity Could Explain the Conjunction Fallacy · 2020-08-07T19:56:52.282Z · score: 1 (1 votes) · LW · GW

There was a response to this post at http://curi.us/2359-asch-conformity-could-explain-the-conjunction-fallacy#16985

Comment by curi on Can Social Dynamics Explain Conjunction Fallacy Experimental Results? · 2020-08-06T21:50:13.330Z · score: 1 (1 votes) · LW · GW

People were interviewed after the research and asked to explain their answers. There were social feedback mechanisms. Even if there wasn't peer-to-peer social feedback, it was certainly possible to annoy the authority (the researchers) giving you the questions (like annoying a teacher who gives you a test). The researchers want you to answer a particular way, so people reasonably guess what that is, even if they don't already have that way highly internalized (as most people do).

This is how people have learned to deal with questions in general. And people are correct to be very wary of guessing "it's safe to be literal now". Often when it looks safe, it's not, so people come to the reasonable rule of thumb that it's never safe, and they basically decide (though not as a conscious decision) that maintaining a literalist personality to be used very rarely, when it's hard to even identify any safe times to use it, is not worth the cost. People have near-zero experience in situations where being hyper-literal (or whatever you want to call it) won't be punished. Those scenarios barely exist. Even science, academia and Less Wrong mostly aren't like that.

More on this in my follow-up post: Asch Conformity Could Explain the Conjunction Fallacy

Comment by curi on Can Social Dynamics Explain Conjunction Fallacy Experimental Results? · 2020-08-05T19:01:48.144Z · score: 3 (2 votes) · LW · GW

Yeah, (poor) context isolation is a recurring theme I've observed in my discussions and debates. Here's a typical scenario:

There's an original topic, X. Then we talk back and forth about it for a bit: C1, D1, C2, D2, C3, D3, C4, D4. The C messages are me and D is the other guy.

Then I write a reply, C5, about a specific detail in D4. Often I quote the exact thing I'm replying to or explain what I'm doing (e.g. a statement like "I disagree with A because B" where A was something said in D4).

Then the person writes a reply (more of a non sequitur from my pov) about X.

People routinely try to jump the conversation back to the original context/topic. And they make ongoing attempts to interpret things I say in relation to X. Whatever I say, they often try to jump to conclusions about my position on X from it.

I find it very hard to get people to stop doing this. I've had little success even with explicit topic shifts like "I think you're making a discussion methodology mistake, and talking about X won't be productive until we get on the same page about how to discuss."

Another example of poor context isolation is when I give a toy example that'd be trivial to replace with a different toy example, but they start getting hung up on specific details of the example chosen. Sometimes I make the example intentionally unrealistic and simple because I want it to clearly be a toy example and I want to get rid of lots of typical context, but then they get hung up specifically on how unrealistic it is.

Another common example is when I compare X and Y regarding trait Z, and people get hung up b/c of how X and Y compare in general. Me: X and Y are the same re Z. Them: X and Y aren't similar!

I think Question-Ignoring Discussion Pattern is related, too. It's a recurring pattern where people don't give direct responses to the thing one just said.

And thanks for the link. It makes sense to me, and I think social dynamics ideas are some of the ones most often coupled/contextualized. I think it's really important to be capable of thinking about things from multiple perspectives/frameworks, but most people really just have the one way of thinking (and have enough trouble with that), and for most people their one way has a lot of social norms built into it, because they live in society – you need 2+ thinking modes in order for it to make sense to have one without social norms; otherwise you don't have a way to get along with people. Some people compromise and build fewer social norms into their way of thinking because that's easier than learning multiple separate ways to think.

Comment by curi on What are you looking for in a Less Wrong post? · 2020-08-03T20:30:04.606Z · score: 1 (1 votes) · LW · GW

He said, “Well, um, I guess we may have to agree to disagree on this.”

I [Yudkowsky] said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”

...

Robert Aumann’s Agreement Theorem shows that honest Bayesians cannot agree to disagree

...

Regardless of our various disputes, we [Yudkowsky and Hanson] both agree that Aumann’s Agreement Theorem extends to imply that common knowledge of a factual disagreement shows someone must be irrational.

...

Nobel laureate Robert Aumann—who first proved that Bayesian agents with similar priors cannot agree to disagree

Do you think I'm misunderstanding the sequences or do you disagree with them?

Just because it's not fully proven in practice by math doesn't mean it isn't a broadly true and useful idea.

Comment by curi on Irrational Resistance to Business Success · 2020-08-03T08:18:54.641Z · score: 1 (1 votes) · LW · GW

In this situation, it sounds like the problem is that improvement for the plant came at cost for the DCs

Why do you think so? Merely because they are complaining or for some other reason?

The DCs were unable to substantively identify any problem that was created for them. And they spent 9 months refusing to use measurements or evidence to address this matter, in addition to failing to explain any cause-and-effect logic about what the problem they're now facing is and how it was caused by the change in production. (And, on top of that, without quantifying the alleged cost for them, the DCs want a change to production that will be costly, even though they have done no meaningful comparison to discover which cost is bigger.)

Comment by curi on What are you looking for in a Less Wrong post? · 2020-08-02T20:18:25.432Z · score: 6 (6 votes) · LW · GW

The main issue for me in deciding whether to write comments is whether I think discussion to a conclusion is available. Rationalists can't just agree to disagree, but in practice almost all discussions end without agreement and without the party choosing to end the discussion explaining their reasons. Just like at most other forums, most conversations seem to have short time limits which are very hard to override regardless of the content of the discussion.

I'm interested in things like finding and addressing double cruxes and otherwise getting some disagreements resolved. I want conversations where at least one of us learns something significant. I don't like for us each to give a few initial arguments and then stop talking. Generally I've already heard the first few things that other people say (and often vice versa too), so the value in the conversation mostly comes later. (The initial part of the discussion where you briefly say your position mostly isn't skippable. There are too many common positions, that I've heard before, for me to just guess what you think and jump straight into the new stuff.)

I occasionally write comments even without an expectation of substantive discussion. That's mostly because I'm interested in the topic and can use writing to help improve my own thoughts.

Comment by curi on Chains, Bottlenecks and Optimization · 2020-07-24T19:26:14.775Z · score: 1 (1 votes) · LW · GW

I replied at https://www.lesswrong.com/posts/qEkX5Ffxw5pb3JKzD/comment-replies-for-chains-bottlenecks-and-optimization

Comment by curi on Chains, Bottlenecks and Optimization · 2020-07-24T19:25:59.503Z · score: 1 (1 votes) · LW · GW

I replied at https://www.lesswrong.com/posts/qEkX5Ffxw5pb3JKzD/comment-replies-for-chains-bottlenecks-and-optimization

Comment by curi on Chains, Bottlenecks and Optimization · 2020-07-24T19:25:38.791Z · score: 1 (1 votes) · LW · GW

I replied at https://www.lesswrong.com/posts/qEkX5Ffxw5pb3JKzD/comment-replies-for-chains-bottlenecks-and-optimization

Comment by curi on Chains, Bottlenecks and Optimization · 2020-07-23T18:25:28.261Z · score: 1 (1 votes) · LW · GW

I just posted this, which I think will answer your questions. Let me know if not.

https://www.lesswrong.com/posts/Z66PxZnZJNSXzf8TL/bottleneck-examples

Comment by curi on Chains, Bottlenecks and Optimization · 2020-07-23T18:24:33.363Z · score: 3 (2 votes) · LW · GW

I replied at https://www.lesswrong.com/posts/Z66PxZnZJNSXzf8TL/bottleneck-examples

Comment by curi on Chains, Bottlenecks and Optimization · 2020-07-22T00:14:57.363Z · score: 1 (1 votes) · LW · GW

Yes, even a factory process often isn't a chain because e.g. a workstation may take inputs from multiple previous workstations and combine them.

Do you have a specific limit in mind re non-linear systems? I'm not clear on what the problem is.

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-10T09:02:15.025Z · score: 0 (0 votes) · LW · GW

You need a framework (any framework), but never provided one. I have a written framework, you don't. GG.

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-10T03:06:36.793Z · score: 0 (0 votes) · LW · GW

genetic algorithms often write and later read data, just like e.g. video game enemies. your examples are irrelevant b/c you aren't addressing the key intellectual issues. this example also adds nothing new over examples that have already been addressed.

you are claiming it's a certain kind of writing and reading data (learning) as opposed to other kinds (non-learning), but aren't writing or referencing anything which discusses this matter. you present some evidence as if no analysis of it was required, and you don't even try to discuss the key issues. i take it that, as with prior discussion, you're simply ignorant of what the issues are (like you simply take an unspecified common sense epistemology for granted, rather than being able to discuss the field). and that you won't want to learn or seriously discuss, and you will be hostile to the idea that you need a framework in which to interpret the evidence (and thus go on using your unquestioned framework that is one of the cultural defaults + some random and non-random quirks).

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T00:50:26.014Z · score: 0 (0 votes) · LW · GW

If you want to debate that, you need an epistemology which says what "knowledge" is. References to where you have that with full details to rival Critical Rationalism?

Or are you claiming the OP is mistaken even within the CR framework..? Or do you have no rival view, but think CR is wrong and we just don't have any good philosophy? In that case the appropriate thing to do would be to answer this challenge that no one even tried to answer: https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T22:22:51.912Z · score: 0 (0 votes) · LW · GW

AlphaZero clearly isn't general purpose. What are we even debating?

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T21:56:36.837Z · score: 0 (0 votes) · LW · GW

yes that'd be my first guess – that it's caused by something in the gene pool of orcas. why not? and what else would it be?

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T21:46:34.448Z · score: 1 (1 votes) · LW · GW

Here are some examples of domains other than game playing: architecture, chemistry, cancer research, website design, cryonics research, astrophysics, poetry, painting, political campaign running, dog toy design, knitting.

The fact that the self-play method works well for chess but not poetry is domain knowledge the programmers had, not something alphazero figured out for itself.

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T11:02:59.209Z · score: 0 (0 votes) · LW · GW

If they wanna convince anyone it isn't using domain-specific knowledge created by the programmers, why don't they demonstrate it in the straightforward way? Show results in 3 separate domains. But they can't.

If it really has nothing domain specific, why can't it work with ANY domain?

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T05:17:19.896Z · score: 1 (1 votes) · LW · GW

They chose a limited domain and then designed and used an algorithm that works in that domain – which constitutes domain knowledge. The paper's claim is blatantly false; you are gullible and appealing to authority.

Comment by curi on The Critical Rationalist View on Artificial Intelligence · 2017-12-07T19:23:49.442Z · score: 0 (0 votes) · LW · GW

"This means we can create any knowledge which it is possible to create."

Is there any proof that this is true?

are you asking for infallible proof, or merely argument?

anything rigorous?

see this book http://beginningofinfinity.com (it also addresses most of your subsequent questions)

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T19:57:18.538Z · score: -1 (1 votes) · LW · GW

That aspect of the timeline actually is public information, you just don't know it. Again you've made a false factual claim (about what is or isn't public info).

You are clinging to a false narrative from a position of ignorance, while still trying to attack me (now I suck at thinking in a fact based way, apparently because I factually corrected you) rather than reconsidering anything.

I've told you what happened. You don't believe me and started making up factually false claims to fit your biases, which aren't going anywhere when corrected. You think someone like David Deutsch couldn't possibly like and value my philosophical thinking all that much. You're mistaken.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T19:38:10.515Z · score: -1 (1 votes) · LW · GW

That situation today doesn't prevent you from being ignorant of things like timelines. Your claim that "you provided a valuable service to him by organising an online forum as a volunteer and as a result he saw you as a friend who got to read his draft and he listened to your feedback on his draft" is factually false. I didn't run or own those forums at the time. I did not in fact get to read "his draft" (falsely singular) due to running a forum.

You don't know what you're talking about and you're making up false stories.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T19:24:10.059Z · score: -1 (1 votes) · LW · GW

There literally is such a statement as the one you deny exists: he put the word "especially" before my name. He also told me directly. You are being dishonest and biased.

Your comments about organizing a forum, etc, are also factually false. You don't know what you're talking about and should stop making false claims.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T18:50:15.705Z · score: -1 (1 votes) · LW · GW

You seem to be implying I'm a liar while focusing on making factual claims in an intentionally biased way (you just saw, but omitted, relevant information b/c it doesn't help "your side", which is to attack me).

Your framing here is as dishonest, hostile, and unfair as usual: I did not claim to be a coauthor.

You are trying to attack informality as something bad or inferior, and trying to deny my status as a professional colleague of Deutsch who was involved with the book in a serious way. You are, despite the vagueness and hedging, factually mistaken about what you're suggesting. Being a PhD student under Deutsch would have been far worse – much less attention, involvement, etc. But you are dishonestly trying to confuse the issues by switching btwn arguing about formality itself (who cares? but you're using it as a proxy for other things) and actually talking about things that matter (quality, level of involvement, etc).

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T16:16:18.118Z · score: -1 (1 votes) · LW · GW

I didn't correspond with David Deutsch in an "informal way" as you mean it. For example, I was the most important editor of BoI (other than DD ofc).

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T16:04:18.531Z · score: -1 (1 votes) · LW · GW

I didn't say it was untypical, i was replying to the parent comment. Pay attention instead of responding out-of-context.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T14:55:03.770Z · score: -1 (1 votes) · LW · GW

Do you want new material which is the same as previous material, or different? If the same, I don't get it. If different, in what ways and why?

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T23:05:12.610Z · score: 0 (0 votes) · LW · GW

they've never learned or dealt with high-quality ideas before. they don't think those exist (outside certain very specialized non-philosophy things mostly in science/math/programming) and their methods of dealing with ideas are designed accordingly.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T16:11:01.106Z · score: 0 (0 votes) · LW · GW

You are grossly ignorant of CR, which you grossly misrepresent, and you want to reject it without understanding it. The reasons you want to throw it out while attacking straw men are unstated and biased. Also, you don't have a clear understanding of what you mean by "induction" and it's a moving target. If you actually had a well-defined, complete position on epistemology I could tell you what's logically wrong with it, but you don't. For epistemology you use a mix of 5 different versions of induction (all of which together still have no answers to many basic epistemology issues), a buggy version of half of CR, as well as intuition, common sense, what everyone knows, bias, etc. What an unscholarly mess.

What you do have is more ability to muddy the waters than patience or interest in thinking. That's a formula for never knowing you lost a debate, and never learning much. It's understandable that you're bad at learning about new ideas, bad at organizing a discussion, bad at keeping track of what was said, etc, but it's unreasonable that, due to your inability to discuss effectively, you blame CR methodology for the discussion not reaching a conclusion fast enough and quit. The reason you think you've found more success when talking with other people is that you find people who already agree with you about more things before the discussion starts.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T13:24:35.132Z · score: -1 (1 votes) · LW · GW

again: i and others already wrote it and they don't want to read it. how will writing it again change anything? they still won't want to read it. this request for new material makes no sense whatsoever. it's not that they read the existing material and have some complaint and want it to be better in some way, they just won't read.

your community as a whole has no answer to some fairly famous philosophers and doesn't care. everyone is just like "they don't look promising" and doesn't have arguments.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T05:38:37.023Z · score: 0 (0 votes) · LW · GW

Is the crackpot being responsive to the issues and giving arguments – arguments are what matter, not people – or is he saying non-sequiturs and refusing to address questions? If he speaks to the issues we can settle it quickly; if not, he isn't participating and doesn't matter. If we disagree about the nature of what's taking place, it can be clarified, and I can make a judgement which is open to Paths Forward. You seem to wish to avoid the burden of this judgement by hedging with a "probably".

Fallibility isn't an amount. Correct arguments are decisive or not; confusion about this is commonly due to vagueness of problem and context (which are not matters of probability and cannot be accurately summed up that way). See https://yesornophilosophy.com

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T21:57:18.439Z · score: 0 (0 votes) · LW · GW

i have tools to talk about uncertainty, which are different than your tools, and which conceive of uncertainty somewhat differently than you do.

i have not figured it ALL out, but many things, such as the quality of SENS and twin studies.

fallibilism is one of the major philosophical ideas used in figuring things out. it's crucial but it doesn't imply, as you seem to believe, hedging, ignorance, equivocation, not knowing much, etc.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T21:36:05.806Z · score: 0 (0 votes) · LW · GW

the VCs would laugh, like you, and don't want to hear it. surely this doesn't surprise you.

i'm also not a big fan of yachts and prefer discussions.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T21:10:51.787Z · score: 0 (0 votes) · LW · GW

What is an "intellectual" fixing of an error instead of a plain-vanilla fixing of an error?

I'm talking about identifying an error and writing a better idea. That's different than e.g. spending 50 years working on the better idea or somehow getting others to.

What's the % chance that he is correct? AFAIK he has been saying the same thing for years.

Yeah it's been staying the same due to lack of funding.

I don't typically do % estimates like you guys, but I read his book and some other material (for his side and against), and talked with him, and I believe (using philosophy) his ideas merit major research attention over their rivals.

You don't think that figuring out which ideas are "best available" is the hard part? Everyone and his dog claims his idea is the best.

well, using philosophy i did that hard part and figured out which ones are good.

I don't think that's true. Most people don't want to live for a long time as wrecks with Alzheimer's and pains in every joint, but invent a treatment that lets you stay at, say, the 30-year-old level of health indefinitely and I bet few people will refuse (at least the non-religious ones).

oh they won't refuse that after it's cheaply available. they are confused and inconsistent.

Why is there a "should"?

b/c i didn't want the interpretation that it can be explained multiple ways. i'm advocating just the one option.

The twin studies are garbage, btw

All of them?

i have surveyed them and found them to all be garbage. i looked specifically at ones with some of the common, important conclusions, e.g. about heritability of autism, IQ, that kinda stuff. they have major methodological problems. but i imagine you could find some study involving twins, about something, which is ok.

if you believe you know a twin study that is not garbage, would you accept an explanation of why it's garbage as a demonstration of the power and importance of CR philosophy?

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T20:36:31.997Z · score: 0 (0 votes) · LW · GW

As you could have guessed, I'm already familiar with Cato. If you're not plugged into these networks, why are you trying to make claims about them?

Fixing these errors should produce tangible results and if the errors are massive,

No, I was talking about intellectual fixing of errors. That could lead to tangible results if ppl in the fields used the improved ideas, but i don't claim to know how to get them to do that.

So where is my cure for aging?

Aubrey de Grey says there's a 50% chance it's $100 million a year of funding, for 10 years, away. That may be optimistic, but he has some damn good points about science that merit a lot of research attention ASAP. But he's massively underfunded anyway (partly b/c his approach to outreach is wrong, but he doesn't want to hear that or change it).

The holdup here isn't needing new scientific ideas (there's already an outlier offering those and telling the rest of the field what they're doing wrong) – it's most scientists and funders not wanting the best available ideas. Also, related, most people are pro-aging and pro-death so the whole anti-aging field itself has way too little attention and funding even for the other approaches.

Generally translated as "I don't like the conclusions which science came up with" :-D

I agree, though I don't think I agree with the people you named. The homosexuality stuff and the race/IQ stuff can and should be explained in terms of culture, memes, education, human choice, environment, etc. The twin studies are garbage, btw. They routinely do things like consider two people living in the US to have no shared environment (despite living in a shared culture).

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T19:48:41.402Z · score: 0 (0 votes) · LW · GW

I suppose you're going to tell me that pushing or pulling my spouse out of the way of a car that was going to hit them, without asking for consent first (don't have time), is using force against them, too, even though it's exactly what they want me to do. While still not explaining what you think "force" is, and not acknowledging that TCS's claims must be evaluated in its own terminology.

At that point I'll wonder what types of "force" you advocate using against children that you do not think should be used on adults.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T19:35:31.576Z · score: 0 (0 votes) · LW · GW

Critical Rationalism (CR)

CR is an epistemology developed by 20th century philosopher Karl Popper. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluating ideas. Epistemology says what reason is and how it works (except the epistemologies which reject reason, which we’ll ignore). Epistemology is the most important intellectual field, because reason is used in every other field. How do you figure out which ideas are good in politics, physics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them at a large intellectual disadvantage.

Epistemology offers methods, not answers. It doesn't tell you which theory of gravity is true; it tells you how to productively think and argue about gravity. It doesn't give you a fish or tell you how to catch fish; instead it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology tells you how to handle disagreements (which are common to every field).

CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics – anything – not just science, observation, data and prediction. CR can even evaluate itself.

Fallibility

CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (not a mistake). And mistakes are common so we shouldn’t try to ignore fallibility (it’s not a rare edge case). It’s also impossible to get a 99% or even 1% guarantee that an idea is true. Some mistakes are unpredictable because they involve issues that no one has thought of yet.

There are decisive logical arguments against attempts at infallibility (including probabilistic infallibility).

Attempts to dispute fallibilism are refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a way around.

CR’s response to this is to accept our fallibility and figure out how to deal with it. But that’s not what most philosophers have done since Aristotle.

Most philosophers think knowledge is justified, true belief, and that they need a guarantee of truth to have knowledge. So they have to either get around fallibility or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working computers and space shuttles. But there’s no way around fallibility, so philosophers have been deeply confused, come up with dumb ideas, and given philosophy a bad name.

So philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The way out is to check your premises. CR solves this problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to the incorrect “justified, true belief” theory of knowledge and the perspective behind it.

Justification is the Major Error

The standard perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, so we try to argue for the idea to show it’s good. We try to prove it, or approximate proof in some lesser way. A new idea starts with no status (it’s a mere guess, hypothesis, speculation), and can become knowledge after being justified enough.

Justification is always due to some thing providing the justification – be it a person, a religious book, or an argument. This is fundamentally authoritarian – it looks for things with authority to provide justification. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification. Which arguments have the authority to provide justification? That status has to be granted by some prior source of justification, which leads to another regress.

Fallible Knowledge

CR says we don’t have to justify our beliefs, instead we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.

When a new idea is proposed, don't ask "How do you know it?" or demand proof or justification. Instead, consider whether you see anything wrong with it. If you see nothing wrong with it, then it's a good idea (knowledge). Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn't prevent it from being useful and effective (e.g. building spacecraft that successfully reach the moon). You don't need justification or perfection to reach the moon, you just need to fix errors with your designs until they're good enough to work. This approach avoids the regress problems and is compatible with fallibility.

The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.

CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we’ll make, and the better our ideas will be.”

Guesses and Criticism

Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how knowledge can be created.)

How do you know which critical arguments are correct? Wrong question. You just guess it, and the critical arguments themselves are open to criticism. What if you miss something? Then you'll be mistaken, and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (indicating where to direct more attention and try to improve something).

CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.

Science and Evidence

CR pays extra attention to science. First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes some empirical claim about reality.

Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but no evidence for. Evidence is either compatible with a hypothesis, or not, and no amount of compatible evidence can justify a hypothesis because there are infinitely many contradictory hypotheses which are also compatible with the same data.
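
Here's a toy example of that last point (the notation is made up just for this illustration): suppose a hypothesis h(x) fits every one of the n data points (x_1, y_1), ..., (x_n, y_n) observed so far. Then for any constant c, the rival hypothesis

$$h_c(x) = h(x) + c \prod_{i=1}^{n} (x - x_i)$$

is compatible with exactly the same data, because the added term is zero at every observed x_i, yet for any c ≠ 0 it contradicts h(x) everywhere else. Since there are infinitely many choices of c, compatible evidence alone can never justify h over these rivals; they have to be rejected by criticism (e.g. as arbitrary, bad explanations), not by the data.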

These two points are where CR has so far had the largest influence on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence. (Parts of this are now taken for granted by many people who don’t realize they’re fairly new ideas.)

CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can keep track of. So we have to do a targeted search according to some guesses about what might be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.

We also need to interpret our evidence. We don’t see puppies, we see photons which we interpret as meaning there is a puppy over there. This interpretation is fallible – sometimes people are confused by mirrors, mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), etc.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T19:22:17.608Z · score: 0 (0 votes) · LW · GW

The sequence idea doesn't work b/c you can criticize sequences or categories as a whole, criticism doesn't have to be individualized (and typically shouldn't be – you want criticisms with some generality).

Most falsifiable hypotheses are rejected for being bad explanations, containing internal contradictions, or other issues – without empirical investigation. This is generally cheaper and is done with critical argument. If someone can generate a sequence of ideas you don't know of any critical arguments against, then you actually do need some better critical arguments (or else they're actually good ideas). But your example is trivial to criticize – what kind of science fairy? Why will it appear in that case? If you accelerate a proton past a certain speed, will that work, or does it have to stay at that speed for a certain amount of time? Does the fairy or sticker have mass or energy and violate a conservation law? It's just arbitrary, underspecified nonsense.


most ppl who like most things are not so great. that works for Popper, induction, socialism, Objectivism, Less Wrong, Christianity, Islam, whatever. your understanding of Popper is incorrect, and your experiences do not give you an accurate picture of Popper's work. meanwhile, you don't know of a serious criticism of CR by someone who does know what they're talking about, whereas I do know of a serious criticism of induction which y'all don't want to address.

If you look at the Popper summary you linked, it has someone else's name on it, and it isn't on my website. This kind of misattribution is the quality of scholarship I'm dealing with here. anyway here is an excerpt from something i'm currently in the process of writing.

(it says "Comment too long" so i'm going to try putting it in a reply comment, and if that doesn't work i'll pastebin it and edit in the link. it's only 1500 words.)

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T18:55:31.267Z · score: 0 (0 votes) · LW · GW

Funny how a great deal of libertarians like her a lot...

Where can I find them?

You are a proponent of one-bit thinking, are you not? In Yes/No terms de Grey set himself a goal and failed at it.

This is an over-simplification of a nuanced theory with a binary aspect. You don't know how YESNO works, have chosen not to find out, and can't speak to it.

Gregory Cochran

According to a quick googling, this guy apparently thinks that homosexuality is a disease. Is that the example you want to use and think I won't be able to point out any flaws in? There seems to be some political bias/hatred in this webpage, so maybe it's not an accurate secondary source. Meanwhile I read that "Khan’s career exemplifies the sometimes-murky line between mainstream science and scientific racism."

I am potentially OK with this topic, but it gets into political controversies which may be distracting. I'm concerned that you'll disagree with me politically (rather than scientifically) when I comment. What do you think? Also I think you should pick something more specific than their names, e.g. is there a particular major paper of interest? Cuz I don't wanna pick a random paper from one of them, find errors, and then you say that isn't their important work.

Also, at first glance, it looks like you may have named some outliers who may consider their field (most of the ppl/work/methods in it) broadly inadequate, and therefore might actually agree with my broader point (about the possibility of going into fields and pointing out inadequacies if you know what you're doing, due to the fields being inadequate).

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T06:37:34.808Z · score: 0 (0 votes) · LW · GW

i literally already gave u a definition of force and suggested you had no idea what i was talking about. you ignored me. this is 100% your fault and you still haven't even tried to say what you think "force" is.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T06:13:17.666Z · score: 0 (0 votes) · LW · GW

Let's see... Soviet Russia lived (relatively) happily until 1991 when it imploded through no effort of Ayn Rand. Libertarianism is not a major political force in any country that I know of. So, not that much influence.

Considering Rand was anti-libertarianism, you don't know the first thing about her.

You are a good philosopher, yes? Would you like to demonstrate this with some scientific field?

sure, wanna do heritability studies? cryonics?

de Grey runs a medical think tank that so far has failed at its goal. In which way did he "fix massive errors"?

did you read his book? ppl were using terrible approaches and he came up with much better ones.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:51:38.159Z · score: 0 (0 votes) · LW · GW

Nope, that's true only if I want to engage in this discussion and I don't. Been there, done that, waiting for the t-shirt.

i don't suppose you or anyone else wrote down your reasoning. (this is the part where either you provide no references, or you provide one that i have a refutation of, and then you don't respond to the problems with your reference. to save time, let's just skip ahead and agree that you're unserious, ignorant, and mistaken.)

Yes. Using that meaning, the sentence "I mean psychological "torture" literally" is false.

i disagree that it's false. you aren't giving an argument.

are you aware of many common ways force is initiated against children?

Of course. So?

well if you don't want to talk about it, then i guess you can continue your life of sin.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:41:38.164Z · score: 0 (0 votes) · LW · GW

Sorry that was a typo, the word "philosopher" should be "philosophy".

How would they transform the world? Well consider the influence Ayn Rand had. Now imagine 1000 people, who all surpass her (due to the advantages of getting to learn from her books and also getting to talk with each other and help each other), all doing their own thing, at the same time. Each would be promoting the same core ideas. What force in our current culture could stand up to that? What could stop them?

Concretely, some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns, run think tanks, dominate any areas of intellectual discourse they care to, etc. (Trump only won because his campaign was run, to a partial extent, by lesser philosophers like Coulter, Miller and Bannon. They may stand out today, but they have nothing on a real philosopher like Ayn Rand. They don't even claim to be philosophers. And yet it was still enough to determine the US presidency. What more do you want as a demonstration of the power of ideas than Trump's Mexican rapists line, learned from Coulter's book? Science? We have that too! And a good philosopher can go into whatever scientific field he wants and identify and fix massive errors currently being made due to the wrong methods of thinking. Even a mediocre philosopher like Aubrey de Grey managed to do something like that.)

They could discuss whatever problems came up to stop them. The discussion quality, with 1000 great thinkers, would far surpass any discussions that have ever existed, and so it would be highly effective compared to anything you have experience with.

As the earliest adopters catch on, the next earliest will, and so on, until even you learn about it, and then one day even Susie Soccer Mom.

Have you read Atlas Shrugged? It's a book in which a philosophy teacher and his three star students change the world.

Look at people like Jordan Peterson or Eliezer Yudkowsky and then try to imagine someone with ~100x better ideas and how much more effective that would be.

His ideas got to be very very popular.

He spread bad ideas which have played a major role in killing over a hundred million people and it looks like they will kill billions before they're done (via e.g. all the economic harm that delays medical science to save people from dying of aging). Oops... As an intellectual, Marx fucked up and did it wrong. Also he's been massively misunderstood (I'm not defending him; he's guilty; but also I don't think he'd actually like or respect most of his fans, who use him as a symbol for their own purposes rather than seriously studying his writing.)

Presumably specially selected since early childhood since normal upbringing produces mental cripples?

a few people survive childhood. you might want to read The Inexplicable Personal Alchemy by Ayn Rand (essay, not book). or actually i doubt you do... but i mean that's the kind of thing you could do if you wanted to understand.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:20:20.231Z · score: 0 (0 votes) · LW · GW

Huh, you're someone who would get the name of ARR [1] wrong? I didn't expect that. You're giving away significant identifying information, FYI. Why are you hiding your identity from me, btw?

And DD's status has a significant counterproductive aspect – it intimidates people and prevents him from being contacted in some ways he'd like.

Feynman complained bitterly about his Nobel prize, which he didn't want, but they didn't give him the option to decline it privately (so that no one found out). After he got it, he kept getting the wrong kinds of people at his public lectures (non-physicists) which heavily pressured him to do introductory lectures that they could understand. (He did give some great lectures for lay people, but he also wanted to do advanced physics lectures.) Feynman made an active effort not to intimidate people and to counteract his own high status.

[1] http://curi.us/1539-autonomy-respecting-relationships

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:08:58.460Z · score: 0 (0 votes) · LW · GW

Of course you can help them, there are options other than violence. For example you can get a baby gate or a home without stairs. https://parent.guide/how-to-baby-proof-your-stairs/ Gates let them e.g. move around near the top of the stairs without risk of falling down. Desired, consensual gates, which the child deems helpful to the extent he has any opinion on the matter at all, aren't force. If the child specifically wants to play on/with the stairs, you can of course open the gate, put out a bunch of padding, and otherwise non-violently help him.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T03:57:07.981Z · score: 0 (0 votes) · LW · GW

Children don't want to fall down stairs. You can help them not fall down stairs instead of trying to force them. It's unclear to me if you know what "force" means. Here's the dictionary:

2 coercion or compulsion, especially with the use or threat of violence: they ruled by law and not by force.

A standard classical liberal conception of force is: violence, threat of violence, and fraud. That's the kind of thing I'm talking about. E.g. physically dragging your child somewhere he doesn't want to go, in a way that you can only do because you're larger and stronger. Whereas if children were larger and stronger than their parents, the dragging would stop, but you can still easily imagine a parent helping his larger child with not accidentally falling down stairs.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:45:18.542Z · score: 0 (0 votes) · LW · GW

I don't see what's to envy about Marx.

If you want to convert most of the world to your ideology you better call yourself a god then, or at least a prophet -- not a mere philosopher.

I'd be very happy to persuade 1000 people – but only counting productive doer/thinker types who learn it in depth. That's better than 10,000,000 fans who understand little and do less. I estimate 1000 great people with the right philosopher [typo: PHILOSOPHY] is enough to promptly transform the world, whereas the 10,000,000 fans would not.

EDIT: the word "philosopher" should be "philosophy" above, as indicated.

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:32:16.032Z · score: 0 (0 votes) · LW · GW

And persuading people that they need to die is kinda hard :-/

ppl don't need to die, that's wrong.

I understand this assertion. I don't think I believe it.

that's the part where you give an argument.


"torture" has an English meaning separate from emotional impact. you already know what it is. if you wanted to have a productive conversation you'd do things like ask for examples or give an example and ask if i mean that.

you don't seem to be aware that you're reading a summary essay and there's a lot more material, details, etc. you aren't treating it that way. and i don't think you want references to a lot more reading.

to begin with, are you aware of many common ways force is initiated against children?

Comment by curi on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T01:17:58.520Z · score: 0 (0 votes) · LW · GW

How successful do you think these are, empirically?

Roughly: everything good in all of history is from voluntary means. (Defensive force is acceptable but isn't a positive source of good, it's an attempt to mitigate the bad.) This is a standard (classical) liberal view emphasized by Objectivism. Do you have much familiarity? There are also major aggressive-force/irrationality connections, b/c basically ppl initiate force when they fail to persuade (as William Godwin pointed out) and force is anti-error-correction (making ppl act against their best judgement; and the guy with a gun isn't listening to reason).

@torture: The words have meanings. I agree many people use them imprecisely, but there's no avoiding words people commonly use imprecisely when dealing with subjects that most people suck at. You could try to suggest better wording to me but I don't think you could do that unless you already knew what I meant, at which point we could just talk about what I meant. The issues are important despite the difficulty of thinking objectively about them, expressing them adequately precisely in English, etc. And I'm using strong words b/c they correspond to my intended claims (which people usually dramatically underestimate even when I use words like "torture"), not out of any desire for emotional impact. If you wanted to try to understand the issues, you could. If you want it to be readily apparent, from the outset, how precise stuff is, then you need to start with the epistemology before its parenting implications.