Posts

Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? 2020-09-04T19:46:29.174Z
Mathematical Inconsistency in Solomonoff Induction? 2020-08-25T17:09:49.832Z
Analyzing Blackmail Being Illegal (Hanson and Mowshowitz related) 2020-08-20T18:56:19.087Z
Updating My LW Commenting Policy 2020-08-18T16:48:16.744Z
Rationally Ending Discussions 2020-08-12T20:34:14.951Z
Social Dynamics 2020-08-10T19:29:02.767Z
The Law of Least Effort Contributes to the Conjunction Fallacy 2020-08-09T19:38:11.091Z
Asch Conformity Could Explain the Conjunction Fallacy 2020-08-06T21:46:57.227Z
Can Social Dynamics Explain Conjunction Fallacy Experimental Results? 2020-08-05T08:50:05.855Z
Irrational Resistance to Business Success 2020-08-02T19:15:14.589Z
Principles Behind Bottlenecks 2020-08-01T22:48:10.499Z
Comment Replies for Chains, Bottlenecks and Optimization 2020-07-24T18:51:32.294Z
Bottleneck Examples 2020-07-23T18:23:40.966Z
Chains, Bottlenecks and Optimization 2020-07-21T02:07:27.953Z
Open Letter to MIRI + Tons of Interesting Discussion 2017-11-22T21:16:45.231Z
Less Wrong Lacks Representatives and Paths Forward 2017-11-08T19:00:20.866Z
AGI 2017-11-05T20:20:56.338Z
Intent of Experimenters; Halting Procedures; Frequentists vs. Bayesians 2017-11-04T19:13:46.762Z
Simple refutation of the ‘Bayesian’ philosophy of science 2017-11-01T06:54:20.510Z
Questions about AGI's Importance 2017-10-31T20:50:22.094Z
Reason and Morality: Philosophy Outline with Links for Details 2017-10-30T23:33:38.496Z
David Deutsch on How To Think About The Future 2011-04-11T07:08:42.530Z
Do people think in a Bayesian or Popperian way? 2011-04-10T10:18:28.936Z
reply to benelliott about Popper issues 2011-04-07T08:11:14.351Z
Popperian Decision making 2011-04-07T06:42:38.957Z
Bayesian Epistemology vs Popper 2011-04-06T23:50:51.766Z

Comments

Comment by curi on ricraz's Shortform · 2020-09-03T22:32:10.667Z · LW · GW

they're willing to accept ideas even before they've been explored in depth

People also reject ideas before they've been explored in depth. I've tried to discuss similar issues with LW before but the basic response was roughly "we like chaos where no one pays attention to whether an argument has ever been answered by anyone; we all just do our own thing with no attempt at comprehensiveness or organizing who does what; having organized leadership of any sort, or anyone who is responsible for anything, would be irrational" (plus some suggestions that I'm low social status and that therefore I personally deserve to be ignored. There were also suggestions – phrased rather differently but amounting to this – that LW will listen more if published ideas are rewritten, not to improve on any flaws, but so that the new versions can be published at LW before anywhere else, because the LW community's attention allocation is highly biased towards that).

Comment by curi on misc raw responses to a tract of Critical Rationalism · 2020-08-28T02:46:16.297Z · LW · GW

Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn't suitable for science.

Source?

Comment by curi on misc raw responses to a tract of Critical Rationalism · 2020-08-28T02:08:50.969Z · LW · GW

What anyone else thinks? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I've also read some of the great man's works.

Which forums? Under what name?

Comment by curi on Mathematical Inconsistency in Solomonoff Induction? · 2020-08-25T20:43:23.390Z · LW · GW

Li and Vitanyi write:

Can a thing be simple under one definition of simplicity and not simple under another? The contemporary philosopher Karl R. Popper (1902–1994) has said that Occam's razor is without sense, since there is no objective criterion for simplicity. Popper states that every such proposed criterion will necessarily be biased and subjective.

There's no citation. There's one Popper book in the references section, LScD, but it doesn't contain the string "occam" (case-insensitive search).

I also searched a whole folder of many Popper books and found nothing mentioning Occam (except it's mentioned by other people, not Popper, in the Schlipp volumes).

If Popper actually said something about Occam's razor, I'd like to read it. Any idea what's going on? This seems like a scholarship problem from Li and Vitanyi. They also dismiss Popper's solution to the problem of induction as unsatisfactory, with no explanation, argument, cite, etc.

Comment by curi on Mathematical Inconsistency in Solomonoff Induction? · 2020-08-25T20:27:17.318Z · LW · GW

Which section of the 850 page book contains a clear explanation of this? On initial review they seem to talk about hypotheses, for hundreds of pages, without trying to define them or explain what sorts of things do and do not qualify or how Solomonoff hypotheses do and do not match the common sense meaning of a hypothesis.

Comment by curi on Mathematical Inconsistency in Solomonoff Induction? · 2020-08-25T17:52:58.055Z · LW · GW

Thanks. So "There are no black swans." is not a valid Solomonoff hypothesis? A hypothesis can't exclude things, only make positive predictions?

Is a hypothesis allowed to make partial predictions? E.g. predict some pixels or frames and leave others unspecified. If so, then you could "and" together two partial hypotheses and run into a similar math consistency problem, right? But the way you said it sounds like a valid hypothesis may be required to predict absolutely everything, which would prevent conjoining two hypotheses since they're already both complete and nothing more could be added.
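The "and together two partial hypotheses" idea can be made concrete with a toy model. This is a sketch of my own construction, not Li and Vitanyi's formalism: a partial hypothesis is a map from observation indices to predicted bits, and its weight is 2^-(description length), a crude stand-in for the Solomonoff practice of weighting programs by length.

```python
# Toy model: a "partial hypothesis" predicts bits at some observation
# indices and is silent elsewhere. Weight = 2^-(description length).
# This is an illustration only, not the actual Solomonoff measure
# (which sums over programs on a universal prefix machine).

def description_length(h):
    # Crude cost model: 1 bit per predicted value plus bit_length per
    # index, just so that lengths grow with content.
    return sum(1 + index.bit_length() for index in h)

def prior(h):
    return 2.0 ** -description_length(h)

def conjoin(h1, h2):
    # "And" two partial hypotheses; fail if they contradict each other.
    for i in set(h1) & set(h2):
        if h1[i] != h2[i]:
            raise ValueError(f"contradictory predictions at index {i}")
    return {**h1, **h2}

h1 = {0: 1, 2: 0}   # predicts bits at positions 0 and 2, silent elsewhere
h2 = {1: 1, 3: 0}   # predicts bits at positions 1 and 3

both = conjoin(h1, h2)

# Probability theory demands P(A and B) <= min(P(A), P(B)). In this toy
# model the conjunction's description is longer, so its weight is
# automatically smaller than either conjunct's.
assert prior(both) <= min(prior(h1), prior(h2))
```

In this toy the lengths simply add, so the inequality holds trivially; whether the real prior behaves consistently when the *shortest program* computing the conjunction is used is exactly the question at issue.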

Comment by curi on Rationally Ending Discussions · 2020-08-25T04:19:02.720Z · LW · GW

I have never sock puppeted at LW and I have never been banned at the LW website. You're just wrong and smearing me.

Please leave me alone.

Comment by curi on Rationally Ending Discussions · 2020-08-25T01:09:12.457Z · LW · GW

We're discussing social dynamics and rational conversations at http://curi.us/2363-discussion-with-gigahurt-from-less-wrong

Comment by curi on Rationally Ending Discussions · 2020-08-25T00:57:11.602Z · LW · GW

past misbehaviors with sock puppets

What sock puppets?

Comment by curi on misc raw responses to a tract of Critical Rationalism · 2020-08-24T17:25:08.968Z · LW · GW

http://fallibleideas.com/discussion

Comment by curi on misc raw responses to a tract of Critical Rationalism · 2020-08-23T16:33:19.499Z · LW · GW

A place to start is considering what problems we're trying to solve.

Epistemology has problems like:

What is knowledge? How can new knowledge be created? What is an error? How can errors be corrected? How can disagreements between ideas be resolved? How do we learn? How can we use knowledge when making decisions? What should we do about incomplete information? Can we achieve infallible certainty (how?)? What is intelligence? How can observation be connected to thinking? Are all (good) ideas connected to observation or just some?

Are those the sorts of problems you're trying to solve when you talk about Solomonoff induction? If so, what's the best literature you know of that outlines (gives high level explanations rather than a bunch of details) how Solomonoff induction plus some other stuff (it should specify what stuff) solves those problems? (And says which remain currently unsolved problems?)

(My questions are open to anyone else, too.)

Comment by curi on misc raw responses to a tract of Critical Rationalism · 2020-08-22T17:18:14.796Z · LW · GW

Hi, Deutsch was my mentor. I run the discussion forums where we've been continuously open to debate and questions since before LW existed. I'm also familiar with Solomonoff induction, Bayes, RAZ and HPMOR. Despite several attempts, I've been broadly unable to get (useful, clear) answers from the LW crowd about our questions and criticisms related to induction. But I remain interested in trying to resolve these disagreements and to sort out epistemological issues.

Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is? And if you're interested, have you read FoR and BoI?

I'll begin with one comment now:

I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholely believe probably wrong theories

~All open, public groups have lots of low quality self-proclaimed members. You may be right about some critrats you've talked with or read.

But that is not a CR position. CR says we only ever believe theories tentatively. We always know they may be wrong and that we may need to reconsider. We can't 100% count on ideas. Wholely believing things is not a part of CR.

If by "wholely" you mean with a 100% probability, that is also not a CR position, since CR doesn't assign probabilities of truth to beliefs. If you insist on a probability, a CRist might say "0% or infinitesimal" (Popper made some comments similar to that) for all his beliefs, never 100%, while reiterating that probability applies to physical events so the question is misconceived.

Sometimes we act, judge, decide or (tentatively) conclude. When we do this, we have to choose something and not some other things. E.g. it may have been a close call between getting sushi or pizza, but then I chose only pizza and no sushi, not 51% pizza and 49% sushi. (Sometimes meta/mixed/compromise views are appropriate, which combine elements of rival views. E.g. I could go to a food court and get 2 slices of pizza and 2 maki rolls. But then I'm acting 100% on that plan and not following either original plan. So I'm still picking a single plan to wholely act on.)

Comment by curi on Analyzing Blackmail Being Illegal (Hanson and Mowshowitz related) · 2020-08-20T22:55:35.583Z · LW · GW

More discussion of this post is available at https://curi.us/2366-analyzing-blackmail-being-illegal#comments

Comment by curi on Analyzing Blackmail Being Illegal (Hanson and Mowshowitz related) · 2020-08-20T20:46:59.347Z · LW · GW

many motives ... most commonly to get money

If I threaten to do X unless you pay me, then the motive for making that threat is getting money. However, I don't get money for doing X. There are separate things involved (threat and action) with different motives.

Comment by curi on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T19:04:00.569Z · LW · GW

I wrote a reply at https://www.lesswrong.com/posts/5ffPhqaLdrSajFe37/analyzing-blackmail-being-illegal-hanson-and-mowshowitz

I read only the initial overview at the top, did my own analysis, then read the rest to see if it'd change my mind.

Here are summaries of IMO the two most notable ideas from my analysis:

  1. Compare blackmail to this scenario: My neighbor is having a party this weekend. I threaten to play loud music (at whatever the max loudness is that's normally within my rights) to disrupt it unless he pays me $100. Compare to: I often play loud music and my neighbor comes and offers me $100 to be quiet all weekend. In one, I'm threatening to do something for the express purpose of harming someone, not to pursue my own values. In the other, I just enjoy music as part of my life. I think blackmail compares to the first scenario, but not the second.

  2. We (should) prohibit initiation of force as a means to an end. The real underlying thing is enabling people to pursue their values in their life and resolve conflicts. If blackmail doesn't initiate force, that doesn't automatically make it OK, b/c non-initiation of force isn't the primary.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-18T22:40:05.228Z · LW · GW

I read the older, now-renamed book that I linked. The newer one has different authors. I saw it when searching and confirmed the right author for the one I read by searching old emails.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-18T18:16:38.934Z · LW · GW

Do the PUAs really have a good model of an average human, or just a good model of a drunk woman who came to a nightclub wanting to get laid?

PUAs have evidence of efficacy. The best is hidden camera footage. The best footage that I’m aware of, in terms of confidence the girls aren’t actors, is Mystery’s VH1 show and the Cajun on Keys to the VIP. I believe RSD doesn’t use actors either and they have a lot of footage. I know some others have been caught faking footage.

My trusted friend bootcamped with Mystery and provided me with eyewitness accounts similar to various video footage. My friend also learned and used PUA successfully, experienced it working for him in varied situations … and avoids talking about PUA in public. He also observed other high profile PUAs in action IRL.

Some PUAs do daygame and other venues, not just nightclubs/parties. They have found the same general social principles apply, but adjustments are needed like lower energy approaches. Mystery, who learned nightclub style PUA initially, taught daygame on at least one episode of his TV show and his students quickly had some success.

PUAs have also demonstrated they’re effective at dealing with males. They can approach mixed-gender sets and befriend or tool the males. They’ve also shown effectiveness at befriending females who aren’t their target. Also standard PUA training advice is to approach 100 people on the street and talk with them. Learning how to have smalltalk conversations with anyone helps people be better PUAs, and also people who get good at PUA become more successful at those street conversations than they used to be.

I think these PUA Field Reports are mostly real stories, not lies. Narrator bias/misunderstandings and minor exaggerations are common. I think they’re overall more reliable than posts on r/relationships or r/AmITheAsshole, which I think also do provide useful evidence about what the world is like.

There are also notable points of convergence, e.g. Feynman told a story ("You Just Ask Them?” in Surely You’re Joking) in which he got some PUA type advice and found it immediately effective (after his previous failures), both in a bar setting and later with a “nice” girl in another setting.

everyone lives in a bubble

I generally agree but I also think there are some major areas of overlap between different subcultures. I think some principles apply pretty broadly, e.g. LoLE applies in the business world, in academia, in high school popularity contests, and for macho posturing like in the Top Gun movie.

My beliefs about this use lots of evidence from varied sources (you can observe people doing social dynamics ~everywhere) but also do use significant interpretation and analysis of that evidence. There are also patterns in the conclusions I’ve observed other people reach and how e.g. their conclusion re PUA correlates with my opinion on whether they are a high quality thinker (which I judged on other topics first).

I know someone with different philosophical views could reach different conclusions from the same data set. My basic answer to that is that I study rationality, I write about my ideas, and I’m publicly open to debate. If anyone knows a better method for getting accurate beliefs please tell me. I would also be happy to pay for useful critical feedback if I knew any good way to arrange it.

Business is a good source of separate evidence about social dynamics because there are a bunch of books and other materials about the social dynamics of negotiating raises, hiring interviews, promotions, office politics, leadership, managing others, being a boss, sales, marketing, advertising, changing organizations from the bottom-up (passing on ideas to your boss, boss’s boss and even the CEO), etc. I’ve read a fair amount of that stuff but it’s not my main field (which is epistemology/rationality).

There are also non-PUA/MGTOW/etc relationship books with major convergence with PUA, e.g. The Passion Paradox (which has apparently been renamed The Passion Trap). I understand that to be a mainstream book:

About the Author Dr. Dean C. Delis is a clinical psychologist, Professor of Psychiatry at the University of California, San Diego, School of Medicine, and a staff psychologist at the San Diego V.A. Medical Center. He has more than 100 professional publications and has served on the editorial boards of several scientific journals. He is a diplomate of the American Board of Professional Psychology and American Board of Clinical Neuropsychology.

The main idea of the book is similar to LoLE. Quoting my notes from 2005 (I think this is before I was familiar with PUA): “The main idea of the passion paradox is that the person who wants the relationship less is in control and secure, and therefore cares about the relationship less, while the one who wants it more is more needy and insecure. And that being in these roles can make people act worse, thus reinforcing the problems.” I was not convinced by this at the time and also wrote: “I think passion paradox dynamics could happen sometimes, but that they need not, and that trying to analyse all relationships that way will often be misleading.” Now I have a much more AWALT view.

The entire community is selecting for people who have some kinds of problems with social interaction

I agree the PUA community is self-selected to mostly be non-naturals, especially the instructors, though there are a few exceptions. In other words, they do tend to attract nerdy types who have to explicitly learn about social rules.

Sometimes I even wonder whether I overestimate how much the grass is greener on the other side.

My considered opinion is that it’s not, and that blue pillers are broadly unhappy (to be fair, so are red pillers). I don’t think being good at social dynamics (via study or “naturally” (aka via early childhood study)) makes people happy. I think doing social dynamics effectively clashes with rationality and being less rational has all sorts of downstream negative consequences. (Some social dynamics is OK to do, I’m not advocating zero, but I think it’s pretty limited.)

I don’t think high status correlates well with happiness. Both for ultra high status like celebs, which causes various problems, and also for high status that doesn’t get you so much public attention.

I think rationality correlates with happiness better. I would expect to be wrong about that if I was wrong about which self-identified rational people are not actually rational (I try to spot fakers and bad thinking).

I think the people with the best chance to be happy are content and secure with their social status. In other words, they aren’t actively trying to climb higher socially and they don’t have to put much effort into maintaining their current social status. The point is that they aren’t putting much effort into social dynamics and focus most of their energy on other stuff.

I am intellectually aware of the taboo against the "PUA/MRA/etc" cluster.

I too am intellectually aware of that but don’t intuitively feel it. I also refuse to care and have publicly associated my real name with lower status stuff than PUA. I have gotten repeated feedback (sometimes quite strongly worded) about how my PUA ideas alienate people, including from a few long time fans, but I haven’t stopped talking about it.

[Edit for clarity: I mostly mean hostile feedback from alienated people, not feedback from people worrying I'll alienate others.]

I would like to learn from people who are guided neither by social taboos nor by edginess. And I am not sure if I could contribute much beyond an occasional sanity check.

I’d be happy to have you at my discussion forums. My community started in 1994, (not entirely) coincidentally the same year as alt.seduction.fast. The community is fairly oriented around the work of David Deutsch (the previous community leader and my mentor) and myself, as well as other thinkers that Deutsch or I like. A broad variety of topics are welcome (~anything that rationality can be applied to).

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-18T02:13:14.109Z · LW · GW

Maybe another important thing is how your work is... oriented. I mean, are you doing X to impress someone specific (which would signal lower status), or are you doing X to impress people in general but each of them individually is unimportant? A woman doing her make-up, a man in the gym, a professor recording their lesson... it's okay if they do it for the "world in general"; but if you learned they are actually doing all this work to impress one specific person, that would kinda devalue it. This is also related to optionality: is the professor required to make the video? is the make-up required for the woman's job?

That makes sense.

You can also orient your work to a group, e.g. a subculture. As long as it's a large enough group, this rounds to orienting to the world in general.

Orienting to smaller groups like your high school, workplace or small academic niche (the 20 other high status people who read your papers) is fine from the perspective of people in the group. To outsiders, e.g. college kids, orienting to your high school peers is lame and is due to you being lame enough not yet to have escaped high school. Orienting to a few other top academics in a field could impress many outsiders – it shows membership in an exclusive club (high school lets in losers/everyone and hardly any of the current highest status people are in the club).

I think orienting to a single person can be OK if 1) it's reciprocated; and 2) they are high enough status. E.g. if I started making YouTube videos exclusively to impress Kanye West, that's bad if he ignores me, but looks good for me if he responds regularly (that'd put me as clearly lower status than him, but still high in society overall). Note that more realistically my videos would also be oriented to Kanye fans, not just Kanye personally, and that's a large enough group for it to be OK.

I didn't have other immediate, specific comments but I generally view these topics as important and hard to find quality discussion about. Most people aren't red-pilled and hate PUAs/MRAs/etc or at least aren't familiar with the knowledge. And then the PUAs/MRAs/etc themselves mostly aren't philosophers posting on rationalist forums ... most of them are more interested in other stuff like getting laid, using their knowledge of social dynamics to gain status, or political activism. So I wanted to end by saying that I'm open to proposals for more, similar discussion if you're interested.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-17T00:27:47.154Z · LW · GW

gjm, going forward, I don't want you to comment on my posts, including this one.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-17T00:24:58.011Z · LW · GW

Thanks for the reply. I think privacy is important and worth analyzing.

But I'm not convinced of your explanation. I have some initial objections.

I view LoLE as related to some other concepts such as reactivity and chasing. Chasing others (like seeking their attention) is low status, and reacting to others (more than they're reacting to you) is low status. Chasing and reacting are both types of effort. They don't strike me as privacy related. However, for LoLE only the appearance of effort counts (Chase's version), so to some approximation that means public effort, so you could connect it to privacy that way.

Some people do lots of publicly visible work. There are Twitch streamers, like Leffen and MajinPhil, who stream a lot of their practice time. (Some other people do stream for a living and stream less or no practice.) Partly I think it's OK because they get paid to stream. But partly I think it's OK because they are seen as wanting to do that work – it's their passion that they enjoy.

Similarly I think one could livestream their gym workouts, tennis practice sessions, running training, or similar, and making that public wouldn't ruin their status. Similarly, Brandon Sanderson (a high status fantasy author) has streamed himself answering fan questions while simultaneously signing books by the hundreds (just stacks of pages that aren't even in the books yet, not signing finished books for fans), and he's done this in video rather than audio-only format. So he's showing the mysterious process of mass producing a bunch of signed books. And I don't think Sanderson gets significant income from the videos.

I also don't think that Jordan Peterson putting up recordings of doing his job – university lectures – was bad for his status (putting up videos of his lecture prep time might be bad, but the lecturing part is seen as a desirable and impressive activity for him to do, and that desirability seems like the issue to me more than whether it's public or private). The (perceived) option to have privacy might sometimes matter more than actually having privacy.

I think basically some effort isn't counted as effort. If you like doing it, it's not real work. Plus if it's hidden effort, it usually can't be entered into evidence in the court of public opinion, so it doesn't count. But my current understanding is that if 1) it counts as effort/work; and 2) you're socially allowed to bring it up then it lowers status. I see privacy as an important thing helping control (2) but effort itself, under those two conditions, as the thing seen as undesirable, bad, something you're presumed to try to avoid (so it's evidence of failure or lack of power, resources, helpers, etc), etc.

Comment by curi on Rationally Ending Discussions · 2020-08-16T19:17:56.844Z · LW · GW

When there is no transparency about why people exit discussions, it allows for them to leave due to bias, dodging, bad reasons, etc., and it's not very provable. Your response is: they didn't explain that they left for bad reasons, so you (curi) can't really prove anything! Indeed. It's ambiguous. That's a large part of the problem.

I could go into detail about some of the specifics that I didn't reply to, explain why I think some of the things people wrote were low quality, argue my case, answer every question, etc. but I don't have a reasonable expectation that they would be responsive to the discussion. Different discussion norms or explicit request could change that.

My sense is that you're trying to hold people to standards you fall short of.

I proposed that if both people want a serious discussion that tries to make progress and doesn't end arbitrarily, then here's some stuff you can do. I also proposed that the general norms here could be improved.

Me responding more energetically and thoroughly to people with different preferred discussion norms than me will not solve the problem. And yes I've already tried it (thousands of times).

I could also reply to people and say why I think their messages (as a whole or specific parts) are low quality so I don't want to reply, but please correct me if my analysis is wrong. I have tried this too but people mostly rather dislike it. I am open to doing it by request.

I could also reply to people asking if they want a substantive discussion. I have tried that too. Yes answers are rare and doing it a lot here would annoy people.

So I've put in my bio here a note that people can make a request if they want a substantive discussion with me, and I've talked some about the general issue. I also have more detailed policies posted on my websites, including public promises re how anyone can get my attention and get responses, and I have established different discussion norms at my own forums.

Comment by curi on Rationally Ending Discussions · 2020-08-16T17:28:22.556Z · LW · GW

Yes I've found it's a major problem in practice, everywhere. I think most discussion interactions at LW end either at key moments or earlier. Hardly any make significant progress. The reasons they end early are rarely explained. Would examples help? There are multiple examples in this topic, e.g. remizidae dropped the discussion, as did G Gordon Worley III and Dagon.

note: i don't want to particularly blame or criticize them compared to the people who didn't write anything at all and would have done similarly well or worse. but discussion interactions like these are problematic – not taken far enough to actually really get anywhere – and typical. discussions where people try to actually resolve disagreements are uncommon, and when those begin they are usually dropped at some point too without much in the way of transparency, post-mortem, conclusion, etc.

regarding your article: I think a "2 more replies" warning would be a large improvement over what people typically do.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-16T01:03:56.464Z · LW · GW

I didn't quote you en masse. I didn't just dump all your posting history. I quoted some specific stuff related to my critical commentary. Did you even look?

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-15T20:14:53.965Z · LW · GW

No. Quoting is not a copyright violation. And I won't have a discussion with you without being able to mirror it. Goodbye and no discussion I guess?

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-15T19:37:30.488Z · LW · GW

This discussion was on Slack (which unfortunately hides all but the most recent messages unless you pay them, which LW doesn't).

Well, fortunately, I did save copies of those discussions. You could find them in the FI archives if you wanted to. (Not blaming you at all but I do think this is kinda funny and I don't regret my actions.)

FYI, full disclosure, on a related note, I have mirrored recent discussion from LW to my own website. Mostly my own writing but also some comments from other people who were discussing with me, including you. See e.g. http://curi.us/2357-less-wrong-related-dicussion and http://curi.us/archives/list_category/126

I don't plan to review the 3 year old discussions and I don't want to re-raise anything that either one of us saw negatively.

If you are interested in pursuing any of those discussions, maybe I can make a post summarizing my position and we can proceed in comments there.

Sure but I'd actually mostly prefer literature, partly because I want something more comprehensive (and more edited/polished) and partly because I want something more suitable for quoting and responding to as a way to present and engage with rival, mainstream viewpoints which would be acceptable to the general public.

Is there any literature that's close enough (not exact) or which would work with a few modifications/caveats/qualifiers/etc? Or put together a position mostly from selections from a few sources? E.g. I don't exactly agree with Popper and Deutsch but I can provide selections of their writing that I consider close enough to be a good starting point for discussion of my views.

I also am broadly in favor of using literature in discussions, and trying to build on and engage with existing writing, instead of rewriting everything.

If you can't do something involving literature, why not? Is your position non-standard? Are you inadequately familiar with inductivist literature? (Yes answers are OK but I think relevant to deciding how to proceed.)

And yes feel free to start a new topic or request that I do instead of nesting further comments.

what I think about what Popper thinks about induction

I actually think the basics of induction would be a better topic. What problems is it trying to solve? How does it solve them? What steps does one do to perform an induction? If you claim the future resembles the past, how do you answer the basic logical fact that the future always resembles the past in infinitely many ways and differs in infinitely many ways (in other words, infinitely many patterns continue and infinitely many are broken, no matter what happens), etc.? What's the difference, if any, between evidence that doesn't contradict a claim and evidence that supports it? My experience with induction discussions is a major sticking point is vagueness and malleability re what the inductivist side is actually claiming, and a lack of clear answers to initial questions like those above, and I don't actually know where to find any books which lay out clear answers to this stuff.

Another reason for using literature is I find lots of inductivists don't know about some of the problems in the field, and sometimes deny them. Whereas a good book would recognize at least some of the problems are real problems and try to address them. I have seen inductivist authors do that before – e.g. acknowledging that any finite data set underdetermines theory or pattern – just not comprehensively enough. I don't like to try to go over known ground with people who don't understand the ideas on their own side of the debate – and do that in the form of a debate where they are arguing with me and trying to win. They shouldn't even have a side yet.

I think I looked at that argument in particular because you said you found it convincing

FYI I'm doubtful that I said that. It's not what convinced me. My guess is I picked it because someone asked for math. I'd prefer not to focus on it atm.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-14T22:13:37.796Z · LW · GW

I'd be more interested in discussing Popper and Bayes stuff than your LoLE comments. Is there any literature which adequately explains your position on induction, which you would appreciate criticism of?

FYI I do not remember our past conversations in a way that I can connect any claims/arguments/etc. to you individually. I also don't remember whether our conversations ended by either of our choice or were still going when moderators suppressed my participation (a Slack ban with no warning for mirroring my conversations to my forum, allegedly violating privacy, as well as repeated moderator intervention to prevent me from posting to the LW1.0 website).

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-14T19:26:10.606Z · LW · GW

I hereby grant you and everyone else license to break social norms at me. (This is not a license to break rational norms, including rational moral norms, some of which coincide with social norms.) I propose trying this until I get bent out of shape once. I do have past experience with such things, including on 4chan-like forums.

I agree with you about common cases.

What I don't see in your comment is a solution. Do you regard this as an important, open problem?

Comment by curi on Rationally Ending Discussions · 2020-08-14T19:15:16.052Z · LW · GW

I'm flexible. One option, which I think is hard but important, is what people want from a discussion partner and what sorts of discussion partners are in short supply. I think our models of that are significantly different.

Comment by curi on Rationally Ending Discussions · 2020-08-14T04:09:56.998Z · LW · GW

Would you like to try to resolve one of our disagreements by discussion?

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-13T21:23:40.977Z · LW · GW

But if you reckon my comments are low-quality and I'm likely to bail prematurely, you'll have to decide for yourself whether that's a risk you want to take.

I have decided and I don't want to take that risk in this particular case.

But I believe I'm socially prohibited from saying so or explaining the analysis I used to reach that conclusion.

This is a significant issue for me because I have a similar judgment regarding most responses I receive here (and at most forums). It's problematic to just not reply to most people while providing no explanation, but it's also problematic to violate social norms and offend and confuse people with meta discussion about e.g. what evidence they've inadvertently provided that they're irrational or dumb. And often the analysis is complex and relies on lots of unshared background knowledge.

I also think I'm socially prohibited from raising this meta-problem, but I'm trying it anyway for a variety of reasons including that there are some signs that you may understand what I'm saying. Got any thoughts about it?

Comment by curi on Rationally Ending Discussions · 2020-08-13T21:17:33.316Z · LW · GW

Do you have any proposal for how to solve the problems of people being biased then leaving discussions at crucial moments to evade arguments and dodge questions, and there being no transparency about what's going on and no way for the error to get corrected?

Comment by curi on Rationally Ending Discussions · 2020-08-13T18:50:35.819Z · LW · GW

I was using rationality in the same way you normally do – about a process, not about best or optimal. I don't know why you read it otherwise.

Comment by curi on Rationally Ending Discussions · 2020-08-13T18:39:19.693Z · LW · GW

https://www.lesswrong.com/tag/rationality

Rationality is the art of thinking in ways that result in accurate beliefs and good decisions.

There are discussion ending methods which are compatible with this and others which aren't. The same goes for other rationality issues like finding out if you're mistaken, biases being found instead of hidden, etc. What is the type error?

Also I hereby grant you and everyone else unlimited license to give me advice.

Comment by curi on Rationally Ending Discussions · 2020-08-13T00:49:14.246Z · LW · GW

Suppose hypothetically that the worldwide availability of this type of discussion was zero. Do you think that would be important or consequential?

Comment by curi on Rationally Ending Discussions · 2020-08-12T22:27:03.309Z · LW · GW

So are you on board with something like differentiating and labelling a particular type of discussion and using procedures along these lines for that type of discussion?

My assumed context, which I grant I underspecified, was intellectual discussion or discussion of ideas (though no doubt there is room to specify further). Stuff like LW comments are on a forum where substantive discussion and trying to seek the truth is, to some extent, the expected norm. I didn't intend this to apply to e.g. all small talk (though tbh I think people would benefit from applying norms like this much more widely).

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-12T20:39:56.815Z · LW · GW

I'm glad that you seem to have largely understood me and also given a substantive response about your main concern. That is fairly atypical. I'm also glad that you agree that there are important issues in this general area.

I will agree to discuss to a length 3 impasse chain with you (rather than 5) if that'd solve the problem (I doubt it). I'd also prefer to discuss impasse chains and discussion ending issues (which I consider a very important topic) over the conjunction fallacy or law of least effort, but I'm open to either.

I think you're overestimating how much effort it takes to create length 5 impasse chains, but I know that's not the main issue. Here's an example of a length 5 impasse chain which took exactly 5 messages, and all but the first were quite short. It wasn't a significant burden for me (especially given my interest in the topic of discussion methods themselves) and in fact was considerably faster and easier for me than some other things I've done in the past (I try very hard to be open to critical discussion and am interested in policies to enable that). If it had taken more than 5 messages, that would only have been because the other guy said some things I thought were actually good messages.

Discussion ending policies and the problems with walking away with no explanation are a problem that particularly interests me and that I'd write a lot about regardless of what you did or didn't do. I actually just wrote a bunch about it this morning before seeing your comment. By contrast, I don't want to discuss the LoLE stuff with you without some sort of precommitment re discussion ending policies, because I think your messages about LoLE were low quality and explaining the errors is not the type of writing I'd choose just for my own benefit. (This kind of statement is commonly hard to explain without offending people, which is awkward because I do want to tell people why I'm not responding, and it often would only take one sentence. And I don't think it should cause offense: we disagree, and I expect your initial perspective is that there were quality issues with what I wrote, so I expect symmetry on this point anyway, no big deal.) It's specifically the discussions which start with symmetric beliefs that the other guy is wrong in ways I already understand, or is saying low quality stuff, or otherwise isn't going to offer significant value in the early phases of the discussion, that especially merit using approaches like impasse chains to enable discussion. The alternative to impasse chains is often no discussion. But I'd rather offer the impasse chain method than just ignore people (though due to the risk of offending people, sometimes I just say nothing – but at least I have a publicly posted debate policy and paths forward policy, as well as the impasse chain article, so if anyone really cares they can find out about and ask for options like that).

As a next step, you can read and reply to – or not – what I wrote anyway about impasse chains today. Rationally Ending Discussions

You may also, if you want, indicate your good faith interest in the topic of too much effort going into bad discussions, and how that relates to rationally ending discussions. If you do, I expect that'll be enough for me to write something about it even with no formal policy. (I didn't say much about that in the Rationally Ending Discussions post linked in the previous paragraph, but I do have ideas about it.) Anyone else may also indicate this interest or request a discussion to agreement or an impasse chain with me (I'm open to them on a wide range of topics, including basically anything about rationality, and I don't think we'll have much trouble finding a point of disagreement if we try).

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-12T00:17:01.645Z · LW · GW

This is my proposed approach for unilateral discussion ending: https://www.elliottemple.com/essays/debates-and-impasse-chains

I'd be interested if anyone has any other attempts at solving the same problem that could be used instead.

I am interested in discussion, but not one plagued by certain problems (summarized briefly above re arbitrarily ending discussions in the middle without explanation or resolution). If you will acknowledge the problems are a real concern, we can talk about potential rational ways to address them which aren't overly burdensome to anyone. Do those problems concern you too? Would you like to do anything to address them if it wasn't too expensive? If not, why?

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-11T19:52:24.294Z · LW · GW

What is your goal here? Do you want to find a point of disagreement and try seriously to discuss it persistently over time to a conclusion?

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-11T18:52:09.687Z · LW · GW

Re all your comments, do you want to attempt to debate these matters to a rational conclusion? I don't think we're going to reach a quick conclusion/agreement and I don't know if you're interested enough to make a serious effort at an organized, persistent-over-time discussion.

A common discussion failure case is something like: someone decides (often around when they start losing the argument) that the other guy's messages are low quality and not worth engaging with further. Another is thinking "there are too many relevant tangents, so I'll just stop discussing." I'd like mutual agreement in advance not to do those, as well as some discussion of appropriate discussion methodology. Otherwise, due to large inferential distance issues, I wouldn't expect us to get anywhere substantial and would prefer to abort at the start rather than have the discussion end in the middle.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-11T18:50:30.696Z · LW · GW

Did you read my previous linked posts, which this post is a followup to?

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-11T07:36:40.315Z · LW · GW

The similar naming is unfortunate. It's LoLE in the article I linked explaining it as well as in that author's book, but I'll think about disambiguating in the future.

Comment by curi on Social Dynamics · 2020-08-10T20:37:45.254Z · LW · GW

Those things were covered both under conformity (e.g. sharing interests with a group, fitting in) and value (which lists knowledge, skills, etc.)

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-10T19:10:53.870Z · LW · GW

LoLE isn't about conserving effort. It's about appearing to conserve effort and social dynamics. So a comment like

of course we conserve effort; what else would any living thing do?

shows a lack of understanding of LoLE. E.g. people put a lot of effort into doing their makeup instead of conserving that effort.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-10T19:04:12.203Z · LW · GW

Saying something "seems more off track" is not an argument criticizing some error in it.

Comment by curi on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-10T19:03:28.868Z · LW · GW

We already have the law of least effort for extensive other reasons. It's already a major part of our culture, so we should apply the tools we have. I understand it wouldn't look that way to someone who is new to the issue, but try to see it from a different perspective. If you want to debate that, fine, but assuming premises contrary to mine is missing the point.

And, as I said, this is not the full explanation. It's a followup post. LoLE explains not doing math, being careless, etc. It helps reinforce stuff I already covered, which explains the particular result in more depth.

Comment by curi on Asch Conformity Could Explain the Conjunction Fallacy · 2020-08-07T19:56:52.282Z · LW · GW

There was a response to this post at http://curi.us/2359-asch-conformity-could-explain-the-conjunction-fallacy#16985

Comment by curi on Can Social Dynamics Explain Conjunction Fallacy Experimental Results? · 2020-08-06T21:50:13.330Z · LW · GW

People were interviewed after the research and asked to explain their answers. There were social feedback mechanisms. Even if there wasn't peer to peer social feedback, it was certainly possible to annoy the authority (the researchers) giving you the questions (like annoying a teacher who gives you a test). The researchers want you to answer a particular way, so people reasonably guess what that is, even if they don't already have that way highly internalized (as most people do).

This is how people have learned to deal with questions in general. And people are correct to be very wary of guessing "it's safe to be literal now" (often when it looks safe, it's not, so people come to the reasonable rule of thumb that it's never safe and basically decide (but not as a conscious decision) that maintaining a literalist personality for very rare use, when it's hard to even identify any safe times to use it, is not worth the cost). People have near-zero experience in situations where being hyper literal (or whatever you want to call it) won't be punished. Those scenarios barely exist. Even science, academia or Less Wrong mostly aren't like that.

More related to this in my followup post: Asch Conformity Could Explain the Conjunction Fallacy

Comment by curi on Can Social Dynamics Explain Conjunction Fallacy Experimental Results? · 2020-08-05T19:01:48.144Z · LW · GW

Yeah, (poor) context isolation is a recurring theme I've observed in my discussions and debates. Here's a typical scenario:

There's an original topic, X. Then we talk back and forth about it for a bit: C1, D1, C2, D2, C3, D3, C4, D4. The C messages are me and D is the other guy.

Then I write a reply, C5, about a specific detail in D4. Often I quote the exact thing I'm replying to or explain what I'm doing (e.g. a statement like "I disagree with A because B" where A was something said in D4).

Then the person writes a reply (more of a non sequitur from my pov) about X.

People routinely try to jump the conversation back to the original context/topic. And they make ongoing attempts to interpret things I say in relation to X. Whatever I say, they often try to jump to conclusions about my position on X from it.

I find it very hard to get people to stop doing this. I've had little success even with explicit topic shifts like "I think you're making a discussion methodology mistake, and talking about X won't be productive until we get on the same page about how to discuss."

Another example of poor context isolation is when I give a toy example that'd be trivial to replace with a different toy example, but they start getting hung up on specific details of the example chosen. Sometimes I make the example intentionally unrealistic and simple because I want it to clearly be a toy example and I want to get rid of lots of typical context, but then they get hung up specifically on how unrealistic it is.

Another common example is when I compare X and Y regarding trait Z, and people get hung up b/c of how X and Y compare in general. Me: X and Y are the same re Z. Them: X and Y aren't similar!

I think Question-Ignoring Discussion Pattern is related, too. It's a recurring pattern where people don't give direct responses to the thing one just said.

And thanks for the link. It makes sense to me and I think social dynamics ideas are some of the ones most often coupled/contextualized. I think it's really important to be capable of thinking about things from multiple perspectives/frameworks, but most people really just have one way of thinking (and have enough trouble with that), and for most people their one way has a lot of social norms built into it (because they live in society – you need 2+ thinking modes in order for it to make sense to have one without social norms; otherwise you don't have a way to get along with people. Some people compromise and build fewer social norms into their way of thinking because that's easier than learning multiple separate ways to think).

Comment by curi on What are you looking for in a Less Wrong post? · 2020-08-03T20:30:04.606Z · LW · GW

He said, “Well, um, I guess we may have to agree to disagree on this.”

I [Yudkowsky] said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”

...

Robert Aumann’s Agreement Theorem shows that honest Bayesians cannot agree to disagree

...

Regardless of our various disputes, we [Yudkowsky and Hanson] both agree that Aumann’s Agreement Theorem extends to imply that common knowledge of a factual disagreement shows someone must be irrational.

...

Nobel laureate Robert Aumann—who first proved that Bayesian agents with similar priors cannot agree to disagree

Do you think I'm misunderstanding the sequences or do you disagree with them?

Just because the math doesn't fully prove it applies in practice doesn't mean it isn't a broadly true and useful idea.

Comment by curi on Irrational Resistance to Business Success · 2020-08-03T08:18:54.641Z · LW · GW

In this situation, it sounds like the problem is that improvement for the plant came at cost for the DCs

Why do you think so? Merely because they are complaining, or for some other reason?

The DCs were unable to substantively identify any problem that was created for them. And they spent 9 months refusing to use measurements or evidence to address this matter, in addition to failing to explain any cause-and-effect logic about what the problem they're now facing is and how it was caused by the change in production. (And, on top of that, without quantifying the alleged cost for them, the DCs want a change to production that will be costly, even though they have done no meaningful comparison to discover which cost is bigger.)