Posts

Has the effectiveness of fever screening declined? 2020-03-27T22:07:16.932Z · score: 9 (3 votes)
Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration 2020-02-06T01:09:05.384Z · score: 15 (4 votes)
What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? 2019-09-18T17:17:05.602Z · score: 12 (5 votes)
Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness 2018-12-03T08:00:00.000Z · score: 38 (17 votes)
Trying for Five Minutes on AI Strategy 2018-10-17T16:18:31.597Z · score: 20 (7 votes)
A Process for Dealing with Motivated Reasoning 2018-09-03T03:34:11.650Z · score: 18 (8 votes)
Ikaxas' Hammertime Final Exam 2018-05-01T03:30:11.668Z · score: 22 (6 votes)
Ikaxas' Shortform Feed 2018-01-08T06:19:40.370Z · score: 16 (4 votes)

Comments

Comment by ikaxas on Has the effectiveness of fever screening declined? · 2020-03-27T22:48:35.105Z · score: 1 (1 votes) · LW · GW

Thanks

Comment by ikaxas on Has the effectiveness of fever screening declined? · 2020-03-27T22:09:08.460Z · score: 3 (2 votes) · LW · GW

Also, mods, how do I tag this as "coronavirus"? Or is that something the mods just do?

Comment by ikaxas on Ikaxas' Shortform Feed · 2020-02-16T02:21:53.641Z · score: 5 (3 votes) · LW · GW

Global coordination problems

I've said before that I tentatively think that "foster global coordination" might be a good cause area in its own right, because it benefits so many other cause areas. I think it might be useful to have a term for the cause areas that global coordination would help. More specifically, a term for the concept "(reasonably significant) problem that requires global coordination to solve, or that global coordination would significantly help with solving." I propose "global coordination problem" (though I'm open to other suggestions). You may object "but coordination problem already has a meaning in game theory, this is likely to get confused with that." But global coordination problems are coordination problems in precisely the game theory sense (I think, feel free to correct me), so the terminological overlap is a benefit.

What are some examples of global coordination problems? Certain x-risks and global catastrophic risks (such as AI, bioterrorism, pandemic risk, asteroid risk), climate change, some of the problems mentioned in The Possibility of an Ongoing Moral Catastrophe, as well as the general problem of ferreting out and fixing moral catastrophes, and almost certainly others.

In fact, it may be useful to think about a spectrum of problems, similar to Bostrom's Global Catastrophic Risk spectrum, organized by how much coordination is required to solve them. Analogous to Bostrom's spectrum, we could have: personal coordination problems (i.e. problems requiring no coordination with others, or perhaps only coordination with parts of oneself), local coordination problems, national coordination problems, global coordination problems, and transgenerational coordination problems.

Comment by ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-16T01:39:49.612Z · score: 1 (1 votes) · LW · GW

Forgot one other thing I intend to work on: I've seen several people (perhaps even you?) say that the case for AI risk needs to be made more carefully than it has been; that's another project I may potentially work on.

Comment by ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-16T01:37:42.462Z · score: 4 (2 votes) · LW · GW

Another way to look at it though, is that the AI companies have co-opted some of the people concerned with AI risk (those on the more optimistic end of the spectrum) and cowed the rest...

Huh, that's an interesting point.

I'm not sure where I stand on the question of "should we be pulling the brakes now," but I definitely think it would be good if we had the ability to pull the brakes should it become necessary. It hadn't really occurred to me that those who think we should be pulling the brakes now would feel quasi-political pressure not to speak out. I assumed the reason there's not much talk of that option is that it's so clearly unrealistic at this point; but I'm all in favor of building the capacity to do so (modulo Caplan-style worries about this accidentally going too far and leading to totalitarianism), and it never really occurred to me that this would be a controversial opinion.

It looks like your background is in philosophy

Yep!

check out Problems in AI Alignment that philosophers could potentially contribute to, in case you haven't come across it already.

I had come across it before, but it was a while ago, so I took another look. I was already planning on working on some stuff in the vicinity of the "Normativity for AI / AI designers" and "Metaethical policing" bullets (namely the problem raised in these posts by gworley), but looking at it again, the other stuff under those bullets, as well as the metaphilosophy bullet, sounds quite interesting. I'm also planning on doing some work on moral uncertainty (which, in addition to its relevance to global priorities research, also has some relevance for AI; based on my cursory understanding, CIRL seems to incorporate the idea of moral uncertainty to some extent), and perhaps other GPI-style topics. AI-strategy/governance stuff, including the topics in the OP, is also interesting, and I'm actually inclined to think it may be more important than technical AI safety (though not far more important). But three disparate areas, all calling for different areas of expertise outside philosophy (AI: compsci; GPR: econ etc.; strategy: international relations), feel like a bit too much, and I'm not certain which I ultimately should settle on (though I have a bit of time, I'm at the beginning of my PhD atm). I guess relevant factors are mostly the standard ones: which do I find most motivating/fun to work on, which can I skill up in fastest/easiest, which is most important/tractable/neglected? And which ones lead to a reasonable back-up plan/off-ramp in case high-risk jobs like academia/EA-org don't work out?

Comment by ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-14T13:55:27.804Z · score: 6 (3 votes) · LW · GW

I had not thought about this again since writing the post, until your comment. (Yeah, that seems worrying if mine really is the only post. Though in my [limited] experience with the LW/EA forum search it's not easy to tell how reliable/comprehensive it is, so there may be related posts that aren't easy to find.)

I actually had a somewhat different model in mind for how polarization happens: something like "the parties tend to take opposite stances on issues. So if one party takes up an issue, this causes the other party to take up the opposite stance on that issue. So if one party starts to talk more about AI safety than the other, this would cause the other party to take the anti-AI-safety stance, therefore polarization." (Not saying it's a good model, but it's the model I had in the back of my mind.)

Your model of climate polarization seems mostly right to me. I was wondering, though, why it would lead to polarization in particular, rather than, say, everybody just not caring about climate change, or being climate skeptics. I guess the idea is something like: Some climate activists/scientists/etc got concerned about climate change, started spreading the word. Oil corps got concerned this would affect them negatively, started spreading a countermessage. It makes sense that this would lead to a split, where some people care about climate change and some people anti-care about it. But why would the split be along party lines (or even: along ideological lines)? Couple things to say here. First, maybe my model kicks in here: the parties tend to take opposite stances on issues. Maybe the dems picked up the climate-activist side, so the republicans picked up the big-oil side. But was it random which side picked up which? I guess not: the climate-activist case is quite caring-focused, which on Jon Haidt's model makes it a left issue, while the big-oil case is big-business, which is a republican-flavored issue. (Though the climate-activist case also seemingly has, or at least used to have, a pretty sizeable purity component, which is puzzling on Haidt's model.)

Applying some of this to the AI case: the activist stuff has already happened. However, the AI corporations (the equiv of big-oil in our climate story) haven't reacted in the same way big-oil did. At least public-facingly, they've actually recognized and embraced the concerns to a sizeable degree (see Google DeepMind, OpenAI, to some degree Facebook).

Though perhaps you don't think the current AI corps are the equivalent of big-oil; there will be some future AI companies that react more like big oil did.

Either way, this doesn't totally block polarization from happening: it could still happen via "one party happens to start discussing the issue before the other, the other party takes the opposite stance, voters take on the stances of their party, therefore polarization."

<politics>

Hadn't thought of this till seeing your comment, but this might be an argument against Andrew Yang (though he's just dropped out)---if he had gotten the dem nomination, he might have caused Trump to take up the contrarian stance on AI, causing Trump's base to become skeptical of AI risk, therefore polarization (or some other variant on the basic "the dems take up the issue first, so the republicans take the opposite stance" story). This may still happen, though with him out it seems less likely.

</politics>

I don't know if climate activists could have done anything differently in the climate case; don't know enough about the history of climate activism and how specifically it got as polarized as it is (though as I said, your model seems good at least from the armchair). This may be something worth looking into as a historical case study (though time is of the essence I suppose, since now is probably the time to be doing things to prevent AI polarization).

Thanks for prompting me to think about this again! No promises (pretty busy with school right now) but I may go back and write up the conversation with my friend that I mentioned in the OP, I probably still have the notes from it. And if it really is as neglected as you think, I may take up thinking about it again a bit more seriously.

Comment by ikaxas on Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration · 2020-02-07T20:00:19.895Z · score: 1 (1 votes) · LW · GW

how do we take those notions and turn them into something mathematically precise enough that we could instruct a machine to do them and then evaluate whether or not what it did was in fact what we intended

Yep, that's the project! I think the main utility of Callard's work here is (1) pointing out the phenomenon (a phenomenon that is strikingly similar to some of the abilities we want AIs to have), and (2) noticing that the most prominent theories in decision theory, moral psychology, and moral responsibility make assumptions that we have to break if we want to allow room for aspiration (assumptions that we who are trying to build safe AI are probably also accidentally making insofar as we take over those standard theories). IDK whether she provides alternate assumptions to make instead, but if she does, these might also be useful. But the main point is just noticing that we need different theories of these things.

Once we've noticed the phenomenon of aspiration, and that it requires breaking some of these assumptions, I agree that the hard bit is coming up with a mathematical theory of aspiration (or the AI equivalent).

Comment by ikaxas on 2018 Review: Voting Results! · 2020-01-26T19:31:04.629Z · score: 14 (8 votes) · LW · GW

I hope they are buying 50+ books each otherwise I don’t see how the book part is remotely worth it.

As a data point, I did not vote, but if there is a book, I will almost certainly be buying a copy of it if it is reasonably priced, i.e. similar price to the first two volumes of R:A-Z ($6-8).

Comment by ikaxas on On Being Robust · 2020-01-14T15:37:53.733Z · score: 3 (2 votes) · LW · GW

This seems like another angle on "Play in Hard Mode". Is that about right?

Comment by ikaxas on Morality vs related concepts · 2020-01-09T04:24:53.508Z · score: 7 (5 votes) · LW · GW

I am an ethics grad student, and I will say that this largely accords with my understanding of these terms (though tbh the terminology in this field is so convoluted that I expect that I still have some misunderstandings and gaps).

Re epistemic rationality, I think at least some people will want to say that it's not just instrumental rationality with the goal of truth (though I am largely inclined to that view). I don't have a good sense of what those other people do say, but I get the feeling that the "epistemic rationality is instrumental rationality with the goal of truth" view is not the only game in town.

Re decision theory, I would characterize it as closely related to instrumental rationality. How I would think about it is like this: CDT or EDT are to instrumental rationality as utilitarianism or Kantianism are to morality. CDT is one theory of instrumental rationality, just as utilitarianism is one theory of morality. But this is my own idiosyncratic understanding, not derived from the philosophical literature, so the mainstream might understand it differently.

Re metaethics: thank you for getting this one correct. Round these parts it's often misused to refer to highly general theories of first order normative ethics (e.g. utilitarianism), or something in that vicinity. The confusion is understandable, especially given that utilitarianism (and probably other similarly general moral views) can be interpreted as a view about the metaphysics of reasons, which would be a metaethical view. But it's important to get this right. Here's a less example-driven explanation due to Tristram McPherson:

"Metaethics is that theoretical activity which aims to explain how actual ethical thought and talk—and what (if anything) that thought and talk is distinctively about—fits into reality" (McPherson and Plunkett, "The Nature and Explanatory Ambitions of Metaethics," in The Routledge Handbook of Metaethics, p. 3).

Anyway, thank you for writing this post, I expect it will clear up a lot of confusions and be useful as a reference.

Comment by ikaxas on What are you reading? · 2019-12-26T19:49:15.283Z · score: 2 (2 votes) · LW · GW

Goodreads reveals many books with the title "borrowed time." Who's the author?

Comment by ikaxas on What are you reading? · 2019-12-26T17:08:45.067Z · score: 3 (3 votes) · LW · GW

I'm an ethics PhD student, so unsurprisingly lots of that. Currently reading Consequentialism and its Critics (ed. Samuel Scheffler), Nagel's The Possibility of Altruism, Sidgwick's The Methods of Ethics, and Enoch's Taking Morality Seriously, with various levels of commitment. Also reading Joseph Romm's Climate Change: What Everyone Needs to Know, and just started The Almost Nearly Perfect People: Behind the Myth of the Scandinavian Utopia.

In fiction, I just finished Neal Shusterman's Arc of a Scythe trilogy, which I highly recommend for rationalists. Deals with a lot of issues we tend to think about, mostly immortality and friendly-AI-run utopia.

Comment by ikaxas on Tabletop Role Playing Game or interactive stories for my daughter · 2019-12-14T15:38:41.549Z · score: 3 (3 votes) · LW · GW

Haven't played it myself, but this seems related: https://slatestarcodex.com/2013/02/22/dungeons-and-discourse-third-edition-the-dialectic-continues/

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-22T00:55:35.438Z · score: 1 (1 votes) · LW · GW

Thanks!

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-21T03:39:25.786Z · score: 2 (2 votes) · LW · GW

Yep, I've seen that post before. I've tried to use Anki a couple times, but I always get frustrated trying to decide how to make things into cards. I haven't totally given up on the idea, though, I may try it again at some point, maybe even for this. Thanks for your comment.

Also, NB, your link is not formatted properly -- you have the page URL, but then also "by Michael Nielsen is interesting" as part of the link, so it doesn't go where you want it to.

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-21T03:35:04.300Z · score: 5 (3 votes) · LW · GW

Thanks, this is helpful! Mathematical maturity is a likely candidate -- I've done a few college math courses (Calc III, Linear Alg, Alg I), so I've done some proofs, but probably nowhere near enough, and it's been a few years. Aside from Linear Alg, all I know about the other three areas is what one picks up simply by hanging around LW for a while. Any personal recommendations for beginner textbooks in these areas? Nbd if not, I do know about the standard places to look (Luke Muehlhauser's textbook thread, MIRI research guide, etc), so I can just go look there.

Comment by ikaxas on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-07-02T11:37:07.515Z · score: 3 (2 votes) · LW · GW

[Off topic] Data point: the repeated "(respectively I/you)" at the beginning of the post made that paragraph several times harder to read for me than it otherwise would have been.

Comment by ikaxas on The 3 Books Technique for Learning a New Skilll · 2019-06-03T22:11:14.293Z · score: 1 (1 votes) · LW · GW

Do you generally read the "What" book all the way through, or only use it as a reference when you get stuck? Could a Q&A forum, e.g. StackExchange, serve as the "What" book, do you think?

Comment by ikaxas on Say Wrong Things · 2019-05-26T13:24:00.198Z · score: 6 (4 votes) · LW · GW

The Babble and Prune Sequence seems relevant here

Comment by ikaxas on Tales From the American Medical System · 2019-05-10T22:39:13.298Z · score: 9 (2 votes) · LW · GW

"no refill until appointment is on the books"

But Zvi's friend had an appointment on the books? It was just that it was a couple weeks away.

Otherwise, thanks very much for commenting on this, good to get a doctor's perspective.

Comment by ikaxas on Ideas ahead of their time · 2019-04-04T03:19:08.301Z · score: 5 (3 votes) · LW · GW

As one suggestion, how about something along the lines of "Ideas ahead of their time"?

Comment by ikaxas on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T05:21:29.695Z · score: 7 (4 votes) · LW · GW

Data point: even with the name of the account it took me an embarrassingly long time to figure out that this was actually written by GPT2 (at least, I'm assuming it is). Related: https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/

Comment by ikaxas on Applied Rationality podcast - feedback? · 2019-02-05T16:38:19.220Z · score: 4 (3 votes) · LW · GW

How about something like: "Tsuyoku Naritai - The Becoming Stronger Podcast"?

Comment by ikaxas on What is abstraction? · 2018-12-18T00:56:14.706Z · score: 4 (2 votes) · LW · GW

One such essay about a concept that is either identical to equivocation, or somewhere in the vicinity (I've never quite been able to figure out which, but I think it's supposed to be subtly different), is Scott's post about Motte and Bailey, which includes lots of examples.

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-10T13:27:15.058Z · score: 1 (1 votes) · LW · GW

Good question, I hadn't thought about that. Here's the relevant passage from the book:

In the lab, however, [octopuses] are often quick to get the hang of how life works in their new circumstances. For example, it has long appeared that captive octopuses can recognize and behave differently toward individual human keepers. Stories of this kind have been coming out of different labs for years. Initially it all seemed anecdotal. In the same lab in New Zealand that had the "lights-out" problem [an octopus had consistently been squirting jets of water at the light fixtures to short circuit them], an octopus took a dislike to one member of the lab staff, for no obvious reason, and whenever that person passed by on the walkway behind the tank she received a jet of half a gallon of water in the back of her neck. Shelley Adamo, of Dalhousie University, had one cuttlefish who reliably squirted streams of water at all new visitors to the lab, and not at people who were often around. In 2010, an experiment confirmed that giant Pacific octopuses can indeed recognize individual humans, and can do this even when the humans are wearing identical uniforms. (56)

On the one hand, if "stories of this kind have been coming out of different labs for years," this suggests these may not exactly be isolated incidents (though of course it kind of depends on how many stories). On the other hand, the book only gives two concrete examples. I went back and checked the 2010 study as well. It looks like they studied 8 octopuses, 4 larger and 4 smaller (with one human always feeding and one human always being irritating towards each octopus), so that's not exactly a whole lot of data; the most suggestive result, I'd say, is that on the last day, 7 of the 8 octopuses didn't aim their funnels/water jets at their feeder, while 6/8 did aim them at their irritator. On the other hand, a different metric, respiration rate, was statistically significant in the 4 large octopuses but not the 4 smaller ones.

Also found a couple of other studies that may be relevant to varying degrees by looking up ones that cited the 2010 study, but haven't had a chance to read them:

  • https://link.springer.com/chapter/10.1007/978-94-007-7414-8_19 (talks about octopuses recognizing other octopuses)
  • https://journals.sagepub.com/doi/abs/10.1177/0539018418785485
  • https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0018710 (octopuses recognizing other octopuses)

tl;dr: I'm not really sure. Most of the evidence seems to be anecdotal, but the one study does suggest that most of them probably can to some degree, if you expect those 8 octopuses to be representative.

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T17:08:05.349Z · score: 6 (4 votes) · LW · GW

Because, unlike the robot, the cognitive architectures producing the observed behavior (alleviating a pain) are likely to be similar to those producing the similar behavior in us (since evolution is likely to have reused the same cognitive architecture in us and in the fish), and we know that whatever cognitive architecture produces that behavior in us produces a pain quale. The worry was supposed to be that perhaps the underlying cognitive architecture is more like a reflex than like a conscious experience, but the way the experiment was set up precluded that, since it's highly unlikely that a fish would have a reflex built in for this specific situation (unlike, say, the situation of pulling away from a hot object or a sharp object, which could be an unconscious reflex in other animals).

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T16:57:17.726Z · score: 1 (1 votes) · LW · GW

The answer given in the book is that, as it turns out, they have color receptors in their skin. The book notes that this is only a partial answer, because they still only have one color receptor in their skin, which doesn't allow for color vision, so this doesn't fully solve the puzzle. But Godfrey-Smith speculates that perhaps the combination of one color receptor with color-changing cells in front of the color receptor allows them to gain some information about the color of things around them (121-123).

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T16:46:45.659Z · score: 4 (3 votes) · LW · GW

Thanks! This was quite interesting to try. Just to make it more explicit, your point is supposed to be that here's a form of visual processing going on that doesn't "feel like anything" to us, right?

Comment by ikaxas on Tentatively considering emotional stories (IFS and “getting into Self”) · 2018-12-01T15:25:23.596Z · score: 11 (3 votes) · LW · GW

Said, I'm curious: have you ever procrastinated? If so, what is your internal experience like when you are procrastinating?

Comment by ikaxas on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-10-15T03:08:02.996Z · score: 3 (2 votes) · LW · GW

Ah, thanks. Just transcribed the first 5 minutes, it took me like 20-30 minutes to do. I definitely won't have time to transcribe the whole thing. Might be able to do 30 mins, i.e. ~2 hours of transcription time, over the next few days. Let me know if you still need help and which section you'd want me to transcribe. Definitely looking forward to watching the whole thing, this looks pretty interesting.

Comment by Ikaxas on [deleted post] 2018-10-01T19:36:57.763Z

I think the word you're looking for instead of "ban" is "taboo".

Comment by ikaxas on Zetetic explanation · 2018-09-15T02:54:15.845Z · score: 3 (2 votes) · LW · GW

After quite a while thinking about it I'm still not sure I have an adequate response to this comment; I do take your points, they're quite good. I'll do my best to respond to this in the post I'm writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn't adequately address your points.

Comment by ikaxas on Zetetic explanation · 2018-09-15T02:47:59.720Z · score: 4 (3 votes) · LW · GW

Thanks for this. Sorry it's taken me so long to reply here, didn't mean to let this conversation hang for so long. I completely agree with about 99% of what you wrote here. The 1% I'll hopefully address in the post I'm working on on this topic.

Comment by ikaxas on Zetetic explanation · 2018-09-08T21:34:45.648Z · score: 1 (1 votes) · LW · GW

Ah, thanks!

Comment by ikaxas on Zetetic explanation · 2018-09-08T21:23:21.801Z · score: 1 (1 votes) · LW · GW

EDIT: oops, replied to the wrong comment.

Comment by ikaxas on Zetetic explanation · 2018-09-08T19:29:56.631Z · score: 4 (4 votes) · LW · GW

By the way, I'm curious why you say that the principle of charity "was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere." What do you think was the original, good form of the idea, what is the difference between that and the version the rationalist memesphere has adopted, and what is so bad about the rationalist version?

Comment by ikaxas on Zetetic explanation · 2018-09-08T19:07:08.512Z · score: 1 (1 votes) · LW · GW

I've been mulling over where I went wrong here, and I think I've got it.

that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.

I think this is where I misinterpreted you. I think I thought you were trying to claim that unless there's some threshold or some clear rule for deciding when to ask for clarification, it's not worth implementing "ask for clarification if you're unsure" as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that's not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, without this being anyone's fault. However, I also think that if there is a misunderstanding, this will become apparent at some point if the conversation goes on long enough, and whenever that is, it's worth stopping to have one or both parties do something in the vicinity of trying to pass the other's ITT, to see where the confusion is.

I think another part of the problem here is that part of what I was trying to argue was that in this case, of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I'm much less confident of this now. My arguing that, if a discussion goes on long enough, misunderstandings will reveal themselves, isn't enough to argue that in this case you should immediately have recognized that you had misunderstood (if in fact you have misunderstood, which if you still object to Vaniver's point as I reframed it may not be the case.) My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn't really entail that you ought to have asked for clarification here, in this very instance.

Anyway, as Ben suggested I'm working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I'll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I've been looking for a while and haven't been able to find it if there is.)

consider these two scenarios

I agree the model I've been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don't think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you're going with this is something like "scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way)."

If this is where you are going, I have a couple disagreements with it, but I'll wait until you've explained the rest of your point to state them in case I've guessed wrong (which I'd guess is fairly likely in this case).

Comment by ikaxas on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T05:03:47.233Z · score: 3 (2 votes) · LW · GW

Awesome, I'll watch for when the video is up and then get in touch about coordinating who will transcribe what. If I don't get in touch feel free to PM me or comment here.

Comment by ikaxas on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T00:22:37.380Z · score: 8 (5 votes) · LW · GW

If transcripts end up not being provided, I would be willing to transcribe the video or part of the video, depending on how long it is (I'd probably be willing to transcribe up to about 2 hours of video, maybe more if it's less effort than I expect, having never really tried it before).

Comment by ikaxas on Toward a New Technical Explanation of Technical Explanation · 2018-09-07T05:47:37.164Z · score: 2 (2 votes) · LW · GW

Have you happened to write down your thoughts on this in the meantime?

Comment by ikaxas on Zetetic explanation · 2018-09-06T23:43:07.409Z · score: 11 (3 votes) · LW · GW

Thanks for the encouragement. I will try writing one and see how it goes.

Comment by ikaxas on Zetetic explanation · 2018-09-06T04:51:30.729Z · score: 3 (2 votes) · LW · GW

what level of confidence in having understood what someone said should prompt asking them for clarification?

This is an isolated demand for rigor. Obviously there's no precise level of confidence, in percentages, that should prompt asking clarification. As with many things, context matters. Sometimes, what indicates a need to ask for clarification is that a disagreement persists for longer than it seems like it ought to (indicating that there might be something deeper at work, like a misunderstanding). Sometimes, what indicates this is your interlocutor saying something that seems absurd or obviously mistaken. The second seems relevant in the immediate instance, given that what prompted this line of discussion was your taking Vaniver at his word when he said something that seemed, to you, obviously mistaken.

Note that I say "obviously mistaken." If your interlocutor says something that seems mistaken, that's one thing, and as you say, it shouldn't always prompt a request for clarification; sometimes there's just a simple disagreement in play. But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn't wont to say obviously wrong things, that may indicate that there is something they see that you don't, in which case it would be useful to ask for clarification.

In this particular case, it seems to me that "good content" could be vacuous, or it could be a stand-in for something like "content that meets some standards which I vaguely have in mind but don't feel the desire or need to specify at the moment." It looks like Vaniver, hoping that you would realize that the first usage is so obviously dumb that he wouldn't be intending it, used it to mean the second usage in order to save some typing time or brain cycles or something (I don't claim to know what particular standards he has in mind, but clearly standards that would be useful for "solving problems related to advancing human rationality and avoiding human extinction"). You interpreted it as the first anyways, even though it seemed to you quite obviously a bad idea to optimize for "good content" in that vacuous sense. Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification ("what do you have in mind when you say 'good content', that seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?")

As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said.

"The case at hand" was your misunderstanding of Vaniver, not Benquo.


Hm. After writing this comment I notice I did something of the same thing to you. I interpreted your request for a numerical threshold literally, even though I considered it not only mistaken, but obviously so. Thus I retract my claim (at least in its strong form "any time your interlocutor says something that seems obviously mistaken, ask for clarification"). I continue to think that asking for clarification is often useful, but I think that, as with many things, there are few or no hard-and-fast rules for when to do so; rather, there are messy heuristics. If your interlocutor says something obviously mistaken, that's sometimes an indication that you should ask for clarification. Sometimes it's not. I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn't mean the vacuous interpretation of "good content." I think I probably don't need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I'm not really sure what to do about that right now, or whether and how to revise it.

EDIT: if it turns out you didn't mean it literally, then obviously I will know how I should revise my judgements (namely I should revise my judgement that I didn't need to ask you for clarification).

Comment by ikaxas on Zetetic explanation · 2018-09-05T21:33:58.915Z · score: 14 (4 votes) · LW · GW

I see no indication in Ben’s post that he had the same estimate of the results of his efforts as I did.

This is exactly the problem that the ITT is trying to solve. Ben's interpretation of what you said is Ben's interpretation of what you said, whether he posts it or merely thinks it. If he merely thinks it, and then responds to you based on it, then he'll be responding to a misunderstanding of what you actually said and the conversation won't be productive. You'll think he understood you, he'll perhaps think he understood you, but he won't have understood you, and the conversation will not go well because of it.

But if he writes it out, then you can see that he didn't understand you, and help him understand what you actually meant before he tries to criticize something you didn't even actually say. But this kind of thing only works if both people cooperate a little bit. (Okay, that's a bit strong, I do think that the kind of thing Ben did has some benefit even though you didn't respond to it. But a lot of the benefit comes from the back and forth.)

if one may spend hours on such a thing, and end up with such disappointing results, what’s the point?

Again, this is merely evidence that communication is harder than it seems. Ben not writing down his interpretation of you doesn't magically make him understand you better. All it does is hide the fact that he didn't understand you, and when that fact is hidden it can cause problems that seem to come from nowhere.

If the claim is “doing interpretive labor lets you understand your interlocutor, where a straightforward reading may lead you astray”

That's not the claim at all. The claim is that the reading that seems straightforward to you may not be the reading that seems straightforward to Ben. So if Ben relies on what seems to him a "straightforward reading," he may be relying on a wrong reading of what you said, because you wanted to communicate something different.

but the reality is “doing interpretive labor leaves you with the entirely erroneous impression that you’ve understood your interlocutor when in fact you haven’t, thus wasting your time not just for no benefit, but with a negative effect”, then, again—why do it?

I mean, yes, maybe Ben thought that after writing all that he understood what you were saying. But if he misunderstood you have the power to correct that. And him putting forward the interpretation he thinks is correct gives you a jumping-off point for helping him to understand what you meant. Without that jumping-off point you would be shooting in the dark, throwing out different ways of rephrasing what you said until one stuck, or worse (as I've said several times now) you wouldn't realize he had misunderstood you at all.

sometimes there are just actual disagreements. I think maybe some folks in this conversation forget that, or don’t like to think about it, or… heck, I don’t know. I’m speculating here. But there’s a remarkable lack of acknowledgment, here, of the fact that sometimes someone is just wrong, and people are disagreeing with that person because he’s wrong, and they’re right.

Yes, but you can't hash out the substantive disagreements until you've sorted out any misunderstandings first. That would be like arguing about the population size of Athens when one of you thinks you're talking about Athens, Greece and the other thinks you're talking about Athens, Ohio.

Comment by ikaxas on A Process for Dealing with Motivated Reasoning · 2018-09-05T20:52:55.857Z · score: 9 (2 votes) · LW · GW

Thanks! Done

Comment by ikaxas on Zetetic explanation · 2018-09-05T20:39:09.442Z · score: 3 (3 votes) · LW · GW

Yes, this, precisely this.

Comment by ikaxas on Zetetic explanation · 2018-09-05T20:35:28.983Z · score: 2 (2 votes) · LW · GW

There's a lot going on in this thread, so I'm not sure exactly where this response best belongs, so I'll just put it here.

In this comment Vaniver wrote:

some explanations are trying to talk about underlying generators while other explanations are trying to talk about ritual behavior

I think I have some idea of what he was trying to say here, so let me try to interpret a bit (Vaniver, feel free to correct if anything I say here is mistaken).

There are two kinds of explanation (there are obviously more than two, but among them are these):

The first kind is the kind where you're trying to tell someone how to do something. This is the kind of explanation you see on WikiHow and similar explanation sites, in how-to videos on YouTube, etc. In the current case, this would be something like the following:

How to make a sourdough starter:
Step 1: Add some flour to some water.
Step 2: Leave out for a few days, adding more water and flour as necessary.
Step 3: And there you have a sourdough starter.

This is the kind of explanation Vaniver was referring to as "merely trying to present people with additional rituals to perform." I think a better way to describe it is that you're providing someone with a procedure for how to do something. [Vaniver, I'm somewhat puzzled as to why you used the word "ritual" rather than "procedure," when "procedure" seems like the word that fits best? Is there some subtle way in which it differs from what you were trying to say?]. I'll call it a "procedural explanation."

The second kind may[1] also include telling someone a procedure for how to do something (note that Benquo's explanation did, in fact, provide a simple procedure for making a sourdough starter). But the heart of this type of explanation is that it also includes the information they would have needed in order to discover that procedure for themselves. This is what I take Benquo to be referring to when he says "zetetic explanation." When Vaniver uses the word "generators" in the quote above (though not necessarily in other contexts--some of his usages of the word confuse me as well) I think it means something like "the background knowledge or patterns of thought that would cause someone to think the thought in question on their own." A few examples:

  1. The generators of the procedure for the sourdough starter were something like:[2]
  • On its own, grain is hard to digest
  • There are microbes on it that can make it easier to digest
  • If you create an environment they like living in, you can attract them and then get them to do things to your dough that make it easier to digest
  • They like environments with flour and water
  This is the kind of information that would lead you to be able to generate the above procedure for making a sourdough starter on your own.
  2. In this comment I make the point that I, and perhaps some of the mods, believe that communication is hard and that this leads me (us?) to think that people should probably put in more effort to understand others and to be understood than might feel natural. I could just as easily say that the generator of the thought that [people should probably put in more effort to understand others and to be understood than might feel natural] is that [communication is hard], where "communication is hard" stands in for a bunch of background models, past experiences, etc.
  3. Vaniver's example with mashing potatoes. The "ritual" or "procedure" that his friends had was "get the potato masher, use it to mash the potatoes." But Vaniver had some more general knowledge that enabled him to generate a new procedure when that procedure failed because its preconditions weren't in place (i.e. there was no potato masher on hand). That general knowledge (the "generators" of the thought "use a glass," which would have allowed his friends to generate the same thought had they considered them) was probably something like:
  • Potatoes are pretty tough, so you need a mashing device that is sufficiently hefty
  • A glass is sufficiently hefty

But what does [the potato-mashing story] have to do with the OP? It does not seem to me like your cleverly practical solution to the problem of mashing potatoes had to draw on a knowledge of the history of potato-mashing, or detailed botanical understanding of tubers and their place in the food chain, or the theoretical underpinnings of the construction of kitchen tools, etc.

The history is not necessarily the important part of the "zetetic explanation." Vaniver's solution didn't have to draw on the "detailed theoretical underpinnings of the construction of kitchen tools," but it did have to draw on something like a recognition of "the principles that make a potato masher a good tool for mashing potatoes."

I think the important feature of the "zetetic explanation" is that it **gives the generators as well as the object-level explanation**. It connects up the listener's web of knowledge a bit--in addition to imparting new knowledge, it draws connections between bits of knowledge the listener already had, particularly between general, theoretical knowledge and particular, applied/practical/procedural knowledge. Note that Benquo gives Feynman's explanation of triboluminescence as another example. This leads me to believe the key feature of zetetic explanations isn't that they explain a procedure for how to do something plus how to generate that procedure, but that they more generally connect abstract knowledge with concrete knowledge, and that they connect up the knowledge they're trying to impart with knowledge the listener already has (I've been using the word "listener" rather than "reader" because, as Benquo points out, this kind of explanation is easier to give in person, where it can be personalized to the audience). The listener probably already knows about sugar, so when Feynman explains triboluminescence he doesn't just explain it in an abstract way, he tells you that it applies to sugar so that you can link it up with something you already know about.

On one way of using these words, you might say that a zetetic explanation doesn't just create knowledge, it creates understanding.

As I say, communication is hard, so it's possible that I've misinterpreted Benquo or Vaniver here, but this is what I took them to be saying. Hope that helped some.


[1] note that, as I mention near the end of the comment, there might be zetetic explanations of things other than procedural explanations. Not sure if Benquo intended this, but I think he did, and I think in any case that it is a correct extension of the concept. (I might be wrong though--Benquo might have intended zetetic explanations to be explanations answering the question "where did X come from?" But if that's the case then much of my interpretation near the end of the comment is probably wrong)

[2] I actually think you're right that Benquo's explanation doesn't fully give the generators here (though as Vaniver says, "half of it is, in some sense, 'left out'"), so I don't claim that the generators I list here are fully correct, just that it would be something like this.

Comment by ikaxas on Zetetic explanation · 2018-09-05T17:38:06.772Z · score: 20 (8 votes) · LW · GW

This is the first point at which I, at least, saw any indication that you thought Ben's attempt to pass your ITT was anything less than completely accurate. If you thought his summary of your position wasn't accurate, why didn't you say so earlier? Your response to the comment of his that you linked gave no indication of that, and thus seemed to give the impression that you thought it was an accurate summary (if there are places where you stated that you thought the summary wasn't accurate and I simply missed it, feel free to point this out). My understanding is that often, when person A writes up a summary of what they believe to be person B's position, the purpose is to ensure that the two are on the same page (not in the sense of agreeing, but in the sense that A understands what B is claiming). Thus, I think person A often hopes that person B will either confirm that "yes, that's a pretty accurate summary of my position," or "well, parts of that are correct, but it differs from my actual position in ways 1, 2, and 3" or "no, you've completely misunderstood what I'm trying to say. Actually, I was trying to say [summary of person B's position]."

To be perfectly clear, an underlying premise of this is that communication is hard, and thus that two people can be talking past each other even if both are putting in what feels like a normal amount of effort to write clearly and to understand what the other is saying. This implies that if a disagreement persists, one of the first things to try is to slow down for a moment and get clear on what each person is actually saying, which requires putting in more than what feels like a normal amount of effort, because what feels like a normal amount of effort is often not enough to actually facilitate understanding. I'm getting a vibe that you disagree with this line of thought. Is that correct? If so, where exactly do you disagree?

Comment by ikaxas on A Process for Dealing with Motivated Reasoning · 2018-09-03T23:13:37.251Z · score: 22 (6 votes) · LW · GW

If this is intended as a summary of the post, I'd say it doesn't quite seem to capture what I was getting at. If I had to give my own one-paragraph summary, it would be this:

There's a thing people (including me) sometimes do, where they (unreflectively) assume that the conclusions of motivated reasoning are always wrong, and dismiss them out of hand. That seems like a bad plan. Instead, try going into System II mode and reexamining conclusions you think might be the result of motivated reasoning, rather than immediately dismissing them. This isn't to say that System II processes are completely immune to motivated reasoning, far from it, but "apply extra scrutiny" seems like a better strategy than "dismiss out of hand."

Something that was in the background of the post, but I don't think I adequately brought out, is that this habit of [automatically dismissing anything that seems like it might be the result of motivated reasoning] can lead to decision paralysis and pathological self-doubt. The point of this post is to somewhat correct for that. Perhaps it's an overcorrection, but I don't think it is.

Comment by ikaxas on A Process for Dealing with Motivated Reasoning · 2018-09-03T22:58:46.430Z · score: 3 (2 votes) · LW · GW

Ah, thanks! What happened was that I wrote the post in the LW editor, copied it over to Google Docs for feedback (including links), added some more links while it was in the Google Doc, then copy-and-pasted it back. So that might have been where the weird link formatting came from.