Posts

What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? 2019-09-18T17:17:05.602Z · score: 12 (5 votes)
Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness 2018-12-03T08:00:00.000Z · score: 38 (17 votes)
Trying for Five Minutes on AI Strategy 2018-10-17T16:18:31.597Z · score: 17 (6 votes)
A Process for Dealing with Motivated Reasoning 2018-09-03T03:34:11.650Z · score: 18 (8 votes)
Ikaxas' Hammertime Final Exam 2018-05-01T03:30:11.668Z · score: 22 (6 votes)
Ikaxas' Shortform Feed 2018-01-08T06:19:40.370Z · score: 16 (4 votes)

Comments

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-22T00:55:35.438Z · score: 1 (1 votes) · LW · GW

Thanks!

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-21T03:39:25.786Z · score: 2 (2 votes) · LW · GW

Yep, I've seen that post before. I've tried to use Anki a couple times, but I always get frustrated trying to decide how to make things into cards. I haven't totally given up on the idea, though, I may try it again at some point, maybe even for this. Thanks for your comment.

Also, NB, your link is not formatted properly -- you have the page URL, but then also "by Michael Nielsen is interesting" as part of the link, so it doesn't go where you want it to.

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-21T03:35:04.300Z · score: 5 (3 votes) · LW · GW

Thanks, this is helpful! Mathematical maturity is a likely candidate -- I've done a few college math courses (Calc III, Linear Alg, Alg I), so I've done some proofs, but probably nowhere near enough, and it's been a few years. Aside from Linear Alg, all I know about the other three areas is what one picks up simply by hanging around LW for a while. Any personal recommendations for beginner textbooks in these areas? Nbd if not, I do know about the standard places to look (Luke Muehlhauser's textbook thread, MIRI research guide, etc), so I can just go look there.

Comment by ikaxas on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-07-02T11:37:07.515Z · score: 3 (2 votes) · LW · GW

[Off topic] Data point: the repeated "(respectively I/you)" at the beginning of the post made that paragraph several times harder to read for me than it otherwise would have been.

Comment by ikaxas on The 3 Books Technique for Learning a New Skilll · 2019-06-03T22:11:14.293Z · score: 1 (1 votes) · LW · GW

Do you generally read the "What" book all the way through, or only use it as a reference when you get stuck? Could a Q&A forum, e.g. StackExchange, serve as the "What" book, do you think?

Comment by ikaxas on Say Wrong Things · 2019-05-26T13:24:00.198Z · score: 6 (4 votes) · LW · GW

The Babble and Prune Sequence seems relevant here.

Comment by ikaxas on Tales From the American Medical System · 2019-05-10T22:39:13.298Z · score: 9 (2 votes) · LW · GW

"no refill until appointment is on the books"

But Zvi's friend did have an appointment on the books? It was just a couple of weeks away.

Otherwise, thanks very much for commenting on this, good to get a doctor's perspective.

Comment by ikaxas on Ideas ahead of their time · 2019-04-04T03:19:08.301Z · score: 5 (3 votes) · LW · GW

As one suggestion, how about something along the lines of "Ideas ahead of their time"?

Comment by ikaxas on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T05:21:29.695Z · score: 7 (4 votes) · LW · GW

Data point: even with the name of the account it took me an embarrassingly long time to figure out that this was actually written by GPT2 (at least, I'm assuming it is). Related: https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/

Comment by ikaxas on Applied Rationality podcast - feedback? · 2019-02-05T16:38:19.220Z · score: 4 (3 votes) · LW · GW

How about something like: "Tsuyoku Naritai - The Becoming Stronger Podcast"?

Comment by ikaxas on What is abstraction? · 2018-12-18T00:56:14.706Z · score: 4 (2 votes) · LW · GW

One such essay, about a concept that is either identical to equivocation or somewhere in the vicinity (I've never quite been able to figure out which, but I think it's supposed to be subtly different), is Scott's post about Motte and Bailey, which includes lots of examples.

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-10T13:27:15.058Z · score: 1 (1 votes) · LW · GW

Good question, I hadn't thought about that. Here's the relevant passage from the book:

In the lab, however, [octopuses] are often quick to get the hang of how life works in their new circumstances. For example, it has long appeared that captive octopuses can recognize and behave differently toward individual human keepers. Stories of this kind have been coming out of different labs for years. Initially it all seemed anecdotal. In the same lab in New Zealand that had the "lights-out" problem [an octopus had consistently been squirting jets of water at the light fixtures to short-circuit them], an octopus took a dislike to one member of the lab staff, for no obvious reason, and whenever that person passed by on the walkway behind the tank she received a jet of half a gallon of water in the back of her neck. Shelley Adamo, of Dalhousie University, had one cuttlefish who reliably squirted streams of water at all new visitors to the lab, and not at people who were often around. In 2010, an experiment confirmed that giant Pacific octopuses can indeed recognize individual humans, and can do this even when the humans are wearing identical uniforms. (56)

On the one hand, if "stories of this kind have been coming out of different labs for years," this suggests these may not exactly be isolated incidents (though of course it kind of depends on how many stories). On the other hand, the book only gives two concrete examples. I went back and checked the 2010 study as well. It looks like they studied 8 octopuses, 4 larger and 4 smaller (with one human always feeding and one human always being irritating towards each octopus), so that's not exactly a whole lot of data; the most suggestive result, I'd say, is that on the last day, 7 of the 8 octopuses didn't aim their funnels/water jets at their feeder, while 6/8 did aim them at their irritator. On the other hand, a different metric, respiration rate, was statistically significant in the 4 large octopuses but not the 4 smaller ones.

Also found a couple of other studies that may be relevant to varying degrees by looking up ones that cited the 2010 study, but haven't had a chance to read them:

  • https://link.springer.com/chapter/10.1007/978-94-007-7414-8_19 (talks about octopuses recognizing other octopuses)
  • https://journals.sagepub.com/doi/abs/10.1177/0539018418785485
  • https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0018710 (octopuses recognizing other octopuses)

tl;dr: I'm not really sure. Most of the evidence seems to be anecdotal, but the one study does suggest that most of them probably can to some degree, if you expect those 8 octopuses to be representative.

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T17:08:05.349Z · score: 6 (4 votes) · LW · GW

Because, unlike with the robot, the cognitive architectures producing the observed behavior (alleviating a pain) are likely to be similar to those producing the similar behavior in us (since evolution is likely to have reused the same cognitive architecture in us and in the fish), and we know that whatever cognitive architecture produces that behavior in us produces a pain quale. The worry was supposed to be that perhaps the underlying cognitive architecture is more like a reflex than like a conscious experience, but the way the experiment was set up precluded that, since it's highly unlikely that a fish would have a reflex built in for this specific situation (unlike, say, the situation of pulling away from a hot object or a sharp object, which could be an unconscious reflex in other animals).

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T16:57:17.726Z · score: 1 (1 votes) · LW · GW

The answer given in the book is that, as it turns out, they have color receptors in their skin. The book notes that this is only a partial answer, because they have only one kind of color receptor in their skin, which still doesn't allow for color vision, so this doesn't fully solve the puzzle, but Godfrey-Smith speculates that perhaps the combination of one color receptor with color-changing cells in front of it allows them to gain some information about the color of things around them (121-123).

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T16:46:45.659Z · score: 4 (3 votes) · LW · GW

Thanks! This was quite interesting to try. Just to make it more explicit, your point is supposed to be that there's a form of visual processing going on here that doesn't "feel like anything" to us, right?

Comment by ikaxas on Tentatively considering emotional stories (IFS and “getting into Self”) · 2018-12-01T15:25:23.596Z · score: 11 (3 votes) · LW · GW

Said, I'm curious: have you ever procrastinated? If so, what is your internal experience like when you are procrastinating?

Comment by ikaxas on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-10-15T03:08:02.996Z · score: 3 (2 votes) · LW · GW

Ah, thanks. Just transcribed the first 5 minutes, it took me like 20-30 minutes to do. I definitely won't have time to transcribe the whole thing. Might be able to do 30 mins, i.e. ~2 hours of transcription time, over the next few days. Let me know if you still need help and which section you'd want me to transcribe. Definitely looking forward to watching the whole thing, this looks pretty interesting.

Comment by Ikaxas on [deleted post] 2018-10-01T19:36:57.763Z

I think the word you're looking for instead of "ban" is "taboo".

Comment by ikaxas on Zetetic explanation · 2018-09-15T02:54:15.845Z · score: 3 (2 votes) · LW · GW

After quite a while thinking about it I'm still not sure I have an adequate response to this comment; I do take your points, they're quite good. I'll do my best to respond to this in the post I'm writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn't adequately address your points.

Comment by ikaxas on Zetetic explanation · 2018-09-15T02:47:59.720Z · score: 4 (3 votes) · LW · GW

Thanks for this. Sorry it's taken me so long to reply here, didn't mean to let this conversation hang for so long. I completely agree with about 99% of what you wrote here. The 1% I'll hopefully address in the post I'm working on on this topic.

Comment by ikaxas on Zetetic explanation · 2018-09-08T21:34:45.648Z · score: 1 (1 votes) · LW · GW

Ah, thanks!

Comment by ikaxas on Zetetic explanation · 2018-09-08T21:23:21.801Z · score: 1 (1 votes) · LW · GW

EDIT: oops, replied to the wrong comment.

Comment by ikaxas on Zetetic explanation · 2018-09-08T19:29:56.631Z · score: 4 (4 votes) · LW · GW

By the way, I'm curious why you say that the principle of charity "was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere." What do you think was the original, good form of the idea, what is the difference between that and the version the rationalist memesphere has adopted, and what is so bad about the rationalist version?

Comment by ikaxas on Zetetic explanation · 2018-09-08T19:07:08.512Z · score: 1 (1 votes) · LW · GW

I've been mulling over where I went wrong here, and I think I've got it.

that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.

I think this is where I misinterpreted you. I think I thought you were trying to claim that unless there's some threshold or some clear rule for deciding when to ask for clarification, it's not worth implementing "ask for clarification if you're unsure" as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that's not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, without this being anyone's fault. However, I also think that if there is a misunderstanding, this will become apparent at some point if the conversation goes on long enough, and whenever that is, it's worth stopping to have one or both parties do something in the vicinity of trying to pass the other's ITT, to see where the confusion is.

I think another part of the problem here is that part of what I was trying to argue was that in this case, of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I'm much less confident of this now. My arguing that, if a discussion goes on long enough, misunderstandings will reveal themselves, isn't enough to argue that in this case you should immediately have recognized that you had misunderstood (if in fact you have misunderstood, which, if you still object to Vaniver's point as I reframed it, may not be the case). My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn't really entail that you ought to have asked for clarification here, in this very instance.

Anyway, as Ben suggested I'm working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I'll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I've been looking for a while and haven't been able to find it if there is.)

consider these two scenarios

I agree the model I've been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don't think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you're going with this is something like "scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way)."

If this is where you are going, I have a couple disagreements with it, but I'll wait until you've explained the rest of your point to state them in case I've guessed wrong (which I'd guess is fairly likely in this case).

Comment by ikaxas on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T05:03:47.233Z · score: 3 (2 votes) · LW · GW

Awesome, I'll watch for when the video is up and then get in touch about coordinating who will transcribe what. If I don't get in touch feel free to PM me or comment here.

Comment by ikaxas on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T00:22:37.380Z · score: 8 (5 votes) · LW · GW

If transcripts end up not being provided, I would be willing to transcribe the video or part of the video, depending on how long it is (I'd probably be willing to transcribe up to about 2 hours of video, maybe more if it's less effort than I expect, having never really tried it before).

Comment by ikaxas on Toward a New Technical Explanation of Technical Explanation · 2018-09-07T05:47:37.164Z · score: 2 (2 votes) · LW · GW

Have you happened to write down your thoughts on this in the meantime?

Comment by ikaxas on Zetetic explanation · 2018-09-06T23:43:07.409Z · score: 11 (3 votes) · LW · GW

Thanks for the encouragement. I will try writing one and see how it goes.

Comment by ikaxas on Zetetic explanation · 2018-09-06T04:51:30.729Z · score: 3 (2 votes) · LW · GW

what level of confidence in having understood what someone said should prompt asking them for clarification?

This is an isolated demand for rigor. Obviously there's no precise level of confidence, in percentages, that should prompt asking for clarification. As with many things, context matters. Sometimes, what indicates a need to ask for clarification is that a disagreement persists for longer than it seems like it ought to (indicating that there might be something deeper at work, like a misunderstanding). Sometimes, what indicates this is your interlocutor saying something that seems absurd or obviously mistaken. The second seems relevant in the immediate instance, given that what prompted this line of discussion was your taking Vaniver at his word when he said something that seemed, to you, obviously mistaken.

Note that I say "obviously mistaken." If your interlocutor says something that seems mistaken, that's one thing, and as you say, it shouldn't always prompt a request for clarification; sometimes there's just a simple disagreement in play. But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn't wont to say obviously wrong things, that may indicate that there is something they see that you don't, in which case it would be useful to ask for clarification.

In this particular case, it seems to me that "good content" could be vacuous, or it could be a stand-in for something like "content that meets some standards which I vaguely have in mind but don't feel the desire or need to specify at the moment." It looks like Vaniver, hoping that you would realize that the first usage is so obviously dumb that he wouldn't be intending it, used it to mean the second usage in order to save some typing time or brain cycles or something (I don't claim to know what particular standards he has in mind, but clearly standards that would be useful for "solving problems related to advancing human rationality and avoiding human extinction"). You interpreted it as the first anyways, even though it seemed to you quite obviously a bad idea to optimize for "good content" in that vacuous sense. Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification ("What do you have in mind when you say 'good content'? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?").

As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said.

"The case at hand" was your misunderstanding of Vaniver, not Benquo.


Hm. After writing this comment I notice I did something of the same thing to you. I interpreted your request for a numerical threshold literally, even though I considered it not only mistaken, but obviously so. Thus I retract my claim (at least in its strong form "any time your interlocutor says something that seems obviously mistaken, ask for clarification"). I continue to think that asking for clarification is often useful, but I think that, as with many things, there are few or no hard-and-fast rules for when to do so; rather, there are messy heuristics. If your interlocutor says something obviously mistaken, that's sometimes an indication that you should ask for clarification. Sometimes it's not. I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn't mean the vacuous interpretation of "good content." I think I probably don't need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I'm not really sure what to do about that right now, or whether and how to revise it.

EDIT: if it turns out you didn't mean it literally, then obviously I will know how I should revise my judgements (namely I should revise my judgement that I didn't need to ask you for clarification).

Comment by ikaxas on Zetetic explanation · 2018-09-05T21:33:58.915Z · score: 14 (4 votes) · LW · GW

I see no indication in Ben’s post that he had the same estimate of the results of his efforts as I did.

This is exactly the problem that the ITT is trying to solve. Ben's interpretation of what you said is Ben's interpretation of what you said, whether he posts it or merely thinks it. If he merely thinks it, and then responds to you based on it, then he'll be responding to a misunderstanding of what you actually said and the conversation won't be productive. You'll think he understood you, he'll perhaps think he understood you, but he won't have understood you, and the conversation will not go well because of it.

But if he writes it out, then you can see that he didn't understand you, and help him understand what you actually meant before he tries to criticize something you didn't even actually say. But this kind of thing only works if both people cooperate a little bit. (Okay, that's a bit strong, I do think that the kind of thing Ben did has some benefit even though you didn't respond to it. But a lot of the benefit comes from the back and forth.)

if one may spend hours on such a thing, and end up with such disappointing results, what’s the point?

Again, this is merely evidence that communication is harder than it seems. Ben not writing down his interpretation of you doesn't magically make him understand you better. All it does is hide the fact that he didn't understand you, and when that fact is hidden it can cause problems that seem to come from nowhere.

If the claim is “doing interpretive labor lets you understand your interlocutor, where a straightforward reading may lead you astray”

That's not the claim at all. The claim is that the reading that seems straightforward to you may not be the reading that seems straightforward to Ben. So if Ben relies on what seems to him a "straightforward reading," he may be relying on a wrong reading of what you said, because you wanted to communicate something different.

but the reality is “doing interpretive labor leaves you with the entirely erroneous impression that you’ve understood your interlocutor when in fact you haven’t, thus wasting your time not just for no benefit, but with a negative effect”, then, again—why do it?

I mean, yes, maybe Ben thought that after writing all that he understood what you were saying. But if he misunderstood, you have the power to correct that. And him putting forward the interpretation he thinks is correct gives you a jumping-off point for helping him to understand what you meant. Without that jumping-off point you would be shooting in the dark, throwing out different ways of rephrasing what you said until one stuck, or worse (as I've said several times now) you wouldn't realize he had misunderstood you at all.

sometimes there are just actual disagreements. I think maybe some folks in this conversation forget that, or don’t like to think about it, or… heck, I don’t know. I’m speculating here. But there’s a remarkable lack of acknowledgment, here, of the fact that sometimes someone is just wrong, and people are disagreeing with that person because he’s wrong, and they’re right.

Yes, but you can't hash out the substantive disagreements until you've sorted out any misunderstandings first. That would be like arguing about the population size of Athens when one of you thinks you're talking about Athens, Greece and the other thinks you're talking about Athens, Ohio.

Comment by ikaxas on A Process for Dealing with Motivated Reasoning · 2018-09-05T20:52:55.857Z · score: 9 (2 votes) · LW · GW

Thanks! Done

Comment by ikaxas on Zetetic explanation · 2018-09-05T20:39:09.442Z · score: 0 (2 votes) · LW · GW

Yes, this, precisely this.

Comment by ikaxas on Zetetic explanation · 2018-09-05T20:35:28.983Z · score: 2 (2 votes) · LW · GW

There's a lot going on in this thread, so I'm not sure exactly where this response best belongs, so I'll just put it here.

In this comment Vaniver wrote:

some explanations are trying to talk about underlying generators while other explanations are trying to talk about ritual behavior

I think I have some idea of what he was trying to say here, so let me try to interpret a bit (Vaniver, feel free to correct if anything I say here is mistaken).

There are two kinds of explanation (there are obviously more than two, but among them are these):

The first kind is the kind where you're trying to tell someone how to do something. This is the kind of explanation you see on WikiHow and similar explanation sites, in how-to videos on YouTube, etc. In the current case, this would be something like the following:

How to make a sourdough starter:
Step 1: Add some flour to some water.
Step 2: Leave out for a few days, adding more water and flour as necessary.
Step 3: And there you have a sourdough starter.

This is the kind of explanation Vaniver was referring to as "merely trying to present people with additional rituals to perform." I think a better way to describe it is that you're providing someone with a procedure for how to do something. [Vaniver, I'm somewhat puzzled as to why you used the word "ritual" rather than "procedure," when "procedure" seems like the word that fits best? Is there some subtle way in which it differs from what you were trying to say?]. I'll call it a "procedural explanation."

The second kind may[1] also include telling someone a procedure for how to do something (note that Benquo's explanation did, in fact, provide a simple procedure for making a sourdough starter). But the heart of this type of explanation is that it also includes the information they would have needed in order to discover that procedure for themselves. This is what I take Benquo to be referring to when he says "zetetic explanation." When Vaniver uses the word "generators" in the quote above (though not necessarily in other contexts--some of his usages of the word confuse me as well) I think it means something like "the background knowledge or patterns of thought that would cause someone to think the thought in question on their own." A couple examples:

  1. The generators of the procedure for the sourdough starter were something like:[2]
  • On its own, grain is hard to digest
  • There are microbes on it that can make it easier to digest
  • If you create an environment they like living in, you can attract them and then get them to do things to your dough that make it easier to digest
  • They like environments with flour and water
  This is the kind of information that would lead you to be able to generate the above procedure for making a sourdough starter on your own.
  2. In this comment I make the point that I, and perhaps some of the mods, believe that communication is hard and that this leads me (us?) to think that people should probably put in more effort to understand others and to be understood than might feel natural. I could just as easily say that the generator of the thought that [people should probably put in more effort to understand others and to be understood than might feel natural] is that [communication is hard], where "communication is hard" stands in for a bunch of background models, past experiences, etc.
  3. Vaniver's example with mashing potatoes. The "ritual" or "procedure" that his friends had was "get the potato masher, use it to mash the potatoes." But Vaniver had some more general knowledge that enabled him to generate a new procedure when that procedure failed because its preconditions weren't in place (i.e. there was no potato masher on hand). That general knowledge (the "generators" of the thought "use a glass," which would have allowed his friends to generate the same thought had they considered them) was probably something like:
  • Potatoes are pretty tough, so you need a mashing device that is sufficiently hefty
  • A glass is sufficiently hefty

But what does [the potato-mashing story] have to do with the OP? It does not seem to me like your cleverly practical solution to the problem of mashing potatoes had to draw on a knowledge of the history of potato-mashing, or detailed botanical understanding of tubers and their place in the food chain, or the theoretical underpinnings of the construction of kitchen tools, etc.

The history is not necessarily the important part of the "zetetic explanation." Vaniver's solution didn't have to draw on the "detailed theoretical underpinnings of the construction of kitchen tools," but it did have to draw on something like a recognition of "the principles that make a potato masher a good tool for mashing potatoes."

I think the important feature of the "zetetic explanation" is that it **gives the generators as well as just the object-level explanation**. It connects up the listener's web of knowledge a bit--in addition to imparting new knowledge, it draws connections between bits of knowledge the listener already had, particularly between general, theoretical knowledge and particular, applied/practical/procedural knowledge. Note that Benquo gives Feynman's explanation of triboluminescence as another example. This leads me to believe the key feature of zetetic explanations isn't that they explain a procedure for how to do something plus how to generate that procedure, but that they more generally connect abstract knowledge with concrete knowledge, and that they connect up the knowledge they're trying to impart with knowledge the listener already has (I've been using the word "listener" rather than "reader" because, as Benquo points out, this kind of explanation is easier to give in person, where it can be personalized to the audience). The listener probably already knows about sugar, so when Feynman explains triboluminescence he doesn't just explain it in an abstract way, he tells you that it applies to sugar so that you can link it up with something you already know about.

On one way of using these words, you might say that a zetetic explanation doesn't just create knowledge, it creates understanding.

As I say, communication is hard, so it's possible that I've misinterpreted Benquo or Vaniver here, but this is what I took them to be saying. Hope that helped some.


[1] Note that, as I mention near the end of the comment, there might be zetetic explanations of things other than procedural explanations. I'm not sure if Benquo intended this, but I think he did, and I think in any case that it is a correct extension of the concept. (I might be wrong though--Benquo might have intended zetetic explanations to be explanations answering the question "where did X come from?" But if that's the case then much of my interpretation near the end of the comment is probably wrong.)

[2] I actually think you're right that Benquo's explanation doesn't fully give the generators here (though as Vaniver says, "half of it is, in some sense, 'left out'"), so I don't claim that the generators I list here are fully correct, just that it would be something like this.

Comment by ikaxas on Zetetic explanation · 2018-09-05T17:38:06.772Z · score: 20 (8 votes) · LW · GW

This is the first point at which I, at least, saw any indication that you thought Ben's attempt to pass your ITT was anything less than completely accurate. If you thought his summary of your position wasn't accurate, why didn't you say so earlier? Your response to the comment of his that you linked gave no indication of that, and thus seemed to give the impression that you thought it was an accurate summary (if there are places where you stated that you thought the summary wasn't accurate and I simply missed it, feel free to point this out). My understanding is that often, when person A writes up a summary of what they believe to be person B's position, the purpose is to ensure that the two are on the same page (not in the sense of agreeing, but in the sense that A understands what B is claiming). Thus, I think person A often hopes that person B will either confirm that "yes, that's a pretty accurate summary of my position," or "well, parts of that are correct, but it differs from my actual position in ways 1, 2, and 3" or "no, you've completely misunderstood what I'm trying to say. Actually, I was trying to say [summary of person B's position]."

To be perfectly clear, an underlying premise of this is that communication is hard, and thus that two people can be talking past each other even if both are putting in what feels like a normal amount of effort to write clearly and to understand what the other is saying. This implies that if a disagreement persists, one of the first things to try is to slow down for a moment and get clear on what each person is actually saying, which requires putting in more than what feels like a normal amount of effort, because what feels like a normal amount of effort is often not enough to actually facilitate understanding. I'm getting a vibe that you disagree with this line of thought. Is that correct? If so, where exactly do you disagree?

Comment by ikaxas on A Process for Dealing with Motivated Reasoning · 2018-09-03T23:13:37.251Z · score: 22 (6 votes) · LW · GW

If this is intended as a summary of the post, I'd say it doesn't quite seem to capture what I was getting at. If I had to give my own one-paragraph summary, it would be this:

There's a thing people (including me) sometimes do, where they (unreflectively) assume that the conclusions of motivated reasoning are always wrong, and dismiss them out of hand. That seems like a bad plan. Instead, try going into System II mode and reexamining conclusions you think might be the result of motivated reasoning, rather than immediately dismissing them. This isn't to say that System II processes are completely immune to motivated reasoning, far from it, but "apply extra scrutiny" seems like a better strategy than "dismiss out of hand."

Something that was in the background of the post, but I don't think I adequately brought out, is that this habit of [automatically dismissing anything that seems like it might be the result of motivated reasoning] can lead to decision paralysis and pathological self-doubt. The point of this post is to somewhat correct for that. Perhaps it's an overcorrection, but I don't think it is.

Comment by ikaxas on A Process for Dealing with Motivated Reasoning · 2018-09-03T22:58:46.430Z · score: 3 (2 votes) · LW · GW

Ah, thanks! What happened was that I wrote the post in the LW editor, copied it over to Google Docs for feedback (including links), added some more links while it was in the Google Doc, then copy-and-pasted it back. So that might have been where the weird link formatting came from.

Comment by ikaxas on Interpersonal Morality · 2018-08-18T22:08:42.831Z · score: 1 (1 votes) · LW · GW

I imagine you're probably aware of this in the meantime, but for Eliezer's benefit in case he isn't (and hopefully for the benefit of others who read this post and aren't as familiar with moral philosophy): I believe the term "normativity" is the standard term used to refer to the "sum of all valu(ation rul)es," and would probably be a good term for LessWrong to adopt for this purpose.

Comment by ikaxas on Ikaxas' Shortform Feed · 2018-08-16T03:57:06.686Z · score: 3 (2 votes) · LW · GW

I said in this comment that I would post an update as to whether or not I had done deep reflection (operationalized as 5 days = 40 hours cumulatively) on AI timelines by August 15th. As it turns out, I have not done so. I had one conversation that caused me to reflect that perhaps timelines are not as key of a variable in my decision process (about whether to drop everything and try to retrain to be useful for AI safety) as I thought they were, but that is the extent of it. I'm not going to commit to do anything further with this right now, because I don't think that would be useful.

Comment by ikaxas on We Agree: Speeches All Around! · 2018-08-15T05:34:06.550Z · score: 14 (7 votes) · LW · GW

I think there are actually two separate phenomena under discussion here, which look superficially similar, but actually don't have much to do with each other.

First phenomenon

Alice: Would you help me fix my car muffler?

Bob: Sure.

Alice: That way you won't have to listen to my car roaring like a jet engine every time I leave my house (since we're neighbors and all).

Second Phenomenon

Alice: Would you help me fix my car muffler?

Bob: Sure.

Alice: The noise sure does give me a headache, I want it fixed as soon as possible.

Bob: Ah, okay. I'm alright with cars, but not stellar, so how about I just pay for you to get it fixed at a garage instead? You can owe me one.

The first phenomenon seems bad for the reasons you describe in the great-grandparent comment. It also just seems strange from a linguistic perspective to keep trying to persuade someone to do something after they've already agreed to do it. Though if the order were reversed so that Alice gave her reason before Bob assented, it would still seem bad for the reasons you mention (because Alice's reason isn't all that good) but not linguistically odd.

The second phenomenon, on the other hand, seems like a good thing to me, and as far as I can tell it isn't affected by the problems you mention. In particular, Alice giving extra reasons doesn't absolve her of any debt she owes to Bob for the favor; in fact, in this particular scenario I would perceive her to owe a greater debt to Bob if he pays for her to have her car fixed than if he helps her fix it (though I have no idea how universal this intuition would be, and am agnostic about whether it's correct morally). It actually seems like Bob and Alice both benefit from Alice giving her reason (at least the way I'm imagining the extra details of the scenario): Alice gets her car fixed faster, and Bob gets to avoid spending a large amount of time fixing the car. As I'm imagining the scenario, Bob would have done it if he thought Alice was asking him e.g. partially as an excuse to spend more time with him, because he also would have wanted to do that, but once it was revealed that Alice's primary objective was to get the car fixed as fast as possible, Bob was able to save himself some time and (as I mentioned above) get Alice in debt to him even more than she otherwise would have been. So they both benefited.

The distinction seems to be that in the first phenomenon, Alice mentions a reason why it would benefit Bob to help her fix her car, whereas in the second phenomenon, Alice mentions the underlying reason she wants the car fixed. I can see how Alice mentioning a reason Bob would want to help fix the car could shift the situation to an instance of your third case, but I don't see how Alice mentioning the underlying reason she wants the car fixed could do so, since that doesn't make it any more in Bob's interest to help her (except insofar as fulfilling Alice's preferences is part of Bob's interest, but that's an instance of your second case).

It seems the fact that these two phenomena are distinct has only been obliquely acknowledged elsewhere in this thread, so I wanted to make it more explicit. In particular, if I'm interpreting everyone correctly then most of what people have said in this thread has been in support of the second phenomenon, and most of your objections have been objections to the first phenomenon, so to a certain extent people seem to be talking past each other.

Also, you said in the parent comment that you object to what looks to me like the second phenomenon, but you didn't give your reasons there. Nothing wrong with that, but if you're willing I'd be interested in hearing those reasons, because I'm having trouble imagining what someone could object to about the second phenomenon. The only thing I can think of is this: If you know the "big-picture goal" behind someone's request, perhaps that obligates you to put in more effort to help them towards that big-picture goal than if you only knew the contents of the immediate request, i.e. you have to put in time to think about whether there's a better way to accomplish the big-picture goal, and if that way ends up being more effortful than the original ask you still have to help with it, etc. That might be concerning in a similar way to your objection to the first phenomenon, if it's true.

Comment by ikaxas on Is there a practitioner's guide for rationality? · 2018-08-13T06:25:18.226Z · score: 12 (6 votes) · LW · GW

I don't know of a full guide, but here's a sequence exploring applications for several CFAR techniques: https://www.lesswrong.com/sequences/qRxTKm7DAftSuTGvj

Comment by Ikaxas on [deleted post] 2018-07-27T21:50:59.428Z

For me as well, especially once I related it back to Parfit's Hitchhiker.

Comment by ikaxas on Are ethical asymmetries from property rights? · 2018-07-02T23:13:05.817Z · score: 3 (2 votes) · LW · GW

I think they sometimes do, or at least it is eminently plausible that they sometimes do. The classic trolley problem (especially in its bridge formulation) is widely considered an example of a way in which the act-omission distinction is at odds with consequentialism. I'm sure you're aware of the trolley problem, so I'm not bringing it up as an example I think you're not aware of, but more to note that I'm confused as to why, given that you're aware of it, you think it doesn't defy consequentialism.

For another example, on one plausible theory in population ethics (the total view), creating a happy person at happiness level x adds to the total amount of happiness in the world, and is therefore just as valuable as increasing an existing person's level of happiness by x. Thus, not creating this person when you could goes against consequentialism.

There are ways to argue that these asymmetries are actually optimal from a consequentialist perspective, but it seems to me the default view would be that they aren't, so I'm confused why you think that they so obviously are. (I'm not sure that the fact that these asymmetries defy consequentialism would make them confusing--I don't think (most) humans are intuitive consequentialists, at least not about all cases, so it seems to me not at all confusing that some of our intuitions would prescribe actions that aren't optimal from a consequentialist perspective.)

Comment by ikaxas on LW Update 2018-07-01 – Default Weak Upvotes · 2018-07-02T22:52:34.369Z · score: 6 (4 votes) · LW · GW

Strong-upvoted, mostly because I had a positive system-1 reaction to the line "I really hate the focus on karma." I was going to straightforwardly agree with it, but then thought about it some more and came up with the following:

@ mostly the mod team: I'm not necessarily sure I grok why this site seems to care so much about karma, and I'm curious about it. I'm getting the impression that maybe it's more important than I thought though, as a tool for guiding site culture. Like, every time I see a post talking about changing the karma system, my first thought is "Whoah, isn't this way overthinking it? Why is this such an important issue?" Then I remind myself "oh, yeah, maybe it's for shaping site culture, which is important I guess, so maybe this is important." But then the next time I have the same system 1 reaction of "Why bother caring about karma so much?" Now I'm new here, and wasn't around for the death of old-LW (I came in around the time LW2 started), so maybe this is just due to the fact that "LW dying" isn't a particularly salient possibility for me, so I'm not worried so much about the nitty-gritty details of how to shape incentive gradients on the site so that doesn't happen.

I would also say my intuitive reaction is that low-positive karma seems the right place for neutral comments. I'm not sure I like the "levels" idea, just because I don't know how to determine what level I want a comment to be at on a scale that goes from -infinity to +infinity.

Comment by ikaxas on Machine Learning Analogy for Meditation (illustrated) · 2018-06-29T01:50:36.222Z · score: 6 (5 votes) · LW · GW

I wondered the same thing. However, after thinking about it, I noticed that having the text be handwritten in different colors and sizes gave it a different feel, in a good way, in that the color and size in a way stood in for speech modulations like tone/volume/etc. One could change the font size and color in normal text, but I feel like that probably wouldn't have had the same effect, though I could be wrong.

Comment by ikaxas on Open Thread June 2018 · 2018-06-16T03:43:19.185Z · score: 3 (1 votes) · LW · GW

Yep, I agree (ETA: about the fact that the books aren't especially "rationalist"; I don't remember thinking that the quality of the writing went down as the amount of anti-religious axe-grinding went up, but it's been long enough since I read the books that maybe if I read them again with that claim in mind I would agree). I'm rereading Ender's Game and have changed my mind about His Dark Materials being especially rationalist since writing that comment. ETA: Ender's Game has a ton more stuff in it than I remembered that could basically have come straight out of the sequences, so my mental baseline for "especially rationalist-y fiction" was a lot lower than it probably should have been. Also probably some halo effect going on: I like the books, I like rationalism, so my brain wanted to associate them.

Comment by ikaxas on The Incoherence of Honesty · 2018-06-14T02:13:06.490Z · score: 3 (1 votes) · LW · GW

What does your epistemology recommend for others? For example, should I:

1. treat cousin_it's axioms as true?

2. treat Ikaxas's axioms as true?

3. Something else?

If the first, why should the rule be

C: For all x, x should treat cousin_it's axioms as true

rather than say "treat TAG's axioms as true" or "treat Barack Obama's axioms as true" or "treat Joe Schmoe's axioms as true"? Don't symmetry considerations speak against this "epistemological egoism"?

If the second, then the rule seems to be

A: For all x, x should treat x's axioms as true.

This is pretty close to relativism. Granted, A is not relativism--relativism would be

R: For all x, x's axioms are true for x

--but it is fairly close. For all that, it may in fact be the best rule from a pragmatic perspective.

To put this in map-territory terminology: the best rule, pragmatically speaking, may be, "for all x, x should follow x's map" (i.e. A), since x doesn't really have unmediated access to other people's maps, or to the territory. But the rule could not be "for all x, the territory corresponds to x's map," (i.e. R) since this would either imply that there is a territory for each person, when in fact there is only one territory, or it would imply that the territory contains contradictions, since some people's maps contain P and others' contain not-P.

Alternatively, perhaps your epistemology only makes a recommendation for you, cousin_it, and doesn't tell others what they should believe. But in that case it's not complete.

Also, it's not clear to me what "everything except cousin_it's axioms requires justification" has to do with the original statement that "knowledge is relative to a particular process." That statement certainly seems like it could be open to the charge of relativism.

Comment by ikaxas on Toolbox-thinking and Law-thinking · 2018-06-05T05:10:38.015Z · score: 7 (3 votes) · LW · GW

Ah, okay, I think I understand now. That reminds me of Kant's noumena-phenomena distinction, where the territory is the noumena, and you're saying we will never have access to the territory/noumena directly, only various maps (phenomena), and none of those maps can ever perfectly correspond to the territory. And Law thinking sometimes forgets that we can never have access to the territory-as-it-is. Is that about right?

Comment by ikaxas on Open Thread June 2018 · 2018-06-04T14:46:47.716Z · score: 5 (2 votes) · LW · GW

Actually, that sounds like a good idea not just because you'd get more accurate information about how often you exercise, but also for the following reason: what often happens (at least to me) when I'm tracking something I want to do is that when I have to put in a failed instance I feel guilty. Due to Goodhart's Imperius, this then disincentivizes me from tracking the behavior in the first place (especially if I'm failing often), because I get negative feedback from the tracking, so the simplest solution from the monkey brain's perspective is to stop tracking. But if you get the lotus whether you did the thing or not, conditional on entering that information into the app, then that gives the proper incentive to track. So I would predict this would work well.

Comment by ikaxas on Toolbox-thinking and Law-thinking · 2018-06-04T03:47:14.691Z · score: 10 (2 votes) · LW · GW

I'm finding this comment hard to parse for some reason. In particular, I'm not sure I understand the phrase "map that is the territory." On my understanding of those terms (which I thought was the usual one, but may not be), it's a category error to think of the territory as just another map, even if a particularly special one; the territory is qualitatively distinct from any map, it's a different kind of thing. So "a map that is the territory" doesn't parse, because the territory isn't a map, it's the territory. Are you using these terms in a different sense, or intentionally/actively disagreeing with this framing [EDIT: (e.g., claiming that it's "just maps all the way down")], or something else? Also, usually a discrepancy is between two things A and B, so I'm having trouble understanding what you mean by "discrepancies involved in (for instance) optimizing for min Euclidean distance" without a specification of what the discrepancies are between.