Posts

Has the effectiveness of fever screening declined? 2020-03-27T22:07:16.932Z · score: 9 (3 votes)
Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration 2020-02-06T01:09:05.384Z · score: 15 (4 votes)
What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? 2019-09-18T17:17:05.602Z · score: 12 (5 votes)
Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness 2018-12-03T08:00:00.000Z · score: 38 (17 votes)
Trying for Five Minutes on AI Strategy 2018-10-17T16:18:31.597Z · score: 20 (7 votes)
A Process for Dealing with Motivated Reasoning 2018-09-03T03:34:11.650Z · score: 18 (8 votes)
Ikaxas' Hammertime Final Exam 2018-05-01T03:30:11.668Z · score: 23 (7 votes)
Ikaxas' Shortform Feed 2018-01-08T06:19:40.370Z · score: 16 (4 votes)

Comments

Comment by ikaxas on Scheduling Algorithm for a PhD Student · 2020-09-25T15:08:16.807Z · score: 1 (1 votes) · LW · GW

How long is the median article in your field? More like 5 pages, more like 20 pages, or more like 40 pages?

Comment by ikaxas on Expansive translations: considerations and possibilities · 2020-09-18T19:31:05.018Z · score: 4 (3 votes) · LW · GW

Imagine a system where when you land on a Wikipedia page, it translates it into a version optimized for you at that time. The examples change to things in your life, and any concepts difficult for you get explained in detail. It would be like a highly cognitively empathetic personal teacher.

Hmm, something about this bothers me, but I'm not entirely sure what. At first I thought it was something about filter bubbles, but of course that can be fixed; just tune the algorithm so that it frames things in a way that sits optimally outside your intellectual filter bubble/comfort zone.

Now I think it's something more like: it can be valuable to have read the same thing as other people; if everyone gets their own personalized version of Shakespeare, then people lose some of the connection they could have had with others over reading Shakespeare, since they didn't really read the same thing. And also, it can be valuable for different people to read the same thing for another reason: different people may interpret a text in different ways, which can generate new insights. If everyone gets their own personalized version, we lose out on some of the insights people might have had by bouncing their minds off of the original text.

I guess this isn't really a knockdown argument against making this sort of "personal translator" technology, since there's no reason people couldn't turn it off sometimes and read the originals, but nevertheless, we don't have a great track record of using technology like this wisely and not overusing it (I'm thinking of social media here).

Comment by ikaxas on What Does "Signalling" Mean? · 2020-09-17T03:27:17.565Z · score: 5 (3 votes) · LW · GW

"Metaethics" is another example of this; sometimes it gets used around here to mean "high-level normative ethics" (and in fact the rationalist newsletter just uses it as the section header for anything ethics-related).

Comment by ikaxas on Diagramming "Replacing Guilt," Part 1 · 2020-08-13T02:05:34.265Z · score: 1 (1 votes) · LW · GW

I liked this a lot. However, I didn't really understand the "Replacing Guilt" picture, the one with the two cups. Are they supposed to be coffee and water, with the idea being that coffee can keep you going short term but isn't sustainable, while water is better long term, or something like that?

Comment by ikaxas on No Ultimate Goal and a Small Existential Crisis · 2020-07-24T22:00:00.332Z · score: 8 (3 votes) · LW · GW

I think about this question a lot as well. Here are some pieces I've personally found particularly helpful in thinking about it:

  • Sharon Street, "Nothing 'Really' Matters, But That's Not What Matters": link
    • This is several levels deep in a conversation, but you might be able to read it on its own and get the gist, then go back and read some of the other stuff if you feel like it.
  • Nate Soares' "Replacing Guilt" series: http://mindingourway.com/guilt/
    • Includes much more metaethics than it might sound like it should.
    • Especially relevant: "You don't get to know what you're fighting for"; everything in the "Drop your Obligations" section
  • (Book) Jonathan Haidt: The Happiness Hypothesis: link
    • Checks ancient wisdom about how to live against the modern psych literature.
  • Susan Wolf on meaning in life: link
    • There is a book version, but I haven't read it yet

Search terms that might help if you want to look for what philosophers have said about this:

  • meaning in/of life
  • moral epistemology
  • metaethics
  • terminal/final goals/ends

Some philosophers with relevant work:

  • Derek Parfit
  • Sharon Street
  • Christine Korsgaard
  • Bernard Williams

There is a ton of philosophical work on these sorts of things obviously, but I only wanted to directly mention stuff I've actually read.

If I had to summarize my current views on this (still very much in flux, and not something I necessarily live up to), I might say something like this:

Pick final goals that will be good both for you and for others. As Susan Wolf says, meaning comes from where "subjective valuing meets objective value," and as Jonathan Haidt says, "happiness comes from between." Some of this should be projects that will be fun/fulfilling for you and also produce value for others; some of this should be relationships (friendships, family, romantic, etc).

But be prepared for those goals to update. I like romeostevensit's phrasing elsethread: "goals are lighthouses not final destinations." As Nate Soares says, "you don't get to know what you're fighting for": what you're fighting for will change over time, and that's not a bad thing. Can't remember the source for this (probably either the Sequences or Replacing Guilt somewhere), but the human mind will often transform an instrumental goal into a terminal goal. Unlike Eliezer, I think this really does signal a blurriness in the boundary between instrumental and terminal goals. Even terminal goals can be evaluated in light of other goals we have (making them at least a little bit instrumental), and if an instrumental goal becomes ingrained enough, we may start to care about it for its own sake.

And when picking goals, start from where you are: start with what you already find yourself to care about, and go from there. The well-known metaphor of Neurath's Boat goes like this: "We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction." See also Eliezer, "Created already in motion".

So start from what you already care about, and aim to have that evolve in a more consistent direction. Aim for goals that will be good for both you and others, but take care of the essentials for yourself first (as Jordan Peterson says, "Set your house in perfect order before you criticize the world"). In order to help the world, you have to make yourself formidable (think virtue ethics). Furthermore, as Agnes Callard points out (link), the meaning in life can't solely be to make others happy. The buck has to stop somewhere -- "At some point, someone has to actually do the happying." So again, look for places where you can do things that make you happy while also creating value for others.

I don't claim this is at all airtight, or complete (as I said, still very much in flux), but it's what I've come to after thinking about this for the last several years.

Comment by ikaxas on (answered: yes) Has anyone written up a consideration of Downs's "Paradox of Voting" from the perspective of MIRI-ish decision theories (UDT, FDT, or even just EDT)? · 2020-07-07T22:36:48.913Z · score: 3 (2 votes) · LW · GW

There has been some philosophical work that makes just this point. In particular, Julia Nefsky (who I think has some EA ties?) has a whole series of papers about this. Probably the best one to start with is her PhilCompass paper here: https://onlinelibrary.wiley.com/doi/abs/10.1111/phc3.12587

Obviously I don't mean this to address the original question, though, since it's not from an FDT/UDT perspective.

Comment by ikaxas on (answered: yes) Has anyone written up a consideration of Downs's "Paradox of Voting" from the perspective of MIRI-ish decision theories (UDT, FDT, or even just EDT)? · 2020-07-07T16:15:35.838Z · score: 9 (3 votes) · LW · GW

I think there is a strong similarity between FDT (can't speak to UDT/TDT) and Kantian lines of thought in ethics. (To bring this out: the Kantian thought is roughly to consider yourself simply as an instance of a rational agent, and ask "can I will that all rational agents in these circumstances do what I'm considering doing?" FDT basically says "consider all agents that implement my algorithm or something sufficiently similar. What action should all those algorithm-instances output in these circumstances?" It's not identical, but it's pretty close.) Lots of people have Kantian intuitions, and to the extent that they do, I think they are implementing something quite similar to FDT. Lots of people probably vote because they think something like "well, if everyone didn't vote, that would be bad, so I'd better vote." (Insert hedging and caveats here about how there's a ton of debate over whether Kantianism is/should be consequentialist or not.) So they may be countable as at least partially FDT agents for purposes of FDT reasoning.

I think that memetically/genetically evolved heuristics are likely to differ systematically from CDT.

Here's a brief argument why they would (and why they might diverge specifically in the direction of FDT): the metric evolution optimizes for is inclusive genetic fitness, not merely fitness of the organism. Witness kin selection. The heuristics that evolution would install to exploit this would tend to be: act as if there are other organisms in the environment running a similar algorithm to you (i.e. those that share lots of genes with you), and cooperate with those. This is roughly FDT-reasoning, not CDT-reasoning.
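
To make the CDT/FDT contrast in the voting case concrete, here's a toy back-of-the-envelope sketch. All the numbers are invented, and treating "k correlated voters" as a simple multiplier on decisiveness is a crude stand-in for the actual FDT machinery, not an implementation of it:

```python
def expected_value_of_voting(p_single_vote_decisive, stake_to_me, cost_to_me,
                             correlated_voters=1):
    """Toy EV of voting vs. staying home.

    correlated_voters = number of voters whose decision is (treated as) determined
    by roughly the same algorithm as mine. CDT in effect fixes this at 1; an
    FDT-flavored calculation lets it be large.
    """
    # Crude approximation: a bloc of k votes flips the outcome roughly k times as
    # often as a single vote, capped at certainty.
    p_bloc_decisive = min(1.0, p_single_vote_decisive * correlated_voters)
    return p_bloc_decisive * stake_to_me - cost_to_me

# CDT-ish: my decision only moves my own vote -- EV is about -10, so stay home.
print(expected_value_of_voting(1e-7, 10_000, 10))
# FDT-ish: my decision "moves" a million sufficiently-similar voters -- EV is about +990.
print(expected_value_of_voting(1e-7, 10_000, 10, correlated_voters=10**6))
```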

Comment by ikaxas on Missing dog reasoning · 2020-06-27T01:19:39.891Z · score: 3 (2 votes) · LW · GW

whales, despite having millions of times the number of individual cells that mice have, don’t seem to get cancer much more often than mice.

Is this all mice, or just lab mice? I ask because of Bret Weinstein's thing about how lab mice have abnormally long telomeres, which causes them to get cancer a lot more frequently than normal mice (though in googling for the source I also found this counterargument). So is it that whales get cancer less often than we'd expect, or just that mice (or rather, the mice that we observe) get it a lot more frequently?

Comment by ikaxas on Explaining the Rationalist Movement to the Uninitiated · 2020-04-21T19:35:03.702Z · score: 4 (3 votes) · LW · GW

So I don't know a ton about Pragmatism, but from what I do know about it, I definitely see what you're getting at; there are a lot of similarities between Pragmatism and LW-rationality. One major difference, though: as far as I know, Pragmatism doesn't accept the correspondence theory of truth (see here, at the bullet "epistemology (truth)"), while LW-rationality usually does (though as is often the case, Yudkowsky seems to have been a bit inconsistent on this topic: here for example he seems to express a deflationist theory of truth). Although, as Liam Bright has pointed out (in a slightly different context), perhaps one's theory of truth is not as important as some make it out to be.

At any rate, I had already wanted to learn more about Pragmatism, but hadn't really made the connection with rationality, so this makes me want to learn about it more. So thanks!

Comment by ikaxas on Premature death paradox · 2020-04-21T19:03:20.512Z · score: 1 (1 votes) · LW · GW

So I agree that this paradox is quite interesting as a statistical puzzle. But I'm not sure it shows much about the ethical question of whether and when death is bad. I think the relevant notion of "premature death" might not be a descriptive notion, but might itself have a normative component. Like, "premature death" doesn't mean "unusually early death" (which is a fully descriptive notion) but something else. For example, if you assume Thomas Nagel's "deprivation account" of the badness of death, then "premature death" might be cashed out roughly as: dying while there's still valuable life ahead of you to live, such that you're deprived of something valuable by dying. In other words, you might say that death is not bad when one has lived a "full life," and is bad when one dies before living a full life. (Note that this doesn't beg the question against the transhumanist "death is always bad" sort of view, for one might insist that a life is never "full" in the relevant sense, and that there's always more valuable life ahead of you.) Trying to generalize this looks objectionably circular: death is bad when it's premature, and it's premature when it's bad. But at any rate it seems to me like the notion of premature death is trying to get at more than just the descriptive notions of dying before one is statistically predicted to die, or dying before a Laplacean demon who had a perfect physical model of the world would predict one to die.

Anyway, low confidence in this, and again, I agree the statistical puzzle is interesting in its own right.

Comment by ikaxas on Premature death paradox · 2020-04-19T11:17:37.046Z · score: 1 (1 votes) · LW · GW

It was the very first thing they discussed. Start from the beginning of the stream and you'll get most of it.

Comment by ikaxas on Premature death paradox · 2020-04-16T22:05:06.198Z · score: 1 (1 votes) · LW · GW

This is getting discussed right now on livestream by a couple philosophers including Agnes Callard (recording will be available after): https://www.crowdcast.io/e/k7viaqzp

Comment by ikaxas on Premature death paradox · 2020-04-14T14:03:27.961Z · score: 1 (1 votes) · LW · GW

Maybe this is obvious, but:

Say life expectancy at age X is Y further years, and life expectancy at age (X + Y) is Z further years. I think at least part of the reason why Z isn't 0 is that if you're following someone along their life, the fact that they lived Y further years is more evidence to update on. If "X" stands for the event "lived to at least X years", then P(X+Y+Z|X+Y) > P(X+Y+Z|X), because living to X+Y years indicates certain things about, say, your health, genetics, etc. that aren't indicated by just living to X years.
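
To make the arithmetic concrete, here's a minimal sketch with a made-up survival curve (the numbers are invented, not actuarial data). Even with a completely homogeneous population, where there's no extra health/genetics information to update on, conditioning on "survived to X+Y" already keeps the remaining expectancy above zero, since the expectation is only taken over the people who haven't died yet; the evidential point above would push Z up further still.

```python
def remaining_life_expectancy(survival, age):
    """Expected further years of life, given survival to `age`.

    survival[a] = P(live to at least age a); assumed non-increasing and ending at 0.
    (Crude discretization: dying between a and a+1 counts as a - age further years.)
    """
    p_alive = survival[age]
    return sum(
        (a - age) * (survival[a] - survival[a + 1]) / p_alive
        for a in range(age, len(survival) - 1)
    )

# Toy survival curve for ages 0..100 (invented shape, not real data).
survival = [1.0 - (a / 100.0) ** 2 for a in range(101)]

y = remaining_life_expectancy(survival, 60)             # ~21 further years at age 60
z = remaining_life_expectancy(survival, 60 + round(y))  # ~9 further years at age 81
print(y, z)  # z < y, but nowhere near zero
```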

Comment by ikaxas on Has the effectiveness of fever screening declined? · 2020-03-27T22:48:35.105Z · score: 1 (1 votes) · LW · GW

Thanks

Comment by ikaxas on Has the effectiveness of fever screening declined? · 2020-03-27T22:09:08.460Z · score: 3 (2 votes) · LW · GW

Also, mods, how do I tag this as "coronavirus"? Or is that something the mods just do?

Comment by ikaxas on Ikaxas' Shortform Feed · 2020-02-16T02:21:53.641Z · score: 5 (3 votes) · LW · GW

Global coordination problems

I've said before that I tentatively think that "foster global coordination" might be a good cause area in its own right, because it benefits so many other cause areas. I think it might be useful to have a term for the cause areas that global coordination would help. More specifically, a term for the concept "(reasonably significant) problem that requires global coordination to solve, or that global coordination would significantly help with solving." I propose "global coordination problem" (though I'm open to other suggestions). You may object "but coordination problem already has a meaning in game theory, this is likely to get confused with that." But global coordination problems are coordination problems in precisely the game theory sense (I think, feel free to correct me), so the terminological overlap is a benefit.

What are some examples of global coordination problems? Certain x-risks and global catastrophic risks (such as AI, bioterrorism, pandemic risk, asteroid risk), climate change, some of the problems mentioned in The Possibility of an Ongoing Moral Catastrophe, as well as the general problem of ferreting out and fixing moral catastrophes, and almost certainly others.

In fact, it may be useful to think about a spectrum of problems, similar to Bostrom's Global Catastrophic Risk spectrum, organized by how much coordination is required to solve them. Analogous to Bostrom's spectrum, we could have: personal coordination problems (i.e. problems requiring no coordination with others, or perhaps only coordination with parts of oneself), local coordination problems, national coordination problems, global coordination problems, and transgenerational coordination problems.

Comment by ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-16T01:39:49.612Z · score: 1 (1 votes) · LW · GW

Forgot one other thing I intend to work on: I've seen several people (perhaps even you?) say that the case for AI risk needs to be made more carefully than it has been; that's another project I may potentially work on.

Comment by ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-16T01:37:42.462Z · score: 4 (2 votes) · LW · GW

Another way to look at it though, is that the AI companies have co-opted some of the people concerned with AI risk (those on the more optimistic end of the spectrum) and cowed the rest...

Huh, that's an interesting point.

I'm not sure where I stand on the question of "should we be pulling the brakes now," but I definitely think it would be good if we had the ability to pull the brakes should it become necessary. It hadn't really occurred to me that those who think we should be pulling the brakes now would feel quasi-political pressure not to speak out. I assumed the reason there's not much talk of that option is that it's so clearly unrealistic at this point; but I'm all in favor of building the capacity to do so (modulo Caplan-style worries about this accidentally going too far and leading to totalitarianism), and it never really occurred to me that this would be a controversial opinion.

It looks like your background is in philosophy

Yep!

check out Problems in AI Alignment that philosophers could potentially contribute to, in case you haven't come across it already.

I had come across it before, but it was a while ago, so I took another look. I was already planning on working on some stuff in the vicinity of the "Normativity for AI / AI designers" and "Metaethical policing" bullets (namely the problem raised in these posts by gworley), but looking at it again, the other stuff under those bullets, as well as the metaphilosophy bullet, sounds quite interesting. I'm also planning on doing some work on moral uncertainty (which, in addition to its relevance to global priorities research, also has some relevance for AI; based on my cursory understanding, CIRL seems to incorporate the idea of moral uncertainty to some extent), and perhaps other GPI-style topics. AI strategy/governance, including the topics in the OP, is also interesting, and I'm actually inclined to think it may be more important than technical AI safety (though not far more important). But three disparate areas, each calling for expertise outside philosophy (AI: compsci; GPR: econ, etc.; strategy: international relations), feels like a bit much, and I'm not certain which I should ultimately settle on (though I have a bit of time; I'm at the beginning of my PhD atm). I guess the relevant factors are mostly the standard ones: which do I find most motivating/fun to work on, which can I skill up in fastest/easiest, which is most important/tractable/neglected? And which ones lead to a reasonable back-up plan/off-ramp in case high-risk jobs like academia/EA orgs don't work out?

Comment by ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-14T13:55:27.804Z · score: 6 (3 votes) · LW · GW

I had not thought about this again since writing the post, until your comment. (Yeah, that seems worrying if mine really is the only post. Though in my [limited] experience with the LW/EA forum search it's not easy to tell how reliable/comprehensive it is, so there may be related posts that aren't easy to find.)

I actually had a somewhat different model in mind for how polarization happens: something like "the parties tend to take opposite stances on issues. So if one party takes up an issue, this causes the other party to take up the opposite stance on that issue. So if one party starts to talk more about AI safety than the other, this would cause the other party to take the anti-AI-safety stance, therefore polarization." (Not saying it's a good model, but it's the model I had in the back of my mind.)

Your model of climate polarization seems mostly right to me. I was wondering, though, why it would lead to polarization in particular, rather than, say, everybody just not caring about climate change, or being climate skeptics. I guess the idea is something like: Some climate activists/scientists/etc got concerned about climate change, started spreading the word. Oil corps got concerned this would affect them negatively, started spreading a countermessage. It makes sense that this would lead to a split, where some people care about climate change and some people anti-care about it. But why would the split be along party lines (or even: along ideological lines)? Couple things to say here. First, maybe my model kicks in here: the parties tend to take opposite stances on issues. Maybe the dems picked up the climate-activist side, so the republicans picked up the big-oil side. But was it random which side picked up which? I guess not: the climate-activist case is quite caring-focused, which on Jon Haidt's model makes it a left issue, while the big-oil case is big-business, which is a republican-flavored issue. (Though the climate-activist case also seemingly has, or at least used to have, a pretty sizeable purity component, which is puzzling on Haidt's model.)

Applying some of this to the AI case: the activist stuff has already happened. However, the AI corporations (the equiv of big-oil in our climate story) haven't reacted in the same way big-oil did. At least public-facingly, they've actually recognized and embraced the concerns to a sizeable degree (see Google DeepMind, OpenAI, to some degree Facebook).

Though perhaps you don't think the current AI corps are the equivalent of big-oil; there will be some future AI companies that react more like big oil did.

Either way, this doesn't totally block polarization from happening: it could still happen via "one party happens to start discussing the issue before the other, the other party takes the opposite stance, voters take on the stances of their party, therefore polarization."

<politics>

Hadn't thought of this till seeing your comment, but this might be an argument against Andrew Yang (though he's just dropped out)---if he had gotten the dem nomination, he might have caused Trump to take up the contrarian stance on AI, causing Trump's base to become skeptical of AI risk, therefore polarization (or some other variant on the basic "the dems take up the issue first, so the republicans take the opposite stance" story). This may still happen, though with him out it seems less likely.

</politics>

I don't know if climate activists could have done anything differently in the climate case; don't know enough about the history of climate activism and how specifically it got as polarized as it is (though as I said, your model seems good at least from the armchair). This may be something worth looking into as a historical case study (though time is of the essence I suppose, since now is probably the time to be doing things to prevent AI polarization).

Thanks for prompting me to think about this again! No promises (pretty busy with school right now) but I may go back and write up the conversation with my friend that I mentioned in the OP, I probably still have the notes from it. And if it really is as neglected as you think, I may take up thinking about it again a bit more seriously.

Comment by ikaxas on Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration · 2020-02-07T20:00:19.895Z · score: 1 (1 votes) · LW · GW

how do we take those notions and turn them into something mathematically precise enough that we could instruct a machine to do them and then evaluate whether or not what it did was in fact what we intended

Yep, that's the project! I think the main utility of Callard's work here is (1) pointing out the phenomenon (a phenomenon that is strikingly similar to some of the abilities we want AIs to have), and (2) noticing that the most prominent theories in decision theory, moral psychology, and moral responsibility make assumptions that we have to break if we want to allow room for aspiration (assumptions that we who are trying to build safe AI are probably also accidentally making insofar as we take over those standard theories). IDK whether she provides alternate assumptions to make instead, but if she does, these might also be useful. But the main point is just noticing that we need different theories of these things.

Once we've noticed the phenomenon of aspiration, and that it requires breaking some of these assumptions, I agree that the hard bit is coming up with a mathematical theory of aspiration (or the AI equivalent).

Comment by ikaxas on 2018 Review: Voting Results! · 2020-01-26T19:31:04.629Z · score: 14 (8 votes) · LW · GW

I hope they are buying 50+ books each otherwise I don’t see how the book part is remotely worth it.

As a data point, I did not vote, but if there is a book, I will almost certainly be buying a copy of it if it is reasonably priced, i.e. a price similar to the first two volumes of R:A-Z ($6-8).

Comment by ikaxas on On Being Robust · 2020-01-14T15:37:53.733Z · score: 3 (2 votes) · LW · GW

This seems like another angle on "Play in Hard Mode". Is that about right?

Comment by ikaxas on Morality vs related concepts · 2020-01-09T04:24:53.508Z · score: 7 (5 votes) · LW · GW

I am an ethics grad student, and I will say that this largely accords with my understanding of these terms (though tbh the terminology in this field is so convoluted that I expect that I still have some misunderstandings and gaps).

Re epistemic rationality, I think at least some people will want to say that it's not just instrumental rationality with the goal of truth (though I am largely inclined to that view). I don't have a good sense of what those other people do say, but I get the feeling that the "epistemic rationality is instrumental rationality with the goal of truth" view is not the only game in town.

Re decision theory, I would characterize it as closely related to instrumental rationality. How I would think about it is like this: CDT or EDT are to instrumental rationality as utilitarianism or Kantianism are to morality. CDT is one theory of instrumental rationality, just as utilitarianism is one theory of morality. But this is my own idiosyncratic understanding, not derived from the philosophical literature, so the mainstream might understand it differently.

Re metaethics: thank you for getting this one correct. Round these parts it's often misused to refer to highly general theories of first order normative ethics (e.g. utilitarianism), or something in that vicinity. The confusion is understandable, especially given that utilitarianism (and probably other similarly general moral views) can be interpreted as a view about the metaphysics of reasons, which would be a metaethical view. But it's important to get this right. Here's a less example-driven explanation due to Tristram McPherson:

"Metaethics is that theoretical activity which aims to explain how actual ethical thought and talk—and what (if anything) that thought and talk is distinctively about—fits into reality" (McPherson and Plunkett, "The Nature and Explanatory Ambitions of Metaethics," in The Routledge Handbook of Metaethics, p. 3).

Anyway, thank you for writing this post, I expect it will clear up a lot of confusions and be useful as a reference.

Comment by ikaxas on What are you reading? · 2019-12-26T19:49:15.283Z · score: 2 (2 votes) · LW · GW

Goodreads reveals many books with the title "borrowed time." Who's the author?

Comment by ikaxas on What are you reading? · 2019-12-26T17:08:45.067Z · score: 3 (3 votes) · LW · GW

I'm an ethics PhD student, so unsurprisingly lots of that. Currently reading Consequentialism and its Critics (ed. Samuel Scheffler), Nagel's The Possibility of Altruism, Sidgwick's The Methods of Ethics, and Enoch's Taking Morality Seriously, with various levels of commitment. Also reading Joseph Romm's Climate Change: What Everyone Needs to Know, and just started The Almost Nearly Perfect People: Behind the Myth of the Scandinavian Utopia.

In fiction, I just finished Neal Shusterman's Arc of a Scythe trilogy, which I highly recommend for rationalists. Deals with a lot of issues we tend to think about, mostly immortality and friendly-AI-run utopia.

Comment by ikaxas on Tabletop Role Playing Game or interactive stories for my daughter · 2019-12-14T15:38:41.549Z · score: 3 (3 votes) · LW · GW

Haven't played it myself, but this seems related: https://slatestarcodex.com/2013/02/22/dungeons-and-discourse-third-edition-the-dialectic-continues/

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-22T00:55:35.438Z · score: 1 (1 votes) · LW · GW

Thanks!

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-21T03:39:25.786Z · score: 2 (2 votes) · LW · GW

Yep, I've seen that post before. I've tried to use Anki a couple times, but I always get frustrated trying to decide how to make things into cards. I haven't totally given up on the idea, though, I may try it again at some point, maybe even for this. Thanks for your comment.

Also, NB, your link is not formatted properly -- you have the page URL, but then also "by Michael Nielsen is interesting" as part of the link, so it doesn't go where you want it to.

Comment by ikaxas on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-21T03:35:04.300Z · score: 5 (3 votes) · LW · GW

Thanks, this is helpful! Mathematical maturity is a likely candidate -- I've done a few college math courses (Calc III, Linear Alg, Alg I), so I've done some proofs, but probably nowhere near enough, and it's been a few years. Aside from Linear Alg, all I know about the other three areas is what one picks up simply by hanging around LW for a while. Any personal recommendations for beginner textbooks in these areas? Nbd if not, I do know about the standard places to look (Luke Muehlhauser's textbook thread, MIRI research guide, etc), so I can just go look there.

Comment by ikaxas on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-07-02T11:37:07.515Z · score: 3 (2 votes) · LW · GW

[Off topic] Data point: the repeated "(respectively I/you)" at the beginning of the post made that paragraph several times harder to read for me than it otherwise would have been.

Comment by ikaxas on The 3 Books Technique for Learning a New Skilll · 2019-06-03T22:11:14.293Z · score: 1 (1 votes) · LW · GW

Do you generally read the "What" book all the way through, or only use it as a reference when you get stuck? Could a Q&A forum, e.g. StackExchange, serve as the "What" book, do you think?

Comment by ikaxas on Say Wrong Things · 2019-05-26T13:24:00.198Z · score: 6 (4 votes) · LW · GW

The Babble and Prune Sequence seems relevant here

Comment by ikaxas on Tales From the American Medical System · 2019-05-10T22:39:13.298Z · score: 9 (2 votes) · LW · GW

"no refill until appointment is on the books"

But Zvi's friend had an appointment on the books? It was just that it was a couple weeks away.

Otherwise, thanks very much for commenting on this, good to get a doctor's perspective.

Comment by ikaxas on Ideas ahead of their time · 2019-04-04T03:19:08.301Z · score: 5 (3 votes) · LW · GW

As one suggestion, how about something along the lines of "Ideas ahead of their time"?

Comment by ikaxas on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T05:21:29.695Z · score: 7 (4 votes) · LW · GW

Data point: even with the name of the account it took me an embarrassingly long time to figure out that this was actually written by GPT2 (at least, I'm assuming it is). Related: https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/

Comment by ikaxas on Applied Rationality podcast - feedback? · 2019-02-05T16:38:19.220Z · score: 4 (3 votes) · LW · GW

How about something like: "Tsuyoku Naritai - The Becoming Stronger Podcast"?

Comment by ikaxas on What is abstraction? · 2018-12-18T00:56:14.706Z · score: 4 (2 votes) · LW · GW

One such essay, about a concept that is either identical to equivocation or somewhere in the vicinity (I've never quite been able to figure out which, but I think it's supposed to be subtly different), is Scott's post about Motte and Bailey, which includes lots of examples.

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-10T13:27:15.058Z · score: 1 (1 votes) · LW · GW

Good question, I hadn't thought about that. Here's the relevant passage from the book:

In the lab, however, [octopuses] are often quick to get the hang of how life works in their new circumstances. For example, it has long appeared that captive octopuses can recognize and behave differently toward individual human keepers. Stories of this kind have been coming out of different labs for years. Initially it all seemed anecdotal. In the same lab in New Zealand that had the "lights-out" problem [an octopus had consistently been squirting jets of water at the light fixtures to short-circuit them], an octopus took a dislike to one member of the lab staff, for no obvious reason, and whenever that person passed by on the walkway behind the tank she received a jet of half a gallon of water in the back of her neck. Shelly Adamo, of Dalhousie University, had one cuttlefish who reliably squirted streams of water at all new visitors to the lab, and not at people who were often around. In 2010, an experiment confirmed that giant Pacific octopuses can indeed recognize individual humans, and can do this even when the humans are wearing identical uniforms. (56)

On the one hand, if "stories of this kind have been coming out of different labs for years," this suggests these may not exactly be isolated incidents (though of course it kind of depends on how many stories). On the other hand, the book only gives two concrete examples. I went back and checked the 2010 study as well. It looks like they studied 8 octopuses, 4 larger and 4 smaller (with one human always feeding and one human always being irritating towards each octopus), so that's not exactly a whole lot of data; the most suggestive result, I'd say, is that on the last day, 7 of the 8 octopuses didn't aim their funnels/water jets at their feeder, while 6/8 did aim them at their irritator. On the other hand, a different metric, respiration rate, was statistically significant in the 4 large octopuses but not the 4 smaller ones.

Also found a couple of other studies that may be relevant to varying degrees by looking up ones that cited the 2010 study, but haven't had a chance to read them:

  • https://link.springer.com/chapter/10.1007/978-94-007-7414-8_19 (talks about octopuses recognizing other octopuses)
  • https://journals.sagepub.com/doi/abs/10.1177/0539018418785485
  • https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0018710 (octopuses recognizing other octopuses)

tl;dr: I'm not really sure. Most of the evidence seems to be anecdotal, but the one study does suggest that most of them probably can to some degree, if you expect those 8 octopuses to be representative.

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T17:08:05.349Z · score: 6 (4 votes) · LW · GW

Because, unlike the robot, the cognitive architectures producing the observed behavior (alleviating a pain) are likely to be similar to those producing the similar behavior in us (since evolution is likely to have reused the same cognitive architecture in us and in the fish), and we know that whatever cognitive architecture produces that behavior in us produces a pain quale. The worry was supposed to be that perhaps the underlying cognitive architecture is more like a reflex than like a conscious experience, but the way the experiment was set up precluded that, since it's highly unlikely that a fish would have a reflex built in for this specific situation (unlike, say, the situation of pulling away from a hot object or a sharp object, which could be an unconscious reflex in other animals).

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T16:57:17.726Z · score: 1 (1 votes) · LW · GW

The answer given in the book is that, as it turns out, they have color receptors in their skin. The book notes that this is only a partial answer, because they still have only one color receptor in their skin, which doesn't allow for color vision, so this doesn't fully solve the puzzle. But Godfrey-Smith speculates that perhaps the combination of one color receptor with color-changing cells in front of that receptor allows them to gain some information about the color of things around them (121-123).

Comment by ikaxas on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-08T16:46:45.659Z · score: 4 (3 votes) · LW · GW

Thanks! This was quite interesting to try. Just to make it more explicit, your point is supposed to be that here's a form of visual processing going on that doesn't "feel like anything" to us, right?

Comment by ikaxas on Tentatively considering emotional stories (IFS and “getting into Self”) · 2018-12-01T15:25:23.596Z · score: 11 (3 votes) · LW · GW

Said, I'm curious: have you ever procrastinated? If so, what is your internal experience like when you are procrastinating?

Comment by ikaxas on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-10-15T03:08:02.996Z · score: 3 (2 votes) · LW · GW

Ah, thanks. Just transcribed the first 5 minutes, it took me like 20-30 minutes to do. I definitely won't have time to transcribe the whole thing. Might be able to do 30 mins, i.e. ~2 hours of transcription time, over the next few days. Let me know if you still need help and which section you'd want me to transcribe. Definitely looking forward to watching the whole thing, this looks pretty interesting.

Comment by Ikaxas on [deleted post] 2018-10-01T19:36:57.763Z

I think the word you're looking for instead of "ban" is "taboo".

Comment by ikaxas on Zetetic explanation · 2018-09-15T02:54:15.845Z · score: 3 (2 votes) · LW · GW

After quite a while thinking about it I'm still not sure I have an adequate response to this comment; I do take your points, they're quite good. I'll do my best to respond to this in the post I'm writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn't adequately address your points.

Comment by ikaxas on Zetetic explanation · 2018-09-15T02:47:59.720Z · score: 4 (3 votes) · LW · GW

Thanks for this. Sorry it's taken me so long to reply here, didn't mean to let this conversation hang for so long. I completely agree with about 99% of what you wrote here. The 1% I'll hopefully address in the post I'm working on on this topic.

Comment by ikaxas on Zetetic explanation · 2018-09-08T21:34:45.648Z · score: 1 (1 votes) · LW · GW

Ah, thanks!

Comment by ikaxas on Zetetic explanation · 2018-09-08T21:23:21.801Z · score: 1 (1 votes) · LW · GW

EDIT: oops, replied to the wrong comment.

Comment by ikaxas on Zetetic explanation · 2018-09-08T19:29:56.631Z · score: 4 (4 votes) · LW · GW

By the way, I'm curious why you say that the principle of charity "was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere." What do you think was the original, good form of the idea, what is the difference between that and the version the rationalist memesphere has adopted, and what is so bad about the rationalist version?