Posts

Has the effectiveness of fever screening declined? 2020-03-27T22:07:16.932Z
Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration 2020-02-06T01:09:05.384Z
What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? 2019-09-18T17:17:05.602Z
Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness 2018-12-03T08:00:00.000Z
Trying for Five Minutes on AI Strategy 2018-10-17T16:18:31.597Z
A Process for Dealing with Motivated Reasoning 2018-09-03T03:34:11.650Z
Ikaxas' Hammertime Final Exam 2018-05-01T03:30:11.668Z
Ikaxas' Shortform Feed 2018-01-08T06:19:40.370Z

Comments

Comment by Ikaxas on How refined is your art of note-taking? · 2021-05-20T15:44:41.701Z · LW · GW

My main concern with using an app like Evergreen Notes is that a hobby project built by one person seems like a fragile place to leave a part of my brain.

In that case you might like obsidian.md.

Comment by Ikaxas on Your Dog is Even Smarter Than You Think · 2021-05-01T23:38:22.876Z · LW · GW

I found this one particularly impressive: https://m.youtube.com/watch?v=AHiu-EDJUx0

The use of "oops" at the end is spot on.

Comment by Ikaxas on Defining "optimizer" · 2021-04-19T14:20:45.489Z · LW · GW

Hmm. I think this is closer to "general optimizer" than to "optimizer": notice that certain chess-playing algorithms (namely, those that have been "hard-coded" with lots of chess-specific heuristics and maybe an opening manual) wouldn't meet this definition, since it's not easy to change them to play e.g. checkers or backgammon or Go. Was this intentional (do you think that this style of chess program doesn't count as an optimizer)? I think your definition is getting at something interesting, but I think it's more specific than "optimizer".

Comment by Ikaxas on Are there good classes (or just articles) on blog writing? · 2021-04-19T07:52:22.067Z · LW · GW

Here are Scott Alexander's tips: https://slatestarcodex.com/2016/02/20/writing-advice/

Comment by Ikaxas on The Point of Easy Progress · 2021-03-28T20:00:24.314Z · LW · GW

I really liked this. I thought the little graphics were a nice touch. And the idea is one of those ones that seems almost obvious in retrospect, but wasn't obvious at all before reading the post. Looking back I can see hints of it in thoughts I've had before, but that's not the same as having had the idea. And the handle ("point of easy progress") is memorable, and probably makes the concept more actionable (it's much easier to plan a project if you can have thoughts like "can I structure this in such a way that there is a point of easy progress, and that I will hit it within a short enough amount of time that it's motivating?").

Comment by Ikaxas on AI x-risk reduction: why I chose academia over industry · 2021-03-17T16:21:00.635Z · LW · GW

I've started using the phrase "existential catastrophe" in my thinking about this; "x-catastrophe" doesn't really have much of a ring to it though, so maybe we need something else that abbreviates better?

Comment by Ikaxas on [Lecture Club] Awakening from the Meaning Crisis · 2021-03-08T23:00:09.009Z · LW · GW

So one thing I'm worried about is having a hard time navigating once we're a few episodes in. Perhaps you could link in the main post to the comment for each episode?

Comment by Ikaxas on adamShimi's Shortform · 2021-02-28T18:31:50.110Z · LW · GW

Could this be solved just by posting your work and then immediately sharing the link with people you specifically want feedback from? That way there's no expectation that they would have already seen it. (Granted, this is slightly different from a gdoc in that you can share a gdoc with one person, get their feedback, then share with another person, while what I suggested requires asking everyone you want feedback from all at once.)

Comment by Ikaxas on Yes, words can cause harm · 2021-02-25T03:45:52.457Z · LW · GW

I disagree, I think Kithpendragon did successfully refute the argument without providing examples. Their argument is quite simple, as I understand it: words can cause thoughts, thoughts can cause urges to perform actions which are harmful to oneself, such urges can cause actions which are harmful to oneself. There's no claim that any of these things is particularly likely, just that they're possible, and if they're all possible, then it's possible for words to cause harm (again, perhaps not at all likely, for all Kithpendragon has said, but possible). It borders on a technicality, and elsethread I disputed its practical importance, but for all that it is successful at what it's trying to do.

I agree that the idea that concrete examples are a "likely hazard" seems a bit excessive, but I can see the reasoning here even if I don't agree with it: if you think that words have the potential to cause substantial harm, then it makes sense to think that if you put out a long list of words/statements chosen for their potential to be harmful, the likelihood that at least one person will be substantially harmed by at least one entry on the list seems, if not high, then still high enough to warrant caution. Viliam has managed to get around this, because the reasoning only applies if you're directly mentioning the harmful words/statements, whereas Viliam has described some examples indirectly.

Comment by Ikaxas on Yes, words can cause harm · 2021-02-25T00:57:03.230Z · LW · GW

A sneeze can determine much more than hurricane/no hurricane. It can determine the identities of everyone who exists, say, a few hundred years into the future and onwards.

If you're not already familiar, this argument gets made all the time in debates about "consequentialist cluelessness". This gets discussed, among other places, in this interview with Hilary Greaves: https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/. It's also related to the paralysis argument I mentioned in my other comment.

Comment by Ikaxas on Yes, words can cause harm · 2021-02-24T16:35:09.717Z · LW · GW

Upvoted for giving "defused examples" so to speak (examples that are described rather than directly used). I think this is a good strategy for avoiding the infohazard.

Comment by Ikaxas on Yes, words can cause harm · 2021-02-24T16:02:03.162Z · LW · GW

I was thinking a bit more about why Christian might have posted his comment, and why the post (cards on the table) got my hackles up the way it did, and I think it might have to do with the lengths you went to in order to avoid using any examples. Even though you aren't trying to argue for the thesis that we should be more careful, the way the post was written suggests you believe that we should be much more careful about this sort of thing than we usually are. (Perhaps you don't think this; perhaps you think that the level of caution you went to in this post is normal, given that giving examples would be basically optimizing for producing a list of "words that cause harm." But I think it's easy to interpret this strategy as implicitly claiming that people should be much more careful than they are, and to miss the fact that you aren't explicitly trying to give a full defense of that thesis in this post.)

Comment by Ikaxas on Yes, words can cause harm · 2021-02-24T15:57:20.971Z · LW · GW

Sorry for the long edit to my comment, I was editing while you posted your comment. Anyway, if your goal wasn't to go all the way to "people need to be more careful with their words" in this post, then fair enough.

Comment by Ikaxas on Yes, words can cause harm · 2021-02-24T15:45:20.350Z · LW · GW

I originally had a longer comment, but I'm afraid of getting embroiled in this, so here's a short-ish comment instead. Also, I recognize that there's more interpretive labor I could do here, but I figure it's better to say something non-optimal than to say nothing.

I'm guessing you don't mean "harm should be avoided whenever possible" literally. Here's why: if we take it literally, then it seems to imply that you should never say anything, since anything you say has some possibility of leading to a causal chain that produces harm. And I'm guessing you don't want to say that. (Related is the discussion of the "paralysis argument" in this interview: https://80000hours.org/podcast/episodes/will-macaskill-paralysis-and-hinge-of-history/#the-paralysis-argument-01542)

I think this is part of what's behind Christian's comment. If we don't want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree. So then the argument becomes about how much risk we should take. And if we're already at roughly the optimal level of risk, then it's not right to say that interlocutors should be more careful (to be clear, I am not claiming that we are at the optimal level of risk). So arguing that there's always some risk isn't enough to argue that interlocutors should be more careful -- you also have to argue that the current norms don't already prescribe the optimal level of risk, i.e. that they permit us to take more risk than we should. There is no way to avoid the tradeoff here; the question is where the tradeoff should be made.

[EDIT: So while Stuart Anderson does indeed simply repeat the argument you (successfully) refute in the post, Christian, if I'm reading him right, is making a different argument, and saying that your original argument doesn't get us all the way from "words can cause harm" to "interlocutors should be more careful with their words."

You want to argue that interlocutors should be more careful with their words [EDIT: kithpendragon clarifies below that they aren't aiming to do that, at least in this post]. You see some people (e.g. Stuart Anderson, and the people you allude to at the beginning), making the following sort of argument:

  1. Words can't cause harm
  2. Therefore, people don't need to be careful with their words.

You successfully refute (1) in the post. But this doesn't get us to "people do need to be careful with their words" since the following sort of argument is also available:

A. Words don't have a high enough probability of causing enough harm to enough people that people need to be any more careful with them than they're already being.

B. Therefore, people don't need to be careful with their words (at least, not any more than they already are). [EDIT: list formatting]]

Comment by Ikaxas on Open & Welcome Thread – February 2021 · 2021-02-22T14:08:08.959Z · LW · GW

The sentence, "The present king of France is bald."

Comment by Ikaxas on Google’s Ethical AI team and AI Safety · 2021-02-21T00:28:00.702Z · LW · GW

I can't figure out why this is being downvoted. I found the model of how AI safety work is likely to actually ensure (or not) the development of safe AI to be helpful, and I thought this was a pretty good case that this firing is a worrying sign, even if it's not directly related to safety in particular.

Comment by Ikaxas on “PR” is corrosive; “reputation” is not. · 2021-02-16T00:52:52.034Z · LW · GW

thoughts [don't] end up growing better than they would otherwise by being nurtured and midwifed? Thoughts grow better by being intelligently attacked.

I think both are true, depending on the stage of development the thought is at. If the thought is not very fleshed out yet, it grows better by being nurtured and midwifed (see e.g. here). If the thought is relatively mature, it grows best by being intelligently attacked. I predict Duncan will agree.

Comment by Ikaxas on The art of caring what people think · 2021-02-12T16:34:23.931Z · LW · GW

Oh! That makes much more sense as a thing to be confused about haha. I was actually a bit hesitant to post my comment because it seemed like you wouldn't be prone to the basic confusion I was attributing to you; in retrospect, perhaps if I had listened to that I could have discovered the way in which you were actually confused, and addressed that instead.

Comment by Ikaxas on The art of caring what people think · 2021-02-12T13:55:32.870Z · LW · GW

No, the scenario is someone who isn't fully convinced by the AI risk arguments, and thinks climate change might be worse, but mostly hangs out with AI risk types and so doesn't feel comfortable having that opinion. Then they find a group of people who are more worried about climate change, and start to feel more comfortable thinking about both sides of the topic.

Comment by Ikaxas on Has there been any work done on the ethics of changing people's minds? · 2021-02-03T03:54:27.154Z · LW · GW

There's this: https://plato.stanford.edu/entries/ethics-manipulation/ (Haven't read it myself, just guessing it might be relevant)

Comment by Ikaxas on Dumb Dichotomies in Ethics, Part 1: Intentions vs Consequences · 2021-01-30T02:26:59.718Z · LW · GW

might be doing more harm than good

This could well be true. It's highly possible that we ought to be teaching this distinction, and teaching the expected-value version when we teach utilitarianism (and maybe some philosophy professors do, I don't know).

Also, here's a bit in the SEP on actual vs expected consequentialism: https://plato.stanford.edu/entries/consequentialism/#WhiConActVsExpCon

Comment by Ikaxas on Dumb Dichotomies in Ethics, Part 1: Intentions vs Consequences · 2021-01-29T19:49:56.595Z · LW · GW

I can confirm that philosophers are familiar with the concept of expected value. One famous paper that deals with it is Frank Jackson's "Decision-theoretic consequentialism and the nearest-and-dearest objection." (Haven't read it myself, though I skimmed it to make sure it says what I thought it said.) ETA: nowadays, as far as I can tell, the standard statement of consequentialism is in terms of maximizing expected value.
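
For concreteness (and not as anything specific to Jackson's paper), the expected-value version I have in mind is roughly the standard one:

$$a^* = \arg\max_{a \in A} \sum_{o} P(o \mid a)\, V(o)$$

That is, acts are ranked by the probability-weighted value of their possible outcomes, not by the value of whatever outcome actually occurs.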

There's also a huge literature (which I'm not very familiar with) on the question of how to reconcile deontology and consequentialism. Search "consequentializing" if you're interested -- the big question is whether all deontological theories can be restated in purely consequentialist terms.

I think a far more plausible hypothesis is that we vastly oversimplify this stuff for our undergrads (in part because we hew to the history on these things, and some of these confusions were present in the historical statements of these views). The slides you cite are presumably from intro courses. And many medical students likely never get beyond a single medical ethics course, which probably performs all these oversimplifications in spades. (I am currently TAing for an engineering ethics course. We are covering utilitarianism this week, and we haven't talked about expected value -- though I don't know off the top of my head if we cover it later.)

Source: PhD student in ethics.

Comment by Ikaxas on [Book Review] The Trouble with Physics · 2021-01-18T20:50:25.133Z · LW · GW

You might check out Eric Weinstein's podcast, "The Portal" and the associated discord server. He has a theory about this, "geometric unity". And while I haven't been active on the discord server, I think they're working on this as well.

If you want to be invited to the discord server, PM me and I'll send you a link.

Comment by Ikaxas on Why Productivity Systems Don't Stick · 2021-01-17T11:11:03.986Z · LW · GW

I liked this post a lot.

My experience with ideas related to this (e.g. Replacing Guilt, IFS) has been that I tend not to be able to muster compassion and understanding for whatever part of myself is putting up resistance. Rather, I just get frustrated with it for being so obviously wrong and irrational. I just can't see how those parts of myself could possibly be correct (e.g. how could it possibly be correct not to do my paper? That would get me kicked out of my PhD program). And when I remind myself of these sorts of techniques and that I'm supposed to be trying to have compassion and understanding for those parts of myself and leave room for the possibility that they might have a point, this just creates another iteration of the same problem: I get frustrated with myself for not being able to understand myself.

Reading this post helped me see, at least a little bit, how some of those motivations of mine might be based on things that are valuable.

(Actually, writing this comment was helpful as well: writing out the above made me realize that I don't have to acknowledge that there's even a chance that those parts of myself might be totally correct. Rather, I have to acknowledge that there's a chance they might be at least partially correct. E.g., if part of me doesn't want to do my paper, I don't have to acknowledge that there's a possibility that the best thing to do is to not do it ever. Rather, perhaps there's some reason not to do it now - perhaps I just need a rest, or perhaps there's some way that doing it now would harm me and I need to fix that first.)

Comment by Ikaxas on Why Productivity Systems Don't Stick · 2021-01-17T10:51:48.610Z · LW · GW

I for one would like to read that

Comment by Ikaxas on What currents of thought on LessWrong do you want to see distilled? · 2021-01-09T03:08:27.096Z · LW · GW

One thing I would like distilled is Eliezer's metaethics sequence. (I might try to do this one myself at some point.)

Another is a discussion of how the literature on biases has held up to the replication crisis (I know the priming stuff has fallen, but how much of the rest does that taint? What else has replicated/failed to replicate?)

Comment by Ikaxas on What do you think should be included in a series about conceptual media? · 2021-01-01T15:16:35.168Z · LW · GW

No problem. I'll be interested to see what you come up with.

Comment by Ikaxas on What do you think should be included in a series about conceptual media? · 2020-12-31T14:10:03.756Z · LW · GW

Andy Matuschak has lots of relevant thoughts: https://notes.andymatuschak.org/About_these_notes

Comment by Ikaxas on Signaling importance · 2020-12-09T18:15:19.921Z · LW · GW

I'm not positive, but I think in principle you could use colors on LW, via CSS in the markdown editor. Could be wrong here though.

Comment by Ikaxas on What are Examples of Great Distillers? · 2020-11-13T02:44:50.610Z · LW · GW

Can't claim advanced knowledge for either of these, but based on benefitting from them myself: Jonathan Haidt for social and evolutionary psychology, and the 3Blue1Brown youtube channel for math.

Comment by Ikaxas on Why does History assume equal national intelligence? · 2020-10-31T14:22:31.166Z · LW · GW

Also, in various places you seem to be moving back and forth between explaining events in terms of how smart a decision was, vs how smart a person was. These are different (though of course related).

Comment by Ikaxas on Why does History assume equal national intelligence? · 2020-10-31T14:18:11.843Z · LW · GW

“raw computing power” or “adeptness at achieving ones’ desired outcomes”

If you go for the second one, then you're essentially suggesting that sometimes we should explain a person's success (failure) in terms of their innate tendency to succeed (fail). This sounds like a mysterious explanation. It's like saying that sleeping medicine works because it has a dormitive potency.

I'm not saying the second definition is never useful, just not in this context.

Comment by Ikaxas on Scheduling Algorithm for a PhD Student · 2020-09-25T15:08:16.807Z · LW · GW

How long is the median article in your field? More like 5 pages, more like 20 pages, or more like 40 pages?

Comment by Ikaxas on Expansive translations: considerations and possibilities · 2020-09-18T19:31:05.018Z · LW · GW

Imagine a system where when you land on a Wikipedia page, it translates it into a version optimized for you at that time. The examples change to things in your life, and any concepts difficult for you get explained in detail. It would be like a highly cognitively empathetic personal teacher.

Hmm, something about this bothers me, but I'm not entirely sure what. At first I thought it was something about filter bubbles, but of course that can be fixed; just tune the algorithm so that it frames things in a way that is just optimally outside your intellectual filter bubble/comfort zone.

Now I think it's something more like: it can be valuable to have read the same thing as other people; if everyone gets their own personalized version of Shakespeare, then people lose some of the connection they could have had with others over reading Shakespeare, since they didn't really read the same thing. And also, it can be valuable for different people to read the same thing for another reason: different people may interpret a text in different ways, which can generate new insights. If everyone gets their own personalized version, we lose out on some of the insights people might have had by bouncing their minds off of the original text.

I guess this isn't really a knockdown argument against making this sort of "personal translator" technology, since there's no reason people couldn't turn it off sometimes and read the originals, but nevertheless, we don't have a great track record of using technology like this wisely and not overusing it (I'm thinking of social media here).

Comment by Ikaxas on What Does "Signalling" Mean? · 2020-09-17T03:27:17.565Z · LW · GW

"Metaethics" is another example of this; sometimes it gets used around here to mean "high-level normative ethics" (and in fact the rationalist newsletter just uses it as the section header for anything ethics-related).

Comment by Ikaxas on Diagramming "Replacing Guilt," Part 1 · 2020-08-13T02:05:34.265Z · LW · GW

I liked this a lot. However, I didn't really understand the "Replacing Guilt" picture, the one with the two cups. Are they supposed to be coffee and water, with the idea being that coffee can keep you going short term but isn't sustainable, while water is better long term, or something?

Comment by Ikaxas on No Ultimate Goal and a Small Existential Crisis · 2020-07-24T22:00:00.332Z · LW · GW

I think about this question a lot as well. Here are some pieces I've personally found particularly helpful in thinking about it:

  • Sharon Street, "Nothing 'Really' Matters, But That's Not What Matters": link
    • This is several levels deep in a conversation, but you might be able to read it on its own and get the gist, then go back and read some of the other stuff if you feel like it.
  • Nate Soares' "Replacing Guilt" series: http://mindingourway.com/guilt/
    • Includes much more metaethics than it might sound like it should.
    • Especially relevant: "You don't get to know what you're fighting for"; everything in the "Drop your Obligations" section
  • (Book) Jonathan Haidt: The Happiness Hypothesis: link
    • Checks ancient wisdom about how to live against the modern psych literature.
  • Susan Wolf on meaning in life: link
    • There is a book version, but I haven't read it yet

Search terms that might help if you want to look for what philosophers have said about this:

  • meaning in/of life
  • moral epistemology
  • metaethics
  • terminal/final goals/ends

Some philosophers with relevant work:

  • Derek Parfit
  • Sharon Street
  • Christine Korsgaard
  • Bernard Williams

There is a ton of philosophical work on these sorts of things obviously, but I only wanted to directly mention stuff I've actually read.

If I had to summarize my current views on this (still very much in flux, and not something I necessarily live up to), I might say something like this:

Pick final goals that will be good both for you and for others. As Susan Wolf says, meaning comes from where "subjective valuing meets objective value," and as Jonathan Haidt says, "happiness comes from between." Some of this should be projects that will be fun/fulfilling for you and also produce value for others, some of this should be relationships (friendships, family, romantic, etc).

But be prepared for those goals to update. I like romeostevensit's phrasing elsethread: "goals are lighthouses not final destinations." As Nate Soares says, "you don't get to know what you're fighting for": what you're fighting for will change over time, and that's not a bad thing. Can't remember the source for this (probably either the Sequences or Replacing Guilt somewhere), but the human mind will often transform an instrumental goal into a terminal goal. Unlike Eliezer, I think this really does signal a blurriness in the boundary between instrumental and terminal goals. Even terminal goals can be evaluated in light of other goals we have (making them at least a little bit instrumental), and if an instrumental goal becomes ingrained enough, we may start to care about it for its own sake.

And when picking goals, start from where you are, start with what you already find yourself to care about, and go from there. The well-known metaphor of Neurath's Boat goes like this: "We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction." See also Eliezer, "Created already in motion". So start from what you already care about, and aim to have that evolve in a more consistent direction.

Aim for goals that will be good for both you and others, but take care of the essentials for yourself first (as Jordan Peterson says, "Set your house in perfect order before you criticize the world"). In order to help the world, you have to make yourself formidable (think virtue ethics). Furthermore, as Agnes Callard points out (link), the meaning in life can't solely be to make others happy. The buck has to stop somewhere -- "At some point, someone has to actually do the happying." So again, look for places where you can do things that make you happy while also creating value for others.

I don't claim this is at all airtight, or complete (as I said, still very much in flux), but it's what I've come to after thinking about this for the last several years.

Comment by Ikaxas on (answered: yes) Has anyone written up a consideration of Downs's "Paradox of Voting" from the perspective of MIRI-ish decision theories (UDT, FDT, or even just EDT)? · 2020-07-07T22:36:48.913Z · LW · GW

There has been some philosophical work that makes just this point. In particular, Julia Nefsky (who I think has some EA ties?) has a whole series of papers about this. Probably the best one to start with is her PhilCompass paper here: https://onlinelibrary.wiley.com/doi/abs/10.1111/phc3.12587

Obviously I don't mean this to address the original question, though, since it's not from an FDT/UDT perspective.

Comment by Ikaxas on (answered: yes) Has anyone written up a consideration of Downs's "Paradox of Voting" from the perspective of MIRI-ish decision theories (UDT, FDT, or even just EDT)? · 2020-07-07T16:15:35.838Z · LW · GW

I think there is a strong similarity between FDT (can't speak to UDT/TDT) and Kantian lines of thought in ethics. (To bring this out: the Kantian thought is roughly to consider yourself simply as an instance of a rational agent, and ask "can I will that all rational agents in these circumstances do what I'm considering doing?" FDT basically says "consider all agents that implement my algorithm or something sufficiently similar. What action should all those algorithm-instances output in these circumstances?" It's not identical, but it's pretty close.) Lots of people have Kantian intuitions, and to the extent that they do, I think they are implementing something quite similar to FDT. Lots of people probably vote because they think something like "well, if everyone didn't vote, that would be bad, so I'd better vote." (Insert hedging and caveats here about how there's a ton of debate over whether Kantianism is/should be consequentialist or not.) So they may be countable as at least partially FDT agents for purposes of FDT reasoning.

I think that memetically/genetically evolved heuristics are likely to differ systematically from CDT.

Here's a brief argument why they would (and why they might diverge specifically in the direction of FDT): the metric evolution optimizes for is inclusive genetic fitness, not merely fitness of the organism. Witness kin selection. The heuristics that evolution would install to exploit this would tend to be: act as if there are other organisms in the environment running a similar algorithm to you (i.e. those that share lots of genes with you), and cooperate with those. This is roughly FDT-reasoning, not CDT-reasoning.
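
To make the voting point a bit more concrete, here's a crude Monte Carlo toy of my own (made-up numbers, not anything from the linked discussions): it compares how often a single extra ballot changes a close election's outcome versus how often a block of correlated voters does. CDT-style reasoning only credits you with the first quantity; FDT-style reasoning, treating the agents running a sufficiently similar algorithm as moving together, cares about something more like the second.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_probability(extra_votes, n_other=10_000, p_support=0.5, trials=200_000):
    """Estimate P(the winner changes when `extra_votes` extra ballots for A are cast)."""
    a_votes = rng.binomial(n_other, p_support, size=trials)  # other voters' support for A
    b_votes = n_other - a_votes
    before = a_votes > b_votes               # A wins without the extra ballots
    after = a_votes + extra_votes > b_votes  # A wins with them
    return float(np.mean(before != after))

print("single ballot:", flip_probability(1))    # CDT-relevant pivotality: only my ballot moves
print("block of 500: ", flip_probability(500))  # FDT-relevant pivotality: the correlated block moves
```

Under these (unrealistically knife-edge) numbers the single ballot is almost never decisive, while the correlated block very often is; with a less even p_support both numbers shrink, but the block remains decisive far more often than the lone ballot.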

Comment by Ikaxas on Missing dog reasoning · 2020-06-27T01:19:39.891Z · LW · GW

whales, despite having millions of times the number of individual cells that mice have, don’t seem to get cancer much more often than mice.

Is this all mice, or just lab mice? I ask because of Bret Weinstein's thing about how lab mice have abnormally long telomeres, which causes them to get cancer a lot more frequently than normal mice (though in googling for the source I also found this counterargument). So is it that whales get cancer less often than we'd expect, or just that mice (or rather, the mice that we observe) get it a lot more frequently?

Comment by Ikaxas on Explaining the Rationalist Movement to the Uninitiated · 2020-04-21T19:35:03.702Z · LW · GW

So I don't know a ton about Pragmatism, but from what I do know about it, I definitely see what you're getting at; there are a lot of similarities between Pragmatism and LW-rationality. One major difference, though: as far as I know, Pragmatism doesn't accept the correspondence theory of truth (see here, at the bullet "epistemology (truth)"), while LW-rationality usually does (though as is often the case, Yudkowsky seems to have been a bit inconsistent on this topic: here for example he seems to express a deflationist theory of truth). Although, as Liam Bright has pointed out (in a slightly different context), perhaps one's theory of truth is not as important as some make it out to be.

At any rate, I had already wanted to learn more about Pragmatism, but hadn't really made the connection with rationality, so this makes me want to learn about it more. So thanks!

Comment by Ikaxas on Premature death paradox · 2020-04-21T19:03:20.512Z · LW · GW

So I agree that this paradox is quite interesting as a statistical puzzle. But I'm not sure it shows much about the ethical question of whether and when death is bad. I think the relevant notion of "premature death" might not be a descriptive notion, but might itself have a normative component. Like, "premature death" doesn't mean "unusually early death" (which is a fully descriptive notion) but something else. For example, if you assume Thomas Nagel's "deprivation account" of the badness of death, then "premature death" might be cashed out roughly as: dying while there's still valuable life ahead of you to live, such that you're deprived of something valuable by dying. In other words, you might say that death is not bad when one has lived a "full life," and is bad when one dies before living a full life. (Note that this doesn't beg the question against the transhumanist "death is always bad" sort of view, for one might insist that a life is never "full" in the relevant sense, and that there's always more valuable life ahead of you.) Trying to generalize this looks objectionably circular: death is bad when it's premature, and it's premature when it's bad. But at any rate it seems to me like the notion of premature death is trying to get at more than just the descriptive notions of dying before one is statistically predicted to die, or dying before a Laplacean demon who had a perfect physical model of the world would predict one to die.

Anyway, low confidence in this, and again, I agree the statistical puzzle is interesting in its own right.

Comment by Ikaxas on Premature death paradox · 2020-04-19T11:17:37.046Z · LW · GW

It was the very first thing they discussed. Start from the beginning of the stream and you'll get most of it.

Comment by Ikaxas on Premature death paradox · 2020-04-16T22:05:06.198Z · LW · GW

This is getting discussed right now on livestream by a couple philosophers including Agnes Callard (recording will be available after): https://www.crowdcast.io/e/k7viaqzp

Comment by Ikaxas on Premature death paradox · 2020-04-14T14:03:27.961Z · LW · GW

Maybe this is obvious, but:

Say life expectancy at age X is Y further years, and life expectancy at age (X + Y) is Z further years. I think at least part of the reason why Z isn't 0 is that if you're following someone along their life, the fact that they lived Y further years is more evidence to update on. If we write "N" for the event "lived to at least N years", then P(X+Y+Z | X+Y) > P(X+Y+Z | X), because living to X+Y years indicates certain things about, say, your health, genetics, etc. that aren't indicated by just living to X years.
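
Here's a minimal numerical sketch of the conditioning part of this, with a made-up mortality curve and a homogeneous population (so none of the health/genetics evidence is even in play):

```python
import numpy as np

# Made-up distribution over age at death: a rough bell curve around 80.
ages = np.arange(0, 111)
weights = np.exp(-0.5 * ((ages - 80) / 12.0) ** 2)
p_death = weights / weights.sum()

def remaining_life_expectancy(current_age):
    """E[age at death - current_age | survived to current_age]."""
    alive = ages >= current_age
    p = p_death[alive] / p_death[alive].sum()  # renormalize over survivors
    return float(np.sum(p * (ages[alive] - current_age)))

x = 70
y = remaining_life_expectancy(x)            # Y: expected further years at age X
z = remaining_life_expectancy(x + round(y)) # Z: expected further years at age X + Y
print(f"Y = {y:.1f}, Z = {z:.1f}")          # Z comes out well above zero
```

Even with everyone drawn from the same mortality curve, conditioning on having survived to X + Y renormalizes away the people who died earlier, so Z stays positive; the health/genetics evidence mentioned above would presumably push in the same direction on top of that.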

Comment by Ikaxas on Has the effectiveness of fever screening declined? · 2020-03-27T22:48:35.105Z · LW · GW

Thanks

Comment by Ikaxas on Has the effectiveness of fever screening declined? · 2020-03-27T22:09:08.460Z · LW · GW

Also, mods, how do I tag this as "coronavirus"? Or is that something the mods just do?

Comment by Ikaxas on Ikaxas' Shortform Feed · 2020-02-16T02:21:53.641Z · LW · GW

Global coordination problems

I've said before that I tentatively think that "foster global coordination" might be a good cause area in its own right, because it benefits so many other cause areas. I think it might be useful to have a term for the cause areas that global coordination would help. More specifically, a term for the concept "(reasonably significant) problem that requires global coordination to solve, or that global coordination would significantly help with solving." I propose "global coordination problem" (though I'm open to other suggestions). You may object "but coordination problem already has a meaning in game theory, this is likely to get confused with that." But global coordination problems are coordination problems in precisely the game theory sense (I think, feel free to correct me), so the terminological overlap is a benefit.
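
For reference, the textbook shape I have in mind is something like a two-player stag hunt (the payoffs below are just illustrative numbers):

$$
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\
\hline
\text{Cooperate} & (3,3) & (0,2) \\
\text{Defect} & (2,0) & (2,2)
\end{array}
$$

Both (Cooperate, Cooperate) and (Defect, Defect) are equilibria, and everyone prefers the first, but no one wants to cooperate unless they expect the others to; that's the sense of "coordination problem" the term is borrowing.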

What are some examples of global coordination problems? Certain x-risks and global catastrophic risks (such as AI, bioterrorism, pandemic risk, asteroid risk), climate change, some of the problems mentioned in The Possibility of an Ongoing Moral Catastrophe, as well as the general problem of ferreting out and fixing moral catastrophes, and almost certainly others.

In fact, it may be useful to think about a spectrum of problems, similar to Bostrom's Global Catastrophic Risk spectrum, organized by how much coordination is required to solve them. Analogous to Bostrom's spectrum, we could have: personal coordination problems (i.e. problems requiring no coordination with others, or perhaps only coordination with parts of oneself), local coordination problems, national coordination problems, global coordination problems, and transgenerational coordination problems.

Comment by Ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-16T01:39:49.612Z · LW · GW

Forgot one other thing I intend to work on: I've seen several people (perhaps even you?) say that the case for AI risk needs to be made more carefully than it has been; that's another project I may potentially work on.

Comment by Ikaxas on Trying for Five Minutes on AI Strategy · 2020-02-16T01:37:42.462Z · LW · GW

Another way to look at it though, is that the AI companies have co-opted some of the people concerned with AI risk (those on the more optimistic end of the spectrum) and cowed the rest...

Huh, that's an interesting point.

I'm not sure where I stand on the question of "should we be pulling the brakes now," but I definitely think it would be good if we had the ability to pull the brakes should it become necessary. It hadn't really occurred to me that those who think we should be pulling the brakes now would feel quasi-political pressure not to speak out. I assumed the reason there's not much talk of that option is that it's so clearly unrealistic at this point; but I'm all in favor of building the capacity to do so (modulo Caplan-style worries about this accidentally going too far and leading to totalitarianism), and it never really occurred to me that this would be a controversial opinion.

It looks like your background is in philosophy

Yep!

check out Problems in AI Alignment that philosophers could potentially contribute to, in case you haven't come across it already.

I had come across it before, but it was a while ago, so I took another look. I was already planning on working on some stuff in the vicinity of the "Normativity for AI / AI designers" and "Metaethical policing" bullets (namely the problem raised in these posts by gworley), but looking at it again, the other stuff under those bullets, as well as the metaphilosophy bullet, sounds quite interesting. I'm also planning on doing some work on moral uncertainty (which, in addition to its relevance to global priorities research, also has some relevance for AI; based on my cursory understanding, CIRL seems to incorporate the idea of moral uncertainty to some extent), and perhaps other GPI-style topics. AI strategy/governance stuff, including the topics in the OP, is also interesting, and I'm actually inclined to think that it may be more important than technical AI safety (though not far more important). But three disparate areas, all calling for different expertise outside philosophy (AI: compsci; GPR: econ etc.; strategy: international relations), feels a bit like too much, and I'm not certain which I ultimately should settle on (though I have a bit of time, I'm at the beginning of my PhD atm). I guess relevant factors are mostly the standard ones: which do I find most motivating/fun to work on, which can I skill up in fastest/easiest, which is most important/tractable/neglected? And which ones lead to a reasonable back-up plan/off-ramp in case high-risk jobs like academia/EA-org don't work out?