Posts

Has the effectiveness of fever screening declined? 2020-03-27T22:07:16.932Z
Potential Research Topic: Vingean Reflection, Value Alignment and Aspiration 2020-02-06T01:09:05.384Z
What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? 2019-09-18T17:17:05.602Z
Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness 2018-12-03T08:00:00.000Z
Trying for Five Minutes on AI Strategy 2018-10-17T16:18:31.597Z
A Process for Dealing with Motivated Reasoning 2018-09-03T03:34:11.650Z
Ikaxas' Hammertime Final Exam 2018-05-01T03:30:11.668Z
Ikaxas' Shortform Feed 2018-01-08T06:19:40.370Z

Comments

Comment by Vaughn Papenhausen (Ikaxas) on The Limits of the Existence Proof Argument for General Intelligence · 2023-09-20T18:22:11.200Z · LW · GW

I suspect this is getting downvoted because it is so short and underdeveloped. I think the fundamental point here is worth making though. I've used the existence proof argument in the past, and I think there is something to it, but I think the point being made here is basically right. It might be worth writing another post about this that goes into a bit more detail.

Comment by Vaughn Papenhausen (Ikaxas) on Complex Signs Bad · 2023-07-04T20:57:13.918Z · LW · GW

This is pretty similar in concept to the conlang toki pona, which is a language explicitly designed to be as simple as possible. It has fewer than 150 words. ("toki pona" means something like "good language" or "good speech" in toki pona.)

Comment by Vaughn Papenhausen (Ikaxas) on The basic reasons I expect AGI ruin · 2023-04-18T14:35:11.736Z · LW · GW

Quoting a recent conversation between Aryeh Englander and Eliezer Yudkowsky

Out of curiosity, is this conversation publicly posted anywhere? I didn't see a link.

Comment by Vaughn Papenhausen (Ikaxas) on Eliezer is still ridiculously optimistic about AI risk · 2023-02-28T15:05:05.181Z · LW · GW

Putting RamblinDash's point another way: when Eliezer says "unlimited retries", he's not talking about a Groundhog Day style reset. He's just talking about the mundane thing where, when you're trying to fix a car engine or something, you try one fix, and if it doesn't start, you try another fix, and if it still doesn't start, you try another fix, and so on. So the scenario Eliezer is imagining is this: we have 50 years. Year 1, we build an AI, and it kills 1 million people. We shut it off. Year 2, we fix the AI. We turn it back on; it kills another million people. We shut it off, fix it, turn it back on. Etc., until it stops killing people when we turn it on. Eliezer is saying that if we had 50 years to do that, we could align an AI. The problem is that, in reality, the first time we turn it on, it doesn't kill 1 million people; it kills everyone. We only get one try.

Comment by Vaughn Papenhausen (Ikaxas) on I hired 5 people to sit behind me and make me productive for a month · 2023-02-05T14:06:56.906Z · LW · GW

Am I the only one who, upon reading the title, pictured 5 people sitting behind OP all at the same time?

Comment by Vaughn Papenhausen (Ikaxas) on I hired 5 people to sit behind me and make me productive for a month · 2023-02-05T13:40:25.170Z · LW · GW

The group version of this already exists, in a couple of different versions:

Comment by Vaughn Papenhausen (Ikaxas) on Basics of Rationalist Discourse · 2023-01-28T14:32:53.261Z · LW · GW

Yeah, that is definitely fair

Comment by Vaughn Papenhausen (Ikaxas) on Basics of Rationalist Discourse · 2023-01-28T02:04:17.874Z · LW · GW

My model of gears to ascension, based on their first 2 posts, is that they're not complaining about the length for their own sake, but rather for the sake of people that they link this post to who then bounce off because it looks too long. A basics post shouldn't have the property that someone with zero context is likely to bounce off it, and I think gears to ascension is saying that the nominal length (reflected in the "43 minutes") is likely to have the effect of making people who get linked to this post bounce off it, even though the length for practical purposes is much shorter.

Comment by Vaughn Papenhausen (Ikaxas) on Notes on writing · 2023-01-11T00:34:32.301Z · LW · GW

Pinker has a book about writing called The Sense of Style

Comment by Vaughn Papenhausen (Ikaxas) on Ritual as the only tool for overwriting values and goals · 2023-01-07T00:26:39.055Z · LW · GW

There seems to be a conflict between putting “self-displays on social media” in the ritual box, and putting “all social signalling” outside it. Surely the former is a subset of the latter.

My understanding was that the point was this: not all social signalling is ritual. Some of it is, some of it isn't. The point was: someone might think OP is claiming that all social signalling is ritual, and OP wanted to dispel that impression. This is consistent with some social signalling counting as ritual.

Comment by Vaughn Papenhausen (Ikaxas) on Is there an Ultimate text editor? · 2022-09-11T16:50:54.099Z · LW · GW

I think the idea is to be able to transform this:

- item 1
    - item 2
- item 3

into this:

- item 3
- item 1
    - item 2

I.e. it would treat bulleted lists like trees, and allow you to move entire sub-branches of trees around as single units.
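
To make that concrete, here's a rough Python sketch of the idea (purely illustrative; the indentation-based parsing and the function names are assumptions for the example, not how any particular editor implements it):

# Treat an indented bullet list as a tree and move a whole sub-branch
# (an item plus everything nested under it) as one unit.

def parse_outline(lines):
    # Turn indented "- item" lines into (text, children) nodes under a dummy root.
    root = ("ROOT", [])
    stack = [(-1, root)]  # (indent level, node)
    for line in lines:
        indent = len(line) - len(line.lstrip())
        node = (line.strip().lstrip("- "), [])
        while stack and stack[-1][0] >= indent:
            stack.pop()
        stack[-1][1][1].append(node)
        stack.append((indent, node))
    return root

def render(node, depth=-1):
    lines = [] if depth < 0 else ["    " * depth + "- " + node[0]]
    for child in node[1]:
        lines.extend(render(child, depth + 1))
    return lines

outline = ["- item 1", "    - item 2", "- item 3"]
tree = parse_outline(outline)
# Move the last top-level branch ("item 3") to the front; "item 2" stays nested under "item 1".
tree[1].insert(0, tree[1].pop())
print("\n".join(render(tree)))
# prints:
# - item 3
# - item 1
#     - item 2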

Comment by Vaughn Papenhausen (Ikaxas) on Novelty Generation - The Art of Good Ideas · 2022-08-21T03:48:23.468Z · LW · GW

This isn't necessarily a criticism, but "exploration & recombination" and "tetrising" seem in tension with each other. E&R is all about allowing yourself to explore broadly, not limiting yourself to spending your time only on the narrow thing you're "trying to work on." Tetrising, on the other hand, is precisely about spending your time only on that narrow thing.

As I said, this isn't a criticism; this post is about a grab bag of techniques that might work at different times for different people, not a single unified strategy, but it's still interesting to point out the tension here.

Comment by Vaughn Papenhausen (Ikaxas) on Assorted thoughts about abstraction · 2022-07-05T23:26:09.482Z · LW · GW

Cool, thanks!

Comment by Vaughn Papenhausen (Ikaxas) on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T21:39:17.337Z · LW · GW

I think the point was that it's a cause you don't have to be a longtermist in order to care about. Saying it's a "longtermist cause" can be interpreted either as saying that there are strong reasons for caring about it if you're a longtermist, or that there are not strong reasons for caring about it if you're not a longtermist. OP is disagreeing with the second of these (i.e. OP thinks there are strong reasons for caring about AI risk completely apart from longtermism).

Comment by Vaughn Papenhausen (Ikaxas) on Assorted thoughts about abstraction · 2022-07-05T21:29:40.914Z · LW · GW

Not a programmer, but I think one other reason for this is that, at least in certain languages (I think interpreted languages, e.g. Python, are the relevant category here), you have to define a name before you actually use it; the interpreter basically executes the code top-down instead of compiling it first, so it can't just look later in the file to figure out what you mean. So

def brushTeeth():
    putToothpasteOnToothbrush()
    ...

def putToothpasteOnToothbrush():
    ...


would fail if brushTeeth() were called anywhere above the second definition, because at that point putToothpasteOnToothbrush hasn't been defined yet. (Defining brushTeeth first is fine on its own; the name is only looked up when brushTeeth() actually runs.)
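
A minimal runnable sketch of where the error actually shows up, using the same hypothetical function names as above:

def brushTeeth():
    putToothpasteOnToothbrush()  # the name is only looked up when brushTeeth() runs

# brushTeeth()  # uncommenting this raises NameError: putToothpasteOnToothbrush doesn't exist yet

def putToothpasteOnToothbrush():
    print("putting toothpaste on toothbrush")

brushTeeth()  # works: by the time the name is looked up, the definition has been executed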

Comment by Vaughn Papenhausen (Ikaxas) on Best open-source textbooks (goal: make them collaborative)? · 2022-05-07T16:50:31.589Z · LW · GW

Fyi, the link to your site is broken for those viewing on greaterwrong.com; it's interpreting "--a" as part of the link.

Comment by Vaughn Papenhausen (Ikaxas) on SERI ML Alignment Theory Scholars Program 2022 · 2022-04-27T17:50:19.398Z · LW · GW

Maybe have a special "announcements" section on the frontpage?

Comment by Vaughn Papenhausen (Ikaxas) on Fundamental Uncertainty: Chapter 2 - Why do words have meaning? · 2022-04-19T13:46:06.454Z · LW · GW

The way I like to think about this is that the set of all possible thoughts is like a space that can be carved up into little territories and each of those territories marked with a word to give it a name.

Probably better to say something like "set of all possible concepts." Words denote concepts, complete sentences denote thoughts.

I'm curious if you're explicitly influenced by Quine for the final section, or if the resemblance is just coincidental.

Also, about that final section, you say that "words are grounded in our direct experience of what happens when we say a word." While I was reading I kept wondering what you would say about the following alternative (though not mutually exclusive) hypothesis: "words are grounded in our experience of what happens when others say those words in our presence." Why think the only thing that matters is what happens when we ourselves say a word?

Comment by Vaughn Papenhausen (Ikaxas) on What The Foucault · 2022-02-20T19:58:54.286Z · LW · GW

Master: Now, is Foucault’s work the content you’re looking for, or merely a pointer.

Student: What… does that mean?

Master: Do you think that you think that the value of Foucault for you comes from the specific ideas he had, or in using him to even consider these two topics?

This put words to a feeling I've had a lot. Often I have some ideas, and use thinkers as a kind of handle to point to the ideas in my head (especially when I haven't actually read the thinkers yet). The problem is that this fools me into thinking that the ideas are developed, either by me or by the thinkers. I like this idea of using the thinkers to notice topics, but then developing on the topics yourself, at least if the thinkers don't take those topics in the direction you had in mind to take them.

On a different note, if you're interested in Foucault's methodology, some search terms would be "genealogy" and "conceptual engineering." Here is a LW post on conceptual engineering, and here is a review of a recent book on the topic (which I believe engages with Foucault as well as Nietzsche, Hume, Bernard Williams, and maybe others; I haven't actually read the full book yet, just this review). The book seems to be pretty directly about what you're looking for: "history for finding out where our concepts and values come from, in order to question them."

Comment by Vaughn Papenhausen (Ikaxas) on Bryan Caplan meets Socrates · 2022-02-06T10:26:45.204Z · LW · GW

Yep, check out the Republic, I believe this is in book 5, or if it's not in book 5 it's in book 6.

Comment by Vaughn Papenhausen (Ikaxas) on Is it rational to modify one's utility function? · 2022-02-05T00:47:01.658Z · LW · GW

The received wisdom in this community is that modifying one's utility function is at least usually irrational. The classic source here is Steve Omohundro's 2008 paper, "The Basic AI Drives," and Nick Bostrom gives basically the same argument in Superintelligence, pp. 132-34. The argument is basically this: imagine you have an AI that is solely maximizing the number of paperclips that exist. Obviously, if it abandons that goal, there will be fewer paperclips than if it maintains that goal. And if it adds another goal, say maximizing staples, then this other goal will compete with the paperclip goal for resources, e.g. time, attention, steel, etc. So again, if it adds the staple goal, there will be fewer paperclips than if it doesn't. So if it evaluates every option by how many paperclips result in expectation, then it will choose to maintain its paperclip goal unchanged. This argument isn't mathematically rigorous, and allows that there may be special cases where changing one's goal may be useful. But the thought is that, by default, changing one's goal is detrimental from the perspective of one's current goals.
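
A toy version of that evaluation, just to make the structure of the argument explicit (the option names and the numbers are made up for illustration):

# The agent scores each option by how many paperclips result under its
# *current* goal, so both goal changes look strictly worse than keeping it.
options = {
    "keep paperclip goal": {"paperclips": 100, "staples": 0},
    "add staple goal (split resources)": {"paperclips": 50, "staples": 50},
    "abandon paperclip goal": {"paperclips": 0, "staples": 100},
}

def current_goal_value(outcome):
    return outcome["paperclips"]  # the current goal only counts paperclips

best = max(options, key=lambda name: current_goal_value(options[name]))
print(best)  # -> keep paperclip goal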

As I said, though, there may be exceptions, at least for certain kinds of agents. Here's an example. It seems as though, at least for humans, we're more motivated to pursue our final goals directly than we are to pursue merely instrumental goals (which child do you think will read more: the one who intrinsically enjoys reading, or the one you pay $5 for every book they finish?). So, if a goal is particularly instrumentally useful, it may be useful to adopt it as a final goal in itself in order to increase your motivation to pursue it. For example, if your goal is to become a diplomat, but you find it extremely boring to read papers on foreign policy... well, first of all, I question why you want to become a diplomat if you're not interested in foreign policy, but more importantly, you might be well-served to cultivate an intrinsic interest in foreign policy papers. This is a bit risky: if circumstances change so that it's no longer as instrumentally useful, it may end up competing with your initial goals as described by the Bostrom/Omohundro argument. But it could work out that, at least some of the time, the expected value of changing your goal for this reason is positive.

Another paper to look at might be Steve Petersen's paper, "Superintelligence as Superethical," though I can't summarize the argument for you off the top of my head.

Comment by Vaughn Papenhausen (Ikaxas) on The ignorance of normative realism bot · 2022-01-20T07:43:38.850Z · LW · GW

I would think the metatheological fact you want to be realist about is something like "there is a fact of the matter about whether the God of Christianity exists." "The God of Christianity doesn't exist" strikes me as an object-level theological fact.

The metaethical nihilist usually makes the cut at claims that entail the existence of normative properties. That is, "pleasure is not good" is not a normative fact, as long as it isn't read to entail that pleasure is bad. "Pleasure is not good" does not by itself entail the existence of any normative property.

Comment by Vaughn Papenhausen (Ikaxas) on A non-magical explanation of Jeffrey Epstein · 2022-01-08T20:45:46.499Z · LW · GW

Same

Comment by Vaughn Papenhausen (Ikaxas) on Third Time: a better way to work · 2022-01-08T18:05:59.400Z · LW · GW

Really? I'm American and it sounds perfectly normal to me.

Comment by Vaughn Papenhausen (Ikaxas) on A fate worse than death? · 2021-12-15T20:12:04.648Z · LW · GW

I think this post is extremely interesting, and on a very important topic. As I said elsethread, for this reason, I don't think it should be in negative karma territory (and have strong-upvoted to try to counterbalance that).

On the object level, while there's a frame of mind I can get into where I can see how this looks plausible to someone, I'm inclined to think that this post is more of a reductio of some set of unstated assumptions that lead to its conclusion, rather than a compelling argument for that conclusion. I don't have the time right now to think about what exactly those unstated assumptions are or where they go wrong, but I think that would be important. When I get some more time, if I remember, I may come back and think some more about this.

Comment by Vaughn Papenhausen (Ikaxas) on A fate worse than death? · 2021-12-15T19:58:36.572Z · LW · GW

I agree with this as well. I have strongly upvoted in an attempt to counterbalance this, but even so it is still in negative karma territory, which I don't think it deserves.

Comment by Vaughn Papenhausen (Ikaxas) on Film Study for Research · 2021-10-03T21:12:26.738Z · LW · GW

A possible example of research film-study in a very literal sense: Andy Matuschak's 2020-05-04 Note-writing livestream.

I would love it if more people did this sort of thing.

Comment by Vaughn Papenhausen (Ikaxas) on Dissolving the Experience Machine Objection · 2021-10-03T19:17:11.632Z · LW · GW

I think if you accept the premise that the machine somehow magically truly simulates perfectly and indistinguishably from actual reality, in such a way that there is absolutely no way of knowing the difference between the simulation and the outside universe, then the simulated universe is essentially isomorphic to reality, and we should be fully indifferent. I’m not sure it even makes sense to say either universe is more “real”, since they’re literally identical in every way that matters (for the differences we can’t observe even in theory, I appeal to Newton’s flaming laser sword). Our intuitions here should be closer to stepping into an identical parallel universe, rather than entering a simulation.

I see what you're trying to get at here, but as stated I think this begs the question. You're assuming here that the only ways universes could differ that would matter would be ways that have some impact on what we experience. People who accept the experience machine objection (let's call them "non-experientialists") don't agree. They (usually) think that whether we're deceived, or whether our beliefs are actually true, can have some effect on how good our life is.

For example, consider two people whose lives are experientially identical, call them Ron and Edward. Ron lives in the real world, and has a wife and two children who love him, and whom he loves, and who are a big part of the reason he feels his life is going well. Edward lives in the experience machine. He has exactly the same experiences as Ron, and therefore also thinks he has a wife and children who love him. However, he doesn't actually have a wife and children, just some experiences that make him think he has a wife and children (so of course "his wife and children" feel nothing for him, love or otherwise. Perhaps these experiences are created by simulations, but suppose the simulations are p-zombies who don't feel anything). Non-experientialists would say that Ron's life is better than Edward's, because Edward is wrong about whether his wife and children love him (naturally, Edward would be devastated if he realized the situation he was in; it's important to him that his wife and children love him, so if he found out they didn't, he would be distraught). He won't ever find this out, of course (since his life is experientially identical to Ron's, and Ron will never find this out, since Ron doesn't live in the experience machine). But the fact that, if he did, he would be distraught, and the fact that it's true, seem to make a difference to how well his life goes, even though he will never actually find out. (Or at least, this is the kind of thing non-experientialists would say.)

(Note the difference between the way the experience machine is being used here and the way it's normally used. Normally, the question is "would you plug in?" But here, the question is "are these two experientially-identical lives, one in the experience machine and one in the real world, equally as good as each other? Or is one better, if only ever-so-slightly?" See this paper for more discussion: Lin, "How to Use the Experience Machine")

For a somewhat more realistic (though still pretty out-there) example, imagine Andy and Bob. Once again, Andy has a wife and children who love him. Bob also has a wife and children, and while they pretend to love him while he's around, deep down his wife thinks he's boring and his children think he's tyrannical; they only put on a show so as not to hurt his feelings. Suppose Bob's wife and children are good enough at pretending that they can fool him for his whole life (and don't ever let on to anyone else who might let it slip). It seems like Bob's life is actually pretty shitty, though he doesn't know it.

Ultimately I'm not sure how I feel about these thought experiments. I can get the intuition that Edward and Bob's lives are pretty bad, but if I imagine myself in their shoes, the intuition becomes much weaker (since, of course, if I were in their shoes, I wouldn't know that "my" wife and children don't love "me"). I'm not sure which of these intuitions, if either, is more trustworthy. But this is the kind of thing you have to contend with if you want to understand why people find the experience machine compelling.

Comment by Vaughn Papenhausen (Ikaxas) on Tasks apps w/ time estimates to gauge how much you'll overshoot? · 2021-09-07T17:05:02.187Z · LW · GW

Not sure if this is exactly what you're looking for, but you could check out "Do Now" on the play store: https://play.google.com/store/apps/details?id=com.slamtastic.donow.app (no idea if it's available for apple or not)

Comment by Vaughn Papenhausen (Ikaxas) on Good software to draw and manipulate causal networks? · 2021-09-02T21:00:28.613Z · LW · GW

Two things I've come across. Haven't used either much, but figured I'd mention them:

Comment by Vaughn Papenhausen (Ikaxas) on Could you have stopped Chernobyl? · 2021-08-27T13:41:02.504Z · LW · GW

Ah, I think the fact that there's an image after the first point is causing the numbered list to be numbered 1,1,2,3.

Comment by Vaughn Papenhausen (Ikaxas) on How refined is your art of note-taking? · 2021-05-20T15:44:41.701Z · LW · GW

My main concern with using an app like Evergreen Notes is that a hobby project built by one person seems like a fragile place to leave a part of my brain.

In that case you might like obsidian.md.

Comment by Vaughn Papenhausen (Ikaxas) on Your Dog is Even Smarter Than You Think · 2021-05-01T23:38:22.876Z · LW · GW

I found this one particularly impressive: https://m.youtube.com/watch?v=AHiu-EDJUx0

The use of "oops" at the end is spot on.

Comment by Vaughn Papenhausen (Ikaxas) on Defining "optimizer" · 2021-04-19T14:20:45.489Z · LW · GW

Hmm. I think this is closer to "general optimizer" than to "optimizer": notice that certain chess-playing algorithms (namely, those that have been "hard-coded" with lots of chess-specific heuristics and maybe an opening manual) wouldn't meet this definition, since it's not easy to change them to play e.g. checkers or backgammon or Go. Was this intentional (do you think that this style of chess program doesn't count as an optimizer)? I think your definition is getting at something interesting, but I think it's more specific than "optimizer".

Comment by Vaughn Papenhausen (Ikaxas) on Are there good classes (or just articles) on blog writing? · 2021-04-19T07:52:22.067Z · LW · GW

Here are Scott Alexander's tips: https://slatestarcodex.com/2016/02/20/writing-advice/

Comment by Vaughn Papenhausen (Ikaxas) on The Point of Easy Progress · 2021-03-28T20:00:24.314Z · LW · GW

I really liked this. I thought the little graphics were a nice touch. And the idea is one of those ones that seems almost obvious in retrospect, but wasn't obvious at all before reading the post. Looking back I can see hints of it in thoughts I've had before, but that's not the same as having had the idea. And the handle ("point of easy progress") is memorable, and probably makes the concept more actionable (it's much easier to plan a project if you can have thoughts like "can I structure this in such a way that there is a point of easy progress, and that I will hit it within a short enough amount of time that it's motivating?").

Comment by Vaughn Papenhausen (Ikaxas) on AI x-risk reduction: why I chose academia over industry · 2021-03-17T16:21:00.635Z · LW · GW

I've started using the phrase "existential catastrophe" in my thinking about this; "x-catastrophe" doesn't really have much of a ring to it though, so maybe we need something else that abbreviates better?

Comment by Vaughn Papenhausen (Ikaxas) on [Lecture Club] Awakening from the Meaning Crisis · 2021-03-08T23:00:09.009Z · LW · GW

So one thing I'm worried about is having a hard time navigating once we're a few episodes in. Perhaps you could link in the main post to the comment for each episode?

Comment by Vaughn Papenhausen (Ikaxas) on adamShimi's Shortform · 2021-02-28T18:31:50.110Z · LW · GW

Could this be solved just by posting your work and then immediately sharing the link with people you specifically want feedback from? That way there's no expectation that they would have already seen it. (Granted, this is slightly different from a gdoc in that you can share a gdoc with one person, get their feedback, then share with another person, while what I suggested requires asking everyone you want feedback from all at once.)

Comment by Vaughn Papenhausen (Ikaxas) on Yes, words can cause harm · 2021-02-25T03:45:52.457Z · LW · GW

I disagree, I think Kithpendragon did successfully refute the argument without providing examples. Their argument is quite simple, as I understand it: words can cause thoughts, thoughts can cause urges to perform actions which are harmful to oneself, such urges can cause actions which are harmful to oneself. There's no claim that any of these things is particularly likely, just that they're possible, and if they're all possible, then it's possible for words to cause harm (again, perhaps not at all likely, for all Kithpendragon has said, but possible). It borders on a technicality, and elsethread I disputed its practical importance, but for all that it is successful at what it's trying to do.

I agree that the idea that concrete examples are a "likely hazard" seems a bit excessive, but I can see the reasoning here even if I don't agree with it: if you think that words have the potential to cause substantial harm, then it makes sense to think that if you put out a long list of words/statements chosen for their potential to be harmful, the likelihood that at least one person will be substantially harmed by at least one entry on the list seems, if not high, then still high enough to warrant caution. Viliam has managed to get around this, because the reasoning only applies if you're directly mentioning the harmful words/statements, whereas Viliam has described some examples indirectly.

Comment by Vaughn Papenhausen (Ikaxas) on Yes, words can cause harm · 2021-02-25T00:57:03.230Z · LW · GW

A sneeze can determine much more than hurricane/no hurricane. It can determine the identities of everyone who exists, say, a few hundred years into the future and onwards.

If you're not already familiar, this argument gets made all the time in debates about "consequentialist cluelessness". This gets discussed, among other places, in this interview with Hilary Greaves: https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/. It's also related to the paralysis argument I mentioned in my other comment.

Comment by Vaughn Papenhausen (Ikaxas) on Yes, words can cause harm · 2021-02-24T16:35:09.717Z · LW · GW

Upvoted for giving "defused examples" so to speak (examples that are described rather than directly used). I think this is a good strategy for avoiding the infohazard.

Comment by Vaughn Papenhausen (Ikaxas) on Yes, words can cause harm · 2021-02-24T16:02:03.162Z · LW · GW

I was thinking a bit more about why Christian might have posted his comment, and why the post (cards on the table) got my hackles up the way it did, and I think it might have to do with the lengths you go to to avoid using any examples. Even though you aren't trying to argue for the thesis that we should be more careful, because of the way the post was written, you seem to believe that we should be much more careful about this sort of thing than we usually are. (Perhaps you don't think this; perhaps you think that the level of caution you went to in this post is normal, given that giving examples would be basically optimizing for producing a list of "words that cause harm." But I think it's easy to interpret this strategy as implicitly claiming that people should be much more careful than they are, and miss the fact that you aren't explicitly trying to give a full defense of that thesis in this post.)

Comment by Vaughn Papenhausen (Ikaxas) on Yes, words can cause harm · 2021-02-24T15:57:20.971Z · LW · GW

Sorry for the long edit to my comment, I was editing while you posted your comment. Anyway, if your goal wasn't to go all the way to "people need to be more careful with their words" in this post, then fair enough.

Comment by Vaughn Papenhausen (Ikaxas) on Yes, words can cause harm · 2021-02-24T15:45:20.350Z · LW · GW

I originally had a longer comment, but I'm afraid of getting embroiled in this, so here's a short-ish comment instead. Also, I recognize that there's more interpretive labor I could do here, but I figure it's better to say something non-optimal than to say nothing.

I'm guessing you don't mean "harm should be avoided whenever possible" literally. Here's why: if we take it literally, then it seems to imply that you should never say anything, since anything you say has some possibility of leading to a causal chain that produces harm. And I'm guessing you don't want to say that. (Related is the discussion of the "paralysis argument" in this interview: https://80000hours.org/podcast/episodes/will-macaskill-paralysis-and-hinge-of-history/#the-paralysis-argument-01542)

I think this is part of what's behind Christian's comment. If we don't want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree. So then the argument becomes about how much risk we should take. And if we're already at roughly the optimal level of risk, then it's not right to say that interlocutors should be more careful (to be clear, I am not claiming that we are at the optimal level of risk). So arguing that there's always some risk isn't enough to argue that interlocutors should be more careful -- you also have to argue that the current norms don't prescribe the optimal level of risk already, they permit us to take more risk than we should. There is no way to avoid the tradeoff here, the question is where the tradeoff should be made.

[EDIT: So while Stuart Anderson does indeed simply repeat the argument you (successfully) refute in the post, Christian, if I'm reading him right, is making a different argument, and saying that your original argument doesn't get us all the way from "words can cause harm" to "interlocutors should be more careful with their words."

You want to argue that interlocutors should be more careful with their words [EDIT: kithpendragon clarifies below that they aren't aiming to do that, at least in this post]. You see some people (e.g. Stuart Anderson, and the people you allude to at the beginning), making the following sort of argument:

  1. Words can't cause harm
  2. Therefore, people don't need to be careful with their words.

You successfully refute (1) in the post. But this doesn't get us to "people do need to be careful with their words" since the following sort of argument is also available:

A. Words don't have a high enough probability of causing enough harm to enough people that people need to be any more careful with them than they're already being.

B. Therefore, people don't need to be careful with their words (at least, not any more than they already are). [EDIT: list formatting]]

Comment by Vaughn Papenhausen (Ikaxas) on Open & Welcome Thread – February 2021 · 2021-02-22T14:08:08.959Z · LW · GW

The sentence, "The present king of France is bald."

Comment by Vaughn Papenhausen (Ikaxas) on Google’s Ethical AI team and AI Safety · 2021-02-21T00:28:00.702Z · LW · GW

I can't figure out why this is being downvoted. I found the model of how AI safety work is likely to actually ensure (or not) the development of safe AI to be helpful, and I thought this was a pretty good case that this firing is a worrying sign, even if it's not directly related to safety in particular.

Comment by Vaughn Papenhausen (Ikaxas) on “PR” is corrosive; “reputation” is not. · 2021-02-16T00:52:52.034Z · LW · GW

thoughts [don't] end up growing better than they would otherwise by being nurtured and midwifed? Thoughts grow better by being intelligently attacked.

I think both are true, depending on the stage of development the thought is at. If the thought is not very fleshed out yet, it grows better by being nurtured and midwifed (see e.g. here). If the thought is relatively mature, it grows best by being intelligently attacked. I predict Duncan will agree.

Comment by Vaughn Papenhausen (Ikaxas) on The art of caring what people think · 2021-02-12T16:34:23.931Z · LW · GW

Oh! That makes much more sense as a thing to be confused about haha. I was actually a bit hesitant to post my comment because it seemed like you wouldn't be prone to the basic confusion I was attributing to you; in retrospect, perhaps if I had listened to that I could have discovered the way in which you were actually confused, and addressed that instead.

Comment by Vaughn Papenhausen (Ikaxas) on The art of caring what people think · 2021-02-12T13:55:32.870Z · LW · GW

No, the scenario is someone who isn't fully convinced by the AI risk arguments, and thinks climate change might be worse, but mostly hangs out with AI risk types and so doesn't feel comfortable having that opinion. Then they find a group of people who are more worried about climate change, and start to feel more comfortable thinking about both sides of the topic.