Posts

April 15, 2040 2021-05-04T21:18:08.912Z
What is a VNM stable set, really? 2021-01-25T05:43:59.496Z
Why you should minimax in two-player zero-sum games 2020-05-17T20:48:03.770Z
Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern) 2020-05-11T09:47:00.773Z
Conflict vs. mistake in non-zero-sum games 2020-04-05T22:22:41.374Z
Beliefs at different timescales 2018-11-04T20:10:59.223Z
Counterfactuals and reflective oracles 2018-09-05T08:54:06.303Z
Counterfactuals, thick and thin 2018-07-31T15:43:59.187Z
An environment for studying counterfactuals 2018-07-11T00:14:49.756Z
Logical counterfactuals and differential privacy 2018-02-04T00:17:43.000Z
Oracle machines for automated philosophy 2015-02-17T15:10:04.000Z
Meetup : Berkeley: Beta-testing at CFAR 2014-03-19T05:32:26.521Z
Meetup : Berkeley: Implementation Intentions 2014-02-27T07:06:29.784Z
Meetup : Berkeley: Ask vs. Guess (vs. Tell) Culture 2014-02-19T20:16:30.017Z
Meetup : Berkeley: The Twelve Virtues 2014-02-12T19:56:53.045Z
Meetup : Berkeley: Talk on communication 2014-01-24T03:57:50.244Z
Meetup : Berkeley: Weekly goals 2014-01-22T18:16:38.107Z
Meetup : Berkeley meetup: 5-minute exercises 2014-01-15T21:02:26.223Z
Meetup : Meetup at CFAR, Wednesday: Nutritionally complete bread 2014-01-07T10:25:33.016Z
Meetup : Berkeley: Hypothetical Apostasy 2013-06-12T17:53:40.651Z
Meetup : Berkeley: Board games 2013-06-04T16:21:17.574Z
Meetup : Berkeley: The Motivation Hacker by Nick Winter 2013-05-28T06:02:07.554Z
Meetup : Berkeley: To-do lists and other systems 2013-05-22T01:09:51.917Z
Meetup : Berkeley: Munchkinism 2013-05-14T04:25:21.643Z
Meetup : Berkeley: Information theory and the art of conversation 2013-05-05T22:35:00.823Z
Meetup : Berkeley: Dungeons & Discourse 2013-03-03T06:13:05.399Z
Meetup : Berkeley: Board games 2013-01-29T03:09:23.841Z
Meetup : Berkeley: CFAR focus group 2013-01-23T02:06:35.830Z
A fungibility theorem 2013-01-12T09:27:25.637Z
Proof of fungibility theorem 2013-01-12T09:26:09.484Z
Meetup : Berkeley meetup: Board games! 2013-01-08T20:40:42.392Z
Meetup : Berkeley: How Robot Cars Are Near 2012-12-17T19:46:33.980Z
Meetup : Berkeley: Boardgames 2012-12-05T18:28:09.814Z
Meetup : Berkeley meetup: Hermeneutics! 2012-11-26T05:40:29.186Z
Meetup : Berkeley meetup: Deliberate performance 2012-11-13T23:58:50.742Z
Meetup : Berkeley meetup: Success stories 2012-10-23T22:10:43.964Z
Meetup : Different location for Berkeley meetup 2012-10-17T17:19:56.746Z
[Link] "Fewer than X% of Americans know Y" 2012-10-10T16:59:38.114Z
Meetup : Different location: Berkeley meetup 2012-10-03T08:26:09.910Z
Meetup : Pre-Singularity Summit Overcoming Bias / Less Wrong Meetup Party 2012-09-24T14:46:05.475Z
Meetup : Vienna meetup 2012-09-22T13:14:23.668Z
Meetup report: How harmful is cannabis, and will you change your habits? 2012-09-09T04:50:10.943Z
Meetup : Berkeley meetup: Cannabis, Decision-Making, And A Chance To Change Your Mind 2012-08-29T03:50:23.867Z
Meetup : Berkeley meetup: Operant conditioning game 2012-08-21T15:07:36.431Z
Meetup : Berkeley meetup: Discussion about startups 2012-08-14T17:09:10.149Z
Meetup : Berkeley meetup: Board game night 2012-08-01T06:40:27.322Z
Meetup : Berkeley meetup: Rationalist group therapy 2012-07-25T05:50:53.138Z
Meetup : Berkeley meetup: Argument mapping software 2012-07-18T19:50:27.973Z
Meetup : Berkeley meta-meetup 2012-07-06T08:02:11.372Z
Meetup : Berkeley meetup 2012-06-24T04:36:23.833Z

Comments

Comment by Nisan on [Link] Musk's non-missing mood · 2021-07-13T21:40:39.940Z · LW · GW

Agreed that it shouldn't be hard to do that, but I expect that people will often continue to do what they find intrinsically motivating, or what they're good at, even if it's not overall a good idea. If this article can be believed, a senior researcher said that they work on capabilities because "the prospect of discovery is too sweet".

Comment by Nisan on Daniel Kokotajlo's Shortform · 2021-07-08T00:21:49.703Z · LW · GW

It's fine to say that if you want the conversation to become a discussion of AI timelines. Maybe you do! But not every conversation needs to be about AI timelines.

Comment by Nisan on Confusions re: Higher-Level Game Theory · 2021-07-02T17:49:03.979Z · LW · GW

I feel excited about this framework! Several thoughts:

I especially like the metathreat hierarchy. It makes sense because if you completely curry it, each agent sees the foe's action, policy, metapolicy, etc., which are all generically independent pieces of information. But it gets weird when an agent sees an action that's not compatible with the foe's policy.

You hinted briefly at using hemicontinuous maps of sets instead of or in addition to probability distributions, and I think that's a big part of what makes this framework exciting. Maybe if one takes a bilimit of Scott domains or whatever, you can have an agent that can be understood simultaneously on multiple levels, and so evade commitment races. I haven't thought much about that.

I think you're right that the epiphenomenal utility functions are not good. I still think using reflective oracles is a good idea. I wonder if the power of Kakutani fixed points (magical reflective reasoning) can be combined with the power of Kleene fixed points (iteratively refining commitments).

Comment by Nisan on Is there a "coherent decisions imply consistent utilities"-style argument for non-lexicographic preferences? · 2021-06-30T18:43:53.386Z · LW · GW

Oh you're right, I was confused.

Comment by Nisan on Is there a "coherent decisions imply consistent utilities"-style argument for non-lexicographic preferences? · 2021-06-29T21:10:01.514Z · LW · GW

I've no idea if this example has appeared anywhere else. I'm not sure how seriously to take it.

Comment by Nisan on Is there a "coherent decisions imply consistent utilities"-style argument for non-lexicographic preferences? · 2021-06-29T21:08:08.539Z · LW · GW

Consider the following game: At any time $t \in [0,1)$, you may say "stop!", in which case you'll get the lottery that resolves to an outcome you value at $1$ with probability $t$, and to an outcome you value at $0$ with probability $1-t$. If you don't say "stop!" in that time period, we set your payoff to $0$.

Let's say at every instant in $[0,1)$ you can decide to either say "stop!" or to wait a little longer. (A dubious assumption, as it lets you make infinitely many decisions in finite time.) Then you'll naturally wait until $t = 1$ and get a payoff of $0$. It would have been better for you to say "stop!" at, say, $t = 0.99$, in which case you'd get an expected payoff of $0.99$.
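
To make the discontinuity explicit, write $V(s)$ for the expected payoff of the policy "say 'stop!' at time $s$" (the particular numbers above are just for illustration):

```latex
V(s) =
\begin{cases}
  s & \text{if } s \in [0, 1), \\
  0 & \text{if you never stop}.
\end{cases}
```

Every finite delay increases $V$, but the limit of always delaying is no better than stopping at $s = 0$: the supremum $\sup_s V(s) = 1$ is never attained.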

You can similarly argue that it's irrational for your utility to be discontinuous in the amount of wine in your glass: Otherwise you'll let the waiter fill up your glass and then be disappointed the instant it's full.

Comment by Nisan on Sam Altman and Ezra Klein on the AI Revolution · 2021-06-27T22:04:53.027Z · LW · GW

I haven't seen a writeup anywhere of how it was trained.

Comment by Nisan on Sam Altman and Ezra Klein on the AI Revolution · 2021-06-27T07:03:51.769Z · LW · GW

The instruction-following model Altman mentions is documented here. I didn't notice it had been released!

Comment by Nisan on How one uses set theory for alignment problem? · 2021-05-30T07:37:20.938Z · LW · GW

See section 2 of this Agent Foundations research program and citations for discussion of the problems of logical uncertainty, logical counterfactuals, and the Löbian obstacle. Or you can read this friendly overview. Gödel-Löb provability logic has been used here.

I don't know of any application of set theory to agent foundations research. (Like large cardinals, forcing, etc.)

Comment by Nisan on Dario Amodei leaves OpenAI · 2021-05-28T20:02:25.556Z · LW · GW

Ah, 90% of the people discussed on this post are now working for Anthropic, along with a few other ex-OpenAI safety people.

Comment by Nisan on The Homunculus Problem · 2021-05-28T19:28:01.372Z · LW · GW

Here's a fun and pointless way one could rescue the homunculus model: There's an infinite regress of homunculi, each of which sees a reconstructed image. As you pass up the chain of homunculi, the shadow gets increasingly attenuated, approaching but never reaching complete invisibility. Then we identify "you" with a suitable limit of the homunculi, and what you see is the entire sequence of images under some equivalence relation which "forgets" how similar A and B were early in the sequence, but "remembers" the presence of the shadow.

Comment by Nisan on The Homunculus Problem · 2021-05-28T19:06:45.603Z · LW · GW

The homunculus model says that all visual perception factors through an image constructed in the brain. One should be able to reconstruct this image by asking a subject to compare the brightness of pairs of checkerboard squares. A simplistic story about the optical illusion is that the brain detects the shadow and then adjusts the brightness of the squares in the constructed image to exactly compensate for the shadow, so the image depicts the checkerboard's inferred intrinsic optical properties. Such an image would have no shadow, and since that's all the homunculus sees, the homunculus wouldn't perceive a shadow.

That story is not quite right, though. Looking at the picture, the black squares in the shadow do seem darker than the dark squares outside the shadow, and similarly for the white squares. I think if you reconstructed the virtual image using the above procedure you'd get an image with an attenuated shadow. Maybe with some more work you could prove that the subject sees a strong shadow, not an attenuated one, and thereby rescue Abram's argument.

Edit: Sorry, misread your comment. I think the homunculus theory is that in the real image, the shadow is "plainly visible", but the reconstructed image in the brain adjusts the squares so that the shadow is no longer present, or is weaker. Of course, this raises the question of what it means to say the shadow is "plainly visible"...

Comment by Nisan on The Homunculus Problem · 2021-05-27T23:29:33.753Z · LW · GW

This is the sort of problem Dennett's Consciousness Explained addresses. I wish I could summarize it here, but I don't remember it well enough.

It uses the heterophenomenological method, which means you take a dataset of earnest utterances like "the shadow appears darker than the rest of the image" and "B appears brighter than A", and come up with a model of perception/cognition to explain the utterances. In practice, as you point out, homunculus models won't explain the data. Instead the model will say that different cognitive faculties will have access to different pieces of information at different times.

Comment by Nisan on The Argument For Spoilers · 2021-05-22T20:30:14.045Z · LW · GW

Very interesting. I would guess that to learn in the presence of spoilers, you'd need not only a good model of how you think, but also a way of updating the way you think according to the model's recommendations. And I'd guess this is easiest in domains where your object-level thinking is deliberate rather than intuitive, which would explain why the flashcard task would be hardest for you.

When I read about a new math concept, I eventually get the sense that my understanding of it is "fake", and I get "real" understanding by playing with the concept and getting surprised by its behavior. I assumed the surprise was essential for real understanding, but maybe it's sufficient to track which thoughts are "real" vs. "fake" and replace the latter with the former.

Comment by Nisan on The Argument For Spoilers · 2021-05-21T21:31:20.755Z · LW · GW

Have you had any success learning the skill of unseeing?

  • Are you able to memorize things by using flashcards backwards (looking at the answer before the prompt) nearly as efficiently as using them the usual way?
  • Are you able to learn a technical concept from worked exercises nearly as well as by trying the exercises before looking at the solutions?
  • Given a set of brainteasers with solutions, can you accurately predict how many of them you would have been able to solve in 5 minutes if you had not seen the solutions?
Comment by Nisan on Reflexive Oracles and superrationality: prisoner's dilemma · 2021-05-13T21:31:04.970Z · LW · GW

See also this comment from 2013 that has the computable version of NicerBot.

Comment by Nisan on Prisoner's Dilemma (with visible source code) Tournament · 2021-05-13T21:26:54.011Z · LW · GW

This algorithm is now published in "Robust program equilibrium" by Caspar Oesterheld, Theory and Decision (2019) 86:143–159, https://doi.org/10.1007/s11238-018-9679-3, which calls it ϵGroundedFairBot.

The paper cites this comment by Jessica Taylor, which has the version that uses reflective oracles (NicerBot). Note also the post by Stuart Armstrong it's responding to, and the reply by Vanessa Kosoy. The paper also cites a private conversation with Abram Demski. But as far as I know, the parent to this comment is older than all of these.
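
For concreteness, here's a minimal Python sketch of the idea behind ϵGroundedFairBot (my paraphrase, not code from the paper), representing programs as callables that receive the opponent's program rather than literal source code:

```python
import random

EPSILON = 0.1  # probability of cooperating unconditionally ("grounding" the recursion)

def epsilon_grounded_fairbot(opponent):
    """With probability EPSILON, cooperate outright; otherwise simulate the
    opponent playing against this very program and copy its move."""
    if random.random() < EPSILON:
        return "C"
    return opponent(epsilon_grounded_fairbot)

def cooperate_bot(opponent):
    return "C"

def defect_bot(opponent):
    return "D"

print(epsilon_grounded_fairbot(cooperate_bot))             # "C"
print(epsilon_grounded_fairbot(defect_bot))                # "D" with probability 1 - EPSILON
print(epsilon_grounded_fairbot(epsilon_grounded_fairbot))  # "C": the mutual simulation
# terminates with probability 1, since each level grounds out with probability EPSILON.
```

Against a copy of itself it cooperates, and against a defector it (almost always) defects, which is the tit-for-tat-like behavior the robust program equilibrium construction is after.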

Comment by Nisan on Challenge: know everything that the best go bot knows about go · 2021-05-11T10:05:42.256Z · LW · GW

Or maybe it means we train the professional in the principles and heuristics that the bot knows. The question is if we can compress the bot's knowledge into, say, a 1-year training program for professionals.

There are reasons to be optimistic: We can discard information that isn't knowledge (lossy compression). And we can teach the professional in human concepts (lossless compression).

Comment by Nisan on Challenge: know everything that the best go bot knows about go · 2021-05-11T05:44:58.846Z · LW · GW

This sounds like a great goal, if you mean "know" in a lazy sense; I'm imagining a question-answering system that will correctly explain any game, move, position, or principle as the bot understands it. I don't believe I could know all at once everything that a good bot knows about go. That's too much knowledge.

Comment by Nisan on April 15, 2040 · 2021-05-06T03:39:09.354Z · LW · GW

The assistant could have a private key generated by the developer, held in a trusted execution environment. The assistant could invoke a procedure in the trusted environment that dumps the assistant's state and cryptographically signs it. It would be up to the assistant to make a commitment in such a way that it's possible to prove that a program with that state will never try to break the commitment. Then to trust the assistant you just have to trust the datacenter administrator not to tamper with the hardware, and to trust the developer not to leak the private key.
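
A toy version of just the signing step, using the PyNaCl library (the names and the JSON state format are illustrative, and real trusted-execution attestation involves much more than this):

```python
import json
from nacl.signing import SigningKey

# In the scenario above, this key is generated by the developer and lives
# inside the trusted execution environment; only the public half is released.
tee_signing_key = SigningKey.generate()
verify_key = tee_signing_key.verify_key  # published so others can check dumps

def attest_state(assistant_state: dict) -> bytes:
    """Dump the assistant's state and sign it inside the trusted environment."""
    blob = json.dumps(assistant_state, sort_keys=True).encode()
    return tee_signing_key.sign(blob)  # signature prepended to the message

# An auditor verifies that a state dump really came from the trusted environment:
signed = attest_state({"version": 42, "commitments": ["example commitment"]})
state = json.loads(verify_key.verify(signed))  # raises BadSignatureError if tampered with
print(state)
```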

Comment by Nisan on Psyched out · 2021-05-06T03:24:30.063Z · LW · GW

Welcome! I recommend checking out the Sequences. It's what I started with.

Comment by Nisan on April 15, 2040 · 2021-05-05T06:42:54.158Z · LW · GW

Yep, that is a good question and I'm glad you're asking it!

I don't know the answer. One part of it is whether the assistant is able and willing to interact with me in a way that is compatible with how I want to grow as a person.

Another part of the question is whether people in general want to become more prosocial or more cunning, or whatever. Or if they even have coherent desires around this.

Another part is whether it's possible for the assistant to follow instructions while also helping me reach my personal growth goals. I feel like there's some wiggle room there. What if, after I asked whether I'd be worse off if the government collapsed, the assistant had said "Remember when we talked about how you'd like to get better at thinking through the consequences of your actions? What do you think would happen if the government collapsed, and how would that affect people?"

Comment by Nisan on April 15, 2040 · 2021-05-05T06:28:01.552Z · LW · GW

Yeah, I would be very nervous about making an exception to my assistant's corrigibility. Ultimately, it would be prudent to be able to make some hard commitments after thinking very long and carefully about how to do that. In the meantime, here are a couple corrigibility-preserving commitment mechanisms off the top of my head:

  • Escrow: Put resources in a dumb incorrigible box that releases them under certain conditions.
  • The AI can incorrigibly make very short-lived commitments during atomic actions (like making a purchase).

Are these enough to maintain competitiveness?

Comment by Nisan on April 15, 2040 · 2021-05-04T23:18:51.231Z · LW · GW

Yeah, I spend at least as much time interacting with my phone/computer as with my closest friends. So if my phone were smarter, it would affect my personal development as much as my friends do, which is a lot.

Comment by Nisan on Dario Amodei leaves OpenAI · 2021-05-03T08:33:27.117Z · LW · GW

Also Daniela Amodei, Nicholas Joseph, and Amanda Askell apparently left in December, January, and February, according to their LinkedIn profiles.

Comment by Nisan on Homeostatic Bruce · 2021-04-09T06:04:05.434Z · LW · GW

Probably this one: Stuck in the middle with Bruce (first mentioned on Less Wrong here).

Comment by Nisan on My research methodology · 2021-03-24T05:37:15.915Z · LW · GW

Red-penning is a general problem-solving method that's kinda similar to this research methodology.

Comment by Nisan on Dario Amodei leaves OpenAI · 2021-02-05T17:43:13.850Z · LW · GW

Also Jacob Jackson left, saying his new project is same.energy.

Comment by Nisan on Dario Amodei leaves OpenAI · 2021-02-05T17:06:58.753Z · LW · GW

Gwern reports that Tom Brown, Sam McCandlish, Tom Henighan, and Ben Mann have also left.

Comment by Nisan on Some AI research areas and their relevance to existential safety · 2020-11-19T09:02:18.353Z · LW · GW

I'd believe the claim if I thought that alignment was easy enough that AI products that pass internal product review and which don't immediately trigger lawsuits would be aligned enough to not end the world through alignment failure. But I don't think that's the case, unfortunately.

It seems like we'll have to put special effort into both single/single alignment and multi/single "alignment", because the free market might not give it to us.

Comment by Nisan on Some AI research areas and their relevance to existential safety · 2020-11-19T08:52:12.290Z · LW · GW

I'd like more discussion of the claim that alignment research is unhelpful-at-best for existential safety because of it accelerating deployment. It seems to me that alignment research has a couple paths to positive impact which might balance the risk:

  1. Tech companies will be incentivized to deploy AI with slipshod alignment, which might then take actions that no one wants and which pose existential risk. (Concretely, I'm thinking of out with a whimper and out with a bang scenarios.) But the existence of better alignment techniques might legitimize governance demands, i.e. demands that tech companies don't make products that do things that literally no one wants.

  2. Single/single alignment might be a prerequisite to certain computational social choice solutions. E.g., once we know how to build an agent that "does what [human] wants", we can then build an agent that "helps [human 1] and [human 2] draw up incomplete contracts for mutual benefit subject to the constraints in the [policy] written by [human 3]". And slipshod alignment might not be enough for this application.

Comment by Nisan on Singularity Mindset · 2020-09-04T19:51:12.298Z · LW · GW

2 years later, do you have an answer to this?

Comment by Nisan on Risk is not empirically correlated with return · 2020-08-24T07:42:45.810Z · LW · GW

Hm, I think all I meant was:

"If you have two assets with the same per-share price, and asset A's value per share has a higher variance than asset B's value per share, then asset A's per-share value must have a higher expectation than asset B's per-share value."

I guess I was using "cost" to mean "price" and "return" to mean "discounted value or earnings or profit".

Comment by Nisan on Maybe Lying Can't Exist?! · 2020-08-24T04:55:53.880Z · LW · GW

(I haven't read any of the literature on deception you cite, so this is my uninformed opinion.)

I don't think there's any propositional content at all in these sender-receiver games. As far as the P. redator is concerned, the signal means "I want to eat you" and the P. rey wants to be eaten.

If the environment were somewhat richer, the agents would model each other as agents, and they'd have a shared understanding of the meaning of the signals, and then I'd think we'd have a better shot of understanding deception.

Comment by Nisan on What Would I Do? Self-prediction in Simple Algorithms · 2020-07-20T14:28:18.350Z · LW · GW

Ah, are you excited about Algorithm 6 because the recurrence relation feels iterative rather than topological?

Comment by Nisan on Self-sacrifice is a scarce resource · 2020-07-20T02:15:08.449Z · LW · GW

Like, if you’re in a crashing airplane with Eliezer Yudkowsky and Scott Alexander (or substitute your morally important figures of choice) and there are only two parachutes, then sure, there’s probably a good argument to be made for letting them have the parachutes.

This reminds me of something that happened when I joined the Bay Area rationalist community. A number of us were hanging out and decided to pile in a car to go somewhere, I don't remember where. Unfortunately there were more people than seatbelts. The group decided that one of us, who was widely recognized as an Important High-Impact Person, would definitely get a seatbelt; I ended up without a seatbelt.

I now regret going on that car ride. Not because of the danger; it was a short drive and traffic was light. But the self-signaling was unhealthy. I should have stayed behind, to demonstrate to myself that my safety is important. I needed to tell myself "the world will lose something precious if I die, and I have a duty to protect myself, just as these people are protecting the Important High-Impact Person".

Everyone involved in this story has grown a lot since then (me included!) and I don't have any hard feelings. I bring it up because offhand comments or jokes about sacrificing one's life for an Important High-Impact Person sound a bit off to me; they possibly reveal an unhealthy attitude towards self-sacrifice.

(If someone actually does find themselves in a situation where they must give their life to save another, I won't judge their choice.)

Comment by Nisan on Classifying games like the Prisoner's Dilemma · 2020-07-13T04:20:44.205Z · LW · GW

Von Neumann and Morgenstern also classify the two-player games, but they get only two games, up to equivalence. The reason is they assume the players get to negotiate beforehand. The only properties that matter for this are:

  • The maximin value $v_i$ for each player $i$, which represents each player's best alternative to negotiated agreement (BATNA).

  • The maximum total utility $w = \max(u_1 + u_2)$.

There are two cases:

  1. The inessential case, $v_1 + v_2 = w$. This includes the Abundant Commons for some parameter values. No player has any incentive to negotiate, because the BATNA is Pareto-optimal.

  2. The essential case, $v_1 + v_2 < w$. This includes all other games in the OP.

It might seem strange that VNM consider, say, Cake Eating to be equivalent to Prisoner's Dilemma. But in the VNM framework, Player 1 can threaten not to eat cake in order to extract a side payment from Player 2, and this is equivalent to threatening to defect.
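
As a sanity check, here's a short Python sketch that computes these quantities for two concrete games and applies the essential/inessential test (pure-strategy maximin suffices for these examples; the payoff numbers are standard textbook values rather than the OP's parameterization):

```python
# Payoff dict maps (row action, column action) -> (row payoff, column payoff).

def maximin(payoffs, player):
    """Best payoff `player` can guarantee, assuming the worst response."""
    own_actions = {a[player] for a in payoffs}
    other_actions = {a[1 - player] for a in payoffs}
    def outcome(mine, theirs):
        key = (mine, theirs) if player == 0 else (theirs, mine)
        return payoffs[key][player]
    return max(min(outcome(m, t) for t in other_actions) for m in own_actions)

def classify(payoffs):
    v1, v2 = maximin(payoffs, 0), maximin(payoffs, 1)
    w = max(u1 + u2 for u1, u2 in payoffs.values())
    return "inessential" if v1 + v2 == w else "essential"

# Prisoner's Dilemma with standard payoffs (0 = cooperate, 1 = defect):
pd = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
# A stand-in for Cake Eating: eating cake (action 0) dominates, and the
# resulting outcome is already Pareto-optimal.
cake = {(0, 0): (3, 3), (0, 1): (3, 0), (1, 0): (0, 3), (1, 1): (0, 0)}

print(classify(pd))    # essential:   v1 + v2 = 2 < 6 = w
print(classify(cake))  # inessential: v1 + v2 = 6 = w
```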

Comment by Nisan on [deleted post] 2020-07-13T04:10:20.223Z
  • item
    • subitem
    • subitem
  • item
Comment by Nisan on [deleted post] 2020-07-13T04:03:22.664Z

Von Neumann and Morgenstern also classify the two-player games, but they get only two games, up to equivalence. The reason is they assume the players get to negotiate beforehand. For them the only properties that matter are:

  • The maximin value $v_i$ for each player $i$, which represents each player's best alternative to negotiated agreement (BATNA).

  • The maximum total utility $w = \max(u_1 + u_2)$.

There are two cases:

  1. The inessential case, $v_1 + v_2 = w$. This includes the Abundant Commons for some parameter values. No player has any incentive to negotiate, because the BATNA is Pareto-optimal.

  2. The essential case, $v_1 + v_2 < w$. This includes all other games in the OP.

It might seem strange that VNM consider Cake Eating to be equivalent to Prisoner's Dilemma. But in the VNM framework, Player 1 can threaten not to eat cake in order to extract a side payment from Player 2, and this is the same as threatening to defect.

Comment by Nisan on Your Prioritization is Underspecified · 2020-07-11T21:18:50.248Z · LW · GW

There is likely much more here than just 'cognition is expensive'

In particular, prioritization involves negotiation between self-parts with different beliefs/desires, which is a tricky kind of cognition. A suboptimal outcome of negotiation might look like the Delay strategy.

Comment by Nisan on Learning the prior · 2020-07-06T03:54:00.815Z · LW · GW

In this case humans are doing the job of transferring from $D$ to $D^*$, and the training algorithm just has to generalize from a representative sample of $D^*$ to the test set.

Comment by Nisan on Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern) · 2020-05-22T16:42:45.987Z · LW · GW

Thanks for the references! I now know that I'm interested specifically in cooperative game theory, and I see that Shoham & Leyton-Brown has a chapter on "coalitional game theory", so I'll take a look.

Comment by Nisan on Conflict vs. mistake in non-zero-sum games · 2020-05-21T15:56:47.448Z · LW · GW

If you have two strategy pairs $(a_1, b_1)$ and $(a_2, b_2)$, you can form a convex combination of them like this: Flip a weighted coin; play $(a_1, b_1)$ on heads and $(a_2, b_2)$ on tails. This scheme requires both players to see the same coin flip.
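
In payoff terms (with generic names for the two strategy pairs and $p$ for the coin's bias), the expected payoff vector of this scheme is

```latex
p \cdot \bigl( u_1(a_1, b_1),\, u_2(a_1, b_1) \bigr)
\;+\;
(1 - p) \cdot \bigl( u_1(a_2, b_2),\, u_2(a_2, b_2) \bigr),
```

so with a public coin the players can achieve any point in the convex hull of the pure-strategy payoff vectors.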

Comment by Nisan on Why you should minimax in two-player zero-sum games · 2020-05-17T20:50:54.673Z · LW · GW

A proof of the lemma:
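
Assuming the lemma in question is the weak inequality between the maximin and minimax values of a payoff function $f$, the standard argument is:

```latex
\min_y f(x_0, y) \;\le\; f(x_0, y_0) \;\le\; \max_x f(x, y_0)
\quad \text{for all } x_0, y_0,
\qquad \text{hence} \qquad
\max_x \min_y f(x, y) \;\le\; \min_y \max_x f(x, y).
```

(The left side of the first inequality doesn't depend on $y_0$ and the right side doesn't depend on $x_0$, so we can take the maximum over $x_0$ on the left and the minimum over $y_0$ on the right.)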

Comment by Nisan on Multi-agent safety · 2020-05-16T16:51:12.202Z · LW · GW

Ah, ok. When you said "obedience" I imagined too little agency — an agent that wouldn't stop to ask clarifying questions. But I think we're on the same page regarding the flavor of the objective.

Comment by Nisan on Multi-agent safety · 2020-05-16T10:20:01.322Z · LW · GW

Might not intent alignment (doing what a human wants it to do, being helpful) be a better target than obedience (doing what a human told it to do)?

Comment by Nisan on Stop saying wrong things · 2020-05-03T07:39:37.822Z · LW · GW

Also Dan Luu's essay 95%-ile isn't that good, where he claims that even 95th-percentile Overwatch players routinely make silly mistakes, suggesting that you can get to that level by not making mistakes.

Comment by Nisan on Conflict vs. mistake in non-zero-sum games · 2020-04-22T05:45:47.848Z · LW · GW

Oh, this is quite interesting! Have you thought about how to make it work with mixed strategies?

I also found your paper about the Kripke semantics of PTE. I'll want to give this one a careful read.

You might be interested in: Robust Cooperation in the Prisoner's Dilemma (Barasz et al. 2014), which kind of extends Tennenholtz's program equilibrium.

Comment by Nisan on Jan Bloch's Impossible War · 2020-04-09T15:21:51.013Z · LW · GW

Ah, thank you! I have now read the post, and I didn't find it hazardous either.

Comment by Nisan on Jan Bloch's Impossible War · 2020-04-08T07:07:55.932Z · LW · GW

More info on the content or severity of the neuropsychological and evocation infohazards would be welcome. (The WWI warning is helpful; I didn't see that the first time.)

Examples of specific evocation hazards:

  • Images of gore
  • Graphic descriptions of violence
  • Flashing lights / epilepsy trigger

Examples of specific neuropsychological hazards:

  • Glowing descriptions of bad role models
  • Suicide baiting

I know which of these hazards I'm especially susceptible to and which I'm not.

I appreciate that Hivewired thought to put these warnings in. But I'm kind of astounded that enough readers plowed through the warnings and read the post (with the expectation that they would be harmed thereby?) to cause it to be promoted.