How much do we know about how brains learn? 2020-01-24T14:46:47.185Z · score: 8 (4 votes)
[Link] "Doing being rational: polymerase chain reaction" by David Chapman 2019-12-13T23:54:45.189Z · score: 11 (6 votes)
Link: An exercise: meta-rational phenomena | Meaningness 2019-10-21T16:56:24.443Z · score: 9 (4 votes)
Paper on qualitative types or degrees of knowledge, with examples from medicine? 2019-06-15T00:31:56.912Z · score: 5 (2 votes)
Flagging/reporting spam *posts*? 2018-05-23T16:14:11.515Z · score: 6 (2 votes)


Comment by kenny on Is the Covid-19 crisis a good time for x-risk outreach? · 2020-03-29T03:19:39.948Z · score: 1 (1 votes) · LW · GW

From a comment I made on this question:

(... I'd expect it [outreach] to backfire).


I think AI-risk outreach should focus on the existing or near-term non-friendly AI that people already hate or distrust (and with some good reason) – not as an end goal, but as part of a campaign to bridge the inferential distance from people's current understanding to the larger risks we imagine and wish to avoid.

Given the second part, I still think one should do no more outreach than usual – and definitely should not tie x-risk, or any specific non-pandemic x-risk, to the current pandemic.


It just occurred to me that the form of the outreach, and especially the targeted audience of new outreach campaigns, could be decisive for my answer. When I first read your question, I immediately imagined things like ads on popular websites, in YouTube videos, or even on TV. Perhaps that wasn't what you imagined by "outreach". You did write:

I wouldn't expect anyone to be willing to open their wallets right now, but it could be a good time to "plant the seed".

I think outreach specifically intended to "plant the seed" – but not soliciting funds – could be very much worth doing now. And, because you're not looking for money, you should target the people most likely to 'spread the seed', e.g. public intellectuals.

(I still think you're going to have a hard time selling non-friendly-AI risk, especially given that approximately everyone is going to try to twist stories of the pandemic to their advantage. {Global warming / climate change} is an obvious likely competitor; as is universal healthcare in the U.S.)

Comment by kenny on Is the Covid-19 crisis a good time for x-risk outreach? · 2020-03-29T02:53:44.596Z · score: 1 (1 votes) · LW · GW

I agree with your main criticism. It's well put too!

That's a scary possibility; I would feel much safer ...

Maybe doing this is the best that one can do (so ... shut up and multiply). I don't think it is (because I'd expect it to backfire).

(But I think we should also pursue teaching people how to think rationally.)

I think AI-risk outreach should focus on the existing or near-term non-friendly AI that people already hate or distrust (and with some good reason) – not as an end goal, but as part of a campaign to bridge the inferential distance from people's current understanding to the larger risks we imagine and wish to avoid.

Comment by kenny on Is the Covid-19 crisis a good time for x-risk outreach? · 2020-03-29T02:38:56.293Z · score: 1 (1 votes) · LW · GW

I would seriously consider not doing more outreach than you are now – possibly for several years.

In the near-term, I think significantly more people will find x-risk on their own.

Comment by kenny on Is the coronavirus the most important thing to be focusing on right now? · 2020-03-28T23:40:45.220Z · score: 1 (1 votes) · LW · GW

We should defer to people with more domain expertise exactly as much as we would normally do (all else being equal).

Almost all of what's posted to and discussed on this site is 'non-original work' (or, at best, original derivative work). That's our comparative advantage! Interpreting and synthesizing others' work is what we do best, and this single issue immensely affects both every regular user and any potential visitor.

There's no reason why we can't continue to focus long-term on our current priorities – but the pandemic affects all of our abilities to do so and I don't think any of us can completely ignore this crisis.

Comment by kenny on Is the coronavirus the most important thing to be focusing on right now? · 2020-03-28T23:33:49.546Z · score: 1 (1 votes) · LW · GW

I'm much more inclined to accept, uncritically, posts and questions from users on whatever topics they want to discuss.

I think the pandemic is likely to motivate a lot more non-pandemic work and activity here even if the pandemic continues to account for most of the total activity.

Comment by kenny on Is the coronavirus the most important thing to be focusing on right now? · 2020-03-28T23:30:16.537Z · score: 1 (1 votes) · LW · GW

Even for people that are working professionally on x-risk, it's probably harder now, and likely to continue to be harder, to focus on x-risk anyways.

Comment by kenny on Is the coronavirus the most important thing to be focusing on right now? · 2020-03-28T23:27:43.640Z · score: 1 (1 votes) · LW · GW

My intuition is that from here on out it's going to be hard to find steps we can take that will have even a moderate impact on our wellbeing.

It's going to be hard now but it was easy before now?

I think the site regulars have a comparative advantage in thinking (and writing about those thoughts) and that we'll make (relatively) good judgements about how much attention we should be paying to the pandemic.

I think it's just as likely that we will continue to help as much as we already have, and I think that help has had a real impact. A lot of this 'work' seems broadly useful too, beyond just the current crisis.

Comment by kenny on Does the 14-month vaccine safety test make sense for COVID-19? · 2020-03-28T21:55:12.194Z · score: 1 (1 votes) · LW · GW

This would be nice to have for 'magic numbers' in general – as is common in well-documented source code.
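A hypothetical sketch of what I mean (the constant name, the cited-source placeholder, and the helper function here are all my own invention, purely for illustration):

```python
# Hypothetical illustration of documenting a 'magic number' rather than
# leaving it bare. The name and the source note are invented examples.

# 14 months: the safety-testing window under discussion. A documented
# constant records *why* the value is what it is and where it came from,
# so later readers can audit or update it instead of cargo-culting it.
SAFETY_TEST_MONTHS = 14  # source: <cite the regulation or paper here>

def safety_test_complete(months_elapsed: int) -> bool:
    """True once the documented safety-testing window has elapsed."""
    return months_elapsed >= SAFETY_TEST_MONTHS

# Contrast with the undocumented version, which invites blind copying:
#   TIMEOUT = 14
```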

Comment by kenny on Why Telling People They Don't Need Masks Backfired · 2020-03-28T17:43:16.800Z · score: 1 (1 votes) · LW · GW

This is a general problem – titles or headlines making stronger claims than the article – and it seems to occur because the title or headline is chosen by a different person, or different people, than whoever wrote the article.

Comment by kenny on Mazes Sequence Roundup: Final Thoughts and Paths Forward · 2020-03-18T00:40:54.383Z · score: 1 (1 votes) · LW · GW

How is that an isolated demand for rigor? I asked why (or how) you believed what you claimed, and that seemed like pertinent info. I think you misinterpreted me as implying that you couldn't have a good reason because you hadn't worked at a restaurant before (and, further, implying that if you had, you'd agree with me). I was really just curious why you believe what you claimed. I can't imagine what history of mine you consulted to conclude that my question to you was an isolated demand; care to share what that was?

Personally, I've worked at two restaurants, mostly delivering food at both. One was a pizza restaurant that also had on-site dining, i.e. sit-down, full-service dining. The other was a delivery (or pickup) pizza restaurant only. I've also worked at a 'smoothie shop', which is probably legally considered a restaurant or something similar, but is not a prototypical 'restaurant'.

There are no bullshit jobs in restaurants, there's no ambiguity about what people should be doing, people's employment status usually depends much more heavily on performance than in mazes, etc.

These are really absolute statements! [Emphasis mine in some quotes of you below.]

"There are no bullshit jobs in restaurants" – that's absolutely not true! Many restaurants have been killed because of someone with a bullshit job, e.g. an owner with a bullshit manager job. I would agree with 'there are few, or fewer, bullshit jobs in restaurants compared to other businesses', but I'm not even sure that holds compared to similarly sized businesses.

"there's no ambiguity about what people should be doing" – this is also absolutely not true! No business, even a successful restaurant, is busy with 'direct work' at every single moment. What should people be doing during 'down time'? Whatever the answer was, it wasn't obvious to me – i.e. there IS ambiguity – every time that happened, and various managers frequently asked me to perform wildly different tasks or projects. Even in 'crunch time' it's not always obvious what people should be doing. At the smoothie shop where I worked, I would often have to decide something like 'Should I continue taking orders, help fill orders, or start washing some of the blender containers now instead?'. And sometimes I was the effective manager at the time, so I'd have to make the decision – resolve that particular ambiguity myself. And this – resolving or handling ambiguity – is exactly what a good manager in a restaurant or similar business does that's so valuable.

"people's employment status usually depends much more heavily on performance than in mazes" – you're begging the question, since you're assuming restaurants aren't mazes. You might be surprised that, in my experience, some people are retained (i.e. their "employment status" remains 'employed') long after their performance demonstrates that they should be let go. I've found smaller businesses to be less ruthless, generally, about firing people than larger businesses. I think your statement is also ambiguous because a person's 'performance' – in a maze – is a very real thing even if it's not object-level productivity. That's part of why mazes are so toxic! They actively corrupt more objective measures of productivity, and thus of performance.

Larger companies become more maze like, but mazes can't survive competition, so companies under intense competition have to stay small to avoid becoming mazes.

I don't agree that mazes can't survive competition. I think mazes are parasitic on { direct / object level } productivity. But, as with biological parasites, the fact that the productive organization or living organism is under intense competition doesn't outright prohibit mazes or parasitism. And, because most competition is most intense between or among similar organizations or organisms (i.e. within an industry, niche, or species), and because both mazes and parasites are generally contagious, we should expect similar organizations or organisms to have (roughly) similar levels of either.

'Competition' is also ambiguous. 'Anti-competitive' behavior is in one (very real) sense 'cheating', but in another (also very real) sense it's just another move in the larger game, on another 'level'. Mazes and parasites are themselves under (sometimes intense) competition, so we should expect a general 'arms race' between maze-runners and the productive portion of an organization, and between parasites and 'productive' organisms. Maybe even 'symbiosis' is just the 'loser' eking out some kind of advantage from what was formerly a purer parasite.

I think competition provides a relatively modest negative 'pressure' against mazes, but there's no level of competition that prevents them from forming or actively destroys them. I think mazes are more often a chronic, and sometimes terminal, condition that afflicts any and every organization to a considerable degree.

Comment by kenny on More writeups! · 2020-03-05T01:51:48.166Z · score: 3 (2 votes) · LW · GW

Agreed about moar writeups (please).

I find spoken audio both too distracting and not distracting enough – too distracting to do something else but not distracting enough to prevent me from wanting to do something else.

But video, even just of the recording of a podcast, has enough extra input/stimulus to work wonderfully.

Given the above, I've found some videos to be wonderful 'writeups', many episodes of Joe Rogan's podcast being good examples. In particular, his episode with Daryl Davis was a good 'writeup' of something like 'what I did that (indirectly) resulted in hundreds of people quitting the Ku Klux Klan and similar organizations'.

Comment by kenny on Mazes Sequence Roundup: Final Thoughts and Paths Forward · 2020-03-05T00:29:43.296Z · score: 2 (2 votes) · LW · GW

You're right about the link (the first one in what you quoted). I read a little of it originally, but now, tho I'm still only half-way thru, I realize it's much better than I first judged!

Now I'm really getting some Atlas-Shrugged-Ayn-Rand vibes from this series of posts, and especially from what I imagine 'The Journey of the Sensitive One' to cover.

Her language – terminology – was very different, but, as just one example, all of the villains were clearly part of the same oligarchic Maze of power.

Comment by kenny on Mazes Sequence Roundup: Final Thoughts and Paths Forward · 2020-03-05T00:09:56.194Z · score: 1 (1 votes) · LW · GW

Of course wonderful bounties etc. exist because that's what competitive organizations optimize for! If they optimized for something that people don't want to buy, they would be out-competed by someone else.

Mazes are internal politics (mostly), or they're internal to many related organizations.

I've also pointed out that there's strong negative feedback, coming from outside those organizations, on how maze-like they can become. Even organizations that are themselves very maze-like often directly depend on products or services being (mostly) delivered by other organizations, regardless of those organizations' maze-likeness. That is a very real restraint on mazes, simply because they are costly.

And you're right that competition – between organizations – should constrain the growth or maintenance of mazes; and it does. It also reduces slack, a prime target of maze-runners, but it reduces everything else too, e.g. wages, employee work-life balance, and overall profits.

But there's also a very real difference between 'appropriating' a significant portion of the organization's profits and literally losing money. Note that the latter directly threatens the maze-runners too.

What might be a little hard to keep track of separately is that maze-runners are the kind of people that are generally willing to engage in rent seeking, regulatory capture, and other anti-competitive behavior to better secure the profits/resources they're appropriating.

It would probably be very useful if we had even a rough measure of competitiveness for different markets or industries. If we did, I'd bet that competitiveness is inversely correlated with maze-likeness.
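There is at least one standard rough measure along these lines: the Herfindahl–Hirschman Index (the sum of squared market shares), which antitrust regulators already use to gauge market concentration. A quick sketch – applying it to maze-likeness is, of course, my speculation:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.

    With shares expressed as fractions summing to 1, HHI runs from
    near 0 (perfectly fragmented market) up to 1.0 (monopoly) --
    lower means more competitive.
    """
    return sum(share * share for share in market_shares)

# Ten equal firms (fragmented) vs. one dominant firm (concentrated):
print(round(hhi([0.1] * 10), 4))       # 0.1
print(round(hhi([0.7, 0.2, 0.1]), 4))  # 0.54
```

If maze-likeness could be scored even half as crudely, the inverse correlation I'm suggesting would at least be testable.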

But Zvi wasn't claiming that 'super-perfect competition' between organizations would produce or promote mazes. It's super-perfect competition within insulated levels of managerial hierarchies that does. It's super-perfect competition in one particular labor market – middle managers – that causes mazes.

... restaurants generally aren't mazes.

Why do you believe that? Have you worked in a restaurant?

I would suspect the reason that restaurants aren't maze-like is that most of them are small, with relatively flat hierarchies. A LOT of them don't make any money, ever, either.

(I've long considered restaurants as a class to be a weird form of unintended charity. Restaurant-goers (hopefully) get to eat good food and servers and cooks and others get jobs. But a lot of restaurant owners get nothing (financially).)

Mazes can't survive competition.

All else equal, mazes are opposed by competition. But so is everything else. And perfect competition is such a bizarre situation that it will probably never literally hold, tho we may (in the far future) approach it asymptotically.

But mazes can survive any particular amount of competition strictly less than perfect even if we rightly expect them to be smaller and less severe in the face of more competition than otherwise.

Comment by kenny on "But that's your job": why organisations can work · 2020-03-04T05:28:59.609Z · score: 1 (1 votes) · LW · GW

I think 'best systems' refers to those that would be best for their object-level purpose, e.g. delivering mail as efficiently as possible. (But too much efficiency would be literally terrible for the people that work there – and at least a small number of them want to cheat anyways.)

You could also consider a more Darwinian interpretation of which are the 'best systems' – they would be the ones that receive the most resources while providing the minimum products or services demanded, i.e. those that produce the largest 'internal profit' – but still survive indefinitely. (And for cheaters, these systems are paradises.)

But I think the key negative feedback explaining why immoral mazes mostly still work is more likely that other people really do care, to some degree, that their job gets done. Apparently, many DMV offices in the U.S. are much better than they had been in the past. And the systems themselves can screw up enough and 'get themselves killed', e.g. closed, disbanded, or broken-up; or individuals in the system can be directly punished, e.g. fined, imprisoned, executed. There's a significant amount of outside pressure that can be brought to bear.

That's probably also why it's the insides of large hierarchies that become the densest immoral mazes. The leaders 'on the surface' are (relatively) public figures and thus default targets for punishment. Also, other immoral mazes probably depend on their work being done, at least for them! And the workers are in direct contact with whatever relevant portion of object-level reality there is with which to interact to do their jobs. At that level, there very much is a pronounced 'but that's (not) my job' dynamic operating.

Comment by kenny on [Link] Ignorance, a skilled practice · 2020-03-04T05:10:44.282Z · score: 1 (1 votes) · LW · GW

That's hilarious, and a little sad!

Sorry Mr. Feynman, we couldn't replicate your memories.

Comment by kenny on The Decline Effect and the Scientific Method [link] · 2020-03-04T05:05:31.491Z · score: 1 (1 votes) · LW · GW

What's the reason to not demand that all experiments be videoed in their entirety?

You seem to be trying to accommodate the way scientists and journals already operate:

Sometimes, the proposal might not end up as a wholly accurate description of the actual experiment, for a variety of reasons.

It might not be bad to accommodate them, but the primary and central purpose of science is to know – to produce shared knowledge of the world.

I think an ideal journal might allow scientists to change their registered proposal. If the journal accepts the changes, those too would be recorded in its register.

Maybe I'm in a bad mood, but it's especially galling how terrible all of this still is, e.g. NOT sharing all scientific results with the public, even for publicly funded research.

Why can't all of this be done in the open, on the researcher's blog? They register a proposal by publishing a post describing it, in as much detail as is feasible, e.g. including code they're registering to use on the data they collect. They record video of the entire experiment (where feasible); they publish that to YouTube. They publish all of their data. They perform their analysis – the exact one described in their registration post – and then publish a blog post, or a whole series of posts, about their analysis.

If the researchers want to change a registered, but un-performed, experiment, they publish a post describing their changes, in comparable detail as originally.

Blog posts don't need to be open for anyone to comment on. Researchers could explicitly invite other individuals or 'anyone with X degree in Y from an accredited institution recognized by professional association Z'.

The relevant people could comment on the registered proposal, on registered changes, on the documentation of the performance of the experiment itself, and on interpretation of the registered analysis.
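One simple mechanism (my own sketch, not an existing system) that would make such a blog-based registry verifiable: publish a cryptographic hash of the exact registered analysis script in the registration post, so anyone can later confirm that the analysis actually run is the one that was registered.

```python
import hashlib

def registration_digest(script_path: str) -> str:
    """SHA-256 digest of a registered analysis script.

    Publish this digest in the registration blog post. After the
    experiment, readers re-hash the released script; a matching
    digest confirms the analysis is byte-for-byte the registered one.
    """
    with open(script_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# At registration time:
#   digest = registration_digest("analysis.py")  # include in the post
# At publication time, any reader can check:
#   assert registration_digest("analysis.py") == digest
```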

Why do we need journals? Why do we want journals?

Comment by kenny on [Link] Ignorance, a skilled practice · 2020-03-03T22:01:48.381Z · score: 5 (3 votes) · LW · GW

This reminded me of Cargo Cult Science by Richard Feynman, particularly this part:

For example, there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and, still the rats could tell.

He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Now, from a scientific standpoint, that is an A‑Number‑1 experiment. That is the experiment that makes rat‑running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat‑running.

I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of Cargo Cult Science.

Comment by kenny on Excerpts from a larger discussion about simulacra · 2020-03-03T21:06:52.258Z · score: 1 (1 votes) · LW · GW

I don't think [2] is accurate. Certainly some people are using simulacrums "cooperatively", but only as part of a larger defection – that's the whole point: to receive benefits that are unearned per object-level reality.

I agree that all of this behavior isn't a good (central) example of 'immoral behavior', but it's certainly not good. It might be to some degree inevitable, but so are lots of bad things.

Comment by kenny on Eutopia is Scary · 2020-02-13T03:51:41.363Z · score: 1 (1 votes) · LW · GW

Getting rid of textbooks, for example—postulating that talking about science in public is socially unacceptable, for the same reason that you don't tell someone aiming to see a movie whether the hero dies at the end. A world that had rejected my beloved concept of science as the public knowledge of humankind.

That's a pretty good chunk of the premise of Anathem, tho the people in that universe didn't do it for Fun exactly.

Comment by kenny on [Link] "Doing being rational: polymerase chain reaction" by David Chapman · 2020-01-26T17:56:32.012Z · score: 1 (1 votes) · LW · GW


Comment by kenny on Is there an existing label for the category of fallacies exemplified by "paradox of tolerance"? · 2020-01-24T15:17:31.031Z · score: 4 (2 votes) · LW · GW

So, is your idea that, because of the general principle (ha) of 'intellectual charity', we should – typically at least, or maybe by default – habitually steelman principled arguments to automatically include any justified exceptions of which we're aware?

I think maybe it'd be better to simply offer our own justified exceptions, and to query our fellow reasoners about which ones they accept, for any principles under discussion.

Making the principles pure or absolute is an attempt to make the required judgement formulaic instead, often due to a cynicism about individual judgement abilities of people, ...

In general – in my experience, anyways – it's difficult to distinguish between cynicism and realism. People really are, or seem to be, pretty bad reasoners in a lot of situations. We really do seem to be still, mostly, running how-to-get-along-in-a-small-tribal-band software, particularly when doing moral reasoning. Do you really trust most people to make good moral judgements generally? I'm on the fence, tho I do lean to a kind of Taoist 'people are naturally good' stance ('attitude'). But I'm also regularly watching for strong evidence of specific people's actual moral decisions and reasoning – and 'cynicism' isn't always wrong.

Comment by kenny on [Link] "Doing being rational: polymerase chain reaction" by David Chapman · 2020-01-24T15:10:45.079Z · score: 6 (2 votes) · LW · GW

I love this comment!

In linkposts, often the content is quoted wholesale (beginning to end) or partially (First X paragraphs or so). Although the post starting with a video might have made these options more difficult than usual.

I didn't know that – thanks for the feedback!

Should I edit my post to quote the referenced content?

David Chapman's thoughts on AI might differ from yours.

At sufficient detail, everyone's thoughts about anything (sufficiently complex) differ from everyone else's. But I don't think David Chapman and I have any fundamental disagreements about AI.

"AI have memory problems"

Ooooh! That's a perfectly concise form of a criticism of neural network architectures that I've been thinking about for a long time. The networks certainly are a form of memory themselves, but not really a history, i.e. a memory of distinct and relatively discrete events or entities. Our own minds certainly seem to have that kind of memory, and it seems very hard for an arbitrary intelligent reasoner NOT to have something similar (if not exactly this).

The quoted text you included is a perfect example of this kind of thing too; thanks for including it.

Isn't there evidence that human brains/minds have what is effectively a dedicated 'causal reasoning' unit/module? It probably also relies on the 'thing memory' unit(s)/module(s) too tho.
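To make the distinction concrete, here's a toy sketch – entirely my own illustration, not any particular published architecture – of the kind of 'history' I mean: a store of discrete, retrievable events, as opposed to knowledge diffusely smeared across a network's weights:

```python
# Toy episodic memory: distinct, retrievable events, in contrast to the
# diffuse 'memory' stored in a network's weights. Purely illustrative.

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # each episode is (features, description)

    def record(self, features, description):
        """Store one discrete event."""
        self.episodes.append((features, description))

    def recall(self, query):
        """Return the description of the most similar recorded event."""
        def similarity(features):
            return sum(a * b for a, b in zip(query, features))
        return max(self.episodes, key=lambda e: similarity(e[0]))[1]

memory = EpisodicMemory()
memory.record([1.0, 0.0], "saw a red door")
memory.record([0.0, 1.0], "heard the floor creak")
print(memory.recall([0.1, 0.9]))  # closer to the second event
```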

Perhaps the distinction is:

  • Rationality is what you should do.
  • Meta-rationality is what you should do in order to "make rationality work".

While these two things can be combined under one umbrella, making definitions smaller:

  • Increases clarity (of discussion)
  • Makes it easier to talk about components
  • Makes it clear when all membership criteria for a category have been met.
  • Might help with teaching/retention

As I mentioned, or implied, in this post, I'm indifferent about the terminology. But I like all of your points and think they're good reasons to make the distinction that Chapman does. I'm going to consider doing the same!

Comment by kenny on Matthew Walker's "Why We Sleep" Is Riddled with Scientific and Factual Errors · 2019-12-14T00:23:48.396Z · score: 3 (2 votes) · LW · GW

I appreciate any debunking, at least a little.

I found this of interest, if for no other reason than to trust claims made by the author of the 'debunked' book less. The details about the specific claims debunked were also (mildly) informative.

Comment by kenny on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-12-13T22:05:28.249Z · score: 1 (1 votes) · LW · GW

Good answer!

I was thinking about people living in detached homes in residential neighborhoods, i.e. places where I would expect local politics to prevent car parks ('parking lots' in my colloquial usage) from being built at all.

Comment by kenny on Arguing about housing · 2019-12-13T21:57:57.520Z · score: 1 (1 votes) · LW · GW

There's probably (at least) something to that idea. I imagine commercial construction is constrained similarly to residential. It's pretty common to hear that commercial rents are high in the places where residential rents are too.

Comment by kenny on Book Review: Design Principles of Biological Circuits · 2019-12-02T23:34:51.617Z · score: 5 (3 votes) · LW · GW

Thanks for this post!

I was excited to read the book reviewed just based on the first few sentences!

Comment by kenny on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-12-02T20:39:36.125Z · score: 1 (1 votes) · LW · GW

Specialized cabins seem like they would hurt this idea – where would people store all of their cabins?

Comment by kenny on When do you start looking for a Boston apartment? · 2019-11-28T21:57:12.392Z · score: 1 (1 votes) · LW · GW

I'm really confused. I'm used to the NYC rental market, particularly Brooklyn, and, aside from lining up apartment-mates, the rule is that you look for a new apartment right before you're ready to move. I can't even remember seeing apartments listed for rent months in advance, tho I wouldn't be entirely surprised if it happens, e.g. for students.

Where are you getting your listings and how can you tell when the lease is intended or expected to start from the listing?

Comment by Kenny on [deleted post] 2019-11-08T03:32:02.669Z

Epsilon value maybe?

In a certain sense, this is a trivial claim. Obviously some people are going to have 'local' competitive advantages but nothing speaks louder than success so word gets out eventually.

But I'm intrigued, so maybe this is not literally valueless.

Comment by kenny on Maybe Lying Doesn't Exist · 2019-11-08T01:36:04.969Z · score: 8 (2 votes) · LW · GW

This is a great post! A lot of these points have been addressed, but this is what I wrote while reading this post:

It's not immediately clear that an 'appeal to consequences' is wrong or inappropriate in this case. Scott was explicitly considering the policy of expanding the definition of a word, not just which definition is better.

If the (chief) purpose of 'categories' (i.e. words) is to describe reality, then we should only ever invent new words, not modify existing ones. Changing words seems like a strict loss of information.

It also seems pretty evident that there are ulterior motives (e.g. political ones) behind overt or covert attempts to change the common shared meaning of a word. It's certainly appropriate to object to those motives, and to object to the consequences of the desired changes with respect to those motives. One common reason to make similar changes seems to be to exploit the current valence or 'mood' of that word and use it against people that would be otherwise immune based on the current meaning.

Some category boundaries should reflect our psychology and the history of our ideas in the local 'category space', and not be constantly revised to be better Bayesian categories. For one, it doesn't seem likely that Bayesian rationalists will be deciding the optimal category boundaries of words anytime soon.

But if the word "lying" is to actually mean something rather than just being a weapon, then the ingroup and the outgroup can't both be right.

This is confusing in the sense that it's obviously wrong, but I suspect it's intended in a much more narrow sense. It's a demonstrated fact that people assign different meanings to the 'same words'. Besides otherwise-unrelated homonyms, there's no single global community of language users where every word means the same thing for every user. That doesn't imply that words with multiple meanings don't "mean something".

Given my current beliefs about the psychology of deception, I find myself inclined to reach for words like "motivated", "misleading", "distorted", &c., and am more likely to frown at uses of "lie", "fraud", "scam", &c. where intent is hard to establish. But even while frowning internally, I want to avoid tone-policing people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they're trying to point to.

You're a filthy fucking liar and you've twisted Scott Alexander's words while knowingly ignoring his larger point; and under cover of valuing 'epistemic rationality' while leveraging your privileged command of your cult's cant.

[The above is my satire against-against tone policing. It's not possible to maintain valuable communication among a group of people without policing tone. In particular, LessWrong is great in part because of its tone.]

Comment by kenny on Maybe Lying Doesn't Exist · 2019-11-08T01:23:59.743Z · score: 2 (2 votes) · LW · GW

This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’.

I think it's still a good example, perhaps because of what you pointed out. It seems pretty clear to me that there's a sometimes significant difference between the legal and colloquial meanings of 'crime' and even bigger differences for 'criminal'.

There are many legal 'crimes' that most people would not describe as such and vice versa. "It's a crime!" is inevitably ambiguous.

Comment by kenny on Maybe Lying Doesn't Exist · 2019-11-08T01:18:20.698Z · score: 1 (1 votes) · LW · GW

It's important to be very clear on what actually happened (incl. about violations), AND to avoid punishing people. Truth and reconciliation.

I think this is a very much underrated avenue to improve lots of things. I'm a little sad at the thought that neither is likely without the looming threat of possible punishment.

Comment by kenny on Maybe Lying Doesn't Exist · 2019-11-08T01:15:55.376Z · score: 1 (1 votes) · LW · GW

I think we, and others too, are already constructing rules, tho not as a single grand taxonomy, completed as a single grand project, but piecemeal, e.g. like common law.

There have been recent shifts in ideas about what counts as 'epistemically negligent' [and that's a great phrase by the way!], at least among some groups of people with which I'm familiar. I think the people of this site, and the greater diaspora, have much more stringent standards today in this area.

Comment by kenny on Link: An exercise: meta-rational phenomena | Meaningness · 2019-11-04T19:21:41.363Z · score: 1 (1 votes) · LW · GW

I think that links to Chapman's texts should contain some disclaimer that "rationality" as defined by Chapman is something completely different from "rationality" as defined by Less Wrong.

I am of many minds about this. Sometimes I feel as you've expressed; that Chapman undersells 'rationality' and misrepresents its possibilities. Certainly LW!rationality is (mostly) aware of his specific criticisms. But I still find his writing immensely insightful as-is. And given that his audience is very different than LW, I'm inclined to accept his writing as-is too.

As for him using 'rationality' differently – that general phenomenon (of words being used differently by different people) is something that I'm all too aware of, among all the things I read and all the conversations I have. I certainly don't find his writing as painful to read as others seem to.

And maybe we should add disclaimers to all of our pages pointing out that our use of 'rationality' is idiosyncratic (with respect to everyone else in the world). I don't think there's a good solution to this.

I agree that "The LW!rationality already contains its own meta." but I think Chapman has a point that meta-rationality is something distinct from ('regular') rationality. Hence the utility of a lot of the advice that both Chapman and the LW sequence writers provide.

Chapman warns people against going from straw rationalism to nihilism (unless they accept the Buddhism-inspired wisdom). But I don't see nihilism promoted on Less Wrong. We have "something to protect". And the stories of "beisutsukai" are obviously written to inspire.

Maybe that's missing from LW? I agree that LW doesn't promote nihilism, but maybe it should do more to help otherwise-intelligent people avoid it.

And more generally, (intelligent) people really do get stuck at "straw rationality" ("level 4"), i.e. 'trapped' in the specific formalisms of which they're aware and in which they can 'operate'. We don't worship science, but lots of other people sure seem to do so.

I think the best 'trick' LW!rationality incorporated into its 'canon' is the idea of instrumental rationality. Coupled with a consequentialism scoped to our 'entire future light cone', that idea alone acts like a source of intellectual free energy capable of pushing us out of any particular formalism when (we suspect) it's not good enough for our purposes. But it's not clear, to me anyways, that that itself is 'rational'. (It is LW!rational, obviously.)

Also, I'm not sure if "the Buddhism-inspired wisdom" was dismissive, but I really enjoy his writing about Buddhism (and it's mostly published on other sites of his). From what I've read of that, he's not a Buddhist – certainly not a 'traditional' (or folk) Buddhist. He seems mostly interested in very specific schools, has his own idiosyncratic interpretations, wants a better 'modern synthesis' drawing on his favored insights, and is actively experimenting with various practices for his own purposes. He definitely rejects 'woo' (and his favorite schools seem to be relatively light on that anyways). But there's a lot of insight available too. Just off the top of my head – the tantric Buddhist "practice of views", e.g. the charnel ground and the pure land. Traditional rationality, i.e. straw rationality, is pretty dismissive of emotions. LW!rationality is much better. Chapman is mining popular religion and philosophy, in particular the branches of Buddhism he likes, for interesting and sometimes-useful info, often pertaining to emotions and what to do about them.

So, ironically, from my perspective, it is like if straw rationality is level 4, and Chapman's "meaningness" is level 5, then Less Wrong would be level 6. (Yeah, I can play this game, too.)

How seriously are you playing this game (ha)? Somewhat seriously, we're definitely around (or aiming for) his level 5. You've pointed out a lot of 'meta-rational' advice from this site (and most that's several years old now too). What would level 6 be, to you (besides 5 + 1)?

Comment by kenny on Thermal Mass Thermos · 2019-10-21T17:01:27.740Z · score: 1 (1 votes) · LW · GW

My bad – I read that follow-up and was disappointed in the last sentence:

To be safe, though, I'm going to keep using the thermal mass thermos approach.

It doesn't seem like the thermal mass thermos is strictly necessary "to be safe", but I understand your abundance of caution.

Comment by kenny on Thermal Mass Thermos · 2019-10-18T17:19:46.884Z · score: 3 (2 votes) · LW · GW

I was disappointed that Jeff didn't conclude, based on the detailed evidence he found, that it's probably fine to just pack the rice in a regular un-modified thermos.

Or am I way off in summarizing his evidence this way?

Comment by kenny on Taxing investment income is complicated · 2019-10-11T19:11:57.975Z · score: 1 (1 votes) · LW · GW

Sure, and there are good reasons for that technical terminology.

But it's weird to claim that any transfer between people, especially one that's coerced, has "no social cost". That's perhaps an unreasonable objection, particularly in this context.

Is there another term then for something generally 'beyond' 'internalizing an externality'? It just doesn't seem likely to be effective to simply impose private costs equal in magnitude to other 'social' costs and then claim victory. Maybe I'm just conflating the economic concept with a kind of accounting-like generalization of the 'match expenses to revenue' principle.

In practice, it seems counter-productive to ignore how specific tax revenues are allocated. It certainly seems most natural to me to allocate those revenues to offset the relevant 'social costs' that inspired the taxes originally.

Comment by kenny on Taxing investment income is complicated · 2019-10-01T17:04:16.187Z · score: 1 (1 votes) · LW · GW

I don't think any taxes have zero social costs. Maybe you're imagining that they have a net zero cost, i.e. where costs borne by some people are offset by gains enjoyed by others?

It's maybe off-topic, but I'm concerned by the accounting realities attendant to some of the taxes you mentioned, and similar ones, e.g. carbon taxes and cigarette taxes. It doesn't seem likely that either would or are actually internalizing externalities in practice. Cigarette taxes are often 'earmarked' or allocated to entirely unrelated goods or services, e.g. schools, and that can be disastrous when smokers actually respond to higher taxes by buying fewer (legal, i.e. taxed) cigarettes. Similarly, it doesn't seem like the costs of 'carbon' are actually internalized by the tax itself if the tax revenues themselves aren't directly used to offset those costs by, e.g. capturing carbon, reimbursing losses incurred because of 'carbon', or, more sensibly, saving to offset the expected future costs.

Comment by kenny on Is there an existing label for the category of fallacies exemplified by "paradox of tolerance"? · 2019-10-01T16:55:58.854Z · score: 1 (1 votes) · LW · GW

I agree, on the object level, that principles often are 'true' or valuable but with justified exceptions.

But I don't understand why the best response isn't just 'There are justified exceptions to those principles.', or 'I don't hold that principle to be true or valuable absolutely.'.

Comment by kenny on Is there an existing label for the category of fallacies exemplified by "paradox of tolerance"? · 2019-10-01T16:51:35.577Z · score: 1 (1 votes) · LW · GW

I'm really confused about this. It seems like you're arguing that every consideration or analysis of any principle must also include any 'justified exception'. I'm not arguing that any particular justified exception is impossible, but that it should be considered separate from the principle to which it is an exception – not part of the principle itself. Lumping together principles and their justified exceptions seems strictly less useful in general; one reason being that which exceptions are justified is yet another potential axis of disagreement. It also seems almost designed to be maximally confusing.

Are you claiming that people should adopt a rhetorical rule of assuming that 'pacifism' actually refers to the base principle and its 'justified exceptions'? How would that work in practice? In particular, what are (all of) the justified exceptions to pacifism? How should people refer to the base principle instead, e.g. when discussing which exceptions exactly are justified or not? How should people refer to the base principle in the case where they don't think any exceptions are justified?

To make this even more meta, do you presume that there are justified exceptions to every possible principle? Are there no justified exceptions to that principle?

Comment by kenny on Is there an existing label for the category of fallacies exemplified by "paradox of tolerance"? · 2019-09-20T16:55:05.109Z · score: 1 (1 votes) · LW · GW

You seem to have a lot of assumptions that probably need to be 'unpacked' (made explicit). For one, 'absolutism fallacy' isn't obviously a fallacy. Pacifism can definitely be 'absolute' and, as I claimed in my answer to this question, I don't even think that's paradoxical.

Are you trying to gather 'rhetorical ammunition' to defend 'pacifism' and 'tolerance' as principles, specifically? I'm confused because you seem to be denying that either of those principles can even be interpreted literally or 'absolutely' and it seems obvious to me that they can (and that people often do so).

I'm personally on-board with 'game-theoretical steelman' versions of 'pacifism' and 'tolerance', but the 'game-theoretical steelmanning', in my mind, necessarily involves all of my other values, i.e. there aren't 'pure (but sophisticated) non-absolute' versions of those principles to which every sufficiently advanced thinker would readily agree. (For one, I suspect that the steelmanned version of those principles is inevitably complicated and intricately detailed due to its interactions with other values, and to the variation and general incoherence/inconsistency of human values.)

Comment by kenny on Is there an existing label for the category of fallacies exemplified by "paradox of tolerance"? · 2019-09-20T16:42:44.517Z · score: 1 (3 votes) · LW · GW

I don't think they're fallacies.

Some pacifists really do believe that violence should be avoided absolutely, even as a last resort. And that doesn't even seem to be a paradox, just a strategy with an extreme weakness.

I think the 'paradox of tolerance' really is a paradox given that it's not obvious, for anyone abiding by the principle, how tolerant they should be of intolerance. Of course, any given non-absolute degree or 'distribution' of tolerance could be 'self-consistent' so it's not an unavoidable 'gotcha' by any means. But the simplest, most literal forms do definitely seem to be paradoxical, unless it's interpreted entirely personally, e.g. 'I should tolerate everything and anything.'.

My favorite example – which I think is, in a sense, paradoxical – is the precautionary principle. It's definitely not obvious that it shouldn't apply to people adopting the principle itself and, in fact, doing so is one reason why I reject it as a principle. It seems obvious to me that the superior principle is to 'make the best decisions one can given the information, and attendant uncertainty, available'.

Generally, I suspect that if the above principles, and similar ones, are 'sharpened' by modifying them to "integrate critical rejection", one would arrive at an entirely different (and more sophisticated) principle like, e.g. 'make the best decisions one can'.

Comment by kenny on The Power to Solve Climate Change · 2019-09-19T00:29:38.470Z · score: 1 (1 votes) · LW · GW

While the link "wash clothes > reduce your personal carbon footprint" is definite, my point is that the category of solutions that rely on the link "reduce your personal carbon footprint > solve climate change" are indefinite.

I'm not seeing much difference in 'definiteness' between personal behavior change and "world organization that sets per-country targets" unless it's just that solving global warming necessarily must involve global government. I think there are a LOT more examples of even 'intra-country targets', for anything (not global warming), being effectively bullshit compared to examples of the opposite. I'm thinking of things like the (U.S.) War on Drugs, but drug prohibition generally seems to fit pretty well.

And more generally, a lot of 'government' solutions seem to be pretty indefinite. It's a depressingly common feature of government. (And of course governments do implement definite specific policies too, so it's not an inevitable failure.)

Comment by kenny on Who To Root For: 2019 College Football Edition · 2019-09-15T05:32:49.402Z · score: 1 (1 votes) · LW · GW

Foiled! I was going to write a 'scathing' comment about this not being appropriate for cross-posting here.

(I loved the post. I think it does a great job at gesturing at the kinds of things I'd expect you to include in an eventual "fundamentals-level guide on sports".)

Comment by kenny on [Link] Book Review: Reframing Superintelligence (SSC) · 2019-09-11T03:05:00.906Z · score: 1 (1 votes) · LW · GW

A lot of the distinction between a service and an agent seems to rest on the difference between thinking and doing.

That doesn't seem right to me. There are several, potentially subtle differences between services and agents – the boundary (or maybe even 'boundaries') is probably nebulous at high resolution.

A good prototypical service is Google Translate. You submit text to it to translate and it outputs a translation as text. It's both thinking and doing but the 'doing' is limited – it just outputs translated text.

A good prototypical agent is AlphaGo. It pursues a goal, to win a game of Go, but does so in a (more) open-ended fashion than a service. It will continue to play as long as it can.

Down-thread, you wrote:

I am aiming directly at questions of how an AI that starts with a only a robotic arm might get to controlling drones or trading stocks, from the perspective of the AI.

I think one thing to point out up-front is that a lot of current AI systems are generated or built in a stage distinct from the stage in which they 'operate'. A lot of machine learning algorithms involve a distinct period of learning, first, which produces a model. That model can then be used – as a service. The model/service would do something like 'tell me if an image is of a hot dog'. Or, in the case of AlphaGo, something like 'given a game state X, what next move or action should be taken?'.

What makes AlphaGo an agent is that its model is operated in a mode whereby it's continually fed a sequence of game states, and, crucially, both its output controls the behavior of a player in the game, and the next game state it's given depends on its previous output. It becomes embedded or embodied via the feedback between its output, the player's behavior, and its subsequent input, a game state that includes the consequences of its previous output.

But, we're still missing yet another crucial ingredient to make an agent truly (or at least more) dangerous – 'online learning'.

Instead of training a model/service all at once up-front, we could train it while it acts as an agent or service, i.e. 'online'.

I would be very surprised if an AI installed to control a robotic arm gained control of drones or was able to trade stocks, but only because I would expect such an AI to not use online learning and to be overall very limited in terms of the inputs it's provided (e.g. the position of the arm and maybe a camera covering its work area) and the outputs to which it has direct access (e.g. a sequence of arm motions to be performed).
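The feedback loop I'm describing – the model's output controlling behavior, and the next input depending on the previous output – can be sketched in a few lines. This is a toy illustration only; `fixed_policy` and the integer 'environment' are made up for the example and don't refer to any real system:

```python
# Toy sketch of the service-vs-agent distinction discussed above.
# All names here are illustrative; nothing refers to a real AI system.

def fixed_policy(state):
    """A 'service': a pure function from input to output, trained up-front."""
    return -1 if state > 0 else 1  # nudge the state toward zero

def run_agent(policy, state, steps):
    """An 'agent': the same policy run in a closed loop, where each output
    feeds back into the environment and so determines the next input."""
    history = []
    for _ in range(steps):
        action = policy(state)  # the model's output...
        state = state + action  # ...changes the world it will next observe
        history.append(state)
    return history

trajectory = run_agent(fixed_policy, state=5, steps=10)
```

An 'online learning' variant would additionally update the policy's parameters inside the loop – that's the extra ingredient I'm suggesting makes an agent more dangerous.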

Probably the most dangerous kind of tool/service AI imagined is an oracle AI, i.e. an AI to which people would pose general open-ended questions, e.g. 'what should I do?'. For oracle AIs, I think some other (possibly) key dangerous ingredients might be present:

  • Knowledge of other oracle AIs (as a plausible stepping stone to the next ingredient)
  • Knowledge of itself as an oracle AI (and thus an important asset)
  • Knowledge of its own effects on the world, thru those that consult it, or those that are otherwise aware of its existence or its output
Comment by kenny on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-09-08T18:13:41.387Z · score: 1 (1 votes) · LW · GW

That's a good question. Or, rather, of the several ways I can interpret it (ha), each seems interesting.

I interpret your answer as being honest and in good faith. I'd default to the same were Reich to answer, if he were to answer like you did. I'd expect most other prominent public critics to deflect in some way.

More generally, I'd interpret similar answers from others writing 'against billionaire philanthropy' as weak-moderate evidence of the same.

As to how to more precisely test that, I admit that it's probably very tricky and thus I downgrade how "crucial" a test it really is. Here's one idea:

Some billionaire, one of those previously criticized in the manner under discussion, announces that, for every philanthropic donation they make, they'll make 'matching' donations to the relevant federal, state, and municipal treasuries to 'offset' the tax rebate/refund effect of the donations.

I'd expect that, mostly, this would result in heavier criticism and increasing suspicion. I'd expect you, if asked, to moderate your own criticism or praise the offsetting directly.

'Ideally', we'd ask The Simulators of the Universe to re-run the universe simulation and 'magically' have some kind of tax law passed that removes the refund/rebate before some portion of billionaire philanthropic donations were made, and then we could measure the number and 'sentiment' of criticisms.

Realistically, we could probably much much more crudely approximate something similar, but any comparisons would inevitably be confounded by all kinds of other things.

Comment by kenny on Paper Trauma · 2019-08-26T15:09:16.590Z · score: 2 (2 votes) · LW · GW

GitLab's Markdown supports charts and diagrams.

I find tables in Markdown pretty easy to input, on a computer, because I can format them easily in Vim, my favorite and always-open text editor.
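For example, a minimal table in the Markdown dialect GitLab renders looks like this (the pipes are easy to line up with Vim's block editing; the rows are just an example):

```markdown
| Task    | Status |
| ------- | ------ |
| Laundry | done   |
| Dusting | todo   |
```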

Comment by kenny on Do you do weekly or daily reviews? What are they like? · 2019-08-22T18:21:18.064Z · score: 4 (3 votes) · LW · GW

I don't do weekly, or longer-period, reviews any more – or at least I don't commit to doing them on a schedule.

Daily Review

My daily review centers around reviewing all of my 'tasks' in Habitica. I have a daily review task in Habitica that consists of the following steps (and is represented as a checklist):

  • Review all tasks
  • Review calendar for today*
  • Process every unread email
  • Review all of my 'to be reviewed' projects

* I've been observing something like a 'secular Sabbath' (roughly) each Friday starting at 6p to Saturday at the same time. On Friday, in the morning or later during the day, I review any calendar items for the next day, Saturday, too. I generally avoid scheduling anything for Saturday, but sometimes I either have to do something that day or want to anyways.

Reviewing tasks

The goal is just to read (or skim) each task and think about them minimally. If one is something I can do in literally 2-5 minutes, sometimes I'll do it right then, but I don't have to actually complete anything during the review.

I'll occasionally delete old tasks that I've given up on ever doing. If I remember that I've already completed a task, I'll mark it completed and, if it's part of a project, update the project info and, usually, flag that project 'to be reviewed'.

Tasks in Habitica can have checklists but I've realized that I'm using them too often and that, instead of having one task with a checklist, I should more often have separate tasks. The key to deciding which is better is whether the steps need to be done together, especially in order, or whether each step can be done independently. Laundry, for me, is a sequence of steps that all need to be done, in order, to result in me having clean clothes and other items. Dusting my house, however, is something that I really can do room by room.

One reason I really like Habitica is that there are three different types of tasks: 'to-dos', 'dailies', and 'habits'. To-dos are just like tasks in most any other task list system – something that you can mark 'completed' when it's done. Dailies are tasks that repeat – I mostly just use a 'weekly' schedule for specific days, e.g. the daily task for my daily review repeats every day except Saturday. Habits are tasks that can be completed, or 'missed', at any time. I've got one now to 'remember to either pump up the tires on your bike before you ride it, or at least check that their pressure is fine'. I only have a handful, or less, at any one time.

Reviewing my calendar

I use Google Calendar – mainly because I can access it from my phone, so most anywhere, and it's free.

I have separate calendars and I use them to categorize items. My main calendar has my reminders and events that I am either planning on attending (e.g. something to which I need to either travel or commute) or in which I am planning on participating (e.g. a phone call). I've got a 'family' calendar for tracking the schedules of family members or friends. I've got a 'maybe' calendar in which I put things like, e.g. fun events I might want to attend or the hours of my local rock climbing gym.

There are two types of items: events and reminders.

For events on my main calendar, there are two broad types: all-day and with-times. All-day events are usually just reminders, e.g. I'm on vacation. For events with times, typically I just need to decide whether I need to set an alarm on my phone, e.g. to get ready to leave to travel or commute to the event.

For reminders, I mostly just copy them to my Habitica to-do list; sometimes I'll just mark a few as completed or delete them. I'm using them much like what the Getting Things Done system terms a 'tickler file'. I generally add tasks that I need to do 'later' or on some kind of schedule as calendar reminders; the idea being that my task ('to-do') list in Habitica can be free of them until they're due.

Processing email

My goal isn't necessarily to read every email, completely – just process each one (and then mark them as read, until I reach 'inbox zero'). For long emails that I do want to read, I'll either save a web version in a 'read later' app or add a task to Habitica to read or review the email. I'll often add a task in Habitica to respond to someone if I can't do so within a few minutes right away.

I track all of my financial activity in YNAB, a nice budgeting app, so any email receipts get entered there immediately.

For some emails, I'll update the info for any related projects, add tasks in Habitica, or add something to my calendar.

Reviewing 'to be reviewed' projects

I'm using GitLab – a free account on the official 'hosted' instance. I've got a lot of 'projects' (GitLab's term), most of them pertaining to code, but several just for maintaining info about various projects. I mostly use a single 'project' named "@misc".

I've been using (software-development-focused) issue trackers for at least a decade now and GitLab's my favorite so far. The main reasons why I like it more than any others I've tried are that it uses Markdown (and its Markdown dialect is fantastic) and that it's got a separate description for each issue (whereas some trackers only have comments). Markdown, especially GitLab's dialect, allows me to easily quote emails, link to web pages (and entire sets of them from open tabs with a nice Chrome extension), and maintain check lists of tasks. Having an issue description separate from comments lets me maintain a nice overview of a project and a single list (or tree) of tasks (or, more often, a board outline thereof).

Each ('real world') project gets an 'issue' in the GitLab 'project'. I regularly edit the issue description so that it contains an up-to-date overview and outline of tasks. I add comments with info, quotes, links, and mini sub-projects and their tasks.

I assign an issue to myself to mark it as 'to be reviewed'. During my daily review, my goal for each assigned issue is mainly to review the project for that issue and determine the next task to be done. Once I've determined the next task, I make sure I add it to Habitica (and I link the task in Habitica to the issue in GitLab). If I expect to work fairly intensively on a project short-term, I'll leave the issue assigned to me; otherwise, I un-assign it. I also use GitLab, and the same account too, for work, so I'll usually have one or two work issues assigned to me as well and, because I usually focus on a single work project at a time, I'll leave the currently active issue or issues assigned to myself until I'm either finished or stuck waiting for some kind of outside input.

Other components

Alarms
I use the standard Alarm app on my phone (an iPhone) a lot. I've got a few standard, repeating alarms – 'wakeup', review my 'roughly scheduled' tasks – but I also use it liberally for anything I want to remember to do. I'll use the timer feature if I'm doing something like cooking but, because (in the standard app anyways) there's only one timer, I mostly default to using alarms because I can label them, e.g. 'Check the dryer', 'Leave to go _', or 'Get ready for phone call with X in Y minutes'.

Email
I often use email – i.e. I email myself – about new tasks, projects, or 'reference material' I want to be able to quickly find later. (I use Gmail mainly because its search is fantastic.) Sometimes I'll add tasks directly to Habitica or projects directly as an issue in GitLab, but email is much more frictionless and, because I habitually process my unread email every day, I'm confident I'll create tasks or GitLab issues later if I send myself an email.

I've got a couple of 'logs' in separate notes in my phone's standard Notes app. I sometimes think about writing my own little (web) apps but that would be a lot of work and regular text, tho structured fairly regularly, is probably not much worse, and (of course) already possible (and easily too).

Roughly scheduled tasks

In Habitica, I've got three tags for tasks that are 'roughly scheduled': 'morning', 'today', and 'tonight'. I've got alarms on my phone for each tag. I've committed to reviewing any tasks with the relevant tag sometime around when the alarm is scheduled. I don't have to complete all, or even any, of those tasks; just review them. I, of course, try to do any that need to be done.

Comment by kenny on Power Buys You Distance From The Crime · 2019-08-22T04:00:31.768Z · score: 1 (1 votes) · LW · GW

By-the-way, this is a fantastic comment and would make a great post pretty much by itself (with maybe a little context about that to which it's replying).

Comment by kenny on Power Buys You Distance From The Crime · 2019-08-22T01:06:55.157Z · score: 1 (1 votes) · LW · GW

enacting conflict in the course of discussing conflict

... seems to be exactly why it's so difficult to discuss a conflict theory with someone already convinced that it's true – any discussion is necessarily an attack in that conflict as it in effect presupposes that it might be false.

But that also makes me think that maybe the best rhetorical counter to someone enacting a conflict is to explicitly claim that one's unconvinced of the truth of the corresponding conflict theory or to explicitly claim that one's decoupling the current discussion from a (or any) conflict theory.