Tags Discussion/Talk Thread

post by Ruby · 2020-07-28T21:57:04.540Z · LW · GW · 78 comments


The LessWrong dev team is hard at work creating Talk Pages/Discussion pages for tags. When they're done, every tag page will have a corresponding talk page which lets users discuss changes and improvements related to that tag.

We don't have that yet, so in the meantime, please make comments you have about tags (generally or about specific tags) here. If you're talking about a specific tag, of course, make sure to link to it. You might also want to link back to your comment from the body of the tag description, e.g., "Tag Discussion here".

Examples of things you might comment about a tag:

Or:

Also, feel free to use this space to claim credit for tags you've worked hard to make great! (we'll give you karma!)

Other relevant pages about tagging

78 comments

Comments sorted by top scores.

comment by abramdemski · 2020-07-31T15:30:03.617Z · LW(p) · GW(p)

I edited the Bayes Theorem / Bayesianism tag [? · GW]. There was a bracketed statement (something like [needs more]) next to the description of Bayesianism. At the time the description of "Bayesianism" was just:

Bayesianism is the broader philosophy inspired by the theorem.

I kept that text in there for now. It is accurate but seems misleading to me. Bayesianism is not primarily about Bayes Theorem at all. Which brings me to my main point:

1. Should the Bayes Theorem / Bayesianism tag be split up into two tags?

It is conceptually awkward to lump these two things together.

  • Bayes' Theorem is a theorem in probability theory. It holds true whether you are a Bayesian, a Frequentist, or most anything else.
  • Bayesianism is an interpretation of probability theory. It holds that probability is subjective. So these two are very different things.
    • As a consequence of this belief, Bayesians are more interested in applying Bayes' Theorem (while frequentists prefer other techniques such as p-values for similar purposes). But Bayes' Theorem has little to do with Bayesian philosophy. Indeed, Bayesians need not accept Bayes' Law as an update rule [LW · GW].
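[For readers less familiar with it: the theorem in question is just the identity P(H|E) = P(E|H)·P(H) / P(E), which holds in any probability calculus regardless of interpretation. A minimal sketch of using it as an update rule, with made-up numbers purely for illustration:]

```python
# Bayes' Theorem as an update: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
def posterior(prior, likelihood, false_positive_rate):
    """Probability of the hypothesis given positive evidence."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Hypothetical noisy test: 90% sensitive, 10% false positives, 1% prior.
print(round(posterior(0.01, 0.9, 0.1), 4))  # → 0.0833
```

[Note the posterior stays low despite the positive result, because the prior is small: this is the frequentist-uncontroversial arithmetic, separate from the Bayesian philosophical claims discussed above.]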

On the other hand, I expect this to never be a problem in practice. 

2. It's "Bayes' Theorem"

The last name of the man is Bayes. It's his theorem, so it's possessive. Standard written English adds an apostrophe at the end of words ending in s to make them possessive.

OTOH, who cares, writing Bayes' Theorem is annoying.

Should the tag name be edited?

For now I've made sure the usage in the tag description is correct, without editing the tag name.

comment by Ben Pace (Benito) · 2020-07-31T16:54:42.795Z · LW(p) · GW(p)

I find the "A / B" to be fairly ugly in tag naming, and think that even "A (and B)" is more attractive.

My guess is that we should just go with Bayesianism, because it feels more general? Like, if it's a wiki page later, the page on Bayesianism would naturally have a section on Bayes' Theorem that explains it.

comment by Ruby · 2020-07-31T19:12:00.980Z · LW(p) · GW(p)

I introduced the '/' convention in naming and think it 1) looks fine, 2) is very necessary to lump adjacent-enough concepts or things with two likely names into a single tag. Parentheses for the second thing would also likely imply it is lesser even more than being second already does.

comment by DanielFilan · 2020-07-31T16:19:39.619Z · LW(p) · GW(p)

FWIW I vote for "Bayes' Theorem" over "Bayes Theorem".

comment by abramdemski · 2020-07-31T16:47:30.187Z · LW(p) · GW(p)

Changed it.

comment by abramdemski · 2020-07-31T16:39:07.553Z · LW(p) · GW(p)

Just putting this here for now:

It seems to me like there should be a "logical uncertainty" tag that's more general than "logical induction", or at least the "logical induction" tag should be renamed to the more general one.

Probably the more general one should be made, rather than renaming. But collecting all the stuff about logical uncertainty sounds like an uncommon amount of work (because I don't expect searching for "logical uncertainty" to necessarily get all the important stuff?).

comment by Ben Pace (Benito) · 2020-07-31T17:40:56.838Z · LW(p) · GW(p)

(I'm pro renaming for now.)

comment by Ruby · 2020-08-03T17:30:02.559Z · LW(p) · GW(p)

(Ben.) (Why do you talk in brackets?) ()

 

 

 

(Is it a test for GPT?)

comment by abramdemski · 2020-08-03T16:13:10.001Z · LW(p) · GW(p)

I decided not to rename, because (iiuc) the address of the "logical uncertainty" tag would then be https://www.lesswrong.com/tag/logical-induction [? · GW], which would be weird given that there would also be a "logical induction" tag.

Instead I created the new tag and tagged everything under "logical induction" to also be under "logical uncertainty".

What I wasn't able to do was remove the inappropriate stuff from the logical induction tag. There are a number of things with that tag which are not really about logical induction, only logical uncertainty.

comment by Ruby · 2020-08-03T17:25:08.933Z · LW(p) · GW(p)

By default, if you rename a tag, the url will change to match the new name and the old url will be redirected to the new one (thus preserving any existing links to it). In cases where we want to use an old url for one tag as the url for another (this has happened), we'd need to do some manual database stuff (but it can be done) and then ensure all existing links go to the right place. We can also delete tags easily.

At this point, would it just make sense to delete the logical induction tag and have its url redirect to the new logical uncertainty tag?

Or if you prefer, we can help remove tags. Eventually, assuming tagging continues to get used enough, I think we'll build proper tools to handle these splitting/merging situations.

comment by abramdemski · 2020-08-03T19:48:00.631Z · LW(p) · GW(p)

I don't think just deleting the logical induction tag is the right thing. I did what I did because I think there should be two separate tags.

I think the next step is to remove the logical induction tag from any posts which aren't LI-specific.

I have personally upvoted the logical induction tag on the items I think should be tagged "logical induction" and downvoted it on things which I especially think aren't appropriate. So (assuming you can see which things I personally upvoted) you could use that as a basis for removing tags.

comment by Ruby · 2020-08-03T20:10:45.209Z · LW(p) · GW(p)

Ah, my bad, wasn't clear to me that you thought Logical Uncertainty should be a superset of Logical Induction, and that Logical Induction should still exist separately. I misread it as being about maintaining url addresses.

Yeah, I can look at those votes and then vote with you (maybe help from another LW team member).

comment by Ruby · 2020-08-05T17:57:32.824Z · LW(p) · GW(p)

Addendum to this comment [LW(p) · GW(p)]:

While the LW team are currently the final arbiters of what goes into tags and tag descriptions, and I've been assigning the tag grades in an almost teacher-like way, people are encouraged to argue with and debate any feedback or norms we've proposed. I do want the community to feel they have a large say in how the system works.

If you think the tag grade system should be different in some way, feel free to say so.
If you think a tag has been given the wrong grade, feel free to say so.
If you think feedback I've given about what would make a tag better is wrong, feel free to argue with it.

I've formed various opinions about what makes for good tags (and also changed those opinions a lot, particularly in response to other LW team members so far), but I doubt I've yet arrived at the perfect opinions, so feedback and pushback are welcome from all.

To me, a good state is where we've collectively established enough of an idea of what good tags are that the ability to grade tags can be granted to a larger group of tag moderators. And also where people are giving feedback and suggestions (or just making the changes in a do-ocracy way) a bunch between themselves. I think that'll be easier once Talk Pages are live.

comment by brook · 2020-08-17T13:04:16.758Z · LW(p) · GW(p)

Edited the courage [? · GW] tag, think it's C-class (Not sure if it needs integrating somehow with the groupthink [? · GW] and/or heroic responsibility [? · GW] tags? certainly some things in each of these don't fit under the others but there is a fair amount of overlap at present)

Edited self-deception [? · GW] & superstimuli [? · GW], think they're now C-class (self-deception in particular, I'd like somebody who's actually read Elephant in the Brain to have a look over it, because it seems relevant but I'm not overly familiar)

Edited evolution [? · GW] and think it's now B-class

comment by Ruby · 2020-08-19T20:00:05.471Z · LW(p) · GW(p)

Just had a chance to look at these. Thanks!!

I really like what you've done with Self-Deception and Superstimuli. They're both really solid.

Courage is good too, though I find it slightly hard to read. "here" references seem less good than just using the post name.

Evolution I found myself bouncing off a bit for some reason. Gave it C-Class for now, but I still hope to revise the grading-scheme/description-improvement-system following all the useful feedback/advice you gave. I'm just getting back on top of things following our team retreat, and next priorities are getting drawn up now. Similarly, we need to decide about doing a push to get every tag a basic description, which was also a good thing you suggested.

comment by brook · 2020-08-20T19:12:01.851Z · LW(p) · GW(p)

Hmm, ok. I think both courage and evolution I found more difficult because they're less well-defined clusters in postspace (compared to self-deception and superstimuli). I'm glad you found the feedback helpful.

I've also edited gears-level [? · GW] and spaced repetition [? · GW]. I think they're probably C and B class respectively, but I'm still very unconfident about that. Gears-level in particular I'm not sure if it might not just be better to point to Gears in Understanding [? · GW], as it's pretty well-written and is pointing to an odd (& specific) concept.

comment by brook · 2020-08-05T13:52:10.145Z · LW(p) · GW(p)

Are there any plans to implement tagging of whole sequences? I understand that tagging the first post in a sequence has a similar effect, but it might be more productive in some instances to have, for instance, Slack and The Sabbath [? · GW] as the top link under the slack tag [? · GW], rather than the individual posts from this sequence appearing in an order based on relevance.

Obviously that then creates issues about whether you want posts that appear in sequences to also appear individually or not, and whether you want all sequences to be taggable or not, and so on. I'm not sure if these issues outweigh the benefits; even just on an admin-only basis, it seems like a helpful feature if we expect a significant user base who don't read the tag descriptions (the other place it might otherwise make sense to put sequence links).

(Other examples where this seems to me it might be more useful than the current method: Philosophy of language & A Human's Guide To Words, Group Rationality & The Craft And The Community. I imagine there are more)

comment by Yoav Ravid · 2020-08-05T15:44:42.804Z · LW(p) · GW(p)

I also left a comment suggesting this. For now I'm just adding "Related Sequences" to the description with links to relevant sequences (see Epistemology [? · GW] for example); I hope in the future this can be done with actual tagging.

comment by Ruby · 2020-08-05T17:39:16.586Z · LW(p) · GW(p)

Yeah, I thought about this early on and it seemed very necessary. It's not lost on me though how it's kinda bad to have 20 posts from the same sequences just filling up the tag.

As you point out, there are a bunch of challenges in figuring out all the right behaviors. There might be a solution where the benefits are worth it, but it's gonna be work to figure out what it is. We're first dealing with a heap of low-hanging fruit on tagging stuff.

In short, seems like a good idea to figure something out here, also seems tricky and so would be a bit further down the roadmap. For now, I think the Related Sequences is the best solution even if some people miss the section.

comment by brook · 2020-08-06T11:27:22.309Z · LW(p) · GW(p)

For sure! I figured the team wouldn't have missed this, just wanted to give my two cents. For what it's worth I think the tagging system is actually really nicely implemented already; I feel like a kid in a candy shop with all these posts that were just thoroughly inaccessible to me until now.

comment by Ruby · 2020-08-06T13:26:27.242Z · LW(p) · GW(p)

Your cents are appreciated! Really helpful to know which things stick out to people.

I feel like a kid in a candy shop with all these posts that were just thoroughly inaccessible to me until now.

That was the goal! <3 Don't eat too much candy.

I'm joking. Knock yourself out, this is the good stuff.

comment by Multicore (KaynanK) · 2020-08-05T16:24:51.194Z · LW(p) · GW(p)

My suggestions for changes/merges:

Change Alpha (algorithm family) [? · GW] to DeepMind, which would then include DM's other projects like Agent57 and MuZero. I think it's what more people would look for, and it has more forward compatibility.

Merge Blues and Greens [? · GW] and Coalitional Instincts [? · GW]; they're about basically the same thing. I don't like either name; "Tribalism" would probably be better. Blues and Greens is jargon that's not used enough, and coalitional instincts is too formal.

Merge Good Explanations (advice) [? · GW] into Distillation and Pedagogy [? · GW]. Distillation and Pedagogy is slightly broader, but not enough for good explanations to need to be its own tag.

comment by Kaj_Sotala · 2020-08-06T15:57:48.296Z · LW(p) · GW(p)
Merge Blues and Greens [? · GW] and Coalitional Instincts [? · GW]; they're about basically the same thing. I don't like either name; "Tribalism" would probably be better. Blues and Greens is jargon that's not used enough, and coalitional instincts is too formal.

I don't have an opinion on the Blues and Greens merge - I wouldn't expect anyone to be specifically interested in posts that happen to use that particular analogy - but I would somewhat lean towards keeping the Coalitional Instincts term.

I considered several terms for that tag, including "Tribalism", but I feel like there's an underlying concept cluster that's worth carving out and which is better described by Coalitional Instincts. Though this feels like a somewhat subtle difference in what I feel that "Tribalism" connotes, and if others disagree with me on those connotations, then I'm certainly willing to switch.

Basically, it feels to me like "Tribalism" generally refers to a somewhat narrow set of behaviors, whereas Coalitional Instincts includes those but also includes the underlying psychological mechanisms and somewhat broader behaviors. For example, Eliezer's post Professing and Cheering [LW · GW] includes this excerpt:

But even the concept of “religious profession” doesn’t seem to cover the pagan woman’s claim to believe in the primordial cow. If you had to profess a religious belief to satisfy a priest, or satisfy a co-religionist—heck, to satisfy your own self-image as a religious person—you would have to pretend to believe much more convincingly than this woman was doing. As she recited her tale of the primordial cow, she wasn’t even trying to be persuasive on that front—wasn’t even trying to convince us that she took her own religion seriously. I think that’s the part that so took me aback. I know people who believe they believe ridiculous things, but when they profess them, they’ll spend much more effort to convince themselves that they take their beliefs seriously.
It finally occurred to me that this woman wasn’t trying to convince us or even convince herself. Her recitation of the creation story wasn’t about the creation of the world at all. Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game. A banner saying Go Blues isn’t a statement of fact, or an attempt to persuade; it doesn’t have to be convincing—it’s a cheer.
That strange flaunting pride . . . it was like she was marching naked in a gay pride parade.1 [LW(p) · GW(p)]It wasn’t just a cheer, like marching, but an outrageous cheer, like marching naked—believing that she couldn’t be arrested or criticized, because she was doing it for her pride parade.
That’s why it mattered to her that what she was saying was beyond ridiculous. If she’d tried to make it sound more plausible, it would have been like putting on clothes.

To me, "believing in something because it is outrageous, and not even trying to make it legible according to the other people's epistemology" is a phenomenon that's covered by coalitional instincts - it's holding up the standards of explanation of your group and flaunting the fact that you don't care about the other side's. But it doesn't quite fit tribalism as I usually understand it? I feel like "tribalism" usually refers to things that are more explicitly about "explicitly attacking anything that is seen to support the other side", and not so much about "simply being proud about your side".

But I'm not completely certain of that myself, and if others disagree with this assessment, then I'd be willing to change the name to tribalism.

comment by Ruby · 2020-08-06T19:21:35.946Z · LW(p) · GW(p)

People knowing what a tag is about straight away is quite valuable. Tribalism is very close to the broader thing. What if it was "Tribalism" and the description was changed to:

Tribalism, or more broadly Coalitional Instincts, drive humans to act in ways which cause them join, support, defend, and maintain their membership in various coalitions. These concepts...something, something in-group/out-group

comment by Kaj_Sotala · 2020-08-06T21:44:24.826Z · LW(p) · GW(p)

I think that people knowing what the tag means right away is potentially a problem if that instant understanding is slightly wrong. E.g., if people only look at the tag's name (which is what they'll generally do if they have no reason to explicitly look up the description), they might feel that some posts that fit CI but not Tribalism are mis-tagged and downvote the tag relevance. Coalitional Instincts being less self-explanatory has the advantage that people are less likely to assume they know what it means without looking at the tag description.

comment by Ruby · 2020-08-06T23:40:44.883Z · LW(p) · GW(p)

I see the argument. I do think that people downvoting in order to maintain the tag are much more likely to have read the text vs people adding the tag.

But I predict the largest effect is most people don't look at descriptions they don't recognize and therefore don't look at the tag at all, which is a shame because I think a lot of people are interested in the topic. Gut feeling is a 2-5x reduction in how much the tag gets looked at with the unfamiliar name, and I think that matters.

PS: I don't want to make the decision here, I have enough tagging decisions to make already, so I'm leaving it up to others even though I'm offering some thoughts.

comment by Kaj_Sotala · 2020-08-07T06:36:30.091Z · LW(p) · GW(p)

That's a reasonable point. After this discussion, I think that I do lean towards just renaming it after all.

comment by jimrandomh · 2020-08-05T18:43:59.151Z · LW(p) · GW(p)

I like Blues and Greens being separate, reserved for posts using that specific analogy, as opposed to other posts on the topic the analogy bears on. The analogy is flavorful, and it's made its way into our jargon.

comment by Yoav Ravid · 2020-08-05T18:55:01.220Z · LW(p) · GW(p)

I suggest a Tribalism tag, and adding (analogy) or (metaphor) at the end of the current Blues and Greens tag

comment by Ruby · 2020-08-05T18:51:36.914Z · LW(p) · GW(p)

If it's specifically a subset of the Tribalism tag for that analogy, I think putting that restriction in the description and mentioning in the broader Tribalism tag's description covers it. I think it'd be bad if people thought Blues & Greens was for all tribalism posts and overlapped/was a synonym, but that can be cleared up in the description I hope.

comment by Ruby · 2020-08-05T17:59:23.189Z · LW(p) · GW(p)

Yeah, those seem like sensible merges to me. Habryka mentioned something about wanting to do a big merge/clean-up in a couple of weeks. We might also want to build tools to make merging/splitting easier for the occasion.

I've pinged the four tag creators now, though, in case they want to argue in particular directions now.

comment by brook · 2020-08-06T15:11:26.590Z · LW(p) · GW(p)

I've edited the Heuristics and Biases [? · GW] tag. I think it's probably A-grade (I'm still getting a handle on exactly what an A-grade tag should feel like though, honestly).

That said, I'd like it if somebody could check the specifics of the three definitions, because I'm actually not completely sure, and check that it scans ok.

comment by Ruby · 2020-08-06T19:53:37.678Z · LW(p) · GW(p)

(I'm still getting a handle on exactly what an A-grade tag should feel like though, honestly).

Me too. Maybe I'll get a chance to write some of my own today or tomorrow and we can compare notes.

Probably the hard thing is that what counts as A-Class for one tag isn't enough for another. For Heuristics & Biases, man, it's such a central tag for LW that it's gotta be really good. Probably we should hold off declaring this one A-Class until we're ready to say it really doesn't merit more work, which could be a while.

One thought is that this is a topic with some amazing posts introducing it, and the tag description should lean on them, maybe even quoting them a lot and pushing readers to read them. The current tag description is alright, but it feels like it doesn't get at the heart of the topic in the few paragraphs or make me feel like I should care much. Contrast with the engagingness of ...What's a bias, again? [LW · GW]

In Conservation of Expected Evidence [? · GW], I basically thought to myself "there's no way I'm gonna write a better explanation than Eliezer on this" and that I should just quote him. Though sometimes Eliezer's explanations are too long to quote and it makes sense to rewrite them.

Other thoughts:
- I think tags in this class should heavily mention and describe top posts close to the top (including pretty related ones like "Your Intuitions Aren't Magic").
- Relatedly, it should guide your reading of the topic much more explicitly. What is Predictably Wrong? What does it contain?
- I think it'd be good to have a more explicit list of different heuristics and biases. As kind of a parent tag, it should maybe even have a nice table of all "sub-tags"
- The way "fallacies" is used on LW isn't about explicit logic, really. I think it's more that a fallacy is a bad inference/step of reasoning, whereas heuristics and biases are properties of the algorithm that does the reasoning. 

[I also moved the Related Tags to the top because I think it's good if someone can immediately see whether a related tag is what they're after (even in the hover-over which you can hackily include if you shift-enter)]

comment by brook · 2020-08-07T16:29:09.008Z · LW(p) · GW(p)

I've updated the Heuristics and Biases tag again btw. I don't think it's A-grade based on "I'd like to see more work done on it", but I think it's about as good as I personally am going to be able to get it. I'd really like somebody (yes you, fellow user reading this) to have a read through and make any adjustments that make sense and/or make it more comprehensive.

re: fallacies, I thought about it, and I think they're actually used pretty similarly, at least here on LW. The planning fallacy could easily be described as a bias generated by an 'imagine your ideal plan going correctly (and maybe add, say, 10%)' heuristic. At the very least, there's plenty of overlap. Really what I envisioned for that section was making the point that a heuristic can be good (or just ok), because that was something that I didn't realise for a long time.

comment by brook · 2020-08-07T11:10:05.898Z · LW(p) · GW(p)

OK. So you see the grading as being more of a "neglected-o-meter" in the sense that it describes the gap between how a tag currently is and how it would be in an ideal world? (i.e. a more important tag would have a higher bar for being A-grade than a less important one?)

I think that makes more sense than an absolute-quality stamp, but I think the tag grading post, as it is currently written, should make that clear (if it is the case) -- currently it implies almost the opposite, at least as I read it. For instance, phrases like "It covers a valuable topic" in A-grade, and "tagged posts may not be especially good" in C-grade. To me these read as "quality/importance of topic and of posts are as important for grading as the description".

I think actually the way you're describing tags now is more useful (for e.g. directing peoples attention for improving tags), but I'm not sure if it came across that way (to me) in the initial post. I would be interested to hear how other people read it.

comment by Ruby · 2020-08-07T19:35:28.622Z · LW(p) · GW(p)

Yes, "neglected-o-meter" is a good way of putting it. The idea was a bit tricky to convey, I guess I didn't have it super well-articulated in my own head.

The idea was that:

  • Tag grades identify tags in need of work.
  • For each grade, there's a set of standard things to do to improve them. (This seemed better than individually marking tags as "needs more posts" or "needs better description")

And also, additionally, that tags reflect absolute quality as well, such that if you only want the best tags, you can filter for that. I didn't realize that what I'd consider A-Grade for an obscure topic with limited content would be different for a major topic where there's lots to be said. Another difference is how fundamental and introductory a topic is: topics that people encounter early in their LW journey need extra polish.

Now that people are writing more tag descriptions, the gaps in the system are coming out. I've felt somewhat that I should give any tag that seems to meet the criteria a grade, but then in some cases there's still more I'd want. This might be solved by making the criteria better and clearer.

I apologize for the confusion. We're about to go on team retreat, but when we get back maybe taggers in this thread and the LW team can refine the system/schema.

Thanks for your patience.

comment by abramdemski · 2020-07-31T16:58:36.003Z · LW(p) · GW(p)

I created the Truth, Semantics, and Meaning [? · GW] tag. Then I noticed that there was an empty map/territory [? · GW] tag. I tagged most of the truth stuff as map/territory.

Map/territory strikes me as a better tag than truth. The huge overlap between the two makes me think maybe truth shouldn't be a tag. However, these two tags are different.

  • Truth, Semantics, and Meaning is about the philosophy/metaphysics, and the more formal side. This means stuff about the liar paradox fits well here, whereas it doesn't so obviously belong in map/territory (though it certainly could).
    • Anything which talks about truth in itself -- rather than truth as a way of getting at semantics -- is like this. EG, arguably Eliezer's essay The Simple Truth [LW · GW] is not about map/territory at all, and is rather directly trying to discuss the word "truth" -- its usage and meaning.
  • Map/territory includes stuff like the mind projection fallacy. This doesn't really belong in Truth, Semantics, and Meaning, since it's more psychological and less about ideal rationality.
    • In fact, the tag description makes me think it was intended primarily to be about the "psychological" side.
    • Is the psychological side worth distinguishing from the formal-epistemological side?
comment by Ruby · 2020-08-05T18:41:24.488Z · LW(p) · GW(p)

I missed this comment till now, sorry. Seems no one else opined yet, so my thoughts:

I don't feel like I can confidently say something about the value of distinguishing the two, but I can offer the broader questions I see.

  1. What is the extent of our ability to maintain nuanced distinctions in the tags?

I've seen already that tag creators will have one narrow thing in mind when they make a tag, and then other people will come along and apply it very broadly, e.g., Biology getting applied to anything involving a biological system at all, even if it'd be of no interest to a biologist or someone looking for biology content. If two tags are very adjacent, I might expect things to haphazardly go in one or the other or both.

  2. If it's important, how do we maintain two tags?

I could see preserving a space to just discuss the formal side separately from the psychological being quite valuable. Especially if there's a sustained back-and-forth on technical stuff. If it is, then I think there's effort we could put in to maintain the distinction in the face of entropy.

  • The tags need to be optimized as a set.
  • The tag names should directly imply the tag boundary, e.g., Truth (Formal) and Map & Territory (Applied) or something.
  • Each description should early on differentiate the tags and mention the other.
  • You'll need someone who cares to keep an eye on the tags. (Subscriptions to tags should make this easier, but currently it doesn't batch, which doesn't work well for high-volume tags.)
  • Although if each tag has highly upvoted relevant content at the top, it matters less if some other stuff goes to the bottom of the list (especially below the Load More).

Overall, the seeming cost of maintaining separate tags makes me by default want to just aim for broader, more-inclusive tags. But if a distinction is worth it and/or we end up with a committed community and people who want to keep tags clean and true to their intention, might be worth it.

comment by John_Maxwell (John_Maxwell_IV) · 2020-08-07T07:33:11.320Z · LW(p) · GW(p)

What are the norms around the number of tags that are appropriate for a post to get?  There are some posts of mine that I wish more people would read, and piling relevant tags onto them looks like an easy way to accomplish this.  However, I'm looking at some of the other tagging effort that's being done, and it seems like sometimes posts are being tagged with just one or two of a larger collection of say 4-5 tags that could be considered relevant.

Edit: Thanks for the responses, all.

comment by Kaj_Sotala · 2020-08-07T10:51:17.570Z · LW(p) · GW(p)

I'd say that the appropriate number of tags for a post is "as many or few as seem to match the contents of the post".

I expect that many posts that have just a few tags are that way simply because the tagger didn't happen to think of the other possible ones. Or someone might be focusing on some particular tag and tagging all relevant posts with that, without also looking at those posts to see what other tags might be suitable.

comment by Multicore (KaynanK) · 2020-08-07T10:43:31.259Z · LW(p) · GW(p)

I put 1-3 on most posts, but I've gone up to 5 or more on some. Probably many of the posts I've tagged could have other tags applied to them that I didn't think of at the time. It's not about a hard number, it's about asking for each individual tag, is it likely someone exploring this tag would think this post was relevant to it?

comment by Ruby · 2020-08-07T15:55:14.642Z · LW(p) · GW(p)

I'm glad you asked!

I feel a bit uncomfortable with "piling on tags" (maybe the phrase more than what you're actually suggesting). I think it's because when I've seen authors apply lots of tags to their own posts (I'm assuming the motivation), half the tags seem like a stretch and the posts were low karma/quality (less true of yours, but it's an association now in my mind).

That said, I do think 4-5 tags can be reasonable, and more posts might hit that or more as we have more tags and people do more tagging. I'm mostly responding to the phrasing "piling on for easy visibility". The heuristics Kaj and Multicore suggest feel right to me.

If the relevance is really there, it's good to tag.

comment by Yoav Ravid · 2020-08-08T08:02:48.869Z · LW(p) · GW(p)

I made a Hard Problem of Consciousness [? · GW] tag. It seems distinct enough from the Consciousness [? · GW] tag, which already has 46 posts.

comment by Kaj_Sotala · 2020-08-07T12:38:38.675Z · LW(p) · GW(p)

A tag that I'm about to create would have the following description:

________ is a strategy for dealing with confusing questions [LW · GW] or points of disagreement, such as "do humans have free will" or "when a tree falls in a forest with no-one to hear, does it make a sound". Rather than trying to give an answer in the form of "yes", "no", or "the question is incoherent", one seeks to understand the cognitive algorithm that gave rise to the confusion [? · GW], so that at the end there is nothing left to explain.

Eliezer originally called this strategy "dissolving the question" (in the first linked post), but an important part of it is thinking in terms of "how an algorithm feels from the inside" (the second linked post), and I tend to think of these interchangeably. "Dissolving the Question" says, among other things:

What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "free will"?

In fact, until I looked up the relevant posts, I remembered the name of the strategy as being "dissolving the algorithm" rather than "dissolving the question".

Given these considerations, should the tag be called:

  • Dissolving the Question
  • Dissolving the Cognitive Algorithm
  • How an Algorithm Feels From Inside
  • Something else?
comment by Yoav Ravid · 2020-08-07T13:27:10.249Z · LW(p) · GW(p)

Out of these I think Dissolving the Question is probably the right name for the tag. Dissolving the Cognitive Algorithm is an interesting alternate name for the technique, but since it isn't well known, it's not very good for the tag name. How an Algorithm Feels From Inside doesn't feel like a tag name, and wouldn't be intuitive to put on posts that aim to dissolve questions.

Though Dissolving the Question also feels awkward for a tag name. Perhaps "Dissolving Questions", unless anyone has better ideas?

comment by Multicore (KaynanK) · 2020-08-07T17:26:28.882Z · LW(p) · GW(p)

+1 Dissolving Questions

comment by Ruby · 2020-08-07T19:16:45.790Z · LW(p) · GW(p)

I've felt like I've wanted a tag for "Confusions" and I guess by extension one about "Deconfusions"? "Deconfusioning?"

I'm really not sure, but I think there's a broader thing here that should be a (the?) tag. Something about the broader phenomenon of getting deconfused, of which "dissolving the question" is an instance.

Of the three things listed though, definitely Dissolving the Question.

comment by brook · 2020-08-05T14:17:44.850Z · LW(p) · GW(p)

I've edited the Effective Altruism [? · GW] tag pretty heavily, and I now believe it qualifies as A grade.

I've also edited the Epistemic Modesty [? · GW]tag, and think it's now C or B grade.

I'd also like the X-risk and S-risk tags to be consistent with one another-- I propose that "S-risks (Risk of astronomical suffering)" and "X-risks (Existential risk)" is the best format.

comment by Ruby · 2020-08-05T17:48:17.588Z · LW(p) · GW(p)

Phenomenal job on the EA tag! Definitely A-Class. 

Quick thoughts that occur to me for it: Should probably link to the EA Forum too? And I think it might be clearer if the See Also section was divided into two sub-sections, one for related LW tags and one for external resources?

I made Epistemic Modesty C-Class. The central idea is conveyed but I'd want to reserve B-Class for things that touch on more of the arguments, e.g., for and against. Inadequate Equilibria posts are tagged, but would be good to mention what position gets laid out in them. (Especially as the quote might make it seem that Eliezer is plain in favor of epistemic modesty as virtuous.)

PS: See a note on tag feedback in this comment [LW(p) · GW(p)].

comment by John_Maxwell (John_Maxwell_IV) · 2020-08-08T04:23:55.295Z · LW(p) · GW(p)

We have both an AI tag and an AI Risk tag.  When should one use one or the other?  Maybe we should rename AI Risk to AI Risk Strategy or AI Strategy so they're more clearly differentiated.

comment by Ben Pace (Benito) · 2020-08-08T05:09:57.592Z · LW(p) · GW(p)

I think AI Risk is open to improvement as a name, but it's definitely a more narrow category than AI. AI includes reviews of AI textbooks, explanations of how certain ML architectures work, and anything else relating to AI. AI Risk is about the downside risk and analysis of what the risk looks like.

comment by John_Maxwell (John_Maxwell_IV) · 2020-08-08T05:29:36.394Z · LW(p) · GW(p)

BTW, "productivity" and "akrasia" are another pair of tags that feel a bit poorly differentiated to me.

comment by Multicore (KaynanK) · 2020-08-08T13:23:17.140Z · LW(p) · GW(p)

Productivity seems to include both "improve productivity by fighting akrasia" and "improve productivity by optimizing your workflows", for example What's your favorite notetaking system? [LW · GW], so it's not a full overlap.

Procrastination is the tag that feels most redundant next to Akrasia to me.

comment by Ruby · 2020-08-08T06:17:24.220Z · LW(p) · GW(p)

Yeah, there's also Willpower in that cluster. I think I want a good meta-cluster for that whole bundle but haven't thought of what it'd be called. There's overlap between each, but also some differentiation, so I'm not sure; I'd be interested in proposals for how to carve up and tag that space.

Oh, and Motivation(s). However, that tag has grown kind of huge and I haven't got around to thinking about what it really should be.

comment by John_Maxwell (John_Maxwell_IV) · 2020-08-08T08:30:56.800Z · LW(p) · GW(p)

FWIW, I'm not a fan of "akrasia"--seems unnecessarily highfalutin to me.  Most stuff tagged with "akrasia" is essentially about procrastination, not akrasia as a philosophical problem.  (Just found this article on Google.)  I think it's OK for LW to use jargon, but we should recognize jargon comes with a cost, and there's no reason to pay the cost if we aren't getting any particular benefit.

(crl826 [LW · GW] mentioned that "procrastination" is another related tag in the latest open thread.)

comment by John_Maxwell (John_Maxwell_IV) · 2020-08-08T05:18:44.403Z · LW(p) · GW(p)

So it sounds like the underlying content categories are:

  • Technical AI safety
  • Nontechnical AI safety/AI strategy
  • AI content unrelated to safety

Is that right?

I guess my complaint is that while "AI content unrelated to safety" always gets tagged "AI", and "Nontechnical AI safety/AI strategy" always gets tagged "AI Risk", there doesn't seem to be a consistent policy for the "Technical AI safety" content.

comment by Ben Pace (Benito) · 2020-08-08T06:22:54.683Z · LW(p) · GW(p)

All of them get tagged AI. Not all of the technical content gets tagged AI Risk – for example, when Scott Garrabrant writes curious things like prisoner's dilemma with costs to modelling, this is related to embedded agency, but it's not at all clearly relevant to AI risk, only indirectly at best. The ones that are explicitly about AI risk, such as What Failure Looks Like or The Rocket Alignment Problem, do get tagged AI Risk.

comment by Gyrodiot · 2020-08-06T08:09:42.264Z · LW(p) · GW(p)

Should all HPMOR posts [? · GW] be tagged with the Fiction tag [? · GW]? Only the very first chapter [LW · GW] is, currently, which makes sense. Conversely, all chapters of Three Worlds Collide [? · GW] are tagged with it. Which convention shall prevail?

(sidenote: I'm volunteering to mass-tag HPMOR if it's greenlighted)

comment by Ruby · 2020-08-06T13:21:42.762Z · LW(p) · GW(p)

I think we should definitely not tag every chapter of HPMOR. So to be consistent we should probably untag the later chapters of 3 Worlds and make that the norm. I guess that should be added to the description.

Thanks for the very generous offer though!

comment by Gyrodiot · 2020-08-06T14:30:18.998Z · LW(p) · GW(p)

Roger that! Later chapters of Three Worlds Collide, and The Bayesian Conspiracy have just been untagged.

I also updated the tag description to reflect the norm (in bold and near the top so it appears on the tag hover text, if I understand correctly the meta-norm about such disclaimers).

Edit: the tag is still there for 3WC c1 [LW · GW], I didn't have enough power to remove it.

comment by Multicore (KaynanK) · 2020-08-06T16:49:14.963Z · LW(p) · GW(p)

This raises the question of what serial fiction posts should be tagged as, because some of the posts you untagged are now at the top of the untagged posts page [? · GW].

Maybe we could have "serial fiction" as a containment tag much like "Newsletters".

Or are we going for a norm where some posts do not merit any tag at all, and the untagged posts page is doomed to become a list of them?

comment by Ruby · 2020-08-06T18:57:40.159Z · LW(p) · GW(p)

Yeah, that's a good point. I think the correct thing is to build a way to remove them from the untagged page (perhaps a special kind of tag for the purpose) rather than trying to come up with a spurious tag.

The thing about HPMOR is it's a sequence that's easy to see from the Sequence buttons on the page. A tag doesn't add much, and tagging ~100+ posts won't really help it.

For now, I think it's best to ignore and trust the team will save the untagged posts page from the non-taggables!

comment by Ruby · 2020-08-06T20:21:16.411Z · LW(p) · GW(p)

Raemon surfaces some other reasoning (paraphrased):

Contention: all fiction posts should be tagged Fiction

  • We don't want to see every chapter of HPMOR on the Fiction tag, but it is useful to have every chapter of HPMOR have a link to the Fiction tag.
  • On the Fiction tag we can solve having a gazillion posts with relevance ordering. The first chapter in things should get upvoted and later chapters can languish at the bottom of the list and that'll be fine.

That makes sense to me? What do people think?

Thanks and sincere apologies to Gyrodiot, who didn't skip a beat in executing the previous norm.

comment by Gyrodiot · 2020-08-07T09:39:54.046Z · LW(p) · GW(p)

Tagging everything makes sense to me as well, and, yes, the first installments should be relevance-boosted.

I perceive the consensus to have shifted in favor of the mass-tagging, which will begin soon. I'll report back.

Edit: all of HPMOR, 3WC, TBC have been tagged, and the tag description has been reupdated. Please boost the first chapters, and standalone pieces!

comment by Ruby · 2020-08-07T16:08:26.975Z · LW(p) · GW(p)

Woop!

comment by abramdemski · 2020-08-02T19:40:41.577Z · LW(p) · GW(p)

Is the "site meta [? · GW]" tag taking the place of https://www.lesswrong.com/meta [? · GW]?

comment by Ruby · 2020-08-02T20:20:55.458Z · LW(p) · GW(p)

I believe yes, though we haven't gotten around to fully deprecating/redirecting, etc.

comment by abramdemski · 2020-08-03T15:41:34.585Z · LW(p) · GW(p)

I'm curious how it'll work. I felt like meta content was pretty hidden before. If there's a desire to make meta stuff hidden by default, maybe it could be a hidden-by-default tag, but controlled via the same tag filtering as everything else, so that it's pretty obvious that it's hidden and how to un-hide it. Or maybe hiding it is bad.

comment by Ruby · 2020-08-03T17:21:41.859Z · LW(p) · GW(p)

My personal sense is that usually meta stuff should be neither hidden nor promoted, excluding announcements that should get extra exposure temporarily. I definitely want people to feel an affordance to discuss site meta, and also to see the thinking behind decisions and past records.

I wasn't around when past decisions about meta were made, nor am I completely sure what others on the team think. We are, of course, open to feedback from others.

comment by Gyrodiot · 2020-07-31T12:25:28.116Z · LW(p) · GW(p)

I created the Growth Stories [? · GW] tag, but that may have been a mistake, since the Postmortem & Retrospectives [? · GW] tag already exists. Apologies!

comment by mr-hire · 2020-08-04T03:22:26.931Z · LW(p) · GW(p)

I created a "Case Study" tag which seems to overlap with both of these a bit?  Definitely seems like it could be different.

comment by Ruby · 2020-08-04T04:03:57.885Z · LW(p) · GW(p)

Those both feel distinct to me from P&R (if anything, Postmortems & Retrospectives is too big, idk). Growth Stories has ended up with several posts that seem like a good collection and don't really fit under P&R. The same could be true for Case Study, except I don't know if we have enough posts to justify it? As a general heuristic, we don't want tags unless they've got several posts in them. (And we won't add them to the Tag Portal until they do, barring very rare exceptions.)

comment by Gyrodiot · 2020-08-06T14:44:05.382Z · LW(p) · GW(p)

Ah, and now there's Updated Beliefs (examples of) [? · GW], which is less about personal growth in rationality skill, and more about evolution of personal beliefs and the updating process.

Slightly different!

comment by Yoav Ravid · 2020-08-07T07:19:58.372Z · LW(p) · GW(p)

I made an Epistemic Spot Check [? · GW] tag. I wasn't sure if I should create it at first, because although the content fits posts from anyone (I myself am thinking of making a post in that style), it currently only has content from Elizabeth. I decided to go ahead and create it anyway, even if just to experiment. Also, only after creating it did I remember that there's an Epistemic Review [? · GW] tag, oh well.

comment by Multicore (KaynanK) · 2020-07-31T01:56:16.831Z · LW(p) · GW(p)

I think there should be a tag for discussion of present-day AI progress outside of the context of alignment. For example "Understanding Deep Double Descent" https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT [? · GW] . Right now the only tag for that is the core tag "AI", which is too broad.

But I'm not sure what to call it. Ideas: "Prosaic AI", "Machine Learning", "Neural Networks", "AI Progress", "AI Capabilities".

comment by Ruby · 2020-07-31T02:24:48.551Z · LW(p) · GW(p)

Also thanks for your prodigious tagging effort! Let me know if you want to chat (send you a DM).

comment by Ben Pace (Benito) · 2020-07-31T02:17:45.774Z · LW(p) · GW(p)

I'd use the machine learning tag [? · GW].