[SEE NEW EDITS] No, *You* Need to Write Clearer

post by Nicholas / Heather Kross (NicholasKross) · 2023-04-29T05:04:01.559Z · LW · GW · 65 comments

This is a link post for https://www.thinkingmuchbetter.com/nickai/fieldbuilding/no-you-need-to-write-clearer.html


This post is aimed solely at people in AI alignment/safety.

EDIT 3 October 2023: This post did not even mention, let alone account for, how somebody should post half-baked/imperfect/hard-to-describe [LW · GW]/fragile [LW · GW] alignment ideas. Oops.

LessWrong as a whole is generally seen as geared more towards "writing up ideas in a fuller form" than "getting rapid feedback on ideas". Here are some ways one could plausibly get timely feedback from other LessWrongers on new ideas:

EDIT 2 May 2023: In an ironic unfortunate twist, this article itself has several problems relating to clarity. Oops. The big points I want to make obvious at the top:

Now, back to the post...

So I was reading this post [LW · GW], which basically asks "How do we get Eliezer Yudkowsky to realize this obviously bad thing he's doing, and either stop doing it or go away?"

That post was linking this tweet, which basically says "Eliezer Yudkowsky is doing something obviously bad."

Now, I had a few guesses as to the object-level thing that Yudkowsky was doing wrong. The person who made the first post said this:

he's burning respectability that those who are actually making progress on his worries need. he has catastrophically broken models of social communication and is saying sentences that don't mean the same thing when parsed even a little bit inaccurately. he is blaming others for misinterpreting him when he said something confusing. etc.

A-ha! A concrete explanation!

...

Buried in the comments. As a reply to someone innocently asking what EY did wrong.

Not in the post proper. Not in the linked tweet.

The Problem

Something about this situation got under my skin, and not just for the run-of-the-mill "social conflict is icky" reasons.

Specifically, I felt that if I didn't write this post, and directly get it in front of every single person involved in the discussion... then not only would things stall, but the discussion might never get better at all.

Let me explain.

Everyone, everyone, literally everyone in AI alignment is severely wrong about at least one core thing, and disagreements still persist on seemingly-obviously-foolish things.

This is because the field is "pre-paradigmatic". That is, we don't have many common assumptions that we can all agree on, no "frame" that we all think is useful.

In biology, they have a paradigm involving genetics and evolution and cells. If somebody shows up saying that God created animals fully-formed... they can just kick that person out of their meetings! And they can tell them "go read a biology textbook".

If a newcomer disagrees with the baseline assumptions, they need to either learn them, challenge them (using other baseline assumptions!), or go away.

We don't have that luxury.

AI safety/alignment is pre-paradigmatic. Every [LW · GW] word in this [? · GW] sentence [LW · GW] is [LW · GW] a [LW · GW] hyperlink [LW · GW] to [LW · GW] an [LW · GW] AI [LW · GW] safety [LW · GW] approach [LW · GW]. Many of them overlap. Lots of them are mutually-exclusive. Some of these authors are downright surprised and saddened that people actually fall for the bullshit in the other paths.

Many of these people have even read the same Sequences.

Inferential gaps are hard to cross [LW · GW]. In this environment, the normal discussion norms [LW · GW] are necessary but not sufficient.

What You, Personally, Need to Do Differently

Write super clearly and super specifically.

Be ready and willing to talk and listen, on levels so basic that without context they would seem condescending. "I know the basics, stop talking down to me" is a bad excuse when the basics are still not known.

Draw diagrams. Draw cartoons. Draw flowcharts with boxes and arrows. The cheesier, the more "obvious", the better.

"If you think that's an unrealistic depiction of a misunderstanding that would never happen in reality, keep reading."

-Eliezer Yudkowsky, about something else [LW · GW].

If you're smart, you probably skip some steps when solving problems. That's fine, but don't skip writing them down! A skipped step will confuse somebody. Maybe that "somebody" needed to hear your idea.

Read The Sense of Style by Steven Pinker. You can skip chapter 6 and the appendices, but read the rest. Know the rules of "good writing". Then make different tradeoffs, sacrificing beauty for clarity. Even when the result is "overwritten" or "repetitive".

Make it obvious which (groupings of words) within (the sentences that you write) belong together. This helps people "parse" your sentences.

Explain the same concept in multiple different ways. Beat points to death before you use them.

Link (pages that contain your baseline assumptions) liberally. When you see one linked, read it at least once.

Use cheesy "A therefore B" formal-logic syllogisms, even if you're not at "that low a level of abstraction". Believe me, it still makes everything clearer.

Repeat your main points. Summarize your main points. Use section headers and bullet points and numbered lists. Color-highlight and number-subscript the same word when it's used in different contexts ("apple1 is not apple2").

Use italics, as well as commas (and parentheticals!), to reduce the ambiguity in how somebody should parse a sentence when reading it. Decouple your emotional reaction to what you're reading, and then still write with that in mind.

Read charitably, write defensively.

Do all of this to the point of self-parody. Then maybe, just maybe, someone will understand you.

"Can't I just assume my interlocutor is intelligent/informed/mature/conscientious?"

No.

People have different baseline assumptions. People have different intuitions that generate their assumptions. The rationality/Effective-Altruist/AI-safety community, in particular, attracts people with very unbalanced skills. I think it's because a lot of us have medical-grade mental differences [LW(p) · GW(p)].

General intelligence factor g probably exists, but it doesn't account for 100% of someone's abilities. Some people have autism, or ADHD, or OCD, or depression, or chronic fatigue, or ASPD, or low working memory, or emotional reactions to thinking about certain things. Some of us have multiple of these at the same time.

Everyone's read a slightly (or hugely!) different set of writings. Many have interpreted them in different ways. And good luck finding 2 people who have the same opinion-structure regarding the writings.

Doesn't this advice contradict the above point to "read charitably"? No. Explain things like you would to a child: assume they're not trying to hurt you, but don't assume they know much of anything.

We are a group of elite, hypercompetent, clever, deranged... children. You are not an adult talking to adults, you are a child who needs to write like a teacher to talk to other children. That's what "pre-paradigmatic" really means.

Well-Written Emotional Ending Section

We cannot escape the work of gaining clarity, when nobody in the field actually knows the basics (because the basics are not known by any human).

The burden of proof, of communication, of clarity, of assumptions... is on you. It is always on you, personally, to make yourself blindingly clear. We can't fall back on the shared terminology of other fields because those fields have common baseline assumptions (which we don't).

You are always personally responsible for making things laboriously and exhaustingly clear, at all times, when talking to anyone even a smidge outside your personal bubble.

Think of all your weird, personal thoughts. Your mental frames, your shorthand, your assumptions, your vocabulary, your idiosyncrasies.

No other human on Earth shares all of these with you.

Even a slight gap between your minds, is all it takes to render your arguments meaningless, your ideas alien, your every word repulsive.

We are, without exception, alone in our own minds.

A friend of mine once said he wanted to make, join, and persuade others into a hivemind. I used to think this was a bad idea. I'm still skeptical of it, but now I see the massive benefit: perfect mutual understanding for all members.

Barring that, we've got to be clear. At least out of kindness.

65 comments

Comments sorted by top scores.

comment by ryan_greenblatt · 2023-04-29T18:17:46.513Z · LW(p) · GW(p)

I can't tell if this post is trying to discuss communicating about anything related to AI or alignment or is trying to more specifically discuss communication aimed at general audiences. I'll assume it's discussing arbitrary communication on AI or alignment.

I feel like this post doesn't engage sufficiently with the costs associated with high-effort writing and the alternatives to targeting arbitrary LessWrong users interested in alignment.

For instance, when communicating research it's cheaper to instead just communicate to people who are operating within the same rough paradigm and ignore people who aren't sold on the rough premises of this paradigm. If this results in other people having trouble engaging, this seems like a reasonable cost in many cases.

An even narrower approach is to only communicate beliefs in person to people who you frequently talk to.

Replies from: NicholasKross, whocares, TAG
comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-03T23:11:35.929Z · LW(p) · GW(p)

I think I agree with this regarding inside-group communication, and have now edited the post to add something kind-of-to-this-effect at the top.

comment by who am I? (whocares) · 2023-04-30T17:29:03.845Z · LW(p) · GW(p)

While writing well is one of the aspects the OP focuses on, your reply doesn't address the broader point, which is that EY (and those of similar repute/demeanor) juxtaposes his catastrophic predictions with a stark lack of effective exposition and discussion of the issue and potential solutions to a broader audience. To add insult to injury, he seems to actively try to demoralize dissenters in a very conspicuous and perverse manner, which detracts from his credibility and subtly but surely nudges people further and further from taking his ideas (and those similar) seriously. He gets frustrated by people not understanding him; hence the title of the OP, which implies the source of his frustration is his own murkiness, not a lack of faculty in the people listening to him.

To me, the most obvious examples of this are his guest appearances on podcasts (namely Lex Fridman's and Dwarkesh Patel's, the only two I've listened to). Neither of these hosts is dumb, yet by the end of their respective episodes the hosts were confused or otherwise fettered, and there was palpable repulsion between the hosts and EY. Considering these are very popular podcasts, it is reasonable to assume that he agreed to appear on them to reach a wider audience. He does other things to reach wider audiences, e.g. his Twitter account and the Time Magazine article he wrote. Other people like him do similar things to reach wider audiences.

Since I've laid this out, you can probably predict my thoughts regarding the cost-benefit analysis you did. Since EY and similar folk are predicting outcomes as unfavorable as human extinction and are actively trying to recruit people from a wider audience to work on these problems, is it really a reasonable cost to continue going about this as they have?

Considering the potential impact on the field of AI alignment and the recruitment of individuals who may contribute meaningfully to addressing the challenges currently faced, I would argue that the cost of improving communication is more than justified. EY and similar figures should strive to balance efficiency in communication with the need for clarity, especially when the stakes are so high.

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2023-04-30T20:59:26.374Z · LW(p) · GW(p)

I agree that EY is quite overconfident and I think his arguments for doom are often sloppy and don't hold up. (I think the risk is substantial, but often the exact arguments EY gives don't work.) And his communication often fails to meet basic bars for clarity. I'd also probably agree with 'if EY were able to do so, improving his communication and arguments in a variety of contexts would be extremely good'. And specifically not saying crazy-sounding shit which is easily misunderstood would probably be good (there are some real costs here too). But I'm not sure this is at the top of my asks list for EY.

Further I agree with "when trying to argue nuanced complex arguments to general audiences/random people, doing extremely high effort communication is often essential".

All this said, this post doesn't differentiate between communication to a general audience and other communication about AI. I assumed it was talking about literally all alignment/AI communication and wanted to push back on this. There are real costs to better communication, and in many cases those costs aren't worth it.

My comment was trying to make a relatively narrow and decoupled point (see decoupling norms etc.).

comment by TAG · 2023-05-14T13:58:59.603Z · LW(p) · GW(p)

If you think that AI is going to kill everyone, sooner or later you are going to have to communicate that to everyone. That doesn't mean every article has to be at the highest level of comprehensibility, but it does mean you shouldn't end up with the in-group problem of being unable to communicate with outsiders at all.

comment by Evan Hockings · 2023-04-29T15:29:41.751Z · LW(p) · GW(p)

Agreed—thanks for writing this. I have the sense that there's somewhat of a norm that goes like 'it's better to publish something than not, even if it's unpolished', and while this is not wrong, exactly, I think those who are doing this professionally, or seek to do this professionally, ought to put in the extra effort to polish their work.

I am often reminded of this Jacob Steinhardt comment [LW(p) · GW(p)]. 

Researchers are, in a very substantial sense, professional writers. It does no good to do groundbreaking research if you are unable to communicate what you have done and why it matters to your field. I hope that the work done by the AI existential safety field will attract the attention of the broader ML community; I don't think we can do this alone. But for that to happen, there needs to be good work and it must be communicated well.

Replies from: lahwran, NicholasKross
comment by the gears to ascension (lahwran) · 2023-04-30T04:04:34.738Z · LW(p) · GW(p)

Researchers are, in a very substantial sense, professional writers.

Wow, this is a quote for the ages.

comment by Nicholas / Heather Kross (NicholasKross) · 2023-04-29T19:22:41.947Z · LW(p) · GW(p)

I've kinda gone back-and-forth on this, since I often have low energy, yet ideas to express.

Since we already use "epistemic status" labels, I could imagine labels like "trying to clarify" VS "just getting an idea out there". Some epistemic-statuses kinda do that (e.g. "strong conviction, weakly held" or "random idea").

comment by Seth Herd · 2023-05-03T03:20:47.741Z · LW(p) · GW(p)

I've gotta say, I think you're right about the problem, but more wrong than right about the solution. Writing more detailed pieces is rarely the way to communicate clearly outside of the community, because few readers will carefully read all of it. Writing better is often writing more concisely, and using intuition pumps and general arguments that are likely to resonate with your particular target audience. It's also anticipating their emotional hangups with the issue and addressing them.

Not everyone is as logical as LW members, and even rationalists don't read in detail when they feel that the topic is probably silly.

Replies from: NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-03T23:06:03.490Z · LW(p) · GW(p)

This advice is helpful in addition to mine, and yeah, my advice is probably worth reversing sometimes (though not like half the time).

Replies from: Seth Herd
comment by Seth Herd · 2023-05-04T17:19:04.111Z · LW(p) · GW(p)

You noted that he's using sentences that mean something totally different when parsed even a little bit wrong, and I think that's right and an insightful way to look at possible modes of communication failure.

comment by dr_s · 2023-04-30T15:01:12.171Z · LW(p) · GW(p)

If you're smart, you probably skip some steps when solving problems. That's fine, but don't skip writing them down! A skipped step will confuse somebody. Maybe that "somebody" needed to hear your idea.

Am reminded of those odious papers or physics textbooks that will jump three or four steps between two equations with some handwaving: "it's trivial", "obviously", or "left as an exercise to the reader". As the reader, I have sometimes wasted hours or days of my research time on that "exercise", time that could have been spent doing actual new research rather than trying to divine what the Hell the original authors were thinking or assuming that they didn't write down explicitly.

comment by romeostevensit · 2023-04-29T17:21:13.717Z · LW(p) · GW(p)

Be ready and willing to talk and listen, on levels so basic that without context they would seem condescending. "I know the basics, stop talking down to me" is a bad excuse when the basics are still not known.

The number of defenses people have against this sort of thing is pretty obvious in other difficult areas like phenomenology.

comment by LVSN · 2023-04-29T18:07:32.198Z · LW(p) · GW(p)

"Always remember that it is impossible to speak in such a way that you cannot be misunderstood: there will always be some who misunderstand you."
― Karl Popper

A person can rationalize the existence of causal pathways where people end up not understanding things that you think are literally impossible to misunderstand, and then very convincingly pretend that such a pathway is what led them to where they are,

and there is also the possibility that someone will follow such a causal pathway into sincerely misunderstanding you, and that you will falsely accuse them of pretending to misunderstand.

comment by Viliam · 2023-04-29T15:21:39.548Z · LW(p) · GW(p)

Bah, writing simply and clearly is low status, obscure scripto est summus status.

(...is the instinct you need to overcome.)

Replies from: dr_s
comment by dr_s · 2023-04-30T15:03:48.737Z · LW(p) · GW(p)

Past a certain point it circles around and writing clearly becomes high status again. Look at Feynman's Lectures. There's a specific kind of self-assured and competent expert that will know that talking in simple words about their topic will not make it any less obvious that they still know a shit-ton more about it than anyone else in the room, and that is how they can even make it sound simple.

Replies from: Viliam
comment by Viliam · 2023-05-01T16:07:55.594Z · LW(p) · GW(p)

This is called counter-signalling, and it usually only works if everyone already knows that you are an expert (and the ones who don't know, they get social signals from the others).

Imagine someone speaking just as simply as Feynman, but you are told that the person is some unimportant elementary-school teacher. Most people would probably conclude "yes, this guy knows his subject well and can explain it simply, which deserves respect, but of course he is incomparable to the actual scientists". On the other hand, someone speaking incomprehensibly will probably immediately be perceived as a member of the scientific elite (unless you have a reason to suspect a crackpot).

comment by tailcalled · 2023-04-29T09:47:48.588Z · LW(p) · GW(p)

This post made me start wondering - have the shard theorists written posts about what they think is the most dangerous realistic alignment failure?

comment by the gears to ascension (lahwran) · 2023-04-29T06:35:09.683Z · LW(p) · GW(p)

if this was most inspired by my post being super confusing and vague - yeah, it was, and I set out knowing that; more details in a comment I added on my post. https://www.lesswrong.com/posts/XgEytvLc6xLK2ZeSy/does-anyone-know-how-to-explain-to-yudkowsky-why-he-needs-to?commentId=hdoyZDQNLkSj96DJJ [LW(p) · GW(p)]

but of course agreed on the object level. the post was not intended to be "I know what he's doing wrong, here's what it is". it was a request for comments - that perhaps could have been clearer. It was very much "yeah, so, I don't know what I'm trying to say, help?" not a coherent argument.

Replies from: ChristianKl, NicholasKross
comment by ChristianKl · 2023-04-29T12:19:06.814Z · LW(p) · GW(p)

If you personally criticize other people publicly, doing it in a super confusing and vague way while calling them a fool, that's bad.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-29T12:22:35.517Z · LW(p) · GW(p)

"no criticizing unless you know what it is you're criticizing. all discussion must already know what it's saying." nah. the karma is negative - that's fine. I'm not deleting it, there's nothing wrong with it being public. it's fine and normal to say vague things before non-vague things actually. the thing I'm attempting to describe is a fact about the impact of yudkowsky's communication; I'm not at all the only one who has observed it.

Replies from: ChristianKl
comment by ChristianKl · 2023-04-29T12:27:07.625Z · LW(p) · GW(p)

There are a lot of ways to criticize other people without calling them a 'fool'. 

If you don't know what you are saying, then it makes sense to be humble about what you are saying and not pretend that you have certainty. 

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-29T12:32:36.357Z · LW(p) · GW(p)

yeah ok I see some issue there. fair enough.

comment by Nicholas / Heather Kross (NicholasKross) · 2023-04-29T19:26:54.448Z · LW(p) · GW(p)

It was very much "yeah, so, I don't know what I'm trying to say, help?" not a coherent argument.

Yeah! I think the community should look into either using "epistemic status" labels to make this sort of thing clearer, or a new type of label (like a "trying to be clear" vs "random idea" label).

comment by zeshen · 2023-04-30T17:00:11.184Z · LW(p) · GW(p)

I feel like a substantial amount of the disagreement between alignment researchers is not object-level but semantic, and I remember seeing instances where person X writes a post about how he/she disagrees with a point that person Y made, with person Y responding that that wasn't even the point at all. In many cases, it appears that simply saying what you don't mean [LW · GW] could have solved a lot of the unnecessary misunderstandings.

comment by ryan_greenblatt · 2023-04-29T18:18:33.801Z · LW(p) · GW(p)

Everyone, everyone, literally everyone in AI alignment is severely wrong about at least one core thing, and disagreements still persist on seemingly-obviously-foolish things.

If by 'severely wrong about at least one core thing' you just mean 'systemically severely miscalibrated on some very important topic', then my guess is that many people operating in the rough prosaic alignment paradigm probably don't suffer from this issue. It's just not that hard to be roughly calibrated. This is perhaps a random technical point.

comment by Astynax · 2023-04-30T20:43:38.954Z · LW(p) · GW(p)

Apparently clarity is hard. Because although I agree that it's essential to communicate clearly, it took significant effort to wrap my head around this post and identify its thrust. I thought I had it eventually, but looking at the comments it seems I wasn't the only one who wasn't sure.

I am not saying this to be snarky. I find this to be one of the clearer posts on LessWrong; I am usually lost in jargon I don't know. (Inferential gaps? General intelligence factor g?) But despite its relative clarity, it's still a slog.

I still admire the effort, and hope everyone will listen.

Replies from: NicholasKross, Astynax
comment by Nicholas / Heather Kross (NicholasKross) · 2023-04-30T20:47:07.739Z · LW(p) · GW(p)

Thank you! Out of curiosity, which parts of this post made it harder to wrap your head around? (If you're not sure, that's also fine.)

Replies from: Astynax
comment by Astynax · 2023-05-03T02:47:12.712Z · LW(p) · GW(p)

Terms I don't know: inferential gaps, general intelligence factor g, object-level thing, opinion-structure. There are other terms I can figure but I have to stop a moment: medical grade mental differences, baseline assumptions. I think that's most of it.

At the risk of going too far, I'll paraphrase one section with hopes that it'll say the same thing and be more accessible. (Since my day job is teaching college freshmen, I think about clarity a lot!)

--

"Can't I just assume my interlocutor is intelligent?"

No.

People have different basic assumptions. People have different intuitions that generated those assumptions. This community in particular attracts people with very unbalanced skills (great at reasoning, not always great at communicating). Some have autism, or ADHD, or OCD, or depression, or chronic fatigue, or ASPD, or low working memory, or emotional reactions to thinking about certain things, or multiple issues at once. 

Everyone's read different things in the past, and interpreted them in different ways. Good luck finding 2 people who have the same opinion regarding what they've read.

Doesn't this advice contradict the above point to "read charitably," to try to assume the writer means well? No. Explain things like you would to a child: assume they're not trying to hurt you, but don't assume they know what you're talking about.

In a field as new as this, in which nobody really gets it yet, we're like a group of elite, hypercompetent, clever, deranged... children. You are not an adult talking to adults, you are a child who needs to write very clearly to talk to other children. That's what "pre-paradigmatic" really means.

-- 

What I tried to do here was replace words and phrases that required more thought ("writings" -> "what they've read"), and to explain those that took a little thought ("read charitably"). IDK if others would consider this clearer, but at least that's the direction I hope to go in. Apologies if I took this too far.

Replies from: NicholasKross, NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-03T23:11:02.869Z · LW(p) · GW(p)

No worries, that makes sense!

comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-03T23:06:37.437Z · LW(p) · GW(p)

No worries, this is definitely helpful!

comment by Astynax · 2023-04-30T20:46:16.771Z · LW(p) · GW(p)

Actually, I wonder if we might try something more formal. How about a norm that when a poster sees a comment asking "what's that term?", the poster edits the post to define the term where it's used?

comment by Seth Herd · 2023-04-29T05:41:09.193Z · LW(p) · GW(p)

This has some really insightful content. I agree with the sentiment that good communication is on you, the reader and future writer. On me as a writer.

But it's missing a huge factor. You're assuming that someone will read your prose. That is not a given. In fact, whether someone reads at all, and if they do, with a hostile or friendly mindset, are the most important things.

Bookmarking for reference. We need to write better. How to do that is not simple, but it is worthy of thought and discussion.

Replies from: NicholasKross, TAG
comment by Nicholas / Heather Kross (NicholasKross) · 2023-04-29T19:29:51.642Z · LW(p) · GW(p)

Thanks!

The "It's on me/you" part kinda reminds me of a quote I think of, from a SlateStarCodex post:

This obviously doesn’t absolve the Nazis of any blame, but it sure doesn’t make the rest of the world look very good either.

I was trying to make sense of it, and came up with an interpretation like "The situation doesn't remove blame from Nazis. It simply creates more blame for other people, in addition to the blame for the Nazis."

Likewise, my post doesn't try to absolve the reader of burden-of-understanding. It just creates (well, points-out) more burden-of-being-understandable, for the writer.

comment by TAG · 2023-04-30T12:26:44.045Z · LW(p) · GW(p)

You can increase the chances of something being read by keeping it short. There's about no chance that random, moderately interested people are going to read the complete sequences. So there is a need for distillation into concise but complete arguments. Still, after all this time.

Replies from: Seth Herd
comment by Seth Herd · 2023-05-01T03:42:13.676Z · LW(p) · GW(p)

Exactly. What I was thinking of but not expressing clearly is that the strategy this post focused on, writing more to be clearer, is not a good strategy for many purposes.

comment by ChristianKl · 2023-05-01T23:14:42.693Z · LW(p) · GW(p)

Use italics, as well as commas (and parentheticals!), to reduce the ambiguity in how somebody should parse a sentence when reading it. Decouple your emotional reaction to what you're reading, and then still write with that in mind.

When it comes to italics, it's worth thinking about the associations. Style guides like The Chicago Manual of Style don't recommend adding italics to words like "decouple" and "still".

The genre of texts that puts italics around words like that is sleazy online sales websites. I remember someone writing on LessWrong a while ago that using italics like that is a tell for crackpot writing.

If you want a piece of writing to be taken seriously, overusing italics can be harmful. 

Replies from: steve2152, SaidAchmiz
comment by Steven Byrnes (steve2152) · 2023-05-02T13:14:30.678Z · LW(p) · GW(p)

For my part, I don’t have that association. I associate italics with “someone trying to make it easy for me to parse what they’re saying”. I tend to associate it with blog posts, honestly. I wish papers and textbooks would use it more.

Here’s a pretty typical few sentences from Introduction to Electrodynamics, a textbook by David Griffiths:

The electric field diverges away from a (positive) charge; the magnetic field line curls around a current (Fig. 5.44). Electric field lines originate on positive charges and terminate on negative ones; magnetic field lines do not begin or end anywhere—to do so would require a nonzero divergence. They typically form closed loops or extend out to infinity. To put it another way, there are no point sources for B, as there are for E; there exists no magnetic analog to electric charge. This is the physical content of the statement ∇ · B = 0. Coulomb and others believed that magnetism was produced by magnetic charges (magnetic monopoles, as we would now call them), and in some older books you will still find references to a magnetic version of Coulomb’s law, giving the force of attraction or repulsion between them. It was Ampère who first speculated that all magnetic effects are attributable to electric charges in motion (currents). As far as we know, Ampère was right; nevertheless, it remains an open experimental question whether magnetic monopoles exist in nature (they are obviously pretty rare, or somebody would have found one), and in fact some recent elementary particle theories require them. For our purposes, though, B is divergenceless, and there are no magnetic monopoles. It takes a moving electric charge to produce a magnetic field, and it takes another moving electric charge to “feel” a magnetic field.

Griffiths has written I think 3 undergrad physics textbooks and all 3 are among the most widely-used and widely-praised textbooks in undergrad physics. I for one find them far more readable and pedagogical than other textbooks on the same topics (of which I’ve also read many). He obviously thinks that lots of italics makes text easier to follow—I presume because somewhat-confused students can see where the emphasis / surprise is, along with other aspects of sentence structure. And I think he’s right!

comment by Said Achmiz (SaidAchmiz) · 2023-05-01T23:38:18.643Z · LW(p) · GW(p)

The genre of texts that puts italics around words like that is sleazy online sale websites. I remember someone writing on LessWrong a while ago that using italics like that is a tell for crackpot writing.

Is this actually true? I don’t think I’ve found this to be true (and it’s the sort of thing I notice, as a designer).

Here’s type designer Matthew Butterick, in his book Butterick’s Practical Typography, on the use of italic and bold:

Bold or italic—think of them as mutually exclusive. That is the rule #1.

Rule #2: use bold and italic as little as possible. They are tools for emphasis. But if everything is emphasized, then nothing is emphasized. Also, because bold and italic styles are designed to contrast with regular roman text, they’re somewhat harder to read. Like all caps, bold and italic are fine for short bits of text, but not for long stretches.

With a serif font, use italic for gentle emphasis, or bold for heavier emphasis.

If you’re using a sans serif font, skip italic and use bold for emphasis. It’s not usually worth italicizing sans serif fonts—unlike serif fonts, which look quite different when italicized, most sans serif italic fonts just have a gentle slant that doesn’t stand out on the page.

Foreign words used in English are sometimes italicized, sometimes not, depending on how common they are. For instance, you would italicize your bête noire and your Weltanschauung, but neither your croissant nor your résumé. When in doubt, consult a dictionary or usage guide. Don’t forget to type the accented characters correctly.

(There’s also a paragraph demonstrating overuse of emphasis styling, which I can’t even replicate on Less Wrong because there’s no underline styling on LW, as far as I can tell.)

So using italics for emphasis too much is bad, but using it at all is… correct, because sometimes you do in fact want to emphasize things. According to Butterick. And pretty much every style guide I’ve seen agrees; and that’s how professional writers and designers write and design, in my experience.

Replies from: ChristianKl
comment by ChristianKl · 2023-05-01T23:44:45.935Z · LW(p) · GW(p)

I don't think there's anything wrong with putting italics around some words. The OP violates both rules 1 and 2.

It has sentences like:

Everyone, everyone, literally everyone in AI alignment is severely wrong about at least one core thing, and disagreements still persist on seemingly-obviously-foolish things.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-05-01T23:50:49.285Z · LW(p) · GW(p)

Yep, that definitely violates #1, no argument.

As far as #2 goes, well, presumably the author would disagree…? (Click on the Practical Typography link I posted for an—admittedly exaggerated, but not by much, I assure you!—example of what “overuse of emphasis” really looks like. It’s pretty bad! Much, much worse than anything in the OP, which—aside from the combination of bold and italic, which indeed is going too far—mostly only skirts the edges of excess, in this regard.)

Anyway, my point was primarily about the “sleazy online sale websites” / “crackpot writing” association, which I think is just mostly not true. Sites / writing like that is more likely to overuse all-caps, in my experience, or to look like something close to Butterick’s example paragraph. (That’s not to say the OP couldn’t cut back on the emphasis somewhat—I do agree with that—but that’s another matter.)

Replies from: NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-03T00:21:49.316Z · LW(p) · GW(p)

I agree the highlighted sentence in my article definitely breaks most rules about emphasis fonts (though not underlining!). My excuse is: that one sentence contains the core kernel of my point. The other emphasis marks (when not used for before-main-content notes) are to guide reading the sentences out-loud-in-your-head, and only use italics.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-05-03T02:18:05.490Z · LW(p) · GW(p)

My recommendation is to use bold only in that case. Bold + italic is generally only needed when you need nested emphasis, e.g. bold within an italicized section, or vice-versa.

comment by TAG · 2023-04-30T12:43:38.321Z · LW(p) · GW(p)

if you’re smart, you probably skip some steps when solving problems. That’s fine, but don’t skip writing them down! A skipped step will confuse somebody. Maybe that “somebody” needed to hear your idea.

This is something I've noticed over and over, to the extent that I've never seen a distillation that's both short and complete.

Most recently, here.

https://www.lesswrong.com/posts/FDnLNvNqDifsviJyW/a-concise-sum-up-of-the-basic-argument-for-ai-doom?commentId=krHAdQMhGnLtxkt7C [LW(p) · GW(p)]

The odd thing is that many people here are coders, and the skill of making everything explicit in writing is similar to the skill of defining variables before you use them.

comment by awg · 2023-04-29T15:54:32.525Z · LW(p) · GW(p)

Make it obvious which (groupings of words) within (the sentences that you write) belong together. This helps people "parse" your

 

You're missing some words here.

Replies from: NicholasKross
comment by Review Bot · 2024-05-16T16:13:03.384Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by jackhullis · 2023-05-04T23:59:36.165Z · LW(p) · GW(p)

Whilst perhaps introducing some issues related to oversimplification, I feel like GPT could be used to somewhat help with this problem. An initial idea would be to embed all posts to make catching up on the literature / filling knowledge gaps easier. But I'm sure there are other more creative solutions that could be used too.
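To make the "embed all posts" idea a bit more concrete, here is a minimal sketch assuming the sentence-transformers library and a small open embedding model; the toy post snippets and the most_similar helper are illustrative assumptions, not anything proposed in the thread.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy stand-ins for post bodies; a real version would pull every post's text.
posts = [
    "Inferential gaps are hard to cross; write down the steps you skipped.",
    "Shard theory: values form as contextual decision-influences during training.",
    "Infra-Bayesianism extends Bayesian reasoning to handle non-realizability.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
# Unit-length embeddings, so a dot product is cosine similarity.
post_vecs = model.encode(posts, normalize_embeddings=True)

def most_similar(query: str, k: int = 2) -> list[tuple[float, str]]:
    """Return the k posts whose embeddings are closest to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = post_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [(float(scores[i]), posts[i]) for i in top]

if __name__ == "__main__":
    for score, text in most_similar("how do values form inside an agent?"):
        print(f"{score:.2f}  {text}")
```

A fuller version would embed each post once, store the vectors, and retrieve (or hand to GPT) the nearest posts whenever someone asks where an idea has already been discussed.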

comment by Gunnar_Zarncke · 2023-04-29T23:19:55.495Z · LW(p) · GW(p)

AI safety/alignment is pre-paradigmatic. Every [LW · GW] word in this [? · GW] sentence [LW · GW] is [LW · GW] a [LW · GW] hyperlink [LW · GW] to [LW · GW] an [LW · GW] AI [LW · GW] safety [LW · GW] approach [LW · GW]. Many of them overlap. ...

Yes. But it doesn't matter, because all (well, I guess, most) of the authors agree that AI Alignment is needed, and understand deeply why, because they have read the Sequences.

Many of these people have even read the same Sequences.

Maybe not all of them, and maybe a part is just trusting the community. But the community shares parts that are not pre-paradigmatic. Sure, the solution is open, unfortunately, very much so. And on that I agree. But when people outside of the community complain, that is very much like creationists coming to a biology class - except they can't be kicked out so easily, because alignment is a niche science that suddenly made it into the spotlight.

 

The rest of this will be about how the community here does do the things you demand. Yudkowsky specifically. Beating the same points as you.

Write super clearly and super specifically.

See: The Sequences [? · GW]. The original ones. Those published as a book [? · GW]. Nicely ordered and curated. There are reading groups about it [? · GW]. 

Be ready and willing to talk and listen, on levels so basic that without context they would seem condescending. "I know the basics, stop talking down to me" is a bad excuse when the basics are still not known.

Yes, that was the case when the Sequences were written and discussed ten years back. 

Draw diagrams. Draw cartoons. Draw flowcharts with boxes and arrows. The cheesier, the more "obvious", the better.

You mean like (from here [LW · GW])

or  (from here [? · GW])

or this (from here [LW · GW]), oh, that's not cheesy enough, how about (from here [LW · GW])

or (from here [LW · GW], or The Cartoon Guide to Löb's Theorem [LW · GW])

Or many other [? · GW].

If you're smart, you probably skip some steps when solving problems. That's fine, but don't skip writing them down! A skipped step will confuse somebody. Maybe that "somebody" needed to hear your idea.

You mean Inferential Distance [? · GW] (list of 50 posts)?

Read The Sense of Style by Steven Pinker. You can skip chapter 6 and the appendices, but read the rest. Know the rules of "good writing". Then make different tradeoffs, sacrificing beauty for clarity. Even when the result is "overwritten" or "repetitive".

You mean Human's Guide to Words [? · GW] (a 25 post sequence from 2008)?

Link (pages that contain your baseline assumptions) liberally. When you see one linked, read it at least once.

You mean, as in the Sequence?

Use cheesy "A therefore B" formal-logic syllogisms, even if you're not at "that low a level of abstraction". Believe me, it still makes everything clearer.

You mean as on Arbital? Like the Introduction to Bayes' Rule?

Repeat your main points. Summarize your main points. Use section headers and bullet points and numbered lists. Color-highlight and number-subscript the same word when it's used in different contexts ("apple1 is not apple2").

Ditto.

Do all of this to the point of self-parody. Then maybe, just maybe, someone will understand you.

Except all of that was done, and some people understood, but unfortunately, those people complaining now didn't.

Replies from: NicholasKross, TAG, NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2023-04-30T00:45:24.427Z · LW(p) · GW(p)

My point wasn't "the community never does the above things", it was "each member of the community should do these things, and more often".

EDIT: You are correct that I should've mentioned more of the prior-art on this, though, especially Human's Guide To Words.

Also, maybe LW should bring back "Sequences reruns" for that inferential distance post.

comment by TAG · 2023-04-30T12:56:33.305Z · LW(p) · GW(p)

Write super clearly and super specifically.

See: The Sequences.

Can you summarise EY's views on consciousness and on ethics?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2023-04-30T15:24:23.907Z · LW(p) · GW(p)

Luckily, Eliezer has written enough about it that you can just ask ChatGPT exactly that:

EY's Views on Consciousness and Ethics

Model: GPT-4

User: summarise EY's views on consciousness and on ethics

ChatGPT: EY, or Eliezer Yudkowsky, is a prominent artificial intelligence (AI) researcher and writer known for his work on rationality, AI alignment, and friendly AI. While I can't provide an exhaustive account of his views, I can give a brief summary of some of his key ideas on consciousness and ethics.

Consciousness:

Reductionism: Yudkowsky approaches consciousness from a reductionist perspective, which means he believes that consciousness can be understood and explained in terms of simpler, lower-level components.

No "magic": He argues against the idea that there is some sort of "magic" or inherently inexplicable aspect to consciousness. Instead, he sees it as a natural phenomenon that can be understood through scientific inquiry.

Importance of AI: Yudkowsky believes that understanding consciousness is crucial for the development of AI, particularly AI alignment, as it will help us build artificial intelligences that can comprehend and interact with human values and emotions.

Ethics:

Meta-ethics: Yudkowsky's approach to ethics is rooted in meta-ethics, which focuses on understanding the underlying nature and structure of ethics itself.

Reflective equilibrium: He advocates for the concept of "reflective equilibrium," in which we arrive at ethical conclusions by balancing and adjusting our intuitions, principles, and beliefs to achieve a coherent and stable ethical framework.

Coherent extrapolated volition (CEV): Yudkowsky has proposed the concept of CEV as a way to align AI systems with human values. CEV involves using AI to model the values that humanity would converge upon if given enough time and resources to reflect, discuss, and reason together.

Friendly AI: Yudkowsky emphasizes the importance of developing AI that is both intelligent and aligned with human values. He believes that ensuring AI safety and value alignment is a critical challenge for humanity.

In conclusion, Eliezer Yudkowsky's views on consciousness emphasize reductionism and the importance of understanding it for AI development. His ethical approach is grounded in meta-ethics, with a focus on reflective equilibrium and coherent extrapolated volition for developing friendly AI.

Replies from: TAG
comment by TAG · 2023-04-30T16:20:19.022Z · LW(p) · GW(p)

Neither of those is an actual theory. Saying consciousness is reductionistic, somehow, doesn't tell you how it works, any more than saying it is emergent, somehow. Saying the true ethics are what you get out of CEV also doesn't specify anything, because there is no gears-level specification of CEV.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2023-04-30T18:05:40.484Z · LW(p) · GW(p)

To me, it seems you are engaging with the ChatGPT summary. You can find more about it in Metaethics [? · GW], e.g.

Eliezer Yudkowsky wrote a Sequence about metaethics, the Metaethics sequence [? · GW], which Yudkowsky worried failed to convey his central point (this post by Luke [LW · GW] tried to clarify); he approached the same problem again from a different angle in Highly Advanced Epistemology 101 for Beginners [? · GW]. From a standard philosophical standpoint, Yudkowsky's philosophy is closest to Frank Jackson's moral functionalism / analytic descriptivism; Yudkowsky could be loosely characterized as moral cognitivist - someone who believes moral sentences are either true or false - but not a moral realist - thus denying that moral sentences refer to facts about the world. Yudkowsky believes that moral cognition in any single human is at least potentially about a subject matter that is 'logical' in the sense that its semantics can be pinned down by axioms, and hence that moral cognition can bear truth-values; also that human beings both using similar words like "morality" can be talking about highly overlapping subject matter; but not that all possible minds would find the truths about this subject matter to be psychologically compelling.

That there is no gears-level specification is exactly the problem he points out! We don't know how to specify human values, and I think he makes a good case pointing that out - and that it is needed for alignment.

Replies from: TAG, TAG
comment by TAG · 2023-04-30T18:23:00.831Z · LW(p) · GW(p)

That there is no gears level specification is exactly the problem he points out! We don’t know how specify human values

And we don't know that human values exist as a coherent object either. So his metaethics is "ethics is X" where X is undefined and possibly nonexistent.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2023-04-30T19:26:26.930Z · LW(p) · GW(p)

He doesn't say "ethics is X" and I disagree that this is a summary that advances the conversation. 

Replies from: TAG
comment by TAG · 2023-04-30T19:47:39.721Z · LW(p) · GW(p)

For any value of X?

comment by TAG · 2023-04-30T18:12:53.428Z · LW(p) · GW(p)

I am not asking because I want to know. I was asking because I wanted you to think about those sequences, and what they are actually saying, and how clear they are. Which you didn't.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2023-04-30T19:24:51.662Z · LW(p) · GW(p)

Why would it matter what an individual commenter says about the clarity of the Sequences? I think a better measure would be what a large number of readers think about how clear they are. We could do a poll but I think there is already a measure: The votes. But these don't measure clarity. More something like how useful people found them. And maybe that is a better measure? Another metric would be the increase in the number of readers while the Sequences were published. By that measure, esp. given the niche subject, they seem to be of excellent quality.

But just to check, I read one high-vote (130) and one low-vote (21) post from the Metaethics sequence, and I think they are clear and readable.

Replies from: TAG
comment by TAG · 2023-04-30T19:54:06.856Z · LW(p) · GW(p)

Yes, lots of people think the sequences are great. Lots of people also complain about EY's lack of clarity. So something has to give.

The fact that it seems to be hugely difficult for even favourably inclined people to distill his arguments is evidence in favour of unclarity.

Replies from: gilch, Gunnar_Zarncke
comment by gilch · 2023-04-30T20:34:18.382Z · LW(p) · GW(p)

I don't think these are mutually exclusive? The Sequences are long and some of the posts were better than others. Also, what is considered "clear" can depend on one's background. All authors have to make some assumptions about the audience's knowledge. (E.g., at minimum, what language do they speak?) When Eliezer guessed wrong, or was read by those outside his intended audience, they might not be able to fill in the gaps and clarity suffers--for them, but not for everyone.

comment by Gunnar_Zarncke · 2023-04-30T20:03:43.823Z · LW(p) · GW(p)

I agree that it is evidence to that end.

comment by Nicholas / Heather Kross (NicholasKross) · 2023-04-30T00:47:31.402Z · LW(p) · GW(p)

But when people outside of the community complain

Except all of that was done, and some people understood, but unfortunately, those people complaining now didn't.

Semi-related: is this referring to me specifically? If not, who else?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2023-04-30T15:27:20.142Z · LW(p) · GW(p)

I am sorry for this quip. It was more directed at the people on Twitter, but they are not a useful sample to react to.

Replies from: NicholasKross
comment by TAG · 2023-04-30T12:44:09.248Z · LW(p) · GW(p)