Open & Welcome Thread - January 2020

post by habryka (habryka4) · 2020-01-06T19:42:36.499Z · LW · GW · 42 comments

If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post.)

And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.

42 comments

Comments sorted by top scores.

comment by Isnasene · 2020-01-08T03:57:03.546Z · LW(p) · GW(p)

Hey y'all; I've been around for long enough -- may as well introduce myself. I've had this account for a couple of months, but I've been lurking off and on for about ten years. I think it's pretty amazing that after all that time, this community is still legit. Keep up the good work, everyone!

Things I hope to achieve through my interactions with Less Wrong:

  • Accidentally move the AI Safety field slightly forward by making a clever comment on something
  • Profound discussions (Big fan of that whole thing with Internal Family Systems, also interested in object-level discussion about how to navigate Real Life)
  • Friends? (yeah I know; internet rationality forums aren't particularly conducive to this but what're ya gonna do? I need some excuse to run away to California)

Current status (stealing Mathisco's idea): United States, just outta college, two awesome younger cousins who I spend too much time with, AI/ML capabilities research in finance, bus ride to work, trying to learn guitar.

Coolest thing I've ever done: When I was fifteen, I asked my dad for a Slim Jim and he accidentally tossed two at me at the same time. I raised my hand and caught one Slim Jim between my pinky and ring finger and the other between my middle and index finger, Wolverine-claw style.

...

PS: Is it just me or are the Open Threads kind of out of the way? My experience with Open Thread posts has been

1. See them in the same stream as regular Less Wrong posts

2. Click on them at my leisure

3. Notice that there are only a few comments (usually introductions)

4. Forget about it until the next Open Thread

As a result, I was legitimately surprised to see the last Open Thread had ~70 comments! No idea whether this was just a personal quirk of mine or a broader site-interaction pattern.

Replies from: Mathisco, habryka4, Alexei
comment by Mathisco · 2020-01-08T19:21:58.550Z · LW(p) · GW(p)

I inspired someone; yay!

Since I like profound discussions, I am now going to have to re-read IFS; it didn't fully resonate with me the first time.

I cannot come up with such a cool wolverine story, I am afraid.

Replies from: Isnasene
comment by Isnasene · 2020-01-09T01:24:52.552Z · LW(p) · GW(p)
Since I like profound discussions, I am now going to have to re-read IFS; it didn't fully resonate with me the first time.

Huzzah! To speak more broadly, I'm really interested in joining abstract models of the mind with the way that we subjectively experience ourselves. Back in the day when I was exploring psychological modifications, I would subjectively "mainline" my emotions (ie cause them to happen and become aware of them happening) and then "jam the system" (ie deliberately instigate different emotions and shove them into that experiential flow). IFS and later Unlocking The Emotional Brain (and Scott Alexander's post about it, Mental Mountains) helped confirm for me that the thing I thought I was doing was actually the thing I was doing.

I cannot come up with such a cool wolverine story, I am afraid.

No worries; you've still got time!

Replies from: Mathisco
comment by Mathisco · 2020-01-10T20:00:12.086Z · LW(p) · GW(p)

I found a link in your links to Internal Double Crux. This technique I do recognize.

I recently also tried recursively observing my thoughts, which was interesting. I look at my current thought, then I look at the thought that's looking at the first thought, etc. Until it pops, followed by a moment of stillness; then a new thought arises and I start over. Any name for that?

Replies from: Isnasene
comment by Isnasene · 2020-01-11T23:48:22.312Z · LW(p) · GW(p)

Interesting... When you do this, do you consider the experience of the thought looking at your first thought to be happening simultaneously with the experience of your first thought? If so, this would be contrary to my expectation that one only experiences one thought at a time. To quote Scott Alexander quoting Daniel Ingram:

Then there may be a thought or an image that arises and passes, and then, if the mind is stable, another physical pulse. Each one of these arises and vanishes completely before the other begins, so it is extremely possible to sort out which is which with a stable mind dedicated to consistent precision and not being lost in stories.

If you're interested in this, you might also want to check out Scott's review of Daniel's book.



Replies from: Mathisco
comment by Mathisco · 2020-01-12T16:19:16.048Z · LW(p) · GW(p)

I'll examine the link!

When you say 'one thought at a time', do you mean one conscious thought? From reading all these multi-agent models I assumed the subconscious is a collection of parallel thoughts, or at least multi-threaded.

I also interpreted the Internal Double Crux as spinning up two threads and letting them battle it out.

I recall one dream where I was two individuals at the same time.

I do consider it like two parallel thoughts, though one dominates, or at least I relate my 'self' mostly with one of them. However, how do I evaluate my subjective experience? It's not like I can open the task manager and monitor my mind's processes (though I am still undecided whether I should invest in some of those open-source EEG devices).

Edit: While reading Scott's review, I am more convinced it's multi-threading, due to the observation that there may be 'brain wave frequencies':

This is vipassana (“insight”, “wisdom”) meditation. It’s a deep focus on the tiniest details of your mental experience, details so fleeting and subtle that without a samatha-trained mind you’ll miss them entirely. One such detail is the infamous “vibrations”, so beloved of hippies. Ingram notes that every sensation vibrates in and out of consciousness at a rate of between five and forty vibrations per second, sometimes speeding up or slowing down depending on your mental state. I’m a pathetic meditator and about as far from enlightenment as anybody in this world, but with enough focus even I have been able to confirm this to be true. And this is pretty close to the frequency of brain waves, which seems like a pretty interesting coincidence.

Under this hypothesis, I would now state I have at least observed three states of multi-threading:

  • Double threading. I picked this up from a mindfulness app. You try to observe your thoughts as they appear. In essence there is one monitoring thread and one free thread.
  • Triple threads, i.e. Internal Double Crux. You have one moderator thread that monitors and balances two other debating threads.
  • Recursive threading. One thread starts another thread, which starts another, until you hit the maximum limit, which is probably related to the brainwave frequency. (A toy sketch of this follows below.)
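
To make the recursive-threading idea concrete, here's a toy Python sketch; the depth limit and the names are made-up stand-ins, not claims about actual cognition:

```python
# Toy sketch of "recursive threading": each observation spawns an
# observer of itself until some hard limit is hit, then the whole
# stack "pops" into stillness and a fresh thought starts over.
# The max_depth of 5 is an arbitrary stand-in for whatever bounds
# the real process (speculated above to relate to brainwave frequency).

def observe(thought, depth=0, max_depth=5):
    if depth == max_depth:
        return "pop -- a moment of stillness"
    return observe(f"observing({thought})", depth + 1, max_depth)

print(observe("a new thought arises"))
```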

I'll continue to investigate.

Replies from: Isnasene
comment by Isnasene · 2020-01-12T23:44:28.908Z · LW(p) · GW(p)
When you say 'one thought at a time', do you mean one conscious thought? From reading all these multi-agent models I assumed the subconscious is a collection of parallel thoughts, or at least multi-threaded.

Yes. The key factor is that, while I might have many computations going on in my brain at once, I am only ever experiencing a single thing. These things flicker into existence and non-existence extremely quickly and are sampled from a broader range of parallel, unexperienced thoughts occurring in the subconscious.

Under this hypothesis, I would now state I have at least observed three states of multi-threading:

I think it's worth hammering out the definition of a thread here. In terms of brain-subagents engaging in computational processes, I'd argue that those are always on subconsciously. When I'm watching and listening to TV, for instance, I'd describe myself as rapidly flickering between three main computational processes: a visual experience, an auditory experience, and an experience of internal monologue. There are also occasionally threads that I give less attention to -- like a muscle being too tense. But I wouldn't consider myself as experiencing all of these processes simultaneously -- instead it's more like I'm seeing a single console output that keeps switching between the data produced by each of the processes.
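
To make the console analogy concrete, here's a toy Python sketch -- the process names and attention weights are made up for illustration, not a claim about how the brain actually samples:

```python
import random

# Toy model of the "single console output" analogy: several background
# processes always produce data, but conscious attention prints only
# one of their outputs per tick -- never two at once.

background_processes = {
    "visual": lambda: "a frame of the TV show",
    "auditory": lambda: "a snippet of dialogue",
    "inner_monologue": lambda: "a comment about the plot",
    "body": lambda: "a too-tense muscle",
}

# Made-up weights: the muscle thread gets far less attention.
attention_weights = {"visual": 0.4, "auditory": 0.3,
                     "inner_monologue": 0.25, "body": 0.05}

def conscious_stream(ticks):
    """Yield one experienced 'thought' per tick."""
    names = list(background_processes)
    weights = [attention_weights[name] for name in names]
    for _ in range(ticks):
        chosen = random.choices(names, weights=weights)[0]
        yield chosen, background_processes[chosen]()

for source, content in conscious_stream(5):
    print(f"[{source}] {content}")
```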


Replies from: Mathisco
comment by Mathisco · 2020-01-13T19:12:54.572Z · LW(p) · GW(p)
I think it's worth hammering out the definition of a thread here.

Agreed. I only want to include conscious thought processes. So I am modeling myself as having a single-core conscious processor. I assume this aligns with your statement that you are only experiencing a single thing, where an experience is equivalent to "a thought during a specified time interval in your consciousness"? I take the smallest time interval that still constitutes a single thought to be the period of a conscious brainwave. This random site states a conscious brainwave frequency of 12-30 Hz, so the shortest possible thought would last just over 30 milliseconds (1/30 Hz ≈ 33 ms).

I am assuming it's temporal multithreading, with each thought lasting at least one cycle. Note that I am neither a neuroscientist nor a computer scientist, so I am probably modeling it all wrong. Nevertheless, simple toy models can often be of great help. If there's a better analogy, I am more than willing to try it out.

People are discussing this across the internet, of course; here's one example on Hacker News.

Replies from: Isnasene
comment by Isnasene · 2020-01-14T04:39:39.619Z · LW(p) · GW(p)

Yes -- this fits with my perspective. The definition of the word "thought" is not exactly clear to me, but claiming that its duration is lower-bounded by the brainwave period seems reasonable to me.

I am assuming it's temporal multithreading, with each thought lasting at least one cycle.

Yeah, it could be that our conscious attention performs temporal multithreading -- only being capable of accessing a single one of the many normally-background processes going on in the brain at once. Of course, who knows? Maybe it only feels that way because we are only a single conscious attention thread, and there are actually many threads like this in the brain running in parallel. Split-brain studies are a potential indicator that this could be true:

After the right and left brain are separated, each hemisphere will have its own separate perception, concepts, and impulses to act. Having two "brains" in one body can create some interesting dilemmas. When one split-brain patient dressed himself, he sometimes pulled his pants up with one hand (that side of his brain wanted to get dressed) and down with the other (this side did not).

-- quote from Wikipedia

People are discussing this across the internet, of course; here's one example on Hacker News.

Alternative hypothesis: the way our brain produces thought-words seems like it could in principle be predictive processing à la GPT-2. Maybe we're just bad at multitasking because switching rapidly between different topics confuses whatever brain part is instantiating the predictive processing.



comment by habryka (habryka4) · 2020-01-08T04:12:00.086Z · LW(p) · GW(p)

Welcome! (Inasmuch as that makes sense to say to someone who has been around for 10 years)

Is it just me or are the Open Threads kind of out of the way?

Open Threads should be pinned to the frontpage if you have the "Include Personal Blogposts" checkbox enabled. So for anyone who has done that, they should be pretty noticeable. Though you saying otherwise does make me update that something in the current setup is wrong. 

Replies from: Isnasene
comment by Isnasene · 2020-01-08T05:23:03.325Z · LW(p) · GW(p)
Open Threads should be pinned to the frontpage if you have the "Include Personal Blogposts" checkbox enabled. So for anyone who has done that, they should be pretty noticeable.

Thanks for that! You're right, I did not have "Include Personal Blogposts" checked. I can now see that the Open Thread is pinned. IDK if I found it back in the day, unchecked it, and forgot about it, or if that's just the default. In any case, I appreciate the clarification.

Though you saying otherwise does make me update that something in the current setup is wrong. 

Turns out the experience described above wasn't a site problem anyway; it was just my habit of going straight to the "all posts" page, instead of either a) editing my front page so "latest posts" show up higher on my screen or b) actually scrolling down to look at the latest posts. What can I say for myself except beware trivial inconveniences?

comment by Alexei · 2020-01-08T20:20:08.708Z · LW(p) · GW(p)

Can you expand on “AI/ML capabilities research in finance” or shoot me a PM?

Replies from: Isnasene
comment by Isnasene · 2020-01-09T01:01:55.172Z · LW(p) · GW(p)

Sure! I work for a financial services company (read: not quant finance). We leverage a broad range of machine-learning methodologies to create models that make various decisions across the breadth of our business. I'm involved with a) developing our best practices for model development and b) performing experiments to see if new methodologies can improve model performance.


comment by riceissa · 2020-01-16T07:42:50.761Z · LW(p) · GW(p)

I noticed that the parliamentary model of moral uncertainty can be framed as trying to import a "group rationality" mechanism into the "individual rationality" setting, to deal with the subagents/subprocesses that appear in the individual setting. But usually when the individual rationality vs. group rationality topic is brought up, it is to talk about how group rationality is much harder/less understood than individual rationality (here are two examples of what I mean). I can't quite explain it, but I find it interesting/counterintuitive/paradoxical that, given this general background, there is a reversal here, where a solution from the group rationality setting is being imported into the individual rationality setting. (I think this might be related to why I've never found the parliamentary model quite convincing, but I'm not sure.)

Has anyone thought about this, or more generally about transferring mechanisms between the two settings?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2020-01-16T09:22:06.230Z · LW(p) · GW(p)

I think with the parliamentary model, it's probably best to assume away as many of the problems with group rationality as you can.

A big source of problems in group rationality is asymmetric information, and for the parliamentary model we can just assume that all the delegates can costlessly learn everything about all the other delegates, or equivalently that they differ only in their morality and not in their information set.

Another big source of problems is that coalitional behavior can lead to arbitrary and unfair outcomes: for example if you start out with three equal individuals, and any two of them can ally with each other and beat up the third person and take their stuff, you're going to end up with an arbitrary and unfair situation. For this perhaps we can assume that the delegates just don't engage in alliance building and always vote according to their own morality without regard to strategic coalitional considerations. (Actually I'm not sure this paragraph makes sense but I've run out of time to think about it.)

I'm probably missing other important group rationality problems, but hopefully this gives you the general idea.
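
Under exactly these assumptions the model collapses into something easy to sketch. In this Python toy, the theories, credences, and scores are all invented for illustration, and the vote-trading and bargaining that make the full parliamentary model interesting are deliberately assumed away:

```python
# Minimal sketch of a parliamentary vote under the simplifying
# assumptions above: delegates share all information and never form
# coalitions, so each delegation just votes its own theory's ranking,
# weighted by the credence you assign that theory.

# Hypothetical theories, credences, and scores -- purely illustrative.
credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# How strongly each theory's delegates favor each action, on a 0-1 scale.
scores = {
    "utilitarianism": {"donate": 1.0, "volunteer": 0.6, "do_nothing": 0.0},
    "deontology":     {"donate": 0.4, "volunteer": 0.9, "do_nothing": 0.5},
    "virtue_ethics":  {"donate": 0.7, "volunteer": 1.0, "do_nothing": 0.2},
}

def parliamentary_vote(credences, scores):
    actions = next(iter(scores.values())).keys()
    tally = {a: sum(credences[t] * scores[t][a] for t in credences)
             for a in actions}
    return max(tally, key=tally.get), tally

winner, tally = parliamentary_vote(credences, scores)
print(winner, tally)  # 'volunteer' narrowly beats 'donate' (0.77 vs 0.76)
```

Even in this toy, "volunteer" edges out "donate" only because the deontology and virtue-ethics delegations jointly outvote the utilitarian one -- an outcome you would miss by just acting on your single most-credible theory.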

comment by Mathisco · 2020-01-06T20:44:04.309Z · LW(p) · GW(p)

Good day! I've been reading rationalist blogs for approximately 2 years. At this random moment, I have decided to make a LessWrong account.

Like most human beings, I suffer and struggle in life. As a rich human, like most LessWrong users I assume (do we have user stats?), I suffer in luxury.

The main struggle is where to spend my time and energy. The opportunity cost of life, I suppose. What I do:

  • Improve myself. My thinking, my energy, my health, my wealth, my career, my status.
  • Improve my nearest relationships.
  • Improve my community (a bit).
  • Improve the world (a tiny bit).

But alas, the difficulty: how to choose the right balance? Hopefully I am doing better as I go along. Though how do I measure that?

I have no intellectual answers for you I am afraid. I'll let you know if I find them.

Current status: Europe, 30+ years, 2 kids, physics PhD (a bit pointless, but fun), AI/ML-related work at a high-tech hardware company, bicycle to work, dabbled some in social entrepreneurship (failure).

Replies from: gjm
comment by gjm · 2020-01-08T11:37:09.026Z · LW(p) · GW(p)

There are surveys, but I think it may have been a few years since the last one. In answer to your specific question, LWers tend to be smart and young, which probably means most are rich by "global" standards; most aren't yet rich by, e.g., typical US or UK standards, but many of those will be in another decade or two. (Barring global economic meltdown, superintelligent AI singularity, etc.) I think LW surveys have asked about income but not wealth. E.g., here are results from the 2016 survey, which show median income at $40k and mean at $64k; median age is 26, mean is 28. Note that the numbers suggest a lot of people left a lot of the demographic questions blank, and of course people sometimes lie about their income even in anonymous surveys, so be careful about drawing conclusions :-).

Replies from: Mathisco
comment by Mathisco · 2020-01-08T19:26:45.969Z · LW(p) · GW(p)

Thanks!

I wrote with global standards in mind. My own income isn't high compared to US technology industry standards.

In the survey I also see some (social) media links that may be interesting. I have occasionally wondered if we should do something on LinkedIn for more career-related rationalist activities.

comment by Zian · 2020-01-21T01:46:42.693Z · LW(p) · GW(p)

This week, I noticed that a medical textbook (Principles and Practice of Sleep Medicine) explicitly described Bayes' Theorem in the first pages of a chapter about diagnostic tests. It went on to describe how strongly you should update your beliefs based on various symptoms/tests (e.g. "presence of daytime headache" means you should update __this hard__ towards having a sleep problem, but if you don't have a headache, you should update __a different amount__ in the other direction).

I thought it was neat that this type of thinking is being used in medicine's textbook orthodoxy.
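
The pattern the textbook describes is the odds form of Bayes' theorem: prior odds times a likelihood ratio for each finding. A minimal Python sketch with invented numbers (not the book's):

```python
# Diagnostic update: prior odds x likelihood ratio = posterior odds.
# All numbers below are invented for illustration, not from the book.

def update_odds(prior_prob, likelihood_ratio):
    """Return the posterior probability after one Bayesian update."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.10  # hypothetical prior: 10% of patients have the sleep disorder

# Hypothetical likelihood ratios: LR+ if the symptom is present,
# LR- if it is absent (a different-sized update in the other direction).
with_headache = update_odds(p, likelihood_ratio=3.0)
without_headache = update_odds(p, likelihood_ratio=0.6)

print(f"{with_headache:.0%}, {without_headache:.0%}")  # ~25%, ~6%
```

Note the asymmetry: with these made-up ratios, presence of the symptom moves you from 10% up to ~25%, while its absence only moves you down to ~6% -- exactly the "update this hard / update a different amount" framing.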

comment by Long try · 2020-01-15T05:03:40.252Z · LW(p) · GW(p)

Hola! Been around for a few months; time to move out into the light. My intention was to finish LW's 3 core readings before introducing myself, but then I gave up on R:A-Z, and yesterday I stopped HPMOR at chapter 59. My expectations are now so low that I won't put my bet on Codex, though I'll definitely try reading it soon. So here I am.

I live in Vietnam. Not to my surprise, none or very few on this platform are from the country. If you are, give me a shout out!

I don't really work now, though I do have some stock exchange accounts. That means I have quite some time to spend during a day. So I learn Spanish and other stuff. I'm currently on a project to get through all the APOD posts within 4 years, and have read up to 2017. I try to find the best online courses to learn from day by day, and have completed quite a few.

It's also my goal to watch the best films and TV shows. Since the rating system for TV is not as extensive as for movies, I rely on the IMDb Top 250 to filter the shows. And I have worked my way up to Breaking Bad now! That means I've completed GoT (was intrigued by people's hype all these years), and man, the early seasons were really good. On the other hand, I have a feeling that I've missed many great recent films because the aggregating site flickmetric stopped working properly when I reached around the year 1988; so if you have any recommendations, feel free to enlighten me :) My criteria for a good movie are: RT critic score >94%, audience score >89%, Letterboxd 5-star votes > 4-star votes, and IMDb score >7.9.
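
For what it's worth, those criteria are concrete enough to write down as a literal filter; in the Python sketch below the field names and the sample movie's numbers are invented, while the thresholds are the ones just stated:

```python
# The stated movie filter, written out literally. The field names and
# the sample entry are made up; the thresholds come from the comment.

def passes_filter(movie):
    return (movie["rt_critic"] > 94
            and movie["rt_audience"] > 89
            and movie["lb_5star_votes"] > movie["lb_4star_votes"]
            and movie["imdb"] > 7.9)

# Hypothetical entry with invented numbers, just to exercise the filter.
candidate = {"rt_critic": 99, "rt_audience": 90,
             "lb_5star_votes": 600_000, "lb_4star_votes": 500_000,
             "imdb": 8.5}
print(passes_filter(candidate))  # True
```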

Like most members of LW, I have big ideas. But for now I want to have a better, more accurate view of the world, so that when I spring into action, it will produce the expected effects. Also, I'm waiting for a depression to make use of the investment money. In the current situation, earning is not easy. BTW, when do you think it'll happen?

comment by Wei Dai (Wei_Dai) · 2020-01-19T19:18:34.622Z · LW(p) · GW(p)

Anyone else kept all photos of themselves off the public net because they saw something like this coming?

Replies from: Raemon
comment by Raemon · 2020-01-19T20:08:03.043Z · LW(p) · GW(p)

I think I was more resigned to it.

comment by Isnasene · 2020-01-12T01:19:33.038Z · LW(p) · GW(p)

So y'all are rationalists, so you probably know about the thing I'm talking about:

You've just discovered that you were horribly wrong about something you consider fundamentally important. But, on a practical level, you have no idea how to feel about that.

On one hand, you get to be happy and triumphant about finally pinning down a truth that opens up a vast number of possibilities that you previously couldn't even consider. On the other hand, you get to be deeply sad and almost mournful because, even if the thing wasn't true, you have a lot of respect for the aesthetic of believing in the thing you now know to be false. Overall, the result is the bittersweet feeling of a Pyrrhic victory blended with the feeling of being lost (epistemologically).

One song that I find captures this well is Lord Huron's Way Out There:

Find me way out there
There's no road that will lead us back
When you follow the strange trails
They will take you who knows where
  • The distance between you and your past, captured by "find me way out there"
  • The irreversibility, captured by "no road that will lead us back"
  • The epistemic ambiguity of "who knows where", denying the destination any positive or negative valence

Anyone else know any songs like this?

Replies from: Charlie Steiner
comment by Charlie Steiner · 2020-01-13T06:44:54.963Z · LW(p) · GW(p)

I bet you'd like Jim Guthrie.

https://jimguthrie.bandcamp.com/album/takes-time

I'm basically thinking of half the tracks on this album. "Taking My Time," "Difference a Day Makes," "Before and After," "The Rest is Yet To Come," "Don't Be Torn," and "Like a Lake."

An unexplainable thing
I'll have to change to stay the same
Just like this bottle of wine
It's gonna take time no doubt

It's not hard
Letting go
But it's hard
Even so

And you say
‘Come here and sit down
Don't try to own it all’

And you said ‘The rest is yet to come’
I said ‘Don't you mean the best?’
You said ‘We're making a huge mess’
Won't lay down. Won't confess
All burnt out and won't succumb
Ah but the rest has yet to come
Replies from: Isnasene
comment by Isnasene · 2020-01-14T04:10:01.028Z · LW(p) · GW(p)

That's a bet with good odds.

I didn't mean to doubt you
I just figured it out
Oh the difference a day makes
comment by ryan_b · 2020-01-16T18:56:17.202Z · LW(p) · GW(p)

Reflecting on making morally good choices vs. morally bad ones, I noticed the thing I lean on the most is not evaluating the bad ones. This effectively means good choices pay up front in computational savings.

I'm not sure whether this counts as dark-arts-ing myself; on the one hand, it is clearly a case of motivated stopping. On the other hand, I have a solid prior that there are many more wrong choices than right ones, which implies evaluating them fairly would be stupidly expensive; that in turn implies the don't-compute-evil rule is pretty efficient even if it were arbitrarily chosen.
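
The computational-savings point is the same one behind pruning in search: a cheap screen runs before the expensive evaluation ever does. A toy Python sketch, where every function, cost, and option is an invented stand-in:

```python
# Toy sketch of the "don't-compute-evil" rule as search pruning: a cheap
# deontological check screens options before the expensive consequentialist
# evaluation runs. All logic here is placeholder, purely illustrative.

def cheap_morality_check(option):
    """Fast pattern-match: does this option look evil? (stand-in logic)"""
    return "steal" not in option and "lie" not in option

def expensive_evaluation(option):
    """Stand-in for the costly full evaluation of consequences."""
    return len(option)  # placeholder score

options = ["donate to charity", "steal the funds",
           "lie to the donors", "volunteer at the shelter"]

# Pruning first means the expensive evaluation runs on 2 options, not 4.
survivors = [o for o in options if cheap_morality_check(o)]
best = max(survivors, key=expensive_evaluation)
print(best)
```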

Replies from: Pattern
comment by Pattern · 2020-01-18T03:04:07.376Z · LW(p) · GW(p)
the don't-compute-evil rule is pretty efficient even if it were arbitrarily chosen.

What if it's more general - say, a prior to first employ actions you've used before that have worked well? (I don't have a go-to example of something good to do that people usually don't. Just 'most people don't go skydiving, and most people don't think about going skydiving.')

comment by Mary Chernyshenko (mary-chernyshenko) · 2020-01-11T09:07:27.410Z · LW(p) · GW(p)

Sometimes it seems to me that old-ish reference books (on not-too-hard science) age into cinematographic world-building. For example, there are some good volumes on the vegetation structure of European forests c. the 1950s. As botanic material, they are dated: the woods have burnt, grown, gotten paved, incorporated alien species, etc., and the current methods of describing vegetation are more demanding.

Yet as simple pictures of the state of the world, grainy and washed out in places, they are good enough.

comment by Sai Pendyala · 2021-02-05T14:03:44.217Z · LW(p) · GW(p)

Hi everyone, I recently discovered LessWrong through David Perell.

I'm aspiring to start curating and creating content based on health, lifestyle, and intentional living. As a student of life, I've been engrossed in the human psyche, and I like analysing human behaviours, thoughts & emotions. Having considered myself a "rational" person for so long, I loved seeing the array of topics on hand here & I hope to be able to sift through them slowly & contribute to them one day!

Replies from: Raemon
comment by Raemon · 2021-02-06T09:47:43.491Z · LW(p) · GW(p)

Welcome!

comment by roland · 2020-01-09T19:22:18.031Z · LW(p) · GW(p)

What is the name of the following bias:

X admits to having done Y, therefore it must have been him.

Replies from: Isnasene, Mathisco
comment by Isnasene · 2020-01-11T00:51:49.618Z · LW(p) · GW(p)

For the most part, admitting to having done Y is strong evidence that the person did do Y, so I'm not sure it can generally be considered a bias.

In the case where there is additional evidence that the admission was coerced, I'd probably decompose it into the Just World Fallacy (ie "Coercion is wrong! X couldn't possibly have been coerced.") or a blend of Optimism Bias and Typical Mind Fallacy (ie "I think I would never admit to something I haven't done! So I don't think X would either!"), where the person is overconfident in their uncoercibility and extrapolates this confidence to others.

This doesn't cover all situations, though. For instance, if someone was obviously paid a massive amount of money to take the fall for something, I don't know of a bias that would lead one to continue to believe that they must've done it.

Replies from: roland
comment by roland · 2020-01-12T14:32:08.536Z · LW(p) · GW(p)

For the most part, admitting to having done Y is strong evidence that the person did do Y, so I'm not sure it can generally be considered a bias.

Not generally, but I notice that the argument I cited is usually invoked when there is a dispute, e.g.:

Alice: "I have strong doubts about whether X really did Y because of..."

Bob: "But X already admitted to Y, what more could you want?"

Replies from: ChristianKl
comment by ChristianKl · 2020-01-13T13:52:00.058Z · LW(p) · GW(p)

Bob's reply is not concerned with the truth of whether X did Y in the Bayesian sense. Bob doesn't argue about what the correct probability happens to be.

It's concerned with dispute resolution. In a discussion about truth, wanting doesn't matter. In a process of dispute resolution, it matters a great deal.

comment by Mathisco · 2020-01-10T19:49:47.403Z · LW(p) · GW(p)

Gullibility bias?

comment by leggi · 2020-01-27T03:59:18.505Z · LW(p) · GW(p)

A request, if it's possible:

Being able to set a "max-height: px" for images in posts would be great.

Replies from: habryka4
comment by habryka (habryka4) · 2020-01-27T19:49:51.841Z · LW(p) · GW(p)

You should be able to already. When you add a picture, you can drag on its left and right edges to resize it.

Replies from: leggi
comment by leggi · 2020-01-29T04:08:09.483Z · LW(p) · GW(p)

Thank you. I missed that feature. (I've been using ![image text] to get bigger pics.)

I can't figure out the 4 'image position' options though. Is there a trick to getting text to the side?

comment by Pattern · 2020-01-18T03:05:54.972Z · LW(p) · GW(p)

Is this a good place to post bugs? (Like consistently getting a 404 error for a user page, which prevents subscription.)

Replies from: habryka4
comment by habryka (habryka4) · 2020-01-18T03:26:20.751Z · LW(p) · GW(p)

It's a fine place, though the best place is through the Intercom chat in the lower right corner (the gray chat bubble).

Replies from: Pattern
comment by Pattern · 2020-01-18T03:31:46.153Z · LW(p) · GW(p)
a 404 error for a user page

https://www.lesswrong.com/users/juan-andres-hurtado-baeza

The name does have an uncommon symbol (é), which doesn't show up in the URL, if that changes anything.

Replies from: habryka4
comment by habryka (habryka4) · 2020-01-18T05:30:50.945Z · LW(p) · GW(p)

Well, that sure is an interesting case. Fixed it. The account was marked as deleted and banned until late 2019 for some reason, so my guess is they were caught by our anti-spam measures in late 2018, which ban people for one year, and then they ended up posting again after the ban expired.