Comments

Comment by plethora on HPMOR and Sartre's "The Flies" · 2017-09-21T12:48:09.874Z

I'd be surprised if Yudkowsky has read Sartre. But it's a natural thing to do. Harry Potter is (unfortunately) the closest thing we have to a national epic these days... well, an Anglosphere epic, but you get the idea.

If this is the sort of thing you're interested in, you might want to read Benedict Anderson's book Imagined Communities.

Comment by plethora on 2017 LessWrong Survey · 2017-09-21T09:54:16.291Z

I have taken the survey.

Comment by plethora on Against lone wolf self-improvement · 2017-07-11T19:47:15.528Z

I think this discussion is somewhat confused by the elision of the difference between 'autodidact' and 'lone wolf'. 'Autodidact', in internet circles, is generally used to mean 'anyone who learns things primarily outside a formalized educational environment'; it's possible to be an autodidact while still being heavily engaged with communities, treating learning as a social endeavor, and so on -- and in fact Eliezer was active in communities related to LW's subject matter for a long time before he started LW. By the same token, one of the main things I took from reading Ben Franklin's autobiography was that, despite having little formal schooling and being solely credited for many of his innovations, he didn't actually do it alone. I doubt he would've been even a tenth as successful as he was without something like his Junto.

Some people will get more out of formal education than others, although getting things out of formal education is itself a skill that can be learned. (It seems to require an ability to buy into institutions on an emotional level that many of us lack. I saw college as an obnoxious necessity rather than a set of opportunities, and as a result got much less out of it than I could have. This seems to be a common mistake.) But I just don't think it's possible to become a spectacular writer, or even a middling one, as a lone wolf. If nothing else, you need feedback from a community in order to improve. Look at lone-wolf outsider art -- it's frequently unusual, but how much of it is good?

Comment by plethora on What's up with Arbital? · 2017-03-31T01:45:42.559Z

When you ask someone if they would like a debate platform and describe all the features and content it'll have, they go: "Hell yeah I'd love that!" And it took me a while to realize that what they are imagining is someone else writing all the content and doing all the heavy lifting. Then they would come along, read some of it, and maybe leave a comment or two. And basically everyone is like that: they want it, but they are not willing to put in the work. And I don't blame them, because I'm not willing to put in the work (of writing) either. There are just a handful of people who are.

This is what incentive structures are for. There are quite a few people who have strong incentives to publish high-quality writing, you know...

An open-access journal for debates seems like it ought to be possible, although it'd have to actively solicit contributions (an encyclopedia for debates?) and reward them with academic status, which means you'd need solid academic backing.

Comment by plethora on I Want To Live In A Baugruppe · 2017-03-18T10:02:23.162Z

Yes, so you send everyone out and hide most of the beds when the inspectors come around.

This is probably not desirable for communities with children, but it's common for co-ops in places with those laws.

Comment by plethora on I Want To Live In A Baugruppe · 2017-03-18T09:59:46.885Z

It's a coastal, urban American custom. To a first approximation, it's illegal to build in coastal cities and most of the land in them is uninhabitable because crime.

Comment by plethora on I Want To Live In A Baugruppe · 2017-03-18T09:54:39.014Z

Would be interested if I lived in a place amenable to this. Seconding dropspindle's recommendation of Appalachia, since that's where I'm already planning to move if I can get a remote job.

It may be worth looking to see whether there are any large, relatively inexpensive houses near major cities that could be converted. There are a lot of McMansion developments in the suburbs north of DC that have never looked particularly inhabited.

Comment by plethora on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-02-05T00:01:50.711Z

Yes, I know. I bet Islamists don't think highly of it either.

Comment by plethora on Open thread, Jan. 23 - Jan. 29, 2017 · 2017-01-24T16:54:19.985Z

If Nazis got punched all the time, they would be perceived as weak and nobody would join them.

Two thousand years ago, some guy in the Roman Empire got nailed to a piece of wood and left to die. How did that turn out?

Comment by plethora on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-24T16:45:29.985Z

I guess the second part is more important, because the first part is mostly a strawman.

Not in my experience. It may seem like it now, but that's because the postrationalists won the argument.

Comment by plethora on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-24T16:41:33.386Z

Similarly, when a third party describes SSC, they cannot credibly accuse Scott of what someone else wrote in the comments; the dividing line between Scott and his commentariat is obvious.

They can accuse Scott of being the sort of fascist who would have a [cherry-picking two or three comments that aren't completely in approval of the latest Salon thinkpiece] far-right extremist commentariat. And they do.

Comment by plethora on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-24T16:39:37.757Z

I don't feel like I can just share Less Wrong articles to many places because Less Wrong lacks respectability in wider society and is only respectable with those who are part of the LW ghetto's culture.

That's mostly a CSS problem. The respectability of a linked LW article would, I think, be dramatically increased if the place looked more professional. Are there any web designers in the audience?

Comment by plethora on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-24T16:37:52.737Z

Walled gardens are probably necessary for honest discussion.

If everything is open and tied to a meatspace identity, contributors have to constantly mind what they can and can't say and how what they're saying could be misinterpreted, either by an outsider who isn't familiar with local jargon or by a genuinely hostile element (and we've certainly had many of those) bent on casting LW or that contributor in the worst possible light.

If everything is open but not tied to an identity, there's no status payoff for being right that's useful in the real world -- or if there is, it comes at the risk of being doxed, and it's generally not worth it.

The ideal would probably be a walled garden with no real-name policy. I've considered building a site along these lines for some time, with many walled gardens and individually customizable privacy settings, like Facebook's, but I'm not sure what model to base the posting on -- that is, should it look like a forum, like Facebook/Reddit, like Tumblr, or what?
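For concreteness, here's a rough sketch of the access-control side, which I'm more sure about -- every name and setting here is hypothetical, just illustrating per-garden membership plus per-post, author-chosen visibility:

```typescript
// Hypothetical data model: a site made of many walled gardens, each with
// its own pseudonymous membership, and per-post visibility chosen by the author.

type Visibility = "garden-only" | "shared-garden" | "public";

interface Garden {
  id: string;
  name: string;
  memberIds: Set<string>; // pseudonymous accounts; no real-name policy
}

interface Post {
  id: string;
  authorId: string;
  gardenId: string;       // the garden the post was made in
  visibility: Visibility; // author-customizable, like a Facebook audience setting
  body: string;
}

// A reader can see a post if it's public, if they belong to the post's home
// garden, or (for "shared-garden") if they share any garden with the author.
function canSee(readerId: string, post: Post, gardens: Map<string, Garden>): boolean {
  if (post.visibility === "public") return true;
  const home = gardens.get(post.gardenId);
  if (!home) return false;
  if (post.visibility === "garden-only") return home.memberIds.has(readerId);
  for (const g of gardens.values()) {
    if (g.memberIds.has(readerId) && g.memberIds.has(post.authorId)) return true;
  }
  return false;
}
```

The posting-model question -- threads vs. feeds vs. reblogs -- is orthogonal to this layer, which is part of why I'm stuck on it.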

Comment by plethora on 0.999...=1: Another Rationality Litmus Test · 2017-01-23T18:47:54.487Z

Let's consider the number x = ...999; in other words, now we have infinitely many 9s to the left of the decimal point.

My gut response (I can't reasonably claim to know math above basic algebra) is:

  • Infinite sequences of digits to the right of the decimal point are in some circumstances an artifact of the base. In base 3, 1/3 is 0.1 and 1/10 is 0.00220022..., but 1/10 "isn't" an infinitely repeating decimal and 1/3 "is" -- in base 10, which is what we're used to. So, heuristically, we should expect that some infinitely repeating representations of numbers are equal to some representations that aren't infinitely repeating.

  • If 0.999... and 1 are different numbers, there's nothing between 0.999... and 1, which doesn't jibe with my intuitive understanding of what numbers are.

  • The integers don't run on a computer processor. Positive integers can't wrap around to negative integers. Adding a positive integer to a positive integer will always give a positive integer.

  • 0.999... is 0.9 + 0.09 + 0.009 etc, whereas ...999.0 is 9 + 90 + 900 etc. They must both be positive i̶n̶t̶e̶g̶e̶r̶s̶.

  • There is no finite number larger than ...999.0. A finite number must have a finite number of digits, so given any finite number, you can write ...999.0 out to that many digits and one more and get something larger. So there's nothing 'between' ...999.0 and infinity.

  • Infinity is not the same thing as negative one.

All I have to do to accept that 0.999... is the same thing as 1 is to accept that some numbers can be represented in multiple ways. If I don't accept this, I have to reject the premise that two numbers with nothing 'between' them are equal -- that is, if 0.999... != 1, it's not the case that for any x and y where x != y, x is either greater than or less than y.

But if I accept that ...999.0 is equal to -1, I have to accept that adding together some positive numbers can give a negative number, and if I reject it, I just have to say that multiplying an infinite number by ten doesn't make sense. (This feels like it's wrong but I don't know why.)
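For what it's worth, here's the shift-and-subtract manipulation written out for both cases -- taking on faith that these infinite digit strings can be multiplied by 10 and subtracted termwise, which for ...999 is only licensed in systems like the 10-adic numbers:

```latex
% Same trick both times: multiply by 10, compare with the original.
\begin{align*}
x = 0.999\ldots &\implies 10x = 9.999\ldots = 9 + x \implies 9x = 9 \implies x = 1 \\
y = \ldots 999  &\implies 10y = \ldots 9990 = y - 9 \implies 9y = -9 \implies y = -1
\end{align*}
```

The algebra is identical in both cases; the only question is whether the objects being manipulated are legitimate, which is exactly where my intuitions run out.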

Comment by plethora on Open thread, Jan. 16 - Jan. 22, 2016 · 2017-01-23T18:31:52.808Z

The is-ought problem implies that the universe is deterministic

What?

Comment by plethora on Open thread, Jan. 16 - Jan. 22, 2016 · 2017-01-19T07:47:52.933Z

No. Accepting facts fully does not lead to utilitarian ideas. This has been a solved problem since Hume, FFS.

Comment by plethora on Open thread, Jan. 16 - Jan. 22, 2016 · 2017-01-18T14:09:28.091Z

Accepting facts fully (probably) leads to EA ideas,

It's more likely to lead to Islam; that's at least on the right side of the is-ought gap.

Comment by plethora on Open thread, Jan. 09 - Jan. 15, 2017 · 2017-01-15T21:56:23.978Z

Language could be more or less frozen wherever it stands at the time.

No it wouldn't -- language is for signaling, not only communication. There would probably be a common language for business and travel, but languages would continue to develop normally, since people would still want to use language to determine how they present themselves.

Comment by plethora on Rationality Considered Harmful (In Politics) · 2017-01-14T22:40:07.388Z

If you never publicly state your beliefs, how are you supposed to refine them?

But if you do publicly state your beliefs, the Rebecca Watsons can eat you, and if you don't, the Rebecca Watsons can coordinate against you.

How do you solve that?

"I believe that it's always important to exchange views with people, no matter what their perspectives are. I think that we have a lot of problems in our society and we need to be finding ways to talk to people, we need to find ways to talk to people where not everything is completely transparent. ... I think often you have the best conversations in smaller groups where not everything is being monitored. That's how you have very honest conversations and how you can think better about the future." -- Thiel on Bilderberg

Comment by plethora on Dominic Cummings: how the Brexit referendum was won · 2017-01-14T22:15:47.380Z

Right, and he addresses this in the article:

This lack of motivation is connected to another important psychology – the willingness to fail conventionally. Most people in politics are, whether they know it or not, much more comfortable with failing conventionally than risking the social stigma of behaving unconventionally. They did not mind losing so much as being embarrassed, as standing out from the crowd. (The same phenomenon explains why the vast majority of active fund management destroys wealth and nobody learns from this fact repeated every year.)

We plebs can draw a distinction between belief and action, but political operatives like him can't. For "failing conventionally", read "supporting the elite consensus".

Now, 'rationalists', at least in the LW sense (as opposed to the broader sense of Kahneman et al.), have a vague sense that this is true, although I'm not sure if it's been elaborated on yet. "People are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing" (e.g. "political actors are more interested in going through the conventional symbolic motions of working out which side they ought to be on than in actually working it out") is widespread enough in the community that it's been blamed for the failure of MetaMed. (Reading that post, it sounds to me like it failed because it didn't have enough sales/marketing talent, but that's beside the point.)

Something worth noting: the alternate take on this is that, while most people are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing, conventional symbolic motions are still usually good enough. Sometimes they aren't, but usually they are -- which allows the Burkean reading that the conventional symbolic motions have actually been selected for effectiveness to an extent that may surprise the typical LW reader.

It should also be pointed out that, while we praise people or institutions that behave unconventionally to try to win when it works (e.g. Eliezer promoting AI safety by writing Harry Potter fanfiction, the Trump campaign), we don't really blame people or institutions that behave conventionally and lose. So going through the motions could be modeled purely by calculation of risk, at least in the political case: if you win, you win, but if you support an insurgency and lose, that's a much bigger deal than if you support the consensus and lose -- at least for the right definition of 'consensus'. But that can't be a complete account of it, because MetaMed.

Comment by plethora on Feature Wish List for LessWrong · 2016-12-18T13:20:20.723Z

Discussion quality is a function of the discussants more than the software.

But daydreaming about the cool new social media software we're totally going to write is so fun!

Comment by plethora on Circles of discussion · 2016-12-18T13:17:37.578Z

People have been building communities with canons since the compilation of the Torah.

LW, running on the same Reddit fork it's on today, used to be a functional community with a canon. Then... well, then what? Interesting content moved offsite, probably because 1) people get less nervous about posting to Tumblr or Twitter than posting an article to LW, and 2) LW has content restrictions that other places don't. So people stopped paying attention to the site, the community fragmented, the barrier to entry fell, and now the public face of rationalists is Weird Sun Twitter and Russian MRAs from 4chan who spend their days telling people to kill themselves on Tumblr. Oops!

(And SSC, which is a more active community than LW despite running on even worse software.)

Comment by plethora on Circles of discussion · 2016-12-16T13:49:54.226Z

This is seeking a technological solution to a social problem.

The proposed technological solution is interesting, complicated, and unlikely to ever be implemented. It's not hard to see why the sorts of people who read LW want to talk about interesting and complicated things, especially interesting and complicated things that don't require much boring stuff like research -- but I highly doubt that anyone is going to sit down and do the work of implementing it or anything like it. And in the event that anyone ever does, it'll likely take so long that many of the people who'd otherwise use LW or its replacement will lose interest in the interim, and be so confusing that many more people will be turned off by the interface and never bother to participate.

If we want interesting, complicated questions that don't require a whole lot of research, here's one: what exactly is LW trying to do? Once this question has been answered, we can go out and research similar groups, find out which ones accomplished their goals (or goals similar to ours, etc.) and which ones didn't, and try to determine the factors that separate successful groups from failed ones.

If we want uninteresting, uncomplicated questions that are likely to help us achieve our goals, here's one: do we have any managers in the audience? People with successful business experience, maybe in change management or something of that nature? I'm nowhere near old or experienced enough to nominate myself, or even to name the most relevant subdomains of management with any confidence, but I've still seen a lot of projects that failed due to nonmanagers' false assumption that management is trivial, and a few projects in the exact same domain that succeeded due to bringing in one single competent manager.

As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff.

There's already a lot of stuff from the post-LW fragmentation that a great many people have read. How about identifying and compiling that? And since many of these things will be spread out across Tumblr/Twitter/IRC/etc. exchanges rather than written up in one single post, we could seed the LW revival with explanations of them. This would also give us something more interesting and worthwhile to talk about than what sort of technological solution we'd like to see for the social problem that LW can't find anything more interesting and worthwhile to talk about than what sort of technological solution we'd like to see for the social problem that LW can't find anything interesting or worthwhile enough to get people posting here.

Comment by plethora on Measuring the Sanity Waterline · 2016-12-16T13:34:33.316Z

Religion requires epistemic blindspots, but does religion, as such, require epistemic blindspots? That is, is requiring epistemic blindspots a property of religion itself, or is religion one among many subclasses of the type of thing that requires epistemic blindspots? In the former case, targeting religion specifically would raise the sanity waterline; in the latter case, it might lower it.

What do you think would happen to the sanity waterline if all the Seventh-Day Adventists in America became atheists and joined an antifa group? Would it rise?

Seventh-Day Adventists' epistemic blindspots (from the atheistic perspective) are things like "God exists" and "we'll live forever in Heaven because we're right about when the Sabbath is" and "eventually the Catholic Church, mainstream Protestant groups, and the US government will get together to pass a law requiring observance of a Sunday Sabbath, and we'll be horribly persecuted for a while but it's OK because Jesus will come back soon after that". Antifa groups' epistemic blindspots are things like "liberal norms serve fascists and must be eroded ASAP", "mob violence is the most important form of political activism", and "murder is good when it's people we disagree with getting killed".

And Seventh-Day Adventism is more prone to epistemic blindspots than religions that don't share the unusual Christian innovation of elevating orthodoxy above orthopraxy, such as Shinto or mainstream American Judaism, both of which are clearly religions. (We have quite a few adherents of mainstream American Judaism in these circles; try asking a few of them about the utility of ritual, the upsides and downsides of religion, etc.)

Religion is one among many subclasses of the type of thing that requires epistemic blindspots, whatever that thing is. But there's another problem, which is that religion doesn't exist. The consensus in religious studies is that there's no coherent way to define 'religion' -- the category exists for strange historical reasons that are particular to the pre-secularization West and certainly don't hold everywhere. You can go to China or Japan or ancient Rome and ask, "is this religious? is this secular?", and they'll just look at you funny. (Admittedly, there's a complication, in that contact between 'pagans' and Christians or Muslims occasionally results in the local variety of paganism adopting the Christian or Muslim idea of 'religion' -- see e.g. here.)

Is Confucianism a religion? It has rites, holy texts, a quasi-prophet (Confucius) and influential quasi-theologians, such as Mencius, Dong Zhongshu, and Zhu Xi. How about Communism, the Hotep movement, or LW? What makes Louis Farrakhan a religious figure and Maulana Karenga a secular one?

Comment by plethora on Measuring the Sanity Waterline · 2016-12-14T13:51:55.601Z

That's true. One has to be on the lookout for pathological social trends masquerading as widespread rationality. For example, the current US attitude that you have to go to college is looking less and less rational by the day. That said, in some other country with 10% high school graduation rates and zero universities, I would consider any increase in those numbers to be a sanity improvement.

This sounds like an exploration/exploitation problem. If every society heads for the known maximum of sanity, it'll be much more difficult to find higher maxima that are yet unknown. If the USA had headed for the known maximum of sanity after seceding from the British Empire, we'd have a king.

Just as it seemed clear to the revolutionaries that the known maximum of sanity in government was suboptimal, it seems clear to me that the known maximum of sanity in education is suboptimal. High school on the Prussian model is about burning years of life in order to be socialized into government-promoted cultural norms and be prepared for work where discipline matters more than thought -- e.g. industrial jobs and the military. College on the American model is about burning years of life (and taking on massive amounts of debt) in order to be socialized into academia-promoted cultural norms and obtain a certificate that says, essentially, "this person is allowed to work".

Although it's probably true that most existing societies with 10% high school graduation rates and zero universities rank lower in sanity than the USA, it's also probably true that the USA is, modulo technological improvement and the increase in conceptual vocabulary that flows from that, less sane now than it was before the GI Bill, Griggs v. Duke, etc., because it's completely viable and even promoted for 22-year-olds to have no work experience, a mountain of debt, and a head full of nonsense.

If people could, say, test into paid job-training programs -- internships, apprenticeships, etc. -- at the age of 16, and if this were the mainstream life path, this would be a sanity improvement: the resulting 22-year-olds would have financial stability, six years of work experience, markedly less political indoctrination, and no mountain of debt taken on to pay parasitic political radicals for a "see, I'm not banned from working!" certificate.

The only downside I can see is the potential effect on basic research, but I'm not sure how significant that would be.

We're sitting at a weird point in history where we have dynamited all our social institutions except religion, so it makes religion look artificially appealing. I don't think the statement "epistemic rationality gains outweigh the instrumental rationality losses to the median human" is true. I think 95% of religious people have never been exposed to even a basic level of rationality and don't know what it could do for them, much less for society.

What could it do for them? If, say, health and an extended lifespan are saner, how do the downsides of being, say, a Seventh-Day Adventist outweigh the known upsides? (Remember that most people are much better at compartmentalization than most LW posters, and that decreases in religion don't mean decreases in folk magic -- if anything, the atheistic communities I've seen outside LW are heavier on folk magic than the religious ones I've seen. The other side of that, however, is that some folk magic can be legitimately useful -- but astrology and MBTI don't strike me as falling inside that category.)

Comment by plethora on Open thread, Dec. 05 - Dec. 11, 2016 · 2016-12-14T00:17:53.027Z

I decide that it can't hurt to ask around and see what marketable skills one can acquire outside a job or formal education, other than programming.

Comment by plethora on Measuring the Sanity Waterline · 2016-12-08T14:11:31.334Z

(+) Enrollment rates in primary/secondary/tertiary education

How are you defining 'education' here? Does homeschooling count? What about trade schools? Apprenticeships?

If a society had a college education rate of, say, 98%, would it have a higher or lower sanity waterline than a society with a college education rate of 30% where most of the other 70% went into employer-funded job training, apprenticeships, etc.?

And education depresses fertility. Until widespread genetic engineering or FAI, the values of populations whose fertility rate is above (replacement rate + defection rate) will gain support, and the values of populations whose fertility rate is below it will lose support. This especially matters in democracies, as anyone who follows Israeli politics can tell you. What this means is that, even if raising tertiary education rates raises the sanity waterline in the short term (which I'm not convinced of), it will likely lower it in the long run.

(+) Median level of awareness about world events

Why? Rationalists win. To the extent that awareness about world events helps you win, awareness about world events is rational. To the extent that awareness about world events does not help you win, you may as well be an anorak.

(-) Religiosity rate

How do you square this with the scientific consensus? Again, rationalists win. If you interpret the relevant studies as saying that religious people accrue benefits (social capital, a sense of meaning, a support network, etc.) from religion (rather than irreligion selecting against the personality traits that provide those things), you have to make the case that the epistemic rationality gains outweigh the instrumental rationality losses to the median human in the society you're trying to affect, and that either these gains outweigh the losses from changing religion/engineering new religions or 'saner' religions can't be created and our only choice is between Richard Dawkins and Creflo Dollar.

(-) Adolescent fertility rate

I would expect a society where 18-year-olds are financially and morally (and... wisdom-ly?) capable of raising children to have a higher sanity waterline than a society where, for financial, moral, and... wisdom-al? reasons, reproduction has to be deferred to one's early thirties, unless the simplest way to raise the sanity waterline is to increase the rate of autism.

Comment by plethora on Open thread, Dec. 05 - Dec. 11, 2016 · 2016-12-08T11:31:54.008Z

That's... an unusual combination unless you're still in high school (or pursuing a liberal-arts major in college :-P).

Liberal arts major. I can code, but not well enough to get hired for it, and since I haven't managed to get myself to like it enough to level up in it yet, I doubt I will.

Comment by plethora on CFAR’s new focus, and AI Safety · 2016-12-07T06:34:22.436Z

Online communities do not have a strong comparative advantage in compiling and presenting facts that are well understood. That's the sort of thing academics and journalists are already paid to do.

But academics write for other academics, and journalists don't and can't. (They've tried. They can't. Remember Vox?)

AFAIK, there isn't a good outlet for compilations of facts intended for and easily accessible by a general audience, reviews of books that weren't just written, etc. Since LW isn't run for profit and is run as outreach for, among other things, CFAR, whose target demographic would be interested in such an outlet, this could be a valuable direction for either LW or a spinoff site; but, given the reputational risk (both personally and institutionally) inherent in the process of generating new ideas, we may be better served by pivoting LW toward the niche I'm thinking of -- a cross between a review journal, SSC, and, I don't know, maybe CIA (think World Factbook) or RAND -- and moving the generation and refinement of ideas into a separate container, maybe an anonymous blog or forum.

Comment by plethora on Open thread, Dec. 05 - Dec. 11, 2016 · 2016-12-07T04:39:09.237Z

1) I'm fairly intelligent, completely unskilled (aside from writing, which I have some experience in, but not the sort that I could realistically put on a resume, especially where I live), and I don't like programming. What skills should I develop for a rewarding career?

2) On a related note, the best hypothetical sales pitch for EA would be that it can provide enough career help (presumably via some combination of statistically-informed directional advice and networking, mostly the latter) to more than make up for the 10% pledge. I don't know how or whether this could be demonstrated, but do EA people think this is worth pursuing, or is their strategy still to use 99% of their members for publicity to attract the odd multi-millionaire?

Comment by plethora on Open thread, Dec. 05 - Dec. 11, 2016 · 2016-12-07T00:12:56.959Z

I have a very low bar for 'interesting discussion', since the alternative for what to do with my spare time when there's nothing going on IRL is playing video games that I don't particularly like. But it's been months since I've seen anything that meets it.

It seems like internet people think insight demands originality. This isn't true. If you look at popular long-form 'insight' writers, even Yudkowsky (especially Yudkowsky), most of what they do is find earlier books and file the serial numbers off. It could be a lot easier for us to generate interesting discussion if we read more books and wrote about them, like this.

Comment by plethora on On the importance of Less Wrong, or another single conversational locus · 2016-12-06T08:47:30.330Z

I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I didn't delete my account a year ago because the site runs on a fork of Reddit rather than HN (and I recall that people posted links to outside articles all the time; what benefit would an HN-style aggregator add over what we have now, or over our Reddit fork plus Reddit's ability to post links to external sites?); I deleted it because the things people posted here weren't good.

I think if you want to unify the community, what needs to be done is the creation of more good content and less bad content. We're sitting around and talking about the best way to nominate people for a committee to design a strategy to create an algorithm to tell us where we should go for lunch today when there's a Five Guys across the street. These discussions were going on the last time I checked in on LW, IIRC, and there doesn't seem to have been much progress made.

In all the time since I deleted my account, I haven't seen anyone link to an LW post written after I left. I suspect this has less to do with aggregators or BDFL nomination committees and more to do with the fact that a long time ago people used to post good things here and then they stopped.

Then again, better CSS wouldn't hurt. This place looks like Reddit. Nobody wants to link to a place that looks like Reddit.