Revitalising Less Wrong is not a lost purpose
post by casebash · 2016-06-15T08:10:04.070Z · LW · GW · Legacy · 42 comments
John_Maxwell_IV argued that revitalising Less Wrong is a lost purpose. I'm also very skeptical about Less Wrong 2.0 - but I wouldn't agree that it is a lost purpose. It is just that we are currently not on a track to anywhere. The #LW_code_renovation channel resulted in a couple of minor code changes, but there hasn't been any discussion there for at least a month. All that this means, however, is that if we want a better Less Wrong, we have to do something other than what we have been doing so far. Here are some suggestions.
Systematic changes, not content production
The key problem currently is the lack of content, so the most immediate solution is to produce more content. However, not many people are an Eliezer or a Scott. Think about what percentage of blogs are actually successful - now add the extra constraint of having to stay on topic for Less Wrong. Note that many of Scott's most popular posts would be too political to be posted on Less Wrong. Trying to get a group of people together to post content on Less Wrong wouldn't work. Let's say 10 people agreed to join such a group. 5 would end up doing nothing, 3 would write 2-3 posts each, and it'd fall on the last 2 to drive the site. The odds would be strongly against them. Most people can't consistently pump out high-quality content.
The plan to get people to return to Less Wrong and post here won't work either unless combined with changes. Presumably, people have moved to their own blogs for a reason. Why would they come back to posting on Less Wrong, unless something was changed? We might be able to convince some people to make a few posts here, but we aren't going to return the community to its glory days without consistent content.
Why not try to change how the system is set up instead to encourage more content?
Decide on a direction
We now have a huge list of potential changes, but we don't have a direction. Some of those changes would help bring in more content and solve the key issue, while other changes wouldn't. The problem is that there is currently no consensus on what needs to be done. This makes it much less likely that anything will actually get done, particularly given that it isn't clear whether a particular change would be approved if someone did actually do it. At the moment, people come to the site and suggest features, and there is discussion, but there isn't anyone or any group in charge who can say, "if you implement this, we will use it". So people often never start these projects.
Before we can even tackle the problem of getting things done, we need to tackle the problem of deciding what needs to be done. The current system of people simply making posts in Discussion is broken - we never even get to the consensus stage, let alone implementation. I'm still thinking about the best way to resolve this; I'll post more about it in a future post. Regardless, practically *any* system would be better than what we have now, where *no* decision ever gets made.
Below I'll suggest what I think our direction should be:
Positions
Less Wrong is the website for a global movement and has a high number of programmers, yet some societies at my university are more capable of getting things done than we are. Part of the reason is that university societies have positions - people decide to run for a position and this grants them status, but also creates responsibilities. At the moment, we have *no-one* working on adding features to the website. We'd actually be better off if we held an election for the position of webmaster and had *only* that person working on the website. I'm not saying we should restrict code contributions to a single person; I'm just saying that *right now* even this stupid policy would actually improve things. I imagine that there would be at least *one* decent programmer for whom the status would be worth the work, given that half the people here seem to be programmers.
Links
If we want more content, then an easy way would be to have a links section, because posting a link is about 1% of the effort of trying to write a Less Wrong post. In order to avoid diluting discussion, these links would have to be posted in their own section. Given that this system is based upon Reddit, this should be super easy.
Sections
The other easy way to generate more content would be to change the rules about what content is on or off topic. This comes with risks - many people like the Discussion section as it is. However, if a separate section were created, then people would be able to have these additional discussions without affecting how Discussion works at the moment. Many people have argued for a tag system, but whether we simply create additional categories or use tags is mostly irrelevant. If someone is willing to build a tag system, then we can do that; if not, we should just add another category. Given that there are already Main and Discussion, I can't imagine that it would be that hard to add another category of posts. There have been many, many suggestions of what categories we could have. If we just want to get something done, then the simplest thing is to add a single new category, Open, which has the same rules as the Open Threads that we are already running.
Halve downvotes
John_Maxwell_IV points out that too many posts are getting downvotes and critical comments. We could try to change the culture of Less Wrong, perhaps ask a high-status individual like Scott or Eliezer to request that people be less critical. And that might even work for a week or a month, before people forget about it. Or we could just halve the weight of downvotes. While not completely trivial, this change would be about as simple as they come. We might want to halve downvotes only on articles, not comments, because we seem to get enough comments already, just not enough content. I don't think it'll lower the quality of content too much - quite often there are more people who would downvote a post, but they don't bother because its score is already below zero. I think this might be worth a go - I see a high potential upside, but not much in the way of downside.
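To make the change concrete, here is a minimal sketch of what the halved weighting could look like, assuming a plain upvotes-minus-downvotes score; the function and parameter names are hypothetical illustrations, not the actual code of the Reddit fork that Less Wrong runs on.

```python
# A rough sketch of the proposed scoring tweak, assuming a simple
# upvotes-minus-downvotes score. All names here are hypothetical
# illustrations, not the actual code of the Reddit fork LW runs on.

def post_score(upvotes: int, downvotes: int, is_article: bool) -> float:
    """Score a post, counting downvotes on articles at half weight.

    Comments keep the usual one-point weighting, since the proposal
    only targets top-level content, not the comment sections.
    """
    downvote_weight = 0.5 if is_article else 1.0
    return upvotes - downvote_weight * downvotes

# An article at 4 up / 6 down sits at -2 under the current rules but
# at +1 under halved downvotes, so marginal posts are less likely to
# end up below zero.
print(post_score(4, 6, is_article=True))   # 1.0
print(post_score(4, 6, is_article=False))  # -2.0
```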
Crowdsourcing
If we could determine that a particular set of features would have a reasonable chance of improving LessWrong, then we could crowdsource a bounty for implementing those features. I suspect that there are many people who'd be happy to donate some money, and if we chose simple, well-defined features, it actually wouldn't be that expensive.
42 comments
Comments sorted by top scores.
comment by Gleb_Tsipursky · 2016-06-15T23:48:52.846Z · LW(p) · GW(p)
Agreed on the benefits of trying things, such as links and an additional Open section. That will give us additional data to go on.
comment by gilch · 2016-06-29T00:33:50.283Z · LW(p) · GW(p)
Rationalists should win. So what's stopping us? We got a big upgrade to our epistemic rationality from the Sequences, but our instrumental rationality may still be lacking, both individually and especially as groups. (Are there any CFAR instructors or graduates paying attention to this thread?) How would the hypothetical ideal instrumental rationalist approach this problem? That last one is not rhetorical. Post answers below.
Remember why an oracle AI is a small step away from a genie: If we can "epistemically" predict what the ideal agent would do (or even approximate it well enough) then we can take that action ourselves. We still have the subproblems of akrasia and group coordination. We can solve them the same way: how would the ideal agent solve these problems?
I'll try my hand at answering first below, but remember the wisdom of crowds. Some of you can probably improve on my prediction attempts.
Replies from: gilch
↑ comment by gilch · 2016-06-29T02:43:17.775Z · LW(p) · GW(p)
The first step is probably deciding what exactly we want. Remember that values are orthogonal to intelligence. It's not enough to imagine an ideal instrumental rationalist without also imagining what that rationalist wants.
What does revitalizing LessWrong mean? If we are wildly successful in our endeavor after one year, what does LessWrong look like at that time? Why is LessWrong valuable to you? What makes it worth saving?
Maybe we can do more of that, better.
Again, not rhetorical, I want to know what the rest of you think.
What I think:
When I was young and learned how to read, my knowledge grew quickly, mainly thanks to children's encyclopedias. But then it tapered off. There was a period where I read even more but didn't learn as quickly. This was due to the low quality of my available reading material. When I discovered Wikipedia my knowledge grew quickly again, and then tapered off again. There is a great deal of information on the web, but even more noise. Wikipedia is a rare bright spot. The Sequences are the densest source of insight I've found since.
I value the concentrated insights. I want more of that. LessWrong delivered more of that, for a time. Distilling knowledge from the deluge of data available at our fingertips is hard work. I'm willing to contribute to that effort, since I stand to gain so much more. That's what made Wikipedia work. (That's what made The Pirate Bay work.) LessWrong is the same.
I'm more willing to trust information I find on LessWrong; because the sanity waterline is higher; because if an ignorant actor posts bad information, there's a much higher chance the community will call them on it here, compared to elsewhere; because we care about truth, not authority, not politics, not some corporation's shareholders' pocketbooks. Trust is a valuable thing. I don't want to give that up.
I value interaction with intelligent people who are willing to change their minds, and are able to change mine, for the better.
I value practical advice that I can use in real life.
I value the community.
There may be more things that haven't occurred to me yet.
If we can achieve all of that through other sites (Arbital, CFAR, etc.), the best of LessWrong in all but name, that's fine with me. I don't value the name itself, but we must have one.
comment by scarcegreengrass · 2016-06-28T19:52:41.910Z · LW(p) · GW(p)
Something should definitely be tried about downvotes: It seems like the average value in many threads is below zero.
Replies from: scarcegreengrass
↑ comment by scarcegreengrass · 2016-06-28T19:59:21.175Z · LW(p) · GW(p)
I should probably add that i'm looking for a positive and mutually-supporting LW-style community. But i'm sure other people would prefer a more brutally honest community. That's fine and ideally we'll all find sites that suit us in the end.
Replies from: gjm
comment by passive_fist · 2016-06-24T00:04:28.364Z · LW(p) · GW(p)
The LW community is in rapid decline and people have been leaving in large numbers for years. LW is probably in the terminal stage of decline now, not in the initial or even middle stage. If you think this isn't true you are in denial - all the poll data and post/comment data shows this to be true.
I used to be an active member of this group. This is my first comment in months. I don't know why other people left; I can only speculate and offer the reasons why I left. The reason I left was because I perceived (maybe incorrectly, I don't know) that discourse was being dominated by a handful of individuals who had very little interest in actual rational unbiased discussion and were more interested in forcing their views on everyone under the pretense of rationality.
I guess it's a lesson and a set of things to learn for the next LW-like site. It's a lesson in how quickly good intentions (rational discussion and questioning authority) can lead to the evaporative cooling effect and the adoption of extreme sociological/political views while pretending that this is not taking place.
Replies from: lsparrish, gjm, scarcegreengrass, Vaniver, Lumifer
↑ comment by lsparrish · 2016-07-07T19:26:09.019Z · LW(p) · GW(p)
My long hiatus started a couple years ago, so my perspective might be different from yours.
I think the main issue for me was that it wasn't very fun any more. The people who made it fun (EY, Yvain, etc) were mostly posting elsewhere. The majority of posts were starting to be boring things like meetup announcements. Some of the new posts were interesting, but had more technical content and less humor.
Part of it could be that the commenters became more politically (in the sense of national politics) motivated, but that's not something I noticed at the time... I think that's perhaps a more recent thing, assuming that is indeed happening.
Another thing that might have been a factor is that I started using a smartphone more. So apps like twitter and facebook that can be easily checked there had more appeal. (This website still sucks for mobile.)
↑ comment by gjm · 2016-06-24T10:25:19.810Z · LW(p) · GW(p)
Out of curiosity: Which individuals and which views? (If you fear retaliation, feel free to answer by PM rather than here.)
Replies from: passive_fist
↑ comment by passive_fist · 2016-06-24T11:02:12.941Z · LW(p) · GW(p)
I don't see why I should fear retaliation as I've already left this site, for all intents and purposes.
The only issue is that I don't want to give the impression of having left over some petty argument and being bitter over it. The reality is the opposite: there were never any heated disagreements. It was just me observing a very clear irrational, politically extremist bias in many people's comments, especially from those most frequently in the 'top 30 contributors' panel (which shows that their beliefs generally match up with the overall beliefs of this community). In a few cases this bias even went to the extent of denying basic accepted science. In the end I realized that instead of trying to debate on LW rationally, it would be a better use of my time to go elsewhere.
Replies from: Crux
↑ comment by Crux · 2016-06-26T10:02:44.717Z · LW(p) · GW(p)
What irrational, politically extremist positions have you recently seen a lot of on LW?
Replies from: passive_fist
↑ comment by passive_fist · 2016-06-27T01:04:43.212Z · LW(p) · GW(p)
Neoreaction, libertarianism, and related ideologies.
Replies from: Crux
↑ comment by Crux · 2016-06-27T04:56:13.669Z · LW(p) · GW(p)
Libertarianism is an irrational, politically extremist position?
Replies from: scarcegreengrass, TheAncientGeek, passive_fist
↑ comment by scarcegreengrass · 2016-06-28T19:50:27.477Z · LW(p) · GW(p)
I think passive_fist was saying that they considered certain comments irrational, and that those fell into the (broad) category of libertarianism. That c is an element of set I and set L, not that L is a subset of I.
↑ comment by TheAncientGeek · 2016-10-27T12:23:32.975Z · LW(p) · GW(p)
Rationality is more than one thing. Even if there are defenses of neoreaction and libertarianism as epistemic rationality, they are open to the criticism that they are not instrumentally rational pursuits because they are too far out to influence anything in the real world.
Replies from: Crux
↑ comment by Crux · 2016-11-01T10:13:04.612Z · LW(p) · GW(p)
I don't see how this is a reply to my question. Being impractical doesn't mean something is irrational and politically extremist. If you look at the comment thread, you'll see that I'm reacting to a certain poster deciding to quit posting on Less Wrong due to "politically extremist" ideologies, where he gave libertarianism as an example. I think it's a bit silly to refer to libertarianism as a "politically extremist" ideology, hence my question.
If you want to go on a tangent and discuss whether libertarianism is practical or not, then sure, we can do that. To start, Bitcoin (or crypto-currency in general) has the potential to create massive changes to the economic landscape, where the government may lose a lot of control over the flow of goods and services. This could create a more libertarian world without requiring the normal process of passing legislation and influencing politicians.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2016-11-01T11:48:40.464Z · LW(p) · GW(p)
I don't see how this is a reply to my question. Being impractical doesn't mean something is irrational and politically extremist.
Something that is impractical, that cannot be achieved, is instrumentally irrational. If you don't understand, that is probably because you are not noticing the difference between epistemic and instrumental rationality.
Full-strength, axiomatic, dismantle-the-state libertarianism is impractical. If your central example of libertarianism is Bitcoin, then that is not impractical.
Replies from: Crux
↑ comment by Crux · 2016-11-01T16:41:33.285Z · LW(p) · GW(p)
Why are you focusing so heavily on whether it's "rational"? He said that it's an irrational, politically extremist position. The whole statement is what I was replying to.
Full-strength, axiomatic, dismantle-the-state libertarianism is impractical. If your central example of libertarianism is Bitcoin, then that is not impractical.
See here for a good overview of how the State is already being dismantled.
↑ comment by passive_fist · 2016-07-07T12:55:42.061Z · LW(p) · GW(p)
What surprises me is that you would even ask that question... what rational justification is there for libertarianism?
Replies from: Crux, Lumifer
↑ comment by Crux · 2016-07-07T22:48:05.826Z · LW(p) · GW(p)
To me libertarianism is more a community than a specific set of doctrines. There are certainly core values and epistemological underpinnings which define the ideological innovators and leaders in the libertarian community, and contrast them with those of opposing movements. But your discovery that the arguments for libertarianism "constantly shift around and are hard to pin down" is simply expected for an evolving community.
In terms of epistemological underpinnings, I'd say what best defines the libertarianism movement is a peculiar recognition of the nature of partial knowledge in thought and action. Hayek went to great lengths over the course of his career to explain why individuals who find enjoyment and skill in mathematics, physics, and so forth tend to react with skepticism to the arguments of libertarianism and free-market economics. To delve into the full depth of his thesis, begin with this book. For a quick summary, see the first few minutes of this video.
You say that libertarianism is obviously irrational. When we look at libertarianism as a community rather than a specific set of doctrines, your claim seems to boil down to the following: "The people in the libertarian community are clearly irrational." I assume that means they're incompetent and misguided? That they're unlikely to put into effect real, useful, and sustainable change in the world's economic and social systems?
I have a related question: What do you think about Bitcoin?
↑ comment by Lumifer · 2016-07-07T14:17:31.230Z · LW(p) · GW(p)
Interesting. So you truly believe that libertarianism has no possible rational justification, so much so that you can't even steelman it?
Replies from: passive_fist
↑ comment by passive_fist · 2016-07-07T22:05:08.576Z · LW(p) · GW(p)
I've tried before to steelman it and failed because the arguments constantly shift around and are hard to pin down. Tailoring arguments to every single person's interpretation gets tiring after a while. But if you can provide an explanation or link to what you believe then I'd read and try to steelman it to see if I understand your position correctly.
Here, though, I'm arguing on a more meta level - even assuming that it comprised a coherent set of beliefs, and assuming you had a well-defined utility function you wanted to maximize, how would you possibly go about providing a truly rational justification that libertarianism applied to a large mass of complicated human beings would result in the desired outcome? This also applies to capitalism, socialism, communism, etc. Essentially anything other than pure utilitarianism, but even utilitarianism requires a lot of fleshing out before you get to anything resembling a working procedure for governing people.
Replies from: Lumifer
↑ comment by Lumifer · 2016-07-08T01:11:07.049Z · LW(p) · GW(p)
because the arguments constantly shift around and are hard to pin down.
That's not an argument against libertarianism, that's an argument that people with a fairly diverse set of views call themselves libertarians. I think that happens to be true.
On a sufficiently high level of abstraction I'd probably say that the two main features of libertarianism are (1) an unusually high preference for liberty/freedom; and (2) a far-off-the-center position on the individualism vs collectivism axis. Point (1) directly leads to a strong suspicion of power structures such as the state.
assuming you had a well-defined utility function you wanted to maximize, how would you possibly go about providing a truly rational justification that libertarianism applied to a large mass of complicated human beings would result in the desired outcome?
I don't quite understand the question. If you have a "well-defined utility function", well, you just try to maximize it to the best of your ability. You seem to be thinking of a scenario where you're a god-king who gets to arrange the society (and individual values) as he sees fit. That is obviously incompatible with libertarianism at a basic level. And then you are talking about capitalism and socialism as if they were "working procedure[s] for governing people", but they are not. Economic systems are not power structures.
Replies from: passive_fist
↑ comment by passive_fist · 2016-07-16T01:22:43.686Z · LW(p) · GW(p)
I don't know what "an unusually high preference for liberty/freedom" means. Every single political philosophy claims that it is pro-freedom. Even totalitarian regimes claim to be pro-freedom. Without reference to specific policy positions, claiming to be 'pro-freedom' seems meaningless to me.
So that reduces your definition of libertarianism to 'far-off-the-center position on the individualism vs collectivism axis'.
For a stable society to exist, at some level everyone has to agree upon some central authority with final say over disputes and superlative enforcement ability. Do you agree with this or not?
Replies from: Crux, TheAncientGeek, Lumifer
↑ comment by Crux · 2016-07-23T14:22:32.779Z · LW(p) · GW(p)
For a stable society to exist, at some level everyone has to agree upon some central authority with final say over disputes and superlative enforcement ability. Do you agree with this or not?
I'm not completely sure what you mean, but my guess is that I don't agree with you.
In each possible situation, it's useful to have an authority available who has final say over disputes. But it's not necessary for every process in society to depend on the same authority.
Replies from: passive_fist
↑ comment by passive_fist · 2016-07-24T22:36:10.715Z · LW(p) · GW(p)
In each possible situation, it's useful to have an authority available who has final say over disputes. But it's not necessary for every process in society to depend on the same authority.
Then who gets to decide who that authority is for every particular situation?
Replies from: Crux
↑ comment by Crux · 2016-07-25T08:41:49.932Z · LW(p) · GW(p)
This is a predictable response from someone who's skeptical of libertarian economics. Just as it's natural to observe the order in the world and therefore assume that there must be a designer (God), it feels reasonable to the human mind to witness the structure inherent in society and thus expect that there must be in each instance a particular person who made a conscious decision to put the institution into place.
There are many facets to human society, so giving a comprehensive answer would require a book-length treatment. But to give an example, investors tend to have a large amount of power in many cases. Collectively they use their expertise in predicting future states of the economy in order to choose which companies are kept in the market and which are pushed out. Companies have internal power structures, where the final say could belong to an individual or a panel of individuals. Therefore, the "proximate final say" in this situation may be a certain person or group of people, where the "ultimate final say" may be based on the collective support or non-support of investors.
See here for how law and order could fit into a decentralized market system as well.
Replies from: passive_fist
↑ comment by passive_fist · 2016-07-26T04:46:07.778Z · LW(p) · GW(p)
What you're saying doesn't sound to me like a disagreement that there must be some higher authority. It just sounds like you're saying that the final authority gets decided at run-time, based on whoever happens to have the most financial power. So then the question becomes: Why do you think this is preferable to a system where authority is agreed upon beforehand by a majority of the people?
And just to make the discussion clearer, let's make it even more specific and talk about the issue of disputes over ownership of objects or property.
The comparison to religion makes no sense. Unlike biological organisms, human governments are designed. For example, in the case of the US, the structure and function of the court system is very explicitly laid out in the US constitution, and it was carefully designed in a committee via months/years of debate.
Replies from: Crux
↑ comment by Crux · 2016-10-26T09:46:39.594Z · LW(p) · GW(p)
It just sounds like you're saying that the final authority gets decided at run-time, based on whoever happens to have the most financial power.
That's just one of the many possibilities.
Why do you think this is preferable to a system where authority is agreed upon beforehand by a majority of the people?
Democracy inevitably becomes a grandiose popularity contest where the population votes based on social-signaling considerations which have little if anything to do with putting into place an institution which will lead to sustainably benevolent results for the society. There are all sorts of oddities, such as systematic redistribution of resources from the productive members of the economy to the unproductive, shortsighted policy enactment because the real problems of society usually can't be solved without initial pain which the politician would be blamed for, and so forth.
The comparison to religion makes no sense. Unlike biological organisms, human governments are designed. For example, in the case of the US, the structure and function of the court system is very explicitly laid out in the US constitution, and it was carefully designed in a committee via months/years of debate.
The court system is an absolute wreck, no matter how "carefully designed" the designers believe it to be.
Imagine a pre-industrial world with two villages on either side of a large forest. The people need to get back and forth between these villages every few days or weeks. The first person, through his own self-interest, simply looks for the easiest path, breaking several branches on his way. The next person does the same, probably going on a completely different route, not thinking anything of the previous person. After quite a few iterations of this, some of the people will end up going on routes that were previously made a bit easier by previous hikers. After tens of thousands of iterations of this, there will be convenient trails going through the woods in an efficient way, with all the obstacles neutralized.
If a foreigner chanced upon this creation, they would surely think to themselves, "What a great trail system! I'm glad the people of this area were kind enough to make a trail for all to use!" They would immediately jump to the idea that the trail, looking like it was created for a purpose, must have been designed by a committee of individuals or commissioned by a wise member of one of the villages. But no such thing happened; each person acted upon their own self-interest, and the byproduct was a trail system that looks like it was designed but really was an automatically emergent order.
Most of what works very well in society is like this, and most of what breaks in a disastrous way is an attempt to design systems where simply setting the initial conditions for an emergent order would have been a much better idea.
Investors simply try to buy low and sell high for their own self-interest. Many of them, even very successful ones, probably have little or no appreciation for how important the role of investors is in the emergent order of the economic system.
↑ comment by TheAncientGeek · 2016-07-16T09:19:34.804Z · LW(p) · GW(p)
Libertarian freedom is usually cashed out in terms of a strong adherence to negative rights combined with a disdain for positive rights.
Replies from: passive_fist
↑ comment by passive_fist · 2016-07-17T00:58:24.164Z · LW(p) · GW(p)
Can you be more specific?
↑ comment by scarcegreengrass · 2016-06-26T19:34:17.322Z · LW(p) · GW(p)
While lesswrong.com has a low population, the private blogs and diaspora communities are growing very rapidly.
↑ comment by Vaniver · 2016-06-24T00:13:27.939Z · LW(p) · GW(p)
It's a lesson in how quickly good intentions (rational discussion and questioning authority) can lead to the evaporative cooling effect and the adoption of extreme sociological/political views while pretending that this is not taking place.
Which time period do you have in mind, here? Because "quickly" seems inaccurate. LW is old, and the decline has taken a long time.
If anything, the interesting thing with LW's decline is how slow it was, and how much attention the site continues to receive despite the lack of content. There was no major crisis that split things apart; it just got more and more stale, mostly as people graduated to more impressive and important things without replacements growing up here in the same way.
Replies from: passive_fist
↑ comment by passive_fist · 2016-06-24T02:04:54.030Z · LW(p) · GW(p)
I somewhat agree. Sometimes communities dissolve through a publicized schism. Other times they just decay without any visible drama. It's not realistic to expect every single person who gets fed up and leaves to post a detailed criticism of the site and why they are leaving. A lot of people would rather just leave quietly and not waste their time with that kind of thing.
Still, it seems like the decline definitely accelerated over the past couple of years.
comment by root · 2016-06-15T15:13:07.070Z · LW(p) · GW(p)
Man, I don't really care what happens to LW but if I had to choose, I honestly would say, 'dunno, shoot me'.
I'm just really getting the 'I don't know what to decide' feeling here. It's a bet, and by 'bet' I mean something I cannot put into concrete, set-in-stone numbers that I must decide on. For example, I recently got a new haircut. I didn't get one compliment for it. Do you think it was a good bet? I could've got a better haircut. But on the other hand, I'm pretty satisfied with it. I got tired of the same old haircut I had and, despite not getting even one compliment, I'm going to keep going with it.
I once did some mindkilling and tried to cold approach[1] women. A significant majority of them said that they had a boyfriend. Only a few women totally appreciated it. Some of them probably invented an imaginary boyfriend. Maybe some of them later went on Facebook or some other media and complained about random guys hitting on them. I suppose that I made this particular branch of the universe a slightly worse place to live in. But a very small minority gave me a wide, unexpected smile and at that moment I wanted to middle finger the non-existing camera filming my life and say "420 is for wankers".[2]
Tomorrow, I'll be visiting a previous workplace to say thank you (and ask to keep in contact) to a woman who, the moment she saw me, had a welcoming smile, and we engaged in conversation at a record-breaking speed of human cognition. And actually part of me screams that this woman is probably just nice, or was just curious, or perhaps just happy to see me again, and that the whole effort of dressing up and making the non-trivial trip, all for what could be yet another (possibly imaginary) boyfriend, is a huge waste. But part of me also believes that I could succeed here. Do I know for sure? Nope! It's yet another bet, and the dealer is the laws of physics (maybe biology is a better fit) and who knows what cards I get.
So let's wrap this up. I know you like Mark Manson[3] so you probably noticed a similar theme[4] here. The people who believe LessWrong is going to make it should place their bets - bet with what you will: money, effort. Rationality is winning. The other party has already placed their bets, the cards are still not revealed, and the roulette wheel has still not started spinning. Place your bets and we'll see who wins.
[1] I basically went with some variation of an introduction, saying that she's gorgeous and asking if she'd like to talk or meet up later.
[2] Imagine a Looney Tunes character breaking the fourth wall. Alternatively, a paranoid schizophrenic psych ward escapee.
[3] markmanson.net does have excellent stuff. What are you waiting for?
[4] The theme is having to make a decision under unknown possibility of success. Which is the way I see it because I personally haven't seen too much from the LessWrong 2.0 camp, despite the enthusiasm.
Replies from: Lumifer, Error
↑ comment by Lumifer · 2016-06-15T15:34:31.924Z · LW(p) · GW(p)
but if I had to choose, I honestly would say, 'dunno, shoot me'
That looks like a very poor decision :-P
Replies from: root
↑ comment by root · 2016-06-15T15:48:07.727Z · LW(p) · GW(p)
I do hope you're not taking the hypothetical person in the second footnote bent on revitalizing LessWrong with a knack for movie scripts seriously.
I did have an extra footnote at 'rationality is winning' mentioning that someone with an extreme desire to see LW prosper might kidnap CFAR staff and other prominent bloggers in order to achieve his goal, but that the victory would be too suspicious to pass. The possible escape scene seems interesting, but I can't think of a way to make the villain worthwhile if he gets outsmarted too easily. (Let's say Eliezer does a sequence of posts and one post has 4 paragraphs, and they begin with H, E, L, and P. But I suppose the villain also read GEB and so he might notice it.)
Anyhow, that supposed situation has a low enough probability that it shouldn't be a worry and there are probably better people to shoot rather than neutrals.
EDIT: And let's also stop this here, this is getting offtopic.
Replies from: Lumifer
↑ comment by Lumifer · 2016-06-15T16:00:13.327Z · LW(p) · GW(p)
I do hope you're not taking the hypothetical person in the second footnote bent on revitalizing LessWrong with a knack for movie scripts seriously.
Looks like Deadpool X-)
Let's say Eliezer does a sequence of posts and one post has 4 paragraphs, and they begin with H, E, L, and P.
Too easy :-)
there are probably better people to shoot rather than neutrals.
...famous last words.