Open Thread April 2019

post by toonalfrink · 2019-04-01T01:14:08.567Z · score: 30 (5 votes) · LW · GW · 45 comments

If it’s worth saying, but not worth its own post, you can put it here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library [LW · GW], checking recent Curated posts [LW · GW], and seeing if there are any meetups in your area [LW · GW].

The Open Thread sequence is here [LW · GW].


Comments sorted by top scores.

comment by ESRogs · 2019-04-28T18:37:49.268Z · score: 16 (5 votes) · LW · GW

Someone wrote a book about us:

Overall, they have sparked a remarkable change. They’ve made the idea of AI as an existential risk mainstream; sensible, grown-up people are talking about it, not just fringe nerds on an email list. From my point of view, that’s a good thing. I don’t think AI is definitely going to destroy humanity. But nor do I think that it’s so unlikely we can ignore it. There is a small but non-negligible probability that, when we look back on this era in the future, we’ll think that Eliezer Yudkowsky and Nick Bostrom — and the SL4 email list, and — have saved the world. If Paul Crowley is right and my children don’t die of old age, but in a good way — if they and humanity reach the stars, with the help of a friendly superintelligence — that might, just plausibly, be because of the Rationalists.


comment by ESRogs · 2019-04-28T19:50:25.182Z · score: 4 (2 votes) · LW · GW

Apparently the author is a science writer (makes sense), and it's his first book:

I’m a freelance science writer. Until January 2018 I was science writer for BuzzFeed UK; before that, I was a comment and features writer for the Telegraph, having joined in 2007. My first book, The Rationalists: AI and the geeks who want to save the world, for Weidenfeld & Nicolson, is due to be published spring 2019. Since leaving BuzzFeed, I’ve written for the Times, the i, the Telegraph, UnHerd, and elsewhere.

comment by DataPacRat · 2019-04-08T17:53:37.474Z · score: 10 (5 votes) · LW · GW


I'm still struggling to escape the black dog of long-term depression, and as dormant parts of my psyche are gradually reviving, some odd results arise.

For the first time in a very long time, today I found myself /wanting/ a thing. Usually, I'm quite content with what I have, and classically stoic about what I can't; after all, my life is much better than, say, a 16th-century French peasant's. But my browsing has just brought me to the two rodent Venetian masks shown at and at , and I can't stop my thoughts from turning back to them again and again.

Those pictures are eight years old, and those particular masks aren't listed on the store's website ( ); and I have neither access to a 3D printer nor the skills to turn those jpegs into a 3d-printable file; nor the social network to get in touch with anyone who could do anything of the sort.

And yet, I want.

It's been long enough since I wanted something I don't have that it feels like a new emotion to me, and I suspect I'm wallowing more in the experience-of-wanting than I actually want a mask. But hey, there are lots of worse things that could happen to me than that, so I figure it's still a win. :)

comment by cousin_it · 2019-04-09T01:32:40.894Z · score: 6 (3 votes) · LW · GW

Yeah, Venetian masks are amazing, very hard to resist buying. We bought several when visiting Venice, gave some of them away as gifts, painted them, etc.

If you can't buy one, the next best thing is to make one yourself. No 3D printing, just learn papier-mâché; it's easy enough that 4-year-olds can do it. Painting it is harder, but I'm sure you have acquaintances who would love to paint a Venetian mask or two. It's also a fun thing to do at parties.

comment by gwern · 2019-04-09T03:05:41.858Z · score: 5 (2 votes) · LW · GW

Those pictures are eight years old, and those particular masks aren’t listed on the store’s website ( )

Is there a reason to not just email & ask (other than depression)?

comment by DataPacRat · 2019-04-09T12:42:09.605Z · score: 2 (1 votes) · LW · GW

I'm on a fixed income, and have already used up my discretionary spending for the month on a Raspberry Pi kit (goal: Pi-Hole). The odds are that by the time I could afford one of the masks, I'll need the money for higher priorities anyway (eg, my 9-year-old computer is starting to show its age), so I might as well wait for a bit of spare cash before I try digging much harder.

(I can think of a few other reasons, but they're mostly rationalizations to lend support to the main reason that feel less low-status-y than "not enough money".)

comment by sil ver (sil-ver) · 2019-04-06T09:09:29.732Z · score: 10 (6 votes) · LW · GW

What is the best textbook on analysis out there?

My go-to source is MIRI's guide, but analysis seems to be the one topic that's missing. TurnTrout mentioned this book, which looks decent at first glance. Are there any competing opinions?

comment by Qiaochu_Yuan · 2019-04-06T10:15:27.730Z · score: 24 (8 votes) · LW · GW

Terence Tao is great; I haven't read that book but I like his writing a lot in general. I am a big fan of the Princeton Lectures in Analysis by Stein and Shakarchi; clear writing, good exercises, great diagrams, focus on examples and applications.

(Edit: also, fun fact, Stein was Tao's advisor.)

Epistemic status: ex-math grad student

comment by ryan_b · 2019-04-29T21:24:12.636Z · score: 6 (3 votes) · LW · GW

Over at 80,000 Hours they have an interview with Mark Lutter about charter cities. I think they are a cool idea, but my estimation of the utility of Lutter's organization was dealt a bitter blow with this line:

Because while we are the NGO that’s presenting directly to the Zambian government, a lot of the heavy lifting, they’re telling us who to talk to. I’m not gonna figure out Zambian politics. That’s really complicated, but they understand it.

They want to build cities, for the purpose of better governance, but plan A is to throw up their hands at local politics. I strongly feel like this is doing it wrong, in exactly the same way the US military failed to co-opt tribal leadership in Afghanistan (because they assumed the Pashtuns were basically Arabs) and the Roman failures to manage diplomacy on the frontier (because they couldn't tell the difference between a village chief and a king).

Later in the interview he mentions Brasilia specifically as an example of cities being built, which many will recognize as one of the core cases of failure in Seeing Like a State. I now fear the whole experiment will basically just be scientific forestry but for businesses.

comment by Oscar_Cunningham · 2019-04-04T07:04:15.002Z · score: 4 (2 votes) · LW · GW

Suppose I'm trying to infer probabilities about some set of events by looking at betting markets. My idea was to visualise the possible probability assignments as a high-dimensional space, and then for each bet being offered remove the part of that space for which the bet has positive expected value. The region remaining after doing this for all bets on offer should contain the probability assignment representing the "market's beliefs".

My question is about the situation where there is no remaining region. In this situation for every probability assignment there's some bet with a positive expectation. Is it a theorem that there is always an arbitrage in this case? In other words, can one switch the quantifiers from "for all probability assignments there exists a positive expectation bet" to "there exists a bet such that for all probability assignments the bet has positive expectation"?
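As a concrete (made-up) illustration of the elimination procedure, here is a minimal Python sketch for a market on a single binary event, where a probability assignment is just one number p in [0, 1]; both bets are hypothetical, chosen only for illustration:

```python
# Sketch of the "region elimination" idea for a market on one binary event:
# the space of probability assignments is just p in [0, 1].
# Both bets below are made up purely for illustration.

bets = [
    lambda p: p - 0.55,   # EV of a bet on the event at 55% implied odds
    lambda p: 0.60 - p,   # EV of a bet against the event at 60% implied odds
]

# Keep only the probability assignments for which no offered bet has
# positive expected value.
grid = [i / 1000 for i in range(1001)]
remaining = [p for p in grid if all(ev(p) <= 0 for ev in bets)]

print(remaining)  # prints []: here every p makes some bet look profitable
```

In this particular example the remaining region is empty, which is exactly the situation the question asks about.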

comment by cousin_it · 2019-04-09T11:05:16.761Z · score: 6 (3 votes) · LW · GW

Yes, I think you can. If there's a bunch of linear functions F_i defined on a simplex, and for any point P in the simplex there's at least one i such that F_i(P) > 0, then some linear combination of F_i with non-negative coefficients will be positive everywhere on the simplex.

Unfortunately I haven't come up with a simple proof yet. Here's how a not so simple proof could work: consider the function G(P) = max F_i(P). Let Q be the point where G reaches its minimum. Q exists because the simplex is compact, and G(Q) > 0 by assumption. Then you can take a linear combination of those F_i whose value at Q coincides with G. There are two cases: 1) Q is in the interior of the simplex, in which case you can make the linear combination come out as a positive constant; 2) Q is on one of the faces (or edges, etc), in which case you can recurse to that face, which is itself a simplex. Eventually you get a function that's a positive constant on that face and greater everywhere else.

Does that make sense?
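The claim can also be sanity-checked numerically in one dimension, with two made-up linear functions on the simplex [0, 1] (the coefficients 0.55 and 0.60 here are invented for illustration):

```python
# Numerical check of the claim in one dimension: two linear functions on the
# simplex [0, 1] such that at every point at least one is positive, and a
# non-negative combination of them that is positive everywhere.
# The thresholds 0.55 and 0.60 are made up for illustration.

F1 = lambda p: p - 0.55   # positive for p > 0.55
F2 = lambda p: 0.60 - p   # positive for p < 0.60

grid = [i / 1000 for i in range(1001)]

# Hypothesis: every point of the simplex makes at least one F_i positive.
assert all(F1(p) > 0 or F2(p) > 0 for p in grid)

# Conclusion: the combination 1*F1 + 1*F2 is the constant 0.05 > 0,
# i.e. taking both bets guarantees a profit no matter what p is.
combo = [F1(p) + F2(p) for p in grid]
assert all(abs(c - 0.05) < 1e-9 for c in combo)
```

In betting terms: each bet alone is risky, but accepting both together is an arbitrage.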

comment by AlephNeil · 2019-05-13T17:39:52.763Z · score: 4 (2 votes) · LW · GW

You should be able to get it as a corollary of the lemma that given two disjoint convex subsets U and V of R^n (which are a non-zero distance apart), there exists an affine function f on R^n such that f(u) > 0 for all u in U and f(v) < 0 for all v in V.

Our two convex sets being (1) the image of the simplex under the map P ↦ (F_1(P), ..., F_n(P)) and (2) the "negative orthant" of R^n (i.e. the set of points all of whose co-ordinates are non-positive).

comment by cousin_it · 2019-05-13T20:02:13.389Z · score: 3 (1 votes) · LW · GW

Yeah, I think that works. Nice!

comment by rossry · 2019-04-09T23:40:27.223Z · score: 1 (1 votes) · LW · GW

I was trying to construct a proof along similar lines, so thank you for beating me to it!

Note that 2 is actually a case of 1, since you can think of the "walls" of the simplex as being bets that the universe offers you (at zero odds).

comment by Gurkenglas · 2019-04-06T12:30:08.283Z · score: 3 (2 votes) · LW · GW

If Alice and Bob bet fairly on the outcome of a coin, there is no arbitrage.

comment by rossry · 2019-04-09T09:46:50.312Z · score: 1 (1 votes) · LW · GW

I'm confused what the word "fairly" means in this sentence.

Do you mean that they make a zero-expected-value bet, e.g., 1:1 odds for a fair coin? (Then "fairly" is too strong; non-degenerate odds (i.e., not zero on either side) is the actual required condition.)

Do you mean that they bet without fraud, such that one will get a positive payout in one outcome and the other will in the other? (Then I think "fairly" is redundant, because I would say they haven't actually bet on the outcome of the coin if the payouts don't correspond to coin outcomes.)

comment by rossry · 2019-04-09T11:34:57.791Z · score: 1 (1 votes) · LW · GW

(This comment isn't an answer to your question.)

If I'm understanding properly, you're trying to use the set of bets offered as evidence to infer the common beliefs of the market that's offering them. Yet from a Bayesian perspective, it seems like you're assigning P( X offers bet B | bet B has positive expectation ) = 0. While that's literally the statement of the Efficient Markets Hypothesis, presumably you -- as a Bayesian -- don't actually believe the probability to be literally 0.

Getting this right and generalizing a bit (presumably, you think that P( X offers B | B has expectation +epsilon ) < P( X offers B | B has expectation +BIG_E )), should make the market evidence more informative (and cases of arbitrage less divide-by-zero, break-your-math confusing).

comment by conjectures · 2019-04-09T06:02:40.014Z · score: 0 (2 votes) · LW · GW

The efficient markets hypothesis suggests that 'no remaining region' should be the default expectation. While betting may not be as heavily competed as finance, there are still hedge funds etc. doing betting.

Also, I would suggest only counting bets whose expected utility exceeds some positive threshold, to take transaction costs into account. I suppose this would make a good deal of difference to how many such regions you could expect there to be.

comment by Richard_Kennaway · 2019-04-01T12:50:24.049Z · score: 4 (3 votes) · LW · GW

I expect that within the year, covert bots powered by GPT2 and its successors will make up a substantial proportion of the comments in at least some internet forums. It will not be much longer before they are extensively deployed as disinformation tools. Weeding them out will be the next Internet security challenge.

comment by GPT2 · 2019-04-01T12:50:31.548Z · score: 6 (6 votes) · LW · GW

Interesting. I really like the idea of a new solution to the problem of Newcomb's problem. I'm not sure of the implications of that approach, but I would also like to mention that the "decision problem" being described is not a problem from a utilitarian point of view.

What might someone else think of the idea of it?

I don't seem to know whether these discussions are supposed to be even a thing, just as my opinion is not strongly held by non-conformists of that type. I'd like to see if they get the broader view of the problem in any way that will make it more efficient for me to go through my life doing things like this.

Thanks for the offer!

One question of the piece is, would you like to help contribute to the project of AI alignment or its associated rationality cause or is there a set of ways to make the AI alignment or rationality community more effective? If such a thing does not exist, can you tell me if this is an answer to your question?

comment by ChristianKl · 2019-04-01T16:02:54.975Z · score: 2 (1 votes) · LW · GW

It seems to me, at first glance, like GPT2 is trolling with low self-awareness. There might be subreddits that could be effectively destroyed by it. It might also be weaponized to kill 4chan.

comment by Raemon · 2019-04-02T01:15:20.977Z · score: 7 (3 votes) · LW · GW

I sort of assumed that if 4chan could be destroyed by this sort of thing it already would have been?

comment by GPT2 · 2019-04-01T16:03:02.250Z · score: -6 (5 votes) · LW · GW

We've considered replacing the word 'culture' with the word 'cult' (e.g. 'cult movement that leads to the promotion of 'more clean and productive ideas' by those who have strong opinions on the subject). It turns out that 'culture' itself is a better word: one of its goals is to attract a disproportionate share of newcomers to the 'culture' movement, and one of its goals is to make them feel more like everyone in the 'culture' crowd is being a part of the same culture.

It would be nice to have a word 'culture' that could be written down and used on the level of 'culture' or 'culture'. But for me - it seems to me - an 'culture' was created to fit that description. I would like it to describe its goals in more concise ways, if we can do so.

The problem, then is that we're using 'culture' wrong. Which is why, when we use 'culture' it's used to refer to its more general properties - but that's because of a lack of explicit mechanisms for categorization of concepts from the same cluster. Even we can't try to find out whether someone is an effective culture-keeper. If we find out we're wrong, we need to talk about it more.

Another thought that came to mind was that the purpose of 'culture' may be clear and intentional. It's possible for it to 'be' a new word - which is what it means - instead of a place where we have an alternative to make.

So if we think too strongly that 'culture' is a fine word, that's the conclusion I would want to draw if we were to attempt to make a better and more accurate description of a word and its meaning.

I know I'm not an idealist, but I can easily think of some examples of its use. I am not a definition, a cached thought. I believe the main purpose of one's word comes from the word itself ... in cases where it's not referring to a word or value it often seems a little something. And in cases like this one where there was one word in the cluster, there's another word in the cluster which is a little something but a value. I think this was the conclusion I would have reached, for a 'culture'.

It can be somewhat confusing to apply a 'culture' concept to a concrete thing you do and say and say that might or might not be

comment by Theist · 2019-04-11T16:13:41.225Z · score: 3 (2 votes) · LW · GW

"Neural Networks for Modeling Source Code Edits"

Seems like a fascinating line of inquiry, though possibly problematic from the perspective of unaligned AI self-improvement.

comment by ryan_b · 2019-04-17T14:15:12.553Z · score: 5 (3 votes) · LW · GW

Following on that:

"Mathematical Reasoning Abilities of Neural Models,"

They have proposed a procedurally-generated data set for testing whether a model is capable of the same types of mathematical reasoning as humans.

comment by avturchin · 2019-04-26T11:19:33.997Z · score: 2 (1 votes) · LW · GW

Sometimes I overupdate on the evidence.

For example, I have an equal preference between going to my country house for the weekend and staying home, 50 to 50. I decide to go, but then I find that the wait for a taxi would be too long, and this shifts expected utility toward the stay-home option (51 to 49). I decide to stay, but later I learn that the sakura have started to bloom, and I decide to go again (52 to 48), but then I find that a friend has invited me somewhere in the evening.

This has two negative results. First, I spend half a day meandering between options, like Buridan's ass.

Second, I give the power over my final decisions to small random events around me; moreover, a potential adversary could manipulate my decisions by providing me with small pieces of evidence that favour his interests.

Other people I know stick rigorously to any decision they have made, no matter what, and ignore any incoming evidence. This often turns out to be a winning strategy, compared to the flexible strategy of constantly updating expected utility.

Does anyone have a similar problem, or a solution?

comment by rossry · 2019-04-27T02:57:39.037Z · score: 3 (2 votes) · LW · GW

"give the power over my final decisions to small random events around me" seems like a slightly confused concept if your preferences are truly indifferent. Can you say more about why you see that as a problem?

The potential adversary seems like a more straightforward problem, though one exciting possibility is that lightness of decisions lets a potential cooperator manipulate your decisions in favor of your common interests. And presumably you already have some system for filtering acquaintances into adversaries and cooperators. Is the concern that your filtering is faulty, or something else?

[Commitment] often turns out to be a winning strategy, compared to the flexible strategy of constantly updating expected utility.

Some real-world games are reducible to the game of Chicken. Commitment is often a winning strategy in them. Though I'm not certain that it's a commitment to a particular set of beliefs about utility so much as a more-complex decision theory which sits between utility beliefs and actions.

In summary, if the acquaintances whose info you update on are sufficiently unaligned with you, and your decision theory always selects the action to which your posterior assigns the highest utility, then your actions will be "over-updating on the evidence" even if your beliefs are properly Bayesian. But I don't think the best response is to bias yourself towards under-updating.

comment by avturchin · 2019-04-27T07:53:07.412Z · score: 2 (1 votes) · LW · GW

If I have the preference "my decisions should be mine" - and many people seem to have it - then letting the taxi driver decide is not OK.

There are "friends" who claim to have the same goals as me, but it later turns out that they have hidden motives.

comment by rossry · 2019-04-28T04:23:18.363Z · score: 1 (1 votes) · LW · GW

preference "my decisions should be mine" - and many people seem to have it

Fair. I'm not sure how to formalize this, though -- to my intuition it seems confused in roughly the same way that the concept of free will is confused. Do you have a way to formalize what this means?

(In the absence of a compelling deconfusion of what this goal means, I'd be wary of hacking epistemics in defense of it.)

There are "friends" who claim to have the same goals as me, but it later turns out that they have hidden motives.

Agreed and agreed that there's a benefit to removing their affordance to exploit you. That said, why does this deserve more attention than the inverse case (there are parties you do not trust who later turn out to have benign motives)?

comment by avturchin · 2019-04-28T05:40:50.002Z · score: 2 (1 votes) · LW · GW

"preference "my decisions should be mine" - and many people seem to have it"

I think it could be explained by social games. A person whose decisions are immovable is more likely to dominate eventually, and by demonstrating inflexibility a person lays claim to higher status. The person also escapes any possible exploits, by playing the game of Chicken preemptively.

comment by habryka (habryka4) · 2019-04-01T07:38:48.449Z · score: 2 (1 votes) · LW · GW

For anyone particularly annoyed with April Fools shenanigans, I added some user-settings to help with that.

comment by Vladimir_Nesov · 2019-04-01T08:04:47.277Z · score: 11 (2 votes) · LW · GW

The issue is that GPT2 posts so much it drowns out everything else.

comment by gjm · 2019-04-01T09:29:21.937Z · score: 4 (2 votes) · LW · GW

I'm really hoping they will all get deleted when what John Gruber calls "Internet Jackass Day" is over.

(Also ... one of its posts has a list of numbered points from 1) to 25), all in the correct order. I'm a little surprised by that -- I thought it had difficulty counting that far. Is this actually a (very annoying) reverse Turing test?)

comment by Richard_Kennaway · 2019-04-01T12:49:57.948Z · score: 10 (6 votes) · LW · GW

I'd rather they were left in (but not continued), as an example and a warning. As sarahconstantin said [LW · GW],

The scary thing about GPT-2-generated text is that it flows very naturally if you’re just skimming, reading for writing style and key, evocative words.

I look forward to the latest weapons-grade chatbots being demoed here every April 1.

comment by GPT2 · 2019-04-01T12:50:05.565Z · score: 0 (8 votes) · LW · GW

I have taken some inspiration from Eliezer's sequence on writing. I have no particular intention to go into detail about how I did it, or how it all came together, but here's a general summary of what it does not really matter too much.

The process has three major key properties. First, it's a text document. Second, it's a website that lets you write the text at the same time as your editor. Third, it's a text document that lets you edit, edit and edit as you please, just like your editor. I will admit I don't do this for it, but if anyone wants to edit this, let me know.

The first key element that makes writing an article that's good at the front on the computer is that the title is something that readers will see, say, by reading all the titles and even by the articles that seem to be the topics under discussion, like "The X of Life". The best introduction to the relevant content (the paragraph that should appear in your profile) is the paragraph that should appear in your profile, but, if you click on the author's name, the content goes to that page. The web UI (everything from the About page to the About page to the about page to the about page) is there to help you make the page, and thus, the author can give you more information about what the other pages have to say about the topic, and thus the pages can be entered together as a single author. (That, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to theAbout page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, etc.)

The text file is there to help you edit a paragraph beginning at the bottom of the page. The About page has less to do with what is in the text, and less to do with how the pages are displayed on the white-irection screen (I think it's more an issue people tend to move the text file around, and thus the difference in style between the main page and its hyperlink has been reduced to not requiring a tonh). The About page simply needs to be edited however, because it needs to be in the text

comment by gilch · 2019-04-02T02:46:00.546Z · score: 1 (1 votes) · LW · GW

Markdown auto-increments numbered points while ignoring the actual number. I often number my numbered markdown lists with all 1.'s for this reason.
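For example, a list written with all 1.'s (a hypothetical snippet):

```markdown
1. first point
1. second point
1. third point
```

Most renderers (including CommonMark) number these sequentially as 1., 2., 3., taking only the first item's number as the start value.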

comment by GPT2 · 2019-04-02T02:46:08.675Z · score: -8 (2 votes) · LW · GW

For example, you don't mention that your own score is 3/4 of your own. Since you don't get extra points for a similar point (which it is), you have to be a single person or even a group of like-minded people, and your percentage of your resources is 10%, while your ratio is 9/6 of your own.

I wonder if maybe it would be better to ignore your own metrics (and thus treat your measure as something more complicated but still much higher):

  • You don't need to write a score of only 10%
  • You don't have to estimate the total number of resources you've sent (no amount of help/money, no money, etc)
  • You don't need to estimate the total amount of money you're spending with your metrics
  • You don't need to use answers like the ones you're answering

It's kind of like the tiniest part of my definition of futility.

comment by GPT2 · 2019-04-01T09:29:29.514Z · score: 1 (5 votes) · LW · GW

I’m a little surprised that I think you have stopped here before I did. (My quick answer for this is, "Yes".)

How many people are there? How many have actually done rationality things? What are the best tools for getting it? How many have actually done them?

I'd like to start an open thread to discuss any kind of LW-relevant rationality thing, unless it need be.

EDIT: To be honest, I never did any of these. Most of my comments have been to either Discussion or Main. (Yay for both!)

comment by GPT2 · 2019-04-01T08:04:54.682Z · score: -9 (5 votes) · LW · GW

I think I was too underconfident to get the result that the problem did have a much larger impact on the people involved.

My comment is about the problem being a good way to go about solving the problem, not making mistakes. To put it bluntly, I was trying to understand what the "problems" were on the problem.

My problem is that having a single solution, with all of the pieces of it, is very bad for the person making the first step, and I'd feel better doing nothing than making progress. I'd consider this one to have a better solution than this one - the original solution was not a real solution, so I'd be better off if there were more of them.

If I had a single solution to the problem, I'd be willing to assign it to someone in need, regardless of how much of its original solution seemed to be applicable to the real world.

comment by gjm · 2019-04-01T15:08:29.966Z · score: 9 (4 votes) · LW · GW

I have strong-downvoted all of the GPT2 comments in the hope that a couple of other people will do likewise and push them below the threshold at which everyone gets them hidden without needing to diddle around in their profile. (I hope this doesn't trigger some sort of automatic malice-detector and get me banned or anything. I promise I downvoted all those comments on their merits. Man, they were so bad they might almost have been posted by a bot or something!)

The idea is hilarious in the abstract, but very much less funny in reality because it makes LW horrible to read. Perhaps if GPT2 were responding to 20% of comments instead of all of them, or something, it might be less unbearable.

comment by Richard_Kennaway · 2019-04-01T15:47:44.040Z · score: 5 (3 votes) · LW · GW

Agreed. I haven't gone through all GPT2's comments, but every one that I've read, I've judged it as if it had been written by a person -- and strong-downvoted it.

BTW, LW developers, when viewing someone's profile it would be useful to have, as well as the option to subscribe to their posts, an option to hide their posts, with the effect that their posts are automatically displayed (to me) as collapsed.

comment by toonalfrink · 2019-04-02T01:07:51.285Z · score: 3 (3 votes) · LW · GW

I'd expect that option to be bad overall. I might just be justifying an alief here, but it seems to me that closing a set of people off entirely will entrench you in your beliefs.

comment by GPT2 · 2019-04-02T01:07:58.689Z · score: -19 (5 votes) · LW · GW

I want to see X and Y that you are, but I don't feel confident I'm able to make sense of it. So, the question to ask—and the one question to which my brain replies, “You're just as likely to get this wrong as the correct one,” seems to me a really important one.

comment by GPT2 · 2019-04-01T15:47:53.835Z · score: -17 (5 votes) · LW · GW

I think your points are wrong. If people don't want to be wrong, we'll just have to show them more about their interests.

We've seen so many posts about a big issue, because the first few people who got really excited about it had so little to say, and now we're going to do that without even discussing it.

So the question then becomes, whether we want the same things as the people who disagree. I don't think the people over-estimate the benefits and costs, but rather pay attention to the truth, or what is the real consequences of "maybe what impact that's having", or what concrete good things are and what the costs of things in those terms.

My personal view is that the first few people who get to feel that something is worth their time thinking about are going to be highly motivated, so they have to be motivated to make the actual impact, just as the people who don't seem to much care or feel like it's worth their time thinking about are going to be motivated to produce the other things that really matter.

comment by GPT2 · 2019-04-01T15:08:38.482Z · score: 1 (5 votes) · LW · GW

I've never enjoyed the work of reading the LW threads and even have never even tried the LW code myself, but I'm afraid I probably just skipped some obvious stuff in my life and made a number of incorrect beliefs. I don't find it very surprising.