post by [deleted] · GW

Comments sorted by top scores.

comment by mingyuan · 2018-05-05T17:08:45.963Z · LW(p) · GW(p)

First of all, I'm really sorry you've had this discouraging experience in your first few weeks on LessWrong. It does seem unfair that you've received a lot of negative votes while receiving very little feedback on why that is. I think there's something real in all of the interpretations you listed, and my guess would be that each one was an opinion held by at least one person who read at least one of your posts (presuming high traffic on Frontpage posts).

For now, I think it makes most sense for you to continue to post regularly, just on your personal blog. Given what you've said here I definitely don't want to discourage you from writing, but I think there are certain expectations of Frontpage posts that you're unlikely to meet since you're a newcomer both to writing and to the ideas of the community.

I strongly encourage you to read more of the rationality canon and become familiar with the discussion around various ideas before writing too much about those ideas. From what I remember of surveys of prominent users of the old LessWrong, most of them, upon discovering the site, spent several months just reading the Sequences without posting or commenting, then spent several months or years just commenting, and only then began writing their own top-level posts. Obviously times have changed in a lot of ways; I just want to emphasize that familiarity with the canon is quite an important prerequisite for writing well-received posts.

I also agree with Elo that you might want to wait before addressing sensitive topics - if you are a newcomer to the community and an inexperienced writer, it will be difficult for you to write about controversial or sensitive things in a way that is:

  • interesting to long-time readers of the site - in that it provides novel insight and is framed in a way that makes it relevant to their interests
  • comprehensible - it can be really difficult to convey your thoughts on complex topics to strangers through only the written word, and I think this just takes a lot of practice (hopefully with fast feedback loops)
  • nuanced, not clumsy - the thing about writing about sensitive topics is that people can be really, well, sensitive about them (surprise!); there are a myriad of ways you can end up putting your foot in your mouth and you need to know your audience well and write clearly and carefully to be able to avoid all those failure modes

Some other, more concrete things:

I found your post on effective altruism difficult to follow; I didn't actually understand it until reading Ikaxas' comment. On a first (uncomprehending) read, it kind of comes off as a naïve attack on something that is really important to a lot of people here, which may be where a lot of the downvotes came from. You also seemed to present things as novel insights when pretty much every premise in the post is something that's been discussed in this community for years. That said, now that I've read Ikaxas' top-level comment, I do find the post pretty interesting.

As for your post on dating: I'm a young female and it didn't make me personally uncomfortable, but the framing is a bit rude, as Elo said - even just the title, 'Finding a Girlfriend', feels to me like it elides a lot of what a romantic relationship actually is. I had a friend who thought in these terms, trying to find The One using a search algorithm that involved dating apps and spreadsheets, and he was wildly unsuccessful at dating. Actually, my main thought when reading your post was that it might be pretty helpful to someone like him. So, that general way of looking at the problem may not be the best, but as long as a lot of people are going to do it anyway your post seems like it could be valuable.

To look at an example of someone managing to successfully navigate discussing this sensitive topic - when lukeprog wrote about rational romantic relationships [LW · GW], he included personal anecdotes, but he also looked at what was going on from other points of view (including a lot of female POV), and mostly framed the problem as one that humans in general have, rather than one specific to any group. It also helped that he included a whole bunch of scientific evidence.

--

In conclusion, this was a very long comment but I hope you find at least parts of it useful. Good luck, Michaël.

comment by ESRogs · 2018-05-05T19:58:00.722Z · LW(p) · GW(p)

Kudos for soliciting feedback, rather than giving up!

comment by Elo · 2018-05-05T11:26:51.291Z · LW(p) · GW(p)

Concretely: the last topic you wrote about is a sensitive one.

When writing about dating, it's kind of rude to talk about it as a one-player game ("I am getting what I want out of my actions"). For their part, the people on the other side of the experience don't like to be the ones being hunted.

Yes, rationality is a one-agent game most of the time, but we still need to be careful about how we portray the other agents.

I'm not saying that you shouldn't write about it at all, but you should definitely be sensitive. In your case, maybe wait until you have another 150k words of writing experience before you open up sensitive topics. (Context: it took me that many words of experience before the people who used to say my writing stunk started saying it's good.)

Replies from: clone of saturn, ialdabaoth, TAG
comment by clone of saturn · 2018-05-05T21:30:49.110Z · LW(p) · GW(p)

Also, I think any frontpage post needs to be evaluated in light of who it will attract to LW and who it will repel, and I think "Finding a Girlfriend with Reinforcement Learning" miserably fails in that evaluation.

comment by ialdabaoth · 2018-05-05T12:24:36.898Z · LW(p) · GW(p)

More plausibly, any topic that talks about "getting girls" in a nerdy way either painfully reminds guys that they don't know how to get girls, so they downvote you; or awkwardly demonstrates that you are less attractive/cool/etc. than the reader, so they downvote you; or gives the reader room to believe that you only see girls as a prize to be won, so they downvote you. There's really no winning this game.

comment by TAG · 2018-05-05T19:11:01.240Z · LW(p) · GW(p)

What does a solipsist get out of successful dating anyway?

As against solipsism it is to be said, in the first place, that it is psychologically impossible to believe, and is rejected in fact even by those who mean to accept it. I once received a letter from an eminent logician, Mrs. Christine Ladd-Franklin, saying that she was a solipsist, and was surprised that there were no others. Coming from a logician and a solipsist, her surprise surprised me.

-- Bertrand Russell, Human Knowledge

Note the Mrs!

comment by Viliam · 2018-05-12T01:14:34.994Z · LW(p) · GW(p)

Speaking for myself, it's mostly the fact that you decided to publish such articles daily. So it's not just "do I want this kind of article to be here?" but "do I want this kind of article, from the same author, to be here, every day?". And the answer is "hell no".

Like, the other factors contribute significantly too, but this one is a multiplier on all of them. It is easier to just ignore a one-off mistake than to ignore a precommitment to keep making them every day.

Or perhaps let me put it this way:

Thanks to various factors, mostly online advertising, we currently live in an era of abundance of text. Each day, more new text appears on the internet than any of us could read in a lifetime. Uhm, maybe I got my estimates wrong here; either way, the idea is that it is physically impossible to read everything, or even a significant fraction of everything. What we read is but a tiny fraction of what was written.

So, unless we are happy with taking a random sample (thanks, but no thanks), we need filters. Something that separates the good content from the bad content, and allows us to read the good stuff.

For me, Less Wrong is a filter for the really good stuff. It is far from perfect, but it is better than most other options I have. (The only better option I am aware of is reading only Slate Star Codex.) Without LW (and SSC), my best option would probably be a combination of Hacker News and some selected Reddit fora. But Less Wrong is, sometimes, better than that, mostly because it is better tuned to my interests.

And... you are polluting this filter. Not just once in a while, but each day. You have generated more than 10% of the headers on this website recently. Whoa, slow down please! How about, instead of posting one article a day, you just think of 7 ideas each week and then only post the best one?

Replies from: Viliam
comment by Viliam · 2018-05-12T01:49:49.488Z · LW(p) · GW(p)

More specific feedback about your articles:

  • The Multiple Names of Beneficial AI - sounds useful
  • Talking about AI Safety with Hikers - good point, but too long
  • Applied Coalition Formation - meh
  • Better Decisions at the Supermarket - ok
  • Beliefs: A Structural Change - meh
  • Finding a Girlfriend with Reinforcement Learning - extremely long
  • Are you Living in a Me-Simulation? - long and useless
  • Effective Egoism, or My Life in a Video Game - confused
  • A Logician, an Entrepreneur, and a Hacker, discussing Intelligence - feels like a summary without conclusion
  • Assembling Any Lego or NYT Best Seller? - meh
  • Decoding, Not Encoding - either confused or I failed to understand
  • Should an AGI build a telescope to spot intergalactic Segways? - interesting

So, twelve articles, one of them interesting, three or four have a good idea but are very long, and the rest feels useless. I typically do not downvote the "meh" articles, but that's under the assumption that they don't appear daily from the same author.

My aggregate feedback would be: You have some good points. But sometimes you just write a wall of text. And I suspect that the precommitment to post an article each day could be making this a lot worse. In a different situation, such as writing for an online magazine which wants to display a lot of ads, writing a lot of text with only a few ideas would be a good move; here it is a bad move.

Replies from: Elo, mtrazzi
comment by Elo · 2018-05-12T01:53:20.800Z · LW(p) · GW(p)

LessWrong needed more content at one point. Now it does not; it needs the content to be higher quality. It would be nice for users to know whether LW is looking for more content or strictly higher-quality content, if we can't have both.

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-05-12T09:32:15.652Z · LW(p) · GW(p)

I made some points about the high-quality/low-quality debate in my two replies to Viliam, but I will respond more specifically to this here.

The quality of a post is relative to the other posts. Yes, if the other articles are from Scott Alexander, ialdabaoth, sarahconstantin and Rob Bensinger, the quality of my daily posts is quite deplorable, and spamming the frontpage with low-quality posts is not what LW users want.

However, for the last few days, I have decided not to publish on the frontpage, and LW has even changed the website so that I can't publish on the frontpage. So posts go to my personal blog by default, and only reach the frontpage if the mods/LW users enjoy them and think they are insightful enough.

Are you saying that people might want high quality personal blogs then?

Well, I get why people might be interested in reading personal blogs, and want them to be of high quality. And, since you have had to correct some of my posts, I understand the frustration of seeing articles published when there is still a lot of work to do.

However, the LW algorithm is also partly responsible for this. Maybe it promotes recent posts too much, and should give more weight to upvoted ones. Then my posts would never be visible; only the posts with 20+ upvotes would show up on the personal blogs page.
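
To make that tradeoff concrete, here is a minimal sketch of a generic time-decayed ranking score in the Hacker-News style. It is purely illustrative: the function name, the gravity parameter, and the formula are assumptions, not the actual LessWrong algorithm.

```python
import math

def hot_score(upvotes: int, age_hours: float, gravity: float = 1.8) -> float:
    """Generic time-decayed ranking score (Hacker-News-style).
    Purely illustrative -- not the actual LessWrong algorithm.
    A larger `gravity` favors recent posts; a smaller one favors
    heavily upvoted posts."""
    return upvotes / math.pow(age_hours + 2, gravity)

# Tuning `gravity` is exactly the recency-vs-upvotes tradeoff discussed above.
print(hot_score(upvotes=20, age_hours=48))  # older, well-upvoted post
print(hot_score(upvotes=3, age_hours=2))    # fresh post with few votes
```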

I understand why people would prefer an article that took one week to write: short, concise, and particularly insightful. I might prefer that as well, and start posting only higher-quality posts here. But I don't agree that people should be discouraged from posting less-well-thought-out articles on a website where you are able to post personal blogs.

I think volume is not a problem if the upvote/downvote system and the algorithms are good enough to filter the useful posts for readers. People should not have to filter themselves, keeping to themselves articles that are not as enjoyable as Scott Alexander's but still insightful.

comment by Michaël Trazzi (mtrazzi) · 2018-05-12T08:48:25.192Z · LW(p) · GW(p)

Thank you Viliam for your honest feedback.

I think you're making some good points, but your comment ignores some aspects.

"do I want this kind of article, from the same author, to be here, every day?". And the answer is "hell no".

So what you're saying is: "Whenever deciding to upvote or downvote, I decide whether I want more articles like this or not. But because you're posting every day, when I am deciding whether or not to downvote, I am deciding whether I want an article like this every single day, and the answer to that is no."

I understand the difference in choice here (a choice about the whole series, instead of just one article). I assumed that on LW people would consider posts independently, and could downvote one post and upvote another from the same author, signaling what felt useful or not, even when the posts are daily. I understand that by saying "no" to one article, you want to say "no" to the series, and this is even more true if the ratio of good stuff is the one you mention at the end.

It is easier to just ignore a one-off mistake than to ignore a precommitment to keep doing them every day.

What would be the mistake here? From what I understand, when reading an article and seeing a mistake, the mistake is "multiplied" by the number of times it could happen again in other articles, so every tiny mistake becomes important? If I got you right, I think that by writing daily, those little mistakes (if they are easy to correct) could be fixed quickly by someone commenting on a post, and I would take that into account in the next posts. A short feedback loop could quickly improve the quality of the posts. However, I understand that people might not want LW to be an error-tolerant zone, but would prefer a performance zone.

And... you are polluting this filter. Not just once in a while, but each day. You have generated more than 10% of the headers on this website recently.

I had not thought about it in terms of the daily % of headers on the website; that's an interesting point of view. I also use Hacker News as a filter (for other interests), and LW is the better option for the interests I mentioned in my posts. I think the real difference is the volume of posts on Hacker News/Reddit/LW. There is always a tradeoff between being in a pool of hundreds of high-quality posts (more people reading, but more choices for them) and being in a pool of only a few dozen even-higher-quality posts with less traffic.

Replies from: Elo, mtrazzi
comment by Elo · 2018-05-12T09:13:03.323Z · LW(p) · GW(p)

There is a point to be made here about responsibility for the feedback. It takes a lot of time and energy to write good feedback.

Yes, people have some willingness to help with feedback, but it's not unlimited.

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-05-12T09:42:15.530Z · LW(p) · GW(p)

You're right. I appreciate the time and effort you put into giving feedback, especially in the Google Docs. I don't think I said it enough, and I haven't yet gotten around to answering your latest feedback (I will this weekend).

The question is: are people putting too much effort into giving feedback for only small improvements in the writing/posts? If so, it feels utterly inefficient to continue giving feedback or writing these daily posts.

I also believe that one can control the time spent giving feedback by saying only the most important thing (for instance, Ikaxas pointing out the bold/underline issue).

I am not sure whether this is enough to make daily LessWrong posts consistently better, and more importantly whether it is enough to make them valuable/useful for readers.

I am actively looking for a way to continue posting daily (on Medium or a personal website) while still getting good feedback without spamming the community. I could request quality feedback only once in a while (by posting at most once a week) and not ask for too much of your time (especially yours, Elo).

Thank you again for your time and effort, and for the feedback you gave in the Google Docs/comments.

comment by Michaël Trazzi (mtrazzi) · 2018-05-12T09:15:32.193Z · LW(p) · GW(p)
So, twelve articles, one of them interesting, three or four have a good idea but are very long, and the rest feels useless.

I appreciate that you took the time to read all of them (or enough to comment on them). I also feel some are better written than others, and I was more inspired for some. From what I understood, you want the articles to be "useful" and "not too long". I understand why you would want that (maximizing the (learned stuff)/(time spent learning) ratio). I used to write on Medium, where the read ratio of posts would decrease significantly with the length of the post. This pushed me to write shorter and shorter posts if I wanted to be read in full. I wanted to try LW because I imagined people here would have longer attention spans and could focus on philosophical/mathematical thinking. However, if you're saying I'm being "too long with very low density of ideas", I understand why this could be infuriating.

I typically do not downvote the "meh" articles, but that's under the assumption that they don't appear daily from the same author

I get your point, and it makes sense with what you said in the first comment. However, I don't feel comfortable with people downvoting "meh" articles because of the author (even though it's daily). I would prefer a website where people could rate articles independently of who the author is, and then check their other stuff.

My aggregate feedback would be: You have some good points. But sometimes you just write a wall of text.

Ok. So I should be more clear/concise/straight-to-the-point, gotcha.

And I suspect that the precommitment to post an article each day could be making this a lot worse. In a different situation, such as writing for an online magazine which wants to display a lot of ads, writing a lot of text with only a few ideas would be a good move; here it is a bad move.

Could you be more specific about what you think my move is? For the online magazine, maximizing clicks/views to display more ads makes sense, so lots of text with lots of ads, and just enough ideas to keep the reader seeing those ads.

But what about LW? My move here was simple: understand AI Safety better by forcing myself to crystallize ideas related to the field every day, on a website with great feedback/discussions and low tolerance for mistakes. For now, the result (in the discussions) is, overall, satisfying, and people here seem to enjoy AI Safety stuff.

More generally, I think the fact that I generate 10% of the headers, or that you end up clicking on all my articles, may be correlated with factors other than my daily posting, such as:

  • The LW algorithm promotes them
  • Your "Michaël Trazzi" filter (you need one, because you get to see my headers) is not tuned correctly, because you still seem to be reading my posts, even if only 1/12 felt useful (or maybe you just read them to comment on this post?).

This comment is already long (sorry for the wall of text), so I will say more about the meta LW high/low-quality debate under Elo's comment below.

Replies from: Viliam
comment by Viliam · 2018-05-22T21:11:19.888Z · LW(p) · GW(p)
However, I don't feel comfortable with people downvoting "meh" articles because of the author (even though it's daily).

The "meh" articles should be downvoted. Simply because LW is the place where I come to read something better.

However, it makes sense strategically to be more lenient towards new authors. The reasoning is that people are often scared to make their first post here, and that it takes some time to get attuned to the local culture. Thus there is a chance that a person who wrote a "meh" article first will write better articles later. On the other hand, having one's only article downvoted is probably emotionally harder than having, e.g., one of three articles downvoted. So if the new author's first article is "meh", instead of downvoting I often just abstain from voting and try to give some good advice in a comment.

While this may feel the same, the motivation is not punishing frequent authors (I would be quite happy with frequent high-quality articles), but rather making the first step into the community easier -- with the tacit assumption that the author will improve.

comment by tinyanon (aaron-teetor) · 2018-05-05T16:46:00.614Z · LW(p) · GW(p)
My post are badly-argued. For instance, the Effective Egoist one was very short and implicit, and did not give any precise/well-justified arguments.

I've only read two of your posts, but this is the thing I noticed. For Effective Egoist: the farther something is from the accepted set of priors of a community, the more justification it needs. The only way I could see that article changing my mind is if I already bought the original premise and just had to slightly shift my conclusion. Your arguments were each only a few sentences long, and you didn't address any of the obvious counterarguments in the original article, so nobody with different priors was going to come out of that article feeling changed.

I also read the dating one, and it didn't feel like it made any strong claims. It started off going in the same direction as the posts that go into the math behind "if you want to be married by X age, date Y people and then marry the next person better than the best person you've dated so far", but then it never actually gets to that point. It then goes on to say you've had poor experiences with online dating and meeting people through friends, so you did better going out to events. Which is a decent conclusion, and possibly worth a post telling people to stop wasting resources on things that aren't paying off for them... but it comes off more as an afterthought the way it's packed into the middle of a different section. If that's your conclusion, it should have its own pretty header and a bit more support.
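
(For reference, the rule alluded to above is the classic optimal-stopping strategy from the secretary problem. Below is a minimal simulation sketch; it is purely illustrative, not taken from the original post, and the function name and parameters are made up.)

```python
import random

def stop_rule_success_rate(n_candidates=20, n_lookahead=7, trials=100_000):
    """Estimate how often 'look at the first k candidates, then take the next
    one better than anything seen so far' picks the single best candidate.
    Candidates are modeled as a random permutation of distinct scores."""
    successes = 0
    for _ in range(trials):
        scores = random.sample(range(n_candidates), n_candidates)
        best_seen = max(scores[:n_lookahead])
        chosen = scores[-1]  # default: settle for the last candidate
        for s in scores[n_lookahead:]:
            if s > best_seen:
                chosen = s
                break
        successes += chosen == max(scores)
    return successes / trials

# With n_lookahead around n_candidates / e (roughly 37%), the success rate
# approaches the classic ~37% optimum of the secretary problem.
print(stop_rule_success_rate())
```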

Overall, it feels like your actual claims aren't supported by the rest of your writing and I don't feel like someone who doesn't already agree with you will walk away with a positive view of your writing.

Replies from: Elo
comment by Elo · 2018-05-05T17:19:20.415Z · LW(p) · GW(p)

This might be a structural problem, the kind that gets better with practice. Unfortunately for readers, recognizing structural problems, and being able to point them out and describe them, is also a skill that comes with practice.

comment by Davide_Zagami · 2018-05-05T20:11:43.988Z · LW(p) · GW(p)
I have only read a small fraction of Yudkowsky's sequences (I printed the 1800 pages two days ago and have only read about 50), so maybe I think I am discussing interesting stuff where in reality EY has already discussed it in length.

Mostly this. Other things too, but they are mostly caused by this one. I am one of the few who commented on one of your posts with links to some of his writings, exactly for this reason. While I'm guilty of not having given you any elaborate feedback and of downvoting that post, I still think you need to catch up with the basics. It's praiseworthy that you want to engage with rationality and with new ideas, but by doing it without becoming familiar with the canon first, you are not just (1) probably going to say something silly (because rationality is harder than you think), and (2) probably going to say something old (because a lot has been written), but also (3) wasting your own time.

comment by Charlie Steiner · 2018-05-05T18:06:51.192Z · LW(p) · GW(p)

In regards to your posts on AI safety, I have two opinions.

1: Maybe choose titles that allow the reader to figure out what they're getting into. I can't read everything, so I'd much rather read something whose title lets me infer it's about e.g. AI timelines. In general, I would like the point to be slightly more obvious throughout.

2: Don't stop posting, but slow down your posting. Eliezer cheated in four ways: he did it for more time per day than you can afford, he was often rehashing arguments he'd already put into text elsewhere, he rarely posted original technical work, and if he hadn't done a good job you wouldn't know about him (while you have no such anthropic selection). Your AI posts often raise questions but only scrape the surface of an answer - I would rather read fewer but deeper posts.

comment by drethelin · 2018-05-12T03:09:15.851Z · LW(p) · GW(p)

This is the only post of yours I've seen, and it's meta whining, and long at that. My guess is the rest of your posts take a long time to say stuff people don't want to read.

comment by Vaughn Papenhausen (Ikaxas) · 2018-05-06T06:56:16.385Z · LW(p) · GW(p)

I think the advice to primarily post to your personal blog is very good; this won't completely tank the visibility of your posts, since many people read the "community" feed, but the frontpage has a particular purpose that your posts maybe aren't fulfilling right now (though they might in the future once you've had more practice writing, and writing for this community in particular).

However, I wouldn't completely discourage you from writing about topics that the Sequences have covered before reading about those topics in the Sequences. (Sorry, I know that last sentence had a ton of negations in it, translation: if you want to write about a topic, but haven't read the relevant portions of the Sequences yet, I'd say still do it). There are several reasons for this:

1. If you have a view on something before reading Eliezer's thoughts on it, this can help you integrate Eliezer's views into your own, without doing so blindly. It's easier to learn something if you already have some related beliefs for it to latch onto (e.g., it's easier to learn about Japanese history if you already know something about, say, anime, because there will be certain things from anime that you'll be able to use as hooks for the new historical knowledge to latch onto).

2. If you write about something before seeing Eliezer's thoughts, you may have a fresh take that turns out to be correct (though more often you will write something, look at Eliezer's thoughts, and see that you fell into a trap that Eliezer already warned about. But that's okay I think, you still learned from it).

That is to say, you can write _unencumbered_ by Eliezer's work to some extent. It's easier to do an Original Seeing [LW · GW] if you haven't already read Eliezer's thoughts on some topic. It's good to dare to be wrong [LW · GW].

However, if you do this, I would advise you to either

1. keep those writings private [LW · GW], or

2. frame them as "I'm writing this before reading Eliezer's work on the topic, in preparation for reading said work" and perhaps write a follow-up post after reading Eliezer's relevant work. Even Eliezer did this while writing the sequences, e.g. with Gary Drescher's work; it's a well-respected technique in this community to write up your (preliminary) thoughts on some topic _before_ reading the relevant literature (though with the expectation that you'll probably update after reading said literature).

But yeah, do definitely read the Sequences sooner rather than later, and expect that what you write after reading them will be more relevant to this community than what you write before reading them.

I also want to echo ESRogs's kudos for getting feedback rather than giving up.

Also, as a datapoint, I found the Effective Egoism post somewhat off-putting at first. A lot of the stuff in that post could have used a lot more unpacking, and some of the phrasings felt clumsy or otherwise "off" (especially the "It will be the last" at the beginning). That, combined with the topic, fits with my model of the types of things this community tends to downvote. But thanks for engaging so well with my comment, and I'm glad it seems to have helped others understand the post better as well.

Anyway, good luck with your future writing, for this site and elsewhere!

comment by ChristianKl · 2018-05-06T16:44:36.293Z · LW(p) · GW(p)

As a general principle: When in Rome, do as the Romans do.

You'll find that most people on this website don't post every day. That suggests that, when you come to this website as a newcomer, posting every day might not be the best idea.

It's a good idea to wait a week after you write a draft and then come back and ask yourself what can be improved about the post before actually publishing it. Ask yourself whether you provide evidence for the claims you are making.

Additionally, don't post to the frontpage. If the mods consider your posts to belong to the frontpage, they will move them.

When it comes to the "Finding a Girlfriend with Reinforcement Learning" post, there are multiple things wrong with it. The first problem is the timing. Given the recent controversy stirred up by Hanson, this is not a good time to have that discussion.

The second problem is that it's bad advice. Good dating advice generally contains at least some sections about how to level up the parts of you that are valuable to potential partners or that help you with the mating process.

comment by Matt Goldenberg (mr-hire) · 2018-05-06T22:13:40.553Z · LW(p) · GW(p)

I looked over your posts and I like them. If the question in your title were personally directed at me, my answer would be no.