Posts

2020 predictions 2020-05-01T20:11:04.423Z · score: 12 (5 votes)
COVID-19 growth rates vs interventions 2020-03-27T21:33:25.851Z · score: 29 (12 votes)
[UPDATED] COVID-19 cabin secondary attack rates on Diamond Princess 2020-03-18T22:36:06.099Z · score: 51 (13 votes)
Why such low detected rates of COVID-19 in children? 2020-03-16T16:52:02.508Z · score: 17 (3 votes)
Growth rate of COVID-19 outbreaks 2020-03-09T23:16:51.275Z · score: 73 (28 votes)
Quadratic models and (un)falsified data 2020-03-08T23:34:58.128Z · score: 31 (10 votes)
Bucky's Shortform 2020-03-08T00:08:23.193Z · score: 6 (1 votes)
Rugby & Regression Towards the Mean 2019-10-30T16:36:00.287Z · score: 16 (4 votes)
Age gaps and Birth order: Reanalysis 2019-09-07T19:33:16.174Z · score: 49 (10 votes)
Age gaps and Birth order: Failed reproduction of results 2019-09-07T19:22:55.068Z · score: 66 (17 votes)
What are principled ways for penalising complexity in practice? 2019-06-27T07:28:16.850Z · score: 42 (11 votes)
How is Solomonoff induction calculated in practice? 2019-06-04T10:11:37.310Z · score: 35 (7 votes)
Book review: My Hidden Chimp 2019-03-04T09:55:32.362Z · score: 31 (13 votes)
Who wants to be a Millionaire? 2019-02-01T14:02:52.794Z · score: 29 (16 votes)
Experiences of Self-deception 2018-12-18T11:10:26.965Z · score: 16 (5 votes)
Status model 2018-11-26T15:05:12.105Z · score: 29 (10 votes)
Bayes Questions 2018-11-07T16:54:38.800Z · score: 22 (4 votes)
Good Samaritans in experiments 2018-10-30T23:34:27.153Z · score: 134 (56 votes)
In praise of heuristics 2018-10-24T15:44:47.771Z · score: 44 (14 votes)
The tails coming apart as a strategy for success 2018-10-01T15:18:50.228Z · score: 33 (17 votes)
Defining by opposites 2018-09-18T09:26:38.579Z · score: 19 (10 votes)
Birth order effect found in Nobel Laureates in Physics 2018-09-04T12:17:53.269Z · score: 62 (20 votes)

Comments

Comment by bucky on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes · 2020-09-15T10:18:21.434Z · score: 4 (2 votes) · LW · GW

I had the same confusion.

One of the key differences I think between the 3 games is whether communication beforehand helps (in single-shot games).

In PD communication doesn't really help much as there is little reason to trust what the other person says.

In SH communication should be able to solve your problem as S-S is optimal for both players.

In BotS communication which results in agreement can at least be trusted as co-ordinating is optimal for both players. Choosing which option to co-ordinate on is another matter.

(assuming you've included the pleasure of spiting the other person etc. in the payoff matrix)
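To make the communication point concrete, here's a quick sketch (using standard textbook payoffs, not anything from the original post) checking whether a symmetric coordinated outcome is ever the best a player can do - the property that makes pre-game agreement trustworthy in SH but not PD:

```python
# Illustrative payoff matrices (standard textbook values, row player's payoff):
# each maps (my_move, their_move) -> my payoff.
PD = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
STAG_HUNT = {("S", "S"): 4, ("S", "H"): 0, ("H", "S"): 3, ("H", "H"): 3}

def best_outcome_is_symmetric(game):
    """True if some coordinated (symmetric) outcome gives the row player
    their maximum possible payoff - i.e. an agreement to play it can be
    trusted, since deviating can't possibly pay better."""
    best = max(game.values())
    moves = {m for m, _ in game}
    return any(game[(m, m)] == best for m in moves)

print(best_outcome_is_symmetric(PD))         # False: defecting against a cooperator pays more
print(best_outcome_is_symmetric(STAG_HUNT))  # True: S-S is optimal for both
```

(BotS doesn't fit this check directly since its two coordination outcomes are ranked differently by each player - hence agreement can be trusted but choosing which outcome to agree on is the hard part.)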

Comment by bucky on ‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) · 2020-09-15T09:26:46.341Z · score: 2 (1 votes) · LW · GW

Maybe I need to be more heterogeneous in my hiring! 

I hadn't heard of pair-writing but it sounds like it could work well in my context.

Comment by bucky on ‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) · 2020-09-15T09:21:16.916Z · score: 4 (2 votes) · LW · GW

My intended point was that if one person has an ugh-field around something then it is often a generally unenjoyable task. Although other people don't have ugh-fields around the task, it still seems unfair (and would likely lead to bad team dynamics) to reassign it to someone else who merely dislikes the task.

Comment by bucky on ‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) · 2020-09-14T11:34:33.962Z · score: 5 (3 votes) · LW · GW

My experience would be that people generally have Ugh fields around tasks which no-one on the team likes (e.g. report writing). I can't reassign such tasks without being unfair to the people who are dealing well with such jobs.

I would agree that mentioning to a manager that you're finding something aversive is basically fine as long as you're more looking for support than reassignment (although this might be different in different fields) and that a manager should encourage that. 

As an example one employee found that people constantly interrupted him and this made getting into the flow of report writing super hard so we blocked off a day a week to allow him to catch up without interruptions.

I guess to some extent it's knowing what's possible in your context and knowing how flexible your manager is able/willing to be.

Comment by bucky on Covid 8/27: The Fall of the CDC · 2020-08-28T20:41:37.409Z · score: 6 (3 votes) · LW · GW

11.9% vs 8.7% is early plasma administration (0-3 days from diagnosis) vs late (4+ days).

13.7% is using low antibody count plasma, 8.9% is using high antibody count plasma. I guess this is the 35% reduction.

paper

Comment by bucky on Are We Right about How Effective Mockery Is? · 2020-08-27T21:05:10.057Z · score: 4 (2 votes) · LW · GW

All Debates are Bravery Debates?

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-26T23:31:35.948Z · score: 2 (1 votes) · LW · GW

I do think that it would be very bad if this happened. However I don't think this is likely. Quoting my other comment:

I think it's important to note here that we are not really that homogeneous in our opinions and weightings of different sources of value. A lot of the worries about Blind voting seem to assume that we're all going to vote the same way about the same posts which I think is highly unrealistic. There also seems to be the assumption that everything fractionally above 0 value will get an upvote which again seems unrealistic.

This seems even more true for downvotes - I think people realise that downvotes feel extra bad and only use them sparingly. For instance, I only really downvote when I think something has been a definite breaking of a conversational norm or if someone is doubling down on an argument which has been convincingly refuted.

I think a spread of opinions on what constitutes a downvote (and a general feeling that comments get fewer votes overall) would make the -80 only happen to super egregiously bad comments.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-26T23:23:36.457Z · score: 2 (1 votes) · LW · GW

When a post is at 50, I can think that is a bit too high just from my general sense of what I want to see more of on the site. And it'd be throwing away information about my own beliefs to not give me the fine-gradation of "I want to see these posts on the site about as often as I would if they got 50 karma, not the amount that I would if they got 200 karma."

This is true when the equilibrium position of the karma system is set to Total Karma Voting. 

 

I think that Blind voting would move the karma system to a new equilibrium. I'm not convinced we should do so as I think it would be a fairly unstable equilibrium but I think it would work if everyone did it and would allow for fine grained expressions of your belief.

The equilibrium I envisage would be that the current amount of something that LW has is taken into account when people blind vote their opinion.

As an example, I think the reason that joke comments can get fairly high karma is that they're rare. If more people start writing joke comments as a result then that's fine for as long as people are upvoting.

At some point the people who value the jokes least stop upvoting them or start downvoting them. This continues until the reward experienced by the jokers roughly matches the effort taken or some other balancing factor.

In the case of low positive value posts, some people have a higher threshold for what they will give an upvote for and the more low positive value posts there are the fewer people will upvote them.

(I think it's important to note here that we are not really that homogeneous in our opinions and weightings of different sources of value. A lot of the worries about Blind voting seem to assume that we're all going to vote the same way about the same posts which I think is highly unrealistic. There also seems to be the assumption that everything fractionally above 0 value will get an upvote which again seems unrealistic. Frankly I think that anyone who can write a post good enough to persuade 100 different people with different standards to click the upvote button deserves to get 150 karma!)

The key then is that in order to get an oversized reward for the amount of effort put in, you have to do better than average at providing value.

In Blind Voting, accounting-for-how-much-of-a-certain-thing-there-currently-is-on-LW is doing the same thing as considering-what-message-the-total-karma-sends does with Total Karma Voting. The former seems to have a lag in the message getting out but I think when you're in a rough equilibrium the lag is relatively short.

 

So this brings me onto what I think the main cost of Total karma voting is. If an author looks at a post which has 25 karma from 10 votes, what does it mean? Roughly speaking it means that it was considered about as valuable as another 25 karma post. The 10 votes tells the author how efficient the karma market was for the post and possibly gives limited information on how varied the opinions were.

With Blind voting the author sees that and knows that 10 people had an opinion that this post was wanted more or less and that their average strength of opinion was 2.5 karma points in favour. This probably consists of something like 3 people who want a lot more like it and 7 people who want a little more like it (or possibly some who wish there was less like it or were just yay/booing).

I agree that karma is a kludge and the true meaning isn't necessarily clear but with Blind voting it seems importantly less of a kludge and some extra information can be extracted.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-26T21:39:11.163Z · score: 2 (1 votes) · LW · GW

I want to note that I see the "vote towards the ideal karma" as completely compatible with "vote your belief."

Agreed. I was looking for a shorthand way of referring to the different voting policies but am yet to find one which is satisfactory - you’ve (rightly) shot down a couple of my ideas! Total Karma voting seems fine for one policy, maybe direct opinion voting for the other? If you shoot that one down too you can come up with your own!

Comment by bucky on On Suddenly Not Being Able to Work · 2020-08-26T19:55:27.855Z · score: 4 (2 votes) · LW · GW

The paper does attempt to adjust for this with a complexity metric although I suspect this doesn’t work perfectly as it seems to be a linear adjustment with number of nodes used by the engine to calculate the optimal move.

I have a concern that the paper is comparing tournament play (offline) to match play with 4 games per match (online). In match play, especially with few games, a player who is behind needs to force the game and the player in the lead can play more conservatively. Tournaments have their own incentives but overall I would expect short match play to cause bigger errors from engine optimal play as losing players try to force a win in naturally drawing situations.

The calculated effect size is >200 Elo points which suggests to me that something is amiss.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-26T13:13:22.210Z · score: 5 (3 votes) · LW · GW

Ok, I think I actually agree with your crux.

The points I was trying to make were (kinda scattered across the comments here!):

1. It is advantageous if people have a shared understanding of the system

2. Voting your own belief actually should work pretty well

3. There is a written norm in favour of voting your own belief

I think we disagree on all 3 to some extent, at least in how important they are. I think if we lose the disagreement on number 3 then disagreements on 1&2 are less important.

I'm ok with a norm of voting based somewhat on target karma (making it an overly strong effect would, I think, be detrimental), especially as this is now common knowledge and seems to be most people's preference. 

This whole thing has resolved some of my confusion as to why karma scores end up the way they do.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-25T22:22:48.626Z · score: 2 (1 votes) · LW · GW

Now I feel less stupid for not getting it - at least I included all of the different parts of the recipe! Very impressive content density comment.

I have a close-to-deontological belief in the need to obey the rules of a community that's trying to create things together (even when the rules seem wrong) and I think I tend to interpret things in that frame (for or against) even if that isn't the intention. In the immortal words of Scott Alexander:

No! I am Exception Nazi! NO EXCEPTION FOR YOU!

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-25T21:52:06.277Z · score: 4 (2 votes) · LW · GW

I think the asshole filter is a good point and to be honest it's possibly enough to get me to change my mind about this subject. There should be some mitigation in the karma weighting system but even long term members might be assholes.

Does it prove too much? Should the current karma, then, actually be the main consideration in deciding how to vote? Few people on the site seem willing to bite that bullet. Should I almost always use my strong votes, since if I don't then an asshole might and thereby have an oversized effect?

Count me confused.

 

On the other points I won't go through point by point. The main thing I think is that what you're describing is in conflict with the explicit phrasing of the reasons for voting. Compare:

What should my votes mean?

We encourage people to vote such that upvote means “I want to see more of this” and downvote means “I want to see less of this.”

with

What should karma indicate?

The karma on a post is intended to indicate whether, and by how much, members of the site would like to see more of the posts/comments in question. We encourage people to vote accordingly.

The former is the FAQ but I think the latter is what you're describing. If this is the case then I think this ends up being an asshole filter in itself and the phrasing in the FAQ should be corrected.

(I realise as Ray qua user this has nothing to do with you but if you can pass this along to Ray qua admin that would be great!)

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-25T18:19:35.357Z · score: 3 (2 votes) · LW · GW

I like that framing in the first paragraph.

In the second paragraph I can’t work out if the question is intended rhetorically, ironically or genuinely!

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-25T14:24:03.758Z · score: 4 (2 votes) · LW · GW

IMO you are advocating for basically switching to a new voting system, not "properly" implementing the current one.

Compare to the LW FAQ:

What should my votes mean?

We encourage people to vote such that upvote means “I want to see more of this” and downvote means “I want to see less of this.”

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-25T11:40:19.249Z · score: 2 (1 votes) · LW · GW

Downvote a different post of the same author because I didn't like that one? That doesn't sound like a good idea.

No, I mean why wouldn't you downvote a hypothetical post that you are agnostic about? 

Imagine there are two posts, both have 50 karma.

You read one and feel confident that it is net positive but that 50 is too high.

You read the other and it is not net positive for you - you just have a meh reaction to it.

It seems very odd to me that one would downvote the former but not the latter. The net effect is to encourage people to read/write a post that is more likely to provide a meh reaction than be net positive.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-25T10:46:03.124Z · score: 2 (1 votes) · LW · GW

The fundamental problem is that we're trying to map a multidimensional thing into a single dimension. Whenever you do this you end up throwing out some information and you have to do the best you can. 

As described by jimmy, with the "I want to see more/less of this" rule you lose some information on magnitude of like/dislike. This is somewhat mitigated by having weak and strong votes, plus the dither factor jimmy describes (which I think for me is quite significant) so overall I'm not hugely worried about this. 

(You can also get some of this information back if you're really interested by comparing total number of votes to score although this is less obvious)

 

I'm not sure how a "how much total karma should this post have" rule even works in practice but a couple of options:

How much karma a post has needs to link to post value / correct amount of reward to the author.

If I judge this according to how much value I personally got out of it then the great-great grandparent comment applies and 50% awesome, 50% meh posts get 0 karma - a worse result than with the "I want to see more/less of this" rule, with all of the information from the 50% of people who found it awesome disappearing.

If instead I am trying to judge how much value I think the average LWer would get out of it then I think this gets really hard to assess. As an example, the recent 10 fun questions results showed that people weren't very good at guessing whether others believed the Civilisational Inadequacy thesis more or less than they themselves did. Here you lose some information on people's actual opinions in favour of information on what other people think their opinion might be, adding significant noise to the result.

Whichever option you choose you probably end up throwing out information on how many people got value from the post. You can try to get around this by each person estimating how many others would find it useful but I think this just adds more noise to the result.

 

You can try to make the rule some combination of rules (as it seems most people do) but then to me it seems like interpreting karma scores becomes really difficult. We also run into the problem of how much weighting to give to each sub-rule and if people give different weightings then you get a discrepancy in how effective each person's opinion is.

 

I'm interested if someone can explain another way that a "how much total karma should this post have" rule would work in practice which doesn't run into such problems.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-24T21:50:25.304Z · score: 2 (1 votes) · LW · GW

Hmm, interesting - I'm now slightly confused by:

I recently strong-downvoted a post that I would have weak-upvoted if it had been at a lower karma

Was that post good or bad? It sounded to me like you thought the post had value, just not as much as was currently showing. If you downvoted a post you thought had positive value (you were confident that its current karma value was too high?), why not downvote one that you don't see any value in?

If being agnostic is a cause for not voting at all, a 50% great, 50% agnostic post would get a higher score than a 50% great, 50% slightly good post as the slightly good experiences would downvote and the agnostics wouldn't.

I think my main concern with the "vote to try to give posts/comments the total karma they should have" rule is that I can't see a way to operationalise it which doesn't suffer from worse problems than the simple "I want to see more/less of this" rule.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-24T21:02:26.575Z · score: 3 (2 votes) · LW · GW

Further, I'm not sure having a voting condition of "vote to try to bring the karma to the value you think it should be" helps in this situation. If 50% of people didn't get any value from a post/comment then they would be trying to vote the karma down to 0. So a "50% earth shattering, 50% meh" post would end up with ~0 karma.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-24T20:10:42.830Z · score: 10 (6 votes) · LW · GW

It is important that everyone use a similar condition for voting.

Inasmuch as voting has a defined meaning understood by the community ("I want to see more/less of this"), using it to mean something else is a Simulacrum level 2 action which starts to distort the shared map.

If we want to change the meaning of karma to be self-referential then I guess that might work but it would require this being agreed by the community as the new meaning. Doing so unilaterally on an individual basis increases the effectiveness of one's own opinion at the expense of others' opinions.

Comment by bucky on Do you vote based on what you think total karma should be? · 2020-08-24T19:47:55.971Z · score: 6 (2 votes) · LW · GW

This is an interesting point that I hadn't thought of.

Without that, a post that is unanimously barely worth upvoting will get an absurd amount of upvotes while another post which is recognized as earth shatteringly important by 50% will fail to stand out. 

I think this oversells the problem somewhat.

First a technicality - strong votes are, at least for active members, stronger than a weak vote.

Second, if a post is earth shatteringly important to some then it is likely to be net positive to many others so would also receive a large number of weak upvotes.

So a more realistic scenario would be:

100% weakly upvoted post

is similar to

20% strongly upvoted, 30% weakly upvoted.

These would clearly both be very high scoring posts so would certainly stand out from the crowd, exactly as they should. It doesn't seem obvious to me that the former should stand out significantly less (or be rewarded significantly less) than the latter.

Comment by bucky on Status for status sake is a fact of political life · 2020-08-19T11:30:01.855Z · score: 8 (4 votes) · LW · GW

Sociometer theory suggests that people don't optimise for status directly but optimise for high self-esteem which is often correlated with high status (see That other kind of status). 

I think this makes sense in the context of the examples you give here.

Comment by bucky on Survey Results: 10 Fun Questions for LWers · 2020-08-19T11:15:11.158Z · score: 12 (6 votes) · LW · GW

There was a strong correlation between what people believed about CivIn and what people believed about the community, a correlation of 0.62. It's basically a measure of the strength of the typical mind fallacy around here.

I don’t think that this necessarily represents typical minding so much as correct interpretation of limited evidence.

I have very good evidence of what I believe. I have limited evidence of what other people believe. I know that they may be different from me but I don’t necessarily know whether they will believe more or less. So using my own level of belief as a guide seems correct rather than representing that I believe that everyone will believe the same.

If I had been asked to give a distribution guess for the community belief level and it had a very sharp peak at my own belief level then that would better represent the typical mind fallacy. (The Bayesian truth serum papers used distributions for the community prediction.)

(Of note: of the 73 respondents who had different own and community guesses, only 57% managed to guess a more popular value of actual community belief than if they had just used their own belief level as a prediction)

Comment by bucky on Updating My LW Commenting Policy · 2020-08-18T22:00:20.406Z · score: 2 (1 votes) · LW · GW

I think people routinely are judged for not replying.

Can you share what makes you think that? More recently I’ve been moving towards the no commitment model and interpreting other people in the same way.

Comment by bucky on 10 Fun Questions for LessWrongers · 2020-08-18T14:00:15.687Z · score: 3 (2 votes) · LW · GW

This could explain why the distribution is so far from 25% each.

I think the distribution range is roughly what you would expect by chance given the number of responses (when I did it: 65 responses total, range of results 18.5% - 30.8%)
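A quick Monte Carlo sketch of that claim (illustrative only - assumes 65 respondents each choosing one of 4 options uniformly at random, which is roughly the null model here):

```python
import random

def spread(n=65, k=4, trials=10_000, seed=0):
    """If n respondents each pick one of k options uniformly at random,
    how far do the observed proportions typically stray from 1/k?
    Returns the average (lowest, highest) proportion across trials."""
    rng = random.Random(seed)
    lo_sum = hi_sum = 0.0
    for _ in range(trials):
        counts = [0] * k
        for _ in range(n):
            counts[rng.randrange(k)] += 1
        lo_sum += min(counts) / n
        hi_sum += max(counts) / n
    return lo_sum / trials, hi_sum / trials

print(spread())  # a typical min-max spread comparable to the observed 18.5% - 30.8%
```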

Comment by bucky on Does crime explain the exceptional US incarceration rate? · 2020-08-17T09:49:15.608Z · score: 7 (4 votes) · LW · GW

Initially I thought this wouldn't have much of an effect. However a brief check suggests that only ~1/3 of gunshot victims actually die in the US (study) so there's plenty of scope for healthcare to be making a significant difference. 

Comment by bucky on Tagging Open Call / Discussion Thread · 2020-08-04T16:18:38.324Z · score: 11 (4 votes) · LW · GW

Are there any thoughts on external links for tag wiki pages? I was looking at the social status tag, for example, and there are a few overcomingbias / ribbonfarm posts which I think would be useful, but whether / how best to incorporate them isn't clear to me.

Comment by bucky on Become a person who Actually Does Things · 2020-07-26T20:17:07.370Z · score: 6 (4 votes) · LW · GW

I think this is a very difficult skill to teach people (I say this as someone who had to learn it myself). As a result this is a very heavily weighted part of my hiring process.

One thing I’ve found is that people who are naturally good at this almost never realise that they’re doing anything unusual - it doesn’t occur to them that other people don’t do it.

Comment by bucky on Jam is obsolete · 2020-07-26T19:59:54.057Z · score: 7 (4 votes) · LW · GW

I feel like you’ve buried the lead here - you mean I can have Nutella and frozen fruit for the same(ish) sugar content as jam?

Num num

Comment by bucky on [Meta] anonymous merit or public status · 2020-07-21T18:53:05.874Z · score: 5 (3 votes) · LW · GW

Not sure if you’re aware of this but there is an anti-kibitzer mode which hides author names and karma. I haven’t used it but have used the greaterwrong.com version (eye icon) which doesn’t require any installation.

I use/don’t use it for more or less the reasons you describe - I use it when I think I’m likely to be reading lots of posts for a week or so and filtering for time constraints is less important.

Not sure whether it would be possible to compare votes with and without the feature on and whether there would be any concerns about accidentally deanonymising the votes if only a few people are using it.

Comment by bucky on "Can you keep this confidential? How do you know?" · 2020-07-21T08:21:48.521Z · score: 5 (3 votes) · LW · GW

One thing to consider here is legal duty to disclose certain information and that should form a part of such meta discussions.

Comment by bucky on Criticism of some popular LW articles · 2020-07-19T22:26:15.596Z · score: 6 (3 votes) · LW · GW

I think I didn’t get the fanfic analogy at first. Could I summarise it as “Lesswrong is to scholarship as serious fanfic is to original novels”?

I know the LW team have spoken about a level above curated which would be intended to be more on the level of scholarship. I think the 2018 review was designed to serve this purpose so we should hope that these posts in particular don’t contain any glaring errors!

I think it’s super valuable to be able to put imperfect ideas out there (I see one of the 2018 review top posts was Babble!) but thinking about this has really emphasised to me how useful epistemic statuses are.

To your second paragraph - yes, definitely this! When I do this it definitely gets me out of the habit of passive reading.

Comment by bucky on Telling more rational stories · 2020-07-19T20:41:26.880Z · score: 2 (1 votes) · LW · GW

I too would like to see that sequence. As a start, is there a list of posts that have been removed from R:A-Z?

Comment by bucky on Criticism of some popular LW articles · 2020-07-19T07:14:08.978Z · score: 25 (15 votes) · LW · GW

I think when assessing lesswrong it is important to think of posts and their comments as a single entity. Many of the objections to the posts that you mention are also brought up in the comment sections, often themselves highly upvoted (in the first example the comment has 1 more karma than the post).

If you take upvotes to mean you are glad something was posted then I don’t think it is inconsistent to upvote something you think contains an error. Therefore high karma alone shouldn’t be enough to consider something to have been considered correct by the LW community, just that it has some value. If there are also no/only minor critical comments then I think that is much stronger evidence.

I don’t think this completely exonerates LW but I think it means the picture is not quite as bleak as it would first appear.  

(Edited to add: As an example I upvoted this post despite this comment as I think this is an important kind of thing to look at and agree that my frame of mind can be important in how I read content)

Comment by bucky on The New Scientific Method · 2020-07-17T23:12:15.500Z · score: 3 (2 votes) · LW · GW

I feel like the point you should be making is that there are some pitfalls to Bayesian methods which need more attention. Instead the piece comes across as a sarcastic denouncement of Bayesian methods in general as insufficiently scientific. 

In general Bayesian methods have proved to be incredibly powerful tools and I worry you’re throwing out the baby with the bath water.

Comment by bucky on The New Scientific Method · 2020-07-17T22:14:19.594Z · score: 2 (1 votes) · LW · GW

Again avoiding the specifics I’ll counter that I’ve met frequentist statisticians who don’t understand the dangers of multiple hypothesis testing.

Comment by bucky on Anthropomorphizing Humans · 2020-07-17T20:56:14.561Z · score: 9 (5 votes) · LW · GW

Realising that my anger/grumpiness was caused almost exclusively by me being tired rather than the thing I thought was annoying me was one of my formative moments.

Pointing this kind of thing out to people at the time is rarely helpful but, depending on the person, mentioning it later on can give big mutual wins.

Comment by bucky on The New Scientific Method · 2020-07-17T20:12:43.685Z · score: 1 (2 votes) · LW · GW

I haven't looked in much depth at your specific analyses but I think trying to present these problems (assuming there are actually problems) as machine learning issues is misguided. 

As an example here is an analysis of a research paper I did a couple of years ago. Similarly to your claims about the papers you look at, it is a widely cited paper whose claims are not backed up by the data due to too much flexibility with the parameters used. The paper didn't use any Bayesian analysis, it just tested multiple hypotheses to such an extent that it got significant results by chance. Sloppy research is not a new phenomenon and definitely doesn't require Bayesian analysis.

If your assertion is that machine learning users are unaware of the dangers of overfitting then I suggest looking at a few online training courses on machine learning where you will find they go on and on and on about overfitting almost to the point of self-parody.

The complaints about using priors in Bayesian statistics are well trodden ground and I think it would be instructive to read up on implicit priors of Frequentist statistics. 

I think lumping in data falsification in with machine learning is particularly disingenuous - liars gonna lie, the method isn’t particularly relevant.

So basically I don’t think you’ve identified faults which are specific to machine learning or given evidence that the errors are more prevalent than when using alternative methods (even with the assumption that your specific analyses are correct).

Comment by bucky on Mazes and Duality · 2020-07-15T16:19:26.075Z · score: 8 (2 votes) · LW · GW

At some point, I'm probably going to write up a post specifically on that topic. But one of the best examples I've thought of recently is putting a satellite into orbit without using a rocket. And I used that as an exercise in this specifically.

I’d really love to see this post and think examples would really help with me applying this to more complex problems.

Comment by bucky on Update more slowly! · 2020-07-13T10:43:50.855Z · score: 21 (6 votes) · LW · GW

An error that I’ve made more than once is to go through my existing hypotheses' likelihoods for the new evidence and update accordingly while managing not to realise that the likelihoods are low for all of the hypotheses, suggesting the true answer might be outside of my hypothesis space.
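A minimal sketch of that failure mode (illustrative numbers of my own, not from any real analysis): the posteriors normalise and look perfectly sensible, while the low total probability of the evidence is the clue that the true hypothesis may lie outside the considered space.

```python
priors = {"H1": 0.6, "H2": 0.4}
likelihoods = {"H1": 0.02, "H2": 0.01}  # P(evidence | H): low for every hypothesis

# Total probability of the evidence under the current hypothesis space
p_evidence = sum(priors[h] * likelihoods[h] for h in priors)
# Normalised posteriors - these always sum to 1, whatever p_evidence is
posteriors = {h: priors[h] * likelihoods[h] / p_evidence for h in priors}

print(posteriors)   # ≈ {'H1': 0.75, 'H2': 0.25} - looks fine after normalising
print(p_evidence)   # ≈ 0.016 - but the evidence was unlikely under *every* hypothesis
```

Checking p_evidence before normalising is the cheap guard: if it is tiny, the tidy-looking posteriors are hiding a model-misspecification problem.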

Comment by bucky on Kelly Bet on Everything · 2020-07-11T22:11:43.330Z · score: 2 (1 votes) · LW · GW

The description of the Kelly Criterion here seems like it is for the specific case where the house odds (as it were) are 1:1?
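For reference, the textbook Kelly formula handles arbitrary b-to-1 odds; a minimal sketch (standard result, not taken from the post being commented on):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to bet: p = win probability, b = net odds (b-to-1 payout)."""
    return (b * p - (1.0 - p)) / b

# At even (1:1) odds this reduces to the familiar "2p - 1" form:
print(round(kelly_fraction(0.6, 1.0), 10))  # 0.2, i.e. bet 20% of bankroll
```

With b = 1 the formula is (p - (1 - p))/1 = 2p - 1, which is presumably the special case the post describes.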

Comment by bucky on DontDoxScottAlexander.com - A Petition · 2020-06-29T15:55:58.608Z · score: 4 (2 votes) · LW · GW

The Daily Beast article has some information about how other NYTimes employees are against de-anonymising Scott.

Comment by bucky on DontDoxScottAlexander.com - A Petition · 2020-06-28T14:57:06.981Z · score: 4 (2 votes) · LW · GW

Yes, I’m meaning something along the lines of the actions suggested in the original comment but am doing a rubbish job at explaining this properly. Violence in particular was a poor choice of words and I have changed it to force in the grandparent comment.

All I was really wanting to say was that escalation isn’t the only solution and is usually a bad idea.

Comment by bucky on Missing dog reasoning · 2020-06-27T21:33:22.605Z · score: 8 (4 votes) · LW · GW

One example I’ve experienced is reading scientific papers. I have had the experience where I think “why haven't they presented this sub-result in this intuitive way?”. Sometimes this is just incompetence but at other times it leads me to find that the particular result in question goes against the hypothesis of the paper and that result is included only in the footnotes/supplemental material.

Comment by bucky on DontDoxScottAlexander.com - A Petition · 2020-06-27T21:07:24.069Z · score: 6 (3 votes) · LW · GW

The claim I was arguing against was that there’s no point trying to petition because force is the only solution - an idea which is covered in some depth in that piece. Currently there is a clash of norms but no force has been used. My feelings will change somewhat if they do publish.

Comment by bucky on DontDoxScottAlexander.com - A Petition · 2020-06-25T19:46:01.198Z · score: 3 (4 votes) · LW · GW

I'm In Favor of Niceness, Community and Civilization.

Comment by bucky on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-25T19:27:56.488Z · score: 3 (2 votes) · LW · GW

The dispute here, then, is whether doxing is a concept like murder[1] (with intent built into the definition) or homicide (which is defined solely by the nature of the act and its consequences).

I feel like we're still talking past each other a bit here. I don't dispute that doxxing can mean any revealing of information about someone, it could be used even when no foreseeable damage is implied and someone just wanted to remain private. The strict definition is not the question.

The non-central fallacy is when a negative affect word is used to describe something where the word is technically true but the actual thing should not have that negative affect associated with it. Martin Luther King fits the definition of a criminal but the negative affect of the word criminal (the reasons why crimes are bad) shouldn't apply to him.

The problem I have using "dox" here is that some portion of the word's negative affect doesn't (or at least might not) apply in this case. An alternative phrasing would be "reveal Scott's true identity" or, to be snappier, "unmask Scott" which are more neutral. dontdoxscottalexander.com's title is Don't De-Anonymize Scott Alexander which I think is better than my ideas.

Comment by bucky on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-25T15:58:39.189Z · score: 2 (1 votes) · LW · GW

I think it's hard to argue that a central example of doxxing doesn't involve intent to cause harm. The central example in most people's minds, I think, would be something like the hit list of abortion providers or Anonymous. Wikipedia has a list of examples of doxxing - a rough count suggests ~13/15 involve providing information about someone ideologically opposed to the doxxer (confirming intent is more difficult). The non-centrality here isn't as extreme as it is in, say, "Martin Luther King was a criminal" but it is there.

On the relevance of the distinction, yes, I do think it is important. I would support different responses to the NYT depending on whether I thought they were acting out of a desire to endanger/silence Scott or were following a journalistic norm in a way I considered wrong.

Comment by bucky on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-24T20:44:03.881Z · score: 10 (6 votes) · LW · GW

Is dox the right word here? I guess this fits inside the definition but it feels kinda non-central to me. A typical example would include some intent to do harm; publishing a name because you consider a different principle more important feels importantly different. 

Not that this is much consolation to Scott, and I think the NYT is wrong to reveal Scott's identity (and have written in to say this); I just think doxxing is the wrong way to describe it.

Comment by bucky on Bathing Machines and the Lindy Effect · 2020-06-18T07:50:25.922Z · score: 3 (2 votes) · LW · GW

You can generalise this for other required accuracies. If instead of 25% we use "a" then the optimal guess is (1+a) times the current life, which is correct 2a/(1+a) of the time.

If we use an alternative optimisation criterion where we compare any two prediction methods and see, over the life of the bathing machine, which is closer to the correct answer most often, then 200% (i.e. the halfway rule) is best.

So which rule of thumb you use depends on what you're looking to achieve - a guess which will be fairly good for as much of the lifetime as possible or a guess which is better for most of the lifetime, even if sometimes it's way off.
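The optimal-multiplier claim can be checked numerically. A quick Monte Carlo sketch (my own setup, not from the original post): normalise the true lifetime to 1, observe the machine at a uniformly random age t, guess k·t as the total life, and count the guess as correct when it lands within 25% of the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.25                             # required accuracy: within 25% of the true lifetime
t = rng.uniform(0.0, 1.0, 200_000)   # uniform random observation age, true lifetime = 1

# For each candidate multiplier k, guess k*t and measure how often it lands within a of 1
ks = np.arange(0.5, 3.0, 0.01)
fracs = [np.mean(np.abs(k * t - 1.0) <= a) for k in ks]
best = int(np.argmax(fracs))
print(ks[best], fracs[best])   # best multiplier ≈ 1.25, correct ≈ 40% of the time
```

Under these assumptions the best multiplier comes out at about 1+a with success fraction about 2a/(1+a); swapping the scoring rule for "closer than the rival guess, pointwise over the lifetime" is what pushes the answer to the 200% halfway rule instead.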