Comment by bucky on Emotional Climate Change - an inconvenient idea · 2019-02-12T16:54:20.633Z · score: 4 (3 votes) · LW · GW

Just to provide a bit of feedback, this seems unnecessarily alarmist.

“Social media is bad” is a fairly standard trope and not something which will surprise many people. As a result, most people I know are aware of the problem and talk about how to use social media responsibly. In my experience young people are often the most aware of this, as they've been warned about the dangers for their entire lives, and so are often good at managing their social media use.

It's not perfect but it's not cataclysmic.

Comment by bucky on Probability space has 2 metrics · 2019-02-11T22:29:45.228Z · score: 1 (1 votes) · LW · GW

I like the theory. How would we test it?

We have a fairly good idea of how people weight decisions based on probabilities via offering different bets and seeing which ones get chosen.

I don't know how much quantification has been done on incorrect Bayesian updates. Could one offer trades where one of the options has been recommended by an "expert" who has correctly answered a 50:50 question on a related topic x times in a row? How much do people adjust based on the evidence of the expert's track record? This doesn't sound perfect to me; maybe someone else has a better version, or maybe people are already doing this research?!
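To put rough numbers on the idea, here is a minimal sketch (function name, prior, and assumptions are all my own, not from any existing study) of the ideal Bayesian benchmark one could compare subjects against, assuming the "expert" is either genuinely knowledgeable (always right) or guessing at chance:

```python
from fractions import Fraction

def posterior_expert(x, prior=Fraction(1, 2)):
    """Posterior probability that the "expert" has real knowledge after
    x correct answers to 50:50 questions, assuming a knowledgeable
    expert is always right and a guesser is right half the time."""
    # Each correct answer is a likelihood ratio of 2:1 in favour of
    # real knowledge, so x correct answers give a ratio of 2**x.
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * Fraction(2) ** x
    return posterior_odds / (1 + posterior_odds)

# After 3 correct predictions an ideal reasoner moves from 1/2 to 8/9.
print([posterior_expert(x) for x in range(5)])
```

Comparing how much subjects actually pay for the recommended option against a benchmark like this would quantify under- or over-updating.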

Comment by bucky on Spaghetti Towers · 2019-02-11T09:16:17.621Z · score: 3 (2 votes) · LW · GW

I wonder to what extent tax codes are spaghetti towers - every time someone finds a loophole, a new bit of tax code gets added to close it without considering how to make a coherent whole. This would explain how the UK tax code runs to >10,000 pages.

Comment by bucky on The Case for a Bigger Audience · 2019-02-10T21:36:23.681Z · score: 1 (1 votes) · LW · GW

I like this idea. I can't find it now but I remember a recent comment suggesting that any post/comment which ends up with negative karma should have someone commenting as to why they downvoted it, so that the feedback is practical.

To encourage commentors (and posters) without cluttering up the comments thread:

Non-substantive comments, collapsed by default, where voters can leave a couple of words as to why they voted the way they did.

Comment by bucky on The Case for a Bigger Audience · 2019-02-10T21:25:01.166Z · score: 9 (2 votes) · LW · GW

Just wanted to say I agree regarding the problems with conversation being "time driven" (I've previously suggested a similar problem with Q&A).

One idea that occurs to me is to personalise Recent Discussion on the homepage. If I've read a post and even more if I've upvoted it then I'm likely to be interested in comments on that thread. If I've upvoted a comment then I'm likely to be interested in replies to that comment.

If Recent Discussion worked more like a personal recommendation section than a rough summary section then I think I'd get more out of it and probably be more motivated to post comments, knowing that people may well read them even if I'm replying to an old post.

Comment by bucky on What we talk about when we talk about life satisfaction · 2019-02-05T15:26:35.758Z · score: 3 (3 votes) · LW · GW

I suspect that for most people the reality is that they just anchor and adjust for this kind of question. Typically I'd expect an anchor at about 6-8 out of 10 (few people want to think they're unhappy) and then an adjustment +/- 1-2 depending on whether their current circumstances are better or worse than they think they should expect.

I'd assumed that the vagueness was more of a feature of the question than a bug. If you compare yourself to a billionaire then you will probably rate yourself lower than if you compare yourself to people around you. At the same time, if your instinct is to compare yourself with the billionaire then you probably are less satisfied in life than if you instinctively compare yourself to a more achievable datum. Thus the answer you provide tends to match the underlying reality, if by satisfaction we mean "lack of wishing things were different".

Comment by bucky on Urgent & important: How (not) to do your to-do list · 2019-02-02T19:31:45.664Z · score: 1 (1 votes) · LW · GW

I say I was taught it - it was more like my first boss saying to me “look, if you mark things as urgent/important then it helps you see which tasks you should prioritise”. I don’t think he mentioned delegation as I wouldn’t have had anyone to delegate to!

Comment by bucky on How would one go about defining the ideal personality compatibility test? · 2019-02-02T15:03:17.945Z · score: 3 (2 votes) · LW · GW

For-profits operate in a marketplace. Provided the marketplace is working, it doesn’t matter if they would rather keep people on the site by giving poor matches - if they don’t match people well then another company will give the people what they want and take their market share.

First you need to show that a company can operate against the interests of its customers for an extended time period without market consequences, then you can talk about what the company would like to do with this ability.

I don’t claim this can’t happen, just that the market working properly should be the null hypothesis.

Comment by bucky on How would one go about defining the ideal personality compatibility test? · 2019-02-01T21:14:07.821Z · score: 1 (1 votes) · LW · GW

I think I would have to pull Efficient Market Hypothesis on this and direct you to e.g. Match.com. Huge multinational dating sites are dependent on making good matches and I can’t think of any strong enough reasons to expect the market to be inefficient enough for me to do better.

You can read a bit about how Match.com do it here.

Comment by bucky on Urgent & important: How (not) to do your to-do list · 2019-02-01T20:22:32.101Z · score: 3 (2 votes) · LW · GW

Urgent/important was taught to me essentially in the way that your simpler version works. I didn’t even know how it was supposed to work a different way so I learnt something new.

I wonder how typical it is that people modify the Eisenhower box with a good dollop of common sense and end up with roughly what you describe. I definitely think this version makes more sense and like the hopscotch diagram.

Who wants to be a Millionaire?

2019-02-01T14:02:52.794Z · score: 29 (16 votes)
Comment by bucky on The Relationship Between Hierarchy and Wealth · 2019-01-24T15:43:13.439Z · score: 1 (1 votes) · LW · GW
If we're asking "what causes hierarchy?", then I'd expect the root answer to be "large-scale coordination problems with low communication requirements"

Nicely put. David Manheim has an interesting post on the need for legible structure when scaling from a startup to a large organisation.

Comment by bucky on What math do i need for data analysis? · 2019-01-19T22:07:31.915Z · score: 6 (4 votes) · LW · GW

I personally found the Udacity course helpful but I see that someone has done a comparison of all the online data science courses they could find here. Hopefully one of those might be what you’re looking for.

Comment by bucky on What are questions? · 2019-01-10T14:50:25.540Z · score: 30 (6 votes) · LW · GW

Questions in humans often involve some kind of status interaction. A question is not only (or even always) a request for information but often also represents an offer to trade status for information.

(I realise that this is focusing on a very narrow sub-field of the question asked but you did ask for unusual framings!)

In a canonical case of requesting unknown information, the asker is lowering his status relative to the askee in trade for the information. The act of asking implies that, at least in this area, the askee has superior knowledge to the asker, thus increasing the askee’s prestige. In doing so it provides a trade to the askee for an answer.

This status transfer is often the reason that questions go unasked. The common refrain “there’s no such thing as a stupid question” is there to try to overcome this reluctance to ask by lowering the status tax on question asking. I often tell new starts at my work to ask as many questions as possible in the first 6 months because you’ll feel less silly during this period (i.e. the status tax is lower as expectations of your knowledge are lower). Asking questions after that will be necessary but it will be emotionally more costly as time goes on. One weighs up (not necessarily consciously) whether the information required justifies the expenditure of status.

One obvious result is a preference to ask questions in a smaller group so that the asker’s status is lowered in fewer people’s opinions. Of course this is reversed if the question is really an excuse to show off one's own intelligence!

Similarly, one is tempted to search for information without making it obvious that one is asking a question, thereby maintaining plausible deniability.

Questions generally being low status can be used to your advantage. In general you won’t tell your boss if he’s wrong but you might ask a question obliquely which might make him consider for himself that he might be wrong. If this works then you can correct his mistake without threatening his status (Caution: use with care!).

A manager might also use questions to correct an employee in order to be less status threatening to the employee.

I feel like questions in humans can’t be fully understood without some form of status interaction.

Meta Note

Having the question feature on LessWrong is interesting as it emphasizes that question asking is not a low status activity on the site. The potential issue is that askees might not feel like spending the time to answer a question provides sufficient reward. If this is overcome then I think the Q&A feature would count as a positive sum status interaction (at least on a status adaptation level).

I think responders need to be careful to avoid putting the status tax back on asking questions (e.g. by implying the question is stupid or the answer is obvious). I realise the need to distinguish good and bad questions (for clarity, I would include this question in the former) but I would prefer this to be done via moderation policy.

The issue with the status tax is that it optimises for whether an individual needs the information rather than whether it is a generally good question. For example, if a question is something that a lot of people are interested in but no individual really desperately needs to know, then the status tax makes it less likely to be asked. A good moderation policy should be able to encourage such questions.

(I've often had the experience that when someone finally overcomes their reluctance to pay the status tax and asks a question, suddenly everyone says that they wanted to ask that too - I wonder how often such questions just don't get asked.)

Comment by bucky on What do you do when you find out you have inconsistent probabilities? · 2019-01-01T13:32:34.346Z · score: 1 (1 votes) · LW · GW

Argh, you’re right, I didn’t check that one. P(OM) cancels in the P(G) equation so that one isn’t over-constrained.

However, for the equation for P(OM), 4 variables is over-constrained; 3 is enough.

Comment by bucky on What do you do when you find out you have inconsistent probabilities? · 2019-01-01T08:19:22.890Z · score: 1 (1 votes) · LW · GW

The 4 given probabilities are actually perfectly consistent within the equations you are using. It is provable that, whatever 4 probabilities you use, the equations will be consistent.

Therefore the question becomes “where did my maths go wrong?”

P(G|OM) = 0.055, not 0.55

I’m pretty confident that the only way probabilities can actually be inconsistent is if the system is over-constrained (e.g. in this case you define 5 relevant probabilities instead of 4). The whole point of having axioms is to prevent inconsistencies provided you stay inside them.
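A quick sketch of the point (the numbers and function name below are illustrative, not those from the original post): derive the remaining quantities from a minimal free set of probabilities via the law of total probability and Bayes' rule, and no inconsistency can arise; inconsistency only appears when you assert more values than the free set supports.

```python
def derive(p_g, p_om_given_g, p_om_given_not_g):
    """Derive P(OM) and P(G|OM) from a minimal free set of probabilities
    using the law of total probability and Bayes' rule.  Because the
    remaining values are computed rather than asserted, they can never
    be inconsistent with the inputs."""
    p_om = p_om_given_g * p_g + p_om_given_not_g * (1 - p_g)
    p_g_given_om = p_om_given_g * p_g / p_om
    return p_om, p_g_given_om

# Illustrative numbers, not the original post's:
p_om, p_g_given_om = derive(p_g=0.1, p_om_given_g=0.5, p_om_given_not_g=0.05)
print(p_om, p_g_given_om)
# Asserting an extra value on top of the free set (say P(OM) = 0.2,
# disagreeing with the derived 0.095) is what over-constrains the
# system and produces an apparent inconsistency.
```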

P.S. Good job on noticing your confusion!

Comment by bucky on Thoughts on Q&A so far? · 2018-12-31T23:32:32.202Z · score: 12 (4 votes) · LW · GW

Firstly I think the feature is a great idea and is working pretty well for a prototype.

Ideally the feature needs to do at least 3 things:

Get questions noticed

Get questions answered well

Present good Q/A combinations to the community

The current practice of promoting interesting questions to frontpage does the first but could be detrimental to the other 2 - many users will see the question but not get round to reopening and looking at the answers. This encourages people to answer quickly to get their response read and discourages more detailed answers.

If you’re planning on developing the feature further then addressing this issue would really help get the best out of it. One option would be the ability to promote the best answer(s) to frontpage after a week or so.

Comment by bucky on Defining Freedom · 2018-12-22T20:08:43.189Z · score: 1 (1 votes) · LW · GW

Thanks, got it! I hadn’t read the post as advocating a lack of decision but re-reading it now I can see how you read it.

Comment by bucky on Defining Freedom · 2018-12-20T21:05:53.063Z · score: 2 (2 votes) · LW · GW

I'm very confused about this. Here's what I think you're saying:

Choices are bad, particularly with regard to regret. It is better to make a decision based on instinct and forget about it than to consider your options carefully and potentially regret the decision.

Is that about right or is there something that I'm missing?

Experiences of Self-deception

2018-12-18T11:10:26.965Z · score: 16 (5 votes)
Comment by bucky on Interpreting genetic testing · 2018-12-17T11:55:26.189Z · score: 5 (3 votes) · LW · GW

Thanks for writing this, it's not something I'd looked at before but I read some of the Promethease sample reports because you got me interested.

There does seem to be some weird normalisation going on when calculating magnitude. For instance this gene gives a score of 0 for (C;C) but "bad" magnitudes of 2.7 and 3.1 for (C;G) and (G;G) respectively. So if you have (C;C) and filter by magnitude 2, you miss out on the fact that you have an advantageous genotype.

This isn't a problem if (C;C) is extremely common but actually it's no more common than (C;G) (except for people of African descent), so the act of filtering prevents you from realising that you missed a 50:50ish chance of getting a disadvantageous genotype.

So probably to work this out properly you can't filter by magnitude and you'd have to open up every genotype's details to check what you've avoided getting hit by. You could only really work out how well you've done compared to other people where the data includes frequency, so you could see just how lucky/unlucky you got for a particular gene.

Not all of the genotypes have this issue - for instance this gene seems to be more sensibly normalised. If they were all done like this then I'd be much happier with the system.

Comment by bucky on What went wrong in this interaction? · 2018-12-13T11:51:16.865Z · score: 4 (3 votes) · LW · GW

Good summary.

I'd like to add that sometimes definitions do matter, particularly in a public settings such as a blog. Even if t3tsubo and Benquo agree with both (a) and (b), it is possible that others reading the OP think that it is asserting (not-a).

If a significant number of people reading the blog are likely to think that it is asserting (not-a), rather than asserting (b), then it may be worth clarifying the OP to ensure that the correct message is received. I don't know whether this would be a common misunderstanding, I can only conclude that at least one person read the post as asserting (not-a).

Comment by bucky on The Bat and Ball Problem Revisited · 2018-12-13T09:56:01.389Z · score: 2 (2 votes) · LW · GW

This is pretty much the same for me. I think the solution to bat and ball of "10 cents; oh no, that doesn't work. Split the difference evenly for 5 cents? Yup, that's better" is all done on system 1.

Kahneman's examples of system 1 thinking include (I think) a chess grandmaster seeing a good chess move, so he includes the possibility of training your system 1 to be able to do more things. In the case of the OP, system 1 has been trained to really understand exponential growth and ratios. I think that for me, "quickly check that your answer is right" and "try something vaguely sensible and see what happens" are both so ingrained as general principles that I don't have to exert effort to apply them to simple problems.

A problem which I would volunteer for a CRT is the snail climbing out of a well. Here there's an obvious but wrong answer but I think if you realise that it's wrong then the correct answer isn't too hard to figure out.

Comment by bucky on Good Samaritans in experiments · 2018-12-08T09:13:12.322Z · score: 2 (2 votes) · LW · GW

From a frequentist perspective you're right.

From a Bayesian perspective, my prior would be that the GS condition would make people more likely to help. The likelihood calculation reinforces this belief. In terms of bits, my prior would have been (conservatively) 2-3 bits in favour of an effect and the experiment adds another 3-4(?) so I end up on about 6 bits. 64:1 is pretty good.
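The arithmetic here is just the usual bits-to-odds conversion (bits of independent evidence add, odds multiply); a tiny sketch with my own function names:

```python
def bits_to_odds(bits):
    """Convert evidence measured in bits into an odds ratio."""
    return 2 ** bits

def combined_odds(prior_bits, evidence_bits):
    # Bits of independent evidence add, so the corresponding odds multiply.
    return bits_to_odds(prior_bits + evidence_bits)

# A conservative 2-bit prior plus ~4 bits from the experiment:
print(combined_odds(2, 4))  # 64, i.e. 64:1 in favour of an effect
```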

Comment by bucky on Worth keeping · 2018-12-07T18:07:47.801Z · score: 11 (7 votes) · LW · GW

I agree with what you say but feel like it’s the wrong kind of response for a post marked as speculative epistemic status. For such posts I think it’s fair to assume that the author knows they are over-simplifying and is just wanting to see where the idea goes.

Comment by bucky on Worth keeping · 2018-12-07T13:00:32.647Z · score: 2 (2 votes) · LW · GW
A related question is when you should let people know where they stand with you. Prima facie, it seems good to make sure people know when they are safe. But that means it also being clearer when a person is not safe, which has downsides.

An interesting question.

Even if you are not specific about where people stand with you, they have the evidence of your past actions. So whenever you decide whether to stick with a friend or give up will provide evidence to your other acquaintances as to what they can expect from you.

If one is perceived as being too quick to ditch friends, it probably decreases availability of replacement friends. On the other hand, someone who is extremely loyal is likely to have greater availability of friends (up to a limit!) but also less need for new friends.

This surplus may give leverage for things one cares about - one might say "I'll stand by my friends but I do expect they turn up when they say they'll turn up". Someone who is less loyal may not be able to be so picky.

Comment by bucky on Playing Politics · 2018-12-07T07:23:51.783Z · score: 4 (3 votes) · LW · GW

Just spotted this comment is being put at the bottom by the magical sorting algorithm despite its high karma - maybe an artefact of having been marked as spam?

Comment by bucky on Good Samaritans in experiments · 2018-12-05T23:47:56.394Z · score: 4 (3 votes) · LW · GW

Done :)

Comment by bucky on Good Samaritans in experiments · 2018-12-05T21:49:27.012Z · score: 5 (4 votes) · LW · GW

Thanks for the comments, I've added section headings so hopefully it reads a bit easier now.

To be honest I really didn't expect this to be as interesting to people as it was - glad to be proven wrong!

Comment by bucky on Playing Politics · 2018-12-05T16:48:48.568Z · score: 6 (4 votes) · LW · GW
Ever wonder why people reply more if you ask them for a meeting at 2pm on Tuesday, than if you offer to talk at whatever happens to be the most convenient time in the next month? The first requires a two-second check of the calendar; the latter implicitly asks them to solve a vexing optimisation problem.

My experience is that this also makes people more likely to show up at the agreed time (using this method was suggested to me when I was working with students who were notoriously bad at showing up).

Possibly phrasing it this way creates an artificial significance to the time suggested, I don't know, but it does seem to work. I generally offer as few options as is reasonable given other constraints.

Comment by bucky on Is Science Slowing Down? · 2018-11-30T13:22:56.687Z · score: 4 (3 votes) · LW · GW

Ok, I'm an idiot, this model doesn't predict exponentially increasing required inputs in time - the model predicts exponentially increasing required inputs against man-hours worked.

The relationship between required inputs and time is hyperbolic.

Comment by bucky on Is Science Slowing Down? · 2018-11-29T22:48:29.285Z · score: 4 (4 votes) · LW · GW

This is an interesting theory. I think it makes some different predictions to the low hanging fruit model.

For instance, this theory would appear to suggest that larger teams would be helpful. If Intel are not internally repeating the same research, then increasing their number of researchers should increase the discovery rate. If instead a new company employs the same number of new researchers, this will have minimal effect on the world discovery rate if they repeat what Intel is doing.

A simplistic low hanging fruit explanation does not distinguish between where the extra researchers are located.

Comment by bucky on Is Science Slowing Down? · 2018-11-29T22:24:19.880Z · score: 2 (2 votes) · LW · GW

Sticking with the throwing darts at a wall analogy, the linked post suggests that the problem is that no-one knows how close any of the darts are to the line. That problem would need to be solved before we could make progress.

Comment by bucky on Is Science Slowing Down? · 2018-11-27T16:38:25.248Z · score: 7 (4 votes) · LW · GW

Toy model: Say we are increasing knowledge towards a fixed maximum and for each man hour of work we get a fixed fraction of the distance closer to that maximum. Then exponentially increasing inputs are required to maintain a constant growth rate.

If I was throwing darts blind at a wall with a line on it and measured the closest I got to the line then the above toy model applies. I realise this is a rather cynical interpretation of scientific progress!

If the progress in a field doesn't depend on how much you know but how much you have left to find out then this pattern seems like a viable null hypothesis. Of course the data will add information but I'm not allowed to take the data into account when choosing my null.

EDIT: More generally, a fixed maximum knowledge is not strictly required. We still require exponentially varying inputs if the potential maximum increases linearly as we gain more knowledge. Think Zeno’s Achilles and the tortoise.
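A minimal numerical sketch of the toy model (function name and numbers are illustrative): if each man-hour closes a fixed fraction of the remaining gap to the maximum, the man-hours needed for each equal slice of progress grow without bound as the gap shrinks.

```python
import math

def hours_for_increment(gap, delta, f):
    """Man-hours needed to close `delta` of a remaining `gap` when each
    hour closes fraction `f` of whatever gap is left.
    Solves gap * (1 - (1 - f)**h) = delta for h."""
    return math.log(1 - delta / gap) / math.log(1 - f)

K, f, delta = 100.0, 0.001, 10.0  # maximum, per-hour fraction, slice size
gap = K
for step in range(1, 10):
    h = hours_for_increment(gap, delta, f)
    print(f"slice {step}: {h:8.0f} hours")
    gap -= delta
```

The cost per slice roughly doubles by the time half the gap is gone, which is the sense in which maintaining constant progress demands ever-growing inputs.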

Comment by bucky on Status model · 2018-11-27T15:08:26.605Z · score: 9 (2 votes) · LW · GW

Done, thanks for the suggestion. Fortunately I kept a list as originally I was writing something much longer before realising that the model was probably the most interesting bit.

Comment by bucky on Status model · 2018-11-26T20:31:52.102Z · score: 1 (1 votes) · LW · GW

It's on my Christmas list!

It's quoted/referenced extensively on a few posts. The impression I got is that he focuses on rows 2 & 3 but maybe that's just the focus of those posts rather than Johnstone himself?

Status model

2018-11-26T15:05:12.105Z · score: 28 (9 votes)
Comment by bucky on Combat vs Nurture: Cultural Genesis · 2018-11-26T00:05:30.165Z · score: 1 (1 votes) · LW · GW

Loosely define nurture and combat culture as different truth-seeking methods, and typical culture as the absence of truth-seeking. Then using nurture or combat won’t work in typical culture as only one of you is truth-seeking. If the other person is “just making conversation” then any attempt to change their mind will be seen as weird.

My hypothesis is that when someone who defaults to typical culture realises that truth seeking is required, nurture culture will seem less weird to them.

This is only based on my own experience of applying nurture and combat. In my work I often have to get people to seek the truth together and be willing to disagree. Nurture is generally easier for newbies to cope with.

Getting people interested in seeking the truth in the first place is an even harder problem.

Comment by bucky on Four factors which moderate the intensity of emotions · 2018-11-24T23:04:51.052Z · score: 16 (8 votes) · LW · GW

My personal experience is that a separate factor of “emotional susceptibility” is very important. Tiredness, stress and repeated emotional experiences can sensitise me to feel stronger emotions.

Comment by bucky on Jesus Made Me Rational (An Introduction) · 2018-11-23T08:57:23.578Z · score: 2 (2 votes) · LW · GW

I'll rephrase:

It sounds like you've discovered something new (rationality) and it has dissolved your previously felt cognitive dissonance regarding your belief. The dissolving of cognitive dissonance feels like it is confirming the side that you end up on, even though the actual evidence is sketchy at best.

***

Corrections to my previous comments:

1. Where I said:

method for choosing a prior

I should have said "method for calculating a likelihood"

2. I talk about my belief in Christianity being on a scale but this is unhelpful because of the issues discussed in No, really, I've deceived myself. I should have talked about "how likely I thought it was that Christianity was true" being measured on a scale. This sounds the same but means I actually assessed the truth of the statement, not just my level of belief.

Comment by bucky on Jesus Made Me Rational (An Introduction) · 2018-11-22T22:10:16.922Z · score: 20 (6 votes) · LW · GW

I have mixed feelings about this. On one hand, I'm glad you wrote it as openness seems like the first step to knowledge. On the other, I think you're dealing with your evidence wrongly.

To me it feels like you've been discovering something new (rationality) and found a way to fit it into your existing belief system. On the inside this feels like it confirms your belief system but from the outside it looks like privileging the hypothesis. One of the main things I got from Thinking: Fast and Slow was that being able to tell ourselves a convincing story feels like we're discovering the truth but actually the convincing-ness of the story is orthogonal to truth.

If we grant that Christians invented science then maybe this can be counted as evidence for Christianity, but is it strong evidence? A rough estimate might be that 1/6 of the people who have ever lived were Christian, so I don't think it should be overly surprising that one of them was the inventor. I know this is a horrendous method for choosing a prior but it gives an indicator that evidence of what Christians have done in the past is unlikely to be strong evidence either way.

If you count this as evidence for Christianity then you need to count similar evidence too. Should the other historical figures before the 12th century who contributed to science and maths count as evidence that their beliefs are true? Compared to the number of Christians who have ever lived, the number of ancient Greeks who ever lived is tiny so it is incredible that they got as far as they did.

To someone looking in from the outside, claiming that Christianity is different because it gave a reason for believing the world would be consistent again seems like privileging the hypothesis. Those other ancient figures seemed to assume that the world would be consistent even without Christianity so even in your belief system there doesn't seem to be an a priori reason to believe that they couldn't have invented the scientific method.

It took 12 centuries after Christ to invent the scientific method so it would also seem to be true that believing in Christianity wasn't a massively strong driver towards inventing the scientific method.

***

To put my cards on the table, until a couple of years ago I was in a similar situation to you. I believed in Christianity and didn't expect ever to be dissuaded.

I'm not sure that I can pinpoint exactly what changed for me. One big part of it was the realisation that I didn't have to believe or not believe in Christianity - 0 and 1 are not probabilities. What was more I realised I already didn't 100% believe in everything in Christianity - there were already plenty of things that I found incredibly confusing but kinda just accepted because they were part of a parcel of beliefs. I guess you might be similar but may have different issues - mine included the trinity, free will vs God's sovereignty, differences between new and old testaments, suffering, # of fertilised eggs which never even implant into the womb (I know, that one is probably fairly idiosyncratic).

When I allowed myself to see my belief in Christianity on a scale I was able to modify how much I believed it based on evidence I saw. Before that any new evidence was judged on whether it allowed me to believe Christianity rather than whether it encouraged me to believe. I should note that from a Christian point of view this seems to be a virtue not a vice - Christianity seems to imply that you should only believe in Christianity if it is true so looking accurately at the evidence should be encouraged.

Over a few months my belief slowly waned as more evidence came in. I think the tipping point for me was realising how badly designed human intelligence is. The likelihood of God inventing something so poor in absolute terms to be the pinnacle of his creation was enough to push me over the edge. Again, this is probably fairly idiosyncratic!

***

I'm not sure exactly what you were hoping for in response to your introduction but I hoped my experience might be interesting to you.

Comment by bucky on Schelling fences on slippery slopes · 2018-11-20T23:16:16.157Z · score: 2 (2 votes) · LW · GW
I can't comprehend how rationality can hope to propagate in an environment that values social nicety over truth.

The point I was trying to make was that social nicety is a prerequisite for truth, or if not social nicety per se, at least good faith communication.

In general I'd agree that society values nicety more highly than is strictly healthy. To propagate rationality in such circumstances you focus on the battles that you can win. I'm not optimistic about rationality propagating fast but I don't think focusing on extreme and emotionally charged hypotheticals will get us there any faster.

Maybe give it another 30 years and we'll see where we are!

(Of course if this is less hypothetical then this discussion would be a very different one.)

Comment by bucky on Schelling fences on slippery slopes · 2018-11-20T10:40:18.898Z · score: 4 (3 votes) · LW · GW

In the most general sense, any law which impinges on free speech has the potential to be detrimental to accuracy of beliefs.

For example, if I make a defamatory claim about someone and they take me to court, the onus is on me to prove that what I said was true (at least in the UK). This will discourage me from making a claim that I believe to be true but don't have strong evidence for and so I cannot publish some true information.

In the US the burden of proof would be more on the person who I defamed to show that what I said was false (I'm not a legal expert, I got this from an episode of The Good Wife!). This is a lesser brake on free speech and allows me to say things which I am confident are true, even if my proof would be insufficient for a UK court.

However, there is a flip side. Completely free speech is not beneficial for truth seeking unless all members of the society can be trusted to communicate in good faith. Without any defamation laws everyone can say whatever they like about anyone else and no-one knows what to believe. I can imagine circumstances where if the burden of proof is overly on the defamed then people can make up things which are very hard to disprove and again the truth can suffer.

Another example would be that hate-speech laws discourage racism but also make it more difficult for people to discuss the possibility of differences between races.

So the choice of where to draw the line on free speech involves a trade-off between allowing accurate evidence to be presented and preventing bad-faith communication.

In the case of Holocaust denial I don't think it would be too controversial to suggest that most revisionist theories constitute bad faith communication (I'll be honest, I haven't looked at any myself).

My personal preference on this wouldn't be to ban Holocaust denial, as the social norms where I am from are sufficiently strong that they constitute enough of a barrier to Holocaust denial entering the mainstream, but I can certainly see why people would make the opposite trade-off. If any time someone discusses the Holocaust they risk being hounded by Holocaust-denial trolls then I don't think that this would be beneficial for society seeking the truth.

Generally I would prefer strong social norms to laws but until everyone can be trusted to communicate in good faith, laws limiting free speech are here to stay.

EDIT: This SSC post goes into this in much more detail, with an emphasis on how such norms might work in practice.

***

I can think of an experiment to test this - how many people question these facts where the questions themselves are not illegal (the US for example) compared to where they are (Germany)?

Just in passing, I think experiments like this are too noisy to provide useful conclusions due to numerous confounders. What is the base rate of acceptance of conspiracy theories in each country? How many citizens know someone who lived in Germany in the 1930s-40s? How many citizens have physically been to Auschwitz or know someone who has? How strong are the social norms against Holocaust denial in each country?

The differences in the societies are so large that there is probably more noise than signal for the original question, even if you included multiple countries on each side.

Comment by bucky on Clickbait might not be destroying our general Intelligence · 2018-11-19T19:25:11.535Z · score: 2 (2 votes) · LW · GW

Is there a way to do this on mobile devices?

Comment by bucky on Schelling fences on slippery slopes · 2018-11-19T13:00:39.330Z · score: 13 (5 votes) · LW · GW

Firstly, welcome!

Beyond a certain probability (say 99.9% confidence that a story is true in its generality, even if one is less sure of some of the specifics), it seems to me that the truthfulness of the story is no longer the main consideration in whether to institute such a law. In that case I would be more interested in how such a law would alter the incentives of society and what the knock-on effects would be. Not making a decision due to imperfect information can often be a mistake.

The point about granting the state authority to end a life for breaking any law isn't something I'd thought about before and is a very interesting one. I feel like it possibly proves too much if relied on too heavily to make decisions about which laws to implement - I can apply the same argument to speeding but that isn't a strong argument against speeding laws. The strength of the argument depends on how often one would expect such a death to occur. Assuming this list is typical, the argument is much stronger in e.g. the US than in Europe (where it can be all but ignored).

I do think that Holocaust denial laws are problematic if you're not allowed to say anything even slightly less severe than the widely accepted viewpoint. Some laws attempt to get around this by only banning "grossly" or "maliciously" downplaying the Holocaust; others don't. Either way, most convictions appear to be for denying the use of gas chambers or for dramatically underestimating the number of dead. Laws generally require disseminating such views publicly. I don't think this matches with it being "illegal to question even one part of the telling thereof" unless I'm misunderstanding you?

Comment by bucky on Stoicism: Cautionary Advice · 2018-11-15T09:51:50.636Z · score: 4 (3 votes) · LW · GW

This post got me thinking.

If I have a relatively stoic mindset compared to those around me then it's entirely possible that I see more instances of the absence of stoicism being damaging (in others) than the presence of stoicism being damaging (in me). This then reinforces positive feelings about stoicism and makes me become more stoic, even though the evidence may actually point to me already being too stoic.

If I'm sufficiently different to those around me, conflating evidence from their lives with evidence from my life is dangerous - especially because spotting where someone else is going wrong feels much easier than spotting where I am going wrong.

I suspect I've been guilty of this on many occasions.

Comment by bucky on Laughing Away the Little Miseries · 2018-11-14T19:48:17.551Z · score: 1 (1 votes) · LW · GW

I like this framing.

A version of this I’ve tried with some success is, instead of laughing, giving myself a metaphorical pat on the head for being a good little rationalist and noticing my own silliness. This seems to be self-reinforcing in the same way as laughing is.

I think I’ll try the laughter version and see which one works best for me.

Comment by bucky on Combat vs Nurture: Cultural Genesis · 2018-11-13T10:52:50.710Z · score: 7 (3 votes) · LW · GW

I guess I was including that default in the "nurture culture" box rather than a separate entity - I had it mentally listed as "nurture culture at its worst". Maybe this is an unhelpful categorisation as you're right that often there is no underlying truth-seeking goal.

(My experience in purely social circumstances is often the same as yours, in the workplace I'd say I find semi-functional nurture culture quite often, as the default gets somewhat modified to actually get stuff done)

I think the original point stands that many people are not used to being involved in a combat culture and will simply not know how to react when exposed to it. A functional combat culture may make the uninitiated think the culture is just rude, while a functional nurture culture will not seem that far removed from a normal conversation without a truth-seeking goal. As such, a combat culture will tend to exclude people who are not used to it and has an optics problem.

If I grant that combat culture at its best is the ideal for efficient truth seeking (which I would agree with) there is still the problem of getting from here to there which seems like a co-ordination problem. Possibly a functioning nurture culture is a good start which then allows the culture to move towards combat as people become more comfortable (similarly to your second point here?). But that leaves a problem when you have moved to a combat culture and want other people to join.

(This is not purely theoretical. I encourage a relatively combative approach in my department and it does often make newcomers a little uneasy. Going full combat would likely be even more difficult)

It may be that scaring away those who are not used to combat culture is worth it for the benefits of a good combat culture. It might also be arguable that exposure to combat culture would help people understand it, although I think there's a danger of this going the opposite way and putting people off combat culture completely.

Comment by bucky on Combat vs Nurture: Cultural Genesis · 2018-11-12T15:45:26.629Z · score: 2 (2 votes) · LW · GW

My experience is that nurture culture is far more common than combat culture (possible exceptions: Law, science?).

This means that almost everyone has experienced nurture culture at some point. They know how to act within it and have common knowledge of how to interpret each other. Even if they prefer combat culture they at least understand the rules of the game.

I think there is a significant minority of people who have no experience of combat culture and so are left confused when they come into contact with it. Over and above being upset at being told "You're absolutely wrong", such a person may not understand how anyone could ever say such a thing to another human being. They don't even understand that such a game can exist.

Maybe others have a different experience, I can imagine people who only know combat getting very confused when they first enter a nurture environment - "Why is everyone getting offended at me?". Probably this happens relatively early in life if nurture culture is dominant in the wider culture.

Comment by bucky on Bayes Questions · 2018-11-09T22:44:04.817Z · score: 1 (1 votes) · LW · GW

I use it to determine the relative probabilities of each μ,σ pair, which in turn create the pseudo-CDF.

Comment by bucky on Bayes Questions · 2018-11-09T20:14:19.349Z · score: 1 (1 votes) · LW · GW

For μ,σ I effectively created a quasi-cumulative distribution with the parameter pairs as the x-axis.

(μ1,σ1), (μ2,σ1), (μ3,σ1), … (μ1,σ2), (μ2,σ2), (μ3,σ2), … (μn,σm)

The random number defines the relevant point on the y-axis. From there I get the corresponding μ,σ pair from the x-axis.

If this method works I’ll probably have to code the whole thing instead of using a spreadsheet as I don’t have nearly enough μ,σ values to get a good answer currently.
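A minimal sketch of this quasi-CDF selection step in Python (the grid values and posterior weights below are made-up placeholders, not the real fatigue data):

```python
import bisect
import itertools
import random

# Placeholder grid and unnormalised posterior weights -- stand-ins for
# the spreadsheet values, not the real data.
mus = [2.0, 2.5, 3.0]
sigmas = [0.3, 0.5]
pairs = [(m, s) for s in sigmas for m in mus]  # mu varies fastest, as above
weights = [1.0, 4.0, 2.0, 0.5, 3.0, 1.5]      # posterior weight per pair

# Build the quasi-cumulative distribution over the parameter pairs.
total = sum(weights)
cdf = list(itertools.accumulate(w / total for w in weights))
cdf[-1] = 1.0  # guard against floating-point round-off

def sample_pair(rng=random):
    """Pick a (mu, sigma) pair with probability proportional to its weight."""
    return pairs[bisect.bisect_left(cdf, rng.random())]

mu, sigma = sample_pair()
```

The random number lands somewhere on the y-axis of the cumulative distribution, and `bisect` finds the corresponding pair on the x-axis, so pairs with more posterior weight get picked proportionally more often.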

Comment by bucky on Bayes Questions · 2018-11-09T12:10:36.276Z · score: 1 (1 votes) · LW · GW

Birnbaum-Saunders is an interesting one. For the purposes of fatigue analysis, the assumptions which bring about the three models are:

Weibull - numerous failure modes (of similar failure speed) racing to see which causes the component to fail first

Log-normal - Damage per cycle is proportional to current damage

Birnbaum-Saunders - Damage per cycle is normally distributed and independent of previous damage

My engineering gut says that this component is probably somewhere between Log-normal and Birnbaum-Saunders (I think proportionality will decay as damage increases) which is maybe why I don't have a clear winner yet.
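For reference, the standard CDFs of the three candidate models (t > 0, Φ the standard normal CDF; these are the conventional textbook parameterisations, not necessarily the ones used in the analysis above):

```latex
\begin{aligned}
F_{\text{Weibull}}(t) &= 1 - \exp\!\left[-(t/\lambda)^{k}\right] \\
F_{\text{log-normal}}(t) &= \Phi\!\left(\frac{\ln t - \mu}{\sigma}\right) \\
F_{\text{Birnbaum--Saunders}}(t) &= \Phi\!\left(\frac{1}{\alpha}\left(\sqrt{t/\beta} - \sqrt{\beta/t}\right)\right)
\end{aligned}
```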

***

I think I understand now where my original reasoning was incorrect when I was calculating the expected worst in a million. I was just calculating worst in a million for each model and taking a weighted average of the answers. This meant that bad values from the outlier potential pdfs were massively suppressed.

I've done some sampling of the worst in a million by repeatedly generating two random numbers from 0 to 1. I use the first to select a μ,σ combination based on the posterior for each pair. I use the second random number as a p-value. I then apply the inverse CDF to those values to get an x.

Is this an ok sampling method? I'm not sure if I'm missing something or should be using MCMC. I definitely need to read up on this stuff!

The worst in a million is currently dominated by those occasions where the probability distribution is an outlier. In those cases the p-value doesn't need to be particularly extreme to achieve low x.

I think my initial estimates were based either mainly on uncertainty in p or mainly on uncertainty in μ,σ. The sampling method allows me to account for uncertainty in both which definitely makes more sense. The model seems to react sensibly when I add potential new data so I think I can assess much better now how many data points I require.
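A rough sketch of this two-random-number scheme under a log-normal model, using Python's `statistics.NormalDist` (3.8+); the posterior grid and weights are placeholders, and the quantile at the end is only a stand-in for the true worst-in-a-million:

```python
import bisect
import itertools
import math
import random
from statistics import NormalDist

# Placeholder posterior over (mu, sigma) pairs -- not the real fatigue data.
pairs = [(2.0, 0.3), (2.0, 0.5), (2.5, 0.3), (2.5, 0.5)]
weights = [1.0, 4.0, 2.0, 0.5]
cdf = list(itertools.accumulate(w / sum(weights) for w in weights))
cdf[-1] = 1.0  # guard against floating-point round-off

def posterior_predictive_sample(rng):
    # First random number picks a (mu, sigma) pair from the posterior.
    mu, sigma = pairs[bisect.bisect_left(cdf, rng.random())]
    # Second random number is the p-value; the log-normal inverse CDF is
    # exp of the underlying normal's inverse CDF.
    return math.exp(NormalDist(mu, sigma).inv_cdf(rng.random()))

rng = random.Random(42)
draws = sorted(posterior_predictive_sample(rng) for _ in range(100_000))
# Low empirical quantile as a stand-in; a true 1-in-a-million estimate needs
# far more draws (or evaluating each pair's inverse CDF directly at p = 1e-6).
worst_ish = draws[int(0.001 * len(draws))]
```

Because both the parameter pair and the p-value are drawn at random, the low tail of `draws` is dominated by the outlier (μ,σ) pairs, matching the behaviour described above where uncertainty in both p and μ,σ is accounted for at once.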

Comment by bucky on Bayes Questions · 2018-11-07T22:55:49.635Z · score: 2 (2 votes) · LW · GW

I can (a bit).

I have good programming skills compared to a typical mechanical engineer but poor compared to a typical programmer.

Is there any introductory text on the theory which you would recommend (forgetting about the programming for the moment)? I wouldn't want to try to use a programming language where I didn't understand the theory behind what I was asking the program to do.

Bayes Questions

2018-11-07T16:54:38.800Z · score: 22 (4 votes)

Good Samaritans in experiments

2018-10-30T23:34:27.153Z · score: 127 (50 votes)

In praise of heuristics

2018-10-24T15:44:47.771Z · score: 44 (14 votes)

The tails coming apart as a strategy for success

2018-10-01T15:18:50.228Z · score: 33 (17 votes)

Defining by opposites

2018-09-18T09:26:38.579Z · score: 19 (10 votes)

Birth order effect found in Nobel Laureates in Physics

2018-09-04T12:17:53.269Z · score: 61 (19 votes)