Comment by bucky on The Politics of Age (the Young vs. the Old) · 2019-03-24T17:02:13.434Z · score: 2 (2 votes) · LW · GW

Another example is the 2014 Scottish independence referendum, where 16 and 17 year olds were allowed to vote for the first time. Apparently, in general, the younger someone was the more likely they were to vote for independence, but those under 24 reversed that trend.

https://www.bbc.co.uk/news/uk-scotland-glasgow-west-34283948

I’m skeptical that 16-17 year olds would have changed the Brexit result, given that Leave won by 1.3 million votes and there are only 1.5 million 16-17 year olds in the UK. Roughly eyeballing the numbers, allowing 16-17 year olds to vote might cause a 1% swing towards Remain, so it could make a difference if a second referendum is called.
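A rough sketch of that eyeballing, with the turnout and Remain-lean of the new voters as explicit made-up assumptions (only the 1.3M margin and 1.5M new voters come from the comment):

```python
# Back-of-envelope check of the ~1% swing estimate.
leave_margin = 1_300_000      # Leave's winning margin in 2016 (from the comment)
votes_cast = 33_600_000       # approximate total valid votes in 2016
new_voters = 1_500_000        # 16-17 year olds in the UK (from the comment)
turnout = 0.6                 # assumed turnout among them
remain_share = 0.7            # assumed share of them voting Remain

net_remain_gain = new_voters * turnout * (2 * remain_share - 1)
new_total = votes_cast + new_voters * turnout
old_margin_pp = 100 * leave_margin / votes_cast
new_margin_pp = 100 * (leave_margin - net_remain_gain) / new_total
swing_pp = old_margin_pp - new_margin_pp
print(f"margin shrinks by about {swing_pp:.1f} percentage points")
```

With these assumptions the Leave margin shrinks by roughly one percentage point, consistent with the eyeballed estimate; the conclusion is fairly insensitive to the exact turnout figure.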

Comment by bucky on The Game Theory of Blackmail · 2019-03-23T13:08:43.796Z · score: 1 (1 votes) · LW · GW

Cooperate-cooperate is Pareto optimal (even when including mixed strategies).

Am I right in thinking cooperate-defect is also Pareto optimal for both games (although obviously not optimal for total utility)? If they are iterated then a set of results is Pareto optimal provided at least one person cooperated in every round.

Comment by bucky on What societies have ever had legal or accepted blackmail? · 2019-03-18T16:36:28.656Z · score: 2 (2 votes) · LW · GW

I think there's a crossed wire here. I read Dagon as claiming that hypocrisy is prohibited but rarely enforced, rather than blackmail is prohibited but rarely enforced. I take it from "crime" that you understand the latter.

In my interpretation the statement would be that hypocrisy is frowned upon by society but the norm of non-hypocrisy is not enforced via blackmail.

Comment by bucky on How to Understand and Mitigate Risk · 2019-03-14T16:24:03.029Z · score: 2 (2 votes) · LW · GW

Great post.

Can you clarify for me:

Are "Skin in the game", "Barbell", "Hormesis", "Evolution" and "Via Negativa" considered to be subsets of "Optionality"

OR

Are all 6 ("Skin in the game", "Barbell", "Hormesis", "Evolution", "Via Negativa" AND "Optionality") subsets of "Anti-fragility"?

I understood the latter from the wording of the post but the former from the figure at the top. Same with "Effectuation" and "Pilot in plane" etc.

Comment by bucky on Blackmailers are privateers in the war on hypocrisy · 2019-03-14T10:10:51.832Z · score: 3 (3 votes) · LW · GW
Licit blackmail at scale wouldn't just punish people for hypocrisy - it would reveal the underlying rate of hypocrisy.

I'm not sure this works. If blackmail is common then people will know how often certain blackmail demands aren't paid but in order to know the underlying rate of hypocrisy you also need ratios for (hypocrisy):(blackmail) and (blackmail):(non-payment).

As those ratios depend on a number of variables I would imagine people would have very limited information on actual base rates.

Second, once people find out how common certain kinds of illicit behavior are, we should expect the penalties to be reduced.

Can you expand on the mechanism for this? Is it just that a person threatened with blackmail will be less likely to pay if someone else has already been outed for the same thing?

Comment by bucky on Renaming "Frontpage" · 2019-03-13T14:41:23.556Z · score: 1 (1 votes) · LW · GW

I like Whiteboard for Frontpage.

The only alternative I've thought of which might work is Origin (or Genesis) - intended connotation is both "place to start" and "new ideas".

Book review: My Hidden Chimp

2019-03-04T09:55:32.362Z · score: 31 (13 votes)
Comment by bucky on Where to find Base Rates? · 2019-02-27T11:58:49.960Z · score: 1 (1 votes) · LW · GW

To be honest I'd just google that one but that didn't seem like very useful advice! My googling got me almost straight to this risk calculator used by NHS Scotland. Cross check this with a few other references from google and that's probably as good as anything I'd work out myself by going to the data - it's a well studied issue.

ONS is useful for base rates where google fails me.

Comment by bucky on Where to find Base Rates? · 2019-02-26T20:08:13.338Z · score: 4 (3 votes) · LW · GW

I often use the Office for National Statistics (UK)

Comment by bucky on De-Bugged brains wanted · 2019-02-23T20:46:04.134Z · score: 1 (1 votes) · LW · GW

I feel like we’re going over the same ground. I’m not sure there’s much more for me to add as I don’t know of any sites which I think would be the right match for you.

Comment by bucky on De-Bugged brains wanted · 2019-02-22T22:35:21.891Z · score: 1 (1 votes) · LW · GW

In the future, my advice to you would be:

Start small - what individual bias do you think you could explain best? How would you explain just that 1 small thing as simply and engagingly as possible?

Use the site questions feature - if you want examples from the community just ask the question without any commentary on who is/isn't debugged etc.

I suspect you have more learning to do before you really get LW rationality as G Gordon Worley III describes so it might be better to really get a handle on all this first.

Comment by bucky on De-Bugged brains wanted · 2019-02-22T20:55:32.489Z · score: 1 (1 votes) · LW · GW

I’ve read it and commented on it already. You can refer to that comment for my thoughts.

Concepts which I can’t find elsewhere are only good if they are accurate/helpful which I don’t believe they are.

I think in this case it is up to you to show that you’re right, rather than up to me to show you’re wrong.

Comment by bucky on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-22T15:02:30.203Z · score: 5 (5 votes) · LW · GW

Lack of follow-through means that too few people actually change and the new equilibrium is not achieved. This makes future coordination more difficult as people lose faith in coordination attempts in general.

If I were to be truly cynical I could create/join a coordination for something I was against, spam a lot of fake accounts, get the coordination conditions met and watch it fail due to poor follow-through. Now people lose faith in the idea of coordinating to make that change.

Not sure how likely this is, how easy it is to counter or how much worse than the status quo coordination attempts can get...

Comment by bucky on De-Bugged brains wanted · 2019-02-22T12:44:49.080Z · score: 6 (4 votes) · LW · GW

AFAIK there isn't a specific movement where the spread of rationality is its core aim. I can't speak for anyone else but my impression is that this kind of rationality is most likely to spread organically rather than from one big project. There are lots of communities which are working on rationality related projects and will welcome in whoever is interested. People here are more than happy to apply their rationality, just not necessarily in the project which you are prescribing. This is a rational response if they have a low expectation of success.

My issue here is that, from witnessing your interactions so far, I don't have very high expectations of your own personal emotional intelligence. Criticism of your ideas often seems to be met with hostility, exasperation and accusations of fallacies. Even if your ideas are correct, this seems like a great way of alienating the people you are asking for help. One of the key tenets of LW-style rationality, as opposed to traditional rationality, is dealing with the world as it is, not as we think it should be, and I don't feel like you're doing that.

Again, I could be wrong about this but the impressions that you give are key to getting people to co-operate with you.

I can understand your excitement at finding a community which represents some of the things where you've previously felt that you're on your own. However I think you would be wiser to take stock and learn before you try a project as ambitious as you are suggesting.

Comment by bucky on De-Bugged brains wanted · 2019-02-21T16:39:05.330Z · score: 2 (2 votes) · LW · GW

Firstly, let me say that I think bringing rationalism to the masses is a great idea. I think the best we have so far is HPMoR, so that should be the standard to try to improve upon.

Secondly, it is a very difficult task, as you are aware. That means that my prior for any individual succeeding at this would be very low, even if I've seen lots of evidence showing that they have the kind of skill set that would be required. If I hadn't read HPMoR I would have put a low expectation on Eliezer managing it - he himself says he would have only put a 10% chance of the kind of success that it has achieved.

If I have yet to witness that individual's skills then my prior is tiny and I need a lot of evidence to suggest that they are capable. I think this is what you're seeing when you perceive a judgment on negative authority - I'm not saying you can't do it, only that I want more evidence before I believe that you can.

***

With your last post I think you were doing the right thing - putting your ideas out there and seeing what happens. Then if you've got it right people will start believing in your project more. I think where you went wrong on your last post was how you updated on the feedback you received. 2 hypotheses:

1. You are right and the community is full of people who don't realise

2. There are some issues which you were wrong about or stylistic choices which were unhelpful

I think the evidence is better for option 2 and that you would do better to modify what you've done based on the feedback.

If you are still convinced of option 1 then it's up to you to persuade the community why it is wrong. For ChristianKl's comment you could write the page of the proposed site where you give the evidence he requests. Reading between the lines I suspect that he disagrees with what you've said and that is why he wants you to provide the evidence, rather than purely that this would be the norm for LW. For my or Elo's comments you could persuade us that it really is as bad as you say.

***

In the future, my advice to you would be:

Start small - what individual bias do you think you could explain best? How would you explain just that 1 small thing as simply and engagingly as possible?

Use the site questions feature - if you want examples from the community just ask the question without any commentary on who is/isn't debugged etc.

Comment by bucky on Epistemic Tenure · 2019-02-19T22:27:28.565Z · score: -2 (3 votes) · LW · GW

Also, there's no irony if the downvoters do not believe I've earned any epistemic respect from previous comments, so they do not want to encourage my further commenting.

You’re right of course, I just found it amusing that someone would disagree that it’s a good idea to provide negative feedback and then provide negative feedback.

Comment by bucky on Epistemic Tenure · 2019-02-19T21:23:21.893Z · score: 3 (2 votes) · LW · GW

Thanks, that makes sense.

I completely empathise with worries about social pressures when I’m putting something out there for people to see. I don’t think this would apply to me in the generation phase but you’re right that my introspection may be completely off the mark.

My own experience at work is that I get ideas for improvements even when such ideas aren’t encouraged but maybe I’d get more if they were. My gut says that the level of encouragement mainly determines how likely I am to share the ideas but there could be more going on that I’m unaware of.

Comment by bucky on Epistemic Tenure · 2019-02-19T19:44:25.737Z · score: 5 (4 votes) · LW · GW

Putting myself in Bob’s shoes I’m pretty sure I would just want people to just be straight with me and give my idea the attention that they feel it deserves. I’m fairly confident this wouldn’t have a knock on effect to my ability to generate ideas. I’m guessing from the post that Scott isn’t sure this would be true of him (or maybe you’re more concerned for others than you would be for yourself?).

I’d be interested to hear other people’s introspections on this.

Comment by bucky on Epistemic Tenure · 2019-02-19T18:56:49.811Z · score: -5 (4 votes) · LW · GW

Just want to check that whoever downvoted Dagon’s comment sees the irony? :)

(Context: At time of writing the parent comment was at -1 karma)

Comment by bucky on Avoiding Jargon Confusion · 2019-02-19T13:37:00.899Z · score: 3 (2 votes) · LW · GW

The fact that there are subtly different purposes for the alternative naming schema could be a strength.

If I'm talking about biases I might talk about s1/s2. If I'm talking about motivation I might go for elephant/rider. If I'm talking about adaptations being executed I'd probably use blue minimising robot/side module.

I'm not sure whether others do something similar but I find the richness of the language helpful to distinguish in my own mind the subtly different dichotomies which are being alluded to.

Comment by bucky on Avoiding Jargon Confusion · 2019-02-18T11:28:57.740Z · score: 14 (4 votes) · LW · GW

Another option might be to use a word without any baggage. For example, Moloch seems to have held onto its original meaning pretty well but then maybe that's because the source document is so well known.

EDIT: I see The sparkly pink ball thing makes a similar point.

Comment by bucky on Emotional Climate Change - an inconvenient idea · 2019-02-12T16:54:20.633Z · score: 5 (4 votes) · LW · GW

Just to provide a bit of feedback, this seems unnecessarily alarmist.

“Social media is bad” is a fairly standard trope and not something which will surprise many people. As a result, most people I know are aware of the problem and talk about how to use social media responsibly. In my experience young people are often most aware of this as they've been warned about the dangers for their entire lives. As a result they are often good at managing their social media use.

It's not perfect but it's not cataclysmic.

Comment by bucky on Probability space has 2 metrics · 2019-02-11T22:29:45.228Z · score: 1 (1 votes) · LW · GW

I like the theory. How would we test it?

We have a fairly good idea of how people weight decisions based on probabilities via offering different bets and seeing which ones get chosen.

I don't know how much quantification has been done on incorrect Bayesian updates. Could one suggest trades where one is given options one of which has been recommended by an "expert" who has made the correct prediction to a 50:50 question on a related topic x times in a row. How much do people adjust based on the evidence of the expert? This doesn't sound perfect to me, maybe someone else has a better version or maybe people are already doing this research?!
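As a sketch of what the "correct" Bayesian update in such an experiment might look like (the expert's true accuracy and the prior odds here are made-up assumptions, purely for illustration):

```python
# How much should x correct 50:50 calls in a row shift belief in an "expert"?
# Assumed: a genuine expert is right with probability 0.75; prior odds 1:9.
p_expert = 0.75
prior_odds = 1 / 9            # prior odds that the recommender has real skill

def posterior_skill(x):
    """Posterior P(skill) after x correct calls in a row on 50:50 questions."""
    # Each correct call multiplies the odds by p_expert / 0.5
    likelihood_ratio = (p_expert / 0.5) ** x
    odds = prior_odds * likelihood_ratio
    return odds / (1 + odds)

for x in (0, 3, 6):
    print(x, round(posterior_skill(x), 3))
```

Comparing subjects' actual willingness to pay for the expert's recommendation against a curve like this would quantify how far their updates fall short of (or overshoot) the Bayesian answer.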

Comment by bucky on Spaghetti Towers · 2019-02-11T09:16:17.621Z · score: 3 (2 votes) · LW · GW

I wonder to what extent tax codes are spaghetti towers - every time someone finds a loophole a new bit of tax code gets added to close it, without considering how to make a coherent whole. This would explain how the UK tax code runs to >10,000 pages.

Comment by bucky on The Case for a Bigger Audience · 2019-02-10T21:36:23.681Z · score: 2 (2 votes) · LW · GW

I like this idea. I can't find it now but I remember a recent comment suggesting that any post/comment which ends up with negative karma should have someone commenting on as to why they downvoted it, so that the feedback is practical.

To encourage commentors (and posters) without cluttering up the comments thread:

Non-substantive comments, collapsed by default, where voters can leave a couple of words as to why they voted the way they did.

Comment by bucky on The Case for a Bigger Audience · 2019-02-10T21:25:01.166Z · score: 10 (3 votes) · LW · GW

Just wanted to say I agree regarding the problems with conversation being "time driven" (I've previously suggested a similar problem with Q&A)

One idea that occurs to me is to personalise Recent Discussion on the homepage. If I've read a post and even more if I've upvoted it then I'm likely to be interested in comments on that thread. If I've upvoted a comment then I'm likely to be interested in replies to that comment.

If Recent Discussion worked more like a personal recommendation section than a rough summary section then I think I'd get more out of it and probably be more motivated to post comments, knowing that people may well read them even if I'm replying to an old post.

Comment by bucky on What we talk about when we talk about life satisfaction · 2019-02-05T15:26:35.758Z · score: 3 (3 votes) · LW · GW

I suspect that for most people the reality is that they just anchor and adjust for this kind of question. Typically I'd expect an anchor at about 6-8 out of 10 (few people want to think they're unhappy) and then an adjustment +/- 1-2 depending on whether their current circumstances are better or worse than they think they should expect.

I'd assumed that the vagueness was more of a feature of the question than a bug. If you compare yourself to a billionaire then you will probably rate yourself lower than if you compare yourself to people around you. At the same time, if your instinct is to compare yourself with the billionaire then you probably are less satisfied in life than if you instinctively compare yourself to a more achievable datum. Thus the answer you provide tends to match the underlying reality, if by satisfaction we mean "lack of wishing things were different".

Comment by bucky on Urgent & important: How (not) to do your to-do list · 2019-02-02T19:31:45.664Z · score: 1 (1 votes) · LW · GW

I say I was taught it - it was more like my first boss saying to me “look, if you mark things as urgent/important then it helps you see which tasks you should prioritise”. I don’t think he mentioned delegation as I wouldn’t have had anyone to delegate to!

Comment by bucky on How would one go about defining the ideal personality compatibility test? · 2019-02-02T15:03:17.945Z · score: 3 (2 votes) · LW · GW

For-profits operate in a marketplace. Provided the marketplace is working, it doesn’t matter if they would rather keep people on the site by giving poor matches - if they don’t match people well then another company will give the people what they want and take their market share.

First you need to show that a company can operate against the interests of its customers for an extended time period without market consequences, then you can talk about what the company would like to do with this ability.

I don’t claim this can’t happen, just that the market working properly should be the null hypothesis.

Comment by bucky on How would one go about defining the ideal personality compatibility test? · 2019-02-01T21:14:07.821Z · score: 1 (1 votes) · LW · GW

I think I would have to pull Efficient Market Hypothesis on this and direct you to e.g. Match.com. Huge multinational dating sites are dependent on making good matches and I can’t think of any strong enough reasons to expect the market to be inefficient enough for me to do better.

You can read a bit about how Match.com do it here.

Comment by bucky on Urgent & important: How (not) to do your to-do list · 2019-02-01T20:22:32.101Z · score: 3 (2 votes) · LW · GW

Urgent/important was taught to me essentially in the way that your simpler version works. I didn’t even know how it was supposed to work a different way so I learnt something new.

I wonder how typical it is that people modify the Eisenhower box with a good dollop of common sense and end up with roughly what you describe. I definitely think this version makes more sense and like the hopscotch diagram.

Who wants to be a Millionaire?

2019-02-01T14:02:52.794Z · score: 29 (16 votes)
Comment by bucky on The Relationship Between Hierarchy and Wealth · 2019-01-24T15:43:13.439Z · score: 1 (1 votes) · LW · GW
If we're asking "what causes hierarchy?", then I'd expect the root answer to be "large-scale coordination problems with low communication requirements"

Nicely put. David Manheim has an interesting post on the need for legible structure when scaling from a startup to a large organisation.

Comment by bucky on What math do i need for data analysis? · 2019-01-19T22:07:31.915Z · score: 6 (4 votes) · LW · GW

I personally found the Udacity course helpful but I see that someone has done a comparison of all the online data science courses they could find here. Hopefully one of those might be what you’re looking for.

Comment by bucky on What are questions? · 2019-01-10T14:50:25.540Z · score: 30 (6 votes) · LW · GW

Questions in humans often involve some kind of status interaction. A question is not only (or even always) a request for information but often also represents an offer to trade status for information.

(I realise that this is focusing on a very narrow sub-field of the question asked but you did ask for unusual framings!)

In a canonical case of requesting unknown information, the asker is lowering his status relative to the askee in trade for the information. The act of asking implies that, at least in this area, the askee has superior knowledge to the asker, thus increasing the askee’s prestige. In doing so it provides a trade to the askee for an answer.

This status transfer is often the reason that questions go unasked. The common refrain “there’s no such thing as a stupid question” is there to try to overcome this reluctance to ask by lowering the status tax on question asking. I often tell new starts at my work to ask as many questions as possible in the first 6 months because you’ll feel less silly during this period (i.e. the status tax is lower as expectations of your knowledge are lower). Asking questions after that will be necessary but it will be emotionally more costly as time goes on. One weighs up (not necessarily consciously) whether the information required justifies the expenditure of status.

One obvious result is a preference to ask questions in a smaller group so that the asker’s status is lowered in fewer people’s opinions. Of course this is reversed if the question is really an excuse to show off one's own intelligence!

Similarly, one is tempted to search for information without making it obvious that you are asking a question thereby maintaining plausible deniability.

Questions generally being low status can be used to your advantage. In general you won’t tell your boss if he’s wrong but you might ask a question obliquely which might make him consider for himself that he might be wrong. If this works then you can correct his mistake without threatening his status (Caution: use with care!).

A manager might also use questions to correct an employee in order to be less status threatening to the employee.

I feel like questions in humans can’t be fully understood without some form of status interaction.

Meta Note

Having the question feature on LessWrong is interesting as it emphasizes that question asking is not a low status activity on the site. The potential issue is that askees might not feel like spending the time to answer a question provides sufficient reward. If this is overcome then I think the Q&A feature would count as a positive sum status interaction (at least on a status adaptation level).

I think responders need to be careful to avoid putting the status tax back on asking questions (e.g. by implying the question is stupid or the answer is obvious). I realise the need to distinguish good and bad questions (for clarity, I would include this question in the former) but I would prefer this to be done via moderation policy.

The issue with the status tax is that it optimises for whether an individual needs the information rather than whether it is a generally good question. For example, if a question is something that a lot of people are interested in but no individual really desperately needs to know, then the status tax makes it less likely to be asked. A good moderation policy should be able to encourage such questions.

(I've often had the experience that when someone finally overcomes their reluctance to pay the status tax and asks a question suddenly everyone says that they were wanting to ask that too - I wonder how often such questions just don't get asked).

Comment by bucky on What do you do when you find out you have inconsistent probabilities? · 2019-01-01T13:32:34.346Z · score: 1 (1 votes) · LW · GW

Argh, you’re right, I didn’t check that one. P(OM) cancels in the P(G) equation so that one isn’t over-constrained.

However, for the P(OM) equation 4 variables is over-constrained; 3 is enough.

Comment by bucky on What do you do when you find out you have inconsistent probabilities? · 2019-01-01T08:19:22.890Z · score: 1 (1 votes) · LW · GW

The 4 given probabilities are actually perfectly consistent within the equations you are using. It is provable that whatever 4 probabilities you use the equations will be consistent.

Therefore the question becomes “where did my maths go wrong?”

P(G|OM) = 0.055, not 0.55

I’m pretty confident that the only way probabilities can actually be inconsistent is if the system is over-constrained (e.g. in this case defining 5 relevant probabilities instead of 4). The whole point of having axioms is to prevent inconsistencies provided you stay inside them.
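A minimal sketch of the point, with made-up numbers (the original post's values aren't reproduced here): choose any three probabilities freely and Bayes' rule fixes the rest, so no inconsistency can arise.

```python
# Pick any three of the quantities freely; Bayes' rule derives the others,
# so the resulting set can't be inconsistent. Numbers are illustrative only.
p_g = 0.1                # P(G)
p_om_given_g = 0.5       # P(OM|G)
p_om_given_not_g = 0.9   # P(OM|~G)

# The remaining quantities are derived, not chosen independently:
p_om = p_om_given_g * p_g + p_om_given_not_g * (1 - p_g)
p_g_given_om = p_om_given_g * p_g / p_om

print(f"P(OM) = {p_om:.3f}, P(G|OM) = {p_g_given_om:.3f}")
```

Trouble only starts if you also write down P(OM) or P(G|OM) by hand as a fourth independent number, at which point the system is over-constrained and any mismatch signals an arithmetic slip rather than genuinely inconsistent beliefs.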

P.S. Good job on noticing your confusion!

Comment by bucky on Thoughts on Q&A so far? · 2018-12-31T23:32:32.202Z · score: 13 (5 votes) · LW · GW

Firstly I think the feature is a great idea and is working pretty well for a prototype.

Ideally the feature needs to do at least 3 things:

Get questions noticed

Get questions answered well

Present good Q/A combinations to the community

The current practice of promoting interesting questions to frontpage does the first but could be detrimental to the other 2 - many users will see the question but not get round to reopening and looking at the answers. This encourages people to answer quickly to get their response read and discourages more detailed answers.

If you’re planning on developing the feature further then addressing this issue would really help get the best out of it. One option would be the ability to promote the best answer(s) to frontpage after a week or so.

Comment by bucky on Defining Freedom · 2018-12-22T20:08:43.189Z · score: 1 (1 votes) · LW · GW

Thanks, got it! I hadn’t read the post as advocating a lack of decision but re-reading it now I can see how you read it.

Comment by bucky on Defining Freedom · 2018-12-20T21:05:53.063Z · score: 2 (2 votes) · LW · GW

I'm very confused about this. Here's what I think you're saying:

Choices are bad, particularly with regards to regret. It is better to make a decision based on instinct and forget about it than to consider your options carefully and potentially regret the decision.

Is that about right or is there something that I'm missing?

Experiences of Self-deception

2018-12-18T11:10:26.965Z · score: 16 (5 votes)
Comment by bucky on Interpreting genetic testing · 2018-12-17T11:55:26.189Z · score: 5 (3 votes) · LW · GW

Thanks for writing this, it's not something I'd looked at before but I read some of the Promethease sample reports because you got me interested.

There does seem to be some weird normalisation going on when calculating magnitude. For instance this gene gives a score of 0 for (C;C) but bad-repute magnitudes of 2.7 and 3.1 for (C;G) and (G;G) respectively. So if you have (C;C) and you filter by magnitude 2, you miss out on the fact that you have an advantageous genotype.

This isn't a problem if (C;C) is extremely common but actually it's no more common than (C;G) (except for people of African descent), so the act of filtering prevents you from realising that you missed a 50:50ish chance of getting a disadvantageous genotype.

So probably to work this out properly you can't filter by magnitude - you'd have to open up every genotype's details to check what you've avoided getting hit by. You could only really work out how well you've done compared to other people where the data includes frequency, so you could see just how lucky/unlucky you got for a particular gene.
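A toy illustration of the filtering pitfall (the entries and frequencies below are made up, not real SNP data):

```python
# A magnitude filter hides the zero-magnitude "good" entry you actually have.
entries = [
    {"genotype": "(C;C)", "magnitude": 0.0, "repute": "good", "freq": 0.45},
    {"genotype": "(C;G)", "magnitude": 2.7, "repute": "bad",  "freq": 0.45},
    {"genotype": "(G;G)", "magnitude": 3.1, "repute": "bad",  "freq": 0.10},
]
my_genotype = "(C;C)"

# Filtering at magnitude >= 2 drops the entry for the genotype you carry,
# so you never learn you dodged a roughly 50:50 chance of a bad genotype.
visible = [e for e in entries if e["magnitude"] >= 2]
print(my_genotype in [e["genotype"] for e in visible])  # False - your own result is hidden
```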

Not all of the genotypes have this issue - for instance this gene seems to be more sensibly normalised. If they were all done like this then I'd be much happier with the system.

Comment by bucky on What went wrong in this interaction? · 2018-12-13T11:51:16.865Z · score: 4 (3 votes) · LW · GW

Good summary.

I'd like to add that sometimes definitions do matter, particularly in a public settings such as a blog. Even if t3tsubo and Benquo agree with both (a) and (b), it is possible that others reading the OP think that it is asserting (not-a).

If a significant number of people reading the blog are likely to think that it is asserting (not-a), rather than asserting (b), then it may be worth clarifying the OP to ensure that the correct message is received. I don't know whether this would be a common misunderstanding, I can only conclude that at least one person read the post as asserting (not-a).

Comment by bucky on The Bat and Ball Problem Revisited · 2018-12-13T09:56:01.389Z · score: 2 (2 votes) · LW · GW

This is pretty much the same for me. I think the solution to bat and ball of "10 cents - oh no, that doesn't work. Split the difference evenly for 5 cents? Yup, that's better" is all done on system 1.

Kahneman's examples of system 1 thinking include (I think) a chess grandmaster seeing a good chess move, so he includes the possibility of training your system 1 to be able to do more things. In the case of the OP, system 1 has been trained to really understand exponential growth and ratios. For me, both "quickly check that your answer is right" and "try something vaguely sensible and see what happens" are so ingrained as general principles that I don't have to exert effort to apply them to simple problems.

A problem which I would volunteer for a CRT is the snail climbing out of a well. Here there's an obvious but wrong answer but I think if you realise that it's wrong then the correct answer isn't too hard to figure out.
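Both puzzles reduce to one-line checks. A quick sketch, using the common textbook numbers for the snail (30ft well, climbs 3ft by day, slips back 2ft by night - assumed, not from the post):

```python
# Bat and ball: together $1.10, and the bat costs $1.00 more than the ball.
ball = (1.10 - 1.00) / 2     # split the 10-cent difference evenly
bat = ball + 1.00
assert abs(ball + bat - 1.10) < 1e-9
print(f"ball = {ball * 100:.0f} cents")   # 5 cents, not the intuitive 10

# Snail in a well: the obvious answer is 30 days (net 1ft/day), but on the
# final day the snail climbs out before it can slip back.
height, day = 0, 0
while True:
    day += 1
    height += 3          # daytime climb
    if height >= 30:
        break            # out of the well - no overnight slip
    height -= 2          # overnight slip
print(f"out on day {day}")   # 28
```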

Comment by bucky on Good Samaritans in experiments · 2018-12-08T09:13:12.322Z · score: 2 (2 votes) · LW · GW

From a frequentist perspective you're right.

From a Bayesian perspective, my prior would be that the GS condition would make people more likely to help. The likelihood calculation reinforces this belief. In terms of bits, my prior would have been (conservatively) 2-3 bits in favour of an effect and the experiment adds another 3-4(?) so I end up on about 6 bits. 64:1 is pretty good.
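For anyone unfamiliar with the bits framing: each bit of evidence doubles the odds ratio, so the arithmetic above is just:

```python
# Converting bits of belief to odds: each bit doubles the odds ratio.
# Using the comment's rough numbers: ~3 bits of prior plus ~3 bits from the
# experiment gives ~6 bits, i.e. 64:1 odds in favour of an effect.
prior_bits = 3
evidence_bits = 3

total_bits = prior_bits + evidence_bits
odds = 2 ** total_bits
probability = odds / (odds + 1)
print(f"{odds}:1 odds, P = {probability:.3f}")
```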

Comment by bucky on Worth keeping · 2018-12-07T18:07:47.801Z · score: 11 (7 votes) · LW · GW

I agree with what you say but feel like it’s the wrong kind of response for a post marked as speculative epistemic status. For such posts I think it’s fair to assume that the author knows they are over-simplifying and is just wanting to see where the idea goes.

Comment by bucky on Worth keeping · 2018-12-07T13:00:32.647Z · score: 2 (2 votes) · LW · GW
A related question is when you should let people know where they stand with you. Prima facie, it seems good to make sure people know when they are safe. But that means it also being clearer when a person is not safe, which has downsides.

An interesting question.

Even if you are not specific about where people stand with you, they have the evidence of your past actions. So whenever you decide whether to stick with a friend or give up will provide evidence to your other acquaintances as to what they can expect from you.

If one is perceived as being too quick to ditch friends, it probably decreases availability of replacement friends. On the other hand, someone who is extremely loyal is likely to have greater availability of friends (up to a limit!) but also less need for new friends.

This surplus may give leverage for things one cares about - one might say "I'll stand by my friends but I do expect they turn up when they say they'll turn up". Someone who is less loyal may not be able to be so picky.

Comment by bucky on Playing Politics · 2018-12-07T07:23:51.783Z · score: 4 (3 votes) · LW · GW

Just spotted this comment is being put at the bottom by the magical sorting algorithm despite its high karma - maybe an artefact of having been marked as spam?

Comment by bucky on Good Samaritans in experiments · 2018-12-05T23:47:56.394Z · score: 4 (3 votes) · LW · GW

Done :)

Comment by bucky on Good Samaritans in experiments · 2018-12-05T21:49:27.012Z · score: 5 (4 votes) · LW · GW

Thanks for the comments, I've added section headings so hopefully it reads a bit easier now.

To be honest I really didn't expect this to be as interesting to people as it was - glad to be proven wrong!

Comment by bucky on Playing Politics · 2018-12-05T16:48:48.568Z · score: 6 (4 votes) · LW · GW
Ever wonder why people reply more if you ask them for a meeting at 2pm on Tuesday, than if you offer to talk at whatever happens to be the most convenient time in the next month? The first requires a two-second check of the calendar; the latter implicitly asks them to solve a vexing optimisation problem.

My experience is that this also makes people more likely to show up at the agreed time (using this method was suggested to me when I was working with students who were notoriously bad at showing up).

Possibly phrasing it this way creates an artificial significance to the time suggested, I don't know, but it does seem to work. I generally offer as few options as is reasonable given other constraints.

Comment by bucky on Is Science Slowing Down? · 2018-11-30T13:22:56.687Z · score: 4 (3 votes) · LW · GW

Ok, I'm an idiot, this model doesn't predict exponentially increasing required inputs in time - the model predicts exponentially increasing required inputs against man-hours worked.

The relationship between required inputs and time is hyperbolic.

Comment by bucky on Is Science Slowing Down? · 2018-11-29T22:48:29.285Z · score: 4 (4 votes) · LW · GW

This is an interesting theory. I think it makes some different predictions to the low hanging fruit model.

For instance, this theory would appear to suggest that larger teams would be helpful. If Intel are not internally repeating the same research, then increasing their number of researchers should increase the discovery rate. If instead a new company employs the same number of new researchers, this will have minimal effect on the world discovery rate if they repeat what Intel is doing.

A simplistic low hanging fruit explanation does not distinguish between where the extra researchers are located.

Status model

2018-11-26T15:05:12.105Z · score: 28 (9 votes)

Bayes Questions

2018-11-07T16:54:38.800Z · score: 22 (4 votes)

Good Samaritans in experiments

2018-10-30T23:34:27.153Z · score: 127 (50 votes)

In praise of heuristics

2018-10-24T15:44:47.771Z · score: 44 (14 votes)

The tails coming apart as a strategy for success

2018-10-01T15:18:50.228Z · score: 33 (17 votes)

Defining by opposites

2018-09-18T09:26:38.579Z · score: 19 (10 votes)

Birth order effect found in Nobel Laureates in Physics

2018-09-04T12:17:53.269Z · score: 61 (19 votes)