Comment by bucky on No, it's not The Incentives—it's you · 2019-06-16T11:30:22.371Z · score: 1 (1 votes) · LW · GW

Take out the “10mph over” and I think this would be both fairer than the existing system and more effective.

(Maybe some modification to the calculation of the average to account for queues etc.)

Comment by bucky on No, it's not The Incentives—it's you · 2019-06-16T10:57:20.507Z · score: 1 (1 votes) · LW · GW

On reflection I’m not sure “above average” is a helpful frame.

I think it would be more helpful to say someone being “net negative” should be a valid target for criticism. Someone who is “net positive” but imperfect may sometimes still be a valid target depending on other considerations (such as moving an equilibrium).

Comment by bucky on No, it's not The Incentives—it's you · 2019-06-15T20:40:54.975Z · score: 6 (3 votes) · LW · GW

Trying to steelman the quoted section:

If one were to be above average but imperfect (e.g. not falsifying data or p-hacking but still publishing in paid access journals) then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive but if they don’t consider themselves able to afford the penalty of being perfect then they leave and the field suffers.

I’m not sure I endorse the specific example there but in a personal example:

My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.

I do spend more time on these targets than would be optimal but I think I do this considerably less than is typical. I still overfocus on targets as I’ve been told in appraisals to do so.

If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.

Comment by bucky on Book Review: The Secret Of Our Success · 2019-06-07T23:23:35.711Z · score: 5 (3 votes) · LW · GW

Henrich counters with his own Cultural Intelligence Hypothesis – humans evolved big brains in order to be able to maintain things like Inuit seal hunting techniques.

I can’t really see how this would work.

Partly this is because maintaining techniques like this doesn’t seem difficult enough to justify just how intelligent humans are - on a scale of chimp to human it seems like it’s more on the chimp end. The fact that inventing the technique is impressive doesn’t imply that learning the technique is impressive.

But mainly I can’t see the selection pressure for increasing intelligence. Not being able to remember the hunting technique is obviously bad but where is the upwards selection pressure?

I definitely agree that Cultural Intelligence is important and is one of the ways humans have used their intelligence but I think the Machiavellian Intelligence Hypothesis is a stronger candidate for the root cause.

Comment by bucky on Steelmanning Divination · 2019-06-06T09:38:22.694Z · score: 19 (9 votes) · LW · GW

In an innovation workshop we were taught the following technique:

Make a list of 6 things your company is good at

Make a list of 6 applications of your product(s)

Make a list of 6 random words (Disney characters? City names?)

Roll 3 dice and select the corresponding words from the lists. Think about those 3 words and see what ideas you can come up with based on them.
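As a rough sketch of the mechanics (the lists below are invented for illustration, not the ones from the workshop):

```python
import random

# Hypothetical example lists - in practice you'd fill these in for your own company/product
strengths = ["precision machining", "fast delivery", "low-volume runs",
             "materials expertise", "regulatory knowledge", "in-house testing"]
applications = ["medical devices", "drones", "lab equipment",
                "automotive sensors", "wearables", "agricultural monitoring"]
random_words = ["Aladdin", "Tokyo", "Hercules", "Reykjavik", "Mulan", "Cairo"]

# Roll one die per list and combine the selected words into a brainstorming prompt
rolls = [random.randint(1, 6) for _ in range(3)]
prompt = [lst[r - 1] for lst, r in zip((strengths, applications, random_words), rolls)]
print("Brainstorm around:", " / ".join(prompt))
```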

Everyone I spoke to agreed that this was the best technique which we were taught. I knew constrained creativity was a thing but I think using this technique really drove the point home. I don't think this is quite the same thing as traditional divination (e.g. you can repeat this a few times and then choose your best idea) but I wonder if it is relying on similar principles.

Comment by bucky on FB/Discord Style Reacts · 2019-06-06T07:30:24.627Z · score: 2 (2 votes) · LW · GW

"I especially like/benefited from this bit:

Quote from post/comment"

Comment by bucky on How is Solomonoff induction calculated in practice? · 2019-06-05T21:13:15.838Z · score: 3 (2 votes) · LW · GW

Well that explains why I was struggling to find anything online!

Thanks for the link, I’ve been going through some of the techniques.

Using AIC, the penalty for each additional parameter is a factor of e. For BIC the equivalent is a factor of √n, so the more samples, the more a complex model is penalised. For large n the two criteria diverge - are there principled methods for choosing which regularisation to use?
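For reference, a minimal sketch of the standard definitions I'm working from here (not anything specific to the linked techniques):

```python
import numpy as np

def aic(log_likelihood, k):
    # AIC = 2k - 2 ln(L): each extra parameter costs 2, i.e. a factor of e in likelihood
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # BIC = k ln(n) - 2 ln(L): each extra parameter costs ln(n), i.e. a factor of sqrt(n)
    return k * np.log(n) - 2 * log_likelihood

# Likelihood ratio needed to justify one extra parameter under each criterion
n = 1000
print(np.exp(2 / 2))          # AIC: e ~ 2.72, independent of sample size
print(np.exp(np.log(n) / 2))  # BIC: sqrt(n) ~ 31.6 for n = 1000
```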

Comment by bucky on How is Solomonoff induction calculated in practice? · 2019-06-05T19:27:55.049Z · score: 2 (2 votes) · LW · GW

Yes, this is helpful - I had thought of Solomonoff induction as only being the calculation of the prior, but it’s helpful to understand the terminology properly.

How is Solomonoff induction calculated in practice?

2019-06-04T10:11:37.310Z · score: 32 (6 votes)
Comment by bucky on Book review: The Sleepwalkers by Arthur Koestler · 2019-05-31T11:10:39.203Z · score: 3 (2 votes) · LW · GW

If the curves are constructed randomly and independently then in some cases a linear relationship would be implied by the central limit theorem.

Not sure if this is helpful or not - CLT assumptions may or may not be valid in the instances you're thinking of. I think my brain just went "Sum of many different variables leading to a surprising regular pattern? That reminds me of CLT".

Comment by bucky on Simple Rules of Law · 2019-05-20T08:03:53.554Z · score: 1 (1 votes) · LW · GW

For 1], what would be the effect of scenario 1.5 - CEOs are fired if (but not only if) they are judged to be bad for the stock price?

One option would be that if the CEO is fired for reasons other than the prediction market, the market doesn't pay out and all bets are refunded - not sure if this would help or hinder!


Note: There's an unfinished sentence in this section, at the end of the 3rd-to-last paragraph:

"So I think that realistically"

Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-03T22:29:51.008Z · score: 2 (2 votes) · LW · GW

I wonder what would happen if one were to remove b and play the game iteratively. The game stops after 50 iterations or the first time S fails the test or defects.

b is then essentially replaced by S’s expected payoff over the remaining iterations if he remains loyal. However M would know this value so the game might need further modification.

Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-02T10:24:29.693Z · score: 3 (2 votes) · LW · GW

Thanks for posting, I had fun trying to solve it and I think I learned a few things.

My solution is below (I think this is correct but I’m no expert) but I’ve hidden it in a spoiler in case you’re still wanting to figure it out yourself!

M has preference order of . He wants to set r such that if S has then S will pass the test and then remain loyal. If S has then M wants S to fail the test and therefore not get the chance to defect in round 2. It is common knowledge that this is what M wants.

Starting by making S’s Payoff for 2b less than that for 1 gives a formula for r:

for some small positive

With this value for r, S’s payoff matrix becomes:

1.

2a.

2b.

We can see that if then S’s best payoff is obtained by choosing 2a. Otherwise his best payoff is 1. This is exactly what M wants - he has changed S's payoffs to make S's preference order the same as his to the greatest extent possible.

Due to M's preference being common knowledge, S knows that M will choose this value of r and therefore knows what v is before he chooses whether to pass the test () and can choose between the three options simultaneously.

This is an interesting result as M's decision on r does not depend on the tax rate - he must always set an obedience test to be slightly more aversive than the entire value that is at stake. The tax rate only affects whether S will choose to pass the test.

Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-02T07:28:55.913Z · score: 1 (1 votes) · LW · GW

Thanks

Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-01T19:48:50.653Z · score: 4 (3 votes) · LW · GW

Comment removed until I can figure out getting spoilers to work

Comment by bucky on The Politics of Age (the Young vs. the Old) · 2019-03-24T17:02:13.434Z · score: 2 (2 votes) · LW · GW

Another example is the Scottish independence referendum 2014 where 16 & 17 year olds were allowed to vote for the first time. Apparently in general the younger someone was the more likely they were to vote for independence but those <24 reversed that trend.

https://www.bbc.co.uk/news/uk-scotland-glasgow-west-34283948

I’m skeptical that 16-17 year olds would have changed the Brexit result given that Leave won by 1.3 million votes and there are only 1.5 million 16-17 year olds in the UK. Roughly eyeballing the numbers, allowing 16-17 year olds to vote might cause a ~1% swing towards remain, so it could make a difference if a second referendum is called.
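For what it's worth, here is the rough arithmetic behind that eyeballing - the turnout and vote-split figures for 16-17 year olds are illustrative assumptions, not data:

```python
# Approximate official 2016 referendum totals
leave, remain = 17.4e6, 16.1e6

# Assumed figures for newly enfranchised 16-17 year olds (illustrative only)
new_voters = 1.5e6
turnout = 0.65        # assumed turnout in that age group
remain_share = 0.70   # assumed remain share in that age group

new_remain = new_voters * turnout * remain_share
new_leave = new_voters * turnout * (1 - remain_share)

old_pct = remain / (remain + leave) * 100
new_pct = (remain + new_remain) / (remain + leave + new_remain + new_leave) * 100
print(f"Remain share: {old_pct:.1f}% -> {new_pct:.1f}%")  # roughly half a point to a point
```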

Comment by bucky on The Game Theory of Blackmail · 2019-03-23T13:08:43.796Z · score: 1 (1 votes) · LW · GW

Cooperate-cooperate is Pareto optimal (even when including mixed strategies).

Am I right in thinking cooperate-defect is also Pareto optimal for both games (although obviously not optimal for total utility)? If they are iterated then a set of results is Pareto optimal provided at least one person cooperated in every round.

Comment by bucky on What societies have ever had legal or accepted blackmail? · 2019-03-18T16:36:28.656Z · score: 2 (2 votes) · LW · GW

I think there's a crossed wire here. I read Dagon as claiming that hypocrisy is prohibited but rarely enforced, rather than blackmail is prohibited but rarely enforced. I take it from "crime" that you understand the latter.

In my interpretation the statement would be that hypocrisy is frowned upon by society but the norm of non-hypocrisy is not enforced via blackmail.

Comment by bucky on How to Understand and Mitigate Risk · 2019-03-14T16:24:03.029Z · score: 2 (2 votes) · LW · GW

Great post.

Can you clarify for me:

Are "Skin in the game", "Barbell", "Hormesis", "Evolution" and "Via Negativa" considered to be subsets of "Optionality"

OR

Are all 6 ("Skin in the game", "Barbell", "Hormesis", "Evolution", "Via Negativa" AND "Optionality") subsets of "Anti-fragility"?

I understood the latter from the wording of the post but the former from the figure at the top. Same with "Effectuation" and "Pilot in plane" etc.

Comment by bucky on Blackmailers are privateers in the war on hypocrisy · 2019-03-14T10:10:51.832Z · score: 3 (3 votes) · LW · GW
Licit blackmail at scale wouldn't just punish people for hypocrisy - it would reveal the underlying rate of hypocrisy.

I'm not sure this works. If blackmail is common then people will know how often certain blackmail demands aren't paid, but in order to know the underlying rate of hypocrisy you also need the ratios (hypocrisy):(blackmail) and (blackmail):(non-payment).

As those ratios depend on a number of variables I would imagine people would have very limited information on actual base rates.
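As a toy illustration of why the observed rate of outings on its own can't pin down the hypocrisy rate (all numbers invented):

```python
# outings per capita = hypocrisy rate * P(blackmailed | hypocrisy) * P(not paid | blackmailed)
# Two very different worlds can produce the same observable rate of people being outed
world_a = dict(hypocrisy=0.40, blackmailed=0.10, not_paid=0.50)
world_b = dict(hypocrisy=0.05, blackmailed=0.80, not_paid=0.50)

for name, w in [("A", world_a), ("B", world_b)]:
    outings = w["hypocrisy"] * w["blackmailed"] * w["not_paid"]
    print(f"World {name}: outings per capita = {outings:.3f}")
# Both print 0.020, so observers can't back out the hypocrisy rate
# without independently knowing the other two ratios
```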

Second, once people find out how common certain kinds of illicit behavior are, we should expect the penalties to be reduced.

Can you expand on the mechanism for this? Is it just that a person threatened with blackmail will be less likely to pay if someone else has already been outed for the same thing?

Comment by bucky on Renaming "Frontpage" · 2019-03-13T14:41:23.556Z · score: 1 (1 votes) · LW · GW

I like Whiteboard for Frontpage.

The only alternative I've thought of which might work is Origin (or Genesis) - intended connotation is both "place to start" and "new ideas".

Book review: My Hidden Chimp

2019-03-04T09:55:32.362Z · score: 31 (13 votes)
Comment by bucky on Where to find Base Rates? · 2019-02-27T11:58:49.960Z · score: 1 (1 votes) · LW · GW

To be honest I'd just google that one but that didn't seem like very useful advice! My googling got me almost straight to this risk calculator used by NHS Scotland. Cross check this with a few other references from google and that's probably as good as anything I'd work out myself by going to the data - it's a well studied issue.

ONS is useful for base rates where google fails me.

Comment by bucky on Where to find Base Rates? · 2019-02-26T20:08:13.338Z · score: 4 (3 votes) · LW · GW

I often use the Office for National Statistics (UK)

Comment by bucky on De-Bugged brains wanted · 2019-02-23T20:46:04.134Z · score: 1 (1 votes) · LW · GW

I feel like we’re going over the same ground. I’m not sure there’s much more for me to add as I don’t know of any sites which I think would be the right match for you.

Comment by bucky on De-Bugged brains wanted · 2019-02-22T22:35:21.891Z · score: 1 (1 votes) · LW · GW

In the future, my advice to you would be:

Start small - what individual bias do you think you could explain best? How would you explain just that 1 small thing as simply and engagingly as possible?

Use the site questions feature - if you want examples from the community just ask the question without any commentary on who is/isn't debugged etc.

I suspect you have more learning to do before you really get LW rationality as G Gordon Worley III describes so it might be better to really get a handle on all this first.

Comment by bucky on De-Bugged brains wanted · 2019-02-22T20:55:32.489Z · score: 1 (1 votes) · LW · GW

I’ve read it and commented on it already. You can refer to that comment for my thoughts.

Concepts which I can’t find elsewhere are only good if they are accurate/helpful which I don’t believe they are.

I think in this case it is up to you to show that you’re right, rather than up to me to show you’re wrong.

Comment by bucky on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-22T15:02:30.203Z · score: 5 (5 votes) · LW · GW

Lack of follow-through means that too few people actually change and the new equilibrium is not achieved. This makes future coordination more difficult as people lose faith in coordination attempts in general.

If I were to be truly cynical I could create/join a coordination attempt for something I was against, spam a lot of fake accounts, get the coordination conditions met and watch it fail due to poor follow-through. Now people lose faith in the idea of coordinating to make that change.

Not sure how likely this is, how easy it is to counter or how much worse than the status quo coordination attempts can get...

Comment by bucky on De-Bugged brains wanted · 2019-02-22T12:44:49.080Z · score: 6 (4 votes) · LW · GW

AFAIK there isn't a specific movement where the spread of rationality is its core aim. I can't speak for anyone else but my impression is that this kind of rationality is most likely to spread organically rather than from one big project. There are lots of communities which are working on rationality related projects and will welcome in whoever is interested. People here are more than happy to apply their rationality, just not necessarily in the project which you are prescribing. This is a rational response if they have a low expectation of success.

My issue here is that from witnessing your interactions so far I don't have very high expectations of your own personal emotional intelligence. Criticism of your ideas often seems to be met with hostility, exasperation and accusations of fallacies. Even if your ideas are correct this seems like a great way of alienating those who you are asking for help. One of the key tenets of LW-style rationality vs traditional rationality is dealing with the world as it is, not as we think it should be, and I don't feel like you're doing that.

Again, I could be wrong about this but the impressions that you give are key to getting people to co-operate with you.

I can understand your excitement at finding a community which represents some of the things where you've previously felt that you're on your own. However I think you would be wiser to take stock and learn before you try a project as ambitious as you are suggesting.

Comment by bucky on De-Bugged brains wanted · 2019-02-21T16:39:05.330Z · score: 2 (2 votes) · LW · GW

Firstly, let me say that I think the idea of bringing rationalism to the masses is a great idea. I think the best we have so far is HPMoR so that should be the standard to try to improve upon.

Secondly, it is a very difficult task, as you are aware. That means that my prior for any individual succeeding at this would be very low, even if I've seen lots of evidence showing that they have the kind of skill set that would be required. If I hadn't read HPMoR I would have put a low expectation on Eliezer managing it - he himself says he would have only put a 10% chance of the kind of success that it has achieved.

If I have yet to witness that individual's skills then my prior is tiny and I need a lot of evidence to suggest that they are capable. I think this is what you're seeing when you perceive a judgment on negative authority - I'm not saying you can't do it, only that I want more evidence before I believe that you can.

***

With your last post I think you were doing the right thing - putting your ideas out there and seeing what happens. Then if you've got it right people will start believing in your project more. I think where you went wrong on your last post was how you updated on the feedback you received. 2 hypotheses:

1. You are right and the community is full of people who don't realise

2. There are some issues which you were wrong about or stylistic choices which were unhelpful

I think the evidence is better for option 2 and that you would do better to modify what you've done based on the feedback.

If you are still convinced of option 1 then it's up to you to persuade the community why it is wrong. For ChristianKl's comment you could write the page of the proposed site where you give the evidence he requests. Reading between the lines I suspect that he disagrees with what you've said and that is why he wants you to provide the evidence, rather than purely that this would be the norm for LW. For my or Elo's comments you could persuade us that it really is as bad as you say.

***

In the future, my advice to you would be:

Start small - what individual bias do you think you could explain best? How would you explain just that 1 small thing as simply and engagingly as possible?

Use the site questions feature - if you want examples from the community just ask the question without any commentary on who is/isn't debugged etc.

Comment by bucky on Epistemic Tenure · 2019-02-19T22:27:28.565Z · score: -2 (3 votes) · LW · GW

Also, there's no irony if the downvoters do not believe I've earned any epistemic respect from previous comments, so they do not want to encourage my further commenting.

You’re right of course, I just found it amusing that someone would disagree that it’s a good idea to provide negative feedback and then provide negative feedback.

Comment by bucky on Epistemic Tenure · 2019-02-19T21:23:21.893Z · score: 3 (2 votes) · LW · GW

Thanks, that makes sense.

I completely empathise with worries about social pressures when I’m putting something out there for people to see. I don’t think this would apply to me in the generation phase but you’re right that my introspection may be completely off the mark.

My own experience at work is that I get ideas for improvements even when such ideas aren’t encouraged but maybe I’d get more if they were. My gut says that the level of encouragement mainly determines how likely I am to share the ideas but there could be more going on that I’m unaware of.

Comment by bucky on Epistemic Tenure · 2019-02-19T19:44:25.737Z · score: 5 (4 votes) · LW · GW

Putting myself in Bob’s shoes I’m pretty sure I would just want people to just be straight with me and give my idea the attention that they feel it deserves. I’m fairly confident this wouldn’t have a knock on effect to my ability to generate ideas. I’m guessing from the post that Scott isn’t sure this would be true of him (or maybe you’re more concerned for others than you would be for yourself?).

I’d be interested to hear other people’s introspections on this.

Comment by bucky on Epistemic Tenure · 2019-02-19T18:56:49.811Z · score: -5 (4 votes) · LW · GW

Just want to check that whoever downvoted Dagon’s comment sees the irony? :)

(Context: At time of writing the parent comment was at -1 karma)

Comment by bucky on Avoiding Jargon Confusion · 2019-02-19T13:37:00.899Z · score: 3 (2 votes) · LW · GW

The fact that there are subtly different purposes for the alternative naming schema could be a strength.

If I'm talking about biases I might talk about s1/s2. If I'm talking about motivation I might go for elephant/rider. If I'm talking about adaptations being executed I'd probably use blue minimising robot/side module.

I'm not sure whether others do something similar but I find the richness of the language helpful to distinguish in my own mind the subtly different dichotomies which are being alluded to.

Comment by bucky on Avoiding Jargon Confusion · 2019-02-18T11:28:57.740Z · score: 14 (4 votes) · LW · GW

Another option might be to use a word without any baggage. For example, Moloch seems to have held onto its original meaning pretty well but then maybe that's because the source document is so well known.

EDIT: I see The sparkly pink ball thing makes a similar point.

Comment by bucky on Emotional Climate Change - an inconvenient idea · 2019-02-12T16:54:20.633Z · score: 5 (4 votes) · LW · GW

Just to provide a bit of feedback, this seems unnecessarily alarmist.

“Social media is bad” is a fairly standard trope and not something which will surprise many people. As a result, most people I know are aware of the problem and talk about how to use social media responsibly. In my experience young people are often most aware of this as they've been warned about the dangers for their entire lives. As a result they are often good at managing their social media use.

It's not perfect but it's not cataclysmic.

Comment by bucky on Probability space has 2 metrics · 2019-02-11T22:29:45.228Z · score: 1 (1 votes) · LW · GW

I like the theory. How would we test it?

We have a fairly good idea of how people weight decisions based on probabilities via offering different bets and seeing which ones get chosen.

I don't know how much quantification has been done on incorrect Bayesian updates. Could one offer trades where one of the options has been recommended by an "expert" who has correctly called a 50:50 question on a related topic x times in a row? How much do people adjust based on the evidence of the expert? This doesn't sound perfect to me - maybe someone else has a better version, or maybe people are already doing this research?!
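As a sketch of the benchmark such an experiment would be comparing against - the prior and the expert's assumed skill level below are made-up numbers:

```python
# Posterior that the "expert" has genuine insight after x correct 50:50 calls
prior_skilled = 0.1         # assumed prior probability of genuine skill
p_correct_if_skilled = 0.8  # assumed hit rate for a genuinely skilled expert
p_correct_if_not = 0.5      # an unskilled expert is just guessing

def posterior_skilled(x):
    num = prior_skilled * p_correct_if_skilled ** x
    denom = num + (1 - prior_skilled) * p_correct_if_not ** x
    return num / denom

for x in [1, 3, 5, 10]:
    print(x, round(posterior_skilled(x), 3))  # 0.151, 0.313, 0.538, 0.924
# Comparing how much subjects actually pay for the expert's recommendation against
# a curve like this would quantify how far their updating departs from the Bayesian answer
```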

Comment by bucky on Spaghetti Towers · 2019-02-11T09:16:17.621Z · score: 3 (2 votes) · LW · GW

I wonder to what extent tax codes are spaghetti towers - every time someone finds a loophole, a new bit of tax code gets added to close it without considering how to make a coherent whole. This would explain how the UK tax code runs to >10,000 pages.

Comment by bucky on The Case for a Bigger Audience · 2019-02-10T21:36:23.681Z · score: 2 (2 votes) · LW · GW

I like this idea. I can't find it now but I remember a recent comment suggesting that any post/comment which ends up with negative karma should have someone commenting on why they downvoted it, so that the feedback is practical.

To encourage commenters (and posters) without cluttering up the comments thread:

Non-substantive comments, collapsed by default, where voters can leave a couple of words as to why they voted the way they did.

Comment by bucky on The Case for a Bigger Audience · 2019-02-10T21:25:01.166Z · score: 10 (3 votes) · LW · GW

Just wanted to say I agree regarding the problems with conversation being "time driven" (I've previously suggested a similar problem with Q&A)

One idea that occurs to me is to personalise Recent Discussion on the homepage. If I've read a post and even more if I've upvoted it then I'm likely to be interested in comments on that thread. If I've upvoted a comment then I'm likely to be interested in replies to that comment.

If Recent Discussion worked more like a personal recommendation section than a rough summary section then I think I'd get more out of it and probably be more motivated to post comments, knowing that people may well read them even if I'm replying to an old post.

Comment by bucky on What we talk about when we talk about life satisfaction · 2019-02-05T15:26:35.758Z · score: 3 (3 votes) · LW · GW

I suspect that for most people the reality is that they just anchor and adjust for this kind of question. Typically I'd expect an anchor at about 6-8 out of 10 (few people want to think they're unhappy) and then an adjustment +/- 1-2 depending on whether their current circumstances are better or worse than they think they should expect.

I'd assumed that the vagueness was more of a feature of the question than a bug. If you compare yourself to a billionaire then you will probably rate yourself lower than if you compare yourself to people around you. At the same time, if your instinct is to compare yourself with the billionaire then you probably are less satisfied in life than if you instinctively compare yourself to a more achievable datum. Thus the answer you provide tends to match the underlying reality, if by satisfaction we mean "lack of wishing things were different".

Comment by bucky on Urgent & important: How (not) to do your to-do list · 2019-02-02T19:31:45.664Z · score: 1 (1 votes) · LW · GW

I say I was taught it - it was more like my first boss saying to me “look, if you mark things as urgent/important then it helps you see which tasks you should prioritise”. I don’t think he mentioned delegation as I wouldn’t have had anyone to delegate to!

Comment by bucky on How would one go about defining the ideal personality compatibility test? · 2019-02-02T15:03:17.945Z · score: 3 (2 votes) · LW · GW

For-profits operate in a marketplace. Provided the marketplace is working, it doesn’t matter if they would rather keep people on the site by giving poor matches - if they don’t match people well then another company will give the people what they want and take their market share.

First you need to show that a company can operate against the interests of its customers for an extended time period without market consequences, then you can talk about what the company would like to do with this ability.

I don’t claim this can’t happen, just that the market working properly should be the null hypothesis.

Comment by bucky on How would one go about defining the ideal personality compatibility test? · 2019-02-01T21:14:07.821Z · score: 1 (1 votes) · LW · GW

I think I would have to pull Efficient Market Hypothesis on this and direct you to e.g. Match.com. Huge multinational dating sites are dependent on making good matches and I can’t think of any strong enough reasons to expect the market to be inefficient enough for me to do better.

You can read a bit about how Match.com do it here.

Comment by bucky on Urgent & important: How (not) to do your to-do list · 2019-02-01T20:22:32.101Z · score: 3 (2 votes) · LW · GW

Urgent/important was taught to me essentially in the way that your simpler version works. I didn’t even know how it was supposed to work a different way so I learnt something new.

I wonder how typical it is that people modify the Eisenhower box with a good dollop of common sense and end up with roughly what you describe. I definitely think this version makes more sense and like the hopscotch diagram.

Who wants to be a Millionaire?

2019-02-01T14:02:52.794Z · score: 29 (16 votes)
Comment by bucky on The Relationship Between Hierarchy and Wealth · 2019-01-24T15:43:13.439Z · score: 1 (1 votes) · LW · GW
If we're asking "what causes hierarchy?", then I'd expect the root answer to be "large-scale coordination problems with low communication requirements"

Nicely put. David Manheim has an interesting post on the need for legible structure when scaling from a startup to a large organisation.

Comment by bucky on What math do i need for data analysis? · 2019-01-19T22:07:31.915Z · score: 6 (4 votes) · LW · GW

I personally found the Udacity course helpful but I see that someone has done a comparison of all the online data science courses they could find here. Hopefully one of those might be what you’re looking for.

Comment by bucky on What are questions? · 2019-01-10T14:50:25.540Z · score: 30 (6 votes) · LW · GW

Questions in humans often involve some kind of status interaction. A question is not only (or even always) a request for information but often also represents an offer to trade status for information.

(I realise that this is focusing on a very narrow sub-field of the question asked but you did ask for unusual framings!)

In a canonical case of requesting unknown information, the asker is lowering his status relative to the askee in trade for the information. The act of asking implies that, at least in this area, the askee has superior knowledge to the asker, thus increasing the askee’s prestige. In doing so it provides a trade to the askee for an answer.

This status transfer is often the reason that questions go unasked. The common refrain “there’s no such thing as a stupid question” is there to try to overcome this reluctance to ask by lowering the status tax on question asking. I often tell new starts at my work to ask as many questions as possible in the first 6 months because you’ll feel less silly during this period (i.e. the status tax is lower as expectations of your knowledge are lower). Asking questions after that will be necessary but it will be emotionally more costly as time goes on. One weighs up (not necessarily consciously) whether the information required justifies the expenditure of status.

One obvious result is a preference to ask questions in a smaller group so that the asker’s status is lowered in fewer people’s opinions. Of course this is reversed if the question is really an excuse to show off one's own intelligence!

Similarly, one is tempted to search for information without making it obvious that you are asking a question thereby maintaining plausible deniability.

Questions generally being low status can be used to your advantage. In general you won’t tell your boss if he’s wrong but you might ask a question obliquely which might make him consider for himself that he might be wrong. If this works then you can correct his mistake without threatening his status (Caution: use with care!).

A manager might also use questions to correct an employee in order to be less status threatening to the employee.

I feel like questions in humans can’t be fully understood without some form of status interaction.

Meta Note

Having the question feature on LessWrong is interesting as it emphasizes that question asking is not a low status activity on the site. The potential issue is that askees might not feel like spending the time to answer a question provides sufficient reward. If this is overcome then I think the Q&A feature would count as a positive sum status interaction (at least on a status adaptation level).

I think responders need to be careful to avoid putting the status tax back on asking questions (e.g. by implying the question is stupid or the answer is obvious). I realise the need to distinguish good and bad questions (for clarity, I would include this question in the former) but I would prefer this to be done via moderation policy.

The issue with the status tax is that it optimises for whether an individual needs the information rather than whether it is a generally good question. For example, if a question is something that a lot of people are interested in but no individual really desperately needs to know, then the status tax makes it less likely to be asked. A good moderation policy should be able to encourage such questions.

(I've often had the experience that when someone finally overcomes their reluctance to pay the status tax and asks a question suddenly everyone says that they were wanting to ask that too - I wonder how often such questions just don't get asked).

Comment by bucky on What do you do when you find out you have inconsistent probabilities? · 2019-01-01T13:32:34.346Z · score: 1 (1 votes) · LW · GW

Argh, you’re right, I didn’t check that one. P(OM) cancels in the P(G) equation so that one isn’t overconstrained.

However, for the P(OM) equation, 4 variables is overconstrained - 3 are enough.

Comment by bucky on What do you do when you find out you have inconsistent probabilities? · 2019-01-01T08:19:22.890Z · score: 1 (1 votes) · LW · GW

The 4 given probabilities are actually perfectly consistent within the equations you are using. It is provable that whatever 4 probabilities you use the equations will be consistent.

Therefore the question becomes “where did my maths go wrong?”

P(G|OM) = 0.055, not 0.55

I’m pretty confident that the only way probabilities can actually be inconsistent is if the system is overconstrained (e.g. in this case defining 5 relevant probabilities instead of 4). The whole point of having axioms is to prevent inconsistencies provided you stay inside them.
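A minimal sketch of the over-constraint point, with made-up numbers rather than the ones from your post:

```python
# Bayes' rule ties four quantities together: P(G|OM) * P(OM) = P(OM|G) * P(G)
# Fixing any three of them determines the fourth
p_g = 0.3           # assumed prior
p_om_given_g = 0.5  # assumed likelihood
p_om = 0.4          # assumed marginal

p_g_given_om = p_om_given_g * p_g / p_om
print(p_g_given_om)  # 0.375 - the fourth value is now forced

# Independently asserting a fourth value different from 0.375 would
# over-constrain the system and produce an apparent inconsistency
```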

P.S. Good job on noticing your confusion!

Comment by bucky on Thoughts on Q&A so far? · 2018-12-31T23:32:32.202Z · score: 13 (5 votes) · LW · GW

Firstly I think the feature is a great idea and is working pretty well for a prototype.

Ideally the feature needs to do at least 3 things:

Get questions noticed

Get questions answered well

Present good Q/A combinations to the community

The current practice of promoting interesting questions to frontpage does the first but could be detrimental to the other 2 - many users will see the question but not get round to reopening and looking at the answers. This encourages people to answer quickly to get their response read and discourages more detailed answers.

If you’re planning on developing the feature further then addressing this issue would really help get the best out of it. One option would be the ability to promote the best answer(s) to frontpage after a week or so.

Experiences of Self-deception

2018-12-18T11:10:26.965Z · score: 16 (5 votes)

Status model

2018-11-26T15:05:12.105Z · score: 28 (9 votes)

Bayes Questions

2018-11-07T16:54:38.800Z · score: 22 (4 votes)

Good Samaritans in experiments

2018-10-30T23:34:27.153Z · score: 127 (50 votes)

In praise of heuristics

2018-10-24T15:44:47.771Z · score: 44 (14 votes)

The tails coming apart as a strategy for success

2018-10-01T15:18:50.228Z · score: 33 (17 votes)

Defining by opposites

2018-09-18T09:26:38.579Z · score: 19 (10 votes)

Birth order effect found in Nobel Laureates in Physics

2018-09-04T12:17:53.269Z · score: 61 (19 votes)