Posts

What are principled ways for penalising complexity in practice? 2019-06-27T07:28:16.850Z · score: 42 (11 votes)
How is Solomonoff induction calculated in practice? 2019-06-04T10:11:37.310Z · score: 35 (7 votes)
Book review: My Hidden Chimp 2019-03-04T09:55:32.362Z · score: 31 (13 votes)
Who wants to be a Millionaire? 2019-02-01T14:02:52.794Z · score: 29 (16 votes)
Experiences of Self-deception 2018-12-18T11:10:26.965Z · score: 16 (5 votes)
Status model 2018-11-26T15:05:12.105Z · score: 28 (9 votes)
Bayes Questions 2018-11-07T16:54:38.800Z · score: 22 (4 votes)
Good Samaritans in experiments 2018-10-30T23:34:27.153Z · score: 128 (51 votes)
In praise of heuristics 2018-10-24T15:44:47.771Z · score: 44 (14 votes)
The tails coming apart as a strategy for success 2018-10-01T15:18:50.228Z · score: 33 (17 votes)
Defining by opposites 2018-09-18T09:26:38.579Z · score: 19 (10 votes)
Birth order effect found in Nobel Laureates in Physics 2018-09-04T12:17:53.269Z · score: 61 (19 votes)

Comments

Comment by bucky on Analysis of a Secret Hitler Scenario · 2019-08-23T11:36:17.646Z · score: 2 (2 votes) · LW · GW

Firstly, I really like this kind of thing and enjoyed your analysis.

One thing I think it misses out on is Marek's choice of who to inspect.

Liberal!Marek chooses without knowledge of who is fascist and who is liberal, so he has a 50:50 chance of selecting a fascist or a liberal. There is therefore a 50:50 chance of him selecting a fascist, outing them and getting into this argument. (I'm ignoring the possibility that Marek will just say nothing.)

Fascist!Marek already knows who is fascist/liberal and looking at the party membership card is a charade for him. He has 4 options:

1. Choose liberal, claim liberal

2. Choose liberal, claim fascist

3. Choose fascist, claim fascist

4. Choose fascist, claim liberal

On the surface option 3 doesn't seem likely. Options 1 and 2 are the options investigated in the OP (but assuming liberal was chosen by chance). Option 4 also seems like it might be used.

If we set option 4 to 0% then Marek is guaranteed to choose a liberal. Assuming the 50:50 bold/timid split for options 1 and 2, fascist!Marek then has a 50:50 chance of getting into this argument - the same as liberal!Marek - so this provides no evidence either way.

If we instead split the probabilities of options 1, 2 and 4 as 25%:25%:50% then we return to the result in the OP. So if option 4 is between 0% and 50% likely, the argument happening is somewhere between 0 and 1 bit of evidence in favour of Marek being liberal.
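To make the arithmetic concrete, here's a minimal sketch (my own toy calculation, assuming option 3 never happens and options 1 and 2 split the remaining probability evenly):

```python
import numpy as np

# Liberal!Marek: picks a fascist by chance half the time, so the argument
# happens with probability 0.5.
p_argument_liberal = 0.5

for p4 in np.linspace(0, 0.5, 6):
    p2 = (1 - p4) / 2            # option 2: choose liberal, claim fascist -> argument
    likelihood_ratio = p_argument_liberal / p2
    bits = np.log2(likelihood_ratio)
    print(f"P(option 4) = {p4:.1f}: {bits:.2f} bits of evidence that Marek is liberal")
```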

***

Of course fascist!Marek makes the choice between the 4 options in the knowledge that everyone already thinks he's probably a fascist (although probably not Hitler). This will affect his choice: he may be extra keen to send a signal that he isn't a fascist, so would ideally like to avoid accusing anyone, knowing that everyone will probably side with the person he accuses. He might choose option 1, as this will increase that person's trust in him and also cast doubt on that person in the minds of everyone else. Even option 3 might be appealing - it might harm Marek but it makes the person he accuses look very liberal.

But everyone knows that Marek is in this position and Marek knows that everyone knows so this begins to hurt my head and is also why this kind of game is amazing!

Harry, smiling, had asked Professor Quirrell what level he played at, and Professor Quirrell, also smiling, had responded, "One level higher than you." - HPMoR
Comment by bucky on Analysis of a Secret Hitler Scenario · 2019-08-23T10:00:22.844Z · score: 2 (2 votes) · LW · GW

The first mistake you mention is exactly the mistake I make when I don't convert to odds form as I mentioned here.

If I start with $P(\text{liberal}) = \frac{1}{2}$ and him accusing gives me 1 bit of evidence (he's twice as likely to accuse if he's liberal) then the temptation is to split the uncertainty in half and update incorrectly to $P(\text{liberal}) = \frac{3}{4}$.

Odds form helps - 1:1 becomes 2:1 after 1 bit of evidence so $P(\text{liberal}) = \frac{2}{3}$.

More formally:

$\frac{P(\text{liberal}\mid\text{accuse})}{P(\text{fascist}\mid\text{accuse})} = \frac{P(\text{liberal})}{P(\text{fascist})} \times \frac{P(\text{accuse}\mid\text{liberal})}{P(\text{accuse}\mid\text{fascist})} = \frac{1}{1} \times \frac{2}{1} = \frac{2}{1}$

Comment by bucky on Odds are not easier · 2019-08-21T22:38:23.408Z · score: 5 (3 votes) · LW · GW

I find if I try using probabilities in Bayes in my head then I make mistakes. If I start at 1/4 probability and get 1 bit of evidence to lower this further then I think “ok, I'll update to 1/8”. If I use odds I start at 1:3, update to 1:6 and get the correct posterior of 1/7.

So essentially I’m constantly going back and forth - like you I find probabilities easier to picture but find odds easier for updates.
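A minimal sketch of the mechanical difference, using a made-up helper function rather than anything from the post:

```python
def update(prior_prob, likelihood_ratio):
    """Bayesian update via odds form: convert to odds, multiply, convert back."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 1/4 prior and 1 bit of evidence against (likelihood ratio 1/2):
# odds go 1:3 -> 1:6, giving the correct posterior of 1/7 rather than 1/8.
print(update(0.25, 0.5))  # 0.142857... = 1/7
```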

Comment by bucky on Laplace Approximation · 2019-08-21T10:51:28.626Z · score: 1 (1 votes) · LW · GW

For an introduction to MCMC aimed at a similar target audience, I found this explanation helpful.

Comment by bucky on Why do humans not have built-in neural i/o channels? · 2019-08-11T20:19:02.571Z · score: 1 (1 votes) · LW · GW

Communication requires both input and output channels. All of the instances I can think of from the animal world involve a sense (hearing, sight, smell, touch) which evolved for a different benefit. An output can then evolve to communicate using this sense as the input.

This seems orders of magnitude less complex than evolving input and output simultaneously, which would be required for direct brain communication (at least I can't think of another option).

Even if it could potentially happen, before it did there would be many instances of indirect communication evolving. Take-off happening first in a species with indirect communication is a fairly inevitable consequence of the relative complexity of the evolutions required.

Comment by bucky on Why Subagents? · 2019-08-02T21:28:00.482Z · score: 3 (2 votes) · LW · GW

Imagine a second agent which has the same preferences but an anti-status-quo preference between mushroom and pepperoni.

This would be exploitable by a third agent who is able to compare mushroom and pepperoni but assigns equal utilities to both. However the original agent described in the OP would not be able to exploit agent 2 (if agent 1's status-quo bias is larger than agent 2's anti-status-quo bias), so agent 3 dominates agent 1 in terms of performance.

Over multiple dimensions agent 3 becomes much more complex than agent 1. Having a status quo bias makes sense as a way to avoid being exploited whilst also being less computationally expensive than tracking or calculating every preference ordering.

Assuming agent 2 is rare, the loss incurred by not being able to exploit others is small.

Comment by bucky on Drive-By Low-Effort Criticism · 2019-07-31T19:41:15.570Z · score: 7 (4 votes) · LW · GW
Start with lower-effort posts, to get a sense of how people react to the headline and thesis statement.

Shortform seems like a great way to do this.

Comment by bucky on From Laplace to BIC · 2019-07-24T22:24:41.147Z · score: 1 (1 votes) · LW · GW

In dropping the Hessian determinant term I think we're removing all of the widths of the peak in the various dimensions. So in the case where the widths are radically different between the models this would mean that N would need to be even larger for BIC to be a useful approximation.

The widths issue might come up, for example, when an additional parameter is added which splits the data into 2 populations with drastically different population sizes - the small population is likely to have a wider peak.

Is that right?
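To make the question concrete, here's a toy sketch of what I mean, assuming a one-parameter Bernoulli model with a uniform prior (my own example, not from the sequence):

```python
import numpy as np

# Toy model: Bernoulli coin with parameter theta, uniform prior on [0, 1].
N, heads = 50, 32
theta_hat = heads / N
log_like_max = heads * np.log(theta_hat) + (N - heads) * np.log(1 - theta_hat)

# Curvature of the negative log-likelihood at the peak: this is the
# "width of the peak" information that BIC throws away.
hessian = heads / theta_hat**2 + (N - heads) / (1 - theta_hat)**2

log_evidence_laplace = log_like_max + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(hessian)
log_evidence_bic = log_like_max - 0.5 * np.log(N)  # k = 1 parameter

print(log_evidence_laplace, log_evidence_bic)
```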

Comment by bucky on Laplace Approximation · 2019-07-21T20:46:59.587Z · score: 1 (1 votes) · LW · GW

Thanks for this sequence, I've read each post 3 or 4 times to try to properly get it.

Am I right in thinking that in order to replace the prior density at the peak with 1 we not only require a uniform prior but also that the possible parameter values span unit volume?


Comment by bucky on What do you think of cognitive types and MBTI? What type are you? What do you think is the percentage of the 16 different personality types on LessWrong? · 2019-07-19T09:24:12.838Z · score: 4 (3 votes) · LW · GW

The last one appears to be 2016 (this was a slightly wider survey which included other rationalist communities) which was before the lesswrong 2.0 relaunch. I haven't heard of any plans for surveys - maybe a mod can fill us in.

Slatestarcodex does an annual survey of its readers. Scott pre-registers some investigations and then reports on results. This year, for example, he got a negative result on "Math preference vs Corn eating style" and more interesting results in the ongoing birth-order investigation.

Comment by bucky on What do you think of cognitive types and MBTI? What type are you? What do you think is the percentage of the 16 different personality types on LessWrong? · 2019-07-18T22:51:55.733Z · score: 23 (6 votes) · LW · GW

My own feelings on MBTI are similar to this SSC post - it's unscientific but manages to kinda work as long as you don't expect too much of it. I wouldn't make any life decisions based on it!

For the third part of the question we don't have to guess - the 2012 lesswrong survey included an MBTI question. Of the people who answered, 65% were INTP or INTJ, compared to 5-9% of Americans according to the MBTI website.

Comment by bucky on Let's Read: Superhuman AI for multiplayer poker · 2019-07-14T21:36:58.005Z · score: 7 (5 votes) · LW · GW

Thanks for this.

Nitpick:

The description of a big blind:

Big blind: the minimal money/poker chips that every player must bet in order to play. For example, $0.1 would be a reasonable amount in casual play.

sounds more like an ante than a big blind. This is important for understanding the discussion of limping in Ars Technica.

Comment by bucky on Book Review: The Secret Of Our Success · 2019-07-06T22:00:46.353Z · score: 1 (1 votes) · LW · GW

Yes, that’s definitely upward selection pressure but I think that’s more evidence for “ability to solve problems” being the cause of our intelligence rather than “ability to transmit culture”.

Most cultural processes could be transmitted by being shown what to do and punished if you do it wrong. Language makes it easier but isn't necessarily required. Chimps have some fairly complex tool kits, knowledge of which appears to be transmitted culturally.

Comment by bucky on Everybody Knows · 2019-07-05T05:28:44.180Z · score: 5 (5 votes) · LW · GW

A version of this that I hear fairly often is “it’s common sense that...”

It works in the same way in that it makes it socially costly to argue against but is more insidious than “everybody knows” (at least in my circles “it’s common sense” has more of a veneer of respectability).

Both also have their proper uses which I think makes the improper uses more difficult to counter.

Comment by bucky on What are principled ways for penalising complexity in practice? · 2019-06-30T21:44:37.387Z · score: 1 (1 votes) · LW · GW

Thanks for this. I’m trying to get an intuition on how this works.

My mental picture is to imagine the likelihood function with respect to theta of the more complex model. The simpler model is the equivalent of a square function with height of its likelihood and width 1.

The relative areas under the graphs reflect the marginal likelihoods of the models. So picturing the relative maximum likelihoods, and how sharp the peak of the more complex model is, gives an impression of the Bayes factor.
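As a toy numerical version of that picture (made-up numbers, purely illustrative):

```python
import numpy as np

# Complex model: likelihood(theta) averaged over a uniform prior on [0, 1],
# i.e. the area under a sharply peaked curve.
theta = np.linspace(0, 1, 10_001)
likelihood_complex = 1e-3 * np.exp(-0.5 * ((theta - 0.6) / 0.05) ** 2)
evidence_complex = np.sum(likelihood_complex) * (theta[1] - theta[0])  # Riemann sum

# Simple model: a "box" of width 1 at its single fixed likelihood value.
evidence_simple = 4e-4

print("Bayes factor (complex : simple) =", evidence_complex / evidence_simple)
```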

Does that work? Or is there a better mental model?

Comment by bucky on What's up with self-esteem? · 2019-06-25T13:36:40.094Z · score: 44 (11 votes) · LW · GW

From the literature on self esteem

Previously, I thought that self-worth was like an estimate of how valuable you are to your peers

is sociometer theory and

Now I think there's an extra dimension which has to do with simpler dominance-hierarchy behavior.

is hierometer theory.

Hierometer theory is relatively new (2016) and could be thought of as a subset of sociometer theory if sociometer theory is interpreted more broadly. Accordingly it has less research backing it up, and what research there is comes mostly from the original proponents of the theory.

This paper gives an introduction to both and a summary of evidence (I found this diagram a useful explanation of the difference). The paper suggests that both are true to some extent and complement each other.

I've included some quotes below.

Sociometer theory

Sociometer theory starts from the premise that human beings have a fundamental need to belong (Baumeister and Leary, 1995). Satisfying this need is advantageous: group members, when cooperating, afford one another significant opportunities for mutual gain (von Mises, 1963; Nowak and Highfield, 2011; Wilson, 2012). Accordingly, if individuals are excluded from key social networks, their prospects for surviving and reproducing are impaired. It is therefore plausible to hypothesize that a dedicated psychological system evolved to encourage social acceptance (Leary et al., 1995).
...
The original version of sociometer theory (Leary and Downs, 1995; Leary et al., 1995) emphasizes how self-esteem tracks social acceptance, by which is implied some sort of community belongingness, or social inclusion.
...
In contrast, the revised version (Leary and Baumeister, 2000) emphasizes how self-esteem tracks relational value, defined as the degree to which other people regard their relationship with the individual as important or valuable overall, for whatever reason.

Hierometer theory

Like sociometer theory, hierometer theory proposes that self-regard serves an evolutionary function. Unlike sociometer theory, it proposes that this function is to navigate status hierarchies. Specifically, hierometer theory proposes that self-regard operates both indicatively—by tracking levels of social status—and imperatively—by regulating levels of status pursuit (Figure 1).
...
Note here some key differences between hierometer theory and dominance theory (Barkow, 1975, 1980), another alternative to sociometer theory (e.g., Leary et al., 2001). Dominance theory, plausibly interpreted, states that self-esteem tracks, not levels of social acceptance or relational value, but instead levels of “dominance” or “prestige,” by which some social or psychological, rather than behavioral, construct is meant.
...
Accordingly, hierometer theory proposes that higher (lower) prior social status promotes a behavioral strategy of augmented (diminished) assertiveness, with self-regard acting as the intrapsychic bridge—in particular, tracking social status in the first instance and then regulating behavioral strategy in terms of it. Note that the overall dynamic involved is consolidatory rather than compensatory: higher rather than lower status is proposed to lead to increased assertiveness. In this regard, hierometer theory differs from dominance theory, which arguably implies that it is losses in social status that prompt attempts to regain it (Barkow, 1980).

Findings

... our findings are arguably consistent with the revised version of sociometer theory, which is equivocal about the type of relational value that self-esteem tracks, and by extension, the type of social acceptance that goes hand in hand with it. Indeed, hierometer theory, and the original version of sociometer theory, might each be considered complementary subsets of the revised version of sociometer theory, if the latter is construed very broadly as a theory which states that types of social relations (status, inclusion), which constitute different types of relational value, regulate types of behavioral strategies (assertiveness, affiliativeness) via types of self-regard (self-esteem, narcissism). If so, then our confirmatory findings for hierometer theory, and mixed findings for the original version of sociometer theory, would still suggest that the revised version of sociometer theory holds truer for agentic variables than for communal ones.
Comment by bucky on No, it's not The Incentives—it's you · 2019-06-16T11:30:22.371Z · score: 1 (1 votes) · LW · GW

Take out the “10mph over” and I think this would be both fairer than the existing system and more effective.

(Maybe some modification to the calculation of the average to account for queues etc.)

Comment by bucky on No, it's not The Incentives—it's you · 2019-06-16T10:57:20.507Z · score: 1 (1 votes) · LW · GW

On reflection I’m not sure “above average” is a helpful frame.

I think it would be more helpful to say someone being “net negative” should be a valid target for criticism. Someone who is “net positive” but imperfect may sometimes still be a valid target depending on other considerations (such as moving an equilibrium).

Comment by bucky on No, it's not The Incentives—it's you · 2019-06-15T20:40:54.975Z · score: 6 (3 votes) · LW · GW

Trying to steelman the quoted section:

If one were to be above average but imperfect (e.g. not falsifying data or p-hacking but still publishing in paid access journals) then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive but if they don’t consider themselves able to afford the penalty of being perfect then they leave and the field suffers.

I’m not sure I endorse the specific example there but in a personal example:

My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.

I do spend more time on these targets than would be optimal but I think I do this considerably less than is typical. I still overfocus on targets as I’ve been told in appraisals to do so.

If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.

Comment by bucky on Book Review: The Secret Of Our Success · 2019-06-07T23:23:35.711Z · score: 6 (4 votes) · LW · GW

Heinrich counters with his own Cultural Intelligence Hypothesis – humans evolved big brains in order to be able to maintain things like Inuit seal hunting techniques.

I can’t really see how this would work.

Partly this is because maintaining techniques like this doesn’t seem difficult enough to justify just how intelligent humans are - on a scale of chimp to human it seems like it’s more on the chimp end. The fact that inventing the technique is impressive doesn’t imply that learning the technique is impressive.

But mainly I can’t see the selection pressure for increasing intelligence. Not being able to remember the hunting technique is obviously bad but where is the upwards selection pressure?

I definitely agree that Cultural Intelligence is important and is one of the ways humans have used their intelligence but I think the Machiavellian Intelligence Hypothesis is a stronger candidate for the root cause.

Comment by bucky on Steelmanning Divination · 2019-06-06T09:38:22.694Z · score: 21 (11 votes) · LW · GW

In an innovation workshop we were taught the following technique:

Make a list of 6 things your company is good at

Make a list of 6 applications of your product(s)

Make a list of 6 random words (Disney characters? City names?)

Roll 3 dice and select the corresponding words from the lists. Think about those 3 words and see what ideas you can come up with based on them.

Everyone I spoke to agreed that this was the best technique which we were taught. I knew constrained creativity was a thing but I think using this technique really drove the point home. I don't think this is quite the same thing as traditional divination (e.g. you can repeat this a few times and then choose your best idea) but I wonder if it is relying on similar principles.

Comment by bucky on FB/Discord Style Reacts · 2019-06-06T07:30:24.627Z · score: 2 (2 votes) · LW · GW

"I especially like/benefited from this bit:

Quote from post/comment"

Comment by bucky on How is Solomonoff induction calculated in practice? · 2019-06-05T21:13:15.838Z · score: 3 (2 votes) · LW · GW

Well that explains why I was struggling to find anything online!

Thanks for the link, I’ve been going through some of the techniques.

Using AIC the penalty for each additional parameter is a factor of e. For BIC the equivalent is a factor of $\sqrt{n}$, so the more samples the more penalised a complex model is. For large n the models diverge - are there principled methods for choosing which regularisation to use?
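For concreteness, a minimal sketch of how the two break-even factors scale, assuming only the standard AIC/BIC definitions:

```python
import numpy as np

# Break-even likelihood gain needed to justify one extra parameter:
# AIC adds 2 to the deviance per parameter  -> factor e on the likelihood.
# BIC adds ln(n) per parameter              -> factor sqrt(n) on the likelihood.
for n in (10, 100, 10_000):
    print(f"n = {n:>6}: AIC factor = {np.e:.2f}, BIC factor = {np.sqrt(n):.1f}")
```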

Comment by bucky on How is Solomonoff induction calculated in practice? · 2019-06-05T19:27:55.049Z · score: 2 (2 votes) · LW · GW

Yes, this is helpful - I had thought of Solomonoff induction as only being calculating the prior but it’s helpful to understand the terminology properly.

Comment by bucky on Book review: The Sleepwalkers by Arthur Koestler · 2019-05-31T11:10:39.203Z · score: 3 (2 votes) · LW · GW

If the curves are constructed randomly and independently then in some cases a linear relationship would be implied by the central limit theorem.

Not sure if this is helpful or not - CLT assumptions may or may not be valid in the instances you're thinking of. I think my brain just went "Sum of many different variables leading to a surprising regular pattern? That reminds me of CLT".

Comment by bucky on Simple Rules of Law · 2019-05-20T08:03:53.554Z · score: 1 (1 votes) · LW · GW

For [L], what would be the effect of scenario 1.5 - CEOs are fired if (but not only if) they are judged to be bad for the stock price?

There would be an option that if the CEO is fired for other reasons than the prediction market that the market doesn't pay out and all bets are refunded - not sure if this would help or hinder!


Note: There's an unfinished sentence in this section, end of 3rd to last paragraph

So I think that realistically
Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-03T22:29:51.008Z · score: 2 (2 votes) · LW · GW

I wonder what would happen if one were to remove b and play the game iteratively. The game stops after 50 iterations or the first time S fails the test or defects.

b is then essentially replaced by S’s expected payoff over the remaining iterations if he remains loyal. However M would know this value so the game might need further modification.

Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-02T10:24:29.693Z · score: 3 (2 votes) · LW · GW

Thanks for posting, I had fun trying to solve it and I think I learned a few things.

My solution is below (I think this is correct but I’m no expert) but I’ve hidden it in a spoiler in case you’re still wanting to figure it out yourself!

M has preference order of . He wants to set r such that if S has then S will pass the test and then remain loyal. If S has then M wants S to fail the test and therefore not get the chance to defect in round 2. It is common knowledge that this is what M wants.

Starting by making S’s Payoff for 2b less than that for 1 gives a formula for r:

for some small positive

With this value for r, S’s payoff matrix becomes:

1.

2a.

2b.

We can see that if then S’s best payoff is obtained by choosing 2a. Otherwise his best payoff is 1. This is exactly what M wants - he has changed S's payoffs to make S's preference order the same as his to the greatest extent possible.

Due to M's preference being common knowledge, S knows that M will choose this value of r and therefore knows what v is before he chooses whether to pass the test () and can choose between the three options simultaneously.

This is an interesting result as M's decision on r does not depend on the tax rate - he must always set an obedience test to be slightly more aversive than the entire value that is at stake. The tax rate only affects whether S will choose to pass the test.

Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-02T07:28:55.913Z · score: 1 (1 votes) · LW · GW

Thanks

Comment by bucky on My poorly done attempt of formulating a game with incomplete information. · 2019-05-01T19:48:50.653Z · score: 4 (3 votes) · LW · GW

Comment removed until I can figure out getting spoilers to work

Comment by bucky on The Politics of Age (the Young vs. the Old) · 2019-03-24T17:02:13.434Z · score: 2 (2 votes) · LW · GW

Another example is the Scottish independence referendum 2014 where 16 & 17 year olds were allowed to vote for the first time. Apparently in general the younger someone was the more likely they were to vote for independence but those <24 reversed that trend.

https://www.bbc.co.uk/news/uk-scotland-glasgow-west-34283948

I’m skeptical that 16-17 year olds would have changed the Brexit result given that Leave won by 1.3 million votes and there are only about 1.5 million 16-17 year olds in the UK. Roughly eyeballing the numbers, allowing 16-17 year olds to vote might cause a 1% swing towards Remain, so it could make a difference if a second referendum is called.

Comment by bucky on The Game Theory of Blackmail · 2019-03-23T13:08:43.796Z · score: 1 (1 votes) · LW · GW

Cooperate-cooperate is Pareto optimal (even when including mixed strategies).

Am I right in thinking cooperate-defect is also Pareto optimal for both games (although obviously not optimal for total utility)? If they are iterated then a set of results is Pareto optimal provided at least one person cooperated in every round.

Comment by bucky on What societies have ever had legal or accepted blackmail? · 2019-03-18T16:36:28.656Z · score: 2 (2 votes) · LW · GW

I think there's a crossed wire here. I read Dagon as claiming that hypocrisy is prohibited but rarely enforced, rather than blackmail is prohibited but rarely enforced. I take it from "crime" that you understand the latter.

In my interpretation the statement would be that hypocrisy is frowned upon by society but the norm of non-hypocrisy is not enforced via blackmail.

Comment by bucky on How to Understand and Mitigate Risk · 2019-03-14T16:24:03.029Z · score: 2 (2 votes) · LW · GW

Great post.

Can you clarify for me:

Are "Skin in the game", "Barbell", "Hormesis", "Evolution" and "Via Negativa" considered to be subsets of "Optionality"

OR

Are all 6 ("Skin in the game", "Barbell", "Hormesis", "Evolution", "Via Negativa" AND "Optionality") subsets of "Anti-fragility"?

I understood the latter from the wording of the post but the former from the figure at the top. Same with "Effectuation" and "Pilot in plane" etc.

Comment by bucky on Blackmailers are privateers in the war on hypocrisy · 2019-03-14T10:10:51.832Z · score: 3 (3 votes) · LW · GW
Licit blackmail at scale wouldn't just punish people for hypocrisy - it would reveal the underlying rate of hypocrisy.

I'm not sure this works. If blackmail is common then people will know how often certain blackmail demands aren't paid but in order to know the underlying rate of hypocrisy you also need ratios for (hypocrisy):(blackmail) and (blackmail):(non-payment).

As those ratios depend on a number of variables I would imagine people would have very limited information on actual base rates.

Second, once people find out how common certain kinds of illicit behavior are, we should expect the penalties to be reduced.

Can you expand on the mechanism for this? Is it just that a person threatened with blackmail will be less likely to pay if someone else has already been outed for the same thing?

Comment by bucky on Renaming "Frontpage" · 2019-03-13T14:41:23.556Z · score: 1 (1 votes) · LW · GW

I like Whiteboard for Frontpage.

The only alternative I've thought of which might work is Origin (or Genesis) - intended connotation is both "place to start" and "new ideas".

Comment by bucky on Where to find Base Rates? · 2019-02-27T11:58:49.960Z · score: 1 (1 votes) · LW · GW

To be honest I'd just google that one but that didn't seem like very useful advice! My googling got me almost straight to this risk calculator used by NHS Scotland. Cross check this with a few other references from google and that's probably as good as anything I'd work out myself by going to the data - it's a well studied issue.

ONS is useful for base rates where google fails me.

Comment by bucky on Where to find Base Rates? · 2019-02-26T20:08:13.338Z · score: 4 (3 votes) · LW · GW

I often use the Office for National Statistics (UK)

Comment by bucky on De-Bugged brains wanted · 2019-02-23T20:46:04.134Z · score: 1 (1 votes) · LW · GW

I feel like we’re going over the same ground. I’m not sure there’s much more for me to add as I don’t know of any sites which I think would be the right match for you.

Comment by bucky on De-Bugged brains wanted · 2019-02-22T22:35:21.891Z · score: 1 (1 votes) · LW · GW

In the future, my advice to you would be:

Start small - what individual bias do you think you could explain best? How would you explain just that 1 small thing as simply and engagingly as possible?

Use the site questions feature - if you want examples from the community just ask the question without any commentary on who is/isn't debugged etc.

I suspect you have more learning to do before you really get LW rationality as G Gordon Worley III describes so it might be better to really get a handle on all this first.

Comment by bucky on De-Bugged brains wanted · 2019-02-22T20:55:32.489Z · score: 1 (1 votes) · LW · GW

I’ve read it and commented on it already. You can refer to that comment for my thoughts.

Concepts which I can’t find elsewhere are only good if they are accurate/helpful which I don’t believe they are.

I think in this case it is up to you to show that you’re right, rather than up to me to show you’re wrong.

Comment by bucky on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-22T15:02:30.203Z · score: 5 (5 votes) · LW · GW

Lack of follow-through means that too few people actually change and the new equilibrium is not achieved. This makes future coordination more difficult as people lose faith in coordination attempts in general.

If I were to be truly cynical I could create/join a coordination for something I was against, spam a lot of fake accounts, get the coordination conditions met and watch it fail due to poor follow-through. Now people lose faith in the idea of coordinating to make that change.

Not sure how likely this is, how easy it is to counter or how much worse than the status quo coordination attempts can get...

Comment by bucky on De-Bugged brains wanted · 2019-02-22T12:44:49.080Z · score: 6 (4 votes) · LW · GW

AFAIK there isn't a specific movement where the spread of rationality is its core aim. I can't speak for anyone else but my impression is that this kind of rationality is most likely to spread organically rather than from one big project. There are lots of communities which are working on rationality related projects and will welcome in whoever is interested. People here are more than happy to apply their rationality, just not necessarily in the project which you are prescribing. This is a rational response if they have a low expectation of success.

My issue here is that from witnessing your interactions so far I don't have very high expectations of your own personal emotional intelligence. Criticism of your ideas often seems to be met with hostility, exasperation and accusations of fallacies. Even if your ideas are correct this seems like a great way of alienating those you are asking for help. One of the key tenets of LW style rationality vs traditional rationality is dealing with the world as it is, not as we think it should be, and I don't feel like you're doing that.

Again, I could be wrong about this but the impressions that you give are key to getting people to co-operate with you.

I can understand your excitement at finding a community which represents some of the things where you've previously felt that you're on your own. However I think you would be wiser to take stock and learn before you try a project as ambitious as you are suggesting.

Comment by bucky on De-Bugged brains wanted · 2019-02-21T16:39:05.330Z · score: 2 (2 votes) · LW · GW

Firstly, let me say that I think the idea of bringing rationalism to the masses is a great idea. I think the best we have so far is HPMoR so that should be the standard to try to improve upon.

Secondly, it is a very difficult task, as you are aware. That means that my prior for any individual succeeding at this would be very low, even if I've seen lots of evidence showing that they have the kind of skill set that would be required. If I hadn't read HPMoR I would have put a low expectation on Eliezer managing it - he himself says he would have only put a 10% chance of the kind of success that it has achieved.

If I have yet to witness that individual's skills then my prior is tiny and I need a lot of evidence to suggest that they are capable. I think this is what you're seeing when you perceive a judgment on negative authority - I'm not saying you can't do it, only that I want more evidence before I believe that you can.

***

With your last post I think you were doing the right thing - putting your ideas out there and seeing what happens. Then if you've got it right people will start believing in your project more. I think where you went wrong on your last post was how you updated on the feedback you received. 2 hypotheses:

1. You are right and the community is full of people who don't realise

2. There are some issues which you were wrong about or stylistic choices which were unhelpful

I think the evidence is better for option 2 and that you would do better to modify what you've done based on the feedback.

If you are still convinced of option 1 then it's up to you to persuade the community why it is wrong. For ChristianKl's comment you could write the page of the proposed site where you give the evidence he requests. Reading between the lines I suspect that he disagrees with what you've said and that is why he wants you to provide the evidence, rather than purely that this would be the norm for LW. For my or Elo's comments you could persuade us that it really is as bad as you say.

***

In the future, my advice to you would be:

Start small - what individual bias do you think you could explain best? How would you explain just that 1 small thing as simply and engagingly as possible?

Use the site questions feature - if you want examples from the community just ask the question without any commentary on who is/isn't debugged etc.

Comment by bucky on Epistemic Tenure · 2019-02-19T22:27:28.565Z · score: -2 (3 votes) · LW · GW

Also, there's no irony if the downvoters do not believe I've earned any epistemic respect from previous comments, so they do not want to encourage my further commenting.

You’re right of course, I just found it amusing that someone would disagree that it’s a good idea to provide negative feedback and then provide negative feedback.

Comment by bucky on Epistemic Tenure · 2019-02-19T21:23:21.893Z · score: 3 (2 votes) · LW · GW

Thanks, that makes sense.

I completely empathise with worries about social pressures when I’m putting something out there for people to see. I don’t think this would apply to me in the generation phase but you’re right that my introspection may be completely off the mark.

My own experience at work is that I get ideas for improvements even when such ideas aren’t encouraged but maybe I’d get more if they were. My gut says that the level of encouragement mainly determines how likely I am to share the ideas but there could be more going on that I’m unaware of.

Comment by bucky on Epistemic Tenure · 2019-02-19T19:44:25.737Z · score: 5 (4 votes) · LW · GW

Putting myself in Bob’s shoes I’m pretty sure I would just want people to just be straight with me and give my idea the attention that they feel it deserves. I’m fairly confident this wouldn’t have a knock on effect to my ability to generate ideas. I’m guessing from the post that Scott isn’t sure this would be true of him (or maybe you’re more concerned for others than you would be for yourself?).

I’d be interested to hear other people’s introspections on this.

Comment by bucky on Epistemic Tenure · 2019-02-19T18:56:49.811Z · score: -5 (4 votes) · LW · GW

Just want to check that whoever downvoted Dagon’s comment sees the irony? :)

(Context: At time of writing the parent comment was at -1 karma)

Comment by bucky on Avoiding Jargon Confusion · 2019-02-19T13:37:00.899Z · score: 3 (2 votes) · LW · GW

The fact that there are subtly different purposes for the alternative naming schema could be a strength.

If I'm talking about biases I might talk about s1/s2. If I'm talking about motivation I might go for elephant/rider. If I'm talking about adaptations being executed I'd probably use blue minimising robot/side module.

I'm not sure whether others do something similar but I find the richness of the language helpful to distinguish in my own mind the subtly different dichotomies which are being alluded to.

Comment by bucky on Avoiding Jargon Confusion · 2019-02-18T11:28:57.740Z · score: 14 (4 votes) · LW · GW

Another option might be to use a word without any baggage. For example, Moloch seems to have held onto its original meaning pretty well but then maybe that's because the source document is so well known.

EDIT: I see The sparkly pink ball thing makes a similar point.