Posts

80,000 Hours: EA and Highly Political Causes 2017-01-26T21:44:33.229Z
Dominic Cummings: how the Brexit referendum was won 2017-01-12T21:26:02.639Z
Rationality Considered Harmful (In Politics) 2017-01-08T10:36:37.384Z
A Review of Signal Data Science 2016-08-14T15:32:45.946Z
Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today! 2016-08-11T17:29:38.399Z
Request for help: Android app to shut down a smartphone late at night 2015-04-02T11:38:47.994Z
(misleading title removed) 2015-01-28T23:00:58.639Z

Comments

Comment by The_Jaded_One on Wrongology 101 · 2018-04-27T00:43:23.061Z · LW · GW

imagine you're in a fistfight with a hungry tiger. Do you want it to be a fair fight, or would you like to try and cheat somehow?

Comment by The_Jaded_One on Wrongology 101 · 2018-04-27T00:21:01.032Z · LW · GW

They are irrational. ... Structurally, we’re talking about a cartel, or a mob. Mob in both the “mafia” and the “riot” sense. Collusion to keep unmerited privilege

It is not irrational to want to hang on to power/privilege. A very basic property of a rational agent is to want to increase or at least not decrease its level of capability in the world. Fair competition, when you would lose, is irrational. In fact the very notion of "fair" is hard to define in a general way, but winning is less hard to define.

Similarly, it is not irrational to want to form a cartel or political ingroup. Quite the opposite. It's like the concept of an economic moat, but for humans.

all the millions of mental motions involved in trying to understand things accurately. The person who is wrong on purpose wants to just stop all of that motion, forever.

The most important thing for an agent is to preserve itself and its utility function. Perhaps Sartre's antisemites, though lacking in IQ, understood this better than we do.

Comment by The_Jaded_One on What Are The Chances of Actually Achieving FAI? · 2017-08-04T01:12:13.589Z · LW · GW

I don't think it will be very difficult to impart your intentions into a sufficiently advanced machine

Counterargument: it will be easy to impart an approximate version of your intentions, but hard to control the evolution of those values as you crank up the power. E.g. evolution made humans want sex, and then we invented condoms.

No-one will really care about this until it's way too late and we're all locked up in nice padded cells and drugged up, or something equally bad but hard for me to imagine right now.

Comment by The_Jaded_One on What Are The Chances of Actually Achieving FAI? · 2017-08-04T01:08:38.823Z · LW · GW

I think 50% is a reasonable belief given the very limited grasp of the problem we have.

Most of the weight on success comes from FAI being quite easy, and all of the many worries expressed on this site not being realistic. Some of the weight for success comes from a concerted effort to solve hard problems.

Comment by The_Jaded_One on Bridging the Intention-Behavior Gap (aka Akrasia) · 2017-08-01T21:11:24.279Z · LW · GW

I guess there is a gap between the OP's intention and his/her behaviour? The post was presumably intended to link to something, but it just links to itself.

Comment by The_Jaded_One on Tentative Thoughts on the Cost Effectiveness of the SENS Foundation · 2017-08-01T15:28:36.132Z · LW · GW

Thanks for your comment! Can you say which country?

Could you tell me how you came about the list of African backward values?

Not in particular; the human brain tends to collect overall impressions rather than keep track of sources.

I'd like the names of all the values I'd need to instil to avoid seeing preventable suffering around me.

This sounds like a seriously tough battle.

Comment by The_Jaded_One on Concrete Ways You Can Help Make the Community Better · 2017-06-24T12:26:50.804Z · LW · GW

Yeah, I mean maybe just make them float to the bottom?

Comment by The_Jaded_One on Concrete Ways You Can Help Make the Community Better · 2017-06-20T20:57:48.935Z · LW · GW

One problem here is that we are trying to optimize a thing that is broken on an extremely fundamental level.

Rationality, transhumanism, and hardcore nerdery in general attract a lot of extremely socially dysfunctional human beings. These communities also tend to skew towards a ridiculously biologically-male-heavy gender distribution.

Sometimes life throws unfair challenges at you; the challenge here is that ability and interest in rationality correlate negatively with being a well-rounded human.

We should search very hard for extreme out-of-the-box solutions to this problem.

One positive lead I have been given is that the anti-aging/life-extension community is a lot more gender balanced. Maybe LW should try to embrace that. It's not a solution, but that's the kind of thing I'm thinking of.

Comment by The_Jaded_One on Concrete Ways You Can Help Make the Community Better · 2017-06-20T20:47:34.768Z · LW · GW

agree with that isn't just “+1 nice post.” Here are some strategies...

How about the strategy of writing "+1 nice post"? Maybe we're failing to see the really blatantly obvious solution here...

+1 nice post btw

Comment by The_Jaded_One on [deleted post] 2017-06-14T19:50:03.340Z

someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child

They were just doing their part against dysgenics and should be commended.

Comment by The_Jaded_One on [deleted post] 2017-06-14T19:47:28.955Z

word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years

Sounds interesting, I'd like to hear more about this.

Comment by The_Jaded_One on How I'd Introduce LessWrong to an Outsider · 2017-05-04T17:49:36.910Z · LW · GW

My impression of the appeal of LW retrospectively is that it (on average) attracted people who were or are under performing relative to g (this applies to myself). When you are losing you increase variance. When you are winning you decrease it.

This also applies to me.

Comment by The_Jaded_One on The map of global catastrophic risks connected with biological weapons and genetic engineering · 2017-04-18T17:25:27.116Z · LW · GW

I think that there are a multipandemic of computer viruses, but most of them now are malware which is not destroying data, and they are in balance with antivirus systems.

Well... I don't know about this. If it's "in balance" and not actually destroying the hosts, then it's not really a pandemic in the sense that you were using above (where it kills 99.999% of hosts!).

Comment by The_Jaded_One on The map of global catastrophic risks connected with biological weapons and genetic engineering · 2017-04-16T22:25:15.934Z · LW · GW

But then why have we not seen a multipandemic of computer viruses?

Mostly (I assert) because the existence of an epidemic of virus A doesn't (on net) help virus B to spread.

Parasites which parasitize the same host tend to be in competition with each other (in fact, as far as I am aware, sophisticated malware today even contains antivirus code to clean out other infections); this is especially true if the parasites kill hosts.

I think a multipandemic is an interesting idea, though, and worthy of further investigation 👍

Comment by The_Jaded_One on The map of global catastrophic risks connected with biological weapons and genetic engineering · 2017-04-14T21:06:13.834Z · LW · GW

AFAIK anthrax is not transmissible between humans. See: https://en.wikipedia.org/wiki/Anthrax

In result there will be multipandemic with mortality 1 − 0.5^100 ≈ 0.99999

I don't think that's what would actually happen. Most likely, there would be a distribution over transmission rates. Some of your pathogens would be more infectious than others. The most infectious one or two of them would quickly outpace the transmission of all the others. It would be extremely hard to balance them so that they all had the same transmission rate.

The slower ones could be stranded by the deaths and precautions caused by the faster ones.
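For concreteness, a toy simulation of this dynamic (a minimal sketch: the growth rates are made-up numbers, and the model ignores the interaction effects just described):

```python
# Toy model: each pathogen grows exponentially at a slightly different rate.
# Small differences in transmission compound, so after enough generations
# the fastest-spreading pathogen accounts for almost all infections.

rates = [1.10, 1.12, 1.15, 1.20]   # hypothetical per-generation growth factors
infected = [1.0] * len(rates)      # each pathogen starts with one case

for generation in range(100):
    infected = [n * r for n, r in zip(infected, rates)]

total = sum(infected)
for r, n in zip(rates, infected):
    print(f"growth rate {r:.2f}: {100 * n / total:.4f}% of all infections")
```

After 100 generations the fastest pathogen accounts for well over 98% of all infections, despite starting from the same single case as the others.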

Comment by The_Jaded_One on OpenAI makes humanity less safe · 2017-04-03T20:50:44.166Z · LW · GW

That world is called the planet Vulcan.

Meanwhile, on earth, we are subject to common knowledge/signalling issues...

Comment by The_Jaded_One on Elon Musk launches Neuralink, a venture to merge the human brain with AI · 2017-04-01T09:49:28.050Z · LW · GW

It has been fairly standard LW wisdom for a long time that any kind of human augmentation is unhelpful for friendliness.

I think that we should be much less confident about this, and I welcome alternative efforts such as the neural lace.

Comment by The_Jaded_One on In support of Yak Shaving · 2017-04-01T09:46:57.359Z · LW · GW

I'm not 100% sure what the incentives for such people are, but it is a very small company.

Actually, yesterday this came back to bite them, and we now have a serious problem because my "fix this underlying system" advice was rejected.

Comment by The_Jaded_One on In support of Yak Shaving · 2017-03-19T11:42:06.379Z · LW · GW

We had this problem at work quite a few times. Bosses are reluctant to let me do something which will make things run more smoothly; they want new features instead.

Then when things break they're like, "What! Why is it broken again?!"

Comment by The_Jaded_One on LessWrong Discord · 2017-03-14T21:57:20.349Z · LW · GW

no options for encryption on the community

I've heard the CIA, the FBI and the Illuminati are all onto us. Strong encryption is not negotiable.

Why not go for something based on the matrix protocol

Maybe not everyone is ready to take the red pill?

Comment by The_Jaded_One on [deleted post] 2017-02-24T18:11:16.440Z

So if we can't downvote into oblivion, how do we get rid of shitposting in this place?

What is the algorithm that currently determines the placement of discussion articles? Oh it defaults to "new". Hmm. ok.

Then when you click "Top Scoring", it defaults to "All time".

When you manually select something more sensible, like this week or this month, this post disappears and you see some interesting articles.

Maybe the problem is not that this post hasn't been downvoted enough, but that we are not setting sensible defaults? Maybe we need to make the default some kind of semi-random selection which trades off quality against newness?
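For concreteness, a minimal sketch of what such a default could look like (the decay exponent, the jitter, and the `posts` list are all made-up for illustration):

```python
import random
import time

def front_page_score(upvotes, posted_at, gravity=1.8, jitter=0.2):
    """Trade quality (upvotes) against newness (age in hours), with a little
    randomness so decent older posts occasionally resurface."""
    age_hours = (time.time() - posted_at) / 3600.0
    base = upvotes / (age_hours + 2.0) ** gravity   # Hacker-News-style decay
    return base * random.uniform(1.0 - jitter, 1.0 + jitter)

# Recompute on each page load so the ordering is semi-random:
# posts.sort(key=lambda p: front_page_score(p.upvotes, p.posted_at), reverse=True)
```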

Comment by The_Jaded_One on The Semiotic Fallacy · 2017-02-22T17:43:38.342Z · LW · GW

Call this kind of reasoning the semiotic fallacy: Thinking about the semiotics of possible actions without estimating the consequences of the semiotics.

But you could equally well write a post on the "anti-semiotic fallacy" where you only think about the immediate and obvious consequences of an action, and not about the signals it sends.

I think that rationalists are much more susceptible to the anti-semiotic fallacy in our personal lives, and also, to an extent, when thinking about global or local politics and economics.

For example, I suspect that I suffered a lot of bullying at school for exactly the reason given in this post: being keen to avoid conflict in early encounters at a school (among other factors).

Comment by The_Jaded_One on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-18T10:24:03.796Z · LW · GW

I don't believe for one moment that using a Balrog analogy actually makes people understand the argument when they otherwise wouldn't.

I disagree; I think there is value in analogies when used carefully.

It is a fallacy to think of AI risk as like Balrogs because someone has written a plausible-sounding story comparing it to Balrogs.

Yes, I also agree with this; you have to be careful of implicitly using fiction as evidence.

Comment by The_Jaded_One on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-17T19:14:01.106Z · LW · GW

I think this is more useful as a piece that fleshes out the arguments; a philosophical dialogue.

Comment by The_Jaded_One on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-15T20:58:58.550Z · LW · GW

I have a higher probability of a group of very dedicated wizards succeeding, worth re-doing the above decision analysis with those assumptions

Then there is still the problem of how much time we leave for the wizards, and which mithril-mining approaches we should pursue (risky vs. safe).

Comment by The_Jaded_One on The "I Already Get It" Slide · 2017-02-15T18:23:49.115Z · LW · GW

Yes, definitely. The more you are in such a community, the more you can do this.

Comment by The_Jaded_One on The "I Already Get It" Slide · 2017-02-02T23:01:12.385Z · LW · GW

convince or be convinced

Isn't this kind of like Aumann's agreement theorem?

Are there any humans who meet that lofty standard?

Comment by The_Jaded_One on Civil resistance and the 3.5% rule · 2017-02-02T22:32:15.983Z · LW · GW

It seems kind of common sense that a small group of people using violence against a very large, well-armed group are going to have a tough time.

Comment by The_Jaded_One on Performance Trends in AI · 2017-01-29T21:20:49.170Z · LW · GW

It is definitely true that progress towards AGI is being made, if we count the indirect progress of more money being thrown at the problem. Importantly, perceptual challenges being solved means that there is now going to be a greater ROI for symbolic AI progress.

A world with lots of stuff that is just waiting for AGI-tech to be plugged into it is a world where more people will try hard to make that AGI-tech. Examples of 'stuff' would include robots, drones, smart cars, better compute hardware, corporate interest in the problem/money, highly refined perceptual algorithms that are fast and easy to use, lots of datasets, things like deepmind's universe, etc.

A lot of stuff that was created from 1960 to 1990 helped to create the conditions for machine learning: the internet, Moore's law, databases, operating systems, open-source software, a computer science education system, etc.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T14:49:39.822Z · LW · GW

Upvoted, and I encourage others to upvote for visibility.

Comment by The_Jaded_One on Performance Trends in AI · 2017-01-28T17:38:00.543Z · LW · GW

I might wonder if there are things humans can do with concepts and symbols and principles, the traditional tools of the “higher intellect”, the skills that show up on highly g-loaded tasks, that deep learning cannot do with current algorithms. ... So far, I think there is no empirical evidence from the world of deep learning to indicate that today’s deep learning algorithms are headed for general AI in the near future.

I strongly agree, and I think people at DeepMind already get this, because they are working on differentiable neural computers.

Another key point here is that hardware gains such as GPUs and Moore's law increase the returns to investing time and effort into software and research.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-28T14:45:36.579Z · LW · GW

And 80,000 hours is advertising that they aim to help everyone, but then they are funding an organisation that is explicitly aiming to favor certain groups. As I have already said, males are disproportionately incarcerated by a very large margin, and any realistic decrease in incarceration will therefore help males, but that fact is not being trumpeted. It's the color label that is getting extra special attention here and being promoted from a side effect of doing something else good to a goal in its own right.

IMO this is not a good thing to fund.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-28T14:28:13.199Z · LW · GW

Well, I am probably overstepping if I claim to know for certain that Prop 47 was a mistake. 80,000 hours is advertising that they will maintain public safety with their efforts in this area, but the consensus is that Prop 47 has done the exact opposite.

car burglaries are up 47 percent this year over 2014, while car thefts have risen 17 percent and robberies rose by 23 percent. In Los Angeles, overall crime is up 12.7 percent this year and violent crime rose almost 21 percent. That’s after 12 straight years of crime decreases in the state’s largest city.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-28T12:31:42.981Z · LW · GW

The focus on 'people of color' you picked up on is thus not necessarily indicative of a damaging bias here

But let's suppose that the most effective intervention in this field resulted in increasing the racial disparity in incarceration. Would ASJ pursue it? Can we take their outward focus on race as evidence that race-favoritism is a goal that they internally pursue, perhaps over and above the high-level goal that 80,000 hours advertises them under?

Does their focus on race bias them about where the tradeoff between incarceration and safety should be struck? For example,

ASJ aims to build on the successful strategies of Californians for Safety and Justice and its sister organization, Vote Safe, the 501c4 that launched and ran the successful Proposition 47

and what is Prop 47?

... offenders who knew the specifics of Prop 47 and how to use it to their advantage ... There was the thief in San Bernardino County who had been caught shoplifting with his calculator, which he said he used to make sure he never stole the equivalent of $950 or more.

and also:

known gang member near Palm Springs who had been caught with a stolen gun valued at $625 and then reacted incredulously when the arresting officer explained that he would not be taken to jail but instead written a citation. “But I had a gun. What is wrong with this country?”

The tradeoffs here are at least somewhat controversial.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-28T11:02:51.981Z · LW · GW

it seems quite intuitive that any effective approach to reducing mass incarceration in the U.S. will have its biggest impact in 'communities of color'

It is very hard for me to respond to this without breaking my own rules; "this post is not intended to start an object-level discussion about which race, gender, political movement or sexual orientation is cooler", but let me try.

First: 'people of color' is simply a Social Justice term meaning "not white", and explicitly includes (far east) Asian Americans. Without implying here any form of superiority, it is a fact that the incarceration rates for Asian Americans most certainly do not put them into the same broad category as other "people of color".

So in this context, the term "people of color" is not a category that carves reality at its joints. A Martian xenosociologist would not find the category "all people who are not white European" useful for trying to maximise the objective of "substantially reducing incarceration while maintaining public safety", when compared to the more natural categories of actual races. Uncharitably, one could explain the non-carving-at-joints term "people of color" as a brazen attempt to rope Asian Americans and other "model minorities" into a political coalition that actively harms them.

Second: the stated goal of 80,000 hours here is not to reduce incarceration. It is to reduce incarceration while maintaining public safety. The mere fact that more "people of color" (sorry, Asian Americans!) are incarcerated than white European people is not enough to get to the claim that you are making - "biggest impact in 'communities of color'".

And then there is the further claim by the ASJ that we should "reduce racial disparities in incarceration". That's an additional jump from "having the biggest impact in communities of color", because it implies that you could keep the same level of incarceration in communities of color, but incarcerate more white people. That would technically reduce the disparity. Are they trying to invent affirmative action for courts/prisons?

Go back to our Martian alien who knows nothing of SJWs. He starts trying to come up with a plan to reduce incarceration whilst maintaining public safety, and he looks at the well-established facts about differential incarceration rates. Then maybe he communicates with the earthling ChristianKl, who has just started having potentially useful ideas about "rewarding prisons financially for low recidivism rates". What does the alien, who is apolitical and doesn't know to avoid the taboos of the culture war, think about next? He might look at recidivism rates by race?

At this point, the alien would perhaps start to question whether the goal of "reducing incarceration while maintaining public safety" was really an accurate specification of what humans wanted. Maybe what they want is some combination of

  • less incarceration overall
  • more safety for the law-abiding public
  • a justice system which exhibits equality of outcomes when that would benefit groups that are high status within the SJ movement (e.g. African Americans), and equality of process when equality of outcomes would be to the detriment of groups that are high status within the SJ movement (e.g. women)

This combination of goals is good at explaining the words that are being emitted by the ASJ. It explains the focus on people of color as well as the total lack of any mention of the fact that males are vastly over-represented in prisons, and the conspicuous absence of efforts to reduce the gender disparity in prisons.

Now you might say, "wow, you have really broken your own rules there!" - well, let me disclaim that I am not implying any form of moral superiority between culture-war salient groups here. There are certainly many people of color who have suffered injustice at the hands of a highly imperfect and unfair, sometimes racist, system.

I am simply pointing out that if you casually assert the "intuitive" equivalence of statements that are not equivalent in all possible worlds, then you are taking some pretty big risks regarding good epistemology.

Comment by The_Jaded_One on A majority coalition can lose a symmetric zero-sum game · 2017-01-27T20:27:29.533Z · LW · GW

I would like to see more crossposted from Intelligent Agent Foundations Forum.

Comment by The_Jaded_One on A new blog with analyses of various topics (e. g. slavery and capitalism, why election models didn't predict Trump's victory) · 2017-01-27T20:21:45.698Z · LW · GW

I think linkposts + drafts is broken and weird. You have to go to the dropdown and select "post to LW discussion" immediately. If you post to drafts once, it can do some odd things.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-27T19:24:40.988Z · LW · GW

I have made some edits to this post emphasizing some things that occurred to me after finishing it.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-27T17:42:58.351Z · LW · GW

Well there are definitely a lot of good things about the EA movement, and people who choose to be a part of it should be proud of its achievements.

Comment by The_Jaded_One on A new blog with analyses of various topics (e. g. slavery and capitalism, why election models didn't predict Trump's victory) · 2017-01-27T17:40:13.038Z · LW · GW

Very interesting, though I would do a linkpost for a specific topic so that people can discuss just that. The stuff about slavery sounds really interesting.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-27T12:48:06.015Z · LW · GW

it's obvious by his tone and wording that he has other motives, and that these are obvious enough to enough people that I expect this to cause many people's guts to notice these motives and politically infect

How would you change this post to convey the same objections and points, but be less "infectious"?

I am prepared to take criticism into account and reword the article before it goes to the EA forum.

Comment by The_Jaded_One on 80,000 Hours: EA and Highly Political Causes · 2017-01-26T22:28:33.868Z · LW · GW

if your descriptions of their recommended organizations are charitable, then I too am confused right now.

Please check the links and report back; I am one person working alone, so it is possible I have missed something important.

frame your criticism as a confusion.

Well, I have been accused of being a concern troll in the past for doing exactly that. So, I am being up-front: this is a critical article, with the caveat that criticism of professional altruists is a necessary evil.

Comment by The_Jaded_One on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-26T22:00:50.442Z · LW · GW

How to addict users to little squirts of dopamine is big business. The problem, of course, is the kind of crowd you end up attracting. If you offer gold stars, you end up with people who like gold stars.

Everyone likes gold stars, but not everyone likes decision theory, rationality, philosophy, AI, etc. Even if we were as good as farmville at dopamine, the farmville people wouldn't come here instead of farmville, because they'd never have anything non-terrible to say.

Now we might start attracting more 13-year-old nerds... but do we want to be so elite that 13-year-old nerds can't come here to learn rationality? The ultimate eliteness is just an empty page that no-one ever sullies with a potentially imperfect speck of pixels. I think we are waaaay too close to that form of eliteness.

Comment by The_Jaded_One on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-26T21:54:54.917Z · LW · GW

Yes, there is a point at which more upvoting starts to saturate the set of possible scores for a comment, but we are nowhere near that point IMO. And if we were, I think it would be much better to add a limited-supply super-upvote to the system.

Comment by The_Jaded_One on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-26T16:48:32.908Z · LW · GW

I think that at some point adding more "free gold stars" - i.e. upvotes, badges, etc. - would look silly and be counterproductive, but we are nowhere near that point. So we should push the gas pedal: aim to upvote every non-terrible post at least somewhat, upvote decent posts a lot, and create new levels of reward - something like LessWrong gold - for posts that are truly great.

We should limit downvotes substantially, or perhaps permanently remove the downvote and replace it with separate buttons for "I disagree but this person is engaging in (broadly) rational debate" and "This is toxic/spammy/unrepentantly dumb".

These buttons should have different semantics. For example, "I disagree but this person is engaging in (broadly) rational debate" might be easy to click but not actually make the post go down the page. "This is toxic/spammy/unrepentantly dumb" might be more costly to click - for example, having a limited budget per week and requiring an additional confirmation dialogue, perhaps with a mandatory "reason" field which is enforced - but would actually push the post down the way downvotes currently do, or perhaps even more strongly.
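For concreteness, a minimal sketch of those two-button semantics (the names, the 3x weight, and the weekly budget are all hypothetical):

```python
from dataclasses import dataclass, field

WEEKLY_TOXIC_BUDGET = 5  # hypothetical per-user weekly limit

@dataclass
class Comment:
    upvotes: int = 0
    disagrees: int = 0     # "I disagree, but this is rational debate"
    toxic_flags: int = 0   # "this is toxic/spammy/unrepentantly dumb"
    toxic_reasons: list = field(default_factory=list)

    def sort_score(self):
        # Disagreement is displayed but does not push the comment down;
        # toxic flags push it down more strongly than an ordinary downvote.
        return self.upvotes - 3 * self.toxic_flags

def flag_toxic(comment, reason, flags_used_this_week):
    """Costly action: requires a mandatory reason and a remaining weekly budget."""
    if not reason.strip() or flags_used_this_week >= WEEKLY_TOXIC_BUDGET:
        return False
    comment.toxic_flags += 1
    comment.toxic_reasons.append(reason)
    return True
```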

Comment by The_Jaded_One on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-25T21:13:02.544Z · LW · GW

trade-off between attracting more people and becoming more popular vs maintaining certain exclusivity and avoiding an Eternal September,

I do not think that this is the tradeoff that we are actually facing. I think that in order for the site to be high quality, it needs to attract more people. Right now, in my opinion, the site is both empty and low quality, and these factors currently reinforce each other.

Comment by The_Jaded_One on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-25T06:59:16.424Z · LW · GW

OK, forget the phrase "pissed off" - what I am trying to get at is deontology vs. consequences.

Comment by The_Jaded_One on Metrics to evaluate a Presidency · 2017-01-25T06:57:52.265Z · LW · GW

Well, I would honestly start by doing a literature review of what the relevant academic fields have already studied.

If I had to guess on the spot what makes a government good, I would caution that a lot of what one sees in outcomes in the short term is determined by economics. On top of that, there are broader political processes that are just going to happen.

Maybe one thing I feel fairly confident about is that starting expeditionary wars of aggression has a very bad track record.

Comment by The_Jaded_One on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-24T23:15:04.882Z · LW · GW

What exactly do you mean by "should" here? Is it "should" as in the empirical claim "should = these actions will maximise the quality and number of users" or is it some kind of deontological claim like "should because u/Lumifer inherently believes that a mediocre post/comment should map to 0"?

I ask because it is plausible that

  • the optimal choice of mapping is not mediocre -> 0, where we judge optimality by the consequences for the site

  • you and others are inherently pissed off by people posting an average comment and getting +1 for it

Comment by The_Jaded_One on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-24T20:38:34.076Z · LW · GW

It doesn't have to be that specific number or way of doing things - the general point is "do we mostly punish or mostly reward".