Posts

GreaterWrong Arbital Viewer 2019-06-28T06:32:22.278Z · score: 61 (14 votes)
What societies have ever had legal or accepted blackmail? 2019-03-17T09:16:55.560Z · score: 33 (10 votes)
An alternative way to browse LessWrong 2.0 2018-02-19T01:52:06.462Z · score: 104 (33 votes)

Comments

Comment by clone-of-saturn on GreaterWrong Arbital Viewer · 2019-08-20T02:33:13.381Z · score: 3 (2 votes) · LW · GW

Fixed.

Comment by clone-of-saturn on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-18T08:56:11.037Z · score: 21 (4 votes) · LW · GW

What makes you think A and B are mutually exclusive? Or even significantly anticorrelated? If there are enough very different models built out of legitimate facts and theories for everyone to have one of their own, how can you tell they aren't picking them for political reasons?

Comment by clone-of-saturn on Power Buys You Distance From The Crime · 2019-08-04T08:34:18.016Z · score: 8 (4 votes) · LW · GW

This seems like dramatically over-complicating the idea. I would expect a prototypical conflict theorist to reason like this:

  1. Political debates have winners and losers—if a consensus is reached on a political question, one group of people will be materially better off and another group will be worse off.

  2. Public choice theory makes black people worse off. (I don't know if the article is right about this, but I'll assume it's true for the sake of argument.)

  3. Therefore, one ought to promote public choice theory if one wants to hurt black people, and disparage public choice theory if one wants to help black people.

Comment by clone-of-saturn on Drive-By Low-Effort Criticism · 2019-08-04T03:03:33.972Z · score: 13 (5 votes) · LW · GW

Sure, that all makes sense, but at least on LW it seems like we ought to insist on saying "rewarding results" when we mean rewarding results, and "deceiving ourselves into thinking we're rewarding results" when we mean deceiving ourselves into thinking we're rewarding results.

Comment by clone-of-saturn on Drive-By Low-Effort Criticism · 2019-08-02T02:21:40.944Z · score: 9 (6 votes) · LW · GW

Strange. You bring up Goodhart's Law, but the way you apply it seems exactly backwards to me. If you're rewarding strategies instead of results, and someone comes up with a new strategy that has far better results than the strategy you're rewarding, you fail to reward people for developing better strategies or getting better results. This seems like it's exactly what Goodhart was trying to warn us about.

Comment by clone-of-saturn on Drive-By Low-Effort Criticism · 2019-07-31T22:28:32.093Z · score: 17 (7 votes) · LW · GW

perhaps high-effort posts are more likely to contain muddled thinking, and hence more likely to have incorrect conclusions? but it’s hard to see why this should be the case a priori

I don't think high-effort posts are more likely to contain muddled thinking, but I do think readers are less likely to notice muddled thinking when it appears in high-effort posts, so suppressing criticism of high-effort posts is especially dangerous.

Comment by clone-of-saturn on When does adding more people reliably make a system better? · 2019-07-19T18:43:23.234Z · score: 9 (3 votes) · LW · GW

LW 1.0 had an additional problem: no one wanted to risk writing a worse-than-average post in Main, which led to ever-increasing standards and fewer posts. But I believe user numbers were still increasing, and the quality of Discussion posts decreasing, during that process.

Comment by clone-of-saturn on Why artificial optimism? · 2019-07-16T04:20:55.107Z · score: 18 (5 votes) · LW · GW

Knowingly allowing someone to get away with something bad makes you bad.

While some people have a belief like this, this seems false from a philosophical ethical perspective.

I think a philosophical ethical perspective that labels this "false" (and not just incomplete or under-nuanced) is failing to engage with the phenomenon of ethics as it actually happens in the world. Ethics arose in this cold and indifferent universe because being ethical is a winning strategy, but being "ethical" all by yourself without any mechanism to keep everyone around you ethical is not a winning strategy.

The cost of explicitly punishing people for not being vegetarian is prohibitive because vegetarianism is still a small and entrepreneurial ethical system, but you can certainly at least punish non-vegetarians by preferentially choosing other vegetarians to associate with. Well-established ethical systems like anti-murder-ism have much less difficulty affording severe punishments.

An important innovation is that you can cooperate with people who might be bad overall, as long as they follow a more minimal set of rules (for example, the Uniform Commercial Code). Or in other words, you can have concentric circles of ethicalness, making more limited ethical demands of people you interact with less closely. But when you interact with people in your outer circle, how do people in your inner circle know you don't condone all of the bad things they might be doing? One way is to have some kind of system of group membership, with rules that explicitly apply only to group members. But a cheaper and more flexible way is to simply remain ignorant about anything that isn't relevant--a.k.a. respect their privacy.

Comment by clone-of-saturn on Why artificial optimism? · 2019-07-15T23:00:52.938Z · score: 18 (6 votes) · LW · GW

Politeness and privacy are, in fact, largely about maintaining impressions (especially positive impressions) through coordinating against the revelation of truth.

People don't always agree with each other about what's good and bad. Knowingly allowing someone to get away with something bad makes you bad. Coordinating against the revelation of truth allows us to get something productive done together instead of spending all our time fighting.

Comment by clone-of-saturn on Open Thread July 2019 · 2019-07-15T09:23:25.007Z · score: 10 (5 votes) · LW · GW

I thought your comment was fine and the irony was obvious, but this kind of misunderstanding can be easily avoided by making the straightforward reading more boring, like so:

Given that CfAR is an organization which is specifically about seeking truth, one could safely assume that if the actual reason were “Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don’t want to undercut our core product,” then the handbook would have just said that. To do otherwise would be to call the whole premise into question!

Comment by clone-of-saturn on Raemon's Scratchpad · 2019-07-08T06:56:30.226Z · score: 4 (2 votes) · LW · GW

Knowing your plans could definitely make a difference--I do want to prioritize fixing any problems that make GW confusing to use, as well as adding features that someone has directly asked for. As such, I just implemented the related questions feature.

Comment by clone-of-saturn on GreaterWrong Arbital Viewer · 2019-06-29T04:57:13.157Z · score: 5 (3 votes) · LW · GW

Oops, fixed.

Comment by clone-of-saturn on GreaterWrong Arbital Viewer · 2019-06-29T00:50:19.749Z · score: 14 (3 votes) · LW · GW

Thanks!

Comment by clone-of-saturn on GreaterWrong Arbital Viewer · 2019-06-28T13:54:20.135Z · score: 7 (3 votes) · LW · GW

(For future reference, I believe you mean this page)

Comment by clone-of-saturn on Discourse Norms: Moderators Must Not Bully · 2019-06-17T00:57:59.553Z · score: 17 (5 votes) · LW · GW

But you realize this isn't just random unmotivated nitpicking, because it's also fairly straightforward and reasonable to clump "Nazi" with "HBD", and from there to ban someone like Gwern for his GWAS and embryo selection research, right?

Comment by clone-of-saturn on Experimental Open Thread April 2019: Socratic method · 2019-06-16T08:48:03.884Z · score: 2 (1 votes) · LW · GW

What's different about these domains? Can you tell them apart in any way?

Comment by clone-of-saturn on Reasonable Explanations · 2019-06-16T07:46:51.506Z · score: 20 (9 votes) · LW · GW

meta: LW supports spoiler tags now.

Comment by clone-of-saturn on Experimental Open Thread April 2019: Socratic method · 2019-06-16T06:56:06.259Z · score: 2 (1 votes) · LW · GW

Is there anything that makes observations different or distinguishable from imaginations? If so, what?

Comment by clone-of-saturn on No, it's not The Incentives—it's you · 2019-06-15T19:08:31.034Z · score: 2 (1 votes) · LW · GW

Right... but fraud rings need something to initially nucleate around. (As do honesty rings)

Comment by clone-of-saturn on No, it's not The Incentives—it's you · 2019-06-15T17:54:06.274Z · score: 3 (2 votes) · LW · GW

I don't endorse the quoted statement, I think it's just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, they are arbitrary human constructions, and therefore it's wrong to shame someone for violating a norm they didn't explicitly agree to follow. If you call me out for falsifying data, you're not recruiting the community to enforce its norms for the good of all. There is no community, there is no all, you're simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.

(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don't see it that way.)

Comment by clone-of-saturn on Asymmetric Weapons Aren't Always on Your Side · 2019-06-09T10:32:40.007Z · score: 3 (2 votes) · LW · GW

Of course, I completely agree with this, especially this part:

“we’ve put massive amounts of effort into punishing physical violence as a way to solve problems and as a tool it’s acceptable to use (generally, and in the domain of sexuality).

Punishing physical violence. With more efficient violence. What we've done is brought a very large coalition into extremely precise agreement about who it's acceptable to do violence to ("criminals" for short), who must do it, and how it must be done. Not only will uninvolved bystanders intervene to ensure these violent norms are followed, but we even have a class of professional violent bystanders (the police).

The sort of spontaneous lashing out that you brought up is exactly the kind of thing highly organized violence excels at suppressing. Lack of such violence, overall, tends to make life much worse for physically weaker non-criminals, even if it might let them get away with occasionally pepper-spraying a catcaller.

Comment by clone-of-saturn on Asymmetric Weapons Aren't Always on Your Side · 2019-06-09T09:44:07.688Z · score: 9 (2 votes) · LW · GW

Right, and also the ability to do science and engineering, the ability to frankly discuss strategy without too much political backstabbing, etc. tends to favor less-hellish societies.

Comment by clone-of-saturn on Asymmetric Weapons Aren't Always on Your Side · 2019-06-08T09:37:14.017Z · score: 3 (2 votes) · LW · GW

See my other reply.

Comment by clone-of-saturn on Asymmetric Weapons Aren't Always on Your Side · 2019-06-08T09:36:23.732Z · score: 16 (4 votes) · LW · GW

My problem with this is that human history is heavily saturated with violent conflict; most places on earth have been violently conquered not just once but many times. If violence were really asymmetric in a bad direction, goodness ought to have been very thoroughly eliminated by now!

Comment by clone-of-saturn on Asymmetric Weapons Aren't Always on Your Side · 2019-06-08T03:37:08.229Z · score: 15 (4 votes) · LW · GW

Violence isn’t merely symmetric—it’s asymmetric in a bad direction, since fascists are better at violence than you.

This seems like a strange opinion to have, given that the fascists were in fact the losers of the most violent conflict in history, and their name became the default metonym for pure badness as a direct result of that loss.

Comment by clone-of-saturn on Arbital scrape · 2019-06-07T04:23:01.514Z · score: 10 (3 votes) · LW · GW

I've been working on something similar, it should be ready soon-ish.

Comment by clone-of-saturn on Major Update on Cost Disease · 2019-06-05T22:01:22.819Z · score: 22 (8 votes) · LW · GW

If we measure the Baumol effect in healthcare using "salary and benefits" where there's no increase in salary and the increase in benefits all goes to increased healthcare costs, that seems like a form of circular reasoning or begging the question. We've only concluded that healthcare costs increased because healthcare benefits increased.
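A toy calculation (all numbers hypothetical, chosen only to illustrate the point) makes the circularity explicit: if "compensation" is defined as salary plus benefits, salary is flat, and the benefits line is just employer-paid healthcare, then "compensation grew" is nothing but "healthcare spending grew" restated.

```python
# Hypothetical three-year series; every number here is invented for illustration.
salary = [50_000, 50_000, 50_000]      # flat salaries over three years
healthcare = [10_000, 12_000, 15_000]  # healthcare costs: the thing we want to explain
benefits = healthcare                   # by assumption, benefits are all healthcare
compensation = [s + b for s, b in zip(salary, benefits)]

comp_growth = compensation[-1] - compensation[0]
hc_growth = healthcare[-1] - healthcare[0]

# The measured rise in compensation is identical to the rise in healthcare
# costs, so citing the former to explain the latter explains nothing.
assert comp_growth == hc_growth
```

Under these assumptions, any argument of the form "healthcare costs rose because worker compensation rose" has the conclusion baked into the measurement.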

Comment by clone-of-saturn on Tales From the American Medical System · 2019-05-10T18:51:00.865Z · score: 1 (1 votes) · LW · GW

(but what would be the effects of making potentially dangerous medications freely available?)

Well, you can already walk into any hardware store and buy all sorts of deadly poisons, no questions asked. So my guess would be not much, except they'd be a lot cheaper.

Comment by clone-of-saturn on Privacy · 2019-05-03T00:05:23.651Z · score: 1 (1 votes) · LW · GW

Only if you assume everyone loses an equal amount of privacy.

Comment by clone-of-saturn on Counterfactuals about Social Media · 2019-04-24T05:22:44.947Z · score: 3 (2 votes) · LW · GW

Would you mind being more specific about what you find lacking in other tools?

Comment by clone-of-saturn on Moral Weight Doesn't Accrue Linearly · 2019-04-23T23:55:45.629Z · score: 7 (5 votes) · LW · GW

It seems natural to weight things according to how much I expect to be able to interact with them.

Obviously that means my weightings can change if I unexpectedly gain or lose the ability to interact with things, but I can't immediately think of any major problems with that.

Comment by clone-of-saturn on Moral Weight Doesn't Accrue Linearly · 2019-04-23T23:02:52.716Z · score: 4 (3 votes) · LW · GW

Personally, I would modus tollens this and take it as an example of why it's absurd to morally value things in other universes or outside my light cone.

Comment by clone-of-saturn on Moving to a World Beyond “p < 0.05” · 2019-04-20T21:17:49.960Z · score: 20 (8 votes) · LW · GW

Previously on Less Wrong:

Elsewhere:

Comment by clone-of-saturn on User GPT2 is Banned · 2019-04-02T08:28:37.093Z · score: 9 (2 votes) · LW · GW

I'm happy to discuss any concerns you have about it.

Comment by clone-of-saturn on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-02T07:57:45.349Z · score: 1 (1 votes) · LW · GW

Yup.

Comment by clone-of-saturn on Experimental Open Thread April 2019: Socratic method · 2019-04-02T06:01:52.252Z · score: 13 (4 votes) · LW · GW

What's the difference between "the source of observations" and "reality?"

Comment by clone-of-saturn on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-02T04:16:17.343Z · score: 15 (3 votes) · LW · GW

I added an ignore user feature to GreaterWrong; go to a user's page and click the Ignore User button.

Comment by clone-of-saturn on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2019-04-01T03:57:59.499Z · score: 3 (2 votes) · LW · GW

You can access the #lesswrong IRC through this link.

Comment by clone-of-saturn on Please use real names, especially for Alignment Forum? · 2019-03-29T06:28:11.263Z · score: 3 (2 votes) · LW · GW

OK, I added real names in a hover popup. I might try out some other options later.

Comment by clone-of-saturn on Hazard's Shortform Feed · 2019-03-24T00:27:08.227Z · score: 1 (1 votes) · LW · GW

But why stop at individual people? This kind of ontological deflationism can naturally be continued to say there are no individual people, just cells, and no cells, just molecules, and no molecules, just atoms, and so on. You might object that it's absurd to say that people don't exist, but then why isn't it also absurd to say that groups don't exist?

Comment by clone-of-saturn on What failure looks like · 2019-03-18T02:35:25.791Z · score: 5 (3 votes) · LW · GW

Oops, this bug should be fixed now.

Comment by clone-of-saturn on Has "politics is the mind-killer" been a mind-killer? · 2019-03-17T09:40:33.597Z · score: 10 (3 votes) · LW · GW

You write:

The truth of the matter is, that policy decisions can often have life-and-death consequences.

But also:

ONE WAY OF THINKING ABOUT politics is AS IF IT IS an extension of war by other means. Arguments CAN BE THOUGHT OF AS soldiers.

This seems like a contradiction. As you allude to, the fundamental question of politics is whose desires should and can legitimately be overridden by society--up to and including their desire not to be killed. With stakes so high, how can you justify placing good faith debate above using whatever tactics are necessary to avoid losing? It seems to me that if arguments aren't soldiers, you aren't actually engaged in politics.

Comment by clone-of-saturn on The Case for a Bigger Audience · 2019-02-20T02:03:51.117Z · score: 4 (3 votes) · LW · GW

You're right that this site is geared to not wanting a large audience in absolute terms; this post is implicitly about having a relatively larger share of the small pool of people who are intellectually engaged with LW-relevant topics.

Comment by clone-of-saturn on Introducing the AI Alignment Forum (FAQ) · 2019-02-16T06:30:02.404Z · score: 4 (2 votes) · LW · GW

This should work now, sorry about the delay.

Comment by clone of saturn on [deleted post] 2019-02-15T05:48:09.765Z

Test comment editing

Comment by clone-of-saturn on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-01T03:14:09.135Z · score: 12 (4 votes) · LW · GW

I'm afraid the only confusion this generated for me was confusion about what the confusion was supposed to be...

Comment by clone-of-saturn on Is Agent Simulates Predictor a "fair" problem? · 2019-01-25T20:17:54.175Z · score: 1 (1 votes) · LW · GW

It's impossible to enumerate possible worlds and pick the best one without a decision theory, because your decision process gives the same output in every possible world where you have a given epistemic state. We obviously need counterfactuals to make decisions, and the different decision theories can be seen as different theories about how counterfactuals work.

Comment by clone-of-saturn on Open Thread January 2019 · 2019-01-15T22:08:53.453Z · score: 3 (2 votes) · LW · GW

This thread might be relevant to your question.

Comment by clone-of-saturn on Why Don't Creators Switch to their Own Platforms? · 2018-12-23T09:55:53.336Z · score: 11 (5 votes) · LW · GW

My guess would be that this has a lot to do with IQ differences of both the audience and the creators. First, Sam Harris listeners may have a much easier time learning new apps and websites than PewDiePie viewers. Second, PewDiePie is primarily "famous for being famous" and there are many people producing videos of more-or-less equivalent quality to PewDiePie's, whereas there aren't that many nearly identical podcast hosts ready to step in and replace Sam Harris as soon as his podcast becomes slightly more inconvenient to access.

Comment by clone-of-saturn on What is "Social Reality?" · 2018-12-13T18:31:15.960Z · score: 1 (1 votes) · LW · GW

If you lived without having a bike, sure. I don't think you could get away with that level of ignorance if you had to build or repair a bike yourself.