Posts

How much does neatly eating matter? What about other food manners? 2019-06-11T19:40:45.539Z · score: 9 (9 votes)
Rational Feed: Last Month's Best Posts 2018-05-02T18:19:47.821Z · score: 43 (10 votes)
Rationality Feed: Last Month's Best Posts 2018-03-21T14:12:26.432Z · score: 48 (14 votes)
A Concrete Multi-Step Variant of Double Crux I Have Used Successfully 2018-03-15T01:26:16.028Z · score: 46 (11 votes)
Correct Models Are Bad 2018-03-04T16:51:25.919Z · score: -9 (11 votes)
Rationality Feed: Last Month's Best Posts 2018-02-12T13:18:39.909Z · score: 64 (19 votes)
Rational Feed: Last Week's Community Articles and Some Recommended Posts 2017-10-02T13:49:05.937Z · score: 2 (2 votes)
Rational Feed: Last Week's Community Articles and Some Recommended Posts 2017-09-25T13:41:55.117Z · score: 11 (9 votes)
Rational Feed 2017-09-17T22:03:25.134Z · score: 7 (7 votes)
Rational Feed 2017-09-09T19:48:18.079Z · score: 12 (12 votes)
Rational Feed 2017-08-27T03:49:15.774Z · score: 11 (11 votes)
Bi-weekly Rational Feed 2017-08-08T13:56:20.160Z · score: 20 (20 votes)
Bi-Weekly Rational Feed 2017-07-24T21:56:46.003Z · score: 8 (8 votes)
Bi-Weekly Rational Feed 2017-07-09T19:11:21.767Z · score: 11 (11 votes)
Bi-Weekly Rational Feed 2017-06-24T00:07:03.768Z · score: 24 (22 votes)
Concrete Ways You Can Help Make the Community Better 2017-06-17T03:03:33.317Z · score: 22 (22 votes)
Bi-Weekly Rational Feed 2017-06-10T21:56:38.374Z · score: 11 (11 votes)
Bi-Weekly Rational Feed 2017-05-28T17:12:57.118Z · score: 7 (17 votes)
A Month's Worth of Rational Posts - Feedback on my Rationality Feed. 2017-05-15T14:21:32.995Z · score: 17 (18 votes)
I Updated the List of Rationalist Blogs on the Wiki 2017-04-25T10:26:51.234Z · score: 24 (26 votes)

Comments

Comment by deluks917 on Drive-By Low-Effort Criticism · 2019-07-31T13:31:29.876Z · score: 39 (11 votes) · LW · GW

I don't think Qiaochu's comment is particularly low effort. He has been in Berkeley for a long time and spoke about his experiences. Given that he shared his Google doc with some people, the comment was probably constructive on net, though I don't think it was constructive to the conversation on lesswrong.

If someone posts a detailed thread describing how they want to do X, maybe people should hold off on posting 'actually trying to do X is a bad idea'. Sometimes the negative comments are right. But lesswrong seems to have gone way too far in the direction of naysaying. As you point out, the top comments are often negative even on high-effort posts by highly regarded community members. This is a big problem.

Comment by deluks917 on LW authors: How many clusters of norms do you (personally) want? · 2019-07-13T06:32:57.404Z · score: 6 (3 votes) · LW · GW

I would post much more on lesswrong if there were a 'no nitpicking' norm available.

(re-posted as a top level comment at Ray's request)

Comment by deluks917 on LW authors: How many clusters of norms do you (personally) want? · 2019-07-13T03:19:26.300Z · score: 10 (5 votes) · LW · GW

I would post much more on lesswrong if there were a 'no nitpicking' norm available.

Comment by deluks917 on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-09T13:34:59.803Z · score: 12 (4 votes) · LW · GW

Lying about what? It is certainly common to blatantly lie when you want to cancel plans or decline an invitation. Some people think there should be social repercussions for these lies. But imo these sorts of lies are, by default, socially acceptable.

There are complicated incentives around punishing deliberate manipulation and deception much harder than motivated/unconscious manipulation and deception. In particular, you are punishing people for being self-aware. You can interpret 'The Elephant in the Brain' as a record of the myriad ways people engage in somewhat, or more than somewhat, manipulative behavior. Motivated reasoning is endemic. A huge amount of behavior is largely motivated by local 'monkey politics' and status games. Learning about rationality might make a sufficiently open-minded and intellectually honest person aware of what they are often doing. But it's not going to make them stop doing these things.

Imagine that people on average engage in 120 units of deception: 20 units of conscious deception and 100 units of unconscious. People who take the self-awareness pill engage in 40 units of conscious deception and 0 units of unconscious deception. The latter group engages in much less deception overall but twice as much 'deliberate' deception.
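
To make the incentive problem concrete, here is a minimal sketch (in Python) of the toy numbers above. The penalty weight is a made-up illustrative parameter, not something from the post:

```python
# Toy model: a norm that punishes deliberate (conscious) deception much
# harder than unconscious deception. The 6x weight is hypothetical.

def penalty(conscious: float, unconscious: float, deliberate_weight: float = 6.0) -> float:
    """Social penalty under a norm that weights conscious deception more heavily."""
    return deliberate_weight * conscious + unconscious

baseline = (20, 100)   # 20 conscious + 100 unconscious = 120 total units
self_aware = (40, 0)   # 40 conscious + 0 unconscious = 40 total units

print(penalty(*baseline))    # 220.0
print(penalty(*self_aware))  # 240.0 -- a third the total deception, yet a larger penalty
```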

I have two main conclusions. First, I think seeing people, and yourself, clearly requires an increased tolerance for certain kinds of bad behavior. People are not very honest, but cooperation is empirically possible. Ray commented below: "If someone consciously lies* to me, it's generally because there is no part of them that thinks it was important enough to cooperate with me". I think that Ray's comment is false. Secondly, I think it's bad to penalize 'deliberate' bad behavior so much more heavily. What is the point of penalizing deception? Presumably much of the point is to preserve the group's ability to reason. Motivated reasoning and other forms of non-deliberate deception and manipulation are arguably at least as serious a problem as blatant lies.

Comment by deluks917 on Explaining "The Crackpot Bet" · 2019-06-24T18:46:23.496Z · score: 0 (7 votes) · LW · GW

Even if Glenn is having a mental breakdown, letting him continue to spam people on various forums is not helping him, in particular because he is currently burning a ton of social capital and cultivating a very negative reputation. At least he needs to take a break from public posting.

Comment by deluks917 on Discourse Norms: Moderators Must Not Bully · 2019-06-15T03:17:57.119Z · score: 31 (14 votes) · LW · GW

https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism

I think you should explain in substantially more detail why you think communities should do the opposite of following Eliezer's advice.

Comment by deluks917 on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-24T21:29:32.803Z · score: 1 (1 votes) · LW · GW

If you bid $2 you get at most $4. If you bid $100 you have a decent chance to get much more. If even 10% of people bid ~$100 and everyone else bids $2, you are better off bidding $100. Even in a 5% $100 / 95% $2 split the two strategies have a similar expected value. In order for bidding $2 to be a good strategy you have to assume almost everyone else will bid $2.
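
As a rough check, here is a minimal sketch (in Python), assuming the standard Traveller's Dilemma payoff: both players receive the lower bid, with a $2 bonus for the lower bidder and a $2 penalty for the higher bidder:

```python
def payoff(my_bid: int, their_bid: int) -> int:
    """One player's payoff under the standard Traveller's Dilemma rules."""
    if my_bid == their_bid:
        return my_bid
    low = min(my_bid, their_bid)
    return low + 2 if my_bid == low else low - 2

def expected_value(my_bid: int, p_high: float) -> float:
    """EV of my_bid when a fraction p_high of opponents bid $100 and the rest bid $2."""
    return p_high * payoff(my_bid, 100) + (1 - p_high) * payoff(my_bid, 2)

for p in (0.05, 0.10):
    print(f"p = {p:.2f}: EV(bid $2) = {expected_value(2, p):.2f}, "
          f"EV(bid $100) = {expected_value(100, p):.2f}")

# Solving p*100 = (1-p)*2 + p*4 gives a break-even fraction of about 2%:
# if more than ~2% of opponents bid $100, bidding $100 has the higher EV.
```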

Comment by deluks917 on The Amish, and Strategic Norms around Technology · 2019-03-26T14:13:26.572Z · score: 8 (5 votes) · LW · GW

If you can consistently get to work late enough, I think the best time to go to sleep is around 1am. 1am is late enough that you can be out until midnight and still have an hour to get home and go to sleep on time. Even if you are out very late and only get to bed by 2am, you are only down an hour of sleep if you maintain your wakeup time. There is occasional social pressure to hang out substantially past midnight, but it is pretty rare.

For these reasons I go to bed at 1am and get up at 9am. Of course, I don't have to be at work until 10am. But if you can make this work, it's great to have a sleep schedule you can hold to without sacrificing socialization.

Comment by deluks917 on Privacy · 2019-03-16T17:16:41.639Z · score: 6 (8 votes) · LW · GW

Ben Hoffman's views on privacy are downstream of a very extreme world model. On http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy a person comments under the name 'declaration of war' and Ben says:

I was a little surprised to see someone else express opinions so similar to my true feelings here (which are stronger than my endorsed opinions), but they’re not me.

Here are two relevant quotes:

It's not surprising if privacy has value for the person preserving it. It's very surprising if it has social value.

Trivially, information puts people in better positions to make decisions. If it doesn't, it logically has to be due to their perverse behaviors.

It seems self-evident that we are all MASSIVELY worse off because sexuality is somewhat shrouded in secrecy. If we don't agree on that point, not regarding what happens on the margins, but regarding global policy, I simply consider you to be part of rape culture and possibly it would be immoral to blackmail you rather than simply exposing you unconditionally.

Another (in the context of sexuality and privacy)

Coordinated concealing information is always about perpetuating patterns of abuse.

Ben says his endorsed views are not this extreme, but he certainly seems to hold the fairly extreme view that sharing more information is almost always good. His position on this is presumably downstream of how 'perverse' he thinks human society is. I personally think it is pretty obvious that, in currently existing society, sharing more information is not almost always good for society, and that privacy is not primarily a way to prevent abuse.

A society with no privacy is essentially a society of perfect norm and law enforcement. I do not think that would be a good society. Ben and others presumably agree many current norms and laws are quite bad. But they also seem to think that in a world without privacy all norms and laws would become just. Perhaps the central crux is 'in a world without privacy would laws and norms automatically become just?'.

Comment by deluks917 on Alignment Newsletter #46 · 2019-02-23T15:17:09.157Z · score: 1 (1 votes) · LW · GW

I don't like to be overly negative but I am a long time reader and changing the format to 'post an email' makes this really unpleasant for me to read.

Comment by deluks917 on "Other people are wrong" vs "I am right" · 2019-02-23T09:16:21.723Z · score: 9 (5 votes) · LW · GW

I find many of the views you updated away from plausible and perhaps compelling. I have also found your writing on other topics compelling. Given this, I feel like I should update my confidence in my own beliefs. Based on the post I find it hard to model where you currently stand on some of these issues. For example you claim you don't endorse the following:

> The future might be net negative, because humans so far have caused great suffering with their technological progress and there’s no reason to imagine that this will change.

I certainly don't think it's obvious that average suffering will be higher in the future. But it also seems plausible to me that the future will be net negative. 'The trendline will continue' seems like a strong enough argument to find a net-negative future plausible. Elsewhere in the article you claim that humans' weak preferences will eventually end factory farming, and I agree with that. However, new forms of suffering may develop. One could imagine strong competitive pressures rewarding agents that 'negatively reinforce' agents they simulate. There are many other ways things can go wrong. So I am genuinely unsure what you mean when you say you don't endorse this claim anymore. Do you think it is implausible that the future is net negative? Or have you just substantially reduced the probability you assign to a net-negative future?

Relatedly, do you have any links on why you updated your opinion of professionalism? I should note I am not at all trying to nitpick this post. I am very interested in how my own views should update.

Comment by deluks917 on Avoiding Jargon Confusion · 2019-02-20T18:31:28.708Z · score: 5 (1 votes) · LW · GW

Language drift can introduce confusions but it also has advantages. The original definition of a concept is unlikely to be the most useful definition. It is good if words shift to the definitions that the community finds useful. Let me give an example.

Bostrom’s original definition of ‘infohazard’ includes information that is dangerous in the wrong hands: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” However, most people use infohazard to mean something like “information that is intrinsically harmful to those who have it or will cause them to do harm to others” (this is how it is used in the SCP stories, for example). As Taymon points out, Bostrom didn’t distinguish between “things you don’t want to know” and “things you don’t want other people to know”.

I think the SCP definition is more useful. It’s probably actively good that the definition of infohazard has shifted over time. Insisting on Bostrom’s definition is usually just confusing things.

Comment by deluks917 on Is there a standard discussion of vegetarianism/veganism? · 2019-01-03T17:24:38.336Z · score: 2 (2 votes) · LW · GW

There are some standard answers to "Can you rank animals by how bad eating them is?". Here is Brian Tomasik's ranking. The article goes into considerable detail and has a useful results table: How Much Direct Suffering is Caused by Different Animal Foods. Various people have proposed alternative ways to count, for example suffering per gram of protein, but this is the standard starting point.

Comment by deluks917 on Events in Daily? · 2019-01-02T06:12:54.675Z · score: 16 (3 votes) · LW · GW

Only a tiny minority of events are relevant to me. So I prefer they are not included.

Comment by deluks917 on Anyone use the "read time" on Post Items? · 2018-12-29T00:34:09.566Z · score: 9 (5 votes) · LW · GW

I would strongly prefer word count. Word count is implemented uniformly across sites and contexts. I also almost always take longer than the stated read time to actually read the post.

Comment by deluks917 on Defining Freedom · 2018-12-20T05:55:44.858Z · score: 5 (1 votes) · LW · GW

It's not obvious to me why some things feel constraining and some do not. For example, you could say that 'every country in the world has the death penalty for stepping in front of a moving bus'. Transhumanists probably do feel constrained by a lack of technological solutions. But the bus death penalty just does not bother me the way a human-made law does.

Comment by deluks917 on The housekeeper · 2018-12-03T20:12:44.820Z · score: 3 (4 votes) · LW · GW

My current rent in NYC is already as high as I can justify. But I would prefer to pay $1680 for a room in a house with a housekeeper rather than $1400 in a house without. I think $1400 is a reasonable price for a single room in many locations.

Comment by deluks917 on If You Want to Win, Stop Conceding · 2018-11-23T00:23:18.244Z · score: 2 (2 votes) · LW · GW

It's not clear to me this attitude is always optimal even if your only goal is to improve. The fundamental question is: 'Is the information we get from finishing this match in X minutes greater than the information we would get by spending those X minutes playing a new match?'

If the endgame is relatively long and not particularly interesting, just concede. We aren't going to learn much from actually playing it out even if I am 2-5% to win.

Say we are practicing for a 1v1 Terraforming Mars competition. On generation three you get out AI Central and I don't have a huge lead in other areas to compensate. I think it's rational to concede here. Terraforming Mars takes a long time to play out. It's not really clear how exactly you will beat me, but you will draw a ton of cards and kill me somehow. I doubt you need practice crushing someone with a normal draw when you have an active AI Central.

In a game with substantial luck, I think it matters what caliber of opponents you are expecting to play against. If you are anticipating playing against people substantially worse than you, it can make sense to practice winning from 'objectively lost' positions. If you are substantially stronger than the opponent you actually can win. But if your expected opponents are capable and playing the game out will take a while, just concede.

Never mind the fact that it is psychologically unpleasant to play out almost certainly lost positions. So if you are playing for enjoyment, it's often rational to concede. Of course, during a literal tournament match, play it out till the end if you are playing to win. Though make sure you are not screwing yourself because of timer rules (for example, not conceding a game of MTG quickly enough can make it unlikely you can finish the best-of-three match).

Sidenote: I have also had quite a lot of success playing games, though I don't really play competitive games anymore.

Comment by deluks917 on No standard metric for CFAR workshops? · 2018-09-08T19:18:15.395Z · score: 21 (5 votes) · LW · GW

I think it's very confusing to call d = 0.2 to 0.5 'small', especially in the context of a 4-day workshop. Imagine the variable is IQ. Then a 'small' effect increases IQ by 3 to 7.5 points. That boost in IQ would be much better described as 'huge'. However, IQ has a relatively large standard deviation compared to its mean (roughly 15 and 100).

Let's look at male height. In the USA male height has a mean around 70 inches and a standard deviation around 4 inches. (Note 4/70 is 38% of 15/100.) A d of 0.2 to 0.5 would correspond to an increase in height of 0.8 to 2 inches. Some people are willing to undergo costly, time-consuming and painful leg-lengthening surgery to gain 4-5 inches of height. If a four-day, 4000-dollar workshop increased your height by 0.8 to 2 inches, millions of men would be on the waiting list. I know I would be. That doesn't really sound 'small' to me.
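
The conversion here is just d times the population standard deviation. A minimal sketch (in Python), using the SD values quoted above:

```python
def effect_in_raw_units(d: float, sd: float) -> float:
    """Convert a standardized effect size (Cohen's d) to the outcome's native units."""
    return d * sd

for d in (0.2, 0.5):
    print(f"d = {d}: IQ (sd = 15): {effect_in_raw_units(d, 15):.1f} points; "
          f"US male height (sd = 4 in): {effect_in_raw_units(d, 4):.1f} inches")

# d = 0.2 -> 3.0 IQ points / 0.8 inches; d = 0.5 -> 7.5 IQ points / 2.0 inches.
```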

Comment by deluks917 on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-25T21:31:41.771Z · score: 4 (5 votes) · LW · GW

Pledged $50.

As an aside, I don't understand why CEA or the community building fund won't give any money to these projects.

Comment by deluks917 on Tactical vs. Strategic Cooperation · 2018-08-16T20:35:47.944Z · score: 5 (1 votes) · LW · GW

> The question is, who counts as terrible? What sorts of lapses in rigorous thinking are just normal human fallibility and which make a person seriously untrustworthy?

If at all possible you need to look at the person's actual track record. Everyone has views you will find incredibly stupid or immoral. Even the very wise make mistakes that look obvious to us. In addition, it's possible that the person engaging in 'obvious folly' actually has a better understanding of the situation than we do. You need to look at a representative sample and weigh their successes and failures in a systematic way. If you cannot access their history you still need to get an actual sample. If you were judging programmers, something like a Triplebyte interview is a reasonable way to get info. Trying to weigh the stupid things they have said about programming is a very bad method. Without a real sample you are making a character judgement under huge uncertainty.

Of course, we are Bayesians. If forced to come up with an estimate despite uncertainty, we can do it. But it's important to do the updating correctly. Say a person's stupidest belief that you know about is X. The relevant odds ratio is not:

P(believes X | trustworthy) / P(believes X | untrustworthy)

Instead you have to look at:

P(stupidest belief I learn about is at least as stupid as X | trustworthy) / P(stupidest belief I learn about is at least as stupid as X | untrustworthy)

You can try to estimate similar odds ratios for collections of stupid beliefs. This method isn't as good as conditioning on both unusually wise and unusually stupid beliefs. But if you are going to judge based on stupid beliefs, you have to do it correctly. Keep in mind that the more 'open' a person is, the more likely you are to learn their stupid beliefs. So you need to factor in an estimate of their openness towards you.
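
Here is a minimal sketch (in Python) of the update being described. The two conditional probabilities are made-up illustrative numbers, not estimates I am endorsing:

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = 1.0  # 1:1 prior odds that the person is trustworthy

# Hypothetical numbers: even trustworthy people reveal a belief at least
# this stupid 30% of the time; untrustworthy people, 60% of the time.
p_obs_given_trustworthy = 0.30
p_obs_given_untrustworthy = 0.60

lr = p_obs_given_trustworthy / p_obs_given_untrustworthy
print(posterior_odds(prior, lr))  # 0.5 -- a mild update, not a damning one
```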

Comment by deluks917 on Would you benefit from audio versions of posts? · 2018-07-26T11:13:49.642Z · score: 1 (1 votes) · LW · GW

Would be useful to me

Comment by deluks917 on Replace yourself first if you're moving to the Bay · 2018-07-23T17:09:23.344Z · score: 13 (7 votes) · LW · GW

I wonder if we are past the tipping point. If someone's main social group is rationalists, I am not sure it makes sense not to live in the Bay. You will lose too many friends over time. And maintaining long-term social connections is very important. I think the unfortunate situation is that non-Bay communities have to be staffed by people who dislike the Bay culture, don't consider rationalists their primary social group, or have strong reasons for living in a particular city (for example, they work in finance and a lot of the jobs are in NYC). I think this situation is problematic, mostly for reasons you outlined. But there isn't going to be a coordinated effort to reverse the trend of people moving to the Bay. And there are certainly benefits to having people concentrated. I also agree that the Schelling point had to be the Bay; the Silicon Valley money was too important given the community's goals and demographics.

Comment by deluks917 on Who Wants The Job? · 2018-07-23T15:03:29.106Z · score: 21 (3 votes) · LW · GW

Zvi posted the following comment:

Yeah, I likely should have been more explicit about the whole ‘the ones who are any good already got hired’ thing. Which has the same implication, of course – that if you can simply display what we’d instinctively think of as ordinary competence, you’ll get hired reasonably quickly once you start putting in effort. Which matches my experience on both sides.

---

To put it mildly, the above does not match my experience at all. And I know a ton of rationalist programmers having trouble finding a job. These people are usually not super experienced and didn't go to the ICPC finals. But they certainly seem at least 'ordinarily competent'.

Comment by deluks917 on Who Wants The Job? · 2018-07-23T00:52:18.319Z · score: 6 (2 votes) · LW · GW

Same experience. I was applying for software jobs, for what it's worth.

Comment by deluks917 on Last Chance to Fund the Berkeley REACH · 2018-06-27T22:47:57.965Z · score: 46 (13 votes) · LW · GW

I neither live in Berkeley nor am I unusually wealthy for a rationalist. But I think it would set a very bad precedent for Sarah and friends' efforts to end in failure. My vision of the rationalist community is one that can make projects like this work.

I pledged $100.

Comment by deluks917 on On the Chatham House Rule · 2018-06-15T20:29:06.879Z · score: 5 (3 votes) · LW · GW

A later comment suggests the Chatham House website is sympathetic to my interpretation:

Q. Can participants in a meeting be named as long as what is said is not attributed?
A. It is important to think about the spirit of the Rule. For example, sometimes speakers need to be named when publicizing the meeting. The Rule is more about the dissemination of the information after the event - nothing should be done to identify, either explicitly or implicitly, who said what.

Comment by deluks917 on On the Chatham House Rule · 2018-06-14T00:04:58.108Z · score: 18 (5 votes) · LW · GW

That runs into the problem that if you say the rules are absolute, many people will 'follow their spirit', and if you say 'follow the spirit of the rules', then people will be way too lax about the rules. Eliezer mentions this issue in his meta-honesty post.

Comment by deluks917 on On the Chatham House Rule · 2018-06-13T23:45:53.545Z · score: 7 (8 votes) · LW · GW

My intuitive reaction is that you are following this rule more strictly than intended. If I held an event with Chatham House rules and someone's wife accidentally saw the attendee list briefly, this would not even register as a problem. I would expect the attendee to tell their wife not to talk about who was at the event. I also think it's expected that people will occasionally break the rules for the greater good (e.g. adding a trustworthy person to an email thread so they can work on a research problem). If someone asks whether you know someone, just say 'yes, but the details are private' (this is not breaking the rules imo).

When people choose to use Chatham House rules they are trying to prevent information from becoming public and let people stay off the record. They usually do not expect the rules to be treated as sacred.

Comment by deluks917 on The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo · 2018-05-21T14:07:21.751Z · score: 4 (3 votes) · LW · GW

I don't tell everyone to move to Berkeley. But if you are heavily invested socially in the rationalist community, you are passing up a lot of personal utility by not moving to Berkeley. Other considerations apply, of course. But I think the typical highly invested rationalist would be personally better off if they moved to Berkeley. Whether this dynamic is good for the community long-term or not is unclear.

Comment by deluks917 on The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo · 2018-05-20T23:17:32.950Z · score: 11 (4 votes) · LW · GW

I am sort of agnostic about whether the Berkeley community is a good idea or not. On one hand, it certainly feels pointless to try to build up any non-Berkeley community. If someone is a committed rationalist they are pretty likely to move to Berkeley in the near future. In addition, it is very hard to constantly lose friends. This post probably best captures the emotional reality:

"I have lost motivation to put any effort into preserving the local community – my friends have moved away and left me behind – new members are about a decade younger than myself, and I have no desire to be a ‘den mother’ to nubes who will just move to Berkley if they actually develop agency… I worry that I have wasted the last decade of my life putting emotional effort into relationships that I have been unable to keep and I would have been better off finding other communities that are not so prone to having its members disappear."

If you base your social life around the rationality community, and do not live in Berkeley, you are in for a lot of heartache. For this reason I cannot really recommend people invest too heavily in the rationalist community unless they want to move to Berkeley.

===

On the other hand, concentration has benefits. Living close to your friends has huge social benefits. As Sarah says, very few adults live on a street with their friends, and many Berkeley rationalists do. It looks likely there will be rationalist group parenting/unschooling. The Berkeley REACH looks awesome (I am a patron despite living on the other side of the country). The question is whether the Berkeley community is worth the severe toll it places on other rationalist communities. In the past I thought Berkeley had some pretty severe social problems. A lot of people (who were neither unusually well connected nor high status) who moved there reported surprising levels of social isolation. However, things seem to have improved a ton. There are now a ton of group houses near each other, and the online community (discord/tumblr) is pretty inclusive and lets non-high-status people make connections pretty easily.

Also, arguably 'Moloch already won'. So it's hard to tell people to refrain from moving to Berkeley.

===

(I am currently one of the more active people in NYC. The meetup currently occurs in my apartment, etc.)

Comment by deluks917 on Dimensional decoupling · 2018-05-19T18:36:49.873Z · score: 7 (2 votes) · LW · GW

Great post. Decoupling nice/generous is extremely useful. Lots of people look for signs of being 'nice' when they really want someone generous.

Comment by deluks917 on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-13T23:15:05.976Z · score: 1 (7 votes) · LW · GW

This seems well into 'Politics is the Mind Killer' territory.

Comment by deluks917 on Is Rhetoric Worth Learning? · 2018-04-07T15:17:44.851Z · score: 12 (3 votes) · LW · GW

Let's look at a relatively non-controversial example. Say people are arguing about consciousness. As it turns out, I do not agree with Dan Dennett's point of view on this topic. However, let's say I start arguing for the Dennett point of view. How might I be causing harm by doing this? I can think of some plausible mechanisms:

1) I might be disrupting Aumannian agreement. However, in most arguments I don't see many Aumannian processes at work; people are rather reluctant to change their views. I agree it's important to state your beliefs accurately in situations with substantial Aumannian processes, e.g. a double crux or a friend asking me for advice.

2) I am slightly distorting the sample of community opinion. I suppose this is a real harm, but it seems slight. In addition, trying to gauge the distribution of community opinion based on observing discussions seems problematic anyway. The majority of people do not comment much at all. It's better to look at various community surveys. Some questions are not represented on surveys, but in those cases it's very hard to get info anyway.

3) Maybe since I disagree with Dennett I will argue for his views badly? I think this sort of issue comes up a lot for people trying to 'steelman'. However, since I am trying to persuade, my incentives are aligned with arguing well. I am not trying to 'steelman' Dennett and then shoot down his views. I am actually trying to spread them. Various debate organizations seem to think it's possible to argue well for many sides of an argument. The legal profession also seems to assume you can argue well for 'both sides'. (Though in the consciousness debate there are more than two sides.)

4) Maybe it's actively bad to 'spread wrong ideas'. First off, this seems to conflict a lot with the ideology around having a 'marketplace of ideas'. People can evaluate ideas for themselves; I don't think exposing them to Dennett's point of view is hurting them (or hurting society). Maybe this is a crux, but I think the concept of a marketplace of ideas has proven very beneficial (even if it's an oversimplification). Secondly, I don't know that Dennett is wrong! I should not privilege my own opinion too much.

===

Can you explain why you think this sort of behavior is harmful and which norms are being broken?

Comment by deluks917 on Is Rhetoric Worth Learning? · 2018-04-07T05:46:37.099Z · score: 25 (8 votes) · LW · GW

It's difficult to discuss this topic on lesswrong for several reasons:

1) The bulk of most people's experience 'trying to convince other people of things' comes from political discussions. This is probably not an optimal state of affairs, but it's still true for most rationalists. Most of my experience 'learning how to persuade people' comes from discussing things like libertarianism and climate change. I can't go into details about those things on lesswrong.

2) Talking about how persuasive you are makes you sound arrogant and might make you sound manipulative. So it's difficult to talk about successes you have had. Even worse, people you convinced of things don't want to hear about your master persuader powers.

3) The 'Dark Arts' taboo is pretty strong. When I was younger I was very bad at persuading people. Looking back, I was just spamming arguments that only made sense to people who shared my moral foundations (and they are relatively, but not ridiculously, rare). At some point I decided to stop trying to make 'good' arguments and just try to convince people of things. I would not actively lie or mislead, but I stopped trying to follow especially truth-seeking norms (as I understood them). This was actually very educational. The latter approach led to me spending much more time trying to really understand and engage with different points of view. I followed this approach for many years until I decided I had learned enough to change mindset. I only practiced this sort of attitude in political discussions with little or no real-world ramifications. But even talking about this can be socially risky if one is not careful. Among other things, I argued for lots of positions I didn't actually hold (with no disclaimers) and made tons of arguments I personally found unpersuasive. This rubs a lot of people the wrong way.

Comment by deluks917 on *Deleted* · 2018-03-28T17:45:56.392Z · score: 15 (3 votes) · LW · GW

Why is this on lesswrong?

Comment by deluks917 on Explicit and Implicit Communication · 2018-03-21T17:34:53.949Z · score: 42 (9 votes) · LW · GW

The 8 point sabotage strategy is amazing. One of the best things I have ever found by reading lesswrong.

Comment by deluks917 on A Concrete Multi-Step Variant of Double Crux I Have Used Successfully · 2018-03-15T03:40:04.554Z · score: 18 (4 votes) · LW · GW

It's perhaps best to give concrete examples. This is roughly the set of topics we discussed in the CFAR double crux. It seemed possible to discuss these fruitfully in a short amount of time. (I didn't put this in the article because I really want to avoid people making this thread into a discussion of CFAR's value.)

-- How much is learning in person qualitatively different from learning via blogs

-- Whether it's an isolated demand for rigor to judge CFAR relative to the best self-improvement techniques you could use (since few people use those). Is CFAR competitive relative to things like 'many hours of private tutoring'?

-- How much of the value of CFAR comes from the curriculum and how much comes from the Alumni Network and Events

-- What percentage of one's money and effort should people generally be spending on self-improvement

-- How skeptical should we be about the CFAR studies

You should ask to discuss topics where either agreement seems relatively easy to reach or you want to get more details from your partner. You might want details because they have info you don't, because you want to ask some key questions, or because you want to hear their model explained at length. If a disagreement about a crux seems especially deep, you should add it to the list of 'prickly cruxes' and handle it in step 6.

In general the goal is to make progress and find non-obvious statements you can both agree to. This requires focusing the discussion on areas where progress is likely. In the case of the CFAR double crux I felt I learned a lot and got a much better model of which people will benefit from CFAR. The goal is to make genuine, mutually agreed-upon headway, not necessarily to resolve all disagreement.

Comment by deluks917 on An alternative way to browse LessWrong 2.0 · 2018-02-20T18:32:11.001Z · score: 9 (2 votes) · LW · GW

I love the feel of the site and would personally prefer to swap to reading GW. Two questions:

1) How does the RSS feed work for GW?

2) Can anyone I should trust vouch for GW? It's a little dicey to log into this site with my lw password. By 'person I should trust' I mean someone I know or someone sufficiently high-profile (e.g. Scott, Vaniver) who has a lot of reputation to lose if we get screwed.

Comment by deluks917 on Rational Feed: Last Week's Community Articles and Some Recommended Posts · 2017-09-29T17:31:45.581Z · score: 5 (1 votes) · LW · GW

This gets posted only on r/ssc and lesswrong.

The daily version is posted on discord and my blog deluks917.wordpress.com

Comment by deluks917 on Musings on Double Crux (and "Productive Disagreement") · 2017-09-28T13:06:22.838Z · score: 28 (9 votes) · LW · GW

I am genuinely confused by the discourse around double crux. Several people I respect seem to think of DC as a key intellectual method. Duncan (curriculum director at CFAR) explicitly considers DC to be a cornerstone CFAR technique. However, I have tried to use the technique and gotten nowhere.

Ray deserves credit for identifying and explicitly discussing some of the failure modes I ran into. In particular, DC-style discussion frequently seems to recurse down to very fundamental issues in philosophy and epistemology. Twice I have tried to discuss a concrete practical issue via DC and wound up discussing utility aggregation; in these cases we were both utilitarians and we still couldn't get the method to work.

I have to second Said Achmiz's request for public examples of double crux going well. I once asked Ray for an example via email and received a link to Sarah Constantin's blog post. This post is quite good and caused me to update towards the view that DC can be productive. But it doesn't contain the actual DC conversation, just a summary of the events and the lessons learned. I want to see an actual, for real, fully detailed example of DC being used productively. I don't understand why no such examples are publicly available.

Comment by deluks917 on Rational Feed: Last Week's Community Articles and Some Recommended Posts · 2017-09-26T18:03:39.599Z · score: 5 (1 votes) · LW · GW

Thanks

Comment by deluks917 on Rational Feed · 2017-09-18T14:21:26.705Z · score: 6 (2 votes) · LW · GW

I think the title was changed after I wrote the review. These reviews are published and compiled daily.

Comment by deluks917 on Machine Learning Group · 2017-07-17T01:37:57.862Z · score: 6 (2 votes) · LW · GW

I am in the group. I am getting started tomorrow!

Comment by deluks917 on Idea for LessWrong: Video Tutoring · 2017-06-27T20:38:04.190Z · score: 5 (1 votes) · LW · GW

I would attend the OKC presentation.

Comment by deluks917 on Idea for LessWrong: Video Tutoring · 2017-06-23T23:10:20.089Z · score: 9 (5 votes) · LW · GW

Awesome idea. I signed up for both roles (though I can't teach anything until August). I predict the most in-demand subject will be machine learning. After that I would guess stats/probability. I wonder how easy it will be to coordinate ~5 learners to agree to one "class". I would guess this project will wind up having 2-3 people in some "classes", assuming the tutors agree. But perhaps I am wrong. It's hard to get people to agree to both a subject and a difficulty level.

Comment by deluks917 on Concrete Ways You Can Help Make the Community Better · 2017-06-19T13:40:38.307Z · score: 5 (1 votes) · LW · GW

I wonder how many comments non-Eliezer/Scott (Yvain) threads got on the old lesswrong. Though even if other posters got a lot of comments, it's possible the presence of Eliezer/Scott drove a lot of people to check the site (and subsequently comment on many threads). But it would still be a good sign if the engagement level was higher on general non-sequence threads. Perhaps we could eventually restart the old dynamics.

Comment by deluks917 on Concrete Ways You Can Help Make the Community Better · 2017-06-17T22:31:56.811Z · score: 7 (3 votes) · LW · GW

"Secondly it seems that they very best content creators spend some time writing and making information freely available, detailing their goals and so on, and then eventually go off to pursue those goals more concretely, and the content creation on the site goes down."

That is a rather good point. The point suggests that if we want to keep lesswrong a healthy community we need to maintain a strong pipeline.

I see both sides of the 'radio silence' thing. On one hand, it's good to let other people know about your project in case they want to get involved. On the other hand, making a project 'public' creates a lot of stuff to deal with. We both agree public criticism can be quite harsh. Organizing a group effort is difficult. Maintaining a cohesive vision becomes more difficult the more people are involved. Finally, a decent number of hyped rationalist projects seemed to have fundamental problems (Arbital comes to mind*).

My personal intuition is that in many cases it's better to take a middle ground about when to take ideas public. Put together something like a "minimum viable project" or at least a true "proof of concept". Once you have that, it's easier to keep a coherent vision and it's more likely the project is a good idea. It is suboptimal to spend lots of time organizing people and dealing with feedback before you have determined your project is a fundamentally sound idea. In this post I tried to mention projects which were already underway or that could be done on a small scale. I should note I am not very confident in my preceding intuition and would welcome your feedback.

*I am aware of the personal problems that hurt Arbital. I am also aware that there are/were plans for it to pivot to a micro-blogging platform. But the original vision of Arbital seems flawed. The Arbital leadership basically confirmed this some time ago.

Comment by deluks917 on Concrete Ways You Can Help Make the Community Better · 2017-06-17T15:44:15.375Z · score: 6 (2 votes) · LW · GW

It's unclear whether the community can or should have a "leader" again. A lot of the community no longer sufficiently agrees with Eliezer. The only person enough people would consent to follow is Scott 'The Rightful Caliph' Alexander. And Scott doesn't want the job.

I think the community can flourish despite remaining decentralized. But it's admittedly trickier.

Comment by deluks917 on Mode Collapse and the Norm One Principle · 2017-06-07T04:12:51.995Z · score: 6 (2 votes) · LW · GW

Really great post. I really enjoyed the theoretical justification for a very practical idea. Overall I found the machine learning argument caused me to update significantly in favor of "norm-one" criticism. Some comments and questions:

1) It's not that clear to me how to estimate the "norm" of one's criticism. We aren't going to do math to compute this stuff. What kind of heuristics can we use? Notably, the community requires some degree of consistency in how people estimate criticism norms.

2) If you strongly disagree with a proposition X it might be hard to give any norm-one criticism. Maybe someone is suggesting plan X and you very strongly think they should abandon the plan. It might feel dishonest, insincere, or immoral to give advice on how to make plan X go slightly less badly.

3) Say a friend of yours asks you to critique their writing. This advice basically says you should hold back on some or much of your feedback. In theory you should try to send only the feedback that is most useful but fits inside a "norm-one" limit. This seems different from the "wall of red ink" technique that is commonly praised in writing circles. (Though I find walls of red ink demoralizing, I am not a writer.)

4) Is it ever useful for someone to say: "Ignore the norm-one limit. Just give me all the criticism you have"? Will it become "low status" not to ask for unlimited-norm criticism?