Posts

Reality has a surprising amount of detail 2017-05-13T20:02:35.768Z · score: 20 (15 votes)
Submission and dominance among friends 2017-03-28T02:43:38.494Z · score: 8 (9 votes)
If only we had taller been 2017-03-04T23:15:42.282Z · score: 4 (5 votes)
The "I Already Get It" Slide 2017-02-01T03:11:00.551Z · score: 18 (13 votes)
LessWrong Help Desk - free paper downloads and more (2014) 2014-01-16T05:51:40.710Z · score: 32 (32 votes)
[Link] New prize on causality in statistics education 2012-12-15T23:50:25.709Z · score: 4 (9 votes)
What are you working on? December 2012 2012-12-02T18:49:31.017Z · score: 7 (8 votes)
Meetup : Seattle Meetup: Bayes Theorem Tutorial 2012-11-29T07:31:04.236Z · score: 2 (3 votes)
How well defined is ADHD? 2012-11-15T23:34:02.266Z · score: 10 (12 votes)
Meetup : (Seattle) Conscientiousness 2012-11-09T22:10:45.473Z · score: 0 (1 votes)
LessWrong help desk - free paper downloads and more 2012-10-07T23:45:13.566Z · score: 36 (37 votes)
Review: Selfish Reasons to Have More Kids 2012-05-29T18:00:02.945Z · score: 17 (28 votes)
Papers framing anthropic questions as decision problems? 2012-04-26T00:40:11.266Z · score: 3 (4 votes)
Generic Modafinil sales begin? 2012-04-02T15:53:15.967Z · score: 14 (15 votes)
Meetup : Seattle: Decision Theory 2012-03-31T21:07:52.231Z · score: 2 (3 votes)
Meetup : Seattle, Diseased Thinking and evidence on parenting 2012-01-11T16:33:05.210Z · score: 3 (4 votes)
Meta analysis of Writing Therapy 2012-01-01T02:02:38.425Z · score: 18 (15 votes)
What are you working on? December 2011 2011-12-13T15:27:48.980Z · score: 7 (8 votes)
Meetup : Seattle biweekly meetup: problem solving 2011-12-01T16:37:43.873Z · score: 0 (1 votes)
Is latent Toxoplasmosis worth doing something about? 2011-11-17T17:04:48.138Z · score: 23 (24 votes)
Meetup : The Planning Fallacy 2011-11-11T01:48:58.825Z · score: 0 (1 votes)
Should I get genotyped? 2011-10-24T15:51:56.739Z · score: 7 (10 votes)
Reminder: $250 LessWrong source introduction prize submissions due soon 2011-10-20T02:47:02.800Z · score: 1 (2 votes)
What are you working on? 2011-10-06T16:19:37.266Z · score: 9 (10 votes)
Prize for the best introduction to the LessWrong source ($250) 2011-10-05T00:08:33.404Z · score: 17 (22 votes)
Questions about doing literature searches 2011-09-15T17:27:17.133Z · score: 6 (7 votes)
What are good techniques and resources for teaching bayes theorem hands on? 2011-09-09T15:51:51.543Z · score: 1 (2 votes)
Meetup : Seattle Biweekly Meetup: Occam's Razor, Repetition and Time's Up 2011-09-09T04:46:49.745Z · score: 0 (1 votes)
Free research help, editing and article downloads for LessWrong 2011-09-06T21:13:05.226Z · score: 55 (56 votes)
What are good topics for literature review prizes? 2011-09-02T23:48:09.248Z · score: 4 (5 votes)
Spaced Repetition literature review prize: And the winner is... 2011-08-19T20:35:55.559Z · score: 26 (27 votes)
Anki deck for Cognitive Science in One Lesson 2011-08-16T16:23:35.441Z · score: 5 (8 votes)
What are you working on? 2011-08-15T14:43:48.314Z · score: 8 (9 votes)
Meetup : Seattle Biweekly meetup 2011-08-03T18:26:45.078Z · score: 1 (2 votes)
Requesting low cost/high payoff projects ideas 2011-07-30T20:48:41.757Z · score: 21 (22 votes)
How credible is neuroeconomics? 2011-07-30T18:51:55.943Z · score: 3 (6 votes)
Meetup : Biweekly Sunday Seattle meetup: talking about identity 2011-07-20T23:24:12.252Z · score: 0 (1 votes)
GiveWell interview with major SIAI donor Jaan Tallinn 2011-07-19T15:10:25.905Z · score: 17 (18 votes)
Spaced Repetition literature review contest submissions: August 1st deadline 2011-07-18T15:58:39.237Z · score: 2 (3 votes)
Motivation research presentation 2011-07-11T14:20:45.681Z · score: 2 (3 votes)
Meetup : Seattle Regular Sunday Meetup 2011-07-08T18:27:16.666Z · score: 0 (1 votes)
Has SIAI/FHI considered putting up prizes for contributions to important problems? 2011-07-03T17:26:41.282Z · score: 12 (13 votes)
Volunteers needed to work on LessWrong's public goods problem 2011-07-03T01:08:08.380Z · score: 22 (25 votes)
psychology and applications of reinforcement learning: where do I learn more? 2011-06-26T20:56:26.514Z · score: 2 (3 votes)
Meetup : Regular Seattle Meetup 2011-06-22T13:33:44.523Z · score: 0 (1 votes)
Mostly silly alternatives to the word 'rationalist' 2011-06-22T04:53:00.539Z · score: 2 (9 votes)
[prize] new contest for Spaced Repetition literature review ($365+) 2011-06-18T18:31:48.680Z · score: 17 (19 votes)
Building habits: requesting advice on installing mental software 2011-06-12T04:17:34.791Z · score: 4 (5 votes)
[prize] Spaced Repetition literature review 2011-06-07T03:28:55.842Z · score: 17 (18 votes)
What are you working on? June 2011 2011-06-05T20:31:00.250Z · score: 6 (7 votes)

Comments

Comment by jsalvatier on Probabilistic Programming and Bayesian Methods for Hackers · 2017-05-24T02:42:28.070Z · score: 1 (2 votes) · LW · GW

There aren't that many that I know of. I do think it's much more intuitive and lets you build more nuanced models that are useful for the social sciences. You can fit the exact model that you want instead of needing to fit your case into a preexisting box. However, I don't know of many examples where this is hugely important in practice.

The lack of obviously valuable use cases is part of why I stopped being that interested in MCMC, even though I invested a lot in it.

There is one important industrial application of MCMC: hyperparameter sampling in Bayesian optimization (Gaussian Processes with priors over the hyperparameters). And the hyperparameter sampling does substantially improve things.
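As a toy illustration of the kind of thing I mean (all the data and numbers here are made up, and a real system would use a fancier sampler), here's a minimal random-walk Metropolis sampler over the lengthscale and amplitude of an RBF-kernel GP, with log-normal priors on both:

```python
import numpy as np

# Toy data: noisy observations of an unknown 1-D function.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 8)
y = np.sin(6 * X) + 0.1 * rng.standard_normal(8)

def log_marginal_likelihood(lengthscale, amplitude, noise=0.1):
    """GP log marginal likelihood with an RBF kernel."""
    d = X[:, None] - X[None, :]
    K = amplitude**2 * np.exp(-0.5 * (d / lengthscale) ** 2)
    K += noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

def log_posterior(theta):
    ls, amp = theta
    if ls <= 0 or amp <= 0:
        return -np.inf  # hyperparameters must be positive
    # Weakly informative log-normal priors on both hyperparameters.
    log_prior = -0.5 * (np.log(ls) ** 2 + np.log(amp) ** 2)
    return log_prior + log_marginal_likelihood(ls, amp)

# Random-walk Metropolis over (lengthscale, amplitude).
theta = np.array([0.5, 1.0])
current = log_posterior(theta)
samples = []
for i in range(2000):
    proposal = theta + 0.1 * rng.standard_normal(2)
    prop_lp = log_posterior(proposal)
    if np.log(rng.uniform()) < prop_lp - current:
        theta, current = proposal, prop_lp
    if i >= 1000:  # discard the first half as burn-in
        samples.append(theta.copy())

samples = np.array(samples)
print(samples.mean(axis=0))
```

In Bayesian optimization, the payoff is that the acquisition function gets averaged over these hyperparameter samples instead of being computed at a single point estimate, which makes the search much less fragile when the data underdetermines the kernel.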

Comment by jsalvatier on Probabilistic Programming and Bayesian Methods for Hackers · 2017-05-23T03:45:27.904Z · score: 7 (7 votes) · LW · GW

Funny enough, as a direct result of reading the sequences, I got super obsessed with Bayesian stats and that eventually resulted in writing PyMC3 (which is the software used in the book).

Comment by jsalvatier on Reality has a surprising amount of detail · 2017-05-14T22:57:50.523Z · score: 0 (0 votes) · LW · GW

Likewise

Comment by jsalvatier on Reality has a surprising amount of detail · 2017-05-14T08:35:41.144Z · score: 1 (1 votes) · LW · GW

If you want to see a billion examples of details mattering, watch anything about shipbuilding by this guy: https://www.youtube.com/watch?v=jM6R81SiKgA

Comment by jsalvatier on Reality has a surprising amount of detail · 2017-05-14T08:32:37.116Z · score: 1 (1 votes) · LW · GW

Great description. Yes, I think that's exactly why people are reluctant to see other people's points.

Comment by jsalvatier on Reality has a surprising amount of detail · 2017-05-14T06:41:59.627Z · score: 1 (1 votes) · LW · GW

Yeah, I wasn't too specific on that. I do endorse the piece that jb55 quotes below, but I'm still figuring out what to tell people to do. I'll hopefully have more to say in the coming months.

Comment by jsalvatier on Reality has a surprising amount of detail · 2017-05-13T20:31:15.281Z · score: 1 (1 votes) · LW · GW

John Maxwell posted this quote:

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it.

-- Daniel Kahneman

Comment by jsalvatier on Submission and dominance among friends · 2017-03-30T01:48:25.036Z · score: 2 (2 votes) · LW · GW

I want you to come up to me, put your arm around me, ask me how I am and start telling me about the idea you’ve got. Show me you ought to be in charge, because right now I’m a little lost and you’re not.

My desire is not for some permanent power structure, but for other people to sometimes temporarily take leadership, with the expectation that I will probably do so in the future as well. I think one of the most valuable things I do is sit people down and say 'look, there's this problem you have that you don't see, but I think it's fixable. You're stuck thinking of things as X, but actually Y.' And I wish people would return the favor more often.

In retrospect, I should have been way more clear about this.

Comment by jsalvatier on Submission and dominance among friends · 2017-03-29T18:14:01.884Z · score: 1 (1 votes) · LW · GW

Yes, I was trying mostly to talk about #2. I like the dominance frame because I think this kind of fluid dominance role is something like the Proper Use of Dominance: dominance as enabling swift changes in status to track changes in legitimate authority.

Seems like that wasn't really very clear though.

I think I additionally want to emphasize people being comfortable temporarily taking responsibility for other people. Sometimes I want someone to come in and tell me I have a problem I don't see and how to solve it. I try to do this for others because I think it's one of the most valuable services I can provide: letting them see outside themselves.

Comment by jsalvatier on Submission and dominance among friends · 2017-03-29T17:49:41.996Z · score: 0 (0 votes) · LW · GW

No?

Comment by jsalvatier on Submission and dominance among friends · 2017-03-29T03:34:32.164Z · score: 0 (0 votes) · LW · GW

Thanks :)

Comment by jsalvatier on Submission and dominance among friends · 2017-03-28T02:44:03.887Z · score: 0 (0 votes) · LW · GW

Thanks, had to make a new link.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-10T22:35:45.961Z · score: 0 (0 votes) · LW · GW

There are certainly people who meet it better than others.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-10T21:21:08.268Z · score: 0 (0 votes) · LW · GW

This comment on that post is especially relevant.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-10T21:18:33.699Z · score: 0 (0 votes) · LW · GW

(Sorry for the long delay)

Ah, I see why you're arguing now.

(And an idea that works for central examples but fails for edge cases is an idea that fails.)

Ironically, this is not a universal criterion for the success of ideas. Sometimes it's a very useful criterion (think mathematical proofs). Other times it's not (think 'choosing friends' or 'mathematical intuitions').

For example, the idea of 'cat' fails for edge cases. Is this a cat? Sort of. Sort of not. But 'cat' is still a useful concept.

Concepts are clusters in thing space, and the concept that I am pointing at is also a cluster.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-03T23:52:08.641Z · score: 0 (0 votes) · LW · GW

Maybe I'm still misunderstanding.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-03T23:16:41.712Z · score: 0 (0 votes) · LW · GW

Ahhhh, maybe I see what you're complaining about.

Are you primarily thinking of this as applying to creationists etc?

Part of the reason I put the caveat 'people about as reasonable as you' in the first place was to exclude that category of people from what I was talking about.

That is not the central category of people I'm suggesting this for. Also, I'm not clear on why you would think it was.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-03T20:51:21.178Z · score: 0 (0 votes) · LW · GW

There's a point intermediate between "completely new" and "just being difficult".

Fair enough. To me, your previous words pattern-matched very strongly to 'being difficult because they think this is dumb but don't want to say why because it seems like too much work' (or something). My mistake.

I didn't mean new to LW, I meant new to the questions you were posing and the answers you got.

Back on the topic at hand,

In order to do that I would have to assume that I know what questions are the right ones and that he does not. Assuming this would amount to assuming that I am right about the subject and he is wrong.

Consider the following: you meet a friend of a friend who seems reasonable enough, and they start telling you about their startup. They go on and on for a long time but try as you might, you can't figure out how on earth they're going to make money. Finally, you delicately ask "how do you intend to make money?". They give some wishy washy answer.

Here they have failed to ask a question that you know to be important. You know this quite definitely. Even if they thought the question were somehow not relevant, if they knew it was usually relevant they would probably explain why it's not in this particular case. Much more likely that they are just not very good at thinking about startups.

Similarly, if they anticipate all of your objections and questions, you will probably think they are being pretty reasonable and be inclined to take them more seriously. And rightfully so, that's actually decent evidence.

in which case I am again assuming I am right about the subject

There's a middle ground between 'assuming I am right' and 'assuming they are right'. You can instead be unsure how likely they are to be right, and try to figure it out. One way to figure it out is by assessing whether they seem to be doing good epistemic things (do they actually pause to think about things, do they try to understand people's points, do they respond to the actual question, do they make arguments that later turn out to be convincing, do they base things on believable numbers, do they present actual evidence for their views, etc.).

Are you familiar with the idea of 'latent variables' from Bayesian statistics? Are you used to thinking about it in the context of people and the real world? The basic idea is that you can infer a hidden property of a thing by observing many things it affects (even if it affects them only noisily).

For example, if you go to a small school and observe many students doing very impressive science experiments, you might infer some hidden cause that makes the school have smart students. Thus you might also guess that, in several years, different students at the same school will do well on their SATs, even though that's not directly related to your actual observations.

I suspect thinking a bunch about latent variables in the real world might be useful for you, especially as it relates to inferring where people are reasonable and to what degree. Especially the idea of using data from different topics to improve your estimate for a given topic (say, using test scores from different students to improve your quality estimate for a specific student).

This might be a good starting point: http://www.stat.columbia.edu/~gelman/research/published/multi2.pdf (read until sec 2.3).
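As a toy sketch of this kind of inference (the numbers are invented, and this is an empirical-Bayes shortcut rather than full Bayesian inference), here's the school example in code: each school has a latent quality, we only see noisy student scores, and partially pooling each school's mean toward the grand mean recovers the latent quality better than the raw means usually do:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent school quality affects every student's score, noisily.
n_schools, n_students = 20, 5
school_quality = rng.normal(0.0, 1.0, n_schools)  # hidden variable
scores = school_quality[:, None] + rng.normal(0.0, 2.0, (n_schools, n_students))

# Empirical-Bayes shrinkage: pull each school's observed mean toward the
# grand mean, weighting by how informative the school's own data is.
school_means = scores.mean(axis=1)
within_var = 2.0**2 / n_students  # sampling variance of a school mean
between_var = max(school_means.var() - within_var, 1e-6)
shrink = between_var / (between_var + within_var)
pooled = shrink * school_means + (1 - shrink) * school_means.mean()

# Compare estimates of the latent quality against the truth.
raw_err = np.mean((school_means - school_quality) ** 2)
pooled_err = np.mean((pooled - school_quality) ** 2)
print(raw_err, pooled_err)
```

The point is the structure: many noisy observations per school let you estimate a hidden variable, and information from *other* schools sharpens the estimate for each one. That's the same move as using someone's reasonableness on nearby topics to estimate it on this one.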

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-03T17:32:59.531Z · score: 0 (0 votes) · LW · GW

Your points have what seem to me like pretty obvious responses. If this is actually new to you, then I'm very happy to have this discussion.

But I suspect that you have some broader point. Perhaps you think my overall point is misguided or something. If that's the case, then I think you should come out and say it rather than what you're doing. I'm totally interested in thinking about and responding to actual points you have, but I'm only interested in having arguments that might actually change my mind or yours.

But again, if this is actually new, I'm very interested.

On your actual points:

Not being as sensible as me on these topics isn't the same thing as not being as sensible as me in general.

Sure, but they are also very closely related, and knowing about one will help you make inferences about the other.

without (in effect) first concluding that he's wrong on the topic

There are plenty of excellent ways to make educated guesses about how sensible someone is being in a given area.

For example, you might look at closely or not so closely related topics and see if they are sensible there. Or you might look at a few of their detailed arguments and see if they ask the questions you would ask (or similarly good ones). You can see if they respond to counterarguments in advance. You can see if they seem like they change their mind substantially based on evidence. etc. etc. etc.

But as I said, If this is actually new to you, I'm actually super excited to describe further.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-03T16:05:39.801Z · score: 0 (0 votes) · LW · GW

At the very least, Jiro believes that they are not as sensible as him on those topics.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-03T00:13:36.656Z · score: 1 (1 votes) · LW · GW

From the article

If Paul is at least as sensible as you are and his arguments sound weak or boring, you probably haven’t grokked his real internal reasons.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-02T03:09:52.254Z · score: 0 (0 votes) · LW · GW

Not sure! If it was in the last couple months there's a good chance.

Comment by jsalvatier on The "I Already Get It" Slide · 2017-02-02T02:53:07.454Z · score: 1 (1 votes) · LW · GW

Yup!

this disparity in strength of beliefs is in itself good evidence that there is information we are missing

That's a nice way of summarizing.

I would emphasize the difference between parsing the arguments they're explicitly making and understanding the reasons they actually hold the beliefs they do.

They may not be giving you the arguments that are the most relevant to you. After all, they probably don't know why you don't already believe what they do. They may be focusing on parts that are irrelevant for convincing you.

By the way, nice job trying to summarize my view. As you'll see in the coming weeks, that's close to the move I recommend for extracting people's intuitions: just repeatedly try to make their argument for them.

Comment by jsalvatier on Funding the Reproducibility Crises as effective giving · 2017-01-27T06:50:48.448Z · score: 2 (2 votes) · LW · GW

Thanks, this was super useful context.

Seems like it's more that the institutions are broken than that few people care. Or it could be that most scientists don't care that much but a significant minority care a lot. And for that to cause lots of change you need money, but to get money you need the traditional funders (who don't care, because most scientists don't care) or you need outside help.

Comment by jsalvatier on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-20T19:17:24.018Z · score: 0 (0 votes) · LW · GW

Reddit/HN seem like examples of extreme success; we should probably not behave as if we will definitely enjoy extreme success.

Comment by jsalvatier on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-20T04:56:07.078Z · score: 1 (1 votes) · LW · GW

I make the suggestion precisely because we will definitely lose that war.

Comment by jsalvatier on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-19T19:57:18.151Z · score: 7 (7 votes) · LW · GW

I wonder if we could find a scalable way of crossposting Facebook and G+ comments, the way Jeff Kaufmann does on his blog (see the comments: https://www.jefftk.com/p/leaving-google-joining-wave)?

That would lower the friction substantially.

Comment by jsalvatier on Improve comments by tagging claims · 2016-12-26T18:49:31.677Z · score: 2 (2 votes) · LW · GW

I think you may be misunderstanding why people focus on selection mechanisms. Selection mechanisms can have big effects on both the private status returns to quality in comments (~5x) and the social returns to quality (~1000x). Similar effects are much less plausible with treatment effects.

Claim: selection mechanisms are much more powerful than treatment effects.

I think people are using the heuristic: If you want big changes in behavior, focus on incentives.

Selection mechanisms can make relatively big changes in the private status returns to making high quality comments by making high quality comments much more recognized and visible. That makes the authors higher status, which gives them good reason to invest more in making the comments. If you get 1000x the audience when you make high quality comments, you're going to feel substantially higher status.

Selection mechanisms can make the social returns to quality much larger by focusing people's attention on high quality comments (whereas before, many people might have had difficulty identifying high quality even after reading it).

Comment by jsalvatier on LessWrong Help Desk - free paper downloads and more (2014) · 2015-05-30T23:44:39.069Z · score: 1 (1 votes) · LW · GW

It turns out Cochrane does provide their data. Very nice of them.

Also, at least in this case, my own meta-analysis based on their data perfectly replicated their results. The inefficiency I thought was there was not there.

Comment by jsalvatier on LessWrong Help Desk - free paper downloads and more (2014) · 2015-05-30T23:43:24.133Z · score: 0 (0 votes) · LW · GW

Metamed went out of business recently.

Comment by jsalvatier on Announcement: The Sequences eBook will be released in mid-March · 2015-03-03T05:53:53.858Z · score: 1 (1 votes) · LW · GW

Ah, I didn't realize you were also doing a print version.

Comment by jsalvatier on Announcement: The Sequences eBook will be released in mid-March · 2015-03-03T04:32:01.415Z · score: 2 (2 votes) · LW · GW

I'm very surprised you guys are releasing them all at once rather than spreading them out over a year or something. That seems like it would generate more interest.

Also, I'm somewhat disappointed that they were not more substantially edited. When I show the sequences to other people, they often complain a lot about the examples being terrible and more offensive than necessary, even if they agree with the argument. But I get that that would require a lot of work.

Comment by jsalvatier on Who are your favorite "hidden rationalists"? · 2015-01-14T19:51:36.279Z · score: 2 (2 votes) · LW · GW

Two months later, he reemerged at his own domain, promising to avoid a particular kind of discourse, one aimed at closing the minds of those on one’s own side. Although Kling was never among the worst offenders on this score, one could indeed sense a shift in his tone. He prioritized framing his opponents’ positions in the most favorable light, and he developed a framework for understanding political issues from progressive, conservative, and libertarian perspectives.

Hey, that sounds pretty good! This was precisely my problem with him on EconLog. My ideology matches his a lot, but I was irritated because he seemed to make okay, but not especially good, arguments for things I agreed with, and seemed to frame things in unnecessarily charged ways. He often framed things in a very libertarian way (in a Three Languages of Politics sense, which seems like a pretty cool idea), and I'm glad he does that a lot less!

His book sounds interesting.

Comment by jsalvatier on Who are your favorite "hidden rationalists"? · 2015-01-14T00:45:46.631Z · score: 0 (0 votes) · LW · GW

I'm surprised; I followed him on EconLog for a long, long time, but usually found him too ideological for my tastes (even though I lean pretty libertarian) and just not that interesting. What are some of your favorites?

Comment by jsalvatier on Low Hanging fruit for buying a better life · 2015-01-07T20:24:35.461Z · score: 4 (4 votes) · LW · GW

The standard advice for the best quality/price tradeoff seems to be Victorinox knives with the fibrox handle.

Comment by jsalvatier on Happiness Logging: One Year In · 2014-10-14T23:37:11.909Z · score: 0 (0 votes) · LW · GW

For certain formulations of this, that objection seems irrelevant. Imagine that instead of a 1-10 scale, you had a ranked list of activities (or sets of activities).

Comment by jsalvatier on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-26T22:28:21.305Z · score: 2 (2 votes) · LW · GW

Remember there's no such thing as zero utility. You can assign an arbitrarily bad value to failing to resolve, but that seems a bit arbitrary.

Comment by jsalvatier on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-26T22:26:39.889Z · score: 2 (2 votes) · LW · GW

I think the key benefit of the parliamentary model is that the members will vote-trade in order to maximize their expectation.

Comment by jsalvatier on Another type of intelligence explosion · 2014-08-28T18:57:08.470Z · score: 1 (1 votes) · LW · GW

I think you're sneaking in a lot with the measure of health. As far as I can see, the only reason it's dangerous is that it cashes out in the real world, on the real broad population rather than in a simulation. Having the AI reason about a drug's effects on a real-world population definitely seems like a general skill, not a narrow skill.

Comment by jsalvatier on An example of deadly non-general AI · 2014-08-24T18:17:13.489Z · score: 3 (3 votes) · LW · GW

'Narrow AI can be dangerous too' is an interesting idea, but I don't think this is very convincing. I think you've accidentally snuck in some things not inside its narrow domain. In this scenario the AI has to model the actual population, including the quantity of the population, which doesn't seem too relevant. Also, it seems unlikely that people would use reducing the absolute number of deaths as the goal function, as opposed to the chance of death for those already alive.

Comment by jsalvatier on Connection Theory Has Less Than No Evidence · 2014-08-04T22:10:58.619Z · score: 5 (5 votes) · LW · GW

One part that worries me is that they put on the EA Summit (and ran it quite well), and thus had a largish presence there. Anders' talk was kind of uncomfortable for me to watch.

Comment by jsalvatier on The Correct Use of Analogy · 2014-07-17T22:52:27.320Z · score: 0 (0 votes) · LW · GW

I like the idea of coming up with lots of analogies and averaging them or seeing if they predict things in common.

Comment by jsalvatier on Some alternatives to “Friendly AI” · 2014-07-05T22:36:38.267Z · score: 0 (0 votes) · LW · GW
  1. Human Compatible AGI
  2. Human Safe AGI
  3. Cautious AGI
  4. Secure AGI
  5. Benign AGI
Comment by jsalvatier on Against utility functions · 2014-06-20T19:06:15.508Z · score: 1 (1 votes) · LW · GW

Even if you don't think it's the ideal, utility-based decision theory does give us insights that I don't think you can naturally pick up from anywhere else we've discovered yet.

Comment by jsalvatier on LessWrong as social catalyst · 2014-05-29T00:07:03.838Z · score: 2 (2 votes) · LW · GW

About 50% of my day to day friends are LWers. All 3 of my housemates are LWers. I've hosted Yvain and another LWer. Most of the people I know in SF are through LW. I've had a serious business opportunity through someone I know via LW. I've had a couple of romantic interests.

Comment by jsalvatier on Arguments and relevance claims · 2014-05-07T19:42:46.949Z · score: 0 (0 votes) · LW · GW

This is a good thing, but it also means that we're probably less likely than average to comment about an argument's relevance even in cases where we should comment on it.

That's my experience with myself.

Comment by jsalvatier on Channel factors · 2014-03-15T01:33:18.718Z · score: 1 (1 votes) · LW · GW

Already exists! https://chrome.google.com/webstore/detail/tab-wrangler/egnjhciaieeiiohknchakcodbpgjnchh?hl=en

Comment by jsalvatier on Channel factors · 2014-03-15T01:28:03.722Z · score: 5 (5 votes) · LW · GW

This seems quite close to Beware Trivial Inconveniences. It's good to have an outside established name for this, though.

Comment by jsalvatier on Proportional Giving · 2014-03-06T19:58:53.023Z · score: 0 (0 votes) · LW · GW

Can you expand on that? What do you think would be closer to the right calculation?

Comment by jsalvatier on Proportional Giving · 2014-03-05T01:24:55.098Z · score: 1 (1 votes) · LW · GW

This seems obviously correct to me. In my experience this is not obvious to everyone and many people find it a bit distasteful to talk about. I'm glad you bring it up.

I haven't really tried hard, but I think I would find it pretty difficult to get myself to behave this way.

The way I "resolve" this dissonance is by thinking in terms of a parliamentary model of me. Part of me wants to be altruistic and part of me is selfish, and they sort of "vote" over the use of resources.