

Comment by norswap on Sets and Functions · 2019-10-14T22:19:26.164Z · score: 1 (1 votes) · LW · GW

Here is what confuses me: from before, I thought morphisms were "just" arrows between objects, with a specific identity.

But in the case of functions, we have to smuggle in the set of ordered pairs that define them. Do you simply equate the identity of a function with this set definition?

That might be fine, but it means there needs to be some kind of ... semantics? that gives us the "meaning" (~ implementation) of composition based on the "meaning" (the set of ordered pairs) of the composed morphisms.
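To make my own question concrete, here is how I picture the set-of-pairs view (a toy Python sketch; the names and the `compose` helper are mine, just for illustration):

```python
# A function's "meaning" as a set of ordered pairs (its graph).
f = {(1, 'a'), (2, 'b')}      # f : {1, 2} -> {'a', 'b'}
g = {('a', 'x'), ('b', 'y')}  # g : {'a', 'b'} -> {'x', 'y'}

def compose(g, f):
    # The "semantics" of composition, derived from the pairs themselves:
    # (a, c) is in g.f whenever f maps a to some b and g maps that b to c.
    return {(a, c) for (a, b1) in f for (b2, c) in g if b1 == b2}

# The identity morphism is then just the diagonal set of pairs.
identity = {(x, x) for x in (1, 2)}

assert compose(g, f) == {(1, 'x'), (2, 'y')}
assert compose(f, identity) == f
```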

Am I right here?

Comment by norswap on What's your favorite notetaking system? · 2019-10-07T23:04:31.205Z · score: 1 (1 votes) · LW · GW

I'll add the biggest minus in my book:

Potential alternatives:

Comment by norswap on Candy for Nets · 2019-10-07T22:44:18.663Z · score: 6 (4 votes) · LW · GW

This was a really heartwarming story that brought a smile to my face!

I'd like to give a special shout-out to

As we go I'm going to continue to try very hard not to pressure or manipulate her, while still giving advice and helping her explore her motivations here.

That's very important indeed.

Comment by norswap on How good is the case for retraining yourself to sleep on your back? · 2019-09-30T22:19:39.213Z · score: 4 (3 votes) · LW · GW

I watched one or two videos from this channel a while back and was impressed by the seemingly solid - but unconventional - argument (it was on salt intake). I subscribed and was *dismayed* by further videos. I wouldn't put much stock in either the research being quoted (if you didn't review it yourself) or this channel's treatment of that research.

That being said, I haven't watched this particular video. What it says might all be true.

Comment by norswap on Compilers/PLs book recommendation? · 2019-08-06T02:05:10.447Z · score: 1 (0 votes) · LW · GW

Not a very pointed answer, but a collection of leads:

Most books I can find on compilers/PLs tend to spend most of their time on the text representation (and algorithms for translating programs out of text, i.e. parsing) and the machine-code representation (and algorithms for translating programs into machine code).

There are good reasons for the time spent on them — they are more difficult than the parts that go in the middle, which is "merely" software engineering, although of an unusual kind.

There is also a dearth of resources on the topic. And because of that, it is actually fairly hard to get into.

One reason for the dearth is that the basics are quite simple: generate a tree as the output of parsing, then transform this tree. Generate derivative trees and graphs from these trees to perform particular analyses.

Phrased like that, it seems that knowledge on how to work with trees and graphs is going to serve you well, and that is indeed correct.
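For illustration, here's the kind of thing I mean by "generate a tree, then transform it" (a toy Python sketch; the node types and the constant-folding pass are mine, just for illustration):

```python
from dataclasses import dataclass

# A minimal expression tree, as a parser might produce it.
@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold(node):
    # One classic tree-to-tree transform: constant folding.
    if isinstance(node, Add):
        left, right = fold(node.left), fold(node.right)
        if isinstance(left, Num) and isinstance(right, Num):
            return Num(left.value + right.value)
        return Add(left, right)
    return node

tree = Add(Num(1), Add(Num(2), Num(3)))  # parsed from "1 + (2 + 3)"
assert fold(tree) == Num(6)
```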

A good read (though with a very narrow focus) is the discussion of syntax tree architecture in Roslyn. The Roslyn whitepaper is also quite interesting, though more oriented towards exposing compiler features to users.

Personally, I did some research on implementing name resolution (relating an identifier use to its declaration site) and typing as a reactive framework: you would define the typing rules for your language as inference rules, e.g. once you know the type of node A and the type of node B, you can derive the type of node C. The reactive part was then simply to find the applicable inference rules and run them.
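The idea, very roughly, was this (a hypothetical sketch, not the actual project code; the rule representation is made up for illustration):

```python
# "Reactive" typing: inference rules fire as soon as the facts they
# depend on become available, until a fixed point is reached.
types = {'A': 'int', 'B': 'int'}  # types already derived for nodes A and B

rules = [
    # (inputs, output, derive): once A and B are typed, derive C's type.
    (('A', 'B'), 'C', lambda a, b: 'int' if a == b == 'int' else 'error'),
    # Once C is typed, derive D's type from it.
    (('C',), 'D', lambda c: c),
]

changed = True
while changed:
    changed = False
    for inputs, output, derive in rules:
        if output not in types and all(i in types for i in inputs):
            types[output] = derive(*(types[i] for i in inputs))
            changed = True

assert types == {'A': 'int', 'B': 'int', 'C': 'int', 'D': 'int'}
```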

The project didn't really pan out. In reality, the logic ends up looking quite obfuscated, and it's just easier to write some boring, non-modular old code where the logic is readily apparent.

(Incidentally, fighting against this "it's easier to just code it manually" effect — but in parsing — is what my PhD thesis is about.)

I might advise you to look at research done on the Spoofax language workbench. Spoofax includes declarative languages to specify name binding, typing, semantics, and more. These languages do not offer enormous flexibility, but they cover the most common language idioms. Since those were codified in a formal system (the declarative languages), it might tell you something about the structure of the underlying problem (... which is not really about something quite as simple as data structure selection, but there you have it).

For purposes of this question, I'm not particularly interested in either of these representations - they're not very natural data structures for representing programs, and we mostly use them because we have to.

I'd like to point out that I have seen very convincing arguments to the contrary. One argument in particular was that while the data structures used to represent programs tend to change (for engineering reasons, to support new features, ...), the text representation stays constant. This was made in the context of a macro system, I believe (defending the use of quasiquotes).

Regarding machine code, it would be immensely useful even if we didn't need to run code on CPUs. Look at virtual machines: they work with bytecode. A list of sequential instructions is just the extremum of the idea of translating high-level stuff into a more limited number of lower-level primitives that are easier to deal with.
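To illustrate the lowering idea (a toy stack-machine sketch in Python; the instruction set and names are mine, just for illustration):

```python
# "1 + 2 * 3" lowered to a flat list of instructions for a stack machine,
# then executed sequentially, the way a bytecode VM would.
def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == 'PUSH':
            stack.append(args[0])
        elif op == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

program = [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('MUL',), ('ADD',)]
assert run(program) == 7
```

The whole tree structure of the expression has been compiled away into a handful of lower-level primitives, which is exactly the appeal.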

Is there some other question I should be asking, e.g. a different term to search for?
On the meta-level, where else should I look/ask this question?

For academic literature on the topic, I would look at the proceedings of the GPCE (Generative Programming: Concepts & Experiences) and SLE (Software Language Engineering) conferences.

I think there exist some program transformation frameworks out there, and you might also learn something from them, though in my experience they're quite byzantine. One such is Rascal MPL (meta-programming language). Another is Stratego (part of Spoofax); I read some papers on that one a while ago that were palatable.

So anyway, here goes. Hope it helps. You can contact me if you need more info!

Comment by norswap on Old Man Jevons Can’t Save You Now (Part 2/2) · 2019-07-29T22:35:20.807Z · score: 1 (1 votes) · LW · GW

I'd be more interested in the in-between: what about cases where we don't have general AI, but we have automation that drastically cuts jobs in a field, without causing counter-balancing wage increases or job creation in another field?

For instance, imagine the new technology is something really simple to manufacture (or worse, a new purpose for something we already manufacture en masse) — it's so easy to produce these things that we don't really need to hire more workers; just push a couple of levers and all the demand is met, just like that.

Is there something interesting to be said about what happens then? Can this be modeled?

(In practice, even this is too extreme a scenario of course, everything sits on a continuum.)

Something more realistic, I think, is that even when a new useful machine is introduced, and the productivity of the producers of that machine shoots up, the salaries of the machine-makers won't shoot up proportionally (maybe it's easy to train people to make these machines?). And maybe the ratio skews: automation will remove X jobs, and the increased demand for automation will get X/5 people hired. So on the one hand you get major job loss, and on the other a minor salary hike and minor job creation.

How do we model what is lost here? Isn't there some kind of conservation law, such that the surplus goes somewhere (presumably into the pockets of the shareholders of the companies both buying and producing the machines)?

Comment by norswap on When is rationality useful? · 2019-04-30T14:45:54.466Z · score: 3 (2 votes) · LW · GW

I think rationality ought to encompass more than explicit decision making (and I think there is plenty of writing on this website showing that it does, even within the community).

If you think of instrumental rationality as the science of how to win, then it necessarily entails considering things like how to set up your environment, unthinking habits, and how to "hack" into your psyche/emotions.

Put otherwise, it seems you share your definition of Rationality with David Chapman (of ) — and I'm thinking of that + what he calls "meta-rationality".

So when is rationality relevant? Always! It's literally the science of how to make your life better / achieve your values.

Of course, I'm setting that up by definition... And if you look at what's actually available community-wise, we still have a long way to go. But still, there is quite a bit of content about fundamental ways in which to improve, not all of which have to do with explicit decision making or an explicit step-by-step plan where each step is an action to carry out explicitly.

Comment by norswap on Counterfactuals about Social Media · 2019-04-30T14:38:41.660Z · score: 1 (1 votes) · LW · GW

Seems to me you're on about treating (or more to the point, dreaming about treating) the symptom rather than the underlying weakness that makes people vulnerable to the social-network sink in the first place. The same fundamental weakness probably has a lot of other failure modes.

Comment by norswap on Why does category theory exist? · 2019-04-29T22:49:18.584Z · score: 3 (2 votes) · LW · GW

Category theory, of which I'm acquainted with at a basic level, seems to formalize a lot of regularities I already knew about as a programmer and a student of <those mathematics topics that were taught to me as part of my CS master's degree>.

I found it mathematically neat, but I have never derived any useful insight from it. Said otherwise, nothing would have changed if I had never been introduced to it. This seems quite wrong to me, so I was quite interested in reading the answers here. Unfortunately, there is not much in the way of insight.

Comment by norswap on IRL 5/8: Maximum Causal Entropy IRL · 2019-04-04T22:25:15.739Z · score: 1 (1 votes) · LW · GW

What is this? The links seem to require some login and registration is limited to students of some specific universities.

Comment by norswap on Renaming "Frontpage" · 2019-03-12T00:53:26.815Z · score: 1 (1 votes) · LW · GW

Is it even possible to prevent a curated selection from being deemed better? Maybe only if it fails horribly at what it set out to do, but otherwise?

I strongly second Michaël's recommendation — of any place, the front page of Less Wrong is where things should be clear.

Comment by norswap on When does introspection avoid the pitfalls of rumination? · 2019-02-25T01:58:46.869Z · score: 1 (1 votes) · LW · GW

For me, what separates mindfulness from rumination is that in mindfulness you observe things and accept them, whereas in rumination you're trying to fight or hold onto something.

Constantly reminiscing about a slight is a good way to make it loom large. It's an unwillingness to either resolve the matter or let it be.

Similarly, fighting some negative emotions (pain, loss, anger) makes them worse when they inevitably break through.

Comment by norswap on [deleted post] 2019-02-21T00:18:56.415Z

Great post! More of an exploration than a presentation, but a thoroughly enjoyable one.

Last year, I sat down with some hard thoughts about my own life philosophy, and came out with essentially the same conclusion: that enjoying life is about the process of getting somewhere rather than about actually getting there.

There are some intriguing new elements here, including the link with entropy (though I do tend to think that the ending is perhaps a tad too abstract and speculative).

I, too, was inspired by reading and by quotes. Here are a few that guided me in this direction, the most relevant of which is perhaps:

What man actually needs is not some tension-less state but rather the striving and struggling for some goal worthy of him.
— Viktor Frankl

(feel free to reach out, there's a whole lot more of them)

I also can't resist linking this Hunter S. Thompson letter, which is perhaps the piece of writing that has influenced me the most, and which is completely in line with what you propose here.

Comment by norswap on In what way has the generation after us "gone too far"? · 2019-01-24T14:44:33.783Z · score: 8 (6 votes) · LW · GW

You should probably specify which generation you're in =)

I'm 28. I don't know that the next generation has "gone too far", but the big difference I see between them and my generation is that we were the last generation to grow up without pervasive internet / smartphones / social networks. Facebook boomed (at least in Europe) right as I entered college.

What it entails is a lack of focus. I won't say my generation is very focused, but the next one is certainly worse. As a TA, I can witness this firsthand.

Comment by norswap on What are the open problems in Human Rationality? · 2019-01-22T14:11:28.439Z · score: 4 (3 votes) · LW · GW

For applied rationality, my 10% improvement problem:

Basically: how do you notice small (10% or less) improvements in areas that are hard to quantify? This is important, because after reaping the low-hanging fruit, stacking those small improvements is how you get ahead.

Comment by norswap on I want it my way! · 2019-01-07T22:42:46.898Z · score: 3 (2 votes) · LW · GW

I thought the piece was interesting.

If I can offer some feedback on form, I also thought it was too long for what it did say, and conversely did not say some things I would have wanted it to.

For me, the gist of the article really is this:

What I really wanted out of the system, in each case, wasn't the most valuable thing to get, or what it had to teach me. What I wanted was me, and my own beliefs, and for everything to stay the same, so that my prime directive would be met.

This is somewhat relatable. It's intriguing! But

  • Is it true? I'm having some doubts. If I'm taking on some endeavour, or even some experiment, my goal isn't to be confirmed in my current identity. But could my current identity be a force that acts against that endeavour, or against the honest fulfillment of the experiment? Probably. Would you agree, or do you see this differently?
  • Where does this come from?
  • What to do about it?
Comment by norswap on Akrasia is confusion about what you want · 2018-12-31T16:39:22.737Z · score: 1 (1 votes) · LW · GW

Would the tl;dr "integrate the evidence presented by revealed preferences" be accurate?

Comment by norswap on Why Don't Creators Switch to their Own Platforms? · 2018-12-25T21:54:01.259Z · score: 1 (1 votes) · LW · GW

Putting technical limitations aside (which are a huge deal, at the very least for video), the problem is that the audiences were built using the platform, and don't carry over easily.

The creators were able to build their audiences because, notably

  • The platforms have idle eyeballs actively looking for good content *on the platform*. No one googles for content these days, only for answers.
  • The recommendation algorithms sometimes work, or at least you can make them work for you. Even if you have to figure out the peculiarities of the algorithm, this is vastly simpler than cracking global marketing. And active digital marketing for content typically passes through social media anyway! That's where the people are, it's where they look, and it's where they will stumble on you when they're not looking.
  • The alternative is being so damn appealing that you'll spread by word of mouth. And even then, you'd do better on a platform; it's just an incredible force multiplier.

The audiences don't carry over because, simply put, they are living on the platform. It's centralized. They consume many things there, so they will check it. Most people don't know RSS and it's being phased out of many browsers. You'll lose most of your subscribers.

And you are wrong, the algorithms do account for many of the views of the top creators, on top of their subscribers.

Could they survive without the platform? Of course! Would they do better? No chance.

Finally, anger at the platform is generally about it being less good than it used to be. But think about, for instance, demonetization on YouTube. You can still sign your own deals and include your own ads in the videos. If you leave the platform, you have to do this. But if you stay, it's still an option.

Comment by norswap on Experiences of Self-deception · 2018-12-25T19:42:59.869Z · score: 3 (2 votes) · LW · GW

Brings two things to mind:

  • The Dark Arts of Rationality series and its compartmentalization and inconsistency techniques. I'm toying with that a bit, but I don't have a good account to give yet.
  • The fact (apparently) that placebos work even if you know they are placebos.

So I'd say that clearly many people are getting self-reported benefits from self-deception.

Key to understanding the phenomenon is the system 1 / system 2 (fast / slow) distinction. Typically you know in system 2 that you are deceiving yourself, but you act out the deception in system 1.

Comment by norswap on Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability · 2018-12-25T19:03:42.944Z · score: 1 (1 votes) · LW · GW

I don't think one can generalize so easily from bounded-options full-information games like those to the whole range of human endeavours.

Comment by norswap on Rationality Is Not Systematized Winning · 2018-11-14T10:19:11.699Z · score: 1 (1 votes) · LW · GW

I'm reading this, and it seems very reasonable, and then:

Changing our perspective might have significant benefits. Systematized winning is not an actionable definition. Most domains already have field specific knowledge on how to win, and in aggregate these organized practices are called society. The most powerful engine of systematized winning developed thus far is civilization.

So, assume civilization is a set of guidelines that dictate a course of action. Just like rationality, in fact. How can this beat rationality? If it dictates the correct course of action, rationality will too. And often, rationality can suggest something more effective.

The possible counters are: (a) rationality is hard work, and mostly sticking with civilization is fine; (b) you're not a good enough rationalist (or don't have good enough information) to beat civilizational guidelines.

But the article does not really suggest those. It says civilization is already winning. Well, it all hinges on the definition of winning. But it's quite clear you can achieve better outcomes through rationality, if that's what you care about and you're not put off by the extra work (counter (a)).

The counters are interesting but ultimately irrelevant. You can actually rationally arrive at (a): determining that the cost incurred by practicing rationality is more than the benefits accrued. That being said, it's so general a statement that I don't think it can be true for anyone capable of thinking these thoughts. You can also rationally arrive at (b), and in fact, if it's true, you should: civilization IS evidence, and it has to be weighed accurately. If civilizational guidelines keep trumping your best guesses, the weight of civilizational evidence should increase accordingly.

Comment by norswap on Rationality of demonstrating & voting · 2018-11-12T23:25:32.050Z · score: 1 (1 votes) · LW · GW

Why bother voting? Your vote will only change the result if it would otherwise be an exact tie; and the chance of that is negligible – one in millions.
But a chance of one in millions is worth taking if the jackpot is billions or trillions. That is, the opportunity for you to select a better rather than worse government, thereby making the country – though not yourself – billions or trillions of dollars better off. So as long as you care at least slightly about the rest of the country, voting is rational; civic duty really is a reason to vote.

That's an incredibly spurious premise right off the bat. Personally, I don't care all that much if the country is billions or trillions better off... My share of that ranges from single-digit dollar amounts to a couple hundred. That's also supposing the government has this kind of influence (especially if you counter the former point by positing bigger amounts). And as long as people are not going into poverty, I still mostly care about myself.
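To make my objection concrete, a back-of-the-envelope version (all numbers made up for illustration, population rounded):

```python
# Expected value of a vote, under the quoted argument's own assumptions.
odds = 10_000_000                  # one-in-ten-million chance of breaking a tie
country_gain = 1_000_000_000_000   # the argument's jackpot: $1 trillion

ev_altruist = country_gain / odds            # counting everyone's gain
my_share = country_gain / 330_000_000        # what reaches one citizen
ev_selfish = my_share / odds                 # counting only my own gain

assert ev_altruist == 100_000   # looks great if you weigh the whole country
assert ev_selfish < 0.001       # a fraction of a cent if you don't
```

Even granting the one-in-millions and trillions figures, the argument only works if you weigh everyone's gains; the selfish expected value is microscopic.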

People hate to hear this, and I usually don't bring it up because it's counterproductive, but: voting is not rational except in very small elections. Of course, if everyone thinks this, you have a serious problem. Yep, that's the tragedy of the commons.

A possible way to solve the issue is to make the vote legally mandatory (which is the case in my country - Belgium). This might lead to more uninformed ballots being cast, but I'm not entirely sure (most of the ballots are uninformed regardless).

Comment by norswap on Criticism Scheduling and Privacy · 2018-10-06T23:43:11.811Z · score: 1 (1 votes) · LW · GW

Bravo! This essay is very well put together, and it made my mind go "bling" a couple of times.

I have experienced guilt for not taking criticism well, and I feel this piece helps explain why: the criticism didn't address my own dissatisfaction with the work, nor highlight what I thought was an important shortcoming. Looking forward, it required things of me without actually helping me make something better. But as you mentioned, feedback (just an alias for criticism) is almost sacred in certain circles nowadays.

Comment by norswap on A Rationalist's Guide to... · 2018-08-09T13:47:03.532Z · score: 7 (2 votes) · LW · GW

I've been thinking about this too, and I'm not sure guides suffice. Getting in shape or learning about a topic are simple problems (not that they can't be challenging in their own right) compared to the complexity of actually achieving something.

At this point, we don't even have good theories or hypotheses on why these things are hard. It's a lot of small issues that aggregate and compound. Motivation is a big class of these issues. Another is not seeing clearly enough: failure to perceive danger, opportunities, alternative ways of doing things.

To achieve you have to get the strategy, the tactics and the operations right. There's a lot you can screw up at every level.

One key issue, I think, is that it's damn hard to hack yourself on some fundamental levels. For instance, to "be more perceptive". You can't really install a TAP for that. I guess some mindfulness practice can help (although I'd be wary of prescribing meditation -- more like mindfulness on the move). Consuming self-help, insights, news, etc. only seems to move the needle marginally.

So yeah, I don't know. Just throwing some ideas out there.

Something like this: might be a nice starting point. Maybe, just maybe, we're trying to lift heavy weights without having built the required muscles. Worth investigating and expanding.

Comment by norswap on A Rationalist's Guide to... · 2018-08-09T13:38:25.535Z · score: 1 (1 votes) · LW · GW

Elo's a nice guy, but I have no idea what he's talking about either.

Maybe rationality improves your quality of life or subjective well-being, there is certainly evidence for that.

But in terms of accomplishing more material and outwardly visible goals, you're right that the evidence is scant. CFAR and EA could be evidence, but there are a lot of non-rat institutions that perform well too.

Comment by norswap on Pretense · 2018-08-05T16:57:14.499Z · score: 7 (5 votes) · LW · GW

Strong agree. I think one of the reasons we're not always honest regarding our feelings (I'm not sure - maybe you're targeting something more specific here) is that it would be very tiresome if everyone were to do it. Everyone has pangs of sadness, despair, and inadequacy, and that's bad enough - but imagine if you could hear those from everyone else as well?

That being said, I try to tend toward being more truthful. I don't try to impress people with something I don't believe is impressive. I disclose my misgivings about my own work, or about popular things. Doing otherwise would feel too close to downright lying. But it isn't always well received. Even though I'm mild in my expression, there are many people who feel threatened (or more accurately, who feel their illusions threatened). There is a reason why we pretend we all know what we're doing and that all the children are above average.

Comment by norswap on Non-resolve as Resolve · 2018-07-16T12:57:17.580Z · score: 1 (1 votes) · LW · GW

This is reminiscent of Nate Soares' writing, especially

Comment by norswap on The 10% Improvement Problem · 2018-07-08T13:59:34.079Z · score: 1 (1 votes) · LW · GW

Good one. I think maybe that's true for some domains but not others?

Another way to consider this is that there are a small number of low-hanging fruits that yield a lot of improvement. You could even call them "beginner gains" if they are easy. But after that, you have to deal with a long tail of modest improvements - yet there's no doubt in my mind that correctly stacking them can yield some more improvement.

Comment by norswap on Unraveling the Failure's Try · 2018-06-10T13:12:53.145Z · score: 7 (3 votes) · LW · GW

It did indeed exist (more or less) before:

In fact this whole series (Replacing Guilt) might very well be exactly what you are looking for:

Highly recommended.

Comment by norswap on The Second Circle · 2018-05-21T20:17:18.734Z · score: 4 (1 votes) · LW · GW

Maybe, but I doubt it. I think I expressed myself poorly - I'm generally pretty tolerant of weirdness and even awkwardness. I think what it evokes for me is more like a pointless meeting where I have to try very hard not to roll my eyes at how silly what we are doing is, and how pointless, and how obvious that would be to an external observer.

But like I said, I would try circling with an open mind any chance I got. I've only got a sense of what it looks like from the outside and that's very different from what it is on the inside, for many many things (creative work, yoga, playing video games, ...).

Comment by norswap on The Second Circle · 2018-05-20T20:55:29.796Z · score: 9 (2 votes) · LW · GW

I am genuinely confused about the point being made. Is it (a) don't abuse a good thing, (b) different kinds of circling for different kinds of purposes (serious circling vs. game circling for practice), or (c) something else entirely?

It feels like a lot of words to say very little of substance - this is not meant as an attack, just a candid observation. It's not that this piece makes me angry or is wrong, or anything like that.

It also feels very silly - this is not about the piece but about the situation. And I wonder what that means, that I find it silly. Given my priors and background, this situation is something I don't imagine being able to take seriously.

Circling feels weird to start with, but I can understand its rationale fairly well. My worry would be that it would devolve into something quite unnatural and devoid of some sort of common sense. It would be whatever the opposite of smooth is - awkward maybe, but that's not totally it either. I think in some important sense, smoothness matters a lot, but I'm still missing pieces of the puzzle here.

Anyway, this is not a criticism of circling, just some prior predictions about the experience of circling. What worries me is that this kind of experience report (and actually, your previous post as well) point towards validating those predictions. And I don't really have a point here, I'm just sharing my thoughts in case they would be useful, cause some reflection or some interesting comment.

Comment by norswap on Henry Kissinger: AI Could Mean the End of Human History · 2018-05-16T17:44:22.578Z · score: 4 (1 votes) · LW · GW

Does this contribute something besides "yay, we've gone mainstream"?

I do, however, think it's interesting that Kissinger makes in Ur-form what I perceive as flaws or shortcuts made by those who are convinced that AI risk is the #1 issue. (I'm not really interested in debating whether it is or not.) Example: AlphaGo is clearly a sign of the end times. I dramatize, of course (although... look at the title), but the real point being made by Kissinger/risk proponents is: AlphaGo is good evidence towards some form of agentful AI risk (i.e., not just algorithms gone wrong, à la Facebook timeline).

Comment by norswap on Hotel Concierge: Shame & Society · 2018-05-14T14:16:51.262Z · score: 4 (1 votes) · LW · GW

Very true, hadn't thought of that. Maybe that's one of the reasons for the success of alternative medicines? That even though they're not backed by evidence, they feel like metis, or maybe they even work in circumstances that are hard to abstract / generalize. Or maybe it's just that the placebo effect is easier to kick off in these circumstances.

Comment by norswap on Terrorism, Tylenol, and dangerous information · 2018-05-12T22:03:12.260Z · score: 10 (3 votes) · LW · GW

Only if (a) terrorists tend to read what I consider to be fairly intellectual content or (b) they google around for meta-strategies. I rate (a) very unlikely and (b) as well, since as this post shows, they can't even be bothered to google around for good terrorism methods.

Comment by norswap on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-12T21:57:56.786Z · score: 4 (1 votes) · LW · GW

I thought the point was that people don't set a line in some cases. This leads to situations where something that doesn't actually bother people gets pushed back against *on principle* only.

But it could very well be that you are right, or that we both are.

Comment by norswap on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-12T21:55:48.677Z · score: 4 (1 votes) · LW · GW

This feels like a not-completely-honest question. But here is my honest answer: "hit back" is a shorthand/metaphor for "react". It's an example with a very particular scenario in mind, but I'm sure you can generalize. Do something effective about it, the keyword to search for here is "domestic violence".

Comment by norswap on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-05T10:54:05.603Z · score: 4 (1 votes) · LW · GW

My point is that you need to set a line: if it's below the line, ignore it; if it's above the line, react. I said as much in a reply to another comment. If someone hits you in a way that is not okay, hit back.

Comment by norswap on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-04T21:12:10.539Z · score: 13 (5 votes) · LW · GW

I read your reactions as being completely in line with what Duncan describes in Part 5 (social ownership of the micro), and what would be interesting is how you react to his arguments.

Personally, I think magnitude matters. "Very small harm" (which, for slug bug, feels like an overstatement) does not warrant the reactions Duncan outlines. They are not born of any actual real-life impact, but of ideologically motivated grandstanding. To me, this signals you're going to be a very bothersome person to deal with, because you're going to see evil where there is none (or more accurately, wrongdoing rather than evil). Duncan goes into great detail about this - but essentially, I have no intention to cause harm (not even very small harm), yet I can never be sure what might cause very small harm, and so this leads to paralysis. He says more smart things, like outrage conferring power under this paradigm - an abuse I have witnessed quite a few times.

Frankly, even this conversation is ridiculous to an outside observer. The reactions Duncan gets, even more so. Whatever harm slug bug might cause, I can guarantee that any ideological argument about slug bug is going to cause more. I am going to be wary of any person who is willing to make this trade-off and start an ideological argument over the hypothetical of very small harm. At worst I will give them a wide berth, but some people get angry, and I think that's understandable. Making much ado about basically nothing is not a nice reaction. You're actually making a fuss (not something pleasant to be involved with) about something that doesn't actually do harm, but rather on the premise that it might. There is an underlying assumption of defending a general principle (e.g. refusal of participation in unstaged games), but again: magnitude matters. Enlisting someone into slug bug is not the same as enlisting them in a wrestling match, and no one is arguing that it is. I would have chalked that down to that oh-so-elusive common sense, but quite clearly that won't do.

Ultimately, Duncan does not argue people should *initiate* slug punch. Rather, that if you're dragged into a game of slug punch, you should play along, because not doing so will cause much more harm both to you and others. And it will lead to a toxic environment of outrage and blame-shifting, without any kind of spontaneity.

Comment by norswap on Warrior rationalists · 2018-05-04T20:44:13.547Z · score: 4 (1 votes) · LW · GW

Good point. I focused mostly on vocal/visible actors which may form only a small subset of survey respondents. Still, I have no evidence in the other direction either.

Even if depression is more represented, I don't feel like it really informs the content of most posts on this website. If anything, I'd ascribe them an optimistic mindset which is not usually associated with depression.

Comment by norswap on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-04T13:11:02.741Z · score: 13 (3 votes) · LW · GW

Let's take the example of catcalling, because it's even more clear-cut than slug bug.

Ideally you're right. Pragmatically there is nothing you can do to stop catcalling on a big scale. Hence you're better off not letting it bother you rather than uselessly railing against it or letting it embitter you.

As for slug bug, it doesn't help anyone, it's just something people do for fun. You'd certainly never start slug bug by holding a conversation: "oh, slug bug will help our feeling of cohesion, let's play it". You just try it out. And then maybe someone says they don't like it. Then it's up to you, and that's what the post discusses.

Comment by norswap on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-04T13:07:21.178Z · score: 27 (8 votes) · LW · GW

I love this article. It's illuminating and well written.

I do agree with the conclusion, although I probably would have been less generous in my treatment.

I'm firmly on the "micro-ignoring" side of things. But, interestingly, I notice my own small reactions more and more. It's something I've been working on. And I noticed them precisely so that I don't let them affect me, even unconsciously.

That is also the goal of zen, mindfulness, etc.: become aware of your thoughts and feelings, accept them, and let them go.

I'm truly baffled that people would become very self-conscious of all the small uneases of everyday life and then choose to elevate them into major inconveniences. It's a bit like discovering who holds your chains and redoubling in bondage and obedience to this silent master.

I'm reminded of the quote:

A person without a sense of humor is like a wagon without springs. It’s jolted by every pebble in the road.

This is how it feels to me, with small feelings substituted in place of the sense of humor.

Comment by norswap on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-04T13:01:26.006Z · score: 4 (1 votes) · LW · GW

My read is that he mostly criticizes people who object to the game on ideological grounds. Typically, you wouldn't hit hard; it's a playful thing.

It's about making a big thing out of a small thing, when it would be equally easy (easier, ultimately) to ignore it and play along.

Now, surely we can imagine an abusive scenario. What to do then? I think it requires social finesse. If you're being hit too hard and you think you have a good case to make, just say: you hit me too hard, stop doing it. That might work. But maybe the person thinks you're objecting on unreasonable ideological grounds, or maybe he's just a jerk. Then don't get on your high horse, just punch them back.

If I could give advice to my younger self, one of the most important pieces would be: throw a few punches. I was never bullied, but people did give me a lot of flak because I was a good student (and with hindsight, there are other things I could have done to ease the situation, but I digress). I should have punched one or two students who gave me a hard time. Even if I ultimately got beaten. The point is to send a signal that messing with you is costly, that it has consequences. It's not a matter of debate.

Comment by norswap on Warrior rationalists · 2018-05-04T12:53:22.684Z · score: 4 (1 votes) · LW · GW

This is pretty wrong, but I do think it does come from a well-meaning place.

First, the list you make does not match my experience of the community, especially on the website. The Discord, which I sometimes visit, would be closer, but the list is still problematic.

Physical fitness, for one. It's one of the easiest changes one can make in one's life, everyone ought to, and it seems to me most people attempt it and many succeed. Heck, I'm even on a rationalist-offshoot Discord dedicated to training now.

Second, depressed? Nothing on this website gives me that vibe. Also, depression is inversely correlated with high achievement. For that matter, "high achiever" is deeply connected to what one considers to be an achievement.

No, truly, I do not think most of the community fits the trope you're laying out.

Now to address your real point: yes, experiences do inform one's vision and concerns. Physical fitness (or even strength) seems to improve well-being and confidence, but it does not seem to me that the content of this website shows a lack of either.

Discussions around here tend to be involved, nuanced, and frankly sometimes a bit too involved in hypotheticals (for my own taste, but everyone's different). I see no correlation between confidence (or strength, etc.) and nuance. People with a low sense of self-worth spout nonsense with bravado remarkably often; and conversely, confident people are frequently too confident in their beliefs. So one must not confuse confidence with over-confidence in one's stated beliefs.

Comment by norswap on On Emotional Responsibility & Abuse · 2018-04-27T12:45:52.622Z · score: 4 (1 votes) · LW · GW
Why was the video saying that Z had responsibility over their emotions? One thing I’ve learned throughout my life, although I can’t point to when or where, is that people have no immediate control over their emotions, that they can’t help how they feel about a particular thing when it happens.

Are you sure? This is a topic I'd like to know more about.

My current thoughts on the matter:

We might have no control over the original input, but we have control over how we process it.

A negative thought can be suppressed, analyzed to see if it matches truth, or fed back constantly in a crazy echo. And that, we can control.

So while it would be impossible to prevent a pang of jealousy, anger, etc... It's possible to quell it or reinforce it. And of course, one can always choose how to act upon it. Voicing a concern, having a conversation, making demands, cheating, ...

Incidentally, this is why some shades of selfishness can be a virtue, not that common culture is ever going to say it outright.

I actually think this is a very, very common meme, although it's usually not phrased as bluntly as "a little selfishness can be a virtue". Things about taking care of yourself first, about saying no, etc.

Comment by norswap on Don't Believe Wrong Things · 2018-04-27T12:33:54.192Z · score: 4 (1 votes) · LW · GW

Very much seconded.

In fact, it seems like the Christian lady in EY's example "got it" by accident:

She doesn't really believe in god, but says her belief is useful to her.

To me, to be effective and useful, self-deception should occur in System 1 (fast, intuitive), but not in System 2 (slow, analytical). It seems applied rationality helps a lot with questions of motivation, or having useful intuitions to make progress towards a goal. And since System 2 is not affected, "fake beliefs" installed in System 1 are open for re-evaluation.

Comment by norswap on Effective Altruism's Ultimate Goal: Eradicate Human Suffering. · 2018-04-27T12:26:58.179Z · score: 4 (1 votes) · LW · GW

I agree in principle, but I have one reservation.

Suffering serves a role. Or at least, it can be very useful. It's one of the most potent motors for growth that we have. It throws things into perspective. It can spur us towards action, towards new opportunities. It changes the scale with which we judge what happens to us, often to our benefit.

Ongoing, long-term suffering is bad. But a bit of suffering... I've met many people that were much the better for having suffered a bit. Ditto for me. On the other hand, you sometimes see people who look like they haven't suffered quite enough. They can be nice enough. But you can tell. They don't understand suffering. Their empathy is stunted. They get frustrated over things that shouldn't matter.


It's hard to tell where psychological suffering comes from, but you are right that we can control it. We can rein it in, or wallow in our own misery. I think neither option is really satisfying. There is a time to mourn, and a time to laugh.

I lived through a bad breakup last year. It would have been really easy for me to suppress my emotions, my suffering. I can do it, I have done it in the past. But I refused to. The pain was an acknowledgement that the relationship meant something. It was a ritual of sorts. It also served as a lesson. It changed me, and I know there are things in how I act that will never be the same because of it. They were deeply rewired. If I had avoided the suffering through meditation or otherwise, I would never have dug in enough to understand what I did, and the lesson wouldn't have been visceral enough to stick.

In the past, suffering helped me understand things about myself that were deeply buried and wouldn't have otherwise come to the surface. In particular, I understood two big drivers of suffering for me: the feeling of powerlessness, when part of your fate is outside your control; and a fear that I don't really matter to people whom I love — it would be more comforting to know they hate me, but the doubt and the fact that they are simply indifferent really hurts.


Of course, Nietzsche:

To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities—I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them, because I wish them the only thing that can prove today whether one is worth anything or not—that one endures.

I wouldn't read too much into the soundness of his argument. But empirically, I think he's right. The most remarkable people I have met are people who have suffered grievously, but have managed to transcend this suffering.

Of course, for every person who transcends her suffering, how many others drown in it?

Comment by norswap on GreaterWrong—more new features & enhancements · 2018-04-08T14:03:26.846Z · score: 9 (2 votes) · LW · GW

Terrific work!

Comment by norswap on Is Rhetoric Worth Learning? · 2018-04-08T13:20:24.101Z · score: 7 (2 votes) · LW · GW

Don't use your answer to an argument to make a point about (your interpretation of) another argument.

Comment by norswap on Is Rhetoric Worth Learning? · 2018-04-08T13:14:13.158Z · score: 7 (2 votes) · LW · GW

Hypocrisy is anathema to me, but I've noticed that many (most?) people are happy to let others live with their contradictions, as long as those are not too painfully glaring.

Comment by norswap on *Deleted* · 2018-03-28T20:25:44.051Z · score: 9 (4 votes) · LW · GW

Seems like it falls under applied rationality to me. Using what scientific knowledge we have to try to improve relationships. Not too sure the hypothesis is seaworthy, but the goal certainly seems laudable.

I'm not too sure why the post has been downvoted so much; the comments give no indication of what's *so* wrong with it.