Where Do We Go From Here? 2020-11-08T19:53:51.955Z
The Axiological Treadmill 2020-09-15T18:36:31.954Z
Are there non-AI projects focused on defeating Moloch globally? 2020-09-14T02:13:11.252Z
A Brief Chat on World Government 2020-09-13T18:33:47.009Z
Changes in Reality 2020-09-07T16:18:27.276Z
A Policy for Biting Bullets 2020-08-31T21:00:33.396Z
Postel’s Principle as Moral Aphorism 2020-08-23T19:36:40.800Z
The Manual Economy 2020-08-07T22:29:47.384Z
Musical Outgroups 2020-07-19T22:55:12.007Z
Against Reopening Ottawa 2020-07-18T20:08:07.151Z
Roll for Sanity 2020-07-13T16:39:38.902Z
Frankenstein Delenda Est 2020-07-10T14:38:04.763Z
Abstractions on Inconsistent Data 2020-06-29T00:30:01.815Z
Atemporal Ethical Obligations 2020-06-26T19:52:25.567Z
What's the name for that plausible deniability thing? 2020-06-24T18:42:54.705Z
When is it Wrong to Click on a Cow? 2020-06-20T18:23:07.420Z
Practical Conflict Resolution: A Taxonomy of Disagreement 2020-06-19T02:25:10.594Z
A Practical Guide to Conflict Resolution: Comprehension 2020-06-13T15:08:18.546Z
Karma fluctuations? 2020-06-11T00:33:26.604Z
Self-Predicting Markets 2020-06-10T22:39:25.933Z
A Practical Guide to Conflict Resolution: Communication 2020-06-09T01:04:50.332Z
A Practical Guide to Conflict Resolution: Attitude 2020-06-06T12:29:16.728Z
The Stopped Clock Problem 2020-06-04T12:07:37.417Z
A Practical Guide to Conflict Resolution: Introduction 2020-06-04T01:03:26.576Z
The Law of Cultural Proximity 2020-06-03T02:18:40.523Z
Our Need for Need 2020-05-29T11:12:40.284Z
[Meta] Finding free energy in karma 2020-05-24T14:47:15.115Z
Extracting Value from Inadequate Equilibria 2020-05-18T17:25:09.781Z
What is a “Good” Prediction? 2020-05-03T16:51:46.802Z
It's Not About The Nail 2020-04-26T23:50:55.973Z
Fast Takeoff in Biological Intelligence 2020-04-25T12:21:23.603Z
Every system seems random from the inside 2020-04-19T23:02:55.151Z
What does the curve look like for coronavirus on a surface? 2020-04-15T02:07:34.044Z
Narrative Direction and Rebellion 2020-04-13T01:07:12.454Z
What We Owe to Ourselves 2020-04-11T01:34:22.730Z
Advice on reducing other risks during Coronavirus? 2020-03-24T23:30:37.172Z
International Conflict X-Risk in the Era of COVID-19 2020-03-18T11:42:42.538Z
Winning vs Truth – Infohazard Trade-Offs 2020-03-07T22:49:47.616Z
A Cautionary Note on Unlocking the Emotional Brain 2020-02-08T17:21:03.112Z


Comment by evan-huus on Some thoughts on criticism · 2020-09-18T22:30:30.404Z · LW · GW

One thing that I do to invite more frank criticism from people is to ask in the frame of "I think I'm bad at X, do you have any specific thoughts or suggestions to help me get better?" (where X is a pretty broad category). This pre-commits to the position that you're bad at it, which gets rid of (most of) the status risk for them in criticizing you.

Comment by evan-huus on The Axiological Treadmill · 2020-09-17T11:38:00.412Z · LW · GW

I guess I'm imagining that in a world where we only "barely tolerate" the pain, many people still try to defect over the long run due to psychological buildup, and end up getting killed. Thus there is some pressure to do better than "barely tolerate" at least. I've definitely updated to a model where the shock still produces suffering though; I no longer think the evolutionary pressure would be sufficient to remove the pain entirely.

The key is that evolution would try to find an equilibrium between the reproductive risk of not shocking yourself enough, and the reproductive risk of shocking yourself too much. Since not shocking yourself has much higher consequences on the margin, I expect evolution to bias slightly towards shocking yourself too much.

Comment by evan-huus on The Axiological Treadmill · 2020-09-16T22:54:58.254Z · LW · GW

I think of slack in this context like a "tax" on evolution - it doesn't prevent the relevant forces (metaphorically supply and demand) from working, it just limits their speed, and prevents them from approaching a perfect solution with no inefficiency at the limits.

Comment by evan-huus on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-15T11:54:17.952Z · LW · GW

An immortal benevolent human dictator isn’t a singleton either. Human cells tend to cooperate to make humans because it tends to be their most effective competitive strategy. The cells in an immortal all powerful human dictator would have a different payoff matrix and would likely start defecting over time.

Comment by evan-huus on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-14T15:35:21.682Z · LW · GW

These are interesting parallels (maybe? The Unabomber parallel seems odd, but I don't actually know enough about him to critique it properly), but they don't seem to answer my question. If there is an answer being implied, please spell it out more explicitly. Otherwise maybe this belongs as a comment, not an answer?

Comment by evan-huus on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-14T15:32:18.193Z · LW · GW

There aren't many other plausible technological options for things that could defeat moloch.

Why? What about non-technological solutions?

Comment by evan-huus on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-14T11:52:05.243Z · LW · GW

This answer is interesting, but underspecified for somebody who’s never heard of this. What is Game B? Where is it? Google just returns a bunch of board game links.

edit: Ah, finally got to

Comment by evan-huus on A Brief Chat on World Government · 2020-09-14T03:32:23.163Z · LW · GW
You both seem to be assuming that competitive pressures from other governments is what causes current governments to be stable

Current, yes, but I was explicit that I don't think this is a universal truth. As I wrote, "I think a well constructed world government could survive just fine without competitive pressure".

I definitely think it's true of your example though. If the US government was completely isolated, then the individual states would have a much-reduced incentive to participate in that government, and a much-increased incentive to defect and try and grab more of the pie for themselves. I suspect in the long term of that scenario, the US would dissolve into a collection of smaller competing states.

I think it's misleading to say the competitive pressures themselves cause stability. It's more that they provide the incentive to coordinate effectively, which is what causes stability.

Comment by evan-huus on A Brief Chat on World Government · 2020-09-13T21:08:55.904Z · LW · GW

Neither. Governments are effectively defined by wielding monopolistic force, not by citizenship papers.

Comment by evan-huus on Social Capital Paradoxes · 2020-09-13T12:57:44.158Z · LW · GW

The implied normative advice, as with all things Moloch, is that we should coordinate to stop [horizontal transmission]. I think I’m just biting the bullet you called “severe and counterintuitive”.

Comment by evan-huus on Social Capital Paradoxes · 2020-09-11T00:55:55.791Z · LW · GW

2 just seems like another framing of Moloch: burn the commons, or be outcompeted by those who do.

3 I suspect is confounded by wealth, which enables people to take more risks. Free-market liberal societies are so much wealthier that the extra wealth counterbalances their generally less cooperative systems.

Comment by evan-huus on Updates Thread · 2020-09-09T22:58:21.406Z · LW · GW

For me specifically, I've learned recently that a basic workout in the morning has big effects on my mood throughout the rest of the day. Specifically, I do it before breakfast, and it only takes about 10 minutes. The change was noticeable the very first day, and still is - even on days when I skip for a good reason and don't feel bad about it, I still end up noticeably grumpier and more sluggish.

Worth noting that I've jogged in the morning before, and also done that same workout before at other times of the day. So to be more specific, there's something about resistance training in the morning that is really beneficial for me.

Motivation has become a lot easier because of this; my System 1 has recognized the positive association and so it's become something I (mostly) want to do now.

Comment by evan-huus on Updates Thread · 2020-09-09T12:23:16.389Z · LW · GW

Significant shift in the magnitude of positive impact for physical exercise on mental state.

There’s been a lot of research over the years indicating that exercise makes you happier and generally feel better. I’ve always trusted that research to be accurate, but vastly underestimated the size of the effect until I started exercising regularly myself a few months ago. I’m not sure if I’ve actually experienced a greater boost than the research suggests is normal, or if I just never bothered reading the research closely enough and made some bad assumptions.

Comment by evan-huus on Changes in Reality · 2020-09-08T22:16:31.692Z · LW · GW

I think there is a sort of micro view where that is true, but also that culture can change very rapidly in many ways. It might not be entirely pleasant, but I think society could handle a permanent 1960s level of change. What worries me is e.g. the much more recent phenomenon that siblings a couple of years apart are active on totally different social media sites, and have different social norms and practically different language dialects as a result.

Comment by evan-huus on Postel’s Principle as Moral Aphorism · 2020-08-24T01:25:01.445Z · LW · GW

Fully agree with the technical critiques. I'm less certain that equivalent critiques apply to the social/moral, but I can see the argument. I think it depends on how close you care to wander to true moral relativism. Thanks for this, I'm going to think about it some more.

Comment by evan-huus on A Hierarchy of Abstraction · 2020-08-08T12:02:46.745Z · LW · GW

Compound interest is a thing in the world people might not know about despite knowing arithmetic. It’s also underspecified (presumably compounds annually?) and it’s not clear how close your estimate has to be to “pass”.
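
For what it's worth, the ambiguity matters numerically. A minimal sketch of the question's arithmetic (assuming annual compounding; the function and the numbers are my own illustration, not from the original post):

```python
def compound(principal, annual_rate, years):
    # Value of `principal` after `years` of annual compounding.
    return principal * (1 + annual_rate) ** years

# $1,000 at 5% grows to ~$1,628.89 after 10 years; simple interest
# would give only $1,500, so the compounding assumption changes what
# counts as a "close enough" estimate by a meaningful margin.
print(round(compound(1000, 0.05, 10), 2))
```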

Just in general, while actual pass/fail tests like that may be indicative, I find them too easily confounded by other factors to be strong solo evidence of whether somebody has a given piece of knowledge or ability. “Can write code in a mainstream language” is obviously a lot fuzzier, but for examples like this post I think that’s a pro, not a con.

Comment by evan-huus on 3 Interview-like Algorithm Questions for Programmers · 2020-08-08T11:54:12.663Z · LW · GW

For Question #1, I think you mean “lowest big-O complexity” not “fastest”, since constant factors matter a lot for actual fastest.

Comment by evan-huus on A Hierarchy of Abstraction · 2020-08-08T11:51:13.998Z · LW · GW

Setting aside the hierarchy itself and the claim about g floors, which Darmani has already commented on: the tests you refer to for various levels don't seem like reliable measures of the claims you're making. As one example, I definitely understand pointers (I've written a memory management subsystem in C for a widely-used open source project) but I can't make heads or tails of your pointer puzzle. I have similar quibbles for 4, 7, and 8.

Comment by evan-huus on Against Reopening Ottawa · 2020-08-02T11:50:57.740Z · LW · GW

I did, but I didn't bother crossposting here since the first one didn't get any engagement/karma:

Comment by evan-huus on Free Educational and Research Resources · 2020-07-31T11:19:59.003Z · LW · GW

I am vaguely concerned about parts of this post being on less wrong - I don't think we should be a place that openly advocates for illegal actions without extensive justification? Both for legal liability reasons and for philosophical ones.

Comment by evan-huus on Karma fluctuations? · 2020-07-25T11:02:26.117Z · LW · GW

The new tagging system actually works really well for this. I set AI to have a moderate negative homepage modifier, and still get to see the top AI posts but mostly only those.

Comment by evan-huus on The New Scientific Method · 2020-07-18T15:05:25.993Z · LW · GW

As I read it, by “invalid” they mean not 1 or 2 you suggested but (3) your reconstruction process is assuming something which is the actual cause of most of the non-randomness in your output, and would produce a plausible-looking image when run on any random set of pixels.

For the example you gave (3) is clearly not true - there’s no way that any random set of pixels would produce something so correlated with the original image, when the reconstruction algorithm doesn’t itself embed the original image. But as far as I know LIGO doesn’t know the original image, so the fact that they get something structured-looking out of noise isn’t meaningful? Or at least that’s how I interpret nixtaken’s argument; this is really not my area of expertise.

Comment by evan-huus on Covid 7/2: It Could Be Worse · 2020-07-02T22:22:47.603Z · LW · GW
The infection fatality rate seems to clearly have fallen, but why would it have fallen so much so quickly now that a surge in infections doesn’t kill more people?


Explanation 3: Young Getting Infected

My current leading hypothesis is that the difference in IFR between different demographics is substantially greater than previously reported. If under-30 groups see far more asymptomatic and very-low-symptom cases than older folks, that would likely cause substantial underreporting of total cases in those demographics specifically (even more so than in the general population), which would make their reported IFR relatively overestimated - and the reported IFR for those groups is already much lower than for older groups.
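
A toy calculation of that effect (all numbers invented purely for illustration):

```python
# Suppose young people have 10,000 true infections and 5 deaths
# (a true IFR of 0.05%), but only 1 in 5 of their infections is ever
# reported because so many cases are asymptomatic or very mild.
true_infections = 10_000
deaths = 5
reported_fraction = 0.2

true_ifr = deaths / true_infections                            # 0.05%
reported_ifr = deaths / (true_infections * reported_fraction)  # 0.25%

# The reported IFR overstates the true IFR by a factor of
# 1 / reported_fraction, entirely from the missing denominator.
print(round(reported_ifr / true_ifr, 2))
```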

This would also partly explain the ongoing confusion about whether there are tons of asymptomatic cases or not. There may be, but only in some demographic groups.

Comment by evan-huus on The Illusion of Ethical Progress · 2020-06-28T20:15:46.074Z · LW · GW
Western philosophy has no place for empiricism.

This seems a funny claim to make when there's an entire movement in Western philosophy (the British Empiricists) dedicated to empiricism.

Comment by evan-huus on Atemporal Ethical Obligations · 2020-06-27T02:40:25.788Z · LW · GW

I believe fairly strongly that the future will agree Rowling’s current position is immoral. Whether cancelling her is an appropriate response is a whole different question.

Comment by evan-huus on Atemporal Ethical Obligations · 2020-06-27T02:36:14.139Z · LW · GW

If we define Z as what most people recognize as moral today, then I think most people end up doing Z, not X. And Y is arguably a lot better than Z.

I’m also sympathetic to your second paragraph. Presumably a lot of the people I gave as examples would at least claim to be following X. Since their actions are no longer ones we consider moral, then plausibly they were wrong about their X, and there’s no reason to believe we will be any more accurate. Y seems more accessible in that regard.

I’m trying to walk a pretty thin line here between taking this argument seriously and admitting to full on moral relativism. Thus the disclaimer at the top of the post.

Comment by evan-huus on What's the name for that plausible deniability thing? · 2020-06-24T19:59:18.562Z · LW · GW

Thank you!

Comment by evan-huus on Half-Baked Products and Idea Kernels · 2020-06-24T01:32:44.131Z · LW · GW

This is not true? It seems to me that iterative agile approaches are much more popular right now, and advise explicitly against this kind of waterfall process.

Comment by evan-huus on Prediction = Compression [Transcript] · 2020-06-23T10:49:27.605Z · LW · GW

Obviously these skills should be correlated with IQ or what not

I’d argue that recognition, prediction, and compression of patterns isn’t just correlated with IQ. It’s the thing that IQ tests actually (try to) measure.

Comment by evan-huus on When is it Wrong to Click on a Cow? · 2020-06-23T00:54:48.723Z · LW · GW

This feels like a joke or a reference to something with which I am not familiar?

Comment by evan-huus on Are Humans Fundamentally Good? · 2020-06-21T19:20:50.371Z · LW · GW

Any reasonable answer depends entirely on what you mean by “good”. Though the flavour of the question makes me think you might find The Goddess of Everything Else enlightening.

Comment by evan-huus on When is it Wrong to Click on a Cow? · 2020-06-20T21:40:27.082Z · LW · GW

Typo fixed, thanks!

Comment by evan-huus on When is it Wrong to Click on a Cow? · 2020-06-20T21:39:57.197Z · LW · GW

I think if stimming was cheap and easy, most people would do it. I don’t think it would only be done by people with other socially undesirable characteristics.

Comment by evan-huus on Types of Knowledge · 2020-06-20T18:38:14.729Z · LW · GW

Could these three categories be roughly summarized as "Know that", "Know how", and "Know why"?

Comment by evan-huus on Practical Conflict Resolution: A Taxonomy of Disagreement · 2020-06-19T18:11:07.158Z · LW · GW

Assuming indirect realism, then we don’t have direct access to the performance of our strategies either, so I’m not sure how that ends up being more useful.

Comment by evan-huus on Practical Conflict Resolution: A Taxonomy of Disagreement · 2020-06-19T11:34:32.900Z · LW · GW

As far as being predictive, I think I’ve done a clear job of that already. I’m not just saying you can fit any disagreement into my model with enough mental gymnastics; I’m saying that doing so is concretely useful in guiding the resolution of that disagreement. My model could very well be overly flexible or generally incorrect in some cases, but it’s the most useful model for this topic that I’ve come up with. If you think modelling disagreements at the strategy level is more useful, I would greatly enjoy reading your post on how to make use of that for conflict resolution.

Comment by evan-huus on Practical Conflict Resolution: A Taxonomy of Disagreement · 2020-06-19T11:20:49.141Z · LW · GW

we have direct access to neither Is nor Ought but can compare strategies' performance to one another

I don’t understand this part. The only way in which we don’t have direct access to Is or Ought is a fairly philosophical one, and on that level we don’t have direct access to the performance of our strategies either?

Comment by evan-huus on How to analyze progress, stagnation, and low-hanging fruit · 2020-06-16T00:50:17.336Z · LW · GW

I've sometimes wondered if it's possible that computing has so much unclaimed low-hanging fruit that it's currently sucking up the majority of innovative brains, and that progress in other fields will resume to a certain extent once it becomes more difficult to make world-changing inventions in computing.

edit to add: this would line up with my experience that there is a massive undersupply of competent computer programmers relative to available opportunities

Comment by evan-huus on Karma fluctuations? · 2020-06-14T00:08:28.203Z · LW · GW

Related: do mods consider karma when deciding what to curate? Obviously something in the negatives is unlikely to warrant curation, but is a higher karma score considered a positive signal past whatever minimum bar?

(Speaking for my own internal reward function, I like writing posts that get high karma, but I'd like writing a post that gets curated much more)

Comment by evan-huus on Karma fluctuations? · 2020-06-11T21:17:02.284Z · LW · GW

This is interesting. I am mostly uninterested in AI research topics, but have avoided downvoting them on less wrong because there seems to be a lot of value and interest from other parts of the community. I could start downvoting every AI post that I see, but I’m afraid it would turn into a factional war between AI enthusiasts and everyone else.

Comment by evan-huus on Karma fluctuations? · 2020-06-11T01:02:55.950Z · LW · GW

It’s surprising to me because it feels very different from other similar voting systems, and I was expecting people to carry over habits from those places.

Maybe controversial was the wrong word. It still feels like “actively want to see less of” is a much stronger reaction than my default to posts I didn’t think were particularly great. But quite possibly that’s on me. It’s also surprising to me that there is so little consensus on whether people want more or less of certain posts.

Comment by evan-huus on Your best future self · 2020-06-06T20:09:01.504Z · LW · GW

The new meta-introduction (is there a better term of art for those italic bits at the top?) definitely helps read it in the proper frame. Thank you for clarifying.

Comment by evan-huus on Your best future self · 2020-06-06T19:54:45.901Z · LW · GW

I am conflicted about this post. On the one hand, it smells like new-agey nonsense. I worry that posts like this could hurt the credibility of rationalists trying to spread other non-obvious ideas into the mainstream.

On the other hand, even if the only mechanism of this idea is the placebo effect, it’s an emotionally satisfying story to trigger that effect. As someone who grew up with strong religious beliefs, I can appreciate it as... something more than mere art.

Ultimately it’s not obvious to me if this post was supposed to convey a genuine psychological insight, and was just unclear, or if it’s more metaphorical and I’m being too pedantic?

This comment is probably confusing, but I think that merely reflects my own confusion here.

Comment by evan-huus on Why isn’t assassination/sabotage more common? · 2020-06-04T20:34:50.490Z · LW · GW

Even psychopaths are risk-averse. Why take on the physical risk of performing assassination or sabotage when you can take on a much lower risk (for similar reward) via white-collar crime?

Comment by evan-huus on The Stopped Clock Problem · 2020-06-04T20:22:24.031Z · LW · GW

I think your understanding is generally correct. The failure case I see is where people say "this problem was really really really hard, instead of one point, I'm going to award one thousand correctness points to everyone who predicted it", and then end up surprised that most of those people still turn out to be cranks.

Comment by evan-huus on Open & Welcome Thread - June 2020 · 2020-06-04T12:10:09.066Z · LW · GW

I saw this "stopped clock" assumption catching a bunch of people with COVID-19, so I wrote a quick post on why it seems unlikely to be a good strategy.

Comment by evan-huus on The Law of Cultural Proximity · 2020-06-04T11:10:32.438Z · LW · GW

This is good feedback, thank you. I found it hard to write this post for this exact reason - it seems obviously true, but there aren’t any good studies or natural experiments to point to. Perhaps it would have been better framed as a hypothesis in need of validation? Though I fear it feels too obvious for that, and nobody would be interested in validating it.

Comment by evan-huus on Reflective Complaints · 2020-06-04T01:07:10.752Z · LW · GW

I've started the conversion into a sequence here.

Comment by evan-huus on The principle of no non-Apologies · 2020-05-28T21:48:14.248Z · LW · GW

Not sure if this is the exact source you were thinking of, but your definition reminds me of

Comment by evan-huus on [Meta] Finding free energy in karma · 2020-05-24T19:38:39.707Z · LW · GW

Funny cross-thread coincidence, but I now think that maybe what I really noticed for point #2 is what you described here: Crossposts just do better than linkposts, not necessarily better than "original" posts.