Posts

[Site Update] Behind the scenes data-layer and caching improvements 2019-08-07T00:49:29.721Z · score: 25 (13 votes)
AI Alignment Open Thread August 2019 2019-08-04T22:09:38.431Z · score: 37 (14 votes)
Integrity and accountability are core parts of rationality 2019-07-15T20:22:58.599Z · score: 128 (46 votes)
Recommendation Features on LessWrong 2019-06-15T00:23:18.102Z · score: 61 (18 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 80 (32 votes)
Long Term Future Fund applications open until June 28th 2019-06-10T20:39:58.183Z · score: 32 (10 votes)
Comment section from 05/19/2019 2019-05-20T00:51:49.298Z · score: 24 (8 votes)
Kevin Simler's "Going Critical" 2019-05-16T04:36:32.470Z · score: 56 (21 votes)
Gwern's "Why Tool AIs Want to Be Agent AIs: The Power of Agency" 2019-05-05T05:11:45.805Z · score: 24 (6 votes)
[Meta] Hiding negative karma notifications by default 2019-05-04T02:36:43.919Z · score: 27 (8 votes)
Has government or industry had greater past success in maintaining really powerful technological secrets? 2019-05-01T02:24:52.302Z · score: 28 (7 votes)
Does the patent system prevent industry from keeping secrets? 2019-05-01T02:24:35.928Z · score: 8 (1 votes)
What are concrete historical examples of powerful technological secrets? 2019-05-01T02:22:37.870Z · score: 9 (2 votes)
Why is it important whether governments or industry projects are better at keeping secrets? 2019-05-01T02:10:21.533Z · score: 8 (1 votes)
Change A View: An interesting online community 2019-04-30T18:34:37.351Z · score: 53 (20 votes)
Habryka's Shortform Feed 2019-04-27T19:25:26.666Z · score: 62 (17 votes)
Long Term Future Fund: April 2019 grant decisions 2019-04-08T02:05:44.217Z · score: 54 (14 votes)
What LessWrong/Rationality/EA chat-servers exist that newcomers can join? 2019-03-31T03:30:20.819Z · score: 53 (13 votes)
How large is the fallout area of the biggest cobalt bomb we can build? 2019-03-17T05:50:13.848Z · score: 21 (5 votes)
How dangerous is it to ride a bicycle without a helmet? 2019-03-09T02:58:23.964Z · score: 32 (14 votes)
LW Update 2019-01-03 – New All-Posts Page, Author hover-previews and new post-item 2019-03-02T04:09:41.029Z · score: 28 (7 votes)
New versions of posts in "Map and Territory" and "How To Actually Change Your Mind" are up (also, new revision system) 2019-02-26T03:17:28.065Z · score: 36 (12 votes)
How good is a human's gut judgement at guessing someone's IQ? 2019-02-25T21:23:17.159Z · score: 45 (17 votes)
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:30:11.243Z · score: 45 (12 votes)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:27:17.619Z · score: 31 (11 votes)
Reinterpreting "AI and Compute" 2018-12-25T21:12:11.236Z · score: 33 (9 votes)
[Video] Why Not Just: Think of AGI Like a Corporation? (Robert Miles) 2018-12-23T21:49:06.438Z · score: 18 (4 votes)
Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction? 2018-12-08T01:49:56.073Z · score: 21 (6 votes)
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:39:15.247Z · score: 38 (9 votes)
Switching hosting providers today, there probably will be some hiccups 2018-11-15T19:45:59.181Z · score: 13 (5 votes)
The new Effective Altruism forum just launched 2018-11-08T01:59:01.502Z · score: 28 (12 votes)
Introducing the AI Alignment Forum (FAQ) 2018-10-29T21:07:54.494Z · score: 88 (31 votes)
Upcoming API changes: Upgrading to Open-CRUD syntax 2018-10-04T02:28:39.366Z · score: 16 (3 votes)
AI Governance: A Research Agenda 2018-09-05T18:00:48.003Z · score: 27 (5 votes)
Changing main content font to Valkyrie? 2018-08-24T23:05:42.367Z · score: 25 (4 votes)
LW Update 2018-08-10 – Frontpage map, Markdown in LaTeX, restored posts and reversed spam votes 2018-08-10T18:14:53.909Z · score: 24 (10 votes)
SSC Meetups Everywhere 2018 2018-08-10T03:18:58.716Z · score: 31 (9 votes)
12 Virtues of Rationality posters/icons 2018-07-22T05:19:28.856Z · score: 49 (22 votes)
FHI Research Scholars Programme 2018-06-29T02:31:13.648Z · score: 34 (10 votes)
OpenAI releases functional Dota 5v5 bot, aims to beat world champions by August 2018-06-26T22:40:34.825Z · score: 56 (20 votes)
Announcement: Legacy karma imported 2018-05-31T02:53:01.779Z · score: 40 (8 votes)
Using the LessWrong API to query for events 2018-05-28T22:41:52.649Z · score: 12 (3 votes)
April Fools: Announcing: Karma 2.0 2018-04-01T10:33:39.961Z · score: 122 (39 votes)
Harry Potter and the Method of Entropy 1 [LessWrong version] 2018-03-31T20:38:45.125Z · score: 21 (4 votes)
Site search will be down for a few hours 2018-03-30T00:43:22.235Z · score: 12 (2 votes)
LessWrong.com URL transfer complete, data import will run for the next few hours 2018-03-23T02:40:47.836Z · score: 69 (20 votes)
You can now log in with your LW1 credentials on LW2 2018-03-17T05:56:13.310Z · score: 30 (6 votes)
Cryptography/Software Engineering Problem: How to make LW 1.0 logins work on LW 2.0 2018-03-16T04:01:48.301Z · score: 23 (4 votes)
Should we remove markdown parsing from the comment editor? 2018-03-12T05:00:22.062Z · score: 20 (5 votes)
Explanation of Paul's AI-Alignment agenda by Ajeya Cotra 2018-03-05T03:10:02.666Z · score: 55 (14 votes)

Comments

Comment by habryka4 on Vaniver's View on Factored Cognition · 2019-08-23T03:13:09.523Z · score: 2 (1 votes) · LW · GW

(Formatting note: Fixed a broken footnote, which involved converting the post into markdown)

Comment by habryka4 on Response to Glen Weyl on Technocracy and the Rationalist Community · 2019-08-23T02:18:41.173Z · score: 8 (3 votes) · LW · GW

My adblocker completely blocks the site. I had to turn it off to get any access to it.

Comment by habryka4 on Alignment & Balance of the Human Body. Midline Anatomy & the Median Plane. · 2019-08-22T19:31:32.126Z · score: 4 (2 votes) · LW · GW

This post seems potentially interesting, but I think I am missing a hook for why I should care about its content. Is there a particular reason to care about the alignment of my body? What benefits would I gain from it? Is it relevant to rationality and the art of thinking?

Comment by habryka4 on Please use real names, especially for Alignment Forum? · 2019-08-22T17:14:56.013Z · score: 2 (1 votes) · LW · GW

Why doesn't it work for LessWrong.com? Just replacing the classname seems to have worked fine for me. Here is the same script for LessWrong.com.

Comment by habryka4 on Davis_Kingsley's Shortform · 2019-08-21T21:21:03.325Z · score: 3 (2 votes) · LW · GW

1. Sure, happy to chat

2. Yeah, I didn't mean to imply that it's in direct contradiction, just that I have the most data about actually abusive relationships, and that I do think that's where most of the variance comes from, though definitely not all of it.

Comment by habryka4 on Davis_Kingsley's Shortform · 2019-08-21T19:33:58.485Z · score: 9 (4 votes) · LW · GW

My observations are mostly the opposite. I've seen a bunch of abusive relationships over the years, and in general poly seemed to reduce the incidence of more abusive relationships by making it easier for partners to have periods of temporary distance, and because other partners had the opportunity to sanity-check what was happening.

Most of the worst relationships I recall in the Bay Area and other parts of the rationality community were monogamous ones.

Comment by habryka4 on Buck's Shortform · 2019-08-20T18:35:05.790Z · score: 4 (2 votes) · LW · GW

I posted on Facebook, and LW might actually also be a good place for some subset of topics.

Comment by habryka4 on Open & Welcome Thread August 2019 · 2019-08-19T23:07:07.351Z · score: 4 (2 votes) · LW · GW

Welcome! :)

Comment by habryka4 on Buck's Shortform · 2019-08-18T18:14:41.892Z · score: 24 (7 votes) · LW · GW

I usually have lots of questions. Here are some types of questions that I tended to ask:

  • Here is my rough summary of the basic proof structure that underlies the field, am I getting anything horribly wrong?
    • Examples: There is a series of proofs at the heart of Linear Algebra that roughly goes from the introduction of linear maps over the real numbers to the introduction of linear maps over the complex numbers, then to finite fields, then to duality, inner product spaces, and then finally all the powerful theorems that tend to make basic linear algebra useful.
    • Other example: Basics of abstract algebra, going from groups and rings to modules, fields, general algebras, etc.
  • "I got stuck on this exercise and am confused how to solve it". Or, "I have a solution to this exercise but it feels really unnatural and forced, so what intuition am I missing?"
  • I have this mental visualization that I use to solve a bunch of problems, are there any problems with this mental visualization and what visualization/intuition pumps do you use?
    • As an example, I had a tutor in Abstract Algebra who was basically just: "Whenever I need to solve a problem of "this type of group has property Y", I just go through this list of 10 groups and see whether any of them has this property, and ask myself why it has this property, instead of trying to prove it in abstract"
  • How is this field connected to other ideas that I am learning?
    • Examples: How is the stuff that I am learning in real analysis related to the stuff in machine learning? Are there any techniques from real analysis that machine learning uses to achieve actually better performance?

Comment by habryka4 on Buck's Shortform · 2019-08-18T07:37:21.896Z · score: 29 (10 votes) · LW · GW

I've hired tutors around 10 times while I was studying at UC Berkeley for various classes I was taking. My usual experience was that I was easily 5-10 times faster at learning things with them than I was via either lectures or self-study, and often 3-4 one-hour meetings were enough to convey the whole content of an undergraduate class (combined with another 10-15 hours of exercises).

Comment by habryka4 on Neural Nets in Python 1 · 2019-08-18T04:58:58.751Z · score: 4 (2 votes) · LW · GW

Fixed the code blocks for you. The trick was to press CTRL+Shift+V to make sure that your browser doesn't try to do any fancy formatting to your code. Sorry for the inconvenience.

Comment by habryka4 on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2019-08-18T03:29:22.248Z · score: 11 (8 votes) · LW · GW

Promoted to curated: I think this post describes a real error mode that many people would benefit from identifying. I also particularly appreciate that it tries to provide concrete evidence in the form of personal experience (though obviously externally verifiable evidence is even better, but also even harder to come by). I think a frequent error mode for posts on LW is not being grounded sufficiently even in personal experience.

Comment by habryka4 on The Robbers Cave Experiment · 2019-08-16T01:26:06.658Z · score: 7 (4 votes) · LW · GW

Note that this article isn't included in the latest edition of Rationality: AI to Zombies, for roughly the reasons listed here (if I remember correctly).

Comment by habryka4 on Beliefs Are For True Things · 2019-08-16T00:08:34.442Z · score: 5 (5 votes) · LW · GW

I don't think the "Copybook headings" are a direct reference to truth. Some random googlings suggest that the following is a representative example of those copybook headings, which seem more to me like proverbs and references to old wisdom, than to some core concept of truth:

“Eternal vigilance is the price of success.”

“If wishes were horses then beggars would ride.”

“All is not gold that glitters.”

“Well begun is half done.”

I do think the poem works well for the point you are trying to make, but figured I would provide a bit of context.

Comment by habryka4 on Raemon's Scratchpad · 2019-08-15T21:00:34.971Z · score: 5 (3 votes) · LW · GW

(That was indeed the piece that crystallized this intuition for me, and I think Ray got this broader concept from me)

Comment by habryka4 on FactorialCode's Shortform · 2019-08-15T16:38:02.909Z · score: 3 (4 votes) · LW · GW

You can order the comments by oldest first, which gives you at least some of that.

We do also record when every vote was cast, so a time machine is possible, though querying and aggregating all the votes for a large thread might be too much for a browser client.
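
(To make the idea concrete, here is a minimal sketch of the kind of aggregation such a time machine would need; the field names are placeholders I made up, not the actual LessWrong vote schema.)

```python
from datetime import datetime
from typing import Iterable, Mapping

def karma_at(votes: Iterable[Mapping], comment_id: str, when: datetime) -> int:
    """Reconstruct a comment's karma at a past moment by summing the power of
    every vote on that comment cast before `when`.
    The field names (commentId, power, castAt) are hypothetical placeholders."""
    return sum(
        v["power"]
        for v in votes
        if v["commentId"] == comment_id and v["castAt"] <= when
    )
```

Doing this for every comment in a large thread means pulling and scanning every vote record, which is why it might be too heavy to run in the browser.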

Comment by habryka4 on Slider's Shortform · 2019-08-14T19:17:29.493Z · score: 2 (1 votes) · LW · GW

Interesting. Do you have a link to the document that sparked this thought?

Comment by habryka4 on Diana Fleischman and Geoffrey Miller - Audience Q&A · 2019-08-13T00:19:23.476Z · score: 37 (10 votes) · LW · GW

I am quite glad you posted it, and don't think that comment should discourage you from posting more similar things.

In general I am very excited to see more conversations being written up as transcripts and posted online, and would be really sad if this prevented that trend from taking hold.

Comment by habryka4 on Off the Cuff Brangus Stuff · 2019-08-13T00:13:47.751Z · score: 19 (7 votes) · LW · GW

Ok, let me give it a try. I am trying to not spend too much time on this, so I prefer to start with a rough draft and see whether there is anything interesting here before I write a massive essay.

You say the following:

Do chakras exist?

In some sense I might be missing the point, since the answer to this is basically just "no". I do still think they form a meaningful category of something, but in my model it's a meaningful category of "mental experiences" and "mental procedures", and definitely not a meaningful category of real atom-like things in the external world.

Another way might be that you think chakras do not literally exist like planes do, but you can make a predictive profit by pretending that they do exist

I don't think the epistemically healthy thing is to pretend that they exist as some external force. Here is an analogy that I think kind of explains the idea of "auras", which are a broader category than just chakras:

Imagine you are talking to a chess master who has played 20,000 hours of chess. You show him a position and he responds with "Oh, Black is really open on the right". You ask, "What do you mean by 'open on the right'?" He says, "Black's defense on the right is really weak, I could push through that immediately if I wanted to", while making the motion of picking up a piece with his right hand and pushing it through the right side of Black's board.

As you poke him more, you will probably find that his sense of "openness" corresponds to lots of proprioceptive experiences like "weak", "fragile", "strong", "forceful", "smashing", "soft", etc.

Now, I think it would be accurate to describe (in Buddhist/spiritual terms) the experience of the chess master as reading an "aura" off the chessboard. It's useful to describe it as such because a lot of its mental representation is cashed out in the same attributes that people and physical objects in general have, even though its referent is the state of some chess game, which obviously doesn't have those attributes straightforwardly.

My read is that the deal with "chakras" is basically an attempt to talk about the proprioceptive subsets of many mental representations. So in thinking about something like a chessboard, you can better understand your own mental models of it by getting a sense of what the natural clusters of proprioceptive experiences are that tend to correlate with certain attributes of your models (like how feeling vulnerable around your stomach corresponds to a concept of openness in a chess position).

You can also apply them to other people: you can try to understand what someone else is experiencing by reading their body language, which gives you evidence about the proprioceptive experiences their current thoughts are causing (since those tend to feed back into body language), and so lets you make better inferences about their mental state.

I haven't actually looked much into whether the usual set of chakras is a particularly good set of categories for the relationship between proprioceptive experiences and model attributes, so I can't speak much to that. But it seems clear that there are likely some natural categories here, and referring to them as "chakras" seems fine to me.

Comment by habryka4 on Matthew Barnett's Shortform · 2019-08-12T20:11:25.661Z · score: 2 (1 votes) · LW · GW

The identification of the pain-pleasure axis as the primary source of value (Bentham).

I will note that I think this is wrong, and if anything I would describe it as a philosophical dead-end. Complexity of value and all of that. So listing it as a philosophical achievement seems backwards to me.

Comment by habryka4 on Machine Learning Analogy for Meditation (illustrated) · 2019-08-12T16:52:13.381Z · score: 6 (3 votes) · LW · GW

I am confused; obviously my thoughts cause some changes in behavior. Maybe not immediately (though I am highly dubious of the whole "you can predict my actions before they are mentally conscious" bit), but definitely in the future (by causing some kind of back-propagation of updates that changes my future actions).

The opposite would make no sense from an evolutionary-adaptiveness perspective (having a whole System-2-like thing would be a giant waste of energy if it never caused any change in actions), doesn't at all correspond to how high-level planning of actions seems to work, isn't what the literature on S1 and S2 says (which does indeed make the case that S2 determines many actions), and doesn't correspond well to my internal experience.

Comment by habryka4 on Power Buys You Distance From The Crime · 2019-08-12T06:52:40.192Z · score: 7 (4 votes) · LW · GW

I do think that I tend to update downwards on the likelihood of a piece being true if it seems to have obvious alternative generators for how it was constructed that are unlikely to be very truth-tracking. Obvious examples here are advertisements and political campaign speeches.

In that sense I do think it's reasonable to distrust pieces of writing that seem like they are part of some broader conflict, and as such are unlikely to have been generated in anything close to an unbiased way. A lot of conflict-theory-heavy pieces tend to be part of some conflict, since accusing your enemies of being evil is memetic warfare 101.

I am not sure (yet) what the norms for discussion around these kinds of updates should be, but I did want to bring up that there exist some valid Bayesian inferences here.

Comment by habryka4 on What is the state of the ego depletion field? · 2019-08-09T22:48:53.403Z · score: 32 (7 votes) · LW · GW

This has been my default reference for the past few years:

https://replicationindex.com/2016/04/18/is-replicability-report-ego-depletionreplicability-report-of-165-ego-depletion-articles/

It's from 2016, so I don't actually know where things are right now. But presumably not that much has changed.

Comment by habryka4 on Is there a standard discussion of vegetarianism/veganism? · 2019-08-09T22:32:53.509Z · score: 6 (3 votes) · LW · GW

The cost of doing so reduces productivity (due to nutritional effects, but also effects on attention and general hassle, as well as coordination costs), and using a fraction of the productivity you preserve by not doing so to help animals results in a much larger reduction in net animal suffering (because of the abundance of easy opportunities for helping animals, due to the horrible state of animal lives).

Comment by habryka4 on Matt Goldenberg's Short Form Feed · 2019-08-08T20:23:35.246Z · score: 2 (1 votes) · LW · GW

My general takeaway from that post was that in terms of psychometric validity, most developmental psychology is quite bad. Did I miss something?

This doesn't necessarily mean the underlying concepts aren't real, but in terms of the quality metrics that psychometrics tends to assess things on, I don't think the evidence base is very good.

Comment by habryka4 on Mapping of enneagram to MTG personality types · 2019-08-08T19:57:00.630Z · score: 4 (2 votes) · LW · GW

The Open Philanthropy Project created an updated version (I am not a huge fan of it, but it does have a lot of the things you care about): https://www.openphilanthropy.org/blog/new-web-app-calibration-training

Comment by habryka4 on [deleted post] 2019-08-08T19:28:11.769Z

Duplicate of: https://www.lesswrong.com/posts/XzetppcF8BNoDqFBs/help-forecast-study-replication-in-this-social-science

Comment by habryka4 on [Site Update] Behind the scenes data-layer and caching improvements · 2019-08-07T22:34:50.262Z · score: 2 (1 votes) · LW · GW

Huh, weird. I will look into it. Just to check, by the green comment thing do you mean the following interaction? (which takes less than a second for me as you can see)

http://www.giphy.com/gifs/hoy4FbckKeEWCAUk5r

Post-pages also take less than a second for me to load (initially, though there is some JS initialization afterwards), so that's also confusing:

http://www.giphy.com/gifs/lStCmWRImaotNqIFoo

Might be browser specific, or something else weird going on.

Comment by habryka4 on Help forecast study replication in this social science prediction market · 2019-08-07T19:48:11.920Z · score: 2 (1 votes) · LW · GW

Yeah, bug on our side. Just merging a PR that fixes it. Will be fixed within the day.

https://github.com/LessWrong2/Lesswrong2/pull/2264#pullrequestreview-272171432

Comment by habryka4 on Edit Nickname · 2019-08-07T17:41:47.744Z · score: 2 (1 votes) · LW · GW

Ping us on Intercom (the chat icon in the bottom right corner) and I can help you change it to whatever you want. Sorry for the hassle with the Google login; it's been on my to-do list for a while to fix that.

Comment by habryka4 on Occam's Razor: In need of sharpening? · 2019-08-06T22:49:20.613Z · score: 4 (4 votes) · LW · GW

This seems right, though something about this still feels confusing to me in a way I can't yet put into words. Might write a comment at a later point in time.

Comment by habryka4 on Occam's Razor: In need of sharpening? · 2019-08-06T18:39:08.160Z · score: 4 (2 votes) · LW · GW

I originally agreed with this comment, but after thinking about it for two more days I disagree. Just because you see a high-level phenomenon doesn't mean you have to have that high-level phenomenon as a low-level atom in your model of the world.

Comment by habryka4 on Just Imitate Humans? · 2019-08-06T18:25:21.184Z · score: 19 (6 votes) · LW · GW

I actually spent a bunch of time in the last few weeks fixing and updating Arbital, so it should be reasonably fast now. The Arbital pages loaded for me in less than a second.

arbital.greaterwrong is obviously still faster, but it's no longer as massive a difference.

Comment by habryka4 on benwr's unpolished thoughts · 2019-08-06T18:22:25.071Z · score: 4 (2 votes) · LW · GW

The shortform page is currently sorted by last-commented-on, so that one should help you find active comment threads reasonably well.

Comment by habryka4 on benwr's unpolished thoughts · 2019-08-06T02:58:36.210Z · score: 5 (3 votes) · LW · GW

I have some hesitations about this. The biggest one is that I do want to avoid LessWrong just becoming a collection of filter bubbles in the way Tumblr or Reddit or Facebook is, and I do think there is a lot of value in having people with disagreeing perspectives share the same space.

I think I am not opposed to people building feeds, but I would want to make sure that there is still a way to reach those users with at least the most important content, i.e. at least make sure that everyone sees the curated posts or something like that.

Comment by habryka4 on Raemon's Scratchpad · 2019-08-05T20:05:05.264Z · score: 9 (4 votes) · LW · GW

I have a bunch of thoughts on this. A lot of the good effects of this actually happened in space law, because nobody really cared about the effects of the laws when they were written.

Another interesting contract that was surprisingly long-lasting is Britain's lease of Hong Kong, which was returned after 99 years.

However, I think there are various problems with doing this a lot. One of them is that when you make a policy decision that's supposed to be useful in 20 years, you are betting on that policy being useful in the environment that will exist in 20 years, over which you have a lot of uncertainty. So by default I expect policy decisions made for a world 20 years from now to be worse than decisions made for the current world.

The enforceability of contracts over such long time periods is also quite unclear. What prevents the leadership 15 years from now from just calling off the policy implementation? This requires a lot of trust in and support for the meta-system, which is hard to sustain over such long periods of time.

In general, my perspective is that lots of problems could be solved if people could reliably make long-term contracts, but there are no reliable enforcement mechanisms for long-term contracts at the national-actor level.

Comment by habryka4 on Occam's Razor: In need of sharpening? · 2019-08-05T00:08:58.543Z · score: 2 (3 votes) · LW · GW

I made a kind of related point in: https://www.lesswrong.com/posts/3xnkw6JkQdwc8Cfcf/is-the-human-brain-a-valid-choice-for-the-universal-turing

Comment by habryka4 on Occam's Razor: In need of sharpening? · 2019-08-04T22:32:32.368Z · score: 2 (1 votes) · LW · GW

There has been some discussion in the community about whether you want to add memory- or runtime-based penalties as well. At least Paul comments on it a bit in "What does the Universal Prior actually look like?"
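
(As one concrete example of what a runtime penalty can look like, not necessarily the proposal discussed in Paul's post: Levin's Kt complexity charges a program for its running time as well as its length.)

```latex
% Levin's Kt complexity: program length plus the log of its running time
Kt(x) = \min_{p} \bigl\{\, |p| + \log t(p) \;:\; U(p) \text{ outputs } x \text{ in } t(p) \text{ steps} \,\bigr\}
```

So a hypothesis that is short in the length sense but takes astronomically long to run still gets penalized.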

Comment by habryka4 on Occam's Razor: In need of sharpening? · 2019-08-03T17:08:34.332Z · score: 12 (3 votes) · LW · GW

Yes, and the sequence (as well as the post I linked below) tries to define a complexity measure based on Solomonoff Induction, which is a formalization of Occam's Razor.
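
(Roughly, for readers who want the formalization spelled out: Solomonoff induction weights each program p for a universal prefix machine U by 2^{-|p|}, so the prior probability of seeing a string x is approximately)

```latex
% Solomonoff / universal prior: shorter programs get exponentially more weight
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

where the sum ranges over programs p whose output begins with x. Exponentially favoring shorter programs is the sense in which this formalizes Occam's Razor.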

Comment by habryka4 on Writing children's picture books · 2019-08-03T02:41:13.535Z · score: 11 (5 votes) · LW · GW

Promoted to curated: One of my favorite posts in LessWrong history is Sarah Constantin's "Fact Posts: How and Why", because it gave me a very concrete tool that could help me understand large parts of the world in a better way. I think this post does something similar, and while I sadly haven't gotten around to using it in detail, I have brought it up as an intuition pump a few times in conversation and when thinking about things alone.

I also particularly like the very concrete example, and generally think that concrete examples help a lot with posts like this.

Comment by habryka4 on Off the Cuff Brangus Stuff · 2019-08-02T20:15:04.744Z · score: 5 (3 votes) · LW · GW

I have some thoughts about this (as someone who isn't really into the chakra stuff, but who feels it's relatively straightforward to answer the meta-questions that you are asking here). Feel free to ping me in a week if I haven't written a response by then.

Comment by habryka4 on [Site Update] Weekly/Monthly/Yearly on All Posts · 2019-08-02T17:25:16.926Z · score: 2 (1 votes) · LW · GW

This post has a list of all the chat servers: https://www.lesswrong.com/posts/mQDoZ2yCX2ujLxJDk/what-lesswrong-rationality-ea-chat-servers-exist-that

I will ping Elo about accepting your invitation.

Comment by habryka4 on [Site Update] Weekly/Monthly/Yearly on All Posts · 2019-08-02T07:17:38.669Z · score: 4 (2 votes) · LW · GW

Mostly UI complexity. I’ve already heard some users report that the current set of sorting options is quite overwhelming, so I am hesitant to add a closely overlapping set of additional options.

Also just code complexity. Any additional sort option is a source of bugs, and keeping things simple is necessary with our relatively small team.

Comment by habryka4 on [Site Update] Weekly/Monthly/Yearly on All Posts · 2019-08-02T06:45:28.029Z · score: 5 (2 votes) · LW · GW

I think for yearly and monthly I prefer having the calendar dates. It feels like the more natural category to look for "the best post in 2016" instead of "the best post in the year that started 4 years ago".

If more people feel differently though, seems reasonable to maybe change it.

Comment by habryka4 on Drive-By Low-Effort Criticism · 2019-08-02T00:01:51.425Z · score: 7 (4 votes) · LW · GW

I think social punishments usually have the same form, where rewards tend to be more of a transfer of status, and punishments more of a destruction of status (two people can destroy each other's reputations with repeated social punishments).

There is also the bandwidth cost of punishment, as well as the simple fact that giving people praise usually comes with a positive emotional component for the receiver (in addition to the status and the reputation), whereas punishments usually come with added stress and discomfort that reduces total output for a while.

In either case, I think the simpler case is made by just looking at the assumption of diminishing returns in resources and realizing that the cost of giving someone a reward they value 2x as much is usually larger than 2x the cost of the base reward, meaning that there is an inherent cost to high-variance reward landscapes.

Comment by habryka4 on Drive-By Low-Effort Criticism · 2019-08-01T23:38:00.507Z · score: 7 (4 votes) · LW · GW

Rewards are usually a transfer of resources (e.g. me giving you money), which tend to preserve total wealth (or status, or whatever other resource you are thinking about).

Unilateral punishments are usually not transfers of resources; they are usually one party imposing a cost on another party (like hitting them with a stick and injuring them), in a way that does not preserve total wealth (or health, or whatever other resource applies to the situation).

Comment by habryka4 on Drive-By Low-Effort Criticism · 2019-08-01T23:35:57.725Z · score: 4 (2 votes) · LW · GW

assuming I am not risk-averse—but if I am, then I’m not going to be the one trying the high-variance strategy anyway

But, of course, everyone is risk-averse in almost every resource. Even the most ambitious startup founders are still risk-averse in total payoff, just less so than others. I care less about my tenth million dollars than about any of my first nine million, which already creates risk aversion. The same is true for status or almost any other resource with which you might want to reward people.

Comment by habryka4 on Drive-By Low-Effort Criticism · 2019-08-01T23:33:49.029Z · score: 3 (2 votes) · LW · GW

I do not know of any industry in which contractor agreements with variable payments dependent on the quality of the output are common practice. There is often an agreement on what it means to "complete the work", but in almost every case both your downside and your upside are limited by a guaranteed upfront payment and a conditional final payment. It's almost never the case that you can get 2x the money depending on the quality of your output, which seems like a necessary requirement for some of the incentive schemes you outlined.

Comment by habryka4 on Drive-By Low-Effort Criticism · 2019-08-01T23:28:29.970Z · score: 10 (5 votes) · LW · GW

Yeah, but the fact that it takes a while, and that we have monthly wages instead of all being contractors paid by the piece, is kind of my point. Most of the economy does not pay for completed output, but for intermediary metrics that allow a much higher level of stability.

Comment by habryka4 on Drive-By Low-Effort Criticism · 2019-08-01T23:23:45.811Z · score: 7 (4 votes) · LW · GW

No, because humans are risk-averse, not just in money terms but also in most other currencies. If you do this, you increase the total risk for your friend, for no particular gain.

Punishment is also usually net-negative, whereas rewards tend to be zero-sum, so by adding punishments in a bunch of possible worlds, you destroy a bunch of value in expectation, with no gain (in the world where you both have certainty about the payoff matrix).

One model here is that humans have diminishing returns on money, so in order to give someone a reward they value 2x as much in dollars, you have to pay more than 2x the dollar amount, so your total cost is higher.
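
(A quick worked example with a made-up concave utility function, just to show the shape of the argument:)

```latex
% Illustrative only: assume the recipient's utility from a reward of m dollars is u(m) = \sqrt{m}
u(100) = \sqrt{100} = 10, \qquad u(400) = \sqrt{400} = 20
% Doubling the delivered value (10 -> 20) requires 4x the dollars, not 2x
```

So doubling how much the recipient values the reward costs 4x as many dollars here, which is the sense in which high-variance reward schemes are more expensive for the rewarder.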