Posts

Shell, Shield, Staff 2018-01-25T13:22:26.728Z
Actionable Eisenhower 2018-01-21T15:26:47.979Z
Dynamic Karma & Static Karma 2018-01-18T20:23:44.647Z
Messaging Troubles 2018-01-17T11:19:22.189Z
Bug: Inconsistent session status 2018-01-05T13:32:57.354Z
Superhuman Meta Process 2018-01-03T13:39:36.230Z
Notes on Mental Security 2017-12-30T17:53:35.025Z
Happiness Is a Chore 2017-12-20T11:13:08.329Z
Happiness Is a Chore 2017-12-20T11:11:01.848Z
The Little Dragon is Dead 2017-11-06T21:24:04.529Z
The Little Dragon is Dead 2017-11-06T21:20:55.108Z
Time to Exit the Sandbox 2017-10-24T17:13:42.883Z
Time to Exit the Sandbox 2017-10-24T08:04:14.478Z
You Too Can See Suffering 2017-10-03T19:49:28.738Z
You Too Can See Suffering 2017-10-03T19:46:00.697Z
[Humor] A Fearsome Rationality Technique 2017-08-14T21:05:23.479Z
The Unyoga Manifesto 2017-08-04T21:24:26.678Z
Best Of Rationality Blogs RSS Feed 2017-07-10T11:11:13.896Z
Epistemic Laws of Motion 2017-07-07T21:37:43.938Z
Philosophical Parenthood 2017-05-30T14:09:07.702Z
The AI Alignment Problem Has Already Been Solved(?) Once 2017-04-22T13:24:38.978Z
Make Your Observations Pay Rent 2017-03-28T13:32:53.041Z
Prediction Calibration - Doing It Right 2017-01-30T10:05:09.705Z
Applied Rationality Exercises 2017-01-07T18:13:28.887Z
On Risk of Viral Infections from Chlorella 2016-11-21T12:46:01.039Z
Internal Race Conditions 2016-10-23T13:23:29.286Z
Against Amazement 2016-09-20T19:25:25.238Z
Neutralizing Physical Annoyances 2016-09-12T16:36:41.379Z
Non-Fiction Book Reviews 2016-08-11T05:05:40.950Z
[CORE] Concepts for Understanding the World 2016-07-16T10:53:29.460Z
Geometric Bayesian Update 2016-04-09T07:24:14.774Z
Abuse of Productivity Systems 2016-03-27T05:32:09.670Z

Comment by SquirrelInHell on The Jordan Peterson Mask · 2018-03-05T01:58:33.222Z · LW · GW
Then what makes Peterson so special?

This is what the whole discussion is about. You are setting boundaries that are convenient for you, and refuse to think further. But some people in that reference class you are now denigrating as a whole are different from others. Some actually know their stuff and are not charlatans. Throwing a tantrum about it doesn't change it.

Comment by SquirrelInHell on The Jordan Peterson Mask · 2018-03-04T21:45:04.981Z · LW · GW

I did in fact have something between those two in mind, and was even ready to defend it, but then I basically remembered that LW is status-crazy and gave up on fighting that uphill battle. Kudos to alkjash for the fighting spirit.

Comment by SquirrelInHell on The Jordan Peterson Mask · 2018-03-04T19:11:16.483Z · LW · GW
They explicitly said that he's not wrong-on-many-things in the T framework, the same way Eliezer is T.correct.

Frustrating, that's not what I said! Rule 10: be precise in your speech, Rule 10b: be precise in your reading and listening :P My wording was quite purposeful:

I don't think you can safely say Peterson is "technically wrong" about anything

I think Raemon read my comments the way I intended them. I hoped to push on a frame that people seem to be (according to my private, unjustified, wanton opinion) obviously too stuck in. See also my reply below.

I'm sorry if my phrasing seemed conflict-y to you. I think the fact that Eliezer has high status in the community and Peterson has low status is making people stupid about this issue, and this makes me write in a certain style in which I sort of intend to push on status because that's what I think is actually stopping people from thinking here.

Comment by SquirrelInHell on The Jordan Peterson Mask · 2018-03-04T19:03:01.284Z · LW · GW

Cool examples, thanks! Yeah, these are issues outside of his cognitive expertise and it's quite clear that he's getting them wrong.

Note that I never said that Peterson isn't making mistakes (I'm quite careful with my wording!). I said that his truth-seeking power is in the same weight class, but obviously he has a different kind of power than LW-style. E.g. he's less able to deal with cognitive bias.

But if you are doing "fact-checking" in LW style, you are mostly accusing him of getting things wrong about which he never cared in the first place.

Like when Eliezer is using phlogiston as an example in the Sequences and gets the historical facts wrong. But that doesn't make Eliezer wrong in any meaningful sense, because that's not what he was talking about.

There's some basic courtesy in listening to someone's message, not words.

Comment by SquirrelInHell on Murphy’s Quest Ch 1: Exposure Therapy · 2018-03-04T12:21:35.463Z · LW · GW
This story is trash and so am I.
If people don't want to see this on LW I can delete it.

You are showcasing a certain unproductive mental pattern, for which there's a simple cure. Repeat after me:

This is my mud pile

I show it with a smile

And this is my face

It also has its place

For increased effect, repeat 5 times in rap style.

Comment by SquirrelInHell on The Jordan Peterson Mask · 2018-03-04T12:03:25.261Z · LW · GW

[Please delete this thread if you think this is getting out of hand. Because it might :)]

I'm not really going to change my mind on the basis of just your own authority backing Peterson's authority.

See right here, you haven't listened. What I'm saying is that there is some fairly objective quality which I called "truth-seeking juice" about people like Peterson, Eliezer and Scott, which you can evaluate by yourself. But you have just dug yourself into the same trap a little bit more. From what you write, your heuristics for evaluating sources seem to be a combination of authority and fact-checking isolated pieces (regardless of how much you understand the whole picture). Those are really bad heuristics!

The only reason why Eliezer and Scott seem trustworthy to you is that their big picture is similar to your default, so what they say is automatically parsed as true/sensible. They make tons of mistakes and might fairly be called "technically wrong on many things". And yet you don't care, because when you feel their big picture is right, those mistakes feel to you like not-really-mistakes.

Here's an example of someone who doesn't automatically get Eliezer's big picture, and thinks very sensibly from their own perspective:

On a charitable interpretation of pop Bayesianism, its message is:
Everyone needs to understand basic probability theory!
That is a sentiment I agree with violently. I think most people could understand probability, and it should be taught in high school. It’s not really difficult, and it’s incredibly valuable. For instance, many public policy issues can’t properly be understood without probability theory.
Unfortunately, if this is the pop Bayesians’ agenda, they aren’t going at it right. They preach almost exclusively a formula called Bayes’ Rule. (The start of Julia Galef’s video features it in neon.) That is not a good way to teach probability.

How about you go read that, and try to mentally swap places? The degree to which Chapman doesn't get Eliezer's big picture is probably similar to the degree to which you don't get Peterson's big picture, with similar results.

Comment by SquirrelInHell on The Jordan Peterson Mask · 2018-03-04T01:56:15.401Z · LW · GW

[Note: somewhat taking you up on the Crocker's rules]

Peterson's truth-seeking and data-processing juice is in the super-heavy weight class, comparable to Eliezer etc. Please don't make the mistake of lightly saying he's "wrong on many things".

At the level of analysis in your post and the linked Medium article, I don't think you can safely say Peterson is "technically wrong" about anything; it's overwhelmingly more likely you just didn't understand what he means. [it's possible to make more case-specific arguments here but I think the outside view meta-rationality should be enough...]

Comment by SquirrelInHell on Is there a Connection Between Greatness in Math and Philosophy? · 2018-03-04T01:43:05.618Z · LW · GW
4) The skill to produce great math and skill to produce great philosophy are secretly the same thing. Many people in either field do not have this skill and are not interested in the other field, but the people who shape the fields do.

FWIW I have reasonably strong but not-easily-transferable evidence for this, based on observation of how people manipulate abstract concepts in various disciplines. Using this lens, math, philosophy, theoretical computer science, theoretical physics, all meta disciplines, epistemic rationality, etc. form a cluster in which math is a central node, and philosophy is unusually close to math even considered in the context of the cluster.

Comment by SquirrelInHell on Funding for AI alignment research · 2018-03-03T23:36:59.219Z · LW · GW

Note that this is (by far) the least incentive-skewing from all (publicly advertised) funding channels that I know of.

Apply especially if all of 1), 2) and 3) hold:

1) you want to solve AI alignment

2) you think your cognition is pwned by Moloch

3) but you wish it wasn't

Comment by SquirrelInHell on Focusing · 2018-02-26T14:06:31.380Z · LW · GW
tl;dr: your brain hallucinates sensory experiences that have no correspondence to reality. Noticing and articulating these “felt senses” gives you access to the deep wisdom of your soul.

I think this snark makes it clear that you lack gears in your model of how focusing works. There are actual muscles in your actual body that get tense as a result of stuff going on with your nervous system, and many people can feel that even if they don't know exactly what they are feeling.

Comment by SquirrelInHell on Arguments about fast takeoff · 2018-02-25T11:57:32.351Z · LW · GW

[Note that I am in no way an expert on strategy, probably not up to date with the discourse, and haven't thought this through. I also don't disagree with your conclusions much.]

[Also note that I have a mild feeling that you engage with a somewhat strawman version of the fast-takeoff line of reasoning, but have trouble articulating why that is the case. I'm not satisfied with what I write below either.]

These possible arguments seem not included in your list. (I don't necessarily think they are good arguments. Just mentioning whatever intuitively seems like it could come into play.)

Idiosyncrasy of recursion. There might be a qualitative difference between universality across economically-incentivized human-like domains, and universality extended to self-improvement from the point of view of a self-improving AI, rather than human-like work on AI. In this case recursive self-improvement looks more like a side effect than mainstream linear progress.

Actual secrecy. Some group might actually pull off being significantly ahead and protecting their information from leaking. There are incentives to do this. Related: Returns to non-scale. Some technologies might be easier to develop by a small or medium sized well-coordinated group, rather than a global/national ecosystem. This means there's a selection effect for groups which stay somewhat isolated from the broader economy, until significantly ahead.

Non-technological cruxes. The ability to extract high quality AI research from humans is upstream of technological development, and an early foom loop might route through a particular configuration of researcher brains and workflow. However, humans are not fungible and there might be strange non-linear progress achieved by this. This consideration seems historically more important for projects that really push the limits of human capability, and an AGI seems like such a project.

Nash equilibria. The broader economy might random-walk itself into a balance of AI technologies which actively hinders optimizing for universality, e.g. by producing only certain kinds of hardware. This means it's not enough to argue that at some point researchers will realize the importance of AGI, but you have to argue they will realize this before the technological/economic lock-in occurs.

Comment by SquirrelInHell on The Intelligent Social Web · 2018-02-23T15:58:43.606Z · LW · GW
I haven't really understood where the fakeness in the framework is. And the other comments also seem not to acknowledge that it is a fake framework, which I am interpreting as people taking this framework at face value to be true or real. I suspect I haven't quite understood what is meant by "fake framework".

Yeah, "fake" in this case is basically a trick to avoid being questioned about a justification for it.

Comment by SquirrelInHell on Toward a New Technical Explanation of Technical Explanation · 2018-02-21T19:28:03.674Z · LW · GW

I think it's perfectly valid to informally say "gears" while meaning both "gears" (how clear a model is on what it predicts) and "meta-gears" (how clear the meta model is on which models it a priori expects to be correct). And the new clarity you bring to this would probably be the right time to re-draw the boundaries around gears-ness, to make it match the structure of reality better. But this is just a suggestion.

Comment by SquirrelInHell on Toward a New Technical Explanation of Technical Explanation · 2018-02-16T14:55:27.852Z · LW · GW

[excellent, odds ratio 3:2 for worth checking LW2.0 sometimes and 4:3 for LW2.0 will succeed]

I think "Determinism and Reconstructability" are great concepts but you picked terrible names for them, and I'll probably call them "gears" and "meta-gears" or something short like that.

This article made me realize that my cognition runs on something equivalent to logical inductors, and what I recently wrote on Be Well Tuned about cognitive strategies is a reasonable attempt at explaining how to implement logical inductors in a human brain.

Comment by SquirrelInHell on Mental TAPs · 2018-02-08T17:39:42.054Z · LW · GW
Request: Has this idea already been explicitly stated elsewhere? Anything else regular old TAPs are missing?

It's certainly not very new, but nothing wrong with telling people about your TAP modifications. There are many nuances to using TAPs in practice, and ultimately everyone figures out their own style anyway. Whether you have noticed or not, you probably already have this meta-TAP:

"TAPs not working as I imagined -> think how to improve TAPs"

It is, ultimately, the only TAP you need to successfully install to start the process of recursive improvement.

Comment by SquirrelInHell on Hammertime Day 10: Murphyjitsu · 2018-02-07T20:14:50.285Z · LW · GW
I have the suspicion that everyone is secretly a master at Inner Sim

There's a crucial difference here between:

• good "secretly": I'm so good at it it's my second nature, and there's little reason to bring it up anymore
• bad "secretly": I'm not noticing what I'm doing, so I can't optimize it, and never have
Comment by SquirrelInHell on Pseudo-Rationality · 2018-02-07T20:10:03.869Z · LW · GW

One example is that the top tiers of the community are in fact composed largely of people who directly care about doing good things for the world, and this (surprise!) comes together with being extremely good at telling who's faking it. So in fact you won't be socially respected above a certain level until you optimize hard for altruistic goals.

Another example is that whatever your goals are, in the long run you'll do better if you first become smart, rich, knowledgeable about AI, sign up for cryonics, prevent the world from ending etc.

Comment by SquirrelInHell on Pseudo-Rationality · 2018-02-07T14:09:28.390Z · LW · GW
if people really wanted to optimize for social status in the rationality community there is one easiest canonical way to do this: get good at rationality.

I think this is false: even if your final goal is to optimize for social status in the community, real rationality would still force you to locally give it up because of convergent instrumental goals. There is in fact a significant first order difference.

Comment by SquirrelInHell on UDT as a Nash Equilibrium · 2018-02-07T13:56:14.634Z · LW · GW
I realized today that UDT doesn't really need the assumption that other players use UDT.

Was there ever such an assumption? I recall a formulation in which the possible "worlds" include everything that feeds into the decision algorithm, and it doesn't matter if there are any games and/or other players inside of those worlds (their treatment is the same, as are corresponding reasons for using UDT).

Comment by SquirrelInHell on Hammertime Day 7: Aversion Factoring · 2018-02-04T19:20:11.716Z · LW · GW
You’d reap the benefits of being publicly wrong

By the way - did I mention that inventing the word "hammertime" was epic, and that now you might just as well retire, because there's no way to compete against your former glory?

Comment by SquirrelInHell on Hammertime Day 3: TAPs · 2018-02-01T14:11:05.296Z · LW · GW

I think this comment is 100% right despite being perhaps maybe somewhat way too modest. It's more useful to think of sapience as introducing a delta on behavior, rather than a way to execute desired behavior. The second is a classic Straw Vulcan failure mode.

Comment by SquirrelInHell on Hammertime Day 2: Yoda Timers · 2018-01-31T15:31:59.945Z · LW · GW

I wonder if all of the CFAR techniques will have different names after you are done with them :) Looking forward to your second and third iteration.

Comment by SquirrelInHell on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-01-29T19:55:54.478Z · LW · GW

All sounds sensible.

Also, reminds me of the 2nd Law of Owen:

In a funny sort of way, though, I guess I really did just end up writing a book for myself.
Comment by SquirrelInHell on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-01-29T12:07:01.965Z · LW · GW

[Note: I am writing from my personal epistemic point of view from which pretty much all the content of the OP reads as obvious obviousness 101.]

The reason why people don't know this is not because it's hard to know it. This is some kind of common fallacy: "if I say true things that people apparently don't know, they will be shocked and turn their lives around". But in fact most people around here have more than enough theoretical capacity to figure this out, and much more, without any help. The real bottleneck is human psychology, which is not able to take certain beliefs seriously without much difficult work at the fundamentals. So "fact" posts about x-risk are mostly preaching to the choir. At best, you get some people acting out of scrupulosity and social pressure, and this is pretty useless.

Of course I still like your post a lot, and I think it's doing some good on the margin. It's just that it seems like you're wasting energy on fighting the wrong battle.

Note to everyone else: the least you can do is share this post until everyone you know is sick of it.

Comment by SquirrelInHell on A LessWrong Crypto Autopsy · 2018-01-28T17:35:24.799Z · LW · GW

It is a little bit unfair to say that buying 10 bitcoins was everything you needed to do. I owned 10 bitcoins, and then sold them at a meager price. Nothing changed as a result of me merely understanding that buying bitcoins was a good idea.

What you really needed was to sit down and think up a strict selling schedule, and also commit to following it. E.g. spend $100 on bitcoin now, and later sell exactly 10% of your bitcoins every time that 10% becomes worth at least $10,000 (I didn't run the numbers to check if these exact values make sense, but you get the idea).
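To make the idea concrete, here is a minimal sketch of such a precommitted schedule. The numbers (10% tranches, a $10,000 trigger) are the hypothetical ones from above, not a recommendation, and the repeated-sale-at-one-price behavior is one possible reading of "every time":

```python
def run_schedule(btc, prices, trigger=10_000.0):
    """Walk through a series of observed prices. At each price, keep
    selling 10% tranches of the current holdings for as long as a
    tranche is worth at least `trigger`. Returns (btc_left, usd_realized)."""
    usd = 0.0
    for price in prices:
        tranche = btc * 0.10
        while btc > 0 and tranche * price >= trigger:
            usd += tranche * price   # sell the tranche at this price
            btc -= tranche
            tranche = btc * 0.10     # next tranche is 10% of what remains
    return btc, usd
```

The point of writing it down as a mechanical rule is exactly the one in the comment: the decision is made once, in advance, so no willpower is needed at sale time.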

Upstream of not taking effective action was unwillingness to spend a few hours thinking hard about what would actually be smart to do if the hypothetical proved true.

Comment by SquirrelInHell on Magic Brain Juice · 2018-01-26T21:20:52.572Z · LW · GW
At grave peril of strawmanning, a first-order approximation to SquirrelInHell’s meta-process (what I think of as the Self) is the only process in the brain with write access, the power of self-modification. All other brain processes are to treat the brain as a static algorithm and solve the world from there.

Let me clarify: I consider it the meta level when I think something like "what trajectory do I expect to have as a result of my whole brain continuing to function as it already tends to do, assuming I do nothing special with the output of the thought process which I'm using right now to simulate myself?". This simulation obviously includes everyday self-modification which happens as a side-effect of acting (like the 10% shift you describe). The key question is, am I reflectively consistent? Do I endorse the most accurate simulation of myself that I can run?

The meta process is what happens when I want to make sure that I always remain reflectively consistent. Then I conjure up a special kind of self-modification which desires to remember to do itself, and to continue to hold on to enough power to always win. I aspire to make this meta process an automatic part of myself, so that my most accurate simulation of any of my future trajectories already automatically includes self-consistency.

Also: enjoy your CFAR workshop! :)

Comment by SquirrelInHell on Shell, Shield, Staff · 2018-01-26T15:13:24.936Z · LW · GW

Humans are not thermostats, and they can do better than a simple mathematical model. The idea of oscillation with decreasing amplitude you mention is well known from control theory, and it's looking at the phenomenon from a different (and, I dare say, less interesting) perspective.

To put it in another way, there is no additional deep understanding of reality that you could use to tell apart the fourth and the sixth oscillation of a converging mathematical model. If you know the model, you are already there.

Comment by SquirrelInHell on Teaching Ladders · 2018-01-25T19:02:04.601Z · LW · GW

[Note: I'm not sure if this was your concern - let me know if what I write below seems off the mark.]

The most accurate belief is rarely the best advice to give; there is a reason why these corrections tend to happen in a certain order. People holding the naive view need to hear the first correction, those who overcompensated need to hear the second correction. The technically most accurate view is the one that the fewest people need to hear.

I invoke this pattern to forestall a useless conversation about whose advice is objectively best.

In fact, I think it would be good practice to always, before giving advice, do your best to trace back to the naive view and count the reversals, and to inform your reader which level you are advising on. (This is surprisingly doable if you bother to do it.)

Comment by SquirrelInHell on Teaching Ladders · 2018-01-25T13:42:26.351Z · LW · GW

Here we go: the pattern of this conversation is "first correction, second correction, accurate belief" (see growth triplets).

Naive view: "learn from masters"

The OP is the first correction: "learn from people just above you"

Your comment is the second correction: "there are cases where teacher's advice is better quality"

The accurate belief takes all of this into account: "it's best to learn from multiple people in a way that balances wisdom against accessibility"

Comment by SquirrelInHell on Book Review - Probability and Finance: It's Only a Game! · 2018-01-25T13:31:38.791Z · LW · GW

Yes! Not just improved, but leading by stellar example :)

Comment by SquirrelInHell on Dispel your justification-monkey with a “HWA!” · 2018-01-24T12:55:21.899Z · LW · GW

People have recently discussed short words from various perspectives. While I was initially not super-impressed by this idea, this post made me shift towards "yeah, this is useful if done just right".

Casually reading this post on your blog yesterday was enough for the phrase to automatically latch on to the relevant mental motion (which it turns out I was already using a lot), solidify it, make it quicker and more effective, and make me want to use it more.

It has since then been popping up in my consciousness repeatedly, on at least 5 separate occasions after I have completely forgotten about it. Once, taken to the extreme, it moved me directly into a kinda "fresh seeing" or "being a new homunculus" state of mind, where I was looking at a familiar landscape and having long-unthought thoughts in the style of "why do all these people walk? flying would be more useful. why is everything so slow, big, and empty?".

To summarize: I think this name hit right in the middle of some concept that already badly "wanted" to materialize in my mind, and also it managed to be more short and catchy than what I would have come up with myself.

Good job! Let the HWA be with you!

Comment by SquirrelInHell on Dynamic Karma & Static Karma · 2018-01-22T11:50:47.351Z · LW · GW

Your point can partially be translated to "make reasonably close to 1" - this makes the decisions less about what the moderators want, and allows longer chains of passing the "trust buck".

However, to some degree "a clique moved in that wrote posts that the moderators (and the people they like) dislike" is pretty much the definition of a spammer. If you say "are otherwise extremely good", what is the standard by which you wish to judge this?

Comment by SquirrelInHell on Singularity Mindset · 2018-01-22T11:42:07.398Z · LW · GW

Yes, and also it's even more general than that - it's sort of how progress works on every scale of everything. See e.g. tribalism/rationality/post-rationality; thesis/antithesis/synthesis; life/performance/improv; biology/computers/neural nets. The OP also hints at this.

Comment by SquirrelInHell on The Desired Response · 2018-01-21T11:51:24.128Z · LW · GW

This seems to rest on a model of people as shallow, scripted puppets.

"Do you want my advice, or my sympathy?" is really asking: "which word-strings are your password today?" or "which kind of standard social script do you want to play out today?" or "can you help me navigate your NPC conversation tree today?".

Personally, when someone tries to use this approach on me I am inclined to instantly write them off and never look back. I'm not saying everyone is like me but you might want to be wary of what kind of people you are optimizing yourself for.

Comment by SquirrelInHell on An Apology is a Surrender · 2018-01-20T15:51:53.381Z · LW · GW

I'd add that the desire to hear apologies is itself a disguised status-grabbing move, and it's prudent to stay wary of it.

Comment by SquirrelInHell on Fashionable or Fundamental Thought in Communities · 2018-01-20T15:42:53.117Z · LW · GW

While I 100% agree with your views here, and this is by far the most sane opinion on akrasia that I've seen in a long time, I'm not convinced that so many people on LW really "get it". Although to be sure, the distribution of behavior that signals this has significantly shifted since the move to LW2.0.

So overall I am very uncertain, but I still find it more plausible that the reason why the community as a whole stopped talking about akrasia is more like: people ran out of impressive-seeming or fresh-seeming things to say about it, while the minority that could have contributed actual real new insights turned away for better reasons.

Comment by SquirrelInHell on Book Review - Probability and Finance: It's Only a Game! · 2018-01-20T15:15:36.949Z · LW · GW

Big props for posting a book review - that's always great and in demand. However, some points on (what I think is) good form while doing these:

• a review on LW is not an advertisement; try to write reviews in a way that is useful to people who decide to not read the book
• I also don't care to see explicit encouragement to read a book - if what you relate about its content is tempting enough, I expect that I will have the idea to go and read it on my own
Comment by SquirrelInHell on Low Enough To See Your Shadow · 2018-01-20T15:04:29.184Z · LW · GW

[Note: your post is intentionally poetic, so I'll let myself be intentionally poetic while answering this:]

Would you trust someone without a shadow?

The correct answer is, I think, "don't care". On Friday night you dance with a Gervais-sociopath. On Saturday you build a moon rocket together and use it to pick up groceries. Do you "trust" the rocket to be "good"? No, but you don't need to.

Comment by SquirrelInHell on Kenshō · 2018-01-20T14:49:05.572Z · LW · GW

Not to put too fine a point on it: through the tone and content of the post, I can still see the old attachments and subconscious messed-up strategies shining through.

I am, of course, not free of blame here because the same could be said about my comment.

However, I reach out over both of these and touch you, Val.

Comment by SquirrelInHell on Dynamic Karma & Static Karma · 2018-01-20T14:03:31.435Z · LW · GW

Sure, and that's probably what almost all users do. But the situation is still perverse: the broken incentives of the system are fighting against your private incentive to not waste effort.

This kind of conflict is especially bad if people have different levels of the internal incentive, but also bad even if they don't, because on the margin it pushes everyone to act slightly against their preferences. (I don't think this particular case is really so bad, but the more general phenomenon is, and that's what you get if you design systems with poor incentives.)

Comment by SquirrelInHell on Dynamic Karma & Static Karma · 2018-01-20T13:52:25.884Z · LW · GW
Ultimately the primary constraint on almost any feature on LessWrong is UI complexity, and so there is a very strong prior against any specific passing the very high bar to make it into the final UI

On the low end, you can fit the idea entirely inside of the existing UI, as a new fancy way of calculating voting weights under the hood (and allowing multiple clicks on the voting buttons).

Then, in a rough order of less to more shocking to users:

• showing the user some indication of how many points their one click is currently worth
• showing how many unused "voting points" they still have (privately)
• showing a breakdown of received feedback into positive and negative votes
• some simple configuration that allows changing the default allocation of budget to one click (e.g. how many percent, or pick a fixed value)

And that's probably all you ever need?
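To illustrate the "under the hood" version, here is a hypothetical sketch of budget-limited multi-click voting. The quadratic cost rule is my own illustrative choice (as in quadratic voting), not something the proposal commits to; the class and method names are invented for the example:

```python
class Voter:
    """A voter with a limited budget of voting points (e.g. derived
    from static karma). Repeated clicks on the same target cost
    progressively more, so total influence is bounded by the budget."""

    def __init__(self, budget):
        self.budget = budget   # points available to spend
        self.clicks = {}       # target id -> clicks made so far

    def vote(self, target, direction=1):
        """Register one more click on `target` (direction +1 or -1).
        The n-th click costs n points but adds only 1 unit of weight.
        Returns the marginal weight added, or 0 if budget is exhausted."""
        n = self.clicks.get(target, 0)
        cost = n + 1
        if cost > self.budget:
            return 0
        self.budget -= cost
        self.clicks[target] = n + 1
        return direction
```

The user-facing part stays a plain vote button; the budget only shows up as the marginal weight of each additional click, which is the "fancy way of calculating voting weights" reading of the proposal.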

This in particular limits the degree to which you can force the user to spend a limited number of resources, since it both strongly increases mental overhead (instead of just asking themselves "yay or nay?" after reading a comment, they now need to reason about their limited budget and compare it to alternative options)

This should be much less of an issue if the configuration of this is global and has reasonable defaults. Then it's pretty much reduced to "new fancy way of calculating voting weights", and the users should be fine with just being roughly aware that if they vote a lot or don't post anything on their own, their individual votes will have less weight.

Comment by SquirrelInHell on Dynamic Karma & Static Karma · 2018-01-19T16:57:19.617Z · LW · GW
I'm still not really sure what the root issues you're trying to resolve are. What are examples of cases where you're either worried about the current system specifically failing, or areas where we just don't have anything even trying to handle a particular use-case?

Sure, I can list some examples, but first note that while I agree that examples are useful, focusing on them too much is not a good way in general to think about designing systems.

A good design can preempt issues that you would never have predicted could happen; a bad design will screw you up in similarly unpredictable ways. What you want to look out for is designs which reflect the computational structure of the problem you are trying to solve. If you cannot think about it in these terms, I don't think you'll be persuaded by any specific list of benefits of the proposed system, such as:

• multiple voting is natural, safe, and easy to implement
• reduced room for downvote abuse, spamming and nepotism (see example in the post)
• it's possible to change how static karma translates into voting power without disrupting the ecosystem (because the calculations you change affect the marginal voting power, not the total voting power)
• users can choose their voting strategy (e.g. many low-impact votes or few high-impact ones) without being incentivized to waste time (in the current system, more clicks = more impact)
• moderation power is quantitative, allowing things like partial trust, trial periods and temporary stand-ins without bad feelings afterward (instead of "become moderator" and then "unbecome moderator" we have "I trust you enough to give you 1000 moderation power, no strings attached - let's see what you do with it")
• each user is incentivized to balance downvotes against upvotes (in the current system, the incentive is to downvote everything you don't upvote to double your signal - and this would be stopped only by some artificial intervention?)

Etc. etc. Does it feel like you could generate more pros (and cons) if you set a 5 minute timer? Ultimately, there is nothing I can say to replace you figuring these out.
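To make the "marginal vs. total voting power" point from the list above concrete, here is a minimal toy sketch. The square-root curve, the class names, and the spendable power budget are all illustrative assumptions of mine, not details from the actual proposal:

```python
import math

def marginal_power(karma_before, karma_delta, curve=math.sqrt):
    # Power granted for newly earned karma under the current curve.
    # Already-granted power is never recomputed, so changing `curve`
    # later only affects future accrual - the marginal, not the total.
    return curve(karma_before + karma_delta) - curve(karma_before)

class Voter:
    def __init__(self):
        self.power = 0.0  # accumulated, spendable voting power
        self.karma = 0    # static karma earned so far

    def earn_karma(self, delta, curve=math.sqrt):
        self.power += marginal_power(self.karma, delta, curve)
        self.karma += delta

    def vote(self, strength):
        # The user picks a strength per vote: many weak votes
        # or a few strong ones draw from the same budget.
        spend = min(strength, self.power)
        self.power -= spend
        return spend
```

Under this sketch, swapping in a different curve (say, linear instead of square-root) changes only the power granted for karma earned afterward, which is the property that lets the formula evolve without disrupting the existing ecosystem.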

Comment by SquirrelInHell on Singularity Mindset · 2018-01-19T09:02:06.643Z · LW · GW

This is very well done :) Thanks for the Terence Tao link - it's amusing that he describes exactly the same meta-level observation which I expressed in this post.

Comment by SquirrelInHell on The Solitaire Principle: Game Theory for One · 2018-01-17T11:08:33.447Z · LW · GW
Classes of interpersonal problems often translate into classes of intrapersonal problems, and the tools to solve them are broadly similar.

This is true, but it seems you don't have any ideas about why it's true. I offer the following theory: if you are designing brains to deal with social situations, it is very adaptive to design them in a way that internally mirrors some of the structure that arises in social environments. This makes the computations performed by the brain more directly applicable to social life, in several interesting ways (e.g. increased ability to take/simulate various points of view, simulate and exploit adversarial situations, operate under mismatched/fake sets of pretenses etc.).

Comment by SquirrelInHell on 1/16/2018 Update - Parent Comments, and Nearterm Horizon · 2018-01-17T10:56:16.395Z · LW · GW
We should expect that anyone should be able to get over 1000 karma if they hang around the site long enough.

I second this worry. Historically, karma on LW has been a very good indicator of hours of life burned on the site, and a somewhat worse indicator of other things.

Comment by SquirrelInHell on Plan to Be Lucky · 2018-01-16T18:54:40.347Z · LW · GW

Excellent content, would be even better in a shorter post.

As a 5-minute exercise, I'm coming up with some more examples:

• assume that you can make progress on AI alignment
• or at least, assume that there is some way in which you can contribute to saving the world
• run fast enough to win the race, even if it means you won't make it to the finish
• assume you will earn enough money to survive while doing things you care about
• assume the brain of the person who stopped breathing is still alive
• assume your epistemology is good enough to ignore "common sense" and not be crazy
• assume your readers like your writing enough so that there's really nothing you need to prove to anyone anymore
• assume you don't need to prove your worth to other people
• assume your best ideas will be understood

Comment by SquirrelInHell on Circumambulation · 2018-01-16T10:16:14.363Z · LW · GW

Obvious note: this sequence of posts is by itself a good example of what circumambulation looks like in practice.

Comment by SquirrelInHell on Is death bad? · 2018-01-15T13:36:18.540Z · LW · GW

Well, if ageing were slowed proportionally, and the world were roughly unchanged from its present condition, I'd expect large utility gains (in total subjective QoL) from prioritizing longer lives, with diminishing returns to this only in the late 100s or possibly 1000s. But I think both assumptions are extremely unlikely.

Comment by SquirrelInHell on What Is "About" About? · 2018-01-14T19:11:26.097Z · LW · GW

I think at this point it's fair to say that you have started repeating yourself, and your recent posts strongly evoke the "man with a hammer" syndrome. Yes, your basic insight describes a real aspect of some part of reality. It's cool, we got it. But it's not the only aspect, and (I think) also not the most interesting one. After three or four posts on the same topic, it might be worth looking for new material to process, and other insights to find.