Posts

Pair Debugging and applied rationality 2021-09-20T08:37:53.894Z
Flirting with postmodernism 2021-08-28T18:55:54.389Z
Pair Debugging and applied rationality 2021-08-26T11:37:10.872Z
Balance motivation and discipline 2021-01-07T12:00:34.942Z
Aphorisms on motivation 2020-12-02T08:55:17.065Z
Under which conditions are human beings most likely to be altruistically motivated? 2020-06-01T19:16:33.212Z
Meetup #46 - Authentic Relating games 2020-02-24T21:29:14.207Z
Post-meetup social 2020-02-04T08:43:03.669Z
Meetup #45 - Implementation Intentions 2020-02-04T08:26:31.529Z
Meetup #44 - Murphyjitsu 2020-01-21T07:37:00.317Z
"human connection" as collaborative epistemics 2020-01-12T23:16:26.487Z
Meetup #43 - New techniques 2020-01-07T07:59:31.797Z
Moloch feeds on opportunity 2019-12-12T21:05:24.030Z
Meetup #42 - Ideological Turing Test 2019-12-09T07:36:06.469Z
RAISE post-mortem 2019-11-24T16:19:05.163Z
Meetup #41 - Double Crux 2019-11-24T16:09:53.722Z
Steelmanning social justice 2019-11-17T11:52:43.771Z
Meetup #40 - Street epistemology 2019-11-08T16:37:27.873Z
Toon Alfrink's sketchpad 2019-10-31T14:56:10.205Z
Meetup #39 - Rejection game 2019-10-28T20:36:54.704Z
The first step of rationality 2019-09-29T12:01:39.932Z
Is competition good? 2019-09-10T14:01:56.297Z
Have you lost your purpose? 2019-05-30T22:35:38.295Z
What is a good moment to start writing? 2019-05-29T21:47:50.454Z
What features of people do you know of that might predict academic success? 2019-05-10T18:16:59.922Z
Experimental Open Thread April 2019: Socratic method 2019-04-01T01:29:00.664Z
Open Thread April 2019 2019-04-01T01:14:08.567Z
RAISE is launching their MVP 2019-02-26T11:45:53.647Z
What makes a good culture? 2019-02-05T13:31:57.792Z
The housekeeper 2018-12-03T20:01:57.618Z
We can all be high status 2018-10-10T16:54:19.047Z
Osmosis learning: a crucial consideration for the craft 2018-07-10T15:40:12.193Z
Open Thread July 2018 2018-07-10T14:51:12.351Z
RAISE is looking for full-time content developers 2018-07-09T17:01:38.401Z
A friendly reminder of the mission 2018-06-05T00:36:38.869Z
The league of Rationalists 2018-05-23T11:55:14.248Z
Fundamentals of Formalisation level 2: Basic Set Theory 2018-05-18T17:21:30.969Z
The reverse job 2018-05-13T13:55:35.573Z
Fundamentals of Formalisation level 1: Basic Logic 2018-05-04T13:01:50.998Z
Soon: a weekly AI Safety prerequisites module on LessWrong 2018-04-30T13:23:15.136Z
Give praise 2018-04-29T21:00:42.003Z
Raising funds to establish a new AI Safety charity 2018-03-17T00:09:30.843Z
Welcome to LW Netherlands 2018-03-16T10:13:08.360Z
Updates from Amsterdam 2017-12-16T22:14:48.767Z
Project proposal: Rationality Cookbook 2017-11-21T14:34:01.537Z
In defense of common-sense tribalism 2017-11-02T08:43:11.715Z
We need a better theory of happiness and suffering 2017-07-04T20:14:15.539Z
Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) 2017-06-15T18:55:06.306Z
Meetup : Meetup 17 - Comfort Zone Expansion (CoZE) 2017-05-10T09:59:48.318Z
Meetup : Meetup 15 (for real this time) - Trigger Action Planning 2017-04-12T12:52:40.546Z

Comments

Comment by toonalfrink on Amsterdam, Netherlands – ACX Meetups Everywhere 2021 · 2021-09-17T14:37:04.050Z · LW · GW

Yes, definitely

Comment by toonalfrink on The Best Software For Every Need · 2021-09-13T15:33:36.767Z · LW · GW

Looks like an upgrade. One problem is that it's not 100% airtight, especially on a phone, but for some use cases that's fine.
For my phone I use Scalefusion, by the way

Comment by toonalfrink on The Best Software For Every Need · 2021-09-10T13:47:48.527Z · LW · GW

SelfControl, for abstaining from visiting websites without expending your precious willpower. If you're a Mac user.

Sorry for not following the rules, but I think discovering a whole category of software (like terminal multiplexers) is often higher value than discovering the best in that category.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2021-08-29T10:06:31.579Z · LW · GW

If you’re struggling with confidence in the LW crowd, I recommend aiming for doing one thing well, instead of trying too hard to prevent criticism. You will inevitably get criticism and it’s better to embrace that.

Comment by toonalfrink on Flirting with postmodernism · 2021-08-28T20:59:25.654Z · LW · GW

I am admittedly working off the definition used by its critics, including Wilber's definition, which includes, in his words:
- constructivism (the world is not just a perception but an interpretation)
- contextualism (all truths are context-dependent, and contexts are boundless)
- integral-aperspectivism (no context is finally privileged, so an integral view should include multiple perspectives; pluralism; multi-culturalism)

Do you think this definition is missing the point? If yes, where do you think I should be looking for a better one?

Comment by toonalfrink on Pair Debugging and applied rationality · 2021-08-26T16:50:16.931Z · LW · GW

I'm curious where this complaint comes from. Is their stuff often misrepresented?

Changed it in any case. Is this what you meant?

Comment by toonalfrink on Fake Selfishness · 2021-08-26T13:48:06.144Z · LW · GW

My answer to your first mischievous question tends to be: "if you (also) identify as selfish, this will make you more predictable and thereby more trustworthy".  

I don't give two shits about one guy's contribution to the economy. My selfish incentives are purely local.

Besides, it's always better to cooperate with a rich guy than exploit a poor one.

But of course there's no need to convince you to be something you already are

Comment by toonalfrink on Toon Alfrink's sketchpad · 2021-08-24T09:24:15.897Z · LW · GW

But could you explain why you think wireheading is bad?
Besides, I don't think the comparison is completely justified. Enlightenment isn't just claimed to increase happiness, but also intelligence and morality.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2021-08-24T09:21:39.947Z · LW · GW

I don't think "if you do this you'll be super happy (while still alive)" is comparable to "if you do this you'll be super happy (after you die)". The former is testable, and I have close friends who have already fully verified it for themselves. I've also noticed in myself a superlinear relation between meditation time and likelihood to be in a state of bliss, and I have no reason to think this relation won't hold when I meditate even more.

The Buddha also urged people to go and verify his claims themselves. It seems that the mystic (good) part of Buddhism is much more prominent than the organised-religion (bad) part, compared to Christianity.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2021-08-05T18:03:26.857Z · LW · GW

Cause X candidate:

Buddhists claim that they can put brains in a global maximum of happiness, called enlightenment. Assuming that EA aims to maximize happiness plain and simple, this claim should be taken seriously. It currently takes decades for most people to reach an enlightened state. If some sort of medical intervention can reduce this to mere months, this might drive mass adoption and create a huge amount of utility.

Comment by toonalfrink on Why I Work on Ads · 2021-05-23T22:31:52.964Z · LW · GW

Relevant: the non-adversarial principle of AI alignment

Comment by toonalfrink on Sabien on "work-life" balance · 2021-05-23T21:38:02.490Z · LW · GW

Whereas if you're good at your work and you think that your job is important, there's an intervening layer or three—I'm doing X because it unblocks Y, and that will lead to Z, and Z is good for the world in ways I care about, and also it earns me $ and I can spend $ on stuff...


Yes, initially there might be a few layers, but there's also the experience of being really good at what you do, being in flow, at which point Y and Z just kind of dissolve into X, making X feel valuable in itself, like jumping on a trampoline.

Seems like this friend wants to be in this state by default. If X inherits its value from Z through an intellectual link, an S2-level association, the motivation to do X just isn't as strong as when the value is directly hardcoded into X itself on the S1 level. "Why was I filling in these forms again? Something with solving global coordination problems? Whatever, it's just my Duty as a Good Citizen." or "Whatever, I can do it faster than Greg".

But there is a problem: the more the value is a property of X, the harder it will be to detach from it when X suddenly stops being instrumental to Z. Here we find ourselves in the world of dogma and essentialism and lost purposes.

So we're looking at a fundamental dilemma: do I maintain the most accurate model by always deriving my motivation from first principles, or do I declare the daily activities of my job to be intrinsically valuable?

In practice I think we tend to go back and forth between these extremes. Why do we need breaks, anyway? Maybe it's to zoom out a bit and rederive our utility function.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2021-05-09T00:32:34.741Z · LW · GW

A thought experiment: would you A) murder 100 babies or B) murder 100 babies? You have to choose!

Comment by toonalfrink on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T21:48:29.144Z · LW · GW

Sidestepping the politics here: I've personally found that avoiding (super)stimuli for a week or so, either by not using any electronic devices or by going on a meditation retreat, tends to be extremely effective in increasing my ability to regulate my emotions. Semi-permanently.

I have no substitute for it, it's my panacea against cognitive dissonance and mental issues of any form. This makes me wonder: why aren't we focusing more on this from an applied rationality point of view? 

Comment by toonalfrink on Forcing yourself to keep your identity small is self-harm · 2021-04-03T18:31:43.367Z · LW · GW

This seems to be a fully general counterargument against any kind of advice.
As in: "Don't say 'do X' because I might want to do not X which will give me cognitive dissonance which is bad"

You seem to essentially be affirming the Zen concept that any kind of "do X" will imply that X is better than not X, i.e. a dualistic thought pattern, which is the precondition for suffering.

But besides that idea I don't really see how this post adds anything. 

Not to mention that identity tends to already be an instance of "X is better than not X". Paul Graham is saying "not (X is better than not X) is better than (X is better than not X)", and you just seem to be saying "not (not (X is better than not X) is better than (X is better than not X)) is better than (not (X is better than not X) is better than (X is better than not X))".

At that point you're running in circles and the only way out is to say mu and put your attention on something else.

Comment by toonalfrink on RSS Feeds are fixed and should be properly functional this time · 2021-03-14T19:37:54.696Z · LW · GW

Since this is the first Google result and seems out of date, how do we get the RSS link nowadays?

Comment by toonalfrink on Toon Alfrink's sketchpad · 2021-03-03T13:42:10.811Z · LW · GW

I may have finally figured out the use of crypto.

It's not currency per se, but the essential use case of crypto seems to be to automate the third party.

This "third party" can be many things. It can be a securities dealer or broker. It can be a notary. It can be a judge that is practicing contract law.

Whenever there is a third party that somehow allows coordination to take place, and the particular case doesn't require anything but mechanical work, then crypto can do it better.

A securities dealer or broker doesn't beat a protocol that matches buyers and sellers automatically. A notary doesn't beat a public ledger. A judge in contract law doesn't beat an automatically executed verdict, previously agreed upon in code.

(Like damn, imagine contracts that provably have only one interpretation. Ain't that gonna put lawyers out of business?)

And maybe a bank doesn't beat peer to peer transactions, with the caveat that central banks are pretty competent institutions, and if anyone will win that race it is them. While I'm optimistic about cryptocurrency, I'm still skeptical about private currency.
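
The "automating the third party" idea can be illustrated with a toy escrow, where the release rule is agreed in code up front. This is a hypothetical Python sketch, not any real smart-contract platform or API:

```python
# Toy sketch of "automating the third party" (hypothetical, not a real
# smart-contract API): an escrow whose release rule is fixed in code up
# front, so no human intermediary has to interpret the agreement.

class Escrow:
    def __init__(self, amount, release_condition):
        self.amount = amount
        self.release_condition = release_condition  # agreed upon in advance
        self.released = False

    def settle(self, evidence):
        # Mechanically apply the pre-agreed rule; nothing left to litigate.
        if self.release_condition(evidence):
            self.released = True
        return self.released

# Buyer and seller agree up front: funds release once delivery is confirmed.
escrow = Escrow(100, lambda e: e.get("delivered", False))
locked = escrow.settle({"delivered": False})    # rule not met: stays locked
released = escrow.settle({"delivered": True})   # rule met: funds release
```

The point is only that the verdict is mechanical: the same rule, applied the same way, for every party, with nothing left for a judge or notary to interpret.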

Comment by toonalfrink on Ego syntonic thoughts and values · 2021-02-06T15:47:53.258Z · LW · GW

I was in this "narcissist mini-cycle" for many years. Many google searches and no luck. I can't believe that I finally found someone who recognizes it. Thank you so much.

fwiw, what got me out of it was to attend a Zen temple for 3 months or so. This didn't make me less narcissistic, but somehow gave me the stamina to actually achieve something that befit my inflated expectations, and now I just refer back to those achievements to quell my need for greatness. At least while I work on lowering my expectations.

Comment by toonalfrink on Babies and Bunnies: A Caution About Evo-Psych · 2021-02-06T14:21:41.182Z · LW · GW

It does not, but consider 2 adaptations:
A: responds to babies and more strongly to bunnies
B: responds to babies only

B would seem more adaptive. Why didn't humans evolve it?

Plausible explanation: A is simpler and therefore more likely to result from a random mutation.
Is anyone doing research into which kinds of adaptations are more likely to appear like this?

Comment by toonalfrink on When Money Is Abundant, Knowledge Is The Real Wealth · 2021-01-29T19:15:52.101Z · LW · GW

Can you come up with an example that isn't AI? Most fields aren't rife with infohazards, and 20% certainty of funding the best research will just divide your impact by a factor of 5, which could still be good enough if you've got millions.

For what it's worth, given the scenario that you've at least got enough to fund multiple AI researchers and your goal is purely to fix AI, I concede your point.

Comment by toonalfrink on When Money Is Abundant, Knowledge Is The Real Wealth · 2021-01-29T11:04:02.610Z · LW · GW

I don't like this post because it ignores that instead of yachts you can simply buy knowledge for money. Plenty of research that isn't happening because it isn't being funded. 

Comment by toonalfrink on Great minds might not think alike · 2021-01-02T23:48:18.176Z · LW · GW

A Shor-to-Constance translation would be lossy because the latter language is not as expressive or precise as the former

Comment by toonalfrink on Great minds might not think alike · 2021-01-02T23:46:53.370Z · LW · GW

Great work. 

I wonder just how far this concept can be stretched. Is focusing a translation from the part of you that thinks in feelings to the part of you that thinks in words? If you're translating some philosophical idea into math, are you just translating from the language of one culture to the language of another?

And if so, it strikes me that some languages are more effective than others. Constance may have had better ideas, but if Shor knew the same stuff as Constance (in his own language) perhaps he would have done better. Shor's language seems to be more expressive, precise and transferable.

So:

  • In a given context, which language is "best"?
  • In a given context, which languages have the best ideas/data?
  • Where might we find large opportunities for arbitrage?

For example, I think we should be translating spiritual ideas into the language of cognitive science and/or economics. Any others?

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-12-25T01:36:58.267Z · LW · GW

Personalized mythic-mode rendition of Goodhart's law:

"Everyone wants to be a powerful uncompromising force for good, but spill a little good and you become a powerful uncompromising force for evil"

Comment by toonalfrink on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2020-12-17T12:28:45.897Z · LW · GW

The parent-child model is my cornerstone of healthy emotional processing. I'd like to add that a child often doesn't need much more than your attention. This is one analogy of why meditation works: you just sit down for a while and you just listen

The monks in my local monastery often quip about "sitting in a cave for 30 years", which is their suggested treatment for someone who is particularly deluded. This implies a model of emotional processing which I cannot stress enough: you can only get in the way. Take all distractions away from someone and they will asymptotically move towards healing. When they temporarily don't, it's only because they're trying to do something, thereby moving away from just listening. They'll get better if they give up.

Another supporting quote from my local Roshi: "we try to make this place as boring as possible". When you get bored, the only interesting stuff left to do is to move your attention inward. As long as there is no external stimulus, you cannot keep your thoughts going forever. By sheer ennui you'll finally start listening to those kids, which is all you need to do.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-11-24T13:01:44.268Z · LW · GW

Look, if you can't appreciate the idea because you don't like its delivery, you're throwing away a lot of information.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-11-16T20:37:27.723Z · LW · GW

It's supposed to read like "this idea is highly unpolished"

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-11-16T19:22:16.205Z · LW · GW

Here's an idea: we hold the Ideological Turing Test (ITT) world championship. Candidates compete to pass for as broad a range of views as possible.

Points awarded for passing a test are commensurate with the number of people who subscribe to the view. You can subscribe to a bunch of them at once.

The awarding of failures and passes is done anonymously. Points can be awarded partially, according to what % of judges give a pass.

The winner is made president (or something)
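
The scoring rule above can be sketched in a few lines (all names hypothetical; assuming one pass/fail verdict per anonymous judge):

```python
# Sketch of the scoring rule above (all names hypothetical): a candidate's
# score for a view is the number of subscribers to that view, weighted by
# the fraction of anonymous judges who marked the imitation as a pass.

def itt_score(view_subscribers, judge_verdicts):
    # view_subscribers: {view: number of people who subscribe to it}
    # judge_verdicts:   {view: list of booleans, one per anonymous judge}
    total = 0.0
    for view, verdicts in judge_verdicts.items():
        if not verdicts:
            continue
        pass_fraction = sum(verdicts) / len(verdicts)  # partial credit
        total += view_subscribers.get(view, 0) * pass_fraction
    return total

# A candidate attempts two views; one judge out of three fails the first.
score = itt_score(
    {"view_a": 1000, "view_b": 400},
    {"view_a": [True, True, False], "view_b": [True, True, True]},
)
```

Weighting by subscriber count is what rewards passing for widely held views rather than farming obscure ones.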

Comment by toonalfrink on Sunny's Shortform · 2020-07-30T08:32:56.623Z · LW · GW

It might be hard to take a normative stance, but if culture 1 makes you feel better AND leads to better results AND helps people individuate and makes adults out of them, then maybe it's just, y'know, better. Not "better" in the naive mistake-theorist assumption that there is such a thing as a moral truth, but "better" in the correct conflict-theorist assumption that it just suits you and me and we will exert our power to make it more widely adopted, for the sake of us and our enlightened ideals.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-07-30T07:47:20.594Z · LW · GW

In the spirit of sharing what worked: goal factoring a girlfriend, I found that I could get better and cheaper coverage of my needs by hiring escorts and dating someone with less sexual but more emotional compatibility.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-07-05T14:36:45.995Z · LW · GW

Case study: A simple algorithm for fixing motivation

So here I was, trying to read through an online course to learn about cloud computing, but I wasn't really absorbing any of it. No motivation.

Motives are a chain, ending in a terminal goal. Lack of motivation meant that my System 1 did not believe what I was doing would lead to achieving any terminal goal. The chain was broken.

So I traversed the chain to see which link was broken.

  • Why was I doing the online course? Because I want to become better at my job.
    • Do I still think doing the online course will make me better at my job? Yes I do.
    • Do I want to get better at my job? Nah, doesn't spark joy.
  • Why do I want to get better at my job? Because I want to get promoted.
    • Do I still think doing better will make me get promoted? Yes I do.
    • Do I want to get promoted? Nah, doesn't spark joy.
  • Why do I want to get promoted? Because (among other things) I want more influence on my environment, for example by having more money.
    • Do I still think promotion will give me more influence? Yes I do
    • Do I want more influence? Nah
  • Why do I want more influence (via money)? Because (among other things) I want to buy a house and do meetups, and live with close friends at the center of a vibrant community that helps people
    • Do I think more money will get me this house? Yes I do
    • Do I want to live with close friends at the center of a vibrant community that helps people? Well, usually yes, but today I kind of just want to go to the beach with my gf, and decompress.
  • Well okay, but most days you do want this thing.
  • Shit you're right, I do want to do this online course

And motivation was restored. Suddenly, I feel invigorated. To do the course, and to write this post.
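
The chain-check above can be written down as a short loop. This is a hypothetical sketch; the chain and the two checks are whatever your introspection supplies:

```python
# Hypothetical sketch of the traversal above: walk the chain of motives
# from the immediate task toward the terminal goal. At each link, check
# (a) whether you still believe the link holds and (b) whether the higher
# goal "sparks joy". Stop at the first felt goal or the first broken belief.

def diagnose(chain, still_believe, sparks_joy):
    # chain: goals ordered from immediate task to terminal goal
    for lower, higher in zip(chain, chain[1:]):
        if not still_believe(lower, higher):
            return f"belief broken: {lower} -> {higher}"
        if sparks_joy(higher):
            return f"chain intact up to felt goal: {higher}"
    return "no link in the chain sparks joy today"

# The example from this comment: every belief holds, and only the terminal
# goal is directly felt as valuable.
result = diagnose(
    ["online course", "better at my job", "promotion",
     "more influence", "house and community"],
    still_believe=lambda lower, higher: True,
    sparks_joy=lambda goal: goal == "house and community",
)
```

A "belief broken" result means updating the plan; "no link sparks joy" means the terminal goal itself needs revisiting (or, as above, you just need a day at the beach).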

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-06-28T20:58:37.923Z · LW · GW

Question for the Kegan levels folks: I've noticed that I tend to regress to level 3 if I enter new environments that I don't fully understand yet, and that this tends to cause mental issues because I don't always have the affirmative social environment that level 3 needs. Do you relate? How do you deal with this?

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-06-28T19:34:42.153Z · LW · GW

As someone who never came across religion before adulthood, I've been trying to figure it out. Some of its claims seem pretty damn nonsensical, and yet some of its adherents seem pretty damn well-adjusted and happy. The latter means there's gotta be some value in there.

The most important takeaway so far is that religious claims make much more sense if you interpret them as phenomenological claims. Claims about the mind. When buddhists talk about the 6 worlds, they talk about 6 states of mood. When christians talk about a covenant with god, they talk about sticking to some kind of mindset no matter what.

Back when this stuff was written, people didn't seem to distinguish between objective reality and subjective experience. The former is a modern invention. Either that, or this nuance has been lost in translation over the centuries.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-06-11T07:23:04.240Z · LW · GW

As for being on ibogaine, a high dose isn't fun for sure, but microdoses are close to neutral and their therapeutic value makes them net positive

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-06-11T07:13:21.839Z · LW · GW

Have you tried opiates? You don't need to be in pain for it to make you feel great

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-06-04T17:27:23.344Z · LW · GW

Ibogaine seems to reset opiate withdrawal. There are many stories of people with 20-year-old heroin addictions being cured within one session.

If this is true, and there are no drawbacks, then we basically have access to wireheading. A happiness silver bullet. It would be the hack of the century. Distributing ibogaine + opiates would be the best known mental health intervention by orders of magnitude.

Of course, that's only if there are no unforeseen caveats. Still, why isn't everybody talking about this?

Comment by toonalfrink on Paul Crowley's Shortform · 2020-06-02T20:45:17.478Z · LW · GW

Did Dominic Cummings in fact try a "Less Wrong approach" to policy making? If so, how did it fail, and how can we learn from it? (if not, ignore this)

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-06-02T20:42:41.093Z · LW · GW

I did all the epistemic virtue. I rid myself of my ingroup bias. I ventured out on my own. I generated independent answers to everything. I went and understood the outgroup. I immersed myself in lots of cultures that win at something, and I've found useful extracts everywhere.

And now I'm alone. I don't fully relate to anyone in how I see the world, and it feels like the inferential distance between me and everyone else is ever increasing. I've lost motivation for deep friendships, it just doesn't seem compatible with learning new things about the world. That sense of belonging I got from LessWrong is gone too. There are a few things that LW/EA just doesn't understand well enough, and I haven't been able to get it across.

I don't think I can bridge this gap. Even if I can put things to words, they're too provisional and complicated to be worth delving into. Most of it isn't directly actionable. I can't really prove things yet.

I've considered going back. Is lonely dissent worth it? Is there an end to this tunnel?

Comment by toonalfrink on Is competition good? · 2020-05-25T20:19:07.668Z · LW · GW

I don't recall; this is one of those concepts that you kind of assemble out of a bunch of conversations with people who already presuppose it

Comment by toonalfrink on What Money Cannot Buy · 2020-05-21T22:40:17.182Z · LW · GW

Here's another: probing into their argument structure a bit and checking if they can keep it from collapsing under its own weight.

https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies

Comment by toonalfrink on What Money Cannot Buy · 2020-05-21T22:36:14.371Z · LW · GW

Probably the skill of discerning skill would be easier to learn than... every single skill you're trying to discern.

Comment by toonalfrink on Conflict vs. mistake in non-zero-sum games · 2020-05-21T22:11:33.722Z · LW · GW
The outgroup is evil, not negotiating in good faith, and it's an error to give them an inch. Conflict theory is the correct one for this decision.

Which outgroup? Which decision? Are you saying this is universally true?

Comment by toonalfrink on "human connection" as collaborative epistemics · 2020-01-14T19:26:35.164Z · LW · GW

Yes

Comment by toonalfrink on "human connection" as collaborative epistemics · 2020-01-13T08:06:52.877Z · LW · GW

Forgive me for stating things more strongly than I mean them. It’s a bad habit of mine.

I’m coming from the assumption that people are much more like Vulcans than we give them credit for. Feelings are optimizers. People who do things that aren’t in line with their stated goals aren’t always biased. In many cases they misstate their goals but don’t actually fail to achieve them.

See my last shortform for more on this

Comment by toonalfrink on Toon Alfrink's sketchpad · 2020-01-13T08:01:15.798Z · LW · GW

So here's two extremes. One is that human beings are a complete lookup table. The other one is that human beings are perfect agents with just one goal. Most likely both are somewhat true. We have subagents that are more like the latter, and subsystems more like the former.

But the emphasis on "we're just a bunch of hardcoded heuristics" is making us stop looking for agency where there is in fact agency. Take for example romantic feelings. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether you'll fall in and out of love with someone based on some criteria, like whether they're compatible with your self-narrative and whether their opinions and interests align with yours, etc. The same is true for many intuitions that we often tend to dismiss as just "my brain" or "neurotransmitter xyz" or "some knee-jerk reaction".

There tends to be a layer of agency in these things. A set of conditions that makes these things fire off, or not fire off. If we want to influence them, we should be looking for the levers, instead of just accepting these things as a given.

So sure, we're godshatter, but the shards are larger than we give them credit for.

Comment by toonalfrink on Moloch feeds on opportunity · 2019-12-13T16:09:29.844Z · LW · GW
I am aware that confessing to this in most places would be seen as a huge social faux pas, I'm hoping LW will be more understanding.

You're good. You're just confessing something that is true for most of us anyway.

Where I have a big disagreement is in the lesson to take from this. Your argument is that we should essentially try to turn off status as a motivator. I would suggest it would be wiser to try to better align status motivations with the things we actually value.

Up to a point. It is certainly true that status motivations have led to great things, and I'm personally also someone who is highly status-driven but manages to mostly align that drive with at least neutral things, but there's more.

I struggle hugely with akrasia. If I didn't have some external motivation then I'd probably just lie in bed all day watching tv.

The other great humanist psychologist besides Maslow was Carl Rogers. His thinking can be seen as an expansion on this "subagent motivation is perceived opportunity" idea. He proposed an ideal vs. an actual self. The ideal self is what you imagine you could and should be. Your actual self is what you imagine you are. The difference between ideal self and actual self, he said, was the cause of suffering. I believe that Buddhism backs this up too.

I'd like to expand on that and say that the difference between ideal self (which seems like a broader class of things that includes perceived opportunity but also social standards, the conditions you're used to, biological hardwiring, etc) and your actual self is the thing that activates your subagents. The bigger the difference, the more your subagents are activated by this difference.

Furthermore, the level of activation of your subagents causes cognitive dissonance (a.k.a. akrasia), i.e. one or multiple of your subagents not getting what they want even though they're activated.

And THAT is my slightly-more-gears-level model of where suffering comes from.

So here's what I think is actually going on with you: you're torn between multiple motivations until the status subagent comes along and pulls you out of your deadlock because it's stronger than everything else. So now there's less cognitive dissonance and you're happy that this status incentive came along. It cut your gordian knot. However, I think it's also possible to resolve this dissonance in a more constructive way. I.e. untie the knot. In some sense the status incentive pushes you into a local optimum.

I realise that I'm probably hard to follow. There's too much to unpack here. I should probably try and write a sequence.


Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-12-12T21:08:43.679Z · LW · GW

Right, right. So there is a correlation.

I'll just say that there is no reason to believe that this correlation is very strong.

I once won a Mario Kart tournament without feeling my hands.

Comment by toonalfrink on Give praise · 2019-12-12T20:03:57.273Z · LW · GW
People generally only discuss 'status' when they're feeling a lack of it

While this has been true for other posts that I wrote about the subject, this post was actually written from a very peaceful, happy, almost sage-like state of mind, so if you read it that way you'll get closer to what I was trying to say :)

Comment by toonalfrink on Give praise · 2019-12-12T19:58:00.531Z · LW · GW

I appreciate your review.

Most of your review assumes that my intent was to promote praise regardless of honesty, but quite the opposite is true. My intent was for people to pause, take a breath, think for a few moments what good things others are doing, and then thank them for it, but only if they felt compelled to do so.

Or I'll put it this way: it's not about pretending to like things, it's about putting more attention to the things about others that you already like. It's about gratefulness, good faith and recognition. It's about validating those that are already on the right track, to embolden them and secure them.

And this works to the extent that it is genuine. If you don't feel what you say, people will notice and discard your opinion. Congruency is an obvious first step that I didn't include in the post because I assumed it to be obvious.

But of course not getting that point across is all on me. I suppose I could have written a better post.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-12-12T19:33:04.345Z · LW · GW

I have gripes with EA's that try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.

It seems quite obvious to me that these people equate their ability to empathize with an animal with the ability for the animal to be conscious, and it seems quite obvious to me that this is a case of mind projection fallacy. Empathy is just a simulation. You can't actually see another mind.

If you're going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don't look at animal behavior at all. We have absolutely no reason to believe that behavior correlates with consciousness. That's just your empathy getting in the way. The same empathy that attributes feelings to stuffed animals.