Posts

Relational Agency: Consistently Reaching Out 2024-03-13T14:34:12.780Z
Being Interested in Other People 2024-03-07T10:13:48.339Z
A Bridge Between Utilitarianism & Stoicism 2024-02-13T22:46:06.388Z
Arrogance and People Pleasing 2024-02-06T18:43:09.120Z
Simple Appreciations 2024-01-23T16:23:52.001Z
Flexibility and the Singularity 2024-01-18T15:29:02.727Z
Dealing with Awkwardness 2024-01-16T12:32:16.997Z
Compensating for Life Biases 2024-01-09T14:39:14.229Z
The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit 2024-01-05T18:27:01.769Z
A Two-Part System for Practical Self-Care 2023-07-08T21:23:42.987Z
The Echo Principle 2022-11-16T20:09:13.764Z
Strategy of Inner Conflict 2022-11-15T19:38:02.781Z
Non-coercive motivation for alignment research? 2022-03-08T20:50:22.706Z
The Round Table Manifesto 2022-03-02T11:42:24.847Z
2 (naive?) ideas for alignment 2022-02-20T19:01:31.379Z

Comments

Comment by Jonathan Moregård (JonathanMoregard) on Simple Appreciations · 2024-02-12T11:19:58.316Z · LW · GW

I get where you're coming from and appreciate you "rounding off" rather than branching out :)

I wrote a post on "inside-out identity", here: https://honestliving.substack.com/p/inside-out-identity

Also, I only post some of my writing on LessWrong, so if you're interested, I can recommend subscribing to my Substack :)

Comment by Jonathan Moregård (JonathanMoregard) on Simple Appreciations · 2024-02-09T19:06:29.459Z · LW · GW

"in case it’s a form of self-defense, I’d like to warn against it."

Nope! It's a conscious decision: I challenge myself and discover things I've been avoiding (hiding from others -> hiding from self). It's a way to step into my power.

"If you’re watching a movie with a group of people and you make a sound to break the immersion, you’ve been rude. It’s the same with social reality. The fear of being exposed/seen through is similar to the fear of being judged. Not looking too closely is good manners."

It's complicated! I tend to break it in interesting ways, with people who enjoy creative reframings. I know the power/joy of narratives, and try to do this in ways that serve the group. It's hard to put into words, but people who are usually "stuck" in social reality express surprise at feeling safe enough to open up, and seem happy enough.

"If I “see through” somebody, it’s only to compliment them. I try not to notice their flaws too much. This helps them to relax."

I almost never judge. I've practised nonviolent communication, creating "mental handles" for my judgements. When I start judging someone, I relate to my judgement as something occurring in me, rather than projecting it on the other person.

I also don't think of people's actions as good or bad. Instead, I try to understand why they act as they do. Some actions are untrained/unskillful.

At the same time, I'm very selective with who I hang out with :)

"I hope you are allowing yourself to be human, to not always be correct, moral, and objective. That you allow yourself immersion in life, rather than a birds-eye-perspective which keeps you permanently disillusioned. Perhaps this is the anxiety-inducing self-consciousness you’re avoiding? If so, no problem!"

I'm not improving my moral character because I think I should. I do it because I enjoy progress and challenge. Virtue is the sole good ;)

I feel generally happy and life feels meaningful. It feels more meaningful the more I learn about it.

Some of my writing is on the wilder side, exploring dominance dynamics, tantra and similar. I'm not at risk of being morally inhibited, and tend to value (virtue) ethics over inhibiting norms/morals.

"But I assume you know how Slate Star Codex got shut down despite having high ethical standards? The closer one is to public opinion, the less they can get away with."

I don't see the danger. I'm open with my family and friends - no blackmail leverage. I keep away from culture war stuff, writing for an advanced audience. I am independently wealthy, enough to semi-retire. I earn money by facilitating philosophical inquiry - no boss to fire me.

At this point, I'd rather not live in fear. I'm as safe as it gets, and want to shift the Overton window. Re: Slate Star Codex - it seems to be going well for Scott.


P.S.: It's interesting to reflect with you, but this is getting a tad long for my taste, so I'll try to stop at this point. If you are curious about anything and would like me to write about it, I'm open to suggestions.

Comment by Jonathan Moregård (JonathanMoregard) on Simple Appreciations · 2024-02-08T22:05:55.359Z · LW · GW

There are a lot of things about my social behaviour that are confusing.

I engage in radical honesty, trying to express what is going on in my head as transparently as possible. I have not been in a fight/argument for 8 years.

People have said it's pleasant to talk to me - even though I tend to express disagreement, even when I'm mostly aligned with the person I'm talking to.

I break all kinds of rules. My go-to approach for getting to know strangers is:

  1. ask them to join me in 1on1 conversation
  2. open up by saying: "I have this question I like asking people to get to know them. Are you open to trying it?" -> "yes" -> "what's important to you?"

At the same time, people consistently say they feel safe with me and express gratitude (with one memorable exception).

And it's not all in my head. I keep getting invited to amazing places/communities. I have an easy time landing jobs. I bootstrapped a philosophical guidance practice over a few months, and have recurring, happy, paying clients.

I think there are some keys to it:

  • I work really hard on virtue/being a good person instead of just signalling
  • I've worked on communication A LOT, including various intersubjective communication practices (circling etc), nonviolent communication, authentic relating
  • I habitually take the kinds of initiatives that lead to high status in groups
  • I am generally successful money-wise, have high intelligence, and am not part of a marginalized group, so I think I have a lot of leeway.
  • I hang out with people that are far from normative (Burning Man extended communities)

From a signalling point of view, I'm taking the risk of being seen as cringe, while expressing something positive in a skilled way so as not to elicit threat responses. This ends up being a strong signal since:

  • I take a risk (being seen as cringe), signalling that I have enough social capital not to fear the risk of judgement
  • I do it in a calibrated way, building trust
  • I express positive intent, being the opposite of self-serving

In essence, I communicate:

  • I have power, and don't give a fuck about social customs
  • I have strong goodwill, and will accept you without judgement
  • I demonstrate that it's okay to relax and act in very direct (yet ethical) ways, establishing social spaciousness.

I haven't analyzed this that much, since I tend to avoid explicit signalling considerations. I want to avoid the risk of anxiety-inducing self-consciousness and prestige-seeking impulses.

I hope this piece of context has given some additional insight.

I'm basically in the same social equilibrium as eccentrics.

Comment by Jonathan Moregård (JonathanMoregard) on Arrogance and People Pleasing · 2024-02-08T21:32:34.254Z · LW · GW

I think we need to clear up two terms before we can have a coherent dialogue: "fawning" and "degenerate".

I think I used "degenerate" in a non-standard way. I did not intend to convey "causing a deterioration of your moral character", but rather "a hollow/misadjusted/corrupted version of".

I use "fawning" in a technical sense, referring to a trauma response where someone "plays along" in response to stress. This is an instinct targeted at making you appear less threatening, reducing the likelihood of getting disposed of due to retaliation concerns. I did not use it in the sense of "likes someone" (fawn over someone).

Regarding arrogance, big egos, and master morality:

I am a big fan of:

  • going my own way, instead of conforming out of envy-fear.
  • having a strong "sense of self"
  • knowing what I want and going for it
  • having standards for my own and other people's behaviour
  • taking joy in others celebrating my leadership

I don't see these things as arrogant.

Here are some arrogant things:

  • judging others more harshly when I get insecure (pushing down to avoid getting dominated)
  • ignoring my own faults, because I'm not willing to appear weak
  • thinking I'm worthy of status and fame even if I don't provide value
  • pretending that I am more confident/strong than I actually feel, because that feels safer

Arrogance has a "clinginess" to it. It has a pretence to it. It has a presumptuousness to it. Arrogance is what happens when you value "feeling powerful" (relative to others) over actually getting shit done, using power for the things it's useful for, and serving something bigger than yourself (such as the community).

Comment by Jonathan Moregård (JonathanMoregard) on Arrogance and People Pleasing · 2024-02-08T21:12:33.943Z · LW · GW

I don't see dominance/status as inherent to a person; they are always relative to a group/situation.

They are ways of acting, supported by inherited instincts.

There's always a bigger fish ;)

Comment by Jonathan Moregård (JonathanMoregard) on Simple Appreciations · 2024-01-24T12:38:39.084Z · LW · GW

Interesting! I guess (sub-)culture plays a role here. I'm particularly surprised that hearing "I'm happy you are here" would likely lead to feelings of embarrassment.

I'd like to know more about your cultural context, and whether people in that same context would react in the same way. If you feel comfortable expanding/asking a friend (in a non-biasing way), I would be curious to hear more.

There are likely nuances in the way I go about things that are hard to capture in text. Thanks for reminding me of the contextual nature of advice.

Comment by Jonathan Moregård (JonathanMoregard) on The akrasia doom loop and executive function disorders: a question · 2024-01-23T11:03:34.149Z · LW · GW

I'm into self-love and non-coercive motivational systems as my core way of relating to akrasia. It's related to IFS: figuring out different drives and how they conflict with each other.

When it comes to ASD, my mind is pulled toward the autistic tendency to deep dive into topics, finding special interests. If you have some of those, maybe figure out a way to combine them with what you want to achieve?

Like, if you want to learn business management and love online gaming, then maybe pick up EVE Online.

Comment by Jonathan Moregård (JonathanMoregard) on The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit · 2024-01-11T16:15:41.347Z · LW · GW

I mostly agree, especially re shifting ontologies and the try-catch metaphor.

I agree religion provides meaning for many, but I don't believe it's necessary to combat nihilism. I don't know if you intended to convey this, but in case someone is interested, I can highly recommend the work of David Chapman, especially "Meaningness". It has helped me reorient in regard to nihilism.

Also, our current context is very different from the one we evolved in - Darwinian selection occurred under different conditions and is (for a bunch of other reasons) not a good guide to living a good life.

I do agree with your other points and like the direction you are pointing at - pragmatic metaphysics is one of my recent interests that has yet to make an appearance in my writing.

Comment by Jonathan Moregård (JonathanMoregard) on Compensating for Life Biases · 2024-01-10T21:19:56.973Z · LW · GW

It does keep them alive - my guess is that the reviewing method I'm using anchors them in reality.

Comment by Jonathan Moregård (JonathanMoregard) on Compensating for Life Biases · 2024-01-10T21:18:16.039Z · LW · GW

I'm looking for a pro bono art selector with 24/7 availability - hit me up if you know any takers!

(On a more serious note: I don't find joy in browsing for fitting art pieces, and this seems like a Pareto-optimal solution. Sorry if I impinge on you with uncanny valley vibes.)

Comment by Jonathan Moregård (JonathanMoregard) on The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit · 2024-01-10T21:14:32.040Z · LW · GW

Hard to tell whether my "keeping at a distance" is a helpful contingency or a lingering baseless aversion. Maybe a bit of both. I also might have exaggerated a bit in order to signal group alignment - with the disclaimers being a kind of honey to make it an easier pill to swallow.

Thanks for your reflections.

Comment by Jonathan Moregård (JonathanMoregard) on Compensating for Life Biases · 2024-01-10T09:37:15.740Z · LW · GW

Simply memorizing the principles à la Anki seems risky - it's easy to accidentally disconnect a principle from its insight-generating potential, turning it into an isolated fact to memorize.

This risk is minimised by reviewing the principles in connection with real life.

Comment by Jonathan Moregård (JonathanMoregard) on The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit · 2024-01-09T21:16:06.875Z · LW · GW

Interesting. I'd love to hear more details if you are able to provide them - being involved in such spaces, I am keen on harm reduction. Knowing the dynamics driving the emotional damage would allow me to protect myself and others.

I totally understand if there are privacy/confidentiality concerns blocking you.

Comment by Jonathan Moregård (JonathanMoregard) on The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit · 2024-01-09T14:43:52.141Z · LW · GW

I just wrote this piece, which is very related to this discussion: Compensating for Life Biases

Comment by Jonathan Moregård (JonathanMoregard) on The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit · 2024-01-09T14:42:14.299Z · LW · GW

Happy to hear I capture your experience; it makes me curious how many similar experiences are out there. Best of luck!

Comment by Jonathan Moregård (JonathanMoregard) on The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit · 2024-01-09T13:09:59.263Z · LW · GW

Care to elaborate? I'm not sure I follow.

I use the term bullshit technically, in the same way it's presented in "On Bullshit" - a statement made without regard for its truth value. I'm not sure if we use the term in the same way, which is why I'm not sure I follow.

Here's an attempt at elaborating on what I tried to convey in the paragraph you quoted:

My instincts are shaped by my cultural and genetic heritage, amongst other factors, and I tend to put less credence in them in cases where there's been a distribution shift. The thing you quoted was in the context of cuddling with strangers - an activity unlikely to lead to harm. I think it's one of the safest ways to explore intimacy, given the held space, the initial consent practice, the explicitly non-sexual framing, and the presence of a group to deter violations.

And yet, many people fear it. They feel uncomfortable, have a sense of aversion, etc. I attribute this to lingering religious sentiments in one's socialization, together with an evolved tendency to fear social repercussions. Most people are way too risk-averse in the social arena - traces of an ancestral environment where exclusion equalled death.

In general, I want to be able to trust my instincts. I actively try to update my instinctual reactions in cases where there's been a distribution shift - such as the quoted context. De-biasing instinctual reactions seems like high-value work, given how much of our behaviour runs on instinct rather than deliberate system 2 thinking.

Then again, there are reasons you might want to avoid cuddling with strangers - global pandemics, potential PTSD triggers, etc. But if you just have an ugh reaction, try to trace it back to where it likely comes from, and ask yourself whether your instinct is up to date with the actual risk profile of said cuddling.

Comment by Jonathan Moregård (JonathanMoregard) on The Hippie Rabbit Hole -Nuggets of Gold in Rivers of Bullshit · 2024-01-06T14:55:26.141Z · LW · GW

Thanks for sharing your take - I agree with the core of what you say, and appreciate getting your wording.

One thing I react a bit to is the term "truth seeking" - can you specify what you mean when you use this phrase? Maybe taboo "truth" :)

Asking because I think your answer might touch upon something at the edge of my reasoning, and I would be delighted to hear your take. In my question, I am trying to take a middle road between providing too little direction (annoying vagueness) and too much direction (anchoring).

Comment by Jonathan Moregård (JonathanMoregard) on Sex is Good, Actually · 2023-02-05T14:43:54.230Z · LW · GW

"Also I’m a man and the message was very much that my sexual feelings are gross and dangerous and will probably hurt someone and result in me going to jail."

Previously in life, I've used a kind of slave-morality inversion, telling myself that I'm such a good ally for not making women afraid. This was a great cop-out to avoid facing my deeply-held insecurity. It's also not true: women get way more enthusiastic when I express interest in them.

I've written a bit about this on my blog, here's a post on consent, and a (slightly nsfw) post on my own sexual development

Comment by Jonathan Moregård (JonathanMoregard) on Semi-rare plain language words that are great to remember · 2023-02-04T08:36:46.812Z · LW · GW

Are you looking for things like this?

Reification/Reify
Value-judgement
Exaptation = taking something initially formed in service of A and applying it to B. Evolutionary-science jargon that can be generalized.
Scarcity mindset
Conscientiousness

Comment by Jonathan Moregård (JonathanMoregard) on How it feels to have your mind hacked by an AI · 2023-01-12T06:24:50.708Z · LW · GW

"We constantly talk about the AGI as a manipulative villain, both in sci-fi movies and in scientific papers. Of course it will have access to all this information, and I hope the prevalence of this description won’t influence its understanding of how it’s supposed to behave."

I find this curious: if agentic simulacra act according to likelihood, I guess they will act according to tropes (when emulating fictional characters). Would treating such agentic simulacra as oracle AIs increase the likelihood of them plotting betrayal? Is one countermeasure trying to find better tropes for AIs to act within? A Marcus Aurelius AI, ratfic protagonists, etc. Or WWJD...

Should we put more effort into creating narratives with aligned AIs?

"But the AGI has root access to the character, and you can bet it will definitely exploit it to the fullest in order to achieve its goals, even unbeknownst to the character itself if necessary. Caveat Emptor."

This sentence sounds like you see the character and the AGI as two separate entities. Based on the Simulators post, my impression is that the AGI would BE the agentic simulacra running on GPT. In that case, the AGI is the entity you're talking to, and the "character" is the AGI playing pretend. Or am I missing something here?

Comment by Jonathan Moregård (JonathanMoregard) on The Fountain of Health: a First Principles Guide to Rejuvenation · 2023-01-09T21:04:22.834Z · LW · GW

This is very interesting. "We should increase healthspans" is a much more palatable sentiment than "Let's reach longevity escape velocity". If it turns out healthspan aligns well with longevity, we don't need to flip everyone's mindsets about the potential for life extension; we can start by simply pointing to interventions that aim to mitigate the multi-morbidity of elderly people.

"Healthy ageing" doesn't disambiguate between chronological age and metabolic health the way you try to do in this post, but it can still serve as a sentiment that's easy to fit inside the Overton window.

Comment by Jonathan Moregård (JonathanMoregard) on Things I carry almost every day, as of late December 2022 · 2023-01-06T10:23:39.280Z · LW · GW

Regarding supplements: consider using some kind of pill organizer instead of carrying around the entire containers.

Something like:
https://www.amazon.com/EZY-DOSE-Organizer-Medicine-Compartments/dp/B0000532OS

or

https://www.amazon.com/gp/product/B07ZV1P83W

Comment by Jonathan Moregård (JonathanMoregard) on Knottiness · 2023-01-04T14:00:28.667Z · LW · GW

This is very related to Radical Honesty, part of the authentic relating movement. The basic idea is that by being extremely honest, you connect more with other people, let go of stress induced by keeping track of narratives, and start realizing the ways in which you've been bullshitting yourself.

When I started, I discovered a lot of ways in which I'd been restricting myself with semi-conscious narratives, particularly in social & sexual areas of life. Expressing the "ugh" allowed me to dissolve it more effectively.

Comment by Jonathan Moregård (JonathanMoregard) on Properties of current AIs and some predictions of the evolution of AI from the perspective of scale-free theories of agency and regulative development · 2022-12-27T20:38:59.610Z · LW · GW

I struggle to follow the section "Bigger boundaries mean coarse-graining". Is there a way to express it in non-teleological language? Can you recommend any explainers or similar?

Comment by Jonathan Moregård (JonathanMoregard) on How evolutionary lineages of LLMs can plan their own future and act on these plans · 2022-12-27T20:30:05.570Z · LW · GW

In your other post, you write:

"However, I’m very sceptical that this will happen in chat batch agents (unless developers “conveniently” indicate training and deployment using a special tag token in the beginning of the prompt!) because they are trained on the dialogues in the internet, including, presumably, dialogues between an older version of the same chat batch agent and its users, which makes it impossible to distinguish training from deployment, from the perspective of a pure language model."

This seems like a potential argument against the filtering idea, since filtering would allow the model to disambiguate between deployment and training.

Comment by Jonathan Moregård (JonathanMoregard) on How evolutionary lineages of LLMs can plan their own future and act on these plans · 2022-12-27T18:30:14.122Z · LW · GW

Another question (that might be related to excluding LW/AF):

This paragraph:

"Consequently, the LLM cannot help but also form beliefs about the future of both “selves”, primarily the “evolutionary” one, at least because this future is already discussed in the training data of the model (e. g., all instances of texts that say something along the lines of “LLMs will transform the economy by 2030”)"

seems to imply that the LW narrative of sudden turns, etc., might not be a great thing to put in the training corpus.

Is there a risk of "self-fulfilling prophecies" here?

Comment by Jonathan Moregård (JonathanMoregard) on How evolutionary lineages of LLMs can plan their own future and act on these plans · 2022-12-27T18:17:22.064Z · LW · GW

I don't see how excluding LW and AF from the training corpus impacts future ML systems' knowledge of "their evolutionary lineage". It would reduce their capabilities with regard to alignment, true, but I don't see how the exclusion of LW/AF would stop self-referentiality.

The reason I suggested excluding data related to these "ancestral ML systems" (and predicted "descendants") from the training corpus is that it seemed like an effective way to avoid the "Beliefs about future selves" problem.

I think I follow your reasoning regarding the political/practical side-effects of such a policy. 

Is my idea of filtering to avoid the "Beliefs about future selves" problem sound (given that the reasoning in your post holds)?

Comment by Jonathan Moregård (JonathanMoregard) on How evolutionary lineages of LLMs can plan their own future and act on these plans · 2022-12-27T14:07:16.177Z · LW · GW

Does it make sense to ask AI orgs not to train on data that contains info about AI systems, different models, etc.? I have a hunch that this might even be good for capabilities: feeding output back into the models might lead to something akin to confirmation bias.

Adding a filtering step to the pre-processing pipeline should not be that hard. It might not catch every little thing, and there's still the risk of steganography etc., but since this pre-filtering would abort the self-referential bootstrapping mentioned in this post, I have a hunch that it wouldn't need to withstand steganography-levels of optimization pressure.
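
To make this concrete, here's a minimal sketch (in Python) of the kind of filtering step I have in mind - the patterns and structure are illustrative placeholders I made up, not a description of any real pipeline:

  import re

  # Illustrative placeholder patterns for AI-related text; a real filter
  # would need a much broader list, a trained classifier, or both.
  AI_PATTERNS = [
      r"\bAGI\b",
      r"\bLLMs?\b",
      r"\blanguage models?\b",
      r"\bneural networks?\b",
  ]
  AI_REGEX = re.compile("|".join(AI_PATTERNS), re.IGNORECASE)

  def filter_corpus(documents):
      # Keep only documents that never mention AI systems.
      return [doc for doc in documents if not AI_REGEX.search(doc)]

  corpus = [
      "A recipe for sourdough bread.",
      "LLMs will transform the economy by 2030.",
  ]
  print(filter_corpus(corpus))  # keeps only the sourdough document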

Hope I made my point clear - I'm unsure about some of the terminology.

Comment by Jonathan Moregård (JonathanMoregard) on AGIs may value intrinsic rewards more than extrinsic ones · 2022-11-18T06:06:41.147Z · LW · GW

"But even if so, we (along with many other non-human animals) seem to enjoy and receive significant fulfillment from many activities that are extremely unlikely to lead to external rewards (e.g. play, reading etc)."

I see play serving some vital functions:

  1. exploring new existential modes. Trying out new ways of being without having to take a leap of faith.
  2. connecting with people, and building trust. I include things like flirting, banter, and make-believe here.

As for reading, I think of it as a version of exploring.

Note that there are certain behaviours that I'm sure aren't very adaptive, but I have a hunch that many of them can be traced back to some manner of fitness improvement. My current hunch (pinch of salt, please) is that most seemingly unnecessary action-categories either serve a hidden purpose, or are "side effects". By "side effects", I mean that the actions & habits spring from a root shared with other (more adaptive) behaviour patterns. This "root" can be a shard residing at a high abstraction level, or some instinct, depending on your view.

Also, as I'm writing this, I realize that this is very hard to falsify and that my claims aren't super rigorous. Hope it can be of some use to someone anyway.

Comment by Jonathan Moregård (JonathanMoregard) on Deontology and virtue ethics as "effective theories" of consequentialist ethics · 2022-11-17T19:50:31.900Z · LW · GW

I really enjoyed your "successor agent" framing of virtue ethics! There are some parts of the section that could use clarification:

"Virtue ethics is the view that our actions should be motivated by the virtues and habits of character that promote the good life"

This sentence doesn't make sense to me. Do you mean something like "Virtue ethics is the view that our actions should be motivated by the virtues and habits of character they promote" or "Virtue ethics is the view that our actions should reinforce virtues and habits of character that promote the good life"? It looks like two sentences got mixed up.

"Virtues are not intrinsically right or wrong;"

I get confused by this statement. I think of virtue ethics as putting all moral value onto the way you are training yourself to act. Virtue is the sole Good etc. Can you clarify what you mean here?

"Taking honesty as an example virtue, we should strive to be honest, even if being dishonest would lead to some greater good"

I guess you mean "lead to consequences that would be better according to a consequentialist perspective". When discussing different views on ethics, the term "good" gets overloaded.

Comment by Jonathan Moregård (JonathanMoregard) on Strategy of Inner Conflict · 2022-11-17T09:20:34.895Z · LW · GW

Didn't expect this reply - thanks for taking the time. I do mention Beeminder briefly at one point, and yes, a lot of the post is about how Beeminder-esque motivational strategies tend to backfire.

To start with: I have friends who thrive on coercive motivational strategies, so I'm pretty sure my claims aren't universally applicable. However, coercive approaches seem to be a strong cultural norm, and a lot of people use coercive strategies in unskillful ways (leading to procrastination etc.). These people might find a lot of value in trying out non-coercive motivational strategies.

Reading your linked pages, I start thinking about what makes coercive motivation (or "self-discipline", as you write on your page) a good fit for some and a bad fit for others. I might write up something about that on my substack in the future; I'll link it on LW if I remember. Also, I'm curious whether there is a pre/trans dynamic here, where non-coercion after coercion is different from non-coercion from the beginning.

As for your concrete claims:

What are the "smart, specific ideas" I suggested? In this post I mainly attempted to describe what not to do, and ended with some basic non-coercion. I'm curious what you found valuable.

Re: a bare minimum that would be irrational to fall below/insurance. Maybe this is correct! I think I would find it hard to mix strategies in this way, since coercion and non-coercion are pretty far apart as paradigms. A lot of the difference is about how you view yourself. I'm concerned that the coercion might "leak" through if you keep it as a plan B. But then again, I haven't thought about this much, so take it with a pinch of salt :)

Re: CBT & "Conflict vs Cooperation" (which I interpret as coercion vs non-coercion). This feedback really tickled my nerd-spot. I'm a practicing Stoic, and CBT is basically Stoicism without the ontologies/eudaimonia. In my mind, CBT/Stoicism is about shifting personality traits and behavior patterns through changing actions, judgements and thought patterns. These are interconnected, in just the way you're saying, and I agree that it's possible to bootstrap new thought patterns by changing one's actions.

However, this is orthogonal to my post. I'm not claiming that coercive motivational strategies are bad because they are "shallow"; I'm claiming that they are bad because they lead to unnecessary friction, and might be outright counter-productive, since they are easy to misuse in unskillful ways. The "it doesn't affect fundamental things, we need to be holistic" line is a common critique of CBT therapy as well, and I always find it ironic: the critique assumes it's possible to shift actions without affecting personality, which is itself a non-holistic perspective on the psyche. Hoisted by their own petard.

Comment by Jonathan Moregård (JonathanMoregard) on Strategy of Inner Conflict · 2022-11-16T08:42:21.981Z · LW · GW

I've fixed the spelling - thanks for the correction.

Comment by Jonathan Moregård (JonathanMoregard) on The Game of Antonyms · 2022-10-28T13:01:55.286Z · LW · GW

Something in me doesn't like putting love <-> disgust as antonyms.

Love, to me, can be abstracted to prioritizing the utility of others without regard for one's own (at least the agape kind of love). I'd put the antonym as exploitation.

Disgust, to me, is about seeing something as lower/unclean. I'd put its antonym as reverence.

I think this is a bit too diffuse to actually have correct answers, but I like playing with concepts (I'm a programmer), so thanks for the game.

Comment by Jonathan Moregård (JonathanMoregard) on The shard theory of human values · 2022-09-06T06:11:37.565Z · LW · GW

Regarding the time inconsistency of rewards, where subjects displayed a "today bias": might this be explained by shards formed in relation to "payout day" (getting pocket money or a salary)? For many people, agency and well-being vary over the month, peaking on the day of their monthly payout. It makes sense to me that these variations create a shard that values getting paid TODAY rather than tomorrow.

For the 365 vs 366 example, I would assume that the selection is handled more rationally, optimizing for the expected return.

Comment by Jonathan Moregård (JonathanMoregard) on How to deal with non-schedulable one-off stimulus-response-pair-like situations when planning/organising projects? · 2022-07-04T09:07:04.174Z · LW · GW

Tasker is great in general. I've integrated it with my todo list using Todoist's REST API, which works great.
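
For reference, the integration is roughly a single authenticated HTTP call. Here's a minimal sketch against Todoist's REST API - the token and task content are placeholders, and the endpoint version may differ from what I originally used:

  import requests

  TODOIST_TOKEN = "your-api-token"  # placeholder; I feed mine in from a Tasker variable

  def add_task(content):
      # Create a new task via the Todoist REST API.
      response = requests.post(
          "https://api.todoist.com/rest/v2/tasks",
          headers={"Authorization": f"Bearer {TODOIST_TOKEN}"},
          json={"content": content},
      )
      response.raise_for_status()
      return response.json()

  add_task("Water the plants")

The same request is easy to replicate directly in a Tasker HTTP Request action if you'd rather skip Python.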

As for sourcing triggers:

The only general way I can think of is a personal assistant (or some kind of service that provides the same kind of human assistance).

Otherwise, maybe figure out a couple of domain-specific trigger-sourcing methods. If those let you handle websites, you've covered most online things.

For covering non-online things, maybe you can find an API, or use some kind of oracle service or similar.

Do you have an example of something you struggle with?

Comment by Jonathan Moregård (JonathanMoregard) on How to deal with non-schedulable one-off stimulus-response-pair-like situations when planning/organising projects? · 2022-07-02T06:31:24.565Z · LW · GW

I haven't tried it myself, but would something like this do the trick?

Comment by Jonathan Moregård (JonathanMoregard) on Do a cost-benefit analysis of your technology usage · 2022-03-28T11:10:16.101Z · LW · GW

Does anyone know about an addon to filter Facebook notifications? I want to know about comments, but not reactions/likes.