I found your example problem very interesting, and started thinking about social dynamics that match "tell me if you do it, but don't do it".
The closest cultural anchor I could find is that of sin, confession, and the priest. Modelling the situation as a confession might be apt.
Why is hostile low-quality resurrection almost inevitable? If you want to clone someone into an em, why not pick a living human?
Frozen people have potential brain damage and an outdated understanding of the world.
"By manipulating your belief system, you can lessen your inhibitions, by lowering the perceived threat of less control."
I don't follow this sentence
So I might read “trust the universe” and get closer to a flow state, or forget exactly what I meant by that.
Yes, this is the problem when signifiers detach from the signified. In my post on personal heuristics I mention something similar: the issue where many "stock wisdoms" turn into detached platitudes.
This is also reminiscent of spiritual practices (like meditation instructions) that turn into religious dogma.
For me, there's a difference between having techniques that are dependent on reminders (like telling yourself "trust the universe"), and techniques that are more descriptive (like "take slow belly-breaths with long exhales" or "act upon your immediate impulses")
This is an enormous topic that is under-theorized, agreed that language is somewhat lacking.
When I'm very in tune with my short-term desires, emotions and agency - acting according to instinct and impulse rather than ideas, plans or similar.
It's a mindset/state of being I can go into, which has a very particular "flavour" to it, it's light-hearted, unconcerned & in tune with what I want/like, in the moment.
I guess different people have different "modes" or "headspaces", a kind of equilibrium for how they experience the world, their own agency, and themselves. Different equilibria fit different situations. What I wanted to exemplify in the post was the potential of knowing what "modes", "equilibria" or "headspaces" you have access to, and trying to switch into non-standard ones when your default headspace doesn't resolve the situation at hand.
If we want to shift group dynamics, I see these things as important shifts:
- conflict theory -> mistake theory
- general complaints -> specific solutions
- overconfidence -> humility
One way to go about this, inspired by Scott Alexander, is to ask for more concreteness: https://www.astralcodexten.com/p/details-that-you-should-include-in
In general, though, I think the info content of the outrage is low. For most people, it mainly means "I read this thing online, and it resonated somehow". I see most outrage group discussions as extensions of newsfeeds, best to be ignored.
For solid discussions, find the people capable of deep analysis, and read their work.
Do you mean "alert and active" as in:
- Dealing with group-outrage situations without zoning out?
- Staying up to date with politics and having an influence?
- Shifting your social contexts to become less outrage-oriented?
- Other?
This happened a while ago, and I've since migrated my social circles to distinctly non-partisan ones. I still want to help you, and would like to offer some ideas. Some of them might not fit your specific contexts. I trust you to pick the ones that seem promising.
Ideas:
- Say something akin to "I get depressed talking about those people. I've decided to focus on people I like instead. Have you been excited about anything recently?"
- Bring up the negative consequences explicitly. In a highly polarized state, not engaging in the outrage might be interpreted as a sign of betrayal. Explicitly bringing it up might be weird, but it gives you a non-traitor reason for not engaging in outrage
- Look into authentic relating - a bunch of practices for deepening communication & connection. Nonviolent communication & circling are included in this category, as well as general "authentic relating". You get some new tools for relating, and/or new friends.
- Try to get people to reduce time spent on news & social media. Phone-free family gatherings?
- Get people addicted to mobile games, so they spend their time on candy crush instead of culture wars
- Become a hermit, build a log house under an oversized rock
For reference, most of the people I hang out with are involved in the Nordic branches of the wider Burning Man community. They are busy actually doing things, rather than complaining about politics. YMMV, but equivalent spaces where you're located might serve as a source of non-bitter connections.
See my response to mako yass :)
The article is partly written from a past-me perspective, and I agree that it is a bit harsh. Also, there are multiple things converging to create an expectation mismatch.
I guess it's possible to say both things, and I failed at disambiguating between the content of what was said and the tone. Most people looked at me like I was a beaten dog, offering support in the same ooh-that's-horrible tone people vibe into during charity galas.
I get that it might be a (sub)cultural thing, but I've gotten a lot of appreciation for actually trying to understand the person's situation. Guess vs ask culture maybe?
-
The medical opinion was: "That's an inappropriate question." It works for dogs, so why not? :D
-
"No, we are going to split it up into millimetre sized cubes and analyse it". (They went full hitchhiker's on me.)
-
"Some people don't lose their hair" (empirically, the answer is "yes")
Always carry a water bottle - keeping hydrated is easier, as is avoiding sugary drinks (due to reduced thirst impulses)
I get where you're coming from and appreciate you "rounding off" rather than branching out :)
I wrote a post on "inside-out identity", here: https://honestliving.substack.com/p/inside-out-identity
Also, I only post some of my writing on lesswrong, so if you're interested, I can recommend subscribing to my substack :)
in case it’s a form of self-defense, I’d like to warn against it.
Nope! It's a conscious decision. I challenge myself and discover things I've been avoiding. (hiding from others -> hiding from self). It's a way to step into my power.
If you’re watching a movie with a group of people and you make a sound to break the immersion, you’ve been rude. It’s the same with social reality. The fear of being exposed/seen through is similar to the fear of being judged. Not looking too closely is good manners.
It's complicated! I tend to break it in interesting ways, with people that enjoy creative reframings. I know the power/joy of narratives, and try to do this in ways that serve the group. Hard to put into words, but people who are usually "stuck" in social reality express that they are surprised over feeling safe enough to open up, and seem happy enough.
If I “see through” somebody , it’s only to compliment them. I try not noticing their flaws too much. This helps them to relax.
I almost never judge. I've practised nonviolent communication, creating "mental handles" for my judgements. When I start judging someone, I relate to my judgement as something occurring in me, rather than projecting it on the other person.
I also don't think of people's actions as good or bad. I rather try to understand why they are acting as they do. Some actions are untrained/unskillful.
At the same time, I'm very selective with who I hang out with :)
I hope you are allowing yourself to be human, to not always be correct, moral, and objective. That you allow yourself immersion in life, rather than a birds-eye-perspective which keeps you permanently disillusioned. Perhaps this is the anxiety-inducing self-consciousness you’re avoiding? If so, no problem!
I'm not improving my moral character because I think I should. I do it because I enjoy progress and challenge. Virtue is the sole good ;)
I feel generally happy and life feels meaningful. It feels more meaningful the more I learn about it.
Some of my writing is on the wilder side, exploring dominance dynamics, tantra and similar. I'm not at risk of being morally inhibited, and tend to value (virtue) ethics over inhibiting norms/morals.
But I assume you know how slatestarcodex got shut down despite having high ethical standards? The closer one is to public opinion, the less they can get away with.
I don't see the danger. I'm open to my family and friends - no blackmail leverage. I keep away from culture war stuff, writing to an advanced audience. I am independently wealthy, enough to semi-retire. I earn money by facilitating philosophical inquiry, no boss to fire me.
At this point, I'd rather not live in fear. I'm as safe as it gets, and want to shift the overton window. Re: slatestarcodex - it seems to be going well for Scott.
P.S.: It's interesting to reflect with you, but this is getting a tad long for my taste, so I'll try to stop at this point. If you are curious about anything and would like me to write about it, I'm open for suggestions.
There are a lot of things about my social behaviour that are confusing.
I engage in radical honesty, trying to express what is going on in my head as transparently as possible. I have not been in a fight/argument for 8 years.
People have said it's pleasant to talk to me. I tend to express disagreement even if I'm mostly aligned with the person I'm talking to.
I break all kinds of rules. My go-to approach for getting to know strangers is:
- ask them to join me in 1on1 conversation
- open up by saying: "I have this question I like asking people to get to know them. Are you open to try it?" -> "yes" -> "what's important to you?"
At the same time, people all say they feel safe with me, expressing gratitude. (with one memorable exception)
And it's not all in my head. I keep getting invited to amazing places/communities. I have an easy time landing jobs. I bootstrapped a philosophical guidance practice over a few months, and have recurring paying happy clients.
I think there are some keys to it:
- I work really hard on virtue/being a good person instead of just signalling
- I've worked on communication A LOT, including various intersubjective communication practices (circling etc), nonviolent communication, authentic relating
- I habitually take the kinds of initiatives that lead to high status in groups
- I am generally successful money-wise, and have high intelligence, and am not part of a marginalized group, so I think I have a lot of leeway.
- I hang out with people that are far from normative (Burning Man extended communities)
From a signalling point of view, I'm taking the risk of being seen as cringe, while expressing something positive in a skilled way so as to not elicit threat responses. This ends up being a strong signal since:
- I take a risk (being seen as cringe), signalling that I have enough social capital to not fear the risk of judgement
- I do it in a calibrated way, building trust
- I express positive intent, being the opposite of self-serving
In essence, I communicate:
- I have power, and don't give a fuck about social customs
- I have strong goodwill, and will accept you without judgement
- I demonstrate that it's okay to relax and act in very direct (yet ethical) ways, establishing social spaciousness.
I haven't analyzed this that much, since I tend to avoid explicit signalling considerations. I want to avoid the risk of anxiety-inducing self-consciousness and prestige-seeking impulses.
I hope this piece of context has given some additional insight.
I'm basically in roughly the same social equilibrium as eccentrics.
I think we need to clear up two terms before we can have a coherent dialogue: "fawning" and "degenerate".
I think I used "degenerate" in a non-standard way. I did not intend to convey "causing a deterioration of your moral character", but rather "a hollow/misadjusted/corrupted version of".
I use "fawning" in a technical sense, referring to a trauma response where someone "plays along" in response to stress. This is an instinct targeted at making you appear less threatening, reducing the likelihood of getting disposed of due to retaliation concerns. I did not use it in the sense of "likes someone" (fawn over someone).
Regarding arrogance, big ego, and master morality:
I am a big fan of:
- going my own way, instead of conforming out of envy-fear.
- having a strong "sense of self"
- knowing what I want and going for it
- having standards for my own and other people's behaviour
- taking joy in others celebrating my leadership
I don't see these things as arrogant.
Here are some arrogant things:
- judging others more harshly when I get insecure (pushing down to avoid getting dominated)
- ignoring my own faults, because I'm not willing to appear weak
- thinking I'm worthy of status and fame even if I don't provide value
- pretending that I am more confident/strong than I actually feel, because that feels safer
Arrogance has a "clinginess" to it. It has a pretence to it. It has a presumptuousness to it. Arrogance is what happens when you value "feeling powerful" (relative to others), over actually getting shit done, using power for the things it's useful for, and serving something bigger than yourself (such as the community).
I don't see dominance/status as inherent to a person; they are always relative to a group/situation.
They are ways of acting, supported by inherited instincts.
There's always a bigger fish ;)
Interesting! I guess (sub-)culture plays a role here. I'm particularly surprised that hearing "I'm happy you are here" would likely lead to feelings of embarrassment.
I'd like to know more about your cultural context, and whether people in that same context would react in the same way. If you feel comfortable expanding/asking a friend (in a non-biasing way), I would be curious to hear more.
There's likely to be nuances in the way I go about things that are hard to capture in text. Thanks for reminding me of the contextual nature of advice.
I'm into self-love and noncoercive motivational systems as my core method of relating to akrasia. It's related to IFS, figuring out different drives, and how they conflict with each other.
When it comes to ASD, my mind is pulled toward the autistic tendency to deep dive into topics, finding special interests. If you have some of those, maybe figure out a way to combine them with what you want to achieve?
Like if you want to learn business management, and love online gaming, then maybe pick up EVE Online
I mostly agree, especially re shifting ontologies and the try-catch metaphor.
I agree religion provides meaning for many, but I don't believe it's necessary to combat nihilism. I don't know if you intended to convey this, but in case someone is interested, I can heavily recommend the work of David Chapman, especially "Meaningness". It has helped me reorient in regard to nihilism.
Also, our current context is very different from the one we evolved in - Darwinian selection occurred in a different context and is (for a bunch of other reasons) not a good indicator of how to live a good life.
I do agree with your other points and like the direction you are pointing at - pragmatic metaphysics is one of my recent interests that has yet to make an appearance in my writing.
It does keep them alive - my guess is that the reviewing method I'm using anchors them in reality
I'm looking for a pro bono art selector with 24/7 availability, hit me up if you know any takers!
(on a more serious note: I don't find joy in browsing for fitting art pieces, and this seems like a pareto-optimal solution. Sorry if I impinge on you with uncanny valley vibes)
Hard to tell whether my "keeping at a distance" is a helpful contingency or a lingering baseless aversion. Maybe a bit of both. I also might have exaggerated a bit in order to signal group alignment - with the disclaimers being a kind of honey to make it an easier pill to swallow.
Thanks for your reflections.
Simply memorizing the principles à la Anki seems risky - it's easy to accidentally disconnect a principle from its insight-generating potential, turning it into a disconnected fact to memorize.
This risk is minimised by reviewing the principles in connection to real life.
Interesting. I'd love to hear more details if you are able to provide them - being involved in such spaces, I am keen on harm reduction. Knowing the dynamics driving the emotional damage would allow me to protect myself and others.
I totally understand if there are integrity concerns blocking you.
I just wrote this piece, which is very related to this discussion: Compensating for life biases
Happy to hear I capture your experience, makes me curious how many similar experiences are out there. Best of luck!
Care to elaborate? I'm not sure I follow.
I use the term bullshit technically, in the same way it's presented in "On Bullshit" - a statement made without regard for its truth value. I'm not sure if we use the term in the same way, which is why I'm not sure I follow.
Here's an attempt at elaborating on what I tried to convey in the paragraph you quoted:
My instincts are shaped by my cultural and genetic heritage, amongst other factors, and I tend to put less credence to them in cases where there's been a distribution shift. The thing you quoted was in the context of cuddling with strangers - an activity unlikely to lead to harm. I think it's one of the safest ways to explore intimacy, given the held space, initial consent practice, outspoken non-sexual nature, and presence of a group to deter violations.
And yet, many people fear it. They feel uncomfortable, have a sense of aversion etc. I attribute this to lingering religious sentiments in one's socialization, together with an evolved tendency to fear social repercussions. Most people are way too risk-averse in the social arena - traces from an ancestral environment where exclusion equalled death.
In general, I want to be able to trust my instincts. I actively try to update my instinctual reactions in cases where there's been a distribution shift - such as the quoted context. De-biasing instinctual reactions seems like high-value work, given how much of our behaviour runs on system 1 rather than deliberate system 2 thinking.
Then again, there are reasons you might want to avoid cuddling with strangers - global pandemics, potential ptsd triggers etc. But if you just have an ugh reaction, try to trace it back to where it likely comes from and ask yourself if your instinct is up to date with the actual risk profile of said cuddling.
Thanks for sharing your take - I agree with the core of what you say, and appreciate getting your wording.
One thing I react a bit to is the term "truth seeking" - can you specify what you mean when you use this phrase? Maybe taboo "truth" :)
Asking because I think your answer might touch upon something that is at the edge of my reasoning, and I would be delighted to hear your take. In my question, I am trying to take a middle road between providing too little direction (annoying vagueness) and too much direction (anchoring)
Also I’m a man and the message was very much that my sexual feelings are gross and dangerous and will probably hurt someone and result in me going to jail.
Previously in life, I've used a kind of slave-moral inversion by telling myself that I'm such a good ally by not making women afraid. This was a great cop-out to avoid facing my deeply-held insecurity. It's also not true, women get way more enthusiastic when I express interest in them.
I've written a bit about this on my blog, here's a post on consent, and a (slightly nsfw) post on my own sexual development
Are you looking for things like this?
Reification/Reify
Value-judgement
Exaptation = taking something initially formed in service of A and applying it to B. Evolutionary science jargon that can be generalized.
Scarcity mindset
Conscientiousness
We constantly talk about the AGI as a manipulative villain, both in sci-fi movies and in scientific papers. Of course it will have access to all this information, and I hope the prevalence of this description won’t influence its understanding of how it’s supposed to behave.
I find this curious: if the agentic simulacra act according to likelihood, I guess they will act according to tropes (if they emulate a fictional character). Would treating such agentic simulacra as oracle AIs increase the likelihood of them plotting betrayal? Is one countermeasure trying to find better tropes for AIs to act within? Marcus Aurelius AI, ratfic protagonists etc. Or WWJD...
Should we put more effort into creating narratives with aligned AIs?
But the AGI has root access to the character, and you can bet it will definitely exploit it to the fullest in order to achieve its goals, even unbeknownst to the character itself if necessary. Caveat Emptor.
This sentence sounds like you see the character and the AGI as two separate entities. Based on the simulators post, my impression is that the AGI would BE the agentic simulacra running on GPT. In that case, the AGI is the entity you're talking to, and the "character" is the AGI playing pretend. Or am I missing something here?
This is very interesting. "We should increase healthspans" is a much more palatable sentiment than "Let's reach longevity escape velocity". If it turns out healthspan aligns well with longevity, we don't need to flip everyone's mindsets about the potential for life extension; we can start by simply pointing to interventions that aim to mitigate the multi-morbidity of elderly people.
"Healthy ageing" doesn't disambiguate between chronological age and metabolic health the way you try to do in this post, but it can still serve as a sentiment that's easy to fit inside the Overton window.
Regarding supplements: consider using some kind of pill organizer instead of carrying around the entire containers.
Something like:
https://www.amazon.com/EZY-DOSE-Organizer-Medicine-Compartments/dp/B0000532OS/
or
https://www.amazon.com/gp/product/B07ZV1P83W/
This is very related to Radical Honesty, part of the authentic relating movement. The basic idea is that by being extremely honest, you connect more with other people, let go of stress induced by keeping track of narratives, and start realizing the ways in which you've been bullshitting yourself.
When I started, I discovered a lot of ways in which I'd been restricting myself with semi-conscious narratives, particularly in social & sexual areas of life. Expressing the "ugh" allowed me to dissolve it more effectively.
I struggle to follow the section "Bigger boundaries mean coarse-graining". Is there a way to express it in non-teleological language? Can you recommend any explainers or similar?
In your other post, you write:
"However, I’m very sceptical that this will happen in chat batch agents (unless developers “conveniently” indicate training and deployment using a special tag token in the beginning of the prompt!) because they are trained on the dialogues in the internet, including, presumably, dialogues between an older version of the same chat batch agent and its users, which makes it impossible to distinguish training from deployment, from the perspective of a pure language model."
This seems like a potential argument against the filtering idea, since filtering would allow the model to disambiguate between deployment and training.
Another question (that might be related to excluding LW/AF):
This paragraph:
Consequently, the LLM cannot help but also form beliefs about the future of both “selves”, primarily the “evolutionary” one, at least because this future is already discussed in the training data of the model (e. g., all instances of texts that say something along the lines of “LLMs will transform the economy by 2030”)
Seems to imply that the LW narrative of sudden turns etc might not be a great thing to put in the training corpus.
Is there a risk of "self-fulfilling prophecies" here?
I don't see how excluding LW and AF from the training corpus impacts future ML systems' knowledge of "their evolutionary lineage". It would reduce their capabilities in regards to alignment, true, but I don't see how the exclusion of LW/AF would stop self-referentiality.
The reason I suggested excluding data related to these "ancestral ML systems" (and predicted "descendants") from the training corpus is because that seemed like an effective way to avoid the "Beliefs about future selves"-problem.
I think I follow your reasoning regarding the political/practical side-effects of such a policy.
Is my idea of filtering to avoid the "Beliefs about future selves"-problem sound?
(Given that the reasoning in your post holds)
Does it make sense to ask AI orgs to not train on data that contains info about AI systems, different models etc? I have a hunch that this might even be good for capabilities: feeding output back into the models might lead to something akin to confirmation bias.
Adding a filtering step to the pre-processing pipeline should not be that hard. It might not catch every little thing, and there's still the risk of steganography etc, but since this pre-filtering would abort the self-referential bootstrapping mentioned in this post, I have a hunch that it wouldn't need to withstand steganography-levels of optimization pressure.
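To make the idea concrete, here's a minimal sketch of what such a pre-filtering step could look like. The keyword list is purely illustrative (a real filter would need a much broader set of terms, and probably a trained classifier rather than substring matching); the function names are my own invention, not from any existing pipeline.

```python
import re

# Illustrative only - a real deployment would need a far more
# comprehensive list (model names, lab names, alignment jargon, etc.)
AI_KEYWORDS = ["lesswrong", "alignment forum", "gpt-", "language model"]

def filter_corpus(documents, keywords=AI_KEYWORDS):
    """Drop documents that mention AI/ML 'ancestry'-related keywords.

    Crude case-insensitive substring matching, just to show where the
    step would sit in a pre-processing pipeline.
    """
    pattern = re.compile(
        "|".join(re.escape(k) for k in keywords), re.IGNORECASE
    )
    return [doc for doc in documents if not pattern.search(doc)]
```

The point is only that the filter runs once over the corpus before training, rather than needing to resist optimization pressure at inference time.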
Hope I made my point clear, I'm unsure about some of the terminology.
But even if so, we (along with many other non-human animals) seem to enjoy and receive significant fulfillment from many activities that are extremely unlikely to lead to external rewards (e.g. play, reading etc).
I see play serving some vital functions:
- exploring new existential modes. Trying out new ways of being without having to take a leap of faith.
- connecting with people, and building trust. I include things like flirting, banter, and make-believe here.
As for reading, I think of it as a version of exploring.
Note that there are certain behaviours that I'm sure aren't very adaptive, but I have a hunch that many of them can be traced back to some manner of fitness improvement. My current hunch (pinch of salt please) is that most seemingly unnecessary action-categories either serve a hidden purpose, or are "side effects". By "side effects", I mean that the actions & habits spring from a root shared with other (more adaptive) behaviour patterns. This "root" can be a shard residing at a high abstraction level, or some instinct, depending on your view.
Also, as I'm writing this, I realize that this is very hard to falsify and that my claims aren't super rigorous. Hope it can be of some use to someone anyway.
I really enjoyed your "successor agent" framing of virtue ethics! There are some parts of the section that could use clarification:
Virtue ethics is the view that our actions should be motivated by the virtues and habits of character that promote the good life
This sentence doesn't make sense to me. Do you mean something like "Virtue ethics is the view that our actions should be motivated by the virtues and habits of character they promote" or "Virtue ethics is the view that our actions should reinforce virtues and habits of character that promote the good life"? It looks like two sentences got mixed up.
"Virtues are not intrinsically right or wrong;"
I get confused by this statement. I think of virtue ethics as putting all moral value onto the way you are training yourself to act. Virtue is the sole Good etc. Can you clarify what you mean here?
"Taking honesty as an example virtue, we should strive to be honest, even if being dishonest would lead to some greater good"
I guess you mean "lead to consequences that would be better according to a consequentialist perspective". When discussing different views on ethics the term "good" gets overloaded.
Didn't expect this reply, thanks for taking the time. I do mention Beeminder briefly at one point, and yes, a lot of the post is about how Beeminder-esque motivational strategies tend to backfire.
To start with: I have friends who thrive on coercive motivational strategies. I'm pretty sure my claims aren't universally applicable. However, coercive approaches seem to be a strong cultural norm, and a lot of people use coercive strategies in unskillful ways (leading to procrastination etc). These people might find a lot of value in trying out non-coercive motivational strategies.
Reading your linked pages, I start thinking about what makes coercive motivations (or "self-discipline", as you write on your page) a good fit for some and a bad fit for others. Might write up something about that on my substack in the future, I'll link it to LW if I remember. Also, I'm curious whether there's a pre/trans dynamic here, where non-coercion after coercion is different from non-coercion from the beginning.
As for your concrete claims:
What are the "smart, specific ideas" I suggested? In this post I mainly attempted to describe what not to do, and ended with some basic non-coercion. I'm curious what you found valuable.
Re: bare minimum that would be irrational to fall below/insurance. Maybe this is correct! I think I would find it hard to mix strategies in this way, since coercion and non-coercion are pretty far apart as paradigms. A lot of the difference is about how you view yourself. I'm concerned that the coercion might "leak" through if you keep it as a plan B. But then again, I haven't thought much about this, so take it with a pinch of salt :)
Re: CBT & "Conflict vs Cooperation" (which I interpret as coercion vs non-coercion). This feedback really tickled my nerd-spot. I'm a practicing stoic, and CBT is basically stoicism without the ontologies/eudaimonia. In my mind, CBT/stoicism is about shifting personality traits and behavior patterns through changing actions, judgements and thought patterns. These are interconnected, in just the way you're saying, and I agree that it's possible to bootstrap new thought patterns by changing one's actions.
However, this is orthogonal to my post. I'm not claiming that coercive motivational strategies are bad because they are "shallow"; I'm claiming that they are bad because they lead to unnecessary friction, and might be outright counter-productive, since they are easy to misuse in unskillful ways. The "it doesn't affect fundamental things, we need to be holistic" line is a common critique of CBT therapy as well, and I always find it ironic, because that critique assumes it's possible to shift actions without affecting personality - itself a non-holistic perspective on the psyche. Hoisted by their own petard.
I've fixed the spelling, thanks for the correction
Something in me doesn't like putting love <-> disgust as antonyms.
Love, to me, can be abstracted to prioritizing the utility of others without regard for your own (at least the agape kind of love). I'd put its antonym as exploitation.
Disgust, to me, is about seeing something as lower/unclean. To me, the antonym of disgust is reverence.
I think this is a bit too diffuse to actually have correct answers. But I like playing with concepts (programmer), so thanks for the game.
Regarding time inconsistency of rewards, where subjects displayed a "today-bias", might this be explained by shards formed in relation to "payout-day" (getting pocket money or salary)? For many people, agency and well-being vary over the month, peaking on the day of their monthly payout. It makes sense to me that these variations create a shard that values getting paid TODAY rather than tomorrow.
For the 365 vs 366 example, I would assume that the selection is handled more rationally, optimizing for the expected return.
Tasker is great in general; I've integrated it with my todo list using Todoist's REST API, which works great.
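For anyone curious what that integration looks like on the API side, here's a rough sketch of building the HTTP request that creates a Todoist task. The endpoint and field names follow Todoist's REST API (v2 at the time of writing); double-check them against the current docs, and note the function name and token are my own placeholders.

```python
import json
import urllib.request

TODOIST_TASKS_URL = "https://api.todoist.com/rest/v2/tasks"

def build_task_request(api_token, content, due_string=None):
    """Build a POST request that creates a Todoist task.

    The request is returned unsent, so callers (e.g. a Tasker HTTP
    action or a script) decide when and how to fire it.
    """
    payload = {"content": content}
    if due_string:
        # Todoist parses natural-language dates, e.g. "today", "every monday"
        payload["due_string"] = due_string
    return urllib.request.Request(
        TODOIST_TASKS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

From Tasker's side, the equivalent is an "HTTP Request" action with the same URL, headers, and JSON body.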
As for sourcing triggers:
The only general way I can think of is a personal assistant (or some kind of service that provides the same kind of human assistance).
Otherwise, maybe figure out a couple of domain-specific trigger-sourcing methods. If those cover websites, you've covered most online things.
For covering non-online things, maybe you can find an API, use some kind of oracle service or similar.
Do you have an example of something you struggle with?
I haven't tried it myself, but would something like this do the trick?
Does anyone know about an addon to filter facebook notifications? I want to know about comments, but not reactions/likes