no
Both people ideally learn from existing practitioners for a session or two, and ideally also review the written material (or, in the case of Focusing, try the audiobook). Then they simply try facilitating each other. The facilitator takes brief notes to help keep track of where they are in the other person's stack, but otherwise acts much as e.g. Gendlin acts in the audiobook.
Probably the most powerful intervention I know of is to trade facilitation of emotional digestion and integration practices with a peer. The modality probably only matters a little, and so should be chosen for what's easiest to learn to facilitate. Focusing is a good start; I also like Core Transformation for going deeper once Focusing skills are good. It's a huge return on ~3 hours per week (90 minutes each of facilitating and being facilitated, split across two sessions) IME.
"What causes your decisions, other than incidentals?"
"My values."
People normally model values as upstream of decisions, i.e. causing them. In many cases values are instead downstream of decisions. I'm wondering who else has talked about this concept; this was one of the rare cases where the LLM was not helpful.
moral values
Is there a broader term, or cluster of concepts, within which is situated the idea that human values are often downstream of decisions, not upstream? That is, the person with the "correct" values will simply be selected based on what decisions they are expected to make (e.g. the election of a CEO by shareholders). This seems like a crucial understanding in AI acceleration.
I like this! A possible improvement: a lookup chart for the base rates of lots of common disasters, as an intuition pump?
People inexplicably seem to favor extremely bad leaders-->people seem to inexplicably favor bad AIs.
One of the triggers for getting agitated and repeating oneself more forcefully IME is an underlying fear that they will never get it.
I felt first optimism and then sadness as I read the post, because my model is that every donor group is invested in a world where liability-laundering organizations, which make juicy targets for social capture, are the primary object of philanthropy, instead of the actual patronage model (funding a person). I understand it is about taxes, but my guess is that biting the bullet on taxes probably dominates, given various differences. Is anyone working out how to tax-efficiently fund individuals via e.g. trusts, distributed gift giving, etc.?
Upvotes for trying anything at all of course since that is way above the current bar.
It would be a Whole Thing, so perhaps unlikely, but here is something I would use: a bounty and microtipping system on LW, where I can pay people for posts I really like in some visible way (with a percentage cut going to LW), plus a way to aggregate bounties for posts people want to see (subject to a vote on whether a post passed the bounty threshold, etc.).
Just the general crypto cycle continuing onwards since then (2018). The idea being it was still possible to get in at 5% of current prices at around the time the autopsy was written.
We seem to be closing in on needing a LessWrong crypto autopsy autopsy: continued failure of first-principles reasoning because we are blinded by the speculative frenzies that happen to accompany it.
Is-ought confabulation
Means-ends confabulation
Scope sensitivity
Fundamental attribution error
Attribute substitution
Ambiguity aversion
Reasoning from consequences
Recurring option at the main donation link?
+1 it took a while as a child before I came to understand that reading a book and watching a movie were meaningfully different for some people.
Pretty small; hard to quantify, but I'd guess under 20% and perhaps under 10%.
A lot of stuff turns out to hinge on effort. One of the reasons strength programs work better than generic exercise routines is that with higher reps it's easy to "tire yourself out" at a level that doesn't actually drive much adaptation. Think of those fitness classes with weights: decent cardio, but participants don't gain much strength.
Twisted: The Untold Story of a Royal Vizier isn't really rational fiction, but it is rat-adjacent and funny about it. Available to watch on YouTube, though the video quality isn't fantastic.
What technologies like the BBQ are we missing?
It's also my litmus test for community: if a group can't succeed at casual BBQs at all, or has them but only as a big production, I am more wary.
Many people have no context in their lives where they can get feedback on socially undesirable ideas from thoughtful people, so that they could potentially update them. E.g. you hear a socially undesirable claim online that you suspect has some truth to it, but you can't have any reasonable discussion about which aspects might be true, which might be false, or, even among the more-true parts, how to navigate holding that belief or what a wholesome framework for working with it would be, because there is no feedback.
I'll give an egregious example. At one time, iodizing salt in developing countries was opposed by some NGOs on the grounds that the argument that it raised IQ was some sort of fake racist thing. A person in that environment might have wanted to discuss things in a safer space than whatever environment produced that insanity.
Thanks for writing this, I indeed felt that the arguments were significantly easier to follow than previous efforts.
My personal experience was that Superintelligence made it harder to think clearly about AI, by making lots of distinctions and few claims.
Thank you!
Ironically, I do not know to whom to attribute the notion that "all problems are credit assignment problems."
I've read leaked emails from people in similar situations before that made a couple things apparent:
- Power talk happens on the phone for paper trail reasons
- There is no meeting where an actual rational discussion of considerations and theories of change happens; everything really is people flying by the seat of their pants, even at the highest levels. Talk of ethics usually just gets you excluded from the power talk.
I concluded this from the lack of any such talk in meeting minutes that are recorded, and the lack of any reference to such considerations in 'previous conversations' or requests to set up such meetings.
This elides the original argument by assuming the conclusion: that countermeasures remain cheap relative to the innovations. But the whole point is that significant shifts in the cost of a given level of defense can substantially change behaviors, and change which plans and supply chains are economically defensible.
Relatedly: people often discount improvements with large startup costs, even when those costs are one-time costs for an ongoing benefit. One of the worst cases is something one is definitely going to do eventually, where delaying the startup cost simply shrinks the window over which the diffuse benefits accrue. Exercise and learning to cook are like this.
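The arithmetic here can be made concrete with a toy sketch (all numbers and names below are made up for illustration): under a fixed horizon, delaying a one-time startup cost leaves the cost unchanged while shrinking the window over which the recurring benefit accrues.

```python
def net_benefit(startup_cost, monthly_benefit, months_total, delay_months):
    """Net value of adopting a habit after delay_months, over a fixed horizon.

    The startup cost is paid once; the benefit accrues monthly for the
    remaining months of the horizon.
    """
    active_months = max(0, months_total - delay_months)
    return monthly_benefit * active_months - startup_cost if active_months else 0.0

# e.g. learning to cook: a 40-unit one-time setup cost, 10 units/month of benefit
now = net_benefit(startup_cost=40, monthly_benefit=10, months_total=24, delay_months=0)
later = net_benefit(startup_cost=40, monthly_benefit=10, months_total=24, delay_months=12)
print(now, later)  # the cost is identical in both cases; only the benefit window shrinks
```

With these made-up numbers, starting now nets 200 while waiting a year nets 80: the delay cost comes entirely out of the benefit side.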
One operationalization is splitting out positive and negative predictions/models in all three questions (or cost benefit etc).
When you're stuck at the bottom of an attractor, a hard kick to somewhere else can be good enough, even with unknown side effects.
I have attempted to communicate to ultra-high-net-worth individuals, seemingly with little success so far, that given the reality of limited personal bandwidth, with over 99% of their influence and decision-making typically mediated through others, it's essential to refine the ability to identify trustworthy advisors in each domain. Expert judgment is an active field of research with valuable, actionable insights.
It's worth noting that many therapists break the therapeutic alliance for ideological or liability reasons; this is one of the reasons that self-therapy, peer therapy, LLMs, and workbooks can sometimes be better.
Agree with the approach, with the caveat that some people in group 2 are naive cooperators, and therefore second-order defectors, since they are suckers for group 1. E.g. the person who will tell the truth to the Nazis out of mistaken theories of ethics, or out of mere behavioral conditioning.
I was reading this earlier and it dovetails very well with this post. Framing defending yourself against hostile people and processes as primarily selfish itself serves the hostile.
'In essence, it is viewed as a form of adhamma (not-Dhamma) or misconduct to teach someone who is uninterested or unreceptive, since doing so does not respect the individual's disposition and may lead to misunderstanding or conflict rather than enlightenment.' (commentary on Akkosa Sutta (SN 7.2))
Yes, though this often involves some self-deception about your true utility function. I suspect that some ace people did this to themselves to avoid zero-sum competition they expected to painfully lose.
I can secondhand lend some affirmation to the Newcomb case. A friend with DID, from a childhood with a BPD mom, later became a meditator and eventually rendered transparent the shell game that had been played with potentially dangerous preferences and goals to keep them out of consciousness, since the mom was extremely good at telepathy and was hostile for the standard BPD reason: other beings with other goals are inherently threatening to their extremely fragile sense of their own preferences and goals.
Another solution is to make your preferences illegible or orthogonal to the hostile telepath, so that you don't overlap with them in anything they might care about or overpower you with. I think this is one of the things to consider when thinking about rationalist avoidance of conflict theory.
I think this is an important topic and am glad to see substantial scholarship efforts on it.
Wrt AI relevance: I think the meme that it matters a lot who builds the self activating doomsday device has done potentially quite a bit of harm and may be a main contributor to what kills us.
Wrt people detecting these traits: I personally feel that the self domestication of humans has made us easier targets for such people, and undermined our ability to even think of doing anything about them. I don't think this is entirely random.
I propose a new term, "gas bubble," to describe the spate of scams we're about to see. It's a combination of "gaslighting" and "filter bubble."
Like the calibration game, but for a variety of decision problems: the person has to assign probabilities to things at different stages, based on what information is available. Afterwards they get an example Brier score, compared against the average of what people with good prediction track records assigned at each phase.
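For concreteness, here is a minimal sketch of the per-stage scoring such a game could use; the function names and numbers are illustrative assumptions, not an existing tool. Each forecast is scored with the Brier score (squared error against the 0/1 outcome) and compared to the average forecast of good predictors at the same stage.

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

def stage_scores(player_forecasts, expert_forecasts, outcome):
    """Pair the player's Brier score with the expert-average Brier score at each stage."""
    return [
        (brier_score(p, outcome), brier_score(e, outcome))
        for p, e in zip(player_forecasts, expert_forecasts)
    ]

# A player revises p(event) as information arrives across three stages; the event occurs.
player = [0.5, 0.7, 0.9]    # player's forecast at each stage
experts = [0.6, 0.8, 0.95]  # hypothetical average of good forecasters at each stage
for i, (p, e) in enumerate(stage_scores(player, experts, outcome=1)):
    print(f"stage {i}: player {p:.3f} vs experts {e:.3f}")
```

The comparison at each stage, rather than only at the end, is what gives feedback on how well the player extracted the information available so far.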
I've thought about this for a long time and I think one of the big issues is lack of labelled training data in many domains. E.g. people made calibration toys and that helped a lot for that particular dimension. Ditto the tests on which studies replicated. In many cases we'd want more complex blinded data for people to practice on, and that requires, like in games, someone to set up all the non-fun backend for them.
I tried them for a while and was unimpressed. Plus some need to be loaded quite heavy, risking injury.
Reasonable
Thanks for the details! One of the findings of exercise studies is that you still get a lot of benefits not going to failure.
"Thou shalt have no other Schelling points before me" is a pretty strong attractor for (at least naive) coordination tech.
without a principled distinction between credences that are derived from deep, rigorous models of the world, and credences that come from vague speculation
Double counting issues here as well, in communities.
Thanks, I wrote it and found the process of recording my thoughts and organizing them to be helpful.
'these practices grant unmediated access to reality' sounds like a metaphysical claim. The Buddha's take on his system's relevance to metaphysics seems pretty consistently deflationary to me.
You mention 'warp' when talking about cross ontology mapping which seems like your best summary of a complicated intuition. I'd be curious to hear more (I recognize this might not be practical). My own intuition surfaced 'introducing degrees of freedom' a la indeterminacy of translation.
Found a great example of something needing improvement: https://en.wikipedia.org/wiki/Models_of_scientific_inquiry