I mostly got this from Nozick's final book Invariances
A type of belief that resists updating is one that discourages you from talking about it with others.
Every subculture I've participated in has lowkey bad actors. The harms this causes are underrated imo.
Deck-builder roguelites, maybe. Slay the Spire, Dicey Dungeons, etc.
Might want to mention that Kelly is an upper limit, and that most will then reduce from this on the basis of multiplying by some confidence function.
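A minimal sketch of that adjustment, assuming a simple bet paying b:1 (function names and the 0.5 default are my illustration, not a standard library):

```python
def kelly_fraction(p, b):
    """Full Kelly bet fraction for a wager paying b:1 with win probability p.

    Treat this as an upper limit on sane position sizing, not a target.
    """
    return p - (1 - p) / b

def scaled_kelly(p, b, confidence=0.5):
    # In practice most bettors shrink full Kelly by some confidence
    # factor (e.g. "half Kelly") to account for estimation error in p,
    # and never bet when the edge is negative.
    return max(0.0, confidence * kelly_fraction(p, b))
```

For example, p = 0.6 on an even-money bet gives a full-Kelly limit of 0.2 of bankroll; half-Kelly then bets 0.1.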
Theories are invariants. Invariants screen off large numbers of contingent facts. That's why we have reference classes. A reference class is a collection of contingent factors such that we expect an invariant to hold, or know exactly* which contingent factors are present in which amounts such that we can correct for their contribution such that the remaining invariant holds.
*in practice you know this with some noise, even up to a large amount, what matters is that you can then propagate this through the model correctly such that you know how much noise your resultant answers are also subject to.
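One way to cash out "propagate the noise through the model" is a Monte Carlo sketch (my illustration, assuming Gaussian input noise):

```python
import random

def propagate_noise(model, means, sds, n=20000, seed=0):
    """Sample noisy inputs, push them through the model, and report
    the mean and standard deviation of the resulting answers."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        sample = [rng.gauss(m, s) for m, s in zip(means, sds)]
        outputs.append(model(sample))
    mean = sum(outputs) / n
    sd = (sum((o - mean) ** 2 for o in outputs) / n) ** 0.5
    return mean, sd
```

Summing two inputs that each carry a standard deviation of 1 yields an answer with a standard deviation near sqrt(2); the point is that you then know how much noise your resultant answer is subject to.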
I don't expect to be able to explain this to students.
Conspiracy theories are a bad reference class due to the lumping together of real actions by nation-states with crackpot schizophrenic fantasies. This was intentional and you shouldn't buy into it.
specifically documented? No. I think some of the obvious examples are in things like batteries, materials science, and computer parts, where there are strong IP fights. E.g., AMD, Intel, Nvidia, and ARM all license some but not all of their core tech to their rivals. I'm actually pretty confused about how they determine when to do this vs not, but would guess that this is at least somewhat inefficient by over-optimizing on short-term gains vs the more nebulous future payoffs of what would be enabled with more licenses.
The cleanest example is during Raven's matrices testing: noticing that checking a particular set of hypotheses one by one is taking too long, zooming out to see them as a class of hypotheses with something in common, and then asking what else is possible. If the different moving parts of the puzzle are slot machines, then it's an explore/exploit problem.
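If you literally treat the puzzle's moving parts as slot machines, the standard tool is a bandit strategy; an epsilon-greedy sketch (arm payouts and parameters here are made up for illustration):

```python
import random

def epsilon_greedy(arms, pulls=1000, eps=0.1, seed=0):
    # arms: hidden win probabilities of each 'slot machine'
    rng = random.Random(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)
    for _ in range(pulls):
        if rng.random() < eps:
            i = rng.randrange(len(arms))                        # explore
        else:
            i = max(range(len(arms)), key=lambda j: values[j])  # exploit
        reward = 1.0 if rng.random() < arms[i] else 0.0
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]  # running mean
    return counts
```

After a modest exploration budget, pulls concentrate on the better arm; the analogy is spending a few probes across hypothesis classes before committing your remaining time to the most promising one.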
One of the things that helped a lot with the predictions part was reading Judea Pearl's Heuristics. It seemed to make me better at noticing that a big part of my problem solving was split into two things: my representation of the problem space, and then my traversal of that space. I would notice more readily when I had stuck myself with an intractably sized space for the traversal speed available, and conclude that I needed to switch to trying to find a different representation that was tractable. Others might get very different insights out of the book, the search-inference framework is pretty flexible (also covered in Baron's Thinking and Deciding).
What is most upstream of good cognitive strategies that lead to useful behaviors? What is most upstream of bad cognitive strategies that lead to maladaptive behaviors?
I want a smarter and longer lived population, and reject that playing slots to get more of those with sheer quantity is the only play here.
Another hypothesis: the moment of compression feels amazing, because you need to deeply understand something about a phenomenon to compress it. The zip file feels mundane and doesn't include the insight of building new frontiers in your compression library.
assumption of independence
Companies, in service to the liability monster, try to reduce complaints, as many enforcement mechanisms (such as the FDA) are largely complaint based. This situation is generating complaints, but no plausible mechanism by which those complaints will turn into lost money so far.
Interesting that conjunctive fallacy is a broadly used term but disjunctive fallacy is not.
Is there a name for these sorts of errors of conjunction and disjunction in super high dimension parameter spaces? I usually just refer to it as 'cold reading yourself.'
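The asymmetry is easy to see with two hypothetical independent events (numbers mine):

```python
p_a, p_b = 0.7, 0.6  # two hypothetical independent events

p_and = p_a * p_b            # conjunction: 0.42, can't exceed either event
p_or = p_a + p_b - p_and     # disjunction: 0.88, can't fall below either event

# The conjunction fallacy is judging P(A and B) > P(A).
# The mirror-image error, underrating P(A or B), has no common name,
# even though stacking many disjuncts drives the total toward 1.
assert p_and <= min(p_a, p_b)
assert p_or >= max(p_a, p_b)
```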
much lower weight
It's mostly the same; just the exercise selection has changed slightly.
upper body push: dumbbell standing press, incline press, pushups, push press
upper body pull: dumbbell row, chinups
lower body push: step-ups (full ROM), front squat
lower body pull: RDL, hyperextensions, one legged bridges
accessory: hip abductor with band, face pull, body saws
I do one or two from each depending on mood. Usually 3x8-12.
I think you're systematically failing to appreciate how things that have been optimized to look good to you can predictably behave differently in domains where they haven't been optimized to look good to you—
My girlfriend's cat doesn't appear cute to me because I'm not the one feeding it. From the outside, it's really obvious that the cat performs experiments to see what elicits desired behaviors. If I started feeding it, it would start trying to optimize at me too. If you generalize house cats, you get torture of billions of sentient creatures.
I'd chunk part of this as: most people (including me, often) don't habitually condition on their own thoughts/beliefs/actions and then query their own expectations for the most obvious results of those, and are often surprised when the most obvious results are something they don't want. This reliably fails to induce the meta update that:
1. This information is (apparently counterintuitively) available.
2. I should engage with this information more often.
3. Given 2, making upstream changes that make 2 more likely is higher impact than lots of object-level changes and should be prioritized as such.
4. Given the past failure of 1, 2, 3 (the meta update), I should try to figure out what's going on and get even more upstream of useful patterns. This would be worth at least tens of hours of investigation on expectation.
For myself, the thing that finally clicked in this area was, for whatever reason (and I don't know if it will work for anyone else), noticing mental operations that can apply to themselves. The specific operation that happened was applying OODA loops to the concept of OODA loops.
I'd also broadly say of the meta update: I think human intuitions aren't fine tuned for happening to be in a +2-3sd brain loadout, so they aren't very good at cueing us to actually use those features reliably.
One might preface the OODA loop with 'actually' or otherwise indicate that there is a break-downable skill in the steps i.e.
Actually Observing (noticing)
Actually Orienting (causal reasoning)
Actually Deciding (emotional processing and logistics)
Actually Acting (execution reps)
to which we might also add, Actually Resting or otherwise creating the slack in which the above can operate.
Played around with it a bit this past week. Main problem is I can't find much to bet on. The markets that will close in 2030 etc are profoundly uninteresting and the short term stuff is mostly in the genre of various horse races (sports, politics, celebrities, crypto prices).
I guess I agree with Tetlock that the main hurdle for these is the lack of interesting questions. One obvious hypothesis is that short term alpha is almost always kept quiet, so it wouldn't be trickling into a public market.
Maybe the person hired needs to have good scores on a prediction market such that people trust them to be well calibrated.
I don't think anyone has any idea on how you would reshape the guts of a human mind to change it to another personality cluster.
Actors do. I have an actor friend who thinks the socionics people were onto something. Their work is based on the more complex model of personality that Jung abandoned because people kept doing dumb things with it due to language confusions, culminating in the famous Myers-Briggs.
I'll add: pursue topic areas that are weirdly explanatory across seemingly disparate things, like the various branches of math and computation.
His verbal patterns have my scammer hackles up.
Organizational capture by sociopaths seems like just as bad a problem.
When traveling, I use a folding Nexstand to elevate my laptop above a compact keyboard and mouse. It works quite a bit better for productivity with a 16-inch laptop than using the laptop regularly. I'm thinking of trying a vertical stand next, to move the screen closer to me.
After several more years of experiments, I was able to run up to 10k with softer shoes and by attending closely to how I was slightly tensing my feet, which had a cumulative effect over the distance.
Yeah, so if I'm dealing with something that primarily manifests as mental events, internal talk, weird belief structures, emotional reactions, etc., I'll tend to work on that with Core Transformation. If what I'm noticing seems to be mostly behavioral with only a minor mental component (I might have emotional reactions about the behavior, but it feels like the behavior is primary), I'll tend to use ACT concepts. I might switch from one to the other in the process of 'rolling out' any particular thing (behaviors and cognition are entangled, after all), but that's where I tend to start.
I'll add: analyzing a codebase in a programming language that isn't type safe and in which the people who made the code base have intentionally overloaded the types all over the place is much much harder than analyzing type safe stuff. This directly happens in language as well.
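A toy illustration of the kind of intentional overloading that defeats analysis (hypothetical function, not from any real codebase):

```python
def lookup(key):
    # Deliberately overloaded: the return type depends on the runtime
    # type of 'key', so neither a type checker nor a reader can know
    # the result's shape without tracing every call site.
    if isinstance(key, int):
        return {"id": key}   # dict for numeric ids
    if isinstance(key, str):
        return [key]         # list for string queries
    return None              # and sometimes nothing at all

# Every caller now needs three branches just to consume the result,
# and the ambiguity compounds through the call graph.
```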
I make high protein pancakes like the following:
3-4 eggs
1 cup cottage cheese
1 banana
40g oats
1 tablespoon pancake mix
baking soda
vanilla or almond extract
1 scoop of vanilla, strawberry, banana, or chocolate whey to preference
Texture comes out amazing
For dessert I similarly do a high protein ice cream substitute
1 cup full fat greek yogurt
1/2 scoop strawberry or banana whey
1/2 cup frozen blueberries
The texture thickens up both from the whey and the cold of the blueberries and comes out like frozen yogurt. Zero added sugar, and it has as much protein as a cheeseburger.
Review Voting page.
This links to the 2020 review.
I think a lot of things that go under the category of agency are a collection of other things. Usually they involve higher task saliency in the face of distraction. This leads to a bunch of downstream effects that look like agency, eg not stopping when existing options don't work but instead investigating creating new options.
I'm not a big fan of The Mind Illuminated as it can strengthen various muscles related to fighting with oneself about what one should be doing. A better translation of concentration practice is tranquility/collectedness practice in which the object of focus is something more obviously valuable like peace, wholeness, kindness etc and the difficulties with engaging with the meditation object are slowly deconstructed and incorporated rather than suppressed. TWIM has a useful practice manual on this sort of thing and for a longer take on exactly how integration works I recommend Core Transformation by Connierae Andreas.
Less credence is very different from 'most likely not rational.' We don't know why we have the priors that we do, but many on close examination have useful things to tell us about what is likely to be harmful. I know people who report emotional harm from engaging with the sorts of communities in which cuddle parties are a thing, in ways that were fairly unsurprising.
You have discovered an inhibition that is most likely not rational
I would recommend reconsidering dismissing your intuitions as bullshit.
Media reports on the situation seem highly sculpted by various sides such that I feel low confidence in any understanding on what's happening on the ground. I think this will be increasingly true as memetic tools grow more sophisticated with more actors deploying them.
When I came to the conclusion that prioritization was the highest priority, there was the question of what goes in slots 2 and 3. I knew that I didn't know what would give the highest return there, so I resolved to investigate it by going upstream from known good things, like physical, mental, and financial health.
This is the coolest use I've seen of AI so far.
The movie has a kind of reverence.
I recently watched this excellent comparison[1] of two documentaries made about the same content, and basically agree with the author that reverence for the subject is one of the defining features of good vs bad movies.
it’s also an aesthetic
and an anesthetic
Paralysis via analysis is a big possibility for this audience. It's worth noting the power law: 80 percent of the variance in outcomes is hotness/picture quality, a big chunk of the remainder is, broadly, mental health and values overlap (dealbreakers on long term life plans). Women's rating of male attractiveness varies about twice as much as men's based on controllable factors of good clothes and good grooming.
Attractiveness is also not a uni-dimensional ranking! Ranking of facial differences correlated with genetic distance and immune system compatibility seems to be a thing such that there is arbitrage in the dating market, but you have to expose yourself to a certain number of people to take advantage of it on expectation. Concretely what this means is that there are people out there that others 'rank' a 6 but you'd rank a 7 or even 8, and the same applies to you.
Mental health is a tractable intervention, but like the gym will take about a year to pay serious dividends. For many it's not clear where to start with this, ime the highest impact intervention is finding a self therapy modality that you like, then committing to doing one on one space holding for it with a friend twice a week (once facilitated, once facilitator). Both sides build skills.
This post puts some words to a difficulty I've been having lately with assigning probabilities to things. Thanks a lot for writing it, it's a great example of the sort of distillation and re-distillation that I find most valuable on LW.
Why so much effort on trying to come up with a simple metric that we don't value? I value organisms by something like their capability to steward their own supporting conditions/investigate supporting conditions for complex forms of value creation.
It has now been 4 years since this post, and 3 years since your prediction. Two thoughts:
- prediction markets have really taken off in the past few years and this has been a substantial upgrade in keeping abreast of what's going on in the world.
- the Fosbury Flop of rationality might have already happened: Korzybski's consciousness of abstraction. It's just not used because of the dozens to hundreds of hours it takes.
The most egregious demarcation problem exploit I saw was an ad for a mattress. It featured a woman sleeping with the blanket artfully arranged such that when the ad was viewed in peripheral vision it appeared to be pornographic. I expect more of this optical illusion advertising.
It seems that one of the goals of religion is to put humans in a state of epistemic uncertainty about the payoff structure of their current game. Relatedly, your setup seems to imply that the AI is in a state of very high epistemic certainty.
The classic example is referred to as "disassembling the boat when you're halfway across the river" and refers to breaking down the concepts used in the meditation practice itself prematurely. Another intuition for how this might be a problem is the need for the advice "Do not be too quick to give up your desire for jhana."
Rapid conquests are often examples of warfare tech overhangs IMO.