Anglo armies have been extremely unusual, historically speaking, for their low rates of atrocity.
(I don't think this is super relevant for AI, but I think this is where intuitions about the superiority of the west bottom out)
Training wheels have been replaced with balance bikes for this reason.
I think the major impacts that matter are on war, pandemic risk, and x-risk. I rarely see anyone try to figure those out; perhaps the sign is too uncertain due to complexity.
Type errors:
Map-territory confusion (labels ↔ facts)
Is-ought confusion (fact ↔ value)
Means-ends confusion (value ↔ strategy)
Implementation-classification confusion (strategy ↔ label) eg "if you classify this as an emergency that must mean you support taking immediate action"
Semantic-normative confusion (label ↔ value) eg "if you classify this as art you must think it is valuable"
Empirical-procedural confusion (fact ↔ strategy) eg "recidivism rates are highest among those without stable employment, therefore job training programs are the most important intervention"
it's about training the same muscle groups with lower joint injury risk. eg people do deadlifts with 2x+ bodyweight but RDLs are effective at bodyweight even for strong people.
lately i've been doing one legged leg press for similar reasons, though less time effective.
Prior: physical health and social success
Dating studies causing updates away from that prior: none found
It used to be weird to me how much ink was spilled on twisting the prior into knots, but I eventually realized it was people who don't like it for the obvious reason.
What is a useful prediction that eliminativism makes?
The school I found that seemed most serious (and whose stuff also worked for me) held the position that these things basically don't work for some people unless or until they have certain spontaneous experiences. No one knows what causes them. Some people report that they had the experiences on psychedelics, but no one knows if that's really causal or their propensity to take psychedelics was also caused by this upstream thing. I don't think there's much point in trying to force it, I don't think it works.
Found this interesting and useful. Big update for me is that 'I cut you choose' is basically the property that most (all?) good self therapy modalities use afaict. In that the part or part-coalition running the therapy procedure can offer but not force things, since its frames are subtly biasing the process.
Thanks for the link. I mean that predictions are outputs of a process that includes a representation, so part of what's getting passed back and forth in the diagram are better and worse fit representations. The degrees of freedom point is that we choose very flexible representations, whittle them down with the actual data available, then get surprised that that representation yields other good predictions. But we should expect this if Nature shares any modular structure with our perception at all, which it would if there were both structural reasons (literally same substrate) and evolutionary pressure for representations with good computational properties i.e. simple isomorphisms and compressions.
The two concepts that I thought were missing from Eliezer's technical explanation of technical explanation that would have simplified some of the explanation were compression and degrees of freedom. Degrees of freedom seems very relevant here in terms of how we map between different representations. Why are representations so important for humans? Because they have different computational properties/traversal costs while humans are very computationally limited.
I saw memetic disenfranchisement as a central theme of both.
Two tacit points that seemed to emerge to me:
- Have someone who is ambiently aware and proactively getting info to the right people, or noticing when team members will need info and setting up the scaffolding so that they can consistently get it cheaply and up to date.
- The authority goes all the way up. The locally ambiently aware person has power vested in them by higher ups, meaning that when people drag their feet bc of not liking some of the harsher OODA loops you have backup.
Surprisingly small amounts of money can do useful things IMO. There's lots of talk about billions of dollars flying around, but almost all of it can't structurally be spent on weird things and comes with strings attached that cause the researchers involved to spend significant fractions of their time optimizing to keep those purse strings opened. So you have more leverage here than is perhaps obvious.
My second order advice is to please be careful about getting eaten (memetically) and spend some time on cognitive security. The fact that ~all wealthy people don't do that much interesting stuff with their money implies that the attractors preventing interesting action are very very strong and you shouldn't just assume you're too smart for that. Magic tricks work by violating our intuitions about how much time a person would devote to training a very weird edge case skill or particular trick. Likewise, I think people dramatically underestimate how much their social environment will warp into one that encourages you to be sublimated into the existing wealth hierarchy (the one that seemingly doesn't do much). Specifically, it's easy to attribute substitution yourself from high impact choices to choices where the grantees make you feel high impact. But high impact people don't have the time, talent, or inclination to optimize how you feel.
Since most all of a wealthy person's impact comes mediated through the actions of others, I believe the top skill to cultivate besides cogsec is expert judgement. I'd encourage you to talk through with an LLM some of the top results from research into expert judgement. It's a tricky problem to figure out who to defer to when you are giving out money and hence everyone has an incentive to represent themselves as an expert.
I don't know the details of Tallinn's grant process, but as Tallinn seems to have avoided some of these problems it might be worth taking inspiration from. (SFF, S-Process mentioned elsewhere here).
Not entirely wrong
They're entirely correct. Learning new communication techniques is about what you choose to say, not what other people do.
Red Herring. Quibbling over difficult to detect effects is a waste of time while we're failing to kill those who commit ten+ violent crimes and account for a substantial fraction of all such crime. I don't buy mistake theory on this.
Waistcoat and rolled up sleeves works in many more settings and still looks amazing.
Mixed reports that they have degraded in quality and sometimes misrepresent how thorough their tests are, but still a time saver for finding higher quality options for things you want long service life from, like home appliances.
Book reviews that bring in very substantive content from other relevant books are probably the type of post I find the most consistently valuable.
"0.12% of the population (the most persistent offenders) accounted for 20% of violent crime convictions" https://inquisitivebird.xyz/p/when-few-do-great-harm
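The arithmetic behind that concentration figure, using only the two numbers quoted above, shows how extreme the overrepresentation is (a quick sketch, not from the linked piece):

```python
# How overrepresented are the most persistent offenders?
# Inputs are the two figures quoted from the linked article.
pop_share = 0.0012    # 0.12% of the population
crime_share = 0.20    # 20% of violent crime convictions

factor = crime_share / pop_share
print(round(factor))  # prints 167
```

So that sliver of the population is convicted at roughly 167 times its population share.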
There are the predictable lobbies for increasing the price taxpayers pay for prisoners, but not much advocacy for decreasing it.
Thanks I had wondered about this
Pragmatic note: many of the benefits of polyester (eg activewear wicking) can be had with bamboo sourced rayon. I buy David Archy brand on Amazon.
AI developers heading to work, colorized
successfully bought out
*got paid to remove them as a social threat
For people who want weirder takes I would recommend Egan's "Unstable Orbits in the Space of Lies".
To +1 the rant, my experience across the class spectrum is that many bootstrapped successful people know this but have learned not to talk about it too much as most don't want to hear supporting evidence for meritocracy, it would invalidate their copes.
To my younger self, I would say you'll need to learn to ignore those who would stoke your learned helplessness to excuse their own. I was personally gaslit about important life decisions, not out of malice per se but just this sort of choice-supportive bias, only to discover much later that jumping in on those decisions actually appeared on lists of advice older folks would give to younger.
Notkilleveryonism, why not Omnicidal AI? As in we oppose OAI.
Thank you for writing this. A couple shorthands I keep in my head for aspects:
My confidence interval ranges across the sign flip.
Due to the waluigi effect, I don't know if the outcomes I care about are sensitive to the dimension I'm varying my credence along.
I often feel that people don't get how the sucking up thing works. Not only does it not matter that it is transparent, that is part of the point. There is simultaneously common knowledge of the sucking up and common knowledge that those in the inner party don't acknowledge the sucking up, that's part of what the inner party membership consists of. People outside can accuse the insiders of nakedly sucking up and the insiders can just politely smile at them while carrying on. Sucking up can be what deference networks look like from the outside when we don't particularly like any of the people involved or what they are doing. But their hierarchy visibly produces their own aims, so more fools we.
The corn thresher is not inherently evil. Because it is more efficient than other types of threshers, the humans will inevitably eat corn. If this persists for long enough the humans will be unsurprised to find they have a gut well adapted to corn.
Per Douglas Adams, the puddle concludes that the indentation in which it rests fits it so perfectly that it must have been made for it.
The means by which the ring always serves sauron is that any who wear it and express a desire will have the possible worlds trimmed both in the direction of their desire, but also in the direction of sauron's desire in ways that they cannot see. If this persists long enough they may find they no longer have the sense organs to see (the mouth of sauron is blind).
Some people seem to have more dimensions of moral care than others, it makes one wonder about the past.
These things are similar in shape.
Even a hundred million humanoid robots a year (we currently make 90 million cars a year) will be a demand shock for human labor.
https://benjamintodd.substack.com/p/how-quickly-could-robots-scale-up
No they don't, billionaires consume very little of their net worth.
I am very confused why the tax is 99% in this example.
Post does not include the word auction, which is a key aspect of how LVT works to not have some of these downsides.
Yes, and I don't mean to overstate a case for helplessness. Demons love convincing people that the anti demon button doesn't work so that they never press it even though it is sitting right out in the open.
unfortunately, the disanalogy is that any driver who moves their foot towards the brakes is almost instantly replaced with one who won't.
High variance but there's skew. The ceiling is very high and the downside is just a bit of wasted time that likely would have been wasted anyway. The most valuable alert me to entirely different ways of thinking about problems I've been working on.
no
Both people ideally learn from existing practitioners for a session or two, ideally they also review the written material or in the case of Focusing also try the audiobook. Then they simply try facilitating each other. The facilitator takes brief notes to help keep track of where they are in the other person's stack, but otherwise acts much as eg Gendlin acts in the audiobook.
Probably the most powerful intervention I know of is to trade facilitation of emotional digestion and integration practices with a peer. The modality probably only matters a little, and so should be chosen for what's easiest to learn to facilitate. Focusing is a good start, I also like Core Transformation for going deeper once Focusing skills are good. It's a huge return on ~3 hours per week (90 minutes facilitating and being facilitated, in two sessions) IME.
"What causes your decisions, other than incidentals?"
"My values."
People normally model values as upstream of decisions. Causing decisions. In many cases values are downstream of decisions. I'm wondering who else has talked about this concept. One of the rare cases that the LLM was not helpful.
moral values
Is there a broader term or cluster of concepts for the idea that human values are often downstream of decisions, not upstream, in that the person with the correct values is simply selected based on what decisions they are expected to make (ie election of a CEO by shareholders)? This seems like a crucial understanding in AI acceleration.
I like this! improvement: a lookup chart for lots of base rates of common disasters as an intuition pump?
People inexplicably seem to favor extremely bad leaders-->people seem to inexplicably favor bad AIs.
One of the triggers for getting agitated and repeating oneself more forcefully IME is an underlying fear that they will never get it.
I had first optimism and then sadness as I read the post bc my model is that every donor group is invested in the world where we make liability laundering organizations that make juicy targets for social capture the primary object of philanthropy instead of the actual patronage (funding a person) model. I understand it is about taxes, but my guess is that biting the bullet on taxes probably dominates given various differences. Is anyone working on how to tax efficiently fund individuals via eg trusts, distributed gift giving etc?
Upvotes for trying anything at all of course since that is way above the current bar.
Would be a Whole Thing so perhaps unlikely, but here is something I would use: a bounty system or microtipping system on LW where I can pay people for posts I really like in some visible way, with a percent cut going to LW, plus a way to aggregate bounties for posts people want to see (subject to a vote on whether a post passed the bounty threshold, etc.)