Posts

Time Efficient Resistance Training 2024-10-07T15:15:44.950Z
Some Things That Increase Blood Flow to the Brain 2024-03-27T21:48:46.244Z
Should rationalists be spiritual / Spirituality as overcoming delusion 2024-03-25T16:48:08.397Z
The Handbook of Rationality (2021, MIT press) is now open access 2023-10-10T00:30:05.589Z
My guess for why I was wrong about US housing 2023-06-14T00:37:04.162Z
Updates and Reflections on Optimal Exercise after Nearly a Decade 2023-06-08T23:02:14.761Z
Exploring Tacit Linked Premises with GPT 2023-03-24T18:09:10.087Z
Buddhist Psychotechnology for Withstanding Apocalypse Stress 2023-02-25T03:11:18.735Z
Thoughts on ADHD 2020-10-07T20:46:24.827Z
Your Prioritization is Underspecified 2020-07-10T20:48:29.654Z
PSA: Cars don't have 'blindspots' 2020-07-01T17:04:06.690Z
Which facebook groups on covid do you recommend? 2020-03-23T22:34:15.125Z
How to Lurk Less (and benefit others while benefiting yourself) 2020-02-17T06:18:54.978Z
[Link] Ignorance, a skilled practice 2020-01-31T16:21:23.062Z
Is there a website for tracking fads? 2019-12-06T04:48:51.297Z
Schematic Thinking: heuristic generalization using Korzybski's method 2019-10-14T19:29:14.672Z
Towards an Intentional Research Agenda 2019-08-23T05:27:53.843Z
romeostevensit's Shortform 2019-08-07T16:13:55.144Z
Open problems in human rationality: guesses 2019-08-02T18:16:18.342Z
87,000 Hours or: Thoughts on Home Ownership 2019-07-06T08:01:59.092Z
The Hard Work of Translation (Buddhism) 2019-04-07T21:04:11.353Z
Why do Contemplative Practitioners Make so Many Metaphysical Claims? 2018-12-31T19:44:30.358Z
Psycho-cybernetics: experimental notes 2018-09-18T19:21:03.601Z

Comments

Comment by romeostevensit on Towards a scale-free theory of intelligent agency · 2025-03-22T00:38:03.074Z · LW · GW

Found this interesting and useful. The big update for me is that 'I cut you choose' is basically the property that most (all?) good self-therapy modalities use, afaict: the part or part-coalition running the therapy procedure can offer but not force things, since its frames are subtly biasing the process.

Comment by romeostevensit on Mo Putera's Shortform · 2025-03-21T18:09:58.374Z · LW · GW

Thanks for the link. I mean that predictions are outputs of a process that includes a representation, so part of what's getting passed back and forth in the diagram are better- and worse-fit representations. The degrees-of-freedom point is that we choose very flexible representations, whittle them down with the actual data available, then get surprised that the representation yields other good predictions. But we should expect this if Nature shares any modular structure with our perception at all, which it would if there were both structural reasons (literally the same substrate) and evolutionary pressure for representations with good computational properties, i.e. simple isomorphisms and compressions.

Comment by romeostevensit on Mo Putera's Shortform · 2025-03-20T22:14:22.296Z · LW · GW

The two concepts that I thought were missing from Eliezer's technical explanation of technical explanation, and that would have simplified some of the explanation, were compression and degrees of freedom. Degrees of freedom seems very relevant here in terms of how we map between different representations. Why are representations so important for humans? Because they have different computational properties/traversal costs, while humans are very computationally limited.

Comment by romeostevensit on How AI Takeover Might Happen in 2 Years · 2025-03-20T21:57:03.095Z · LW · GW

I saw memetic disenfranchisement as a central theme of both.

Comment by romeostevensit on How I've run major projects · 2025-03-19T03:02:11.431Z · LW · GW

Two tacit points that seemed to emerge to me:

  1. Have someone who is ambiently aware and proactively getting info to the right people, or noticing when team members will need info and setting up the scaffolding so that they can consistently get it cheaply and up to date.
  2. The authority goes all the way up. The locally ambiently aware person has power vested in them by higher-ups, meaning that when people drag their feet because they don't like some of the harsher OODA loops, you have backup.

Comment by romeostevensit on I make several million dollars per year and have hundreds of thousands of followers—what is the straightest line path to utilizing these resources to reduce existential-level AI threats? · 2025-03-17T16:34:50.137Z · LW · GW

Surprisingly small amounts of money can do useful things IMO. There's lots of talk about billions of dollars flying around, but almost all of it can't structurally be spent on weird things, and it comes with strings attached that cause the researchers involved to spend significant fractions of their time optimizing to keep those purse strings open. So you have more leverage here than is perhaps obvious.

My second-order advice is to please be careful about getting eaten (memetically), and to spend some time on cognitive security. The fact that ~all wealthy people don't do that much interesting stuff with their money implies that the attractors preventing interesting action are very, very strong, and you shouldn't just assume you're too smart for that. Magic tricks work by violating our intuitions about how much time a person would devote to training a very weird edge-case skill or particular trick. Likewise, I think people dramatically underestimate how much their social environment will warp into one that encourages them to be sublimated into the existing wealth hierarchy (the one that seemingly doesn't do much). Specifically, it's easy to attribute-substitute your way from high-impact choices to choices where the grantees make you feel high impact. But high-impact people don't have the time, talent, or inclination to optimize how you feel.

Since almost all of a wealthy person's impact comes mediated through the actions of others, I believe the top skill to cultivate besides cogsec is expert judgement. I'd encourage you to talk through with an LLM some of the top results from research into expert judgement. It's a tricky problem to figure out whom to defer to when you are giving out money, since everyone has an incentive to represent themselves as an expert.

I don't know the details of Tallinn's grant process, but as Tallinn seems to have avoided some of these problems, it might be worth taking inspiration from (SFF and the S-Process, mentioned elsewhere here).

Comment by romeostevensit on Your Communication Preferences Aren’t Law · 2025-03-12T19:43:39.189Z · LW · GW

Not entirely wrong

They're entirely correct. Learning new communication techniques is about what you choose to say, not what other people do.

Comment by romeostevensit on Response to Scott Alexander on Imprisonment · 2025-03-12T16:16:02.299Z · LW · GW

Red herring. Quibbling over difficult-to-detect effects is a waste of time while we're failing to kill those who commit ten+ violent crimes and account for a substantial fraction of all such crime. I don't buy mistake theory on this.

Comment by romeostevensit on You can just wear a suit · 2025-02-27T05:17:29.799Z · LW · GW

Waistcoat and rolled up sleeves works in many more settings and still looks amazing.

Comment by romeostevensit on You should use Consumer Reports · 2025-02-27T05:15:01.469Z · LW · GW

There are mixed reports on how they have degraded in quality and sometimes misrepresented how thorough their tests are, but they're still a time saver for finding higher-quality options for things you want a long service life from, like home appliances.

Comment by romeostevensit on Power Lies Trembling: a three-book review · 2025-02-24T05:25:54.883Z · LW · GW

Book reviews that bring in very substantive content from other relevant books are probably the type of post I find the most consistently valuable.

Comment by romeostevensit on The case for the death penalty · 2025-02-21T22:55:22.513Z · LW · GW

"0.12% of the population (the most persistent offenders) accounted for 20% of violent crime convictions" https://inquisitivebird.xyz/p/when-few-do-great-harm
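A quick back-of-envelope on what that quoted figure implies (a sketch using only the two numbers in the quote above):

```python
# If 0.12% of the population accounts for 20% of violent crime convictions,
# that subgroup's per-capita conviction rate exceeds the population
# average by a large factor.
share_of_population = 0.0012   # 0.12%
share_of_convictions = 0.20    # 20%

overrepresentation = share_of_convictions / share_of_population
print(f"~{overrepresentation:.0f}x the average per-capita conviction rate")  # ~167x
```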

Comment by romeostevensit on The case for the death penalty · 2025-02-21T22:51:02.240Z · LW · GW

There are the predictable lobbies for increasing the price taxpayers pay for prisoners, but not much advocacy for decreasing it.

Comment by romeostevensit on Microplastics: Much Less Than You Wanted To Know · 2025-02-17T10:21:59.101Z · LW · GW

Thanks, I had wondered about this.

Comment by romeostevensit on Microplastics: Much Less Than You Wanted To Know · 2025-02-16T00:06:36.266Z · LW · GW

Pragmatic note: many of the benefits of polyester (eg activewear wicking) can be had with bamboo sourced rayon. I buy David Archy brand on Amazon.

Comment by romeostevensit on The Paris AI Anti-Safety Summit · 2025-02-13T00:31:29.005Z · LW · GW

AI developers heading to work, colorized

https://imgflip.com/i/9k1b0o

Comment by romeostevensit on Eli's shortform feed · 2025-02-11T20:20:32.958Z · LW · GW

successfully bought out

*got paid to remove them as a social threat

Comment by romeostevensit on How AI Takeover Might Happen in 2 Years · 2025-02-08T17:20:54.860Z · LW · GW

For people who want weirder takes, I would recommend Egan's "Unstable Orbits in the Space of Lies."

Comment by romeostevensit on C'mon guys, Deliberate Practice is Real · 2025-02-06T20:41:36.084Z · LW · GW

To +1 the rant: my experience across the class spectrum is that many bootstrapped successful people know this but have learned not to talk about it too much, as most people don't want to hear supporting evidence for meritocracy; it would invalidate their copes.

To my younger self, I would say: you'll need to learn to ignore those who would stoke your learned helplessness to excuse their own. I was personally gaslit about important life decisions, not out of malice per se but just this sort of choice-supportive bias, only to discover much later that jumping in on those decisions actually appeared on lists of advice older folks would give to younger people.

Comment by romeostevensit on The Failed Strategy of Artificial Intelligence Doomers · 2025-01-31T22:35:27.008Z · LW · GW

Instead of notkilleveryoneism, why not Omnicidal AI? As in: we oppose OAI.

Comment by romeostevensit on Should you go with your best guess?: Against precise Bayesianism and related views · 2025-01-28T01:23:11.682Z · LW · GW

Thank you for writing this. A couple shorthands I keep in my head for aspects:

My confidence interval ranges across the sign flip.

Due to the Waluigi effect, I don't know if the outcomes I care about are sensitive to the dimension I'm varying my credence along.

Comment by romeostevensit on Stargate AI-1 · 2025-01-26T10:59:02.446Z · LW · GW

I often feel that people don't get how the sucking up thing works. Not only does it not matter that it is transparent; that is part of the point. There is simultaneously common knowledge of the sucking up and common knowledge that those in the inner party don't acknowledge the sucking up; that's part of what inner-party membership consists of. People outside can accuse the insiders of nakedly sucking up, and the insiders can just politely smile at them while carrying on. Sucking up can be what deference networks look like from the outside when we don't particularly like any of the people involved or what they are doing. But their hierarchy visibly produces their own aims, so more fools we.

Comment by romeostevensit on AI, centralization, and the One Ring · 2025-01-14T18:37:34.897Z · LW · GW

The corn thresher is not inherently evil. Because it is more efficient than other types of threshers, the humans will inevitably eat corn. If this persists for long enough the humans will be unsurprised to find they have a gut well adapted to corn.

Per Douglas Adams, the puddle concludes that the indentation in which it rests fits it so perfectly that it must have been made for it.

The means by which the ring always serves Sauron is that any who wear it and express a desire will have the possible worlds trimmed both in the direction of their desire and in the direction of Sauron's desire, in ways that they cannot see. If this persists long enough they may find they no longer have the sense organs to see (the Mouth of Sauron is blind).

Some people seem to have more dimensions of moral care than others, it makes one wonder about the past.

These things are similar in shape.

Comment by romeostevensit on AGI Will Not Make Labor Worthless · 2025-01-12T15:38:45.111Z · LW · GW

Even a hundred million humanoid robots a year (we currently make 90 million cars a year) will be a demand shock for human labor.

https://benjamintodd.substack.com/p/how-quickly-could-robots-scale-up

Comment by romeostevensit on Oppression and production are competing explanations for wealth inequality. · 2025-01-07T16:30:34.086Z · LW · GW

No, they don't; billionaires consume very little of their net worth.

Comment by romeostevensit on Some arguments against a land value tax · 2024-12-30T06:28:03.993Z · LW · GW

I am very confused why the tax is 99% in this example.

Comment by romeostevensit on Some arguments against a land value tax · 2024-12-29T17:58:45.245Z · LW · GW

The post does not include the word "auction," which is key to how an LVT works and avoids some of these downsides.

Comment by romeostevensit on The Field of AI Alignment: A Postmortem, and What To Do About It · 2024-12-29T09:42:15.319Z · LW · GW

Yes, and I don't mean to overstate a case for helplessness. Demons love convincing people that the anti demon button doesn't work so that they never press it even though it is sitting right out in the open.

Comment by romeostevensit on The Field of AI Alignment: A Postmortem, and What To Do About It · 2024-12-27T07:21:19.329Z · LW · GW

Unfortunately, the disanalogy is that any driver who moves their foot towards the brakes is almost instantly replaced with one who won't.

Comment by romeostevensit on What Have Been Your Most Valuable Casual Conversations At Conferences? · 2024-12-25T09:32:38.363Z · LW · GW

High variance, but there's skew: the ceiling is very high, and the downside is just a bit of wasted time that likely would have been wasted anyway. The most valuable conversations alert me to entirely different ways of thinking about problems I've been working on.

Comment by romeostevensit on Hire (or Become) a Thinking Assistant · 2024-12-24T03:33:25.206Z · LW · GW

No.

Comment by romeostevensit on Hire (or Become) a Thinking Assistant · 2024-12-23T19:20:20.531Z · LW · GW

Both people ideally learn from existing practitioners for a session or two; ideally they also review the written material, or in the case of Focusing also try the audiobook. Then they simply try facilitating each other. The facilitator takes brief notes to help keep track of where they are in the other person's stack, but otherwise acts much as, e.g., Gendlin acts in the audiobook.

Comment by romeostevensit on Hire (or Become) a Thinking Assistant · 2024-12-23T18:51:36.085Z · LW · GW

Probably the most powerful intervention I know of is to trade facilitation of emotional digestion and integration practices with a peer. The modality probably only matters a little, and so should be chosen for what's easiest to learn to facilitate. Focusing is a good start; I also like Core Transformation for going deeper once Focusing skills are good. It's a huge return on ~3 hours per week (90 minutes facilitating and 90 minutes being facilitated, in two sessions) IME.

Comment by romeostevensit on romeostevensit's Shortform · 2024-12-23T07:38:48.675Z · LW · GW

"What causes your decisions, other than incidentals?"

"My values."

Comment by romeostevensit on romeostevensit's Shortform · 2024-12-23T02:51:37.900Z · LW · GW

People normally model values as upstream of decisions, causing them. But in many cases values are downstream of decisions. I'm wondering who else has talked about this concept. One of the rare cases where the LLM was not helpful.

Comment by romeostevensit on romeostevensit's Shortform · 2024-12-23T00:43:59.925Z · LW · GW

moral values

Comment by romeostevensit on romeostevensit's Shortform · 2024-12-22T22:58:45.206Z · LW · GW

Is there a broader term, or cluster of concepts, within which is situated the idea that human values are often downstream of decisions rather than upstream, in that the person with the correct values will simply be selected based on what decisions they are expected to make (e.g., the election of a CEO by shareholders)? This seems like a crucial understanding in AI acceleration.
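A toy illustration of the selection mechanism described above (the names, value dimensions, and numbers are all hypothetical, purely to make the point concrete):

```python
# Sketch: shareholders pick the CEO whose value vector best predicts the
# decisions they want. The "values" of whoever ends up in charge are thus
# selected by, i.e. downstream of, the decisions the selectors wanted.
candidates = {
    "alice": {"growth": 0.9, "safety": 0.1},
    "bob":   {"growth": 0.4, "safety": 0.6},
    "carol": {"growth": 0.7, "safety": 0.3},
}
shareholder_preference = {"growth": 1.0, "safety": 0.0}

def expected_alignment(values, preference):
    # Decisions are assumed to track values linearly in this sketch.
    return sum(values[k] * preference[k] for k in preference)

ceo = max(candidates,
          key=lambda name: expected_alignment(candidates[name],
                                              shareholder_preference))
print(ceo)  # alice: the candidate whose values best predict the desired decisions
```

Whatever values "alice" holds, she holds the office because those values predicted the decisions the selectors already wanted; the causal arrow runs from desired decisions to installed values.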

Comment by romeostevensit on When Is Insurance Worth It? · 2024-12-21T01:53:09.409Z · LW · GW

I like this! An improvement: a lookup chart of base rates for lots of common disasters, as an intuition pump?

Comment by romeostevensit on gwern's Shortform · 2024-12-13T11:17:29.067Z · LW · GW

People inexplicably seem to favor extremely bad leaders-->people seem to inexplicably favor bad AIs.

Comment by romeostevensit on Subskills of "Listening to Wisdom" · 2024-12-11T02:02:26.250Z · LW · GW

One of the triggers for getting agitated and repeating oneself more forcefully IME is an underlying fear that they will never get it.

Comment by romeostevensit on The Dream Machine · 2024-12-06T02:31:12.417Z · LW · GW

I had first optimism and then sadness as I read the post, because my model is that every donor group is invested in the world where we make liability-laundering organizations, which make juicy targets for social capture, the primary object of philanthropy instead of the actual patronage (funding a person) model. I understand it is about taxes, but my guess is that biting the bullet on taxes probably dominates given various differences. Is anyone working on how to tax-efficiently fund individuals via e.g. trusts, distributed gift giving, etc.?

Upvotes for trying anything at all of course since that is way above the current bar.

Comment by romeostevensit on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-06T02:23:12.840Z · LW · GW

It would be a Whole Thing, so perhaps unlikely, but here is something I would use: a bounty/microtipping system on LW where I can both pay people for posts I really like in some visible way, with a percentage cut going to LW, and aggregate bounties for posts people want to see (subject to a vote on whether a post passed the bounty threshold, etc.).

Comment by romeostevensit on romeostevensit's Shortform · 2024-12-04T20:57:18.939Z · LW · GW

Just the general crypto cycle continuing onwards since then (2018). The idea being that it was still possible to get in at 5% of current prices at around the time the autopsy was written.

Comment by romeostevensit on romeostevensit's Shortform · 2024-12-04T18:36:59.132Z · LW · GW

We seem to be closing in on needing a LessWrong crypto autopsy autopsy: continued failure of first-principles reasoning because we're blinded by the speculative frenzies that happen to accompany it.

Comment by romeostevensit on Which Biases are most important to Overcome? · 2024-12-04T09:29:59.384Z · LW · GW

Is-ought confabulation

Means-ends confabulation

Scope sensitivity

Fundamental attribution error

Attribute substitution

Ambiguity aversion

Reasoning from consequences

Comment by romeostevensit on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T11:07:48.208Z · LW · GW

Recurring option at the main donation link?

Comment by romeostevensit on Which things were you surprised to learn are not metaphors? · 2024-11-24T07:20:34.686Z · LW · GW

+1. It took a while as a child before I came to understand that reading a book and watching a movie were meaningfully different for some people.

Comment by romeostevensit on Time Efficient Resistance Training · 2024-11-24T01:34:35.927Z · LW · GW

Pretty small; hard to quantify, but I'd guess under 20% and perhaps under 10%.

A lot of stuff turns out to hinge on effort. One of the reasons that strength programs work better than generic exercise routines is that with higher reps it's easy to 'tire yourself out' at a level that doesn't actually drive that much adaptation. Think of those fitness classes with weights: decent cardio, but they don't gain much strength.

Comment by romeostevensit on What are the good rationality films? · 2024-11-20T22:37:21.648Z · LW · GW

Twisted: The Untold Story of a Royal Vizier isn't really rational, but it is rat-adjacent and funny about it. Available to watch on YouTube, though the video quality isn't fantastic.

Comment by romeostevensit on Monthly Roundup #24: November 2024 · 2024-11-19T10:09:34.460Z · LW · GW

What technologies like BBQ are we missing?

It's also my litmus test for community: if a group can't succeed at casual BBQs at all, or has them but they have to be a big production, I am more wary.