Thoughts on ADHD 2020-10-07T20:46:24.827Z
Your Prioritization is Underspecified 2020-07-10T20:48:29.654Z
PSA: Cars don't have 'blindspots' 2020-07-01T17:04:06.690Z
Which facebook groups on covid do you recommend? 2020-03-23T22:34:15.125Z
How to Lurk Less (and benefit others while benefiting yourself) 2020-02-17T06:18:54.978Z
[Link] Ignorance, a skilled practice 2020-01-31T16:21:23.062Z
Is there a website for tracking fads? 2019-12-06T04:48:51.297Z
Schematic Thinking: heuristic generalization using Korzybski's method 2019-10-14T19:29:14.672Z
Towards an Intentional Research Agenda 2019-08-23T05:27:53.843Z
romeostevensit's Shortform 2019-08-07T16:13:55.144Z
Open problems in human rationality: guesses 2019-08-02T18:16:18.342Z
87,000 Hours or: Thoughts on Home Ownership 2019-07-06T08:01:59.092Z
The Hard Work of Translation (Buddhism) 2019-04-07T21:04:11.353Z
Why do Contemplative Practitioners Make so Many Metaphysical Claims? 2018-12-31T19:44:30.358Z
Psycho-cybernetics: experimental notes 2018-09-18T19:21:03.601Z


Comment by romeostevensit on How's it going with the Universal Cultural Takeover? Part II · 2021-09-25T18:00:52.467Z · LW · GW

One thing I've noticed is that when companies are focused on getting new customers, they often improve the product in ways that make it meet customers' needs better, which helps attract those new customers. In contrast, when companies switch to extracting more value from each existing customer, they generally introduce a lot of on-their-face anti-customer quality-of-life changes that drive up short-term engagement. This may be a significant part of the story of company churn. Once a company is in the 'exploit existing customer base' phase, it is probably nigh impossible to go back, and it is on a slope that orients it increasingly toward its least discerning, most easily exploited customers.

Comment by romeostevensit on Three enigmas at the heart of our reasoning · 2021-09-24T17:40:49.331Z · LW · GW

The universalizability of compressions, in light of their being bound to intentionality on the part of the one doing the compressing. The closest we get to universal compressions is when the intent is more upstream of other intents, like survival and reproduction.

Comment by romeostevensit on Three enigmas at the heart of our reasoning · 2021-09-22T17:17:12.999Z · LW · GW

You might enjoy Nozick's Invariances, which takes a similar approach to the is-ought problem in claiming that the ontological assumptions of the problem as stated are incoherent. We don't have firm Is's and firm Oughts that we need to bridge. We already are the bridge (of Theseus), one end of which is built from heuristics that return Is-like answers, and the other end of which is built from heuristics that return Ought-like answers.

I believe Nozick was partially responding to The View from Nowhere.

Comment by romeostevensit on Three enigmas at the heart of our reasoning · 2021-09-22T17:15:16.339Z · LW · GW

Wanting compressions to be universalizable makes sense: it would be an additional compression bonus to be able to throw out all the contextual data about when a compression isn't a good fit for part of reality. I think it's mostly incoherent as a principle, even though as a process it can be a good intuition to follow (E=mc^2 and natural selection sure are useful). We don't actually need a ground to push off of; we create our own control surfaces, like wingsuits.

Comment by romeostevensit on Where do your eyes go? · 2021-09-21T00:23:17.987Z · LW · GW

I had some thoughts about what I was calling 'visual schemas' as a way of talking about deliberate practice, using Tetris as an example for learning where your eyes should go. It seems like a useful lead-in for talking about mindfulness of attention in meditation. The move is the same for visual attention and more general attention: how attention moves between objects, and whether it moves voluntarily or involuntarily.

Comment by romeostevensit on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-18T02:06:42.408Z · LW · GW

You are one of the people I am least confident in simulating accurately.

Comment by romeostevensit on Why didn't we find katas for rationality? · 2021-09-16T17:37:04.494Z · LW · GW

Written down if I need to multiply a few values to get a ballpark; in my head if it's just a direct guess.

Comment by romeostevensit on Why didn't we find katas for rationality? · 2021-09-14T21:04:16.277Z · LW · GW

Same as the posts on Fermi estimates. I just make a guess at whatever level of effort seems appropriate for the query (often pretty casual, but I'll take a bit more time if it's about something I feel is important or am especially uncertain about). Then, when I get the actual piece of info, I can reflect on reasons I might have been off. This often helps structure my inquiry into that piece of info too, as I map out the model differences that explain why it surprised me.

Comment by romeostevensit on Why didn't we find katas for rationality? · 2021-09-14T20:05:22.246Z · LW · GW

I did wind up with some personal katas.

Calibration using search: anytime I am searching for something with a quantitative answer, I have the chance to do a Fermi estimate or reference-class forecast and get feedback on how I did.

Selection effects/Straussian readings: trying to figure out what incentives drove a particular piece of information to be in front of me in this moment.

Stack trace: finding the provenance of internal maps and noticing that they are often predicated on extremely sparse data which is then overgeneralized.

Schematic thinking: An extension of the narrative fallacy. Noticing when alternatives would be equally valid as replacements for parts of an argument. The implied degrees of freedom make the proposed explanation weaker than it might otherwise seem.
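
A minimal sketch of how the "calibration using search" kata above could be scored (the guesses and the scoring rule are my own illustrative choices, not from the comment): log each Fermi estimate against the value the search returns, and measure error in orders of magnitude.

```python
import math

def log_error(estimate, actual):
    """Absolute error in orders of magnitude (0 = exact, 1 = off by 10x)."""
    return abs(math.log10(estimate / actual))

# Hypothetical calibration log: (query, Fermi estimate, value found by search)
guesses = [
    ("height of Everest (m)", 10_000, 8_849),
    ("US population (millions)", 300, 332),
    ("piano tuners in Chicago", 100, 290),
]

for label, estimate, actual in guesses:
    err = log_error(estimate, actual)
    print(f"{label}: guessed {estimate}, actual {actual}, off by {err:.2f} OOM")
```

Tracking the error in log-space rather than raw units keeps over- and under-estimates symmetric, which matters when guesses span several orders of magnitude.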

Comment by romeostevensit on We need a new philosophy of progress · 2021-08-24T19:35:47.940Z · LW · GW

I expect the big breakthrough to come when we figure out why the paradoxes in things like VNM utility theory and Arrow's impossibility theorem don't in fact preclude radically better preference aggregation, and thus much better coordination tech. I expect it will turn out that those results were an artifact of the representation chosen for preferences. I expect we will move from low-bandwidth estimates at particular times (e.g. voting) to some more fluid and continuous representation. I expect the new representations won't just measure preferences over specific inputs and outputs (e.g. representatives and policy prescriptions) but something about the structure of beliefs about how inputs map to outputs. This sounds complicated exactly because we haven't found the nice formalism yet. It will seem elegant and obvious in hindsight.
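
The Arrow-style paradox alluded to above fits in a few lines (a toy example with arbitrary candidate names): three perfectly transitive individual rankings aggregate, under pairwise majority vote, into a cyclic group preference.

```python
# Three voters, each with a transitive ranking (best to worst).
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Every pairwise contest has a clear 2-to-1 majority winner...
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
# ...and yet the aggregate preference is cyclic: C beats A.
assert majority_prefers("C", "A")
```

The cycle lives in the aggregation rule, not in any voter, which is one way of seeing why the choice of representation for preferences matters so much.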

Comment by romeostevensit on Analogies and General Priors on Intelligence · 2021-08-24T19:25:16.031Z · LW · GW

I understand, thought it was worth commenting on anyway.

Comment by romeostevensit on A Response to A Contamination Theory of the Obesity Epidemic · 2021-08-22T21:49:03.426Z · LW · GW

'Processed protein' is something there isn't a great definition for, precisely because our models are missing something. There's something about preserved and processed meats that does something bad, but we don't know what.

Similarly, we aren't sure why natural short-chain carbs (honey, high-GI fruits) seem to elicit milder negative effects than processed short-chain carbs. Our causal models are missing something.

Comment by romeostevensit on Analogies and General Priors on Intelligence · 2021-08-21T23:13:29.107Z · LW · GW

the small size of the human genome suggests that brain design is simple

It bounds it, yes, but the bound can be quite high due to offloading much of the compression to the environment.

Comment by romeostevensit on Exploring the Landscape of Scientific Minds (Let my People Go) · 2021-08-20T19:54:46.439Z · LW · GW

Slightly tangential. To expand on Hamming's point: any problem handed to you is almost certainly formulated wrong. Why can you be confident of that? If it were formulated right, it would already be solved and would not be coming into your awareness as a problem. This is helpful for scientific problems, but also for personal problems. Traversing the same representation of your problem for the nth time isn't going to do much other than agitate you. This is part of why new self-help techniques work for a time and then stop working: the problems you had that were amenable to those representations are now solved.

Comment by romeostevensit on Framing Practicum: Bistability · 2021-08-20T19:43:09.415Z · LW · GW

I wonder if we can think of a physical metaphor for an inversion of this, where pushing harder on one pole lowers the transition cost such that a sudden flip becomes more likely.
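
One candidate metaphor (my own toy model, not from the post): a tilted double-well potential V(x) = x^4 - 2x^2 + cx. Increasing the tilt c, i.e. pushing harder on one pole, lowers the escape barrier out of the disfavored well, so a sudden flip becomes more likely.

```python
def escape_barrier(c, n=4001):
    """Height of the barrier out of the disfavored (right) well of
    V(x) = x^4 - 2x^2 + c*x, estimated on a grid over [-2, 2]."""
    xs = [-2 + 4 * i / (n - 1) for i in range(n)]
    v = [x**4 - 2 * x**2 + c * x for x in xs]
    right_well = min(val for x, val in zip(xs, v) if x > 0.5)   # well bottom
    hump = max(val for x, val in zip(xs, v) if abs(x) < 0.5)    # barrier top
    return hump - right_well

for c in (0.0, 0.5, 1.0):
    print(f"tilt c={c}: escape barrier = {escape_barrier(c):.3f}")
```

The barrier shrinks monotonically as the tilt grows, until the disfavored well disappears entirely and the flip happens spontaneously, which matches the "pushing on one pole lowers the transition cost" intuition.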

Comment by romeostevensit on Factors of mental and physical abilities - a statistical analysis · 2021-08-20T19:38:07.591Z · LW · GW

I've been hoping for a long time for someone to do a nice write-up on factor analysis. This is great.

Comment by romeostevensit on Josh Jacobson's Shortform · 2021-08-20T00:20:41.067Z · LW · GW

Intersections are what mostly kill. The energy delta between two fast-moving cars going the same direction is low. The energy delta between even moderately moving cars meeting at right angles, or head-on, is huge.
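
The point can be made concrete with a back-of-envelope sketch (the mass and speeds below are illustrative assumptions): collision severity tracks the kinetic energy of the relative motion, which depends on the angle between the cars' headings.

```python
import math

def relative_ke(m, v1, v2, angle_deg):
    """Kinetic energy (J) of the relative motion between two cars of equal
    mass m (kg) at speeds v1, v2 (m/s), with the given angle between headings."""
    theta = math.radians(angle_deg)
    # relative speed via the law of cosines
    v_rel = math.sqrt(v1**2 + v2**2 - 2 * v1 * v2 * math.cos(theta))
    mu = m / 2  # reduced mass of two equal-mass cars
    return 0.5 * mu * v_rel**2

m = 1500  # kg, a typical car
print(relative_ke(m, 30, 28, 0))    # same direction at highway speed: small
print(relative_ke(m, 15, 15, 90))   # moderate speeds at an intersection: large
print(relative_ke(m, 30, 30, 180))  # head-on at highway speed: huge
```

Two cars at 30 and 28 m/s in the same direction have only a 2 m/s closing speed, while two cars at half that speed crossing at 90 degrees have a closing speed above 21 m/s, and energy scales with the square of that difference.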

Comment by romeostevensit on Perhaps vastly more people should be on FDA-approved weight loss medication · 2021-08-15T00:29:12.213Z · LW · GW

Whoops, meant metformin. I always confuse those two.

Comment by romeostevensit on Perhaps vastly more people should be on FDA-approved weight loss medication · 2021-08-14T22:35:48.482Z · LW · GW

Have there been any updates on the wellbutrin (edit: meant metformin) front? My understanding is that berberine has a similar mechanism of action and is available OTC.

Comment by romeostevensit on A Response to A Contamination Theory of the Obesity Epidemic · 2021-08-12T14:13:50.377Z · LW · GW

It would also dovetail with the other mysteries: despite investigation, we can't seem to figure out exactly why processed sugar seems so much worse for you than matched amounts from fruit and dairy. Similarly, despite investigation, we can't seem to figure out why highly processed protein is so much worse for you than unprocessed protein. My guess is that alterations to the molecular structure of substances wind up in a negative goldilocks zone: not altered enough that the body rejects them, so they still get incorporated as functional structure (cell walls, say), but altered enough that some biological processes either don't work, work at significantly reduced efficiency, or have weird side effects. This will eventually be measurable; we just don't have the right proxy metrics currently.

Comment by romeostevensit on Do we have a term for the issue with quantifying policy effect Scott Alexander stumbled on multiple times? · 2021-07-29T22:46:27.321Z · LW · GW

Related to the curse of dimensionality.

Comment by romeostevensit on How much do variations in diet quality determine individual productivity? · 2021-07-28T15:09:16.386Z · LW · GW

I don't believe nutritional RCTs are going to give the resolution of evidence necessary to support or refute this.

Comment by romeostevensit on Draft report on AI timelines · 2021-07-13T20:50:40.181Z · LW · GW

Is a sensitivity analysis of the model separated out anywhere? I might just be missing it.

Comment by romeostevensit on Intro to Debt Crises · 2021-06-29T05:43:50.603Z · LW · GW

My impression is that competition pushes toward fragility in good times. If two firms are basically the same, but one takes out bigger loans and levers up its investments more, it will have more cash to play with to try to take market share from its competitors.

Comment by romeostevensit on Internal Information Cascades · 2021-06-25T20:38:56.575Z · LW · GW

This reminds me of 'The Medium is the Message' and the Sapir-Whorf hypothesis and Quine's ontological commitments. Namely, that leaky abstractions don't just leak sideways across your different abstractions, but also up and down across levels of abstraction. Thus your epistemology leaks into your ontology and vice versa, which leak into which goals you can think about etc.

One takeaway from thinking this way was that I radically increased the priority on figuring out which skills are worth putting serious time into, i.e. which are more 'upstream' of more good things. Two answers I came up with were expert judgment (since I can't do the vast majority of things on my own, I need to know whom to listen to) and introspection (in order not to be mistaken about what I actually want).

Comment by romeostevensit on ELI12: how do libertarians want wages to work? · 2021-06-24T09:15:43.942Z · LW · GW

The basic idea is that, without government forcing out competition via monopoly, the market provides arbitration services.

Comment by romeostevensit on romeostevensit's Shortform · 2021-06-22T21:39:07.759Z · LW · GW

causation seems lossless when it is lossy in exactly the same way as the intention that gave rise to it

Comment by romeostevensit on I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 2021-06-22T21:14:14.452Z · LW · GW

Relaxing independence rather than transitivity is the most explored angle of attack IIRC.

Comment by romeostevensit on A Reason to Expect Republics to Perform Better than Absolute Monarchies in the Long-Term · 2021-06-18T01:55:55.502Z · LW · GW

I'd be curious about data on hereditary vs. non-hereditary monarchies, if anyone knows any pointers.

Comment by romeostevensit on (Trying To) Study Textbooks Effectively: A Year of Experimentation · 2021-06-09T02:27:15.658Z · LW · GW

Great post! You sound like a geometer, and some people are algebraists. The latter seem to use interoception and internal speech more than visualization. Using interoception to come up with new visual metaphors, a la Gendlin's Focusing, can be helpful for geometers IME.

Comment by romeostevensit on Often, enemies really are innately evil. · 2021-06-08T01:17:28.834Z · LW · GW

This is an example of the problem. More concern with intractable causes than tractable effects.

Comment by romeostevensit on Often, enemies really are innately evil. · 2021-06-07T18:06:09.803Z · LW · GW

"Sadism exists and is popular" is something I think of as a major blind spot for mistake/error theorists.

Comment by romeostevensit on Five Whys · 2021-06-07T18:01:33.225Z · LW · GW

Other direction can be valuable for operationalizing: 5 Hows

Comment by romeostevensit on What is the Risk of Long Covid after Vaccination? · 2021-06-02T20:14:41.077Z · LW · GW

I expect symptoms-consistent-with is broad enough to interact with a whole lot of stuff that is going on medically and culturally.

Comment by romeostevensit on Selection Has A Quality Ceiling · 2021-06-02T20:05:47.543Z · LW · GW

I have the sense that training happens out in the tails via the mechanism of lineage. Lineage holders get some selection power and might be doing something inscrutable with it, but it's not like they can cast an arbitrarily wide net for PhD candidates, so they must be doing some training or we wouldn't see the concentration of results we do. The main issue with this seems to be that it is very expensive: if I have only 10 people I think can do top-tier work, it is very costly to test hypotheses that involve them spending time doing things other than top-tier work. Suggestion: find ways for candidates to work closely with top-tier people such that it doesn't distract those people too much. Look at how intellectual lineages do this, and assume that some of it looks dumb on the surface.

Comment by romeostevensit on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-02T02:05:33.667Z · LW · GW

A research review that I found incredibly helpful for bridging my understanding of technical topics and multi-agent cooperation in more practical contexts was this paper: Statistical physics of human cooperation. I'd be really excited about people engaging with this approach.

Comment by romeostevensit on The Homunculus Problem · 2021-05-28T07:12:22.637Z · LW · GW

The homunculus fallacy fallacy is the tendency to deny that you are in fact the homunculus. The homunculus homunculus fallacy is believing in a second order homunculus that is only a projection and is what mistaken people are talking about when they refer to the homunculus fallacy. The homunculus homunculus homunculus fallacy fallacy is believing that it leads to infinite regress problems when in fact two levels of meta are sufficient.

Comment by romeostevensit on romeostevensit's Shortform · 2021-05-26T18:02:10.230Z · LW · GW

It strikes me that, at certain times and places, low time preference research might have become a competitive consumption display for wealthy patrons. I know this is considered mildly the case, but I mean as a major cultural driver.

Comment by romeostevensit on Concerning not getting lost · 2021-05-26T02:49:28.682Z · LW · GW

I describe it a different way in Towards an Intentional Research Agenda. But basically I think trying to constrain intentions algorithmically is a type error.

Comment by romeostevensit on Questions are tools to help answerers optimize utility · 2021-05-25T04:59:38.223Z · LW · GW

Answerers can also split out the breakdown/tacit linked premises for the questioner, like you do in this post, if the questioner has patience for that because the question is somewhat important to them. See also: Aristotle treating questions as only fully answered if they separately address four different types of why.

Comment by romeostevensit on The Hard Work of Translation (Buddhism) · 2021-05-25T01:49:00.619Z · LW · GW

on interpretations: Ānāpānasati_Sutta

insight techniques:

Comment by romeostevensit on AI Safety Research Project Ideas · 2021-05-22T06:52:47.808Z · LW · GW

Detecting preferences in agents: how many assumptions need to be made?

I'm interpreting this as asking how to detect the dimensionality of the natural embedding of preferences.

Comment by romeostevensit on How To Think About Overparameterized Models · 2021-05-20T04:58:55.724Z · LW · GW

Somehow I missed this post and only caught it now. It was helpful for a few things:

  1. That I should think of some algorithms primarily as populating a space with the given data and then 'deciding' on the topology of the space
  2. That 'the valley of bad X' is the inverse of a 'goldilocks zone'
  3. That overfitting can be thought of as occurring in a valley of bad parameterization.

Comment by romeostevensit on How Can One Tell What Is Beautiful? · 2021-05-15T22:55:08.720Z · LW · GW

Given your current compression library, there's a frontier of new compression heuristics you can learn. This determines what seems appealing: the appealing objects are in your current goldilocks zone of ambiguity wrt how best to compress them.
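
One crude way to play with this idea (an illustrative toy of my own, using a general-purpose compressor as a stand-in for the 'compression library'): fully regular and fully random inputs sit at the extremes of compressibility, while the partially structured middle is where there is still something left to learn.

```python
import random
import zlib

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size."""
    return len(zlib.compress(data)) / len(data)

random.seed(0)
regular = b"ab" * 500                                       # trivially compressible
noise = bytes(random.randrange(256) for _ in range(1000))   # incompressible
mixed = (b"ab" * 250) + bytes(random.randrange(256) for _ in range(500))

for name, data in [("regular", regular), ("mixed", mixed), ("noise", noise)]:
    print(f"{name}: compressed to {ratio(data):.0%} of original size")
```

The regular string compresses to a few percent of its size, the noise slightly exceeds 100% (compression overhead), and the mixed string lands in between, the zone where a better heuristic could still gain ground.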

Comment by romeostevensit on Concerning not getting lost · 2021-05-15T19:00:04.341Z · LW · GW

I did a version of this (what I would call a sense of the main story line for me personally) here:

Comment by romeostevensit on The case for hypocrisy · 2021-05-13T16:26:10.457Z · LW · GW

Detailed maps of technical or personal subjects, like knowing the particular idiosyncrasies of your job or relationships.

Comment by romeostevensit on The case for hypocrisy · 2021-05-13T05:50:41.344Z · LW · GW

I think of modularity/composability of beliefs like Lego. It is important for pieces to be able to differ so they can serve different tasks, yet also important for them to maintain certain properties so that they remain connectable to other pieces. Pieces that conform along more dimensions will be more composable and thus usable in more ways (more generic pieces), while pieces that conform along fewer dimensions can accomplish more specialized tasks but are limited in how well they interface with other pieces (super-specialized pieces that 'are only good for their specific purpose' in specialty sets).

Comment by romeostevensit on Agency in Conway’s Game of Life · 2021-05-13T05:41:56.325Z · LW · GW

Related to the sensitivity of instrumental convergence, i.e. the question of whether we live in a universe of strong or weak instrumental convergence. In a universe of strong instrumental convergence, most possible optimizers wind up in a relatively small space of configurations regardless of starting conditions, while in a weak one they may diverge arbitrarily in design space. This can be thought of as one way of crisping up concepts around orthogonality, e.g. in some universes orthogonality would be locally true but globally false, or vice versa, or true (or false) at both levels.