Comments

Comment by ADifferentAnonymous on We need a new philosophy of progress · 2021-08-31T15:07:52.975Z · LW · GW

Perfect! Glad to know you're on it.

Comment by ADifferentAnonymous on We need a new philosophy of progress · 2021-08-30T18:53:39.594Z · LW · GW

It's very easy to read this as a call to mostly bring back the old philosophy of progress, despite what I recognize as attempts to avoid that reading.

My take is that a genuinely new philosophy of progress needs to transcend the old battle by positioning itself as heir to both sides. Increased understanding of the environmental and other costs of industrialization is no less a form of progress than new industrial technology. Environmentalists seeing industry as the enemy and industrialists seeing environmentalism as the enemy are both missing a larger picture.

In this vision, there would be Roots of Progress posts on topics like CFCs/ozone layer and acid rain, or maybe broader things like "how we stopped dumping so much stuff in rivers", without any sense that these posts are opposed to or in a different category from the rest. You could still discuss the disagreements around how to solve these issues, but even those judged completely wrong should not be cast as villains any more than proponents of "beating"-type threshing machines.

(I realize I'm sort of describing Mistake Theory. Mistake Theory being the philosophy of progress should be no surprise!)

Comment by ADifferentAnonymous on A Better Time until Sunburn Calculator · 2021-08-18T16:29:24.505Z · LW · GW

Thanks for doing this! Just seeing the concept makes me realize how subjective my assessments of sunburn risk are.

One thing I've been wondering lately is the effects of interrupted vs uninterrupted sun exposure. E.g., if I spend an hour outside, an hour inside, and then another hour outside, how does that compare to the effects of two continuous hours outside? I've tried a bit of googling, but the information is surprisingly hard to find.

What I have learned is that UV-induced DNA damage mostly takes the form of pyrimidine dimers, lesions that can be repaired via nucleotide excision repair, but I'm not sure how long that takes. I did find this on the simpler related process of base excision repair:

BER reactions in cells are extremely fast, and in many cases, an individual BER event may take only a few minutes (10,11). The repair of acute DNA damage requires several rounds of BER and can take several hours, as the amount of BER enzymes is limited.

(Dramatically-titled source)

To me that suggests a model where sun damage accumulates at a rate depending on exposure, is repaired at a fixed rate, and damage reaching a certain threshold triggers a sunburn. 
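
A minimal sketch of that model (all parameter values are arbitrary placeholders, not anything derived from the calculator or the sources above):

```python
def sunburn(exposure_schedule, uv_rate=1.0, repair_rate=0.5, threshold=60.0):
    """exposure_schedule: list of (minutes, outside?) segments, in order.
    Damage accumulates while outside, is repaired at a fixed rate, and a
    sunburn is triggered if accumulated damage ever reaches the threshold."""
    damage = 0.0
    for minutes, outside in exposure_schedule:
        rate = (uv_rate if outside else 0.0) - repair_rate
        damage = max(0.0, damage + rate * minutes)
        if damage >= threshold:
            return True
    return False

# Two continuous hours vs. an hour out, an hour in, another hour out:
print(sunburn([(120, True)]))                          # True: damage reaches 60
print(sunburn([(60, True), (60, False), (60, True)]))  # False: the indoor hour lets repair catch up
```

Under a model like this, interruptions matter exactly when repair can claw back enough damage before the threshold is reached, which is what the interrupted-vs-uninterrupted comparison is getting at.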

Comment by ADifferentAnonymous on A Response to A Contamination Theory of the Obesity Epidemic · 2021-08-16T18:07:09.556Z · LW · GW

Jeff Nobbs (one of OP's sources) says polyunsaturated fatty acids are the real culprit and provides a helpful chart. Tl;dr coconut oil is great, olive and avocado oil are pretty good, avoid canola/peanut/rice bran/corn/sunflower. (Sesame isn't on the chart but IME it's used in pretty small quantities anyway).

It's hard to get much oil from whole versions of the source foods. My quick calculation says you can add '5 tbsp of soybean oil requires six blocks of tofu'.
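
For what it's worth, here's roughly how that back-of-the-envelope could go (assuming about 13–14 g of oil per tablespoon and on the order of 12 g of fat per block of tofu; both numbers are my assumptions, not figures from the chart): 5 tbsp × ~13.6 g/tbsp ≈ 68 g of fat, and 68 g ÷ ~12 g of fat per block ≈ 6 blocks.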

Comment by ADifferentAnonymous on Transitive Tolerance Means Intolerance · 2021-08-14T22:28:23.387Z · LW · GW

There's a self-fulfilling prophecy aspect to this. If you expect to be judged for your transitive associations, you'll choose them carefully. If you choose your transitive associations carefully, they'll provide more Bayesian evidence about your values, making it more rational for others to judge you by them.

Comment by ADifferentAnonymous on A Response to A Contamination Theory of the Obesity Epidemic · 2021-08-13T22:46:24.597Z · LW · GW

Thanks for pulling all that data!

That study says third-generation Chinese-Americans—presumably the ones eating the most typically American diet—are actually slightly more obese than white Americans! At face value that pretty much torpedoes any genetic adaptation theory (and I have no particular reason not to take it at face value).

Theories 1 and 2 are both quite possible. 

Re: Japan, it looks like soybean oil doesn't dominate vegetable oil intake like in the US; rapeseed is more common and did not decline in the same way, and palm oil is also significant, so their overall trend in vegetable oil consumption isn't so easy to eyeball. Though I think those numbers are consumption in the economic sense, not in the 'eating' sense—not sure how to account for that.

Comment by ADifferentAnonymous on A Response to A Contamination Theory of the Obesity Epidemic · 2021-08-13T16:46:42.861Z · LW · GW

Also, re: China being an outlier of high vegetable oil intake with low obesity, apparently soybean oil has been used there for millennia. Adaptation?

Comment by ADifferentAnonymous on A Response to A Contamination Theory of the Obesity Epidemic · 2021-08-12T20:55:19.372Z · LW · GW

One thing that jumped out at me from the [Stephan Guyenet post](http://wholehealthsource.blogspot.com/2011/08/seed-oils-and-body-fatness-problematic.html) Scott cites: the increasing linoleic acid content of human fat.

Mostly I'm intrigued because I've basically never heard anyone talk about body fatty acid composition before at all. For all I knew, the human body converted all its fats into a standardized human fat before storing them.

And this kinda seems like a big deal, just intuitively? It definitely makes me update in favor of vegetable fat consumption having some kind of cumulative effect.

Comment by ADifferentAnonymous on Progress, Stagnation, & Collapse · 2021-07-23T12:26:32.245Z · LW · GW

It occurs to me that from a system robustness perspective, luxury is actually great, because it implies surplus capacity (assuming society can and will divert luxury-production to essentials-production in a crisis).

Comment by ADifferentAnonymous on My Marriage Vows · 2021-07-23T12:04:01.682Z · LW · GW

The "best" values in KS are those that result when you optimize one player's payoff under the constraint that the second player's payoff is higher than the disagreement payoff.

I'm not sure this is the case? Wiki does say "It is assumed that the problem is nontrivial, i.e, the agreements in [the feasible set] are better for both parties than the disagreement", but this is ambiguous as to whether they mean some or all. Googling further, I see graphs like this where non-Pareto-improvement solutions visibly do count.

I agree that your version seems more reasonable, but I think you lose monotonicity over the set of all policies, because a weak improvement to player 1's payoffs could turn a (-1, 1000) point into a (0.1, 1000) point, make it able to affect the solution, and make the solution for player 1 worse. Though you'll still have monotonicity over the restricted set of policies.
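
To make the ambiguity concrete, here's a toy discrete sketch of KS bargaining (my own illustrative code with made-up payoff points, not anything from the post), with a flag for whether the 'best' values are taken over all feasible points or only over points that improve on disagreement for both players:

```python
def ks_solution(feasible, disagreement, restrict_best=False):
    """Rough discrete approximation of the Kalai-Smorodinsky solution:
    pick the agreement that maximizes the minimum normalized gain, where
    normalization is by each player's 'best' value."""
    d1, d2 = disagreement
    # Points both players weakly prefer to disagreement (candidate agreements).
    agreements = [(u1, u2) for (u1, u2) in feasible if u1 >= d1 and u2 >= d2]
    # The 'best' values: either over everything feasible, or only over agreements.
    pool = agreements if restrict_best else feasible
    b1 = max(u1 for u1, _ in pool)
    b2 = max(u2 for _, u2 in pool)

    def min_normalized_gain(point):
        u1, u2 = point
        return min((u1 - d1) / (b1 - d1), (u2 - d2) / (b2 - d2))

    return max(agreements, key=min_normalized_gain)

feasible = [(-1, 1000), (5, 5), (8, 2), (2, 8)]
print(ks_solution(feasible, (0, 0)))                      # (2, 8): the (-1, 1000) point inflates player 2's 'best' value
print(ks_solution(feasible, (0, 0), restrict_best=True))  # (5, 5): the outlier no longer affects the solution
```

(Whether this discrete max-min version matches the textbook KS construction exactly depends on the feasible set, but it's enough to show how a point that can never be chosen can still move the answer.)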

Comment by ADifferentAnonymous on My Marriage Vows · 2021-07-23T00:30:06.514Z · LW · GW

First of all, this is awesome.

I didn't know about KS bargaining before reading this, thinking through it now... 

It seems kind of odd that terrible solutions like (1000, -10^100) could determine the outcome (I realize they can't be the outcome, but still). I would hesitate to use KS bargaining unless I felt that the 'best' values were in some sense 'reasonable' outcomes. Do you have a general sense of what a life of maximizing your spouse's utility would look like (and vice versa)?

Trying to imagine this myself wrt my own partner, figuring out my utility function is a little tricky. The issue is that I think I have some concern for fairness baked in. Like, do I want my partner to do 100% of chores? My reaction is to say 'no, that would be unfair, I don't want to be unfair'. But if you're referencing your utility function in a bargaining procedure to decide what 'fair' is, I don't think that works. So, would I want my partner to do 100% of chores if that were fair? I can simulate that by imagining she offered to do this temporarily as part of a trade or bet and asking myself if I'd consider that a better deal than, say,  her doing 75% of chores. And yes, yes I would. But I'd consider 'she does 100% of chores no matter what, I'm not allowed to help' a worse deal than 'she does 100% of chores unless it becomes too costly to her' for some definitions of 'too costly'.

Assuming that my utility function is like that about most things, and that hers is as well, I'd say our 'best' values are actually reasonable counterfactuals to consider. Which inclines me to think yours are as well.

Still, 'everything I do' is a big solution space to make assumptions about. The Vow of Concord pretty much requires you to look for edge cases where your spouse's utility can be increased by disproportionate sacrifices of yours; I'd suggest you start looking now (if you haven't yet), before you've Vowed to let them guide your decisions.

Comment by ADifferentAnonymous on Punishing the good · 2021-07-22T12:32:51.368Z · LW · GW

It makes a difference whether punishment is zero-sum or negative-sum. If we can't take $100 from Bob to give to someone else but can only impose $100 of cost on him to no one's benefit, we'd rather not do that.

In that case I think the answer is to forego the punishment if you're sufficiently confident the harm is an inevitable result of a net-good decision.

Comment by ADifferentAnonymous on The shoot-the-moon strategy · 2021-07-22T11:45:53.625Z · LW · GW

Since I first heard of controversy around ballot selfies, I've thought that an alternative to prosecuting those who take them would be to facilitate fake ballot selfies.

I was going to say you could implement this by letting people surrender a filled-out-but-not-submitted ballot to a poll worker in exchange for a new one, but you can probably already do this if you just say you made a mistake? In that case polling sites would just need to put posters up telling people to do this if they are under pressure of any kind to produce a ballot selfie.

Comment by ADifferentAnonymous on Improving capital gains taxes · 2021-07-10T22:09:38.045Z · LW · GW

Do you have thoughts on pros and cons of this relative to progressive consumption tax? (I agree they're mostly equivalent and both good).

I think consumption tax has an advantage in terms of perceived fairness in that it (almost) guarantees you won't get years where e.g. Jeff Bezos pays literally zero taxes, which look pretty bad. Whereas these reforms could give you years where his taxes are highly negative, which would look worse.

Comment by ADifferentAnonymous on The reverse Goodhart problem · 2021-06-09T20:01:22.534Z · LW · GW

Hmm... I find the scaling aspect a bit fishy (maybe an ordinal vs cardinal utility issue?). The goodness of a proxy should be measured by the actions it guides, and a V-maximizer, a log(V)-maximizer, and a maximizer of any other increasing function of V will all take the same actions (barring uncertain outcomes).

That said, reverse Goodhart remains possible. I'd characterize it as a matter of being below a proxy's range of validity, whereas the more familiar Goodhart problem involves ending up above it. E.g. if V = X + Y, then U = X is a reverse-Goodhart proxy for V—the higher X gets, the less you'll lose (relatively) by neglecting Y. (Though we'd have to specify some assumptions about the available actions to make that a theorem).

An intuitive example might be a game with an expert strategy and a beginner strategy—'skill at the expert strategy' being a reverse-Goodhart proxy for skill at the game.

Comment by ADifferentAnonymous on The irrelevance of test scores is greatly exaggerated · 2021-04-16T15:31:06.390Z · LW · GW

A more general observation that I'm sure has been stated many times but clicked for me while reading this: Once you condition on the output of a prediction process, correlations are residuals. Positive/negative/zero coefficients then map not to good/bad/irrelevant but to underrated/overrated/valued accurately.

("Which college a student attends" is the output of a prediction process insofar as diff students attend the most selective college that accepts them and colleges differ only in their admission cutoffs on a common scoring function, I think).

Comment by ADifferentAnonymous on Learning Russian Roulette · 2021-04-05T18:00:37.820Z · LW · GW

Shorter statement of my answer:

The source of the apparent paradox here is that the perceived absurdity of 'getting lucky N times in a row' doesn't scale linearly with N, which makes it unintuitive that an aggregation of ordinary evidence can justify an extraordinary belief.

You can get the same problem with less anthropic confusion by using coin-flip predictions instead of Russian Roulette. It seems weird that predicting enough flips successfully would force you to conclude that you can psychically predict flips, but that's just a real and correct implication of having a nonzero prior on psychic abilities in the first place.
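
Concretely (a back-of-the-envelope that idealizes the psychic hypothesis as predicting every flip correctly):

$$\frac{P(\text{psychic} \mid N \text{ hits})}{P(\text{chance} \mid N \text{ hits})} = \frac{p}{1-p} \cdot \frac{1}{(1/2)^N} = \frac{p}{1-p} \cdot 2^N,$$

so even a prior of $p = 10^{-20}$ ends up favoring the psychic hypothesis after about $\log_2 10^{20} \approx 66$ correct calls.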

Comment by ADifferentAnonymous on Learning Russian Roulette · 2021-04-05T17:27:40.180Z · LW · GW

Okay. So, we agree that your prior says that there's a 1/N chance that you are unkillable by Russian Roulette for stupid reasons, and you never get any evidence against this. And let's say this is independent of how much Russian Roulette one plays, except insofar as you have to stop if you die.

Let's take a second to sincerely hold this prior.  We aren't just writing down some small number because we aren't allowed to write zero; we actually think that in the infinite multiverse, for every N agents (disregarding those unkillable for non-stupid reasons), there's one who will always survive Russian Roulette for stupid reasons. We really think these people are walking around the multiverse.

So now let K be the base-5/6 log of 1/N. If N people each attempt to play K games of Russian Roulette (i.e. keep playing until they've played K games or are dead), one will survive by luck, one will survive because they're unkillable, and the rest will die (rounding away the off-by-one error).

If N^2 people across the multiverse attempt to play 2K games of Russian Roulette, N of them will survive for stupid reasons, one of them will survive by luck, and the rest will die. Picture that set of N immortals and one lucky mortal, and remember how colossal a number N must be. Are the people in that set wrong to think they're probably immortals? I don't think they are.
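
Spelling out the arithmetic behind those counts (just restating the setup above in symbols):

$$\left(\tfrac{5}{6}\right)^K = \tfrac{1}{N} \;\Rightarrow\; K = \log_{5/6}(1/N).$$

Among $N$ players attempting $K$ games, unkillable survivors $\approx N \cdot \tfrac{1}{N} = 1$ and lucky survivors $\approx N \cdot (\tfrac{5}{6})^K = 1$; among $N^2$ players attempting $2K$ games, unkillable survivors $\approx N^2 \cdot \tfrac{1}{N} = N$ and lucky survivors $\approx N^2 \cdot (\tfrac{5}{6})^{2K} = 1$.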

Comment by ADifferentAnonymous on Ranked Choice Voting is Arbitrarily Bad · 2021-04-05T16:58:17.218Z · LW · GW

Yeah, I think ranked-choice voting almost always refers to [instant-runoff voting](https://en.wikipedia.org/wiki/Instant-runoff_voting), which would indeed eliminate Carol first here. So I think the post is just wrong with that example.

A real example of a questionable RCV outcome was the [2009 Burlington, VT mayoral election](https://en.wikipedia.org/wiki/Instant-runoff_voting#2009_Burlington_mayoral_election), where the Democrat would have beaten either the Progressive or the Republican head-to-head but had fewer first-choice votes than either, leading to a Progressive victory over the Republican in the final round. This seems bad but not arbitrarily bad—the winner wasn't universally despised or anything.

Comment by ADifferentAnonymous on Learning Russian Roulette · 2021-04-02T21:28:53.379Z · LW · GW

Even after you've gotten an infinite amount of evidence against every possible alternative consideration, you'll still believe that you're certain to survive

Isn't the prior probability of B the sum of the priors of all the specific hypotheses that imply B? So if you've gotten an arbitrarily large amount of evidence against all of those hypotheses, and you've won at Russian Roulette an arbitrarily high number of times... well, you'll just have to get more specific about those arbitrarily large quantities to say what your posterior is, right?

Comment by ADifferentAnonymous on The (not so) paradoxical asymmetry between position and momentum · 2021-03-30T15:49:31.697Z · LW · GW

'Symmetric vs. asymmetric' isn't the right distinction; merely noting that a Hamiltonian is asymmetric in position and momentum can't tell you anything about which one is fundamental!

The notable thing about position in our universe is that there are no interactions that don't lose strength with increasing distance (I think?), and in ancestral human life the Earth's gravity is the only obviously-important violation of strong locality. 

As for why this is, I'm inclined toward anthropic explanations.  This could just be a limit of human intuition, but it seems like locality is really helpful for complex purposeful structures. E.g., it allows a cell to control an interaction neighborhood such that everything that happens inside the membrane is coordinated. If some interactions were position-local and others momentum-local, you'd have to try to defend a neighborhood in both position-space and momentum-space, but your momentum-space boundaries would drift apart in position-space, and the need to stay in your momentum-space neighborhood would constrain your ability to update your position... it seems hard.

Comment by ADifferentAnonymous on Product orientation · 2021-03-18T16:51:23.718Z · LW · GW

For question 5, maybe try out different shopping-like activities to see if any of them are less aversive.

Some examples:

  • Researching a product category without the intention to make a purchase.
    • A few ways to motivate this, if 'product orientation practice' isn't motivating
      • Market research for a potential product
      • Write a buying guide others might appreciate
      • Things you might buy someday but not soon
      • Fantasy purchases. "If I were going to buy a yacht/private plane/supercar, which one would I want?"
  • Cheap, unimportant purchases where the consequences of choosing wrong are minimal
  • Choosing among free things (e.g. open-source libraries)

Comment by ADifferentAnonymous on Blue is Arbitrary · 2021-03-15T23:00:17.947Z · LW · GW

"Eurocentric paint" is an imprecise phrase. I first read it as meaning "traditionally-used European paints", with the implication that other cultures chose their colors based on different paints. But the rest of the post makes clear it's the idea of basing colors on paints that's allegedly Eurocentric; so the better phrasing might be "Eurocentric fixation on paint".

 

I was taught in (US) school that the primary colors were red, yellow, and blue and the secondaries were green, orange and purple (which matches the 'rainbow' in the comic, though the 'rainbow' I learned was ROYGBIV).  Per https://en.wikipedia.org/wiki/Color_theory#Traditional_color_theory, this only works with paint:

One reason the artist's primary colors work at all is due to the imperfect pigments being used have sloped absorption curves, and change color with concentration...  Another reason the correct primary colors were not used by early artists is they were not available as durable pigments. Modern methods in chemistry were needed to produce them.

Granted, I was taught those colors in conjunction with being given paint to play with, which is a good reason to teach them. But it's still a bit striking that at no point in my education was I taught any other set of primary colors, except implicitly by picking RGB colors in MS Paint (an ironic name, in context).

I'm pretty sure that the common intuition among my classmates, way back in childhood, was that the first-tier colors were red, yellow, blue and green.  This turns out to be supported by a relatively sophisticated color theory based neither on natural occurrences of colors nor on any means of producing colors, but rather on the brain's fundamental abstractions for processing them.

Comment by ADifferentAnonymous on Are the Born probabilities really that mysterious? · 2021-03-02T16:07:40.462Z · LW · GW

I think b) is what I always assumed was meant by the Born rule being called mysterious?

Comment by ADifferentAnonymous on The slopes to common sense · 2021-02-23T15:11:42.478Z · LW · GW

I think one of my main contrarian instincts is to see a flat direction and worry we've been creeping up it, to the point that I'm actually pretty receptive to arguments for going the other way.

I take it somewhat as a sign I have this well-calibrated that your more-sleep and less-sleep paragraphs sounded about equally reasonable to me.

Comment by ADifferentAnonymous on Quadratic, not logarithmic · 2021-02-10T03:01:10.773Z · LW · GW

I remember very early in the pandemic reading an interview with someone who justified their decision to continue going to bars by pointing out that they had a high-contact job that they still had to do. I noticed that this in fact made their decision worse (in terms of total societal Covid risk).

(And as the number of cases was still quite low at the time, the 100% bound on risk was much less plausibly a factor)

Comment by ADifferentAnonymous on Quadratic, not logarithmic · 2021-02-10T02:09:32.346Z · LW · GW

If you're deciding whether or not to add the (n+1)th person, what matters is the marginal risk of that decision.

Comment by ADifferentAnonymous on Quadratic, not logarithmic · 2021-02-08T17:14:32.909Z · LW · GW

Another explanation for logarithmic thinking is Laplace's rule of succession.

If you have N exposures and have not yet had a bad outcome, the Laplacian estimate of a bad outcome from the next exposure goes as 1/N (the marginal cost under a logarithmic rule).
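
Spelled out: with zero bad outcomes in N exposures, Laplace's rule gives

$$P(\text{bad outcome on exposure } N{+}1) = \frac{0+1}{N+2} \approx \frac{1}{N},$$

which is exactly the marginal risk implied by a total-risk curve proportional to $\log N$, since $\frac{d}{dN} \log N = \frac{1}{N}$.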

Applying this to "number of contacts" rather than "number of exposures" is admittedly more strained but I could still see it playing a part.

Comment by ADifferentAnonymous on Massive consequences · 2021-02-08T16:25:04.870Z · LW · GW

I think the idea is that Huemer's quote seems to itself be an effort to repair society without fully understanding it.

I don't think this is a facile objection, either*—I think it's very possible that "Voters, activists, and political leaders" are actually an essential part of the complex mechanism of society and if they all stopped trying to remedy problems things would get even worse.

On the other hand, you can recurse this reasoning and say that maybe bold counterintuitive philosophical prescriptions like Huemer's are also part of the complex mechanism.

 

*To the quote as a standalone argument, anyway—haven't read the essay.

Comment by ADifferentAnonymous on New Empty Units · 2021-01-26T19:17:10.685Z · LW · GW

Searching "real estate money laundering", it does sound like this is a real thing. But the few pages I just read generally don't emphasize the "overpaying in exchange for out-of-band services" mechanism—they seem to be thinking in terms of buying (with dirty money) and selling (for clean money) at market prices, and emphasize that real estate's status as "a good investment" is an important part of why criminals use it.

(They also bring up international tax avoidance strategies. Obviously using property to "park your wealth" also relies on prices not going down and hopefully going up at a reasonable rate).

So it sounds like OP's strategy of building more and more until the speculators stop paying would work almost equally well against these types of buyers.

Comment by ADifferentAnonymous on Would the Real Economy Please Stand Up · 2020-12-29T16:47:16.596Z · LW · GW

I find this distinction useful as well. I suspect it's one that many people understand implicitly and many others totally lack. Evidence of the latter: I've seen intelligent people be far too upset by https://en.wikipedia.org/wiki/K_Foundation_Burn_a_Million_Quid.

Comment by ADifferentAnonymous on Motive Ambiguity · 2020-12-15T23:36:03.242Z · LW · GW

One (admittedly idealistic) solution would be to spread awareness of this dynamic and its toxicity. You can't totally expunge it that way, but you could make it less prevalent (i.e. upper-middle managers probably can't be saved, but it might get hard to find enough somewhat-competent lower-middle managers who will play along).

What would it look like to achieve an actually-meaningful level of awareness? I would say "there is a widely-known and negative-affect-laden term for the behavior of making strictly-worse choices to prove loyalty". 

Writing this, I realized that the central example of "negative-sum behavior to prove loyalty" is hazing. (I think some forms of hazing involve useful menial labor, but classic frat-style hazing is unpleasant for the pledges with no tangible benefit to anyone else). It seems conceivable to get the term self-hazing into circulation to describe cases like the one in OP, to the point that someone might notice when they're being expected to self-haze and question whether they really want to go down that road.

Comment by ADifferentAnonymous on Hermione Granger and Newcomb's Paradox · 2020-12-15T21:17:28.156Z · LW · GW

Had she been the sort to do that, Omega wouldn't have made her the offer in the first place.

Comment by ADifferentAnonymous on What confusions do people have about simulacrum levels? · 2020-12-15T20:40:31.113Z · LW · GW

I could use more clarity on what is and isn't level three.

Supposedly at level three, saying "There's a lion across the river" means "I’m with the popular kids who are too cool to go across the river." But there's more than one kind of motivation the speaker might have.

A) A felt sense that "There's a lion across the river" would be a good thing to say (based on subconscious desire to affiliate with the cool kids, and having heard the cool kids say this)

B) A conscious calculation that saying this will ingratiate you with the cool kids, based on explicit reasoning about other things the cool kids have said, but motivated by a felt sense that those kids are cool and you want to join them

C) A conscious calculation that saying this will ingratiate you with the cool kids, motivated by a conscious calculation that gaining status among the cool kids will yield tangible benefits.

Are all three of these contained by level three? Or does an element of conscious calculation take us into level four?

(I think C) has a tendency to turn into B) and B) likewise into A), but I don't think it's inevitable)

Comment by ADifferentAnonymous on Hermione Granger and Newcomb's Paradox · 2020-12-15T17:24:49.480Z · LW · GW

The answer looks something like "if she had been planning to do that, the opaque envelope would have been empty".

Comment by ADifferentAnonymous on Luna Lovegood and the Chamber of Secrets - Part 7 · 2020-12-11T14:08:03.075Z · LW · GW

I think I know what you mean (about even-numbered pages; I'm not familiar with Manuscript), but there isn't actually missing necessary information (unless you haven't read HPMoR, in which case you're definitely missing necessary information). I suppose what's missing is unnecessary information--each scene is stripped to its bare essentials.

Comment by ADifferentAnonymous on D&D.Sci · 2020-12-07T21:08:25.611Z · LW · GW

I like to read blog posts by people who do real statistics, but with a problem in front of me I'm very much making stuff up. It's fun, though!

The approach I settled on was to estimate the success chance of a possible stat line by taking a weighted success rate over the data, weighted by how similar the hero's stats are to the stats being evaluated. My rationale is that based on intuitions about the domain I would not assume linearity or independence of stats' effects or such, but I would assume that heroes with similar stats would have similar success chances.

In pseudocode: 

estimatedchance(stats) = sum(weightfactor(hero.stats, stats) * hero.succeeded) / sum(weightfactor(hero.stats, stats))

weightfactor(hero.stats, stats) = k ^ distance(hero.stats, stats)

(Assuming 0 < k < 1, hero.succeeded is 1 if the hero succeeded and 0 otherwise, and both sums run over all heroes in the data)
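
A runnable sketch of the same idea (the field names `stats` and `succeeded` and the example data points are my stand-ins, not the actual dataset's columns):

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def estimated_chance(target_stats, heroes, k=0.8, distance=manhattan):
    """Kernel-weighted success rate: heroes whose stats are closer to
    target_stats get exponentially more weight (requires 0 < k < 1)."""
    total_weight = 0.0
    weighted_successes = 0.0
    for hero in heroes:
        w = k ** distance(hero["stats"], target_stats)
        total_weight += w
        weighted_successes += w * hero["succeeded"]  # succeeded is 1 or 0
    return weighted_successes / total_weight

# Toy usage with two made-up heroes:
heroes = [
    {"stats": (8, 14, 13, 13, 8, 16), "succeeded": 1},
    {"stats": (10, 10, 10, 10, 10, 10), "succeeded": 0},
]
print(estimated_chance((8, 14, 13, 12, 8, 16), heroes))
```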

I tried using both Euclidean and Manhattan distances, and various values for k as well. I also tried a hacky variant of Manhattan distance that added abs(sum(statsA) - sum(statsB)) to the result, but it didn't seem to change much.

Lastly, I tried replacing (hero.succeeded) with (hero.succeeded - linearprediction(sum(hero.stats))) to try to isolate builds that do well relative to their stat total. linearprediction is a simple model I threw together by eyeballing the data: 40% chance to succeed with total stats of 60, 100% chance with total stats >= 95, linear in between. Could probably be improved with not too much effort, but I have to stop somewhere.

I generally found two clusters of optima, one around (8, 14, 13, 13, 8, 16)—that is, +4 CHA, +2 STR, +4 WIS—and the other around (4, 16, 13, 14, 9, 16)—that is, +2 CON, +1 INT, +3 STR, +4 WIS. The latter was generally favored by low k values, as the heroes with stats closest to that value generally did quite well but those a little farther away got less impressive. So it could be a successful strategy that doesn't allow too much deviation, or just a fluke. Using the linear prediction didn't seem to change things much.

If I had to pick one final answer, it's probably (8, 14, 13, 13, 8, 16) (though there seems to be a fairly wide region of variants that tend to do pretty well—the rule seems to be 'some CHA, some WIS, and maybe a little STR'), but I find myself drawn towards the maybe-illusory (4, 16, 13, 14, 9, 16) niche solution.

ETA: Looks like I was iterating over an incomplete list of possible builds... but it turned out not to matter much.

ETA again (couldn't leave this alone): I tried computing log-likelihood scores for my predictors (restricting the 'training' set to the first half of the data and using only the second half for validation). I do find that with the right parameters some of my predictors do better than simple linear regression on sum of stats, and also better than the apparently-better predictor of simple linear regression on sum of non-dex stats. But they don't beat it by much. And it seems the better parameter values are the higher k values, meaning the (8, 14, 13, 13, 8, 16) cluster is probably the one to bet on.

Comment by ADifferentAnonymous on Prize: Interesting Examples of Evaluations · 2020-12-01T15:19:22.574Z · LW · GW

I see "property assessment" on the list, but it's worth calling out self-assessment specifically (where the owner has to sell their property if offered their self-assessed price).

Then there are those grades organizations give politicians. And media endorsements of politicians. And, for that matter, elections.

Keynesian beauty contests.

And it seems worth linking to this prior post (not mine): https://www.lesswrong.com/posts/BthNiWJDagLuf2LN2/evaluating-predictions-in-hindsight

Comment by ADifferentAnonymous on Inner Alignment in Salt-Starved Rats · 2020-11-25T00:24:29.369Z · LW · GW

Glad to hear this is helpful for you too :)

I didn't really follow the time-derivative idea before, and since you said it was equivalent I didn't worry about it :p. But either it's not really equivalent or I misunderstood the previous formulation, because I think everything works for me now.

So if we (1) decide "I will imagine yummy food", then (2) imagine yummy food, then (3) stop imagining yummy food, we get a positive reward from the second step and a negative reward from the third step, but both of those rewards were already predicted by the first step, so there's no RPE in either the second or third step, and therefore they don't feel positive or negative. Unless we're hungrier than we thought, I guess...

Well, what exactly happens if we're hungrier than we thought?

(1) "I will imagine food": No reward yet, expecting moderate positive reward followed by moderate negative reward.

(2) [Imagining food]: Large positive reward, but now expecting large negative reward when we stop imagining, so no RPE on previous step.

(3) [Stops imagining food]: Large negative reward as expected, no RPE for previous step.

The size of the reward can then be informative, but not actually rewarding (since it predictably nets to zero over time). The neocortex obtains hypothetical reward information from the subcortex, without actually extracting a reward—which is the thing I've been insisting had to be possible. Turns out we don't need to use a separate channel! And the subcortex doesn't have to know or care whether it's receiving a genuine prediction or an exploratory imagining from the neocortex—the incentives are right either way.

(We do still need some explanation of why the neocortex can imagine (predict?) food momentarily but can't keep imagining it forever, avoid step (3), and pocket a positive RPE after step (2). Common sense suggests one: keeping such a thing up is effortful, so you'd be paying ongoing costs for a one-time gain, and unless you can keep it up forever the reward still nets to zero in the end.)
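
A toy check of the bookkeeping in that walkthrough, using a standard TD-style error of the form r + V(next) - V(current) with made-up numbers (this is my illustration of the accounting, not a claim about the actual brain circuitry):

```python
# Value predictions at each step of the imagine-food cycle.
R = 5.0  # imagined-food "reward" (arbitrary illustrative number)
V = {"decide": 0.0, "imagining": -R, "stopped": 0.0}

# (state, next state, reward received on the transition)
transitions = [("decide", "imagining", +R), ("imagining", "stopped", -R)]

for s, s_next, r in transitions:
    rpe = r + V[s_next] - V[s]  # TD-style reward prediction error
    print(f"{s} -> {s_next}: reward {r:+.1f}, RPE {rpe:+.1f}")
# Both RPEs come out to zero: the positive and negative rewards were already
# predicted, so there's nothing for the neocortex to "pocket" by imagining food.
```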

Comment by ADifferentAnonymous on Inner Alignment in Salt-Starved Rats · 2020-11-24T00:25:50.128Z · LW · GW

Thanks for the reply; I've thought it over a bunch, and I think my understanding is getting clearer.

I think one source of confusion for me is that to get any mileage out of this model I have to treat the neocortex as a black box trying to maximize something, but it seems like we also need to rely on the fact that it executes a particular algorithm with certain constraints.

For instance, if we think of the 'reward predictions' sent to the subcortex as outputs the neocortex chooses, the neocortex has no reason to keep them in sync with the rewards it actually expects to receive—instead, it should just increase the reward predictions to the maximum for some free one-time RPE and then leave it there, while engaging in an unrelated effort to maximize actual reward.

(The equation V(s_prev) += (learning rate) ⋅ (RPE) explains why the neocortex can't do that, but adding a mathematical constraint to my intuitive model is not really a supported operation. If I say "the neocortex is a black box that does whatever will maximize RPE, subject to the constraint that it has to update its reward predictions according to that equation," then I have no idea what the neocortex can and can't do)

Adding in the basal ganglia as an 'independent' reward predictor seems to work. My first thought was that this would lead to an adversarial situation where the neocortex is constantly incentivized to fool the basal ganglia into predicting higher rewards, but I guess that isn't a problem if the basal ganglia is good at its job.

Still, I feel like I'm missing a piece to be able to understand imagination as a form of prediction. Imagining eating beans to decide how rewarding they would be doesn't seem to get any harder if I already know I don't have any beans. And it doesn't feel like "thoughts of eating beans" are reinforced, it feels like I gain abstract knowledge that eating beans would be rewarded.

Meanwhile, it's quite possible to trigger physiological responses by imagining things. Certainly the response tends to be stronger if there's an actual possibility of the imagined thing coming to pass, but it seems like there's a floor on the effect size, where arbitrarily low probability eventually stops weakening the effect. This doesn't seem like it stops working if you keep doing it—AIUI, not all hungry people are happier when they imagine glorious food, but they all salivate. So that's a feedback channel separate from reward. I don't see why there couldn't also be similar loops entirely within the brain, but that's harder to prove.

So when our rat thinks about salt, the amygdala detects that and alerts... idk, the hypothalamus? The part that knows it needs salt... and the rat starts salivating and feels something in its stomach that it previously learned means "my body wants the food" and concludes eating salt would be a good idea.

Comment by ADifferentAnonymous on Inner Alignment in Salt-Starved Rats · 2020-11-20T23:13:51.924Z · LW · GW

This might just be me not grokking predictive processing, but...

I feel like I do a version of the rat's task all the time to decide what to have for dinner—I imagine different food options, feel which one seems most appetizing, and then push the button (on Seamless) that will make that food appear.

Introspectively, this feels to me there's such a thing as 'hypothetical reward'. When I imagine a particular food, I feel like I get a signal from... somewhere... that tells me whether I would feel reward if I ate that food, but does not itself constitute reward. I don't generally feel any desire to spend time fantasizing about the food I'm waiting for.

To turn this into a brain model, this seems like the neocortex calling an API the subcortex exposes. Roughly, the neocortex can give the subcortex hypothetical sensory data and get a hypothetical reward in exchange. I suppose this is basically hypothesis two with a modification to avoid the pitfall you identify, although that's not how I arrived at the idea.

This does require a second dimension of subcortex-to-neocortex signal alongside the reward. Is there a reason to think there isn't one?

Comment by ADifferentAnonymous on Simulacra Levels and their Interactions · 2020-06-18T16:56:25.084Z · LW · GW

I'm not sure Level 3 is actually less agentic than Level 1. The Oracle does not choose which truths to speak in order to pursue goals; if they did, they'd be the Sage.