Posts

What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? 2021-07-25T06:47:56.249Z
I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction 2021-06-22T03:53:33.868Z
Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? 2021-06-20T21:57:04.078Z
How do we prepare for final crunch time? 2021-03-30T05:47:54.654Z
What are some real life Inadequate Equilibria? 2021-01-29T12:17:15.496Z
#2: Neurocryopreservation vs whole-body preservation 2021-01-13T01:18:05.890Z
Some recommendations for aligning the decision to go to war with the public interest, from The Spoils of War 2020-12-27T01:04:47.186Z
What is the current bottleneck on genetic engineering of human embryos for improved IQ 2020-10-23T02:36:55.748Z
How To Fermi Model 2020-09-09T05:13:19.243Z
Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? 2020-08-14T19:19:30.980Z
Basic Conversational Coordination: Micro-coordination of Intention 2020-07-27T22:41:53.236Z
If you are signed up for cryonics with life insurance, how much life insurance did you get and over what term? 2020-07-22T08:13:38.931Z
The Basic Double Crux pattern 2020-07-22T06:41:28.130Z
What are some Civilizational Sanity Interventions? 2020-06-14T01:38:44.980Z
Ideology/narrative stabilizes path-dependent equilibria 2020-06-11T02:50:35.929Z
Most reliable news sources? 2020-06-06T20:24:58.529Z
Anyone recommend a video course on the theory of computation? 2020-05-30T19:52:43.579Z
A taxonomy of Cruxes 2020-05-27T17:25:01.011Z
Should I self-variolate to COVID-19 2020-05-25T20:29:42.714Z
My dad got stung by a bee, and is mildly allergic. What are the tradeoffs involved in deciding whether to have him go the the emergency room? 2020-04-18T22:12:34.600Z
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T05:00:35.435Z
Resource for the mappings between areas of math and their applications? 2020-03-30T06:00:10.297Z
When are the most important times to wash your hands? 2020-03-15T00:52:56.843Z
How likely is it that US states or cities will prevent travel across their borders? 2020-03-14T19:20:58.863Z
Recommendations for a resource on very basic epidemiology? 2020-03-14T17:08:27.104Z
What is the best way to disinfect a (rental) car? 2020-03-11T06:12:32.926Z
Model estimating the number of infected persons in the bay area 2020-03-09T05:31:44.002Z
At what point does disease spread stop being well-modeled by an exponential function? 2020-03-08T23:53:48.342Z
How are people tracking confirmed Coronavirus cases / Coronavirus deaths? 2020-03-07T03:53:55.071Z
How should I be thinking about the risk of air travel (re: Coronavirus)? 2020-03-02T20:10:40.617Z
Is there any value in self-quarantine (from Coronavirus), if you live with other people who aren't taking similar precautions? 2020-03-02T07:31:10.586Z
What should be my triggers for initiating self quarantine re: Corona virus 2020-02-29T20:09:49.634Z
Does anyone have a recommended resource about the research on behavioral conditioning, reinforcement, and shaping? 2020-02-19T03:58:05.484Z
Key Decision Analysis - a fundamental rationality technique 2020-01-12T05:59:57.704Z
What were the biggest discoveries / innovations in AI and ML? 2020-01-06T07:42:11.048Z
Has there been a "memetic collapse"? 2019-12-28T05:36:05.558Z
What are the best arguments and/or plans for doing work in "AI policy"? 2019-12-09T07:04:57.398Z
Historical forecasting: Are there ways I can get lots of data, but only up to a certain date? 2019-11-21T17:16:15.678Z
How do you assess the quality / reliability of a scientific study? 2019-10-29T14:52:57.904Z
Request for stories of when quantitative reasoning was practically useful for you. 2019-09-13T07:21:43.686Z
What are the merits of signing up for cryonics with Alcor vs. with the Cryonics Institute? 2019-09-11T19:06:53.802Z
Does anyone know of a good overview of what humans know about depression? 2019-08-30T23:22:05.405Z
What is the state of the ego depletion field? 2019-08-09T20:30:44.798Z
Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? 2019-07-29T22:59:33.170Z
Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea? 2019-07-09T21:57:28.537Z
Does scientific productivity correlate with IQ? 2019-06-16T19:42:29.980Z
Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? 2019-06-16T19:12:48.358Z
Eli's shortform feed 2019-06-02T09:21:32.245Z
Historical mathematicians exhibit a birth order effect too 2018-08-21T01:52:33.807Z

Comments

Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-28T08:38:14.078Z · LW · GW

This is a particularly helpful answer for me somehow. Thanks.

I think I might add one more: probability. For instance, "what are the base rates for people meeting good cofounders (in general, or in specific contexts)?" Knowing the answer to this might tell you how much you should be willing to trade off to optimize for working with potential cofounders.

Though, probably "risk" and "probability" should be one category.

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:24:56.141Z · LW · GW

Really? The plausibility ordering is "transplant to new body > become robot > revive old body"?

I would have guessed it would be "revive old body > transplant to new body > become robot". 

Am I missing something?

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:21:06.278Z · LW · GW

What seems ideal to me would be doing both: remove the head from the body, and then cryopreserve and store them both separately. This would give you the benefit of a faster perfusion of the brain and ease of transport in an emergency, but also keep the rest of the body around on the off-chance that it contains personality-relevant info.

I might consider this "option" [Is this an option? As far as I know, no one has done this, so it would presumably be a special arrangement with Alcor.] when I am older and richer.

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:17:45.097Z · LW · GW

It seems worth noting that I have opted for neuropreservation instead of full-body, at least at this time, in large part due to the price difference. The "inclination to cryopreserve my full body" noted above was not sufficient to sway my choice.

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:07:16.208Z · LW · GW

Fortunately, before the Coroner executed a search warrant, her head mysteriously disappeared from the Alcor facility. That gave Alcor the time to get a permanent injunction in the courts against autopsying her head.

Wow. Sounds like that was an exciting (and/or nerve-wracking) week at Alcor!

Comment by Eli Tyre (elityre) on The shoot-the-moon strategy · 2021-07-25T10:10:28.561Z · LW · GW

hahahahah

Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-25T09:01:06.555Z · LW · GW

I probably do basic sanity checks moderately often, just to see if something makes sense in context. But that's already intuition-level, almost. 

If it isn't too much trouble, can you give four more real examples of when you've done this? (They don't need to be as detailed as your first one. A sentence describing the thing you were checking is fine.)

Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-25T08:59:22.260Z · LW · GW

Last time I actually pulled an excel was when Taleb was against IQ and said its only use is to measure low IQ. I wanted to see if this could explain (very) large country differences. So I made a trivial model where you have parts of the population affected by various health issues that can drop the IQ by 10 points. And the answer was yes, if you actually have multiple causes and they stack up, you can end up with the incredibly low averages we see (in the 60s for some areas). 

I'm glad that I asked the alternative phrasing of my question, because this anecdote is informative!
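For concreteness, a minimal sketch of the kind of stacking model described above might look like this (the specific insults, prevalences, and uniform 10-point effects are made-up placeholders, not the numbers from the original spreadsheet):

```python
# Toy stacking model: several health insults, each costing ~10 IQ points for the
# affected fraction of the population. All prevalences are illustrative guesses.

baseline_mean = 100  # population mean with none of the insults

# (insult, fraction of population affected, IQ points lost if affected)
insults = [
    ("iodine deficiency",       0.8, 10),
    ("childhood malnutrition",  0.7, 10),
    ("malaria / parasite load", 0.7, 10),
    ("lead exposure",           0.6, 10),
    ("other chronic disease",   0.5, 10),
]

# By linearity of expectation, the expected losses simply add up,
# whether or not the insults are independent.
expected_loss = sum(frac * points for _, frac, points in insults)
print(f"expected population mean IQ: {baseline_mean - expected_loss:.0f}")  # -> 67
```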

Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-25T08:58:41.252Z · LW · GW

Can you be more specific? Presumably it was possible to open a spreadsheet when you were typing this answer, but I'm guessing that you didn't?

Comment by Eli Tyre (elityre) on What would it look like if it looked like AGI was very near? · 2021-07-14T01:29:00.878Z · LW · GW

and it's very difficult to have [a general intelligence] below human-scale!

I would be surprised if this was true, because it would mean that the blind search process of evolution was able to create a close to maximally-efficient general intelligence.

Comment by Eli Tyre (elityre) on What is the current bottleneck on genetic engineering of human embryos for improved IQ · 2021-07-14T01:14:50.393Z · LW · GW

Greg Cochran's idea

Do you have a citation for this?

Comment by Eli Tyre (elityre) on Moral Complexities · 2021-07-13T07:56:31.828Z · LW · GW

Perhaps even simpler: it is adaptive to have a sense of fairness because you don't want to be the jerk, 'cuz then everyone will dislike you, oppose you, and not aid you.

The biggest, meanest monkey doesn't stay on top for very long, but a big, largely fair monkey does?

Comment by Eli Tyre (elityre) on Moral Complexities · 2021-07-13T07:51:35.217Z · LW · GW

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?  Why are the two propositions argued in different ways?

  • I want to consider this question carefully.
    • My first answer is that arguing about morality is a political maneuver that is more likely to work for getting what you want than simply declaring your desires.
      • But that raises the question: why is it more likely to work? Why are other people, or non-sociopaths, swayed by moral arguments?
        • It seems like they, or their genes, must get something out of being swayed by moral arguments.
        • You might think that it is better coordination or something. But I don't think that adds up. If everyone makes moral arguments insincerely, then the moral arguments don't actually add more coordination.
          • But remember that morality is enforced...?
          • Ok. Maybe the deal is that humans are loss averse. And they can project, in any given conflict, being in the weaker party's shoes, and generalize the situation to other situations that they might be in. And so, any given onlooker would prefer norms that don't hurt the loser too badly? And so, they would opt into a timeless contract where they would uphold a standard of "fairness"?
            • But also the contract is enforced.
          • I think this can maybe be said more simply? People have a sense of rage at someone taking advantage of someone else iff they can project that they could be in the loser's position?
            • And this makes sense if the "taking advantage" is likely to generalize. If the jerk is pretty likely to take advantage of you, then it might be adaptive to oppose the jerk in general?
              • For one thing, if you oppose the jerk when he bullies someone else, then that someone else is more likely to oppose him when he is bullying you.
          • Or maybe this can be even more simply reduced to a form of reciprocity? It's adaptive to do favors for non-kin, iff they're likely to do favors for you?
            • There's a bit of a bootstrapping problem there, but it doesn't seem insurmountable.
          • I want to keep in mind that all of this is subject to scapegoating dynamics, where some group A coordinates to keep another group B down, because A and B can be clearly differentiated and therefore members of A don't have to fear the bullying of other members of A. 
            • This seems like it has actually happened, a bunch, in history. Whites and Blacks in American history is a particularly awful example that comes to mind.
Comment by Eli Tyre (elityre) on Potential Bottlenecks to Taking Over The World · 2021-07-08T00:39:43.772Z · LW · GW

Pinky, v3.41.08: Well, coordination constraints are a big one. They appear to be fundamentally intractable, as soon as we allow for structural divergence of world-models. Which means I can’t even coordinate robustly with copies of myself unless we either lock in the structure of the world-model (which would severely limit learning), or fully synchronize at regular intervals (which would scale very poorly, the data-passing requirements would be enormous).

This seems like a straightforward philosophy / computer science / political science problem. Is there a reason why Pinky version [whatever] can't just find a good solution to it? Maybe after it has displaced the entire software industry?

It seems like you need a really strong argument that this problem is intractable, and I don't see what it is.

Comment by Eli Tyre (elityre) on Less Realistic Tales of Doom · 2021-07-05T10:43:14.744Z · LW · GW

Yes. Though it could be improved further by elaborating "the work of fiction that is spoiled", instead of just "the work."

Comment by Eli Tyre (elityre) on A Parable On Obsolete Ideologies · 2021-07-02T16:37:42.075Z · LW · GW

I think the jargon here is actually useful compression.

Comment by Eli Tyre (elityre) on A Parable On Obsolete Ideologies · 2021-07-02T16:36:38.526Z · LW · GW

doesn't have an evil ideology tied up with it? 

 

(At least no ideology much worse than a lot of popular political movements).

These are strongly different claims.

Comment by Eli Tyre (elityre) on Normal Cryonics · 2021-06-30T09:41:14.432Z · LW · GW

This made me smile. : )

Comment by Eli Tyre (elityre) on rohinmshah's Shortform · 2021-06-29T22:16:08.264Z · LW · GW

Good point.

But in this case, you guys are both seeking utility, right? And that's what pushed you to some common behaviors?

Comment by Eli Tyre (elityre) on rohinmshah's Shortform · 2021-06-29T04:40:18.559Z · LW · GW

Instrumental convergence!

Comment by Eli Tyre (elityre) on Another (outer) alignment failure story · 2021-06-29T02:36:08.417Z · LW · GW

This sentence stuck out to me:

Again, we can tell the treaties at least kind of work because we can tell that no one is dying.

How can we tell? It's already the case that I'm pretty much at the mercy of my news sources. It seems like all kinds of horrible stuff might be happening all over the world, and I wouldn't know about it.

Comment by Eli Tyre (elityre) on Less Realistic Tales of Doom · 2021-06-29T02:21:37.090Z · LW · GW

Um. This spoiler tag was not very helpful because I didn't have any hint about what was hiding under it.

Comment by Eli Tyre (elityre) on I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 2021-06-22T03:55:48.488Z · LW · GW

My first half-baked thoughts about what sort of abstraction we might use instead of utility functions:

Maybe instead of thinking about preferences as rankings over worlds, we think of preferences as like gradients. Given the situation that an agent finds itself in, there are some directions to move in state space that it prefers and some that it disprefers. And as the agent moves through the world, and its situation changes, its preference gradients might change too.

This allows for cycles, where from a, the agent prefers b, and from b, the agent prefers c, and from c, the agent prefers a.

It also means that preferences are inherently contextual. It doesn’t make sense to ask what an agent wants in the abstract, only what it wants given some situated context. This might be a feature, not a bug, in that it resolves some puzzles about values. 

This implies a sort of non-transitivity of preferences. If you can predict that you'll want something in the future, that doesn't necessarily imply that you want it now.
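To gesture at what I mean, here's a toy sketch in code (the state names and the particular cycle are mine, purely illustrative): preferences live at the level of the current state, as a map from states to preferred moves rather than a global ranking, so a cycle is perfectly representable.

```python
# Toy "preference gradient": instead of a global ranking over world states,
# the agent's preferences are a function of its current state -- a preferred
# direction to move -- which makes cycles perfectly representable.

preferred_move = {
    "a": "b",  # from a, the agent prefers b
    "b": "c",  # from b, it prefers c
    "c": "a",  # from c, it prefers a
}

state = "a"
trajectory = [state]
for _ in range(6):
    state = preferred_move[state]  # always follow the local preference
    trajectory.append(state)

print(" -> ".join(trajectory))  # a -> b -> c -> a -> b -> c -> a
```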

Comment by Eli Tyre (elityre) on Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? · 2021-06-22T03:34:08.466Z · LW · GW

The UK also has a political tradition of scientists being closely involved in some policy decisions.

Insofar as this is true, I want to know why, and why the US doesn't have a similar tradition.

Comment by Eli Tyre (elityre) on Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? · 2021-06-22T03:31:38.478Z · LW · GW

I should note, Cummings does appear to be playing politics in parts of his testimony. He is scathing in his comments on some people (Matt Hancock, Health Secretary), and describes others in glowing terms (Rishi Sunak, Chancellor).

Yeah. I want to emphasize this to everyone in the Less Wrong sphere who is casually aware of Cummings. He is interested in a lot of the same kinds of things that we are, but he doesn't hold to exactly the same epistemic mores. There are lots of videos of him on YouTube in which he is obviously trying to win arguments / score points, instead of straightforwardly saying what's true.

This is probably appropriate and adaptive for his context in the world, but it does make me cringe, and if I observed my friends and coworkers engaging in moves that were that straightforwardly un-epistemic, I would be shocked and disturbed. 

Comment by Eli Tyre (elityre) on Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? · 2021-06-22T03:31:11.340Z · LW · GW

The UK Government apparently has a small team of PhDs tasked with translating complicated papers and other data for the leadership. This team was assembled by rationalist-adjacent Dominic Cummings, who has since left (essentially the Prime Minister stopped listening to him).

If UK Government policy appears sensible and forward thinking, this team is likely the source. I fear with the removal of their patron they may not last long.

The way this is phrased, you make it sound like Cummings put such a team together, but that the team is still there after his departure, at least for the time being.

I had assumed that when Cummings was ousted everyone that he was working with went with him, and therefore, whatever team of people he assembled was also out of the job. 

Do you have any additional information here? Do you know which it was?

Comment by Eli Tyre (elityre) on Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? · 2021-06-21T07:04:16.756Z · LW · GW

they have vaccinated 80% of adults with 1st doses, so perhaps they've decided that the need for 1st doses is dropping

That was my impression.

Comment by Eli Tyre (elityre) on Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? · 2021-06-20T23:52:51.486Z · LW · GW

An important note!

Comment by Eli Tyre (elityre) on Rob B's Shortform Feed · 2021-06-19T03:29:58.898Z · LW · GW

It is my impression he also helped establish the Center for Applied Rationality, which has the explicit mission of training skills. (I'm not sure if he technically did but he was part of the community which did and he helped promote it in its early days.)

Eliezer was involved with CFAR in the early days, but has not been involved since at least 2016.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-06-19T02:02:07.373Z · LW · GW

I’m no longer sure that I buy dutch book arguments, in full generality, and this makes me skeptical of the "utility function" abstraction

Thesis: I now think that utility functions might be a pretty bad abstraction for thinking about the behavior of agents in general, including highly capable agents.

[Epistemic status: half-baked, elucidating an intuition. Possibly what I’m saying here is just wrong, and someone will helpfully explain why.]

Over the past years, in thinking about agency and AI, I’ve taken the concept of a “utility function” for granted as the natural way to express an entity's goals or preferences. 

Of course, we know that humans don't have well-defined utility functions (they're inconsistent, and subject to all kinds of framing effects), but that's only because humans are irrational. To the extent that a thing acts like an agent, its behavior corresponds to some utility function. That utility function might not be explicitly represented, but if an agent is rational, there's some utility function that reflects its preferences.

Given this, I might be inclined to scoff at people who scoff at "blindly maximizing" AGIs. "They just don't get it", I might think. "They don't understand why agency has to conform to some utility function, and why an AI would try to maximize expected utility."

Currently, I’m not so sure. I think that talking in terms of utility functions is biting a philosophical bullet, and importing some unacknowledged assumptions. Rather than being the natural way to conceive of preferences and agency, I think utility functions might be only one possible abstraction, and one that emphasizes the wrong features, giving a distorted impression of what agents, in general, are actually like.

I want to explore that possibility in this post.

Before I begin, I want to make two notes. 

First, all of this is going to be hand-wavy intuition. I don't have crisp knock-down arguments, only a vague discontent. But it seems like more progress will follow if I write up my current, tentative stance, even without formal arguments.

Second, I don't think utility functions being a poor abstraction for agency in the real world has much bearing on whether there is AI risk. As I'll discuss, it might change the shape and tenor of the problem, but highly capable agents with alien seed preferences are still likely to be catastrophic to human civilization and human values. I mention this because the sentiments expressed in this essay are causally downstream of conversations that I've had with skeptics about whether there is AI risk at all. So I want to highlight: I think I was mistakenly overlooking some philosophical assumptions, but that is not a crux.

Is coherence overrated? 

The tagline of the "utility" page on Arbital is "The only coherent way of wanting things is to assign consistent relative scores to outcomes."

This is true as far as it goes, but to me, at least, that sentence implies a sort of dominance of utility functions. “Coherent” is a technical term, with a precise meaning, but it also has connotations of “the correct way to do things”. If someone’s theory of agency is incoherent, that seems like a mark against it. 

But it is possible to ask, "What's so good about coherence anyway? Maybe it's fine to be incoherent."

The standard reply, of course, is that if your preferences are incoherent, you're dutch-bookable, and someone will pump you for money.

But I’m not satisfied with this argument. It isn’t obvious that being dutch booked is a bad thing.

In Coherent Decisions Imply Consistent Utilities, Eliezer says:

Suppose I tell you that I prefer pineapple to mushrooms on my pizza. Suppose you're about to give me a slice of mushroom pizza; but by paying one penny ($0.01) I can instead get a slice of pineapple pizza (which is just as fresh from the oven). It seems realistic to say that most people with a pineapple pizza preference would probably pay the penny, if they happened to have a penny in their pocket.

After I pay the penny, though, and just before I'm about to get the pineapple pizza, you offer me a slice of onion pizza instead--no charge for the change! If I was telling the truth about preferring onion pizza to pineapple, I should certainly accept the substitution if it's free.

And then to round out the day, you offer me a mushroom pizza instead of the onion pizza, and again, since I prefer mushrooms to onions, I accept the swap.

I end up with exactly the same slice of mushroom pizza I started with... and one penny poorer, because I previously paid $0.01 to swap mushrooms for pineapple.

This seems like a qualitatively bad behavior on my part.

Eliezer asserts that this is “qualitatively bad behavior.” But I think that this is biting a philosophical bullet. 

As an intuition pump: In the actual case of humans, we seem to get utility not from states of the world, but from changes in states of the world. So it isn’t unusual for a human to pay to cycle between states of the world. 

For instance, I could imagine a human being hungry, eating a really good meal, feeling full, and then happily paying a fee to be instantly returned to their hungry state, so that they can enjoy eating a good meal again. 

This is technically a dutch booking (which do they prefer, being hungry or being full?), but from the perspective of the agent's values there's nothing qualitatively bad about it. Instead of the dutch booker pumping money from the agent, he's offering a useful and appreciated service.
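To make that intuition pump concrete, here's a toy simulation (the fee and values are made-up numbers for illustration): an agent whose value attaches to transitions rather than to states pays the would-be dutch booker every loop, and comes out ahead by its own lights.

```python
# Toy agent that values transitions (changes of state) rather than states.
# The fee and transition values below are illustrative assumptions.

transition_value = {
    ("hungry", "full"): 10.0,  # eating a great meal while hungry is worth a lot
    ("full", "hungry"): 0.0,   # being returned to hungry is neutral in itself
}
reset_fee = 1.0  # what the would-be dutch booker charges to reset the agent

welfare, money = 0.0, 20.0
for _ in range(3):                                   # three loops of the "pump"
    welfare += transition_value[("hungry", "full")]  # eat: hungry -> full
    welfare += transition_value[("full", "hungry")]  # pay to be made hungry again
    money -= reset_fee

print(f"welfare gained: {welfare}, money paid: {20.0 - money}")
# welfare gained: 30.0, money paid: 3.0 -- money flows out every cycle,
# but by the agent's own lights it is buying a service, not being exploited.
```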

Of course, we can still back out a utility function from this dynamic: instead of having a mapping of ordinal numbers to world states, we can have one from ordinal numbers to changes from one world state to another.

But that just passes the buck one level. I see no reason in principle why an agent couldn't have a preference to rotate between different changes in the world, just as well as rotating between different states of the world.

But this also misses the central point. I think you can always construct a utility function that represents some behavior. But if one is no longer compelled by dutch book arguments, this raises the question of why we would want to do that. If coherence is no longer a desideratum, it's no longer clear that a utility function is the natural way to express preferences.

And I wonder, maybe this also applies to agents in general, or at least the kind of learned agents that humans are likely to build via gradient descent. 

Maximization behavior

I think this matters, because many of the classic AI risk arguments go through a claim that maximization behavior is convergent. If you try to build a satisficer, there are a number of pressures for it to become a maximizer of some kind. (See this Rob Miles video, for instance)

I think that most arguments of that sort depend on an agent acting according to an expected utility maximization framework. And if utility maximization turns out not to be a good abstraction for agents in the real world, I don't know if these arguments are still correct.

I posit that straightforward maximizers are rare in the multiverse, and that most evolved or learned agents are better described by some other abstraction.  

If not utility functions, then what?

If we accept for the time being that utility functions are a warped abstraction for most agents, what might a better abstraction be?

I don’t know. I’m writing this post in the hopes that others will think about this question and perhaps come up with productive alternative formulations. 

I'll post some of my half-baked thoughts on this question shortly.

Comment by Eli Tyre (elityre) on How do we prepare for final crunch time? · 2021-06-11T00:20:12.369Z · LW · GW

Glad to help.

Comment by Eli Tyre (elityre) on Takeaways from one year of lockdown · 2021-05-28T09:57:54.785Z · LW · GW

4. When I look back and think "what did we get wrong that we realistically could have done better?" I think the thing is having a clearer model that negotiations would calcify and people would get exhausted, and also that people wouldn't primarily be "applying agency and rationality" the whole way through, they would be settling into patterns, and choosing "the amount of relaxed/paranoid to be" rather than choosing "the right actions based on their goals and values." It's easier to dial-up and dial-down paranoia, than it is to change complex strategy. And we weren't factoring that in, but when I'm judging our rationality, that's what I think we could have credibly figured out in advance.

This does seem like a super-important lesson to me.

It also feels like it gives me some insight into the gears of why the world is mad.

Comment by Eli Tyre (elityre) on Takeaways from one year of lockdown · 2021-05-28T09:55:11.445Z · LW · GW

My guess is people who refactored into smaller houses had a healthier time)

Really? I would not guess this. It seems like having more people around in your day-to-day social environment is on-net better.

Comment by Eli Tyre (elityre) on Takeaways from one year of lockdown · 2021-05-28T09:47:16.501Z · LW · GW

And by the time we knew the important, action-relevant information like transmission vectors and all that jazz, we were already used to being afraid, and we failed to adjust.

This sentence is basically true of me as well, but without the emotional valence. My version: 

And by the time I knew the important, action-relevant information like transmission vectors and all that jazz, I was already settled into a rhythm of pretty intense lock down, and I failed to adjust.

In retrospect, I wish I had spent more time traveling to see and talk with people I know in other parts of the country. I did consider this, but I concluded that I shouldn't.

3 concrete things that I failed to account for:

  • Quantitatively, how bad it would be for me to catch COVID in expectation, and especially how likely I was to get permanent chronic fatigue (which is a very bad outcome).
  • How effective P100 masks are (I now think of them as "it's as if you're vaccinated for as long as you're wearing it", which gives a different tenor to my sense of risk).
  • That I could rent a minivan, and set it up for sleeping in so I wouldn't need to pay for hotels or deal with the communal spaces of hotels.
Comment by Eli Tyre (elityre) on What are some real life Inadequate Equilibria? · 2021-05-19T21:35:45.418Z · LW · GW

I didn't put a deadline for the bounty, but I'm now setting the deadline for June 1, 2021. I'm going to count up all the answers and reach out to winners some time that week, only counting answers that came in on or before that date.

Feel free to try and slip in last minute. 

Comment by Eli Tyre (elityre) on The AI Timelines Scam · 2021-05-02T05:15:06.032Z · LW · GW

The key sentiment of this post that I currently agree with:

  • There's a bit of a short timelines "bug" in the Berkeley rationalist scene, where short timelines have become something like the default assumption (or at least are not unusual). 
  • There don't seem to be strong, public reasons for this view. 
  • It seems like most people who are sympathetic to short timelines are sympathetic to it mainly as the result of a social proof cascade. 
  • But this is obscured somewhat, because some folks whose opinions are being trusted don't show their work (rightly or wrongly), because of info security considerations.
Comment by Eli Tyre (elityre) on The AI Timelines Scam · 2021-05-02T04:50:45.446Z · LW · GW

Is there a reason you need to do 50 year plans before you can do 10 year plans? I'd expect the opposite to be true.

I think you do need to learn how to make plans that can actually work, at all, before you learn how to make plans with very limited resources.

And I think that people fall into the habit of making "plans" that they don't inner-sim actually leading to success, because they condition themselves into thinking that things are desperate and that the best action will only be the best action "in expected value", e.g. that the "right" action should look like a moonshot.

This seems concerning to me. It seems like you should be, first and foremost, figuring out how you can get any plan that works at all, and then secondarily, trying to figure out how to make it work in the time allotted. Actual, multi-step strategy shouldn't mostly feel like "thinking up some moon-shots".

Comment by Eli Tyre (elityre) on The AI Timelines Scam · 2021-05-02T04:39:30.631Z · LW · GW

Never mind. It seems like I should have just kept reading.

Comment by Eli Tyre (elityre) on The AI Timelines Scam · 2021-05-02T04:38:46.663Z · LW · GW

Pivot Into Governance: this is what a lot of AI risk orgs are doing

What? How exactly is this a way of dealing with the hype bubble bursting? It seems like if it bursts for AI, it bursts for "AI governance"?

Am I missing something?

Comment by Eli Tyre (elityre) on The AI Timelines Scam · 2021-05-02T04:25:09.294Z · LW · GW

This comment is such a good example of managing to be non-triggering in making the point. It stands out to me amongst all the comments above it, which are at least somewhat heated.

Comment by Eli Tyre (elityre) on The AI Timelines Scam · 2021-05-02T04:20:01.141Z · LW · GW

Note also, it seems useful for there to be essays on the Democrat party's marketing strategy that don't also talk about the Republican party's marketing strategy

Minor, unconfident, point: I'm not sure that this is true. It seems like it would result in people mostly fallacy-fallacy-ing the other side, each with their own "look how manipulative the other guys are" essays. If the target is thoughtful people trying to figure things out, they'll want to hear about both sides, no?

Comment by Eli Tyre (elityre) on How do we prepare for final crunch time? · 2021-03-31T08:07:40.716Z · LW · GW

Strong agree.

Comment by Eli Tyre (elityre) on How do we prepare for final crunch time? · 2021-03-31T08:07:23.073Z · LW · GW

I suspect that it becomes more and more rate limiting as technological progress speeds up.

Like, to a first approximation, I think there's a fixed cost to learning to use and take full advantage of a new tool. Let's say that cost is a few weeks of experimentation and tinkering. If importantly new tools are invented on a cadence of once every 3 years, that fixed cost is negligible. But if importantly new tools are dropping every week, the fixed cost becomes much more of a big deal.
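As a quick back-of-the-envelope version of that (the "few weeks" and "every 3 years" figures are from the paragraph above; the intermediate cadences are mine):

```python
# Back-of-the-envelope: what fraction of your time goes to the fixed cost of
# learning new tools, as a function of how often importantly-new tools arrive.

learning_cost_weeks = 3  # "a few weeks of experimentation and tinkering"

for cadence_weeks in (156, 52, 4, 1):  # every 3 years, year, month, week
    overhead = learning_cost_weeks / cadence_weeks
    print(f"a new tool every {cadence_weeks:3d} weeks -> "
          f"{overhead:.0%} of your time on the fixed cost")
# every 156 weeks -> 2%; every week -> 300% (you can no longer keep up at all)
```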

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-30T05:50:28.323Z · LW · GW

If you're so price sensitive $1000 is meaningful, well, uh try to find a solution to this crisis.  I'm not saying one exists, but there are survival risks to poverty.

Lol. I'm not impoverished, but I want to cheaply experiment with having a car. It isn't worth it to throw away $30,000 on a thing that I'm not going to get much value from.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-30T05:13:34.130Z · LW · GW

I recall a Chris Olah post in which he talks about using AIs as a tool for understanding the world, by letting the AI learn, and then using interpretability tools to study the abstractions that the AI uncovers.

I thought he specifically mentioned "using AI as a microscope."

Is that a real post, or am I misremembering this one? 

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-26T09:36:33.757Z · LW · GW

Are there any hidden risks to buying or owning a car that someone who's never been a car owner might neglect?

I'm considering buying a very old (ie from the 1990s), very cheap (under $1000, ideally) minivan, as an experiment.

That's inexpensive enough that I'm not that worried about it completely breaking down on me. I'm willing to just eat the monetary cost for the information value.

However, maybe there are other costs or other risks that I'm not tracking, that make this a worse idea.

Things like

- Some ways that a car can break make it dangerous, instead of non-functional.

- Maybe if a car breaks down in the middle of Route 66, the government fines you a bunch?

- Something something car insurance?

Are there other things that I should know? What are the major things that one should check for to avoid buying a lemon?

Assume I'm not aware of even the most drop-dead basic stuff. I'm probably not.

(Also, I'm in the market for a minivan, or other car with 3 rows of seats. If you have an old car like that which you would like to sell, or if you know someone who does, get in touch.

Do note that I am extremely price sensitive, but I would pay somewhat more than $1000 for a car, if I were confident that it was not a lemon.)

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-18T03:42:42.871Z · LW · GW

Question: Have Moral Mazes been getting worse over time? 

Could the growth of Moral Mazes be the cause of cost disease? 

I was thinking about how I could answer this question. I think that the thing that I need is a good quantitative measure of how "mazy" an organization is. 

I considered the metric of "how much output for each input", but 1) that metric is just cost disease itself, so it doesn't help us distinguish the mazy cause from other possible causes, and 2) if you're good enough at rent seeking, maybe you can get high revenue despite your poor production.

What metric could we use?

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-02-28T20:38:59.875Z · LW · GW

Is there a standard article on what "the critical risk period" is?

I thought I remembered an Arbital post, but I can't seem to find it.

Comment by Eli Tyre (elityre) on Yoav Ravid's Shortform · 2021-02-25T05:42:06.385Z · LW · GW

My guess was: you could have a different map for different parts of the globe, i.e. a part that focuses on Africa (and therefore has minimal distortions of Africa), and a separate part for America, and a separate part for Asia, and so on.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-02-25T05:34:46.429Z · LW · GW

Is there a LessWrong article that unifies physical determinism and choice / "free will"? Something about thinking of yourself as the algorithm computed on this brain?