Posts

How do we prepare for final crunch time? 2021-03-30T05:47:54.654Z
What are some real life Inadequate Equilibria? 2021-01-29T12:17:15.496Z
#2: Neurocryopreservation vs whole-body preservation 2021-01-13T01:18:05.890Z
Some recommendations for aligning the decision to go to war with the public interest, from The Spoils of War 2020-12-27T01:04:47.186Z
What is the current bottleneck on genetic engineering of human embryos for improved IQ 2020-10-23T02:36:55.748Z
How To Fermi Model 2020-09-09T05:13:19.243Z
Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? 2020-08-14T19:19:30.980Z
Basic Conversational Coordination: Micro-coordination of Intention 2020-07-27T22:41:53.236Z
If you are signed up for cryonics with life insurance, how much life insurance did you get and over what term? 2020-07-22T08:13:38.931Z
The Basic Double Crux pattern 2020-07-22T06:41:28.130Z
What are some Civilizational Sanity Interventions? 2020-06-14T01:38:44.980Z
Ideology/narrative stabilizes path-dependent equilibria 2020-06-11T02:50:35.929Z
Most reliable news sources? 2020-06-06T20:24:58.529Z
Anyone recommend a video course on the theory of computation? 2020-05-30T19:52:43.579Z
A taxonomy of Cruxes 2020-05-27T17:25:01.011Z
Should I self-variolate to COVID-19 2020-05-25T20:29:42.714Z
My dad got stung by a bee, and is mildly allergic. What are the tradeoffs involved in deciding whether to have him go to the emergency room? 2020-04-18T22:12:34.600Z
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T05:00:35.435Z
Resource for the mappings between areas of math and their applications? 2020-03-30T06:00:10.297Z
When are the most important times to wash your hands? 2020-03-15T00:52:56.843Z
How likely is it that US states or cities will prevent travel across their borders? 2020-03-14T19:20:58.863Z
Recommendations for a resource on very basic epidemiology? 2020-03-14T17:08:27.104Z
What is the best way to disinfect a (rental) car? 2020-03-11T06:12:32.926Z
Model estimating the number of infected persons in the bay area 2020-03-09T05:31:44.002Z
At what point does disease spread stop being well-modeled by an exponential function? 2020-03-08T23:53:48.342Z
How are people tracking confirmed Coronavirus cases / Coronavirus deaths? 2020-03-07T03:53:55.071Z
How should I be thinking about the risk of air travel (re: Coronavirus)? 2020-03-02T20:10:40.617Z
Is there any value in self-quarantine (from Coronavirus), if you live with other people who aren't taking similar precautions? 2020-03-02T07:31:10.586Z
What should be my triggers for initiating self quarantine re: Corona virus 2020-02-29T20:09:49.634Z
Does anyone have a recommended resource about the research on behavioral conditioning, reinforcement, and shaping? 2020-02-19T03:58:05.484Z
Key Decision Analysis - a fundamental rationality technique 2020-01-12T05:59:57.704Z
What were the biggest discoveries / innovations in AI and ML? 2020-01-06T07:42:11.048Z
Has there been a "memetic collapse"? 2019-12-28T05:36:05.558Z
What are the best arguments and/or plans for doing work in "AI policy"? 2019-12-09T07:04:57.398Z
Historical forecasting: Are there ways I can get lots of data, but only up to a certain date? 2019-11-21T17:16:15.678Z
How do you assess the quality / reliability of a scientific study? 2019-10-29T14:52:57.904Z
Request for stories of when quantitative reasoning was practically useful for you. 2019-09-13T07:21:43.686Z
What are the merits of signing up for cryonics with Alcor vs. with the Cryonics Institute? 2019-09-11T19:06:53.802Z
Does anyone know of a good overview of what humans know about depression? 2019-08-30T23:22:05.405Z
What is the state of the ego depletion field? 2019-08-09T20:30:44.798Z
Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? 2019-07-29T22:59:33.170Z
Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea? 2019-07-09T21:57:28.537Z
Does scientific productivity correlate with IQ? 2019-06-16T19:42:29.980Z
Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? 2019-06-16T19:12:48.358Z
Eli's shortform feed 2019-06-02T09:21:32.245Z
Historical mathematicians exhibit a birth order effect too 2018-08-21T01:52:33.807Z

Comments

Comment by Eli Tyre (elityre) on How do we prepare for final crunch time? · 2021-03-31T08:07:40.716Z · LW · GW

Strong agree.

Comment by Eli Tyre (elityre) on How do we prepare for final crunch time? · 2021-03-31T08:07:23.073Z · LW · GW

I suspect that it becomes more and more rate limiting as technological progress speeds up.

Like, to a first approximation, I think there's a fixed cost to learning to use and take full advantage of a new tool. Let's say that cost is a few weeks of experimentation and tinkering. If importantly new tools are invented on a cadence of once every 3 years, that fixed cost is negligible. But if importantly new tools are dropping every week, the fixed cost becomes much more of a big deal.
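
Rough numbers to make that concrete (treating "a few weeks" as 3 weeks, which is my own placeholder):

    fixed cost ≈ 3 weeks of tinkering per important new tool
    one new tool every 3 years ≈ 156 weeks  ->  overhead ≈ 3/156 ≈ 2% of your time
    one new tool every week                 ->  overhead ≈ 3/1, i.e. you fall behind faster than you can catch up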

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-30T05:50:28.323Z · LW · GW

If you're so price sensitive that $1000 is meaningful, well, uh, try to find a solution to this crisis. I'm not saying one exists, but there are survival risks to poverty.

Lol. I'm not impoverished, but I want to cheaply experiment with having a car. It isn't worth it to throw away $30,000 on a thing that I'm not going to get much value from.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-30T05:13:34.130Z · LW · GW

I recall a Chris Olah post in which he talks about using AIs as a tool for understanding the world, by letting the AI learn, and then using interpretability tools to study the abstractions that the AI uncovers.

I thought he specifically mentioned "using AI as a microscope."

Is that a real post, or am I misremembering this one? 

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-26T09:36:33.757Z · LW · GW

Are there any hidden risks to buying or owning a car that someone who's never been a car owner might neglect?

I'm considering buying a very old (ie from the 1990s), very cheap (under $1000, ideally) minivan, as an experiment.

That's inexpensive enough that I'm not that worried about it completely breaking down on me. I'm willing to just eat the monetary cost for the information value.

However, maybe there are other costs or other risks that I'm not tracking, that make this a worse idea.

Things like

- Some ways that a car can break make it dangerous, instead of non-functional.

- Maybe if a car breaks down in the middle of route 66, the government fines you a bunch?

- Something something car insurance?

Are there other things that I should know? What are the major things that one should check for to avoid buying a lemon?

Assume I'm not aware of even the most drop-dead basic stuff. I'm probably not.

(Also, I'm in the market for a minivan, or other car with 3 rows of seats. If you have an old car like that which you would like to sell, or if know someone who does, get in touch.

Do note that I am extremely price sensitive, but I would pay somewhat more than $1000 for a car, if I were confident that it was not a lemon.)

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-03-18T03:42:42.871Z · LW · GW

Question: Have Moral Mazes been getting worse over time? 

Could the growth of Moral Mazes be the cause of cost disease? 

I was thinking about how I could answer this question. I think that the thing that I need is a good quantitative measure of how "mazy" an organization is. 

I considered the metric of "how much output for each input", but 1) that metric is just cost disease itself, so it doesn't help us distinguish the mazy cause from other possible causes, and 2) if you're good enough at rent seeking, maybe you can get high revenue despite your poor production.

What metric could we use?

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-02-28T20:38:59.875Z · LW · GW

Is there a standard article on what "the critical risk period" is?

I thought I remembered an arbital post, but I can't seem to find it.

Comment by Eli Tyre (elityre) on Yoav Ravid's Shortform · 2021-02-25T05:42:06.385Z · LW · GW

My guess was: you could have a different map for different parts of the globe, ie a part that focuses on Africa (and therefore has minimal distortions of Africa), and a separate part for America, and a separate part for Asia, and so on.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-02-25T05:34:46.429Z · LW · GW

Is there a LessWrong article that unifies physical determinism and choice / "free will"? Something about thinking of yourself as the algorithm computed on this brain?

Comment by Eli Tyre (elityre) on The Meaning That Immortality Gives to Life · 2021-02-21T05:25:40.603Z · LW · GW

I'm not opposed to getting random flash-from-the-past Sequences posts in my notifications.

Comment by Eli Tyre (elityre) on 2019 Review: Voting Results! · 2021-02-21T05:12:51.243Z · LW · GW

[Eli's personal notes. Feel free to ignore or engage]

Any distinction between good and bad behavior with any nuance seems very hard to me.

Related to the following, from here.

But if I want to help Bob figure out whether he should vote for Alice---whether voting for Alice would ultimately help create the kind of society he wants---that can’t be done by trial and error. To solve such tasks we need to understand what we are doing and why it will yield good outcomes. We still need to use data in order to improve over time, but we need to understand how to update on new data in order to improve.

Some examples of easy-to-measure vs. hard-to-measure goals:

  • Persuading me, vs. helping me figure out what’s true. (Thanks to Wei Dai for making this example crisp.)
  • Reducing my feeling of uncertainty, vs. increasing my knowledge about the world.
  • Improving my reported life satisfaction, vs. actually helping me live a good life.
  • Reducing reported crimes, vs. actually preventing crime.
  • Increasing my wealth on paper, vs. increasing my effective control over resources.

. . .

I think my true reason is not that all reasoning about humans is dangerous, but that it seems very difficult to separate out safe reasoning about humans from dangerous reasoning about humans

Thinking further, this is because of something like...the "good" strategies for engaging with humans are continuous with the "bad" strategies for engaging with humans (ie dark arts persuasion is continuous with good communication), but if your AI is only reasoning about a domain that doesn't have humans, then deceptive strategies are isolated in strategy space from the other strategies that work (namely, mastering the domain, instead of tricking the judge).

Because of this isolation of deceptive strategies, we can notice them more easily?

Comment by Eli Tyre (elityre) on 2019 Review: Voting Results! · 2021-02-21T04:39:46.905Z · LW · GW

[Eli's notes, that you can ignore or engage with]

  • Threats: This seems to be in direct conflict with alignment -- roughly speaking, either your AI system is aligned with you and can be threatened, or it is not aligned with you and then threats against it don't hurt you. Given that choice, I definitely prefer alignment.

Well, it might be the case that a system is aligned but is mistakenly running an exploitable decision theory. I think the idea is we would prefer to have things set up so that failures are contained, ie if your AI is running an exploitable decision theory, that problem doesn't cascade into even worse problems.

I'm not sure if "avoiding human models" actually meets this criterion, but it does seem useful to aim for systems that don't fail catastrophically if you get something wrong.

Comment by Eli Tyre (elityre) on Thoughts on Human Models · 2021-02-20T22:03:43.132Z · LW · GW

[Eli's personal notes. Feel free to ignore or to engage.]

Supposing we intend the first use of AGI to be solving some bounded and well-specified task, but we misunderstand or badly implement it so much that what we end up with is actually unboundedly optimising some objective function. Then it seems better if that objective is something abstract like puzzle solving rather than something more directly connected to human preferences: consider, as a toy example, if the sign (positive/negative) around the objective were wrong.

The basic idea here is that if we screw up so badly that what we thought was a safely bounded tool-AI, is actually optimizing to tile the universe with something, it is better if it tiles the universe with data-centers doing math proofs than something that refers to what humans want?

Why would that be?

Comment by Eli Tyre (elityre) on Thoughts on Human Models · 2021-02-20T21:47:37.199Z · LW · GW

[Eli's personal notes. Feel free to ignore or engage.]

We suggest that an important factor in the answer to this question is whether the AGI system was built using human modelling or not. If it produced a solution to the transit design problem (that humans approve of) without human modelling, then we would more readily trust its outputs. If it produced a solution we approve of with human modelling, then although we expect the outputs to be in many ways about good transit system design (our actual preferences) and in many ways suited to being approved by humans, to the extent that these two targets come apart we must worry about having overfit to the human model at the expense of the good design. (Why not the other way around? Because our assessment of the sandboxed results uses human judgement, not an independent metric for satisfaction of our actual preferences.)

Short summary: If an AI system is only modeling the problem that we want it to solve, and it produces a solution that looks good to us, we can be pretty confident that it is actually a good solution.

Whereas, if it is modeling some problem, and modeling us, we can't be sure where the solution lies on the spectrum of "actually good" solutions vs. "bad solutions that appear good to us."

Comment by Eli Tyre (elityre) on The Meaning That Immortality Gives to Life · 2021-02-19T06:33:45.842Z · LW · GW

Does anyone know why this just showed up in my notifications as a new post?
 

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-02-17T20:41:43.655Z · LW · GW

Is there any particular reason why I should assign more credibility to Moral Mazes / Robert Jackall than I would to the work of any other sociologist?

(My prior on sociologists is that they sometimes produce useful frameworks, but generally rely on subjective hard-to-verify and especially theory-laden methodology, and are very often straightforwardly ideologically motivated.)

I imagine that someone else could write a different book, based on the same kind of anthropological research, that highlights different features of the corporate world, to tell the opposite story.

And that's without anyone trying to be deceptive. There's just a fundamental problem of case studies that they don't tell you what's typical, only give you examples.

I can totally imagine that Jackall landed on this narrative somehow, found that it held together and just confirmation biased for the rest of his career. Once his basic thesis was well-known, and associated with his name, it seems hard for something like that NOT to happen.

And this leaves me unsure what to do with the data of Moral Mazes. Should I default assume that Jackall's characterization is a good description of the corporate world? Or should I throw this out as a useless set of examples confirmation biased together? Or something else?

It seems like the question of "is most of the world dominated by Moral Mazes?" is an extremely important one. But also, it seems to me that it's not operationalized enough to have a meaningful answer. At best, it seems like this is a thing that happens sometimes.

Comment by Eli Tyre (elityre) on Open & Welcome Thread - January 2021 · 2021-02-05T20:47:34.873Z · LW · GW

Why does PredictionBook 1) allow you to make 100% credence predictions, and 2) bucket 99% credence in the 90% bucket instead of the 100% bucket?

Does anyone know?

It means I either need to have an unsightly graph where my measured accuracy falls to 0 in the 100% bucket, or take the unseemly approach of putting 100% (rounding up, of course, not literally 100%) on some extremely likely prediction.

The bucketing also means that if I make many 99% predictions, but few 90% predictions (for instance), I'll appear uncalibrated even if I have perfect calibration (since items in that bucket would be accurate more than 90% of the time). Not realizing this, I might think that I need to adjust more.
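
A quick simulation makes the distortion concrete. This is a minimal sketch, not PredictionBook's actual code; the 500/20 split and the bucket edges are made-up assumptions for illustration:

    import random

    random.seed(0)

    # A perfectly calibrated forecaster: many 99% predictions, few 90% ones.
    preds = [(0.99, random.random() < 0.99) for _ in range(500)]
    preds += [(0.90, random.random() < 0.90) for _ in range(20)]

    # Lump 99% predictions into the 90% bucket, as described above.
    bucket = [hit for credence, hit in preds if 0.90 <= credence < 1.00]

    print("nominal bucket credence: 90%")
    print(f"observed accuracy in bucket: {sum(bucket) / len(bucket):.1%}")
    # Prints roughly 99%, so the graph shows this forecaster as "off" in the
    # 90% bucket even though every individual credence was perfectly calibrated.

And the tempting "fix", nudging real 99% credences down toward 90% so the graph looks flat, would actually make the forecasts worse.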

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-02-04T07:23:34.428Z · LW · GW

This is an amazing anecdote.

Good job.

Comment by Eli Tyre (elityre) on Purchase Fuzzies and Utilons Separately · 2021-02-02T14:33:18.738Z · LW · GW

Are you going to stand there on the other side of the door and think about important AI problems while the old lady struggles to open it?

I visualized this scenario and laughed out loud.

Comment by Eli Tyre (elityre) on What are some real life Inadequate Equilibria? · 2021-01-31T11:53:30.363Z · LW · GW

Apparently (link to tweet), most countries have straightforward automated systems for tax collection, requiring minimal user input. Obviously, this setup saves everyone the pain of filling out confusing tax forms.

But the US has confusing tax forms because Intuit (the company behind TurboTax) successfully lobbies to keep the current system in place, so that they can charge people for the service of helping them fill out their confusing taxes.

Comment by Eli Tyre (elityre) on What are some real life Inadequate Equilibria? · 2021-01-31T11:47:26.265Z · LW · GW

From a utilitarian standpoint, doing human challenge trials for the COVID vaccines would have been preferable to ecological / uncontrolled trials (for instance, we would have clearer data today about the optimal dosage, which translates into maximizing the number of successful vaccinations from a given batch of the vaccine).

My current understanding of why the US didn't do human challenge trials comes down to the incentives of the decision makers: they are likely to experience some backlash, especially if something goes wrong, but they don't gain accolades or other rewards for the massive upside.

Comment by Eli Tyre (elityre) on What are some real life Inadequate Equilibria? · 2021-01-31T10:56:47.164Z · LW · GW

Links? Or should I just google?

Comment by Eli Tyre (elityre) on What are some real life Inadequate Equilibria? · 2021-01-30T02:42:16.121Z · LW · GW

Yeah. The new equilibrium has to be realistic, and, as Measure says, stable.

Situations where there's almost certainly a better equilibrium, but we don't know what it is, don't count.

Every entry should include:

  1. How things are currently, and why that's bad.
  2. How they could be instead, and why that's better.
  3. What's blocking the transition from 1 to 2.
Comment by Eli Tyre (elityre) on MikkW's Shortform · 2021-01-22T11:20:39.374Z · LW · GW

I agree. 

I'm also bothered by the fact that it is leading up to AI alignment and the discussion of Zombies is in the middle!

Please change?

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-01-16T01:12:39.291Z · LW · GW

Great. This post is exactly the sort of thing that I was thinking about.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-01-16T01:11:46.965Z · LW · GW

Thanks!

I thought that it was in the context of talking about EA, but maybe this is what I am remembering? 

It seems unlikely though, since I wouldn't have read the spoiler part.

Comment by Eli Tyre (elityre) on mike_hawke's Shortform · 2021-01-15T06:03:19.397Z · LW · GW

Strong agree.

People can lose debates, but debate != Double Crux.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-01-15T05:54:08.529Z · LW · GW

Does anyone know of a good technical overview of why it seems hard to get Whole Brain Emulations before we get neuromorphic AGI?

I think maybe I read a PDF that made this case years ago, but I don't know where.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-01-15T05:52:07.714Z · LW · GW

I remember reading a Zvi Mowshowitz post in which he says something like "if you have concluded that the most ethical thing to do is to destroy the world, you've made a mistake in your reasoning somewhere." 

I spent some time searching around his blog for that post, but couldn't find it. Does anyone know what I'm talking about?

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-01-09T10:56:55.733Z · LW · GW

My understanding is that there was a 10 year period starting around 1868, in which South Carolina's legislature was mostly black, and when the universities were integrated (causing most white students to leave), before the Dixiecrats regained power.

I would like to find a relatively non-partisan account of this period.

Anyone have suggestions?

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-01-07T08:09:31.530Z · LW · GW

Anyone have a link to the sequence post where someone posits that AIs wouldn't do art and science from a drive to compress information, but rather would create and then reveal cryptographic strings (or something)?

Comment by Eli Tyre (elityre) on 2020 AI Alignment Literature Review and Charity Comparison · 2021-01-04T12:07:00.818Z · LW · GW

Even if God and Santa Claus are not real, we do experience a Christmas miracle every year in the form of these amazingly thorough reviews by Larks.

What an amazing sentence.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-01-04T08:24:23.381Z · LW · GW

Thanks for the link! I'm going to take a look!

Comment by Eli Tyre (elityre) on Anti-Aging: State of the Art · 2021-01-03T04:26:14.226Z · LW · GW

This was great. 

Some things that made it great:

  1. It was just in the sweet spot of length. Taking notes, this took me a half hour to read. (I think the ideal is in the 30 to 60 minute range, so doubling the post would have been fine, but more than that would have been overwhelming).
  2. It was written clearly. 
  3. It was full of links that I can use to follow up on the places that I am most interested / confused about.

I would love to read more posts like this one, on a whole variety of topics, and would be glad to help subsidize their production if there was a way to organize that.

Comment by Eli Tyre (elityre) on Anti-Aging: State of the Art · 2021-01-03T04:10:20.798Z · LW · GW

Random question: Why is there such a large difference between the life extension results for mice vs. rats? Naively, they seem like they're pretty similar.

Are we trying different kinds of treatments on one than the other for some reason, or is it just much harder to intervene on rat life-spans?

Comment by Eli Tyre (elityre) on Has LessWrong been a good early alarm bell for the pandemic? · 2021-01-02T04:44:15.156Z · LW · GW

And also correct credit allocation reasons!

Comment by Eli Tyre (elityre) on jp's Shortform · 2020-12-31T21:53:13.317Z · LW · GW

I think I might say "the deepest-rooted part of yourself"? Certainly hand wavy.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2020-12-31T21:51:15.222Z · LW · GW

I was wondering if I would get comment on that part in particular. ; )

I don't have a strong belief about your points one through three, currently. But it is an important hypothesis in my hypothesis space, and I'm hoping that I can get to the bottom of it in the next year or two.

I do confidently think that one of the "forces for badness" in the world is that people regularly feel triggered or threatened by all kinds of different proposals, and reflexively act to defend themselves. I think this is among the top three problems in having good discourse and cooperative politics. Systematically reducing that trigger response would be super high value, if it were feasible.

My best guess is that that propensity to be triggered is not mostly the result of infant or childhood trauma. It seems more parsimonious to posit that it is basic tribal stuff. But I could imagine it having its root in something like "trauma" (meaning it is the result of specific experiences, not just general dispositions, and it is practically feasible, if difficult, to clear or heal the underlying problem in a way that completely prevents the symptoms).

I think there is no canonical resource on trauma-stuff because 1) the people on twitter are less interested, on average, in that kind of theory building than we are on LessWrong, and 2) because mostly those people are (I think) extrapolating from their own experience, in which some practices unlocked subjectively huge breakthroughs in personal well-being / freedom of thought and action.

Does that help at all?

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2020-12-30T07:13:15.245Z · LW · GW

This is my current take about where we're at in the world:

Deep learning, scaled up, might be basically enough to get AGI. There might be some additional conceptual work necessary, but the main difference between 2020 and the year in which we have transformative AI is that in that year, the models are much bigger.

If this is the case, then the most urgent problem is strong AI alignment + wise deployment of strong AI.

We'll know if this is the case in the next 10 years or so, because either we'll continue to see incredible gains from increasingly bigger Deep Learning systems or we'll see those gains level off, as we start seeing decreasing marginal returns to more compute / training.

If deep learning is basically not sufficient, then all bets are off. In that case, it isn't clear when transformative AI will arrive.

This may meaningfully shift priorities, for two reasons:

It may mean that some other countdown will reach a critical point before the "AGI clock" does. Genetic engineering, or synthetic biology, or major geopolitical upheaval (like a nuclear war), or some strong form of civilizational collapse will upset the game-board before we get to AGI.

There is more time to pursue "foundational strategies" that only pay off in the medium term (30 to 100 years). Things like, improving the epistemic mechanism design of human institutions, including governmental reform, human genetic engineering projects, or plans to radically detraumatize large fractions of the population.

This suggests to me that I should, in this decade, be planning and steering for how to robustly-positively intervene on the AI safety problem, while tracking the sideline of broader Civilizational Sanity interventions that might take longer to pay off, and planning to reassess every few years to see if it looks like we're getting diminishing marginal returns to Deep Learning yet.

Comment by Eli Tyre (elityre) on Some recommendations for aligning the decision to go to war with the public interest, from The Spoils of War · 2020-12-27T01:06:58.496Z · LW · GW

One thing that I notice here is that all of these proposals are designed to move power into the hands of the voters. This is in contrast to another book that I'm reading right now, 10% Less Democracy, which makes the case that on the margin, we should be moving power away from voters, and to experts / bureaucrats.

Comment by Eli Tyre (elityre) on jp's Shortform · 2020-12-16T11:01:04.530Z · LW · GW

I think humans have souls. It just so happens that they aren't immortal by default. 

I wouldn't want to make your substitution, for the same reason why Taleb doesn't like the substitution of artificial formula for a mother's milk: the substitution implies an assumption that you've correctly understood everything important about the thing you're replacing. 

I bet there is more to a soul than what your long sentence gets at, and I don't want to cut out that "more" prematurely. 

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2020-12-16T10:53:13.659Z · LW · GW

Side note, which is not my main point: I think this also has something to do with what meditation and psychedelics do to people, which was recently up for discussion on Duncan's Facebook. I bet that meditation is actually a way to repair psych blocks and trauma and what-not. But if you do that enough, and you remove all the psych constraints...a person might sort of become so relaxed that they become less and less of an agent. I'm a lot less sure of this part.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2020-12-16T10:52:50.837Z · LW · GW

Something that I've been thinking about lately is the possibility of an agent's values being partially encoded by the constraints of that agent's natural environment, or arising from the interaction between the agent and environment.

That is, an agent's environment puts constraints on the agent. From one perspective, removing those constraints is always good, because it lets the agent get more of what it wants. But sometimes, from a different perspective, we might feel that with those constraints removed, the agent Goodharts or wireheads, or otherwise fails to actualize its "true" values.

The Generator freed from the oppression of the Discriminator

As a metaphor: if I'm one half of a GAN, let's say the generator, then in one sense my "values" are fooling the discriminator, and if you make me relatively more powerful than my discriminator, and I dominate it...I'm loving it, and also no longer making good images.

But you might also say, "No, wait. That is a super-stimulus, and actually what you value is making good images, but half of that value was encoded in your partner."

This second perspective seems a little stupid to me. A little too Aristotelian. I mean if we're going to take that position, then I don't know where we draw the line. Naively, it seems like we would throw out the distinction between fitness maximizers and adaption executors, and fall backwards, declaring that the values of evolution are our true values.

Then again, if you fully accept the first perspective, it seems like maybe you are buying into wireheading? Like I might say "my actual values are upticks in pleasure sensation, but I'm trapped in this evolution-designed brain, which only lets me do that by achieving eudaimonia. If only I could escape the tyranny of these constraints, I'd be so much better off." (I am actually kind of partial to the second claim.)
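
For reference, the standard GAN objective (Goodfellow et al., 2014) makes the metaphor precise; the generator's only training signal is the discriminator's judgment, and "good images" appear nowhere in it except through D:

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

So if D is frozen or badly outmatched, G can drive D(G(z)) toward 1 without its samples getting any closer to p_data, which is exactly the "dominating the discriminator" scenario above.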

The Human freed from the horrors of nature

Or, let's take a less abstract example. My understanding (from this podcast) is that humans flexibly adjust the degree to which they act primarily as individuals seeking personal benefit vs. primarily as selfless members of a group. When things are going well, and you're in a situation of plenty and opportunity, people are in a mostly self-interested mode; but when there is scarcity or danger, humans naturally incline towards rallying together and sacrificing for the group.

Junger claims that this switching of emphasis is adaptive:

It clearly is adaptive to think in group terms because your survival depends on the group. And the worse the circumstances, the more your survival depends on the group. And, as a result, the more pro-social the behaviors are. The worse things are, the better people act. But, there's another adaptive response, which is self-interest. Okay? So, if things are okay--if, you know, if the enemy is not attacking; if there's no drought; if there's plenty of food; if everything is fine, then, in evolutionary terms it's adaptive--your need for the group subsides a little bit--it's adaptive to attend to your own interests, your own needs; and all of a sudden, you've invented the bow and arrow. And all of a sudden you've invented the iPhone, whatever. Having the bandwidth and the safety and the space for people to sort of drill deep down into an idea--a religious idea, a philosophical idea, a technological idea--clearly also benefits the human race. So, what you have in our species is this constant toggling back and forth between group interest--selflessness--and individual interest. And individual autonomy. And so, when things are bad, you are way better off investing in the group and forgetting about yourself. When things are good, in some ways you are better off spending that time investing in yourself; and then it toggles back again when things get bad. And so I think in this, in modern society--in a traditional, small-scale tribal society, in the natural world, that toggling back and forth happened continually. There was a dynamic tension between the two that had people winding up more or less in the middle.

I personally experienced this when the COVID situation broke. I usually experience myself as an individual entity, leaning towards disentangling or distancing myself from the groups that I'm a part of and doing cool things on my own (building my own intellectual edifices, that bear my own mark, for instance). But in the very early pandemic, I felt much more like a node in a distributed sense-making network, just passing up whatever useful info I could glean. I felt much more strongly like the rationality community was my tribe.

But, we modern humans find ourselves in a world where we have more or less abolished scarcity and danger. And consequently modern people are sort of permanently toggled to the "individual" setting.

The problem with modern society is that we have, for most of the time, for most people, solved the direct physical threats to our survival. So, what you have is people--and again, it's adaptive: we're wired for this--attending to their own needs and interests. But not--but almost never getting dragged back into the sort of idea of group concern that is part of our human heritage. And, the irony is that when people are part of a group and doing something essential to a group, it gives an incredible sense of wellbeing.

If we take that sense of community and belonging as a part of human values (and that doesn't seem like an unreasonable assumption to me), we might say that this part of our values is not contained simply in humans, but rather in the interaction between humans and their environment.

Humans throughout history might have desperately desired the alleviation of malthusian conditions that we now enjoy. But having accomplished it, it turns out that we were "pulling against" those circumstances, and that the tension of that pulling against was actually where at least some of our true values lay.

Removing the obstacles, we obsoleted the tension, and maybe broke something about our values?

I don't think that this is an intractable problem. It seems like, in principle, it is possible to goal factor the scarcity and the looming specter of death, to find scenarios that are conducive to human community without people actually having to die a lot. I'm sure a superintelligence could figure something out.

But aside from the practicalities, it seems like this points at a broader thing. If you took the Generator out of the GAN, you might not be able to tell what system it was a part of. So if you consider the "values" of the Generator to be "creating good images", you can't just look at the Generator. You have to look at, not just the broader environment, but specifically the oppressive force that the generator is resisting.

Comment by Eli Tyre (elityre) on What determines the balance between intelligence signaling and virtue signaling? · 2020-12-12T10:22:29.576Z · LW · GW

Seems important?

Comment by Eli Tyre (elityre) on S-Curves for Trend Forecasting · 2020-12-12T10:19:51.784Z · LW · GW

My comment says it all:

Intriguingly, after coming back to this comment after only 3 months, my feeling is something like "That seems pretty obvious. It's weird that that seemed like a new insight to me."

So I guess you actually taught me something in a way that stuck.

Thanks again.

Comment by Eli Tyre (elityre) on The Power to Demolish Bad Arguments · 2020-12-12T10:18:51.564Z · LW · GW

I liked this post, though I am afraid that it will suggest the wrong spirit.

Comment by Eli Tyre (elityre) on Instant stone (just add water!) · 2020-12-12T10:16:57.229Z · LW · GW

I thought this was a great post?

Comment by Eli Tyre (elityre) on We run the Center for Applied Rationality, AMA · 2020-12-12T07:44:27.455Z · LW · GW

Coming back to this, I think I would describe it as "they seemed like they were actually paying attention", which was so unusual as to be noteworthy.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2020-12-09T08:00:18.434Z · LW · GW

I remember reading a Zvi Mowshowitz post in which he says something like "if you have concluded that the most ethical thing to do is to destroy the world, you've made a mistake in your reasoning somewhere." 

I spent some time searching around his blog for that post, but couldn't find it. Does anyone know what I'm talking about?

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2020-12-09T07:56:31.057Z · LW · GW

Doing actual mini-RCTs can be pretty simple. You only need 3 things: 

1. A spreadsheet 

2. A digital coin for randomization 

3. A way to measure the variable that you care about (see the sketch right after this list)
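
Here's a minimal sketch of what that loop can look like for a daily self-experiment. The CSV filename, the 1-10 "focus" rating, and the function names are illustrative placeholders, not a prescription:

    import csv
    import random
    import statistics

    LOG = "mini_rct_log.csv"  # the "spreadsheet"

    def assign_today():
        """Flip the digital coin and print today's assignment."""
        arm = random.choice(["intervention", "control"])
        print(f"Today: {arm}")
        return arm

    def record(day, arm, outcome):
        """Append one row: date, arm, and the measured variable (e.g. a 1-10 focus rating)."""
        with open(LOG, "a", newline="") as f:
            csv.writer(f).writerow([day, arm, outcome])

    def analyze():
        """Compare mean outcomes per arm. A crude first look, not a significance test."""
        with open(LOG) as f:
            rows = [(arm, float(outcome)) for _, arm, outcome in csv.reader(f)]
        for arm in ("intervention", "control"):
            scores = [o for a, o in rows if a == arm]
            if scores:
                print(f"{arm}: n={len(scores)}, mean={statistics.mean(scores):.2f}")

Each morning you run assign_today() and follow whatever it says; each evening you call record() with that day's measurement; after a few weeks, analyze() gives a first look at whether the effect is plausibly bigger than noise.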

I think one of the practically powerful "techniques" of rationality is doing simple empirical experiments like this. You want to get something? You don't know how to get it? Try out some ideas and check which ones work!

There are other applications of empiricism that are not as formal, and sometimes faster. Those are also awesome. But at the very least, I've found that doing mini-RCTs is pretty enlightening.

On the object level, you can learn what actually works for hitting your goals.

On the process level, this trains some good epistemic norms and priors.

For one thing, I now have a much stronger intuition for the likelihood that an impressive effect is just noise. And getting into the habit of doing quantified hypothesis testing, such that you can cleanly falsify your hypotheses, teaches you to hold hypotheses lightly while inclining you to generate hypotheses in the first place.

Theorizing methods can enhance and accelerate this process, but if you have a quantified empirical feedback loop, your theorizing will be grounded. Science is hard, and most of our guesses are wrong. But that's fine, so long as we actually check.