Posts

[Lecture Club] Awakening from the Meaning Crisis 2021-03-08T15:22:22.626Z
Alex Irpan: "My AI Timelines Have Sped Up" 2020-08-19T16:23:25.348Z
Property as Coordination Minimization 2020-08-04T19:24:15.759Z
Rereading Atlas Shrugged 2020-07-28T18:54:45.272Z
A reply to Agnes Callard 2020-06-28T03:25:27.378Z
Public Positions and Private Guts 2020-06-26T23:00:52.838Z
How alienated should you be? 2020-06-14T15:55:24.043Z
Outperforming the human Atari benchmark 2020-03-31T19:33:46.355Z
Mod Notice about Election Discussion 2020-01-29T01:35:53.947Z
Circling as Cousin to Rationality 2020-01-01T01:16:42.727Z
Self and No-Self 2019-12-29T06:15:50.192Z
T-Shaped Organizations 2019-12-16T23:48:13.101Z
ialdabaoth is banned 2019-12-13T06:34:41.756Z
The Bus Ticket Theory of Genius 2019-11-23T22:12:17.966Z
Vaniver's Shortform 2019-10-06T19:34:49.931Z
Vaniver's View on Factored Cognition 2019-08-23T02:54:00.915Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z
Commentary On "The Abolition of Man" 2019-07-15T18:56:27.295Z
Is there a guide to 'Problems that are too fast to Google'? 2019-06-17T05:04:39.613Z
Steelmanning Divination 2019-06-05T22:53:54.615Z
Public Positions and Private Guts 2018-10-11T19:38:25.567Z
Maps of Meaning: Abridged and Translated 2018-10-11T00:27:20.974Z
Compact vs. Wide Models 2018-07-16T04:09:10.075Z
Thoughts on AI Safety via Debate 2018-05-09T19:46:00.417Z
Turning 30 2018-05-08T05:37:45.001Z
My confusions with Paul's Agenda 2018-04-20T17:24:13.466Z
LW Migration Announcement 2018-03-22T02:18:19.892Z
LW Migration Announcement 2018-03-22T02:17:13.927Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T23:40:26.663Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T22:53:17.721Z
LW 2.0 Open Beta Live 2017-09-21T01:15:53.341Z
LW 2.0 Open Beta starts 9/20 2017-09-15T02:57:10.729Z
Pair Debug to Understand, not Fix 2017-06-21T23:25:40.480Z
Don't Shoot the Messenger 2017-04-19T22:14:45.585Z
The Quaker and the Parselmouth 2017-01-20T21:24:12.010Z
Announcement: Intelligence in Literature Prize 2017-01-04T20:07:50.745Z
Community needs, individual needs, and a model of adult development 2016-12-17T00:18:17.718Z
Contra Robinson on Schooling 2016-12-02T19:05:13.922Z
Downvotes temporarily disabled 2016-12-01T17:31:41.763Z
Articles in Main 2016-11-29T21:35:17.618Z
Linkposts now live! 2016-09-28T15:13:19.542Z
Yudkowsky's Guide to Writing Intelligent Characters 2016-09-28T14:36:48.583Z
Meetup : Welcome Scott Aaronson to Texas 2016-07-25T01:27:43.908Z
Happy Notice Your Surprise Day! 2016-04-01T13:02:33.530Z
Posting to Main currently disabled 2016-02-19T03:55:08.370Z
Upcoming LW Changes 2016-02-03T05:34:34.472Z
LessWrong 2.0 2015-12-09T18:59:37.232Z
Meetup : Austin, TX - Petrov Day Celebration 2015-09-15T00:36:13.593Z
Conceptual Specialization of Labor Enables Precision 2015-06-08T02:11:20.991Z
Rationality Quotes Thread May 2015 2015-05-01T14:31:04.391Z

Comments

Comment by Vaniver on Quantum Non-Realism · 2021-09-14T17:28:27.557Z · LW · GW

Fixed.

Comment by Vaniver on Sam Altman Q&A Notes - Aftermath · 2021-09-14T04:46:16.347Z · LW · GW

But I have never seen an article pulled completely before.

It has happened before, but it's quite rare. Normally when I've done it, I've left a note in an Open Thread, such as this case where I moved to drafts a post discussing an ongoing legal case (now concluded). I think that's the last one I did, and it was four years ago? But there are other mods as well.

Comment by Vaniver on D&D.Sci Pathfinder: Return of the Gray Swan Evaluation & Ruleset · 2021-09-09T17:05:51.930Z · LW · GW

Overall Kraken damage is substantially higher on a 4-gun ship than a 2-gun ship.

This seems reversed to me.

Comment by Vaniver on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-08T19:00:35.238Z · LW · GW

Also, I was under the impression that cryonics was a business with significant returns to scale--two facilities storing 100 bodies each are much more expensive than one facility storing 200 bodies--which makes 'market share' more important than it normally is.

Comment by Vaniver on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-08T05:32:45.831Z · LW · GW

Previously discussed on LW here.

Comment by Vaniver on Coordination Schemes Are Capital Investments · 2021-09-07T19:59:45.641Z · LW · GW

There's a paired optimization problem, where you assign everyone to a room, with the constraint that this assignment be 'envy-free'; that is, no one looks at someone else's assignment/rent combo and says "I'd rather have that than my setup!". There was a calculator that I can't easily find now which tried to find the centroid of the envy-free region.

There are other approaches that work differently; this one, for example, tries to split surplus evenly between the participants, and shows the comparison to other options.
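For concreteness, here's a minimal sketch of what the envy-freeness condition itself looks like in code (this is not either of the calculators mentioned above; the names and numbers are made up for illustration):

```python
# Check whether a (room assignment, rent split) is envy-free:
# no one prefers someone else's (room, rent) combo to their own,
# judged by their *own* valuations.

valuations = {
    "alice": {"big": 900, "medium": 700, "small": 500},
    "bob":   {"big": 800, "medium": 750, "small": 550},
    "carol": {"big": 850, "medium": 650, "small": 600},
}
assignment = {"alice": "big", "bob": "medium", "carol": "small"}
rents = {"big": 850, "medium": 700, "small": 550}  # sums to the total rent

def is_envy_free(valuations, assignment, rents):
    for person, room in assignment.items():
        my_surplus = valuations[person][room] - rents[room]
        for other_room in rents:
            if valuations[person][other_room] - rents[other_room] > my_surplus:
                return False  # this person envies whoever has other_room
    return True

print(is_envy_free(valuations, assignment, rents))  # True for these numbers
```

For a fixed assignment, the envy-free region is the set of rent splits that pass this check; the calculators differ in which point inside that region they pick.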

Comment by Vaniver on Can you control the past? · 2021-08-31T17:14:19.941Z · LW · GW

Do you “manage the news” by refusing to read the morning’s newspaper, or by scribbling over the front page “Favored Candidate Wins Decisively!”? No: if you’re rational, your credence in the loss is still 70%.

I feel like the "No; if you're rational" bit is missing some of the intuition against EDT. Physical humans do refuse to read the morning's newspaper, or delay opening letters, or similar things, I think because of something EDT-ish 'close to the wire'. (I think this is what's up with ugh fields.)

I think there's something here--conservation of expected evidence and related--that means that a sophisticated EDT won't fall prey to those traps. But this feels sort of like the defense whereby a sophisticated EDT doesn't fall prey to typical counterexamples because if you're doing the expectation correctly, you're taking into account causation, at which point we're not really talking about EDT anymore. I do think it's sensible to include proper probabilistic reasoning in EDT, but something feels off about hiding this detail behind the word "rational."

Comment by Vaniver on Vaniver's Shortform · 2021-08-27T23:04:49.740Z · LW · GW

One frame I have for 'maximizing altruism' is that it's something like a liquid: it's responsive to its surroundings, taking on their shape, flowing to the lowest point available. It rapidly conforms to new surroundings if there are changes; turn a bottle on its side and the liquid inside will rapidly resettle into the new best configuration.

This has both upsides and downsides: the flexibility and ability to do rapid shifts mean that as new concerns become the most prominent, they can be rapidly addressed. The near-continuous nature of liquids means that as you get more and more maximizing altruist capacity, you can smoothly increase the 'shoreline'.

Many other approaches seem solid instead of liquid, in a way that promotes robustness and specialization (while being less flexible and responsive). If the only important resources are fungible commodities, then the liquid model seems optimal; but if the skills and resources you need for tackling one challenge are different from those needed for tackling another, or if switching costs dominate the relative differences between projects, then the more solid approaches win out. Reality has a surprising amount of detail, and it takes time and effort to build up the ability to handle that detail effectively.

I think there's something important here for the broader EA/rationalist sphere, tho I haven't crystallized it well yet. It's something like--the 'maximizing altruism' thing, which I think of as being the heart of EA, is important but also a 'sometimes food' in some ways; it is pretty good for thinking about how to allocate money (with some caveats) but is much less good for thinking about how to allocate human effort. It makes sense for generalists, but actually that's not what most people are or should be. This isn't to say we should abandon maximizing altruism, or all of its precursors, but... somehow build a thing that both makes good use of that, and good use of less redirectable resources.

Comment by Vaniver on The Codex Skeptic FAQ · 2021-08-24T19:03:01.113Z · LW · GW

[Note: I use Copilot and like it. The 'aha' moment for me was when I needed to calculate the intersection of two lines, a thing that I would normally just copy/paste from Stack Overflow, and instead Copilot wrote the function for me. Of course I then wrote tests and it passed the tests, which seemed like an altogether better workflow.]

Language models are good enough at generating code to make the very engineers building such models slightly more productive

How much of this is 'quality of code' vs. 'quality of data'? I would naively expect that the sort of algorithmic improvements generated from OpenAI engineers using Copilot/Codex/etc. are relatively low-impact compared to the sort of benefits you get from adding your company's codebase to the corpus (or whatever is actually the appropriate version of that). I'm somewhat pessimistic about the benefits of adding Copilot-generated code to the corpus as a method of improving Copilot.

Comment by Vaniver on Extraction of human preferences 👨→🤖 · 2021-08-24T18:57:27.423Z · LW · GW

Thanks for sharing negative results!

If I'm understanding you correctly, the structure looks something like this:

  • We have a toy environment where human preferences are both exactly specified and consequential.
  • We want to learn how hard it is to discover the human preference function, and whether it is 'learned by default' in an RL agent that's operating in the world and just paying attention to consequences.
  • One possible way to check whether it's 'learned by default' is to compare the performance of a predictor trained just on environmental data, a predictor trained just on the RL agent's internal state, and a predictor extracted from the RL agent.

The relative performance of those predictors should give you a sense of whether the environment or the agent's internal state gives you a clearer signal of the human's preferences.

It seems to me like there should be some environments where the human preference function is 'too easy' to learn on environmental data (naively, the "too many apples" case should qualify?) and cases where it's 'too hard' (like 'judge how sublime this haiku is', where the RL agent will also probably be confused), and then there's some goldilocks zone where the environmental predictor struggles to capture the nuance and the RL agent has managed to capture the nuance (and so the human preferences can be easily exported from the RL agent). 

Does this frame line up with yours? If so, what are the features of the environments that you investigated that made you think they were in the goldilocks zone? (Or what other features would you look for in other environments if you had to continue this research?)

Comment by Vaniver on We need a new philosophy of progress · 2021-08-24T18:29:21.906Z · LW · GW

IMO there's a big difference between "obviously material progress is good" and "obviously some progress is good"--it could be that after a careful consideration of the evidence, it turns out that the thing we need to do is focus on spiritual progress and all become monks (or w/e) and then progress can be measured in how rapidly that transition happens.

[Like, in one era the accumulation of slaves would have been a sign of progress, and now we view it as a sign of regress.]

There's a second point that you might be making, that it's weird to have a 'theory of progress' if your forecasts show the world getting worse, even if we do our best. (For example, suppose there was a massive volcanic eruption and so we knew volcanic winter was coming.) But I think even then it's important to figure out what ways we can improve in and make those changes, even if the background is decay instead of progress.

Comment by Vaniver on Vaniver's Shortform · 2021-08-24T17:04:26.540Z · LW · GW

So I've been playing HUMANKIND over the last few days and think I have the hang of it now. It's by Amplitude Studios, who also made Endless Space, Endless Legend, Endless Space 2, and Dungeon of the Endless (which was my favorite of the four; also apparently I wrote up my thoughts on ES2).

The basic engine is the same as those games, and most similar to Endless Legend; the world is a hex-map that's broken up into pre-defined territories, each of which can only have one outpost/city. Each hex generates some resources on its own (fertile land giving you food, forests industry, etc.), but you only work the hexes immediately adjacent to the districts you build (including the city center), and districts vary in what resources they collect. [Build a farmer's quarter next to a forest and you don't collect any of the industry, but build a maker's quarter and you do.]

The core gimmick that differentiates it from Civilization / Endless Legend is that rather than picking one nation/race, you pick one culture from each age. (So no more Abraham Lincoln wearing furs / a suit at the beginning of the game, both of which were nonsense in different ways.) Instead you might be the Babylonians, and then the Carthaginians, then Khmer, then Mughals, then French, then Japanese (which was the path I took in my most recent game that I won). You end up building a history (both in continuing buffs and districts that remain on the field), and picking things that are appropriate to your setup. (In Civ, having Russians get a bonus to tundra tiles is sort of terrible because maybe the RNG will give you tundra and maybe it won't, but having one of the faith options be a tundra bonus is fine because only someone who knows they have lots of tundra will pick it. This makes everything more like that.)

The other relevant facts are: 1) the cultures seem to vary wildly in power (or at least appropriateness to any given situation), and 2) you pick from the list whenever you age up from the previous age, and 3) everyone starts as a nondescript nomadic tribe. (Which, as a neat side effect, means you do much more exploring before you place your first city, and so you have much more choice than you normally get.) So rather than starting the game as the Babylonians, you're racing to see who gets to be them. Wonders, the typical race dynamic of the Civ games, are minimized here (there aren't that many of them and they aren't that great), replaced by these cultures.

Overall, tho, I think the net effect is significantly increasing the 'rich get richer' dynamic and makes for a less satisfying game. One method of asymmetrical balance is to say "well, it's alright if the cultures are unbalanced, because then the drafting mechanics will create a meta-balance." But when the drafting mechanics are "the person in the lead picks first", you end up with a probably dominant meta-strategy (and then the best available counter-strategy which is trying hard to play catchup).

At my current skill level (who knows, maybe I'm doing the naive strategy), it looks to me like the dominant move is 1) make one mega-city and 2) stack lots of cultures who have emblematic districts that give you buffs based on population size / number of districts. You can have only one such district per territory, but you can have lots of territories in your city (limited only by your influence and the number of territories other players will 'let' you have). So when each Khmer Baray gives you +1 industry per population, and you've combined ten territories into your megalopolis with 100 population, you now get 1k industry/turn out of that, instead of the 100 you would have gotten from having ten cities each with their own Baray. And then later you get the Japanese Robotics Lab, which gives you +2 industry on each Maker's Quarter, and so that leads to a +20 bonus on each of the ten, for +200 industry (and another +200 industry from the effect of those Robotics Labs on themselves).

[There are countervailing forces pushing against the megalopolis--each additional territory you add to a city increases the cost of the next, so actually I had one big city and then five or six small ones, but I think I hadn't realized how strong this effect was and will do something different next game.]
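A quick back-of-the-envelope check of the Baray arithmetic above (illustrative numbers from this example, not from the game files):

```python
# Megalopolis vs. ten separate cities, using the Baray example above.
# Assume one Baray per territory, each giving +1 industry per population
# of the city it belongs to.

territories = 10
megacity_population = 100
small_city_population = 10

megacity_industry = territories * megacity_population     # 10 Barays * 100 pop = 1000
separate_industry = territories * small_city_population   # 10 cities * (1 Baray * 10 pop) = 100

print(megacity_industry, separate_industry)  # 1000 vs. 100
```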

So far... I think I like it less than Old World, but it has interestingly different solutions to many of the same problems, and it's covering a very different time period.

Comment by Vaniver on Jimrandomh's Shortform · 2021-07-26T16:58:23.883Z · LW · GW

IMO this is a prime candidate for curation/editing work, which I might be happy to do if no one else does.

Comment by Vaniver on A Contamination Theory of the Obesity Epidemic · 2021-07-26T16:50:11.660Z · LW · GW

I wonder if GC-MS data already exists for municipal water supplies, and can just be aggregated and compared against population obesity rates? Less precise than doing it house-by-house, but much cheaper if someone has already done it for you, and also it might not vary much house-by-house.

Comment by Vaniver on Big picture of phasic dopamine · 2021-07-26T05:54:12.590Z · LW · GW

I guess my sense is that most biological systems are going to be 'package deals' instead of 'cleanly separable' as much as possible--if you already have a system that's doing learning, and you can tweak that system in order to get something that gets you some of the benefits of a VoI framework (without actually calculating VoI), I expect biology to do that.

Comment by Vaniver on Big picture of phasic dopamine · 2021-07-24T16:00:40.350Z · LW · GW

But in experiments, they’re not synchronized; the former happens faster than the latter.

This has the effect of incentivizing learning, right? (A system that you don't yet understand is, in total, more rewarding than an equally yummy system that you do understand.) So it reminds me of exploration in bandit algorithms, which makes sense given the connection to motivation.

Comment by Vaniver on Covid 7/22: Error Correction · 2021-07-22T23:35:31.971Z · LW · GW

The 'generation time' is the one that can't be negative. Suppose Alice gets infected on day 1, infects Bob on day 2, Bob shows symptoms on day 3, and Alice shows symptoms on day 4. We end up with:

  1. Incubation periods of 3 days (for Alice) and 1 day (for Bob)
  2. Generation times of 1 day (Bob infected - Alice infected)
  3. Serial intervals of -1 day (Bob symptoms - Alice symptoms)
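A tiny sketch of the same example in code, just to make the sign conventions explicit (day numbers as above):

```python
# Alice infected day 1, infects Bob day 2; Bob symptomatic day 3, Alice day 4.
alice_infected, alice_symptoms = 1, 4
bob_infected, bob_symptoms = 2, 3

incubation_alice = alice_symptoms - alice_infected  # 3 days
incubation_bob = bob_symptoms - bob_infected        # 1 day
generation_time = bob_infected - alice_infected     # 1 day; can't be negative
serial_interval = bob_symptoms - alice_symptoms     # -1 day; can be negative

print(incubation_alice, incubation_bob, generation_time, serial_interval)
```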

Comment by Vaniver on Covid 7/22: Error Correction · 2021-07-22T15:43:29.279Z · LW · GW

I am deeply confused how a serial interval can be negative. If I understand the words involved that means you spread it on to someone who gets their symptoms before you do?

If I understand that correctly, it means you're breathing out infectious levels of virus days before you cough / notice that you are sick.

(I'm confused about your confusion, because I thought the negative serial interval of COVID was one of its most striking features, and the reason why many of the old 'control system' things failed; like, people were used to "if you feel fine you are fine" and wouldn't accept "everyone needs to act as though they could be sick, because you won't know whether or not you're infectious until after the fact.") 

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-21T04:17:22.834Z · LW · GW

Yeah, I think there's a (generally unspoken) line of argument that if you have a system that can revise its goals, it will continue revising its goals until it hits a reflectively stable goal, and then will stay there. This requires that reflective stability is possible, and some other things, but I think is generally the right thing to expect.

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:50:59.282Z · LW · GW

I do think it's fair to describe this as the 'standard argument'.

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:49:44.911Z · LW · GW

I think we're in a sort of weird part of concept-space where we're thinking both about absolutes ("all X are Y" disproved by exhibiting an X that is not Y) and distributions ("the connection between goals and intelligence is normally accidental instead of necessary"), and I think this counterexample is against a part of the paper that's trying to make a distributional claim instead of an absolute claim.

Roughly, their argument as I understand it is:

  1. Large amounts of instrumental intelligence can be applied to nearly any goal.
  2. Large amounts of frame-capable intelligence will take over civilization's steering wheel from humans.
  3. Frame-capable intelligence won't be as bad as the randomly chosen intelligence implied by Bostrom, and so this argument for AI x-risk doesn't hold water; superintelligence risk isn't as bad as it seems.

I think I differ on the 3rd point a little (as discussed in more depth here), but roughly agree that the situation we're in probably isn't as bad as the "AIXI-tl with a random utility function implemented on a hypercomputer" world, for structural reasons that make this not a compelling counterexample.

 

Like, in my view, much of the work of "why be worried about the transition instead of blasé?" is done by stuff like Value is Fragile, which isn't really part of the standard argument as they're describing it here.

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:34:04.019Z · LW · GW

On this proposal, any reflection on goals, including ethics, lies outside the realm of intelligence. Some people may think that they are reflecting on goals, but they are wrong. That is why orthogonality holds for any intelligence.

I think I do believe something like this, but I would state it totally differently. Roughly, what most people think of as goals are something more like intermediate variables which are cognitive constructs designed to approximate the deeper goals (or something important in the causal history of the deeper goals). This is somewhat difficult to talk about because the true goal is not a cognitive construct, in the same way that the map is not the territory, and yet all my navigation happens in the map by necessity.

Of course, ethics and reflection on goals are about manipulating those cognitive constructs, and they happen inside of the realm of intelligence. But, like, who won WWII happened 'in the territory' instead of 'in the map', with corresponding consequences for the human study of ethics and goals.

Persuasion, in this view, is always about pointing out the flaws in someone else's cognitive constructs rather than aligning them to a different 'true goal.'

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:33:59.053Z · LW · GW

So, to argue that instrumental intelligence is sufficient for existential risk, we have to explain how an instrumental intelligence can navigate different frames.

This is where the other main line of argument comes into play:

I think 'ability to navigate frames' is distinct from 'philosophical maturity', roughly because of something like a distinction between soldier mindset and scout mindset

You can imagine an entity that, whenever it reflects on its current political / moral / philosophical positions, uses its path-finding ability like a lawyer to make the best possible case for why it should believe what it already believes, or to discard incoming arguments whose conclusions are unpalatable. There's something like another orthogonality thesis at play here, where even if you're a wizard at maneuvering through frames, it matters whether you're playing chess or suicide chess.

This is just a thesis; it might be the case that it is impossible to be superintelligent and in soldier mindset (the 'curiosity' thesis?), but the orthogonality thesis is that it is possible, and so you could end up with value lock-in, where the very intelligent entity that is morally confused uses that intelligence to prop up the confusion rather than disperse it. Here we're using instrumental intelligence as the 'super' intelligence in both the orthogonality and existential risk consideration. (You consider something like this case later, but I think in a way that fails to visualize this possibility.)

[In humans, intelligence and rationality are only weakly correlated, in a way that I think supports this view pretty strongly.]

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:33:46.231Z · LW · GW

Sticking a typo over here instead of the other tree:

This thought it sometimes called the

"thought is sometimes"

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:33:22.635Z · LW · GW

So, what would prevent a generally superintelligent agent from reflecting on their goals, or from developing an ethics? One might argue that intelligent agents, human or AI, are actually unable to reflect on goals. Or that intelligent agents are able to reflect on goals, but would not do so. Or that they would never revise goals upon reflection. Or that they would reflect on and revise goals but still not act on them. All of these suggestions run against the empirical fact that humans do sometimes reflect on goals, revise goals, and act accordingly.

I think this is not really empathizing with the AI system's position. Consider a human who is lost in an unfamiliar region, trying to figure out where they are based on uncertain clues from the environment. "Is that the same mountain as before? Should I move towards it or away from it?" Now give that human a map and GPS routefinder;  much of the cognitive work that seemed so essential to them before will seem pointless now that they have much better instrumentation.

An AI system with a programmed-in utility function has the map and GPS. The question of "what direction should I move in?" will be obvious, because every direction has a number associated with it, and higher numbers are better. There's still uncertainty about how acting influences the future, and the AI will think long and hard about that to the extent that thinking long and hard about that increases expected utility.

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:33:10.761Z · LW · GW

The orthogonality thesis is thus much stronger than the denial of a (presumed) Kantian thesis that more intelligent beings would automatically be more ethical, or that an omniscient agent would maximise expected utility on anything, including selecting the best goals: It denies any relation between intelligence and the ability to reflect on goals.

I don't think this is true, and have two different main lines of argument / intuition pumps. I'll save the other for a later section where it fits better.

Are there several different reflectively stable moral equilibria, or only one? For example, it might be possible to have a consistent philosophically stable egoistic worldview, and also possible to have a consistent philosophically stable altruistic worldview. In this lens, the orthogonality thesis is the claim that there are at least two such stable equilibria and which equilibrium you end up in isn't related to intelligence. [Some people might be egoists because they don't realize that other people have inner lives, and increased intelligence unlocks their latent altruism, but some people might just not care about other people in a way that makes them egoists, and making them 'smarter' doesn't have to touch that.]

For example, you might imagine an American nationalist and a Chinese nationalist, both remaining nationalistic as they become more intelligent, and never switching which nation they like more, because that choice was for historical reasons instead of logical ones. If you imagine that, no, at some intelligence threshold they have to discard their nationalism, then you need to make that case in opposition to the orthogonality thesis. 

For some goals, I do think it's the case that at some intelligence threshold you have to discard it, hence the 'more or less', and I think many more 'goals' are unstable, where the more you think about them, the more they dissolve and are replaced by one of the stable attractors. For example, you might imagine it's the case that you can have reflectively stable nationalists who eat meat and universalists who are vegan, but any universalists who eat meat are not reflectively stable, where either they realize their arguments for eating meat imply nationalism or their arguments against nationalism imply not eating meat. [Or maybe the middle position is reflectively stable, idk.]

 

In this view, the existential risk argument is less "humans will be killed by robots and that's sad" and more "our choice of superintelligence to build will decide what color the lightcone explosion is and some of those possibilities are as bad or worse than all humans dying, and differences between colors might be colossally important." [For example, some philosophers today think that uploading human brains to silicon substrates will murder them / eliminate their moral value; it seems important for the system colonizing the galaxies to get that right! Some philosophers think that factory farming is immensely bad, and getting questions like that right before you hit copy-paste billions of times seems important.]

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:32:53.845Z · LW · GW

So, intelligent agents can have a wide variety of goals, and any goal is as good as any other.

The second half of this doesn't seem right to me, or at least is a little unclear. [Things like instrumental convergence could be a value-agnostic way of sorting goals, and Bostrom's 'more or less' qualifier is actually doing some useful work to rule out pathological goals.]

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T19:32:34.896Z · LW · GW

Lots of different comments on the details, which I'll organize as comments to this comment.

(I forgot that newer comments are displayed higher, so until people start to vote this'll be in reverse order to how the paper goes. Oops!)

Comment by Vaniver on Is the argument that AI is an xrisk valid? · 2021-07-19T18:19:34.298Z · LW · GW

Overall, I think your abstract and framing is pretty careful to narrow your attention to "is this argument logically sound?" instead of "should we be worried about AI?", but still this bit jumps out to me:

the argument for the existential risk of AI turns out invalid.

Maybe insert "standard" in front of "argument" again?

Comment by Vaniver on How to Sleep Better · 2021-07-16T22:23:08.263Z · LW · GW
  1. Oxygen: Not much to say here. Your body needs oxygen. This doesn't stop while we sleep. If possible open a window.

This is a pet peeve of mine, but: you're not running out of oxygen as input. Instead exhaust products are building up in the room, of which the most well-known is carbon dioxide. (Outside air contains about 500x as much O2 as CO2, and in typical stuffy rooms the ratio is down to about 100x.) For some reason, we seem to be very sensitive to those exhaust products (tho it also seems like this might be a dimension that people vary on significantly).
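The rough arithmetic behind those ratios, assuming ballpark figures of ~21% O2 throughout, ~400 ppm CO2 outdoors, and ~2000 ppm CO2 in a poorly ventilated bedroom:

```python
# Ballpark O2:CO2 ratios outdoors vs. in a stuffy room.
o2_fraction = 0.21      # ~21% O2, roughly the same indoors and out
co2_outdoor = 400e-6    # ~400 ppm CO2 outdoors
co2_stuffy = 2000e-6    # ~2000 ppm CO2 in a poorly ventilated bedroom

print(round(o2_fraction / co2_outdoor))  # ~525, i.e. on the order of 500x
print(round(o2_fraction / co2_stuffy))   # ~105, i.e. on the order of 100x
```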

Comment by Vaniver on Jimrandomh's Shortform · 2021-07-16T19:48:36.862Z · LW · GW

I think I basically disagree with this, or think that it insufficiently steelmans the other groups.

For example, the homeless vs. the landlords; when I put on my systems thinking hat, it sure looks to me like there's a cartel, wherein a group that produces a scarce commodity is colluding to keep that commodity scarce to keep the price high. The facts on the ground are more complicated--property owners are a different group from landlords, and homelessness is caused by more factors than just housing prices--but the basic analysis that there are different classes, those classes have different interests, and those classes are fighting over government regulation as a tool in their conflict seems basically right to me. Like, it's really not a secret that many voters are motivated by keeping property values high, and politicians know this is a factor that they will be judged on.

Maybe you're trying to condemn a narrow mistake here, where someone being an 'enemy' implies that they are a 'villain', which I agree is a mistake. But it sounds like you're making a more generic point, which is that when people have political disagreements with the rationalists, it's normally because they're thinking in terms of enemy action instead of not thinking in systems. But a lot of what the thinking in systems reveals is the way in which enemies act using systemic forces!

Comment by Vaniver on The Point of Trade · 2021-06-24T17:30:25.479Z · LW · GW

Interestingly, I think this is pretty obviously stated in The Wealth of Nations; Chapter 1 identifies division of labor as the cause of capital accumulation, Chapter 2 identifies trade as the cause of division of labor, and Chapter 3 identifies the size of the market as a limiter on specialization.

This is... actually sort of surprising that now I have two examples of economic concepts which are really better explained by Adam Smith than they are by modern textbooks (the other is supply and demand), and this makes me even more glad that I read The Wealth of Nations in high school before I had come across any modern textbooks.

Comment by Vaniver on Reply to Nate Soares on Dolphins · 2021-06-18T21:19:55.277Z · LW · GW

Some notes with my mod hat on:

While it seems to me like you're trying to protect an important pole of coherency and consistency here, I think this comment as well as some features of the OP (to a lesser extent) overstep some important bounds and make it quite tricky to have a productive conversation, in a way that I would like to both discourage and advise against. I worry that you're imputing positions stronger than people are holding, and thus creating more disagreement than exists, and raising the emotional stakes of that disagreement more than seems necessary to continue the conversation.

I would rather not perpetuate an escalatory dynamic where you think you need to make a bigger and bigger fuss in order to get responses, in a way that can be reminiscent of 'trapped priors'; it seems to me like the conversation in this thread could have been basically as effective at challenging So8res's position and provoking elaboration with much less strain on your part, and yet when I imagine being in your shoes this encounter probably feels like an example of the success of this approach.

Concretely, in this case, I think you're exasperated about humor and shitposting in a way that isn't justified and is failing to credit the ways in which people are responding to your bids for increased seriousness and abstraction. The standard you seem to be imposing is not "please respond to seriousness with seriousness" but the much stronger "please never joke in public about something I take very seriously", which seems like a pretty drastic standard, and one I would mildly warn against trying to enforce on LW.

(On the object level, I agree with Ben Pace that you are right that the about-face on this example deserves explanation, but my sense is that the explanation is satisfactory; the take that I'd summarize as 'there's a paraphyletic grouping that pretends to obviousness that it does not possess on closer examination' seems sensible enough, tho I am interested in disagreements you have with that take.)

I applaud your earlier decision to have a friend review a draft before posting it, since I think this is the sort of behavior that leads to more intellectual progress and less mutual misunderstanding. In that spirit, I'd be happy to review any further comments you want to make in this conversation, in the hopes of having it go a bit better.

Comment by Vaniver on Reply to Nate Soares on Dolphins · 2021-06-16T18:33:21.593Z · LW · GW

So, I'm not a biologist. I don't think Eliezer is much of a biologist either. A thing that I learned in the last ten years, which maybe Nate and Eliezer learned in the same time, idk, is that different aquatic animals are more distantly related than one might have thought. For example, let's take the list from 2008. When I go on Wikipedia and try to find an appropriate scientific name for each and stick it into timetree.org to try to figure out when their most recent common ancestor was, I get the following estimates:

Salmon and Guppies: 206 MYA
Trout and Guppies: 206 MYA
Dolphins and Guppies: 435 MYA
Sharks and Guppies: 473 MYA
Jellyfish and Guppies: 824 MYA
Algae and Guppies: 1496 MYA

That is, if you're going to start removing things from the list because of how distantly related they are, sharks go first; Chondrichthyes is just as weird a member of Chordata as Mammalia is, from the perspective of Actinopterygii.

The trouble with defending the 2008 classification is not that it's phylogenetics, it's that, as far as I can tell, it's bad phylogenetics. And so you end up requiring mental gymnastics in order to exclude dolphins because their most recent common ancestor is too far back while including sharks whose most recent common ancestor is even further back. The pedant's position ("I know that dolphins are mammals instead of fish!") doesn't hold up under either the useful definition ("dolphins are aquatic animals tho") or the phylogeneticist's definition ("mammals are Chordata tho, which is what you should mean when you say 'fish'.").

Comment by Vaniver on Reply to Nate Soares on Dolphins · 2021-06-16T18:01:33.684Z · LW · GW

I can't speak for So8res, but I'm bothered by something like... privileging one particular frame for reasons of fashion or class rather than efficiency? The sort of thing where you leave hazards around as tests, so that people can see who stumbles on them and who gracefully avoids them. I'm not opposed to tests in general, I just wish they'd be more efficient.

Like, the old meaning of fish was "fully aquatic animal", which seems like the right sort of definition for a four-letter word (remember, words are supposed to encode information cheaply), and saying "actually we've reserved that four-letter word for this tiny slice of its former domain" seems like a weird choice (comparable to the 'true bug' definition).

Comment by Vaniver on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T20:38:46.518Z · LW · GW

I'm thinking about Weimar Germany to Nazi Germany, or (the reverse) Imperial Japan to Solid-State Electronics Japan.

Consider this claim from a recent SSC book review contest entrant, describing the Bretton Woods arrangement:

The deal offered benefits not only to England, France, and the Allies, but also to Japan and Germany that they couldn’t have even hoped to achieve had they won the war. 6

6 Apparently Germany and Japan would have found it to be unbelievable. “The primary reason Germany and Japan had launched World War II in the first place was to gain greater access to resources and markets. Germany wanted the agricultural output of Poland, the capital of the Low Countries, the coal of Central Europe, and the markets of France. Japan coveted the manpower and markets of China and the resources of Southeast Asia. Now that they had been thoroughly defeated, the Americans were offering them economic access far beyond their wildest prewar longings: risk-free access to ample resources and bottomless markets a half a world away. And “all” it would cost them was accepting a security guarantee that was better than anything they could ever have achieved by themselves.” 

It seems to me there are positional status questions--is China just a participant in America's world, or is it the Middle Kingdom?--but I think it's hard to see a situation where China is better off annexing countries to be recalcitrant provinces rather than just trading with them while they're American allies and protectorates. (Like, it's really not obvious that China is better off with a conquered Korea than it is with a neighboring Korea.)

Comment by Vaniver on Quotes from Moral Mazes · 2021-05-25T16:26:15.081Z · LW · GW

They wrote a book about it, the Science of Success.

Comment by Vaniver on The Argument For Spoilers · 2021-05-21T14:49:57.203Z · LW · GW

I think it's pretty easy to separate things I've recommended to people as "better spoiled" or "better unspoiled"; so long as my threshold / reason for thinking this is sufficiently similar to abramdemski's, then I should be able to freely spoil for him the art that I think can be spoiled with only minor costs (compared to freely spoiling all art).

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-17T05:08:03.591Z · LW · GW

Episode 50: Tillich and Barfield

Then I pointed to somebody whose work, also deriving from Heidegger, integrates aspects of all of these together in kind of a profound way. Tillich is deeply influenced and aware of what he calls 'depth psychology', the kind of psychology in Jung, he of course is deeply aware of Heidegger. I don't think that Tillich was aware of Corbin, but he is deeply aware of the symbol in an imaginal instead of a merely imaginary way.

Tillich takes the meaning crisis seriously; he writes perhaps his most well-known book, The Courage to Be (and I think it's a masterpiece), as a response to the meaning crisis. Like Jung and Corbin, and for very related reasons, he's deeply critical of literalism and fundamentalism throughout, but he takes it deeper. As I mentioned, he really deepens it in terms of Heidegger's critique of ontotheology, and he becomes critical of literalism and fundamentalism as forms of idolatry in which we are attempting to have rather than become.

So there's some excellent books on the relationship between Jung and Tillich, a series of ongoing work by John Dourley; I recommend two books to you, The Psyche as Sacrament which I tweeted about in my book recommendations, I would also recommend his later book, Paul Tillich, Carl Jung, and the Recovery of Religion, but make no mistake, Dourley is not talking about a recovery in a nostalgic sense. He writes another book called A Strategy for a Loss of Faith where he is trying to get beyond classical theism. So I recommend Dourley's work as a comprehensive way of bringing about a deep dialog and a kind of integration between Jung and Tillich.

Tillich sees courage as the main response to the meaning crisis, and here's how Tillich is not just theorizing: he is trying to give us guidance on how to live. Let's remember that this really matters, because of, you know, the way Tillich resisted the Nazis. What Tillich talks about in The Courage to Be is *courage*; now, he's careful to note that this is a kind of existential courage that ultimately allows us to confront and overcome meaninglessness in its depth, but also to more practically respond to perverted responses to the meaning crisis itself, like Nazism and its gnostic nightmare.

This process of encouragement--now, he is like Aristotle, he's not talking about something as simple as just bravery (facing danger) or fortitude (the ability to endure); no, for Tillich courage is a virtue. There's something of wisdom in courage. Courage involves within it that central feature of wisdom, which is seeing through illusion into reality. The brave person faces danger, but that's all we can say about them. The person with fortitude endures difficulty, but that's all we can say about them. The courageous person sees through the illusion and the distortion of fear or distress to what is truly good and acts accordingly.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-13T15:19:05.930Z · LW · GW

Episode 49: Corbin and Jung

So last time we looked in depth at Corbin and Jung and tried to draw very deeply the notion of the relationship to the sacred second self. I launched into a sort of mutual criticism between Corbin and Jung and brought in some Buber along the way.

The summary at the beginning of the next episode pretty quickly shifts to new material, so here's the key quote according to me:

Freud has a Newtonian machine hydraulic model of the psyche. Jung ultimately rejects that; Jung replaces the hydraulic metaphor with an organic metaphor. He sees the psyche as a self-organizing dynamical system, ultimately as an autopoietic being, so he sees the psyche as going through a complex process of self-organization, and that you have to understand individuation as this kind of organic self-organizing process that you neither make nor receive but you participate in.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-12T15:39:18.248Z · LW · GW

Episode 48: Corbin and the Divine Double

So last time we followed Heidegger into the depths, where we encountered Eckhart and this non-teleological relationship to the play of being. That led us very directly into Corbin, and Corbin's core argument that gnosis (as we've been using it), the ability to engage in this serious play, relates centrally to the imagination.

But Corbin is making use of this term in a new way; he makes a distinction between the imaginary (which is how we typically use the word "the imagination": mental images in my head that are only subjective and have no objective reality) and the imaginal (which mediates between the abstract intelligible world and the concrete sensible world, and transjects between the subjective and the objective). All this mediation is not done statically, but as a mutual affordance, an ongoing transformative transframing, and the symbol captures all of this.

Then I wanted to bring out Corbin's core symbol, and it's a core symbol that relates directly to gnosis. Because in gnosis (transformative participatory knowing)--and this goes to the heart of Heidegger's notion of Dasein, the being whose being is in question--we have to see self-knowledge and knowledge of the world as inextricably bound up together. In order to do that, we are pursuing Corbin's central symbol, the angel.

Which, of course, is immediately off-putting to many people including myself. But I've been trying to get a way of articulating how Corbin is incorporating both Heidegger and Persian Sufism, Neoplatonic Sufism into this understanding of the symbol, and I recommend that we take a look at the historical work showing how throughout the ancient Mediterranean world, and up and through the Hellenistic period and beyond, up until about the 5th century of the common era, there's the pursuit of the divine double. The idea is one that is deeply transgressive of our cultural cognitive grammar of decadent Romanticism, where we are born with our true self that merely needs to express itself (a la Rousseau), and that the core virtue is authenticity, which is being true to the true self that you have, that you possess. Rather than, for example, a Socratic model in which the true self is something towards which you are constantly aspiring.

The transgressive mythology is that the self that I have now is not my true self. My true self is my divine double; this is something that is superlative to me, it is bound to me, it is my double. It is bound to me but it is superlative to me; it is both me and not me. It's me as I'm meant to be, as I should be, and the existential project is not one of expressing a self that you have but of transcending to become a self that is ecstatically ahead of you in an important way.

Then I pointed out that for many of you this would still be "okay, I get the transgression, but I still find this notion of a divine double unpalatable." Maybe for some of you you don't, but nevertheless I think there's something important to asking the question "why did so many people for so long believe in this aspirational process?". This takes us back into work that was core to the discussion I made about gnosis, and it has a resounding impact at various places throughout this series, which is L.A. Paul's work on transformative experience, and then somebody who's from the same school, influenced by Paul while having a different view than Paul (her transformations are more like insight): Agnes Callard's notion of aspiration is much more developmental, but I argue they can be (I think) readily reconciled together if you see development as a linked sequence of insights to bring about qualitative change in your competence.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-12T04:03:09.037Z · LW · GW

Episode 47: Heidegger

So last time we were trying to understand Heidegger's work as a prophet (in the Old Testament sense) of the meaning crisis. We took a look at this notion of "the thing beyond itself" and "realness" as simultaneously the shining into our framing and the withdrawing beyond our framing in a deeply interpenetrating manner. We took a look at this deeper notion of truth--not truth as correctness, but truth as aletheia, that which grounds the agent-arena relationship in attunement and allows us the potential to remember being by getting into an attunement with its simultaneous disclosure and withdrawal.

But we can forget that; we can get into a profound kind of modal confusion and this is the history of metaphysics as the emergence of nihilism. We can forget the being mode, we can get trapped into the having mode in which the metaphysics is a propositional project of trying to just use truth as correctness, and we misunderstand being as a particular being. We try to capture the unlimitedness aspect of being, but we only do it at the limit (which Heidegger is deeply critical of). So we understand being in terms of a Supreme Being, a being at the limit, and beyond the limit. This is ontotheology; we understand God as the Supreme Being and this is deeply enmeshed (for Heidegger) with nihilism, because this ontotheology, this version of theology from classical traditional theism, this way of understanding being gets us into the deep forgetfulness and modal confusion that is the hallmark of nihilism.

Of course, we could perhaps remember the being mode, and this is what Corbin (following Heidegger) talks about as gnosis.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-12T03:55:26.255Z · LW · GW

Episode 46: Conclusion and the Prophets of the Meaning Crisis

Last time I finished the discussion of wisdom and connected it to enlightenment and argued for the wise cultivation of enlightenment as our deepest kind of existential response to the meaning crisis, a way in which we can awaken from the meaning crisis. I then wanted to put that scientific model of spirituality (for lack of a better phrase) into discourse with some of the central prophets of the meaning crisis. I'm using the word prophet as it's used in the Old Testament; I'm talking about individuals who were crucial for articulating the advent of the meaning crisis and helping to propose or promise a response to it. I put a diagram on the board in which Heidegger played a central role; there are many connections in there that, I'll point out, I will not be able to fully address, because the people are there insofar as they help us articulate the response, not to be examined for their own sake.

I mentioned the work of Nishida and Nishitani in the Kyoto School; I will talk briefly about Nishitani here but I won't be able to go into that in depth. I do intend to pursue this later in another series I'm putting together (I'm putting together a couple of series to follow this one) and I would like to do a series that will include work on the Kyoto School that I've entitled The God Beyond God, in which we look at all of these great non-theistic thinkers within both Eastern and Western traditions, and things like the Kyoto School that tried to bridge between them. So I will have to neglect (to some degree) the Kyoto School in this series but I promise to follow it up more deeply in another series.

The first 45 lectures have been, to some extent, "how did we get here, and where is here anyway?", and these remaining five lectures are something like "what do other people think about being here?" This episode mostly touches on Husserl (who doesn't really make it into the summary at the beginning of the next episode).

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-09T05:13:31.199Z · LW · GW

Fixed, thanks!

Comment by Vaniver on MIRI location optimization (and related topics) discussion · 2021-05-09T05:09:25.592Z · LW · GW

I think Singapore is very high on my "city to do finance in" list and not very high on my "naturey place to do thinking in" list, and as pointed out the LGBTQ acceptance is probably low enough to dissuade some people from going there.

Comment by Vaniver on MIRI location optimization (and related topics) discussion · 2021-05-09T05:04:04.365Z · LW · GW

"Better grocery stores than other places I've lived" - Where have you lived, if you don't mind my asking?

Austin is the home of Whole Foods, and the typical mainstream grocery store is HEB, which was prepping for COVID in January 2020; Central Market is also pretty good?

I think people who really like Berkeley Bowl might not find something exactly similar, but I think they will find things that are adequate replacements.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-07T17:48:27.440Z · LW · GW

Episode 45: The Nature of Wisdom

So last time I tried to draw together all the other theories (I don't just mean the psychological theories, although they're the most salient right now, but also the philosophical theories) into an account of wisdom. I presented a model to you, a theory of wisdom developed by myself and Leo Ferraro from 2013, in which we are enhancing inferential processing through active open-mindedness, enhancing insightful processing through mindfulness, enhancing the capacity for internalization by internalizing the sage, and cultivating sophrosyne by having our salience landscape naturally organize away from self-deception and tempt us towards the truth (or at least what's true, good, and perhaps beautiful; that's perhaps a better way of putting it).

That coordinates the propositional knowing associated with inference, the procedural knowing associated with insight, the perspectival knowing associated with internalization together. That is directed towards realizing sophrosyne and that can help cultivate a more moral existence, the connection to virtue, mastery (in the sense of coping and caring), and meaning in life.

Of course, one of the criticisms I made was that the notion of meaning in life there was too simplistic, and it needs to be integrated with a much more developed account that's already in the literature. I'm contributing to that by work I'm doing with others on meaning in life; I pointed out that the Vervaeke-Ferraro model is missing participatory knowing, it's missing (or at least I think it misrepresents/misaligns) the relationship between the kinds of knowing. Understanding is missing, transformative experiences are missing, aspiration is missing, gnosis is missing, so all of these things need to be deeply integrated together.

I tried to suggest the beginnings of an account of how we turn basic understanding, which is to grasp the relevance of our knowledge, into profound understanding by integrating the account of understanding with the account of possibilities, so that profound understanding is the generation of plausibility by having convergence onto a contextually sensitive optimal grip that is transformatively transferrable in a highly effective manner in many different problem finding, formulating, and solving in many different domains.

I also brought out the idea that in addition to inspiration (this is a term I'm giving for more sudden insight-laden transformative experience), you can have what Callard calls aspiration, which is more incremental; it still can't be solved in an inferential decision-theoretic fashion (she agrees with Paul on that). She does argue though (and I agree with this argument) that aspiration must be considered a form of rationality, which she calls 'proleptic rationality', because otherwise you're going to get into a performative contradiction: if my aspiration for rationality and my love of wisdom are not themselves rational processes, I'm kind of in trouble in my model of rationality.

Given all of that philosophy, what's missing (as I argued) is an extensive psychology of aspiration. I know one of my colleagues Juensung Kim is working on exactly that problem, and he's of course doing it in connection with a psychology of wisdom. I did suggest to you that we could see one of Callard's ideas of how we do this: we create something that's double-faced (I argued, ultimately symbolic, having aspects of gnosis in it) that allows us to make the leap, even if it's an incremental one, from who we are now and what we value now to the place where I've acquired some new thing that I value for its own sake. We used the example of music appreciation.

I think the 'summary' portion of the next lecture goes out to about 8 minutes, but I'm cutting it off at about 5, in part because there's a lot of tying together / elaborating / concluding to it.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-06T15:47:03.904Z · LW · GW

Episode 44: Theories of Wisdom

Last time we finished up looking at Baltes and Staudinger and made some criticisms that led into important criticisms made by Monica Ardelt. Then we looked into Ardelt's theory and the way it brought in an important distinction: not just having a good theory of wisdom, but a theory of the process of becoming a wise person, and then the emphasis on 'what are the features of a wise person?' as opposed to 'what are some of the central claims made by a theory of wisdom?'. Then we talked about how Monica insightfully brings together the cognitive, the reflective, and the affective.

I pointed out how we've got relevance realization grasping the significance (at least in the cognitive component, directly, because of the invocation of Kekes). I would also point out that I think it's at least implicit in the reflective machinery, and there's a deep potential connection there with both perspectival knowing and the cultivation of rationality (at least perspectival rationality); and the affective ties to agape (which I've already argued has very important connections to relevance realization). That affords Ardelt's theory a powerful way of connecting wisdom to meaning in life as something different from connecting wisdom to virtue, and that's a very important thing to do.

We still noted some criticisms: largely that it's still a product theory, and that it doesn't have an independent account of foolishness or a process theory of how one becomes wise. In that sense it's not picking up as well as it could on the philosophical heritage given to us by people like Socrates and Plato and Aristotle and Marcus Aurelius and so on.

We then took a look at the theory of Sternberg, just an extremely pivotal figure in the cognitive science of wisdom. I pointed out that his ideas about adapting, shaping, and selecting are clearly ideas about relevance realization; he invokes implicit processing and tacit knowledge in order to bring in understanding, that intuitive grasping of the significance of information (I think that's what he's implying). We talked about how he involves a balancing of interests: the intrapersonal (how you're connected to yourself), the interpersonal (how you're connected to other people), and the extrapersonal (how you're connected to the world). So those are (at least implicitly) important connections to meaning in life in the way we've been talking about throughout this course. He invokes balance throughout, and I tried to make a good case that you should see that as optimization, and therefore directly relevant to the accounts of the optimization of processing that we discussed in connection with relevance realization.

There were some issues I had with Sternberg. The idea that, for all wise people, all of this machinery is directed towards the common good strikes me as anachronistic; I think a less contentious claim would be that it's directed towards virtue and meaning in life for oneself and others in some unspecified way. There was also the invocation of values as affecting or constraining the whole process; again, it was unclear to me what this is. There's an ambiguity here: it could be the relatively trivial claim that the wise person is being regulated by normativity (you know, by considerations of what's true and good and beautiful), which would be definitional (and therefore relatively trivial) because wisdom is a normative term; or it could be that specific values are being invoked, but if that's the case they should be specifically stated and justified (why were those ones chosen?), with an explicit explanation of how those specific values impact specific aspects of the machinery. So that's all sort of missing and needs to be addressed. It's ultimately a product theory, not a process theory. Sternberg does have a theory of foolishness, but it's not independently generated and it doesn't really pick up on the centrality of seeing through illusion and into reality.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-05T20:32:29.250Z · LW · GW

Yep. 0.8 is retest correlation among adults.

Do you happen to remember the source for this? I'm having trouble finding any studies that seem to bear directly on the question.

Comment by Vaniver on [Lecture Club] Awakening from the Meaning Crisis · 2021-05-05T16:20:47.565Z · LW · GW

because IQ-tests are highly game-able

I'm not sure which claim this refers to, but I think in general the ability to 'game' IQ tests is part of what they're trying to test. [Obviously tests that cover more subskills will be more robust than tests that cover fewer subskills, and performance on test day can be impacted by various negative factors that some people are more able to avoid than others, etc., but I don't think this is that relevant for population-level comparisons.]

The retest correlation of IQ is only like 0.8 after 20 years! 

So, note that there are roughly three stages: childhood, early adulthood, and late adulthood. We know of lots of interventions that increase childhood IQ, and also of the 'fadeout' effect, whereby the effects of those interventions are short-lived. I don't think there are that many that reliably affect adult IQ, and what we're interested in is the retest correlation of IQ among adults.

In adulthood, things definitely change: generally for the worse. People make a big distinction between 'fluid intelligence' and 'crystallized intelligence', where fluid intelligence declines with age and crystallized intelligence increases (older people learn more slowly but know more facts and have more skills). What would be interesting (to me, at least) are increases (or slower decreases) in non-age-adjusted IQ scores. Variability in a 20-year retest correlation could pretty easily be caused by aging more or less slowly than one's cohort.
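To make that concrete, here's a minimal toy simulation (made-up numbers, not real psychometric data) of how measurement noise plus individual differences in age-related decline could pull a 20-year retest correlation down to around 0.8 even when underlying ability is otherwise stable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_score = rng.normal(100, 15, n)   # stable underlying ability in early adulthood
aging_drift = rng.normal(0, 8, n)     # individual differences in age-related change over 20 years
noise_t1 = rng.normal(0, 5, n)        # measurement error at first test
noise_t2 = rng.normal(0, 5, n)        # measurement error at retest, 20 years later

test_1 = true_score + noise_t1
test_2 = true_score + aging_drift + noise_t2

r = np.corrcoef(test_1, test_2)[0, 1]
print(f"simulated 20-year retest correlation: {r:.2f}")  # ~0.80 with these made-up parameters
```

The specific parameters are illustrative only; the point is that a 0.8 correlation is compatible with fairly stable underlying ability once measurement error and differential aging are taken into account.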

That's almost certainly much less than your retest correlation for basketball skills

Hard to say, actually; I think the instantaneous retest correlation is higher for IQ tests than it is for basketball skill tests (according to a quick glance at some studies), and I haven't yet found tests applied before and after an intervention (like a semester on a basketball team or w/e). We could get a better sense of this by looking at Elo scores over time for chess players, perhaps? [Chess is widely seen as trainable, and yet also has major 'inborn' variation that should show up in the statistics over time.]
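If someone wanted to run that chess comparison, a sketch of the computation might look like the following (the player names and ratings are made-up placeholders; real rating lists would need to be downloaded and joined on player ID):

```python
import numpy as np

# Hypothetical ratings for the same players at two dates, years apart.
ratings_then = {"player_a": 2450, "player_b": 2105, "player_c": 2310, "player_d": 2230}
ratings_now = {"player_a": 2480, "player_b": 2290, "player_c": 2250, "player_d": 2325}

# Restrict to players rated at both dates, then correlate the two snapshots.
common = sorted(set(ratings_then) & set(ratings_now))
x = np.array([ratings_then[p] for p in common], dtype=float)
y = np.array([ratings_now[p] for p in common], dtype=float)

r = np.corrcoef(x, y)[0, 1]
print(f"Elo retest correlation over the gap (n={len(common)}): {r:.2f}")
```

One caveat with this design: restricting to players who are still active at both dates introduces survivorship effects, so the analogy to IQ retest samples would be imperfect.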

We've even seen massive changes in population-wide IQ studies that correlate heavily with educational interventions in the form of the Flynn effect. 

Lynn is pretty sure it's not just education, as children before they enter school show the same sorts of improvements. This could, of course, still have education as an indirect cause, where (previous) education is intervening on the parents, and I personally would be surprised if education had no impact here, but I think it's probably quite small (on fluid intelligence, at least).