But I have never seen an article pulled completely before.
It happened before, but it's quite rare. Normally when I've done it, I've left a note in an Open Thread, such as this case where I moved to drafts a post that was talking about an ongoing legal case (now concluded). I think that's the last one I did, and it was four years ago? But there are other mods as well.
Also, I was under the impression that cryonics was a business with significant returns to scale--two facilities storing 100 bodies each is much more expensive than one facility storing 200 bodies, which makes 'market share' more important than it normally is.
There's a paired optimization problem, where you assign everyone to a room, and the constraint that this assignment be 'envy-free'; that is, no one looks at someone else's assignment/rent combo and says "I'd rather have that than my setup!". There was a calculator that I can't easily find now which tried to find the centroid of the envy-free region.
There are other approaches that work differently; this one, for example, tries to split surplus evenly between the participants, and shows the comparison to other options.
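For readers who want to poke at this themselves, here's a brute-force sketch of the idea (my own toy code, not the calculator in question): pick a welfare-maximizing room assignment (which always admits envy-free rents), grid-search the envy-free region of rent splits, and return the centroid of the sampled points.

```python
from itertools import permutations, product

def envy_free_rents(valuations, total_rent, step=50):
    """Approximate the centroid of the envy-free region by grid search.

    valuations[i][r] = how much person i values room r (toy numbers);
    rents are sampled on a grid of width `step`.
    """
    n = len(valuations)
    # a welfare-maximizing assignment always admits envy-free rents
    assignment = max(
        permutations(range(n)),
        key=lambda a: sum(valuations[i][a[i]] for i in range(n)),
    )
    feasible = []
    for partial in product(range(0, total_rent + 1, step), repeat=n - 1):
        last = total_rent - sum(partial)
        if last < 0:
            continue
        rents = list(partial) + [last]
        # envy-free: nobody prefers someone else's (room, rent) bundle
        if all(
            valuations[i][assignment[i]] - rents[assignment[i]]
            >= valuations[i][r] - rents[r]
            for i in range(n)
            for r in range(n)
        ):
            feasible.append(rents)
    if not feasible:
        return assignment, None
    centroid = [sum(col) / len(feasible) for col in zip(*feasible)]
    return assignment, centroid

# two people splitting $1000: person 0 strongly prefers room 0
assignment, rents = envy_free_rents([[600, 400], [500, 500]], 1000)
```

With these numbers, person 0 gets the preferred room but pays more for it, which is the intuitive envy-free outcome.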
Do you “manage the news” by refusing to read the morning’s newspaper, or by scribbling over the front page “Favored Candidate Wins Decisively!”? No: if you’re rational, your credence in the loss is still 70%.
I feel like the "No; if you're rational" bit is missing some of the intuition against EDT. Physical humans do refuse to read the morning's newspaper, or delay opening letters, or similar things, I think because of something EDT-ish 'close to the wire'. (I think this is what's up with ugh fields.)
I think there's something here--conservation of expected evidence and related--that means that a sophisticated EDT won't fall prey to those traps. But this feels sort of like the defense whereby a sophisticated EDT doesn't fall prey to typical counterexamples because if you're doing the expectation correctly, you're taking into account causation, at which point we're not really talking about EDT anymore. I do think it's sensible to include proper probabilistic reasoning in EDT, but it sometimes feels off to hide this detail behind the word "rational."
One frame I have for 'maximizing altruism' is that it's something like a liquid: it's responsive to its surroundings, taking on their shape, flowing to the lowest point available. It rapidly conforms to new surroundings if there are changes; turn a bottle on its side and the liquid inside will rapidly resettle into the new best configuration.
This has both upsides and downsides: the flexibility and ability to do rapid shifts mean that as new concerns become the most prominent, they can be rapidly addressed. The near-continuous nature of liquids means that as you get more and more maximizing altruist capacity, you can smoothly increase the 'shoreline'.
Many other approaches seem solid instead of liquid, in a way that promotes robustness and specialization (while being less flexible and responsive). If the only important resources are fungible commodities, then the liquid model seems optimal; but if it turns out that the skills and resources you need for tackling one challenge are different from those needed for tackling another, or if switching costs dominate the relative differences between projects, then the solid model starts to look better. Reality has a surprising amount of detail, and it takes time and effort to build up the ability to handle that detail effectively.
I think there's something important here for the broader EA/rationalist sphere, tho I haven't crystallized it well yet. It's something like--the 'maximizing altruism' thing, which I think of as being the heart of EA, is important but also a 'sometimes food' in some ways; it is pretty good for thinking about how to allocate money (with some caveats) but is much less good for thinking about how to allocate human effort. It makes sense for generalists, but actually that's not what most people are or should be. This isn't to say we should abandon maximizing altruism, or all of its precursors, but... somehow build a thing that both makes good use of that, and good use of less redirectable resources.
[Note: I use Copilot and like it. The 'aha' moment for me was when I needed to calculate the intersection of two lines, a thing that I would normally just copy/paste from Stack Overflow, and instead Copilot wrote the function for me. Of course I then wrote tests and it passed the tests, which seemed like an altogether better workflow.]
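For concreteness, the function in question is the classic determinant formula for the intersection of two lines, the sort of thing one would normally copy from Stack Overflow; here's my own reconstruction of it (not Copilot's literal output):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the (infinite) line through p1, p2 with the line
    through p3, p4; returns None for (near-)parallel lines.

    Standard determinant formula; points are (x, y) tuples.
    """
    x1, y1 = p1; x2, y2 = p2
    x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None  # parallel or coincident lines
    d1 = x1 * y2 - y1 * x2
    d2 = x3 * y4 - y3 * x4
    px = (d1 * (x3 - x4) - (x1 - x2) * d2) / denom
    py = (d1 * (y3 - y4) - (y1 - y2) * d2) / denom
    return (px, py)
```

This is exactly the kind of function that's easy to write tests for: y = x and y = -x + 2 should meet at (1, 1), and two parallel horizontals should return None.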
Language models are good enough at generating code to make the very engineers building such models slightly more productive
How much of this is 'quality of code' vs. 'quality of data'? I would naively expect that the sort of algorithmic improvements generated from OpenAI engineers using Copilot/Codex/etc. are relatively low-impact compared to the sort of benefits you get from adding your company's codebase to the corpus (or whatever is actually the appropriate version of that). I'm somewhat pessimistic about the benefits of adding Copilot-generated code to the corpus as a method of improving Copilot.
If I'm understanding you correctly, the structure looks something like this:
We have a toy environment where human preferences are both exactly specified and consequential.
We want to learn how hard it is to discover the human preference function, and whether it is 'learned by default' in an RL agent that's operating in the world and just paying attention to consequences.
One possible way to check whether it's 'learned by default' is to compare the performance of a predictor trained just on environmental data, a predictor trained just on the RL agent's internal state, and a predictor extracted from the RL agent.
The relative performance of those predictors should give you a sense of whether the environment or the agent's internal state give you a clearer signal of the human's preferences.
It seems to me like there should be some environments where the human preference function is 'too easy' to learn on environmental data (naively, the "too many apples" case should qualify?) and cases where it's 'too hard' (like 'judge how sublime this haiku is', where the RL agent will also probably be confused), and then there's some goldilocks zone where the environmental predictor struggles to capture the nuance and the RL agent has managed to capture the nuance (and so the human preferences can be easily exported from the RL agent).
Does this frame line up with yours? If so, what are the features of the environments that you investigated that made you think they were in the goldilocks zone? (Or what other features would you look for in other environments if you had to continue this research?)
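A toy version of the comparison in the structure above (entirely hypothetical setup and names): give an environment-only predictor and an agent-state predictor the same regression task, where the agent's internal state contains a 'nuance' feature that the raw environment doesn't expose.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# toy world: preferences depend on an observable environment feature plus
# a 'nuance' term the environment doesn't expose, but which the RL agent's
# internal state has (by assumption) learned to track
env = rng.normal(size=(n, 1))
nuance = rng.normal(size=(n, 1))
preference = env[:, 0] + 2.0 * nuance[:, 0]
agent_state = np.hstack([env, nuance])  # pretend the agent encodes both

def fit_mse(X, y):
    # ordinary least squares; in-sample mean squared error of the fit
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

mse_env = fit_mse(env, preference)            # misses the nuance entirely
mse_agent = fit_mse(agent_state, preference)  # near-zero error
```

In the 'goldilocks zone' the gap between these two errors is large: the environmental predictor is stuck with the nuance as irreducible noise while the agent-state predictor captures it, so the preferences are cheaply exportable from the agent.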
IMO there's a big difference between "obviously material progress is good" and "obviously some progress is good"--it could be that after a careful consideration of the evidence, it turns out that the thing we need to do is focus on spiritual progress and all become monks (or w/e) and then progress can be measured in how rapidly that transition happens.
[Like, in one era the accumulation of slaves would have been a sign of progress, and now we view it as a sign of regress.]
There's a second point that you might be making, that it's weird to have a 'theory of progress' if your forecasts show the world getting worse, even if we do our best. (For example, suppose there was a massive volcanic eruption and so we knew volcanic winter was coming.) But I think even then it's important to figure out what ways we can improve in and make those changes, even if the background is decay instead of progress.
So I've been playing HUMANKIND over the last few days and think I have the hang of it now. It's by Amplitude Studios, who also made Endless Space, Endless Legend, Endless Space 2, and Dungeon of the Endless (which was my favorite of the four; also apparently I wrote up my thoughts on ES2).
The basic engine is the same as those games, and most similar to Endless Legend; the world is a hex-map that's broken up into pre-defined territories, each of which can only have one outpost/city. Each hex generates some resources on its own (fertile land giving you food, forests industry, etc.), but you only work the hexes immediately adjacent to the districts you build (including the city center), and districts vary in what resources they collect. [Build a farmer's quarter next to a forest and you don't collect any of the industry, but build a maker's quarter and you do.]
The core gimmick that differentiates it from Civilization / Endless Legend is that rather than picking one nation/race, you pick one culture from each age. (So no more Abraham Lincoln wearing furs / a suit at the beginning of the game, both of which were nonsense in different ways.) Instead you might be the Babylonians, and then the Carthaginians, then Khmer, then Mughals, then French, then Japanese (which was the path I took in my most recent game that I won). You end up building a history (both in continuing buffs and districts that remain on the field), and picking things that are appropriate to your setup. (In Civ, having Russians get a bonus to tundra tiles is sort of terrible because maybe the RNG will give you tundra and maybe it won't, but having one of the faith options be a tundra bonus is fine because only someone who knows they have lots of tundra will pick it. This makes everything more like that.)
The other relevant facts are: 1) the cultures seem to vary wildly in power (or at least appropriateness to any given situation), and 2) you pick from the list whenever you age up from the previous age, and 3) everyone starts as a nondescript nomadic tribe. (Which, as a neat side effect, means you do much more exploring before you place your first city, and so you have much more choice than you normally get.) So rather than starting the game as the Babylonians, you're racing to see who gets to be them. Wonders, the typical race dynamic of the Civ games, are minimized here (there aren't that many of them and they aren't that great), replaced by these cultures.
Overall, tho, I think the net effect is significantly increasing the 'rich get richer' dynamic and makes for a less satisfying game. One method of asymmetrical balance is to say "well, it's alright if the cultures are unbalanced, because then the drafting mechanics will create a meta-balance." But when the drafting mechanics are "the person in the lead picks first", you end up with a probably dominant meta-strategy (and then the best available counter-strategy which is trying hard to play catchup).
At my current skill level (who knows, maybe I'm doing the naive strategy), it looks to me like the dominant move is 1) make one mega-city and 2) stack lots of cultures who have emblematic districts that give you buffs based on population size / number of districts. You can have only one such district per territory, but you can have lots of territories in your city (limited only by your influence and the number of territories other players will 'let' you have). So when each Khmer Baray gives you +1 industry per population, and you've combined ten territories into your megalopolis with 100 population, you now get 1k industry/turn out of that, instead of the 100 you would have gotten from having ten cities each with their own Baray. And then later you get the Japanese Robotics Lab, which gives you +2 industry on each Maker's Quarter, and so that leads to a +20 bonus on each of the ten, for +200 industry (and another +200 industry from the effect of those Robotics Labs on themselves).
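Redoing the arithmetic from that example with toy numbers (game values as I remember them, so treat the specifics as approximate):

```python
barays, labs, pop = 10, 10, 100  # one emblematic district per territory

# megalopolis: each Baray gives +1 industry per point of city population
megacity_baray = barays * pop               # 10 Barays x 100 pop

# alternative: ten separate cities of 10 pop, each with its own Baray
ten_cities_baray = 10 * (pop // 10)         # 10 cities x 10 pop

# each Robotics Lab gives +2 industry on each Maker's Quarter, so ten Labs
# give +20 on each of the ten Barays...
lab_on_barays = labs * 2 * barays
# ...and the Labs (themselves Maker's Quarters) also buff each other
lab_on_labs = labs * 2 * labs
```

The point is the quadratic scaling: stacking districts and population in one city multiplies the bonuses together, while splitting them across cities only adds them up.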
[There are countervailing forces pushing against the megalopolis--each additional territory you add to a city increases the cost of the next, so actually I had one big city and then five or six small ones, but I think I hadn't realized how strong this effect was and will do something different next game.]
So far... I think I like it less than Old World, but it has interestingly different solutions to many of the same problems, and it's covering a very different time period.
I wonder if the GC-MS exists for municipal water supplies already, and can just be aggregated and compared against population obesity rates? Less precise than doing it house-by-house, but much cheaper if someone has already done it for you and also it might not vary much house-by-house.
I guess my sense is that most biological systems are going to be 'package deals' instead of 'cleanly separable' as much as possible--if you already have a system that's doing learning, and you can tweak that system in order to get something that gets you some of the benefits of a VoI framework (without actually calculating VoI), I expect biology to do that.
But in experiments, they’re not synchronized; the former happens faster than the latter.
This has the effect of incentivizing learning, right? (A system that you don't yet understand is, in total, more rewarding than an equally yummy system that you do understand.) So it reminds me of exploration in bandit algorithms, which makes sense given the connection to motivation.
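The bandit analogy can be made concrete with a UCB-style rule, where an optimism bonus makes poorly-understood arms score higher than equally-rewarding well-understood ones (a sketch of the algorithm, not a claim about the neuroscience):

```python
import math

def ucb1_choice(pulls, total_rewards, t):
    """Pick the arm maximizing mean reward plus an exploration bonus.

    pulls[a] = times arm a was tried; total_rewards[a] = summed reward;
    t = total pulls so far. Untried arms get priority, mirroring the
    'unknown systems are, in total, more rewarding' intuition above.
    """
    best_arm, best_score = None, -math.inf
    for arm, n in enumerate(pulls):
        if n == 0:
            return arm  # unbounded optimism about the unexplored
        score = total_rewards[arm] / n + math.sqrt(2 * math.log(t) / n)
        if score > best_score:
            best_arm, best_score = arm, score
    return best_arm
```

An unexplored arm always wins the tiebreak, and among explored arms the bonus shrinks as an arm becomes well understood, so the system keeps getting paid (in score) for learning.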
I am deeply confused how a serial interval can be negative. If I understand the words involved that means you spread it on to someone who gets their symptoms before you do?
If I understand that correctly, it means you're breathing out infectious levels of virus days before you cough / notice that you are sick.
(I'm confused about your confusion, because I thought the negative serial interval of COVID was one of its most striking features, and the reason why many of the old 'control system' things failed; like, people were used to "if you feel fine you are fine" and wouldn't accept "everyone needs to act as though they could be sick, because you won't know whether or not you're infectious until after the fact.")
Yeah, I think there's a (generally unspoken) line of argument that if you have a system that can revise its goals, it will continue revising its goals until it hits a reflectively stable goal, and then will stay there. This requires that reflective stability is possible, and some other things, but I think is generally the right thing to expect.
I think we're in a sort of weird part of concept-space where we're thinking both about absolutes ("all X are Y" disproved by exhibiting an X that is not Y) and distributions ("the connection between goals and intelligence is normally accidental instead of necessary"), and I think this counterexample is against a part of the paper that's trying to make a distributional claim instead of an absolute claim.
Roughly, their argument as I understand it is:
1. Large amounts of instrumental intelligence can be applied to nearly any goal.
2. Large amounts of frame-capable intelligence will take over civilization's steering wheel from humans.
3. Frame-capable intelligence won't be as bad as the randomly chosen intelligence implied by Bostrom, and so this argument for AI x-risk doesn't hold water; superintelligence risk isn't as bad as it seems.
I think I differ on the 3rd point a little (as discussed in more depth here), but roughly agree that the situation we're in probably isn't as bad as the "AIXI-tl with a random utility function implemented on a hypercomputer" world, for structural reasons that make this not a compelling counterexample.
Like, in my view, much of the work of "why be worried about the transition instead of blasé?" is done by stuff like Value is Fragile, which isn't really part of the standard argument as they're describing it here.
On this proposal, any reflection on goals, including ethics, lies outside the realm of intelligence. Some people may think that they are reflecting on goals, but they are wrong. That is why orthogonality holds for any intelligence.
I think I do believe something like this, but I would state it totally differently. Roughly, what most people think of as goals are something more like intermediate variables which are cognitive constructs designed to approximate the deeper goals (or something important in the causal history of the deeper goals). This is somewhat difficult to talk about because the true goal is not a cognitive construct, in the same way that the map is not the territory, and yet all my navigation happens in the map by necessity.
Of course, ethics and reflection on goals are about manipulating those cognitive constructs, and they happen inside of the realm of intelligence. But, like, who won WWII happened 'in the territory' instead of 'in the map', with corresponding consequences for the human study of ethics and goals.
Persuasion, in this view, is always about pointing out the flaws in someone else's cognitive constructs rather than aligning them to a different 'true goal.'
So, to argue that instrumental intelligence is sufficient for existential risk, we have to explain how an instrumental intelligence can navigate different frames.
This is where the other main line of argument comes into play:
I think 'ability to navigate frames' is distinct from 'philosophical maturity', roughly because of something like a distinction between soldier mindset and scout mindset.
You can imagine an entity that, whenever it reflects on its current political / moral / philosophical positions, uses its path-finding ability like a lawyer to make the best possible case for why it should believe what it already believes, or to discard incoming arguments whose conclusions are unpalatable. There's something like another orthogonality thesis at play here, where even if you're a wizard at maneuvering through frames, it matters whether you're playing chess or suicide chess.
This is just a thesis; it might be the case that it is impossible to be superintelligent and in soldier mindset (the 'curiosity' thesis?), but the orthogonality thesis is that it is possible, and so you could end up with value lock-in, where the very intelligent entity that is morally confused uses that intelligence to prop up the confusion rather than disperse it. Here we're using instrumental intelligence as the 'super' intelligence in both the orthogonality and existential risk consideration. (You consider something like this case later, but I think in a way that fails to visualize this possibility.)
[In humans, intelligence and rationality are only weakly correlated, in a way that I think supports this view pretty strongly.]
So, what would prevent a generally superintelligent agent from reflecting on their goals, or from developing an ethics? One might argue that intelligent agents, human or AI, are actually unable to reflect on goals. Or that intelligent agents are able to reflect on goals, but would not do so. Or that they would never revise goals upon reflection. Or that they would reflect on and revise goals but still not act on them. All of these suggestions run against the empirical fact that humans do sometimes reflect on goals, revise goals, and act accordingly.
I think this is not really empathizing with the AI system's position. Consider a human who is lost in an unfamiliar region, trying to figure out where they are based on uncertain clues from the environment. "Is that the same mountain as before? Should I move towards it or away from it?" Now give that human a map and GPS routefinder; much of the cognitive work that seemed so essential to them before will seem pointless now that they have much better instrumentation.
An AI system with a programmed-in utility function has the map and GPS. The question of "what direction should I move in?" will be obvious, because every direction has a number associated with it, and higher numbers are better. There's still uncertainty about how acting influences the future, and the AI will think long and hard about that to the extent that thinking long and hard about that increases expected utility.
The orthogonality thesis is thus much stronger than the denial of a (presumed) Kantian thesis that more intelligent beings would automatically be more ethical, or that an omniscient agent would maximise expected utility on anything, including selecting the best goals: It denies any relation between intelligence and the ability to reflect on goals.
I don't think this is true, and have two different main lines of argument / intuition pumps. I'll save the other for a later section where it fits better.
Are there several different reflectively stable moral equilibria, or only one? For example, it might be possible to have a consistent philosophically stable egoistic worldview, and also possible to have a consistent philosophically stable altruistic worldview. In this lens, the orthogonality thesis is the claim that there are at least two such stable equilibria and which equilibrium you end up in isn't related to intelligence. [Some people might be egoists because they don't realize that other people have inner lives, and increased intelligence unlocks their latent altruism, but some people might just not care about other people in a way that makes them egoists, and making them 'smarter' doesn't have to touch that.]
For example, you might imagine an American nationalist and a Chinese nationalist, both remaining nationalistic as they become more intelligent, and never switching which nation they like more, because that choice was for historical reasons instead of logical ones. If you imagine that, no, at some intelligence threshold they have to discard their nationalism, then you need to make that case in opposition to the orthogonality thesis.
For some goals, I do think it's the case that at some intelligence threshold you have to discard it, hence the 'more or less', and I think many more 'goals' are unstable, where the more you think about them, the more they dissolve and are replaced by one of the stable attractors. For example, you might imagine it's the case that you can have reflectively stable nationalists who eat meat and universalists who are vegan, but any universalists who eat meat are not reflectively stable, where either they realize their arguments for eating meat imply nationalism or their arguments against nationalism imply not eating meat. [Or maybe the middle position is reflectively stable, idk.]
In this view, the existential risk argument is less "humans will be killed by robots and that's sad" and more "our choice of superintelligence to build will decide what color the lightcone explosion is and some of those possibilities are as bad or worse than all humans dying, and differences between colors might be colossally important." [For example, some philosophers today think that uploading human brains to silicon substrates will murder them / eliminate their moral value; it seems important for the system colonizing the galaxies to get that right! Some philosophers think that factory farming is immensely bad, and getting questions like that right before you hit copy-paste billions of times seems important.]
So, intelligent agents can have a wide variety of goals, and any goal is as good as any other.
The second half of this doesn't seem right to me, or at least is a little unclear. [Things like instrumental convergence could be a value-agnostic way of sorting goals, and Bostrom's 'more or less' qualifier is actually doing some useful work to rule out pathological goals.]
Overall, I think your abstract and framing is pretty careful to narrow your attention to "is this argument logically sound?" instead of "should we be worried about AI?", but still this bit jumps out to me:
the argument for the existential risk of AI turns out invalid.
Maybe insert "standard" in front of "argument" again?
Oxygen: Not much to say here. Your body needs oxygen. This doesn't stop while we sleep. If possible open a window.
This is a pet peeve of mine, but: you're not running out of oxygen as input. Instead, exhaust products are building up in the room, of which the most well-known is carbon dioxide. (Outside air contains about 500x as much O2 as CO2, and in typical stuffy rooms the ratio is down to about 100x.) For some reason, we seem to be very sensitive to those exhaust products (tho it also seems like this might be a dimension that people vary on significantly).
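Back-of-the-envelope version of those ratios, with my assumed numbers for outdoor and stuffy-room CO2:

```python
# rough numbers (assumptions): indoor O2 barely moves, while CO2 builds up
O2_FRACTION = 0.209      # O2 is ~20.9% of air, indoors or out
CO2_OUTDOORS_PPM = 420   # typical outdoor air today
CO2_STUFFY_PPM = 2000    # a plausibly 'stuffy' room

outdoor_ratio = O2_FRACTION / (CO2_OUTDOORS_PPM * 1e-6)  # ~500x
stuffy_ratio = O2_FRACTION / (CO2_STUFFY_PPM * 1e-6)     # ~100x
```

Note that even in the stuffy room, the O2 level itself has barely changed; almost all of the movement in the ratio comes from the CO2 term.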
I think I basically disagree with this, or think that it insufficiently steelmans the other groups.
For example, the homeless vs. the landlords; when I put on my systems thinking hat, it sure looks to me like there's a cartel, wherein a group that produces a scarce commodity is colluding to keep that commodity scarce to keep the price high. The facts on the ground are more complicated--property owners are a different group from landlords, and homelessness is caused by more factors than just housing prices--but the basic analysis that there are different classes, those classes have different interests, and those classes are fighting over government regulation as a tool in their conflict seems basically right to me. Like, it's really not a secret that many voters are motivated by keeping property values high, politicians know this is a factor that they will be judged on.
Maybe you're trying to condemn a narrow mistake here, where someone being an 'enemy' implies that they are a 'villain', which I agree is a mistake. But it sounds like you're making a more generic point: that when people have political disagreements with the rationalists, it's normally because they're thinking in terms of enemy action instead of thinking in systems. But a lot of what thinking in systems reveals is the way in which enemies act using systemic forces!
Interestingly, I think this is pretty obviously stated in The Wealth of Nations; Chapter 1 identifies division of labor as the cause of capital accumulation, Chapter 2 identifies trade as the cause of division of labor, and Chapter 3 identifies the size of the market as a limiter on specialization.
It's... actually sort of surprising that I now have two examples of economic concepts which are really better explained by Adam Smith than they are by modern textbooks (the other is supply and demand), and this makes me even more glad that I read The Wealth of Nations in high school before I had come across any modern textbooks.
While it seems to me like you're trying to protect an important pole of coherency and consistency here, I think this comment as well as some features of the OP (to a lesser extent) overstep some important bounds and make it quite tricky to have a productive conversation, in a way that I would like to both discourage and advise against. I worry that you're imputing positions stronger than people are holding, and thus creating more disagreement than exists, and raising the emotional stakes of that disagreement more than seems necessary to continue the conversation.
I would rather not perpetuate an escalatory dynamic where you think you need to make a bigger and bigger fuss in order to get responses, in a way that can be reminiscent of 'trapped priors'; it seems to me like the conversation in this thread could have been basically as effective at challenging So8res's position and provoking elaboration with much less strain on your part, and yet when I imagine being in your shoes this encounter probably feels like an example of the success of this approach.
Concretely, in this case, I think you're exasperated about humor and shitposting in a way that isn't justified and is failing to credit the ways in which people are responding to your bids for increased seriousness and abstraction. The standard you seem to be imposing is not "please respond to seriousness with seriousness" but the much stronger "please never joke in public about something I take very seriously", which seems like a pretty drastic standard, and one I would mildly warn against trying to enforce on LW.
(On the object level, I agree with Ben Pace that you are right that the about-face on this example deserves explanation, but my sense is that the explanation is satisfactory; the take that I'd summarize as 'there's a paraphyletic grouping that pretends to obviousness that it does not possess on closer examination' seems sensible enough, tho I am interested in disagreements you have with that take.)
I applaud your earlier decision to have a friend review a draft before posting it, since I think this is the sort of behavior that leads to more intellectual progress and less mutual misunderstanding. In that spirit, I'd be happy to review any further comments you want to make in this conversation, in the hopes of having it go a bit better.
So, I'm not a biologist. I don't think Eliezer is much of a biologist either. A thing that I learned in the last ten years, which maybe Nate and Eliezer learned in the same time, idk, is that different aquatic animals are more distantly related than one might have thought. For example, let's take the list from 2008. When I go on Wikipedia and try to find an appropriate scientific name for each and stick it into timetree.org to try to figure out when their most recent common ancestor was, I get the following estimates:
Salmon and Guppies: 206 MYA
Trout and Guppies: 206 MYA
Dolphins and Guppies: 435 MYA
Sharks and Guppies: 473 MYA
Jellyfish and Guppies: 824 MYA
Algae and Guppies: 1496 MYA
That is, if you're going to start removing things from the list because of how distantly related they are, sharks go first; Chondrichthyes is just as weird a member of Chordata as Mammalia is, from the perspective of Actinopterygii.
The trouble with defending the 2008 classification is not that it's phylogenetics, it's that, as far as I can tell, it's bad phylogenetics. And so you end up requiring mental gymnastics in order to exclude dolphins because their most recent common ancestor is too far back while including sharks whose most recent common ancestor is even further back. The pedant's position ("I know that dolphins are mammals instead of fish!") doesn't hold up under either the useful definition ("dolphins are aquatic animals tho") or the phylogeneticist's definition ("mammals are Chordata tho, which is what you should mean when you say 'fish'.").
I can't speak for So8res, but I'm bothered by something like... privileging one particular frame for reasons of fashion or class rather than efficiency? The sort of thing where you leave hazards around as tests, so that people can see who stumbles on them and who gracefully avoids them. I'm not opposed to tests in general, I just wish they'd be more efficient.
Like, the old meaning of fish was "fully aquatic animal", which seems like the right sort of definition for a four-letter word (remember, words are supposed to encode information cheaply), and saying "actually we've reserved that four-letter word for this tiny slice of its former domain" seems like a weird choice (comparable to the 'true bug' definition).
The deal offered benefits not only to England, France, and the Allies, but also to Japan and Germany that they couldn’t have even hoped to achieve had they won the war. 6
6 Apparently Germany and Japan would have found it to be unbelievable. “The primary reason Germany and Japan had launched World War II in the first place was to gain greater access to resources and markets. Germany wanted the agricultural output of Poland, the capital of the Low Countries, the coal of Central Europe, and the markets of France. Japan coveted the manpower and markets of China and the resources of Southeast Asia. Now that they had been thoroughly defeated, the Americans were offering them economic access far beyond their wildest prewar longings: risk-free access to ample resources and bottomless markets a half a world away. And “all” it would cost them was accepting a security guarantee that was better than anything they could ever have achieved by themselves.”
It seems to me there are positional status questions--is China just a participant in America's world, or is it the Middle Kingdom?--but I think it's hard to see a situation where China is better off annexing countries to be recalcitrant provinces rather than just trading with them while they're American allies and protectorates. (Like, it's really not obvious that China is better off with a conquered Korea than it is with a neighboring Korea.)
I think it's pretty easy to separate things I've recommended to people as "better spoiled" or "better unspoiled"; so long as my threshold / reason for thinking this is sufficiently similar to abramdemski's, then I should be able to freely spoil for him the art that I think can be spoiled with only minor costs (compared to freely spoiling all art).
Then I pointed to somebody whose work, also deriving from Heidegger, integrates aspects of all of these together in kind of a profound way. Tillich is deeply influenced by and aware of what he calls 'depth psychology', the kind of psychology in Jung; he is, of course, deeply aware of Heidegger. I don't think that Tillich was aware of Corbin, but he is deeply aware of the symbol in an imaginal instead of a merely imaginary way.
Tillich takes the meaning crisis seriously; he writes perhaps his most well-known book (and I think it's a masterpiece), The Courage to Be, as a response to the meaning crisis. Like Jung and Corbin, and for very related reasons, he's deeply critical of literalism and fundamentalism throughout, but he takes it deeper. As I mentioned, he really deepens it in terms of Heidegger's critique of ontotheology, and he becomes critical of literalism and fundamentalism as forms of idolatry in which we are attempting to have rather than become.
So there are some excellent books on the relationship between Jung and Tillich, a series of ongoing work by John Dourley. I recommend two books to you: The Psyche as Sacrament, which I tweeted about in my book recommendations, and his later book, Paul Tillich, Carl Jung, and the Recovery of Religion. But make no mistake, Dourley is not talking about a recovery in a nostalgic sense; he writes another book called A Strategy for a Loss of Faith where he is trying to get beyond classical theism. So I recommend Dourley's work as a comprehensive way of bringing about a deep dialog and a kind of integration between Jung and Tillich.
Tillich sees courage as the main response to the meaning crisis, and here Tillich is not just theorizing: he is trying to give us guidance on how to live. Let's remember that this really matters; recall the way Tillich resisted the Nazis. What Tillich talks about in The Courage to Be is *courage*. Now, he's careful to note that this is a kind of existential courage that ultimately allows us to confront and overcome meaninglessness in its depth, but also to more practically respond to perverted responses to the meaning crisis itself, like Nazism and its gnostic nightmare.
This process of encouragement--now, he is like Aristotle, he's not talking about something as simple as just bravery (facing danger) or fortitude (the ability to endure); no, for Tillich courage is a virtue. There's something of wisdom in courage. Courage involves within it that central feature of wisdom, which is seeing through illusion into reality. The brave person faces danger, but that's all we can say about them. The person with fortitude endures difficulty, but that's all we can say about them. The courageous person sees through the illusion and the distortion of fear or distress to what is truly good and acts accordingly.
So last time we looked in depth at Corbin and Jung and tried to draw very deeply the notion of the relationship to the sacred second self. I launched into a sort of mutual criticism between Corbin and Jung and brought in some Buber along the way.
The summary at the beginning of the next episode pretty quickly shifts to new material, so here's the key quote according to me:
Freud has a Newtonian machine hydraulic model of the psyche. Jung ultimately rejects that; Jung replaces the hydraulic metaphor with an organic metaphor. He sees the psyche as a self-organizing dynamical system, ultimately as an autopoietic being, so he sees the psyche as going through a complex process of self-organization, and that you have to understand individuation as this kind of organic self-organizing process that you neither make nor receive but you participate in.
So last time we followed Heidegger into the depths, where we encountered Eckhart and this non-teleological relationship to the play of being. That led us very directly into Corbin, and Corbin's core argument that gnosis (as we've been using it), the ability to engage in this serious play, relates centrally to the imagination.
But Corbin is making use of this term in a new way; he makes a distinction between the imaginary (which is how we typically use the word "imagination": mental images in my head that are only subjective and have no objective reality) and the imaginal (which mediates between the abstract intelligible world and the concrete sensible world, and transjects between the subjective and the objective). This mediation is not done statically but through mutual affordance and an ongoing transformative transframing, and the symbol captures all of this.
Then I wanted to bring out Corbin's core symbol, and it's a core symbol that relates directly to gnosis, because gnosis (transformative participatory knowing) goes to the heart of Heidegger's notion of Dasein, the being whose being is in question. We have to see self-knowledge and knowledge of the world as inextricably bound up together, and in order to do that we are pursuing Corbin's central symbol: the angel.
Which, of course, is immediately off-putting to many people including myself. But I've been trying to get a way of articulating how Corbin is incorporating both Heidegger and Persian Sufism, Neoplatonic Sufism into this understanding of the symbol, and I recommend that we take a look at the historical work showing how throughout the ancient Mediterranean world, and up and through the Hellenistic period and beyond, up until about the 5th century of the common era, there's the pursuit of the divine double. The idea is one that is deeply transgressive of our cultural cognitive grammar of decadent Romanticism, where we are born with our true self that merely needs to express itself (a la Rousseau), and that the core virtue is authenticity, which is being true to the true self that you have, that you possess. Rather than, for example, a Socratic model in which the true self is something towards which you are constantly aspiring.
The transgressive mythology is that the self that I have now is not my true self. My true self is my divine double; this is something that is superlative to me, it is bound to me, it is my double. It is bound to me but it is superlative to me; it is both me and not me. It's me as I'm meant to be, as I should be, and the existential project is not one of expressing a self that you have but of transcending to become a self that is ecstatically ahead of you in an important way.
Then I pointed out that for many of you this would still be "okay, I get the transgression, but I still find this notion of a divine double unpalatable." Maybe for some of you you don't, but nevertheless I think there's something important to asking the question "why did so many people for so long believe in this aspirational process?". This takes us back into work that was core to the discussion I made about gnosis, and it has a resounding impact at various places throughout this series: L.A. Paul's work on transformative experience, and then somebody who's from the same school, influenced by Paul while having a different view than Paul (her transformations are more like insight). Agnes Callard's notion of aspiration is much more developmental, but I argue they can (I think) be readily reconciled if you see development as a linked sequence of insights that brings about qualitative change in your competence.
So last time we were trying to understand Heidegger's work as a prophet (in the Old Testament sense) of the meaning crisis. We took a look at this notion of "the thing beyond itself" and "realness" as simultaneously the shining into our framing and the withdrawing beyond our framing in a deeply interpenetrating manner. We took a look at this deeper notion of truth--not truth as correctness, but truth as aletheia, that which grounds the agent-arena relationship in attunement and allows us the potential to remember being by getting into an attunement with its simultaneous disclosure and withdrawal.
But we can forget that; we can get into a profound kind of modal confusion and this is the history of metaphysics as the emergence of nihilism. We can forget the being mode, we can get trapped into the having mode in which the metaphysics is a propositional project of trying to just use truth as correctness, and we misunderstand being as a particular being. We try to capture the unlimitedness aspect of being, but we only do it at the limit (which Heidegger is deeply critical of). So we understand being in terms of a Supreme Being, a being at the limit, and beyond the limit. This is ontotheology; we understand God as the Supreme Being and this is deeply enmeshed (for Heidegger) with nihilism, because this ontotheology, this version of theology from classical traditional theism, this way of understanding being gets us into the deep forgetfulness and modal confusion that is the hallmark of nihilism.
Of course, we could perhaps remember the being mode, and this is what Corbin (following Heidegger) talks about as gnosis.
Episode 46: Conclusion and the Prophets of the Meaning Crisis
Last time I finished the discussion of wisdom and connected it to enlightenment and argued for the wise cultivation of enlightenment as our deepest kind of existential response to the meaning crisis, a way in which we can awaken from the meaning crisis. I then wanted to put that scientific model of spirituality (for lack of a better phrase) into discourse with some of the central prophets of the meaning crisis. I'm using the word prophet as it's used in the Old Testament; I'm talking about individuals who were crucial for articulating the advent of the meaning crisis and helping to propose or promise a response to it. I put a diagram on the board in which Heidegger played a central role; there are many connections in there that I'll point out but will not be able to fully address, because the people are there insofar as they help us articulate the response, not to be examined for their own sake.
I mentioned the work of Nishida and Nishitani in the Kyoto School; I will talk briefly about Nishitani here but I won't be able to go into that in depth. I do intend to pursue this later in another series I'm putting together (I'm putting together a couple of series to follow this one) and I would like to do a series that will include work on the Kyoto School that I've entitled The God Beyond God, in which we look at all of these great non-theistic thinkers within both Eastern and Western traditions, and things like the Kyoto School that tried to bridge between them. So I will have to neglect (to some degree) the Kyoto School in this series but I promise to follow it up more deeply in another series.
The first 45 lectures have been, to some extent, "how did we get here, and where is here anyway?", and these remaining five lectures are something like "what do other people think about being here?" This episode mostly touches on Husserl (who doesn't really make it into the summary at the beginning of the next episode).
I think Singapore is very high on my "city to do finance in" list and not very high on my "naturey place to do thinking in" list, and as pointed out the LGBTQ acceptance is probably low enough to dissuade some people from going there.
So last time I tried to draw together all the other theories (I don't just mean the psychological theories, although they're the most salient right now, but also the philosophical theories) into an account of wisdom. I presented a model to you, a theory of wisdom developed by myself and Leo Ferraro in 2013, in which we are enhancing inferential processing through active open-mindedness, enhancing insightful processing through mindfulness, enhancing the capacity for internalization by internalizing the sage, and cultivating sophrosyne by our salience landscape naturally organizing away from self-deception and towards the truth (or at least towards what's true, good, and perhaps beautiful; that's perhaps a better way of putting it).
That coordinates the propositional knowing associated with inference, the procedural knowing associated with insight, the perspectival knowing associated with internalization together. That is directed towards realizing sophrosyne and that can help cultivate a more moral existence, the connection to virtue, mastery (in the sense of coping and caring), and meaning in life.
Of course, one of the criticisms I made was that the notion of meaning in life there was too simplistic, and it needs to be integrated with a much more developed account that's already in the literature; I'm contributing to that by work I'm doing with others on meaning in life. I pointed out that the Vervaeke-Ferraro model is missing participatory knowing, and it's missing (or at least I think it misrepresents/misaligns) the relationship between the kinds of knowing. Understanding is missing, transformative experiences are missing, aspiration is missing, gnosis is missing, so all of these things need to be deeply integrated together.
I tried to suggest the beginnings of an account of how we turn basic understanding, which is to grasp the relevance of our knowledge, into profound understanding by integrating the account of understanding with the account of possibilities, so that profound understanding is the generation of plausibility by having convergence onto a contextually sensitive optimal grip that is transformatively transferrable in a highly effective manner in many different problem finding, formulating, and solving in many different domains.
I also brought out the idea that in addition to inspiration (this is a term I'm giving for more sudden insight-laden transformative experience), you can have what Callard calls aspiration, which is more incremental; it still can't be solved in an inferential decision-theoretic fashion (she agrees with Paul on that). She does argue though (and I agree with this argument) that aspiration must be considered a form of rationality, which she calls 'proleptic rationality', because otherwise you're going to get into a performative contradiction: if my aspiration for rationality and my love of wisdom are not themselves rational processes, I'm kind of in trouble in my model of rationality.
Given all of that philosophy, what's missing (as I argued) is an extensive psychology of aspiration. I know one of my colleagues Juensung Kim is working on exactly that problem, and he's of course doing it in connection with a psychology of wisdom. I did suggest to you that we could see one of Callard's ideas of how we do this: we create something that's double-faced (I argued, ultimately symbolic, having aspects of gnosis in it) that allows us to make the leap, even if it's an incremental one, from who we are now and what we value now to the place where I've acquired some new thing that I value for its own sake. We used the example of music appreciation.
I think the 'summary' portion of the next lecture goes out to about 8 minutes, but I'm cutting it off at about 5, in part because there's a lot of tying together / elaborating / concluding to it.
Last time we finished up looking at Baltes and Staudinger and made some criticisms that led into important criticisms made by Monika Ardelt. Then we looked into Ardelt's theory and the way it brought in an important distinction about not just having a good theory of wisdom, but the process of becoming a wise person, and then the emphasis on 'what are the features of a wise person?' as opposed to 'what are some of the central claims made by a theory of wisdom?'. Then we talked about how Monika insightfully brings together the cognitive, the reflective, and the affective.
I pointed out how we've got relevance realization grasping the significance (at least in the cognitive dimension directly, because of the invocation of Kekes). I would also point out that I think that's at least implicit in the reflective machinery, and there's deep potential connection there with both perspectival knowing and the cultivation of rationality (at least perspectival rationality), and the affective ties to agape (which I've already argued has very important connections to relevance realization). That affords Ardelt's theory a powerful way of connecting wisdom to meaning in life as something different from connecting wisdom to virtue, and that's a very important thing to do.
We still noted some criticisms, largely that it's still a product theory: it doesn't have an independent account of foolishness or a processing theory of how one becomes wise. In that sense it's not picking up as well as it could on the philosophical heritage given to us by people like Socrates and Plato and Aristotle and Marcus Aurelius and so on.
We then took a look at the theory of Sternberg, just an extremely pivotal figure in the cognitive science of wisdom. I pointed out that his ideas about adapting, shaping, and selecting are clearly ideas about relevance realization; he invokes implicit processing and tacit knowledge in order to bring in understanding, that intuitive grasping of the significance of information (I think that's what he's implying). We talked about how he involves a balancing of interests: the intrapersonal (how you're connected to yourself), the interpersonal (how you're connected to other people), and the extrapersonal (how you're connected to the world). So there are (at least implicitly) important connections to meaning in life in the way we've been talking about throughout this course. He invokes balance throughout, and I tried to make a good case that you should see that as optimization, and therefore directly relevant to the accounts of optimization of processing that we discussed in connection with relevance realization.
There were some issues I had with Sternberg. The idea that in all wise people this machinery is directed towards the common good strikes me as anachronistic; I think a less contentious claim would be that it's directed towards virtue and meaning in life for oneself and others in some unspecified way. There was also the invocation of values as affecting or constraining the whole process; again it was unclear to me what this is. There's an ambiguity here: it could be the relatively trivial claim that the wise person is being regulated by normativity--you know, by considerations of what's true and good and beautiful--and that would be definitional (and therefore relatively trivial) because wisdom is a normative term. Or it could be that specific values are being invoked here, but if that's the case they should be specifically stated and justified (why were those ones chosen?), with an explicit explanation of how those specific values make an impact on specific aspects of the machinery. So that's all sort of missing and needs to be addressed. It's ultimately a product theory, not a process theory. Sternberg does have a theory of foolishness, but it's not independently generated and it doesn't really pick up on the centrality of seeing through illusion and into reality.
I'm not sure which claim this is, but I think in general the ability to game IQ tests is what they're trying to test. [Obviously tests that cover more subskills will be more robust than tests that cover fewer subskills, performance on test day can be impacted by various negative factors that some people are more able to avoid than others, etc., but I don't think this is that relevant for population-level comparisons.]
The retest correlation of IQ is only like 0.8 after 20 years!
So, note that there are roughly three stages: childhood, early adulthood, and late adulthood. We know of lots of interventions that increase childhood IQ, and also of the 'fadeout' effect whereby the effects of those interventions are short-lived. I don't think there are that many that reliably affect adult IQ, and what we're interested in is the retest correlation of IQ among adults.
In adulthood, things definitely change: generally for the worse. People make a big distinction between 'fluid intelligence' and 'crystallized intelligence', where fluid intelligence declines with age and crystallized intelligence increases (older people learn more slowly but know more facts and have more skills). What would be interesting (to me, at least) are increases (or slower decreases) on non-age-adjusted IQ scores. Variability on 20-year retest correlation could pretty easily be caused by aging more or less slowly than one's cohort.
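As a toy illustration of that last point, here's a small simulation (every parameter is assumed for illustration, not an empirical estimate) in which everyone's underlying trait is perfectly stable, yet test-day noise plus modest individual variation in drift relative to the cohort pulls the 20-year retest correlation down into the neighborhood of 0.8:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated test-takers

# Hypothetical components (all SDs are made up for illustration):
stable = rng.normal(100, 15, n)   # perfectly stable trait component
drift = rng.normal(0, 0.3, n)     # points/year gained or lost vs. one's cohort
noise_sd = 5                      # test-day measurement noise

score_t0 = stable + rng.normal(0, noise_sd, n)
score_t20 = stable + 20 * drift + rng.normal(0, noise_sd, n)

r = np.corrcoef(score_t0, score_t20)[0, 1]
print(f"20-year retest correlation: {r:.2f}")
```

Under these made-up numbers the correlation comes out in the low 0.8s, so a 0.8 retest correlation is entirely compatible with a highly stable trait combined with people aging slightly faster or slower than their cohort.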
That's almost certainly much less than your retest correlation for basketball skills
Hard to say, actually; I think the instantaneous retest correlation is higher for IQ tests than it is for basketball skill tests (according to a quick glance at some studies), and I haven't yet found tests applied before and after an intervention (like a semester on a basketball team or w/e). We could get a better sense of this by looking at Elo scores over time for chess players, perhaps? [Chess is widely seen as trainable, and yet also has major 'inborn' variation that should show up in the statistics over time.]
We've even seen massive changes in population-wide IQ studies that correlate heavily with educational interventions in the form of the Flynn effect.
Lynn is pretty sure it's not just education, as children before they enter school show the same sorts of improvements. This could, of course, still have education as an indirect cause, where (previous) education is intervening on the parents, and I personally would be surprised if education had no impact here, but I think it's probably quite small (on fluid intelligence, at least).