Posts

Is January AI a real thing? 2021-03-20T00:10:17.253Z

Comments

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T21:16:58.809Z · LW · GW

Does agency matter? There are 21 x 21 x 4 possible payoff matrices for a 2x2 game if we use Ordinal payoffs. For the vast majority of them (all but about 7 x 7 x 4 of them), one or both players can make a decision without knowing or caring what the other player's payoffs are, and get the best possible result. Of the remaining 182 arrangements, 55 have exactly one box where both players get their #1 payoff (and, therefore, will easily select that as the equilibrium).

All the interesting choices happen in the other 128ish arrangements, 6/7 of which have the pattern of the preferred (1st and 1st, or 1st and 2nd) options being on a diagonal. The most interesting one (for the player picking the row, and getting the first payoff) is:

1 / (2, 3, or 4) ; 4 / (any)

2 / (any) ; 3 / (any)

The optimal strategy for any interesting layout will be a mixed strategy, with the % split dependent on the relative Cardinal payoffs (which are generally not calculable, since they include Reputation and other non-quantifiable effects).

Therefore, you would want to weight the quality of any particular result by the chance of that result being achieved (which also works for the degenerate cases where one box gets 100% of the results, or two perfectly equivalent boxes share that).
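A minimal sketch of that weighting (Python; the payoff numbers and the 70/30, 40/60 strategy splits are invented purely for illustration):

```python
# Toy sketch: weight each box's payoffs by the chance of landing there under
# some mixed strategies. Payoffs and strategy splits are made-up numbers.
payoffs = {  # (row_choice, col_choice): (row_payoff, col_payoff)
    ("A", "A"): (4, 3),
    ("A", "B"): (1, 4),
    ("B", "A"): (3, 1),
    ("B", "B"): (2, 2),
}
p_row = {"A": 0.7, "B": 0.3}  # row player's mixed strategy
p_col = {"A": 0.4, "B": 0.6}  # column player's mixed strategy

expected = [
    sum(p_row[r] * p_col[c] * payoffs[(r, c)][i] for (r, c) in payoffs)
    for i in (0, 1)  # 0 = row player, 1 = column player
]
print(expected)  # probability-weighted payoff for each player
```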

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T20:54:43.781Z · LW · GW

So, given this payoff matrix (where P1 picks a row and gets the first payout, P2 picks column and gets 2nd payout):

5 / 0 ; 5 / 100

0 / 100 ; 0 / 1

Would you say P1's action furthers the interest of player 2?

Would P2's action further the interest of player 1?

Where would you rank this game on the 0 - 1 scale?

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T17:54:13.982Z · LW · GW

Correlation between outcomes, not within them. If both players prefer to be in the same box, they are aligned. As we add indifference and opposing choices, they become unaligned. In your example, both people have the exact same ordering of outcomes. In a classic PD, there is some mix. Totally unaligned (constant-sum) example: 0/2 2/0 2/0 0/2
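One toy way to cash that out: correlate the two players' payoffs across the four boxes. The constant-sum game below is the one from this comment; the common-payoff game is a made-up contrast case:

```python
# Toy sketch: "alignment" as the correlation between the two players' payoffs
# across the four boxes.
from statistics import correlation  # Python 3.10+

constant_sum = [(0, 2), (2, 0), (2, 0), (0, 2)]   # totally unaligned
common_payoff = [(3, 3), (1, 1), (0, 0), (2, 2)]  # both players rank the boxes identically

for game in (constant_sum, common_payoff):
    row, col = zip(*game)
    print(correlation(row, col))  # -1.0 for the first game, 1.0 for the second
```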

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T17:21:57.555Z · LW · GW

Tabooing "aligned" what property are you trying to map on a scale of "constant sum" to "common payoff"?

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T17:16:53.544Z · LW · GW

Um... the definition of the normal form game you cited explicitly says that the payoffs are in the form of cardinal or ordinal utilities, which is distinct from in-game payouts.

Also, too, it sounds like you agree that the strategy your counterparty uses can make a normal form game not count as a "stag hunt" or "prisoner's dilemma" or "dating game"

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T15:55:07.729Z · LW · GW

It's a definitional thing. The definition of utility is "the thing people maximize." If you set up your 2x2 game to have utilities in the payout matrix, then by definition both actors will attempt to pick the box with the biggest number. If you set up your 2x2 game with direct payouts from the game that don't include psychic (eg "I just like picking the first option given") or reputational effects, then any concept of alignment is one of:

  1. assume the players are trying for the biggest number, how much will they be attempting to land on the same box?
  2. alignment is completely outside of the game, and is one of the features of the function that converts game payouts to global utility

You seem to be muddling those two, and wondering "how much will people attempt to land on the same box, taking into account all factors, but only defining the boxes in terms of game payouts." The answer there is "you can't." Because people (and computer programs) have wonky screwed up utility functions (eg (spoiler alert) https://en.wikipedia.org/wiki/Man_of_the_Year_(2006_film))

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T15:31:38.719Z · LW · GW

Quote: Or maybe we're playing a game in which the stag hunt matrix describes some sort of payouts that are not exactly utilities. E.g., we're in a psychology experiment and the experimenter has shown us a 2x2 table telling us how many dollars we will get in various cases -- but maybe I'm a billionaire and literally don't care whether I get $1 or $10 and figure I might as well try to maximize your payout, or maybe you're a perfect altruist and (in the absence of any knowledge about our financial situations) you just want to maximize the total take, or maybe I'm actually evil and want you to do as badly as possible.

 

So, if the other player is "always cooperate" or "always defect" or any other method of determining results that doesn't correspond to the payouts in the matrix shown to you, then you aren't playing "prisoner's dilemma" because the utilities to player B are not dependent on what you do. In all these games, you should pick your strategy based on how you expect your counterparty to act, which might or might not include the "in game" incentives as influencers of their behavior.

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T12:51:41.055Z · LW · GW

Quote: The function should probably be a function of player A's alignment with player B; for example, player A might always cooperate and player B might always defect. Then it seems reasonable to consider whether A is aligned with B (in some sense), while B is not aligned with A (they pursue their own payoff without regard for A's payoff).

That seems to be confused reasoning. "Cooperate" and "defect" are labels we sometimes apply to a 2x2 matrix, and applying those labels changes the payouts. If I get $1 or $5 for picking "A" and $0 or $3 for picking "B" depending on a coin flip, that leads me to a different choice than if A is labeled "defect", B is labeled "cooperate", and the payout depends on another person. In the latter case I get psychic/reputational rewards for cooperating or defecting (which one is better depends on my peer group), but whichever is better, the story equity is worth much more than $5, so my choice is dominated by that, and the actual payout matrix is: pick S: 1000 util or 1001 util; pick T: 2 util or 2 util.

None of which negates the original question of mapping the 8! possible arrangements of relative payouts in a 2x2 matrix game to some sort of linear scale.

Comment by Ericf on Shall we count the living or the dead? · 2021-06-14T18:21:26.695Z · LW · GW

Asking someone to watch a video is rude and filters your audience to "people with enough time to consume content slowly, and an environment that allows audio/streaming"

Comment by Ericf on Experiments with a random clock · 2021-06-14T00:30:52.125Z · LW · GW

Since this comment thread is apparently "share what you do to be on time" here's mine.

I consider it a test of estimation skills to arrive places exactly on time, so I get a little dopamine hit by arriving at the predicted moment. And I can set that target time according to the risk and importance of the event: I aimed 5 minutes early for swim lessons yesterday because I wasn't sure if the drive was 7 or 11 minutes long and being late is bad; I aim 30 minutes early to catch a plane, since being even 1 minute late is extremely costly; but when going to visit a single counterparty (grandma, a friend) I aim at the suggested time.

Comment by Ericf on Survey on AI existential risk scenarios · 2021-06-11T15:32:11.458Z · LW · GW

But the action needed to avoid/mitigate in those cases is very different, so it doesn't seem useful to get a feeling for "how far off of ideal are we likely to be" when that is composed of:
1. What is the possible range of AI functionality (as constrained by physics)? - ie what can we do?

2. What is the range of desirable outcomes within that range? - ie what should we do?

3. How will politics, incumbent interests, etc. play out? - ie what will we actually do?

Knowing that experts think we have a (say) 10% chance of hitting the ideal window says nothing about what an interested party should do to improve those chances. It could be "attempt to shut down all AI research" or "put more funding into AI research" or "it doesn't matter because the two majority cases are "General AI is impossible - 40%" and "General AI is inevitable and will wreck us - 50%""

Comment by Ericf on Bad names make you open the box · 2021-06-11T00:44:36.741Z · LW · GW

Saying "poor naming" instead of "bad names" would be clearer, since it wouldn't call up the idea of "bad names" = swear words.

Saying "look in" instead of "open" would also distance it from the AI concept.

Comment by Ericf on Bad names make you open the box · 2021-06-10T16:30:14.687Z · LW · GW

See comment below about Intentionality.

English is not Newspeak: there are multiple words for the same basic concept that convey shades of meaning and emotion, and allow for poetic usage that sometimes becomes mainstream.

Comment by Ericf on What are the gears of gluten sensitivity? · 2021-06-09T19:52:11.647Z · LW · GW

The normal sourdough recipe is to take some of the starter, mix it with more flour and water, and let it rise/ferment for only 1-2 hours before baking.

Comment by Ericf on Bad names make you open the box · 2021-06-09T15:33:32.386Z · LW · GW

Return has more intentionality than Regress.

I Return a purchase, Return to the scene of a crime, or Return to the left side of the page by pressing Enter. Students' learning Regresses over the summer, people Regress to a bestial state when hungry, an organized closet Regresses into chaos.

Comment by Ericf on Bad names make you open the box · 2021-06-09T15:25:31.897Z · LW · GW

I can see how the choice is architecture dependent. If you can write something like:

Display(promotedPosts())
Display(recentPosts())

having the function be written without a verb makes sense. If you have a multi-tier architecture where you want to cache things locally, the code might have to be:

PostList = getPromotedPosts()
Append(PostList, getRecentPosts())
ShowOnScreen(PostList)

I would say the distinction is that if a function takes a long time to go look at a database and do some post-processing, we don't want to run around using it like a variable. Especially if the database might change between one use of the data and the next, but we want to keep the results the same. That way, the code can be:

PromotedPosts = getPromotedPosts()
Display(PromotedPosts)
...user clicks a button...
Email(PromotedPosts) // this sends the displayed posts, not whatever the promoted ones happen to be at that moment
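A runnable toy version of that last point (all names made up):

```python
# If the underlying data can change, calling the getter twice is not the same
# as caching its result once.
promoted_ids = ["a", "b"]        # stand-in for database state

def get_promoted_posts():
    return list(promoted_ids)    # fresh "query" on every call

snapshot = get_promoted_posts()  # cache what the user was actually shown
promoted_ids.append("c")         # database changes in the meantime

print(get_promoted_posts())      # ['a', 'b', 'c'] - whatever is promoted *now*
print(snapshot)                  # ['a', 'b']      - what was displayed (and should be emailed)
```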

Comment by Ericf on Bad names make you open the box · 2021-06-09T13:18:37.918Z · LW · GW

Heh, this is why well-written automated tests are so great. If a test for "are the first 5 posts marked as promoted" existed, there would be an obvious failure when the old wrong code came back into use. Of course it would also throw failures while the Farah post function was active, but that could be bypassed by a date-limited switch (ie, update the test case to say: IF now() < EXCEPTION_END_DATE then return(pass), ELSE run the test). That way, when the system should stop doing the Farah thing, there will be an automatic defect thrown against whatever code is actually being run, and it can be corrected.
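Rough sketch of what that switch could look like in pytest terms (EXCEPTION_END_DATE, Post, and get_first_posts are hypothetical stand-ins, not the real codebase):

```python
# Hypothetical pytest sketch of the date-limited test exception described above.
import dataclasses
import datetime

import pytest

EXCEPTION_END_DATE = datetime.date(2021, 6, 30)  # while the Farah post override runs

@dataclasses.dataclass
class Post:
    title: str
    promoted: bool

def get_first_posts(n):
    # stand-in for the real front-page query under test
    return [Post(f"post {i}", promoted=True) for i in range(n)]

def test_first_five_posts_are_promoted():
    if datetime.date.today() < EXCEPTION_END_DATE:
        pytest.skip("front page intentionally overridden until the exception end date")
    assert all(post.promoted for post in get_first_posts(5))
```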

Comment by Ericf on Bad names make you open the box · 2021-06-09T13:07:48.372Z · LW · GW

Huh? Aren't some functions puts? Or calculates?

Comment by Ericf on The reverse Goodhart problem · 2021-06-09T12:34:04.097Z · LW · GW

That test / class example isn't even a case, because the test is instrumental to the goal; it's not a metric. Your U in this case is "time spent studying", which you accurately see will be uncorrelated with "graduating" if all students (or all counterfactual "you"s) attempt to optimize it.

Comment by Ericf on The reverse Goodhart problem · 2021-06-09T12:23:28.334Z · LW · GW

I think it's empirical observation. Goodhart looked around, saw in many domains that U diverged from V in a bad way after it became a tracked metric, while seeing no examples of U diverging from a theoretical V' in a good way, and then minted the "law." Upon further analysis, no one has come up with a counterexample not already covered by the built-in exceptions (if U is sufficiently close to V, then maximizing U is fine - eg Moneyball; OR if there is relatively low benefit to performing, agents won't attempt to maximize U - eg anything using Age as U, like senior discounts or school placements)

Comment by Ericf on What are the gears of gluten sensitivity? · 2021-06-09T02:09:13.639Z · LW · GW

For the types of gluten problems (or wheat allergy, which often looks like a gluten problem) actually supported by science (and not just "maybe it's bad?"), you need to go to 0 initially to let your system recover. After that, mild allergy or celiac could allow for the occasional wheat maltodextrin ingredient, or "cooked in the same kitchen as active flour usage," or "ok, just one bite of that," or "I'll just scrape the filling out of the pie." Severe allergy or celiac requires continued zero tolerance. At no point would eating a gluten roll or sandwich be reasonable.

Regardless, the standard diagnostic method for "I seem to have GI problems" is to remove all the things from your diet that have caused other people problems (wheat, milk, etc.) and then reintroduce them one at a time and observe your reactions. Your gut bacteria are unique to you, and you might have or lack something that makes gluten or some other protein contraindicated.

Comment by Ericf on Survey on AI existential risk scenarios · 2021-06-09T00:18:57.946Z · LW · GW

That seems like a really bad conflation? Is one question combining the risk of "too much" AI use and "too little" AI use?

That's even worse than the already widely smashed distinctions between "can we?", "should we?", and "will we?"

Comment by Ericf on The reverse Goodhart problem · 2021-06-08T20:26:51.579Z · LW · GW

This looks like begging the question. The whole point of Goodhart is that the second case always applies (barring a discontinuity in the production functions - it's possible that trying to maximize U generates a whole new method, which produces far more V than the old way). You cannot argue against that by assuming a contradictory function into existence (at least, not without some actual examples)

Comment by Ericf on Against intelligence · 2021-06-08T20:13:09.760Z · LW · GW

The problem (even in humans) is rarely the ability to identify the right answer, or even the speed at which answers can be evaluated, but rather the ability to generate new possibilities. And that is a skill that is both hard and not well understood.

Comment by Ericf on On making fictional miracles seem plausible · 2021-05-31T15:25:54.609Z · LW · GW

It's about trusting the narrator (https://www.shamusyoung.com/twentysidedtale/?p=17692), and that trust comes from the illusion of linear time and conservation of narrative detail (Chekhov's Gun).

In the first story, the illusion of linear time makes the reader think that the author was writing along, and then didn't know how to end the story and invented the meteor at that moment. In reality, the author could have written the chapters in any order, or gone back and edited parts.

In the second version, a third character is introduced early, which sets up the expectation of that character doing something eventually (eg creepy old guy in Home Alone). This helps the audience trust that the author had a plan. So, it might make for a dumb story, but at least it's clear to the reader that the author intended the deus ex machina from the beginning.

Comment by Ericf on The Cost of Convenience.... · 2021-05-31T01:58:43.142Z · LW · GW

People generally seem to be voting with their days and choosing convenience over commitment. It sounds like you are making Plato's philosopher king argument (https://en.m.wikipedia.org/wiki/Philosopher_king):

  1. There are two kinds of pleasure
  2. Only someone who has experienced both can rightly decide which is better
  3. (Plato claimed there was a hierarchy, such that A < B < C and no-one could have experienced only B or only A and C etc. That argument isn't important here)
  4. The author has experienced both, and feels one is clearly superior
  5. Therefore, hedonistic/convenience is inferior to philosophical/commitment
Comment by Ericf on Two Definitions of Generalization · 2021-05-31T01:35:42.175Z · LW · GW

Notes:

  1. "Common elements" could mean "the intersection of two sets" (which is probably what you meant) or "a set of attributes that are correlated with the set of interest" (which is where "most people find church boring" fits).
  2. Generalizing could be a means of abstracting the important elements from a set, or a means of predicting elements to be found in future examples of a set. So, saying "pies are dessert" could mean that a peach pie shares important features with cake (more so than sharing features with a particular shade of off-white paint), or that when a menu says pie the diner can expect it to be sweet and come after the primary meal (even though some pies are savory).
Comment by Ericf on Why don't long running conversations happen on LessWrong? · 2021-05-31T00:44:49.284Z · LW · GW

If you spend a month composing a contribution, make it a new post, with a link back to the previous. That is how I've seen several longer conversations happen here

Comment by Ericf on Is there a term for 'the mistake of making a decision based on averages when you could cherry picked instead'? · 2021-05-26T17:04:47.296Z · LW · GW

Examples:

  1. Pitching overhand vs sidearm in baseball.
  2. Net decking vs custom building in Magic The Gathering (before 2010)
  3. Buying index funds vs picking stocks (after doing Berkshire Hathaway levels of research)
  4. Not gambling vs playing blackjack (as part of a card-counting bet-variance team)
  5. Shooting a basketball from the floor vs dunking (if you're tall enough)

In general: Doing things the same way that worked in the past vs doing something different. Most mutations are deleterious, but doing things in the correct different way can have big benefits.

Comment by Ericf on A.D&D.Sci May 2021 Evaluation and Ruleset · 2021-05-26T16:27:24.750Z · LW · GW

Very good points. I actually made a counting error and estimated the odds of a Beetle win at ~20%, and then also failed to account for more than 4 players.

Comment by Ericf on A.D&D.Sci May 2021 Evaluation and Ruleset · 2021-05-26T15:04:36.351Z · LW · GW

Same here as "D" - given a goal of "score highest," winning a high-value Beetle auction was the best way to do it. I did try to tweak my valuations such that I would either win a bunch of auctions up front and then not be able to bid on Beetle, or not win the early auctions and then have a chance at a Beetle win.

Sadly, the EV (excluding beetle) was only ~600 sp, so there was no manipulation of "let others win the first few auctions, then when they run out of money clean up with low bids at the end"

Comment by Ericf on We should probably buy ADA? · 2021-05-25T17:29:55.861Z · LW · GW

Ok, ok, ok, ok, and... thread dropped. I'm still not seeing that "last mile" connection where the contract knows anything about what happened in meatspace except "verified agent 8675309 asserts side 2 of the contract has been fulfilled" times however many verifiers you're willing to pay for.

And regarding crop insurance, 

  1. the article says "to develop", which means it does not yet exist, probably due to some combination of:
  2. taking some sort of external data inputs and outputting a result is vulnerable to hacks of the incoming data. For example, if all the relevant sources report heavy hail in one particular region, all those farmers would get paid, regardless of the actual meatspace weather. 
  3. medium term contracts, like insurance, need to pay out with a predictable value. Niche (and therefore volatile) currency isn't suitable... especially for an insurance product that would flood a small geographic area with payouts at the same time.
  4. paying out based on generalized reports rather than individual claims means that you are pushing risk from the insurer to the farmer - specifically, you are requiring the farmer to know what kinds of area weather might damage their crops, and how much, rather than being able to say "I should get X hundred bushels, if I get less and there was an obvious weather reason, pay for the difference"
  5. paying out based on generalized reports rather than individual claims means that some farms will have no damage and get the same payout as the one "across the road" or "across the river" that was heavily damaged. Which means the premiums need to reflect the increased chance of a payout occurring. Does that compensate for hiring fewer people to verify claims? Maybe. 
  6. Every step made to disconnect insurance from actual suffering by a specific human increases the ability of people to buy it as a gamble, rather than a hedge. This is bad (ref: the 2008 financial crisis).
Comment by Ericf on We should probably buy ADA? · 2021-05-25T14:30:22.397Z · LW · GW

So, I might be missing something, but why would people putting contracts on a block chain need to use a chain based token? One half of the contract (the Tesla, house, 1000 widgets delivered, or whatever) exists in meatspace, why can't the other half be "I promise to send you $50,000 USD"?

Comment by Ericf on The Argument For Spoilers · 2021-05-21T19:03:58.366Z · LW · GW

The three levels of spoiler:

  1. Movie 1, 2, and 3 are good. (No spoilers)
  2. "hey go watch Frozen, but I'm not gonna tell you anything about it" (you can intuit that there is some sort of plot twist surprise.)
  3. Kill Bill is a revenge story where Uma Thurman messily kills her former gang in a variety of fights. (#spoileralert, Bill dies at the end)
Comment by Ericf on The Argument For Spoilers · 2021-05-21T15:19:08.667Z · LW · GW

Sure, that mitigates the costs, but even knowing "this media is better unspoiled" is just a third level of spoiler (and often is enough to figure out the plot twist early, though notably not in a certain Disney animated feature).

Comment by Ericf on The Argument For Spoilers · 2021-05-21T14:33:36.418Z · LW · GW

Unfortunately, you can't know ahead of time if each piece of art will be better experienced spoiled or unspoiled. So, you have to pay the social costs of remaining spoiler free all the time if you want to ever experience great no-spoiler art. Maybe that cost is too high for you, specifically, but it clearly isn't for some folks.

Comment by Ericf on A.D&D.Sci May 2021: Interdimensional Monster Carcass Auction · 2021-05-17T19:41:23.802Z · LW · GW

Is Row 173 accurate? It's really far away from all the other numbers.

173,Jewel Beetle,9,2514sp

Comment by Ericf on How to compute the probability you are flipping a trick coin · 2021-05-16T21:17:48.526Z · LW · GW

If one side is heavier, it will land that side down more often. You can see this with a household experiment of gluing a quarter to a circle of cardboard the same thickness, and then flipping it.

Comment by Ericf on How to compute the probability you are flipping a trick coin · 2021-05-15T18:33:12.919Z · LW · GW

A string of all-heads makes "the coin always flips heads" more likely than any other option, given equal priors, no matter how long the string is. So, what is your prior distribution of bias for "a coin someone tells you to flip"? I'd say 1000:10:1:.001 for fair:biased a tiny but detectable amount:always heads:any other bias amount
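A quick sketch of the resulting update, dropping the "any other bias amount" bucket for simplicity (the 0.51 for "biased a tiny but detectable amount" is a stand-in number):

```python
# Posterior weights after n heads in a row, starting from the prior odds above.
prior_odds = {"fair": 1000, "tiny bias": 10, "always heads": 1}
p_heads = {"fair": 0.5, "tiny bias": 0.51, "always heads": 1.0}

def posterior(n_heads):
    weights = {h: prior_odds[h] * p_heads[h] ** n_heads for h in prior_odds}
    total = sum(weights.values())
    return {h: round(w / total, 3) for h, w in weights.items()}

print(posterior(10))  # "always heads" roughly catches up to "fair"
print(posterior(15))  # and clearly dominates a few flips later
```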

Comment by Ericf on Challenge: know everything that the best go bot knows about go · 2021-05-14T22:02:10.659Z · LW · GW

I kind of do know everything the best go bot knows? For a given definition of "knows."

At the most simple: I know that the best move to make given a board is the one that leads to a victory board state, or, failing that, a board state with the best chance of leading to a victory board state. Which is all a go program is doing.

Now, the program is able to evaluate those conditions to a much deeper search depth & breadth than I can, but that isn't a matter of knowledge, just ability-to-implement knowledge.
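To be concrete about "leads to a victory board state, or the best chance of one": that's just recursive lookahead. A toy minimax over a hand-made tree (obviously not a real go engine):

```python
# A "position" is either a final score (1 = win for us, -1 = loss) or a list
# of child positions reachable by one move. This is only the bare
# "pick the move whose resulting position is best" idea.
def best_value(position, our_turn=True):
    if isinstance(position, (int, float)):          # terminal position
        return position
    child_values = [best_value(child, not our_turn) for child in position]
    return max(child_values) if our_turn else min(child_values)

toy_tree = [[1, 1], [-1, 1]]  # two moves for us, two replies for the opponent
print(best_value(toy_tree))   # 1: the first move wins no matter the reply
```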

I wouldn't count the database of prior games as part of the go program, since I (or a different program) could also have access to that same database.

Comment by Ericf on Zvi's Law of No Evidence · 2021-05-14T13:26:02.703Z · LW · GW

This misses the point of Zvi's comment. Alice saying "there is no evidence of X" is more likely in worlds where Alice is BSing than in worlds where Alice is attempting to provide factual information (ie Level 1 communication). That is orthogonal to any actual calculations of {amount of evidence observed} / {amount of evidence that could have been observed, given the amount of looking that has been done}.

Also, too, the second thing is a gradient, not a dichotomy. And "your priors" are just a way of saying {amount of evidence observed that I know about} / {amount of evidence that could have been observed, given what I know about the amount of looking that has been done}

Comment by Ericf on Is driving worth the risk? · 2021-05-11T16:51:18.913Z · LW · GW
  1. With 0 years of experience, you are not in the top half of safe drivers.

  2. Even if brain upload is available within your lifetime, there is a less than 100% chance that you, personally, get to do it. How rich and/or valuable are you (be honest and realistic)?

  3. If you're looking at a mere "cure for aging" then years remaining is even less, and the need for $ is greater than in the brain upload scenario.

Comment by Ericf on Why are the websites of major companies so bad at core functionality? · 2021-05-06T19:12:24.291Z · LW · GW

Also, they apparently did write code to validate it. And if most everyone puts in either a correct IBAN, or a correctly formatted but typoed wrong IBAN, it might be that no-one has ever complained to Amazon about having to wait for a server-side verification. There's probably a low-priority bug ticket written by some tester sitting on their backlog, but without customer complaint or measurable loss of business it won't get worked.
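For what it's worth, the format check itself is cheap: the standard ISO 13616 mod-97 checksum can run client-side with no server round trip (this is the generic algorithm, not Amazon's actual code):

```python
# Generic IBAN mod-97 checksum: catches most typos locally, though a
# valid-but-wrong IBAN still needs a server-side account check.
def iban_checksum_ok(iban: str) -> bool:
    s = iban.replace(" ", "").upper()
    rearranged = s[4:] + s[:4]                                # country code + check digits go last
    digits = "".join(str(int(ch, 36)) for ch in rearranged)   # 'A'->10 ... 'Z'->35
    return int(digits) % 97 == 1

print(iban_checksum_ok("GB82 WEST 1234 5698 7654 32"))  # True - the standard example IBAN
print(iban_checksum_ok("GB82 WEST 1234 5698 7654 33"))  # False - one digit off
```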

Comment by Ericf on There’s no such thing as a tree (phylogenetically) · 2021-05-06T14:02:48.751Z · LW · GW

And, conversely, Palm "trees" do not actually have wood - they are just very large stems with lots of layers. Kind of like Ogres.

Comment by Ericf on The Schelling Game (a.k.a. the Coordination Game) · 2021-05-04T13:22:29.418Z · LW · GW

Heh. Specifically, max-point-scoring Dixit play involves explicitly referencing an in-joke known by some of the group and unknown to others.

Comment by Ericf on Low-stakes alignment · 2021-04-30T13:17:34.134Z · LW · GW

Quote: In such a world AI can still cause huge amounts of trouble if humans can’t understand what it is doing. Rather than “taking over” in a single unanticipated shock, the situation can deteriorate in a thousand tiny pieces each of which humans cannot understand.

Some would say this has already happened: even though (some) humans do understand (and object to) what it is doing, most do not understand, and the ones in control of the AI do not object.

Comment by Ericf on Modern Monetary Theory for Dummies · 2021-04-28T05:00:35.972Z · LW · GW

There is one logical gap in this summary: the assumption that the total amount of goods produced remains fixed when the government spends money. This is obviously false, since the things the government buys (the military, medical care for seniors, etc.) would not have been produced (or not as much of them) had the money not been spent. There is a smuggled assumption that the people making stuff for the government would have made the same amount of goods for other people, but that isn't proven.

Comment by Ericf on Can you improve IQ by practicing IQ tests? · 2021-04-27T23:22:25.390Z · LW · GW

Maybe if you have to pass a certain level to start, and then you have a bunch of different kinds of things to learn, but each person gets to specialize in what they do/like best so the test covers both breadth and depth of learning. It would probably take a few years, but the administrator could provide some sort of general certificate of intelligence+grit that potential employers and spouses could check without having to administer the test themselves?

Comment by Ericf on Can you improve IQ by practicing IQ tests? · 2021-04-27T17:51:36.421Z · LW · GW

Sadly, I can only share the synthesized results of years of reading - I don't keep track of where my ideas come from (though I do try to avoid known-bad sources)

#1 is seen with SAT scores - taking the test a second time / taking a prep course improves the median student's score by ~10 percentage points. I (and others) attribute this to improvements in the "IQ test taking ability" portion of the SAT, not the "have memorized vocabulary and rules of math" portion.

#3 is clearly seen in results from twin studies, adoption studies, and just looking at the world (ie we see a wider range in "ability to do things we would predict high IQ people to be better at" among people with similar childhoods than we do among people with dissimilar childhoods in extended families).

Comment by Ericf on Can you improve IQ by practicing IQ tests? · 2021-04-27T16:40:02.400Z · LW · GW

Also, too, one of the things g-factor is good for is the ability to learn, and especially the ability to apply previous experience to novel situations. So, the very ability of someone to get better at IQ tests (which, when done well, are not rote memorization exercises) indicates that there is some "there" there.