Posts

Population ethics and the value of variety 2024-06-23T10:42:21.402Z
Book review: The Quincunx 2024-06-05T21:13:55.055Z
A case for fairness-enforcing irrational behavior 2024-05-16T09:41:30.660Z
I'm open for projects (sort of) 2024-04-18T18:05:01.395Z
A short dialogue on comparability of values 2023-12-20T14:08:29.650Z
Bounded surprise exam paradox 2023-06-26T08:37:47.582Z
Stop pushing the bus 2023-03-31T13:03:45.543Z
Aligned AI as a wrapper around an LLM 2023-03-25T15:58:41.361Z
Are extrapolation-based AIs alignable? 2023-03-24T15:55:07.236Z
Nonspecific discomfort 2021-09-04T14:15:22.636Z
Fixing the arbitrariness of game depth 2021-07-17T12:37:11.669Z
Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z

Comments

Comment by cousin_it on Universal Basic Income and Poverty · 2024-07-26T10:58:44.713Z · LW · GW

I'm not sure the “poverty equilibrium” is real. Poverty varies a lot by country and time period, and various policies in various places have helped with it, so UBI might help as well. Though I think other policies (like free healthcare, or fixing housing laws) might help more per dollar.

Comment by cousin_it on Universal Basic Income and Poverty · 2024-07-26T10:57:07.260Z · LW · GW
Comment by cousin_it on Demography and Destiny · 2024-07-22T09:11:48.509Z · LW · GW

I think the main point of the essay might be wrong. It's not necessarily true that evolution will lead to a resurgence of high fertility. Yes, evolution is real, but it's also slow: it works on the scale of human lifetimes. Culture today evolves faster than that. It's possible that culture can keep adapting its fertility-lowering methods faster than humans can evolve defenses against them.

Comment by cousin_it on Why Georgism Lost Its Popularity · 2024-07-21T22:27:56.965Z · LW · GW

I think you're right: Georgism doesn't get passed because it goes against the interests of landowners, who have overwhelming political influence. But if the actual problem we're trying to solve is high rents, maybe that doesn't require full Georgism? Maybe we just need to make construction legally easier. There's strong opposition to that too, but not from literally all landowners.

Comment by cousin_it on Ice: The Penultimate Frontier · 2024-07-16T10:49:20.900Z · LW · GW
Comment by cousin_it on Ice: The Penultimate Frontier · 2024-07-14T09:42:10.614Z · LW · GW

It seems to me that land of the same quality as this can already be bought more cheaply in many places. The post says the new land could be more valuable because of better governance, but governance is an outcome of human politics, so it's orthogonal to old/new land. In Jules Verne's Propeller Island, a power conflict eventually leads to the physical destruction of the island.

Comment by cousin_it on Reliable Sources: The Story of David Gerard · 2024-07-11T11:25:16.534Z · LW · GW

My impression is that Wikipedia was founded on an ideal of neutrality, but Gerard doesn't really believe in that ideal - he considers it harmful, a kind of shield for the status quo. That's a possible position, but I'm not sure how one can hold it and at the same time edit Wikipedia in good faith. Does anyone know how that can be justified?

Comment by cousin_it on When is a mind me? · 2024-07-09T08:23:30.344Z · LW · GW

anything that acts like us has our qualia

Well, a thing that acts like us in one particular situation (say, a thing that types "I'm conscious" in chat) clearly doesn't always have our qualia. Maybe you could say that a thing that acts like us in all possible situations must have our qualia? This is philosophically interesting! It makes a factual question (does the thing have qualia right now?) logically depend on a huge bundle of counterfactuals, most of which might never be realized. What if, during uploading, we insert a bug that changes our behavior in one of these counterfactuals - but then the upload never actually runs into that situation in the course of its life - does the upload still have the same qualia as the original person, in situations that do get realized? What if we insert many such bugs?

Moreover, what if we change the situations themselves? We can put the upload in circumstances that lead to more generic and less informative behavior: for example, give the upload a life where they're never asked to remember a particular childhood experience. Or just a short life, where they're never asked about anything much. Let's say the machine doing the uploading is aware of that, and allowed to optimize out parts that the person won't get to use. If there's a thought that you sometimes think, but it doesn't influence your I/O behavior, it can get optimized away; or if it has only a small influence on your behavior, say a few bits' worth, then it can be replaced with another thought that would cause the same few-bits effect. There's a whole spectrum of questionable things that people tend to ignore when they say "copy the neurons", "copy the I/O behavior" and stuff like that.

Comment by cousin_it on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-06T16:06:26.390Z · LW · GW
Comment by cousin_it on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-06T11:16:44.892Z · LW · GW

Since you conclude that both SIA and SSA are flawed because we know a lot about our parents, let's see if that works. Imagine a world where people spend their first years not knowing much about their parents, or about human reproduction. Suppose they each live in a kind of egg, and receive newscasts from outside only carrying information about the particular anthropic problem we want them to solve (e.g. "the world currently contains N people", or "there are two theories about astronomy" and so on). How should people solve anthropic problems under such conditions? Should they use SIA or SSA? Or should they still reject both, but based on some other argument, and the "we know a lot about our parents" argument was a red herring?

Comment by cousin_it on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-06-27T18:00:16.132Z · LW · GW
Comment by cousin_it on Book Review: Righteous Victims - A History of the Zionist-Arab Conflict · 2024-06-27T13:10:13.485Z · LW · GW

I think right of return might be close to the center of the problem. Imagine a Palestinian Arab who lived on the land before, and who agrees with Israel's system (democracy, civil rights and so on). According to an Israeli nationalist, should such a person be allowed to return and get Israeli citizenship?

Comment by cousin_it on Population ethics and the value of variety · 2024-06-27T06:49:17.445Z · LW · GW

I think that phrase is right "to zeroth order": one can imagine an agent with any preferences about population ethics. Then "to first order", I think the choice between average vs total utilitarianism does have a right answer (see link in my reply to Cleo). And then there are "second order" corrections like value of variety, which seem more subjective, but maybe there are right answers to be found about them as well.

Comment by cousin_it on A Step Against Land Value Tax · 2024-06-25T09:39:48.695Z · LW · GW

It could be owned by the government and rented out for cheap, but only if the renter uses it as their primary residence. Or it could be means-tested.

Comment by cousin_it on Paying Russians to not invade Ukraine · 2024-06-24T22:55:00.106Z · LW · GW

Yeah, this is an interesting proposal. It would make being a Russian soldier in Ukraine more attractive than being a Ukrainian soldier (who's stuck in the trenches) or a Ukrainian male of military age (who can't leave the country). Also, antiwar Russians would find that the fastest way to move to Europe is to enlist and defect (skipping the normal multi-year process of getting citizenship). Also, everyone along the way would want a cut: Russians would start paying money to recruitment offices when they enlist, and paying more money to Ukrainian soldiers when they defect. The economic incentives of the whole thing just get funnier and funnier as I think about it.

Comment by cousin_it on Population ethics and the value of variety · 2024-06-24T22:17:48.564Z · LW · GW

Yeah. I decided some time ago that total utilitarianism is in some sense more "right" than average utilitarianism, because of some variations on the Sleeping Beauty problem. Now it seems the right next step is taking total utilitarianism and adding corrections for variety / consistency / other such things.

Comment by cousin_it on Population ethics and the value of variety · 2024-06-24T12:57:01.401Z · LW · GW

To me it actually feels plausible that identical people might value themselves less. If there are 100 identical people and 1 different person, the identical ones might prefer saving the different one rather than one of their own. But maybe not everyone has that intuition.

Comment by cousin_it on A Step Against Land Value Tax · 2024-06-24T08:55:36.176Z · LW · GW

I think you make some good points. But this part stuck out to me a bit:

You are deciding where to build both a pharmacy and bakery (assume corresponding foot traffic uplift from the other is 0). Under the SQ, you build them next to each other such that you can maximally capture the land value increase of the other. Under LVT, you build them maximally apart to reduce your land value tax burden. The increased land value you obtained under SQ that was partially captured by you is now a cost being bourne out as inconvenience to the public under LVT.

It seems to me that in reality, the effect of a business attracting customers to other nearby businesses (through foot traffic, more people moving into the area, incentivizing more transport nearby, etc) is very much nonzero and might cover the whole LVT increase.

That said, I agree that LVT is a nonobvious thing and might be off target from what we really want. The real problem is making sure there's enough low-income housing in cities; it's better to just solve that directly.

Comment by cousin_it on Reply to Stuart on anthropics · 2024-06-23T16:46:15.257Z · LW · GW

I think I just came up with a somewhat new idea on this, namely that some kinds of valuable experiences tend toward summing and others tend toward averaging.

Comment by cousin_it on Evaporation of improvements · 2024-06-23T00:07:47.328Z · LW · GW
Comment by cousin_it on I would have shit in that alley, too · 2024-06-21T16:03:12.616Z · LW · GW

Yeah, I see. Thinking more about this, they'd be right to mistrust this kind of offer. It feels like the only real solution is the hard one: making sure there's enough low-income housing in cities.

Comment by cousin_it on I would have shit in that alley, too · 2024-06-21T15:37:22.334Z · LW · GW
Comment by cousin_it on Evaporation of improvements · 2024-06-21T08:03:24.759Z · LW · GW

There are two things that are fixed: Land and attention.

I think that's a red herring though. The current scarcity of housing is far below the limit set by land, so I just call it artificial. There can be artificial scarcity of other necessities as well; all that's needed is that the providers of that necessity act as a cartel.

Comment by cousin_it on Evaporation of improvements · 2024-06-20T21:09:02.907Z · LW · GW

Yeah, maybe "diffusion" is a too optimistic view. A large part of improvements get captured by those who can create an artificial scarcity of some necessity - currently it's housing, in the past it used to be physical safety. And many other improvements happen as part of arms races, so on net they lead to more waste and no benefit - like improvements to advertising in a fixed size market, or improvements to weapons. I think these are the two main mechanisms that "eat" improvements and prevent us from having a 15-hour work week.

Comment by cousin_it on Ilya Sutskever created a new AGI startup · 2024-06-20T10:36:11.211Z · LW · GW
Comment by cousin_it on I would have shit in that alley, too · 2024-06-19T12:36:09.619Z · LW · GW

Here's a question I don't know the answer to. If there were a program offering a basic income to homeless people, enough to pay for housing+food+clothes+etc in a cheaper area of the US without having to find work, but to receive the money they had to actually move there and get housed - would most homeless people accept the offer and move? I ask because it seems like such a program could be pretty cheap. Or would most of them prefer to stay in cities even at the cost of being homeless? If so, what would be the main reasons?

Comment by cousin_it on Our Intuitions About The Criminal Justice System Are Screwed Up · 2024-06-18T08:19:12.011Z · LW · GW

That article seems to focus on recidivism rate, not incarceration rate or crime rate. But your point is still interesting, and I decided to check. I compared the Wikipedia tables for incarceration rate and intentional homicide rate (the most objective proxy for crime rate that I can think of). It turns out that if I sort countries by the ratio between the two, both Norway and the US are in the middle of the pack. Moreover, it seems there's no relation between these variables: the scatter plot looks like just a bunch of random points.

EDIT: thinking more about this, it's much more complex. Incarceration rate probably depends both on crime rate and on society's tolerance for crime (a hidden variable), and crime rate probably depends both on incarceration rate and on society's propensity for crime (another hidden variable). So there are feedback mechanisms in both directions, with hidden variables that vary by country. Maybe someone can draw a clear conclusion from this, but not me.
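
(For concreteness, here's a minimal sketch of the comparison described above, assuming hypothetical CSV exports of the two Wikipedia tables; the file and column names are made up.)

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical exports of the two Wikipedia tables.
incarceration = pd.read_csv("incarceration_rate.csv")  # columns: country, incarcerated_per_100k
homicide = pd.read_csv("homicide_rate.csv")            # columns: country, homicides_per_100k

df = incarceration.merge(homicide, on="country")
df["ratio"] = df["incarcerated_per_100k"] / df["homicides_per_100k"]

# Rank countries by incarceration/homicide ratio to see where Norway and the US fall.
print(df.sort_values("ratio")[["country", "ratio"]])

# Scatter plot to eyeball whether the two rates are related at all.
plt.scatter(df["homicides_per_100k"], df["incarcerated_per_100k"])
plt.xlabel("Intentional homicides per 100k")
plt.ylabel("Incarcerated per 100k")
plt.show()
```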

Comment by cousin_it on Our Intuitions About The Criminal Justice System Are Screwed Up · 2024-06-17T08:42:38.372Z · LW · GW

Since the US has the highest incarceration rate in the world, replacing an imprison-too-many-people system with a flog-too-many-people system doesn't seem like the most important thing to me. It seems more important to figure out how to punish fewer people in the first place.

Comment by cousin_it on CIV: a story · 2024-06-17T00:30:10.725Z · LW · GW

A true superintelligence could wipe out humanity incredibly easily—but it could build a utopia nearly as easily. Even if it were almost entirely misaligned, just a sliver of human morality could make it decide to give humans a paradise beyond their wildest imaginings.

As long as the superintelligence's values don't contain any components that pull against components of human morality. But in the case of almost-alignment there might indeed be some such components. Almost-alignment is where s-risks live.

Comment by cousin_it on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-14T08:08:27.689Z · LW · GW

Yeah, this agrees with my thinking so far. However, I think if you could research how to align AIs specifically to human flourishing (as opposed to things like obedience/interpretability/truthfulness, which defer to the user's values), that kind of work could be more helpful than most.

Comment by cousin_it on Four Futures For Cognitive Labor · 2024-06-14T08:01:38.599Z · LW · GW

Yeah. The way the world works now, if technical alignment work is successful, it will just lead to AIs that are aligned to making money or winning wars. There need to be AIs aligned to human flourishing, but nobody wants to spend money training those. OpenAI was started for this purpose, but got taken over by money interests.

Comment by cousin_it on Four Futures For Cognitive Labor · 2024-06-13T21:41:41.913Z · LW · GW

Yeah, I think your formulation is more correct than mine.

Comment by cousin_it on Four Futures For Cognitive Labor · 2024-06-13T14:45:51.203Z · LW · GW

I think AIs will be able to do all cognitive labor for fewer resources than a human could survive on. In particular, "Scott Alexander quality writing" and "long term planning and coordination of programmers" - tasks that you assume will stay with humans - seem to me like tasks where AIs will surpass the best humans before the decade is out. Any "replacement industry" tasks can be taken up by AIs as well, because AI learning will keep getting better and more general. And it doesn't seem to matter whether demand satiates or grows: even fast-growing demand would be more cheaply met by building more AIs than by using the same resources to feed humans.

(This is also why Ricardian comparative advantage won't apply. If the AI side has a choice between trading with humans for something and spending the same resources on building AIs to produce the same thing more cheaply, then the latter option is more profitable. So after a certain point in capability development, the only thing AIs and AI companies will want from us is our resources, like land; not our labor. The best analogy is the enclosures in England.)

Comment by cousin_it on Searching for the Root of the Tree of Evil · 2024-06-12T17:17:23.960Z · LW · GW

You point out several problems in the world: people have unhealthy lifestyles, nuclear power isn't used to its full potential, ecosystems are not protected, our social lives are not in accordance with human flourishing. Then you say all these problems could be solved by a "cooperation machine". But you don't seem to explain why these problems could be solved by the same "machine". Maybe they're all separate problems that need to be solved separately.

Maybe one exercise to try is holding off on proposing solutions. Can you discuss the problems in more detail, but without mentioning any solutions? Can you point out commonalities between the problems themselves? For example, "all these problems would be solved by a cooperation machine" wouldn't count, and "they happen because people are bad at cooperating" is too vague. I'm looking for something more like "these problems happen because people get angry", or "because people get deceived by politicians". Does that make sense? Can you give it a try?

Comment by cousin_it on Response to Aschenbrenner's "Situational Awareness" · 2024-06-09T10:55:36.338Z · LW · GW

It still seems to me that international cooperation isn't the right first step. If the US believes that AI is potentially world-ending, it should put its money where its mouth is, and first set up a national commission with the power to check AIs and AI training runs for safety, and ban them if needed. Then China will plausibly do the same as well, and from a cooperation of like-minded people in both countries' safety commissions we can maybe get an international commission. But if you skip this first step, then China's negotiators can reasonably say: why do you ask us for cooperation while you still continue AI development unchecked? This shows you don't really believe it's dangerous, and are just trying to gain an advantage.

Comment by cousin_it on Response to Aschenbrenner's "Situational Awareness" · 2024-06-08T12:09:40.933Z · LW · GW
Comment by cousin_it on Book review: The Quincunx · 2024-06-07T23:58:36.459Z · LW · GW

I'm not sure what the best solution is in general. For this post specifically, maybe we could drop the narration?

Comment by cousin_it on Response to Aschenbrenner's "Situational Awareness" · 2024-06-07T20:11:46.358Z · LW · GW

I'm not sure international coordination is the right place to start. If the Chinese are working on a technology that will end humanity, that doesn't mean the US needs to work on the same technology. There's no point working on such technology. The US could just stop. That would send an important signal: "We believe this technology is so dangerous that nobody should develop it, so we're stopping work on it, and asking everyone else to stop as well." After that, the next step could be: "We believe that anyone else working on this technology is endangering humanity as well, so we'd like to negotiate with them on stopping, and we're prepared to act with force if negotiations fail."

Comment by cousin_it on Book review: The Quincunx · 2024-06-07T19:57:00.678Z · LW · GW

Yeah, I think the narration doesn't catch up when I edit the post, and I've edited it a lot. Maybe there's a button to refresh it but I haven't found it. @habryka?

Comment by cousin_it on Book review: The Quincunx · 2024-06-06T16:17:51.088Z · LW · GW

I think hoping for "pseudokindness" doesn't really work. You can care about a flower to a one-millionth degree, but you'll still pave it over if your desire for a parking lot there is more than one-millionth. And if we're counting on AIs to have certain drives in tiny amounts, we shouldn't just talk about kindness, but also, for example, about desire for justice (leading to punishment and s-risk). So putting our hopes on these one-millionths feels really risky.

Comment by cousin_it on Book review: The Quincunx · 2024-06-06T10:38:37.824Z · LW · GW

(Edited because my previous reply was a bit off the mark.)

I don't think this scenario depends on government. If AI is better at all jobs and can make more efficient use of all resources, "AI does all jobs and uses all resources" is the efficient market outcome. All that's needed is that companies align their AIs to the company's money interest, and people use and adapt AI in the pursuit of money interest. Which is what's happening now.

A single AI taking dramatic transformative action seems less likely to me, because it'll have to take place in a world already planted thick with AI and near-AI following money interests.

Comment by cousin_it on Benaya Koren's Shortform · 2024-06-06T10:28:26.139Z · LW · GW

I'm not sure what exactly you're proposing to transfer to the state.

  • The right to charge rent on the land? But under Georgism the state already owns that right.

  • The structure you built on the land? But that structure, without the land, is a depreciating asset. Chances are the next user of the land will just tear it down and build something else. So you might not get enough money for retirement by offering up the structure alone.

Comment by cousin_it on Former OpenAI Superalignment Researcher: Superintelligence by 2030 · 2024-06-05T21:15:27.005Z · LW · GW

I think that response basically doesn't work. But when I started writing in more detail why it doesn't work, it morphed into a book review that I've wanted to write for the last couple of years but kept putting off. So thank you for finally making me write it!

Comment by cousin_it on Former OpenAI Superalignment Researcher: Superintelligence by 2030 · 2024-06-05T09:40:11.301Z · LW · GW

Page 87:

The clusters can be built in the US, and we have to get our act together to make sure it happens in the US.

No, we have to make sure it doesn't happen anywhere.

Page 110:

What we want is to add side-constraints: don’t lie, don’t break the law, etc.

That's nowhere near enough. A superintelligence will be much more economically powerful than humans. If it merely exhibits normal human levels of benevolence, truth-telling, law-obeying, money-seeking, power-seeking and so on, it will deprive humans of everything.

It's entirely legal to do jobs so cheaply that others can't compete, and to show people optimized messages to make them spend their savings on consumption. A superintelligence merely doing these two things superhumanly well, staying within the law, is sufficient to deprive most people of everything. Moreover, the money incentives point to building superintelligences that will do exactly these things, while rushing to market and spending the minimum on alignment.

Superintelligence requires super-benevolence. It must not be built for profit or for an arms race, it has to be built for good as the central goal. We've been saying this for decades. If AI researchers even now keep talking in terms like "add constraints to not break the law", we really are fucked.

Comment by cousin_it on Another attempt to explain UDT · 2024-06-04T10:10:54.349Z · LW · GW

Simulations; predictors (not necessarily perfect); amnesia; identical copies; players with aligned interests.

Comment by cousin_it on The Standard Analogy · 2024-06-03T19:42:07.775Z · LW · GW

Why isn’t incremental progress at instilling human-like behavior into machines, incremental progress on AGI alignment?

It kind of is, but unfortunately treating others badly when you have lots of power is also part of human nature. And there's no real limit to how bad it could get; see the Belgian Congo, for example.

Comment by cousin_it on AI Regulation is Unsafe · 2024-06-03T13:58:53.981Z · LW · GW

An elected representative who’s term limit is coming up wouldn’t have the same incentives.

I think this proves too much. If elected representatives only follow self-interest, then democracy is pointless to begin with, because any representative once elected will simply obey the highest bidder. Democracy works to the extent that people vote for representatives who represent the people's interests, which do reach beyond the term limit.

Comment by cousin_it on Politics is the mind-killer, but maybe we should talk about it anyway · 2024-06-03T13:12:32.271Z · LW · GW

Don't know about others, but to me it feels like "wokeness" has faded from view a bit because economic inequality has become the main issue again. And I agree with that: making people less dependent on landlords and employers is indeed the main issue. (The AI issue is more urgent, but even the AI transition would feel safer to me if we had a system where people's livelihoods weren't tied to jobs or affording rent.)

Comment by cousin_it on One way violinists fail · 2024-05-29T21:50:00.755Z · LW · GW

If orchestral jobs for violinists are so scarce, what do you think about the option of branching out? Playing with a band, going into pop violin, etc.

Comment by cousin_it on Real Life Sort by Controversial · 2024-05-28T09:05:08.224Z · LW · GW

I think Scott's original story described scissor statements a bit differently. The people reading them thought "hmm, this isn't controversial at all, this is just obviously true, maybe the scissor-statement-generator has a bug". And then other people read the same statement and said it was obviously false, and controversy resulted. Like the black and blue vs white and gold dress, or yanny/laurel. Maybe today's LLMs aren't yet smart enough to come up with new such statements.

EDIT: I think one possible reason why LLMs have trouble with this kind of question (and really with any question that requires coming up with specific interesting things) is that they have a bias toward the generic. In my interactions with them at least, I keep having to constrain the question with specificity, and then the model will still try to give the most generic answer it can get away with.