Posts

An argument that consequentialism is incomplete 2024-10-07T09:45:12.754Z
Population ethics and the value of variety 2024-06-23T10:42:21.402Z
Book review: The Quincunx 2024-06-05T21:13:55.055Z
A case for fairness-enforcing irrational behavior 2024-05-16T09:41:30.660Z
I'm open for projects (sort of) 2024-04-18T18:05:01.395Z
A short dialogue on comparability of values 2023-12-20T14:08:29.650Z
Bounded surprise exam paradox 2023-06-26T08:37:47.582Z
Stop pushing the bus 2023-03-31T13:03:45.543Z
Aligned AI as a wrapper around an LLM 2023-03-25T15:58:41.361Z
Are extrapolation-based AIs alignable? 2023-03-24T15:55:07.236Z
Nonspecific discomfort 2021-09-04T14:15:22.636Z
Fixing the arbitrariness of game depth 2021-07-17T12:37:11.669Z
Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z

Comments

Comment by cousin_it on The case for the death penalty · 2025-02-22T21:57:30.182Z · LW · GW
Comment by cousin_it on The case for the death penalty · 2025-02-22T01:23:09.193Z · LW · GW
Comment by cousin_it on The case for the death penalty · 2025-02-22T01:07:14.269Z · LW · GW

No. Committing a crime inflicts damage. But interacting with a person who committed a crime in the past doesn't inflict any damage on you.

Comment by cousin_it on The case for the death penalty · 2025-02-21T23:36:32.340Z · LW · GW

Because the smaller measure should (on my hypothesis) be enough to prevent crime, and inflicting more damage than necessary for that is evil.

Comment by cousin_it on The case for the death penalty · 2025-02-21T22:32:20.913Z · LW · GW

Because otherwise everyone will gleefully discriminate against them in every way they possibly can.

Comment by cousin_it on The case for the death penalty · 2025-02-21T19:06:49.758Z · LW · GW

I think the US has too much punishment as it is, with a very high incarceration rate and prison conditions sometimes approaching torture (prison rape, supermax isolation).

I'd rather give serial criminals some kind of surveillance collars that would detect reoffending and notify the police. I think a lot of such people can be "cured" by high certainty of being caught, not by severity of punishment. There'd need to be laws to prevent discrimination against people with collars, though.

Comment by cousin_it on Ascetic hedonism · 2025-02-17T20:46:18.788Z · LW · GW

Yeah, I stumbled on this idea a long time ago as well. I never drink sugary drinks, my laptop is permanently in grayscale mode and so on. And it doesn't feel like missing out on fun; on the contrary, it allows me to not miss out. When I "mute" some big, addictive, one-dimensional thing, I start noticing all the smaller things that were being drowned out by it. Like, as you say, noticing the deliciousness of baked potatoes when you're not eating sugar every day, or noticing all the colors in my home and neighborhood when my screen is on grayscale.

Comment by cousin_it on Celtic Knots on a hex lattice · 2025-02-15T00:59:08.713Z · LW · GW
Comment by cousin_it on Altman blog on post-AGI world · 2025-02-10T09:57:30.008Z · LW · GW

I suppose the superassistants could form coalitions and end up as a kind of "society" without too much aggression. But this all seems moot, because superassistants will get outcompeted anyway by AIs that focus on growth. That's the real danger.

Comment by cousin_it on Altman blog on post-AGI world · 2025-02-10T01:15:06.910Z · LW · GW

I don't quite understand the plan. What if I get access to cheap friendly AI, but there's also another, much more powerful AI that wants my resources and doesn't care much about me? What would stop the much more powerful AI from outplaying me for those resources, maybe by entirely legal means? Or is the idea that the publicly available AIs will somehow always be the strongest ones around? That isn't true even now.

Comment by cousin_it on The Risk of Gradual Disempowerment from AI · 2025-02-06T00:45:57.299Z · LW · GW

I also agree with all of this.

For what an okayish possible future could look like, I have two stories in mind:

  1. Humans end up as housecats. Living among much more powerful creatures doing incomprehensible things, but still mostly cared for.

  2. Some humans get uplifted to various levels, others stay baseline. The higher you go, the more aligned you must be to those below. So still a hierarchy, with super-smart creatures at the top and housecats at the bottom, but with more levels in between.

A post-AI world where baseline humans are anything more than housecats seems hard to imagine, I'm afraid. And even getting to be housecats at all (rather than dodos) looks to be really difficult.

Comment by cousin_it on Tear Down the Burren · 2025-02-04T10:59:31.295Z · LW · GW

Thanks for writing this, it's a great explanation-by-example of the entire housing crisis.

Comment by cousin_it on Predation as Payment for Criticism · 2025-02-02T23:57:27.926Z · LW · GW

Well, Christianity sometimes spread by conquest, but other times it spread peacefully just as effectively. Same for democracy. So I don't think the spread of moral values requires conquest.

Comment by cousin_it on The Simplest Good · 2025-02-02T23:41:02.226Z · LW · GW

Wait, but we know that people sometimes have happy moments. Is the idea that such moments are always outweighed by suffering elsewhere? It seems more likely that increasing the proportion of happy moments is doable, an engineering problem. So basically I'd be very happy to see a world like the one in the first half of your story, and I don't think it would lead to the second half.

Comment by cousin_it on Gradual Disempowerment, Shell Games and Flinches · 2025-02-02T15:28:15.377Z · LW · GW
Comment by cousin_it on Poetic Methods I: Meter as Communication Protocol · 2025-02-01T21:47:55.969Z · LW · GW
Comment by cousin_it on Predation as Payment for Criticism · 2025-01-31T18:13:23.067Z · LW · GW

Your theory would predict that we'd be much better at modeling tigers (which hunted us) than at modeling antelopes (which we hunted), but in reality we're about equally bad at modeling either, and much better at modeling other humans.

Comment by cousin_it on The future of humanity is in management · 2025-01-30T23:08:28.423Z · LW · GW

I don't think this post addresses the main problem. Consider the exchange ratio between labor and land. You need land to live on, and your food needs land to be grown on. Will you be able to afford more land use for the same work hours, or less? (As a programmer, manager, CEO, whatever high-productivity job you like.) Well, if the same land can be used to run AIs that do your job N times over, then your labor won't be able to afford it, and that closes the case.

So basically, the only way the masses can survive long term is by some kind of handouts. It won't just happen by itself due to tech progress and economic laws.
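
Here's a toy calculation of that exchange-ratio point, with made-up numbers (the 100x figure is just an assumption for illustration, not a prediction):

```python
# Toy numbers, all made up, to illustrate the land/labor exchange ratio argument.
# Suppose one plot of land can either support one human worker or host AI compute
# that does the same job N times over.
N = 100                      # assumption: AIs on the plot replace N human workers
human_output_per_year = 1.0  # value of one human's yearly labor (normalized)

# The most a human can bid for the plot (out of labor income) vs. the most an
# AI operator can bid (out of the plot's AI output):
human_max_bid = human_output_per_year
ai_operator_max_bid = N * human_output_per_year

print(ai_operator_max_bid / human_max_bid)  # 100.0: the human is priced out of the land
```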

Comment by cousin_it on Predation as Payment for Criticism · 2025-01-30T02:12:19.326Z · LW · GW

I don't buy it. Lots of species have predators and have had them for a long time, but very few species have intelligence. It seems more likely that most of our intelligence is due to sexual selection, a Fisherian runaway that accidentally focused on intelligence instead of brightly colored tails or something.

Comment by cousin_it on The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating · 2025-01-22T08:49:06.505Z · LW · GW

An ASI project would be highly distinguishable from civilian AI applications and not integrated with a state’s economy

Why? I think there's a smooth ramp from economically useful AI to superintelligence: AIs gradually become better at many tasks, and these tasks help more and more with improving AI in turn.

Comment by cousin_it on Tax Price Gouging? · 2025-01-18T16:58:03.057Z · LW · GW
Comment by cousin_it on We probably won't just play status games with each other after AGI · 2025-01-16T18:47:42.723Z · LW · GW
Comment by cousin_it on RobertM's Shortform · 2025-01-16T14:08:54.685Z · LW · GW

For cognitive enhancement, maybe we could have a system like "the smarter you are, the more aligned you must be to those less smart than you"? So enhancement would be available, but would make you less free in some ways.

Comment by cousin_it on Beliefs and state of mind into 2025 · 2025-01-13T08:49:00.499Z · LW · GW

I think the problem with WBE is that anyone who owns a computer and can decently hide it (or fly off in a spaceship with it) becomes able to own slaves, torture them and whatnot. So after that technology appears, we need some very strong oversight - it becomes almost mandatory to have a friendly AI watching over everything.

Comment by cousin_it on Cast it into the fire! Destroy it! · 2025-01-13T08:38:25.480Z · LW · GW

What about biological augmentation of intelligence? I think if other avenues are closed, this one can still go pretty far and make things just as weird and risky. You can imagine biological self-improving intelligences too.

So if you're serious about closing all avenues, it amounts to creating a god that will forever watch over everything and prevent things from becoming too smart. It doesn't seem like such a good idea anymore.

Comment by cousin_it on Applying traditional economic thinking to AGI: a trilemma · 2025-01-13T07:54:46.881Z · LW · GW

Sure. But in an economy with AIs, humans won't be like Bob. They'll be more like Carl the bottom-percentile employee who struggles to get any job at all. Even in today's economy lots of such people exist, so any theoretical argument saying it can't happen has got to be wrong.

And if the argument is quantitative - say, that the unemployment rate won't get too high - then imagine an economy with 100x more AIs than people, where unemployment is only 1% but all people are unemployed. There's no economic principle saying that can't happen.
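
To sanity-check that arithmetic with illustrative numbers:

```python
# Toy economy, illustrative numbers only: 100 AI workers for every human.
humans = 1_000_000
ais = 100 * humans
workforce = humans + ais          # 101,000,000 workers in total

headline_unemployment = 0.01      # a "healthy"-looking 1% unemployment rate
unemployed = headline_unemployment * workforce

print(unemployed)                 # 1,010,000
print(unemployed >= humans)       # True: every single human could be among the unemployed
```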

Comment by cousin_it on Applying traditional economic thinking to AGI: a trilemma · 2025-01-13T01:43:01.656Z · LW · GW

That, incidentally, implies that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.

Less skilled labor has a well-paying niche today?

Comment by cousin_it on Beliefs and state of mind into 2025 · 2025-01-11T00:02:47.348Z · LW · GW

Yeah, on further thought I think you're right. This is pretty pessimistic, then: AI companies will find it easy to align AIs to money interests, and the rest of us will be in a "natives vs the East India Company" situation. More time to spend on alignment then matters only if some companies actually try to align AIs to something good instead, and I'm not sure any of them will.

Comment by cousin_it on On Eating the Sun · 2025-01-10T23:18:05.389Z · LW · GW

I wonder how hard it would be to make the Sun stop shining? Maybe the fusion reaction could be made subcritical by adding some "control rod" type stuff.

Edit: I see other commenters also mentioned spinning up the Sun, which would lower the density and stop the fusion. Not sure which approach is easier.

Comment by cousin_it on Beliefs and state of mind into 2025 · 2025-01-10T22:44:45.288Z · LW · GW

I guess the opposite point of view is that aligning AIs to AI companies' money interests is harmful to the rest of us, so it might actually be better if AI companies didn't have much time to do it, and the AIs got to keep some leftover morality from human texts. And WBE would enable the powerful to do some pretty horrible things to the powerless, so without some kind of benevolent oversight a world with WBE might be scary. But I'm not sure about any of this, maybe your points are right and mine are wrong.

Comment by cousin_it on On Eating the Sun · 2025-01-10T22:31:46.858Z · LW · GW

Huh? Environmentalism means letting things work as they naturally worked, not changing them to be "reversible" or something else.

Comment by cousin_it on Parkinson's Law and the Ideology of Statistics · 2025-01-06T17:31:45.152Z · LW · GW

There have been many controversies about the World Bank. A good starting point is this paragraph from Naomi Klein's article:

The truth is that the bank's credibility was fatally compromised when it forced school fees on students in Ghana in exchange for a loan; when it demanded that Tanzania privatise its water system; when it made telecom privatisation a condition of aid for Hurricane Mitch; when it demanded labour "flexibility" in Sri Lanka in the aftermath of the Asian tsunami; when it pushed for eliminating food subsidies in post-invasion Iraq. Ecuadoreans care little about Wolfowitz's girlfriend; more pressing is that in 2005 the World Bank withheld a promised $100m after the country dared to spend a portion of its oil revenues on health and education. Some anti-poverty organisation.

Whether she's right or wrong, I like how the claims are laid out nicely. Anyone can fact-check and come to their own conclusions.

Comment by cousin_it on Oppression and production are competing explanations for wealth inequality. · 2025-01-06T16:59:56.480Z · LW · GW
Comment by cousin_it on Human study on AI spear phishing campaigns · 2025-01-04T22:31:44.061Z · LW · GW

Fair enough. And it does seem to me like the action will be new laws, though you're right it's hard to predict.

Comment by cousin_it on Human study on AI spear phishing campaigns · 2025-01-04T21:50:41.452Z · LW · GW

This one isn't quite a product, though; it's a service. The company receives a request from a criminal: "gather information about such-and-such person and write a personalized phishing email that would work on them". And the company goes ahead and does it. It seems very fishy. The fact that the company fulfilled the request using AI doesn't even seem very relevant: imagine if the company had a staff of secretaries instead, and these secretaries were willing to write personalized phishing emails for clients. Does that seem like something that should be legal? No? Then it shouldn't be legal with AI either.

Though probably no action will be taken until some important people fall victim to such scams. After that, action will be taken in a hurry.

Comment by cousin_it on Seth Herd's Shortform · 2025-01-04T12:04:05.631Z · LW · GW

Yeah, this is really dumb. I wonder if it would've gone better if the AI profiles had been more honest to begin with, using actual datacenter photos as their profile pics and so on.

Comment by cousin_it on Human study on AI spear phishing campaigns · 2025-01-04T01:04:48.934Z · LW · GW

Are AI companies legally liable for enabling such misuse? Do they take the obvious steps to prevent it, e.g. by having another AI scan all chat logs and flag suspicious ones?
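
To be concrete about what such a scan might look like, here's a minimal keyword-based sketch; a real deployment would presumably use a second model as the classifier, and the pattern list and function name here are made up purely for illustration:

```python
# A minimal, purely illustrative sketch of flagging suspicious chat logs.
# A real system would use a second model as the classifier; this keyword list is invented.
SUSPICIOUS_PATTERNS = [
    "write a phishing email",
    "personalized scam",
    "pretend to be their bank",
]

def flag_chat_log(messages: list[str]) -> bool:
    """Return True if a conversation looks like phishing assistance and should be reviewed."""
    text = " ".join(messages).lower()
    return any(pattern in text for pattern in SUSPICIOUS_PATTERNS)

# Example: this log would be flagged for human review.
print(flag_chat_log(["Gather info on this person and write a phishing email that would work on them."]))
```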

Comment by cousin_it on Preference Inversion · 2025-01-03T21:25:22.885Z · LW · GW

For every person saying "religion gave me a hangup about sex" there will be another who says "religion led me to marry younger" or "religion led me to have more kids in marriage". The right question is whether religion leads to a more anti-reproduction attitude on average, but I can't see how that can be true when religious people have higher fertility.

Comment by cousin_it on The Intelligence Curse · 2025-01-03T21:13:25.145Z · LW · GW

I've held this view for years and am even more pessimistic than you :-/

In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out.

Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won't be democracy.

Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant

It seems really hard to think of any examples of such tech.

Comment by cousin_it on Preference Inversion · 2025-01-03T17:42:01.257Z · LW · GW
Comment by cousin_it on Preference Inversion · 2025-01-03T00:38:08.273Z · LW · GW

But many do maintain an explicit approval hierarchy that ranks celibacy and sexual restraint above typical sexual behavior

I think we just disagree here. The Bible doesn't say married people shouldn't have sex, and no prominent Christians say that either. There are norms against nonmarital sex, and there are norms against priests having sex, but you're drawing a connection between these things and generalizing to all people, which doesn't sound right to me.

Comment by cousin_it on Preference Inversion · 2025-01-02T23:21:45.062Z · LW · GW

Yeah, I missed a big part of your point on that. But another part maybe I didn't? Your post started out talking about norms against nonmarital sex. Then you jumped from that to saying they're norms against reproduction, which doesn't sound right: religious people reproduce fine. And then you say (unless I'm missing something) that they're based on hypocrisy, enabling other people to not follow these norms, which also doesn't sound right.

Comment by cousin_it on Preference Inversion · 2025-01-02T23:18:52.924Z · LW · GW
Comment by cousin_it on Preference Inversion · 2025-01-02T21:33:05.268Z · LW · GW

I think this is wrong. First you say that celibacy would be pushed on lower-status people like peasants, then you say it would be pushed on higher-status people like warriors. But actually neither happens: it's not to the group's advantage (try to explain how making peasants or warriors celibate would benefit the group - you can't), and we don't find major religions doing it either; they are pro-fertility for almost all people. Celibacy of priests is an exception, but it's a small one, and your explanations don't work for it either.

Comment by cousin_it on Economic Post-ASI Transition · 2025-01-02T13:33:00.506Z · LW · GW

I think they meant that when people are afraid to lose their jobs, they spend less, leading to less demand for other people's work.

Comment by cousin_it on Zombies among us · 2024-12-31T20:13:49.367Z · LW · GW
Comment by cousin_it on The low Information Density of Eliezer Yudkowsky & LessWrong · 2024-12-30T23:35:19.549Z · LW · GW

I think for a certain time and demographic (which included me then), the wordiness and imagery actually helped. But we were all younger then, maybe smarter, and definitely more open. It doesn't work as much on me now.

Anyway, I'm not sure it needs to be rewritten today. The threat has become easier to see. Lots of people already ask themselves what jobs they'll have, what skills children should learn, how most people will live - given that we already treat our poor and homeless pretty badly. It's not the whole threat, but it's a lower-bound threat that feels alarming enough.

Comment by cousin_it on Some arguments against a land value tax · 2024-12-30T02:19:46.113Z · LW · GW

Sorry - I realized after commenting that I overstated this bit, and deleted it. But anyway yeah.

Comment by cousin_it on Some arguments against a land value tax · 2024-12-30T02:16:58.248Z · LW · GW

People pretty much already say this about gentrification and rent. Like that joke about shooting in the air a few times every morning to keep your rent low. Maybe under LVT homeowners will end up doing the same.

Comment by cousin_it on Some arguments against a land value tax · 2024-12-30T01:44:27.406Z · LW · GW

I think I have counterarguments to some of this.

First:

This disincentive to search for new ways to use land is intrinsic to the land value tax: since a landowner does not actually create the oil on their land, but merely discovers it, the oil would be part of the land’s “unimproved value”, which is inherently subject to taxation under the LVT.

In the comments to Bryan Caplan's post, Mark Wadsworth replies:

This is feeblest argument of all. I don’t even need logic to defeat this one, I can do this with hard facts. And as a matter of hard fact, most governments operate a fairly Georgist system with oil exploration and extraction, or just about any mining activities, i.e. they auction off licences to explore and extract.

The winning bid for the licence must, by definition, be approx. equal to the rental value of the site (or the rights to do certain things at the site). And the winning bid, if calculated correctly, will leave the company with a good profit on its operations in future, and as a matter of fact, most mining companies and most oil companies make profits, end of discussion, there is no disincentive for exploration at all.

Or do you think that when Western oil companies rock up in Saudi Arabia, that the Saudis don’t make them pay every cent for the value of the land/natural resources? The Western oil companies just get to keep the additional profits made by extracting, refining, shipping the stuff.

Second:

For instance, if a developer owns multiple adjacent parcels and decides to build housing or infrastructure on one of them, the value of the undeveloped parcels will rise due to their proximity to the improvements. As a result, the developer faces higher taxes on the remaining undeveloped land, making development less financially appealing in the first place.

Why do we want to make it easy for developers to hold onto unimproved land as it gets more valuable? Isn't it in society's interest that someone else gets that land and builds improvements on it?

Third:

Unlike professional developers or large corporations, individual landowners with sentimental ties to their property are not necessarily looking to maximize profit. Therefore, taxing the unimproved value of their land through an LVT would not necessarily compel them to sell or develop it. Instead, it might simply place an additional financial burden on individuals who already have strong personal reasons for holding onto their land, doing little to incentivize the creation of additional housing developments.

If society doesn't get to use that land, at least it'll get more tax money for it. And on the margin some land will be freed up.


All of that said, I think the real blocker to LVT is that the landowner lobby will never allow it, and will fight tooth and nail to repeal it if it does pass. The only way it could stick is if a sufficiently "red" government held power for a long time, but that's a big ask. Making construction easier is a smaller and more realistic ask; it would give people a lot of the benefits at a fraction of the cost.