Posts

I'm open for projects (sort of) 2024-04-18T18:05:01.395Z
A short dialogue on comparability of values 2023-12-20T14:08:29.650Z
Bounded surprise exam paradox 2023-06-26T08:37:47.582Z
Stop pushing the bus 2023-03-31T13:03:45.543Z
Aligned AI as a wrapper around an LLM 2023-03-25T15:58:41.361Z
Are extrapolation-based AIs alignable? 2023-03-24T15:55:07.236Z
Nonspecific discomfort 2021-09-04T14:15:22.636Z
Fixing the arbitrariness of game depth 2021-07-17T12:37:11.669Z
Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z
True numbers and fake numbers 2014-02-06T12:29:08.136Z
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z

Comments

Comment by cousin_it on "Why I Write" by George Orwell (1946) · 2024-04-25T20:32:41.679Z · LW · GW

Orwell is one of my personal heroes, 1984 was a transformative book to me, and I strongly recommend Homage to Catalonia as well.

That said, I'm not sure making theories of art is worth it. Even when great artists do it (Tolkien had a theory of art, and Oscar Wilde, and Flannery O'Connor, and almost every artist if you look close enough), it always seems to be the kind of theory which suits that artist and nobody else. Would advice like "good prose is like a windowpane" or "efface your own personality" improve the writing of, say, Hunter S. Thompson? Heck no, his writing is the opposite of that and charming for it! Maybe the only possible advice to an artist is to follow their talent, and advising anything more specific is as likely to hinder as help.

Comment by cousin_it on This is Water by David Foster Wallace · 2024-04-25T09:56:40.185Z · LW · GW

I think for good emotions the feel-it-completely thing happens naturally anyway.

Comment by cousin_it on This is Water by David Foster Wallace · 2024-04-25T09:10:39.149Z · LW · GW

To me it's less about thoughts and more about emotions. And not about doing it all the time, but only when I'm having some intense emotion and need to do something about it.

For example, let's say I'm angry about something. I imagine there's a knob in my mind: make the emotion stronger or weaker. (Or between feeling it less, and feeling it more.) What I usually do is turn the knob up. Try to feel the emotion more completely and in more detail, without trying to push any of it away. What usually happens next is the emotion kinda decides that it's been heard and goes away: a few minutes later I realize that whatever I was feeling is no longer as intense or urgent. Or I might even forget it entirely and find my mind thinking of something else.

It's counterintuitive but it's really how it works for me; been doing it for over a decade now. It's the closest thing to a mental cheat code that I know.

Comment by cousin_it on This is Water by David Foster Wallace · 2024-04-25T07:24:03.889Z · LW · GW

There's an amazing HN comment that I mention every time someone links to this essay. It says don't do what the essay says, you'll make yourself depressed. Instead do something a bit different, and maybe even the opposite.

Let's say for example you feel annoyed by the fat checkout lady. DFW advises you to step over your annoyance, imagine the checkout lady is caring for her sick husband, and so on. But that kind of approach to your own feelings will hurt you in the long run, and maybe even seriously hurt you. Instead, the right thing is to simply feel annoyed at the checkout lady. Let the feeling come and be heard. After it's heard, it'll be gone by itself soon enough.

Here's the whole comment, to save people the click:

DFW is perfect towards the end, when he talks about acceptance and awareness— the thesis ("This is water") is spot on. But the way he approaches it, as a question of choosing what to think, is fundamentally, tragically wrong.

The Mindfulness-Based Cognitive Therapy folks call that focusing on cognition rather than experience. It's the classic fallacy of beginning meditators, who believe the secret lies in choosing what to think, or in fact choosing not to think at all. It makes rational sense as a way to approach suffering: "Thinking this way is causing me to suffer. I must change my thinking so that the suffering stops."

In fact, the fundamental tenet of mindfulness is that this is impossible. Not even the most enlightened guru on this planet can not think of an elephant. You cannot choose what to think, cannot choose what to feel, cannot choose not to suffer.

Actually, that is not completely true. You can, through training over a period of time, teach yourself to feel nothing at all. We have a special word to describe these people: depressed.

The "trick" to both Buddhist mindfulness and MBCT, and the cure for depression if such a thing exists, lies in accepting that we are as powerless over our thoughts and emotions as we are over our circumstances. My mind, the "master" DFW talks about, is part of the water. If I am angry that an SUV cut me off, I must experience anger. If I'm disgusted by the fat woman in front of me in the supermarket, I must experience disgust. When I am joyful, I must experience joy, and when I suffer, I must experience suffering. There is no other option but death or madness— the quiet madness that pervades most peoples' lives as they suffer day in and day out in their frantic quest to avoid suffering.

Experience. Awareness. Acceptance. Never thought— you can't be mindful by thinking about mindfulness, it's an oxymoron. You have to just feel it.

There's something indescribably heartbreaking in hearing him come so close to finding the cure, to miss it only by a hair, knowing what happens next.

[Full disclosure: My mother is a psychiatrist who dabbles in MBCT. It cured her depression, and mine.]

And another comment from a different person making the same point:

Much of what DFW believed about the world, about himself, about the nature of reality, ran counter to his own mental wellbeing and ultimately his own survival. Of the psychotherapies with proven efficacy, all seek to inculcate a mode of thinking in stark contrast to Wallace's.

In this piece and others, Wallace encourages a mindset that appears to me to actively induce alienation in the pursuit of deeper truth. I believe that to be deeply maladaptive. A large proportion of his words in this piece are spent describing that his instinctive reaction to the world around him is one of disgust and disdain.

Rather than seeking to transmute those feelings into more neutral or positive ones, he seeks to elevate himself above what he sees as his natural perspective. Rather than sit in his car and enjoy the coolness of his A/C or the feeling of the wheel against his skin or the patterns the sunlight makes on his dash, he abstracts, he retreats into his mind and an imagined world of possibilities. He describes engaging with other people, but it's inside his head, it's intellectualised and profoundly distant. Rather than seeing the person in the SUV in front as merely another human and seeking to accept them unconditionally, he seeks a fictionalised narrative that renders them palatable to him.

He may have had some sort of underlying chemical or structural problem that caused his depression, but we have no real evidence for that, we have no real evidence that such things exist. What we do know is that patterns of cognition that he advocated run contrary to the basic tenets of the treatment for depression with the best evidence base - CBT and its variants.

Comment by cousin_it on cousin_it's Shortform · 2024-04-24T09:13:35.397Z · LW · GW

Wow, it's worse than I thought. Maybe the housing problem is "government-complete" and resists all lower level attempts to solve it.

Comment by cousin_it on AI Regulation is Unsafe · 2024-04-24T09:05:40.935Z · LW · GW
Comment by cousin_it on Let's Design A School, Part 1 · 2024-04-24T08:50:51.224Z · LW · GW

What if you build your school-as-social-service, and then one day find that the kids are selling drugs to each other inside the school?

Or that the kids are constantly interfering with each other so much that the minority who want to follow their interests can't?

I think any theory of school that doesn't mention discipline is a theory of dry water. What powers and duties would the 1-supervisor-per-12-kids have? Can they remove disruptive kids from rooms? From the building entirely? Give detentions?

Comment by cousin_it on Examples of Highly Counterfactual Discoveries? · 2024-04-24T08:31:33.776Z · LW · GW

I sometimes had this feeling from Conway's work; in particular, combinatorial game theory and surreal numbers feel to me closer to mathematical invention than mathematical discovery. These kinds of things are also often "leaf nodes" on the tree of knowledge, not leading to many follow-up discoveries, so you could say their counterfactual impact is low for that reason.

In engineering, the best example I know is vulcanization of rubber. It has had a huge impact on today's world, but Goodyear developed it by working alone for decades, when nobody else was looking in that direction.

Comment by cousin_it on AI Regulation is Unsafe · 2024-04-23T10:53:39.462Z · LW · GW

You're saying governments can't address existential risk, because they only care about what happens within their borders and term limits. And therefore we should entrust existential risk to firms, which only care about their own profit in the next quarter?!

Comment by cousin_it on Priors and Prejudice · 2024-04-23T07:32:51.108Z · LW · GW

Yeah, the trapped priors thing is pretty worrying to me too. But I'm confused about the opposing interventions thing. Do charter cities, or labor unions, rely on donations that much? Is it really so common for donations to cancel each other out? I guess advocacy donations (for example, pro-life vs pro-choice) do cancel each other out, so maybe we could all agree that advocacy isn't charity.

Comment by cousin_it on cousin_it's Shortform · 2024-04-22T09:45:11.134Z · LW · GW

If the housing crisis is caused by low-density rich neighborhoods blocking redevelopment of themselves (as seems to be the consensus on the internet now), could it be solved by developers buying out an entire neighborhood or even town in one swoop? It'd require a ton of money, but redevelopment would bring even more money, so it could be win-win for everyone. Does it not happen only due to coordination difficulties?

Comment by cousin_it on Security amplification · 2024-04-22T09:18:56.418Z · LW · GW

I don't know about others, but to me these approaches sound like "build a bureaucracy from many well-behaved agents", and it seems to me that such a bureaucracy wouldn't necessarily behave well.

Comment by cousin_it on Express interest in an "FHI of the West" · 2024-04-20T16:04:05.484Z · LW · GW

I mean, one of the participants wrote: "getting comments that engage with what I write and offer a different, interesting perspective can almost be more rewarding than money". Others asked us for feedback on their non-winning entries. It feels to me that interaction between more and less experienced folks can be really desirable and useful for both, as long as it's organized to stay within a certain "lane".

Comment by cousin_it on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-20T09:46:38.108Z · LW · GW

I have maybe a naive question. What information is needed to find the MSP image within the neural network? Do we have to know the HMM to begin with? Or could it be feasible someday to inspect a neural network, find something that looks like an MSP image, and infer the HMM from it?

Comment by cousin_it on Should we maximize the Geometric Expectation of Utility? · 2024-04-19T20:19:17.375Z · LW · GW

For example, if there were certain states of the world which I wanted to avoid at all costs (and thus violate the continuity axiom), I could assign zero utility to it and use geometric averaging. I couldn’t do this with arithmetic averaging and any finite utilities.

Well, you can't have some states as "avoid at all costs" and others as "achieve at all costs", because having them in the same lottery leads to nonsense, no matter what averaging you use. And allowing only one of the two seems arbitrary. So it seems cleanest to disallow both.
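
(Concretely: put goodness 0 on the "avoid at all costs" outcome and infinity on the "achieve at all costs" outcome. A lottery mixing the two has a geometric mean of the form 0^p · ∞^q, which is indeterminate, and arithmetic averaging with utilities -∞ and +∞ gives the equally meaningless -∞ + ∞.)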

If I wanted to program a robot which sometimes preferred lotteries to any definite outcome, I wouldn’t be able to program the robot using arithmetic averaging over goodness values.

But geometric averaging wouldn't let you do that either, or am I missing something?

Comment by cousin_it on Express interest in an "FHI of the West" · 2024-04-19T18:53:02.904Z · LW · GW

Sent the form.

What do you think about combining teaching and research? Similar to the Humboldt idea of the university, but it wouldn't have to be as official or large-scale.

When I was studying math in Moscow long ago, I was attending MSU by day, and in the evenings sometimes went to the "Independent University", which wasn't really a university. Just a volunteer-run and donation-funded place with some known mathematicians teaching free classes on advanced topics for anyone willing to attend. I think they liked having students to talk with about their work. Then much later, when we ran the AI Alignment Prize here on LW, I also noticed that the prize by itself wasn't too important; the interactions between newcomers and old-timers were a big part of what drove the thing.

So maybe if you're starting an organization now, it could be worth thinking about this kind of generational mixing, research/teaching/seminars/whatnot. Though there isn't much of a set curriculum on AI alignment now, and teaching AI capability is maybe not the best idea :-)

Comment by cousin_it on I'm open for projects (sort of) · 2024-04-19T13:41:47.829Z · LW · GW

Yeah, that might be a good idea in case any rich employers stumble on this :-)

In terms of goals, I like making something, having many people use it, and getting paid for it. I'm not as motivated by meaning, probably different from most EAs in that sense.

In terms of skillset, I'd say I'm a frontend-focused generalist. The most fun programming experience in my life was when I built an online map just by myself - the rendering of map data to png tiles, the serving backend, the javascript for dragging and zooming, there weren't many libraries back then - and then it got released and got hundreds of thousands of users. The second most fun was when I made the game - coming up with the idea, iterating on the mechanics, graphic design, audio programming, writing text, packaging for web and mobile, the whole thing - and it got quite popular too. So that's the prototypical good job for me.

Comment by cousin_it on I'm open for projects (sort of) · 2024-04-19T13:21:20.347Z · LW · GW

I don't really understand your approach yet. Let's call your decision theory CLDT. You say counterfactuals in CLDT should correspond to consistent universes. For example, the counterfactual "what if a CLDT agent two-boxed in Newcomb's problem" should correspond to a consistent universe where a CLDT agent two-boxes on Newcomb's problem. Can you describe that universe in more detail?

Comment by cousin_it on I'm open for projects (sort of) · 2024-04-19T09:19:08.353Z · LW · GW

Done! I didn't do it at first because I thought it'd have to be in person only, but then clicked around in the form and found that remote is also possible.

Comment by cousin_it on I'm open for projects (sort of) · 2024-04-19T07:44:11.435Z · LW · GW

Besides math and programming, what are your other skills and interests?

Playing and composing music is the main one.

I have an idea of a puzzle game, not sure if it would be good or bad, I haven’t done even a prototype. So if anyone is interested, feel free to try

Yeah, you're missing out on all the fun in game-making :-) You must build the prototype yourself, play with it yourself, tweak the mechanics, and at some moment the stars will align and something will just work and you'll know it. There's no way anyone else can do it but you.

Comment by cousin_it on When is a mind me? · 2024-04-18T21:14:55.641Z · LW · GW

Yeah. My point was, we can't even be sure which behavior-preserving optimizations (of the kind done by optimizing compilers, say) will preserve consciousness. It's worrying because these optimizations can happen innocuously, e.g. when your upload gets migrated to a newer CPU with fancier heuristics. And yeah, when self-modification comes into the picture, it gets even worse.

Comment by cousin_it on When is a mind me? · 2024-04-18T13:18:31.217Z · LW · GW

I think there's a pretty strong argument to be more wary about uploading. It's been stated a few times on LW, originally by Wei Dai if I remember right, but maybe worth restating here.

Imagine the uploading goes according to plan, the map of your neurons and connections has been copied into a computer, and simulating it leads to a person who talks, walks in a simulated world, and answers questions about their consciousness. But imagine also that the upload is being run on a computer that can apply optimizations on the fly. For example, it could watch the input-output behavior of some NN fragment, learn a smaller and faster NN fragment with the same input-output behavior, and substitute it for the original. Or it could skip executing branches that don't make a difference to behavior at a given time.

Where do we draw the line on which optimizations to allow? It seems we cannot allow all behavior-preserving optimizations, because that might lead to a kind of LLM that dutifully says "I'm conscious" without actually being so. (The p-zombie argument doesn't apply here, because there is indeed a causal chain from human consciousness to an LLM saying "I'm conscious" - which goes through the LLM's training data.) But we must allow some optimizations, because today's computers already apply many optimizations, and compilers even more so. For example, skipping unused branches is pretty standard. The company doing your uploading might not even tell you about the optimizations they use, given that the result will behave just like you anyway, and the 10x speedup is profitable. The result could be a kind of apocalypse by optimization, with nobody noticing. A bit unsettling, no?
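
To make the kind of optimization I mean concrete, here's a toy sketch (purely illustrative function names, not from any real uploading or compiler pipeline): two versions of a "simulation step" with identical input-output behavior, which an output-checking optimizer would happily treat as interchangeable.

```python
def step_original(state, stimulus):
    # Inner computation whose result never reaches the output at this step.
    # Is this the part doing the "experiencing"? From the outside we can't tell.
    inner_monologue = sum(x * stimulus for x in state)
    return max(state) + stimulus

def step_optimized(state, stimulus):
    # Dead-branch elimination: the unused computation is skipped entirely.
    return max(state) + stimulus

# Identical input-output behavior, so the substitution looks perfectly safe.
assert step_original([1, 2, 3], 5) == step_optimized([1, 2, 3], 5)
```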

The key point of this argument isn't just that some optimizations are dangerous, but that we have no principled way of telling which ones are. We thought we had philosophical clarity with "just upload all my neurons and connections and then run them on a computer", but that doesn't seem enough to answer questions like this. I think it needs new ideas.

Comment by cousin_it on Wei Dai's Shortform · 2024-04-18T08:41:53.668Z · LW · GW

Yeah, that seems to agree with my pessimistic view - that we are selfish animals, except we have culture, and some cultures accidentally contain altruism. So the answer to your question "are humans fundamentally good or evil?" is "humans are fundamentally evil, and only accidentally sometimes good".

Comment by cousin_it on Wei Dai's Shortform · 2024-04-17T12:58:01.938Z · LW · GW

I don't think altruism is evolutionarily connected to power as you describe. Caesar didn't come to power by being better at altruism, but by being better at coordinating violence. For a more general example, the Greek and other myths don't give many examples of compassion (though they give many other human values), it seems the modern form of compassion only appeared with Jesus, which is too recent for any evolutionary explanation.

So it's possible that what little we have of altruism and other nice things is merely a set of lucky memes. Not even a necessary adaptation, but more like a cultural peacock's tail, which appeared randomly and might fix itself or not. While our fundamental nature remains that of other living creatures, who eat each other without caring much.

Comment by cousin_it on Should we maximize the Geometric Expectation of Utility? · 2024-04-17T12:12:38.847Z · LW · GW

Guilty as charged - I did read your post as arguing in favor of geometric averaging, when it really wasn't. Sorry.

The main point still seems strange to me, though. Suppose you were programming a robot to act on my behalf, and you asked me to write out some goodness values for outcomes, to program them into the robot. Then before writing out the goodnesses I'd be sure to ask you: which method would the robot use for evaluating lotteries over outcomes? Depending on that, the goodness values I'd write for you (to achieve the desired behavior from the robot) would be very different.

To me it suggests that the goodness values and the averaging method are not truly independent degrees of freedom. So it's simpler to nail down the averaging method, to use ordinary arithmetic averaging, and then assign the goodness values. We don't lose any ability to describe behavior (as long as it's consistent), and we remain with only the degree of freedom that actually matters.
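
(The identity behind this: for a lottery with probabilities p_i over outcomes with goodnesses g_i > 0, the geometric expectation is prod_i g_i^p_i = exp(sum_i p_i log g_i). So maximizing the geometric expectation of g is exactly maximizing the ordinary expectation of log g, and any consistent behavior you can describe with one convention you can describe with the other by relabeling the goodness values.)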

Comment by cousin_it on Should we maximize the Geometric Expectation of Utility? · 2024-04-17T11:33:37.251Z · LW · GW

That makes me even more confused. Are you arguing that we ought to (1) assign some "goodness" values to outcomes, and then (2) maximize the geometric expectation of "goodness" resulting from our actions? But then wouldn't any argument for (2) depend on the details of how (1) is done? For example, if "goodnesses" were logarithmic in the first place, then wouldn't you want to use arithmetic averaging? Is there some description of how we should assign goodnesses in (1) without the kind of firm ground that VNM gives?

Comment by cousin_it on Should we maximize the Geometric Expectation of Utility? · 2024-04-17T10:48:32.884Z · LW · GW

This seems misguided.

The normal VNM approach is to start with an agent whose behavior satisfies some common-sense conditions: it can't be money-pumped and so on. From that we can prove that the agent behaves as if maximizing the expectation of some function on outcomes, which we call the "utility function". That function is not unique: you can apply an affine transform and obtain another utility function describing the same behavior. The behavior is what's real; utility functions are merely our descriptions of it.
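
(To spell out the affine point: if V = aU + b with a > 0, then for any lottery E[V] = a·E[U] + b, so V and U rank all lotteries identically and describe exactly the same behavior.)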

From that perspective, it makes no sense to talk about "maximizing the geometric expectation of utility". Utility is, by definition, the function whose (ordinary, not geometric) expectation is maximized by your behavior. That's the whole reason for introducing the concept of utility.

The mistake is a bit similar to how people talk about "caring about other people's utility, not just your own". You cannot care about other people's utility at the expense of your own; it's a misuse of terms. If your behavior is consistent, then the function that describes it is called "your utility".

Comment by cousin_it on on the dollar-yen exchange rate · 2024-04-09T08:14:42.210Z · LW · GW

I thought employers (and more generally the elite, who are net buyers of labor) would be happy with a remote work revolution. But they don't seem to be, hence my confusion.

Comment by cousin_it on on the dollar-yen exchange rate · 2024-04-08T16:52:20.479Z · LW · GW

Your post mentions what seems to me the biggest economic mystery of all: why didn't outsourcing, offshoring and remote work take over everything? Why do 1st world countries keep having any non-service jobs at all? Why does Silicon Valley keep hiring programmers who live in Silicon Valley, instead of equally capable and much cheaper programmers available remotely? There are no laws against that, so is it just inertia? Would slightly better remote work tech lead to a complete overturn of the world labor market?

Comment by cousin_it on Evolution did a surprising good job at aligning humans...to social status · 2024-03-23T09:45:29.644Z · LW · GW

This seems like good news about alignment.

To me it sounds like alignment will do a good job of aligning AIs to money. Which might be ok in the short run, but bad in the longer run.

Comment by cousin_it on On green · 2024-03-21T19:59:53.104Z · LW · GW
Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-17T17:37:09.711Z · LW · GW
Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-14T11:15:50.967Z · LW · GW

Sure, but there's an important economic subtlety here: to the extent that work is goal-aligned, it doesn't need to be paid. You could do it independently, or as partners, or something. Whereas every hour worked doing the employer's bidding, and every dollar paid for it, must be due to goals that aren't aligned or are differently weighted (for example, because the worker cares comparatively more about feeding their family). So it makes more sense to me to view every employment relationship, to the extent it exists, as transactional: the employer wants one thing, the worker another, and they exchange labor for money. I think it's a simpler and more grounded way to think about work, at least when you're a worker.

Comment by cousin_it on What could a policy banning AGI look like? · 2024-03-13T16:50:05.477Z · LW · GW

I think all AI research makes AGI easier, so "non-AGI AI research" might not be a thing. And even if I'm wrong about that, it also seems to me that most harms of AGI could come from tool AI + humans just as well. So I'm not sure the question is right. Tbh I'd just stop most AI work.

Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-13T15:44:28.048Z · LW · GW

Interesting, your comment follows the frame of the OP, rather than the economic frame that I proposed. In the economic frame, it almost doesn't matter whether you ban sexual relations at work or not. If the labor market is a seller's market, workers will just leave bad employers and flock to better ones, and the problem will solve itself. And if the labor market is a buyer's market, employers will find a way to extract X value from workers, either by extorting sex or in other ways - you're never going to plug all the loopholes. The buyer's market vs seller's market distinction is all that matters, and all that's worth changing. The great success of the union movement was because it actually shifted one side of the market, forcing the other side to shift as well.

Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-13T09:52:30.253Z · LW · GW

I think this is a good topic to discuss, and the post has many good insights. But I kinda see the whole topic from a different angle. Worker well-being can't depend on the goodness of employers, because employers are gonna be bad if they can get away with it. The true cause of worker well-being is supply/demand changes that favor workers. Examples: 1) unionizing was a supply control which led to 9-5 and the weekend, 2) big tech jobs became nice because good engineers were rare, 3) UBI would lead to fewer people seeking jobs and therefore make employers behave better.

To me these examples show that, apart from market luck, the way to improve worker well-being is coordinated action. So I mostly agree with banning 80 hour workweeks, regulating gig work, and the like. We need more such restrictions, not less. The 32-hour work week seems like an especially good proposal: it would both make people spend less time at work, and make jobs easier to find. (And also make people much happier, as trials have shown.)

Comment by cousin_it on What is progress? · 2024-03-10T13:20:32.128Z · LW · GW

I think the main question is how to connect technological progress (which is real) to moral progress (which is debatable). People didn't expect that technological progress would lead to factory farming or WMDs, but here we are.

Comment by cousin_it on Movie posters · 2024-03-07T00:00:49.558Z · LW · GW
Comment by cousin_it on Many arguments for AI x-risk are wrong · 2024-03-05T11:46:39.790Z · LW · GW
  1. I’m worried about centralization of power and wealth in opaque non-human decision-making systems, and those who own the systems.

This has been my main worry for the past few years, and to me it counts as "doom" too. AIs and AI companies playing by legal and market rules (and changing these rules by lobbying, which is also legal) might well lead to most humans having no resources to survive.

Comment by cousin_it on Housing Roundup #7 · 2024-03-04T23:53:16.827Z · LW · GW
Comment by cousin_it on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-03-03T12:08:30.998Z · LW · GW
Comment by cousin_it on Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles · 2024-03-03T11:40:43.528Z · LW · GW

I feel like instead of flipping out you could just say "eh, I don't agree with this community's views on gender, I'm more essentialist overall". You don't actually have to convince anyone or get convinced by them. Individual freedom and peaceful coexistence is fine. The norm that "Bayesians can't agree to disagree" should burn in a fire.

Comment by cousin_it on Adding Sensors to Mandolin? · 2024-03-01T08:18:57.270Z · LW · GW
Comment by cousin_it on Can we get an AI to do our alignment homework for us? · 2024-02-27T13:41:39.440Z · LW · GW

I'm no longer sure the question makes sense, and to the extent it makes sense I'm pessimistic. Things probably won't look like one AI taking over everything, but more like an AI economy that's misaligned as a whole, gradually eclipsing the human economy. We're already seeing the first steps: the internet is filling up with AI generated crap, jobs are being lost to AI, and AI companies aren't doing anything to mitigate either of these things. This looks like a plausible picture of the future: as the AI economy grows, the money-hungry part of it will continue being stronger than the human-aligned part. So it's only a matter of time before most humans are outbid / manipulated out of most resources by AIs playing the game of money with each other.

Comment by cousin_it on Ideological Bayesians · 2024-02-26T12:23:50.352Z · LW · GW

Amazing post. I already knew that filtered evidence can lead people astray, and that many disagreements are about relative importance of things, but your post really made everything "click" for me. Yes, of course if what people look at is correlated with what they see, that will lead to polarization. And even if people start out equally likely to look at X or Y, but seeing X makes them marginally more likely to look at X in the future rather than Y, then some people will randomly polarize toward X and others toward Y.
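
Here's a toy simulation of that last dynamic (my own illustration, not from the post): each agent starts indifferent between X and Y, but every look at a topic makes it marginally more likely to look at that topic again. Identical agents end up with very different long-run attention splits.

```python
import random

def simulate_agent(steps=10_000):
    looks_x, looks_y = 1, 1  # start out perfectly symmetric between X and Y
    for _ in range(steps):
        # Looking at X makes the agent marginally more likely to look at X again.
        if random.random() < looks_x / (looks_x + looks_y):
            looks_x += 1
        else:
            looks_y += 1
    return looks_x / (looks_x + looks_y)

random.seed(0)
print([round(simulate_agent(), 2) for _ in range(10)])
# The long-run X-fractions spread out over (0, 1) rather than clustering at 0.5:
# this self-reinforcing rule is a Polya urn, whose limiting fraction is random.
```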

Comment by cousin_it on Why you, personally, should want a larger human population · 2024-02-24T15:35:06.044Z · LW · GW

I think we're using at most 1% of the potential of geniuses we already have. So improving that usage can lead to 100x improvement in everything, without the worries associated with 100x population. And it can be done much faster than waiting for people to be born. (If AI doesn't make it all irrelevant soon, which it probably will.)

Comment by cousin_it on The Byronic Hero Always Loses · 2024-02-22T19:23:05.393Z · LW · GW
Comment by cousin_it on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-19T12:36:10.239Z · LW · GW

I left in 2011. My advice is to leave soon. And not even for reasons of ethics, business, or comfort. More like, for the spirit. Even if Russia is quite comfortable now, in broad strokes the situation is this: you're young, and the curtain is slowly closing. When you're older, would you rather be the older person who stayed in, or the person who took a chance on the world?

Comment by cousin_it on "What if we could redesign society from scratch? The promise of charter cities." [Rational Animations video] · 2024-02-18T16:09:24.651Z · LW · GW

Unfortunately, the game of power is about ruling a territory, not improving it. It took me many years to internalize this idea. "Surely the elite would want to improve things?" No. Putin could improve Russia in many ways, but these ways would weaken his rule, so he didn't. That's why projects like Georgism or charter cities keep failing: they weaken the relative position of the elite, even if they plausibly make life better for everyone. Such projects can only succeed if implemented by a whole country, which requires a revolution or at least a popular movement. It's possible - it's how democracy was achieved - but let's be clear on what it takes.

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-13T10:50:44.111Z · LW · GW

Not sure I understand. My question was, what kind of probability theory can support things like "P(X|Y) is defined but P(Y) isn't". The snippet you give doesn't seem relevant to that, as it assumes both values are defined.