Posts

A short dialogue on comparability of values 2023-12-20T14:08:29.650Z
Bounded surprise exam paradox 2023-06-26T08:37:47.582Z
Stop pushing the bus 2023-03-31T13:03:45.543Z
Aligned AI as a wrapper around an LLM 2023-03-25T15:58:41.361Z
Are extrapolation-based AIs alignable? 2023-03-24T15:55:07.236Z
Nonspecific discomfort 2021-09-04T14:15:22.636Z
Fixing the arbitrariness of game depth 2021-07-17T12:37:11.669Z
Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z
True numbers and fake numbers 2014-02-06T12:29:08.136Z
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z

Comments

Comment by cousin_it on on the dollar-yen exchange rate · 2024-04-09T08:14:42.210Z · LW · GW

I thought employers (and more generally the elite, who are net buyers of labor) would be happy with a remote work revolution. But they don't seem to be, hence my confusion.

Comment by cousin_it on on the dollar-yen exchange rate · 2024-04-08T16:52:20.479Z · LW · GW

Your post mentions what seems to me the biggest economic mystery of all: why didn't outsourcing, offshoring and remote work take over everything? Why do 1st world countries keep having any non-service jobs at all? Why does Silicon Valley keep hiring programmers who live in Silicon Valley, instead of equally capable and much cheaper programmers available remotely? There are no laws against that, so is it just inertia? Would slightly better remote work tech lead to a complete overturn of the world labor market?

Comment by cousin_it on Evolution did a surprising good job at aligning humans...to social status · 2024-03-23T09:45:29.644Z · LW · GW

This seems like good news about alignment.

To me it sounds like alignment will do a good job of aligning AIs to money. Which might be ok in the short run, but bad in the longer run.

Comment by cousin_it on On green · 2024-03-21T19:59:53.104Z · LW · GW
Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-17T17:37:09.711Z · LW · GW
Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-14T11:15:50.967Z · LW · GW

Sure, but there's an important economic subtlety here: to the extent that work is goal-aligned, it doesn't need to be paid. You could do it independently, or as partners, or something. Whereas every hour worked doing the employer's bidding, and every dollar paid for it, must be due to goals that aren't aligned or are differently weighted (for example, because the worker cares comparatively more about feeding their family). So it makes more sense to me to view every employment relationship, to the extent it exists, as transactional: the employer wants one thing, the worker another, and they exchange labor for money. I think it's a simpler and more grounded way to think about work, at least when you're a worker.

Comment by cousin_it on What could a policy banning AGI look like? · 2024-03-13T16:50:05.477Z · LW · GW

I think all AI research makes AGI easier, so "non-AGI AI research" might not be a thing. And even if I'm wrong about that, it also seems to me that most harms of AGI could come from tool AI + humans just as well. So I'm not sure the question is right. Tbh I'd just stop most AI work.

Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-13T15:44:28.048Z · LW · GW

Interesting, your comment follows the frame of the OP, rather than the economic frame that I proposed. In the economic frame, it almost doesn't matter whether you ban sexual relations at work or not. If the labor market is a seller's market, workers will just leave bad employers and flock to better ones, and the problem will solve itself. And if the labor market is a buyer's market, employers will find a way to extract X value from workers, either by extorting sex or by other ways - you're never going to plug all the loopholes. The buyer's market vs seller's market distinction is all that matters, and all that's worth changing. The great success of the union movement was because it actually shifted one side of the market, forcing the other side to shift as well.

Comment by cousin_it on Jobs, Relationships, and Other Cults · 2024-03-13T09:52:30.253Z · LW · GW

I think this is a good topic to discuss, and the post has many good insights. But I kinda see the whole topic from a different angle. Worker well-being can't depend on the goodness of employers, because employers gonna be bad if they can get away with it. The true cause of worker well-being is supply/demand changes that favor workers. Examples: 1) unionizing was a supply control which led to 9-5 and the weekend, 2) big tech jobs became nice because good engineers were rare, 3) UBI would lead to fewer people seeking jobs and therefore make employers behave better.

To me these examples show that, apart from market luck, the way to improve worker well-being is coordinated action. So I mostly agree with banning 80-hour workweeks, regulating gig work, and the like. We need more such restrictions, not fewer. The 32-hour work week seems like an especially good proposal: it would both make people spend less time at work, and make jobs easier to find. (And also make people much happier, as trials have shown.)

Comment by cousin_it on What is progress? · 2024-03-10T13:20:32.128Z · LW · GW

I think the main question is how to connect technological progress (which is real) to moral progress (which is debatable). People didn't expect that technological progress would lead to factory farming or WMDs, but here we are.

Comment by cousin_it on Movie posters · 2024-03-07T00:00:49.558Z · LW · GW
Comment by cousin_it on Many arguments for AI x-risk are wrong · 2024-03-05T11:46:39.790Z · LW · GW
  1. I’m worried about centralization of power and wealth in opaque non-human decision-making systems, and those who own the systems.

This has been my main worry for the past few years, and to me it counts as "doom" too. AIs and AI companies playing by legal and market rules (and changing these rules by lobbying, which is also legal) might well lead to most humans having no resources to survive.

Comment by cousin_it on Housing Roundup #7 · 2024-03-04T23:53:16.827Z · LW · GW
Comment by cousin_it on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-03-03T12:08:30.998Z · LW · GW
Comment by cousin_it on Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles · 2024-03-03T11:40:43.528Z · LW · GW

I feel like instead of flipping out you could just say "eh, I don't agree with this community's views on gender, I'm more essentialist overall". You don't actually have to convince anyone or get convinced by them. Individual freedom and peaceful coexistence is fine. The norm that "Bayesians can't agree to disagree" should burn in a fire.

Comment by cousin_it on Adding Sensors to Mandolin? · 2024-03-01T08:18:57.270Z · LW · GW
Comment by cousin_it on Can we get an AI to do our alignment homework for us? · 2024-02-27T13:41:39.440Z · LW · GW

I'm no longer sure the question makes sense, and to the extent it makes sense I'm pessimistic. Things probably won't look like one AI taking over everything, but more like an AI economy that's misaligned as a whole, gradually eclipsing the human economy. We're already seeing the first steps: the internet is filling up with AI-generated crap, jobs are being lost to AI, and AI companies aren't doing anything to mitigate either of these things. This looks like a plausible picture of the future: as the AI economy grows, the money-hungry part of it will continue being stronger than the human-aligned part. So it's only a matter of time before most humans are outbid / manipulated out of most resources by AIs playing the game of money with each other.

Comment by cousin_it on Ideological Bayesians · 2024-02-26T12:23:50.352Z · LW · GW

Amazing post. I already knew that filtered evidence can lead people astray, and that many disagreements are about relative importance of things, but your post really made everything "click" for me. Yes, of course if what people look at is correlated with what they see, that will lead to polarization. And even if people start out equally likely to look at X or Y, but seeing X makes them marginally more likely to look at X in the future rather than Y, then some people will randomly polarize toward X and others toward Y.
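
As a toy illustration of that last mechanism (my own sketch with made-up parameters, not anything from the post): a Pólya-urn-style process where each look at X or Y slightly boosts the chance of looking at the same side next time. Identical agents scatter, some ending up polarized toward X and others toward Y:

```python
import random

def polarization_run(steps=10000, boost=1.0):
    # One agent, initially indifferent between looking at X and Y.
    # Each observation reinforces the side just looked at (Polya urn).
    x_weight = y_weight = 1.0
    for _ in range(steps):
        if random.random() < x_weight / (x_weight + y_weight):
            x_weight += boost  # looked at X, now marginally more X-leaning
        else:
            y_weight += boost
    return x_weight / (x_weight + y_weight)

random.seed(0)
leanings = sorted(round(polarization_run(), 2) for _ in range(10))
print(leanings)  # same starting point, very different final leanings
```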

Comment by cousin_it on Why you, personally, should want a larger human population · 2024-02-24T15:35:06.044Z · LW · GW

I think we're using at most 1% of the potential of geniuses we already have. So improving that usage can lead to 100x improvement in everything, without the worries associated with 100x population. And it can be done much faster than waiting for people to be born. (If AI doesn't make it all irrelevant soon, which it probably will.)

Comment by cousin_it on The Byronic Hero Always Loses · 2024-02-22T19:23:05.393Z · LW · GW
Comment by cousin_it on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-19T12:36:10.239Z · LW · GW

I left in 2011. My advice is to leave soon. And not even for reasons of ethics, business, or comfort. More like, for the spirit. Even if Russia is quite comfortable now, in broad strokes the situation is this: you're young, and the curtain is slowly closing. When you're older, would you rather be the older person who stayed in, or the person who took a chance on the world?

Comment by cousin_it on "What if we could redesign society from scratch? The promise of charter cities." [Rational Animations video] · 2024-02-18T16:09:24.651Z · LW · GW

Unfortunately, the game of power is about ruling a territory, not improving it. It took me many years to internalize this idea. "Surely the elite would want to improve things?" No. Putin could improve Russia in many ways, but these ways would weaken his rule, so he didn't. That's why projects like Georgism or charter cities keep failing: they weaken the relative position of the elite, even if they plausibly make life better for everyone. Such projects can only succeed if implemented by a whole country, which requires a revolution or at least a popular movement. It's possible - it's how democracy was achieved - but let's be clear on what it takes.

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-13T10:50:44.111Z · LW · GW

Not sure I understand. My question was, what kind of probability theory can support things like "P(X|Y) is defined but P(Y) isn't". The snippet you give doesn't seem relevant to that, as it assumes both values are defined.

Comment by cousin_it on Drone Wars Endgame · 2024-02-11T22:38:05.868Z · LW · GW

I think you're describing a kind of robotic tank, which would be useful for many other things as well, not just clearing mines. But designing a robotic tank that can't be disabled by an ATGM (some modern mines are already ATGMs waiting to fire) seems like a tall order to me. Especially given that ATGM tech won't stand still either.

Comment by cousin_it on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-11T22:16:27.351Z · LW · GW

OpenAI has already been the biggest contributor to accelerating the AI race; investing in chips is just another step in the same direction. I'm not sure why people keep assuming Altman is optimizing for safety. Sure, he has talked about safety, but it's very common for people to pay lip service to something while doing the opposite. I'm not surprised by it and nobody should be surprised by it. Can we just accept already that OpenAI is going full speed in a bad direction, and start thinking about what we can/should do about it?

Comment by cousin_it on Updatelessness doesn't solve most problems · 2024-02-09T01:22:00.347Z · LW · GW

I think the problem is not about updatelessness. It's more like, what people want from decision theory is fundamentally unachievable, except in very simple situations ("single player extensive-form games" was my take). In more complex situations, game theory becomes such a fundamental roadblock that we're better off accepting it; accepting that multiplayer won't reduce to single player no matter how much we try.

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-02T09:08:54.307Z · LW · GW

If you say things like "P(X|Y) is defined but P(Y) isn't", doesn't that call for a reformulation of all probability theory? Like, if I take the interpretation of probability theory based on sigma-algebras (which is quite popular), then P(Y) gotta be defined, no way around it. The very definition of P(X|Y) depends on P(X∧Y) and P(Y). You can say "let's kick out this leg from this table", but the math tells me pretty insistently that the table can't stand without that particular leg. Or at least, if there's a version of probability theory where P(Y) can be undefined but P(X|Y) defined, I'd want to see more details about that theory and how it doesn't trip over itself. Does that make sense?
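
To spell out which leg I mean: in the sigma-algebra formulation the conditional is itself defined as a ratio,

P(X|Y) = P(X∧Y) / P(Y),

which only makes sense when P(Y) is defined and nonzero.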

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-02T08:24:30.599Z · LW · GW

Hmm. But in the envelope experiment, once Alice commits to a decision (e.g. choose A), her probabilities are well-defined. So in Sleeping Beauty, if we make it so the day is automatically disclosed to Alice at 5pm let's say, it seems like her probabilities about it should be well-defined from the get go. Or at least, the envelope experiment doesn't seem to shed light why they should be undefined. Am I missing something?

Comment by cousin_it on Drone Wars Endgame · 2024-02-01T21:16:39.867Z · LW · GW

The goal is for them to take over unlimited land territory

I'm not sure that goal is achievable. When the drones you're describing become mature, another pretty horrible technology might become mature as well: smart mines and remote mine-laying systems. A smart mine is cheaper than a drone (because it sits still instead of flying around), much easier to hide and harder to detect (same reason), can see stuff and talk to its peers just like drones can, and can be deployed at a distance in large numbers.

So that's the picture I'm imagining. Thousands of mine-laying shells burst over a territory and make it un-takeable, a kind of hostile forest. Your drones will fly over it and see nothing. But the moment your people or vehicles enter the territory, something jumps out of the ground 50 meters away and they're dead. Or a column of your troops enters, the mines wait and then kill them all at once. Stuff like that.

Not sure there's any real counter to this. Even in peacetime, removing unexploded dumb bombs and mines from long-past wars (e.g. in Laos) takes more time and money than laying them in the first place. And if the mines fight back, the task of demining probably becomes unrealistic altogether. Especially as the defender can just keep dropping in more mines.

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-01T16:40:47.384Z · LW · GW

And if you want to talk specifically about “this awakening is happening in the first day of the experiment”, then such probability is undefined for the Sleeping Beauty setting.

Yeah, I don't know if "undefined" is a good answer.

To be fair, there are some decision-theoretic situations where "undefined" is a good answer. For example, let's say Alice wakes up with amnesia on 10 consecutive days, and each day she's presented with a choice of envelope A or envelope B, one of which contains money. And she knows that whichever envelope she chooses on day 1, the experimenter will put money in the other envelope on days 2-10. This case is truly undefined: the contents of the envelopes on the desk in front of Alice are eerily dependent on how Alice will make the choice. For example, if she always chooses envelope A, then she should believe that the money is in envelope A with probability 10% and in B with probability 90%. But she can't use that knowledge to say "oh I'll choose B then", because that'll change the probabilities again.
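
Here's a minimal simulation of that setup (a sketch; the envelope contents on day 1 aren't pinned down above, so I'm assuming day 1's money matches her choice, which is what the 10%/90% figures imply):

```python
def run_experiment(choose):
    # choose: Alice's (memoryless) policy, mapping day -> 'A' or 'B'.
    # Assumption: day 1's money is in her day-1 envelope; on days 2-10
    # the experimenter puts money in the other envelope, as stated.
    day1 = choose(1)
    other = 'B' if day1 == 'A' else 'A'
    return [(day1 if day == 1 else other, choose(day))
            for day in range(1, 11)]  # (money location, her choice)

awakenings = run_experiment(lambda day: 'A')  # "always choose A"
p_in_A = sum(money == 'A' for money, _ in awakenings) / len(awakenings)
print(p_in_A)  # 0.1 -- and switching the policy to 'B' flips it to 0.9
```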

But the Sleeping Beauty problem is not like that. Alice doesn't make any decisions during the experiment that could feed back into her probabilities. If each day we put a sealed envelope in front of Alice, containing a note saying which day it is, then Alice really ought to have some probability distribution over what's in the envelope. Undefined doesn't cut it for me yet. Maybe I should just wait for your post :-)

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-01T14:34:55.406Z · LW · GW

suffice to say, that according to it P(Heads|Monday) = 3/2, which is clearly wrong

That sentence confuses me. The formulas from my comment imply that P(A=H|D=1) = 1/3 and P(A=T|D=1) = 2/3, which looks like an ok halfer position (modulo the fact that I accidentally swapped heads and tails in the very first comment and am still sticking to that for coherence - sorry!).

About the double halfer position, not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)? It seems to me (maybe naively) that these three numbers should be enough, any conditionals can be calculated from them by Bayes' theorem.
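
For example, with my own numbers from the other comment, Bayes gives the conditionals directly:

P(A=H|D=1) = P(A=H∧D=1) / (P(A=H∧D=1) + P(A=T∧D=1)) = (1/4) / (1/4 + 1/2) = 1/3,

and likewise P(A=T|D=1) = 2/3.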

Comment by cousin_it on My Alignment "Plan": Avoid Strong Optimisation and Align Economy · 2024-02-01T12:46:02.507Z · LW · GW

a company that pays its employees below-subsistence wages will get outcompeted by companies that offer better conditions... once we automate a large fraction of the economy and society, this relationship between competitiveness and being beneficial to humans can cease to hold

Walmart is one of the biggest employers in the world, and its salaries are notoriously so low that a large percentage of employees depend on welfare to survive (in addition to their Walmart salary). The economy is already pretty far from what I'd call aligned. If we want to align it, the best time to start was a couple of centuries ago; the second-best time is now. Let's not wait until AI increases concentration of power even more.

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-01T12:17:09.361Z · LW · GW

It seems to me that the correct Bayesian updating is a bit different.

Let's denote Alice and Bob's coins as A and B, each taking values H or T, and denote the current day as D, taking values 1 or 2. Then, just after waking up but before learning whether Bob is awake, Halfer Alice has this prior: P(A=H∧D=1) = 1/4, P(A=H∧D=2) = 1/4, P(A=T∧D=1) = 1/2, and independently P(B=H) = P(B=T) = 1/2.

After that, meeting Bob gives her new information N = (A=H∧D=1∧B=H) ∨ (A=H∧D=2∧B=T) ∨ (A=T∧D=1∧B=H). These are three mutually exclusive clauses, and we can compute the probability of each from Alice's prior above: P(A=H∧D=1∧B=H) = 1/4 * 1/2 = 1/8, P(A=H∧D=2∧B=T) = 1/4 * 1/2 = 1/8, P(A=T∧D=1∧B=H) = 1/2 * 1/2 = 1/4. The probability mass of N is split equally between A=H and A=T, so observing N shouldn't make Halfer Alice update about her coin.
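
A brute-force check of this (a quick sketch that just enumerates the joint distribution above):

```python
from fractions import Fraction as F

# Halfer Alice's prior over (her coin A, current day D), per above;
# Bob's coin B is independent and fair.
prior_AD = {('H', 1): F(1, 4), ('H', 2): F(1, 4), ('T', 1): F(1, 2)}
prior_B = {'H': F(1, 2), 'T': F(1, 2)}

# N = "Alice meets Bob awake". Per the three clauses above, Bob is
# awake on day 1 iff B=H and on day 2 iff B=T.
def meets(d, b):
    return (d == 1 and b == 'H') or (d == 2 and b == 'T')

joint = {(a, d, b): p_ad * p_b
         for (a, d), p_ad in prior_AD.items()
         for b, p_b in prior_B.items()}

p_N = sum(p for (a, d, b), p in joint.items() if meets(d, b))
p_heads = sum(p for (a, d, b), p in joint.items()
              if meets(d, b) and a == 'H') / p_N
print(p_N, p_heads)  # 1/2 1/2 -- observing N doesn't move Alice's coin
```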

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-01T00:54:39.303Z · LW · GW

She knows that when the coin is Heads this event has 100% probability, while on Tails it’s only 50%.

I might be missing something in your argument, but I think in my setup as stated, it should be 50% in both cases. When Alice's coin is heads, she wakes up on both days, but Bob wakes up on only one of them, depending on his own coin. So whether Alice is a halfer or a thirder, meeting Bob doesn't give her any new information about her coin. Bob, meanwhile, does update to 2/3 about Alice's coin when they meet. So if the Alice he's meeting is a halfer, they have an unresolvable disagreement about her coin.
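
Spelling out Bob's side of the calculation: P(meet|A=H) = 1 (Alice is awake both days, so he meets her whichever day he's up), while P(meet|A=T) = 1/2 (she's awake only on day 1, so they meet only if that's his day). By Bayes: P(A=H|meet) = (1/2 · 1) / (1/2 · 1 + 1/2 · 1/2) = 2/3.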

This way a thirder Alice can make herself arbitrary confident in the result of the coin toss just by a precommitment!

Yeah, also known as "algorithm for winning the lottery": precommit to make many copies of yourself if you win. I guess we thirders have learned to live with it.

Comment by cousin_it on Don't sleep on Coordination Takeoffs · 2024-01-29T11:51:06.877Z · LW · GW

Some of the first people to try to get together and have a really big movement to enlighten and reform the world was the Counter Culture movement starting in the 60′s

The first? Like, in the history of the world?

Comment by cousin_it on Does literacy remove your ability to be a bard as good as Homer? · 2024-01-18T13:43:18.154Z · LW · GW

We're witnessing a similar change in our time. Many people today think that reading books is difficult and impressive, when 20-30 years ago everyone was reading books easily for fun.

A similar thing is happening with music: people are losing the ability to enjoy instrumental music. Instrumental songs almost never chart today, compared to the 60s and 70s. People do listen to soundtracks from movies and games, but that's a borrowed emotional effect.

I haven't checked, but I'm willing to bet that people's ability to navigate without an electronic map is also pretty much gone.

Comment by cousin_it on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2024-01-14T10:11:35.136Z · LW · GW

Coming back to this, I think "martial art of rationality" is a phrase that sounds really cool. But there are many cool-sounding things that in reality are impossible, or not viable, or just don't work well enough. The road from intuition about a nonexistent thing, to making that thing exist, is always tricky. The success rate is low. And the thing you try to bring into existence almost always changes along the way.

Comment by cousin_it on Universal Love Integration Test: Hitler · 2024-01-12T11:17:08.473Z · LW · GW

I think the base rate of Hitlers might be really high, like 10% or more, depending on circumstances. If you look at pre-WWII times, or colonial times, there sure were a lot of world leaders that did horrible things. But world leaders are just people with power. The proportion of people without power, who would do horrible things if given power, is probably similar.

And that's disregarding the base rate of other evil that we commit, like tolerating factory farming. So once you start fantasizing about ancestor sim reeducation camps, oh man. If that's really the plan, guess I'd better prepare to end up in such a camp too!

Comment by cousin_it on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-01-10T22:27:30.057Z · LW · GW
Comment by cousin_it on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-01-10T11:15:13.064Z · LW · GW

Hmm, if you're a "selfish seeker of truth", debates don't seem like a very good source of truth. Even reading debates often feels shallow and pointless to me, compared to reading books.

Maybe the reason is that the best parts of something are rarely the most disputed parts. For example, in Christianity there's a lot of good stuff like "don't judge lest ye be judged", but debates are more focused on stuff like "haha, magic sky fairy".

Comment by cousin_it on Compensating for Life Biases · 2024-01-10T10:39:50.870Z · LW · GW

Putting art in posts is nice, but this kind of midjourney art is really creepy to me. I wish people used good art instead.

Comment by cousin_it on elbow921's Shortform · 2024-01-07T01:18:51.937Z · LW · GW
Comment by cousin_it on Almost everyone I’ve met would be well-served thinking more about what to focus on · 2024-01-05T22:35:55.768Z · LW · GW

Annie Dillard calls the writer’s life colorless to the point of sensory deprivation.

Ahh, that's from The Writing Life. Amazing book, I recommend it to everyone.

Comment by cousin_it on MIRI 2024 Mission and Strategy Update · 2024-01-05T21:05:51.299Z · LW · GW

Sure, in theory you could use cryptography to protect uploads from tampering, at the cost of slowdown by a factor of N. But in practice the economic advantages of running uploads more cheaply, in centralized server farms that'll claim to be secure, will outweigh that. And then (again, in practice, as opposed to theory) it'll be about as secure as people's personal data and credit card numbers today: there'll be regular large-scale leaks and they'll be swept under the rug.

To be honest, these points seem so obvious that MIRI's support of uploading makes me more skeptical of MIRI. The correct position is the one described by Frank Herbert: don't put intelligence in computers, full stop.

Comment by cousin_it on Dating Roundup #2: If At First You Don’t Succeed · 2024-01-05T11:11:04.471Z · LW · GW

Well, the internet, obviously. I've been so used to thinking of it as a good thing, and earning a comfortable living from it, but now it feels more and more like lead pipes in the Roman Empire.

Comment by cousin_it on MIRI 2024 Mission and Strategy Update · 2024-01-05T10:43:27.624Z · LW · GW

Yeah. If you're an upload, the server owner's power over you is absolute. There's no precedent for this kind of power in reality, and I don't think we should bring it into existence.

Other fictional examples are the White Christmas episode of Black Mirror, where an upload gets tortured while being run at high speed, so that in a minute many years of bad stuff have happened and can't be undone; and Iain Banks' Surface Detail, where religious groups run simulated hells for people they don't like, and this large scale atrocity can be undetectable from outside.

Comment by cousin_it on MIRI 2024 Mission and Strategy Update · 2024-01-05T01:19:30.970Z · LW · GW

I think humanity shouldn't work on uploading either, because it comes with very large risks that Sam Hughes summarized as "data can't defend itself". Biological cognitive enhancement is a much better option.

Comment by cousin_it on AI #45: To Be Determined · 2024-01-04T16:39:26.679Z · LW · GW

Min Choi: This is bonkers… You can’t tell if these photos are AI generated

Yes I can. Hands and text look freaky in all of them.

Comment by cousin_it on $300 for the best sci-fi prompt: the results · 2024-01-03T21:23:59.531Z · LW · GW

It's certainly a good imitation of average (i.e. bad) writing. I couldn't bear reading any of these stories past the first paragraph or two. Midjourney art usually makes me feel the same way as well.

The most interesting AI-generated texts for me were the early "rewrite this as a sonnet" examples, and the most interesting AI-generated art was the "spiral art" stuff. Basically stuff that isn't just mimicking humans, but feels impressively labor-intensive compared to human work. Stuff that flaunts its AI-ness.

If this view makes sense, then maybe a different kind of prompt would work best. You wouldn't ask an AI to write a modern musical piece, because then it'll just give you a blur of modern musical pieces. Instead you'd ask for something strange and difficult, like "write a beautiful melody whose contour outlines the Manhattan skyline". Then the AI will go to absurd lengths to oblige your ridiculous constraints, and the result might be genuinely cool and worth sharing.

Comment by cousin_it on Rhythm Stage Setup Components · 2024-01-02T02:14:44.112Z · LW · GW

Or maybe drums+keys, like this. The fullest-sounding one-person band I've ever seen.