Posts

Dependencies and conditional probabilities in weather forecasts 2022-03-07T21:23:12.696Z
Money creation and debt 2020-08-12T20:30:42.321Z
Superintelligence and physical law 2016-08-04T18:49:19.145Z
Scope sensitivity? 2015-07-16T14:03:31.933Z
Types of recursion 2013-09-04T17:48:55.709Z
David Brooks from the NY Times writes on earning-to-give 2013-06-04T15:15:26.992Z
Cryonics priors 2013-01-20T22:08:58.582Z

Comments

Comment by AnthonyC on Can we get an AI to do our alignment homework for us? · 2024-02-27T20:35:00.765Z · LW · GW

Yeah, there are lots of ways to be useful, and not all require any superhuman capabilities. How much is broadly-effective intelligence vs targeted capabilities development (seems like more the former lately), how much is cheap-but-good-enough compared to humans vs better-than-human along some axis, etc.

Comment by AnthonyC on Can we get an AI to do our alignment homework for us? · 2024-02-27T19:01:24.855Z · LW · GW

Fair enough, "trivial" overstates the case. I do think it is overwhelmingly likely.



That said, I'm not sure how much we actually disagree on this? I was mostly trying to highlight the gap between an AI having a capability and us having the control to use an AI to usefully benefit from that capability.

Comment by AnthonyC on Can we get an AI to do our alignment homework for us? · 2024-02-26T22:58:50.196Z · LW · GW

Trivially, any AI smart enough to be truly dangerous is capable of doing our "alignment homework" for us, in the sense of having enough intelligence to solve the problem. This is something EY has also pointed out many times, but which often gets ignored. Any ASI that destroys humanity will have no problem whatsoever understanding that that's not what humanity wanted, and no difficulty figuring out what things we would have wanted it to do instead.

What is a very different and much less clear claim is whether we can use any AI that was developed with sufficient capabilities, but built before the "homework" was done, to do that homework safely (for likely/plausible definitions of "we" and "use").

Comment by AnthonyC on AI #52: Oops · 2024-02-23T19:14:42.452Z · LW · GW

It seems totally reasonable to say that AI is rapidly getting many very large advantages with respect to humans, so if it gets to ‘roughly human’ in the core intelligence module, whatever you want to call that, then suddenly things get out of hand fast, potentially the ‘fast takeoff’ level of fast even if you see more signs and portents first.


In retrospect, there had been omens and portents.

But if you use them as reasons to ignore the speed of things happening, they won't help you.

Comment by AnthonyC on The Byronic Hero Always Loses · 2024-02-22T14:15:53.909Z · LW · GW

Lots of thoughts here. One is that over the course of our lives we encounter so many stories that they need to have variety, and Tolstoy's point makes pure heroes less appealing: "Happy families are all alike; every unhappy family is unhappy in its own way." Heroes and conflicts in children's stories are simple, and get more complex in stories for teens and adults. This is not just about maturity and exposure to the complexities of life and wanting to grapple with real dilemmas, it's also about not reading the hundredth identical plot.

Milton's Lucifer was also my first thought reading this, but I'm not sure I agree with your take. I think the point, for me, is that he makes us question whether he actually is the villain at all. The persuasion element is, I think, an artifact of the story being told in a cultural context where there's an overwhelming presumption that he is the villain. The ancient Greeks had a different context, and had no problem writing complex and flawed and often doomed heroes who fought against fate and gods without their writers/composers ever thinking they needed to classify them as villains or anti-heroes, just larger-than-life people.

Perhaps it's just my American upbringing, but I think I want to live in a world where agents can get what they want, even with the world set against them, if only they are clever and persistent enough.

I'm American too, and I don't want that. At least not in general. I do share the stereotypical American distrust of rigid traditions and institutions to a substantial degree. I want agents-in-general to get much of what they want, but when the world is set against them? Depends case-by-case on why the world is set against them, and on why the agents have the goals they have. Voldemort was a persistent and clever (by Harry Potter standards) agent as much as Gandhi was. I can understand how each arrived at their goals and methods, and why their respective worlds were set against them, but that doesn't mean I want them both to get what they want. Interpolate between extremes however you like.

Comment by AnthonyC on How do I make predictions about the future to make sense of what to do with my life? · 2024-02-22T13:41:11.811Z · LW · GW

I know opinions about these kinds of questions differ widely, and I think you shouldn't take too much advice from people who don't know anything about you. Regardless, I think the answers depend a lot on what set of options is or seems available to you.

For the first set of questions, do any of the options you'd consider seem likely to change the answer of "how many years?" If not, I would probably not use that as a deciding factor. You're building a life for yourself, it's unlikely the things you value only have value in the future and not the present, and there's enough probability that the answer is "at least decades" to make accounting for long timelines in your plans worthwhile.

For the second, this is harder and depends a lot on where you are, where you can easily go, where you have personal or family ties, how much money and other resources you have available, and how exposed you currently are to different kinds of economic and geopolitical changes.

As for personal anecdotes: none of the career options I considered had to do with AI, so I've treated the first set of questions as irrelevant to my own career path. I do understand that AI going well is extremely high-importance and high-variance, but I'm still focusing on the much lower-variance problem of climate change (and relatedly, energy, food, water, transportation, etc.). Sure, it won't make a difference if humanity goes extinct in 2035, but neither would any other path I took. I've also had the luxury of mostly being able to ignore the second set of questions, but FWIW I work fully remote and travel full time, which has the side effects of preserving optionality and of teaching me how to be transplantable and not get tied to hard-to-transport stuff.

Comment by AnthonyC on AI #51: Altman’s Ambition · 2024-02-20T21:54:27.539Z · LW · GW

The actual WSJ article centers on companies not sure they want to pay $30/month per user for Microsoft Copilot.

I understand that this is a thing, but I find it hard to imagine there are that many people making significant use of Windows and Microsoft Office at work who wouldn't be able to save an hour or two a month using Copilot or its near-term successors. For me the break-even point would be saving somewhere between 5-30 minutes a month, depending on how I calculate the cost and value of my work time.
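As a quick back-of-envelope sketch of where that break-even range comes from (the hourly values here are my own illustrative assumptions, not figures from the article):

```python
# Break-even: minutes saved per month that justify $30/month for Copilot.
# Hourly values of work time are illustrative assumptions.
COPILOT_COST_PER_MONTH = 30.0  # USD

for hourly_value in (60, 120, 360):  # USD per hour of work time
    break_even_minutes = COPILOT_COST_PER_MONTH / hourly_value * 60
    print(f"At ${hourly_value}/hr, break-even is {break_even_minutes:.0f} min/month")
# At $60/hr, break-even is 30 min/month
# At $120/hr, break-even is 15 min/month
# At $360/hr, break-even is 5 min/month
```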

Comment by AnthonyC on Monthly Roundup #15: February 2024 · 2024-02-20T21:04:12.347Z · LW · GW

On the protest acceptability: whenever I read about these polls I have no idea how much I'm supposed to think about the actual question. Personally, I find it very easy to imagine fliers someone could hand out, and audiences they could hand them to, that I would find unacceptable but that both causes I support and those I oppose might decide are great ideas. "Always" to me means "probability ~1" but maybe that is too high a threshold for the intended question.

On binging: shows have gotten more complex since the advent of DVR and then streaming platforms. Yes, some spacing is better than bingeing, but also a lot of what's actually good is going to demand a lot of memory from me until I reach a natural breakpoint where there aren't so many loose ends and intertwined plot points. Sometimes a week is fine. Other times even a day might leave me having to go rewatch bits of previous episodes.

Comment by AnthonyC on When Should Copyright Get Shorter? · 2024-02-20T04:05:14.075Z · LW · GW

Yes, agreed, and just to be clear I'm not talking about delays in granting a patent, I'm talking about delays in how long it takes to bring a technology to market and generate a return once a patent has been filed and/or granted.

Also, I'm not actually sure I'm 100% behind extending patent terms. I probably am. I do think they should be longer than copyright terms, though.

Comment by AnthonyC on Scientific Method · 2024-02-19T17:25:52.011Z · LW · GW

I think there could be a lot of value in having a sequence of posts on, basically, "What is this 'science' thing anyway?" Right now all the core ideas (including various prerequisites and corollaries) exist on this site or in the Sequences, but not as a single, clear, cohesive whole that isn't extremely long.

However, I think trying to frame it this way, in one post, doesn't work. It's unclear who the target audience is, how they should approach it, and what they should hope to get out of it. Even knowing and already understanding these points, I read it wondering, "Why are these here, together, in this order? What is implied by the point numbering? Who, not already knowing these, would be willing to read this and able to understand it?"

It looks like the author created this account only a day before posting this. IDK if they've been here lurking or using another account a long time before that or not. In any case, my suggestion would be to look at how the Sequences are structured, find the bits that tie into what you're writing here, and then refactor this post into a series. Try and make it present ideas in a cohesive order, in digestible chunks, with links to past posts by others that expand on important points in more detail or other styles.

Comment by AnthonyC on I'd also take $7 trillion · 2024-02-19T17:09:33.775Z · LW · GW

I agree with pretty much all of this. If anything I think it somewhat understates the case. We're going to need a lot more clean power than current electricity demand suggests if and when we make a real effort to reduce fossil fuel consumption in chemical production and transportation, and the latter will necessitate building a whole lot of some form of energy storage whether or not batteries get much cheaper.

Comment by AnthonyC on When Should Copyright Get Shorter? · 2024-02-19T17:02:05.983Z · LW · GW

Is the slow forward march of copyright terms the optimal response to the massive changes in information technology we’ve seen over the past several centuries?


Of course not! Even without the detailed analysis and assorted variables needed to figure out anything like an optimal term, economics tells us this can't actually help much in increasing idea, tech, patent, or creative work output. Market size for copies of a given work changes over time (speed and direction vary), but to a first approximation assume you can get away with holding it steady. Apply a 7% discount rate to the value of future returns and by year 20 you've gotten roughly 75% of all the value you'll ever extract. By year 50 you're over 95%. 
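A quick sketch of that arithmetic (a minimal back-of-envelope, assuming a constant annual revenue stream and a 7% discount rate):

```python
# Fraction of the total (infinite-horizon) discounted value of a constant
# revenue stream that is captured within the first T years, at a 7% rate.
r = 0.07
total_value = 1 / r  # present value of a $1/year perpetuity

for T in (20, 50):
    value_by_T = (1 - (1 + r) ** -T) / r  # present value of the first T years
    print(f"Year {T}: {value_by_T / total_value:.0%} of all value captured")
# Year 20: 74% of all value captured
# Year 50: 97% of all value captured
```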

Even without copyright, Disney would still own the trademark rights to use Mickey in many ways, so that part isn't really about copyright. Honestly, outside of an occasional TV special I can't remember the last time there was an actual new creative work about Mickey at all. Who can claim that copyright was still incentivizing anything at all? What court would believe it?

Patents are trickier, because they start at the date of filing, and in some industries it can take most of the patent term just to bring it to market, leaving only a few years to benefit from the protection. Something as simple as a lawsuit from a competitor, or any form of opposition to building a plant, or a hiccup in raising funds, could create enough delay to wipe out a significant chunk of a patent's value, in a way that wasn't really true a century ago. It makes little sense to me to have the same patent durations across software, automotive, energy, aviation, and pharmaceutical inventions/discoveries.

Comment by AnthonyC on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-17T14:52:53.712Z · LW · GW

Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can't predict the outcomes of actions even in principle.

In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept. It's similar to how a starving man cares more about getting a loaf of bread than he does about getting a lesson on the biochemistry of fermentation. Whether humans or AIs or aliens decide the direction of the future, they all do so from within the same universal laws and mechanisms. Free will isn't a point of difference among options, and it isn't a lever anyone can pull that affects what needs to be done.

I am also happy to concede that, yes, creating an unfriendly AI that kills all humans is a form of steering the future. Right off a cliff, one time. That's very different from steering in a direction I want to steer (or be steered) in. It's also very different from retaining the ability to continue to steer and course correct.

Comment by AnthonyC on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-16T12:47:09.966Z · LW · GW

Indeterminism is, tautologously, freedom from determinism.

Yes, and determinism isn't the thing I want freedom from. External control is, mostly.

Why would it be a deterministic fact in an indeterministic world?

The "may" is important there, and I intended it to be a probabilistic may, not a permission-granting may. It is a deterministic fact that it might be invoked, not that it necessarily will.

Comment by AnthonyC on Tort Law Can Play an Important Role in Mitigating AI Risk · 2024-02-15T12:34:05.289Z · LW · GW

The moment any corporation knows it has the ability to kill millions to billions of people, or disrupt the world economy, with AI, it becomes a global geopolitical superpower, which can also really change how much it cares about complying with national laws. 

It's a bit like the joke about the asteroid mining business model. 1) Develop the ability to de-orbit big chunks of space rock. 2) Demand money. No mining needed.

Comment by AnthonyC on Tort Law Can Play an Important Role in Mitigating AI Risk · 2024-02-15T12:30:34.172Z · LW · GW

Thanks, that makes sense. 

The expected prevalence of warning shots is something I really don't have any sense of. Ideally, of course, I'd like a policy that both increases the likelihood of (or at least doesn't disincentivize) small, early warning shots along paths that, without them, would lead to large incidents, and also disincentivizes all bad outcomes such that companies want to avoid them.

Comment by AnthonyC on Tort Law Can Play an Important Role in Mitigating AI Risk · 2024-02-13T16:34:32.750Z · LW · GW

Should I be concerned that this kind of reliance on tort law only disincentivizes such "small" incidents, and only when they occur in such a way that the offending entity won't attain control of the future faster than the legal system can resolve a massive and extremely complex case in the midst of the political system and economy trying to resolve the incident's direct aftermath? Because I definitely am.

Comment by AnthonyC on And All the Shoggoths Merely Players · 2024-02-12T15:23:45.533Z · LW · GW

But I'm not sure how to reconcile that with the empirical evidence that deep networks are robust to massive label noise: you can train on MNIST digits with twenty wrong labels for every correct one and still get good performance as long as the correct label is slightly more common than the most common wrong label. If I extrapolate that to the frontier AIs of tomorrow, why doesn't that predict that biased human reward ratings should result in a small performance reduction, rather than ... death?

And how many errors, at what level of AGI capabilities, are sufficient to lead to human extinction? That's already beyond the bare minimum level of reliability you need, the upper bound on how many errors you can tolerate.  The answer doesn't look anything like the 90% accuracy found in the linked paper if the scenario were actually a high-powered AGI that will be used a vast number of times. 
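To make the compounding explicit (a toy illustration of the argument above, not a calculation from the linked paper): if each use of a system carries an independent chance of a catastrophic error, the probability of getting through N uses cleanly decays exponentially.

```python
# Toy model: P(zero catastrophic errors) across N independent uses,
# at 90% per-use reliability (the N values are illustrative).
per_use_reliability = 0.90

for n_uses in (10, 100, 1000):
    p_clean = per_use_reliability ** n_uses
    print(f"N = {n_uses:>4}: P(no catastrophe) = {p_clean:.2e}")
# N =   10: P(no catastrophe) = 3.49e-01
# N =  100: P(no catastrophe) = 2.66e-05
# N = 1000: P(no catastrophe) = 1.75e-46
```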

Comment by AnthonyC on Running the Numbers on a Heat Pump · 2024-02-12T14:10:52.302Z · LW · GW

I recently had a coworker in the UK tell me they can get better off-peak rates if they install a home battery system and let the utility control it. I think in general the peak/off-peak rate difference could make a significant difference to these kinds of questions, but it's very dependent on local and regional policy choices shaping energy markets.

Comment by AnthonyC on Running the Numbers on a Heat Pump · 2024-02-12T14:07:20.036Z · LW · GW

My question about this is, are there generator systems that allow you to safely dump the waste heat from combustion inside, the way a forced-air furnace system does?

Getting more speculative/forward looking: if we can get the up front costs down enough to consider swapping the generator in your model for a methane fuel cell, would it be cheaper to heat and power your house with natural gas than to run off the grid? (Not that any MA town I've lived in would be likely to approve a building permit for such a thing, but still, interesting question).

FWIW there are RVs with electric heat pumps (though less efficient than residential ones, usually) as well as on-board (propane, gas, or diesel) generators. In this context there are definitely cases where it's cheaper to run the generator and heat pump than to run the propane furnace. These kinds of systems also benefit from the presence of batteries (which, set up properly, can stabilize power draw from the generator and minimize generator run time and start/stop cycles as the heat pump turns on and off). Last summer I dry camped in Wyoming for about a month, and my 10kWh battery + 3kW inverter let me cut my generator fuel use (for AC, not heat, but similar idea) in half (would have been even better but I was limited by max converter charging rate and battery thermal management) compared to if I didn't have that.

Comment by AnthonyC on Running the Numbers on a Heat Pump · 2024-02-12T13:54:28.078Z · LW · GW

Yeah, I came to say the same. You're basically running into the problem that electricity in MA is expensive relative to natural gas, which is very much a contingent fact of policy/history/infrastructure. If you were living elsewhere, or living off-grid, the numbers would look very different.

You may (or may not) find the MA policy mix and cost structure changing in the future, so if nothing else, be ready to revise your numbers over time. Especially if your current gas system breaks and you have to replace it with something no matter what, that can change the economics a lot too. 

Comment by AnthonyC on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-12T13:42:43.191Z · LW · GW

If you're talking about the two-stage model, I'm aware of it but haven't read their original writings. Still, I don't see how that could possibly help make my choices more or less "free" in any sense I care about for any philosophical, motivational, or moral reason.

If I am deterministically selecting among options generated within myself by an indeterministic process, sure that's possible, and I appreciate that it's an actual question we could find an answer to. But I've never been able to see why I might prefer that situation to deterministically choosing among states generated by any other process that's outside my control, whether it happens inside my body or not, whether it's deterministic or not. (Yes, I realize I am essentially rejecting the idea that I should consider the option-generating indeterministic process to be part of "me." Maybe that's a mistake, but that's how my me-concept is (currently) shaped.)

To put it another way: Imagine I am playing a game where I (deterministically) deliberate and choose among options presented to me. Why does the question of whether my choice is free or not depend on whether the process that generated the list of options is deterministic or not? Why does it depend on whether the option-generating indeterministic module is located inside or outside my body?

Separately, I also have a hard time with the idea that this implies that the question of free will could depend on which version of quantum mechanics is (more) true, because if Many Worlds is correct then it is no longer true that the future is indeterministic; instead it is only true that different parts of current-me will (deterministically) no longer be in communication with one another in the future. 

(Continuing with the game-themed thought experiments because they're readily available and easy to describe) This idea feels as strange to me as it would be to say that a contestant's answers on Who Wants to Be a Millionaire become more or less free if you take away or use the 50-50 lifeline. I don't mean that to be flippant. In some sense, it's true - all of a sudden there are fewer options to freely choose among. But it is also a deterministic fact about the world that at some point in the future, that lifeline may be invoked, and certain options but not others will disappear. To me that seems like a strange hook to hang my self-concept, will, and moral responsibility from.

Comment by AnthonyC on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-11T16:47:09.096Z · LW · GW

Not arguing with that, for sure.  Still, just knowing, at a gut level, that non-mysterious solutions exist was a critical step on my own journey. 

Indeterminism makes the problem harder; randomness means there is no part of "me" (physical or otherwise) deciding what I do, and I don't know of any non-random indeterministic conception of free will. I've looked, and haven't seen anything that would even suggest the shape of what such a thing could look like.

Supernatural solutions don't actually address the question of determinism at all, despite sometimes claiming to do so (at best, they hide the gears somewhere unobservable-in-principle).

And I don't think the more psychological arguments about belief in free will and uncertainty about your own mind-state or predictability within the world are likely to be helpful to the OP given the content of the post and prior comments.

Comment by AnthonyC on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-11T14:56:29.115Z · LW · GW

It's a good recommendation, thank you. But whether it's better depends on the individual reader. For me, way back when I got stuck on the idea that both determinism and randomness seemed incompatible with free will, what got me out of it was someone asking me, "Well, what is it you want from your free will? If what you want is to act for reasons, then determinism doesn't take that away." Changed the way I thought just enough for further reflection and a bit more reading to get me the rest of the way.

Stylistically, I found the Free Will sequence helped me examine and internalize the relevant ideas far more intuitively than more academic philosophy sources ever did, because what I needed was to be beaten over the head with the point that it wasn't actually mysterious. Reading summaries of past arguments by various philosophers, most of whom were either partly-religious in nature or unable/unwilling (for many reasons) to engage with the nature of physical reality as we moderns understand it, had never been enough for me. 

Comment by AnthonyC on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-08T15:45:36.882Z · LW · GW

Demotivation is only a problem when it comes to tasks that I do not want to do - typically because there is a great delay between the action and the reward.

This describes a very different (but still perfectly normal) source of demotivation that, I would suggest, is only masquerading as being about determinism.

Everyone else's answers (and I assume you've also already read the free will sequence? If not, it's a good idea) focus on how and why deterministic physics don't invalidate that it is still you and your choices that determine your actions. The locus of control is still internal, not external. But if your complaint is more about the different parts of yourself disagreeing about what they want you to do, or about what you should do (or other similar but slightly divergent framings of what parts of this the word "you" refers to), then answers about determinism won't help you. The work of harmonizing the parts of yourself, and having better relationships with your body and among the different parts of your mind, is something else entirely. 


To me, it sounds like some part of yourself is invoking the idea of determinism as an invalid argument in favor of a course of action, and the rest of you is accepting that argument. If those other parts want to not accept that argument, then sure, realizing at a deep level why it's invalid might be useful. But it might also be useful to identify arguments that the determinism-invoking-thought-processes are likely to find convincing, such as "Sure, but doing X instead will get you more of the Y you also want."

Comment by AnthonyC on On the Debate Between Jezos and Leahy · 2024-02-07T14:08:22.774Z · LW · GW

"entities that are most inclined towards growth"

As always, the word for this entity is "cancer" and it has a well-known tendency to kill its host and evolve around interventions to restrain it.

Comment by AnthonyC on Can we create self-improving AIs that perfect their own ethics? · 2024-01-30T17:20:31.043Z · LW · GW

If the process of self-improving AIs like described in an simple article by Tim Urban (below) is mastered, then the AI alignment problem is solved.

I would say this has causality backwards. In other words, one of the ways of solving the AI alignment problem is figuring out how to master the plausibly extremely complex process necessary to successfully implement a strategy that can be pointed to in a simple article.

research on AI, ON ETHICS, and coding changes into itself

As I understand it, the vast majority of the difficulty is in figuring out what the second goal in that list actually is, and how to make an AI care about it. Keep in mind that in so many cases we humans are still arguing about the same questions, answers, and frameworks that we've been debating for millennia.

Comment by AnthonyC on Monthly Roundup #14: January 2024 · 2024-01-24T14:44:38.236Z · LW · GW

As far as solar+storage goes, I wonder what timescale that was about. Because eventually you run out of non-renewable resources, at which point earth-based solar supply potential beats out everything else by a few thousand times (and most of the rest is wind). You could match that with fission and fusion for a very long time, but then you have a whole different kind of global warming problem.
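For a rough sense of scale (a back-of-envelope with my own round numbers, not figures from the roundup): sunlight reaching Earth's surface carries on the order of 100,000 TW, against roughly 20 TW of current human primary energy use.

```python
# Back-of-envelope: earth-surface solar vs. current human energy use.
# Both numbers are rough, round, order-of-magnitude assumptions.
solar_at_surface_tw = 100_000  # terawatts of sunlight reaching the surface
human_primary_use_tw = 20      # terawatts of current primary energy demand

print(f"Solar exceeds current use by ~{solar_at_surface_tw / human_primary_use_tw:,.0f}x")
# Solar exceeds current use by ~5,000x
```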

Comment by AnthonyC on On the abolition of man · 2024-01-21T13:51:24.847Z · LW · GW

"even as you chose to create Bob, you chose to create the parts of Bob that his freedom is made of – his motivations, his reasoning, and so on. You chose for a particular sort of free being to join you in the world – one that will, in fact, choose the way you want them to"

This is one observation that Lewis, I think, would not endorse. After all, he is a Christian apologist, yet very clearly does not thereby consider God a tyrant whose complete control over the creation of the universe takes away human freedom. Calling the moral realist thing Tao instead doesn't actually help with that, I think? Either the Tao can influence the world in the present, in which case the conditioners can never really prevent it from reasserting itself; or it can't, in which case how did we first find it anyway; or it controlled the beginning as first cause in which case whatever happens anywhere ever is what it intended; or it intended something different but it's not very good at its job. In that last case we can either let it fail or else choose for ourselves what we think we should do to help it out (which either way puts us right back in the situation of being conditioners, unable to be sure where the line between its will and our will lies in steering the future).

Comment by AnthonyC on Does literacy remove your ability to be a bard as good as Homer? · 2024-01-18T14:31:56.386Z · LW · GW

Is there a reason I should want to? I mean that sincerely. Is there a reason I should want to memorize a specific handful of books' worth of information? Because rather than memorize a few thousand pages designed to be memorizable, what I've actually done is read/heard hundreds of thousands, most likely millions, of pages or their equivalent, with hundreds of new pages per day added, and extracted the key insights as best I can while keeping track as best I can of where I got them and how they all fit together.

I've read or heard or watched the Iliad and Odyssey, Plato, Aristotle, the Oresteia, the Bacchae, and thousands of other books, plays, songs, movies, shows, lectures, podcasts, etc. Would anything about me be better if I'd instead memorized the Catalogue of Ships or the exact text of Plato's Crito and Apology?

Comment by AnthonyC on Medical Roundup #1 · 2024-01-17T14:57:20.021Z · LW · GW

They decided that this is not a form of evidence they are willing to use, even though African-Americans suffer more heart attacks and strokes even when you control for everything else we know to measure. Not factoring this in means they will get less care. 


I... really wish I could still find this surprising. Shocking, horrifying? Sure. But not surprising.

Comment by AnthonyC on The impossible problem of due process · 2024-01-17T14:47:44.825Z · LW · GW

Good or bad, I doubt it's a coincidence that this is in a society with more stringent cultural contexts and assumptions about face and what it is or isn't ok to say and do in regards to other people's reputations.

Comment by AnthonyC on Actually, All Nuclear Famine Papers are Bunk · 2024-01-17T14:13:27.324Z · LW · GW

If anything, I'd think daily expenditure per person could even increase after a large-scale nuclear war, as more people need to engage in more physical labor.

Comment by AnthonyC on Commonwealth Fusion Systems is the Same Scale as OpenAI · 2024-01-12T23:28:42.247Z · LW · GW

I think in some ways this says more about the structure of how we as a society finance promising emerging companies than it does about the companies themselves.

Comment by AnthonyC on Saving the world sucks · 2024-01-10T14:49:47.514Z · LW · GW

I think I agree with a lot of the object level content of the post (and I once fell into a years-long depression due to my own inability to live what I wanted my values to be), but I also would add that there needs to be a lot of initial and ongoing work done, by a lot of people who don't need or necessarily want to be doing it, in order for us to exist in a world where we (and most people) can have the kind of freedom you're talking about, in a meaningful way. You don't have to be the one doing it, but someone does, and we'd better hope they have the kind of ideals which involve valuing other peoples' well-being. Which, of course, we get mainly by encouraging the best, brightest, and most-likely-to-be-influential to have those kinds of values.

See John Adams:

I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture, navigation, commerce and agriculture in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and porcelain.

And then what happens in the 4th generation? Someone in each generation needs to be at the John Adams equivalent object level, and manage to become powerful enough to keep the virtuous cycle going.

Comment by AnthonyC on The Act Itself: Exceptionless Moral Norms · 2024-01-02T00:33:18.395Z · LW · GW

Perhaps chief among them is how to identify what an act is.

I think you're greatly underestimating the degree to which the fuzziness in category boundaries is inherent in reality and in the distance between fundamental reality and human perception, as opposed to being something social that we can address with better word definitions.

I am revising this view. My motivation is that I think we should have a category of actions that we do categorically rule out. Yes, the category will be fuzzy, but a good life requires being the type of person who can draw a hardline at times to say, “This I simply will not do, and no argument or circumstance will make me do it.” 

I'm glad you're acknowledging that it's a person drawing the lines. It sounds like, at least implicitly, you're acknowledging that those lines need not, and should not, be in the same place for all people. The lines are not the same for me as they would be for a president, a general, a doctor, a starving orphan, a medieval peasant, or a hunter-gatherer. They also are not the same for a child as they are for an adult, in ways that the child is not yet capable of understanding or predicting. So, then, there must be times and ways for a person to reassess themselves and their lines over the course of their lives. In other words, the "I" and "me" in that paragraph is one that only exists at a moment in time, and has only a quantitative rather than binary identity with the entities before and after that go by the same name. There are choices I've made and things I've done that Anthony-2008, Anthony-2013, and Anthony-2018 would all have very confidently drawn some very stark lines prohibiting. I see no contradiction in this, because I'm no longer those people. I've grown, learned, and changed, and I consider the relevant changes positive.

Comment by AnthonyC on AI #44: Copyright Confrontation · 2023-12-31T00:11:41.450Z · LW · GW

In practice, one can think of this as ChatGPT committing copyright infringement if and only if everyone else is committing copyright infringement on that exact same passage, making it so often duplicated that it learned this is something people reproduce.

Definitely. Currently, I am of the opinion that there's nothing LLMs do with their training data that is fundamentally much different than what we normally describe with the word "reading," when it happens in a human mind instead of an LLM. IDK if you could convince a court of that, but if you could it would seem to be a pretty strong defense against copyright claims. 

Comment by AnthonyC on AI #43: Functional Discoveries · 2023-12-23T14:15:48.056Z · LW · GW

On the Nora Belrose link: I think the term "white box" works better than intended, in that it highlights a core flaw in the reasoning. Unlike a black box, a white box reflects your own externally shined light back at you. It still isn't telling you anything about what's inside the box. If you could make a very pretty box that had all the most important insights, moral maxims, and pieces of wisdom written on it, but the box was opaque, that would still be the same situation.

Comment by AnthonyC on AI #43: Functional Discoveries · 2023-12-23T14:02:48.974Z · LW · GW

I can affirm, even though we were fortunate enough to not use Slack, that this skill was indeed a major portion of running a company.


Am I missing something? Because this seemed to me to be about lowercase-s slack, not about the specific platform?

Comment by AnthonyC on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-22T11:44:45.078Z · LW · GW

I think it could be safely assumed that people have an idea of "software", and that they know that AI is a type of software.

I second faul_sname's point. I have a relative whose business involves editing other people's photos as a key step. It's amazing how often she comes across customers who have no idea what a file is, let alone how to attach one to an email. These are sometimes people who have already sent her emails with files attached in the past. Then add all the people who can't comprehend, after multiple rounds of explanation, that there is a difference between a photo file, like a jpeg, as opposed to a screenshot of their desktop or phone when the photo is pulled up. Yet somehow they know how to take a screenshot and send it to her. 

For so many people, their computer (or car, or microwave, etc.) really is just a magic black box anyway, and if it breaks you go to a more powerful wizard to re-enchant it. The idea that it has parts and you can understand how they work is just... not part of the mental process.

Comment by AnthonyC on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-22T11:34:41.855Z · LW · GW

And it applies to fields far less technical than AI research and geochemistry. I've been a consultant for years. My parents still regularly ask me what it is I actually do.

Comment by AnthonyC on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-22T11:31:46.883Z · LW · GW

hit them with the software equivalent of a rolled up newspaper and telling them "Bad neural net!", and hope they figure out what we're mad about

That's actually a really clear mental image. For those conversations where I have a few sentences instead of five public-facing words, I might use it.

Comment by AnthonyC on What is the next level of rationality? · 2023-12-19T15:20:44.806Z · LW · GW

Telling people to stop being meta is very important, but I think you may be misunderstanding the way in which Chapman is using the term. AFAICT it's really more about being able to step back from your own viewpoint and assumptions and effectively apply a mental toolbox and different mental stances effectively to a problem that isn't trivial or already-solved. Personally I've found it has helped keep me from going too meta in a lot of cases, by re-orienting my thinking to what's needed.

Comment by AnthonyC on When scientists consider whether their research will end the world · 2023-12-19T14:37:41.398Z · LW · GW

There was never “a probability of slightly less than three parts in a million,”... Ignition is not a matter of probabilities; it is simply impossible.

I am not sure if the phrasing was intentional, but he may have been dodging the intended question. He sounds like he's using "probability" in the sampling-from-a-distribution sense, not the Bayesian-epistemic sense. Yes, ignition is impossible, in that if you set off an atomic bomb a million times you don't get three dead Earths. No, this does not necessarily mean that the epistemic state occupied by the Manhattan Project scientists pre-Trinity test included evidence of that physical fact sufficient to bet on it at odds better than 333k:1.

I think this points at my general question: In each of the past cases, given the available data and tools, what is the lowest probability estimate the scientists involved could have arrived at before it was indistinguishable from zero? I could be convinced that there was nothing the Manhattan Project scientists could have learned or tested with the tools and time available to get their estimate much below 3e-6, whereas CERN had much better theoretical, computational, and experimental support (with less time pressure) to enable them to push to 2e-8 or lower.

SETI discussions don't seem to ever have quantified risk estimates, and I can think of several reasons for that. My own conclusion there has always been something like "No aliens powerful enough to be a threat need us to announce our presence. They knew we were here before we did."

The fact that AI researchers do make these estimates, and they're 3-12 OOMs larger than the estimates in any of the other cases, is really all we should need to know to understand how different the cases are from each other.

Comment by AnthonyC on Are There Examples of Overhang for Other Technologies? · 2023-12-15T18:24:04.849Z · LW · GW

I had a similar reaction, and I actually think the nuclear example is a clear sign that we are in a design overhang for nuclear reactors. We've developed multiple generations of better nuclear technologies and mostly just not used them. As a result, the workforce and mining and manufacturing capabilities that would have existed without the regulatory pause have not happened, and so even if regulations were relaxed I would not expect to catch up all the way to where we would have counterfactually been. But if we suddenly started holding nuclear to only the same level of overall safety standards as we hold other power generation, we would get slow growth as we rebuild a whole industry from scratch, then faster growth once it becomes possible to do so. (Or not, if timelines make solar+storage cheap faster than nuclear can ramp up, but that's a whole different question). And no, it wouldn't be 10^6 times cheaper, there's a floor due to just cost of materials that's much higher than that. But I would expect some catch-up growth.

Comment by AnthonyC on Are There Examples of Overhang for Other Technologies? · 2023-12-15T18:08:22.093Z · LW · GW

True, that was poor framing on my part. 

I think I was thrown by the number of times I've read things about us already being in hardware overhang, which a pause would make larger but not necessarily different-in-kind. I don't know if (or realistically, how much) larger overhangs lead to faster change when the obstacle holding us back goes away. But I would say in this proposed scenario that the underlying dynamics of how growth happens don't seem like they should depend on whether the overhang comes from regulatory sources specifically.

The reason I got into the whole s-curve thing is largely because I was trying to say that overhangs are not some novel thing, but rather a part of the development path of technology and industry generally. In some sense, every technology we know is possible is in some form(s) of overhang, from the moment we meet any of the prerequisites for developing it, right up until we develop and implement it. We just don't bother saying things like "Flying cars are in aluminum overhang."

Comment by AnthonyC on AI #42: The Wrong Answer · 2023-12-14T18:42:53.135Z · LW · GW

Also the lack thereof. There are places where going along with a wife who says that which is not is the 100% correct play. This often will importantly not be one of them. 

Context needed :-) Namely, how did this become a disagreement? Say she needs 4 apples for lunch/snacks and 5 for a pie this week, and sends me to do the shopping. 12 is probably about the right number to buy! Someone will snag one when we didn't expect it, one will end up rotten, and they may be smaller than we usually get. 

Comment by AnthonyC on Are There Examples of Overhang for Other Technologies? · 2023-12-14T18:24:06.389Z · LW · GW

You're right, I was switching between performance s-curves and market size s-curves in my thinking without realizing it. I do think the general point holds that there's a pattern of hit minimum viability --> get some adoption --> adoption accelerates learning, iteration, and innovation --> performance and cost improve --> viability increases --> repeat until you hit a wall or saturate the market.

Comment by AnthonyC on Are There Examples of Overhang for Other Technologies? · 2023-12-14T14:36:35.878Z · LW · GW

Have you considered the brain as a possible example, though evolved instead of engineered? Gradual increase in volume across the last six million years, but with technological stagnation for long stretches (for example, Homo erectus' stone tools barely changed for well over a million years). Then some time in the last hundred thousand years or so, we got accelerating technological progress despite steady or decreasing brain size. Is it better algorithms (via genetics)? Better data (cultural)? Feedback loops from these leading to larger population? Unclear, but the last, rate-limiting step wasn't larger or faster brains.

And I think the idea of the rate-limiting step, rather than overhang, is exactly the key here. In your post you talk about S-curves, but why do you think s-curves happen at all? My understanding is that it's because there's some hard problem that takes multiple steps to solve, and when the last step falls (or a solution is in sight), it's finally worthwhile to toss increasing amounts of investment to actually realize and implement the solution. Then, we see diminishing returns as we approach the next wall that requires a different, harder set of solutions. In time each one is overcome, often for unrelated reasons, and we have overhang in those areas until the last, rate-limiting step falls and we get a new s-curve.

Consider the steam engine, first given a toy demo about two and a half millennia ago by Archytas. Totally impractical, ignored, forgotten. Then we developed resource and compute overhang (more material wealth, more minds in the population, greater fraction of minds focused on things other than survival, better communications from the printing press and shared Latin language among scholars). We developed better algorithms (algebra, calculus, metallurgical recipes for iron and steel, the scientific method, physics). Then, and only then, did James Watt and his contemporaries overcome the last hurdle to make it practical enough to kickstart the s-curve of the industrial revolution that we're still riding to this day.

Your post reads, to me, as saying, "Better algorithms in AI may add new s-curves, but won't jump all the way to infinity, they'll level off after a while." Which is a reasonable observation and true enough for the effects of each advance. But at some level that's almost the same as saying, "There is compute overhang," because both mean, "Available compute is not currently the rate-limiting step in AI development."

Now, you can go on from there to debate where or when a given s-curve will level off. You can debate whether the fact that each AI s-curve increases available intelligence for problem solving makes AI's expected trajectory different than other technologies. Faster planes don't invent their successors, and we can only produce so many aeronautical engineers to address the successively-harder problems, but if AI hits a point of "We can make a new mind as good as the best human inventor ever at near-zero cost and point it at the next AI problem AND the next hardware problem AND the next energy problem AND etc., all at 1000x the speed of neurons" it's not quite the same thing. Regardless, you don't need to address this to discuss the idea of compute overhang.

Comment by AnthonyC on Is being sexy for your homies? · 2023-12-14T13:03:22.273Z · LW · GW

I actually wonder about the breakdown here. I agree many don't. I don't, though really I have few friends in general. But some do. I don't think there's very many people in the modern world that are in the extreme Mike Pence "I will never let myself be alone with a woman who isn't my wife" category of enforcing such boundaries (though some religious communities still have such rules!). But if you watch the start of When Harry Met Sally, I think a sizeable chunk of people do still lean closer to Billy Crystal's position than Meg Ryan's.