Posts

How are you preparing for the possibility of an AI bust? 2024-06-23T19:13:45.247Z
Nate Showell's Shortform 2023-03-11T06:09:43.604Z
Degamification 2023-02-19T05:35:59.217Z
Reinforcement Learner Wireheading 2022-07-08T05:32:48.541Z

Comments

Comment by Nate Showell on Heresies in the Shadow of the Sequences · 2024-11-16T22:22:47.369Z · LW · GW

Some other examples:

  1. Agency and embeddedness are fundamentally at odds with each other. Decision theory and physics are incompatible approaches to world-modeling, with each making assumptions that are inconsistent with the other. Attempting to build mathematical models of embedding agency will fail as an attempt to understand advanced AI behavior.
  2. Reductionism is false. If modeling a large-scale system in terms of the exact behavior of its small-scale components would take longer than the age of the universe, or would require a universe-sized computer, the large-scale system isn't explicable in terms of small-scale interactions even in principle. The Sequences are incorrect to describe non-reductionism as ontological realism about large-scale entities -- the former doesn't inherently imply the latter.
  3. Relatedly, nothing is ontologically primitive. Not even elementary particles: if, for example, you took away the mass of an electron, it would cease to be an electron and become something else. The properties of those particles, as well, depend on having fields to interact with. And if a field couldn't interact with anything, could it still be said to exist?
  4. Ontology creates axiology and axiology creates ontology. We aren't born with fully formed utility functions in our heads telling us what we do and don't value. Instead, we have to explore and model the world over time, forming opinions along the way about what things and properties we prefer. And in turn, our preferences guide our exploration of the world and the models we form of what we experience. Classical game theory, with its predefined sets of choices and payoffs, only has narrow applicability, since such contrived setups are only rarely close approximations to the scenarios we find ourselves in.
Comment by Nate Showell on Species as Canonical Referents of Super-Organisms · 2024-10-19T04:30:27.539Z · LW · GW

How does this model handle horizontal gene transfer? And what about asexually reproducing species? In those cases, the dividing lines between species are less sharply defined.

Comment by Nate Showell on Rationality Quotes - Fall 2024 · 2024-10-12T17:52:42.430Z · LW · GW

The ideas of the Cavern are the Ideas of every Man in particular; we every one of us have our own particular Den, which refracts and corrupts the Light of Nature, because of the differences of Impressions as they happen in a Mind prejudiced or prepossessed.

Francis Bacon, Novum Organum Scientiarum, Section II, Aphorism V

Comment by Nate Showell on Another argument against maximizer-centric alignment paradigms · 2024-10-03T03:59:08.427Z · LW · GW

The reflective oracle model doesn't have all the properties I'm looking for -- it still has the problem of treating utility as the optimization target rather than as a functional component of an iterative behavior reinforcement process. It also treats the utilities of different world-states as known ahead of time, rather than as the result of a search process, and assumes that computation is cost-free. To get a fully embedded theory of motivation, I expect that you would need something fundamentally different from classical game theory. For example, it probably wouldn't use utility functions.

Comment by Nate Showell on Alexander Gietelink Oldenziel's Shortform · 2024-09-29T20:19:51.095Z · LW · GW

Why are you a realist about the Solomonoff prior instead of treating it as a purely theoretical construct?

Comment by Nate Showell on Another argument against maximizer-centric alignment paradigms · 2024-09-28T22:58:06.421Z · LW · GW

A theory of embedded world-modeling would be an improvement over current predictive models of advanced AI behavior, but it wouldn't be the whole story. Game theory makes dualistic assumptions too (e.g., by treating the decision process as not having side effects), so we would also have to rewrite it into an embedded model of motivation.

 

Cartesian frames are one of the few lines of agent foundations research in the past few years that seem promising, due to allowing for greater flexibility in defining agent-environment boundaries. Preferably, we would have a model that lets us avoid having to postulate an agent-environment boundary at all. Combining a successor to Cartesian frames with an embedded theory of motivation, likely some form of active inference, might give us an accurate overarching theory of embedded behavior.

Comment by Nate Showell on Another argument against maximizer-centric alignment paradigms · 2024-09-22T23:51:56.751Z · LW · GW

And this is where the fundamental AGI-doom arguments – all these coherence theorems, utility-maximization frameworks, et cetera – come in. At their core, they're claims that any "artificial generally intelligent system capable of autonomously optimizing the world the way humans can" would necessarily be well-approximated as a game-theoretic agent. Which, in turn, means that any system that has the set of capabilities the AI researchers ultimately want their AI models to have, would inevitably have a set of potentially omnicidal failure modes.

This is my crux with people who have 90+% P(doom): will vNM expected utility maximization be a good approximation of the behavior of TAI? You argue that it will, but I expect that it won't.

 

My thinking related to this crux is informed less by the behaviors of current AI systems (although they still influence it to some extent) than by the failure of the agent foundations agenda. The dream 10 years ago was that if we started by modeling AGI as a vNM expected utility maximizer, and then gradually added more and more details to our model to account for differences between the idealized model and real-world AI systems, we would end up with an accurate theoretical system for predicting the behaviors AGI would exhibit. It would be a similar process to how physicists start with an idealized problem setup and add in details like friction or relativistic corrections.

 

But that isn't what ended up happening. Agent foundations researchers ended up getting stuck on the cluster of problems collectively described as embedded agency, unable to square the dualistic assumptions of expected utility theory and Bayesianism with the embedded structure of real-world AI systems. The sub-problems of embedded agency are many and too varied to allow one elegant theorem to fix everything. Instead, they point to a fundamental flaw in the expected utility maximizer model, suggesting that it isn't as widely applicable as early AI safety researchers thought. 

 

The failure of the agent foundations agenda has led me to believe that expected utility maximization is only a good approximation for mostly-unembedded systems, and that an accurate theoretical model of advanced AI behavior (if such a thing is possible) would require a fundamentally different, less dualistic set of concepts. Coherence theorems and decision-theoretic arguments still rely on the old, unembedded assumptions and therefore don't provide an accurate predictive model. 

Comment by Nate Showell on Wei Dai's Shortform · 2024-08-25T19:36:37.301Z · LW · GW

Philosophy is frequently (probably most of the time) done in order to signal group membership rather than as an attempt to accurately model the world. Just look at political philosophy or philosophy of religion. Most of the observations you note can be explained by philosophers operating at simulacrum level 3 instead of level 1.

Comment by Nate Showell on Nate Showell's Shortform · 2024-07-06T20:19:18.225Z · LW · GW

Bug report: when I'm writing an in-line comment on a quoted block of a post, and then select text within my comment to add formatting, the formatting menu is displayed underneath the box where I'm writing the comment. For example, this prevents me from inserting links into in-line comments.

Comment by Nate Showell on Scalable oversight as a quantitative rather than qualitative problem · 2024-07-06T20:09:08.955Z · LW · GW

In particular, if the sample efficiency of RL increases with large models, it might turn out that the optimal strategy for RLing early transformative models is to produce many fewer and much more expensive labels than people use when training current systems; I think people often neglect this possibility when thinking about the future of scalable oversight.

This paper found higher sample efficiency for larger reinforcement learning models (see Fig. 5 and section 5.5).

Comment by Nate Showell on How are you preparing for the possibility of an AI bust? · 2024-06-27T03:22:21.425Z · LW · GW

I picked the dotcom bust as an example precisely because it was temporary. The scenarios I'm asking about are ones in which a drop in investment occurs and timelines turn out to be longer than most people expect, but where TAI is still developed eventually. I asked my question because I wanted to know how people would adjust to timelines lengthening.

Comment by Nate Showell on Matt Goldenberg's Short Form Feed · 2024-06-18T06:33:54.968Z · LW · GW

Then what do you mean by "forces beyond yourself?" In your original shortform it sounded to me like you meant a movement, an ideology, a religion, or a charismatic leader. Creative inspiration and ideas that you're excited about aren't from "beyond yourself" unless you believe in a supernatural explanation, so what does the term actually refer to? I would appreciate some concrete examples.

Comment by Nate Showell on Matt Goldenberg's Short Form Feed · 2024-06-16T19:54:00.753Z · LW · GW

There are more than two options for how to choose a lifestyle. Just because the 2000s productivity books had an unrealistic model of motivation doesn't mean that you have to deceive yourself into believing in gods and souls and hand over control of your life to other people.

Comment by Nate Showell on Two easy things that maybe Just Work to improve AI discourse · 2024-06-12T06:24:12.583Z · LW · GW

That's not as bad, since it doesn't have the rapid back-and-forth reward loop of most Twitter use.

Comment by Nate Showell on Two easy things that maybe Just Work to improve AI discourse · 2024-06-09T22:42:02.955Z · LW · GW

The time expenditure isn't the crux for me; the effects of Twitter on its users' habits of thinking are the crux. Those effects also apply to people who aren't alignment researchers. For those people, trading away epistemic rationality for Twitter influence is still very unlikely to be worth it.

Comment by Nate Showell on Two easy things that maybe Just Work to improve AI discourse · 2024-06-09T00:33:55.287Z · LW · GW

I strongly recommend against engaging with Twitter at all. The LessWrong community has been significantly underestimating the extent to which it damages the quality of its users' thinking. Twitter pulls its users into a pattern of seeking social approval in a fast-paced loop. Tweets shape their regular readers' thoughts into becoming more tweet-like: short, vague, lacking in context, status-driven, reactive, and conflict-theoretic. AI alignment researchers, more than perhaps anyone else right now, need to preserve their ability to engage in high-quality thinking. For them especially, spending time on Twitter isn't worth the risk of damaging their ability to think clearly.

Comment by Nate Showell on The case for stopping AI safety research · 2024-05-24T19:42:03.678Z · LW · GW

AI safety research is speeding up capabilities. I hope this is somewhat obvious to most.

This contradicts the Bitter Lesson, though. Current AI safety research doesn't contribute to increased scaling, either through hardware advances or through algorithmic increases in efficiency. To the extent that it increases the usability of AI for mundane tasks, current safety research does so in a way that doesn't involve making models larger. Fears of capabilities externalities from alignment research are unfounded as long as the scaling hypothesis continues to hold.

Comment by Nate Showell on William_S's Shortform · 2024-05-04T21:17:20.004Z · LW · GW

The lack of leaks could just mean that there's nothing interesting to leak. Maybe William and others left OpenAI over run-of-the-mill office politics and there's nothing exceptional going on related to AI.

Comment by Nate Showell on David Udell's Shortform · 2024-04-25T03:59:20.337Z · LW · GW

The concept of "the meaning of life" still seems like a category error to me. It's an attempt to apply a system of categorization used for tools, one in which they are categorized by the purpose for which they are used, to something that isn't a tool: a human life. It's a holdover from theistic worldviews in which God created humans for some unknown purpose.

 

The lesson I draw instead from the knowledge-uploading thought experiment -- where having knowledge instantly zapped into your head seems less worthwhile than acquiring it more slowly yourself -- is that to some extent, human values simply are masochistic. Hedonic maximization is not what most people want, even with all else being equal. This goes beyond simply valuing the pride of accomplishing difficult tasks, such as the sense of accomplishment one would get from studying on one's own, above other forms of pleasure. In the setting of this thought experiment, if you wanted the sense of accomplishment, you could get that zapped into your brain too, but much like getting knowledge zapped into your brain instead of studying yourself, automatically getting a sense of accomplishment would be of lesser value. The suffering of studying for yourself is part of what makes us evaluate it as worthwhile.

Comment by Nate Showell on On green · 2024-03-24T21:50:01.001Z · LW · GW

Spoilers for Fullmetal Alchemist: Brotherhood:

 

Father is a good example of a character whose central flaw is his lack of green. Father was originally created as a fragment of Truth, but he never tries to understand the implications of that origin. Instead, he only ever sees God as something to be conquered, the holder of a power he can usurp. While the Elric brothers gain some understanding of "all is one, one is all" during their survival training, Father never does -- he never stops seeing himself as a fragile cloud of gas inside a flask, obsessively needing to erect a dichotomy between controller and controlled. Not once in the series does he express anything resembling awe. When Father finally does encounter God beyond the Doorway of Truth, he doesn't recognize what he's seeing. The Elric brothers have artistic expressions of wonderment toward God inscribed on their Doorways of Truth, but Father's Doorway of Truth is blank.

Father's lack of green also extends to how he sees humans. It never seems to occur to Father that the taboo against human transmutation is anything more than an arbitrary rule. To him, humans are only ever tools or inconveniences, not people to appreciate for their own sake or look to for guidance. Joy-in-the-Other is what Father most deeply desires, but he doesn't recognize this need.

Comment by Nate Showell on Ratios's Shortform · 2024-03-21T03:24:12.667Z · LW · GW

Mostly the first reason. The "made of atoms that can be used for something else" piece of the standard AI x-risk argument also applies to suffering conscious beings, so an AI would be unlikely to keep them around if the standard AI x-risk argument ends up being true.

Comment by Nate Showell on 0th Person and 1st Person Logic · 2024-03-10T21:20:36.852Z · LW · GW

It's worth noting that no reference to preferences has yet been made. That's interesting because it suggests that there are both 0P-preferences and 1P-preferences. That intuitively makes sense, since I do care about both the actual state of the world, and what kind of experiences I'm having.

Believing in 0P-preferences seems to be a map-territory confusion, an instance of the Tyranny of the Intentional Object. The robot can't observe the grid in a way that isn't mediated by its sensors. There's no way for 0P-statements to enter into the robot's decision loop, and accordingly act as something the robot can have preferences over, except by routing through 1P-statements. Instead of directly having a 0P-preference for "a square of the grid is red," the robot would have to have a 1P-preference for "I believe that a square of the grid is red." 

Comment by Nate Showell on shortplav · 2024-03-04T22:39:49.273Z · LW · GW

What's your model of inflation in an AI takeoff scenario? I don't know enough about macroeconomics to have a good model of what AI takeoff would do to inflation, but it seems like it would do something.

Comment by Nate Showell on Richard_Kennaway's Shortform · 2024-03-04T22:08:33.971Z · LW · GW

You're underestimating how hard it is to fire people from government jobs, especially when those jobs are unionized. And even if there are strong economic incentives to replace teachers with AI, that still doesn't address the ease of circumvention. There's no surer way to make teenagers interested in a topic than to tell them that learning about it is forbidden.

Comment by Nate Showell on Richard_Kennaway's Shortform · 2024-03-03T21:24:06.470Z · LW · GW

All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. "Teachers" become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.

This sequence of steps looks implausible to me. Teachers would have a vested interest in preventing it, since their jobs would be on the line. A requirement for all teaching materials to be AI-generated would also be trivially easy to circumvent, either by teachers or by the students themselves. Any administrator who tried to do these things would simply have their orders ignored, and the Streisand Effect would lead to a surge of interest in pre-AI documents among both teachers and students.

Comment by Nate Showell on Choosing My Quest (Part 2 of "The Sense Of Physical Necessity") · 2024-02-25T03:56:15.003Z · LW · GW

Why do you ordinarily not allow discussion of Buddhism on your posts?

 

Also, if anyone reading this does a naturalist study on a concept from Buddhist philosophy, I'd like to hear how it goes.

Comment by Nate Showell on Nate Showell's Shortform · 2024-02-17T20:36:06.329Z · LW · GW

An edgy writing style is an epistemic red flag. A writing style designed to provoke a strong, usually negative, emotional response from the reader can be used to disguise the thinness of the substance behind the author's arguments. Instead of carefully considering and evaluating the author's arguments, the reader gets distracted by the disruption to their emotional state and reacts to the text in a way that more closely resembles a trauma response, with all the negative effects on their reasoning capabilities that such a response entails. Some examples of authors who do this: Friedrich Nietzsche, Grant Morrison, and The Last Psychiatrist.

Comment by Nate Showell on Phallocentricity in GPT-J's bizarre stratified ontology · 2024-02-17T06:12:43.195Z · LW · GW

OK, so maybe this is a cool new way to look at at certain aspects of GPT ontology... but why this primordial ontological role for the penis?

"Penis" probably has more synonyms than any other term in GPT-J's training data.

Comment by Nate Showell on Dreams of AI alignment: The danger of suggestive names · 2024-02-10T21:38:24.269Z · LW · GW

I particularly wish people would taboo the word "optimize" more often. Referring to a process as "optimization" papers over questions like:

  • What feedback loop produces the increase or decrease in some quantity that is described as "optimization?" What steps does the loop have?
  • In what contexts does the feedback loop occur?
  • How might the effects of the feedback loop change between iterations? Does it always have the same effect on the quantity?
  • What secondary effects does the feedback loop have?

There's a lot hiding behind the term "optimization," and I think a large part of why early AI alignment research made so little progress was because people didn't fully appreciate how leaky of an abstraction it is.

Comment by Nate Showell on A sketch of acausal trade in practice · 2024-02-04T19:50:03.716Z · LW · GW

The "pure" case of complete causal separation, as with civilizations in separate regions of a multiverse, is an edge case of acausal trade that doesn't reflect what the vast majority of real-world examples look like. You don't need to speculate about galactic-scale civilizations to see what acausal trade looks like in practice: ordinary trade can already be modeled as acausal trade, as can coordination between ancestors and descendants. Economic and moral reasoning already have elements of superrationality to the extent that they rely on concepts such as incentives or universalizability, which introduce superrationality by conditioning one's own behavior on other people's predicted behavior. This ordinary acausal trade doesn't require formal proofs or exact simulations -- heuristic approximations of other people's behavior are enough to give rise to it.

Comment by Nate Showell on Decaeneus's Shortform · 2024-01-28T21:02:11.151Z · LW · GW

There are some styles of meditation that are explicitly described as "just sitting" or "doing nothing."

Comment by Nate Showell on Deep atheism and AI risk · 2024-01-13T21:52:56.129Z · LW · GW

Trust and distrust are social emotions. To feel either of them toward nature is to anthropomorphize it. In that sense, "deep atheism" is closer to theism than "shallow atheism," in some cases no more than a valence-swap away. 

 

An actually-deeply-atheistic form of atheism would involve stripping away anthropomorphization instead of trust. It would start with the observation that nature is alien and inhuman and would extend that observation to more places, acting as a kind of inverse of animism. This form of atheism would remove attributions of properties such as thought, desire, and free will from more types of entities: governments, corporations, ideas, and AI. At its maximum extent, it would even be applied to the processes that make up our own minds, with the recognition that such processes don't come with any inherent essence of humanness attached. To really deepen atheism, make it illusionist.

Comment by Nate Showell on Nate Showell's Shortform · 2024-01-08T00:09:38.662Z · LW · GW

Is trade ever fully causal? Ordinary trade can be modeled as acausal trade with the "no communication" condition relaxed. Even in a scenario as seemingly causal as using a vending machine, trade only occurs if the buyer believes that the vending machine will actually dispense its goods and not just take the buyer's money. Similarly, the vending machine owner's decision to set up the machine was informed by predictions about whether or not people would buy from it. The only kind of trade that seems like it might be fully causal is a self-executing contract that's tied to an external trigger, and for which both parties have seen the source code and verified that the other party has enough resources to make the agreed-upon trade. Would a contract like that still have some acausal element anyway?

Comment by Nate Showell on Shortform · 2023-12-31T20:30:58.944Z · LW · GW

I agree: the capabilities of AI romantic partners probably aren't the bottleneck to their wider adoption, considering the success of relatively primitive chatbots like Replika at attracting users. People sometimes become romantically attached to non-AI anime/video game characters despite not being able to interact with them at all! There doesn't appear to be much correlation between the interactive capabilities of fictional-character romantic partners and their appeal to users/followers.

Comment by Nate Showell on Picasso in the Gallery of Babel · 2023-12-28T04:35:52.167Z · LW · GW
  1. Sculpture wouldn't be immune if robots get good enough, but live dance and theater still would be. I don't expect humanoid robots to ever become completely indistinguishable from biological humans.
  2. I agree, since dance and theater are already so frequently experienced in video form.
Comment by Nate Showell on Picasso in the Gallery of Babel · 2023-12-26T23:54:09.203Z · LW · GW

The future you're describing only applies in Looking-At-Screens World. In sculpture, dance, and live theater, to name a few, human artists would still dominate. If generative AI achieved indistinguishability from human digital artists, I expect that those artists would shift toward more concrete media. Those concrete media would also become higher-status due to still requiring human artists.

Comment by Nate Showell on Nate Showell's Shortform · 2023-12-16T03:59:15.416Z · LW · GW

I was comparing it to base-rate forecasting. Twitter leads people to over-update on evidence that isn't actually very strong, making their predictions worse by moving their probabilities too far from the base rates.

Comment by Nate Showell on Nate Showell's Shortform · 2023-12-11T05:12:12.583Z · LW · GW

I've come to believe (~65%) that Twitter is anti-informative: that it makes its users' predictive calibration worse on average. On Manifold, I frequently adopt a strategy of betting against Twitter hype (e.g., on the LK-99 market), and this strategy has been profitable for me.

Comment by Nate Showell on FixDT · 2023-12-03T22:18:49.906Z · LW · GW

It seems like fixed points could be used to replace the concept of utility, or at least to ground it as an inferred property of more fundamental features of the agent-environment system. The concept of utility is motivated by the observation that agents have preference orderings over different states. Those preference orderings are statements about the relative stability of different states, in terms of the direction in which an agent tends to transition between them. It seems duplicative to have both utilities and fixed points as two separate descriptions of state transition processes in the agent-environment system; utilities look like they could be defined in terms of fixed points.

 

As one preliminary idea for how to do this, you could construct a fully connected graph G in which the vertices are the probability distributions that satisfy the fixed-point condition. The edges of G are beliefs that represent hypothetical transitions between the fixed points. The graph G would take the place of a preference ordering by describing the tendency of the agent to move between the fixed points if given the option. (You could also model incomplete preferences by not making the graph fully connected.) Performing power iteration with the transition matrix of G would act as a counterpart to moving through the preference ordering.
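
To make the power-iteration step concrete, here's a minimal numpy sketch, assuming a toy system with three fixed points and a made-up row-stochastic transition matrix (the matrix entries and the uniform starting distribution are illustrative assumptions, not derived from anything above):

```python
import numpy as np

# Toy illustration (all numbers made up): three fixed-point beliefs, with a
# row-stochastic matrix T[i, j] giving the agent's tendency to transition
# from fixed point i to fixed point j when offered the choice.
T = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
])

# Power iteration: repeatedly apply the transition matrix to a distribution
# over fixed points until it converges to the stationary distribution. The
# fixed points with the most mass play the role of the "top" of a
# preference ordering.
dist = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    new_dist = dist @ T
    if np.allclose(new_dist, dist, atol=1e-12):
        break
    dist = new_dist

print(dist)  # mass concentrates on the fixed points the agent tends to settle into
```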

 

Further exploration of this unification of utilities and fixed points could involve connecting G to the beliefs that are actually, rather than just counterfactually, present in the agent-environment system, to describe what parts of the system the agent can control. Having a way to represent that connection could let us rewrite the instrumental constraint so that it doesn't rely on a utility function.

Comment by Nate Showell on Nate Showell's Shortform · 2023-11-19T07:20:30.641Z · LW · GW

What do other people here think of quantum Bayesianism as an interpretation of quantum mechanics? I've only just started reading about it, but it seems promising to me. It lets you treat probabilities in quantum mechanics and probabilities in Bayesian statistics as having the same ontological status: both are properties of beliefs, whereas in some other interpretations of quantum mechanics, probabilities are properties of an external system. This match allows quantum mechanics and Bayesian statistics to be unified into one overarching approach, without requiring you to postulate additional entities like unobserved Everett branches.

Comment by Nate Showell on On Tapping Out · 2023-11-17T22:49:25.841Z · LW · GW

"Tapping out" has a different meaning in Magic: the Gathering (tapping all your lands) that could create some confusion.

Comment by Nate Showell on Vote on Interesting Disagreements · 2023-11-12T21:32:26.053Z · LW · GW

"Agent" is an incoherent concept.

Comment by Nate Showell on Don't Dismiss Simple Alignment Approaches · 2023-11-09T02:51:21.692Z · LW · GW

I asked on Discord and someone told me this: 

A simple way to quantify this: first define a "feature" as some decision boundary over the data domain, then train a linear classifier to predict that decision boundary from the network's activations on that data. Quantify the "linearity" of the feature in the network as the accuracy that the linear classifier achieves. 

 

For example, train a classifier to detect when some text has positive or negative sentiment, then pass the same text through some pretrained LLM (e.g. BERT) whose "feature-linearity" you're trying to measure, and try to predict the sentiment from the BERT's activation vectors using linear regression. The accuracy of this linear model tells you how linear the "sentiment" feature is in your LLM.
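
A minimal sketch of that probing recipe, assuming bert-base-uncased as the model, mean-pooling of the final hidden states, a tiny made-up sentiment dataset, and scikit-learn's LogisticRegression as the linear classifier (all of these are stand-in choices, not part of the quoted answer):

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoModel, AutoTokenizer

# Tiny placeholder dataset (made up for illustration); in practice you'd
# use a real sentiment dataset with many more examples.
texts = [
    "I loved this movie, it was wonderful.",
    "Absolutely fantastic experience.",
    "What a delightful surprise.",
    "This made my whole week.",
    "Worst purchase I have ever made.",
    "I hated every minute of it.",
    "Completely disappointing and dull.",
    "A total waste of money.",
]
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Extract activations for each text (here: mean-pooled final hidden states).
feats = []
with torch.no_grad():
    for t in texts:
        inputs = tokenizer(t, return_tensors="pt", truncation=True)
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, dim)
        feats.append(hidden.mean(dim=1).squeeze(0).numpy())
X = np.stack(feats)

# Fit a linear probe on the activations; its held-out accuracy is the
# "linearity" score for the sentiment feature in this model.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("feature linearity (probe accuracy):", probe.score(X_test, y_test))
```

Running the same probe on models of different sizes would then give the kind of cross-model linearity comparison I was originally asking about.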

Comment by Nate Showell on AI as Super-Demagogue · 2023-11-06T01:03:37.637Z · LW · GW

This post seems to focus too much on the denotative content of propaganda rather than the social context in which it occurs. Effective propaganda requires saturation that creates common knowledge, or at least higher-than-first-order knowledge. People want to believe what their friends believe. If you used AI to generate political messages that were custom-tailored to their recipients, they would fail as propaganda, since the recipients wouldn't know that all their friends were receiving the same message. Message saturation and conformity-rewarding environments are necessary for propaganda to succeed; denotative content barely matters. This makes LLMs practically useless for propagandists, since they don't establish higher-order knowledge and don't contribute to creating an environment in which conformity is socially necessary.

 

(Overemphasis on the denotative meaning of communications in a manner that ignores their social context is a common bias on LessWrong more generally. Discussions of persuasion, especially AI-driven persuasion, are where it tends to lead to the biggest mistakes in world-modeling.)

Comment by Nate Showell on AI Safety is Dropping the Ball on Clown Attacks · 2023-10-23T00:20:38.130Z · LW · GW

I agree that the vast majority of people attempting to do targeting advertising do not have sufficient data. But that doesn't tell us much about whether the big 5 tech companies, or intelligence agencies, have sufficient data to do that, and aren't being really loud about it.

If any of the big tech companies had the capability for actually-good targeted advertising, they'd use it. The profit motive would be very strong. The fact that targeted ads still "miss" so frequently is strong evidence that nobody has the highly advanced, scalable, personalized manipulation capabilities you describe.

Social media recommendation algorithms aren't very powerful either. For instance, when I visit YouTube, it's not unusual for it to completely fail to recommend anything I'm interested in watching. The algorithm doesn't even seem to have figured out that I've never played Skyrim or that I'm not Christian. In the scenario in which social media companies have powerful manipulation capabilities that they hide from the public, the gap between the companies' public-facing and hidden recommendation systems would be implausibly large.

As for chaotic dynamics, there's strong experimental evidence that they occur in the brain, and even if they didn't, they would still occur in people's surrounding environments. Even if it weren't prohibitively expensive to point millions or billions of sensors at one person, that still wouldn't be enough to predict everything. But tech companies and security agencies don't have millions or billions of sensors pointed at each person. Compared to the entirety of what a person experiences and thinks, computer use patterns are a very sparse signal even for the most terminally online segment of the population (let alone your very offline grandma). Hence the YouTube algorithm flubbing something as basic as my religion -- there's just too much relevant information they don't have access to.

Comment by Nate Showell on AI Safety is Dropping the Ball on Clown Attacks · 2023-10-22T21:34:16.522Z · LW · GW

In a world in which the replication attempts went the other direction and social priming turned out to be legit, I would probably agree with you. But even in controlled laboratory settings, human behavior can't be reliably "nudged" with subliminal cues. The human brain isn't a predictable computer program for which a hacker can discover "zero days." It's a noisy physical organ that's subject to chaotic dynamics and frequently does things that would be impossible to predict even with an extremely extensive set of behavioral data.

Consider targeted advertising. Despite the amount of data social media companies collect on their users, ad targeting still sucks. Even in the area of attempted behavior manipulation that's subject to more optimization pressure than any other, companies still can't predict, let alone control, their users' purchasing decisions with anything close to consistency. Their data simply isn't sufficient.

What would it take to make nudges actually work? Even if you covered the entire surface of someone's living area with sensors, I doubt you'd succeed. That would just give you one of the controlled laboratory environments in which social priming still failed to materialize. As mentioned above, the brain is a chaotic system. This makes me think that reliably superhuman persuasion at scale would be impractical even for a superintelligence, aside from with brain-computer interfaces.

Comment by Nate Showell on Don't Dismiss Simple Alignment Approaches · 2023-10-08T17:13:52.356Z · LW · GW

Has anyone developed a metric for quantifying the level of linearity versus nonlinearity of a model's representations? A metric like that would let us compare the levels of linearity for models of different sizes, which would help us extrapolate whether interpretability and alignment techniques that rely on approximate linearity will scale to larger models.

Comment by Nate Showell on Arguments for moral indefinability · 2023-10-03T02:47:53.803Z · LW · GW

CEV also has another problem that gets in the way of practically implementing it: it isn't embedded. At least in its current form, CEV doesn't have a way of accounting for side-effects (either physical or decision-theoretic) of the reflection process. When you have to deal with embeddedness, the distinction between reflection and action breaks down and you don't end up getting endpoints at all. At best, you can get a heuristic approximation.

Comment by Nate Showell on Arguments for moral indefinability · 2023-10-02T02:54:27.458Z · LW · GW

I interpret the quote to mean that there's no guarantee that the reflection process converges. Its attractor could be a large, possibly infinite, set of states rather than a single point.

Comment by Nate Showell on Is this the beginning of the end for LLMS [as the royal road to AGI, whatever that is]? · 2023-08-26T02:09:53.496Z · LW · GW

Some other possible explanations for why ChatGPT usage has decreased:

  • The quality of the product has declined over time
  • People are using its competitors instead