This looks like it's related to the phenomenon of glitch tokens:
https://www.lesswrong.com/posts/8viQEp8KBg2QSW4Yc/solidgoldmagikarp-iii-glitch-token-archaeology
https://www.lesswrong.com/posts/f4vmcJo226LP7ggmr/glitch-token-catalog-almost-a-full-clear
ChatGPT no longer uses the same tokenizer that it used when the SolidGoldMagikarp phenomenon was discovered, but its new tokenizer could be exhibiting similar behavior.
Another piece of evidence against practical CF is that, under some conditions, the human visual system is capable of seeing individual photons. This finding demonstrates that in at least some cases, the molecular-scale details of the nervous system are relevant to the contents of conscious experience.
A definition of physics that treats space and time as fundamental doesn't quite work, because there are some theories in physics such as loop quantum gravity in which space and/or time arise from something else.
"Seeing the light" to describe having a mystical experience. Seeing bright lights while meditating or praying is an experience that many practitioners have reported, even across religious traditions that didn't have much contact with each other.
Some other examples:
- Agency and embeddedness are fundamentally at odds with each other. Decision theory and physics are incompatible approaches to world-modeling, with each making assumptions that are inconsistent with the other. Attempting to build mathematical models of embedding agency will fail as an attempt to understand advanced AI behavior.
- Reductionism is false. If modeling a large-scale system in terms of the exact behavior of its small-scale components would take longer than the age of the universe, or would require a universe-sized computer, the large-scale system isn't explicable in terms of small-scale interactions even in principle. The Sequences are incorrect to describe non-reductionism as ontological realism about large-scale entities -- the former doesn't inherently imply the latter.
- Relatedly, nothing is ontologically primitive. Not even elementary particles: if, for example, you took away the mass of an electron, it would cease to be an electron and become something else. The properties of those particles, as well, depend on having fields to interact with. And if a field couldn't interact with anything, could it still be said to exist?
- Ontology creates axiology and axiology creates ontology. We aren't born with fully formed utility functions in our heads telling us what we do and don't value. Instead, we have to explore and model the world over time, forming opinions along the way about what things and properties we prefer. And in turn, our preferences guide our exploration of the world and the models we form of what we experience. Classical game theory, with its predefined sets of choices and payoffs, only has narrow applicability, since such contrived setups are only rarely close approximations to the scenarios we find ourselves in.
How does this model handle horizontal gene transfer? And what about asexually reproducing species? In those cases, the dividing lines between species are less sharply defined.
The ideas of the Cavern are the Ideas of every Man in particular; we every one of us have our own particular Den, which refracts and corrupts the Light of Nature, because of the differences of Impressions as they happen in a Mind prejudiced or prepossessed.
Francis Bacon, Novum Organum Scientiarum, Section II, Aphorism V
The reflective oracle model doesn't have all the properties I'm looking for -- it still has the problem of treating utility as the optimization target rather than as a functional component of an iterative behavior reinforcement process. It also treats the utilities of different world-states as known ahead of time, rather than as the result of a search process, and assumes that computation is cost-free. To get a fully embedded theory of motivation, I expect that you would need something fundamentally different from classical game theory. For example, it probably wouldn't use utility functions.
Why are you a realist about the Solomonoff prior instead of treating it as a purely theoretical construct?
A theory of embedded world-modeling would be an improvement over current predictive models of advanced AI behavior, but it wouldn't be the whole story. Game theory makes dualistic assumptions too (e.g., by treating the decision process as not having side effects), so we would also have to rewrite it into an embedded model of motivation.
Cartesian frames are one of the few lines of agent foundations research in the past few years that seem promising, due to allowing for greater flexibility in defining agent-environment boundaries. Preferably, we would have a model that lets us avoid having to postulate an agent-environment boundary at all. Combining a successor to Cartesian frames with an embedded theory of motivation, likely some form of active inference, might give us an accurate overarching theory of embedded behavior.
And this is where the fundamental AGI-doom arguments – all these coherence theorems, utility-maximization frameworks, et cetera – come in. At their core, they're claims that any "artificial generally intelligent system capable of autonomously optimizing the world the way humans can" would necessarily be well-approximated as a game-theoretic agent. Which, in turn, means that any system that has the set of capabilities the AI researchers ultimately want their AI models to have, would inevitably have a set of potentially omnicidal failure modes.
This is my crux with people who have 90+% P(doom): will vNM expected utility maximization be a good approximation of the behavior of TAI? You argue that it will, but I expect that it won't.
My thinking related to this crux is informed less by the behaviors of current AI systems (although they still influence it to some extent) than by the failure of the agent foundations agenda. The dream 10 years ago was that if we started by modeling AGI as a vNM expected utility maximizer, and then gradually added more and more details to our model to account for differences between the idealized model and real-world AI systems, we would end up with an accurate theoretical system for predicting the behaviors AGI would exhibit. It would be a similar process to how physicists start with an idealized problem setup and add in details like friction or relativistic corrections.
But that isn't what ended up happening. Agent foundations researchers ended up getting stuck on the cluster of problems collectively described as embedded agency, unable to square the dualistic assumptions of expected utility theory and Bayesianism with the embedded structure of real-world AI systems. The sub-problems of embedded agency are many and too varied to allow one elegant theorem to fix everything. Instead, they point to a fundamental flaw in the expected utility maximizer model, suggesting that it isn't as widely applicable as early AI safety researchers thought.
The failure of the agent foundations agenda has led me to believe that expected utility maximization is only a good approximation for mostly-unembedded systems, and that an accurate theoretical model of advanced AI behavior (if such a thing is possible) would require a fundamentally different, less dualistic set of concepts. Coherence theorems and decision-theoretic arguments still rely on the old, unembedded assumptions and therefore don't provide an accurate predictive model.
Philosophy is frequently (probably most of the time) done in order to signal group membership rather than as an attempt to accurately model the world. Just look at political philosophy or philosophy of religion. Most of the observations you note can be explained by philosophers operating at simulacrum level 3 instead of level 1.
Bug report: when I'm writing an in-line comment on a quoted block of a post, and then select text within my comment to add formatting, the formatting menu is displayed underneath the box where I'm writing the comment. For example, this prevents me from inserting links into in-line comments.
In particular, if the sample efficiency of RL increases with large models, it might turn out that the optimal strategy for RLing early transformative models is to produce many fewer and much more expensive labels than people use when training current systems; I think people often neglect this possibility when thinking about the future of scalable oversight.
This paper found higher sample efficiency for larger reinforcement learning models (see Fig. 5 and section 5.5).
I picked the dotcom bust as an example precisely because it was temporary. The scenarios I'm asking about are ones in which a drop in investment occurs and timelines turn out to be longer than most people expect, but where TAI is still developed eventually. I asked my question because I wanted to know how people would adjust to timelines lengthening.
Then what do you mean by "forces beyond yourself?" In your original shortform it sounded to me like you meant a movement, an ideology, a religion, or a charismatic leader. Creative inspiration and ideas that you're excited about aren't from "beyond yourself" unless you believe in a supernatural explanation, so what does the term actually refer to? I would appreciate some concrete examples.
There are more than two options for how to choose a lifestyle. Just because the 2000s productivity books had an unrealistic model of motivation doesn't mean that you have to deceive yourself into believing in gods and souls and hand over control of your life to other people.
That's not as bad, since it doesn't have the rapid back-and-forth reward loop of most Twitter use.
The time expenditure isn't the crux for me; the effects of Twitter on its users' habits of thinking are the crux. Those effects also apply to people who aren't alignment researchers. For those people, trading away epistemic rationality for Twitter influence is still very unlikely to be worth it.
I strongly recommend against engaging with Twitter at all. The LessWrong community has been significantly underestimating the extent to which it damages the quality of its users' thinking. Twitter pulls its users into a pattern of seeking social approval in a fast-paced loop. Tweets shape their regular readers' thoughts into becoming more tweet-like: short, vague, lacking in context, status-driven, reactive, and conflict-theoretic. AI alignment researchers, more than perhaps anyone else right now, need to preserve their ability to engage in high-quality thinking. For them especially, spending time on Twitter isn't worth the risk of damaging their ability to think clearly.
AI safety research is speeding up capabilities. I hope this is somewhat obvious to most.
This contradicts the Bitter Lesson, though. Current AI safety research doesn't contribute to increased scaling, either through hardware advances or through algorithmic increases in efficiency. To the extent that it increases the usability of AI for mundane tasks, current safety research does so in a way that doesn't involve making models larger. Fears of capabilities externalities from alignment research are unfounded as long as the scaling hypothesis continues to hold.
The lack of leaks could just mean that there's nothing interesting to leak. Maybe William and others left OpenAI over run-of-the-mill office politics and there's nothing exceptional going on related to AI.
The concept of "the meaning of life" still seems like a category error to me. It's an attempt to apply a system of categorization used for tools, one in which they are categorized by the purpose for which they are used, to something that isn't a tool: a human life. It's a holdover from theistic worldviews in which God created humans for some unknown purpose.
The lesson I draw instead from the knowledge-uploading thought experiment -- where having knowledge instantly zapped into your head seems less worthwhile than acquiring it more slowly yourself -- is that to some extent, human values simply are masochistic. Hedonic maximization is not what most people want, even with all else being equal. This goes beyond simply valuing the pride of accomplishing difficult tasks, such as the sense of accomplishment one would get from studying on one's own, above other forms of pleasure. In the setting of this thought experiment, if you wanted the sense of accomplishment, you could get that zapped into your brain too, but much like getting knowledge zapped into your brain instead of studying yourself, automatically getting a sense of accomplishment would be of lesser value. The suffering of studying for yourself is part of what makes us evaluate it as worthwhile.
Spoilers for Fullmetal Alchemist: Brotherhood:
Father is a good example of a character whose central flaw is his lack of green. Father was originally created as a fragment of Truth, but he never tries to understand the implications of that origin. Instead, he only ever sees God as something to be conquered, the holder of a power he can usurp. While the Elric brothers gain some understanding of "all is one, one is all" during their survival training, Father never does -- he never stops seeing himself as a fragile cloud of gas inside a flask, obsessively needing to erect a dichotomy between controller and controlled. Not once in the series does he express anything resembling awe. When Father finally does encounter God beyond the Doorway of Truth, he doesn't recognize what he's seeing. The Elric brothers have artistic expressions of wonderment toward God inscribed on their Doorways of Truth, but Father's Doorway of Truth is blank.
Father's lack of green also extends to how he sees humans. It never seems to occur to Father that the taboo against human transmutation is anything more than an arbitrary rule. To him, humans are only ever tools or inconveniences, not people to appreciate for their own sake or look to for guidance. Joy-in-the-Other is what Father most deeply desires, but he doesn't recognize this need.
Mostly the first reason. The "made of atoms that can be used for something else" piece of the standard AI x-risk argument also applies to suffering conscious beings, so an AI would be unlikely to keep them around if the standard AI x-risk argument ends up being true.
It's worth noting that no reference to preferences has yet been made. That's interesting because it suggests that there are both 0P-preferences and 1P-preferences. That intuitively makes sense, since I do care about both the actual state of the world, and what kind of experiences I'm having.
Believing in 0P-preferences seems to be a map-territory confusion, an instance of the Tyranny of the Intentional Object. The robot can't observe the grid in a way that isn't mediated by its sensors. There's no way for 0P-statements to enter into the robot's decision loop, and accordingly act as something the robot can have preferences over, except by routing through 1P-statements. Instead of directly having a 0P-preference for "a square of the grid is red," the robot would have to have a 1P-preference for "I believe that a square of the grid is red."
What's your model of inflation in an AI takeoff scenario? I don't know enough about macroeconomics to have a good model of what AI takeoff would do to inflation, but it seems like it would do something.
You're underestimating how hard it is to fire people from government jobs, especially when those jobs are unionized. And even if there are strong economic incentives to replace teachers with AI, that still doesn't address the ease of circumvention. There's no surer way to make teenagers interested in a topic than to tell them that learning about it is forbidden.
All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. "Teachers" become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.
This sequence of steps looks implausible to me. Teachers would have a vested interest in preventing it, since their jobs would be on the line. A requirement for all teaching materials to be AI-generated would also be trivially easy to circumvent, either by teachers or by the students themselves. Any administrator who tried to do these things would simply have their orders ignored, and the Streisand Effect would lead to a surge of interest in pre-AI documents among both teachers and students.
Why do you ordinarily not allow discussion of Buddhism on your posts?
Also, if anyone reading this does a naturalist study on a concept from Buddhist philosophy, I'd like to hear how it goes.
An edgy writing style is an epistemic red flag. A writing style designed to provoke a strong, usually negative, emotional response from the reader can be used to disguise the thinness of the substance behind the author's arguments. Instead of carefully considering and evaluating the author's arguments, the reader gets distracted by the disruption to their emotional state and reacts to the text in a way that more closely resembles a trauma response, with all the negative effects on their reasoning capabilities that such a response entails. Some examples of authors who do this: Friedrich Nietzsche, Grant Morrison, and The Last Psychiatrist.
OK, so maybe this is a cool new way to look at certain aspects of GPT ontology... but why this primordial ontological role for the penis?
"Penis" probably has more synonyms than any other term in GPT-J's training data.
I particularly wish people would taboo the word "optimize" more often. Referring to a process as "optimization" papers over questions like:
- What feedback loop produces the increase or decrease in some quantity that is described as "optimization?" What steps does the loop have?
- In what contexts does the feedback loop occur?
- How might the effects of the feedback loop change between iterations? Does it always have the same effect on the quantity?
- What secondary effects does the feedback loop have?
There's a lot hiding behind the term "optimization," and I think a large part of why early AI alignment research made so little progress was because people didn't fully appreciate how leaky of an abstraction it is.
The "pure" case of complete causal separation, as with civilizations in separate regions of a multiverse, is an edge case of acausal trade that doesn't reflect what the vast majority of real-world examples look like. You don't need to speculate about galactic-scale civilizations to see what acausal trade looks like in practice: ordinary trade can already be modeled as acausal trade, as can coordination between ancestors and descendants. Economic and moral reasoning already have elements of superrationality to the extent that they rely on concepts such as incentives or universalizability, which introduce superrationality by conditioning one's own behavior on other people's predicted behavior. This ordinary acausal trade doesn't require formal proofs or exact simulations -- heuristic approximations of other people's behavior are enough to give rise to it.
There are some styles of meditation that are explicitly described as "just sitting" or "doing nothing."
Trust and distrust are social emotions. To feel either of them toward nature is to anthropomorphize it. In that sense, "deep atheism" is closer to theism than "shallow atheism," in some cases no more than a valence-swap away.
An actually-deeply-atheistic form of atheism would involve stripping away anthropomorphization instead of trust. It would start with the observation that nature is alien and inhuman and would extend that observation to more places, acting as a kind of inverse of animism. This form of atheism would remove attributions of properties such as thought, desire, and free will from more types of entities: governments, corporations, ideas, and AI. At its maximum extent, it would even be applied to the processes that make up our own minds, with the recognition that such processes don't come with any inherent essence of humanness attached. To really deepen atheism, make it illusionist.
Is trade ever fully causal? Ordinary trade can be modeled as acausal trade with the "no communication" condition relaxed. Even in a scenario as seemingly causal as using a vending machine, trade only occurs if the buyer believes that the vending machine will actually dispense its goods and not just take the buyer's money. Similarly, the vending machine owner's decision to set up the machine was informed by predictions about whether or not people would buy from it. The only kind of trade that seems like it might be fully causal is a self-executing contract that's tied to an external trigger, and for which both parties have seen the source code and verified that the other party has enough resources to make the agreed-upon trade. Would a contract like that still have some acausal element anyway?
I agree: the capabilities of AI romantic partners probably aren't the bottleneck to their wider adoption, considering the success of relatively primitive chatbots like Replika at attracting users. People sometimes become romantically attached to non-AI anime/video game characters despite not being able to interact with them at all! There doesn't appear to be much correlation between the interactive capabilities of fictional-character romantic partners and their appeal to users/followers.
- Sculpture wouldn't be immune if robots get good enough, but live dance and theater still would be. I don't expect humanoid robots to ever become completely indistinguishable from biological humans.
- I agree, since dance and theater are already so frequently experienced in video form.
The future you're describing only applies in Looking-At-Screens World. In sculpture, dance, and live theater, to name a few, human artists would still dominate. If generative AI achieved indistinguishability from human digital artists, I expect that those artists would shift toward more concrete media. Those concrete media would also become higher-status due to still requiring human artists.
I was comparing it to base-rate forecasting. Twitter leads people to over-update on evidence that isn't actually very strong, making their predictions worse by moving their probabilities too far from the base rates.
I've come to believe (~65%) that Twitter is anti-informative: that it makes its users' predictive calibration worse on average. On Manifold, I frequently adopt a strategy of betting against Twitter hype (e.g., on the LK-99 market), and this strategy has been profitable for me.
It seems like fixed points could be used to replace the concept of utility, or at least to ground it as an inferred property of more fundamental features of the agent-environment system. The concept of utility is motivated by the observation that agents have preference orderings over different states. Those preference orderings are statements about the relative stability of different states, in terms of the direction in which an agent tends to transition between them. It seems duplicative to have both utilities and fixed points as two separate descriptions of state transition processes in the agent-environment system; utilities look like they could be defined in terms of fixed points.
As one preliminary idea for how to do this, you could construct a fully connected graph in which the vertices are the probability distributions that satisfy . The edges are beliefs that represent hypothetical transitions between the fixed points. The graph would take the place of a preference ordering by describing the tendency of the agent to move between the fixed points if given the option. (You could also model incomplete preferences by not making the graph fully connected.) Performing power iteration with the transition matrix of would act as a counterpart to moving through the preference ordering.
Further exploration of this unification of utilities and fixed points could involve connecting to the beliefs that are actually, rather than just counterfactually, present in the agent-environment system, to describe what parts of the system the agent can control. Having a way to represent that connection could let us rewrite the instrumental constraint to not rely on .
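The power-iteration idea above can be illustrated with a toy sketch. Everything here is hypothetical: the three "fixed-point" states and the transition matrix are made-up stand-ins for the fixed points and counterfactual-transition beliefs described above, chosen only to show how power iteration would recover a preference-like ranking.

```python
import numpy as np

# Column-stochastic transition matrix over three fixed-point states:
# entry [i, j] is the (made-up) probability of moving to state i when
# given the option to leave state j.
T = np.array([
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.2],
    [0.2, 0.3, 0.6],
])

# Start from a uniform distribution over the fixed points.
p = np.full(3, 1.0 / 3.0)

# Power iteration: repeatedly apply T until p converges to the dominant
# eigenvector, i.e. the stationary distribution over fixed points.
for _ in range(200):
    p = T @ p
    p /= p.sum()

# The stationary distribution ranks states by how strongly the dynamics
# tend toward them -- a stand-in for moving through a preference ordering.
ranking = np.argsort(-p)
```

The stationary distribution plays the role of the preference ordering: states the agent tends to transition into (and rarely out of) accumulate probability mass, while unstable states lose it.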
What do other people here think of quantum Bayesianism as an interpretation of quantum mechanics? I've only just started reading about it, but it seems promising to me. It lets you treat probabilities in quantum mechanics and probabilities in Bayesian statistics as having the same ontological status: both are properties of beliefs, whereas in some other interpretations of quantum mechanics, probabilities are properties of an external system. This match allows quantum mechanics and Bayesian statistics to be unified into one overarching approach, without requiring you to postulate additional entities like unobserved Everett branches.
"Tapping out" has a different meaning in Magic: the Gathering (tapping all your lands) that could create some confusion.
"Agent" is an incoherent concept.
I asked on Discord and someone told me this:
A simple way to quantify this: first define a "feature" as some decision boundary over the data domain, then train a linear classifier to predict that decision boundary from the network's activations on that data. Quantify the "linearity" of the feature in the network as the accuracy that the linear classifier achieves.
For example, train a classifier to detect when some text has positive or negative sentiment, then pass the same text through some pretrained LLM (e.g. BERT) whose "feature-linearity" you're trying to measure, and try to predict the sentiment from the BERT's activation vectors using linear regression. The accuracy of this linear model tells you how linear the "sentiment" feature is in your LLM.
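The procedure described above can be sketched in a few lines. This is an illustrative toy only: real activation vectors from a pretrained model like BERT are replaced here with synthetic activations in which the "feature" (the binary label) is linearly encoded along a random direction, and the probe is a least-squares linear regression as the comment suggests.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, dim = 2000, 64
labels = rng.integers(0, 2, size=n_samples)  # the "feature", e.g. sentiment

# Synthetic stand-in for a model's activation vectors: the feature is
# linearly encoded along a random direction, plus Gaussian noise.
direction = rng.normal(size=dim)
activations = rng.normal(size=(n_samples, dim)) + np.outer(2 * labels - 1, direction)

# Train/test split.
split = n_samples // 2
X_train, X_test = activations[:split], activations[split:]
y_train, y_test = labels[:split], labels[split:]

# Linear probe: regress +/-1 targets on the activations, then classify
# by the sign of the fitted linear function.
targets = 2 * y_train - 1
w, *_ = np.linalg.lstsq(X_train, targets, rcond=None)
predictions = (X_test @ w > 0).astype(int)

# Held-out probe accuracy = the proposed "linearity" score for the feature.
linearity_score = (predictions == y_test).mean()
```

Since the synthetic feature is linearly encoded by construction, the probe recovers it with near-perfect accuracy; for a real model, lower scores would indicate that the feature is represented nonlinearly (or not at all) in those activations.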
This post seems to focus too much on the denotative content of propaganda rather than the social context in which it occurs. Effective propaganda requires saturation that creates common knowledge, or at least higher-than-first-order knowledge. People want to believe what their friends believe. If you used AI to generate political messages that were custom-tailored to their recipients, they would fail as propaganda, since the recipients wouldn't know that all their friends were receiving the same message. Message saturation and conformity-rewarding environments are necessary for propaganda to succeed; denotative content barely matters. This makes LLMs practically useless for propagandists, since they don't establish higher-order knowledge and don't contribute to creating an environment in which conformity is socially necessary.
(Overemphasis on the denotative meaning of communications in a manner that ignores their social context is a common bias on LessWrong more generally. Discussions of persuasion, especially AI-driven persuasion, are where it tends to lead to the biggest mistakes in world-modeling.)
I agree that the vast majority of people attempting to do targeting advertising do not have sufficient data. But that doesn't tell us much about whether the big 5 tech companies, or intelligence agencies, have sufficient data to do that, and aren't being really loud about it.
If any of the big tech companies had the capability for actually-good targeted advertising, they'd use it. The profit motive would be very strong. The fact that targeted ads still "miss" so frequently is strong evidence that nobody has the highly advanced, scalable, personalized manipulation capabilities you describe.
Social media recommendation algorithms aren't very powerful either. For instance, when I visit YouTube, it's not unusual for it to completely fail to recommend anything I'm interested in watching. The algorithm doesn't even seem to have figured out that I've never played Skyrim or that I'm not Christian. In the scenario in which social media companies have powerful manipulation capabilities that they hide from the public, the gap between the companies' public-facing and hidden recommendation systems would be implausibly large.
As for chaotic dynamics, there's strong experimental evidence that they occur in the brain, and even if they didn't, they would still occur in people's surrounding environments. Even if it weren't prohibitively expensive to point millions or billions of sensors at one person, that still wouldn't be enough to predict everything. But tech companies and security agencies don't have millions or billions of sensors pointed at each person. Compared to the entirety of what a person experiences and thinks, computer use patterns are a very sparse signal even for the most terminally online segment of the population (let alone your very offline grandma). Hence the YouTube algorithm flubbing something as basic as my religion -- there's just too much relevant information they don't have access to.
In a world in which the replication attempts went the other direction and social priming turned out to be legit, I would probably agree with you. But even in controlled laboratory settings, human behavior can't be reliably "nudged" with subliminal cues. The human brain isn't a predictable computer program for which a hacker can discover "zero days." It's a noisy physical organ that's subject to chaotic dynamics and frequently does things that would be impossible to predict even with an extremely extensive set of behavioral data.
Consider targeted advertising. Despite the amount of data social media companies collect on their users, ad targeting still sucks. Even in the area of attempted behavior manipulation that's subject to more optimization pressure than any other, companies still can't predict, let alone control, their users' purchasing decisions with anything close to consistency. Their data simply isn't sufficient.
What would it take to make nudges actually work? Even if you covered the entire surface of someone's living area with sensors, I doubt you'd succeed. That would just give you one of the controlled laboratory environments in which social priming still failed to materialize. As mentioned above, the brain is a chaotic system. This makes me think that reliably superhuman persuasion at scale would be impractical even for a superintelligence, aside from with brain-computer interfaces.