Posts

Disagreement on AGI Suggests It’s Near 2025-01-07T20:42:43.456Z
Limits to Learning: Rethinking AGI’s Path to Dominance 2023-06-02T16:43:25.635Z

Comments

Comment by tangerine on The present perfect tense is ruining your life · 2025-02-20T22:49:48.880Z · LW · GW

What do you make of feral children like Genie? While there are not many counterfactuals to cultural learning—probably mostly because depriving children of cultural learning is considered highly immoral—feral children do provide strong evidence that humans who are deprived of cultural learning do not come close to being functional adults. Additionally, it seems obvious that people who do not receive certain training, e.g., those who do not learn math or carpentry, generally have low capability in that domain.

the genetic changes come first, then the cultural changes come after

You mean to say that the human body was virtually “finished evolving” 200,000 years ago, thereby laying the groundwork for cultural optimization which took over from that point? Henrich’s thesis of gene-culture coevolution contrasts with this view and I find it to be much more likely to be true. For example, the former thesis posits that humans lost a massive amount of muscle strength (relative to, say, chimpanzees) over many generations and only once that process had been virtually “completed”, started to compensate by throwing rocks or making spears when hunting other animals, requiring much less muscle strength than direct engagement. This raises the question: how did our ancestors survive in the time when muscle strength had already significantly decreased, but tool usage did not exist yet? Henrich’s thesis answers this by saying that such a time did not exist; throwing rocks came first, which provided the evolutionary incentive for our ancestors to expend less energy on growing muscles (since throwing rocks suffices for survival and requires less muscle strength). The subsequent invention of spears provided further incentive for muscles to grow even weaker.

There are many more examples like the one above. Perhaps the most important one is that as the amount of culture grows (also including things like rudimentary language and music), a larger brain has an advantage because it can learn more, and more quickly (as also evidenced by the LLM scaling laws). Without culture, this evolutionary incentive for larger brains is much weaker. The incentive for larger brains leads to a number of other peculiarities specific to humans, such as premature birth, painful birth and fontanelles.

Comment by tangerine on The present perfect tense is ruining your life · 2025-02-20T21:03:42.604Z · LW · GW

How do LLMs and the scaling laws make you update in this way? They make me update in the opposite direction. For example, I also believe that the human body is optimized for tool use and scaling, precisely because of the gene-culture coevolution that Henrich describes. Without culture, this optimization would not have occurred. Our bodies are cultural artifacts.

Cultural learning is an integral part of the scaling laws; the scaling laws show that indefinitely scaling the number of parameters in a model doesn't quite work; the training data also has to scale, with the implication that that data is some kind of cultural artifact, where the quality of that artifact determines the capabilities of the resulting model. LLMs work because of the accumulated culture that goes into them. This is no less true for “thinking” models like o1 and o3, because the way they think is very heavily influenced by the training data. Thinking models do so well because thinking becomes possible at all, not because thinking is something inherently beyond the training data. These models can think because of the culture they absorbed, which includes a lot of examples of thinking. Moreover, the degree to which Reinforcement Learning determines the capabilities of thinking models is small compared to Supervised Learning, because, firstly, less compute is spent on RL than on SL, and, secondly, RL is much less sample-efficient than SL.
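To illustrate the point that data has to scale along with parameters, here is a minimal sketch assuming the commonly cited Chinchilla heuristic of roughly 20 training tokens per parameter for compute-optimal training; the function name and the exact constant are illustrative assumptions, not figures taken from the scaling-law papers.

```python
# Minimal sketch: under the (assumed) Chinchilla heuristic of ~20 tokens
# per parameter, the compute-optimal amount of training data grows in
# lockstep with model size, so parameters alone cannot be scaled indefinitely.

def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rough compute-optimal token count for a model with n_params parameters."""
    return n_params * tokens_per_param

if __name__ == "__main__":
    for n_params in (1e9, 1e10, 1e11, 1e12):
        tokens = compute_optimal_tokens(n_params)
        print(f"{n_params:.0e} params -> ~{tokens:.0e} training tokens")
```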

Comment by tangerine on ≤10-year Timelines Remain Unlikely Despite DeepSeek and o3 · 2025-02-15T18:33:54.959Z · LW · GW

Current LLMs can only do sequential reasoning of any kind by adjusting their activations, not their weights, and this is probably not enough to derive and internalize new concepts à la C.

For me this is the key bit which makes me update towards your thesis.

Comment by tangerine on The Failed Strategy of Artificial Intelligence Doomers · 2025-01-31T23:35:46.695Z · LW · GW

This is indeed an interesting sociological breakdown of the “movement”, for lack of a better word.

I think the injection of the author’s beliefs about whether or not short timelines are correct distracts from the central point. For example, the author states the following.

there is no good argument for when [AGI] might be built.

This is a bad argument against worrying about short timelines, bordering on intellectual dishonesty. Building anti-asteroid defenses is a good idea even if you don’t know that one is going to hit us within the next year.

The argument that it’s better to have AGI appear sooner rather than later because institutions are slowly breaking down is an interesting one. It’s also nakedly accelerationist, which is strangely inconsistent with the argument that AGI is not coming soon, and in my opinion very naïve.

Besides that, I think it’s generally a good take on the state of the movement, i.e., like pretty much any social movement, it has a serious problem with coherence and collateral damage, and it’s not clear whether there’s any positive effect.

Comment by tangerine on The present perfect tense is ruining your life · 2025-01-27T22:18:43.100Z · LW · GW

Ah, I see now. Thank you! I remember reading this discussion before and agree with your viewpoint that he is still directionally correct.

Comment by tangerine on The present perfect tense is ruining your life · 2025-01-27T21:47:14.409Z · LW · GW

he apparently faked some of his evidence

Would be happy to hear more about this. Got any links? A quick Google search doesn’t turn up anything.

Comment by tangerine on Why care about AI personhood? · 2025-01-27T21:38:56.666Z · LW · GW

You talk about personhood in a moral and technical sense, which is important, but I think the legal and economic senses of personhood also need to be taken into account. Let me try to explain.

I work for a company where there’s a lot of white-collar busywork going on. I’ve come to realize that the value of this busywork is not so much the work itself (indeed a lot of it is done by fresh graduates and interns with little to no experience), but the fact that the company can bear responsibility for the work due to its somehow good reputation (something something respectability cascades), i.e., “Nobody ever got fired for hiring them”. There is not a lot of incentive to automate any of this work, even though I can personally attest that there is a lot of low-hanging fruit. (A respected senior colleague of mine plainly stated to me, privately, that most of it is bullshit jobs.)

By my estimation, “bearing responsibility” in the legal and economic sense means that an entity can be punished, where being punished means that something happens which disincentivizes it and other entities from doing the same. (For what it’s worth, I think much of our moral and ethical intuitions about personhood can be derived from this definition.) AI cannot function as a person of any legal or economic consequence (and by extension, moral or ethical consequence) if it cannot be punished or learn in that way. I assume it will be able to eventually, but until then most of these bullshit jobs will stay virtually untouchable because someone needs to bear responsibility. How does one punish an API? Currently, we practically only punish the person serving the API or the person using it.

There are two ways I see to overcome this. One way is that AI eventually becomes able to act as a drop-in replacement for human agents in the sense that it can bear responsibility and be punished as described above. With the current systems this is clearly not (yet) the case.

The other way is that the combination of cost, speed and quality becomes too good to ignore, i.e., that we get to a point where we can say “Nobody ever got fired for using AI” (on a task-by-task basis). This depends on the trade-offs that we’re willing to make between the different aspects of using AI for a given task, such as cost, speed, quality, reliability and interpretability. This is already driving use of AI for some tasks where the trade-off is good enough, while for others it’s not nearly good enough or still too risky to try.

Comment by tangerine on The present perfect tense is ruining your life · 2025-01-27T20:28:50.450Z · LW · GW

Reminds me of mimetic desire:

Man is the creature who does not know what to desire, and he turns to others in order to make up his mind. We desire what others desire because we imitate their desires.

However, I only subscribe to this theory insofar as it is supported by Joseph Henrich's work, e.g., The Secret of Our Success, in which Henrich provides evidence that imitation (including imitation of desires) is the basis of human-level intelligence. (If you’re curious how that works, I highly recommend Scott Alexander’s book review.)

Comment by tangerine on Disagreement on AGI Suggests It’s Near · 2025-01-10T16:28:20.989Z · LW · GW

But we already knew that some people think AGI is near and others think it's farther away!

And what do you conclude based on that?

I would say that as those early benchmarks ("can beat anyone at chess", etc.) are achieved without producing what "feels like" AGI, people are forced to make their intuitions concrete, or anyway reckon with their old bad operationalizations of AGI.

The relation between the real world and our intuition is an interesting topic. When people’s intuitions are violated (e.g., the Turing test is passed but it doesn’t “feel like” AGI), there’s a temptation to try to make the real world fit the intuition, when it is more productive to accept that the intuition is wrong. That is, maybe achieving AGI doesn’t feel like you expect. But that can be a fine line to walk. In any case, privileging an intuitive map above the actual territory is about as close as you can get to a “cardinal sin” for someone who claims to be rational. (To be clear, I’m not saying you are doing that.)

Comment by tangerine on Disagreement on AGI Suggests It’s Near · 2025-01-09T21:56:16.423Z · LW · GW

They spend more time thinking about the concrete details of the trip, not because they know the trip is happening soon, but because some think the trip is happening soon. Disagreement on and attention to concrete details is driven by only some people saying that the current situation looks like, or is starting to look like, the event occurring according to their interpretation. If the disagreement had happened at the beginning, they would soon have started using different words.

In the New York example, it could be that when someone says, “Guys, we should really buy those Broadway tickets. The trip to New York is next month already,” they prompt the response, “What? I thought we were going the month after!”, hence the disagreement. If this detail had been discussed earlier, there might have been the “February trip” and the “March trip” in order to disambiguate the trip(s) to New York.

In the case of AGI, some people’s alarm bells are currently going off, prompting others to say that more capabilities are required. What seems to have happened is that people at one point latched on to the concept of AGI, thinking that their interpretation was virtually the same as everyone else’s because the term lacked a precise definition. Again, if they had disagreed with the definition to begin with, they would have used a different word altogether. Now that some people are claiming that AGI is here or coming soon, it turns out that the interpretations do in fact differ. The most obnoxious cases are when people disagree with their own past interpretation once that interpretation is about to be satisfied, on the basis of some deeper, undefined intuition (or, in the case of OpenAI and Microsoft, ulterior motives). This of course is also known as “moving the goalposts”.

Once upon a time, not that long ago, AGI was interpreted by many as “it can beat anyone at chess”, “it can beat anyone at go” or “it can pass the Turing test”. We are there now, according to those interpretations.

Whether or not AGI exists depends only marginally on any one person’s interpretation. Words are a communicative tool and therefore depend on others’ interpretations. That is, the meanings of words don’t fall out of the sky; they don’t pass through a membrane from another reality. Instead, we define meaning collectively (and often unconsciously). For example, “What is intelligence?” is a question of how that word is in practice interpreted by other people. “How should it be interpreted (according to me personally)?” is a valid but different question.
 

Comment by tangerine on Disagreement on AGI Suggests It’s Near · 2025-01-08T21:58:15.871Z · LW · GW

The amount of contention says something about whether an event occurred according to the average interpretation. Whether it occurred according to your specific interpretation depends on how close that interpretation is to the average interpretation.

You can't increase the probability of getting a million dollars by personally choosing to define a contentious event as you getting a million dollars.

Comment by tangerine on Disagreement on AGI Suggests It’s Near · 2025-01-08T21:32:19.376Z · LW · GW

I wouldn’t call either hypothesis invalid. People just use the same words to refer to different things. This is true for all words and hypotheses to some degree. When there is little to no contention that we’re not in New York, or that we don’t have AGI, or that the Second Coming hasn’t happened, then those differences are not apparent. But presumably there is some correlation between the different interpretations, such that when the Event does take place, contention rises to a degree that increases as that correlation decreases[1]. (Where by Event I mean some event that is semantically within some distance of the average interpretation[2].)

Formally, I say that P(contention | AGI) is close to 1 when σ is large, meaning P(no contention | AGI) is small, where σ can be considered a measure of how vaguely the term AGI is specified.

The more vaguely an event is specified, the more contention there is when the event takes place. Conversely, the more precisely an event is specified, the less contention there is when the event takes place. It’s kind of obvious when you think about it. Using Bayes’ law we can additionally say the following.

P(AGI | contention) = P(contention | AGI) · P(AGI) / P(contention)

That is, when there is contention about whether a vaguely defined event such as AGI has occurred, your posterior probability should be high, modulated by your prior for AGI (the posterior monotonically increases with the prior). I think it's also possible to say that the more contentious an event the higher the probability that it has occurred, but that may require some additional assumptions about the distribution of interpretations in semantic space.

An important difference between AGI and the Second Coming (at least among rationalists and AI researchers) is that the latter generally has a much lower prior probability than the former.
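As a toy illustration of this argument (my own sketch, not part of the original comment), here is a small Python rendering of the Bayes calculation; all the probability values are made-up numbers chosen only to show how a high P(contention | event) and a low prior interact.

```python
# Toy numerical check of the argument above: for a vaguely specified event,
# contention is likely once the event occurs, so observing contention raises
# the posterior that it has occurred, modulated by the prior.
# All numbers are made up purely for illustration.

def posterior(prior: float, p_contention_given_event: float,
              p_contention_given_no_event: float) -> float:
    """P(event | contention) via Bayes' law."""
    p_contention = (p_contention_given_event * prior
                    + p_contention_given_no_event * (1 - prior))
    return p_contention_given_event * prior / p_contention

# A vaguely specified event with a non-trivial prior (e.g., "AGI").
print(posterior(prior=0.3, p_contention_given_event=0.9,
                p_contention_given_no_event=0.1))   # ~0.79

# Same likelihoods, but a much lower prior (e.g., the Second Coming).
print(posterior(prior=0.001, p_contention_given_event=0.9,
                p_contention_given_no_event=0.1))   # ~0.009
```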

  1. ^

    Assuming rational actors.

  2. ^

    Assuming a unimodal distribution of interpretations in semantic space.

Comment by tangerine on Disagreement on AGI Suggests It’s Near · 2025-01-08T10:58:23.408Z · LW · GW

You’re kind of proving the point; the Second Coming is so vaguely defined that it might as well have happened. Some churches preach this.

If the Lord Himself did float down from Heaven and gave a speech on Capitol Hill, I bet lots of Christians would deride Him as an impostor.

Comment by tangerine on RohanS's Shortform · 2025-01-05T13:42:09.801Z · LW · GW

Thank you for the reply!

I’ve actually come to a remarkably similar conclusion as described in this post. We’re phrasing things differently (I called it the “myth of general intelligence”), but I think we’re getting at the same thing. The Secret of Our Success has been very influential on my thinking as well.

This is also my biggest point of contention with Yudkowsky’s views. He seems to suggest (for example, in this post) that capabilities are gained from being able to think well and a lot. In my opinion he vastly underestimates the amount of data/experience required to make that possible in the first place, for any particular capability or domain. This speaks to the age-old (classical) rationalism vs empiricism debate, where Yudkowsky seems to sit on the rationalist side, whereas it seems you and I would lean more to the empiricist side.
 

Comment by tangerine on The Intelligence Curse · 2025-01-04T23:39:03.580Z · LW · GW

Entities that reproduce with mutation will evolve under selection. I'm not so sure about the "natural" part. If AI takes over and starts breeding humans for long floppy ears, is that selection natural?

In some sense, all selection is natural, since everything is part of nature, but an AI that breeds humans for some trait can reasonably be called artificial selection (and mesa-optimization). If such a breeding program happens to allow the system to survive, nature selects for it. If not, it tautologically doesn’t. In any case, natural selection still applies.

But there won't necessarily be more than one AI, at least not in the sense of multiple entities that may be pursuing different goals or reproducing independently. And even if there are, they won't necessarily reproduce by copying with mutation, or at least not with mutation that's not totally under their control with all the implications understood in advance. They may very well be able prevent evolution from taking hold among themselves. Evolution is optional for them. So you can't be sure that they'll expand to the limits of the available resources.

In a chaotic and unpredictable universe such as ours, survival is virtually impossible without differential adaptation and not guaranteed even with it. (See my reply to lukedrago below.)

Comment by tangerine on The Intelligence Curse · 2025-01-04T23:11:52.500Z · LW · GW

I don't know how selection pressures would take hold exactly, but it seems to me that in order to prevent selection pressures, there would have to be complete and indefinite control over the environment. This is not possible because the universe is largely computationally irreducible and chaotic. Eventually, something surprising will occur which an existing system will not survive. Diverse ecosystems are robust to this to some extent, but that requires competition, which in turn creates selection pressures.

Comment by tangerine on RohanS's Shortform · 2025-01-04T18:18:56.315Z · LW · GW

humans are general because of the data, not the algorithm

Interesting statement. Could you expand a bit on what you mean by this?

Comment by tangerine on The Intelligence Curse · 2025-01-03T22:16:01.042Z · LW · GW

You cannot in general pay a legislator $400 to kill a person who pays no taxes and doesn't vote.

Indeed not directly, but when the inferential distance increases it quickly becomes more palatable. For example, most people would rather buy a $5 T-shirt made by a child for starvation wages on the other side of the world than a $100 T-shirt made locally by someone who can afford to buy a house with their salary. And many of those same T-shirt buyers would bury their heads in the sand when made aware of such a fact.

If I can tell an AI to increase profits, incidentally causing the AI to ultimately kill a bunch of people, I can at least claim a clean conscience by saying that wasn't what I intended, even though it happened just the same.

In practice, legislators do this sort of thing routinely. They pass legislation that causes harm—sometimes a lot of harm—and sleep soundly.

Comment by tangerine on The Intelligence Curse · 2025-01-03T21:53:26.603Z · LW · GW

Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won't be democracy.

Agreed. The rich and powerful could pick off more and more economically irrelevant classes while promising the remaining ones the same won't happen to them, until eventually they can get everything they need from AI and live in enclaves protected by vast drone armies. Pretty bleak, but seems like the default scenario given the current incentives.

It seems really hard to think of any examples of such tech.

I think you would effectively have to build extensions to people's neocortexes in such a way that those extensions cannot ever function on their own. Building AI agents is clearly not that.

Comment by tangerine on The Intelligence Curse · 2025-01-03T21:34:35.199Z · LW · GW

Excellent post. This puts into words really well some thoughts that I have had.

I would also like to make an additional point: it seems to me that a lot of people (perhaps less so on LessWrong) hold the view that humanity has somehow “escaped” the process of evolution by natural selection, since we can choose to do a variety of things that our genes do not “want”, such as having non-reproductive sex. This is wrong. Evolution by natural selection is inescapable. When resources are relatively abundant, which is currently true for many Western nations, it can seem that it’s escapable because the selection pressures are relatively low and we can thus afford to spend resources somewhat frivolously. Since resources are not infinitely abundant, over time those selection pressures will increase. Those selection pressures will select out unproductive elements.

This means that even if we managed to get alignment right and form a utopia where everybody gets everything they need or more, people will eventually still be discarded because they cannot produce anything of economic value. In your post, capitalist incentives effectively play the role of natural selection, but even if we converted to a communist utopia, the result would ultimately be the same once selection pressures increase sufficiently, and they will.

Comment by tangerine on Considerations on orca intelligence · 2024-12-29T21:54:05.668Z · LW · GW

Very interesting write-up! When you say that orcas could be more intelligent than humans, do you mean something similar to them having a higher IQ or g factor? I think this is quite plausible.

My thinking has been very much influenced by Joseph Henrich's The Secret of Our Success, which you mentioned. For example, looking at the behavior of feral (human) children, it seems quite obvious to me now that all the things that humans can do better than other animals are all things that humans imitate from an existing cultural “reservoir” so to speak and that an individual human has virtually no hope of inventing within their lifetime, such as language, music, engineering principles, etc.

Gene-culture coevolution has resulted in a human culture and a human body that are adapted to each other. For example, the human digestive system is quite short because we've been cooking food for a long time, humans have muscles that are very weak compared to those of our evolutionary cousins because we've learned to make do with tools (weapons) instead and we have relatively protracted childhoods to absorb all of the culture required to survive and reproduce. If we tried to “uplift” orcas, the fact that human culture has co-evolved with the human body and not with the orca body would likely be an issue in trying to get them to learn it (a bit like trying to get software built for x86 to run on an ARM processor). Still, I think progress in LLM scaling shows that neural networks (artificial or biological) are able to absorb a significant chunk of human culture, as long as you have the right training method. I've made a similar point here.

There is nothing in principle that stops a chimpanzee from being able to read and write English, for example. It’s just that we haven’t figured out the methods to configure their brains into that state, because they don’t have a strong tendency to imitate, which human children do have, which makes training them much easier.

Comment by tangerine on What are the strongest arguments for very short timelines? · 2024-12-24T12:24:14.130Z · LW · GW

I agree with this view. Deep neural nets trained with SGD can learn anything. (“The models just want to learn.”) Human brains are also not really different from brains of other animals. I think the main struggles are 1. scaling up compute, which follows a fairly predictable pattern, and 2. figuring out what we actually want them to learn, which is what I think we’re most confused about.

Comment by tangerine on Daniel Dennett has died (1942-2024) · 2024-04-20T08:35:13.076Z · LW · GW

My introduction to Dennett, half a lifetime ago, was this talk: 

That was the start of his profound influence on my thinking. I especially appreciated his continuous and unapologetic defense of the meme as a useful concept, despite the many detractors of memetics.

Sad to know that we won't be hearing from him anymore.

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-12T17:01:01.824Z · LW · GW

Yes. My bad, I shouldn’t have implied all hidden-variables interpretations.

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-11T16:08:13.292Z · LW · GW

Every non-deterministic interpretation has a virtually infinite Kolmogorov complexity because it has to hardcode the outcome of each random event.

Hidden-variables interpretations are uncomputable because they are incomplete.
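As a loose illustration of the Kolmogorov-complexity point (my own sketch, not a proof): a deterministic rule has a fixed-size description no matter how long it runs, while a program that must reproduce a specific sequence of irreducibly random outcomes has to carry those outcomes in its own description, so its length grows without bound as events accumulate.

```python
# Loose illustration: the "program" for a deterministic law stays the same
# size regardless of how many events it generates, whereas reproducing a
# specific sequence of irreducibly random outcomes requires hardcoding them,
# so the description grows with the number of events.

import random

DETERMINISTIC_SOURCE = (
    "state = SEED\n"
    "for _ in range(N):\n"
    "    state = update(state)\n"
    "    emit(state)\n"
)

def nondeterministic_source(outcomes: list[int]) -> str:
    # The outcomes themselves must appear in the program text.
    return f"outcomes = {outcomes!r}\nfor o in outcomes:\n    emit(o)\n"

for n_events in (10, 1_000, 100_000):
    outcomes = [random.randint(0, 1) for _ in range(n_events)]
    print(n_events,
          len(DETERMINISTIC_SOURCE),               # constant
          len(nondeterministic_source(outcomes)))  # grows roughly linearly
```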

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-09T21:42:38.030Z · LW · GW

It’s the simplest explanation (in terms of Kolmogorov complexity).

It’s also the interpretation which by far has the most elegant explanation for the apparent randomness of reality. Most interpretations provide no mechanism for the selection of a specific outcome, which is absurd. Under the MWI, randomness emerges from determinism through indexical uncertainty, i.e., not knowing which branch you’re in. Some people, such as Sabine Hossenfelder for example, get confused by this and ask, “then why am I this version of me?”, which implicitly assumes dualism, as if there is a free-floating consciousness which could in principle inhabit any branch; this is patently untrue because you are by definition this “version” of you. If you were someone else (including someone in a different branch where one of your atoms is moved by one Planck distance) then you wouldn’t be you; you would be literally someone else.

Note that the Copenhagen interpretation is also a many-worlds explanation, but with the added assumption that all but one randomly chosen world disappears when an “observation” is made, i.e., when entanglement with your branch takes place.

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-09T21:28:52.410Z · LW · GW

It’s just a matter of definition. We say that “you” and “I” are the things that are entangled with a specific observed state. Different versions of you are entangled with different observations. Nothing is stopping you from defining a new kind of person which is a superposition of different entanglements. The reason it doesn’t “look” that way from your perspective is because of entanglement and the law of the excluded middle. What would you expect to see if you were a superposition?

Comment by tangerine on my theory of the industrial revolution · 2024-02-29T17:02:01.596Z · LW · GW

Have you read Joseph Henrich’s book The Secret of Our Success and its sequel, The WEIRDest People in the World? If not, they provide a pretty comprehensive view of how humanity, and particularly the Western world, innovates, which is roughly in line with what you wrote here.

Comment by tangerine on Why you, personally, should want a larger human population · 2024-02-24T19:16:35.101Z · LW · GW

I kind of agree that most knowledge is useless, but the utility of knowledge and experience that people accrue is probably distributed like a bell curve, which means you can't just have more of the good knowledge without also accruing lots of useless knowledge. In addition, very often stuff that seems totally useless turns out to be very useful; you can't always tell which is which.

Comment by tangerine on Why you, personally, should want a larger human population · 2024-02-24T09:30:49.353Z · LW · GW

I completely agree. In Joseph Henrich’s book The Secret of Our Success, he shows that the amount of knowledge possessed by a society is proportional to the number of people in that society. Dwindling population leads to dwindling technology and dwindling quality of life.

Those who advocate for population decline are unwittingly advocating for the disappearance of the knowledge, experience and, frankly, wisdom that is required to sustain the comfortable life that they take for granted.

Keeping all that knowledge in books is not enough. Otherwise our long years in education would be unnecessary. Knowing how to apply knowledge is its own form of knowledge.

Comment by tangerine on Causality is Everywhere · 2024-02-13T23:09:03.794Z · LW · GW

If causality is everywhere, it is nowhere; declaring “causality is involved” will have no meaning. It raises the question of whether an ontology containing the concept of causality is the best one to wield for what you’re trying to achieve. Consider that causality is not axiomatic, since the laws of physics are time-reversible.

Comment by tangerine on On Dwarkesh’s 3rd Podcast With Tyler Cowen · 2024-02-04T15:44:35.163Z · LW · GW

I respect Sutskever a lot, but if he believed that he could get an equivalent world model by spending an equivalent amount of compute learning from next-token prediction using any other set of real-world data samples, why would they go to such lengths to specifically obtain human-generated text for training? They might as well just do lots of random recordings (e.g., video, audio, radio signals) and pump it all into the model. In principle that could probably work, but it’s very inefficient.

Human language is a very high-density encoding of world models, so by training on human language, models get much of their world model “for free”, because humanity has already done a lot of pre-work by sampling reality in a wide variety of ways and compressing it into the structure of language. However, our use of language still doesn’t capture all of reality exactly and I would argue it’s not even close. (Saying otherwise is equivalent to saying we’ve already discovered almost all possible capabilities, which would entail that AI actually has a hard cap at roughly human ability.)

In order to expand its world model beyond human ability, AI has to sample reality itself, which is much less sample-efficient than sampling human behavior, hence the “soft cap”.

Comment by tangerine on On Dwarkesh’s 3rd Podcast With Tyler Cowen · 2024-02-04T15:42:18.726Z · LW · GW

In theory, yes, but that’s obviously a lot more costly than running just one instance. And you’ll need to keep these virtual researchers running in order to keep the new capabilities coming. At some point this will probably happen and totally eclipse human ability, but I think the soft cap will slow things down by a lot (i.e., no foom). That’s assuming that compute and the number of researchers are even the bottleneck to new discoveries; it could also be empirical data.

Comment by tangerine on On Dwarkesh’s 3rd Podcast With Tyler Cowen · 2024-02-03T15:57:33.912Z · LW · GW

If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal, essentially the AI-Fizzle world, then we have disagreements

There is good reason to believe that AI will have a soft cap at roughly human ability (and by “soft cap” I mean that anything beyond the cap will be much harder to achieve) for the same reason that humans have a soft cap at human ability: copying existing capabilities is much easier than discovering new capabilities.

A human being born today can relatively easily achieve abilities that other humans have achieved, because you just copy them; lots of 12-year-olds can learn calculus, which is much easier than inventing it. AI will have the same issue.

Comment by tangerine on Monthly Roundup #14: January 2024 · 2024-01-24T21:44:19.655Z · LW · GW

The European socket map is deceptive. My charger will work anywhere on mainland Europe. Looking at the sockets, can you tell why?

Comment by tangerine on There is way too much serendipity · 2024-01-22T17:08:28.421Z · LW · GW

Does this count as “rational, deliberate design”? I think a case could be made for both yes and no, but I lean towards no. Humans who have studied a certain subject often develop a good intuition for what will work and what won’t and I think deep learning captures that; you can get right answers at an acceptable rate without knowing why. This is not quite rational deliberation based on theory.

Comment by tangerine on There is way too much serendipity · 2024-01-21T14:29:36.152Z · LW · GW

I think that “rational, deliberate design”, as you put it, is simply far less common (than random chance) than you think; that the vast majority of human knowledge is a result of induction instead of deduction; that theory is overrated and experimentalism is underrated.

This is also why I highly doubt that anything but prosaic AI alignment will happen.

Comment by tangerine on Seth Explains Consciousness · 2024-01-14T21:50:56.027Z · LW · GW

I don't think I disagree with what you're saying here, though we may be using different terms to say the same thing.

How does what you say here inform your thoughts about the Hard Problem?

Comment by tangerine on Dating Roundup #2: If At First You Don’t Succeed · 2024-01-02T23:02:14.640Z · LW · GW

Regarding taking hints, the other gender typically does not see all the false positives one has to deal with. What seems obvious is usually not obvious at all. In fact, a socially skilled person will always try to use plausibly deniable (i.e., not-obvious) signals and will consider anything more a gauche faux pas. Acting on such signals is therefore inherently risky and is nowadays perhaps considered more risky than it used to be, especially at work and around close friends.

For example, a few years ago, a woman I had great rapport with called me her Valentine in a very charming way. You might say that's an obvious signal, but when I tried to make plans with her she said that's just a thing she does for friends and family and there was no special meaning to it. Some people are out to get your romantic attention, but ultimately want to keep you at arm's length.

Comment by tangerine on Seth Explains Consciousness · 2023-11-14T22:41:18.876Z · LW · GW

All I’m asking for is a way for other people to determine whether a given explanation will satisfy you. You haven’t given enough information to do that. Until that changes we can’t know that we even agree on the meaning of the Hard Problem.

Comment by tangerine on Seth Explains Consciousness · 2023-11-14T20:11:07.763Z · LW · GW

Also., the existence of a problem does not depend on the existence of a solution.

Agreed, but even if no possible solution can ultimately satisfy objective properties, until those properties are defined the problem itself remains undefined. Can you define these objective properties?

Comment by tangerine on Seth Explains Consciousness · 2023-11-14T17:17:48.280Z · LW · GW

I know. Like I said, neither Chalmers nor you nor anyone else has shown it plausible that subjective experience is non-physical. Moreover, you repeatedly avoid giving an objective description of what you’re looking for.

Until either of the above change, there is no reason to think there is a Hard Problem.

Comment by tangerine on Seth Explains Consciousness · 2023-11-13T20:19:09.872Z · LW · GW

Chalmers takes hundreds of pages to set out his argument.

His argument does not bridge that gap. He, like you, does not provide objective criteria for a satisfying explanation, which means by definition you do not know what the thing is that requires explanation, no matter how many words are used trying to describe it.

Comment by tangerine on Seth Explains Consciousness · 2023-11-13T19:29:05.066Z · LW · GW

The core issue is that there’s an inference gap between having subjective experience and the claim that it is non-physical. One doesn’t follow from the other. You can define subjective experience as non-physical, as Chalmers’s definition of the Hard Problem does, but that’s not justified. I can just as legitimately define subjective experience as physical.

I can understand why Chalmers finds subjective experience mysterious, but it’s not more mysterious than the existence of something physical such as gravity or the universe in general. Why is General Relativity enough for you to explain gravity, even though the reason for the existence of gravity is mysterious?

Comment by tangerine on Seth Explains Consciousness · 2023-11-12T21:12:41.005Z · LW · GW

Let’s say the Hard Problem is real. That means solutions to the Easy Problem are insufficient, i.e., the usual physical explanations.

But when we speak about physics, we’re really talking about making predictions based on regularities in observations in general. Some observations we could explain by positing the force of gravity. Newton himself was not satisfied with this, because how does gravity “know” to pull on objects? Yet we were able to make very successful predictions about the motions of the planets and of objects on the surface of the Earth, so we considered those things “explained” by Newton’s theory of gravity. But then we noticed a slight discrepancy between some of these predictions and our observations, so Einstein came up with General Relativity to correct those predictions and now we consider these discrepancies “explained”, even though the reason why that particular theory works remains mysterious, e.g., why does spacetime exist? In general, when a hypothesis correctly predicts observations, we consider these observations scientifically explained.

Therefore to say that solutions to the Easy Problem are insufficient to explain qualia indicates (at least to me) one of two things.

  1. Qualia have no regularity that we can observe. If they really didn’t have regularities that we could observe, we wouldn’t be able to observe that they exist, which contradicts the claim that they do exist. However, they do have regularities! We can predict qualia! Which means solutions to the Easy Problem are sufficient after all, which contradicts the assumption that they’re insufficient.
  2. We’re aspiring to a kind of explanation for qualia over and above the scientific one, i.e., just predicting is not enough. You could posit any additional requirements for an explanation to qualify, but presumably we want an explanation to be true. You can’t know beforehand what’s true, so you can’t know that such additional requirements don’t disqualify the truth. There is only one thing that we know will be true however, namely that whatever we will observe in the future is what we will observe in the future. Therefore as long as the predictions of a theory don’t deviate from future observations, we can’t rule out that it’s accurately describing what’s actually going on, i.e., we can’t falsify it. In a way it’s a low bar, but it’s the best we can do. However, if a hypothesis makes predictions that are compatible with any and all observations, i.e., it’s unfalsifiable, then we can’t ever gain any information about its validity from any observations even in principle, which directly contradicts the assumption that you can find an explanation.

Comment by tangerine on Seth Explains Consciousness · 2023-11-12T17:33:01.836Z · LW · GW

You say you see colors and have other subjective experiences, and you call those qualia; I can accept that. But when I ask why solutions to the Easy Problem wouldn’t be sufficient, you say it’s because you have subjective experiences, which is circular reasoning. You haven’t said why exactly solutions to the Easy Problem don’t satisfy you, which is why I keep asking what kind of explanation would satisfy you. I genuinely do not know, based on what you have said. It doesn’t have to be scientific.

If we are talking about scientific explanation: a scientific explanation of X succeeds if it is able to predict X's, particularly novel ones, and it doesn't mispredict X's.

But it’s not clear to me how you would judge that any explanation, scientific or not, does these things for qualia, because it seems to me that solutions to the Easy Problem do exactly this; I can already predict what kind of qualia you experience, even novel ones. If I show you a piece of red paper, you will experience the qualia of red. If I give you a drink or a drug you haven’t had before I can predict that you will have a new experience. I may not be able to predict quite exactly what those experiences will be in a given situation because I don’t have complete information, but that’s true for virtually any explanation, even when using quantum mechanics.

I suspect you may now object again and say, “but that doesn’t explain subjective experience”. Then I will object again and say, “what explanation would satisfy you?”, to which you will again say, “if it predicts qualia”, to which I will say, “but we can already predict what qualia you will have in a given situation”. Then you will again object and say, “but that doesn’t explain subjective experience”. And so on.

It looks to me like you’re holding out for something you don’t know how to recognize. True, maybe an explanation is impossible, but you don’t know that either. When some great genius finally does explain it all, how will you know he’s right? You wouldn’t want to miss out, right?

They don't explain subjective experience. The Easy Problem is everything except subjective experience.

But this is the very thing in question. Can you explain to me how exactly you come to this conclusion? Having subjective experience does not in itself imply that it’s not physical.

The fact that qualia are physically mysterious can't be predicted from physics

I’m genuinely curious what you mean by this. Can you expand on this?

Comment by tangerine on Seth Explains Consciousness · 2023-11-11T21:58:41.114Z · LW · GW

Science isn't based on exactly predermining an explanation before you have it.

But then how would you know that a given explanation, scientific or not, explains qualia to your satisfaction? How will you be able to tell that that explanation is indeed what you were looking for?

If I can tell that qualia are indescribable or undetectable, I must know something of "qualia" means.

People have earnestly claimed the same thing about various deities. Do you believe in those? Why would your specific belief be true if theirs weren’t? Why are you so sure you’re not mistaken?

And if it is an objective fact that there is some irreducible subjectivity

Could be, but we don’t know that.

One could test a qualiometer on oneself.

How would you determine that it is working? That if you’re seeing something red, the qualiometer says “red”? If so, how would that show that there is something more going on than what’s explained with solutions to the Easy Problem?

it can’t be subjectively true and false at the same time, depending on who you are.

I don't know who is suggesting that.

It’s a logical consequence of claiming there is no objective fact about something.

But you can notice your own qualia.. anaesthesia makes a difference.

Again, I agree with you that subjective experience exists, but I don’t see why solutions to the Easy Problem wouldn’t satisfy you. There’s something mysterious about subjective experience, but that’s true for everything, including atoms and electromagnetic waves and chairs and the rest of objective reality. Why does anything in the universe exist? It’s “why?” all the way down.

Comment by tangerine on Seth Explains Consciousness · 2023-11-11T21:11:26.927Z · LW · GW

If I had to choose between those two phrasings I would prefer the second one, for being the most compatible between both of our notions. My notion of "emerges from" is probably too different from yours. The main difference seems to be that you're a realist about the third-person perspective, whereas I'm a nominalist about it, to use your earlier terms.

That actually sounds more like the first phrasing to me. If you are a nominalist about the third-person perspective, then it seems that you think the third-person perspective does not actually exist and the concept of the third-person perspective is borne of the first-person perspective.

Do you think this works as a double crux?

I’m not sure whether this is a good double crux, because it’s not clear enough to me what we mean by first- and third-person perspectives. It seems conceivable to me that my conception of the third-person perspective is functionally equivalent to your conception of the first-person perspective. Let me expand on that below.

If only the first-person perspective exists, then presumably you cannot be legitimately surprised, because that implies something was true outside of your first-person perspective prior to your experiencing it, unless you define that as being part of your first-person perspective, which seems contradictory to me, but functionally the same as just defining everything from the third-person perspective. The only alternative possibility that seems available is that there are no external facts, which would mean reality is actually an inconsistent series of experiences, which seems absurd; then we wouldn’t even be able to be sure of the consistency of our own reasoning, including this conversation, which defeats itself.

Comment by tangerine on GPT-2030 and Catastrophic Drives: Four Vignettes · 2023-11-11T09:13:06.939Z · LW · GW

I find scenarios in which a single agent forms a significant global threat very implausible because even for very high-IQ humans (200+), it seems very difficult to cross a large inference gap on their own.

Moreover, it would have to iterate on empirical data, which it somehow needs to gather, which will be more noticeable as it scales up.

If it employs other agents, such as copies of itself, this only exacerbates the problem, because how will the original agent be able to control its copies enough to keep them from going rogue and being noticed?

The most likely scenario to me seems one where over some number of years we willingly give these agents more and more economic power and they leverage that to gain more and more political power, i.e., by using the same levers of power that humans use and in a collective way, not through a single agent.

Comment by tangerine on Seth Explains Consciousness · 2023-11-08T09:17:41.588Z · LW · GW

Those analogies don't hold, because you're describing claims I might make about the world outside of my subjective experience ('ghosts are real', 'gravity waves are carried by angels', etc.).

The analogies do hold, because you don’t get to do special pleading and claim ultimate authority about what’s real inside your subjective experience any more than about what’s real outside of it. Your subjective experience is part of our shared reality, just like mine.

People are mistaken all the time about what goes on inside their mind, about the validity of their memories, or about the real reasons behind their actions. So why should I take at face value your claims about the validity of your thoughts, especially when those thoughts lead to logical contradictions?