Posts

Andrew Burns's Shortform 2024-02-10T23:57:39.684Z

Comments

Comment by Andrew Burns (andrew-burns) on Andrew Burns's Shortform · 2024-04-27T15:26:10.954Z · LW · GW

Yes, yes. Probably not. And they already have a Sora clone called Vidu, for heaven's sake.

We spend all this time debating: should greedy companies be in control, should government intervene, will intervention slow progress to the good stuff: cancer cures, longevity, etc. All of these arguments assume that WE (which I read as a gloss for the West) will have some say in the use of AGI. If the PRC gets it, and it is as powerful as predicted, these arguments become academic. And this is not because the Chinese are malevolent. It's because AGI would fall into the hands of the CCP via their civil-military fusion. This is a far more calculating group than those in Western governments. Here, officials have to worry about getting through the next election. There, they can more comfortably wield AGI for their ends while worrying less about the palatability of the means: observe how the population quietly endured a draconian lockdown and only meekly revolted when conditions began to deteriorate and containment looked futile.

I am not an accelerationist. But I am a get-it-before-them-ist. Whether the West (which I count as including Korea and Japan and Taiwan) can maintain our edge is an open question. A country that churns out PhDs and loves AI will not be easily thwarted.

Comment by Andrew Burns (andrew-burns) on Andrew Burns's Shortform · 2024-04-27T01:54:05.536Z · LW · GW

So the usual refrain from Zvi and others is that the specter of China beating us to the punch with AGI is not real because of limits on compute, etc. I think Zvi has tempered his position on this in light of Meta's promise to release the weights of its 400B+ model. Now there is word that SenseTime just released a model that beats GPT-4 Turbo on various metrics. Of course, maybe Meta chooses not to release its big model, and maybe SenseTime is bluffing--I would point out, though, that Alibaba's Qwen model seems to do pretty okay in the arena...anyway, my point is that I don't think the "what if China" argument can be dismissed as quickly as some people on here seem ready to dismiss it.

Comment by Andrew Burns (andrew-burns) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-04-12T05:40:04.406Z · LW · GW

Wait...your children are on the Mormon path? Oh boy.

As a non-parent, I have no idea what it is like to be a parent. It must be exceptionally hard and require making difficult compromises. However, having realized that Mormonism is not the path to reason...aren't you terrified that your children are headed toward a dead end, believing irrational things and perpetuating those beliefs unto the next generation? How do you handle that? I would be looking for any signs that my kids wanted out...looking for them to send me an SOS so I would be justified in swooping in and telling them it's all baloney and they don't need to take any of it seriously. That would probably land me in family court and alienate me from my children, who, having grown up in the community and imbibed the teachings like mother's milk, have become integrated into the hive mind. But the temptation to cry BS must be overwhelming, no?

Comment by Andrew Burns (andrew-burns) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-04T13:54:45.053Z · LW · GW
Comment by Andrew Burns (andrew-burns) on Modern Transformers are AGI, and Human-Level · 2024-03-30T03:51:51.953Z · LW · GW

Less than a year. They probably already have toy models with periodically or continuously updating weights.

Comment by Andrew Burns (andrew-burns) on Addressing Accusations of Handholding · 2024-03-29T14:49:12.195Z · LW · GW

Sure, the topics in this piece are dealt with superficially and the discussions are not especially thought-provoking; when compared to the amazing creative works that people on this site produce, it is low-mediocre. But Claude writes more coherently than a number of published authors and most of the general public.

Comment by Andrew Burns (andrew-burns) on The Gemini Incident · 2024-02-23T03:22:13.952Z · LW · GW

He doesn't mean politically conservative, he means that Google has traditionally been conservative when it comes to releasing new products...to the point where potentially lucrative products and services rot on the vine.

Comment by Andrew Burns (andrew-burns) on Abs-E (or, speak only in the positive) · 2024-02-20T14:09:17.577Z · LW · GW

Good point, although I used Esperanto precisely because it is a language for which the OP's approach is transparently difficult. The Greek word for light (in weight) is avaris...not heavy. So in Greek, one must say "This object is easy to lift because of the lowness of its weight," but in English one can say "This object is light." Seems arbitrary. I appreciate what the OP is trying to do, though.

Comment by Andrew Burns (andrew-burns) on Abs-E (or, speak only in the positive) · 2024-02-20T03:33:57.373Z · LW · GW

Most of the time English has an antonym that does not involve a negative prefix or suffix.

  • It is not warm. ~= It is cool.
  • It is not new. ~= It is old.

But this is not the case in other languages. Consider Esperanto:

  • It is not warm. -> Ĝi ne estas varma. ~= Ĝi estas malvarmeta.
  • It is not new. -> Ĝi ne estas nova. ~= Ĝi estas malnova.

Because mal- is equivalent to un-, it is forbidden, and you have to resort to periphrasis:

  • Ĝi estas alia ol varma. (It is other than warm.)
  • Ĝi estas la malo de varma. (It is the opposite of warm.)...oh, wait, this contains mal- too.
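As a toy sketch of the constraint (the word lists and function name here are my own illustration, not part of the Abs-E proposal): English can often swap "not X" for a standalone antonym, while Esperanto's dictionary antonym is itself built from the banned mal- prefix, forcing periphrasis.

```python
# Illustrative only: tiny, made-up antonym tables.
ANTONYMS = {
    "english": {"warm": "cool", "new": "old"},
    "esperanto": {"varma": "malvarma", "nova": "malnova"},
}

def abse_substitute(word: str, language: str) -> str:
    """Suggest an Abs-E-friendly way to say 'not <word>'."""
    antonym = ANTONYMS[language].get(word)
    # An Esperanto antonym formed with mal- is itself a negation,
    # so it is off-limits and we must fall back to periphrasis.
    if antonym is not None and not (language == "esperanto" and antonym.startswith("mal")):
        return antonym
    return f"alia ol {word}" if language == "esperanto" else f"other than {word}"

print(abse_substitute("warm", "english"))     # -> cool
print(abse_substitute("varma", "esperanto"))  # -> alia ol varma
```

The asymmetry the comment points at is visible in the lookup: the English table yields a usable positive word, while every Esperanto entry trips the mal- check.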

Comment by Andrew Burns (andrew-burns) on The Altman Technocracy · 2024-02-19T05:10:12.576Z · LW · GW

People who eat seafood but not the flesh of terrestrial animals are pescatarian. Ethical (as opposed to environmental) pescatarians say fish and other marine life aren't complex enough to feel fear or pain. Perhaps they call themselves vegetarians just to avoid having to explain pescatarianism.

Comment by Andrew Burns (andrew-burns) on The Altman Technocracy · 2024-02-19T00:00:43.644Z · LW · GW

I'm puzzled by your use of the word "intelligence." Intelligence refers to a capacity to understand facts, acquire knowledge and process information. Humans are presently the only members of the set of intelligent self-regulating systems.

Comment by Andrew Burns (andrew-burns) on Social media use probably induces excessive mediocrity · 2024-02-18T05:04:03.993Z · LW · GW

Whenever someone uses "they," I get nervous.

Comment by Andrew Burns (andrew-burns) on The Altman Technocracy · 2024-02-17T21:27:26.111Z · LW · GW

This and other communities seek to transcend, or at least mitigate, human imperfections. Just because something is "human" doesn't mean it contributes to human flourishing. Envy, rage, hate, and cruelty are human, after all.

Comment by Andrew Burns (andrew-burns) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-12T15:17:35.264Z · LW · GW

Lex Luthor vibes.

Comment by Andrew Burns (andrew-burns) on Believing In · 2024-02-11T07:12:32.594Z · LW · GW

"To believe" in German is glauben, also from Proto-Germanic. Was this meaning also colored by Greek?

Comment by Andrew Burns (andrew-burns) on Believing In · 2024-02-11T00:38:10.800Z · LW · GW

I don't know that an opinion that conforms to reality is self-reinforcing. It is reinforced by reality. The presence of a building on a map is reinforced by the continued existence of the building in real life.

Comment by Andrew Burns (andrew-burns) on Andrew Burns's Shortform · 2024-02-10T23:57:54.854Z · LW · GW
Comment by Andrew Burns (andrew-burns) on Andrew Burns's Shortform · 2024-02-10T23:57:39.779Z · LW · GW

When I was in middle school, our instructor was trying to teach us about the Bill of Rights. She handed out a paper copy and I immediately identified that Article the first (sic) and Article the second (sic) were not among the first ten amendments and that the numbers for the others were wrong. I boldly asserted that this wasn't the Bill of Rights and the teacher apologized and cursed the unreliable Internet. But I was wrong. This WAS the Bill of Rights, but the BILL rather than the ten ratified amendments. Everyone came away wrongly informed from that exchange.

Edit: I wrote before that I identified that they were not in the Constitution, but Article the second is, as the 27th Amendment, and I knew that; what I had identified was that it wasn't among the first ten.

Comment by Andrew Burns (andrew-burns) on AI #50: The Most Dangerous Thing · 2024-02-10T18:23:40.701Z · LW · GW

Sam Altman: is there a word for feeling nostalgic for the time period you’re living through at the time you’re living it?

Call it "nowstalgia."

Comment by Andrew Burns (andrew-burns) on On the Debate Between Jezos and Leahy · 2024-02-07T01:15:16.753Z · LW · GW

First, I suggest that people pay heed to what happened in the movie "Don't Look Up." I don't remember the character names, but the punk female scientist, when confronted during an interview by unserious journalists, went absolutely bonkers on television and contributed significantly to doom. The lesson I got from this is if you do not present serious existential threats in a cogent, sober manner, the public will polarize based on vibes and priors and then lock in. Only the best, most unflappable, polished spokespeople should be put forward, and even then, it might be no use.

Second, you cannot have a meaningful exchange on Twitter. Twitter encourages the generation of poorly reasoned, emotional responses that are then used to undermine better-reasoned future arguments. I would recommend people just avoid that platform entirely, because the temptation to respond to provocateurs like Jezos is too high.

Comment by Andrew Burns (andrew-burns) on Notes on Innocence · 2024-02-01T04:14:57.290Z · LW · GW

I think this disagreement stems from a failure to distinguish which meaning of innocence we are talking about. By my reckoning, there are three major meanings: legal innocence, moral innocence, and naive innocence. Legal innocence is the lack of criminal culpability. Moral innocence is the lack of moral culpability. Naive innocence is the lack of knowledge about sensitive topics.

"Innocent as a dove and shrewd as a serpent" is referring to moral innocence and means: be clever, but only so far as is morally acceptable. Naive innocence, however, which is the topic the OP seems to be discussing, isn't a virtue; it is ignorance, and curiosity is the virtue that seeks to extinguish it. An innocent listener who doesn't understand the racial joke should be curious and ask probing questions and do research to better understand what the racial-joke teller was trying to say. Then, the next time someone talks in a similar manner, the now-savvy listener can make an informed decision about whether that person is the sort of person the listener wants to associate with.

Comment by Andrew Burns (andrew-burns) on Orthogonality or the "Human Worth Hypothesis"? · 2024-01-29T03:18:15.490Z · LW · GW

Yes. This could have been written better. I am honestly, genuinely not partial to either side. I could be convinced that intelligence begets human-like values if someone could walk me through how that happens and how to account for examples of very intelligent people who do not have human-friendly values. I shouldn't have been so aggressive with my word choice; I just find it frustrating when I see people assuming something will happen without explaining how.

I reckon the belief in this convergence, at least sometimes, is tied to a belief in moral realism, and that's the crux. If moral realism holds, then moral truths are discoverable and, given sufficient intelligence, an entity will reach these truths by reason. For those who are not motivated by moral realism, perhaps this conviction stems from sureness that the manner in which current AIs are trained--using human-created data--will, at some point, cause sufficiently advanced AI to grok human values.

But what if they do grok, but don't care? I understand that some animals eat their young, but I don't endorse this behavior and wish it were not so. I would feel the urge to stop it if I saw it happening and would only refrain because I don't want to interfere with natural processes. It seems to me that the world is not compatible with human values, otherwise what is would be what ought to be; so humans, which operate in the world, may not be compatible with AI values, even if the AI understands them deeply and completely.

Anyway, point is, I'm not trying to be a partisan.

Comment by Andrew Burns (andrew-burns) on David Burns Thinks Psychotherapy Is a Learnable Skill. Git Gud. · 2024-01-28T19:02:15.170Z · LW · GW

Exactly right. However, I am extremely doubtful about anyone who claims that all their patients are cured within a few sessions. That sounds very unlikely unless they screen out people with anything more than minor hang-ups. Sure, in many cases, the root cause of the psychological problem can be identified, and the patient can learn a few techniques and then no longer need further therapy. However, lots of people in therapy are dealing with negative mental processes that were baked into them by a difficult childhood or a traumatic experience. Those sorts of issues can require ongoing therapy to keep the patient on track and in a positive mindspace. Quick tricks don't work on someone with severe codependency or agoraphobia or anorexia. Maybe, with time, they can work through these issues and no longer need therapy, but this could take years.

Comment by Andrew Burns (andrew-burns) on Notes on Innocence · 2024-01-28T00:53:24.023Z · LW · GW

Yes. I agree with you. Innocence is naivete. They are the same thing. But innocence emphasizes the benefits of being unsullied by knowledge and naivete emphasizes the dangers.

Knowledge isn't always psychically refreshing. I've heard people use the term cognitohazard to describe knowledge that causes mental harm to the person who knows it. Knowing about the wicked tendencies of man and the indifference of the universe is psychically scarring. Once you know about it, it alters your thought processes pretty permanently. It makes life sadder and fills it with more anxiety. However, because knowing about these things means that you can watch out for them and survive, not knowing can be dangerous.

In situations where not knowing does not present immediate harm, we use the word innocence. "Look, she is so friendly with everyone, even people she doesn't know. Isn't that precious. I wish I were still like that." But in situations where not knowing places someone in danger, we use the word naivete. "Can you believe she gave that strange man all that information? She is so naive. He could come to her house and hurt her."

As for innocence and naivete being associated with sexuality, the same reasoning holds. Sex, past and present, is dangerous business. In the past, getting pregnant meant you were at elevated risk of death or disability. Even if not impregnated, you could get incurable diseases that would shorten your life and make you unmarriageable. Lacking knowledge about sex meant you weren't aware of this grim reality. And people who were aware of it wished they could go back to not knowing because the burden of knowledge is heavy. So they would say: "Look, she is so innocent. Wish I were still so." On the other hand, when people aware of the grim reality saw an innocent person acting in a way that was likely to attract unwanted sexual attention, they would call them naive, since this attention could lead to disease or pregnancy and therefore the discovery of the grim reality.

Comment by Andrew Burns (andrew-burns) on Orthogonality or the "Human Worth Hypothesis"? · 2024-01-24T16:30:53.256Z · LW · GW

Upon reflection, he doesn't say he is in favor of an alien plan where everyone gets killed. It could be that the alien AI offers something beneficial to humanity. However, "war" suggests that the interaction is not peaceful. I retract my claim that he is endorsing entertaining proposals for omnicide and substitute it with an observation that he is endorsing entertaining proposals from aliens which might include omnicidal or disempowering plans, and that he is not partial to humanity's cause.

Comment by Andrew Burns (andrew-burns) on Orthogonality or the "Human Worth Hypothesis"? · 2024-01-24T16:21:59.138Z · LW · GW

His take is so horrible. How could you be on the alien's side? What argument could they make? It is jarring that even toying with the idea of causing the death of a particular group of people is strictly taboo and grounds for cancellation, but stating that you would seriously entertain arguments in favor of letting everyone die is okay. As if the biggest wrong is the discrimination rather than the death. Death is the biggest wrong and supporting omnicide is the worst possible position.

Comment by Andrew Burns (andrew-burns) on Orthogonality or the "Human Worth Hypothesis"? · 2024-01-23T17:24:47.876Z · LW · GW

This fleshes out quite gingerly why orthogonality deniers are operating on faith rather than reason. Orthogonality doesn't say that a superintelligence necessarily has values incompatible with those of humans, only that it can. Orthogonality deniers must argue that under no circumstances can the values of an ASI be misaligned with those of humans. How can that be? They never provide a mechanism for this convergence toward human values. Moreover, you see counterexamples all the time: serial killers with high IQs who murder other people to satisfy base desires. When you hear these killers articulate their reasons for killing, they often evince values that are incompatible with valuing human life and dignity even as their minds are razor-sharp. They know what is right, but they do otherwise because they want to; society has to carefully lock them up or put them down because letting them out in public is extremely dangerous. And what do we call these sorts: monsters, demons, animals...i.e., words for other sentient beings that don't respect human worth. So if there are smart humans with values incompatible with human worth, then nothing prevents there from being AI with such values.

Comment by Andrew Burns (andrew-burns) on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T21:08:09.288Z · LW · GW

You cannot know a person is not secretly awful until they become awful. Humans have an interpretability problem. So suppose an awful person behaves aligned (non-awful) in order to get into the immortality program, and then executes a treacherous turn and becomes extremely awful, heaping suffering on mortals and other immortals. The risks from misaligned immortals are basically the same as the risks from misaligned AIs, except that the substrate differences mean immortals are slower at being awful. But suppose this misaligned immortal has an IQ of 180+. Such a being could think up novel ways of inflicting lasting suffering on other immortals, creating substantial s-risk. Moreover, this single misaligned immortal could, with time, devise a misaligned AI, and when the misaligned AI turns on the misaligned immortal and also on the other immortals and the mortals (if any are left), you are left with suffering that would make Hitler blanch.

Comment by Andrew Burns (andrew-burns) on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T04:56:15.806Z · LW · GW

The apprehension of death guides a good deal of human behavior, so the sort of entity that might arise when freed from this fate could be frightening (i.e., it might undergo substantial value drift in a direction that we would not approve of, toward something akin to baby-eating). Consider how immortal beings in fiction often have hostile alien values. AI never ends well in fiction, and neither does immortality.

Comment by Andrew Burns (andrew-burns) on An even deeper atheism · 2024-01-12T16:06:13.060Z · LW · GW

Lord Acton was on to something when he observed that great men become bad men when given great power. "Despotic power is always accompanied by corruption of morality." I believe this is because morality flows from the knowledge of our own fragility...there but for the grace of God go I, the faithful might say...and the golden rule works because people stand on a level playing field and recognize that bad actions have a tendency to boomerang and that anyone could be caught in a state of misfortune. So, when we reflect as we are now, weak mortals, we converge on a certain set of values. But, when given enormous powers, it is very likely we will converge on different values, values which might cause our present selves to shriek in disgust. No one can be trusted with absolute power.