Comments

Comment by Lycaos King (lycaos-king) on Solving adversarial attacks in computer vision as a baby version of general AI alignment · 2024-08-30T17:17:13.154Z

That one seemed pretty obvious to me. Angle of the hairline, sharper shadows on the nose to give it a different shape. Smaller eyes and head overall (technically looks a bit larger, but farther away). Eyebrows are larger and rougher. Mouth is more prominent, philtrum is sharper. Angle of the jaw changes.

That's what I got in about 45 seconds of looking it over. It was an interesting exercise. Thanks for sharing that link.

Comment by Lycaos King (lycaos-king) on Darwinian Traps and Existential Risks · 2024-08-26T19:31:35.447Z

Strictly speaking, there is no such thing as "natural selection" or "fitness" or "adaptation" or even "evolution". There are only patterns of physical objects, which increase or decrease in frequency over time in ways that are only loosely modeled by those terms.

But it's practically impossible to talk about physical systems without fudging a bit of teleology in, so I don't think it's a valid objection.

Comment by Lycaos King (lycaos-king) on 3C's: A Recipe For Mathing Concepts · 2024-07-04T16:15:07.982Z

How would this method distinguish between apparently and actually optimized features? In an evolutionary example, for instance, what's the difference between a bird with:

a large beak that was optimized to consume certain kinds of food;

a large beak that resulted from a genetic bottleneck, after a series of accidental deaths culled small beaks from the gene pool (neutral drift);

a large beak that is the result of a single-generation mutation that superficially resembles an environmental adaptation but is, in actuality, unfit;

a large beak that helps with consuming certain kinds of food, but whose primary ancestral optimization pressure was purely sexual selection.

I remain skeptical that this approach is able to add teleological concepts back into my physical reality lexicon, but I'm willing to be convinced. (Currently, my leading theory is that teleology is a pure illusion.)

Comment by Lycaos King (lycaos-king) on Deep atheism and AI risk · 2024-01-09T00:01:05.297Z

(You're correct. I was using fictionalist in that sense.)

I think the equivalence "Theorem X is provable from Axiom Set Y" <--> "such-and-such thing is Good" would be the part of that chain of reasoning a self-described fictionalist would ascribe fictionality to.

As I understand it, it's the difference between thinking that Good is a real feature of the universe and thinking that Good is a word game we play to make certain ideas easier to work with. Maybe a different example could illuminate some of this.

Fictionalism would be a good tool to describe the way we talk about Evolution and Nature. As has sometimes been said on this site, humans are not aligned with Evolution, since they aren't inclusive fitness maximizers. We also say things like: such-and-such a feature evolved to perform function X in an organism. Of course, that's not true. Biological features don't evolve in order to do a thing; they just happen to do things as a consequence of surviving in an ancestral environment.

We talk about organs and limbs "evolving to do" things, even when they do not, because it is a fiction that makes Evolution more palatable to intellectual examination. But unless you believe in weird stuff like teleology, it's just a fiction: a story that is convenient, and that corresponds to real features of the world, but is not itself strictly true. And it is not untrue in a provisional way that we expect to be overturned by later reasoning and evidence, but untrue by design, because the literal truth of biological features arising by chance and operating by chance is harder to talk coherently about, given human constraints on mental compute.

I think your presentation of Eliezer's view is like that: it differs from a moral realist's not only by a category error (objective morality vs. aligning to human values) but also in being a thought pattern deliberately constructed to aid human cognition, rather than a thought pattern attempting to align closely with a correct mathematical model of the object(s).

That's my reading of why it would matter whether you're a moral antirealist (classical) or a moral antirealist (fictionalist). I do consider fictionalism to be a subset of antirealism.

Comment by Lycaos King (lycaos-king) on Deep atheism and AI risk · 2024-01-07T00:37:38.530Z

Thanks for the reply.

I don't disagree with Eliezer's position for the most part, I just don't see where he lays out a coherent foundation for why he believes certain things about human values. (Or maybe I'm just being uncharitable in my evaluation and not counting some things as "real arguments" that others would.)

By objective and discoverable, I meant understood in and of themselves, without reference to humans in particular. Obviously you can just model human brains and understand what they value, but I meant that you can't learn about "beauty" or "friendship" or what have you outside of that. That part of the post was inelegantly worded, and I'd probably strike it out if this were a long post and not a comment.

I used "Moral Fictionalist" as a descriptor for Eliezer's position because, although he probably wouldn't ascribe it to himself, it seems to me to be the best fit for it. I'm not a rationalist, and I don't have a rationalist background, I just like to read the site from time to time, and very occasionally comment. So my diction tends to sound "foreign" here.


Comment by Lycaos King (lycaos-king) on Deep atheism and AI risk · 2024-01-06T10:28:01.465Z

"Why would any supermind want something so inherently worthless as the feeling of discovery without any real discoveries?"

"No free lunch.  You want a wonderful and mysterious universe?  That's your value."

"These values do not emerge in all possible minds.  They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer."

"Touch too hard in the wrong dimension, and the physical representation of those values will shatter - and not come back, for there will be nothing left to want to bring it back."

I've chosen a small sample of the sorts of things that Eliezer says about human values. When I call Eliezer a moral fictionalist, I don't mean that he doesn't think human values are real, just that they are real in the way that fictional stories are real, i.e., they exist only in human minds and are not in any way objective or discoverable.

Human values are, in Eliezer's view:

Irrational: they cannot be derived from first principles.
Accidental: they arise from the ancestral environment in which humans evolved.
Inalienable: you can't jettison them for arbitrary values; your philosophy must ultimately reconcile your stated values with your innate ones[1]
Fragile: because human values are a small target in a high-dimensional space, they can be destroyed by even small perturbations.

All of these attributes are just obvious consequences of his metaphysics, so he doesn't attempt to justify any of them in the sequence you linked. Why would he? It's obvious. He's more interested in examining the consequences of these attributes for civilizational policy.

  1.

    "You do have values, even when you're trying to be "cosmopolitan", trying to display a properly virtuous appreciation of alien minds.  Your values are then faded further into the invisible background - they are less obviously human.  Your brain probably won't even generate an alternative so awful that it would wake you up, make you say "No!  Something went wrong!" even at your most cosmopolitan.  E.g. "a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips".  You'll just imagine strange alien worlds to appreciate.

    Trying to be "cosmopolitan" - to be a citizen of the cosmos - just strips off a surface veneer of goals that seem obviously "human"."

Comment by Lycaos King (lycaos-king) on Deep atheism and AI risk · 2024-01-05T18:26:50.740Z

No. Yudkowsky is a moral fictionalist, but he has never (to my knowledge) justified his position. Granted, I haven't read his whole corpus of work, but from what I've seen he just takes it as a given.

Comment by Lycaos King (lycaos-king) on Would You Work Harder In The Least Convenient Possible World? · 2023-09-23T14:23:35.191Z

The correct moral choice is for both people to lower their EA contributions to 0%.

Comment by Lycaos King (lycaos-king) on Devil's Advocate: Adverse Selection Against Conscientiousness · 2023-05-28T20:10:44.640Z

Thoughtfulness, pro-sociality, and conscientiousness have no bearing on people's ability to produce aligned AI. 

They do have an effect on people's willingness not to build AI in the first place, but the purpose of working at Meta, OpenAI, or Google is to produce AI. No one who is thoughtful, pro-social, and conscientious is going to decide not to produce AI while working at those companies and still keep their job.

Hence, discouraging those sorts of people from working at those companies produces no net increase in P(doom).

If you want to avoid building unaligned AI, you should avoid building AI.

Comment by Lycaos King (lycaos-king) on Twiblings, four-parent babies and other reproductive technology · 2023-05-22T13:03:22.681Z

Who says you contribute to the pool at the same rate you'd contribute to your own children? Surely other people in the pool would have different priorities than you, wouldn't they? What if there are N people in the pool and you contribute only 1/(5N) of each child's genome?

Add to that the fact that maybe you only have one standout chromosome, and you could easily see a situation where genetic analysis of the population in your family plus your pool shows a sudden disappearance of 90% of your genes alongside a proliferation of 5% of your genes. Is that equivalent to having children? Some people might say it's not.
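
To make those numbers concrete, here is a toy sketch (my own illustration with hypothetical values; the comment above leaves the exact mechanism open), assuming a pool of N contributors where your share of each child's genome is 1/(5N) rather than the naive 1/N:

```python
# Toy numbers for the pooled-contribution scenario above.
# Assumptions (mine, for illustration): N pool members, and your share of
# each child's genome is 1/(5N) rather than the naive equal split of 1/N.

N = 4                       # hypothetical pool size
naive_share = 1 / N         # 25% if everyone contributed equally
actual_share = 1 / (5 * N)  # 5% if only one standout chromosome makes the cut

print(f"naive share:  {naive_share:.0%}")   # naive share:  25%
print(f"actual share: {actual_share:.0%}")  # actual share: 5%

# Against the 50% an ordinary child of yours would carry, the unselected
# ~90-95% of your genome vanishes from the pool's descendants, while the
# selected ~5% proliferates across many children.
```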

Also, yes, obviously if you were trying to maximize your genetic density you'd do all of the above: contribute to pools, clone yourself a couple of times, have children normally (or with chromosomal selection), and contribute to sperm banks. That'd be the route to take if you view maximizing genetic density as a terminal goal.

I think the reality is that people have some instinctual need to see more of themselves and their loved ones in the world, and that a learned person would use genetic inheritance as a proxy for this emotional non-quantifiable goal. I also suspect it's a threshold goal, and not a maximization goal, which is why people want some number N of children and not "as many children as I can afford to have".

Comment by Lycaos King (lycaos-king) on Twiblings, four-parent babies and other reproductive technology · 2023-05-21T18:57:35.357Z

Maximizing the amount of your genetic material in the (near) future is my null hypothesis. I don't think it's totally accurate, but in the absence of a good understanding of which parts of our genetic material produce the non-quantifiable traits we care about (things like the shape of one's smile, personality, taste in food, overall "mood"), I expect people to be reluctant to trade off genetic density at rates greater than ~25-60%.

The alternative extreme hypothesis would be a "parent" who wants to maximize their "children's" traits to the point where they'd prefer 0% genetic inheritance if the resultant child would be superior in some respect.

Comment by Lycaos King (lycaos-king) on Twiblings, four-parent babies and other reproductive technology · 2023-05-21T03:08:56.751Z

That comparison misses something crucial: the density of genetic material passed on. Each generation represents a dilution of the first parent's genetic material with non-kin, but also the potential for increased numbers of descendants. By the time your family is producing your great-grandkids, you could have two dozen or more direct descendants.

With chromosomal selection you're trading away a massive amount of genetic saturation: essentially getting the genetic inheritance percentage of a great-grandchild without the potential for the massive "payoff" of numerous descendants sharing portions of your genetics.
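
As a rough back-of-the-envelope sketch of that trade-off (my own illustration, with assumed numbers: relatedness halving each generation and a fixed number of children per generation):

```python
# Back-of-the-envelope sketch of the dilution-vs-multiplication trade-off
# described above. Assumptions (mine): relatedness halves each generation,
# and every descendant has the same number of children.

def expected_genome_equivalents(generations: int, children_per_gen: int) -> float:
    """Expected total copies of your genome carried by descendants in a
    given generation: (number of descendants) * (relatedness of each)."""
    descendants = children_per_gen ** generations
    relatedness = 0.5 ** generations
    return descendants * relatedness

# Ordinary reproduction, three children per generation; great-grandkids
# are generation 3: 27 descendants * 12.5% each = 3.375 genome-equivalents.
print(expected_genome_equivalents(3, 3))  # 3.375

# Chromosomal selection as described above: one child carrying roughly a
# great-grandchild's share, with no downstream multiplication.
print(1 * 0.5 ** 3)  # 0.125
```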

Putting it like that, it's no surprise that people are going to feel repulsed by the idea...and that's before we even get into the part where the chromosomes that don't get selected are instantly "lost" in a single generation. That's made spookier by the fact that we won't know where demonstrably heritable traits like "has his mother's eyes; has his dad's smile; has temper/melancholy/mood similarities to one or the other parent" lie on the chromosomes. I wouldn't be surprised if the risk of losing something important like that turns out to be a deal-breaker for a lot of people.

Comment by Lycaos King (lycaos-king) on SolidGoldMagikarp III: Glitch token archaeology · 2023-02-14T20:22:49.292Z

Regarding DragonMagazine: it would often publish content for Dungeons and Dragons that was more hurried and of slightly lower quality. This led to it being treated as a sort of pseudo-third-party or beta source of monsters and player options.

People in online communities would frequently describe options as being "from Dragon Magazine" or "Dragon content" in order to forewarn people of content that may not have been given a thorough editing/game-balance pass. As such, that phrase was very prevalent in online forums for D&D discussion, which, as I understand it, show up a lot in the training data.

Comment by Lycaos King (lycaos-king) on What fact that you know is true but most people aren't ready to accept it? · 2023-02-03T18:45:07.370Z

Most people on this website are unaligned.

A lot of the top AI people are very unaligned. 

Comment by Lycaos King (lycaos-king) on Some Thoughts on AI Art · 2023-01-26T19:44:47.119Z

"While it's probably true that copyright/patent/IP law generally in effect helps "preserve the livelihood of intellectual property creators," it's a mistake IMO to see this as more than merely instrumental in preserving incentives for more art/inventions/technology which, but for a temporary monopoly (IP protections), would be financially unprofitable to create. Additionally, this view ignores art consumers, who outnumber artists by several orders of magnitude. It seems unfair to orient so much of the discussion of AI art's effects on the smaller group of people who currently create art."

I think you've got this precisely backwards. IP laws as such only make sense in a deontological framework where the fruits of intellectual labor belong to the individual who produced them. Otherwise, instead of complicated rules about temporary monopolies and intellectual property, the government would just allow any use that could be proven in court to be net positive in utility, regardless of the wishes of the original creator.

Whether or not you think that would be a bad idea, I think it's clear that society at large doesn't agree with the framework you've proposed for evaluating IP and copyright.

Comment by Lycaos King (lycaos-king) on AI art isn't "about to shake things up". It's already here. · 2022-08-23T03:45:17.771Z

Am I the only person who thinks AI art still looks terrible? I see all these posts talking about how amazing AI art is and sharing pictures and they just look...bad? 

Comment by Lycaos King (lycaos-king) on What's the Least Impressive Thing GPT-4 Won't be Able to Do · 2022-08-21T07:29:07.235Z

Write semi-convincingly from the perspective of a non-mainstream political ideology, religion, philosophy, or aesthetic theory. The token weights are too skewed towards the training data.

This is something I've noticed GPT-3 isn't able to do, after someone pointed out to me that GPT-3 wasn't able to convincingly complete their own sentence prompts because it didn't have that person's philosophy as a background assumption.

I don't know how to put that in terms of numbers, since I couldn't really state the observation in concrete terms either.

Comment by Lycaos King (lycaos-king) on Dath Ilan vs. Sid Meier's Alpha Centauri: Pareto Improvements · 2022-04-29T17:30:41.558Z

When Dath Ilan kicks off their singularity, all the Illuminati factions (keepers, prediction market engineers, secret philosopher kings) who actually run things behind the scenes will murder each other in an orgy of violence, fracturing into tiny subgroups as each of them tries to optimize control over the superintelligence. To do otherwise would be foolish. Binding arbitration cannot hold across a sufficient power/intelligence/resource gap unless submitting to binding arbitration is part of that party's terminal values.

Comment by Lycaos King (lycaos-king) on Covid 12/9: Counting Down the Days · 2021-12-10T07:51:39.857Z

"This is to help you, yes you, stop spinning stories where everyone is competent and things are done for sensible reasons.."

I'll take that and throw it right back your way. You will never be able to predict the actions of authority figures if you assume them to be incompetent instead of malicious. When malice is the best-fit curve for the data, you should update your model. The purpose of school-shooter interventions is to exercise authority and keep people afraid, not to prevent school shootings. Same for NPIs. Paxlovid is illegal because its legality would result in a decrease in power for authorities.