Vaniver's Shortform

post by Vaniver · 2019-10-06T19:34:49.931Z · score: 10 (1 votes) · LW · GW · 10 comments

comment by Vaniver · 2019-10-18T00:04:52.501Z · score: 25 (9 votes) · LW · GW

[Meta: this is normally something I would post on my tumblr, but instead am putting on LW as an experiment.]

Sometimes, in games like Dungeons and Dragons, there will be multiple races of sapient beings, with humans as a sort of baseline. Elves are often extremely long-lived, but I find most handlings of this pretty unsatisfying. Here's a new take that I don't think I've seen before (except that the Ell in Worth the Candle have some mild similarities):

Humans go through puberty at about 15 and become adults around 20, lose fertility (at least among women) at about 40, and then become frail at about 60. Elves still 'become adults' around 20, in that a 21-year-old elf adventurer is as plausible as a 21-year-old human adventurer, but they go through puberty at about 40 (and lose fertility at about 60-70), and then become frail at about 120.

This has a few effects:

  • The peak skill of elven civilization is much higher than the peak skill of human civilization (as a 60-year-old master carpenter has had only ~5 decades of skill growth, whereas a 120-year-old master carpenter has had ~11). There's also much more of an 'apprenticeship' phase in elven civilization (compare modern academic society's "you aren't fully in the labor force until ~25" to a few centuries ago, when it would have happened at 15), aided by them spending longer in the "only interested in acquiring skills" part of 'childhood' before getting to the 'interested in sexual market dynamics' part of childhood.
  • Young elves and old elves are distinct in some of the ways human children and adults are distinct, but not others; the 40-year-old elf who hasn't started puberty yet has had time to learn 3 different professions and build a stable independence, whereas the 12-year-old human who hasn't started puberty yet is just starting to operate as an independent entity. And so sometimes when they go through puberty, they're mature and stable enough to 'just shrug it off' in a way that's rare for humans. (I mean, they'd still start growing a beard / etc., but they might stick to carpentry instead of this romance bullshit.)
  • This gives elven society something of a huge individualist streak, in that people focus a lot on themselves / the natural world / whatever for decades before getting the kick in the pants that convinces them other elves are fascinating too, and so they bring that additional context to whatever relationships they do build.
  • For the typical human, most elves they come into contact with are wandering young elves, who are actually deeply undifferentiated (sometimes in settings / games you get jokes about how male elves are basically women, but here male elves and female elves are basically indistinguishable from each other; sure, they have primary sex characteristics, but in this setting a 30-year-old female elf still hasn't grown breasts), and asexual in the way that children are. (And, if they do get into a deep friendship with a human for whom it has a romantic dimension, there's the awkward realization that they might eventually reciprocate the feelings--after a substantial fraction of the human's life has gone by!)
  • The time period that elves spend as parents of young children is about the same as the amount of time that humans spend, but it feels much shorter as a fraction of their lives, and elves still normally only see their grandchildren and maybe briefly their great-grandchildren.

This gives you three plausible archetypes for elven adventurers:

  • The 20-year-old professional adventurer who's just starting their career (and has whatever motivation).
  • The 45-year-old drifter who is still level 1 (because of laziness / lack of focus), is going through puberty, and needs to get rich quick in order to have any chance at finding a partner, and so has turned to adventuring out of desperation.
  • The established 60-year-old with several useless professions under their belt (say, baker, accountant, and fisherman) who is now taking up adventuring as career #4 or whatever.

comment by Vaniver · 2019-10-06T19:34:50.088Z · score: 22 (7 votes) · LW · GW

People's stated moral beliefs are often gradient estimates instead of object-level point estimates. This makes sense if arguments from those beliefs are pulls on the group epistemology, and not if those beliefs are guides for individual action. Saying "humans are a blight on the planet" would mean something closer to "we should be more environmentalist on the margin" than to "all things considered, humans should be removed."

You can probably imagine how this can be disorienting, and how there's a meta issue where the point-estimate view is able to see what it's doing in a way that the gradient view might not be able to.
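
To make the distinction concrete, here is a minimal sketch (with a made-up, purely illustrative utility function) of the difference between a gradient estimate and an object-level point estimate:

```python
# Toy illustration only: 'group_utility' is a hypothetical, single-peaked
# preference over how environmentalist a policy is (0 = not at all, 1 = fully).
import numpy as np

def group_utility(environmentalism):
    # Utility peaks at an interior level of environmentalism, not at an extreme.
    return -(environmentalism - 0.7) ** 2

current_policy = 0.3
eps = 1e-4

# Gradient estimate: which direction should we move on the margin?
gradient = (group_utility(current_policy + eps)
            - group_utility(current_policy - eps)) / (2 * eps)
print(f"gradient at current policy: {gradient:+.2f}")  # positive -> "more environmentalism"

# Point estimate: what is the all-things-considered optimum?
grid = np.linspace(0, 1, 1001)
optimum = grid[np.argmax(group_utility(grid))]
print(f"all-things-considered optimum: {optimum:.2f}")  # ~0.7, not the extreme 1.0
```

The "blight on the planet" statement works as a report of the gradient's sign (push in this direction from where we are), even though reading it as a point estimate would put the optimum at the extreme.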

comment by romeostevensit · 2019-10-06T22:38:56.700Z · score: 12 (3 votes) · LW · GW

(meta-meta note: I think going meta often comes off as snarky even when that's not intended, which might contribute to Why Our Kind Can't Get Along)

People's metabeliefs are downstream of which knowledge representation they are using and what that representation tells them about:

  • Which things are variant and invariant
  • Of the variant things, how sensitive they are (huh, actually I guess you can just say the invariants have zero sensitivity; I haven't had that thought before)
  • What sorts of things count as evidence that a parameter, or metadata about a parameter, should change
  • What sorts of representations are reasonable (where the base representation is hard to question), i.e. whether or not metaphorical reasoning is appropriate (hard to think about) and which metaphors capture causal structure better
  • Normativity and confidence have their own heuristics that cause them to be sticky on parts of the representation and help direct attention while traversing it
comment by Pattern · 2019-10-06T22:54:24.669Z · score: 1 (1 votes) · LW · GW

> This makes sense if arguments from those beliefs are pulls on the group epistemology, and not if those beliefs are guides for individual action.

What about guides for changes to individual/personal action?

comment by Vaniver · 2019-10-31T17:39:31.853Z · score: 15 (6 votes) · LW · GW

I've been thinking a lot about 'parallel economies' recently. One of the main differences between 'slow takeoff' and 'fast takeoff' predictions is whether AI is integrated into the 'human civilization' economy or constructing a separate 'AI civilization' economy. Maybe it's worth explaining a bit more what I mean by this: you can think of 'economies' as collections of agents who trade with each other. Often an economy will have a hierarchical structure, and where we draw the lines is sort of arbitrary. Imagine a person who works at a company and participates in its internal economy, and the company participates in national and global economies, and the person participates in those economies as well. A better picture is a very dense graph with lots of nodes, where the heaviness of the links between groups of nodes depends on the number of links between the nodes in those groups.

As Adam Smith argues, the ability of an economy to support specialization of labor depends on its size. If you have an island with a single inhabitant, it doesn't make sense to fully employ a farmer (since a full-time farmer can generate much more food than a single person could eat); for a village with 100 inhabitants it doesn't make sense to farm more than would feed a hundred mouths; and so on. But as you make more and more of a product, investments that have a small multiplicative payoff become better and better, to the point that a planet with ten billion people will have massive investments in farming specialization that make it vastly more efficient per unit than the village farming system. So for much of history, increased wealth has been driven by this increased specialization of labor, which was driven by the increased size of the economy (both through population growth and through decreased trade barriers widening the links between economies until they effectively became one economy).
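
As a toy illustration of that threshold effect (made-up numbers, purely to show the shape of the argument): a fixed investment in specialization that shaves a little off per-unit costs only pays for itself above some market size.

```python
# Hypothetical numbers, illustrative only: a one-time specialization investment
# (tooling, training) that reduces the per-unit cost of food by a small amount.
fixed_cost = 1000.0     # one-time cost of the specialized investment
saving_per_unit = 0.5   # per-unit cost reduction it buys

for population in [1, 100, 10_000, 10_000_000_000]:
    net_benefit = population * saving_per_unit - fixed_cost
    verdict = "worth it" if net_benefit > 0 else "not worth it"
    print(f"population {population:>14,}: net benefit {net_benefit:>16,.0f} ({verdict})")
```

The bigger the economy, the more such investments clear the bar, which is the sense in which increased size drives increased specialization.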

One reason to think economies will remain integrated is that increased size benefits all actors in the economy on net; another is that some of the critical links will be human-human links, or that human-AI links will be larger than AI-AI links. But if AI-AI links have much lower friction costs, then the economy formed just of AI-AI links can 'separate' from the total civilizational economy, much in the way that the global economy could fragment through increased trade barriers or political destabilization (as has happened many times historically, sometimes catastrophically). More simply, it could be the case that all the interesting things are happening in the AI-only economy, even if it's on paper linked to the human economy. Here, one of the jobs of AI alignment could be seen as making sure that either there's continuity of value between the human-human economy and the AI-AI economy, or the human-AI links remain robust so that humans are always relevant economic actors.

comment by Vaniver · 2019-10-29T23:13:09.848Z · score: 14 (4 votes) · LW · GW

One challenge for theories of embedded agency over Cartesian theories is that the 'true dynamics' of optimization (where a function defined over a space points to a single global maximum, possibly achieved by multiple inputs) are replaced by the 'approximate dynamics'. But this means that by default we get the hassles associated with numerical approximations, like when integrating differential equations. If you tell me that you're doing Euler's Method on a particular system, I need to know lots about the system and about the particular hyperparameters you're using to know how well you'll approximate the true solution. This is the toy version of trying to figure out how a human reasons through a complicated cognitive task; you would need to know lots of details about the 'hyperparameters' of their process to replicate their final result.
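
As a minimal sketch of that point, here is Euler's Method on dy/dt = y with y(0) = 1 (true solution e^t): how close you get to the true value depends entirely on the step size you happen to be using.

```python
import math

def euler(f, y0, t_end, dt):
    """Integrate dy/dt = f(t, y) from t = 0 to t_end with a fixed step size dt."""
    n_steps = round(t_end / dt)
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += dt * f(t, y)
        t += dt
    return y

true_value = math.e  # exact value of y(1) for dy/dt = y, y(0) = 1
for dt in [0.5, 0.1, 0.01, 0.001]:
    approx = euler(lambda t, y: y, 1.0, 1.0, dt)
    print(f"dt={dt:<6} approx={approx:.5f}  error={abs(approx - true_value):.5f}")
```

Knowing "they're using Euler's Method" tells you the shape of the procedure, but not how good the answer is; the same goes for knowing the broad strokes of how a person reasons without knowing the 'hyperparameters'.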

This makes getting guarantees hard. We might be able to establish what the 'sensible' solution range for a problem is, but establishing which algorithms can generate sensible solutions under which parameter settings seems much harder. Imagine trying to characterize the set of deep neural network parameters that will perform acceptably well on a particular task (first for a particular architecture, and then across all architectures!).

comment by Vaniver · 2019-10-28T16:44:36.642Z · score: 9 (4 votes) · LW · GW

I came across some online writing years ago, in which someone considers the problem of a doctor with a superpower: they can instantly cure anyone they touch. They then talk about how the various genres of fiction would handle this story and what each would treat as the central problem.

Then the author says "you should try to figure out how you would actually solve this problem." [EDIT: I originally had his solution here, but it's a spoiler for anyone who wants to solve it themselves; click rsaarelm's link below to see it in its original form.]

I can't easily find it through Google, but does anyone know what I read / have the link to it?

comment by rsaarelm · 2019-10-29T06:10:02.068Z · score: 20 (3 votes) · LW · GW

John McCarthy's The Doctor's Dilemma

comment by Vaniver · 2019-10-29T16:27:43.159Z · score: 4 (2 votes) · LW · GW

That's it, thanks!

comment by Ben Pace (Benito) · 2019-10-28T18:11:46.846Z · score: 4 (2 votes) · LW · GW

There’s a subgenre of Worm fanfic around the character Panacea, who has a similar power and runs into these problems.