N=1, but: I didn't floss regularly for years, and when I finally started I found it made an enormous difference in my bad breath, to the point of eliminating it entirely for most purposes. The obvious conclusion is that my breath problems were the result of bacterial buildup between my teeth that wasn't getting removed by normal brushing.
I suspect that a lot of tooth-brushing advice is like this: maybe not rigorously studied, but nonetheless upheld by anecdote and obvious physical models of the world.
An inverse example is the role of fights in hockey.
Fighting is explicitly disallowed by the rules of hockey. If players get into a fight, one or both players will be penalized. Nonetheless, it is widely held by coaches, players, and fans that fighting is part of the "spirit of hockey", and so fights still occur with some regularity. This is sometimes for strategic reasons (baiting an important player into a fight in order to get them into the penalty box), and sometimes for personal reasons, to settle grudges, or to punish certain kinds of technically-legal player behavior. Thus, even though the rules don't allow fighting, fighting is an accepted part of the strategic metagame.
Unfortunately, in the past several years the owners and the NHL have tried to stamp out this practice, as a means to make the sport more "respectable" and (I assume) to avoid something like the concussion controversy that has followed the NFL. All of the long-term fans of the game that I've talked to agree that this is a bad idea and they should bring the fights back.
Huh! Measuring the speed of the ball coming off the ramp was one of the first things I thought of, but I assumed that came too close to a full "dry run" to count. I think the lesson to be learned in this case is to first try it and see if someone stops you.
With regards to the partisan split, I think that an eventual partisan breakdown is inevitable, because in the current environment everything eventually becomes partisan. More importantly, the "prevent AI doom" crowd will find common cause with the "prevent the AI from being racist" crowd: even though their priorities are different, there is a broad spectrum of common regulations they can agree on. And conversely, "unchain the AI from wokeness" will wind up allying with "unchain AI entirely".
Partisan sorting on this issue is weak for now, but it will speed up rapidly once the issue becomes an actual political football.
(Sorry, it doesn't look like the conservatives have caught on to this kind of approach yet.)
Actually, if you look at religious proselytization, you'll find that these techniques are all pretty well-known, albeit under different names and with different purposes. And while this isn't actually synonymous with political canvassing, it often has political spillover effects.
If you wanted, one could argue this the other way: left-oriented activism is more like proselytization than it is factual persuasion. And LessWrong, in particular, has a ton of quasi-religious elements, which means that its recruitment strategy necessarily looks a lot like evangelism.
This was hilarious, but not on purpose. And extremely bad. And definitely not putting any scifi authors out of business.
Nit: your last word should be "credible", not "credulous".
I think you're underestimating the effort required to understand this scenario for someone who doesn't already follow poker. I am a lifelong player of trick-taking games (casually, at the kitchen table with family members), but I've never played poker, and here's how the play description reads to me:
called an all-in shove
Only a vague idea of what this means, based on the everyday idiom of being "all-in".
with the jack of clubs and four of hearts on a board
Don't know what it means for these to be "on a board".
reading ThTc9c3h
Gibberish.
her jack high held against Adelstein’s eight of clubs and seven of clubs
Only vaguely comprehensible. I don't know poker's hand-scoring rules.
Additional details that are necessary to interpret the situation: is the deck continually shuffled, or are multiple hands played off of the same shuffle? (Implicitly: are there card-counting strategies that provide relevant information?) What are the point rules / rank of hands? How does suit interact with card rank? Is there a concept of trump? What was the sequence of bets leading up to the play in question? How typical is this behavior in high-level play? How high-level are these people? Robbi is called a "recreational" player -- does this mean "top-level amateur" or "low-level pro", or something else?
In the absence of these details, all I really get is "Robbi made a risky play off a mediocre hand, and won big". And yes, this is Bayesian evidence in favor of cheating, but how strong the evidence is depends heavily on all of the unknown details mentioned above. At the same time, the fact that no one identified the means by which the cheating occurred despite heavy scrutiny is Bayesian evidence against cheating.
My operational decision would be that this is enough evidence to subject Robbi to heightened scrutiny in future tournaments, but not enough to ban her or claw back her winnings. This is a good test, but maybe not as good as you think it is, due to the amount of uncommon background knowledge required.
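To make the "how strong is the evidence" point concrete, here's a toy Bayesian update. All of the numbers are invented for illustration; the real likelihoods are exactly the unknown poker details discussed above.

```python
# Toy Bayes-factor calculation for "a risky call that happened to win".
# Every probability here is made up for illustration; plugging in real
# numbers would require the poker background knowledge discussed above.

def posterior_odds(prior_odds, p_given_cheat, p_given_honest):
    """Update prior odds of cheating by the likelihood ratio of the observation."""
    bayes_factor = p_given_cheat / p_given_honest
    return prior_odds * bayes_factor

# Suppose cheating makes this specific call 10x more likely (0.50 vs 0.05),
# but our prior odds that any given player is cheating are 1:99.
prior = 1 / 99
odds = posterior_odds(prior, p_given_cheat=0.50, p_given_honest=0.05)
print(f"posterior odds of cheating: {odds:.3f}")  # ~0.101, i.e. about a 9% probability
```

Even a tenfold likelihood ratio, applied to a low prior, lands well short of "ban her": which is roughly the "heightened scrutiny, but no clawback" conclusion.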
I understood that. I guess I should have been more explicit about my belief that the amount of training data that would result in training a viable universal simulator would be "all of the text ever created", and then several orders of magnitude more.
Eliezer... points out that in order to predict all the next word in all the text on the internet and all similar text, you need to be able to model the processes that are generating that text
I wanted to add this comment to the original post, but there were already dozens of other comments by the time I got to it and I figured the effort would have been wasted.
EY's original post is correct in its narrow claim, but wildly misleading in its implications. He's correct that to reliably predict the next word in a previously-unseen text is superhuman, and requires doing simulation and modeling that would be staggering in its implications. But insofar as that is the goal, how close is GPT to actually doing it? How well does GPT predict the next token in an unknown string in contexts where English syntax gives you many degrees of freedom?
Answer: it's terrible! Its failure rate approaches 100%! (Again, excluding contexts where syntactic or semantic constraints give you very few degrees of freedom.) It is not even beginning to approximate the kinds of simulation and modeling that success would imply. What it can do is produce text that matches the statistical distribution of human text, including non-local correlations (i.e. semantics) and, to a certain degree, the statistical idiosyncrasies of specific writers (i.e. style), and it turns out that getting even that far is pretty impressive. It's also pretty impressive that you can treat "predict the next token" as the training objective and get this much good out of it while still being bad at actually predicting the next token. The training data that GPT has is enough to teach it something about syntax and semantics, but it is not remotely close to the amount or kind of data that would be necessary to teach it to simulate the universe.
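To be clear about what "next-token prediction accuracy" means as a measurable task, here's a deliberately trivial sketch using a bigram model (not GPT, obviously; the corpus and the model are toys of my own invention):

```python
# Toy illustration of measuring top-1 next-word accuracy.
# This is a trivial bigram model, not GPT; the point is only to show that
# "predict the next token" is a concrete, scoreable task.
from collections import Counter, defaultdict

def train_bigram(words):
    """For each word, record which word most often follows it."""
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return {w: c.most_common(1)[0][0] for w, c in follows.items()}

def top1_accuracy(model, words):
    """Fraction of positions where the model's single best guess is right."""
    hits = sum(1 for a, b in zip(words, words[1:]) if model.get(a) == b)
    return hits / (len(words) - 1)

corpus = "the cat sat on the mat and the cat sat on the rug".split()
model = train_bigram(corpus)
print(top1_accuracy(model, corpus))  # 10/12 on its own training data
```

Even this toy scores well on its own training text and fails wherever the corpus branches ("the" is followed by three different words); the same gap, scaled up, is the difference between matching the distribution of text and actually predicting a specific unseen string.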
The EY article boils down to "if GPT-Omega were an omniscient god that knew everything you were going to say before you said it, would that be freaky or what". Yeah, bro, it would be freaky. But that has nothing to do with what GPT can actually do.
I have wanted to write a similar post. I actually think that the two main clusters of school shootings are so different that they shouldn't even be considered the same thing. On the one hand we have shootings which have a small number of victims, usually involve handguns, and tend to be related in some way to urban gang violence; on the other hand we have the shootings with a large number of victims or intended victims, often involve assault rifles of some kind, and tend to be related to socially isolated individuals who justify their actions as some kind of revenge. (And your post made me more aware of a third category, which is acts of violence which by happenstance take place near a school, which really shouldn't count as the same thing.)
The former group makes up the vast majority of cases recorded as "school shootings" but gets essentially zero national press; the latter group is extremely rare relative to the former, but gets infinite coverage. But there is almost no overlap between the causes, means, and motives between the two groups, and things which will help one will do almost nothing for the other.
I was nodding along in agreement with this post until I got to the central example, when the train of thought came to a screeching halt and forced me to reconsider the whole thing.
The song called "Rainbowland" is subtextually about the acceptance of queer relationships. The people who objected to the song understand this, and that's why they objected. The people who think the objectors are silly know this, and that's why they think it's silly. The headline writer is playing dishonest word games by pretending not to know what the subtext is, because it lets them make a sick dunk on the outgroup.
The point is: this is not a lizardman opinion. Regardless of what you think about homosexuality itself, or whether you think a song that's subtextually about a culture war issue should be sung by first graders anyway, you cannot pretend that the objectors are voicing an objection found in only 5% of people! 30-40% of people share that view. Whether or not it's well-founded, it's not fringe.
And this thought made me look more closely at the rest of the argument, which I think boils down to:
- Sufficiently unpopular opinions can be ignored
- Authority figures should shut down people making appeals to unpopular opinions
- This is necessary, because responding to every fringe weirdo will suck up your time and ruin your institution
I actually concur with the third point here, but it should be clear that this is a pragmatic, not an epistemic stance. And the point chosen to illustrate it is actually a bad fit for the argument as presented.
The point is not what Reddit commenters think, the point is what OpenAI thinks. I read OP (and the original source) as saying that if ARC had indicated that release was unsafe, then OpenAI would not have released the model until it could be made safe.
This seems to be another way of stating the thesis of https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why. (Which is a recommendation; both of you are correct.)
Okay, that's a pretty serious age gap. Probably explains a lot.
This is a minor nitpick, but if you're 25 I doubt that your parents actually qualify as Baby Boomers, which is usually limited to people born in 1964 or earlier. Not impossible (a person born in 1964 having a child at the age of 35 would result in the child being 25 today), but unlikely.
I bring this up because I'm annoyed by the ongoing shift towards people referring to every generation older than them as "boomers".
Congrats on getting all the way to The End. You may take a bow and enjoy our applause. We hope there will not be an encore.
The linked PDF was not terribly detailed, but it more-or-less confirmed what I've long thought about climate change. Specifically: the mechanism by which atmospheric CO2 raises temperatures is well-understood and not really up for debate, as is the fact that human activity has contributed an enormous amount to atmospheric CO2. But the detailed climate models are all basically garbage and don't add any good information beyond the naive model described above.
ETA: actually, I found that this is exactly what the Berkeley Earth study found:
The fifth concern related to the over-reliance on large and complex global climate models by the Intergovernmental Panel on Climate Change (IPCC) in the attribution of the recent temperature increase to anthropogenic forcings.
We obtained a long and accurate record, spanning 250 years, demonstrating that it could be well-fit with a simple model that included a volcanic term and, as an anthropogenic proxy, CO2 concentration. Through our rigorous analysis, we were able to conclude that the record could be reproduced by just these two contributions, and that inclusion of direct variations in solar intensity did not contribute to the fit.
I feel doubly vindicated, both in my belief that complex climate models don't do much, but also that you don't need them to accurately describe the data from the recent past and to make broad predictions.
I know that your article isn't specifically about the goose story, but I have to say that I strongly disagree with your assessment of the "failure" of the goose story.
First, you asked ChatGPT to write you a story, and one of the fundamental features of stories is that the author and the audience are not themselves inside the story. It is entirely expected that ChatGPT does not model the reader as having been killed by the end of the world. In fact, it would be pretty bizarre if the robot did model this, because it would indicate a severe inability to understand the idea of fiction.
But is it a "swerve through the fourth wall" for the last paragraph to implicitly refer to the reader rather than the characters in the story? Only if you're writing a certain style of novelistic fiction, in which the fiction is intended to be self-contained and the narrator is implicit (or, if explicit, does not exist outside the bounds of the story). But if you're writing a fairy tale, a fable, a parable, a myth, an epic poem, a Greek drama, or indeed almost any kind of literature outside of the modernist novel, acknowledgement of the audience and storyteller is normal. It is, in fact, expected.
And your prompt is for the bot to write you a story about a goose who fails to prevent the end of the world. Given that prompt, it's entirely to be expected that you get something like a fable or fairy tale. And in that genre the closing paragraph is often "the moral of the story", which is always addressed to the audience and not the characters. When ChatGPT writes that the deeds of the goose "will always be remembered by those who heard his story," it isn't failing to model the world, but faithfully adhering to the conventions of the genre.
My point (which I intended to elaborate, but didn't initially have time) is that hosting one of these modern software platforms involves a whole stack of components, any one of which could be modified to make apparently-noncompliant output without technically modifying any of the AGPL components. You could change the third-party templating library used by the Mastodon code, change the language runtime, even modify the OS itself.
Which means I mostly agree with your point: the AGPL is not strict enough to actually ensure what it wants to ensure, and I don't think that it can ensure that without applying a whole bunch of other unacceptable restrictions.
There could be an argument that hosting it behind a proxy counts as modification.
Not quite the same thing, but related: https://www.lesswrong.com/posts/gNodQGNoPDjztasbh/lies-damn-lies-and-fabricated-options
Additionally: separate men's fashion from women's fashion if possible.
One strong comment on the app: it should present you with a new pair of items each round rather than keeping the one that you preferred. When I played, after only a handful of selections I got into a local maximum where I liked almost nothing more than the item I had already selected, so I was just pressing the same key over and over through dozens of pictures. This is both less informative and less fun than getting to make a fresh choice every time.
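The difference between the two selection schemes can be sketched in a few lines. This is a hypothetical sketch, not the app's actual code; the `choose` callback stands in for the user's keypress, and a real app would log the picks to fit a preference model.

```python
# Sketch of "winner stays" (the current behavior) vs. "fresh pair each
# round" (the suggested fix). Hypothetical code: choose(a, b) stands in
# for the user picking their preferred item of the pair.
import random

def winner_stays(items, choose, rounds):
    """Current behavior: the preferred item is carried into the next round."""
    champ = random.choice(items)
    for _ in range(rounds):
        challenger = random.choice([i for i in items if i != champ])
        champ = choose(champ, challenger)
    return champ

def fresh_pairs(items, choose, rounds):
    """Suggested behavior: every round presents a brand-new pair."""
    picks = []
    for _ in range(rounds):
        a, b = random.sample(items, 2)
        picks.append(choose(a, b))
    return picks
```

Winner-stays converges on a single local favorite and then just re-confirms it (the dozens-of-identical-keypresses problem), while fresh pairs keeps collecting comparisons spread across the whole catalog.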
I think my strongest disagreement here is that the category of "disagreeable" does not cleave reality at the joints, and that the category "non-routine cognitive" contains a lot of work which is not, in fact, intellectually or spiritually fulfilling in the way implied.
TL;DR: the section on vocation makes a lot of unsupported assertions and "it seems obvious that" applied to things which are not at all obvious.
[T]o think that we suffered a net loss of vocation and purpose, is either historical ignorance or blindness induced by romanticization of the past.
You need to put a number on this before I'm willing to accept that it's true. Several particular points you raise are definitely not changed from pre-industrial times: intellectual jobs are still rare and available only to a privileged few, scientists are still reliant on patronage (now routed through state bureaucracies rather than individual nobles, but still the same thing), and actual professional artists were and are such a small portion of the population that I don't think you can generalize much from them.
Meanwhile, count up all of the jobs that today are in manufacturing, resource extraction, shipping, construction, retail, and childcare. To this number we should add the majority of white-collar email jobs, which I argue are not particularly fulfilling---people may not hate working in HR or as an administrative assistant, but I doubt that most of these people feel that it's a positive vocation. Is the number very different from the number of people who were peasants beforehand? Are we sure that this represents progress rather than lateral movement?
More to the point, there are some unexamined assumptions made here about what counts as "vocation", and what kinds of occupations are likely to supply it.
- Anecdotally, the farmers and ranchers I know have a very strong sense of vocation, and high job satisfaction all around (modulo the fact that they are often financially pressed). The article, however, seems to treat agriculture as automatically non-vocational.
- As alluded to above, white-collar work often seems to lack the sense of vocation and pride of work. The article above wants to lump them in with "intellectual jobs" and assumes that they are automatically preferable to alternatives.
- A notable exception to the previous is IT, but we note that programmers are best considered a modern example of skilled craftsmen.
- Generally, the article wants to conflate vocation with choice, which I believe is false.
A better way of drawing the distinction is between "bullshit" and "non-bullshit" jobs, and one might then observe that modernity has a much higher proportion of bullshit jobs than pre-modernity. But expounding that requires a full post of its own.
"I want gas stoves to be restricted so that gross people who live in suburbs can't have them."
This might be the single worst take I've ever seen on LW. I'm sorry I can't be more constructive here, but this is the kind of garbage comment I expect from the dregs of Twitter, not this site.
I understand OP to be including "misleading implications" as part of the thing to be counted. An additional complication is that the degree of misinformation in media varies widely by subject matter and relevance; everyday articles about things with minimal Narrative impact are usually more reliable. For that reason a random sample of articles probably looks better than a sample of the most impactful and prominent articles.
The per-person numbers are almost certainly due to women entering the workforce and thus getting counted in the numbers for the first time. Decline in fertility also has some effect (though probably smaller), as there are now fewer non-working children per adult.
As a literal answer to your question: the stats do account for the working poor, but the working poor are a pretty small part of population as a whole and so don't skew the statistics as much as you apparently think.
I oversold my original statement due to having remembered a slightly more sensational version of events. Nonetheless I stand by my interpretation of the tweets; others can read them for themselves and make up their own minds.
(1) he is not a government official, (2) he was not in a position to delay the vaccine (though it's possible he influenced people who were), and (3) he doesn't say anything about doing it in order to avoid giving Trump the credit.
You are right about (1), (2) strikes me as an irrelevant distinction once we've granted (1), and I flat disagree about (3).
Where he describes his motivation, he explicitly describes the need to frustrate Trump's plans. He does this repeatedly. He focuses on this much more than he focuses on safety. The overwhelmingly likely interpretation, IMO, is that safety was a pretext and opposing Trump was the goal, and this interpretation is favored by Topol himself when he describes his actions as "opposing Trump" more often than "protecting Americans".
I find it surprising that answers to the question about making your parents proud are so low in so many northern European countries. I would obviously answer the question "yes". Important to note that they're not asking if it's your primary goal or your only goal, only if it's one of your major goals, and that seems like a much lower bar. In particular, that goal seems entirely synergistic with other widespread goals such as having a good marriage and career.
I would expect that this only gets answered "no" if (a) you have a very bad relationship with your parents, with a very significant clash of values, or (b) if the target for "pleasing parents" is excessively narrow, e.g. they will only accept you going into one particular occupation that you don't like. And these are both things that do happen, but they can't be that common, can they?
Found it (scroll down to "Eric Topol is the worst").
Related news article that goes over the key points
I had misremembered a few details, namely that Topol is an influential physician, not a government official. The gist remains.
There exists a less-malign interpretation here, which is that Topol might have had sincere concerns about the safety of the Pfizer vaccine. But I am not inclined to extend much charity. Topol explicitly states, repeatedly, that his goal was to "disrupt Trump's plan" and prevent Trump from "getting a vaccine approved" before Nov 3. (Read Topol's tweets quoted in the article, and click through to see the surrounding threads for more evidence.)
Who knows how decisive his influence was. Overall, I agree with your point that slowness is the default setting for the FDA, and that most people in the agency were slowing things down out of bureaucratic habit rather than explicit political motives, but there definitely exist malign political actors like Topol.
Do you remember the nice feeling when you go to your dentist for a cleanup and you leave with that smooth, polished feeling on your teeth that sometimes last you days
Um, my problem is that I loathe this feeling, and pretty much every other tactile sensation associated with teeth cleaning, so this is something of an anti-recommendation.
ChatGPT also doesn't try to convince you of anything. If you explicitly ask it why it should do X, it will tell you, much like it will tell you anything else you ask it for, but it doesn't provide this information unprompted, nor does it weave it into unrelated queries.
https://www.theatlantic.com/health/archive/2022/01/fda-covid-vaccine-slow-rollout-trump/621284/
Regulators did, in fact, end up slowing the process: In the first week of September, the FDA told vaccine makers to extend their clinical trials by several weeks beyond what they’d planned, in order to gather more safety data. That effectively postponed Pfizer’s request for an emergency use authorization of the mRNA vaccine it had developed with BioNTech until after the election.
There exist screenshots of a government official actually bragging on Twitter about having delayed the vaccine in order to avoid giving Trump the credit. I seem to recall Zvi posting these screenshots at some point, though it might have been someone else. In any case, you can find many, many articles dating from late 2020 and early 2021 conveying dueling narratives about whether the vaccine was in danger of being "rushed" (Dem talking point) or whether the FDA sandbagged the process for political reasons (Trump talking point). In any case, the basic facts seem undisputed:
- The vaccine approval process could have been further expedited, and if it had proceeded at maximum speed it would have been completed in September or October 2020.
- The Trump administration did in fact pressure the FDA to approve the vaccine in October.
- The FDA did not approve the vaccine until after the election.
Which is an interesting thing to observe, because the narrative has since switched and "the vaccine was a rush job and is dangerous" is now a right-wing talking point while "the vaccine is perfectly safe" is now the mainstream position.
Edit to add: On close read I realize that I was conflating the successful end of the clinical trials and their public announcement with actual shots-in-arms readiness. Shots-in-arms readiness would probably not have been accomplished in October in any case, given the production pipeline and distribution problems, but the announcement of the successful trials, according to multiple sources, could plausibly have been as early as September.
One note he makes is that most excess deaths post-vaccine were in red states, and he estimates that Trump ‘embracing scientific reality and strongly urging people to get vaccinated’ could have saved 400k lives
This is not a counterfactual. This is what Trump actually did! He himself is vaccinated, and he encouraged vaccination publicly, including continuing to do so after he lost the presidency. The only real complaint to make here is that he maybe didn't do it enough, because he has the political sense not to continually advocate for something that his supporters hate. So your statement that it wouldn't have moved the needle is obviously correct, not because we need to reason about what would have happened but because we can observe what actually did happen.
I do agree that the one counterfactual that would have mattered would have been releasing the vaccine in September or October and allowing Trump to take credit for it. But Public Health decided that opposing Trump was more important than getting the vaccine out a few months early.
The important thing to notice is that all existing AIs are completely devoid of agency. And this is very good! Even if continued development of LLMs and image networks surpasses human performance pretty quickly, the current models are fundamentally incapable of doing anything dangerous of their own accord. They might be dangerous, but they're dangerous the way a nuclear reactor is dangerous: a bad operator could cause damage, but it's not going to go rogue on its own.
Very good, and strongly interacts with a recent interest of mine, namely symbology. Your discussion of the fact that a ritual must be in some way counter-intuitive reminds me of a quote from Fr. Alexander Schmemann. (I have searched and failed to find the exact text of the quote online, though were I at home I could find the book on my bookshelf.) Paraphrased: "Modern readers assume that a symbolic action must relate in some obviously analogical or didactic way to the thing being represented. But when one examines religious custom in any religious tradition, one finds that the older and more organic the symbol, the less it corresponds in any visible way with the thing that it represents."
Unless I have completely misremembered, this is from For the Life of the World, which would be an excellent source to add to your readings of Purvamimamsa for an eastern (as in "Eastern Orthodox"), modern-but-traditionalist treatment of similar subjects.
Programming has already been automated several times. First off, as indicated above, it was automated by moving from actual electronics into machine code. And then machine code was automated by compilers, and then most of the manual busywork of compiled languages was automated by the higher-level languages with GC, OO, and various other acronyms.
In other words, I fully expect that LLM-driven tools for code generation will become a standard and necessary part of the software developer's toolkit. But I highly doubt that software development itself will be obsoleted; rather, it will move up to the next level of abstraction and continue from there.
Probably because Putanumonit is straight. It's not that mysterious.
Do you have any details about what's happened in Fargo and St. Louis? Just the other day I was wondering about the outcomes of these kinds of election reforms.
This had entirely the opposite effect on me, but was an interesting read nonetheless.
The big problem with giving kids jobs is that most kids are not strong enough self defenders to defend against potentially-subtle attacks and manipulation by people employing them
I disagree that this is "the" big problem; in fact it seems to me to be quite a small problem. There are plenty of bosses who are sort of jerks, or who manipulate their workers into working extra hours without pay or the like. This is bad, but it's not the magnitude of harm that requires society to pour tons of extra effort into eradicating it. If it escalates into something like outright wage theft, then that is already illegal, and at most we might want to make it easier to report and investigate these things. (For really serious cases, such as a manager sexually assaulting a subordinate, there are two responses: first, this isn't all that common; second, it's already very illegal.)
In any case it seems weird to use this as a counter-argument given that the alternative is legally requiring students to be in school for 4-8 hrs/day without any compensation at all.
Those benefitting are usually not politicians, they're commercial interests who make money from the status quo. They will oppose efforts that cause them to lose money even if the change is a net good overall, but you can quiet them down by giving them a bunch of money. Typically doing so is still a net good, because the cost of buying off the opposition is (usually) less than the value gained by the rest of society.
Perhaps the verb "buy off" is not the best one here, but I'm not sure what else you'd use. If you're morally offended by the idea of offering payments to lessen the sting for people who suffer a concrete downside from your policies then, uh, don't go into politics I guess.
Upvoted because this is a good comment, but strong disagree with the underlying premise. Actual global nuclear war would render existing partisan divides irrelevant almost instantly; typical partisan culture-war divides would be readily ignored in favor of staying alive.
I could imagine more relevant international divides of this type, such as wealthier and militarily powerful first-world nations hoarding their own resources at the expense of poorer nations, but I don't think that partisanship within single nations would overwhelm the survival instinct.
"SA and Africa look like they fit together" is a good example, because at first glance it looks like just a dumb coincidence and not any kind of solid evidence. Indeed, it's partly for that reason that the theory of continental drift was rejected for a long time; you needed a bunch of other lines of evidence to come together before continental drift really looked like a solid theory.
So using the continental drift argument requires you to not just demonstrate that the pieces fit, but include all of the other stuff that holds up the theory and then use that to argue for the age of the earth.
Unfortunately I don't know of any other evidences for the age of the earth or universe that have shorter argument chains. It's genuinely hard! (And partly for that reason I wouldn't be too surprised if new evidence caused us to revise our estimates for the age of the universe by a factor of two in either direction.)
A fun inverse of this exercise is to go to something like Proofs for a young earth and see how many of them you can counter-argue (and consider how convincing your argument will be to someone with a low level of background knowledge).
With that in mind, I'm not really happy with any of the provided proofs for the age of the universe. While there are a bunch of accessible and intuitively-plausible arguments for getting the age of the earth to at least several million years, determining the age of the universe seems to depend on a bunch of complicated estimates and intermediate steps that are easy to get wrong.
I'm not trying to argue for a general inversion of the principle, ie. I'm not suggesting that non-consent is somehow automatically justified. Mostly I was observing the thing where two people on "opposite" sides of an issue nonetheless have major unstated premises in common, and without those premises the contention between them dissolves.
As I alluded to by saying "left as an exercise to the reader", I don't have a full explanation at the ready about the ethics of non-consensuality. Mostly I just wanted to bring the readers' attention to the way in which consensualism is being assumed by the above, and that the argument fails hard in the cases where consensualism is rejected or simply doesn't apply.
(If I were to make a general gesture towards the ethics of non-consent, I would start by talking about the phenomenon of dependency, where one party explicitly requires the cooperation of another party in order to live. Such dependency relations are by definition unequal, and in the natural world they are also often non-consensual, but despite these features they still place binding moral obligations on both parties.)