Comments
I would be willing to bet maybe $100 on the video prediction one. Kling is already in beta. As soon as it is released to the general public, that prediction is satisfied. The only uncertainty is whether Chinese authorities crack down on such services for insufficient censorship of requests.
The Amelia Bedelia defense.
I acknowledge this. My thinking is a bit scattered, and my posts are often just an attempt to publicly articulate intuitions that I have no outlet elsewhere to discuss and refine.
I'm saying, first off, there is no moat. Yet I observe people on this and similar forums with the usual refrain: look, the West is so far ahead in doing X in AI, so we shouldn't use China as a bogeyman when discussing AI policy. I claim this is bogus. The West isn't far ahead in X, because everything can be fast-copied, stolen, or brute-forced, and limits on hardware, etc., appear ineffective. Lots of the arguments in favor of disregarding China in setting AI safety policy assume it will be perpetually a few steps behind. But if they are getting similar performance, then they aren't behind.
So if there is no moat, and we can expect peer performance soon, then we should be worried, because we have reason to believe that if scaling plus tweaks can reach AGI, then China might conceivably get AGI first, which would be very bad. I have seen replies to this point of: well, how do you know it would be that much worse? Surely Xi wants human flourishing as well. And my response is: governments do terrible things. At least in the West, the public can see these terrible things and sometimes say, hey: I object. This is bad. The PRC has no such mechanism. So AGI would be dangerous in their hands in a way that it might not be, at least initially, in the West, and the PRC is starting from a not-so-pro-flourishing position (Uighur slavery and genocide, pro-Putinism, invade-Taiwan fever, debt-trap diplomacy, secret police abroad, etc.).
If you think AGI kills everyone anyway, then this doesn't matter. If you think AGI just makes the group possessing it really powerful and able to disempower or destroy competitors, then this REALLY matters, and policies designed to hinder Western AI development could mean Western disempowerment, subjugation, etc.
I make no guarantees about the coherence of this argument and welcome critiques. Personally, I hope to be wrong.
Before 30, I was also a moron. But I only know this because I had an ideological epiphany after that and my belief system changed abruptly. Scales-fell-from-my-eyes type situation. When I turned 33, I started keeping a diary because I noticed I have a terrible memory for even fairly recent things, so maybe going forward subtle changes will become more salient.
That said, some things seem more impervious to change, for instance the "shape" of things that give you pleasure. Maybe you liked 3D puzzles as a child and now you like playing in Blender in your free time. Not the same thing, but the same shape.
Good point.
I'd like to be convinced that I'm wrong, but I just watched a Kling AI video of Justin Timberlake drinking soda, and it was pretty real looking. This, plus the Voice-mode delay from OpenAI, plus Yi-Large in the top ten on the LMSYS leaderboard after the company has existed for only one year, plus just the general vibe, has me really convinced that:
- There is no moat. Chinese labs are now at peer level with Western AI labs. Maybe they don't have long context lengths yet, and maybe they have fewer GPUs, but Zvi's, gwern's, and others' insistence that we needn't worry--they don't have the secret sauce yet--is, to put it politely, absolute nonsense. All the secrets have already leaked out. Only a month ago I was told that Sora-like video was out of reach. Now we see anyone can do it. Everyone and their mother is popping out video generation tools. The food-eating videos from Kling should give everyone pause.
Predictions:
- (Item removed. I realized that the paper I was referring to would affect inference-time compute, not training compute.)
- By year's end, some Chinese-made LLM will be atop the LMSYS leaderboard. (60%)
- Beyond-Sora text-to-video and image-to-video generation wide-released to the general Chinese public by end of year (80%). Capable of generating multiple minutes of video (70%, given the first statement). Generation times less than half that of Sora (80%). Compute less than half that of Sora (90%). (See the sketch after this list for how these conditional probabilities chain.)
- Chips of similar quality to those produced by TSMC or Samsung will be produced by a Chinese firm within two years (50%). This will be accomplished by using a new lithographic process to sidestep the need for embargoed advanced lithography machines, or by reverse engineering one of the latest machines (smuggling it from Korea or Japan) (80%, given the first statement is true).
- Advanced, inexpensive Chinese personal robots will overwhelm Western markets, destroying the current Western robotics industry in the same way that the West's small-kitchen-appliance industry was utterly crushed (70%). Data from these robots will make its way to the CCP (90%, given the first statement is true).
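Several of these predictions stack a conditional estimate on top of a base estimate, so the headline numbers aren't the unconditional odds. Here is a minimal sketch of the chain-rule arithmetic, using the figures from the list above (the variable names are just illustrative labels, not anything from a real forecasting library):

```python
# Chain rule: P(A and B) = P(A) * P(B given A)

p_video_release = 0.80   # beyond-Sora video gen wide-released in China by year's end
p_minutes_given = 0.70   # multi-minute generation, given a wide release
p_chips = 0.50           # TSMC/Samsung-quality chips from a Chinese firm in 2 years
p_method_given = 0.80    # via new litho or reverse engineering, given peer chips

# Unconditional probability that both parts of each pair come true:
print(f"P(release AND multi-minute video) = {p_video_release * p_minutes_given:.2f}")  # 0.56
print(f"P(peer chips AND stated method)   = {p_chips * p_method_given:.2f}")           # 0.40
```

Read strictly, then, the compound claims are closer to coin flips even where each individual link looks confident.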
What does this mean? The West is caught on the back foot again. Despite the West creating the technology, China, by sheer size and directed investment, is poised to crush it in AI. We saw this same story with electric cars, solar panels, and robotics. Fast copy (or steal), then quickly iterate and scale, is extremely effective, and there is no easy way to combat it. Market asymmetries mean that Chinese firms always have a large home market without competitors, while Western markets are bombarded with cheap alternatives to domestic brands.
If these were Japanese firms in the 1980s or Korean firms in the 2000s, we could sit back and relax. Sure, they may be ahead, but they are friendly, so we can reap the benefits. That is not the case here, especially with the possibility of AGI. Chinese firms in the 2020s are funded and controlled by the CCP and subject to civil-military fusion laws; the tech is likely already being deployed in weapons systems, propaganda tools, etc. If LLMs scale to AGI and the Chinese get it first, the West is cooked in a scary, existential way, over and above the general danger of AGI.
Why? Observe the flood of fentanyl precursors streaming from Chinese ports to Mexico. This could be stopped, but it is permitted because it serves the CCP's ends. Observe the Chinese chips making their way into Russian weapons systems. This could be stopped, but it serves the CCP's ends that its vassal Russia crush Western advancement. Now imagine the same entity had AGI. This is not to say that the West has a good track record--Iran-Contra, Iraq, Afghanistan, arms to rogue regimes, propping up South American despots, turning a blind eye to South African apartheid for decades, etc. But the various checks and balances in the West often mean that there is a meaningful way to change such policies, especially ones that look calculated to disempower and subordinate. An AGI China is scary as fuck. Unchecked power. The CCP already has millions of people in work camps and promotes re-education (ethnic cleansing) in "wayward" provinces. Extrapolate a little.
Again, I am eager to be convinced I am wrong. I hate to beat this same drum over and over.
I would argue that leaders like Xi would not immediately choose general human flourishing as the goal. Xi has a giant chip on his shoulder. I suspect (not with any real proof, just from a general intuition) that he feels Western powers humiliated imperial China and that permanently disabling them is the first order of business. That means immediately dissolving Western governments and placing them under CCP control. Part of human flourishing is the feeling of agency. Having a foreign power use AI to remove your government is probably not conducive to human flourishing. Instead, it will produce utter despair and hopelessness.
Consider what the US did to Native Americans with complete tech superiority. Subjugation and decimation in the name of "improvement" and "reeducation." Their governments were eliminated. They were often forcibly relocated at gunpoint. Schools were created to beat the "savage" habits out of children. Their children were seized and rehomed with Whites. Their languages were forcibly suppressed and destroyed. Many killed themselves rather than submit. That is what I'd expect to happen to the West if China gets AGI.
Unfortunately, given the rate at which things are moving, I expect the West's slight lead to evaporate. They've already fast-copied Sora. The West is unprepared to contend with a fully operational China. The countermeasures are half-hearted and too late. I foresee a very bleak future.
There are lots of languages that use a "to be" copula far less frequently than English. I don't know that it actually affects people's ontologies. It would be evidence in favor of Sapir-Whorf if it did.
Nvidia just low-key released its own 340B-parameter model. For those of you worried about the release of model weights becoming the norm, this will probably aggravate your fears.
Here is the link: https://research.nvidia.com/publication/2024-06_nemotron-4-340b
Oh, and they also released their synthetic data generation pipeline:
https://blogs.nvidia.com/blog/nemotron-4-synthetic-data-generation-llm-training/
I think I've switched positions on open-source models. Before, I felt that we must not release them because they can be easily fine-tuned to remove safety measures and represent a tech donation to adversaries. But now the harm posed by these open-source models seems pretty small, and because Alibaba is releasing them at an exceptionally rapid pace, Western forbearance will not affect their proliferation.
This. Very much.
Truman tried to seize the steel companies on national security grounds to get around a strike. He was badly benchslapped.
A very serious negative update. And they practically say: we read the Sora paper and then replicated it.
As I wrote in another post, this can easily turn into another TikTok-type tool, where dumb Westerners spill personal info into what amounts to a Chinese intelligence-gathering apparatus.
Now, apparently, someone has used Kling to make a commercial for a Mad Max-themed beer. Zvi would call this mundane utility.
What it demonstrates is that the Chinese can fast-copy anything we do, improve around the edges, and release a product. Frontier model...boom, fast-copied. The amount of compute required for some of these tasks makes me suspect big leaks from frontier labs. Also, because big labs here are reluctant to release any new models ahead of this year's elections, Chinese counterparts get a head start on copying and product diffusion. We could see a situation like the one with TikTok: a Chinese firm creates an intel-slurping app that it releases to the West (but doesn't allow internally), and then the West cannot get rid of it because the Chinese proceed to abuse Western legal processes. A video generation application is the poster child for an app that can be tweaked to cause destabilization while also hiding behind free speech protections.
While I'm not sure about doom from AGI, my p(doom) for the West rises every time another one of these fast copies happens. The asymmetry in Sino-Western relations--Chinese firms can enter Western markets but not the reverse--ensures this dynamic will continue until Western firms and labs lose predominance in everything.
That's the wise thing to do, but people routinely use "oh": five-oh-six-three-four-oh-one. In fact, "zero" might sound overly formal to me depending on the context. If I am reading my credit card number, I would say "zero."
To "punch something up" can sometimes mean to make it more vibrant, visible, or outstanding, so I think he means he goes into the details in order to make the important ones more salient for the reader.
Why not "sen"? If not for written language, seven would likely already be pronounced this way. The process was under way. Weeks were once called sennights. And even people who don't usually say it this way often do when drunk. True, there tends to be a glottal stop after the e before the n when the v is elided, but not always.
What to do about zero? Just reduce it to oh, like in casual speech or when reciting telephone numbers.
oh, one, two, three, four, five, six, sen, eight, nine, ten.
There! All single syllables.
Several things are at work.
- Global warming is highly partisan, and the proposed solutions to it are extremely polarizing. Al Gore's An Inconvenient Truth is probably single-handedly responsible for shifting the GOP toward denying it exists. The taint spread to other issues: a whiff of any environmentalism now raises hackles that would not have been raised otherwise.
- Environmentalists have used environmental laws that were initially bipartisan to throw wrenches into development favored by the GOP.
- Partisan sorting. Republicans who were concerned about the environment in 1990 are dead, have changed positions, or are no longer Republicans, just as anti-abortion Democrats from 1990 are dead, have changed positions, or are no longer Democrats.
Care to reassess?
Possible. Possible. But I don't see how that is more likely than that Alibaba just made something better. Or they made something with lots of contamination. I think this should make us update toward not underestimating them. The Kling thing is a whole 'nother issue. If it is confirmed text-to-video and not something else, then we are in big trouble, because the chip limits have failed.
So the US has already slipped behind despite chip limits. I also saw that Llama 3 was already bested by Qwen 2. We are about a week away from some Chinese model surpassing GPT-4o on LMSYS. I want to hear the China-is-no-big-deal folks explain this.
A Chinese company released a new Sora competitor, Kling, and it is arguably superior to Sora while actually being publicly available. Could be exfiltration or could be genuinely homegrown. In any case, the moat is all gone.
I'm curious what these "more effective words" are. This isn't asked flippantly. Clearly there is a geopolitical dimension to the AI issue and Zvi lives in the U.S. Even as a rationalist, how should Zvi talk about the issue? China and the U.S. are hostile to each other and will each likely use AGI to (at the very least) disempower the other, so if you live in the U.S., first, you hope that AGI doesn't arrive until alignment is solved, and second, you hope that the U.S. gets it first.
01.AI dropped a model on LMSYS that is doing fairly well, briefly overtaking Claude Opus before slipping a bit. Just another reminder that, as we wring our hands about dodgy behavior by OpenAI, these Chinese firms are apparently getting compute (despite our efforts to restrict it) and releasing powerful, competitive models.
Zvi is talking about those people who use libertarianism as a gloss for "getting what they want"--people who aren't into liberty per se, but only into liberty to the extent it satisfies their preferences. There probably is, and if there isn't, there should be, a word for people who invoke liberty this way. That way, when talking about the sort who, for instance, want children to be allowed to read the Bible in the classroom (because LIBERTY!) while simultaneously wanting to ban some book on trans youth (because PARENTS' RIGHTS), we can say: oh, yes, that (word) is at it again. Hypocrite, for sure, and perhaps gaslighter, but we need a better word. If there is an existing word, please let me know. There are so many of these sorts out and about, they easily dwarf the population of actual libertarians.
Anyone paying attention to the mystery of the GPT-2 chatbot that has appeared on LMSYS? People are saying it operates at levels comparable to or exceeding GPT-4. I'm writing because the appearance of mysterious, unannounced chatbots for public use, without provenance, makes me update my p(doom) upward.
Possibilities:
- This is an OpenAI chatbot based on GPT-4, just like it says it is. It has undergone some more tuning and maybe has boosted reasoning because of methods described in one of the more recently published papers.
- This is another big American AI company masquerading as OpenAI.
- This is a big Chinese AI company masquerading as OpenAI.
- This is an anonymous person or group using some GPT-4 fine-tuning API to improve performance.
Possibility 1 seems most likely. If that is the case, I guess it is alright, assuming it is purely based on GPT-4 and isn't a new model. I suppose if they wanted to test on LMSYS to gauge performance anonymously, they couldn't slap 4.5 on it, but they also couldn't ethically give it the name of another company's model, and giving it an entirely new name would invite heavy suspicion. So calling it the name of an old model and monitoring how it does in battle seems like the most ethical compromise. Still, even labeling a model with a different name feels deceptive.
Possibility 2 would be extremely unethical and I don't think it is the case. Also, the behavior of the model looks more like GPT-4 than another model. I expect lawsuits if this is the case.
Possibility 3 would be extremely unethical, but is possible. Maybe they trained a model on many GPT-4 responses and then did some other stuff. Stealing a model in this way would probably accelerate KYC legislation and yield outright bans on Chinese rental of compute. If this is the case, then there is no moat because we let our moat get stolen.
Possibility 4 is something someone mentioned on Twitter. I don't know whether it is viable.
In any case, releasing models in disguise onto the Internet lowers my expectations for companies to behave responsibly and transparently. It feels a bit like Amazon's scheme to collect logistics data from competitors by operating under a different name. In that case, as in this one, the facade was paper thin--the headquarters of the fake company was right next to Amazon's--but it worked for a long while. Since I think 1 is the most likely, I believe OpenAI wants to make sure it soundly beats everyone else in the rankings before releasing an update with improvements. But didn't they just release an update a few weeks ago? Hmm.
The roundness of the earth is not a point upon which any political philosophy hinges, yet flat earthism is a thing. The roundness is not subjective, it isn't controversial, and it does not advance anyone's economic interest. So why do people engage in this sort of contrarianism? I speculate that the act of being a contrarian signals to others that you question authority. The bigger the consensus challenged, the more disdain for authority shown. One's willingness to question authority is often used as a proxy for "independent thinking." The thought is that someone who questions authority might be more likely to accept new evidence. But questioning authority is not the same as being an independent thinker, and so, when taken to its extreme, it leads to denying reality, because isn't reality the ultimate authority?
Yes, yes. Probably not. And they already have a Sora clone called Vidu, for heaven's sake.
We spend all this time debating: should greedy companies be in control, should government intervene, will intervention slow progress toward the good stuff (cancer cures, longevity, etc.)? All of these arguments assume that WE (which I read as a gloss for the West) will have some say in the use of AGI. If the PRC gets it, and it is as powerful as predicted, these arguments become academic. And this is not because the Chinese are malevolent. It's because AGI would fall into the hands of the CCP via civil-military fusion. This is a far more calculating group than those in Western governments. Here, officials have to worry about getting through the next election. There, they can more comfortably wield AGI for their ends while worrying less about the palatability of the means: observe how the population quietly endured a draconian lockdown and only meekly revolted when conditions began to deteriorate and containment looked futile.
I am not an accelerationist. But I am a get-it-before-them-ist. Whether the West (which I count as including Korea, Japan, and Taiwan) can maintain its edge is an open question. A country that churns out PhDs and loves AI will not be easily thwarted.
So the usual refrain from Zvi and others is that the specter of China beating us to the punch with AGI is not real because of limits on compute, etc. I think Zvi has tempered his position on this in light of Meta's promise to release the weights of its 400B+ model. Now there is word that SenseTime just released a model that beats GPT-4 Turbo on various metrics. Of course, maybe Meta chooses not to release its big model, and maybe SenseTime is bluffing--though I would point out that Alibaba's Qwen model seems to do pretty okay in the arena. Anyway, my point is that I don't think the "what if China" argument can be dismissed as quickly as some people here seem ready to do.
Wait...your children are on the Mormon path? Oh boy.
As a non-parent, I have no idea what it is like to be a parent. It must be exceptionally hard and require difficult compromises. However, having realized that Mormonism is not the path to reason, aren't you terrified that your children are headed toward a dead end, believing irrational things and perpetuating those beliefs unto the next generation? How do you handle that? I would be looking for any signs that my kids wanted out, looking for them to send me an SOS so I would be justified in swooping in and telling them it's all baloney and they don't need to take any of it seriously. That would probably land me in family court and alienate me from my children, who, having grown up in the community and imbibed the teachings like mother's milk, have become integrated into the hive mind. But the temptation to cry BS must be overwhelming, no?
Less than a year. They probably already have toy models with periodically or continuously updating weights.
Sure, the topics in this piece are dealt with superficially and the discussions are not especially thought-provoking; when compared to the amazing creative works that people on this site produce, it is low-mediocre. But Claude writes more coherently than a number of published authors and most of the general public.
He doesn't mean politically conservative, he means that Google has traditionally been conservative when it comes to releasing new products...to the point where potentially lucrative products and services rot on the vine.
Good point, although I used Esperanto precisely because it is a language for which the OP's approach is transparently difficult. The Greek word for light (in weight) is avaris--literally "not heavy." So in Greek, one must say "This object is easy to lift because of the lowness of its weight," whereas in English one can simply say "This object is light." Seems arbitrary. I appreciate what the OP is trying to do, though.
Most of the time English has an antonym that does not involve a negative prefix or suffix.
- It is not warm. ~= It is cool.
- It is not new. ~= It is old.
But this is not the case in other languages. Consider Esperanto:
- It is not warm. -> Ĝi ne estas varma. ~= Ĝi estas malvarmeta.
- It is not new. -> Ĝi ne estas nova. ~= Ĝi estas malnova.
Because mal- is equivalent to un-, it is forbidden, and you have to resort to periphrasis:
- Ĝi estas alia ol varma. (It is other than warm.)
- Ĝi estas la malo de varma. (It is the opposite of warm.)...oh, wait, this contains mal- too.
People who eat seafood but not the flesh of terrestrial animals are pescatarians. Ethical (as opposed to environmental) pescatarians say fish and other marine life aren't complex enough to feel fear or pain. Perhaps they call themselves vegetarians just to avoid having to explain pescatarianism.
I'm puzzled by your use of the word "intelligence." Intelligence refers to a capacity to understand facts, acquire knowledge and process information. Humans are presently the only members of the set of intelligent self-regulating systems.
Whenever someone uses "they," I get nervous.
This and other communities seek to transcend, or at least mitigate, human imperfections. Just because something is "human" doesn't mean it contributes to human flourishing. Envy, rage, hate, and cruelty are human, after all.
Lex Luthor vibes.
"To believe" in German is glauben, also from Proto-Germanic. Was this meaning also colored by Greek?
I don't know that an opinion that conforms to reality is self-reinforcing. It is reinforced by reality. The presence of a building on a map is reinforced by the continued existence of the building in real life.
When I was in middle school, our instructor was trying to teach us about the Bill of Rights. She handed out a paper copy and I immediately identified that Article the first (sic) and Article the second (sic) were not among the first ten amendments and that the numbers for the others were wrong. I boldly asserted that this wasn't the Bill of Rights and the teacher apologized and cursed the unreliable Internet. But I was wrong. This WAS the Bill of Rights, but the BILL rather than the ten ratified amendments. Everyone came away wrongly informed from that exchange.
Edit: I originally wrote that I identified that they were not in the Constitution, but Article the second is, as the 27th Amendment, and I knew that; it just wasn't among the first ten.
Sam Altman: is there a word for feeling nostalgic for the time period you’re living through at the time you’re living it?
Call it "nowstalgia."
First, I suggest that people pay heed to what happened in the movie "Don't Look Up." I don't remember the character names, but the punk female scientist, when confronted during an interview by unserious journalists, went absolutely bonkers on television and contributed significantly to doom. The lesson I got from this is if you do not present serious existential threats in a cogent, sober manner, the public will polarize based on vibes and priors and then lock in. Only the best, most unflappable, polished spokespeople should be put forward, and even then, it might be no use.
Second, you cannot have a meaningful exchange on Twitter. Twitter encourages the generation of poorly reasoned, emotional responses that are then used to undermine better-reasoned future arguments. I would recommend people just avoid the platform entirely, because the temptation to respond to provocateurs like Jezos is too strong.
I think this disagreement stems from a failure to distinguish which meaning of innocence we are talking about. By my reckoning, there are three major meanings: legal innocence, moral innocence, and naive innocence. Legal innocence is the lack of criminal culpability. Moral innocence is the lack of moral culpability. Naive innocence is the lack of knowledge about sensitive topics.
"Innocent as a dove and shrewd as a serpent" is referring to moral innocence and means: be clever, but only so far as is morally acceptable. Naive innocence, however, which is the topic the OP seems to be discussing, isn't a virtue, it is ignorance, and curiosity is the virtue which seeks to extinguish it. An innocent listener who doesn't understand the racial joke should be curious and ask probing questions and do research to better understand what the racial joke teller was trying to say. Then, the next time someone talks in a similar manner, the now savvy listener can make an informed decision about whether that person is the sort of person the listener wants to associate with.
Yes, this could have been written better. I am honestly, genuinely not partial to either side. I could be convinced that intelligence begets human-like values if someone could walk me through how that happens and how to account for examples of very intelligent people who do not have human-friendly values. I shouldn't have been so aggressive with my word choice; I just find it frustrating when I see people assuming something will happen without explaining how. I reckon the belief in this convergence is, at least sometimes, tied to a belief in moral realism, and that's the crux. If moral realism holds, then moral truths are discoverable, and, given sufficient intelligence, an entity will reach these truths by reason. For those not motivated by moral realism, perhaps the conviction stems from sureness that the manner in which current AIs are trained--on human-created data--will, at some point, cause sufficiently advanced AI to grok human values. But what if they grok them but don't care? I understand that some animals eat their young, but I don't endorse this behavior and wish it were not so; I would feel the urge to stop it if I saw it happening and would refrain only because I don't want to interfere with natural processes. It seems to me that the world is not compatible with human values (otherwise what is would be what ought to be), so humans, who operate in the world, may not be compatible with AI values, even if the AI understands those values deeply and completely.
Anyway, point is, I'm not trying to be a partisan.
Exactly right. However, I am extremely doubtful about anyone who claims that all their patients are cured within a few sessions. That sounds very unlikely unless they screen out people with anything more than minor hang-ups. Sure, in many cases the root cause of the psychological problem can be identified, and the patient can learn a few techniques and then no longer needs further therapy. However, lots of people in therapy are dealing with negative mental processes that were baked into them by a difficult childhood or a traumatic experience. Those sorts of issues can require ongoing therapy to keep the patient on track and in a positive mindspace. One quick trick won't work on someone with severe codependency or agoraphobia or anorexia. Maybe, with time, they can work through these issues and no longer need therapy, but this could take years.
Yes. I agree with you. Innocence is naivete. They are the same thing. But innocence emphasizes the benefits of being unsullied by knowledge and naivete emphasizes the dangers.
Knowledge isn't always psychically refreshing. I've heard people use the term cognitohazard to describe knowledge that causes mental harm to the person who knows it. Knowing about the wicked tendencies of man and the indifference of the universe is psychically scarring. Once you know about it, it alters your thought processes pretty permanently. It makes life sadder and fills it with more anxiety. However, because knowing about these things means that you can watch out for them and survive, not knowing can be dangerous.
In situations where not knowing does not present immediate harm, we use the word innocence. "Look, she is so friendly with everyone, even people she doesn't know. Isn't that precious. I wish I were still like that." But in situations where not knowing places someone in danger, we use the word naivete. "Can you believe she gave that strange man all that information? She is so naive. He could come to her house and hurt her."
As for innocence and naivete being associated with sexuality, the same reasoning holds. Sex, past and present, is dangerous business. In the past, getting pregnant meant an elevated risk of death or disability. Even if not impregnated, you could get incurable diseases that would shorten your life and make you unmarriageable. Lacking knowledge about sex meant you weren't aware of this grim reality. And people who were aware of it wished they could go back to not knowing, because the burden of knowledge is heavy. So they would say: "Look, she is so innocent. Wish I were still so." On the other hand, when people aware of the grim reality saw an innocent person acting in a way likely to attract unwanted sexual attention, they would call her naive, since this attention could lead to disease or pregnancy and therefore to the discovery of the grim reality.