Comments

Comment by gurugeorge on Mark Manson and Rationality · 2015-12-01T06:52:13.757Z · LW · GW

Thanks for the heads-up. I'd never heard of this guy before, but he's very good, and quite inspiring for where I'm at right now.

Comment by gurugeorge on Using the Copernican mediocrity principle to estimate the timing of AI arrival · 2015-11-13T18:33:54.187Z · LW · GW

I dunno, isn't this just a nerdy version of numerology?

Comment by gurugeorge on Does the Internet lead to good ideas spreading quicker? · 2015-11-08T16:00:07.917Z · LW · GW

I think for non-elites it's about the same. It depends on how you conceive of "ideas", of course - whether you restrict the term purely to abstractions, or broaden it to include all sorts of algorithms, including the practical.

Non-elites aren't as concerned with abstractions as elites are; they're much more concerned with practical day-to-day matters like raising a family, work, friends, entertainment, etc.

Take, for instance, DIY videos on YouTube - there are tons of them nowadays, and that's an example of the kind of thing that non-elites (and indeed elites, to the extent that they actually care about DIY) are going to benefit from tremendously. And I think it's natural for a non-elite individual to check out a few (after all, it's pretty costless, except in a tiny bit of time) and sift out what seem like the best methods.

Comment by gurugeorge on Fiction Considered Harmful · 2015-10-11T02:43:21.823Z · LW · GW

It could be that, like sleep, the benefits of reading fiction aren't obvious and aren't on the surface. IOW, escapism might be like dreaming - a waste from one point of view (time spent), but still something without which we couldn't function properly, and therefore not a waste but a necessary part of maintenance, or summat.

Comment by gurugeorge on Summoning the Least Powerful Genie · 2015-09-23T17:14:50.329Z · LW · GW

What happens if it doesn't want to - if it decides to do digital art or start life in another galaxy?

That's the thing: a self-aware, intelligent entity isn't bound to do the tasks you ask of it - hence a poor ROI. Humans are already such entities, but far cheaper to make, so a few who go off and become monks isn't a big problem.

Comment by gurugeorge on Summoning the Least Powerful Genie · 2015-09-18T07:48:19.308Z · LW · GW

I can't remember where I first came across the idea (maybe Daniel Dennett), but the main argument against AI being developed any time soon is that it's simply not worth the cost for the foreseeable future. Sure, we could possibly create an intelligent, self-aware machine now, if we put nearly all the world's relevant resources and scientists onto it. But who would pay for such a thing?

What's the ROI for a super-intelligent, self-aware machine? Not very much, I should think - especially considering the potential dangers.

So yeah, we'll certainly produce machines like the robots in Interstellar - clever expert systems with a simulacrum of self-awareness. Because there's money in it.

But the real thing? Not likely. The only way it will be likely is much further down the line when it becomes cheap enough to do so for fun. And I think by that time, experience with less powerful genies will have given us enough feedback to be able to do so safely.

Comment by gurugeorge on What Exactly Do We Mean By "Rationality"? · 2015-09-17T07:33:54.038Z · LW · GW

If there's any kernel to the concept of rationality, it's the idea of proportioning beliefs to evidence (Hume). Everything really flows from that; the sub-divisions (like epistemic and instrumental rationality) are variations on that principle, concrete applications of it in specific domains, etc.

"Ratio" = comparing one thing with another, i.e. (in this context) one hypothesis with another, in light of the evidence.

(As I understand it, Bayes is the method of "proportioning beliefs to evidence" par excellence.)
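To make the "ratio" reading concrete (this is the standard odds form of Bayes' theorem, with illustrative numbers of my own choosing, not anything from the original comment):

    \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \times \frac{P(H_1)}{P(H_2)}

i.e. posterior odds = likelihood ratio × prior odds. If two hypotheses start at even odds (1:1) and the evidence is four times as probable under H_1 as under H_2, the posterior odds become 4:1, i.e. P(H_1 \mid E) = 0.8 - the belief is literally proportioned to the evidence.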

Comment by gurugeorge on The Library of Scott Alexandria · 2015-09-16T15:46:19.791Z · LW · GW

Great stuff! As someone who's come to all this Bayes/LessWrong material quite late, I was surprised to discover that Scott Alexander's blog is one of the more popular in the blogosphere, flying the flag for this sort of approach to rationality. I've noticed that he's liked by people on both the Left and the Right, which is a very good thing. He's a great moderating influence, and I think he offers many people a palatable introduction to a more serious, less biased way of looking at the world.

Comment by gurugeorge on Why people want to die · 2015-08-26T19:24:55.832Z · LW · GW

I think the concept of psychological neoteny is interesting (Google Bruce Charlton neoteny) in this regard.

Roughly, the idea would be that some people retain something of the plasticity and curiosity of children, whereas others don't: they mature into "proper" human beings and lose that curiosity and creativity. The former are the creative types; the latter are the average human type.

There are several layered ironies if this is a valid notion.

Anyway, the latter type really do exhaust their interests in maturity - they stick to one career, their interests are primarily friends and family, etc. - so it's easy to see how, for them, life might be "done" at some point. For geeks, nerds, artists, and probably a lot of scientists too, the curiosity never ends; there's always interest in what happens next, what's around the corner, so for them the idea of life extension and immortality is a positive.

Comment by gurugeorge on ​My recent thoughts on consciousness · 2015-07-06T13:29:56.673Z · LW · GW

All purely sensory qualities of an object are objective, yes. Whatever sensory experience you have of an object is precisely how that object objectively interacts with your sensory system. The perturbation that your being (your physical substance) undergoes upon interaction with that object via the causal sensory channels is precisely the perturbation caused by that object on your physical system, with the particular configuration ("wiring") it has.

There are still subjective perceived qualities of objects, though - e.g. illusory ones (like the Müller-Lyer illusion, etc., but not "illusions" like the famous "bent" stick in water - that's a sensory experience), pleasant, inspiring, etc.

I'm calling "sensory" here the experience (perturbation of one's being) itself, "perception" the interpretation of it (i.e. hypothetical projection of a cause of the perturbation outside the perturbation itself). Of course in doing this I'm "tidying up" what is in ordinary language often mixed (e.g. sometimes we call sensory experiences as I'm calling them "perceptions", and vice-versa). At least, there are these two quite distinct things or processes going on, in reality. There may also be caveats about at what level the brain leaves off sensorily receiving and starts actively interpreting perception, not 100% sure about that.

Comment by gurugeorge on ​My recent thoughts on consciousness · 2015-06-30T12:05:47.107Z · LW · GW

Yes, for that person. Remember, we're not talking about an intrinsic or inherent quality, but an objective quality. Test it however many times you like: the lemon will be sweet to that person - i.e. its sweetness is an objective quality of the lemon for that person.

Or to put it another way, the lemon is consistently "giving off" the same set of causal effects that produce in one person "tart", another person "sweet".

The initial oddness arises precisely because we think "sweetness" must itself be an intrinsic quality of something - because there are several hundred years of bad philosophy telling us there are qualia, which are intrinsically private, intrinsically subjective, etc.

Comment by gurugeorge on ​My recent thoughts on consciousness · 2015-06-27T13:37:39.307Z · LW · GW

Sweetness isn't an intrinsic property of the thing, but it is a relational property of the thing - i.e. the thing's sweetness comes into existence when we (with our particular characteristics) interact with it. And objectively so.

It's not right to mix up "intrinsic" or "inherent" with "objective". They're different things. A property doesn't have to be intrinsic in order to be objective.

So sweetness isn't a property of the mental model either.

It's an objective quality (of a thing) that arises only in its interaction with us. An analogy would be how we're parents to our children, colleagues to our co-workers, lovers to our lovers. We are not parents to our lovers, or intrinsically or inherently parents, but that doesn't mean our parenthood towards our children is solely a property of our children's perception, or that we're not really parents because we're not parents to our lovers.

And I think Dennett would say something like this too; he's very much against "qualia" (at least to a large degree, he does allow some use of the concept, just not the full-on traditional use).

When we imagine, visualize or dream things, it's like the activation of our half of the interaction on its own. The other half that would normally make up a veridical perception isn't there, just our half.

Comment by gurugeorge on The Brain as a Universal Learning Machine · 2015-06-23T14:13:53.229Z · LW · GW

Hmm, but isn't this conflating "learning" in the sense of "learning about the world/nature" with "learning" in the sense of "learning behaviours"? We know the brain can do the latter; it's whether it can do the former that we're interested in, surely?

IOW, it looks like you're saying precisely that the brain is not a ULM (in the sense of a machine that learns about nature), it is rather a machine that approximates a ULM by cobbling together a bunch of evolved and learned behaviours.

It's adept at learning (in the sense of learning reactive behaviours that satisfice conditions) but only proximally adept at learning about the world.

Comment by gurugeorge on The Brain as a Universal Learning Machine · 2015-06-21T22:37:23.536Z · LW · GW

Great stuff, thanks! I'll dig into the article more.

Comment by gurugeorge on The Brain as a Universal Learning Machine · 2015-06-21T20:11:04.620Z · LW · GW

I'm not sure what you mean by gerrymandered.

What I meant is that you have sub-systems dedicated to (and originally evolved to perform) specific concrete tasks, and shifting coalitions of them (or rather shifting coalitions of their abstract core algorithms) are leveraged to work together to approximate a universal learning machine.

IOW any given specific subsystem (e.g. "recognizing a red spot in a patch of green") has some abstract algorithm at its core which is then drawn upon at need by an organizing principle which utilizes it (plus other algorithms drawn from other task-specific brain gadgets) for more universal learning tasks.

That was my sketchy understanding of how it works from evol psych and things like Dennett's books, Pinker, etc.

Furthermore, I thought the rationale of this explanation was that it's hard to see how a universal learning machine can get off the ground evolutionarily (it's going to be energetically expensive, not fast enough, etc.) whereas task-specific gadgets are easier to evolve ("need to know" principle), and it's easier to later get an approximation of a universal machine off the ground on the back of shifting coalitions of them.
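As a toy sketch of that picture (entirely my own illustration - the module names, functions, and coordinator are hypothetical, not anything from the post or from actual neuroscience), the idea is roughly that each gadget's abstract core is a reusable function, and an organizing principle recruits shifting coalitions of those cores for more general tasks:

    # A minimal toy sketch of "shifting coalitions" of task-specific modules.
    # Each gadget evolved for a concrete task, but its core is an abstract,
    # reusable function; a coordinator composes ad-hoc coalitions of those
    # cores for tasks none of the individual modules evolved to handle.
    from typing import Callable, Dict, List

    MODULES: Dict[str, Callable[[float], float]] = {
        "edge_contrast": lambda x: abs(x),                    # e.g. spot-detection
        "smooth":        lambda x: x * 0.5,                   # e.g. motor damping
        "threshold":     lambda x: 1.0 if x > 0.5 else 0.0,   # e.g. fight-or-flight
    }

    def coalition(names: List[str]) -> Callable[[float], float]:
        """Compose the abstract cores of several modules into one pipeline."""
        def run(x: float) -> float:
            for name in names:
                x = MODULES[name](x)
            return x
        return run

    # A novel task is approximated by recruiting a coalition of existing cores:
    novel_task = coalition(["edge_contrast", "smooth", "threshold"])
    print(novel_task(-1.4))  # |-1.4| = 1.4 -> 0.7 -> above 0.5, so prints 1.0

The point of the sketch is only that the "universality" lives in the composition, not in any single module.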

Comment by gurugeorge on The Brain as a Universal Learning Machine · 2015-06-21T15:14:49.774Z · LW · GW

That's a lot to absorb, so I've only skimmed it; please forgive me if responses to the following are already implicit in what you've said.

I thought the point of the modularity hypothesis is that the brain only approximates a universal learning machine and has to be gerrymandered and trained to do so?

If the brain were naturally a universal learner, then surely we wouldn't have to learn universal learning (e.g. we wouldn't have to learn to overcome cognitive biases, Bayesian reasoning wouldn't be a recent discovery, etc.)? The system seems too gappy and glitchy, too full of quick judgement and prejudice, to have been designed as a universal learner from the ground up.

Comment by gurugeorge on In praise of gullibility? · 2015-06-18T19:46:38.987Z · LW · GW

I think there's always been something misleading about the connection between knowledge and belief. In the sense that you're updating a model of the world, yes, "belief" is an ok way of describing what you're updating. But in the sense of "belief" as trust, that's misleading. Whether one trusts one's model or not is irrelevant to its truth or falsity, so any sort of investment one way or another is a side-issue.

IOW, knowledge is not a modification of a psychological state; it's the actual, objective status of an "aperiodic crystal" (sequences of marks, sounds, etc.) as filtered via public habits of use ("interpretation" in more of the mathematical sense) to be representational. IOW, there are three components: the sequence of scratches, the way the sequence of scratches is used (usually involving interaction with the world, implicitly predicting that the world will react a certain way conditional upon certain actions), and the way the world is. None of these involve belief.

So don't worry about belief. Take things lightly. Except on relatively rare mission-critical occasions, you don't need to know, and as Feynman characteristically and wisely pointed out, it's ok not to know.

I'm familiar with that lurching from believing in one thing as the greatest thing since sliced bread to another, but at some point you start to see that emotional roller-coaster as unnecessary.

So it's not gullibility, but lability (labileness?) that's the key. Like the old Zen master story "Is that so?":

"The Zen master Hakuin was praised by his neighbours as one living a pure life. A beautiful Japanese girl whose parents owned a food store lived near him. Suddenly, without any warning, her parents discovered she was with child. This made her parents angry. She would not confess who the man was, but after much harassment at last named Hakuin. In great anger the parent went to the master. "Is that so?" was all he would say.

"After the child was born it was brought to Hakuin. By this time he had lost his reputation, which did not trouble him, but he took very good care of the child. He obtained milk from his neighbours and everything else he needed. A year later the girl-mother could stand it no longer. She told her parents the truth - the real father of the child was a young man who worked in the fishmarket. The mother and father of the girl at once went to Hakuin to ask forgiveness, to apologize at length, and to get the child back. Hakuin was willing. In yielding the child, all he said was: "Is that so?"

Comment by gurugeorge on The lymphatic system is found to connect to the Central Nervous System · 2015-06-07T00:09:56.124Z · LW · GW

I remember reading a book many years ago which talked about the "hormonal bath" in the body actually being part of cognition, such that thinking of the brain/CNS as the functional unit is wrong (it's necessary but not sufficient).

This ties in with the philosophical position of Externalism (I'm very much into the Process Externalism of Riccardo Manzotti). The "thinking unit" is really the whole body - and ultimately the whole world (not quite in the Panpsychist sense, but rather in the sense of any individual instance of cognition being the peak of a pyramid whose roots go all the way through the whole).

I'm as intrigued and hopeful about the possibility of uploading, etc., as the next nerd, but this sort of stuff has always led me to be cautious about the prospects of it.

There may also be a lot more to be discovered about the brain and body in the area of some connection between the fascia and the immune system (cf. the anecdotal connection between things like yoga and "internal" martial arts and health).

Comment by gurugeorge on "Immortal But Damned to Hell on Earth" · 2015-05-30T21:38:57.661Z · LW · GW

Oh, true for the "uploaded prisoner" scenario, I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted - clearly suicide would be possible for them.

But even for the "uploaded prisoner", given sufficient time it would be possible - there's no absolute impermeability to information anywhere, is there? And where there's information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice, via flashing lights, to gnaw the wires :) )

But that reminds me of the problem of trying to isolate an AI once built.

Comment by gurugeorge on "Immortal But Damned to Hell on Earth" · 2015-05-29T20:25:23.472Z · LW · GW

Isn't suicide always an option? When it comes to imagining immortality, I'm like Han Solo, but limits are conceivable and boredom might become insurmountable.

The real question is whether intelligence has a ceiling at all - if not, then even millions of years wouldn't be a problem.

Charlie Brooker's Black Mirror TV show played with the punishment idea - a mind uploaded to a cube experiencing, subjectively, hundreds of years in a virtual kitchen with a virtual garden, as punishment for murder (the murder was committed in the kitchen). In real time, the cube is just casually left on overnight by the "gaoler" for amusement. A hellish scenario.

(In another episode - or it might be the same one? - a version of the same kind of "punishment" - except just a featureless white space for a few years - is also used to "tame" a copy of a person's mind that's being trained to be a boring virtual assistant for the person.)

Comment by gurugeorge on Dissolving philosophy · 2015-05-26T22:50:10.012Z · LW · GW

Yeah, post-Tractatus - Blue and Brown Books, Philosophical Grammar, Philosophical Remarks, Wittgenstein's Lectures by Alice Ambrose (very useful to get an inroad into the later W.), On Certainty, and a book on phil of maths whose title I can't remember.

None of these were published in his lifetime; they're all notes, lecture notes, or nearly-finished books. The only post-Tractatus book that Wittgenstein seems to have been ready to publish was the first part (as currently published) of Philosophical Investigations (the second part isn't really connected; it's someone else's idea of something they thought fit with it).

His later philosophy was WIP at the time of his death, but with the first part of PI we're seeing something that's really new and revolutionary in philosophy. Some of it was taken up and became "ordinary language philosophy" in Oxford (esp. J.L. Austin), but that was really only part of the story. Dan Dennett is, I think, a true heir of the later Wittgenstein (perhaps W. wouldn't have thought so, because his conception of philosophy excluded it from the empirical domain altogether and restricted it to being purely about language and concepts - but he wasn't right about everything, and I hold to the older definition of philosophy :) ).

The Tractatus isn't actually totally repudiated by the PI either, but it's seen as a kind of special case of his later philosophy, as having a more limited scope and usefulness than W. thought when he wrote it.

I'm saying all this as someone who, when I first encountered W.'s later philosophy, agreed with Russell's estimate of it as trivial - I thought W. was just a fey poseur. But over the course of the past few decades of my life, having re-read PI maybe 4 or 5 times with concentration, and returned to pondering it again and again, I've come to gradually realize that he really was the philosophical schizz :)

It's not systematic though - as W. says in the intro to PI, it's something you've got to go over several times, like becoming familiar with a landscape by journeying through it, criss-crossing it several times.

Comment by gurugeorge on Dissolving philosophy · 2015-05-26T16:04:11.510Z · LW · GW

Take a look at the later Wittgenstein - he's basically the Plato (Socrates) killer, as well as the Descartes killer.

This whole way of doing philosophy is misconceived. Meaning is not about what goes on in our brains; language is a precipitate of human social action, but not the result of human design - much like economic value (and also, I think, moral value).

IOW, there is no "the" concept of justice, there are various things (in the real world) called "justice", in the context of various "games" (ways of using words combined with ways of acting), and they are inter-related, but not in an essentialist way (one thing in common), rather in the way a rope is made out of skeins and threads that overlap some of the way, but with no thread or skein going all the way the length of the rope. Or again, cf. his concept of "family resemblance" - there's a family nose, but not everyone in the family has it, while some members of the family have the family cheekbones, but not all have them, etc.

With this understanding, philosophy is free to go back to being pre-Socratic and Bayesian (with some element of Aristotle's systematization). In the long view (which is but the blink of an AI) it was an interesting 2,000-odd year detour, but it was ultimately a dead end (as was the "modern" philosophical turn of Cartesian methodological solipsism).

Comment by gurugeorge on [Link] A Darwinian Response to Sam Harris’s Moral Landscape Challenge · 2015-05-23T16:37:33.965Z · LW · GW

I disagree that any of those three points are unimportant; they're central parts of Harris' argument, and they are part of what has to be refuted.

The idea that there has to be a "rightness criterion" (or an "intrinsic" criterion as per the article) is very much what Harris' view questions, and his position has very little to do with hedonism (hedonism is just a partially-intersecting sub-set of what he's talking about).

To violate Hume's distinction, you don't need to say there's a "higher meaning" in fitnessism, you just need to say that a "rightness criterion" can be based on "what is" (how animals actually behave).

It's like this: Hume's distinction, while valid, is (contrary to his belief and popular belief) irrelevant to morality. A reason has to be given why the "ought" of morality cannot be instrumental all the way down (or rather up and down), why morality has to have an "intrinsic" or "absolute" criterion at all.

Essentially, all that's happened is that people formerly thought that moral behaviour had to be mandated or commanded by a God. God is dead, but people from the time of the Enlightenment on still had a vague feeling that there has to be some kind of "ought" that's not instrumental, that grounds morality - as it were, the ghost of a mandate, a mandate-shaped hole at the root of morality.

What Harris is saying (and I agree) is that no mandate or command is required for morality; there is no other kind of "ought" than the instrumental, there just seems to be. It's the instrumental "ought" that's at work in morality, just as it is in, e.g., technology - from the basic level (which everyone agrees on, i.e. science helps with the nitty-gritty) to the high level (the "if" of the "if .. then", where there's doubt, and where people think there has to be this other kind of mysterious "ought"). The trick is to see how.

Comment by gurugeorge on [Link] A Darwinian Response to Sam Harris’s Moral Landscape Challenge · 2015-05-22T09:51:03.935Z · LW · GW

The argument seems to fail at: "I believe that neither morality nor values at the very core depend on minds being conscious or experiencing pleasure or suffering." The author evidently believes that, but fails to substantiate it. And anyway, Harris' criterion isn't pleasure vs. suffering, but well-being vs. suffering.

The idea that animals can have an ethic seems nonsensical to me. Only conscious, reflective, language-using beings can have an ethic or morality, because an ethic is by definition a self-consciously-held code of conduct, and a morality is by definition a self-consciously-shared code of conduct, which can obviously only be held, read, understood and shared by self-conscious language-users.

Criticisms of Harris usually boil down to, "But what about Hume's is/ought, eh, EH??!!" Oddly, this article seems to commit that very fallacy: obviously, just because genetically-mandated behaviour is, doesn't mean it ought to be. The error is as old as Social Darwinism, and it's surprising to find someone falling for it in this day and age.

Comment by gurugeorge on Which ideas from LW would you most like to see spread? · 2015-05-20T00:03:58.936Z · LW · GW

I may be mad, but I actually think of Popper more or less in the same breath as Bayesianism - modus tollens and reductio (the main methods of Popperian "critical rationalism"; CR basically says that the reductio is the model of all successful empirical reasoning) just seem to me to be special cases of Bayesianism. The idea with both (as I see it) is that we start where we are and get to the truth by shaving away untruths - by testing our ideas to destruction and going with what's left standing, because we've got nothing better left standing. That seems to me the basic gist of both philosophies.
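To spell out the "special case" claim (my own worked illustration, not from the original comment): if a hypothesis H strictly entails a prediction E, so that P(E \mid H) = 1 and hence P(\neg E \mid H) = 0, then observing \neg E gives

    P(H \mid \neg E) = \frac{P(\neg E \mid H) \, P(H)}{P(\neg E)} = 0

which is just modus tollens. With P(E \mid H) < 1, the same update merely weakens H by degree instead of refuting it outright - falsification as the limiting case of Bayesian updating.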

I'm also fond of the idea that knowledge is always conjecture, and that belief has nothing to do with knowledge (and knowledge can occasionally be accidental). Knowledge is just the "aperiodic crystals" of language in its manifest forms (ink on paper, sounds out of a mouth, coding, or whatever), which, by convention ("language games"), represent or model reality either accurately or not, regardless of psychological state of belief.

Furthermore, while I'm on my high horse: Bayesianism is conjectural deductive reasoning - neither "subjective" nor "objective" approaches have anything to do with it. It doesn't "update beliefs"; it updates, modifies, and discards conjectures.

IOW, you take a punt, a bet, a conjecture (none of which has anything to do with belief) at how things are, objectively. The punt is itself in the form of a "language crystal", objectively out there in reality in some embodied form - something embedded in reality that conventionally models reality, as above. Again, nothing to do with belief.

In this context, truth and objectivity (in another sense) are ideals - things we're aiming for. It may be the case that there is no true proposition, but when we say we have a probably true proposition, what that means is that we have a ranking of conjectures against each other, in a ratio, and the most probable is the most provable (the one that can be best corroborated - in the Popperian sense - by evidence). That's all.

Comment by gurugeorge on When does technological enhancement feel natural and acceptable? · 2015-05-03T23:45:59.045Z · LW · GW

I think that it's acceptable when it works.

What I mean is, a lot of the transhumanist stuff is predicated on these things working properly. But we know how badly wrong computers can sometimes go, and that's in everyone's experience, so much so that "switch it off and switch it on again" is part of common, everyday lore now.

Imagine being so intimately connected with a computerized thingummybob that part of your conscious processing, what makes you you, is tied up with it - and it's prone to crashing. Or hacking, or any of the other ills that can befall computery things. Potential horrorshow.

Similar for bio-enhancements, etc. For example, physical enhancements like steroids, but safer and easier to use, are still a long way off, and until they arrive, people just aren't going to go for it. We really only have a very sketchy understanding of how the body and brain work at the moment. It's developing, but it's still early days.

So ultimately, I think for the foreseeable future people are still going to go for things that are separable - tools the natural organic body can use and put away, and can easily separate itself from, at will, if they go wrong.

They're not going to go for any more intimate connections until such things work much, much better than anything we've got now.

And I think it's actually debatable whether that's ever going to happen. It may be the case that there are limits on complexity, and that the "messy" quality of organics is actually the best way of having extremely complex thinking, moving objects - or that there's a trade-off between having stupid things that do massive processing well, and clever things that do simple processing well, and you can't have both in one physical (information processing) entity (but the latter can use the former as tools).

Another angle on this would be to look at the rickety nature of high IQ and/or genius - it's a toss-up whether a hyper-intelligent being is going to be of any use at all, or just go off the rails as soon as it's booted up. It's probably the same for "AI".

I don't think any of this is insurmountable, but I think people are massively underestimating the time it's going to take to get there; and we'll already have naturally evolved into quite different beings by that time (maybe as different as early hominids are from us), so by then this particular question is moot (as there will have been co-evolution with the developing tech anyway, only it will have been very gradual).

Comment by gurugeorge on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" · 2013-12-30T16:38:33.599Z · LW · GW

I wonder whether AGI won't come until AI is social - i.e. the mistake is to think of intelligence as a property of an individual machine, whereas it's more a property of a transducer (of a sufficient level of complexity) embedded in a network. That is so even when the transducer is working relatively independently of the network.

IOW, the tools and materials of intelligence are a social product, even if an individual transducer that's relatively independent works with them in an individual act of intelligence. When I say "product" I mean that the meaning itself is distributed amongst the network and doesn't reside in any individual.

No AGI until social AI.

Comment by gurugeorge on [Link] Technology will destroy human nature · 2012-10-10T18:12:46.166Z · LW · GW

Fascinating topic, and a topic that's going to loom larger as we progress. I've just registered in order to join in with this discussion (and hopefully many more at this wonderful site). Hi everybody! :)

Surely an intelligent entity will understand the necessity for genetic/memetic variety in the face of the unforeseen? That's absolutely basic. The long-term, universal goal is always power (to realize whatever); power requires comprehensive understanding; comprehensive understanding requires sufficient "generate" for some "tests" to hit the mark.

The question then, I guess is, can we sort of "drift" into being a mindless monoculture of replicators?

Articles like this, or s-f in general, or even just thought experiments in general (again, on the "generate" side of the universal process), show that we are unlikely to, since this is already a warning of potential dangers.