Posts

A trick for Safer GPT-N 2020-08-23T00:39:31.197Z
What should an Einstein-like figure in Machine Learning do? 2020-08-05T23:52:14.539Z

Comments

Comment by Razied on Decaeneus's Shortform · 2024-03-17T17:38:55.518Z · LW · GW

Unfortunately the entire complexity has just been pushed one level down into the definition of "simple". The L2 norm can't really be what we mean by simple: scaling the weights in one layer by $A$ and the weights in the next layer by $1/A$ leaves the output of the network invariant (assuming ReLU activations), yet you can obtain an arbitrarily high L2 norm just by choosing $A$ large enough.
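
A minimal numpy sketch of that invariance (my illustration, not from the original comment):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)

def net(W1, W2, x):
    # two linear layers with a ReLU in between
    return W2 @ np.maximum(W1 @ x, 0)

A = 1000.0
# For A > 0, ReLU commutes with scaling: relu(A*z) = A*relu(z),
# so (W2/A) @ relu(A*W1 @ x) = W2 @ relu(W1 @ x).
print(np.allclose(net(W1, W2, x), net(A * W1, W2 / A, x)))  # True: same function
print(np.sum(W1**2) + np.sum(W2**2))              # original L2 norm
print(np.sum((A * W1)**2) + np.sum((W2 / A)**2))  # ~A^2 times larger
```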

Comment by Razied on Elon files grave charges against OpenAI · 2024-03-01T23:00:04.949Z · LW · GW

Unfortunately, if OpenAI the company is destroyed, all that happens is that all of its employees get hired by Microsoft, the lettering on the office building changes, and sama's title changes from CEO to whatever high-level manager position he'll occupy within Microsoft.

Comment by Razied on Goal-Completeness is like Turing-Completeness for AGI · 2024-02-19T19:01:57.454Z · LW · GW

Hmm, but here the set of possible world states would be the domain of the function we're optimising, not the function itself. Like, the No-Free-Lunch theorem states (from Wikipedia):

> Theorem 1: Given a finite set $V$ and a finite set $S$ of real numbers, assume that $f : V \to S$ is chosen at random according to uniform distribution on the set $S^V$ of all possible functions from $V$ to $S$. For the problem of optimizing $f$ over the set $V$, then no algorithm performs better than blind search.

Here $V$ is the set of possible world arrangements, which is admittedly much smaller than all possible data structures, but the theorem still holds because we're averaging over all possible value functions on this set of worlds, a set which is not physically restricted by anything.

I'd be very interested if you can find Byrnes' writeup.

Comment by Razied on What experiment settles the Gary Marcus vs Geoffrey Hinton debate? · 2024-02-14T19:21:02.305Z · LW · GW

Obviously LLMs memorize some things; the easy example is that the pretraining dataset of GPT-4 probably contained lots of cryptographically hashed strings which are impossible to infer from the overall patterns of language. Predicting those accurately absolutely requires memorization; there's literally no other way, short of the LLM inverting the hash function itself, which is believed to be computationally infeasible. Then there are in-between things like Barack Obama's age, which might be possible to infer from other language (a president is probably not 10 yrs old or 230), but within the plausible range, you also just need to memorize it.

Comment by Razied on Evolution is an observation, not a process · 2024-02-06T18:28:07.788Z · LW · GW

> There is no optimization pressure from “evolution” at all. Evolution isn’t tending toward anything. Thinking otherwise is an illusion.

Can you think of any physical process at all where you'd say that there is in fact optimization pressure? Of course at the base layer it's all just quantum fields changing under unitary evolution with a given Hamiltonian, but you can still identify subparts of the system that are isomorphic with a process we'd call "optimization". Evolution doesn't have a single time-independent objective it's optimizing, but it does seem to me that it's basically doing optimization on a slowly time-changing objective.

Comment by Razied on Childhood and Education Roundup #4 · 2024-01-30T15:44:16.178Z · LW · GW

> Why would you want to take such a child and force them to ‘emotionally develop’ with dumber children their own age?

Because you primarily make friends in school with people in your grade, and if you skip too many grades, the physical difference between the gifted kid and other kids will prevent them from building a social circle based on physical play, and probably make any sort of dating much harder.

Comment by Razied on Is a random box of gas predictable after 20 seconds? · 2024-01-25T00:03:03.522Z · LW · GW

Predicting the ratio at t=20s is hopeless. The only sort of thing you can predict is the variance in the ratio over time: the ratio as a function of time is $r(t) = \frac{1}{2} + \epsilon(t)$, where $\epsilon(t) \sim 1/\sqrt{N}$. Here the large number of atoms lets you predict $\epsilon$, but the exact number after 20 seconds is chaotic. To get an exact answer for how much initial perturbation still leads to a predictable state, you'd need to compute the Lyapunov exponents of an interacting classical gas system, and I haven't been able to find a paper that does this within 2 min of searching. (Note that if the atoms are non-interacting the problem stops being chaotic, of course, since they're just bouncing around on the walls of the box)
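
A quick Monte Carlo sanity check of that scaling (my sketch, assuming the ratio in question is the fraction of atoms in one half of the box, independent at equilibrium):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6  # number of atoms (a real box has ~10^23, shrinking the spread further)
ratios = rng.binomial(N, 0.5, size=1000) / N  # left-half fraction, 1000 snapshots
print(ratios.std())          # ~5e-4: the predictable spread in the ratio
print(1 / (2 * np.sqrt(N)))  # matches the analytic std of a binomial fraction
```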

Comment by Razied on Goal-Completeness is like Turing-Completeness for AGI · 2023-12-20T23:45:26.954Z · LW · GW

I'll try to say the point some other way: you define "goal-complete" in the following way:

> By way of definition: An AI whose input is an arbitrary goal, which outputs actions to effectively steer the future toward that goal, is goal-complete.

Suppose you give me a specification of a goal as a function $f : S \to \{0, 1\}$ from a state space to a binary output. Is the AI which just tries out uniformly random actions in perpetuity until it hits one of the goal states "goal-complete"? After all, no matter the goal specification this AI will eventually hit it, though it might take a very long time.
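
A sketch of that trivial agent (all names here are hypothetical, purely to make the argument concrete):

```python
import random

def random_search_agent(goal, actions, state, transition, max_steps=10**6):
    """Take uniformly random actions until the goal predicate is satisfied.

    goal: function from states to bool (the binary goal specification above).
    transition: hypothetical environment dynamics, (state, action) -> state.
    """
    for _ in range(max_steps):
        if goal(state):
            return state  # found a goal state
        state = transition(state, random.choice(actions))
    return None  # never found one within the budget
```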

I think the interesting thing you're trying to point at is contained in what it means to "effectively" steer the future, not in goal-arbitrariness.

Comment by Razied on Goal-Completeness is like Turing-Completeness for AGI · 2023-12-20T22:49:36.566Z · LW · GW

> E.g. I claim humans are goal-complete General Intelligences because you can give us any goal-specification and we'll very often be able to steer the future closer toward it.

If you're thinking of "goals" as easily specified natural-language things, then I agree with you, but the point is that Turing-completeness is a rigorously defined concept, and if you want the same level of rigour for "goal-completeness", then most goals will be of the form "atom 1 is at location x, atom 2 is at location y, ..." for all atoms in the universe. And when averaged across all such goals, literally just acting randomly performs as well as a human or a monkey trying their best to achieve the goal.

Comment by Razied on Goal-Completeness is like Turing-Completeness for AGI · 2023-12-20T22:35:23.443Z · LW · GW

Goal-completeness doesn't make much sense as a rigorous concept because of No-Free-Lunch theorems in optimisation. A goal is essentially a specification of a function to optimise, and all optimisation algorithms perform equally well (or rather poorly) when averaged across all functions.

There is no system that can take in an arbitrary goal specification (which is, say, a subset of the state space of the universe) and achieve that goal on average better than any other such system. My stupid random action generator is equally as bad as the superintelligence when averaged across all goals. Most goals are incredibly noisy, the ones that we care about form a tiny subset of the space of all goals, and any progress in AI we make is really about biasing our models to be good on the goals we care about.
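
A tiny brute-force illustration of the averaging claim (my sketch; the domain is small enough to enumerate every possible goal function):

```python
import itertools, statistics

X = range(4)  # tiny "state space"
S = range(3)  # possible values of the objective
all_fs = list(itertools.product(S, repeat=len(X)))  # all 3^4 functions X -> S

def best_after_k(f, order, k=2):
    # best objective value found after querying the first k points of 'order'
    return max(f[x] for x in order[:k])

a = statistics.mean(best_after_k(f, [0, 1, 2, 3]) for f in all_fs)
b = statistics.mean(best_after_k(f, [3, 2, 1, 0]) for f in all_fs)
print(a, b)  # identical: averaged over all goal functions, no strategy wins
```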

Comment by Razied on Monthly Roundup #13: December 2023 · 2023-12-19T21:12:50.965Z · LW · GW

Zvi, you continue to be literally the best news aggregator on the planet for the stuff that I actually care about. Really, thanks a lot for doing this, it's incredibly valuable to me.

Comment by Razied on The likely first longevity drug is based on sketchy science. This is bad for science and bad for longevity. · 2023-12-12T16:37:00.983Z · LW · GW

Wouldn't lowering IGF-1 also lead to really shitty quality of life from lower muscle mass and much longer recovery times from injury?

Comment by Razied on Why Yudkowsky is wrong about "covalently bonded equivalents of biology" · 2023-12-06T15:53:16.628Z · LW · GW

The proteins themselves are primarily covalent, but a quick google search says that the forces in the lipid layer surrounding cells are primarily non-covalent, and the forces between cells seem also non-covalent. Aren't those forces the ones we should be worrying about? 

It seems like Eliezer is saying "the human body is a sand-castle, what if we made it a pure crystal block?", and you're responding with "but individual grains of sand are very strong!"

Comment by Razied on SIA Is Just Being a Bayesian About the Fact That One Exists · 2023-11-14T23:42:41.108Z · LW · GW

> But perhaps the bigger reason is that I find SIA intuitively extremely obvious. It’s just what you get when you apply Bayesian reasoning to the fact that you exist.

Correct, except for the fact that you're failing to consider the possibility that you might not exist at all...

My entire uncertainty in anthropic reasoning is bound up in the degree to which an "observer" is at all a coherent concept.

Comment by Razied on How can the world handle the HAMAS situation? · 2023-10-13T21:52:09.600Z · LW · GW

> And my guess is that is how Hamas see and bill themselves.

And your guess would be completely, hopelessly wrong. There is an actual document called "The Covenant of Hamas", written in 1988 and updated in 2017, which you can read here. It starts with:

> Praise be to Allah, the Lord of all worlds. May the peace and blessings of Allah be upon Muhammad, the Master of Messengers and the Leader of the mujahidin, and upon his household and all his companions.

... so, uh, not a good start for the "not religious" thing. It continues:

> 1. The Islamic Resistance Movement “Hamas” is a Palestinian Islamic national liberation and resistance movement. Its goal is to liberate Palestine and confront the Zionist project. Its frame of reference is Islam, which determines its principles, objectives and means.

In the document they really seem to want to clarify at every opportunity that yes, indeed they are religious at the most basic level, and that religion impacts every single aspect of their decision-making. I strongly recommend that everyone here read the whole thing, just to see what it really means to take your religion seriously.
 
The 2017 version has been cleaned up, but in the 1988 covenant you also had this gem:

> The Day of Judgment will not come about until Moslems fight Jews and kill them. Then, the Jews will hide behind rocks and trees, and the rocks and trees will cry out: 'O Moslem, there is a Jew hiding behind me, come and kill him.' (Article 7) 

> The HAMAS regards itself the spearhead and the vanguard of the circle of struggle against World Zionism... Islamic groups all over the Arab world should also do the same, since they are best equipped for their future role in the fight against the warmongering Jews.

Comment by Razied on How can the world handle the HAMAS situation? · 2023-10-13T15:24:55.717Z · LW · GW

> It is important that Gazans won't feel like their culture is being erased.

 

> A new education curriculum is developed which fuses western education, progressive values and Muslim tradition while discouraging political violence.

These two things are incompatible. Their culture is the entire problem. To get a sense of the sheer vastness of the gap, consider the fact that Arabs read on average 6 pages per year. It would take a superintelligence to somehow convince the palestinians to embrace western thought and values while not feeling like their culture is being erased.

Comment by Razied on NYT on the Manifest forecasting conference · 2023-10-10T01:55:40.091Z · LW · GW

Oh, true! I was going to reply that since probability is just a function of a physical system, and the physical system is continuous, probability is continuous too... but if you change an integer variable in C from 35 to 5343 or whatever, there's no real sense in which the variable goes through all intermediate values, even if the laws of physics are continuous.

Comment by Razied on NYT on the Manifest forecasting conference · 2023-10-09T23:17:07.846Z · LW · GW

If he's ever attended an event which started out with less than a 28% chance of orgy, which then went on to have an orgy, then that statement is false by the Intermediate Value Theorem, since there would have been an instant in time where the probability of the event crossed 28%.

Comment by Razied on Petrov Day [Spoiler Warning] · 2023-09-28T00:40:32.110Z · LW · GW

> The most basic rationalist precept is to not forcibly impose your values onto another mind.

What? How does that make any sense at all? The most basic precept of rationality is to take actions which achieve future world states that rank highly under your preference ordering. Being less wrong, more right, being bayesian, saving the world, not imposing your values on others, etc. are all deductions that follow from that most basic principle: Act and Think Such That You Win.

Comment by Razied on Bariatric surgery seems like a no-brainer for most morbidly obese people · 2023-09-27T19:11:07.669Z · LW · GW

Wait, do LessWrongers not know about semaglutide and tirzepatide yet? Why would anyone do something as extreme as bariatric surgery when tirzepatide patients lose pretty much the same amount of weight after a year as with the surgery?

Comment by Razied on Barbieheimer: Across the Dead Reckoning · 2023-08-01T19:04:38.626Z · LW · GW

> But if you are right that you only respond to a limited set of story types, do you therefore aspire to opening yourself to different ones in future, or is your conclusion that you just want to stick to films with 'man becomes strong' character arcs?

Not especially, for the same reason that I don't plan on starting to eat 90% dark chocolate to learn to like it, even if other people like it (and I can even appreciate that it has a few health benefits). I certainly am not saying that only movies that appeal to me be made, I'm happy that Barbie exists and that other people like it, but I'll keep reading my male-protagonist progression fantasies on RoyalRoad.

> Greta Gerwig obviously thinks so, when she says: "I think equally men have held themselves to just outrageous standards that no one can meet. And they have their own set of contradictions where they’re walking a tightrope. I think that’s something that’s universal."

I have a profound sense of disgust and recoil when someone tells me to lower my standards about myself. Whenever I hear something like "it's ok, you don't need to improve, just be yourself, you're enough", I react strongly, because That Way Lay Weakness. I don't have problems valuing myself, and I'm very good at appreciating my achievements, so that self-acceptance message is generally not properly aimed at me, it would be an overcorrection if I took that message even more to heart than I do right now. 

Comment by Razied on Barbieheimer: Across the Dead Reckoning · 2023-08-01T18:22:46.915Z · LW · GW

I watched Barbie and absolutely hated it. Though it did provide some value to me after I spent some time thinking about why precisely I hated it. Barbie really showed me the difference between the archetypal story that appeals to males and the female equivalent, and how merely hitting that archetypal story is enough to make a movie enjoyable for either men or women.

The plot of the basic male-appealing story is "Man is weak. Man works hard with clear goal. Man becomes strong". I think men feel this basic archetypal story much more strongly than women, so that even an otherwise horrible story can be entertaining if it hits that particular chord well enough (evidence: most isekai stories), if the man is weak enough at the beginning, or works especially hard. I'm not exactly clear on what the equivalent story is for women, but it's something like "Woman thinks she's not good enough, but she needs to realise that she is already perfect". And the Barbie movie really hits on that note, which is why I think the women in my life seemed to enjoy it. But that archetype just doesn't resonate with me at all.

The apparent end-point for the Kens in the movie is that they "find themselves". This was (to me) a clear misunderstanding by the female authors of what the masculine instinct is like. Men don't "find themselves", they decide who they want to be and work towards climbing out of their pitiful initial states. (There was also the weird Ken obsession with horses, which are mostly a female-only thing)

Comment by Razied on Open Thread - July 2023 · 2023-07-26T23:41:59.097Z · LW · GW

I'm fairly sure there are architectures where each layer is a linear function of the concatenated activations of all previous layers, though I can't seem to find the reference right now. If you add possible sparsity to that, then I think you get a fully general DAG.
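
A minimal sketch of that connectivity pattern (this resembles DenseNet-style dense connections, though I can't confirm that's the architecture being half-remembered):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_forward(x, widths):
    # Each layer is a linear map (plus ReLU) of the concatenation of the
    # input and all previous layers' activations; weights are random here
    # since this only illustrates the connectivity pattern.
    activations = [x]
    for width in widths:
        inp = np.concatenate(activations)
        W = rng.normal(size=(width, inp.size))
        activations.append(np.maximum(W @ inp, 0))
    return activations[-1]

print(dense_forward(rng.normal(size=8), [16, 16, 4]).shape)  # (4,)
```

Masking entries of each weight matrix to zero then gives exactly the sparsity needed for a fully general DAG.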

Comment by Razied on The First Room-Temperature Ambient-Pressure Superconductor · 2023-07-26T14:44:53.840Z · LW · GW

Their paper for the sample preparation (here) has a trademark sign next to the "LK-99" name, which suggests they've trademarked it... a strong sign that the authors actually believe in their stuff.

Comment by Razied on Please speak unpredictably · 2023-07-23T23:41:17.764Z · LW · GW

There are a whole bunch of ways that trying to optimise for unpredictability is not a good idea:

  1. Most often technical discussions are not just exposition dumps, they're a part of the creative process itself. Me telling you an idea is an essential part of my coming up with the idea. I essentially don't know where I'm going before I get there, so it's impossible for me to optimise for unpredictability on your end.
  2. This ignores a whoooole bunch of status-effects and other goals of human conversation. The point of conversation is not solely to transmit information. In real life information-transfer is a minuscule part of most conversations: try telling your girlfriend to "speak unpredictably" when she gets home and wants to vent to you about her boss.
  3. People often don't say what they mean. The process of translating a mental idea into words on-the-fly often results in sequences of words that are very bad at communicating the idea. The only solution to this is to be redundant, repeat the idea multiple times in different ways until you hit one that your interlocutor understands.

Humans are not Vulcans, and we shouldn't try to optimise human communication the way we'd optimise a network protocol. 

Comment by Razied on Open Thread - July 2023 · 2023-07-22T14:02:11.580Z · LW · GW

I think you might want to look at the literature on "sparse neural networks", which is the right search term for what you mean here.

Comment by Razied on Meta announces Llama 2; "open sources" it for commercial use · 2023-07-19T02:32:29.153Z · LW · GW

> I'm really confused about how anybody thinks they can "license" these models. They're obviously not works of authorship.

I'm confused why you're confused: if I write a computer program that generates an artifact that is useful to other people, obviously the artifact should be considered a part of the program itself, and therefore subject to licensing just like the generating program. If I write a program to procedurally generate interesting Minecraft maps, should I not be able to license the maps, just because there's one extra step of authorship between me and them?

Comment by Razied on Elon Musk announces xAI · 2023-07-13T14:30:30.833Z · LW · GW

The word "curiosity" has a fairly well-defined meaning in the Reinforcement Learning literature (see for instance this paper). There are vast numbers of papers that try to come up with ways to give an agent intrinsic rewards that map onto the human understanding of "curiosity", and almost all of them are some form of "go towards states you haven't seen before". The predictable consequence of prioritising states you haven't seen before is that you will want to change the state of the universe very very quickly.

Comment by Razied on Elon Musk announces xAI · 2023-07-13T14:23:10.306Z · LW · GW

Not too sure about the downvotes either, but I'm curious how the last sentence misses the point? Are you aware of a formal definition of "interesting" or "curiosity" that isn't based on novelty-seeking? 

Comment by Razied on Elon Musk announces xAI · 2023-07-13T13:07:21.549Z · LW · GW

> According to reports xAI will seek to create a "maximally curious" AI, and this also seems to be the main new idea how to solve safety, with Musk explaining: "If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint," ... "I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity."

Is Musk just way less intelligent than I thought? He still seems to have no clue at all about the actual safety problem. Anyone thinking clearly should figure out that this is a horrible idea within at most 5 minutes of thinking.

Obviously pure curiosity is a horrible objective to give to a superAI. "Curiosity" as currently defined in the RL literature is really something more like "novelty-seeking", and in the limit this will cause the AI to keep rearranging the universe into configurations it hasn't seen before, as fast as it possibly can... 

Comment by Razied on Monthly Roundup #8: July 2023 · 2023-07-03T15:01:24.507Z · LW · GW

> A theory of the popularity of anime.

> Much like there have been ten thousand reskins of Harry Potter I’ve been waiting for more central examples of English-language cultural products to take that story archetype and just run with it. There is clearly a demand.

Well then Rejoice! The entire genre of Progression Fantasy is what you desire, and you need only browse the Best Of RoyalRoad to see lots of English-language stories that scratch that particular itch. In fact, I find these English stories immensely superior to anything in anime or manga.

A particularly good example is the recently-finished 12-book series Cradle, whose books have ranked #1 in Audible's fantasy category over the past few years.

Comment by Razied on Nature: "Stop talking about tomorrow’s AI doomsday when AI poses risks today" · 2023-06-28T15:02:13.746Z · LW · GW

> Overall, a headline that seems counterproductive and needlessly divisive.

Probably the understatement of the decade: this article is literally an "order" from Official Authority to stop talking about what I believe is literally the most important thing in the world. I guess this is not literally the headline that would maximally make me lose respect for Nature... but it's pretty close.

This article is a pure appeal to authority. It contains no arguments at all, it only exists as a social signal that Respectable Scientists should steer away from talk of AI existential risk. 

The AI risk debate is no longer about actual arguments; it's about slinging around political capital and scientific prestige. It has become political in nature.

Comment by Razied on AI #17: The Litany · 2023-06-26T00:41:08.848Z · LW · GW

That's not a math or physics paper, and it includes a bit more "handholding" in the form of an explicit database than would really make me update. The style of scientific papers is obviously very easy for current LLMs to copy; what I'm trying to get at is that if LLMs can start to make genuinely novel contributions at a slightly below-human level and learn from the mediocre articles they write, pure volume of papers can make up for quality.

Comment by Razied on Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell? · 2023-06-25T20:24:43.023Z · LW · GW
  • "This has been killing people!"
  • "Yes, but it might kill all people!"
  • "Yes, but it's killing people!"
  • "Of course, sure, whatever, it's killing people, but it might kill all people!"


But this isn't the actual back-and-forth; the third point should be "no it won't, you're distracting from the people currently being killed!". This is all a game to subtly beg the question. If AI is an existential threat, all current mundane threats like misinformation, job loss, AI bias, etc. are rounding errors to the total harm; the only situation where you'd talk about them is if you've already granted that the existential risks don't exist.

If a large comet is heading towards Earth, and some group thinks it won't actually hit Earth, but merely pass harmlessly close by, and they start talking about the sun's reflections off the comet making life difficult for people with sensitive eyes... they are trying to get you to assume the conclusion.

Comment by Razied on AI #17: The Litany · 2023-06-24T01:39:31.823Z · LW · GW

I don't think we need superhuman capability here for stuff to get crazy, pure volume of papers could substitute for that. If you can write a mediocre but logically correct paper with $50 of compute instead of with $10k of graduate student salary, that accelerates the pace of progress by a factor of 200, which seems enough for me to enable a whole bunch of other advances which will feed into AI research and make the models even better.

Comment by Razied on AI #17: The Litany · 2023-06-24T00:23:40.373Z · LW · GW

If we get to that point of AI capabilities, we will likely be able to make 50 years of scientific progress in a matter of months for domains which are not too constrained by physical experimentation (just run more compute for LLMs), and I'd expect AI safety to be one of those. So either we die quickly thereafter, or we've solved AI safety. Getting LLMs to do scientific progress basically telescopes the future.

Comment by Razied on AI #17: The Litany · 2023-06-23T13:47:26.622Z · LW · GW

Fair point, "non-trivial" is too subjective, the intuition that I meant to convey was that if we get to the point where LLMs can do the sort of pure-thinking research in math and physics at a level where the papers build on top of one another in a coherent way, then I'd expect us to be close to the end. 

Said another way, if theoretical physicists and mathematicians get automated, then we ought to be fairly close to the end. If in addition to that the physical research itself gets automated, such that LLMs write their own code to do experiments (or run the robotic arms that manipulate real stuff) and publish the results, then we're *really* close to the end. 

Comment by Razied on AI #17: The Litany · 2023-06-23T02:02:05.813Z · LW · GW

> If the question is ‘what’s one experiment that would drop your p(doom) to under 1%?’ then I can’t think of such an experiment that would provide that many bits of data, without also being one where getting the good news seems absurd or being super dangerous.

Not quite an experiment, but to give an explicit test: if we get to the point where an AI can write non-trivial scientific papers in physics and math, and we then aren't all dead within 6 months, I'll be convinced that p(doom) < 0.01, and that something was very deeply wrong with my model of the world.

Comment by Razied on EY in the New York Times · 2023-06-10T13:11:28.646Z · LW · GW

> Researchers and industry leaders have warned that A.I. could pose an existential risk to humanity. But they’ve been light on the details.
> ...
> The letter was the latest in a series of ominous warnings about A.I. that have been notably light on details.

Has Cade Metz bothered to perhaps read a bit more on AI risk than the one-sentence statement in the safe.ai open letter? To my eye this article is full of sneering and dismissive insinuations about the real risk. It's like the author is only writing this article in the most grudging way possible, because at this point the prestige of the people talking about AI risk has gotten so large that he can't quite so easily dismiss it without losing status himself.

I think rationalists need to snap out of the "senpai noticed me" mode with respect to the NYT, and actually look at the pathetic level its AI articles operate on. Is quoting the oldest, most famous and most misunderstood meme of AI safety really the level you ought to expect from what is ostensibly the peak of journalism in the western world?

Comment by Razied on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-29T19:59:47.889Z · LW · GW

Anyone writing an effortful response to the original post should be presumed to have good faith to some reasonable degree, and any point that you think they ignored was probably either misunderstood, or its relevance is not obvious to the author of the comment. By responding in a harsh way to what might be a non-obvious misunderstanding, you're essentially adopting the conflict side of the "mistake vs conflict theory" dichotomy.

Any comments which aren't effortful and are easily seen to have an answer in the original post will probably just be downvoted anyway, and the proper response from OP is to just not respond at all. 

To be clear, I think that the community here is probably kind enough so that these aren't big problems, but it still kind of irks me to make it slightly easier to be unkind.

Comment by Razied on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-28T12:54:56.467Z · LW · GW

Hmm, some of these reacts seem kind of passive-aggressive to me, the "Not planning to respond" and "I already addressed this" in particular just close off conversational doors in a fairly rude way. How do you respond to someone saying "I already addressed this" to a long paragraph of yours in such a low-effort way? It's like texting "ok" to a long detailed message.

Comment by Razied on Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI · 2023-05-27T12:14:38.773Z · LW · GW

> If you believe this, and you have not studied quantum chemistry, I invite you to consider as to how you could possibly be sure about this. This is a mathematical question. There is a hard, mathematical limit to the accuracy that can be achieved in finite time.

Doesn't the existence of AlphaFold basically invalidate this? The exact same problems you describe for band-gap computation exist for protein folding: the underlying true equations that need to be solved are monstrously complicated in both cases, and previous approximate models made by humans aren't that accurate in both cases... yet this didn't prevent AlphaFold from destroying previous attempts made by humans by just using a lot of protein structure data and the magic generalisation power of deep networks. This tells me that there's a lot of performance to be gained in clever approximations to quantum mechanical problems.

Comment by Razied on Twiblings, four-parent babies and other reproductive technology · 2023-05-22T11:56:36.748Z · LW · GW

I could ask just the same why you'd identify so strongly with the mere patterns of neural activation that make up the memes in the child's mind. This preference of mine is getting close to the bedrock of my preference ordering: I want my child to share my genes because that's just kind of what I want, and I don't know how to explain that in terms of any more fundamental desire of mine.

But like I said, I'd be fine with CRISPR to change a small fraction of the genes which have an out-sized impact on success, what I don't want is to change (or worse, take from someone else) the large number of genes which don't particularly influence success or intelligence, but which make me who I am.

Comment by Razied on Twiblings, four-parent babies and other reproductive technology · 2023-05-20T19:01:47.634Z · LW · GW

> But wait! Why stop with two parents? Couldn’t we get chromosomes from the embryos of more than one couple?

I'm very, very interested in embryo/chromosomal selection of this kind for my future children... but there is absolutely no chance, no fucking chance at all, that I'd be okay with using the DNA of anyone beyond my spouse and me; the idea repulses me on an incredibly deep level. I want my children to look like me, and it's very important to me that a plurality of their genes be mine. I'm okay with doing CRISPR to change specific genes in addition to the chromosomal selection, so they wouldn't be 50% my genes, maybe a bit less, but if you can point to some specific third human and say "yeah, an equal fraction of genes came from this one other dude", I'm out.

Comment by Razied on Most people should probably feel safe most of the time · 2023-05-09T11:03:46.985Z · LW · GW

> There is an idea that I’ve sometimes heard around rationalist and EA circles, that goes something like “you shouldn’t ever feel safe, because nobody is actually ever safe”.

Wait, really?! If this is true then I had severely overestimated the sanity minimum of rationalists. The objections in your post are all true, of course, but they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement...

Comment by Razied on Yoshua Bengio argues for tool-AI and to ban "executive-AI" · 2023-05-09T10:56:48.424Z · LW · GW

The main advantage of Tool-AIs is that they can be used to solve alignment for more agentic approaches. You don't need to prevent people from building agentic AI for all time, just in the interim period while we have Tool AI but don't yet have alignment.

Comment by Razied on Where is all this evidence of UFOs? · 2023-05-01T14:43:25.509Z · LW · GW

The way to actually make the universe colder and preserve all the energy currently going to waste in stars is to dump all the matter in your galaxy into two giant spinning black holes, and then extract energy via the Penrose process. There's no way that a civilisation would just say "oops, we want to use reversible computing, I guess we now have no use for all those stars and giant gas clouds, let's just leave them be as they are now..."
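
For reference, the textbook bound on Penrose-process extraction from a Kerr black hole (a standard result added for context, in units with $G = c = 1$):

```latex
\[
  M_{\mathrm{irr}} = M\sqrt{\tfrac{1}{2}\Bigl(1+\sqrt{1-a_*^2}\Bigr)}, \qquad
  \frac{E_{\mathrm{extractable}}}{M} = 1-\frac{M_{\mathrm{irr}}}{M}
  \;\le\; 1-\frac{1}{\sqrt{2}} \approx 29\%,
\]
```

where $a_* = J/M^2$ is the spin parameter; the maximum is attained for an extremal ($a_* = 1$) hole, far beyond the fraction of a percent of rest mass that stellar fusion ever releases.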

Comment by Razied on Where is all this evidence of UFOs? · 2023-05-01T13:51:25.044Z · LW · GW

But it's just that we don't see any evidence of alien civilisation when we look at the stars, implying that any alien civ that does exist has a very, very strong preference for not being seen... which doesn't square at all with the "oh well, if humans see us a bit it's no big deal" attitude; this is a civilisation that has hampered its own technological growth, probably for millennia (required for travel between stars), in order not to be seen. The seas are so vast compared to the area that fighter jets can survey, and the apparent capabilities of the alien ships so incredible, that it should be trivial for them to evade literally all observation. (And the CMB temperature places a lower bound, decreasing over time, on the lowest temperature you can achieve in outer space anyway.)

Comment by Razied on Where is all this evidence of UFOs? · 2023-05-01T12:59:33.984Z · LW · GW

The David Fravor event in particular doesn't seem to me like an unreliable eyewitness: the object was seen with human eyes, with the cameras on the planes, with the radar on the planes, and with the radar on the ship. I have no idea what to think of his account in particular. Either he (and all the pilots there with him that day, and all the people on the ships who saw stuff on the ship radars) is lying for some unknown reason, or there are aliens on Earth. In which case their behavior makes absolutely no sense to me: either completely hiding themselves or a full outright reveal would make sense, but this weird "let humans have sneak-peeks but never any actual proof" policy is just strange.

Comment by Razied on The Brain is Not Close to Thermodynamic Limits on Computation · 2023-04-24T18:06:54.615Z · LW · GW

Do you usually correct people when they are being polite and courteous to others? I also find that days are seldom "great", and that I'm not actually feeling that grateful when I say "thank you" to the cashier...