Comments
Well, sure, there may be a more general argument for FDA bureaucracy being too convoluted (though of course there are risks with it being too lax too - no surer way to see the market flooded with snake oil). But that's general and applies to many therapies, not just genetic ones. Same goes for research costs being the big slice of the pie.
I believe that the CRISPR cost is for treating an already existing adult, and that it would be much cheaper to do it for a newly fertilised egg that is about to be implanted as a pregnancy. Looking to the future, we could also hope that CRISPR will get cheaper.
But then people need to preemptively decide to get IVF, going through all the related pains and troubles, because they sequenced their genomes and realized that one of them carries a gene that might increase the risk of diabetes by 0.3% (and do some other things we are not sure of). It's still a huge cost, time investment, effort investment, and quality of life sacrifice - IVF isn't entirely risk-free and there are several invasive procedures involved for the woman. It's again not obvious why they would do this if the payoff isn't really worth it.
It's not enough unless you know that changing that gene doesn't also increase the chance of some other, completely unrelated problem by 2%. And if it costs millions to CRISPR away sickle cell anemia... who would pay millions to shave off a 0.3% chance of diabetes?
In principle, doing customised gene therapy and all sorts of fancy stuff is a cool idea. But you need to crash the prices of these things down enough for them to be scalable. I would need a certain treatment that uses monoclonal antibodies to make my life quite a bit better... but it costs thousands of bucks a month and I can't afford that, so, too bad. Until new tech or economies of scale drive the price of these things down to mass-use levels, we won't have a revolution.
From the discussion it seemed that most physicists do take the realist view on electrons, but in general the agreement was that either view works and there's not a lot to say about it past acknowledging what everyone's favorite interpretation is. A question that can have no definite answer isn't terribly interesting.
Yeah, exactly. Bayesian theory is built on top of an assumption of regularity, not the other way around. If some malicious genie purposefully screwed with your observations, Bayesian theory would crash and burn. Heck, the classic "inductivist turkey" would have very high Bayesian belief in his chances of living past Christmas.
For example, there's recently been a controversy adjacent to this topic on Twitter involving one Philip Goff (philosopher), who started feuding over it with Sabine Hossenfelder (physicist, albeit with some controversial opinions). Basically, Hossenfelder took up an instrumentalist position of "I don't need to assume that things described in the models we use are real in whatever sense you care to give to the word, I only need to know that those models' predictions fit reality", and Goff took issue with how she was brushing away the ontological aspects. Several days of extremely silly arguments about whether electrons exist followed. To me Hossenfelder's position seemed entirely reasonable, and yes, a philosophical one, but she never claimed otherwise. But Goff and other philosophers' position seemed to be "the scientists are ignorant of philosophy of science; if only they knew more about it, they would be far less certain about their intuitions on this stuff!", and I can't understand how they can be so confident about that, or in what way it would concretely impact the scientists' actual work. Whether electrons "exist" in some sense or are just a convenient mathematical fiction doesn't really matter a lot to a physicist's work (after all, electrons are nothing but quantized fluctuations of a quantum field, just like phonons are quantized fluctuations of an elastic deformation field; yet people probably feel intuitively that electrons "exist" a lot more than phonons, despite them being essentially the same sort of mathematical object. So maybe our intuitions about existence are just crude and don't describe well the stuff that goes on at the very edge of matter).
Oh, I mean, I agree. I'm not asking "why" really. I think in the end "I will assume empiricism works because if it doesn't, what the fuck am I gonna do about it?" is as respectable a reason to just shrug off the induction problem as they come. It is in fact the reason why I get so annoyed when certain philosophers faff about how ignorant scientists are for not asking the questions in the first place. We asked the questions, we found as useful an answer as you can hope for, and now we're asking more interesting questions. Thinking harder won't make answers to unsolvable questions pop out of nowhere, and in practice, every human being lives according to an implicit belief in empiricism anyway. You couldn't do anything if you couldn't rely on some basic constant functionality of the universe. So there are only people who accept this condition and move on, and people who believe they can somehow think it away and have failed one way or another for the last 2500 years at least. At some point, you gotta admit you likely won't do much better than the previous fellows.
It just pushes the question further. The essential issue with inference is "why should the universe be so nicely well-behaved and have regular properties?". Bayesian probability theory assumes it makes sense to e.g. assign a fixed probability to the belief that swans are white based on a certain number of swans that we've seen being white, which already bakes in assumptions like e.g. that the swans don't suddenly change colour, or that there is a finite number of them and you're sampling them in a reasonably random manner. Basically, "the universe does not fuck with us". If the universe did fuck with us, empirical inquiry would be a hopeless endeavour. And you can't really prove for sure that it doesn't.
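To make the point concrete, here's a minimal sketch (made-up numbers, plain Beta-Bernoulli bookkeeping) of the kind of update I mean; the "universe does not fuck with us" assumption is exactly what licenses treating the observations as exchangeable draws from a fixed process:

```python
# Beta-Bernoulli update for "the next swan I see is white".
# The arithmetic is only meaningful if swans are exchangeable draws from a
# fixed process - i.e. they don't change colour and the sample isn't rigged.
alpha, beta = 1, 1               # uniform prior over the white-swan frequency
white_seen, black_seen = 100, 0  # what we've observed so far

alpha += white_seen
beta += black_seen
p_next_white = alpha / (alpha + beta)
print(p_next_white)              # ~0.99 - the inductivist turkey's confidence, as a number
```

Drop that regularity assumption and the number means nothing, because there is no prior you can put on "the process itself changes adversarially" without smuggling regularity back in somewhere.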
The strongest argument in favour of the universe really being so nice IMO is an anthropic/evolutionary one. Intelligence is the ability to pattern-match and perform inference. This ability only confers a survival advantage in a world that is reasonably well-behaved (e.g. constant rules in space and time). Hence the existence of intelligent beings at all in a world is in itself an update towards that world having sane rules. If the rules did not exist or were too chaotic to be understood and exploited, intelligence would only be a burden.
Googling... oh, it was a Tychonic model, where Venus orbits the sun in an ellipse (in agreement with Kepler), but the sun orbits the Earth.
I mean, that's not even a different model, that's just the real thing visualized in a frame of reference centred on the Earth.
But I don't think you can call such a process a Bayesian update. Again, it would require you to place conditional probabilities on the various metaphysical axioms - but the very concept of probabilities and Bayes' theorem are built upon those axioms. If causality doesn't always hold, if there are entities that do not need to obey it, then Bayes' theorem doesn't apply to them. It's just your own personal conviction shift, but you shouldn't use Bayesian updates as a framework to think about it, nor fall prey to the illusion that it makes your decision process any better in this kind of thing. It doesn't. Everyone is just as clueless as everyone else on these matters and no one has any hope of knowing better. You may pick your metaphysical axioms as if they were revealed to you in a dream and they'll be as good as anything.
One knows that in principle anything can be a hallucination, and that only very rare events have true certainty
Well, precisely, so some metaphysical axioms I just take on faith, because that's all I can do. Maybe I'm a Boltzmann brain existing only for a moment, but that's not actionable, so I scratch that. And if one of my metaphysical axioms is "the world is made only of things that are empirically observable, causal, and at their fundamental level passive" (so I don't expect some tricky demon to purposefully mess with my observations in ways I couldn't possibly detect), then that's it. I can't really update away from it, because the entire framework of my epistemology (including the very notion of beliefs and Bayesian updates!) rests on it. I can fit in it all sorts of entities - including gods, demons, angels and ghosts - just fine, but I need those to still be causal, emergent entities, just like you and me, subject to some kind of natural, regular, causal law, if not the ones I'm used to. Otherwise, I got nothing.
I'm sure they could if you're willing to sum up an infinity of them. Epicycles are fundamentally equivalent to a Fourier series/transform. The only reason to drop them is that obviously they're a very high complexity rule that can be much more efficiently compressed if you only look at the phenomenon in a different reference frame.
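For illustration, a quick numerical sketch (toy square-shaped path, arbitrary numbers) of that equivalence: treat the path as a complex signal, take its Fourier series, and each term is one uniformly rotating circle - an epicycle. Keep more terms and the fit gets arbitrarily good:

```python
import numpy as np

# Build a closed "orbit" that is definitely not a finite sum of circles: a square.
corners = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j])
z = np.concatenate([
    np.linspace(corners[i], corners[i + 1], 256, endpoint=False)
    for i in range(4)
])
N = len(z)

# Each Fourier term c_k * exp(2*pi*i*k*t) is a circle of radius |c_k| rotating at
# integer frequency k: a deferent/epicycle. Keep only the K largest circles and rebuild.
coeffs = np.fft.fft(z) / N
freqs = np.fft.fftfreq(N, d=1 / N)        # integer cycle counts
order = np.argsort(-np.abs(coeffs))       # strongest circles first
t = np.arange(N) / N

for K in (2, 8, 32, 128):
    approx = sum(coeffs[k] * np.exp(2j * np.pi * freqs[k] * t) for k in order[:K])
    print(f"{K:4d} epicycles -> max deviation {np.abs(z - approx).max():.3f}")
# The deviation keeps shrinking as K grows; with the full sum (infinite in the
# continuous case) the path is reproduced exactly.
```

Which is why epicycles were never "wrong" as such - just a hopelessly uncompressed description compared to switching frames.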
Here I am thinking more of "spiritual experiences" as e.g. visions or dreams. I do think that, for example, a miracle that leaves more tangible, durable proof is definitely evidence that updates towards the existence of some power causing that miracle.
However the point I'm making is a bit subtler. Even the existence of gods or demons need not imply a dualist worldview. There could be entities that possess immense power and obey rules that we still do not understand; but as long as we can, with study and observation, bring them into the fold of the causal relationships between various interlocking parts of the universe, they can perfectly fit within a materialist worldview. Consider most fantasy worlds with hard magic systems, in which crazy stuff happens daily, but it all works exactly like their local version of science - there is nothing mystical about it once it's understood.
So what I mean is more, what qualifies an experience as "spiritual"? For example, you can be a panpsychist and that would imply all sorts of weird possibilities (including small consciousnesses in nature and objects, like Japanese kami, and massive gestalt divine consciousnesses in the planet or the universe itself), but it would all still obey the rules of a single world made of matter. It just updates our understanding of what the properties of matter are. Dualism is a much weirder claim; for something to be spiritual it has to be in some way fundamentally different from the regular sort of matter (so for example acausal).
You keep missing the main point: why does a certain experience seem more or less likely based on a spiritual vs materialist world view?
That's what it takes for a meaningful update. But what experience could possibly have that property? If I trip balls, and see God himself descend from His Heaven, amidst a choir of angels who sing with beauty that makes me cry and shiver, and He speaks to me and says "BELIEVE", His voice shaking my very core... what distinguishes that from just some very fancy hallucination my brain came up with while tripping balls?
Nothing.
That's the point. It is utterly impossible to tell whether the experience is "spiritual" or not because I know my qualia are at the mercy of a few kg of electric meat that sometimes goes on the fritz because I gave it the wrong chemicals or I slept too little or it just decided to do so. If you're inclined to believe it is spiritual, you will consider it validating. If you're inclined to believe it is material, you'll seek psychiatric help. But either way the experience conveys no new information - it just reflects your priors at you.
So now I ask, what kind of experience would most definitely NOT do that, and instead provide genuine new information that I can recognise as such? Because that's the only kind of experience that one can truly update on.
It's irrelevant. If a belief isn't of the empirical type, then no information can allow you to meaningfully update. You'll have a certain number and either stick to it your whole life or modify it arbitrarily.
As I'm saying, what does an empirical discrimination between spiritual and non-spiritual worlds look like? What kind of experience would increase or decrease your belief in a spiritual reality, and why would it - as in, what makes some rule of the world unique enough to qualify as a "dual" of material reality rather than just another part of it? That's my question. If you can answer it, then maybe meaningful updates are possible; otherwise they're not, and everyone will merrily go on believing what they already do, whatever happens.
Yes, but should it? This depends on one's priors. If one has very firm priors in favor of materialism, it's one thing. If one starts from a more agnostic and open-minded position, then it's different.
No, it's not, and that's my point in saying this is a metaphysical position. It is not possible to perform updates on it at all. What you call "open minded" is "you already believe that a certain kind of experience qualifies as spiritual", so you are already a dualist; you just don't know when and how the other substance can be triggered. If a single falsifiable prediction about what things are and aren't possible in a materialist vs. dualist world can't be produced, then these two frameworks aren't two sides of a testable hypothesis - instead, they are two different ways of describing and understanding the same thing.
Even bringing qualia into it doesn't really fix things. Suppose we assume qualia are indeed the product of some spiritual substance, in some way. How can you distinguish a "pure qualia" experience from a merely "brain doing funny things" one? We know that brain stuff is somewhere in the regular pipeline that produces qualia, so there is a coupling there.
And again, supposing you can e.g. prove there is such a thing as a soul, what would be the difference between a true dualist framework and a simply extended materialist one, in which you add to your model of the world one special kind of particle/force/matter that carries consciousness? And what kind of experience would be such that it constitutes evidence in favour of it?
My point is, what even defines an experience as "spiritual"? It seems to me like this is almost a pre-empirical metaphysical belief, one that can't simply work as "I did not believe in it, then I had an experience, now I believe". If you see the world in a material framework, pretty much everything can be explained by it. Any kind of mind-altering experience, for example, can obviously just be in one's brain - it does not have to be, depending on your views on consciousness, but it can. And it's not like a higher intensity of feeling would change that. As long as feeling is just a phenomenon internal to my brain, it can be of any intensity and that's no evidence at all of anything other than my brain acting funny. So it's not clear what would separate a "spiritual experience" from simply "temporary mind impairment". Unless e.g. one could indeed attest to things like "I predicted the future" or "I learned of true things that I could not have otherwise known", which would at least, if reliably observed, be evidence for ESP - but even ESP does not need to be explained in a dualistic, rather than materialistic, framework.
Mostly, any kind of dual substance (spirit, soul, what have you) that:
- regularly and bi-directionally interacts with ordinary matter
- obeys some kind of consistent set of rules
is just matter with extra steps. So materialism can almost by definition subsume any of these phenomena, even if unknown, as long as they can be empirically observed, tested and predicted.
Then I started having lots of experiences that forced me to discredit materialism and embrace some spiritual beliefs.
This kind of statement always puzzles me a bit. What kind of experience absolutely rules out a materialist explanation and admits only a spiritual one? Like, no matter what one experiences, the experience itself can always be mediated by our brain firing in certain ways, so there's an obvious layer susceptible to hacking there. And even an experience that comes with things that couldn't be internally justified (e.g. knowledge you shouldn't possess) could still be explained by some material laws we simply don't understand yet. I don't get what precisely "evidence that makes me update away from materialism towards dualism" could possibly look like, because those are two frameworks that can be used to explain the same exact things. Like interpretations of QM.
But on topics of practical life, there is little difference. Well, except for some sensitive topics such as gender norms and sexual behavior, because I would expect that they talk about what should be (according to Bible) rather than what actually is.
TBF that's specifically if the god they believe in is the Christian one. But in general I think some other points hold: someone who believes in an intelligent creative force behind the universe (even a non-conventional one) is more likely to believe that intervention at some point is possible, or to hold moral realist views. But neither is a requirement; the barest form of belief in a God, the "clockmaker deity" of Voltaire and such, pretty much comes with no strings attached. It's just saying "the universe's first cause possesses self-awareness and intentionality", which as far as we can tell is impossible to prove or disprove.
I suppose you could still imagine that if said first cause was also superintelligent, then they might have set things up just right so that eventually they went the way they wanted, with no need for further interventions; but I'd contend that you can probably show that no ordinary computation could predict the universe without outright simulating the universe, so the universe itself would be the computation. Unless of course whichever layer of existence this first cause subsists on somehow obeys not just different physical laws, but different logical/mathematical ones, such that it allows things that are computationally impossible here. Hard to even imagine how that works though; mathematics seems just so... absolute. I'm not sure what a world in which the halting problem is solvable would have to look like. Maybe acausal?
While judgement can vary, I think this is about more than just judging a person morally. I don't think what Summers said, even in the most uncharitable reading, should disqualify him from most jobs. I do think, though, that it might disqualify him, or at least make him a worse choice, for something like the OpenAI board, because that comes with ideological requirements.
EDIT: the best source I've found for the excerpt is https://en.m.wikipedia.org/wiki/Summers_memo. I think it's nothing particularly surprising and it's 30 years old, but rather than ironic it sounds to me like it's using this as an example of things that would look outrageous but are equivalent to other things that we do, which don't look quite as bad due to different vibes. I don't know that it disqualifies his character somehow - it's way too scant evidence to decide either way - but I do think it updates slightly towards him being the kind of economist I wouldn't much like to see potentially in charge of AGI, and again, this is because the requirements are strict for me. If you treat AGI with the same hands-off approach as we usually do normal economic matters, you almost assuredly get a terrible world.
Hm. I'd need to read the memo to form my own opinion on whether that holds. It could be a "Modest Proposal" thing but irony is also a fairly common excuse used to walk back the occasional stupid statement.
I never got the sense of this being settled science (of course, given how controversial the claim is, it would be hard for it to be settled for good), but even besides that, the question is: what does one do with that information?
Let's put it in LW language: I think that a good anti-discrimination policy might indeed be "if you have to judge a human's abilities in a given domain (e.g. for hiring), precommit to assuming a Bayesian prior of total ignorance about those abilities, regardless of what any exterior information might suggest to you, and only update on their demonstrated skills". This essentially means that we shift the cognitive burden of updating onto the judge rather than the judged (who otherwise would have to fight a disadvantageous prior). It seems quite sensible IMO, as usually the judge has more resources to spare anyway. It centers human opportunity over maximal efficiency.
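A hypothetical sketch of what that precommitment could look like in code (names and numbers made up, not any real hiring system): every candidate starts from the same flat prior over their success rate, and only work-sample results ever touch it:

```python
# Hypothetical illustration of the "prior of total ignorance" policy:
# a flat Beta(1, 1) prior over each candidate's task success rate,
# updated only on demonstrated performance - no outside signal enters.

def posterior_skill(task_results, prior=(1, 1)):
    """task_results: booleans, one per work-sample task the candidate completed."""
    alpha, beta = prior
    successes = sum(task_results)
    alpha += successes
    beta += len(task_results) - successes
    return alpha / (alpha + beta)   # posterior mean success rate

candidate_a = [True, True, False, True]               # 3/4 -> ~0.67
candidate_b = [True, False, False, True, True, True]  # 4/6 -> ~0.63
print(posterior_skill(candidate_a), posterior_skill(candidate_b))
```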
Conversely, someone who suggests that "the economic logic behind dumping a load of toxic waste in the lowest-wage country is impeccable" seems to already think that economics is mainly about maximal efficiency, and any concerns for human well-being are at best tacked on. This is not a good ideological fit for OpenAI's mission! Unless you think the economy is and ought to be only human well-being's bitch, so to speak, you have no business anywhere near building AGI.
I can't imagine any board, for-profit or non-profit, tolerating one of its members criticizing its organization in public.
This is only evidence for how insane the practices of our civilization are, requiring that those who most have the need and the ability to scrutinise the power of a corporation do so the least. OpenAI was supposedly trying to swim against the current, but alas, it just became another example of the regular sort of company.
I honestly think the Board should have just blown OpenAI up. The compromise is worthless: these conditions remain and thus Sam Altman stays in power. So at least have him go work for Microsoft; it likely won't be any worse, but the pretense is over. And yeah, they should have spoken more, and more openly, at least to give people some good ammo to defend their choices, though the endless cries of "BUT ALL THAT LOST VALUE" would have come anyway.
Power comes from gaining support of billionaires, journalists, and human capital. It's kind of crazy that Sam Altman essentially rewrote the rules, whether he was justified or not.
Until stakes get so high that we go straight to the realm of violence (threatened, by the State or other actors, or enacted), yes, it does. Enough soft power becomes hard. I honestly can't figure out how anyone thought this setup would work, especially with Altman being as deft a manipulator as he seems to be. I've made the Senate vs Caesar comparison before, but I'll reiterate it, because ultimately rules only matter until personal loyalty persuades enough people to ignore them.
I feel like at this point the only truly rational comment is:
what the absolute fuck.
Wait, what? In your polynomial model, the constant solution literally has 14 local degrees of freedom - you can vary any other parameter and it still works. It's by far the one with the lowest RLCT.
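For concreteness, here is a toy stand-in of my own (a hypothetical 15-parameter product model, not necessarily the exact setup under discussion) where the constant solution has exactly 14 flat directions:

```python
import numpy as np

# Hypothetical stand-in: f_w(x) = (w_1 * ... * w_15) * x, fit to the zero function.
rng = np.random.default_rng(0)
xs = rng.normal(size=100)

def loss(w):
    return np.mean((np.prod(w) * xs) ** 2)

w = rng.normal(size=15)
w[0] = 0.0                       # sit on the constant (zero) solution

# Any of the other 14 parameters can be moved freely without leaving the
# zero-loss set: 14 local degrees of freedom, hence a very low RLCT.
for i in range(1, 15):
    w_pert = w.copy()
    w_pert[i] += 0.5
    assert loss(w_pert) == 0.0
print("constant solution is flat in 14 directions")
```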
I'll be real, I don't know what everyone else thinks, but personally I can say I wouldn't feel comfortable contributing to anything AGI-related at this point, because I have very low trust that even aligned AGI would result in a net good for humanity with this kind of governance. I can imagine that maybe amidst all the bargains with the Devil there is one that will genuinely pay off and is the lesser evil, but I can't tell which one. I think the wise thing to do would be just not to build AGI at all, but that's not a realistically open path. So yeah, my current position is that literally any action I could take advances the kind of future I would want by an amount that is at best below the error margin of my guesses, and at worst negative. It's not a super nice spot to be in, but it's where I'm at and I can't really lie to myself about it.
Personally I am fascinated by the problems of interpretability, and I would consider "no more GPTs for you guys until you figure out at least the main functioning principles of GPT-3" a healthy exercise in actual ML science to pursue, but I also have to acknowledge that such an understanding would make distillation far more powerful and thus also lead to a corresponding advance in capabilities. I am honestly stumped at what "I want to do something" looks like that doesn't somehow end up backfiring. It may be that the problem is just thinking this way in the first place, and this really is just a *shudder* political problem, and tech/science can only make it worse.
Oh, I mean, sure, scepticism about OpenAI was already widespread, no question. But in general it seems to me like there have been too many too-clever-by-half attempts from people at least adjacent in ways of thinking to rationalism/EA (like Elon) that go "I want to avoid X-risk but also develop aligned friendly AGI for myself", and the result is almost invariably that they just advance capabilities more than safety. I just think sometimes there's a tendency to underestimate the pull of incentives and how you often can't just have your cake and eat it. I remain convinced that if one wants to avoid X-risk from AGI, the safest road is probably to just strongly advocate for not building AGI at all, putting it in the same bin as "human cloning" as a fundamentally unethical technology. It's not a great shot, but it's probably the best one at stopping it. Being wishy-washy doesn't pay off.
I feel like, not unlike the situation with SBF and FTX, the delusion that OpenAI could possibly avoid this trap maps onto the same cognitive weak spot among EA/rationalists of "just let me slip on the Ring of Power this once bro, I swear it's just for a little while bro, I'll take it off before Moloch turns me into his Nazgul, trust me bro, just this once".
This is honestly entirely unsurprising. Rivers flow downhill, and companies in a capitalist economy producing stuff with tremendous potential economic value converge on making a profit.
More than just not incomprehensible, "whenever I start a project I immediately feel an impulse to focus on another" is in fact painfully relatable.
Feels like an apt comparison, given that what we're finding out now is what happens when some kind of Senate tries to cut the upstart general down to size and the latter basically goes "you and what army?".
Here that's what is referred to as "being civil". The post argues against niceness as being overly concerned with hurting others' feelings.
That's just Yud's idea of a fast takeoff. Personally I'm much more worried about a slow takeoff that doesn't look like that but is still bad for either most or all of humanity. I don't expect AGI to instantly foom, though.
Honestly this does seem... possible. A disagreement on whether GPT-5 counts as AGI would have this effect. The most safety-minded would go "ok, this is AGI, we can't give it to Microsoft". The more business-oriented and less conservative would go "no, this isn't AGI yet, it'll make us a fuckton of money though". There would be conflict. But then, for example, seeing how everyone might now switch to Microsoft and simply rebuild the thing from scratch there, Ilya despairs and decides to do a 180, because at least this way he gets to supervise the work somehow.
I mean, this would not be too hard though. It could be achieved by a simple trick of appearing smarter to some people and then dumber at subsequent interactions with others, scaring the safety conscious and then making them look insane for being scared.
I don't think that's what's going on (why would even an AGI model they made be already so cleverly deceptive and driven? I would expect OAI to not be stupid enough to build the most straightforward type of maximizer) but it wouldn't be particularly hard to think up or do.
(or would only do so at a cost that's not worth paying)
That's the part that confuses me most. An NDA wouldn't be a strong enough reason at this point. As you say, safety concerns might, but that seems pretty wild unless they literally already have AGI and are fighting over what to do with it. The other possibility is anything that, if said out loud, might involve the police, so that revealing the info would itself be an escalation (and possibly mutually assured destruction, if there's criminal liability on both sides). I got nothing.
I keep being confused by them not revealing their reasons. Whatever they are, there's no way that saying them out loud wouldn't give some ammo to those defending them, unless somehow between Friday and now they swung from "omg this is so serious we need to fire Altman NOW" to "oops looks like it was a nothingburger, we'll look stupid if we say it out loud". Do they think it's a literal infohazard or something? Is it such a serious accusation that it would involve the police to state it out loud?
I mean, yes, a company self-destructing doesn't stop much if their knowledge isn't also actively deleted - and even then, it's just a setback of a few months. But also, by going "oh well we need to work inside the system to fix it somehow" at some point all you get is just another company racing with all others (and in this case, effectively being a pace setter). However you put it, OpenAI is more responsible than any other company for how close we may be to AGI right now, and despite their stated mission, I suspect they did not advance safety nearly as much as capability. So in the end, from the X-risk viewpoint, they mostly made things worse.
I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I'd be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.
Maybe I'm missing some context, but wouldn't it be better for Open AI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to deserve care by a benevolent powerful and very smart entity).
The problem, I suspect, is that people just can't get out of the typical "FOR THE SHAREHOLDERS" mindset. A company that is literally willing to commit suicide rather than get hijacked for purposes antithetical to its mission - like a cell dying by apoptosis rather than going cancerous - can be a very good thing; if only there was more of this. You can't beat Moloch if you're not willing to precommit to this sort of action. And let's face it, no one involved here is facing homelessness and soup kitchens even if OpenAI crashes tomorrow. They'll be a little worse off for a while, their careers will take a hit, and then they'll pick themselves up. If this was about the safety of humanity, it would be a no-brainer that you should be ready to sacrifice that much.
It's not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity's wings. As soon as he was fired and the "what did Ilya see" narrative emerged (I don't even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.
Honestly even without the doom stuff I'd be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.
This would be very consistent with the problem being about safety (Altman at MSFT is worse than Altman at OAI for that), but then Shear is lying (understandable that he might have to for political reasons). Or, I suppose, anything that involves the survival of OpenAI, which at this point is threatened anyway.
What about this?
https://twitter.com/robbensinger/status/1726387432600613127
We can definitely say that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.
If considered reputable (and not a lie), this would significantly narrow the space of possible reasons.
Man, Sutskever's back and forth is so odd. Hard to make obvious sense of, especially if we believe Shear's claim that this was not about disagreements on safety. Any chance that it was Annie Altman's accusations towards Sam that triggered this whole thing? It seems strange since you'd expect it to only happen if public opinion built up to unsustainable levels.
If we want to go by realistic outcomes, we're either lucky in that somehow AGI isn't straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled attempt at a takeover, or simply a new unexpected AI winter), or we're dead. If we want to talk about scenarios in which things go otherwise, then I'm not sure what's more unlikely between the fully aligned ASI and the merely not-kill-everyone-aligned one that we nevertheless manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which, even putting aside economic incentives, would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and OK in principle but repugnant in practice, due to the ethics of the required experiments, to most of the rest).
If you have only the first type of alignment, under current economic incentives and structure, you almost 100% end up with some other kind of disempowerment, something likely more akin to "Wireheading by Infinite Jest". Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions; it comes with a lot of problems and risks, and it's also far from guaranteed to solve the problem (if it's solvable at all). You can't realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can't expect that somewhere after creating ASI is where we Just Stop. Either we stop before, or we go all the way.