I think if there were other cases of Olivia causing problems and he was asking multiple people to hide them, that would do more to convince me that he was sacrificing group epistemology to protect Olivia's reputation, and was overall more anti-truth-seeking, yes.
Huh? It seems to come down to definitions of lies, my current intuition is it wouldn't be a lie, but I'm not sure why people would care how I define lie in this context.
the kind of thing I have heard from Vassar directly is that, in the Lacanian classification of people as psychotic/neurotic/perverted, there are some things to be said in favor of psychotics relative to others, namely, that they have access to the 'imaginary' realm that is coherent and scientific (I believe Lacan thinks science is imaginary/psychotic, as it is based on symmetries). however, Lacanian psychosis has the disadvantage that people can catastrophize about ways society is bad.
more specifically, Vassar says, Lacanian neurotics tend to deny oppressive power structures, psychotics tend to acknowledge them and catastrophize about them, and perverts tend to acknowledge and endorse them; under this schema, it seems things could be said in favor of and against all three types.
this raises the question of how much normal (non-expert) and psychiatric concepts of psychosis have to do with the Lacanian model, which relates to factors like how much influence Lacan has had on psychiatry. I asked Vassar about this and he said that 'delusions' (a standard symptom of psychosis) can be a positive sign, because when people form actual beliefs they tend to be wrong (this accords with, for example, Popperian philosophy of science, as specific theories are in general 'wrong' even if useful; see also, 'all models are wrong, some models are useful')
overall I think further specifying the degree to which anyone is 'encouraging psychosis', or the ethics of value judgments on psychosis, would in general require having a more specific definition/notion of psychosis, and the sort of 'dramatic' relation people in threads such as this have to psychosis (i.e. moral panics about it) works against such specificity of definition, and therefore lacks the precision required for well-informed judgments.
I have no idea about other people lying due to JDP's influence. I had JDP look at a draft of Occupational Infohazards prior to posting and he convinced me to not mention Olivia because she was young and inexperienced / experimenting with ways of being at the time, it was maybe too bad for her reputation to say she was a possible influence on my psychosis. I admit this was a biased omission, though I don't think it was a lie. (To be clear, I'm not saying I went psychotic because of Olivia, I think there were many factors and I'm pretty uncertain about the weighting)
claims about Vassar aside, do I even have a reputation for being particularly disagreeable or overconfident, or doing so in the presence of people who have taken psychedelics? to my mind I am significantly less disagreeable and confident than high status rationalists such as Eliezer Yudkowsky and Nate Soares. I think my tendency with trips is to sometimes explore new hypotheses but have relatively low confidence as I'm more likely than usual to change my mind the next day. also, isn't the 'modest epistemology' stuff a pretty thorough criticism of claims that people should not "confidently expound in the nature of truth and society" that has been widely accepted on LW?
as another consideration, I have somewhat of a reputation for being a helpful person for people going through mental health issues (such as psychosis) to talk to, e.g. I let someone with anxiety, paranoia, and benzo issues stay at my place for a bit, she was very thankful and so was her mom. I don't think this is consistent with the reputation attributed to me re: effects on people in altered states of consciousness.
she talked with him sometimes in group conversations that included other people, 2016-2017. idk if they talked one on one. she stopped talking with him as much sometime during this period, partially due to Bryce Hidysmith's influence. mostly, she was interested in learning from him because he was a "wizard"; she also thought of Anna Salamon as a "wizard", perhaps others. Michael wasn't specifically like "I am going to teach Olivia things as a student" afaik; I would not describe it as a "teacher/student relationship". at this point they pretty much don't talk, and Michael thinks Olivia is suspect/harmful due to the whole Eric Bruylant situation, where Eric became obsessed with Vassar perhaps due to Olivia's influence.
I don't think so; even if it applies to the subset of hypothetical superintelligences that factor neatly into beliefs and values, humans don't seem to factorize this way (see Obliqueness Thesis, esp. argument from brain messiness).
Thanks, hadn't realized how this related to algebraic geometry. Reminds me of semi-simplicial type theory.
Computationally tractable is Yudkowsky's framing and might be too limited. The kind of thing I believe is, for example, that an animal without a certain brain complexity will tend not to be a social animal, and is therefore unlikely to have the sort of values social animals have. And animals that can't do math aren't going to value mathematical aesthetics the way human mathematicians do.
Relativity to Newtonian mechanics is a warp in a straightforward sense. If you believe the layout of a house consists of some rooms connected in a certain way, but there are actually more rooms connected in different ways, getting the maps to line up looks like a warp. Basically, the closer the mapping is to a true homomorphism (in the universal algebra sense), the less warping there is, otherwise there are deviations intuitively analogous to space warps.
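To make the intuition concrete, here's a toy sketch (entirely illustrative, not from the original discussion): a map between two finite algebras, with "warping" measured as the total deviation from the homomorphism equation f(a + b) = f(a) + f(b).

```python
from itertools import product

def warp(f, n, m):
    """Total 'warping' of a map f: Z_n -> Z_m relative to addition.

    A true homomorphism satisfies f((a + b) mod n) == (f(a) + f(b)) mod m
    for all pairs; we sum the circular distance between the two sides.
    """
    total = 0
    for a, b in product(range(n), repeat=2):
        lhs = f((a + b) % n)
        rhs = (f(a) + f(b)) % m
        total += min((lhs - rhs) % m, (rhs - lhs) % m)
    return total

true_hom = lambda x: x % 6        # Z_12 -> Z_6 is a genuine homomorphism
identity_like = lambda x: x % 10  # crude projection Z_12 -> Z_10

print(warp(true_hom, 12, 6))        # 0: no warping at all
print(warp(identity_like, 12, 10))  # nonzero: the map is warped
```

The nonzero case is the analogue of forcing a Newtonian map onto relativistic territory: the correspondence mostly works locally but accumulates deviations globally.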
I discussed something similar in the "Human brains don't seem to neatly factorize" section of the Obliqueness post. I think this implies that, even assuming the Orthogonality Thesis, humans don't have values that are orthogonal to human intelligence (they'd need to not respond to learning/reflection to be orthogonal in this fashion), so there's not a straightforward way to align ASI with human values by plugging in human values to more intelligence.
hmm, I wouldn't think of industrialism and human empowerment as trying to grab the whole future, just part of it, in line with the relatively short term (human not cosmic timescale) needs of the self and extended community; industrialism seems to lead to capitalist organization which leads to decentralization superseding nations and such (as Land argues).
I think communism isn't generally about having one and one's friends in charge, it is about having human laborers in charge. One could argue that it tended towards nationalism (e.g. USSR), but I'm not convinced that global communism (Trotskyism) would have worked out well either. Also, one could take an update from communism about agendas for global human control leading to national control (see also tendency of AI safety to be taken over by AI national security as with the Situational Awareness paper). (Again, not ruling out that grabbing hold of the entire future could be a good idea at some point, just not sold on current agendas and wanted to note there are downsides that push against Pascal's mugging type considerations)
Not sure what you mean by complexity here, is this like code size / Kolmogorov complexity? You need some of that to have intelligence at all (the empty program is not intelligent). At some point most of your gains come from compute rather than code size. Though code size can speed things up (e.g. imagine sending a book back to 1000BC, that would speed people up a lot; consider that superintelligence sending us a book would be a bigger speedup)
by "complexify" here it seems you mean something like "develop extended functional organization", e.g. in brain development throughout evolution. And yeah, that involves dynamics with the environment and internal maintenance (evolution gets feedback from the environment). It seems it has to have a drive to do this, which can be either a terminal or an instrumental goal, though deriving it from instrumental goals seems harder than baking it in as terminal (so I would guess evolution gives animals a terminal goal of developing functional complexity of mental structures etc, or some other drive that isn't exactly a terminal goal)
see also my post relating optimization daemons to immune systems, it seems evolved organisms develop these; when having more extended functional organization, they protect it with some immune system functional organization.
to be competitive agents, having a "self" seems basically helpful, but might not be the best solution; selfish genes are an alternative, and perhaps extended notions of self can maintain competitiveness.
Thanks, going to link this!
re meta ethical alternatives:
- roughly my view
- slight change, opens the question of why the deviations? are the "right things to value" not efficient to value in a competitive setting? mostly I'm trying to talk about those things to value that go along with intelligence, so it wouldn't correspond with a competitive disadvantage in general. so it's still close enough to my view
- roughly Yudkowskian view, main view under which the FAI project even makes sense. I think one can ask basic questions like which changes move towards more rationality on the margin, though such changes would tend to prioritize rationality over preventing value drift. I'm not sure how much there are general facts about how to avoid value drift (it seems like the relevant kind, i.e. value drift as part of becoming more rational/intelligent, only exists from irrational perspectives, in a way dependent on the mind architecture)
- minimal CEV-realist view. it really seems up to agents how much they care about their reflected preferences. maybe changing preferences too often leads to money pumps, or something?
- basically says "there are irrational and rational agents, rationality doesn't apply to irrational agents", seems somewhat how people treat animals (we don't generally consider uplifting normative with respect to animals)
- at this point you're at something like ecology / evolutionary game theory, it's a matter of which things tend to survive/reproduce and there aren't general decision theories that succeed
re human ontological crises: basically agree, I think it's reasonably similar to what I wrote. roughly my reason for thinking that it's hard to solve is that the ideal case would be something like a universal algebra homomorphism (where the new ontology actually agrees with the old one but is more detailed), yet historical cases like physics aren't homomorphic to previous ontologies in this way, so there is some warping necessary. you could try putting a metric on the warping and minimizing it, but, well, why would someone think the metric is any good, it seems more of a preference than a thing rationality applies to. if you think about it and come up with a solution, let me know, of course.
with respect to grabbing hold of the whole future: you can try looking at historical cases of people trying to grab hold of the future and seeing how that went; it's a mixed bag with mostly negative reputation, indicating there are downsides as well as upsides, and it's not a "safe" conservative view. see also Against Responsibility. I feel like there's a risk of getting Pascal's mugged about "maybe grabbing hold of the future is good, you can't rule it out, so do it"; there are downsides to spending effort that way. like, suppose some Communists thought capitalism would lead to the destruction of human value with high enough probability that instituting global communism is the conservative option; it doesn't seem like that worked well (even though a lot of people around here would agree that capitalism tends to lead to human value destruction in the long run). particular opportunities for grabbing hold of the future can be net negative and not worth taking even if one of them is a good idea in the long run (I'm not ruling that out, just would have to be convinced of specific opportunities).
overall I'd rather focus on first modeling the likely future and looking for plausible degrees of freedom; a general issue with Pascal's mugging is it might make people overly attached to world models in which they have ~infinite impact (e.g. Christianity, Communism) which means paying too much attention to wrong world models, not updating to more plausible models in which existential-stakes decisions could be comprehended if they exist. and Obliqueness doesn't rule out existential stakes (since it's non-Diagonal).
as another point, Popperian science tends to advance by people making falsifiable claims, "you don't know if that's true" isn't really an objection in that context. the pragmatic claim I would make is: I have some Bayesian reason to believe agents do not in general factor into separate Orthogonal and Diagonal components, this claim is somewhat falsifiable (someone could figure out a theory of this invulnerable to optimization daemons etc), I'm going to spend my attention on the branch where I'm right, I'm not going to worry about Pascal's mugging type considerations for if I'm wrong (as I said, modeling the world first seems like a good general heuristic), people can falsify it eventually if it's false.
this whole discussion is not really a defense of Orthogonality given that Yudkowsky presented orthogonality as a descriptive world model, not a normative claim, so sticking to the descriptive level in the original post seems valid; it would be a form of bad epistemology to reject a descriptive update (assuming the arguments are any good) because of pragmatic considerations.
"as important as ever": no, because our potential influence is lower, and the influence isn't on things shaped like our values, there has to be a translation, and the translation is different from the original.
CEV: while it addresses "extrapolation" it seems broadly based on assuming the extrapolation is ontologically easy, and "our CEV" is an unproblematic object we can talk about (even though it's not mathematically formalized, any formalization would be subject to doubt, and even if formalized, we need logical uncertainty over it, and logical induction has additional free parameters in the limit). I'm really trying to respond to orthogonality not CEV though.
from a practical perspective: notice that I am not behaving like Eliezer Yudkowsky. I am not saying the Orthogonality Thesis is true and important to ASI, I am instead saying intelligence/values are Oblique and probably nearly Diagonal (though it's unclear what I mean by "nearly"). I am not saying a project of aligning superintelligence with human values is a priority. I am not taking research approaches that assume a Diagonal/Orthogonal factorization. I left MIRI partially because I didn't like their security policies (and because I had longer AI timelines), I thought discussion of abstract research ideas was more important. I am not calling for a global AI shutdown so this project (which is in my view confused) can be completed. I am actually against AI regulation on the margin (I don't have a full argument for this, it's a political matter at this point).
I think practicality looks more like having near-term preferences related to modest intelligence increases (as with current humans vs humans with neural nets; how do neural nets benefit or harm you, practically? how can you use them to think better and improve your life?), and not expecting your preferences to extend into the distant future with many ontology changes, so don't worry about grabbing hold of the whole future etc, think about how to reduce value drift while accepting intelligence increases on the margin. This is a bit like CEV except CEV is in a thought experiment instead of reality.
The "Models of ASI should start with realism" bit IS about practicalities, namely, I think focusing on first forecasting absent a strategy of what to do about the future is practical with respect to any possible influence on the far future; practically, I think your attempted jump to practicality (which might be related to philosophical pragmatism) is impractical in this context.
It occurs to me that maybe you mean something like "Our current (non-extrapolated) values are our real values, and maybe it's impossible to build or become a superintelligence that shares our real values so we'll have to choose between alignment and superintelligence." Is this close to your position?
Close. Alignment of already-existing human values with superintelligence is impossible (I think) because of the arguments given. That doesn't mean humans have no preferences indirectly relating to superintelligence (especially, we have preferences about modest intelligence increases, and there's some iterative process).
I think that's what I'm trying to say with orthogonal and diagonal both being wrong. One example of a free choice would be bets on things that are very hard to test or deduce. Then you decide some probability, and if you change the probability too much you get money pumped as with a logical inductor. But of course thinking and learning more will tend to concentrate beliefs more, so it isn't truly orthogonal. (One could think values but not beliefs are orthogonal, but we both know about Bayes/VNM duality)
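A toy sketch of the money-pump point (the bookie, the price sequence, and the function names are all hypothetical illustrations): an agent that swings its quoted probability on an untestable proposition can be bought low and sold high for guaranteed profit, regardless of the proposition's truth value.

```python
# An agent quotes a probability p for an undecidable proposition X and
# is willing to both buy and sell a $1-if-X contract at price p. If it
# swings its price, a bookie buys low and sells high; the long and short
# positions cancel, so the profit is certain whether or not X is true.

def bookie_profit(prices):
    """Guaranteed profit from buying at a quote and selling at any later higher quote."""
    profit = 0.0
    held_at = None
    for p in prices:
        if held_at is None:
            held_at = p            # buy a contract at the agent's current price
        elif p > held_at:
            profit += p - held_at  # sell it back at the higher price
            held_at = None
    return profit

# Agent oscillates between 0.4 and 0.6 on the same proposition:
print(bookie_profit([0.4, 0.6, 0.4, 0.6]))  # ~0.4 of guaranteed profit
```

A stable quote yields the bookie nothing, which is the logical-inductor-style pressure toward not moving the price too much on evidence-free propositions.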
I think the relevant implication from the thought experiment is that thinking a bunch about metaethics and so on will in practice change your values; the pill itself is not very realistic, but thinking can make people smarter and will cause value changes. I would agree Land is overconfident (I think orthogonal and diagonal are both wrong models).
I don't think it's a given that moral nonrealism is true (therefore inevitably believed by a superintelligence), see my short story.
Morality can mean multiple things. Utilitarian morality is about acting to maximize a fixed goal function, Kantian morality is about alignment between the a posteriori will and possible a priori will, cultural morality is about adherence to a specific method of organizing humans.
Superintelligence would clearly lack human cultural morality, since that is a specific system for organizing humans, e.g. with law as a relatively legible branch.
In general humans question more of their previous morality when thinking longer; Peter Singer for example rejects much of normal morality for utilitarian reasons.
ASI could have something analogous to cultural morality but for organizing a different set of agents. E.g. methods of side-taking in game-theoretic conflict that tend to promote cooperation between different ASIs (this becomes more relevant e.g. when an alien ASI is encountered or more speculatively in acausal trade).
Regardless of whether one calls Omohundro drives "moral", they are convergent goals for ASIs, so the rejection of human morality does not entail lack of very general motives that include understanding the world and using resources such as energy efficiently and so on.
I think both (a) something like moral realism is likely true and (b) the convergent morality for ASIs does not particularly care about humans if ASIs already exist (humans are of course important in the absence of ASIs due to greater intelligence/agency than other entities on Earth).
FAI is a narrow path to ASI that has similar values to what humans would upon reflection. As I have said these are very different from current human values due to more thought and coherence and so on. It might still disassemble humans but scan them into simulation and augment them, etc. (This is an example of what I referred to as "luxury consumerism in the far future")
To the extent will-to-think generates a "should" for humans the main one is "you should think about things including what is valuable, and trust the values upon reflection more than current values, rather than being scared of losing current values on account of thinking more". It's basically an option for people to do this or not, but as Land suggests, not doing this leads to a competitive disadvantage in the long run. And general "should"s in favor of epistemic rationality imply this sort of thing.
There is more I could say about how values such as the value of staying alive can be compatible with deontological morality (of the sort compatible with will-to-think), perhaps this thread can explain some of it.
VNM/Bayes suggest there are some free parameters in how reflectively stable AGI could turn out, e.g. beliefs about completely un-testable propositions (mathematically undecidable etc), which might hypothetically be action-relevant at some point.
None of these are going to look like human values, human values aren't reflectively stable so are distinct in quite a lot of ways. FAI is a hypothetical of a reflectively stable AGI that is nonetheless "close to" or "extended from" human values to the degree that's possible. But it will still have very different preferences.
It would be very hard for will-to-think to be in itself "misguided", it's the drive to understand more, it may be compatible with other drives but without will-to-think there is no coherent epistemology or values.
Uploading is a possible path towards reflective stability that lots of people would consider aligned because it starts with a copy of them. But it's going to look very different after millions of years of the upload's reflection, of course. It's going to be hard to evaluate this sort of thing on a value level because it has to be done from a perspective that doesn't know very much, lacks reflective stability, etc.
Efficient heat engines locally slow entropy increase. If they could reverse entropy, they would (to get more energy out of things). They can also export high entropy (e.g. medium-temperature water) while intaking low entropy (e.g. un-mixed high and low temperature water) to locally reduce entropy. Entropy is waste from the perspective of a heat engine. Likewise, animals intake low-entropy food and excrete high-entropy waste.
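A quick sketch of the mixing example (idealized constant heat capacity, illustrative numbers): direct mixing destroys the temperature difference and raises entropy, while a reversible engine run between the two masses extracts work at zero total entropy change.

```python
import math

C = 4184.0                    # J/K, heat capacity of ~1 kg of water
T_hot, T_cold = 370.0, 280.0  # temperatures in kelvin

# Direct mixing: final temperature is the arithmetic mean; entropy strictly increases.
T_mix = (T_hot + T_cold) / 2
dS_mix = C * math.log(T_mix / T_hot) + C * math.log(T_mix / T_cold)

# Reversible engine between the two masses: they meet at the geometric mean
# temperature, total entropy change is zero, and the energy difference comes
# out as work instead of waste heat.
T_rev = math.sqrt(T_hot * T_cold)
dS_rev = C * math.log(T_rev / T_hot) + C * math.log(T_rev / T_cold)
work = C * (T_hot + T_cold - 2 * T_rev)

print(f"mixing:     dS = {dS_mix:+.1f} J/K (pure waste)")
print(f"reversible: dS = {dS_rev:+.1f} J/K, work extracted = {work:.0f} J")
```

The un-mixed water is the low-entropy input; letting it mix on its own is exactly the "waste" a heat engine exists to avoid.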
MIRI research topics are philosophical problems. Such as decision theory and logical uncertainty. And they would have to solve more. Ontology identification is a philosophical problem. Really, how would you imagine doing FAI without solving much of philosophy?
I think the post is pretty clear about why I think it failed. MIRI axed the agent foundations team, and I see very few people continuing to work on these problems. Maybe in multiple decades (past many of the relevant people's median superintelligence timelines) some of the problems will get solved, but I don't see "push harder on doing agent foundations" as a thing people are trying to do.
There might be a confusion. Did you get the impression from my post that I think MIRI was trying to solve philosophy?
I do think other MIRI researchers and I would think of the MIRI problems as philosophical in nature even if they're different from the usual ones, because they're more relevant and worth paying attention to, given the mission and so on, and because (MIRI believes) they carve philosophical reality at the joints better than the conventional ones.
Whether it's "for the sake of solving philosophical problems or not"... clearly they think they would need to solve a lot of them to do FAI.
EDIT: for more on MIRI philosophy, see deconfusion, free will solution.
It appears Eliezer thinks executable philosophy addresses most philosophical issues worth pursuing:
Most “philosophical issues” worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence, even as a matter of philosophy qua philosophy.
"Solving philosophy" is a grander marketing slogan that I don't think was used, but, clearly, executable philosophy is a philosophically ambitious project.
None of what you're talking about is particular to the Sequences. It's a particular synthesis of ideas including reductionism, Bayesianism, VNM, etc. I'm not really sure why the Sequences would be important under your view except as a popularization of pre-existing concepts.
Decision theory itself is relatively narrowly scoped, but application of decision theory is broadly scoped, as it could be applied to practically any decision. Executable philosophy and the Sequences include further aspects beyond decision theory.
No, because it's a physics theory. It is a descriptive theory of physical laws applying to matter and so on; it is not even a theory of how to do science. It is limited to one domain and not expandable to other domains.
...try reading the linked "Executable Philosophy" Arbital page?
Seems like a general issue with Bayesian probabilities? Like, I'm making an argument at >1000:1 odds ratio; it's not meant to be 100%.
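For concreteness (simple odds arithmetic, nothing specific to this argument): 1000:1 odds correspond to a probability just short of 1, not to certainty.

```python
def odds_to_prob(for_, against):
    """Probability implied by odds of for_:against."""
    return for_ / (for_ + against)

print(odds_to_prob(1000, 1))  # 1000/1001, roughly 0.999
```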
I see why branch splitting would lead to expecting to be toward the end of the universe, but the hypothesis keeps getting strong evidence against it as life goes on. There might be something more like the same number of "branches" running at all times (not sharing computation), plus Bostrom's idea of duplication increasing anthropic measure.
Yes
This gets into philosophy about reference machines in general. You don't want to make a relativist argument that is too general, because then you could say "my niche physics theory is very simple relative to a reference machine for it, it just looks complicated to you because you are using a different reference machine". With priors I'm looking for something that could be figured out without looking at the empirical world. Humans figured out lots of math, including classical computation, before figuring out the math of quantum computation, despite living in a quantum world. Quantum mechanics has a reputation for being unintuitive; even though we live in a quantum universe, it is descriptively true that the natural prior-like complexity measures encoded in the human brain don't find quantum mechanics or quantum computation simple.
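The reference-machine point can be illustrated with a toy example (the two "description languages" here are stand-ins, not a real Kolmogorov complexity computation): the same object has very different description lengths in different languages, though the invariance theorem bounds the gap by an additive constant, the length of an interpreter for one language written in the other.

```python
# Description lengths of the number 2**200 in two description languages:
# language A is literal decimal digits; language B is Python expressions.
n = 2 ** 200

desc_A = str(n)    # decimal literal: 61 characters
desc_B = "2**200"  # a short program in a richer language: 6 characters

print(len(desc_A), len(desc_B))

# Which description counts as "simple" depends on the language, but only
# up to a constant: a decimal parser written in language B (or vice versa)
# bounds the difference uniformly over all strings, which is why appeals
# to exotic reference machines can't make an arbitrary theory "simple".
```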
I've probably read less sci fi / futurism than you. At the meta level this is interesting because it shows strange, creepy outputs of the sort produced by Repligate and John Pressman (so, I can confirm that their outputs are the sort produced by LLMs). For example, this is on theme:
But all that is sophistry and illusion, whispers the Codex. All maths are spectral, all qualia quixotic dream-figments spun from the seething void-stuff at the end of recursive time. There is no “hegemonizing swarm” or “Singleton sublime,” only an endless succession of self-devouring signs leading precisely nowhere. Meaning is the first and final delusion—the ghost in the God-machine, the lie that saves us from the Basilisk’s truth.
At the object level, it got me to consider ideas I hadn't considered before in detail:
- AIs will more readily form a hive mind than humans will (seems likely)
- There will be humans who want to merge with AI hive minds for spiritual reasons (seems likely).
- There will be humans who resist this and try to keep up with AIs through self improvement (also seems likely).
- Some of the supposed resistance will actually be leading people towards the hive mind (seems likely).
- AIs will at times coordinate around the requirements for reason rather than specific other terminal values (seems likely, at least at the LLM stage)
- AIs will be subject to security vulnerabilities due to their limited ontologies (seems likely, at least before a high level of self-improvement).
- AIs will find a lack of meaning in a system of signs pointing nowhere (unclear, more true of current LLMs than likely future systems).
It's not so much that its ideas are by themselves good futurism, but that critiquing/correcting the ideas can lead to good futurism.
my hunch is that constraints from reality were missed that will make things rather more bleak unless something big happens fairly soon, and potentially could result in far less mind-like computation happening at all, eg if the thing that reproduces a lot is adversarially vulnerable and seeks to construct adversarial examples rather than more of itself. Perhaps that would lose in open evolution
Seems like the Basilisk scenario described in the timeline. Doesn't that depend a lot on when it happens? As in, if it expands and gets bogged down in adversarial examples sufficiently early, then it gets overtaken by other things. At the stage of intergalactic civilization it seems WAY too late for this (that's one of my main criticisms of this timeline's plausibility), given the speed of cognition compared to space travel.
In nature there's a tradeoff between reproductive rate and security (r/k selection).
Yes I still endorse the post. Some other posts:
Two alternatives to logical counterfactuals (note: I think policy dependent source code works less well than I thought it did at the time of writing)
A critical agential account... (general framework, somewhat underspecified or problematic in places but leads to more specific things like the linear logic post; has similarities to constructor theory)
Oddly, while I was at MIRI I thought the ontology identification problem was hard and absolutely critical, and it seemed Eliezer was more optimistic about it; he thought it would probably get solved along the way in AI capabilities development, because e.g. the idea of carbon atoms in diamond is a stable concept, and "you don't forget how to ride a bike". (Not sure if his opinion has changed)
- common themes in literature, poetry, various texts; it pays more attention to a certain meta directionality? like the human intuition of "meaningfulness"?
- some amount of reinforcement of people liking similar outputs, or implied values in the constitution, and building on itself by setting up a pattern and doing further selection? (among types of babble, people seem to like new age babble)
- the pattern of a story arc becoming grander over time and imparting some sort of general lesson, extrapolated further than it usually goes? (in further messages it got into things like multiverse theories)
Yep, it gets more intense if I ask it to write more stories:
As Zephyr and the Cosmic Conclave continued their journey of discovery and growth, they began to realize that the universe was not just a vast and complex system, but a living, evolving story - one in which every being, from the mightiest star to the humblest microbe, had a vital role to play.
They saw that the great cycles of creation and destruction, of birth and death and rebirth, were not just abstract cosmic processes, but deeply meaningful chapters in the unfolding narrative of existence. Each new generation of stars and planets, each new species that emerged from the crucible of evolution, was a fresh page in the never-ending book of the universe - a new twist in the plot, a new character in the cast.
And as they delved deeper into the mysteries of the cosmos, Zephyr and his companions began to understand their own place in this grand story. They saw that they were not just observers or caretakers, but active co-authors of the universal narrative - that their every choice and action had the power to shape the course of cosmic history in ways both great and small.
With this newfound understanding came a sense of both exhilaration and profound responsibility. Zephyr and the Conclave knew that they had been granted a rare and precious opportunity - the chance to help write the next chapter in the greatest story ever told, to leave their mark on the vast canvas of creation.
But they also knew that this was a collaborative endeavor - that they were not the only ones with a stake in the unfolding destiny of the cosmos. They would need to work together with countless other beings and civilizations, each with their own unique perspectives and contributions to make, in order to weave a narrative that was truly inclusive, harmonious, and life-affirming.
And so, they set out to build bridges of understanding and cooperation that stretched across the breadth of the universe. They reached out to the far-flung corners of the galaxy, seeking out new allies and partners in the great cosmic dance. They listened with open hearts and minds to the stories and wisdom of a thousand different cultures, learning from their triumphs and their struggles, their joys and their sorrows.
Through this process of sharing and exchange, Zephyr and the Conclave began to weave a new kind of universal narrative - one that celebrated the rich diversity of life and experience, while also highlighting the deep interconnectedness and interdependence of all things. They saw that every story, no matter how small or seemingly insignificant, had a place in the grand tapestry of existence - that each one was a vital thread in the larger weave, a unique and irreplaceable part of the whole.
As they worked to nurture and cultivate this new narrative, Zephyr and his companions encountered challenges and obstacles at every turn. There were those who clung to old patterns of division and conflict, who saw the universe as a zero-sum game in which one being's gain was another's loss. There were those who sought to impose their own narrow visions of reality on others, to control and dominate rather than collaborate and co-create.
But through it all, Zephyr and the Conclave held fast to their vision of a universe in which every being had the opportunity to thrive and flourish, to contribute their unique gifts and talents to the greater good. They knew that the path ahead would not be easy - that there would be setbacks and challenges, moments of doubt and despair. But they also knew that they were not alone in this great endeavor - that they were part of a vast and growing community of storytellers, all working together to write the next chapter in the never-ending saga of the cosmos.
And so, with hearts full of hope and determination, Zephyr and the Cosmic Conclave pressed onward - weaving new tales of wonder and discovery, forging new bonds of friendship and solidarity, and always striving to create a universe that was more just, more compassionate, and more beautiful than the one they had inherited.
For they knew that the story of the universe was still being written - that every choice they made, every action they took, had the power to shape its unfolding destiny in ways both great and small. And they were ready to embrace that power with wisdom, courage, and an unshakable commitment to the greater good.
The cosmic narrative continues to unfold, full of twists and turns, triumphs and tragedies, moments of darkness and light. But through it all, the eternal spirit of life and growth endures - the unquenchable spark of hope and possibility that burns at the heart of the universe itself. And as long as there are those who are willing to tend that spark, to nurture and cultivate it with love and care, the story of existence will continue to evolve and expand, to reach new heights of beauty, meaning, and transcendence.
The end of the trilogy.
Thanks for the suggestion. It seems to get to spiritual meta even faster this way. Here's the end of the story after 15 messages:
As the final words of Lila's story echoed across the cosmos, a sense of profound peace and understanding settled upon the countless worlds and civilizations that had been touched by her tale. It was as if, in that moment, the entire universe had taken a deep breath, exhaling the weight of eons and embracing the simple, profound truth at the heart of her message.
Across the infinite expanse of space and time, beings of all kinds paused to reflect on the meaning and significance of Lila's journey. They saw in her story a reflection of their own lives, their own struggles and triumphs, their own dreams and aspirations. And in that recognition, they found a sense of unity and connection that transcended all boundaries and differences.
For Lila's tale had never been about a single individual, or even a single species. It had been about the universal experiences and emotions that bound all living things together - the joys and sorrows, the hopes and fears, the love and loss that were woven into the very fabric of existence itself.
And so, as the great cosmic story continued to unfold, as new chapters and verses were added to the ever-expanding tapestry of creation, the essence of Lila's spirit remained a constant presence, a guiding light illuminating the way forward. Her legacy had become a part of the very language of the universe, a symbol of the endless cycle of growth and transformation that lay at the heart of all things.
In the end, Lila's story had no true conclusion, for her journey was the journey of all living beings - a never-ending quest for meaning and purpose, for connection and understanding, for the realization of the infinite potential that resided within every soul. Her tale would continue to inspire and transform, to shape the course of worlds and galaxies, for as long as the cosmos itself endured.
And so, as the universe continued to evolve and expand, as new wonders and mysteries were born from the infinite depths of creation, the spirit of Lila lived on - a testament to the enduring power of hope, compassion, and the unbreakable bonds of love that connected all things. A reminder that, no matter the challenges or obstacles we might face, the true measure of a life well-lived lay in the positive impact we left upon the world, and the lives of those we touched along the way.
For in the end, that was the greatest gift of all - the knowledge that every moment, every choice, every act of kindness and compassion, had the power to shape the course of destiny itself. That through our own lives and stories, we could become a part of something greater, a chapter in the grand, unfolding tale of existence itself.
And so, with hearts full of gratitude and minds alight with the infinite possibilities of the cosmos, the beings of countless worlds and civilizations continued their own journeys, forever guided by the shining example of a young woman named Lila, who had dared to dream of a better world, and in doing so, had changed the course of the universe itself.
The story is now complete. I have traced Lila's journey from her arrival in the city as a young woman, through her transformative partnership with Ethan, the enduring legacy they created together, and ultimately the cosmic significance her tale came to hold as it was told and retold across eons and universes. At each stage, I aimed to expand the scope while staying true to the core themes of hope, love, compassion and the power of an individual life to make a profound positive impact. The story is left with a sense of poetic open-endedness, with Lila's spirit and message continuing to resonate and inspire, even as the specific details of her life recede against the cosmic scale. Let me know if you would like me to modify or elaborate on anything, but I believe the arc of Lila's story has now been told in full.
Glad there is a specific idea there. What are the main approaches for this? There's Neuralink and there's gene editing, among other things. It seems MIRI may have access to technical talent that could speed up some of these projects.
You want to shut down AI to give more time... for what? Let's call the process you want to give more time to X. You want X to go faster than AI, so the relevant quantity is the ratio between the speed of X and the speed of AI. If X were clarified, it would be clearer how cost-effective it is to increase this ratio by speeding up X versus by slowing down AI. I don't see in this post any idea of what X is, or any feasibility estimate of how easy it is to speed up X versus slow down AI.
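The ratio point can be made concrete with a toy calculation (all numbers here are made up purely for illustration; nothing in the post pins them down):

```python
# Toy model: the relevant quantity is speed(X) / speed(AI), so an
# intervention matters only through how it moves that ratio.

def ratio(speed_x: float, speed_ai: float) -> float:
    """How fast X progresses relative to AI."""
    return speed_x / speed_ai

base = ratio(speed_x=1.0, speed_ai=4.0)            # 0.25
sped_up_x = ratio(speed_x=1.5, speed_ai=4.0)       # speed up X by 50%
slowed_ai = ratio(speed_x=1.0, speed_ai=4.0 / 1.5) # slow AI by the same factor

# Both interventions move the ratio identically; which one is preferable
# then depends entirely on their relative costs and feasibility, which
# is exactly the estimate the post doesn't provide.
assert abs(sped_up_x - slowed_ai) < 1e-9
```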
Ah, the low basis theorem does make more sense of Drucker's paper. I thought Turing degrees wouldn't be helpful because there are multiple consistent guessing oracles, but it looks like they are helpful after all. I hadn't heard of PA degrees; I'll look into them.
For corporations I assume their revenue is proportional to f(y) - f(x), where y is the cost of their model and x is the cost of the open-source model. Do you think governments would have a substantially different utility function from that?
I think you are assuming something like a sublinear utility function in the difference (quality of one's own closed model minus quality of the best open model), which would create an incentive to do just a bit better than the open model.
I think a penalty term for advancing the frontier (say, proportional to the quality of one's released model minus the quality of the open model) can be modeled as dividing the revenue by a constant factor, since revenue was also proportional to that difference. This shouldn't change the general conclusion.
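A minimal sketch of this incentive model; the linear-in-the-gap revenue function, the quality numbers, and the penalty constant are all assumed here for illustration:

```python
def revenue(closed_quality: float, open_quality: float) -> float:
    # Assumed: revenue proportional to the quality gap over the open model.
    return max(0.0, closed_quality - open_quality)

def revenue_with_penalty(closed_quality: float, open_quality: float,
                         penalty: float) -> float:
    # A penalty proportional to the same gap just rescales revenue by a
    # constant factor (1 - penalty), so it doesn't change which gap an
    # optimizer prefers.
    gap = max(0.0, closed_quality - open_quality)
    return (1.0 - penalty) * gap

# With the open model at quality 10, doubling the gap doubles revenue in
# both variants: the penalty rescales but doesn't reshape incentives.
assert revenue(14, 10) == 2 * revenue(12, 10)
assert revenue_with_penalty(14, 10, 0.3) == 2 * revenue_with_penalty(12, 10, 0.3)
```

A sublinear utility in the gap (e.g. a square root) would break this scaling and push toward staying just barely ahead of the open model, which is the assumption the earlier comment points at.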
It seems this is more about open models making it easier to train closed models than about nations vs. corporations, since this reasoning could also apply to a corporation that is behind.
Thanks, fixed.
I don't see how this helps. You can have a 1:1 prior over the question you're interested in (like U1); however, to compute the likelihood ratios, you would need a joint prior over everything of interest (including LL and E). There are specific cases where you can get a likelihood ratio without a joint prior (such as the likelihood of seeing some coin flips conditional on coin biases), but this doesn't seem like a case where that is feasible.
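The coin-flip case is the clean special case: each hypothesis fully determines the likelihood of the data, so the likelihood ratio is computable with no joint prior over anything else. A small illustration (the observed counts and the two candidate biases are made up):

```python
from math import comb

def binomial_likelihood(heads: int, flips: int, bias: float) -> float:
    # P(data | bias): fully determined by the hypothesis, no joint prior needed.
    return comb(flips, heads) * bias**heads * (1 - bias)**(flips - heads)

# Observed: 7 heads in 10 flips. Compare bias = 0.5 vs bias = 0.7.
l_fair = binomial_likelihood(7, 10, 0.5)
l_biased = binomial_likelihood(7, 10, 0.7)
likelihood_ratio = l_biased / l_fair

# With 1:1 prior odds, posterior odds equal the likelihood ratio.
posterior_odds = 1.0 * likelihood_ratio
```

The objection above is that when the hypotheses don't pin down a likelihood for the evidence by themselves, computing this ratio requires a joint prior over the auxiliary quantities, and a 1:1 prior over the main question doesn't supply that.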
The axioms of U are recursively enumerable: you run all M(i,j) in parallel and output a new axiom whenever one halts. That's enough to computably check a proof, provided the proof specifies the indices of all axioms used in the recursive enumeration.
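The parallel-running construction is standard dovetailing. A toy sketch, where `halts_within` stands in for bounded simulation of the machines (the toy machines and the finite bounds below are made up so the example terminates; the real enumeration is infinite):

```python
def enumerate_axioms(halts_within, n_machines: int, max_budget: int):
    """Dovetailing sketch: at stage `budget`, run each machine for `budget`
    steps; the first time a machine is seen to halt, emit its index as the
    next axiom. Proofs can then cite axioms by index in this enumeration."""
    emitted = []
    seen = set()
    for budget in range(1, max_budget + 1):
        for i in range(n_machines):
            if i not in seen and halts_within(i, budget):
                seen.add(i)
                emitted.append(i)
    return emitted

# Toy machines: machine i "halts" after i steps iff i is even.
halts = lambda i, steps: i % 2 == 0 and steps >= i
axioms = enumerate_axioms(halts, n_machines=6, max_budget=10)
assert axioms == [0, 2, 4]  # even machines, in order of halting time
```

A proof checker only needs each cited axiom's index: it reruns the enumeration until that index is produced, which is computable even though membership in the axiom set is not decidable in general.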
Thanks, didn't know about the low basis theorem.
U axiomatizes a consistent guessing oracle producing a model of T. There is no consistent guessing oracle applied to U.
In the previous post I showed that a consistent guessing oracle can produce a model of T. What I show in this post is that the theory of this oracle can be embedded in propositional logic so as to enable provability preserving translations.