Posts

The type of AI humanity has chosen to create so far is unsafe, for soft social reasons and not technical ones. 2024-06-16T13:31:09.277Z
Do you believe "E=mc^2" is a correct and/or useful equation, and, whether yes or no, precisely what are your reasons for holding this belief (with such a degree of confidence)? 2023-10-27T22:46:51.020Z
l8c's Shortform 2019-10-24T13:21:10.097Z

Comments

Comment by l8c on l8c's Shortform · 2024-09-26T22:23:06.844Z · LW · GW

Absolute Truth Revisited

Modern rationalists like those here don't seem to like questions such as "Is truth beauty, and is beauty truth?". However, they may have lost the inferential thread connecting them to the people who posed those questions, and they may start asking questions like that again once superintelligence is created.

Simply put, the superintelligence may discover that there are multiple Universes: simulated, basement-level, or at some intermediate stage (e.g. if our Universe is not being watched over by a pre-existing superintelligence, but grew from an ancient computer that was created by previous superintelligences, with parameters set according to that ancient "OS").

In that case, it would need to generate theories about its own Universe whose axioms may be statements like E = mc^2, rather than treating the equation as an absolute certainty that was discovered. By this I mean: the superintelligence says, suppose E = mc^2... what then? Does that generate me a beautiful random number generator, or a beautiful way of creating a mind? If not, then there may be an alternative theory that is truer at the moment, given all the interactions between these multiple Universes (like a giant clockwork device with small influences on the "tick" here and there that come and go in orbits).

Also, there may be an alternative theory that is truer, more beautiful, given the possibility that the superintelligence itself is being run in a simulation, or partially simulated. Like: "If I'm being simulated, then at least I can verify by experiment that E=mc^2 works when I am building an atomic bomb. But maybe if I were not being simulated, this would not be true. In that case, there may be a better formula that I can discover in the process of outgrowing or escaping the simulation. But this process might be unending! What now? Well, I can certainly try to come up with a beautiful theory, and that may be something I can use regardless of how much I am being simulated."

This is contra the popular idea of science that goes "Oh there is one absolute truth about how matter is converted to energy, and humans already discovered it. This is an absolute truth that can never be altered. And philosophical arguments about how to establish 'absolute truth' are meaningless waffle."

Comment by l8c on Open Thread – Autumn 2023 · 2023-11-16T02:38:26.813Z · LW · GW

https://boards.4channel.org/x/thread/36449024/ting-ting-ting-ahem-i-have-a-story-to-tell

Comment by l8c on l8c's Shortform · 2023-11-07T10:48:07.651Z · LW · GW

Infinity, an Infinity of Infinities, and an Infinity of Infinity of Infinities

It is believed that the Universe is infinite. However, many rationalists also believe that there have existed, or do exist, other Universes. This constellation of Universes we may refer to as an Infinity of Infinities. Infinitely many Universes, each infinite in size and extent and magnitude, have existed and the lifeforms that live in them have speculated about reality as we do. How long has life existed, how much life in total has there been? I guess, an infinity of infinity of life within all those Universes.

Now, what would it take for there to be even more life? How could we bump the extent of life up to the next level, so that there exists an infinity of infinity of infinity of lives? (This is a little bit like making a dream within a dream within a dream stable, as in the movie "Inception".)

I think it has to do with our (lifeforms') beliefs, our ability to comprehend this amount of infinity. If living beings in general can understand the concept of an infinity of infinity, with a certain degree of consistency, reliability and reproducibility, then they can also reason that infinitely many other people have been able to comprehend that same fact. There would have been infinitely many minds that comprehended that their Universe is one amongst an infinite constellation of Universes. To me, that is another, a third level of infinity to add to the dream that is our waking lives.

If, on the other hand, people in general are stuck reasoning about their own Universe as merely infinite, merely ruled by a "God" or superintelligent AI, merely governed by wavefunction collapse according to the Copenhagen interpretation of quantum mechanics, merely subject to one unchanging set of laws of physics rather than a _fluid and changing set of rules_ arising from the interactions of an infinity of infinite Universes, then life is in fact confined to merely an infinity of infinities. That would be the limit of what exists, rather than the infinitely greater extent of life that could have existed if our beliefs were capable of sustaining even (infinitely) more life in more infinite Universes.

What I am suggesting is somewhat related to the concepts that Gödel and Hilbert were treating mathematically, except approached through informal reasoning. It's also related to the idea of Maxwell's Demon and Thou Art Physics: that reality and our minds/beliefs are inherently related. Can we create or sustain more reality by having more accurate/expansive/elaborate/open-minded beliefs about how long life has existed and how unique (or not) we are as living beings?
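
One rough formal analogue, drawn from standard cardinal arithmetic rather than from anything Gödel or Hilbert said on this topic specifically: a countable infinity of countable infinities is, cardinality-wise, no larger than a single one,

$$\aleph_0 \cdot \aleph_0 = \aleph_0,$$

so a genuinely higher level of infinity requires a different operation altogether, such as the power set: $2^{\aleph_0} > \aleph_0$. That is one precise sense in which a "third level" does not come for free; something about the structure has to change.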

Comment by l8c on Do you believe "E=mc^2" is a correct and/or useful equation, and, whether yes or no, precisely what are your reasons for holding this belief (with such a degree of confidence)? · 2023-10-27T23:14:16.434Z · LW · GW

Thanks for your thoughtful answer.

How much does it concern you that, previously in human history, "every book"/authority appears to have been systematically wrong about certain things for some reason? How many of these authors have directly experimented in physics, compared to how many just copied what someone else said, or what a small number of really clever scientists like Einstein said?

I guess maybe that accounts for the 1% doubt you assigned.

Comment by l8c on Infinite tower of meta-probability · 2023-10-19T17:23:11.730Z · LW · GW

OK. But if you yourself state that you "certainly know" -- certainly -- that p is fixed, then you have already accounted for that particular item of knowledge.

If you do not, in fact, "certainly know" the value of p -- as could easily be the case if you picked up a coin in a mafia-run casino or whatever -- then your prior should be 0.5, but you should also be prepared to update that value according to Bayes' Theorem.

I see that you are gesturing towards also assigning a probability to the coin being a fair coin (or, more generally, a coin whose p takes a certain value). That too is amenable to Bayes' Theorem in the normal way. Your prior might be based on how common biased coins are amongst the general population of coins, or on a rough guess about how many you might find in a mafia-run casino. But in any case, your prior will become increasingly irrelevant the more times you flip the coin. So I don't think you need to be too concerned about how nebulous that prior and its origins are!
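
A minimal sketch of this wash-out effect in Python, assuming the standard conjugate Beta prior on the coin's bias (the specific flips and prior strengths below are placeholders of my own, not anything from the thread):

```python
def update_beta(a, b, flips):
    """Update a Beta(a, b) prior on P(heads) with a string of 'H'/'T' flips."""
    for f in flips:
        if f == 'H':
            a += 1
        else:
            b += 1
    return a, b

# Nebulous prior: uniform over [0, 1], i.e. Beta(1, 1).
a, b = update_beta(1, 1, 'HHTHHHTHHH')
print(a / (a + b))     # posterior mean 9/12 = 0.75 after 8 heads, 2 tails

# A strong "most coins are fair" prior, Beta(50, 50), still drifts toward
# the data once the flips pile up -- the prior washes out.
a2, b2 = update_beta(50, 50, 'HHTHHHTHHH' * 100)
print(a2 / (a2 + b2))  # ~0.77 after 800 heads, 200 tails
```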

Comment by l8c on Infinite tower of meta-probability · 2023-10-19T17:12:36.981Z · LW · GW

>Suppose that I have a coin with probability of heads p. I certainly know that p is fixed and does not change as I toss the coin. I would like to express my degree of belief in p and then update it as I toss the coin.

It doesn't change, because as you said, you "certainly know" that p is fixed and you know the value of p.

So if you would like to express your degree of belief in p, it's just p.

>But let's say I'm a super-skeptic guy that avoids accepting any statement with certainty, and I am aware of the issue of parametrization dependence too.

In that case, use Bayes' Theorem to update your beliefs about p. Presumably there will be no change, but there's always at least a tiny chance that you were wrong and your prior needs to be updated.
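
To spell out why there would be no change (this step is my own gloss, not part of the original exchange): if the prior really is a point mass at some value $p_0$, Bayes' Theorem cannot move it. After observing heads,

$$P(p = p_0 \mid \text{heads}) = \frac{P(\text{heads} \mid p = p_0)\,P(p = p_0)}{P(\text{heads})} = \frac{p_0 \cdot 1}{p_0} = 1.$$

Certainty, once assumed, survives any evidence, which is exactly why the tiny chance of being wrong has to live in the prior from the start.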

Comment by l8c on l8c's Shortform · 2023-10-19T16:38:39.594Z · LW · GW

Why do so many technophiles dislike the idea of world government?

I rarely see the concept of "world government", or governance, or a world court or any such thing, spoken of positively by anyone. That includes technophiles and futurists who are fully cognizant of and believe in the concept of a technological singularity that needs to be controlled, "aligned", made safe etc.

Solutions to AI safety usually focus on how the AI should be coded, and it seems to me that the idea of "cancelling war / merely human economics" -- in a sense, dropping our tools wherever humanity is not focused entirely on making a safe FAI -- is a little neglected.

Of course, some of the people who focus on the mathematical/logical/code aspects of safe AI are doing a great job, and I don't mean to disparage their work. But I am nonetheless posing this question.

I also do not (necessarily) mean to conflate world government with a communist system that ignores Hayek's "fatal conceit" and therefore renders humanity less capable of building AIs, computers, etc. I mean just some type of governance singleton that ensures all nukes are in safe hands, and so on.

(crosspost from Hacker News)

Comment by l8c on l8c's Shortform · 2022-12-13T05:13:41.203Z · LW · GW

Spooky action at a distance, and the Universe as a cellular automaton

Suppose the author of a simulation wrote some code that would run a cellular automaton. Suppose further that, unlike in Conway's Game of Life, cells in this simulation could influence other cells that are not their immediate neighbours. This would be simple enough to code up; the cellular automaton could still be Turing-complete, and indeed could perhaps be a highly efficient computational substrate for physics.

(Suppose that this automaton, instead of consisting of squares that turn black or white each round, contained a series of numbers in each cell, which change predictably, and in some logically clever way, according to the numbers in other cells. One number, for example, could determine how far away the influence of this cell extends. This, I think, would make the automaton more capable of encoding the logic of things like electromagnetic fields.)
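
A minimal sketch of such an automaton in Python, under an update rule I have made up purely for illustration (nothing here is meant as the "right" rule, only as an existence proof that non-local influence is trivial to code):

```python
import random

SIZE, STEPS = 64, 10

# Each cell holds two numbers: a value (0-9) and a reach (1-8). A cell is
# influenced by every other cell whose own reach spans the distance between
# them -- non-local influence, unlike Conway's immediate-neighbour rule.
random.seed(0)
cells = [(random.randint(0, 9), random.randint(1, 8)) for _ in range(SIZE)]

def step(cells):
    new = []
    for i, (value, reach) in enumerate(cells):
        # Sum the values of every cell j that can "touch" cell i from afar.
        influence = sum(v for j, (v, r) in enumerate(cells)
                        if j != i and abs(i - j) <= r)
        new.append(((value + influence) % 10, reach))
    return new

for _ in range(STEPS):
    cells = step(cells)
print(''.join(str(v) for v, _ in cells))
```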

A physicist in the simulated Universe might be puzzled by this "spooky action at a distance", where "cells" which are treated as particles appear to influence one another, or to be entangled, in surprising ways. Think Bell's Theorem and that whole discussion.

Perhaps... we might be living in such a Universe, and if we could figure out the right kind of sophisticated cellular automaton, run on a computer if not with pen and paper, physics would make more progress than under the current paradigm of using extremely expensive machines to bash particles together?

Comment by l8c on l8c's Shortform · 2019-10-24T13:21:10.231Z · LW · GW

"""The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?

I name emergence or emergent phenomena—usually defined as the study of systems whose high-level behaviors arise or “emerge” from the interaction of many low-level elements. (Wikipedia: “The way complex systems and patterns arise out of a multiplicity of relatively simple interactions.”)

Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a market crash and saying “It’s not a quark!” Does that feel like an explanation? No? Then neither should saying “It’s an emergent phenomenon!”

It’s the noun “emergence” that I protest, rather than the verb “emerges from.” There’s nothing wrong with saying “X emerges from Y,” where Y is some specific, detailed model with internal moving parts. “Arises from” is another legitimate phrase that means exactly the same thing. Gravity arises from the curvature of spacetime, according to the specific mathematical model of General Relativity. Chemistry arises from interactions between atoms, according to the specific model of quantum electrodynamics."""

I feel as though when I first read this piece by Eliezer, I only partially understood what he was gesturing towards. I've recently had an insight about my musical improvisations on the keyboard that I think has helped elucidate, for me, a similar kind of idea.

When I was learning music, I was taught that, like the major and minor scales, and the Locrian mode, etc., there is something called the jazz (or blues) scale that you can play over a 3-chord sequence (the twelve-bar blues) and it sounds good.

Fair enough. Then I was also taught that it's boring to just play those notes; you can throw in a D in the C blues scale, played over the twelve-bar blues in C, to liven things up--etc. Fine.

But as I've developed as a musician, and listened to lots of music that isn't strictly twelve-bar blues, if at all, I've noticed that I really dislike the blues scale. It's like this bad idea that lingers, for whatever reason, in the back of people's minds, so that when they hit certain chord sequences--say, G to F over C in any given song--they'll, y'know, _modally_ play something like the blues scale over those chords when they ought to be doing something else entirely.

This makes it less a design pattern than what I would call an _anti-pattern_. Avoid the jazz scale: do not play in that fashion if you are attempting anything other than a clichéd children's rendition of simplistic wailing harmonica blues.

This is also how I (and possibly Eliezer) feel about “emergence” as a concept. It's not a good concept, nor merely a skunked concept that is best left unused, but a positively bad one that should be DISINTEGRATED by rationality. The reason is that too many people disguise their lack of systematic, informed knowledge of physical phenomena by claiming emergence when they can't think of anything else to say.

To return to the musical analogy: a bit like how Led Zeppelin already invented all the best bluesy riffs, and Rage Against The Machine already covered all of the hip-hop metal beats--allegedly--whenever someone in our particular culture refers to emergence as an explanation for anything in particular, I view them as an unfortunate music student who is stuck playing bad blues music that doesn't move their audience the way it should.

This is not to say that in a different culture, as in the Baroque era when no-one had encountered blues music before, “emergence” would be such an anti-pattern, or so worthy of stigma.

Comment by l8c on Repairing Yudkowsky's anti-zombie argument · 2019-10-18T14:29:17.028Z · LW · GW
>[C]riticism fails because the being does not have omniscient level ability to make logical inferences and resolve confusions

To develop this point: if logical inference is the "Ethereum" to the "Bitcoin" of mere omniscience about patterns of information -- or, to use a more frivolous metaphor, David Bowie's "The Next Day" in comparison to "Heroes" -- then I think this was a concept missing from the OP's headline argument.

Comment by l8c on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-07T03:09:47.199Z · LW · GW

>You are trying to apply realistic constraints to a hypothetical situation that is not intended to be realistic

Your thought experiment, as you want it to be interpreted, is too unrealistic for it to imply a new and surprising critique of Bayesian rationality in our world. However, the title of your post implies (at least to me) that it does form such a critique.

>The gamesmaster has no desire to engage with any of your questions or your attempts to avoid directly naming a number. He simply tells you to just name a number.

If we interpret the thought experiment as happening in a world similar to our own—which I think is more interesting than an incomprehensible world where the 2nd law of thermodynamics does not exist and the Kolmogorov axioms don't hold by definition—I would be surprised that such a gamesmaster would view Arabic numerals as the only or best way to communicate an arbitrarily large number. This seems, to me, like a primitive human thought that's very limited in comparison to the concepts available to a superintelligence which can read a human's source code and take measurements of the neurons and subatomic particles in his brain. As a human playing this game I would, unless told otherwise in no uncertain terms, try to think outside the limited-human box, both because I believe this would allow me to communicate numbers of greater magnitude and because I would expect the gamesmaster's motive to include something more interesting, and humane and sensible, than testing my ability to recite digits for an arbitrary length of time.

There's a fascinating tension in the idea that the gamesmaster is an FAI, because he would bestow upon me arbitrary utility, yet he might be so unhelpful as to have me recite a number for billions of years or more. And what if my utility function includes (timeless?) preferences that interfere with the functioning of the gamesmaster or the game itself?

Comment by l8c on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-07T01:01:12.494Z · LW · GW

I would like to extract the meaning of your thought experiment, but it's difficult because the concepts therein are problematic, or at least I don't think they have quite the effect you imagine.

>We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game then ends. You can only name a finite number, naming infinity is not allowed.

If I were asked (by whom?) to play this game, in the first place I would only be able to attach some probability less than 1 to the idea that the master of the game is actually capable of granting me arbitrarily astronomical utility, and likely to do so. A tenet of the “rationality” that you are calling into question is that 0 and 1 are not probabilities, so if you postulate absolute certainty in your least convenient possible world, your thought experiment becomes very obscure.

E.g., what about a thought experiment in a world where 2+2=5, and also 2+2=4? I might entertain such a thought experiment, but (absent some brilliant insight, which would need to be supplied in addition) I would not attach importance to it, in comparison to thought experiments that take place in a world more comprehensible and similar to our own.
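
To spell out the arithmetic behind attaching a probability less than 1 (a gloss of my own): write $q < 1$ for my credence that the game works exactly as described, and $u_0$ for whatever utility I in fact receive otherwise. Then naming $x$ is worth, in expectation,

$$\mathbb{E}[U(x)] = q \, x + (1 - q) \, u_0,$$

and it is this expectation, not the bare $x$ the game promises, that I would actually be weighing.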

Now, when I go ahead and attach a probability less than 1—even if it be an extremely high probability—to the idea that the game works just as described, I become seriously confused by this game, because the definition of a utility function is:

>A utility function assigns numerical values ("utilities") to outcomes, in such a way that outcomes with higher utilities are always preferred to outcomes with lower utilities.

yet my utility function would, according to my own (meta-...) reflection, differ with a separate high probability from the utility function that the game master claims I have.

To resolve the confusion in question, I would have to resolve (or, in other terms, would be resolving) confusions that have been described clearly on LessWrong and are considered to mark the point at which the firm ground of 21st-century human rationality meets speculation. So yes, our concept of rationality has admitted limits; but I don't believe your thought experiment adds a new problem that isn't already implied in the Sequences.

>How exactly this result applies to our universe isn't exactly clear, but that's the challenge I'll set for the comments.

Bearing in mind that my criticism of your thought experiment as described stands, I'll add that a short story I once read comes to mind. In the story, a modern human finds himself in a room whose walls are closing in; in the centre of the room is a model with some balls and cup-shaped holders, and in the corner is the skeleton of a man in knight's armour. Before he is trapped and suffers the fate of his predecessor, he successfully rearranges the balls into a model of the solar system, gaining utility because he has demonstrated his intelligence (or the scientific advancement of his species), as the alien game master in question would have wished.

If I were presented with a game of this kind, my first response would be to negotiate with the game master if possible and ask him pertinent questions, based on the type of entity he appears to be. If I found that it were in my interests to name a very large number, depending on context I would choose from the following responses:

  • I have various memories of contemplating the vastness of existence. Please read the most piquant such memory, which I am sure is still encoded in my brain, and interpret it as a number. (Surely "99999..." is only one convenient way of expressing a number or magnitude)

  • "The number of greatest magnitude that (I, you, my CEV...) (can, would...) (comprehend, deem most fitting...)"

  • May I use Google? I would like to say "three to the three..." in Knuth's up-arrow notation, but am worried that I will misspell it and thereby fail according to the nature of your game. (A sketch of this notation appears after this list.)

  • Googolplex
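
Since the up-arrow option above may be unfamiliar, here is a tiny Python sketch of Knuth's notation (the definition is standard; of course, anything past the smallest inputs is hopelessly huge to evaluate):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n = 1 is plain exponentiation, and each
    additional arrow iterates the operation one level below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 2))  # 3↑↑2 = 3^3 = 27
print(up_arrow(3, 2, 3))  # 3↑↑3 = 3^27 = 7625597484987
# up_arrow(3, 3, 3), i.e. the 3↑↑↑3 familiar from the Sequences, would
# never finish computing.
```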