Posts
Comments
A failure of practical CF can be of two kinds:
- We fail to create a digital copy of a person which have the same behavior with 99.9 fidelity.
Copy is possible, but it will not have phenomenal consciousness or, at least, it will be non-human or non-mine phenomenal consciousness, e.g., it will have different non-human qualia.
What is your opinion about (1) – the possibility of creating a copy?
With 50T tokens repeated 5 times, and a 60 tokens/parameter[3] estimate for a compute optimal dense transformer,
Does it mean that the optimal size of the model will be around 4.17Tb?
Less melatonin production during night makes it easy to get up?
One interesting observation: If I have two variant of future life – go to live in Miami or in SF, – both will be me from my point of view now. But from the view of Miami-me, the one who is in SF will be not me.
There is a similar idea with an opposite conclusion – that more "complex" agents are more probable here https://arxiv.org/abs/1705.03078
One way of not being suicide is not live alone. Stay with 4 friends.
I will lower the possible incentive of the killers by publishing all I know - and make it in such legal way that it can be used in court even if I am dead (affidavit?)
Whist-blowers commit suicide rather often. First, they are already stressed by the situation. Second, they may want to attract attention to their problem if they feel being ignored or even punish the accused organization.
OpenAI whistleblower found dead in San Francisco apartment.
Suchir Balaji, 26, claimed the company broke copyright law.
Thanks.
The text looks like AI-generated without mentioning it.
The main idea is similar to the one discussed in "Permutation city" by Igen, but this didn't mentioned.
It also fails to mention Almond's idea of universal resurrection machine based on quantum randomness generator.
I observed an effect of "chatification": my LLM-powered mind model tell me stories about my life and now I am not sure what is my original style of telling such stories.
If not AGI, it will fail without enough humans. If AGI, it is just an example of misalignment.
When green and red qualia are exchanged, all functions that point to red now point to green, so no additional update is needed. I say "green" when I see RED and I say "I like green" when I see RED (here capital letters are used to denote qualia).
If we use a system of equations as an example, when I replace Y with Z in all equations while X remains X, it will still be functionally the same system of equations.
If you assume sentience cut-off as a problem, it boils down to the Doomsday argument: why are we so early in the history of humanity? Maybe our civilization becomes non-sentient after the 21st century, either because of extinction or non-sentient AI takeover.
If we agree with the Doomsday argument here, we should agree that most AI-civilizations are non-sentient. And as most Grabby Aliens are AI-civilizations, they are non-sentient.
TLDR: If we apply anthropics to the location of sentience in time, we should assume that Grabby Aliens are non-sentient, and thus the Grabby Alien argument is not affected by the earliness of our sentience.
Need to be proved as x-risk. For example, if population fails below 100 people, then regulation fails first.
It is still a functional property of qualia - whether they will be more beautiful or not.
In my view, qualia are internal variables which do not affect the output. For example, x² + x + 1 = 0 and y² + y + 1 = 0 are the same equation but with different variables. Moreover, these variables originate not from the mathematical world, but from the Greek alphabet. So, by studying the types of variables used, we can learn something about the Greeks.
You still use function aspect of qualia here – will red be more beautiful than blue.
In my view, qualia are internal variables which are not affecting the result of computations. For example, equation x^2+x+1=0 is the same as y^2+y+1=0. They use x or y as internal variables
The problem of "humans hostile to humans" has two heavy tails: nuclear war and biological terrorism, which could kill all humans. A similar problem is the main AI risk: AI killing everyone for paperclips.
The central (and not often discussed) claim of AI safety is that the second situation is much more likely: it is more probable that AI will kill all humans than that humans will kill all humans. For example, by advocating for pausing AI development, we assume that the risks of nuclear war causing extinction are less than AI extinction risks.
If AI is used to kill humans as just one more weapon, it doesn't change anything stated above until AI evolves into an existential weapon (like a billion-drone swarm).
We can suggest a Weak Zombie Argument: It is logically possible to have a universe where all qualia of red and green are inverted in the minds of its inhabitants, while all physical things remain the same. This argument supports epiphenomenalism as well as the previous zombie argument but cannot be as easily disproved.
This is because it breaks down the idea of qualia into two parts: the functional aspect and the qualitative aspect. Functionally, all types of "red" are the same and are used to represent red color in the mind.
Zombies are not possible because something is needed to represent red in their minds. However, the most interesting qualitative aspect of that "something" is still arbitrary and doesn't have any causal effects.
Returning here after 3 years after reading about ghost drones flap.
One thing that occurred to me is that evolutionary dynamic in vast unbounded spaces is different from the one in confined spaces. When toads were released in Australia, they were selected for longer legs and quicker jumping ability. These long-legged toads reached farther parts of Australia.
Another direction of evolution of 'space animals' involves stability over very long time and mimicry in the case of "dark forrest".
We can ask an LLM to simulate a persona who is not opposed to being simulated and is known for being helpful to strangers. It should be a persona with a large internet footprint and with interests in AI safety, consciousness, and simulation, while also being friendly and not selfish. It could be Gwern, Bostrom, or Hanson.
If we deny practical computational functionalism (CF), we need to pay a theoretical cost:
1. One such possible cost is that we have to assume that the secret of consciousness lies in some 'exotic transistors,' meaning that consciousness depends not on the global properties of the brain, but on small local properties of neurons or their elements (microtubules, neurotransmitter concentrations, etc.).
1a. Such exotic transistors are also internally unobservable. This makes them similar to the idea of soul, as criticized by Locke. He argued that change or replacement of the soul can't be observed. Thus, Locke's argument against the soul is similar to the fading qualia argument.
1b. Such exotic transistors should be inside the smallest animals and even bacteria. This paves the way to panpsychism, but strong panpsychism implies that computers are conscious because everything is conscious. (There are theories that a single electron is the carrier of consciousness – see Argonov).
There are theories which suggest something like a "global quantum field" or "quantum computer of consciousness" and thus partially escape the curse of exotic transistors. The assume global physical property which is created by many small exotic transistors.
2 and 3. If we deny exotic transistors, we remain either with exotic computations or soul.
"Soul" here includes non-physicalist world models, e.g., qualia-only world, which is a form of solipsism or requires the existence of God who produces souls and installs them in minds (and can install them in computers).
Exotic computations can be either extremely complex or require very special computational operations (strange loop by Hofstadter).
Thanks, now I better understand your argument.
However, we can expect that any civilization is sentient only for a short time in its development, analogous to the 19th-21st centuries. After that, it becomes controlled by non-sentient AI. Thus, it's not surprising that aliens are not sentient during their grabby stage.
But one may argue that even a grabby alien civilization has to pass through a period when it is sentient.
For that, Hanson's argument may suggest that:
a) All the progenitors of future grabby aliens already exist now (maybe we will become grabby)
b) Future grabby aliens destroy any possible civilization before it reaches the sentient stage in the remote future.
Thus, the only existing sentient civilizations are those that exist in the early stage of the universe.
But I don't think the Grabby/Loud Aliens argument actually explains my, Lorec's, earliness in an anthropic sense, given the assumption that future aliens will also be populous and sentient.
There is no assumption that grabby aliens will be sentient in Hanson's model. They only prevent other sentient civilizations from appearing.
We have enough AI to cause billion deaths in the next decade via mass production of AI-drones, robotic armies and AI-empowered strategic planners. No new capabilities are needed.
We should also pay attention to the new unknown respiratory diseases in Congo which killed 131 person last month.
I made my sideload (a model of my personality based on a long prompt) and it outputs two streams - thoughts and speech. Sometime in thought stream it thinks "I will not speak about this", which may - or may not?? - be regarded as scheming.
Here's the text with improved grammar:
I think there is one more level at which natural abstraction can occur: the level just "beneath" consciousness.
For example, we can create an LLM that almost perfectly matches my internal voice dialogue's inputs and outputs. For me – internally – there would be no difference if thoughts appearing in my mind were generated by such an LLM, rather than by real biological neurons or even cortical columns. The same applies to the visual cortex and other brain regions.
Such an LLM for thoughts would be no larger than GPT-4 (as I haven't had that many new ideas). In most cases, I can't feel changes in individual neurons and synapses, but only the high-level output of entire brain regions.
I think we can achieve 99 percent behavioral and internal thought mimicry with this approach, but a question arises: what about qualia? However, this question isn't any easier to answer if we choose a much lower level of abstraction.
If we learn that generating qualia requires performing some special mathematical operation F(Observations), we can add this operation to the thought-LLM's outputs. If we have no idea what F(Observations) is, going to a deeper level of abstraction won't reassure us that we've gone deep enough to capture F(O).
Similar ideas here: https://medium.com/@bablulawrence/cognitive-architectures-and-llm-applications-83d6ba1c46cd
One problem of ischemia is that cryoprotectant will not reach all parts of the brain. While cryoprotectant is pumped through existing blood vessels, some brain regions will decay, and one cannot know in advance which ones will be affected.
The solution is known: slice the brain into thin sections and place each section in cryoprotectant or chemical fixative. In this case, the preservation chemicals will reach all parts of the brain, and any damage from slicing is predictable.
Interestingly, Lenin's brain was preserved this way in 1924. It appears this was the best method available, and we haven't advanced much since then.
It is not clear for me why you call
an empirical hypothesis with a very low prior
If MWI is true, there will be timelines where I survive any risk. This claim is factual and equivalent to MWI, and the only thing that prevents me from regarding it as immortality are questions related to decision theory. If MWI is true, QI has high a priori probability and low associated complexity.
The Fedora case has high complexity and no direct connection to MWI, hence a low a priori probability.
Now for the interesting part: QI becomes distinct from the Fedora case only when the chances are 1 in a trillion.
First example:
When 1000 people play Russian roulette and one survives (10 rounds at 0.5), they might think it's because of QI. (This probability is equivalent to surviving to 100 years old according to the Gompertz law.)
When 1000 people play Quantum Fedora (10 rounds at 0.5), one doesn't get a Fedora, and they think it's because they have a special anti-Fedora survival capability. In this case, it's obvious they're wrong, and I think this first example is what you're pointing to.
(I would note that even in this case, one has to update more for QI than for Fedora. In the Fedora case, there will be, say, 1023 copies of me with Fedora after 10 flips of a quantum coin versus 1 copy without Fedora. Thus, I am very unlikely to find myself without a Fedora. This boils down to difficult questions about SSA and SIA and observer selection. Or, in other words: can I treat myself as a random sample, or should I take the fact that I exist without a Fedora as axiomatic? This question arises often in the Doomsday argument, where I treat myself as a random sample despite knowing my date of birth.)
However, the situation is different if one person plays Russian roulette 30 times. In that case, externalization of the experiment becomes impossible: only 8 billion people live on Earth, and there are no known aliens. (This probability is equivalent to surviving to 140 years old according to the Gompertz law.) In this case, even if the entire Earth's population played Russian roulette, there would be only a 1 percent chance of survival, and the fact of surviving would be surprising. But if QI is true, it isn't surprising. That is, it's not surprising to survive to 100 years old, but surviving to 140 is.
Now if I play Fedora roulette 30 times and still have no Fedora, this can be true only in MWI. So if there's no Fedora after 30 rounds, I get evidence that MWI is true and thus QI is also true. But I am extremely unlikely to find myself in such a situation.
Did I understand you right that you argue against path-dependent identity here?
"'I' will experience being copy A (as opposed to B or C)" are not pointing to an actual fact about the world. Thus assigning a probability number to such a statement is a mental convenience that should not be taken seriously
Copies might be the same after copying but the room numbers in which they appear are different, and thus they can make bets on room numbers
I think that what I call 'objective probability" represent physical property of the coin before the toss, and also that before the toss I can't get any evidence about the result the toss. In MWI it would be mean split of timelines. While it is numerically equal to credence about a concrete toss result, there is a difference and SB can be used to illustrate it.
'Observation selection effect' is another name for 'conditional probability' - the probability of an event X, given that I observe it at all or observe it several times.
By the way, there's an interesting observation: my probability estimate before a coin toss is an objective probability that describes the property of the coin. However, after the coin toss, it becomes my credence that this specific toss landed Heads. We expect these probabilities to coincide.
If I get partial information about the result of the toss (maybe I heard a sound that is more likely to occur during Heads), I can update my credence about the result of that given toss. The obvious question is: can Sleeping Beauty update her credence before learning that it is Monday?
Maybe that's why people meditate – they enter a simple state of mind that emerges everywhere.
It will work only if I care for my observations, something like EDT.
I'm inclined to bite this bullet too, though it feels somewhat strange. Weird implication: you can increase the amount of reality-fluid assigned to you by giving yourself amnesia.
I explored a similar line of reasoning here: Magic by forgetting
I think that yes, the sameness of humans as agents is generated by the process of self-identification in which a human being is identifies herself through a short string of information "Name, age, sex, profession + few more kilobytes". Evidence for this is the success of improv theatre, where people quickly adopt completely new roles through one-line instructions.
If yes, then we should expect ourselves to be agents that exist in a universe that abstracts well, because "high-level agents" embedded in such universes are "supported" by a larger equivalence class of universes (since they draw on reality fluid from an entire pool of "low-level" agents).
I think that your conclusion is valid.
An interesting thing is that Laplace’s rule gives almost the same result as Gott’s equation from Doomsday argument, which have much simpler derivation.
I could suggest a similar experiment which also illustrates difference between probabilities from different points of view and can be replicated without God and incubators. I toss a coin and if heads says `hello' to a random person from a large group. If tails, I say this to two people. From my point of view chances to observe the coin is heads are 0.5. For the outside people, chances that I said Hello|Heads are only 1/3.
It is an observation selection effect (a better therm than 'anthropics'). Outside people can observe Tails twice and that is why they get different estimate.
The first idea seems similar to Big World immortality: the concept that due to chaotic inflation, many copies of me exist somewhere, and some of them will not die in any situation. While the copies are the same, the worlds around them could be different, which opens other options for survival: in some worlds, aliens might exist who could save me. The simulation argument can also act as such an anthropic angel, as there will be simulations where I survive. So there can be different observation selection effects that ensure my survival, and it may be difficult to observationally distinguish between them.
Therefore, survival itself is not evidence of MWI, Big World, or simulation. Is that your point?
Regarding the car engineers situation, It is less clear. I know that cars are designed safe, so there is no surprise. Are you suggesting they are anthropic because we are more likely to be driving later in the car evolution timeline when cars are safer?
Past LHC failures are just civilization-level QI. (BTW, there are real things like this related to the history of earth atmosphere, in which CO2 content was anti-correlated with Sun's luminosity which result in stable temperatures). But it is not clear to me, what are other anthropic effects, which are not QI – what do you mean here? Can you provide one more example?
I meant that by creating and openly putting my copies I increase the number of my copies, and that diluting is not just an ethical judgement, but the real effect, similar to self-sampling assumption, in which I am less likely to be a copy-in-pain, if there are many my happy copies. Moreover, this effect may be so strong that my copies will "jump" from unhappy world to happy one. I explored it here.
Thanks. It is a good point that. I should add this.
consent to sideloading should be conditional instead of general
Unfortunately, as a person in pain will not have time to remember a lot details about their past, a very short list of facts can be enough to recreate "me in pain". May be less than 100.
Instead of deleting, I suggest diluting: generate many fake facts about yourself and inject them into the forum. Thus chances to get recreate you will be slim.
Anyway, I bet on idea that it is better to have orders of magnitude more happy copies, than fight to prevent one in pain. Here I dilute not information, but pain with happiness.
I understood your argument as following; anything which is an argument for QI, can also be argument for alien saving us. Thus, nothing is evidence for QI.
However, apriory probabilities of QI and alien are not mutually independent. QI increases chances of alien with every round. We can't observe QI directly. But we will observe the alien and this is what is predicted by QI.
- We care still alive
- No GPT-5 yet
- Rumors of hitting the wall
Thanks. By the way, the "chatification" of the mind is a real problem. It's an example of reverse alignment: humans are more alignable than AI (we are gullible), so during interactions with AI, human goals will drift more quickly than AI goals. In the end, we get perfect alignment: humans will want paperclips.
For the outside view: Imagine that an outside observer uses a fair coin to observe one of two rooms (assuming merging in the red room has happened). They will observe either a red room or a green room, with a copy in each. However, the observer who was copied has different chances of observing the green and red rooms. Even if the outside observer has access to the entire current state of the world (but not the character of mixing of the paths in the past), they can't determine the copied observer's subjective chances. This implies that subjective unmeasurable probabilities are real.
Even without merging, an outside observer will observe three rooms with equal 1/3 probability for each, while an insider will observe room 1 with 1/2 probability. In cases of multiple sequential copying events, the subjective probability for the last copy becomes extremely small, making the difference between outside and inside perspectives significant.
When I spoke about the similarity with the Sleeping Beauty problem, I meant its typical interpretation. It's an important contribution to recognize that Monday-tails and Tuesday-tails are not independent events.
However, I have an impression that this may result in a paradoxical two-thirder solution: In it, Sleeping Beauty updates only once – recognizing that there are two more chances to be in tails. But she doesn't update again upon knowing it's Monday, as Monday-tails and Tuesday-tails are the same event. In that case, despite knowing it's Monday, she maintains a 2/3 credence that she's in the tails world. This is technically equivalent to the 'future anthropic shadow' or anti-doomsday argument – the belief that one is now in the world with the longest possible survival.
Thanks for your thoughtful answer.
To achieve magic, we need the ability to merge minds, which can be easily done with programs and doesn't require anything quantum. If we merge 21 and 1, both will be in the same red room after awakening. If awakening in the red room means getting 100 USD, and the green room means losing it, then the machine will be profitable from the subjective point of view of the mind which enters it. Or we can just turn off 21 without awakening, in which case we will get 1/3 and 2/3 chances for green and red.
The interesting question here is whether this can be replicated at the quantum level (we know there is a way to get quantum magic in MWI, and it is quantum suicide with money prizes, but I am interested in a more subtle probability shift where all variants remain). If yes, such ability may naturally evolve via quantum Darwinism because it would give an enormous fitness advantage – I will write a separate post about this.
Now the next interesting thing: If I look at the experiment from outside, I will give all three variants 1/3, but from inside it will be 1/4, 1/4, and 1/2. The probability distribution is exactly the same as in Sleeping Beauty, and likely both experiments are isomorphic. In the SB experiment, there are two different ways of "copying": first is the coin and second is awakenings with amnesia, which complicates things.
Identity is indeed confusing. Interestingly, in the art world, path-based identity is used to define identity, that is, the provenance of artworks = history of ownership. Blockchain is also an example of path-based identity. Also, in path-based identity, the Ship of Theseus remains the same.
There is a strange correlation between paradox of young Sun (it had lower luminosity) and stable Earth temperature which was provided by higher greenhouse effect. As sun goes brighter, CO2 declined. It was even analyses as evidence of anthropic effects.
In his article "The Anthropic Principle in Cosmology and Geology" [Shcherbitsky, 1999], A. S. Shcherbakov thoroughly examines the anthropic principle's effect using the historical dynamics of Earth's atmosphere as an example. He writes: "It is known that geological evolution proceeds within an oscillatory regime. Its extreme points correspond to two states, known as the 'hot planet' and 'white planet'... The 'hot planet' situation occurs when large volumes of gaseous components, primarily carbon dioxide, are released from Earth's mantle...
As calculations show, the gradual evaporation of ocean water just 10 meters deep can create such greenhouse conditions that water begins to boil. This process continues without additional heat input. The endpoint of this process is the boiling away of the oceans, with near-surface temperatures and pressures rising to hundreds of atmospheres and degrees... Geological evidence indicates that Earth has four times come very close to total glaciation. An equal number of times, it has stopped short of ocean evaporation. Why did neither occur? There seems to be no common and unified saving cause. Instead, each time reveals a single and always unique circumstance. It is precisely when attempting to explain these that geological texts begin to show familiar phrases like '...extremely low probability,' 'if this geological factor had varied by a small fraction,' etc...
In the fundamental monograph 'History of the Atmosphere' [Budyko, 1985], there is discussion of an inexplicable correlation between three phenomena: solar activity rhythms, mantle degassing stages, and the evolution of life. 'The correspondence between atmospheric physicochemical regime fluctuations and biosphere development needs can only be explained by random coordination of direction and speed of unrelated processes - solar evolution and Earth's evolution. Since the probability of such coordination is exceptionally small, this leads to the conclusion about the exceptional rarity of life (especially its higher forms) in the Universe.'"
Quantum immortality and gun jammed do not contradict each other: for example, if we survive 10 rounds failures because of QI, we most likely survive only on those timelines where gun is broken. So both QI and gun jamming can be true and support one another and there is no contradiction.