Posts

Is AI alignment a purely functional property? 2024-12-15T21:42:50.674Z
What is MIRI currently doing? 2024-12-14T02:39:20.886Z
The Dissolution of AI Safety 2024-12-12T10:34:14.253Z
What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? 2024-10-19T06:11:12.602Z
The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind 2024-10-16T01:24:51.102Z
A Heuristic Proof of Practical Aligned Superintelligence 2024-10-11T05:05:58.262Z
A Nonconstructive Existence Proof of Aligned Superintelligence 2024-09-12T03:20:09.531Z
Ice: The Penultimate Frontier 2024-07-13T23:44:56.827Z
Less Wrong automated systems are inadvertently Censoring me 2024-02-21T12:57:16.955Z
A Back-Of-The-Envelope Calculation On How Unlikely The Circumstantial Evidence Around Covid-19 Is 2024-02-07T21:49:46.331Z
The Math of Suspicious Coincidences 2024-02-07T13:32:35.513Z
Brute Force Manufactured Consensus is Hiding the Crime of the Century 2024-02-03T20:36:59.806Z
Without Fundamental Advances, Rebellion and Coup d'État are the Inevitable Outcomes of Dictators & Monarchs Trying to Control Large, Capable Countries 2024-01-31T10:14:02.042Z
"AI Alignment" is a Dangerously Overloaded Term 2023-12-15T14:34:29.850Z
Could Germany have won World War I with high probability given the benefit of hindsight? 2023-11-27T22:52:42.066Z
Could World War I have been prevented given the benefit of hindsight? 2023-11-27T22:39:15.866Z
“Why can’t you just turn it off?” 2023-11-19T14:46:18.427Z
On Overhangs and Technological Change 2023-11-05T22:58:51.306Z
Stuxnet, not Skynet: Humanity's disempowerment by AI 2023-11-04T22:23:55.428Z
Architects of Our Own Demise: We Should Stop Developing AI Carelessly 2023-10-26T00:36:05.126Z
Roko's Shortform 2020-10-14T17:30:47.334Z
Covid-19 Points of Leverage, Travel Bans and Eradication 2020-03-19T09:08:28.846Z
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z
$100 for the best article on efficient charity - the winner is ... 2010-12-12T15:02:06.007Z
$100 for the best article on efficient charity: the finalists 2010-12-07T21:15:31.102Z
$100 for the best article on efficient charity -- Submit your articles 2010-12-02T20:57:31.410Z
Superintelligent AI mentioned as a possible risk by Bill Gates 2010-11-28T11:51:50.475Z
$100 for the best article on efficient charity -- deadline Wednesday 1st December 2010-11-24T22:31:57.215Z
Competition to write the best stand-alone article on efficient charity 2010-11-21T16:57:35.003Z
Public Choice and the Altruist's Burden 2010-07-22T21:34:52.740Z
Politicians stymie human colonization of space to save make-work jobs 2010-07-18T12:57:47.388Z
Financial incentives don't get rid of bias? Prize for best answer. 2010-07-15T13:24:59.276Z
A proposal for a cryogenic grave for cryonics 2010-07-06T19:01:36.898Z
MWI, copies and probability 2010-06-25T16:46:08.379Z
Poll: What value extra copies? 2010-06-22T12:15:54.408Z
Aspergers Survey Re-results 2010-05-29T16:58:34.925Z
Shock Level 5: Big Worlds and Modal Realism 2010-05-25T23:19:44.391Z
The Tragedy of the Social Epistemology Commons 2010-05-21T12:42:38.103Z
The Social Coprocessor Model 2010-05-14T17:10:15.475Z
Aspergers Poll Results: LW is nerdier than the Math Olympiad? 2010-05-13T14:24:24.783Z
Do you have High-Functioning Asperger's Syndrome? 2010-05-10T23:55:45.936Z
What is missing from rationality? 2010-04-27T12:32:06.806Z
Report from Humanity+ UK 2010 2010-04-25T12:33:33.170Z
Ugh fields 2010-04-12T17:06:18.510Z
Anthropic answers to logical uncertainties? 2010-04-06T17:51:49.486Z
What is Rationality? 2010-04-01T20:14:09.309Z
David Pearce on Hedonic Moral realism 2010-02-03T17:27:31.982Z
Strong moral realism, meta-ethics and pseudo-questions. 2010-01-31T20:20:47.159Z
Simon Conway Morris: "Aliens are likely to look and behave like us". 2010-01-25T14:16:18.752Z
London meetup: "The Friendly AI Problem" 2010-01-19T23:35:47.131Z

Comments

Comment by Roko on The Dissolution of AI Safety · 2024-12-18T05:20:06.252Z · LW · GW

How can we solve that coordination problem? I have yet to hear a workable idea.

This is my next project!

Comment by Roko on The Dissolution of AI Safety · 2024-12-18T05:19:33.416Z · LW · GW

some guy who was recently hyped about asking o1 for the solution to quantum gravity - it gave the user some gibberish

yes, but this is pretty typical for what a human would generate.

Comment by Roko on Is AI alignment a purely functional property? · 2024-12-18T05:18:06.597Z · LW · GW

There are plenty of systems where we rationally form beliefs about likely outputs from a system without a full understanding of how it works. Weather prediction is an example.

Comment by Roko on Is AI alignment a purely functional property? · 2024-12-16T01:25:20.463Z · LW · GW

I should have been clear: "doing things" is a form of input/output, since the AI must output some tokens or other signals to get anything done.

Comment by Roko on What is MIRI currently doing? · 2024-12-14T20:40:54.192Z · LW · GW

If you look at the answers there is an entire "hidden" section of the MIRI website doing technical governance!

Comment by Roko on What is MIRI currently doing? · 2024-12-14T09:20:18.798Z · LW · GW

Why is this work hidden from the main MIRI website?

Comment by Roko on What is MIRI currently doing? · 2024-12-14T09:02:05.665Z · LW · GW

nice!

Comment by Roko on What is MIRI currently doing? · 2024-12-14T05:05:22.021Z · LW · GW

"Our objective is to convince major powers to shut down the development of frontier AI systems worldwide"

This?

Comment by Roko on What is MIRI currently doing? · 2024-12-14T03:57:00.764Z · LW · GW

Who works on this?

Comment by Roko on The Dissolution of AI Safety · 2024-12-14T00:30:39.196Z · LW · GW

Re: (2), it will only affect the current generated output; once the output is over, all that state is reset, and the only thing that remains is the model weights, which were set in stone at train time.

Re: (1), "a LLM might produce text for reasons that don't generalize like a sincere human answer would": it seems that current LLM systems are pretty good at generalizing like a human would, and in some ways they are better, due to being more honest, easier to monitor, etc.

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T19:56:49.435Z · LW · GW

But do you really think we're going to stop with tool AI, and not turn them into agents?

But if agentic AI is an existential risk, then actors could choose not to develop it, which is a coordination problem, not an alignment problem.

We already have aligned AGI; we can coordinate to not build misaligned AGI.

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T03:20:25.003Z · LW · GW

ok but as a matter of terminology, is a "Satan reverser" misaligned because it contains a Satan?

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T02:58:28.373Z · LW · GW

OK, imagine that I make an AI that works like this: a copy of Satan is instantiated and his preferences are extracted in percentiles, then sentences from Satan's 2nd-5th percentile of outputs are randomly sampled. Then that copy of Satan is destroyed.

Is the "Satan Reverser" AI misaligned?

Is it "inner misaligned"?

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T02:11:57.381Z · LW · GW

So your definition of "aligned" would depend on the internals of a model, even if its measurable external behavior is always compliant and it has no memory/gets wiped after every inference?

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T02:06:01.151Z · LW · GW

Further on the tech tree, alignment tax can end up motivating systematic uses that make LLMs a source of danger.

Sure, but you can say the same about humans. Enron was a thing. Obeying the law is not as profitable as disobeying it.

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T01:43:34.580Z · LW · GW

maybe you should swap "understand ethics" for something like "follow ethics"/"display ethical behavior"

What is the difference between these two? This sounds like a distinction without a difference.

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T01:42:24.676Z · LW · GW

Any argument which features a "by definition"

What is your definition of "Aligned" for an LLM with no attached memory then?

Wouldn't it have to be

"The LLM outputs text which is compliant with the creator's ethical standards and intentions"?

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T01:29:56.232Z · LW · GW

To add: I didn't expect this to be controversial but it is currently on -12 agreement karma!

Comment by Roko on The Dissolution of AI Safety · 2024-12-13T01:27:58.352Z · LW · GW

LLMs have plenty of internal state, the fact that it's usually thrown away is a contingent fact about how LLMs are currently used

yes, but then your "Aligned AI based on LLMs" is just a normal LLM used in the way it is currently used.

Relevant aspects of observable behavior screen off internal state that produced it.

Yes this is a good way of putting it.

Comment by Roko on The Dissolution of AI Safety · 2024-12-12T21:52:25.070Z · LW · GW

equivalence between LLMs understanding ethics and caring about ethics

I think you don't understand what an LLM is. When the LLM produces a text output like "Dogs are cute", it doesn't have some persistent hidden internal state that can decide that dogs are actually not cute but that it should temporarily lie and say that they are cute.

The LLM is just a memoryless machine that produces text. If it says "dogs are cute" and that's the end of the output, then that's all there is to it. Nothing is saved: the weights are fixed at training time and not updated at inference time, and the neuron activations are thrown away at the end of the inference computation.

If you can get (using RLHF) an LLM to output text that consistently reflects human value judgements, then it is by definition "aligned". It really cares, in the only way it is possible for a text generator to care.
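
To make the statelessness point concrete, here is a toy sketch (the names and numbers are hypothetical illustrations, not any real inference API): generation is a pure function of the frozen weights and the prompt, and nothing survives the call.

```python
# Toy illustration (hypothetical, not a real inference library): a generation call
# is a pure function of the frozen weights and the prompt; nothing persists between calls.

FROZEN_WEIGHTS = (0.1, 0.7, 0.2)  # set at training time, never updated at inference


def generate(prompt: str, weights=FROZEN_WEIGHTS) -> str:
    # "Activations" exist only inside this call and are discarded when it returns.
    score = sum(w * len(word) for w, word in zip(weights, prompt.split()))
    return "Dogs are cute" if score > 0 else "..."


# Repeated calls see the same fixed weights and carry no memory of each other.
assert generate("Are dogs cute?") == generate("Are dogs cute?")
```

In this picture, the consistently-compliant output text is the only behavioural fact there is; there is no extra hidden place for a contrary "real" preference to live between calls.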

Comment by Roko on If far-UV is so great, why isn't it everywhere? · 2024-10-20T23:45:39.807Z · LW · GW

Yes, certain places like preschools might benefit even from an isolated install.

But that is kind of exceptional.

The world isn't an efficient market, especially because people are kind of set in their ways and like to stick to the defaults unless there is strong social pressure to change.

Comment by Roko on If far-UV is so great, why isn't it everywhere? · 2024-10-20T21:59:57.221Z · LW · GW

Far-UVC probably would have a large effect if a particular city or country installed it.

But if only a few buildings install it, then it has no effect because people just catch the bugs elsewhere.

Imagine the effect of treating the sewage from just one house while leaving the sewage from a million other houses untreated in the river. There would be essentially no effect.
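
A minimal toy calculation of that point (illustrative assumptions, not a real epidemiological model): if only a tiny fraction of exposure venues is treated, overall transmission barely changes.

```python
# Toy well-mixed model under illustrative assumptions: if a fraction `coverage` of the
# places where exposure happens is treated with far-UVC that removes a fraction
# `efficacy` of transmission there, overall transmission scales by (1 - coverage * efficacy).

def relative_transmission(coverage: float, efficacy: float = 0.9) -> float:
    return 1.0 - coverage * efficacy


print(relative_transmission(0.001))  # a few isolated buildings: ~0.999, essentially no effect
print(relative_transmission(0.8))    # city- or country-wide installation: ~0.28, a large effect
```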

Comment by Roko on What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? · 2024-10-20T16:36:27.334Z · LW · GW

ok so from the looks of that it basically just went along with a fantasy he already had. But this is an interesting case and an example of the kind of thing I am looking for.

Comment by Roko on What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? · 2024-10-20T16:34:53.710Z · LW · GW

ok, but this is sort of circular reasoning because the only reason people freaked out is that they were worried about AI risk.

I am asking for a concrete bad outcome in the real world caused by a lack of RLHF-based ethics alignment, which isn't just people getting worried about AI risk.

Comment by Roko on What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? · 2024-10-20T16:33:29.330Z · LW · GW

alignment has always been about doing what the user/operator wants

Well it has often been about not doing what the user wants, actually.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-19T22:25:22.689Z · LW · GW

giving each individual influence over the adoption (by any clever AI) of those preferences that refer to her.

Influence over preferences of a single entity is much more conflict-y.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-19T22:24:25.837Z · LW · GW

Trying to give everyone overlapping control over everything that they care about in such spaces introduces contradictions.

The point of ELYSIUM is that people get control over non-overlapping places. There are some difficulties where people have preferences over the whole universe. But the real world shows us that those are a smaller thing than the direct, local preference to have your own volcano lair all to yourself.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-19T20:17:30.238Z · LW · GW

catgirls are consensually participating in a universe that is not optimal for them because they are stuck in the harem of a loser nerd with no other males and no other purpose in life other than being a concubine to Reedspacer

And the problem with saying "OK, let's just ban the creation of catgirls" is that then maybe Reedspacer builds a volcano lair just for himself and plays video games in it, and the catgirls whose existence you prevented are going to scream bloody murder because you took away from them a very good existence that they would have enjoyed, and also made Reedspacer sad.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-19T20:01:56.152Z · LW · GW

The question of what BPA wants to do to Steve, seems to me to be far more important for Steve's safety, than the question of what set of rules will constrain the actions of BPA.

BPA shouldn't be allowed to want anything for Steve. There shouldn't be a term in its world-model for Steve. This is the goal of cosmic blocking. The BPA can't even know that Steve exists.

I think the difficult part is when BPA looks at Bob's preferences (excluding, of course, references to most specific people) and sees preferences for inflicting harm on people-in-general that can be bent just enough to fit into the "not-torture" bucket, and so it synthetically generates some new people and starts inflicting some kind of marginal harm on them.

And I think that this will in fact be a binding constraint on utopia, because most humans will (given the resources) want to make a personal utopia full of other humans that forms a status hierarchy with them at the top. And 'being forced to participate in a status hierarchy that you are not at the top of' is a type of 'generalized consensual harm'.

Even the good old Reedspacer's Lower Bound fits this model. Reedspacer wants a volcano lair full of catgirls, but the catgirls are consensually participating in a universe that is not optimal for them because they are stuck in the harem of a loser nerd with no other males and no other purpose in life other than being a concubine to Reedspacer. Arguably, that is a form of consensual harm to the catgirls.

So I don't think there is a neat boundary here. The neatest boundary is informed consent, perhaps backed up by some lower-level tests about what proportion of an entity's existence is actually miserable.

If Reedspacer beats his catgirls, makes them feel sad all the time, that matters. But maybe if one of them feels a little bit sad for a short moment that is acceptable.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-19T19:29:06.106Z · LW · GW

Steve will never become aware of what Bob is doing to OldSteve

But how would Bob know that he wanted to create OldSteve, if Steve has been deleted from his memory via a cosmic block?

I suppose perhaps Bob could create OldEve. Eve is at a similar but not identical point in personality space to Steve, and the desire to harm people like Eve is really the same desire as the desire to harm people like Steve. So Bob's Extrapolated Volition could create OldEve, who somehow consents to being mistreated in a way that doesn't trigger your torture detection test.

This kind of 'marginal case of consensual torture' has popped up in other similar discussions. E.g. In Yvain's (Scott Alexander's) article on Archipelago there's this section:

"""A child who is abused may be too young to know that escape is an option, or may be brainwashed into thinking they are evil, or guilted into believing they are betraying their families to opt out. And although there is no perfect, elegant solution here, the practical solution is that UniGov enforces some pretty strict laws on child-rearing, and every child, no matter what other education they receive, also has to receive a class taught by a UniGov representative in which they learn about the other communities in the Archipelago, receive a basic non-brainwashed view of the world, and are given directions to their nearest UniGov representative who they can give their opt-out request to"""

So Scott Alexander's solution to OldSteve is that OldSteve must get a non-brainwashed education about how ELYSIUM/Archipelago works and be given the option to opt out.

I think the issue here is that "people who unwisely consent to torture even after being told about it" and "people who are willing and consenting submissives" is not actually a hard boundary.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-19T19:10:48.088Z · LW · GW

a 55 percent majority (that does not have a lot of resource needs) burning 90 percent of all resources in ELYSIUM to fully disenfranchise everyone else. And then using the remaining resources to hurt the minority.

If there is an agent that controls 55% of the resources in the universe and is prepared to use 90% of that 55% to kill/destroy everyone else, then, assuming that ELYSIUM forbids them from doing that, their rational move is to use their resources to prevent ELYSIUM from being built.

And since they control 55% of the resources in the universe and are prepared to use 90% of that 55% to kill/destroy everyone who was trying to actually create ELYSIUM, they would likely succeed and ELYSIUM wouldn't happen.

Re:threats, see my other comment.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-19T18:55:05.364Z · LW · GW

Especially if they like the idea of killing someone for refusing to modify the way that she lives her life. They can do this with person after person, until they have run into 9 people that prefers death to compliance. Doing this costs them basically nothing.

This assumes that threats are allowed. If you allow threats within your system, you lose out on most of the value of trying to create an artificial utopia, because you will recreate most of the bad dynamics of real history, which ultimately revolve around threats of force in order to acquire resources. So the ability to prevent entities from issuing threats that they then do not follow through on is crucial.

Improving the equilibria of a game is often about removing strategic options; in this case the goal is to remove the option of running what is essentially organized crime.

In the real world there are various mechanisms that prevent organized crime and protection rackets. If you threaten to use force on someone in exchange for resources, the mere threat of force is itself illegal at least within most countries and is punished by a loss of resources far greater than the threat could win.

People can still engage in various forms of protest that are mutually destructive of resources (AKA civil disobedience).

The ability to have civil disobedience without protection rackets does seem kind of crucial.

Comment by Roko on What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? · 2024-10-19T17:20:15.986Z · LW · GW

his AI girlfriend told him to

Which AI told him this? What exactly did it say? Had it undergone RLHF for ethics/harmlessness?

Comment by Roko on What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? · 2024-10-19T17:19:25.121Z · LW · GW

This is not to do with ethics though?

Air Canada Has to Honor a Refund Policy Its Chatbot Made Up

This is just the model hallucinating?

Comment by Roko on What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? · 2024-10-19T17:18:27.337Z · LW · GW

prevention of another Sydney.

But concretely, what bad outcomes eventuated because of Sydney?

Comment by Roko on What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented? · 2024-10-19T17:17:00.833Z · LW · GW

Why would less RL on Ethics reduce productivity? Most work-use of AI has nothing to do with ethics.

In fact since RLHF decreases model capability AFAIK, would skipping this actually increase productivity because the models would be better?

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-18T19:21:21.208Z · LW · GW

One principled way to do it would be simulated war on narrow issues.

So if actor A spends resources R_c on computation C, any other actor B can surrender resources equal to R_c to prevent computation C from happening. The surrendered resources and the original resources are then physically destroyed (e.g. spent on Bitcoin mining or something).

This then at least means that to a first approximation, no actor has an incentive to destroy ELYSIUM itself in order to stop some computation inside it from happening, because they could just use their resources to stop the computation in the simulation instead. And many actors benefit from ELYSIUM, so there's a large incentive to protect it.

And since the interaction is negative sum (both parties lose resources from their personal utopias) there would be strong reasons to negotiate.
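
A minimal sketch of that veto mechanism (names and numbers are illustrative, not a spec): to block a computation whose cost is R_c, the objector must surrender R_c of their own resources, and both stakes are destroyed.

```python
# Minimal sketch of the "simulated war" veto, under illustrative assumptions.
# A's stake R_c is spent on the computation if it runs, or destroyed if it is vetoed;
# a veto also destroys an equal stake from the objector, so blocking is negative-sum.

def simulated_war(resources: dict, proposer: str, objector: str, r_c: float) -> bool:
    """Return True if the computation runs, False if it was vetoed."""
    if resources[proposer] < r_c:
        raise ValueError("proposer cannot fund the computation")
    resources[proposer] -= r_c          # committed either way
    if resources[objector] >= r_c:
        resources[objector] -= r_c      # objector matches the stake; both stakes are burned
        return False                    # computation blocked
    return True                         # no matching stake: the computation goes ahead


resources = {"A": 100.0, "B": 100.0}
print(simulated_war(resources, "A", "B", r_c=30.0), resources)
# False {'A': 70.0, 'B': 70.0} -- both lose, which is what pushes the parties to negotiate
```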

In addition to this there could be rule-based and AI-based protections to prevent unauthorized funny tricks with simulations. One rule could be a sort of "cosmic block" where you can just block some or all other Utopias from knowing about you outside of a specified set of tests ("is torture happening here", etc).

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-18T06:47:07.822Z · LW · GW

But the text that you link to does not suggest any mechanism, that would actually protect Steve

There is a baseline set of rules that exists for exactly this purpose, which I didn't want to go into detail on in that piece because it's extremely distracting from the main point. These rules are not necessarily made purely by humans, but could for example be the result of some kind of AI-assisted negotiation that happens at ELYSIUM Setup.

"There would also be certain baseline rules like “no unwanted torture, even if the torturer enjoys it”, and rules to prevent the use of personal utopias as weapons."

But I think you're correct that the system that implements anti-weaponization and the systems that implement extrapolated volitions are potentially pushing against each other. This is of course a tension that is present in human society as well, which is why we have police.

So basically the question is "how do you balance the power of generalized-police against the power of generalized-self-interest?"

Now the whole point of having "Separate Individualized Utopias" is to reduce the need for police. In the real world, it does seem to be the case that extremely geographically isolated people don't need much in the way of police involvement. Most human conflicts are conflicts of proximity, crimes of opportunity, etc. It is rare that someone basically starts an intercontinental stalking vendetta against another person. And if you had the entire resources of police departments just dedicated to preventing that kind of crime, and they also had mind-reading tech for everyone, I don't think it would be a problem.

I think the more likely problem is that people will want to start haggling over what kind of universal rights they have over other people's utopias. Again, we see this in real life. E.g. "diverse" characters forced into every video game because a few people with a lot of leverage want to affect the entire universe.

So right now I don't have a fully satisfactory answer to how to fix this. It's clear to me that most human conflict can be transformed into a much easier negotiation over basically who gets how much money/general-purpose-resources. But the remaining parts could get messy.

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-17T18:47:07.920Z · LW · GW

This seems to only be a problem if the individual advocates have vastly more optimization power than the AIs that check for non-aggression. I don't think there's any reason for that to be the case.

In contemporary society we generally have the opposite problem (the state uses lawfare against individuals).

Comment by Roko on The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind · 2024-10-16T20:21:36.380Z · LW · GW

virtual is strictly better. No one wants his utopia constrained by the laws of physics

Well. Maybe.

Comment by Roko on A Heuristic Proof of Practical Aligned Superintelligence · 2024-10-11T17:06:31.940Z · LW · GW

Technically it doesn't matter whether Vladimir Putin is good or bad.

What matters is that he is small and weak, and yet he still controls the whole of Russia, which is large and powerful and much more intelligent than he is.

Comment by Roko on A Heuristic Proof of Practical Aligned Superintelligence · 2024-10-11T13:14:20.489Z · LW · GW

Yes, I think this objection captures something important.

I have proven that aligned AI must exist and also that it must be practically implementable.

But some kind of failure, i.e. a "near miss" on achieving a desired goal, can happen even if success was possible.

I will address these near misses in future posts.

Comment by Roko on A Heuristic Proof of Practical Aligned Superintelligence · 2024-10-11T13:10:54.580Z · LW · GW

This objection doesn't affect my argument because I am arguing that an aligned, controllable team of AIs exists, not that every team of AIs is aligned and controllable.

If IQ 500 is a problem, then give them the same IQs as people in Russia, who are, as a matter of fact, controlled by Vladimir Putin and who cannot and do not spontaneously come up with inscrutable steganography.

Comment by Roko on A Nonconstructive Existence Proof of Aligned Superintelligence · 2024-10-10T22:50:04.932Z · LW · GW

I don't see how anyone could possibly argue with my definitions.

Comment by Roko on A Nonconstructive Existence Proof of Aligned Superintelligence · 2024-10-10T22:48:48.310Z · LW · GW

mathematical abstraction of an actual real-world ASI

But it's not that: it's a mathematical abstraction of a disembodied ASI that lacks any physical footprint.

Comment by Roko on A Nonconstructive Existence Proof of Aligned Superintelligence · 2024-10-07T20:19:59.738Z · LW · GW

The problem with this is that people use the word "superintelligence" without a precise definition. Clearly they mean some computational process. But nobody who uses the term colloquially defines it.

So, I will make the assertion that if a computational process achieves the best possible outcome for you, it is a superintelligence. I don't think anyone would disagree with that.

If you do, please state what other properties you think a "superintelligence" must have, other than being a computational process that achieves the best possible outcome.

Comment by Roko on A Nonconstructive Existence Proof of Aligned Superintelligence · 2024-10-07T20:12:01.886Z · LW · GW

I never said it had to be implemented by a state. That is not the claim: the claim is merely that such a function exists.

Comment by Roko on A Nonconstructive Existence Proof of Aligned Superintelligence · 2024-09-21T16:32:03.499Z · LW · GW

you can have an alignment problem without humans, e.g. the two strawberries problem.

Comment by Roko on A Nonconstructive Existence Proof of Aligned Superintelligence · 2024-09-21T16:31:01.971Z · LW · GW

Decoherence means that different branches don't interfere with each other on macroscopic scales. That's just the way it works.

Superfluids/superconductors/lasers are still microscopic effects that only matter at the scale of atoms or at ultra-low temperature or both.

Comment by Roko on A Nonconstructive Existence Proof of Aligned Superintelligence · 2024-09-21T16:29:13.219Z · LW · GW

bringing QM into this is not helping. All these types of questions are completely generic QM questions, and ultimately they come down to the measure ||Ψ⟩|².
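
For reference, the measure being pointed at here is just the standard Born rule, i.e. the weight of outcome $i$ given state $|\Psi\rangle$:

$$P(i) = \left|\langle i \mid \Psi \rangle\right|^2,$$

the squared amplitude of the branch in question.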