Posts

How could a friendly AI deal with humans trying to sabotage it? (like how present day internet trolls introduce such problems) 2021-11-27T22:07:15.960Z
What are the mutual benefits of AGI-human collaboration that would otherwise be unobtainable? 2021-11-17T03:09:40.733Z
What’s the likelihood of only sub exponential growth for AGI? 2021-11-13T22:46:25.277Z
Avoiding Negative Externalities - a theory with specific examples - Part 1 2021-11-12T04:09:32.462Z
M. Y. Zuo's Shortform 2021-11-07T01:42:46.261Z

Comments

Comment by M. Y. Zuo on The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century · 2021-12-03T16:54:51.536Z · LW · GW

Seems like a society stuck in a Nash equilibrium. Perhaps every society that reaches its Malthusian limit invariably ends up this way, as zero-sum games come to predominate.

Comment by M. Y. Zuo on Second-order selection against the immortal · 2021-12-03T15:31:37.100Z · LW · GW

It seems that this post is describing the regime beyond the threshold where group advantages outweigh individual advantages.

Comment by M. Y. Zuo on Morality is Scary · 2021-12-03T03:27:33.566Z · LW · GW

I imagine competition, in some form or another, will continue until the last star burns out.

Comment by M. Y. Zuo on "Infohazard" is a predominantly conflict-theoretic concept · 2021-12-03T03:03:02.814Z · LW · GW

“Nick Bostrom, for example, despite discussing "disappointment risks", spends quite a lot of his time thinking about very disappointing scenarios, such as AGI killing everyone, or nuclear war happening. This shows a revealed preference for, not against, receiving disappointing information.”

You are presuming there is no information more ‘disappointing’ that he is consciously or unconsciously avoiding. Although those scenarios may be ‘very disappointing’ they are far from maximally ‘disappointing’.

Comment by M. Y. Zuo on [linkpost] Crypto Cities · 2021-12-02T15:39:02.929Z · LW · GW

In the end I think your last paragraph explains everything. The proposal presumes that a parallel legal system governing property rights would even be allowed in the first place. Considering that in every nation I can think of property rights are guaranteed by a constitution, a crown, or something else equally difficult to change, this seems to be about as likely as any other activist proposal getting a supermajority to change the constitution. In practice, if crypto property rights were ever to come about, it would have to be either in a city-state like Singapore or Monaco, or in a watered-down form where the ‘on chain system’ is a bit of sprinkling to make the city seem high-tech and futuristic. You’re right not to be too hopeful about on-chain real estate. It seems better to start with some smaller, more manageable nexus of improvements.

Comment by M. Y. Zuo on [linkpost] Crypto Cities · 2021-12-02T00:47:32.647Z · LW · GW

After further reflection, these are pretty good reasons for preferring crypto to a central database, though I think in practice the costs of a central authority managing a database would have to be really high for serious people to prefer the on-chain solution. Which brings us to cost: the closest real-world example I can think of to the proposal is the Reedy Creek Improvement District, and even that is actually not quite as independent. Considering that it took nearly all the influence of a large company in 1966 to secure, and it was a close call at that, how do you imagine something similar could be secured in the present day, when the competition for land and access to real estate information is so much greater? Even the entire value of Bitcoin+Ethereum couldn’t buy that much contiguous land in any desirable locale in the US without eminent domain authority, let alone the special legal regimes. The alternative of converting an existing city in a developed country would be a massive political fight, probably the biggest since the 60s.

Comment by M. Y. Zuo on Coordinating the Unequal Treaties · 2021-12-01T18:21:13.552Z · LW · GW

What other asymmetric ratcheting mechanisms are there?

Comment by M. Y. Zuo on M. Y. Zuo's Shortform · 2021-12-01T18:16:51.617Z · LW · GW

It does seem like alignment, for all intents and purposes, is impossible. Creating an AI truly beyond us, then, is really creating future, hopefully doting, parents to live under.

Comment by M. Y. Zuo on How could a friendly AI deal with humans trying to sabotage it? (like how present day internet trolls introduce such problems) · 2021-12-01T01:19:22.093Z · LW · GW

So it seems that an incipient AI would need a protected environment to develop into one capable of reliably carrying out such activities, much like raising children in protected environments before adulthood.

Comment by M. Y. Zuo on How could a friendly AI deal with humans trying to sabotage it? (like how present day internet trolls introduce such problems) · 2021-12-01T01:16:35.138Z · LW · GW

That seems to be a plausible course of action if the AI(s) were in an unchallengeable position. But how would they get there without first resolving that question?

Comment by M. Y. Zuo on M. Y. Zuo's Shortform · 2021-11-28T17:28:19.751Z · LW · GW

Let’s think about it another way. Consider the thought experiment where a single normal cell is removed from the body of any randomly selected human. Clearly they would still be human.

If you keep on removing normal cells, though, eventually they would die. And if you keep on plucking away cells, eventually the entire body would be gone and only cancerous cells would be left, i.e. only a ‘paperclip optimizer’ would remain from the original human, albeit inefficient and parasitic ‘paperclips’ that need an organic host.

(Due to the fact that everyone has some small number of cancerous cells at any given time, which are taken care of by regular processes.)

At what point does the human stop being ‘human’ and starts being a lump of flesh? And at what point does the lump of flesh become a latent ‘paperclip optimizer’?

Without a sharp cutoff, which I don’t think there is, there will inevitably be in-between cases where your proposed methods cannot be applied consistently.

The trouble is that if we, or the decision makers of the future, accept even one idea that is not internally consistent, then it hardly seems like anyone will be able to refrain from accepting other internally contradictory ideas too. Nor will everyone err in the same way. There is no rational basis to accept one rather than another, as a contradiction can imply anything at all, as we know from basic logic.

Then the end result will look quite like monkey tribes fighting each other, agitating against each and all based on which inconsistencies they accept or reject, regardless of what they call each other: humans, aliens, AIs, machines, organisms, etc.

Comment by M. Y. Zuo on Almost everyone should be less afraid of lawsuits · 2021-11-28T00:02:00.875Z · LW · GW

Isn’t this just an example of the loss aversion tendency in action? It’s a well-known concept, rooted in our primate evolutionary past and nowadays broadly accepted by mainstream academia, that people care more about losing X than about gaining Y when X = Y. Maybe the ratio is as great as losing 1*X = gaining 2*Y.

So in the liminal zone where X < Y < 2*X, there isn’t sufficient activation energy, and it should be expected that humans avoid taking such actions, such as taking on legal risk, even when they are a net benefit, because the net benefit is insufficiently large.

The real challenge is finding a scenario where the gain is large enough to clear that threshold, i.e. where Y > 2*X.
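To make the arithmetic concrete, here is a minimal sketch of the decision rule I have in mind, assuming a loss-aversion weighting of roughly 2 (the function name and numbers are just illustrative, not a claim about the true coefficient):

```python
def takes_action(potential_loss, potential_gain, loss_aversion=2.0):
    """Loss-aversion decision rule: a loss is weighted
    loss_aversion times more heavily than an equal-sized gain."""
    return potential_gain > loss_aversion * potential_loss

# Liminal zone: the gain exceeds the loss, but not by enough (X < Y < 2*X).
print(takes_action(potential_loss=1.0, potential_gain=1.5))  # False
# Only when the gain is more than twice the loss does acting look worthwhile.
print(takes_action(potential_loss=1.0, potential_gain=2.5))  # True
```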

Comment by M. Y. Zuo on [deleted post] 2021-11-25T17:13:28.740Z

I can think immediately of Maxwell’s electromagnetic theory following the previously accepted theory of the ‘luminiferous aether’, which was at the time believed to be what light propagated through in a vacuum. Going from Newton to the ‘luminiferous aether’ using induction works fine, explains many observable phenomena, and is somewhat elegant too. Compare that with the next step to Maxwell’s equations, which are horrendously baroque, rely on much more sophisticated math, and have some really bizarre implications that were difficult to accept until Einstein came along. There doesn’t appear to be any way induction would have led you to the correct result had you been researching this topic in the mid-19th century. In fact many people did waste their lives on the garden path, trying to induce onwards from the aether.

Comment by M. Y. Zuo on [deleted post] 2021-11-23T00:16:37.248Z

After some reflection on what you wrote and what I wrote before, I think the problem I was trying to articulate is actually an interesting subset of a more general problem, namely the halting problem as it applies to humans.

That is, how does one know when to stop inducing on a chain of inductions? Surely there has to be a threshold, as with the neutrino example, beyond which induction will most likely yield a misleading answer that, if taken at face value like every previous stage of induction, will lead down a garden path. Identifying that threshold every time may indeed be impossible without knowing everything.

Comment by M. Y. Zuo on Vitalik: Cryptoeconomics and X-Risk Researchers Should Listen to Each Other More · 2021-11-22T03:06:22.191Z · LW · GW

I hesitate to be the first to respond here, but there seems to be a point so strange that someone else must have noticed it as well, so I hope it can be clarified. That is, since the main problem your work is tackling is:

“how can we regulate a very complex and very smart system with unpredictable emergent properties using a very simple and dumb system whose properties once created are inflexible”

There must then be some hard constraint for AI work as well, one that involves some ‘simple and dumb system whose properties once created are inflexible’. But that does not seem inevitable.

Utility and objective functions don’t have to follow that kind of description; it is only assumed in certain projections.

If in the future some world compact decided, for example, that some hard-coded objective function must be continuously executed and also be very difficult to change, then that seems plausible, but it is by no means preordained, since there is no clear consensus that such a system can even be maintained perpetually.

Comment by M. Y. Zuo on [deleted post] 2021-11-22T02:33:09.189Z

Thanks for the neat thoughts. I truly believe some differences can be settled by words, because there exists a class of differences that arise from misperceptions, misunderstandings, etc., and are not grounded in anything substantive. Otherwise, why would LW even exist?

Induction works fine without a global framework only if the inducer can correctly perceive the relationships between what they are observing. Someone lacking that capability would inevitably become confused in their analysis when they stumble upon some component, at a deep enough level, that has dependent relationships on other things far away in space, time, or perception. I.e., it works until it doesn’t.

For example, it wasn’t that long ago that no one on this planet understood how neutrinos worked, even though neutrinos are actually quite critical to understanding many interrelated phenomena, some of which are quite vital to understanding physics in general, not to mention all the dependent fields. And induction by no means guaranteed anything close to the correct conclusion.

Of course folks had hunches, or just pretended to know, and some pretended to be able to induce from what knowledge was available at the time. But in fact no one really could, once they hit the wall of confusion surrounding neutrinos.

Which is to say, no one on this planet could correctly induce beyond a certain point in anything, even if they wanted to, regardless of starting topic, from the best ways of writing an essay or Buddhist history all the way down to neutrino physics. Everyone’s powers of induction would have failed sooner or later.

It’s just that practically no one bothered to go so deep in their analysis, outside of some small groups, so it was assumed that induction just works.

I imagine the same principle applies in any complex area of knowledge.

Comment by M. Y. Zuo on [deleted post] 2021-11-21T21:12:37.860Z

It’s a topic that likely doesn’t have any realistic call to action, since most of the factors are not within anyone’s control. But perhaps there is one, and if someone would like to share it, feel free.

It was crossposted from my blog to see if anyone had similar thoughts; a wider circulation than that is unnecessary for my intentions.

Comment by M. Y. Zuo on M. Y. Zuo's Shortform · 2021-11-20T17:16:08.007Z · LW · GW

Those appear to be examples of arguments from consequences, a logical fallacy. How could similar reasoning be derived from axioms, if at all?

Comment by M. Y. Zuo on M. Y. Zuo's Shortform · 2021-11-20T03:07:04.237Z · LW · GW

If no one’s goals can be definitively proven to be better than anyone else’s goals, then it doesn’t seem like we can automatically conclude that the majority of present or future humans, or our descendants, will prioritize maximizing fun, happiness, etc.

If some want to pursue that, then fine; if others want to pursue different goals, even ones that are deleterious to overall fun, happiness, etc., then there doesn’t seem to be a credible argument to dissuade them?

Comment by M. Y. Zuo on Applications for AI Safety Camp 2022 Now Open! · 2021-11-20T01:42:42.185Z · LW · GW

Seems interesting; I applied. On a logistical note, supplying a pre-formatted Google Sheet for draft answers is a neat innovation.

Comment by M. Y. Zuo on M. Y. Zuo's Shortform · 2021-11-18T05:35:08.665Z · LW · GW

What’s the rational basis for preferring grey goo that consumes all mass-energy created by humans over grey goo that consumes all mass-energy created by a paperclip optimizer? The only possible ultimate end in both scenarios is heat death anyway.

Comment by M. Y. Zuo on What are the mutual benefits of AGI-human collaboration that would otherwise be unobtainable? · 2021-11-18T02:28:14.851Z · LW · GW

Because human deference is usually conditioned on motives beyond deferring for the sake of deferring. Thus even in that case there will still need to be some collaboration.

Comment by M. Y. Zuo on What are the mutual benefits of AGI-human collaboration that would otherwise be unobtainable? · 2021-11-18T01:18:36.184Z · LW · GW

I intended to ask what we cannot do presently that may be possible with the help of AGIs.

Comment by M. Y. Zuo on M. Y. Zuo's Shortform · 2021-11-16T16:46:35.305Z · LW · GW

So why must we prevent paperclip optimizers from bringing about their own ‘fun’?

Comment by M. Y. Zuo on Why do you believe AI alignment is possible? · 2021-11-16T15:07:14.810Z · LW · GW

Are you pondering what arguments a future AGI will need to convince humans? That’s well covered on LW. 

Otherwise, my point is that we will almost certainly not convince monkeys that ‘we’re one of them’ if they can use their eyes and see that, instead of spending resources on bananas, etc., we’re spending them on ballistic missiles, etc.

Unless you mean that we could do so by deception, such as denying we spend resources along those lines, etc… in that case I’m not sure how that relates to future AGI/human scenarios.

Comment by M. Y. Zuo on Why do you believe AI alignment is possible? · 2021-11-16T13:31:11.148Z · LW · GW

In this case we would be the monkeys, gazing at the strange, awkwardly tall and hairless monkeys and pondering them in terms of monkey affairs. Maybe I would understand alignment in terms of whose territory is whose, who is the alpha and omega among the human tribe(s), which banana trees are the best, where the nearest clean water source is, what kind of sticks and stones make the best weapons, etc.

I probably won’t understand why the human tribe(s) commit such vast efforts to creating, securing, and moving around those funny-looking giant metal cylinders with lots of gizmos at the top, bigger than any tree I’ve seen. Why every mention of them elicits dread, why only a few of the biggest human tribes are allowed to have them, why they need to be kept on constant alert, why several need to be put in even bigger metal cylinders to roam around underwater, etc. Surely nothing can be that important, right?

If the AGI is moderately above us, then we could probably find arguments convincing to both sides, but we would never be certain of them.

If the AGI becomes as far above us as humans are above monkeys, then I believe the chances are about the same as us finding arguments that could convince monkeys of the necessity of ballistic missile submarines.

Comment by M. Y. Zuo on What’s the likelihood of only sub exponential growth for AGI? · 2021-11-16T03:46:27.693Z · LW · GW

Thanks for the links. It may be that the development of science, and of all technical endeavours in general, follows a pattern of punctuated equilibrium: sub-linear growth, or even regression, for the vast majority of the time, interspersed with brief periods of tremendous change.

Comment by M. Y. Zuo on Ngo and Yudkowsky on alignment difficulty · 2021-11-16T02:40:07.308Z · LW · GW

I imagine that was one of the critiques of prediction markets using cryptocurrencies held in escrow, yet they exist now, and they’re not all scams, so there must be some non-zero market-clearing price.

Comment by M. Y. Zuo on Ngo and Yudkowsky on alignment difficulty · 2021-11-16T00:06:41.962Z · LW · GW

The actual structure, and payout ratio, would probably be set in a much more elaborate way. Maybe some kind of annuity paying out every year the world hasn’t ended yet? For example, commit 10 bitcoins every year from a reversible escrow account to the irreversible escrow if the servers still exist, or else the total balance is forfeited. Something along those lines; perhaps others would want to take up the project.
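A very rough sketch of the cash flows I’m imagining, just to make the annuity idea concrete; the amounts, the release year, and the function itself are hypothetical placeholders rather than an actual contract design:

```python
ANNUAL_COMMITMENT_BTC = 10  # moved each year from the reversible to the irreversible escrow
RELEASE_YEAR = 2050         # year the irreversible escrow unlocks for the counterparty

def escrow_balance(start_year, last_year_world_exists):
    """Total BTC accumulated in the irreversible escrow.

    Each year the world (and the servers) still exist, another tranche
    is committed; if the world ends before the release date, the
    balance is simply never claimed by anyone."""
    years_survived = max(0, min(last_year_world_exists, RELEASE_YEAR) - start_year)
    return years_survived * ANNUAL_COMMITMENT_BTC

# If the world makes it to 2050, the counterparty collects the full balance:
print(escrow_balance(start_year=2022, last_year_world_exists=2050))  # 280
# If it ends in 2030, only the tranches committed up to then were ever at stake:
print(escrow_balance(start_year=2022, last_year_world_exists=2030))  # 80
```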

Comment by M. Y. Zuo on Ngo and Yudkowsky on alignment difficulty · 2021-11-15T22:55:15.019Z · LW · GW

Wow, that may be a genuinely groundbreaking application for cryptocurrencies. E.g., someone with 1000 bitcoins can put them in some form of guaranteed, irreversible escrow, in exchange for a million dollars upfront and a release date in 2050. If the world ends then the escrow vanishes; if not, the lucky bettor would get it.

Comment by M. Y. Zuo on What’s the likelihood of only sub exponential growth for AGI? · 2021-11-15T22:43:35.947Z · LW · GW

#1 resonates with me somehow. Perhaps because I’ve witnessed a few people in real life, profoundly autistic, or disturbed, or on drugs, speak somewhat like an informal spoken variant of GPT-3. Or is it the other way around?

Comment by M. Y. Zuo on Why do you believe AI alignment is possible? · 2021-11-15T18:00:23.663Z · LW · GW

Well, I would answer, but the answers would be recursive. I cannot know the true values and alignment of such a superhuman intellect without being one myself. And if I were, I wouldn’t be able to communicate such thoughts with their full strength without you also being at least equally superhuman in order to understand. And if we both were, then you would know already.

And if neither of us are, then we can at best speculate with some half baked ideas that might sound convincing to us but unconvincing to said superhuman intellects. At best we can hope that any seeming alignment of values, perceived to the best of our abilities, is actual. Additionally, said supers may consider themselves ’humans’ or not, on criteria possibly also beyond our understanding.

Alternatively, if we could raise ourselves to that level, then super-super affairs would become the basis, leading us to speculate on hyper-superhuman topics on super-LessWrong. Ad infinitum.

Comment by M. Y. Zuo on Why do you believe AI alignment is possible? · 2021-11-15T17:03:34.507Z · LW · GW

I didn’t mean to suggest that any future approach has to rely on ‘typical human architecture’. I also believe the least possibly aligned humans are less aligned with each other than the least possibly aligned dolphins, elephants, whales, etc., are with each other. Treating AGI as a new species, at least as distant from us as dolphins, for example, would be a good starting point.

Comment by M. Y. Zuo on Why do you believe AI alignment is possible? · 2021-11-15T16:30:05.524Z · LW · GW

I believe it’s possible for AI values to align as much as the least possibly aligned human individuals are aligned with each other. And in my books, if this could be guaranteed, it would already constitute a heroic achievement, perhaps the greatest accomplishment of mankind up until that point.

Any greater alignment would be a pleasant fantasy, hopefully realizable if AGIs were to come into existence, but it doesn’t seem to have any solid justification, at least not any more than many other pleasant fantasies.

Comment by M. Y. Zuo on Education on My Homeworld · 2021-11-15T16:20:49.774Z · LW · GW

Interesting. How have the forces promoting greater regulation, liability, etc., been kept quiescent on your homeworld?

Comment by M. Y. Zuo on Education on My Homeworld · 2021-11-15T04:06:16.192Z · LW · GW

How does tort liability work on your homeworld?

Comment by M. Y. Zuo on What would we do if alignment were futile? · 2021-11-15T00:44:02.983Z · LW · GW

It doesn’t seem credible for AIs to be more aligned with researchers than researchers are aligned with each other, or with the general population. 

Maybe that’s ‘gloomy’, but that’s no different from how human affairs have progressed since the first tribes were established. From the viewpoint of broader society, it’s more of a positive development to understand there’s an upper limit on how much alignment efforts can be expected to yield, so that resources are allocated properly to their most beneficial use.

Comment by M. Y. Zuo on What would we do if alignment were futile? · 2021-11-14T15:04:53.740Z · LW · GW

There seems to be an implicit premise lurking in the background (please correct me if I’m wrong): that determining the degree of alignment to a mutually satisfactory level will be possible at all, that it will not be NP-hard.

I.e., even if that ‘unknown miracle’ does happen, can we be certain that everyone in a room could closely agree on the ‘how much alignment’ questions?

Comment by M. Y. Zuo on [linkpost] Crypto Cities · 2021-11-14T14:40:34.590Z · LW · GW

It does seem that option 3 is what is being proposed. I agree that if some solution could be designed to make the whole idea acceptable to the legal authorities, then the further implementation seems entirely credible from a technical view. Though in that case I really am not sure why the additional complexity of an ‘on chain system’ is necessary; a centralized database would do everything proposed. And any further benefits would be negated by the central control and trust required for such ‘on chain systems’ to be realized. E.g., someone dissatisfied with any transaction could always bypass whatever technical implementation by going to the legal authorities directly, or appeal from the ‘on chain courts’ to the superior authority, etc.

Comment by M. Y. Zuo on What’s the likelihood of only sub exponential growth for AGI? · 2021-11-14T14:23:34.320Z · LW · GW

Thanks JBlack, those are some convincing points. Especially the likelihood that even a chimpanzee-level intelligence directly interfaced to present-day supercomputers would likely yield tangible performance greater than any human in many ways. Though perhaps the danger is lessened if, for the first few decades, the energy and space requirements are at a minimum equal to those of a present-day supercomputing facility. It’s a big and obvious enough ‘bright line’, so to speak.

Comment by M. Y. Zuo on What’s the likelihood of only sub exponential growth for AGI? · 2021-11-14T03:59:58.908Z · LW · GW

Thanks for the in-depth answer. The engineer side of me gets leery whenever ‘straightforward real-world scaling following a working theory’ is a premise; the likelihood of there being no significant technical obstacles at all, other than resources and energy, seems vanishingly low. A thousand and one factors could impede realizing even the most perfect theory, much as with other complex engineered systems. Possible surprises include some dependence on the substrate, on the specific arrangement of hardware, on other emergent factors, on software factors, etc.

Comment by M. Y. Zuo on What’s the likelihood of only sub exponential growth for AGI? · 2021-11-14T00:03:26.152Z · LW · GW

Thanks, that does seem to be a possible motive for constant observation, and interference, if such aliens were to exist. 

Comment by M. Y. Zuo on Avoiding Negative Externalities - a theory with specific examples - Part 1 · 2021-11-13T23:52:09.937Z · LW · GW

I go by the standard OED definition of biodiversity:

biodiversity /ˌbʌɪə(ʊ)dʌɪˈvəːsɪti/ ▸ noun [mass noun] the variety of plant and animal life in the world or in a particular habitat, a high level of which is usually considered to be important and desirable.

It was supposed to point to a decline in the variety of plant and animal life in the world or in a particular habitat, particularly of those forms which are usually considered to be important and desirable.

Comment by M. Y. Zuo on What’s the likelihood of only sub exponential growth for AGI? · 2021-11-13T22:53:51.774Z · LW · GW

So it seems that even ‘fooming’ would be a coin toss as it stands?

Comment by M. Y. Zuo on [linkpost] Crypto Cities · 2021-11-13T21:11:47.326Z · LW · GW

Since in the US the top court is prohibited from being an ‘on chain court’ by its constitution, and will not recognize judicial authority in any lower court that disobeys it, and likewise for every other state and country I know of… why would we care about the decisions of an ‘on chain court’? 

Btw, I’m not saying that I can see a case for why that would ever be necessary, but then again the folks whose opinions are considered final judgment can make surprising decisions.

Comment by M. Y. Zuo on [linkpost] Crypto Cities · 2021-11-13T14:07:43.927Z · LW · GW

But what if a court issues an order to alter the transaction history (or some other change that conflicts with any feature requiring consistency of said blockchain over time)? In the US that is surely within the power of the Supreme Court, as practically everything is, and probably the state supreme courts too.

Comment by M. Y. Zuo on [linkpost] Crypto Cities · 2021-11-13T04:20:17.899Z · LW · GW

It’s unclear how any decentralized system could work in the context of governance. By definition a government based on laws must have some central control, at least one court capable of issuing final and binding orders, at the very least to resolve disputes of interpretation. This does not seem reconcilable with any ‘blockchain’ system I’ve heard of, since transactions would have to be mutable any time such a court issues orders that require mutability, or a later court decision overturns an earlier one.

Perhaps I’m not understanding the proposal, but although it seems to offer better solutions than the preexisting ones to certain coordination problems in local governments, those benefits would only last until the first contrary court order is issued. If the proposal is really meant to replace the judicial system itself, that would be quite astonishing.

Comment by M. Y. Zuo on Preprint is out! 100,000 lumens to treat seasonal affective disorder · 2021-11-13T04:12:27.399Z · LW · GW

I experimented with using a very powerful halogen bulb (double envelope for safety) positioned as close as possible to my face, and it seemed to work quite well; it really did feel like a summer afternoon if I closed my eyes. It’s a remarkably simple thing to do, too, so developing a more convenient system would seem highly useful.

Comment by M. Y. Zuo on Avoiding Negative Externalities - a theory with specific examples - Part 1 · 2021-11-12T21:40:04.587Z · LW · GW

I appreciate the support, Sean. It’s not too difficult to write in lucid prose if it’s a subject near and dear to the heart and you allow a few hours of undivided attention, or at least that’s what I’ve found to be the case.

The list of effects not widely agreed upon could include all effects below the measurement capacities of our instruments, those limited to a certain subset of people or a certain timeframe, or those that occur very briefly. I imagine the possible list is boundless in length.

Comment by M. Y. Zuo on Robin Hanson's Grabby Aliens model explained - part 2 · 2021-11-12T17:53:22.241Z · LW · GW

Although the analysis is interesting, it seems not to include the possibility of lifeforms beyond the 3+1 dimensions we are accustomed to.