Posts

s/acc: Safe Accelerationism Manifesto 2023-12-19T22:19:59.551Z
What fact that you know is true but most people aren't ready to accept it? 2023-02-03T00:06:42.460Z
A pragmatic metric for Artificial General Intelligence 2022-10-17T22:07:25.108Z
Basic Post Scarcity Q&A 2022-07-23T13:43:25.472Z
Understanding Gato's Supervised Reinforcement Learning 2022-05-18T11:08:10.885Z
Awesome-github Post-Scarcity List 2021-11-20T08:47:59.454Z
A Roadmap to a Post-Scarcity Economy 2021-10-30T09:04:29.479Z
On Falsifying the Simulation Hypothesis (or Embracing its Predictions) 2021-04-12T00:12:12.838Z

Comments

Comment by lorepieri (lorenzo-rex) on s/acc: Safe Accelerationism Manifesto · 2024-01-09T17:33:52.841Z · LW · GW

This is a pretty counter-intuitive point indeed, but up to a certain threshold this seems to me the approach that minimises risk, by avoiding large capability jumps and improving the "immune system" of society.

Comment by lorepieri (lorenzo-rex) on s/acc: Safe Accelerationism Manifesto · 2024-01-09T17:30:37.707Z · LW · GW

Thanks for the insightful comment. Ultimately the difference in attitude comes down to the perceived existential risk posed by the technology, and the relative risks of acting to accelerate AI versus not acting.

And yes, I was expecting not to find much agreement here, but that's what makes it interesting :)

Comment by lorepieri (lorenzo-rex) on Lack of Spider-Man is evidence against the simulation hypothesis · 2024-01-09T17:16:38.241Z · LW · GW

A somewhat similar statistical argument can be made that the abundance of optional complexity (things could have been similar, but simpler) is evidence against the simulation hypothesis.

See https://philpapers.org/rec/PIETSA-6  (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization)

This is based on the general principle that computational resources are finite for any civilisation (assuming infinities are not physical) and are therefore minimised when possible by the simulators. In particular one can use the Simplicity Assumption: if we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation.
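A minimal way to formalise this (my notation, not taken from the paper) is as a prior over simulations that decays with their computational cost:

```latex
% Simplicity Assumption sketched as a prior; S = set of simulations of the civilization
% ever run, C(s) = computational cost of running simulation s, f = any decreasing function.
P(s) = \frac{f\big(C(s)\big)}{\sum_{s' \in S} f\big(C(s')\big)},
\qquad \text{e.g. } f(C) = \frac{1}{C}.
```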

It is hard to argue that a similar general principle can be found for something being "mundane", since the definition of mundane seems to depend on the simulators' point of view. Can you perhaps modify this reasoning to make it more general?

Comment by lorepieri (lorenzo-rex) on Childhoods of exceptional people · 2023-02-07T18:27:44.536Z · LW · GW

Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field.

Even if we assume this, it does not follow that we should try to recreate the subjective conditions that led to (perceived) "success". The environment is always changing (tech, knowledge base, tools), so many of the lessons will not apply. Moreover, biographies tend to create a narrative after the fact, emphasizing the message the writer wants to convey.

I prefer the strategy of mastering the basics from previous work and then figuring out for yourself how to innovate and improve the state of the art.

Comment by lorepieri (lorenzo-rex) on What fact that you know is true but most people aren't ready to accept it? · 2023-02-03T12:44:49.960Z · LW · GW

True :) 

(apart from your reply!)

Comment by lorepieri (lorenzo-rex) on UDASSA · 2023-01-29T23:08:01.979Z · LW · GW

Using the Universal Distribution in the context of the simulation argument makes a lot of sense if we think that base reality has no intelligent simulators, as it fits our expectation that a randomly generated simulator is very likely to be concise. But for simulations generated by humans (or any agent-simulators), a more natural prior is how easy the simulation is to run (the Simplicity Assumption), since agent-simulators face concrete tradeoffs in their use of computational resources, while they have no pressing tradeoffs on the length of the program.
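To make the contrast explicit, here is a rough side-by-side of the two priors (the exact functional form of the runtime weighting is my own illustrative choice):

```latex
% Universal-Distribution-style prior: weight a simulation x by its description length K(x).
P_{\mathrm{UD}}(x) \propto 2^{-K(x)}
% Simplicity-Assumption-style prior: weight x by the cost C(x) of actually running it.
P_{\mathrm{SA}}(x) \propto \frac{1}{C(x)}
```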

See here for more info on the latter assumption.

Comment by lorepieri (lorenzo-rex) on Why don't we think we're in the simplest universe with intelligent life? · 2023-01-29T22:51:48.482Z · LW · GW

This is also known as Simplicity Assumption: "If we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation."

In a nutshell, the amount  of computation needed to perform simulations matters (if resources are somewhat finite in base reality, which is fair to imagine), and over the long  term simple simulations will dominate the space of sims.

See here for more info.

Comment by lorepieri (lorenzo-rex) on The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument · 2023-01-29T20:07:03.912Z · LW · GW

Regarding (D), it has been elaborated more in this paper (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization).

Comment by lorepieri (lorenzo-rex) on AGI Impossible due to Energy Constrains · 2022-12-01T15:09:51.399Z · LW · GW

I would suggest removing "I dont think you are calibrated properly about the ideas that are most commonly shared in the LW community." and presenting your argument without speaking for the whole community.

Comment by lorepieri (lorenzo-rex) on Introducing the Basic Post-scarcity Map · 2022-10-09T21:46:21.470Z · LW · GW

Very interesting division, thanks for your comment. 

Paraphrasing what you said: in the informational domain we are very close to post-scarcity already (minimal effort to distribute high-level education and news globally), while in the material and human-attention domains we likely still need advancements in robotics and AI to scale.

Comment by lorepieri (lorenzo-rex) on Introducing the Basic Post-scarcity Map · 2022-10-09T21:34:15.225Z · LW · GW

You mean the edit functionality of Gitlab? 

Thanks for the gitbook tip, I will look into it.

Comment by lorepieri (lorenzo-rex) on Introducing the Basic Post-scarcity Map · 2022-10-09T21:30:55.128Z · LW · GW

Yes, the code is open source: https://gitlab.com/postscarcity/map

Comment by lorepieri (lorenzo-rex) on A paradox of existence · 2022-09-05T19:04:19.684Z · LW · GW

Interesting paradox. 

As others commented, I see multiple flaws:

  1. We believe we know that there is a reality that exists. But I doubt we can truly conceive of reality; we have at best a vague understanding of it. Moreover, we have no experience of "not existing", so it's hard to argue that we have a strong grasp of the claim that there is a reality that exists.
  2. The biggest issue is here, imho (this is a very common misunderstanding): math is just a tool we use to describe our universe; it is not our universe (unless you take an approach like the mathematical universe hypothesis). The fact that it works so well is selection bias: we keep the math that describes our universe well and discard the rest (see e.g. the negative solutions to the equations of motion in Newtonian mechanics). Math by itself is infinite; we use only a small subset of it to describe our universe. We also take inspiration from our universe when building math.

Comment by lorepieri (lorenzo-rex) on Are we there yet? · 2022-06-20T20:01:52.767Z · LW · GW

Not conclusive, but still worth doing in my view, given how easy it is. Create the spreadsheet, make it public, and let's see how it goes.

I would add the actual year in which you think it will happen.

Comment by lorepieri (lorenzo-rex) on Understanding Gato's Supervised Reinforcement Learning · 2022-06-18T19:49:40.061Z · LW · GW

Yeah, what I meant is that the slides of the Full Stack Deep Learning course provide a decent outline of all the significant architectures worth learning.

I would personally not go to that low a level of abstraction (e.g. implementing NNs in a new language) unless you really feel your understanding is shaky. Try building an actual side project, e.g. an object classifier for cars, and problems will arise naturally.
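As a concrete illustration of the kind of side project I mean, here is a minimal sketch (assuming PyTorch/torchvision, and a hypothetical folder of images arranged by class) that fine-tunes a pretrained ResNet as a car-vs-not-car classifier:

```python
# Minimal fine-tuning sketch: pretrained ResNet-18 as a binary "car vs not car" classifier.
# Assumes images are arranged as data/train/<class_name>/*.jpg (hypothetical layout).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: car / not car

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Even a toy project like this surfaces the real problems quickly: collecting and labelling data, class imbalance, and evaluating on images unlike the training set.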

Comment by lorepieri (lorenzo-rex) on Quantifying General Intelligence · 2022-06-18T18:47:43.854Z · LW · GW

I fear that measuring modifications is like measuring a moving target. I suspect it will be very hard to consider all the modifications, and many AIs may blend into each other under large modifications. It's also not clear how hard some modifications will be without actually carrying them out.

Why not fix a target, and measure the inputs needed (e.g. FLOPs, memory, time) to achieve it?
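As a toy illustration of what I mean (all names and the aggregation rule are hypothetical, just to show the shape of the idea): fix a benchmark target and log the resources consumed to reach it, then score by efficiency.

```python
# Toy sketch: fix a performance target and measure the resources spent to reach it.
# The field names and the cost aggregation below are illustrative, not a worked-out metric.
from dataclasses import dataclass

@dataclass
class ResourceBudget:
    flops: float         # total floating-point operations used in training
    memory_gb: float     # peak memory
    wall_clock_h: float  # elapsed time in hours

def efficiency_score(target_reached: bool, budget: ResourceBudget) -> float:
    """Higher is better: reaching the fixed target with fewer resources scores higher."""
    if not target_reached:
        return 0.0
    cost = budget.flops * 1e-18 + budget.memory_gb / 100 + budget.wall_clock_h / 24
    return 1.0 / (1.0 + cost)

print(efficiency_score(True, ResourceBudget(flops=1e21, memory_gb=40, wall_clock_h=72)))
```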

I'm working on this topic too, I will PM you.  

Also feel free to reach out if the topic is of interest.

Comment by lorepieri (lorenzo-rex) on Quantifying General Intelligence · 2022-06-18T18:24:22.488Z · LW · GW

Other useful references:

- F. Chollet, On the Measure of Intelligence, 2019. https://arxiv.org/abs/1911.01547

- S. Legg and M. Hutter, A collection of definitions of intelligence, Frontiers in Artificial Intelligence and Applications, 157 (2007).

- S. Legg and M. Hutter, Universal intelligence: A definition of machine intelligence, Minds and Machines, 17 (2007), pp. 391-444. https://arxiv.org/pdf/0712.3329.pdf

- P. Wang, On Defining Artificial Intelligence, Journal of Artificial General Intelligence, 10 (2019), pp. 1-37.

- J. Hernández-Orallo, The measure of all minds: evaluating natural and artificial intelligence, Cambridge University Press, 2017.


 

Comment by lorepieri (lorenzo-rex) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-12T17:35:42.735Z · LW · GW

This is the most likely scenario, with AGI getting heavily regulated, similarly to nuclear technology. It doesn't get much publicity because it's "boring".

Comment by lorepieri (lorenzo-rex) on If there was a millennium equivalent prize for AI alignment, what would the problems be? · 2022-06-10T22:13:35.526Z · LW · GW

Nice link, thanks for sharing.

Comment by lorepieri (lorenzo-rex) on If there was a millennium equivalent prize for AI alignment, what would the problems be? · 2022-06-10T22:02:03.099Z · LW · GW

The $1 million prize problem should be "clearly define the AI alignment problem". I'm not even joking: actually understanding the problem, and establishing that there is a problem in the first place, may give us hints towards the solution.

Comment by lorepieri (lorenzo-rex) on Understanding Gato's Supervised Reinforcement Learning · 2022-06-10T21:53:01.322Z · LW · GW

In research there are a lot of publications, but few stand the test of time. I would suggest looking at the architectures that brought significant changes and ideas; those are still very relevant, as they:

- often form the building blocks of current solutions

- help you build intuition on how architectures can be improved

- are often assumed knowledge in the field

- are often still useful, especially when resources are low

You should not need to look at more than 1-2 architectures per year in each field (computer vision, NLP, RL). Only after that would I focus on SOTA.

You may want to check https://fullstackdeeplearning.com/spring2021/; it should have enough historical material to get you covered and to expand from there, while also moving quickly to modern topics.

Comment by lorepieri (lorenzo-rex) on The Problem With The Current State of AGI Definitions · 2022-06-06T17:03:02.403Z · LW · GW

Thanks for the link, I will check it out.

Comment by lorepieri (lorenzo-rex) on The Problem With The Current State of AGI Definitions · 2022-06-02T14:15:16.193Z · LW · GW

ARC is a nice attempt. I also participated in the original challenge on Kaggle. The issue is that the test can be gamed (as everyone on Kaggle did) by brute-forcing over solution strategies.

An open-ended or interactive version of ARC may solve this issue.

Comment by lorepieri (lorenzo-rex) on The Problem With The Current State of AGI Definitions · 2022-06-01T14:13:24.305Z · LW · GW

I'm working along these lines to create an easy-to-understand numeric evaluation scale for AGIs. The dream would be something like: "Gato is AGI level 3.5, while the average human is 8.7." I believe the scale should factor in that no single static test can be a reliable test of intelligence (any test can be gamed and overfitted).

A good reference on the subject is "The Measure of All Minds" by Hernández-Orallo.

Happy to share a draft, send me a DM if interested.

Comment by lorepieri (lorenzo-rex) on Gato's Generalisation: Predictions and Experiments I'd Like to See · 2022-05-18T21:02:05.048Z · LW · GW

When you say "switching" it reminds me of the "big switch" approach of https://en.wikipedia.org/wiki/General_Problem_Solver.

Regarding how they do it, I believe the relevant passage to be:

Because distinct tasks within a domain can share identical embodiments, observation formats and action specifications, the model sometimes needs further context to disambiguate tasks. Rather than providing e.g. one-hot task identifiers, we instead take inspiration from (Brown et al., 2020; Sanh et al., 2022; Wei et al., 2021) and use prompt conditioning.

I guess it should be possible to locate the activation paths for different tasks, as the tasks are pretty well separated. Something along the lines of https://github.com/jalammar/ecco
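I haven't tried this on Gato (the weights aren't public), but as a minimal sketch of the general idea, one can pull per-layer hidden states from a public transformer such as GPT-2 and compare them across prompts from different "tasks"; ecco wraps similar functionality with nicer tooling. The prompts and the similarity measure below are just illustrative.

```python
# Sketch: compare per-layer hidden activations of GPT-2 for prompts from two "tasks".
# Illustrative only; GPT-2 stands in for Gato, whose weights are not public.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def layer_activations(prompt: str):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # One vector per layer, averaged over the tokens of the prompt.
    return [h.mean(dim=1).squeeze(0) for h in out.hidden_states]

act_a = layer_activations("Stack the red block on the blue block.")
act_b = layer_activations("Translate 'good morning' into French.")

# Low per-layer similarity would suggest the two tasks use different "paths".
for i, (a, b) in enumerate(zip(act_a, act_b)):
    sim = torch.nn.functional.cosine_similarity(a, b, dim=0)
    print(f"layer {i}: cosine similarity {sim.item():.3f}")
```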

Comment by lorepieri (lorenzo-rex) on Gato's Generalisation: Predictions and Experiments I'd Like to See · 2022-05-18T11:48:45.598Z · LW · GW

Fair analysis, I agree with the conclusions. The main contribution seems to be a proof that transformers can handle many tasks at the same time. 

Not sure if you sorted the tests in order of relevance, but I also consider the "held-out" test to be the most revealing. Besides finetuning, it would be interesting to test the zero-shot capabilities.

Comment by lorepieri (lorenzo-rex) on Gato as the Dawn of Early AGI · 2022-05-18T10:44:11.006Z · LW · GW

A single network is solving 600 different tasks spanning different areas. 100+ of the tasks are solved at 100% human performance. Let that sink in. 

While not a breakthrough in arbitrarily scalable generality, the fact that so many tasks can be fitted into one architecture is surprising and novel. For many real-life applications, being good at 100-1000 tasks makes an AI general enough to be deployed as an error-tolerant robot, say in a warehouse.

The main point, imho, is that this architecture may be enough to be scaled (10-1000x parameters) in a few years into a useful proto-AGI product.

Comment by lorepieri (lorenzo-rex) on DeepMind is hiring for the Scalable Alignment and Alignment Teams · 2022-05-17T11:11:42.910Z · LW · GW

Pretty disappointing and unexpected to hear this in 2022, after all we learned from the pandemic.

Comment by lorepieri (lorenzo-rex) on What's keeping concerned capabilities gain researchers from leaving the field? · 2022-05-16T11:15:19.049Z · LW · GW

What's stopping the companies from hiring a new researcher? People are queueing for tech jobs.

Comment by lorepieri (lorenzo-rex) on What's keeping concerned capabilities gain researchers from leaving the field? · 2022-05-13T08:47:47.902Z · LW · GW

If they leave, then only those who don't care remain...

Comment by lorepieri (lorenzo-rex) on "A Generalist Agent": New DeepMind Publication · 2022-05-13T08:35:13.139Z · LW · GW

If by "sort of general, flexible learning ability that would let them tackle entirely new domains" we include adding new tokenised vectors in the training set, then this fit the definition. Of course this is "cheating" since the system is not learning purely by itself, but for the purpose of building a product or getting the tasks done this does not really matter. 

And it's not inconceivable to imagine self-supervised token generation to acquire more skills, and perhaps a k-means algorithm to make sure that the new embeddings do not interfere with previous knowledge. It's a dumb way of getting smarter, but apparently it works thanks to scale effects!
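A rough sketch of what I mean by the k-means check (entirely hypothetical, just to illustrate the idea): cluster the existing task embeddings, then flag any new embedding that lands too close to an existing cluster centre as a potential interference risk.

```python
# Toy sketch of the k-means idea: flag new task embeddings that sit too close to
# the clusters occupied by existing skills (dimensions and threshold are made up).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
existing_embeddings = rng.normal(size=(500, 64))   # embeddings of already-learned tasks
new_embeddings = rng.normal(size=(10, 64))         # candidate embeddings for new skills

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(existing_embeddings)

# Distance from each new embedding to its nearest existing cluster centre.
distances = kmeans.transform(new_embeddings).min(axis=1)
threshold = np.median(kmeans.transform(existing_embeddings).min(axis=1))

for i, d in enumerate(distances):
    status = "potential interference" if d < threshold else "ok"
    print(f"new embedding {i}: distance {d:.2f} -> {status}")
```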

Comment by lorepieri (lorenzo-rex) on "A Generalist Agent": New DeepMind Publication · 2022-05-13T08:28:50.179Z · LW · GW

I would agree with "proto-AGI". I might soon write a blog on this, but ideally we could define a continuous value to track how close we are to AGI, which is increasing if:

- the tasks to solve are very different from each other

- the tasks are complex

- the tasks are solved well

- little experience (or info) is fed to the system

- the experience is not directly related to the task

- the experience is very raw

- computation is done in few steps

Then we keep adding new tasks and changing the environment. A toy sketch of how such factors might combine is below.
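Every factor name and the way they are combined here is hypothetical; the point is only to show the shape of a continuous, multi-factor score rather than a single pass/fail test.

```python
# Toy sketch of a continuous "generality" score; factors and weighting are illustrative only.
from dataclasses import dataclass

@dataclass
class TaskResult:
    performance: float      # 0..1, how well the task was solved
    complexity: float       # 0..1, how complex the task is
    novelty: float          # 0..1, how different the task is from the others
    data_efficiency: float  # 0..1, higher means less (and more raw/indirect) experience used

def generality_score(results: list[TaskResult]) -> float:
    """Average per-task score; a task contributes more if it is complex, novel,
    solved well, and solved with little task-specific experience."""
    if not results:
        return 0.0
    per_task = [
        r.performance * r.complexity * r.novelty * r.data_efficiency
        for r in results
    ]
    return sum(per_task) / len(per_task)

print(generality_score([TaskResult(0.9, 0.5, 0.7, 0.4), TaskResult(0.6, 0.8, 0.9, 0.3)]))
```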

Comment by lorepieri (lorenzo-rex) on "A Generalist Agent": New DeepMind Publication · 2022-05-12T23:52:23.104Z · LW · GW

I have always been cautious, but I would say yes this time.

With the caveat that it learns new tasks only from supervised data, not by reusing previous experience.

Comment by lorepieri (lorenzo-rex) on "A Generalist Agent": New DeepMind Publication · 2022-05-12T23:50:40.623Z · LW · GW

The fact that adding new tasks doesn't diminish performance on previous tasks is highly non-trivial!

It may be that there is a lot of room in the embedding space to store them. The wild thing is that nothing (apart from a few hardware iterations) stops us from increasing the embedding space if really needed.

Comment by lorepieri (lorenzo-rex) on "A Generalist Agent": New DeepMind Publication · 2022-05-12T23:46:31.913Z · LW · GW

Possibly the first true AGI paper.

Even though it is just exploiting the fact that all the narrow problems can be cast as sequence problems via tokenisation, it's remarkable that the tasks do not interfere destructively with each other. My gut feeling is that this is due to the very high-dimensional space of the embedding vectors.

It leaves ample room for growth.

Comment by lorepieri (lorenzo-rex) on A Quick Guide to Confronting Doom · 2022-04-15T23:45:44.596Z · LW · GW

My main point is that there is not enough evidence for a strong claim like doom-soon. In the absence of hard data, anybody is free to cook up arguments for or against doom-soon.

You may not like my suggestion, but I would strongly advise getting deeper into the field and understanding it better yourself before making important decisions.

In terms of paradigms, you may have a look at why building AI software is hard (easy to get to 80% accuracy, hellish to get to 99%), AI winters and hype cycles (the disconnect between claims/expectations and reality), and the development of dangerous technologies (nuclear, biotech) and how stability has been achieved.

Comment by lorepieri (lorenzo-rex) on A Quick Guide to Confronting Doom · 2022-04-13T22:56:05.236Z · LW · GW

Don't look at opinions, look for data and facts. Speculations, opinions, or beliefs cannot be the basis on which you make decisions or update your knowledge. It's better to know a few things, but with high confidence.

Ask yourself: what hard data points are there in favour of doom-soon?

Comment by lorepieri (lorenzo-rex) on What can people not smart/technical/"competent" enough for AI research/AI risk work do to reduce AI-risk/maximize AI safety? (which is most people?) · 2022-04-11T17:01:01.345Z · LW · GW

Geniuses or talented researchers are not as impactful as the right policy. Contribute to creating the right conditions (work environment, education, cross-pollination of ideas, funding, etc.) to make good research flourish. At the same time, if the fundamentals are not covered (healthcare, housing, etc.), people are not able to focus on much more than survival. So pretty much anything that makes the whole system work better helps.

As an example, there are plenty of smart individuals in poor countries who are not able to express their potential.

Comment by lorepieri (lorenzo-rex) on A concrete bet offer to those with short AGI timelines · 2022-04-11T16:48:28.561Z · LW · GW

Thanks. Yes, pretty much in line with the authors. Btw, I would be super happy to be wrong and see advancements in those areas, especially robotics.

 Thanks for the offer, but I'm not interested in betting money. 

Comment by lorepieri (lorenzo-rex) on A concrete bet offer to those with short AGI timelines · 2022-04-11T16:42:45.265Z · LW · GW

A close call, but I would still lean towards no. Engineering the prompt is where humans leverage all their common sense and vast (w.r.t. the AI) knowledge.

Comment by lorepieri (lorenzo-rex) on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-11T16:09:45.934Z · LW · GW

The bottom line is: nobody has a strong argument in support of the inevitability of the doom scenario. (If you have one, just reply to this with a clear and self-contained argument.)

From what I'm reading in the comments and in other papers/articles, it's a mixture of beliefs, extrapolations from known facts, reliance on what "experts" said, and cherry-picking. Add the fact that bad/pessimistic news travels and spreads faster than boring good news.

A sober analysis establishes that super-AGI can be dangerous (indeed there are no theorems forbidding this either); what's unproven is that it will be HIGHLY LIKELY to be a net minus for humanity. Even admitting that alignment is not possible, it's not clear why humanity's and a super-AGI's goals should be in conflict, and not just different. Even admitting that they are highly likely to be in conflict, it's not clear why strategies to counter this cannot be effective (e.g. partnering up with a "good" super-AGI).

Another factor often forgotten is that what we mean by "humanity" today may not have the same meaning once we have technologies like AGI, mind uploading, or intelligence enhancement. We may literally become those AIs.

Comment by lorepieri (lorenzo-rex) on AI safety: the ultimate trolley problem · 2022-04-10T17:45:59.304Z · LW · GW

The downvotes are excessive; the post is provocative, but interesting.

I think you will not even need to "push the fat man". The development of AGI will be slow and gradual (like any other major technology) and there will be incidents along the way (e.g. an AGI chatbot harassing someone). Those incidents will periodically mandate new regulations, so that measures to tackle real AGI-related dangers will be enacted, similarly to what happens in the nuclear energy sector. They will not be perfect, but there will be regulations.

The tricky part is that not all nations will set similar safety levels; in fact, some may encourage the development of unsafe, but high-reward, AGI. So overall it looks like "pushing the fat man" will not even work that well.

Comment by lorepieri (lorenzo-rex) on A concrete bet offer to those with short AGI timelines · 2022-04-10T17:21:02.293Z · LW · GW

Matthew, Tamay: Refreshing post, with actual hard data and benchmarks. Thanks for that.

My predictions:

  • A model/ensemble of models achieves >80% on all tasks in the MMLU benchmark

No in 2026, no in 2030. Mainly because we don't have much structured data or incentive to solve some of the categories. A powerful unsupervised AI, or more time, would be needed to clear those categories.

  • A credible estimate reveals that an AI lab deployed EITHER >10^30 FLOPs OR hardware that would cost $1bn if purchased through competitive cloud computing vendors at the time on a training run to develop a single ML model (excluding autonomous driving efforts)

This may actually happen (the $1bn one, not the 10^30 FLOPs one), also due to inflation and USD created out of thin air and injected into the market. I would go for no in 2026 and yes in 2030.

  • A model/ensemble of models will achieve >90% on the MATH dataset using a no-calculator rule

No in 2026, no in 2030. Significant algorithmic improvements needed. It may be done if prompt engineering is allowed.

  • A model/ensemble of models achieves >80% top-1 strict accuracy on competition-level problems on the APPS benchmark

No in 2026, no in 2030. Similar to the above, but there will be more progress, as a lot of data is available.

  • A gold medal for the IMO Grand Challenge (conditional on it being clear that the questions were not in the training set)

No in 2026, no in 2030. 

  • A robot that can, from beginning to end, reliably wash dishes, take them out of an ordinary dishwasher and stack them into a cabinet, without breaking any dishes, and at a comparable speed to humans (<120% the average time)

I work with smart robots; this cannot happen that fast, partly due to hardware limitations. The speed requirement is particularly harsh. Without the speed limit, and with the system known in advance, I would say yes in 2030. As the bet stands, I go for no in 2026, no in 2030.

  • Tesla’s full-self-driving capability makes fewer than one major mistake per 100,000 miles

Not sure about this one, but I lean towards no in 2026, no in 2030.

Comment by lorepieri (lorenzo-rex) on The Proof of Doom · 2022-04-10T16:56:34.714Z · LW · GW

This is a possible AGI scenario, but it's not clear why it should be particularly likely. For instance, the AGI may reason that going aggressive is also the fastest route to being terminated. Or the AGI may consider that keeping humans alive is good, since they were responsible for the AGI's creation in the first place.

What you describe is the paper-clip maximiser scenario, which is arguably the most extreme end of the spectrum of super-AGI behaviours.

Comment by lorepieri (lorenzo-rex) on The Proof of Doom · 2022-04-10T16:47:36.718Z · LW · GW

This would not be a conclusive test, but definitely a cool one, and it may spark a lot of research. Perhaps we could get started with something NLP-based, opening up more and more knowledge access to the AI in the form of training data. Probably still not feasible as of 2022 in terms of the raw compute required.

Comment by lorepieri (lorenzo-rex) on MIRI announces new "Death With Dignity" strategy · 2022-04-08T23:37:29.781Z · LW · GW

It would be good if you could summarise your strongest argument in favour of your conclusion "no alignment = bad for humanity".

Things are rarely black or white; I don't see a partially aligned AI as necessarily a bad thing.

As an example, consider the partial alignment between a child and their parent. A parent does not simply fulfil every desire of the child, but only a subset.

Comment by lorepieri (lorenzo-rex) on Convincing All Capability Researchers · 2022-04-08T23:24:25.611Z · LW · GW

Unpopular opinion (on this site, I guess): AI alignment is not a well-defined problem, and there is no clear-cut resolution to it. It will be an incremental process, similar to cybersecurity research.

About the money, I would do the opposite: select researchers who would do it for free, just pay their living expenses, and give them arbitrary resources.

Comment by lorepieri (lorenzo-rex) on The Proof of Doom · 2022-03-11T00:27:19.216Z · LW · GW

The newly-created AGI will immediately kill everyone on the planet, and proceed to the destruction of the universe. Its sphere of destruction will expand at light speed, eventually encompassing everything reachable.

Why?

In fact, if not consensus, then at least the majority opinion amongst those mathematicians, computer scientists, and AI researchers who have given the subject more than a few days thought.

Is this true, or have you asked only inside an AI-pessimistic bubble?

And if true, why should opinions matter at all? Opinions cannot influence the parts of reality that are outside human control.

Overall I don't see a clear argument for why we should be worried about AGI. Quite the contrary: building AGI is still an active area of research with no clear solution.

Comment by lorepieri (lorenzo-rex) on Can you prove that 0 = 1? · 2021-12-17T22:37:13.567Z · LW · GW

Consider modular arithmetic modulo 1. It is true that 0+0=0, 1+1=1, and indeed 0=1. What is this describing? A theory of complete nothingness.
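Spelled out: in the ring of integers modulo 1 there is only one residue class, so every equation collapses to the same statement.

```latex
% The trivial ring Z/1Z: every integer is congruent to 0 modulo 1.
\mathbb{Z}/1\mathbb{Z} = \{\,\overline{0}\,\}, \qquad 1 \equiv 0 \pmod{1},
\qquad \overline{0} + \overline{0} = \overline{0}.
```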

I'm working on something along these lines; there is much more structure than one would expect at first. Feel free to reach out privately.

Comment by lorepieri (lorenzo-rex) on Awesome-github Post-Scarcity List · 2021-11-22T22:25:47.135Z · LW · GW

Nothing much to add to gbear605's comment; there was no self-congratulatory intent here! I'm editing the title to make this a bit clearer.