Awesome-github Post-Scarcity List 2021-11-20T08:47:59.454Z
A Roadmap to a Post-Scarcity Economy 2021-10-30T09:04:29.479Z
On Falsifying the Simulation Hypothesis (or Embracing its Predictions) 2021-04-12T00:12:12.838Z


Comment by Lorenzo Rex (lorenzo-rex) on Awesome-github Post-Scarcity List · 2021-11-22T22:25:47.135Z · LW · GW

Nothing much to add to gbear605's comment; there was no self-congratulatory intent here! I'm editing the title to make this a bit clearer.

Comment by Lorenzo Rex (lorenzo-rex) on Awesome-github Post-Scarcity List · 2021-11-20T22:28:38.314Z · LW · GW

Awesome-github lists are indeed curated open-source lists. If you know of better resources, feel free to open a pull request so I can incorporate them, thanks!

Comment by Lorenzo Rex (lorenzo-rex) on Boring machine learning is where it's at · 2021-10-24T19:36:42.728Z · LW · GW

It's a good point, but it's like saying that to improve a city you can just bomb it and rebuild it from scratch. In reality, improvements need to be incremental and coexist with the legacy system for a while.

Comment by Lorenzo Rex (lorenzo-rex) on How would the Scaling Hypothesis change things? · 2021-08-14T22:42:46.819Z · LW · GW
  • Would your forecasts for AI timelines shorten significantly?

Yes, by 10-20 years, in particular for the first human-level AGI, which I currently forecast between 2045 and 2060.

  • Would your forecasts change for the probability of AI-caused global catastrophic / existential risks?

Not by much; I assign a low probability to AI-caused existential risk.

  • Would your focus of research or interests change at all?

Yes, in the same way that the classic computer vision field has been made pretty much obsolete by deep learning, apart from a few pockets and some simple use cases.

  • Would it perhaps even change your perspective on life?

Yes, positively. We would reach the commercialisation of AGI faster than expected, shortening the gap to a post-scarcity society.


That said, I don't believe in the scaling hypothesis. Even though NNs appear capable of simulating arbitrarily complex behaviours, I think we will soon hit a wall of diminishing returns, making it impractical to reach the first AGI this way.

Comment by Lorenzo Rex (lorenzo-rex) on Analysis of World Records in Speedrunning [LINKPOST] · 2021-08-05T20:59:23.345Z · LW · GW

Apparently many records have been affected by cheating:

Comment by Lorenzo Rex (lorenzo-rex) on The accumulation of knowledge: literature review · 2021-07-25T13:43:44.806Z · LW · GW

I will briefly give it a shot:

Operative definition of knowledge K about X in a localised region R of spacetime:

The number N of yes/no questions (bits of information) that a blank observer O can confidently answer about X by having access to R.



-Blank observer = no prior exposure to X. There is an obvious extension to observers who already know something about X.

-Knowledge makes sense only with respect to some entity X, and for a given observer O.

-Accessing K in a given R may be very difficult, so an extension of this definition enforces a maximum effort E required to extract K. The maximum N obtained in this way is K.

-Equivalently, this can be defined in terms of probability distributions which are updated after every interaction of O with R.

-This definition requires having access to X, to verify that the content of R is sufficient to unambiguously answer the N questions. As such, it's not useful for quantifying the accumulation of knowledge about things we don't know entirely. But this is to be expected; I'm pretty sure one can map this to the halting problem.

Anyway, in the future it may be handy, for instance, to quantify whether a computer vision system (and which part of it) has knowledge of the objects it is classifying, say an apple.

-To make the definition more usable, one can limit the pool of questions and see which fraction of them can be answered by having access to R.

-The number N of questions should be pruned into classes of questions, to avoid infinities (e.g. does an apple weigh less than 10kg? Less than 10.1kg? Less than 10.2kg? ...).
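As a toy illustration of the definition above (all questions, answers, and the confidence threshold are made-up assumptions, not part of the original): fix a finite pool of yes/no questions about X, let the observer answer them from the contents of R, and count the confident, correct answers.

```python
# Toy sketch of the operative definition: K is the number of yes/no
# questions about X that an observer can confidently answer from a
# region R. All data below is invented purely for illustration.

def knowledge(question_pool, observer_answers, ground_truth, threshold=0.9):
    """Count questions answered both confidently and correctly.

    observer_answers maps question -> (answer, confidence in [0, 1]),
    estimated by the observer purely from the contents of R.
    ground_truth maps question -> answer, obtained by direct access to X.
    """
    k = 0
    for q in question_pool:
        answer, confidence = observer_answers.get(q, (None, 0.0))
        if confidence >= threshold and answer == ground_truth[q]:
            k += 1
    return k

# A blank observer inspecting a region R containing a photo of an apple.
pool = ["is it edible?", "is it heavier than 10 kg?", "is it man-made?"]
truth = {"is it edible?": True,
         "is it heavier than 10 kg?": False,
         "is it man-made?": False}
answers = {"is it edible?": (True, 0.95),            # clearly visible in R
           "is it heavier than 10 kg?": (False, 0.99),
           "is it man-made?": (True, 0.3)}           # unsure: below threshold

print(knowledge(pool, answers, truth))  # -> 2, i.e. K = 2 for this pool
```

Limiting the pool, as suggested above, also makes the maximum-effort extension natural: E simply bounds how many interactions with R the observer gets before answering.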


Regarding your attempts at:

-Mutual information between region and environment: enforcing a maximum effort E implies that rocks hold a small amount of knowledge, since they are very hard to reverse-engineer.

-Mutual information over digital abstraction layers: the camera cannot answer yes/no questions, so it has no knowledge. But a human with access to that camera certainly has more knowledge than one without.

-Precipitation of action: knowledge is relative to an observer, so the map alone has no knowledge.

Comment by Lorenzo Rex (lorenzo-rex) on The BTC equilibriumating and the ETH one-eightening · 2021-05-27T22:59:30.846Z · LW · GW

-Polkadot has fewer than 300 validators at the moment; the system is not decentralised enough to withstand large attacks.

-Well, rising, or at least stable. Considering that gold's market cap is 10x Bitcoin's, and that Bitcoin can be gold 2.0, there is definitely a large upside left. See also the stock-to-flow model applied to Bitcoin.
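For reference, the stock-to-flow ratio is simply existing stock divided by annual new production; a minimal sketch with rough 2021-era figures (the numbers are approximations I'm supplying for illustration, not precise data):

```python
# Stock-to-flow ratio: existing stock divided by annual new supply.
# A higher S2F means a scarcer asset. Figures are rough approximations.

def stock_to_flow(stock, annual_flow):
    return stock / annual_flow

# Gold: ~190,000 tonnes mined, ~3,000 tonnes produced per year.
gold_s2f = stock_to_flow(stock=190_000, annual_flow=3_000)

# Bitcoin after the 2020 halving: ~18.8M coins in circulation,
# ~900 BTC/day of new issuance -> ~328,500 BTC/year.
btc_s2f = stock_to_flow(stock=18_800_000, annual_flow=328_500)

print(f"gold S2F ~ {gold_s2f:.0f}, bitcoin S2F ~ {btc_s2f:.0f}")
```

On these rough numbers Bitcoin's scarcity is already comparable to gold's, which is the core of the gold 2.0 argument.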

Comment by Lorenzo Rex (lorenzo-rex) on The BTC equilibriumating and the ETH one-eightening · 2021-05-25T23:23:20.907Z · LW · GW

The problem I see with Ethereum is the tech itself. Is building a scalable and decentralised blockchain possible at all? Ethereum needs to get it right within a few years, or it will lose its first-mover advantage and other chains will take the lead.

On the other hand, Bitcoin already works as a decentralised store of value and doesn't need extreme scalability, even though scalability would be beneficial (and is necessary for it to become a daily currency).

Comment by Lorenzo Rex (lorenzo-rex) on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-05-23T23:09:13.087Z · LW · GW

Those extended simulations are more complex than non-extended ones. The simplicity assumption tells you that extended simulations are less likely, and the distribution is dominated by non-extended simulations (assuming they are considerably less complex).

To see this more clearly, take the point of view of the simulators and, for simplicity, neglect all the simulations running at t=now. So, consider all the simulations ever run by the simulators so far that have finished; a simulation is considered finished when it is no longer run. If a simulation of cost C1 is "extended" to cost 2C1, then de facto we call it a C2 simulation. There is then a well-defined distribution of finished simulations: C1, C2 (including pure C2 and extended C1 sims), C3 (including pure C3, extended C2, very extended C1, and all the combinations), etc.

You can also include simulations running at t=now in the distribution, even though you cannot be sure how to classify them until they finish. Anyway, for large t the number of simulations running now will be small w.r.t. the number of simulations ever run.
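The accounting above can be sketched numerically: assign each finished simulation its total cost, extensions included, and weight each cost class by a simplicity prior. The example costs and the particular prior P(C) proportional to 2^-C are illustrative assumptions of mine, not claims from the thread:

```python
from collections import Counter

# Classify finished simulations by total cost: a C1 simulation extended
# by another C1 of compute is counted, de facto, as a C2 simulation.

def total_cost(base_cost, extensions):
    return base_cost + sum(extensions)

finished = [
    total_cost(1, []),      # pure C1
    total_cost(1, [1]),     # C1 extended once  -> counts as C2
    total_cost(2, []),      # pure C2
    total_cost(1, [1, 1]),  # very extended C1  -> counts as C3
    total_cost(3, []),      # pure C3
]

counts = Counter(finished)  # cost class -> number of finished sims

# Illustrative simplicity prior: weight on cost class C falls off as 2^-C,
# so the mix of simulations is dominated by the cheap, non-extended runs.
prior = {c: 2.0 ** -c for c in counts}

print(counts, prior)
```

The point of the sketch is just that once extensions are folded into total cost, "extended" is not a separate category: every extended run lands in some higher cost class, and the simplicity prior penalises it accordingly.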

Nitpick:  A simulation is never really finished, as it can be reactivated at any time. 

Comment by Lorenzo Rex (lorenzo-rex) on Re: Fierce Nerds · 2021-05-23T19:56:06.904Z · LW · GW

In my experience, that acceptance is due to a lack of skills/intelligence. Once you realise that you don't have enough skills/intelligence to withstand the (possible) consequences of speaking up, it is rational to comply with the rules and just hope that somebody else will bring the change.

Comment by Lorenzo Rex (lorenzo-rex) on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-15T00:17:38.973Z · LW · GW

Thanks for sharing, I will cite it in a future v2 of the paper.

I don't agree that simpler --> highest probability of glitches, at least not always. For instance, restrict to the case of the same universe-simulating algorithm running on differently sized portions of simulated space (at the same level of approximation): running the algorithm on larger spaces may lead to more rounding errors.
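The rounding-error point can be illustrated directly: summing more low-precision terms, a stand-in for simulating a larger space with the same algorithm and precision, accumulates more floating-point error. This is a generic numerical fact, not a model of any particular simulation:

```python
import numpy as np

# Accumulated rounding error of naive float32 summation grows with the
# number of terms N -- a proxy for "same algorithm, larger simulated space".

def naive_float32_sum(values):
    total = np.float32(0.0)
    for v in values:
        total = np.float32(total + np.float32(v))
    return float(total)

rng = np.random.default_rng(0)
errors = {}
for n in (10_000, 1_000_000):
    values = rng.random(n)
    exact = float(np.sum(values, dtype=np.float64))  # high-precision reference
    errors[n] = abs(naive_float32_sum(values) - exact)

print(errors)  # the error at N = 1e6 dwarfs the error at N = 1e4
```

So at a fixed level of approximation, the larger ("less simple" to run, not to describe) simulation is the one more likely to show numerical glitches.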

Comment by Lorenzo Rex (lorenzo-rex) on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-14T23:40:33.450Z · LW · GW

My view is that Kolmogorov complexity is the right simplicity measure for probabilistically or brute-force generated universes, as you also mention. But for intentionally generated universes the length and elegance of the program is not that relevant in determining how likely a simulation is to be run, while computational power and memory are hard constraints that the simulators must face.

For instance, while I would expect unnecessarily long programs to be unlikely to be run, if a long program L is 2x more efficient than a shorter program S, then I expect L to be more likely to be run (many more simulators can afford L, it is cheaper to run in bulk, etc.).
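One way to make the contrast concrete (a toy model of my own, with made-up lengths and costs): weight each candidate program not by its description length, as a Kolmogorov-style prior would, but by how affordable its runtime is.

```python
# Toy comparison of two priors over candidate simulation programs.
# Lengths (in bits) and runtime costs are invented illustrative numbers.

programs = {
    "S": {"length": 100, "runtime_cost": 2.0},  # shorter, but 2x less efficient
    "L": {"length": 200, "runtime_cost": 1.0},  # longer, but cheaper to run
}

# Kolmogorov-style prior: weight ~ 2^-length, so the shorter program wins.
kolmogorov = {name: 2.0 ** -p["length"] for name, p in programs.items()}

# Resource-based prior: weight ~ 1 / runtime_cost, a proxy for how many
# simulators can afford the program and how cheaply it runs in bulk.
resource = {name: 1.0 / p["runtime_cost"] for name, p in programs.items()}

print(kolmogorov["S"] > kolmogorov["L"])  # True: S favoured by length
print(resource["L"] > resource["S"])      # True: L favoured by runtime cost
```

The two priors rank S and L in opposite orders, which is exactly the disagreement: for intentional simulators the second ranking is the one that matters.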

Comment by Lorenzo Rex (lorenzo-rex) on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-14T21:55:15.715Z · LW · GW

Regarding the first point, yes, that's likely true: much easier. But if you want to simulate a coherent, long-lasting observation (so really a Brain in a Vat (BIV), not a Boltzmann Brain) you need to make sure you are sending the right perceptions to the brain. How do you know exactly which perceptions to send if you don't compute the evolution of the system in the first place? You would end up with conflicting observations. It's not much different from how current single-player videogames are built: only one intelligent observer (the player) and an entire simulated world. As we know, running advanced videogames is very compute-intensive, and a videogame simulating a large world is far more compute-intensive than one simulating a small world. Right now developers use tricks and inconsistencies to get around this; for instance, they don't keep in memory the footprints your character left ten hours of play ago in a distant part of the map.

What I'm saying is that there is no O(1) or O(log N) general way of simulating even just the perceptions of the universe. Just reading the input of the larger system to be simulated already takes O(N).

The probability you are speaking about is relative to quantum fluctuations or similar. If the content of the simulations is randomly generated, then surely Boltzmann Brains are by far the most likely. But here I'm speaking about the probability distribution over intentionally generated ancestor simulations. This distribution may contain a very low number of Boltzmann Brains, if the simulators do not consider them interesting.

Comment by Lorenzo Rex (lorenzo-rex) on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-13T22:23:59.386Z · LW · GW

I'm not sure I get what you mean by simpler universes. According to the SH, simulated universes greatly outnumber any real universes.

The bold claim is to be able to actually extract experimental consequences for passive simulations too, even if only probabilistically. Active simulations are indeed interesting because they would give us a way to prove that we are in a simulation, while the argument in the post can only disprove that we are in one.

A possible problem with active simulations is that they may be a very small percentage of the total, since they require someone actively interacting with the simulation. If this is true, we are very likely in a passive simulation.

Comment by Lorenzo Rex (lorenzo-rex) on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-13T21:59:59.831Z · LW · GW

Quantum computing is a very good point. I thought about it, but I'm not sure we should consider it "optional". Perhaps, to simulate our reality with good fidelity, simulating quantum effects is necessary rather than optional. If the simulators are already simulating all the quantum interactions in our daily life, building quantum computers would not really increase the power consumption of the simulation.

Comment by Lorenzo Rex (lorenzo-rex) on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-13T21:53:59.848Z · LW · GW

It is surely hard and tricky.

One of the assumptions of the original simulation hypothesis is that there are many simulations of our reality, and therefore we are in a simulation with probability close to 1. I'm starting from the assumption that the SH is true and extrapolating from there.

Boltzmann Brains are incoherent random fluctuations, so I tend to believe they should not emerge in large numbers from an intentional process. But other kinds of solipsistic observers may indeed come to dominate. Even in that case, the predictions of SH+SA still hold, since simulating the Milky Way for a solo observer is still much harder than simulating only the solar system for one.