Posts

Developing Positive Habits through Video Games 2024-08-24T03:47:26.427Z
pzas' Shortform 2024-08-17T15:00:57.222Z
The Process of Unfolding Truth and its Relations with Morality 2024-08-16T18:01:52.961Z

Comments

Comment by pzas (pizzaslice) on Limitations on Formal Verification for AI Safety · 2024-09-08T17:25:50.059Z · LW · GW

The context is AI safety and naturally you're meant to interpret the bad actor as having access to and using a powerful AI.

Comment by pzas (pizzaslice) on Developing Positive Habits through Video Games · 2024-08-29T08:32:37.771Z · LW · GW

Thanks for sharing

Comment by pzas (pizzaslice) on Developing Positive Habits through Video Games · 2024-08-29T08:12:10.305Z · LW · GW

I was wondering if something could be designed that's almost as simple as pulling a lever, that you can do as many times as you want, and that's nowhere near as addictive as ordinary games, where each activation of the circuit would strengthen it measurably. My intuition tells me no, but if it is possible somehow, I think that would be a game changer.

Now I wonder if a video, or a song, etc could do it. Or a writing/reading task. Something low complexity, repeatable, non-addictive, non-time consuming.

You could give it to an alcoholic at a low price, and they could use it 5 minutes a day instead of spending a fortune on therapy. (It's starting to sound like meditation.)

I think it's meditation

Comment by pzas (pizzaslice) on RobertM's Shortform · 2024-08-24T02:47:33.348Z · LW · GW

I agree, if their decision was voluntary and a product of at least some reflection. Sometimes you're mad at them precisely because they signed up for it.

From my perspective, when people make decisions, the decision involves many problems at once, at different time scales. Some are in the present, others in the future, or in the far future. Even if they calculate the consequences correctly, there's an action potential required. From the perspective of a different person, a decision might look simple - do x vs. y - and require little physical energy. But it's not so simple at the level of the neurons. The circuit for making good decisions needs to be sufficiently developed, and the conditions for developing it are relatively rare.

One condition might be gradual exposure to increasing complexity in the problems that you solve, so that you can draw the appropriate conclusions, extract the lesson at each level of complexity, and develop healthy habits. But most people are faced with the full force of complexity of the world from the day they're born. Imperfect upbringing, parents, environments, social circles.

When people make irrational decisions, in many cases I don't believe it's because they don't know better.

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-22T14:42:15.899Z · LW · GW

Really appreciate you taking the time to write this. No doubt there's a lot I can learn from reflecting more on this, and I will in my own time. I can better understand what you mean now, and I definitely agree. It nuances the proper way to propagate information a bit more. Incomplete ideas can be valuable, but you'll want to develop them enough that they can be understood from first principles and address likely sources of misunderstanding. There are features of knowledge communication that make ideas less prone to mutating counter-productively.

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-22T09:03:43.417Z · LW · GW

I think there's room for doubt in that claim, but my sense is that most ideas that are thought of are never shared. There's so much that could be improved everywhere, and no culture is nurtured where people speak their minds about it to get the discussion going. It's actually socially awkward to point out room for improvement in things, even if it's cliche to say it would be helpful. As a consequence, the average conversation is predictable and sterile, and improvements aren't implemented even when they're obvious. Improvements don't happen because people don't like to entertain ideas.

Although what I really intended to say in the post is that I regret it when people don't voice their ideas because they can't find the perfect way to say them, or haven't fully thought them through. In discussions, for instance, there's a lot you want to say, but you'd rather take time perfecting the formulation of your thoughts.

You're trying to communicate a complex idea and you know that what you wrote doesn't fully capture everything. I think that there's so much value to putting it out there even in incomplete form and I wish people would do that more, instead of letting it be hidden away.

In terms of mutation, information has more potential to mutate positively when it's shared than when it sits only in your mind.
______________________________________________________________________________
"the average conversation is predictable and sterile" - I don't mean this the wrong way; I just couldn't think of another way to put it in 10 seconds.

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-22T06:08:28.261Z · LW · GW

I'm telling you there's no base reality

The simulation is infinitely nested!

I bet you 10 bucks the universe was simulated so I can bet 10 bucks on it (among other things)

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-22T02:45:14.095Z · LW · GW

You can rarely write a complete representation of any idea, and it's more valuable to let bits and pieces of the truth propagate through society than to keep it in your mind until you can write a perfect and complete representation, risking that it never sees the light of day. Information needs to mutate.

Comment by pzas (pizzaslice) on Me & My Clone · 2024-08-21T18:44:41.411Z · LW · GW

Maybe you could treat each other as two parts of the same system. It hurts to pull your hair out, so suppose you had some scissors too. You could cut your hair and scatter it on the floor. Air movement is really sensitive to initial conditions (consider Brownian motion), so suppose you blow on the hairs and you both decide to pick them up. Since you're at different distances from the hair strands, you could break the symmetry?

Comment by pzas (pizzaslice) on Limitations on Formal Verification for AI Safety · 2024-08-20T08:49:15.178Z · LW · GW

I imagine that the behavior of strong AI, even narrow AI, is computationally irreducible. In that case would it still be verifiable?
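To make the irreducibility intuition concrete: a classic example of a (conjecturally) computationally irreducible system is Wolfram's Rule 110 cellular automaton, where no known shortcut predicts the state at step t short of simulating all t steps. A minimal sketch (the grid size, starting cell, and step count are arbitrary illustrative choices):

```python
# Rule 110: a simple cellular automaton believed to be computationally
# irreducible -- predicting a later state generally requires simulating
# every intermediate step, with no closed-form shortcut.

def rule110_step(cells):
    """Apply one step of Rule 110 to a tuple of 0/1 cells (fixed-0 boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        center = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (center << 1) | right
        # Rule 110's lookup table is the binary expansion of 110 (0b1101110).
        out.append((110 >> pattern) & 1)
    return tuple(out)

def simulate(cells, steps):
    """The only known general way to get the state after `steps`: iterate."""
    for _ in range(steps):
        cells = rule110_step(cells)
    return cells

# Start from a single live cell and watch complexity grow step by step.
initial = tuple(1 if i == 31 else 0 for i in range(64))
final = simulate(initial, 50)
print(sum(final), "live cells after 50 steps")
```

If an AI's behavior were irreducible in this sense, a verifier could seemingly do no better than running the system itself, which is the worry raised above.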

Comment by pzas (pizzaslice) on Limitations on Formal Verification for AI Safety · 2024-08-20T07:11:04.206Z · LW · GW

To add to the deadly virus point, a bad actor could design both the virus and the cure, vaccinate themselves, and then spread the virus to everyone else. I had the same thought and have always been afraid of giving people ideas. I'm still uncomfortable that it's being discussed, even if I know others will probably think of it.

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-19T11:42:27.265Z · LW · GW

If intelligence and consciousness are properly understood as processes, it makes more sense that they would emerge from unconscious constituents like atoms. But it also means that arguments like Searle's Chinese Room are wrong, because consciousness doesn't arise from the program/rule book but from the physical execution of it. And indeed, the execution couldn't have happened without a consciousness - in the case of the Chinese Room argument, the person's.

Searle makes a distinction between syntax and semantics, but perhaps semantics is a non-sensory perception that comes purely from conceptual associations, which can be reduced to syntax and the way information is organized.

Comment by pzas (pizzaslice) on Emergence, The Blind Spot of GenAI Interpretability? · 2024-08-19T02:36:01.436Z · LW · GW

General intelligence might be an emergent property - something you can get from scaling a model. But it's not clear what the basic model is that, if scaled, leads to it. It would be interesting to consider how to make progress on identifying what that is. How do you know if the model you're scaling has a peak intelligence that falls short of 'general intelligence'? How do you know when it's time to stop scaling and explore a new model?

I guess there's a hard limit on the scale of models that can be explored, though. If it's not practical and it doesn't cut it, it's time to try something new. But it's still interesting to ask whether there's any way to determine that there's still juice in the model that hasn't been squeezed out. Identifying the scale required to achieve general intelligence, or even a vague sense of it, feels important.

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-17T20:09:07.708Z · LW · GW

I see what you mean. Another thing is that you could take the whole class of nested simulations and regard it as base, or simulated, ad infinitum. Maybe the term 'simulation' is systematically ambiguous.

Thanks for sharing your thoughts

I still think it's an infinitely nested simulation with no base.

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-17T15:39:48.617Z · LW · GW

But it's impossible to tell whether a simulator is intervening in a simulated reality or not. The simulator could be uninterested in making interventions, or they could have designed the system so that you couldn't tell, because nothing happens that's outside its laws of physics.

I do believe it's important to identify what differentiates base realities from simulated realities, though. I'd love to hear others' thoughts on what that could be.

Comment by pzas (pizzaslice) on pzas' Shortform · 2024-08-17T15:00:57.910Z · LW · GW

Elon Musk believes that the probability of our reality being the base reality is really low, because the base reality can spawn uncountably many nested simulations. But come to think of it, within the limits of our imagination, it doesn't seem conceivable for a reality to exist unless it's being simulated. It might be the case that there is no base reality and we live in an infinite simulation fractal. Perhaps there's a sense in which everything that's simulated is just as real as the layer of reality that simulates it, even if it's only an incomplete slice of a possible reality that could have a complete simulation of its own.

Comment by pzas (pizzaslice) on Our time in history as evidence for simulation theory? · 2024-08-17T00:27:59.244Z · LW · GW

Elon Musk thinks that the probability that we're the base reality is almost 0 because on top of the base reality there could be infinitely many simulations. 

But if you think about it and ask yourself how the base reality could exist unless it's being simulated, there doesn't seem to be a conceivable answer. So it might be the case that we're living in a simulation fractal, and anything that's simulated should be regarded as real in the same way as what we conventionally regard as the base reality, even if the realities we simulate are not as fleshed out as our own reality (which starts to look 'fabricated' and fuzzy at the quantum scale, so to speak).
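The arithmetic behind the "almost 0" claim can be sketched as a toy model: if each reality spawns b child simulations, down k levels, a randomly chosen reality is the base one with probability 1 over the geometric sum 1 + b + b² + … + bᵏ, which shrinks toward zero as the tree deepens. The branching factor and depth below are arbitrary illustrative numbers, not anything from the argument itself:

```python
# Toy model of the simulation argument: a tree of realities where each
# layer spawns `b` child simulations, down to depth `k`. The fraction of
# all realities that are "base" is 1 / (1 + b + b^2 + ... + b^k).

def base_reality_fraction(b, k):
    total = sum(b**i for i in range(k + 1))  # geometric series of realities
    return 1 / total

# Even modest branching drives the base-reality fraction toward zero.
for depth in (1, 3, 10):
    print(depth, base_reality_fraction(2, depth))
```

With no finite depth at all - the infinitely nested case suggested above - the fraction goes to zero exactly, which is one way to read the "no base reality" intuition.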

Comment by pzas (pizzaslice) on When is a mind me? · 2024-08-16T15:20:52.048Z · LW · GW

If you take a snapshot of time, you're left with a non-evolving slice of a human being: just the configuration of atoms at that time slice. There is no information there other than the configuration of the atoms (never mind velocity etc., because we're talking about a single time slice, and those quantities require more than one).

It would be hard to accept that you are nothing more than the configuration of the atoms, so let's say you're not the configuration. My sense is that you are the way the configuration evolves, and the way the configuration evolves can be replicated in different instances of the same type of atoms - and also in different mediums, as long as you can make a one-to-one mapping between the two evolving substrates. If they evolve in the same way, they will behave in the same way, and it would be you.
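The "one-to-one mapping between evolving substrates" idea can be illustrated with a toy example: two systems with different representations but the same dynamics, where a bijection commutes with both update rules. The counter/compass example below is entirely hypothetical, chosen only to show the structure of the claim:

```python
# Toy illustration of "same evolution under a one-to-one mapping":
# two different substrates (integers vs. strings) running identical dynamics.
# If a bijection f satisfies f(step_A(s)) == step_B(f(s)) for every state,
# the two systems are behaviorally indistinguishable.

def step_numeric(state):
    """Substrate A: a counter modulo 4."""
    return (state + 1) % 4

LABELS = ["north", "east", "south", "west"]

def step_symbolic(state):
    """Substrate B: the same dynamics over compass labels."""
    return LABELS[(LABELS.index(state) + 1) % 4]

def to_symbolic(state):
    """The one-to-one mapping between the substrates."""
    return LABELS[state]

# Check the mapping commutes with evolution for every state.
for s in range(4):
    assert to_symbolic(step_numeric(s)) == step_symbolic(to_symbolic(s))
print("the two substrates evolve identically under the mapping")
```

On the view above, what matters is exactly this commuting structure: any substrate whose evolution maps one-to-one onto yours would, in the relevant sense, be you.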