Doing "good" 2022-01-13T19:52:02.119Z
Emotional microscope 2021-09-20T21:37:30.034Z
A gentle apocalypse 2021-08-16T05:03:32.210Z
Is social theory our doom? 2021-07-15T03:31:16.192Z
Does butterfly affect? 2021-05-14T04:20:58.374Z
Mindfulness as debugging 2021-04-30T16:59:12.834Z
Our compressed perception 2021-04-06T11:01:11.244Z
Objective truth? 2021-02-15T21:47:35.973Z


Comment by pchvykov on Doing "good" · 2022-01-17T18:51:28.185Z · LW · GW

oooh, don't get me started on expectation values... I have heated opinions here, sorry. The two easiest problems with expectations in this case are that, to average something, you need to average over some known space, according to some chosen measure - neither of which will be by any means obvious in a real-world scenario. More subtly, with real-world distributions, expectation values can often be infinite or undefined, and the median might be more representative - but then how do you choose between the mean, the median, or something else?
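A quick sketch of the "infinite or undefined" point (the Cauchy distribution is my choice of example, not anything from the thread): the sample mean of a heavy-tailed distribution never settles down as you add data, while the median stays put.

```python
import math
import random
import statistics

# Draw standard Cauchy samples via the inverse-CDF trick: tan(pi*(U - 1/2)).
# The Cauchy mean is undefined, but its median is exactly 0.
random.seed(0)
samples = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(100_000)]

# Running means keep wandering (one huge outlier can move them arbitrarily far)...
running_means = [statistics.fmean(samples[:n]) for n in (100, 1_000, 10_000, 100_000)]
print(running_means)

# ...while the median concentrates tightly around 0.
print(statistics.median(samples))
```

Which is exactly the problem: for such a distribution there is no "expected value" to report at all, and you're forced to pick some other summary statistic.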

Comment by pchvykov on Doing "good" · 2022-01-17T18:46:54.430Z · LW · GW

To me, the counter-argument to saving drowning children isn't the admittedly unlikely "Hitler" one, but more the "let them learn from their own mistakes" one - some will learn to swim and grow up more resilient, and some won't. The long-term impact of this approach on our species seems much harder to quantify.

Comment by pchvykov on Doing "good" · 2022-01-17T18:41:42.024Z · LW · GW

wonderful - thanks so much for the references! "moral case against leaving the house" is a nice example to have in the back pocket :)

Comment by pchvykov on Doing "good" · 2022-01-13T19:56:59.380Z · LW · GW

Just read a bit about rationalist understanding of "ritual" - seems that I'm sort of arguing that the value in donating is largely ritualistic :)

Comment by pchvykov on Emotional microscope · 2021-10-14T23:06:19.691Z · LW · GW

Wow, wonderful analysis! I'm mostly on board - except maybe I'd leave some room for doubt about some of the claims you're making.

And your last paragraph seems to suggest that a "sufficiently good and developed" algorithm could produce large cultural change? 
Also, you say "as human mediators (plus the problem of people framing it as 'objective'), just cheaper and more scalable" - to me that would be quite a huge win! And I sort of thought that "people framing it as objective" is a good thing - why do you think it's a problem?
I could even go as far as saying that even if it was totally inaccurate, but unbiased - like a coin-flip - and if people trusted it as objectively true, that would already help a lot! Unbiased = no advantage to either side. Trusted = no debate about who's right. Random = no way to game it.

Comment by pchvykov on Emotional microscope · 2021-09-24T03:26:51.699Z · LW · GW

Cool that you find this method so powerful! To me it's a question of scaling: do you think personal mindfulness practices like Gendlin's Focusing are as easy to scale to a population as a gadget that tells you some truth about yourself? I guess each of these faces very different challenges - but so far, experience seems to show that we're better at building fancy tech than we are at learning to change ourselves.
What do you think is the most effective way to create such a culture shift?

Comment by pchvykov on Emotional microscope · 2021-09-21T20:08:44.475Z · LW · GW

Thanks for such a thoughtful reply - I think I'm really on board with most of what you're saying.

I agree that analysis is the hard part of this tech - and I'm hoping that this is what's just now becoming possible to do well with AI - check out, for example:

Another point I think is important: you say "Emotions aren't exactly impossible to notice and introspect honestly on." Having done some emotional-intelligence practice over the last few years, I'm very aware of how difficult it is to honestly introspect on my own emotions. It's rather like trying to objectively gauge my own attractiveness in photos - really tough to be objective! And I think this is one place an AI could really help (they're actually building one for attractiveness now too).

I see your point that the impact will likely be marginal compared to what we already have now - and I'm wondering if there is some way we could imagine applying such technology to have a revolutionary impact, without falling into Orwellian dystopia. Something like creating inevitable self-awareness, emotion-based success metrics, or conscious governance.

Any ideas how this could be used to save the world? Or do you think there isn't any real edge it could give us?

Comment by pchvykov on A gentle apocalypse · 2021-08-16T17:39:00.588Z · LW · GW

yeah, I can try to clarify some of my assumptions - this probably won't be fully satisfactory to you, but here's a bit:

  • I'm trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian)
  • I'm assuming that the question "is AI conscious?" is fundamentally ill-posed, as we don't have a good definition of consciousness - hence I'm imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having "interests at heart" or doing anything "deliberately"
  • and so yes, I'm suggesting that humans may be having too much fun to reproduce with other humans - nor will they feel much need to. It's more a matter of a certain carelessness than deliberate suicide.

Comment by pchvykov on A gentle apocalypse · 2021-08-16T14:43:34.171Z · LW · GW

  1. Not sure I understand you here. Our AI will know the things we trained it and the tasks we set it - so to me it seems it will necessarily be a continuation of things we did and wanted. No?
  2. Well, in some sense yes, that's sort of the idea I'm entertaining here: while these things all do matter, they aren't the "end of the world" - humanity and human culture carry on. And I have the feeling that it might not be so different even if robots take over.

[of course, in the utilitarian sense such violent transitions are accompanied by a lot of suffering, which is bad - but in a consequentialist sense purely, with a sufficiently long time-horizon of consequences, perhaps it's not as big as it first seems?]

Comment by pchvykov on Does butterfly affect? · 2021-05-15T05:46:25.758Z · LW · GW

Yeah, I'm quite curious to understand this point too - I'm certainly not sure how far this reasoning can be applied (and whether Ferdinand is too much of a stretch). I was thinking of this assassination as the "perturbation in a super-cooled liquid" - where the overall geopolitical tension was really the dominant cause, and anything could have set off the global phase transition. Though this gets back to the limitations of counter-factual causality in the real world...

Comment by pchvykov on Does butterfly affect? · 2021-05-15T05:29:55.930Z · LW · GW

cool - and I appreciate that you think my posts are promising! I'm never sure if my posts have any meaningful 'delta' - seems like everything's been said before. 

But this community is really fun to post for, with meaningful engagement and discussion =)

Comment by pchvykov on Does butterfly affect? · 2021-05-15T05:21:00.933Z · LW · GW

hmm, so what I was thinking is whether we could give an improved definition of causality based on something like "A causes B iff the model [A causes B] performs better than other models in some (all?) games / environments" - which may have a funny dependence on the game or environment we choose.

Though as hard as the counterfactual definition is to work with in practice, this may be even harder... 

Your post may be related to this, though not the same, I think. I guess what I'm suggesting isn't directly about decision theory.
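As a toy sketch of what I mean (entirely my own construction - the "game" here is just next-symbol prediction): declare "A causes B" when a predictor that uses A beats one that ignores A.

```python
import random

random.seed(1)

def simulate(n=10_000):
    """Generate a world where B copies A 90% of the time (A really does drive B)."""
    data = []
    for _ in range(n):
        a = random.random() < 0.5
        b = a if random.random() < 0.9 else not a
        data.append((a, b))
    return data

data = simulate()

# Model 1 uses A: predict B = A.
acc_with_a = sum(b == a for a, b in data) / len(data)

# Model 2 ignores A: always predict the majority value of B.
majority = sum(b for _, b in data) >= len(data) / 2
acc_without_a = sum(b == majority for _, b in data) / len(data)

print(acc_with_a, acc_without_a)  # the A-based model wins, so this game says "A causes B"
```

Of course, as written this game only detects correlation - a full version would have to score models under interventions (the agent sets A directly), which is where it would start to diverge from plain predictive accuracy.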

Comment by pchvykov on Does butterfly affect? · 2021-05-15T04:53:05.996Z · LW · GW

wow, some Bayesian updating there - impressive! :)

Comment by pchvykov on Does butterfly affect? · 2021-05-15T01:14:18.125Z · LW · GW

I'm not sure why this was crossed out - seems quite civil to me... And I appreciate your thoughts on this!

I do think we agree at the big-picture level, but have some mismatch in details and language. In particular, as I understand J. Pearl's counter-factual analysis, you're supposed to compare this one perturbation against the average over the ensemble of all possible other interventions. So in this sense, it's not about "holding everything else fixed," but rather about "what are all the possible other things that could have happened."

Comment by pchvykov on Does butterfly affect? · 2021-05-15T00:58:42.598Z · LW · GW

Yes!! Very cool - going even one meta level up. I agree that the usefulness of a proposed model is certainly the ultimate judge of whether it's "good" or not. To make this even more concrete, we could try to construct a game and compare the mean performance of two agents holding the two models we want to compare... I wonder if anyone's tried that... As far as I know, the counterfactual approach is "state of the art" for understanding causality these days - and it is a bit lacking for the reason you say. This could be a cool paper to write!

Comment by pchvykov on Does butterfly affect? · 2021-05-15T00:50:30.358Z · LW · GW

ah yes, great minds think alike! =)

What I really like about J. Pearl's counter-factual causality framework is that it gives a way to make these arguments rigorously, and even to precisely quantify "how much did the butterfly cause the tornado" - in bits!
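To make "in bits" concrete, here's a minimal made-up example (the numbers are hypothetical, not Pearl's): quantify A's influence on B as the mutual information between a randomized intervention on A and the outcome B.

```python
from math import log2

# Hypothetical interventional distribution: P(B=1 | do(A=a)) for a binary cause.
p_b_do = {0: 0.2, 1: 0.8}   # intervening on A clearly shifts B
p_a = {0: 0.5, 1: 0.5}      # we intervene uniformly at random

# Marginal distribution of B under randomized interventions.
p_b1 = sum(p_a[a] * p_b_do[a] for a in (0, 1))
p_b = {1: p_b1, 0: 1 - p_b1}

# Mutual information I(A; B) in bits between the intervention and the outcome.
mi = sum(
    p_a[a] * (p if b else 1 - p) * log2((p if b else 1 - p) / p_b[b])
    for a, p in p_b_do.items()
    for b in (0, 1)
)
print(round(mi, 3))  # → 0.278 bits of causal influence
```

If the intervention didn't shift B at all (p_b_do identical for both values of A), this would come out to exactly 0 bits - "the butterfly didn't cause the tornado."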

Comment by pchvykov on Does butterfly affect? · 2021-05-15T00:45:44.752Z · LW · GW

Cool - thanks for your feedback! I agree that I could be more rigorous with my terminology. Nonetheless, I do think I have a rigorous argument underneath all this - even if it didn't come across. Let me try to clarify:

I did not mean to refer to human intentionality anywhere here. I was specifically trying to argue that the "chaos-theory definition of causality" you give, while great in idealized deterministic systems, is inadequate in complex messy "real world." Instead, the rigorous definition I prefer is the counter-factual information theoretic one, developed by Judea Pearl, and which I here tried to outline in layman's terms. This definition is entirely ill-posed in a deterministic chaotic system, but will work as soon as we have any stochasticity (from whatever source).

Does this address your point at all, or am I off-base?

Comment by pchvykov on Mindfulness as debugging · 2021-05-05T21:48:20.067Z · LW · GW

That's an interesting question - I was assuming that there is a sort of "natural selection" process that acts over generations and picks out the "best" algorithms. With that, I can understand your comment in two ways:

  1. the selection pressures may not be directed at individual benefit, but rather at group survival or optimal transmission (rules that are easier to remember are easier to pass down)
  2. the selection that led to our algorithms may be outdated in our modern world

Am I getting it, or did you have something else in mind?

Comment by pchvykov on Utility Maximization = Description Length Minimization · 2021-04-06T10:57:31.471Z · LW · GW

Thanks for your interest - really nice to hear! Here is a link to the videos (and supplement):

Comment by pchvykov on Utility Maximization = Description Length Minimization · 2021-02-25T11:28:06.844Z · LW · GW

I'm really excited about this post, as it relates super closely to a recent paper I published (in Science!) about the spontaneous organization of complex systems - like when a house builds itself somehow, or utility self-maximizes just by following the natural dynamics of the world. I have some fear of spamming, but I'm really excited others are thinking along these lines - so I wanted to share a post I wrote explaining the idea in that paper.

Would love to hear your thoughts!