Comments
Thanks for this - and sorry I missed it earlier. I marked a couple of your statements that especially hit me.
Overall great analysis - I'm only a bit more positive about these things myself. Mainly:
- I do think science is a tool that may be qualitatively different than all other tools we have - just looking at the impact it managed to have on the world. And in so far as it is "special," it's worth considering whether it can solve our problem
- I don't think it's ever possible to completely understand the world - and so there will always be "God at the bottom of the glass" - no matter how much we do solve. I also don't think there is anything bad about completely understanding some aspect of the world - our wonder just transitions to another aspect. Planetary motion no longer creates wonder, we take it for granted, and use it as a workhorse on which we build other things that do generate wonder (like "why are the laws what they are?"). The things we completely understand we just stop noticing really - but we start noticing other things. Driving a car becomes mundane once you're good at it - but the road and places you can visit become exciting.
Great points - thanks for your thoughts on this! 2 questions:
1) Do you think it may be better to "wrap science in spirituality" instead? Or should we just leave them segregated as they are today?
2) My suggestion here was that we adjust what science is so that it no longer creates the problems you are pointing at. Specifically perhaps we can relax the "3rd person objective observer" paradigm and give more weight to 1st person perspective as well. I do believe that science can be quite spiritual and can generate well-being when it's driven by genuine wonder, curiosity and intention to make life more wonderful. Does this fit with your thoughts?
So my understanding is that the more everyone uses this strategy, the more the prices of different stocks get correlated (you sell the stock that went up, which drops its price back down), and that reduces your ability to diversify (the challenge becomes finding uncorrelated assets). But yeah, I'm not a finance person either - just played with this for a few months...
Well, if you take simple returns, then the naive mean and std give you the arithmetic average return and its volatility. If you use log returns, then you'd get the average compounded (log) growth rate - which you can exponentiate to get the geometric growth factor if you need.
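To make the two conventions concrete, here's a minimal numpy sketch (the price series is made up):

```python
import numpy as np

prices = np.array([100.0, 110.0, 99.0, 108.9])  # hypothetical price series

simple = prices[1:] / prices[:-1] - 1         # simple (arithmetic) returns
logret = np.log(prices[1:] / prices[:-1])     # log returns

print(simple.mean(), simple.std())  # naive mean/std: arithmetic average return
print(logret.mean(), logret.std())  # mean log return: compounded growth rate

# exp of the mean log return recovers the per-period geometric growth factor:
print(np.exp(logret.mean()), (prices[-1] / prices[0]) ** (1 / (len(prices) - 1)))
```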
Thanks for expanding on this stuff - really nice discussion!
Yeah that stock-market analogy is quite tantalizing - and I like the breadth that it could apply to.
For your discussion on "unnatural" - sure, I agree with the sentiment - but it's the question of how to formalize all this so that it can produce a testable, falsifiable theory that I'm unclear on. Poetically it's all great - and I enjoy reading philosophical treatises on this - but they always leave me wanting, as I don't get something to hold onto at the end, something I can directly and tangibly apply to decision-making.
For your last paragraph, yeah that emphasis on "relational" perspective of reality is what I'm trying to build up and formalize in this post. And yes, it's a bit hypocritical to say that "ultimately reality is relational" ;P
Great points - I'm more-or-less on-board with everything you say. Ontology in QM is, I think, quite inherently murky - so I try to avoid talking about "what's really real" (although personally I find the Relational QM perspective on this to be the most clear - and with some handwaving I think I could carry it over to QD).
Social quantum darwinism - yeah, sounds about right. And yeah, the word "quantum" is a bit ambiguous here - it's a bit of a political choice whether to use it or avoid it. Although besides superpositions and tensor products, quantum cognition also includes collapse - and that's taking quite a few (yes, not all!) ingredients from the quantum playbook - perhaps enough to warrant the name?
"There can never be an 'objective consensus' about what happens in the bomb cavity."
Ah, nice catch - I see your point now, quite interesting. Now I'm curious whether this bomb-testing setup makes trouble for other quantum foundation frameworks too...? As for QD, I think we could make it work - here is a first attempt, let me know what you think (honestly, I'm just using decoherence here, nothing else):
If the bomb is 'live', then the two paths will quickly entangle many degrees of freedom of the environment, and so you can't get reproducible records that involve interference between the two branches. If the bomb is a 'dud', then the two paths remain contained to the system, and can interfere before making copies of the measurement outcomes.
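If it helps, here's a toy numpy version of this argument (my own sketch): a Mach-Zehnder path qubit plus a single environment qubit, where a 'live' bomb is modeled purely as a which-path record copied into the environment (ignoring the explosion itself):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 50/50 beamsplitter on the path qubit
I2 = np.eye(2)
# CNOT copying the path state into the environment qubit:
COPY = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def detector_probs(live_bomb):
    psi = np.kron([1.0, 0.0], [1.0, 0.0])  # photon in path |0>, environment in |0>
    psi = np.kron(H, I2) @ psi             # first beamsplitter: superposition of paths
    if live_bomb:
        psi = COPY @ psi                   # live bomb records which-path info
    psi = np.kron(H, I2) @ psi             # second beamsplitter: paths recombine
    # detector probabilities = sum over the unobserved environment states
    return psi[0]**2 + psi[1]**2, psi[2]**2 + psi[3]**2

print("dud :", detector_probs(False))  # ~(1.0, 0.0): full interference, dark port silent
print("live:", detector_probs(True))   # ~(0.5, 0.5): decoherence kills the interference
```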
Honestly, I have a bit of trouble arguing about quantum foundations approaches, since they all boil down to the same empirical predictions (sort of by definition) and most are inherently not falsifiable - so ultimately it feels like a personal preference of which argumentation you find convincing.
Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it?
I just meant that good-old scientific method is what we used to prove classical mechanics, statistical mechanics, and QM. In either case, it's a matter of anyone repeating the experiment getting the same outcome - whether this outcome is "ball rolls down" or "ball rolls down 20% of the time". I'm trying to see if we can say something in cases where no outcome is quite reproducible - probabilistic outcome or otherwise. Knightian uncertainty is one way this could happen. Another is cases where we may be able to say something more than "I don't know, so it's 50-50", but where that's the only truly reproducible statement.
Thanks for sharing your thoughts - cool ideas!
Yes, I've actually thought that human interactions may be well modeled as a stock-market... never actually looked into whether this has been done though. And yes, maybe such model could be framed using this network-type setup I described... could be interesting - what if different cliques have different 'stock' valuation?
"...the more unnatural said law is." - the word 'natural' is a bit of a can of worms... I guess your statement could be viewed as an interesting definition of 'natural'? E.g., in nonequilibrium stat mech you can quantify a lower-bound on energy expenditure to keep something away from the equilibrium distribution. E.g., I've thought of applying this to quantify minimum welfare spending needed to keep social inequality below some value. But here maybe you're thinking more general? I just think 'natural' or 'real self' are really slippery notions to define. E.g., is all life inherently unnatural since it requires energy expenditure to exist?
"As if the brain experiences a linear combination of conflicting things." - that's precisely the sort of observations that Quantum Cognition models using quantum-like state-vectors. And precisely the sort of thing this framework I'm describing could help to explain perhaps.
"It feels sort of like a set trying to put itself inside itself?" - nice one! And there was a time when ancient Greek philosophers conclusively 'proved' to themselves the impossibility of ever fully understanding what matter is made of, and figured it's better to spend time on moral philosophy. Now, the former is basically solved, and the latter is still very much open. So I don't buy into no-go theorems much...
Thanks for your comments! I'm having a bit of trouble clearly seeing your core points here - so forgive me if I misinterpret, or address something that wasn't core to your argument.
To the first part, I feel like we need to clearly separate QM itself (Copenhagen), different Quantum Foundation theories, and Quantum Darwinism specifically. What I was saying is specifically about how Quantum Darwinism views things (in my understanding) - and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can't use QM arguments here. So QD says that (alive, dead) is the complete list because of consensus (i.e., in this view, there isn't anything more fundamental than consensus).
I don't think I agree with (or don't understand what you mean by) "including the superposition of dead and alive leads to actual physical consequences" - the bomb-testing result is a consequence of standard QM, so it doesn't prove anything "new."
To the second part, I implicitly meant that reproducibility could be either deterministic (reproducibility of a specific outcome) or statistical (reproducibility of the probability of an outcome over many realizations) - I don't really see those two as fundamentally different. In either case, we think of objective truth (whether probabilistic or deterministic) as something derived from reproducibility - so, for example, excluding Knightian uncertainty.
Re: "so you're telling me that if we kill everyone who we don't like, that means our values are objectively good?" - winners write history, so I think yes, that is how people view Darwinism, selection of values, and I think implicitly our values are derived from this thinking (though no-one will ever admit to this). The modern values of tolerance I think still come from this same thinking - just with the additional understanding that diverse societies tend to win-out over homogeneous societies. So we transition from individual Darwinism, to group Darwinism - but still keep Darwinism as our way to arrive at values.
Adding memetic Darwinism on top of this may qualitatively change the landscape, I believe.
Thanks for those references - definitely an interesting way to quantitatively study these things, will look in more detail.
I appreciate the care and support there :)
Honestly, I never really looked at my karma score and wasn't sure how that works. I think that helps. The reason I post on here is because I find the engagement encouraging (even when negative) - like comments, evidence of people reading and thinking about my stuff. The worst is when no-one has read it at all.
On the other hand, I agree that becoming an echo chamber is a very real danger, one that goes deeply against LessWrong values - and I definitely have a sense that it's happening at least to some extent. I have a couple of posts that got large negative scores for reasons that I think were more cultural than factual.
Still, it shouldn't be on readers to look after the writer's karma - I think your suggestion should be directed at whoever maintains this site, to update their karma calculation system. As for me, since engagement is encouraging, I'd love to see the voting history of my posts - not just the final score (this article had quite some ups and downs over the last few days - I'd be curious to see them in detail).
yeah, that could be a cleaner line of argument, I agree - though I think I'd need to rewrite the whole thing.
For testable predictions... I could at least see models of extreme cases - purely physical or purely memetic selection - and perhaps we could find real-world examples where one or the other or neither is a good description. That could be fun!
Interesting point - that adds a whole other layer of complexity to the argument, which feels a bit daunting to me to even start dissecting.
Still, could we say that in the standard formulation of Darwinian selection, where only the "fittest" survives, the victim is really considered to be dead and gone? I think that at least in the model of Darwinism this is the case. So my goal in this post is to push back on this model. You give a slightly different angle to also push back on this model. I.e., whether intentional or accidental, when one culture defeats another, it takes on attributes of the victim - and therefore some aspects of the victim live on, modifying the dynamics of "natural selection."
As to whether it's a good thing - well, the whole post starts on moral relativism, so I don't want to suddenly bring in moral judgements at this point. It's an interesting question, and I think you could make the argument either way.
Thanks for your comment!
From this and other comments, I get the feeling I didn't make my goal clear: I'm trying to see if there is any objective way to define progress / values (starting from assuming moral relativism). I'm not trying to make any claim as to what these values should be. The Darwinian argument is the only one I've encountered that made sense to me - and so here I'm pushing back on it a bit - but maybe there are other good ways to objectively define values?
Imho, we tend to implicitly ground many of our values in this Darwinian perspective - hence I think it's an important topic.
I like what you point out about the distinction between prescriptive vs descriptive values here. Within moral relativism, I guess there is nothing to say about prescriptive values at all. So yes, Darwinism can only comment on descriptive values.
However, I don't think this is quite the same as the fallacies you mention. "Might makes right" (Darwinian) is not the same as "natural makes right" - natural is a series of historical accidents, while survival of the fittest is a theoretical construct (with the caveat that at the scale of nations, the number of conflicts is small, so historical accidents could become important in determining the "fittest"). Similarly, "fittest" as determined by who survives seems like an objective fact, rather than a mind projection (with the caveat that an "individual" may be a mind projection - but I think that's a bit deeper).
Yes! and here we are trying to study the spectral properties of said noise to try to reverse-engineer your radio, as well as understand the properties of electromagnetic field itself. So perhaps that's one way to look at the practice :)
Can you please commercialize this gem? I (and probably many others) would totally buy it - but making it myself is a bit of a hurdle...
So yes, I agree that intolerance can also be contagious - and it's sort of a quantitative question of which one outweighs the other. I don't personally believe in "evil" (as you sort of hint there, I believe that if we are sufficiently eager to understand, we can always find common humanity with anyone) - but all kinds of neurodivergences, such as a biological lack of empathy, do exist, and while we need not stigmatize them, they may be socially disruptive (like torching a city). Again, I think the question of whether an absolutely tolerant society can be stable in the face of psychopaths torching cities once in a while is a quantitative one.
But what I'm excited about here is that if those quantities line up (tolerance is sufficiently contagious, psychopaths are sufficiently rare, etc.), then we could have an absolutely tolerant society - even in that pacifist way you don't quite like. And that possibility in itself I find exciting. And it's a possibility that I think Popper did not see.
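Just to show the kind of quantitative question I mean, here's a mean-field toy (made-up dynamics, nothing calibrated): let x be the fraction of tolerant people, let tolerance and intolerance spread on contact at rates a and b, and let people spontaneously turn intolerant at a small rate r -

```python
# dx/dt = (a - b) * x * (1 - x) - r * x  (simple Euler integration)
a, b, r = 0.5, 0.3, 0.01   # tolerance wins contacts on net; rare "psychopath" events
x = 0.99
for _ in range(10_000):
    x += 0.01 * ((a - b) * x * (1 - x) - r * x)
print(x)  # settles near 1 - r/(a-b) = 0.95: a stable, almost fully tolerant society
```

Flip the inequality (b > a, or r too large) and x collapses to 0 instead - so stability really does hinge on the numbers.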
While these are relevant elaborations on the paradox of tolerance, I'd also be curious to hear your opinion on the proposal I'm making here - could tolerance be contagious, without any intentional action to make it so (violent or otherwise)? If so, could that make the existence of an absolutely tolerant society conceivable?
I think your perspective also relies on an implicit assumption which may be flawed. Not quite sure what it is exactly - but something around assuming that agents are primarily goal-directed entities. This is the game-theoretic context - and in that case, you may be quite right.
But here I'm trying to point out precisely that people have qualities beyond the assumptions of a game-theoretic setup. Most of the time we don't actually know what our goals are or where those goals came from. So I guess here I'm thinking of people more as dynamical systems.
For what it's worth, let me just reply to your specific concern here: I think the value of anthropomorphization I tried to explain is somehow independent of whether we expect God to intervene or not. If you are saying that this "expectation" may be an undesirable side-effect, then that may be so for some people, but that does not directly contradict my argument. What do you think?
just updated the post to add this clarification about "too perfect" - thanks for your question!
I like the idea of agency being some sweet spot between being too simple and too complex, yes. Though I'm not sure I agree that if we can fully understand the algorithm, then we won't view it as an agent. I think the algorithm for this point particle is simple enough for us to fully understand, but due to the stochastic nature of the optimization algorithm, we can never fully predict it. So I guess I'd say agency isn't a sweet spot in the amount of computation needed, but rather in the amount of stochasticity perhaps?
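A minimal sketch of what I mean (hypothetical setup, just noisy gradient descent toward a goal around an obstacle):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    """Gradient of U(x) = |x|^2/2 + a Gaussian bump centered at (1, 0)."""
    d = x - np.array([1.0, 0.0])
    return x - 2.0 * d * np.exp(-d @ d)

x = np.array([3.0, 0.1])
for _ in range(500):
    # The update rule is fully known and trivially simple...
    x = x - 0.05 * grad(x) + 0.05 * rng.normal(size=2)
print(x)
# ...yet each run is unpredictable: the noise alone decides whether the
# particle skirts the bump from above or below on its way to the goal.
```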
As for other examples of "doing something so well we get a strange feeling," the chess example wouldn't be my go-to, since the action space there is somehow "small" - discrete and finite. I'm more thinking of the difference between a human ballet dancer and an ideal robotic ballet dancer - that slight imperfection makes the human somehow relatable for us. E.g., in CGI you have to have your animated characters make some unnecessary movements, each step must be different from any other, etc. We often admire hand-crafted art more than perfect machine-generated decorations for the same sort of minute asymmetry that makes it relatable, and thus admirable. In voice recording, you often record the song twice for the L and R channels, rather than just copying (see 'double tracking') - the slight differences make the sound "bigger" and "more alive." Etc, etc.
Does this make sense?
ah, yes! good point - so something like the presence of "unseen causes"?
The other hypothesis the lab I worked with looked into was the presence of some 'internally generated forces' - sort of like an 'unmoved mover' - which feels similar to what you're suggesting?
In some way, this feels not really more general than "mistakes," but sort of a different route. Namely, I can imagine some internal forces guiding a particle perfectly through a maze in a way that will still look like an automaton
Just posted it - feels like the post came out fairly basic, but still curious about your opinion: https://www.lesswrong.com/posts/aMrhJbvEbXiX2zjJg/mistakes-as-agency
yeah, I thought so too - but I only had very preliminary results, not enough for a publication... but perhaps I could write up a post based on what I had
thanks for the support! And yes, definitely closely related to questions around agency. With agency, I feel there are two parallel, related questions: 1) can we give a mathematical definition of agency (and here I think of info-theoretic measures, abilities to compute, predict, etc.) and 2) can we explain why we humans view some things as more agent-like than others (and this is a cognitive science question that I worked on a bit some years ago with these guys: http://web.mit.edu/cocosci/archive/Papers/secret-agent-05.pdf ). I didn't get around to publishing my results - but I was discovering something very much like what you write. I was testing the hypothesis that if a thing seems to "plan" further ahead, we view it as an agent - but instead was finding that the number of mistakes it makes in the planning is actually more important.
I really appreciate your care in having a supportive tone here - it is a bit heart-wrenching to read some of the more directly critical comments.
- great point about the non-consensual nature of Ea's actions - it does create a dark undertone to the story, and needs either correcting or expanding (perhaps framing it as the source of the "shadow of sexuality" - so we might also remember the risks)
- the heteronormative line I did notice, and I think it could generalize straightforwardly - this was just the simplest place to start. I love your suggestion of "'sex' as acting on a body specifically to produce pleasure in that body."
- And yes, there are definitely many many aspects of sex that can then be addressed within this lore - like rape, consent, STDs, procreation, sublimation, psychological impacts, gender, family, etc. Taking the Freudian approach, we could really frame all aspects of human life within this context - could be a fun exercise.
- I guess the key hypothesis I'm suggesting here is that explaining the many varied aspects of sexuality in terms of a deity could help to clarify all its complexity - just as the pantheon of gods helped early pagan cultures make sense of the world and make some successful predictions / inventions. It could be nicer to have a science-like explanation, but people would have a harder time keeping that straight (and I believe we don't yet have enough consensus in psychology as a science anyway).
yeah I don't know how cultural myths like Santa form or where they start - now they are grounded in rituals, but I haven't looked at how they were popularized in the first place.
hmm, with all this feedback I'm wondering if my framing of this story as "sex-ed to smooth out the impact of puberty" is not quite fitting. I definitely have a sense that this story can play some beneficial role in promoting a more healthy sexuality in our society - though perhaps my framing about puberty is misplaced?
huh, thanks for the engagement guys - I definitely didn't anticipate this to be so triggering...
I'm hearing two separate points here: 1) magic creatures and fairy tales do more to confuse than clarify; 2) let's be careful not to scare kids about sex nor make it a bigger deal than it already is. I think we could have a rich discourse about each of these, and I see many arguments to be made for both sides - with neither being a clearly resolved issue, imho. Just as an example, here are some possible counters I see to these:
1) What role do fairy tales and lore play in our education and building understanding? For one, "all models are wrong, some are useful" - so I don't think that whether Santa exists or not is really the interesting question; I'd rather ask in what ways it is helpful / confusing. Insofar as story-telling is a good vehicle for humans to convey values and information, it serves its purpose. As for lying to kids - I'd say we can keep Santa without claiming things about him that aren't true. I think another important purpose of such lore is ritual - of which Christmas is an example. Ritual practices have a clear role and impact on people, which can be cognitively very beneficial if not abused.
2) Yes, sex may already be "too big of a deal," but not in ways that are constructive / helpful. The hormonal impact of sex on our mind is hard to overstate - it really is a huge deal, for some people more than others. Since this is a question of qualia, I can reliably talk only about personal experience - and in retrospect I see that it ran my life for a number of years, the more so the more I repressed it. Learning to sublimate that energy, and really enjoy it in areas of life outside of sex, has been the single greatest shift I experienced in persistent personal happiness, energy, and productivity. And this is what I'm referring to in this story - to me, sex and its broader impact is the most magical thing I have experienced in life, and so if anything is worth calling magical, I'd say this is it.
Of course, both of these points are a biased side of the full story, and I wouldn't personally 100% agree with these, as reality is always more subtle and balanced than such arguments. If you like, check out some other, perhaps more scientific discussions I wrote around related topics:
a rationalist perspective on "magic": https://www.lesswrong.com/posts/uRiiNMCDdNnGo3Lqa/magic-tricks-and-high-dimensional-configuration-spaces
Is Santa Real - as an effective theory: https://www.pchvykov.com/post/is-santa-real
oh yeah, I've seen that one before - really awesome stuff! I guess you could say the goalkeeper discovers a "mental" dimension whereby it can beat the attacker easier than if it uses the "physical" dimensions of directly blocking.
This all also feels related to Goodhart's law - though subtly different...
Check out the follow-up post on this.
wow... I definitely did not know we were that intense about making things artificial...
and I like that argument to draw a parallel with horses - quite convincing.
I'm really interested in the question of what's the difference between human systems and things like ecosystems? There are definitely some advantages biological systems have - antifragility, adaptability, sustainability. On the other hand, as you point out, human-designed systems are more efficient, but at a more narrow task.
So are there structural lessons we could adapt from biological system designs? Or are we good where we are?
Thanks for all the great comments! - I feel like the follow-up post I just published gets at some of them: https://www.lesswrong.com/posts/WNjKyFxNbhonBvhwz/building-cars-we-don-t-understand
oooh, don't get me started on expectation values... I have heated opinions here, sorry. The two most obvious problems with expectations in this case are that to average something, you need to average over some known space, according to some chosen measure - neither of which will be by any means obvious in a real-world scenario. More subtly, with real-world distributions, expectation values can often be infinite or undefined, and the median might be more representative - but then should you look at the mean, the median, or what else?
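A quick numpy illustration of that second problem, using Cauchy samples (a distribution with no mean at all):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_cauchy(1_000_000)

for n in (100, 10_000, 1_000_000):
    print(n, np.mean(x[:n]), np.median(x[:n]))
# the running mean jumps around and never converges (one huge draw can
# drag it anywhere), while the median quietly settles near 0
```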
To me, the counter-argument to saving drowning children isn't the admittedly unlikely "Hitler" one, but more the "let them learn on their own mistakes" one - some will learn to swim and grow up more resilient, and some won't. The long-term impact of this approach on our species seems much harder to quantify.
wonderful - thanks so much for the references! "moral case against leaving the house" is a nice example to have in the back pocket :)
Just read a bit about rationalist understanding of "ritual" - seems that I'm sort of arguing that the value in donating is largely ritualistic :)
Wow, wonderful analysis! I'm on-board mostly - except maybe I'd leave some room for doubt of some claims you're making.
And your last paragraph seems to suggest that a "sufficiently good and developed" algorithm could produce large cultural change?
Also, you say "as human mediators (plus the problem of people framing it as 'objective'), just cheaper and more scalable" - to me that would be quite a huge win! And I sort of thought that "people framing it as objective" is a good thing - why do you think it's a problem?
I could even go as far as saying that even if it was totally inaccurate, but unbiased - like a coin-flip - and if people trusted it as objectively true, that would already help a lot! Unbiased = no advantage to either side. Trusted = no debate about who's right. Random = no way to game it.
Cool that you find this method so powerful! To me it's a question of scaling: do you think personal mindfulness practices like Gendlin's Focusing are as easy to scale to a population as a gadget that tells you some truth about you? I guess each of these faces very different challenges - but so far experience seems to show that we're better at building fancy tech than we are at learning to change ourselves.
What do you think is the most effective way to create such culture-shift?
Thanks for such a thoughtful reply - I think I'm really on-board with most of what you're saying.
I agree that analysis is the hard part of this tech - and I'm hoping that this is what is just now becoming possible to do well with AI, like check out https://www.chipbrain.com/
Another point I think is important: you say "Emotions aren't exactly impossible to notice and introspect honestly on." - having been doing some emotional-intelligence practice for the last few years, I'm very aware of how difficult it is to honestly introspect on my own emotions. It's sort of like trying to objectively gauge my own attractiveness in photos - really tough to be objective! and I think this is one place that an AI could really help (they're building one for attractiveness now too actually).
I see your point that the impact will likely be marginal compared to what we already have now - and I'm wondering if there is some way we could imagine applying such technology to have a revolutionary impact, without falling into Orwellian dystopia. Something about creating inevitable self-awareness, emotion-based success metrics, or conscious governance.
Any ideas how this could be used to save the world? Or do you think there isn't any real edge it could give us?
yeah, I can try to clarify some of my assumptions - probably not fully to your satisfaction, but a bit:
- I'm trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian)
- I'm assuming the question "is AI conscious?" is fundamentally ill-posed, as we don't have a good definition for consciousness - hence I'm imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having "interests at heart" or doing anything "deliberately"
- and so yes, I'm suggesting that humans may be having too much fun to reproduce with other humans, and won't feel much need to. It's more a matter of a certain carelessness than deliberate suicide.
- Not sure I understand you here. Our AI will know the things we trained it and the tasks we set it - so to me it seems it will necessarily be a continuation of things we did and wanted. No?
- Well, in some sense yes, that's sort of the idea I'm entertaining here: while these things all do matter, they aren't the "end of the world" - humanity and human culture carries on. And I have the feeling that it might not be so different even if robots take over.
[of course, in the utilitarian sense such violent transitions are accompanied by a lot of suffering, which is bad - but in a consequentialist sense purely, with a sufficiently long time-horizon of consequences, perhaps it's not as big as it first seems?]
Yeah, I'm quite curious to understand this point too - certainly not sure how far this reasoning can be applied (and whether Ferdinand is too much of a stretch). I was thinking of this assassination as the "perturbation in a super-cooled liquid" - where it's really the overall geopolitical tension that was the dominant cause, and anything could have set off the global phase transition. Though this gets back to the limitations of counter-factual causality in the real-world...
cool - and I appreciate that you think my posts are promising! I'm never sure if my posts have any meaningful 'delta' - seems like everything's been said before.
But this community is really fun to post for, with meaningful engagement and discussion =)
hmm, so what I was thinking is whether we could give an improved definition of causality based on something like "A causes B iff the model [A causes B] performs better than other models in some (all?) games / environments" - which may have a funny dependence on the game or environment we choose.
Though as hard as the counterfactual definition is to work with in practice, this may be even harder...
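As a toy version of that definition (an entirely made-up setup): generate a world where A really does cause B, then score two candidate models on a "game" of predicting B under interventions on A -

```python
import numpy as np

rng = np.random.default_rng(1)

# Observational world: A ~ N(0,1), B = A + small noise.
A_obs = rng.normal(size=10_000)
B_obs = A_obs + 0.1 * rng.normal(size=10_000)

# The game: we set A by intervention and each model predicts the resulting B.
A_do = rng.uniform(-3, 3, size=1_000)
B_do = A_do + 0.1 * rng.normal(size=1_000)

pred_causal = A_do                            # "A causes B": B tracks the intervened A
pred_anti = np.full_like(A_do, B_obs.mean())  # "B causes A": setting A shouldn't move B

print("A->B model MSE:", np.mean((B_do - pred_causal) ** 2))  # ~0.01
print("B->A model MSE:", np.mean((B_do - pred_anti) ** 2))    # ~3: loses the game
```

Both models fit the observational data equally well - they only come apart when scored in the interventional game.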
Your post may be related to this, though not the same, I think. I guess what I'm suggesting isn't directly about decision theory.
whoa, some Bayesian updating there - impressive! :)
I'm not sure why this was crossed out - seems quite civil to me... And I appreciate your thoughts on this!
I do think we agree at the big-picture level, but have some mismatch in details and language. In particular, as I understand J. Pearl's counter-factual analysis, you're supposed to compare this one perturbation against the average over the ensemble of all possible other interventions. So in this sense, it's not about "holding everything else fixed," but rather about "what are all the possible other things that could have happened."
Yes!! Very cool - going even one meta level up. I agree that the usefulness of proposed models is certainly the ultimate judge of whether they're "good" or not. To make this even more concrete, we could try to construct a game and compare the mean performance of two agents having the two models we want to compare... I wonder if anyone's tried that... As far as I know, the counterfactual approach is "state of the art" for understanding causality these days - and it is a bit lacking for the reason you say. This could be a cool paper to write!
ah yes, great minds think alike! =)
What I really like about J. Pearl's counter-factual causality framework is that it gives a way to make these arguments rigorously, and even to precisely quantify "how much did the butterfly cause the tornado" - in bits!
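To sketch what "in bits" can mean (my own formalization of the idea, with base-2 logs - not necessarily the exact measure Pearl uses): compare the tornado variable B's distribution under the butterfly intervention against its distribution without it,

$$\mathcal{C}(A \to B) \;=\; D_{\mathrm{KL}}\big(P(B \mid \mathrm{do}(A{=}a)) \,\big\|\, P(B)\big).$$

If the intervention barely shifts P(B), the divergence - the number of "caused" bits - is near zero.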
Cool - thanks for your feedback! I agree that I could be more rigorous with my terminology. Nonetheless, I do think I have a rigorous argument underneath all this - even if it didn't come across. Let me try to clarify:
I did not mean to refer to human intentionality anywhere here. I was specifically trying to argue that the "chaos-theory definition of causality" you give, while great in idealized deterministic systems, is inadequate in the complex, messy "real world." Instead, the rigorous definition I prefer is the counter-factual, information-theoretic one developed by Judea Pearl, which I tried to outline here in layman's terms. This definition is entirely ill-posed in a deterministic chaotic system, but will work as soon as we have any stochasticity (from whatever source).
Does this address your point at all, or am I off-base?