In a topological space, defining
- X ∨ Y as X ∪ Y
- X ∧ Y as X ∩ Y
- X → Y as Int( X^c ∪ Y )
- ¬X as Int( X^c )
does yield a Heyting algebra. This means that the understanding (but not the explanation) of /u/cousin_it checks out: removing the border on each negation is the "right way".
Notice that under this interpretation X is always a subset of ¬¬X:
- Int(X^c) is a subset of X^c; by definition of Int(-).
- Int(X^c)^c is a superset of X^c^c = X; since taking complements reverses containment.
- Int( Int(X^c)^c ) is a superset of Int(X) = X; since Int(-) preserves containment, and X is open (so Int(X) = X).
But Int( Int(X^c)^c ) is just ¬¬X. So X is always a subset of ¬¬X.
However, in many cases ¬¬X is not a subset of X. For example, take the Euclidean plane with the usual topology, and let X be the plane with one point removed. Then ¬X = Int( X^c ) = ∅ is empty, so ¬¬X is the whole plane. But the whole plane is obviously not a subset of the plane with one point removed.
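A minimal Python sketch of the same phenomenon on a finite space (my own toy example: the two-point Sierpiński space, standing in for the punctured plane):

```python
# Sierpinski space: the open sets are {}, {1} and {0, 1}.
space = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), space]

def interior(A):
    """Int(A): the union of all open sets contained in A."""
    return frozenset().union(*(U for U in opens if U <= A))

def neg(X):
    """Heyting negation: Int(X^c)."""
    return interior(space - X)

X = frozenset({1})            # an open set, playing the role of the punctured plane
assert X <= neg(neg(X))       # X is always a subset of ~~X
assert neg(X) == frozenset()  # here ~X is empty...
assert neg(neg(X)) == space   # ...so ~~X is the whole space: strictly bigger than X
```

Swapping any other finite topology into `opens` lets you test other open sets the same way.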
You can get a clearer-if-still-imperfect sense from contrasting upvotes on parallel,
I'm fairly certain that P(disagrees with blargtroll | disagrees with your proposal) >> P(agrees with blargtroll | disagrees with your proposal), simply because blargtroll's counterargument is weak and its followups reveal some anger management issues.
For example, I would downvote both your proposal and blargtroll's counterargument if I could - and by the Typical Mind heuristic so would everyone else :)
That said, I think you're right in that this would not have received sufficiently many downvotes to become invisible.
Thanks for giving a name to this phenomenon.
Indeed, it would not surprise me if some people actually want hedge drift to occur. They don't actually try to prevent their claims from being misunderstood.
It's much worse. In my experience as an academic, most departments simply pre-hedge-drift their press releases. Science journalists don't - and are often not qualified to - read and comment on the actual papers, all they have to work with is the press release.
I mean nationalized, as in the distribution of tobacco products (imports, wholesale, retail) is handled by companies that may or may not have been private at some point, but are now property of the state.
What do you mean by nationalizing?
The weight of evidence best demonstrates that control measures have thus far been quite uniformly positive.
I see. The black market effects are well-documented, but I am not familiar with evidence which shows that control measures have any measurable effects on public health. Where could I find that data?
Dagon's points are very good. There's another aspect as well:
Tobacco import and distribution (and in some cases, production) are already nationalized in many countries, especially in the EU. National governments try to impose artificial scarcity (winding down operations, tax increases, fixed pricing), and this makes the statistics look better - officially monitored tobacco sales decrease.
Artificial scarcity cannot last: a black market of RYO tobacco, and home-made cigarettes of dubious origin is always ready to serve customer demands. In the end, the health effects of nationalizing the tobacco industry, and winding down operations, can easily be negative.
I bet if you phrase the question as "your brain is destroyed and recreated 5 minutes later", most people outside LW answer no. I guess this might be another instance of brain functions inactive vs lack of ability to have experiences.
In row 8 of the table, P(D) should be replaced by P(~D).
Location-specific advice
Libgen is blocked by court order in the United Kingdom, but if you're a student, you can usually access it through Eduroam.
Yes.
In a sterilized and sealed jar, jam made without sugar can last for years. Once you actually open the jar, you have about 7 days to eat it, and you better keep it refrigerated. You don't need the sugar for thickening - the pectin in the fruit thickens jam just fine.
However, if you don't add any sweetener, the result will be very sour.
Source: been making my own jam for years, had plenty of time to experiment.
In my experience, acid reflux can cause similar sensations.
I'm not sure that my paradox even requires the proof system to prove its own consistency.
Your argument requires the proof system to prove its own consistency. As we discussed before, your argument relies on the assumption that the implication
If "φ is provable" then "φ"
Provable(#φ) → φ
is available for all φ. If this were the case, your theory would prove itself consistent. Why? Because you could take the contrapositive
If "φ is false" then "φ is not provable"
¬φ → ¬Provable(#φ)
and substitute "0=1" for φ. This gives you
if "0≠1" then "0=1 is not provable"
¬(0=1) → ¬Provable(#(0=1))
The premise "0≠1" holds. Therefore, the consequence "0=1 is not provable" also holds. At this point your theory is asserting its own consistency: everything is provable in an inconsistent theory.
You might enjoy reading about the Turing Machine proof of Gödel's first incompleteness theorem, which is closely related to your paradox.
It is not that these statements are "not generally valid"
The intended meaning of valid in my post is "valid step in a proof" in the given formal system. I reworded the offending section.
Obviously such statements will be true if H's axiom system is true, and in that sense they are always valid.
Yes, and one also has to be careful with the use of the word "true". There are models in which the axioms are true, but which contain counterexamples to Provable(#φ) → φ.
Now here is the weird and confusing part. If the above is a valid proof, then H will eventually find it. It searches all proofs, remember?
Fortunately, H will never find your argument because it is not a correct proof. You rely on hidden assumptions of the following form (given informally and symbolically):
If φ is provable, then φ holds.
Provable(#φ) → φ
where #φ denotes the Gödel number of the proposition φ.
Statements of this form are generally not provable. This phenomenon is known as Löb's theorem - featured in Main back in 2008.
You use these invalid assumptions to eliminate the first two options from "Either H returns true, or false, or loops forever". For example, if H returns true, then you can infer that "FF halts on input FF" is provable, but that does not contradict "FF does not halt on input FF".
From Falsehoods Programmers Believe About Names:
anything someone tells you is their name is — by definition — an appropriate identifier for them.
There should be a list of false things people coming from common law jurisdictions believe about how choice of identity works on the rest of the globe.
Should this be surprising? I briefly worked at a French school in Hungary: the guy who taught Spanish was Mexican, the girl who taught English was American, and so on. A Korean living in Guatemala still needs to learn English.
looked up monotone voice on google, and found that it has a positive, redeeming side – attractiveness.
My friends tell me that my face is pretty scarred. Research shows that facial scars are attractive. By the word scar, researchers mean healed cut. My friends mean acne hole.
Not all monotone voices are created equal. I'd be really surprised if "autistic" monotone and "high-status" monotone referred to the same thing.
I believe that an ultrafinitist arithmetic would still be incomplete. By that I mean that classical mathematics could prove that a sufficiently powerful ultrafinitist arithmetic is necessarily incomplete. The exact definition of "sufficiently powerful", and more importantly, the exact definition of "ultrafinitistic", would require attention. I'm not aware of any such result or ongoing investigation.
The possibility of an ultrafinitist proof of Gödel's theorem is a different question. For some definition of "ultrafinitistic", even the well-known proofs of Gödel's theorem qualify. Mayhap^1 someone will succeed where Nelson failed, and prove that "powerful systems of arithmetic are inconsistent". However, compared to that, Gödel's 1st incompleteness theorem, which merely states that "powerful systems of arithmetic are either incomplete or inconsistent", would seem rather... benign.
^1 very unlikely, but not cosmically unlikely
Sure, that's exactly what we have to do, on pain of inconsistency. We have to disallow representation schemas powerful enough to internalise the Berry paradox, so that "the smallest number not definable in less than 11 words" is not a valid representation. Cf. the various set theories, where we disallow comprehension schemas strong enough to internalise Russell's paradox, so that "the set of all sets that don't contain themselves" is not a valid comprehension.
Nelson thought that, similarly to how we reject "the smallest number not effectively representable" as an invalid representation, we should also reject e.g. "3^^^3" as an invalid representation; not because of the Berry paradox, but because of a different one, one that he ultimately could not establish.
Nelson introduced a family of standardness predicates, each one relative to a hyper-operation (addition, multiplication, exponentiation, the ^^ up-arrow operation, the ^^^ up-arrow operation, and so on). Since standardness is not a notion internal to arithmetic, induction is not allowed on these predicates (i. e. '0' is standard, and if 'x' is standard then so is 'x+1', but you cannot use induction to conclude that therefore everything is standard).
He was able to prove that the standardness of n and m implies the standardness of n+m, and that of n×m. However, the corresponding result for exponentiation is provably false and the obstruction is non-associativity. What's more, even if we can prove that 2^^d is standard, this does not mean that the same holds for 2^^(d+1).
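For concreteness, here is a small Python sketch (my own illustration, not Nelson's formalism) of the up-arrow hierarchy that indexes these predicates:

```python
def up(a, n, b):
    """Knuth up-arrow a ^(n) b: one arrow is exponentiation,
    and each further arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 2, 4))  # 2^^4 = 2**2**2**2 = 65536
# 2^^5 = 2**65536 already has 19729 decimal digits: the jump from 2^^d
# to 2^^(d+1) is exactly the step the standardness proofs fail to cross.
```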
At this point, Nelson attempted to prove that an explicit recursion of super-exponential length does not terminate, thereby establishing that arithmetic is inconsistent, and vindicating ultrafinitism as the only remaining option. His attempted proof was faulty, with no obvious fix. Nelson continued looking for a valid proof until his death last September.
Mainly in the city of Edinburgh, HW campus and the Lothians. It worked well inside college buildings with non-trivial layouts as well.
Important question: do you usually travel by car? I can't drive, so my main methods of transportation were public transport and walking.
Thanks! If still possible, I'd like to ask the following:
- Who performed the insertion procedure? How long does it take to heal?
- An N52 is very strong. Did you experience any unexpected negative side-effects while handling everyday objects (weight training, smartphones, et c.)?
- Apart from the ring, have you tried achieving the same thing externally (i.e. without an implant)? Do you think it would be possible to "come close"?
Is such a long answer suitable in OT? If not, where should I move it?
tl;dr Naive ultrafinitism is based on real observations, but its proposals are a bit absurd. Modern ultrafinitism has close ties with computation. Paradoxically, taking ultrafinitism seriously has led to non-trivial developments in classical (usual) mathematics. Finally: ultrafinitism would probably be able to interpret all of classical mathematics in some way, but the details would be rather messy.
1. Naive ultrafinitism
1.1. There are many different ways of representing (writing down) mathematical objects.
The naive ultrafinitist chooses a representation, calls it explicit, and says that a number is "truly" written down only when its explicit representation is known. The prototypical choice of explicit representation is the tallying system, where 6 is written as ||||||. This choice is not arbitrary either: the foundations of mathematics (e. g. Peano arithmetic) use these tally marks by necessity.
However, the integers are a special^1 case, and in the general case, the naive ultrafinitist insistence on fixing a representation starts looking a bit absurd. Take Linear Algebra: should you choose an explicit basis of R^3 that you use indiscriminately for every problem; or should you use a basis (sometimes an arbitrary one) that is most appropriate for the problem at hand?
1.2. Not all representations are equally good for all purposes.
For example, enumerating the prime factors of 2*3*5 is way easier than doing the same for ||||||||||||||||||||||||||||||, even though both represent the same number.
1.3. Converting between representations is difficult, and in some cases outright impossible.
Lenstra earned $14,527 by converting the number known as RSA-100 from "positional" to "list of prime factors" representation.
Converting 3^^^3 from up-arrow representation to the binary positional representation is not possible for obvious reasons.
As usual, up-arrow notation is overkill. Just writing the decimal number 100000000000000000000000000000000000000000000000000000000000000000000000000000000 in tally marks would take more marks than there are atoms in the observable universe. Nonetheless, we can deduce a lot of things about this number: it is an even number, and it's larger than RSA-100. I can even convert it manually to "list of prime factors" representation: 2^80 * 5^80.
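A quick Python sanity check of the size claims above:

```python
n = 10**80                   # the 81-digit decimal number above
print(len(str(n)))           # 81 characters in positional notation
assert n == 2**80 * 5**80    # "list of prime factors": two short entries
# In tally notation the same number would need 10**80 marks, roughly
# the number of atoms in the observable universe.
```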
2. Constructivism
The constructivists were the first to insist that algorithmic matters be taken seriously. Constructivism separates concepts that are not computably equivalent: proofs with algorithmic content are distinguished from proofs without such content, and algorithmically inequivalent objects are kept apart.
For example, there is no algorithm for converting Dedekind cuts to equivalence classes of rational Cauchy sequences. Therefore, the concept of real number falls apart: constructively speaking, the set of Cauchy-real numbers is very different from the set of Dedekind-real numbers.
This is a tendency in non-classical mathematics: concepts that we think are the same (and are equivalent classically) fall apart into many subtly different concepts.
Constructivism, then, separates concepts that are not computably equivalent. But computability is a qualitative notion, and most constructivists stop here (or even backtrack, to regain some classicality, as in the foundational program known as Homotopy Type Theory).
3. Modern ultra/finitism
The same way constructivism distinguished qualitatively different but classically equivalent objects, one could start distinguishing things that are constructively equivalent, but quantitatively different.
One path leads to the explicit approach to representation-awareness. For example, LNST^4 explicitly distinguishes between the set of binary natural numbers B and the set of tally natural numbers N. Since these sets have quantitatively different properties, it is not possible to define a bijection between B and N inside LNST.
Another path leads to ultrafinitism.
The most important thinker in modern ultra/finitism was probably Edward Nelson. He observed that the "set of effectively representable numbers" is not downward-closed: even though we have a very short notation for 3^^^3, there are lots of numbers between 0 and 3^^^3 that have no such short representation. In fact, by elementary considerations, the overwhelming majority of them cannot ever have a short representation.
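The elementary consideration is a pigeonhole count; a small Python sketch (the bound N and the description length k are my own illustrative choices, far smaller than 3^^^3):

```python
# Fewer than 2**(k + 1) binary strings have length <= k, so at most that
# many numbers below N can have a description of at most k bits.
N, k = 2**1000, 100
short = 2**(k + 1) - 2    # nonempty binary strings of length <= k
print(short / N)          # ~2.4e-271: a vanishing fraction of [0, N)
```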
What's more, if our system of notation allows for expressing big enough numbers, then the "set of effectively representable numbers" is not even inductive because of the Berry paradox. In a sense, the growth of 'bad enough' functions can only be expressed in terms of themselves. Nelson's hope was to prove the inconsistency of arithmetic itself using a similar trick. His attempt was unsuccessful: Terry Tao pointed out why Nelson's approach could not work.
However, Nelson found a way to relate inexpressibly huge numbers to non-standard models of arithmetic^2.
This correspondence turned out to be very powerful, leading to many paradoxical developments: including a finitistic^3 extension of Set Theory, a radically elementary treatment of Probability Theory, and new ways of formalising the Infinitesimal Calculus.
4. Answering your question
What kind of mathematics would we still be able to do (cryptography, analysis, linear algebra …)?
All of it; modulo translating the classical results to the subtler, ultra/finitistic language. This holds even for the silliest versions of ultrafinitism. Imagine a naive ultrafinitist mathematician, who declares that the largest number is m. She can't state the proposition R(n,2^(m)), but she can still state its translation R(log_2 n,m), which is just as good.
Translating is very difficult even for the qualitative case, as seen in this introductory video about constructive mathematics. Some theorems hold for Dedekind-reals, others for Cauchy-reals, et c. Similarly, in LNST, some theorems hold only for "binary naturals", others only for "tally naturals". It would be even harder for true ultrafinitism: the set of representable numbers is not downward-closed.
This was a very high-level overview. Feel free to ask for more details (or clarification).
^1 The integers are absolute. Unfortunately, it is not entirely clear what this means.
^2 coincidentally, the latter notion prompted my very first contribution to LW
^3 in this so-called Internal Set Theory, all the usual mathematical constructions are still possible, but every set of standard numbers is finite.
^4 Light Naive Set Theory. Based on Linear Logic. Consistent with unrestricted comprehension.
If you sit down to meditate, instead of using a timer you can set the goal of meditating for 20 minutes. That skill is trainable and with time you can get +1/-1.
Interesting. I don't meditate, but I'll try this in other contexts (probably in tasks related to giving talks) and see how my time sense improves.
In my case, the answer is simple: tutoring, teaching and lecturing. The feedback of watches and timers is completely inadequate: I can't "profile", I can't adjust my tempo in real time, et c.
Not to mention that I prefer to have this information subconsciously. The information from the compass anklet was far more useful (and efficient) than glancing at my smartphone's compass every second would have been.
I can contact him and see if he can comment here if you are interested
I would be very interested in hearing about his experience, especially since I'd love to replicate something like this externally.
The skills lingered, and for some amount of time, I was able to "feel" where the compass would be pointing in many places I visited while wearing the anklet.
From memory, I'm still able to tell the general direction of the magnetic north in many places.
I think the pre-assembled NorthPaw is available for $199 + shipping.
I have tried:
- Wearing a vibrating compass anklet for a week. It improved my navigational skills tremendously. I have low income, but I would definitely buy one if I could afford it.
- Listening to a 60 bpm metronome on a Bluetooth earpiece for a week (excluding showers). I got used to the sound relatively quickly, but I most definitely did not acquire an absolute sense of time. However, I noticed that during boring activities such as filling out paperwork, the ticking itself seems to slow down.
I will try:
- Wearing an Oculus Rift that shows the Fourier Transform of what I would normally see. I'd like to know if I can get used to it, and if it improves my mathematical intuition.
The most well known and simple example is an implanted magnet, which would alert you to magnetic fields (the trade-off being that you could never have an MRI).
Can't we achieve the same objective by wearing a magnet ring or a magnet bracelet, without the serious downsides of having an implant?
Would you consider a Wikipedia brain implant to be a transhumanist modification? After all, ordinary humans can query Wikipedia too!
2. A multitude of models
As a general rule, consistent theories have multiple models. Models have more consequences than the theories they model: for example, our model of the example system satisfies "there are only 2 men", even though this does not follow from the axioms. A sentence follows from the axioms only if it is satisfied in every possible model of S. ^4
Even the axiomatic theory of natural number arithmetic, which we would think is absolute, has multiple models. Mathematicians have agreed on a standard model (the so-called set of natural numbers), but it is easy to prove that other models exist:
Extend the theory of arithmetic (PA) with a new constant K, and the following (infinitely many) axioms.
0 < K
1 < K
2 < K
...
65534 < K
65535 < K
...
Surprisingly, the resulting theory PAK is consistent. Proofs are finite: any proof of a contradiction in PAK would use only finitely many axioms, so there is a largest number n such that n < K is used in the proof. Therefore, K can be replaced in the proof by n + 1, yielding a proof of a contradiction in PA itself! Since arithmetic is consistent, there is no proof of contradiction in PAK.
We have shown that PAK is consistent relative to ZFC. Therefore, it has a model. A model of PAK is a model of arithmetic, but it is clearly not the standard model. Therefore, arithmetic has a non-standard model, which contains the standard integers, as well as non-standard integers (such as the one corresponding to our constant K). In a sense, the non-standard models contain "infinite" numbers that the model cannot distinguish from the real, finite numbers.
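Schematically, the argument is an instance of the compactness theorem; here n_0 denotes the largest n such that "n < K" appears in a given finite fragment F:

```latex
F \subseteq_{\text{fin}} \mathrm{PA}_K
\;\Longrightarrow\;
F \subseteq \mathrm{PA} \cup \{\, n < K : n \le n_0 \,\}
\;\Longrightarrow\;
(\mathbb{N},\; K \mapsto n_0 + 1) \models F
```

Every finite fragment is satisfied by the standard model with K read as n_0 + 1, so by compactness PAK has a model.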
The existence of non-standard models is a serious issue: there are situations where the standard model has no counterexamples to a statement, but some non-standard model does. This means that the statement ought to follow from the axioms of arithmetic, but we cannot prove it, because it fails in a weird, non-standard model.
For example, some non-standard models disagree with the following statement (a strengthened finite Ramsey theorem), which is satisfied by the standard model.
For any non-zero natural numbers n, k, m we can find a natural number N such that if we color each of the n-element subsets of S = {1, 2, 3,..., N} with one of k colors, then we can find a subset Y of S with at least m elements, such that all n element subsets of Y have the same color, and the number of elements of Y is at least the smallest element of Y.
Another, more accessible example is whether you can kill the Hydra or not. You can kill the hydra in the standard model, but many non-standard models disagree. If you numbered all the hydras in a non-standard model, the counterexamples would be numbered by non-standard numbers, such as the one corresponding to the constant K in the proof above.
We need to add new axioms to the axiomatic system of arithmetic, so that it corresponds more faithfully to the standard model. However, our work is never over: as a consequence of Gödel's incompleteness theorem, new axioms can rule out some non-standard models, but never all of them.
3. Generalised models and Hamkins' paper
So far:
- Consistency relative to ZFC is a useful notion: giving a model allows us to prove that our theories are as consistent as mathematics itself.
- Arithmetic has multiple models. There is a so-called standard model of arithmetic, which is not some real-world or transcendent notion. It is merely a set that mathematicians have agreed to call the standard model. The axioms of arithmetic are unable to exactly describe the standard model: they always describe the standard model plus some other "junk" models.
Do we know that ZFC is consistent? The short answer: we don't and we can't. By Gödel's incompleteness theorem, if ZFC is consistent then it has no models in the sense above: ZFC cannot prove the existence of a model of itself, because that would amount to a consistency proof. However, by adding new axioms to ZFC (e. g. large cardinal axioms), we can create set theories that have generalised notions of models. While ZFC has no models, it does have generalised models.
Unlike arithmetic, ZFC itself has no agreed-upon standard generalised model. There is not even a standard system in which we construct generalised models. In all of the above, we have refused to choose a specific model of ZFC (i. e. we did not use the phrase "satisfied in a generalised model of ZFC" or any semantically equivalent sentences). We used the notion of provability in ZFC (which is absolute).
If we replace provability in ZFC with "satisfiability in some specific model", we are suddenly able to prove more properties about the standard model of arithmetic (similarly to how we can prove more theorems about numbers by passing to the standard model of arithmetic from the axioms of arithmetic). Unfortunately, it is well-known (and intuitively obvious) that if you and I choose different generalised models, our conclusions (about these previously undecidable properties) can disagree.
The paper of Hamkins collects some stronger results: our conclusions can disagree even if our chosen generalised models are very similar. For example:
- There are two generalised models which agree upon the elements that constitute the standard model, yet disagree on the properties of these elements.
- There are two generalised models which agree upon the elements that constitute the standard model, agree upon the properties of the addition operation, yet disagree about the properties of the multiplication operation.
and so on... Unfortunately, the proofs of these rely on powerful lemmas, so I can't instantiate them to produce explicit examples.
Anyway, this should be enough to get you started.
First of all, let me issue a warning: model-theoretic truth is a mathematical notion, which (a priori) doesn't have anything to do with the real-world sense of truth!
A short introduction to model theory follows. It is not LW quality, but hopefully it's good enough to answer some questions about MrMind's post. Prerequisites: merely some familiarity with formal reasoning, but I guess knowing the Mental Concepts of Model Theory doesn't hurt.
The 1st part is the general introduction to model theory, the examples about non-standard models are in the 2nd and 3rd parts.
1. Models explained
Axioms are the starting points of formal reasoning. A collection (system) of axioms is inconsistent if it is possible to prove a contradiction using them. E.g. consider the following system of three axioms.
- All men are mortal.
- Socrates is a man.
- Socrates is not mortal.
This system is inconsistent, because the contradiction "Socrates is mortal and Socrates is not mortal" is a consequence of the axioms. On the other hand, the following system (from now on referred to as the example system) is not inconsistent:
- All men are mortal.
- Socrates is a man.
Inconsistent systems are liars: the conclusions derived from them cannot be trusted. ^1 Axiomatic systems can be defined for many specific purposes (mathematics, ethics, et c.). Hopefully I don't have to explain why an inconsistent system of ethics would be disastrous. We would like some assurance that our frameworks are not inconsistent: proofs of consistency!
Proofs of consistency are possible because mathematicians have agreed upon a powerful axiomatic system^2, the Set Theory ZFC, which they believe to be consistent. ^3
Take any axiomatic theory S. Sometimes, you can re-label the axioms of S to be about mathematical objects. This is possible if
- All quantifiers that occur in the axioms can be restricted to range over some given set M (in the example system, you would replace "All men" with "All men belonging to the set M").
- Each symbol occurring in the axioms can be identified with an element of the set M (in the example system you would interpret the word "Socrates" to refer to some specific element of the set M).
- Each predicate occurring in the axioms can be identified with a subset of the set M (in the example system you would interpret "is a man" by a subset of M, and "is mortal" by another subset of M).
- The sentences of S, when interpreted this way, are consequences of the axioms of ZFC.
Such sets M, whenever they exist, are called the models of S. The set of numbers less than 5, {0,1,2,3,4}, is a model of the example system, because you can
- Interpret the symbol "Socrates" as referring to the number 1.
- Interpret the predicate "is a man" as the subset of odd numbers; this means that you consider 1 and 3 men, but not 0, 2 and 4.
- Interpret the predicate "is mortal" as the subset of numbers less than 4; this means that you consider 0, 1, 2 and 3 to be mortals, but not 4.
Now, the sentence "All men are mortal" means "All odd numbers in M are less than 4", which is a true mathematical statement that you can prove using the axioms of ZFC.
The sentence "Socrates is a man" means "The number 1 is odd", which is again a true mathematical statement that can be proved from the axioms of ZFC.
The relabeling interprets the axioms and consequences of S as true mathematical statements (the form of the statements is preserved, even if meaning is not). If a contradiction follows from the axioms of S, then it can be relabeled into a contradiction in mathematics (ZFC). Therefore, every axiomatic system that has a model is at least as consistent as mathematics itself: giving a model amounts to giving a consistency proof. We say that this consistency proof is relative to ZFC.
It can be demonstrated that Model Theory is the most general method of giving consistency proofs (relative to ZFC): if ZFC proves that a system is consistent then the system has a model, and vice versa.
^1 There are also consistent liars. Observing an inconsistency is sufficient to conclude that an axiom system is a liar, but it is not necessary.
^2 Observe that we still have no assurance about the consistency of this general-purpose system.
^3 This is not entirely true, but it is a reasonable non-technical explanation.
^4 Unfortunately, the satisfaction relation "satisfied in a model" is commonly referred to as true in a model. Worst of all, "X is satisfied in the standard model" is sometimes abbreviated to X is true, giving these results a false aura of deep philosophical relevance.
Location and age cohort based education is designed for the center of the bell curve at the expense of the tails in about every way imaginable. It's a bad fit socially, because large discrepancies in intelligence makes for difficulty in relating. It's crippling intellectually, because beyond being bored to tears, you're not learning how to control and drive yourself toward goals, which is the fundamental skill to be developed in your youth.
I went to an elementary school for gifted children, so all of my classmates had above-average intelligence, and we had a challenging academic program. I'd be really surprised if we'd turn out to be any happier, or better at driving ourselves towards goals, than high-intelligence people who were educated in regular schools. In fact, my intuition tells me that these problems are not environmental but biological in origin.
Is there any hard data on this? Are high-IQ people who grow up in high-IQ environments happier or more goal-oriented than high-IQ people who grow up in an average-IQ environment?
Osho's right hand did run the biggest bioattack on the US at the time. I don't want to live in a world where when someone doesn't like how an election is going to go they try to poison a significant portion of the electorate to keep them at home.
As far as I know, most other gurus teaching similar principles were not involved with bioterrorism. Do you think there might be a causal relationship between preaching "you are not helpless with your feelings" and committing bioattacks?
That particular idea has been widely explored in the literature. E.g. Fajnzylber does it in Inequality and violent crime, finding a significant correlation of 0.54 between income inequality and log of homicide rate. This is pretty strong by social science standards. The correlation with other types of crime is much lower.
Curiously, if you restrict to Europe, the correlation is negative, but it is positive if you restrict to East and South Asia, which have Gini coefficients and murder rates comparable to European countries.