A few remarks that don't add up to either agreement or disagreement with any point here:
Considering rivers conscious has never been a difficulty for humans, as animism is a baseline impulse that develops even in the absence of theism, and it takes effort, at either the individual or the cultural level, for people to learn not to anthropomorphize the world. As such, I'd suggest that a thought experiment which allows for the possibility of a conscious river, even one composed of atomic moments of consciousness arising from strange flows through an extremely complex network of pipes, taps back into that underlying animistic impulse, and so will only seem weird to those who've previously managed to suppress it via effort or nurture.
Conversely, just as one can learn to suppress one's animistic impulse toward the world, one can also suppress it toward oneself. Buddhism is the paradigmatic example of that effort. Most Buddhist schools of thought deny the reality of any kind of permanent self, asserting that the perception of an "I" emerges from atomistic moments as an effect of their interactions, not as their cause or as a parallel process to them. From this perspective we may have a river that is "non-conscious in itself" but whose pipe flows, interrupted or otherwise, cause the emergence of consciousness, in exactly the same way human minds do.
But even those Buddhist schools that do admit a "something extra" at the root of the experience of consciousness consider it a form of matter that binds to ordinary matter and, operating as a single organic mixture, gives rise to those moments of consciousness. This might correspond, or be analogous on some level, to Searle's symbols, at least going by the summarized view presented in this post. Now, irrespective of whether such symbols are reducible to ordinary matter, if they can "attach" to the human brain's matter to form, er, "carbon-based neuro-symbolic aggregates", nothing in principle (that I can imagine, at least) prevents them from attaching to any other substrate, such as water pipes, at which point we'd have "water-based pipe-symbolic" ones. Such an aggregate might develop a mind of its own, even a human-like mind, complete with a self-delusion that similarly takes that emergent self as essential.
As such, it'd seem to me that, without a fully developed "physics of symbols", such speculations may go either way and don't really help solve the issue. A full treatment of the topic would need to expand on all such possibilities, and then analyse them from perspectives such as the ones above, before properly contrasting them.
Where is all the furry AI porn you'd expect to be generated with PonyDiffusion, anyway?
In my experience, it's on Telegram groups (maybe Discord ones too, but I don't use it myself). There are furries who love to generate hundreds of images around a certain theme, typically on their own desktop computers where they have full control and can tweak parameters until they get exactly what they want. They share the best ones, sometimes with the recipes. People comment, then quickly move on.
At the same time, when someone gets something with meaning attached, such as a drawing they commissioned from an artist they like, or one that someone gifted them, it has more weight, both for themselves and for friends who share in their emotional attachment to it.
I guess the difference is similar to the one many (a few? most?) notice between a handcrafted and an industrialized good: even if the industrialized one is better by objective parameters, the handcrafted one is perceived as qualitatively distinct. So I can imagine a scenario in which there are automated, generative websites for quick consumption -- especially video, as you mentioned -- and Etsy-like made-by-a-real-person premium ones, with most of the associated social status geared toward the latter.
A smart group of furry advertisers would look at this situation and see a commoditize-your-complement play: if you can break the censorship and everyone switches to the preferred equilibrium of AI art, that frees up a ton of money.
I don't know about sex toys specifically, but something like that has been attempted with fursuits. There are cheap, knockoff Chinese fursuit sellers on sites such as Alibaba, and presumably there's a market for those somewhere, otherwise they wouldn't be advertised, but I've never seen someone wearing one at either the big cons or the small local meetups I attended, nor have I heard of someone who does. As with handcrafted art, it seems furries prefer fursuits made either by the wearer themselves or by artisan fursuit makers.
I suppose that might all change if the fandom grows to the point of becoming fully mainstream. If at some point there are tens to hundreds of millions of furries, most of whom carry furry-related fetishes (sexual or otherwise), real industries might form around us to the point of breaking through the traditional handcraft focus. But I confess I have difficulty even visualizing such a scenario.
Hmm... maybe a good source for potential analogies would be the Renaissance Faire scene. I don't know much about it, but it's (as far as I can gather) more mainstream than the Furry Fandom. Do you know if such commoditization happens there? That might be a good model for what's likely to happen with the Furry Fandom as it further mainstreams.
This probably doesn't generalize beyond very niche subcultures, but in the one I'm a member of, the Furry Fandom, art drawn by real artists is such a core aspect that, even though furries use generative AI for fun, we don't value it. One reason behind this is that, unlike more typical fandoms, in which members are fans of something specific made by a third party, in the Furry Fandom members are fans of each other.
Given that, and assuming the Furry Fandom continues existing in the future, I expect members will continue commissioning art from each other or, at the very least, will continue wanting to be able to, and will use AI-generated art as a temporary stand-in while they save up to commission real pieces from the actual artists they admire.
I'd like to provide a qualitative counterpoint.
Aren't these arguments valid for almost all welfare programs provided by a first-world country to anyone but the base of the social pyramid? For one example, let's take retirement. All the tax money that goes into paying retirees to do nothing would be much better spent helping victims of malaria etc. in third-world countries. If they weren't responsible enough to save during their working years so as to live without working for the last 10 to 30 years of their lives, especially those from the lower middle class and above, or to have had 10 kids who would each sustain them in their late years with 10% of their income, that increases the burden on society, etc. And similarly for other programs targeting the middle class. So why not redirect most or even all of this to those more in need?
A possible answer, covering both the specific case you brought up and the generalized version above, counterintuitive as it may be, is that the original intent of welfare seems to have been forgotten nowadays, which makes it worth bringing back.
Welfare wasn't originally implemented due to charitable impulses of those in power. Rather, it was first implemented to increase worker productivity, as in the programs pioneered by Bismarck in the 19th century. After that, it went on being implemented to reduce the working class's drive to become revolutionaries, which Marx noticed would happen in his Critique of the Gotha Program, and which is why he opposed such programs. And in fact, wherever extensive welfare programs were instituted, early empirical observations showed they did reduce the revolutionary impulse.
Add to that the well-observed fact that mass revolutions over the last century and a half, left- and right-wing alike, have been strongly driven by dispossessed but well-educated, and thus entitled, young adults whose social and economic status was below their perceived self-worth, and we have the recipe for why providing welfare to those who traditionally form a revolutionary vanguard, so they don't become one, may be a reasonable long-term strategy, supposing we consider such movements, and what they result in, a net negative.
Hence the baseline question, as I see it, isn't so much about the raw economics of the issue as about how likely a revolution in the US is, given the worsening economic conditions of its young middle class and the changing shape of the US age pyramid, and, based on a cost-benefit analysis, how much a revolution not happening in the US over the next generation or two is worth in monetary terms. Is a US revolution strictly impossible? If it's possible, is its likelihood high enough that reducing it is worth $1 trillion?
The same goes for all welfare aimed at this socio-economic/age-bracket group.
EDIT: Typo and punctuation corrections, and minor clarifications.
When this person goes to post the answer to the alignment problem to LessWrong, they will have low enough accumulated karma that the post will be poorly received.
I don't think this is accurate; it depends more on how it's presented.
In my experience, if someone posts something that contradicts the general LW consensus, but argues carefully and in detail, addressing the likely conflicts and recognizing where their position differs from the consensus, how, why, etc. -- in short, if they do the hard work of properly presenting it -- it's well received. It may earn an agreement downvote, which is natural and expected, but it also earns a karma upvote for the effort put into exposing the point, plus engagement from those who disagreed, explaining their points of disagreement.
Your point would be valid on most online forums, where people who aren't as careful about arguments as LWers tend to conflate disliking with disagreeing, with the result that a downvote is a downvote is a downvote. Most LWers, in contrast, are well skilled at treating the two axes as orthogonal, and it shows.
The answer is threefold.
a) First, religious and spiritual perspectives are primarily a perceptual experience, not a set of beliefs. For those who have this perception, whose object is technically named "the numinous", it is self-evident. The numinous stuff clearly "is there", for anyone to see/feel/notice/perceive/experience/etc., and they cannot quite grasp the concept of someone saying they notice nothing.
Here are two analogies of how this works.
For people with numinous perception, hearing "it's pretty, but that's all" is somewhat similar to someone with perfect vision hearing from a person born blind that they don't see anything. The person with vision can only imagine "not seeing" as "seeing a black background", similar to what they perceive when they close their eyes or are in a perfectly dark room. But not seeing isn't seeing black; it's not seeing.
Consider, for another analogy, a dove with normally functioning magnetic field sensing that was able to talk, and that asked you: "So, if you don't feel North, which direction do you feel?" You'd reply "none", and the dove would at most be able to imagine you feel something like up or down, because it cannot grasp what it is like not to physically feel cardinal directions.
The opposite also applies. People with no numinous perception at all are baffled by those with it describing a perception of something that quite evidently isn't there. Their immediate take is that the person is self-deluded, or suffering from some perceptual issue, maybe even schizophrenic, if not outright lying. At their most charitable, they'll attribute this perceptual error to a form of synesthesia.
Unsurprisingly, one is much more likely to be a Theist or similar if one has numinous perception, and much more likely to be an Atheist if one doesn't, though there are exceptions. I don't remember if it was Carl Sagan or Isaac Asimov, but I recall one of them explaining in an interview that they did have this perception of a "something" there (I don't think they referred to it by its name), and were thus constantly tempted toward becoming religious, but kept fighting that impulse, knowing it to be a mental trick.
b) Thus, if we establish numinous perception is a thing, it becomes easy to understand what religions and spiritual beliefs are. Supernatural belief systems are attempts, some tentative and in broad strokes, others quite systematic, to account for these perceptions, starting from the premise that they're perceptions of objective phenomena, not of merely subjective mental constructs.
Interestingly, in my experience talking with people with this perception, what's perceived as numinous varies from one person to the next, which likely accounts for religious preferences when one has a choice.
For example, for some the nave of a Catholic cathedral is chock-full of the numinous, while a crystal-clear waterfall in a forest is just pretty but not numinous at all. Those with this kind of numinous perception are more likely to be Christian.
For others, it's the reverse. Those are more likely to go for a religion more focused on nature, some form of native religiosity, unstructured spirituality, animism, or the like.
For others yet, the numinous is felt in both contexts. These will be all in with syncretisms, complex ontological takes, and the like.
c) Finally, whether perceived numinous thingies are objectively real or not depends on one's philosophical assumptions.
If one's on the side of reductionism, then they're clearly some kind of mental epiphenomenon, either advantageous or at least not disadvantageous for survival, which is why it keeps being expressed.
If one's an antireductionist, they can say numinous thingies are quite real, but made of pure qualia, without any measurable counterpart to make them numerically apprehensible, so either one has the sensory apparatus to perceive them or one doesn't; external devices won't help.
And the main issue here is that the choice between reductionism and antireductionism is axiomatic. One either prefers one, and goes with it, or prefers the other, and goes with it. There's no extrinsic way to decide, only opposite arguments that tend to cancel out.
In conclusion:
To answer the question more directly, then: when someone says they believe in God, what they mean is that they perceive a certain numinous thingy, and that the most accurate way to describe that numinous thingy is with the word "God", plus the entire set of concepts that comes with it in the belief system they're attuned to.
If they abandoned this specific explanatory system, that wouldn't affect their numinous perception qua perception, so they'd likely either go with another explanation they felt covered their perception even better or, more rarely, actively force themselves to resist accepting the reality of that perception. The perception itself would remain there, calling for their attention.
I mean sure if you take self-reports as the absolute truth (...)
Absolute truth doesn't exist; the range is always ]0;1[, as 0 and 1 would require infinitely strong evidence. What imprecisions in self-reporting do generate is higher variance, skew, bias, etc., and these can be addressed by better causal hypotheses. However, those causal hypotheses must be predictive and falsifiable.
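In odds form, Bayes' theorem makes that explicit (a standard identity, nothing specific to this case):

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)}$$

A posterior of exactly 1 or 0 requires the cumulative likelihood ratio to go to infinity or to zero, and no finite body of evidence, self-reports included, can supply that.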
why go with the convoluted point about aro-ace trans women (...)
Because that's central to the falsifiability requirement. Consider: if transgender individuals explicitly telling researchers they never experienced autogynephilic impulses, nor any sexual impulse or attraction at all, is dismissed as invalid by the autogynephilic hypothesis's proponents, who suggest they actually did experience it but {ad hoc rationalization follows}, then what is the hypothesis's falsifiability criterion? Is there one?
More studies != better integration of the information from those studies into a coherent explanation.
There are several moments in research.
The initial hypothesis is simple: there are identifiable physiological differences between human male and female brains, and transgender individuals' brains show distinctive traits typical of the brains of the other sex, while cisgender individuals don't.
This is testable, with clear falsifiability criteria, and provides a pathway for the development of a taxonomy of such differences, including typical values, typical variances, normal distributions for each sex, a full bimodal distribution covering both sexes, and the ability to position an individual's brain somewhere along that bimodal distribution.
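As a minimal sketch of that last positioning step (all numbers invented for illustration, not taken from any study), assuming some measured brain trait fitted with one normal distribution per sex:

```python
from math import exp, pi, sqrt

# Invented parameters for a hypothetical brain trait, one normal
# distribution fitted per sex (arbitrary units).
MALE_MEAN, MALE_SD = 100.0, 10.0
FEMALE_MEAN, FEMALE_SD = 120.0, 10.0

def normal_pdf(x: float, mean: float, sd: float) -> float:
    return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

def p_female_typical(x: float, prior_female: float = 0.5) -> float:
    """Posterior probability that a measurement falls under the
    female-typical mode of the bimodal mixture (equal priors by default)."""
    pf = normal_pdf(x, FEMALE_MEAN, FEMALE_SD) * prior_female
    pm = normal_pdf(x, MALE_MEAN, MALE_SD) * (1 - prior_female)
    return pf / (pf + pm)

# A brain measuring 115 on this invented trait sits closer to the
# female-typical mode:
print(round(p_female_typical(115.0), 3))  # 0.731
```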
Following that taxonomic mapping, if it pans out, come questions of causality, such as what causes some individual brains to fall so far from the average for their birth sex. But that's a further development way down the line. Right now what matters is that the first stage is falsifiable and has seen constant corroboration, not constant falsification.
So now it's a matter of contrasting this theory's falsifiability track record with the autogynephilic hypothesis's falsifiability track record -- supposing there's one.
Feels like an example of bad discourse that you dismiss it on the basis of ace trans women without responding to what Blanchardians have to say about ace trans women.
Thanks for the link, but I'd say the text actually confirms my point rather than contradicting it. The numbers referred to:
"In this study, Blanchard (...) found that 75% of his asexual group answered yes. Similarly, Nuttbrock found that 67% of his asexual group had experienced transvestic arousal at some point in their lives. (...) 45.2% of the asexuals feel that it applies at least a little bit to them (...)"
can all be reversed to show that, respectively, 25% / 33% / 54.8% of aro-ace trans individuals answered in the negative, and the rebuttal of the universality of the hypothesis needs only these numbers to be non-zero. That they're this high comes as an added bonus, so to speak.
I would enjoy if someone could lay it out in a more comprehensible manner.
This is constantly being done. Over the last 20+ years, as neuroimaging and autopsy techniques advance and new studies are done using those more advanced techniques, we mostly get corroborations with more precision, not falsifications. There are occasional null results, so that isn't strictly always the case, but those come as outliers, not forming a new, contrary body of evidence, and not significantly affecting the identified trend as meta-analyses keep being done.
I'm not aware of someone having done a formal Bayesian calculation on this, but my impression is it'd show the scale constantly sliding toward the physiological hypothesis, and away from the autogynephilic one, as time advances, with only small backslides along the way.
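For concreteness, here's a minimal sketch of what such a calculation could look like, with likelihood ratios entirely invented for illustration (one per hypothetical study):

```python
from math import exp, log

def posterior(prior: float, likelihood_ratios: list[float]) -> float:
    """Sequential Bayesian updating in log-odds space. Each likelihood
    ratio compares P(result | physiological hypothesis) to
    P(result | rival hypothesis): values > 1 slide the scale toward
    the physiological hypothesis, values < 1 are backslides."""
    log_odds = log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += log(lr)
    odds = exp(log_odds)
    return odds / (1 + odds)

# Invented series: mostly corroborations (LR > 1) with an occasional
# null-result backslide (LR < 1), as described above.
print(round(posterior(0.5, [2.0, 1.5, 3.0, 0.8, 2.5]), 3))  # 0.947
```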
Yep, the idea that autogynephilia explains transgender identities can be shown to be false by pointing to a single piece of direct evidence: it isn't difficult to find aro-ace trans people. That right there shows autogynephilia isn't a universal explanation. It may apply to some cases, but transgender identities definitely go way beyond that.
Besides, but also mainly, we have evidence for physiological causes:
- Frigerio, Alberto, Lucia Ballerini, and Maria Valdés Hernández. “Structural, Functional, and Metabolic Brain Differences as a Function of Gender Identity or Sexual Orientation: A Systematic Review of the Human Neuroimaging Literature.” Archives of Sexual Behavior 50, no. 8 (November 2021): 3329–52. https://doi.org/10.1007/s10508-021-02005-9.
And it takes lots of handwaving, or deliberately ignoring the data, to stick with the autogynephilic hypothesis as the most general explanation.
Which texts is Hegel responding to? Is it ultimately rooted in Aristotle/Plato/Socrates? How much work does one have to do to get up to speed?
I'm not well versed in Hegel's philosophy, but I know he does three things (and probably more).
First, he builds upon Kant, who himself moved against all philosophy that came before him, re-founding the entire thing so as to be compatible with modern scientific inquiry.
Second, he changes the concept of truth from static to dynamic, not in the sense that what we think is true may be wrong and so we fix our knowledge until it becomes actually true, but in the sense that the very notion of "truth" itself changes over time, and hence knowledge that was true once becomes false not because it was incorrect, but because it's aligned with a notion of truth that is no longer valid. This comes on the heels of a new analytical methodology he invented for the purpose, one you need to master before seeing it in use.
Third, he tries to integrate notions of justice, rights, etc. that are still grounded on pre-Kantian notions with all the above.
The quoted paragraph touches on all of the above, so it takes knowledge of classic metaphysics, plus Kantian anti-metaphysics, plus classic political philosophy, plus Hegel's own take on what words such as "truth", "rights", etc. actually refer to.
It's an extremely ambitious project, and on top of that he has to deal with the potential censorship of rulers and church, so even in parts where he could be clearer he has to deliberately obfuscate things so that censors don't catch on to what he's actually trying to say (this was a usual procedure for many philosophers, and continues to be among some).
(...) when I read Bostrom, Parfit, or Foucault or listen to Amanda Askell or Agnes Callard or Amia Srinivasan I don't get the sense that they're necessarily trying to bring fundamentally new objects into our ontology or metaphysics, but rather that they're trying to clarify and tease apart distinctions and think through implications;
I don't know the last three, but the first two basically go in the opposite direction. They take all these complex novel notions of the genius philosophers and distill them down into usable bits by applying them to specific problems, with some small insights of their own sprinkled here and there. Foucault in particular also did some of the "big insight" thing, but in a more limited fashion and with a narrower focus, so it isn't as earth-shattering as what the major philosophers did.
Besides, there are movements among professional academic philosophers that propose developing philosophy in small bits, one tiny problem at a time, worked to exhaustion. Much of what they do is in fact this. But how that's seen varies. When I was majoring in Philosophy in the 2000s, for example, there was an opinion shared by all professors and teachers in the Philosophy Department that, of everyone who had worked there from its founding in the 1930s until that date, only one single professor could be considered a real philosopher. Everyone else was a historian of philosophy, which indeed was how they described what we were learning how to do. :-)
is that a project that tends to lend itself to a really different, "clearer" way of using language?
Yes, undoubtedly. On the flip side, it doesn't lend itself to noticing large-scale structural issues. For instance, working tiny problem by tiny problem, one after the other, one would never do as Hegel did: stop, look at things from a distance, and perceive that the very concept of truth everyone was using is itself full of assumptions that need unpacking and criticizing, in particular the assumption of the atemporality of truth. Rather, they will all tend to keep working from within that very concept of truth, assumed wholesale, doing their 9-to-5 job, accumulating their citations so as to get a higher pay, and not really looking outside any of it.
A rule of thumb is that major philosophers make you feel ill. They destroy your certainties by showing what you used to consider solid ground were mirages. Minor philosophers and professional philosophers, in contrast, feel safe. At most a little inconvenient here and there, but still safe, since with them the ground is still the same, and still mostly as firm as before.
... this quote ... was used by Scott Alexander in his Nonfiction Writing Advice as an example of an entirely unreadable abstract paragraph.
It isn't unreadable. Hegel is arguing with concepts from previous philosophies which he presumes the reader already knows and understands well. If one begins reading him possessing the prerequisite knowledge, one can understand him just fine. Besides, this is a point in the middle of a long discussion, so he presumes the reader has understood the previous points and is connecting the dots.
Great philosophers are great because they notice something no one has noticed before and are thus the very first person in History to try and express that. They have no tool for doing so other than everything that was said before, which, by definition, doesn't include what they're trying to say. So, on top of trying to say something utterly, absolutely novel, they must invent the language and semantics with which to say it by repurposing words and concepts that aren't appropriate for the task. Eventually (measured in decades to centuries) students of that philosopher figure out better ways to express the same novel notions he pioneered, and cause the learning curve to become less and less steep. In the extreme this is so well done, and that philosopher's ideas and terminology gain such widespread adoption, that language itself adapts to the way the philosopher used it. And then everyone is talking from within that philosopher's terminology, and wondering, when they read the original work, what was the big deal with someone who was all about stating, and badly at that, mere truisms.
If philosophers wrote presuming their readers have no philosophical knowledge at all, and under the requirement that all words they use must retain their current, commonsensical meaning, every sentence of theirs would balloon into an entire book. The philosopher would die of old age before having presented 1% of what they wanted to say.
Either that, or instead this happens. I guess by this point we're in Schrödinger's Cat territory:
Humans also bottleneck the maritime side of cargo shipments via artificial scarcity in the form of cartels and monopolies. The referenced $2k shipments could have cost even less, but there's rent capture driving final transportation prices higher than they could be, and payments to on-the-ground operators lower than they, too, could be, with the resulting spread going into the hands of monopolists who successfully work around legal impositions from as many jurisdictions as possible.
I wouldn't say it's a matter of validity, exactly, but of suitability to different circumstances.
In my own personal ethics I mix a majority of Western virtues with a few Eastern ones, filter them through my own brand of consequentialism, in which I give preference to actions that preserve information over actions that destroy it, ignore deontology almost entirely, take into consideration the distribution of moral reasoning stages as well as which of the 20 natural desires may be at play, and leave utilitarian reasoning proper to solve edge cases and gray areas.
The Moriori massacre is precisely one of the references I keep in mind when balancing all of these influences into taking a concrete action.
This analysis shows one advantage virtue ethics, with its strong focus on internal states, has over utilitarianism and deontology, with their focus on external reality. And it also shows aspects of the Kohlbergian analysis of the different levels of cognitive complexity possible in the moral reasoning of moral agents. Well done!
One concrete example I like to refer to is the Maori massacre of the Moriori tribe. The Moriori were radical non-violence practitioners who lived on their own island, to the point that even Gandhi would have been considered too angry a person for their tastes. The Maori, in contrast, had a culture that valued war. When the Maori invaded the Moriori's island, they announced it by torturing a Moriori girl to death and waited for them to attack, expecting a worthy battle. The Moriori didn't attack; they tried to flee and submit. The Maori were so offended by having their worthy battle denied that they hunted the Moriori to extinction, and not via quick deaths, no -- via days-long torture. This is the one tale that helps me weigh my own non-violence preferences down into reasonableness, to avoid over-abstracting things.
On the last point, you reminded me of a comedian impersonating different MBTI types. When playing the INTP profile he began acting as a teacher reading a math question from the textbook to his students: "There are 40 bananas on the table. If Suzy eats 32 bananas, how many bananas...", then stops, looks up at the camera while throwing the book away, and asks "Why is Suzy eating 32 bananas? What's wrong with her!?" 😁
Thanks. Now I'm torn between my own take and a possibly improved version of this one. :-)
Thanks for this review. I have done evil in the past for reasons similar to those the author points out. Not huge evils, smaller evils, but evils nonetheless. Afterwards I learned to be on guard against those small causal chains, but even so, even having begun being on guard, I still did evil one more time afterwards. I hope my future rate will go down to zero and stay there. We'll see.
By the way, an additional factor not mentioned in the review, and thus, I suppose, not in the book, is the matter of evil governments manipulating the few who are good so that they, too, serve evil purposes. This is something major powers do regularly. Their strategists identify some injustice going on in enemy territory, and induce those there who care to seek justice in specific ways calculated to cause the most disruption to the enemy government. Power structures thus destabilized result in social chaos, which can grow, when properly nurtured, into extreme violence, blood feuds, crackdowns, oppression, and generations-long prejudice and hatred. All by manipulating the goodness and sense of justice of the gullible.
To avoid that and do true good one needs to think from the perspective of evil. To imagine the many ways in which one's good impulses could be redirected into evil deeds, and to act one or more layers above that.
"The Worst Mistake in the History of Ethics"
I'm curious what GPT-3 would output for this one. :-)
PS: And I have my own answer for that: Aristotle's development of the concept of eudaimonia, "the good life", meaning the realization of all human potential. For him it was such a desirable outcome, so valuable, that its existence justified slavery, since the many working allowed a few to realize it. Advance 2,400 years of people also finding it incredibly desirable, and we end up with, among others, Marx and Engels defending revolutionary terror, massacres, and mass political persecution so that it could be realized for all, rather than for a few.
I personally think quotation-over-punctuation would solve this nicely. Here's an example from someone who managed to have his TeX documents do exactly that:
Minor curiosity: originally, back in the old printing days, quotation marks went neither before nor after punctuation marks, but above them; after all, the quotation mark is a half-height symbol with empty space below it, and the punctuation mark a half-height symbol with empty space above it, so both merged well into a single combined glyph, saving space.
When movable type entered the picture almost no type sets had unified quotation+punctuation pieces, so the two were physically distinct symbols that needed an ordering when placed on the printing board. Over time the US mostly settled on punctuation-then-quotation, while most other countries went mostly with quotation-then-punctuation -- which on further analysis (and then with programming languages) proved more sensible.
Nowadays with modern Unicode ligatures we could easily go back to quotation-over-punctuation for display purposes, while allowing the writing to be either way, but I suppose after 200 years of printing these glyphs separately no one has much interest in that.
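As a minimal sketch of how the display side could be approximated in LaTeX today (the macro name \qstop is invented for illustration; \rlap is a standard zero-width overlap command):

```latex
\documentclass{article}
% \rlap typesets its argument with zero width, so the period at the
% baseline and the closing quote at the top share the same horizontal
% slot, recreating the old combined quotation-over-punctuation glyph.
\newcommand{\qstop}{\rlap{.}''}
\begin{document}
He said ``hello\qstop{} And then he left.
\end{document}
```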
I'm intrigued – google gives only porn videos as search results.
The tongue is very sensitive. A very skilled kisser knows how to intensely stimulate the top of their partner's tongue with theirs while French kissing, to the point one or both of them get a very specific kind of orgasm different from any other. In my case I got spasms while washed in endorphins, which took several minutes to subside. :-)
Also, I assume you mean a P-spot orgasm when you say "female orgasm"?
No, I mean an actual female orgasm. I can provide exactly zero evidence for this, which on LW is a particularly huge no-no, but if mentioning a bit of mystic experience isn't too much of a problem, I can say there are Tantra masters out there who can induce some pretty interesting experiences in suitable students, one of which, in male-bodied ones, is having a full set of phantom-limb representations of female genitalia, complete with the mental experience of female orgasms (as well as of male genitalia in female-bodied students). This is linked to advanced Karmamudrā techniques.
Ditto, or more precisely, no one from my graduation class has any interest in paying for one, so we all got our certificates by mail. I suppose it helps that most everyone was 30+, and the major was Philosophy, neither of which predisposes one to care much about such things, much less when put together.
Looking at the pain scale, I guess I'm somewhat atypical. Of the pleasurable experiences I've had, I'd order them as follows:
- 0.0: College graduation (I haven't really felt it as anything special)
- 0.2: Alcohol consumption (but I haven't gotten really drunk)
- 1.0 to 3.0: Male orgasm (kinda meh most of the time, sometimes good)
- 2.0: Tongue orgasm from a skilled kisser
- 4.0 to 6.0: Female orgasm (the first one is 4.0, successive ones being more and more intense until it plateaus at 6.0 on the 8th orgasm or so)
(Yes, I've had the last one despite being 100% a cis-male. Let's attribute it to "the magics" and leave it at that.)
And on the pain scale, the worst toothache I've ever had was way stronger than when my gallbladder was almost rupturing, so I think it'd go like this:
- 1.0: ear infection
- 1.0 to 3.0: toothache, lower back pain
- 2.5: gallbladder going kaput
- 3.0: the most impactful death in the family
- 4.0: heartbreak
That depends. Several metaphysical systems develop ontologies, with concepts such as "objects" and "properties". Couple that with the subfield of Applied Metaphysics, which informs other areas of knowledge by providing systematic means to deal with those foundations. So it's no surprise that one such application, several steps down the line, was the development of object-oriented programming, with its "objects possessing properties" ordered into "ontologies" via inheritance, interfaces, and the like.
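As a minimal sketch of that correspondence (all class and property names invented for illustration): an ontology's categories become classes, its properties become attributes, and category subsumption becomes inheritance.

```python
from abc import ABC

# A toy metaphysical ontology as a class hierarchy: each class is a
# category, each attribute a property, and inheritance encodes
# subsumption ("every Animal is a LivingThing is a Substance").
class Substance(ABC):
    def __init__(self, name: str):
        self.name = name  # a property every substance bears

class LivingThing(Substance):
    alive: bool = True

class Animal(LivingThing):
    def __init__(self, name: str, legs: int):
        super().__init__(name)
        self.legs = legs

whiskers = Animal("Whiskers", legs=4)
print(isinstance(whiskers, Substance))  # True: category membership inherited
```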
Thanks! And done! :-)
I've tried adding spoiler tags, but it isn't working. According to the FAQ, for Markdown it's three colons and the word "spoiler" at the beginning, followed by three colons at the end, but no luck. Any suggestion?
I think that was the one, yes. It's been years and I forgot the name.
I'll add the tags, thanks!
There's a Naruto fanfic (much better than the actual manga, mind) with this trope, except the author adds a cool extra at the end. In that, it turns out one with looping power only goes back to the same point in time because
they haven't learned how to set a new, so to speak, "save point". This mechanic became clear to the characters after they had decades of experience in child bodies, so that they began to carefully plan the world they wanted to have, and exhaustively time looped until they managed to set things perfectly aligned for the next stage of their plan, at which point they "saved", and went for it.
Those aren't metaphysical. Metaphysics is a well-defined philosophical research field.
To complement @Dagon's comment, another difficulty is that Skepticism itself is also a philosophical model, which can be taken either as merely epistemological or as a metaphysical model unto itself, so the initial 1:1 model actually gives Skepticism a 50% prior vs. all other models combined. And then we have some relatively weird models such as Nominalism, which is metaphysically skeptical except for affirming, atop a sea of complete no-rules free-formness, the absolute will of an absolute god who decides everything just because.
Fun detail: my Philosophy major followed a method called "monographic structuralism", which consists in learning each philosopher's system as if we were devout followers of theirs (for the duration of the class). The idea was that before opining on this or that philosophical problem, it was worth knowing that philosopher's arguments and reasoning as well as they themselves did. So one studied philosopher A enough to argue perfectly for his ideas, finding them perfectly self-consistent from beginning to end and from top to bottom; then studied philosopher B similarly; then philosopher C, ditto; and so on and so forth, which invariably led one to learn two philosophers who said the exact opposite of each other while each remaining perfectly self-consistent -- at which point one threw their hands up and concluded the issue to be strictly undecidable. In the end most students, or at least those who stuck with the major long enough, became philosophical skeptics. :-)
This was extremely informative! Thank you!
A few points I'd like to comment on:
"So eager were poor farmers for dirty, dangerous factory jobs (...)"
There's an underlying question as to why those farmers were that poor and in such dire need of those factory jobs. One answer I've seen was in Hilaire Belloc's 1912 book The Servile State, one of the first books of the Distributist school of economics. According to him, the end of the feudal system in England, and its turning into a modern nation-state, involved among other things the enclosure and appropriation by nobles, as a reward from the kingdom, of the common farmlands farmers had worked, as well as the confiscation of the lands owned by the Catholic Church, which for all practical purposes also served as common farmland. This resulted in a huge mass of farmers with no access to land, or only very diminished access, who decades later became the proletarians of the newly developing industries. If that's accurate, then it may be that the Industrial Revolution wouldn't have happened had all those poor not existed, since the very first industries wouldn't have been attractive compared to the conditions non-forcibly-starved farmers would have had.
"By making wage labour attractive enough to draw in millions of free workers, higher wages made forced labor less necessary, and because impoverished serfs and slaves—unlike the increasingly prosperous wage labourers—could rarely buy the manufactured goods being churned out by factories, forced labour increasingly struck business interests as an obstacle to growth (especially when it was competitors who were using it)."
This is a common narrative about how chattel slavery came to an end, to the point it sounds like common sense by now, but I haven't actually seen strong evidence for this interpretation. Maybe this evidence exists and it's just a matter of someone pointing it out to me, but so far I know of three points of divergence from this narrative:
1. Forced labor ended once before. The Middle Ages, as complex farming hierarchies and belief systems developed in the centuries following the fall of the Roman Empire, saw the descendants of the slaves of the former Roman villas-turned-fiefdoms slooowly gaining more and more customary legal rights in their process of becoming serfs, rights feudal lords rarely refused them lest doing so hit their reputations hard. By the Late Middle Ages this process had made serfs, while technically still property almost everywhere, free in practice, with some places having outright forbidden literal slavery altogether by as early as the 12th century.
2. This was quite clearly recognized as such by the Catholic Church, which, once the new nation-states began their Great Navigations and restarted the mostly abandoned practice of enslavement all over again, began to periodically issue papal bulls heavily condemning enslavers, the earliest of which came in the 16th century. Not that the Church had effective power on the matter; all it could do was tell enslavers they were going to Hell, a threat enslavers clearly gave little attention to. But this at the very least shows that, culturally, there was a strong anti-enslavement force in place amidst all that European agrarian ethos, one that kept advancing in parallel with, and despite, the nation-states' renewed push for slavery.
3. This cultural force finally cascaded when, in the late 18th century, religious-based political abolitionist associations began developing and lobbying for the end of slavery and, in a mere 50 years, turned England from a heavy promoter of slavery into a country that spent huge amounts of money and military resources hunting enslavers worldwide.
Notice that, while point 3 overlaps with the Industrial Revolution, the causality here seems to me to be the opposite of how it's usually depicted: abolitionism helped advance industrialization as an unintended side effect of its ideals cascading into practice, not the other way around. Which, evidently, doesn't prevent the usual narrative from being valid in other places, that is, countries in which slavery was still well accepted finding themselves forced, first militarily, then technologically, and finally economically, to adapt or perish. But the former case seems to me to have been the more prevalent, in the West at least, what with the Civil War in the US and enlightened royals voluntarily giving up their crowns to end slavery on moral grounds.
Over millennia, such societies either had their tricks independently discovered or copied by others, or else outright went on the warpath to subjugate other societies to their rule – and, of course, preach their values, which (given human adaptability) they held sincerely, and with no idea that they thought differently from their distant ancestors.
I think at least some recognized quite clearly that they thought differently. I don't remember where I got this information, I think it was in Karen Armstrong's Muhammad: A Biography of the Prophet, but I distinctly remember reading about how, when Muhammad was young, he was sent by his uncle to live among nomads for a few years, as this was considered part of the proper education of the young back then, precisely because nomads were seen as the preservers of the old ways, keepers of strict adherence to proper moral values and work ethics, and thus excellent examples to a young, impressionable mind compared to the lazy, inferior morals developed in the sedentary lifestyle of farms and villages (yes, laboring 12+ hours a day under backbreaking conditions was considered sedentary).
Now, while foragers and nomads aren't the same category of wandering people, it'd seem to me there was an awareness of the cultural differences between settled peoples and wandering ones, at least roughly similar to how those living in, and fully inserted into, huge modern metropolitan areas are aware of the cultural differences between themselves and those living in the country.
(...) was the centralisation-vs-decentralisation tradeoff really so simple in the farming era that "godlike kings everywhere" was the only effective answer?
Perhaps it was seen as such by those involved. One interesting reference point is given in the Bible.
1 Samuel 8 narrates how at one point the Hebrews, envying the kings of the surrounding countries, decided they wanted one too, so they demanded that the prophet Samuel crown one. Samuel disliked this, prayed to God, and God told him to warn his fellow countrymen of all the very-bad-things having a kingdom would result in (verses 11-18):
"This is what the king who will reign over you will claim as his rights: He will take your sons and make them serve with his chariots and horses, and they will run in front of his chariots. Some he will assign to be commanders of thousands and commanders of fifties, and others to plow his ground and reap his harvest, and still others to make weapons of war and equipment for his chariots. He will take your daughters to be perfumers and cooks and bakers. He will take the best of your fields and vineyards and olive groves and give them to his attendants. He will take a tenth of your grain and of your vintage and give it to his officials and attendants. Your male and female servants and the best of your cattle and donkeys he will take for his own use. He will take a tenth of your flocks, and you yourselves will become his slaves. When that day comes, you will cry out for relief from the king you have chosen, but the Lord will not answer you in that day."
This suggests the system of government that existed before didn't do those things. That system, called Judging, isn't well known, but I remember a historian once explaining it was very decentralized. If I remember right, political power was intermittent and an all-or-nothing proposition: some families had generational military duties that included, but only in war times, absolute power for the purposes of defense against external aggression. In times of peace, in contrast, those families had no power, having to tend to their lands and produce their own food or whatever by themselves, just like everyone else. It therefore worked more as a loose, decentralized federation of micro-states that used militias for self-defense than as a big, integrated, centralized government with a permanent military force.
And yet, if there's any truth left in the story after centuries of retellings before it was put to paper, the people saw their neighbors' centralization and really wanted a piece of it for themselves. Alas, the text doesn't dwell on their reasons, but if I were to venture a guess, it'd be that they saw their neighbors' effective, deployable armies as threatening, and saw centralization as a means to defend themselves more effectively despite the listed drawbacks.
I have the impression you're conflating the terms "freedom" and "democracy", themselves quite broad. The contents of your post suggest what you're seeking is to live in a country that is a representative liberal democracy, and whose electoral process results in specific representativeness quotients, as well as in other specific features. But that doesn't exactly overlap with any specific notion of "freedom", such as that of "true freedom", unless you also provide a specific definition of both.
I imagine you'll find a better response if you taboo the words "democracy", "freedom", and "true freedom", so as to restate what you're seeking in more objective, concrete terms.
I can vouch for Aigent's effectiveness! It even helps with hobbies! Why, over the last month it earned me about +30 karma on LW alone!
Powered by Aigent® Free. More smarts, less effort!™
About this:
People reproduce at an exponential rate. The amount of food we can create is finite. Population growth will eventually outstrip production. Humanity will starve unless population control is implemented by governments.
The calculation and the predictions were correct until the 1960s, including very gloomy views that wars over food would begin by the 1980s. What changed things was the Green Revolution. Were it not for this technological breakthrough, which no one could actually have predicted, right now we might be looking back at 40 years of wars, with plenty more dictatorships and authoritarian regimes all around, some waging multiple wars against their neighbors, others with long-running one-child policies of their own.
So, in addition to the points you made, I'd add that many times uncertainty comes from "unknown unknowns", such as not knowing what technologies will be developed, while at other times it comes from hoping certain technologies will be developed, betting on them, and then seeing them fail to materialize.
Is it worth acting when you're comparing a 0.051% chance of doing good to a 0.049% chance of doing harm?
I'd say Chesterton's Fence provides a reasonable heuristic for such cases.
You're welcome. There's a stronger continuity if you look at pre-modern Catholicism and Orthodoxy, but yes, Christianity changed a lot over time.
By the way, something that may help you locate your own personal moment in your relation to the religious teachings you received is to view it in light of Piaget's theory of cognitive development, Kohlberg's theory of stages of moral development, and Fowler's theory of stages of faith development, as these helped me understand my own. They build one atop the other in this same sequence, Fowler's depending on Kohlberg's, which in turn depends on Piaget's, so it's important to read the three links in the order provided.
There is an element of submission, but originally it meant submission of the will to the knowledge of those who know better, even when what they say goes counter to your base interests.
For example, going back to praus/taming/meekness, one reference Jesus uses is that of his "yoke" being easy and its load light. A yoke is a U-shaped bar used to fasten two draft animals together so they can pull loads together. One method animal trainers used back then (and maybe still use today) to train an animal in a new job was to fasten its neck to one side of a yoke, with a very experienced animal on the other. This way the experienced animal, doing its well-practiced routine, leads the untrained one to learn it much faster. So the idea here is that, by emulating the elders, the novice gets "there" much faster, and with much less difficulty, than he would by doing things on his own. Which, considering this was in the context of Iron Age societies, in which an established practice remained state-of-the-art for generations at a time, tended in general to be true.
Nowadays things change at such a fast pace that this isn't the case anymore, so there's a clear mismatch between what such a saying was intended to convey, namely that one should listen to those who know better, and what one derives from it in a modern context, which depending on circumstances frequently ends up being the opposite.
It's worth noting that Paul teaches the exact same thing in a much more straightforward way, one still understandable verbatim for now, when he says it's good to learn about everything and then prudentially choose what to actually use from all one has learned. A huge number of Christians definitely don't do that, preferring instead to practice the misinterpreted version of the "yoke" metaphor.
The English word "meek" is a problematic translation of the original Greek "praus". Praus refers to a wild animal that has been tamed, the connotation being that such a person hasn't lost the virtue of strength of their wild nature, but has added to it the virtue of civilized interaction, similar to how a tamed animal learns to do things its wild counterparts never would.
This links to several other similar notions spread through the New Testament. For example, when Jesus:
a) Tells his disciples to be "harmless as doves" but "wise as serpents";
b) When he orders them to first go around and learn to preach without carrying weapons, thus having to resort to fleeing when threatened, and then, after they managed to do that, instructs them to arm themselves with swords, the implication being that now they have the experience needed to know when violence can be dispensed with, and when it cannot;
c) Or when he teaches them to turn the other cheek, which is also quite misunderstood nowadays. Back then, when a person of higher social standing wanted to deeply offend someone of lower social standing, they slapped them with the back of their hand. By showing such a person "the other cheek", the offended party prevented that movement, forcing a slap with the palm of the hand, a gesture reserved for challenging someone of one's own social standing, which most wouldn't dare do.
In short, such expressions have a connotation of deliberately restraining one's own savagery without letting go of it, so that others may know that, while you're fine and good and helpful, you aren't weak, and aren't to be trifled with. A connotation that more often than not is lost in translation.
Regarding 1 and 3, good points, and I agree.
On 2, when I say formalizable, I mean in terms of giving the original arguments a symbolic formal treatment, that is, converting them into formal logical statements. Much of non-analytic philosophy has to do with criticizing this kind of procedure. For an example among many, check this recent one from a Neo-Thomistic perspective (I refer to this one because it's fresh in my mind; I read it a few days ago).
On 4, maybe a practical alternative would be to substitute vaguer but broader relations, such as "agrees", "partially agrees", "disagrees", "purports to encompass", "purports to replace", "opposes", "strawmans", etc., for the more restricted notions of truth values. This would allow for a mindmap-style set of multidirectional relations and clusterings.
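A minimal sketch of what that could look like (relation names taken from the list above; everything else invented): a graph whose edges carry qualitative relations instead of truth values.

```python
from collections import defaultdict

# Edges carry qualitative relations rather than truth values, allowing
# mindmap-style multidirectional links and clusterings.
RELATIONS = {"agrees", "partially agrees", "disagrees",
             "purports to encompass", "purports to replace",
             "opposes", "strawmans"}

class PhilosophyWeb:
    def __init__(self):
        self.edges = defaultdict(list)

    def relate(self, source: str, relation: str, target: str) -> None:
        if relation not in RELATIONS:
            raise ValueError(f"unknown relation: {relation}")
        self.edges[source].append((relation, target))

web = PhilosophyWeb()
web.relate("Hegel", "purports to encompass", "Kant")
web.relate("Analytic philosophy", "opposes", "Hegel")
print(web.edges["Hegel"])  # [('purports to encompass', 'Kant')]
```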
My comments:
1. That's actually not the case. Analytic Philosophy is preeminent in the US and, to some extent, the UK. Everywhere else it's a topic one learns among others, usually in a secondary and subsidiary manner. For example, I majored in Philosophy in 2009. My university's Philosophy department, which happens to be the most important in my country and therefore the source of the vast majority of Philosophy undergraduates and graduates who then go on to influence other Philosophy departments, was founded by Continental philosophers, and remains almost entirely focused on that tradition, with a major French sub-department, a secondary German one, and some professors focusing on Classic and (Continental-style) English philosophers. In the Analytic tradition there was exactly one professor, whose area of research was Philosophy of Science.
2. Formalization, of any kind, is mostly an Analytic approach. When one formalizes a Continental philosophy, it ceases being the original philosophy and becomes an Analytic interpretation of that Continental philosophy, so not the original anymore. And there's a remarkable loss of content in such a translation.
3. They have "experiences" and "perceptions". Husserl's project, for instance, was to re-found Philosophy in the manner of a science by insisting that the objects (in the proper Kantian meaning of the word) philosophers work upon be first described precisely, so that when two philosophers discuss them, they're talking about precisely the same thing, avoiding divergences due to ambiguities in regard to the objects themselves. Phenomenology then, as Husserl understood it, was to focus on developing a full description of phenomena (perceived objects), and only afterwards philosophize about them. Phenomena, therefore, don't have opposites, since they're raw "objectively shared subjective perceptual descriptions", never concepts. Heidegger was a student under Husserl, so much of his work consists in describing phenomena. And those who followed both did the same, with many different emphases and methods, and mutual criticisms were mostly about aspects other phenomenologists hadn't noticed in this or that described phenomenon.
4. I'll give an example of how hard that can be. In Buddhist logic there are five truth categories: true, false, true-and-false, neither-true-nor-false, and unitive. In Jain logic there are seven: true, false, undefined, true-and-false, true-and-undefined, false-and-undefined, and true-false-and-undefined. Philosophy Web, as I understand it at least, would focus strongly on opposite categories, that is, "this is true therefore those are false", which are seen similarly from the others' perspectives, so the other truth categories get sidelined. And that's without entering the topic of the many different Western dialectical methods, such as Hegel's, which has historically bound, time-dependent truth-variability linked to the overcoming of oppositions.
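As a minimal sketch of the data-modeling problem this creates (values transcribed from the two lists above): merely representing both systems side by side already rules out a shared boolean truth type.

```python
from enum import Enum

# Buddhist logic: five truth categories.
class BuddhistTruth(Enum):
    TRUE = 1
    FALSE = 2
    TRUE_AND_FALSE = 3
    NEITHER_TRUE_NOR_FALSE = 4
    UNITIVE = 5

# Jain logic: seven truth categories.
class JainTruth(Enum):
    TRUE = 1
    FALSE = 2
    UNDEFINED = 3
    TRUE_AND_FALSE = 4
    TRUE_AND_UNDEFINED = 5
    FALSE_AND_UNDEFINED = 6
    TRUE_FALSE_AND_UNDEFINED = 7

# A web whose edges assume plain True/False cannot even store these
# judgments, let alone Hegel's time-dependent truth-variability.
```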
I don't mean to imply it wouldn't be a useful project though. I'm just pointing out its actual scope in practice will be narrower than your original proposal suggests.
It seems to me this would work for Analytic Philosophy, but not for other philosophical traditions. For instance:
a. Continental Philosophy has, since Heidegger (or, arguably, Husserl), taken a turn away from conceptual definitions towards phenomenological descriptions, so anything concept-based is subject, as a whole, to all manner of phenomenological criticisms;
b. Classical Philosophy frequently isn't formalizable, with its core terms overlapping in a very interdependent manner, and the same applies to some Modern ones. Splitting them into separate concepts doesn't quite work;
c. And Eastern Philosophies have a strong tendency to operate apophatically, that is, through negation rather than affirmation of concepts, so that every core term comprises a set of negations, resulting in a kind of mix of "a" and "b" with the signs inverted.
In short, a Philosophy Web, as proposed, would be a specific kind of meta-philosophical effort. Since every meta-philosophy is itself a philosophy, it is subject to being marked as one item among others in alternative meta-philosophical taxonomies, and to refutation from opposing methodologies, so it wouldn't be able to encompass more than a specific subset of philosophical thinking.
There are a few problems with that. One is that you just figured out how the universe works without examining the universe. Another is that you can't get MWI out of it... unless you regard it as a statement only about subjective probability.
I'm not sure I understood these two points. Can you elaborate?
The unstated part of the argument being that free will must be neither deterministic nor probabilistic?
Actually, the stated part. It's in my original comment. Although maybe I wasn't as clear about it as I thought I was.
I know what "reductionism" means.
This isn't quite the same reductionism as understood in physics; it has to do with Whitehead's discussion of the problem of bifurcationism in nature (see the next block for details). In this context, even a Jupiter-sized Culture-style AI Mind many orders of magnitude more complex than a human brain still counts as "physical reduction" in regards to "objective corporeality", if one assumes its computations capable of qualia-perception.
The problem is that you haven't explained why reducing the qualia of free will disposes of free will, since you haven't explained why free will "is" the qualia of free will, or why free will (the ability as opposed to the qualia) can't be physically explained.
Free will is always perceived as qualia. You perceive it in yourself and in others, similarly to how you perceive any other qualia.
Any attempt at reducing it to the physical aspects of a being describes at most the physical processes that occur in/with/to the object in correlation with that qualia. Therefore, two philosophical options arise:
a) One may assume the qualia thus perceived is as fundamental as the measurable properties of the corporeal object, thus irreducible to those measurable properties, and that the corporeal object is thus a composite of both measurable properties and qualia properties.
In this scenario, the set of the measurable properties of a corporeal object can be abstracted from it, forming a pseudo-entity, the "physical object", which is the object studied via mensuration, that is, via mathematical (and by extension logical) procedures and all they provide, among which are statistical and probabilistic methods. Any conclusion arrived at through them is then understood to describe the "physical object", which, being only part of the full corporeal object, makes any such conclusion partial by definition: it never covers the entirety of the corporeal object's properties, and in particular never covers its qualitative properties, as all it ever covers are its quantitative properties.
b) Or one may assume the qualia thus perceived is a consequence of those measurable properties, reducible to them, and therefore the corporeal object is those measurable properties, that is, that the corporeal object and the physical object are one and the same.
The burden of proof for case "a" is much lighter than that of case "b". In fact, case "a" is the null hypothesis, as it corresponds to our direct perception of the world. Case "b", in contrast, goes against that perception, and therefore is the one that needs to provide proof of its assertions. In particular, in the case of free will, it'd need to identify all the measurables related to what's perceived as free will, then show with absolute rigor that they produce the perceived qualia of free will in something formerly devoid of it, and then, somehow, make that generated qualia perceptible as qualia to qualia-perceivers.
To use a classic analogy, showing even something much simpler, such as that the qualia "color red" is the electromagnetic range from 400 to 484 THz, cannot be done yet. Note that this isn't the same as showing that the qualia "color red" is associated with and carried by that EM range. For instance, if I close my eyes and think about an apple, I can access that qualia without a 400~484 THz EM wave hitting my eyes. As such, my affirmation that the qualia "color red" is distinct from the EM wave is straightforward and needs no further proof, while any affirmation asserting that the qualia "color red" is reducible, first, to the measurable physical property "400~484 THz EM wave", and second, to the measurable physical properties of neurons in a brain, is the one that needs thorough proof.
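For reference, that frequency band is just the usual wavelength range quoted for red light restated; a quick sanity check of the conversion, assuming the common ~620-750 nm approximation for "red":

```python
# Wavelength -> frequency for the rough edges of the "red" band.
C = 299_792_458  # speed of light in vacuum, m/s

for wavelength_nm in (750, 620):  # long and short wavelength edges of red
    frequency_thz = C / (wavelength_nm * 1e-9) / 1e12
    print(f"{wavelength_nm} nm ~ {frequency_thz:.0f} THz")
# 750 nm ~ 400 THz
# 620 nm ~ 484 THz
```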
Until any such proof appears -- for colors, as the entry-level "easy" case, then for the much more difficult stuff such as free will -- opting for "a" or for "b" will remain an arbitrary preference, as the philosophical arguments for one and for the other cancel out.
That {QM}'s the best known example {of "indeterministic physics"}.
From the summary of the bifurcation problem I provided above, I think it's clearer what I mean by indeterministic. From an "a" point of view, QM is still entirely about physical objects, saying much about their measurable properties but nothing about their qualia. Hence, all it says is that some aspects of corporeal objects are fuzzy, the range of that fuzziness however being strictly determined; and, if MWI is correct, even this fuzziness is more apparent than real, since what it's really saying is not that such physically measurable aspects are fuzzy, but rather that the physical object branches, quite deterministically, in many ways.
Whether such "fuzziness within a determined range in a single world" or such "deterministic branching in many worlds" works as a carrier for, or in correlation to, the qualia properties of the full corporeal object, including but not limited to the free will qualia perceived by qualia-perceivers, is an entirely different problem, and there's no easy, straight jump from one domain to the other. I suppose there may be one, but no matter how many physically measurable randomness properties one identifies and determines, there's still no self-evident link between these properties of the physical object and the "free will" qualia of the full corporeal object.
You can conceivably have free will while having no qualia, or while having a bunch of qualia, but not that one.
Given the above: you may have determinations in the form of single values, or of value ranges with inherent randomness, while having no qualia; but stating that these physical determinations imply having the "free will" qualia is a logical jump.
Taking the "color red" example again: you may have an extremely energetic 400~484 THz EM wave and yet no "color red" qualia at all, for the simple lack of any qualia-perceiver in its path, or because the qualia-perceiver in its path lacks the ability to extract a "color red" qualia from that carrier, or because the EM wave was absorbed by a black body, etc.
Hence, while physically measurable randomness may be a "free will" qualia carrier, the lack of qualia perception would still result in the "free will" qualia carried by it being lost. Conversely, a qualia-perceiver may have free will even in the absence of the typical physical carrier of "free will" qualia, as in the analogous case of a mind capable of imagining the "color red" qualia despite the absence of its usual "400~484 THz EM wave" carrier.
the branching structure as a whole is deterministic, not that the branches are individually.
That depends on how you consider probabilities. One usual take, when it comes to concrete events, is that the probability of something that actually happened is 1.0, since it actually happened. Therefore, when you look at a sequence of causes and events backwards, that is, as history, this after-the-fact sequence is always strictly deterministic even if every single one of its links had a less-than-1.0 probability of happening before it actually happened in that specific way.
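A toy illustration of this hindsight point (a made-up coin-flip process, purely for the shape of the argument): run forward, each link is 50/50; read backwards as recorded history, every link has probability 1.

```python
import random

def run_forward(steps: int) -> list[str]:
    # Genuinely open-ended while it runs: each link is 50/50.
    return [random.choice(["heads", "tails"]) for _ in range(steps)]

history = run_forward(10)

# Before the fact: the chance of this exact sequence occurring.
prior = 0.5 ** len(history)

# After the fact: each recorded link actually happened, so conditioning
# on the record gives every link probability 1.0, making the chain as a
# whole strictly deterministic in hindsight.
posterior = 1.0 ** len(history)

print(history)
print(f"prior: {prior}, posterior: {posterior}")  # prior: 0.0009765625, posterior: 1.0
```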
Maps aren't territories, even though territories are modelled with maps. Modelling isn't ontological identity.
Well, if you prefer that terminology, I can restate it this way: maps that only provide deterministic and/or probabilistic nodes (probabilistic being, as I understand it, a superset of deterministic) cannot capture features of the territory they're trying to map that are neither deterministic nor probabilistic.
To provide an example: a map that only provides EM frequencies says nothing of colors unless it also maps the connection from EM frequencies to colors, via visual cortices and all the associated biological organs, and provides primitives for the qualia of those colors.
It's not obvious that being reducible to physics is the same as being reducible to deterministic physics,
Sorry, I wasn't clear. "Physical reducibility" is a technical expression referring to the philosophical assumption that the whole of a concrete object, that is, both its quantitative properties and its qualitative properties, arises exclusively from its quantitative properties; in other words, that a concrete object is "nothing but" the physical object.
it's not obvious that indeterministic physics can't support free will,
I'm not sure what you mean by "indeterministic physics". Do you mean QM?
and it's not obvious that you need a quale of free will to have free will. Just as you can live and die without knowing you have a spleen.
I'm not sure I understand this point either. Are you referring to philosophical zombies?
That's a contradiction in terms
Not really. The sentence you split forms a single line of reasoning. The first part is the claim, the second is the justification for the claim. You can read them in reverse if you prefer, which would give it a more syllogistic form.
Which? Logical or causal?
Both, since causal determinism is logically modelled. More specifically, causal determinism is a subset and a consequence of logical determinism, which is inherent to all forms of logical reasoning, including this one.
In any case, the point of causal determinism is that there is only one possible outcome to a state, i.e. only one path going forwards. / If you mean an RNG as opposed to a pseudo-RNG, yes, it does make it less deterministic... by definition.
That's precisely what MWI and similar notions disagree with. But yes, if we assume a single world, then the consequence is one of the alternatives, and none of the others.
Huh? That's not generally acknowledged. / That is not universally acknowledged.
True. I'm arguing against the generally acknowledged view. My position is based on traditional, non-physically-reducible, qualia-based concepts of free will as present in, e.g., Aquinas and Aristotle.
Evidently, if one assumes all qualia are physically reducible, then free will as such doesn't exist and is a mere subjective interpretation of deterministic and/or randomly-determined processes; but that's precisely what I've said, only coming from the other direction.
Formal logic, mathematics, informal deductive reasoning, algorithmics etc. are all interchangeable for the purposes of my point, and usually also mutually translatable. Using any of them to model reality always yields a deterministic chain, even when probabilistic paths are involved, because one can always think of these as branching in a manner similar to MWI: starting from such-and-such probabilities (or likelihoods, if the question is about one's knowledge of the world rather than about the world itself), we end up with a causal tree, each of whose branches, when looked at backwards, forms a logical causal chain.
That's why free will cannot be modeled in terms of probabilities or likelihoods. Inserting an RNG into a logical chain only makes it more complex; it doesn't make it less deterministic, and again causes free will proper to disappear, as it's then reduced to mere randomness.
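A sketch of what I mean, with a made-up two-branch-point process: enumerating the outcomes MWI-style gives a tree, and every root-to-leaf path, read backwards, is a fixed chain. The RNG only adds branches; it never adds anything that isn't deterministic in hindsight.

```python
from itertools import product

# Each "RNG insertion" is just a branch point with fixed alternatives.
branch_points = [
    (("heads", 0.5), ("tails", 0.5)),  # first random link
    (("even", 0.5), ("odd", 0.5)),     # second random link
]

# Enumerate the full causal tree: more RNGs means more leaves,
# but each individual path remains a strictly determined sequence.
for combo in product(*branch_points):
    path = [label for label, _ in combo]
    weight = 1.0
    for _, p in combo:
        weight *= p
    print(path, weight)  # e.g. ['heads', 'even'] 0.25
```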
"Probably most ambitious people are starved for the sort of encouragement they'd get from ambitious peers"
This, I think, is one of the roots of smart people getting into weird stuff. Contrarians, counter-cultural types, conspiracy theorists (the inventors, not the believers) and the like are usually very smart; they just don't optimize their smarts in a good direction, so a newly minted smart person will feel attracted to them. The end result is very suboptimal communities of smart individuals going off in all kinds of weird directions.
That's my case, mind you. Finding the rationalist community has helped me put brakes on some of my weirdest aspects, but by no means on all of them. Which might or might not be smart of me; no idea yet at this point.
A fundamental difficulty in thinking logically about free will is that it involves thinking logically.
Logic, by its very nature, has embedded, as its most essential hidden premise, a deterministic structure. This makes all reasoning chains, no matter their subject (including this one), deterministic. In other words, a deterministic structure is imposed upon the elements to be logically analyzed so that they can be logically analyzed.
If one ignores that this structure is present as the very first link in the chain, and then proceeds to analyze the entire chain minus this hidden first premise in an attempt to determine what can be abstracted out of it, one incurs an involuntary 'begging the question' and concludes that all elements present in the chain, and all their mutual relations, are strictly deterministic -- and, by extension, that free will doesn't exist in reality, when the most we can actually say is that free will doesn't exist as a deduced link within deterministically structured logical reasoning chains.
Notice that this doesn't preclude free will from being part of deterministically structured logical reasoning chains; it only says where within them free will cannot be present. It can still be present as an irreducible axiomatic premise, an "assuming free will exists..." used to reach further deductions. But that's it. Any attempt at moving it from the position of an axiom down into the chain proper will invariably fail, because the chain itself doesn't admit of it.
I wonder if more positive encounters would help gradually change the bias, also for your own well-being (...)
Ah! I have plenty of extremely positive experiences with black people: black friends, coworkers, acquaintances, (awesome!) teachers, college friends. For me, people are all individuals, no exception, and I can't think in terms of groups or collectivities even when I try forcing myself to. As such, I have always been extremely careful not to allow this irrational trigger to affect anything real, which is why I described this quirk as "extremely annoying". It'd be an easy but deeply flawed pseudo-solution to keep the problem at bay by distancing myself from situations that trigger it, but I refuse to do that.
If it helps to visualize it, imagine walking around and suddenly noticing a tiger looking at you, growling at its signature 18 Hz, or a snake raising its head. Your body would react in a split instant, much faster than your conscious mind registers it, by pumping you with adrenaline to maximize your chances of survival. That, more or less, is what happens, so the most I can do, and this I make myself do all the time, is to forcefully shut the adrenaline pump down once it opens, and carry on as if it hadn't opened. The mechanism by which it opens, though, is beyond my conscious control, and while familiarity reduces its triggering, it unfortunately doesn't fully eliminate it.
Which is why I linked it to PTSD. When a person suffers a trauma and develops PTSD, their brain physically rewires as a defense mechanism. Barring some very experimental psychotropic treatments currently being researched, this physical rewiring cannot be reversed. It can at most be eased; fully reversed, not yet, no.
Which subcultures are these?
The furry fandom and the otherkin community here in Brazil.
It's okay if you don't want to answer.
Nah, I'm an open book. I make a point of not keeping secrets unless absolutely necessary. There's no risk in doxing if you yourself provide the doxa beforehand. ;-)
I would indeed be interested in your mention of this sort of thing having "changed in a bad way".
Well, in my case it came from being robbed. By my late teens / early adulthood I had been robbed four times, which wasn't uncommon in the region of Brazil I lived in at the time (crime rates have diminished a lot in the intervening decades). Of those, three were by black thieves, blacks being a heavily discriminated-against group here, even if not as much as in the US. The third time caused in me what I suppose I could describe as a "micro-PTSD": from that day on, my System 1 began making me acutely aware, in a fight-or-flight manner, of the presence of unknown black people around me, something that didn't happen before.
This is extremely annoying, to say the least. No matter how much I want to turn off this trigger, it remains "there", unconsciously activating whenever I'm distracted from actively suppressing it at the System 2 level. That said, over time I've managed to learn to suppress it very quickly, though I still worry on occasion that it may not be quick enough, that the person at whom it triggered will notice that split-second spark of irrational fear in my eyes before I can consciously force it off.
On the not quite bright side, gaining this trigger made me understand how racial biases develop and perpetuate. But I still would have very much preferred to never have gained it to begin with.
I'm not sure what it means for a newborn to be transgendered.
Over the last two to three decades many clinical studies have scanned the brains of transgendered individuals. Brain regions have been identified that mark brains as clearly masculine, feminine, or somewhere in between, and transgendered individuals' brains show the properties typical of the other sex's brains, meaning trans women have structurally female brains in male bodies, and trans men have structurally male brains in female bodies. You can find a fairly comprehensive list of papers on this in the Causes of Transsexuality Wikipedia article. Additionally, gender dysphoria is characterized, as I see it, by a clear mismatch between body shape and the homunculus, which further points to transgenderism being a neurological fact.
The 1:20,000 factor comes from the prevalence of gender dysphoria in adults, that is, from this brain/body mismatch. This paper refers to different studies and their ranges, some finding a prevalence as low as 1:100,000, others one as high as 1:10,000:
- Kenneth J. Zucker & Anne A. Lawrence (2009) Epidemiology of Gender Identity Disorder: Recommendations for the Standards of Care of the World Professional Association for Transgender Health, International Journal of Transgenderism, 11:1, 8-18, DOI: 10.1080/15532730902799946
In the US roughly 1/300 identify as transgender and in the rationality community maybe 1/30.
I'm not aware of these numbers, but it wouldn't surprise me if there's a conceptual confusion between being transgender in the strict, biological brain-vs-body sense, and being gender non-conforming. In my case, I'm behaviorally gender non-conforming, having a very high number of stereotypically female traits (I've been described as "very androgynous", with one person saying I was "the most androgynous person" they'd ever met), but in terms of brain-body matching I'm clearly cis male, experiencing no gender dysphoria of any sort. Therefore, I don't consider myself transgendered, although, yes, I can see how there might be a use case for making the word encompass both strict biological transgenderism and gender non-conformance.