Comment by alexgieg on GPT-Augmented Blogging · 2021-09-14T17:38:36.492Z · LW · GW

"The Worst Mistake in the History of Ethics"

I'm curious what GPT-3 would output for this one. :-)

PS: And I have my own answer for that: Aristotle's development of the concept of eudaimonia, "the good life", meaning the realization of all human potential. For him it was such a desirable outcome, so valuable, that its existence justified slavery, since the many working allowed a few to realize it. Advance 2,400 years of people also finding it incredibly desirable, and we end up with, among others, Marx and Engels defending revolutionary terror, massacres, and mass political persecution so that it could be realized for all, rather than for a few.

Comment by alexgieg on Prefer the British Style of Quotation Mark Punctuation over the American · 2021-09-11T22:52:50.884Z · LW · GW

I personally think quotation-over-punctuation would solve this nicely. Here's an example from someone who managed to have his TeX documents do exactly that:

Overlapping quotes with periods and commas

Comment by alexgieg on Prefer the British Style of Quotation Mark Punctuation over the American · 2021-09-11T14:49:21.012Z · LW · GW

Minor curiosity: originally, back in the old printing days, quotation marks went neither before nor after punctuation marks, but above them. After all, a quotation mark is a half-height symbol with empty space below it, and a punctuation mark is a half-height symbol with empty space above it, so both merged well into a single combined glyph, saving space.

When movable type entered the picture, almost no type set had unified quotation+punctuation sorts, so both were physically distinct symbols that needed an ordering when placed on the printing board. Over time the US mostly settled on punctuation-then-quotation, while most other countries went mostly with quotation-then-punctuation -- which on further analysis (and later with programming languages) proved more sensible.

Nowadays with modern Unicode ligatures we could easily go back to quotation-over-punctuation for display purposes, while allowing the writing to be either way, but I suppose after 200 years of printing these glyphs separately no one has much interest in that.

Comment by alexgieg on Pleasure and Pain are Long-Tailed · 2021-09-09T19:09:07.779Z · LW · GW

I'm intrigued – google gives only porn videos as search results.

The tongue is very sensitive. A very skilled kisser knows how to intensely stimulate the top of their partner's tongue with theirs while French kissing, to the point one or both of them get a very specific kind of orgasm different from any other. In my case I got spasms while washed in endorphins, which took several minutes to subside. :-)

Also, I assume you mean a P-spot orgasm when you say "female orgasm"?

No, I mean an actual female orgasm. I can provide exactly zero evidence for this, which on LW is a particularly huge no-no, but if mentioning a bit of mystic experience isn't too much of a problem, I can say there are Tantra masters out there who can induce some pretty interesting experiences in suitable students. One of these, in male-bodied students, is having a full set of phantom-limb representations of female genitalia, complete with the mental experience of female orgasms (and likewise of male genitalia in female-bodied students). This is linked to advanced Karmamudrā techniques.

Comment by alexgieg on Pleasure and Pain are Long-Tailed · 2021-09-09T18:54:37.603Z · LW · GW

Ditto, or more precisely, no one from my graduating class had any interest in paying for one, so we all got our certificates by mail. I suppose it helps that almost everyone was 30+, and the major was Philosophy, neither of which predisposes one to care much about such things, much less when put together.

Comment by alexgieg on Pleasure and Pain are Long-Tailed · 2021-09-09T16:15:56.690Z · LW · GW

Looking at the pain scale, I guess I'm somewhat atypical. On the pleasurable experiences I had, I'd order them such:

  • 0.0: College graduation (I haven't really felt it as anything special)
  • 0.2: Alcohol consumption (but I haven't gotten really drunk)
  • 1.0 to 3.0: Male orgasm (kinda meh most of the time, sometimes good)
  • 2.0: Tongue orgasm from a skilled kisser
  • 4.0 to 6.0: Female orgasm (the first one is 4.0, successive ones being more and more intense until it plateaus at 6.0 on the 8th orgasm or so)

(Yes, I've had the last one despite being 100% a cis-male. Let's attribute it to "the magics" and leave it at that.)

And on the pain scale, the worst toothache I've ever had was way stronger than when my gallbladder was almost rupturing, so I think it'd go like this:

  • 1.0: ear infection
  • 1.0 to 3.0: toothache, lower back pain
  • 2.5: gallbladder going kaput
  • 3.0: the most impactful death in the family
  • 4.0: heartbreak
Comment by alexgieg on Assigning probabilities to metaphysical ideas · 2021-09-09T15:33:05.224Z · LW · GW

That depends. Several metaphysical systems develop ontologies, with concepts such as "objects" and "properties". Couple that with the subfield of Applied Metaphysics, which informs other areas of knowledge by providing systematic means to deal with those foundations, and it's no surprise that one such application, several steps down the line, was the development of object-oriented programming, with its "objects possessing properties" ordered in "ontologies" via inheritance, interfaces, and the like.
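To make the parallel concrete, here's a minimal sketch of an "ontology" in the object-oriented sense; all class names and properties are purely illustrative, not taken from any particular metaphysical system:

```python
# A toy OOP ontology: "objects" possess "properties", and classes are
# ordered hierarchically via inheritance ("is-a" relations).

class Entity:
    """Root of the ontology: anything that exists."""
    def properties(self):
        # Every instance attribute plays the role of a "property".
        return dict(vars(self))

class PhysicalObject(Entity):
    def __init__(self, mass_kg):
        self.mass_kg = mass_kg

class LivingBeing(PhysicalObject):
    def __init__(self, mass_kg, alive=True):
        super().__init__(mass_kg)
        self.alive = alive

cat = LivingBeing(mass_kg=4.2)
print(isinstance(cat, Entity))  # True: the "is-a" relation of the ontology
print(cat.properties())         # {'mass_kg': 4.2, 'alive': True}
```

The inheritance chain (`LivingBeing` is a `PhysicalObject` is an `Entity`) mirrors the way ontologies nest categories under more general ones.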

Comment by alexgieg on [Review] Edge of Tomorrow (2014) · 2021-09-08T19:08:30.828Z · LW · GW

Thanks! And done! :-)

Comment by alexgieg on [Review] Edge of Tomorrow (2014) · 2021-09-08T18:00:21.593Z · LW · GW

I've tried adding spoiler tags, but it isn't working. According to the FAQ, for Markdown it's three colons and the word "spoiler" at the beginning, followed by three colons at the end, but no luck. Any suggestions?

Comment by alexgieg on [Review] Edge of Tomorrow (2014) · 2021-09-08T17:49:21.905Z · LW · GW

I think that was the one, yes. It's been years and I forgot the name.

I'll add the tags, thanks!

Comment by alexgieg on [Review] Edge of Tomorrow (2014) · 2021-09-08T16:01:30.006Z · LW · GW

There's a Naruto fanfic (much better than the actual manga, mind) with this trope, except the author adds a cool extra at the end. In that, it turns out one with looping power only goes back to the same point in time because

they haven't learned how to set a new, so to speak, "save point". This mechanic became clear to the characters after they had decades of experience in child bodies, so that they began to carefully plan the world they wanted to have, and exhaustively time looped until they managed to set things perfectly aligned for the next stage of their plan, at which point they "saved", and went for it.

Comment by alexgieg on Assigning probabilities to metaphysical ideas · 2021-09-08T15:54:00.872Z · LW · GW

Those aren't metaphysical. Metaphysics is a well-defined philosophical research field.

Comment by alexgieg on Assigning probabilities to metaphysical ideas · 2021-09-08T15:51:34.761Z · LW · GW

To complement @Dagon's comment, another difficulty is that Skepticism itself is also a philosophical model, which can be taken either as merely epistemological or as a metaphysical model unto itself, so the initial 1:1 model actually gives Skepticism a 50% prior vs. all other models combined. And then we have some relatively weird models such as Nominalism, which is metaphysically skeptical except for affirming, atop a sea of complete no-rules free-formness, the absolute will of an absolute god who decides everything just because.

Fun detail: my Philosophy major followed a method called "monographic structuralism" that consists in learning each philosopher's system as if we were devout followers of theirs (for the class duration). The idea was that before opining on this or that philosophical problem it was worth knowing that philosopher's arguments and reasoning as well as they themselves did. So one studied philosopher A enough to argue perfectly for his ideas, finding them perfectly self-consistent from beginning to end and from top to bottom; then studied philosopher B similarly; then philosopher C, ditto; and so on and so forth, which invariably led one to learn two philosophers who said the exact opposite of each other while still being perfectly self-consistent -- at which point one threw their hands up and concluded the issue to be strictly undecidable. In the end most students, or at least those who stuck with the major long enough, became philosophical skeptics. :-)

Comment by alexgieg on Review: Foragers, Farmers, and Fossil Fuels · 2021-09-03T15:36:39.398Z · LW · GW

This was extremely informative! Thank you!

A few points I'd like to comment on:

"So eager were poor farmers for dirty, dangerous factory jobs (...)"

There's an underlying question of why those farmers were that poor and in such dire need of those factory jobs. One answer I've seen was in Hilaire Belloc's 1912 book The Servile State, one of the first books of the Distributist school of economics. According to him, the end of the feudal system in England, and its turning into a modern nation-state, involved among other things the closing off and appropriation, by nobles as a reward from the kingdom, of the common farmlands the peasants had farmed on, as well as the confiscation of the lands owned by the Catholic Church, which for all practical purposes had also served as common farmland. This resulted in a huge mass of farmers with no access to land, or only very diminished access, who in turn, decades later, became the proletarians for the newly developing industries. If that's accurate, then it may be the case that the Industrial Revolution wouldn't have happened had all those poor not existed, since the very first industries wouldn't have been attractive compared to the conditions non-forcibly-starved farmers had.

"By making wage labour attractive enough to draw in millions of free workers, higher wages made forced labor less necessary, and because impoverished serfs and slaves—unlike the increasingly prosperous wage labourers—could rarely buy the manufactured goods being churned out by factories, forced labour increasingly struck business interests as an obstacle to growth (especially when it was competitors who were using it)."

This is a common narrative about how chattel slavery came to an end, to the point it even sounds like common sense by now, but I haven't actually seen strong evidence for this interpretation. Maybe this evidence exists and it's just a matter of someone pointing it out for me, but so far I know three points of divergence about this narrative:

  1. Forced labor ended once before. During the Middle Ages, as complex farming hierarchies and belief systems developed in the centuries following the fall of the Roman Empire, the descendants of the slaves of the former Roman villas-turned-fiefdoms slooowly gained more and more customary legal rights in their process of becoming serfs, rights feudal lords rarely refused them lest doing so hit their reputations hard. By the Late Middle Ages this process had made serfs, while technically still property most everywhere, in practice free, with some places having outright forbidden literal slavery altogether by as early as the 12th century.

  2. This was quite clearly recognized as such by the Catholic Church, which, once the new nation-states began their Great Navigations and restarted the once mostly abandoned practice of enslavement all over again, began to periodically issue papal bulls heavily condemning enslavers, the earliest of which appeared in the 16th century. Not that the Church had effective power on the matter; all it could do was tell enslavers they were going to Hell, a threat enslavers clearly gave little attention to. But this at the very least shows that, culturally, there was a strong anti-enslavement force in place amidst all that European agrarian ethos, one that kept advancing in parallel with, and despite, the nation-states' renewed push for slavery.

  3. This cultural force finally cascaded when, in the late 18th century, religious-based political abolitionist associations began developing and lobbying for the end of slavery and, in a mere 50 years, turned England from a heavy promoter of slavery into a country that spent huge amounts of money and military resources to hunt enslavers worldwide.

Notice that, while point 3 overlaps with the Industrial Revolution, the causality here would seem to me to be the opposite of how it's usually depicted, that is, with abolitionism having helped to advance industrialization as an unintended side effect of its ideals cascading into practice, and not the other way around. Which, evidently, doesn't prevent the usual narrative from being valid in other places, that is, countries in which slavery was still well accepted finding themselves forced, first militarily, then technologically, and finally economically, to adapt or perish. But the former case seems to me to have been the more prevalent, in the West at least, what with the Civil War in the US, and enlightened royals voluntarily giving up their crowns to end slavery on moral grounds.

Over millennia, such societies either had their tricks independently discovered or copied by others, or outright went on the warpath to subjugate other societies to their rule – and, of course, preach their values, which (given human adaptability) they held sincerely, and with no idea that they thought differently from their distant ancestors.

I think at least some recognized quite clearly that they thought differently. I don't remember where I got this information, I think it was in Karen Armstrong's Muhammad: A Biography of the Prophet, but I distinctly remember reading about how, when Muhammad was young, he was sent by his uncle to live among nomads for a few years, as this was considered part of a proper education back then, precisely because nomads were seen as the preservers of the old ways, keepers of strict adherence to proper moral values and work ethics, and thus excellent examples for a young, impressionable mind compared to the lazy, inferior morals developed in the sedentary lifestyle of farms and villages (yes, laboring 12+ hours a day under backbreaking conditions was considered sedentary).

Now, while foragers and nomads aren't the same category of wandering people, it'd seem to me that there was an awareness of the cultural differences between those who lived from the land and those who didn't, in at least a roughly similar way to how those living in, and fully inserted into, modern, huge metropolitan areas nowadays are aware of the cultural differences between themselves and those living in the country.

(...) was the centralisation-vs-decentralisation tradeoff really so simple in the farming era that "godlike kings everywhere" was the only effective answer?

Perhaps it was seen as such by those involved. One interesting reference point is given in the Bible.

1 Samuel 8 narrates how at one point the Hebrews, envying their surrounding countries having kings, decided they wanted one too, so they demanded prophet Samuel to crown one. Samuel disliked this, prayed to God, and God told him to warn their fellow countrymen of all the very-bad-things that having a kingdom would result in (verses 11-18):

"This is what the king who will reign over you will claim as his rights: He will take your sons and make them serve with his chariots and horses, and they will run in front of his chariots. Some he will assign to be commanders of thousands and commanders of fifties, and others to plow his ground and reap his harvest, and still others to make weapons of war and equipment for his chariots. He will take your daughters to be perfumers and cooks and bakers. He will take the best of your fields and vineyards and olive groves and give them to his attendants. He will take a tenth of your grain and of your vintage and give it to his officials and attendants. Your male and female servants and the best of your cattle and donkeys he will take for his own use. He will take a tenth of your flocks, and you yourselves will become his slaves. When that day comes, you will cry out for relief from the king you have chosen, but the Lord will not answer you in that day."

This suggests the system of government that existed before didn't do those things. That system, called Judging, isn't well known, but I remember a historian once explaining that it was very decentralized. If I remember right, political power was intermittent and an all-or-nothing proposition: some families had generational military duties that included, but only in war times, absolute power for the purposes of defense against external aggression. In times of peace, in contrast, those families had no power, having to tend to their lands and produce their own food or whatever by themselves, like everyone else. It therefore worked more as a loose, decentralized federation of micro-states that used militias for self-defense than as a big, integrated, centralized government with a permanent military force.

And yet, if there's any truth left in the story after centuries of retellings before it was put onto paper, the people saw their neighbors' centralization and really wanted a piece of that for themselves. Alas, the text doesn't dwell on their reasons, but if I were to venture a guess, it'd be that they saw their neighbors' effective, deployable armies as threatening, and saw centralization as a means to defend themselves more effectively despite the listed drawbacks.

Comment by alexgieg on [deleted post] 2021-09-01T20:11:59.909Z

I have the impression you're conflating the terms "freedom" and "democracy", both themselves quite broad. The contents of your post suggest what you're seeking is to live in a country that is a representative liberal democracy, whose electoral process results in specific representativeness quotients, as well as in other specific features. But that doesn't exactly overlap with any specific notion of "freedom", such as that of "true freedom", unless you were also to provide a specific definition of both.

I imagine you'll get a better response if you taboo the words "democracy", "freedom", and "true freedom", so as to restate what you're seeking in more objective, concrete terms.

Comment by alexgieg on [Sponsored] Job Hunting in the Modern Economy · 2021-09-01T19:51:31.271Z · LW · GW

I can vouch for Aigent's effectiveness! It even helps with hobbies! Why, over the last month it earned me about +30 karma on LW alone!

Powered by Aigent® Free. More smarts, less effort!™

Comment by alexgieg on Altruism Under Extreme Uncertainty · 2021-08-27T14:04:59.083Z · LW · GW

About this:

People reproduce at an exponential rate. The amount of food we can create is finite. Population growth will eventually outstrip production. Humanity will starve unless population control is implemented by governments.

The calculation and the predictions were correct until the 1960s, including very gloomy views that wars over food would begin by the 1980s. What changed things was the Green Revolution. Were it not for this technological breakthrough, which no one could actually have predicted, right now we might be looking back at 40 years of wars, plenty more dictatorships and authoritarian regimes all around, some going to war repeatedly against their neighbors, others with long-running one-child policies of their own.

So, in addition to the points you made, I'd add that many times uncertainty comes from "unknown unknowns" such as not knowing what technologies will be developed, while at other times it comes from hoping certain technologies will be developed, betting on them, but then those failing to materialize.

Is it worth acting when you're comparing a 0.051% chance of doing good to a 0.049% chance of doing harm?

I'd say Chesterton's Fence provides a reasonable heuristic for such cases.

Comment by alexgieg on For me the Christianity deal-breaker was meekness · 2021-08-27T12:34:32.199Z · LW · GW

You're welcome. There's a stronger continuity if you look at pre-modern Catholicism and Orthodoxy, but yes, Christianity changed a lot over time.

By the way, something that may help you locate your own personal moment in your relation towards the religious teachings you received are in light of Piaget's theory of cognitive development, Kohlberg's theory of stages of moral development, and Fowler's theory of stages of faith development, as these helped me understand my own. They build one atop the other in this same sequence, Fowler's depending on Kolhberg's, which in turn depends on Piaget's, so it's important to read the 3 links in the order provided.

Comment by alexgieg on For me the Christianity deal-breaker was meekness · 2021-08-26T18:39:22.142Z · LW · GW

There is an element of submission, but originally it meant submission of the will to the knowledge of those who know better, even when what they say goes counter to your base interests.

For example, going back to praus/taming/meekness, one reference Jesus uses is that of his "yoke" being easy and its load light. A yoke is a U-shaped bar used to fix two draft animals together so they can pull loads together. One way animal trainers back then (and maybe still today) trained an animal for a new job was to fix its neck on one side of a yoke, with a very experienced animal on the other. This way the trained animal, doing its well-practiced routine, leads the untrained one to learn the job much faster. So the idea here is that, by emulating the elders, the novice gets "there" much faster, and with much less difficulty, than he would by doing things on his own. Which, considering this is in the context of Iron Age societies, in which an established practice remained the state of the art for generations at a time, in general tended to be true.

Nowadays things change at such a fast pace that this isn't the case anymore, so there's a clear mismatch between what such a saying was meant to convey, that is, that one should listen to those who know better, and what one derives from it in a modern context, which depending on circumstances frequently ends up being the opposite.

It's worth noting that Paul teaches the exact same thing in a much more straightforward way, one still understandable verbatim today, when he said it's good to learn about everything and then prudentially choose what to actually use from all one has learned. A huge number of Christians definitely don't do that, preferring instead to practice the misinterpreted version of the "yoke" metaphor.

Comment by alexgieg on For me the Christianity deal-breaker was meekness · 2021-08-26T17:12:30.845Z · LW · GW

The English word "meek" is a problematic translation of the original Greek "praus". Praus refers to a wild animal that's been tamed, the connotation being that such a person hasn't lost the virtue of strength of their wild nature, but has added to it the virtue of civilized interaction, similar to how a tamed animal learns to do things its wild counterparts never would.

This links to several other similar notions spread through the New Testament. For example, when Jesus:

a) Tells his disciples to be "harmless as doves" but "wise as serpents";

b) When he orders them to first go around and learn to preach without carrying weapons, thus having to resort to fleeing when threatened, and then, after they managed to do that, instructs them to arm themselves with swords, the implication being that now they have the experience needed to know when violence can be dispensed with, and when it cannot;

c) Or when he teaches them to turn the other cheek, which is also quite misunderstood nowadays. Back then, when a person of higher social standing wanted to deeply offend someone of lower social standing, they slapped them with the back of their hand. By showing such a person "the other cheek" they couldn't use that movement, and were forced to slap you with the palm of their hand, a gesture reserved for challenging someone of their own social standing, which most wouldn't dare do.

In short, such expressions have a connotation of deliberately restraining one's own savagery, but not letting it go, so that others may know that, while you're fine and good and helpful, you aren't weak, and aren't to be trifled with. A connotation that more often than not is lost in translation.

Comment by alexgieg on Philosophy Web - Project Proposal · 2021-08-25T16:14:07.937Z · LW · GW

Regarding 1 and 3, good points, and I agree.

On 2, when I say formalizable, I mean in terms of giving the original arguments a symbolic formal treatment, that is, converting them into formal logical statements. Much of non-analytic philosophy has to do with criticizing this kind of procedure. For an example among many, check this recent one from a Neo-Thomistic perspective (I refer to this one because it's fresh in my mind; I read it a few days ago).

On 4, maybe a practical alternative would be to substitute vaguer but broader relations, such as "agrees", "partially agrees", "disagrees", "purports to encompass", "purports to replace", "opposes", "strawmans", etc., for the more restricted notions of truth values. This would allow for a mindmap-style set of multidirectional relations and clusterings.
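One way such a web might be modeled (this is purely a sketch of the data structure, with hypothetical names, not part of the actual proposal) is as a directed graph whose edges carry the relation label:

```python
# A mindmap-style web of philosophical positions: nodes are positions,
# edges are (source, relation, target) triples drawn from a fixed label set.
# All names and example entries are illustrative only.

RELATIONS = {"agrees", "partially agrees", "disagrees",
             "purports to encompass", "purports to replace",
             "opposes", "strawmans"}

class PhilosophyWeb:
    def __init__(self):
        self.edges = []  # list of (source, relation, target) triples

    def relate(self, source, relation, target):
        # Reject labels outside the agreed vocabulary.
        if relation not in RELATIONS:
            raise ValueError(f"unknown relation: {relation}")
        self.edges.append((source, relation, target))

    def relations_from(self, source):
        # Everything a given position says about the others.
        return [(r, t) for s, r, t in self.edges if s == source]

web = PhilosophyWeb()
web.relate("Husserl", "purports to replace", "Psychologism")
web.relate("Heidegger", "partially agrees", "Husserl")
print(web.relations_from("Heidegger"))  # [('partially agrees', 'Husserl')]
```

Because edges are labeled rather than boolean "refutes"/"is refuted by" links, the same pair of positions can be connected by several relations at once, which is what makes the mindmap-style clustering possible.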

Comment by alexgieg on Philosophy Web - Project Proposal · 2021-08-25T12:42:13.434Z · LW · GW

My comments:

  1. That's actually not the case. Analytic Philosophy is preeminent in the US and, to some extent, the UK. Everywhere else it's a topic one learns among others, usually in a secondary and subsidiary manner. For example, I majored in Philosophy in 2009. My university's Philosophy department, which happens to be the most important in my country and therefore the source of the vast majority of Philosophy undergraduates and graduates who then go on to influence other Philosophy departments, was founded by Continental philosophers, and remains almost entirely focused on that tradition, with a major French sub-department, a secondary German one, and some professors focusing on Classic and (Continental-style) English philosophers. In the Analytic tradition there was exactly one professor, whose area of research was Philosophy of Science.

  2. Formalization, of any kind, is mostly an Analytic approach. When one formalizes a Continental philosophy, it ceases being the original philosophy and becomes an Analytic interpretation of it, so not the original anymore. And there's a remarkable loss of content in such a translation.

  3. They have "experiences" and "perceptions". Husserl's project, for instance, was to re-found Philosophy in the manner of a science by insisting that the objects (in the proper Kantian meaning of the word) philosophers work upon be first described precisely, so that when two philosophers discuss them, they're talking about precisely the same thing, avoiding divergences due to ambiguities regarding the objects themselves. Phenomenology then, as Husserl understood it, was to focus on developing a full description of phenomena (perceived objects), and only afterwards philosophize about them. Phenomena, therefore, don't have opposites, since they're raw "objectively shared subjective perceptual descriptions", never concepts. Heidegger was a student under Husserl, so much of his work consists in describing phenomena. And those who followed both did the same, with many different emphases and methods, and mutual criticisms were more about aspects other phenomenologists hadn't noticed in this or that described phenomenon.

  4. I'll give an example of how hard that can be. In Buddhist logic there are five truth categories: true, false, true-and-false, neither-true-nor-false, and unitive. In Jain logic there are seven: true, false, undefined, true-and-false, true-and-undefined, false-and-undefined, and true-false-and-undefined. Philosophy Web, as I understand it at least, would focus strongly on opposite categories, that is, this is true therefore those are false, which is seen similarly from the others' perspectives, so other truth categories get sidelined. And that's without entering the topic of the many different Western dialectical methods, such as Hegel's, which has historically-bound, time-dependent truth-variability linked to the overcoming of oppositions.
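To make the mismatch concrete, here is a sketch (the labels follow the lists above; the modeling itself is purely illustrative) of those truth categories as enumerations; a web built on "this is true, therefore those are false" can only express the first two values of either system:

```python
# The Buddhist and Jain truth categories listed above, as plain enums.

from enum import Enum

class BuddhistTruth(Enum):
    TRUE = "true"
    FALSE = "false"
    TRUE_AND_FALSE = "true-and-false"
    NEITHER = "neither-true-nor-false"
    UNITIVE = "unitive"

class JainTruth(Enum):
    TRUE = "true"
    FALSE = "false"
    UNDEFINED = "undefined"
    TRUE_AND_FALSE = "true-and-false"
    TRUE_AND_UNDEFINED = "true-and-undefined"
    FALSE_AND_UNDEFINED = "false-and-undefined"
    TRUE_FALSE_AND_UNDEFINED = "true-false-and-undefined"

# A binary-opposition web covers only TRUE and FALSE, sidelining the rest:
print(len(BuddhistTruth), len(JainTruth))  # 5 7
```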

I don't mean to imply it wouldn't be a useful project though. I'm just pointing out its actual scope in practice will be narrower than your original proposal suggests.

Comment by alexgieg on Philosophy Web - Project Proposal · 2021-08-24T16:17:07.207Z · LW · GW

It seems to me this would work for Analytic Philosophy, but not for other philosophical traditions. For instance:

a. Continental Philosophy has, since Heidegger (or, arguably, Husserl) taken a turn away from conceptual definitions towards phenomenological descriptions, so anything concept-based is subject, as a whole, to all manners of phenomenological criticisms;

b. Classic Philosophy frequently isn't formalizable, with its nuclear terms overlapping in a very interdependent manner, the same applying to some Modern ones. Splitting them into separate concepts doesn't quite work;

c. And Eastern Philosophies have a strong tendency to operate apophatically, that is, through negation rather than affirmation of concepts, so that every nuclear term comprises a set of negations, resulting in a kind of mix of "a" and "b", with inverted signals.

In short, a Philosophy Web, as proposed, would be a specific kind of meta-philosophical effort, and since every meta-philosophy is itself a philosophy, thus subject to being marked as an item among others in alternative meta-philosophical taxonomies, as well as to being refuted from opposite methodologies, it wouldn't be able to encompass more than a specific subset of philosophical thinking.

Comment by alexgieg on A cognitive algorithm for "free will." · 2021-07-20T21:16:11.450Z · LW · GW

There are a few problems with that. One is that you just figured out how the universe works without examining the universe. Another is that you can't get MWI out of it...unless you regard it as a statement only about subjective probability.

I'm not sure I understood these two points. Can you elaborate?

The unstated part of the argument being that free will must be neither-deterministic nor probabilistic?

Actually, the stated part. It's in my original comment. Although maybe I wasn't as clear about it as I thought I was.

I know what "reductionism" means.

This isn't quite the same reductionism as understood in physics; it has to do with Whitehead's discussion of the problem of bifurcationism in nature (see the next block for details). In this context even a Jupiter-sized Culture-style AI Mind, orders of orders of magnitude more complex than a human brain, still counts as "physical reduction" in regards to "objective corporeality" if one assumes its computations capable of qualia perception.

The problem is that you haven't explained why reducing the qualia of free will disposes of free will, since you haven't explained why free will "is" the qualia of free will, or why free will (the ability as opposed to the qualia) can't be physically explained.

Free will is always perceived as qualia. You perceive it in yourself and in others, similarly to how you perceive any other qualia.

Any attempt at reducing it to the physical aspects of a being describes at most the physical processes that occur in/with/to the object in correlation with that qualia. Therefore, two philosophical options arise:

a) One may assume the qualia thus perceived is as fundamental as the measurable properties of the corporeal object, thus irreducible to those measurable properties, and that the corporeal object is thus a composite of both measurable properties and qualia properties.

In this scenario the set of the measurable properties of a corporeal object can be abstracted from it, forming a pseudo-entity, the "physical object", which is the object studied via mensuration, that is, via mathematical (and by extension logical) procedures and all they provide, among which are statistical and probabilistic methods. Any conclusion arrived at through them is then understood to describe the "physical object", which, being only part of the full corporeal object, makes any such conclusion partial by definition, as it never covers the entirety of the properties of the corporeal object; in particular, it never covers its qualitative properties, as all it ever covers are its quantitative properties.

b) Or one may assume the qualia thus perceived is a consequence of those measurable properties, reducible to them, and therefore the corporeal object is those measurable properties, that is, that the corporeal object and the physical object are one and the same.

The burden of proof for case "a" is much lighter than that of case "b". In fact, case "a" is the null hypothesis, as it corresponds to our direct perception of the world. Case "b", in contrast, goes against that perception, and therefore is the one that needs to provide proof of its assertions. In particular, in the case of free will, it'd need to identify all the measurables related to what's perceived as free will, then show with absolute rigor that they produce the perceived qualia of free will in something formerly devoid of it, and then, somehow, make that generated qualia perceptible as qualia to qualia-perceivers.

To use a classic analogy, even something much simpler, such as showing that the qualia "color red" is the electromagnetic range from 400 to 484 THz, cannot be done yet. Note that this isn't the same as showing that the qualia "color red" is associated with and carried by that EM range. For instance, if I close my eyes and think about an apple, I can access that qualia without a 400~484 THz EM wave hitting my eyes. As such, my affirmation that the qualia "color red" is distinct from the EM wave is straightforward and needs no further proof, while any affirmation asserting that the qualia "color red" is reducible, first, to the measurable physical property "400~484 THz EM wave", and second, to the measurable physical properties of neurons in a brain, is the one that needs thorough proof.

Until any such proof appears -- for colors, as the entry-level "easy" case, then for the much more difficult stuff such as free will -- opting for "a" or for "b" will remain an arbitrary preference, as the philosophical arguments for one and for the other cancel out.

That {QM}'s the best known example {of "indeterministic physics"}.

From the summary of the bifurcation problem I provided above I think it's clearer what I mean by indeterministic. From an "a" point of view QM is still entirely about physical objects, saying much about their measurable properties but nothing about their qualia. Hence, all it says is that some aspects of corporeal objects are fuzzy, the range of that fuzziness however being strictly determined, and that, if MWI is correct, even this fuzziness is more apparent than real, since what it is really saying is not that such physically measurable aspects are fuzzy, but rather that the physical object branches, very deterministically, in many ways.

Whether such "fuzziness within a determined range in a single world" or such "deterministic branching in many worlds" works as carriers for, or in correlation to, qualia properties of the full corporeal object, including but not limited to the free will qualia perceived by qualia-perceivers, is an entirely different problem, and there's no easy, straight jump from one domain to the other. I suppose there may be, but no matter how much physically measurable randomness properties one identifies and determines, there's still no self-evident link between this property of the physical object and the "free will" qualia of the full corporeal object.

You can conceivably have free will while having no qualia , or while having a bunch of qualia, but not that one.

From the above, you may have determinations in the form of single values, or of value ranges with inherent randomness, while having no qualia; but stating that these physical determinations imply having the "free will" qualia is a logical jump.

Taking from the "color red" example again, you may have an extremely energetic 400~484 THz EM wave, and yet no "color red" qualia at all for the simple lack of any qualia-perceiver in its path, or for the lack of any qualia-perceiver who however lacks the ability to extract a "color red" qualia from that carrier, or because the EM wave was absorbed by a black body etc.

Hence, while physically measurable randomness may be a "free will" qualia carrier, the lack of qualia perception would still result in the "free will" qualia carried by it being lost. Conversely, a qualia-perceiver may have free will even in the absence of the typical physical carrier of "free will" qualia, as in the analogous case of a mind capable of imagining the "color red" qualia despite the absence of its usual "400~484 THz EM wave" carrier.

Comment by alexgieg on A cognitive algorithm for "free will." · 2021-07-20T18:22:50.505Z · LW · GW

the branching structure as whole is deterministic, not that the branches are individually.

That depends on how you consider probabilities. One usual take, when it comes to concrete events, is that the probability of something that actually happened is 1.0, since it actually happened. Therefore, when you look at a sequence of causes and events backwards, that is, as history, this after-the-fact sequence is always strictly deterministic even if every single one of its links had a less-than-1.0 probability of happening before it actually happened in that specific way.

Maps aren't territories, even though territories are modelled with maps. Modelling isn't ontological identity.

Well, if you prefer that terminology, I can restate it this way: maps that only provide deterministic and/or probabilistic (which I understand as a superset of deterministic) nodes cannot deal with the neither-deterministic-nor-probabilistic features of the territories they're trying to map.

To provide an example: a map that only provides RF frequencies says nothing of colors unless it also maps the connection of RF frequencies to colors, via visual cortexes and all the associated biological organs, and provides primitives for the qualia of those colors.

It's not obvious that being reducible to physics is the same as being reducible to deterministic physics,

Sorry, I wasn't clear. "Physical reducibility" is a technical expression that refers to the philosophical assumption that the whole of a concrete object, that is, both its quantitative properties and its qualitative properties, arises exclusively from its quantitative properties; in other words, that a concrete object is "nothing but" the physical object.

it's not obvious that indeterministic physics can't support free will,

I'm not sure what you mean by "indeterministic physics". Do you mean QM?

and it's not obvious that you need a quale of free will to have free will. Just as you can live and die without knowing you have a spleen.

I'm not sure I understand this point either. Are you referring to philosophical zombies?

Comment by alexgieg on A cognitive algorithm for "free will." · 2021-07-20T17:37:35.412Z · LW · GW

That's a contradiction in terms

Not really. The sentence you split forms a single reasoning. The first part is the claim, the second is the justification for the claim. You can read them in reverse if you prefer, which would give it a more syllogistic form.

Which? Logical or causal?

Both, since causal determinism is logically modelled. More specifically, causal determinism is a subset and a consequence of logical determinism, which is inherent to all forms of logical reasoning, including this one.

In any case, the point of causal determinism is that there is only one possible outcome to a state, ie. only one path going forwards. / If you mean an RNG as opposed to a pseudo RNG, yes it does make it less deterministic.

That's precisely what MWI and similar notions disagree with. But yes, if we assume a single world, then the consequence is one of the alternatives, and none of the others.

Huh? That's not generally acknowledged. / That is not universally acknowledged.

True. I'm arguing against the generally acknowledged view. My position is based on traditional, non-physically-reducible, qualia-based concepts of free will as present in, e.g., Aquinas and Aristotle.

Evidently, if one assumes all qualia are physically reducible, then free will as such doesn't exist and is a mere subjective interpretation of deterministic and/or randomly-determined processes, but that's precisely what I've said, except coming from the other direction.

Comment by alexgieg on A cognitive algorithm for "free will." · 2021-07-20T16:51:44.132Z · LW · GW

Formal logic, mathematics, informal deductive reasoning, algorithmics etc. are all interchangeable for the purposes of my point, and usually also mutually translatable. Using any of them to model reality always yields a deterministic chain even when probabilistic paths are involved, because one can always think of these as branching in a manner similar to MWI: starting from such and such probabilities (or likelihoods, if the question is about one's knowledge of the world rather than about the world itself) we end up with a causal tree, each of whose branches, when looked at backwards, forms a logical causal chain.

That's why free will cannot be modeled in terms of probabilities or likelihoods. Inserting an RNG in a logical chain only makes it more complex, it doesn't make it less deterministic, and again causes free will proper to disappear, as it's then reduced to mere randomness.

Comment by alexgieg on Re: Competent Elites · 2021-07-15T18:34:51.654Z · LW · GW

"Probably most ambitious people are starved for the sort of encouragement they'd get from ambitious peers"

This, I think, is one of the roots of smart people getting into weird stuff. Contrarians, counter-cultural types, conspiracy theorists (the inventors, not the believers) and the like are usually very smart, they just don't optimize their smarts in a good direction, so a newly minted smart person will feel attracted to them. The end result is very suboptimal communities of smart individuals going in all kinds of weird directions.

That's my case, mind you. Finding the rationalist community has helped me put brakes on some of my weirdest aspects, but by no means on all of them. Which might or might not be smart of me; no idea yet at this point.

Comment by alexgieg on A cognitive algorithm for "free will." · 2021-07-15T18:02:21.912Z · LW · GW

A fundamental difficulty in thinking logically about free will is that it involves thinking logically.

Logic, by its very nature, has embedded as its most essential hidden premise a deterministic structure. This makes all reasoning chains, no matter their subject (including this one), deterministic. In other words, a deterministic structure is imposed upon the elements that will be logically analyzed so that they can be logically analyzed.

If one ignores that this structure is present as the very first link in the chain, and then proceeds to analyze the entire chain minus this hidden first premise in an attempt to determine what can be abstracted out from it, one incurs an involuntary 'begging the question' and concludes that all elements present in the chain, and all their mutual relations, are strictly deterministic. And, by extension, that free will doesn't exist in reality, when the most we can actually say is that free will doesn't exist as a deduced link within deterministically structured logical reasoning chains.

Notice that this doesn't preclude free will from being part of deterministically structured logical reasoning chains, it only says where free will cannot be present. It can still be present as an irreducible axiomatic premise, an "assuming free will exists..." used to reach further deductions. But that's it. Any attempt at moving it from the position of an axiom down into the chain proper will invariably fail because the chain itself doesn't admit of it.

Comment by alexgieg on [Letter] Imperialism in the Rationalist Community · 2021-06-25T18:24:06.777Z · LW · GW

I wonder if more positive encounters would help gradually change the bias, also for your own well-being (...)

Ah! I have plenty of extremely positive experiences with black people, from black friends, to coworkers, to acquaintances, to (awesome!) teachers, to college friends. For me, people are all individuals, no exception, and I cannot think in terms of groups or collectivities even if I tried forcing myself to do so. As such, I have always been extremely careful not to allow this irrational trigger to affect anything real, and this is why I described this quirk as "extremely annoying". It'd be an easy but deeply flawed pseudo-solution to keep the problem at bay by distancing myself from situations that trigger it, but I refuse to do that.

If it helps to visualize it, imagine walking around and suddenly noticing a tiger looking at you, growling at their signature 18 Hz, or a snake raising its head. Your body would react in a split instant, much faster than your conscious mind registers it, by pumping you with adrenaline in order to maximize your chances of survival. That, more or less, is what happens, so the most I can do, and this I make myself do all the time, is to forcefully shut the adrenaline pump down once it opens, and carry on as if it hadn't opened. The mechanism by which it opens, though, is beyond my conscious control, and while familiarity reduces its triggering, it unfortunately doesn't fully eliminate it.

Which is why I linked it to PTSD. When a person suffers a trauma and develops PTSD, their brain physically rewires as a defense mechanism. Barring some very experimental psychotropic treatments being currently researched, this physical rewiring cannot be reversed. It can at most be eased, but fully reversed, not yet, no.

Comment by alexgieg on [Letter] Imperialism in the Rationalist Community · 2021-06-25T17:35:40.710Z · LW · GW

Which subcultures are these?

The furry fandom and the otherkin community here in Brazil.

It's okay if you don't want to answer.

Nah, I'm an open book. I make a point of not keeping secrets unless absolutely necessary. There's no risk in doxing if you yourself provide the doxa beforehand. ;-)

Comment by alexgieg on [Letter] Imperialism in the Rationalist Community · 2021-06-25T14:42:04.799Z · LW · GW

I would indeed be interested in your mention of this sort of thing having "changed in a bad way".

Well, in my case it came due to robbery. Until my late teens / early adulthood I was robbed four times, which wasn't uncommon in the region of Brazil I lived in at the time (crime rates have diminished a lot in the intervening decades). Of those, three were by black thieves, blacks being a very discriminated-against group here, even if not as much as in the US. The third time caused in me what I suppose I could describe as a "micro-PTSD", because from that day on my System 1 began making me acutely aware, in a fight-or-flight manner, of the presence of unknown black people around me, something that didn't happen before.

This is extremely annoying, to say the least. No matter how much I want to turn off this trigger, it remains "there", unconsciously activating whenever I'm distracted from actively suppressing it at the System 2 level. That said, over time I've managed to learn to suppress it very quickly, but I still worry on occasion that it may not be quick enough, that the person at whom it triggered will notice that split-second spark of irrational fear in my eyes before I can consciously force it off.

On the not quite bright side, gaining this trigger made me understand how racial biases develop and perpetuate. But I still would have very much preferred to never have gained it to begin with.

Comment by alexgieg on [Letter] Imperialism in the Rationalist Community · 2021-06-25T14:20:03.455Z · LW · GW

I'm not sure what it means for a newborn to be transgendered.

Over the last two to three decades many clinical studies have been conducted scanning the brains of transgendered individuals. Brain regions have been identified that mark brains as clearly masculine, feminine, or somewhere in between, and transgendered individuals' brains show the properties typical of the other sex's brains, meaning trans women have structurally female brains in male bodies, and trans men have structurally male brains in female bodies. You can find a fairly comprehensive list of papers on this in the Causes of Transsexuality Wikipedia article. Additionally, gender dysphoria is characterized, as I see it, by a clear mismatch between body shape and the homunculus, which further points to transgenderism being a neurological fact.

The 1:20,000 factor comes from the prevalence of gender dysphoria in adults, that is, from this brain/body mismatch. This paper refers to different studies and their ranges, some finding a prevalence as low as 1:100,000, others one as high as 1:10,000:

  • Kenneth J. Zucker & Anne A. Lawrence (2009) Epidemiology of Gender Identity Disorder: Recommendations for the Standards of Care of the World Professional Association for Transgender Health, International Journal of Transgenderism, 11:1, 8-18, DOI: 10.1080/15532730902799946

In the US roughly 1/300 identify as transgender and in the rationality community maybe 1/30.

I'm not aware of these numbers, but it wouldn't surprise me if there's a conceptual confusion between being transgender in the strict, biological brain vs. body sense, and being gender non-conformant. In my case, I'm behaviorally gender non-conformant, having a very high number of stereotypically female traits (I've been described by people as "very androgynous", with one saying I was "the most androgynous person" they've ever met), but in terms of my brain-body matching I'm clearly cis male, experiencing no gender dysphoria of any sort. Therefore, I don't consider myself transgendered, although, yes, I can see how there might be a use case in making the word encompass both strict biological transgenderism and gender non-conformance.

Comment by alexgieg on [Letter] Imperialism in the Rationalist Community · 2021-06-25T13:30:49.139Z · LW · GW

Thanks, that's very nice to know!

I'm involved in subcultures with an even higher proportion of transgendered people, being relatively fluid myself, so it's always nice to find other contexts in which transgendered individuals have higher representation than they do in the general population.

Comment by alexgieg on [Letter] Imperialism in the Rationalist Community · 2021-06-25T13:24:27.475Z · LW · GW

I wish the LW team would prioritize thinking about how to enable such discussions to happen more safely on LW

One way to do this would be to create a tag for socially risky topics, and make posts marked as such visible only when logged in, and only to accounts that have existed for more than 't' time with at least 'k' karma. The original poster would be able to add the tag to their own post, but not remove it unless they themselves meet the minimum 'tk' threshold. Others would be able to add or remove it only if they themselves meet those same stats. And comments made under a topic thus marked would by default inherit the same tag and properties. This would make it possible to have such conversations with little risk, with further improvements possible.
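The gating rule described above could be sketched roughly like this (the threshold values, names, and data shapes are all hypothetical, not an actual LW implementation):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical values standing in for the 't' and 'k' thresholds above.
MIN_AGE_DAYS = 90   # 't': minimum account age
MIN_KARMA = 100     # 'k': minimum karma

@dataclass
class Account:
    age_days: float
    karma: int

    def meets_tk(self) -> bool:
        """True if the account clears both the 't' and 'k' thresholds."""
        return self.age_days >= MIN_AGE_DAYS and self.karma >= MIN_KARMA

def can_view_risky_post(viewer: Optional[Account]) -> bool:
    """A post tagged as socially risky is visible only to logged-in
    accounts meeting the 'tk' threshold; None means not logged in."""
    return viewer is not None and viewer.meets_tk()

def can_remove_risky_tag(actor: Account) -> bool:
    """Anyone may add the tag; removing it requires meeting the same
    'tk' threshold oneself, including for the original poster."""
    return actor.meets_tk()
```

Comments inheriting the tag would then just delegate to the same checks against their parent post's tag.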

Comment by alexgieg on [Letter] Imperialism in the Rationalist Community · 2021-06-25T02:21:57.398Z · LW · GW

I don't know about the majority, but I can say for at least a few, when they say "I don't see people in terms of race", they're being literal, not metaphoric. I was like this until my late teen years, when it changed, in a bad way -- which I can detail if there's interest. But the point is, until that moment I really couldn't see race, at all. I evidently noticed people had different skin colors, hair types, and eye shapes, but this didn't register with me as significant in any way, shape or form, concrete or abstract.

And one comment about the AMAB and AFAB acronyms. A study I read years ago showed that about 1 in 20,000 newborns are transgendered. This means that 99.995% of the time the gender assigned at birth is indeed the gender the person will have. Now, the usual, in contexts in which one has a 99.995% likelihood of making a correct guess, is to simply say "x is y". Evidently, for the 0.005% of cases in which that guess was incorrect, it makes sense to say they were "incorrectly AM/FAB", but outside of these exceptional cases of misassignment, using these expressions gives the impression the assignment is incorrectly made way more often than it in fact is.
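For what it's worth, the arithmetic behind those percentages checks out, taking the 1:20,000 prevalence figure at face value:

```python
# 1 newborn in 20,000 transgendered, per the study mentioned above.
prevalence = 1 / 20_000
accuracy = 1 - prevalence  # fraction of birth assignments that match

print(f"{prevalence:.3%}")  # 0.005%
print(f"{accuracy:.3%}")    # 99.995%
```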

Comment by alexgieg on How can there be a godless moral world ? · 2021-06-24T20:42:08.423Z · LW · GW

Scott Alexander ... claimed that many theists would change their mind if you could convince them on a gut-level that there could exist a godless moral world.

I guess I, and apatheists in general, are exceptions to this prediction, for while I personally am a theistic apatheist -- and of the polytheist variety at that --, I don't believe morality depends on divine will, be it either that of an absolute god, or that of a pantheon of gods. Reality, as I understand it, is essentially amoral, with morals being features we mortals add to it. Hence, my view is essentially similar to that of a godless moral world, except it has gods in it (or, more precisely, transcendent to it).

I would be grateful if someone could give me not only reasons why and how there can be a morality without God, but also arguments that could speak to my gut-level.

A usual approach is to see morality as a distillation of natural impulses filtered through high-level cognition.

Dr. Larry Arnhart, author of a number of books merging classical philosophy, classical liberalism, and evolutionary biology, identifies 20 natural desires in human nature, all evolved through natural selection, which determine how humans interact with the world. Each one of those can be thought of as a different axis along which uses and customs (a.k.a. mores, from which "morals" and "morality") develop; these are then reasoned about, leading to the development of formal moral codes, to different ethical systems, and to meta-ethical frameworks, through so many abstraction layers.

Different individuals hierarchize those desires differently, and different societies also have their own hierarchies for them, their members aligning or not with their societies' hierarchy. One of those desires is "religious understanding", so it isn't really surprising that so many societies, and individuals, seek to interpret the entire set of desires, the uses and customs surrounding each one, and their corresponding abstractions, in terms of, and as part of, a religious understanding, which is, I venture, where the notion of morality sourced in divine design finds its root.

Notice, on the other hand, that "intellectual understanding" is also an evolved desire, so it isn't surprising those who place it at the top of their hierarchy of desires will see things in non-religious terms. Or maybe, if both religious and intellectual understanding are at the top, from an intellectually-rich theological perspective.

In fact, we can notice the interplay between the 20 desires diversely hierarchized in the Christian Bible itself, as it provides not one, but at least four distinct moral codes:

  1. The way God himself behaves as the basis of imitatio dei;
  2. The commandments of God to the Hebrews;
  3. The commandments of God to the Christians;
  4. The way Christians are described acting in the afterlife.

There's some minimal overlap between these four moral codes, but on the whole they oppose each other. And one can disagree with and criticize them either from a moral perspective based on none of the four, or from one based on a subset of one of them complemented by reasons outside all four.

Hence, if morality were to be of divine origin, then either only the minimalist set of behaviors at the intersection of all customs of all human societies in all times and places counts as the one spark of divinity amidst humanity, or, conversely, the maximalist set of the entire multidimensional ethics-space comprised of the full 20 axes counts. Anything in between would seem, at best, arbitrary, leading to a discussion about which meta-ethical decision making process is of divine origin, and which isn't etc.

Myself, I have my own ethical framework, which is a combination of Virtue Ethics with a Consequentialism based not on utilitarian criteria, but on the preservation of information. Taking the four Biblical moral codes, it intersects with a subset of the commandments given to the Christians, but it certainly doesn't align with the other three. I wouldn't, however, assign my ethical code to any deity, affirming it's authoritative because of that assignment. But if I met a deity who opposed and acted contrary to it, I'd feel quite within my rights to criticize that deity as immoral from the perspective of my own ethics, irrespective of it being divine or not, and not out of hubris, but because I really would think of them as acting immorally.

Comment by alexgieg on Chinese History · 2021-05-13T20:04:34.990Z · LW · GW

I wouldn't say that Confucianism is a religion.

If we go for a very technical take, the term "religion" refers only to Christianity. That's because the term was adopted during the Reformation era, and later expanded during the Enlightenment, to make some sense of what was going on between the different Nation States going for this or that version of Christianity. Those takes were then contrasted with the novel alternatives of Deism, Agnosticism, Atheism, of political power grounded on the people vs. on God, etc., all the while "back-porting" the term to the earlier disputes between Christendom's (the original term) great schism and earlier heresies, and between those as a whole vs. Judaism, Islam, and so-called Paganism. As such, any attempt at extending the term to anything beyond primarily Christianity-internal disputes, and secondarily Abrahamic disputes, is fraught with complications, since one is then operating more on the basis of analogies than on a strictly defined conceptual axis. For more details, check Catholic philosopher Edward Feser's blog post What is religion?

Given that, taking Confucianism to be a religion, or taking it not to be one, are both arguably valid, since it comes down to which aspects one emphasizes and deemphasizes in their analogical approach.

Now, I consider Confucianism a religion because it had and has a priesthood, rites, and temples, and presented itself as a continuation and development of ancient Chinese beliefs. Confucius himself, for instance, was a well-regarded and accomplished expert in the art of ritual animal sacrifice, and it'd be very odd to try to disengage his religious piety from his intellectual work, when both in fact complement each other. It'd be akin to thinking of the Neoplatonic philosophers, and Neoplatonism, as non-religious despite many of them being pious worshippers of several Greek deities, deities who in turn can be taken to be as abstract as Confucianism's Tian. In fact, the very Physis referred to by the scholar mentioned in the Wikipedia article was a duly worshipped primordial goddess in the Orphic tradition in Greece.

Other polytheistic and/or ancestor-worshipping belief systems have similar traits. In fact, in the set of human belief systems, it's modern Western ones that stand out as somewhat weird -- or rather WEIRD -- in their sharp distinction between secular and religious spheres of influence and action. Most everyone else doesn't do that. Hence, maybe it'd be more accurate to say neither that Confucianism is a religion, nor that Confucianism isn't a religion, but rather that Confucianism, Neoplatonism, Hinduism and others are all holistic paths (that they're "daos"), and that both Western religions and non-religions alike are, all of them, so many daos.

Comment by alexgieg on [Letter] Advice for High School #2 · 2021-05-13T13:38:11.876Z · LW · GW

The difference between someone with an IQ of 115 and someone with an IQ of 175 is four standard deviations. Four standard deviations is huge. It is equal to the difference between a PhD in science and someone hovering on the edge of an intellectual disability.

I'd be careful with this kind of comparison. IQ numbers and SDs may look like cardinal measurements, but they're actually an ordinal, hierarchical system. What one can say is that someone with IQ n+1 is "smarter than" someone with IQ n, who in turn is "smarter than" someone with IQ n-1. But there's no way, for now, to convert that into a cardinality.

Hence, in an absolute sense of literal, actual intelligence, the difference between an IQ of 175 and an IQ of 115 may be either greater or smaller than the difference between an IQ of 115 and an IQ of 55. My personal hunch is that it's much smaller, although, evidently, I have no way to back that up.

Comment by alexgieg on Chinese History · 2021-05-13T13:17:54.599Z · LW · GW

Chinese religions were never exported mostly because of their lack of use in governance.

That's quite incorrect. In addition to my reply above to ChristianKl, I'll add that Confucianism has been exported all around Asia precisely because of its use in governance, having historically resulted in extensive political changes in the Vietnamese, Korean, and Japanese governments of old.

Comment by alexgieg on Chinese History · 2021-05-13T13:12:10.766Z · LW · GW

The Chinese fight Catholicism this way precisely because Catholicism is political in a way that their homegrown religions weren't.

Confucianism is extremely political. If I remember right, when an emperor's government began to fail severely, their priests performed rites to determine whether the emperor had lost the Mandate of Heaven and a new emperor should be chosen, opening the way for religiously-legitimated rebellions to replace the distrusted dynasty.

This influence of religion on politics in part explains the reason the CCP is always so worried about, and ruthless towards, any religion that deviates from its ideology du jour.

Comment by alexgieg on Why I Work on Ads · 2021-05-04T19:35:41.534Z · LW · GW

When you switch to a paywall model, you have to accept that you're going to lose a large portion of your readers, which means you need to charge the remaining ones a lot more, no?

Yes, but no.

Technically, there's no direct derivation from costs to the price charged. The costs involved in your providing a good or service, let's call them Vmin, determine a lower bound: if you cannot charge at least that, you're operating at a loss and won't provide that service, instead opting to do something else. On the other extreme, your potential customers' maximum ability to pay (in aggregate), let's call it Vmax, which in turn is bounded by their income, determines how much you can charge them. The price V that you're effectively going to charge lies between Vmin and Vmax.

Customers will do what they can to push V towards Vmin. You, on the contrary, will do what you can to push V towards Vmax. In the end, V ends up somewhere in the middle, so that Vmin < V < Vmax. Therefore, my prior is that a charge of $20/month for such a service is much closer to Vmax than it is to Vmin, for the sole reason that this is the incentive operating on the provider's side.
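The bound itself is simple enough to sketch; the dollar figures here are made up purely for illustration:

```python
def feasible_price_range(v_min: float, v_max: float):
    """The provider won't sell below cost (v_min), and can't charge more
    than customers are able to pay (v_max); a sale only happens in between."""
    if v_min > v_max:
        return None  # no mutually acceptable price: the service isn't offered
    return (v_min, v_max)

# Made-up figures: $3/month in per-subscriber costs, and $25/month as
# the most the readership can collectively bear.
bounds = feasible_price_range(3.0, 25.0)   # (3.0, 25.0)
# A $20/month price sits near the top of that range, which is the point:
# the provider's incentive pushes the actual price V toward v_max.
```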

Be that as it may, I neither accept lying, biased, dark-pattern-exploiting ads, nor do I have a high enough income to justify paying more than a few dollars per month, in aggregate, for the sites I read. Solving this equation is something site owners, together, should work on. If there's no solution and the end result is less of that specific content, well, I derive marginal utility from having access to it, so if it goes missing, *shrugs*.

Comment by alexgieg on Why I Work on Ads · 2021-05-04T16:22:24.151Z · LW · GW

I'll describe the problems I have with online advertising, both for me and in general:

a) For me personally:

  • I'm a philosopher by formation, and I work in a very technical area, so I have as my focus of interest two things: truth, and data. Modern ads have neither.

If a manufacturer wants to get me interested in anything at all, I want specs front and center. No fluff, no emotional appeal, no aesthetic considerations. Hard data. If an ad has any of the former and no hard data, I not only ignore it, but I develop a very strong negative bias towards the brand, to the point that the more ads I see lacking factual rigor, the more opposed I become to ever buying the product, or buying from the brand if it's an ad meant to build brand awareness. Such ads, in particular, have made me quite aware of brands not to purchase from.

  • They very, very, very rarely have anything to do with my interests.

In the past they used to be a tiny little bit more relevant, back when ads were based on the contents of the page itself, since if I was reading about a topic I was interested in that topic. Then they changed to targeting my former topics of interest, meaning not what I'm actually interested in right now, and thus became even more irrelevant than they already were.

I think that, over a period of maybe 3 years, I've seen one ad that was relevant to my interests. It was for a classical music streaming service. It got me to the point of actually opening their website. Alas, it was too expensive and I didn't subscribe. But that was it.

  • When they hit close, they're for things I already purchased.

When I decide to purchase something, my procedure is systematic. I seek reviews of things in that category, visit technical sites with specs for the top 5 to 10 items that roughly match my interest to see which ones fit, narrow my choices down to 3 items, then compare their prices on price-comparison sites and buy the one that provides the best return per dollar. Ad-tracking machine learning is very dumb and, not understanding that I already purchased the item (usually the same day I began searching), serves me ads for the very same item for days on end, which is useless both for me and for everyone else involved.

b) In general:

  • Ads exploit cognitive biases.

As a philosopher first, and a rationalist second, I strive to rid myself of cognitive biases, and try to elevate others out of them. Hence, ads that aren't strictly data-driven and factual sit on the opposite side of this moral axis from me.

  • Sites showing ads exploit dark patterns.

The same thing, except from the side of those showing ads.

All of that said, there was a time I didn't mind ads. It was that narrow period of a few years in which Google distributed only textual ads, and had rules about sensible places to put them at. When they changed direction I began using ad blockers, and whenever I stop using them the end result is so obnoxious I promptly go back to using them.

Now, in regards to:

c) Alternatives:

  • Paywalls and micropayments.

Values for paywalls are, simply put, nonsensical. There's no way the ads I would have seen on a site over a period of one month would have generated $20 for the site, so trying to charge me $20/month is a non-starter. I could see paying $1 for the right to read a number of articles from that site, say 100 articles at $0.01 per article, which would be more than enough for several months (provided it was tied to my upvoting each article after having read it, precisely to discourage clickbaity, content-free nonsense that tried to waste my time), but that's about it.

How that would be implemented in practice is a matter for browser manufacturers to solve. I imagine they will do so eventually as adblocking becomes more and more pervasive, as this doesn't seem to be a particularly difficult problem to fix.

  • Curated ads.

There's one category of ads I don't mind: ads curated by site owners, in which they themselves evaluate every ad shown on their site for truthfulness, adequacy, and taste, and they themselves host and serve them.

These are rare nowadays, but it's the one kind of ad I don't block. It's basically the kind of ad one would find in printed magazines and newspapers, except online.

These are my 2 cents on the subject.

Comment by alexgieg on Can you improve IQ by practicing IQ tests? · 2021-04-29T14:59:20.491Z · LW · GW

Whether or not something "is" known to work or to fail often determines whether you "ought" to do it.

Not at all. Knowing that doing X causes Y only informs that if you want result Y, the way to achieve that is by doing X. It doesn't tell you whether Y is desirable or not.

Hence, if a society wants maximum productive efficiency, and allocating more resources to their most intelligent members is the most effective way to achieve that, then yes, allocating more resources for them, and less for less gifted individuals, is the way to go. On the flip side, if a society wants, let's say, to maximize equality of outcomes among its members, then they'll completely ignore that means, and look for the method that will provide that outcome.

The decision about the "ought", then, is what truly determines which "is" will be chosen, not the other way around.

Comment by alexgieg on Can you improve IQ by practicing IQ tests? · 2021-04-29T14:44:46.347Z · LW · GW

Not really. Currently IQ distribution is defined as a Gaussian, so if tests are made correctly and the proper transformation is applied the shape of the curve, for a large enough population, will literally be a Gaussian "by definition". Check this answer on Stack Exchange for details and references:

Now, evidently, for smaller sub-samples of the population the shape will vary.
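That "Gaussian by definition" transformation can be sketched in a few lines (a minimal illustration using Python's standard library; real psychometric norming is considerably more involved):

```python
from statistics import NormalDist

def raw_to_iq(raw_scores):
    """Rank-based normalization: map each raw test score to an IQ by
    sending its percentile rank through the inverse normal CDF, then
    rescaling to mean 100, SD 15. Ties are broken arbitrarily here."""
    n = len(raw_scores)
    by_score = sorted(range(n), key=lambda i: raw_scores[i])
    iqs = [0.0] * n
    for rank, i in enumerate(by_score):
        pct = (rank + 0.5) / n  # midpoint rank avoids infinite z at 0% and 100%
        iqs[i] = 100 + 15 * NormalDist().inv_cdf(pct)
    return iqs
```

By construction, the median raw score lands at IQ 100 whatever the shape of the raw distribution, which is exactly the "by definition" point above.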

Comment by alexgieg on Can you improve IQ by practicing IQ tests? · 2021-04-29T00:03:10.351Z · LW · GW

I often wonder how many of our characteristics are truly innate, and not just learned or trained.

In the case of IQ this has been well established. There's some variance due to nurture, but the bulk of it is nature. For example, very young children adopted by high IQ couples, and raised with a focus on intellectual matters, still demonstrate an IQ much closer to that of their lower-IQ biological mother than to that of their adoptive parents.

This isn't to say that being raised by high-IQ parents has no consequences. These children learn several personal and cultural skills in an environment that nurtures their abilities, and therefore manage to, for example, obtain a bachelor's degree with a much higher likelihood than the average for their origin groups, meaning their Big 5 "Conscientiousness" trait did grow remarkably.

In terms of their raw IQ, though, other than the increase due to better nutrition, no, nurture has no effect, unfortunately.

An argument for nature undercuts the idea that education and good opportunities should be made available to everyone.

Not really. An "is" doesn't determine an "ought". It can easily be argued, to the contrary, that precisely because low-IQ individuals need more institutional support than high-IQ individuals, they should receive a much better tailored education and much better vocational opportunities, as high-IQ individuals are much more likely to solve what they need solved on their own with minimal, or no, external aid.

Comment by alexgieg on Can you improve IQ by practicing IQ tests? · 2021-04-28T15:36:43.336Z · LW · GW

AFAIK, most don't prepare at all since there isn't much at stake.

Very few companies hire based on high IQ. When they do, it's usually because the problems the employee will deal with are highly mathematical and/or logical in nature and a person with a low (real) IQ would do really poorly at them, and in any case they still require candidates to have specific skills, which matter more than IQ. And when such companies do take IQ into consideration, they usually do so not by requiring an official score, but by making candidates go through aptitude tests and puzzles, then checking how they scored on those. Very few ask for a fully certified score, and those that do may well also require a full personality evaluation, meaning a full Big 5 assessment.

On the flip side, there are jobs with a maximum IQ requirement that don't hire people above it, the reasoning being that anyone with a higher IQ would get utterly bored at the job and leave at the first opportunity, wasting the company's time and training investment. So they administer a test, and if you score too well on it, you're turned away.

Hence, gaming the score would land one in one of two bad spots. Either a job with such extreme mathematical and logical demands that one would be constantly mentally exhausted and leave, unable to cope with spending so much mental energy. (This is measurable: brain scans show that complex tasks which barely register as energy expenditure in high-IQ brains cause average-IQ brains to flare up in a storm of long, constant, intense activity.) Or, on the other extreme, a job with requirements so far below one's abilities that it'd feel miserable until one in fact jumped ship for something more stimulating.

Now, one important thing to keep in mind is that IQ scores aren't absolute values; they're relative values based on how a population answers tests, and the score distribution follows a Gaussian.

If a test has 100 questions, and 50% of those taking it get less than 60 questions right, and the other 50% get more than 60 questions right, then IQ 100 is defined as "getting 60 questions right". If in 20 years the same test has 50% of those taking it getting less than 70 questions right, and the other 50% getting more than 70 questions right, then IQ 100 is redefined as "getting 70 questions right". Hence, IQ 100 is always the average of a population.

Then, for numbers above and below 100, every 'n' points (usually 15) is defined as "one standard deviation". Since the distribution is Gaussian, this means that IQ 85 (1 standard deviation below the mean) is defined as the number of questions that 84.1% of respondents exceed; IQ 100 (the mean) is the number of questions the aforementioned 50% of respondents get right; IQ 115 (1 standard deviation above the mean) is the number of questions only the top 15.9% of respondents get right; IQ 130 (2 standard deviations) the number only the top 2.3% get right; IQ 145 (3 standard deviations) the number only the top 0.1% get right; and so on and so forth, in both directions.
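Those tail fractions follow directly from the normal CDF, and can be checked with Python's standard library:

```python
from statistics import NormalDist

def fraction_above(iq, mean=100, sd=15):
    """Fraction of the population scoring at or above a given IQ,
    under the Gaussian-by-definition model (mean 100, SD 15)."""
    return 1 - NormalDist(mean, sd).cdf(iq)
```

For example, `fraction_above(115)` gives about 15.9%, `fraction_above(130)` about 2.3%, and `fraction_above(145)` about 0.1%.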

This means that, if people began gaming the score, the shape of the curve would change into a distorted Gaussian, introducing a perceptible skew that could be calculated following standard statistical procedures, which in turn would prompt a renormalization of the test so that it tracked averages and standard deviations correctly once again, rendering any such effort a one-time stunt.
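A crude sketch of that detection step (the 0.5 threshold is my own illustrative choice, not a psychometric standard):

```python
def sample_skewness(scores):
    """Fisher-Pearson skewness: 0 for a symmetric (e.g. Gaussian) sample,
    nonzero when the distribution has been distorted to one side."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((x - mean) ** 2 for x in scores) / n) ** 0.5
    return sum((x - mean) ** 3 for x in scores) / (n * sd ** 3)

def needs_renormalization(scores, threshold=0.5):
    """Flag a score distribution whose skew suggests widespread gaming,
    prompting the test to be re-normed."""
    return abs(sample_skewness(scores)) > threshold
```

A symmetric sample passes; a sample with a lump of inflated scores at one end gets flagged for re-norming.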

Comment by alexgieg on Can you improve IQ by practicing IQ tests? · 2021-04-28T12:12:35.115Z · LW · GW

Is the IQ test fundamentally different from a school test?

Yes. It measures an intrinsic ability, not a learned skill. I'll make an analogy:

Suppose there was an "athletic ability" measurement score calibrated so that it can gauge, via a set of physical tests, the athletic potential of individuals. It's devised so that a population with no specific training can take it, and the result correlates with, for example, how fast a person will be able to run if they dedicate themselves to short range training full time.

This limit, notice, is genetic. Your genes determine the structure and interconnection of your skeleton, muscles, nerves etc. and how well they all respond to diet, training regimen, stimulants, and other external factors. Hence, every person will have a range of running speed that goes from their speed when running without any specific training, let's call this speed A, all the way up to their maximum genetically determined potential, let's call this speed B.

The "athletic ability" score, then, taking into account the several factors tested, including your current, untrained running speed, gives you a number that, when you look it up in a table constructed from tests with thousands of other individuals, shows that your maximum speed, if you dedicate yourself completely to developing your running potential, will be B.

Now, suppose an individual, for some reason, trains day and night at short-range running before taking the "athletic ability" test. Maybe they're a teen whose parents insist they excel at the test because of, say, the potential for tuition reductions in college. Or maybe they have a parent who's a running champion and they want to impress them, be up to their standards, or whatever. They look at how the test is administered, do everything to nail it, and so, when the day comes, they take the test -- in which, among other things, they run at their current speed P -- and as a result obtain a much higher score than they would have gotten otherwise. According to this score, when they look at the statistical table of maximum short-range running speeds, it tells them that if they begin training full time (the table assumes they aren't already training) their maximum speed B will be Q! Amazing!

So, our fictional teen continues their training as much as they can, full time, doing everything optimally! And now, years later, at the very top of their performance, their maximum speed is... Q? No. It's a little bit above P, but nowhere near Q. Why? Because the test wasn't devised for people who were already training, much less for people trying to game it. They gamed the test, got a higher score than they would have gotten otherwise, but their maximum genetically determined potential speed is what it is. Gaming the test won't change their true maximum speed B no matter how much they try to skew it.

Now, suppose everyone began gaming the "athletic ability" test, so that scores no longer correlated with the table of maximum speeds B. What would happen? Well, psychologists would analyze the new trend. They'd look at current full-time professional short-range runners and the scores they obtained on their "athletic ability" tests a few years before, and develop a new table of updated maximum speeds B per score, so that the two numbers correlated again.

That's how IQ works.

A high-IQ person, let's say someone with an IQ of 140, can instantly grasp novel, complex abstract concepts in a field they never studied before after barely glancing at it and hearing it explained once, in a summarized form that took 15 minutes to deliver, and then get a 9.0 on a test without having studied it again in between. A person with an IQ of 100, in contrast, might require a full class on the topic, several hours or even days of study at home, and lots of reading to manage the same 9.0 on the same test.

If the latter had gamed the IQ test so that their official score were also 140, that wouldn't have changed the outcome of this scenario. They'd still have had to take the full class, study several hours to days at home, and do lots of reading to get that 9.0, while the "not-gamed" IQ 140 person still required a mere 15 minutes of hearing about the topic once to score that 9.0. And had the "not-gamed" IQ 140 person decided to get a 10.0 with honors, they'd have needed to study the topic for maybe 3 more hours; they just didn't care enough to bother.

Comment by alexgieg on Can you improve IQ by practicing IQ tests? · 2021-04-27T19:10:20.334Z · LW · GW

Roughly speaking, the IQ score measures one's ability to recognize patterns, so it isn't a direct measurement of intelligence per se, but of an ability that correlates strongly with several other abilities that people associate with the much fuzzier concept of intelligence.

If you practice for IQ tests, you're going to become better at detecting the specific kinds of patterns used in IQ tests, but then your IQ score will correlate less with your general pattern-recognition ability, and in turn with those other traits, so at some point your score will stop reflecting your general intelligence.

To increase your intelligence as a whole you'd have to become better at recognizing more and more complex patterns in general, not only when you're focusing on problems but automatically, as a passive ability. That would require quite a lot of cerebral plasticity, which is something adults almost universally lack.

Now, having great pattern recognition, and by extension a high IQ score, doesn't by itself suffice to say someone is actually intelligent in a broad sense, because when one is very good at detecting hard-to-perceive patterns (hard to perceive for the majority), one also becomes very good at detecting patterns that aren't there at all. For example, conspiracy theorists -- the kind who create conspiracy theories, not mere followers -- are usually very high-IQ individuals whose pattern recognition went in quite wrong directions. Hence, a high IQ is, at best, a raw measure of one's cognitive potential more than of one's cognitive execution. The latter does require training to be turned into something actually able to accomplish great things.

Be that as it may, there have been some studies on what does increase average IQ scores for populations at large. The main factor, above everything else, is better nutrition during infancy. That helps the brain develop without hindrances, resulting in most of those children, when they grow up, being able to recognize many more patterns than peers who were malnourished in their first years. That one factor cannot be compensated for later in life. And on top of that, access to excellent education in a stable environment during one's formative years also helps with a few extra points.

Finally, it should be noted that the effects of IQ scores are better understood (because easier to study) for lower IQs than for higher IQs. For lower IQs there are lots of correlations with anti-social behavior, criminality, impulsiveness, mental illnesses etc. For higher IQs there are correlations with mathematical prowess and having better incomes, probably because we live in a society that values professions requiring pattern recognition (engineering, law, finance, programming, anything requiring complex strategizing etc.), but not much beyond that.

Comment by alexgieg on What weird beliefs do you have? · 2021-04-15T17:55:38.528Z · LW · GW

One way to look at this is in focusing on what purpose money serves.

Suppose you do something for someone, and that person pays you a $1 bill. What does it mean to have that $1 bill in your hands? After all, concretely speaking, it isn't good for much. It's a small piece of generic printed paper, so you can use it for the same general purposes any other piece of printed paper serves.

However, it has attached to it a formal "possibility of" a future something, as you can eventually exchange it for something else, be it a good or a service. Hence, at its core that $1 bill is a contract, or more specifically, a promise.

Hence, when you do something and receive $1, you're exchanging that work for a promise. And, conversely, someone else is promising you a future reward in exchange for you doing something now. And, evidently, such promises themselves can be exchanged, such as when one exchanges one country's currency for another's.

Notice then that debt, in aggregate, works in a very similar way. When a credit agency you owe money to negotiates that debt of yours with another agency, they're exchanging promises between themselves, tied to something eventually happening, namely, your providing them many $1 promise-bills in exchange for the return of the big promise letter bearing your signature that one of them holds. And so on, similarly, at higher layers, all the way up to the much higher layer of debts held by countries, which are also exchanged around.

Hence, at that very high level the movement of debts around is a form of money. Rather than moving around packs of first-order promises, a.k.a. stored currency, they move around wide blocks of second- or third-order promises, tied to whole countries doing this or that within the negotiated time frame.

This is why holding countries to a positive cash flow doesn't make much sense. I mean, it makes some sense, in that handing out blocks of "small promises" simplifies many things. But it also makes other movements more complex, as using debt, that is, "big promises", can be a very effective tool to move things faster when handled carefully.