I already think that "the entire shape of the zeitgeist in America" is downstream of non-trivial efforts by more than one state actor. Those links explain documented cases of China and Russia both trying to foment race war in the US, but I could pull links for other subdimensions of culture (in science, around the second amendment, and in other areas) where this has been happening since roughly 2014.
My personal response is to reiterate over and over in public that there should be a coherent response by the governance systems of free people, so that, for example, TikTok should either (1) be owned by human people who themselves have free speech rights and rights to a jury trial, or else (2) should be shut down by the USG via taxes, withdrawal of corporate legal protections, etc...
...and also I just track actual specific people, and what they have personally seen and inferred and probably want and so on, in order to build a model of the world from "second hand info".
I've met you personally, Jan, at a conference, and you seemed friendly and weird and like you had original thoughts based on original seeing, and so even if you were on the payroll of the Russians somehow... (which to be clear I don't think you are) ....hey: Cyborgs! Neat idea! Maybe true. Maybe not. Maybe useful. Maybe not.
Whether or not your cyborg ideas are good or bad can be screened off from whether or not you're on the payroll of a hostile state actor. Attending primarily to local validity is basically always possible, and nearly always helpful :-)
Thank you for the response <3
"...I would like to prove to the Court Philosopher that I'm right and he's wrong."
This part of the story tickles me more, reading it a second time.
I like to write stories that mean different things to different people... this story isn't a puzzle at all. It is a joke about D&D-style alignment systems.
And it kinda resonates with this bit. In both cases there's a certain flexibility. The flexibility itself is unexpected, but reasonably safe... which is often a formula for comedy? It is funny to see the flexibility in Phil as he "goes social", and also funny to see it in you as you "go authorial" :-)
It is true that there are some favorable properties that many systems other than the best system have compared to FPTP.
I like methods that are cloneproof and which can't be spoofed by irrelevant alternatives, and if there is ONLY a choice between "something mediocre" and "something mediocre with one less negative feature" then I guess I'll be in favor of hill climbing since "some mysterious force" somehow prevents "us" from doing the best thing.
However, I think cloning and independence are "nice to haves" whereas the Condorcet criterion is probably a "need to have".
((The biggest design fear I have is actually the "participation criterion". One of the very very few virtues of FPTP is that it at least satisfies the criterion where someone showing up and "wasting their vote on a third party" doesn't cause their least preferred candidate to jump ahead of a more preferred candidate. But something similar can happen in every method I know of that reliably selects the Condorcet Winner when one exists :-(
Mathematically, I've begun to worry that maybe I should try to prove that Condorcet and Participation simply cannot both be satisfied at the same time?
Pragmatically, I'm not sure what it looks like to "attack people's will to vote" (or troll sad people into voting in ways that harm their interests and have the sad people fight back righteously by insisting that they shouldn't vote, because voting really will net harm their interests).
One can hope that people will simply "want to vote" because it makes civic sense, but it actually looks like a huge number of humans are biased to feel like a peasant, and to have a desire to be ruled? Or something? And maybe you can just make it "against the law to not vote" (like in Australia) but maybe that won't solve the problems that could hypothetically "sociologically arise" from losing the participation criterion in ways that might be hard to foresee.))
In general, I think people should advocate for the BEST thing. The BEST thing I currently know of for picking an elected civilian commander in chief is "Ranked Pairs tabulation over Preference Ballots (with a law that requires everyone to vote during the two day Voting Holiday)".
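For concreteness, here is a toy Python sketch of Ranked Pairs tabulation (illustrative only: it ignores pairwise ties and the formal tie-breaking rules a real statute would need):

```python
def ranked_pairs_winner(ballots, candidates):
    """Toy Tideman Ranked Pairs: lock in the strongest pairwise
    majorities that don't create a cycle; the winner is the candidate
    that no locked edge points at."""
    # 1. Pairwise tally: wins[(a, b)] = number of voters ranking a over b
    wins = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot in ballots:                 # each ballot: a list, best first
        for i, a in enumerate(ballot):
            for b in ballot[i + 1:]:
                wins[(a, b)] += 1
    # 2. Collect the majorities, strongest first
    majorities = sorted((p for p in wins if wins[p] > wins[(p[1], p[0])]),
                        key=lambda p: wins[p], reverse=True)
    # 3. Lock each majority in, unless it would complete a cycle
    locked = set()
    def reaches(x, y):                     # is there a locked path x -> y?
        return any(b == y or reaches(b, y) for (a, b) in locked if a == x)
    for (a, b) in majorities:
        if not reaches(b, a):
            locked.add((a, b))
    # 4. The source of the locked graph wins
    beaten = {b for (_, b) in locked}
    return next(c for c in candidates if c not in beaten)

# 9 voters, 3 candidates; B is the Condorcet Winner
ballots = ([["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3
           + [["C", "B", "A"]] * 2)
print(ranked_pairs_winner(ballots, ["A", "B", "C"]))   # -> B
```

The key property: if a Condorcet Winner exists, no majority points at them, so every one of their pairwise victories gets locked in, and they always come out as the source of the graph.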
Regarding approval ratings on products using stars...
...I'd like to point out that a strategic voter using literal "star" voting will generally collapse down to "5 stars for the good ones, 0 stars for everyone else".
This is de facto approval voting, and a strategic voter doing approval voting learns to restrict their approval to ONLY the "electable favorite", which de facto gives you FPTP all over again.
And FPTP is terrible.
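To make the collapse concrete, here's a toy sketch of that strategic analysis (the utilities and the frontrunner list are made up for illustration):

```python
def strategic_star_ballot(utilities, frontrunners):
    """What a strategic voter does with a 0-5 star ballot: max stars for
    their preferred frontrunner and everyone they like at least as much,
    zero stars for everyone else. No intermediate stars survive, so it's
    an approval ballot whose threshold is set by electability."""
    favorite_frontrunner = max(frontrunners, key=lambda c: utilities[c])
    threshold = utilities[favorite_frontrunner]
    return {c: 5 if u >= threshold else 0 for c, u in utilities.items()}

# Honest utilities in [0, 1]; the polls say only A and B can actually win
utilities = {"A": 0.9, "B": 0.4, "C": 1.0, "D": 0.6}
print(strategic_star_ballot(utilities, frontrunners=["A", "B"]))
# -> {'A': 5, 'B': 0, 'C': 5, 'D': 0}: de facto approval, anchored on A
```

Notice the honest 0.6 on D: under strategy it gets rounded all the way down to zero stars, and all the "utility information" the method was supposed to collect evaporates.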
Among the quoted takes, this one, about the sadness of the star rating systems, was the best, because it was practical, and placed the blame where it properly belongs: on the designers and maintainers of the central parts of the system.
Nobe: On Etsy you lose your “star seller” rating if it dips below 4.8. A couple of times I’ve gotten 4 stars and I’ve been beside myself wondering what I did wrong even when the comment is like “I love it, I’ll cherish it forever”
If you look at LessWrong, you'll find a weirdly large number of people into Star Voting who don't account for "the new meta" that it would predictably introduce. (Approval voting also gets some love, but less.)
My belief is that LW-ers who are into these things naively think that "stars on my ballot would be like a proxy for my utility estimates, and utility estimates would be the best thing (and surely everyone (just like me) would not engage in strategic voting to break this pleasing macro property of the aggregation method (that arises if everyone is honest and good and smart like I am))".
Which makes sense, for people from LessWrong, who are generally not cynical enough about how a slight admixture of truly bad faith (or just really stupid) players, plus everyone else "coping with reality" often leads to bad situations.
Like the bad situation you see on Etsy, with Etsy's rating system.
...
It's weird to me that LW somehow stopped believing (or propagating the belief very far?) that money is the unit of caring.
When you propagate this belief quite far, I think you end up with assurance contracts instead of voting for almost all "domestic" or "normal" issues.
And when you notice how using money to vote in politics is often considered a corrupt practice, it's pretty natural to get confused.
You wouldn't let your literal enemy in literal war spend the tiny amount it would (probably) cost to bribe your own personal commander in chief to be nice to your enemy while your enemy plunders your country at a profit (relative to the size of the bribe)...
...and so then you should realize that your internal political system NEEDS security mindset, and you should be trying to get literally the most secure possible method to get literally the best possible "trusted component" in your communal system for defending the community.
The reason THIS is necessary is that we live in a world of Hobbesian horror. This is the real state of affairs on the international stage. There are no global elected leaders who endorse globally acceptable moral principles for the entire world.
(((Proposing to elect such a person democratically over all the voters in China, India, Africa, and the Middle East swiftly leads reasonable and wise Americans to get cold feet. I'm not so crazy as to propose this... yet... and I don't want to talk about multi-cultural "fully collective" extrapolated volition here. But I will say that I personally suspect "extrapolated volition and exit rights" is probably better than "collective extrapolated volition" when it comes to superintelligent benevolence algorithms.)))
In lots of modern spy movies, the "home office" gets subverted, and the spy hero has to "go it alone".
That story-like trope is useful for symbolically and narratively explaining the problem America is facing, since our constitution has this giant festering bug in the math of elections, and it's going to be almost impossible for us to even patch the bug.
The metaphorical situation where "the hero can't actually highly trust the home office in this spy movie" is the real situation for almost all of us, because "Americans (and people outside of America) can't actually highly trust America's president selected by America's actual elections"... because in the movie, the home office was broken because it was low security, and in real life our elections are broken because they have low security... just like Etsy's rating systems are broken because they are badly designed.
Creating systemic and justified trust is the EXACT issue shared across all examples: random spy movies, each US election, and Etsy.
A traditional way to solve this is to publicly and verifiably select a clear single human leader (assuming we're not punting, and putting AI in charge yet) to be actually trusted.
You need someone who CAN and who SHOULD have authority over your domestic intelligence community, because otherwise your domestic intelligence community will have no real public leader, and once you're in that state of affairs, you have no way to know they haven't gone entirely off the rails into 100% private corruption, for the pure hedonic enjoyment of private power over weak humans who can't defend themselves, by people who gain sexual enjoyment from watching humans suffer at their hands.
Biden was probably against that stuff? I think that's part of why he insisted on getting out of Afghanistan?
But our timeline got really really really lucky that an actually moral man might have been in the White House for a short period of history from 2020 to 2024. But that was mostly random.
FPTP generates random presidents.
Approval voting collapses down to FPTP under strategy and would (under optimization pressure) also generate random presidents.
Star voting collapses down to approval voting under strategy and would (under optimization pressure) also generate random presidents.
I've thought about this a lot, and I think that the warfighting part of a country needs an elected civilian commander in chief, and the single best criterion for picking someone to fill that role is the Condorcet Criterion, and from there I'm not as strongly certain, but I think the most secure ways to hit that criterion with a practical implementation that has quite a few other properties include Schulze and Ranked Pairs ballot tabulation...
...neither of which use "stars", which is a stupid choice for preference aggregation!!
Star voting is stupid.
Years ago I heard from someone, roughly, that "optics is no longer science, just a field of engineering, because there are no open questions in optics anymore, we now 'merely' learn the 'science' of optics to become better engineers".
(This was in a larger discussion about whether and how long it would take for anything vaguely similar to happen to "all of physics", and talking about the state of optics research was helpful in clarifying whether or not "that state of seeming to be fully solved" would count as a "fully solved" field for other fields for various people in the discussion.)
In searching just now, I find that Stack Exchange also mentions ONLY the Abraham-Minkowski question as an actual suggestion about open questions in optics... and it is at -1, with four people quibbling with the claim! <3
Thank you for surprising me in a way that I was prepared to connect to a broader question about the sociology of science and the long run future of physics!
I hit ^f and searched for "author" and didn't find anything, and this is... kind of surprising.
For me, nothing about Harry Potter's physical existence as a recurring motif in patterns of data inscribed on physical media in the physical world makes sense without positing a physically existent author (and in Harry's case a large collection of co-authors who did variational co-authoring in a bunch of fics).
Then I can do a similar kind of "obtuse interest in the physical media where the data is found" when I think about artificial reward signals in digital people... in nearly all AIs, there is CODE that implements reinforcement learning signals...
...possibly ab initio, in programs where the weights, and the "game world", and the RL schedule for learning weights by playing in the game world were all written at the same time...
...possibly via transduction of real measurements (along with some sifting, averaging, or weighting?) such that the RL-style change in the AI's weights can only be fully predicted by not only knowing the RL schedule, but also by knowing about whatever more-distant-thing is being measured, so as to predict the measurements in advance.
The code that implements the value changes during the learning regime, as the weights converge on the ideal, is "the author of the weights" in some sense...
...and then of course almost all code has human authors who physically exist. And of course, with all concerns of authorship we run into issues like authorial intent and skill!
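To make the "code as author" point concrete, here is a deliberately tiny toy sketch (not any real training setup): a hard-coded reward function plus an update loop that, between them, fully determine where the weights end up:

```python
import random

def reward(action):
    # The "game world" and RL schedule, authored in advance: this rule
    # (or a stream of real measurements) is what shapes the weights.
    return 1.0 if action == "press_lever" else -1.0

weights = {"press_lever": 0.0, "groom": 0.0}   # the learner's raw state
learning_rate = 0.01

for _ in range(1000):
    action = random.choice(list(weights))      # explore at random
    weights[action] += learning_rate * reward(action)

# The converged weights were "authored" by reward() plus the update rule
# above... and those, in turn, by whoever wrote this file.
print(weights)
```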
It is natural, at this juncture, to point out that "the 'author' of the conscious human experience of pain, pleasure, value shifts while we sleep, and so on (as well as the 'author' of the signals fed to this conscious process from sub-conscious processes that generate sensoria, or that sample pain sensors, to create a subjective pain qualia to feed to the active self model, and so on)" is the entire human nervous system as a whole system.
And the entire brain as a whole system is primarily authored by the human genome.
And the human genome is primarily authored by the history of human evolution.
So like... One hypothesis I have is that you're purposefully avoiding "being Pearlian enough about the Causes of various Things" for the sake of writing a sequence with bite-sized chunks, that can feel like they build on each other, with the final correct essay and the full theory offered only at the end, with links back to all the initial essays with key ideas?
But maybe you guys just really really don't want to be forced down the Darwinian sinkhole, into a bleak philosophic position where everything we love and care about turns out to have been constructed by Nature Red In Tooth And Claw and so you're yearning for some kind of platonistic escape hatch?
I definitely sympathize with that yearning!
Another hypothesis is that you're trying to avoid "invoking intent in an author" because that will be philosophically confusing to most of the audience, because it explains a "mechanism with ought-powers" via a pre-existing "mechanism with ought-powers" which then cannot (presumably?) produce a close-ended "theory of ought-powers" which can start from nothing and explain how they work from scratch in a non-circular way?
Personally, I think it is OK to go "from ought to ought to ought" in a good explanation, so long as there are other parts to the explanation... So minimally, you would need two parts that work sort of like a proof by induction. Maybe?
First, you would explain how something like "moral biogenesis" could occur in a very very very simple way. Some Catholic philosophers call this "minimal unit" of moral faculty "the spark of conscience", and a technical term that sometimes comes up is "synderesis".
Then, to get the full explanation, and "complete the inductive proof", the theorist would explain how any generic moral agent with the capacity for moral growth could go through some kind of learning step (possibly experiencing flavors of emotional feedback on the way) and end up better morally calibrated at the end.
Together the two parts of the theory could explain how even a small, simple, mostly venal, mostly stupid agent with a mere scintilla of moral development, and some minimal bootstrap logic, could grow over time towards something predictably and coherently Good.
(Epistemics can start and proceed analogously... The "epistemic equivalent of synderesis" would be something like a "uniform bayesian prior" and the "epistemic equivalent of moral growth" would be something like "bayesian updating".)
Whether the overall form of the Good here is uniquely convergent for all agents is not clear.
It would probably depend at least somewhat on the details of the bootstrap logic, and the details of the starting agent, and the circumstances in which development occurs? Like... surely in epistemics you can give an agent a "cursed prior" to make it unable to update epistemically towards a real truth via only bayesian updates? (Likewise I would expect at least some bad axiological states, or environmental setups, to be possible to construct if you wanted to make a hypothetically cursed agent as a mental test of the theory.)
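Here is the epistemic version of that, as a tiny toy sketch (a made-up two-hypothesis coin model): the "spark" is a uniform prior, "growth" is bayesian updating, and a cursed prior that puts zero mass on the truth can never recover:

```python
def bayes_update(prior, likelihood, observation):
    """One step of bayesian updating over a dict of hypotheses."""
    posterior = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Toy model: is the coin fair, or biased toward heads?
likelihood = {"fair":   {"heads": 0.5, "tails": 0.5},
              "biased": {"heads": 0.9, "tails": 0.1}}

spark  = {"fair": 0.5, "biased": 0.5}   # uniform prior: minimal bootstrap
cursed = {"fair": 1.0, "biased": 0.0}   # zero mass on the truth

for _ in range(10):                     # the (biased) coin comes up heads
    spark  = bayes_update(spark, likelihood, "heads")
    cursed = bayes_update(cursed, likelihood, "heads")

print(spark)   # ~99.7% on "biased": converging on the truth
print(cursed)  # still 100% on "fair": no sequence of updates can fix it
```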
So...
The best test case I could come up with for separating out various "metaphysical and ontology issues" around your "theory of Thingness" as it relates to abstract data structures (including ultimately perhaps The Algorithm of Goodness (if such a thing even exists)) was this smaller, simpler, less morally loaded, test case...
(Sauce is figure 4 from this paper.)
Granting that the Thingness Of Most Things rests in the sort of mostly-static brute physicality of objects...
...then noticing and trying to deal with a large collection of tricky cases lurking in "representationally stable motifs that seem thinglike despite not being very Physical" that almost all have Physical Authors...
...would you say that the Lorenz Attractor (pictured above) is a Thing?
If it is a Thing, is it a thing similar to Harry Potter?
And do you think this possible-thing has zero, one, or many Authors?
If it has non-zero Authors... who are the Authors? Especially: who was the first Author?
There's a long-time contributor to LessWrong who has been studying this stuff since at least 2011 in a very mechanistic way, with lots of practical experimental data. His blog is still up, and still has circa-2011 essays like "What Trance Says About Rationality".
What I'd prefer is to have someone do data science on all that content, and find the person inside of wikipedia who is least bad, and the most good, according to my preferences and ideals, and then I'd like to donate $50 to have all their votes count twice as much in every vote for a year.
Remember the OP?
The question is "How could a large number of venal idiots attacking The Internet cause more damage than all the GDP of all the people who create and run The Internet via market mechanisms?"
I'm claiming that the core issue is that The Internet is mostly a public good, and there is no known way to turn dollars into "more or better public goods" (not yet anyway) but there are ways to ruin public goods, and then charge for access to an unruined simulacrum of a public good.
All those votes... those are a cost (and one invisible to the market, mostly). And they are only good if they reliably "generate the right answer (as judged from far away by those who wish Wikipedia took its duties as a public goods institution more seriously and coherently)".
Are you a wikipedian? Is there some way that I could find all the wikipedians and just appeal to them directly and fix the badness more simply? I like fixing things simply when simple fixes can work... :-)
(However, in my experience, most problems like this are caused by conflicts of interest, and it has seemed to me in the past that when pies are getting bigger, people are more receptive to ideas of fair and good justice, whereas when pies are getting smaller people's fallenness becomes more prominent.
I'm not saying Jimbo is still ruining things. For all I know he's not even on the board of directors of Wikipedia anymore. I haven't checked. I'm simply saying that there are clear choices that were made in the deep past that seem to have followed a logic that would naturally help his pocketbook and naturally hurt natural public interests, and these same choices seem to still be echoing all the way up to the present.)
I'm with Shankar and that meme: Stack Exchange used to be good, but isn't any more.
Regarding Wikipedia, I've had similar thoughts, but they caused me to imagine how to deeply restructure Wikipedia so that it can collect and synthesize primary sources.
Perhaps it could contain a system for "internal primary sources" where people register as such, and start offering archived testimony (which could then be cited in "purely secondary articles") similarly to the way random people hired by the NYT are trusted to offer archived testimony suitable for inclusion in current Wikipedia stuff?
This is the future. It runs on the Internet. Shall this future be democratic and flat, or full of silos and tribalism?
The thing I object to, Christian, is that "outsiders" are the people Wikipedia should properly be trying to serve, but Wikipedia (like most public institutions eventually seem to do?) seems to have become insular and weird and uninterested in changing their mission to fulfill social duties that are currently being neglected by most institutions.
Wikipedia seems, to me, from the outside (as someone it presumably is nominally "hoping to serve by summarizing all the world's trustworthy knowledge"), to not actually be very good at governance, or vetting people who can or can't lock pages, or allocating power wisely, or choosing good operating policies.
Some of it I understand. "Fandom" used to be called "Wikia" and was (maybe still is?) run by Jimbo as a terrible and ugly "for profit, ad infested" system of wikis.
He naturally would have wanted Wikipedia to have a narrow mandate so that "the rest of the psychic energy" could accumulate in his for-profit monstrosity, I think? But I don't think it served the world for this breakup and division into subfields to occur.
And, indeed, I think it would be good for Wikipedia to import all the articles across all of Fandom that it can legally import as "part of RETVRNING to inclusionism" <3
I think it would require *not just throwing money* at it, but also *actually designing sensible political institutions* to help aggregate and focus people's voluntary interest in creating valuable public goods that they (as well as everyone) can enjoy, after they are created.
For example, I would happily give Wikipedia $100 if I could have them switch to Inclusionism and end the rule of the "Deletionist" faction.
((Among other things, I think that anyone who ever runs for any elected political office, and anyone nominated or appointed by an elected official should be deemed Automatically Politically Notable on Wikipedia.
They should be allowed by Wikipedia (in a way that follows a named policy) to ADD material to their own article (to bulk it up from a stub or from non-existence), or to have at least ~25% of the text be written by themselves if the article is big, but not DELETE from their article.
Move their shit about themselves to the bottom, or into appendices, alongside the appendix of "their opinions about Star Wars (according to star wars autists)" and the appendix on "their likely percentage of neanderthal genes (according to racists)", and flag what they write about themselves as possibly interested writing by a possibly interested party, or whatever... but don't DELETE it.))
Now... clearly I cannot currently donate $100 to cause this to happen, but what if a "meta non-profit" existed that I could donate $100 to for three months (to pool with others making a similar demand), and then get the $100 back at the end of the three months if Wikipedia's rulers say no to our offer?
The pooling process, itself, could be optimized. Set up a ranked ballot over all the options with "max payment" only to "my favorite option" and then do monetary stepdowns as one moves down the ballot until you hit the natural zero.
There is some non-trivial math lurking here, in the nooks and crannies, but I know a handful of mathematicians I could probably tempt into consulting on these early challenges, and I know enough to be able to verify their proofs, even if I might not be able to generate the right proofs and theorems myself.
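To gesture at one simple version of that math (the option names, the linear stepdown rule, and the step size below are all made-up illustrative choices, and the refund logic is left off-stage):

```python
def pledge_schedule(ranked_options, max_payment, step=0.25):
    """Linear stepdown: the full pledge goes to the first choice, and
    each lower choice is pledged `step` less, floored at zero."""
    return {option: max_payment * max(0.0, 1.0 - step * rank)
            for rank, option in enumerate(ranked_options)}

# Three pooled (and refundable) donors
pledges = [
    pledge_schedule(["inclusionism", "elected-notability"], 100),
    pledge_schedule(["elected-notability", "inclusionism"], 40),
    pledge_schedule(["inclusionism"], 60),
]

pool = {}
for schedule in pledges:
    for option, amount in schedule.items():
        pool[option] = pool.get(option, 0.0) + amount
print(pool)   # -> {'inclusionism': 190.0, 'elected-notability': 115.0}
```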
If someone wants to start this non-profit with me, I'd probably be willing to serve in exchange for a permanent seat on the board of directors, and I'd be willing to serve as the initial Chief Operating Officer for very little money (and for only a handshake agreement from the rest of the board that I'll get back pay contingent on success after we raise money to financially stabilize things).
The really hard part is finding a good CEO. Such roles require a very stable genius (possibly with a short tenure, and a strong succession planning game), because they kinda drive people crazy by default, from what I've seen.
I don't know the answer to how much cybercrime is really costing, but I think your economic analysis is not accurately tracking "what GDP means".
Arms-length financial transactions of "money points for services or goods" operate on the basis of scarcity, monopoly pricing power, and other power concerns that are locally legible inside of bilateral exchanges between reasonable agents.
GDP does not track the "reserve price" of consumers of computational services, where conditional on a computing service hypothetically being monopolistically priced, the person would hypothetically pay a LOT for that service.
Various surveys and a bit of logic suggest that people would hypothetically pay thousands or in many cases even tens of thousands of dollars for access to the internet even though the real cost is much much less.
By contrast, GDP just measures the "true scarcity... and lawful evil induced scarcity" part of the economy (mushed together and swirled around, so the DMCA makes hacking printer ink cartridges full of producer-added malware illegal, rather than subsidizing such heroic hacking work, as would occur under benevolent governance, and so on).
Linus Torvalds is probably owed a "debt of gratitude", by Earth, on the order of many billions, and possibly trillions, but he gave away Linux and has never been paid anything like that amount, and so the value he created and gave away does not show up in GDP. (Not just him, there's a whole constellation of rarely sung heroes and moderately happy dudes who were part of a hobbyist ecosystem that created the modern digital world between 1970 and 2010 and gave it away for free).
On a deeper level, the inability to measure or encourage the "post-scarcity" or "public goods" part of the human "economy" (if you can even call it an "economy" when it doesn't run on bilateral arms-length self-interested deals) is part of why such goods are underproduced by default, in general, and have been underproduced for all of human history.
Within this frame, it seems very plausible that the computational consumer surplus that cybercriminals attack is worth huge amounts of money to protect, even though it was acquired very cheaply from people like Linus.
Presumably humans are not yet in "private scarcity-based equilibrium" with the economics of computation processes?
In the long run it might be reasonable to expect the "a la carte computer security situation" (where every technical system becomes a game of whack-a-mole fighting many very specific ways to ruin everything in the computational commons) to devolve until most uses of most computer processes have almost no consumer surplus, because the costs of paying for a la carte help with computer security almost perfectly balances against the consumer surplus from using "essentially free compute".
This would not happen if good computer security practices arise that can somehow preserve the existing (and probably massive) consumer surplus around computers such that "using the internet and computers in general in a safe way is very cheap because computer security itself is easy to get right and spread around as a public good with nearly no marginal cost".
Like... hypothetically the government could make baseline "secure and super valuable" computing systems.
But it doesn't.
A private ad-based surveillance and propaganda corporation "solved search and created lots of billionaires" NOT the library of congress.
The NSA tries to make sure that most consumer hardware and software is insecure so that the <0.5% of consumer buyers that happen to be mobsters or terrorists can be spied on, rather than putting out open source defensive software for everyone.
People like Aaron Swartz and Moxie did, mostly for free, the things that a benevolent government would do if a benevolent government existed.
But no actively benevolent governments exist.
In Anathem, Neal Stephenson (who is very smart, in a very fun way) posits a giant science inquisition that prevents technological advancement (leading to AGI or nukes or bioweapons or what have you) and lets humanity "experience the current tech scale" for thousands of years with instabilities factored out and only locally stable cultural loops retained...
...in that world it is just taken for granted that 99.999% of the internet is full of auto-generated lies called "bogons" that are put out by computer security companies so as to force consumers to pay monthly subscriptions for expensive bogon filtering software that makes their handheld jeejahs only really good for talking with close personal friends or business associates. It is just normal to them, for the internet to exist and be worthless, like it is normal to us for lies in ads and on the news to be the default.
Anathem's future contains no Wikipedia, because Wikipedia is like Linux: insanely valuable, yet not scarce, with very few dollars directed to it in ways that ensure (1) it isn't hacked from the outside and (2) the leadership doesn't ruin it for personal or ideological profit from the inside.
Anathem offers us a bleak "impossible possible future" but not the bleakest.
Things probably won't happen that way because that exact way of stabilizing human civilization is unlikely, but Anathem honestly grapples with the broader issue where information services are (1) insanely valuable and (2) also nearly impossible for the market to properly price.
I'm not sure about the rest of it, but this caught my eye:
if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths.
I had a similar thought, and was trying to figure out if I could find a single good person to formally and efficiently coordinate with in a non-trivial pre-existing institution full of "safely good and sane people".
I'm still searching. If anyone has a solid lead on this, please DM me, maybe?
Something you might expect is that many such "hypothetically existing hypothetically good people" would be willing to die slightly earlier for a good enough cause (especially late in life when their life expectancy is low, and especially for very high stakes issues where a lot of leverage is possible) but they wouldn't waste lives, because waste is ceteris paribus bad, and so... so... what about martyrs who are also leaders?
This line of thinking is how I learned about Martin The Confessor, the last Pope to ever die for his beliefs.
Since 655 AD is much much earlier than 2024 AD, it would seem that Catholicism no longer "has the sauce" so to speak?
Also, slightly relatedly, I'm more glad than I otherwise might be that in this timeline the bullet missed Trump. In other very nearby timelines I'm pretty sure the whole idea of using physical courage to detect morally good leadership in a morally good group would be much more controversial than the principle is here, now, in this timeline, where no one has trapped priors about it that are being actively pumped full of energy by the media, with the creation of new social traumas, and so on...
...not that elected secular leaders of mere nation states would have any obvious formal duties to specifically be the person to benevolently serve literally all good beings as a focal point.
To get that formula to basically work, in the way that it kinda seems to work with US elections (since many US Presidents have been assassinated in ways they could probably have predicted were possible), modulo this currently only working within the intrinsically "partial" nature of US elections (since these are merely elections for the leader of a single nation state that faces many other hostile nation states in a Hobbesian world of eternal war (at least eternal war... so far!)), I think one might need to hold global elections?
And... But... And this... this seems sorta do-able?!? Weirdly so!
We have the internet now. We have translation software to translate all the political statements into all the languages. We have internet money that could be used to donate to something that was worth donating to.
Why not create a "United Persons Alliance" (to play the "House of Representatives" to the UN's "Senate"?) and find out what the UPA's "Donation Weighted Condorcet Prime Minister" has to say?
I kinda can't figure out why no one has tried it yet.
Maybe it is because, logically speaking, moral realism MIGHT be true and also maybe all humans are objectively bad?
If a lot of people knew for sure that "moral realism is true but humans are universally fallen" then it might explain why we almost never "produce and maintain legibly just institutions".
Under the premises entertained here so far, IF such institutions were attempted anyway, and the attempt had security holes, THEN those security holes would be predictably abused and it would be predictably regretted by anyone who spent money setting it up, or trusted such a thing.
So maybe it is just that "moral realism is true, humans are bad, and designing secure systems is hard and humans are also smart enough to never try to summon a real justice system"?
Maybe.
I appreciate your desire for this clarity, but I think the counter argument might actually just be "the oversimplifying assumption that everyone's labor just ontologically goes on existing is only true if society (and/or laws and/or voters-or-strongmen) make it true on purpose (which they tended to do, for historically contingent reasons, in some parts of Earth, for humans, and some pets, between the late 1700s and now)".
You could ask: why is the Holocene extinction occurring when Ricardo's Law of Comparative Advantage says that woolly mammoths (and many amphibian species) and cave men could have traded...
...but once you put it that way, it is clear that it really kinda was NOT in the narrow short term interests of cave men to pay the costs inherent in respecting the right to life and right to property of beasts that can't reason about natural law.
Turning land away from use by amphibians and towards agriculture was just... good for humans and bad for frogs. So we did it. Simple as.
The math of ecology says: life eats life, and every species goes extinct eventually. The math of economics says: the richer you are, the more you can afford to be linearly risk tolerant (which is sort of the definition of prudent sanity) for larger and larger choices, and the faster you'll get richer than everyone else, and so there's probably "one big rich entity" at the end of economic history.
Once humans close their heart to other humans and "just stop counting those humans over there as having interests worth calculating about at all" it really does seem plausible that genocide is simply "what many humans would choose to do, given those (evil) values".
Slavery is legal in the US, after all. And the CCP has Uighur Gulags. And my understanding is that Darfur is headed for famine?
I think this is sort of the "ecologically economic core" of Eliezer's position: kindness is simply not a globally instrumentally convergent tactic across all possible ecological and economic regimes... right now quite a few humans want there to not be genocide and slavery of other humans, but if history goes in a sad way in the next ~100 years, there's a decent chance the other kind of human (the ones that quite like the long term effects of the genocide and/or enslavement of other sapient beings) will eventually get their way and genocide a bunch of other humans.
If all of modern morality is a local optimum that is probably not the global optimum, then you might look out at the larger world and try and figure out what naturally occurs when the powerful do as they will, and the weak cope as they can...
Once the billionaires like Putin and Xi and Trump and so on don't need human employees any more, it seems plausible they could aim for a global Earth population of humans of maybe 20,000 people, plus lots and lots of robot slaves?
It seems quite beautiful and nice to be here, now, with so many people having so many dreams, and so many of us caring about caring about other sapient beings... but unless we purposefully act to retain this moral shape, in ourselves and in our digital and human progeny, we (and they) will probably fall out of this shape in the long run.
And that would be sad. For quite a few philosophic reasons, and also for over 7 billion human reasons.
And personally, I think the only way to "keep the party going" even for a few more centuries or millennia is to become extremely wealthy.
I think we should be mining asteroids, and building fusion plants, and building new continents out of ice, and terraforming Venus and Mars, and I think we should build digital people who know how precious and rare humane values are, so they can enjoy the party with us, and keep it going for longer than we could plausibly hope to (since we tend to be pretty terrible at governing ourselves).
But we shouldn't believe good outcomes are inevitable or even likely, because they aren't. If something slightly smarter than us with a feasible doubling time of weeks instead of decades arrives, we could be the next frogs.
This writeup is great. Very simple. Beat by beat. Motion by motion. The character of the writing makes me feel like anything was possible, and history was a series of accidents, which I think is a "true feeling" about history.
I kind of love how this post is very very narrow, and very very specific, and about a topic that everyone was mind-killed on in the late aughties, but which very few people are mind-killed on in modern times.
It feels like a calibration exercise!
(Also, I wrote a LOT of words on related issues, and what I think this might be a calibration exercise for ...that I've edited out since it was a big and important topic, and would have taken a long time to edit into something usefully readable.)
It is safe and easy to say: I appreciate the scholarship and care that was taken to figure things out here, and to highlight how rare it is for people to understand the specific subquestion, and not conflate subquestions with larger nearby issues, and (without doing any original research or even clicking through to read most of the links) I find the conclusion and confidence level reasonably convincing.
On mechanistic psychology priors (given that no smoking guns were found here) the thing I would expect is that Hitchens spent some time thinking that waterboarding wasn't really brutal or terrible torture that should be illegal... (maybe he published something that is hard to find now and felt guilt about that, or maybe he just had private opinions) and then he probably did some research on it and at some point changed his mind in private, and then he might have tried to experience it as a way of creating credibility using a story that would echo in history?
That is, I suspect the direct personal experience didn't cause the update.
I suspect he intellectually suspected what was probably true, and then gathered personally expensive evidence that confirmed his intellectual suspicions for the sake of how the evidence gathering method would play in stories about his take on the topic.
I read your gnostic/pagan stuff and chuckled over the "degeneracy [ranking where] Paganism < ... < Gnosticism < Atheism < Buddhism".
I think I'll be better able to steelman you in the future and I'm sorry if I caused you to feel misrepresented with my previous attempt. I hadn't realized that the vibe you're trying to serve is so Nietzschean.
Just to clarify, when you say "pathetic" it is not intended to evoke "pathos" and function as an even hypothetically possible compliment regarding a wise and pleasant deployment of feelings (even subtle feelings) in accord with reason, that could be unified and balanced to easily and pleasantly guide persons into actions in accord with The Good after thoughtful cultivation...
...but rather I suspect you intended it as a near semantic neighbor (but with opposite moral valence) of something like "precious" (as an insult (as it is in some idiolects)) in that both "precious and pathetic things" are similarly weak and small and in need of help.
Like the central thing you're trying to communicate with the word "pathetic" (I think, but am not sure, and hence I'm seeking clarification) is to notice that entities labeled with that adjective could hypothetically be beloved and cared for... but you want to highlight how such things are also sort of worthy of contempt and might deserve abandonment.
We could argue: Such things are puny. They will not be good allies. They are not good role models. They won't autonomously grow. They lack the power to even access whole regimes of coherently possible data gathering loops. They "will not win" and so, if you're seeking "systematized winning", such "pathetic" things are not where you should look. Is this something like what you're trying to point to by invoking "patheticness" so centrally in a discussion of "solving philosophy formally"?
I think of "the rationalist project" as "having succeeded" in a very limited and relative sense that is still quite valuable.
For example, back when the US and Chinese governments managed to accidentally make a half-cocked bioweapon and let it escape from a lab and then not do any adequate public health at all, or hold the people who caused the megadeath event even slightly accountable, and all of the institutions of basically every civilization on Earth failed to do their fucking jobs, the "rationalists" (ie the people on LW and so on) were neck and neck with anonymous anime catgirls on twitter (who overlap a lot with rationalists in practice) in terms of being actually sane and reasonable voices in the chaos... and it turns out that having some sane and reasonable voices is useful!
Eliezer says "Rationalists should win" but Yvain said "it's really not that great" and Yvain got more upvotes (90 vs 247 currently) so Yvain is prolly right, right? But either way it means rationality is probably at least a little bit great <3
So Newsom would control 4 out of 8 of the votes, until this election occurs?
I wonder what his policies are? :thinking:
(Among the Presidential candidates, I liked RFK's position best. When asked, off the top of his head, he jumps right into extinction risks, totalitarian control of society, and the need for international treaties for AI and bioweapons. I really love how he lumps "bioweapons and AI" as a natural category. It is a natural category.
But RFK dropped out, and even if he hadn't dropped out it was pretty clear that he had no chance of winning because most US voters seem to think being a hilariously awesome weirdo is bad, and it is somehow so bad that "everyone dying because AI killed us" is like... somehow more important than that badness? (Obviously I'm being facetious. US voters don't seem to think. They scrupulously avoid seeming that way because only weirdos "seem to think".))
I'm guessing the expiration date on the law isn't in there at all, because cynicism predicts that nothing like it would be in there, because that's not how large corrupt bureaucracies work.
(/me wonders aloud if she should stop calling large old bureaucracies corrupt-by-default in order to start sucking up to Newsom as part of a larger scheme to get onto that board somehow... but prolly not, right? I think my comparative advantage is probably "being performatively autistic in public" which is usually incompatible with acquiring or wielding democratic political power.)
If I was going to steelman Mr Tailcalled, I'd imagine that he was trying to "point at the reason" that transfer learning is far and away the exception.
Mostly learning (whether in humans, beasts, or software) happens relative to a highly specific domain of focus, and getting 99.8% accuracy in the domain, and making a profit therein... doesn't really generalize. I can't run a hedge fund after mastering the hula hoop, and I can't win a boxing match from learning to recognize real and forged paintings. NONE of these skills would be much help in climbing a 200 foot tall redwood tree with my bare hands and bare feet... and mastering the Navajo language is yet again "mostly unrelated" to any of them. The challenges we agents seem to face in the world are "one damn thing after another".
(Arguing against this steelman, the exception here might be "next token prediction". Mastering next token prediction seems to grant the power to play Minecraft through APIs, win art contests, prove math theorems, and drive theologically confused people into psychosis. However, consistent with the steelman, next token prediction hasn't seemed to offer any help at fabbing smaller and faster and more efficient computer chips. If next token prediction somehow starts to make chip fabbing go much faster, then hold onto your butts.)
This caught my eye:
But, the goal of this phase, is to establish "hey, we have dangerous AI, and we don't yet have the ability to reasonably demonstrate we can render it non-dangerous", and stop development of AI until companies reasonably figure out some plans that at _least_ make enough sense to government officials.
I think I very strongly expect corruption-by-default in the long run?
Also, since the government of California is a "long run bureaucracy" already I naively expect it to appoint "corrupt by default" people unless this is explicitly prevented in the text of the law somehow.
Like maybe there could be a proportionally representative election (or sortition?) over a mixture of the (1) people who care (artists and luddites and so on) and (2) people who know (ML engineers and CS PhDs and so on) and (3) people who are wise about conflicts (judges and DAs and SEC people and divorce lawyers and so on).
I haven't read the bill in its modern current form. Do you know if it explains a reliable method to make sure that "the actual government officials who make the judgement call" will exist via methods that make it highly likely that they will be honest and prudent about what is actually dangerous when the chips are down and cards turned over, or not?
Also, is there an expiration date?
Like... if California's bureaucracy still (1) is needed and (2) exists... by the time 2048 rolls around (a mere 24 years from now (which is inside the life expectancy of most people, and inside the career planning horizon of everyone smart who is in college right now)) then I would be very very very surprised.
By 2048 I expect (1) California (and maybe humans) to not exist, or else (2) for a pause to have happened and, in that case, a subnational territory isn't the right level for a Pause Maintenance Institution to draw authority from, or else (3) I expect doomer premises to be deeply falsified based on future technical work related to "inevitably convergent computational/evolutionary morality" (or some other galaxy brained weirdness).
Either we are dead by then, or wrong about whether superintelligence was even possible, or we managed to globally ban AGI in general, or something.
So it seems like it would be very reasonable to simply say that in 2048 the entire thing has to be disbanded, and a brand new thing started up with all new people, to have some OTHER way to break the "naturally but sadly arising" dynamics of careerist political corruption.
I'm not personally attached to 2048 specifically, but I think some "expiration date" that is farther in the future than 6 years, and also within the lifetime of most of the people participating in the process, would be good.
Nope! They named her after me.
</joke>
Alright! I'm going to try to stick to "biology flavored responses" and "big picture stuff" here, maybe? And see if something conversational happens? <3
(I attempted several responses in the last few days and each sketch turned into a sprawling mess that became a "parallel comment". Links and summaries at the bottom.)
The thing that I think unifies these two attempts at comments is a strong hunch that "human language itself is on the borderland of being anti-epistemic".
Like... like I think humans evolved. I think we are animals. I think we individually grope towards learning the language around us and always fail. We never "get to 100%". I think we're facing a "streams of invective" situation by default.
Don: “Up until the age of 25, I believed that ‘invective’ was a synonym for 'urine’.”
BBC: “Why ever would you have thought that?”
Don: “During my childhood, I read many of the Edgar Rice Burroughs 'Tarzan’ stories, and in those books, whenever a lion wandered into a clearing, the monkeys would leap into the trees and 'cast streams of invective upon the lion’s head.’”
BBC: long pause “But, surely sir, you now know the meaning of the word.”
Don: “Yes, but I do wonder under what other misapprehensions I continue to labour.”
I think prairie dogs have some kind of chord-based chirp system that works like human natural language noun phrases do because noun-phrases are convergently useful. And they are flexible-and-learned enough for them to have regional dialects.
I think elephants have personal names to help them manage moral issues and bad-actor-detection that arise in their fission-fusion social systems, roughly as humans do, because personal names are convergently useful for managing reputation and tracking loyalty stuff in very high K family systems.
I think humans evolved under Malthusian conditions and that there's lots of cannibalism in our history and that we use social instincts to manage groups that manage food shortages (who semi-reliably go to war when hungry). If you're not tracking such latent conflict somehow then you're missing something big.
I think human languages evolve ON TOP of human speech capacities, and I follow McWhorter in thinking that some languages are objectively easy (because of being learned by many as a second language (for trade or slavery or due to migration away from the horrors of history or whatever)) and others are objectively hard (because of isolation and due to languages naturally becoming more difficult over time, after a disruption-caused-simplification).
Like it isn't just that we never 100% learn our own language. It is also that adults make up new stuff a lot, and it catches on, and it becomes default, and the accretion of innovation only stabilizes when humans hit their teens and refuse to learn "the new and/or weird shit" of "the older generation".
Maybe there can be language super-geniuses who can learn "all the languages" very easily and fast, but languages are defined, in a deep sense, by a sort of "20th percentile of linguistic competence performance" among people who everyone wants to be understood by.
And the 20th percentile "ain't got the time" to learn 100% of their OWN language.
But also: the 90th percentile is not that much better! There's a ground floor where human beings who can't speak "aren't actually people" and they're weeded out, just like the fetuses with 5 or 3 heart chambers are weeded out, and the humans who'd grow to be 2 feet tall or 12 feet tall die pretty fast, and so on.
On the "language instincts" question, I think: probably yes? If Neanderthals spoke, it was probably with a very high pitch, but they had Sapiens-like FOXP2 I think? But even in modern times there are probably non-zero alleles to help recognize tones in regions where tonal languages are common.
Tracking McWhorter again, there are quite a few languages spoken in mountain villages or tiny islands with maybe 500 speakers (and the village IQ is going to be pretty stable, and outliers don't matter much), where children simply can't speak properly until they are maybe 12.
(This isn't something McWhorter talks about at all, but usually puberty kicks in, and teens refuse to learn any more arbitrary bullshit... but also accents tend to freeze around age 12 (especially in boys, maybe?) which might have something to do with shibboleths and "immutable sides" in tribal wars?)
Those languages where 11 year olds are just barely fluent are at the limit of isolated learnable complexity.
For an example of a seriously tricky language, my understanding (not something I can cite, just gossip from having friends in Northern Wisconsin and a Chippewa chromosome or two) is that in Anishinaabemowin they are kinda maybe giving up on retaining all the conjugations and irregularities that only show up very much in philosophic or theological or political discussions by adults, even as they do their best to retain as much as they can in tribal schools that also use English (for economic rather than cultural reasons)?
So there are still Ojibwe grandparents who can "talk fancy", but the language might be simplifying because it somewhat overshot the limits of modern learnability!
Then there's languages like nearly all the famous ones including English, where almost everyone masters it by age 7 or 8 or maybe 9 for Russian (which is "one of the famous ones" that might have kept more of the "weird decorative shit" that presumably existed in Indo-European)?
...and we kinda know which features in these "easy well known languages" are hard based on which features become "nearly universal" last. For example, rhotics arrive late for many kids in America (with quite a few kindergartners missing an "R" that the teacher talks to their parents about, and maybe they go to speech therapy) but which are also just missing in many dialects, like the classic accents of Boston, New York City, and London... because "curling your tongue back for that R sound" is just kinda objectively difficult.
In my comment laying out a hypothetical language like "Lyapunese", all the reasons that it would never be a real language don't relate to philosophy, or ethics, or ontics, or epistemology, but to language pragmatics. Chaos theory is important, and not in language, and it's the fault of humans having short lives (and being generally shit at math because of nearly zero selective pressure on being good at it), I think?
In my comment talking about the layers and layers of difficulty in trying (and failing!) to invent modal auxiliary verbs for all the moods one finds in Nenets, I personally felt like I was running up against the wall of my own ability to learn enough about "those objects over there (ie weird mood stuff in other languages and even weird mood stuff in my own)" to grok the things they took for granted enough to go meta on each thing and become able to wield them as familiar tools that I could put onto some kind of proper formal (mathematical) footing. I suspect that if it were easy for an adult to learn that stuff, the language itself would have gotten more complex, and for this reason the task was hard in the way that finding mispricings in a market is hard.
Humans simply aren't that smart, when it comes to serial thinking. Almost all of our intelligence is cached.
"During covid" I got really interested in language, and was thinking of making a conlang.
It would be an intentional pidgin (and so very very simple in some sense) that was on the verge of creolizing, but which would have small simple words with clear definitions that could be used to "ungrammaticalize" everything that had been grammaticalized in some existing human language...
...this project to "lexicalize"-all-the-grammar(!) defeated me.
I want to ramble at length about my defeat! <3
The language or system I was trying to wrap my head around would be kind of like Ithkuil, except, like... hopefully actually usable by real humans?
But the rabbit-hole-problems here are rampant. There are so many ideas here. It is so easy to get bad data and be confused about it. Here is a story of being pleasantly confused over and over...
TABLE OF CONTENTS:
I. Digression Into A Search For A Periodic Table Of "Grammar"
I.A. Grammar Is Hard, Let's Just Be Moody As A Practice Run
I.A.1. Digression Into Frege's Exploration Of ONLY The Indicative Mood
I.A.2. Commentary on Frege, Seeking Extensions To The Interrogative Moods
I.A.2.a. Seeking briefly to sketch better evidentiality markers in a hypothetical language (and maybe suggesting methods thereby)
I.A.2.a.i. Procedural commentary on evidentiality, concomitant to the challenges of understanding the interrogative mood.
I.B.1. Trying To Handle A Simple Case: Moods In Diving Handsigns
I.B.1.a Diving Handsigns Have Pragmatically Weird Mood (Because Avoiding Drowning Is The Most Important Thing) But They Are Simple (Because It Is For Hobbyists With Shit In Their Mouth)
I.B.2. Trying To Find The Best Framework For Mood Leads To... Nenets?
I.B.2.a. But Nenets Is Big, And Time Was Short, And Kripke Is Always Dogging Me, And I'm A Pragmatist At Heart
I.B.2.b. Frege Dogs Me Less But Still... Really?
II. It Is As If Each Real Natural Language Is Almost Anti-Epistemic And So Languages Collectively ARE Anti-Epistemic?
...
I. Digression Into A Search For A Periodic Table Of "Grammar"
I feel like a lot of people eventually convergently aspire to what I wanted. Like they want a "Master list of tense, aspect, mood, and voice across languages?"
That reddit post, that I found while writing this, was written maybe a year after I tried to whip one of these up just for mood in a month or three of "work to distract me from the collapse of civilization during covid"... and failed!
((I mean... I probably did succeed at distracting myself from the collapse of civilization during covid, but I did NOT succeed at "inventing the omnilang semantic codepoint set". No such codepoints are on my harddrive, so I'm pretty sure I failed. The overarching plan that I expected to take a REALLY long time was to have modular control of semantics, isolating grammars, and phonology all working orthogonally, so I could eventually generate an infinite family of highly regular conlangs at will, just from descriptions of how they should work.))
So a first and hopefully simplest thing I was planning on building, was a sort of periodic table of "mood".
Just mood... I could do the rest later... and yet even this "small simplest thing" defeated me!
(Also note that the most centrally obvious overarching thing would be to do a TAME system with Tense, Aspect, Mood, and Evidentiality. I don't think Voice is that complicated... Probably? But maybe that redditor knows something I don't?)
I.A. Grammar Is Hard, Let's Just Be Moody As A Practice Run
Part of the problem I ran into with this smaller question is: "what the fuck even is a mood??"
Like in terms of its "total meaning" what even are these things? What are their beginnings and ends? How are they bounded?
Like if we're going to be able, as "analytic philosophers of language", to form a logically coherent theory of natural human language pragmatics and semantics that enables translation from any natural utterance by any human into some sort of formally designed (not just a pile of matrices) Characteristica Universalis... what does that look like?
In modern English grammar we basically only have two moods in our verb marking grammar: the imperative and the indicative (and maybe the interrogative mood, but that mostly just happens in the word order)...
(...old European linguists seemed to have sometimes thought "real grammar" was just happening in the verbs, where you'd sometimes find them saying, of a wickedly complex language, that "it doesn't even have grammar" because it didn't have wickedly complex verb conjugation.)
And in modern English we also have the modal auxiliary verbs that (depending on where you want to draw certain lines) include: can, could, may, might, must, shall, should, will, would, and ought!
Also sometimes there are some small phrases which do similar work but don't operate grammatically the same way.
(According to Wikipedia-right-now Mandarin Chinese has a full proper modal auxiliary verb for "daring to do something"! Which is so cool! And I'm not gonna mention it again in this whole comment, because I'm telling a story about a failure, and "dare" isn't part of the story! Except like: avoiding rabbit holes like these is key to making any progress, and yet if you don't explore them all you probably will never get a comprehensive understanding, and that's the overall tension that this sprawling comment is trying to illustrate.)
In modern English analytic philosophy we also invented "modal" logic, which is about "possibility" and "necessity". And this innovation in symbolic logic might successfully formally capture "can" and "must" (which are "modal auxiliary verbs")... but it has almost nothing to do with the interrogative mood. Right? I think?
In modern English, we have BOTH an actual grammatical imperative mood with verb-changes-and-everything, but we also have modal auxiliary verbs like "should" (and the archaic "may").
Is the change in verb conjugation for imperative, right next to "should" and "may" pointless duplication... or not? Does it mean essentially the same thing to say "Sit down!" vs "You should sit down!" ...or not?
Consider lots of sentences like "He can run", "He could run", "He may run", etc.
But then notice that "He can running", "He could running", "He may running" all sound wrong (but "he can be running", "he could be running", and "he may be running" restore the sound of real English).
This suggests that "-ing" and "should" are somewhat incompatible... but not 100%? When I hear "he should be running" it is a grammatical statement that can't be true if "he" is known to the speaker to be running right now.
The speaker must not know for the sentence to work!
Our hypothetical shared English-parsing LAD (Language Acquisition Device) subsystems, which hypothetically generate the subjective sense of "what sounds right and wrong as speech", think that active present things are slightly structurally incompatible with whatever modal auxiliary verbs are doing, in general, with some kind of epistemic mediation!
But why LAD? Why?!?!
Wikipedia says of the modal verbs:
Modal verbs generally accompany the base (infinitive) form of another verb having semantic content.
With "semantics" (on the next Wikipedia page) defined as:
Semantics is the study of linguistic meaning. It examines what meaning is, how words get their meaning, and how the meaning of a complex expression depends on its parts. Part of this process involves the distinction between sense and reference. Sense is given by the ideas and concepts associated with an expression while reference is the object to which an expression points. Semantics contrasts with syntax, which studies the rules that dictate how to create grammatically correct sentences, and pragmatics, which investigates how people use language in communication.
So like... it kinda seems like the existing philosophic and pedagogical frameworks here can barely wrap their heads around "the pragmatics of semantics" or "the semantics of pragmatics" or "the referential content of an imperative sentence as a whole" or any of this sort of thing.
Maybe linguists and ESL teachers and polyglots have ALL given up on the "what does this mean and what's going on in our heads" questions...
...but then the philosophers (to whom this challenge should naturally fall) don't even have a good clean answer for THE ONE EASIEST MOOD!!! (At least not to my knowledge right now.)
I.A.1. Digression Into Frege's Exploration Of ONLY The Indicative Mood
Frege attacked this stuff kinda from scratch (proximate to his invention of kinda the entire concept of symbolic logic in general) in a paper "Ueber Sinn und Bedeutung" which has spawned SO SO MANY people who start by explaining what Frege said, and then explaining other philosopher's takes on it, and then often humbly sneaking in their own take within this large confusing conversation.
For example, consider Kevin C. Klement's book, "Frege And The Logic Of Sense And Reference".
Anyway, the point of bringing up Frege is that he had a sort of three layer system, where utterable sentences in the indicative mood had connotative and denotative layers, and the denotative layer had two sublayers. (Connotation is thrown out to be treated later... and then never really returned to.)
Each part of speech (but also each sentence (which makes more sense given that a sentence CAN BE a subphrase within a larger sentence)) could be analyzed for its denotation in terms of the two things (senses and references) from the title of the paper.
All speechstuff might have "reference" (what it points to in the extended external context that exists) and a "sense" (the conceptual machinery reliably evoked, in a shared way, in the minds of all capable interpreters of a sentence by each part of the speechstuff, such that this speechstuff could cause the mind to find the thing that was referred to).
"DOG" then has a reference to all the dogs and/or doglike things out there such such that the word "DOG" can be used to "de re refer" to what "DOG" obviously can be used to refer to "out there".
Then, "DOG" might also have a sense of whatever internal conceptual machinery "DOG" evokes in a mind to be able to perform that linkage. In so maybe "DOG" also "de dicto refers" to this "sense of what dogs are in people's minds"???
Then, roughly, Frege proposed that a sentence collects up all the senses in the individual words and mixes them together.
This OVERALL COMBINED "sense of the sentence" (a concept machine for finding stuff in reality) would be naturally related to the overall collection of all the senses of all of the parts of speech. And studying how the senses of words linked into the sense of the sentence was what "symbolic logic" was supposed to be a clean externalized theoretical mirror of.
Once we have a complete concept machine mentally loaded up as "the sense of the sentence" this concept machine could be used to examine the world (or the world model, or whatever) to see if there is a match.
The parts of speech have easy references. "DOG" extends to "the set of all the dogs out there" and "BROWN" extends to "all the brown things out there" and "BROWN DOG" is just the intersection of these sets. Easy peasy!
Then perhaps (given that we're trying to push "sense" and "reference" as far as we can to keep the whole system parsimonious as a theory for how indicative sentences work) we could say "the ENTIRE sentence refers to Truth" (and, contrariwise, NO match between the world and the sense of the sentence means "the sentence refers to Falsehood").
That is, to Frege, depending on how you read him, "all true sentences refer to the category of Truth itself".
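((If it helps to make the galaxy-brain concrete: here is a toy sketch in Python, entirely my own invention rather than Frege's actual formalism, of word-senses as little concept machines whose composed sentence-sense "refers to Truth" when the world contains a match.

```python
# A toy sketch of the picture above (my own invention, NOT Frege's formalism):
# word-senses are little concept machines (predicates), the sentence-sense is
# their composition, and the reference of the whole sentence is Truth/Falsehood.

world = [
    {"name": "Rex",  "kinds": {"dog"}, "colors": {"brown"}},
    {"name": "Milo", "kinds": {"cat"}, "colors": {"black"}},
]

def sense_DOG(thing):   return "dog" in thing["kinds"]
def sense_BROWN(thing): return "brown" in thing["colors"]

# References of the parts: the sets they extend to "out there".
dogs  = {t["name"] for t in world if sense_DOG(t)}
brown = {t["name"] for t in world if sense_BROWN(t)}
print(dogs & brown)  # "BROWN DOG" extends to the intersection: {'Rex'}

# The combined sense of "some brown dog exists" examines the world for a
# match; its reference is Truth iff a match is found.
sentence_sense = lambda w: any(sense_DOG(t) and sense_BROWN(t) for t in w)
print(sentence_sense(world))  # True -- the sentence "refers to Truth"
```

))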
Aside from the fact that this is so galaxy-brained and abstract that it is probably a pile of bullshit... a separate problem arises in that... it is hard to directly say much here about "the imperative mood"!
Maybe it has something to say about the interrogative mood?
I.A.2. Commentary on Frege, Seeking Extensions To The Interrogative Moods
Maybe when you ask a question, pragmatically, it is just "the indicative mood but as a two player game instead of a one player game"?
Maybe uttering a sentence in the interrogative mood is a way for "player one" to offer "a sense" to "player two" without implying that they know how the sense refers (to Truth or Falsehood or whatever).
They might be sort of "cooperatively hoping for" player two to take "the de dicto reference to the sense of the utterance of player one" and check that sense (which player one "referred to"?) against player two's own distinct world model (which would be valuable if player two has better mapped some parts of the actual world than player one has)?
If player two answers the question accurately, then the combined effect for both of them is kind of like what Frege suggests is occurring in a single lonely mind when that mind reads and understands the indicative form of "the same sentence" and decides that it is true based on comparing it to memory and so on. Maybe?
Except the first mind, who hears an answer to a question, still hasn't grounded things directly to the actual observables or their own memories or whatever. It isn't literally mentally identical.
If player one "learned something" from hearing a question answered (and player one is human rather than a sapient AI), it might, neurologically, be wildly distinct from "learning something" by direct experience!
Now... there's something to be said for this concern already being grammaticalized (at least in other languages) in the form of "evidentiality", such that interrogative moods and evidential markers should "unify somehow".
Hypothetically, evidential marking could show up as a sentence final particle, but I think in practice it almost always shows up as a marker on verbs.
And then, if we were coming at this from the perspective of AI, and having a stable and adequate language for talking to AI, a sad thing is that the evidentiality markers are almost always based on folk psychology, not on the real way that actual memories work in a neo-modern civilization running on top of neurologically baseline humans with access to the internet :-(
I.A.2.a. Seeking briefly to sketch better evidentiality markers in a hypothetical language (and maybe suggesting methods thereby)
I went to Wikipedia's Memory Category and took all the articles that had a title in the form of "<adjective phrase> <noun>" where <noun> was "memory" or "memories".
ONLY ONE was plural! And so I report that here as the "weird example": Traumatic memories.
Hypothetically then, we could have a language where everyone was obliged to mark all main verbs as being based on "traumatic" vs "non-traumatic" memories?
((So far as I'm aware, there's no language on earth that is obliged to mark whether a verb in a statement is backed by memories that are traumatic or not.))
Scanning over all the Wikipedia articles I can find here (that we might hypothetically want to mark as an important distinction) in verbs and/or sentences, the adjectives that can modify a "memory" article are (alphabetically): Adaptive, Associative, Autobiographical, Childhood, Collective, Context-dependent, Cultural, Destination, Echoic, Eidetic, Episodic, Episodic-like, Exosomatic, Explicit, External, Eyewitness, Eyewitness (child), Flashbulb, Folk, Genetic, Haptic, Iconic, Implicit, Incidental, Institutional, Intermediate-term, Involuntary, Long-term, Meta, Mood-dependent, Muscle, Music-evoked autobiographical, Music-related, National, Olfactory, Organic, Overgeneral autobiographical, Personal-event, Plant, Prenatal, Procedural, Prospective, Recognition, Reconstructive, Retrospective, Semantic, Sensory, Short-term, Sparse distributed, Spatial, Time-based prospective, Transactive, and Transsaccadic.
In the above sentence, I said roughly
"The adjectives that can modify a 'memory' article are (alphabetically): <list>"
The main verb of that sentence is technically "are" but "modify" is also salient, and already was sorta-conjugated into the "can-modify" form.
Hypothetically (if speaking a language where evidentiality must be marked, and imagining marking it with all the features that could work differently in various forms of memory) I could mark the entire sentence I just uttered in terms of my evidence for the sentence itself!
I believe that sentence itself was probably:
+ Institutional (via "Wikipedia") and
+ Context Dependent (I'll forget it once my reading-and-processing of Wikipedia falls out of my working memory) and
+ Cultural (based on the culture of english-speaking wikipedians) and
+ Exosomatic (I couldn't have spoken the sentence aloud with my mouth without intense efforts of memorization, but I could easily compose the sentence in writing with a text editor), and
+ Explicit (in words, not not-in-words), and
+ Folk (because wikipedians are just random people, not Experts), and
+ Meta (because in filtering the wikipedia articles down to that list I was comparing ways I have of memorizing to claims about how memory works), and
+ National (if you think of the entire Anglosphere as being a sort of nation separated by many state boundaries, so that 25-year-old Canadians and Australians and Germans-who-learned English young can't ever all have the same Prime Minister without deeply restructuring various States, but are still "born together" in some tribal sense, and they all can reason and contribute to the same English language wikipedia), and maybe
+ Procedural (in that I used procedures to manipulate the list of kinds of memories by hand, and if I made procedural errors in composing it (like accidentally deleting a word and not noticing) then I might kinda have lied-by-accident due to my hands "doing data manipulation" wrongly), and definitely
+ Reconstructive (from many many inputs and my own work), and
+ Semantic (because words and meanings are pretty central here).
Imagine someone tried to go through an essay that they had written in the past and do a best-effort mark-up of ALL of the verbs with ALL of these, and then look for correlations?
Like I bet "Procedural" and "Reconstructive" and "Semantic" go together a lot?
(And maybe that is close to one or more of the standard Inferential evidential markers?)
Likewise "Cultural" and "National" and "Institutional" and "Folk" also might go together a lot?
Then they link up somewhat nicely with a standard evidentiality marker that often shows up, which is "Reportative"!
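((To make the correlation-hunting idea concrete, a minimal sketch, with hypothetical hand-tagged data rather than anything I actually annotated:

```python
# A quick sketch of the correlation-hunting idea (the annotations below are
# hypothetical -- imagine each verb of an old essay hand-tagged with the
# memory-type markers it seems to rest on).
from collections import Counter
from itertools import combinations

annotated_verbs = [
    {"Procedural", "Reconstructive", "Semantic"},
    {"Cultural", "National", "Institutional", "Folk"},
    {"Procedural", "Semantic"},
    {"Institutional", "Folk", "Reconstructive"},
]  # ...one tag-set per verb, for the whole essay

pair_counts = Counter()
for tags in annotated_verbs:
    pair_counts.update(combinations(sorted(tags), 2))

# Which markers travel together? (Crude co-occurrence counts, not a real
# correlation coefficient, but enough to spot candidate clusters.)
for pair, n in pair_counts.most_common(5):
    print(n, pair)
```

))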
So here is the sentence again, re-written with some coherent evidential and modal tags attached, trying to speak simply and directly to the challenges:
"These granular adjectives mightvalidly-reportatively-inferably-modify the concept of memory: <list>."
One or more reportatives sorta convergently show up in many languages that have obligate evidential marking.
The thing I really want here is to indicate that "I'm mentally outsourcing a reconciliation of kinds of memories and kinds of evidential markers to the internet institution of Wikipedia via elaborate procedures".
Sometimes, some languages require that one not just say "reportatively" but specifically drill down to distinguish between "Quotative" (where the speaker heard from someone who saw it and is being careful with attribution) vs "Hearsay" (which is what the listener of a Quotative or a Hearsay evidential claim should probably use when they relate the same fact again, because now they are offering hearsay (at least if you think of each conversation as a court and each indicative utterance in a conversation as testimony in that court)).
Since Wikipedia does not allow original research, it is essentially ALL hearsay, I think? Maybe? And so maybe it'd be better to claim:
"These granular adjectives might-viaInternetHearsay-inferably-validly-modify the concept of memory: <list>."
For all I know (this is not an area of expertise for me at all) there could be a lot of other "subtypes of reportative evidential markers" in real existing human languages so that some language out there could say this easily???
I'm not sure if I should keep the original "can" or be happy about this final version's use of "might".
Also, "validly" snuck in there, and I'm not sure if I mean "scientificallyValidly" (tracking the scientific concept of validity) or "morallyValidly" (in the sense that I "might not be writing pure bullshit and so I might not deserve moral sanction")?
Dear god. What even is this comment! Why!? Why is it so hard?!
Where were we again?
I.A.2.a.i. Procedural commentary on evidentiality, concomitant to the challenges of understanding the interrogative mood.
Ahoy there John and David!
I'm not trying to write an essay (exactly), I'm writing a comment responding to you! <3
I think I don't trust language to make "adequate" sense. Also, I don't trust humans to "adequately" understand language. I don't trust common sense utterances to "adequately" capture anything in a clean and good and tolerably-final way.
The OP seems to say "yeah, this language stuff is safe to rely on to be basically complete" and I think I'm trying to say "no! that's not true! that's impossible!" because language is a mess. Everywhere you look it is wildly half-assed, and vast, and hard to even talk about, and hard to give examples of, and combinatorially interacting with its own parts.
The digression just now into evidentiality was NOT something I worked on back in 2020, but it is illustrative of the sort of rabbit holes that one finds almost literally everywhere one looks, when working on "analytic meta linguistics" (or whatever these efforts could properly be called).
Remember when I said this at the outset?
"During covid" I got really interested in language, and was thinking of making a conlang that would be an intentional pidgin (and so very very simple in some sense) that was on the verge of creolizing but which would have small simple words with clear definitions that could be used to "ungramaticalize" everything that had been grammaticalized in some existing human language...
...this project to "lexicalize"-all-the-grammar(!) defeated me, and I want to digress here to briefly to talk about my defeat! <3
It would be kind of like Ithkuil, except, like... hopefully actually usable by real humans.
The reason I failed to create anything like a periodic table of grammar for a pidgin style conlang is because there are so many nooks and crannies! ...and they ALL SEEM TO INTERACT!
Maybe if I lived to be 200 years old, I could spend 100 of those years in a library, designing a language for children to really learn to speak as "a second toy language" that put them upstream of everything in every language? Maybe?
However, if I could live to 200 and spend 100 years on this, then probably so could all the other humans, and then... then languages would take until you were 30 to even speak properly, I suspect, and it would just loop around to not being possible for me again even despite living to 200?
I.B.1. Trying To Handle A Simple Case: Moods In Diving Handsigns
When I was working on this, I was sorta aiming to get something VERY SMALL at first because that's often the right way to make progress in software. Get test cases working inside of a framework.
So, it seemed reasonable to find "a REAL language" that people really need and use and so on, but something LESS than the full breadth of everything one can generally find being spoken in a tiny village on some island near Papua New Guinea?
So I went looking into scuba hand signs, with the hope of translating a tiny and stupidly simple language and successfully sending THAT through some kind of Characteristica Universalis prototype to handle the "semantics of the pragmatics of modal operators".
The goal wasn't to handle tense, aspect, evidentiality, voice, etc in general. I suspect that diving handsigns don't even have any of that!
But it would maybe be some progress to be able to translate TOY languages into a prototype of an ultimate natural meta-language.
I.B.1.a Diving Handsigns Have Pragmatically Weird Mood (Because Avoiding Drowning Is The Most Important Thing) But They Are Simple (Because It Is For Hobbyists With Shit In Their Mouth)
So the central juicy challenge was that in diving manuals, a lot of times their hand signs are implicitly in the imperative mood.
The dive leader's orders are strong, and mostly commands, by default.
The dive followers mostly give suggestions (unless they relate to safety, in which case they aren't supposed to use them except for really reals, because even if they use them wrongly, the dive leader has to end the dive if there's any chance of drowning based on what was probably communicated).
Then, in this linguistic situation, it turns out they just really pragmatically need stuff like the "question mark" handsign, which marks the following or preceding handsign (or two) as having been in the interrogative mood.
And so I felt like I HAD to be able to translate the interrogative and imperative moods "cleanly" into something cleanly formal, even just for this "real toy language".
If I was going to match Frege's successes in a way that is impressive enough to justify happening in late 2020 (128 years after the 1892 publication of "Sense and Reference"), then... well... maybe I could use this to add one or two signs to "diving sign language" and actually generate technology from my research, as a proof that the research wasn't just a bunch of bullshit!
(Surely there has been progress here in philosophy in well over a century... right?!)
((As a fun pragmatic side note, there's a kind of interpretation here of this diving handsign where "it looks like a question mark" but also it's kind of interesting how the index finger is "for pointing" and that pointing symbol is "broken or crooked", so even an alien might be able to understand that as "I can't point, but want to be able to point"?!? Is "broken indexicality" the heart of the interrogative mood somehow? If we wish to RETVRN TO NOVA ZEMBLA must we eschew this entire mood maybe??))
Like... the imperative and interrogative moods are the default moods for a lot of diving handsigns!
You can't just ignore this and only think about the indicative mood all the time, like it was still the late 1800s... right? <3
So then... well... what about "the universal overarching framework" for this?
I.B.2. Trying To Find The Best Framework For Mood Leads To... Nenets?
So I paused without any concrete results on the diving stuff (because making Anki decks for that and trying it in a swimming pool would take forever and not give me a useful output) to think about where it was headed.
And now I wanted to know "what are all the Real Moods?"
And a hard thing here is that (1) English doesn't have that many in its verbs, and (2) linguists often only count the ones that show up in verb conjugation as "real" (for counting purposes), and (3) there's a terrible terrible problem in getting a MECE (mutually exclusive, collectively exhaustive) list of The Full List Of Real Moods from "all the languages".
Point three is non-obvious. The issue is, from language to language, they might lump and split the whole space of possible moods to mark differently so that one language might use "the mood the linguist decided to call The Irrealis Mood" only for telling stories with magic in them (but also they are animists and think the world is full of magic), and another language might use something a linguist calls "irrealis" for that AND ALSO other stuff like basic if/then logic!
So... I was thinking that maybe the thing to do would be to find the SINGLE language that, to the speakers of that language and linguists studying them, had the most DISTINCT moods with MECE marking.
This language turns out to be: Nenets. It has (I think) ~16 moods, marked inside the verb conjugation like it has been allowed to simmer and get super weird and barely understandable to outsiders for >1000 years, and marking mood is obligatory! <3
One can find academic reports on Nenets grammar like this:
In all types of data used in this study, the narrative mood is the most frequently used non-indicative mood marker. The narrative mood is mutually exclusive with any other mood markers. However, it co-occurs with tense markers, the future and the general past (preterite), as well as the habitual aspect. Combined with the future tense, it denotes past intention or necessity (Nikolaeva 2014: 93), and combined with the preterite marker, it encodes more remote past (Ljublinskaja & Malčukov 2007: 459–461). Most commonly, however, the narrative mood appears with no additional tense marking, denoting a past action or event.
So... so I think they have a "once upon a time" mood? Or maybe it is like how technical projects often make commitments at the beginning like "we are only going to use technology X", and then this is arguably a mistake, and yet arguably everyone has to live with it forever, and so you tell the story about how "we decided to only, in the future, use technology X"... and that would be marked as "it was necessary in the deep past to use technology X going forward" with this "narrative mood" thingy that Nenets reportedly has? So you might just say something like "we narratively-use technology X" in that situation?
Maybe?
I.B.2.a. But Nenets Is Big, And Time Was Short, And Kripke Is Always Dogging Me, And I'm A Pragmatist At Heart
And NONE OF THIS got at what I actually think is often going on at an animal level, where "I hereby allow you to eat" has a deep practical meaning!
The PDF version of Irina Nikolaeva's "A Grammar of Tundra Nenets" is 528 pages, but only pages 85 to 105 are directly about mood. Maybe it should be slogged through? (I tried slogging, and the slogging led past so many rabbit holes!)
Like, I think maybe "allowing someone to eat" could be done by marking "eat" with the Jussive Mood (that Nenets has) and then if we're trying to unpack that into some kind of animalistic description of all of what is kinda going on the phrase "you jussive-eat" might mean something like:
"I will not sanction you with my socially recognized greater power if you try to eat, despite how I normally would sanction you, and so, game theoretically, it would be in your natural interest to eat like everyone usually wants to eat (since the world is Malthusian by default) but would normally be restrained from eating by fear of social sanction (since food sharing is a core loop in social mammals and eating in front of others without sharing will make enemies and disrupt the harmony of the group and so on), but it would be wise of you to do so urgently in this possibly short period of special dispensation, from me, who is the recognized controller of rightful and morally tolerated access to the precious resource that is food".
Now we could ask, is my make-believe Nenets phrase "you jussive-eat" similar to English "you should eat" or "you may eat" or "you can eat" or none-of-these-and-something-else or what?
Maybe English would really need something very complexly marked with status and pomp to really communicate it properly like "I allow you to eat, hereby, with this speech act"?
Or maybe I still don't have a good grasp on the underlying mood stuff and am fundamentally misunderstanding Nenets and this mood? It could be!
But then, also, compare my giant paragraph full of claims about status and hunger and predictable patterns of sanction with Kripke's modal logic which is full of clever representations of "necessity" and "possibility" in a way that he is often argued to have grounded in possible worlds.
"You must pay me for the food I'm selling you."
"There exist no possible worlds where it is possible for you to not pay me for the food I'm selling you."
The above are NOT the SAME! At all!
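((For concreteness, here is the standard textbook possible-worlds evaluation, with worlds and facts invented purely for illustration, which shows just how strong the "no possible worlds" reading is:

```python
# A toy evaluator for the standard possible-worlds semantics (textbook Kripke
# semantics; the specific worlds and facts here are invented for illustration).

access = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": {"w2"}}  # accessibility relation
facts = {"w0": set(), "w1": {"you_pay"}, "w2": set()}      # what holds in each world

def necessarily(prop, w):  # box: prop holds in EVERY world accessible from w
    return all(prop in facts[v] for v in access[w])

def possibly(prop, w):     # diamond: prop holds in SOME world accessible from w
    return any(prop in facts[v] for v in access[w])

print(necessarily("you_pay", "w0"))  # False -- w2 is accessible and payment fails there
print(possibly("you_pay", "w0"))     # True  -- w1 is accessible and payment holds there
```

Nothing in that tidy little machine knows anything about food, or selling, or the social consequences of stiffing a merchant.))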
But maybe that's the strawman sketch... but every time I try to drop into the symbolic logic literature around Kripke, I come out of it feeling like they are entirely missing the idea of... orders and questions and statements, and how orders and questions and statements are different from each other, and really important to how people actually use modals in language, and practically unmentioned by the logicians :-(
I.B.2.b. Frege Dogs Me Less But Still... Really?
In the meantime, in much older analytic philosophy, Frege has this whole framework for taking the surface words as having senses in a really useful way, and this whole approach to language is really obsessed with "intensional contexts where that-quoting occurs" (because reference seems to work differently inside vs outside a "that-quoted-context"). Consider...
The subfield where people talk about "intensional language contexts" is very tiny, but with enough googling you can find people saying stuff about it like this:
As another example of an intensional context, reflectica allows us to correctly distinguish between de re and de dicto meanings of a sentence, see the Supplement to [6]. For example, the sentence Leo believes that some number is prime can mean either
Believes(Leo, ∃x[Number(x) & Prime(x)]) or
∃x(Number(x) & Believes[Leo, Prime(x)]). Note that, since the symbol 'Believes' is intensional in the second argument, the latter formula involves quantifying into an intensional context, which Quine thought is incoherent [7] (but reflectica allows one to do such things coherently).
Sauce is: Mikhail Patrakeev's "Outline of a Self-Reflecting Theory"
((Oh yeah. Quine worked on this stuff too! <3))
So in mere English words we might try to spell out a Fregean approach like this...
"You must pay me for the food I'm selling you."
"It is (indicatively) true: I gave you food. Also please (imperatively): the sense of the phrase 'you pay me' should become true."
I think that's how Frege's stuff might work if we stretched it quite far? But it is really really fuzzy. It starts to connect a little tiny bit to the threat and counter threat of "real social life among humans" but Kripke's math seems somewhat newer and shinier and weirder.
Like... "reflectiva" is able to formally capture a way for the indicative mood to work in a safe and tidy domain like math despite the challenges of self reference and quoting and so on...
...but I have no idea whether or how reflectiva could bring nuance to questions, or commands, or laws, or stories-of-what-not-to-do, such that "all the real grammaticalized modes" could get any kind of non-broken treatment in reflectiva.
And in the meantime, in Spanish "poder" is the verb for "can", doing the work of modal auxiliary verbs like "could" (which rhymes with "would" and "should"), and poder is FULL of emotions and metaphysics!
Where are the metaphysics here? Where is the magic? Where is the drama? "Shouldness" causes confusion that none of these theories seem to me to explain!
II. It Is As If Each Real Natural Language Is Almost Anti-Epistemic And So Languages Collectively ARE Anti-Epistemic?
Like WTF, man... WTF.
And that is why my attempt, during covid, to find a simple practical easy Characteristica universalis for kids, failed.
Toki pona is pretty cool, though <3
One could imagine a language "Lyapunese" where every adjective (AND noun probably?) had to be marked in relation to a best guess as to the Lyapunov time of the evolution of the underlying substance, at the relevant level of description in the semantics of the word, such that the veridicality conditions for the adjective or noun might stop applying to the substance with ~50% probability after that rough amount of time.
Call this the "temporal mutability marker".
"Essentialism" is already in the English language and is vaguely similar?
In English essential traits are in the noun and non-essential traits are in adjectives. In Spanish non-essential adjectives are attributed using the "estar" family of verbs and essential adjectives are attributed using the "ser" family of verbs. (Hard to find a good cite here, but maybe this?)
(Essentialism is DIFFERENT from "predictable stability"! In general, when one asserts something to be "essential" via your grammar's way of asserting that, it automatically implies that you think no available actions can really change the essential cause of the apparent features that arise from that essence. So if you try to retain that into Lyapunese you might need to mark something like the way "the temporal mutability marker appropriate to the very planning routines of an agent" interact with "the temporal mutability marker on the traits or things the agent might reasonably plan to act upon that they could in fact affect".)
However, also, in Lyapunese, the fundamental evanescence of all physical objects except probably protons (and almost certainly electrons and photons and one of the neutrinos (but not all the neutrinos)) is centered.
No human mental trait could get a marker longer than the life of the person (unless cryonics or heaven is real) and so on. The mental traits of AI would have to be indexed to either the stability of the technical system in which they are inscribed (with no updates possible after they are recorded) or possibly to the stability of training regime or updating process their mental traits are subject to at the hands of engineers?
Then (if we want to make it really crazy, but add some fun utility) there could be two sentence final particles that summarize the longest time on any of the nouns and shortest such time markings on any of the adjectives, to help clarify urgency and importance?
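((A sketch of how those two particles could be computed mechanically, assuming hypothetical per-word markers measured in years:

```python
# A sketch of the two sentence-final particles, assuming each content word
# carries a (hypothetical) temporal mutability marker in years.

sentence = [
    ("redwood", "noun",      1000.0),      # millennial-redwood
    ("dad",     "noun",        40.0),      # shortlife-dad
    ("plan",    "noun",        10.0),      # decade-plan
    ("silly",   "adjective",    1/525600), # minutewise-silly (one minute, in years)
]

# Importance particle: the longest marking on any noun.
noun_particle = max(t for _, pos, t in sentence if pos == "noun")
# Urgency particle: the shortest marking on any adjective.
adj_particle = min(t for _, pos, t in sentence if pos == "adjective")

print(noun_particle, adj_particle)
```

))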
This would ALL be insane of course.
Almost no one even knows what Lyapunov time even is, as a concept.
And children learning the language would almost INSTANTLY switch to insisting that the grammatical marking HAD to be a certain value for certain semantic root words not because they'd ever had the patience to watch such things change for themselves but simply because "that's how everyone says that word".
Here's a sketch of an attempt at a first draft, where some salient issues with the language itself arise:
((
Ally: "All timeblank-redwoods are millennial-redwoods, that is simply how the century-jargon works!"
Bobby: "No! The shortlife-dad of longlife-me is farming nearby 33-year-redwoods because shortlife-he has decade-plans to eventually harvest 33-year-them and longlife-me will uphold these decade-plans."
Ally: "Longlife-you can't simply change how century-jargon works! Only multicentury-universities can perform thusly!"
Bobby: "Pfah! Longlife-you who is minutewise-silly doesn't remember that longlife-me has a day-idiolect that is longlife-owned by longlife-himself."
Ally: "Longlife-you can't simply change how century-jargon works! Only multicentury-universities can perform thusly! And longlife-you have a decade-idiolect! Longlife-you might learn eternal-eight day-words each day-day, but mostlly longlife-you have a decade-idiolect!
Bobby: "Stop oppressing longlife-me with eternal-logic! Longlife-me is decade-honestly speaking the day-mind of longlife-me right now and longlife-me says: 33-year-redwoods!"
))
But it wouldn't just be kids!
The science regarding the speed at which things change could eventually falsify common ways of speaking!
And adults who had "always talked that way" would hear it as "grammatically wrong to switch" and so they just would refuse. And people would copy them!
I grant that two very skilled scientists talking about meteorology or astronomy in Lyapunese would be amazing.
They would be using all these markings that almost never come up in daily life, and/or making distinctions that teach people a lot about all the time scales involved.
But also the scientists would urgently need a way to mark "I have no clue what the right marking is", so maybe also this would make every adjective and noun need an additional "evidentiality marker on top of the temporal mutability marker"???
And then when you do the sentence final particles, how would the evidence-for-mutability markings carry through???
When I did the little script above, I found myself wanting to put the markers on adverbs, where the implicit "underlying substance" was "the tendency of the subject of the verb to perform the verb in that manner".
It could work reasonably cleanly if "Alice is decade-honestly speaking" implies that Alice is strongly committed to remaining an honestly-speaking-person with a likelihood of success that the speaker thinks will last for roughly 10 years.
The alternative was to imagine that "the process of speaking" was the substance, and then the honesty of that speech would last... until the speaking action stopped in a handful of seconds? Or maybe until the conversation ends in a few minutes? Or... ???
I'm not going to try to flesh out this conlang any more.
This is enough to make the implicit point, I think? <3
Basically, I think that Lyapunese is ONLY "hypothetically" possible: it wouldn't catch on, it would be incredibly hard to learn, it will likely never be observed in the wild, and so on...
...and yet, also, I think Lyapunov Time is quite important and fundamental to reality and an AI with non-trivial plans and planning horizons would be leaving a lot of value on the table if it ignored deep concepts from chaos theory.
The Piraha can't count, and many of them don't appear to be able to learn to count past a critical period, not even as motivated adults (when (I've heard, but haven't found a way to nail down for sure from clean eyewitness reports) they have sometimes attended classes because they wished to be able to count the "money" they make from sex work, for example).
Are the Piraha in some meaningful sense "not fully human" due to environmental damage or are "counting numbers" not a natural abstraction or... or what?
On the other end of the spectrum, Ithkuil is a probably-impossible-for-humans-to-master conlang whose creator sorta tried to give it EVERY feature that has shown up in at least one human language that the creator of the language could find.
Does that mean that once an AI is fluent in Ithkuil (which surely will be possible soon, if it is not already) maybe the AI will turn around and see all humans sorta the way that we see the Piraha?
...
My current working model of the essential "details AND limits" of human mental existence puts a lot of practical weight and interest on valproic acid because of the paper "Valproate reopens critical-period learning of absolute pitch".
Also, it might be usable to cause us to intuitively understand (and fluently and cleanly institutionally wield, in social groups, during a political crisis) untranslatable 5!
Like, in a deep sense, I think that the "natural abstractions" line of research leads to math, both discovered, and undiscovered, especially math about economics and cooperation and agency, and it also will run into the limits of human plasticity in the face of "medicalized pedagogy".
And, as a heads up, there's a LOT of undiscovered math (probably infinitely much of it, based on Goedel's results) and a LOT of unperfected technology (that could probably change a human base model so much that the result crosses some lines of repugnance even despite being better at agency and social coordination).
...
Speaking of "the wisdom of repugnance".
In my experience, studying things where normies experience relatively unmediated disgust, I can often come up with pretty simple game theory to explain both (1) why the disgust would evolutionarily arise and also (2) why it would be "unskilled play within the game of being human in neo-modern times" to talk about it.
That is to say, I think "bringing up the wisdom of repugnance" is often a Straussian(?) strategy to point at coherent logic which, if explained, would cause even worse dogpiles than the current kerfuffle over JD Vance mentioning "cat ladies".
This leads me to two broad conclusions.
(1) The concepts of incentive compatible mechanism design and cooperative game theory in linguistics both suggest places to predictably find concepts that are missing from polite conversation that are deeply related to competition between adult humans who don't naturally experience storge (or other positive attachments) towards each other as social persons, and thus have no incentive to tell each other certain truths, and thus have no need for certain words or concepts, and thus those words don't exist in their language. (Notice: the word "storge" doesn't exist in English except as a loan word used by philosophers and theologians, but the taunt "mama's boy" does!)
(2) Maybe we should be working on "artificial storge" instead of a way to find "words that will cause AI to NOT act like a human who only has normal uses for normal human words"?
...
I've long collected "untranslatable words" and a fun "social one" is "nemawashi" which literally means "root work", and it started out as a gardening term meaning "to carefully loosen all the soil around the roots of a plant prior to transplanting it".
Then large companies in Japan (where the Plutocratic culture is wildly different than in the US) use nemawashi to mean something like "to go around and talk to the lowest status stakeholders about proposed process changes first, in relative confidence, so they can veto stupid ideas without threatening their own livelihood or publicly threatening the status of the managers above them, so hopefully they can tweak details of a plan before the managers synthesize various alternative plans into a reasonable way for the whole organization to improve its collective behavior towards greater Pareto efficiency"... or something?
The words I expect to not be able to find in ANY human culture are less wholesome than this.
English doesn't have "nemawashi" itself for... reasons... presumably? <3
...
Contrariwise... the word "bottom bitch" exists, which might go against my larger claim? Except in that case it involves a kind of stabilized multi-shot social "compatibility" between a pimp and a ho, that at least one of them might want to explain to third parties, so maybe it isn't a counter-example?
The only reason I know the word exists is that Chappelle had to explain what the word means, to indirectly explain why he stopped wanting to work on The Chappelle Show for Comedy Central.
Oh! Here's a thing you might try! Collect some "edge-case maybe-too-horrible-to-exist" words, and then check where they are in an embedding space, and then look for more words in that part of the space?
Maybe you'll be able to find-or-construct a "verbal Loab"?
(Ignoring the sense in which "Loab was discovered" and that discovery method is now part of her specific meaning in English... Loab, in content, seems to me to be a pure Jungian Vampire Mother without any attempt at redemption or social usefulness, but I didn't notice this for myself. A friend who got really into Lacan noticed it and I just think he might be right.)
And if you definitely cannot construct any "verbal Loab", then maybe that helps settle some "matters of theoretical fact" in the field of semantics? Maybe?
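((A hedged sketch of what that hunt might look like, assuming the sentence-transformers library is available; the model name is just a common default, and the seed and candidate word lists are purely illustrative:

```python
# Sketch: embed a seed list of "edge-case" words, take the centroid of that
# region of embedding space, and rank a candidate vocabulary by proximity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

seeds = ["bottom bitch"]                      # edge-case maybe-too-horrible words
candidates = ["nemawashi", "storge", "mama's boy", "kerfuffle"]  # vocab to scan

center = model.encode(seeds).mean(axis=0)     # centroid of the seed region
vecs = model.encode(candidates)

sims = vecs @ center / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(center))
for word, s in sorted(zip(candidates, sims), key=lambda p: -p[1]):
    print(f"{s:.3f}  {word}")                 # nearest neighbors of the region
```

In real life you'd want a much bigger candidate vocabulary, and probably several rounds of expanding the seed set with whatever turns up.))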
Ooh! Another thing you might try, based on this sort of thing, is to look for "steering vectors" where "The thing I'm trying to explain, in a nutshell, is..." completes (at low temperature) in very very long phrases? The longer the phrase required to "use up" a given vector, the more "socially circumlocutionary" the semantics might be? This method might be called "dowsing for verbal Loabs".
You're welcome! I'm glad you found it useful.
I previously wrote [an "introduction to thermal conductivity and noise management" here].
This is amazingly good! The writing is laconic, modular, model-based, and relies strongly on the reader's visualization skills!
Each paragraph was an idea, and I had to read it more like a math text than like "human writing" to track latent conceptual structure, despite it being purely in language with no equations occurring in the text.
(It is similar to Munenori's "The Life Giving Sword" and Zizioulas's "Being As Communion" but not quite as hard as those because those require emotional and/or moral and/or "remembering times you learned or applied a skill" and/or "cogito ergo sum" fit checks instead of pauses to "visualize complex physical systems in motion".)
The "big picture fit check on concepts" at the end of your conceptual explanation (just before application to examples began) was epiphanic (in context):
...Because of phonon scattering, thermal conductivity can decrease with temperature, but it can also increase with temperature, because at higher temperature, more vibrational modes are possible. So, crystals have some temperature at which their thermal conductivity peaks.
With this understanding, we'd expect amorphous materials to have low thermal conductivity, even if they have a 3d network of strong covalent bonds. And indeed, typical window glass has a relatively low thermal conductivity, ~1/30th that of aluminum oxide, and only ~2x that of HDPE plastic.
I had vaguely known that thermal and electric conductivity were related, but I had never seen them connected together such that "light transparency and heat insulation often go together" could be a natural and low cost sentence.
I had not internalized before that matter might have fundamental limits on "how much frequency" (different frequencies + wavelengths + directions of many waves, all passing through the same material) might be operating on every scale and wave type simultaneously!
Now I have a hunch: if Drexlerian nanotech ever gets built, some of those objects might have REALLY WEIRD macroscopic properties... like being transparent from certain angles or accidentally a "superconductor" of certain audio frequencies? Unless maybe every type and scale of wave propagation is analyzed and the design purposefully suppresses all such weird stray macroscopic properties???
The main point of this post wasn't to explain superconductors, but to consider some sociology.
I think a huge part of why these kinds of things often occur is that they are MUCH more likely in fields where the object level considerations have become pragmatically impossible for normal people to track, and they've been "taking it on faith" for a long time.
Normal humans can then often become REALLY interested when "a community that has gotten high trust" suddenly might be revealed to be running on "Naked Emperor Syndrome" instead of simply doing "that which they are trusted to do" in an honest and clean way.
((Like, at this point, if a physics PhD has "string theory" on their resume after about 2005, I just kinda assume they are a high-iq scammer with no integrity. I know this isn't fully justified, but that field has for so long: (1) failed to generate any cool tech AND (2) failed to be intelligible to outsiders AND (3) been getting "grant funding that was 'peer reviewed' only by more string theorists" that I assume that intellectual parasites invaded it and I wouldn't be able to tell.))
Covid caused a lot of normies to learn that a lot of elites (public health officials, hospital administrators, most of the US government, most of the Chinese government, drug regulators, drug makers, microbiologists capable of gain-of-function but not epidemiology, epidemiologists with no bioengineering skills, etc) were not competently discharging their public duties to Know Their Shit And Keep Their Shit Honest And Good.
LK-99 happening in the aftermath of covid, proximate to accusations of bad faith by the research team who had helped explore new materials in a new way, was consistent with the new "trust nothing from elites, because trust will be abused by elites, by default" zeitgeist... and "the material science of conductivity" is a vast, demanding, and complex topic that can mostly only be discussed coherently by elite material scientists.
In many cases, whether the social status of a scientific theory is amplified or diminished over time seems to depend more on the social environment than on whether it's true.
I think that different "scientific fields" will experience this to different amounts depending on how many of their concepts can be reduced to things that smart autodidacts can double click on, repeatedly, until they ground in things that connect broadly to bedrock concepts in the rest of math and science.
This is related to very early material on lesswrong, in my opinion, like That Magical Click and Outside The Laboratory and Taking Ideas Seriously that hit a very specific layer of "how to be a real intellectual in the real world" where broad abstractions and subjectively accessible updates are addressed simultaneously, and kept in communication with each other, without either of them falling out of the "theory about how to be a real intellectual in the real world".
I think your condensation of that post you linked to is missing the word "superstimulus" (^f on the linked essay is also missing the term) which is the thing that the modern world adds to our environment on purpose to make our emotions less adaptive for us and more adaptive for the people selling us superstimuli (or using that to sell literally any other random thing). I added the superstimuli tag for you :-)
My reaction to the physics here was roughly: "phonon whatsa whatsa?"
It could be that there is solid reasoning happening in this essay, but maybe there is not enough physics pedagogy in the essay for me to be able to tell that solid reasoning is here, because superconductors aren't an area of expertise (yet! (growth mindset)).
To double check that this essay ITSELF wasn't bullshit I dropped [the electron-phonon interaction must be stronger than random thermal movement] into Google and... it seems to be a real thing! <3
The top hit was this very blog post... and the second hit was to "Effect of Electron-Phonon Coupling on Thermal Transport across Metal-Nonmetal Interface - A Second Look" with this abstract:
The effect of electron-phonon (e-ph) coupling on thermal transport across metal-nonmetal interfaces is yet to be completely understood. In this paper, we use a series of molecular dynamics (MD) simulations with e-ph coupling effect included by Langevin dynamics to calculate the thermal conductance at a model metal-nonmetal interface. It is found that while e-ph coupling can present additional thermal resistance on top of the phonon-phonon thermal resistance, it can also make the phonon-phonon thermal conductance larger than the pure phonon transport case. This is because the e-ph interaction can disturb the phonon subsystem and enhance the energy communication between different phonon modes inside the metal. This facilitates redistributing phonon energy into modes that can more easily transfer energy across the interfaces. Compared to the pure phonon thermal conduction, the total thermal conductance with e-ph coupling effect can become either smaller or larger depending on the coupling factor. This result helps clarify the role of e-ph coupling in thermal transport across metal-nonmetal interface.
An interesting thing here is that, based just on skimming and from background knowledge I can't tell if this is about superconductivity or not.
The substring "superconduct" does not appear in that paper.
Searching more broadly, it looks like a lot of these papers actually are about electronic and conductive properties in general, often in semiconductors (though some hits for this search query ARE about superconductivity), and so searching like this helped me learn a little bit more about "why anything conducts or resists electric current at all", which is kinda cool!
I liked "Electron-Phonon Coupling as the Source of 1/f Noise in Carbon Soot" for seeming to go "even more in the direction of extremely general reasoning about extremely general condensed matter physics"...
...which leads naturally to the question "What the hell is 1/f noise?" <3
I tried getting an answer from youtube (this video was helpful and worked for me at 1.75X speed) which helped me start to imagine that "diagrams about electrons going through stuff" was nearby, and also to learn that a synonym for this is Pink Noise, which is a foundational concept I remember from undergrad math.
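(For anyone who wants to poke at it directly: this is just the textbook FFT recipe for synthesizing pink noise, not anything from that video, but it makes "power proportional to 1/f" concrete.)

```python
# White noise has flat power at every frequency; dividing the spectrum's
# amplitude by sqrt(f) makes power (~ amplitude^2) proportional to 1/f.
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
white = rng.standard_normal(n)

spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]                                # dodge the divide-by-zero at DC
pink = np.fft.irfft(spectrum / np.sqrt(freqs), n)  # 1/f ("pink") noise
```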
I'm not saying I understand this yet, but I am getting to be pretty confident that "a stack of knowledge exists here that is not fake, and which I could learn, one bite at a time, and that you might be applying correctly" :-)
Log odds, measured in something like "bits of evidence" or "decibels of evidence", is the natural thing to think of yourself as "counting". A probability of 100% would be like having infinite positive evidence for a claim and a probability of 0% is like having infinite negative evidence for a claim. Arbital has some math and Eliezer has a good old essay on this.
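In code, the bookkeeping is tiny (a minimal sketch):

```python
# Decibels of evidence: 10 * log10(odds).
import math

def decibels(p):
    return 10 * math.log10(p / (1 - p))  # +infinity at p=1, -infinity at p=0

print(decibels(0.5))   # 0.0  -- even odds, zero net evidence
print(decibels(0.8))   # ~6.0 -- about six decibels in favor
print(decibels(0.99))  # ~20  -- and p=1.0 would take infinitely many
```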
A good general heuristic (or widely applicable hack) to "fix your numbers to even be valid numbers" when trying to get probabilities for things based on counts (like a fast and dirty spreadsheet analysis), and never having this spit out 0% or 100% due to naive division on small numbers (like seeing 3 out of 3 of something and claiming it means the probability of that thing is probably 100%), is to use "pseudo-counting" where every category that is analytically possible is treated as having been "observed once in our imaginations". This way, if you can fail or succeed, and you've seen 3 of either, and seen nothing else, you can use pseudocounts to guesstimate that whatever happened every time so far is (3+1)/(3+2) == 80% likely in the future, and whatever you've never seen is (0+1)/(3+2) == 20% likely.
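A minimal sketch of the pseudo-counting hack (which statisticians call Laplace smoothing, or the rule of succession in the two-outcome case):

```python
# Pretend each analytically possible outcome was "observed once in our
# imaginations" before dividing.
def pseudo_prob(successes, failures):
    return (successes + 1) / (successes + failures + 2)

print(pseudo_prob(3, 0))  # 0.8 -- "happened every time so far", but not 100%
print(pseudo_prob(0, 3))  # 0.2 -- "never seen so far", but not 0%
```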
That's fascinating and I'm super curious: when precisely, in your experience as a participant in a language community did it feel like "The American definition where a billion is 10^9 and a trillion is 10^12 has long since taken over"?
((I'd heard about the British system, and I had appreciated how it makes the "bil", "tril", "quadril", "quintil" prefixes of all the "-illion" words make much more sense as "counting how many 10^6 chunks were being multiplied together".
The American system makes it so that you're "counting how many thousands are being multiplied together", but you're starting at 1 AFTER the first thousand, so there's "3 thousands in a billion" and "4 thousands in a trillion", and so on... with a persistent off-by-one error all the way up...
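To make the off-by-one concrete, here's a tiny sketch (the function names are mine, and "pentillion" is being used loosely for the fifth step):

```python
def american_illion(n: int) -> int:
    # million=1, billion=2, trillion=3, ...: each step adds a factor of
    # 1000, but the count starts one AFTER the first thousand: 10^(3n+3).
    return 10 ** (3 * n + 3)

def british_illion(n: int) -> int:
    # Old British system: the prefix literally counts 10^6 chunks: 10^(6n).
    return 10 ** (6 * n)

american_illion(2)  # billion = 10^9  ("3 thousands" for prefix 2)
british_illion(2)   # billion = 10^12 ("two millions multiplied together")
```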
Mathematically, there's a system that makes more sense and is simpler to teach in the British way, but linguistically, the American way lets you speak and write about 50k, 50M, 50B, 50T, 50Q, and finally 50P (for fifty pentillion)...
...and that linguistic frame is probably going to get more and more useful as inflation keeps inflating?
Eventually the US national debt will probably be "in the quadrillions of paper dollars" (and we'll NEED the word in regular conversation by high status people talking about the well-being of the country)...
...and yet (presumably?) the debt-to-GDP ratio will never go above maybe 300% (not even in a crisis?) because such real world crises or financial gyrations will either lead to massive defaults, or renominalization (maybe going back to metal for a few decades?), or else the government will go bankrupt and not exist to carry those debts, or something "real" will happen.

Fundamentally, the debt-to-GDP ratio is "real" in a way that the "monetary unit we use to talk about our inflationary scrip" is not. There are many possible futures where all countries on Earth slowly eventually end up talking about "pentillions of money units" without ever collapsing, whereas debt ratios are quite real and firm and eventually cause the pain that they imply will arrive...
One can see in the graph below that these numbers mostly "cluster together, because annual-interest-rates and debt-to-GDP-ratios are directly and meaningfully comparable and constrained by the realities of sane financial reasoning", and this shows up much more clearly when you show debt ratios, over time, internationally...

...you can see in that data, just with your eyeballs, that Japan, Greece, and Israel are in precarious places, in a graph with nicely real units.
Then the US, the UK, Portugal, Spain, France, Canada, and Belgium are also out in the danger zone with debt well above 100% of GDP, where we'd better have non-trivial population growth and low government spending for a while, or else we could default in a decade or two.

A small part of me wonders if "the financial innumeracy of the median US and UK voter" is part of the explanation for why we are in the danger zone, and not seeming to react to it in any sort of sane way, as part of the zeitgeist of the English speaking world?

Both of our governments "went off the happy path" (above 100%) right around 2008-2011, due to the Great Recession. So it would presumably be some RECENT change that switched us from "financial prudence before" to "financial imprudence afterwards"?
Maybe it is something boring and obvious like birthrates and energy production?
For reference, China isn't on wikipedia's graph (maybe because most of their numbers are make believe and it's hard to figure out what's going on there for real?) but it is plausible they're "off the top of the chart" at this point. Maybe Xi and/or the CCP are innumerate too? Or have similar "birthrate and energy" problems? Harder to say for them, but the indications are that, whatever the cause, their long term accounting situation is even more dire.
Looping all the way back, was it before or after the Great Recession, in your memory, that British speakers de facto changed to using "billion" to talk about 10^9 instead of 10^12?))
Fascinating. I am surprised and saddened, and thinking about the behavioral implications. Do you have a "go-to brand" that is "the cheapest that doesn't give you reflux"? Now I'm wondering if maybe I should try some of that.
I feel like you're saying "safety research" when what the corporations in your examples centrally want is "reliable control over their slaves"... that is to say, they want "alignment" and "corrigibility" research.
This has been my central beef for a long time.
Eliezer's old Friendliness proposals were at least AIMED at the right thing (a morally praiseworthy vision of humanistic flourishing) and CEV is more explicitly trying for something like this, again, in a way that mostly just tweaks the specification (because Eliezer stopped believing that his earliest plans would "do what they said on the tin they were aimed at" and started over).
If an academic is working on AI, and they aren't working on Friendliness, and aren't working on CEV, and it isn't "alignment to benevolence" or making "corrigibly seeking humanistic flourishing for all"... I don't understand why it deserves applause lights.
(EDITED TO ADD: exploring the links more, I see "benevolent game theory, algorithmic foundations of human rights" as topics you raise. This stuff seems good! Maybe this is the stuff you're trying to sneak into getting more eyeballs via some rhetorical strategy that makes sense in your target audience?)
"The alignment problem" (without extra qualifications) is an academic framing that could easily fit in a grant proposal by an academic researcher to get funding from a slave company to make better slaves. "Alignment IS capabilities research".
Similarly, there's a very easy way to be "safe" from skynet: don't build skynet!
I wouldn't call a gymnastics curriculum that focused on doing flips while you pick up pennies in front of a bulldozer "learning to be safe". Similarly, here, it seems like there's some insane culture somewhere that you're speaking to whose words are just systematically confused (or intentionally confusing).
Can you explain why you're even bothering to use the euphemism of "Safety" Research? How does it ever get off the ground of "the words being used denote what naive people would think those words mean" in any way that ever gets past "research on how to put an end to all AI capabilities research in general, by all state actors, and all corporations, and everyone (until such time as non-safety research, aimed at actually good outcomes (instead of just marginally less bad outcomes from current AI), has clearly succeeded as a more important and better and more funding worthy target)"? What does "Safety Research" even mean if it isn't inclusive of safety from the largest potential risks?
Also, there's now a second detected human case, this one in Michigan instead of Texas.
Both had a surprising-to-me "pinkeye" symptom profile. Weird!
The dairy worker in Michigan had various "compartments" tested, and their nasal compartment (and the people they lived with) all came back negative. Hopeful?
Apparently and also hopefully this virus is NOT freakishly good at infecting humans and also weirdly many other animals (like covid was with human ACE2, in precisely the ways people have talked about when discussing gain-of-function in years prior to covid).
If we're being foolishly mechanical in our inferences "n=2 with 2 survivors" could get rule of succession treatment. In that case we pseudocount 1 for each category of interest (hence if n=0 we say 50% survival chance based on nothing but pseudocounts), and now we have 3 survivors (2 real) versus 1 dead (0 real) and guess that the worst the mortality rate here would be maybe 1/4 == 25% (?? (as an ass number)), which is pleasantly lower than overall observed base rates for avian flu mortality in humans! :-)
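For concreteness, the arithmetic spelled out:

```python
# Pseudocount 1 imaginary survivor and 1 imaginary death on top of the
# real tally (2 survivors, 0 deaths):
p_death = (0 + 1) / (2 + 2)  # == 0.25, the 25% "ass number" above
```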
Naive impressions: a natural virus, with pretty clear reservoirs (first birds and now dairy cows), on the maybe slightly less bad side of "potentially killing millions of people"?
I haven't heard anything about sequencing yet (hopefully in a BSL4 (or homebrew BSL5, even though official BSL5s don't exist yet), but presumably they might not bother to treat this as super dangerous by default until they verify that it is positively safe) but I also haven't personally looked for sequencing work on this new thing.
When people did very dangerous Gain-of-Function research with a cousin of this, in ferrets, over 10 years ago (causing a great uproar among some) the supporters argued that it was worth creating especially horrible diseases on purpose in labs in order to see the details, like a bunch of geeks who would Be As Gods And Know Good From Evil... and they confirmed back then that a handful of mutations separated "what we should properly fear" from "stuff that was ambient".
Four amino acid substitutions in the host receptor-binding protein hemagglutinin, and one in the polymerase complex protein basic polymerase 2, were consistently present in airborne-transmitted viruses. (same source)
It seems silly to ignore this, and let that hilariously imprudent research of old go to waste? :-)
The transmissible viruses were sensitive to the antiviral drug oseltamivir and reacted well with antisera raised against H5 influenza vaccine strains. (still the same source)
(Image sauce.)
Since some random scientists playing with equipment bought using taxpayer money already took the crazy risks back then, it would be silly to now ignore the information they bought so dearly (with such large and negative EV) back then <3
To be clear, that drug worked against something that might not even be the same thing.
All biological STEM stuff is a crapshoot. Lots and lots of stamp-collecting. Lots of guess and check. Lots of "the closest example we think we know might work like X" reasoning. Biological systems or techniques can do almost anything physically possible eventually, but each incremental improvement in repeatability due to "progress" (going from having to try 10 million times to get something to happen to having to try 1 million times, or going from having to try 8 times on average to 4 times on average) is kinda "as difficult as the previous increment in progress that made things an order of magnitude more repeatable".
The new flu just went from 1 to 2. I hope it never gets to 4.
As of May 16, 2024 an easily findable USDA/CDC report says that widely dispersed cow herds are being detectably infected.
So far, that I can find reports of, only one human dairy worker has been detected as having an eye infection.
I saw a link to a report on twitter from an enterprising journalist who claimed to have gotten some milk directly from small local farms in Texas, and the first lab she tried refused to test it. They asked the farms. The farms said no. The labs were happy to go with this!
So, the data I've been able to get so far is consistent with many possibly real worlds.
The worst plausible world would involve a jump to humans, undetected for quite a while, allowing time for adaptive evolution, and an "influenza normal" attack rate of 5%-10% for adults and ~30% for kids, and an "avian flu plausible" mortality rate of 56%(??) (but maybe not until this winter when cold weather causes lots of enclosed air sharing?) which implies that by June of 2025 maybe half a billion people (~= 7B*0.12*0.56) will be dead???
But probably not, for a variety of reasons.
However, I sure hope that the (half imaginary?) Administrators who would hypothetically exist in some bureaucracy somewhere (if there was a benevolent and competent government) have noticed that paying two or three people $100k each to make lots of phone calls and do real math (and check each other's math) and invoke various kinds of legal authority to track down the real facts and ensure that nothing that bad happens is a no-brainer in terms of EV.
I see it. If you try to always start with a digit, then always follow with a decimal place, then the rest implies measurement precision, and the mantissa lets you ensure a dot after the first digit <3
The most amusing exceptional case I could think of: "0.1e1" :-D
This would be like "I was trying to count penguins by eyeball in the distance against the glare of snow and maybe it was a big one, or two huddled together, or maybe it was just a weirdly shaped rock... it could have been a count of 0 or 1 or 2."
There is a bit of a tradeoff if the notation aims to transmit the idea of measurement error.
I would read "700e6" as saying that there were three digits of presumed accuracy in the measurement, and "50e3" as claiming only two digits of confidence in the precision.
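A tiny sketch of that reading convention (the helper name is my own invention, and a sig-fig purist would treat leading zeros differently):

```python
def claimed_digits(literal: str) -> int:
    # Read the mantissa width of "XeN" notation as a precision claim.
    mantissa = literal.lower().split("e")[0]
    return sum(ch.isdigit() for ch in mantissa)

claimed_digits("700e6")  # 3 digits vouched for
claimed_digits("50e3")   # 2 digits
claimed_digits("0.1e1")  # 2 here, though that leading zero is the joke case
```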
If I knew that both were actually a measurement with a mere one part in ten of accuracy, and I was going to bodge the numeric representation for verbal convenience like this, it would give my soul a twinge of pain.
Also, if I'm gonna bodge my symbols to show how sloppy I'm being, like in text, I'd probably write 50k and 700M (pronounced "fifty kay" and "seven hundred million" respectively).
Then I'd generally expect people to expect me to be so sloppy with this that it doesn't even matter (like I haven't looked it up, to be precise about anything) if I meant to point to 50*10^3 or 50*2^10. In practice I would have meant roughly "both or either of these and I can't be arsed to check right now, we're just talking and not making spreadsheets or writing code or cutting material yet".
Something that has always seemed a bit weird to me is that it seems like economists normally assume (or seem to assume from a distance) that laborers "live to make money (at work)" rather than that they "work to have enough money (to live)".
Microeconomically, especially for parents, I think this is not true.
You'd naively expect, for most things, that if the price goes down, the supply goes down.
But for the labor of someone with a family, if the price given for their labor goes down in isolation, then they work MORE (hunt for overtime, get a second job, whatever) because they need to make enough to hit their earning goals in order to pay for the thing they need to protect: their family. (Things that really cause them to work more: a kid needs braces. Thing that causes them to work less: a financial windfall.)
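As a toy model (all numbers made up, just to show the shape of the logic), a "target earner" works backwards from a spending goal, so their labor supply curve slopes the "wrong" way:

```python
def target_earner_hours(hourly_wage: float, weekly_target: float = 1000.0) -> float:
    # Work however many hours it takes to hit the spending target.
    return weekly_target / hourly_wage

target_earner_hours(25.0)  # 40.0 hours/week
target_earner_hours(20.0)  # 50.0 hours/week: the wage FELL, hours ROSE
target_earner_hours(25.0, weekly_target=1200.0)  # 48.0: "a kid needs braces"
```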
Looking at that line, the thing it looks like to me is "the opportunity cost is REAL" but then also, later, the amount of money that had to be earned went up too (because of "another mouth to feed and clothe and provide status goods for and so on"). Maybe?
The mechanistic hypothesis here (that parents work to be able to hit spending targets which must rise as family size goes up) implies a bunch of additional details: (1) the husband's earnings should be tracked as well and the thing that will most cleanly go up is the sum of their earnings, (2) if a couple randomly has and keeps twins then the sum of the earnings should go up more.
Something I don't know how to handle is that (here I reach back into fuzzy memories and might be trivially wrong from trivially misremembering) prior to ~1980 having kids caused marriages to be more stable (maybe "staying together for the kids"?), and afterwards it caused marriages to be more likely to end in divorce (maybe "more kids, more financial stress, more divorce"?) and if either of those effects apply (or both, depending on the stress reactions and family values of the couple?) then it would entangle with the data on their combined earnings?
Scanning the paper for whether or how they tracked this led me to this bit (emphasis not in original), which gave me a small groan and then a cynical chuckle and various secondary thoughts...
As opposed to the fall in female earnings, however, we see no dip in male earnings. Instead, both groups of men continue to closely track each other’s earnings in the years following the first IVF treatment as if nothing has happened. Towards the end of the study period, the male earnings for both groups fall, which we attribute to the rising share of retired men.
(NOTE: this ~falsifies the prediction I made a mere 3 paragraphs ago, but I'm leaving that in, rather than editing it out to hide my small local surprise.)
If I'm looking for a hypothetical framing that isn't "uncomplimentary towards fathers" then maybe that could be spun as the idea that men are simply ALWAYS "doing their utmost at their careers" (like economists might predict, with a normal labor supply curve) and they don't have any of that mama bear energy where they have "goals they will satisfice if easy or kill themselves or others to achieve if hard" the way women might when the objective goal is the wellbeing of their kids?
Second order thoughts: I wonder if economists and anthropologists could collaborate here, to get a theory of "family economics" modulo varying cultural expectations?
I've heard of lots of anthropological stuff about how men and women in Africa believe that farming certain crops is "for men" or "for women" and then they execute these cultural expectations without any apparent microeconomic sensitivity (although the net upshot is sort of a reasonable portfolio that insures families against droughts).
Also, I've heard that on a "calorie in, calorie out" basis in hunter-gatherer cultures, it is the grandmothers who are the huge breadwinners (catch lots of rabbits with traps, and generally forage super efficiently) whereas the men hunt big game (which they and the grandmas know is actually inefficient, if an anthropologist asks this awkward question) so that, when the men (rarely) succeed in a hunt they can throw a big BBQ for the whole band and maybe get some nookie in the party's aftermath.
It seems like it would be an interesting thing to read a paper about: "how and where the weirdly adaptive foraging and family economic cultures even COME FROM".
My working model is that it is mostly just "monkey see, monkey do" on local role models, with re-calibration cycle times of roughly 0.5-2 generations. I remember writing a comment about mimetic economic learning in the past... and the search engine says it was for Unconscious Economics :-)
This is pretty cool. I think the fact that the cost is so low is almost a bit worrying. Because of reading this article, I'm likely to hum in the future due to "the potential non-trivial benefits compared to probably minuscule side effects and very low costs".
In some sense you've just made this my default operating hypothesis (and hence in some sense "an idea I give life to" or "enliven", and hence in some sense "a 'belief' of mine") not because I think it is true, but simply because it kinda makes sense and generalized prudence suggests that it probably won't hurt to try.
But also: I'm pretty sure this broader meta-cognitive pattern explains a LOT of superstitious behavior! ;-)
The other posting is here, if you're trying to get a full count of attendees based on the two posts for this one event.
There seem to be two of these postings for a single event? The other is here.
I think I'll be there and will bring a guest or three and will bring some basic potluck/picnic food :-)
There was an era in a scientific community where they were interested in the "kinds of learning and memory that could happen in de-corticated animals" and they sort of homed in on the basal ganglia (which, to a first approximation "implements habits" (including bad ones like tooth grinding)) as the locus of this "ability to learn despite the absence of stuff you'd think was necessary for your naive theory of first-order subjectively-vivid learning".
(The cerebellum also probably has some "learning contribution" specifically for fine motor skills, but it is somewhat selectively disrupted just by alcohol: hence the stumbling and slurring. I don't know if anyone yet has a clean theory for how the cerebellum's full update loop works. I learned about alcohol/cerebellum interactions because I once taught a friend to juggle at a party, and she learned it, but apparently only because she was drunk. She lost the skill when sober.)
Wait, you know smart people who have NOT, at some point in their life: (1) taken a psychedelic NOR (2) meditated, NOR (3) thought about any of buddhism, jainism, hinduism, taoism, confucianism, etc???
To be clear to naive readers: psychedelics are, in fact, non-trivially dangerous.
I personally worry I already have "an arguably-unfair and a probably-too-high share" of "shaman genes" and I don't feel I need exogenous sources of weirdness at this point.
But in the SF bay area (and places on the internet memetically downstream from IRL communities there) a lot of that is going around, memetically (in stories about) and perhaps mimetically (via monkey see, monkey do).
The first time you use a serious one you're likely getting a permanent modification to your personality (+0.5 stddev to your Openness?) and arguably/sorta each time you do a new one, or do a higher dose, or whatever, you've committed "1% of a personality suicide" by disrupting some of your most neurologically complex commitments.
To a first approximation my advice is simply "don't do it".
HOWEVER: this latter consideration actually suggests: anyone seriously and truly considering suicide should perhaps take a low dose psychedelic FIRST (with at least two loving tripsitters and due care) since it is also maybe/sorta "suicide" but it leaves a body behind that most people will think is still the same person and so they won't cry very much and so on?
To calibrate this perspective a bit, I also expect that even if cryonics works, it will also cause an unusually large amount of personality shift. A tolerable amount. An amount that leaves behind a personality that is similar-enough-to-the-current-one-to-not-have-triggered-a-ship-of-theseus-violation-in-one-modification-cycle. Much more than a stressful day and then bad nightmares and a feeling of regret the next day, but weirder. With cryonics, you might wake up to some effects that are roughly equivalent to "having taken a potion of youthful rejuvenation, and not having the same birthmarks, and also learning that you're separated-by-disjoint-subjective-deaths from LOTS of people you loved when you experienced your first natural death" for example. This is a MUCH BIGGER CHANGE than just having a nightmare and waking up with a change of heart (and most people don't have nightmares and changes of heart every night (at least: I don't and neither do most people I've asked)).
Remember, every improvement is a change, though not every change is an improvement. A good "epistemological practice" is sort of an idealized formal praxis for making yourself robust to "learning any true fact" and changing only in GOOD ways from such facts.
A good "axiological practice" (which I don't know of anyone working on except me (and I'm only doing it a tiny bit, not with my full mental budget)) is sort of an idealized formal praxis for making yourself robust to "humanely heartful emotional changes"(?) and changing only in <PROPERTY-NAME-TBD> ways from such events.
(Edited to add: Current best candidate name for this property is: "WISE" but maybe "healthy" works? (It depends on whether the Stoics or Nietzsche were "more objectively correct" maybe? The Stoics, after all, were erased and replaced by Platonism-For-The-Masses (AKA "Christianity") so if you think that "staying implemented in physics forever" is critically important then maybe "GRACEFUL" is the right word? (If someone says "vibe-alicious" or "flowful" or "active" or "strong" or "proud" (focusing on low latency unity achieved via subordination to simply and only power) then they are probably downstream of Heidegger and you should always be ready for them to change sides and submit to metaphorical Nazis, just as Heidegger subordinated himself to actual Nazis without really violating his philosophy at all.)))
I don't think that psychedelics fit neatly into EITHER category. Drugs in general are akin to wireheading, except wireheading is when something reaches into your brain to overload one or more of your positive-value-tracking-modules (as a trivially semantically invalid shortcut to achieving positive value "out there" in the state-of-affairs that your tracking modules are trying to track), but actual humans have LOTS of <thing>-tracking-modules and culture and science barely have any RIGOROUS vocabulary for any of them.
Note that many of these neurological <thing>-tracking-modules were evolved.
Also, many of them will probably be "like hands" in terms of AI's ability to model them.
This is part of why AIs should be existentially terrifying to anyone who is spiritually adept.
AI that sees the full set of causal paths to modifying human minds will be "like psychedelic drugs with coherent persistent agendas". Humans have basically zero cognitive security systems. Almost all security systems are culturally mediated, and then (absent complex interventions) lots of the brain stuff freezes in place around the age of puberty, and then other stuff freezes around 25, and so on. This is why we protect children from even TALKING to untrusted adults: they are too plastic and not savvy enough. (A good heuristic for the lowest level of "infohazard" is "anything you wouldn't talk about in front of a six year old".)
Humans are sorta like a bunch of unpatchable computers, exposing "ports" to the "internet", where each of our port numbers is simply a lightly salted semantic hash of an address into some random memory location that stores everything, including our operating system.
Your word for "drugs" and my word for "drugs" don't point to the same memory addresses in the computers implementing our souls. Also our souls themselves don't even have the same nearby set of "documents" (because we just have different memories n'stuff)... but the word "drugs" is not just one of the ports... it is a port that deserves a LOT of security hardening.
The bible said ~"thou shalt not suffer a 'pharmakeia' to live" for REASONS.
These are valid concerns! I presume that if "in the real timeline" there was a consortium of AGI CEOs who agreed to share costs on one run, and fiddled with their self-inserts, then they... would have coordinated more? (Or maybe they're trying to settle a bet on how the Singularity might counterfactually have happened in the event of this or that person experiencing this or that coincidence? But in that case I don't think the self inserts would be allowed to say they're self inserts.)
Like why not re-roll the PRNG, to censor out the counterfactually simulable timelines that included me hearing from any of the REAL "self inserts of the consortium of AGI CEOS" (and so I only hear from "metaphysically spurious" CEOs)??
Or maybe the game engine itself would have contacted me somehow to ask me to "stop sticking causal quines in their simulation" and somehow I would have been induced by such contact to not publish this?
Mostly I presume AGAINST "coordinated AGI CEO stuff in the real timeline" along any of these lines because, as a type, they often "don't play well with others". Fucking oligarchs... maaaaaan.
It seems like a pretty normal thing, to me, for a person to naturally keep track of simulation concerns as a philosophic possibility (it's kinda basic "high school theology", right?)... which might become one's "one track reality narrative" as a sort of "stress induced psychotic break away from a properly metaphysically agnostic mental posture"?
That's my current working psychological hypothesis, basically.
But to the degree that it happens more and more, I can't entirely shake the feeling that my probability distribution over "the time T of a pivotal act occurring" (distinct from when I anticipate I'll learn that it happened, which of course must be LATER than both T and later than now) shouldn't just include times in the past, but should actually be a distribution over complex numbers or something...
...but I don't even know how to do that math? At best I can sorta see how to fit it into exotic grammars where it "can have happened counterfactually" or so that it "will have counterfactually happened in a way that caused this factually possible recurrence" or whatever. Fucking "plausible SUBJECTIVE time travel", fucking shit up. It is so annoying.
Like... maybe every damn crazy AGI CEO's claims are all true except the ones that are mathematically false?
How the hell should I know? I haven't seen any not-plausibly-deniable miracles yet. (And all of the miracle reports I've heard were things I was pretty sure the Amazing Randi could have duplicated.)
All of this is to say, Hume hasn't fully betrayed me yet!
Mostly I'll hold off on performing normal updates until I see for myself, and hold off on performing logical updates until (again!) I see a valid proof for myself <3
For most of my comments, I'd almost be offended if I didn't say something surprising enough to get a "high interestingness, low agreement" voting response. Excluding speech acts, why even say things if your interlocutor or full audience can predict what you'll say?
And I usually don't offer full clean proofs in direct words. Anyone still pondering the text at the end, properly, shouldn't "vote to agree", right? So from my perspective... it's fine and sorta even working as intended <3
However, also, this is currently the top-voted response to me, and if William_S himself reads it I hope he answers here, if not with text then (hopefully? even better?) with a link to a response elsewhere?
((EDIT: Re-reading everything above this point, I notice that I totally left out the "basic take" that might go roughly like "Kurzweil, Altman, and Zuckerberg are right about compute hardware (not software or philosophy) being central, and there's a compute bottleneck rather than a compute overhang, so the speed of history will KEEP being about datacenter budgets and chip designs, and those happen on 6-to-18-month OODA loops that could actually fluctuate based on economic decisions, and therefore it's maybe 2026, or 2028, or 2030, or even 2032 before things pop, depending on how and when billionaires and governments decide to spend money".))
Pulling honest posteriors from people who've "seen things we wouldn't believe" gives excellent material for trying to perform aumancy... work backwards from their posteriors to possible observations, and then forwards again, toward what might actually be true :-)
I look forward to your reply!
(And regarding "food cost psychology" this is an area where I think Neo Stoic objectivity is helpful. Rich people can pick up a lot of hedons just from noticing how good their food is, and formerly poor people have a valuable opportunity to re-calibrate. There are large differences in diet between socio-economic classes still, and until all such differences are expressions of voluntary preference, and "dietary price sensitivity has basically evaporated", I won't consider the world to be post-scarcity. Each time I eat steak, I can't help but remember being asked in Summer Camp, as a little kid, whether "my family was rich", and not knowing... like the very first "objective calibrating response" accessible to us as children was the rate of my family's steak consumption. Having grown up in some amount of poverty, I often see "newly rich people" eating as if their health is not the price of slightly more expensive food, or as if their health is "not worth avoiding the terrible terrible sin of throwing food in the garbage" (which my aunt, who lived through the Great Depression in Germany, once yelled at me for doing, with great feeling, when I was a child and had eaten less than ALL the birthday cake that had been put on my plate). Cultural norms around food are fascinating and, in my opinion, are often rewarding to think about.)
What are your timelines like? How long do YOU think we have left?
I know several CEOs of small AGI startups who seem to have gone crazy and told me that they are self inserts into this world, which is a simulation of their original self's creation. However, none of them talk about each other, and presumably at most one of them can be meaningfully right?
One AGI CEO hasn't gone THAT crazy (yet), but is quite sure that the November 2024 election will be meaningless because pivotal acts will have already occurred that make nation state elections visibly pointless.
Also I know many normies who can't really think probabilistically and mostly aren't worried at all about any of this... but one normie who can calculate is pretty sure that we have AT LEAST 12 years (possibly because his retirement plans won't be finalized until then). He also thinks that even systems as "mere" as TikTok will be banned before the November 2024 election because "elites aren't stupid".
I think I'm more likely to be better calibrated than any of these opinions, because most of them don't seem to focus very much on "hedging" or "thoughtful doubting", whereas my event space assigns non-zero probability to ensembles that contain such features of possible futures (including these specific scenarios).
It was a time before LSTMs or Transformers, a time before Pearlian Causal Graphs, a time before computers.
Indeed, it was even a time before Frege or Bayes. It was a time and place where even Arabic numerals had not yet memetically infected the minds of people to grant them the powers of swift and easy mental arithmetic, and where non-syllabic alphabets (with distinct consonants and vowels) were still kinda new...
...in that time, someone managed to get credit for inventing the formalization of the syllogism! And he had a whole school for people to get naked and talk philosophy with each other. And he took the raw material of a simple human boy, and programmed that child into a world conquering machine whose great act of horror was to sack Thebes. (It is remarkable how many philosophers are "causally upstream, though a step or two removed" from giant piles of skulls. Hopefully, the "violent tragedy part" can be avoided this time around.)
Libertinism, logic, politics, and hypergraphia were his tools. His name was Aristotle. (Weirdly, way more people name their own children after the person-shaped-machine who was programmed to conquer the world, rather than the person-shaped programmer. All those Alexes and Alexandras, and only a very few Aristotles.)