The actual hard parts? Math probably doesn't help much directly, unfortunately. Mathematical thinking is good. You'll have to learn how to think in novel ways, so there's not even a vector anyone can point you in, except for pointers with a whole lot of "dereference not included" like "figure out how to understand the fundamental forces involved in what actually determines what a mind ends up trying to do long term" (https://tsvibt.blogspot.com/2023/04/fundamental-question-what-determines.html).
Some of the problems: https://tsvibt.blogspot.com/2023/03/the-fraught-voyage-of-aligned-novelty.html
A meta-philosophy discussion of what might work: https://tsvibt.blogspot.com/2023/09/a-hermeneutic-net-for-agency.html
I appreciate the pursuit of non-strawman understandings of misgivings around reprogenetics, and the pursuit of addressing them.
I don't feel I understand the people who talk about embryo selection as "killing embryos" or "choosing who lives and dies", but I want to and have tried, so I'll throw some thoughts into the mix.
First: Maybe take a look at: https://www.thenewatlantis.com/publications/the-anti-theology-of-the-body
Hart, IIUC, argues that wanting to choose who will live and who won't means you're evil and therefore shouldn't be making such choices. I think his argument is ultimately stupid, so maybe I still don't get it. But anyway, I think it's an importantly different sort of argument than the two you present. It's an indictment of the character of the choosers.
Second: When I tried to empathize with "life/soul starts at conception", what I got was:
- We want a simple boundary...
  - ... for political purposes, to prevent...
    - child sacrifice (which could make sense given the cults around the time of the birth of Christianity?).
    - killing mid-term fetuses, which might actually for real start to have souls.
  - ... for social purposes, because it causes damage to...
    - the would-be parents' souls to abort the thing which they do, or should, think of as having a soul.
    - the social norm / consensus / coordination around not killing things that people do or should orient towards as though they have souls.
- The pope said so. (...But then I'd like to understand why the pope said so, which would take more research.) (Something I said to a twitter-famous Catholic somehow caused him to seriously consider that, since Yermiahu says that god says "Before I formed you in the womb I knew you...", maybe it's ok to discard embryos before implantation...)
- (My invented explanation:) Souls are transpersonal. They are a distributed computation between the child, the parents, the village, society at large, and humanity throughout all time (god). As an embryo grows, the computation is, gradually, "handed off to / centralized in" the physical locus of the child. But already upon conception, the parents are oriented towards the future existence of the child, and are computing their part of the child's soul--which is most of what has currently manifested of the child's soul. In this way, we get:
  - From a certain perspective:
    - It reflects poorly on would-be parents who decide to abort.
    - It makes sense for the state to get involved to prevent abortion. (I don't agree with this, but hear me out:)
      - The perspective is one which does not acknowledge the possibility of would-be parents not mentally and socially orienting to a pregnancy in the same way that parents orient when they are intending to have children, or at least open to it and ready to get ready for it.
        - ...Which is ultimately stupid of course, because that is a possibility. So maybe this is still a strawman.
          - Well, maybe the perspective is that it's possible but bad, which is at least usefully a different claim.
  - From a certain perspective:
    - Within my invented explanation, the "continuous distributed metaphysics of the origins of souls", it is indeed the case that the soul starts at conception--BUT in fact it's fine to swap embryos! It's actually a strange biodeterminism to say that this clump of cells or that, or this genome or that, makes the person. A soul is not a clump of cells or a genome! The soul is the niche that the parents, and the village, have already begun constructing for the child; and, a little bit, the soul is the structure of all humanity (e.g. the heritage of concepts and language; the protection of rights; etc.).
I have personally been harmed by antibiotics (Cipro) and suffered an astonishing array of symptoms.
That sucks, yeah. So, I totally buy something like "in many cases, there's some medical intervention (such as fixing something about the microbiome) that would increase that person's effective cognitive capacity by quite a lot". As a really simple, broad example, getting 7 hrs sleep / night vs 4 hrs should be a very big boost for almost everyone.
The question I'm asking is "how can we get lots of super brilliant geniuses (of whatever flavor--artistic, philosophical, scientific, mathematical, political, empathetic, etc etc)"? Microbiomes might be important to get reasonably correct, in that if you get them wrong then they have a really bad effect, but I highly doubt you can take a normal person and make them a super brilliant person by tweaking their microbiome (at least in any reasonably normal / feasible way).
I find that the type of thing greatly affects how I want to engage with it. I'll just illustrate with a few extremal points:
- Philosophy: I'm almost entirely here to think, not to hear their thoughts. I'll skip whole paragraphs or pages if they're doing throat clearing. Or I'll reread 1 paragraph several times, slowly, with 10 minute pace+think in between each time.
- History: Unless I'm especially trusting of the analysis, or the analysis is exceptionally conceptually rich, I'm mainly here for the facts + narrative that makes the facts fit into a story I can imagine. Best is audiobook + high focus, maybe 1.3x -- 2.something x, depending on how dense / how familiar I already am. I find that IF I'm going linearly, there's a small gain to having the words turned into spoken language for me, and to keep going without effort. This benefit is swamped by the cost of not being able to pause, skip back, skip around, if that's what I want to do.
- Math / science: Similar to philosophy, though with much more variation in how much I'm trying to think vs get info.
- Investigating a topic, reading papers: I skip around very aggressively--usually there's just a couple sentences that I need to see, somewhere in the paper, in order to decide whether the paper is relevant at all, or to decide which citation to follow. Here I have to consciously firmly hold the intention to investigate the thing I'm investigating, or else I'll get distracted trying to read the paper (incorrect!), and probably then get bored.
Yes. The special language is supposed to have the property that Ak can automatically learn if Ak+1 plans good, bad, or unnecessary actions. An can't be arbitrarily smarter than humans, but it's a general intelligence which doesn't imitate humans and can know stuff humans don't know.
So to my mind, this scheme is at significant risk of playing a shell game with "how the AIs collectively use novel structures but in a way that is answerable to us / our values". You're saying that the simple AI can tell if the more complex AI's plans are good, bad, or unnecessary--but also the latter "can know stuff humans don't know". How?
In other words, I'm saying that making it so that
the AI generates concepts in a special language
but also the AI is actually useful at all, is almost just a restatement of the whole alignment problem.
- All note-taking systems hitherto have failed for a simple reason: they do not ask thinking what it needs. (I think you appreciate this already, just restating.) https://www.lesswrong.com/posts/CoqFpaorNHsWxRzvz/what-comes-after-roam-s-renaissance?commentId=CNK44LqKyh2EQZpJm
- I agree with your point that we should be looking to the human as the starting point. In fact, I think this means we should be asking for MENTAL tools for thinking FIRST. Maybe possibly LATER we could use software to help thinking, if it asks for it.
- The mental tool we should be looking at is LEXICOGENESIS. I've written about this at length: "The possible shared Craft of deliberate Lexicogenesis" But to summarize for this context: if we create resources that improve our lexicogenetic abilities (such as a more productive morphemicon, augmented grammars, augmented notation, or skill with making words (thinking of metaphors, using the morphemicon, clarifying / factoring ideas to put words to, etc.)), then we will be better able to think at the edge in the language of thinking at the edge.
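  As a toy gesture at what one such resource could look like in code (the morpheme entries and glosses below are invented for illustration, not a real morphemicon):

```python
# Toy sketch: a tiny "morphemicon" (morphemes with glosses) plus a combiner
# that proposes candidate coinages. Entries are invented for illustration.
morphemicon = {
    "lexico-": "of words / the lexicon",
    "noo-": "mind, intellect",
    "-genesis": "coming into being, origination",
    "-poiesis": "making, production",
}

def propose_coinages(prefixes, suffixes):
    """Cross prefixes with suffixes and show the composed glosses."""
    for p in prefixes:
        for s in suffixes:
            word = p.rstrip("-") + s.lstrip("-")
            print(f"{word}: {morphemicon[p]} + {morphemicon[s]}")

propose_coinages(["lexico-", "noo-"], ["-genesis", "-poiesis"])
# e.g. prints: lexicogenesis: of words / the lexicon + coming into being, origination
```

  The point isn't the code; it's that the morphemicon is the kind of shared, growable resource that makes coining words at the edge cheaper.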
An issue with long-form and asynchronous discourse is wasted motion. Without shared assumptions, the logic and info that locutor 1 adduces is less relevant to locutor 2 than to locutor 1. And, that effect becomes more pronounced as locutor 1 goes down a path of reasoning, constructing more context that locutor 2 doesn't share. (OTOH, long-form is better in terms of individual thinking.)
It was briefly in the 300s overall, and 1 or 2 in a few subcategory thingies.
[call turns out to be maybe logistically inconvenient]
It's OK if a person's mental state changes because they notice a pink car ("human object recognition" is an easier to optimize/comprehend process). It's not OK if a person's mental state changes because the pink car has weird subliminal effects on the human psyche ("weird subliminal effects on the human psyche" is a harder to optimize/comprehend process).
So, somehow you're able to know when an AI is exerting optimization power in "a way that flows through" some specific concepts? I think this is pretty difficult; see the fraughtness of inexplicitness or more narrowly the conceptual Doppelgänger problem.
It's extra difficult if you're not able to use the concepts you're trying to disallow, in order to disallow them--and it sounds like that's what you're trying to do (you're trying to "automatically" disallow them, presumably without the use of an AI that does understand them).
You say this:
But I don't get if, or why, you think that adds up to anything like the above.
Anyway, is the following basically what you're proposing?
Humans can check goodness of A0 because A0 is only able to think using stuff that humans are quite familiar with. Then A0 is able to oversee A1 because... (I don't get why; something about mapping primitives, and deception not being possible for some reason?) Then An is really smart and understands stuff that humans don't understand, but is overseen by a chain that ends in a good AI, A0.
I don't know whether this applies to you
I'm not sure. I did put in some effort to survey various strands of philosophy related to axiology, but not much effort. E.g. looked at some writings in the vein of Anscombe's study of intention; tried to read D+G because maybe "machines" is the sort of thing I'm asking about (was not useful to me lol); have read some Heidegger; some Nietzsche; some more obscure things like "Care Crosses the River" by Blumenberg; the basics of the "analytical" stuff LWers know (including doing some of my own research on decision theory); etc etc. But in short, no, none of it even addresses the question--and the failure is the sort of failure that was supposed to have its coarsest outlines brought to light by genuinely Socratic questioning, which is why I call it "pre-Socratic", not to say that "no one since Socrates has billed themselves as talking about something related to values or something".
I think even communicating the question would take a lot of work, which as I said is part of the problem. A couple hints:
- https://www.lesswrong.com/posts/NqsNYsyoA2YSbb3py/fundamental-question-what-determines-a-mind-s-effects (I think if you read this it will seem incredibly boringly obvious and trivial, and yet, literally no one addresses it! Some people sort of try, but fail so badly that it can't count as progress. Closest would be some bits of theology, maybe? Not sure.)
- https://www.lesswrong.com/posts/p7mMJvwDbuvo4K7NE/telopheme-telophore-and-telotect (I think this distinction is mostly a failed attempt to carve things, but the question that it fails to answer is related to the important question of values.)
- You should think of the question of values as being more like "what is the driving engine" rather than "what are the rules" or "what are the outcomes" or "how to make decisions" etc.
"A lot of progress".... well, reality doesn't grade on a curve. Surely someone has said something about something, yes, but have we said enough about what matters? Not even close. If you don't know how inadequate our understanding of values is I can't convince you in a comment, but one way to find out would be to try to solve alignment. E.g. see https://tsvibt.blogspot.com/2023/03/the-fraught-voyage-of-aligned-novelty.html
It seems to me that values have been a main focus of philosophy for a long time. I'm curious about the rationale behind your suggestion.
Specifically the question of "what values are" I don't think has been addressed (I've looked around some, but certainly not thoroughly). A key problem with previous philosophy is that values are extreme in how much they require some sort of mental context (https://www.lesswrong.com/posts/HJ4EHPG5qPbbbk5nK/gemini-modeling). Previous philosophy (that I'm aware of) largely takes the mental context for granted, or only highlights the parts of it that are called into question, or briefly touches on it. This is pretty reasonable if you're a human talking to humans, because you do probably share most of that mental context. But it fails on two sorts of cases:
1. trying to think about or grow/construct/shape really alien minds, like AIs;
2. trying to exert human values in a way that is good but unnatural (think for example of governments, teams, "superhuman devotion to a personal mission", etc.).
The latter, 2., might have, given more progress, helped us be wiser.
My comment was responding to
it was a bad idea to invent things like logic, mathematical proofs, and scientific methodologies, because it permanently accelerated the wrong things (scientific and technological progress) while giving philosophy only a temporary boost (by empowering the groups that invented those things, which had better than average philosophical competence, to spread their culture/influence).
So I'm saying, in retrospect on the 2.5 millennia of philosophy, it plausibly would have been better to have an "organology, physiology, medicine, and medical enhancement" of values. To say it a different way, we should have been building the conceptual and introspective foundations that would have provided the tools with which we might have been able to become much wiser than is accessible to the lone investigators who periodically arise, try to hack their way a small ways up the mountain, and then die, leaving mostly only superficial transmissions.
whereas metaphilosophy has received much less attention.
I would agree pretty strongly with some version of "metaphilosophy is potentially a very underserved investment opportunity", though we don't necessarily agree (because of having "very different tastes" about what metaphilosophy should be, amounting to not even talking about the same thing). I have ranted several times to friends about how philosophy (by which I mean metaphilosophy--under one description, something like "recursive communal yak-shaving aimed at the (human-)canonical") has barely ever been tried, etc.
I want to flag that the position that we could have understood values/philosophy without knowing about math/logic is a fictional world/fabricated option.
Maybe, but I don't believe that you know this. Lots of important concepts want to be gotten at by routes that don't use much math or use quite different math from "math to understand computers" or "math to formalize epistemology". Darwin didn't need much math to get lots of the core structure of evolution by natural selection on random mutation.
the current human values are enough to express corrigibility
Huh? Not sure I understand this. How is this the case?
(I may have to tap out, because busy. At some point we could have a call to chat--might be much easier to communicate in that context. I think we have several background disagreements, so that I don't find it easy to interpret your statements.)
Can you make the problem statement more precise
No, that's part of the problem. There's pretheoretic material, as something of a starting point, here:
https://www.lesswrong.com/posts/YLRPhvgN4uZ6LCLxw/human-wanting
Whatever those things are, you'd want to understand the context that makes them what they are:
https://www.lesswrong.com/posts/HJ4EHPG5qPbbbk5nK/gemini-modeling
And refactor the big blob into lots of better concepts, which would probably require a larger investigation and conceptual refactoring:
https://www.lesswrong.com/posts/TNQKFoWhAkLCB4Kt7/a-hermeneutic-net-for-agency
In particular so that we understand how "values" can be stable (https://www.lesswrong.com/posts/Ht4JZtxngKwuQ7cDC/tsvibt-s-shortform?commentId=koeti9ygXB9wPLnnF) and can incorporate novel concepts / deal with novel domains (https://www.lesswrong.com/posts/CBHpzpzJy98idiSGs/do-humans-derive-values-from-fictitious-imputed-coherence) and eventually address the stuff here: https://www.lesswrong.com/posts/ASZco85chGouu2LKk/the-fraught-voyage-of-aligned-novelty
The issue is that the following is likely true according to me, though controversial:
The type of mind that might kill all humans has to do a bunch of truly novel thinking.
To have our values interface appropriately with these novel thinking patterns in the AI, including through corrigibility, I think we have to work with "values" that are the sort of thing that can refer / be preserved / be transferred across "ontological" changes.
Quoting from https://tsvibt.blogspot.com/2023/09/a-hermeneutic-net-for-agency.html:
Rasha: "This will discover variables that you know how to evaluate, like where the cheese is in the maze--you have access to the ground truth against which you can compare a reporter-system's attempt to read off the position of the cheese from the AI's internals. But this won't extend to variables that you don't know how to evaluate. So this approach to honesty won't solve the part of alignment where, at some point, some mind has to interface with ideas that are novel and alien to humanity and direct the power of those ideas toward ends that humans like."
When it looks like humans form preferences about incomprehensible things, they really form preferences only about comprehensible properties of those incomprehensible things
Then you're not talking about human values, you're talking about [short timescale implementations of values] or something.
I'd suggest that trying to understand what values are would potentially have been a better direction to emphasize. Our understanding here is still pre-Socratic, basically pre-cultural.
Is everyone dropping the ball on cryonics
More or less AFAIK. (See though https://www.amazon.com/Future-Loves-You-Should-Abolish-ebook/dp/B0CW9KTX76 )
I have a pet lay-speculation that there's a pretty mathematically interesting question here, which hasn't been understood yet. I can't formulate the question clearly, but it's something like: "What sort of thing are these states?" We can abstractly talk about stable states of high-dimensional dynamical systems, but this probably isn't very satisfying or helpful in this context. There's some more practical or concrete or specific things we might want to know about the landscape of possible stable or quasi-stable states for gene regulatory networks, and how they transition, and how one could perturb them.
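As a purely illustrative sketch of what I mean by "what sort of thing are these states" (a tiny random Boolean network, nothing like the real biology), one can enumerate the attractors of a toy gene-regulatory dynamics:

```python
import itertools, random

# Toy illustration (not a real biological model): a random Boolean
# gene-regulatory network with N genes. Each gene's next state is a fixed
# random function of K regulator genes. "Stable states" here are the
# attractors of this dynamics.
N, K = 8, 2
random.seed(0)
regulators = [random.sample(range(N), K) for _ in range(N)]
tables = [{inputs: random.randint(0, 1)
           for inputs in itertools.product((0, 1), repeat=K)}
          for _ in range(N)]

def step(state):
    """Update every gene synchronously from its regulators."""
    return tuple(tables[g][tuple(state[r] for r in regulators[g])] for g in range(N))

def attractor(state):
    """Iterate until the trajectory revisits a state; return the cycle it falls into."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    cycle_start = seen[state]
    return frozenset(s for s, i in seen.items() if i >= cycle_start)

# Enumerate attractors reachable from all 2^N initial states (feasible for tiny N).
attractors = {attractor(s) for s in itertools.product((0, 1), repeat=N)}
for a in sorted(attractors, key=len):
    print(f"attractor of length {len(a)}, e.g. state {sorted(a)[0]}")
```

The unclear (to me) question is what the analogous landscape looks like for real, high-dimensional, noisy networks, and which perturbations move the system between basins.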
By default, humans only care about variables they could (in principle) easily optimize or comprehend.
I think this is incorrect. I think humans have values which are essentially provisional. In other words, they're based on pointers which are supposed to be impossible to fully dereference. Examples:
- Friendship--pointing at another mind, who you never fully comprehend, who can always surprise you--which is part of the point
- Boredom / fun--pointing at surprise, novelty, diagonalizing against what you already understand
At the moment, the US government is calling for deregulation suggestions: https://www.regulations.gov/deregulation. If there's someone who understands how the US Code of Federal Regulations works, and would be up for making a couple submissions, one or two of the policy recommendations here might be doable. E.g. maybe we can delete some small subset of the CITES treaty so that cell lines can be imported.
(IIUC, this call for suggestions is only for stuff in the US CFR, which is regulations implemented by executive branch departments to fulfill Congressional laws? So we can't directly address federal or state laws this way, though maybe the implementations of these laws can be modified.)
My version:
Probably too understated, but it's the sort of thing I like.
GoogleDraw link if anyone wants to copy and modify: https://docs.google.com/drawings/d/10nB-1GC_LWAZRhvFBJnAAzhTNJueDCtwHXprVUZChB0/edit
Yeah unfortunately it seems to be the case that no one has really seriously tried (ie invested a lot of resources, on the scale of a large company or a government) to do R&D on significantly increasing IQ in healthy people through drugs...
Ok.
- Cringe. But,
- If anyone is reading this, if Dw629's claims are true, this is a place where everyone's dropping the ball for no good reason, so you could have the ball!
I really do recommend talking with the people at Nootopics.
Yep... If I find the time/energy I'll do so.
Thanks for your help!
Very interesting, thanks. I've now read most of your links. Obviously I can't actually evaluate them but they seem intriguing... Especially because IIUC they at least allege positive effects working on different regions of the brain (and contributing to improvements on different sorts of tests), which suggests maybe they can stack.
I take your point that no one's really trying. Has anyone really tried to really try? For example, has someone who actually knows their stuff tried working out a plausible market plan (e.g. how to deal with regulation), and then tried to get venture capital, for intelligence enhancement? I guess there's tons of stuff sold as mind enhancing, though presumably it's mostly useless; and if these are all research chemicals from pharma companies then they'd be hard to sell... Or, has anyone tried a noncommercial (philanthropic, say) angle? Maybe I should talk to the Noo people.
Hm. TBC, the broader category would be "molecule that would activate master regulation of one or more gene regulatory networks related to brain function", e.g. a hormone but maybe also some other things.
Thanks. Seems worth looking into more. I googled the first few on your list, and they're all described as working via some neurotransmitter / receptor type, as an agonist, antagonist, or reuptake inhibitor. Not everything on the list is like that (I recognize ginkgo biloba as being related to blood flow). But I don't think these sorts of things would stack at all, or even necessarily help much with someone who isn't sick / has some big imbalance or whatever.
My hope for something like this existing is a bit more specific. It comes from thinking that there should be small levers with large effects, because natural development probably pulls some such levers which activate specific gene regulatory networks at different points--e.g. first we pull the [baby-type brain lever], then the [5 year old brain lever], etc.
Thanks. One of the first places I'd look would be hormones, which IIUC don't count as small molecules? Though maybe natural hormones have already been tried? But I'd wonder about more obscure or risky ones, e.g. ones normally only active in children.
Periodic reminder: AFAIK (though I didn't look much) no one has thoroughly investigated whether there's some small set of molecules, delivered to the brain easily enough, that would have some major regulatory effects resulting in greatly increased cognitive ability. (Feel free to prove me wrong with an example of someone plausibly doing so, i.e. looking hard enough and thinking hard enough that if such a thing was feasible to find and do, then they'd probably have found it--but "surely, surely, surely someone has done so because obviously, right?" is certainly not an accepted proof. And don't call me Shirley!)
I'm simply too busy, but you're not!
(Thank you, seems valuable!)
Our experience has been that talking to Congressional staffers about a ban or pause on superintelligence research tends to result in blank stares and a rapid end to the meeting. [...] A global moratorium [....] we don’t see anything that we can do to help make that happen.
Ok. Thank you for the info. Would you speculate a bit about what might change this, that other people might be able to do? E.g. what number of call-ins to their offices from constituents, or how many protests, or what industry testimony, or how much campaign funding, etc.
I mean, I'm not familiar with the whole variety of different ways and reasons that people attack other people as "racist". I'm just saying that only saying true statements is not conclusive evidence that you're not a racist, or that you're not having the effect of supporting racist coalitions. I guess this furthermore implies that it can be justified to attack Bob even if Bob only says true statements, assuming it's sometimes justified to attack people for racist action-stances, apart from any propositional statements they make--but yeah, in that case you'd have to attack Bob for something other than "Bob says false statements", e.g. "Bob implicitly argues for false statements via emphasis" or "Bob has bad action-stances".
Huh? No? Filling in the missing narrative can take a bunch of work, like days or months of study. (What is it even a cope for?)
The term "racist" usually carries the implication or implicature of an attitude that is merely based on an irrational prejudice, not an empirical hypothesis with reference to a significant amount of statistical and other evidence.
It is also possible that Bob is racist in the sense of successfully working to cause unjust ethnic conflict of some kind, but also Bob only says true things. Bob could selectively emphasize some true propositions and deemphasize others. The richer the area, the more you can pick and choose, and paint a more and more outrage-inducing, one-sided story (cf. Israel/Palestine conflict). If I had to guess, in practice racists do systematically say false things; but a lot of the effect comes from selective emphasis.
Things can get even more muddied if people are unepistemically pushing against arguments that X; then someone might be justified in selectively arguing for X, in order to "balance the scales". That could be an appropriate thing to do if the only problem was that some group was unepistemically pushing against X--you correct the shared knowledge pool by bringing back in specifically the data that isn't explained by the unepistemic consensus. But if X is furthermore some natural part of a [selective-emphasis memeplex aimed at generating political will towards some unjust adversariality], then you look a lot like you're intentionally constructing that memeplex.
(Not implying anything about Cremieux, I'm barely familiar with his work.)
See Jessica's comment. Yeah it's primitive recursive assuming that your deductive process is primitive recursive. (Also assuming that your traders are primitive recursive; e.g. if they are polytime as in the paper.) There's probably some other parameters not necessarily set in the implementation described in the paper, e.g. the enumerator of trader-machines, but you can make those primrec.
I wish more people were interested in lexicogenesis as a serious/shared craft. See:
The possible shared Craft of deliberate Lexicogenesis: https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html (lengthy meditation--recommend skipping around; maybe specifically look at https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html#seeds-of-the-shared-craft)
Sidespeak: https://tsvibt.github.io/theory/pages/bl_25_04_25_23_19_30_300996.html
Tiny community: https://lexicogenesis.zulipchat.com/ Maybe it should be a discord or a reddit, thoughts?
Thanks for doing this! Clarifying things is good.
I think occasionally some lotteries are positive or neutral-ish EV, when the jackpots are really big (like >$1 billion)? Not sure. You have to check the taxes and the payment schedules etc.
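E.g., a back-of-the-envelope version of that check, where the jackpot, odds, tax rate, and lump-sum discount are all made-up numbers just to show the shape of the calculation:

```python
# Rough expected-value check for a big-jackpot lottery, with made-up numbers.
# All figures here are illustrative assumptions, not real lottery parameters.
jackpot = 1.5e9          # advertised annuity jackpot
lump_sum_fraction = 0.5  # cash option is typically a large discount
tax_rate = 0.40          # combined federal + state, assumed
odds = 1 / 300_000_000   # chance of hitting the jackpot, assumed
ticket_price = 2.0

after_tax_cash = jackpot * lump_sum_fraction * (1 - tax_rate)
ev = odds * after_tax_cash - ticket_price
print(f"after-tax cash value: ${after_tax_cash:,.0f}")
print(f"expected value per ticket: ${ev:.2f}")
# Ignores smaller prizes and the chance of splitting the jackpot,
# both of which matter for a serious estimate.
```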
These arguments are so nonsensical that I don't know how to respond to them without further clarification, and so far the people I've talked to about them haven't provided that clarification. "Programming" is not a type of cognitive activity any more than "moving your left hand in some manner" is. You could try writing out the reasoning, trying to avoid enthymemes, and then I could critique it / ask followup questions. Or we could have a conversation that we record and publish.
https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
I won't say why I think this, but I'll give another reason that should make you more seriously consider this: their sample complexity sucks.
Just think of anything that you've wanted to use a gippity to understand, but it didn't quickly work and you tried to ask it followup questions and it didn't understand what was happening / didn't propagate propositions / didn't clarify / etc.
For its performances, current AI can pick up to 2 of 3 from:
- Interesting (generates outputs that are novel and useful)
- Superhuman (outperforms humans)
- General (reflective of understanding that is genuinely applicable cross-domain)
AlphaFold's outputs are interesting and superhuman, but not general. Likewise other Alphas.
LLM outputs are a mix. There's a large swath of things that they can do superhumanly, e.g. generating sentences really fast or various kinds of search. Search is, we could say, weakly novel in a sense; LLMs are superhumanly fast at doing a form of search which is not very reflective of general understanding. Quickly generating poems with words that all start with the letter "m", or very quickly and accurately answering stereotyped questions like analogies, is superhuman, and reflects a weak sort of generality, but is not interesting.
ImageGen is superhuman and a little interesting, but not really general.
Many architectures + training setups constitute substantive generality (can be applied to many datasets), and produce interesting output (models). However, considered as general training setups (i.e., to be applied to several contexts), they are subhuman.
It's straightforward to disprove: they should be able to argue for their views in a way that stands up to scrutiny.
I'd like to see more intellectual scenes that seriously think about AGI and its implications. There are surely holes in our existing frameworks, and it can be hard for people operating within them to spot. Creating new spaces with different sets of shared assumptions seems like it could help.
Absolutely not, no, we need much better discovery mechanisms for niche ideas that only isolated people talk about, so that the correct ideas can be formed.
Thank you for writing this!
Hm. I super like the notion and would like to see it implemented well. The very first example was bad enough to make me lose interest: https://russellconjugations.com/conj/1eaace137d74861f123219595a275f82 (Text from https://www.thenewatlantis.com/publications/the-anti-theology-of-the-body)
So I tried the same thing but with more surrounding text... and it was much better!... though not actually for the subset I'd already tried above. https://russellconjugations.com/conj/3a749159e066ebc4119a3871721f24fc
A longer sentence is produced by, and is asking the reader to be, putting more things together in the same [momentary working memory context]. Has advantages and disadvantages, but is not the same.
Yes, and this also applies to your version! For difficult or subtle thoughts, short sentences have to come strictly after the long sentences. If you're having enough such thoughts, it doesn't make sense to restrict long sentences out of communication channels; how else are you supposed to have the thoughts?
On second/third thought, I think you're making a good point, though also I think you're missing a different important point. And I'm not sure what the right answers are. Thanks for your engagement... If you'd be interested in thinking through this stuff in a more exploratory way on a recorded call to be maybe published, hopefully I'll be set up for that in a week or two, LMK.