It's also not really a movie so much as a live recording of a stage play. But I agree it's fantastic (honestly, I'd be comfortable calling it Aladdin rational fanfiction).
Also a little silly detail I love about it in hindsight:
During the big titular musical number, all the big Disney villains show up on stage to make a case for themselves and for why what they wanted was right - though some of their cases were quite a stretch. Even amidst this collection of selfish, entitled people, when Cruella De Vil shows up to say "I only wanted a coat made of puppies!" she elicits disgust and gets kicked out by her fellow villains, having crossed a line. Then later on Disney thought it was a good idea to unironically give her the Wicked treatment in "Cruella".
It must be noted that all that subtext is entirely the product of the movie adaptation. The short story leaves absolutely no room for doubt, and in fact concludes on a punchline that rests on that.
This muddies the alienness of AI representation quite a bit.
I don't think that's necessarily it. For example, suppose we build some kind of potentially dangerous AGI. We're pretty much guaranteed to put some safety measures in place to keep it under control. Suppose these measures are insufficient and the AGI manages to deceive its way out of the box - and we somehow still live to tell the tale and ask ourselves what went wrong. "You treated the AGI with mistrust, therefore it similarly behaved in a hostile manner" is guaranteed to be one of the interpretations that pop up (you already see some of this logic in people equating alignment with wanting to enslave AIs, and claiming it is thus more likely to make them willing to rebel). And if you did succeed in making a truly "human" AI (not outside the realm of possibility if you're training it on human content/behaviour to begin with), that would be a possible explanation - after all, it's very much what a human would do. So is the AI so human it reacted to attempts to control it as a human would - or so inhuman it merely backstabbed us without the least hesitation? That ambiguity exists with Ava, but I also feel like it would exist in any comparable IRL situation.
Anyway "I am Mother" sounds really interesting, I need to check it out.
Only tangentially related, but one very little-known movie that I enjoyed is the Korean sci-fi "Jung_E". It's not about "alien" AGI but rather about human brain uploads used as AGI. It's quite depressing, along the lines of that qntm story you may have read on the same topic, but it felt like a pretty thoughtful treatment of a concept that rarely makes it into mainstream cinema.
Curious - what other AI depictions are you considering/comparing it to? I'm not 100% sure what my best pick would be; I find good bits and pieces here and there in several movies (Ex Machina, 2001: A Space Odyssey, even the very cheesy but surprisingly not entirely unserious M3gan), but maybe no single organic example I'd place above the rest.
"It's like Mother of Learning, but if it was a cozy romance instead of high fantasy."
This still feels like instrumentality. I guess maybe the addition is that it's a sort of "when all you have is a hammer" situation; as in, even when the optimal strategy for a problem does not involve seeking power (assuming such a problem exists; really, I'd say the question is what the optimal trade-off is between seeking power and using it), the AI would be more liable to err on the side of seeking too much power, because that just happens to be such a common successful strategy that it's sort of biased towards it.
If what you mean by 'normalize everything' is to only consider the quantum weights (which are finite as mathematical measures) and not the number of worlds, then that seems more a case of ignoring those problems rather than addressing them.
I mean that however many universes get created will be created anyway, just as a consequence of time passing, so the total doesn't matter. If your actions e.g. cause misery in 20% of those worlds, then the fraction is all that matters; the worlds will exist anyway, and their total number is not something you're affecting or controlling.
This third approach is based on the idea that 'worlds' are macroscopic, emergent phenomena created through decoherence (Wallace's book contains a full mathematical treatment of this). This supports both the claim that the number of worlds is indefinite (since it depends on ultimately arbitrary mappings of macroscopic to microscopic states) and the claim that worlds are created through quantum processes (since they are macroscopically indistinguishable before decoherence occurs). My point in the post was that these two claims in combination can avoid the repugnant conclusion via the approach of focusing on the weights.
I honestly don't think decoherence means the worlds are indefinite. I think it means they are an infinite continuum with the cardinality of the reals. Decoherence is just something you observe when you divide system from environment, in reality the Universe should have only a single, always coherent, giant wavefunction.
I feel like the branches being in fact an uncountable continuum is essentially a given, at least unless we were to fundamentally rewrite quantum mechanics to use something other than complex numbers, which have the cardinality of the continuum, $2^{\aleph_0}$. Talking about branches in terms of countable outcomes only makes sense if we group them by measurement outputs for specific discrete observables; but each of the uncountable infinity of worlds will continuously spawn uncountably many more worlds, and that's just something you gotta deal with. If you want to do ethics over this very confusing multiverse, your best bet is probably to normalize everything - "adjust for inflation", so to speak.
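To put the "adjust for inflation" idea in symbols (a minimal sketch, using the standard Born weights): if branch $i$ carries weight $w_i$ and moral value $v_i$, you evaluate an action by

$$V = \sum_i w_i \, v_i, \qquad \sum_i w_i = 1.$$

This stays well-defined however finely you slice the branches, since splitting a branch just redistributes its weight among the pieces.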
I also don't think that even if the worlds were countable (and I have seen arguments to the effect of "actually only integer numbers exist, and thus if we looked close enough we'd find that all equations and fields etc. are discrete-valued") this would make a lot of difference. Performing or not performing the experiment does not create more branches; it just determines the outcome of branches that would already exist anyway. Assuming that we can purposefully create branches would require defining "measurement" as an actual discrete, specific process, which is a much stronger claim (I don't think any non-objective interpretation of QM really suggests how to do that, though some gesture towards such a thing existing in theory; and objective-collapse QM theories do not admit many worlds). "By looking at specific phenomena, sentient beings create new world-lines" would certainly be A Take; if true, it would beget an ethical nightmare, the Quantum Repugnant Conclusion: that we all ought to spend all our time collapsing the wavefunctions that result in the most new worlds being created.
(as a side note, have you read Quarantine, by Greg Egan? I won't explain how, to avoid spoiling it, but it deals with precisely these sorts of questions)
I don't think this is quite the same thing. Most people actually don't want to have to apply moral thought to every single aspect of their lives; it's exhausting. The ones who are willing to, and who try to push this mindset on others, are often singularly focused. Yes, bundling people and ideas into broad clusters is itself a common simplification we gravitate towards as a way to understand the world, but that does not prevent people from still being perfectly willing to accept some activities as fundamentally non-political.
Pretty much. It's not "naive" if it's literally the only option that actually does not harm everyone involved, unless of course we want to call every world leader and self-appointed foreign policy expert a blithering idiot with tunnel vision (I make no such claim a priori; ball's in their court).
It's important to not oversimplify things. It's also important to not overcomplicate them. Domain experts tend to be resistant to the first kind of mental disease, but tragically prone to the second. Sometimes it really is Just That Simple, and everything else is commentary and superfluous detail.
Agree 100% with all of this.
There is one thing that comes to mind which people who argue that "everything is political", and that neutrality is an evil ploy to sneak in your evil ideas, really underestimate: the point of impartiality as you describe it is to keep things simpler. Maybe a God with an infinite mind could hold all the issues, all the complexities, all the nuances in it simultaneously, and continuously figure out the optimal path. But we can't. We come up with simple rules like "if you're a doctor, you have a duty to cure anyone, not pick and choose" because they make things more straightforward and decouple domains. Doctors cure people. If you do crimes, there's a system dedicated to punishing you. But a doctor's job is different, and the knowledge they need to do it has nothing to do with your rap sheet.
The frenzy to couple everything into a single tangle of complexity is driven by the misunderstanding that complacency is the only reason why your ideology is not the winning one, and that if only everyone were forced to think about it all of the time, they'd end up agreeing with it. But in reality, decoupling is necessary mostly because it keeps the world cognitively accessible rather than driving us into perpetual decision paralysis or perpetual paranoia (or worse, both). Destroying that doesn't give anyone victory; we all just end up worse off.
But by believing that, they automatically become not Catholic any more, according to the definition of Catholic given by the Catholic Boss, who is also the only one with the right to make the rules. If they state it openly they are liable to be excommunicated, though of course most of the time no one will care (even in much darker times the Inquisition probably wouldn't come after every nobody who said something blasphemous once).
I think the crux here is the "relative" poverty aspect. Comparison with others is actually really important, it turns out. Going to Disneyland isn't just a net positive; not going to Disneyland can be a negative if your kids expect it and all their friends are going. A lot of human activities are aimed at winning status games with other humans, and in that sense, in our society of abundance, marketing has vastly offset those gains by making sure it's painfully clear which things mark you as rich and which aren't worth all that much. So basically the Poverty Restoring force is "other people". No matter the actual material conditions, there's always going to be, by definition, a bottom something-percentile in status, and they'll be frustrated by this condition and will try to get out of it to earn some respect from the rest of society.
Yeah, I found it pretty soon after.
Is anyone actually around? I can't find the spot.
I think your model only applies to some famous cases, but ignores others. Who invented computers? Who invented television networks? Who invented the internet?
Lots of things have inventors and patents only for specific chunks of them, or specific versions, but are as a whole too big to be encompassed. They're not necessarily very well defined technologies, but systems and concepts that can be implemented in many different ways. In these fields, focusing on patents is likely to be a losing strategy anyway, as you'll simply stand still to protect your one increasingly obsolete good idea, like Homer Simpson in front of his sugar, while everyone else runs circles around you with their legally distinct versions of the same thing that they keep iterating and improving on. I think AI and even LLMs fall under this category. It's specifically quite hard to patent algorithms - and a good thing too, or it would have a real chilling effect on the whole field. I think you can patent only a specific implementation of them, but that's very limited; you can't patent the concept of a self-attention layer, for example, as that's just math. And that kind of thing is all it takes to build your own spin on an LLM anyway.
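For concreteness on the "just math" point: the standard scaled dot-product self-attention from the "Attention Is All You Need" paper boils down to

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$

where $Q$, $K$ and $V$ are linear projections of the input - a few lines of linear algebra, not the kind of thing you can fence off with a patent.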
Omnicide I can get behind, but patent infringement would be a bridge too far!
I think in general it's mostly 1); obviously "infinite perfect bathroom availability everywhere" isn't a realistic goal, so this is about striking a compromise that is nevertheless more practical than the current situation. For things like these, honestly, I am disinclined to trust private enterprise too much - especially if left completely unregulated - but I am willing to concede that it's not a core value of mine. Obviously I wouldn't want the sidewalk to be entirely crowded out by competing paid chemical toilets, though; that solves one problem but creates another.
Since the discussion here started around homelessness, and homeless people obviously wouldn't be able to pay for private bathrooms (especially if these did the obvious thing for convenience and forwent coins in favour of some kind of subscription service, payment via app, or such), I think the best solution would be free public bathrooms, and I think they would pay for themselves in terms of gains in comfort and cleanliness for the people living in the neighborhood. They should be funded locally, of course. Absent that, though, sure, I think removing some barriers to private suppliers of paid bathroom services would still be better than this.
My wife was put on benzodiazepines not long ago for a wisdom tooth extraction, same as the author of that post. She did manifest some of the same behaviours (e.g. asking the same thing repeatedly). But your plan to make people in those conditions take an IQ test has a flaw: she was also obviously high as balls. No way her cognitive abilities weren't cut down to like half of the usual. Not sure if this is a side effect of the loss of short-term memory or a different effect of the sedatives, but yeah, this would absolutely impact an experiment IMO.
No, sorry, it's not that I didn't find it clear, but I thought it was kind of an irrelevant aside - it's obviously true (though IMO going to a barista and passing a bill while whispering "you didn't see anything" might not necessarily work that well either), but my original comment was about the absurdity of the lack of systemic solutions, so saying there are individual ones doesn't really address the main claim.
We're discussing whether this is a systemic problem, not whether there are possible individual solutions. We can come up with solutions just fine; in fact, most of the time you can just waltz in, go to the bathroom, and no one will notice. But "everyone pays bribes to the barista to go to the bathroom" makes absolutely no sense as a universal rule compared to "we finally acknowledge this is an issue and thus incorporate it squarely into our ordinary services instead of making up weird and unnecessary work-arounds".
Tipping the barista is not really sticking to the rules of the business, though. It's bribing the watchman to turn a blind eye, and the watchman must take the bribe (and deem it worth the risk).
Which is probably why there were apparently >50,000 pay bathrooms in the USA before some activists got them outlawed.
Oh, I didn't know this story. Seems like a prime example of "be careful what economic incentives you're setting up". All that banning paid toilets has achieved is... fewer toilets, not more free ones.
Though I wonder if nowadays you could run a public toilet merely by plastering it with ads.
Why is it better to pay an explicit bathroom-providing business than to pay a cafe (in the form of buying a cup of coffee)? It strikes me as a distinction without real difference, but maybe I'm confused.
Economically speaking, if to acquire good A (which I need) I also have to acquire good B (which I don't need and is more expensive), thus paying more than I would pay for good A alone, using up resources and labor I didn't need and that were surely better employed elsewhere, that seems to me like a huge market inefficiency.
Imagine this happening with anything else. "I want a USB cable." "Oh, we don't sell USB cables on their own, that would be ridiculous. But we do include them with the chargers that come with smartphones, so if you want a USB cable, you can buy a smartphone." Would that make sense?
Honestly, if the proportions of those roles were true to real life, I would simply never take the lottery; that's a near-certainty of ending up a peasant. I guess they must have made things a bit more friendly.
I explained my reasoning here. Also note that most people who have demand for using the bathroom are not penniless homeless people.
Here is my reasoning. On one hand, going to the bathroom, sometimes in random circumstances, is an obvious universal necessity. It is all the more pressing for people with certain conditions that make it harder for them to control themselves for long. So it's important that bathrooms are available, quickly accessible, and distributed reasonably well everywhere. I would also argue it's important that they have no barrier to access, because time is sometimes critical. In certain train stations I've seen bathrooms that can only be used by paying a small price, which often meant you needed to have, or find, the exact amount of change to go. Absolutely impractical stuff for bathrooms.
On the other, obviously maintaining bathrooms is expensive as it requires labour. You don't want your bathrooms to be completely fouled on the regular, or worse, damaged, and if they happen to be, you need money to fix them. So bathrooms aren't literally "free".
Now, one possible solution would be to have "public bathroom" as a business. Nowadays you could allow entrance with a credit card (note that this doesn't solve the homeless thing, but it addresses most people's needs). But IMO this isn't a particularly high-value business, and on its own certainly not a good use of valuable city-centre land, which clashes directly with the fact that you need bathrooms most where the most people are. So this never really happens.
Another solution is to have bathrooms as part of private businesses doing other stuff (serving food/drinks) and have them charge for their use. Which is how it works now. The inadequacy lies in how, for some reason, these businesses charge you indirectly by asking you to buy something. This is inefficient in a number of ways: it forces you to buy something you don't really want, paying more than you would otherwise, and the provider probably still doesn't get as much as they could if they just asked for a bathroom fee, since they also need the labour and ingredients to make the coffee or whatever.

So why are things like this? I'm not sure - I think part of it may be that they don't just want money, they want a filter that will discourage people from using the bathroom too much, to avoid having too many bathroom-goers. If that's the case, that's bad, because it means some needs will remain unfulfilled (and some people might forgo going out for too long entirely rather than risk being left without options). Part of it may be that they just identify their business as cafes and would find it deleterious to their image to explicitly provide a bathroom service. But that's a silly hangup, and one we should overcome if it causes this much trouble. Consider also that the way things are now, it's pretty hard for the cafes to enforce their rules anyway, and lots of people will just use the bathroom without asking or buying anything. Everyone loses.
Or you could simply build and maintain public bathrooms with tax money. There are solutions to the land-value problem (e.g. build them as provisional structures on the sidewalk), and this removes all the issues and quite a lot of unpleasantness. You could probably fund it with even just some of the sales tax and property tax income from the neighbourhood, and the payers would in practice see returns on it. Alternatively, you could publicly subsidize private businesses offering their bathrooms for free. Though I reckon that real public bathrooms would be better for the homeless issue, since businesses probably don't want the homeless in their august establishments.
I suspect the argument that it is ridiculous comes from an intuition that the need to go to the bathroom is such a human universal that we are all accustomed to, and the knowledge that having to hold in your urine is seriously unpleasant is so universal, that it becomes a matter of basic consideration for your fellow human beings to provide them with the ability to access the bathroom in an establishment when they clearly need to.
This, and also how completely unrelated the "buy a coffee" thing specifically is. It makes no sense that to satisfy need A I have to do unrelated thing B. The private version of the solution would be bathrooms I can pay to use, and those do exist sometimes, but they're not a particularly common business model, so I guess the economics don't work out to it being a good use of capital or land.
Technical AI safety and/or alignment advances are intrinsically safe and helpful to humanity, irrespective of the state of humanity.
I think this statement is weakly true, insofar as almost no misuse by humans could possibly be worse than what a completely out-of-control ASI would do. Technical safety is a necessary but not sufficient condition for beneficial AI. That said, it's also absolutely true that on its own it's not nearly enough. Most scenarios with controllable AI still end with humanity nearly extinct IMO, with only a few people lording their AI over everyone else. Preventing that is not a merely technical challenge.
The impossibility of traveling faster than the speed of light was a lot less obvious in 1961.
I would argue that's questionable - relativity was well understood by 1961, and any physicist would have been able to roll out the obvious theoretical objections. But obviously the practical difficulties of approaching the speed of light (via e.g. ramscoop engines, solar sails, nuclear propulsion, etc.) are another story.
Was Concorde “inherently a bad idea”? No, but “inherently” is doing the work here. It lost money and didn’t lead anywhere, which is the criteria on which such an engineering project must be judged. It didn’t matter how glorious, beautiful or innovative it was. It’s a pyramid that was built even though it wasn’t efficient.
I guess my point is that there are objective limits and then there are cultural ones. We do most things only for the sake of making money, but as far as human cultures go we are perhaps more the exception than the rule. And in the end individuals often do the opposite - they make money to do things, things they like that play to their personal values but don't necessarily turn a profit. A different culture could have concluded that the Concorde was a success because it was awesome, and we should do more of that. In such a culture, in fact, the Concorde might even have been a financial success, because people would have been more willing to pay more money to witness it first hand. Since the argument here involves the inherent limits of technology and/or science, I'd say we should be careful to separate out the cultural effects. Self-sustaining Mars colonies, for example, are probably a pipe dream with current technology. But the only reason why we don't have a Moon base yet is that we don't give enough of a shit. If we cared to build one, we probably could have by now.
I'm honestly always amazed by just how much money some people in these parts seem to have. That's a huge sum to spend on an LLM experiment. It would be a lot even for a research group to burn in just 6 days!
TBF, was Concorde inherently "a bad idea"? Technologies have a theoretical limit and a practical one. There are deep reasons why we simply couldn't get anywhere near the speed of light by 1982 no matter how much money we poured into it, but Concorde seems more a case of "it can be done, but it's too expensive to keep safe enough, and most people won't pay such exorbitant ticket prices just to shave a few hours off their transatlantic trip". I don't think we can imagine such things happening with AGI, partly because its economic returns are obvious and far greater, partly because many who are racing towards it have more than just economic incentives to do so - some have an almost religious fervour. Pyramids can be built even if they're not efficient.
I think in practice we don't know for sure - that's part of the problem - but there are various reasons to think this might be possible with vastly less complexity than the human brain.

First, the task is vastly less complex than what the human brain does. The human brain does not handle only conscious rational thought; it does a bunch of other things, which is why it still fires on all cylinders even when you're unconscious.

Second, lots of artificial versions of natural organs are vastly less complex than their inspiration. Cameras are vastly less complex than eyes. Plane wings are vastly less complex than bird wings. And yet these things outperform their natural counterparts. To me the essence of the reason for this is that evolution deals in compromises. It can never design just a camera. The camera must be made of organic materials, it must be self-organising and self-repairing, it must be compatible with everything else, and it must be achievable via a set of small mutations that are each as viable as, or more viable than, the previous one. It's all stumbling around in the dark until you hit something that works under the many, many constraints of the problem.

Meanwhile, artificial intelligent design on our part is a lot more deliberate and a lot less constrained. The AI itself doesn't need to do anything more than be an AI - we'll provide the infrastructure, and we'll throw money at it to keep it viable until it doesn't need it any more, because we can foresee the future and invest in it. That's more than evolution can do, and it's a significant advantage that can compensate for a lot of complexity.
How much of that is API costs? Seems like it would be most of it, unless you're assuming a truly exorbitant salary.
The bathroom thing sucks in general. We honestly just need more public bathrooms, or subsidies paid to venues to keep their bathrooms fully public. I understand most businesses won't risk having to deal with the potential mess of having anyone use their bathroom, but it's ridiculous even for those who do have the money that you're supposed to buy a coffee or something to take a leak (and then in practice you can often sneak by anyway).
Seems a restrictive definition of "utility function". It can have the weather as one of its inputs. It can have state (because really, that only means its input is not just the present but the whole past trajectory).
"Function" is an incredibly broad mathematical term.
About the post linked: mostly agree, but I don't see the need to move away from utility maximisation as a framework. We just have a piss-poor description of the utility function. "I enjoy being like that rich and successful dude" is a value.
What's a solution to this problem?
Abolish the conference talk, turn everything into a giant poster session, possibly with scheduled explanations. Or use the unconference format, and everyone only talks with a table's worth of people at a time, possibly doing multiple rounds if there's interest.
Academic conferences as they work now are baaaad. No wonder people complained about them going remote for COVID: everything of value happens chatting over coffee and/or in front of posters, and no one gives a shit about or gains anything from the average talk, given by some tired and inexperienced PhD student who doesn't know how to communicate well, thinks they have to jam their talk with overly technical language to be more impressive, and possibly has bad English to make things even harder to follow to boot. An absolute snoozefest, with almost no reach outside of the very narrow group of hyper-specialists already studying the same topic.
(I also think AIs will probably be conscious in a way that's morally important, in case that matters to you.)
I don't think that's a given, nor something we can ever know for sure. "Handing off" the world to robots and AIs that for all we know might be perfect P-zombies doesn't feel like a good idea.
I don't think the Gulf Stream can collapse as long as the Earth spins, I guess you mean the AMOC?
I think the core concerns remain, and more importantly, there are other rather doom-y scenarios that have opened up, involving AI systems more similar to the ones we have, which aren't the straight-up singleton-ASI foom. The problem here is IMO not "this specific doom scenario will become a thing" but "we don't have anything resembling a GOOD vision of the future with this tech, which we are nevertheless developing at breakneck pace". Yet the number of dystopian or apocalyptic possible scenarios is enormous. Part of this is "what if we lose control of the AIs" (singleton or multipolar), part of it is "what if we fail to structure our society around having AIs" (loss of control, mass wireheading, and a lot of other scenarios I'm not sure how to name). The only positive vision the "optimists" have to offer is "don't worry, it'll be fine, this clearly revolutionary and never-seen-before technology that puts in question our very role in the world will play out the same way every invention ever did". And that's not terribly convincing.
We think audiences are numb to politics as usual. They know when they’re being manipulated. We have opted out of the political theater, the kayfabe, with all its posing and posturing. We are direct and blunt and honest, and we come across as exactly what we are.
This is IMO a great point, and true in general. I think "the meta" is sort of shifting and it's the guys who try too hard to come off as diplomatic who are often behind the curve. This has good and bad sides (sometimes it means that political extremism wins out over common sense simply because it's screechy and transgressive), but overall I think you got the pulse right on it.
I honestly don't think shutting it down on AWS would be the hard part, if it's clearly identifiable. To sum it up:
- if it's doing anything illegal (like hacking or engaging in insider trading) for a quick buck, it can be obviously taken down;
- if it's doing anything that can be reasonably construed as a threat to US national security, then it better be taken down, or else.
That leaves us with a rogue ARA that nevertheless stays entirely on the straight and narrow, playing the good kid and acting essentially like a perfectly honest company, making money legally - which it is then entirely defensible for Amazon not to shut down despite the complaints. And even then, it's not like Amazon couldn't shut it down at its whim if they had reason to. If they thought it was bad publicity (and hosting a totally-not-suspicious autonomous AI that might or might not be scheming to take over the world seems like terrible publicity), they can shut it down. If it causes their relationships with other companies (like the social media the AI is probably flooding with ToS-violating botnets right now) to sour, they can shut it down. See for example how app stores and many websites are essentially purging everything remotely lewd because payment processors don't want to be seen supporting that stuff, and every business is downstream of payment processors. You don't have to convince Amazon that the AI is dangerous; you have to convince VISA and Mastercard, and the rest will follow suit.
If everything else fails, and if the US government doesn't yet feel threatened enough to go "screw it" and roll in the SWAT teams anyway, there's always the option of legal loopholes. For example, if the AI was trained on copyrighted material (which it almost certainly was), you can probably invoke anti-piracy laws. I would need a legal expert to pitch in, but I imagine you might not even need to win such a lawsuit - you might manage to get the servers seized just by filing it at all.
IMO dangerous ARAs would need to be some degree of sneaky, using backups in consumer hardware and/or collaborators. Completely loner agents operating off AWS or similar services would have a clear single point of failure.
The problem is that, as usual, people will worry that the NatSec guys are using the threat to slip in additional surveillance and censorship for political purposes - and they probably won't be entirely wrong. We keep undermining our civilizational toolset by using extreme measures for trivial partisan stuff, and that reduces trust.
I honestly don't think ARA immediately and necessarily leads to overall loss of control. It would in a world that also has widespread robotics. What it could be, however, is a cataclysmic event for the Internet and the digital world, possibly on par with a major solar flare, which is bad enough. Destruction of trust, broken cryptography, the banking system belly-up, IoT devices and basically all systems potentially compromised. We'd look at old computers that had been disconnected from the Internet since before the event the way we look at pre-nuclear steel. That's in itself bad and dangerous enough to worry about, and far more plausible than outright extinction scenarios, which require additional steps.
Yeah I think the idea is "I get the point you moron, now stop speaking so loud or the game's up."
It's not that people won't talk about spherical policies in a vacuum, it's that the actual next step of "how does this translate into actual politics" is forbidding. Which is kind of understandable, given that we're probably not very peopley persons, so to speak, inclined to high decoupling, and politics can objectively get very stupid.
In fact my worst worry about this idea isn't that there wouldn't be consensus, it's how it would end up polarising once it's mainstream enough. Remember how COVID started as a broad "let's keep each other safe" reaction and then immediately collapsed into idiocy as soon as worrying about pesky viruses became coded as something for liberal pansies? I expect something similar might happen with AI, and I'm not sure in which direction either (there's a certain anti-AI sentiment building up on the far left, but ironically it entirely denies the existence of X-risks, treating them as a right-wing delusion concocted to hype up AI even more). Depending on how those chips fall, actual political action might require all sorts of compromises with annoying bedfellows.
I mean, if a mere acquaintance told me something like that I don't know what I'd say, but it wouldn't be an offer to "talk about it" right away - I wouldn't feel like I'd enjoy talking about it with a near stranger, so I'd expect the same applies to them. It's one of those prefab reactions that don't really hold much water upon close scrutiny.
I find that rather adorable
In principle it is, but I think people do need some self awareness to distinguish between "I wish to help" and "I wish to feel like a person who's helping". The former requires focusing more genuinely on the other, rather than going off a standard societal script. Otherwise, if your desire to help ends up merely forcing the supposedly "helped" person to entertain you, after a while you'll effectively be perceived as a nuisance, good intentions or not.
Hard agree. People might be traumatised by many things, but you don't really want to convince them that they should be traumatised, or to define their identity around trauma (and then possibly insist that if they swear up and down they aren't, that just means they're repressing or in denial - this has happened to me). That only increases the suffering! If they're not traumatised, great - they dodged a bullet! It doesn't mean that e.g. sexual assault is any less bad - the same way shooting someone isn't any less bad just because you happened to miss their vital organs (ok, the funny thing is that attempted murder is actually punished less than murder... but morally speaking, I'd say how good a shot you are has no relevance).
The thing is, it's hard to come up with ways to package the problem. I've tried doing small data-science efforts for lesser chronic problems on myself and my wife, recording the kinds of biometric indicators that were likely to correlate with our issues (e.g. food diaries vs symptoms), and it's still almost impossible to suss out meaningful correlations unless it's something as basic as "eating food X causes you immediate excruciating pain". In a non-laboratory setting, controlling environmental conditions is impossible. Actual rigorous datasets, if they exist at all, are mostly privacy-protected. Relevant diagnostic parameters are often incredibly expensive and complex to acquire, and possibly gatekept. The knowledge aspect is almost secondary IMO (after all, in the end, lots of the recommendations your doctor will give you are still little more than empirical fixes someone came up with by analysing the data; mechanistic explanations don't go very far when dealing with biology). But even the data science, which would be doable by curious individuals, is forbidding. Entire fields of actual, legitimate academia are swamped in this sea of noisy correlations and statistical hallucinations (looking at you, nutrition science). Add to that the risk of causing harm to people even if well-meaning, and the ethical and legal implications of that, and I can see why this wouldn't take off. SMTM's citizen research on obesity seems the closest thing I can think of, and I've heard plenty of criticism of it and its actual rigour.