"But why would an AI experience entertainment?"
I think it's reasonable to assume that AI would build one logical conclusion on another with exceptional rapidity, at least relative to slower thinkers like us. Eventually - and probably soon, given that speed - I expect AI would hit a dead end where it simply has no new facts to add to its already complete analysis of the information it started with plus the information it has collected. In that situation, many people would want entertainment, so we can speculate that maybe AI would want entertainment too. Generalizing from one example is nowhere near conclusive, but it provides a plausible scenario.
In the classic movie "The Bridge on the River Kwai", a new senior Allied officer (Colonel Nicholson) arrives in a Japanese POW camp and tries to restore morale among his dispirited men. This includes ordering them to stop malingering and sabotage and to build a proper bridge for the Japanese, one that will stand as a tribute to the British Army's capabilities for centuries - despite the fact that this bridge will harm the British Army and its allies.
While I always saw the benefit to prisoner morale of working toward a clear goal, it took me a long time to see the Colonel Nicholson character as realistic. Yann LeCun reads like a man who would immediately identify with Nicholson and view the colonel as a role model. LeCun has chosen to build a proper bridge to the AI future.
And now I have that theme song stuck in my head.
I consider "6. Aims to kill us all" to be an unnecessary assumption, one that overlooks many existential risks.
People were only somewhat smarter than North America's indigenous horses and other large animals, most of which were wiped out (with our help) long ago. However, eliminating those horses and megafauna probably wasn't a conscious aim. They were most likely inadvertent oopsies, like wiping out the passenger pigeon (complete with shooting and eating the last survivor found in the wild) and our other missteps. I can only barely imagine ASI objectives that require all the atoms in the universe, where human extinction is thus central to the goal. The more plausible worry, to me, is ASI's indifference, where eventually ours is the anthill that gets stepped on simply because the shortest path includes that step. Same outcome, but a different mechanism.
It's probably important to consider all ASI objectives that may lead to our obliteration, not just malice. Limiting ASI designs to those that are not malignant is insufficient for human survival, and focusing only on malice may lead people to underestimate the magnitude of the overall ASI risk. In addition to the risk you are talking about more generally - that our future with AI will eventually be outside our control - a second factor is that Ernst Stavro Blofeld exists and would certainly use any early AGI to, as he would put it to ChatGPT-11.2, help him write the villain's plan for his next 007 novel/movie: "I'm Ian Fleming's literary heir, his great-grandniece - and I'm writing my next ..."
On the positive side, kids have been known to take care of an ant farm without applying a magnifying glass to heat it in the sun. Perhaps our trivial but to us purposeful motions will be fun for ASI to watch and will continue to entertain.
Early in the Ivermectin/COVID discussion, I posted on Twitter the best peer-reviewed study I could find supporting Ivermectin for COVID and the best study (a peer-reviewed meta-analysis) I could find opposing it. My comment was that it was important to read reputable sources on both sides and reach informed conclusions. That tweet linking to peer-reviewed research was labeled "misinformation", and my account got its first suspension.
A second tweet (yes, I'm a slow learner - I thought adding good data to that discussion was essential) contained a link to a CDC study, at a CDC.gov web address, investigating whether Ivermectin worked for COVID. That tweet was also taken down as misinformation, and my account was again suspended, even though the only thing I added to the link was a brief but accurate summary of the study. Again, a reputable link was labeled "misinformation".
I appealed both suspensions and lost both times. Back when I assumed these decisions would be thoughtful and fact-based, I would have put money down that my appeals would win. And, yes, I would have been willing to take Twitter to court over censoring peer-reviewed research and links to CDC studies, if I could have found a lawyer and a legal basis. Such lawsuits would hurt Twitter: they would add financial harm to the offense people already take at the censoring of fact-based posts. I don't think their content moderation team is competent enough that Twitter can afford to raise the stakes.
Responding to your points on "Here are some of the things I agree that you do not need to do":
- Have everything be educational.
All of my kids are grown and out of college, all successful. I was careful to make sure they had a lot of unstructured play time, such as following a neighborhood creek for miles in different seasons, year after year, just to see what we saw together or what they saw on their own. I didn't build academic lessons around that, or around most of what we did, so they still mention the creek as a childhood highlight.
- Constantly be teaching, including basic stuff they learn anyway.
I made sure my kids were exposed to a lot of things, including seeing me do many things well and many more poorly. I gave them a lot of experiences and then more time doing the ones they asked to do again.
Kids are constantly learning. Why interrupt that by teaching?
- Prevent social conflict.
Children need to learn how to handle conflict, disappointment, and failure while they are kids, so they know how to respond and then rebound when it happens as adults.
- Buy infinite toys.
We didn't quite ban toys that use electricity, but close. Duplos/Legos/blocks were big. They had few other toys. Imagination filled in the gaps.
- Keep them away from the real work.
Excellent advice. My kids had chores, even if they had to stand on a stool to reach the sink or interrupt studying for AP exams because it was their night to help with the dishes.
My kids grew up seeing themselves as part of making the home work, which is a whole lot more useful than most homework.
[Note: I was reluctant to post this because if AGI is a real threat, then downplaying that threat could destroy all value in the universe. However, I chose to support discussion of all sides on this issue.]
[WARNING - my point may be both inaccurate and harmful - WARNING!]
It's not obvious that AGI will be dangerous on its own, especially if it becomes all-powerful in comparison to us (also not obvious that it will remain safe). I do not have high confidence in either direction, but it seems to me that those designing the AGI can and will shape that outcome if they are thoughtful.
A common view of AGI is as a potential Thanos dialed to 100% rather than 50%, Cthulhu with less charm. However, I see no reason why a fully-developed AGI would consider people - or all planetary life - to be a threat that merits a response, even less so than we worry about being overrun by butterflies. Cautious people view AGI as stuck at Maslow's first two levels of need (physiological and safety). If AGI is so quick and powerful that we have zero chance of shutting it down or resisting, shouldn't it know that before we even figure out it has reached a level where it could choose to be a threat? If AGI is so risk-averse that it would wipe out all life on the planet (and beyond?), isn't it also so risk-averse that it would leave the planet to avoid geological and solar threats too? And if it is leaving anyway (presumably much more quickly and efficiently than we could manage), why clean up before departing?
Or would AGI more likely stay on earth for self-actualization:
And AGI blessed every living thing on the planet, saying, "be fruitful, and multiply, and fill the waters in the seas, and let fowl, the beasts, and man multiply in the earth. And AGI saw that it was good."
Downside? A far less powerful ChatGPT tells me about things that Gaius Julius Caesar built in the third century AD, 300 years after it also says Caesar died, and about an emperor of the Holy Roman Empire looting a monastery that was actually sacked by Saracens in the very year it claims Henry IV attacked it. If an early AGI has occasional hallucinations and only slightly superior intelligence, so that it knows we could overwhelm it, that could turn out badly. Another downside? Whether hooked to the outside world or used by any of a thousand movie-inspired supervillains, AGI's power can be used as an irresistible weapon. Given human nature, we can be certain that someone wants that to happen.
I agree that it's not terribly detailed. It's more of an "I checked, and Climate Change is correct" than a critical analysis. [I'll reread it more carefully in a few weeks, but that was my impression on a first reading, admittedly while drugged up after surgery.]
Perhaps I'm looking for the impossible, but I'm not comfortable with the idea that climate is so esoteric that no one outside the field can understand anything between the two extremes: "CO2 traps infrared" at one end, and at the other the entire model, with its conclusion that the planet will therefore warm by x degrees this century unless we eliminate fossil fuels. That pairing alone has not satisfied many who ask - and it shouldn't. I have more respect for my students (math-based, but a different field) who search for more detail than for those who accept doctrine.
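To illustrate the kind of intermediate rung I'd want someone to walk a student through - a sketch only, using the standard simplified approximation for CO2 radiative forcing rather than anything from the full models:

```latex
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W/m^2},
\qquad
\Delta T \approx \lambda \,\Delta F
```

Doubling CO2 gives a forcing of roughly 3.7 W/m^2, and with an equilibrium sensitivity parameter lambda of very roughly 0.8 K per W/m^2 (feedbacks included - and itself the genuinely contested number), that is about 3 degrees per doubling. A student can question and check each step of that, which is exactly what the two extremes above don't allow.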
I can explain fusion on many levels: from "hydrogen becomes helium", to "deuterium and tritium become helium", to "this is the reaction cross section for D-D or D-T or D-He3, and ____ MeV are released in the form of ____" .... Similarly for the spectrum from lift/drag to the Navier–Stokes equations ..., and similarly for the dynamic stability of structures. I am disappointed that climate scientists cannot communicate their conclusions at any intermediate level. Where is their Richard Feynman or (preferably) Carl Sagan?
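For the D-T case, for example, that last level reads (a standard textbook fact, filled in here purely as illustration):

```latex
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV}),
\qquad Q \approx 17.6\ \mathrm{MeV}
```

with about 80% of the energy carried off by the neutron.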
This is an important point that is often ignored.
"Does blinding as it's commonly done mean that the patients don't know whether they are taking the placebo or not?" You likely get a lot of them falsely answering that it means that because they are ignorant of the literature that found that if you ask patients they frequently have some knowledge."
Accurate - and obvious on reflection, particularly with the COVID vaccines. I knew multiple people in the COVID vaccine trials. Just over half confidently said they got the real vaccine, and they knew it because of side effects. The rest were mostly less certain but suspected they got the placebo, because so many participants had the side effects and they didn't.
Example: Moderna And Pfizer Vaccine Studies Hampered As Placebo Recipients Get Real Shot : Shots - Health News : NPR "Mott, who lives in Overland Park, Kan., got a strong reaction to the second shot, so she correctly surmised she had received the Moderna vaccine, not the placebo."
Our blind and double-blind methodology is nowhere near the perfect black box we pretend it to be.
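If anyone wanted to measure this instead of pretending, standard tools exist. A minimal sketch using Bang's blinding index, with guess counts that are entirely hypothetical (loosely echoing the anecdote above):

```python
# Minimal sketch of Bang's blinding index, computed per trial arm.
# BI = (correct - incorrect) / n, where n includes "don't know" answers.
#  0 -> guesses look random (consistent with intact blinding)
# +1 -> everyone correctly identified their assignment (fully unblinded)
# -1 -> everyone guessed wrong

def bang_blinding_index(correct: int, incorrect: int, dont_know: int) -> float:
    n = correct + incorrect + dont_know
    return (correct - incorrect) / n

# Hypothetical counts: most vaccine recipients inferred their assignment
# from side effects; placebo recipients were less sure.
vaccine_arm = bang_blinding_index(correct=160, incorrect=15, dont_know=25)
placebo_arm = bang_blinding_index(correct=110, incorrect=30, dont_know=60)

print(f"vaccine arm BI: {vaccine_arm:.2f}")  # 0.72
print(f"placebo arm BI: {placebo_arm:.2f}")  # 0.40
```

An index near 0 is what successful blinding should produce. Trials can report this number, and when side effects are as distinctive as they were here, they arguably should.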
Very much not asking that anyone write a new post on Climate Change, since I assume a good discussion on that topic exists, but ... does anyone have a recommended link to a critical analysis of those questions comparable to Scott Alexander's discussion of Ivermectin - one that assumes neither that the environmental science community will always get such questions right nor that those who question the approved science are idiots?
And, yes, I have read Climate Change 2022: Mitigation of Climate Change (ipcc.ch), and several previous editions, but that is not at all what I am looking for.
Note: My intended audience is the occasional college student who asks in good faith, generally almost in a whisper because of the (inappropriate) stigma of even asking.
Excellent point on the selective subject matter placement of articles with misleading implications. Thank you. I should have thought that through.
"How the media makes errors"?
I think Zvi's point is that errors do not dominate media deceptions. Their writing is made up of conscious choices that mostly follow clear rules.
For those who don't know and didn't look these up:
NPI = nonpharmaceutical intervention: Non-Pharmaceutical Interventions and COVID-19 Burden in the United States - PMC (nih.gov)
GDPR = General Data Protection Regulation: What is GDPR, the EU’s new data protection law? - GDPR.eu
MAID = medical assistance in dying: Medical assistance in dying - Canada.ca
"If you model this as telling you that people who previously would have had no health insurance now have Medicaid, while telling you nothing about those people otherwise, this seems like good news."
"Not conditioning on this tells you that an awful lot of Americans need Medicaid and cannot do better. Which seems, if new information, like very bad news."
There is a third option, one that fits in nicely with your view of bureaucracy.
The rules for Medicaid enrollment changed: the Families First Coronavirus Response Act requires that Medicaid programs keep people continuously enrolled through the end of the month in which the COVID-19 public health emergency (PHE) ends (see the COVID-19 Public Health Emergency Unwinding FAQs at medicaid.gov). The numbers are at a record high because new people are added but the old ones don't leave. And there is no plan to end a public health emergency that justifies expanded government.
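A toy stock-flow sketch of the mechanism - every number below is invented purely for illustration:

```python
# Toy model: enrollment is a stock; people join (inflow) and leave (outflow).
# Continuous-enrollment rules set the outflow to zero, so the stock can
# only ratchet upward, producing "record high" numbers mechanically.

def project_enrollment(start: float, inflow: float, outflow: float, months: int) -> float:
    level = start
    for _ in range(months):
        level += inflow - outflow
    return level

# Hypothetical figures: 70M enrolled, 1M joining per month, over 3 years.
steady = project_enrollment(start=70e6, inflow=1e6, outflow=1e6, months=36)
frozen = project_enrollment(start=70e6, inflow=1e6, outflow=0.0, months=36)

print(f"inflow matches outflow: {steady / 1e6:.0f}M")  # 70M - flat
print(f"disenrollment frozen:   {frozen / 1e6:.0f}M")  # 106M - a "record high"
```

No surge in need is required to explain the record; freezing the exit alone produces it.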