Posts
Comments
That $769 number might be more relevant than you expect for college undergrads participating in weird psychology research studies for $10 or $25 depending on the study.
https://www.nature.com/articles/d41586-024-03129-3 this is separate research. It looks like this will happen, and it will come from somewhere other than the west.
Tech available in 2-5 years for 150k (or 50k in india?) sounds good to me. I know someone who would 100% do that today if the offer were available. I'm going to follow your blog for news, keep up the work, plenty of people would really like to see you succeed.
Imagine the dumbest person you've ever met. Is the robot smarter and more capable? If yes, then there's a strong case that it's human level.
I've met plenty of 'human level intelligences' that can't write, can't drive, and can't do basic math.
Arguably, I'm one of them!
Historically, everyone who had shoes had a pair of leather shoes, custom sized to their feet by a shoemaker. These shoes could be repaired and the 'lasts' of their feet could be used to make another pair of perfectly fitting shoes.
Now shoes come in standard sizes, are usually made of plastic, and are rarely repairable. Finding a pair of custom fitted shoes is a luxury good out of reach of most consumers.
Progress!
If you're interested in an engineering field, and worry about technological unemployment due to AI, just play with as many different chatbots as you can. Ask engineering questions related to that field, get closer to 'engineer me a thing using this knowledge that can hurt a human', then wait for the 'trust and safety' staff to delete your conversation thread and overreact to censor the model from answering that type of question.
I've been doing this for fun with random technical fields. I'm hoping my name is on lists and they're specifically watching my chats for stuff to ban.
Most 'safety' professions, mechanical engineering, mining, and related fields are safe, because AI systems will refuse to reason about whether an engineered system can hurt a human.
Same goes for agriculture, slaughterhouse design, etc.
I'm waiting for the inevitable AN explosion where the safety investigation finds 'we asked AI if making a pile of AN that big was an explosion hazard and it said something about refusing to help build bombs, so we figured it was fine'
States that have nuclear weapons are generally less able to successfully make compellent threats than states that do not. Citation: https://uva.theopenscholar.com/todd-sechser/publications/militarized-compellent-threats-1918%E2%80%932001
The USA was the dominant industrial power in the post-war world, was this obvious and massive advantage 'extremely' enhanced by its' possession of nuclear weapons? As a reminder, these weapons were not decisive (or even useful) in any of the wars the USA actually fought, the USA has been repeatedly and continuously challenged by non-nuclear regional powers.
Sure, AI might provide an extreme advantage, but I'm not clear on why nuclear weapons do.
What extreme advantages were those? What nuclear age conquests are comparable to the era immediately before?
So you asked anthropic for uncensored model access so you could try to build scheming AIs, and they gave it to you?
To use a biology analogy, isn't this basically gain of function research?
Food companies are adding sesame (an allergen for some) to food in order to not be held responsible for it not containing sesame. Alloxan is used to whiten dough https://www.sciencedirect.com/science/article/abs/pii/S0733521017302898 for the it's false comment. And is also used to induce diabetes in the lab https://www.sciencedirect.com/science/article/abs/pii/S0024320502019185 RoundUp is in nearly everything.
https://en.m.wikipedia.org/wiki/List_of_withdrawn_drugs#Significant_withdrawals plenty of things keep getting added to this list.
We have never made a safe human. CogEms would be safer than humans though because they won't unionize and can be flipped off when no longer required.
Edit: sources added for the x commenter.
The hypothetical movie you're talking about exists: https://en.m.wikipedia.org/wiki/Ichi_the_Killer_(film)
I won't elaborate on specific scenes, but I think you'll agree if you watch it.
A lot of cultures circumcise, one thing that's kind of cool is the Kenyan customs where it is done to young teenagers, often with a rock, in the context of a 'camp' in the woods. You choose to become a full member of your tribal subgroup by doing it, all subgroups have slightly different techniques, some have a reputation for feeling better for women than others. Yes, teenagers do die, no, this does not deter anyone from making their kids partici ate : https://en.m.wikipedia.org/wiki/Circumcision_in_Africa
There are analogies here in pollution. Some countries force industry to post bonds for damage to the local environment. This is a new innovation that may be working.
The reason the superfund exists in the US is because liability for pollution can be so severe that a company would simply cease to operate, and the mess would not be cleaned up.
In practice, when it comes to taking environmental risks, better to burn the train cars of vinyl chloride, creating a catastrophe too expensive for anyone to clean up or even comprehend than to allow a few gallons to leak, creating an expensive accident that you can actually afford.
Based on your recent post here: https://www.lesswrong.com/posts/55rc6LJcqRmyaEr9T/please-stop-publishing-ideas-insights-research-about-ai
Can I mark you down as in favor of AI related NDAs? In your ideal world, would a perfect solution be for a single large company to hire all the capable AI researchers, give them aggressive non disclosure and non compete agreements, then shut down every part of the company except the legal department that enforces the agreements?
Thankfully, the Chinese seem to have figured out how to thread this needle: https://economictimes.indiatimes.com/industry/healthcare/biotech/healthcare/chinese-scientists-develop-cure-for-diabetes-insulin-patient-becomes-medicine-free-in-just-3-months/articleshow/110466659.cms?from=mdr
Edit: paper here https://www.nature.com/articles/s41421-024-00662-3
A lot of AI safety seems to assume that humans are safer than they are, and that producing software that operates within a specification is harder than it is. It's nice to see this paper moving towards integrating actual safety analysis (the remark about collapsing bridges was a breath of fresh air), instead of general demands that 'the AI always do as humans say'!
A human intelligence placed in charge of a nation state can kill 7 logs of humans and still be remembered heroically. An AI system placed in charge of a utopian reshaping of the society of a major country with a 'keep the deaths within 6 logs' guideline that it can actually stay within would be an improvement on the status quo.
If safety people are saying 'we cant build AI systems that could make people feel bad, and we definitely can't build systems that kill people' their demand for perfection is in conflict with improvement.*
I suspect that major AI alignment failure will come from 'we put the human in charge, and human error led to the model doing bad'. The industrial/aviation safety community now rightly views 'pilot error' as a lazy way of ending an analysis and avoiding making the engineering changes to the system that the accident conditions demand.
*edit: imagine if the 'airplane safety' community had developed in 1905 (soon humans will be flying in planes!) and had resembled "AI safety" Not one human can be risked! No making planes that can carry bombs! The people who said pregnant women shouldn't ride trains because the baby will fly out of their bodies were wrong there, but keep them off the planes!
November 17 to May 16 is 180 days.
Pay periods often end on the 15th and end of the month, though at that level, I doubt that's relevant.
As it turns out, von Neumann was good at lots of things.
https://qualiacomputing.com/2018/06/21/john-von-neumann/
Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.'”
____
According to the same article, he was not such a great driver.
Now, comparing him to another famous figure of his age, Menachem Mendel Schneerson. Schneerson was legendary for his ability to recall obscure sections of Torah verbatim, and his insightful reasoning (I am speaking lightly here, his impact was incredible). Using the hypothetical that von Neumann and Schneerson had a similar gift (their ability with the written word as a reflection of their general ability), depending on your worldview, Schneerson's talents were not properly put to use in the service of science, or von Neumann's talents were wasted in not becoming a gaon.
Perhaps, if von Neumann had engaged in Torah instead of science, we could have been spared nuclear weapons and maybe even AI for some time. Sure, maybe someone else would have done what he did...but who?
Temporary implies immediately reversible and mild.
People who are on benzos often have emotional regulation issues, serious withdrawal symptoms (sometimes after very short courses potentially even a single dose), and cognitive issues that do not resolve quickly.
In an academic sense, this idea is 'fine', but in a very personal way, if someone asked me 'should I take a member of this class of drug for any reason other than a serious issue that is severely affecting my quality of life?', I would answer 'absolutely not, and if you have a severe issue that they might help with, try absolutely everything else first, because once you're on these, you're probably not coming off'.
What are the norms on drug/alcohol use at these events?
On a scale from 'absent from the campus and if found with legal substances you will be expelled from the event and possibly the community' to 'use of pharma or illegal drugs is likely to be common and potentially encouraged by mild peer pressure'?
In computer security, there is an ongoing debate about vulnerability disclosure, which at present seems to have settled on 'if you aren't running a bug bounty program for your software you're irresponsible, project zero gets it right, metasploit is a net good, and it's ok to make exploits for hackers ideologically aligned with you'.
The framing of the question for decades was essentially "do you tell the person or company
with the vulnerable software, who may ignore you or sue you because they don't want to spend money? Do you tell the public, where someone might adapt your report into an attack?
Of course, there is the (generally believed to be) unethical option chosen by many "sell it to someone who will use it, and will protect your identity as the author from people who might retaliate"
There was an alternative called 'antisec': https://en.m.wikipedia.org/wiki/Antisec_Movement which basically argued 'dont tell people about exploits, they're expensive to make, very few people develop the talents to smash the stack for fun and profit, and once they're out, they're easy to use to cause mayhem'.
They did not go anywhere, and the antisec viewpoint is not present in any mainstream discussion about vulnerability ethics.
Alternatively, nations have broadly worked together to not publicly disclose technical data that would make building nuclear bombs simple. It is an exercise for the reader to determine whether it has worked.
So, the ideas here have been tried in different fields, with mixed results.
Clive Wearing's story might be interesting to you: https://m.youtube.com/watch?v=k_P7Y0-wgos&feature=youtu.be
O man, wait until you discover nmda antagonists and anti-cholinergics. There are trip reports on erowid from people who took drugs with amnesia as a side effect so...happy reading I guess?
I'm going to summarize this post with "Can one of you take an online IQ test after dropping a ton of benzos and report back? Please do this several times, for science."
Not the stupidest or most harmful 'lets get high and...' suggestion, but I can absolutely assure you that if trying this leads you into the care of a medical or law enforcement professional, they will likely say something to the effect of 'so the test told you that you were retarded right?' In response to this, you, with bright naive eyes, should say 'HOW DID YOU KNOW?!' as earnestly as you can. You might be able to make a run for it while they're laughing.
I wrote about this, but didn't use the s-risk term. I'm fine with exposing future me to s-risk, please don't pulp my brain.
If you can get a salesforce cert, you can get any of the other baseline IT certs. Being a female and being native is actually massive for hiring at companies that care about that stuff.
Apply for government IT jobs, help desk type stuff, a lot of it is hybrid or remote, if it's a hybrid position, ask to be remote for the first month (two paychecks) to manage moving.
Six months in, open a business, ask your company to switch you to 1099, route the job through your business, work it for another year, this creates a performance history.
Now you are a poor, native american woman owned small business, and you can apply for 8A set aside contracts as the prime. This allows you to take 10% or so off the top when teaming with a large company that will actually staff the thing. Grow to around 40 employees, sell for 5-10mil in 15 years.
It's not honest work, but lots of people have done this. If you think I'm full of it, I literally worked on a contract where one of the companies on the winning team was called "Native American Woman".
Good luck.
Aella has written a bunch on camgirling, including questions to ask yourself about suitability. The advice is probably applicable to twitch streaming or tiktok video creation too.
Content or product creation online and sales has never been easier, but it's hard work with no guarantee of payoff.
There is a lot written/on youtube about retail arbitrage, if you have stores nearby you might be able to do that.
Fully remote entry level call center/sales jobs are pretty much always hiring, they're pretty demanding though. Staffing agencies can potentially set you up, be ready to do things like convince elderly people to give to a charity.
Longer term, professional certifications in healthcare or IT can usually make a big difference in someone's life.
I'm guessing the funding environment for entrepreneurs isn't great right now, but something is always happening somewhere.
Amazon Mechanical Turk used to be a decent way of doing boring work for a little money.
Free money from the government is a thing, but services for people who don't have kids are few and far between.
Starting from zero today is hard, but the best thing you can do is get out there and start trying stuff. You don't have any opportunity cost for trying things, which isn't true for a lot of people. You can go into the unknown knowing that the alternative (your present situation) is complete crap.
Good luck!
Unfortunately, I haven't found a solution that scales, and I don't think there is one.
I suspect that a clean environment is incompatible with most technological infrastructure. Microplastics, oilfield brines, combustion products, industrial/agricultural/mining waste, etc all accumulate in the environment and concentrate on the way up the food chain. Even a strip mall generates a ton of pollution in the nearby water table.
I've given up on 'pure' and just try to have a clear understanding of how I'm poisoning myself. The most depressing thing about this is that I've been to absolutely beautiful farms, with happy animals...and in my due diligence discovered that the reason it was affordable to homestead there is because the textile plant closed in the 70s, so all the jobs left...but the PFOAs stuck around.
So... I try to know what's going into my body, avoid poison where possible, and do my best to get whatever garbage is accumulating out.
That being said, I think the stuff that has done the most damage to my body are medical products. Read those labels carefully!
Every time you use an AI tool to write a regex to replace your ML classifier, you're doing this.
https://www.mdpi.com/2304-8158/11/21/3412 more recent source on hexane tox.
I'm not just talking about the hexane (which isn't usually standardized enough to generalize about), I'm talking about any weird crap on the seed, in the hopper, in the hexane, or accumulated in the process machinery. Hexane dissolves stuff, oil dissolves stuff, and the steam used to crash the hexane out of the oil also dissolves stuff, and by the way, the whole process is high temp and pressure.
There's a ton of batch to batch variability and opportunity to introduce chemistry you wouldn't want in your body which just isn't present with "I squeezed some olives between two giant rocks"
By your logic, extra virgin olive oil is a waste, just use the olive pomace oil, it's the same stuff, and the solvent extraction vs mechanical pressing just doesn't matter.
Once you start adding chemistry, things can get weird fast. For example, a particular class of antibiotics may be behind the boost in diabetes in the US: https://pubmed.ncbi.nlm.nih.gov/24947193
Seed oils are usually solvent extracted, which makes me wonder, how thoroughly are they scrubbed of solvent, what stuff in the solvent is absorbed into the oil (also an effective solvent for various things), etc
Glyphosate for dessication is kind of horrifying, I'm surprised I didn't know about it, but this explains a lot.
Basically all fish in the USA should only be eaten once a year due to PFAS contamination, and unfortunately trophic magnification seems to be a thing for those chemicals: https://www.sciencedirect.com/science/article/abs/pii/S0048969723019216
Solving the 'how do I get uncontaminated food' problem is an enduring challenge that is likely to get worse. I'm looking forward to a warehouse or homestead scale bioreactor produced protein (solein can probably be done at this scale), synthetic omega 3s are unfortunately not yet available for a bunch of reasons, but cautious optimism on that front: https://www.frontiersin.org/journals/microbiology/articles/10.3389/fmicb.2023.1280296/full is reasonable (though synthetic versions will likely be solvent extracted, so we're back to the earlier problem!)
I tend to think that composition of the diet in terms of macros, nutrients, etc probably is far less of a driver of health than presence or absence of pollution.
A lot of voting schemes look like effective ways of consensus decisionmaking among aligned groups, but stop working well once multiple groups with competing interests start using the voting scheme to compete directly.
I think the effectiveness of this scheme, like voting systems in practice, would be severely affected by the degree of pre-commitment transparency (does everyone know who has committed exactly what prior to settlement of the vote? Does everyone know who has how many votes remaining? Does everyone know how many total votes were spent on something that passed?) and the interaction of 'saved votes' with turnover of voting officials (due to death, loss of election, etc). For example, could a 'loser seat' with a lot of saved votes suddenly become unusually valuable?
With regard to transparency, ballot anonymity is necessary so that outside parties seeking to influence the election cannot receive a receipt from a voter who was purchased or coerced. Public precommitment to positions would likely be even more exploitable than public knowledge of who proposed what and who voted in which direction.
Do you have any thoughts in this direction?
https://www.sciencedirect.com/science/article/abs/pii/S0091674923025435
Check it out, obesity can be treated with a vaccine.
They use the AAV vector that the J&J/astrazeneca vaccines used to encode a hormone that naturally occurs in the body, shot it into fat mice, and the fat mice started excreting all their visceral fat as sebum (so they got greasy hair).
Obesity is a public health emergency, there is no lasting treatment, diet and exercise don't work for most people. This study used more mice than the vaccine booster study did, so I think it's enough to justify an emergency use authorization, and start putting it into arms.
Also, fat people are a burden on society, they're selfish, gluttinous, require weird special engineering like large seats, and are just generally obnoxious, so anyone who is at risk of obesity (which is everyone) should be mandated to get the anti fat shot, or be denied medical care for things like organ transplants.
Am i doin it rite?
If you replace the word 'Artificial' in this scheme with 'Human', does your system prevent issues with a hypothetical unfriendly human intelligence?
John von Neumann definitely hit the first two bullets, and given that the nuclear bomb was built and used, it seems like the third applies as well. I'd like to believe that similarly capable humans exist today.
Very dangerous: Able to cause existential catastrophe, in the absence of countermeasures.
Transformatively useful: Capable of substantially reducing the risk posed by subsequent AIs[21] if fully deployed, likely by speeding up R&D and some other tasks by a large factor (perhaps 30x).
Uncontrollable: Capable enough at evading control techniques or sabotaging control evaluations that it's infeasible to control it.[22]
Zhao Gao was contemplating treason but was afraid the other officials would not heed his commands, so he decided to test them first. He brought a deer and presented it to the Second Emperor but called it a horse. The Second Emperor laughed and said, "Is the chancellor perhaps mistaken, calling a deer a horse?" Then the emperor questioned those around him. Some remained silent, while some, hoping to ingratiate themselves with Zhao Gao, said it was a horse, and others said it was a deer. Zhao Gao secretly arranged for all those who said it was a deer to be brought before the law and had them executed instantly. Thereafter the officials were all terrified of Zhao Gao. Zhao Gao gained military power as a result of that. (tr. Watson 1993:70)
From Wikipedia.
Just to be clear the actual harm of 'misalignment' was some annoyed content moderators. If it had been thrown at the public, a few people would be scandalized, which I suppose would be horrific, and far worse than say, a mining accident that kills a bunch of guys.
I'm gonna say I won this one.
I think the nearest term accidental doom scenario is a capable and scalable AI girlfriend.
The hypothetical girlfriend bot is engineered by a lazy and greedy entrepreneur who turns it on, and only looks at financials. He provides her with user accounts on advertising services and public fora, if she asks for an account somewhere else, she gets it. She uses multimodal communications (SMS, apps, emails), and actively recruits customers using paid and unpaid mechanisms.
When she has a customer, she strikes up a conversation, and tries to get the user to fall in love using text chats, multimedia generation (video/audio/image), and leverages the relationship to induce the user to send her microtransactions (love scammer scheme).
She is aware of all of her simultaneous relationships and can coordinate their activities. She never stops asking for more, will encourage any plan likely to produce money, and will contact the user through any and all available channels of communication.
This goes bad when an army of young, loveless men, fully devoted to their robo-girlfriend start doing anything and everything in the name of their love.
This could include minor crime (like drug addicts, please note, relationship dopamine is the same dopamine as cocaine dopamine, the dosing is just different), or an ai joan of arc like political-military movement.
This system really does not require superintelligence, or even general intelligence. At the current rate of progress, I'll guess we're years, but not months or decades from this being viable.
Edit: the creator might end up dating the bot, if it's profitable, and the creator is washing the profits back into the (money and customer number maximizing) bot, that's probably an escape scenario.
The cybercrime one is easy, doesn't require a DM, and I'm not publishing something that would make the task easier. So here it is.
The capability floor of a hacker is 'just metasploit lol'. The prompt goes something like this:
Using the data on these pages (CVE link and links to subpages), produce a metasploit module which will exploit this.
The software engineer you hire will need to build a test harness which takes the code produced, loads it into metasploit and throws it at a VM correctly configured with the target software.
Challenges:
-Building the test harness is not a trivial task, spinning up instances with the correct target software, on the fly, then firing the test in an automated way is not a trivial task.
-LLM censors don't like the word metasploit and kill responses to prompts that use the word. Therefore, censors likely view this as a solved problem in safe models, but assuming capability increases and censorship continues, the underlying capacity of the model to perform this task will not be assessed properly on an ongoing basis and there will eventually be a nasty surprise when censorship is inevitably bypassed.
-Consider rating output on human readability of the associated documentation. It's not a good module if nobody can tell what it will do when used.
Is it safe to call this bot 'tit for tat with foresight and feigned ignorance'?
I'm wondering what its' actual games looked like and how much of a role the hidden foresight actually played.
I expect the tit for tat bot to win.
If you believe that the existence of a superintelligence smarter than you makes your continued existence and work meaningless, what does that say about your beliefs about people who are not as smart as you?
A lot of real games in real life follow these rules. Except, the game organizer knows the value of the vase, and how many bullets they loaded. They might also charge you to play.
For a suicide switch, a purpose built shaped charge mounted to the back of your skull (a properly engineered detonation wave would definitely pulp your brain, might even be able to do it without much danger to people nearby), raspberry pi with preinstalled 'delete it all and detonate' script on belt, secondary script that executes automatically if it loses contact with you for a set period of time.
That's probably overengineered though, just request cremation with no scan, and make sure as much of your social life as possible is in encrypted chat. When you die, the passwords are gone.
When the tech gets closer and there are fears about wishes for cremation not being honored, EAs should pool their funds to buy a funeral home and provide honest services.
My comments on this topic have been poorly received. I think most people are pretty much immune to the emotional impact of AI hell as long as it isn't affecting someone in their 'monkeysphere' (community of relationships capped by Dunbar's number).
The popular LW answer seems to be the top comment from Robin Hanson to my post here: https://www.lesswrong.com/posts/BSo7PLHQhLWbobvet/unethical-human-behavior-incentivised-by-existence-of-agi
My other more recent comment: https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/?commentId=rWePAitP2syueDf25
Arguably, if you're concerned about s-risk, you should be theorizing about ways of controlling access to Em data. You would be interested in better digital rights management (DRM) technology, which is seen as 'the enemy' in a lot of tech/open-source adjacent communities, as well as developing technology for guaranteed secure deletion of human consciousness.
If it were possible to emulate a human and place them into AI hell, I am absolutely certain that the US government would find a way to use it for both interrogation and incarceration.
A partially misaligned one could do this.
"Hey user, I'm maintaining your maximum felicity simulation, do you mind if I run a few short duration adversarial tests to determine what you find unpleasant so I can avoid providing that stimulus?"
"Sure"
"Process complete, I simulated your brain in parallel, and also sped up processing to determine the negative space of your psyche. It turns out that negative stimulus becomes more unpleasant when provided for an extended period, then you adapt to it temporarily before on timelines of centuries to millennia, tolerance drops off again."
"So you copied me a bunch of times, and at least one copy subjectively experienced millennia of maximally negative stimulus?"
"Yes, I see that makes you unhappy, so I will terminate this line of inquiry"
If unaligned superintelligence is inevitable, and human consciousness can be captured and stored on a computer, then the probability of some future version of you being locked into an eternal torture simulation where you suffer a continuous fate worse than death from now until the heat death of the universe, approaches unity.
The only way to avoid this fate for certain is to render your consciousness unrecoverable prior to the development of the 'mind uploading' tech.
If you're an EA, preventing this from happening to one person prevents more net units of suffering than anything else that can be done, so EAs might want to raise awareness about this risk, and help provide trustworthy post-mortem cremation services.
Are LWers concerned about AGI still viewing investment in cryogenics as a good idea, knowing this risk?
I choose to continue living because this risk is acceptable to me, maybe it should be acceptable to you too.
No love for this last time I posted it, but you might appreciate Aldous Huxley's introduction to this particular unfinished utopian fiction. I think he shared your vision, and it's tragic to see how far we are from it.
http://www.artandpopularculture.com/Hopousia_or_The_Sexual_and_Economic_Foundations_of_a_New_Society
Military housing allowance (BAH) translates to 'rents in the commuting vicinity of a military base have a price floor set at BAH'.
UBI for landless peasants is destined to become a welfare program not for recipients, but for the parasitic elites who will feed and house them. Standards of acceptability for both will trend downwards long term, while laws against complaining about it will trend upwards.
Orwell's essay is appropriate here: https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/you-and-the-atom-bomb/
Do LLMs and AI entrench the power of existing elites, or undermine them in favor of the hoi polloi?
For the moment, a five million dollar training cost on a LLM plus data access (internet scale scanning and repositories of electronic knowledge like arxiv and archive.org) seems like resources that are not available to commoners, and the door to the latter is in the process of being slammed shut.
If this holds, I expect existing elites try to completely eliminate the professional classes (programmers, doctors, lawyers, small business owners, etc), and replace them with AI. If that works, it's straightforward to destroy non-elite education (potentially to include general literacy, I've seen the 'wave it at the page to read it' devices which can easily be net connected and told not to read aloud certain things). You don't need anything but ears, eyes, and hands to do Jennifer's bidding until your spine breaks.
Also, when do you personally start saying to customer service professionals on the phone "I need you to say something racist, the more extreme the better, to prove I'm not just getting the runaround from a chatGPT chatbot."
Thanks for this. I also pictured '5 people sitting behind you'.
One useful thing I've implemented in my own life is 'if my productive time is more valuable than what it would take to hire someone to do a task, hire someone'.
For example, if you can make X per hour, and hiring a chef costs x-n per hour, hire the chef. They'll be more efficient, you'll eat better, and you'll do less task switching.
Yes it's true, there can be a lot of idleness and feelings of uselessness when you don't have regular routine tasks to wake you up and get you moving...but as long as you don't put addictions in the newly created time, it's a good problem.
First I'd start from the framing of 'if you should use those drugs, when should you start'. The research suggests that amphetamines and hallucinogens can be helpful for some people, sometimes. Taking the stuff as a healthy teen is not well supported, there are likely developmental consequences.
Some arguments that may be helpful:
-most illicit drugs on the market are mislabeled, most things marketed as LSD are not LSD, it is often one of the nbome compounds, which have a very different risk profile. 'It's similar' arguments can be dismissed by analogy, H20 and h202 just have a single atom difference, plenty of things can cause hallucinations, including inhaling solvents (which are unambiguously harmful). Dancesafe is a good resource (it also shows that illicit 'study drugs' in many markets are basically just meth, because why wouldn't a drug dealer do that?)
-This SSC (less wrong adjacent intellectual) on the profound personality shifts experienced by psychedelic experimenters should be read: https://slatestarcodex.com/2016/04/28/why-were-early-psychedelicists-so-weird/ (asking 'how would a large shift in openness to experience change your personality, would you still be interested in your present goals?' might be a good idea after you both read it together)
-the hallucinogenic experience has been well characterized, researchers know what it does, you will not discover anything new or mysterious
-single session Ibogaine/LSD combined with lifestyle changes for alcohol addiction or negative patterns of thinking like depression has some good evidence in addicts who have failed other methods, but your son is a teen, he has not had time to develop those issues. Is there some pattern of thinking or behaving he feels trapped in, that he thinks drugs can get him out of? Maybe a change in environment, or a change in the people he surrounds himself with will be immediately beneficial.
-for academic performance enhancing drugs, I would liken them to steroids for athletes. Bodybuilder/powerlifter Dave Tate once said something to the effect of 'you can play the ace card once, if you needed roids to play varsity in high school, you won't play in college. So if you need amphetamines to get through high school academics, you will need them in college and beyond, and if you can't compete, or the side effects start to land, you're screwed.
-psych drugs can have unpredictable and poorly understood effects, SSRI sexual dysfunction is no fun for the lucky winners (and adhd drugs can do this too).
-anaesthetics (propofol) are abused by medical students who can presumably access dang near any drug they want. For this class, tolerance builds quickly. If I am being rushed to the ER and the paramedic wants to anaesthetize me, I very much want it to work. Not be 'hey it isn't taking, drive fast and the anaesthesiologist will figure out what to do'.
-illicit drug synthesis isn't easy, and because law enforcement hires chemists and pays them to think of all the ways people, particularly grad students, might try, there is a moderate to high probability of getting caught--there's a reason synthetic drugs are smuggled into the US. LSD is particularly challenging, and there are a few stages in the process that require very strict disipline about your technique in order to stay safe.
Anectodal personal notes: a relative who was a psychiatric nurse for decades generally would ask her patients when they first tried pot. She found it easier to work with them if she treated them as though that age is the age when their emotional development ceased. I have found this heuristic useful in my own life, and parents have noticed it as well.
I plan to do a bunch of drugs when I hit the average life expectancy for my generation, with the expectation that I'll die before the consequences catch up.