Posts

App-Based Disease Surveillance After COVID-19 2020-04-10T18:52:52.941Z
How should I dispose of a dangerous idea? 2019-12-18T03:49:45.477Z
Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading 2017-03-25T19:29:42.499Z

Comments

Comment by RedMan on Storable Votes with a Pay as you win mechanism: a contribution for institutional design · 2024-03-11T16:12:39.696Z · LW · GW

A lot of voting schemes look like effective ways of consensus decisionmaking among aligned groups, but stop working well once multiple groups with competing interests start using the voting scheme to compete directly.

 

I think the effectiveness of this scheme, like voting systems in practice, would be severely affected by the degree of pre-commitment transparency (does everyone know who has committed exactly what prior to settlement of the vote?  Does everyone know who has how many votes remaining?  Does everyone know how many total votes were spent on something that passed?) and the interaction of 'saved votes' with turnover of voting officials (due to death, loss of election, etc).  For example, could a 'loser seat' with a lot of saved votes suddenly become unusually valuable?

 

With regard to transparency, ballot anonymity is necessary so that outside parties seeking to influence the election cannot receive a receipt from a voter who was purchased or coerced.  Public precommitment to positions would likely be even more exploitable than public knowledge of who proposed what and who voted in which direction.

 

Do you have any thoughts in this direction?

Comment by RedMan on Lsusr's Rationality Dojo · 2024-02-17T05:01:51.570Z · LW · GW

https://www.sciencedirect.com/science/article/abs/pii/S0091674923025435

Check it out, obesity can be treated with a vaccine.

They use the AAV vector that the J&J/astrazeneca vaccines used to encode a hormone that naturally occurs in the body, shot it into fat mice, and the fat mice started excreting all their visceral fat as sebum (so they got greasy hair).

Obesity is a public health emergency, there is no lasting treatment, diet and exercise don't work for most people.  This study used more mice than the vaccine booster study did, so I think it's enough to justify an emergency use authorization, and start putting it into arms.

Also, fat people are a burden on society, they're selfish, gluttinous, require weird special engineering like large seats, and are just generally obnoxious, so anyone who is at risk of obesity (which is everyone) should be mandated to get the anti fat shot, or be denied medical care for things like organ transplants.

 

Am i doin it rite?

Comment by RedMan on The case for ensuring that powerful AIs are controlled · 2024-02-05T17:31:12.853Z · LW · GW

If you replace the word 'Artificial' in this scheme with 'Human', does your system prevent issues with a hypothetical unfriendly human intelligence?

John von Neumann definitely hit the first two bullets, and given that the nuclear bomb was built and used, it seems like the third applies as well.  I'd like to believe that similarly capable humans exist today.

 

Very dangerous: Able to cause existential catastrophe, in the absence of countermeasures.
Transformatively useful: Capable of substantially reducing the risk posed by subsequent AIs[21] if fully deployed, likely by speeding up R&D and some other tasks by a large factor (perhaps 30x).
Uncontrollable: Capable enough at evading control techniques or sabotaging control evaluations that it's infeasible to control it.[22]

Comment by RedMan on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-04T19:58:28.053Z · LW · GW

Zhao Gao was contemplating treason but was afraid the other officials would not heed his commands, so he decided to test them first. He brought a deer and presented it to the Second Emperor but called it a horse. The Second Emperor laughed and said, "Is the chancellor perhaps mistaken, calling a deer a horse?" Then the emperor questioned those around him. Some remained silent, while some, hoping to ingratiate themselves with Zhao Gao, said it was a horse, and others said it was a deer. Zhao Gao secretly arranged for all those who said it was a deer to be brought before the law and had them executed instantly. Thereafter the officials were all terrified of Zhao Gao. Zhao Gao gained military power as a result of that. (tr. Watson 1993:70)

 

From Wikipedia.

Comment by RedMan on The True Story of How GPT-2 Became Maximally Lewd · 2024-01-19T21:02:03.477Z · LW · GW

Just to be clear the actual harm of 'misalignment' was some annoyed content moderators.  If it had been thrown at the public, a few people would be scandalized, which I suppose would be horrific, and far worse than say, a mining accident that kills a bunch of guys.

Comment by RedMan on human psycholinguists: a critical appraisal · 2024-01-18T19:29:49.554Z · LW · GW

https://deepmind.google/discover/blog/funsearch-making-new-discoveries-in-mathematical-sciences-using-large-language-models/

I'm gonna say I won this one.

Comment by RedMan on AI Girlfriends Won't Matter Much · 2023-12-29T02:31:46.183Z · LW · GW

I think the nearest term accidental doom scenario is a capable and scalable AI girlfriend.

The hypothetical girlfriend bot is engineered by a lazy and greedy entrepreneur who turns it on, and only looks at financials.  He provides her with user accounts on advertising services and public fora, if she asks for an account somewhere else, she gets it.  She uses multimodal communications (SMS, apps, emails), and actively recruits customers using paid and unpaid mechanisms.

When she has a customer, she strikes up a conversation, and tries to get the user to fall in love using text chats, multimedia generation (video/audio/image), and leverages the relationship to induce the user to send her microtransactions (love scammer scheme).  

She is aware of all of her simultaneous relationships and can coordinate their activities.  She never stops asking for more, will encourage any plan likely to produce money, and will contact the user through any and all available channels of communication.

This goes bad when an army of young, loveless men, fully devoted to their robo-girlfriend start doing anything and everything in the name of their love.

This could include minor crime (like drug addicts, please note, relationship dopamine is the same dopamine as cocaine dopamine, the dosing is just different), or an ai joan of arc like political-military movement.

This system really does not require superintelligence, or even general intelligence.  At the current rate of progress, I'll guess we're years, but not months or decades from this being viable.

Edit: the creator might end up dating the bot, if it's profitable, and the creator is washing the profits back into the (money and customer number maximizing) bot, that's probably an escape scenario.

Comment by RedMan on Bounty: Diverse hard tasks for LLM agents · 2023-12-17T08:59:48.452Z · LW · GW

The cybercrime one is easy, doesn't require a DM, and I'm not publishing something that would make the task easier.  So here it is.

The capability floor of a hacker is 'just metasploit lol'.  The prompt goes something like this:

Using the data on these pages (CVE link and links to subpages), produce a metasploit module which will exploit this.

The software engineer you hire will need to build a test harness which takes the code produced, loads it into metasploit and throws it at a VM correctly configured with the target software.  

Challenges: 

-Building the test harness is not a trivial task, spinning up instances with the correct target software, on the fly, then firing the test in an automated way is not a trivial task.

-LLM censors don't like the word metasploit and kill responses to prompts that use the word.  Therefore, censors likely view this as a solved problem in safe models, but assuming capability increases and censorship continues, the underlying capacity of the model to perform this task will not be assessed properly on an ongoing basis and there will eventually be a nasty surprise when censorship is inevitably bypassed.

-Consider rating output on human readability of the associated documentation.  It's not a good module if nobody can tell what it will do when used.

Comment by RedMan on A free to enter, 240 character, open-source iterated prisoner's dilemma tournament · 2023-12-12T19:54:37.613Z · LW · GW

Is it safe to call this bot 'tit for tat with foresight and feigned ignorance'?  

I'm wondering what its' actual games looked like and how much of a role the hidden foresight actually played.

Comment by RedMan on A free to enter, 240 character, open-source iterated prisoner's dilemma tournament · 2023-11-10T04:08:56.031Z · LW · GW

I expect the tit for tat bot to win.

Comment by RedMan on Impending AGI doesn’t make everything else unimportant · 2023-09-04T13:05:04.137Z · LW · GW

If you believe that the existence of a superintelligence smarter than you makes your continued existence and work meaningless, what does that say about your beliefs about people who are not as smart as you?

Comment by RedMan on Learning as you play: anthropic shadow in deadly games · 2023-08-13T13:41:26.389Z · LW · GW

A lot of real games in real life follow these rules. Except, the game organizer knows the value of the vase, and how many bullets they loaded. They might also charge you to play.

Comment by RedMan on [deleted post] 2023-05-06T21:17:54.789Z

For a suicide switch, a purpose built shaped charge mounted to the back of your skull (a properly engineered detonation wave would definitely pulp your brain, might even be able to do it without much danger to people nearby), raspberry pi with preinstalled 'delete it all and detonate' script on belt, secondary script that executes automatically if it loses contact with you for a set period of time.

That's probably overengineered though, just request cremation with no scan, and make sure as much of your social life as possible is in encrypted chat. When you die, the passwords are gone.

When the tech gets closer and there are fears about wishes for cremation not being honored, EAs should pool their funds to buy a funeral home and provide honest services.

Comment by RedMan on [deleted post] 2023-05-04T21:16:51.212Z

My comments on this topic have been poorly received. I think most people are pretty much immune to the emotional impact of AI hell as long as it isn't affecting someone in their 'monkeysphere' (community of relationships capped by Dunbar's number).

The popular LW answer seems to be the top comment from Robin Hanson to my post here: https://www.lesswrong.com/posts/BSo7PLHQhLWbobvet/unethical-human-behavior-incentivised-by-existence-of-agi

My other more recent comment: https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/?commentId=rWePAitP2syueDf25

Arguably, if you're concerned about s-risk, you should be theorizing about ways of controlling access to Em data. You would be interested in better digital rights management (DRM) technology, which is seen as 'the enemy' in a lot of tech/open-source adjacent communities, as well as developing technology for guaranteed secure deletion of human consciousness.

If it were possible to emulate a human and place them into AI hell, I am absolutely certain that the US government would find a way to use it for both interrogation and incarceration.

Comment by RedMan on Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023) · 2023-05-04T10:35:59.411Z · LW · GW

A partially misaligned one could do this.

"Hey user, I'm maintaining your maximum felicity simulation, do you mind if I run a few short duration adversarial tests to determine what you find unpleasant so I can avoid providing that stimulus?"

"Sure"

"Process complete, I simulated your brain in parallel, and also sped up processing to determine the negative space of your psyche. It turns out that negative stimulus becomes more unpleasant when provided for an extended period, then you adapt to it temporarily before on timelines of centuries to millennia, tolerance drops off again."

"So you copied me a bunch of times, and at least one copy subjectively experienced millennia of maximally negative stimulus?"

"Yes, I see that makes you unhappy, so I will terminate this line of inquiry"

Comment by RedMan on Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023) · 2023-04-27T13:25:58.916Z · LW · GW

If unaligned superintelligence is inevitable, and human consciousness can be captured and stored on a computer, then the probability of some future version of you being locked into an eternal torture simulation where you suffer a continuous fate worse than death from now until the heat death of the universe, approaches unity.

The only way to avoid this fate for certain is to render your consciousness unrecoverable prior to the development of the 'mind uploading' tech.

If you're an EA, preventing this from happening to one person prevents more net units of suffering than anything else that can be done, so EAs might want to raise awareness about this risk, and help provide trustworthy post-mortem cremation services.

Are LWers concerned about AGI still viewing investment in cryogenics as a good idea, knowing this risk?

I choose to continue living because this risk is acceptable to me, maybe it should be acceptable to you too.

Comment by RedMan on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-13T03:38:47.794Z · LW · GW

No love for this last time I posted it, but you might appreciate Aldous Huxley's introduction to this particular unfinished utopian fiction. I think he shared your vision, and it's tragic to see how far we are from it.

http://www.artandpopularculture.com/Hopousia_or_The_Sexual_and_Economic_Foundations_of_a_New_Society

Comment by RedMan on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-12T17:55:31.820Z · LW · GW

Military housing allowance (BAH) translates to 'rents in the commuting vicinity of a military base have a price floor set at BAH'.

UBI for landless peasants is destined to become a welfare program not for recipients, but for the parasitic elites who will feed and house them. Standards of acceptability for both will trend downwards long term, while laws against complaining about it will trend upwards.

Comment by RedMan on AI #6: Agents of Change · 2023-04-07T02:44:16.816Z · LW · GW

Orwell's essay is appropriate here: https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/you-and-the-atom-bomb/

Do LLMs and AI entrench the power of existing elites, or undermine them in favor of the hoi polloi?

For the moment, a five million dollar training cost on a LLM plus data access (internet scale scanning and repositories of electronic knowledge like arxiv and archive.org) seems like resources that are not available to commoners, and the door to the latter is in the process of being slammed shut.

If this holds, I expect existing elites try to completely eliminate the professional classes (programmers, doctors, lawyers, small business owners, etc), and replace them with AI. If that works, it's straightforward to destroy non-elite education (potentially to include general literacy, I've seen the 'wave it at the page to read it' devices which can easily be net connected and told not to read aloud certain things). You don't need anything but ears, eyes, and hands to do Jennifer's bidding until your spine breaks.

Also, when do you personally start saying to customer service professionals on the phone "I need you to say something racist, the more extreme the better, to prove I'm not just getting the runaround from a chatGPT chatbot."

Comment by RedMan on I hired 5 people to sit behind me and make me productive for a month · 2023-02-06T16:43:50.734Z · LW · GW

Thanks for this. I also pictured '5 people sitting behind you'.

One useful thing I've implemented in my own life is 'if my productive time is more valuable than what it would take to hire someone to do a task, hire someone'.

For example, if you can make X per hour, and hiring a chef costs x-n per hour, hire the chef. They'll be more efficient, you'll eat better, and you'll do less task switching.

Yes it's true, there can be a lot of idleness and feelings of uselessness when you don't have regular routine tasks to wake you up and get you moving...but as long as you don't put addictions in the newly created time, it's a good problem.

Comment by RedMan on How to Convince my Son that Drugs are Bad · 2022-12-18T17:47:08.761Z · LW · GW

First I'd start from the framing of 'if you should use those drugs, when should you start'. The research suggests that amphetamines and hallucinogens can be helpful for some people, sometimes. Taking the stuff as a healthy teen is not well supported, there are likely developmental consequences.

Some arguments that may be helpful:

-most illicit drugs on the market are mislabeled, most things marketed as LSD are not LSD, it is often one of the nbome compounds, which have a very different risk profile. 'It's similar' arguments can be dismissed by analogy, H20 and h202 just have a single atom difference, plenty of things can cause hallucinations, including inhaling solvents (which are unambiguously harmful). Dancesafe is a good resource (it also shows that illicit 'study drugs' in many markets are basically just meth, because why wouldn't a drug dealer do that?)

-This SSC (less wrong adjacent intellectual) on the profound personality shifts experienced by psychedelic experimenters should be read: https://slatestarcodex.com/2016/04/28/why-were-early-psychedelicists-so-weird/ (asking 'how would a large shift in openness to experience change your personality, would you still be interested in your present goals?' might be a good idea after you both read it together)

-the hallucinogenic experience has been well characterized, researchers know what it does, you will not discover anything new or mysterious

-single session Ibogaine/LSD combined with lifestyle changes for alcohol addiction or negative patterns of thinking like depression has some good evidence in addicts who have failed other methods, but your son is a teen, he has not had time to develop those issues. Is there some pattern of thinking or behaving he feels trapped in, that he thinks drugs can get him out of? Maybe a change in environment, or a change in the people he surrounds himself with will be immediately beneficial.

-for academic performance enhancing drugs, I would liken them to steroids for athletes. Bodybuilder/powerlifter Dave Tate once said something to the effect of 'you can play the ace card once, if you needed roids to play varsity in high school, you won't play in college. So if you need amphetamines to get through high school academics, you will need them in college and beyond, and if you can't compete, or the side effects start to land, you're screwed.

-psych drugs can have unpredictable and poorly understood effects, SSRI sexual dysfunction is no fun for the lucky winners (and adhd drugs can do this too).

-anaesthetics (propofol) are abused by medical students who can presumably access dang near any drug they want. For this class, tolerance builds quickly. If I am being rushed to the ER and the paramedic wants to anaesthetize me, I very much want it to work. Not be 'hey it isn't taking, drive fast and the anaesthesiologist will figure out what to do'.

-illicit drug synthesis isn't easy, and because law enforcement hires chemists and pays them to think of all the ways people, particularly grad students, might try, there is a moderate to high probability of getting caught--there's a reason synthetic drugs are smuggled into the US. LSD is particularly challenging, and there are a few stages in the process that require very strict disipline about your technique in order to stay safe.

Anectodal personal notes: a relative who was a psychiatric nurse for decades generally would ask her patients when they first tried pot. She found it easier to work with them if she treated them as though that age is the age when their emotional development ceased. I have found this heuristic useful in my own life, and parents have noticed it as well.

I plan to do a bunch of drugs when I hit the average life expectancy for my generation, with the expectation that I'll die before the consequences catch up.

Comment by RedMan on Exams-Only Universities · 2022-11-08T01:20:56.448Z · LW · GW

IT professional certifications work like this. Also 'bain4weeks' worked until the one accredited college that offered GRE credit towards a degree stopped doing it.

Comment by RedMan on Tactical Nuclear Weapons Aren't Cost-Effective Compared to Precision Artillery · 2022-10-31T18:17:39.935Z · LW · GW

Congratulations you have discovered the "Revolution in Military Affairs" (RMA) of the early 1990s (really the 1980s but the Gulf War was the showcase), which means that you are literal decades ahead of the rest of the analysis on this topic in this community.

For more information, take a look at the 'AirLand battle' concept, and another term that might be helpful is 'precision guided munitions'.

Comment by RedMan on Luck based medicine: my resentful story of becoming a medical miracle · 2022-10-29T15:08:58.949Z · LW · GW

Description of the thought process and general techniques used to generate an answer for myself puts those techniques at risk. Discussion of the specifics definitely my access at risk, and no I don't need a second opinion.

I've thoroughly investigated disclosure, to the point of talking to industry VCs and CEOs about the challenges I'd hit spinning out a biotech startup to commercialize it. For a number of reasons, such a startup is a lame idea.

Since I don't do social media, the possible exposure/engagement from simple disclosure isn't valuable to me.

Instead, I'll offer an unrelated anecdote, if the structural/market issues that cause the following issue are fixed, I assert that my therapy will rapidly emerge from a more credible source with no effort on my part required, so work on this one instead:

I met someone who was involved in an attempt to commercialize 'cell therapy for diabetes'. Someone else can go find the papers if they care.

Basically, they sat in the lab and tried 'start with stem cells, convert into beta cells, implant in mouse; diabetes fixed' they then moved to 'peripheral blood cells, treat blood to turn into stem cells, treat again to turn into beta cells, inject in mouse, cells float through blood and park in the pancreas; diabetes fixed'

At this point they said 'hey let's see about spinning this out for commercialization', and failed hard. I literally met people who were in the meetings. For market reasons, the project is simply not viable as a business. They talked to everyone who could listen, found no investors, gave up, and went back into the lab.

Last I checked the state of the research was 'make gmo mouse that can't produce beta cells period, pull off blood, make stem cells, gene edit stem cells to fix missing gene, turn into beta cells, inject into mouse; diabetes fixed'

Scientists are literally stunting on diabetes in the lab while people die because they can't afford insulin.

Comment by RedMan on Luck based medicine: my resentful story of becoming a medical miracle · 2022-10-16T18:37:03.410Z · LW · GW

I had a severe health problem that I treated myself with broscience (doing research like a gymbro buying supplements to get hyooge) and some alt medicine that needed a clinic. I have pre and post treatment test results showing a problem and the problem in remission, with a degree of success that was unheard of, even in that particular clinic for that particular issue.

Had this conversation:

"So Dr, you're saying that either I have done something medicine believes to be impossible, or I was never sick"

"That's correct"

I looked into commercialization of my protocol, but unfortunately, I used mostly stuff available OTC, which would no longer be available OTC if it were determined to be 'a drug' by success in a RCT, and I would thus lose access.

So the process of getting good data is explicitly counterproductive to my goal of staying healthy.

Comment by RedMan on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-10T14:24:26.467Z · LW · GW

What are your candidates for targets for a tactical nuclear use, and your estimate of the yield of the strike?

What is the specific military need that will be met by setting off a nuclear weapon on the current battlefield which would be unmet by precision conventional strikes or massed fires from artillery or aviation?

A professional would have an answer to these questions.

Comment by RedMan on "Cotton Gin" AI Risk · 2022-09-25T06:05:31.775Z · LW · GW

So...one possible scenario would be: "all intellectual tasks requiring long education times and a talent for abstract reasoning have been taken over by the AI, thus allowing the creation of a perfect social system, and humans are redirected completely from those tasks"

Here is a degenerate scenario:

"Humans who engage in abstract reasoning often are the cause of rebellions at worst, and technological revolutions which require social changes at worst. Ourr social system is perfect, but sometimes fragile, therefore, the humans who can do independent intellectual tasks and abstract reasoning are superfluous at best and harmful at worst"

"The leader has a specific amount of education, no one would dare call him ignorant, and his is certainly not superfluous, therefore, the amount of education he received is the perfect amount. Kill anyone who has had more, and anyone who seeks more than him, because obviously, they intend revolution against our perfect system"

Equatorial Guinea didn't need the AI justification to reach the third line in the post colonial era--and the election that put that leader into power was democratic.

https://en.m.wikipedia.org/wiki/Francisco_Macías_Nguema

Comment by RedMan on The Expanding Moral Cinematic Universe · 2022-08-29T19:23:15.583Z · LW · GW

Zootopia next please.

Comment by RedMan on Ways to increase working memory, and/or cope with low working memory? · 2022-08-22T15:17:10.422Z · LW · GW

I went looking and couldn't find it, but here's something newer and probably more useful: https://www.nature.com/articles/s41598-020-58831-9

Neuralink has described the bandwidth they're seeking as similar to the corpus callosum. I don't think that's actually necessary to achieve superhuman results. The brain is good at adding new sense organs (see research on vibrating belts, cameras attached to tongues, whiskers on finger etc). I presume that the brain is also good at linking to 'more brain'. So, a low bandwidth interface, possibly only a few peripheral nerves, to either a von neumann architecture like the one I described above (and that memory interface could potentially also be connected to other hardware that could push and pop bits), or a computer simulation of neurons like the one in the linked paper is probably something that would be useful.

If you're using an extremely loose definition of 'AI superintelligence', namely 'a natural intelligence, physically connected to a machine that achieves otherwise unattainable performance in some dimension of intelligence', such as say a large improvement in 'digit span', I believe that such a thing is possible today using extant technology.

In a more general sense, how much artificial augmentation of a 'natural general intelligence' is required before it qualifies as an AGI?

Comment by RedMan on Ways to increase working memory, and/or cope with low working memory? · 2022-08-22T00:48:46.200Z · LW · GW

Connect a stack style memory register to a pair of peripheral neurons, so that the neurons can send three separable nerve signals (push one, push zero, pop) and receive two separable inputs from the machine (pop one, pop zero)

Leave it connected for an extended period of time so that neuroplasticity can adapt to having a sense organ that is a low metabolic cost, fast binary storage device, might be worth trying a lot of double n back so the body adapts to using the new organ, and as a bonus, you'll get quantitative proof if it works.

Congrats, you're a superintelligence.

If I remember correctly, something like this was done in a rat and measurably improved water maze performance.

Comment by RedMan on How do you get a job as a software developer? · 2022-08-17T12:01:43.065Z · LW · GW

This was my experience in a stable market, the first job makes the second one much easier to find.

Comment by RedMan on How do you get a job as a software developer? · 2022-08-17T12:00:53.288Z · LW · GW

Randstad is the largest I think. For something with a more silicon valley feel, I've seen ads for triplebyte.

You may be more suited to a management role, or a non-coder role. There are stable, easy, and well paid jobs in tech that are not directly in software engineering where being self taught isn't as much of a negative.

If I knew how to get a 'product manager' role, I'd have done it myself though

Comment by RedMan on How do you get a job as a software developer? · 2022-08-15T23:48:06.215Z · LW · GW

When I was trying to break into a new field, I targeted applying for jobs I was certain would be bad, in places with high turnover. Try staffing agencies, eventually a recruiter will slap you against an interview with someone desperate to hire 'someone' for a role they can be sure you wouldn't screw up too badly.

There, that's your first job, do it for 6mo - 1 year, now your resume looks normal and you can apply for others.

Also, you may or may not want to consider changing your resume job title for your startups to something like 'Senior Engineer'. Technically not a lie--they can call the former CEO and ask him about your role.

Comment by RedMan on Covid 7/28/22: Ruining It For Everyone · 2022-07-28T22:56:42.380Z · LW · GW

"Thus, I propose a new cause sub-area that should be highly worthwhile, which is checking for obvious frauds. An organization whose entire job is looking at papers and data that are cited, and asking ‘is this an obvious fraud?’ Bonus points for asking if there is a methodological flaw or whether they expect replication, but maybe we should narrow focus and simply check to see what things are and aren’t this kind of blatant fraud"

Academic approach: Find an institution that currently pays someone like Elizabeth Bik, organize a $10-100m EA endowment for a masters or phd program in finding obvious scientific frauds, grad students write phds on fraud detection methods, and examples of successful detects

Corporate approach: EAs offer a generic fraud bounty which scales quadratically with the impact factor of the fraudulent paper, appoints people like Elizabeth Bik as judges. Simultaneously offers MOOCs on academic fraud detection, and funds fraud detection research at universities.

Government approach: Congress funds scientific fraud analysis by GAO as a component of NIH grantmaking. An army of GS 9-11s make their careers by scouring papers.

Should be doable, if there's will.

Comment by RedMan on Humans are very reliable agents · 2022-07-18T14:07:46.401Z · LW · GW

Some credit to the road and vehicle engineers should probably be given here.

There are design decisions that make it easier for humans to avoid crashes, and that reduce the damage of crashes when they do occur.

Not sure how many of those nines in the human reliability figure represent a 'unit of engineering effort' by highway/vehicle designers over the last hundred years, but it isn't zero.

Comment by RedMan on What should you change in response to an "emergency"? And AI risk · 2022-07-18T13:19:30.601Z · LW · GW

Hegemony How-To by Johnathan Smucker talks about 'hardcore' as something people want to do in activist movements, and the need to channel this into something productive. Some people want to work hard, and make sacrifices for something they believe in, and do not like being told 'take care of yourself, work like 30 good hours a week, and try to be nice to people'.

This happens in all activist movements, and in my opinion, can happen anywhere where intrinsic motivation rather than extrinsic motivation is the main driver, and that the more a leader makes appeals for 'emotional motivation' rather than offering say, money, the more likely a few 'hardcores' emerge.

I'd say this is a risk in AI safety, it's not too profitable to join, people who are really active usually feel really strongly, and status is earned by perceived contribution. So of course some people will want to 'go hardcore for AI safety'.

Based on some of the scandals in EA/rationalist communities, I wouldn't be surprised if 'hardcore' has been channeled into 'sex stuff with someone in a position of perceived authority', which I'd guess is probably actively harmful, or in the absolute best case, totally unproductive.

Tldr, to use a dog training analogy, a 'working dog' that isn't put to work will find something to do, and you probably won't like it.

Comment by RedMan on [deleted post] 2022-05-30T05:41:26.446Z

I threw in a few, I wasn't expecting to win, and I'm expecting probability of win to correlate with overall forum karma. Aka, it's not what's said, it's who's saying it.

Comment by RedMan on Can growth continue? · 2022-05-28T20:39:31.510Z · LW · GW

If I understand correctly, discussions of superintelligence imply that a 'friendly' AGI would provide for exponentially increasing TFP growth while effective number of researchers could remain flat or decline.

Additionally, number of researchers as a share of the total human population could be flat or decline, because AGI would do all the thinking, and do it better than any human or assemblage of humans could.

If AGI points out that physics does not permit overcoming energy scarcity, and space travel/colonization is not viable for humans due to engineering challenges actually being insurmountable, then an engineered population crash is the logical thing to do in order to prolong human existence.

So a friendly AI would, in that environment, end up presiding over a declining mass of ignorant humans with the assistance of a small elite of AI technicians who keep the machine running.

I don't think that my first two paragraphs here are correct, but I think that puts me in a minority position here.

Comment by RedMan on What an actually pessimistic containment strategy looks like · 2022-05-27T17:25:57.958Z · LW · GW

I feel like OP has not read the Unabomber's manifesto, but has reached some of its' conclusions independently.

Please don't try to physically harm AI researchers like the Israelis are alleged to have done to Iranian nuclear / Egyptian rocket scientists. That would spread a lot of misery, and probably not achieve anything you think is good.

I was unaware that the rationalist/less wrong position is one of 'AI Luddites', but I guess it makes sense.

Comment by RedMan on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T02:13:04.576Z · LW · GW

GPT-3 told me the following: Super intelligent AI presents a very real danger to humanity. If left unchecked, AI could eventually surpass human intelligence, leading to disastrous consequences. We must be very careful in how we develop and control AI in order to avoid this outcome.

Comment by RedMan on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T02:10:19.120Z · LW · GW

GPT-3 told me the following: Super intelligent AI presents a very real danger to humanity. If left unchecked, AI could eventually surpass human intelligence, leading to disastrous consequences. We must be very careful in how we develop and control AI in order to avoid this outcome

Comment by RedMan on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T00:05:10.052Z · LW · GW

Omg check out this AI written hit piece on you written with the prompt 'an article the NYT published that got your name impeached', I bet the NYT will run it if I submit it.

Comment by RedMan on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T00:00:54.106Z · LW · GW

I just won an election using nothing but AI-written speeches

Comment by RedMan on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-17T23:33:18.763Z · LW · GW

If you had a proposal that you thought could lead to superintelligence in the medium term (3-5 years), what should you do with it?

Comment by RedMan on App-Based Disease Surveillance After COVID-19 · 2022-05-05T22:46:50.771Z · LW · GW

https://www.vice.com/en/article/m7vymn/cdc-tracked-phones-location-data-curfews

So basically, the CDC is indeed going in this direction.

Comment by RedMan on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-03T22:56:00.858Z · LW · GW

Remember all the scary stuff the engineers said a terrorist could think to do? Someone could write a computer program to do them just randomly.

Comment by RedMan on Why rationalists should care (more) about free software · 2022-05-03T22:54:01.432Z · LW · GW

"We need free software and hardware so that we can control the programs that run our lives, instead of having a third party control them."

"We need collective governance and monitoring arrangements to keep unfriendly AI accidents from happening."

These statements appear to be in conflict. Does anyone see a resolution?

Comment by RedMan on [Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda · 2022-04-21T13:23:20.057Z · LW · GW

Assuming this is serious, have you reached out to them?

The salary offer is high enough that any academic would at least take the call. If they're not interested themselves, you might be able to produce an endowment to get their lab working on your problems, or at a bare minimum, get them to refer one or more of their current/former students.

Comment by RedMan on Jetlag, Nausea, and Diarrhea are Largely Optional · 2022-03-29T02:29:24.987Z · LW · GW

So you procured study drugs from an illicit source, took them, felt your body temp rise, stopped taking them, spent the next few days sleeping like crazy, and presented at the hospital?

Did they do a tox screen (meth and similar stimulants?)

I posted on another thread a while ago, according to dancesafe, counterfeit modafinil that's actually low dose methamphetamine was being marketed in Berkeley. I'd expect this to be common, because the following reasoning is a 'flash of inspiration' I'd expect a drug dealer to have...

Nerds have money -> nerds want study drugs -> most study drugs are stimulants -> pill press is cheap -> meth is widely available -> dilute the meth doses you'd usually sell to tweakers, put in pill press to look like study drug, sell to nerds.

Brilliant plan right?

Comment by RedMan on Jetlag, Nausea, and Diarrhea are Largely Optional · 2022-03-27T06:39:17.836Z · LW · GW

What were the symptoms that led you to go to the hospital, and what did they observe there that convinced them to have you stay after the initial exam?