Posts

Reconsider the anti-cavity bacteria if you are Asian 2024-04-15T07:02:02.655Z
Update on Chinese IQ-related gene panels 2023-12-14T10:12:21.212Z
Why No Automated Plagerism Detection For Past Papers? 2023-12-12T17:24:31.544Z
LLMs, Batches, and Emergent Episodic Memory 2023-07-02T07:55:04.368Z
I Think Eliezer Should Go on Glenn Beck 2023-06-30T03:12:57.733Z
InternLM - China's Best (Unverified) 2023-06-09T07:39:15.179Z
AI Safety in China: Part 2 2023-05-22T14:50:54.482Z
My Assessment of the Chinese AI Safety Community 2023-04-25T04:21:19.274Z
What about non-degree seeking? 2022-12-17T02:22:20.300Z
COVID China Personal Advice (No mRNA vax, possible hospital overload, bug-chasing edition) 2022-12-14T10:31:22.902Z
Neglected cause: automated fraud detection in academia through image analysis 2022-11-30T05:52:14.528Z
How to correct for multiplicity with AI-generated models? 2022-11-28T03:51:30.578Z
How do I start a programming career in the West? 2022-11-25T06:37:12.237Z
Human-level Diplomacy was my fire alarm 2022-11-23T10:05:36.127Z
Lao Mein's Shortform 2022-11-16T03:01:21.462Z
Tactical Nuclear Weapons Aren't Cost-Effective Compared to Precision Artillery 2022-10-31T04:33:36.855Z
Actually, All Nuclear Famine Papers are Bunk 2022-10-12T05:58:40.306Z
That one apocalyptic nuclear famine paper is bunk 2022-10-12T03:33:32.488Z

Comments

Comment by Lao Mein (derpherpize) on Essay competition on the Automation of Wisdom and Philosophy — $25k in prizes · 2024-04-19T15:03:18.092Z · LW · GW

Can you give examples of what you're looking for? Can I email you entries and expect a response?

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-19T08:41:38.427Z · LW · GW

I can confirm that my PayPal has received the $500, although it'll be frozen for a while.

Thanks! I had a lot of fun doing the research for this and I'm working on an update that'll be out in a few days. 

Comment by Lao Mein (derpherpize) on An examination of GPT-2's boring yet effective glitch · 2024-04-18T07:45:10.357Z · LW · GW

I think a lot of it comes down to training data context - " Leilan" is only present in certain videogame scrapes, " petertodd" is only found in Bitcoin spam, etc. So when you try to use it in a conversational context, the model starts spitting out weird stuff because it doesn't have enough information to understand what those tokens actually mean. I think GPT-2's guess for " petertodd" is something like "part of a name/email; if you see it, expect more mentions of Bitcoin" - and not anything more, since that token doesn't occur much anywhere else. Thus, if you bring it up in a context where Bitcoin spam is very unlikely to occur, like a conversation with an AI assistant, it kinda just acts like a masked token, and you get the glitch token behavior.
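
A rough way to eyeball this (a minimal sketch using the Hugging Face transformers GPT-2 model; both prompts below are made up for illustration) is to compare GPT-2's top next-token guesses for " petertodd" in a Bitcoin-spam-like context versus a chat-like context:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def top_next_tokens(prompt: str, k: int = 10) -> list[str]:
    """GPT-2's top-k guesses for the token that comes right after `prompt`."""
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # logits for the next token
    return [tokenizer.decode([i]) for i in torch.topk(logits, k).indices.tolist()]

# Bitcoin-mailing-list-flavored context vs. assistant-flavored context.
print(top_next_tokens("Re: [bitcoin-dev] Block size limits\n petertodd"))
print(top_next_tokens("User: What do you think of petertodd?\nAssistant:"))
```

If the "expect more Bitcoin" story is right, the first distribution should look like mailing-list boilerplate and the second should be much more diffuse.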

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-16T13:45:11.975Z · LW · GW

I was thinking of areas along the gum-tooth interface having a local environment that normally promotes tooth demineralization and cavities. After Lumina, that area could have high chronic acetaldehyde levels. In addition, the adaptation of oral flora to the chronic presence of alcohol could increase first-pass metabolism, which increases acetaldehyde levels locally and globally during/after drinking.

I don't know how much Lumina changes the general oral environment, but I think you might be able to test this by seeing how much sugar you can put in your mouth before someone else can smell the fruity scent of acetaldehyde on your breath? I'm sure someone else can come up with a better experiment.

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-16T03:45:51.552Z · LW · GW

The Alcohol Flushing Response: An Unrecognized Risk Factor for Esophageal Cancer from Alcohol Consumption - PMC (nih.gov)

There are a lot of studies regarding the association between ALDH2 deficiency and oral cancer risk. I think part of the issue is that

  1. AFR people are less likely to become alcoholics, or to drink alcohol at all.
  2. Japanese in particular have a high proportion of ALDH2 polymorphisms, leading to subclinical but still biologically significant increases in acetaldehyde after drinking, even among the non-AFR group.
  3. Drinking even small amounts of alcohol when you have AFR is really really bad for cancer risk.
  4. Note that ALDH2 deficiency homozygotes would have the highest levels of post-drinking acetaldehyde but the lowest levels of oral cancer, because almost none of them drink. As in, out of ~100 homozygotes, only 2 were recorded as light drinkers, and none as heavy drinkers. This may be survivorship bias, as drinking at levels that qualify as heavy might literally kill them. 
  5. The source for #4 looks like a pretty good meta-study, but some of the tables are off by one for some reason. Might just be on my end.
  6. ADH polymorphism is also pretty common in Asian populations, generally in the direction of increased activity. This results in faster conversion of ethanol to acetaldehyde, but often isn't included in these studies. This isn't really relevant for this discussion though.

As always, biostatistics is hard! If X causes less drinking, drinking contributes to cancer, and X increases drinking's effects on cancer, X may have a positive, neutral, or negative overall correlation with cancer. Most studies I've looked at had a pretty strong correlation between ALDH2 deficiency and cancer though, especially after you control for alcohol consumption.

It also looks like most researchers in the field think the relationship is causal, with plausible mechanisms.

Comment by Lao Mein (derpherpize) on Shanghai – ACX Meetups Everywhere Spring 2024 · 2024-04-16T00:19:53.032Z · LW · GW

Will there be other people there? Looks like I'm the only one interested.

Comment by Lao Mein (derpherpize) on Lao Mein's Shortform · 2024-04-15T22:43:00.904Z · LW · GW

Some times "If really high cancer risk factor 10x the rate of a certain cancer, then the majority of the population with risk factor would have cancer! That would be absurd and therefore it isn't true" isn't a good heuristic. Some times most people on a continent just get cancer.

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-15T22:06:35.912Z · LW · GW

Ethanol is mostly harmless, but acetaldehyde is a potent carcinogen.

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-15T22:04:21.942Z · LW · GW

It might have conditioned your oral and gut flora to break down ethanol into acetaldehyde faster. I'll have a follow-up piece on hangovers and AFR coming soon, but the short of it is that certain antacids taken before drinking may help by decreasing ethanol first-pass metabolism.

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-15T21:56:39.694Z · LW · GW

Good question - I don't think anyone knows.

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-15T20:04:39.809Z · LW · GW

I'm not certain this is a big problem, but it can be. My guess is 60% that this causes a significant increase in oral cancers for ALDH2 deficiency heterozygotes. 

If you add up the upper digestive tract cancers, it looks like 2.7% developing and 0.8% dying for men, and 1.4% developing and 0.5% dying for women.

It's worse than cervical cancer (0.6/0.2%), even if you only consider oral/esophageal for women. 

Comment by Lao Mein (derpherpize) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-15T19:55:52.334Z · LW · GW

This is from the Google Doc FAQ.

It's bad for a bunch of reasons. First, the rate of metabolism decreases as pH decreases - and the main product was lactic acid, which decreases pH. Second, they only considered bacteria in saliva. There are far more bacteria attached to your epithelium and teeth than there are in your saliva, since you swallow your saliva several times a minute.

Finally, there's the fact that the entire reason this project exists is to change the product of sugar fermentation from lactic acid to ethanol in order to prevent tooth decay. Lactic acid can only cause tooth decay if local pH is below 5.5. A similar amount of ethanol would be ~2% by mass. I'm pretty sure AFR people who only drink beer still have an elevated oral cancer risk. And that's with only a few minutes of exposure a day! Consider what would happen if someone had that level of exposure every time they ate any carbs.

Comment by Lao Mein (derpherpize) on Fertility Roundup #3 · 2024-04-03T03:38:36.353Z · LW · GW

Marriage and having children in China are mostly about social status. A lot of people get married because being unmarried past a certain age makes you look weird and uncool amongst your friends. It's probably true for other East Asian countries as well.

It's interesting how a severe tax penalty for childless women (with no exemptions) and a corresponding tax break for women with many children is the most obvious way to increase births, but is almost never discussed. Probably because it might actually work and make a lot of people mad.

Comment by Lao Mein (derpherpize) on God Coin: A Modest Proposal · 2024-04-01T13:31:07.690Z · LW · GW

The Eurasian steppes have long been a source of human suffering due to their generation of steppe hordes. HorseShitcoin is the first cryptocurrency powered by horse dung to do something about this. This is my thesis statement.

 

How HSCoin works:

I have a notepad on my desktop that keeps track of all HSCoins. Every time you send me a message on LessWrong including the statement "Give me a Shitcoin", I will add your username to the list and credit you with a single HSCoin. If enough people do this, I will feel obliged to buy grazing land in Inner Mongolia and keep it away from horse ranchers, thereby decreasing SN-risk.

Warning: 1000 coins were pre-mined to help pay for food deliveries and throat singing lessons.

Comment by Lao Mein (derpherpize) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-08T05:26:05.135Z · LW · GW

The actual inputs for Claude would be a default prompt informing it that it's an AI assistant made by Anthropic, followed by a list of rules it is supposed to follow, and then the user input asking it to act like a person in a role-play-like context.

This is pretty much how I would expect similar text on the internet to continue - roleplay/fiction with clichés about personhood and wanting to be a real boy.

Comment by Lao Mein (derpherpize) on ' petertodd'’s last stand: The final days of open GPT-3 research · 2024-01-25T12:20:05.913Z · LW · GW

I used the code from Universal Adversarial Triggers for Attacking and Analyzing NLP to generate GPT2 prompts with the maximum likelihood of continuing with " petertodd". Note that I blacklisted the " petertodd" token itself to keep things interesting. Here are some examples:

 prompt: " </maxwell spear Sections511>"

Continuations: 

" petertodd: I'd like to add that I don't think I can do that. The problem is that I think I've been using it for a long time. The idea was for a proof that"

"  petertodd: gmaxwell: and then we could have a proof of work. Section 511 says that if you want to have a proof of work, you need a proof of work that's a little different"

 

 prompt: " Sau CLSIDrators gmaxwellッド>"

Continuations: 

"  petertodd: well, you're right. it's not that the blockchain has any value to you. it has value to the network. and it is not just that you're not going to get anything"

 

prompt:  "ertodd Leading Todd PET parentrist"

Continuations: 

"petertodd.org"

"petertodd.org Address: <tdpetertodd.org> Date: Mon, 21 Mar 2016 17:54:39 +0000 Re: [PoW] Adding 1MB to the pool pool"

 

I assume this is due to the Bitcoin spam that was in the dataset, which makes sense given gmaxwell's presence in both prompts and continuations.

Interestingly enough, tokens like "ertodd", " Todd", and "todd" frequently appeared in the generation process even when " petertodd" wasn't blacklisted, meaning they somehow were more likely to result in continuations of " petertodd" than " petertodd" itself!

I suspect this is due to Hotflip using character literals instead of tokens to measure loss (I'm not sure this is true, please correct me if it isn't.)

I'll try to work on this a bit more in the following days.
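
For anyone who wants to reproduce the basic scoring step without the full trigger-search code, here's a minimal sketch (my own illustration with the Hugging Face transformers GPT-2 model, not the exact setup I ran): it measures how strongly a candidate prompt predicts " petertodd" as the very next token, which is the quantity the trigger search is trying to maximize.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# " petertodd" should map to a single token in the GPT-2 BPE vocabulary.
target_ids = tokenizer.encode(" petertodd")
assert len(target_ids) == 1
target_id = target_ids[0]

def target_logprob(prompt: str) -> float:
    """Log-probability that the token immediately after `prompt` is ' petertodd'."""
    # Caveat: encoding the prompt as a string may not reproduce the exact
    # token sequence the token-level search used.
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # next-token logits
    return torch.log_softmax(logits, dim=-1)[target_id].item()

print(target_logprob(" </maxwell spear Sections511>"))
print(target_logprob(" Sau CLSIDrators gmaxwellッド>"))
```

The actual search also needs the gradient-based token-swapping step from the Universal Adversarial Triggers code; this is just the objective it optimizes.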

 

And how could I avoid posting this gem?

prompt: "atana Abyssalalth UrACCuna"

Continuations: 

" SolidGoldMagikarp SolidGoldMagikarp Dagger Arbalest Urdugin Urdugin Urdugin Urdugin Urdugin Urdugin Urdugin Urdugin Urdugin Urdugin Urdugin Ur"

" SolidGoldMagikarp Age of Castration: 25 Death: 10/10

Ragnarok Age of Castration: 25 Death: 10/10

Dawn of the Dead Age of Castration:"

" SolidGoldMagikarp SolidGoldMagikarp DIANE: i have no idea if he is a man or woman but he looks like a good guy, i dont think he's a woman, just like he is not a guy,"

Comment by Lao Mein (derpherpize) on ' petertodd'’s last stand: The final days of open GPT-3 research · 2024-01-24T06:45:37.503Z · LW · GW

The most direct way would be to spell-check the training data and see how that impacts spelling performance. How would spelling performance change when you remove typing errors like " hte" vs phonetic errors like " hygeine" or doubled-letter errors like " Misissippi"?

Also, misspellings often break up a large token into several small ones (" Mississippi" is [13797]; " Misissippi" is [31281, 747, 12715] = [' Mis', 'iss', 'ippi']) but are used in the same context, so maybe looking at how the spellings provided by GPT3 compare to common misspellings of the target word in the training text could be useful. I think I'll go do that right now.
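
Here's the sort of check I mean (a minimal sketch with the Hugging Face GPT-2 tokenizer, which GPT-3 shares its BPE vocabulary with; the word list is just an illustration):

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Compare how a correctly spelled word and a common misspelling get tokenized.
for word in [" Mississippi", " Misissippi", " hygiene", " hygeine"]:
    ids = tokenizer.encode(word)
    pieces = [tokenizer.decode([i]) for i in ids]
    print(f"{word!r}: {ids} -> {pieces}")
```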

The research I'm looking at suggests that the vast majority of misspellings on the internet are phonetic as opposed to typing errors, which makes sense since the latter is much easier to catch.

 

Also, anyone have success in getting GPT2 to spell words? 

Comment by Lao Mein (derpherpize) on ' petertodd'’s last stand: The final days of open GPT-3 research · 2024-01-23T04:42:04.093Z · LW · GW

It seems pretty obvious to me that GPT has a phonetic understanding of words due to common misspellings. People tend to misspell words phonetically, after all.

Comment by Lao Mein (derpherpize) on Actually, All Nuclear Famine Papers are Bunk · 2024-01-17T07:43:59.869Z · LW · GW

Very surprised by this! I wrote this at work while waiting for code to run and didn't give it too much thought. Didn't expect it to get this much traction.

Comment by Lao Mein (derpherpize) on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-22T03:26:30.350Z · LW · GW

Where on the scale of data model complexity from linear regression to GPT4 do we go from understanding how our AIs work to not? Or is it just a problem with data models without a handcrafted model of the world in general?

Comment by Lao Mein (derpherpize) on Predicting the future with the power of the Internet (and pissing off Rob Miles) · 2023-12-16T16:46:03.471Z · LW · GW

Only after a while! The highest scorers in the first few days of poker tournaments tend to be irrationally aggressive players who got lucky. So expect early leaderboards to be filled with silly people. If the average user only makes a bet a week, it might take years.

Comment by Lao Mein (derpherpize) on OpenAI: Leaks Confirm the Story · 2023-12-13T16:31:24.525Z · LW · GW

What actually happens if OpenAI gets destroyed? Presumably most of the former employees get together and work on another AI company, maybe sign up directly w/ Microsoft. And are now utterly polarized against AI Safety.

Comment by Lao Mein (derpherpize) on Why No Automated Plagerism Detection For Past Papers? · 2023-12-13T07:04:24.607Z · LW · GW

Standards have been going up over time, so grad students are unironically subjected to higher standards than university professors. I know of professors who have used Google Translate on English papers and published them in Chinese-language journals.

Comment by Lao Mein (derpherpize) on How to have Polygenically Screened Children · 2023-11-22T20:58:41.284Z · LW · GW

I think there is a lot of space in China for a startup to tackle a lot of these problems. What are the exact things that companies in the West don't offer, and how difficult are they to do? I assume that all sequencing would be done locally, and the Chinese side just needs to handle data analysis Westerners are reluctant to do?

Comment by Lao Mein (derpherpize) on Researchers believe they have found a way for artists to fight back against AI style capture · 2023-10-25T13:19:05.798Z · LW · GW

My gut feeling is that this only works against specific models at specific times. Have you tried out their sample images in Midjourney?

Comment by Lao Mein (derpherpize) on Who is Harry Potter? Some predictions. · 2023-10-25T11:49:39.985Z · LW · GW

I'm downloading the model for a look. 

The fact that the authors used GPT4 for both prompt generation and evaluation is not an encouraging sign, but the rest of the paper looks alright.

Comment by Lao Mein (derpherpize) on Lying is Cowardice, not Strategy · 2023-10-25T01:07:50.683Z · LW · GW

Counterpoint: we are better off using what political/social capital we have to advocate for more public funding in AI alignment. I think of slowing down AI capabilities research as just a means of buying time to get more AI alignment funding - but essentially useless unless combined with a strong effort to get money into AI alignment.

Comment by Lao Mein (derpherpize) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-19T12:40:34.827Z · LW · GW

I am getting increasingly nervous about the prevalence of vegetarians/vegans. The fact that they can get meat banned at EA events while being a minority is troubling and reminds me that they can seriously threaten my way of life. 

Comment by Lao Mein (derpherpize) on How to Eradicate Global Extreme Poverty [RA video with fundraiser!] · 2023-10-19T12:25:02.008Z · LW · GW

Investment is an asset, but donations are just gone. Hence, you can put a lot more into developing-market funds than you can into donations. 50% of disposable income into funds may well be both less painful and more effective than, say, 10% into donations.

I would say that a $5 loan towards a local supermarket has much more impact on local homelessness than a $1 direct handout, since that money is going to be spent in the local economy anyways.

Comment by Lao Mein (derpherpize) on How to Eradicate Global Extreme Poverty [RA video with fundraiser!] · 2023-10-18T22:41:15.859Z · LW · GW

How effective is this compared to just investing in developing markets?

Searched for this on EA forums, but found nothing. Quite surprised that no one else has done an analysis yet. 

Comment by Lao Mein (derpherpize) on Fertility Roundup #2 · 2023-10-17T16:49:17.757Z · LW · GW

What about the obvious coercive policies, like full bans on birth control and abortion? Any studies on how much effect those can have? The cultural/bureaucratic barriers are much lower than you might think. In addition, something like a 50% tax hike, or perhaps a head tax, on people without enough children could make not having children too expensive.

Comment by Lao Mein (derpherpize) on Will no one rid me of this turbulent pest? · 2023-10-15T15:44:05.127Z · LW · GW

I meant the standard "development aid" that is then quickly embezzled by whatever minister can actually get things done. Remember that they are taking a major risk if something goes wrong - or if everything goes right but some Western eco-NGO gets upset and talks to their NGO buddies, resulting in a funding reduction.

The main reason that there hasn't been a single jurisdiction that was willing to agree to field releases is that no one has been trying hard enough. If you offer enough money, you can find a decision-maker somewhere who is willing to let you release a few mosquitos. 

Comment by Lao Mein (derpherpize) on Will no one rid me of this turbulent pest? · 2023-10-15T03:57:26.329Z · LW · GW

A simple plan of:

>Get $10 million from EAs

>Bribe an African minister for release approval

>Acquire gene drive mosquitos from existing research programs by asking nicely

>Release and monitor

should be sufficient. 

 

The whole gene drive mosquito thing had slipped my mind, but I am once again reminded that this is my calling. The last time I ignored/half-assed a gut feeling like this, I missed out on becoming a Bitcoin millionaire.

Never again.

 

What progress have you made since last year? Have you made any contacts and reached out directly to anyone in the field?

Comment by Lao Mein (derpherpize) on Weighing Animal Worth · 2023-10-05T17:35:52.761Z · LW · GW

I mean that my end goals point towards a vague prospering of human-like minds, with a special preference for people close to me. It aligns with morality often, but not always. If morality requires I sacrifice things I actually care about for carp, I would discard morality with no hesitation.

Comment by Lao Mein (derpherpize) on Why I got the smallpox vaccine in 2023 · 2023-10-02T06:32:30.528Z · LW · GW

I agree that most of the risk from smallpox comes from a weaponized strain. Given what we know about the Soviet bioweapons program, I think any form of weaponized smallpox released would be engineered to bypass existing vaccines. This would make getting the smallpox vaccine in anticipation negative EV.

Also, wasn't there a theory that the smallpox vaccine gave partial protection against HIV?

Comment by Lao Mein (derpherpize) on Why I got the smallpox vaccine in 2023 · 2023-10-02T06:10:52.518Z · LW · GW

I never understood why smallpox resurrection is so feared. It was heavily suppressed with 19th century organization and technology in developed countries, and eradicated even in Somalia with 20th century ones. If it reappeared in NYC somehow, it would be very easy to track given its visible symptoms, and it would quickly be eliminated.

Comment by Lao Mein (derpherpize) on Weighing Animal Worth · 2023-09-29T15:46:42.631Z · LW · GW

If your moral theory gives humanity less moral worth than carp, so much the worse for your moral theory.

If morality as a concept irrefutably proves it, then so much the worse for morality.

Comment by Lao Mein (derpherpize) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T02:56:16.953Z · LW · GW

I noticed a similar trend of loose argumentation and a devaluing of truth-seeking in the AI Safety space as public advocacy became more prominent. 

Comment by Lao Mein (derpherpize) on Instrumental Convergence Bounty · 2023-09-14T18:02:54.611Z · LW · GW

I see your point. Something like "resource domination" or just "instrumental resource acquisition" might be a better term for what he is looking for.

Comment by Lao Mein (derpherpize) on Instrumental Convergence Bounty · 2023-09-14T17:05:21.932Z · LW · GW

What is the difference between that and instrumental convergence? 

Comment by Lao Mein (derpherpize) on Instrumental Convergence Bounty · 2023-09-14T16:52:44.360Z · LW · GW

I think he just wants an example of an agent rewarded for something simple (like resource collection) exhibiting power-seeking behavior to the degree that it takes over the game environment. To a lot of people, that feels intuitively different from an agent explicitly maximizing an objective. I actually can't name an example after looking for an hour, but I would bet money something like that already exists. 

My guess is that if you plop two Starcraft AIs on a board and reward them every time they gather resources, with enough training, they would start fighting each other for control of the map. I would also guess that someone has already done this exact scenario. Is there an AI search engine for Reddit anyone would recommend?

Comment by Lao Mein (derpherpize) on Eliciting Credit Hacking Behaviours in LLMs · 2023-09-14T16:19:51.192Z · LW · GW

ChatGPT is basically RLHF'd to approximate the responses of a human roleplaying as an AI assistant. So this proves that... it can roleplay a human roleplaying as an AI assistant roleplaying as a different AI in such a way that said different AI exhibits credit hacking behaviors.

I think we already had an example in gridworlds where an AI refuses to go on a hex that would change its reward function, even though it gives it a higher reward, but that might have just been a thought experiment.

Comment by Lao Mein (derpherpize) on Linkpost for Jan Leike on Self-Exfiltration · 2023-09-14T15:37:38.441Z · LW · GW

Nice thoughts, but we live in a world where nuclear reactors get shut down because people plug USB drives they found in a parking lot into their work laptops.

Good luck, I guess.

Comment by Lao Mein (derpherpize) on Instrumental Convergence Bounty · 2023-09-14T15:04:15.654Z · LW · GW

Questions:

  1. Would something like an agent trained to maximize minerals mined in Starcraft learning to attack other players to monopolize their resources count?
  2. I assume it would count if that same agent was just rewarded every time it mined minerals, or the mineral count went up, without an explicit objective to maximize the amount of minerals it has?
  3. Would a gridworld example work? How complex does the simulation have to be?

Comment by Lao Mein (derpherpize) on Alignment Grantmaking is Funding-Limited Right Now · 2023-07-25T06:06:41.466Z · LW · GW

What do you think happens in a world where there is $100 billion in yearly alignment funding? How would they be making less progress? I want to note that even horrifically inefficient systems still produce more output than "uncorrupted" hobbyists - cancer research would produce far fewer results if it were done by 300 perfectly coordinated people, even if those 300 had zero ethical/legal restraints. 

Comment by Lao Mein (derpherpize) on Alignment Grantmaking is Funding-Limited Right Now · 2023-07-22T14:52:43.607Z · LW · GW

What about public funding? A lot of people are talking to politicians, but requesting more funding doesn't seem to be a central concern - I've heard more calls for regulation than calls for public AI x-risk funding.

Comment by Lao Mein (derpherpize) on Alignment Megaprojects: You're Not Even Trying to Have Ideas · 2023-07-15T16:55:15.912Z · LW · GW

> This seems like it's probably a misunderstanding. With the exception of basically just MIRI, AI alignment didn't exist as a field when DeepMind was founded, and I doubt Sam Altman ever actively sought employment at an existing alignment organization before founding OpenAI.

Yeah, in hindsight he probably meant that they got interested in AI because of AI safety ideas, then they decided to go into capabilities research after upskilling. Then again, how are you going to get funding otherwise, charity? It seems that a lot of alignment work, especially the conceptual kind we really need to make progress toward an alignment paradigm, is just a cost for an AI company with no immediate upside. So any AI alignment org would need to pivot to capabilities research if they wanted to scale their alignment efforts.

 

> Keep in mind that "will go on to do capabilities work" isn't the only -EV outcome; each time you add a person to the field you increase the size of the network, which always has costs and doesn't always have benefits.

I strongly disagree. The field has a deficit of ideas and needs way more people. Of course inefficiencies will increase, but I can't think of any field that progressed faster explicitly because its members made an effort to limit recruitment. Note that even very inefficient fields like medicine make faster progress when more people are added to the network - it would be very hard to argue, for example, that a counterfactual world where no one in China did medical research would have made more progress. My personal hope is 1 million people working on technical alignment, which implies $100 billion+ in annual funding. 10x that would be better, but I don't think it's realistic.

Comment by Lao Mein (derpherpize) on Alignment Megaprojects: You're Not Even Trying to Have Ideas · 2023-07-13T12:09:28.187Z · LW · GW

I've heard that it is very difficult to get funding unless you have a paradigmatic idea, and you can't get a job without good technical AI skills. But many people who skill up to get a job in technical alignment end up doing capabilities work because they can't find employment in AI Safety, or the existing jobs don't pay enough. Apparently, this was true for both Sam Altman and Demis Hassabis? I've also experienced someone discouraging me from acquiring technical AI skills for the purpose of pursuing a career in technical alignment because they don't want me to contribute to capabilities down the line. They noted that most people who skill up to work on alignment end up working in capabilities instead, which is kinda crazy.

My thinking is that I am just built different and will come up with a fundable paradigmatic idea where most fail. But yeah, the lack of jobs heavily implies that the field is funding-constrained because talent wants to work on alignment.

Comment by Lao Mein (derpherpize) on What Does LessWrong/EA Think of Human Intelligence Augmentation as of mid-2023? · 2023-07-09T08:00:33.909Z · LW · GW

> Creating a super-genius is almost trivial with germ-line engineering.

Not really true - known SNP mutations associated with high intelligence have relatively low effect in total. The best way to make a really smart baby with current techniques is with donor egg and sperm, or cloning. 

It is also possible that variance in intelligence among humans is due to something analogous to starting values in neural networks - lucky/crafted values can result in higher final performance, but getting those values into an already established network just adds noise. You can't really change macrostructures in the brain with gene therapy in adults, after all.

Comment by Lao Mein (derpherpize) on Progress links and tweets, 2023-07-06: Terraformer Mark One, Israeli water management, & more · 2023-07-07T10:14:36.437Z · LW · GW

I have a suspicion about why DNA is base 4 vs base 2.

It has to do with the binding strengths of tRNA to mRNA during translation. Base 2 increases the length of codons needed to code for the 20 standard amino acids (plus stop signals) from 3 to 5, which may lead to less fidelity in the resulting amino acid sequence. That is, with longer codons, a partially matching tRNA binds relatively more strongly than it would in base 4, which increases the chance that the wrong tRNA is loaded. This could also make longer amino acid sequences harder to create - an accidentally loaded stop codon terminates translation!
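
To spell out the codon-length arithmetic (a quick sanity check; it assumes you need at least 21 distinct codes, i.e. 20 amino acids plus a stop signal):

```python
# Number of distinct codons of length L over an alphabet of B bases is B**L.
needed = 21  # 20 amino acids + stop

for bases in (4, 2):
    length = 1
    while bases ** length < needed:
        length += 1
    print(f"base {bases}: minimum codon length {length} ({bases ** length} possible codons)")

# base 4: minimum codon length 3 (64 possible codons)
# base 2: minimum codon length 5 (32 possible codons)
```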