Comments

Comment by Oleg S. on re: Yudkowsky on biological materials · 2023-12-12T05:14:10.952Z · LW · GW

Diamond is hard to make with enzymes because they can't stabilize intermediates for adding carbons to diamond.

This is a very strong claim, and it puts severe limitations on biotech capabilities. Do you have any references to support it?

Comment by Oleg S. on The goal of physics · 2023-09-03T21:06:17.213Z · LW · GW

When discussing the physics behind why the sky is blue, I'm surprised that the question 'Why isn't it blue on Mars or Titan?' isn't raised more often. Perhaps kids are so captivated by concepts like U(1) that they overlook inconsistencies in the explanation.

Comment by Oleg S. on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-07-02T16:12:07.719Z · LW · GW

Just realized that stability of goals under self-improvement is quite similar to stability of goals of mesa-optimizers, so the Vingean reflection paradigm and the mesa-optimization paradigm should fit together.

Comment by Oleg S. on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T14:47:35.361Z · LW · GW

What are the practical implications of alignment research in a world where AGI is hard?

Imagine we have a good alignment theory but do not have AGI. Can this theory be used to manipulate existing superintelligent systems such as science, the deep state, or the stock market? Does alignment research have any results that can be practically used outside of the AGI field right now?

Comment by Oleg S. on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T14:45:39.517Z · LW · GW

How does an AGI solve its own alignment problem?

For alignment to work, its theory should not only tell humans how to create an aligned super-human AGI, but also tell the AGI how to self-improve without destroying its own values. A good alignment theory should work across all intelligence levels. Otherwise, how does a paperclip optimizer that is marginally smarter than a human make sure that its next iteration will still care about paperclips?

Comment by Oleg S. on Intuitions about solving hard problems · 2022-04-27T06:00:12.500Z · LW · GW

I don’t know too much about alignment research, but what surprises me most is the lack of discussion of two points:

  1. For alignment to work, its theory should not only tell humans how to create an aligned super-human AGI, but also tell that AGI how to self-improve without destroying its own values. Otherwise, how does a paperclip optimizer that is marginally smarter than a human make sure that its next iteration will still care about paperclips? A good alignment theory should work across all intelligence levels.

  2. What are the practical implications of alignment research in a world where AGI is hard? Imagine we have a good alignment theory but do not have AGI. I would assume the theory could be used to manipulate existing superintelligent systems such as science, the deep state, or the stock market. The reverse of this: does alignment research have any results that can be practically used right now?

Comment by Oleg S. on What an actually pessimistic containment strategy looks like · 2022-04-06T06:28:35.655Z · LW · GW

What do you think about offering an option to divest from companies developing unsafe AGI? For example, by creating something like an ESG index that deliberately excludes AGI-developing companies (Meta, Google, etc.), or just excluding these companies from existing ESG indices.

The impact = making AGI research a liability (being AGI-unsafe costs money) + raising awareness in general (everyone will see AGI-safe and AGI-unsafe options in their pension investment menu, and the decision itself will make noise) + social pressure on AGI researchers (equating them to the fossil-fuel extraction guys).

Do you think this is implementable short-term? Is there a shortcut from this post to whoever makes decisions at BlackRock & Co?

Comment by Oleg S. on How common are abiogenesis events? · 2021-11-28T05:35:33.974Z · LW · GW

You can do something similar to the Drake equation:

N_life = N_stars × F_planet × (T_planet / T_RNA) × (S_planet × F_surface × D) / (V_RNA × R_volume) × N_base^(-L_RNA)

where N_life is how many stars with life there are in the Milky Way, and it is assumed that a) once a self-replicating molecule has evolved, it produces life with 100% probability; b) there is an infinite supply of RNA monomers; and c) the lifetime of an RNA molecule does not depend on its length. In addition:

  • Nstars - how many stars capable of supporting life there are (between 100 and 400 billion),
  • Fplanet - the number of planets and moons capable of supporting life per star - between 0.0006 (which is 0.2 Earth-sized planets per G2 star) and 20 (an upper bound on planets, each having an Enceladus- or Europa-like moon),
  • Tplanet - the mean age of a planet capable of sustaining life (5-10 Gy),
  • Splanet - the typical surface area of a planet capable of sustaining life (can be obtained from radii between 252 km for Enceladus and 2 R_Earth for Super-Earths),
  • Fsurface - the fraction of the surface where life can originate (between the tectonically active area fraction of about 0.3 and the total area, 1.0),
  • D - the typical depth of the layer above the surface where life can originate (between 1 m for surface-catalyzed RNA synthesis and 50 km for the ocean depth on Enceladus or Europa),
  • TRNA - the typical time required to synthesize an RNA molecule of the typical size needed for replication, between 1 s (from the replication rate of 1000 nucleotides per second for RNA polymerases) and 30 min (the replication time of E. coli),
  • VRNA - the minimal volume where RNA synthesis can take place, between the volume of a ribosome (20 nm in diameter) and the size of a eukaryotic cell (100 um in diameter),
  • Rvolume - the dilution of RNA replicators - between 1 (for tightly packed replicating units) and 10 million (calculated from a typical cell density in Earth's ocean of 5*10^4 cells/ml and a typical prokaryotic cell diameter of 1.5 um),
  • Nbase - the number of bases in the genetic code, equal to 4,
  • LRNA - the minimal length of a self-replicating RNA molecule.

You can combine everything except Nbase and LRNA into one factor Pabio, which gives you an approximation of the "sampling power" of the galaxy: how many base pairs could have been sampled. If you assume the parameters are distributed log-normally, with the lower end of each estimated range corresponding to the mean minus two standard deviations and the upper end to the mean plus two standard deviations (and converting everything to the same units), you get the approximate sampling power of the Milky Way.

Using this approximation, you can see how long an RNA molecule can be and still be found if you take the top 5% of the Pabio distribution: 102 bases. A sequence of 122 bases could be found in at least one galaxy in the observable universe (with 5% probability).
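A short script can reproduce this estimate. The SI-unit conversions and the analytic 95th-percentile shortcut (mean + 1.645 sigma in log10 space) are my own assumptions, so the result lands near, not exactly at, 102 bases:

```python
import math

# (low, high) ranges from the parameter list above, converted to SI units.
# The conversions and the analytic shortcut below are my assumptions,
# not the original calculation.
numerator = {
    "N_stars":   (1e11, 4e11),                        # stars in the Milky Way
    "F_planet":  (6e-4, 20),                          # habitable bodies per star
    "T_planet":  (5e9 * 3.156e7, 10e9 * 3.156e7),     # 5-10 Gy in seconds
    "S_planet":  (4 * math.pi * 2.52e5 ** 2,          # Enceladus surface, m^2
                  4 * math.pi * 1.274e7 ** 2),        # 2 R_Earth surface, m^2
    "F_surface": (0.3, 1.0),                          # usable surface fraction
    "D":         (1.0, 5e4),                          # layer depth, m
}
denominator = {
    "T_RNA":     (1.0, 1800.0),                       # synthesis time, s
    "V_RNA":     (4 / 3 * math.pi * 1e-8 ** 3,        # ribosome volume, m^3
                  4 / 3 * math.pi * 5e-5 ** 3),       # eukaryotic cell, m^3
    "R_volume":  (1.0, 1e7),                          # dilution factor
}

def log_stats(lo, hi):
    """Log-normal with lo = mean - 2 sigma and hi = mean + 2 sigma (in log10)."""
    return ((math.log10(lo) + math.log10(hi)) / 2,
            (math.log10(hi) - math.log10(lo)) / 4)

mu, var = 0.0, 0.0
for lo, hi in numerator.values():
    m, s = log_stats(lo, hi); mu += m; var += s * s
for lo, hi in denominator.values():
    m, s = log_stats(lo, hi); mu -= m; var += s * s

p95 = mu + 1.645 * math.sqrt(var)   # top 5% of log10(P_abio)
L_max = p95 / math.log10(4)         # longest RNA findable at that sampling power
print(f"median log10(P_abio) ~ {mu:.1f}, top-5% ~ {p95:.1f}, L_RNA ~ {L_max:.0f} bases")
```

With these unit choices the top-5% sampling power comes out around 10^62 base pairs, i.e. an RNA of roughly a hundred bases, in line with the figure above.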

In the 2009 article https://www.science.org/doi/10.1126/science.1167856, the RNA sequence in Fig. 1B contained 63 bases. Given the assumptions above, such an RNA molecule could have evolved between 0.3 and 300 trillion times per planet (for comparison, the abiogenesis event on Earth could have occurred 6-17 times in Earth's history, as calculated from the date of the earliest evidence of life).

The small 16S ribosomal subunit of prokaryotes contains ~1500 nucleotides; there is no way such complex machinery could have evolved in the observable universe by pure chance.

Comment by Oleg S. on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-15T19:10:17.413Z · LW · GW

On the object level, it looks like there is a spectrum of society-level interventions, starting from "incentivizing research that wouldn't be published" (which Eliezer supports) and going all the way to "scaring the hell out of the general public" and beyond. For example, I can think of removing $FB and $NVDA from ESG indices, disincentivizing the publishing of code and research articles in AI, and introducing regulation of the compute-producing industry. Where do you think the line should be drawn between reasonable interventions and ones that are most likely to backfire?

On the meta level, the whole AGI foom management/alignment effort starts not at some abstract point 50 years in the future, but right now, with the management of ML/AI research by humans. Do you know of any practical results produced by the alignment research community that can be used right now to manage societal backfire / align incentives?

Comment by Oleg S. on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-15T16:09:56.337Z · LW · GW

You haven't commented much on Eliezer's views on the social approach to slowing down the development of AGI - the blocks starting with

I don't know how to effectively prevent or slow down the "next competitor" for more than a couple of years even in plausible-best-case scenarios. 

and

I don't want to sound like I'm dismissing the whole strategy, but it sounds a lot like the kind of thing that backfires because you did not get exactly the public reaction you wanted

What's your take on this?

Comment by Oleg S. on Could you have stopped Chernobyl? · 2021-08-27T17:16:49.884Z · LW · GW

Here are some other failure modes that might be important: 

The Covid origin story (https://www.facebook.com/yudkowsky/posts/10159653334879228) - some sort of AI research moratorium is imposed in the US, the problem appears to be solved, but in reality it is just off-shored, and then it explodes in an unexpected way.

The Archegos/Credit Suisse blow-up (https://www.bloomberg.com/opinion/articles/2021-07-29/archegos-was-too-busy-for-margin-calls) - a special committee is set up to regulate AI-related risks, and there is general consensus that something has to be done, but the actions are bogged down by bureaucracy, and key stakeholders are unresponsive for a period that looks reasonable at first. However, the explosive nature of the AI development process is not taken into account, and the whole thing blows up much faster than the control system can manage to scram.

More practically, can you suggest specific topics to discuss at the 5 Sept ACX online meetup with Sam Altman, the CEO of OpenAI?

Comment by Oleg S. on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-12T14:54:48.067Z · LW · GW

An important consideration is whether you are trying to fool simulated creatures into believing the simulation is real by hiding glitches, or you are running an honest simulation and allow these glitches to be exploited. You should take this into account when deciding how deeply to simulate matter to make the simulation plausible.

For example, up until the 1800s you could coarse-grain atoms and molecules and fool everyone about the composition of stuff. The advances in chemistry and physics and the widespread adoption of inventions relying on atomic theory made it progressively harder to fool the scientists among the simulated folks, so to get to the early 1900s, your simulation needs grounding in 19th-century physics; otherwise people in your simulation will be exposed to a lot of miracles.

In the 1900s it's quantum mechanics, the Standard Model, and Solar System exploration (also relativity, but I don't know about the complexity of simulating GR). I think you could still fool early experimenters with double-slit experiments, convincingly simulate the effects of atomic blasts using classical computers, and maybe even fake the Moon landings.

But there are two near-future simulated events that will force you to buy more computational power. The first is Solar System exploration. This is less of a concern because, in the worst-case scenario, it's just an increase in N proportional to the number of simulated particles - or maybe you can do it more efficiently by simulating only the visited surface - so not a big deal.

The real trouble is universal quantum computers. These beasts are exponentially more powerful on some tasks (unless BPP = BQP, of course), and if they become ubiquitous, then to simulate the world reliably you have to use real quantum computers.
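To put a rough number on this (a standard back-of-the-envelope estimate, not something from the thread): a classical simulator must store 2^n complex amplitudes to track n entangled qubits exactly.

```python
# Memory needed to store the full state vector of n entangled qubits on a
# classical machine: 2**n complex amplitudes at 16 bytes each (complex128).
# A rough illustration of why ubiquitous quantum computers would force the
# simulators onto real quantum hardware.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 1e12:.3g} TB")
```

Around 50 qubits the exact state vector already exceeds any plausible classical storage, which is the sense in which widespread quantum computing would blow up the simulators' budget.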

Some other things to look out for:

  • Is there a more powerful fundamental complexity class at a deeper-than-quantum level?
  • Is there evidence in nature of computational problems being solved too fast to be reproduced on quantum computers (e.g., does any process yield solutions to NP-hard problems in polynomial time)?
  • Is there pressure against expanding the computational power required to simulate the universe?

Comment by Oleg S. on Don't Sell Your Soul · 2021-04-07T14:17:35.473Z · LW · GW

I think the offer needs to be modified to generate a more solid market.

First, instead of making a crazy N-point contract, the correct way to trade souls is through NFT auctions/markets. The owner gets the same symbolic rights as the owner of art sold through this mechanism, and there are none of those extra, unclear requirements on the seller's actions that would hinder a healthy market in financial derivatives.

Second, only desperate people sell their whole souls. Obviously, you should trade shares of souls.

So, how much do you think it would cost to develop a platform where one can register, put a share of one's soul up for auction, or build a solid portfolio of souls? What do you think the market size would be?

Comment by Oleg S. on Core Pathways of Aging · 2021-03-28T15:27:12.278Z · LW · GW

If I follow the logic correctly, the root cause of aging is that stem cells can irreversibly and invisibly accumulate active transposons, which are then passed on to derived cells, which then become senescent much faster. Also, for some reason this process is suppressed in gonads. So, I see these alternatives:

  1. Transposon activation is essentially blocked in gonads, or
  2. There is a barrier that prevents embryos with an above-normal number of active transposons from developing, or
  3. Children born to parents of old age will age faster, or
  4. Active transposon accumulation is not a root cause of aging.

Comment by Oleg S. on Moloch's Toolbox (2/2) · 2017-11-08T17:40:40.143Z · LW · GW

  1. Omegaven® is manufactured by the German pharmaceutical company Fresenius Kabi. For some reason, the company decided to stay away from the US market, and this raised questions when announced back in 2006. Until the patents held by FK expire, no one in the USA can sell Omegaven without a license from FK.
  2. A brief search in the Clinical Trials registry gives 14 open clinical studies of Omegaven as parenteral nutrition in the USA. I hope at least some of them don't just pursue the scientific goal of replicating earlier results, but are compassionate attempts to provide access to Omegaven via the FDA's Expanded Access Use program.
  3. Several hundred saved children is sadly indeed too few for pharma to seriously care. The cost of a clinical trial required for regulatory approval is ~$100M, plus about $100M is required to set up manufacturing, sales, etc. With a generous $100,000 per course of treatment (approximately the cost per life saved for pediatric anti-cancer drugs) and 200 patients/year, a company can generate $20M in revenue per year, so it has to wait 10 years just to cover the losses. And that's without taking into account the 30% IRR expected by VCs, possible competitors undermining market share, and so on.
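The arithmetic in point 3 can be sketched directly (all figures are the rough assumptions stated above, not real market data):

```python
# Back-of-the-envelope economics from point 3 above. All figures are the
# comment's rough assumptions, not real market data.
upfront_cost = 200e6        # ~$100M trial + ~$100M manufacturing/sales setup
price_per_course = 100_000  # $ per course of treatment
patients_per_year = 200

annual_revenue = price_per_course * patients_per_year      # $20M/year
years_to_break_even = upfront_cost / annual_revenue        # undiscounted

# At a VC-style 30% discount rate, even a perpetual $20M/year stream is
# worth only annual_revenue / 0.30 -- well below the upfront cost.
npv_perpetuity = annual_revenue / 0.30

print(f"break-even: {years_to_break_even:.0f} years, "
      f"NPV at 30%: ${npv_perpetuity / 1e6:.0f}M vs ${upfront_cost / 1e6:.0f}M upfront")
```

The discounted view makes the point even starker than the 10-year break-even: at VC hurdle rates the project is underwater from day one.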

Comment by Oleg S. on An Equilibrium of No Free Energy · 2017-11-01T22:46:15.743Z · LW · GW

Here is another example of inadequacy / inefficiency in the pharmaceutical market.

Cancer X is very aggressive: even when it is diagnosed at a very early stage and surgically removed, the recurrence rate is around 70% within 5 years. When the cancer returns, or when a patient presents at an advanced stage, the mean survival time is only 6 months.

The pharmaceutical company Y has recently discovered a new anticancer drug. According to state-of-the-art preclinical experiments, the drug inhibits the spread of cancer and kills cancer cells very effectively. Top scientists at company Y expect that, when applied in the adjuvant setting for 4 months after the surgical operation, the drug would reduce the cancer recurrence rate to 30%. Even when the drug is given to patients with advanced-stage cancer, it is expected to double the patients' survival time.

Driven by the desire to bring the drug to market as early as possible, executives at company Y initiate the fastest clinical trial. A study in the adjuvant setting (4 months after the operation) would require several years to complete in order to show that the drug has an advantage over the standard of care. A study in advanced-stage cancer requires much less time, so there is some benefit in getting to market for advanced cancer and extending survival from 6 to 12 months.

However, once the drug is on the market against advanced disease, a clinical trial and eventual approval of the drug as an adjuvant would undermine total sales, because of the two-fold reduction in the number of relapsed patients who would otherwise take the drug every single day until the end of their life.

This is an efficient (the drug company gets the most profit per dollar invested) but inadequate (the drug company could save far more patients by going into adjuvant therapy) market situation. I would say the root of the inadequacy is a conflict over what the goal of a pharmaceutical company is. I would expect a pharmaceutical company to sell drugs rather than mortgage derivatives, but as a company, its main objective is the maximization of profits for investors. So probably there should be a composite measure of QALYs and profits used to evaluate the adequacy and efficiency of the market.
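To make the incentive concrete, here is a toy model of "patient-months of drug sold" per 100 early-stage patients. The rates come from the scenario above, but the revenue proxy and the comparison itself are my own illustration:

```python
# Toy revenue comparison for the scenario above, per 100 early-stage patients.
# The rates are the comment's stylized figures; "patient-months of drug sold"
# is my own proxy for revenue.
relapse_untreated = 0.70   # recurrence rate without adjuvant therapy
relapse_adjuvant = 0.30    # recurrence rate after 4 months of adjuvant drug
survival_advanced = 12     # months on the drug once the cancer is advanced

# Strategy A: sell only for advanced disease.
rev_advanced_only = 100 * relapse_untreated * survival_advanced

# Strategy B: also sell as a 4-month adjuvant, which halves the relapses
# that later feed the advanced-disease market.
rev_with_adjuvant = 100 * 4 + 100 * relapse_adjuvant * survival_advanced

print(f"advanced only: {rev_advanced_only:.0f} patient-months, "
      f"with adjuvant: {rev_with_adjuvant:.0f} patient-months")
```

Under these assumptions the adjuvant approval saves 40 of every 100 patients from relapse yet sells fewer patient-months of drug, which is exactly the efficient-but-inadequate incentive described above.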