## Comments

**Jalex Stark (jalex-stark-1)** on Great minds might not think alike · 2020-12-29T11:39:04.978Z · LW · GW

You talk mostly about community-level information gain. I'd like to add that the act of translation is a good way for an individual to generate new insights. In theorem-proving academia, there's a lot of juice to get out of shuttling information back and forth between people in math, physics, CS, and econ departments.

**Jalex Stark (jalex-stark-1)** on Brainstorming positive visions of AI · 2020-10-08T21:31:37.399Z · LW · GW

Your first sentence came off as quite patronizing, so I wasn't able to do a good-faith reading of the rest of your post.

**Jalex Stark (jalex-stark-1)** on On Option Paralysis - The Thing You Actually Do · 2020-10-03T22:43:09.239Z · LW · GW

Once you've got a method of exercise you actually do, then you should apply some optimization pressure to making it more fun and safe and rewarding.

**Jalex Stark (jalex-stark-1)** on Doing discourse better: Stuff I wish I knew · 2020-09-29T20:05:55.274Z · LW · GW

Kialo is some kind of attempt to experiment with the forum dimension stuff.

(EDIT: I don't know how to make external links in the LW dialect of markdown.)

**Jalex Stark (jalex-stark-1)** on Why GPT wants to mesa-optimize & how we might change this · 2020-09-23T02:17:20.548Z · LW · GW

See GPT-f for combining a transformer model (with pre-trained language weights?) with AlphaZero-style training to learn to prove theorems.

**Jalex Stark (jalex-stark-1)** on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-08-31T16:05:50.254Z · LW · GW

Surviving COVID might cost a lot of QALYs from permanent lung and brain damage. It might also cost a lot of future expected earnings for the same reason.

**Jalex Stark (jalex-stark-1)** on Meaningful Rest · 2020-08-31T06:18:24.870Z · LW · GW

I am likely to remember this string for a while: "at those times I can’t do anything *but* my default policy".

**Jalex Stark (jalex-stark-1)** on Forecasting AI Progress: A Research Agenda · 2020-08-11T12:17:23.770Z · LW · GW

People that find this arxiv post interesting may also want to read/listen to this interview by Arden Koehler with Danny Hernandez, who works on the Foresight team at OpenAI.

https://80000hours.org/podcast/episodes/danny-hernandez-forecasting-ai-progress/

**Jalex Stark (jalex-stark-1)** on How much is known about the "inference rules" of logical induction? · 2020-08-09T00:46:54.722Z · LW · GW

I think short timescale behavior of logical induction is model-dependent. I'm not sure whether your first conjecture is true, and I'd guess that it's false in some models.

I find myself a little confused. Isn't it the case that the probability of a statement converges to 1 if and only if it is provable?

**Jalex Stark (jalex-stark-1)** on How much is known about the "inference rules" of logical induction? · 2020-08-09T00:42:22.922Z · LW · GW

Eigel is asking a specific (purely mathematical!) question about "logical induction", which is defined in the paper they linked to. Your comment seems to miss the question.

**Jalex Stark (jalex-stark-1)** on Titan (the Wealthfront of active stock picking) - What's the catch? · 2020-08-06T02:30:15.029Z · LW · GW

I don't understand why there needs to be a catch. It seems like they're just running a hedge fund where they tell everybody which things they're buying. It's an unusual thing to do, because you could probably get better returns by being more secretive (otherwise why are most hedge funds so secretive?).

You can become good at hedge-funding without having money as a primary motivation. If you did, you might try to start an open-access hedge fund just because it's a neat idea.

**Jalex Stark (jalex-stark-1)** on How uniform is the neocortex? · 2020-05-06T03:13:56.628Z · LW · GW

> Some are really obvious: the neocortex doesn't use backprop!

That doesn't seem obvious to me. Could you point to some evidence, or flesh out your model for how data influences neural connections?

**Jalex Stark (jalex-stark-1)** on Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature · 2020-04-22T00:53:40.619Z · LW · GW

Do you mean biorxiv?

**Jalex Stark (jalex-stark-1)** on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-04-16T04:19:24.954Z · LW · GW

I think you should read this advice thread: https://www.lesswrong.com/posts/B9qzPZDcPwnX6uEpe/coronavirus-justified-practical-advice-summary

**Jalex Stark (jalex-stark-1)** on How will this recession differ from the last two? · 2020-04-06T20:58:05.029Z · LW · GW

My intuition is that GDP is, on the margin, like 70% correlated with human flourishing.

Even in your example, it's not clear that GDP is pointing in the wrong direction.

**Jalex Stark (jalex-stark-1)** on Would Covid19 patients benefit from blood transfusions from people who have recovered? · 2020-03-30T14:58:44.928Z · LW · GW

You only need a plasma transfusion, so it's not sensitive to blood type. I think hospitals doing this intervention have been able to treat (dramatically speed up recovery and reduce mortality for) three patients from one donation session.

Presumably one could donate once every few days with an optimal diet + rest regime? I haven't heard of anyone moving to do this full time.

**Jalex Stark (jalex-stark-1)** on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-19T18:39:25.776Z · LW · GW

Typo thread: the sentence "In addition, UV degradation of surfaces might result from chronic UV exposure." has a hyperlink with an extra character at the end.

**Jalex Stark (jalex-stark-1)** on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-19T18:32:37.933Z · LW · GW

I think understanding all three of the following papers (which are predecessors to the 2018 Nature paper) is important to guessing the efficacy and safety of the intervention. I'll add to this comment as I gain more understanding.

- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3797730/
- https://journals.plos.org/plosone/article/file?type=printable&id=10.1371/journal.pone.0138418
- https://www.rrjournal.org/doi/pdf/10.1667/RR0010CC.1

EDIT: I found thinking about this very difficult, so I made progress very slowly and have mostly stopped. Here's my current story.

The biophysical reason to believe that 207nm light preferentially harms viruses and bacteria over mammalian cells:

Mammalian cells are big, like >50 wavelengths long, while bacteria and viruses are <5 wavelengths long. 207nm light activates certain peptide bonds that are found in ~ all proteins, so energy transfers from light to organic material at a large constant rate per wavelength; let's call it lambda. (I feel pretty confident about the argument if lambda > 10% per wavelength. I think lambda can be estimated from the papers, but I just haven't tried hard enough.) When you shine light at a virus, something like a lambda * (1 - lambda)^3 fraction of the energy is deposited in the squishy RNA bits. But when you shine it at a mammalian cell, only about a lambda * (1 - lambda)^30 fraction of the energy reaches the squishy DNA bits.
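As a sanity check on the arithmetic, here is a toy calculation of that ratio (lambda = 10% per wavelength is an assumed value, not one estimated from the papers):

```python
# Toy model of the depth argument. Assume a constant fraction `lam` of the
# remaining light is absorbed per wavelength of protein traversed; then the
# fraction of incoming energy deposited at depth n wavelengths is
# lam * (1 - lam) ** n.
def fraction_at_depth(lam, n):
    return lam * (1 - lam) ** n

lam = 0.10  # assumed absorption rate per wavelength (not a measured value)

virus = fraction_at_depth(lam, 3)    # genome sits ~3 wavelengths deep
mammal = fraction_at_depth(lam, 30)  # nucleus sits ~30 wavelengths deep

print(virus / mammal)  # ~17x more energy reaches the viral genome
```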

Thing that's empirically shown in these papers:

There is a dosage at which 207nm is clearly quite toxic (here toxicity means "causes cell death / virus inactivation") to bacteria/viruses but is clearly not toxic to mammalian cells.

Thing that's not empirically shown in any paper that I know of:

There is a power level of 207nm such that constant exposure keeps a space disinfected but which does not cause cancer in humans.

An incomplete argument for the above claim:

1. Cell-killing and cancer-causing from radiation are both caused by excess energy deposited into the nucleus of mammalian cells, in such a way that total risk scales linearly with the time-integrated power of radiation exposure.
2. The toxicity study yields an upper bound on how much radiation power makes it to the nucleus given a certain power of exposure to the cell.
3. The cancer literature yields "safe exposure" levels that can be cast in terms of "radiation power that reaches the nucleus".
4. Combining 2 and 3 gives a safe exposure power level for 207nm lamps, and I conjecture (or maybe just hope?) that this level is greater than the level used in the toxicity study. That level of power is already shown effective at killing viruses and bacteria, and does so in a way that depends only on them being small and made of replicating nucleic acid.

**Jalex Stark (jalex-stark-1)** on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-19T16:28:11.338Z · LW · GW

Asymptomatic carriers seem to be rare, though not completely nonexistent.

I think one should expect asymptomatic people to be less infectious than pre-symptomatic people, so it may not be important from a "changing current responses" perspective to understand the prevalence of asymptomatic-ness. That being said, here is a paper that analyzes the Diamond Princess data and claims the asymptomatic rate to be >15%. https://eurosurveillance.org/content/10.2807/1560-7917.ES.2020.25.10.2000180

I haven't understood their methodology well enough to have a strong opinion about its validity. I think they're using time series data along with some priors about the shape of the distribution of the random variable "number of days between testing positive and showing symptoms, given that the patient will at some point show symptoms" to conclude that a decent fraction of the Diamond Princess people won't show symptoms.

The claim is made in paragraph 5 of the discussion section.

**Jalex Stark (jalex-stark-1)** on What cognitive biases feel like from the inside · 2020-03-02T17:25:54.178Z · LW · GW

These infographics feature snippets of difficult conversations between pairs of people about the morality of abortion. https://whatsmyprolifeline.com/

**Jalex Stark (jalex-stark-1)** on Refactoring EMH – Thoughts following the latest market crash · 2020-03-01T01:41:57.836Z · LW · GW

I feel like this version of EMH is already a consensus replacement (of the EMH found in textbooks) among the finance people that I know, but it is possible that I am speaking from within a very small bubble. Anyway, I think your formulation is well-stated and useful.

**Jalex Stark (jalex-stark-1)** on We run the Center for Applied Rationality, AMA · 2019-12-28T05:46:03.680Z · LW · GW

Yes, I agree that the space of things to be uncertain about is multidimensional. We project the uncertainty onto a one-dimensional space parameterized by "probability of <event> by <time>".

It would be surprising for a sophisticated person to show a market of 49 @ 51 on this event. (Unpacking jargon, showing this market means being willing to buy for 49 or sell at 51 a contract which is worth 100 if the hypothesis is true and 0 if it is false)

(It's somewhat similar to saying that your 2-sigma confidence interval around the "true probability" of the event is 49 to 51. The market language can be interpreted with just decision theory, while the confidence-interval idea also requires some notion of statistics.)
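To make the payoff arithmetic concrete, here is a minimal sketch (the prices and the subjective probability are illustrative):

```python
def expected_pnl_buy(price, p):
    """Expected profit from buying a binary contract at `price`,
    given subjective probability p that it settles at 100."""
    return 100 * p - price

def expected_pnl_sell(price, p):
    """Expected profit from selling the same contract at `price`."""
    return price - 100 * p

# A trader showing 49 @ 51 expects to profit on either side of the market
# whenever their subjective probability is strictly between 0.49 and 0.51.
p = 0.50
print(expected_pnl_buy(49, p))   # buying at the bid: EV = +1
print(expected_pnl_sell(51, p))  # selling at the ask: EV = +1
```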

My interpretation of the second-hand evidence about Shane Legg's opinion suggests that Shane would quote a market like 40 @ 60. (The only thing I know about Shane is that they apparently summarized their belief as 50% a number of years ago and haven't publicly changed their opinion since.)

**Jalex Stark (jalex-stark-1)** on We run the Center for Applied Rationality, AMA · 2019-12-22T14:47:53.441Z · LW · GW

Why is that surprising? Doesn't it just mean that the pace of development in the last decade has been approximately equal to the average over Shane_{2011}'s distribution of development speeds?

**Jalex Stark (jalex-stark-1)** on Skill and leverage · 2019-11-06T20:01:58.153Z · LW · GW

You're overloading "want" here. If all of your sub-agents want to load a dishwasher, then surely you will load the dishwasher. If some of your sub-agents want to load a dishwasher, but need to get other sub-agents on board in order to do so, then you might not. It depends on how good your dishwasher agent is at recruiting the other agents. But this recruitment problem is not a subproblem of every other task you might care about.

**Jalex Stark (jalex-stark-1)** on Dony's Shortform Feed · 2019-08-10T15:38:42.886Z · LW · GW

We live in a world with large incentives to teach yourself to do something like this, so either it is too hard for a single person to come up with on their own or it is possible to find people that have done it.

Some military studies might fit what you're looking for.

**Jalex Stark (jalex-stark-1)** on AI Alignment Open Thread August 2019 · 2019-08-08T22:09:17.841Z · LW · GW

I think Bostrom uses the term "hardware overhang" in *Superintelligence* to point to a cluster of discontinuous takeoff scenarios including this one.

**Jalex Stark (jalex-stark-1)** on Keep Your Beliefs Cruxy · 2019-08-03T04:24:11.863Z · LW · GW

The restrictions are something like "real humans who generally want to be effective should want to use the method".

**Jalex Stark (jalex-stark-1)** on Is the sum individual informativeness of two independent variables no more than their joint informativeness? · 2019-07-08T12:38:13.242Z · LW · GW

Just for amusement, I think this theorem can fail when s, x, y represent subsystems of an entangled quantum state. (The most natural generalization of mutual information to this domain is sometimes negative.)
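A standard concrete instance of that negativity (this is the conditional von Neumann entropy, whose classical counterpart is always nonnegative): for a maximally entangled Bell pair,

```latex
S(AB) = 0, \qquad S(A) = S(B) = 1, \qquad
S(A \mid B) = S(AB) - S(B) = -1 .
```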

**Jalex Stark (jalex-stark-1)** on Open question: are minimal circuits daemon-free? · 2019-05-02T19:49:00.860Z · LW · GW

Rice's theorem applies if you replace "circuit" with "Turing machine". The circuit version can be resolved with a finite brute force search.

**Jalex Stark (jalex-stark-1)** on Value Learning is only Asymptotically Safe · 2019-04-10T02:32:25.838Z · LW · GW

"In the presence of cosmic rays, then, this agent is not safe for its entire lifetime with probability 1."

I think some readers may disagree about whether this sentence means "with probability 1, the agent is not safe" or "with probability strictly greater than 0, the agent is not safe". In particular, I think Hibron's comment is predicated on the former interpretation, and I think you meant the latter.

**Jalex Stark (jalex-stark-1)** on [Link] Did AlphaStar just click faster? · 2019-01-29T00:19:09.561Z · LW · GW

I think the most interesting part of the piece is the bit at the end where the author analyzes a misleading graph. Now that I understand the graph, it seems like strong evidence of malicious misrepresentation or willful ignorance on the part of some subset (possibly quite small) of the AlphaStar team.

I think the article might benefit from comparisons to OpenAI's Dota demonstration. I don't remember anyone complaining about superhuman micro in that case. Did that team do something to combat superhuman-APM, or is Starcraft just more vulnerable than Dota to superhuman-APM tactics?

**Jalex Stark (jalex-stark-1)** on Meditations on Momentum · 2019-01-09T05:48:01.302Z · LW · GW

That's not true? If there are five authors selling 0, 0, 1, 2, and 3 books each, then the mode is 0 and the median is 1.
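The worked example checks out with the standard library:

```python
import statistics

sales = [0, 0, 1, 2, 3]  # the five authors' sales from the example above
print(statistics.mode(sales))    # 0
print(statistics.median(sales))  # 1
```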

**Jalex Stark (jalex-stark-1)** on Meditations on Momentum · 2019-01-07T20:31:36.249Z · LW · GW

To state it more plainly, the claim "the median number of sales is zero" is equivalent to the claim "more than half of self-published ebooks sell zero copies".

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-18T01:12:54.700Z · LW · GW

Adding to Vladimir_Nesov's comment:

In general, every suborder of a well-order is well-ordered. In a word, the property of "being a well-order" is *hereditary* (compare: every subset of a finite set is finite).

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-18T01:10:56.017Z · LW · GW

Yes, there are good ways to index sets other than well orders. A net where the index set is the real line and the function is continuous is usually called a *path*, and these are ubiquitous e.g. in the foundations of algebraic topology.

I guess you could say that I think well-orders are important to the picture at hand "because of transfinite induction" but a simpler way to state the same objection is that "tomorrow" = "the unique least element of the set of days not yet visited". If tomorrow always exists / is uniquely defined, then we've got a well-order. So something about the story has to change if we're not fitting into the ordinal box.

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-17T05:48:19.268Z · LW · GW

Yeah, I'd agree with the "boundary doesn't exist" interpretation.

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-17T05:39:56.333Z · LW · GW

An ordered set is well-ordered iff every nonempty subset has a least element. If your set is closed under subtraction, you get infinite descending sequences such as $\omega > \omega - 1 > \omega - 2 > \dots$. If your set is closed under division, you get infinite descending sequences that are furthermore bounded below, such as $\omega > \omega/2 > \omega/4 > \dots$. It should be clear that the two linear orders I described are not well-orders.

A small order theory fact that is not totally on-topic but may help you gather intuition:

Every countable ordinal embeds into the reals but no uncountable ordinal does.

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-15T20:52:31.130Z · LW · GW

A net is just a function $f\colon I \to X$ where $I$ is an ordered index set. For limits in general topological spaces, $I$ might be pretty nasty, but in your case, you would want $I$ to be some totally-ordered subset of the surreals. For example, in the trump paradox, you probably want $I$ to:

- include $n$ and $\omega - n$ for each finite $n$ and some infinite $\omega$
- have a least element (the first day)

It sounds like you also want some coherent notion of "tomorrow" at each day, so that you can get through all the days by passing from today to tomorrow infinitely many times. But this is *equivalent* to having your index set be well-ordered, which is incompatible with the property "closed under division and subtraction by finite integers". So you should clarify which of these properties you want.

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-14T19:51:22.317Z · LW · GW

> the ordinary notion of sequence

I assume here you mean something like "a sequence of elements from a set $X$ is a function $f\colon \alpha \to X$ where $\alpha$ is an ordinal". Do you know about nets? Nets are a notion of sequence preferred by people studying point-set topology.

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-14T19:45:18.725Z · LW · GW

My best guess about how to clear up confusion about "what the boundary looks like" is via mathematics rather than philosophy. For example, have you understood the properties of the long line?

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-14T05:14:57.958Z · LW · GW

I think gjm's response is approximately the clarification I would have made about my question if I had spent 30 minutes thinking about it.

**Jalex Stark (jalex-stark-1)** on An Extensive Categorisation of Infinite Paradoxes · 2018-12-13T20:33:50.449Z · LW · GW

In "Trumped", it seems that if the number of days is $\omega$, the first infinite ordinal, then on every subsequent day, the remaining number of days will be $\omega - n$ for some natural $n$. This is never equal to $0$.

Put differently, just because we count up to $\omega$ doesn't mean we pass through $\omega$. Of course, the total order on days has $m < \omega - n$ for each finite $m$ and $n$, but this isn't a well-order anymore, so I'm not sure what you mean when you say there's a sequence of decisions. Do you know what you mean?

**Jalex Stark (jalex-stark-1)** on Summary: Surreal Decisions · 2018-12-02T05:22:09.318Z · LW · GW

In order to apply surreal arithmetic to the expected utility of world-states, it seems we'll need to fix some canonical bijection between states of the world and ordinals / surreals. In the most general case this will require some form of the Axiom of Choice, but if we stick to a nice constructive universe (say the state space is computable) then things will be better. Is this the gist of what you're working on?

**Jalex Stark (jalex-stark-1)** on Ontological uncertainty and diversifying our quantum portfolio · 2018-08-09T03:32:48.099Z · LW · GW

I'm not sure whether I've understood the point you're trying to make, in part because I don't know the answer to the following question:

Does your point change if you replace "quantumness" with ordinary randomness?

**Jalex Stark (jalex-stark-1)** on Are ethical asymmetries from property rights? · 2018-07-07T12:17:32.333Z · LW · GW

I was confused about the title until I realized it means the same thing as "Do ethical asymmetries come from property rights?"