Analysis of World Records in Speedrunning [LINKPOST] 2021-08-04T15:26:35.463Z
Work on Bayesian fitting of AI trends of performance? 2021-07-19T18:45:19.148Z
Trying to approximate Statistical Models as Scoring Tables 2021-06-29T17:20:11.050Z
Parameter counts in Machine Learning 2021-06-19T16:04:34.733Z
How to Write Science Fiction and Fantasy - A Short Summary 2021-05-29T11:47:30.613Z
Parameter count of ML systems through time? 2021-04-19T12:54:26.504Z
Survey on cortical uniformity - an expert amplification exercise 2021-02-23T22:13:24.157Z
Critiques of the Agent Foundations agenda? 2020-11-24T16:11:22.495Z
Spend twice as much effort every time you attempt to solve a problem 2020-11-15T18:37:24.372Z
Aggregating forecasts 2020-07-23T18:04:37.477Z
What confidence interval should one report? 2020-04-20T10:31:54.107Z
On characterizing heavy-tailedness 2020-02-16T00:14:06.197Z
Implications of Quantum Computing for Artificial Intelligence Alignment Research 2019-08-22T10:33:27.502Z
Map of (old) MIRI's Research Agendas 2019-06-07T07:22:42.002Z
Standing on a pile of corpses 2018-12-21T10:36:50.454Z
EA Tourism: London, Blackpool and Prague 2018-08-07T10:41:06.900Z
Learning strategies and the Pokemon league parable 2018-08-07T09:37:27.689Z
EA Spain Community Meeting 2018-07-10T07:24:59.310Z
Estimating the consequences of device detection tech 2018-07-08T18:25:15.277Z
Advocating for factual advocacy 2018-05-06T08:47:46.599Z
The most important step 2018-03-24T12:34:01.643Z


Comment by Jsevillamol on How much compute was used to train DeepMind's generally capable agents? · 2021-07-31T01:26:29.520Z · LW · GW

Do you mind sharing your guesstimate of the number of parameters?

Also, do you perchance have guesstimates of the number of parameters / compute of other systems?

Comment by Jsevillamol on Incorrect hypotheses point to correct observations · 2021-07-30T22:26:40.268Z · LW · GW

Ah, I just realized this is the norm with curated posts. FWIW I feel a bit awkward having the curation comments displayed so prominently, since it alters the author's intended reading experience in a way I find a bit weird / offputting.

If it were up to me, I would remove the curator's words at the top of a post in favor of comments like this one, where the reasons for curation are explained but it's not the first thing that readers see when reading the post.

Comment by Jsevillamol on Incorrect hypotheses point to correct observations · 2021-07-30T22:20:53.964Z · LW · GW

Meta: it seems like you have accidentally added this comment at the beginning of the post as well, in addition to posting it as a comment?

Comment by Jsevillamol on "AI and Compute" trend isn't predictive of what is happening · 2021-07-21T10:25:25.627Z · LW · GW

What is the GShard dense transformer you are referring to in this post?

Comment by Jsevillamol on How much chess engine progress is about adapting to bigger computers? · 2021-07-08T22:39:32.195Z · LW · GW

Very tangential to the discussion, so feel free to ignore: given that you have put some thought into prize structures before, I am curious about the reasoning behind awarding a different prize for something done in the past versus something done in the future.

Comment by Jsevillamol on Spend twice as much effort every time you attempt to solve a problem · 2021-07-05T10:03:31.068Z · LW · GW

Nicely done!

I think this improper prior approach makes sense.

I am a bit confused by the step where you go from an improper prior to saying that the "expected" effort would land in the middle of these numbers. This is because the continuous part of the total effort spent vs doubling factor curve is concave, so I would expect the "expected" effort to be weighted more towards the lower bound.

I tried coding up a simple setup where I average the graphs across a space of difficulties to approximate the "improper prior", but it is very hard to draw a conclusion from it. I think the graph suggests that the asymptotic minimum is somewhere above 2.5, but I am not sure at all.

Doubling factor (x-axis) vs expected total effort spent (y-axis), averaged across 1e5 difficulty levels uniformly spaced between d=2 and d=1e6

Also I guess it is unclear to me whether a flat uninformative prior is best, vs an uninformative prior over logspace of difficulties. 

What do you think about both of these things?

Code for the graph:

import math

import numpy as np
import matplotlib.pyplot as plt

# Total effort spent to solve a problem of difficulty d with doubling factor b
effort_spent = lambda d, b: (b ** (np.ceil(math.log(d, b)) + 1) - 1) / (b - 1)

ds = np.linspace(2, 1_000_000, 100_000)  # difficulty levels
bs = np.linspace(1.1, 5, 1000)           # doubling factors

# Average the effort curve over all difficulty levels
hist = np.zeros_like(bs)
for d in ds:
    hist += np.vectorize(lambda b: effort_spent(d, b))(bs) / len(ds)

plt.plot(bs, hist)
Comment by Jsevillamol on The Generalized Product Rule · 2021-07-02T11:52:37.015Z · LW · GW

I really like this article.

It has helped me appreciate how product rules (or additivity, if we apply a log transform) arise in many contexts. One thing I hadn't appreciated when studying Cox's theorem is that you do not need to respect "commutativity" to get a product rule (though obviously this restricts how you can group information). This was made very clear to me in example 3.

One thing that confused me on first reading was that I misunderstood you as referring to the third requirement as associativity of the operation. Rereading, this is not the case; you just say that the third requirement implies that F is associative. But I wish you had spelled out the implication, i.e. stated it explicitly.

Comment by Jsevillamol on Parameter counts in Machine Learning · 2021-06-28T09:54:14.517Z · LW · GW

Good suggestion! Understanding the trend of record-setting would indeed be interesting, so that we avoid the pesky influence of systems which are below the trend, like CURL in the game domain.

The problem with the naive setup of just regressing on record-setters is that it is quite sensitive to noise - one early outlier in the trend can completely alter the result.

I explore a similar problem in my paper Forecasting timelines of quantum computing, where we try to extrapolate progress on some key metrics like qubit count and gate error rate. The method we use in the paper to address this issue is to bootstrap the input and predict a range of possible growth rates - that way outliers do not completely dominate the result.

I will probably not do it right now for this dataset, though I'd be interested in having other people try that if they are so inclined!
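To illustrate the bootstrapping idea, here is a minimal sketch with made-up data (not the dataset from the post, and not the code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy record-setting data: year and log10 of some metric (made-up values)
years = np.array([2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020])
log_metric = np.array([6.0, 6.4, 7.1, 7.3, 8.0, 8.2, 9.1, 9.3, 10.0])

# Bootstrap: resample the points with replacement and refit a linear trend
# in log space, collecting the distribution of growth rates (slopes)
slopes = []
for _ in range(1000):
    idx = rng.integers(0, len(years), size=len(years))
    slope, intercept = np.polyfit(years[idx], log_metric[idx], deg=1)
    slopes.append(slope)

low, high = np.percentile(slopes, [5, 95])
print(f"Growth rate: {low:.2f} to {high:.2f} orders of magnitude per year")
```

The 5th-95th percentile range of the bootstrapped slopes gives a rough growth-rate interval that no single outlier can dominate.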

Comment by Jsevillamol on Parameter counts in Machine Learning · 2021-06-28T09:45:18.679Z · LW · GW

This is now fixed; see the updated graphs. We have also updated the eyeball estimates accordingly.

Comment by Jsevillamol on The Point of Trade · 2021-06-26T18:42:51.501Z · LW · GW

Trying to think a bit harder about this - maybe companies are sort of like this? To manage my online shop I need someone to maintain the website, someone to handle marketing, etc. I need many people working for me to make it work, and I need all of them at once. To make it more obvious, let's suppose that I pay my workers in direct proportion to the amount of sales they manage.

As I painted it, this is not about amortizing a fixed cost. And I cannot subdivide the task - if I tell my team I expect to make only 10 sales and pay accordingly, they are going to tell me to go eff myself (though maybe this breaks down in the magical world where there are no task-switching costs).

Another try: maybe a fairness constraint can force a minimum. The government has given me the okay to sell my new cryonics procedure, but only if I can make enough for everyone.

Comment by Jsevillamol on The Point of Trade · 2021-06-26T17:18:36.106Z · LW · GW

You are quite right that 1 and 2 are related, but the way I was thinking about them I didn't have them as equivalent.

1 is about fixed costs: each additional sheet of paper I produce amortizes part of the initial, fixed cost.

2 is about a threshold of operation. Even if there are no fixed costs, it would happen in a world where I can only produce in large bulk and not individual units.

Then again, I am struggling to think of a real-life example of 2, so maybe it is not something that happens in our universe.

Comment by Jsevillamol on The Point of Trade · 2021-06-22T18:28:49.494Z · LW · GW

I'm confused. Why would diminishing marginal returns incentivize trade? If the first unit of everything were very cheap, then I would rather produce it myself than produce extra of one thing (which costs more) and then trade.

Comment by Jsevillamol on The Point of Trade · 2021-06-22T18:26:12.660Z · LW · GW

Other magical powers of trade:

  1. Economies of scale. It is basically as easy for me to produce 20 sheets of paper as to produce 1; after paying the setup costs, the marginal costs are much smaller in comparison. So all in all I would rather specialize in paper-making, have somebody else specialize in pencil-making, and then trade.
  2. Investment. Often I need A LOT of capital to get something started, more than I could reasonably accumulate over a lifetime. So I would rather trade the starting capital for IOUs I will get from profit.
  3. Insurance. I may have a particularly bad harvest this year and a very good one the next one, while my neighbour might have the opposite problem. All in all I would rather we pool our harvests each year, so that we can have food both years. So we are "trading" part of our harvest for insurance.
Comment by Jsevillamol on Parameter counts in Machine Learning · 2021-06-21T14:40:12.807Z · LW · GW

Thank you! The shapes mean the same as the colors (i.e. domain) - they were meant to make the graph clearer. Ideally both shape and color would be reflected in the legend. But whenever I tried adding shapes to the legend, a new legend was created instead, which was more confusing.

If somebody reading this knows how to make the code produce a correct legend I'd be very keen on hearing it!

EDIT: Now fixed

Comment by Jsevillamol on Parameter counts in Machine Learning · 2021-06-21T14:33:04.360Z · LW · GW

Thank you! I think you are right - by default the Altair library (which we used to plot the regressions) does an OLS fit of an exponential instead of fitting a linear model over the log transform. We'll look into this and report back.
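To illustrate the difference with made-up data (a sketch; Altair's internals may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy exponential trend with multiplicative noise (hypothetical data)
x = np.linspace(0, 10, 50)
y = 10 ** (0.3 * x) * rng.lognormal(0, 0.5, size=len(x))

# Fit 1: linear regression over the log transform (what we intended)
slope_log, _ = np.polyfit(x, np.log10(y), deg=1)

# Fit 2: least squares on the raw values with an exponential model; this
# weighs the largest values much more heavily and gives a different answer
(a_fit, slope_raw), _ = curve_fit(lambda t, a, b: a * 10 ** (b * t), x, y, p0=(1, 0.3))

print(slope_log, slope_raw)
```

The two slopes generally disagree because least squares on the raw values is dominated by the largest data points, while the log-transform fit weighs all points equally.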

Comment by Jsevillamol on How to Write Science Fiction and Fantasy - A Short Summary · 2021-05-31T08:33:26.852Z · LW · GW

Thank you! Now fixed :)

Comment by Jsevillamol on Parameter count of ML systems through time? · 2021-04-19T15:22:29.883Z · LW · GW

Thank you for the feedback, I think what you say makes sense.

I'd be interested in seeing whether we can pin down exactly in what sense Switch parameters are "weaker". Is it because of the lower precision? Model sparsity (is Switch sparse in parameters, or just sparsely activated?)?

What typology of parameters do you think would make sense / be useful to include?

Comment by Jsevillamol on "New EA cause area: voting"; or, "what's wrong with this calculation?" · 2021-02-27T22:05:44.960Z · LW · GW

Can you explain to me why the probability of a swing is $1/\sqrt{N_{voters}}$? :)

Comment by Jsevillamol on Survey on cortical uniformity - an expert amplification exercise · 2021-02-24T09:27:10.213Z · LW · GW

re: "I'd expect experts to care more about the specific details than I would"

Good point. We tried to account for this by making it so that the experts do not have to agree or disagree directly with each sentence but instead choose the least bad of two extreme positions.

But in practice one of the experts bypassed the system by refusing to answer Q1 and Q2 and leaving an answer in the space for comments.

Comment by Jsevillamol on Survey on cortical uniformity - an expert amplification exercise · 2021-02-24T09:20:03.563Z · LW · GW

Street fighting math:

Let's model experts as independent draws of a binary random variable with a bias $P$. Our initial prior over their chance of choosing the pro-uniformity option (ie $P$) is uniform. Then if our sample is $A$ people who choose the pro-uniformity option and $B$ people who choose the anti-uniformity option we update our beliefs over $P$ to a $Beta(1+A,1+B)$, with the usual Laplace's rule calculation.  

To scale this up to e.g. an $n$-person sample, we compute the mean of $n$ independent draws of a $Bernoulli(P)$, where $P$ is drawn from the posterior Beta. By the central limit theorem, this is approximately a normal with mean $P$ and variance equal to the variance of the Bernoulli divided by $n$, i.e. $\frac{1}{n}P(1-P)$.

We can use this to compute the approximate probability that the majority of experts in the expanded sample will be pro-uniformity, by integrating the probability that this normal is greater than $1/2$ over the possible values of $P$.

So for example we have $A=1$, $B=3$ in Q1, so for a survey of $n=100$ participants we can approximate the chance of the majority selecting option $A$ as:

import numpy as np
import scipy.stats as stats

A = 1
B = 3
n = 100

# Posterior over P after observing A pro and B anti answers (Laplace's rule)
posterior = stats.beta(A + 1, B + 1)

def p_majority(p):
    # Normal approximation to the mean of n Bernoulli(p) draws
    survey_dist = stats.norm(loc=p, scale=np.sqrt(p * (1 - p) / n))
    return 1 - survey_dist.cdf(1 / 2)

# Integrate P(majority) against the posterior density over a grid on (0, 1)
print(np.mean([p_majority(p) * posterior.pdf(p)
               for p in np.linspace(0.0001, 0.9999, 10000)]))

which gives about $0.19$.

For Q2 we have $A=1$, $B=4$, so the probability of the majority selecting option $A$ is about $0.12$.

For Q3 we have $A=6$, $B=0$, so the probability of the majority selecting option $A$ is about $0.99$.

EDIT: rephrased the estimations so they match the probability one would enter in the Elicit questions 

Comment by Jsevillamol on Implications of Quantum Computing for Artificial Intelligence Alignment Research · 2021-02-23T11:59:01.039Z · LW · GW

re: importance of oversight

I do not think we really disagree on this point. I also believe that looking at the state of the computer is not as important as having an understanding of how the program is going to operate and how to shape its incentives. 

Maybe this could be better emphasized, but the way I think about this article is as showing that even the strongest case for looking at the intersection of quantum computing and AI alignment does not look very promising.


re: How quantum computing will affect ML

I basically agree that the most plausible way QC can affect AI alignment is by providing computational speedups - but I think this mostly changes the timelines rather than violating any specific assumptions in usual AI alignment research.

Relatedly, I am skeptical that we will see better-than-quadratic speedups (i.e. beyond Grover): to get better-than-quadratic speedups you need to overcome many challenges that right now it is not clear can be overcome outside of very contrived problem setups [REF].

In fact I think that the speedups will not even be quadratic because you "lose" the quadratic speedup when parallelizing quantum computing (in the sense that the speedup does not scale quadratically with the number of cores).

Comment by Jsevillamol on Suggestions of posts on the AF to review · 2021-02-18T12:55:46.559Z · LW · GW

Suggestion 1: Utility != reward by Vladimir Mikulik. This post attempts to distill the core ideas of mesa alignment. This kind of distillation increases the surface area of AI Alignment, which is one of the key bottlenecks of the area (that is, getting people familiarized with the field, motivated to work on it, and equipped with some open questions to work on). I would like an in-depth review because it might help us learn how to do it better!

Suggestion 2: my coauthor Pablo Moreno and I would be interested in feedback on our post about quantum computing and AI alignment. We do not think that the ideas of the paper are useful in the sense of getting us closer to AI alignment, but I think it is useful to have signposts explaining why avenues that might seem attractive to people coming into the field are not worth exploring, while introducing them to the field in a familiar way (in this case our audience is quantum computing experts). One thing that confuses me is that some people have approached me after publishing the post asking me why I think that quantum computing is useful for AI alignment, so I'd be interested in feedback on what went wrong in the communication process, given the deflationary nature of the article.

Comment by Jsevillamol on Making Vaccine · 2021-02-10T12:55:37.856Z · LW · GW

Amazing initiative John - you might give yourself a D but I am giving you an A+ no doubt.

Trying to decide if I should recommend this to my family.

In Spain, we had 18000 confirmed COVID cases in January 2021. I assume real cases are at least 20000. Some projections estimate that laypeople might not get vaccinated for another 10 months, so the potential benefit of a widespread DIY vaccine is avoiding 200k cases of COVID19 (optimistically assuming linear growth of cases).

Spain's population is 47 million, so the naive chance of COVID for an individual before vaccines are widely available is 2e4*10 / 5e7, i.e. about 1 in 250.

Let's say that the DIY vaccine has a 10% chance of working on a given individual. If we take the side effects of the vaccine to be as bad as catching COVID19 itself, then I want the chance of a serious side effect to be lower than 1 in 2500 for the DIY vaccine to be worth it.

Taking into account the risk of preparing it incorrectly plus general precaution, the chances of a serious side effect look to me more like 1 in 100 than 1 in 1000.

So I do not think, given my beliefs, that I should recommend it. Is this reasoning broadly correct? What is a good baseline for the chances of a side effect in a new peptide vaccine?
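Spelling out the arithmetic, with all inputs being the rough guesses from above:

```python
# Rough guesses from the comment above
monthly_cases = 2e4          # assumed real COVID cases per month in Spain
months_until_vaccine = 10    # months before laypeople get vaccinated
population = 5e7             # Spain's population, ~47 million, rounded up

# Naive chance of catching COVID before vaccines are widely available
p_covid = monthly_cases * months_until_vaccine / population  # 1 in 250

# If the DIY vaccine works with probability 10%, and a serious side effect is
# as bad as catching COVID, the break-even side effect rate is 10x smaller
p_vaccine_works = 0.1
break_even = p_covid * p_vaccine_works  # 1 in 2500

print(f"COVID risk: 1 in {1 / p_covid:.0f}; break-even: 1 in {1 / break_even:.0f}")
```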

Comment by Jsevillamol on How long does it take to become Gaussian? · 2020-12-10T02:26:46.337Z · LW · GW

This post is great! I love the visualizations. And I hadn't made the explicit connection between iterated convolution and CLT!

Comment by Jsevillamol on Spend twice as much effort every time you attempt to solve a problem · 2020-11-16T16:00:30.733Z · LW · GW

I don't think so.

What I am describing is a strategy to manage your efforts so as to spend as little as possible while still meeting your goals (when you do not know in advance how much effort will be needed to solve a given problem).

So presumably if this heuristic applies to the problems you want to solve, you spend less on each problem and thus you'll tackle more problems in total. 

Comment by Jsevillamol on AGI safety from first principles: Goals and Agency · 2020-10-22T10:42:46.330Z · LW · GW

I think this helped me understand you a bit better - thank you.

Let me try paraphrasing this:

> Humans are our best example of a sort-of-general intelligence. And humans have a lazy, satisficing, 'small-scale' kind of reasoning that is mostly only well suited for activities close to their 'training regime'. Hence AGIs may also be the same - and in particular, if AGIs are trained with Reinforcement Learning and heavily rewarded for following human intentions, this may be a likely outcome.

Is that pointing in the direction you intended?

Comment by Jsevillamol on Babble challenge: 50 ways to escape a locked room · 2020-10-13T18:10:53.454Z · LW · GW

(I realized I missed the part in the instructions about an empty room - so my solutions involve other objects)

Comment by Jsevillamol on Babble challenge: 50 ways to escape a locked room · 2020-10-13T18:00:51.097Z · LW · GW
  1. Break the door with your shoulders
  2. Use the window
  3. Break the wall with your fists
  4. Scream for help until somebody comes
  5. Call a locksmith
  6. Light up a paper to trigger the smoke alarm, and wait for the firemen to rescue you
  7. Hide in the closet and wait for your captors to come back - then run for your life
  8. Discover how to time travel - time travel forward into the future until there is no room
  9. Wait until the house becomes old and crumbles
  10. Pick the lock with a paperclip
  11. Shred the bed into a string, pass it through the pet door, lasso the lock and open it
  12. Google how to make a bomb and blast the wall
  13. Open the door
  14. Wait for somebody to pass by, attract their attention by hitting the window, and ask for help by writing on a notepad
  15. Write your location on a piece of paper and slide it under the door, hoping it will find its way to someone who can help
  16. Use the vents
  17. Use that handy secret door you built a while ago and your wife called you crazy for building
  18. Send a message through the internet asking for help
  19. Order a pizza, ask for help when they arrive
  20. Burn the door
  21. Melt the door with a smelting tool
  22. Shoot at the lock with a gun
  23. Push against the door until you quantum tunnel through it
  24. Melt the lock with the Breaking Bad melting lock stuff (probably google that first)
  25. There is no door - overcome your fears and cross the emptiness
  26. Split your mattress in half with a kitchen knife, fit the split mattress through the window to make a landing spot, and jump onto it
  27. Make a paper plane with instructions for someone to help and throw it out of the window
  28. Make a rope with your duvet and slide yourself down to the street
  29. Make a makeshift glider with your duvet and jump out of the window - hopefully it will slow you down enough to not die
  30. Climb out of the window and into the next room
  31. Dig the soil under the door until you can fit through
  32. Set your speaker to maximum volume and ask for help
  33. Break the window with a chair and climb outside
  34. Grow a tree under the door and let it lift the door for you
  35. Use a clothes hanger to slide down the clothesline between your building and your neighbour's. Apologize to the neighbour for disrupting their sleep.
  36. Hit the ceiling with a broom to make the house rats come out. Attach a message to them and send them back into their hole, and on to your neighbour
  37. Meditate until somebody opens the door
  38. Train your flexibility for years until you fit through the dog door
  39. Build a makeshift battering ram with the wooden frame of the bed
  40. Unmount the hinges with a screwdriver and remove the door
  41. Try random combinations until you find the password
  42. Look for the key over the door frame
  43. Collect dust and blow it over the numpad. The dust collects over the three most greasy digits. Try the 6 possible combinations until the door opens.
  44. Find the model number of the lock. Call the manufacturer pretending to be the owner. Wait five minutes while listening to waiting music. Explain you are locked in. Realize you are talking to an automated receiver. Ask to talk with a real person. Explain you are locked in. Follow all instructions.
  45. Do not be in the room in the first place
  46. Try figuring out if you really need to escape in the first place
  47. Swap consciousness with the other body you left outside the room
  48. Complain to your captor that the room is too small and you are claustrophobic. Hope they are understanding.
  49. Pretend to have a heart attack, wait for your captor to carry you outside
  50. Check out ideas on how to escape in the LessWrong babble challenge
Comment by Jsevillamol on AGI safety from first principles: Goals and Agency · 2020-10-02T13:53:26.281Z · LW · GW

Let me try to paraphrase this: 

In the first paragraph you are saying that "seeking influence" is not something that a system will learn to do if that was not a possible strategy in the training regime (but couldn't it appear as an emergent property? Certainly humans were not trained to launch rockets - but they nevertheless did).

In the second paragraph you are saying that common sense sometimes allows you to modify the goals you were given (but for this to apply to AI systems, wouldn't they need to have common sense in the first place, which kind of assumes that the AI is already aligned?)

In the third paragraph it seems to me that you are saying that humans have some goals that have a built-in override mechanism - eg in general humans have a goal of eating delicious cake, but they will forego this goal in the interest of seeking water if they are about to die of dehydration (but doesn't this seem to be a consequence of these goals being just instrumental proxies for the complex thing that humans actually care about?)

I think I am confused because I do not understand your overall point, so the three paragraphs seem to be saying wildly different things to me.

Comment by Jsevillamol on AGI safety from first principles: Goals and Agency · 2020-09-29T14:16:06.236Z · LW · GW

I notice I am surprised you write

However, the link from instrumentally convergent goals to dangerous influence-seeking is only applicable to agents which have final goals large-scale enough to benefit from these instrumental goals

and do not address the "Riemann disaster" or "Paperclip maximizer" examples [1]

  • Riemann hypothesis catastrophe. An AI, given the final goal of evaluating the Riemann hypothesis, pursues this goal by transforming the Solar System into “computronium” (physical resources arranged in a way that is optimized for computation)— including the atoms in the bodies of whomever once cared about the answer.
  • Paperclip AI. An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips.

Do you think that the argument motivating these examples is invalid?

Do you disagree with the claim that even systems with very modest and specific goals will have incentives to seek influence to perform their tasks better? 

Comment by Jsevillamol on Aggregating forecasts · 2020-08-04T22:21:37.007Z · LW · GW

Thank you for pointing this out!

I have a sense that log-odds are an underappreciated tool, and this makes me excited to experiment with them more - the "shared and distinct bits of evidence" framework also seems very natural.

On the other hand, if the Goddess of Bayesian evidence likes log odds so much, why did she make expected utility linear in probability? (I am genuinely confused about this)
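To illustrate with two hypothetical forecasts: averaging in log-odds space amounts to taking the geometric mean of the odds, and it pulls the aggregate closer to extreme forecasts than averaging the probabilities directly does.

```python
import numpy as np

def logit(p):
    # Probability -> log-odds
    return np.log(p / (1 - p))

def sigmoid(x):
    # Log-odds -> probability
    return 1 / (1 + np.exp(-x))

forecasts = np.array([0.9, 0.99])

mean_prob = forecasts.mean()                     # 0.945
mean_logodds = sigmoid(logit(forecasts).mean())  # ~0.968, geometric mean of odds

print(mean_prob, mean_logodds)
```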

Comment by Jsevillamol on Aggregating forecasts · 2020-08-04T07:48:20.351Z · LW · GW


I had not realized, and this makes so much sense.

Comment by Jsevillamol on Can an agent use interactive proofs to check the alignment of succesors? · 2020-07-18T17:43:58.951Z · LW · GW

Paul Christiano has explored the framing of interactive proofs before, see for example this or this.

I think this is an exciting framing for AI safety, since it gets at the crux of one of the issues, as you point out in your question.

Comment by Jsevillamol on What confidence interval should one report? · 2020-04-20T15:18:02.932Z · LW · GW

It's good to know that this is an established practice (do you have handy examples of how others approach this issue?)

However, to clarify: my question is not whether those should be distinguished, but rather what the confidence interval I report should be, given that we are making the distinction between model prediction and model error.

Comment by Jsevillamol on Assessing Kurzweil's 1999 predictions for 2019 · 2020-04-10T14:48:48.735Z · LW · GW

I do not understand prediction 86.

In other words, the difference between those "productively" engaged and those who are not is not always clear.

As context, prediction 84 says

While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.

And prediction 85 says

The issue is complicated by the growing component of most employment's being concerned with the employee's own learning and skill acquisition.

What is Kurzweil talking about? Is this about whether we can tell when employees are doing useful work and when they are shirking?

Comment by Jsevillamol on Assessing Kurzweil's 1999 predictions for 2019 · 2020-04-10T14:38:49.002Z · LW · GW

Sorry for being dense, but how should we fill it?

By default I am going to add a third column with the prediction, is that how you want to receive the data?

Comment by Jsevillamol on Call for volunteers: assessing Kurzweil, 2019 · 2020-04-01T14:25:40.684Z · LW · GW

Sure, sign me up. Happy to do up to 10 for now, plausibly more later depending on how hard it turns out to be.

Comment by Jsevillamol on Is there an intuitive way to explain how much better superforecasters are than regular forecasters? · 2020-02-19T13:55:37.264Z · LW · GW

Brier scores measure three things:

  • How uncertain the forecasting domain is (because of this, Brier scores are not comparable between domains - if I have a low Brier score on short-term weather predictions and you have a high Brier score on geopolitical forecasting, that does not imply I am a better forecaster than you)
  • How well-calibrated the forecaster is (eg we would say that a forecaster is well-calibrated if 80% of the predictions they assigned 80% confidence to actually come true)
  • How much information the forecaster conveys in their predictions (eg if I am predicting coin flips and say 50% all the time, my calibration will be perfect but I will not be conveying extra information)
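A minimal sketch of the last two components, with made-up forecasts:

```python
import numpy as np

# Made-up forecasts (probability assigned to "event happens") and outcomes
forecasts = np.array([0.8, 0.8, 0.8, 0.8, 0.8, 0.5, 0.5, 0.2, 0.2, 0.2])
outcomes = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])

# Brier score: mean squared error of the probabilities (lower is better)
brier = np.mean((forecasts - outcomes) ** 2)

# Calibration check: among predictions at confidence 0.8, what fraction
# actually came true? A well-calibrated forecaster would get ~0.8 here.
mask = forecasts == 0.8
hit_rate = outcomes[mask].mean()

print(brier, hit_rate)
```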

Note that in Tetlock's research there is no hard cutoff from regular forecasters to superforecasters - he arbitrarily declared that the top 2% were superforecasters, and showed that 1) the top 2% of forecasters tended to remain in the top 2% between years and 2) that some of the techniques they used for thinking about forecasts could be shown in an RCT to improve the forecasting accuracy of most people.

Comment by Jsevillamol on On characterizing heavy-tailedness · 2020-02-16T23:11:04.233Z · LW · GW

Sadly I have not come across many definitions of heavy-tailedness that are compatible with finite support, so I don't have any ready examples with both action relevance AND finite support.

Another example involving a momentum-centric definition:

Distributions which are heavy-tailed in the sense of not having a finite moment generating function in a neighbourhood of zero heavily reward exploration over exploitation in multi-armed bandit scenarios.

See for example an invocation of light tailedness to simplify an analysis at the beginning of this paper, implying that the analysis does not carry over directly to heavy tail scenarios (disclaimer, I have not read the whole thing).

Comment by Jsevillamol on On characterizing heavy-tailedness · 2020-02-16T22:48:13.839Z · LW · GW

The point you are making - that distributions with infinite support may be used to represent model error - is a valid one.

And in fact I am less confident about that point than about the others.

I still think that is a nice property to have, though I find it hard to pinpoint exactly what is my intuition here.

One plausible hypothesis is that it makes a lot of sense to talk about the frequency of outliers in bounded contexts. For example, I expect that my beliefs about the world are heavy-tailed - I am mostly ignorant about everything (eg, "is my flatmate brushing their teeth right now?"), but have some outlier strong beliefs about reality which drive my decision making (eg, "after I click submit this comment will be read by you").

Thus if we sample the confidence of my beliefs the emerging distribution seems to be heavy tailed in some sense, even though the distribution has finite support.

One could argue that this is because I am plotting my beliefs in a weird space, and if I plot them on a proper scale like the odds scale, which is unbounded, the problem dissolves. But since expected value is linear in probabilities, not odds, this seems a hard pill to swallow.

Another intuition is that if you focus on studying asymptotic tails you expose yourself to Pascal's mugging scenarios - but this may be a consideration which requires separate treatment (eg Pascal's mugging may require a patch from the decision-theoretic side of things anyway).

As a different point, I would not be surprised if allowing finite support required significantly more complicated assumptions / mathematics, and ended up making the concept of heavy tails less useful. Infinities are useful for abstracting away unimportant details, as in complexity theory for example.

TL;DR: I agree that infinite support can be used to conceptualize model error. I however think there are examples of bounded contexts where we want to talk about dominating outliers - ie, heavy tails.

Comment by Jsevillamol on Advocating for factual advocacy · 2019-08-07T10:36:18.285Z · LW · GW

UPDATE AFTER A YEAR: Since most people believe that lives in the developing world are cheaper to save than they actually are, I think that pretty much invalidates my argument.

My current best hypothesis is that the Drowning Child argument derives its strength from creating a cheap opportunity to buy status.

Comment by Jsevillamol on Alignment Newsletter One Year Retrospective · 2019-04-11T11:05:59.138Z · LW · GW

Some back of the envelope calculations trying to make sense out of the number of subscribers.

The EA survey gets ~2500 responses per year from self-identified EAs, and I expect it represents between 10% and 70% of the EA community, so a fair estimate is that the EA community is about 1e4 people.
They ask about top priorities. About 16% of respondents consider AI risk a top priority.
Assuming representativeness, that means about 2e3 EAs who consider AI risk a priority.
Of those, I would expect about half to be actively considering a career in the field, for about 1e3 people.
This checks out with the newsletter's subscriber count.
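Written out as code, the estimate above goes as follows (every number is a guess from the comment, not data):

```python
# Back-of-the-envelope estimate of potential newsletter subscribers.
survey_responses = 2_500            # EA survey responses per year
coverage_range = (0.10, 0.70)       # assumed fraction of the community surveyed
community_range = [survey_responses / c for c in coverage_range]
# community_range spans ~3.6e3 to 2.5e4; call it ~1e4.
community = 1e4
ai_priority = community * 0.16      # ~16% name AI risk a top priority -> ~2e3
potential_subscribers = ai_priority / 2  # assume half pursue it -> ~1e3
print(community_range, potential_subscribers)
```

The final figure (800, i.e. on the order of 1e3) is what gets compared against the newsletter's subscriber count.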

Comment by Jsevillamol on Book Review: AI Safety and Security · 2018-08-21T22:02:18.002Z · LW · GW

Typo: Tegmarck should be Tegmark

Comment by Jsevillamol on Is there a practitioner's guide for rationality? · 2018-08-13T20:05:33.585Z · LW · GW

At risk of stating the obvious, have you considered attending a CFAR workshop in person?

I found them to be a really great experience, and now that they have started organizing events in Europe they are more accessible than ever!

Check out their page.

Comment by Jsevillamol on Logarithms and Total Utilitarianism · 2018-08-13T11:43:13.757Z · LW · GW

The movement I was going through when thinking about the RC is something akin to "huh, happiness/utility is not a concept that I have an intuitive feeling for, so let me substitute happiness/utility for resources. Now clearly distributing the resources so thinly is very suboptimal. So let's substitute back resources for utility/happiness and reach the conclusion that distributing the utility/happiness so thinly is very suboptimal, so I find this scenario repugnant."

Yeah, the simple model you propose beats my initial intuition. It feels very off, though. Maybe it's missing diminishing returns, and I am wired to expect diminishing returns?

Comment by Jsevillamol on Learning strategies and the Pokemon league parable · 2018-08-13T09:25:33.717Z · LW · GW

I actually got directed to your article by another person before this! Congrats on creating something that people actually reference!

In hindsight, yeah, project-based learning is neither what I meant nor a good alternative to traditional learning; if you can use cheat codes to speed up your learning with somebody else's experience, you should do so without a doubt.

The generator of this post is a combination of the following observations:

1) I see a lot of people who keep waiting for a call to adventure

2) Most knowledge I have acquired through life has turned out to be useless, non-transferable, and/or to fade very quickly

3) It makes sense to think that people get a better grasp of what skills they need to solve a problem (such as producing high-quality AI Alignment research) after they have grappled with the problem. This feels especially true when you are at the edge of a new field, because there is no one else you can turn to who could compress their experience into a digestible format.

4) People (especially in mathematics) have a tendency to wander around aimlessly picking up topics, and then use very little of what they learn. Here I am on less solid ground, because the conventional wisdom is that you need to wander around to "see the connections", but I feel like that might just be confirmation bias creeping in.

Comment by Jsevillamol on Logarithms and Total Utilitarianism · 2018-08-13T09:04:44.695Z · LW · GW

It dissolves the RC for me, because it answers the question "What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "the Repugnant Conclusion"?" [grabbed from your link, substituting "the Repugnant Conclusion" for "free will"].

I feel after reading that post that I no longer find the RC counterintuitive; instead it feels self-evident, and I can channel the repugnance toward aberrant distributions of resources.

But granted, most people I have talked to do not feel the question is dissolved by this. I would be curious to see how many people stop being intuitively confused about the RC after reading a similar line of reasoning.

The point about more workers => more resources is also an interesting thought. We could probably expand the model so that resources vary with the number of workers, and I would expect a similar conclusion to hold for any reasonable model: the optimal sum of utility is achieved not at the extremes, but at a happy medium. Either that, or each additional worker produces so much that even utility per capita grows as the number of workers goes to infinity.

Comment by Jsevillamol on Logarithms and Total Utilitarianism · 2018-08-12T21:06:58.137Z · LW · GW

As I understand it, the idea behind this post dissolves the paradox because it allows us to reframe it in terms of possibility: for a fixed level of resources, there is a number of people for which an equal distribution of resources produces the optimal sum of utility.

Sure, you could get a greater sum from an enormous repugnant population at subsistence level, but creating it would take more resources than you have.

And what is more: even in that situation there is always another, non-aberrant distribution of resources that uses the same total quantity of resources as the repugnant distribution and produces a greater sum of utility.
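For concreteness, here is a sketch of the fixed-resources model being discussed (the exact form U(n) = n·log(R/n) is my assumption about the model in the post):

```python
import math

# R units of resources split equally among n people, each with utility
# log(R / n), so total utility is U(n) = n * log(R / n).
# Setting dU/dn = log(R/n) - 1 = 0 gives a FINITE optimum n* = R / e:
# the total is maximized neither by a tiny elite nor by a repugnantly
# large subsistence population.
R = 1000.0

def total_utility(n):
    return n * math.log(R / n)

n_star = R / math.e  # ~367.9 people for R = 1000

# Halving or doubling the population from n* lowers the total.
assert total_utility(n_star) > total_utility(n_star / 2)
assert total_utility(n_star) > total_utility(n_star * 2)
print(n_star, total_utility(n_star))
```

Note that at the optimum each person gets R/n* = e units, comfortably above the subsistence level R/n = 1 where per-capita utility hits zero.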

Comment by Jsevillamol on Logarithms and Total Utilitarianism · 2018-08-09T09:06:58.689Z · LW · GW

This has shifted my views very positively in favor of total log utilitarianism, as it dissolves the Repugnant Conclusion quite cleanly. Great post!

Comment by Jsevillamol on Prisoners' Dilemma with Costs to Modeling · 2018-07-30T13:36:14.841Z · LW · GW

I have been thinking about this research direction for ~4 days.

No interesting results, though it was a good exercise to calibrate how much I enjoy researching this type of stuff.

In case somebody else wants to dive into it, here are some thoughts I had and resources I used:


  • The definition of depth given in the post seems rather unnatural to me. This is because I expected it would be easy to relate the depth of two agents to the rank of the world of a Kripke chain where the fixed points representing their behavior stabilize. Looking at Zachary Gleit's proof of the fixed point theorem (see The Logic of Provability, chapter 8, by G. Boolos), we can relate the modal degree of a fixed point to the number of modal operators that appear in the modalized formula to be fixed. I thought I could go through Gleit's proof counting the number of boxes that appear in the fixed points, and then combine that with my proof of the generalized fixed point theorem to derive the relationship between the number of boxes appearing in the definition of two agents and the modal degree of the fixed points that appear during a match. This ended up being harder than I anticipated, because naively counting the number of boxes in Gleit's proof produces very fast-growing formulas, and it is hard to combine them through the induction in the proof of the generalized theorem.


  • The Logic of Provability, by G. Boolos. Has pretty much everything you need to know about modal logic. Recommended reading: chapters 1, 3, 4, 5, 6, 7 and 8.
  • Fixed point theorem of provability logic, by J. Sevilla. An in-depth explanation I wrote on Arbital some years ago.
  • Modal Logic in the Wolfram Language, by J. Sevilla. A working implementation of modal combat, with some added utilities. It is hugely inefficient, and Wolfram is not a good choice because of license issues, but it may be useful to somebody who wants to compute the result of a couple of combats or read about modal combat at an introductory level. You can open the attached notebook in the Wolfram Programming Lab.

Thank you Scott for writing this post, it has been useful to get a glimpse of how to do research.