Posts

Aggregating forecasts 2020-07-23T18:04:37.477Z · score: 8 (3 votes)
What confidence interval should one report? 2020-04-20T10:31:54.107Z · score: 4 (1 votes)
On characterizing heavy-tailedness 2020-02-16T00:14:06.197Z · score: 32 (13 votes)
Implications of Quantum Computing for Artificial Intelligence Alignment Research 2019-08-22T10:33:27.502Z · score: 21 (15 votes)
Map of (old) MIRI's Research Agendas 2019-06-07T07:22:42.002Z · score: 11 (5 votes)
Standing on a pile of corpses 2018-12-21T10:36:50.454Z · score: 38 (24 votes)
EA Tourism: London, Blackpool and Prague 2018-08-07T10:41:06.900Z · score: 39 (18 votes)
Learning strategies and the Pokemon league parable 2018-08-07T09:37:27.689Z · score: 43 (22 votes)
EA Spain Community Meeting 2018-07-10T07:24:59.310Z · score: 4 (2 votes)
Estimating the consequences of device detection tech 2018-07-08T18:25:15.277Z · score: 26 (11 votes)
Advocating for factual advocacy 2018-05-06T08:47:46.599Z · score: 27 (11 votes)
The most important step 2018-03-24T12:34:01.643Z · score: 47 (12 votes)

Comments

Comment by jsevillamol on Can an agent use interactive proofs to check the alignment of succesors? · 2020-07-18T17:43:58.951Z · score: 5 (3 votes) · LW · GW

Paul Christiano has explored the framing of interactive proofs before; see for example this or this.

I think this is an exciting framing for AI safety, since it gets to the crux of one of the issues, as you point out in your question.

Comment by jsevillamol on What confidence interval should one report? · 2020-04-20T15:18:02.932Z · score: 1 (1 votes) · LW · GW

It's good to know that this is a widespread practice (do you have handy examples of how others approach this issue?)

However, to clarify: my question is not whether those should be distinguished, but rather which confidence interval I should be reporting, given that we are making the distinction between model prediction and model error.

Comment by jsevillamol on Assessing Kurzweil's 1999 predictions for 2019 · 2020-04-10T14:48:48.735Z · score: 1 (1 votes) · LW · GW

I do not understand prediction 86.

In other words, the difference between those "productively" engaged and those who are not is not always clear.

As context, prediction 84 says

While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.

And prediction 85 says

The issue is complicated by the growing component of most employment's being concerned with the employee's own learning and skill acquisition.

What is Kurzweil talking about? Is this about whether we can tell when employees are doing useful work and when they are shirking?

Comment by jsevillamol on Assessing Kurzweil's 1999 predictions for 2019 · 2020-04-10T14:38:49.002Z · score: 1 (1 votes) · LW · GW

Sorry for being dense, but how should we fill it in?

By default I am going to add a third column with the prediction; is that how you want to receive the data?

Comment by jsevillamol on Call for volunteers: assessing Kurzweil, 2019 · 2020-04-01T14:25:40.684Z · score: 3 (2 votes) · LW · GW

Sure, sign me up. I'm happy to do up to 10 for now, and plausibly more later, depending on how hard it turns out to be.

Comment by jsevillamol on Is there an intuitive way to explain how much better superforecasters are than regular forecasters? · 2020-02-19T13:55:37.264Z · score: 12 (5 votes) · LW · GW

Brier scores measure three things:

  • How uncertain the forecasting domain is (because of this, Brier scores are not comparable between domains - if I have a lower Brier score on short-term weather predictions than you have on geopolitical forecasting, that does not imply I am a better forecaster than you)
  • How well-calibrated the forecaster is (eg we would say that a forecaster is well-calibrated if 80% of the predictions they assigned 80% confidence to actually come true)
  • How much information the forecaster conveys in their predictions (eg if I am predicting coin flips and say 50% all the time, my calibration will be perfect but I will not be conveying any extra information - see the sketch below)
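
To make this concrete, here is a minimal Python sketch (my own illustration, not something from Tetlock's work) of the Brier score for binary events, showing why the coin-flip forecaster above lands at a mediocre score:

```python
# Minimal Brier score sketch for binary events (illustrative only).
# forecasts: predicted probabilities; outcomes: 1 if the event happened, else 0.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Perfectly calibrated but uninformative: always 50% on fair coin flips.
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))  # -> 0.25

# Calibrated AND informative forecasts score lower (lower is better).
print(brier_score([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0]))  # -> ~0.025
```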

Note that in Tetlock's research there is no hard cutoff between regular forecasters and superforecasters - he arbitrarily declared that the top 2% were superforecasters, and showed 1) that the top 2% of forecasters tended to remain in the top 2% across years, and 2) that some of the techniques they used for thinking about forecasts could be shown in an RCT to improve the forecasting accuracy of most people.

Comment by jsevillamol on On characterizing heavy-tailedness · 2020-02-16T23:11:04.233Z · score: 3 (2 votes) · LW · GW

Sadly I have not come across many definitions of heavy-tailedness that are compatible with finite support, so I don't have any ready examples with both action relevance AND finite support.

Another example, involving a moment-centric definition:

Distributions which are heavy-tailed in the sense of not having a finite moment generating function in a neighbourhood of zero heavily reward exploration over exploitation in multi-armed bandit scenarios.
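
Spelling out that definition for concreteness (this is a standard formalization, stated here for the right tail): a distribution $X$ is heavy-tailed when

$$M_X(t) = \mathbb{E}\left[e^{tX}\right] = \infty \quad \text{for all } t > 0.$$

The Pareto and log-normal distributions satisfy this, for example, while the Gaussian does not.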

See for example an invocation of light-tailedness to simplify an analysis at the beginning of this paper, implying that the analysis does not carry over directly to heavy-tailed scenarios (disclaimer: I have not read the whole thing).

Comment by jsevillamol on On characterizing heavy-tailedness · 2020-02-16T22:48:13.839Z · score: 3 (3 votes) · LW · GW

The point you are making - that distributions with infinite support may be used to represent model error - is a valid one.

And in fact I am less confident about that point relative to the others.

I still think that it is a nice property to have, though I find it hard to pinpoint exactly what my intuition is here.

One plausible hypothesis is that it makes a lot of sense to talk about the frequency of outliers in bounded contexts. For example, I expect that my beliefs about the world are heavy-tailed - I am mostly ignorant about everything (eg, "is my flatmate brushing their teeth right now?"), but I have some outlier strong beliefs about reality which drive my decision making (eg, "after I click submit this comment will be read by you").

Thus if we sample the confidence of my beliefs, the emerging distribution seems to be heavy-tailed in some sense, even though the distribution has finite support.

One could argue that this is because I am plotting my beliefs on a weird scale, and that if I plot them on a proper scale like the odds scale, which is unbounded, the problem dissolves. But since expected value is linear in probabilities, not odds, this seems a hard pill to swallow.

Another intuition is that if you focus on studying asymptotic tails you expose yourself to Pascal's mugging scenarios - but this may be a consideration which requires separate treatment (eg Pascal's mugging may require a patch from the decision-theoretic side of things anyway).

As a different point, I would not be surprised if allowing finite support requires significantly more complicated assumptions / mathematics, and ends up making the concept of heavy tails less useful. Infinities are useful for abstracting away unimportant details, as in complexity theory, for example.

TL;DR: I agree that infinite support can be used to conceptualize model error. However, I think there are examples of bounded contexts where we want to talk about dominating outliers - ie heavy tails.

Comment by jsevillamol on Advocating for factual advocacy · 2019-08-07T10:36:18.285Z · score: 10 (3 votes) · LW · GW

UPDATE AFTER A YEAR: Since most people believe that lives in the developing world are cheaper to save than they actually are, I think that pretty much invalidates my argument.

My current best hypothesis is that the Drowning Child argument derives its strength from creating a cheap opportunity to buy status.

Comment by jsevillamol on Alignment Newsletter One Year Retrospective · 2019-04-11T11:05:59.138Z · score: 13 (5 votes) · LW · GW

Some back-of-the-envelope calculations trying to make sense of the number of subscribers.

The EA survey gets about ~2500 responses per year from self-identified EAs, and I expect it represents between 10% and 70% of the EA community, so a fair estimate is that the EA community is about 1e4 people.
They ask about top priorities. About 16% of respondents consider AI risk a top priority.
Assuming representativeness, that means about 2e3 EAs who consider AI risk a priority.
Of those I would expect about half to be actively considering a career in the field, giving about 1e3 people.
This checks out with the newsletter's number of subscribers (arithmetic spelled out below).
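
For concreteness, the same arithmetic as a minimal Python sketch - the 25% coverage figure is my own point estimate standing in for the 10%-70% range above:

```python
# Fermi estimate of potential newsletter subscribers (all inputs are
# the rough assumptions from the comment above, not measured data).
survey_responses = 2500   # annual EA survey responses
survey_coverage = 0.25    # assumed fraction of the community that responds
community_size = survey_responses / survey_coverage    # ~1e4 people
ai_risk_share = 0.16      # respondents rating AI risk a top priority
ai_risk_people = community_size * ai_risk_share        # ~2e3 people
career_share = 0.5        # assumed fraction actively considering the field
potential_subscribers = ai_risk_people * career_share  # ~1e3 people
print(round(potential_subscribers))  # -> 800, i.e. order of magnitude 1e3
```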

Comment by jsevillamol on Book Review: AI Safety and Security · 2018-08-21T22:02:18.002Z · score: 1 (1 votes) · LW · GW

Typo: Tegmarck should be Tegmark

Comment by jsevillamol on Is there a practitioner's guide for rationality? · 2018-08-13T20:05:33.585Z · score: 6 (4 votes) · LW · GW

At risk of stating the obvious, have you considered attending a CFAR workshop in person?

I found them to be a really great experience, and now that they have started organizing events in Europe they are more accessible than ever!

Check out their page.

Comment by jsevillamol on Logarithms and Total Utilitarianism · 2018-08-13T11:43:13.757Z · score: 1 (1 votes) · LW · GW

The mental move I was going through when thinking about the RC is something akin to: "huh, happiness/utility is not a concept that I have an intuitive feeling for, so let me substitute resources for happiness/utility. Now clearly distributing the resources so thinly is very suboptimal. So let's substitute utility/happiness back in for resources and reach the conclusion that distributing the utility/happiness so thinly is very suboptimal; hence I find this scenario repugnant."

Yeah, the simple model you propose beats my initial intuition. It feels very off though. Maybe it's missing diminishing returns, and I am wired to expect diminishing returns?

Comment by jsevillamol on Learning strategies and the Pokemon league parable · 2018-08-13T09:25:33.717Z · score: 4 (3 votes) · LW · GW

I actually got directed to your article by another person before this! Congrats on creating something that people actually reference!

In hindsight, yeah, project-based learning is neither what I meant nor a good alternative to traditional learning; if you can use cheat codes to speed up your learning using the experience of somebody else, you should do so without a doubt.

The generator of this post is a combination of the following observations:

1) I see a lot of people who keep waiting for a call to adventure

2) Most knowledge I have acquired through life has turned out to be useless or non-transferable, and/or it fades very quickly

3) It makes sense to think that people get a better grasp of what skills they need to solve a problem (such as producing high-quality AI Alignment research) after they have grappled with the problem. This feels especially true when you are at the edge of a new field, because there is no one else you can turn to who would be able to compress their experience into a digestible format.

4) People (especially in mathematics) have a tendency to wander around aimlessly picking up topics, and then use very little of what they learn. Here I am standing on not very solid ground, because the conventional wisdom is that you need to wander around to "see the connections", but I feel like that might just be confirmation bias creeping in.

Comment by jsevillamol on Logarithms and Total Utilitarianism · 2018-08-13T09:04:44.695Z · score: 3 (2 votes) · LW · GW

It dissolves the RC for me, because it answers the question "What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "the Repugnant Conclusion"?" [grabbed from your link, with "the Repugnant Conclusion" substituted for "free will"].

After reading that post I no longer feel that the RC is counterintuitive; instead it feels self-evident, and I can channel the repugnance toward aberrant distributions of resources.

But granted, most people I have talked to do not feel the question is dissolved by this. I would be curious to see how many people stop being intuitively confused about the RC after reading a similar line of reasoning.

The point about more workers => more resources is also an interesting thought. We could probably expand the model so that resources vary with the number of workers, and I would expect a similar conclusion to hold for any reasonable model: the optimal sum of utility is not achieved at the extremes, but at a happy medium. Either that, or each additional worker produces so much that even per-capita utility grows as the number of workers goes to infinity (a toy version below).
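
As a toy version of that expanded model (my own sketch, assuming log utility and resources $R(N) = cN^{\beta}$ for $N$ workers): each person receives $cN^{\beta-1}$, so the total utility is

$$U(N) = N \log\left(cN^{\beta-1}\right).$$

For $\beta < 1$ this still peaks at a finite population, while for $\beta > 1$ per-capita resources $cN^{\beta-1}$ grow without bound, which is exactly the second branch above.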

Comment by jsevillamol on Logarithms and Total Utilitarianism · 2018-08-12T21:06:58.137Z · score: 7 (4 votes) · LW · GW

As I understand it, the idea behind this post dissolves the paradox because it allows us to reframe it in terms of what is possible: for a fixed level of resources, there is a number of people for which an equal distribution of resources produces the optimal sum of utility.

Sure, you could get a greater sum from an enormous repugnant population at subsistence level, but creating it would take more resources than you have.

And what is more: even in that situation there is always another, non-aberrant distribution that uses the same total quantity of resources as the repugnant one and produces a greater sum of utility.
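
A minimal worked version of the fixed-resources claim (my own sketch, assuming equal division and the post's logarithmic utility $u(r) = \log r$): with $N$ people sharing resources $R$ equally, the total utility is

$$U(N) = N \log\left(\frac{R}{N}\right), \qquad U'(N) = \log\left(\frac{R}{N}\right) - 1 = 0 \;\Rightarrow\; N^{*} = \frac{R}{e}.$$

So the sum of utility peaks at a finite population with $e$ units of resources per person, and pushing $N$ far past $N^{*}$ - the repugnant direction - strictly decreases it.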

Comment by jsevillamol on Logarithms and Total Utilitarianism · 2018-08-09T09:06:58.689Z · score: 8 (5 votes) · LW · GW

This has shifted my views very positively in favor of total log utilitarianism, as it quite cleanly dissolves the Repugnant Conclusion. Great post!

Comment by jsevillamol on Prisoners' Dilemma with Costs to Modeling · 2018-07-30T13:36:14.841Z · score: 9 (5 votes) · LW · GW

I have been thinking about this research direction for ~4 days.

No interesting results, though it was a good exercise to calibrate how much I enjoy researching this type of stuff.

In case somebody else wants to dive into it, here are some thoughts I had and resources I used:

Thoughts:

  • The definition of depth given in the post seems rather unnatural to me. This is because I expected it would be easy to relate the depth of two agents to the rank of the world of a Kripke chain where the fixed points representing their behavior stabilize. Looking at Zachary Gleit's proof of the fixed point theorem (see The Logic of Provability, chapter 8, by G. Boolos), we can relate the modal degree of a fixed point to the number of modal operators that appear in the modalized formula to be fixed. I thought I could go through Gleit's proof counting the number of boxes that appear in the fixed points, and then combine that with my proof of the generalized fixed point theorem to derive the relationship between the number of boxes appearing in the definitions of two agents and the modal degree of the fixed points that appear during a match. This ended up being harder than I anticipated, because naively counting the number of boxes that appear in Gleit's proof produces very fast-growing formulas, and it is hard to combine them through the induction in the proof of the generalized theorem.

Resources:

  • The Logic of Provability, by G. Boolos. Has pretty much everything you need to know about modal logic. Recommended reading: chapters 1, 3, 4, 5, 6, 7 and 8.
  • Fixed point theorem of provability logic, by J. Sevilla. An in-depth explanation I wrote on Arbital some years ago.
  • Modal Logic in the Wolfram Language, by J. Sevilla. A working implementation of Modal Combat, with some added utilities. It is hugely inefficient, and Wolfram is not a good choice because of license issues, but it may be useful to somebody who wants to compute the result of a couple of combats or read about modal combat at an introductory level. You can open the attached notebook in the Wolfram Programming Lab.

Thank you Scott for writing this post, it has been useful to get a glimpse of how to do research.

Comment by jsevillamol on Estimating the consequences of device detection tech · 2018-07-09T10:06:55.047Z · score: 3 (2 votes) · LW · GW

How much video evidence is relied upon today is thus an upper bound on how useful it will be in the future: as video proof becomes easier to fake, its credibility, and thus its usefulness, diminishes.

If today we do not rely on video proofs that much, then this is not a big deal. If, on the contrary, we rely a lot on video proofs, then this becomes a huge deal.

Comment by jsevillamol on Estimating the consequences of device detection tech · 2018-07-09T06:39:12.571Z · score: 3 (2 votes) · LW · GW

Could you hazard a guess as to how often that happens, and thus how many people are affected each year by things like these? :)

Comment by jsevillamol on Predicting Future Morality · 2018-05-07T07:16:41.934Z · score: 11 (5 votes) · LW · GW

But we also have the opposite narrative: people have more control over which parts of their life are shown on social media to their friends, so it's easier for them to selectively create mask values.

And since in real life you are incentivized not to remain anonymous, it seems like this effect should prevail IRL, relegating 'true' values to anonymous social interaction.

I'm not endorsing either view, just signalling confusion about narratives that I see as equally persuasive.

What do you think?

Comment by jsevillamol on Advocating for factual advocacy · 2018-05-07T06:56:44.397Z · score: 4 (2 votes) · LW · GW

I am going to make a bold claim: traditional marketing strategies are successful due to poorly understood rational incentives they create.

In other words: they are successful because they give factual knowledge of cheap opportunities to purchase status or other social commodities, not because they change our aliefs.

Seen in another light, the evidence of marketing's success supports the morality-as-schelling-point-selector idea in Qiaochu's comment above.

Comment by jsevillamol on Predicting Future Morality · 2018-05-06T07:34:27.851Z · score: 22 (7 votes) · LW · GW

It seems obvious to me that after clean meat is developed we will see the vegan movement rise, and as life extension tech is perfected we will see fewer people advocating against immortality.

Comment by jsevillamol on The most important step · 2018-03-26T09:22:03.327Z · score: 5 (2 votes) · LW · GW

Ask and thou shalt receive: https://www.lesswrong.com/posts/9xLhQ7nJqB8QYyoaY/the-song-of-unity

It's probably not that good compared to this one, though, but this is a promise that I will write another thing every time somebody asks me to!

Comment by jsevillamol on The most important step · 2018-03-26T09:18:05.827Z · score: 3 (1 votes) · LW · GW

Thank you for your kindness! I am glad you enjoyed it.