Posts

Model Mis-specification and Inverse Reinforcement Learning 2018-11-09T15:33:02.630Z
Latent Variables and Model Mis-Specification 2018-11-07T14:48:40.434Z
[link] Essay on AI Safety 2015-06-26T07:42:11.581Z
The Power of Noise 2014-06-16T17:26:30.329Z
A Fervent Defense of Frequentist Statistics 2014-02-18T20:08:48.833Z
Another Critique of Effective Altruism 2014-01-05T09:51:12.231Z
Macro, not Micro 2013-01-06T05:29:38.689Z
Beyond Bayesians and Frequentists 2012-10-31T07:03:00.818Z
Recommendations for good audio books? 2012-09-16T23:43:31.596Z
What is the evidence in favor of paleo? 2012-08-27T07:07:07.105Z
PM system is not working 2012-08-02T16:09:06.846Z
Looking for a roommate in Mountain View 2012-08-01T19:04:59.872Z
Philosophy and Machine Learning Panel on Ethics 2011-12-17T23:32:20.026Z
Help me fix a cognitive bug 2011-06-25T22:22:31.484Z
Utility is unintuitive 2010-12-09T05:39:34.176Z
Interesting talk on Bayesians and frequentists 2010-10-23T04:10:27.684Z

Comments

Comment by jsteinhardt on AI x-risk reduction: why I chose academia over industry · 2021-03-15T05:52:28.441Z · LW · GW

This doesn't seem so relevant to capybaralet's case, given that he was choosing whether to accept an academic offer that was already extended to him.

Comment by jsteinhardt on Covid 2/18: Vaccines Still Work · 2021-02-19T16:16:25.082Z · LW · GW

If you account for undertesting, I'd guess that 30% or more of the UK was infected during the previous peak. That should reduce R by more than 30% (the people most likely to be infected are also the most likely to spread further), which is already enough to explain the drop.
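As a minimal sketch of the arithmetic, assuming homogeneous mixing as the baseline (the heterogeneity point in the parenthetical is what pushes the reduction above 30%):

```latex
% With a fraction f of the population already infected and immune,
\[
  R_{\mathrm{eff}} \;\approx\; R_0\,(1 - f),
\]
% so f >= 0.3 alone cuts R by at least 30%; since the most-exposed people tend to be
% infected first, the realized reduction should be larger than this homogeneous estimate.
```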

Comment by jsteinhardt on Making Vaccine · 2021-02-06T01:18:27.894Z · LW · GW

I wasn't sure what you meant by more dakka, but do you mean just increasing the dose? I don't see why that would necessarily work--e.g. if the peptide just isn't effective.

I'm confused because we seem to be getting pretty different numbers. I asked another bio friend (who is into DIY stuff) and they also seemed pretty skeptical, and Sarah Constantin seems to be as well: https://twitter.com/s_r_constantin/status/1357652836079837189.

Not disbelieving your account, just noting that we seem to be getting pretty different outputs from the expert-checking process and it seems to be more than just small-sample noise. I'm also confused because I generally trust stuff from George Church's group, although I'm still near the 10% probability I gave above.

I am certainly curious to see whether this does develop measurable antibodies :).

Comment by jsteinhardt on Making Vaccine · 2021-02-05T02:52:45.316Z · LW · GW

Ah got it, thanks!

Comment by jsteinhardt on Making Vaccine · 2021-02-05T02:24:30.096Z · LW · GW

Have you run this by a trusted bio expert? When I did this test (picking a bio person who I know personally, who I think of as open-minded and fairly smart), they thought that this vaccine is pretty unlikely to be effective and that the risks in this article may be understated (e.g. food grade is lower-quality than lab grade, and it's not obvious that inhaling food is completely safe). I don't know enough biology to evaluate their argument, beyond my respect for them.

I'd be curious if the author, or others who are considering trying this, have applied this test.

My (fairly uninformed) estimates would be:
 - 10% chance that the vaccine works in the abstract
 - 4% chance that it works for a given LW user
 - 3% chance that a given LW user has an adverse reaction
 - 12% chance at least 1 LW user has an adverse reaction (rough arithmetic sketched below)
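For concreteness, here is a rough sketch of how the last number can follow from the per-user figure; the number of users actually attempting this is my own assumption, not something stated above:

```python
# Rough sketch of the "at least one adverse reaction" arithmetic.
# Assumption (mine, not stated above): roughly n = 4 LW users actually try this.
p_adverse = 0.03      # per-user chance of an adverse reaction, from the list above
n_users = 4           # hypothetical number of users attempting the vaccine

p_at_least_one = 1 - (1 - p_adverse) ** n_users
print(f"P(at least one adverse reaction) = {p_at_least_one:.1%}")  # ~11.5%, close to the ~12% above
```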

Of course, from a selfish perspective, I am happy for others to try this. In the 10% of cases where it works I will be glad to have that information. I'm more worried that some might substantially overestimate the benefit and underestimate the risks, however.

Comment by jsteinhardt on Making Vaccine · 2021-02-05T02:18:13.308Z · LW · GW

I don't think I was debating the norms, but clarifying how they apply in this case. Most of my comment was a reaction to the "pretty important" and "timeless life lessons", which would apply to Raemon's comment whether or not he was a moderator.

Comment by jsteinhardt on Making Vaccine · 2021-02-05T02:16:28.006Z · LW · GW

Often; e.g., Stanford profs claiming that COVID is less deadly than the flu is a recent and related example.

Comment by jsteinhardt on Making Vaccine · 2021-02-04T19:38:54.276Z · LW · GW

Hmm, important as in "important to discuss", or "important to hear about"?

My best guess based on talking to a smart open-minded biologist is that this vaccine probably doesn't work, and that the author understates the risks involved. I'm interpreting the decision to frontpage as saying that you think I'm wrong with reasonably high confidence, but I'm not sure if I should interpret it that way.

Comment by jsteinhardt on Covid 12/24: We’re F***ed, It’s Over · 2021-01-16T06:13:56.568Z · LW · GW

That seems irrelevant to my claim that Zvi's favored policy is worse than the status quo.

Comment by jsteinhardt on Covid 12/24: We’re F***ed, It’s Over · 2021-01-16T06:11:45.627Z · LW · GW

This isn't based on personal anecdote; studies that try to estimate this come up with 3x. See e.g. the MicroCovid page: https://www.microcovid.org/paper/6-person-risk

Comment by jsteinhardt on Covid 12/31: Meet the New Year · 2021-01-03T07:32:32.957Z · LW · GW

You may well be right. I guess we don't really know what the sampling bias is (it would have to be pretty strongly skewed towards incoming UK cases though to get to a majority, since the UK itself was near 50%).

Comment by jsteinhardt on Covid 12/31: Meet the New Year · 2021-01-01T07:54:58.249Z · LW · GW

See here: https://cov-lineages.org/global_report.html

Comment by jsteinhardt on Covid 12/31: Meet the New Year · 2021-01-01T00:40:34.258Z · LW · GW

I don't think it's correct to say that it remains stable at 0.5-1% of samples in Denmark. There were 13 samples of the new variant last week, vs. only 3 two weeks ago, if I understood the data correctly. If it went from 0.5% to 1% in a week then you should be alarmed. (3 and 13 are both small enough that it's hard to compute a growth rate, but it certainly seems consistent with the UK data to me.)

I think better evidence against increased infectiousness would be Italy and Israel, where the variant seems to be dominant but there isn't runaway growth. But:
 - Italy was on a downtick and then imposed a stronger lockdown, yet the downtick switched to being flat. So R does seem to have increased in Italy.
 - Israel is vaccinating everyone fairly quickly right now.

Comment by jsteinhardt on Covid 12/31: Meet the New Year · 2020-12-31T21:57:31.301Z · LW · GW

Zvi, I still think that your model of vaccination ordering is wrong, and that the best read of the data is that frontline essential workers should be very highly prioritized from a DALY / deaths averted perspective. I left this comment on the last thread that explains my reasoning in detail, looking at both of the published papers I've seen that model vaccine ordering: link. I'd be happy to elaborate on it but I haven't yet seen anyone provide any disagreement.

More minor, but regarding rehab facilities, from a bureaucratic perspective they are "congregate living facilities" and in the same category as retirement homes. I don't think New York is doing anything exceptional by having them high on the list, for instance California is doing the same thing if I understand correctly. We can of course argue over whether it's good for them to be high on the list; I personally think of them as 20-person group houses and so feel reasonably good prioritizing them highly, though I'm not confident in that conclusion.

Comment by jsteinhardt on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T05:00:31.523Z · LW · GW

Zvi, I agree with you that the CDC's reasoning was pretty sketchy, but I think their actual recommendation is correct while everyone else (e.g. the UK) is wrong. I think the order should be something like:

Nursing homes -> HCWs -> 80+ -> frontline essential workers -> ...

(Possibly switching the order of HCWs and 80+.)

The public analyses saying that we should start with the elderly are these two papers:

https://www.medrxiv.org/content/10.1101/2020.09.08.20190629v2.full.pdf
https://www.medrxiv.org/content/10.1101/2020.09.22.20194183v2

Notably, neither paper even considers vaccinating essential workers as a potential intervention. The only option categories are age, comorbidities, and whether you're a healthcare worker. The first paper only considers age and concludes, unsurprisingly, that if your only option is to order by age, you should start with the oldest. In the second paper, which includes HCWs as a category (modeling them as having higher susceptibility but not higher risk of transmitting to others), HCWs jump up the queue to right after the 80+ age group (!!!). Since the only factor being considered is susceptibility, presumably many types of essential workers would also have higher susceptibility and fall into the same group.

If we apply the Zvi cynical lens here, we can ask why these papers perform an analysis that suggests prioritizing healthcare workers but don't bother to point out that the same analysis applies to 10% of the population (hint: the available vaccine covers less than 10% of the population, and the authors are in the healthcare profession).

The actual problem with the original CDC recommendations was that "essential workers" is so broad a category that it encompasses lots of people who aren't actually at increased risk (because their jobs don't require much contact). The new recommendations revised this to focus on frontline essential workers, a narrower category covering about half of all essential workers. This is a huge improvement, but I think even the original recommendations are better than the UK approach of prioritizing only by age.

Remember, we should focus on results. If the CDC is right while everyone else is wrong, then even if its stated reasoning is bad, pressuring it to conform to everyone else's worse approach would make things worse.

Comment by jsteinhardt on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-12-02T17:58:58.939Z · LW · GW

Mo Bamba (NBA) and Cody Garbrandt (UFC) are both pro athletes who are still out of commission months later. I found this while looking for NBA information, and only about 50 NBA players have gotten Covid, so this suggests at least a 2% chance of pretty bad long-term symptoms.

Comment by jsteinhardt on Pain is not the unit of Effort · 2020-12-02T08:02:32.345Z · LW · GW

I think that the right amount level of effort leaves you tired but warm inside, like you look forward doing this again, rather than just feeling you HAVE to do this again.

 

This is probably true in a practical sense (otherwise you won't sustain it as a habit), but I'm not sure it describes a well-defined level of effort. For me an extreme effort could still lead to me looking forward to it, if I have a concrete sense of what that effort bought me (maybe I do some tedious and exhausting footwork drills, but I understand the sense in which this will carry over into a game-like situation, so it feels rewarding; but I wouldn't be able to sustainably put in that same level of effort if I couldn't visualize the benefits).

It seems to me that calibrating the right level of effort requires some other principle (for physical activity this would be based on rates of adaptation, to avoid overtraining), and then you should perform visualization or other mental exercises to align your psychology with that level of effort.

Comment by jsteinhardt on Pain is not the unit of Effort · 2020-12-02T07:52:26.976Z · LW · GW

If most workouts are painful, then I agree you are probably overtraining. But if no workouts at all are painful, you're probably missing opportunities to improve. And many workouts should at least be uncomfortable for parts of them. E.g. when lifting, for the last couple of deadlift sets I often feel incredibly gassed and don't feel like doing another one. But this can be true even when I'm far away from my limits (like, a month later I'll be lifting 30 pounds more and feel about as tired, rather than failing to do the lift).

My guess is that on average 1-2 workouts a week should feel uncomfortable in some way, and 1-2 workouts a month should feel painful, if you're training optimally. But it probably varies by sport (I'm mostly thinking sports like soccer or basketball that are high on quickness and lateral movement, but only moderate on endurance).

ETA: Regarding whether elite athletes are performing optimally, it's going to depend on the sport, but in, say, basketball, where players have 10+ year careers, teams generally have a lot of incentive not to destroy players' bodies. Most of the wear and tear comes from games, while training outside of games is often aimed at preventing injuries by preparing the body for the high and erratic levels of contact in games. (I could imagine that in, say, gymnastics, or maybe even American football, the training incentives are misaligned with long-term health, but I don't know much about either.)

Comment by jsteinhardt on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-11-29T16:53:40.895Z · LW · GW

You could look at papers published on medrxiv rather than news articles, which would resolve the clickbait issue, though you'd still have to assess the study quality.

Comment by jsteinhardt on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-11-29T04:29:12.129Z · LW · GW

Have you tried googling for them yourself and been unable to find them? (Sorry that I'm too lazy to re-look them up myself, but given that LW is mostly leisure for me I don't feel like doing it, and I'd be somewhat surprised if you googled for them and didn't find anything.)

Comment by jsteinhardt on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-11-22T16:17:58.051Z · LW · GW

I also think you are probably overestimating vaccine risks (the main risk is that its effectiveness wanes, and that it interferes with future antibody responses from similar vaccines; not that you'll get horrible side effects) but that isn't necessary to explain why people want the vaccine now.

Comment by jsteinhardt on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-11-22T16:14:02.864Z · LW · GW

I think cutting the IFR by a factor of 25 on the basis of one study is a mistake; the chance of the study being fatally flawed is greater than 1 in 25. On the other hand, 0.5% is the overall CFR and would be lower for young people.

I think it's hard to cut the risk of long-term effects by more than a factor of 10 from published estimates. Note that there is evidence of long-term effects, contrary to your claim, i.e. studies that do 6-week follow-ups and find people who still have some symptoms. This isn't 6 months, but it is still surprisingly long and should shift our beliefs about 6 months at least somewhat. The fact that this is a novel disease that attacks many parts of the body is also some evidence. I agree the evidence is exaggerated to scare us, but it feels like a different situation from reinfection, where it actually is almost impossible to find instances except in the immunocompromised.

But perhaps the most important point is that even young people are currently limiting their activities in many undesirable ways in accordance with local government ordinances (which apply equally to old and young). Vaccination allows one to end or partially end these limitations--even if not in a legal sense, probably at least in a moral sense.

Comment by jsteinhardt on Why Boston? · 2020-10-13T06:50:04.150Z · LW · GW

I noticed the prudishness, but "rudeness" to me parses as people actually telling you what's on their mind, rather than the passive-aggressive fake niceness that seems to dominate in the Bay Area. I'll personally take the rudeness :).

Comment by jsteinhardt on Why Boston? · 2020-10-13T06:46:42.306Z · LW · GW

On the other hand, the second-best place selects for people who don't care strongly about optimizing for legible signals, which is probably a plus. (An instance of this: In undergrad the dorm that, in my opinion, had the best culture was the run-down dorm that was far from campus.)

Comment by jsteinhardt on Why Boston? · 2020-10-11T05:29:57.228Z · LW · GW

Many of the factors affecting number of deaths are beyond a place's control, such as how early on the pandemic spread to that place, and how densely populated the city is. I don't have a strong opinion about MA but measuring by deaths per capita isn't a good way of judging the response.

Comment by jsteinhardt on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-17T02:02:03.053Z · LW · GW

That's not really what a p-value means though, right? The actual replication rate should depend on the prior and the power of the studies.

Comment by jsteinhardt on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-12T18:03:46.264Z · LW · GW

What are some of the recommendations that seem most off base to you?

Comment by jsteinhardt on Covid-19 6/11: Bracing For a Second Wave · 2020-06-13T19:58:18.847Z · LW · GW

My prediction: infections will either go down or only slowly rise in most places, with the exception of one or two metropolitan areas. If I had to pick one it would be LA, not sure what the second one will be. The places where people are currently talking about spikes won't have much correlation with the places that look bad two weeks from now (i.e. people are mostly chasing noise).

I'm not highly confident in this, but it's been a pretty reliable prediction for the past month at least...

Comment by jsteinhardt on Estimating COVID-19 Mortality Rates · 2020-06-13T08:00:32.737Z · LW · GW

Here is a study that a colleague recommends: https://www.medrxiv.org/content/10.1101/2020.05.03.20089854v3. Tweet version: https://mobile.twitter.com/gidmk/status/1270171589170966529?s=21

Their point estimate is 0.64% but with likely heterogeneity across settings.

Comment by jsteinhardt on Quarantine Bubbles Require Directness, and Tolerance of Rudeness · 2020-06-10T02:33:59.701Z · LW · GW

I don't think bubble size is the right thing to measure; instead you should measure the amount of contact you have with people, weighted by time, distance, indoor/outdoor, mask-wearing, and how likely the other person is to be infected (i.e. how careful they are).

An important part of my mental model is that infection risk is roughly linear in contact time.
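To make the weighting concrete, here is a toy sketch of that kind of contact-weighted risk score; all of the multipliers are illustrative placeholders I made up, not calibrated values (microcovid.org publishes calibrated ones):

```python
# Toy sketch of a contact-weighted risk score. All multipliers are illustrative
# placeholders, not calibrated values.
def contact_risk(hours, p_other_infected, outdoor=False, masked=False, distanced=False):
    base_rate_per_hour = 0.06                             # assumed transmission rate per hour of close indoor contact
    risk = p_other_infected * base_rate_per_hour * hours  # roughly linear in contact time
    if outdoor:
        risk *= 0.1                                       # assumed outdoor discount
    if masked:
        risk *= 0.5                                       # assumed mask discount
    if distanced:
        risk *= 0.5                                       # assumed distancing discount
    return risk

# Example: 2 hours indoors, unmasked, with someone who has a 1% chance of being infectious.
print(contact_risk(2, 0.01))                              # 0.0012, i.e. ~0.12% chance of transmission
```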

Comment by jsteinhardt on Quarantine Bubbles Require Directness, and Tolerance of Rudeness · 2020-06-08T07:50:07.485Z · LW · GW

As a background assumption, I'm focused on the societal costs of getting infected, rather than the personal costs, since in most places the latter seem negligible unless you have pre-existing health conditions. I think this is also the right lens through which to evaluate Alameda's policy, although I'll discuss the personal calculation at the end.

From a social perspective, I think it's quite clear that the average person is far from being effectively isolated, since R is around 0.9 and you can only get to around half of that via household infection alone. So a 12-person bubble isn't really a bubble... It's 12 people who each bring in non-trivial risk from the outside world. On the other hand, they're also not that likely to infect each other.

From a personal perspective, I think the real thing to care about is whether the other people are about as careful as you. By symmetry there's no reason to think that another house that practices a similar level of precaution is more likely to get an outside infection than your house is. But by the same logic there's nothing special about a 12 person bubble: you should be trying to interact with people with the same or better risk profile as you (from a personal perspective; from a societal perspective you should interact with riskier people, at least if you're low risk, because bubbles full of risky people are the worst possible configuration and you want to help break those up).

Comment by jsteinhardt on Quarantine Bubbles Require Directness, and Tolerance of Rudeness · 2020-06-08T04:52:54.259Z · LW · GW

I think the biggest issue with the bubble rule is that the math doesn't work out. The secondary attack rate between household members is ~30% and probably much lower between other contacts. At that low a rate, these games with the graph structure buy very little, and may be harmful because they increase the fraction of contact occurring between similar people (which is bad because the social cost of a pair of people interacting is roughly the product of their infection risks).
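A toy illustration of that last point, with made-up risk numbers: under the pairwise-product model, grouping similar (risky) people together produces a higher total cost than mixing risky and careful people.

```python
from itertools import combinations

# Toy illustration with made-up risk numbers: score a bubble as the sum over pairs
# of the product of the two members' infection risks.
def bubble_cost(risks):
    return sum(a * b for a, b in combinations(risks, 2))

risky, careful = [0.10, 0.10], [0.01, 0.01]

# Grouping similar people together vs. mixing risky and careful people:
assortative = bubble_cost(risky) + bubble_cost(careful)   # 0.0100 + 0.0001
mixed = bubble_cost([0.10, 0.01]) + bubble_cost([0.10, 0.01])
print(f"{assortative:.4f} vs {mixed:.4f}")                # 0.0101 vs 0.0020
```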

Comment by jsteinhardt on Estimating COVID-19 Mortality Rates · 2020-06-07T20:14:30.925Z · LW · GW

I'm not trying to intimidate; I'm trying to point out that I think you're making errors that could be corrected by more research, which I hoped would be helpful. I've provided one link (which took me some time to dig up). If you don't find this useful that's fine, you're not obligated to believe me and I'm not obligated to turn a LW comment into a lit review.

Comment by jsteinhardt on Estimating COVID-19 Mortality Rates · 2020-06-07T18:58:05.422Z · LW · GW

The CFR will shift substantially over time and location as testing changes. I'm not sure how you would reliably use this information. IFR should not change much and tells you how bad it is for you personally to get sick.

I wouldn't call the model Zvi links expert-promoted. Every expert I talked to thought it had problems, and the people behind it are economists, not epidemiologists or statisticians.

For IFR you can start with seroprevalence data here and then work back from death rates: https://twitter.com/ScottGottliebMD/status/1268191059009581056

Regarding back-of-the-envelope calculations, I think we have different approaches to evidence/data. I started with back-of-the-envelope calculations 3 months ago, but I would have based things on a variety of BOTECs and not a single one. Now I've found other sources that take the BOTEC and do smarter stuff on top of it, so I mostly defer to those sources, or to experts with a good track record. This is easier for me because I've worked full-time on COVID for the past 3 months; if I weren't in that position I'd probably combine some of my own BOTECs with the opinions of people I trusted. In your case, I predict that Zvi, if you asked him, would also say the IFR was in the range I gave.

Comment by jsteinhardt on Estimating COVID-19 Mortality Rates · 2020-06-07T17:04:38.882Z · LW · GW

Ben, I think you're failing to account for under-testing. You're computing the case fatality rate when you want the infection fatality rate. Most experts, as well as the well-done meta analyses, place the IFR in the 0.5%-1% range. I'm a little bit confused why you're relying on this back of the envelope rather than the pretty extensive body of work on this question.

Comment by jsteinhardt on Ben Hoffman's donor recommendations · 2018-07-30T17:59:04.732Z · LW · GW

I don't understand why this is evidence that "EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities", which was Howie's original question. It seems like evidence that donations to OpenPhil (which afaik cannot be made by individual donors) funge against donations to the long-term future EA fund.

Comment by jsteinhardt on RFC: Philosophical Conservatism in AI Alignment Research · 2018-05-15T04:24:03.648Z · LW · GW

I like the general thrust here, although I have a different version of this idea, which I would call "minimizing philosophical pre-commitments". For instance, there is a great deal of debate about whether Bayesian probability is a reasonable philosophical foundation for statistical reasoning. It seems that it would be better, all else equal, for approaches to AI alignment to not hinge on being on the right side of this debate.

I think there are some places where it is hard to avoid pre-commitments. For instance, while this isn't quite a philosophical pre-commitment, it is probably hard to develop approaches that are simultaneously optimized for short and long timelines. In this case it is probably better to explicitly do case splitting on the two worlds and have some subset of people pursuing approaches that are good in each individual world.

Comment by jsteinhardt on [deleted post] 2018-03-19T19:43:11.984Z

FWIW I understood Zvi's comment, but feel like I might not have understood it if I hadn't played Magic: The Gathering in the past.

EDIT: Although I don't understand the link to Sir Arthur's green knight, unless it was a reference to the fact that M:tG doesn't actually have a green knight card.

Comment by jsteinhardt on Takeoff Speed: Simple Asymptotics in a Toy Model. · 2018-03-06T13:54:41.342Z · LW · GW

Thanks for writing this Aaron! (And for engaging with some of the common arguments for/against AI safety work.)

I personally am very uncertain about whether to expect a singularity/fast take-off (I think it is plausible but far from certain). Some reasons that I am still very interested in AI safety are the following:

  • I think AI safety likely involves solving a number of difficult conceptual problems, such that it would take >5 years (I would guess something like 10-30 years, with very wide error bars) of research to have solutions that we are happy with. Moreover, many of the relevant problems have short-term analogues that can be worked on today. (Indeed, some of these align with your own research interests, e.g. imputing value functions of agents from actions/decisions; although I am particularly interested in the agnostic case where the value function might lie outside of the given model family, which I think makes things much harder.)
  • I suppose the summary point of the above is that even if you think AI is a ways off (my median estimate is ~50 years, again with high error bars), research is not something that can happen instantaneously, and conceptual research in particular can move slowly due to being harder to work on / parallelize.
  • While I have uncertainty about fast take-off, that still leaves some probability that fast take-off will happen, and in that world it is an important enough problem that it is worth thinking about. (It is also very worthwhile to think about the probability of fast take-off, as better estimates would help to better direct resources even within the AI safety space.)
  • Finally, I think there are a number of important safety problems even from sub-human AI systems. Tech-driven unemployment is I guess the standard one here, although I spend more time thinking about cyber-warfare/autonomous weapons, as well as changes in the balance of power between nation-states and corporations. These are not as clearly an existential risk as unfriendly AI, but I think in some forms would qualify as a global catastrophic risk; on the other hand I would guess that most people who care about AI safety (at least on this website) do not care about it for this reason, so this is more idiosyncratic to me.

Happy to expand on/discuss any of the above points if you are interested.

Best,

Jacob

Comment by jsteinhardt on Takeoff Speed: Simple Asymptotics in a Toy Model. · 2018-03-06T13:32:48.176Z · LW · GW

Very minor nitpick, but just to add, FLI is as far as I know not formally affiliated with MIT. (FHI is in fact a formal institute at Oxford.)

Comment by jsteinhardt on Zeroing Out · 2017-11-05T22:19:45.863Z · LW · GW

Hi Zvi,

I enjoy reading your posts because they often consist of clear explanations of concepts I wish more people appreciated. But I think this is the first instance where I feel I got something that I actually hadn't thought about before at all, so I wanted to convey extra appreciation for writing it up.

Best,

Jacob

Comment by jsteinhardt on Seek Fair Expectations of Others’ Models · 2017-10-20T03:53:12.702Z · LW · GW

I think the conflation is between "decades out" and "far away".

Comment by jsteinhardt on [deleted post] 2017-10-17T03:04:59.264Z

Galfour was specifically asked to write his thoughts up in this thread: https://www.lesserwrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence/kAywLDdLrNsCvXztL

It seems either this was posted to the wrong place, or there is some disagreement within the community (e.g. between Ben in that thread and the people downvoting).

Comment by jsteinhardt on Oxford Prioritisation Project Review · 2017-10-14T18:08:10.872Z · LW · GW

Points 1-5 at the beginning of the post are all primarily about community-building and personal development externalities of the project, and not about the donation itself.

Comment by jsteinhardt on Oxford Prioritisation Project Review · 2017-10-14T03:58:56.583Z · LW · GW

?? If you literally mean minimum wage, I think that is less than 10,000 pounds... although I agree with the general thrust of your point about the money being more valuable than the time (but think you are missing the spirit of the exercise as outlined in the post).

Comment by jsteinhardt on Robustness as a Path to AI Alignment · 2017-10-11T05:46:54.275Z · LW · GW

You might be interested in my work on learning from untrusted data (see also earlier work on aggregating unreliable human input). I think it is pretty relevant to what you discussed, although if you do not think it is, then I would also be pretty interested in understanding that.

Unrelated, but for quantilizers, isn't the biggest issue going to be that if you need to make a sequence of decisions, the probabilities are going to accumulate and give exponential decay? I don't see how to make a sequence of 100 decisions in a quantilizing way unless the base distribution of policies is very close to the target policy.
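A rough sketch of the compounding concern, assuming the standard guarantee that a q-quantilizer never exceeds the base distribution's action probabilities by more than a factor of 1/q (this framing is mine, not from the original comment):

```latex
% A q-quantilizer satisfies \pi(a_t \mid s_t) \le \tfrac{1}{q}\,\pi_0(a_t \mid s_t)
% at each step, so over a sequence of T decisions the trajectory-level density ratio
% can compound multiplicatively:
\[
  \frac{\pi(a_1,\dots,a_T)}{\pi_0(a_1,\dots,a_T)}
  \;=\; \prod_{t=1}^{T} \frac{\pi(a_t \mid s_t)}{\pi_0(a_t \mid s_t)}
  \;\le\; \left(\frac{1}{q}\right)^{T},
\]
% e.g. 2^{100} for q = 1/2 and T = 100, so the usual "at most 1/q times the base
% policy's expected cost" bound becomes vacuous unless the base distribution over
% policies is already very close to the target policy.
```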

Comment by jsteinhardt on [deleted post] 2017-05-27T20:51:14.016Z

Parts of the house setup pattern-match to a cult; cult members aren't good at realizing when they need to leave, but their friends can probably tell much more easily.

(I don't mean the above as negatively as it sounds connotatively, but it's the most straightforward way to say what I think is the reason to want external people. I also think this reasoning degrades gracefully with the amount of cultishness.)

Comment by jsteinhardt on [deleted post] 2017-05-27T17:29:34.887Z

I think there's a difference between a friend that one could talk to (if they decide to), and a friend tasked with the specific responsibility of checking in and intervening if things seem to be going badly.

Comment by jsteinhardt on Scenario analysis: a parody · 2017-04-28T04:52:09.218Z · LW · GW

I feel like you're straw-manning scenario analysis. Here's an actual example of a document produced via scenario analysis: Global Trends 2035.

Comment by jsteinhardt on Effective altruism is self-recommending · 2017-04-21T23:35:42.407Z · LW · GW

When you downvote something on the EA forum, it becomes hidden. Have you tried viewing it while not logged in to your account? It's still visible to me.