Comments

Comment by Benjamin Rachbach (benjamin-rachbach) on Quick evidence review of bulking & cutting · 2024-04-05T23:45:01.558Z · LW · GW

I've been working on making Elicit search work better for reviews. I'd be curious to hear more detail on how Elicit failed here, if you'd like to share!

Comment by Benjamin Rachbach (benjamin-rachbach) on Running With a Backpack · 2023-01-16T03:08:10.384Z · LW · GW

Yeah maybe -- I have a ton of calf problems in general when running, and I should probably see a running coach or something.

This pretty clearly did make the calf problems even worse than usual though :p

Comment by Benjamin Rachbach (benjamin-rachbach) on Running With a Backpack · 2023-01-15T21:55:47.358Z · LW · GW

I tried the quick gait for:
1. running with a backpack
2. running for exercise without a backpack

I think I'm sold on it for 1; it seems better than the long, loping gait I previously used for backpack running.

Not sold on it for 2; it seems to wear out my calves quickly.

Comment by Benjamin Rachbach (benjamin-rachbach) on Running by Default · 2023-01-05T19:53:50.253Z · LW · GW

Other things that help you run with a backpack:

1. use both a hip strap and a sternum strap, and tighten both (especially the sternum strap) way more than you normally would for walking. In my experience this eliminates most of the backpack's jostling compared to not using straps
2. instead of carrying your water bottle on the outside, put it inside for better balance and no chance of it falling out
3. use a high-quality backpack with good padding, and probably a rigid back, e.g. (https://smile.amazon.com/North-Face-Router-Meld-Black/dp/B092RJ8G86?sa-no-redirect=1&th=1&psc=1). This also helps a ton with jostling and with not getting poked/smacked by things in the backpack

I've tested all of these a bunch, and they help me a ton.

Less robustly useful:

1. hold onto the shoulder straps as you run (reduces jostling a bit)
2. smooth your running gait to reduce jostling

Comment by Benjamin Rachbach (benjamin-rachbach) on Running by Default · 2023-01-05T19:47:34.529Z · LW · GW

Yep, that helps a ton! (I've tested it many times.)

Comment by Benjamin Rachbach (benjamin-rachbach) on An Observation of Vavilov Day · 2022-01-18T00:13:55.350Z · LW · GW

I'd be interested in joining for a Bay Area kickoff!

Comment by benjamin-rachbach on [deleted post] 2022-01-03T19:10:53.029Z
Comment by benjamin-rachbach on [deleted post] 2021-12-24T17:02:53.433Z

Test:

Elicit prediction (https://forecast.elicit.org/binary/questions/el3utYd8Z)

Comment by benjamin-rachbach on [deleted post] 2021-12-24T17:02:07.227Z

Test:

Elicit prediction ( )

Comment by benjamin-rachbach on [deleted post] 2021-12-24T17:01:15.540Z
Comment by Benjamin Rachbach (benjamin-rachbach) on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-23T01:15:49.251Z · LW · GW

My distribution

My biggest differences with Rohin's prior distribution are:

1. I think that it's much more likely than he does that AGI researchers already agree with safety concerns

2. I think it's considerably more likely than he does that the majority of AGI researchers will never agree with safety concerns

These differences are explained more on my distribution and in my other comments.

The next step that I think would most improve my distribution would be doing more research.

Comment by Benjamin Rachbach (benjamin-rachbach) on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-23T01:09:41.254Z · LW · GW

I thought about how I could most efficiently update my and Rohin’s views on this question.

My best ideas are:
1. Get information directly on this question. What can we learn from surveys of AI researchers, or from AI researchers' public statements?

2. Get information on the question’s reference class. What can we learn about how researchers working on other emerging technologies that might have huge risks thought about those risks?

I did a bit of research/thinking on these, which provided a small update towards thinking that AGI researchers will evaluate AGI risks appropriately.

I think that there’s a bunch more research that would be helpful -- in particular, does anyone know of surveys of AI researchers on their views on safety?

Comment by Benjamin Rachbach (benjamin-rachbach) on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-23T01:08:24.747Z · LW · GW

I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specifies will not be met by 2100?”

This could happen due to any of the following non-mutually exclusive reasons:

1. Global catastrophe before the condition is met, such that people are no longer thinking about AI safety (e.g. human extinction or the end of civilization): 50%

2. The condition is met sometime after 2100 (mostly, I'm imagining that AI progress is slower than I expect): 5%

3. AGI is successfully built despite the condition never being met: 30%

4. There's some huge paradigm shift that makes AI safety concerns irrelevant -- maybe most people are convinced that we'll never build AGI, or our focus shifts from AGI to some other technology: 10%

5. Some other reason: 20%

I thought about this subquestion before reading the comments or looking at Rohin’s distribution. Based on that thinking, I thought that there was a 60% chance that the condition would not be met by 2100.
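Because the five reasons are not mutually exclusive, their percentages can legitimately sum past 100% (here, 115%). Below is a minimal sanity-check sketch, assuming (purely hypothetically) that the reasons are independent events; the short labels are paraphrases of the list above, not the original wording:

```python
# Sanity check for the decomposition above. The five reasons are not
# mutually exclusive, so their probabilities may sum past 100% (here: 115%).
# Hypothetical assumption: treat the reasons as independent events.

reasons = {
    "global catastrophe first": 0.50,         # reason 1
    "condition met after 2100": 0.05,         # reason 2
    "AGI built without condition met": 0.30,  # reason 3
    "paradigm shift away from AGI": 0.10,     # reason 4
    "some other reason": 0.20,                # reason 5
}

# Under independence, P(at least one reason occurs) = 1 - prod(1 - p_i).
p_none = 1.0
for p in reasons.values():
    p_none *= 1.0 - p

print(f"combined probability under independence: {1 - p_none:.0%}")  # ~76%
# Positive overlap between the reasons would pull this figure down,
# toward the 60% all-things-considered estimate above.
```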

Comment by Benjamin Rachbach (benjamin-rachbach) on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-23T01:07:11.018Z · LW · GW

I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specified would already be met (if he went out and talked to the researchers today)?”

Considerations that make it more likely:

1. The considerations identified in ricaz’s and Owain’s comments and their subcomments

2. The bar for understanding safety concerns (question 2 on the "survey") seems like it may be quite low. It seems to me that researchers entirely unfamiliar with safety could gain the required level of understanding in just 30 minutes of reading (this depends on how Rohin would interpret his conversation with the researcher when deciding whether to mark “Yes” or “No”)

Considerations that make it less likely:

1. I’d guess that currently, most AI researchers have no idea what any of the concrete safety concerns are, i.e. they’d be “No”s on question 2

2. The bar for question 3 on the "survey" ("should we wait to build AGI") might be pretty high. If someone thinks that some safety concerns remain but that we should cautiously move forward on building things that look more and more like AGI, does that count as a "Yes" or a "No"?

3. I have the general impression that many AI researchers really dislike the idea that safety concerns are serious enough that we should in any way slow down AI research

I thought about this subquestion before reading the comments or looking at Rohin’s distribution. Based on that thinking, I thought that there was a 25% chance that the condition Rohin specified would already be met.


Note: I work at Ought, so I'm ineligible for the prizes.