Posts

The AGI Optimist’s Dilemma 2023-02-23T20:20:22.507Z
Overconfidence bubbles 2022-12-18T02:07:32.122Z

Comments

Comment by kaputmi on What causes randomness? · 2023-02-23T20:38:20.548Z · LW · GW

I'd recommend reading Stephen Wolfram on this question. For instance: https://www.wolframscience.com/nks/p315--the-intrinsic-generation-of-randomness/

Comment by kaputmi on 2022 was the year AGI arrived (Just don't call it that) · 2023-01-04T21:49:56.021Z · LW · GW
  • Building a superhuman AI focused on a specific task is more economically valuable than building a much more expensive AI that is bad at a large number of things.

It also comes with ~0 risk of paperclipping the world: AlphaZero is godlike at chess without needing to hijack all resources for its purposes.

Comment by kaputmi on What is the best way to approach Expected Value calculations when payoffs are highly skewed? · 2023-01-03T02:42:47.684Z · LW · GW

Yes, I think performance ultimately matters much more than risk preferences. If you really want to take preferences into account, you can define utility as a function of wealth and then maximize the growth rate of utility instead. But I think risk-aversion has been way overemphasized by academics who weren't thinking about ergodicity. They were reasoning along St. Petersburg Paradox lines, where any +EV bet must be rational, so when people decline +EV bets they must be irrationally risk-averse.
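The ergodicity point above can be made concrete with a small sketch (the 60/40 even-money bet and the functions below are my illustrative assumptions, not from the comment): per-round expected value is maximized by betting your whole bankroll, while the time-average growth rate of wealth peaks at a much smaller fraction and collapses as you approach all-in.

```python
import math

def expected_value(p, b, f):
    """Per-round expected profit when betting fraction f of bankroll
    on a bet that pays b-to-1 with win probability p."""
    return p * f * b - (1 - p) * f

def growth_rate(p, b, f):
    """Expected log-growth of wealth per round (the ergodic,
    time-average quantity), for 0 <= f < 1."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

# A favorable 60/40 even-money bet: EV grows linearly in f (bet it all!),
# but the growth rate is higher at f = 0.2 than at f = 0.9, and it
# diverges to -infinity as f -> 1 (eventual ruin on repeated play).
```

Maximizing `growth_rate` rather than `expected_value` is exactly the "maximize growth, not EV" move the comment describes.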

Comment by kaputmi on What is the best way to approach Expected Value calculations when payoffs are highly skewed? · 2022-12-28T18:10:42.173Z · LW · GW

What you actually want is to maximize the growth rate of your bankroll; you can go broke making +EV bets. The Kelly Criterion is the solution you're looking for in something like a lottery: a bet is "rational" iff the Kelly Criterion says you should make it.
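A minimal sketch of the standard Kelly formula for a simple b-to-1 bet (the lottery numbers below are my own illustration, not from the comment):

```python
def kelly_fraction(p, b):
    """Optimal fraction of bankroll to wager on a bet paying b-to-1
    with win probability p (classic Kelly: f* = p - (1 - p) / b)."""
    return p - (1 - p) / b

# A lottery-style bet: one-in-a-million chance of a 2,000,000x payout.
# Per-unit EV is strongly positive (1e-6 * 2e6 - (1 - 1e-6) ~= +1),
# yet Kelly says to stake only a vanishing sliver of your bankroll,
# which is the sense in which a +EV bet can still be one to skip.
f = kelly_fraction(p=1e-6, b=2_000_000)
```

A negative `f*` means the bet shouldn't be taken at all; a tiny positive `f*`, as here, means "rational" only at a stake most lottery tickets vastly exceed.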

Comment by kaputmi on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-18T03:05:15.177Z · LW · GW

Why wouldn’t AGI build a superhuman understanding of ethics, which it would then use to guide its decision-making? 

Comment by kaputmi on Be less scared of overconfidence · 2022-12-18T02:32:18.990Z · LW · GW

I think the gears-level models are really the key here. Without a gears-level model you're flying blind, and the outside view is most valuable precisely when you're flying blind. But with a solid understanding of the causal mechanisms in a system, you don't need to rely on others' opinions to make good predictions and decisions.

Comment by kaputmi on College Selection Advice for Technical Alignment · 2022-12-18T02:24:16.994Z · LW · GW

My advice:

  • You're certainly not going to go wrong with Harvard. The value of college is much more in the people you will meet than anything else, and Harvard's quality of student body is as high as anywhere. 
  • When I applied to college I judged schools based on the quality of their economics department because I was convinced I would become an academic economist. Turned out to be very wrong. I think the chances you end up working on alignment research are low — maybe 20% — so don't over-index on that. 
  • Of course you can take advanced courses in whatever you want! Information is free these days. Don't let the school's curriculum dictate what you pursue. It's OK to optimize to some degree for easy classes so long as you are doing something valuable with the free time you are gaining.