If you have an idea for a bet that's net-positive for me I'm all ears.
Are you much higher than Metaculus' community on "Will ARC find that GPT-5 has autonomous replication capabilities?"
I gain money in expectation with loans, because I don't expect to have to pay them back.
I see. I was implicitly assuming a near-term loan or one with an interest rate linked to economic growth, but you might be able to get a long-term loan with a fixed interest rate.
What specific bet are you offering?
I transfer 10 k today-€ to you now, and you transfer 20 k today-€ to me if there is no ASI as defined by Metaculus by date X, which has to be sufficiently far away for the bet to be better than your best loan. X could be 12.0 years (= LN(0.9*20*10^3/(10*10^3))/LN(1 + 0.050)) from now assuming a 90 % chance I win the bet, and an annual growth of my investment of 5.0 %. However, if the cost-effectiveness of my donations also decreases 5 % annually, then I can only go as far as 6.00 years (= 12.0/2).
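A minimal sketch of the arithmetic above, using the stated assumptions (90 % win probability, 5 % annual growth, 2:1 payout); none of these are agreed terms:

```python
import math

# Break-even horizon X for the proposed bet: 10 k€ invested now at 5 %/year
# should match the expected payout of 20 k€, won with 90 % probability.
p = 0.9        # assumed probability that there is no ASI by date X
g = 0.05       # assumed annual growth of the invested stake
stake = 10e3   # euros transferred now
payout = 20e3  # euros transferred back at date X if there is no ASI

X = math.log(p * payout / stake) / math.log(1 + g)
print(X)  # ≈ 12.0 years

# If the cost-effectiveness of donations also decays 5 %/year, the exponent is
# effectively doubled, so the horizon is roughly halved.
print(X / 2)  # ≈ 6.0 years
```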
I also guess the stock market will grow faster than suggested by historical data, so I would only want X to be roughly as far out as 2028. So, at the end of the day, it looks like you are right that you would be better off getting a loan.
You could instead pay me $10k now, with the understanding that I'll pay you $20k later in 2028 unless AGI has been achieved in which case I keep the money... but then why would I do that when I could just take out a loan for $10k at low interest rate?
We could set up the bet such that you would neither lose nor gain money in expectation under your views, whereas you would lose money in expectation with a loan? Also, note the bet I proposed above was about ASI as defined by Metaculus, not AGI.
Thanks, Daniel. That makes sense.
But it wasn't rational for me to do that, I was just doing it to prove my seriousness.
My offer was also in this spirit of you proving your seriousness. Feel free to suggest bets which would be rational for you to take. Do you think there is a significant risk of a large AI catastrophe in the next few years? For example, what do you think is the probability of human population decreasing from (mid) 2026 to (mid) 2027?
Thanks, Daniel!
To be clear, my view is that we'll achieve AGI around 2027, ASI within a year of that, and then some sort of crazy robot-powered self-replicating economy within, say, three years of that.
Is your median date of ASI as defined by Metaculus around 1 July 2028 (it would be if your time until AGI were strongly correlated with your time from AGI to ASI)? If so, I am open to a bet where:
- I give you 10 k€ if ASI happens by the end of 2028 (slightly after your median, such that you have a positive expected monetary gain; see the sketch below the bullets).
- Otherwise, you give me 10 k€, which I would donate to animal welfare interventions.
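As a quick illustration (the probability below is hypothetical, not your actual credence), the expected monetary gain on your side is positive whenever your probability of ASI by the end of 2028 exceeds 0.5:

```python
# Expected gain in euros for the counterparty, who receives the 10 k€ stake if
# ASI (as defined by Metaculus) happens by the end of 2028 and pays it otherwise.
def expected_gain_eur(p, stake=10_000):
    return p * stake - (1 - p) * stake

print(expected_gain_eur(0.55))  # e.g. p = 0.55 implies +1,000 € in expectation
```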
Thanks, Ryan.
Daniel almost surely doesn't think growth will be constant. (Presumably he has a model similar to the one here.)
That makes sense. Daniel, my terms are flexible. Just let me know what your median fraction for 2027 is, and we can go from there.
I assume he also thinks that by the time energy production is >10x higher, the world has generally been radically transformed by AI.
Right. I think the bet is roughly neutral with respect to monetary gains under Daniel's view, but Daniel may want to go ahead despite that to show that he really endorses his views. Not taking the bet may suggest Daniel is worried about losing 10 k€ in a world where 10 k€ is still relevant.
Thanks for the update, Daniel! How about the predictions about energy consumption?
"In what year will the energy consumption of humanity or its descendants be 1000x greater than now?"
Your median date for humanity's energy consumption being 1 k times as large as now is 2031, whereas Ege's is 2177. What is your median primary energy consumption in 2027 as reported by Our World in Data as a fraction of that in 2023? Assuming constant growth from 2023 until 2031, your median fraction would be 31.6 (= (10^3)^((2027 - 2023)/(2031 - 2023))). I would be happy to set up a bet where:
- I give you 10 k€ if the fraction is higher than 31.6.
- You give me 10 k€ if the fraction is lower than 31.6. I would then use the 10 k€ to support animal welfare interventions.
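A one-line check of the 31.6 figure, assuming (as above) constant exponential growth of primary energy consumption from the 2023 level to 1,000 times that level in 2031:

```python
# Implied 2027 primary energy consumption as a multiple of 2023, under constant
# exponential growth reaching 1000x by 2031.
fraction_2027 = 1e3 ** ((2027 - 2023) / (2031 - 2023))
print(fraction_2027)  # ≈ 31.6
```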
Hi there,
Assuming 10^6 bit erasures per FLOP (as you did; which source are you using?), one only needs 8.06*10^13 kWh (= 2.9*10^(-21)*10^(35+6)/(3.6*10^6)), i.e. 2.83 (= 8.06*10^13/(2.85*10^13)) times global electricity generation in 2022, or 18.7 (= 8.06*10^13/(4.30*10^12)) times that generated in the United States.
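A small sketch reproducing the arithmetic above; the energy per bit erasure, the 10^6 erasures per FLOP, the 10^35 FLOP total, and the 2022 electricity figures are the assumptions just stated, not independently sourced here:

```python
# Energy to run 10^35 FLOP at ~2.9e-21 J per bit erasure and 1e6 erasures/FLOP,
# compared with 2022 electricity generation (values as used in the comment).
landauer_j_per_erasure = 2.9e-21
erasures_per_flop = 1e6
total_flop = 1e35

energy_kwh = landauer_j_per_erasure * erasures_per_flop * total_flop / 3.6e6  # 1 kWh = 3.6e6 J

world_2022_kwh = 2.85e13  # ≈ 28,500 TWh, global electricity generation
us_2022_kwh = 4.30e12     # ≈ 4,300 TWh, United States

print(energy_kwh)                   # ≈ 8.06e13 kWh
print(energy_kwh / world_2022_kwh)  # ≈ 2.8
print(energy_kwh / us_2022_kwh)     # ≈ 18.7
```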
Thanks!
Nice post, Luke!
"with this handy reference table:"
There is no table after this.
"He also offers a chart showing how a pure Bayesian estimator compares to other estimators:"
There is no chart after this.
Thanks for this clarifying comment, Daniel!
Great post!
"The R-square measure of correlation between two sets of data is the same as the cosine of the angle between them when presented as vectors in N-dimensional space"
Not R-square, just R:
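For reference, the standard identity behind this (for mean-centered data) is

$$\cos\theta = \frac{\sum_i x_i y_i}{\sqrt{\sum_i x_i^2}\,\sqrt{\sum_i y_i^2}} = r,$$

so the cosine of the angle equals the Pearson correlation r, and R² is the squared cosine.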
Nice post! I would be curious to know whether significant thinking has been done on this topic since your post.
Great!
Thanks for writing this!
Have you considered crossposting to the EA Forum (although the post was mentioned here)?
With a loguniform distribution, the mean moral weight is stable and roughly equal to 2.
Thanks for the post!
I was trying to use the lower and upper estimates of 5*10^-5 and 10, guessed for the moral weight of chickens relative to humans, as the 10th and 90th percentiles of a lognormal distribution. This resulted in a mean moral weight of 1000 to 2000 (the result is not stable), which seems too high, and a median of 0.02.
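Here is a sketch of that fit; the percentile placement and the lognormal form are as described just above, and the instability comes from the extreme right tail dominating the sample mean:

```python
import numpy as np
from scipy.stats import norm

# Fit a lognormal to the guessed moral weight of chickens relative to humans,
# treating 5e-5 and 10 as the 10th and 90th percentiles.
q10, q90 = 5e-5, 10
z90 = norm.ppf(0.9)  # ≈ 1.2816

mu = (np.log(q10) + np.log(q90)) / 2             # mean of the underlying normal
sigma = (np.log(q90) - np.log(q10)) / (2 * z90)  # std of the underlying normal

print(np.exp(mu))                 # median ≈ 0.02
print(np.exp(mu + sigma**2 / 2))  # analytic mean ≈ 1.9e3

# The Monte Carlo mean is dominated by rare, huge draws, so it swings widely
# between runs, which is the instability mentioned above.
samples = np.random.lognormal(mu, sigma, size=10**6)
print(samples.mean())
```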
1- Do you have any suggestions for a more reasonable distribution?
2- Do you have any tips for stabilising the results for the mean?
I think I understand the problems of taking expectations over moral weights (E(X) is not equal to 1/E(1/X)), but believe that it might still be possible to determine a reasonable distribution for the moral weight.
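For example, if X is 1 or 4 with equal probability, then E(X) = 2.5, whereas 1/E(1/X) = 1/0.625 = 1.6, so the two ways of aggregating give different answers.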
"These two equations are algebraically inconsistent". Yes, combining them results into "0 < 0", which is false.