Posts

How Likely Are Various Precursors of Existential Risk? 2024-10-28T13:27:31.620Z
Forecasting Newsletter: March 2022 2022-04-05T20:23:04.373Z
Forecasting Newsletter: April 2222 2022-04-01T07:07:24.605Z
Forecasting Newsletter: February 2022 2022-03-05T19:30:11.094Z
Forecasting Newsletter: January 2022 2022-02-03T19:22:00.438Z
Forecasting Newsletter: Looking back at 2021 2022-01-27T20:08:16.038Z
Forecasting Newsletter: December 2021 2022-01-10T19:35:40.815Z
Forecasting Newsletter: November 2021 2021-12-02T21:44:29.506Z
Latacora might be of interest to some AI Safety organizations 2021-11-25T23:57:04.881Z
Forecasting Newsletter: October 2021. 2021-11-02T14:07:23.771Z
Forecasting Newsletter: September 2021. 2021-10-01T17:06:38.771Z
Forecasting Newsletter: August 2021 2021-09-01T17:01:23.170Z
US Military Global Information Dominance Experiments 2021-09-01T13:34:39.169Z
Metaforecast update: Better search, capture functionality, more platforms. 2021-08-16T18:31:08.932Z
All Metaforecast COVID predictions 2021-08-16T18:30:36.851Z
Forecasting Newsletter: July 2021 2021-08-01T17:00:07.550Z
Forecasting Newsletter: June 2021 2021-07-01T21:35:26.537Z
Forecasting Newsletter: May 2021 2021-06-01T15:51:26.463Z
Forecasting Newsletter: April 2021 2021-05-01T16:07:22.689Z
Forecasting Newsletter: March 2021 2021-04-01T17:12:09.499Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:35.920Z
Forecasting Newsletter: February 2021 2021-03-01T21:51:27.758Z
Forecasting Prize Results 2021-02-19T19:07:09.420Z
Forecasting Newsletter: January 2021 2021-02-01T23:07:39.131Z
2020: Forecasting in Review. 2021-01-10T16:06:32.082Z
Forecasting Newsletter: December 2020 2021-01-01T16:07:39.015Z
Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) 2020-12-03T22:00:26.889Z
Forecasting Newsletter: November 2020 2020-12-01T17:00:58.898Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:12:39.009Z
Incentive Problems With Current Forecasting Competitions. 2020-11-09T16:20:06.394Z
Forecasting Newsletter: October 2020. 2020-11-01T13:09:50.542Z
Adjusting probabilities for the passage of time, using Squiggle 2020-10-23T18:55:30.860Z
A prior for technological discontinuities 2020-10-13T16:51:32.572Z
NunoSempere's Shortform 2020-10-13T16:40:05.972Z
AI race considerations in a report by the U.S. House Committee on Armed Services 2020-10-04T12:11:36.129Z
Forecasting Newsletter: September 2020. 2020-10-01T11:00:54.354Z
Forecasting Newsletter: August 2020. 2020-09-01T11:38:45.564Z
Forecasting Newsletter: July 2020. 2020-08-01T17:08:15.401Z
Forecasting Newsletter. June 2020. 2020-07-01T09:46:04.555Z
Forecasting Newsletter: May 2020. 2020-05-31T12:35:58.063Z
Forecasting Newsletter: April 2020 2020-04-30T16:41:35.849Z
What are the relative speeds of AI capabilities and AI safety? 2020-04-24T18:21:58.528Z
Some examples of technology timelines 2020-03-27T18:13:19.834Z
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z
What do you do when you find out you have inconsistent probabilities? 2018-12-31T18:13:51.455Z
The hunt of the Iuventa 2018-03-10T20:12:13.342Z

Comments

Comment by NunoSempere (Radamantis) on Understanding Shapley Values with Venn Diagrams · 2024-12-18T21:45:14.796Z · LW · GW

Shapley values are constructed such that introducing a null player doesn't change the result. You are doing something different by considering the wrong counterfactual (one where C exists but isn't part of the coalition, vs. one where it doesn't exist).
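
As a minimal sketch of that property (hypothetical two-player game, brute-force computation; the function and variable names are mine): adding a null player C, who contributes nothing to any coalition, leaves A's and B's Shapley values unchanged and gives C a value of zero.

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Brute-force Shapley values; v maps each coalition (frozenset) to its value."""
    n = len(players)
    out = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v[S | {i}] - v[S])
        out[i] = total
    return out

# Hypothetical game: A and B together produce 10; alone they produce 0.
v2 = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0, frozenset("AB"): 10}
print(shapley(["A", "B"], v2))        # {'A': 5.0, 'B': 5.0}

# Add a null player C, who adds nothing to any coalition.
v3 = {S | extra: val for S, val in v2.items() for extra in (frozenset(), frozenset("C"))}
print(shapley(["A", "B", "C"], v3))   # {'A': 5.0, 'B': 5.0, 'C': 0.0}
```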

Comment by NunoSempere (Radamantis) on Understanding Shapley Values with Venn Diagrams · 2024-12-17T06:46:16.458Z · LW · GW

Adding a person with veto power is not a neutral change.

Comment by NunoSempere (Radamantis) on Why I’m not a Bayesian · 2024-11-06T09:14:03.503Z · LW · GW

Maybe you could address these problems, but could you do so in a way that is "computationally cheap"? E.g., for forecasting on something like extinction, it is much easier to forecast on a vague outcome than to precisely define it.

Comment by NunoSempere (Radamantis) on Survival without dignity · 2024-11-06T08:50:39.760Z · LW · GW

I have a writeup on solar storm risk here that could be of interest

Comment by NunoSempere (Radamantis) on How Likely Are Various Precursors of Existential Risk? · 2024-10-28T21:13:16.635Z · LW · GW

Nice consideration; we hadn't considered non-natural asteroids here. I agree this is a consideration as humanity reaches for the stars, or the rest of the solar system.

If you've thought about it a bit more, do you have a sense of your probability over the next 100 years?

Comment by NunoSempere (Radamantis) on How Likely Are Various Precursors of Existential Risk? · 2024-10-28T19:50:15.461Z · LW · GW

To nitpick on your nitpick, in the US, 1000x safer would be 42 deaths yearly. https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year

For the whole world, it would be just above 1k: https://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate#List. But 2032 seems like an ambitious deadline for that.

In addition, it does seem against the spirit of the question to resolve positively solely because of reducing traffic deaths.
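
For reference, the arithmetic behind the figures above, with approximate baseline numbers back-derived from the comment rather than taken from the linked pages:

```python
# Approximate annual road deaths (rough figures, for illustration only).
us_deaths = 42_000
world_deaths = 1_200_000

print(us_deaths / 1000)      # 42   -> "1000x safer" in the US
print(world_deaths / 1000)   # 1200 -> "just above 1k" worldwide
```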

Comment by NunoSempere (Radamantis) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-29T23:03:31.134Z · LW · GW

To me this looks like circular reasoning: this example supports my conceptual framework because I interpret the example according to the conceptual framework.

Instead, I notice that Stockfish in particular has some salient characteristics that go against the predictions of the conceptual framework:

  • It is indeed superhuman
  • It is not the case that once Stockfish ends the game that's it. I can rewind Stockfish. I can even make one version of Stockfish play against another. I can make Stockfish play a chess variant. Stockfish doesn't annihilate my physical body when it defeats me
  • It is extremely well aligned with my values. I mostly use it to analyze games I've played against other people at my level
  • If Stockfish wants to win the game and I want an orthogonal goal, like capturing its pawns, this is very feasible

Now, does this even matter for considering whether a superintelligence would or wouldn't trade? Not that much; it's a weak consideration. But insofar as it's a consideration, does it really convince someone who doesn't already buy the frame? Not me.

Comment by NunoSempere (Radamantis) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-28T19:23:37.255Z · LW · GW

This is importantly wrong because the example is in the context of an analogy:

getting some pawns : Stockfish : Stockfish's goal of winning the game :: getting a sliver of the Sun's energy : superintelligence : the superintelligence's goals

The analogy is presented as forceful and unambiguous, but it is not. It's instead an example of a system being grossly more capable than humans in some domain, and not opposing a somewhat orthogonal goal

Comment by NunoSempere (Radamantis) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-27T15:56:16.088Z · LW · GW

Incidentally you have a typo on "pawn or too" (should be "pawn or two"), which is worrying in the context of how wrong this is.

Comment by NunoSempere (Radamantis) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-27T15:55:22.170Z · LW · GW

There is no equally simple version of Stockfish that is still supreme at winning at chess, but will easygoingly let you take a pawn or too. You can imagine a version of Stockfish which does that -- a chessplayer which, if it's sure it can win anyways, will start letting you have a pawn or two -- but it's not simpler to build. By default, Stockfish tenaciously fighting for every pawn (unless you are falling into some worse sacrificial trap), is implicit in its generic general search through chess outcomes.

The bolded part (bolded by me) is just wrong, man; here is an example of taking five pawns: https://lichess.org/ru33eAP1#35

Edit: here is one with six. https://lichess.org/SL2FnvRvA1UE

Comment by NunoSempere (Radamantis) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-27T15:51:09.675Z · LW · GW

you will not find it easy to take Stockfish's pawns

Seems importantly wrong, in that if your objective is to take a few pawns (say, three), you can easily do this. This seems important in the context of the claim that it's hard to obtain resources from an adversary that cares about things differently.

In the case of Stockfish, you can also rewind moves.

Comment by NunoSempere (Radamantis) on Ambiguity in Prediction Market Resolution is Still Harmful · 2024-09-05T02:29:37.168Z · LW · GW

I disagree with the 5% of switching to a Sundar Pichai hairs simile:

  • Prediction market prices are bounded between 0 and 1
  • Polymarket has > 1k markets, and maybe 3 to 10 ambiguous resolutions a year. It's more like 0.3% to 1%.

Comment by NunoSempere (Radamantis) on Milan W's Shortform · 2024-08-27T13:03:51.174Z · LW · GW

I'm willing to bet 2k USD on my part against a single dollar of yours that if I waterboard you, you'll want to stop before 3 minutes have passed

Interesting, where are you physically located? Also, are you thinking of the unpleasantness of the situation, or are you thinking of the physical asphyxiation component?

Comment by NunoSempere (Radamantis) on An anti-inductive sequence · 2024-08-15T20:02:11.397Z · LW · GW

You might want to download the Online Encyclopedia of Integer Sequences, e.g., as in here, and play around with it, e.g., look at the least likely completion for a given sequence & so on.
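
A rough sketch of what that could look like, assuming the bulk "stripped" dump from https://oeis.org/stripped.gz (one sequence per line, roughly "A000045 ,0,1,1,2,3,5,..."); the helper names are mine:

```python
import gzip

def load_oeis(path="stripped.gz"):
    """Parse the OEIS 'stripped' dump into {sequence id: list of terms}."""
    seqs = {}
    with gzip.open(path, "rt") as f:
        for line in f:
            if line.startswith("#"):
                continue
            name, _, terms = line.partition(" ")
            seqs[name] = [int(t) for t in terms.strip().strip(",").split(",") if t]
    return seqs

def completions(seqs, prefix):
    """Every next term the OEIS offers for a prefix; the rarer ones are the 'anti-inductive' completions."""
    prefix, n = list(prefix), len(prefix)
    return [(name, s[n]) for name, s in seqs.items() if len(s) > n and s[:n] == prefix]

# seqs = load_oeis()
# completions(seqs, [1, 2, 4, 8, 16])  # mostly 32, but also rarer continuations such as 31
```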

Comment by NunoSempere (Radamantis) on An Overview of the AI Safety Funding Situation · 2024-08-06T20:32:30.769Z · LW · GW

You're right, changed

Comment by NunoSempere (Radamantis) on The Best Software For Every Need · 2024-06-16T15:37:42.998Z · LW · GW

I ended up solving the equations either analytically (partially with the help of Phil Trammell):
https://forum.effectivealtruism.org/posts/FXPaccMDPaEZNyyre/a-model-of-patient-spending-and-movement-building
or through simulations:
https://github.com/NunoSempere/ReverseShooting
https://github.com/NunoSempere/LaborCapitalAndTheOptimalGrowthOfSocialMovements

Comment by NunoSempere (Radamantis) on An Overview of the AI Safety Funding Situation · 2024-05-24T11:34:24.503Z · LW · GW

I found this post super valuable, but the presentation confusing. Here is a table, provided as is, that I made based on this post & a few other sources:

| Source | Amount for 2024 | Note |
|---|---|---|
| Open Philanthropy | $80M | Projected from past amount |
| Foundation Model Taskforce | $20M | 100M GBP but unclear over how many years? |
| FLI | $30M | $600M donation in crypto, say you can get $300M out of it, distributed over 10 years |
| AI labs | $30M | |
| Jaan Tallinn | $20M | See [here](https://jaan.info/philanthropy/) |
| NSF | $5M | |
| LTFF (not OpenPhil) | $2M | |
| Nonlinear fund and donors | $1M | |
| Academia | | Considered separately |
| GWWC | $1M | |
| Total | $189M | Does not consider uncertainty! |

Comment by NunoSempere (Radamantis) on Now THIS is forecasting: understanding Epoch’s Direct Approach · 2024-05-04T14:27:28.346Z · LW · GW

You might also enjoy this review: https://nunosempere.com/blog/2023/04/28/expert-review-epoch-direct-approach/

Comment by NunoSempere (Radamantis) on Polymarket Covid-19 1/17/2022 · 2023-06-16T23:25:38.527Z · LW · GW

One particularity of Polymarket is that, as of the time of this market, you couldn't divide $1 into four shares and sell all of them for $1.09. If you could have, well, then this problem wouldn't have existed; but if you could have, then this would have been a 9% return.

Comment by NunoSempere (Radamantis) on Polymarket Covid-19 1/17/2022 · 2023-06-16T23:24:18.777Z · LW · GW

I don't have a link off the top of my head, but the trade would have been to sell one share of Yes for each market. You can do this by splitting $1 into a Yes and a No share, and selling the Yes. Specifically, on Polymarket you achieve this by adding and then withdrawing liquidity (for a specific type of market called an "AMM", for "automated market maker", which was the only type supported by Polymarket at the time, though it has since added an order book).

By doing this, you earn $1.09 from the sale + $3 from the three events eventually, and the whole thing costs $4, so it's a guaranteed profit. So I guess that I was making a mistake when I said that there was a 9% return in 1.5 months (it is $4.09/$4, or a 2.25% return over 1.5 months, which is much worse).
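
A back-of-the-envelope sketch of that arithmetic, using the figures from the comment above (the annualized figure at the end is my own rough extrapolation):

```python
cost = 4.00    # split $1 into a Yes and a No share in each of the four markets
sale = 1.09    # proceeds from selling the four Yes shares
payout = 3.00  # the three events that resolve No pay $1 each on their No shares

profit = sale + payout - cost
print(profit)                                 # 0.09 guaranteed
print(profit / cost)                          # 0.0225 -> ~2.25% over ~1.5 months
print((1 + profit / cost) ** (12 / 1.5) - 1)  # ~0.195 -> roughly 19.5% annualized
```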

Comment by NunoSempere (Radamantis) on AI strategy nearcasting · 2023-03-17T06:11:51.228Z · LW · GW

The framework is AI strategy nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that key events (e.g., the development of transformative AI) will happen in a world that is otherwise relatively similar to today’s.

Usage of "nearcasting" here feels pretty fake. "Nowcasting" is a thing because 538/meteorology/etc. has a track record of success in forecasting and decent feedback loops, and extrapolating those a bit seems neat. 

But as used in this case, feedback loops are poor, and it just feels like a different analytical beast. So the resemblance to "forecasting" seems a bit icky, particularly if you are going to reference "nearcasting" without explaining it in subsequent posts: <https://ea.greaterwrong.com/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding>.

I spent a bit of time thinking about a replacement term, and I came up with "scenario planning absent radical transformations analysis", or SPARTA for short. Not perfect, though.

Comment by NunoSempere (Radamantis) on There are no coherence theorems · 2023-02-21T19:00:22.479Z · LW · GW

See this comment: <https://www.lesswrong.com/posts/yCuzmCsE86BTu9PfA/there-are-no-coherence-theorems?commentId=v2mgDWqirqibHTmKb>

Comment by NunoSempere (Radamantis) on There are no coherence theorems · 2023-02-21T18:59:02.156Z · LW · GW

I am not defending the language of the OP's title, I am defending the content of the post.

Comment by NunoSempere (Radamantis) on There are no coherence theorems · 2023-02-21T18:56:56.013Z · LW · GW

You don't have strategic voting with probabilistic results. And the degree of strategic voting can also be mitigated.

Comment by NunoSempere (Radamantis) on There are no coherence theorems · 2023-02-21T02:43:12.387Z · LW · GW

Copying my second response from the EA forum:

Like, I feel like with the same type of argument that is made in the post I could write a post saying "there are no voting impossibility theorems" and then go ahead and argue that the Arrow's Impossibility Theorem assumptions are not universally proven, and then accuse everyone who ever talked about voting impossibility theorems that they are making "an error" since "those things are not real theorems". And I think everyone working on voting-adjacent impossibility theorems would be pretty justifiedly annoyed by this.

I think that there is some sense in which the character in your example would be right, since:

  • Arrow's theorem doesn't bind approval voting.
  • Generalizations of Arrow's theorem don't bind probabilistic results, e.g., each candidate is chosen with some probability corresponding to the amount of votes he gets.

Like, if you had someone saying there was "a deep core of electoral process" which means that, as they scale to important decisions, you will necessarily get "highly defective electoral processes", as illustrated in the classic example of the "dangers of the first-past-the-post system". Well, in that case it would be reasonable to wonder whether the assumptions of the theorem bind, or whether there is some system, like approval voting, which is much less shitty than the theorem provers were expecting, because the assumptions don't hold.

The analogy is imperfect, though, since approval voting is a known decent system, whereas for AI systems we don't have an example of a friendly AI.

Comment by NunoSempere (Radamantis) on There are no coherence theorems · 2023-02-21T02:42:37.590Z · LW · GW

Copying my response from the EA forum:

(if this post is right)

The post does actually seem wrong though. 

Glad that I added the caveat.

Also, the title of "there are no coherence arguments" is just straightforwardly wrong. The theorems cited are of course real theorems, they are relevant to agents acting with a certain kind of coherence, and I don't really understand the semantic argument that is happening where it's trying to say that the cited theorems aren't talking about "coherence", when like, they clearly are.

Well, part of the semantic nuance is that we don't care as much about the coherence theorems that do exist if they will fail to apply to current and future machines.

IMO completeness seems quite reasonable to me and the argument here seems very weak (and I would urge the author to create an actual concrete situation that doesn't seem very dumb in which a highly intelligence, powerful and economically useful system has non-complete preferences).

Here are some scenarios:

  • Our highly intelligent system notices that having complete preferences over all trades would be too computationally expensive, and thus is willing to accept some, even a large, degree of incompleteness.
  • The highly intelligent system learns to mimic the values of humans, which end up having non-complete preferences, and the agent mimics that incompleteness.
  • You train a powerful system to do some stuff, but also to detect when it is out of distribution and in that case do nothing. Assuming you can do that, its preferences are incomplete, since when offered tradeoffs it always takes the default option when out of distribution.

The whole section at the end feels very confused to me. The author asserts that there is "an error" where people assert that "there are coherence theorems", but man, that just seems like such a weird thing to argue for. Of course there are theorems that are relevant to the question of agent coherence, all of these seem really quite relevant. They might not prove the things in-practice, as many theorems tend to do. 

Mmh, then it would be good to differentiate between:

  • There are coherence theorems that talk about some agents with some properties
  • There are coherence theorems that prove that AI systems as will soon exist in the future will be optimizing utility functions

You could also say a third thing, which would be: there are coherence theorems that strongly hint that AI systems as will soon exist in the future will be optimizing utility functions. They don't prove it, but they make it highly probable because of such and such. In which case having more detail on the such and such would deflate most of the arguments in this post, for me.

For instance:

‘Coherence arguments’ mean that if you don’t maximize ‘expected utility’ (EU)—that is, if you don’t make every choice in accordance with what gets the highest average score, given consistent preferability scores that you assign to all outcomes—then you will make strictly worse choices by your own lights than if you followed some alternate EU-maximizing strategy (at least in some situations, though they may not arise). For instance, you’ll be vulnerable to ‘money-pumping’—being predictably parted from your money for nothing.

This is just false, because it is not taking into account the cost of doing expected value maximization, since giving consistent preferability scores is just very expensive and hard to do reliably. Like, when I poll people for their preferability scores, they give inconsistent estimates. They could instead be doing some expected utility maximization, but the evaluation steps are so expensive that I now basically don't bother with more hardcore approximations of expected value for individuals, only for large projects and organizations. And even then, I'm still taking shortcuts and monkey-patches, and not doing pure expected value maximization.

“This post gets somewhat technical and mathematical, but the point can be summarised as:

  • You are vulnerable to money pumps only to the extent to which you deviate from the von Neumann-Morgenstern axioms of expected utility.

In other words, using alternate decision theories is bad for your wealth.”

The "in other words" doesn't follow, since EV maximization can be more expensive than the shortcuts.

Then there are other parts that give the strong impression that this expected value maximization will be binding in practice:

“Rephrasing again: we have a wide variety of mathematical theorems all spotlighting, from different angles, the fact that a plan lacking in clumsiness, is possessing of coherence.”

 

“The overall message here is that there is a set of qualitative behaviors and as long you do not engage in these qualitatively destructive behaviors, you will be behaving as if you have a utility function.”

 

  “The view that utility maximizers are inevitable is supported by a number of coherence theories developed early on in game theory which show that any agent without a consistent utility function is exploitable in some sense.”

 

Here are some words I wrote that don't quite sit right but which I thought I'd still share: Like, part of the MIRI beat as I understand it is to hold that there is some shining guiding light, some deep nature of intelligence that models will instantiate and make them highly dangerous. But it's not clear to me whether you will in fact get models that instantiate that shining light. Like, you could imagine an alternative view of intelligence where it's just useful monkey patches all the way down, and as we train more powerful models, they get more of the monkey patches, but without the fundamentals. The view in between would be that there are some monkey patches, and there are some deep generalizations, but then I want to know whether the coherence systems will bind to those kinds of agents.

No need to respond/deeply engage, but I'd appreciate if you let me know if the above comments were too nitpicky.

Comment by NunoSempere (Radamantis) on A proposed method for forecasting transformative AI · 2023-02-11T09:05:19.862Z · LW · GW

I am also curious about the extent to which you are taking the Hoffman scaling laws as an assumption, rather than as something you can assign uncertainty over.

Comment by NunoSempere (Radamantis) on A proposed method for forecasting transformative AI · 2023-02-11T08:59:16.860Z · LW · GW

I thought this was great, cheers. 

Here:

Next, we estimate a sufficient horizon length, which I'll call the k-horizon, over which we expect the most complex reasoning to emerge during the transformative task. For the case of scientific research, we might reasonably take the k-horizon to roughly be the length of an average scientific paper, which is likely between 3,000 and 10,000 words. However, we can also explicitly model our uncertainty about the right choice for this parameter.

It's unclear whether the final paper's length would be the needed horizon length.

For analogous reasoning, consider a model trained to produce equations which faithfully describe reality. These equations tend to be quite short. But I imagine that the horizon length needed to produce them is larger, because you have to keep many things in mind when doing so. Unclear if I'm anthropomorphizing here.

Comment by NunoSempere (Radamantis) on Incentive Problems With Current Forecasting Competitions. · 2023-01-21T15:51:55.031Z · LW · GW

But I think it is >30% likely you can compensate for past over or under estimations.

I'd bet against that at 1:5, i.e., against the proposition that the optimal forecast is subject to your previous history

Comment by NunoSempere (Radamantis) on K-types vs T-types — what priors do you have? · 2022-11-08T10:49:16.974Z · LW · GW

This is true in the abstract, but the physical world seems to be such that difficult computations are done for free in the physical substrate (e.g., when you throw a ball, this seems to happen instantaneously, rather than having to wait for a lengthy derivation of the path it traces). This suggests a correct bias in favor of low-complexity theories regardless of their computational cost, at least in physics.

Comment by NunoSempere (Radamantis) on No, human brains are not (much) more efficient than computers · 2022-09-06T14:31:56.452Z · LW · GW

Neat. I have some uncertainty about the evolutionary estimates you are relying on, per here. But neat.

Comment by NunoSempere (Radamantis) on The longest training run · 2022-08-18T08:46:18.113Z · LW · GW

Thanks Tamay!

Comment by NunoSempere (Radamantis) on Interpretability Tools Are an Attack Channel · 2022-08-17T22:50:11.808Z · LW · GW

Seems like this assumes an actual superintelligence, rather than a near-term, scarily capable successor of current ML systems.

Comment by NunoSempere (Radamantis) on The longest training run · 2022-08-17T22:48:29.859Z · LW · GW

Why publish this publicly? Seems like it would improve optimality of training runs?

Comment by NunoSempere (Radamantis) on Adjusting probabilities for the passage of time, using Squiggle · 2022-07-23T04:46:34.146Z · LW · GW

Now https://www.squiggle-language.com/playground

Comment by NunoSempere (Radamantis) on The Best Software For Every Need · 2022-07-18T14:20:18.731Z · LW · GW

Software: archivenow

Need: Archiving websites to the internet archive.

Other programs I've tried: The archive.org website, spn, various scripts, various extensions.

archivenow is trusty enough for my use case, and it feels like it fails less often than other alternatives. It was also easy enough to wrap into a bash script and process markdown files. Spn is newer and has parallelism, but I'm not as familiar with it and subjectively it feels like it fails a bit more.

See also: Gwern's setup.

Comment by NunoSempere (Radamantis) on How "should" counterfactual prediction markets work? · 2022-07-11T04:15:08.073Z · LW · GW

Have prediction markets which pay $100 per share, but only pay out 1% of the time, chosen randomly. If the 1% case that happens, then also implement the policy under consideration.
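
A minimal sketch of how that scheme cashes out, under the assumption (mine, not stated above) that shares simply pay nothing in the 99% of cases where the policy is not implemented:

```python
import random

def settle_one_share(p_outcome_given_policy, payout=100.0, audit_rate=0.01):
    """Payout of one Yes share under the 'pay $100, but only 1% of the time' scheme."""
    if random.random() < audit_rate:                  # the 1% case: implement the policy
        happened = random.random() < p_outcome_given_policy
        return payout if happened else 0.0
    return 0.0                                        # the other 99%: no payout

# Expected value per share = audit_rate * payout * P(outcome | policy) = P(outcome | policy),
# so prices end up back on the familiar $0-$1 probability scale.
runs = [settle_one_share(0.7) for _ in range(200_000)]
print(sum(runs) / len(runs))   # ~0.7
```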

Comment by Radamantis on [deleted post] 2022-07-11T04:14:44.823Z

Have prediction markets which pay $100 per share, but only pay out 1% of the time, chosen randomly. If the 1% case that happens, then also implement the policy under consideration.

Comment by Radamantis on [deleted post] 2022-07-11T00:33:10.467Z

The issue is that probabilities for something that will either happen or not don't really make sense in a literal way

 

This is just wrong/frequentist. Search for the "Bayesian" view of probability.

Comment by NunoSempere (Radamantis) on Forecasts are not enough · 2022-07-11T00:28:02.963Z · LW · GW

I thought this post was great; thanks for writing it.

Comment by NunoSempere (Radamantis) on It’s Probably Not Lithium · 2022-07-01T10:07:36.717Z · LW · GW

Will SMTM answer NCM's post criticizing their Lithium theory? <https://manifold.markets/NuñoSempere/will-smtm-answer-ncms-post-criticiz>

Comment by NunoSempere (Radamantis) on ETH is probably undervalued right now · 2022-06-19T03:23:57.125Z · LW · GW

https://metaforecast.org/?query=ETH+merge -> https://polymarket.com/market-group/ethereum-merge-pos -> 59% by October, 87% by November.

Comment by NunoSempere (Radamantis) on A Litany Missing from the Canon · 2022-06-17T03:45:38.542Z · LW · GW

The Litany of Might

 

I strive to take whatever steps may help me best to reach my goals,

I strive to be the very best at what I strive

 

There is no glory in bygone hopes,

There is no shame in aiming for the win,

there is no choice besides my very best,

to play my top moves and disregard the rest

Comment by NunoSempere (Radamantis) on Moses and the Class Struggle · 2022-06-16T01:13:13.531Z · LW · GW

That as well.

Comment by NunoSempere (Radamantis) on Moses and the Class Struggle · 2022-06-15T22:13:22.143Z · LW · GW

I was assigning less than 3% probability to ~plagiarism being the case, mostly based on Isusr not mentioning that at all in the original post + people seeing similarities where there are none. But seems that I was wrong. 

Comment by NunoSempere (Radamantis) on Open & Welcome Thread - May 2022 · 2022-05-11T20:16:05.134Z · LW · GW

Curious if you know where those people come from?

Sure, see here: https://imgur.com/a/pMR7Qw4

I'm not sure to what extent there's a "forecasting scene", or who is part of it. 

There is a forecasting scene, made out of hobbyist forecasters and more hardcore prediction market players, and a bunch of researchers. The best prediction market people tend to have fairly sharp models of the world, particularly around elections. They also have a pretty high willingness to bet. 

Comment by NunoSempere (Radamantis) on Open & Welcome Thread - May 2022 · 2022-05-11T18:55:53.098Z · LW · GW

I've become a bit discouraged by the lack of positive reception for my forecasting newsletter on LessWrong, where I've been publishing it since April 2020. For example, I thought that Forecasting Newsletter: Looking back at 2021 was excellent. It was very favorably reviewed by Scott Alexander here. I poured a bunch of myself into that newsletter. It got 18 karma.

I haven't bothered crossposting it to LW this month, but it continues on Substack and on the EA Forum.

Comment by Radamantis on [deleted post] 2022-04-30T19:41:57.847Z

This was hilarious, very fun to read.

Comment by NunoSempere (Radamantis) on A Quick Guide to Confronting Doom · 2022-04-16T18:00:57.400Z · LW · GW

Whoops, changed

Comment by NunoSempere (Radamantis) on A Quick Guide to Confronting Doom · 2022-04-14T18:05:53.133Z · LW · GW

Odds are an alternative way of presenting probabilities: 50% corresponds to 1:1, 66.66..% corresponds to 1:2, 90% corresponds to 1:9, etc. 33.33..% corresponds to 2:1 odds, or, with the first number as a 1, 1:0.5 odds.

Log odds, or bits, are the logarithm of probabilities expressed as 1:x odds. In some cases, they can be a more natural way of thinking about probabilities (see, e.g., here).
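
A small sketch of those conversions, following the same convention as above (odds written as 1:x with x = p/(1-p), and bits as the base-2 logarithm of x); the function names are mine:

```python
import math

def to_odds(p):
    """Probability -> the x in '1:x' odds."""
    return p / (1 - p)

def to_bits(p):
    """Probability -> log odds in bits (log2 of the 1:x odds)."""
    return math.log2(p / (1 - p))

for p in (0.5, 2 / 3, 0.9, 1 / 3):
    print(f"p = {p:.4f}  ->  1:{to_odds(p):.2f} odds, {to_bits(p):+.2f} bits")
# 0.5 -> 1:1.00, +0.00 bits;  0.6667 -> 1:2.00, +1.00 bits
# 0.9 -> 1:9.00, +3.17 bits;  0.3333 -> 1:0.50, -1.00 bits
```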