What do they have against AI? Seems like the impact on regular people has been pretty minimal. Also, if GPT4 level technology was allowed to fully mature and diffuse to a wide audience without increasing in base capability, it seems like the impact on everyone would be hugely beneficial
In an impure sample you would see high residual resistance below Tc
Don't the authors claim to have measured 0 resistivity (modulo measurement noise)?
In the MIRI dialogues from 2021/2022 I thought you said you would update to a 40% probability of AGI by 2040 if AI got an IMO gold medal by 2025? Did I misunderstand or have you shifted your thinking (if so, how?)
What do you think are the strongest arguments in that list, and why are they weaker than a vague "oh maybe we'll figure it out"?
It seems like something has to be going wrong if the model output has higher odds that TAI is already here (~12%) than TAI being developed between now and 2027 (~11%)? Relatedly, I'm confused by the disclaimer that "we are not updating on the fact that TAI has obviously not yet arrived" -- shouldn't that fact be baked into the distributions for each parameter (particularly the number of FLOPs to reach TAI)?
Well... Eliezer does think we're doomed, so this doesn't necessarily contradict his worldview
Minor curiosity: What was the context behind Asimov predicting in 1990 that permanent space cities would be built within 10 years? It seems like a much wilder leap than any of his other predictions.
Would be very curious to hear thoughts from the people that voted "disagree" on this post
Maybe you could measure how effectively people pass e.g. a multiple choice version of an Intellectual Turing Test (on how well they can emulate the viewpoint of people concerned by AI safety) after hearing the proposed explanations.
[Edit: To be explicit, this would help further John's goals (as I understand them) because it ideally tests whether the AI safety viewpoint is being communicated in such a way that people can understand and operate the underlying mental models. This is better than testing how persuasive the arguments are because it's a) more in line with general principles of epistemic virtue and b) is more likely to persuade people iff the specific mental models underlying AI safety concern are correct.
One potential issue would be people bouncing off the arguments early and never getting around to building their own mental models, so maybe you could test for succinct/high-level arguments that successfully persuade target audiences to take a deeper dive into the specifics? That seems like a much less concerning persuasion target to optimize, since the worst case is people being wrongly persuaded to "waste" time thinking about the same stuff the LW community has been spending a ton of time thinking about for the last ~20 years]
I strongly prefer the "dying with dignity" mentality for 3 basic reasons:
- as other people have mentioned, "playing to your outs" is too easy to misinterpret as conditioning on comfortable improbabilities no matter how much you try to draw the distinctions
- relatedly, focusing on "playing to your outs" (especially if you do so for emotional reasons) may make it harder to stay grounded in accurate models of reality (that may mostly output "we will die soon")
- operating under the mindset that death is likely when AGI is still some ways around the corner and easy to ignore seems like it ought to make it easier to stay emotionally resilient and ready to exploit miracle opportunities if/when AGI is looming and impossible to ignore
Of these, the 3rd feels the most important to me, partly because I've seen it discussed least. It seems like if Eliezer's basic model is right, a significant portion of the good outcomes require some kind of miracle occurring at crunch time, which will presumably be easier to obtain if key players are emotionally prepared and not suddenly freaking out for the first time (on an emotional/subconscious level). I know basically nothing about psychology, but isn't it a bad sign if you retreat to "oh death with dignity is unmotivating, let's just focus on our outs" when AGI is less salient?
Wait why are your predictions for Brazil so far from the market? As of right now, there are 180,000 shares of Bolsonaro on the orderbook under 50c on FTX (avg price of 44c if you buy them all).
Yeah it's definitely against poly's terms of service but not against US law (otherwise they wouldn't be complying with the prohibition on offering their services to US customers)
FWIW it is totally legal for Americans to trade on polymarket via a VPN or similar; it's just not legal for polymarket itself to offer services to people with US IP addresses
Is there currently a supply shortage of vaccines?
Yep, I wanted to experiment with a central example of a comment that should be in the "downvote/agree" quadrant, since that seemed like the least likely to occur naturally. It's nice to see the voting system is working as intended.
Yudkowsky is so awesome!!
I haven't done much research on this, but from a naive perspective, spending 4 billion dollars to move up vaccine access by a few months sounds incredibly unlikely to be a good idea? Is the idea that it is more effective than standard global health interventions in terms of QALYs or a similar metric, or that there's some other benefit that is incommensurable with other global health interventions? (This feels like asking the wrong question but maybe it will at least help me understand your perspective)
Wait, how do you get to 17%-25% chance of a crisis situation if there's only a 2.5% chance of omicron causing severe disease in vaccinated/previously infected people? Isn't that the vast majority of people in the US?
My uninformed impression is that an "adjuvant" is just something that stimulates increased immune response, which has historically been an additional chemical (such as an aluminum salt) added to vaccines. The mRNA vaccines do not contain these chemicals, but some people confusingly refer to the lipid nanoparticles that surround and protect the mRNA as adjuvants (because they also help increase immune response). I haven't seen any evidence that these lipid nanoparticles are the kind of adjuvants that might mitigate OAS, but that doesn't mean much because I'm a total nonexpert.
hopefully fixed?
Detonation -- an entertaining book that tries to flesh out a fast takeoff scenario and explicitly cites Bostrom and Yudkowsky. However, it also makes some extremely dubious choices; for example, the protagonist is a Marine hired to fight against the unfriendly AI, which doesn't seem like a very effective AI alignment strategy.
Hopefully it's not too late to try to keep the focus on the fun puzzles, etc! There really does seem to be an alarming amount of craziness floating around LW, along with the constant weird attempts to explicitly model things that we evolved to understand instinctively (eg most aspects of social interaction). Reading that stuff slightly negatively affected my mental health despite thinking it was mostly silly -- to the extent it's taken seriously it seems like it could have more substantial negative effects.
If anyone wants to get money onto Polymarket (real-money prediction market with no limits, no fees, and a wide variety of markets), I can facilitate that for free. Send me cash or crypto and I will send money directly to your poly account. As a heavy polymarket user I'm somewhat sad more LW people aren't taking the opportunity to bet on their beliefs, so I'd like to make it as easy as possible for anyone interested.
What prediction markets are liquid enough for a fund to make sense? (unless you were just referring to the crypto trading part?)
Risk neutral pricing is always a danger when trying to make inferences about real world probabilities based on market pricing, but it's usually a negligible one because participants in current prediction markets are generally speculators with no built-in exposure to the underlying asset, or ability to hedge against other markets.
On the other hand, implied probabilities from options pricing can differ significantly from real world probability, because any participant in the options market can hedge their position against the underlying asset.
"In all the examples we're talking about, those risk premiums are tiny relative to the numbers involved so they don't make a significant difference to how we should be calculating the "market implied" odds." What evidence do you have that this is true? Your post is taking risk neutral probabilities from the market + your own opinion that risk neutral is similar to real world, then presenting that as the "market probability", which is very misleading.
Edit: Maybe a better framing is that in order for option probabilities to give us a ~real world pdf of asset price at a given time, the asset needs to be approximately a martingale from now to the time in question. Many people would strongly disagree that BTC/ETH are even approximately a martingale on this time scale (they think there's large positive drift). You are making a strong claim that is contrary to the view of many or most of the top crypto traders in the market, and yet you don't make this clear but instead claim it's a "market probability", with the implication that people should defer to it unless they have strong domain knowledge.
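To make the drift point concrete, here's a minimal sketch (all numbers hypothetical, and the lognormal model is a simplification) of how an option-implied (risk-neutral) probability diverges from the real-world probability once you assume nonzero drift:

```python
import math
from statistics import NormalDist

def prob_above(spot, strike, drift, vol, t):
    # P(S_T > K) under geometric Brownian motion with the given
    # annualized drift and volatility (lognormal terminal price).
    d = (math.log(spot / strike) + (drift - 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    return NormalDist().cdf(d)

# Hypothetical numbers: spot 50k, strike 100k, 80% vol, 1 year out.
risk_neutral = prob_above(50_000, 100_000, 0.0, 0.8, 1.0)  # zero drift
real_world = prob_above(50_000, 100_000, 0.5, 0.8, 1.0)    # assumed 50% drift
# real_world > risk_neutral: positive drift raises the probability
# of finishing above the strike relative to the option-implied number.
```

If market participants believe in large positive drift, the "market probability" read off options will understate their actual credence in higher prices.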
I'm happy to; no commission needed. If anyone else wants to get money from fiat into polymarket easily with no fee, just let me know
https://polymarket.com/market/will-benjamin-netanyahu-remain-prime-minister-of-israel-through-june-30-2021 If you want to, you can in fact bet that Netanyahu will be PM on June 30th at 30c
This isn't really a meaningful explanation for why the risk neutral vs real world distinction is meaningless? To me "the credence I have that something happens" is actually a meaningful, important number that is by definition different from the risk neutral price. You can argue that all market probabilities may deviate from real-world probabilities in some way, but that doesn't make real-world probability meaningless!
https://www.betonline.ag/sportsbook/futures-and-props/politics-futures has a market for "California Governor on 12/31/2021", which should function as a great proxy for the chances of Newsom being recalled. They're quoting .0625@? for the chance of Newsom not being governor on 12/31 (they accept bets on Newsom remaining governor, but not on Newsom being removed as governor, thus the "?")
You think there's only an 80% chance the olympics happen? This is a bit of a tangent but I'd love to hear why, since I have 100k+ shares of OLY2021 on FTX and haven't heard a convincing argument for the odds being below 90%.
re options, presumably Zvi is using the real world measure, which you can't really infer from options prices without making a lot of dubious assumptions. Can you elaborate on how/why Zvi is "off the market forecast"?
Cumulative cases are still much lower, right? I assumed that "case count" meant cumulative cases, but it sounds like you're interpreting it as daily cases, which would explain our difference. Given that Scott only gives 50% chance it seems much more likely that he meant cumulative cases too imho
Can you elaborate on why India's cases being higher than the US is an instabuy to 80%? I haven't given it any thought but it seems like that would require a pretty big increase in cases there?
Because mr co2 guy was clearly making a -EV bet that happened to pay off this time :)
FWIW FTX allows you to bet on its prediction markets on margin with a tokenized version of the S&P 500 as collateral, which accomplishes exactly what you want to accomplish here
markets that easily allow you to make tens of thousands of dollars with nearly no risk.
Back during November-January, FTX had a contract called TRUMPFEB that gave ~risk free ~15% returns in 2 months, up to millions of dollars (by betting against Trump to be president in February). Right now the FTX OLY2021 market comes pretty darn close -- you can bet hundreds of thousands of dollars on the Olympics happening at 76c. There is obviously the risk of the Olympics not happening, but I haven't seen a good case for that risk being under 10%, making this a fantastic trade in expectation.
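A quick expected-value sketch of the OLY2021 numbers (the 76c price is from the market; the 90% probability is my assumption for illustration):

```python
def expected_return(price, p_yes):
    # Expected profit per dollar spent buying a binary share at
    # `price` that pays $1 if the event resolves YES (and $0 otherwise).
    return (p_yes - price) / price

# OLY2021 at 76c, assuming a 90% chance the Olympics happen:
expected_return(0.76, 0.90)  # ~0.18, i.e. roughly +18% in expectation
```

Of course, unlike TRUMPFEB this isn't risk free: 10% of the time you lose the full 76c per share.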
Just commenting to say this is a great post and I'm surprised it hasn't gotten more engagement (maybe it's so good there's nothing else to say)
The fees are always 2% of the transaction value; i.e. numShares*avgPrice. The trick I described lets you substitute (1-avgPrice) when that would be cheaper. I should've been much clearer about this initially; I forgot that most people here probably don't know the polymarket fee structure.
You still pay fees, they're just lower. Most sharps on polymarket do indeed do this
I don't know about Omen, but on Polymarket you can mostly avoid this issue by minting a complete set of shares and selling the cheaper side. In your example this would only cost you $0.0004 in fees to acquire shares at $0.98. The easiest way I know of to mint complete shares is to add and then immediately remove liquidity
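A minimal sketch of the fee arithmetic behind the minting trick (fee structure as described above; function names are mine):

```python
def direct_fee(num_shares, price):
    # Polymarket fee: 2% of transaction value (numShares * avgPrice).
    return 0.02 * num_shares * price

def mint_and_sell_fee(num_shares, price):
    # Mint complete sets ($1 each -> one YES + one NO share), then
    # sell the cheaper side at (1 - price); the 2% fee applies only
    # to that cheaper sale, not to the minting itself.
    return 0.02 * num_shares * (1 - price)

# Acquiring 1 share at $0.98:
direct_fee(1, 0.98)         # $0.0196
mint_and_sell_fee(1, 0.98)  # $0.0004
```

So for shares priced above $0.50, minting and selling the other side is always the cheaper route.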
But he's been a professional politics gambler for years now, which seems like much stronger evidence for evaluating his calibration than the results of one election cycle?
But that line has an infinitesimal chance of intersecting with any point as significant as the Great Pyramids? Scott's argument about Uganda and Tanzania seems wrong for the same reasons
I'm probably too late here but if anyone's wondering, PI will send you a 1099-MISC with $8.57 of income, and that's what you have to pay taxes on. (well, they only send the form if you make over $600, but that doesn't change your tax liability).
The $85 you deposit definitely does not count as income, regardless of whether you itemize deductions. Regarding the discussion below, the IRS does not treat PI winnings as gambling income, but rather as 1099-MISC (other) income.
(source: have paid taxes on PredictIt winnings)
Shor is very open about the fact that his views are to the left of 90%+ of the electorate, and that his goal is to maximize the power of people that share his views despite their general unpopularity.
For anyone interested, the keyword to read about things like this in the economics literature is "performativity"
FWIW I'm 99% sure RJ made money from this election, and I'm 50% sure he made over 90k. Why would you update against someone who has consistently made enormous amounts of money betting on his beliefs?
Edit: looks like I was a bit overoptimistic about his profits but he supposedly did make a decent amount
https://twitter.com/rainbow_jeremy_/status/1336815220061327363
(keep in mind he lies all the time so this is only noisy evidence)
reasonable expectation of negative outcomes if you lost your bankroll
One of the necessary conditions for the Kelly Criterion to be optimal is that losing your bankroll is infinitely bad, so a reasonable expectation of negative outcomes isn't enough to deviate from Kelly.
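To spell out where that condition comes from, here's the standard Kelly formula and the log-utility objective it maximizes; the log(0) = -infinity at total loss is exactly what encodes "losing your bankroll is infinitely bad":

```python
import math

def kelly_fraction(p, b):
    # Optimal fraction of bankroll to bet when you win b-to-1 with
    # probability p and lose the stake otherwise.
    return p - (1 - p) / b

def expected_log_growth(f, p, b):
    # Kelly maximizes E[log wealth]; note log(1 - f) -> -inf as f -> 1,
    # i.e. total loss is treated as infinitely bad.
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

f_star = kelly_fraction(0.6, 1)  # 0.2 for an even-money bet at 60%
# Expected log growth is strictly lower at any other fraction:
expected_log_growth(f_star, 0.6, 1)        # the maximum
expected_log_growth(f_star + 0.1, 0.6, 1)  # smaller
```

If your utility doesn't actually blow up at zero bankroll (say you have outside income), the optimal bet can exceed the Kelly fraction.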
Isn't the point of "bet or update" that you should be either updating on your counterparty's credences or taking a bet that your counterparty thinks is +EV? Here, the player is updating upon observing the host point to the door, not on the bet itself.
After the player has updated on the host pointing to the door, you can require the player to take the offered bet or update as normal. Assuming the host is offering the bet as a function of his credences*, the player should update from P(gold) = 1/3 to P(gold) ~= 0, because the player knows that the host knows where the gold is.
*as opposed to e.g. offering the bet to provide entertainment to the audience. If the host doesn't vary the terms of the bet based on his private knowledge of where the gold is, then the player should bet rather than update, because the bet offer doesn't transmit any info about the host's credences or the actual state of the world.
Can you explain how the leverage system works on FTX for TRUMPFEB? The calculator on the site seems to produce bizarre results.
You've actually met the first mugger so he's more likely to exist than a hypothetical counter-mugger you've never seen any evidence for