Posts
Comments
Links to Dan Murfet's AXRP interview:
Frankfurt-style counterexamples for definitions of optimization
In "Bottle Caps Aren't Optimizers", I wrote about a type of definition of optimization that says system S is optimizing for goal G iff G has a higher value than it would if S didn't exist or were randomly scrambled. I argued against these definitions by providing examples of systems that satisfy the criterion but are not optimizers. But today, I realized that I could repurpose Frankfurt cases to get examples of optimizers that don't satisfy this criterion.
A Frankfurt case is a thought experiment designed to disprove the following intuitive principle: "a person is morally responsible for what she has done only if she could have done otherwise." Here's the basic idea: suppose Alice is considering whether or not to kill Bob. Upon consideration, she decides to do so, takes out her gun, and shoots Bob. But unbeknownst to her, a neuroscientist had implanted a chip in her brain that would have forced her to shoot Bob if she had decided not to. That said, the chip didn't activate, because she did decide to shoot Bob. The idea is that she's morally responsible, even tho she couldn't have done otherwise.
Anyway, let's do this with optimizers. Suppose I'm playing Go, thinking about how to win - imagining what would happen if I played various moves, and playing moves that make me more likely to win. Further suppose I'm pretty good at it. You might want to say I'm optimizing my moves to win the game. But suppose that, unbeknownst to me, behind my shoulder is famed Go master Shin Jinseo. If I start playing really bad moves, or suddenly die or vanish etc, he will play my moves, and do an even better job at winning. Now, if you remove me or randomly rearrange my parts, my side is actually more likely to win the game. But that doesn't mean I'm optimizing to lose the game! So this is another way such definitions of optimizers are wrong.
That said, other definitions treat this counter-example well. E.g. I think the one given in "The ground of optimization" says that I'm optimizing to win the game (maybe only if I'm playing a weaker opponent).
Update: there's now a YouTube link
I've added a link to listen on Apple Podcasts.
Sorry - YouTube's taking an abnormally long time to process the video.
Is there going to be some sort of slack or discord for attendees?
What are the two other mechanisms of action?
In my post, I didn't require the distribution over meanings of words to be uniform. It could be any distribution you wanted - it just resulted in the prior ratio of "which utterance is true" being 1:1.
Is this just the thing where evidence is theory-laden? Like, for example, how the evidentiary value of the WHO report on the question of COVID origins depends on how likely one thinks it is that people would effectively cover up a lab leak?
To be clear, this is an equivalent way of looking at normal prior-ful inference, and doesn't actually solve any practical problem you might have. I mostly see it as a demonstration of how you can shove everything into stuff that gets expressed as likelihood functions.
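A minimal sketch of the equivalence (my own illustration, with made-up numbers and hypothesis names): updating with an explicit prior gives the same posterior as starting from a flat prior whose old prior has been shoved into the likelihood function.

```python
# Hypothetical two-hypothesis example (numbers are made up): Bayesian
# updating with an explicit prior vs. a flat prior where the original
# prior has been absorbed into the likelihood.
prior = {"H1": 0.8, "H2": 0.2}
likelihood = {"H1": 0.1, "H2": 0.3}

def normalize(d):
    z = sum(d.values())
    return {h: p / z for h, p in d.items()}

# Standard Bayes: posterior is proportional to prior x likelihood.
post = normalize({h: prior[h] * likelihood[h] for h in prior})

# "Prior-free" version: flat prior, with the old prior folded into
# the likelihood. The posterior comes out identical.
folded_likelihood = {h: prior[h] * likelihood[h] for h in prior}
flat_prior = {h: 1 / len(prior) for h in prior}
post_flat = normalize({h: flat_prior[h] * folded_likelihood[h] for h in prior})

assert all(abs(post[h] - post_flat[h]) < 1e-12 for h in prior)
```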
Why wouldn't this construction work over a continuous space?
Thanks for finding this! Will link it in the transcript.
oops, thanks for the reminder
Sorry, it will be a bit before the video uploads. I'll hide the link until then.
Proposal: merge with the separate tag "AI Control"
How would you rate the Book of Mormon as a book? What's your favourite part?
I recently heard of the book How to leave the Mormon church by Alyssa Grenfell, which might be good. Based on an interview with the author, it seemed like it was focussed on nuts-and-bolts stuff (e.g. "practically how do you explore alcohol in a way that isn't dangerous") and explicitly avoiding a permanent state of having an "ex-mormon" identity, which strikes me as healthy (altho I think some doubt is warranted on how good the advice is, given that the author's social media presence is primarily focussed on being ex-mormon). The book is associated with a website.
NB: I have a casual interest in high-demand religions, but have never been a part of one (with the arguable exception of the rationality/EA community).
My guess is this won't work in all cases, because norm enforcement is usually yes/no, and needs to be judged by people with little information. They can't handle "you can do any 2 of these 5 things, but no more" or "you can do this but only if you implement it really skillfully". So either everyone is allowed to impose 80 hour weeks, or no one can work 80 hour weeks, and I don't like either of those options.
I think this might be wrong - for example, my understanding is that there are some kinds of jobs where it's considered normal for people to work 80-hour weeks, and other kinds where it isn't. Maybe the issue is that the "kind of job" categories that norms can easily operate on let you pick out things like "finance" but not "jobs that have already made one costly vulnerability bid"?
Katja responds on substack:
I'm calling people who know where they are (i.e. are not confused) not in simulations, for the sake of argument. But this shouldn't matter, except for understanding each other.
It sounds like you are saying that ~100x more people live in confused simulations than base reality, but I'm questioning that. The resources to run a brain are about the same whether it's a 'simulation' or a mind in touch with the real world. Why would future civilization spend radically more resources on simulations than on minds in the world? (Or if the non-confused simulations are also relevantly minds in the world, then there are a lot more of them than the confused simulations, so we are back to quite low probability of being mistaken.)
(I plan on continuing the conversation there, not here)
I don't think this argument quite works? Like, suppose each base civilization simulates 100,000 civilizations. 100 are confused and think they're base civilizations, and the rest are non-confused and know they're simulations being run by a base civilization. In this world, most civilizations are right about their status, but most civilizations who think they're base civilizations are wrong.
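The counting above can be checked directly; here's a quick sketch using the hypothetical numbers from my comment (100,000 simulations per base civilization, 100 of them confused).

```python
# Counting check: each base civilization runs 100,000 simulations;
# 100 of those are "confused" (think they're base civilizations), and
# the rest correctly know they're simulations.
n_sims = 100_000
n_confused = 100
n_aware = n_sims - n_confused

total = 1 + n_sims            # the base civilization plus its simulations
right = 1 + n_aware           # the base civ and the aware sims are correct
think_base = 1 + n_confused   # civilizations that believe they're base

frac_right = right / total                    # ~0.999: most civs are right
frac_wrong_given_think_base = n_confused / think_base  # ~0.99: but most who
                                              # think they're base are wrong
```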
Hmm, I think the problem is that the equals sign in your URL is being percent-encoded (turned into its ASCII code with a % sign) rather than being treated as a raw equals sign. Weird.
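For what it's worth, here's a quick sketch of what I suspect happened, using Python's standard library (the `foo=bar` string is just an illustration, not the actual URL):

```python
# "=" has ASCII code 0x3D, so percent-encoding turns it into "%3D".
from urllib.parse import quote, unquote

encoded = quote("=", safe="")
assert encoded == "%3D"

# Decoding recovers the raw equals sign, which is what the URL needed.
assert unquote("foo%3Dbar") == "foo=bar"
```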
For one particularly legible example, see this comment by janus.
Link should presumably be to this comment.
Oh - it could be that my peer score is artificially high due to using Metaculus back when there were fewer peers who were good at forecasting.
FWIW: in the debate, Rootclaim don't really push the line that alleged evidence for zoonosis is a Chinese cover-up, but mostly take reports as-is, and accept that one of the first outbreaks was at the Huanan Seafood Market. I think in some cases they allege that some cases weren't tracked or were suppressed, and maybe they say that some evidence is faked in passing, but it wasn't core to their argument.
I think what is missing here is that this debate has been cited repeatedly in rationalist spaces, by people who were already quite engaged with the topic, familiar with the evidence, and in possession of carefully-formed views, as having been extremely valuable and informative, and having shifted their position significantly.
I'm not sure who you're referring to and can't think of examples, but in case it's me, I wasn't already very engaged with the topic or in possession of carefully-formed views.
Notes:
- Yep, the tweet thread is just me jotting down thoughts etc while watching the debate.
- I was 75-80% convinced when I tweeted that, which was before I had finished the debate. After watching the debate, I made a sketchy Bayesian calculation that got me to 96%, but I've since backed off to maybe 66%.
- Basically: the key question for me is whether you think one of the first outbreaks of COVID happened at the Huanan Seafood Market. Rootclaim conceded this, and as far as I can tell if this is true then it's dispositive evidence, but I have since begun to doubt it.
- One thing in favour of my judgements is that I'm at number 5 on Metaculus' leaderboard of how accurate predictors were compared to their peers, altho that mostly comes from predictions made between 2016 and April 2020, when I burnt out from forecasting on Metaculus.
- I'm currently in the Diamond league on Manifold, which is much less impressive, and in part driven by my prediction of which way the debate would go.
Isn't there a transcript?
No transcript, but the judges have documents where they outline their reasoning:
Here's one about how many users will know how to make a dialogue:
Here's one about what the most common political affiliation will be:
Here's a Manifold market on how many people will fill out the survey in 2024.
I got 13/18.
What does the model predict non-rationalists would score?
Mine was 12/24.
I couldn’t come up with one obviously best way to show what’s going on [for the probability section]. After a lot of messing around with graphs and charts, there were two ways to display things that I settled on.
May I suggest ridgeline plots?
When I first looked at that graph I had no explanation for the sudden drop, but then I realized the missing years between 2016 and 2022 got me. I think what we’re looking at is a bit of a bump in 2022 (possibly due to the smaller sample) and then a return to where we were in 2016. Is that a just-so story? Eh, possibly.
Doesn't it look like the second and third quartiles are lower than they used to be? Like, AFAICT, if you ignore 2022, you just have downward trends in those quartiles of reported IQ.
No chance you could edit in a bit more about what the questions were? I don't really know what e.g. "Calibration IQ", "Californian LW", or "Heavy AI" mean.
Seems like you should at least try it once.
I think the right strategy is to assume guilt in the presence of a coverup, because then someone who is genuinely uncertain as to whether or not they caused the issue is incentivized to cooperate with investigations instead of obstruct them.
There are two ways I can read this. The first is that when we catch people covering up evidence that points to them committing a crime, we should assume that they're guilty of the underlying crime. That seems pretty bad because it's not necessarily true (altho the coverup is some evidence for it). The second is that you should assume they're guilty of the crime of covering up relevant info, and treat them as such. That does sound right, but it doesn't justify you in talking about the underlying crime as if you know who's guilty of it.
I think it's also pretty obvious that the social consensus is against lab leak not because all the experts have watched the 17 hour rootclaim debate, but because it was manufactured
I agree that the virology and epidemiology communities didn't watch the rootclaim debate, but I feel like this is jumping to the conclusion that they're gullible, rather than that they're just familiar with a bunch of data and are integrating it sensibly. The main thing that plays against it is low familiarity with the DEFUSE proposal in the GCRI survey, but I think that's plausibly explained by people not having read the whole thing (it's really long!).
Here is the end of a tweet thread with links to all the slides, as well as questions the judges asked and the debaters' answers. IDK if they count as being in a 'hierarchical format'.
[Marc Andreessen] followed it up in October with his “Techno-Optimist Manifesto,” which, in addition to praising a founder of Italian fascism
For those who were as curious as me, the person in question is Filippo Tommaso Marinetti, who Wikipedia says founded the Italian Futurist movement and also was a co-author of the Fascist Manifesto in 1919.
He seems to have had a strange relationship to Fascism as it became more prominent - my shallow read is that he supported it but was more focussed on the "national revival" part than racial hatred. Quotes from the relevant section of his Wikipedia page:
Marinetti was one of the first affiliates of the Italian Fascist Party. In 1919 he co-wrote with Alceste De Ambris the Fascist Manifesto, the original manifesto of Italian Fascism. He opposed Fascism's later exaltation of existing institutions, terming them "reactionary," and, after walking out of the 1920 Fascist party congress in disgust, withdrew from politics for three years. However, he remained a notable force in developing the party philosophy throughout the regime's existence...
As part of his campaign to overturn tradition, Marinetti also attacked traditional Italian food. His Manifesto of Futurist Cooking was published in the Turin Gazzetta del Popolo on 28 December 1930. Arguing that "People think, dress[,] and act in accordance with what they drink and eat", Marinetti proposed wide-ranging changes to diet. He condemned pasta, blaming it for lassitude, pessimism, and lack of virility, and promoted the eating of Italian-grown rice. In this, as in other ways, his proposed Futurist cooking was nationalistic, rejecting foreign foods and food names. It was also militaristic, seeking to stimulate men to be fighters...
On 17 November 1938, Italy passed The Racial Laws, discriminating against Italian Jews, much like the discrimination pronounced in the Nuremberg Laws. The antisemitic trend in Italy resulted in attacks against modern art, judged too foreign, too radical and anti-nationalist. In the 11 January 1939 issue of the Futurist journal, Artecrazia, Marinetti expressed his condemnation of such attacks on modern art, noting Futurism is both Italian and nationalist, not foreign, and stating that there were no Jews in Futurism. Furthermore, he claimed Jews were not active in the development of modern art. Regardless, the Italian state shut down Artecrazia.
Live in Berkeley? I think you should consider running for the city council. Why?
- 4 seats are going to be open with no incumbents:
- District 4: the area between Sacramento, Blake, Fulton, and University, plus the area between University, Cedar, MLK, and Fulton. Lots of rationalists live in this area. This will be a special election that's yet to be scheduled, but I imagine it will be held in April or May, with a filing deadline in late Feb / early Mar. (Or maybe it will be held at the same time as District 7, on April 16, filing deadline on EOD Feb 16)
- District 5: north of Cedar, between Spruce and Sacramento/Tulare/Nelson. Election in November.
- District 6: north of Hearst, between Oxford/Spruce and Wildcat Canyon Road. Election in November.
- District 7: campus and the couple blocks immediately south of it. Borders are hard to describe, check here. Special election: filing deadline is EOD Feb 16, election is April 16.
- Nobody is running in those races yet.
- You probably have gripes with how the city is running: maybe you wish policing were different, or there were more permissive zoning, or better education.
- You probably have a bunch of friends who feel similarly who maybe would want to vote for you or support your campaign.
Here is a candidate handbook for the District 7 election, I imagine running for the other districts is similar (but with different relevant dates).
Thanks!
Like, how much should William Lane Craig winning his debates update me on theism?
Partly I think that this debate format was way higher-quality than most formats I've seen, including in the domain of theism vs atheism. I also think that the answer is going to depend on whether or not your reasons for atheism are basically the same as Craig's opponents' - if they are, then I think it actually should update you somewhat (at least in a format where the winner is picked correctly), but if they aren't, then it probably shouldn't much at all.
Snowballing contacts does introduce a risk of bias but that is mitigated by the disciplinary and geographic spread in the target sample.
Is there any chance you guys could share information about the trees of who recommended who, to help get a sense of how big this bias could be? Like, how large was the largest recommendation chain, what fraction of people were recommended vs initially contacted, etc?
(Casual readers may not realize that John Halstead was one of the co-authors of the report on this survey)
I think there's some controversy about which previous pandemics were caused by lab leaks (e.g. one flu season was speculated to have been), making the base rates less informative than you'd think.
Did you scroll down to see what people who were familiar with DEFUSE said?
A bunch of people in the comments section are skeptical that we should care about the consensus of experts on this question. One thing I'm curious to get people's opinion on: late last year, Rootclaim did a series of debates with Peter Miller on whether COVID was a gain-of-function lab leak or a zoonotic spillover, you can watch the videos here. Two judges were mutually agreed upon, and for each judge that's convinced one way or the other, the loser (according to that judge) has to pay the winner $50,000. As a result the debates were pretty extensive - they went for a total of 17 hours, and the judges were pretty engaged, including asking written questions between rounds. The judges haven't released their decisions yet, but they will later this month.
For people who are inclined to disregard this survey: if the judges rule in favour of a zoonotic origin, would that count as relevant evidence in favour of zoonosis? Alternatively, if they rule in favour of a GoF lab origin, would that count as relevant evidence in favour of a lab leak?
How is this a response to my point, that you can apparently be a virologist who has worked with Daszak and still publicly disagree with him?