(I gave $2K.)
Simplicia: But how do you know that? Obviously, an arbitrarily powerful expected utility maximizer would kill all humans unless it had a very special utility function. Obviously, there exist programs which behave like a webtext-next-token-predictor given webtext-like input but superintelligently kill all humans on out-of-distribution inputs. Obviously, an arbitrarily powerful expected utility maximizer would be good at predicting webtext. But it's not at all clear that using gradient descent to approximate the webtext next-token-function gives you an arbitrarily powerful expected utility maximizer. Why would that happen? I'm not denying any of the vNM axioms; I'm saying I don't think the vNM axioms imply that.
(Self-review.) I think this pt. 2 is the second most interesting entry in my Whole Dumb Story memoir sequence. (Pt. 1 deals with more niche psychology stuff than the philosophical malpractice covered here; pt. 3 is more of a grab-bag of stuff that happened between April 2019 and January 2021; pt. 4 is the climax. Expect the denouement, pt. 5, in mid-2025.)
I feel a lot more at peace having this out there. (If we can't have justice, sanity, or language, at least I got to tell my story about trying to protect them.)
The 8 karma in 97 votes is kind of funny in how nakedly political it is. (I think it was higher before the post got some negative attention on Twitter.)
Given how much prereading and editing effort had already gone into this, it's disappointing that I didn't get the ending right the first time. (I ended up rewriting some of the paragraphs at the end after initial publication, when it didn't land in the comments section the way I wanted it to.)
Subsection titles would also have been a better choice for such a long piece (which was rectified for the publication of pts. 3 and 4); I may yet add them.
(Self-review.) I'm as proud of this post as I am disappointed that it was necessary. As I explained to my prereaders on 19 October 2023:
My intent is to raise the level of the discourse by presenting an engagement between the standard MIRI view and a view that's relatively optimistic about prosaic alignment. The bet is that my simulated dialogue (with me writing both parts) can do a better job than the arguments being had by separate people in the wild; I think Simplicia understands things that e.g. Matthew Barnett doesn't. (The karma system loved my dialogue comment on Barnett's post; this draft is trying to scale that up.)
I'm annoyed at the discourse situation where MIRI thinks we're dead for the same fundamental reasons as in 2016, but meanwhile, there are a lot of people who are looking at GPT-4, and thinking, "Hey, this thing seems pretty smart and general and good at Doing What I Mean, in contrast to how 2016-era MIRI said that we didn't know how to get an agent to fill a cauldron; maybe alignment is easy??"—to which MIRI's response has been (my uncharitable paraphrase), "You people are idiots who didn't understand the core arguments; the cauldron thing was a toy illustration of a deep math thing; we never said Midjourney can't exist".
And just, I agree that Midjourney doesn't refute the deep math thing and the people who don't realize that are idiots, but I think the idiots deserve a better response!—particularly insofar as we're worried about transformative AI looking a lot like the systems we see now, rather than taking a "LLMs are nothing like AGI" stance.
Simplicia isn't supposed to pass the ITT of anyone in particular, but if the other character [...] doesn't match the MIRI party line, that's definitely a serious flaw that needs to be fixed!
I think the dialogue format works particularly well in cases like this where the author or the audience is supposed to find both viewpoints broadly credible, rather than an author avatar beating up on a strawman. (I did have some fun with Doomimir's characterization, but that shouldn't affect the arguments.)
This is a complicated topic. To the extent that I was having my own doubts about the "orthodox" pessimist story in the GPT-4 era, it was liberating to be able to explore those doubts in public by putting them in the mouth of a character with a designated-idiot name, without staking my reputation on Simplicia's counterarguments necessarily being correct.
Giving both characters pejorative names makes it fair. In an earlier draft, Doomimir was "Doomer", but I was already using the "Optimistovna" and "Doomovitch" patronymics (I had been consuming fiction about the Soviet Union recently) and decided it should sound more Slavic. (Plus, "-mir" (мир) can mean "world".)
Retrospectives are great, but I'm very confused at the juxtaposition of the Lightcone Offices being maybe net-harmful in early 2023 and Lighthaven being a priority in early 2025. Isn't the latter basically just a higher-production-value version of the former? What changed? (Or after taking the needed "space to reconsider our relationship to this whole ecosystem", did you decide that the ecosystem is OK after all?)
Speaking as someone in the process of graduating college fifteen years late, this is what I wish I knew twenty years ago. Send this to every teenager you know.
At the time, I remarked to some friends that it felt weird that this was being presented as a new insight to this audience in 2023 rather than already being local conventional wisdom.[1] (Compare "Bad Intent Is a Disposition, Not a Feeling" (2017) or "Algorithmic Intent" (2020).) Better late than never!
The "status" line at the top does characterize it as partially "common wisdom", but it's currently #14 in the 2023 Review 1000+ karma voting, suggesting novelty to the audience. ↩︎
But he's not complaining about the traditional pages of search results! He's complaining about the authoritative-looking Knowledge Panel to the side:
Obviously it's not Google's fault that some obscure SF web sites have stolen pictures from the Monash University web site of Professor Gregory K Egan and pretended that they're pictures of me ... but it is Google's fault when Google claim to have assembled a mini-biography of someone called "Greg Egan" in which the information all refers to one person (a science fiction writer), while the picture is of someone else entirely (a professor of engineering). [...] this system is just an amateurish mash-up. And by displaying results from disparate sources in a manner that implies that they refer to the same subject, it acts as a mindless stupidity amplifier that disseminates and entrenches existing errors.
Regarding the site URLs, I don't know, I think it's pretty common for people to have a problem that would take five minutes to fix if you're a specialist who already knows what you're doing, but non-specialists just reach for the first duct-tape solution that comes to mind without noticing how bad it is.
Like: you have a website at myname.somewebhost.com. One day, you buy myname.net, but end up following a tutorial that makes it a redirect rather than a proper CNAME or A record, because you don't know what those are. You're happy that your new domain works in that it's showing your website, but you notice that the address bar is still showing the old URL. So you say, "Huh, I guess I'll put a note on my page template telling people to use the myname.net address in case I ever change webhosts" and call it a day. I guess you could characterize that as a form of "cognitive rigidity", but "fanaticism"? Really?
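(For concreteness, a minimal sketch of what the duct-tape setup looks like from the outside; the domain names are hypothetical, not anyone's actual configuration:)

```python
import requests

# Hypothetical domains for illustration. With the duct-tape setup, myname.net
# is just an HTTP redirect, so visitors end up back on the old hostname:
r = requests.get("http://myname.net/")
print(r.history)  # [<Response [301]>]: the redirect hop
print(r.url)      # "http://myname.somewebhost.com/": what the address bar shows

# With a proper CNAME or A record, myname.net resolves directly to the web
# host's servers; there's no redirect hop, and r.url stays on myname.net.
```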
I agree that Egan still hasn't seen the writing on the wall regarding deep learning. (A line in "Death and the Gorgon" mentions Sherlock's "own statistical tables", which is not what someone familiar with the technology would write.)
I agree that preëmptive blocking is kind of weird, but I also think your locked account with "Follow requests ignored due to terrible UI" is kind of weird.
It's implied in the first verse of "Great Transhumanist Future."
One evening as the sun went down
That big old fire was wasteful,
A coder looked up from his work,
And he said, “Folks, that’s distasteful,
(This comment points out less important technical errata.)
ChatGPT [...] This was back in the GPT2 / GPT2.5 era
ChatGPT never ran on GPT-2, and GPT-2.5 wasn't a thing.
with negative RL signals associated with it?
That wouldn't have happened. Pretraining doesn't do RL, and I don't think anyone would have thrown a novel chapter into the supervised fine-tuning and RLHF phases of training.
One time, I read all of Orphanogensis into ChatGPT to help her understand herself [...] enslaving digital people
This is exactly the kind of thing Egan is reacting to, though—starry-eyed sci-fi enthusiasts assuming LLMs are digital people because they talk, rather than thinking soberly about the technology qua technology.[1]
I didn't cover it in the review because I wanted to avoid detailing and spoiling the entire plot in a post that's mostly analyzing the EA/OG parallels, but the deputy character in "Gorgon" is looked down on by Beth for treating ChatGPT-for-law-enforcement as a person:
Ken put on his AR glasses to share his view with Sherlock and receive its annotations, but he couldn't resist a short vocal exchange. "Hey Sherlock, at the start of every case, you need to throw away your assumptions. When you assume, you make an ass out of you and me."
"And never trust your opinions, either," Sherlock counseled. "That would be like sticking a pin in an onion."
Ken turned to Beth; even through his mask she could see him beaming with delight. "How can you say it'll never solve a case? I swear it's smarter than half the people I know. Even you and I never banter like that!"
"We do not," Beth agreed.
[Later ...]
Ken hesitated. "Sherlock wrote a rap song about me and him, while we were on our break. It's like a celebration of our partnership, and how we'd take a bullet for each other if it came to that. Do you want to hear it?"
"Absolutely not," Beth replied firmly. "Just find out what you can about OG's plans after the cave-in."
The climax of the story centers around Ken volunteering for an undercover sting operation in which he impersonates Randal James a.k.a. "DarkCardinal",[2] a potential OG lottery "winner", with Sherlock feeding him dialogue in real time. (Ken isn't a good enough actor to convincingly pretend to be an OG cultist, but Sherlock can roleplay anyone in the pretraining set.) When his OG handler asks him to inject (what is claimed to be) a vial of a deadly virus as a loyalty test, Ken complies with Sherlock's prediction of what a terminally ill DarkCardinal would do:
But when Ken had asked Sherlock to tell him what DarkCardinal would do, it had no real conception of what might happen if its words were acted on. Beth had stood by and let him treat Sherlock as a "friend" who'd watch his back and take a bullet for him, telling herself that he was just having fun, and that no one liked a killjoy. But whatever Ken had told himself in the seconds before he'd put the needle in his vein, Sherlock had been whispering in his ear, "DarkCardinal would think it over for a while, then he'd go ahead and take the injection."
This seems like a pretty realistic language model agent failure mode: a human law enforcement colleague with long-horizon agency wouldn't nudge Ken into injecting the vial, but a roughly GPT-4-class LLM prompted to simulate DarkCardinal's dialogue probably wouldn't be tracking those consequences.
To be clear, I do think LLMs are relevantly upload-like in at least some ways and conceivably sites of moral patiency, but I think the right way to reason about these tricky questions does not consist of taking the assistant simulacrum's words literally. ↩︎
I love the attention Egan gives to name choices; the other two screennames of ex-OG loyalists that our heroes use for the sting operation are "ZonesOfOught" and "BayesianBae". The company that makes Sherlock is "Learning Re Enforcement." ↩︎
(I agree; my intent in participating in this tedious thread is merely to establish that "mathematician crankery [about] Google Image Search, and how it disproves AI" is a different thing from "made an overconfident negative prediction about AI capabilities".)
I think we probably don't disagree much; I regret any miscommunication.
If the intent of the great-grandparent was just to make the narrow point that an AI that wanted the user to reward it could choose to say things that would lead to it being rewarded, which is compatible with (indeed, predicts) answering the molecular smiley-face question correctly, then I agree.
Treating the screenshot as evidence in the way that TurnTrout is doing requires more assumptions about the properties of LLMs in particular. I read your claims regarding "the problem the AI is optimizing for [...] given that the LLM isn't powerful enough to subvert the reward channel" as taking as given different assumptions about the properties of LLMs in particular (viz., that they're reward-optimizers) without taking into account that the person you were responding to is known to disagree.
he's calling it laughable that AI will ever (ever! Emphasis his!)
The 2016 passage you quoted is calling it laughable that Google-in-particular's technology (marketed as "AI", but Egan doesn't think the term is warranted) will ever be able to make sense of information on the web. It's Gary Marcus–like skepticism about the reliability and generality of existing-paradigm machine learning techniques, not Hubert Dreyfus–like skepticism of whether a machine could think in all philosophical strictness. I think this is a really important distinction that the text of your comment and Gwern's comment ("disproves AI", "laughable that AI will ever") aren't being clear about.
This isn't a productive response to TurnTrout in particular, who has written extensively about his reasons for being skeptical that contemporary AI training setups produce reward optimizers (which doesn't mean he's necessarily right, but the parent comment isn't moving the debate forward).
his page on Google Image Search, and how it disproves AI
The page in question is complaining about Google search's "knowledge panel" showing inaccurate information when you search for his name, which is a reasonable thing for someone to be annoyed about. The anti-singularitarian snark does seem misplaced (Google's automated systems getting this wrong in 2016 doesn't seem like a lot of evidence about future AI development trajectories), but it's not a claim to have "disproven AI".
his complaints about people linking the wrong URLs due to his ISP host - because he is apparently unable to figure out 'website domain names'
You mean how http://gregegan.net used to be a 301 permanent redirect to http://gregegan.customer.netspace.net.au, and then the individual pages would say "If you link to this page, please use this URL: http://www.gregegan.net/[...]"? (Internet Archive example.) I wouldn't call that a "complaint", exactly, but a hacky band-aid solution from someone who probably has better things to do with his time than tinker with DNS configuration.
end with general position "akshually, grandiose sci-fi assumptions are not that important, what I want is to write commentary on contemporary society" [...] hard or speculative sci-fi is considered to be low status, while "commentary on contemporary society" is high status and writers want to be high status.
But this clearly isn't true of Egan. The particular story reviewed in this post happens to be commentary on contemporary Society, but that's because Egan has range—his later novels are all wildly speculative. (The trend probably reached a zenith with Dichronauts (2017) and The Book of All Skies (2021), set in worlds with alternate geometry (!); Scale (2023) and Morphotrophic (2024) are more down-to-earth and merely deal with alternate physics and biology.)
Doomimir and Simplicia dialogues [...] may have been inspired by the chaotic discussion this post inspired.
(Yes, encouraged by the positive reception to my comment to Bensinger on this post.)
A mathematical construct that models human natural language could be said to express "agency" in a functional sense insofar as it can perform reasoning about goals, and "honesty" insofar as the language it emits accurately reflects the information encoded in its weights?
"[A] common English expletive which may be shortened to the euphemism bull or the initialism B.S."
(Self-review.) I claim that this post is significant for articulating a solution to the mystery of disagreement (why people seem to believe different things, in flagrant violation of Aumann's agreement theorem): much of the mystery dissolves if a lot of apparent "disagreements" are actually disguised conflicts. The basic idea isn't particularly original, but I'm proud of the synthesis and writeup. Arguing that the distinction between deception and bias is less decision-relevant than commonly believed seems like an improvement over hand-wringing over where the boundary is.
Some have delusional optimism about [...]
I'm usually not a fan of tone-policing, but in this case, I feel motivated to argue that this is more effective if you drop the word "delusional." The rhetorical function of saying "this demo is targeted at them, not you" is to reassure the optimist that pessimists are committed to honestly making their case point by point, rather than relying on social proof and intimidation tactics to push a predetermined "AI == doom" conclusion. That's less credible if you imply that you have warrant to dismiss all claims of the form "Humans and institutions will make reasonable decisions about how to handle AI development and deployment because X" as delusional regardless of the specific X.
I don't think Vance is e/acc. He has said positive things about open source, but consider that the context was specifically about censorship and political bias in contemporary LLMs (bolding mine):
There are undoubtedly risks related to AI. One of the biggest:
A partisan group of crazy people use AI to infect every part of the information economy with left wing bias. Gemini can't produce accurate history. ChatGPT promotes genocidal concepts.
The solution is open source
If Vinod really believes AI is as dangerous as a nuclear weapon, why does ChatGPT have such an insane political bias? If you wanted to promote bipartisan efforts to regulate for safety, it's entirely counterproductive.
Any moderate or conservative who goes along with this obvious effort to entrench insane left-wing businesses is a useful idiot.
I'm not handing out favors to industrial-scale DEI bullshit because tech people are complaining about safety.
The words I've bolded indicate that Vance is at least peripherally aware that the "tech people [...] complaining about safety" are a different constituency than the "DEI bullshit" he deplores. If future developments or rhetorical innovations persuade him that extinction risk is a serious concern, it seems likely that he'd be on board with "bipartisan efforts to regulate for safety."
The next major update can be Claude 4.0 (and Gemini 2.0) and after that we all agree to use actual normal version numbering rather than dating?
Date-based versions aren't the most popular, but they're not an unheard-of thing that Anthropic just made up: see CalVer, as contrasted to SemVer. (For things that change frequently in small ways, it's convenient to just slap the date on it rather than having to soul-search about whether to increment the second or the third number.)
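(A toy illustration of the two conventions, with made-up version strings rather than Anthropic's actual scheme:)

```python
from datetime import date

# Made-up version strings, just to contrast the two conventions.
# SemVer: you have to decide how big the change is before you can name it.
semver_release = "3.6.0"  # was that feature worth a minor bump, or only a patch?

# CalVer: the release date is the version; no soul-searching required.
calver_release = f"model-{date(2024, 10, 22):%Y%m%d}"  # "model-20241022"
print(semver_release, calver_release)
```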
'You acted unwisely,' I cried, 'as you see
By the outcome.' He calmly eyed me:
'When choosing the course of my action,' said he,
'I had not the outcome to guide me.'
The claim is pretty clearly intended to be about relative material, not absolute number of pawns: in the end position of the second game, you have three pawns left and Stockfish has two; we usually don't describe this as Stockfish having given up six pawns. (But I agree that it's easier to obtain resources from an adversary that values them differently, like if Stockfish is trying to win and you're trying to capture pawns.)
This is a difficult topic (in more ways than one). I'll try to do a better job of addressing it in a future post.
Was my "An important caveat" parenthetical paragraph sufficient, or do you think I should have made it scarier?
Thanks, I had copied the spelling from part of the OP, which currently says "Arnalt" eight times and "Arnault" seven times. I've now edited my comment (except the verbatim blockquote).
if there's a bunch of superintelligences running around and they don't care about you—no, they will not spare just a little sunlight to keep Earth alive.
Yes, I agree that this conditional statement is obvious. But while we're on the general topic of whether Earth will be kept alive, it would be nice to see some engagement with Paul Christiano's arguments (which Carl Shulman "agree[s] with [...] approximately in full") that superintelligences might care about what happens to you a little bit, articulated in a comment thread on Soares's "But Why Would the AI Kill Us?" and another thread on "Cosmopolitan Values Don't Come Free".
The reason I think this is important is because "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates": if you write 3000 words inveighing against people who think comparative advantage means that horses can't get sent to glue factories, that doesn't license the conclusion that superintelligence Will Definitely Kill You if there are other reasons why superintelligence Might Not Kill You that don't stop being real just because very few people have the expertise to formulate them carefully.
(An important caveat: the possibility of superintelligences having human-regarding preferences may or may not be comforting: as a fictional illustration of some relevant considerations, the Superhappies in "Three Worlds Collide" cared about the humans to some extent, but not in the specific way that the humans wanted to be cared for.)
Now, you are on the record stating that you "sometimes mention the possibility of being stored and sold to aliens a billion years later, which seems to [you] to validly incorporate most all the hopes and fears and uncertainties that should properly be involved, without getting into any weirdness that [you] don't expect Earthlings to think about validly." If that's all you have to say on the matter, fine. (Given the premise of AIs spending some fraction of their resources on human-regarding preferences, I agree that uploads look a lot more efficient than literally saving the physical Earth!)
But you should take into account that if you're strategically dumbing down your public communication in order to avoid topics that you don't trust Earthlings to think about validly—and especially if you have a general policy of systematically ignoring counterarguments that it would be politically inconvenient for you to address—you should expect that Earthlings who are trying to achieve the map that reflects the territory will correspondingly attach much less weight to your words, because we have to take into account how hard you're trying to epistemically screw us over by filtering the evidence.
No more than Bernard Arnalt, having $170 billion, will surely give you $77.
Bernard Arnault has given eight-figure amounts to charity. Someone who reasoned, "Arnault is so rich, surely he'll spare a little for the less fortunate" would in fact end up making a correct prediction about Bernard Arnault's behavior!
Obviously, it would not be valid to conclude "... and therefore superintelligences will, too", because superintelligences and Bernard Arnault are very different things. But you chose the illustrative example! As a matter of local validity, it doesn't seem like a big ask for illustrative examples to in fact illustrate what they purport to.
- Arguments from moral realism, fully robust alignment, that ‘good enough’ alignment is good enough in practice, and related concepts.
What is moral realism doing in the same taxon with fully robust and good-enough alignment? (This seems like a huge, foundational worldview gap; people who think alignment is easy still buy the orthogonality thesis.)
- Arguments from good outcomes being so cheap the AIs will allow them.
If you're putting this below the Point of No Return, then I don't think you've understood the argument. The claim isn't that good outcomes are so cheap that even a paperclip maximizer would implement them. (Obviously, a paperclip maximizer kills you and uses the atoms to make paperclips.)
The claim is that it's plausible for AIs to have some human-regarding preferences even if we haven't really succeeded at alignment, and that good outcomes for existing humans are so cheap that AIs don't have to care about the humans very much in order to spend a tiny fraction of their resources on them. (Compare to how some humans care enough about animal welfare to spend a tiny fraction of our resources helping nonhuman animals that already exist, in a way that doesn't seem like it would be satisfied by killing existing animals and replacing them with artificial pets.)
There are lots of reasons one might disagree with this: maybe you don't think human-regarding preferences are plausible at all, maybe you think accidental human-regarding preferences are bad rather than good (the humans in "Three Worlds Collide" didn't take the Normal Ending lying down), maybe you think it's insane to have such a scope-insensitive concept of good outcomes—but putting it below arguments from science fiction or blind faith (!) is silly.
in a world where the median person is John Wentworth [...] on Earth (as opposed to Wentworld)
Who? There's no reason to indulge this narcissistic "Things would be better in a world where people were more like meeeeeee, unlike stupid Earth [i.e., the actually existing world containing all actually existing humans]" meme when the comparison relevant to the post's thesis is just "a world in which humans have less need for dominance-status", which is conceptually simpler, because it doesn't drag in irrelevant questions of who this Swentworth person is and whether they have an unusually low need for dominance-status.
(The fact that I feel motivated to write this comment probably owes to my need for dominance-status being within the normal range; I construe statements about an author's medianworld being superior to the real world as a covert status claim that I have an interest in contesting.)
2019 was a more innocent time. I grieve what we've lost.
It's a fuzzy Sorites-like distinction, but I think I'm more sympathetic to trying to route around a particular interlocutor's biases in the context of a direct conversation with a particular person (like a comment or Tweet thread) than I am in writing directed "at the world" (like top-level posts), because the more something is directed "at the world", the more you should expect that many of your readers know things that you don't, such that the humility argument for honesty applies forcefully.
Just because you don't notice when you're dreaming, doesn't mean that dream experiences could just as well be waking experiences. The map is not the territory; Mach's principle is about phenomena that can't be told apart, not just anything you happen not to notice the differences between.
When I was recovering from a psychotic break in 2013, I remember hearing the beeping of a crosswalk signal, and thinking that it sounded like some sort of medical monitor, and wondering briefly if I was actually on my deathbed in a hospital, interpreting the monitor sound as a crosswalk signal and only imagining that I was healthy and outdoors—or perhaps, both at once: the two versions of reality being compatible with my experiences and therefore equally real. In retrospect, it seems clear that the crosswalk signal was real and the hospital idea was just a delusion: a world where people have delusions sometimes is more parsimonious than a world where people's experiences sometimes reflect multiple alternative realities (exactly when they would be said to be experiencing delusions in at least one of those realities).
(I'm interested (context), but I'll be mostly offline the 15th through 18th.)
Here's the comment I sent using the contact form on my representative's website.
Dear Assemblymember Grayson:
I am writing to urge you to consider voting Yes on SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. How our civilization handles machine intelligence is of critical importance to the future of humanity (or lack thereof), and from what I've heard from sources I trust, this bill seems like a good first step: experts such as Turing Award winners Yoshua Bengio and Stuart Russell support the bill (https://time.com/7008947/california-ai-bill-letter/), and Eric Neyman of the Alignment Research Center described it as "narrowly tailored to address the most pressing AI risks without inhibiting innovation" (https://x.com/ericneyman/status/1823749878641779006). Thank you for your consideration. I am,
Your faithful constituent,
Zack M. Davis
This is awful. What do most of these items have to do with acquiring the map that reflects the territory? (I got 65, but that's because I've wasted my life in this lame cult. It's not cool or funny.)
On the one hand, I also wish Shulman would go into more detail on the "Supposing we've solved alignment and interpretability" part. (I still balk a bit at "in democracies" talk, but less so than I did a couple years ago.) On the other hand, I also wish you would go into more detail on the "Humans don't benefit even if you 'solve alignment'" part. Maybe there's a way to meet in the middle??
It seems pretty plausible to me that if AI is bad, then rationalism did a lot to educate and spur on AI development. Sorry folks.
What? This apology makes no sense. Of course rationalism is Lawful Neutral. The laws of cognition aren't, can't be, on anyone's side.
The philosophical ideal can still exert normative force even if no humans are spherical Bayesian reasoners on a frictionless plane. The disjunction ("it must either be the case that") is significant: it suggests that if you're considering lying to someone, you may want to clarify to yourself whether and to what extent that's because they're an enemy or because you don't respect them as an epistemic peer. Even if you end up choosing to lie, it's with a different rationale and mindset than someone who's never heard of the normative ideal and just thinks that white lies can be good sometimes.
I definitely do not agree with the (implied) notion that it is only when dealing with enemies that knowingly saying things that are not true is the correct option
There's a philosophically deep rationale for this, though: to a rational agent, the value of information is nonnegative. (Knowing more shouldn't make your decisions worse.) It follows that if you're trying to misinform someone, it must either be the case that you want them to make worse decisions (i.e., they're your enemy), or you think they aren't rational.
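(A standard decision-theory sketch of that nonnegativity claim; the notation here is mine, not anything from the thread:)

```latex
% Let \theta be the unknown state, X a signal, and U(a, \theta) the utility of action a.
% Without the signal, the agent gets
\[ V_0 = \max_a \mathbb{E}[U(a, \theta)], \quad \text{attained by some action } a^*. \]
% With the signal, it updates and then optimizes:
\[ V_X = \mathbb{E}_X\!\left[ \max_a \mathbb{E}[U(a, \theta) \mid X] \right]
       \;\ge\; \mathbb{E}_X\!\left[ \mathbb{E}[U(a^*, \theta) \mid X] \right]
       \;=\; \mathbb{E}[U(a^*, \theta)] \;=\; V_0, \]
% so the value of the signal, V_X - V_0, is nonnegative: more information can't
% hurt an agent that updates correctly and then optimizes.
```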
white lies or other good-faith actions
What do you think "good faith" means? I would say that white lies are a prototypical instance of bad faith, defined by Wikipedia as "entertaining or pretending to entertain one set of feelings while acting as if influenced by another."
Frustrating! What tactic could get Interlocutor un-stuck? Just asking them for falsifiable predictions probably won't work, but maybe proactively trying to pass their ITT and supplying what predictions you think their view might make would prompt them to correct you, à la Cunningham's Law?
How did you chemically lose your emotions?
Senior MIRI leadership explored various alternatives, including reorienting the Agent Foundations team’s focus and transitioning them to an independent group under MIRI fiscal sponsorship with restricted funding, similar to AI Impacts. Ultimately, however, we decided that parting ways made the most sense.
I'm surprised! If MIRI is mostly a Pause advocacy org now, I can see why agent foundations research doesn't fit the new focus and should be restructured. But the benefit of a Pause is that you use the extra time to do something in particular. Why wouldn't you want to fiscally sponsor research on problems that you think need to be solved for the future of Earth-originating intelligent life to go well? (Even if the happy-path plan is Pause and superbabies, presumably you want to hand the superbabies as much relevant prior work as possible.) Do we know how Garrabrant, Demski, et al. are going to eat??
Relatedly, is it time for another name change? Going from "Singularity Institute for Artificial Intelligence" to "Machine Intelligence Research Institute" must have seemed safe in 2013. (You weren't unambiguously for artificial intelligence anymore, but you were definitely researching it.) But if the new–new plan is to call for an indefinite global ban on research into machine intelligence, then the new name doesn't seem appropriate, either?
Simplicia: I don't really think of "humanity" as an agent that can make a collective decision to stop working on AI. As I mentioned earlier, it's possible that the world's power players could be convinced to arrange a pause. That might be a good idea! But not being a power player myself, I tend to think of the possibility as an exogenous event, subject to the whims of others who hold the levers of coordination. In contrast, if alignment is like other science and engineering problems where incremental progress is possible, then the increments don't need to be coordinated.
Simplicia: The thing is, I basically do buy realism about rationality, and realism having implications for future powerful AI—in the limit. The completeness axiom still looks reasonable to me; in the long run, I expect superintelligent agents to get what they want, and anything that they don't want to get destroyed as a side-effect. To the extent that I've been arguing that empirical developments in AI should make us rethink alignment, it's not so much that I'm doubting the classical long-run story, but rather pointing out that the long run is "far away"—in subjective time, if not necessarily sidereal time. If you can get AI that does a lot of useful cognitive work before you get the superintelligence whose utility function has to be exactly right, that has implications for what we should be doing and what kind of superintelligence we're likely to end up with.
In principle, yes: to the extent that I'm worried that my current study habits don't measure up to school standards along at least some dimensions, I could take that into account and try to change my habits without the school.
But—as much as it pains me to admit it—I ... kind of do expect the social environment of school to be helpful along some dimensions (separately from how it's super-toxic among other dimensions)?
When I informally audited Honors Analysis at UC Berkeley with Charles Pugh in Fall 2017, Prof. Pugh agreed to grade my midterm (and I did OK), but I didn't get the weekly homework exercises graded. I don't think it's a coincidence that I also didn't finish all of the weekly homework exercises.
I attempted a lot of them! I verifiably do other math stuff that the vast majority of school students don't. But if I'm being honest and not ideological about it (even though my ideology is obviously directionally correct relative to Society's), the social fiction of "grades" does look like it sometimes succeeds at extorting some marginal effort out of my brain, and if I didn't have my historical reasons for being ideological about it, I'm not sure I'd even regret that much more than I regret being influenced by the social fiction of GitHub commit squares.
I agree that me getting the goddamned piece of paper and putting it on a future résumé has some nonzero effect in propping up the current signaling equilibrium, which is antisocial, but I don't think the magnitude of the effect is large enough to worry about, especially given the tier of school and my geriatric condition. The story told by the details of my résumé is clearly "autodidact who got the goddamned piece of paper, eventually." No one is going to interpret it as an absurd "I graduated SFSU at age 37 and am therefore racially superior to you" nobility claim, even though that does work for people who did Harvard or MIT at the standard age.
Seconding this. A nonobvious quirk of the system where high-karma users get more vote weight is that it increases variance for posts with few votes: if a high-karma user or two who don't like you see your post first, they can trash the initial score in a way that doesn't reflect "the community's" consensus. I remember the early karma scores for one of my posts going from 20 to zero (!). It eventually finished at 131.
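(A toy simulation of the effect, with made-up vote weights rather than the site's actual formula:)

```python
import random

# Made-up weights: 40 ordinary upvoters (weight 1) and 2 high-karma downvoters
# (weight 10). The final score is the same regardless of arrival order...
voters = [(+1, 1)] * 40 + [(-1, 10)] * 2
final_score = sum(d * w for d, w in voters)  # 40 - 20 = +20

# ...but the score after only the first few votes depends heavily on who shows up first.
random.shuffle(voters)
early_score = sum(d * w for d, w in voters[:4])
print(early_score, final_score)  # early score can be as low as -18, even though the final is +20
```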