I don't emphasize this, because I care more about humanity's survival than about the next decades sucking really hard for me and everyone I love.
I'm flabbergasted by this degree/kind of altruism. I respect you for it, but I literally cannot bring myself to care about "humanity"'s survival if it means the permanent impoverishment, enslavement, or starvation of everybody I love. That future is simply not much better by my lights than everyone, including the GPU-controllers, meeting a similar fate. In fact I think my instincts are to hate that outcome more, because it's unjust.
But how do LW futurists not expect catastrophic job loss that destroys the global economy?
Slight correction: catastrophic job loss would destroy the ability of the non-landed, working public to participate in and extract value from the global economy. The global economy itself would be fine. I agree this is a natural conclusion; I guess people were hoping to get 10 or 15 more years out of their natural gifts.
I think if I got asked randomly at an AI conference if I knew what AGI was I would probably say no, just to see what the questioner was going to tell me.
Saying "I have no intention to kill myself, and I suspect that I might be murdered" is not enough.
Frankly I do think this would work in many jurisdictions. It didn't work for John McAfee because he had a history of crazy remarks, it sounded like the sort of thing he'd do to save face or generate intrigue if he actually did plan on killing himself, and he made no specific accusations. But if you really thought Sam Altman's head of security was going to murder you, you'd probably change their personal risk calculus dramatically by saying so repeatedly on the internet. Just make sure you also contact the police specifically with what you know, so that the threat is legible to them as an institution.
If someone wants to murder you, they can. If you ever walk outside, you can't avoid being shot by a sniper.
If the people trying to murder you are omnicompetent, then it's hard. If they're regular people, then there are at least lots of temporary measures you can take that would make it more difficult. You can fly to a random state or country and check into a motel without telling anybody where you are. Or you could find a bunch of friends and stay in a basement somewhere. Mobsters used to call lying low like that until a threat had receded "going to ground".
Wearing a camera that streams to the cloud 24/7, so that your friends can publish the video in case of your death... seems a bit too much. (Also, it wouldn't protect you against, e.g., being poisoned. But I think that is not typically how whistleblowers die.) Is there something simpler?
You could move to New York or London, and your every move outside of a private home or apartment will already be recorded. Then place a security camera in your house.
Postdiction: Modern "cancel culture" was mostly a consequence of new communication systems (social media, etc.) rather than a consequence of "naturally" shifting attitudes or politics.
I have a draft that has wasted away for ages. I will probably post something this month though. Very busy with work.
The original comment you wrote appeared to be a response to "AI China hawks" like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise, and are arguing for an arms race based on that premise. I don't think whether normies can feel the AGI is very relevant to their position, because one of their big goals is to make sure Xi is never in a position to run the world, and completing a Manhattan Project for AI would probably prevent that regardless (even if it kills us).
If you're trying to argue instead that the Manhattan Project won't happen, then I'm mostly ambivalent. But I'll remark that that argument feels a lot shakier in 2024 than it did in 2020, now that Trump's daughter is literally retweeting Leopold's manifesto.
No, my problem with the hawks, as far as this criticism goes, is that they aren't repeatedly and explicitly saying what they will do
One issue with "explicitly and repeatedly saying what they will do" is that it invites competition. Many of the things that China hawks might want to do would be outside the Overton Window. As Eliezer describes in AGI Ruin:
The example I usually give is "burn all GPUs". This is not what I think you'd actually want to do with a powerful AGI - the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align. However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there. So I picked an example where if anybody says "how dare you propose burning all GPUs?" I can say "Oh, well, I don't actually advocate doing that; it's just a mild overestimate for the rough power level of what you'd have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years."
What does winning look like? What do you do next? How do you "bury the body"? You get AGI and you show it off publicly, Xi blows his stack as he realizes how badly he screwed up strategically and declares a national emergency and the CCP starts racing towards its own AGI in a year, and... then what? What do you do in this 1 year period, while you still enjoy AGI supremacy? You have millions of AGIs which can do... stuff. What is this stuff? Are you going to start massive weaponized hacking to subvert CCP AI programs as much as possible short of nuclear war? Lobby the UN to ban rival AGIs and approve US carrier group air strikes on the Chinese mainland? License it to the CCP to buy them off? Just... do nothing and enjoy 10%+ GDP growth for one year before the rival CCP AGIs all start getting deployed? Do you have any idea at all? If you don't, what is the point of 'winning the race'?
The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want. So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be to turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when it has had the opportunity.
Did you look into: https://longtermrisk.org/?
"Spy" is an ambiguous term, sometimes meaning "intelligence officer" and sometimes meaning "informant". Most 'spies' in the "espionage-commiting-person" sense are untrained civilians who have chosen to pass information to officers of a foreign country, for varying reasons. So if you see someone acting suspicious, an argument like "well surely a real spy would have been coached not to do that during spy school" is locally invalid.
Why hardware bugs in particular?
Well, that's at least a completely different kind of regulatory failure than the one that was proposed on Twitter. But this is probably motivated reasoning on Microsoft's part. Kernel access is only necessary for intrusion detection systems (IDS) because of Microsoft's design choices. If Microsoft wanted, it could have exposed a user-mode API for IDS services, a project it is in fact working on now. macOS already has this! And Microsoft would never have done as good a job on its own if it hadn't faced competition from other companies, which is why everyone uses CrowdStrike in the first place.
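To make the design point concrete, here's a minimal sketch of a user-mode monitoring client built on macOS's EndpointSecurity framework, which is the sort of API meant above. This is my own illustrative example, not Microsoft's planned API or any vendor's actual product; the compile flags and entitlement requirement are as I understand Apple's documentation.

```c
// Minimal sketch of a user-mode IDS client on macOS's EndpointSecurity
// framework. Illustrative only: a real client needs Apple's
// com.apple.developer.endpoint-security.client entitlement and root.
// Compile (roughly): clang sketch.c -framework EndpointSecurity -lbsm
#include <EndpointSecurity/EndpointSecurity.h>
#include <bsm/libbsm.h>
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    es_client_t *client = NULL;

    // The handler runs in an ordinary user-space process. If it crashes,
    // one daemon dies and restarts; it cannot take down the kernel the
    // way a buggy kernel-mode driver (the CrowdStrike failure mode) can.
    es_new_client_result_t res = es_new_client(&client,
        ^(es_client_t *c, const es_message_t *msg) {
            (void)c;
            if (msg->event_type == ES_EVENT_TYPE_NOTIFY_EXEC) {
                pid_t pid = audit_token_to_pid(
                    msg->event.exec.target->audit_token);
                printf("observed exec by pid %d\n", pid);
            }
        });
    if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
        fprintf(stderr, "es_new_client failed: %d\n", (int)res);
        return EXIT_FAILURE;
    }

    // Subscribe to process-exec notifications, the bread and butter of
    // endpoint detection products.
    es_event_type_t events[] = { ES_EVENT_TYPE_NOTIFY_EXEC };
    if (es_subscribe(client, events, 1) != ES_RETURN_SUCCESS) {
        fprintf(stderr, "es_subscribe failed\n");
        return EXIT_FAILURE;
    }

    dispatch_main(); // park the main thread; events arrive on the handler
}
```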
I have more than once noticed Gell-Mann amnesia (either in myself or in others) about standard LessWrong takes on regulation. I think this community has a bias toward thinking regulations are stupider and responsible for more scarcity than they actually are. I would be skeptical of any particular story someone here tells you about how regulations are making things worse unless they can point to the specific rules involved.
For example: there is a persistent meme here, and in the rat-blogosphere generally, that the FDA is what's causing the food you make at home to be so much less expensive than the food you order out. But any person who has managed or owned a restaurant will tell you that the two biggest things making your hamburger expensive are actually labor and real estate, not compliance with food service codes. People don't spend as much money cooking at home because they're getting both the kitchen and the labor for free (or at least paying for them in other ways), and this would remain true even if it were legal to sell the food you're making on the street without a license.
Another example that's more specific and in my particular trade: back in July, when the CrowdStrike bug happened, people were posting wild takes on Twitter and in my Signal group chats about how CrowdStrike is only used everywhere because government regulators subject you to copious extra red tape if you try to switch to something else.
I cannot for the life of me imagine which regulators people were talking about. First of all, a large portion of cybersecurity regulation, like SOC 2, is self-imposed by the industry; second, anyone who's ever had to go through something unusual like ISO 27001 or FedRAMP knows that the auditors do not give a rat's ass which particular software vendor you use for anything. At most your accountant will ask if you use an endpoint defense product, and then require you to upload some sort of logfile regularly to make sure you're using it. Which is a different kind of regulatory failure, I suppose, but it's not what caused the CrowdStrike bug.
As the name suggests, Leela Queen Odds is trained specifically to play without a queen, which is of course an absolutely bonkers disadvantage against 2000+ Elo players. One interesting wrinkle is the time constraint. AIs are better at fast chess (obviously), and apparently no one who's tried has yet been able to beat it consistently at 3+0 (three minutes with no increment).
Epstein was an amateur rapist, not a pro rapist. His cabal - the parts of it that are actually confirmed and not just speculated about baselessly - seems extremely limited in scope compared to the kinds of industrial conspiracies people propose about child sex work. Most of Epstein's victims only ever had sex with Epstein, and only one of them - Virginia Giuffre - appears to have publicly claimed she was passed around to many of Epstein's friends.
What I am referring to are claims of an underworld industry for exploiting children whose primary purpose is making money. For example, in Sound of Freedom, a large part of the plot hinges on the idea that there are professionals who literally bring kidnapped children from South America into the United States so that pedophiles here can have sex with them. I submit that this industry in particular does not exist, or at least would be a terrible way to make money on a risk-adjusted basis compared to drug dealing.
I think P(doom) is fairly high (maybe 60%), and that working on AI research or independently accelerating AI race dynamics is one of the worst things you can do. I do not endorse improving the capabilities of frontier models and think humanity would benefit if you worked on other things instead.
That said, I hope Anthropic retains a market lead, ceteris paribus. I think there are a lot of ambiguous parts of the standard AI risk thesis, and a strong possibility we get reasonable-ish alignment from a few quick creative techniques at the finish, like faithful CoT. If that happens, I expect it might be because Anthropic researchers decided to pull back and use their leverage to coordinate a pause. I also do not see what Anthropic could do on the research front at this point that would make race dynamics even worse than they already are, besides splitting up the company. I also do not want to live in a world entirely controlled by Sam Altman, and think that could be worse than death.
So one of the themes of the Sequences is that deliberate self-deception or thought censorship - deciding to prevent yourself from "knowing" or learning things you would otherwise learn - is almost always irrational. Reality is what it is, regardless of your state of mind, and at the end of the day whatever action you're deciding to take - for example, not talking about dragons - you could also take if you knew the truth. So when you say:
But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.
It's not a reason not to investigate. You could continue to avoid the consequences you speak of by not writing about dragons regardless of the results of your investigation. One possibility is that what you're also avoiding is the guilt or discomfort that might come from knowing the truth and remaining silent. But through your decision not to investigate, the world is going to carry the burden of that silence either way.
Another theme of the Sequences is that self-deception, deliberate agnosticism, and motivated reasoning are a source of surprising amounts of human suffering. Richard explains one way this goes horribly wrong here. Whatever subject you're talking about, I'm sure there are a lot of other people in your position who have chosen not to look into it for the same reasons. But if all of those people had looked into it, and squarely faced whatever conclusion resulted, you yourself might not be in the position of having to face a harmful taboo in the first place. So the form of information hiding you endorse in this post is self-perpetuating, and is part of what keeps the taboo strong.
I think the entire point of rationalism is that you don't do things like this.
The greatest strategy for organizing vast conspiracies is usually failing to realize that what you're doing is illegal.
I plan to cross-post to LessWrong but to not read or reply to comments (with a few planned exceptions).
:( why not?
Pretty much everybody I can find talking about the issue on the internet both mischaracterizes and exaggerates the extent of child sex work inside the United States, often to a patently absurd degree. Wikipedia alone reports that there are anywhere from 100,000 to 1,000,000 child prostitutes in the U.S. There are only ~75 million children in the U.S., so I guess Wikipedia thinks it's possible that more than 1% of people aged 0-17 are prostitutes. As in most cases, these numbers are sourced from "anti sex trafficking" organizations that, as far as I can tell, completely make them up.
Actual child sex workers - the kind that get arrested, because people don't like child prostitution - are mostly children who pass themselves off as adults in order to make money. Part of the confusion comes from the fact that the government classifies any instance of child prostitution as human trafficking, regardless of whether or not there's evidence the child was coerced. Thus, when the Department of Justice reports that federal law enforcement investigated "2,515 instances of suspected human trafficking" from 2008-2010, and that "forty percent involved prostitution of a child or child sexual exploitation", it means that it investigated ~1000 possible cases of child prostitution, not that it found 1000 child sex slaves.
People believe a lot of crazy things, but I am genuinely flabbergasted at how many people find it plausible that there's an entire underworld industry of kidnapping children and selling them to pedophiles in first world countries. I know why the anti sex trafficking orgs sell these stories - they're trying to attract donations, and who is going to call out an "anti sex trafficking" charity? But surely most people realize that it would be very hard for an organized child rape cabal to spread word about their offerings to customers without someone alerting police.
Sometimes people say "before we colonize Mars, we have to be able to colonize Antarctica first".
What are the actual obstacles to doing that? Is there any future tech somewhere down the tree that could fix its climate, etc.?
LessWrong and "TPOT" is not the general public. They're not even smart versions of the general public. An end to leftist preference falsification and sacred cows, if it does come, will not bring whatever brand of IQ realism you are probably hoping for. It will not mainstream Charles Murray or Garrett Jones. Far more simple, memetic, and popular among both white and nonwhite right wingers in the absence of social pressures against it is groyper-style antisemitism. That is just one example; it could be something stupider and more invigorating.
I wish it weren't so. Alas.
I regret that both factions couldn't lose.
I do think that AIs will eventually get much smarter than humans, and this implies that artificial minds will likely capture the majority of wealth and power in the world in the future. However, I don't think the way that we get to that state will necessarily be because the AIs staged a coup. I find more lawful and smooth transitions more likely.
I think my writing was ambiguous. My comment was supposed to read "similar constraints may apply to AIs unless one (AI) gets much smarter (than other AIs) much more quickly, as you say." I was trying to say the same thing.
My original point was also not actually that we will face an abrupt transition or AI coup, I was just objecting to the specific example Meme Machine gave.
Is it your contention that similar constraints will not apply to AIs?
Similar constraints may apply to AIs unless one gets much smarter much more quickly, as you say. But even if those AIs create a nice civilian government to govern interactions with each other, they won't have any reason to respect our rights unless some of them care about us more than we care about stray dogs or cats.
I mean to say that high ranking generals could issue such a coup
Yes, and by "any given faction or person in the U.S. military" I mean that high-ranking generals inside the United States cannot stage a coup. They literally cannot successfully give the order to storm the Capitol. Their inferiors, understanding that:
- The order is illegal
- The order would have to be followed by the rest of their division in order to have a chance of success
- The order would be almost guaranteed to fail in its broader objective even if they manage to seize the FBI headquarters or whatever
- Others around them are making the same calculation and will probably also be unwilling to follow the order
Would report their superiors to military law enforcement instead. This is obvious if you take even a moment to put yourself in the shoes of any of the parties involved. Generals in the U.S. military realize this themselves, and so do not attempt coups, even though I'm certain there are many people inside the White House with large 'nominal' control over U.S. forces who would love to be dictator.
I think your blanket statement on the impossibility of Juntas is void.
I made no such blanket statement. In different countries the odds and incentives facing each of these parties are different. For example, if you live in a South American country with a history of successful military overthrows, you might have a much greater fear your superior will succeed, and so you might be more scared of him than the civilian government. This is part (though not all) of the reason why some countries are continually stable and others are continually unstable.
If it really wanted to, there would be nothing at all stopping the US military from launching a coup on its civilian government.
There are enormous hurdles preventing the U.S. military from overthrowing the civilian government.
The confusion in your statement comes from lumping all the members of the armed forces together under the term "U.S. military". Principally, a coup is an act of coordination. Any given faction or person in the U.S. military would have a difficult time organizing the necessary forces without being stopped by civilian or military law enforcement first, and then maintaining control of the civilian government afterwards without the legitimacy of democratic governance.
In general, "more powerful entities control weaker entities" is a constant. If you see something else, your eyes are probably betraying you.
Totally understand why this would be more interesting; I guess I would still fundamentally describe what we're doing on the internet as conversation, with the same rules as you would describe above. It's just that the conversation you can find here (or potentially on Twitter) is superstimulating compared to what you're getting elsewhere. Which is good in the sense that it's more fun, and I guess bad inasmuch as IRL conversation was fulfilling some social or networking role that online conversation wasn't.
10xing my income did absolutely nothing for my dating life. It had so little impact that I am now suspicious of all of the people who suggest this more than marginally improves sexual success for men.
In the same way that Chinese people forgot how to write characters by hand, I think most programmers will forget how to write code without LLM editors or plugins pretty soon.
...How is that definition different than a realtime version of what you do when participating in this forum?
I'm interested too. I think several of the above are solvable issues. AFAICT:
Solved by simple modifications to markets:
- Races to correct naive bidders
- Defending the true price from incorrect bidders for money without letting the price shift
Seem doable with thought:
- Billing for information value
- Policy conditionals
Seem hard/idk if it's possible to fully solve:
- Collating information known by different bidders
- Preventing tricking other bidders for profit
- General enterprise of credit allocation for knowledge creation
The Prediction Market Discord Message, by Eva_:
Current market structures can't bill people for the information value that went into the market fairly, can't fairly handle secret information known to only some bidders, pays out most of the subsidy to whoever corrects the naive bidder fastest even though there's no benefit to making it a race, offers almost no profit to people trying to defend the true price from incorrect bidders unless they let the price shift substantially first, can't be effectively used to collate information known by different bidders, can't handle counterfactuals / policy conditionals cleanly, implement EDT instead of LDT, let you play games of tricking other bidders for profit and so require everyone to play trading strategies that are inexploitable even if less beneficial, can't defend against people who are intentionally illegible as to whether they have private information or are manipulating the market for profit elsewhere...
But most of all, prediction markets contain supposedly ideal economic actors who don't really suspect each other of dishonesty betting money against each other even though it's a net-zero trade and the agreement theorem says they shouldn't expect to profit from it at all, so clearly this is not the mathematically ideal solution for a group to collate its knowledge and pay itself for the value that knowledge [provides]. Even if you need betting to be able to trust nonsharable information from another party, you shouldn't have people betting in excess of what is needed to prove sincerity out of a belief other people are wrong unless you've actually got other people being wrong even in light of the new actors' information.
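To make the "race to correct the naive bidder" complaint concrete: under an LMSR (logarithmic market scoring rule) subsidy, one standard market-maker design (though not necessarily the one Eva_ has in mind), whoever trades against a mispricing first captures essentially the entire subsidy for that correction. A toy sketch with made-up numbers:

```c
// Toy LMSR market illustrating the "race" payout structure: the first
// trader to move the price to the truth collects the whole mispricing
// profit; later traders earn ~nothing even if they knew the same fact.
// All numbers (b, true_p) are illustrative. Compile with -lm.
#include <math.h>
#include <stdio.h>

// LMSR cost function for a binary market with liquidity parameter b:
// C(q_yes, q_no) = b * ln(e^(q_yes/b) + e^(q_no/b))
static double cost(double q_yes, double q_no, double b) {
    return b * log(exp(q_yes / b) + exp(q_no / b));
}

// Instantaneous YES price, i.e. the market's implied probability.
static double price_yes(double q_yes, double q_no, double b) {
    double ey = exp(q_yes / b), en = exp(q_no / b);
    return ey / (ey + en);
}

int main(void) {
    double b = 100.0;            // market maker's subsidy scale
    double q_yes = 0, q_no = 0;  // naive starting point: price = 0.50
    double true_p = 0.88;        // what every informed trader knows

    // The first informed trader buys YES until the price equals true_p.
    double shares = b * log(true_p / (1.0 - true_p)); // closed form
    double paid = cost(q_yes + shares, q_no, b) - cost(q_yes, q_no, b);
    double ev = shares * true_p; // each YES share pays 1 if YES

    printf("price after trade: %.2f\n", price_yes(q_yes + shares, q_no, b));
    printf("paid %.2f for expected value %.2f -> profit %.2f\n",
           paid, ev, ev - paid);
    // Everyone who arrives later trades at ~0.88 and earns ~nothing,
    // so speed is rewarded even when it adds no information.
    return 0;
}
```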
I don't think I will ever find the time to write my novel. Writing novels is dumb anyways. But I feel like the novel and its world are bursting out of me. What do?
Outside of politics, none are more certain that a substandard or overpriced product is a moral failing than gamers. You'd think EA were guilty of war crimes with the way people treat them for charging for DLC or whatever.
At least according to CNN's exit polls, a white person in their twenties was only six percentage points less likely to vote for Trump in 2020 than a white person above the age of sixty!
This was actually very surprising to me; I think a lot of people have a background sense that younger white voters are much less socially and politically conservative. That might still be true, but the ones who choose to vote vote Republican at basically the same rate in national elections.
An interesting whitepill hidden inside Scott Alexander's SB 1047 writeup was that lying doesn't work as well in politics as predicted. It's possible that if the opposition had lied less often, or we had lied more often, the bill would not have gotten a supermajority in the Senate.
The Center on Long Term Risk is absurdly underfunded, but they focus on S-risks and not X-risks.
I actually don't really know how to think about the question of whether or not the 2016 election was stolen. Our sensemaking institutions would say it wasn't stolen if it was, and would say the same if it wasn't.
But the prediction markets provide some evidence! Where are all of the election truthers betting against Trump?
The security team definitely knows about the attack vector, and I've spoken to them. It's just that neither I nor they really know what the industry as a whole is going to do about it.
Google's red team already knows. They have known about the problem for at least six months and abused the issue successfully in engagements to get very significant access. They're just not really sure what to do because the only solutions they can come up with involve massively disruptive changes.
There's a particular AI enabled cybersecurity attack vector that I expect is going to cause a lot of problems in the next year or two. Like, every large organization is gonna get hacked in the same way. But I don't know the solution to the problem, and I fear giving particulars on how it would work at a particular FAANG would just make the issue worse.
This incentivizes people to waste a lot of time gathering information about what the other party is willing to pay. Which I suppose is true anyway, but I'd like to see a spherical-cow system where that isn't the case.
Have only watched Season one, but so far Game of Thrones has been a lot less cynical than I expected.
I think the prestigious universities mostly select for diligence and intelligence, and any selection for prosocial behavior is sort of downstream of those things' correlates.
Why aren't there institutions that certify people as being "Good people"?