Posts

My simple AGI investment & insurance strategy 2024-03-31T02:51:53.479Z
Aligned AI is dual use technology 2024-01-27T06:50:10.435Z
You can just spontaneously call people you haven't met in years 2023-11-13T05:21:05.726Z
Does bulimia work? 2023-11-06T17:58:27.612Z
Should people build productizations of open source AI models? 2023-11-02T01:26:47.516Z
Bariatric surgery seems like a no-brainer for most morbidly obese people 2023-09-27T01:05:32.976Z
Bring back the Colosseums 2023-09-08T00:09:53.723Z
Diet Experiment Preregistration: Long-term water fasting + seed oil removal 2023-08-23T22:08:49.058Z
The U.S. is becoming less stable 2023-08-18T21:13:11.909Z
What is the most effective anti-tyranny charity? 2023-08-15T15:26:56.393Z
Michael Shellenberger: US Has 12 Or More Alien Spacecraft, Say Military And Intelligence Contractors 2023-06-09T16:11:48.243Z
Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin 2023-06-06T03:54:42.389Z
What is the literature on long term water fasts? 2023-05-16T03:23:51.995Z
"Do X because decision theory" ~= "Do X because bayes theorem" 2023-04-14T20:57:10.467Z
St. Patty's Day LA meetup 2023-03-18T00:00:36.511Z
Will 2023 be the last year you can write short stories and receive most of the intellectual credit for writing them? 2023-03-16T21:36:27.992Z
When will computer programming become an unskilled job (if ever)? 2023-03-16T17:46:35.030Z
POC || GTFO culture as partial antidote to alignment wordcelism 2023-03-15T10:21:47.037Z
Acolytes, reformers, and atheists 2023-03-10T00:48:40.106Z
LessWrong needs a sage mechanic 2023-03-08T18:57:34.080Z
Extreme GDP growth is a bad operating definition of "slow takeoff" 2023-03-01T22:25:27.446Z
The fast takeoff motte/bailey 2023-02-24T07:11:10.392Z
On second thought, prompt injections are probably examples of misalignment 2023-02-20T23:56:33.571Z
Stop posting prompt injections on Twitter and calling it "misalignment" 2023-02-19T02:21:44.061Z
Quickly refactoring the U.S. Constitution 2022-10-30T07:17:50.229Z
Announcing $5,000 bounty for (responsibly) ending malaria 2022-09-24T04:28:22.189Z
Extreme Security 2022-08-15T12:11:05.147Z
Argument by Intellectual Ordeal 2022-08-12T13:03:21.809Z
"Just hiring people" is sometimes still actually possible 2022-08-05T21:44:35.326Z
Don't take the organizational chart literally 2022-07-21T00:56:28.561Z
Addendum: A non-magical explanation of Jeffrey Epstein 2022-07-18T17:40:37.099Z
In defense of flailing, with foreword by Bill Burr 2022-06-17T16:40:32.152Z
Yes, AI research will be substantially curtailed if a lab causes a major disaster 2022-06-14T22:17:01.273Z
What have been the major "triumphs" in the field of AI over the last ten years? 2022-05-28T19:49:53.382Z
What an actually pessimistic containment strategy looks like 2022-04-05T00:19:50.212Z
The real reason Futarchists are doomed 2022-04-01T18:37:20.387Z
How to prevent authoritarian revolts? 2022-03-20T10:01:52.791Z
A non-magical explanation of Jeffrey Epstein 2021-12-28T21:15:41.953Z
Why do all out attacks actually work? 2020-06-12T20:33:53.138Z
Multiple Arguments, Multiple Comments 2020-05-07T09:30:17.494Z
Shortform 2020-03-19T23:50:30.391Z
Three signs you may be suffering from imposter syndrome 2020-01-21T22:17:45.944Z

Comments

Comment by lc on o3 · 2024-12-20T22:07:31.476Z · LW · GW

I don't emphasize this because I care more about humanity's survival than the next decades sucking really hard for me and everyone I love.

I'm flabbergasted by this degree/kind of altruism. I respect you for it, but I literally cannot bring myself to care about "humanity"'s survival if it means the permanent impoverishment, enslavement, or starvation of everybody I love. That future is simply not much better by my lights than everyone, including the GPU-controllers, meeting a similar fate. In fact I think my instincts are to hate that outcome more, because it's unjust.

But how do LW futurists not expect catastrophic job loss that destroys the global economy?

Slight correction: catastrophic job loss would destroy the ability of the non-landed, working public to participate in and extract value from the global economy. The global economy itself would be fine. I agree this is a natural conclusion; I guess people were hoping to get 10 or 15 more years out of their natural gifts.

Comment by lc on leogao's Shortform · 2024-12-18T20:33:52.386Z · LW · GW

I think if I got asked randomly at an AI conference if I knew what AGI was I would probably say no, just to see what the questioner was going to tell me.

Comment by lc on avturchin's Shortform · 2024-12-15T17:51:55.558Z · LW · GW

Saying "I have no intention to kill myself, and I suspect that I might be murdered" is not enough.

Frankly, I do think this would work in many jurisdictions. It didn't work for John McAfee because he had a history of crazy remarks, because it sounded like the sort of thing he'd do to save face or generate intrigue if he actually did plan on killing himself, and because he made no specific accusations. But if you really thought Sam Altman's head of security was going to murder you, you'd probably change their personal risk calculus dramatically by saying so repeatedly on the internet. Just make sure you also contact police specifically with what you know, so that the threat is legible to them as an institution.

Comment by lc on avturchin's Shortform · 2024-12-15T17:33:19.001Z · LW · GW

If someone wants to murder you, they can. If you ever walk outside, you can't avoid being shot by a sniper.

If the person or people trying to murder you are omnicompetent, then it's hard. If they're regular people, then there are at least lots of temporary measures you can take that would make it more difficult. You can fly to a random state or country and check into a motel without telling anybody where you are. Or you could find a bunch of friends and stay in a basement somewhere. Mobsters used to call hiding out like that until a threat had receded "going to ground".

Wearing a camera that is streaming to a cloud 24/7, and your friends can publish the video in case of your death... seems a bit too much. (Also, it wouldn't protect you e.g. against being poisoned. But I think this is not a typical way how whistleblowers die.) Is there something simpler?

You could move to New York or London, where your every move outside of a private home or apartment is already recorded. Then place a security camera in your house.

Comment by lc on avturchin's Shortform · 2024-12-14T20:01:01.184Z · LW · GW

Tapping the sign:

Comment by lc on Shortform · 2024-12-11T03:57:51.531Z · LW · GW

Postdiction: Modern "cancel culture" was mostly a consequence of new communication systems (social media, etc.) rather than a consequence of "naturally" shifting attitudes or politics.

Comment by lc on AI Safety is Dropping the Ball on Clown Attacks · 2024-12-10T05:26:00.462Z · LW · GW

I have a draft that has wasted away for ages. I will probably post something this month though. Very busy with work.

Comment by lc on China Hawks are Manufacturing an AI Arms Race · 2024-12-01T20:20:35.596Z · LW · GW

The original comment you wrote appeared to be a response to "AI China hawks" like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise, and are arguing for an arms race based on that premise. I don't think whether normies can feel the AGI is very relevant to their position, because one of their big goals is to make sure Xi is never in a position to run the world, and completing a Manhattan Project for AI would probably prevent that regardless (even if it kills us).

If you're trying to argue instead that the Manhattan Project won't happen, then I'm mostly ambivalent. But I'll remark that that argument feels a lot shakier in 2024, when Trump's daughter is literally retweeting Leopold's manifesto, than it did in 2020.

Comment by lc on China Hawks are Manufacturing an AI Arms Race · 2024-12-01T18:33:21.729Z · LW · GW

No, my problem with the hawks, as far as this criticism goes, is that they aren't repeatedly and explicitly saying what they will do

One issue with "explicitly and repeatedly saying what they will do" is that it invites competition. Many of the things that China hawks might want to do would be outside the Overton window. As Eliezer describes in AGI Ruin:

The example I usually give is "burn all GPUs". This is not what I think you'd actually want to do with a powerful AGI - the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align. However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there. So I picked an example where if anybody says "how dare you propose burning all GPUs?" I can say "Oh, well, I don't actually advocate doing that; it's just a mild overestimate for the rough power level of what you'd have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years."

Comment by lc on China Hawks are Manufacturing an AI Arms Race · 2024-12-01T18:29:19.005Z · LW · GW

What does winning look like? What do you do next? How do you "bury the body"? You get AGI and you show it off publicly, Xi blows his stack as he realizes how badly he screwed up strategically and declares a national emergency and the CCP starts racing towards its own AGI in a year, and... then what? What do you do in this 1 year period, while you still enjoy AGI supremacy? You have millions of AGIs which can do... stuff. What is this stuff? Are you going to start massive weaponized hacking to subvert CCP AI programs as much as possible short of nuclear war? Lobby the UN to ban rival AGIs and approve US carrier group air strikes on the Chinese mainland? License it to the CCP to buy them off? Just... do nothing and enjoy 10%+ GDP growth for one year before the rival CCP AGIs all start getting deployed? Do you have any idea at all? If you don't, what is the point of 'winning the race'?

The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want. So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be to turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when it has had the opportunity.

Comment by lc on The Big Nonprofits Post · 2024-11-30T00:03:55.663Z · LW · GW

Did you look into: https://longtermrisk.org/?

Comment by lc on Shortform · 2024-11-28T20:23:27.534Z · LW · GW

"Spy" is an ambiguous term, sometimes meaning "intelligence officer" and sometimes meaning "informant". Most 'spies' in the "espionage-commiting-person" sense are untrained civilians who have chosen to pass information to officers of a foreign country, for varying reasons. So if you see someone acting suspicious, an argument like "well surely a real spy would have been coached not to do that during spy school" is locally invalid.

Comment by lc on Habryka's Shortform Feed · 2024-11-23T21:40:14.158Z · LW · GW

Why hardware bugs in particular?

Comment by lc on Shortform · 2024-11-23T15:27:16.104Z · LW · GW

Well, that's at least a completely different kind of regulatory failure from the one that was proposed on Twitter. But this is probably motivated reasoning on Microsoft's part. Kernel access is only necessary for IDS because of Microsoft's design choices. If Microsoft wanted, they could have exported a user-mode API for IDS services, which is a project they are working on now. macOS already has this! And Microsoft would never ever have done as good a job on their own if they hadn't faced competition from other companies, which is why everyone uses CrowdStrike in the first place.
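
For what such a user-mode API can look like: below is a minimal sketch of a user-space monitoring agent against Apple's EndpointSecurity framework, the macOS mechanism in question. This is my own simplified illustration, not a production IDS; a real agent needs the com.apple.developer.endpoint-security.client entitlement, must run as root, and would subscribe to many more event types.

    // Minimal user-mode "IDS" sketch using Apple's EndpointSecurity API.
    // Assumptions: endpoint-security entitlement granted, running as root.
    // Compile with: clang ids_sketch.c -lEndpointSecurity
    #include <EndpointSecurity/EndpointSecurity.h>
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void) {
        es_client_t *client = NULL;

        // The handler block runs for every delivered event; no kernel code.
        es_new_client_result_t res = es_new_client(&client,
            ^(es_client_t *c, const es_message_t *msg) {
                if (msg->event_type == ES_EVENT_TYPE_NOTIFY_EXEC) {
                    // Log the path of every executable that launches.
                    es_string_token_t path =
                        msg->event.exec.target->executable->path;
                    printf("exec: %.*s\n", (int)path.length, path.data);
                }
            });
        if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
            fprintf(stderr, "es_new_client failed: %d\n", res);
            return 1;
        }

        // Subscribe to process-exec notifications, entirely from user space.
        es_event_type_t events[] = { ES_EVENT_TYPE_NOTIFY_EXEC };
        if (es_subscribe(client, events, 1) != ES_RETURN_SUCCESS) {
            fprintf(stderr, "es_subscribe failed\n");
            return 1;
        }

        dispatch_main();  // keep the process alive to receive events
    }

The point of the design is that a crash in this agent kills one process, not the whole machine, which is exactly the failure mode kernel-resident products like CrowdStrike's can't avoid.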

Comment by lc on Shortform · 2024-11-23T05:05:59.781Z · LW · GW

I have more than once noticed Gell-Mann amnesia (either in myself or others) about standard LessWrong takes on regulation. I think this community has a bias toward thinking regulations are stupider and responsible for more scarcity than they actually are. I would be skeptical of any particular story someone here tells you about how regulations are making things worse unless they can point to the specific rules involved.

For example: there is a persistent meme here, and in the rat-blogosphere generally, that the FDA is what's causing the food you make at home to be so much less expensive than the food you order out. But any person who has managed or owned a restaurant will tell you that the two biggest things actually making your hamburger expensive are labor and real estate, not compliance with food service codes. People don't spend as much money cooking at home because they're getting both the kitchen and the labor for free (or at least paying for them in other ways), and this would remain true even if it were legal to sell the food you're making on the street without a license.

Another example that's more specific and in my particular trade: back in July, when the CrowdStrike bug happened, people were posting wild takes on Twitter and in my Signal group chats about how CrowdStrike is only used everywhere because government regulators subject you to copious extra red tape if you try to switch to something else.

I cannot for the life of me imagine which regulators people were talking about. First of all, a large portion of cybersecurity regulation, like SOC 2, is self-imposed by the industry; second, anyone who's ever had to go through something unusual like ISO 27001 or FedRAMP knows that the auditors do not give a rat's ass which particular software vendor you use for anything. At most your accountant will ask if you use an endpoint defense product, and then require you to upload some sort of logfile regularly to make sure you're using the product. Which is a different kind of regulatory failure, I suppose, but it's not what caused the CrowdStrike bug.

Comment by lc on When do "brains beat brawn" in Chess? An experiment · 2024-11-22T05:06:56.591Z · LW · GW

As the name suggests, Leela Queen Odds is trained specifically to play without a queen, which is of course an absolutely bonkers disadvantage against 2000+ Elo players. One interesting wrinkle is the time constraint: AIs are better at fast chess (obviously), and apparently no one who's tried has yet been able to beat it consistently at 3+0 (3 minutes with no increment).

Comment by lc on Shortform · 2024-11-21T17:48:30.663Z · LW · GW

Epstein was an amateur rapist, not a pro rapist. His cabal - the parts of it that are actually confirmed and not just speculated about baselessly - seems extremely limited in scope compared to the kinds of industrial conspiracies that people propose about child sex work. Most of Epstein's victims only ever had sex with Epstein, and only one of them - Virginia Giuffre - appears ever to have publicly claimed being passed around to many of Epstein's friends.

What I am referring to are claims of an underworld industry for exploiting children whose primary purpose is making money. For example, in Sound of Freedom, a large part of the plot hinges on the idea that there are professionals who literally bring kidnapped children from South America into the United States so that pedophiles here can have sex with them. I submit that this industry in particular does not exist, or at least would be a terrible way to make money on a risk-adjusted basis compared to drug dealing.

Comment by lc on Shortform · 2024-11-21T17:05:17.093Z · LW · GW

I think P(doom) is fairly high (maybe 60%), and that working on AI research or independently accelerating AI race dynamics is one of the worst things you can do. I do not endorse improving the capabilities of frontier models, and think humanity would benefit if you worked on other things instead.

That said, I hope Anthropic retains a market lead, ceteris paribus. I think there are a lot of ambiguous parts of the standard AI risk thesis, and that there's a strong possibility we get reasonablish alignment with a few quick creative techniques at the finish, like faithful CoT. If that happens, I expect it might be because Anthropic researchers decided to pull back and use their leverage to coordinate a pause. I also do not see what Anthropic could do on the research front at this point that would make race dynamics even worse than they already are, besides splitting up the company. I also do not want to live in a world entirely controlled by Sam Altman, and think that could be worse than death.

Comment by lc on Dragon Agnosticism · 2024-11-21T03:53:42.724Z · LW · GW

So one of the themes of the sequences is that deliberate self-deception or thought censorship - deciding to prevent yourself from "knowing" or learning things you would otherwise learn - is almost always irrational. Reality is what it is, regardless of your state of mind, and at the end of the day, whatever action you're deciding to take - for example, not talking about dragons - you could also take if you knew the truth. So when you say:

But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.

It's not a reason not to investigate. You could continue to avoid these consequences you speak of by not writing about dragons regardless of the results of your investigation. One possibility is that what you're also avoiding is the guilt/discomfort that might come from knowing the truth and remaining silent. But through your decision not to investigate, the world is going to carry the burden of that silence either way.

Another theme of the sequences is that self-deception, deliberate agnosticism, and motivated reasoning are a source of surprising amounts of human suffering. Richard explains one way it goes horribly wrong here. Whatever subject you're talking about, I'm sure there are a lot of other people in your position who have chosen not to look into it for the same reasons. But if all of those people had looked into it, and squarely faced whatever conclusion resulted, you yourself might not be in the position of having to face a harmful taboo in the first place. So the form of information hiding you endorse in the post is self-perpetuating, and is part of what helps keep the taboo strong.

Comment by lc on Dragon Agnosticism · 2024-11-20T10:21:01.224Z · LW · GW

I think the entire point of rationalism is that you don't do things like this.

Comment by lc on Shortform · 2024-11-17T19:42:34.045Z · LW · GW

The greatest strategy for organizing vast conspiracies is usually failing to realize that what you're doing is illegal.

Comment by lc on Announcing turntrout.com, my new digital home · 2024-11-17T18:20:44.713Z · LW · GW

I plan to cross-post to LessWrong but to not read or reply to comments (with a few planned exceptions).

:( why not?

Comment by lc on Shortform · 2024-11-17T00:27:00.784Z · LW · GW

Pretty much everybody on the internet I can find talking about the issue both mischaracterizes and exaggerates the extent of child sex work inside the United States, often to a patently absurd degree. Wikipedia alone reports that there are anywhere from "100,000 to 1,000,000" child prostitutes in the U.S. There are only ~75 million children in the U.S., so I guess Wikipedia thinks it's possible that more than 1% of people aged 0-17 are prostitutes. As in most cases, these numbers are sourced from "anti sex trafficking" organizations that, as far as I can tell, completely make them up.

Actual child sex workers - the kind that get arrested, because people don't like child prostitution - are mostly children who pass themselves off as adults in order to make money. Part of the confusion comes from the fact that the government classifies any instance of child prostitution as human trafficking, regardless of whether or not there's evidence the child was coerced. Thus, when the Department of Justice reports that federal law enforcement investigated "2,515 instances of suspected human trafficking" from 2008-2010, and that "forty percent involved prostitution of a child or child sexual exploitation", it means that it investigated ~1000 possible cases of child prostitution, not that it found 1000 child sex slaves.

People believe a lot of crazy things, but I am genuinely flabbergasted at how many people find it plausible that there's an entire underworld industry of kidnapping children and selling them to pedophiles in first world countries. I know why the anti sex trafficking orgs sell these stories - they're trying to attract donations, and who is going to call out an "anti sex trafficking" charity? But surely most people realize that it would be very hard for an organized child rape cabal to spread word about its offerings to customers without someone alerting the police.

Comment by lc on Shortform · 2024-11-09T22:05:40.389Z · LW · GW

Sometimes people say "before we colonize Mars, we have to be able to colonize Antarctica first".

What are the actual obstacles to doing that? Is there any future tech somewhere down the tree that could fix its climate, etc.?

Comment by lc on Shortform · 2024-11-09T01:10:20.440Z · LW · GW

LessWrong and "TPOT" is not the general public. They're not even smart versions of the general public. An end to leftist preference falsification and sacred cows, if it does come, will not bring whatever brand of IQ realism you are probably hoping for. It will not mainstream Charles Murray or Garrett Jones. Far more simple, memetic, and popular among both white and nonwhite right wingers in the absence of social pressures against it is groyper-style antisemitism. That is just one example; it could be something stupider and more invigorating.

I wish it weren't so. Alas.

Comment by lc on Shortform · 2024-11-07T00:42:10.667Z · LW · GW

I regret that both factions couldn't lose.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-04T04:57:14.839Z · LW · GW

I do think that AIs will eventually get much smarter than humans, and this implies that artificial minds will likely capture the majority of wealth and power in the world in the future. However, I don't think the way that we get to that state will necessarily be because the AIs staged a coup. I find more lawful and smooth transitions more likely.

I think my writing was ambiguous. My comment was supposed to read "similar constraints may apply to AIs unless one (AI) gets much smarter (than other AIs) much more quickly, as you say." I was trying to say the same thing.

My original point was also not actually that we will face an abrupt transition or AI coup, I was just objecting to the specific example Meme Machine gave.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-03T06:11:16.523Z · LW · GW

Is it your contention that similar constraints will not apply to AIs?

Similar constraints may apply to AIs unless one gets much smarter much more quickly, as you say. But even if those AIs create a nice civilian government to govern interactions with each other, those AIs won't have any reason to respect our rights unless some of them care about us more than we care about stray dogs or cats.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-03T05:57:43.998Z · LW · GW

I mean to say that high ranking generals could issue such a coup

Yes, and by "any given faction or person in the U.S. military" I mean to say that high ranking generals inside the United States cannot form a coup. They literally cannot successfully give the order to storm the capitol. Their inferiors, understanding that:

  • The order is illegal
  • The order would have to be followed by the rest of their division in order to have a chance of success
  • The order would be almost guaranteed to fail in its broader objective even if they manage to seize the FBI headquarters or whatever
  • Others around them are also making the same calculation and will also probably be unwilling to follow the order

would report their superiors to military law enforcement instead. This is obvious if you take even a moment to put yourself in the shoes of any of the parties involved. Generals inside the U.S. military also realize this themselves, and so do not attempt coups, even though I'm certain there are many people inside the White House with large 'nominal' control over U.S. forces who would love to be dictator.

I think your blanket statement on the impossibility of Juntas is void.

I made no such blanket statement. In different countries, the odds and incentives facing each of these parties are different. For example, if you live in a South American country with a history of successful military overthrows, you might have a much greater fear that your superior will succeed, and so you might be more scared of him than of the civilian government. This is part (though not all) of the reason why some countries are continually stable and others are continually unstable.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-02T05:55:51.108Z · LW · GW

If it really wanted to, there would be nothing at all stopping the US military from launching a coup on its civilian government.

There are enormous hurdles preventing the U.S. military from overthrowing the civilian government.

The confusion in your statement comes from lumping all the members of the armed forces together under the term "U.S. military". Principally, a coup is an act of coordination. Any given faction or person in the U.S. military would have a difficult time organizing the forces necessary without being stopped by civilian or military law enforcement first, and then maintaining control of the civilian government afterwards without the legitimacy of democratic governance.

In general, "more powerful entities control weaker entities" is a constant. If you see something else, your eyes are probably betraying you.

Comment by lc on johnswentworth's Shortform · 2024-10-29T01:24:23.319Z · LW · GW

Totally understand why this would be more interesting; I guess I would still fundamentally describe what we're doing on the internet as conversation, with the same rules as you would describe above. It's just that the conversation you can find here (or potentially on Twitter) is superstimulating compared to what you're getting elsewhere. Which is good in the sense that it's more fun, and I guess bad inasmuch as IRL conversation was fulfilling some social or networking role that online conversation wasn't.

Comment by lc on Shortform · 2024-10-29T01:00:15.263Z · LW · GW

10xing my income did absolutely nothing for my dating life. It had so little impact that I am now suspicious of all of the people who suggest this more than marginally improves sexual success for men.

Comment by lc on Shortform · 2024-10-28T05:36:01.601Z · LW · GW

In the same way that many Chinese people have forgotten how to write characters by hand, I think most programmers will forget how to write code without LLM editors or plugins pretty soon.

Comment by lc on johnswentworth's Shortform · 2024-10-28T05:02:34.077Z · LW · GW

...How is that definition different than a realtime version of what you do when participating in this forum?

Comment by lc on Shortform · 2024-10-27T05:47:01.356Z · LW · GW

I'm interested too. I think several of the above are solvable issues. AFAICT:

Solved by simple modifications to markets:

  • Races to correct naive bidders
  • Defending the true price from incorrect bidders for $ w/o letting price shift

Seem doable with thought:

  • Billing for information value
  • Policy conditionals

Seem hard/idk if it's possible to fully solve:

  • Collating information known by different bidders
  • Preventing tricking other bidders for profit
  • General enterprise of credit allocation for knowledge creation

Comment by lc on Shortform · 2024-10-25T20:51:19.049Z · LW · GW

The Prediction Market Discord Message, by Eva_:

Current market structures can't bill people for the information value that went into the market fairly, can't fairly handle secret information known to only some bidders, pays out most of the subsidy to whoever corrects the naive bidder fastest even though there's no benefit to making it a race, offers almost no profit to people trying to defend the true price from incorrect bidders unless they let the price shift substantially first, can't be effectively used to collate information known by different bidders, can't handle counterfactuals / policy conditionals cleanly, implement EDT instead of LDT, let you play games of tricking other bidders for profit and so require everyone to play trading strategies that are inexploitable even if less beneficial, can't defend against people who are intentionally illegible as to whether they have private information or are manipulating the market for profit elsewhere...

But most of all, prediction markets contain supposedly ideal economic actors who don't really suspect each other of dishonesty betting money against each other even though it's a net-zero trade and the agreement theorem says they shouldn't expect to profit from it at all, so clearly this is not the mathematically ideal solution for a group to collate its knowledge and pay itself for the value that knowledge [provides]. Even if you need betting to be able to trust nonsharable information from another party, you shouldn't have people betting in excess of what is needed to prove sincerity out of a belief other people are wrong unless you've actually got other people being wrong even in light of the new actors' information.
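
Not part of the quoted message, but for readers who want the mechanics: the "subsidy" above usually refers to the bounded loss of an automated market maker like Hanson's LMSR. Here is a minimal sketch (my own illustration; the names and parameters are arbitrary) showing how the subsidy gets captured by whoever moves the price first:

    /* LMSR (logarithmic market scoring rule) sketch.
       Compile with: cc lmsr.c -lm */
    #include <math.h>
    #include <stdio.h>

    #define N 2  /* binary market: index 0 = YES, 1 = NO */

    /* Cost function: C(q) = b * ln(sum_i exp(q_i / b)). The market maker's
       maximum loss (the subsidy) is bounded by b * ln(N). */
    double cost(const double q[N], double b) {
        double s = 0.0;
        for (int i = 0; i < N; i++) s += exp(q[i] / b);
        return b * log(s);
    }

    /* What a trader pays to move outstanding shares from q_old to q_new. */
    double trade_cost(const double q_old[N], const double q_new[N], double b) {
        return cost(q_new, b) - cost(q_old, b);
    }

    /* Instantaneous price of outcome i, interpretable as its probability. */
    double price(const double q[N], int i, double b) {
        double s = 0.0;
        for (int j = 0; j < N; j++) s += exp(q[j] / b);
        return exp(q[i] / b) / s;
    }

    int main(void) {
        double b = 100.0;           /* liquidity parameter */
        double q0[N] = {0.0, 0.0};  /* market opens at 50/50 */
        double q1[N] = {80.0, 0.0}; /* first informed trader buys 80 YES */
        printf("trade cost: %.2f\n", trade_cost(q0, q1, b));  /* ~47.74 */
        printf("new YES price: %.3f\n", price(q1, 0, b));     /* ~0.690 */
        /* The first trader to move the price toward the truth captures the
           mispricing; anyone arriving later with the same information gets
           nothing - the race dynamic the quoted message complains about. */
        return 0;
    }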

Comment by lc on Shortform · 2024-10-25T01:27:34.935Z · LW · GW

I don't think I will ever find the time to write my novel. Writing novels is dumb anyways. But I feel like the novel and world are bursting out of me. What do

Comment by lc on Jimrandomh's Shortform · 2024-10-24T20:51:39.354Z · LW · GW

Outside of politics, none are more certain that a substandard or overpriced product is a moral failing than gamers. You'd think EA were guilty of war crimes with the way people treat them for charging for DLC or whatever.

Comment by lc on Shortform · 2024-10-23T18:35:23.132Z · LW · GW

At least according to CNN's exit polls, a white person in their twenties was only 6% less likely to vote for Trump in 2020 than a white person above the age of sixty!

This was actually very surprising to me; I think a lot of people have a background sense that younger white voters are much less socially and politically conservative. That might still be true, but the ones who choose to vote vote Republican at basically the same rate in national elections.

Comment by lc on Shortform · 2024-10-22T16:15:25.058Z · LW · GW

An interesting whitepill hidden inside Scott Alexander's SB 1047 writeup was that lying doesn't work as well in politics as predicted. It's possible that if the opposition had lied less often, or we had lied more often, the bill would not have gotten a supermajority in the Senate.

Comment by lc on If I have some money, whom should I donate it to in order to reduce expected P(doom) the most? · 2024-10-08T22:20:44.319Z · LW · GW

The Center on Long-Term Risk is absurdly underfunded, but they focus on S-risks and not X-risks.

Comment by lc on Shortform · 2024-10-01T01:53:13.613Z · LW · GW

I actually don't really know how to think about the question of whether or not the 2016 election was stolen. Our sensemaking institutions would say it wasn't stolen if it was, and it wasn't stolen if it wasn't.

But the prediction markets provide some evidence! Where are all of the election truthers betting against Trump?

Comment by lc on Shortform · 2024-09-26T21:58:48.218Z · LW · GW

The security team definitely knows about the attack vector, and I've spoken to them. It's just that neither I nor they really know what the industry as a whole is going to do about it.

Comment by lc on Shortform · 2024-09-26T21:34:16.164Z · LW · GW

Google's red team already knows. They have known about the problem for at least six months and abused the issue successfully in engagements to get very significant access. They're just not really sure what to do because the only solutions they can come up with involve massively disruptive changes.

Comment by lc on Shortform · 2024-09-26T17:00:29.638Z · LW · GW

There's a particular AI enabled cybersecurity attack vector that I expect is going to cause a lot of problems in the next year or two. Like, every large organization is gonna get hacked in the same way. But I don't know the solution to the problem, and I fear giving particulars on how it would work at a particular FAANG would just make the issue worse.

Comment by lc on shortplav · 2024-09-09T01:58:49.871Z · LW · GW

This incentivizes people to spend a lot of wasted time gathering information about what the other party is willing to pay. Which I suppose is true anyways, but I'd like to see a spherical cow system where that isn't the case.

Comment by lc on Shortform · 2024-09-04T20:37:01.279Z · LW · GW

Comment by lc on Shortform · 2024-09-04T05:37:18.137Z · LW · GW

I've only watched season one, but so far Game of Thrones has been a lot less cynical than I expected.

Comment by lc on Shortform · 2024-08-28T04:24:07.054Z · LW · GW

I think the prestigious universities mostly select for diligence and intelligence and any selection for prosocial behavior is sort of downstream of those things' correlates.

Comment by lc on Shortform · 2024-08-27T19:50:41.375Z · LW · GW

Why aren't there institutions that certify people as being "Good people"?