Posts

My simple AGI investment & insurance strategy 2024-03-31T02:51:53.479Z
Aligned AI is dual use technology 2024-01-27T06:50:10.435Z
You can just spontaneously call people you haven't met in years 2023-11-13T05:21:05.726Z
Does bulimia work? 2023-11-06T17:58:27.612Z
Should people build productizations of open source AI models? 2023-11-02T01:26:47.516Z
Bariatric surgery seems like a no-brainer for most morbidly obese people 2023-09-27T01:05:32.976Z
Bring back the Colosseums 2023-09-08T00:09:53.723Z
Diet Experiment Preregistration: Long-term water fasting + seed oil removal 2023-08-23T22:08:49.058Z
The U.S. is becoming less stable 2023-08-18T21:13:11.909Z
What is the most effective anti-tyranny charity? 2023-08-15T15:26:56.393Z
Michael Shellenberger: US Has 12 Or More Alien Spacecraft, Say Military And Intelligence Contractors 2023-06-09T16:11:48.243Z
Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin 2023-06-06T03:54:42.389Z
What is the literature on long term water fasts? 2023-05-16T03:23:51.995Z
"Do X because decision theory" ~= "Do X because bayes theorem" 2023-04-14T20:57:10.467Z
St. Patty's Day LA meetup 2023-03-18T00:00:36.511Z
Will 2023 be the last year you can write short stories and receive most of the intellectual credit for writing them? 2023-03-16T21:36:27.992Z
When will computer programming become an unskilled job (if ever)? 2023-03-16T17:46:35.030Z
POC || GTFO culture as partial antidote to alignment wordcelism 2023-03-15T10:21:47.037Z
Acolytes, reformers, and atheists 2023-03-10T00:48:40.106Z
LessWrong needs a sage mechanic 2023-03-08T18:57:34.080Z
Extreme GDP growth is a bad operating definition of "slow takeoff" 2023-03-01T22:25:27.446Z
The fast takeoff motte/bailey 2023-02-24T07:11:10.392Z
On second thought, prompt injections are probably examples of misalignment 2023-02-20T23:56:33.571Z
Stop posting prompt injections on Twitter and calling it "misalignment" 2023-02-19T02:21:44.061Z
Quickly refactoring the U.S. Constitution 2022-10-30T07:17:50.229Z
Announcing $5,000 bounty for (responsibly) ending malaria 2022-09-24T04:28:22.189Z
Extreme Security 2022-08-15T12:11:05.147Z
Argument by Intellectual Ordeal 2022-08-12T13:03:21.809Z
"Just hiring people" is sometimes still actually possible 2022-08-05T21:44:35.326Z
Don't take the organizational chart literally 2022-07-21T00:56:28.561Z
Addendum: A non-magical explanation of Jeffrey Epstein 2022-07-18T17:40:37.099Z
In defense of flailing, with foreword by Bill Burr 2022-06-17T16:40:32.152Z
Yes, AI research will be substantially curtailed if a lab causes a major disaster 2022-06-14T22:17:01.273Z
What have been the major "triumphs" in the field of AI over the last ten years? 2022-05-28T19:49:53.382Z
What an actually pessimistic containment strategy looks like 2022-04-05T00:19:50.212Z
The real reason Futarchists are doomed 2022-04-01T18:37:20.387Z
How to prevent authoritarian revolts? 2022-03-20T10:01:52.791Z
A non-magical explanation of Jeffrey Epstein 2021-12-28T21:15:41.953Z
Why do all out attacks actually work? 2020-06-12T20:33:53.138Z
Multiple Arguments, Multiple Comments 2020-05-07T09:30:17.494Z
Shortform 2020-03-19T23:50:30.391Z
Three signs you may be suffering from imposter syndrome 2020-01-21T22:17:45.944Z

Comments

Comment by lc on When do "brains beat brawn" in Chess? An experiment · 2024-11-22T05:06:56.591Z · LW · GW

As the name suggests, Leela Queen Odds is trained specifically to play without a queen, which is of course an absolutely bonkers disadvantage against 2k+ Elo players. One interesting wrinkle is the time constraint. AIs are better at fast chess (obviously), and apparently no one who's tried has yet been able to beat it consistently at 3+0 (3 minutes with no increment).

Comment by lc on Shortform · 2024-11-21T17:48:30.663Z · LW · GW

Epstein was an amateur rapist, not a pro rapist. His cabal - the parts of it that are actually confirmed and not just speculated about baselessly - seems extremely limited in scope compared to the kinds of industrial conspiracies that people propose about child sex work. Most of Epstein's victims only ever had sex with Epstein, and only one of them - Virginia Giuffre - appears to have ever publicly claimed being passed around to many of Epstein's friends.

What I am referring to are claims of an underworld industry for exploiting children, the primary purpose of which is making money. For example, in Sound of Freedom, a large part of the plot hinges on the idea that there are professionals who literally bring kidnapped children from South America into the United States so that pedophiles here can have sex with them. I submit that this industry in particular does not exist, or at least would be a terrible way to make money on a risk-adjusted basis compared to drug dealing.

Comment by lc on Shortform · 2024-11-21T17:05:17.093Z · LW · GW

I think P(DOOM) is fairly high (maybe 60%) and working on AI research or accelerating AI race dynamics independently is one of the worst things you can do. I do not endorse improving the capabilities of frontier models and think humanity would benefit if you worked on other things instead.

That said, I hope Anthropic retains a market lead, ceteris paribus. I think there are a lot of ambiguous parts of the standard AI risk thesis, and there's a strong possibility we get reasonablish alignment with a few quick creative techniques at the finish, like faithful CoT. If that happens, I expect it might be because Anthropic researchers decided to pull back and use their leverage to coordinate a pause. I also do not see what Anthropic could do on the research front at this point that would make race dynamics even worse than they already are, besides splitting up the company. I also do not want to live in a world entirely controlled by Sam Altman, and think that could be worse than death.

Comment by lc on Dragon Agnosticism · 2024-11-21T03:53:42.724Z · LW · GW

So one of the themes of the Sequences is that deliberate self-deception or thought censorship - deciding to prevent yourself from "knowing" or learning things you would otherwise learn - is almost always irrational. Reality is what it is, regardless of your state of mind, and at the end of the day whatever action you're deciding to take - for example, not talking about dragons - you could also be taking if you knew the truth. So when you say:

But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.

It's not a reason not to investigate. You could continue to avoid these consequences you speak of by not writing about dragons regardless of the results of your investigation. One possibility is that what you're also avoiding is the guilt/discomfort that might come from knowing the truth and remaining silent. But through your decision not to investigate, the world is going to carry the burden of that silence either way.

Another theme of the Sequences is that self-deception, deliberate agnosticism, and motivated reasoning are a source of surprising amounts of human suffering. Richard explains one way it goes horribly wrong here. Whatever subject you're talking about, I'm sure there are a lot of other people in your position who have chosen not to look into it for the same reasons. But if all of those people had looked into it, and faced whatever conclusion resulted squarely, you yourself might not be in the position of having to face a harmful taboo in the first place. So the form of information hiding you endorse in the post is self-perpetuating, and is part of what helps keep the taboo strong.

Comment by lc on Dragon Agnosticism · 2024-11-20T10:21:01.224Z · LW · GW

I think the entire point of rationalism is that you don't do things like this.

Comment by lc on Shortform · 2024-11-17T19:42:34.045Z · LW · GW

The greatest strategy for organizing vast conspiracies is usually failing to realize that what you're doing is illegal.

Comment by lc on Announcing turntrout.com, my new digital home · 2024-11-17T18:20:44.713Z · LW · GW

I plan to cross-post to LessWrong but to not read or reply to comments (with a few planned exceptions).

:( why not?

Comment by lc on Shortform · 2024-11-17T00:27:00.784Z · LW · GW

Pretty much everybody on the internet I can find talking about the issue both mischaracterizes and exaggerates the extent of child sex work inside the United States, often to a patently absurd degree. Wikipedia alone reports that there are anywhere from "100,000-1,000,000" child prostitutes in the U.S. There are only ~75 million children in the U.S., so I guess Wikipedia thinks it's possible that more than 1% of people aged 0-17 are prostitutes. As in most cases, these numbers are sourced from "anti sex trafficking" organizations that, as far as I can tell, completely make them up.

Actual child sex workers - the kind that get arrested, because people don't like child prostitution - are mostly children who pass themselves off as adults in order to make money. Part of the confusion comes from the fact that the government classifies any instance of child prostitution as human trafficking, regardless of whether or not there's evidence the child was coerced. Thus, when the Department of Justice reports that federal law enforcement investigated "2,515 instances of suspected human trafficking" from 2008-2010, and that "forty percent involved prostitution of a child or child sexual exploitation", it means that it investigated ~1000 possible cases of child prostitution, not that it found 1000 child sex slaves.
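The arithmetic behind both claims is easy to check; here's a minimal sketch using only the figures quoted above:

```python
# Wikipedia's upper estimate of "child prostitutes" vs. the U.S. child population
low, high = 100_000, 1_000_000
us_children = 75_000_000
print(f"upper bound as share of all U.S. children: {high / us_children:.2%}")  # 1.33%

# DOJ: 40% of 2,515 suspected human-trafficking investigations (2008-2010)
# involved child prostitution or child sexual exploitation
print(round(0.40 * 2_515))  # 1006 -- investigations opened, not victims found
```

The point of the second line is the distinction drawn above: ~1,000 is a count of investigations, not of confirmed child sex slaves.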

People believe a lot of crazy things, but I am genuinely flabbergasted at how many people find it plausible that there's an entire underworld industry of kidnapping children and selling them to pedophiles in first world countries. I know why the anti sex trafficking orgs sell these stories - they're trying to attract donations, and who is going to call out an "anti sex trafficking" charity? But surely most people realize that it would be very hard for an organized child rape cabal to spread word about their offerings to customers without someone alerting police.

Comment by lc on Shortform · 2024-11-09T22:05:40.389Z · LW · GW

Sometimes people say "before we colonize Mars, we have to be able to colonize Antarctica first".

What are the actual obstacles to doing that? Is there any future tech somewhere down the tree that could fix its climate, etc.?

Comment by lc on Shortform · 2024-11-09T01:10:20.440Z · LW · GW

LessWrong and "TPOT" are not the general public. They're not even smart versions of the general public. An end to leftist preference falsification and sacred cows, if it does come, will not bring whatever brand of IQ realism you are probably hoping for. It will not mainstream Charles Murray or Garrett Jones. Far simpler, more memetic, and more popular among both white and nonwhite right wingers in the absence of social pressures against it is groyper-style antisemitism. That is just one example; it could be something stupider and more invigorating.

I wish it weren't so. Alas.

Comment by lc on Shortform · 2024-11-07T00:42:10.667Z · LW · GW

I regret that both factions couldn't lose.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-04T04:57:14.839Z · LW · GW

I do think that AIs will eventually get much smarter than humans, and this implies that artificial minds will likely capture the majority of wealth and power in the world in the future. However, I don't think the way that we get to that state will necessarily be because the AIs staged a coup. I find more lawful and smooth transitions more likely.

I think my writing was ambiguous. My comment was supposed to read "similar constraints may apply to AIs unless one (AI) gets much smarter (than other AIs) much more quickly, as you say." I was trying to say the same thing.

My original point was also not actually that we will face an abrupt transition or AI coup, I was just objecting to the specific example Meme Machine gave.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-03T06:11:16.523Z · LW · GW

Is it your contention that similar constraints will not apply to AIs?

Similar constraints may apply to AIs unless one gets much smarter much more quickly, as you say. But even if those AIs create a nice civilian government to govern interactions with each other, those AIs won't have any reason to respect our rights unless some of them care about us more than we care about stray dogs or cats.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-03T05:57:43.998Z · LW · GW

I mean to say that high ranking generals could issue such a coup

Yes, and by "any given faction or person in the U.S. military" I mean that high-ranking generals inside the United States cannot stage a coup. They literally cannot successfully give the order to storm the Capitol. Their inferiors, understanding that:

  • The order is illegal
  • The order would have to be followed by the rest of their division in order to have a chance of success
  • The order would be almost guaranteed to fail in its broader objective even if they managed to seize the FBI headquarters or whatever
  • Others around them are also making the same calculation and will also probably be unwilling to follow the order

would report their superiors to military law enforcement instead. This is obvious if you take even a moment to put yourself in the shoes of any of the parties involved. Generals inside the U.S. military also realize this themselves, and so do not attempt coups, even though I'm certain there are many people inside the White House with large 'nominal' control over U.S. forces who would love to be dictator.

I think your blanket statement on the impossibility of Juntas is void.

I made no such blanket statement. In different countries the odds and incentives facing each of these parties are different. For example, if you live in a South American country with a history of successful military overthrows, you might have a much greater fear your superior will succeed, and so you might be more scared of him than the civilian government. This is part (though not all) of the reason why some countries are continually stable and others are continually unstable.

Comment by lc on The Compendium, A full argument about extinction risk from AGI · 2024-11-02T05:55:51.108Z · LW · GW

If it really wanted to, there would be nothing at all stopping the US military from launching a coup on its civilian government.

There are enormous hurdles preventing the U.S. military from overthrowing the civilian government.

The confusion in your statement comes from lumping all the members of the armed forces together under the term "U.S. military". Principally, a coup is an act of coordination. Any given faction or person in the U.S. military would have a difficult time organizing the necessary forces without being stopped by civilian or military law enforcement first, and then maintaining control of the civilian government afterwards without the legitimacy of democratic governance.

In general, "more powerful entities control weaker entities" is a constant. If you see something else, your eyes are probably betraying you.

Comment by lc on johnswentworth's Shortform · 2024-10-29T01:24:23.319Z · LW · GW

Totally understand why this would be more interesting; I guess I would still fundamentally describe what we're doing on the internet as conversation, with the same rules as you would describe above. It's just that the conversation you can find here (or potentially on Twitter) is superstimulating compared to what you're getting elsewhere. Which is good in the sense that it's more fun, and I guess bad inasmuch as IRL conversation was fulfilling some social or networking role that online conversation wasn't.

Comment by lc on Shortform · 2024-10-29T01:00:15.263Z · LW · GW

10xing my income did absolutely nothing for my dating life. It had so little impact that I am now suspicious of all of the people who suggest this more than marginally improves sexual success for men.

Comment by lc on Shortform · 2024-10-28T05:36:01.601Z · LW · GW

In the same way that Chinese people forgot how to write characters by hand, I think most programmers will forget how to write code without LLM editors or plugins pretty soon.

Comment by lc on johnswentworth's Shortform · 2024-10-28T05:02:34.077Z · LW · GW

...How is that definition different than a realtime version of what you do when participating in this forum?

Comment by lc on Shortform · 2024-10-27T05:47:01.356Z · LW · GW

I'm interested too. I think several of the above are solvable issues. AFAICT:

Solved by simple modifications to markets:

  • Races to correct naive bidders
  • Defending the true price from incorrect bidders for $ w/o letting price shift

Seem doable with thought:

  • Billing for information value
  • Policy conditionals

Seem hard/idk if it's possible to fully solve:

  • Collating information known by different bidders
  • Preventing tricking other bidders for profit
  • General enterprise of credit allocation for knowledge creation

Comment by lc on Shortform · 2024-10-25T20:51:19.049Z · LW · GW

The Prediction Market Discord Message, by Eva_:

Current market structures can't bill people for the information value that went into the market fairly, can't fairly handle secret information known to only some bidders, pays out most of the subsidy to whoever corrects the naive bidder fastest even though there's no benefit to making it a race, offers almost no profit to people trying to defend the true price from incorrect bidders unless they let the price shift substantially first, can't be effectively used to collate information known by different bidders, can't handle counterfactuals / policy conditionals cleanly, implement EDT instead of LDT, let you play games of tricking other bidders for profit and so require everyone to play trading strategies that are inexploitable even if less beneficial, can't defend against people who are intentionally illegible as to whether they have private information or are manipulating the market for profit elsewhere...

But most of all, prediction markets contain supposedly ideal economic actors who don't really suspect each other of dishonesty betting money against each other even though it's a net-zero trade and the agreement theorem says they shouldn't expect to profit from it at all, so clearly this is not the mathematically ideal solution for a group to collate its knowledge and pay itself for the value that knowledge [provides]. Even if you need betting to be able to trust nonsharable information from another party, you shouldn't have people betting in excess of what is needed to prove sincerity out of a belief other people are wrong unless you've actually got other people being wrong even in light of the new actors' information.

Comment by lc on Shortform · 2024-10-25T01:27:34.935Z · LW · GW

I don't think I will ever find the time to write my novel. Writing novels is dumb anyways. But I feel like the novel and world are bursting out of me. What do

Comment by lc on Jimrandomh's Shortform · 2024-10-24T20:51:39.354Z · LW · GW

Outside of politics, no one is more certain that a substandard or overpriced product is a moral failing than gamers. You'd think EA were guilty of war crimes with the way people treat them for charging for DLC or whatever.

Comment by lc on Shortform · 2024-10-23T18:35:23.132Z · LW · GW

At least according to CNN's exit polls, a white person in their twenties was only 6% less likely to vote for Trump in 2020 than a white person above the age of sixty!

This was actually very surprising to me; I think a lot of people have a background sense that younger white voters are much less socially and politically conservative. That might still be true, but the ones who choose to vote vote Republican at basically the same rate in national elections.

Comment by lc on Shortform · 2024-10-22T16:15:25.058Z · LW · GW

An interesting whitepill hidden inside Scott Alexander's SB 1047 writeup was that lying doesn't work as well as predicted in politics. It's possible that if the opposition had lied less often, or we had lied more often, the bill would not have gotten a supermajority in the Senate.

Comment by lc on If I have some money, whom should I donate it to in order to reduce expected P(doom) the most? · 2024-10-08T22:20:44.319Z · LW · GW

The Center on Long-Term Risk is absurdly underfunded, but they focus on S-risks rather than X-risks.

Comment by lc on Shortform · 2024-10-01T01:53:13.613Z · LW · GW

I actually don't really know how to think about the question of whether or not the 2016 election was stolen. Our sensemaking institutions would say it wasn't stolen if it was, and it wasn't stolen if it wasn't.

But the prediction markets provide some evidence! Where are all of the election truthers betting against Trump?

Comment by lc on Shortform · 2024-09-26T21:58:48.218Z · LW · GW

The security team definitely knows about the attack vector, and I've spoken to them. It's just that neither I nor they really know what the industry as a whole is going to do about it.

Comment by lc on Shortform · 2024-09-26T21:34:16.164Z · LW · GW

Google's red team already knows. They have known about the problem for at least six months and abused the issue successfully in engagements to get very significant access. They're just not really sure what to do because the only solutions they can come up with involve massively disruptive changes.

Comment by lc on Shortform · 2024-09-26T17:00:29.638Z · LW · GW

There's a particular AI enabled cybersecurity attack vector that I expect is going to cause a lot of problems in the next year or two. Like, every large organization is gonna get hacked in the same way. But I don't know the solution to the problem, and I fear giving particulars on how it would work at a particular FAANG would just make the issue worse.

Comment by lc on shortplav · 2024-09-09T01:58:49.871Z · LW · GW

This incentivizes people to spend a lot of wasted time gathering information about what the other party is willing to pay. Which I suppose is true anyways, but I'd like to see a spherical cow system where that isn't the case.

Comment by lc on Shortform · 2024-09-04T20:37:01.279Z · LW · GW

Comment by lc on Shortform · 2024-09-04T05:37:18.137Z · LW · GW

I've only watched season one, but so far Game of Thrones has been a lot less cynical than I expected.

Comment by lc on Shortform · 2024-08-28T04:24:07.054Z · LW · GW

I think the prestigious universities mostly select for diligence and intelligence and any selection for prosocial behavior is sort of downstream of those things' correlates.

Comment by lc on Shortform · 2024-08-27T19:50:41.375Z · LW · GW

Why aren't there institutions that certify people as being "Good people"?

Comment by lc on Shortform · 2024-08-25T03:20:11.950Z · LW · GW

Many of you are probably wondering what you will do if/when you see a polar bear. There's a Party Line, uncritically parroted by the internet and wildlife experts, that while you can charge/intimidate a black bear, polar bears are Obligate Carnivores and the only thing you can do is accept your fate.

I think this is nonsense. A potential polar bear attack can be defused just like a black bear attack. There are loads of YouTube videos of people chasing polar bears away by making themselves seem big and aggressive, and I even found some indie documentaries of people who went to the Arctic with expectations of being able to do this. The main trick seems to be to resist the urge to run away, make yourself look menacing, and commit to warning charges in the bear's general direction until it leaves.

Comment by lc on Shortform · 2024-08-22T04:22:54.454Z · LW · GW

Gambling is just stealing money from your clones in foreign Everett branches.

Comment by lc on Shortform · 2024-08-22T03:03:23.799Z · LW · GW

If we can imagine medianworlds in which the average person on Earth would be considered extremely stupid, we can also imagine medianworlds in which the average person on Earth is extremely poorly-put-together, in the same sense that someone on the internet might be aghast at the self destructive behavior of Christian Weston Chandler or BossmanJack. In such a world there'd be an everyman Joe Bauers who livestreams their life to wide ridicule for their inability to follow a diet or go to sleep on time.

Comment by lc on The other side of the tidal wave · 2024-08-19T07:25:08.263Z · LW · GW

https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence

Comment by lc on Towards more cooperative AI safety strategies · 2024-08-17T16:01:57.902Z · LW · GW

Given that OP works for OpenAI, this post reads like when Marc Andreessen complains about the "gigantic amount of money in AI safety".

Comment by lc on Shortform · 2024-08-03T06:30:56.206Z · LW · GW

The Olympics are really cool. I appreciate that they exist. There are some timelines out there where they don't have an Olympics and nobody notices anything is wrong.

Comment by lc on lemonhope's Shortform · 2024-07-31T20:18:40.862Z · LW · GW

It's a lot easier to signal the kind of intelligence that LessWrong values-in-practice by writing a philosophical treatise than by actually accomplishing something.

Comment by lc on Shortform · 2024-07-21T19:57:50.612Z · LW · GW

In the last ten days we've had the Trump assassination attempt, the CrowdStrike global computer outage, and the Joe Biden dropout.

Comment by lc on Shortform · 2024-07-15T02:44:23.294Z · LW · GW

In what percent of Everett branches is Trump dead since yesterday morning?

Comment by lc on Shortform · 2024-07-12T06:08:09.252Z · LW · GW

Political dialogue is a game with a meta. The same groups of people with the same values in a different environment will produce a different socially determined ruleset for rhetorical debate. The arguments we see as common are a product of the current debate meta, and the debate meta changes all the time.

Comment by lc on Shortform · 2024-06-28T21:37:38.587Z · LW · GW

I think there's a small expected value difference between the two candidates, but I am simply too disgusted to care. We need to overthrow the government or the primary systems and replace them with something that manages to offer us people who are under the age of 75.

Comment by lc on Shortform · 2024-06-28T19:24:10.004Z · LW · GW

I'm not voting for either presidential candidate this year. I know my vote doesn't matter, but I don't care. What we have is indistinguishable from soft authoritarianism, and I'd prefer not to lend any legitimacy to a "democracy" that gives me only two choices for President, one of whom is literally senile and cannot articulate his own policy positions on a podium.

Comment by lc on Shortform · 2024-06-27T01:38:39.639Z · LW · GW

It doesn't feel like I'm getting smarter. It feels like everybody else is getting dumber. I feel as smart as I was when I was 14.

Comment by lc on Thoughts on Francois Chollet's belief that LLMs are far away from AGI? · 2024-06-18T22:55:40.802Z · LW · GW

Francois seems almost to assume that just because an algorithm takes millions or billions of datapoints to train, that means its output is just "memorization". In fact it seems to me that the learning algorithms just work pretty slowly, and that the thing that's learned after those millions or billions of tries is the actual generative concepts.

Comment by lc on Shortform · 2024-05-31T13:16:16.006Z · LW · GW

I think most observers are underestimating how popular Nick Fuentes will be in about a year among conservatives. Would love to operationalize this belief and create some manifold markets about it. Some ideas:

  • Will Nick Fuentes have over 1,000,000 Twitter followers by 2025?
  • Will Nick Fuentes have a public debate with [any of Ben Shapiro/Charlie Kirk/etc.] by 2026?
  • Will Nick Fuentes have another public meeting with a national-level politician (i.e., congressman or above) by 2026?
  • Will any national level politicians endorse Nick Fuentes' content or claim they are a fan of his by 2026?