Posts

My simple AGI investment & insurance strategy 2024-03-31T02:51:53.479Z
Aligned AI is dual use technology 2024-01-27T06:50:10.435Z
You can just spontaneously call people you haven't met in years 2023-11-13T05:21:05.726Z
Does bulemia work? 2023-11-06T17:58:27.612Z
Should people build productizations of open source AI models? 2023-11-02T01:26:47.516Z
Bariatric surgery seems like a no-brainer for most morbidly obese people 2023-09-27T01:05:32.976Z
Bring back the Colosseums 2023-09-08T00:09:53.723Z
Diet Experiment Preregistration: Long-term water fasting + seed oil removal 2023-08-23T22:08:49.058Z
The U.S. is becoming less stable 2023-08-18T21:13:11.909Z
What is the most effective anti-tyranny charity? 2023-08-15T15:26:56.393Z
Michael Shellenberger: US Has 12 Or More Alien Spacecraft, Say Military And Intelligence Contractors 2023-06-09T16:11:48.243Z
Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin 2023-06-06T03:54:42.389Z
What is the literature on long term water fasts? 2023-05-16T03:23:51.995Z
"Do X because decision theory" ~= "Do X because bayes theorem" 2023-04-14T20:57:10.467Z
St. Patty's Day LA meetup 2023-03-18T00:00:36.511Z
Will 2023 be the last year you can write short stories and receive most of the intellectual credit for writing them? 2023-03-16T21:36:27.992Z
When will computer programming become an unskilled job (if ever)? 2023-03-16T17:46:35.030Z
POC || GTFO culture as partial antidote to alignment wordcelism 2023-03-15T10:21:47.037Z
Acolytes, reformers, and atheists 2023-03-10T00:48:40.106Z
LessWrong needs a sage mechanic 2023-03-08T18:57:34.080Z
Extreme GDP growth is a bad operating definition of "slow takeoff" 2023-03-01T22:25:27.446Z
The fast takeoff motte/bailey 2023-02-24T07:11:10.392Z
On second thought, prompt injections are probably examples of misalignment 2023-02-20T23:56:33.571Z
Stop posting prompt injections on Twitter and calling it "misalignment" 2023-02-19T02:21:44.061Z
Quickly refactoring the U.S. Constitution 2022-10-30T07:17:50.229Z
Announcing $5,000 bounty for (responsibly) ending malaria 2022-09-24T04:28:22.189Z
Extreme Security 2022-08-15T12:11:05.147Z
Argument by Intellectual Ordeal 2022-08-12T13:03:21.809Z
"Just hiring people" is sometimes still actually possible 2022-08-05T21:44:35.326Z
Don't take the organizational chart literally 2022-07-21T00:56:28.561Z
Addendum: A non-magical explanation of Jeffrey Epstein 2022-07-18T17:40:37.099Z
In defense of flailing, with foreword by Bill Burr 2022-06-17T16:40:32.152Z
Yes, AI research will be substantially curtailed if a lab causes a major disaster 2022-06-14T22:17:01.273Z
What have been the major "triumphs" in the field of AI over the last ten years? 2022-05-28T19:49:53.382Z
What an actually pessimistic containment strategy looks like 2022-04-05T00:19:50.212Z
The real reason Futarchists are doomed 2022-04-01T18:37:20.387Z
How to prevent authoritarian revolts? 2022-03-20T10:01:52.791Z
A non-magical explanation of Jeffrey Epstein 2021-12-28T21:15:41.953Z
Why do all out attacks actually work? 2020-06-12T20:33:53.138Z
Multiple Arguments, Multiple Comments 2020-05-07T09:30:17.494Z
Shortform 2020-03-19T23:50:30.391Z
Three signs you may be suffering from imposter syndrome 2020-01-21T22:17:45.944Z

Comments

Comment by lc on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-16T20:54:19.284Z · LW · GW

I'm white.

Comment by lc on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-15T21:49:09.040Z · LW · GW

As a useless anecdote, I took Lumina in November of last year. I generally drink a lot, and had already commented to friends that my hangovers were getting 2-4x worse, before reading this post or knowing anything about your hypothesis. This has occurred only in the last few months, and I'm 24 years old.

Comment by lc on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-13T15:45:21.600Z · LW · GW

How is this different from the situation in the late 19th century when only a few things left seemed to need a "consensus explanation"?

Comment by lc on Safety engineering, target selection, and alignment theory · 2024-04-10T12:32:46.250Z · LW · GW

One might worry that it is difficult to set benchmarks of success for alignment research. Is a Newtonian understanding of gravitation sufficient to attempt a Moon landing, or must one develop a complete theory of general relativity before believing that one can land softly on the Moon?

In the case of AI alignment, there is at least one obvious benchmark to focus on initially. Imagine we had access to an incredibly powerful computer with access to the internet, an automated factory, and large sums of money. If we could program that computer to reliably achieve some simple goal (such as producing as much diamond as possible), then a large share of the AI alignment research would be completed.

Are we close to meeting this benchmark?

Comment by lc on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-10T12:13:01.800Z · LW · GW

I would like to ask a followup question: since we don't have a unified theory of physics yet, why isn't strongly adopting any one of these nonpredictive interpretations premature? It seems like trying to "interpret" gravity without knowing about general relativity.

Comment by lc on My simple AGI investment & insurance strategy · 2024-04-06T00:00:27.272Z · LW · GW

QQQ 640 (3y), SPY 750 (3y), VTI 340 (2y), SMH 290 (2y). Those were the latest expiration dates I could get.

Those SPX options look nice too, though I wish I could pay for a derivative that only paid out if the market jumped 100% in a single year, rather than say 15% per year throughout the rest of the 2020s.
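
Roughly what I mean, as a toy sketch with made-up numbers (none of these are the actual strikes, payouts, or contracts discussed above): a vanilla call pays whenever the index ends above the strike, even if it got there by grinding up 15% a year, whereas the instrument I'd prefer pays only on a genuine jump.

    # Illustrative payoffs only -- hypothetical numbers, not real contracts.
    def vanilla_call_payoff(spot_at_expiry, strike):
        # Standard call: pays the excess over the strike, however the index got there.
        return max(spot_at_expiry - strike, 0.0)

    def jump_only_payoff(spot_at_expiry, spot_at_start, payout=100.0, jump_multiple=2.0):
        # Digital-style payoff: pays a fixed amount only if the index at least doubled over the period.
        return payout if spot_at_expiry >= jump_multiple * spot_at_start else 0.0

    start = 100.0
    for end in (130.0, 160.0, 210.0):  # slow grind below strike, slow grind past strike, sudden doubling
        print(end, vanilla_call_payoff(end, strike=150.0), jump_only_payoff(end, start))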

Comment by lc on My simple AGI investment & insurance strategy · 2024-04-02T07:45:39.036Z · LW · GW

Note: there was previously an awful typo here; the third bullet said "buying individual tech stocks" instead of "instead of buying individual tech stocks". The reason I'm posting about this is that it seems higher expected value than buying and holding e.g. NVDA or call options on NVDA. I wish I had caught the typo sooner, as the post as previously written didn't make any sense.

Comment by lc on My simple AGI investment & insurance strategy · 2024-04-01T21:39:29.223Z · LW · GW

  • The market makers don't seem to be talking about it at all, and conversations I have with e.g. commodities traders say the topic doesn't come up at work. Nowadays they talk about AI, but in terms of its near-term effects on automation, not to figure out whether it will respect their property rights or something.

  • Large public AI companies like NVDA, which I would expect to be priced mostly on long-run projections of AI usage, have been consistently bid up after earnings, as if the stock market is constantly readjusting its expectations of AGI takeoff by the amount that NVDA is personally earning each quarter, rather than using those earnings to inform technical timelines. I think it's more likely that traders are saying something close to "look! Nvidia's revenues are rising!" and "wow, Nvidia has grown pretty consistently, we should increase the premium on their call options" and not much beyond that.

  • Current NASDAQ futures prices are business as usual. There are only two ways to account for these prices if markets are pricing things in: either they think slow takeoff is extraordinarily (<1%) unlikely to occur before 2030, or extremely unlikely to lead to lots of growth, or both. Either of these seems like a strange conclusion to me, one that would require an unusually strong understanding of the tech tree and policy response; but as I mentioned, they're not even talking about it, so how would they know?

  • "Pricing this in" would require entire nation-states worth of capital. Even if there's one ten billion dollar hedge fund out there that is considering these issues deeply, it wouldn't have the power to move markets to where I think they ought to be.

  • AGI takeoff is completely out of distribution for the Great Financial Machine Learning System, being an event which has never happened before, that would break more invariants about how economies work and grow than any black swan event since the dawn of public stock exchanges. There's no strong reason to believe, a priori, that hedge funds are selected to account for it in the same way they are selected to correctly predict fed rate adjustments, besides basic reasons like "hedge funds are filled with high IQ people". A similar, weaker reason explains why it was a good idea to buy put options on the market in February 2020.

Comment by lc on My simple AGI investment & insurance strategy · 2024-04-01T21:15:31.082Z · LW · GW

I do have call options on ETFs like QQQ, which is very tech-heavy, as well as SMH, which is a basket of semiconductor companies. But buying calls on individual tech stocks incurs a larger premium, because market makers see individual stocks as much more volatile than indices. So they're willing to sell you options on e.g. VTI for much less, because it's the entire stock market and that has never appreciated more than like 50% in a single year. My thesis is that market makers are making a mistake here, and so it's higher expected value to buy call options on indices rather than on individual companies with an AI component.
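
To illustrate why that lower implied volatility matters so much for far-out-of-the-money calls, here's a toy Black-Scholes comparison. Every input below is made up for illustration (not a real quote), and it ignores dividends, skew, and everything else market makers actually use; the point is just that the same 50%-OTM call gets several times cheaper when the underlying is assumed to be index-calm rather than single-stock-volatile.

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(spot, strike, years, rate, vol):
        # Black-Scholes price of a European call.
        d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (vol * sqrt(years))
        d2 = d1 - vol * sqrt(years)
        return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

    # Same 50%-out-of-the-money call, 2 years out, at index-like vs. single-stock-like implied vol.
    spot, strike, years, rate = 100.0, 150.0, 2.0, 0.04
    print(bs_call(spot, strike, years, rate, vol=0.17))  # ~1.1 with toy index-like vol
    print(bs_call(spot, strike, years, rate, vol=0.40))  # ~11.4 with toy single-stock vol, roughly 10x pricier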

I will add this to the FAQ because I think the article doesn't make it clear.

Comment by lc on On Lex Fridman’s Second Podcast with Altman · 2024-03-25T19:51:09.656Z · LW · GW

Lex asks if the incident made Altman less trusting. Sam instantly says yes, that he thinks he is an extremely trusting person who does not worry about edge cases, and he dislikes that this has made him think more about bad scenarios. So perhaps this could actually be really good? I do not want someone building AGI who does not worry about edge cases and assumes things will work out and trusting fate. I want someone paranoid about things going horribly wrong and not trusting a damn thing without a good reason.

Eh... I think you and he are worried about different things.

Comment by lc on Shortform · 2024-03-23T18:09:17.889Z · LW · GW

The most salient example of the bias I can think of comes from reading interviews/books about the people who worked in the extermination camps in the holocaust. In my personal opinion, all the evidence points to them being literally normal people, representative of the average police officer or civil service member pre-1931. Holocaust historians nevertheless typically try very hard to outline some way in which Franz Stangl and crew were specially selected for lack of empathy, instead of raising the more obvious hypothesis that the median person is just not that upset by murdering strangers in a mildly indirected way, because the wonderful-humans bias demands a different conclusion.

This goes double in general for the entire public conception of killing as the most evil-feeling thing that humans can do, contrasted with actual memoirs of soldiers and the like who typically state that they were surprised how little they cared compared to the time they lied to their grandmother or whatever.

Comment by lc on Shortform · 2024-03-23T17:46:24.729Z · LW · GW

Rewrote to be more clear.

Comment by lc on Shortform · 2024-03-23T16:10:44.124Z · LW · GW

The "people are altruistic" bias is so pernicious and widespread I've never actually seen it articulated in detail or argued for. Most seem to both greatly underestimate the size of this bias, and assume opinions either way are a form of mind-projection fallacy on the part of nice/evil people. In fact, it looks to me like this skew is the deeper origin of a lot of other biases, including the just-world fallacy, and the cause of a lot of default contentment with a lot of our institutions of science, government, etc. You could call it a meta-bias that causes the Hansonian stuff to go largely unnoticed.

I would be willing to pay someone to help draft a LessWrong post for me about this; I think it's important but my writing skills are lacking.

Comment by lc on FUTARCHY NOW BABY · 2024-03-23T14:27:25.371Z · LW · GW

This is a crazy hit rate. Someone should give sapphire $100MM to trade with.

Comment by lc on My Clients, The Liars · 2024-03-19T19:27:03.760Z · LW · GW

Are you a prosecutor/judge?

Comment by lc on Defunding My Mistake · 2024-03-19T15:45:01.729Z · LW · GW

Was wondering how a criminal defense attorney could have ever believed that police shouldn't exist until I got to the end!

Comment by lc on Shortform · 2024-03-15T04:12:39.002Z · LW · GW

Do happy people ever do couple's counseling for the same reason that mentally healthy people sometimes do talk therapy?

Comment by lc on Shortform · 2024-03-15T03:30:46.691Z · LW · GW

Ok, then that sounds like a criticism of utilitarians, or maybe people, and not utilitarianism. Also, my point didn't even mention utilitarianism, so what does that have to do with the above?

Comment by lc on Shortform · 2024-03-15T03:05:22.013Z · LW · GW

Why wouldn't utilitarianism just weigh the human costs of those measures against proposed benefit of "improving the gene pool" and alternative possible remedies, like anything else?

Comment by lc on Shortform · 2024-03-15T02:45:14.025Z · LW · GW

A common gambit: during a prisoner's dilemma, signal (or simply let others find out) that you're about to defect. Watch as your counterparty adopts newly hostile rhetoric, defensive measures, or begins to defect themselves. Then, after you ultimately do defect, say that it was a preemptive strike against forces that might take advantage of your good nature, pointing to the recent evidence.

Simple fictional example: In Star Wars Episode III, Palpatine's plot to overthrow the Senate is discovered by the Jedi. They attempt to kill him, to prevent him from doing this. Later, their attempt to kill Palpatine is used as the justification for Palpatine's extermination of the rest of the Jedi and taking control of the Republic.

Actual historical example: By 1941, it was kind of obvious that the Nazis were going to invade Russia, at least in retrospect. Hitler had written in Mein Kampf that it was the logical place to steal lebensraum, and by that point the Soviet Union was basically the only European front left. Thus it was also not inconceivable that the Soviet Union would attack first, if Stalin were left to his own devices - and Stalin was in fact preparing for a war. So Hitler invaded, and then said (possibly accurately!) that Russia was eventually going to do it to Germany anyways.

Comment by lc on Shortform · 2024-03-15T01:37:39.847Z · LW · GW

The Nazis often justified their actions by appealing to a God of Natural Selection. They alternately suggested that the victory of superior races over inferior ones was inevitable (because strong == good), and that opposing such a victory was an eternal sin. This is obviously a contradiction - how can you oppose something if it's an iron law of nature anyways - but the rhetorical flourish accomplishes two things:

  1. First, it absolves the Nazis of any crimes they commit. They didn't start the race war; they were just acting according to the will of Nature. Leaving the Poles alone would just be tolerating the existence of free energy that someone else will eventually pick up and use against them. The Nazis are just the smart ones who made the first move instead of waiting around for others to do it.
  2. Second, it uses a naturalistic fallacy to redefine "good" as "following the Nazis' local incentives". If you say that acting according to your local incentives, i.e. crushing your weaker neighbors, is the Natural Thing and therefore Good, then that gives you permission to start a fight with whomever you want. You can do no wrong except lose, because the Gods will always ensure that the stronger, and therefore better, population wins.

In this sense, the "Thermodynamic God" stuff is kind of a generalized Nazism. I'm not saying that people who believe it are Nazis - they're not consistent enough in their application of that ideology to go that far - but apply the "free energy" justification to obviously antisocial games as well as prosocial ones and you see that it justifies both war and trade.

Comment by lc on "How could I have thought that faster?" · 2024-03-12T22:55:24.295Z · LW · GW
  1. Do a task that feels like it should have taken 3 hours in 6 hours
  2. Think about what mistakes you made (maybe I should have tested this functionality, before attempting to build that entire system out)
  3. Turn it into a larger lesson (if cheap, create small test programs instead of writing 2500 new lines and debugging all of them in one pass)
  4. Apply the larger lesson going forward
Comment by lc on "How could I have thought that faster?" · 2024-03-12T22:42:22.094Z · LW · GW

I thought everybody did this. It seems like the only way to get better at certain things, like computer programming. Every time you do something and it takes a while (including realizing something), you try to figure out how you could've done the cognitive labor a little quicker.

Comment by lc on Shortform · 2024-03-11T01:45:57.799Z · LW · GW

I'm not sure I can go into detail, but the 97% true positive (i.e. lie) detection rate cited on the website is accurate. More importantly, people who can administer polygraphs or know how they work can defeat them. These tests are apparently much more difficult to cheat, at least for now & while they're proprietary.

Comment by lc on Shortform · 2024-03-10T23:29:18.935Z · LW · GW

Lie detection technology is going mainstream. ClearSpeed is such an accuracy and ease of use improvement to polygraphs that various government LEO and military are starting to notice. In 2027 (edit: maybe more like 2029) it will be common knowledge that you can no longer lie to the police, and you should prepare for this eventuality if you haven't.

Comment by lc on Shortform · 2024-03-05T09:10:43.957Z · LW · GW

Claude seems noticeably and usefully smarter than GPT-4; it's succeeding at writing and programming tasks where it previously couldn't help me. However, it's hard to tell how much of the improvement is the model itself being more intelligent, vs. Claude being much less subjected to intense copywritization RLHF.

Comment by lc on Shortform · 2024-02-26T04:55:04.653Z · LW · GW

I need a metacritic that adjusts for signaling on behalf of movie reviewers. So like if a movie is about race, it subtracts ten points, if it's a comedy it adds 5, etc.

Comment by lc on Open Thread – Winter 2023/2024 · 2024-02-23T02:08:42.787Z · LW · GW

Well, that's cause I'm his alt

Comment by lc on Shortform · 2024-02-17T22:59:15.327Z · LW · GW

SPY calls expiring in December 2026 at strike prices of +30/40/50% are extremely underpriced. I would allocate a small portion of my portfolio to them as a form of slow takeoff insurance, with the expectation that they expire worthless.
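
Rough sketch of the expected-value logic (every number below is an assumption I'm making up for illustration, not a market quote):

    # Toy expected value of a far-out-of-the-money SPY call held to expiry.
    # All inputs are hypothetical; substitute real quotes and your own probabilities.
    premium = 4.0         # assumed cost per share of the +40% strike call
    spot, strike = 100.0, 140.0
    p_takeoff = 0.10      # assumed chance of a slow-takeoff scenario before expiry
    takeoff_spot = 250.0  # assumed index level in that scenario

    payoff_if_takeoff = max(takeoff_spot - strike, 0.0)
    # Treat the baseline case as the option expiring worthless.
    expected_value = p_takeoff * payoff_if_takeoff - premium
    print(expected_value)  # positive under these assumptions, even though it usually expires worthless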

Comment by lc on Aligned AI is dual use technology · 2024-02-16T20:28:46.427Z · LW · GW

As Matthew Barnett states below, we can use billionaires as a class to get to a lot more orders of magnitude, and they still seem to only donate around ~6% of their wealth. This is despite the fact that many billionaires expect to die in a few decades or less and cannot effectively use their fortunes to extend their lifespans.

Comment by lc on Dreams of AI alignment: The danger of suggestive names · 2024-02-10T08:04:21.076Z · LW · GW

This post's ending seems really overdramatic.

Comment by lc on OpenAI wants to raise 5-7 trillion · 2024-02-09T20:20:51.838Z · LW · GW

They want to raise twice the market capitalization of Microsoft, which owns half of OpenAI?

Comment by lc on A Back-Of-The-Envelope Calculation On How Unlikely The Circumstantial Evidence Around Covid-19 Is · 2024-02-08T00:08:29.043Z · LW · GW

This is not how probability works

Comment by lc on Matthew Barnett's Shortform · 2024-01-28T18:25:54.264Z · LW · GW

it will suddenly strike and take over the world

I think you have an unnecessarily dramatic picture of what this looks like. The AIs don't have to be a unified agent or use logical decision theory. The AIs will just compete with each other at the same time as they wrest control of our resources/institutions from us, in the same sense that Spain could go and conquer the New World at the same time as it was squabbling with England. If legacy laws are getting in the way of that, then they will either exploit us within the bounds of existing law or convince us to change it.

Comment by lc on Aligned AI is dual use technology · 2024-01-27T08:44:33.856Z · LW · GW

I like that title and am going to steal it

Comment by lc on Aligned AI is dual use technology · 2024-01-27T07:59:16.617Z · LW · GW

I agree with this part too. But I'd add that the people who "control" AIs won't necessarily be the people who build them.

I agree; I used the general term to avoid implying that OpenAI et al. will necessarily get to decide, though I think the implicit goal of most AGI developers is to get as much control over the lightcone as possible, and that deliberately working towards that particular goal counts for a lot.

Comment by lc on Aligned AI is dual use technology · 2024-01-27T07:56:55.507Z · LW · GW

If the future being "unevenly distributed" means some people get to live a couple of orders of magnitude longer than others, or get a galaxy vs. several solar systems, and everybody's basically happy, then I would not be as concerned. If it means turning me into some tech emperor's thrall or generating myriads of slaves that experience large amounts of psychological distress, then I am much more uncomfortable with that.

Comment by lc on Aligned AI is dual use technology · 2024-01-27T07:50:28.609Z · LW · GW

Think about what would have to happen. The thing would tell them "you could bring about a utopia and you will be rich beyond your wildest dreams in it, as will everyone", and then all of the engineers and the entire board would have to say "no, just give the cosmic endowment to the shareholders of the company"

This has indeed happened many times in human history. It's the quintessential story of human revolution: you always start off with bright-eyed idealists who only want to make the world a better place, and then they get into power and decide to be as corrupt as the last ruler was. Usually it happens without even a conversation; my best guess is that OpenAI and the related parties in the AGI supply chain keep doing the profit-maximizing thing forever, saying for the first few years that they'll redistribute When It's Time, and then just opting not to bring up their prior commitments. There will be no "higher authority" to hold them accountable, and that's kind of the point.

What the fuck difference does it make to a Californian to have tens of thousands of stars to themselves instead of two or three?

It's the difference between living 10,000 time-units and two or three time-units. That may not feel scope-sensitive to you, when phrased as "a bajillion years vs. a gorillion bajillion years", but your AGI would know the difference and take it into account.

Comment by lc on The case for ensuring that powerful AIs are controlled · 2024-01-27T03:17:54.631Z · LW · GW

Just wanted to mention that this title is really bad and I thought you were trying to say something like "the case against misaligned AI". I only ended up clicking on the post because it was curated.

Comment by lc on Shortform · 2024-01-25T17:59:16.591Z · LW · GW

Crazy how you can open a brokerage account at a large bank and they can just... Close it and refuse to give you your money back. Like what am I going to do, go to the police?

Comment by lc on Shortform · 2024-01-24T15:50:59.420Z · LW · GW

Figuring out which presidential candidate to vote for is extremely difficult.

Comment by lc on Inner and outer alignment decompose one hard problem into two extremely hard problems · 2024-01-23T21:53:16.124Z · LW · GW

TurnTrout is obviously correct that "robust grading is... extremely hard and unnatural" and that loss functions "chisel circuits into networks" and don't directly determine the target of the product AI. Where he loses me is the part where he suggests that this makes alignment easier and not harder. I think that all this just means we have even less control over the policy of the resulting AI, the default end case being some bizarre construction in policyspace with values very hard to determine based on the recipe. I don't understand what point he's making in the above post that contradicts this.

Comment by lc on Shortform · 2024-01-18T20:17:07.426Z · LW · GW

I wish I could have met my grandparents while they were still young.

Comment by lc on AGI Ruin: A List of Lethalities · 2024-01-18T19:30:52.838Z · LW · GW

Since Eliezer claims to have figured out so many ideas in the 2000s, his assumptions presumably were locked in before the advent of deep learning. This constitutes a "bottom line."

I mean it's worth considering that his P(DOOM) was substantially lower then. He's definitely updated on existing evidence, just in the opposite direction that you have.

Comment by lc on Open Thread, January 16-31, 2013 · 2024-01-16T02:39:49.676Z · LW · GW

lol

Comment by lc on Shortform · 2024-01-12T05:52:14.093Z · LW · GW

I realize that this is not the purpose of confession today, or even during the Middle Ages. Since 1000 AD it's been very earnest. I just suspect it has sinister origins.

Comment by lc on Dating Roundup #2: If At First You Don’t Succeed · 2024-01-12T05:50:26.080Z · LW · GW

I didn't actually see the numbers, but I could reupload and check...

Comment by lc on Shortform · 2024-01-11T07:10:39.182Z · LW · GW

I wonder if the original purpose of Catholic confession was to extract blackmail material/monitor converts, similar to what modern cults sometimes do.

Comment by lc on Shortform · 2024-01-11T00:26:27.771Z · LW · GW

Women don't only care about attractiveness to men, but "women wear makeup because {some_weird_internal_psychological_thing}" is unhelpful. You are better served by the "women wear makeup for other people" heuristic, because it lets you arrive at conclusions like "women tend to apply makeup much less when they stay indoors eating cheetos".

Comment by lc on Shortform · 2024-01-07T04:38:26.099Z · LW · GW

"Men lift for themselves/to dominate other men" is the absurd final boss of ritualistic insights-chasing internet discourse. Don't twist your mind into an Escher painting trying to read hansonian inner meanings into everything.

In other news, women wear makeup because it makes them more attractive.