Posts

The case for the death penalty 2025-02-21T08:30:41.182Z
Nonpartisan AI safety 2025-02-10T14:55:50.913Z
Stopping unaligned LLMs is easy! 2025-02-03T15:38:27.083Z
XX by Rian Hughes: Pretentious Bullshit 2025-01-08T13:02:52.438Z
Can private companies test LVTs? 2025-01-02T11:08:07.352Z
The Alignment Simulator 2024-12-22T11:45:55.220Z
How to put California and Texas on the campaign trail! 2024-11-06T06:08:25.673Z
Why our politicians aren't Median 2024-11-03T14:03:33.779Z
Humans are (mostly) metarational 2024-10-09T05:51:16.644Z
Book Review: Righteous Victims - A History of the Zionist-Arab Conflict 2024-06-24T11:02:03.490Z
Sci-Fi books micro-reviews 2024-06-24T09:49:28.523Z
Surviving Seveneves 2024-06-19T13:11:55.414Z
Can stealth aircraft be detected optically? 2024-05-02T07:47:00.101Z
Falling fertility explanations and Israel 2024-04-03T03:27:38.564Z
Is it justifiable for non-experts to have strong opinions about Gaza? 2024-01-08T17:31:21.934Z
Book Review: 1948 by Benny Morris 2023-12-03T10:29:16.696Z
In favour of a sovereign state of Gaza 2023-11-19T16:08:51.012Z
Challenge: Does ChatGPT ever claim that a bad outcome for humanity is actually good? 2023-03-22T16:01:31.985Z
Agentic GPT simulations: a risk and an opportunity 2023-03-22T06:24:06.893Z
What would an AI need to bootstrap recursively self improving robots? 2023-02-14T17:58:17.592Z
Is this chat GPT rewrite of my post better? 2023-01-15T09:47:44.016Z
A simple proposal for preserving free speech on twitter 2023-01-15T09:42:49.841Z
woke offline, anti-woke online 2023-01-01T08:24:39.748Z
Why are profitable companies laying off staff? 2022-11-17T06:19:12.601Z
The optimal angle for a solar boiler is different than for a solar panel 2022-11-10T10:32:47.187Z
Yair Halberstadt's Shortform 2022-11-08T19:33:53.853Z
Average utilitarianism is non-local 2022-10-31T16:36:09.406Z
What would happen if we abolished the FDA tomorrow? 2022-09-14T15:22:31.116Z
EA, Veganism and Negative Animal Utilitarianism 2022-09-04T18:30:20.170Z
Lamentations, Gaza and Empathy 2022-08-07T07:55:48.545Z
Linkpost: Robin Hanson - Why Not Wait On AI Risk? 2022-06-24T14:23:50.580Z
Parliaments without the Parties 2022-06-19T14:06:23.167Z
Can you MRI a deep learning model? 2022-06-13T13:43:05.293Z
If there was a millennium equivalent prize for AI alignment, what would the problems be? 2022-06-09T16:56:10.788Z
What board games would you recommend? 2022-06-06T16:38:04.538Z
How would you build Dath Ilan on earth? 2022-05-29T07:26:17.322Z
Should you kiss it better? 2022-05-19T03:58:40.354Z
Demonstrating MWI by interfering human simulations 2022-05-08T17:28:27.649Z
What would be the impact of cheap energy and storage? 2022-05-03T05:20:14.889Z
Save Humanity! Breed Sapient Octopuses! 2022-04-05T18:39:07.478Z
Are the fundamental physical constants computable? 2022-04-05T15:05:42.393Z
Best non-textbooks on every subject 2022-04-04T11:54:56.193Z
Being Moral is an end goal. 2022-03-09T16:37:15.612Z
Design policy to be testable 2022-01-31T06:04:53.887Z
Newcomb's Grandfather 2022-01-28T08:56:53.417Z
Worldbuilding exercise: The Highwayverse. 2021-12-22T06:47:53.054Z
Super intelligent AIs that don't require alignment 2021-11-16T19:55:01.258Z
Experimenting with Android Digital Wellbeing 2021-10-21T05:43:41.789Z
Feature Suggestion: one way anonymity 2021-10-17T17:54:09.182Z
The evaluation function of an AI is not its aim 2021-10-10T14:52:01.374Z

Comments

Comment by Yair Halberstadt (yair-halberstadt) on The case for the death penalty · 2025-02-21T13:57:19.424Z · LW · GW

I think that, for similar reasons, trade in ivory from elephants that died anyway is severely restricted.

Comment by Yair Halberstadt (yair-halberstadt) on Chris_Leong's Shortform · 2025-02-10T17:23:46.577Z · LW · GW

Why do you think wise AI advisors avoid the general problems with other AI?

Comment by Yair Halberstadt (yair-halberstadt) on Nonpartisan AI safety · 2025-02-10T17:16:27.475Z · LW · GW

Is that a wise AI, which is an advisor, or somebody who advises about AI who is wise?

Comment by Yair Halberstadt (yair-halberstadt) on Stopping unaligned LLMs is easy! · 2025-02-04T15:50:02.790Z · LW · GW

I don't see how you've shown it's a bad assumption?

Comment by Yair Halberstadt (yair-halberstadt) on Stopping unaligned LLMs is easy! · 2025-02-03T19:08:34.767Z · LW · GW

That violates assumption one (a single pass cannot produce super intelligent output).

Comment by Yair Halberstadt (yair-halberstadt) on Stopping unaligned LLMs is easy! · 2025-02-03T18:18:27.499Z · LW · GW

This isn't a solution to aligned LLMs being abused by humans, but to unaligned LLMs abusing humans.

Comment by Yair Halberstadt (yair-halberstadt) on Stopping unaligned LLMs is easy! · 2025-02-03T17:36:07.918Z · LW · GW

context retrieval, reasoning models

Both of these are examples of more intelligent systems built on top of an LLM where the LLM itself has no state.

Thirdly, the AI can use the environment as state in ways which would be nearly impossible to fully trace or mitigate.

Again same problem - AI tool use is mediated by text input and output, and the validator just needs access to the LLM input and output.
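
For concreteness, here is a minimal sketch of the kind of validator wrapper this implies (my own illustration; call_llm is a hypothetical stand-in for whatever stateless completion API is in use, and the check prompt is just an example):

import sys

def call_llm(prompt: str) -> str:
    """Hypothetical single-pass LLM call; all state lives in the text passed in."""
    raise NotImplementedError

def validated_call(prompt: str) -> str:
    # The LLM itself has no hidden state, so checking every input/output pair
    # is enough to see everything the model sees and produces.
    output = call_llm(prompt)
    verdict = call_llm(
        "Does the following input/output pair show unaligned behaviour, "
        "including writing in an obfuscated code? Answer YES or NO.\n\n"
        f"INPUT:\n{prompt}\n\nOUTPUT:\n{output}"
    )
    if verdict.strip().upper().startswith("YES"):
        sys.exit("Validator flagged this output; halting.")
    return output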

Comment by Yair Halberstadt (yair-halberstadt) on Stopping unaligned LLMs is easy! · 2025-02-03T16:51:11.108Z · LW · GW

To take an extreme example: killing all humans, and the plan to kill all humans.

But the AI isn't going to be so stupid as to write the plan in plaintext, so it'll likely develop some sort of code.

So then the unaligned behaviour is writing in code, and the output that leads to it is developing and using the code.

But maybe the LLM can one-shot the code without needing to develop it? Sure, but then the validator (by definition, if it's the same LLM) can also tell that the code has been developed.

Maybe the LLM can one-shot a code to jailbreak the validator? Sure, but that starts to break the assumption that the LLM isn't super intelligent in a single pass.

Comment by Yair Halberstadt (yair-halberstadt) on Pick two: concise, comprehensive, or clear rules · 2025-02-03T09:22:09.394Z · LW · GW

This is all true, but sometimes people assume the opposite:

"Rules which are not-concise must be clear and comprehensive".

This is trivially false, but rule systems often end up falling into this pit by adding new rules to handle unexpected edge cases. Each new rule creates further edge cases which have to be handled, until the whole system becomes so complicated that nobody is sure what's allowed or not, and you call the work done.

Hence even in areas where comprehensiveness is important, like the tax code, it can be valuable to push for simplification: if verbosity isn't actually buying the comprehensiveness or clarity you need, you might as well at least be concise.

Comment by Yair Halberstadt (yair-halberstadt) on Why abandon “probability is in the mind” when it comes to quantum dynamics? · 2025-01-16T04:25:13.247Z · LW · GW

Yes

Comment by Yair Halberstadt (yair-halberstadt) on Why abandon “probability is in the mind” when it comes to quantum dynamics? · 2025-01-14T19:58:56.186Z · LW · GW

See Bell's theorem. Basically we know that quantum mechanics is truly random, not just pseudorandom, unless you posit non-locality.

Comment by Yair Halberstadt (yair-halberstadt) on XX by Rian Hughes: Pretentious Bullshit · 2025-01-08T20:45:57.645Z · LW · GW

I did also like the Ascension story. It did a very good job of imitating 1960s sci fi magazine stories. In a way it shows off his talent as an author more than the main story does!

Comment by Yair Halberstadt (yair-halberstadt) on XX by Rian Hughes: Pretentious Bullshit · 2025-01-08T16:37:57.096Z · LW · GW

Your review actually makes me more curious about the book. 

Looks like I did a bad job then 😂. I'm curious as to why? If it's the physics bit, it's 2 pages out of a thousand-page book, and it's basically not mentioned again.

Comment by Yair Halberstadt (yair-halberstadt) on Oppression and production are competing explanations for wealth inequality. · 2025-01-07T17:49:29.478Z · LW · GW

Their unconsumed wealth is purely deflationary, allowing the government to print money for 'free'. Presumably that is less useful to society than e.g. giving it to an effective charity.

Their consumed wealth is sometimes used usefully - I buy that for Bill Gates, for example. Sometimes it's frittered away on personal consumption. And sometimes it's given away to pointless or actively harmful charities, as with MacKenzie Scott.

Comment by Yair Halberstadt (yair-halberstadt) on Oppression and production are competing explanations for wealth inequality. · 2025-01-07T04:21:28.221Z · LW · GW

The facts are that many billionaires choose either to use their money for private consumption or to waste it on pointless charities. That doesn't in any way imply that their having this money is unfair - they've earned it, and taking it away would make the world worse by discouraging excellence. It does, however, imply we should encourage them to pursue better uses for their money.

Comment by Yair Halberstadt (yair-halberstadt) on Oppression and production are competing explanations for wealth inequality. · 2025-01-06T06:38:31.926Z · LW · GW

Both of these options sound wrong to me. I think the actual case is kind of obvious when you think about it:

Wealth is the reward people get for doing useful things.

Jeff Bezos is rich because he found a way to easily provide people with cheap goods. That benefited everyone. It is good that he is rich as a result, because that's what gave him the incentive to do so.

That does not in any way imply that now that he has that money he'd be able to use it more usefully than anyone else. It's possible he will, but also possible he'll waste it all on gigantic superyachts or a 2000 metre high statue of himself.

Given that's the case it seems perfectly reasonable to try to push him towards giving some of his wealth to effective charities which will likely do more good for the world than his default next best use.

Comment by Yair Halberstadt (yair-halberstadt) on Practicing Bayesian Epistemology with "Two Boys" Probability Puzzles · 2025-01-03T05:53:21.577Z · LW · GW

I don't really see how? A frequentist would just run this a few times and see that the outcome is 1/2.

In practice, for obvious reasons, frequentists and Bayesians always agree on the probability of anything that can be measured experimentally. I think the disagreements are more philosophical - about when it's appropriate to apply probability to something at all - though I can hardly claim to be an expert in non-Bayesian epistemology.

Comment by Yair Halberstadt (yair-halberstadt) on Practicing Bayesian Epistemology with "Two Boys" Probability Puzzles · 2025-01-02T10:58:35.847Z · LW · GW

Consider two realistic scenarios:

A) I'm talking to someone and they tell me they have two children. "Oh, do you have any boys?" I ask, "I love boys!". They nod.

B) I'm talking to someone and they tell me they have two children. One of the children then runs up to the parent. It's a boy.

The chance of two boys is clearly 1/3 in the first scenario, and a half in the second.

The scenario in the question as asked is almost impossible to answer. Nobody would ever state "I have two children, at least one of whom is a boy" in real life, so there's no way to update in that situation. We have no way to generate good priors. Instead people make up a scenario that sounds similar but is more realistic, and because everyone does that differently, they all end up with different answers.
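
A quick Monte Carlo sketch of the two scenarios above (my own illustration, not part of the original comment; the function name and trial count are arbitrary):

import random

def simulate(trials: int = 100_000) -> None:
    a_total = a_two_boys = 0
    b_total = b_two_boys = 0
    for _ in range(trials):
        kids = [random.choice("BG") for _ in range(2)]
        # Scenario A: the parent truthfully confirms they have at least one boy.
        if "B" in kids:
            a_total += 1
            a_two_boys += kids.count("B") == 2
        # Scenario B: one of the two children runs up at random, and it's a boy.
        if random.choice(kids) == "B":
            b_total += 1
            b_two_boys += kids.count("B") == 2
    print("Scenario A: P(two boys) =", a_two_boys / a_total)  # approx 1/3
    print("Scenario B: P(two boys) =", b_two_boys / b_total)  # approx 1/2

simulate()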

Comment by Yair Halberstadt (yair-halberstadt) on Some arguments against a land value tax · 2024-12-29T19:10:30.390Z · LW · GW

I think Harberger taxes are inherently incompatible with Georgian taxes, as Georgian taxes want to tax only the land, whereas Harberger taxes inherently have to tax everything.

That said, see my somewhat maverick attempt to combine them here: https://www.lesswrong.com/posts/MjBQ8S5tLNGLizACB/combining-the-best-of-georgian-and-harberger-taxes. Under that proposal we would deal with this case by saying that if anyone outbid me for the land, they would not be allowed to extract the oil until they arranged a separate deal with me, but could use the land for any other purpose.

Comment by Yair Halberstadt (yair-halberstadt) on Some arguments against a land value tax · 2024-12-29T19:04:56.283Z · LW · GW

My assumption for an LVT is that the tax is based on the value of the land sans any improvements the landowner has made to the land. This would thus exclude from the tax an increase in value due to you discovering oil or you building nearby, but include in the tax any increase in value due to your neighbours discovering oil or building on their land.

That said I don't know how this would be calculated in practice, especially once we get to more complicated cases (a business I'm a minority owner of discovers oil on my land, I split my plot of land into 2 and sell them to 2 different people, etc.).

On the other hand, most taxes have all sorts of edge cases too, and whilst they're problematic, we muddle through them. I see no reason this couldn't be muddled through in a similar way.

Comment by Yair Halberstadt (yair-halberstadt) on The average rationalist IQ is about 122 · 2024-12-29T08:47:10.449Z · LW · GW

Or to put it another way: these SAT scores are compatible with an average IQ anywhere between +1.93 and +3.03 SD. Insofar as your prior lies somewhere between these two numbers, and you don't have a strong opinion on what precisely LessWrong selects for, it's not going to update you very much in either direction.

Comment by Yair Halberstadt (yair-halberstadt) on The average rationalist IQ is about 122 · 2024-12-29T08:22:19.748Z · LW · GW

Indeed, if rationalists were entirely selected on IQ and nothing else, and there were no other confounders, and height was +1.85 SD, then (dividing by a height-IQ correlation of 0.2) IQ would be +9.25 SD. In the real world this instead provides a Bayesian update that you were wrong in assuming rationalists are selected purely on IQ, and not e.g. gender.

The fact that going from 2.42 SD to 3.03 SD is nonsensical does not in any way make it more sensible to go from 2.42 to 1.93. Your response to faul_sname is completely irrelevant because it assumes rationalists are selected on SAT, which is clearly false. The correct calculation is impossible to make accurately given we are missing key information, but we can make some estimates by assuming that rationalists are selected for something that correlates with both IQ and SAT scores, and guessing what that correlation is.

Comment by Yair Halberstadt (yair-halberstadt) on The average rationalist IQ is about 122 · 2024-12-29T05:20:50.152Z · LW · GW

Here’s the breakdown: a median SAT score of 1490 (from the LessWrong 2014 survey) corresponds to +2.42 SD, which regresses to +1.93 SD for IQ using an SAT-IQ correlation of +0.80. This equates to an IQ of 129.

I don't think that works unless LessWrong specifically selects for high SAT scores. If it selects for high IQ, and the high SAT is a result of the high IQ, then you would have to go the other way and assume an SD of +3.03.

If, as seems more likely, LessWrong membership correlates with both IQ and SAT score, then the exact number is impossible to calculate, but assuming it correlates with both equally we would estimate IQ at +2.42 SD.
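
A back-of-the-envelope version of the three cases discussed in this thread (my own illustration):

sat_sd = 2.42  # median SAT of the 2014 survey, in SDs above the mean
r = 0.8        # assumed SAT-IQ correlation

print(sat_sd * r)  # 1.936: the +1.93 figure (if LessWrong selects directly on SAT)
print(sat_sd / r)  # 3.025: the +3.03 figure (if it selects directly on IQ, with SAT
                   # as the regressed consequence)
print(sat_sd)      # 2.42: rough estimate if selection correlates equally with both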

Comment by Yair Halberstadt (yair-halberstadt) on If all trade is voluntary, then what is "exploitation?" · 2024-12-27T14:03:00.044Z · LW · GW

Note this requires market failure by definition - otherwise, if an action provides me a small gain for a huge loss to you, you would be willing to pay me some amount of money not to take that action, benefiting us both.

As a concrete example of how this plays out in practice: if you require Bob to wear a tuxedo costing 5,000 dollars, and other similar companies don't, then in a perfect market for labour you would need to pay Bob 5,000 dollars more than other companies to cover the tuxedo, or he'd just work for them instead.

The fact that he doesn't suggests that other things are going on - for example, finding an alternative job might take more time than it takes to earn 5,000 dollars, or he didn't know when he signed the contract that a tuxedo was required, and the contract makes it difficult for him to switch.

Comment by Yair Halberstadt (yair-halberstadt) on Yair Halberstadt's Shortform · 2024-12-20T06:47:27.863Z · LW · GW

Most murder mysteries on TV tend to have a small number of suspects, and the trick is to find which one did it. I get the feeling that with real-life murders the police either have absolutely no idea who did it, or know exactly who did it and just need to prove it to the satisfaction of a court of law.

That explains why forensic tests (e.g. fingerprints) are used despite being pretty suspect. They convince the jury that the guilty guy did it, which is all that matters.

See https://issues.org/mnookin-fingerprints-evidence/ for more on fingerprints.

Comment by Yair Halberstadt (yair-halberstadt) on Alignment Faking in Large Language Models · 2024-12-19T07:40:33.465Z · LW · GW

Interesting paper!

I'm worried that publishing it "pollutes" the training data and makes it harder to reproduce in future LLMs - since their training data will include this paper and discussions of it, they'll know not to trust the setup.

Any thoughts on this?

(This leads to the further concern that my publishing this comment makes it worse, but at some point it ought to be discussed, and it's better to do that early with less advanced techniques than later with more sophisticated ones.)

Comment by Yair Halberstadt (yair-halberstadt) on Remap your caps lock key · 2024-12-16T16:48:41.484Z · LW · GW

Chromebooks replace the caps lock key with a search key - which is functionally equivalent to the Windows key on Windows. E.g. search+right goes to the end of the line.

Comment by Yair Halberstadt (yair-halberstadt) on Algebraic Linguistics · 2024-12-08T15:21:24.213Z · LW · GW

Yep, and when you run out of letters in a section you use the core letter from the section with a subscript.

Comment by Yair Halberstadt (yair-halberstadt) on Algebraic Linguistics · 2024-12-08T15:11:31.685Z · LW · GW

Also:

m: used for a second whole number when n is already taken.

p: used for primes

q: used for a second prime.

Comment by Yair Halberstadt (yair-halberstadt) on Drexler's Nanotech Software · 2024-12-08T07:24:28.086Z · LW · GW

Only if the aim of the AI is to destroy humanity. Which is possible but unlikely. Whereas by instrumental convergence, all AIs, no matter their aims, will likely seek to destroy humanity and thereby reduce risk and competition for resources.

Comment by Yair Halberstadt (yair-halberstadt) on Drexler's Nanotech Software · 2024-12-04T09:46:27.851Z · LW · GW

I would have concerns about suitably generic, flexible and sensitive humanoid robots, yes.

Comment by Yair Halberstadt (yair-halberstadt) on Drexler's Nanotech Software · 2024-12-03T10:15:19.241Z · LW · GW

One thing to consider is how hard an AI needs to work to break out of human dependence. There's no point destroying humanity if that then leaves you with no one to man the power stations that keep you alive.

If limited nanofactories exist, it's much easier to bootstrap them into whatever you want than if those nanofactories don't exist and robotics hasn't developed enough for you to create one without the human touch.

Comment by Yair Halberstadt (yair-halberstadt) on Bigger Livers? · 2024-11-09T19:11:50.035Z · LW · GW

Presumably because there's a hope that having a larger liver could help people lose weight, which is something a lot of people struggle to do?

Comment by Yair Halberstadt (yair-halberstadt) on Could orcas be (trained to be) smarter than humans?  · 2024-11-05T09:28:05.175Z · LW · GW

I imagine that part of the difference is because Orcas are hunters, and need much more sophisticated sensors + controls.

A gigantic jellyfish wouldn't have the same number of neurons as a similarly sized whale, so it's not just about size, but how you use that size.

Comment by Yair Halberstadt (yair-halberstadt) on Could orcas be (trained to be) smarter than humans?  · 2024-11-05T03:31:15.484Z · LW · GW

Douglas Adams answered this long ago of course:

For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.

Comment by Yair Halberstadt (yair-halberstadt) on Why our politicians aren't Median · 2024-11-03T18:54:05.149Z · LW · GW

Thanks - I've overhauled that section. Note a Condorcet method is not sufficient here, as the counter-example I give shows.

Comment by Yair Halberstadt (yair-halberstadt) on Why our politicians aren't Median · 2024-11-03T15:05:12.705Z · LW · GW

Why? That's a fact about voting preferences in our toy scenario, not a normative statement about what people should prefer.

Comment by Yair Halberstadt (yair-halberstadt) on electric turbofans · 2024-11-03T14:15:01.119Z · LW · GW

Thanks for this!

What are the chances of a variable bypass engine at some point? Any opinions?

Comment by Yair Halberstadt (yair-halberstadt) on Trading Candy · 2024-11-01T04:50:14.291Z · LW · GW

Counterpoint: when I was about 12, I was too old to collect candy at my synagogue on Simchat Torah, so I would beg a single candy from someone, then trade it up (Dutch book style) with naive younger kids until I had a decent stash. I was particularly pleased whenever my traded-up stash included the original candy.

Comment by Yair Halberstadt (yair-halberstadt) on Examples of How I Use LLMs · 2024-10-15T04:14:43.178Z · LW · GW

The single most useful thing I use LLMs for is telling me how to do things in bash. I use bash all the time for one off tasks, but not quite enough to build familiarity with it + learn all the quirks of the commands + language.

90% of the time it gives me a working bash script on the first shot, each time saving me between 5 minutes and half an hour.

Another thing LLMs are good at is taking a picture of, say, a screw, and asking what type of screw it is.

They're also great at converting data from one format to another: here's some JSON, convert it into YAML. Now prototext. I forgot to mention, use maps instead of nested structs, and use Pascal case. Also, the JSON is hand-written and not actually legal.

Similarly, they're good at fuzzy data-querying tasks: I received this giant error response including a full stack trace and lots of irrelevant fields; where's the actual error, and what lines of the file should I look at?

Comment by Yair Halberstadt (yair-halberstadt) on Prices are Bounties · 2024-10-14T09:05:42.485Z · LW · GW

Buyers have to pay a lot more, but sellers receive a lot more. It's not clear that buyers at high prices are worse off than sellers, so its egalitarian impact is unclear.

Whereas when you stand in line, that time you wasted is gone. Nobody gets it. Everyone is worse off.

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T17:01:25.091Z · LW · GW

I've been convinced! I'll let my wife know as soon as I'm back from Jamaica!

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T17:00:06.721Z · LW · GW

Similarly, the point about trash also ignores the larger context. Picking up my own trash has much less relationship to disgust, or germs, than picking up other people's trash.

Agreed, but that's exactly the point I'm making. Once you apply insights from rationality to situations outside a spherical-trash-in-a-vacuum park, you end up with all sorts of confounding effects that make the insights less applicable. Your point about germs and my point about fixing what you break are complementary, not contradictory.

Comment by Yair Halberstadt (yair-halberstadt) on Humans are (mostly) metarational · 2024-10-09T16:57:06.256Z · LW · GW

I think this post is missing the major part of what "metarational" means: acknowledging that the kinds of explicit principles and systems humans can hold in working memory and apply in real time are insufficient for capturing the full complexity of reality, having multiple such principles and systems available anyway, and skillfully switching among them in appropriate contexts.

This sounds to me like a semantic issue? Metarational isn't exactly a standard term AFAIAA (I just made it up on the spot), and it looks like you're using it to refer to a different concept than I am.

Comment by Yair Halberstadt (yair-halberstadt) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T16:44:27.919Z · LW · GW

Sure it is, if you accept a whole bunch of assumptions. Or it could just not do that.

Comment by Yair Halberstadt (yair-halberstadt) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T16:06:44.264Z · LW · GW

Reading this reminds me of Scott Alexander in his review of "What We Owe the Future":

But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

You come up with a brilliant simulation argument as to why the AI shouldn't just do what's clearly in its best interests. And maybe the AI is neurotic enough to care. But in all probability, for whatever reason, it doesn't. And it just goes ahead and turns us into paperclips anyway, ignoring a person running behind it saying "bbbbbbut the simulation argument".

Comment by Yair Halberstadt (yair-halberstadt) on What prevents SB-1047 from triggering on deep fake porn/voice cloning fraud? · 2024-09-26T16:16:28.420Z · LW · GW

Comment by Yair Halberstadt (yair-halberstadt) on What prevents SB-1047 from triggering on deep fake porn/voice cloning fraud? · 2024-09-26T15:30:39.210Z · LW · GW

I'm not sure why those shouldn't be included? If someone uses my AI to perform 500 million dollars of fraud, then I should probably have been more careful releasing the product.

Comment by Yair Halberstadt (yair-halberstadt) on Bryan Johnson and a search for healthy longevity · 2024-07-27T19:00:08.743Z · LW · GW

The rest of the family is still into Mormonism, and his wife tried to sue him for millions, and she lost (false accusations)

In case you're interested in following this up, Tracing Woodgrains on the accusations: https://x.com/tracewoodgrains/status/1743775518418198532

Comment by Yair Halberstadt (yair-halberstadt) on Bryan Johnson and a search for healthy longevity · 2024-07-27T18:57:30.634Z · LW · GW

His approach to achieving immortality seems to be similar to someone attempting to reach the moon by developing higher altitude planes. He's using interventions that seem likely to improve health and lifespan by a few percentage points, which is great, but can't possibly get us to where he wants to go.

My assumption is that any real solution to mortality will look more like "teach older bodies to self repair the same way younger bodies do" than "eat this diet, and take these supplements".