Posts

If the DoJ goes through with the Google breakup, where does DeepMind end up? 2024-10-12T05:06:50.996Z
Thoughts on Francois Chollet's belief that LLMs are far away from AGI? 2024-06-14T06:32:48.170Z
What happens to existing life sentences under LEV? 2024-06-09T17:49:39.804Z
Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) 2024-05-19T02:18:53.524Z
Supposing the 1bit LLM paper pans out 2024-02-29T05:31:24.158Z
OpenAI wants to raise 5-7 trillion 2024-02-09T16:15:00.421Z
O O's Shortform 2023-06-03T23:32:12.924Z

Comments

Comment by O O (o-o) on o3 · 2024-12-21T01:36:21.672Z · LW · GW

I think a lot of this is wishful thinking from safetyists who want AI development to stop. This may be reductionist, but almost every pause historically can be explained by economics.

Nuclear: military use is wholly owned by the state and was developed to its saturation point (i.e., once you have nukes that can kill all your enemies, there is little reason to develop them further). Energy-wise, it was supposedly hamstrung by regulation, but in countries like China, where development went unfettered, nuclear is still not dominant. This tells me a lot of the reason it isn't being developed is that it isn't economical.

For bio-related things, Eroom's law reigns supreme. It is just economically unviable to discover drugs the way we do. Despite this, it's clear that bioweapons are regularly researched by government labs. The USG being so eager to fund gain-of-function research despite its bad optics should tell you as much.

Or maybe they will accidentally ban AI too due to being a dysfunctional autocracy - 

I remember many essays from people all over this site on how China wouldn't be able to get to X-1 nm (or the crucial step for it) for decades, and China would always figure out a way to get to that node or step within a few months. They surpassed our chip-lithography expectations for them. They are very competent, and they are run by probably the most competent government bureaucracy in the world. I don't know what it is, but people keep underestimating China's progress. When they aim their efforts at a target, they almost always achieve it.

Rapid progress is a powerful attractor state that requires a global hegemon to stop. China is very keen on the possibilities of AI, which is why they stop at nothing to get their hands on Nvidia GPUs. They also have literally no reason not to develop a centralized project they are fully in control of. We have superhuman AIs that seem quite easy to control already. What is stopping this centralized project on their end? No one is buying that even o3, which is nearly superhuman at math and coding, and probably at lots of scientific research, is going to attempt world takeover.

Comment by O O (o-o) on o3 · 2024-12-20T23:24:48.301Z · LW · GW

We still somehow got the steam engine, electricity, cars, etc.  

There is an element of international competition to it. If we slack here, China will probably raise armies of robots with unlimited firepower and take over the world. (They constantly show aggression.)

The longshoremen's strike is only allowed (I think) because the West Coast ports did automate and are somehow less efficient than the East Coast ones, for example.

Comment by O O (o-o) on o3 · 2024-12-20T22:57:27.857Z · LW · GW

Oh, I guess I was assuming automation of coding would result in a step change in research in every other domain. I know that coding is actually one of the biggest blockers in much of AI research and in automation in general.

It might soon become cost effective to write bespoke solutions for a lot of labor jobs for example. 

Comment by O O (o-o) on o3 · 2024-12-20T22:34:29.872Z · LW · GW

Why would that be the likely case? Are you sure it's likely or are you just catastrophizing?

Comment by O O (o-o) on o3 · 2024-12-20T22:29:29.094Z · LW · GW

catastrophic job loss that destroys the global economy?

I expect the US or Chinese government to take control of these systems sooner rather than later to maintain sovereignty. I also expect there will be some force to counteract the rapid nominal deflation that would happen if there were mass job loss. Every ultra-rich person now relies on billions of people buying their products to give their companies the valuations they have.

I don't think people want nominal deflation even if it's real economic growth. This will result in massive printing from the Fed that probably lands in people's pockets (like the COVID checks).

Comment by O O (o-o) on o3 · 2024-12-20T22:27:19.733Z · LW · GW

While I'm not surprised by the pessimism here, I am surprised at how much of it is focused on personal job loss. I thought there would be more existential dread. 

Comment by O O (o-o) on O O's Shortform · 2024-12-09T05:06:16.858Z · LW · GW

It’s better at questions but subjectively there doesn’t feel like there’s much transfer. It still gets some basic questions wrong.

Comment by O O (o-o) on O O's Shortform · 2024-12-08T03:00:39.265Z · LW · GW

o1's release has made me think Yann LeCun's AGI timelines are probably more correct than shorter ones.

Comment by O O (o-o) on Akash's Shortform · 2024-11-20T18:38:58.477Z · LW · GW

Why is the built-in assumption for almost every single post on this site that alignment is impossible and we need a 100-year international ban to survive? This does not seem particularly intellectually honest to me. It is very possible no international agreement is needed. Alignment may turn out to be quite tractable.

Comment by O O (o-o) on O O's Shortform · 2024-11-17T17:57:00.244Z · LW · GW

I guess in the real world the rules aren't harder per se, just less clear and not written down. I think both the rules and the tools needed to solve contest math questions at least feel harder than the vast majority of rules and tools human minds deal with. Someone like Terence Tao, who is a master of these, excelled in every subject when he was a kid (iirc).

I think LLMs have a pretty good model of human behavior, so for anything related to human judgment, this in theory isn't why they're not doing well.

And where the rules are unwritten/unknown (say, biology), are they not at least captured by current methods? The next steps are probably something like baking the intuitions of AlphaFold into something like o1, whatever that means. R&D is what's important, and there are generally vast sums of data there.

Comment by O O (o-o) on O O's Shortform · 2024-11-17T05:36:38.259Z · LW · GW

o1 probably scales to superhuman reasoning:

o1 given maximal compute solves most AIME questions (one of the hardest benchmarks in existence). If this isn't gamed by having the solution somewhere in the corpus, then:

- you can make the base model more efficient at thinking
- you can implement the base model more efficiently on hardware
- you can simply wait for hardware to get better
- you can create custom inference chips

Anything wrong with this view? I think agents will be unlocked shortly along with, or soon after, this too.
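
One way to see why that list matters: if each lever gives an independent multiplicative gain, they compound. A minimal sketch, with purely assumed numbers for each lever (none of these are measurements):

```python
# Illustrative only: assumed gains for each lever from the list above.
# If the gains are independent, the total speedup is their product.
levers = {
    "more efficient base model": 3.0,       # assumed algorithmic gain
    "better hardware implementation": 2.0,  # assumed kernel/runtime gain
    "next hardware generation": 2.0,        # assumed FLOP-per-dollar gain
    "custom inference chips": 4.0,          # assumed ASIC-over-GPU gain
}

total = 1.0
for gain in levers.values():
    total *= gain

cost_per_query = 1000.0  # hypothetical $ for one maximal-compute AIME run
print(f"combined speedup: {total:.0f}x")           # -> 48x
print(f"new cost: ${cost_per_query / total:.2f}")  # -> $20.83
```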

Comment by O O (o-o) on O O's Shortform · 2024-11-10T05:57:58.531Z · LW · GW

Where are all the successful rationalists? 

https://x.com/JDVance/status/1854925621425533043

Is it too soon to say a rationalist is running the White House?

Comment by O O (o-o) on O O's Shortform · 2024-10-24T01:40:09.231Z · LW · GW

https://x.com/arcprize/status/1849225898391933148?s=46&t=lZJAHzXMXI1MgQuyBgEhgA

My read of the events: Anthropic is trying to raise money and rushed out a half-baked model.

3.5 Opus has not yet had the desired results. 3.5 Sonnet, being easier to iterate on, was tuned to beat OpenAI's model on some arbitrary benchmarks in an effort to wow investors.

With the failed run of Opus, they presumably tried to get o1-like reasoning results or some agentic breakthrough. The previous 3.5 Sonnet was also particularly good because of a fluke of the training-run RNG (same as gpt-4-0314), which makes it harder for iterations to beat it.

They are probably now rushing to scale inference-time compute. I wonder if they initially tried doing something with steering vectors for 3.5 Opus.

Comment by O O (o-o) on O O's Shortform · 2024-10-04T22:40:48.909Z · LW · GW

A while ago I predicted a more-likely-than-not chance (60%) that Anthropic would run out of money trying to compete with OpenAI, Meta, and DeepMind. Both then and now, they have no image, video, or voice generation, unlike the others, and do not process image inputs as well either.

OpenAI's costs are reportedly around $8.5 billion. Despite being flush with cash from a recent funding round, they were allegedly on the brink of bankruptcy and required a new, even larger funding round. Anthropic does not have the same deep pockets as the other players. Big tech companies like Apple that are not deeply invested in AI seem wary of investing in OpenAI; it stands to reason Amazon may be as well. It is looking more likely that Anthropic will be left in the dust (80%).

The only winning path I see is that a new, more compute-efficient architecture emerges, they get there first, and they manage to kick off RSI before better-funded competitors rush in to copy them. Since this seems unlikely, I think they are not going to fare well.

Comment by O O (o-o) on Ruby's Quick Takes · 2024-09-30T03:46:13.184Z · LW · GW

Really? He seems pretty bullish. He thinks it will co-author math papers pretty soon. I think he just doesn't think about, or at least doesn't state, his thoughts on implications outside of math.

Comment by O O (o-o) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-23T06:31:00.903Z · LW · GW

Except billionaires give out plenty of money for philanthropy. If the AI has a slight preference for keeping humans alive, things probably work out well. Billionaires have a slight preference for the things they care about over random charities. I don't see how preferences don't apply here.

This is a vibes-based argument using math incorrectly. A randomly chosen preference from a distribution of preferences is unlikely to involve humans, but that's not necessarily what we're looking at here, is it?
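
To make the "slight preference" point concrete, here is one hedged way to formalize it (my framing, not the original post's): write $w_h$ for the AI's weight on humans staying alive, $w_r$ for its weight on marginal resources, and $c$ for the fraction of reachable resources it costs to spare Earth.

```latex
% Sketch under the assumed notation above: spare humans whenever the
% preference term beats the resource cost.
\[
  \text{spare humans} \iff w_h > c \, w_r, \qquad c \lesssim 10^{-9},
\]
% so even a tiny ratio w_h / w_r clears the bar by orders of magnitude;
% the substantive dispute is whether w_h is exactly zero.
```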

Comment by O O (o-o) on O O's Shortform · 2024-09-04T02:45:56.622Z · LW · GW

The chip export controls are largely irrelevant. Westerners badly underestimate the Chinese: they have caught up to 7nm at scale. They have also reached 5nm, just not at scale. The original chip ban was meant to stop China from going sub-14nm. Instead, we may now have simply bifurcated advanced chip capabilities.

The general argument before was "In 10 years, when the Chinese catch up to where TSMC is now, TSMC will be another 10 years ahead." Now the only missing piece for China is EUV, and the common argument is that same line with ASML subbed in for TSMC. Somehow, I doubt this will be a long-term blocker.

Best case for the Chinese chip industry, they just clone EUV. Worst case, they find an alternative. Monopolies and first movers often don't have the most efficient solution.

Comment by O O (o-o) on O O's Shortform · 2024-08-26T23:31:46.110Z · LW · GW

Talk through the grapevine:

Safety is implemented in a highly idiotic way at non-frontier but well-funded labs (and possibly at frontier ones too?).

Think raising a firestorm over the 10th-leading mini LLM being potentially jailbroken.

The effect is that employees get mildly disillusioned with safety-ism, and it gets seen as unserious. There should have been a hard distinction between existential risks and standard corporate censorship. "Notkilleveryoneism" is simply too ridiculous-sounding to spread. But maybe memetic selection pressures make it impossible for the irrelevant version of safety not to dominate.

Comment by O O (o-o) on Linch's Shortform · 2024-08-26T22:52:52.997Z · LW · GW

Talk is cheap. It's hard to say how they will react as both risks and upsides remain speculative. From the actual plenum, it's hard to tell if Xi is talking about existential risks.

Comment by O O (o-o) on O O's Shortform · 2024-08-09T21:58:07.595Z · LW · GW

Red-teaming is being done in a way that doesn't reduce existential risk at all but instead makes models less useful for users. 

https://x.com/shaunralston/status/1821828407195525431

Comment by O O (o-o) on Re: Anthropic's suggested SB-1047 amendments · 2024-08-01T20:08:49.220Z · LW · GW

In other contexts, it seems it's quite common for a disgruntled employee to go to a journalist and blow up a minor problem. Why can't this similarly be abused if the bar isn't high?

Comment by O O (o-o) on O O's Shortform · 2024-07-27T10:26:24.152Z · LW · GW

Feels like test-time training will eat the world. People thought it was search, but make AlphaProof 100x more efficient (3 days down to about 40 minutes) and you probably have something superhuman.
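
For the arithmetic behind that "100x" (a quick sanity check, nothing more):

```python
# 3 days expressed in minutes, divided by the 40-minute target.
days = 3
minutes = days * 24 * 60   # 4,320 minutes
speedup = minutes / 40     # -> 108x, roughly the claimed "100x"
print(f"{speedup:.0f}x")
```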

Comment by O O (o-o) on "AI achieves silver-medal standard solving International Mathematical Olympiad problems" · 2024-07-26T23:57:10.977Z · LW · GW

This part seems to just be about not letting an LLM translation get the problem slightly wrong and mess up the score as a result.

It would be a shame for your once-a-year attempt to have even a 2% chance of being ruined by an LLM hallucination.

Comment by O O (o-o) on "AI achieves silver-medal standard solving International Mathematical Olympiad problems" · 2024-07-26T18:57:54.902Z · LW · GW

https://x.com/wtgowers/status/1816839783034843630

It wasn't told what to prove. To get round that difficulty, it generated several hundred guesses (many of which were equivalent to each other). Then it ruled out lots of them by finding simple counterexamples, before ending up with a small shortlist that it then worked on.

That comment doesn’t seem to be correct.

Comment by O O (o-o) on Llama Llama-3-405B? · 2024-07-26T00:14:31.690Z · LW · GW

I think a lot of it is simply eating away at the margins of companies and products that might become larger in the future. Even if they are not direct competitors, it's still tech investment money going away from their VR bets and into AI. Also, big companies fully controlling important tech products has proven to be a nuisance to Meta in the past.

Comment by O O (o-o) on "AI achieves silver-medal standard solving International Mathematical Olympiad problems" · 2024-07-26T00:05:58.616Z · LW · GW

I'm guessing many people assumed an IMO solver would be AGI. However, this is actually a narrow math solver. It's probably useful on the road to AGI nonetheless.

Comment by O O (o-o) on What’s the Deal with Elon Musk and Twitter? · 2024-07-17T19:58:58.734Z · LW · GW

I predict the move to Texas will be largely fake, just whining to get CA politicians to listen to his policy suggestions. They will still have a large office in California.

Comment by O O (o-o) on Pondering how good or bad things will be in the AGI future · 2024-07-12T20:39:47.804Z · LW · GW
[Image: Re-constructing the Roman economy (Chapter 4), The Cambridge History of Capitalism]

This is a reconstruction of Roman GDP per capita (source of image above). There are ~200 years of quick growth followed by a long, slow decline. It's clear to me we could be in the year 26, extrapolating past trends without looking at the 2nd derivative. I can't find a source for fertility rates, but child mortality rates were much higher then, so the bar for fertility rates was also much higher.


For posterity, I'll add Japan's GDP per capita. Similar graphs exist for many of the other countries I mention. I think this is a better and more direct example anyway.

Comment by O O (o-o) on Pondering how good or bad things will be in the AGI future · 2024-07-12T19:06:13.870Z · LW · GW

It is plausible that technological and political progress might get it to fulfilling all Sustainable Development Goal

This seems highly implausible to me. The technological-progress and economic-growth trend is really an illusion. We are already slowly trending in the wrong direction. The U.S. is an exception; most countries are headed toward Japan or Europe, and many of those have declined since 2010 or so.

If you plotted trends from the Roman Empire while ignoring its population decline and institutional decay, we should have reached our technological goals a long time ago.

Comment by O O (o-o) on Why Can’t Sub-AGI Solve AI Alignment? Or: Why Would Sub-AGI AI Not be Aligned? · 2024-07-05T17:51:13.515Z · LW · GW

It's always hard to say whether this is an alignment or a capabilities problem. It's also too contrived to offer much signal.

The overall vibe is that these LLMs grasp most of our values pretty well. They give common-sense answers to most moral questions. You can see them grasp Chinese values pretty well too, so n=2. It's hard to characterize this as mostly "terrible".

This shouldn't be too surprising in retrospect. Our values are simple for LLMs to learn. It's not going to disassemble cows for atoms to end racism. There are edge cases where it's too woke, but these got quickly fixed. I don't expect them to ever pop up again.

Comment by O O (o-o) on Why Can’t Sub-AGI Solve AI Alignment? Or: Why Would Sub-AGI AI Not be Aligned? · 2024-07-04T08:36:46.277Z · LW · GW

much of what they say on matters of human values is actually pretty terrible

Really? I’m not aware of any examples of this.

Comment by O O (o-o) on How are you preparing for the possibility of an AI bust? · 2024-06-24T03:05:27.525Z · LW · GW

TSMC has multiple fabs outside of Taiwan. It would be a setback, but 10+ years seems misinformed. Also, there would likely be more effort to restore the semiconductor supply chain than post-COVID. (I could see the military being mobilized to help, or the Defense Production Act being used.)

Comment by O O (o-o) on Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) · 2024-06-24T00:52:56.648Z · LW · GW

If OpenAI didn't get the $30M from any other donor, they'd probably just have turned into a capped-profit earlier and raised money that way.

Also, I never said Elon would have been the one to donate. They had $1B pledged, so they could conceivably have gotten that money from other donors.

By "the backing of Elon Musk," I mean the startup is associated with his brand. I'd imagine this would make raising funding easier.

Comment by O O (o-o) on Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) · 2024-06-23T07:08:52.142Z · LW · GW

It's a lot for AI safety, but OpenAI at the time, with the backing of Elon Musk and the most respected AI researchers in the country, could have raised a similar amount in Series A funding. (I'm unsure if they were a capped-profit yet.) Likewise, $1B was pledged to them at their founding, but it's hard to tell how much had actually been distributed by 2017.

Agree with 2, but safety research also seems hard to fund.

Comment by O O (o-o) on johnswentworth's Shortform · 2024-06-22T14:09:03.005Z · LW · GW

Shorting Nvidia might be tricky. I'd short Nvidia and go long TSM or an index fund to be safe at some point. Maybe now? Typically the highest-market-cap stock performs poorly after it claims that spot.

Comment by O O (o-o) on Ilya Sutskever created a new AGI startup · 2024-06-20T14:54:47.064Z · LW · GW

I see Elon throwing money into this. He originally recruited Sutskever, and he's probably(?) smart enough to diversify his AGI bets.

Comment by O O (o-o) on Eli's shortform feed · 2024-06-20T05:19:12.141Z · LW · GW

Oh yeah I agree. Misread that. Still, maybe not so confident. Market leaders often don’t last. Competition always catches up.

Comment by O O (o-o) on Ilya Sutskever created a new AGI startup · 2024-06-19T23:25:28.307Z · LW · GW

OpenAI is closed

StabilityAI is unstable

SafeSI is ...

Comment by O O (o-o) on My AI Model Delta Compared To Christiano · 2024-06-19T22:52:38.831Z · LW · GW

I think a more nuanced take is that there is a subset of generated outputs that are hard to verify. This subset splits into two camps: one where you are unsure of the output's correctness (and thus can reject it or ask for an explanation). This isn't too risky. The other camp is where you are sure but have in reality overlooked something. That's the risky one.

However, at least my priors tell me the latter is rare with a good reviewer. In a code review, if something is too hard to parse, a good reviewer will ask for an explanation or a simplification. But bugs still slip by, so it's imperfect.

The next question is whether the bugs that slip by in the output will be catastrophic. I don't think it dooms the generation + verification pipeline if the system is designed to be error-tolerant.
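
A minimal sketch of the kind of error-tolerant generate-and-verify loop I have in mind (the function names and thresholds are hypothetical stand-ins, not any particular system's API):

```python
import random

def generate(task: str) -> str:
    """Stand-in for a model proposing a solution (hypothetical)."""
    return f"candidate solution for {task} #{random.randint(0, 999)}"

def verify(candidate: str) -> tuple[bool, float]:
    """Stand-in verifier: returns (looks_correct, confidence).
    In reality this is tests, proof checking, or human review."""
    return random.random() > 0.3, random.random()

def solve(task: str, max_attempts: int = 5, min_confidence: float = 0.8):
    # Reject anything the reviewer is unsure about (the "not too risky" camp).
    # Residual risk comes from confident-but-wrong verifications, so the
    # downstream system should tolerate occasional bad outputs.
    for _ in range(max_attempts):
        candidate = generate(task)
        looks_correct, confidence = verify(candidate)
        if looks_correct and confidence >= min_confidence:
            return candidate
    return None  # escalate to a human instead of shipping a dubious output

print(solve("parse the config file"))
```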

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T18:09:53.862Z · LW · GW

 I'm not that confident about how the Arizona fab is going. I've mostly heard second hand accounts.

So, from their site: "TSMC Arizona's first fab is on track to begin production leveraging 4nm technology in first half of 2025." You are probably thinking of their other Arizona fabs. Those are indeed delayed. However, they cite "funding" as the issue.[1] Based on how quickly TSMC changed its tune on delays once they got CHIPS funding, I think the delay is largely artificial, a means to extract CHIPS money.

I'm very confident that TSMC's edge is more than cheap labor. 

They have cumulative investments over the years, but based on accounts of Americans who have worked there, they don't sound extremely advanced. Instead, they sound very hard-working, which gives them a strong ability to execute. Also, I still think these delays are somewhat artificial. There are natsec reasons for Taiwan not to let TSMC diversify, and TSMC seems to think it can wring a lot of money out of the US by holding up construction. They are, after all, a monopoly.

Samsung will catch up, sure. But by the time they catch up to the TSMC's 2024 state of the art, TSMC will have moved on to the next node.
...
it's not feasible for any company to catch up with TSMC within the next 10 years, at least.

Is Samsung 5 generations behind? I know nanometers don't really mean anything anymore, but TSMC's and Samsung's 4nm don't seem 10 years apart based on the tidbits I see online.

  1. ^

    Liu said construction on the shell of the factory had begun, but the Taiwanese chipmaking titan needed to review “how much incentives … the US government can provide.”

Comment by O O (o-o) on Boycott OpenAI · 2024-06-19T15:00:07.768Z · LW · GW

There’s an API playground which is essentially a chat interface. It’s highly convenient.

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T07:09:22.503Z · LW · GW

This makes no sense. Wars are typically existential. In a hot war with another state, why would the government not put all the industrial capacity that is more useful for making weapons toward making weapons? It's well documented that governments can repurpose unnecessary parts of industry (say, training Grok or an open-source chatbot) into whatever else.

Biden used them for largely irrelevant reasons. This indicates that in an actual war, usage would be wider and more extensive.

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T06:16:45.399Z · LW · GW

Wartime powers essentially let governments do whatever they want. Even recently, Biden has flexed the Defense Production Act.

https://www.defense.gov/News/Feature-Stories/story/article/2128446/during-wwii-industries-transitioned-from-peacetime-to-wartime-production/

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T04:51:39.815Z · LW · GW

Hold on. The TSMC Arizona fab is actually ahead of schedule; they were simply waiting for funds. I believe TSMC's edge is largely cheap labor.

https://www.tweaktown.com/news/97293/tsmc-to-begin-pilot-program-at-its-arizona-usa-fab-plant-for-mass-production-by-end-of-2024/index.html

Comment by O O (o-o) on Boycott OpenAI · 2024-06-19T00:47:52.984Z · LW · GW

Use their API directly. I don't do this to boycott them in particular, but the API cost of typical chat usage is far lower than the subscription cost.
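
A rough back-of-the-envelope, using what I recall as mid-2024 list prices; treat both the prices and the usage numbers as assumptions and plug in your own:

```python
# API vs. subscription cost sketch. Prices are roughly GPT-4o's mid-2024
# list prices (assumed, check current pricing); usage is hypothetical.
input_price_per_mtok = 5.00    # $ per 1M input tokens (assumed)
output_price_per_mtok = 15.00  # $ per 1M output tokens (assumed)
subscription = 20.00           # $ per month (ChatGPT Plus)

# Hypothetical "typical" chat month: ~100k tokens in, ~50k tokens out.
monthly_input_tokens = 100_000
monthly_output_tokens = 50_000

api_cost = (monthly_input_tokens / 1e6) * input_price_per_mtok \
         + (monthly_output_tokens / 1e6) * output_price_per_mtok
print(f"API: ${api_cost:.2f}/mo vs subscription: ${subscription:.2f}/mo")
# -> API: $1.25/mo vs subscription: $20.00/mo
```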

Comment by O O (o-o) on The thing I don't understand about AGI · 2024-06-19T00:41:05.647Z · LW · GW

Not exactly an answer, but have you read about cases of the incredible plasticity of the human brain? There is a person out there who gradually lost 90% of their brain to fluid leakage and could still largely function. They didn't even notice it until much later. There are more examples of functioning people who had half their brain removed as a child. And just like our scaling laws, those people as kids just learned slower, and to a lower depth, but still learned.

This tells me the brain actually isn’t that complex, and features aren’t necessarily localized to certain regions. The mass of neurons there, if primed properly, will create intelligence.

It's also clear from the above that the brain has a ton of redundancy built in, likely to account for the fact that it's in a moving vessel subject to external attacks. There are far fewer negative selection pressures on an Nvidia GPU. It also has a larger energy budget.

Comment by O O (o-o) on The thing I don't understand about AGI · 2024-06-19T00:38:46.883Z · LW · GW

I think the typical response from a skeptic here would be that we may be nearing the end of a sigmoid curve.

Comment by O O (o-o) on OpenAI #8: The Right to Warn · 2024-06-18T20:31:47.792Z · LW · GW

Do you think Sam Altman is seen as a reckless idiot by anyone aside from the pro-pause people in the LessWrong circle?

Comment by O O (o-o) on simeon_c's Shortform · 2024-06-16T14:59:54.348Z · LW · GW

They are pushing the frontier (https://arxiv.org/abs/2406.07394), but it's hard to say where they would be without the Llama models. I don't think they'd be far behind. They have GPT-4-class models as is, and they also don't care about copyright restrictions when training models. (Arguably they have better image models as a result.)

Comment by O O (o-o) on The Leopold Model: Analysis and Reactions · 2024-06-16T00:49:36.225Z · LW · GW

All our discussions will be repeated ad nauseam in DoD boardrooms by people whose job it is to talk about info hazards. And I doubt discussion here will move the needle much if Trump and Jake Paul have already digested these ideas.