Posts

Thoughts on Francois Chollet's belief that LLMs are far away from AGI? 2024-06-14T06:32:48.170Z
What happens to existing life sentences under LEV? 2024-06-09T17:49:39.804Z
Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) 2024-05-19T02:18:53.524Z
Supposing the 1bit LLM paper pans out 2024-02-29T05:31:24.158Z
OpenAI wants to raise 5-7 trillion 2024-02-09T16:15:00.421Z
O O's Shortform 2023-06-03T23:32:12.924Z

Comments

Comment by O O (o-o) on "AI achieves silver-medal standard solving International Mathematical Olympiad problems" · 2024-07-26T23:57:10.977Z · LW · GW

This part seems intended simply to prevent an LLM translation from getting the problem slightly wrong and messing up the score as a result.

It would be a shame for your once-a-year attempt to have even a 2% chance of being ruined by an LLM hallucination.

Comment by O O (o-o) on "AI achieves silver-medal standard solving International Mathematical Olympiad problems" · 2024-07-26T18:57:54.902Z · LW · GW

https://x.com/wtgowers/status/1816839783034843630

It wasn't told what to prove. To get round that difficulty, it generated several hundred guesses (many of which were equivalent to each other). Then it ruled out lots of them by finding simple counterexamples, before ending up with a small shortlist that it then worked on.

That comment doesn’t seem to be correct.

Comment by O O (o-o) on Llama Llama-3-405B? · 2024-07-26T00:14:31.690Z · LW · GW

I think a lot of it is simply eating away at the margins of companies and products that might become larger in the future. Even if they are not direct competitors, it's still tech investment money moving away from their VR bets and into AI. Also, big companies fully controlling important tech products has proven to be a nuisance for Meta in the past.

Comment by O O (o-o) on "AI achieves silver-medal standard solving International Mathematical Olympiad problems" · 2024-07-26T00:05:58.616Z · LW · GW

I'm guessing many people assumed an IMO solver would be AGI. However, this is actually a narrow math solver. It's probably still useful on the road to AGI nonetheless.

Comment by O O (o-o) on What’s the Deal with Elon Musk and Twitter? · 2024-07-17T19:58:58.734Z · LW · GW

I predict the move to Texas will be largely symbolic, mostly whining to get CA politicians to listen to his policy suggestions. They will still have a large office in California.

Comment by O O (o-o) on Pondering how good or bad things will be in the AGI future · 2024-07-12T20:39:47.804Z · LW · GW
[Image: Re-constructing the Roman economy (Chapter 4), The Cambridge History of Capitalism]

This is a reconstruction of Roman GDP per capita. Source of image. There are ~200 years of quick growth followed by a long, slow decline. It seems clear to me we could be at the year 26 of that curve, extrapolating past trends without looking at the 2nd derivative. I can't find a source for fertility rates, but child mortality rates were much higher then, so the bar for fertility rates was also much higher.


For posterity, I'll add Japan's GDP per capita. Similar graphs exist for many of the other countries I mention. I think this is a better and more direct example anyway.

Comment by O O (o-o) on Pondering how good or bad things will be in the AGI future · 2024-07-12T19:06:13.870Z · LW · GW

It is plausible that technological and political progress might get it to fulfilling all Sustainable Development Goal

This seems highly implausible to me. The technological progress and economic growth trend is really an illusion; we are already slowly trending in the wrong direction. The U.S. is an exception, and the rest are headed toward the trajectory of Japan or Europe. Many of those countries have declined since 2010 or so.

If you plotted trends from the Roman Empire while ignoring the same kind of population decline and institutional decay, we should have reached these technological goals a long time ago.

Comment by O O (o-o) on Why Can’t Sub-AGI Solve AI Alignment? Or: Why Would Sub-AGI AI Not be Aligned? · 2024-07-05T17:51:13.515Z · LW · GW

It’s always hard to say whether this is an alignment or a capabilities problem. It’s also too contrived to offer much signal.

The overall vibe is that these LLMs grasp most of our values pretty well. They give common-sense answers to most moral questions. You can see them grasp Chinese values pretty well too, so n=2. It’s hard to characterize this as mostly “terrible”.

This shouldn’t be too surprising in retrospect. Our values are simple for LLMs to learn. It’s not going to disassemble cows for atoms to end racism. There are edge cases where it’s too woke, but those got fixed quickly. I don’t expect them to ever pop up again.

Comment by O O (o-o) on Why Can’t Sub-AGI Solve AI Alignment? Or: Why Would Sub-AGI AI Not be Aligned? · 2024-07-04T08:36:46.277Z · LW · GW

much of what they say on matters of human values is actually pretty terrible

Really? I’m not aware of any examples of this.

Comment by O O (o-o) on How are you preparing for the possibility of an AI bust? · 2024-06-24T03:05:27.525Z · LW · GW

TSMC has multiple fabs outside of Taiwan. It would be a setback, but the 10+ year estimate seems misinformed. There would also likely be more effort to restore the semiconductor supply chain than there was post-COVID. (I could see the military being mobilized to help, or the Defense Production Act being used.)

Comment by O O (o-o) on Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) · 2024-06-24T00:52:56.648Z · LW · GW

If OpenAI hadn’t gotten the $30M from some other donor, they’d probably just have turned into a capped-profit earlier and raised money that way.

Also, I never said Elon would have been the one to donate. They had $1B pledged, so they could conceivably have gotten that money from other donors.

By the backing of Elon Musk, I mean the startup is associated with his brand. I’d imagine this would make raising funding easier.

Comment by O O (o-o) on Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) · 2024-06-23T07:08:52.142Z · LW · GW

It’s a lot for AI safety, but OpenAI at the time, with the backing of Elon Musk and some of the most respected AI researchers in the country, could have raised a similar amount in Series A funding. (I’m unsure if they were a capped-profit yet.) Likewise, $1B was pledged to them at their founding, but it’s hard to tell how much had actually been distributed by 2017.

Agree with 2, but safety research also seems hard to fund.

Comment by O O (o-o) on johnswentworth's Shortform · 2024-06-22T14:09:03.005Z · LW · GW

Shorting Nvidia might be tricky. I’d short Nvidia and go long TSM or an index fund to be safe at some point. Maybe now? Typically the highest-market-cap stock performs poorly after it claims that spot.

Comment by O O (o-o) on Ilya Sutskever created a new AGI startup · 2024-06-20T14:54:47.064Z · LW · GW

I see Elon throwing money into this. He originally recruited Sutskever and he’s probably(?) smart enough to diversify his AGI bets.

Comment by O O (o-o) on Eli's shortform feed · 2024-06-20T05:19:12.141Z · LW · GW

Oh yeah, I agree; misread that. Still, maybe not so confident: market leaders often don’t last, and competition always catches up.

Comment by O O (o-o) on Ilya Sutskever created a new AGI startup · 2024-06-19T23:25:28.307Z · LW · GW

OpenAI is closed

StabilityAI is unstable

SafeSI is ...

Comment by O O (o-o) on My AI Model Delta Compared To Christiano · 2024-06-19T22:52:38.831Z · LW · GW

I think a more nuanced take is that there is a subset of generated outputs that are hard to verify. This subset splits into two camps: one where you are unsure of the output’s correctness (and thus can reject it or ask for an explanation), which isn’t too risky, and another where you are sure but have in fact overlooked something. That’s the risky one.

However, my priors at least tell me that the latter is rare with a good reviewer. In a code review, if something is too hard to parse, a good reviewer will ask for an explanation or a simplification. But bugs still slip by, so it’s imperfect.

The next question is whether the bugs that slip by in the output will be catastrophic. I don’t think it dooms the generation + verification pipeline if the system is designed to be error tolerant.
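
As a loose illustration (not something from the original discussion), here is a minimal sketch of what an error-tolerant generate-and-verify loop might look like; `generate` and `verify` are hypothetical stand-ins for an LLM and a reviewer/test suite.

```python
import random

def generate(task: str) -> str:
    # Hypothetical stand-in for an LLM producing a candidate solution.
    return f"candidate solution for {task!r} (draft {random.randint(0, 9999)})"

def verify(candidate: str) -> bool:
    # Hypothetical stand-in for a reviewer or test suite. It is imperfect:
    # it can pass a subtly wrong candidate, which is the risky case above.
    return random.random() > 0.3

def generate_and_verify(task: str, max_attempts: int = 5) -> str:
    """Retry generation until verification passes, tolerating some failures."""
    for _ in range(max_attempts):
        candidate = generate(task)
        if verify(candidate):
            return candidate  # may still contain bugs the verifier missed
    raise RuntimeError("no candidate passed verification; escalate to a human")

print(generate_and_verify("sort a list of intervals"))
```

The point of the sketch is only that occasional missed bugs are survivable as long as the surrounding system budgets for them (retries, escalation, rollback) rather than assuming verification is perfect.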

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T18:09:53.862Z · LW · GW

 I'm not that confident about how the Arizona fab is going. I've mostly heard second hand accounts.

So, from their site: "TSMC Arizona’s first fab is on track to begin production leveraging 4nm technology in first half of 2025." You are probably thinking of their other Arizona fabs. Those are indeed delayed; however, they cite "funding" as the issue.[1] Based on how quickly TSMC changed its tune on delays once it got CHIPS funding, I think the delay is largely artificial, a means to extract CHIPS money.

I'm very confident that TSMC's edge is more than cheap labor. 

They have cumulative investments over the years, but based on accounts of Americans who have worked there, they don't sound extremely advanced. Instead they sound very hard-working, which gives them a strong ability to execute. Also, I still think these delays are somewhat artificial. Taiwan has national-security reasons to be wary of letting TSMC diversify, and TSMC seems to think it can wring a lot of money out of the US by holding up construction. They are, after all, a monopoly.

Samsung will catch up, sure. But by the time they catch up to the TSMC's 2024 state of the art, TSMC will have moved on to the next node.
...
it's not feasible for any company to catch up with TSMC within the next 10 years, at least.

Is Samsung 5 generations behind? I know nanometers don't really mean anything anymore, but TSMC's and Samsung's 4nm processes don't seem 10 years apart based on the tidbits I see online.

  1. ^

    Liu said construction on the shell of the factory had begun, but the Taiwanese chipmaking titan needed to review “how much incentives … the US government can provide.”


     

Comment by O O (o-o) on Boycott OpenAI · 2024-06-19T15:00:07.768Z · LW · GW

There’s an API playground which is essentially a chat interface. It’s highly convenient.

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T07:09:22.503Z · LW · GW

This makes no sense. Wars are typically existential. In a hot war with another state, why would the government not redirect any industrial capacity that is more useful for making weapons into making weapons? It’s well documented that governments can repurpose less essential parts of industry (say, training Grok or an open-source chatbot) into whatever else is needed.

Biden invoked these powers for comparatively minor reasons, which indicates that in an actual war their use would be far wider and more extensive.

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T06:16:45.399Z · LW · GW

Wartime powers essentially let governments do whatever they want. Even recently, Biden has flexed the Defense Production Act.

https://www.defense.gov/News/Feature-Stories/story/article/2128446/during-wwii-industries-transitioned-from-peacetime-to-wartime-production/

Comment by O O (o-o) on Eli's shortform feed · 2024-06-19T04:51:39.815Z · LW · GW

Hold on. The TSMC Arizona fab is actually ahead of schedule. They were simply waiting for funds. I believe TSMC’s edge is largely cheap labor.

https://www.tweaktown.com/news/97293/tsmc-to-begin-pilot-program-at-its-arizona-usa-fab-plant-for-mass-production-by-end-of-2024/index.html

Comment by O O (o-o) on Boycott OpenAI · 2024-06-19T00:47:52.984Z · LW · GW

Use their API directly. I don’t do this to boycott them in particular, but the API cost of typical chat usage is far lower than the subscription cost.
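
As a rough back-of-envelope sketch (the prices and usage figures below are illustrative assumptions, not quoted rates; check the current pricing page), pay-as-you-go tokens usually come out well under a flat subscription for light-to-moderate chat use:

```python
# Illustrative per-token prices and usage; assumptions, not quoted rates.
INPUT_PRICE_PER_M = 5.00    # USD per 1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens (assumed)
SUBSCRIPTION = 20.00        # USD per month, flat (assumed)

# Assume ~30 chats/day, ~1,000 input and ~500 output tokens per chat.
chats_per_month = 30 * 30
input_tokens = chats_per_month * 1_000
output_tokens = chats_per_month * 500

api_cost = (input_tokens / 1e6) * INPUT_PRICE_PER_M \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M
print(f"API: ${api_cost:.2f}/mo vs subscription: ${SUBSCRIPTION:.2f}/mo")
# Under these assumptions: API ~$11.25/mo vs $20.00/mo for the subscription.
```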

Comment by O O (o-o) on The thing I don't understand about AGI · 2024-06-19T00:41:05.647Z · LW · GW

Not exactly an answer, but have you read about cases of the incredible plasticity of the human brain? There is a person out there who gradually lost 90% of their brain to fluid buildup and could still largely function; they didn’t even notice until much later. There are more examples of functioning people who had half their brain removed as a child. And just like our scaling laws, those people as kids simply learned more slowly and to a lesser depth, but still learned.

This tells me the brain actually isn’t that complex, and features aren’t necessarily localized to certain regions. The mass of neurons there, if primed properly, will create intelligence.

It’s also clear from the above that the brain has a ton of redundancy built in, likely to account for the fact that it’s in a moving vessel subject to external attacks. There are far fewer negative selection pressures on an Nvidia GPU, and it also has a larger energy budget.

Comment by O O (o-o) on The thing I don't understand about AGI · 2024-06-19T00:38:46.883Z · LW · GW

I think the typical response from a skeptic here would be that we may be nearing the end of a sigmoid curve.

Comment by O O (o-o) on OpenAI #8: The Right to Warn · 2024-06-18T20:31:47.792Z · LW · GW

Do you think Sam Altman is seen as a reckless idiot by anyone aside from the pro-pause people in the Lesswrong circle? 

Comment by O O (o-o) on simeon_c's Shortform · 2024-06-16T14:59:54.348Z · LW · GW

They are pushing the frontier (https://arxiv.org/abs/2406.07394), but it’s hard to say where they would be without the Llamas. I don’t think they’d be much further behind. They have GPT-4-class models as is, and they also don’t care about copyright restrictions when training models. (Arguably they have better image models as a result.)

Comment by O O (o-o) on The Leopold Model: Analysis and Reactions · 2024-06-16T00:49:36.225Z · LW · GW

All our discussions will be repeated ad nauseam in DoD boardrooms with people whose job it is to talk about info hazards. And I also doubt discussion here will move the needle much if Trump and Jake Paul have already digested these ideas.

Comment by O O (o-o) on Open Thread Summer 2024 · 2024-06-16T00:43:47.785Z · LW · GW

I think this outcome is more likely than people give it credit for. People have speculated about the arms-race nature of AI, which we might already be seeing, and broadly agreed, but it hasn’t gotten much signal until now.

Comment by O O (o-o) on Thoughts on Francois Chollet's belief that LLMs are far away from AGI? · 2024-06-14T15:30:20.752Z · LW · GW

My guess is that it will fall soon: progress on math and programming benchmarks has been rapid, so visual logic puzzles doesn't seem like it would be that hard.

His argument is that with millions of examples of these puzzles you can train an LLM to be good at this particular task, but that doesn’t amount to reasoning if the model then fails at a similar task it hasn’t seen. He thinks you should be able to train an LLM to do this without ever training on tasks like these.

I can buy this argument, but I still have some doubts. It may be that this kind of reasoning can be derived from visual training data plus reliably spending more time per token, or he may be right that LLMs are fundamentally terrible at abstract reasoning. It would be nice to know the youngest age at which a human can solve these puzzles; that might give us a sense of the “training data” a human needs to get there.

Some caveats: humans only get about 85% on the public test set, I believe, and that says nothing about the difficulty of the private test set. Maybe it’s harder, though I doubt it, since that would go against what he claims is the spirit of the benchmark.

Comment by O O (o-o) on Thoughts on Francois Chollet's belief that LLMs are far away from AGI? · 2024-06-14T15:28:09.468Z · LW · GW

He doesn't claim this will be a big blocker.

He does; otherwise his claim that OpenAI pushed back AGI timelines by 5-10 years doesn’t make sense.

Comment by O O (o-o) on What happens to existing life sentences under LEV? · 2024-06-13T06:56:07.732Z · LW · GW

See, I don’t expect LEV-tier longevity treatments to exist before AGI, so I also don’t expect them to be expensive if they do exist.

Comment by O O (o-o) on What if a tech company forced you to move to NYC? · 2024-06-10T18:30:24.351Z · LW · GW

I think this post was supposed to be some sort of gotcha aimed at SF AI optimists, given how it is worded:

 Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant


but in reality a lot of tech workers without family here would gladly move to NYC.[1] 

 

A better example would be Dubai. It’s objectively not a bad city, and you could possibly make a lot more with no income tax, but there are obvious reasons you’d be hesitant. I still don’t think this is that huge of a gotcha. The type of people this post is targeting are generally risk-tolerant, so if you effectively tripled their pay and made them move to Dubai, they’d take it with high likelihood.

  1. ^

    I don't get the "misses the point" reaction, as I'm pretty sure this was the true motivation of the post. Think about it: who could they be talking about for whom an NYC relocation is within the realm of possibility, who are tech workers, and who are chill with AI transformations?

Comment by O O (o-o) on What happens to existing life sentences under LEV? · 2024-06-09T21:41:15.132Z · LW · GW

Yes prison sentences don't make sense after the fact. To be clear, I mean existing life sentences. 

For those, I think it depends on how strictly the rule-makers decide to uphold existing laws. Letting them out requires rewriting current laws and applying the changes retroactively. If the rule-makers generally follow democratic will, they will eventually be let out.

OTOH, if they vaguely uphold the status quo and rigidly follow a constitution, I can see certain people locked up forever. Some prison sentences are written in years and many life sentences still have a time limit. In reality, some old criminals under a life sentence are also let out. I can see immortal entities under existing life sentences arguing their punishment is cruel and unusual and eventually succeeding. 

Comment by O O (o-o) on What if a tech company forced you to move to NYC? · 2024-06-09T07:34:14.267Z · LW · GW

NYC is a great city. Many tech workers I know are trying to move to NYC, especially younger ones. So, not the best example, but I get your point. 

Comment by O O (o-o) on O O's Shortform · 2024-06-09T06:24:31.313Z · LW · GW

The Manhattan Project had elements where the scientists worried they'd end the world through an atmospheric chain reaction (though this wasn't taken too seriously). The scientists on the project considered MAD and nuclear catastrophe plausible outcomes, and many had existential dread. I think the analogy actually maps well: you are uncertain how likely a nuclear exchange is, but you could easily say there is a high chance of it happening, just as you can now say, with some level of uncertainty, that p(doom) is high.

I think the real analogy for alignment is not Manhattan project but "how to successfully make first nuclear strike given that the enemy has detection system and nuclear ICBM too". 

This requires the planners to be completely convinced that p(doom) is high (as in self-immolation, not Russian roulette where 5/6 chambers lead to eternal prosperity). The odds of a retaliatory strike, or an outright war with the USSR, were on the other hand essentially 100% at any given point. The US's nuclear advantage was at no point overwhelming enough, outside of Japan, where we did use it. The fact that a first strike against the USSR was never pursued is evidence of this. Think of the USSR instead being in Iran's relative position today: if Iran tried to build thousands of nukes and it looked like they would succeed, we'd definitely see a first strike or a hot war.

 

So alignment isn't like this: there is a non-trivial chance that even RLHF just happens to scale to superintelligence. After 20 years, neither MIRI nor anyone else can prove or disprove this, and that's enough reason to try anyway, just as nuclear weapons were built even though they might inevitably lead the nations holding them into an exchange. And unlike nuclear weapons, the upside of aligned ASI is practically infinite. In the first-strike scenario, by contrast, you accept a definite severe downside to prevent a potentially more severe downside in the future.

Comment by O O (o-o) on Response to Aschenbrenner's "Situational Awareness" · 2024-06-09T06:11:38.084Z · LW · GW

I don't know what this means. If you're saying "nuclear weapons kill the people they hit", I don't see the relevance; guns also kill the people they hit, hut that doesn't make a gun strategically similar to a smarter-than-human AI system.

It is well known that nuclear weapons result in MAD, or at least localized annihilation. They were still built. But my more important point is that this sort of thinking requires most people to be convinced that p(doom) is high and, more importantly, also convinced that the other side believes p(doom) is high. If either of those is false, then not building doesn't work. If the other side is building it, then you have to build it anyway, just in case your theoretical p(doom) arguments are wrong. Again, this is just trying to argue your way around a pretty basic prisoner's dilemma.

And consider that we will develop AGIs (note: not ASI) anyway, and alignment (or at least control) will almost certainly work for them.[1] The prisoner's dilemma means you have to match the drone-warfare capabilities of the other side regardless of p(doom).

In the world where the USG understands there are risks but thinks of the problem as something with decent odds of being solvable, we build it anyway. The gameboard is a 20% chance of dying versus an 80% chance of handing the light cone to your enemy if the other side builds it and you do not. I think this is the most probable scenario, making all Pause efforts doomed. High-p(doom) folks can't even convince low-p(doom) folks on LessWrong, the subset of optimists most likely to be receptive to their arguments, that they are wrong. There is no chance you will be more than one faction within the USG, the way environmentalists are.

But let's pretend for a moment that the USG buys the high-risk doomer argument for superintelligence. The USG and CCP would both rush to build AGIs regardless, since AGI can be controlled and not having a drone swarm means losing military relevance. Because of how fuzzy the line between AGI and ASI will be in that world, I think it's very plausible that enough people will be convinced the CCP doesn't believe alignment is too hard and will build it anyway.

Even people with high p(doom) might have a nagging part of their mind asking: what if alignment just works? If it does (again, this is impossible to disprove; if we could prove or disprove it, we wouldn't need to consider pausing to begin with, it would be self-evident), then great, you just handed your entire nation's future to the enemy.

 

We have some time to solve alignment, but a long-term pause will be downright impossible. What we need to do is tackle the technical problem ASAP instead of trying to pause. The race conditions are set; the prisoner's dilemma is locked in.

  1. ^

    I think they will certainly work. We have a long history of controlling humans and forcing them to do things that they don't want to do. Practically every argument about p(doom) relies on the AI being smarter than us. If it's not, then it's just an insanely useful tool. All the solutions that sound "dumb" with ASI, like having an off switch, air gapping, etc. work with weak enough but still useful systems. 

Comment by O O (o-o) on O O's Shortform · 2024-06-08T08:47:14.319Z · LW · GW

I'm really feeling this comment thread lately. It feels like there is selective rationalism going on: many dissenting voices have given up on posting, and plenty of bad arguments keep getting signal-boosted. There is an unrealistic, contradictory world model most people here hold that will cause almost every policy approach taken to fail utterly, as they have in the recent past. I would largely describe the flawed world model as one that doesn't appreciate the game-theoretic dynamics and ignores any evidence that makes certain policy approaches impossible.

(Funnily enough, its traits remind me of an unaligned AI, since the world model almost seems to have developed a survival drive.)

Comment by O O (o-o) on Response to Aschenbrenner's "Situational Awareness" · 2024-06-08T08:28:36.172Z · LW · GW

Leopold's scenario requires that the USG come to deeply understand all the perils and details of AGI and ASI (since they otherwise don't have a hope of building and aligning a superintelligence), but then needs to choose to gamble its hegemony, its very existence

Alternatively, they either don't buy the perils, or they believe there's a chance the other side doesn't? I think there is an assumption baked into this statement and into a lot of the strategies proposed in this thread: if not everyone is cooperative and buys the high-p(doom) arguments, it all falls apart. Nuclear war essentially has a localized p(doom) of 1, yet both superpowers still built the weapons. I am highly skeptical of any potential solution to any of this; it requires everyone (and not just, say, half) to buy the arguments to begin with.

Comment by O O (o-o) on Response to Aschenbrenner's "Situational Awareness" · 2024-06-08T08:22:27.833Z · LW · GW

Indeed, forecasters have been surprised by how slowly safety/robustness/etc. have progressed in recent years

Interesting, do you have a link to these safety predictions? I was not aware of this.

Comment by O O (o-o) on O O's Shortform · 2024-06-07T23:09:27.991Z · LW · GW

I don’t see how this is relevant to my broader point. But the Manhattan Project essentially tried every research direction instead of picking and choosing, in order to reduce experimentation time.

Comment by O O (o-o) on O O's Shortform · 2024-06-07T23:08:17.297Z · LW · GW

I mean, are you sure Singapore’s sudden large increase in GPU purchases is organic? GPU bans have very obviously not stopped Chinese AI progress, so I think we should build conclusions starting from there rather than in the reverse order.

I also think US GPU superiority is short-lived. China can skip engineering milestones we’ve had to pass, exploit the fact that they have far more energy than us, skip the general-purpose computing/gaming tech debt that may exist in current GPUs, etc.

EDIT: This is selective rationalism. If you sought any evidence on this issue, it would become extremely obvious that Singapore's orders of H100s increased by orders of magnitude after they were banned in China.

Comment by O O (o-o) on O O's Shortform · 2024-06-07T19:19:35.886Z · LW · GW

I think Leopold addresses this, but 5% of our compute will be used to make a hypothetical AGI while China can direct 100% of theirs. They can make up for quality with quantity, and they also happen to have far more energy than us, which is probably the more salient variable in the AGI equation.

 

Also, I'm of the opinion that the GPU bans are largely symbolic, even now. There is little incentive to respect them, especially once China realizes the stakes are higher than they currently seem.

Comment by O O (o-o) on O O's Shortform · 2024-06-07T16:14:59.593Z · LW · GW

There was more theory laid out, and more theory discovered in the process, but I think the more important point is that there were simply a lot of approaches to try. I don't think your analogy fits well. An alignment Manhattan Project, to me, would be scaling up existing mech-interp work 1000x and trying every single alignment idea under the sun simultaneously, with the goal of automating the work once we're confident we have human-level systems. Can you explain more about where your analogy works and what would break the approach above?

Comment by O O (o-o) on O O's Shortform · 2024-06-07T08:30:47.985Z · LW · GW

There seems to be a fair amount of motivated reasoning in denying China’s AI capabilities when they’re basically neck and neck with the U.S. (their chatbots, video bots, social media algorithms, and self-driving cars are roughly as good as or better than ours).

I think a lot of policy approaches fail within an AGI singleton race paradigm. It’s also clear a lot of EA policy efforts are basically in denial that this is already starting to happen.

I’m glad Leopold Aschenbrenner spelled out the uncomfortable but remarkably obvious truth for us. China is gearing up to wage war for silicon. The U.S. is slowly tightening its control over silicon. This reads exactly like a race.

I personally think it’s futile to try to stop this, and our best bet is to solve alignment as fast as possible with a Manhattan Project-scale effort.

Comment by O O (o-o) on Low Fertility is a Degrowth Paradise · 2024-05-26T01:43:11.383Z · LW · GW

So if current fertility trends continue, gentle degrowth is the default result.

That’s only a short-term trend, and even then it assumes we don’t get AGI magic. Pronatalist belief systems that persist in modernity (Judaism, Islam) will continue replicating and will replace most people. We might even see genes that make people want kids, or not use birth control, propagate.

Comment by O O (o-o) on What's Going on With OpenAI's Messaging? · 2024-05-25T07:27:22.373Z · LW · GW

https://x.com/ylecun/status/1794248728825524303?s=46&t=lZJAHzXMXI1MgQuyBgEhgA

He’s recently mentioned it again.

Comment by O O (o-o) on The case for stopping AI safety research · 2024-05-23T17:14:50.024Z · LW · GW

AI control is useful to corporations even if it doesn't result in more capabilities, which is why so much money is invested in it. Customers want predictable and reliable AI. There is a great post here about AIs aligning to "Do What I Want and Double-Check" in the short term. There's your motive.

Also, in a world where we stop safety research, it's not obvious to me why capabilities research would stop or even slow down. I can imagine the models being slightly less economically valuable, but not much less capable. If anything, without reliability, devs might be pushed to extract value out of these models by making them more capable.

Fixing them will push the failure modes beyond our ability to understand and anticipate, let alone fix.

So that's why this point isn't very obvious to me. It seems like we can have both failures we can understand and failures we can't; they aren't mutually exclusive.[1]

  1. ^

    Also if we can't understand why something is bad, even given a long amount of time, is it really bad?

Comment by O O (o-o) on What's Going on With OpenAI's Messaging? · 2024-05-21T16:55:45.768Z · LW · GW

I recall him saying this on Twitter and linking to a person in a leadership position who runs things there. I don't know how to search for that.

Comment by O O (o-o) on What's Going on With OpenAI's Messaging? · 2024-05-21T06:00:47.273Z · LW · GW

He isn’t in charge there. He simply offers research directions and probably serves as a link to academia.