LessWrong 2.0 Reader
> I'm just wondering if we were ever sufficiently positively justified to anticipate a good future, or if we were just uncertain about the future and then projected our hopes and dreams onto this uncertainty, regardless of how realistic that was.
I think that's a very reasonable question to be asking. My answer is that I think it was justified, but not obviously so.
My understanding is that it wasn't taken for granted that simply adding more compute would yield more progress until the deep learning revolution; even then, people updated on specific additional data points for transformers, and even now people sometimes say "we've hit a wall!"
Maybe with more time the US system could have collapsed and been replaced with something fresh and equal to the challenges. To the extent the US was founded and set in motion by a small group of capable, motivated people, it seems not crazy to think a small-to-large group of such people could enact effective plans within a few decades.
i fear this week's meetup might have an unusually large amount of "guy who is very into theoretical tabletop game design but has never playtested their products, which have lovely readable manuals" energy, but i like the topic a lot and am having an unusually hard time killing my darlings :')
russellthor on How Gay is the Vatican?
What if the Vatican is just a lot more asexual than the general population? That also seems credible.
mondsemmel on A Slow Guide to Confronting Doom
I know that probabilities are in the map, not in the territory. I'm just wondering if we were ever sufficiently positively justified to anticipate a good future, or if we were just uncertain about the future and then projected our hopes and dreams onto this uncertainty, regardless of how realistic that was. In particular, the Glorious Transhumanist Future requires the same technological progress that can result in technological extinction, so I question whether the former should've ever been seen as the more likely or default outcome.
I've also wondered about how to think about doom vs. determinism. A related thorny philosophical issue is anthropics: I was born in 1988, so from my perspective the world couldn't have possibly ended before then, but that's no defense whatsoever against extinction after that point.
Re: AI timelines, again this is obviously speaking from hindsight, but I now find it hard to imagine how there could've ever been 50-year timelines. Maybe specific AI advances could've come a bunch of years later, but conversely, compute progress followed Moore's Law and IIRC showed no signs of slowing down, because compute is universally economically useful. And so even if algorithmic advances had been slower, compute progress could've made up for that to some extent.
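The compounding argument above can be made concrete with a back-of-the-envelope sketch. The 2-year doubling period is my assumption (the commonly cited Moore's Law figure), not a number from the comment, and real hardware trends vary:

```python
# Toy illustration: how Moore's-Law-style compute growth compounds over
# multi-decade timelines. Assumes steady doubling every 2 years, which is
# a simplification of real hardware trends.

def compute_multiplier(years: float, doubling_period_years: float = 2.0) -> float:
    """Factor by which compute grows after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period_years)

if __name__ == "__main__":
    for horizon in (10, 25, 50):
        print(f"{horizon} years -> ~{compute_multiplier(horizon):,.0f}x compute")
```

Under these assumptions, a 50-year horizon implies roughly 2^25 (about 33 million) times more compute, which is why even slow algorithmic progress could be swamped by hardware gains.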
Re: solving coordination problems: some of these just feel way too intractable. Take the US constitution, which governs your political system: IIRC it was meant to be frequently updated in constitutional conventions, but instead the political system ossified and the last meaningful amendment (18-year voting age) was ratified in 1971, or 54 years ago. Or, the US Senate made itself increasingly ungovernable with the filibuster, and even the current Republican-majority Senate didn't deign to abolish it. Etc. Our political institutions lack automatic repair mechanisms, so they inevitably deteriorate over time, when what we needed was for them to improve over time instead.
petropolitan on Meta releases Llama-4 herd of models
Muennighoff et al. (2023) studied data-constrained scaling on C4 up to 178B tokens, while Meta presumably included all the public Facebook and Instagram posts and comments. Even ignoring the two-OOM difference and the architectural dissimilarity (e.g., some experts might overfit earlier than the research on dense models suggests; perhaps routing should take that into account), common sense strongly suggests that training twice on, say, a Wikipedia paragraph must be much more useful than training twice on posts by Instagram models, and especially the comments under those (which are often as alike as two peas in a pod).
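To make the "training twice" intuition concrete, here is a toy saturation model in the spirit of data-constrained scaling work. The functional form and the decay constant are my assumptions for illustration, not fitted values from Muennighoff et al.:

```python
# Toy model (hypothetical parameterization): repeated epochs over the same
# data contribute exponentially less "effective" unique data, so the
# effective dataset size saturates rather than growing linearly with passes.

import math

def effective_tokens(unique_tokens: float, repetitions: float,
                     decay_constant: float = 15.0) -> float:
    """Effective unique-token count after `repetitions` extra passes.

    Each additional pass is worth less than the last; `decay_constant`
    (an assumed value) controls how quickly repeats stop helping.
    """
    return unique_tokens * (
        1 + decay_constant * (1 - math.exp(-repetitions / decay_constant))
    )

if __name__ == "__main__":
    for reps in (0, 1, 4, 16, 64):
        print(f"{reps:>3} repeats -> {effective_tokens(1.0, reps):.2f}x unique data")
```

In this sketch, no amount of repetition yields more than about 16x the value of the unique data, which is the shape of diminishing returns the comment is appealing to; lower-quality data would plausibly saturate even faster.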
florian-habermacher on Max H's Shortform
Essentially you seem to want more of the same of what we had for the past decades: more cheap goods, loss of production know-how, and all that goes along with it. This feels a bit funny as (i) just in recent years many economists, after having been dead-sure that the old pattern could only mean great benefits, have realized it may not be so great overall (covid exposing risky dependencies, geopolitical power loss, jobs...), and (ii) your strongman in power shows where it leads if we only think of 'surplus' (even by your definition) instead of the things people actually care about more (equality, jobs, social security...).
You'd still be partly right if the world were so simple that handing your trade partners your dollars just meant you print more of them. But instead, handing them your dollars gives them global power: leverage over all the remaining countries in the world, as they now have the capability to produce everything cheaply for any other country globally, plus your dollars to spend on whatever they like in the global marketplace for products and influence over anyone. In reality, your imagined free lunch isn't quite so free.
ruby on A Slow Guide to Confronting Doom
So gotta keep in mind that probabilities are in your head (I flip a coin: it's already tails or heads in reality, but your credence should still be 50-50). I think it can be the case that we were always doomed even if we weren't yet justified in believing that.
Alternatively, it feels like this pushes up against philosophies of determinism and free will. The whole "well, the algorithm is a written program and it'll choose what it chooses deterministically", but also, from the inside, there are choices.
I think a reason to have been uncertain before and update more now is just that timelines seem short. I used to have more hope because I thought we had a lot more time to solve both technical and coordination problems, and then there was the DL/transformers surprise. You make a good case and maybe 50 years more wouldn't make a difference, but I don't know, I wouldn't have as high p-doom if we had that long.
mis-understandings on mattmacdermott's Shortform
That does not look like state-valued consequentialism as we typically see it, but like act-valued consequentialism (in a Markov model, this is the intrinsic value of the act plus the expected value of the sum of future actions): an agent that places value on the acts "use existing X to get more Y" and "use existing Y to get more X". I mean, how is this different from placing value on the actions producing Y from X and the actions producing X from Y, if X and Y set the scale of a particular action?
It looks money-pump resistant because it wants to take those actions as many times as possible, as well as possible, and a money pump generally requires that the scale of the transactions drop over time (that shrinkage is the resource the pumper is extracting). But then the trade is inefficient. There are probably benefits to being an efficient counterparty, but money-pumpers are inefficient counterparties.
zach-stein-perlman on DeepMind: An Approach to Technical AGI Safety and Security
I haven't read most of the paper, but based on the Extended Abstract I'm quite happy about both the content and how DeepMind (or at least its safety team) is articulating an "anytime" (i.e., possible to implement quickly) plan for addressing misuse and misalignment risks. But I think safety at Google DeepMind is more bottlenecked by buy-in from leadership to do moderately costly things than by the safety team having good plans and doing good work.
maxwell-peterson on NormanPerlmutter's Shortform
I suspect that, to many readers, what gives urgency to the Krome claims is that two people have allegedly died at the facility. For example, the fourth link OP provides is an Instagram video with the caption "people are dying under ICE detainment in Miami".
The two deceased are Genry Ruiz Guillen and Maksym Chernyak. ICE has published death reports for both:
https://www.ice.gov/doclib/foia/reports/ddr-GenryRuizGuillen.pdf
https://www.ice.gov/doclib/foia/reports/ddrMaksymChernyak.pdf
Notably, Mr. Ruiz-Guillen was transferred to medical and psychiatric facilities multiple times, and my read of the timeline is that he was in the custody of various hospitals from December 11 up through his January 23 death, i.e. over a month separates his death and his time at Krome. (It’s possible I’m reading this wrong so let me know if others have a different read). Ruiz-Guillen was transferred to hospital a month before inauguration day.
Chernyak’s report is much shorter and I don’t know what to make of it. A hemorrhagic stroke is hypothesized. He died February 20.
These are fairly detailed timelines. Ruiz-Guillen’s in particular involves many parties (a normal hospital, a psychiatric hospital, different doctors), so it would be a pretty bold fabrication.
You said:
>the fact that we haven't seen definitive evidence against the allegations is significant evidence in favour of their veracity.
But “detainees are dying because of overcrowding and lack of water” is an allegation made by one of OP’s links, and these timelines and symptoms, especially Ruiz-Guillen’s, are evidence against it.