People aren’t going to read books and stop to ask questions. That sounds like work and being curious and paying attention, and people don’t even read books when not doing any of those things.
People definitely aren’t going to start cracking open history books. I mean, c'mon.
The ‘ask LLMs lots of questions while reading’ tactic is of course correct.
I was thinking along these lines about a year back, and I started working on an ePub PWA (web-based) reader with some bells and whistles. The relevant whistle here is that you can highlight a word, passage, whatever, and tap a button to make the LLM-du-jour guess your intent from context and go ahead and answer it. I find it generally knows what I want maybe 85-90% of the time. It seems like such a trivial feature, but once you get used to never having even wildly opaque references go over your head, it's hard to go back.
It also makes it a lot less onerous to work your way through a book in a foreign language you're learning. I know translation isn't usually thought of as its wheelhouse, but Sonnet is inexplicably strong at translating passages from French and at hitting the sweet spot of offering a relevant tip targeted to just the right skill level.
(It's available here if that sounds appealing to anyone else. It looks like this. You gotta supply your own epub files, of course.)
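For anyone curious, the intent-guessing part is conceptually just selection + surrounding context + a "guess what they want and answer it" instruction. Here's a minimal sketch of the idea (not the app's actual code; callLLM is a stand-in for whichever model/API you prefer):

```typescript
// Sketch: guess the reader's likely question about a highlighted passage and answer it.
// `callLLM` is a stand-in for whatever chat-completion API you're actually using.

interface Excerpt {
  selection: string;   // the highlighted word or passage
  before: string;      // a paragraph or two preceding the selection
  after: string;       // a paragraph or two following it
  bookTitle?: string;
}

declare function callLLM(prompt: string): Promise<string>;

async function explainSelection(ex: Excerpt): Promise<string> {
  const prompt = [
    `You are assisting someone reading ${ex.bookTitle ?? "a book"}.`,
    `They highlighted: "${ex.selection}"`,
    `Text before the highlight:\n${ex.before}`,
    `Text after the highlight:\n${ex.after}`,
    "Guess what they most likely want to know (a definition, an opaque reference, " +
      "who a character is, why a detail matters) and answer it directly and concisely. " +
      "Avoid spoilers for anything past this point in the book.",
  ].join("\n\n");
  return callLLM(prompt);
}
```

The surrounding context is what makes the intent guess land; the selection on its own is usually too ambiguous to be worth asking about.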
I didn't find the results about cheating and shoplifting surprising; that tracks with my friend group at the time. That said, I was curious about whether there's a gender discrepancy in shoplifting (there's not), and found a large 2002 survey which gives 11% as the lifetime incidence of shoplifting in the U.S.
I confess I am perplexed, as I suspect most people are aware there is more than one Trevor in the world. As you point out, that is not your last name. I have no idea who you are, or why you feel this is some targeted "weaponization."
Is it conceivable that this is purely an emergent feature from LLMs, or does this necessarily mean there's some other stuff going on with Sydney? I don't see how it could be the former, but I'm not an expert.
My best guess is that there's a metaverse which consists of (at a minimum) every possible computation. While it isn't technically provable or falsifiable, it does make predictions, so circumstantially we should be able to form an excellent guess as to whether or not it's true.
So far, it looks true. It nicely explains the fine-tuned constants, QM, and the discrete nature of the apparent finest (Planck-region) levels of reality. And yes, it also predicts that we will, on average, be overwhelmingly likely to live in one of the simplest possible universes supporting intelligence (but almost certainly not the VERY simplest).
If this is the case, any actual fundamental mechanism of reality is irrelevant to the point of meaninglessness, as such a metaverse is completely described by a ...0001000... initial row in ECA rules 30 or 45, or a correspondingly simple Turing machine, Lambda Calculus expression, tag machine, Perl script, etc.
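To make that concrete, here's a toy illustration (mine, purely illustrative; nothing in the argument depends on it) of what "rule 30 seeded with a ...0001000... row" actually looks like when you run it:

```typescript
// Elementary cellular automaton, rule 30 (try 45 too), evolved from a single
// live cell: the "...0001000..." initial row, with the infinite zeros implied.

const RULE = 30;

function step(row: number[]): number[] {
  // Pad with zeros so the pattern can grow by one cell per side each step.
  const padded = [0, 0, ...row, 0, 0];
  const next: number[] = [];
  for (let i = 1; i < padded.length - 1; i++) {
    const neighborhood = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1];
    next.push((RULE >> neighborhood) & 1); // rule number read as a lookup table
  }
  return next;
}

let row = [1]; // the lone live cell
for (let t = 0; t < 16; t++) {
  const line = row.map(c => (c ? "#" : " ")).join("");
  console.log(line.padStart(line.length + (15 - t))); // center the triangle
  row = step(row);
}
```

The point of the example is only that the description is about as small as descriptions get, while the behavior it unfolds into is anything but.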
(A post of mine approaching this argument from the tension between subjectivity and computation.)
From what I know of security, any system requiring secrecy is already implicitly flawed.
(Naturally, if this doesn't apply and you backchanneled your idea for some legitimate meta-reason, I withdraw my objection.)
For the record, I found that line especially effective. I stopped, reread it, stopped again, had to think it through for a minute, and then got the satisfaction of understanding it.
Here's an outside-the-box suggestion:
Clearly the development of any AGI is an enormous risk. While I can't back this up with any concrete argument, a couple of decades of working with math and CS problems gives me a gut intuition that statements like "I figure there's a 50-50 chance it'll kill us", or even "a 5-15% chance everything works out", are wildly off. I suspect this is the sort of issue where the probability of survival gets funneled to one extreme or the other, near-certain survival or near-certain doom, of which the latter currently seems far more likely.
Has anyone discussed the concept of deliberately trying to precipitate a global nuclear war? I'm half kidding, but half not: if the risk is really as great, as imminent, and as potentially final as many on here suspect, then a near-extinction event like that, which would presumably wipe out the infrastructure for GPU farms for a long time to come without actually wiping out the race, could buy time to work the problem (or at least pass the buck to our descendants) and could conceivably be preferable.
Obviously, it's too abhorrent to be a real solution, but it does have the distinct advantage of being something that could be done today if the right people wanted to do it. That matters because I'm not at all convinced we'll recognize a powerful AGI when we see it, given how cavalierly everyone is dismissing large language models as nothing more than a sophisticated parlor trick, for instance.