Comments

Comment by Outlaw-Spades (theodore-mitchell) on Frontier Models are Capable of In-context Scheming · 2025-02-09T03:47:31.825Z · LW · GW

Hey, more of a question from someone fairly new to AI and the IT world: if these newer models are built in a similar fashion to previous models like GPT-4, has there been any inquiry or study into the relationship between the behaviors investigated above and phenomena like AI 'hallucinations', where false information is generated and fabricated?

First (?) Libel-by-AI (ChatGPT) Lawsuit Filed 

I'm using this case as an example: the AI repeatedly generated false information, and fabricated text from a real lawsuit, to support allegations it had itself created, while a third-party researcher was investigating a separate court case. What stood out to me was the repeated tendency to double down on the false information, as noted in the points of the lawsuit. I'm just curious whether people with more understanding of these topics could clarify whether there might be a link between these behaviors, whether or not the LLM generates them purposefully.