It is cool, and I have believed something like this for a while. The problem is that Claude 3.5 invalidated all that: it does know how to program, understands things, and does at least 50% of the work for me. That was not at all the case for previous models.
And all those "LLMs will be just tools until 2030" arguments are not backed by anything and rest solely on vibes. People said the same about understanding of context, hallucinations, and other issues. So far the only prediction that has held up is that LLMs gain more common sense with scaling, and that is exactly what is needed to crack agency.
AI is likely to be trained on human understanding of identity and death, which is very much based on generational replacement rather than continuity over centuries.
To me it sounds like you did not address the whole AI alignment question. That is the point: the AI's position might not follow from the dataset it was given once it starts self-improving.