Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.
This is entirely presumptive; we simply do not and cannot know how much they understand. It all boils down to: if it looks like a duck and quacks like a duck, is it a duck?
That would be true if the world were hollow.
But we know it is not.
We do, and anybody telling you “it’s complicated” has an agenda.