Companies are training LLMs on all the data they can find, but this data is not the world; it is discourse about the world. The rank-and-file developers at these companies, in their naivete, do not see that distinction… So, as these LLMs become increasingly but asymptotically fluent, tantalizingly close to accuracy but ultimately incomplete, developers complain that they are short on data. They have their general-purpose computer program, and if only they had the entire world in data form to shove into it, then it would be complete.
I do sorta see the argument. We don’t fully see with our eyes; we also see with our mind. So the LLM is learning about how we see the world. Like A Scanner Darkly, hehe.
Not really sure how big a deal this is, or even if it is a problem. I need to know what the subjective taste of a recipe is, not the raw physical data of what it is.