
Large language models

Large language models such as ChatGPT and Bard have been all the rage lately. After trying them out for a while, however, I think the general hype around them is a tad overdone. The feeling I get is that they are very good at stitching text together seamlessly, but they do not actually know anything.

Most of the time when I interact with these large language models, I can’t help but feel that there is something artificial about them. They sound like they are repeating talking points that a marketing department came up with, without a true understanding of what they are talking about. When asked about something in depth, they frequently answer with something entirely different from what was asked, with no awareness that they are doing so. And for niche topics that only a few web pages discuss, the answers are sometimes suspiciously similar to what is on those pages.

There seems to be a general feeling that the Turing Test is obsolete these days, but as far as I am concerned, current levels of artificial intelligence are not quite able to pass it yet. Yes, there are humans who occasionally display similarly mechanical responses through rote recitation of talking points, but I don’t consider these great displays of intelligence, so they would be the wrong target to compare against.

I am not too worried about machines taking over everything any time soon, but the sophistication of large language models does make me wonder about our own intelligence. A large part of machine learning is basically curve fitting, albeit highly sophisticated curve fitting. Does this have implications for what many of the things we consider intelligence in ourselves really are? Are they mostly a similar sort of curve fitting rather than any sort of conscious thinking? Or is even conscious thinking ultimately an incredibly advanced form of curve fitting?
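To make the curve-fitting analogy concrete, here is a minimal sketch in Python using NumPy. The data, the sine-shaped underlying function, and the choice of a cubic polynomial are all made up for illustration; the point is only that "learning" here amounts to adjusting parameters until a curve matches examples, and "answering" amounts to evaluating that curve at new points. Training a large model is, very roughly, the same exercise with an astronomically larger family of curves.

```python
import numpy as np

# Made-up "training data": noisy samples of an underlying function.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)

# "Learning": least-squares fit of a cubic polynomial to the samples.
coeffs = np.polyfit(x, y, deg=3)
model = np.poly1d(coeffs)

# "Inference": evaluate the fitted curve at points it has never seen.
x_new = np.array([1.0, 2.5, 5.0])
print(model(x_new))
```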