Yann LeCun, head of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems aren’t powerful enough to achieve true intelligence.
Google’s technology is what scientists call a neural network: a mathematical system that learns skills by analyzing large amounts of data. By identifying patterns in thousands of cat photos, for example, it can learn to recognize a cat.
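The idea of learning a skill from examples can be illustrated with a toy sketch. The following is a minimal, hypothetical single-neuron "network" (a perceptron), far simpler than anything Google uses; the training data (the logical OR pattern) and the learning rate are illustrative choices, not details from the article.

```python
# Toy sketch: a single artificial neuron learns a pattern from examples
# by nudging its weights whenever it makes a mistake.
# (Illustrative only -- real neural networks have millions of neurons.)

def train_neuron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred           # how wrong was the guess?
            w[0] += lr * err * x1         # adjust weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Examples of the OR pattern: output is 1 if either input is 1.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
```

After a few passes over the examples, the neuron's weights settle into values that reproduce the pattern — the same learn-by-adjustment principle, scaled up enormously, underlies the systems described here.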
Over the past few years, Google and other large companies have designed neural networks that have learned from massive amounts of prose, including thousands of unpublished books and Wikipedia articles. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets, and even write blog posts.
But they are deeply imperfect. Sometimes they generate polished prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.
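The "recreating patterns" point can be made concrete with a deliberately crude sketch. The following toy bigram model — a hypothetical illustration, not how the models in the article actually work — simply records which word followed which in its training text, then generates by replaying those seen pairs. It can only echo its training data; it has no understanding of what the words mean.

```python
# Toy sketch: a "language model" that memorizes which word followed
# which in its training text, then generates by replaying those pairs.
# (Illustrative only -- real large language models are vastly richer.)
from collections import Counter, defaultdict

def train_bigrams(text):
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1  # count each observed word pair
    return follows

def generate(follows, start, length):
    out = [start]
    for _ in range(length - 1):
        if out[-1] not in follows:
            break
        # always replay the most frequently seen continuation
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat because the cat sat on the rug"
model = train_bigrams(corpus)
```

Asked to continue "the", the model faithfully reproduces a phrase from its training text — fluent-looking output produced purely by pattern matching, which is exactly the strength and the limitation described above.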