AI stands out because it suggests a future in which machines achieve artificial general intelligence (AGI) and surpass their human creators. Many technology experts anticipate that AGI will produce insights far beyond human capabilities, which may necessitate laws to prevent AGI-driven machines from exerting control over humans. The primary concern regarding AI, however, is not its advancement to AGI but the substantial resources being squandered in pursuit of a nearly unattainable goal.

AI and Large Language Models

The hype surrounding large language models (LLMs) in AI is baffling, especially given the stark reality that these models offer nothing groundbreaking and bring us no closer to artificial general intelligence (AGI). We are amid the fourth generation of chatbots spawned by LLMs. Some computer experts claim the tenth generation of chatbots will achieve AGI, but that is about as likely as a cow jumping over the moon. Neither scenario is logically impossible, but the probability of either occurring is nearly nonexistent.

As a young boy in Chicago, I fondly remember visiting the Museum of Science and Industry with my parents. One exhibit that particularly excited me was a tic-tac-toe machine. Tic-tac-toe is played on a 3×3 grid, with one player using X's and the other O's; the first to place three marks in a row, column, or diagonal wins, and otherwise the game ends in a draw. The machine, a primitive computer built in 1949 and first exhibited in the early 1950s, played a perfect game: it never lost, and it won whenever the human made a mistake, such as allowing the machine to create a fork with two winning threats that cannot both be blocked. Little did I know then that this was my first encounter with artificial intelligence.
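
The "perfect play" of that museum machine is easy to reproduce today with the minimax algorithm, which exhaustively searches the game tree. Here is a minimal sketch in Python; the function names are mine, and the code illustrates perfect play in general, not the 1949 machine's actual circuitry.

```python
# A minimal sketch of perfect tic-tac-toe play via minimax search.
# The board is a list of 9 cells holding 'X', 'O', or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                      # board full: draw
    best_score, best_move = -2, None
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = None
        score = -score                      # opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

score, move = minimax([None] * 9, 'X')
print(score, move)   # prints 0 0: perfect play from the start is a draw
```

Running the search from the empty board confirms what the exhibit demonstrated: tic-tac-toe is a draw under perfect play, and the machine can only win when the human errs.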

A prime example of artificial intelligence meeting human play is the game of checkers, which was "solved" in 2007. While computers can outperform the best human players at chess and Go, neither of those games has been solved to the extent that tic-tac-toe has. In fact, checkers has only been "weakly solved," meaning that perfect play from the initial position is known to end in a draw, while some positions away from that line have never been fully calculated. Nonetheless, the probability of encountering a position whose outcome cannot be predicted is very low. Tic-tac-toe, by comparison, has been strongly solved: the outcome is known from every reachable position.

Aside from Othello, checkers will likely be the last popular game that computers solve. Chess may never be solved at all, as the task could take a computer the size of the solar system a lifetime to complete. Interestingly, in terms of the number of positions, chess is to checkers as exascale computers (the fastest of today) are to gigascale computers (the fastest supercomputers of 2007).

“This reinforces a point we have stressed: Increased complexity requires exponentially greater resources. Thus, several games, such as Go and Shogi, are unsolvable regarding any conceivable extension of today’s technologies.”

Chess and AI

In most games, computers will continue to outperform humans because they can calculate moves quickly and with brute force. However, human intuition can still find better moves in some instances, and the ability to calculate quickly does not equate to genius. There are many chess puzzles that humans can solve but that even the most advanced chess engines may struggle with after extensive computation.

The best-known example is a chess puzzle presented to the players at a high-level grandmaster tournament, Kasparov among them. None of them could find the solution. Covering the tournament as a journalist, however, was Mikhail Tal, considered one of the most remarkable chess talents of all time and known for his masterful, almost magical games. Like the other players, Tal did not immediately see the solution, so he thought a short break might help. After a ten-minute walk, he returned and revealed the answer to the perplexed grandmasters.

Even today, our most potent chess engines cannot solve some challenging puzzles requiring six to ten moves. Many examples of such puzzles can be found on Google and YouTube, some of which remain beyond the engines' reach. There are also chess positions where computers struggle to find the right plan, especially in certain endgames, and super puzzles with 20 to 25 or more forcing moves leading to checkmate are far beyond any computer's capacity.

Humans and computers play under the same time limits. However, computers rely on calculation and a numerical methodology to evaluate their moves, while humans supplied those evaluation methods in the first place. Computers have the advantage of calculating faster and storing interim results in memory; humans, on the other hand, have needs, emotions, and distractions that can break their focus during a game. To level the playing field, humans should be given extended time limits relative to computers, and perhaps allowed to write down interim results of their calculations to compensate for their natural disadvantages.

A few years back, Vladimir Kramnik, then the world champion, played against the most powerful chess engines. After around 20 moves, he had a position arguably as good as the computer's. On his next move, however, Kramnik made a blunder that even a beginner would avoid, a clear example of how nerves can lead to carelessness. The fact that we have never seriously addressed the idea of a level playing field suggests that, deep down, we want to believe AI can outperform us. In a way, that belief would justify our almost unlimited faith in science and technology.

Who’s Smarter?

Modern game engines are trained using methods similar to those behind language models, and some even learn by playing against themselves. AlphaGo was the first engine to learn Go through reinforcement learning, and it went on to defeat the world champion, a particularly notable achievement since Go is one of the most challenging board games. An important article in Nature highlighted the significance of the accomplishment:

“In the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov. It compensated for this by selecting positions more intelligently using the policy network and evaluating them more precisely using the value network. This approach is closer to how humans play.”
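
To make that contrast concrete, here is a toy sketch of the selection idea, with policy_net and value_net as random stand-ins for AlphaGo's trained networks. It is not AlphaGo's code; it only illustrates evaluating a few well-chosen positions instead of all of them.

```python
# Illustrative sketch: instead of expanding every legal move (brute
# force), a policy network proposes a few promising moves and a value
# network scores the resulting positions. Both networks below are
# hypothetical random stand-ins for trained models.

import random

def policy_net(position):
    """Stand-in: return (move, prior probability) pairs for a position."""
    moves = position["legal_moves"]
    priors = [random.random() for _ in moves]
    total = sum(priors)
    return [(m, p / total) for m, p in zip(moves, priors)]

def value_net(position):
    """Stand-in: estimate the probability of winning from a position."""
    return random.random()

def apply_move(position, move):
    """Stand-in: return the position after playing `move`."""
    remaining = [m for m in position["legal_moves"] if m != move]
    return {"legal_moves": remaining}

def select_move(position, top_k=3):
    # Policy network: keep only the top_k most promising moves ...
    candidates = sorted(policy_net(position), key=lambda mp: -mp[1])[:top_k]
    # ... value network: evaluate just those few positions precisely.
    scored = [(move, value_net(apply_move(position, move)))
              for move, _ in candidates]
    return max(scored, key=lambda mv: mv[1])[0]

position = {"legal_moves": ["a1", "b2", "c3", "d4", "e5"]}
print(select_move(position))   # evaluates 3 positions instead of all 5
```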

If a follow-up article were written today, it would have to mention that an amateur Go player named Kellin Pelrine used AI to analyze computer Go games and discovered a significant error in the calculation method used by the highly regarded open-source Go engine KataGo. After finding the error, Pelrine beat KataGo in 14 out of 15 games without using AI during play. The discovery showed that computers lack the intuitive understanding of overall strategy that even an amateur player has, and it challenges the common assumption, fed by our fascination with technology and computers, that machines always play at their best, or at least better than humans.

Many believe that games like Go and chess will always be unsolvable for computers, but brute force may eventually surpass human creativity here. Chess, Go, and other challenging board games are characterized by a fixed number of pieces, predetermined rules of movement, and a bounded level of complexity.

Magic: The Gathering (MTG) is considered one of the most complex games humans play because it includes hidden information that cannot be predicted in advance, making a brute-force solution difficult. While computers excel at games like poker, which also involve hidden information, MTG's range of possible reactions is far broader. This suggests that a metric such as hidden information multiplied by the possible responses to that information (H.I. x Choices) could establish an upper bound on what a computer could feasibly play, even at a low level.
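
As a rough illustration, here is how such a metric might be computed. The figures below are placeholders (1,326 is the count of two-card hold'em starting hands; the MTG numbers are pure guesses), since the metric itself is only a proposal.

```python
# A back-of-the-envelope sketch of the proposed "H.I. x Choices" metric.
# The point is how the metric would rank games, not the exact numbers,
# which are illustrative placeholders.

def hi_x_choices(hidden_states, choices_per_state):
    """Hidden information multiplied by possible responses to it."""
    return hidden_states * choices_per_state

games = {
    "poker (heads-up)": hi_x_choices(hidden_states=1_326,   # opponent hands
                                     choices_per_state=3),  # fold/call/raise
    "Magic: The Gathering": hi_x_choices(hidden_states=10**8,     # guess
                                         choices_per_state=10**2),  # guess
}

for game, bound in sorted(games.items(), key=lambda kv: kv[1]):
    print(f"{game}: ~{bound:.1e}")
```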

Advancements in AGI

Despite these game-playing advancements, we are no closer to creating AGI than we were decades ago when I struggled to beat a tic-tac-toe machine. Before you object that today's AI encompasses far more than games, consider that many human interactions can themselves be treated as games. Ludwig Wittgenstein, one of the greatest thinkers of modern times, used games as a model for much of human interaction.

There are various types of AI beyond gaming applications, but gaming remains a good representation of what AI can do and where its limits lie. Large language models (LLMs) and similar AI technologies can themselves be classified as types of games: facial and speech recognition, autonomous driving, LLMs, and chatbots all share underlying similarities with gaming technologies, and the tools used to develop them are similar across applications.

To understand LLMs, it is essential to be familiar with key terms such as ontologies, parameters, tokens, and agents. Ontologies and tokens describe the scope of the relevant knowledge domain and the level of detail represented within it. Parameters work together with tokens to capture the model's complexity; a typical parameter might be the probability of one token following another along a specific path. In certain LLMs, particularly those built on "deep learning," layers of tokens are connected, and the number of parameters can reach into the trillions. Ontologies vary widely, encompassing music, medical diagnosis, and general knowledge.
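
A toy bigram model makes these terms concrete: its tokens are words, and its parameters are the probabilities of one token following another. This sketch is mine, assuming a tiny made-up corpus; real LLMs replace the count table with trillions of learned neural-network weights.

```python
# Tokens and parameters in miniature: estimate the probability of each
# token following each other token from a toy corpus.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()  # toy tokens

# Count how often each token follows each other token.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

# Parameters: P(next token | current token), one probability per pair.
params = {
    prev: {nxt: c / sum(counts.values()) for nxt, c in counts.items()}
    for prev, counts in follow_counts.items()
}

print(params["the"])   # {'cat': 0.666..., 'mat': 0.333...}
```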

Tokens are the essential inputs used to train an AI and can range from a phrase or a few letters to a number. For facial or voice recognition, the tokens would be advanced biometric data defined in concert with the AI: virtually anything representing a person's face or speech can be translated onto numerical scales, from the wavelengths and patterns of speech to the distances among facial features. Not all human knowledge can be combined into one massive dataset, since defining the relationships among all tokens would exceed the time and memory capacity of any imaginable computer. One workaround for these limitations is the use of agents, in which an LLM links a question to another AI system, referred to as an agent. For example, if you ask an LLM a technical question in physics, the chatbot would hand the question to another LLM that specializes in physics and then relay the answer back to you.
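
A minimal sketch of that hand-off might look like the following, where the topic classifier and the specialist physics_llm are hypothetical stand-ins rather than any real product's API.

```python
# Illustrative agent routing: a general chatbot forwards a specialist
# question to another model and relays the answer. All functions here
# are hypothetical stand-ins.

def classify_topic(question: str) -> str:
    """Stand-in classifier; a real system might use the LLM itself."""
    return "physics" if "quark" in question.lower() else "general"

def physics_llm(question: str) -> str:
    return "Specialist physics answer about: " + question

def general_llm(question: str) -> str:
    return "General answer about: " + question

AGENTS = {"physics": physics_llm, "general": general_llm}

def chatbot(question: str) -> str:
    # Route the question to the agent for its knowledge domain.
    agent = AGENTS[classify_topic(question)]
    return agent(question)

print(chatbot("What holds quarks together?"))
```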

For Wittgenstein, "family resemblances" define our use of language, rather than a single strand or set of rules governing all language. In other words, language games share various characteristics, some common and some not, rather than conforming to a single definition, though any two language games will share at least one strand. Wittgenstein illustrates the idea in paragraph 67 of Philosophical Investigations, where he states:

“…the kinds of numbers, for example, form a family. Why do we call something a ‘number’? Well, perhaps because it has a – direct – affinity with several things that have hitherto been called ‘numbers.’ And we extend our concept of number, as in spinning a thread we twist fiber on fiber. And the strength of the thread resides not in the fact that some one fiber runs through its whole length, but in the overlapping of many fibers.”

Philosophy Meets Artificial Intelligence

In contrast to Wittgenstein, AI, and LLMs and chatbots in particular, defines language in terms of cast-in-stone rules about the connections among tokens. Even a model's creativity is defined directly in terms of the strength of those connections, or the "temperature" of the language. Why does Wittgenstein go out of his way to avoid hard-and-fast rules? One clue comes from the quote: his account of family resemblances is couched almost entirely in terms of material objects.
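
For the curious, here is a minimal sketch of how temperature works in sampling, assuming three tokens with fixed connection strengths; the numbers are illustrative only.

```python
# Temperature in miniature: the same token weights ("strength of
# connections") yield different distributions at different temperatures.
# Low temperature concentrates probability on the strongest connection;
# high temperature flattens the distribution, which reads as more
# "creative" output.

import math

def softmax_with_temperature(weights, temperature):
    scaled = [w / temperature for w in weights]
    m = max(scaled)                        # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

weights = [2.0, 1.0, 0.5]                  # connection strengths, 3 tokens
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(weights, t)
    print(t, [round(p, 3) for p in probs])
# t=0.2 puts nearly all mass on the first token; t=2.0 is much flatter
```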

Wittgenstein's views on language changed significantly between the publication of his Tractatus Logico-Philosophicus in 1922 and the posthumous release of Philosophical Investigations. He explored the gap between our use and our understanding of language, concluding that fixed rules governing language can mean different things to different people. In place of strict rules, he proposed the concept of family resemblance and a matrix of material relations among words to bridge the gap between different language games. For Wittgenstein, the material/non-material dichotomy is crucial to understanding many philosophical problems, and conflating the two leads only to further confusion.

There’s an absolute distinction between the objective material world and the subjective internal dialogue.

Thus an enigmatic parallel arises: Thomas Jefferson drew this same division in the first two paragraphs of the Declaration of Independence. In the first paragraph, Jefferson distinguishes between "the Laws of Nature" and "Nature's God," separating science and its laws from the higher power that created nature, or, more generally, the material from the non-material. In the second paragraph, the division appears differently: Jefferson states that "all men are created equal" and that "they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."

These two phrases do not contradict each other if the first refers to the non-material, the freedom of mind and ideas that Jefferson viewed as the most essential freedom man possesses, while the second refers to material things. A man may strive for whatever he wants, but it is self-evident that in most material pursuits there will be inequalities. Jefferson regarded freedom of mind and ideas as essential for a growing understanding of science and, in turn, for the increasing lifespan of our species, that is, for raising the probability that our species will avoid extinction.

Language generated by AI has prompted its ardent believers to suggest that science is about to make a giant leap to artificial general intelligence (AGI), which will exceed the abilities of its human creators. According to a recent Wall Street Journal article, one of the major players in AI wants to scan irises to distinguish human beings from robots. Here is a better way: ask the robot about the relationship between beauty and truth, or better yet, ask it what it feels like to be in love. No meaningful answer will be forthcoming, for AI is all about the material.

The central role of AI in our future is utterly opposed to the views of Wittgenstein and Jefferson, among many others, including anyone who believes in a higher power and spirituality. Language produced by AI is governed solely by material rules; there is nothing non-material about the computer programs responsible for it. As a result, AI language speaks to everyone in the same way that Newton's laws of motion do: as laws about material objects intended to carry universal meaning. Yet those laws would not exist without Newton's non-material creativity. Most important to Jefferson, and to philosophers like Kant, is that there is a non-material side to human beings, which houses consciousness, spirituality, dignity, ideas, and emotions, everything that makes us unique and is not part of the material world. You cannot touch, see, or feel the non-material. Yet it is the non-material, and in particular our creative ideas, that drives material progress and change while providing the spiritual awareness that anchors us to life and humanity.