This prospect, frequently highlighted by technology leaders, ignites enthusiasm for the groundbreaking discoveries AGI could offer. However, it also raises critical questions about the need for legal frameworks to ensure AGI does not overpower humanity. The primary issue surrounding AI isn’t merely its evolution into AGI but the immense resources funneled into achieving this daring objective.

AI and Large Language Models

The excitement surrounding large language models (LLMs) in AI is intriguing, yet it overlooks a critical truth: these models do not represent revolutionary innovations and are unlikely to bring us closer to artificial general intelligence (AGI). We are witnessing the fourth generation of chatbots powered by LLMs, and some tech experts assert that the tenth generation will achieve AGI. Such predictions are about as probable as a cow leaping over the moon. The two claims aren’t mutually exclusive, but the chances of either materializing remain exceedingly slim.

As a young boy growing up in Chicago, I have vivid and cherished memories of visiting the Museum of Science and Industry with my parents. One exhibit that genuinely captivated my imagination was the tic-tac-toe machine. This classic game, played on a 3×3 grid, pits one player using X’s against another using O’s, the objective being to align three markers in a row: horizontally, vertically, or diagonally. If neither player succeeds, the game ends in a draw. What amazed me about the machine, invented in 1949 and first exhibited in the early 1950s, was its ability to play tic-tac-toe flawlessly. It always wins if the human player errs, cleverly setting up positions where the opponent must block two winning threats at once. I had no idea at the time that this would be my first encounter with the fascinating world of artificial intelligence.
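The machine’s perfect play is easy to reproduce today. The sketch below is not, of course, the relay logic of the original exhibit; it is a minimal Python version of the minimax algorithm that any perfect tic-tac-toe player embodies, searching every continuation and never losing:

```python
# Minimal minimax for tic-tac-toe: explores the full game tree and
# plays perfectly, so it can never lose.
# Board: list of 9 cells holding 'X', 'O', or ' '; 'X' moves first.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    for i, j, k in WINS:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(b)
    if w:                       # previous move already won the game
        return (1 if w == player else -1), None
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0, None          # board full: draw
    best = (-2, None)
    for m in moves:
        b[m] = player
        opp_score, _ = minimax(b, 'O' if player == 'X' else 'X')
        b[m] = ' '
        if -opp_score > best[0]:
            best = (-opp_score, m)
    return best

board = [' '] * 9
score, move = minimax(board, 'X')
print(score)  # 0: with perfect play from the empty board, tic-tac-toe is a draw
```

The printed score of 0 is the machine’s whole secret: tic-tac-toe is fully solved, and perfect play by both sides always ends in a draw.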

A compelling illustration of artificial intelligence engaging with humans is the game of checkers, which achieved “weakly solved” status in 2007. While computers consistently outclass even the top human competitors in chess and Go, neither of those intricate games has been resolved to the degree tic-tac-toe has. Checkers sits in between: the optimal sequence of moves from the initial position is known to guarantee a draw, yet some positions remain uncalculated. Even so, the likelihood of reaching an unanalyzed position in play is exceedingly small. Tic-tac-toe, in stark contrast, boasts a complete and definitive solution.

Othello stands out as the likeliest next conquest when we consider popular games for computers to solve, while chess will probably be among the last major games fully solved by AI. Chess presents a unique challenge; solving it may require computational power on an unimaginable scale: a computer the size of a solar system operating for an entire lifetime. Intriguingly, the jump in complexity from checkers to chess is like the jump from the giga-scale computers of 2007 to today’s exascale machines. The vast difference in possible positions illustrates how far we still have to go in the quest for AI mastery of strategy games.

This reinforces a point we have stressed: increased complexity requires exponentially greater resources. Thus, several games, such as Go and Shogi, appear unsolvable under any conceivable extension of today’s technologies.

Chess and AI

In many games, computers consistently outshine humans because of their remarkable speed in calculating potential moves. However, it’s essential to appreciate the unique advantage human intuition brings, which can occasionally produce superior decisions; quick calculation does not equal true genius. Despite their extensive computational capabilities, even the most sophisticated chess engines struggle with numerous chess puzzles, proof that human insight still holds tremendous value.

In a high-level grandmaster tournament featuring legends like Kasparov, the players faced a particularly difficult chess puzzle that stumped them all. Yet one of the most extraordinary chess geniuses, Mikhail Tal, was present. Renowned for his dazzling, imaginative games, Tal, like his peers, initially struggled to find the solution. Rather than succumbing to frustration, he took a brief ten-minute walk to clear his mind. Upon his return, he astounded the gathered grandmasters by unveiling the puzzle’s solution, a display of his singular approach to problem-solving and a reinforcement of his reputation as a remarkable chess talent.

Despite the advancements in chess technology, even the most powerful chess engines today struggle with complex puzzles that require 6 to 10 moves to solve. A quick search on Google or YouTube reveals countless examples of challenging positions that engines still fail to crack. There are scenarios in chess where artificial intelligence falters, particularly in certain endgames. Moreover, consider the astonishing super puzzles that demand 20, 25, or more forcing moves to achieve checkmate; these remain beyond any computer’s reach. This stark contrast underscores a human ingenuity in problem-solving that machines simply cannot replicate.

Humans and computers operate under the same time constraints when playing games, yet they approach decision-making differently. Computers rely on rapid calculation and a numerical methodology to evaluate each move, while humans bring their own evaluation techniques to the table. This gives computers a distinct advantage: they compute quickly and retain interim results. Humans, in contrast, contend with emotions, needs, and distractions that can hinder concentration during play. To create a more balanced competition, granting humans longer time limits than their computer counterparts is essential. Allowing players to record interim results, or even photograph their calculations, would further mitigate their natural disadvantages and improve their performance.

A few years ago, Vladimir Kramnik, then the reigning world champion, faced off against some of the most powerful chess engines. After approximately 20 moves, he had reached a position fully equal to the computer’s. Yet, at a crucial moment, Kramnik made a blunder that even novice players would avoid. The incident powerfully illustrates how nerves can lead to lapses in judgment. That we have never seriously confronted the task of creating a genuinely level playing field hints at a deeper belief that AI must surpass human capabilities; in essence, it reflects our unwavering confidence in science and technology.

Who’s Smarter?

Modern game engines utilize techniques akin to those behind language models, and some even master games by competing against themselves. AlphaGo pioneered the use of reinforcement learning to master the game of Go, ultimately defeating the world champion. The feat is especially remarkable given the complexity of Go, often regarded as the most challenging of board games. A pivotal article in Nature emphasized the profound implications of the achievement for artificial intelligence and beyond.

A follow-up story highlighted the groundbreaking discovery made by amateur Go player Kellin Pelrine, who used AI to analyze computer Go games. Pelrine pinpointed a critical flaw in the evaluation method of the well-respected open-source Go engine KataGo. Remarkably, after identifying the flaw, he defeated KataGo in 14 of 15 matches without relying on AI assistance. The finding underscores a crucial point: computers have yet to emulate the intuitive understanding that even amateur players bring to spotting strategic weaknesses. It challenges the prevailing belief, fueled by our admiration for technology, that machines consistently outperform humans. While many assert that games like Go and chess will never be solved by computers, it is worth considering that brute force could eventually outstrip human ingenuity: both Go and chess are defined by a finite set of pieces, their potential movements, predetermined rules, and a fixed complexity whose challenges nonetheless lie far beyond mere computation today.

Magic: The Gathering (MTG) stands out as one of the most intricate games humans play, primarily because it incorporates hidden information that defies prediction. That complexity poses a significant challenge for algorithmic analysis, unlike poker, where computers have demonstrated proficiency despite the hidden cards. The array of possible moves in MTG far surpasses poker’s, illustrating an exceptionally high level of variability. A compelling metric to consider is the product of hidden information and the multitude of possible responses (H.I. x Choices), which could effectively delineate the limits of a computer’s ability to compete, even at the most basic level.

Advancements In AGI

Despite significant advancements in gaming technologies, we have yet to achieve Artificial General Intelligence (AGI), though we are in a very different position from decades ago, when I found it challenging to win at tic-tac-toe. While it’s tempting to argue otherwise, the scope of today’s AI extends far beyond mere games. Some theorists suggest that all human interactions can be viewed through the lens of a game. Ludwig Wittgenstein, an influential thinker of modern philosophy, employed games as a framework to explore many aspects of human interaction.

AI encompasses a broad spectrum of applications that extends far beyond gaming. While gaming is a compelling illustration of AI’s capabilities and boundaries, it is essential to recognize that technologies such as large language models (LLMs), facial and speech recognition, and autonomous driving are fundamentally connected to gaming principles. LLMs and chatbots can be viewed as sophisticated forms of gameplay, reflecting the underlying mechanics of game design. Moreover, the development tools employed across these AI applications share striking similarities, reinforcing the interconnectedness of all AI technologies.

To truly grasp the world of large language models (LLMs), it’s crucial to understand fundamental concepts like ontologies, parameters, tokens, and agents. Ontologies and tokens define the boundaries of the knowledge domain and the depth of detail captured within it. Parameters, alongside tokens, shape the model’s overall complexity; a typical parameter might represent the likelihood of a token following a particular sequence. In many advanced LLMs, especially those built on “deep learning,” intricate layers of the network interconnect, with parameter counts soaring into the trillions. The diversity of ontologies is remarkable, with applications spanning music, medical diagnostics, and a wide array of general knowledge. Understanding these elements allows us to appreciate fully the sophistication of LLMs.
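The idea that a parameter can represent the likelihood of a token following a sequence is easiest to see in a toy bigram model. Real LLMs learn trillions of such weights with neural networks rather than by counting, but this counting sketch, with a made-up miniature corpus, conveys the principle:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": each learned parameter is simply the
# estimated probability of one token following another, obtained by
# counting adjacent pairs in a (hypothetical) training corpus.
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # tally: how often nxt follows prev

def next_token_probs(token):
    """Probability distribution over the next token, given `token`."""
    total = sum(counts[token].values())
    return {t: c / total for t, c in counts[token].items()}

probs = next_token_probs("cat")
print(probs)  # {'sat': 0.5, 'ran': 0.5}
```

In this miniature model the entry mapping “cat” to “sat” with probability 0.5 is one parameter; scale the vocabulary up, replace counting with gradient descent, and let context windows span thousands of tokens, and you arrive at the parameter counts of modern LLMs.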

Tokens are crucial components in training AI systems. They encompass various inputs, from simple phrases and letters to numerical values. In facial and voice recognition, carefully defined biometric data serve as the essential tokens: almost anything that characterizes an individual’s face or voice can be converted into numbers, from the wavelengths of speech to the spatial relationships among facial features. However, not all human knowledge can be aggregated into a single vast dataset; mapping the interconnections of all tokens would exceed the time constraints and memory capacities of any conceivable computer. A promising answer to these limits lies in the use of agents, where a large language model routes an inquiry to a specialized AI system, the agent, best equipped to handle it. Pose a complex physics question to such an LLM, and it delegates the investigation to another model with expertise in that field, ensuring you receive an accurate and informed answer.
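The delegation pattern just described can be sketched as a simple router. The keywords and the agents below are hypothetical placeholders, standing in for real specialized models, and not the interface of any particular product:

```python
# Illustrative sketch of LLM-to-agent delegation: a router inspects a
# query and hands it to a specialist "agent" (here, plain functions
# standing in for specialized models). All names are hypothetical.

def physics_agent(query):
    return f"[physics agent] consulting physics knowledge for: {query}"

def medical_agent(query):
    return f"[medical agent] consulting diagnostic data for: {query}"

def general_agent(query):
    return f"[general agent] answering directly: {query}"

# Hypothetical keyword-to-specialist routing table.
ROUTES = {
    "quantum": physics_agent,
    "entropy": physics_agent,
    "diagnosis": medical_agent,
    "symptom": medical_agent,
}

def route(query):
    """Delegate the query to the first specialist whose keyword matches."""
    q = query.lower()
    for keyword, agent in ROUTES.items():
        if keyword in q:
            return agent(query)
    return general_agent(query)

print(route("What is quantum entanglement?"))
```

A production system would replace the keyword table with the LLM itself deciding which tool to call, but the division of labor, a general model dispatching to narrow experts, is the same.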

For Wittgenstein, the concept of “family resemblances” offers a more nuanced understanding of language use than any single set of rules. Instead of narrowing language to one rigid definition, he posits that language games possess many traits, some common and others unique. Despite this diversity, these language games will invariably share one or more characteristic elements. Wittgenstein powerfully exemplifies this notion in paragraph 67 of his work, Philosophical Investigations, where he articulates this intricate web of connections.

“…the kinds of numbers, for example, form a family. Why do we call something a ‘number’? Well, perhaps because it has a – direct – affinity with several things that have hitherto been called ‘numbers.’ And we extend our concept of number, as in spinning a thread we twist fiber on fiber. And the strength of the thread resides not in the fact that some one fiber runs through its whole length, but in the overlapping of many fibers.”

Philosophy Meets Artificial Intelligence

Unlike Wittgenstein, modern AI, particularly LLMs and chatbots, constrains language to rigid, predefined rules that dictate the relationships between tokens. Even creativity in language is measured through the strength of these connections, governed by a setting often referred to as the “temperature” of the language. Why does Wittgenstein deliberately steer clear of absolute rules? A significant insight lies in his emphasis on family resemblances, which he illustrates primarily through tangible objects.
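“Temperature” has a precise, mechanical meaning in these systems: it rescales the model’s raw scores before they are turned into probabilities over the next token. A minimal sketch, with hypothetical scores for three candidate tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities. A low temperature
    sharpens the distribution (more predictable output); a high
    temperature flattens it (more varied, 'creative' output)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
hot = softmax_with_temperature(logits, 2.0)    # flatter: more variety
print(cold, hot)
```

Even at its most “creative,” the machinery is only a knob on a probability distribution, which is precisely the rigidity Wittgenstein’s family resemblances resist.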

Wittgenstein’s perspective on language underwent a profound transformation from the 1922 publication of his Tractatus Logico-Philosophicus to the later release of Philosophical Investigations. He delved into the complexities of how we use and comprehend language, ultimately recognizing that fixed linguistic rules can have varied interpretations among different individuals. Moving away from rigidity, he introduced the idea of family resemblance and a web of material relationships among words to connect disparate language games. Wittgenstein emphasized that distinguishing between the material and non-material aspects is essential to unraveling numerous philosophical dilemmas, warning that conflating these two realms only deepens the confusion.

There’s an absolute distinction between the objective material world and the subjective internal dialogue.

An intriguing parallel emerges when we examine the foundational ideas expressed by Thomas Jefferson in the Declaration of Independence. In the opening paragraph, Jefferson skillfully distinguishes between “the Laws of Nature” and “Nature’s God,” emphasizing a crucial dichotomy: he sets apart the scientific laws governing the physical world from the divine force that gave rise to it, establishing a clear contrast between the material and the spiritual realms. In the following paragraph, this distinction takes on a different form. Jefferson affirms that “all men are created equal” and that they are “endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Such assertions reinforce the profound belief in inherent human rights grounded in nature and divine providence.

These two statements can coexist harmoniously if we interpret the first as about the non-material realm, such as the freedom of thought and the exchange of ideas—principles Jefferson deemed fundamental to human existence. Meanwhile, the second statement addresses the tangible aspects of life. While individuals may aspire to achieve their desires, it is clear that, in practical endeavors, disparities will inevitably arise. Jefferson championed the freedom of thought and creativity as vital for fostering a deeper understanding of science, which, in turn, enhances human longevity and raises the likelihood of our species avoiding extinction.

The advancements in AI-generated language have led many enthusiasts to herald the imminent arrival of artificial general intelligence (AGI), a technology poised to surpass human capabilities. A recent article in the Wall Street Journal highlighted a leading figure in AI who envisions a future where iris scans could differentiate between humans and robots. However, instead of focusing on these technicalities, we should delve into deeper inquiries, such as exploring the connection between beauty and truth or, even more intriguingly, asking robots what it’s like to experience love. The reality is that no significant response is expected, as AI fundamentally operates within the realm of the material.

The pivotal role now assigned to AI in shaping our future stands in stark contrast to the views of thinkers like Wittgenstein and Jefferson, as well as the many others who believe in a higher power and the significance of spirituality. The language generated by AI is bound strictly by material rules. It is devoid of the non-material essence that defines human expression, which is why AI language speaks uniformly to everyone.

Similarly, Newton’s laws of motion are principles about material objects that aspire to universal significance, yet those laws sprang from Newton’s profound non-material creativity. For philosophers like Jefferson and Kant, the essence of humanity lies in our non-material aspects: consciousness, spirituality, dignity, and the array of ideas and emotions that make us truly unique, elements that reside outside the realm of the tangible. While we cannot touch, see, or feel the non-material, this aspect of ourselves, particularly our creative ideas, propels material advancement and transformation while providing the spiritual grounding that connects us to life and to the core of our humanity.


Investment News presented by World Renowned Economist, Money Manager & Finance Expert Dr. Stephen Leeb Ph.D. Founder of Leeb Capital Management Leeb.net