AI, in the form of large language models (LLMs), doesn't entirely overlook the non-material and creative aspects of thought. This uniquely human product of consciousness is approximated in AI by a setting called temperature, which controls the probability of a particular response to a question. At a temperature of zero, the response to a given question will essentially always be the same: the most probable continuation. In contrast, at a very high setting, the response approaches a random choice among the collection of possibilities, each with nearly the same probability of being selected. Contrast a chatbot's reply to "Why does an apple fall from a tree?" with Newton's epiphany about gravity when he witnessed an apple drop from a tree's branch.
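To make the mechanics concrete, here is a minimal sketch of temperature-scaled sampling. The three candidate-word scores are invented for illustration and come from no real model:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Scale the raw scores: a low temperature sharpens the distribution toward
    # the single most probable word; a high one flattens it toward uniform.
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [4.0, 2.0, 0.5]                    # hypothetical scores for three words
for t in (0.01, 1.0, 100.0):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    print(f"temperature {t}: {np.bincount(picks, minlength=3) / 1000}")
```

At the near-zero setting the same word is chosen every time; at the very high setting the three words are picked almost uniformly.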

In the case of LLMs, one task has been tried that might be thought of as weakly satisfying something that could be called creative: generating data from the model's human-produced database that can then be used to train another computer on tasks with fewer ontologies than those represented by the enormous original database.

In a recent Nature cover story, LLMs were asked to generate data relevant to dogs; after many iterations, they ended up producing pictures of just one breed, the Golden Retriever.

In another example, after several iterations, an LLM asked to develop data pertinent to medieval architecture produced a list of jackrabbits.
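A toy simulation conveys the mechanism. This is a drift sketch, not the Nature experiment; the breed names and probabilities are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
breeds = ["Golden Retriever", "Poodle", "Beagle", "Dachshund"]
probs = np.array([0.4, 0.3, 0.2, 0.1])   # invented "real-world" distribution

# Each generation, the "model" is refit only on a small sample of its own
# output, with no fresh human data added.
for generation in range(1, 61):
    sample = rng.choice(len(breeds), size=25, p=probs)
    probs = np.bincount(sample, minlength=len(breeds)) / 25
    if generation % 15 == 0:
        print(generation, dict(zip(breeds, probs.round(2))))
```

Each round, sampling noise erases the rarer breeds first, and the distribution drifts toward a single dominant category, which is the essence of model collapse.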

This might be funny in one sense but profoundly problematic in another. After all, we have valued AI-related companies in the trillions of dollars in the hope of a massive payoff, which means something much more than the ability to perform routine tasks under human guidance. At a minimum, AI should be able to produce answers to the profound problems that potentially limit its own growth.

False Information Breeds False Narratives

In a recent YouTube interview, the technologist and former Google CEO Eric Schmidt made the point about the resources AI requires in a light-hearted way, noting that to get AI past its next hurdles, we had better stay close friends with our Canadian neighbors; the reason is their vast oil and gas endowment. Schmidt wrote a book on AI with two co-authors, Henry Kissinger and the AI researcher Daniel Huttenlocher, who holds an administrative position at MIT, where he received his Ph.D.

The overarching message of Schmidt's book needs revision for many reasons. Its central thesis is that AI, which has already passed human capabilities in certain areas such as games, promises to be the foundation of one of the most extraordinary transformations in the history of humanity. This is especially surprising coming from Schmidt, who has seen firsthand that the gains in many technologies, AI above all, require resources that grow exponentially faster than the gains themselves. He nodded to the situation in his reference to Canada but apparently believes the problem does not pose insurmountable difficulties. The evidence is overwhelming that it does. Chess would be just a tiny part of AGI, and as pointed out above, a solution to chess is well beyond the foreseeable abilities of current computer technologies, in terms of both calculating capacity and energy requirements.
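A back-of-envelope calculation, using rough public order-of-magnitude estimates rather than anything from Schmidt's book, shows the scale of the gap:

```python
# Rough public estimates (orders of magnitude, not precise figures).
legal_positions = 1e44       # upper-bound estimate of legal chess positions
shannon_game_tree = 1e120    # Shannon's classic game-tree complexity estimate
exaflop_machine = 1e18       # ops/second for a frontier exascale supercomputer
seconds_per_year = 3.15e7

years = legal_positions / (exaflop_machine * seconds_per_year)
print(f"~{years:.0e} years just to visit each legal position once")
# ~3e+18 years: hundreds of millions of times the age of the universe,
# before even touching the 1e120 game tree or the energy bill involved.
```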

Asking AI to solve the problem is like asking a cat to catch its tail. Human ingenuity may be able to hold the issue off for a while, but in the long term there is no chance. Indeed, there are already signs that producing more powerful AI chips is becoming a problem: Nvidia recently announced delays in its next-generation LLM chip. Even the strongest believers in AI must admit that we still have miles to go, meaning multiple generations of chips lie ahead. It is only a question of which comes first: the unmanageable complexity of future chips or insufficient energy to produce and use them.

Can Greater Sophistication Change The Game?

Yet many continue to view AI as more than a boost to productivity in relatively routine tasks. Instead, AGI is expected to make far-reaching creative contributions to the sciences and to develop new types of mathematics to solve problems ranging from the Riemann hypothesis to ultra-complex combinatorial problems. The mathematician Paul Erdős and others have said we need more advanced mathematics to solve problems related to groupings within networks, among many others. Again, the power and calculating needs of much more advanced AI bring up the cat-and-tail metaphor. AI will no doubt become more widespread, but in terms of greater sophistication we are already in the very late innings.

Why do we persist in believing what amounts to magic?

One reason is that we—and here, I refer to the collective West—have veered wildly from the path we had been following from the penning of the Declaration of Independence to the early 1970s. 

The West has become home to one of the most materialistic societies in human history, with America leading the pack. Science has wholly replaced spirituality. Multiple psychology studies, including my own Ph.D. thesis, have shown a strong correlation between a willingness to accept narratives unquestioningly and, relatedly, a desire to follow authoritarian leadership. A purely worldly perspective also severely hampers creativity.

Remarkably, we criticize societies like China and Russia for their lack of freedom. There is no quarrel that those societies lack many fundamental democratic values. However, we suffer our own lack of freedom, especially freedom of ideas, the cornerstone of what Jefferson believed was critical to scientific advancement and overall well-being. All signs are that China and Russia have left us in the dust.

One of many examples: according to three primary Western sources, China has far surpassed us in STEM. The Netherlands' bibliometric ratings of the world's universities have China taking the first ten places in STEM. The highest-rated U.S. university is MIT, at a shade above 40th (averaging the two categories that make up STEM). Harvard does not make the top 100, and no other U.S. school even reaches the top 50. According to the Australian study, China leads in 90% of the sciences. One of the very few areas in which America appears to beat China is quantum computing; ironically, that appearance is a result of China's recent unwillingness to publish its findings. A recent IEEE report notes the following:

"In one study, the researchers experimented with Zuchongzhi, which used 56 superconducting qubits on a task whose solutions are random instances, or samples, from a given spread of probabilities. They found that Zuchongzhi had completed the sampling task in 1.2 hours, which they estimated would take Summit at least 8.2 years to finish. They also noted that this sampling task was tens to hundreds of times more computationally demanding than what Google used to establish a quantum advantage with Sycamore. In another study, the scientists tested Jiuzhang 2.0, a photonic quantum computer, using Gaussian boson sampling, a task where the machine analyzes random data patches. Using 113 detected photons, they estimated Jiuzhang 2.0 could solve the problem roughly 10²⁴ times faster than classical supercomputers.

Although the sampling task used in experiments with Zuchongzi has no known practical value, the Gaussian boson sampling problem on which Jiuzhang 2.0 was tested potentially has many valuable applications, such as identifying which pairs of molecules are the best fits for each other. As such, this work may have quantum chemistry applications in simulating vital molecules and chemical reactions, says physicist Chao-Yang Lu at the University of Science and Technology of China in Hefei, a co-author on both studies.”

In other words, whether in practical applications or theoretical tests, China's quantum computing is well ahead of the U.S. In a blog for another day, we will argue that our nearly 200 years under Jeffersonian views of equality, spirituality, and democracy created perhaps the most creative society in human history. Spirituality, so often left off such lists, is one of the critical driving forces of creativity. When was the last time you heard a U.S. president sincerely reference God in a significant speech? It was John Kennedy; many of his quotes can be found on Google. It seems we have a lot to relearn in America, and I don't see AI as a suitable teacher.

Final Thoughts

In any AI task, the goal is to transform an initial data set and its parameters into higher-level data sets in which the initial data is linked with higher-level parameters. Success is determined by how accurately the AI produces a correct result in all cases, and ultimately this is a function of the number of computations required to produce that result. Recognize that human input enters at virtually every step, from defining the database to defining success. In philosophy, defining a language game might be a relatively low bar; explaining the paradox associated with the game's rules would be a much higher bar; and stating how that paradox relates to conscious experience is a level that AI is unlikely to reach. In chess, winning a game is a much lower bar than solving an arbitrary position for the correct answer: the first is a low bar; the second is beyond foreseeable capabilities.
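As a minimal illustration of how much human judgment frames even a "simple" AI task, consider a standard scikit-learn workflow. This is a generic sketch, not a description of any particular system; every commented line marks a human decision:

```python
from sklearn.datasets import load_iris                # humans collected and labeled the data
from sklearn.linear_model import LogisticRegression   # humans chose the model family
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score            # humans defined "success" as accuracy

X, y = load_iris(return_X_y=True)                     # humans chose the features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)  # humans chose the evaluation split
model = LogisticRegression(max_iter=1000)             # humans set the parameters
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

From the database to the definition of success, the machine optimizes only within a frame that people have already drawn.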