
The Economic Reality of AI: Statistics and Decision-making

Man has been searching for ways to make the right decisions since long before recorded history. Astrology appeared long ago; science and economics emerged much later. The difficulty remains the same: making the right decision. Now we have AI. The drive for more AI comes predominantly from businesses, which hope to sell more and increase profit while reducing the number of employees to cut costs.
Not long ago, “artificial” had a negative connotation. “Intelligence” is something we are looking for everywhere, even in outer space. Judging by the amount of money and effort we spend looking for intelligence, we clearly have not found it yet. Putting blind faith and money into AI exposes our society to a scenario that raises serious questions.
Statistical tools and algorithms are applied to large data sets, and we call the result AI. Statistical theories help make sense of data, underpinning AI’s logic and decision-making. Daniel Kahneman, a psychologist who received the Nobel Prize in Economics in 2002 for his research on human judgment and decision-making under uncertainty, recounts in Thinking, Fast and Slow how he slowly discovered that, even among scientists, our views of statistics tend to be biased. That is a polite way of saying that we continuously err in our understanding of statistics.
In this discussion, AI employs various methods to comprehend human language, enabling it to mimic human decision-making. Data is information transformed into a format that helps AI understand problems and learn solutions. Intelligence is the ability to analyze a collection of data and determine which pieces of information are significant or relevant. Wisdom is knowing and making the right choice, even in uncertain circumstances. No amount of data or number crunching can change that. If data points contain information that is not immediately evident, we must analyze the data further to find it, and determining whether that information contains any intelligence takes even more analysis. Intelligence is the link between information and decision-making. Only after the decision is made will the result show whether we displayed wisdom.
There are solved problems or questions, and there are unsolved ones. “This focus on established knowledge thus prevents us from developing a ‘common culture’ of critical thinking,” writes Peter Isackson in “Outside the Box: Media Literacy, Critical Thinking and AI.” Can AI deliver anything sensible on unsolved problems?
AI relies on a larger amount of data than was ever available before. However, more data does not guarantee getting closer to a correct decision. Statistics and algorithms form the basis of AI’s data manipulation. Statistics refers to data collected in the past; it cannot say anything specific about the outcome of future processes. More data, more of the same, will not generate anything new.
The information content of a system, be it a book, the universe or an LLM, is measured by the behavior of large sets of discrete random variables and is determined by their probability distribution. That is a complicated way of saying that we are talking about probabilities, not certainties. 1+1 does not necessarily equal 2.
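The idea that information content is determined by a probability distribution is, in Shannon’s formulation, entropy. A minimal sketch, with illustrative probabilities chosen here purely as examples:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain: each toss carries one full bit.
print(shannon_entropy([0.5, 0.5]))    # 1.0

# A heavily biased coin carries far less information,
# because its outcome is almost certain in advance.
print(shannon_entropy([0.99, 0.01]))  # ~0.08
```

The more predictable a system, the less information each observation conveys; certainty and information content pull in opposite directions.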
Therefore, AI’s output will be mediocre at best. AI will likely have even more trouble separating correlation from causality than humans do. Correlation tells us nothing about cause and effect; it may seem that way sometimes, but only to an undiscerning observer. And the more often a specific set of information occurs, the more likely that information is to be included in AI’s response.
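How easily correlation fools the undiscerning observer can be shown with a classic statistical illustration (the data below are synthetic, generated for this sketch): two series that have nothing to do with each other, yet often correlate strongly simply because both drift over time.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(42)

# Two independent random walks: by construction, neither causes the other.
walk1, walk2, a, b = [], [], 0.0, 0.0
for _ in range(500):
    a += random.gauss(0, 1)
    b += random.gauss(0, 1)
    walk1.append(a)
    walk2.append(b)

# Independent trending series frequently show a large |r| anyway.
print(round(pearson(walk1, walk2), 2))
```

A pattern-matching system fed these two series would happily report a relationship; only causal reasoning, which statistics alone cannot supply, reveals there is none.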
Some researchers have asked whether more information or data will improve AI’s answers. It will not. The larger and more complex the data set, the harder it becomes to detect causality. Adding new knowledge will not significantly change the answers AI gives. Even if researchers discovered a cure for cancer tomorrow, that knowledge would be just one fact among millions.
Values are marginal, not absolute. Doing more of the same will only add value for a limited time and a limited number of marginal increments. Beyond that point, the marginal costs rapidly outweigh any gains. AI relies on continually doing more of the same. The more AI is applied, the lower the additional value will be.
Too many economists have tried to follow in astrologers’ footsteps and predict the future. Except by coincidence, the forecasts tend to be wrong. This has led to a general disregard for some of the main insights that rule economies, societies and human life. They are worth mentioning here.
There are no returns without risks. This is true in all sectors of the economy, not only in the financial markets. Every decision involves risk, and the desired outcome is never certain. Whatever man does, there will never be guaranteed certainty about the outcome. We look to AI for more precise answers and less uncertainty. The hope is that AI can mitigate some risks and give humans more certainty in their decision-making. But if AI can provide specific answers at lower cost and lower risk, the returns will also be lower than what we would otherwise gain.
All decisions involve a trade-off. Whatever choice you make, whatever you gain, you will also lose something: you will pay opportunity costs. Rest assured that no website, shopping basket or fine print will disclose those opportunity costs.
A good example is dynamic pricing. With the rise of the internet, it seemed as if price comparison would lower the search costs associated with imperfect information. Soon, however, merchants discovered dynamic pricing, built on their superior knowledge of consumers’ search behavior. Any advantage the consumer had gained from the internet was turned into a disadvantage, based once again on unequal access to information.
One of the oldest laws in economics states that “bad money always drives out good money,” also known as Gresham’s law. Thomas Gresham, financial agent of Queen Elizabeth I, explained that if coins containing metal of different value circulate with the same face value as legal tender, the coins made of the cheaper metal will be used for payment, while people hoard or export the coins made of the more expensive metal, causing them to disappear from circulation. Strangely enough, very few people, even economists, understand that this applies to everything of value, not just money. Today, money holds little value; most people prefer stocks, and we have witnessed bad stocks driving out good stocks, which are no longer secure. Since the 1970s, we have seen that “bad quality always drives out good quality”: Philips versus Sony in video systems, and Ikea is an example of what happened in furniture. Does anyone doubt the prevalence of polyester over natural fibers, or the dominance of Chinese goods? If “information is money,” low-quality information will always have the upper hand over good-quality information. If schools and universities accept AI-based work, what are the chances of any progress in knowledge?
Bad (low-quality) information always drives out good information. The emergence and rising use of the ‘fake news’ label should remove doubts in that field.
Profit is based on value-added. To add value, someone or something must create and incorporate that additional value into a product or service. Creativity plays a central role in providing added value. Can AI generate added value? 
I used to joke about intelligence. Why are people looking for intelligent life in space when it is already so difficult to find on Earth? Today, I no longer joke about it. Does the emergence of ‘Artificial’ Intelligence mean we have given up hope of finding real intelligence?
Business leaders may have more confidence in AI than in economists. I can’t even say I blame them. But whatever else AI may bring, the displays of blind faith in AI we are currently witnessing will have consequences:
Less choice means less freedom.
I used to think that computers would never outsmart humans. I was wrong. I was thinking of computers getting smarter and overtaking human intelligence. But if humans become less intelligent, the average person will someday be less intelligent than a computer. The complacency and sometimes blind trust people display towards AI can make this a self-fulfilling prophecy.
As with all supply and demand, if there is a demand for AI with all its current pitfalls, someone will supply such a tool. The consequences are anybody’s guess. The good news is that if there is a demand for AI without the pitfalls, someone will supply that tool as well. Mankind might even be the winner. Can I have some natural intelligence, please?
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
