Stock Analysis in the Twenty-First Century and Beyond

For years, financial analysts have struggled with the fact that practically all of the financial measures used to analyze corporate performance lack predictive power when it comes to forecasting the market performance of a company's stock. Numerous academic studies have documented this lack of predictability. Correlation coefficients close to zero have been reported for the relationship between stock market performance and such critical financial measures as earnings growth, sales growth, price/earnings ratio, return on equity, intrinsic value (models based on discounted cash flow or dividends), and many more. It is this disconnect between traditional financial measures and the performance of stocks in the marketplace that led to the now-famous efficient market hypothesis, the cornerstone of modern portfolio theory.

To accept the idea that the future performance of stocks is unpredictable is to say that nothing a company does will affect the future performance of its stock in the market, and that is absurd. It would be more accurate to say that everything a company does will affect the future performance of its stock in the market. The problem with this statement is that it makes forecasting future stock performance so complex that it lies beyond unaided human solution.

Confident that something other than chance and irrational investors determines future stock prices, several research groups around the world have begun exploring intelligent computer programs (programs that self-organize based on environmental feedback). Early results are very promising and have provided a glimpse of the economic forces Adam Smith described as the invisible hand that guides economic activity.
Stock Analysis in the Twenty-First Century and Beyond describes the stock analysis problem and explores one of the more successful efforts to harness this new intelligent computer technology.

Many people mistakenly classify artificially intelligent (AI) computer systems as a form of quantitative analysis. Advanced AI systems differ from traditional quantitative analysis in two distinct ways: (1) who makes up the selection rules and weightings, and (2) what information is used to discriminate between good- and poor-performing securities. In most quantitative systems, even in advanced expert-system form, humans make up the investment rules and mathematically derive the weightings associated with those rules. Computer systems that depend on outside human intelligence to program their actions are not inherently intelligent. In advanced AI systems, the computer makes up its own rules and weightings. The computer learns from examples of good- and poor-performing stocks and determines its own ways of discriminating between them. The procedures derived by the computer are often so complex that they defy human understanding.

In addition to making up their own rules, advanced AI systems look at corporate financial data differently. Just as in the human brain, where information is stored not in the brain cells themselves but in the connections and relationships between cells, corporate performance information is stored in the relationships between financial numbers. Assessing the performance of a company lies not so much in the numbers as in the connections between the numbers. Financial analysts recognized this early on and have long used first-order relational information in the form of financial ratios (price/book, debt/equity, current assets/current liabilities, price/earnings, etc.).
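The distinction drawn above, human-assigned rules and weightings versus machine-derived ones, can be sketched in a few lines of code. The example below is illustrative only: it is not the author's system, and every company figure in it is invented. A simple perceptron learns its own weights for separating good performers from poor ones using first-order relational features (financial ratios), rather than having an analyst assign the weights.

```python
# Illustrative sketch only, not the author's system; all figures are invented.
# A perceptron derives its own discrimination weights from labeled examples
# of good- and poor-performing stocks, using financial ratios as features.

def ratios(fin):
    """Turn raw financial figures into first-order relational features."""
    return [
        fin["price"] / fin["earnings"],                      # price/earnings
        fin["debt"] / fin["equity"],                         # debt/equity
        fin["current_assets"] / fin["current_liabilities"],  # current ratio
    ]

def train(samples, labels, lr=0.1, epochs=100):
    """Perceptron learning: weights come from labeled examples,
    not from a human analyst."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:  # update weights only on mistakes
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
                b += lr * (y - pred)
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented training examples: 1 = good performer, 0 = poor performer.
X = [
    [8.0, 0.3, 2.5],   # low P/E, low leverage, strong liquidity
    [10.0, 0.5, 2.0],
    [40.0, 2.5, 0.8],  # high P/E, high leverage, weak liquidity
    [35.0, 3.0, 0.9],
]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

A linear model like this can only weight the given ratios; capturing higher-order interrelationships between the numbers requires nonlinear learners such as multilayer neural networks, which is where the systems described in this book go further.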
Now, with advanced AI systems, we are finally able to examine and evaluate high-order interrelationships in financial data that have been far too complex to analyze with less sophisticated tools. These, then, are the fundamental differences between what has been used in the past and what will be used in the future.

Cdr. Thomas E. Berghage
Thomas Berghage is cofounder and CEO of NeuWorld Financial, a San Diego–based money management firm. A naval officer for twenty-four years, Commander Berghage was a member of an elite group of navy psychologists working on advanced military systems. Upon retiring from the navy, Mr. Berghage joined the financial community in San Diego. He is a past president of the CFA Society of San Diego and has served on its board of directors in various capacities.

He has an undergraduate degree from Minnesota State University, Mankato, an MA degree in research psychology from Western Michigan University, and an MBA degree from Central Michigan University, and he has completed all course work for a PhD at the University of Louisville. Prior to establishing NeuWorld Financial, Berghage was vice president/director of research at two West Coast regional brokerage firms. He also served two years as an independent consultant to Lawrence Livermore National Laboratory.

Berghage has written more than twenty scientific journal articles, thirty technical reports, and three books: Perception and Performance Underwater, Decompression Theory, and Beyond Human Comprehension: The Limits of Human Security Analysis. He has also served on the editorial board of the Journal of Undersea Medicine and contributed to the National Geographic book on exploring the deep frontier. His books have been translated into several foreign languages, and he is widely recognized for his original contributions in the field of human performance and undersea exploration.

Commander Berghage has served as an invited lecturer and presented numerous papers both here and in Europe. Since entering the world of finance, Berghage has continued writing and has published numerous corporate evaluation reports and several articles in Artificial Intelligence in Finance. He has presented his work at the Institutional Investor Institute and many CFA Society chapters around the world.