Function assignment options - Part 3
Artificial intelligence and economic practice
Hector McNeill1
SEEL
AI, or artificial intelligence, is based on a pretty old cybernetics model in which a calculating machine has a feedback mechanism that alters the output signals of the calculator. Today these same models are implemented as digital systems, and the feedback loop in the system is the "monitoring and evaluation" logic used to help the processor "learn". Hence the notion of "artificial intelligence".
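As an illustration of this feedback principle, the following is a minimal sketch in Python (added here for illustration only); the target value, adjustment gain and tolerance are hypothetical. The output is repeatedly compared with a target and corrected until an evaluation step judges it acceptable.

# Minimal sketch of a cybernetics-style feedback loop: an output is
# repeatedly evaluated against a target and corrected until acceptable.
# The target value, gain and tolerance are hypothetical illustrations.

def feedback_loop(initial_output: float, target: float, gain: float = 0.5,
                  tolerance: float = 0.01, max_cycles: int = 100) -> float:
    output = initial_output
    for _ in range(max_cycles):
        error = target - output          # monitoring: compare output with target
        if abs(error) <= tolerance:      # evaluation: is the output acceptable?
            break
        output += gain * error           # correction: the "learning" adjustment
    return output

print(feedback_loop(initial_output=0.0, target=10.0))  # converges towards 10.0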
The logical mathematics now applied to the operation of digital systems, and the deductive logic applied to the handling of feedback signals, were first published by George Boole in 1854.
Although AI, or knowledge engineering, was the subject of government funding and research initiatives in the UK, Europe and the USA, prompted by Japan's entry into this field, almost nothing took us beyond the basic cybernetics model and Boole's mathematical logic. This remains the case today.
Black boxes and decision analysis
In the 1960s Ronald Howard, then of Stanford Research Institute and now with Stanford University, developed the discipline of decision analysis; it was he who coined the term "decision analysis" for this new discipline (See: Policy Decision Analysis & the Real Incomes Approach). One objective of decision analysis is to remove confusion from the justifications surrounding any particular course of action by companies or governments. Transparency is not only important in identifying the best policies; decision analysis also provides the logical reasoning as to why a specific policy has been selected in terms of its benefits.
The approach consists of building a determinate decision analysis model. This is a computer model of the relevant cause-and-effect relationships between the inputs - the determinants - and the outputs - the desired objectives. A decision analysis cycle is used to test the model against known quantitative relationships. If the model does not generate the known outputs for known historic circumstances, this signifies one or more of the following:
- The model is inadequate
- The information used is not representative of existing "real life" circumstances
- The estimated probabilities of constraints on the achievement of objectives are incorrect
Each of these factors is adjusted iteratively until the model is deemed acceptable for analyzing decision options; a sketch of this calibration cycle follows.
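The following is a minimal Python sketch of such a calibration cycle; the single-determinant model form, the historic records and the tolerance are hypothetical illustrations of the loop just described, not any particular piece of software.

# Sketch of a decision analysis calibration cycle: a simple determinate model
# is tested against known historic input/output pairs and its parameter is
# adjusted iteratively until its predictions match history acceptably.
# The model form, historic data, step and tolerance are hypothetical.

historic_records = [(10.0, 25.0), (20.0, 45.0), (30.0, 65.0)]  # (input, known output)

def model(determinant: float, coefficient: float) -> float:
    """Determinate cause-and-effect model: output = 5 + coefficient * input."""
    return 5.0 + coefficient * determinant

def calibrate(records, coefficient=1.0, step=0.05, tolerance=0.5, max_cycles=200):
    for _ in range(max_cycles):
        errors = [known - model(x, coefficient) for x, known in records]
        mean_error = sum(errors) / len(errors)
        if abs(mean_error) <= tolerance:                  # model reproduces history: accept it
            return coefficient
        coefficient += step if mean_error > 0 else -step  # adjust the model and re-test
    return coefficient

print(calibrate(historic_records))  # converges towards a coefficient of about 2.0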
In this case the difference between artificial intelligence (AI) and decision analysis is that in decision analysis the feedback loop - that is, the evaluation of the outputs generated by different combinations of inputs - is assessed case by case by decision analysts. However, the algorithms in the model can be set up to "report" only those outputs which meet specific criteria, such as profit or numbers employed, across a range of assumed-to-be-realistic input combinations. If the algorithm is connected to a database of known feasible values for the inputs, based on past experience, then the decision analysis model can run without intervention to generate reports on ranked optional solutions. The so-called Monte Carlo simulation method works on this basis. If the decision makers establish desirable outputs as the decision analysis model's target, then one ends up with an AI operation. The logic here is that the feedback loop, which records the attained outputs, assesses whether these fall within the desired range. When a solution does not fall within that range, the algorithm is considered to have "learned" that this input combination is not acceptable, and therefore it will not be repeated. The input combinations that generate outputs satisfying the decision maker's desired range are reported.
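A minimal sketch of this kind of Monte Carlo screening run is given below; the input ranges, the profit model and the target range are hypothetical illustrations of the approach rather than a specification taken from the text.

import random

# Sketch of a Monte Carlo decision analysis run: random input combinations are
# drawn from assumed feasible ranges, the model output is checked against the
# decision maker's target range, and acceptable combinations are ranked.
# The input ranges, output model and target range are hypothetical.

def profit_model(price: float, volume: float, unit_cost: float) -> float:
    """Determinate model linking an input combination to the output (profit)."""
    return (price - unit_cost) * volume

def monte_carlo_run(trials: int = 10_000, target=(50_000.0, 120_000.0), seed: int = 1):
    rng = random.Random(seed)
    accepted = []
    for _ in range(trials):
        combo = {
            "price": rng.uniform(8.0, 15.0),       # assumed feasible price range
            "volume": rng.uniform(3_000, 12_000),  # assumed feasible sales volume
            "unit_cost": rng.uniform(5.0, 9.0),    # assumed feasible unit cost
        }
        output = profit_model(**combo)
        if target[0] <= output <= target[1]:       # feedback: keep only in-range results
            accepted.append((output, combo))
    accepted.sort(key=lambda item: item[0], reverse=True)  # rank acceptable solutions
    return accepted

for output, combo in monte_carlo_run()[:3]:
    print(f"profit {output:,.0f} from {combo}")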
Transparency and obfuscation
In terms of macroeconomic policy there is a battle between coherent decision analysis, or AI-based analysis, and the explanation of what a policy will achieve. One reason for this is that either the decision analysis models do not in reality work, as in the case of the Quantity Theory of Money (QTM) (See: A Real Money Theory - a note and also The real incomes component of the Real Money Theory - a note), or policy-makers are aware that the QTM is irrelevant and yet assert that it is a valid basis for guiding policy, pointing to blatantly false beneficial outcomes expected from monetary policy. After all, the QTM totem pole has been around for a long time as the "explanation" of the impact of money volume on "demand" and "inflation"; to state openly that it is nonsense would be embarrassing for some, while liberating for the majority. Even Keynes explained that there is no direct connection between money volume and inflation, and RIO (Real Incomes Objective) research has provided a more realistic model. However, contemporary economists, who cling to the equally fallacious Aggregate Demand Model, insist that the QTM is valid. So what we see here, in order to sustain an archaic monetary system, is a good deal of misrepresentation as to the way in which it operates.
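For reference (this restatement is added here and is not taken from the original text), the QTM rests on the equation of exchange, and its policy claim depends on assuming that velocity and the volume of transactions are stable:

\[
M V = P T
\]

where \(M\) is the money supply, \(V\) the velocity of circulation, \(P\) the price level and \(T\) the volume of transactions; only if \(V\) and \(T\) are assumed constant does an increase in \(M\) translate directly into an increase in \(P\).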
GIGO - Garbage in, Garbage out
When the foundation "model" of monetarism is so plainly flawed, and governments adhere to regular central bank reporting consisting largely of gobbledygook pronouncements, somewhat perfected by Alan Greenspan at the Fed, it must greatly amuse those benefiting from the system to behold the respect and awe with which the constituency and the media attempt to make sense of every word uttered or printed on such occasions. The situation is one where the exceptionally poor quality of the logic of a disproven system can only be defended or explained by obfuscation and dishonesty.
You cannot regulate Mumbo-Jumbo
The main evolution in the application of algorithms to market transaction decision analysis emerged after Black and Scholes developed their hedging model for derivatives in 1973. This led to a spiral in the application of algorithms and AI to market transactions. The power of even open source databases and processors has resulted in what is known as high-frequency trading in a range of asset markets and many commodity markets. In share markets, operators have purposely placed their processing servers close to, or even within, the buildings housing the exchanges so as to secure market information before the so-called "wire services". As a result, illegal, split-second high-frequency trading is harvesting millions for such operators. More importantly, the combination of "financial engineering" and sometimes criminal intent has resulted in a massive rise in the so-called grey derivatives market, trading "securities" and newly nominated "financial instruments" in volumes that exceed the GNP of countries. This has dealt an even more significant blow to the efficacy of monetary policy, because the corporations involved hide what they are doing and governments are left pretending to regulate a black box, or plain Mumbo-Jumbo. One device applied by banks, hedge funds and other financial intermediaries is to keep introducing newly nominated financial instruments, often with the same function as previously condemned instruments, to keep the regulators off balance. The most significant obfuscation, by both operators and governments, is the claim that all of this is happening in "free markets".
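As an illustration of the kind of pricing algorithm that started this spiral, the following is a minimal Python sketch of the standard Black-Scholes formula for a European call option; the parameter values in the example are hypothetical.

from math import log, sqrt, exp, erf

# Minimal sketch of the Black-Scholes (1973) price of a European call option,
# the formula at the root of the algorithmic derivatives trading described above.
# The example parameter values are hypothetical illustrations.

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, rate: float,
                       volatility: float, maturity: float) -> float:
    """Black-Scholes value of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * maturity) \
         / (volatility * sqrt(maturity))
    d2 = d1 - volatility * sqrt(maturity)
    return spot * normal_cdf(d1) - strike * exp(-rate * maturity) * normal_cdf(d2)

# Example: spot 100, strike 105, risk-free rate 5%, volatility 20%, one year to maturity.
print(round(black_scholes_call(100.0, 105.0, 0.05, 0.20, 1.0), 2))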
The basis of regulation
Cybernetics theory has much to contribute to the question of establishing effective financial regulations. A theorem linked to cybernetics is the "good regulator" theorem conceived by Roger C. Conant and W. Ross Ashby.
Basically, this states that a good regulator of a system must be a model of that system; technically, maximized regulatory effectiveness requires the regulator to be isomorphic with the regulated system. So, as explained, a decision analysis model that has access to the range of possible inputs, and to notions of where output values need to lie, can operate as an effective interpreter and basis for explaining what is happening in the system being regulated. Referring back to Boole's mathematical logic on human deduction, we humans remain reasonably successful and efficient as overseers and regulators when this role is exercised by individuals who have previously learned through exposure to the formation, testing and analysis of the outputs of a model of the environment to be regulated. Instructional simulation of this kind is an essential foundation for the effective exercise of the regulatory function; in this way transparency is gained and explanations of "market perturbations" are easily understood. Clearly, limiting regulation to "light touch" legislation overseen by people working in offices with slightly dated statistics is completely inadequate for overseeing the current financial markets, and the constant introduction of variants of financial instruments keeps any such regulatory system off balance. A minimal sketch of the "good regulator" idea follows.
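The following is a minimal Python sketch of the "good regulator" idea under stated assumptions: the regulator carries a mirror model of the regulated relationship, predicts what each reported transaction should produce, and flags divergences for investigation. The relationship, the reported transactions and the tolerance are hypothetical.

# Sketch of the "good regulator" idea: the regulator maintains a model that is
# structurally the same as the regulated system, predicts what each reported
# transaction should produce, and flags divergences for investigation.
# The relationship, the reports and the tolerance are hypothetical illustrations.

def regulator_model(notional: float, leverage: float) -> float:
    """The regulator's mirror (isomorphic) model of how exposure arises."""
    return notional * leverage

reported_transactions = [
    {"notional": 1_000_000, "leverage": 5.0, "reported_exposure": 5_000_000},
    {"notional": 2_000_000, "leverage": 10.0, "reported_exposure": 8_000_000},  # understated
]

TOLERANCE = 0.01  # 1% divergence allowed between the report and the model's prediction

for tx in reported_transactions:
    predicted = regulator_model(tx["notional"], tx["leverage"])
    divergence = abs(tx["reported_exposure"] - predicted) / predicted
    status = "OK" if divergence <= TOLERANCE else "FLAG for investigation"
    print(f"predicted {predicted:,.0f} vs reported {tx['reported_exposure']:,.0f}: {status}")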
It so happens that regulators could get ahead of this game by changing the regulatory conditions to make more effective use of IT systems that could gain complete transparency over all transactions. Companies do not want this, so lobbies keep things as they are, and this works to the prejudice of the majority.
I will follow this up with an explanation of how regulatory systems can gain better oversight of financial markets.
1 Hector McNeill is the Director of SEEL-Systems Engineering Economics Lab.
All content on this site is subject to Copyright
All copyright is held by © Hector Wetherell McNeill (1975-2020) unless otherwise indicated