LSE EUROPP
Sergio Scandizzo
June 30th, 2023
Compression and complexity: Making sense of Artificial Intelligence
Artificial Intelligence (AI) is expected to have a major impact on Europe in the coming decades. Sergio Scandizzo explains how the concepts of compression, complexity and depth can help us understand the potential implications of AI for our daily lives.
ChatGPT and other instances of ‘generative AI’ have recently taken the internet by storm and, in parallel, generated a mountain of critical comments ranging from the awed and terrified to the unimpressed and disparaging. On one side is the traditional concern that AI can help people cheat, replace human judgement in key decisions about our lives and ultimately damage livelihoods by raising unemployment.
On the other, somewhat illogically, though perhaps as an understandable reaction to those fears, critics have tried to find fault with AI’s performance: it cannot solve certain mathematical puzzles (nor can the majority of humans); it writes essays that are predictable and based solely on searching the available literature (as most essays written by humans sadly are); on occasion, it can produce absurd results and reach biased and discriminatory conclusions (in other words, it is about as human as it gets).
So, while we tremble at the thought of a dystopian future of technological unemployment, AI-controlled governments and stultified students, we simultaneously berate current AI applications for not yet being the kind of God-like, infallible intelligence capable of solving any possible problem. The reality is that most ‘failures’ of AI – not being original, basing decisions on existing information, blindly following rules or simply failing several times at complex tasks – are typically human features.
Some critics note that, to come up with answers, ChatGPT uses the most cited texts, assuming those are the most scientifically reliable. True, but what do most people do when they write an essay? First, they read the most cited texts. Similarly, others devise tricky mathematical questions to make the programme produce wrong answers (which are the kind of answers most humans would give). I wonder, therefore, how many university essays honestly written by mediocre students will in future look as if they were written by ChatGPT or some other AI engine, attracting unfair accusations of plagiarism.
The objective of Deep Blue or AlphaGo is not to be intelligent, but to play chess or Go like an intelligent being (us). That they clearly can do so is disquieting, perhaps because it suggests that it doesn’t necessarily take human intelligence to play these games, even at the highest level.
The same holds true for so-called creative tasks. If a machine can write a perfectly acceptable, even if not especially original, essay, it gives us pause for thought primarily because it forces us to rethink both the value of certain products of our intelligence and the meaning we attach to them. This is presumably what makes some commentators desperate to find fault in AI’s performance, as if they were keen to reassert the primacy of human intelligence against an existential threat.
Compression as intelligence
Let us try to look at the problem from another perspective. A ‘lossy’ compression algorithm saves memory space by identifying statistical regularities across a set of data and storing a single copy of patterns that recur multiple times without being exactly identical. The results of such a technique are worse than those of a ‘lossless’ compression algorithm, from which the original information can be completely reconstructed, but good enough for many practical applications.
It works especially well with images and music and, unsurprisingly, much less well with text and numbers. For example, if a lossy algorithm stores just one copy of several similar-looking areas of a picture, the reconstructed image may be slightly blurred but still recognisable overall. If, on the other hand, the algorithm stores only the average of several similar numbers in a spreadsheet, the result will likely be useless.
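To make the idea concrete, here is a toy sketch in Python (purely illustrative, not how real codecs work, and all names are my own): blocks of similar values are replaced by a single stored average, and reconstruction simply repeats that average, producing output that is close to the original but ‘blurred’.

    def lossy_compress(values, block_size=4):
        # Store one average per block of similar values instead of every value.
        return [sum(values[i:i + block_size]) / len(values[i:i + block_size])
                for i in range(0, len(values), block_size)]

    def lossy_decompress(averages, block_size=4):
        # Reconstruct by repeating each stored average: close, but blurred.
        return [a for a in averages for _ in range(block_size)]

    pixels = [10, 11, 10, 12, 200, 198, 201, 199]    # two 'similar-looking areas'
    print(lossy_compress(pixels))                    # [10.75, 199.5]: two numbers instead of eight
    print(lossy_decompress(lossy_compress(pixels)))  # blurred but recognisable reconstruction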
Last February, science fiction author Ted Chiang wrote a very thoughtful piece in which he argued that ChatGPT works very much like a lossy algorithm applied to the internet, whereby it samples a large amount of information and repackages it in the form of text that is not exactly the same as any of the texts available online, but close enough to look both correct and original.
Aside from the fact that he may have stumbled across a definition applicable to a lot of what passes for creativity these days, what is especially intriguing is his use of compression as a metaphor for intelligence and, specifically, his observation that the best way to compress a set of data efficiently is to understand it.
Indeed, if we need to compress, for instance, the Fibonacci sequence, an infinite series in which each number is the sum of the previous two, we would do well to store just three equations – F(0) = 0 (which applies only to the first term), F(1) = 1 (the second term), and F(n) = F(n-1) + F(n-2) (all subsequent terms) – rather than a very long run of integers, hoping that the next user will guess the rule.
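Transcribed into code, that compressed description really is just the three equations. A minimal Python sketch of my own, for illustration:

    def F(n):
        if n == 0:
            return 0                    # F(0) = 0
        if n == 1:
            return 1                    # F(1) = 1
        return F(n - 1) + F(n - 2)      # F(n) = F(n-1) + F(n-2)

    print([F(n) for n in range(10)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

A few lines stand in for an infinite sequence: the data has been compressed into the rule that generates it.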
Complexity and depth
In a different context, Nobel laureate Giorgio Parisi argues that the problem of finding the simplest description of a complicated set of data corresponds to finding the scientific laws of the world and is “often taken as a sign of intelligence”. To clarify this idea, Parisi draws on the concept of the algorithmic complexity of a string of symbols.
The latter is defined as the length of the shortest computer programme producing that string as an output. In the Fibonacci example, such a programme incorporates, in the simplest possible fashion, the three equations above, thereby obtaining a very short description of an (infinite) sequence. On the other hand, if we examine the string “Dkd78wrteilrkvj0-a984ne;tgsro9r2]3., nm od490jwmeljm io;v9sdo0e,.scvj0povm]]-”, the shortest programme will most likely have to look like:

    print("Dkd78wrteilrkvj0-a984ne;tgsro9r2]3., nm od490jwmeljm io;v9sdo0e,.scvj0povm]]-")
This is longer than the string to be printed. Equally important, however, is the concept of the logical depth of an algorithm, which is the actual amount of CPU time needed to execute it. In the Fibonacci case, while the algorithm is short, the CPU time required to execute it is potentially infinite, as the sequence goes on forever. For the random string above, the algorithm is admittedly not very efficient with respect to the length of the string, but its execution is very quick.
We say therefore that the former algorithm has low complexity and high logical depth, while the latter exhibits the opposite features. A good scientific theory has low complexity (it gives the simplest explanation of data) but potentially high logical depth (it explains a lot and may require a very long time to compute all its implications).
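The contrast can be made concrete with a small sketch (again illustrative Python, with an arbitrary choice of n): the short programme that transcribes the Fibonacci rules takes longer and longer to run as more of the sequence is demanded, while the programme that simply returns a stored string is about as long as its output but finishes almost instantly.

    import time

    def deep_programme(n):
        # Low complexity: a few lines encode the whole rule.
        # High logical depth: CPU time grows with every extra term demanded.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def shallow_programme():
        # High complexity: the programme is about as long as its output.
        # Low logical depth: it runs in effectively no time at all.
        return "Dkd78wrteilrkvj0-a984ne;tgsro9r2]3., nm od490jwmeljm io;v9sdo0e,.scvj0povm]]-"

    start = time.perf_counter()
    deep_programme(100_000)                          # a noticeably long big-integer computation
    print("deep:   ", time.perf_counter() - start)

    start = time.perf_counter()
    shallow_programme()
    print("shallow:", time.perf_counter() - start)   # near-instant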
As an example, E = mc² has very low complexity but an enormous logical depth, as it implies a very profound and far-reaching set of results. It follows, alas, that in many practical cases we settle for more approximate theories with higher complexity and lower logical depth, either for efficiency (an approximate theory may be good enough for certain tasks) or because we simply cannot afford the CPU time required (Parisi gives the example of meeting a lion on our path: we must decide in real time or become its dinner).
This trade-off between complexity and depth is fundamental to understanding how intelligence, human or otherwise, works, yet most discussions of AI seem to ignore it. ChatGPT may well be, as Chiang says, an imperfectly compressed version of the available data, but so is most of our learning. Intelligence, amongst other things, is the ability to perform those somewhat imperfect compressions that balance cognitive objectives with our natural constraints.
Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy, the London School of Economics or the European Investment Bank. Featured image credit: Emiliano Vittoriosi on Unsplash
Sergio Scandizzo
Sergio Scandizzo is Head of Internal Modelling at the European Investment Bank.