The non-linear reality of AI: Why markets, regulators and nations must prepare for sudden leaps

Published: 10 March 2026
Source: Mail & Guardian

We talk about artificial intelligence as though it were a faster spreadsheet or a better search engine. That framing is comfortable — and dangerously wrong. AI does not progress like most technologies.

It produces quiet plateaus followed by discontinuities that shock incumbents, confuse forecasters and reorder competitive landscapes. The right mental model is stepwise upheaval, not incremental improvement. Over the past several years, researchers have documented scaling laws showing that when you enlarge models, data and compute together, performance follows smooth power-law curves — until new abilities appear that smaller systems simply did not have.
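The scaling-law fit itself is smooth. A minimal sketch, using the rough constants reported in the 2020 work (treat them as illustrative, not exact):

```python
# Kaplan-style scaling law: test loss falls as a smooth power law in
# parameter count. Constants are the approximate fits reported in the
# 2020 scaling-laws paper; they are illustrative, not authoritative.
ALPHA_N = 0.076   # fitted exponent for parameter scaling
N_C = 8.8e13      # fitted constant (non-embedding parameters)

def loss(n_params: float) -> float:
    """Predicted test loss (nats/token) for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss(n):.2f}")
```

The predicted loss curve has no kinks; the jumps live in downstream task success, which is what makes planning hard.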

That 2020 scaling-laws result, now a bedrock of modern AI planning, formalises why bigger models trained longer on more data keep getting better in predictable ways. At the same time, algorithmic efficiency, or how cleverly we use the same compute, has been doubling roughly every 16 months; in image classification, the compute required to hit a fixed benchmark fell 44-fold between 2012 and 2019. Hardware and software gains multiply.
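The two figures are mutually consistent, as a back-of-envelope check shows (spans treated as whole years for simplicity):

```python
import math

# A 44x compute reduction over 2012-2019 (84 months) implies a doubling
# time of 84 / log2(44) months.
months = (2019 - 2012) * 12
implied_doubling = months / math.log2(44)
print(f"implied doubling time: {implied_doubling:.1f} months")  # ~15.4

# Conversely, doubling every 16 months over the same span gives:
gain = 2 ** (months / 16)
print(f"16-month doubling over 84 months: {gain:.0f}x")  # ~38x
```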

Put plainly: even if chips stopped improving tomorrow, smarter training would still accelerate capability. When chips also improve and clusters scale, the curve bends faster. Today, the supply side of intelligence is compounding.
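A toy projection makes the multiplication concrete; both growth rates below are assumptions chosen for illustration, not forecasts:

```python
# Why the curve bends faster when gains stack:
# effective compute = hardware throughput x algorithmic efficiency.
hardware_growth = 1.4         # assume hardware throughput grows 40%/year
algo_growth = 2 ** (12 / 16)  # a 16-month doubling is ~1.68x/year

effective = hardware_growth * algo_growth
print(f"algorithmic alone: {algo_growth:.2f}x/year")
print(f"combined:          {effective:.2f}x/year")
# Even frozen hardware (1.0x) leaves capability compounding at ~1.68x/year.
```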

One recent analysis estimates that global AI computing capacity has been growing about 3.3 times per year since 2022 — equivalent to a seven-month doubling time — driven largely by specialised accelerators. A parallel metric from independent evaluators finds that the “time horizon” of tasks frontier systems can complete autonomously has also been doubling on a similar cadence. Whatever your preferred indicator, the message is the same: your planning assumptions are obsolete long before your organisational chart updates.
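The equivalence between the two framings is straightforward arithmetic:

```python
import math

# Convert a 3.3x annual growth factor into a doubling time in months.
annual_factor = 3.3
doubling_months = 12 * math.log(2) / math.log(annual_factor)
print(f"doubling time: {doubling_months:.1f} months")  # ~7.0
```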

As models scale, capabilities can appear discontinuously — not present at smaller sizes, then suddenly competent. The technical literature calls these emergent abilities. That is bad news for any governance or go-to-market process that treats tomorrow’s model as a slightly better version of today’s.
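One common sketch of why emergence looks sudden, not a full explanation: if a task only succeeds when every one of k sub-steps succeeds, smooth gains in per-step reliability produce a cliff in whole-task success. The step count below is an arbitrary assumption:

```python
# Smooth per-step gains can look like a capability cliff.
# A task needs all k sub-steps right; whole-task success is p**k.
k = 20  # assumed number of sub-steps; illustrative

for p in (0.80, 0.90, 0.95, 0.99):
    print(f"per-step {p:.2f} -> whole-task {p**k:.1%}")
# 0.80 -> ~1.2%, 0.90 -> ~12.2%, 0.95 -> ~35.8%, 0.99 -> ~81.8%
```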

Markets have seen this movie. From currency pegs to collateralised debt obligations, stability narratives often hold — until they do not. The prudent response to emergence is to design for surprises: tighter feedback cycles between deployment and risk monitoring, capital buffers in compute supply chains and regulatory mechanisms that key off empirical capability — not brand names or parameter counts.

We have also entered the era in which AI helps write the software — and even the scaffolding — that improves AI itself. Academic work has demonstrated recursively self-improving code pipelines, where language-model-driven “improvers” iteratively enhance their own optimiser. It is not a sci-fi intelligence explosion; it is a real feedback loop that speeds iteration.
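The shape of that loop is easy to sketch. The proposer below is blind random search standing in for the language-model step in those pipelines; `propose_variant` and `benchmark` are hypothetical toys, not anyone's published method:

```python
import random

# Toy shape of a self-improving pipeline: an "improver" proposes edits
# to its own candidate and keeps them only if a benchmark score rises.

def benchmark(candidate: float) -> float:
    """Toy benchmark: higher is better, peaks at candidate == 3.0."""
    return -(candidate - 3.0) ** 2

def propose_variant(candidate: float) -> float:
    """Hypothetical stand-in for an LLM-driven improver proposing a mutation."""
    return candidate + random.uniform(-0.5, 0.5)

candidate, score = 0.0, benchmark(0.0)
for step in range(200):
    trial = propose_variant(candidate)
    trial_score = benchmark(trial)
    if trial_score > score:  # keep only measured improvements
        candidate, score = trial, trial_score
print(f"final candidate {candidate:.2f}, score {score:.3f}")
```

The propose, evaluate, keep structure is the point; replacing the random proposer with a model that can read and rewrite its own scaffolding is what tightens the loop.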

Recent reporting on cutting-edge coding models reflects the same pattern: systems assisting in their own development, with humans still firmly in the loop. The loop is not closed — but it is tightening. For enterprises, this means software roadmaps compress unexpectedly.

For regulators, it means capability assessments based on last quarter’s benchmarks are a rear-view mirror. For investors, it creates a trap: valuation models that discount future cash flows linearly will misprice firms that compound their development velocity. Forecasts differ on totals, but the direction is consistent: generative AI could add trillions in economic value annually through productivity gains in customer operations, marketing, coding and research and development.

Goldman Sachs puts a rough order-of-magnitude estimate at a 7% global GDP lift over a decade, with large exposure across knowledge work. If you think that is aggressive, the more conservative Penn Wharton model still finds enduring gains in total factor productivity and GDP levels across multi-decade horizons. The nuance investors should heed: productivity arrives in lumps.

A single model upgrade can unlock a wide band of tasks, not a thin sliver. If your business model assumes a smooth adoption curve, expect to be surprised by punctuated reality.
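For scale, the headline figures are easy to restate. A 7% level lift spread over a decade is a small annual increment, yet the same total can arrive in a couple of large steps with flat stretches between; both paths below are illustrative:

```python
# A 7% GDP level lift over ten years, expressed as an annual increment.
smooth_annual = 1.07 ** (1 / 10) - 1
print(f"smooth path: +{smooth_annual:.2%} growth per year")  # ~0.68%

# The same total arriving in lumps: two step changes, flat in between.
lumpy = [0, 0, 0.04, 0, 0, 0, 0.029, 0, 0, 0]  # 1.04 * 1.029 ~ 1.07
level = 1.0
for jump in lumpy:
    level *= 1 + jump
print(f"lumpy path ends at {level:.3f}x baseline")  # ~1.070
```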
