Regulators will always struggle to keep pace with AI development


Legislators across the globe are tussling with artificial intelligence. Early efforts are voluminous but hardly speedy. The EU’s AI Act, first out of the blocks, runs to 144 pages. Regulation lags innovation by a country mile: the EU was obliged to add a chapter on generative AI partway through its own legislative process.

True, few economic, financial and societal issues are untouched by the peripatetic technology. That requires a lot of guardrails.

Unlike the principle-based approach the EU took towards data in the General Data Protection Regulation — GDPR — the AI Act takes a product safety approach, similar to the regulation of cars or medical devices, say. It seeks to quantify and address risks, with standards met and verified prior to market launch. Think crash-testing a car model before its rollout.

The EU ranks capabilities, and the subsequent requirements, by risk profile. Top of the pyramid is the Black Mirror stuff — behavioural manipulation, social scoring — which is prohibited. At the bottom are the common-or-garden spam filters and AI-enabled games, where a voluntary code suffices.

Naturally, it is the two middle layers that will most impact tech developers and their users. Financial services and other companies that use AI tools to determine creditworthiness or for hiring staff, for example, will fall into this category. Users are also on the hook in higher risk categories if they modify a model: a company may over time switch the use of the AI, say from sifting through resumes to making decisions on who gets promoted.

One likely upshot is heavy use of contracts between those deploying AI and the big tech providers, says Newcastle University professor Lilian Edwards.

Defining what constitutes systemic risk in generative AI is tricky. The EU — and the US in its executive order on the use of AI — have resorted to computing power metrics. The EU sets its threshold at 10²⁵ floating-point operations, a cumulative measure of the computation used to train a model (not, as often misstated, operations per second), while the US has set its at 10²⁶. Going beyond this triggers extra obligations.
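The threshold logic described above is, at heart, a simple comparison of a model's estimated training compute against two cut-offs. A minimal sketch, assuming the publicly stated figures of 10²⁵ FLOPs (EU AI Act) and 10²⁶ FLOPs (US executive order), and with hypothetical function and variable names:

```python
# Hypothetical illustration of the compute-based thresholds discussed above.
# Figures are the publicly stated cut-offs; the function name and the sample
# training-compute estimate are invented for this sketch.
EU_THRESHOLD_FLOPS = 1e25  # EU AI Act: cumulative training compute
US_THRESHOLD_FLOPS = 1e26  # US executive order: cumulative training compute


def triggers_extra_obligations(training_flops: float) -> dict:
    """Check a model's estimated training compute against both regimes."""
    return {
        "eu_systemic_risk": training_flops > EU_THRESHOLD_FLOPS,
        "us_reporting": training_flops > US_THRESHOLD_FLOPS,
    }


# A hypothetical frontier model trained with 3e25 FLOPs would cross the
# EU threshold but not the US one.
print(triggers_extra_obligations(3e25))
```

The gap between the two cut-offs is a full order of magnitude, which is one reason a model can face extra obligations in one jurisdiction but not the other.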

[Chart: Computing power used for AI training runs]

The trouble is that this metric captures only the compute used in training. A model's capabilities could rise, or even fall, once it is deployed. It is also a somewhat spurious number: there are many other determinants of performance, including data quality and chain-of-thought reasoning, which can boost capability without requiring extra training compute. It will also date quickly: today’s big number could be mainstream next year.

The EU law, formally in force as of August, is being phased in. Further snags will arise as capabilities move on. Even as rules evolve, the risk is they remain behind the technological curve.

