Margins of error – how to work with an AI system that is not always right

In the second part of our series looking at AI in construction, Thomas Lane examines the downsides of artificial intelligence

The apparent brilliance and promise offered by GPT-4, DALL-E 2 and others is counterbalanced by major flaws that could severely limit their usefulness. Wrong responses to a prompt, or “hallucinations” in the jargon, are a major problem because they cast doubt on all of the information the system provides.

“The answers you get back [from an ML system] are 80% correct, but 20% can be very wrong,” says Martha Tsigkari, the head of Foster + Partners’ applied R&D team and a Building the Future commissioner. “There is a lot of danger with that 80% because you cannot have a sense of comfort that this is right as the information you get back might be completely misleading with that 20% of error.”

The problem is that much of the data used by large language models is sourced from the internet, which is awash with misinformation and false narratives. This means the information produced by these models can be not only spectacularly wrong but also politically biased and racist.
