In the second part of our series looking at AI in construction, Thomas Lane examines the downsides of artificial intelligence
The apparent brilliance and promise of GPT-4, DALL-E 2 and others is counterbalanced by major flaws that could severely limit their usefulness. Wrong responses to a prompt, or “hallucinations” in the jargon, are a major problem because they cast doubt on all of the information provided by the system.
“The answers you get back [from an ML system] are 80% correct, but 20% can be very wrong,” says Martha Tsigkari, the head of Foster + Partners’ applied R&D team and a Building the Future commissioner. “There is a lot of danger with that 80% because you cannot have a sense of comfort that this is right, as the information you get back might be completely misleading with that 20% of error.”
The problem is that much of the data used by large language models is sourced from the internet, which is awash with misinformation and false narratives. This not only means the information produced by these models can be spectacularly wrong; it can also be politically biased and racist.