The RICS global professional standard on the responsible use of AI came into force last month. It provides welcome guidance on an area where many still feel at sea, as well as new obligations with which both individual RICS members and regulated firms must now comply. Warwick Stockdale, Jalal Miah and Colin McCaffrey of McBains set out five practical steps to help you get on top of what is required.

Colin McCaffrey is a director and AI champion, Warwick Stockdale is a senior cost manager and AI champion, and Jalal Miah is the head of digital and data and AI champion at McBains

AI is already being used across surveying practices, but until now adoption has often been piecemeal. Some individual surveyors have embraced the technology and are actively using it to analyse data or produce summaries. Many more are turning to the free consumer tools that are increasingly replacing conventional search engines.

But now the RICS professional standard on AI imposes new obligations on firms to record and monitor any use of AI by the business or its employees, and it also requires individual surveyors to develop and maintain their own AI knowledge at a level appropriate to their work.

Here are five tips for implementing the new standard effectively:

Understand what the standard requires of you personally, and what it requires of your firm

Under the standard, firms must safeguard confidential data, maintain a register of AI systems in use, implement risk management procedures, and establish internal policies on accountability and training. At an individual level, chartered surveyors are now required to develop and maintain their knowledge to support their responsible use of AI systems. This ties in with a separate RICS requirement, in effect from January 2026, under which members must complete at least one hour of structured CPD on “Data, Technology, and AI” each year.

While there is currently no formal mechanism for a firm to confirm its compliance, savvy firms should implement policies and controls aligned with the standard and document their compliance to protect themselves against future investigations.

Get your data safeguarding and internal AI policies in order

The standard is clear that confidential or client information must not be uploaded to AI platforms without explicit, informed consent. Something as simple as a team member pasting a client email into a free version of ChatGPT can pose a significant data risk.

Many firms are starting to invest in corporate subscriptions to licensed, closed-loop AI tools, both to prevent data leakage and to stop their data being used to train AI models. At McBains, we decided to primarily use embedded AI tools from well-established platforms, as they are already integrated into our existing digital infrastructure.

We have developed a “permit, restrict, monitor” framework to categorise the AI tools used across our business, and we circulate a monthly internal bulletin to keep best practice front of mind. Any firm’s policies should also reflect the requirements of clients who have stipulated that their data must be held on UK-based servers.

Build your team’s AI knowledge from the ground up

The standard requires that members develop a working understanding of AI systems, including their limitations, biases, and the risk of erroneous outputs. Establishing a common baseline for all staff is a sensible starting point.

To achieve this, we have chosen to adopt a five-week online course (part of a government initiative to improve AI literacy in construction) that covers generative AI basics, effective prompting, and our internal AI policy, and that takes employees about five to 10 minutes a day to complete. For individual QSs whose firms are not yet providing training, Coursera and LinkedIn Learning offer solid foundational courses.

Using an AI tool is itself one of the best ways to learn, particularly if you ask the tool to critique your prompts.

Set up your AI systems register and your risk register

Firms are now required to maintain both a register of the AI systems they use and a dedicated AI risk register covering four core areas: inherent bias in AI outputs; erroneous outputs; limitations in the information available about the system’s training data; and the retention or use of data that is entered into the AI system.
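The standard does not prescribe a format for either register, and for most firms a spreadsheet will do. For readers who keep such records in a structured form, the four risk areas can be captured in something as simple as the following sketch (all tool names and field choices here are hypothetical illustrations, not requirements of the standard):

```python
from dataclasses import dataclass, field

# The four core risk areas named in the standard.
RISK_AREAS = (
    "inherent bias in outputs",
    "erroneous outputs",
    "limited information about training data",
    "retention or use of data entered into the system",
)

@dataclass
class AISystemEntry:
    """One row of a combined AI systems / risk register (illustrative only)."""
    name: str                    # e.g. an embedded document summariser
    vendor: str
    approved_uses: list[str]     # 'permit'-category uses for this tool
    data_allowed: str            # what data may be entered into it
    # Mapping of each risk area to the firm's recorded mitigation.
    risks: dict[str, str] = field(default_factory=dict)

entry = AISystemEntry(
    name="Document summariser (hypothetical)",
    vendor="ExampleVendor",
    approved_uses=["summarising internal, non-confidential reports"],
    data_allowed="internal, non-confidential data only",
    risks={area: "mitigation reviewed quarterly" for area in RISK_AREAS},
)

# A simple completeness check: every entry must address all four risk areas.
assert set(entry.risks) == set(RISK_AREAS)
```

The point of the structure, however it is recorded, is that each tool in use has an owner-reviewed answer against all four risk areas rather than a blanket “approved” flag.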

For most firms, this will represent an extension of risk management already in place. At McBains, we have incorporated the new requirements into our central risk register and implemented quarterly reviews, supported by a group of “AI champions” in the company, alongside existing quality assurance processes that ensure AI-assisted work is checked before reaching clients.

Never let AI substitute for professional judgment

AI can produce plausible-sounding answers that are simply wrong, and professional responsibility for the accuracy of advice to clients remains entirely with the surveyor. Staff should therefore be encouraged to use AI to support and accelerate their work, but every output must be reviewed and made the professional’s own before it reaches a client. The standard also requires transparency with clients about when and how AI is being used.

Ultimately, the RICS standard should be viewed not as a burden but as welcome guidance on how to responsibly adopt a technology in which we all know we need to become proficient. Ensuring that AI tools are demonstrably used securely, and that they genuinely improve cost-effectiveness, is key to upholding the professional standards that define quantity surveying.
