Technology is advancing at an ever-faster rate. The buzz around AI is quickly manifesting itself in all the devices and applications we use in our jobs. How accurate and reliable is the information we get back, and how confidential is the data we send? And what about copyright? May Winfield, global director of commercial, legal and digital risks at engineering consultancy Buro Happold, shares her insight.
Do you remember the days before iPhones? How about Google? Generation-defining tech like this has a way of becoming so pervasive we can barely remember a time ‘before’. Whilst gen AI tools like Copilot and ChatGPT are relatively new in the general consciousness, their take-up is at a pace rarely seen before (ChatGPT reportedly hit 100 million users in January 2023). How long will it be until they form part of our common lexicon, as ‘to Google’ has?
As a human race, we are predisposed to be suspicious of the unknown; this caution kept our ancestors alive. However, in the modern world it often manifests as tech-phobia – a reluctance towards innovation that changes the game a little too much.
I’m sure readers of this article will have seen multiple headlines about how AI is going to come and take over the world. Those of my generation will remember this idea playing out in the Terminator movies or, more recently, in Blade Runner 2049.
There is no doubt that AI, and related technologies like automation, will change society, but anyone who has used the gen AI chatbots will realise that these clunky tools have generations of development ahead of them before they equal human intellect and reach the realms of “general intelligence” AI. It reminds me of being about six years old (many, many years ago!), playing with a purported “intelligent” chatbot on a DOS system in the days pre-Windows. It was exciting at first, but after a few questions I realised the bot required precisely worded questions to avoid repetitive answers. The technology has developed in leaps and bounds in the decades since, but gen AI still needs careful prompts to get the answers you seek. Indeed, prompt engineering is so fundamental to the use of gen AI that it has quickly been established as a whole new profession.
On the flipside, once adopted we have a habit of then blindly trusting technology. Almost everyone has input their symptoms into Google, which has confidently declared their common infection to be a life-threatening disease. We rarely question the answer given by a calculator to a complicated maths equation, or a translation sought on the internet. Yet nothing is perfect, not even technology and certainly not AI.
Most people will be aware that gen AI can “hallucinate” — or, in simple terms, make stuff up — if it doesn’t know the answer or wants to provide a more comprehensive one. A lawyer in the US found this out to his cost when gen AI confidently gave him details of a number of cases to support his legal arguments, only for the judge to point out that all these cases were completely fictional. Apparently, it simply had not occurred to him to check.
Many of the kinks in gen AI will be ironed out as the technology develops at its relentless pace. In the meantime, most companies are already embracing the use of gen AI, though in varying degrees and with different levels of understanding.
There are some key risks to bear in mind when implementing gen AI. These risks are likely to remain largely unchanged for the foreseeable future at least – until gen AI technology develops well beyond its current iterations.
These risks fall into a few broad categories, which I’ll consider briefly below.
Accuracy and reliance
As we’ve seen, gen AI can make things up or provide partly inaccurate results. Imagine asking a gen AI tool if a connection is safe and it gets it wrong; the consequences could be serious. Yet it is not a legitimate defence to assert that the gen AI tool itself is at fault. That would be like blaming the calculator for your mathematical errors, or the teacher for the poor quality of your homework.
Ultimately, gen AI tools are exactly that: tools to supplement and augment what we do rather than replace it. To rely blindly on gen AI is folly and fraught with risk. It will also be very interesting to see whether courts, and insurers, will regard such actions as knowingly reckless rather than negligent, and therefore falling outside the protection of professional indemnity insurance.
Find this article plus many more in the Nov / Dec 2024 Edition of AEC Magazine
Confidentiality
Readers may have heard the saying that data is the new gold. Project and company data can have great value; knowing how your competitor is pricing or designing their projects could open up a significant competitive edge.
It is tempting to input data into a gen AI tool and ask it to, say, reformat it or provide similar examples. However, inputting data into a public generative AI like ChatGPT is in essence like throwing this data into a public forum. It is not possible to delete or remove it. Someone asking the right question could extract that data, and I have seen some terrifying examples online of people doing exactly that.
Whilst it may not matter for some generic data entry, it has a bigger, more serious impact for sensitive client or project data. There was a widely reported example of employees of a major technology company inputting code into ChatGPT for it to fix, thereby losing control of that code. Such actions could expose your company data to external parties and, as regards client and project data, likely put you in breach of your contractual obligations. I also doubt clients, or management, would be too pleased at this loss of confidentiality.
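One protective measure is to scrub obviously sensitive tokens from text before it ever leaves your organisation. The sketch below is purely illustrative — the patterns (and the “BH-” project-code format) are invented for this example, and a handful of regexes is no substitute for a proper data-loss-prevention tool — but it shows the principle of redacting at the boundary, before a prompt is sent to any public gen AI service:

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# data-loss-prevention tool, not a handful of hand-written regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    # Hypothetical internal project-code format, for illustration:
    "PROJECT_CODE": re.compile(r"\bBH-\d{4,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive tokens with placeholders before the text
    leaves the organisation (e.g. before sending to a public gen AI)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise: contact jane.doe@example.com about project BH-10234."
print(redact(prompt))
```

The design point is where the check happens: redaction runs on your side of the boundary, so even if the provider’s terms later change, the sensitive values were never transmitted.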
It is therefore important always to check the user terms of any gen AI tool before implementing it within your organisation. Can the data be used to train the models? Will the data be accessible to other parties? Where is the data stored? These are just some of the questions to consider.
Copyright
There have been various reports of court cases brought by newspapers, authors and artists alleging breach of copyright, as gen AI tools have scraped data off the internet and/or been fed material to train their models without appropriate copyright licences. Whilst some gen AI companies have entered into copyright agreements with key information providers, there remains a risk that the gen AI output you receive could be partly or wholly in breach of copyright. You simply can’t know until a party, on seeing your output, alleges such a breach. It is therefore worth thinking cautiously before using gen AI outputs in a public forum, at least where the tool has not been trained purely on data you own or have the rights to use.
A number of software providers are offering indemnities (sometimes for an additional fee) to protect users from claims of copyright breach arising from the use of their gen AI tools. Whilst this is a responsible step by such organisations, it is advisable to read the wording of such indemnities before paying extra for them, or seeking to rely on them, as some can be quite narrow and contain a number of potential loopholes. The jury is also out on how a court would interpret such indemnities in practice.
Personal data and ethics
A point to note on personal data is the impact of data protection legislation, like the GDPR in the UK and Europe. Such legislation seeks to protect personal data and has a direct impact on how personal data can be input into gen AI tools, particularly public ones.
As regards ethics, it is a known phenomenon that gen AI can be biased, simply because it is trained on historical data that is known to contain such bias. A friend working in HR noted that when a gen AI tool was asked to propose the best CVs for a role, all the candidates it put forward were white males. As with all risks arising from gen AI, this doesn’t mean you need to turn your back on the technology, simply that you need to put in place risk management and protective measures to counter this known issue.
Conclusion
The above is but a flavour of some key issues that should be borne in mind when adopting and implementing this potentially paradigm-shifting new technology. It’s also important to bear in mind any new AI-specific legislation, most notably the EU AI Act and various US state legislation, and ensure you are in compliance.
We are living in an exciting world of progress, where the possibilities shown in the likes of Blade Runner and Star Trek are increasingly becoming reality. We just need to put in place the right safeguards and safety nets to ensure we reap all the potential benefits whilst avoiding the known pitfalls.
Further reading
Chartered Institute of Building (CIOB) AI Handbook
Compilation of my previous talks and presentations