Corporate and Commercial: The Rise of AI in Business: Why Contracts Matter More Than Ever

16 May 2025


Duolingo recently announced that it is going “AI first”, using AI wherever possible across the company. It is clear that the world has arrived, reluctantly or otherwise, in the AI age of business. Commercial solicitors support organisations as they move in their chosen direction, and for many organisations that direction is squarely towards AI. The move, however, brings risks which deserve careful attention.

Machines were historically thought not to make mistakes, but AI chatbots are now known to ‘hallucinate’, ie to generate incorrect or misleading information with total confidence. This leads to the first risk to consider when entering into an agreement where one party is using AI. If a business relies on AI to perform part of the services it provides under a contract, and the AI makes a mistake, how should that be handled? How should liability be allocated?

Further risks arise where an organisation contracts with a service provider who uses one of the free, openly available AI models. What if the service provider takes the organisation’s commercially sensitive data and runs it through ChatGPT? What if the service provider supplies the AI with personal data for which the organisation is responsible? The concern is that the service provider may put the organisation in breach of its duties under data protection legislation, with the resultant risks of fines, negative publicity and loss of public trust. Even where the information is neither commercially sensitive nor personal data, an organisation may simply not want the AI provider to use it for its own purposes, as most openly available AI models do, particularly where the organisation wants to protect the intellectual property in the information being uploaded.

The safest way to deal with these risks is a tight contract which addresses the use of AI directly. Such a contract would clearly allocate liability, so there is no doubt about who is responsible if the AI produces an incorrect response. It would also benefit both parties for the contract to state explicitly whether the organisation using the service is willing for its commercial and personal data to be used, perhaps by limiting what information can be uploaded to an external server and setting out any restrictions on the use of AI. Making this clear ensures that both parties are on the same page, with a shared understanding of how AI will be used under the contract. Even if a problem materialises, a clear contract helps to avoid arguments over the agreed position that could lead to costly litigation.

Regulation should start to provide certainty in this area, but it has yet to fully materialise. The UK government has stated that it will take an approach which is “strongly pro-innovation and pro-safety”, and it will be interesting to see how this translates into concrete legislation. The EU AI Act came into force on 1 August 2024, with a framework which classifies AI systems into four risk levels (unacceptable risk, high risk, limited risk, and minimal or no risk), each requiring specific actions and protective measures. That legislation leans towards regulating the use of AI itself rather than governing the use of AI between businesses, and the UK may well follow a similar approach in its own legislation. Until UK legislation comes into force, it is advisable to ensure that a contract covers the necessary AI risks.

The use of AI will inevitably rise as businesses and organisations seek to harness its many benefits, but those that go down that route should do so with their eyes open. Regulation in this area has not yet caught up with the fast-paced development of AI, so it is advisable to rely on considered contract drafting to protect and future-proof the business or organisation.