
DealMakers - Q2 2025 (released August 2025)

The impact of AI on M&A transactions
by Tayyibah Suliman and Izabella Balkovic
​
Artificial intelligence (AI) is rapidly transforming our everyday lives, including merger and acquisition (M&A) transactions.
​
Currently, 77% of businesses either use AI or plan to implement it.(1) Most of these businesses believe that AI will increase their productivity, leading to higher revenue.
​
This trend has increased the number of businesses undergoing M&A processes that either use or have developed AI tools across various business areas. The growing use of AI necessitates bespoke considerations for M&A transactions, both in evaluating the business and conducting due diligence investigations.
​
Considerations for M&A transactions
When undertaking a due diligence investigation, first consider whether, and to what extent, the business utilises AI, and how that AI is deployed. A strategic assessment must be undertaken to determine whether the use of AI adds value to the business, and whether that use will continue after the M&A transaction. Some general aspects for consideration include:

- AI development and maintenance in the business
- Strict restraint of trade, confidentiality and intellectual property provisions in the employment contracts of those responsible for AI development and maintenance
- Ownership of intellectual property rights in relation to strategic AI inputs and outputs
- Costs of in-house AI updates and maintenance
- Whether off-the-shelf AI solutions would be more cost-effective than bespoke solutions
- The quality of the training data, which affects the AI system's value
- The difficulty of assessing the value of the AI system during M&A transactions
​
Most AI models process personal information, requiring compliance with data protection laws during M&A processes. These laws prescribe strict conditions for processing personal information, and often limit automated processing. The limited transparency of many AI models can make it very difficult to comply with data subject requests for the deletion or correction of processed personal information. M&A transactions may identify this as a risk, potentially decreasing the business's value or resulting in warranties and/or indemnities against potential sanctions for data protection non-compliance.
​
Risk assessments
As demonstrated above, AI can add immense value to businesses, but it also introduces risks – many unknown – associated with its use. Businesses must balance AI’s added value against these risks. This balance makes it extremely difficult to accurately value businesses that have developed bespoke AI, or which rely heavily on AI systems.
​
Standardised risk-based approach
Foreign jurisdictions have developed risk management frameworks to manage AI-associated risks. For example, the US National Institute of Standards and Technology (NIST) classifies generative AI risks into the following categories: technical/model risks (risks of malfunction), human misuse (malicious use), and ecosystem/societal risks (systemic risks).(2) Additionally, the European Union's High-Level Expert Group on AI has developed assessment tools, including ethics guidelines for trustworthy AI, policy and investment recommendations, assessment lists, and sectoral considerations.(3) These developments raise the question of whether due diligence investigations should apply a standardised approach when assessing AI systems. Evaluations aligned with these frameworks could provide in-depth and uniform AI system assessments. However, the frameworks remain recommendations, and many businesses are likely to avoid applying them because of their onerous obligations.
​
Contractual risks
Most contracts reviewed during an M&A due diligence predate widespread AI adoption and, therefore, inadequately address AI-related considerations. This gap becomes particularly critical when examining agreements with key business partners, including suppliers and customers, where AI usage can create unforeseen legal and commercial implications. Since AI systems generate novel outputs and creative works, contracts must clearly delineate ownership of intellectual property rights in any content, data or innovations produced by those systems.
​
Cybersecurity risks
Integrating AI technologies into business operations introduces substantial cybersecurity vulnerabilities that require careful evaluation during M&A processes. These elevated security risks necessitate comprehensive incident management frameworks and robust response protocols to address potential cybersecurity incidents. AI systems can significantly alter a target company's risk profile, often prompting buyers to seek additional contractual protections through specialised indemnities and/or warranties designed to mitigate emerging technological threats. This creates challenges for sellers: because businesses now face a materially higher probability of cyber incidents, sellers are reluctant to accept expansive liability provisions, which could materially impact M&A transactions.
​
AI in the M&A process
Businesses can use AI at nearly every stage of the M&A transaction, including:
- Target identification and screening
- Due diligence investigations
- Report editing
- Transaction document drafting
- Post-merger integration monitoring
​
AI’s ability to process large volumes of data quickly makes it ideal for due diligence processes. Legal-specific AI can review multiple documents simultaneously and provide outputs in user-defined categories.
​
While AI expedites due diligence processes, it can also hallucinate outputs. AI hallucination refers to the phenomenon where AI systems, particularly large language models, generate outputs that are incorrect, nonsensical or lack a factual basis, while presenting them as accurate.
​
The consequences of hallucinations in due diligence processes, where AI fails to identify risks in M&A transactions, could prove ruinous. Dedicated human oversight is therefore required to verify that AI outputs are factually accurate. To mitigate this risk, practitioners can adopt a sampling approach: manually reviewing a sample of the AI-reviewed agreements and comparing the manual findings to the AI outputs. These samples should be drawn from different document categories of varying complexity and importance.
​
M&A practitioners must also consider potential confidentiality breaches resulting from the use of AI tools. When confidential M&A transaction documents are uploaded to AI tools, the information generally becomes available to the AI service providers, who may store it on their servers or use it to train their models. This creates significant risks, as sensitive commercial information, financial data, strategic plans and proprietary business details could be inadvertently disclosed to third parties or accessed by unauthorised persons. M&A transactions require strict information security protocols due to their confidential nature, yet AI systems' ability to process large data volumes quickly makes them attractive despite these confidentiality risks. Organisations must therefore carefully evaluate an AI provider's data handling practices, security measures and privacy policies before uploading sensitive M&A documentation to its systems.
​
In summary
AI integration into business operations fundamentally transforms the M&A landscape in two critical ways. First, the proliferation of AI-enabled businesses creates new complexities in transaction evaluation and execution that require specialised expertise and risk assessment frameworks. Second, AI tools revolutionise how businesses conduct M&A processes, offering unprecedented efficiency gains while introducing novel risks requiring careful management.
​
Moving forward, successful M&A practitioners must master both the evaluation of AI as a business asset, and the strategic deployment of AI as a transaction tool. This dual competency, combined with robust risk management frameworks and stakeholder transparency, will prove essential for navigating the AI-transformed M&A environment.
​
Suliman is a Director and Balkovic an Associate in Corporate and Commercial | Cliffe Dekker Hofmeyr
​
1 T Benson, AI in Business Statistics 2025 [Worldwide Data], accessed at https://aistatistics.ai/business/ on 8 July 2025.
2 National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, accessed at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf on 8 July 2025.
3 European Commission, High-Level Expert Group on Artificial Intelligence, accessed at https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai on 8 July 2025.