Artificial intelligence (AI) is the newest golden child in technology. While definitions abound, we will define artificial intelligence as “the theory and development of computer systems able to perform tasks that normally require human intelligence.” This article addresses licensing issues that are unique to AI, including legal compliance of AI-based decisions, allocation of intellectual property rights, ownership and use rights in the components of AI, and data use and privacy.
AI Licenses and Service Agreements
Licensing AI capability from a third party is likely the fastest way for a business to obtain AI. The license may take the form of an on-premise license of AI that will be installed, trained and operated by the business, or it may be part of a software-as-a-service (SaaS) solution hosted in the cloud by the provider.
A unique aspect of licensing AI is that the output, value and performance of an AI system generally cannot be accurately predicted before an AI solution is implemented. Too much is new or in the hands of the user for the licensor to make strong promises. Moreover, AI with machine-learning capabilities will change over time, generally but not necessarily improving, so it may be impossible to test at the outset how the product will work over the license or subscription term.
Many businesses are turning to a collection of AI providers to test the waters. A good, lower-risk way to do this is through a proof-of-concept arrangement. A proof-of-concept arrangement is a short-term agreement that allows a company to test and a supplier to prove the value of an AI product or service.
Once the proof-of-concept is complete, the business may license the AI from a provider. Businesses should seek to satisfy the usual requirements for license and SaaS agreements in their AI license and services agreements, with particular attention to the following four unique areas.
Legal Compliance: AI-based decisions must satisfy the laws and regulations that apply to businesses. This requires a business to apply the same level of diligence to the AI tool or service that it applies to its other third-party products and services. Of particular concern is that AI-based decisions may discriminate because they rely on data that reflects a discriminatory past or look only at correlations rather than causal factors. Businesses that use AI tools in credit decisions or fraud detection, for example, must ensure that these tools do not discriminate against protected classes of applicants or employees.
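For illustration only, the short Python sketch below shows one common screening test a licensee's diligence team might run on AI output: the "four-fifths rule" comparison of approval rates across groups. The data, group labels and thresholds are hypothetical, and such a check is a starting point for the legal and statistical review described above, not a substitute for it.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose approval rate is below 80% of the highest group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    if best == 0:
        return {}
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical sample decisions: (group label, approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(sample))  # e.g. {'B': 0.5} -> warrants closer review
```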
AI is an increasing focus for regulators. For example, the New York Department of Financial Services recently issued requirements on the use of “unconventional sources or types of external data” to address the risk of unlawful discrimination and a lack of data transparency in insurance decisions. If a business uses AI for a decision that it may need to explain, the licensee should look for AI systems to produce output that is transparent and auditable, and that can be explained—sometimes called “explainable AI.”
Licensees should verify, during due diligence, the extent to which the AI decisions and outcomes are explainable, and the method by which the business may access those explanations and related data. The license agreement may also need to specify that the AI may be subject to regulatory examination and require the AI provider to cooperate with such examinations. A business may also want to require that the AI include “circuit breakers”: a method for pausing operations to gather data about correct and compliant operation, confirm security compliance and make necessary adjustments to the AI tool to eliminate errors, mistakes, bias or non-compliant decision criteria and output.
Record-keeping and audit requirements are also important considerations for businesses. Because AI tools evolve, data sets change and iterations are part of the process, AI licensees should address how they can access past decisions that were based on AI tools and data sets that have since shifted. This is particularly important when a business is using AI in a provider’s cloud and is not in control of archiving the AI components and outputs.
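As a purely illustrative sketch, a licensee might ask whether each AI decision can be captured in a versioned record along the following lines, so that past decisions remain auditable even as the model and data sets change. The field names below are hypothetical and are not drawn from any particular provider's system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit record tying a decision to the model and data versions that produced it."""
    decision_id: str
    model_version: str          # version of the AI tool at decision time
    training_data_version: str  # version of the training data set used
    inputs: dict                # inputs supplied by the business
    output: str                 # the decision or score produced
    explanation: dict           # top contributing factors, if the tool exposes them
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example of the kind of record a licensee might expect to retrieve on audit
record = DecisionRecord(
    decision_id="2024-000123",
    model_version="credit-model 3.2",
    training_data_version="applicants-2023Q4",
    inputs={"income": 54000, "debt_ratio": 0.31},
    output="approved",
    explanation={"income": 0.42, "debt_ratio": -0.18},
)
print(record)
```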
Licensees of AI can mitigate these risks by incorporating into the license the oversight, risk management and control provisions needed to meet legal compliance and business objectives. Finally, consider whether the license should include rights to training and access to specialists who are familiar with the AI tools and can assist the business with its training, use and ongoing monitoring requirements. Regular compliance meetings with the provider may be needed to provide assurance on these key items.
Protection of IP Rights: Patent, copyright, trade secret and other intellectual property (IP) laws were written with a bias toward protecting human creativity, and IP laws in the United States do not square neatly with AI. Not only may a business not own AI that it pays to create, it also may not have the means to fully protect its AI under US IP laws. Contractual protections are a key element of capturing and preserving value in the creation of, and returns on the investment in, AI. To be effective, these protections must be implemented before the AI effort begins.
Licensees should consider contractual ownership and use rights in the components of AI, including the AI tool, evolutionary changes to the AI tool, the training data and instructions, and the output of operation of the AI tool. In an AI license, providers expect to continue to own the underlying AI tool, and some may expect to own the evolutionary changes as well. Much of the AI that businesses will use may require training. The license should address which party will train the AI, which party will own the training instructions and which party will own the evolution of the AI tool based on the training.
Shifting to the output of the AI tool, most licensees would expect to own the decisions the AI tool makes based on the licensee’s input, but specific language in the license agreement will be necessary to achieve that result. Once the parties have determined how they will allocate these ownership rights, they also need to determine whether, and to what extent, the other party will have ongoing license and use rights in those components.
Data Use and Privacy: Data is the fuel for AI, but data use must comply with privacy, data security, export control and other laws that apply to the data. In addition, data use must comply with any contractual commitments made to third-party data suppliers. To guard against these data pitfalls, businesses should inquire as to the level of legal and regulatory diligence that has been done on the uses of data to fuel AI systems. The license should specify whether the AI will rely on provider data, business data or both, and, importantly, which party will own which data, as well as which party may use that data and for what purposes.
The license agreement may also specify that the party supplying data is responsible for obtaining the consents and rights necessary to use that data with the AI software. The license should also address liability for issues arising from improper use or failure to obtain proper consents. If the data include personal data, the business should assess whether the intended use complies with its privacy policies governing that data. Similarly, many jurisdictions, including the European Union, have strict data protection laws that prohibit the use of individual data for automated processing to evaluate features of behavior, preferences or location absent the explicit consent of the individual. And yet, automated processing of individual data to determine preferences is the hallmark of many AI tools. Consider whether the license should require the provider to conduct periodic privacy assessments of the AI tool.
Embedded AI Licenses: AI has been called “the electricity of the 21st century” due to its potential to transform a wide range of products and services. Increasingly, products are being sold with embedded AI systems, with voice-controlled devices, autonomous vehicles and personalized recommendations being common examples for consumer products. Businesses are also actively working to add AI to their operations and their business-to-business offerings.
As a result, to remain competitive, many businesses will need to work with service providers that use AI in delivering their services. Service agreements that use or rely on AI are another channel through which businesses may obtain the use of AI. Although the main purpose of such an agreement is the receipt of services, and licensing of AI may not be its cornerstone, businesses should require service providers to disclose whether they are using AI tools to provide the services and, if so, should understand those uses. If the uses bear on any of the issues described above, the business should take care to perform diligence on those uses and to define the contractual license terms and other rights and obligations with respect to such AI.
The licensing of AI provides a way for most businesses to quickly capture some of the competitive advantages of this new and exciting technology. As with any asset that is licensed rather than owned, the licensee must understand and plan for the unique issues presented by AI. Doing so will enable businesses to better comply with legal and regulatory requirements, avoid surprises regarding ownership and use, plan for the data that will fuel the AI, and use contract terms to allocate IP rights in the absence of clear outcomes under IP law.
Brad Peterson is a partner in Mayer Brown’s Chicago office and leads the firm’s global technology transactions practice. Brad’s practice focuses on data, digital, outsourcing and software transactions, data licensing and analytics, outsourcing of the full range of information technology and business process functions, as well as projects in emerging technologies such as artificial intelligence, robotic process automation, blockchain and other distributed ledger technologies. Rebecca Eisner is a partner and member of the Global Management Committee for Mayer Brown and previously served as a member of the Global Partnership Board and as partner-in-charge of Mayer Brown’s Chicago office. Rebecca has more than 25 years of experience in the areas of digital transformation, data, software, outsourcing, data privacy and security. She has also represented many clients in complex global technology and business process outsourcing transactions.