As generative AI makes ever greater inroads into the business activities of more and more companies and organisations, there are several legal aspects to be aware of. In anticipation of the upcoming AI regulation, Lindahl’s experts Alexander Tham and Mikael Olsson list five key legal areas to take into consideration when generative AI is used.
AI AND COPYRIGHT
Although we are still waiting for the legislator or the courts to settle the matter definitively, there is broad consensus among lawyers that what is produced by AI tools cannot be protected by copyright, which is reserved for works created by humans.
Risk: If the output from the AI tool cannot be protected and made subject to an exclusive right, there is a risk that the economic value of what is produced will be limited, perhaps to the point that the investment in input and tools is not matched by the value that can be extracted.
What you can do: Make sure to have guidelines for the use of AI in your business: define the steps in which AI will be used and consider in advance how intellectual property values will be safeguarded at each stage. Also ensure that the business’s agreements are compatible with the absence of copyright, for example when it comes to guarantees and transfer provisions.
AI AND INTELLECTUAL PROPERTY INFRINGEMENTS
The person who supplied the input data and instructions to an AI tool is typically liable for the output from the tool. The user may also be liable if the output contains material added independently by the AI tool that proves to be subject to intellectual property restrictions (for example, if the tool has added code that is subject to an open-source licence).
Risk: The use of AI output containing a third party’s material that is protected by intellectual property entails a risk that the user may become subject to actions for infringement, claims for damages and loss of goodwill.
What you can do: Make sure to have internal guidelines for the use of AI, including comprehensive documentation, so that the origin of the data can be identified, the legality of its use ensured and its use tracked afterwards. Investigate and document the data the AI tool has been trained on.
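To give a sense of what such documentation could look like in practice, the minimal sketch below records the provenance of AI-assisted output in a simple log. The field names, the example values and the log file name are illustrative assumptions only, not a prescribed standard, and the record would need to be adapted to the business in question.

```python
# Minimal, hypothetical sketch of a provenance log for AI-assisted work.
# Field names and values are illustrative assumptions, not a prescribed standard.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIUsageRecord:
    tool: str                # which AI tool was used
    model_version: str       # version, so results can be traced later
    prompt_summary: str      # what the tool was asked to do
    input_sources: list      # where the input data came from and its licence status
    output_reference: str    # where the generated output is stored
    reviewed_by: str         # person who checked the output before use
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def log_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append one record to a JSON Lines file so use of AI output can be tracked afterwards."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_usage(AIUsageRecord(
    tool="example-llm",
    model_version="2024-05",
    prompt_summary="Draft marketing copy for product X",
    input_sources=["internal product sheet (owned by the company)"],
    output_reference="drafts/marketing_copy_v1.md",
    reviewed_by="A. Editor",
))
```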
AI, DATA PROTECTION AND CONFIDENTIALITY
The content input into the AI tool may be subject to confidentiality by law or under an agreement. Furthermore, output or areas of use relating to individuals may be subject to protection under personal data legislation. At the same time, the suppliers of the tools often reserve the right to use data that is added to the tool.
Risk: Besides the fact that the use of data in the tool may itself constitute a breach of confidentiality, there is also a risk of breaching confidentiality or other rights through data leaks, violations of licence terms or breaches of data protection legislation (such as the GDPR or HIPAA). This may be due, for example, to incorrect use of the AI tool, the tool failing to meet security requirements or the tool being relied on for making decisions. It should also be noted that, by default, content added to the tool may become part of the tool’s data model, which can mean that other users are also able to access the information.
What you can do: Be careful about what you put into the AI tool: is it data that is yours to use, and can you grant others a right to use it? Review the tool’s terms and security as well as your own policies for data handling and user training.
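As a simple illustration of such care in practice, the sketch below flags obviously sensitive content before it is sent to an external AI tool. The patterns (an e-mail address, a confidentiality marker and a generic personal identity number format) are assumptions chosen for demonstration; a filter of this kind can support, but never replace, a proper legal and security review of what data may be shared.

```python
# Illustrative sketch: flag obviously sensitive content before it is sent to an
# external AI tool. The patterns below are assumptions for demonstration only.
import re

PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "confidentiality marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
    "personal identity number (generic format)": re.compile(r"\b\d{6}[-+]?\d{4}\b"),
}


def screen_prompt(text: str) -> list[str]:
    """Return warnings for content that may need review before being shared with the tool."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]


warnings = screen_prompt("Summarise this CONFIDENTIAL memo from anna@example.com")
if warnings:
    print("Review before sending to the AI tool:", ", ".join(warnings))
```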
AI AND FALSE OR MISLEADING OUTPUT
In addition to the external risk of malicious actors using AI to help them deliberately create malware or generate content for phishing attacks and scams, the possibility that an AI tool may itself generate output that is false or misleading can never be ruled out.
Risk: The consequences of using or making decisions based on false or misleading information are difficult to foresee, but they can lead to claims for damages, breaches of the law (with administrative or criminal sanctions) and damage to your brand.
What you can do: Ensure that the AI tool is reliable and accurate and review the veracity and completeness of the output, for example by testing the tool for its intended use. Consider the need for further controls before it is implemented in your business.
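One way to make such testing concrete is a small evaluation harness that runs the tool against questions with known answers and measures how often the output is acceptable before roll-out. In the sketch below, ask_tool is a hypothetical placeholder for the AI tool under evaluation, and the acceptance check and threshold are illustrative assumptions.

```python
# Sketch of a simple pre-deployment accuracy check. ask_tool() is a hypothetical
# placeholder for the AI tool being evaluated; the threshold is an assumption.
def ask_tool(question: str) -> str:
    raise NotImplementedError("Replace with a call to the AI tool being tested")


def evaluate(cases: list[tuple[str, str]], threshold: float = 0.9) -> bool:
    """Return True if the share of acceptable answers meets the chosen threshold."""
    correct = 0
    for question, expected in cases:
        answer = ask_tool(question)
        if expected.lower() in answer.lower():  # crude acceptance check, for illustration
            correct += 1
    accuracy = correct / len(cases)
    print(f"Accuracy: {accuracy:.0%} on {len(cases)} test cases")
    return accuracy >= threshold


test_cases = [
    ("What year did the GDPR start to apply?", "2018"),
    ("Is AI-generated text automatically protected by copyright?", "no"),
]
# Before roll-out: if not evaluate(test_cases), consider further controls first.
```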
AI AND INSURANCE
Solid, extensive insurance cover is a hygiene factor for the vast majority of businesses. But what do your insurance policies say about risks associated with technology in general and AI in particular?
Risk: A lack of insurance cover can prove extremely costly, and adequate cover may be crucial to the company’s or the organisation’s management of AI-related risks.
What you can do: Review your insurance cover and identify which supplementary insurance policies, or additions to existing business and liability insurance policies, can also cover AI-related risks. This area is in its infancy, but just as it is now possible to insure against cybersecurity risks, similar developments linked to AI can be expected.
SUMMARY – THE LEGAL ASPECTS OF AI IN BUSINESS ACTIVITIES:
Since many people are curious about AI tools, it is a matter of ensuring that the business uses the technology in a risk-aware manner – both in relation to what is added to the tools and what is generated.
Copyright. Consider in advance how intellectual property values are safeguarded and whether the business concept and the business’s agreements are compatible with the absence of copyright on AI-generated material.
Infringement of intellectual property rights. Have detailed processes and guidelines on the use of generative AI, including comprehensive documentation, in order to avoid intellectual property infringement and loss of goodwill.
Data protection and confidentiality. Content may be protected by law or through agreements. Ensure that you have the right to use the content you add to AI tools, evaluate the security and conditions for the tool and review your policies for data processing and user training.
False or misleading output. Test and ensure that the AI tool is reliable and accurate.
Insurance. Review the business’s existing insurance cover, identify whether there are supplementary insurance policies that can cover AI-related risks in the business, and monitor developments in the insurance market.
This article is trend-spotting and must not be seen as legal advice.