Businesses everywhere are quickly discovering the benefits of generative AI. Do you need to reply to a disgruntled customer? Ask AI. Do you want to draft an internal policy on dress codes? AI can help. You tell it what document you need, and it can generate a response within seconds. AI undoubtedly offers efficiency and convenience. However, businesses must be aware of the risks involved in relying on it to produce documents. This article discusses the risks businesses face when using generative AI tools to draft business documents and recommends several measures to reduce the likelihood of these risks eventuating.
Risk of Errors
Generally speaking, generative AI works by undergoing “training” on pre-existing data sets. During this training period, the AI tool learns and identifies patterns, which then assist it in generating appropriate responses. Issues tend to arise where the AI has not been trained on high-quality, unbiased and accurate data, or where the training data set is not of ample size.
For example, the free version of ChatGPT (GPT-3.5) was trained only on data up to September 2021. This means that if you ask it for the current interest rate, it cannot provide you with up-to-date information.
Generative AI tools also cannot yet reliably interpret complex material such as legislation. Accordingly, it is common for AI tools to generate responses that cite laws or regulations that simply do not exist. Documents or responses created by generative AI tools such as ChatGPT may therefore be factually inaccurate and, in turn, not fit for purpose.
Once you have produced a business document using generative AI, it is best practice to thoroughly fact-check it to reduce the risk of factual inaccuracies.
Risk of Bias
Generative AI relies significantly on the data set on which it was trained. If that data set contains any bias, the generative AI tool will likely reproduce it.
Bias can arise in a data sample if the data sample is small, lacks diversity or reflects stereotypes.
Generative AI tools may also inadvertently reinforce stereotypes if trained on a biased data set. For example, an AI tool may generate an image of a middle-aged white man in a suit when prompted to generate an image of a typical CEO.
AI models may also produce responses that lack diversity if trained on a biased data set. For example, if an AI model trained on data skewed towards young males were asked to generate skincare product recommendations, it may overlook the needs of older individuals or individuals from different cultural backgrounds, and its recommendations will likely reflect that bias.
Bias not only limits the accuracy of your business documents but also exposes you to reputational risk. For example, where bias in a document contradicts contemporary community values, your business’ reputation may suffer.
While bias is hard to detect, businesses should nonetheless review and verify any documents produced by generative AI tools to reduce the risk of biased outputs. Likewise, where a business is training its own AI, it should ensure the training data is broad, diverse and free from bias.
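For businesses with a technical team that is preparing training data, a simple first check is whether any one group dominates the sample. The sketch below is a minimal illustration in Python; the data set, field names and 50 per cent threshold are assumptions for demonstration only, and a skewed share is a prompt for further review rather than proof of bias.

```python
from collections import Counter

# Hypothetical labelled training examples: (text, demographic_group).
# The groups and records below are illustrative assumptions only.
training_examples = [
    ("Recommended a gentle cleanser", "young_male"),
    ("Recommended a retinol serum", "older_female"),
    ("Recommended a fragrance-free moisturiser", "young_female"),
    ("Recommended a daily sunscreen", "young_male"),
    # ... a real data set would contain many more rows
]

# Count how many examples fall into each demographic group.
group_counts = Counter(group for _, group in training_examples)
total = sum(group_counts.values())

print("Representation by group:")
for group, count in group_counts.most_common():
    share = count / total
    print(f"  {group}: {count} examples ({share:.0%})")
    if share > 0.5:  # assumed threshold; adjust to your own review standards
        print(f"    Warning: '{group}' dominates the sample; consider adding more diverse data.")
```

A check like this will not catch subtler forms of bias, such as stereotyped wording within the examples themselves, so it should supplement, not replace, human review.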
Risk of Breaching Privacy Laws
Businesses should refrain from inputting personal or sensitive information about their clients or staff into generative AI tools. This is best practice for maintaining compliance with your obligations under Australian privacy law, which imposes strict rules on how you can use and disclose personal information and on the steps you must take to keep that information secure. Your business must comply with Australia’s privacy laws or risk legal, financial and reputational damage.
Risk of Data Breaches
Like other data services, generative AI tools in most cases process and store users’ prompts while offering few, if any, security assurances in their terms of use.
Additionally, users who upload information to generative AI tools, such as ChatGPT, run the risk that this data may be intercepted or accessed by an unauthorised party. This risk of data breaches will continue to grow as hackers and cyber attackers develop new and more sophisticated ways of breaching conventional data security measures.
Therefore, your business should exercise caution when choosing which AI models to use and should review the data security clauses in each generative AI tool’s terms of use.
Risk of Unintentional Disclosure
Since generative AI draws on data from previous inputs, there is a risk that content it produces will contain information provided by a previous user. This is most likely to occur when the prompt given to the tool includes contextual details that closely match information previously provided by another user of the platform.
To reduce these risks when using tools such as ChatGPT to draft business documents, avoid inputting confidential, personal, sensitive or commercially sensitive information. Where you cannot omit certain data, your business should aim to de-identify it.
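If you do need to include real examples in a prompt, one low-tech option is to redact obvious identifiers first. The sketch below assumes a simple regex-based approach in Python; the patterns, placeholder names and de_identify helper are illustrative only and are no substitute for a proper de-identification process or dedicated redaction tooling.

```python
import re

# Minimal, illustrative redaction patterns. Real de-identification usually
# requires purpose-built tools and human review.
REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # rough Australian format
    "[NAME]": re.compile(r"\b(?:Jane Citizen|John Smith)\b"),  # placeholder names only
}

def de_identify(text: str) -> str:
    """Replace obvious personal identifiers with generic placeholders."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = (
    "Draft a reply to Jane Citizen (jane.citizen@example.com, 0412 345 678) "
    "about her overdue invoice."
)
print(de_identify(prompt))
# Draft a reply to [NAME] ([EMAIL], [PHONE]) about her overdue invoice.
```

The de-identified prompt still gives the AI tool enough context to draft the reply, while the personal details stay inside your business.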
Most AI models also allow users to opt out of having data used for “training”.
Key Takeaways
Generative AI is a great tool for drafting business documents quickly and efficiently. However, businesses that do not understand the risks of using AI to generate these documents may find themselves in hot water. The key risks of relying on generative AI to draft your business documents include errors, bias, non-compliance with Australian privacy laws, data breaches and unintentional disclosure of information.
If you need help understanding the legal risks associated with using generative AI for your business, our experienced artificial intelligence lawyers can assist as part of our LegalVision membership. For a low monthly fee, you will have unlimited access to lawyers to answer your questions and draft and review your documents. Call us today on 1300 544 755 or visit our membership page.
Frequently Asked Questions
What is generative AI and how does it work?
Generally speaking, generative AI works by undergoing “training” on pre-existing data sets, enabling it to learn and identify patterns to produce appropriate and useful responses.
How accurate is generative AI?
The accuracy of generative AI depends on the quality and accuracy of the data it is trained on. Where AI relies on outdated, biased or inaccurate data, it will produce content based on that data.