ChatGPT wrote this blog post.
A slight oversimplification perhaps.
I planned this blog post and put together some careful prompt engineering, telling ChatGPT what I wanted to cover: the legal considerations of ChatGPT. I told it that I wanted an SEO-friendly post on the legals of ChatGPT and artificial intelligence. I gave it headings for each of the issues I wanted to deal with. Once it was written, I changed at least 50% of the content because I’m fussy. As a last step, I asked ChatGPT to critique what I put together. See the results for yourself.
I know that most people I speak with are using ChatGPT or other AI in their organisations by now. Others are too scared to use it, or haven’t had time to research the issues, so they’re avoiding it until they do. That’s ok, I’ve done the work for you!
So to the bits that ChatGPT wrote for me. Here we go!
In the rapidly evolving world of artificial intelligence (AI), ChatGPT has emerged as a powerful language model capable of generating human-like text. While AI has immense potential, there are some legal implications and considerations associated with its use.
In this blog post, we will explore key legal aspects surrounding ChatGPT and AI, including potential uses, copyright, confidentiality, accuracy, and the anticipated need for disclosure.
Plus my 5 key tips if you want to have a go at implementing it today.
From my discussions with business owners, I know that AI is already being used for things like:
- writing SEO-friendly blog posts (how very Inception of me)
- social media content prompts
- proofreading
- “reading” a text and summarising the key learnings
- staff hiring letters
- writing policies and procedures
Its capabilities are not just for the workplace. AI can also be used for:
- Creating training programs for the gym, or a plan to learn to run a marathon
- Creating meal plans
- Writing selection criteria and cover letters
- Writing jokes
- Responding to emails
- Tutoring and assisting students with homework
- Naming your side hustle business
As we get more confident with AI, more sophisticated uses may emerge such as:
- diagnosing patients by inputting symptoms, medical history, test results and other data to generate diagnosis and treatment options for a human specialist to consider
- judge-bots that can adjudicate cases by reviewing relevant precedents, the submissions from each side of the case and the evidence, then coming up with a decision for a human judge to consider
- project planning and estimation by inputting project specifications and requirements, plus construction regulations, standards and building codes, to generate project plans, support compliance and estimate resource requirements for a human project manager to consider
There are some MASSIVE issues we need to deal with before we become comfortable using AI at this level, which we will explore below (and get excited for a highly publicised example of how not to do this).
Plagiarism and breaching copyright
The first thing we need to be careful of is breaching copyright when using AI generated material. AI literally cobbles together its ideas from pre-existing content on the internet, stealing ideas, images and words to find a solution to your prompt. Any ‘idea’ is a summation of existing content available on the internet. This is not so different to human authors. It has long been posited by philosophers that there is no original thought. Mark Twain said we can only turn old ideas into new, curious combinations. But he reckoned they are “the same old pieces of coloured glass that have been in use through all the ages.”
But AI presents new challenges for copyright. We all know how to adapt a text so that we’re not plagiarising (hello high school English class). But what if we can’t see the original texts and make sure that copyright hasn’t been breached?
When asked, ChatGPT told me that “responsibility for ensuring compliance with copyright and avoiding plagiarism ultimately lies with the user. Taking proactive measures, such as conducting thorough reviews, using plagiarism detection tools, and seeking legal advice when necessary, can help mitigate potential risks and ensure compliance with copyright laws.”
This area requires careful inquiry into the sources that AI uses and using professional judgement about when AI generated content can be used. If something has to be done right, assumptions cannot be made that AI has got it right. Every word and source has to be considered and verified by someone who knows what they are doing.
“If something has to be done right, assumptions cannot be made that AI has got it right. Every word and source has to be considered and verified by someone who knows what they are doing.”
Ownership of copyright
Another thing to be aware of is that you can’t own copyright in AI-generated material. Under current legal frameworks, copyright protection is granted to human authors rather than AI systems, because copyright law typically requires the exercise of human creativity and judgment. This means that AI-generated material, such as text produced by ChatGPT, cannot be protected by copyright.
It gets complicated when (as for this article), parts are AI generated and other parts are human generated. Keeping records of what parts you created and what parts the program created would be helpful. It’s important to note that AI-generated content may still be subject to other legal protections, such as trade secrets or confidentiality agreements.
Confidentiality

It is critical to exercise caution when handling confidential information. As businesses, we all hold sensitive and confidential data that is subject to privacy laws (my blog post on this topic is here). When we input data into ChatGPT and other AI, we have given away that data and lost control of it.
Therefore, it is advisable not to input confidential information into generative AI systems, to avoid the potential unauthorised disclosure or misuse of such data. In the medical example above, inputting patient data would mean referring to the patient as X (for example) and making sure that the aggregation of the other data does not make them identifiable (date of birth, unique medical information or family histories, for example).
If you are a B2B business, putting together content also means keeping your clients’ data out of AI, as you will not have their permission to share it.
Protecting confidential information should always be a priority. Using AI responsibly includes safeguarding sensitive data by not inputting it in the first place.
Accuracy and Authenticity
A case this week out of the United States highlights the importance of carefully verifying AI-generated content for accuracy and authenticity. In this case, a lawyer relied on ChatGPT to generate pleadings. However, the AI invented cases that did not exist to support his arguments.
According to local reports, a lawyer was asked to help a client sue an airline after a food cart banged into his knee on a flight. The lawyer asked ChatGPT to put together the brief for the court. As you may know, courts operate based on precedent, which means that highlighting relevant previous cases can be key to a successful outcome. ChatGPT invented six cases that didn’t exist, even giving plausible case citations. When the lawyer asked ChatGPT for its sources, ChatGPT confirmed its cases could be located in legal databases. But the lawyer didn’t go the extra step of checking those citations himself. He now faces sanctions from a very unhappy judge. Not to mention his poor client.
I verified this story by checking that it appears on a reputable news site. The link to an article by Forbes is here: Lawyer Uses ChatGPT In Federal Court And It Goes Horribly Wrong (forbes.com)
This example underscores the need for human oversight and review when using AI systems. While AI models like ChatGPT are powerful tools, they should not replace critical human analysis and fact-checking. Users must exercise due diligence in verifying the output of AI systems to ensure accuracy and reliability. Arguably, it should only be used by people who know how to do the task they are asking AI to do, without using AI.
Anticipated disclosure requirements
As AI technology becomes more prevalent, industry commentators anticipate the introduction of disclosure requirements. It is expected that in the future, users will be required to disclose when AI has been used to generate works. This will promote transparency and enable consumers and readers to make informed decisions about the authenticity and origins of the content they encounter.
While such regulations are yet to be formalised, staying informed about evolving legal standards is crucial to staying compliant and maintaining trust in your organisation. This is one area I will continually be monitoring.
“My best summation is that treating AI like an enthusiastic but hapless new employee is the best path to success.”
My 5 top tips are:
- Do not put any confidential information into AI. Use a find and replace tool to remove confidential information from data you want to input. Even then, consider whether the other data or the aggregate of the data is confidential or sensitive or breaches a law or contract you’re bound by.
- Get creative about what you can use AI for. And if you can’t think of anything or aren’t feeling creative, why not ask AI what you can use it for?
- If you ask it to do something, give it very clear instructions. Give it background material and tell it the purpose of your query.
- Don’t stop with one prompt. If there is something that’s not quite right in the response you get, give it feedback and ask it to deliver something differently. For example, “Can you say that again, but imagine you are explaining it to a 5-year-old.” Play around with different prompts until you get a result that’s closer to what you want.
- Only use AI for things that either don’t need to be 100% right, or for things that you know how to do without using AI. Use your own professional skills and experience to check authenticity and accuracy. Otherwise, you run the risk of being swayed by false information that sounds persuasive.
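To make the first tip above concrete, here is a rough sketch of a find-and-replace redaction pass you could run over text before sending it anywhere near an AI tool. The names, terms and patterns here are hypothetical examples only, and this is nowhere near a complete de-identification solution; a real term list would come from your own records and privacy obligations.

```python
import re

# Hypothetical examples of confidential terms and their placeholders.
CONFIDENTIAL_TERMS = {
    "Jane Citizen": "Patient X",
    "Acme Pty Ltd": "Client A",
}

# Simple patterns for identifiers that often slip through:
# dates (e.g. dates of birth) and email addresses.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL REDACTED]"),
]

def redact(text: str) -> str:
    """Replace known confidential terms, then scrub common identifier patterns."""
    for term, placeholder in CONFIDENTIAL_TERMS.items():
        text = text.replace(term, placeholder)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Jane Citizen (DOB 14/02/1980, jane@example.com) reported knee pain."
print(redact(note))
# Prints: Patient X (DOB [DATE REDACTED], [EMAIL REDACTED]) reported knee pain.
```

Even after a pass like this, remember the point about aggregation: ask whether the details that remain could still identify someone when put together.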