Protecting your business during Cyber Security Awareness Month

Sarah Gee

This Cyber Security Awareness Month, it’s worth reflecting on your organisation’s Artificial Intelligence (AI) strategy. Taking a proactive approach to AI governance will not only protect your business but will also help build trust with your clients and stakeholders.

Whilst AI holds great promise of revolutionising the workplace (or poses a great threat, depending on how you look at it), it is important to recognise that it is still in its infancy and, like an infant, it needs very close supervision.

It is still prone to those infamous ‘hallucinations’ and inaccuracies, but don’t let the risks scare you off. We’ve already seen enough potential here to justify experimenting, innovating, and planning ahead for the future.

It’s been more than 18 months since our first blog post looking at the legal considerations of ChatGPT. Our general views on risks and risk management haven’t changed, so it’s worth another read.

AI is growing up fast, and more of our clients are jumping on the AI bandwagon. We’re also seeing the Government dipping its toes into AI regulation. So, with Cyber Security Awareness Month upon us, it’s a great time to check in and see where things stand – and where they might be headed.

Where we’re at in business with AI

In the USA, a whopping 22% of companies are using AI to tackle labour shortages, 65% are using it internally, and 74% are testing new AI tools. There’s even a link between AI adoption and the dream of a 4-day work week – what’s not to love about that?

New tools like Microsoft Copilot are helping businesses rely less on open, public tools like ChatGPT by keeping data in closed, secure systems. But these developments do not make AI risk free. AI still comes with some pretty significant risks – especially when vendors are overseas and might not be playing by Australian rules, or even be aware of them. We’ve seen more than a few contracts where software companies try to make compliance your problem via their T&Cs, so beware!

Even without this contract risk, getting your internal risk management in shape is still critical. As an organisation, regulating what AI can be used for, controlling what data is put into it, and training your staff on your expectations are all fundamental controls you need to have in place.

Forward-thinking businesses have a unique opportunity to get ahead of the curve by embracing these early technologies, getting their risk management right and, most importantly, paying close attention on an ongoing basis.

Voluntary AI standards released

On 5 September 2024, the Federal Government released the Voluntary AI Safety Standard, built around 10 voluntary guardrails. This is the first step towards regulation.

On its release, the Government acknowledged that the use of AI is “probably one of the most complex public policy challenges for governments across the world.”

The guardrails, published by the Department of Industry, Science and Resources, focus on the introduction of AI in organisations. They cover things like:

  • Assigning an AI owner within the organisation
  • Implementing a robust risk management process
  • Having a regular testing and monitoring program
  • Establishing processes to oversee and test AI-generated decisions.

While these standards are still voluntary, it’s smart to start thinking about how they might shape future laws – and what that means for your AI game plan.

Steps to introducing AI software in your organisation

If you are looking at introducing AI in your organisation (or already have), make sure you conduct a Privacy Impact Assessment on every technology that touches data relating to people and their personal information.

This assessment should answer the following types of questions.

Data collection and processing

  • What data will be collected and processed by AI?
  • How will it be collected?
  • What purpose is it being collected and processed for?

Ensure data is provided to AI only for legitimate, lawful purposes that align with your privacy policies.

Data minimisation

  • Is all collected data necessary?
  • Can users adopt pseudonyms or interact anonymously?

Limit data collection to only what is strictly needed for the AI system’s function, and reduce risks by ensuring personal data isn’t directly identifiable wherever possible.
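As a minimal sketch of what pseudonymisation can look like in practice, the Python below replaces a direct identifier with a stable token before anything is sent to an AI service. The salt handling and field names here are illustrative assumptions, not a prescribed method:

  import hashlib

  # Illustrative only: swap direct identifiers for stable pseudonyms before
  # a record is passed to an external AI service.
  SECRET_SALT = "rotate-me-regularly"  # assumption: kept in a secrets manager, not in code

  def pseudonymise(value: str) -> str:
      """Return a stable, non-reversible token for a direct identifier."""
      digest = hashlib.sha256((SECRET_SALT + value).encode("utf-8")).hexdigest()
      return f"user_{digest[:12]}"

  record = {"name": "Jane Citizen", "email": "jane@example.com", "query": "Reset my password"}

  # Strip what the AI does not need; pseudonymise what it does.
  ai_payload = {
      "user_id": pseudonymise(record["email"]),  # stable handle, not the email itself
      "query": record["query"],
  }
  print(ai_payload)

The point is less the hashing itself than the habit: decide field by field what the AI genuinely needs, then strip or tokenise the rest.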

Transparency and consent

  • How will users be informed about the collection and use of data?
  • How will consent be obtained?

Informed consent will not just apply at a customer/client level, but also to suppliers and employees. Are there specific laws that apply to your industry around data collection and consent? Also consider any changes you might need to make to your terms of trade, employment contracts or policies.

Data security

  • What security measures are in place?
  • Where is the data held?
  • Who has access to the data?

Make sure you understand where data is held and what it can be used for. Are you leaving the vault door open? Software vendors may store or use information in ways you don’t expect. Remember, regulation in this area is new, and vendors based overseas may not understand or comply with Australian law.

Algorithm transparency

  • How does the AI make decisions?
  • Is there any bias or discrimination built into the model or the data behind it?
  • Is there a process for human oversight?

Make sure you fully understand the decisions AI is making and the logic sitting behind that decision-making (think of the Robodebt scandal). Both should be aligned with your organisational values.
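To make the human-oversight point concrete, here is a minimal Python sketch of routing AI decisions to a person whenever confidence is low or the outcome is adverse. The threshold and field names are assumptions for illustration, not a prescribed standard:

  from dataclasses import dataclass

  # Illustrative sketch only: a simple human-in-the-loop gate for AI decisions.
  @dataclass
  class AiDecision:
      subject_id: str
      outcome: str       # e.g. "approve" / "decline"
      confidence: float  # model-reported confidence, 0.0 to 1.0

  REVIEW_THRESHOLD = 0.9              # assumption: anything less confident goes to a person
  HIGH_IMPACT_OUTCOMES = {"decline"}  # adverse decisions always get human review

  def requires_human_review(decision: AiDecision) -> bool:
      return (
          decision.confidence < REVIEW_THRESHOLD
          or decision.outcome in HIGH_IMPACT_OUTCOMES
      )

  decision = AiDecision(subject_id="user_3f9c2a", outcome="decline", confidence=0.97)
  if requires_human_review(decision):
      print(f"Escalating {decision.subject_id} to a human reviewer")
  else:
      print(f"Auto-actioning {decision.subject_id}")

The Robodebt lesson is baked into the second rule: adverse outcomes are escalated to a person no matter how confident the system claims to be.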

Staff training and change management

  • Are your staff trained to look for risks and understand the limitations?
  • Do your staff consent to the use of AI on their personal data?
  • Is there one person allocated to have oversight and responsibility for AI risk?

Staff training is a key component of risk management. Your staff are the ones who will spot the errors, and drive innovation and improvement. Make sure they are on board with your strategy.

Ongoing monitoring

  • How often do we check output?
  • How often do we think about risk and risk mitigation?

Keep an eye on things – AI may ‘hallucinate’ or throw out some questionable outputs from time to time, so regular checks are a must.
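As a minimal sketch of what a regular check might look like (the sampling rate and structure are assumptions for illustration, not a recommended standard), the Python below queues a small random share of each day’s AI outputs for human spot-checking:

  import random

  SAMPLE_RATE = 0.05  # assumption: review roughly 1 in 20 outputs

  def outputs_to_review(todays_outputs: list[dict]) -> list[dict]:
      # Always review at least one output, even on quiet days.
      k = max(1, int(len(todays_outputs) * SAMPLE_RATE))
      return random.sample(todays_outputs, k)

  todays_outputs = [{"id": i, "text": f"AI answer {i}"} for i in range(200)]
  for item in outputs_to_review(todays_outputs):
      print(f"Queue output {item['id']} for human review")

Pair random sampling like this with targeted checks on anything high-stakes, and keep a record of what was reviewed and what was found.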

Opportunity cost – understand the risks so you can control them

Don’t let fear of AI stop you from experimenting with tools that can drive your organisation forward and maybe even plug a few of those human resource gaps. As with any tool, you need training to use it properly, and you need to understand the risks so you can control them.

So, take the plunge, experiment, and maybe you’ll even find yourself working that elusive 4-day week. Just make sure to have some guardrails in place! You already know where to find the right advisers! 😉

To ensure you are safely using AI in your business, reach out to Sarah today.