Why companies should give their staff a list of AI ‘dos and don’ts’
Sponsored

Marie Murphy

Chief Operations Officer, Fort Privacy

Tricia Higgins

Chief Executive Officer, Fort Privacy

While AI can be transformative for a business, it also poses various risks. Organisations need to set rules and boundaries, so their staff can use the technology safely.


“Artificial intelligence is an extremely powerful technology,” says Marie Murphy, Chief Operations Officer at data protection and privacy specialists Fort Privacy. Tricia Higgins, CEO, adds: “However, that power needs oversight and care. It needs handling.”

Understand trustworthy AI and upcoming regulation

Companies must ensure that their AI systems are ‘trustworthy’ (which the European Union defines as ‘lawful’, ‘ethical’ and ‘robust’). Increasing uptake of AI solutions while mitigating their risks poses practical challenges. One example is employees resorting to ‘shadow AI’ (using unsanctioned and untrustworthy applications).

“For example, a member of staff might upload company and customer data into an unauthorised AI application, use it for a while then forget about it, leaving sensitive information exposed,” explains Marie. “It’s important to be aware of the risks that AI poses, in order to mitigate and manage them.”

Indeed, this risk-based approach to AI drives the European Union’s AI Act, the world’s first significant AI regulation. Parts of the Act come into effect later this year, establishing different rules for different risk levels. High-risk AI applications will require significant oversight in their development and deployment while low-risk applications won’t.

Three key steps to effective AI governance

Effective AI governance ensures safe and secure utilisation of technology while safeguarding sensitive corporate and customer data. “There are three steps to achieving good governance,” explains Marie.

“First, it’s vital to understand who is using AI within an organisation, what it’s used for and how it might be used in the future. Second, once you have this information, you can develop a ‘rules of engagement’ policy regarding AI use, which sets out what is and isn’t permitted. Third, you create an oversight committee — made up of people within key areas of the company such as IT, HR and compliance — who will monitor AI use and issues over time.”

Executive AI engagement beyond IT

The tone of AI governance should be set from the top. “It’s no use leaving it to the IT department,” insists Tricia. “Company leaders must be the ones who establish boundaries around AI use.”

Marie is certain that AI will impact every business eventually. While its potential is tempting, organisations must stay vigilant about its risks. “They can’t ignore the fact that their staff are starting to experiment with AI,” she says. “They need a proactive response to this issue to protect themselves from risk.”
