
Canadian companies' AI policies aim to balance risk with rewards

Caitlin MacGregor, founder of Kitchener, Ont.-based recruitment technology company Plum, is shown in a handout photo. The company recently created a policy on staff use of artificial intelligence (AI). THE CANADIAN PRESS/HO-Michael Henry *MANDATORY CREDIT*

TORONTO — When talent search platform Plum noticed ChatGPT sending ripples through the tech world and beyond, it decided to turn right to the source to lay out how staff could and couldn't use the generative artificial intelligence chatbot.

ChatGPT, which can turn simple text instructions into poems, essays, emails and more, churned out a draft document last summer that got the Kitchener, Ont.-based company about 70 per cent of the way to its final policy.

"There was nothing in there that was wrong; there was nothing in there that was crazy," Plum's chief executive Caitlin MacGregor recalled. "But there was an opportunity to get a little bit more specific or to make it a little bit more custom to our business."

Plum's final policy, a four-page document cobbled together last summer from ChatGPT's draft and advice from other startups, advises staff to keep client and proprietary information out of AI systems, review anything the technology spits out for accuracy and attribute any content it generates.

It makes Plum one of several Canadian organizations codifying their stance around AI as people increasingly rely on the technology to boost their productivity at work.

Many were spurred into developing policies by the federal government, which released a set of AI guidelines for the public sector last fall. Scores of startups and larger organizations have since reworked those guidelines for their own needs or are developing their own versions.

These companies say their goal is not to curtail the use of generative AI but to ensure workers feel empowered enough to use it — responsibly.

"You'd be wrong to not leverage the power of this technology. It's got so much opportunity for productivity, for functionality," said Niraj Bhargava, founder of Nuenergy.ai, an Ottawa-based AI management software firm.

"But on the other hand, if you use it without putting (up) guardrails, there's a lot of risks. There's the existential risks of our planet, but then there's the practical risks of bias and fairness or privacy issues."

Striking a balance between the two is key, but Bhargava said there's "no one size fits all" policy that works for every organization.

A hospital, he said, might have a very different answer about what's acceptable than a private-sector tech company.

There are, however, some tenets that frequently crop up in guidelines.

One is not plugging client or proprietary data into AI tools, because companies can't ensure such information will remain private. It might even be used to train the models that power AI systems.

Another is treating anything that AI spits out as potentially false.

AI systems are still not foolproof. Tech startup Vectara estimates AI chatbots invent information at least three per cent of the time and, in some instances, as much as 27 per cent of the time.

A B.C. lawyer had to admit to a court in February that she had cited two cases in a family dispute that were fabricated by ChatGPT.

A California lawyer similarly uncovered accuracy issues when he asked the chatbot in April 2023 to compile a list of legal scholars who had sexually harassed someone. It incorrectly named an academic and cited a Washington Post article that did not exist.

Organizations crafting AI policies also often touch on transparency issues.

"If you would not attribute something that somebody else wrote as your own work, why would you attribute something that ChatGPT wrote as your own work?" questioned Elissa Strome,executive director of Pan-Canadian artificial intelligence strategy at the Canadian Institute for Advanced Research (CIFAR).

Many say people should be informed when AI is used to parse data, write text or create images, video or audio, but other instances are not as clear-cut.

"We can use ChatGPT 17 times a day, but do we have to write an email disclosing it every single time? Probably not if you're figuring out your travel itinerary and whether you should go by plane or by car, something like that," Bhargava said.

"There's a lot of innocuous cases where I don't think I have to disclose that I used ChatGPT."

It's unclear how many companies have explored all the ways staff could use AI and conveyed what is and isn't acceptable.

A November 2023 study of 4,515 Canadians from consulting firm KPMG found 70 per cent of those who use generative AI say their employer has a policy around the technology.

However, October 2023 research from software firm Salesforce and YouGov concluded that 41 per cent of the 1,020 Canadians surveyed said their company had no policies on using generative AI for work. Some 13 per cent said their company had only "loosely defined" guidelines.

At Sun Life Financial Inc., staff are blocked from using external AI tools for work because the company can't guarantee client, financial or health information will be kept private when these systems are used.

However, the insurer lets workers use internal versions of Anthropic's AI chatbot Claude and GitHub Copilot, an AI-based programming assistant, because the company has been able to ensure both abide by its data privacy policies, said chief information officer Laura Money.

So far, she's seen staff use the tools to write code and craft memos and scripts for videos.

To get more staff experimenting, the insurer has encouraged them to enrol in a free, self-directed online course from CIFAR that teaches the principles of AI and its effects.

Of that move, Money said, "You want your employees to be familiar with these technologies because it can make them more productive and make their work lives better and make work a little more fun."

About 400 workers have enrolled since the course was offered to them a few weeks ago.

Despite offering the course, Sun Life knows its approach to the technology has to keep evolving because AI is advancing so quickly.

Plum and CIFAR, for example, each launched their policies before generative AI tools that go beyond text to create images, audio or video were readily available.

"There wasn't the same level of image generation as there is now," MacGregor said of summer 2023, when Plum launched its AI policy with a hackathon asking staff to write poems about the business with ChatGPT or experiment with how it could solve some of the business's problems.

"Definitely a yearly review is probably necessary."

Bhargava agrees, but said many organizations still have to play catch-up because they have no policy yet.

"The time is now to do it," he said.

"If the genie is out of the bottle, it's not like we can think 'Maybe next year we'll do this.'"

This report by The Canadian Press was first published May 6, 2024.

Companies in this story: (TSX:SLF)

Tara Deschamps, The Canadian Press

Note to readers: This is a corrected story. A previous version had an incorrect photo caption, and the story included the wrong month for the KPMG study.
