
Canadian companies’ AI policies aim to balance risk with reward

“You’d be wrong not to harness the power of this technology. It has so many opportunities for productivity, for functionality,” says the founder of an AI management software company

When talent search platform Plum noticed that ChatGPT was making waves across the tech world and beyond, it decided to head to the source to find out how staff could and couldn’t use the generative AI chatbot.

ChatGPT, which can turn simple text instructions into poems, essays, emails and more, drafted a policy last summer that got the Kitchener, Ont.-based company about 70 percent of the way to its final version.

“There was nothing wrong there; it was nothing crazy,” recalls Plum’s chief executive Caitlin MacGregor. “But there was an opportunity to get a little more specific or make it a little more personalized to our business.”

Plum’s final policy — a four-page document put together last summer from ChatGPT’s draft and advice from other startups — advises staff to keep customer and proprietary information away from AI systems, to review anything the technology spits out for accuracy and to attribute any content generated with AI.

That makes Plum one of several Canadian organizations codifying their stance on artificial intelligence as people increasingly rely on the technology to boost their productivity at work.

Many have been spurred to develop a policy by the federal government, which released a set of AI guidelines for the public sector last fall. Dozens of startups and larger organizations have since adapted those guidelines for their own needs or are developing their own versions.

These companies say their goal is not to curtail the use of generative AI, but to ensure that workers feel empowered enough to use it — responsibly.

“You’d be wrong not to harness the power of this technology. It has so many opportunities for productivity, for functionality,” said Niraj Bhargava, founder of Nuenergy.ai, an AI management software firm in Ottawa.

“But on the other hand, if you use it without putting (up) guardrails, there are a lot of risks. There are the existential risks to our planet, but then there are the practical risks of bias and fairness or privacy issues.”

Finding a balance between the two is key, but Bhargava said there is no “one-size-fits-all” policy that will work for every organization.

If you’re a hospital, you might have a very different answer to what’s acceptable than a private-sector technology company, he said.

There are, however, a few principles that appear frequently in these policies.

One is to not plug customer or proprietary data into AI tools, because companies cannot ensure that the information will remain private. It could even be used to train the models that power AI systems.

Another is to treat anything the AI spits out as potentially false.

AI systems are not yet fully reliable. Tech startup Vectara estimates that AI chatbots make up information at least three percent of the time, and in some cases as much as 27 percent of the time.

A B.C. lawyer had to admit in court in February that she had cited two cases in a family dispute that were invented by ChatGPT.

A California attorney similarly found accuracy issues when he asked the chatbot in April 2023 to compile a list of journalists who had sexually harassed someone. The chatbot incorrectly named an academic and cited a Washington Post article that did not exist.

Organizations developing AI policies often address issues of transparency.

“If you wouldn’t pass off something someone else wrote as your own work, why would you pass off something ChatGPT wrote as your own work?” asked Elissa Strome, executive director of the pan-Canadian artificial intelligence strategy at the Canadian Institute for Advanced Research (CIFAR).

Many policies say people should be told when AI is used to analyze data, write text or create images, videos or audio, but other cases aren’t as clear-cut.

“We can use ChatGPT 17 times a day, but do we have to write an email disclosing it every time? Probably not if you’re figuring out your travel itinerary and whether you should go by plane or drive, that sort of thing,” Bhargava said.

“There are plenty of innocuous cases where I don’t think I need to disclose that I’ve used ChatGPT.”

It is unclear how many companies have explored all the ways staff could use AI and conveyed what is and isn’t acceptable.

An April 2023 study of 4,515 Canadians by consulting firm KPMG found that 70 percent of Canadians using generative AI say their employer has a policy around the technology.

However, October 2023 research from software firm Salesforce and YouGov concluded that 41 percent of the 1,020 Canadians surveyed reported that their company had no policies on the use of generative AI for work. About 13 percent had only “loosely defined” guidelines.

At Sun Life Financial Inc., employees may not use external artificial intelligence tools for work because the company cannot guarantee that customer, financial or health information will be kept private when these systems are used.

However, the insurer is allowing workers to use in-house versions of Anthropic’s AI chatbot Claude and GitHub Copilot, an artificial intelligence-based programming assistant, because the company has been able to ensure that both adhere to its data privacy policies, said chief information officer Laura Money.

So far, she’s seen staff using the tools to write code and create notes and scripts for videos.

To further encourage experimentation, the insurer has also invited staff to enroll in a free online course from CIFAR that teaches the principles of AI and its effects.

Of the move, Money said, “You want your employees to be familiar with these technologies because it can make them more productive and improve their work lives and make work a little more fun.”

About 400 workers have signed up since the course was offered to them a few weeks ago.

Despite offering the course, Sun Life knows its approach to technology must continue to evolve because AI is advancing so quickly.

Plum and CIFAR, for example, each launched their policies before generative AI tools that go beyond text to create images, audio or video were readily available.

“There wasn’t the same level of imaging as there is now,” MacGregor said of the summer of 2023, when Plum launched its AI policy with a hackathon that asked staff to use ChatGPT to write poems about the business or experiment with how it could solve some of the company’s problems.

“Certainly an annual review is probably required.”

Bhargava agrees, but said many organizations still need to catch up because they don’t yet have a policy.

“Now is the time to do it,” he said.

“If the genie is out of the bottle, we can’t think ‘maybe next year we’ll do this.’”

This report by The Canadian Press was first published on May 6.
