Joel Rumson
With the rise of artificial intelligence (AI) sweeping the nation's schools, businesses, and organizations, the Government of Canada has taken a significant stride. Minister François-Philippe Champagne introduced a groundbreaking act, Bill C-27, during the first session of the 44th Parliament, which opened on November 22, 2021.
In the ever-evolving world of technology, it is essential that regulations keep pace with advancements. From young students experimenting with AI to massive organizations deploying it for marketing, Canada faces an AI revolution.
Complementing Bill C-27 is a voluntary code of conduct, emphasizing a proactive choice by stakeholders within Canada's tech landscape. The code isn't imposed by external regulation; instead, it signifies a collective commitment to transparency and a bias-free AI environment.
The code rests on a voluntary pledge by industry players, including BlackBerry, OpenText, Telus, and the Council of Canadian Innovators, to adhere to its guidelines. By embracing this framework, these entities commit to fostering a transparent and unbiased ecosystem for generative AI systems in Canada, demonstrating both their dedication to ethical practices and the power of voluntary cooperation in shaping the country's AI landscape.
Speaking at the “All In” conference on AI in Montréal, Industry Minister François-Philippe Champagne stated that the code will complement legislation making its way through Parliament, Bill C-27, and promote the safe development of AI systems in Canada.
“While we are developing a law here in Canada, it will take time,” he said. “If you ask people in the street, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products.”
The Canadian government had already proposed Bill C-27 in June 2022, focusing on AI legislation and specifically addressing the management of “high-impact systems.” (High-impact systems in AI are powerful technologies with significant effects on people’s lives and society, whether positive or negative.)
Furthermore, more than a thousand experts, including Montréal researcher Yoshua Bengio, signed an open letter calling for a six-month moratorium on the development of systems more powerful than GPT-4 until security protocols are established.
Doubts often surface regarding the present ethical frameworks in AI, raising questions about whether non-binding rules and abstract principles, like transparency, are specific and robust enough to guide users in a practical way.
Audrey Champoux, a spokesperson for Minister Champagne, stated, "Clause-by-clause needs to happen first, and we need to hear from witnesses before amendments are drafted."
By advocating for a moratorium, experts in Canada aim to slow AI innovation so that legislation can catch up, and to create principles transparent and robust enough to guide user actions.
The Quebec government has already recognized the importance of taking action and has entrusted the Quebec Innovation Council with reflecting on the issues raised by AI.
Canada’s code of conduct represents “an important step” in the direction of achieving “democratic governance,” said Yoshua Bengio, founder and scientific director of Montréal’s Mila AI institute, while adding that we must also look ahead. As AI technology becomes more potent, society must be equipped with the necessary tools to protect individuals, democracy, and society.
It is well known that without firm legal backing, ethical frameworks can give rise to 'ethical laundering,' where private companies create ethical guidelines primarily to enhance their public image, without being legally obliged to follow them.
In the absence of concrete legal support, ethical laundering can enable companies to create superficial ethical guidelines that may not align with their actual practices. This lack of legal accountability can result in inconsistencies, a lack of genuine commitment to ethical conduct, and even potentially exploitative practices, ultimately eroding trust among consumers and stakeholders.
At a personal level, individuals may face issues such as biased algorithms, privacy violations, deepfake creation, social manipulation, job displacement, mass surveillance, invasion of mental privacy, development of autonomous weapons, unfair resource distribution, and addictive user interfaces.
NDP MP Brian Masse told Champoux at the meeting, "We have to deal with the reality that the people who are going to be sitting where you're sitting, they're not just going to trust speculation on potential amendments, and even if they are done with the most genuine interest, they could run into technical legal problems we may not foresee."
Hence, robust legal frameworks are indispensable in ensuring that ethical principles are not merely proposed, but are translated into tangible actions, promoting authentic ethical practices within the industry.
Concerns about AI systems often center around the risk of bias within algorithms, potentially leading to discriminatory decisions. The code of conduct, signed by industry stakeholders, commits to evaluating and rectifying discriminatory impacts at various stages of system development and deployment.
However, citizens remain skeptical about the effective application of Bill C-27. This skepticism arises from past experiences, such as the underutilization of the Emergencies Act, enacted in 1988, which was intended to ensure safety and security during national emergencies. To date, it has been employed only once, in response to the Canada convoy protest in 2022.
In the ever-evolving landscape of AI in Canada, Bill C-27 stands as a significant milestone, symbolizing a collaborative endeavour for transparent and ethical AI development. The voluntary commitments made by industry leaders like BlackBerry, OpenText, and Telus demonstrate a shared dedication to fostering responsible AI practices.
Despite this progress, concerns loom large, notably the issue of ‘ethical laundering,’ where ethical guidelines lack legal significance, potentially undermining trust. Individuals continue to grapple with the ramifications of biased algorithms and privacy breaches. The House of Commons’ reservations about rushed consultations underscore the need for meticulous, inclusive decision-making processes.
The efforts of the Quebec government and researchers like Yoshua Bengio signal a collective resolve to tackle these challenges. As the narrative of AI in Canada unfolds, the establishment of robust legal frameworks remains crucial, ensuring that ethical principles are not merely aspirational but become tangible foundations guiding Canada's AI journey.