U.S. President Donald Trump seems determined to destroy America’s economy — and ours — with his erratic tariffs, along with ending decades-long fruitful relationships with allies.
But there is a silver lining amid the mayhem: Canada can take advantage of the MAGA (Move Away, Go Abroad) brain gain to relocate AI north of the border as things go south on the other side.
The time is ripe for a comprehensive Made in Canada AI policy.
Experts are looking to relocate from the U.S.; global warming calls for greener AI; and we can learn from the European Union's digital regulation.
This policy should combine forward-thinking industrial strategy across the AI value chain with meaningful regulation to protect people against AI’s downsides.
Canada is home to fundamental (mostly publicly funded) breakthroughs in AI, but many top-tier researchers leave for the U.S. due to more attractive opportunities.
However, recent U.S. funding cuts and repression of non-citizens present a unique opportunity to repatriate and attract sought-after talent in research and industry roles.
Fast-tracked visa applications and earmarked funding in new and existing research institutes would bolster our advantage.
Canada should also leverage cooler temperatures and clean energy to innovate upstream in the supply chain of AI.
Greener data centres in northern locations would be ideal for training models: training tolerates the latency of remote sites, whereas running models demands the faster responses of facilities closer to users.
Such projects should centre the sovereignty of Indigenous stewards of the land. Learning from previous proposals like the ill-fated Wonder Valley situated on Treaty 8 land and the traditional territory of the Sturgeon Lake Cree Nation, these data centres should involve Indigenous stakeholders at the earliest stages of conception.
Only through real partnerships that guarantee the ability to meaningfully shape and reap benefits from such infrastructure can we move beyond the extractive logics of past projects.
Sustainable data centres should involve local communities and be responsive to and responsible for externalities such as the degradation of the quality of water used to cool and power data centres.
Coupled with protective data regulation, hosted-in-Canada AI systems running on renewable energy are attractive alternatives to systems running on fossil fuels in jurisdictions with a track record of privacy-infringing state interference.
As for regulation, it's about protecting people on the ground from AI-driven harms such as discrimination and propaganda. A major problem with previously proposed AI regulation is that in-house risk assessments essentially let companies grade their own homework.
To be sure, imposing static, detailed rules is challenging in a nascent, fast-moving and eclectic space like AI. But there are ways around this issue. Independent experts can assess compliance for general purpose models powering many applications such as generative AI chatbots like ChatGPT, and for applications deployed in high-stakes contexts such as health care, education, legal services and national security.
People should also have the ability to know and understand which AI systems are involved in which aspects of their lives. Public databases listing AI deployments and legible explanations for individual decisions enable affected people to contest unfair outcomes.
A private right of action should empower anyone affected to easily lodge a complaint and get prompt resolution.
When it comes to determining the substance of obligations, Canada should avoid ‘kicking the regulatory can’ down the road to technical standards. Instead, politically accountable legislators should make explicit substantive choices and subject a fully fleshed-out bill to democratic debate to get the benefit of diverse viewpoints. Though AI is a pressing matter, it is as important to get it right as it is to do it fast.
Finally, independent enforcement is key. This was a major issue with the first iteration of proposed AI regulation in Canada as the department in charge of innovation was also the ultimate arbiter of compliance.
Innovation and regulation should be communicating vessels insofar as they are two facets of AI policy, but a structural separation between the two functions is necessary if either is going to have legitimacy.
On the international side, Canada has an enviable track record of championing digital rights, and an even longer tradition of pioneering human rights.
Meaningful, rights-respecting AI regulation at home would bolster our stature in the global governance of AI, including at the G7 which we are hosting this year.
And, vice versa, our international commitments to protect and respect human rights should inform domestic AI regulation as a matter of consistency.
Canada has all the makings of a global AI leader.
It has an enviable quality of life to attract AI talent, geographic conditions to pioneer a greener supply chain, and a human rights tradition conducive to world-class regulation.
So, elbows up!