Ryan speaks with CTV News about AI regulation in Canada
This is the question that Big Tech leaders gathered in Washington to discuss earlier this month. The delegation, which included Elon Musk, Sundar Pichai, Mark Zuckerberg and Sam Altman, met with US senators behind closed doors for an “AI safety forum”. As the pace of AI advancement and adoption accelerates, the US government is holding a series of meetings with Silicon Valley executives, researchers and labour representatives. US lawmakers’ objective is to manage the risks and mitigate the dangers of AI. But the US is still far from creating a regulatory framework.
There is little consensus on what potential legislation might look like, and striking a balance between creating necessary safeguards and supporting innovation will not be straightforward. US Senate Majority Leader Chuck Schumer said that while regulations on artificial intelligence are needed, they should not be rushed: “If you go too fast, you can ruin things”. Schumer contrasted the Senate’s approach with the EU’s, which in his view has moved too quickly. The EU AI Act is the world’s first comprehensive AI law: the European Parliament passed the Act in June, and it is expected to become law by the end of 2023. The Act has been criticised as overreaching: more than 150 executives from European companies including Airbus, Siemens and Heineken have signed an open letter urging the EU to reconsider the regulation, which they claim will “jeopardise Europe’s competitiveness and technological sovereignty”.
CTV News spoke with Enzai founder Ryan Donnelly to discuss the “AI safety forum” in Washington and the future regulation of AI technologies. “[The AI safety forum] is a start, but there is a long way to go in the conversation around regulating this incredible technology”, explains Donnelly. Much of the focus of the discussions in Washington has been on frontier AI, defined by OpenAI as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety”. But while these existential threats are not imminent, other harms already are: deepfakes, for example, are an immediate concern.
Donnelly proposes steps that companies can take today to improve AI governance and manage present risks: “Putting a quality management system in place around how you build and deploy these technologies, keeping detailed technical documentation around how they’re prepared, and conducting the right risk assessments” are low-hanging fruit. The key, according to Donnelly, is ensuring that AI is “used for good” and that people trust it.
Build and deploy AI with confidence
Enzai's AI governance platform allows you to build and deploy AI with confidence.
Contact us to begin your AI governance journey.