AI Regulations and Standards

Meet Regulatory Requirements

Monitor emerging regulations and standards

AI Risks Are Mounting

The explosive growth of AI usage is bringing tangible social and economic benefits for society - including advancing medical care, speeding scientific breakthroughs, and making our transport safer.

But along with these opportunities, dangers and risks have surged: AI-enabled scams, deepfakes, and misinformation, as well as risks to privacy and safety (including national security).

For organisations looking to capitalise on AI technology, the clock is ticking. To keep up with AI regulations, the time to act is now.

Countries around the world are drafting and passing legislation to harness AI technology for good. Contrasting approaches - from the US to the EU, from Brazil to Southeast Asia - will make navigating regulations extremely complex.
Beyond mandatory legislation, firms are increasingly complying with standards and codes of conduct for AI. Though voluntary, these guidelines are becoming industry standard. Businesses that want to build trust and get the most out of their AI can’t afford to be left behind.

Regulations

EU AI Act

The EU AI Act is the world’s first comprehensive AI law. It classifies AI systems into four categories (minimal risk, limited risk, high risk, and unacceptable risk) according to the risk they pose to users in specific applications; the higher the risk, the more closely the system is regulated. The European Parliament adopted its negotiating position on the Act in June 2023, and the final text is expected to be agreed by the end of 2023.

US Executive Order on AI

President Biden’s Executive Order, signed in October 2023, sets out guidelines for expanding the ways in which the US government will use Artificial Intelligence to achieve existing policy goals, while imposing stricter requirements on private-sector use of AI in order to manage potential risks. It tasks US agencies with drafting more detailed regulatory guidance, expected in 2024.

NYC Local Law 144

NYC Local Law 144 regulates the use of AI tools in employment decisions. The law focuses on the results the tool produces (the impact) rather than how those results are achieved (the process). Other legislatures have proposed similar impact-focused laws for tools used in employment decisions, including New York Bill A00567, New Jersey Bill A4909, California Bill AB331, and Massachusetts Bill H.1873 (all proposed).

Colorado SB21-169

Colorado SB21-169 protects consumers in Colorado from insurance practices that unfairly discriminate on the basis of race, colour, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. The law makes insurers accountable for testing their AI systems - including external consumer data and information sources, algorithms, and predictive models - to avoid unfair discrimination against consumers, and for taking corrective action to address any consumer harms that are discovered.

Canada AI and Data Act

The Canadian AI and Data Act is designed to provide a meaningful framework to be completed and brought into effect through detailed regulations. While these regulations are still being drafted, the Canadian government has stated its intent to build them on existing best practices.

UK’s Pro-Innovation Approach

The UK’s framework aims to support innovation while ensuring that risks are identified and addressed. It is a proportionate, pro-innovation regulatory framework that focuses on the context in which AI is deployed.

Standards

NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF) offers a resource for designing, developing, deploying, or using AI systems to manage the risks of AI and promote trustworthy and responsible development and use of AI systems. Unlike many emerging regulations, the framework is voluntary and use-case agnostic.

World Ethical Data Foundation AI Governance Framework

The World Ethical Data Foundation has published an open letter setting out questions and considerations to help ensure that more ethical AI models are released. The format is an open online forum where new suggestions and approaches are invited. The questions are organised around the core stages of building AI (training, building, and testing) and the actors involved at each stage, with the aim of reducing silos in the process.

SR 11-7

SR 11-7, published by the Federal Reserve, offers supervisory guidance on firms’ model risk management practices. The guidance applies to all banks regulated by the Fed, and is thus quickly becoming an industry standard.

ISO Framework for AI

The ISO framework for AI risk management (ISO/IEC 23894) was published in February 2023. It offers strategic guidance to businesses of all kinds on managing risks connected to the development and use of AI. A key benefit of the ISO framework is that it can be customised to any organisation and its business context.

Built by Expert Lawyers with Top Regulatory Experience

The Enzai platform is the best way to manage AI governance risk. It is the only solution with a strong legal foundation, ensuring compliance with the nuanced requirements of emerging legislation and industry codes of conduct.
Enzai’s platform provides compliance by design: ready-made policy packs enable organisations to stay on top of and comply with emerging regulations and standards efficiently, and companies can even create their own policies for maximum flexibility.
Audits are made easy with the platform’s custom assessments, providing the increased assurance needed to resolve regulatory inquiries.

Latest in AI Regulations

Join 2,122 others getting no-fuss AI Governance updates every month.

Compliance by Design

Speak to an Expert