Guardrails for High-risk AI

Summary

Learn more about the need for an Australian Artificial Intelligence Act to better protect human rights

The Australian Human Rights Commission (Commission) has provided a submission to the Proposals Paper on Introducing Mandatory Guardrails for AI in High-risk Settings (Proposals Paper).

Human rights

A risk-based approach to regulating artificial intelligence (AI) that is grounded in human rights is essential.

The Commission welcomes the inclusion of human rights as a key factor in determining whether AI is ‘high-risk’. Given that there is currently no human rights act in Australia, any classification approach will need to refer expressly to both domestic human rights law and international human rights obligations.

A human rights-based approach to AI classification must be expansive to ensure that the full spectrum of rights is considered. This principles-based approach reflects the natural evolution of human rights over time.

It also ensures interoperability with other jurisdictions that are incorporating human rights into their AI legislative responses.

Unacceptable risk

Some uses of AI pose an unacceptable risk to human rights and should be prohibited. For example, the European Union’s AI Act specifically prohibits AI applications that present an ‘unacceptable risk’.

One key example that currently poses an unacceptable risk is the use of AI in facial recognition technologies (FRT). Until specific legislation regulating FRT is introduced, a moratorium is still needed on its use in decision-making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights.

The Commission provides in-principle support for the Human Technology Institute’s Model Law on FRT to address this issue.

AI Act for Australia 

Each of the proposed options for introducing mandatory guardrails comes with benefits and limitations. Considering the need for a consistent approach to AI governance, the Commission supports option three of the Proposals Paper, which would create an Australian AI Act.

However, the introduction of an Australian AI Act is neither a panacea nor a timely solution to the most pressing issues posed by AI. There remains a need to urgently address specific examples of harm that have arisen due to new and emerging AI tools.

The introduction of an AI Act is only part of the solution: the Federal Government must continue to address the most urgent harmful impacts of AI through ongoing law reform efforts.

Recommendations 

The Commission makes eight recommendations in response to the Proposals Paper. The submission should be read in full to understand and contextualise these recommendations.