Discussion Paper: Human Rights and Technology
Ethical AI
New technologies are changing our lives profoundly for better or for worse.
After completing the first phase of the Commission's public consultations, we heard how Australian innovators build our economy, revolutionise how health care and other services are delivered, and use data to make smarter decisions.
But we also saw how artificial intelligence (AI) and other new technologies can threaten our human rights. People repeatedly told us, "I’m starting to realise that my personal information can be used against me".
For example, AI is being used to make decisions that unfairly disadvantage people based on their race, age, gender or other characteristic. This problem arises in high-stakes decision-making, such as social security, policing and home loans.
These risks affect all of us, but not equally. We saw how new technologies are often ‘beta tested’ on vulnerable or disadvantaged members of our community.
Technology and human rights
Australia must innovate in accordance with our liberal democratic values.
The strongest community response related to AI. The Commission proposed three key goals: AI should be
- used in ways that comply with human rights law
- used in ways that minimise harm
- accountable for how it is used.
The project
This Discussion Paper proposes modernising our regulatory approach to AI. It finds that we need to apply the foundational principles of our democracy, such as accountability and the rule of law, more effectively to the development and use of AI.
You can read more about this project in the associated reports: