The Australian government has released its 8 principles for the ethical use of AI – but killer robots don’t rate an explicit mention

Building one of these is probably off the table, though. Photo: Paramount Pictures
  • The Australian government has released its ethics framework for businesses and organisations that design, develop, integrate or use artificial intelligence.
  • It consists of 8 voluntary principles which set out basic standards for AI implementation.
  • It comes amid concerns from some that increasingly advanced AI will be disruptive – and not in a good way.

When you’re considering the institutions standing between the world and some kind of robot apocalypse, the Australian government is unlikely to make anyone’s list.

And yet the government wants to be part of that conversation regardless. The minister for industry, science and technology, Karen Andrews, today announced the Australian government’s official AI ethics framework – a voluntary set of principles for businesses and organisations when designing, developing, integrating or using artificial intelligence.

“The Morrison Government is determined to create an environment where AI helps the economy and everyday Australians to thrive. The eight AI ethics principles are just one part of this vision,” Andrews said in a press release provided to Business Insider Australia.

“We need to make sure we’re working with the business community as AI becomes more prevalent and these principles encourage organisations to strive for the best outcomes for Australia and to practice the highest standards of ethical business.”

You can read about the principles in greater detail over at the department’s website, but here are the big points – straight from the horse’s mouth:

  • Human, social and environmental wellbeing. Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
  • Human-centred values. Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness. Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security. Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety. Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability. There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
  • Contestability. When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
  • Accountability. Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

As you can see, the development and maintenance of killer robots doesn’t rate an explicit mention, though you could charitably read a prohibition on them into point number one, given a murderous android would be unlikely to respect human wellbeing.

Companies including NAB, Commonwealth Bank, Telstra, Microsoft and Flamingo AI have signed up to test the principles, to make sure they deliver practical benefits rather than remaining abstract ideals.

In a statement, NAB chief data officer Glenda Crisp said the bank was keen to trial the ethics principles.

“We hope to make a meaningful contribution to the discussion, to learn more about how we can leverage AI in an ethical way in order to help deliver new and improved experiences for our customers.

“Collaborating with Government and across industry drives diversity of thinking which is vital in developing new ways of working and implementing new technologies safely.”

Businesses in the AI space consider some kind of ethical framework crucial.

It goes without saying there is a great deal of unease in the community about the potential of AI – especially as it pertains to the future of work. “The Terminator” might provide us a lurid vision of AI gone wrong, but for many people the greater concern is that a robot might put them out of a job.

These concerns are certainly valid. A 2015 study by the Committee for Economic Development of Australia (CEDA) found that more than five million Australian jobs were at risk of disappearing over the following 10 to 15 years because of technological advancement and automation.

Advancements in AI are, of course, key to these potential labour market changes.

For businesses working in the artificial intelligence space, an ethical framework is part of allaying those concerns. Karl Redenbach, co-founder and CEO of LiveTiles – an intelligent workplace software company – told Business Insider Australia that there is a hunger in the community for the sort of solutions provided by AI, but that having the right ethical standards in place is “absolutely pivotal” to realising that potential.

“Conversations around the correct and effective implementation of artificial intelligence in this country is the first sign that our tech industry is making strong progress in the right direction,” Redenbach said.

“The success of this trial will come down to engendering trust in AI solutions and showcasing the disruptive potential and positive implications that they could have for people’s lives.”