
18 December 2018

ED HUSIC MP
SHADOW MINISTER FOR HUMAN SERVICES
SHADOW MINISTER FOR THE DIGITAL ECONOMY
MEMBER FOR CHIFLEY
ASIA PACIFIC
AI FOR SOCIAL GOOD SUMMIT
BANGKOK
13 DECEMBER 2018
***CHECK AGAINST DELIVERY***

"The Americans have need of the telephone, but we do not. We have plenty of messenger boys."
The bold 1876 declaration made by William Preece of the British Post Office.
I know I'm walking down a well-worn path here: dragging out a quote from the days of yore and then shining the light of modern experience through it to get a reaction.
The quote has some utility, although I'm not trying to liken a clunky, Edison-inspired telephone handset to complex artificial intelligence.
But for where we are right now, it's not hard to imagine some might be wondering about the merit of rushing to embrace the latest tech fad.
Why rush? We've got people here to do the same thing quite easily, they might think.
The feeling about AI being a fad is reasonable.
You don't have to agree; just appreciate why some think that way.
Because at the moment it feels like we're in the midst of an international AI bidding war.
It seems like every few months another country announces another mega AI investment.
Just the other week Germany announced a €3 billion investment in AI research and development.
Their neighbours the French have already flagged a €1.5 billion commitment, and the UK around £1 billion.
Within our own neighbourhood, China is positioning itself to become an AI leader by 2030, backing this ambition with a reported $7 billion investment and a view to a $1 trillion industry benefit emerging as a result.
They're not alone in our region, with sizeable commitments made by South Korea and Singapore. Across the Pacific, the Canadians are setting aside C$125 million.
In our last federal budget, the Australian government announced it was allocating roughly AU$30 million over four years to support the development of AI in Australia.
This, apparently, is to be used to create a Technology Roadmap (on which the government has refused opposition requests for a briefing), a Standards Framework, and a national AI Ethics Framework.
Within that modest allocation it will also support Cooperative Research Centre projects, PhD scholarships, and other initiatives to increase the supply of AI talent in Australia.
While AU$30 million is a start, there's little doubt we need governments to get serious about the scale of investment required to ensure Australia reaps the most it can from AI application.
Because I'm concerned that Australia is being outpaced in a global AI race.
For example, our Singaporean neighbours are devoting around five times the amount we announced this year to AI investment.
This is not purely a competition on financial spend; it's a much broader opportunity, one that looks different for every country.
For example, Australia is unique because of its vast size, demographic mix, culture and values, with a population spread unevenly from coast to coast.
The opportunities for AI in Australia will need to be matched to these and many other individual factors, something surely best done with significant Australian input.
It's not just dollars either: governments can be a huge source of fuel for AI via the massive amounts of data they collect.
Despite suggestions it would release an open data plan for the nation this year, the Australian government shelved this until 2019, which undermines the ability to gain quick and robust advantage from AI investments.
I make the point that we should rightfully expect our government to not just devote dollars to this field,
but to possess a passionate determination to ensure Australia is in the best possible position to extract the most from AI investments, for the ultimate benefit of our economy and community.
Simply put, at a federal level, what we can observe is:
Missing drive;
Misfired application;
Misjudged impact.
On the issue of missing drive, it's incredibly important that there are ministerial champions for the power of AI to trigger national economic and social benefit, if applied with a clear assessment of, and preparedness for, its impact.
On the issue of application, planning for automated decision making is crucial.
In Australia, the application of automated decision making in government functions, for example the calculation of welfare debts, has misfired badly.
There are many prepared to argue this massively dented public confidence in the government's ability to oversee automated decision making.
And this is before you take into account that the government has committed itself to a seven-year plan to automate welfare decision making.
On misjudged impact, I would make the case that, given the lack of urgency evident within the federal government around the future of work, it must believe AI will have little impact on labour markets.
It prompts the question: is the government operating under an assumption that little will be required across the economy to re-skill and re-design investment in our human capital?
So, how does this translate in real terms?
It's already been observed that Australia's got to do a much better job of improving the operation of a range of AI enablers, including:
Digital absorption, challenging us as to how we apply and use technology across government and business;
Strengthening our innovation foundations via overall R&D investment or business model creation; and,
Enriching human capital, measured by improved PISA scores and an uplift in STEM graduate numbers.
These are flickering warning lights demanding action from government.
Because I would argue that the lukewarm interest in AI application from government is likely to influence thinking on this within our business community.
This might be why, when you compare the level of tech and digital investment support devoted by business across a range of countries, Australia is also lagging.
At the point our businesses realise they must catch up, the big concern is that a rush to onboard new tech will be hugely disruptive to the jobs of average Australians.
The opportunity cost of this in economic terms, as well as its social impact, means we simply can't afford to be ill-prepared.
And roping this all together: it's hard to imagine applying AI to attack social problems and challenges if Australian governments and businesses under-invest in AI application.
Avoiding the investment in this technology is not an option.
Again, scanning the committed AI investments of a host of different countries against the backdrop of global interconnectedness means that, as a nation, we can't afford to be left behind.
Countries have calculated the power that can be injected into their economies via shrewd AI investment. Australia ignores this at its peril.
Although the size of these investments is serious, it's not the only thing that counts.
It's probably unrealistic to expect that all countries can spend up big to match the AI investment of others, but it is definitely worth thinking about how regional cooperation could magnify benefit across borders and within nations.
Breadth of vision matters.
We shouldn't confine the value of AI to purely commercial terms.
There are social benefits to be gained, especially through the use of data to power AI in addressing social challenges for wider public benefit.
New analysis adds weight to this thinking.
The McKinsey and Co report Applying artificial intelligence for good, released in the lead-up to this conference, highlights the possibilities of using AI across a range of social measures. Using AI models to:
achieve better healthcare and tackle hunger;
improve student achievement and the productivity of teaching;
drive better care for the environment; or,
strengthen our responses to natural disasters and emergencies.
Compellingly, this work demonstrated how AI can potentially have a beneficial impact across every single UN Sustainable Development Goal.
What stood out to me in the report is that, if we want to witness serious social benefit through the use of AI, we must address three core challenges facing non-government organisations and social enterprises:
Data access: while governments and businesses collect huge amounts of data at relatively low cost compared to previous data collection techniques, we're not developing open data regimes fast enough to feed the work of NGOs and the social sector.

Talent gaps: when it comes to AI, there's a global skills shortage, a problem felt even more keenly in the social sector. How will NGOs access their own data scientists or software developers with AI experience to churn through data that will help guide decision making, better target investment, or advocate for action by government?

Last-mile implementation: bringing in government or business talent to help set up NGOs to use AI for social good is one thing, but once they leave, will the strategies and talent be there to drive sustainable AI use within those organisations?
The McKinsey report, along with many others, notes what I think will become an inescapable demand: the need for transparency in the application of AI.
As the report rightly points out, decisions made by complex AI models will have to become more readily explainable.
I imagine this will be something regulators increasingly focus on, as witnessed in my own country this week with the release of major interim reviews into the impact of digital platforms on media markets and journalism.
These arent insurmountable challenges.
And it's good we've had this analysis to provoke thinking and, more importantly, prompt a response.
My side of politics in Australia has been thinking deeply about the way technology will transform economies and communities.
Balancing the enthusiasm generated by the promise and power of this new tech with a clear assessment of risk and opportunity, as well as the need to prepare.
We went on the record over a year ago urging our government to work with others internationally on building the decision making frameworks that will guide the application of AI across borders.
And in July we announced a commitment to invest in the establishment of a National Centre of AI Excellence, the core components of which would include:
the emergence of an AI Lab, whose mission will be to champion the development of ethical AI frameworks; and
the nation's first AI accelerator for industry, coming up with ways to generate new firms and strengthen existing ones.
As much as the Centre will advance the generation of new jobs, it will also help guide national thinking about managing the impact of technology on our existing workforce, where automation has been estimated to potentially affect 3.5 million Australian jobs by 2030.
The Centre will also work across levels of government to think about the evolution of AI and plan for its use to improve policy and decision making; State and Territory governments will be invited to support and work with the Centre.
Crucially, for this audience, it will think about the way we can work with ASEAN neighbours to best apply AI while managing the impacts felt along the way.
This is a vehicle I had in mind when I mentioned the need for us to think creatively about how we work across borders to make sure we collectively get the best out of our respective AI investments.
It's also the ideal pathway to help address some of the challenges I referenced earlier.
In particular, the national centre we have committed to would be ideally placed to drive social sector capability by embracing some of the ideas contained in the McKinsey report, notably:
Creating avenues for the establishment of AI residencies within companies to help enhance NGOs' AI skills base;
Opening up access to corporate AI research and know-how, applied to overcoming social challenges and raising social-benefit capability; and
Building awareness about AI application through relatable case studies.
I'd like to end my contribution by making these final points, referencing some exceptionally valuable work that is hugely relevant to this forum.
Yes, my last words today are the thoughtful and considered words of someone else.
Made by Humans is a book released this year by author Ellen Broad, who has a long-standing connection to the tech arena, having considered over many years the intersection of technology and policy.
The book is devoted to clarifying thinking around AI, challenging the notion that having access to more data equates to more informed decisions. As she observes:
"data doesn't always improve our decisions. Data is messy and complicated. It can be incomplete, biased, fraudulent. It can be out of date; it can be a record of the past without being a prophecy of the future."
Through her book, she makes a series of arguments that demand consideration:
That we are currently locked in a fundamental contest between a softer-touch focus centred on creating ethical frameworks around AI and the need for government to have a clear view on regulatory responses that are enforceable and actionable when lines are crossed;

The value of asking the right questions about the application of AI and regulation: instead of asking "should AI be regulated?", we should ask "should the public be able to understand and challenge automated decisions made about them?";

That governments should not build automated decision making processes leveraging AI to such an extent that they effectively brick-wall themselves off from contact with the public, with average citizens unable to talk to another person to challenge a decision made about them;

Recognising that governments have traditionally combined technology and data in ways that disproportionately impact the most vulnerable and those least able to challenge decisions, notably the poor, the sick and minorities; and

Finally, that the standards of ethical AI must revolve around the transparency of methodologies, data and assumptions; the diligence and care taken in resolving issues and errors before deployment; and the establishment of meaningful ways for people to engage with and challenge automated decisions.
In the end, Broad rightly reminds us:
"Everything about it (AI) is still made by humans: the criteria it's built on, the model, the data, the code, the processes for verifying the results of software and how it's to be used for decision making."
Not an insignificant point.
And certainly worth remembering as we consider how we use one of the most profound technologies being deployed across countries at this point in time.
Thanks for inviting me to join you today.
ENDS
Media contact: NATASHA BOLSIN 0476 125 112