2025 Election: Policy Spotlight on Frontier Technologies Shaping Australia's Future

As we approach the 2025 Australian Federal election, our experts spotlight some of the pressing policy issues surrounding the impact of frontier technologies on Australian society. 

Clas Webber
They Are Coming - and We Are Not Ready

Until recently, artificial intelligence was dismissed by many as impossible or mere science fiction—something for future generations to worry about. But how quickly things have changed. The rise of machines with human-level and even superhuman cognitive abilities now seems inevitable. The AI revolution has the potential to be as transformative as the Industrial Revolution, arriving within years, not decades. Yet, we remain alarmingly unprepared, and policymakers have done little to change that.

One of the most pressing concerns is safety. If AI surpasses human intelligence, will we still control it? The real danger may not be AI turning against us, but rather its indifference to our survival. Consider this analogy: when we build a highway, we do not deliberately harm ants, but we also do not factor their well-being into our plans. If AI advances without safeguards, humanity could find itself in a similarly precarious position.

This is not just a risk-management issue—AI presents an existential threat. If we lose control, humanity itself could be at stake. Some, like Google’s Larry Page, see AI as the next step in evolution. But without evidence that AI possesses real consciousness, we could be extinguishing the only known form of sentience in the universe. At present, we lack the tools to solve this problem—and worse, we are unsure whether a solution even exists.

AI also raises deep ethical concerns. Our best scientific theories of consciousness suggest that machines could in principle develop sentience, meaning they could experience pleasure and suffering. If true, AI wouldn’t just be a tool but an entity deserving moral consideration, like humans and animals. Ignoring this could create a new form of exploitation—one where intelligent beings suffer without recognition of their rights. However, our understanding of consciousness remains incomplete. These theories could be wrong. But if they are right, dismissing AI’s moral status could lead to profound ethical failures.

Beyond existential and moral risks, AI is poised to reshape economies and societies. AI will likely accelerate general technological progress to unprecedented levels. Some economists think that the near-term effects of AI on the economy will be fairly moderate. But AI seems different from previous technological advances—it threatens cognitive jobs once thought uniquely human. If AI displaces large portions of the workforce, mass unemployment, widening inequality, and wealth concentration could follow. At present, there’s little indication that the immense productivity gains from AI will be shared equitably within or between nations.

But AI’s impact goes beyond economics. What happens to human purpose when machines perform all meaningful work? For many, jobs are not just a means of survival but a source of meaning and fulfillment. If AI renders large swaths of traditional work obsolete, we may face a crisis of purpose, with profound implications for mental health and social stability.

The AI revolution isn’t a distant possibility—it is already underway. Yet, we remain disturbingly unprepared for its consequences. If we fail to address these challenges now, we risk being swept away by forces we no longer control.

Amanda Davies 
Harnessing AI to Reform Australia’s Public Sector

The federal election has brought into public focus the size and cost of the Australian public service. There are more than 365,000 people employed in the Commonwealth public sector, and a further 1,950,000 employed in the public sector at state and local government levels across Australia (ABS, 2024). Critics have noted the rapid growth of Australia’s public sector over the three years to June 2024, and the resulting cost to Australian taxpayers. Since June 2022, wage costs have risen by 15%, equating to an additional $31.1 billion.

Australia is not alone in facing escalating public sector costs. New Zealand, Canada, the United States, and the United Kingdom, among others, are grappling with the same challenge. In the United States, the administration has adopted a multi-faceted cost-cutting strategy, including commercial best practices for service delivery and process improvements. Critically, much of this strategy rests on leveraging technology such as AI to automate processes and reduce service duplication.

The UK Prime Minister recently announced a broadly similar multi-faceted approach to reforming the public sector, with AI and automation positioned as central to reforms designed to improve service efficiency while substantially reducing costs. The Prime Minister stated, “No person’s substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard.”

Is the Workforce Ready?

Workforce readiness and capability have been identified as major challenges for the integration of AI into the public sector in Australia. Critics argue that extensive planning is needed to ensure that the workforce can be supported in the transition to greater integration of AI technologies. 

Some argue that reform processes incorporating technological efficiencies would be too disruptive to core service delivery. Workforce readiness for AI is a difficult challenge, one that affects businesses, organisations, and governments alike. Effective leadership is a core enabler in overcoming this barrier. Indeed, McKinsey & Co found that the biggest barrier to scaling AI adoption across companies is not the employees, but the leaders “who are not steering fast enough”.

Given the escalating costs of the public sector, and the reforms underway in the US, UK and many other advanced economies, the incorporation of AI and automation across Australia’s public sector must be accelerated. For this, it is important to develop clear terminology that is contextually relevant. It is also critical that leaders and policymakers are informed about the AI tools that are available, how they can be deployed, and how the workforce can be supported through the transition.

Developing technology-based solutions for service delivery is only one part of the problem. The other, perhaps more substantive, issue is identifying how leaders in the public sector can be supported to make innovative and contextually appropriate decisions, and to support their staff as the nature of jobs changes.

By addressing these challenges head-on, Australia can ensure that its public sector remains efficient, cost-effective, and ready to meet the demands of the future.

Maggie Jiang
Regulation of AI and Data: The Ethical Challenges in Communication

Artificial intelligence (AI) is rapidly transforming communication, but with this progress comes a growing responsibility to ensure it’s used ethically. Misinformation, deepfakes, and algorithmic bias are just some of the challenges that AI presents, and governments need to step in with clear policies to address these concerns. As AI becomes an integral part of our digital lives, it’s crucial that we get the regulations right to ensure transparency, accountability, and data privacy.

Take deepfakes, for instance. They’ve been in the spotlight for their potential to manipulate videos and audio in ways that are nearly indistinguishable from reality. A deepfake video of a public figure could quickly spread on social media, causing damage to reputations or even influencing elections.

Then there’s misinformation. AI-powered tools can generate highly convincing fake news articles and spread them across the internet faster than humans ever could. What’s more troubling is how these AI systems learn from human behaviour, often amplifying content that engages people’s emotions, whether it’s true or not. Governments need to regulate these platforms to ensure they take responsibility for the content they promote and prevent the spread of harmful, false information.

Algorithmic bias is another major issue. AI systems are only as good as the data they’re trained on, and if that data contains biases—whether in terms of gender, race, or socioeconomic status—the system will inevitably reflect and perpetuate those biases. This can lead to unfair treatment of individuals, particularly in fields like hiring, lending, or even criminal justice. A regulation that mandates regular audits of AI algorithms to assess bias and fairness could help reduce these risks.
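To make the idea of a bias audit concrete, here is a minimal sketch of one common check an auditor might run: comparing a system’s positive-outcome rates across demographic groups (“demographic parity”). The data, groups, and the 0.8 threshold (borrowed from the US “four-fifths” rule of thumb) are illustrative assumptions, not part of any existing Australian regulation:

```python
# Minimal sketch of one fairness check an AI audit might include:
# demographic parity, i.e. comparing positive-outcome rates by group.
# All data below is invented for illustration.

def selection_rates(decisions):
    """Positive-outcome rate per group.

    decisions: list of (group, outcome) pairs, where outcome is
    1 (e.g. loan approved) or 0 (rejected).
    """
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate.

    A common rule of thumb (the 'four-fifths rule') treats ratios
    below 0.8 as a signal of potential disparate impact.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical lending decisions: (group, approved?)
decisions = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(decisions)       # {"A": 0.75, "B": 0.25}
ratio = disparate_impact_ratio(rates)    # 0.25 / 0.75, well below 0.8
print(rates, ratio)
```

A real audit would go well beyond this single metric, examining error rates per group, data provenance, and decision explanations, but even this simple check shows that auditing is a tractable, measurable exercise rather than an abstract aspiration.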

What can governments do to tackle these problems? For one, they can introduce policies that force AI developers to disclose how their systems work. Transparency is key — people need to understand how AI algorithms are making decisions that affect them. Regulators could also require AI companies to conduct thorough impact assessments of their technologies to understand their societal effects, particularly when it comes to misinformation and bias.

As AI collects vast amounts of personal data to operate effectively, individuals’ privacy rights must be safeguarded. Governments could enforce stricter regulations on how personal data is stored, shared, and used by AI companies.

Ultimately, it’s essential that governments take proactive steps to ensure that AI is used ethically in communication—balancing innovation with the protection of individuals and society. Only then can AI truly serve the public good.

Jason Weismueller and Paul Harrigan
Misinformation Isn’t Going Away—So What Can We Do?

Misinformation is a serious challenge that threatens democracy, public health, and social cohesion. Unsurprisingly, data shows that 75% of Australians are concerned about its impact. Compounding the issue, the rapid advancement of AI is making it easier than ever to create and spread false information. However, some social media platforms appear to be scaling back their efforts or, at the very least, shifting their approach to combating it. For example, Meta recently announced plans to replace its fact-checking program with a community-based model, similar to X’s (formerly Twitter) Community Notes. So, what are effective strategies for tackling misinformation, and who is responsible?

Debunking: Fixing misinformation after the fact 

A common approach has been debunking, where fact-checking corrects false claims after they have spread. While it has been shown to be effective in many circumstances, it is reactive and often too slow—false information tends to go viral, accumulating likes, shares, and engagement, while corrections receive far less attention. By the time a fact-check is applied, the damage is often already done. In Australia, fact-checking suffered a setback when ABC and RMIT announced the end of their RMIT ABC Fact Check partnership.

Prebunking: Inoculating against misinformation before it spreads

Prebunking offers a proactive solution. Based on inoculation theory, it exposes people to weakened versions of misinformation or common manipulation tactics, helping them build psychological resistance—similar to how vaccines work against viruses. Studies suggest prebunking reduces susceptibility to misinformation, making it a promising tool. However, it has limitations as well. For example, we know little about how long its effects last, how often people need exposure, and who should be prioritised for inoculation.

An overlooked part of the solution: Education

One underexplored avenue is education. Children and teenagers are accessing social media at younger ages, often without the critical thinking skills needed to navigate online spaces responsibly. Recognising this, the Australian government has invested in media literacy resources to equip students with the skills to critically assess sources, recognise manipulation tactics, and make informed decisions. For example, the eSmart Media Literacy Lab is a free, gamified platform available to all Australian secondary schools, designed to empower students to navigate the media landscape effectively. While research on its effectiveness is still limited, education must surely underpin longer-term efforts to combat misinformation.

A multi-pronged approach is needed

There is no silver bullet—we need a combination of these approaches, with a range of key stakeholders taking responsibility:
Policymakers should push for greater platform accountability and support efforts to regulate misinformation while ensuring access to credible information.
Social media companies must integrate prebunking and debunking strategies into their platforms.
Education providers should embed digital and media literacy into curricula, equipping individuals—especially those new to the digital space—with the skills to critically evaluate information.
Individuals should fact-check sources, think critically, and slow down before sharing content.

Misinformation is an evolving challenge, and with the 2025 Australian federal election approaching, it is high season for the spread of political misinformation. But through a mix of policy, education, and platform interventions, we can build a more resilient society. How can you strengthen your resistance? It begins with questioning sources, recognising manipulation tactics, and developing digital and media literacy as an ongoing skill—not just a one-time effort.

Katarina Damjanov
Telecommunication Security: Australian Internet Networks 

Achieving national coverage of Australia's internet infrastructure remains a complex challenge due to the country’s vast landmass and extreme geography. While the National Broadband Network (NBN) has thus far connected over 12 million homes and businesses nationwide, its fibre-based connections still fall short of fully reaching rural, regional, and remote areas. The emergence of satellite-based broadband has alleviated some of these connectivity gaps. However, it has also heightened concerns about the security of national internet networks, particularly amid a shifting geopolitical landscape.

In recent years, efforts to deploy privately operated constellations of internet satellites in low Earth orbit (LEO) have accelerated significantly – systems such as SpaceX’s Starlink and Eutelsat’s OneWeb are already operational, and those by Amazon and Samsung are in progress. These satellites offer high-speed broadband connectivity and have the potential to mitigate digital divides and enhance responses to natural disasters and crisis communication.

In Australia, the major satellite internet provider Starlink occupies a critical part of the country’s telecommunications network, providing internet access to over 200,000 consumers. As ABC News reports, since Starlink’s arrival in 2021, state and federal government departments have spent over $50 million on its hardware and services, including integrating them into police and fire emergency response systems and planning to install them on 50 naval vessels. The Western Australian government recently announced that it is upgrading 122 remote police stations and 550 vehicles with Starlink equipment.

However, a situation in which Australia’s critical services increasingly depend upon private offshore internet providers highlights the problems of relying on foreign-controlled telecommunications infrastructure. The dominance of Starlink in the Australian market looms as a particular matter of concern. Owned by tech billionaire Elon Musk, Starlink has become a subject of controversy, including over its implications for national security. For example, in 2023 the company reportedly restricted internet access in specific areas of Ukraine, and more recently the possibility was raised that it could shut off its services entirely, fuelling global apprehensions about its dependability and neutrality in high-stakes situations.
 
In the face of a rapidly evolving geopolitical landscape, accelerating technological advancement and the increasing centrality of the internet, Australia needs to secure its internet networks. This, in the first instance, involves further strengthening the NBN and its rollout, and diversifying satellite internet providers beyond Starlink to reduce dependence on a single company. Additionally, in the true spirit of Australian invention, investing in home-grown innovation and solutions for stable, affordable and high-speed internet would ensure resilience, security and equity of access across the nation.