In cybersecurity as elsewhere, artificial intelligence presents ‘spectacular’ opportunities. But can it be trusted yet? And who’s thinking about ethics and human rights?
There are “significant challenges” in commercialising artificial intelligence (AI) techniques in cybersecurity, according to Simon Ractliffe, head of cybersecurity at Singtel Optus.
“This capability holds the best opportunity for us, in terms of detection and response, but what we know is that there’s a lot to do in terms of making this detection and response to the point where it’s at industrial strength,” he told the SINET61 cybersecurity innovation conference in Melbourne on Wednesday.
“It has to be truly safe to rely on … but the opportunities are just spectacular.”
Singtel Optus is looking at reducing the time taken from detecting a cybersecurity event to its eventual resolution, as well as reducing the unit cost.
“We need to be able to make good cybersecurity services accessible to small and medium businesses, and consumers, and so we see a great opportunity in that regard,” Ractliffe said.
Australia’s defence scientists are also turning to AI techniques in the military’s increasingly complex networked environment.
The internet is a “best effort” network, Zelinsky said. Malicious actors can slow down network traffic, or even divert it to where it can be monitored. This can happen in real time, and the challenge is detecting it and responding as quickly as possible.
“I think that’s where the AI elements come in,” Zelinsky said.
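Zelinsky didn’t describe a specific detection method, but the idea of flagging suspiciously slowed traffic can be sketched with a simple statistical baseline. The window size, threshold, and round-trip-time figures below are illustrative assumptions, not anything attributed to DSTG:

```python
from statistics import mean, stdev

def detect_latency_anomaly(samples, window=20, threshold=3.0):
    """Flag round-trip-time samples that deviate sharply from the
    recent baseline -- a crude stand-in for spotting deliberately
    slowed or diverted traffic."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (samples[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Steady ~20 ms RTT, then a sudden spike, as if traffic were rerouted.
rtt = [20.0 + 0.1 * (i % 5) for i in range(30)] + [80.0]
print(detect_latency_anomaly(rtt))  # → [30]
```

A real system would learn baselines per route and per time of day; the point here is only that “detect and respond in real time” reduces to comparing live measurements against an expected envelope.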
But one of the challenges of using AI in a protective system, or in the potential offensive systems that Zelinsky hinted the Defence Science and Technology Group (DSTG) is working on, is explainability. Human operators have to understand what the system is recommending, and why.
“At the end of the day, people are sitting on top of systems. They must understand what is happening,” he said.
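No speaker described a concrete mechanism, but one common pattern for making an alert explainable is to report per-feature contributions alongside the overall score, so the operator sees why the system reacted. The feature names and weights here are purely hypothetical:

```python
# Hypothetical feature weights for an alert-scoring model; the names
# and values are illustrative, not from any real system.
WEIGHTS = {
    "failed_logins": 0.5,
    "bytes_exfiltrated_mb": 0.3,
    "off_hours_activity": 0.2,
}

def score_with_explanation(event):
    """Return an alert score plus per-feature contributions,
    ranked so an operator can see what drove the alert."""
    contributions = {f: WEIGHTS[f] * event.get(f, 0.0) for f in WEIGHTS}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total, ranked

score, why = score_with_explanation(
    {"failed_logins": 8, "bytes_exfiltrated_mb": 2, "off_hours_activity": 1}
)
print(f"score={score:.1f}")       # → score=4.8
for feature, contrib in why:
    print(f"  {feature}: {contrib:.1f}")
```

Linear contributions like this are trivially explainable; the open research problem Zelinsky alludes to is getting comparable transparency out of far more capable, non-linear models.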
Confidence in the system is also vital, Ractliffe said.
“Working with government departments, some of their analysts have actually declared that they are less confident about the outcomes from their systems, and want to work back before they work forward,” he said.
“Some of these systems generate more work. They create more questions than answers.”
Zelinsky said AI is about automating problems that we understand.
“The tampering with the social media in the US election that occurred, after the fact you can see how it was done. Of course then you can build a system to try to detect that. But usually with innovation, innovation outpaces such systems,” he said.
DSTG is also trying to understand the optimal sizes for datasets for machine learning. Collecting more and more data is very expensive, so what are the essential datasets, and their minimum sizes, needed to produce an effective machine learning system?
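DSTG’s method wasn’t described, but a standard way to probe minimum dataset size is a learning curve: train on progressively larger subsets and watch where held-out accuracy plateaus. This sketch uses synthetic two-class data and a toy nearest-centroid classifier, both assumptions chosen purely for illustration:

```python
import random

random.seed(0)

def make_point(label):
    """Synthetic two-class data: class centres three units apart."""
    centre = 0.0 if label == 0 else 3.0
    return (random.gauss(centre, 1.0), random.gauss(0.0, 1.0), label)

def centroid_accuracy(train, test):
    """Fit a nearest-centroid classifier on `train`, score it on `test`."""
    cents = {}
    for lbl in (0, 1):
        pts = [(x, y) for x, y, l in train if l == lbl]
        cents[lbl] = (sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts))
    hits = sum(
        min(cents, key=lambda c: (x - cents[c][0]) ** 2
                                 + (y - cents[c][1]) ** 2) == lbl
        for x, y, lbl in test
    )
    return hits / len(test)

data = [make_point(i % 2) for i in range(600)]
test_set, pool = data[:200], data[200:]

# Accuracy as a function of training-set size: the point where the
# curve flattens is roughly the "minimum useful" dataset size.
accuracies = {n: centroid_accuracy(pool[:n], test_set) for n in (10, 50, 100, 400)}
for n, acc in accuracies.items():
    print(f"n={n:4d}  accuracy={acc:.2f}")
```

Once the curve has flattened, collecting further data buys little accuracy, which is exactly the cost trade-off the paragraph above describes.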
Min Livanidis, head of security intelligence and behavioural insights at NBN, cast the net much wider.
“To me, the journey to where we are today really starts in the 18th century. During that time, we had obviously the first industrial revolution, as well as the Enlightenment, and the Enlightenment thinkers really believed that the advancement of the technologies and sciences would naturally lead to the advancement of the human condition,” Livanidis said.
But now we know that technology can be used to harm as well as to help, and security professionals have an “acute understanding and awareness” of that.
“We think about the ways technology can be used for its worst possible end, and a lot of the trepidation for me around AI is based in that historical memory,” she said.
“Industrialisation [was] used to its worst possible ends in the two World Wars, and then obviously the development of atomic weaponry has really told that story quite well, in terms of what is the worst way we can use technology going forward.”
AI is being used to great benefit in medical research, Livanidis said, but it’s also being used to challenge our ideas about the “separation of hard and soft power”, particularly in its use for “undermining democracy”.
“That’s again forcing us to rethink how we define freedom of expression and privacy,” she said.
“That’s why ethics and human rights need to sit at the core of this discussion, and as security professionals we’re really well placed to contribute.”
Adrian Turner, chief executive officer of CSIRO’s Data61, said that we’re on the cusp of a transformative breakthrough. We’re seeing the convergence of multiple disciplines, from materials science to medicine to chemistry.
“As we move to every industry becoming data driven, there’s this increased focus around machine learning and automation, to deal with not only the scale, not only to create new value, but also in the context of cybersecurity, both on the offence and defence side,” he said.