AI: friend or foe?

kofi addo

Sep 18, 2018, 12:49:22 PM
to Ghana Leadership Union GLU

AI: friend or foe?


AI is upping the ante in the world of cybercrime. But in the battle of wits between hackers and security experts, the best defence is still for humans to remain vigilant

September 11, 2018, 12:01am

Illustration by Dan Neather for The Bridge Studio


Artificial intelligence (AI) and machine learning (ML) are hot topics. According to the consultancy Gartner, AI is adding over $1 trillion of value a year to business, a figure that's forecast to grow to almost $4 trillion by 2022. In the longer term, these technologies promise to revolutionise everything from how we drive to how we work, even how long we live.

But they have a dark side too – and we’re not talking about a Terminator-style dystopia. Hackers are often at the vanguard of technological developments and have very quickly automated cybercrime in areas such as phishing, password cracking and denial-of-service attacks. In the coming years we are likely to see online crooks enthusiastically embrace AI and ML.

So, does this herald a whole new era in the constant move/countermove conflict between hackers and security experts?

“It’s been an arms race for decades,” says John Shaw, vice-president of product management at the IT security firm Sophos. But, he adds, we may be witnessing an acceleration.

“We’re now seeing 400,000 new bits of bad code produced every day. This does not mean 400,000 programmers writing code. It means heavily automated systems. The result is bespoke malware – a virus written just for you.”
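The scale Shaw describes is easy to see in miniature: appending even a few junk bytes to a payload leaves its behaviour untouched but gives every copy a brand-new signature, which is precisely what defeats signature-based scanning. A harmless sketch (the payload is a placeholder string, not real malware):

```python
import hashlib
import random

payload = b"<placeholder for identical malicious logic>"
rng = random.Random(42)  # fixed seed so the sketch is reproducible

# Generate three "bespoke" variants by appending a few random junk bytes.
variants = [payload + bytes(rng.randrange(256) for _ in range(8)) for _ in range(3)]
signatures = {hashlib.sha256(v).hexdigest() for v in variants}

# Each variant would behave identically, yet no two share a signature,
# and none matches the signature of the original payload.
print(len(signatures))  # 3
```

Automate that mutation step and 400,000 unique samples a day stops sounding remarkable.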

David Moloney, the disruptive technology director at the professional services firm PwC, says what you need to remember is that these technologies can enable one person to do a lot of damage to a large organisation. AI, he says, is likely to increase this potential, especially as it becomes widespread. “AI used to be new and require real expertise,” he says. “But now it’s becoming a platform. People can assemble [malware] that others have built.”

But equally, says Shaw, AI is providing crucial new tools for the good guys to fight back, and the latest anti-virus software makes good use of new technology and ML. Rather than relying on the “signatures” of known viruses, this software uses algorithms to predict whether an unknown piece of code is likely to be malicious. The technology works in a similar way to the neural networks in the brain, which adapt to changing circumstances or habits by finding new paths. “You can train machines to spot bad code 99 per cent of the time,” Shaw says.

However, technology is developing fast and there’s always something new. Moloney says: “An example of the kind of thing we’re likely to see more of is ‘deep fakes’, such as the Photoshopping of pictures and videos that is so good it can’t be detected. This could then be used in phishing.”

It’s not just a question of technology either. Some of the best hackers are taking a more sophisticated line and using automation to do the lion’s share of the work – the number crunching and defining of targets – but then finishing it off with the human touch, such as a personal message or mannerism that is typical of a particular user.

While hackers are finding this mix makes for an effective attack strategy, the same is true for the security industry. There is much that can be done on the human side of defence, says Shaw: “End users should always be aware and on guard. However good the defensive technology is, you still need to train end users not to make basic mistakes and to learn who or what to trust.”

This is still the backbone of defence, and needs to be a key part of any security strategy. As Moloney warns: “Using AI for defence could be a distraction for businesses that haven’t got the basics right.”
