Assessing the Trustworthiness of AI and ML in Cybersecurity


Businesses should regard AI cybersecurity predictions with skepticism for some time to come, because the reliability of those predictions remains uncertain.

AI will revolutionize cybersecurity. How many times have you heard that phrase – or a close variation on it – in the past five years? If you’ve been paying attention to the industry press, the answer is probably in the dozens, if not the hundreds.

The argument generally goes like this: AI and ML technologies allow security professionals to process massive amounts of network surveillance data and can spot malicious or suspicious activity with far greater accuracy than even the most highly trained human.

The reality is a little more complex. While AI and ML tools are certainly useful, particularly when it comes to security and software testing, they are many years away from providing reliable, trustworthy, truly intelligent insights on network security issues. In this article, we’ll look at why that is.

See also: Why Automotive Cybersecurity Needs Real-Time Threat Detection

The promise of AI/ML in cybersecurity

In case I am accused of being unnecessarily skeptical, let me get one thing out of the way: I think AI and ML tools are great, and they have a major role to play in improving cybersecurity in many contexts.

As Evan Elliott, Senior Data Scientist at BMT, told DevOps Online recently, this is largely due to the speed with which AIs and ML tools can process large amounts of data and make predictions from them. In this way, AIs can provide early warning of activity that may turn out to be malicious.
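That pattern-spotting strength can be illustrated with a minimal sketch. The example below uses scikit-learn's IsolationForest to flag unusual network connections; the traffic features, distributions, and thresholds are all synthetic and purely illustrative, not a production detection setup.

```python
# Hypothetical sketch: flagging anomalous network connections with an
# unsupervised model. All data here is simulated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: (bytes sent, connection duration in seconds)
normal = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(1000, 2))

# A few simulated exfiltration-like events: large transfers, long sessions
suspicious = rng.normal(loc=[5000, 30.0], scale=[500, 5.0], size=(5, 2))

# Fit on normal traffic only; contamination is the expected outlier rate
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for points the model treats as anomalous, 1 otherwise
print(model.predict(suspicious))
```

The model never sees the suspicious events during training; it flags them simply because they sit far outside the patterns it learned, which is exactly the kind of early warning Elliott describes.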

This also means, claim some proponents of AI and ML-driven approaches to security, that AIs can search for security vulnerabilities. This is, to put it bluntly, a much less reliable claim. While AIs have proven themselves to be quite effective when it comes to making predictions based on historical data, the idea of using them to proactively identify potential vulnerabilities in networks is far more ambitious. Let’s look at why.

AI bias in cybersecurity

The reason why AI-produced predictions about network security should be regarded with suspicion can be explained simply. AIs and ML tools, no matter how sophisticated, still work on the basis of correlation: they run through reams of historical data and can make eerily accurate predictions from it. The problem is that those predictions also tend to be biased.

Examples of this AI bias have been everywhere in the last few years. Analysts point out that the AI algorithms now used to inform decisions in healthcare, justice, and even cryptocurrency trading can entrench patterns of prejudice and lead to highly problematic social outcomes.

The effect of AI bias on cybersecurity has been less well studied, but it's not hard to take the insights generated by studies of AI bias in other fields and apply them to the security of computer systems. It's true, of course, that all of these AI systems – whether they are advising on sentencing or on the security of a network – can produce information much faster than a human could. That is genuinely impressive. The problem is that it can be hard to assess the reliability of these predictions and recommendations.

As the MIT Technology Review recently pointed out, bias can creep into AI systems in two main ways: either an algorithm is fed unrepresentative data, or the representative data for the problem at hand replicates an existing prejudice. In the context of AIs assessing the security of our networks, we face both problems simultaneously. We have nowhere near enough representative data on what threats look like across computer systems in general, and even the data we do have is heavily skewed toward what human analysts already recognize as security threats.
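The first of those two failure modes can be demonstrated in a few lines. In this toy sketch (entirely synthetic data, not drawn from any real dataset), a classifier is trained on a skewed sample in which attacks only ever appear as high-volume traffic; it then waves through a "quiet" attack it has never seen anything like.

```python
# Toy illustration: a model trained on unrepresentative data
# inherits the skew of its training set. All values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training set: attacks were only ever observed as high-volume traffic,
# because that is what human analysts historically flagged and labeled.
benign = rng.normal(100, 10, size=(500, 1))       # normal traffic volume
loud_attacks = rng.normal(300, 10, size=(50, 1))  # noisy, obvious attacks
X = np.vstack([benign, loud_attacks])
y = np.array([0] * 500 + [1] * 50)                # 0 = benign, 1 = attack

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A "quiet" attack that stays inside normal traffic volumes is
# classified as benign, because nothing like it was in the training data.
quiet_attack = np.array([[105.0]])
print(clf.predict(quiet_attack))  # predicts 0 (benign)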

Collaborative approaches

The outcome of these twin sources of bias will be familiar to any network administrator who has tried to use existing AI tools to identify network vulnerabilities: enormous numbers of flagged vulnerabilities that look suspiciously similar to those generated by much less sophisticated systems with static rules.

AIs, in other words, may just end up replicating the process of scanning for vulnerabilities by hand. They may perform this kind of analysis much more quickly than the average human, but if each finding has to be checked by a human operator for reliability, the total time spent actually fixing network vulnerabilities will stay roughly the same.

Even worse, because AI tools will be trained on data produced by human-led security scanning, we may be feeding them the same prejudices we have built up over the years. The best security analysts have spent much of the last decade re-educating themselves about what a security flaw looks like; if AIs are trained on our previous work, they won't be much better than we were at spotting novel threats.

Ultimately, some analysts have argued, we will therefore need to develop a much more collaborative approach to using AI in cybersecurity. AIs may be good at closing security holes identified by other, opposing AIs, but they should not be relied on to provide thorough intelligence on how vulnerable a system actually is.

The real world

This kind of approach is, in fact, already being used in many real-world AI applications. The most successful of these are built on the assumption that humans are great at some things – such as thinking creatively and reacting to novel threats – while AIs currently excel at a completely different set of skills, namely running through huge amounts of data and spotting patterns in it.
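That division of labour can be sketched as a simple triage pipeline. The scoring scale, thresholds, and event fields below are hypothetical; the point is only the structure: the model handles volume, and ambiguous cases are routed to a human analyst rather than acted on automatically.

```python
# Minimal sketch of human/AI collaboration: the model scores events,
# and only uncertain ones reach a human. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    score: float  # model-assigned anomaly score in [0, 1] (assumed scale)

def triage(events, auto_block=0.95, auto_ignore=0.10):
    """Route events into three buckets based on model confidence."""
    blocked, ignored, human_review = [], [], []
    for e in events:
        if e.score >= auto_block:
            blocked.append(e)        # high confidence: act automatically
        elif e.score <= auto_ignore:
            ignored.append(e)        # clearly benign: drop
        else:
            human_review.append(e)   # ambiguous: needs human judgment
    return blocked, ignored, human_review

events = [Event("10.0.0.5", 0.99),
          Event("10.0.0.9", 0.02),
          Event("10.0.0.7", 0.55)]
blocked, ignored, review = triage(events)
print(len(blocked), len(ignored), len(review))  # 1 1 1
```

Here the model's speed is exploited where it is reliable, while the creative, novel-threat judgment the article describes stays with the human reviewing the middle bucket.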

Because of this difference in skill sets, it will remain important to regard AI cybersecurity predictions with a level of skepticism for some time to come. Significant concerns remain about the reliability of AI predictions in many other fields – criminal justice and stock trading, for instance – and cybersecurity should not be exempted from that list.

