Automating cyber-defence

Pascal Geenens, EMEA Security Evangelist, Radware


AI has the potential to make the lives of security professionals a lot easier – but it should be approached with caution.

Deep learning is a useful tool to optimise and validate security posture. But until we overcome some of its challenges, positive security models and behavioural algorithms that are deterministic and predictable are still more effective for defence and mitigation.

Most deep learning applications in use today (the successful ones, that is) are based on supervised learning neural nets. They take an input and produce an output: a confidence level across a fixed set of labels. Given lots of data, the neural net will usually make the right “decision”.
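As a rough sketch of what that looks like in practice, the snippet below trains a small classifier on labelled examples and returns a confidence level for each label in a fixed set. The feature layout, label names and use of scikit-learn are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of a supervised neural net that outputs a confidence
# level across a fixed set of labels. Training data here is randomly
# generated placeholder data, purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical feature vectors extracted from network flows
# (e.g. packet rate, byte rate, connection duration).
X_train = rng.normal(size=(1000, 3))
y_train = rng.choice(["benign", "scan", "flood"], size=1000)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# The output is a confidence distribution over the fixed label set,
# not a guaranteed-correct verdict.
sample = rng.normal(size=(1, 3))
for label, prob in zip(clf.classes_, clf.predict_proba(sample)[0]):
    print(f"{label}: {prob:.2f}")
```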

Supervised nets are an advanced form of automation, where the automation is coded through data and learned by example. They are highly efficient and provide a solution for many domains where coding rules by hand would be virtually impossible, because of the complexity involved and our human limitations in processing vast amounts of data.

Deep learning is not without its challenges, though. It is hard to trust in adversarial contexts and it requires continuous maintenance as the diversity and size of the data change. While attackers can exploit mistakes in the defence, using AI to generate new attacks, the defence system itself has zero margin for error – otherwise you might be looking at what could be considered the next Equifax breach.

Unsupervised machine learning provides another approach – it does not require training sets. It’s good at discovering the structure in unlabelled data and finding anomalies in large pools of unstructured data. Most unsupervised learning techniques are based on traditional machine learning and not deep learning.
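The sketch below illustrates this with Isolation Forest, one traditional machine learning technique for flagging outliers in unlabelled observations. The traffic features, contamination rate and synthetic data are assumptions made purely for illustration.

```python
# Minimal sketch of unsupervised anomaly detection on unlabelled data
# using Isolation Forest (traditional machine learning, not deep learning).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Unlabelled observations, e.g. requests per minute and bytes per request.
normal_traffic = rng.normal(loc=[100, 500], scale=[10, 50], size=(2000, 2))
outliers = rng.normal(loc=[400, 5000], scale=[20, 200], size=(10, 2))
observations = np.vstack([normal_traffic, outliers])

model = IsolationForest(contamination=0.01, random_state=1)
labels = model.fit_predict(observations)  # -1 marks anomalies, 1 marks normal

print("flagged anomalies:", int((labels == -1).sum()))
```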

Statistical behavioural detection models have long been used to detect anomalies in traffic patterns and behaviour. They are modelled closely to a specific problem, which makes them very targeted, so they cannot be applied to a broad range of problems. On the other hand, they do not need to be trained on large data sets; they merely need a baseline. And, while they can be complex, they are deterministic, and humans can interpret and understand their decisions.
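The snippet below sketches what such a deterministic, interpretable model can look like: a baseline of requests per second and a fixed deviation threshold, with no training data beyond the baseline itself. The metric, window and threshold are illustrative assumptions.

```python
# Minimal sketch of a statistical behavioural model: learn a baseline from
# recent history, then flag samples that deviate beyond a fixed number of
# standard deviations. Deterministic and easy for a human to interpret.
import statistics

baseline_window = [102, 98, 110, 95, 105, 99, 101, 97, 104, 100]  # req/s
mean = statistics.mean(baseline_window)
stdev = statistics.stdev(baseline_window)

def is_anomalous(sample: float, threshold: float = 3.0) -> bool:
    """Flag a sample whose deviation from the baseline exceeds `threshold` sigmas."""
    return abs(sample - mean) > threshold * stdev

print(is_anomalous(103))   # False: within normal behaviour
print(is_anomalous(450))   # True: flagged as anomalous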

The major difference between machine learning and statistical behavioural solutions is that neural nets are a more general approach, and their output cannot be blindly trusted or used directly in a system for mitigation and enforcement.

The case for deep learning is about complexity – in other words, problems where explicit modelling is virtually impossible. Machine learning helps us to see more clearly as the number of events we encounter grows, helping security teams detect anomalies and potentially malicious traffic without further human effort.

A combination of traditional machine learning and deep learning systems will help security operations improve their security posture and automate their defence capability.

AI should be considered an integral part of security strategies. It allows us to cover more ground and detect gaps in our current security posture. AI will enable security experts, make their jobs more interesting and help them focus on the right indicators. But it will never replace the experts.

