The experts’ view: Protecting your business against emerging cyber threats
22 November 2016
The recent growth of ransomware has demonstrated that human response times are often not fast enough to handle modern security threats, said Dave Palmer of Darktrace, introducing an Inner Circle breakfast briefing at London's Savoy Hotel.
Palmer told attendees, senior executives and security experts from a range of sectors, that over time the growth of AI would benefit defenders more than attackers, but that in the short term hackers would benefit most. He said that self-defending networks will be increasingly important in protecting against a range of evolving threats.
Many businesses are nervous about automated security systems taking decisions on behalf of their human operators, but most of those at the briefing agreed that this is often the only practical option.
One attendee said that IT security staff rely on their colleagues to notice problems and alert somebody. At busy times that can take a while. In the evening or at weekends, problems can go unnoticed for a long time.
The volume of data going through the system is another problem, said Alex Bosworth from UK Parliament. In most modern organisations there is a firehose of data that human beings cannot process quickly enough. Separating signal from noise has become almost impossible.
Ultimately, said David Ward from ICAP, there aren’t enough people to prevent and fix threats, so who could object to an automated solution?
One obstacle to the adoption of automated security systems is the question of who is responsible if something goes wrong. Human staff members have a line manager, appraisals, training and other systems in place to ensure they can perform at their best. When they don’t, there are procedures for solving the problem. These are not there for our ‘robot’ colleagues and most organisations have not even considered what they might be and how they would operate.
Another attendee pointed out that we tend to assume that holding people responsible for their actions will ensure their good behaviour. We tend to want the approval of our colleagues and we try to avoid punishment for doing something wrong.
We can’t ensure ‘good’ behaviour from AI workers in the same way. If they make a crucial mistake then who is responsible? Is it the department they are working in, the IT department for buying the AI, the company that built the AI or somebody else altogether?
We might need to rethink our need to hold someone responsible in the security space anyway, said Palmer. He argued that the need to find out who is responsible is itself a security risk. Following a security breach companies tend to close ranks and keep details of their investigation quiet, often for perfectly sound legal reasons.
However, while the investigation is ongoing, other organisations might be at risk from the same security flaw. The hackers – who are typically much more willing to share information than defenders are – have a window in which they can exploit this flaw before it’s widely known and patched.
The problem often does not end with the security patch, said Emil Lupu from Imperial College London. Many companies do not patch their software immediately because an update can break existing systems and workflows. That means that often the time from a security flaw being exploited by hackers to its being fixed by the majority of organisations can be worryingly long.
Nader Hosni from LCH said that much security thinking still centres on protecting the perimeter. To use a biological analogy, it’s about protecting the ‘skin’. What we should be doing, Hosni argued, is understanding that the threat is already within the perimeter – in the bloodstream, to continue the biological analogy. That means focusing on the IT security equivalent of the immune system.
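The immune-system approach Hosni describes can be illustrated with a minimal sketch: rather than guarding the perimeter, the system learns what normal internal behaviour looks like and flags deviations. The class, metric, and threshold below are purely illustrative assumptions, not any vendor's actual algorithm.

```python
from statistics import mean, stdev

class BehaviourBaseline:
    """Rolling baseline of a per-device metric, e.g. megabytes sent per hour."""

    def __init__(self, history):
        self.history = list(history)

    def is_anomalous(self, observation, threshold=3.0):
        # Flag observations more than `threshold` standard deviations
        # away from the device's own historical norm.
        mu = mean(self.history)
        sigma = stdev(self.history)
        if sigma == 0:
            return observation != mu
        return abs(observation - mu) / sigma > threshold

# A workstation that normally sends ~50 MB/hour suddenly sends 500 MB.
baseline = BehaviourBaseline([48, 52, 50, 49, 51, 50])
print(baseline.is_anomalous(500))  # True: far outside normal behaviour
print(baseline.is_anomalous(51))   # False: within the normal range
```

The point of the analogy is that the check runs against each device's own behaviour inside the network, so a threat already "in the bloodstream" can still be spotted.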
Palmer said that this is one of the things an automated security system can handle very well. Because they work to a playbook controlled by the IT department, automated systems can be deployed in a variety of ways.
For example, fake LinkedIn emails are a common form of phishing email but certain departments, such as sales, will be staffed by people who need to accept LinkedIn emails in order to do business. Perhaps, then, you task the automated system with monitoring their activity very closely, maybe even limiting their file privileges because they are a ‘riskier’ group.
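A playbook of the kind Palmer describes might be expressed as a set of per-department rules. The sketch below is a hypothetical illustration of that idea; the department names, intervals, and privilege flags are assumptions for the example, not a real product's configuration.

```python
from dataclasses import dataclass

@dataclass
class PlaybookRule:
    department: str
    risk_level: str           # "normal" or "elevated"
    monitor_interval_s: int   # how often activity is sampled
    allow_external_mail: bool # e.g. LinkedIn-style messages
    file_write_access: bool

# Sales must accept external LinkedIn-style mail to do business, so it is
# treated as an 'elevated' risk group: watched more closely, with
# narrower file privileges.
RULES = {
    "sales": PlaybookRule("sales", "elevated", 60, True, False),
    "engineering": PlaybookRule("engineering", "normal", 600, False, True),
}

def policy_for(department: str) -> PlaybookRule:
    """Unknown departments fall back to a restrictive default."""
    return RULES.get(
        department,
        PlaybookRule(department, "elevated", 60, False, False),
    )

rule = policy_for("sales")
print(rule.risk_level, rule.monitor_interval_s)  # elevated 60
```

Because the rules live in a structure the IT department controls, the same automated system can be deployed permissively in one team and restrictively in another.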
He added that automated technology is currently helping to find lots of long-hidden vulnerabilities. In the long term, this will help us to substantially increase our security but in the short term it will help to expand the range of options for hackers. We should be prepared for an increase in attacks over the next few years.
Given tighter resources, that’s a compelling reason for businesses to investigate automated security tools.