Want tighter security? Work on your data quality, says LogRhythm
Better data quality enables faster time to respond, and therefore a higher likelihood of a positive outcome
Better data quality is essential for effective security, as it enables a faster time to respond, and therefore a higher likelihood of a positive outcome.
That's the opinion of Andrew Hollister, senior technical director EMEA at LogRhythm, speaking today at Computing's Enterprise Security and Risk Management Summit.
Hollister likened security to a castle - a once-effective method of physical security which, over time, became ineffective as attacking strategies and technologies evolved.
"We're seeing the same approaches in the cyber world," said Hollister. "There's lots of focus on defence, lots of spending on firewalls, perimeter defences, anti-virus and other types of malware detection trying to keep the bad guys out. It's accepted now that a determined attacker with enough time, resources and motivation will find their way around any kind of defence."
And attacks will continue to evolve, he added.
"There's no end in sight. There are motivated threat actors, it's not just a script kiddy in his bedroom, they are motivated people."
And even more concerning, Hollister explained, is the evolution of a support system of outsourced cyber criminal services.
"The most concerning thing is the establishment of a robust cyber criminal supply chain. You don't need skills to launch a DDoS attack, or write a zero-day attack, you can pay someone to do it for you, and perhaps they'll take the fall if they're caught. Cyber crime as a service is a threat that's evolved over the last couple of years."
As a response, he continued, there has been a strategic shift towards detection and response in cyber defence spending.
"It takes time to attack an organisation and exfiltrate data, or to corrupt data. As defenders we've got a timeline across which there are multiple opportunities to get visibility of what's going on, what's happening on the network, before you get to the disruption. There are many stages where we can detect and prevent that."
The problem, according to Hollister, is that it typically takes an organisation anywhere from 140 to more than 300 days to detect that it has been breached.
"Speed of detection is directly correlated to speed of response. If it takes a long time to discover you've been breached, it takes a long time to respond.
"And there are obstacles to faster detection, and it all starts with data quality," he argued. "Every couple of years there's a new buzzword. It was big data a few years ago, now it's data lakes. It's great to have a data lake, but what's in the lake and what can I get out of it? Can anything tell me what I need to know?
"You need to understand any language and translate it into something you can understand. How do I understand the information coming from the Linux estate, Active Directory and others, when they all record what's going on in a different way? Every device talks a different language. You need to turn it into a common language, put that into data lake, then you can go after the data and understand what's really going on."
A second obstacle, he said, is alarm fatigue.
"Many large organisations have over 30 security products in their environment, all generating information. It's simply not effective to have this in a SOC [Security Operations Centre], where analysts are trying to make sense of lots of different products, all with different interfaces. The time to respond continues to be extended because of this.
"Then you have forensic data siloes and a lack of automation. It leads to a fragmented workflow."
To address these obstacles, Hollister continued, organisations need technology to act as the enabler between people and processes.
"Threat lifecycle management begins with the ability to see broadly and deeply across the environment, and ends with the ability to quckly mitigate and recover from security incidents.
"It all starts with forensic data collection. It's about data quality - this is the point at which you want data in a common language so you can do something useful.
"Once you've got a machine to bubble up the things of most interest, assess if it's a threat in your environment and see if a full investigation is necessary. You can measure over time if your time to detect is reducing or increasing.
"Then you go to the investigation phase, then implement the appropriate counter measures. Then you have the final stage, where you clean-up, report, review, update policies, and educate users."
He explained that LogRhythm brings these technologies and the threat lifecycle management workflow into a single pane of glass.
"This avoids fragmented workflows and data siloes, and you get overall visiblity of what's going on in a single environment.
"This is one of the primary enablers to fast detection and response - data quality is key. You have thousands of different device types with billions of messages per day in a large enterprise, and you need to turn them from gobbledygook into seeing that a rogue admin in Norway is accessing pay files."
Another benefit, he added, is precision search and holistic threat detection, with the aim of lowering false positives and false negatives so that the alarms seen are those of primary importance.