Eva Chen has been with Trend Micro for 21 years, serving as executive vice president for eight years and as chief technology officer for a further eight, before her promotion to chief executive in 2004.
Computing talked to Chen about why large organisations may need to rethink securing their datacentres, and how to reduce PC user complaints about anti-virus packages hitting their system's performance while scanning for malware.
What is the split of Trend Micro's business?
It’s about 25 per cent consumer and 75 per cent business, with 40-50 per cent of that business-related security being for large enterprises.
What do you think is the biggest problem facing IT security experts?
I would say it’s the budget allocation, because there are so many security threats on all fronts. For instance, firms going through datacentre consolidation and virtualisation deployments are facing new security threats that need to be dealt with, and that requires budget. On the desktop and mobile devices, I think traditional anti-virus packages are not enough, but again they need budget, time and resources to deploy properly.
Having said the main problem is budgets, tied into that is firms' compliance regimes, because compliance competes with the security budget. My view is that compliance itself has nothing to do with security, and a lot of the time I see IT security specialists having to spend a lot of budget and time on compliance, rather than thinking about what security really needs.
Compliance guidelines are OK, but lawmakers should leave how to do security to IT experts. Big data breaches happen in large firms, but I bet all of these firms were compliant.
One of the complaints about anti-virus packages is the performance decline which happens when the software starts scanning for malware – how can security vendors address this?
As a security vendor we always try to balance three things for customers: system performance, system security, and eliminating false positives.
Imagine it as a triangle, with those three parameters as the vertices. Customers want 100 per cent in all three areas - the best performance, the best protection and no false positives - a scenario which has always been a struggle for security vendors.
If you’re trying to load malware signatures onto systems and use that system itself to compare potential malware with the signatures, the performance will be low and false positives will be high.
About three years ago we started to invest in cloud computing and moved all these signatures and comparisons into the cloud. The result is a big reduction in the load on the user’s system, since no memory or processor time is used in that comparison. We call it the client-cloud architecture. So we can update the pattern file often – say every hour – on just one of our servers. We don’t have to distribute the pattern file to thousands of systems, which means the user’s system only needs a very small onboard anti-virus engine.
Enterprises can also reduce network latency by caching our virus pattern file on a server in their own private cloud.
Why don't all security vendors get together and pool malware signatures?
The anti-virus industry already collaborates on pooling the malware it finds, but because every vendor’s scan engine is different, the structure of the virus pattern files is different.
What about standardising the pattern file format?
That would be a huge undertaking. Which anti-virus vendor is going to give up their specific pattern format? These vendors have a lot of customers out there, and upgrading and migrating all those scan engines and pattern files would be a big job.
Theoretically it could be done, but remember you shouldn’t put all your eggs in one basket. A lot of customers purposely split their security among different vendors, to build in some degree of redundancy.
Looking into your crystal ball, what would you say would be the major threat in the future?
Actually I don’t need a crystal ball: it’ll still be botnets that cause most problems for enterprises, because we still don't know how to deal with them properly.
Even larger enterprises that open up their networks or web sites for collaboration with suppliers and customers could be vulnerable, especially if one of those collaborators has a botnet-infected system on their network that’s connecting to your enterprise.
Such a botnet-infected system could be used to steal business critical data or start a distributed denial of service (DDoS) attack on any major web site.
The Korean government web site almost went down last month under the weight of a large DDoS attack, with most of the implicated systems located in the UK, even though nobody in the UK wants to DDoS South Korea.
Does datacentre security protection need a rethink?
Usually chief security officers (CSOs) put firewall and intrusion detection systems (IDS) in front of their server systems, because the only way to attack is through the connections to the datacentre – the gateway.
There are two problems with this model of protection, the first being if there’s a botnet-infected computer actually inside your network.
The second problem arises when the enterprise starts to use virtualisation. Previously, servers never moved once they were in the datacentre, which meant the firewall/IDS rules never needed to change.
So if you add a new server, it'll probably take a couple of weeks, and you have time to configure the firewall and IDS to match the new servers you've added.
However if you deploy virtualisation, the server can move between network segments, which could have different firewall and IDS rules. You can add a new server in seconds, which means that the firewall and IDS rules may need to change frequently – which is a big burden to IT administrators, and it could be a big hole for datacentre server security [if the rules aren’t updated frequently enough].
So instead of the old model of a big firewall and IDS in front of the server farm, CSOs need to consider putting a protection agent onto each server itself.