Peter Cochrane: ChatGPT - BYOD versus security

The biggest security challenge comes where open and closed networks are combined. AI is going to make that harder still

For well over 30 years the Bring Your Own Device (BYOD) mode of working has afforded users greater mobility, flexibility and adaptability, whilst at the same time saving corporates, companies and institutions millions in support, equipment and app purchases.

For users, it has simplified their work and home lives as the demarcation between the two faded into the past. But perhaps more importantly, it personalised IT and realised the potential of mobile and home working, the gig economy and a more fluid workforce.

The best estimate I can find suggests that over 70% of us use our own devices for work, including mobiles, tablets and laptops. And that includes most of our big institutions spanning education, healthcare, government, manufacturing, military and logistics.

So what of the 30% outside this creed? Of course, there are always tech laggards, technophobes, control freaks and old mindsets. Then there are operations with unusually complex computing needs, such as banks, aerospace and infrastructure operators, which demand higher levels of security and often run closed networks.

For cybersecurity professionals, the biggest challenge comes where open and closed networks are combined, and this includes a large proportion of the BYOD sector.

For example, medical staff often use personal mobiles freely in hospitals and surgeries, whilst patient and commercial data are guarded by internal networks. The same is true of banks and the military, where ‘boots on the ground’ require fast, flexible and reliable communication on demand for non-critical information, but very high levels of security for strategic and operational data.

So far so good! Except, that is, for a recent flurry of alarmist publications proclaiming that BYOD presents a very high cyber risk and exposes companies to new legal compliance issues. I would suggest that none of this is a big deal: surveys indicate that around 90% of data security professionals and employees are aware of the risks and actively concerned. The bigger questions are:

I would suggest that the first three items are mostly the same as for "closed" company assets, where the insider attack appears the biggest risk.

Unfortunately, the last two items are often neglected, as IT departments are generally starved of money and resources through the failings of the board. To be blunt: company boards are largely uneducated in technology, tend to be tech resistant or phobic, and are unable to appreciate cyber risk. On the other side of the coin, CIOs, CTOs and security officers are often not well versed in "board speak".

This communication chasm is already very damaging, and it's about to get very much worse. Witness the amount of GPT tosh and scaremongering being published, and the complete lack of any viable explanation of the risks or opportunities!

GPT isn't about to destroy the human race, but it is powering the dark side and yielding increasingly sophisticated cyberattacks. These will amplify the risk to BYOD and fixed operations alike.

My experience is that most computer science graduates and IT professionals no longer study mathematics in any depth, and therefore cannot fully understand AI or reason rigorously about its risks.

Now try explaining that to the main board whilst asking for a bigger budget!

Peter Cochrane OBE, DSc, University of Hertfordshire