'Levelling up cybersecurity is a team effort,' says Jacob DePriest of GitHub

But security starts with developers, and AI isn’t going to replace them

Image: Shutterstock

GitHub's VP and Deputy Chief Security Officer explains why basic security hygiene measures, such as securing developer accounts, can prevent much bigger supply chain problems later, and how AI tools can play a part in making us more secure as well as more productive.

As VP and Deputy Chief Security Officer at GitHub (a Microsoft subsidiary since 2018), Jacob DePriest is responsible for the internal security of the business itself, as well as for the platform and its developer community.

"GitHub is really the world's home for open source," DePriest says. "We have over 100 million users on the platform and take that responsibility and that place in the centre of that software ecosystem very seriously."

DePriest explains how keeping developer accounts safe can prevent the supply chain attacks that we've seen so much of in recent years.

"For us at GitHub, software supply chain and security generally starts with the developer. We're focused on providing an end-to-end development platform for productivity, but if we don't secure where the developers are working then we've missed an opportunity."

Image: Jacob DePriest, GitHub VP & Deputy CSO

DePriest points out, quite rightly, that the vast majority of breaches come from password and credential theft.

"The way we combat that is strong multifactor authentication," he says. "In 2022, we announced that we would require two-factor authentication for the vast majority of developers working on the GitHub platform. We've been working towards that for the last couple of years, so we've seen a massive increase in adoption."
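The one-time codes behind most 2FA apps follow a published standard rather than anything GitHub-specific. As a minimal sketch (not GitHub's implementation), the RFC 4226/6238 scheme that common authenticator apps use looks like this:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP with the counter derived from the clock."""
    secret = base64.b32decode(secret_b32)
    return hotp(secret, int(time.time()) // period)

# RFC 4226's published test vectors for the ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the secret never leaves the user's device and each code expires within seconds, a stolen password alone is no longer enough, which is why MFA blunts the credential-theft attacks DePriest describes.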

He continues:

"From there we move to software dependencies. This is where our GitHub Advanced Security capabilities come in – Dependabot, things like that. We're trying to make it as easy as possible for developers using GitHub to have a secure by design approach."

"We have to make sure that secrets and credentials aren't being stored in software. That's why we turned on push protection by default as part of our secret scanning solutions for all public repositories recently."
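The core idea of secret scanning can be illustrated with a toy check. The token prefixes below are real, publicly documented formats (`ghp_` for GitHub personal access tokens, `AKIA` for AWS access key IDs), but the code is a hypothetical sketch of the concept, not GitHub's scanner:

```python
import re

# Simplified illustration: reject content that matches known credential patterns,
# the way push protection blocks a push before a secret ever lands in history.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                       # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def find_secrets(text: str) -> list[str]:
    """Return any credential-like strings found in the given text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

clean = 'token = os.environ["GITHUB_TOKEN"]  # read at runtime, never committed'
leaky = "token = 'ghp_" + "a" * 36 + "'"
print(bool(find_secrets(clean)))  # False
print(bool(find_secrets(leaky)))  # True
```

The design point is the same one DePriest makes: catching the credential at push time, rather than scanning after the fact, means there is nothing to revoke and rotate later.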

Can AI make us more secure?

Nobody is going to argue against building a robust security framework from the outset of software development, but necessary as it is, it should be one component of a much wider strategy for addressing the risks that hostile actors of all kinds pose to critical infrastructure.

A report published at the end of last year by the Joint Committee on National Security concluded that the UK remained at a high risk of a "catastrophic" ransomware attack. Another published last month found that 78% of businesses didn't have an incident response plan in place.

DePriest acknowledges the issue and thinks that levelling up security is going to be very much a team effort.

"It's going to take public and private sector, developers, large organisations – all of us really – coming together to tackle this problem. A lot of the consumers of the open source world are organisations and so we'd love to see more responsibility taken by those organisations in the overall security posture of the things they're using, so giving back and contributing to the open source security ecosystem, responsible disclosure and normal best practice.

"Long term, to make a dent in some of the supply chain challenges, it's going to take other companies requiring two-factor authentication, it's going to take organisations that are consuming open source taking that responsibility on and making sure that the open source they're using to power their products has the resources and the patches and the security that's necessary. That's going to level up the entire community."

Naturally, DePriest thinks that AI development tools like Copilot can help to level up security, in addition to making developers more productive.

"In security terms, automation has been complicated in the past," he says. "Developers have had to switch between all these different tools. They would see an unidentified vulnerability and go spend a bunch of time researching how to fix it. Applying AI to our code scanning vulnerability scanner, we can automatically suggest a fix to the developer in the flow that they're in."

"We still want to lean into the skills and expertise that developers have. We just want to keep them in the flow and reduce the time it takes to context switch and find answers to all these problems."

Indeed, given the tightly controlled financial environment that cybersecurity teams are operating in, AI in development can have a big impact.

"I think that, given some of the tight fiscal constraints that security teams have, and the technical sector has more generally, whether that's from skill sets or other constraints, investing in developers through things like AI adoption and security adoption will have an outsized impact on the security posture of organisations for a reasonable cost."

AI will not replace developers, but it will help them

An opinion often voiced has been that GenAI tools like Copilot will eventually make all but a very few skilled developers redundant. DePriest thinks almost the opposite. Not only will it not cost jobs, it might help companies hang on to the developers they have.

"I think what it's going to mean is that developers focus on the things we want them to focus on – things that provide real value. There's a deep set of skills and problem-solving capabilities unique to each developer. Often they spend time debugging software or doing repetitive tasks. We believe AI is already making a huge dent in taking those things away and helping them focus on how to provide more value to the company or organisation they work for and the mission that they're working on."

Will it lead to a flatter organisational structure in software development teams?

"My sense is that we will still need the various experience levels. I just think those experience levels are going to be applied in increasingly different ways. We're still going to have very complex technical challenges, even if some of the repetitive stuff goes away. And so I think it's going to just mean that the types of problems those folks are focused on are going to increase in complexity as well and they'll be able to spend more time on those than they are today."