For want of a nail: Why choices about AI credentials can make or break your AI strategy
Identity management has moved on in the age of AI
There’s an old proverb that starts, “For want of a nail, the shoe was lost…” and ends with the fall of a kingdom.
In 2025, for organisations racing to operationalise AI, your “nail” is something as unglamorous as how you manage AI accounts and credentials to your company’s AI tools like ChatGPT, Gemini and Perplexity. These identities sit between your people, your data, and the AI models you’re wiring into the core of your business.
Currently, most security teams don’t treat those accounts as critical assets. Teams still run their AI projects on throwaway logins, default settings and minimal controls. This lax approach will likely continue until something goes wrong: a leaked API key, a surprise bill after tens of thousands of tokens are charged to the company’s account, or a compromised AI account wired into the company’s SharePoint or Salesforce instance leaking data. Suddenly the conversation flips from “How do we scale AI?” to “How bad is the damage?”
LLM accounts aren’t normal SaaS logins
Today, we have two types of access to worry about. First, there are programmatic AI identities: the API keys, access tokens and service principals used by applications, scripts and AI agents. Second, there are interactive SaaS AI identities: human subscription seats to tools like ChatGPT Business, Gemini for Workspace and Perplexity Enterprise.

Three things make AI accounts unusually dangerous compared to other SaaS tools. First, many LLM services are metered, such as the OpenAI and Perplexity APIs, and metered accounts are usually tied to a company credit card. A single over-privileged key or user seat can burn through your budget fast, and if the credentials are stolen, you have effectively handed an attacker a prepaid credit card. Second, AI accounts expose concentrated knowledge. In many deployments, organisations are racing to integrate AI accounts directly into the systems that store and manage company data: wikis, file shares, ticketing systems and code repositories.
If a hacker takes over the identity of an LLM account holder, they get natural-language access to everything the business is trying to bring into view of the AI. Third, and most important, AI accounts increasingly drive actions in other corporate systems. As companies rush from deploying chatbots to fully agentic systems, LLM identities pick up permissions to call plugins, update records, raise tickets and trigger workflows. At that point, the AI accounts are no longer just reading data; they are actively taking actions in other business systems on behalf of corporate users. The connected AI account starts to look like a central login to much of the business data you care about.
Free pass to walk into the company: the insider angle
From an attacker’s perspective, the real prize is not free computing power. It’s an easy way to become an “insider” without ever breaching your core Identity Provider (IdP). For example, suppose an employee has a seat on the company’s LLM subscription, and the service is not behind enforced Single Sign-On (SSO) and Multi-Factor Authentication (MFA). Any threat actor who gets hold of the employee’s email address and password for the LLM service can log in as that employee, read and copy the employee’s chat history, download any files that have been uploaded, and, if the company has connected tools like SharePoint, Salesforce or Google Drive to the LLM, access everything the employee can. The threat actor can quietly exfiltrate whatever they find interesting.
Threat actors don’t need to touch your SSO, corporate email or VPN. They don’t have to defeat your existing MFA policies. And from the AI service’s point of view, the attacker is a perfectly “legitimate user”: they present the correct username and password, hold valid sessions and cookies, use the standard APIs and UI, and work from plausible IP ranges. From a security perspective, this is now an insider problem. The attacker is inside your AI tenant, using a real identity, accessing only the things that identity is allowed to see, via the normal user experience.
Why traditional controls won’t save you
Most of the controls we’ve built are designed to spot illegitimate access: unknown devices and locations, suspicious sign-in patterns, impossible travel, and obvious privilege escalation. Once an attacker is using a valid AI account with valid credentials, those signals weaken fast; to your AI provider, and even your IdP, it’s business as usual.
The only reliable way to catch these attackers is to start asking, “Is this how this identity normally behaves?” The focus must shift from pure authentication events to behavioural patterns. Does this user suddenly search across repositories they’ve never touched? Are they asking for broad or sensitive data, such as the names of all customers in a specific region? Are they pulling more documents than usual, or repeatedly summarising whole repositories? Are they using plugins and connectors they’ve never used before?
You must apply the same thinking to non-human identities. Is this API key suddenly hammering high-cost models from a new geography? Is this AI agent calling tools it has never used, or touching new classes of data? Once credentials and MFA are bypassed, behaviour is your only reliable threat signal.
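To make those behavioural questions concrete, here is a minimal Python sketch of a per-identity baseline. The event shape (`identity`, `connector`, `docs_pulled`) and the alert thresholds are assumptions for illustration; a real deployment would build the baseline from your AI provider’s audit logs.

```python
from collections import defaultdict

class AIIdentityBaseline:
    """Tracks what each AI identity normally does, then flags deviations.

    Works for human seats and non-human identities (API keys, agents) alike.
    """

    def __init__(self):
        self.known_connectors = defaultdict(set)   # identity -> connectors seen
        self.pull_history = defaultdict(list)      # identity -> past pull volumes

    def observe(self, event):
        # Fold a known-good event into the identity's baseline.
        self.known_connectors[event["identity"]].add(event["connector"])
        self.pull_history[event["identity"]].append(event["docs_pulled"])

    def alerts(self, event):
        # Compare a new event against the baseline; return any findings.
        findings = []
        ident = event["identity"]
        if event["connector"] not in self.known_connectors[ident]:
            findings.append("new connector: " + event["connector"])
        history = self.pull_history[ident]
        if history:
            avg = sum(history) / len(history)
            # Threshold (3x average, floor of 50) is an illustrative choice.
            if event["docs_pulled"] > max(3 * avg, 50):
                findings.append("unusual pull volume")
        return findings
```

In practice you would feed `observe` from a trusted historical window and run `alerts` on live events, routing findings into your existing alert pipeline.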
What good looks like
To protect your organisation’s LLM identities, security teams must place LLM services behind the IdP, enforce SSO and MFA policies for LLM applications, disable direct email-and-password logins where possible, and apply conditional access based on device posture, geography and risk. For programmatic access, organisations need to use managed identities and gateways rather than raw API keys, apply least privilege and environment separation, and set per-identity budgets and alerts for unusual usage.
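As an illustration of per-identity budgets and alerts, the following sketch sums spend per API key and flags overruns. The record fields (`api_key`, `cost_usd`), the flat default budget, and the dollar units are assumptions, not any provider’s actual billing API.

```python
DEFAULT_BUDGET_USD = 100.0  # illustrative fallback budget per key

def check_budgets(usage_records, budgets):
    """Sum spend per API key and return any key over its budget."""
    spend = {}
    for rec in usage_records:
        spend[rec["api_key"]] = spend.get(rec["api_key"], 0.0) + rec["cost_usd"]
    return [
        {"api_key": key, "spend": total,
         "budget": budgets.get(key, DEFAULT_BUDGET_USD)}
        for key, total in spend.items()
        if total > budgets.get(key, DEFAULT_BUDGET_USD)
    ]
```

Run on a schedule against exported usage data, this gives you the “prepaid credit card” early warning: a stolen key shows up as a budget breach before the invoice does.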
For AI subscription seats, organisations need to limit which data repositories they connect to their LLM. Don’t default to “everything for everyone,” utilise Role Based Access Control (RBAC) so highly sensitive data isn’t casually exposed via chat, and treat “connect a new data source” as a governed change, not a user preference.
You also need to monitor AI behaviour, not just logins. Security teams need to connect AI sign-ins, prompts, connector activity and agent actions into their Security Information and Event Management (SIEM) or Extended Detection and Response (XDR) platform. You need to build detection rules around abnormal behaviour for both human and non-human AI identities and, lastly, have a playbook ready for suspected AI identity compromise: revoke access, rotate credentials, investigate patterns, assess data exposure, and notify those who need to know.
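The playbook steps above can be sketched as an ordered runner that keeps an audit trail. The step functions here are placeholders; in a real playbook each would call your IdP and AI provider admin APIs.

```python
# Placeholder steps for a suspected AI identity compromise.
def revoke_access(identity):        return f"sessions revoked for {identity}"
def rotate_credentials(identity):   return f"credentials rotated for {identity}"
def investigate_patterns(identity): return f"activity exported for {identity}"
def assess_exposure(identity):      return f"exposure report drafted for {identity}"
def notify_stakeholders(identity):  return f"stakeholders notified about {identity}"

# Order matters: contain first, then investigate and communicate.
PLAYBOOK = [revoke_access, rotate_credentials, investigate_patterns,
            assess_exposure, notify_stakeholders]

def run_playbook(identity):
    """Execute every step in order and return the audit trail."""
    return [step(identity) for step in PLAYBOOK]
```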
The failures most likely to derail your AI plans in the near term are ordinary identity mistakes. That is the lost nail from which the fall of a kingdom can begin. So treat LLM credentials - human and non-human alike - as part of your core identity fabric and back them up with behavioural monitoring. Ignore these identities, and it won’t be failed models that bring your AI strategy down; it will be attackers using your AI accounts.
Alexander Feick is Vice President at eSentire Labs.