Identity

On Agents and Digital Identity

Over the course of my career, the notion of digital identity has been applied to many things in an enterprise context. First it was the network, then the device, then the user. And just when organisations got used to, and good at, managing workloads, service accounts, and APIs as “first-class citizens”, AI Agents emerged.

If you've been to a tech conference recently, or have managed not to live under the proverbial rock, you’ve heard the term tossed around a lot lately. Vendors are embracing it, management is overusing it, and engineers are (supposed to be) experimenting with it. Someone at your organisation has probably asked you recently whether you’re “using agents yet.” And yet, there’s surprisingly little clarity on what an AI agent actually is, let alone how it should be governed, secured, or identified.

What Is An AI Agent?

Depends on who you ask.

Despite the growing interest, there’s no consensus on what counts as an AI agent. Here’s a rough spectrum of current interpretations as I understand it:

Views on AI Agents

The challenge isn’t just semantic. Each of these interpretations implies very different identity and security requirements.

If an agent is just a stateless function call, maybe you audit the prompt and call it a day. But if it’s an entity that operates over time, remembers context, and initiates actions across systems? That’s not a chatbot. That’s a user you didn’t hire, and one you very likely won't have full oversight of. You might want to apply strict governance protocols to it as it snakes its way through your organisation.

Worse, most enterprises don’t yet distinguish between these types (mostly because I don't know of, or work with, a client that has actually let an AI agent loose in their organisation just yet). There’s a risk of flattening all AI agents into “non-human identities” and assigning them the same governance as a Terraform script or a Slack bot. If all you're after is ticking a box, that's fine, but it will likely become a headache down the line.

Is An Agent A Glorified Service Account?

It's quite tempting to handle new and emerging concepts by mapping them to older ones. When APIs proliferated, we gave them service accounts. When bots showed up in business processes, we registered them in the IAM stack like users and called them machine identities. When cloud workloads emerged, we invented workload identity. AI agents will likely be no different.

Faced with unfamiliar behaviours and ambiguous definitions, most orgs will default to what they know: wrap the agent in a generic machine identity, assign it to a system, and Bob's your uncle. It will get an account, some roles, some documentation and if you're lucky, someone remembers to rotate its API key.

Unfortunately, agents aren't just executing logic — they’re interpreting intent. They’re ingesting data, making decisions, and sometimes taking action in ways that aren’t fully transparent to the humans who invoked them.

Traditional Service Accounts vs Agents

From a security perspective, this creates a troubling blind spot. When something goes wrong, say a leak, a breach, or a misfired account termination, you’ll be left staring at an audit trail that says “Agent Smith did it.” But not why, or on whose behalf, or with what justification. You'd be lucky if Agent Smith were even still around; after all, agents can be ephemeral depending on what they're meant to do.
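To make that blind spot concrete, here's a minimal sketch of what an agent audit event could capture beyond a standard service-account log: the delegating human, the full delegation chain, a stated justification, and a hash of the prompt that triggered the action. Everything here, including the field names and the `example.com` principals, is illustrative and not drawn from any real IAM product.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


def hash_prompt(prompt: str) -> str:
    """Content-address the prompt so the trigger can be audited later."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()


@dataclass
class AgentAuditEvent:
    """One audited agent action, enriched with delegation context."""
    agent_id: str                # which agent acted, e.g. "agent-smith"
    action: str                  # what it did
    on_behalf_of: str            # the human (or agent) who delegated the task
    delegation_chain: list[str]  # full chain back to an accountable human
    justification: str           # why the agent believed the action was needed
    prompt_sha256: str           # hash of the prompt that triggered the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# A hypothetical "misfired account termination" now carries its own context:
event = AgentAuditEvent(
    agent_id="agent-smith",
    action="terminate_account:jdoe",
    on_behalf_of="hr-manager@example.com",
    delegation_chain=["hr-manager@example.com", "agent-smith"],
    justification="Offboarding ticket HR-1234 approved",
    prompt_sha256=hash_prompt("Offboard jdoe effective today"),
)
```

With something like this in place, "Agent Smith did it" at least comes with a who-asked and a why attached, even if the agent itself is long gone.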

So What?

All this raises the question: are our existing identity stacks still fit for purpose?

I have to admit I'm still coming to terms with what a solution to all this will eventually look like. If AI agents are going to become routine actors in the enterprise, our IAM systems will need to evolve well beyond where they are today. Not in the sense of adding another checkbox or creating an “agent” user type. That would be like bolting a sidecar onto a moving train. What’s needed is deeper: a rethink of what identity means when the actor is no longer human or even fully deterministic.

I'm almost certain that we'll see the likes of SailPoint, Okta, Saviynt and others start to address some of these problems in the coming months. Microsoft’s already partnered with ServiceNow and Workday on this front. At the very least, we'll need to look at the following:

  • Creating new identity constructs that are more expressive than a service account and more ephemeral than a workload identity.

  • Auditing the actual prompts that make agents do what they do - and perhaps rethinking what a privileged account means

  • Bringing agents within the remit of workforce identity governance

  • Keeping humans in the loop for decisions about what agents can do

Of course, none of this matters if AI Agents don't take off in a meaningful way within enterprises. But if they do, I'm guessing our identity systems will need to do a whole lot more than just manage access.

———

Postscript:

I used ChatGPT and Claude as ‘thinking partners’ while developing this piece, and am likely to use this method for future posts. It helps me test arguments, identify gaps in logic and explore alternate views. I also used napkin.io to create the included diagrams. Farhad Manjoo’s take on how he incorporated GenAI into his writing workflow is a useful listen, and somewhat similar to how I’ve started using these tools.

Additionally, I drew on some excellent writing that touches on the evolving nature of agency, identity, and AI systems:

  1. Arvind Narayanan & Sayash Kapoor – AI as Normal Technology

  2. Microsoft – 2025: The Year The Frontier Firm Is Born

  3. Benedict Evans – Looking For AI Use-Cases

  4. Identity Defined Security Alliance (IDSA) – Managing Non-Human Identities (2021)

On Worldcoin, DAOs and digital identity black markets

Molly White has a great essay on Sam Altman’s iris scanning orb and its purported use cases.

Much of Worldcoin’s promises are predicated on the questionable idea that highly sophisticated artificial intelligence, even artificial general intelligence, is right around the corner. It also hinges on the “robots will take our jobs!” panic — a staple of the last couple centuries — finally coming to bear. Worldcoin offers other use cases for its product too, like DAO voting, but it is not the promise to solve DAO voting that earned them a multi-billion dollar valuation from venture capitalists.

Other use cases that Worldcoin has offered seem to assume that various entities — governments, software companies, etc. — would actually want to use the Worldcoin system. This seems highly dubious to me, particularly given that many governments have established identification systems that already enjoy widespread use. Some even employ biometrics of their own, like India’s Aadhaar. There’s also the scalability question: Worldcoin operates on the Optimism Ethereum layer-2 blockchain, a much speedier alternative to the layer-1 Ethereum chain to be sure, but any blockchain is liable to be a poor candidate for handling the kind of volume demanded by a multi-billion user system processing everyday transactions.

What will happen when you promise people anywhere from $10 to $100 for scanning their eyeball? What if that’s not dollars, but denominated in a crypto token, making it appealing to speculators? And what if some people don’t have the option to scan their own eyeballs to achieve access to it?

A black market for Worldcoin accounts has already emerged in Cambodia, Nigeria, and elsewhere, where people are being paid to sign up for a World ID and then transfer ownership to buyers elsewhere — many of whom are in China, where Worldcoin is restricted. There is no ongoing verification process to ensure that a World ID continues to belong to the person who signed up for it, and no way for the eyeball-haver to recover an account that is under another person’s control. Worldcoin acknowledges that they have no clue how to resolve the issue: “Innovative ideas in mechanism design and the attribution of social relationships will be necessary.“ The lack of ongoing verification also means that there is no mechanism by which people can be removed from the program once they pass away, but perhaps Worldcoin will add survivors’ benefits to its list of use cases and call that a feature.

Relatively speaking, scanning your iris and selling the account is fairly benign. But depending on the popularity of Worldcoin, the eventual price of WLD, and the types of things a World ID can be used to accomplish, the incentives to gain access to others’ accounts could become severe. Coercion at the individual or state level is absolutely within the realm of possibility, and could become dangerous.

On Facial Recognition and Identity Proofing

Wired has a good piece on the IRS in the US caving to public outcry and ditching its integration with ID.me - a service that was supposed to verify identities by matching video selfies to existing records. It’s understandable why this would cause concern, given that facial recognition is rife with false matches, bias and a reputation for invasiveness. With fraud a pressing issue now that a majority of us (at least in Australia) access nearly every civic service online, governments are going to want to think about how they balance policy, privacy and messaging.

Unfortunately, the landscape at the moment is messy and populated by a number of third-party vendors still finding their feet in an area where privacy and policy concerns are outweighed by sexier usability and convenience use cases.

“The fact we don’t have good digital identity systems can’t become a rationale for rushing to create systems with Kafkaesque fairness and equity problems.” - Jay Stanley, ACLU

It’ll be interesting to see how Australia’s Trusted Digital Identity Framework (TDIF) will look to address some of these inherent problems through a continuous expansion of its standards.

It Takes Two (To Thwart Data Breaches)

Some interesting insight from Gemalto's 2017 Data Breaches and Customer Loyalty Report:

  • Of the 10,000 consumers interviewed, only 27% feel businesses take customer data security seriously
  • 70% would take their business elsewhere following a breach
  • 41% fail to take advantage of available security measures such as multi-factor authentication
  • 56% use the same password for multiple online accounts

While consumers are rightfully sceptical of the security hygiene of the businesses they interact with, there is certainly a role for consumers to play here.