Field Notes [0126]

I’ve been flirting with the idea of building a dedicated local LLM-capable PC for a while now and have half-heartedly specced one up, but haven’t pulled the trigger yet. Moxie’s new venture, Confer, feels like it’ll scratch that privacy itch for a while longer. Reading up on it, it’s fascinating.

You log in with passkeys, which derive the encryption keys right on your device, and your chats are processed within a Trusted Execution Environment (TEE). A remote attestation step verifies the authenticity of the code running on the server, ensuring that conversations remain invisible to anyone outside the TEE, including Confer itself. As Moxie reminds us, the very nature of LLMs invites users to share personal thoughts; this is an elegant solution to keep those thoughts private.

[Diagram: Confer architecture, showing key derivation, remote attestation, and the encrypted chat flow through the TEE]
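To make the flow concrete, here's a minimal client-side sketch of the idea: derive a key on-device from passkey material, check the server's attested code measurement, and only then release an encrypted payload. All names and the attestation check are illustrative assumptions, not Confer's actual implementation (real attestation also involves vendor-signed quotes, and real encryption would use authenticated encryption like AES-GCM).

```python
import hashlib
import hmac

# Measurement of a known-good enclave build (illustrative value).
TRUSTED_MEASUREMENT = hashlib.sha256(b"known-good enclave build").hexdigest()

def derive_chat_key(passkey_secret: bytes, context: bytes = b"chat-encryption") -> bytes:
    """Derive a per-purpose key from passkey material, entirely on-device."""
    return hmac.new(passkey_secret, context, hashlib.sha256).digest()

def verify_attestation(quote: dict) -> bool:
    """Accept the server only if its attested code measurement matches a
    build we trust. (A real check would also validate a vendor signature.)"""
    return hmac.compare_digest(quote["measurement"], TRUSTED_MEASUREMENT)

def send_chat(message: str, passkey_secret: bytes, quote: dict) -> bytes:
    """Refuse to send anything to a server that fails attestation."""
    if not verify_attestation(quote):
        raise RuntimeError("server failed remote attestation; refusing to send")
    key = derive_chat_key(passkey_secret)
    # Stand-in for real authenticated encryption: MAC-then-payload.
    digest = hmac.new(key, message.encode(), hashlib.sha256).digest()
    return digest + message.encode()
```

The point of the sketch is the ordering: the key never leaves the device, and nothing is sent until the server proves what code it is running.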

Reads:

  • Children of Time by Adrian Tchaikovsky – I enjoyed this a lot, but it didn’t quite do for me what Cixin Liu’s Remembrance of Earth’s Past trilogy did.

  • The Mountain in the Sea by Ray Nayler - A nice palate cleanser after Permutation City. Well-researched and thoughtful.

Listens:

  • In holiday clothing, out of the great darkness by Clarice Jensen - Really haunting and beautiful cello work, especially if you like Max Richter.

  • DJ Set by Mouseatouille - Local act. Fun and morose in equal measure.

Watches:

  • 28 Years Later: The Bone Temple (2026), Nia DaCosta - Like the previous one, this sat with me for days. Very topical zombie movie about cults of personality.

  • Picnic at Hanging Rock (1975), Peter Weir - One of Australia’s most important contributions to cinema; up there, for me, with Wake in Fright.


“Fascism doesn’t just ‘pop up’ and then recede when it finds out no one’s into it.” - Dan Harmon

Field Notes [1225]

The Social Media Minimum Age ban in Australia kicks in this month and the noise is deafening. While the headlines are about “banning TikTok for teens”, the real story is the infrastructure underneath it all. We are essentially watching the first mass-scale stress test of Age Assurance tech in Australia. The government has recommended a principles-based approach, and most orgs seem to have settled on ‘Successive Validation’ (inference > estimation > verification), so it’ll be interesting to see how good current tech is at differentiating between 15-year-olds and 17-year-olds. Whatever your stance, this is forcing an interesting conversation: the internet was built without an identity layer and we are now awkwardly trying to retrofit one.
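The ‘Successive Validation’ cascade can be sketched as a simple escalation: try the least invasive signal first (inference), escalate to estimation, and fall back to hard verification only when confidence is still too low. The signal names, confidence fields, and threshold below are my own illustrative assumptions, not any real vendor's API.

```python
from typing import Callable, Optional

# Each signal returns a confidence in [0, 1], or None if unavailable.
Signal = Callable[[dict], Optional[float]]

def infer_age(user: dict) -> Optional[float]:
    # e.g. inference from account history or behavioural signals
    return user.get("inference_confidence")

def estimate_age(user: dict) -> Optional[float]:
    # e.g. facial age estimation
    return user.get("estimation_confidence")

def verify_age(user: dict) -> Optional[float]:
    # e.g. document-based verification; near-certain when present
    return 1.0 if user.get("verified_document") else None

def assure_age(user: dict, threshold: float = 0.9) -> str:
    """Walk the cascade: inference > estimation > verification."""
    for name, signal in [("inference", infer_age),
                         ("estimation", estimate_age),
                         ("verification", verify_age)]:
        confidence = signal(user)
        if confidence is not None and confidence >= threshold:
            return f"passed via {name}"
    return "failed: no signal met threshold"
```

The hard part, of course, is the middle step: estimation is exactly where a 15-year-old and a 17-year-old look alike to a model.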

PS: I was on a panel at CyberArk IMPACT recently alongside some smart people speaking on the topic, “Building Cyber Resilient Organisations – Technical Leadership in the AI Era” and the video is live on their website.

Reads:

  • Permutation City by Greg Egan - Heady, mind-expanding and well ahead of its time. (“‘Simulated consciousness’ was as oxymoronic as ‘simulated addition.’”)

  • The Art of Spending Money by Morgan Housel - Clean, wholesome conventional wisdom for fans of The Psychology of Money.

  • How Kerala Got Rich, Aeon - Interesting read on Kerala’s ‘success’ being attributed to a multitude of factors, from government policies and global trade connections to migration and remittances.

Listens:

  • Lux by Rosalía - Perfect. No notes.

  • So Tonight That I Might See by Mazzy Star - I’ve been revisiting shoegaze greats and this one’s up there with Nowhere by Ride.

Watches:

  • Wake Up Dead Man (2025), Rian Johnson - Fun, moving and made specifically for me.

  • One Battle After Another (2025), Paul Thomas Anderson - Let down only by noisy patrons at my local.

Plays:

  • Dispatch - Mild fun

  • Marvel’s Cosmic Invasion - I hate my thumbs, so this is perfect.

“How we see the world matters - but knowing how the world sees us also matters.” - Ray Nayler, The Mountain in the Sea

“Keep your eye on the doughnut, not on the hole.” - David Lynch, Catching the Big Fish: Meditation, Consciousness, and Creativity

On Agents and Digital Identity

Over the course of my career, digital identity has applied to many things in an enterprise context. First it was the network, then the device, then the user. And just when organisations got used to, and good at, managing workloads, service accounts, and APIs as “first-class citizens”, AI Agents emerged.

If you’ve been to a tech conference recently, or have managed to not live under the proverbial rock, you’ve heard the term tossed around a lot lately. Vendors are embracing it, management is overusing the term, and engineers are (supposed to be) experimenting with it. Someone at your organisation has probably asked you recently if you’re “using agents yet.” And yet, there’s surprisingly little clarity on what an AI agent actually is, let alone how it should be governed, secured, or identified.

What Is An AI Agent?

Depends on who you ask.

Despite the growing interest, there’s no consensus on what counts as an AI agent. Here’s a rough spectrum of current interpretations as I understand it:

[Diagram: Views on AI Agents]

The challenge isn’t just semantic. Each of these interpretations implies very different identity and security requirements.

If an agent is just a stateless function call, maybe you audit the prompt and call it a day. But if it’s an entity that operates over time, remembers context, and initiates actions across systems? That’s not a chatbot. That’s a user you didn’t hire, and one you very likely won’t have full oversight of. You might want to apply strict governance protocols to it as it snakes its way through your organisation.

Worse, most enterprises don’t yet distinguish between these types (mostly because I don’t know of, or work with, a client that has actually thrown an AI agent into their organisation just yet). There’s a risk of flattening all AI agents into “non-human identities” and assigning them the same governance as a Terraform script or a Slack bot. If you’re just after checking a box, that’s fine, but it will likely become a headache down the line.

Is An Agent A Glorified Service Account?

It’s quite tempting to handle new and emerging concepts by mapping them to older concepts. When APIs proliferated, we gave them service accounts. When bots showed up in business processes, we registered them in the IAM stack like users and called them machine identities. When cloud workloads emerged, we invented workload identity. AI agents will likely be no different.

Faced with unfamiliar behaviours and ambiguous definitions, most orgs will default to what they know: wrap the agent in a generic machine identity, assign it to a system, and Bob’s your uncle. It will get an account, some roles, some documentation, and, if you’re lucky, someone will remember to rotate its API key.

Unfortunately, agents aren’t just executing logic — they’re interpreting intent. They’re ingesting data, making decisions, and sometimes taking action in ways that aren’t fully transparent to the humans who invoked them.

[Diagram: Traditional Service Accounts vs Agents]

From a security perspective, this creates a troubling blind spot. When something goes wrong (say, a leak, a breach, or a misfired account termination), you’ll be left staring at an audit trail that says “Agent Smith did it”, but not why, on whose behalf, or with what justification. You’d be lucky if Agent Smith were even still around; after all, agents can be ephemeral depending on what they’re meant to do.
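One way to picture the gap is to look at what an agent-aware audit event would need to capture beyond the actor's name: the delegating principal, the originating intent, and a justification. Every field name below is hypothetical; the shape is the point, not the schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative shape of an agent-aware audit event. "Who did it" alone
# isn't enough for agents; you also need on-whose-behalf and why.
@dataclass
class AgentAuditEvent:
    actor: str          # the agent identity, e.g. "agent-smith"
    on_behalf_of: str   # the human or service that delegated the task
    intent: str         # the prompt or goal that triggered the action
    action: str         # what was actually done
    justification: str  # why the agent believed this was in scope
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def explain(event: AgentAuditEvent) -> str:
    """Render the event as the sentence a responder actually wants to read."""
    return (f"{event.actor} performed '{event.action}' for "
            f"{event.on_behalf_of} because: {event.justification}")
```

Without the last three fields, the responder is back to “Agent Smith did it” and nothing else.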

So What?

All of this raises the question: are our existing identity stacks still fit for purpose?

I have to admit I’m still coming to terms with what a solution to all this will eventually look like. If AI agents are going to become routine actors in the enterprise, our IAM systems will need to evolve well beyond where they are today. Not in the sense of adding another checkbox or creating an “agent” user type. That would be like bolting a sidecar onto a moving train. What’s needed is deeper: a rethink of what identity means when the actor is no longer human or even fully deterministic.

I’m almost certain that we’ll see the likes of SailPoint, Okta, Saviynt and others start to address some of these problems in the coming months. Microsoft’s already partnered with ServiceNow and Workday on this front. At the very least, we’ll need to look at the following:

  • Creating new identity constructs that are more expressive than a service account and more ephemeral than a workload identity.

  • Auditing the actual prompts that make agents do what they do - perhaps rethinking privileged accounts?

  • Including agents within the remit of workforce identity governance.

  • Keeping humans in the loop when it comes to decisions on what agents do.
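The bullet points above can be sketched as one rough construct: an agent identity with an expiring credential, a recorded delegating principal, and a human-approval gate for sensitive actions. This is entirely illustrative, my own guess at a shape, not any vendor's model.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    delegated_by: str   # the workforce identity that owns this agent
    issued_at: float = field(default_factory=time.time)
    ttl_seconds: int = 900  # ephemeral by default: 15 minutes

    def is_valid(self) -> bool:
        """Credentials expire on their own rather than living forever."""
        return time.time() < self.issued_at + self.ttl_seconds

# Actions that always require a human sign-off (illustrative list).
SENSITIVE_ACTIONS = {"terminate_account", "export_data"}

def authorise(identity: AgentIdentity, action: str, human_approved: bool) -> bool:
    """Allow an action only if the credential is still live and, for
    sensitive actions, a human has explicitly approved it."""
    if not identity.is_valid():
        return False
    if action in SENSITIVE_ACTIONS and not human_approved:
        return False
    return True
```

Short-lived by default, owned by a person, and gated on the scary stuff: that's the minimum bar a generic service account doesn't clear.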

Of course, none of this matters if AI Agents don’t take off in a meaningful way within enterprises. But if they do, I’m guessing our identity systems will need to do a whole lot more than just manage access.

———

Postscript:

I used ChatGPT and Claude as ‘thinking partners’ while developing this piece and am likely to use this method for future posts. It helps me test arguments, identify gaps in logic and explore alternate views. I also used napkin.io for creating the included diagrams. Farhad Manjoo’s take on how he incorporated GenAI into his writing workflow is a useful listen and somewhat similar to how I’ve started using these tools.

Additionally, I drew on some excellent writing that touches on the evolving nature of agents, identity, and AI systems:

  1. Arvind Narayanan & Sayash Kapoor – AI as Normal Technology

  2. Microsoft – 2025: The Year The Frontier Firm Is Born

  3. Benedict Evans – Looking For AI Use-Cases

  4. Identity Defined Security Alliance (IDSA) – Managing Non-Human Identities (2021)