
Coming Soon: Lessons from Using AI Tools in Production — A Governance Perspective

This post is a placeholder for an upcoming piece exploring what it's actually like to use AI coding assistants in a production environment, and why that experience matters for the people writing AI governance policy.

The Gap Between Policy and Practice

Most AI governance frameworks are written by people who've never shipped code with an AI assistant. That's not a criticism — it's a structural gap. The people writing policy and the people using AI tools in daily development workflows rarely overlap. But they should.

When you use tools like Cursor and Claude to build production web applications every day, you encounter governance questions in their most concrete form:

  • What data from your codebase is being sent to third-party AI models?
  • Who reviews AI-generated code before it hits production?
  • How do you attribute authorship — and liability — for AI-assisted output?
  • What happens when the AI suggests a dependency with a known vulnerability?
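The last question above is the easiest to make concrete. As a minimal sketch (not from the post itself), a team could gate AI-suggested dependencies against an advisory list before they reach a lockfile. The package name, version threshold, and advisory entry below are hypothetical; in practice you would query a real vulnerability database such as OSV or `npm audit` rather than a hardcoded list.

```typescript
// Sketch: flag an AI-suggested dependency if it falls below a known-safe
// version. Advisory data here is invented for illustration only.
interface Advisory {
  name: string;
  fixedIn: string; // versions below this are considered vulnerable
  reason: string;
}

const advisories: Advisory[] = [
  // Hypothetical entry — a real workflow would pull this from an advisory DB.
  { name: "left-padder", fixedIn: "2.0.0", reason: "prototype pollution" },
];

// Compare two "major.minor.patch" version strings numerically.
function compareSemver(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) - (pb[i] ?? 0);
  }
  return 0;
}

// Returns the matching advisory if the suggested package/version is flagged.
function isFlagged(name: string, version: string): Advisory | undefined {
  return advisories.find(
    (a) => a.name === name && compareSemver(version, a.fixedIn) < 0
  );
}
```

A check like this could run in CI or a pre-commit hook, so the question "what happens when the AI suggests a vulnerable dependency" has an enforced answer rather than a policy-document one.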

The strongest governance frameworks will come from people who understand the technology from the inside — not just as an abstraction, but as a daily workflow.

What I'll Cover

The full post will explore these themes through the lens of my own experience as a junior developer working with AI tools at Simplicity Group, grounded in the frameworks that are emerging to address these questions.

Frameworks in Focus

Several frameworks are beginning to address AI governance in ways that directly touch development workflows:

  1. NIST AI RMF — The U.S. National Institute of Standards and Technology's AI Risk Management Framework, which is becoming the de facto standard for U.S. organizations thinking about AI governance
  2. ISO/IEC 42001 — The international standard for AI management systems, with requirements that map directly to how teams adopt and oversee AI tools
  3. EU AI Act — The most comprehensive regulatory framework to date, with risk classifications that carry implications for AI-assisted development

A Practical Example

Consider a simple scenario: your team adopts an AI coding assistant. The tool has access to your codebase through its context window. It suggests code completions, refactors functions, and generates boilerplate. Productivity goes up. But has anyone asked:

// Questions your compliance team should be asking:
//
// 1. Data flow: What code/data leaves your environment?
// 2. Review: Is AI-generated code held to the same review standard?
// 3. Licensing: Does the AI's training data create IP risk?
// 4. Audit trail: Can you distinguish human from AI code?
// 5. Vendor risk: What's the AI provider's data retention policy?
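Question 4, the audit trail, also lends itself to a lightweight technical answer. One minimal sketch, assuming the team adopts a commit-message convention: append a trailer line such as `AI-Assisted: <tool>` (a hypothetical trailer name, not an established standard) to any commit containing AI-generated code, then parse it when auditing history.

```typescript
// Sketch: recover an AI-assistance audit record from a commit message.
// Assumes the team's (hypothetical) convention of an "AI-Assisted: <tool>"
// trailer line on commits that include AI-generated code.
interface CommitAudit {
  aiAssisted: boolean;
  tool?: string; // e.g. "Cursor" or "Claude", when the trailer names one
}

function auditCommitMessage(message: string): CommitAudit {
  const audit: CommitAudit = { aiAssisted: false };
  for (const line of message.trimEnd().split("\n")) {
    // Trailers follow the "Key: value" shape used by git commit trailers.
    const match = line.match(/^AI-Assisted:\s*(.+)$/i);
    if (match) {
      audit.aiAssisted = true;
      audit.tool = match[1].trim();
    }
  }
  return audit;
}
```

The point is not this particular convention but that "can you distinguish human from AI code?" becomes answerable only if the distinction is recorded at commit time; nothing downstream can reconstruct it.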

These aren't hypothetical. They're the questions I encounter in my own workflow, and they're the questions that governance frameworks need to address with practical, implementable guidance — not just high-level principles.

Coming Soon

The full post will be published in April 2026. It will draw on my daily experience with AI development tools and connect those experiences to the governance frameworks that are shaping how organizations should think about AI risk.

In the meantime, feel free to reach out if you're working on similar questions — I'm always interested in connecting with people thinking seriously about AI governance.