Category: Product Management

  • AI Coding Assistants: Six Months in the Trenches

    I spent the last six months working with AI coding assistants daily. Not as a demo, but as my primary workflow. Here’s what actually changed.

    The shift isn’t about AI writing your code. It’s about how you think about problems.

    The Real Productivity Gain

    Most discussions focus on autocomplete speed. That’s the visible part. The real gain is harder to measure: reduced friction between thinking and implementing.

    When I have an idea, I can test it immediately. Describe the function in plain language, review what the AI generates, iterate. The bottleneck shifts from typing to reasoning.

    Three things surprised me:

    • Debugging time dropped: AI reads error messages differently than humans. It correlates the error with your specific codebase, not just the general pattern. Half my debugging sessions now end in minutes instead of hours.
    • Code review quality improved: When AI suggests changes, it explains the reasoning. I find myself understanding other people’s code faster because the AI can summarize unfamiliar sections.
    • Documentation got actually written: Instead of dreading the docstring, I let AI draft it and then review. This sounds minor until you realize how much institutional knowledge disappears when nobody documents the tricky parts.

    Where It Breaks Down

    AI coding assistants fail in specific ways. Understanding these failure modes matters more than the capabilities.

    Context windows are real constraints. Feed an AI a 50-file codebase and ask about architectural decisions made three years ago, and you’ll get confident nonsense. The model works best with focused, recent changes.

    Security edge cases get missed. AI will suggest code that works for the happy path. It doesn’t naturally think about adversarial inputs, race conditions, or compliance requirements unless you explicitly ask.

    The biggest risk is subtle: learned helplessness. If you rely on AI to generate everything, you stop building the mental models that let you catch mistakes. The tool makes you faster until you forget how to verify the output.

    What I’d Tell My Past Self

    Use AI for the mechanical work. Let it handle boilerplate, refactoring, test generation, and initial drafts. Your job is to define what good looks like and verify the result.

    The developers who thrive won’t be the ones who use AI most. They’ll be the ones who know when to trust it and when to dig in manually.

    The question isn’t whether to use AI coding assistants. It’s whether you’re using them to augment your thinking or to replace it.

    What’s your experience been? Are you seeing real productivity gains, or is the tooling still too immature for your workflow?

  • The Agentic Workflow: How AI is Changing Product Requirements

    The Product Requirement Document has been the backbone of product management for years. It tells engineering exactly what to build. But that model is breaking under the weight of AI-driven development.

    We are moving toward agentic workflows. Agents don’t read specs and wait for clarification. They take a directive, interpret it, and start building. For product teams, this fundamentally changes what a “requirement” even means.

    Instead of a 40-page document, requirements become a set of constraints and success criteria. The PM’s job shifts from writing specs to defining the logic the agent follows.

    Constraint-Based Requirements

    In a traditional workflow, the PM details every user story, edge case, and UI state. That level of granularity was necessary because developer time was expensive and misalignment was costly. Agents flip that cost equation. It is now cheaper to iterate on a high-level directive than to document every step in advance.

    The requirement is no longer a step-by-step instruction. It becomes a boundary.

    • Success metrics over user stories: Instead of “Add a filter dropdown,” the directive is “Users must be able to narrow results to under 50 items with two clicks.” The agent figures out the implementation.
    • Rapid prototyping: Agents can generate working drafts or code skeletons in minutes. PMs validate against the output rather than a theoretical spec, turning discovery into a feedback loop.
    • Technical and persona guardrails: The agent needs rules. “Must use existing API,” “Must comply with WCAG 2.1,” “Target audience: enterprise admins.” These constraints keep the agent’s output aligned with reality.
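A constraint-based directive like the ones above can be expressed as data rather than prose. Here is a minimal sketch in Python; the field names, thresholds, and `meets_criteria` helper are illustrative, not a standard format.

```python
# A hypothetical constraint-based requirement: success criteria and guardrails
# as structured data, instead of a step-by-step spec.
directive = {
    "goal": "Users can narrow results to under 50 items",
    "success_criteria": {
        "max_results_after_filter": 50,   # fewer than 50 items remain
        "max_clicks_to_filter": 2,        # reachable in two clicks
    },
    "guardrails": [
        "Must use existing search API",
        "Must comply with WCAG 2.1",
        "Target audience: enterprise admins",
    ],
}

def meets_criteria(result_count: int, clicks_used: int) -> bool:
    """Check a prototype's observed behavior against the success criteria."""
    criteria = directive["success_criteria"]
    return (result_count < criteria["max_results_after_filter"]
            and clicks_used <= criteria["max_clicks_to_filter"])

print(meets_criteria(result_count=32, clicks_used=2))  # True
print(meets_criteria(result_count=80, clicks_used=2))  # False
```

The point is that the check is automatable: the agent can propose any implementation, and the PM validates the output against the criteria rather than against a document.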

    From Writer to Orchestrator

    This transition moves the product manager away from documentation and toward system management. The value is no longer in how well you write a spec, but in how effectively you coordinate the agents that execute it.

    Three responsibilities become central:

    • Strategic direction: Agents optimize for what they’re told. They don’t know about the Q3 revenue target or the recent customer churn spike. The PM provides the business context that prevents local optimization.
    • Governance: Autonomous systems need hard limits. PMs define the non-negotiables—data privacy boundaries, brand standards, compliance requirements. The agent handles the rest.
    • Human alignment: An agent can draft a feature, but it can’t negotiate with engineering on technical debt or align with sales on a launch timeline. That human coordination is still a PM’s core responsibility.

    The Friction Is Real

    Adopting this workflow is not trivial. Data security is the first hurdle; teams are understandably cautious about feeding roadmaps into external models. Then there’s reliability. Agents hallucinate. They misinterpret nuance. They produce confident but incorrect outputs.

    The practical approach is hybrid. Use agents for the heavy lifting of documentation, test case generation, and initial prototyping. Keep human review before anything reaches production.

    Teams that do this well report significantly shorter cycles from concept to working software. But it requires a new level of discipline. The spec isn’t gone—it’s just executable now.

    How is your team approaching this? Are you using AI to accelerate the discovery phase, or are you still keeping it strictly out of the requirements process?

  • The Real Reason Startups Are Firing Engineers and Hiring PMs (Or Vice Versa)

    If you’ve been paying attention to tech job postings lately, you’ve noticed a strange pattern. Some startups are quietly trimming their engineering teams — not the dramatic headlines of 30,000 cuts at Oracle, but slow, deliberate reductions. And at the same time, they’re hiring aggressively in product management, developer relations, and customer success.

    The obvious explanation is “AI will replace engineers.” It makes for a good tweet. But the reality is more interesting and more nuanced.

    The Cost-to-Value Equation Has Flipped

    Two years ago, a startup’s competitive advantage was its engineering velocity. If you could ship faster, iterate quicker, and build a more polished product than your competitors, you won. So startups hired engineers — lots of them. Every additional engineer meant more features, more experiments, more shipped code.

    AI has compressed that advantage. What used to take a team of three engineers a week now takes one engineer an afternoon with a capable AI coding assistant. The marginal value of each additional engineer has dropped dramatically.

    But here’s the thing nobody talks about: building the product was always the easy part. Finding product-market fit, understanding what customers actually want, pricing it right, communicating it effectively, keeping customers happy — those things haven’t gotten any easier. If anything, AI has made them more important, because now everyone can build.

    The Real Bottleneck Moved

    In 2023, the bottleneck was engineering capacity. In 2026, it’s strategic clarity.

    A startup can now build a functioning MVP in a weekend. Three founders with AI assistants, no dedicated engineering team, and a clear vision can ship something that would’ve required six months and a $2M seed round two years ago. The barrier to building has collapsed.

    But the barrier to knowing what to build? That’s still incredibly hard.

    This is where the shift in hiring comes from. Startups are realizing that their scarcest resource isn’t coding capacity anymore — it’s product insight. They need people who can:

    • Talk to customers and translate messy, contradictory feedback into clear feature priorities
    • Define a positioning strategy that cuts through the noise of a thousand AI-wrapped competitors
    • Write PRDs that actually constrain AI behavior instead of reading as vague wishlists
    • Design go-to-market motions that don’t rely on “build it and they will come”

    That’s a product manager’s job. It always has been. It just got way more valuable relative to everything else.

    But Here’s the Twist: It Goes Both Ways

    Not every startup is the same, and the reverse trend is equally real: engineering-heavy startups are finding they don’t need traditional PMs anymore.

    Why? Because a good engineer with an AI assistant can now do most of what a PM used to do. Draft a PRD? AI can help. Analyze user feedback? AI can summarize thousands of reviews in seconds. Create user personas? AI can do it from your existing customer data. Write a competitive analysis? Ten minutes with an LLM and a clear prompt.

    The PM role is getting squeezed from both sides. On one end, AI-augmented engineers are absorbing the tactical PM work (writing specs, prioritizing backlogs, analyzing data). On the other end, PMs who learn to use AI are becoming so efficient at their core work that fewer of them are needed.

    The surviving PMs are the ones who’ve moved up the value chain — from writing tickets to shaping strategy, from backlog management to market positioning, from feature spec to business model.

    What This Means for You

    If you’re an engineer: your coding skills are table stakes now. The engineers who thrive in 2026 are the ones who combine technical depth with product instinct. You need to be able to talk to users, understand market dynamics, and make judgment calls about what to build — not just how to build it.

    If you’re a PM: stop being a ticket factory. If your job is just writing user stories and grooming backlogs, you are one AI prompt away from obsolescence. Move toward strategy, toward user research, toward the parts of the job that require actual human judgment about what the market wants and why.

    The startups that will win in this environment are the ones that figure out the right ratio. Too many engineers without product direction means you’re building efficiently in the wrong direction. Too many PMs without building capacity means you’re strategizing with nothing to ship.

    The sweet spot is a small, sharp team of T-shaped people — engineers who understand their customers, and PMs who understand the technical tradeoffs — all operating at maximum leverage with AI doing the heavy lifting on execution.

    The org chart is flattening. The roles are blurring. And the people who’ll thrive are the ones who stop thinking about what their title is and start thinking about what the product needs.

    What do you think? Has your team’s ratio shifted, or are you seeing the opposite trend? I’m genuinely curious what the data looks like on the ground.

  • From ‘Chatbot’ to ‘Colleague’: Designing UX for AI Agents

    We’ve spent the last decade designing chatbots. They live in little bubbles on our screens, waiting for us to ask a question. But the next generation of AI isn’t just a chatbot—it’s a colleague. And designing the user experience (UX) for an AI that can take actions, edit files, and make decisions is a completely different challenge.

    The Trust Deficit

    When a chatbot gives you a wrong answer, it’s annoying. When an AI agent takes the wrong action—like deleting the wrong database or sending an email to the wrong person—it’s catastrophic. The primary goal of “Agentic UX” is to bridge the trust deficit between human intent and machine execution.

    This requires a shift from “chat” interfaces to “approval” interfaces. Instead of just showing a text response, the UI must clearly outline: What am I about to do? What tools will I use? And what is the potential risk?

    Designing for “Human-in-the-Loop”

    The most successful AI agents will be those that know when to pause. This is the concept of Human-in-the-Loop (HITL) design. A good agent UX should:

    • Show Its Work: Display a “thought process” or a step-by-step plan before executing complex tasks.
    • Provide Granular Permissions: Allow users to say “Yes” to reading a file but “No” to deleting it.
    • Offer Easy “Undo” Buttons: Since AI is probabilistic, mistakes will happen. The UI must make it easy to roll back changes.
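The three principles above can be combined into a single approval gate. This is a hedged sketch, not a real framework API: the `Action` record, risk labels, and `run_with_approval` wrapper are hypothetical names chosen for illustration.

```python
# A minimal Human-in-the-Loop approval gate: surface the plan and risk,
# auto-allow read-only actions, and pause for explicit approval otherwise.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str          # which tool the agent wants to invoke
    description: str   # plain-language summary shown to the user
    risk: str          # e.g. "read-only" or "destructive"

def run_with_approval(action: Action,
                      execute: Callable[[], str],
                      approve: Callable[[Action], bool]) -> str:
    """Execute an action only after the human has seen what it will do."""
    if action.risk != "read-only" and not approve(action):
        return f"skipped: {action.description}"
    return execute()

# Usage: grant blanket permission to reads, deny everything destructive.
read = Action("fs.read", "Read config.yaml", "read-only")
delete = Action("fs.delete", "Delete old_logs/", "destructive")
deny_all = lambda a: False

print(run_with_approval(read, lambda: "contents...", deny_all))  # runs
print(run_with_approval(delete, lambda: "deleted", deny_all))    # skipped
```

Granular permissions fall out naturally: the `approve` callback can say yes to one tool and no to another, and an undo layer would wrap `execute` with a recorded inverse operation.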

    The Future of Interaction

    We are moving away from simple “prompt and response” toward a more collaborative “review and refine” workflow. As developers and product designers, our job is to make sure that while the AI is doing the heavy lifting, the human remains the pilot, not just a passenger.

    What’s the biggest frustration you’ve had with an AI agent so far? Was it a lack of transparency or a lack of control? Let’s discuss in the comments.

  • The ‘Agentic’ Workflow: How AI is Changing Product Requirements

    For decades, the Product Requirements Document (PRD) has been the bible of product development. It’s a static artifact—a Word doc or a Confluence page—that outlines what we’re building, for whom, and why. But as we shift from building traditional software to designing AI Agents, the humble PRD is undergoing a radical transformation.

    From Static Text to Dynamic Logic

    In a traditional workflow, a PRD describes a feature: “The user clicks a button, and the system generates a report.” In an agentic workflow, the requirements must account for autonomy and probability. We aren’t just defining a path; we’re defining a “solution space.”

    An AI-native spec doesn’t just say what the output should be; it defines the guardrails the agent must stay within. It includes:

    • Success Metrics as Code: Instead of “high accuracy,” we define specific evaluation datasets and pass/fail thresholds for the model.
    • Tool Selection Logic: A map of which APIs or databases the agent is allowed to touch and under what conditions.
    • Edge-Case Simulations: A list of “adversarial” inputs we expect the agent to handle without hallucinating or breaking.
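"Success metrics as code" can be as simple as an evaluation harness with a fixed dataset and a pass/fail threshold. The sketch below is a toy: the eval cases, the 90% threshold, and the lookup-table agent are hypothetical stand-ins for a real eval set and a real model call.

```python
# A minimal evaluation harness: score an agent against a fixed dataset and
# compare the accuracy to a threshold the spec defines.
PASS_THRESHOLD = 0.9  # e.g. the spec requires >= 90% accuracy on this set

eval_cases = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
    {"input": "HTTP status for Not Found", "expected": "404"},
]

def evaluate(agent, cases, threshold=PASS_THRESHOLD):
    """Return (accuracy, passed) for an agent callable over the eval set."""
    correct = sum(1 for c in cases if agent(c["input"]) == c["expected"])
    accuracy = correct / len(cases)
    return accuracy, accuracy >= threshold

# A toy agent answering from a lookup table, standing in for a model call.
toy_agent = {"2 + 2": "4", "capital of France": "Paris",
             "HTTP status for Not Found": "404"}.get

accuracy, passed = evaluate(toy_agent, eval_cases)
print(accuracy, passed)  # 1.0 True
```

Adversarial edge cases slot into the same structure: add inputs the agent must refuse or handle gracefully, and the harness becomes the regression gate the executable PRD runs on every change.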

    The Rise of the “Executable” PRD

    We are moving toward a world where the PRD is an executable file. Imagine a specification that not only tells the engineering team what to build but also serves as the initial “system prompt” or “evaluation harness” for the AI model itself. This shifts the PM’s role from “documenter” to “architect of behavior.”

    For product managers, this means learning to speak the language of constraints. It’s less about writing long paragraphs of user stories and more about defining the logical boundaries within which an intelligent agent can operate safely and effectively.

    Why This Matters for Your Career

    If you’re a PM looking to transition into AI, your ability to write these “agentic specs” will be your most valuable skill. It demonstrates that you understand not just the user’s intent, but the model’s limitations. It’s the difference between building a feature that “works sometimes” and one that users can actually trust.

    How are you adapting your product documentation for AI? Are you still using traditional PRDs, or have you moved to more dynamic frameworks? Let’s talk about it in the comments.