Building AI prototypes as a PM: a superpower with a shadow side

As a product manager, you no longer need to be a developer to build working software. That is a real shift. But the lower the barrier to creating something, the greater the chance of things going wrong in ways you do not see coming.

With the right AI tools, you can build a prototype in an afternoon that would have taken weeks of development time in the past. For a PM, that is a serious advantage: you validate faster, deliver better input for your team and create more value without having to wait for a developer or a sprint every time. But speed does not absolve you of your responsibilities. Not towards your stakeholders, not towards auditors, and certainly not towards the people you are building for. That is the crux of this piece.

Below, I will unpack this in two blocks: first what you can do with it, then what you need to watch out for.

1. Prototypes that actually tell you something

A clickable mockup is nice, but a working prototype tells you far more. Users respond differently to something that feels real. You validate faster, more sharply and closer to reality. What used to take weeks, you now do in a morning, and the insights you get back are correspondingly more concrete.

2. Gathering validated input for your team more quickly

A slide with an idea is one thing. A working demo leads to an entirely different conversation. You can involve users and stakeholders earlier and more concretely, which means you gather validated input for your team faster. Fewer assumptions, better user stories, less waste in the sprint.

3. CI/CD and agents as an extension of your team

AI agents can take over tasks: testing, deploying, monitoring, notifying. A PM who understands how to set this up effectively has a larger team than what is on paper. That is a serious competitive advantage, especially if you work in a small organisation or, as an interim, have few people around you.
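To make "testing and notifying" concrete: a minimal sketch of one such agent step, assuming a Python project that uses pytest. The message format and branch name are my own illustrative choices, and where the message ends up (Slack, Teams, email) is deliberately left out.

```python
import subprocess

def run_checks() -> bool:
    """Run the project's test suite; the agent acts on the result.
    Assumes pytest is installed and configured for the project."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0

def build_notification(passed: bool, branch: str) -> str:
    """Compose the status message an agent would post to the team channel."""
    status = "all tests passed" if passed else "tests FAILED, deploy blocked"
    return f"[{branch}] {status}"
```

Wired into a scheduled job or a CI pipeline, this is the smallest version of an agent that watches your build and tells the team when something breaks, without anyone having to check manually.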

So much for the good news. The flip side: the easier it becomes to build software, the easier it is for things to go wrong without you noticing. Especially in complex or regulated environments.

4. Compliance is not a tick-box exercise

In regulated markets such as healthcare, government and finance, a working product is not enough. It must be demonstrably secure, auditable, and compliant with legislation that is non-negotiable. A prototype you build in an afternoon rarely passes that test. In fact, it can damage your position if you take it into production regardless. Speed is not an argument a regulator will accept.

I know that world from the inside. Through Medux and Zorg bij jou, I worked in healthcare for an extended period, a sector where regulation is non-negotiable and where a fault in a system can have direct consequences for people. But in retail and automotive too, the same applies: the larger the organisation and the more sensitive the data, the more is at stake if you lose control over what you build.

5. Your data goes somewhere

When you enter confidential information into an AI tool, you lose control over where that data ends up. Customer data, medical records, internal strategy: it is not a hypothetical risk. It is a real danger with real consequences for your organisation, your users and yourself. Healthy scepticism is not pessimism here; it is professionalism.
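One practical mitigation is to strip obvious identifiers before anything leaves your organisation. A minimal sketch, assuming you control the pipeline that builds the prompt; the two patterns below are illustrative only and nowhere near a complete PII filter.

```python
import re

# Illustrative patterns only; real PII detection needs far more than regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{2}[- ]?\d{8}\b"),  # rough Dutch-style number
}

def redact(text: str) -> str:
    """Replace matches with a labelled placeholder before the text
    is sent to any external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text
```

The point is not the regexes; it is that the redaction step exists at all, and sits in your pipeline rather than in someone's discipline.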

6. Speed is no excuse for technical debt

AI-generated code is not bad code by definition, but it is rarely code that a mature organisation takes into production without question. Maintenance, documentation, scalability: the AI does not account for any of that. You need to compensate for it, or make sure someone else does. And if nobody does, the debt piles up until it hurts.

7. What you cannot explain, you cannot defend

During an audit, you want to be able to explain exactly how a system works, who has access and how decisions are made. If you cannot provide that answer, you have a problem, regardless of how well the product works. For organisations that want to work with governments, in healthcare or in other heavily regulated sectors, that applies even more so. Building a prototype is one thing. Demonstrating that you have handled it responsibly is quite another.
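The simplest way to answer those questions later is to record them as they happen. A minimal sketch of an append-only audit record, assuming a JSON-lines log file; the field names are my own choice, not a standard.

```python
import json
from datetime import datetime, timezone

def audit_record(actor: str, action: str, resource: str) -> str:
    """One line of audit trail: who did what, to which resource, when."""
    return json.dumps({
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "resource": resource,
    }, sort_keys=True)

# Appending each record to a log gives you an answer to "who had access
# and what did they do" that you can actually show an auditor.
```

Even a prototype can carry this from day one; retrofitting an audit trail after the questions arrive is far harder.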

Conclusion

AI prototypes are a genuine superpower for PMs. Use them to learn, validate faster and provide your team with better input. But do not forget: speed is not a blank cheque. Your responsibility towards stakeholders, users, auditors and regulators does not disappear because you built something quickly. In regulated markets, a working prototype is not yet a responsible product. Knowing the difference is precisely what sets a good PM apart from a naive one.


Curious how to use AI tools responsibly within your organisation or product? Or would you like to discuss what does and does not work in your market?

Send me a message on LinkedIn →