I run an MVP development agency, so read this with that context. But I'm going to give you the honest, unfiltered due diligence framework because the alternative — watching founders get burned by bad agencies — is worse for everyone, including us. When founders have bad experiences with agencies, it makes the entire category harder to sell. Transparency is in our interest too.
Here are the eight questions you should ask every agency you evaluate. Including us. Judge us by these same standards.
Table of Contents
- The Problem With How Founders Evaluate Agencies
- The 8 Questions to Ask Every Agency
- Red Flags in Discovery Calls
- How to Read a Portfolio
- What a Real Project Timeline Looks Like
- Why Fixed-Scope Pricing Matters
- The Contract Terms That Protect You
- Here's Exactly What V12 Labs' Process Looks Like
- Ready to Build?
The Problem With How Founders Evaluate Agencies
Most founders evaluate agencies by answering three questions:
- Are they cheaper than the most expensive option?
- Do they seem competent enough?
- Did the discovery call feel good?
These are the wrong questions. They're proxies for what you actually want to know, and they're easily gamed by agencies that know how to present well.
The agency with the best discovery call might be the worst at execution. The agency with the most polished portfolio might produce great demos and terrible production code. The lowest quote might mean the worst timeline — or hidden costs that balloon the final invoice.
You need to dig deeper. These eight questions do that.
The 8 Questions to Ask Every Agency
Question 1: Can I see production products — not demos — that you've built in my category?
"Here are our Behance screenshots and Figma mockups" is not the answer you want. You want to see live, deployed products that real users use, in a category similar to yours (SaaS, AI tools, marketplaces — whatever applies).
Why this matters: building a beautiful prototype is a different skill from shipping production-quality software that handles real traffic, real edge cases, and real user behavior. Many agencies are excellent at the former and mediocre at the latter.
Ask for a live URL. Open it. Try to use it as a real user would. If it's slow, buggy, or clearly outdated — that tells you something.
Question 2: Who specifically will be building my product?
Some agencies have senior developers doing the selling and junior developers doing the building. You want to know the actual team that will work on your project — their experience levels, their specific backgrounds, and whether they've worked on similar projects before.
Ask: "Can I meet the developers I'll be working with before signing?" A good agency will say yes. An agency that routes you away from this question has something to hide.
Question 3: What does your scope management process look like?
Scope creep is how hourly-billing agencies make money. A good agency has a clear process for handling scope changes — either a defined change request process with explicit pricing, or a fixed-scope commitment.
If they say "we'll handle changes as they come up," that's not a process — that's a blank check to add hours.
Ask for a specific example of how they handled scope creep on a past project. Their answer will tell you everything about how they'll handle it on yours.
Question 4: Who owns the code, and when?
The only correct answer to this question is: "You own the code. It's in your GitHub repository from Day 1."
If the answer is anything else — "you get the code upon final payment," "we host it on our servers," "you own the product, we license the framework" — walk away. These arrangements create dependency that can be weaponized against you later.
Question 5: What's your QA and testing process?
Many agencies build features and ship them with no automated testing. This looks fine initially and becomes a nightmare when you need to change anything — because every change might break something you didn't know was connected.
A good agency will describe their testing approach: unit tests for critical functions, integration tests for key user flows, manual QA before releases. They should also describe their staging environment — where your code is tested before it goes live.
If their testing "process" is "we test it manually before shipping," that's a yellow flag. It's not disqualifying on its own, but press harder on the details.
Question 6: How do you handle it when something breaks after delivery?
This is the question agencies hate most, because the answer reveals whether they actually stand behind their work.
Ask specifically: "If a bug is found in the code you wrote two weeks after delivery, what happens?" A good agency has a clear warranty period (30–90 days is common) during which they fix defects in their work at no charge. They should also have documented what's in scope for that warranty vs. what constitutes a new feature request.
Question 7: What are the three most common reasons your projects go over deadline?
Every agency occasionally goes over deadline. The question isn't whether they do — it's whether they're honest about why. An agency that says "we never go over deadline" is lying. An agency that says "usually it's because the client is slow to provide feedback on designs" is being honest and giving you actionable information (respond to design feedback fast).
Common real answers: unclear requirements, delayed client approvals, third-party API integration problems, scope changes mid-project.
Question 8: Can I talk to two previous clients about their experience?
References are the most underused tool in agency evaluation. Ask for two references from completed projects in the last 12 months. Call them. Ask specifically:
- What was the most challenging moment in the project, and how did the agency handle it?
- What would you do differently?
- Did they deliver what they said they would, on the timeline they said?
- Would you hire them again?
If the agency won't provide references, that's a red flag. If the references are only from long-ago projects or only from projects that are still ongoing — probe further.
Red Flags in Discovery Calls
Red flag #1: They agree with everything you say. A good agency pushes back on scope that doesn't belong in an MVP, questions assumptions about the market, and asks hard questions about the problem you're solving. If they just say yes to everything, they're in sales mode, not builder mode.
Red flag #2: They jump to a solution before understanding the problem. If the discovery call turns into a feature list within 10 minutes, they haven't understood your business. Your product exists to solve a problem for users; the features are the implementation of that solution. An agency that skips straight from "here's my idea" to "here's the feature list" has missed the most important step.
Red flag #3: They don't ask about your users. Who are your users? What's the most important thing they need to be able to do? How will you measure success? If these questions aren't asked in the discovery call, they're not thinking about your product from the right angle.
Red flag #4: Vague answers to the contract questions. When you ask about code ownership, IP assignment, or warranty terms, you should get clear, confident answers. Vague or deflecting responses mean the contract terms may not be in your favor.
Red flag #5: They quote with no scope document. A quote without a specific, written scope of work is a quote that will balloon. "We'll build your MVP for $10K" with nothing written down is not a contract — it's an invitation to dispute.
How to Read a Portfolio
Portfolios are carefully curated to show the best work. Here's how to see past the curation:
Look for recency: Projects from 3+ years ago tell you who they were, not who they are. The tech landscape changes fast. An agency's AI capabilities from 2021 are mostly irrelevant to what you need in 2026.
Look for category depth: A portfolio of 20 projects in 15 different categories suggests generalists. A portfolio of 20 projects with 8 in your specific category (SaaS, AI tools, marketplaces) suggests specialists. For an MVP build, you want someone who's solved similar problems before.
Look for live links: Click them. Are the products actually live? Are they being maintained? Do they look like they have real users? Dead links and broken products are signals.
Ask about the failures: "Which project in your portfolio are you least proud of, and why?" How an agency answers this tells you more about their character than anything on the portfolio page.
What a Real Project Timeline Looks Like
For a well-scoped MVP with one primary user flow and 8–12 core features:
- Days 1–3: Discovery and specification. Final scope document signed off by both parties.
- Days 4–5: Architecture, tech stack decisions, environment setup.
- Days 6–12: Core development. Daily or every-other-day status updates.
- Days 13–14: Integration testing, QA, bug fixes.
- Day 15: Deployment to production, handover documentation, knowledge transfer.
Total: 15 business days. That's our standard timeline at V12 Labs for a standard MVP.
If an agency tells you a typical MVP takes 3–6 months, ask what's in scope. Either the scope is much larger than a true MVP, or their process is slow. A tightly scoped MVP should not take 6 months.
If an agency tells you they can do it in 5 days — be skeptical. You're either getting a very thin product or a template with your logo on it.
Why Fixed-Scope Pricing Matters
Hourly billing aligns incentives against you. Every extra hour of work is more revenue for the agency. There's no incentive to work efficiently, keep scope tight, or push back on feature additions that don't belong in the MVP.
Fixed-scope pricing aligns incentives with you. The agency has committed to a defined deliverable for a defined price. Extra scope is their problem, not yours. If they underestimated something — they absorb the cost of fixing it, not you.
This doesn't mean hourly billing is never appropriate. For maintenance, ongoing support, and iterative work after an MVP is launched, hourly or retainer arrangements make sense. But for the initial build, fixed-scope is how you protect yourself from a billing surprise.
The caveat: fixed-scope only works if the scope is actually fixed — meaning both parties have signed off on a detailed specification before work begins. "Fixed price for the MVP" with no written spec is not actually fixed-price. It's "I'll charge you a fixed amount for what I decide to build."
The Contract Terms That Protect You
Before signing any development contract, verify these five things are in writing:
- IP assignment clause: All code, designs, and intellectual property created in the project are assigned to you upon full payment. Not licensed — assigned.
- Delivery milestones: Specific deliverables tied to specific dates, with a definition of what constitutes completion for each milestone.
- Revision scope: How many rounds of revisions are included? What counts as a revision vs. a scope change?
- Warranty period: Defects in the delivered work are fixed at no charge within X days (30–90 is reasonable). New feature requests are not covered by warranty.
- Access and credentials: A clause specifying that all repository access, deployment credentials, domain access, and third-party account access are transferred to you upon completion.
If any of these are missing from the contract, ask for them. If the agency won't include them, that's the answer you needed.
Here's Exactly What V12 Labs' Process Looks Like
I said I'd be transparent, so here it is:
Before we sign anything:
- Discovery call to understand your business, users, and problem (not just your features)
- Scope reduction exercise to identify the true MVP
- Written specification document — every feature, every user flow, every integration
Once signed:
- Your code goes into your GitHub repo on Day 1
- You have full access throughout the project
- Daily async updates via Slack
- You can see every commit being made
Delivery:
- Production deployment on Day 15 for standard MVPs
- Full handover documentation including architecture overview, deployment instructions, and environment variable guide
- 30-day defect warranty
What you get:
- Full source code ownership
- All credentials and access transferred on Day 1
- No proprietary frameworks
- Clear path for your next developer to pick up where we left off
What we charge: $6K flat. That's it. No hourly overages, no "change request" fees for reasonable adjustments during development, no ongoing maintenance contracts you didn't ask for.
Judge us by the eight questions above. We'd rather lose a deal to honest evaluation than win it by hiding something.
Ready to Build?
If you've asked the hard questions and you like what you hear, I want to build your MVP. V12 Labs specializes in AI-powered products for non-technical pre-seed founders — $6K flat, 15-day delivery, full source code ownership.
Book a discovery call at v12labs.io — and bring your eight questions. We'll answer all of them.