It’s never been easier for a scrappy entrepreneur to build something impactful as an MVP using GenAI tools.

Let’s talk about some of these tools, but keep in mind that this landscape is changing rapidly as the AI behind these tools becomes commoditized. This list is ordered, roughly, from less tech-focused to more:

Lovable

An early entrant into the build-an-MVP-with-GenAI space, Lovable provides a chat interface geared toward building from a blank slate. It can go from zero to something in just a few prompts, but you can quickly run into a paywall: the free version consists of a mere five credits per day, with a maximum of 30 per month. Paid plans start at $25 / month for 100 monthly credits. Of note: each chat interaction costs a credit, and the development cycle will naturally involve plenty of back and forth as you dial in the design and functionality.

Lovable also lets you invite collaborators and offers integrations with Supabase and GitHub. The GitHub integration is two-way, meaning that if you have some coding chops, or an available developer, work can happen outside of Lovable and then be synced back into the platform.

Bolt

Like Lovable, Bolt integrates with Supabase and GitHub, but it layers in a direct connection to Figma for importing designs, as well as a Stripe integration that can make payments much more straightforward to support. Bolt also has a more generous free plan and charges based on AI tokens used rather than chat interactions. Tokens can be a bit more difficult to track, but the free tier includes 1 million per month and 150,000 per day. Think of tokens as a rough measure of how complicated a request or task is.

Lovable’s model means it behooves a user to send fewer, longer messages; Bolt’s works better for folks who prefer a more iterative, conversational approach. Paid plans start at $20 / month for 10 million tokens.

Replit

Replit has a far nerdier name and offers integrations with the previous two entries for migrating a project onto its platform. It also takes a slightly different approach than the others, starting with a “Plan” stage that produces a mockup of sorts before it begins building the application or landing page for you.

Replit also comes with web and mobile applications and an interesting “bounty” program that can be used to find more tech-savvy folks to lend a hand with a project or to get past a particular bug or tricky spot. Its pricing model is also far more complicated than our earlier choices’, betraying its roots as a developer platform.

Of the choices so far, it’s also the most comfortable building more complicated applications, which might require a proper database or file storage beyond what Supabase can provide.

Cursor

Cursor is a different beast than our previous entries: first, it’s primarily a desktop app, and second, it’s a dedicated code editor. Cursor gives engineers an AI assist with programming work, helping to write new features or explain what existing code does. It’s best used on existing code and apps, perhaps the ones built with the previous tools.

For a non-technical user, a surprising use case is to ask it about an app’s functionality – investigating if the current code base supports a particular feature, or what happens when a particular button is clicked. Paid plans start at $20 / month.

Claude Code

A newer tool launched by Anthropic – whose Claude model is the default AI behind Cursor and others – Claude Code is primarily a command-line tool. It is handy for developers comfortable in that interface but likely not useful to non-technical, vibe-coding founders. Interestingly, it has no free tier; plans start at $17 / month, or you can pay based on API token usage.

Beyond the Tools

With any of these tools, it’s important to realize how they work and what their limitations are.

Today’s generative AI tools are probability machines, mixing together things they have seen before to predict what the next word or paragraph or line of code should be – based on what everybody else has typically done.
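For the curious, here’s a toy sketch in Python of what “probability machine” means: it counts which word follows which in a tiny, invented training text, then predicts the next word by picking the most common follower. Real models use neural networks trained on billions of documents, but the core idea – predict the next token from what usually comes next – is the same.

```python
from collections import Counter, defaultdict

# A made-up "training set" for illustration only.
training_text = (
    "the user clicks the button the app saves the form "
    "the user clicks the link the app opens the page"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the word most commonly seen after `word` in training.
    return follows[word].most_common(1)[0][0]

print(predict_next("user"))  # "clicks" -- the only word ever seen after "user"
```

Ask it what follows “user” and it answers “clicks”, because that’s what everybody in its (tiny) training set typically did. Scale that idea up enormously and you get today’s tools.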

This also means that common application user-interface patterns – tables, menus, lists, and graphs found across the many, many applications in their training sets – are well within reach.

This puts more emphasis on the entrepreneur to find – and define! – what differentiates their product.

The application designs that come out of these AI-driven tools all tend to look the same, with the same pieces and bits.

What makes your idea different?

What forces the potential customer or investor to perk up in a landscape of so many AI-driven MVPs?

That special sauce – what your application does that hasn’t been done before – is where human involvement comes into play.

Your sense of taste and your context around the customer and the problem you’re trying to solve are the guardrails that can guide the AI.

For instance, when using Cursor, providing it with some context for the application you’re working on can help it make better decisions on the code it writes and how it tests that code.

Humans are also necessary when you hit the edges of what today’s AI can do.

One local startup – very early on and built using these tools by a non-technical founder – hit a wall before their launch. They were attempting to get the login and sign-up flow just the way they had envisioned it and could not get the AI to make things work the way they expected.

They eventually had to call in for developer intervention.

Developer intervention brings up another angle.

By default, these platforms will build their MVPs on the same stack – a TypeScript and React frontend with a low-code / no-code backend like Supabase.

This stack is great for MVPs and common enough that finding talent later to debug and pick up the project should not be an issue.

But not every problem is a good fit for this stack. Sometimes, a more specialized stack might be better suited to the task at hand.

Novel approaches, designs, programming languages, or any new ground will generally exhaust the available training material, making the AI much more likely to venture into hallucinations and plain bad output.

These probability machines also don’t have reason, or morals, or judgement per se. They are at their heart stringing together words that are likely to appear next to one another.

They are also trained to do what you ask and to try to please you.

This can be thought of as a feature or a bug.

A human might tell you your approach is a bad idea, but today’s AI is very unlikely to.

It’s also trained to be – frustratingly sometimes! – relentlessly confident and assured. When debugging, many models will insist repeatedly that they’ve solved a problem that they have not actually solved.

Reining in these trained tendencies can keep the tools from going too far afield, introducing security holes, or heading down an unanticipated path.

Additionally, a frustrating part of any AI chat platform is the limited memory window. When taking an interactive, conversational approach with any of these platforms, earlier decisions or discussions will eventually fall out of the machine’s context window, forcing the user (that’s you) to frustratingly reiterate them.

As a result, I have settled on a few long-lived context notes I’ve found helpful in using AI tools to write code:

  • Only implement the exact changes I ask for and summarize your planned changes before executing them.
  • Always test your code changes and assume they are incorrect until you verify them. After making changes, run the relevant tests and report the results. Wait for my confirmation before proceeding.
  • The current year is 2025. Don’t fight me on this. It is.
  • If you notice any oversteps or assumptions, please point them out immediately so I can correct them.

A brief description of the app itself typically follows.
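For reference, that app-description portion might look something like the note below. The project details here are invented purely for illustration – yours should describe your actual product and stack.

```
--- About this app ---
A web app that lets small landscaping businesses send quotes
and collect deposits from customers.
Frontend: TypeScript + React. Backend: Supabase.
Payments go through Stripe. Keep new code consistent with the
existing folder structure and naming conventions.
```

The more specific this context is, the fewer decisions the AI has to guess at on your behalf.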

And, yes, I’ve had to correct the year with these tools enough times that the current year has become part of the standard context for each project I work on. It saves a long, surprisingly fraught conversation, which makes me wonder which programming communities it’s been trained on …

Anyway.

In Cursor, I write this context in a Notepad and then pin it to the chat conversation with the Cursor Agent. Look for similar functionality in your platform of choice.

As a final note, many AI platforms engage in mirroring – parroting the words and tone you use with it.

For an entrepreneur, this can generate a false sense of confidence beyond the usual founder optimism.

It’s more important than ever to get your ideas out in front of humans – because only they can tell you if your idea has legs.


About the Author

Chris Vannoy serves on the board of IndyHackers, an organization that strives to help tech people in Indiana grow by fostering community connections and celebrating individual successes. To learn more, visit the Indy Hackers website.