Prototyping with AI: Learnings from Figma Make

This article explores the challenge of creating prototypes with AI connected to real company data.

By Aline Vieira, Product Designer at Taqtile.

Prototyping with artificial intelligence is no longer a future trend — it is already part of the daily routine of many teams working on the creation of digital products.

When well applied, AI accelerates the exploration of solutions, reduces validation cycles, and enhances decision-making capacity throughout the entire process.

At Taqtile, we have a strong culture of experimentation combined with an ongoing pursuit of excellence. This is reflected in how much we value hands-on learning, which allows us to test new approaches, evaluate them critically, and bring that knowledge into our clients' projects.

In the design area, we have conducted different initiatives with AI tools. One of our most recent experiments was the exploration of Figma Make as a tool for creating complex prototypes.

Below, I share how this experiment went and what the main learnings were.

1. About the challenge

As a Product Designer, I was responsible for creating the prototype of a dashboard intended to support the strategic decision-making of an internal team at a multinational company.

The challenge consisted of proposing a simpler and clearer way to visualize data that was already available but came from different sources. This fragmentation limited the analysis process and made it difficult to extract relevant insights.

I then identified an opportunity to conduct a new experiment. I opted to use Figma Make — an artificial intelligence solution that transforms prompts (natural language commands) into functional prototypes, interactive interfaces, and web applications.

The goal was to evaluate the feasibility of testing ideas quickly, generating layouts, components, and interactions directly on the platform.

2. Defining the experiment's objectives

Initially, I aligned with the project team on the intention to test this new approach. Beyond their support to move forward, an interesting challenge emerged: to maximize learning, we would also evaluate the tool's ability to integrate data from other tools.

The focus would particularly be on using this data in tables and graphs, enabling validation closer to a real usage scenario.

It's important to emphasize that the goal was not to evaluate whether the tool "worked" or not, but to understand its limits and identify at which stages of the prototype creation process it adds the most value.

Later, this learning would be documented, shared, and transformed into applicable knowledge for similar projects.

The specific objectives defined for the experiment were:

  • Create a complete prototype 100% via prompts, exploring different writing, refinement, and execution strategies.

  • Test the integration between Figma Make and Google Sheets, building tables and graphs fed by real data.

  • Develop a medium fidelity interface from scratch, without resorting to style guides or pre-existing components.

  • Build a structure with transitions and animations, evaluating whether creating these interactions in Figma Make would be faster and more efficient than the traditional manual prototyping process in Figma.

3. Approaches tested

In general, I tested two distinct approaches, aiming to understand which one would produce more consistent results with less effort:

  • Complete generation of the dashboard in a single prompt, intending to create the most complete initial version possible and refine it later.

  • Starting with the most complex component, followed by an incremental process in which the structure would be built progressively.

Prompt structure recommended by Make Prompt Assistant 5.2

Complete generation of the dashboard with a single prompt

Using a Gemini-generated summary of the transcript of the project's first alignment meeting, and the Make Prompt Assistant GPT to refine the prompt structure, I generated a single detailed command containing all the information that, in theory, would be necessary to produce a good initial version of the dashboard.

Test 1: Generation of an initial version as complete as possible. Some information was omitted to preserve the identification of the project.

This strategy proved effective in quickly obtaining an initial proposal for the information architecture, allowing me to map sections, hierarchies, and main components.

However, it also highlighted relevant limitations:

  • Low level of control in the refinement stages;

  • Implicit decisions made by AI in points that were not clearly specified;

  • Difficulty in consistently and reliably connecting the prototype to Google Sheets.
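
For context on why this integration tends to be fragile: a generated web prototype usually has to pull the spreadsheet through a public CSV endpoint and parse it client-side, so any sharing-permission or formatting quirk breaks the connection. A minimal sketch of that pattern, assuming the sheet is shared publicly (the sheet ID placeholder and the simple comma-only parser are illustrative, not Figma Make's internal mechanism):

```typescript
// Hypothetical sketch: reading a publicly shared Google Sheet from a web prototype.
// "<SHEET_ID>" is a placeholder for the spreadsheet's ID; the export URL pattern
// is Google Sheets' standard CSV-export endpoint for publicly accessible sheets.
const SHEET_CSV_URL =
  "https://docs.google.com/spreadsheets/d/<SHEET_ID>/export?format=csv";

// Minimal CSV parser for simple sheets (comma-separated, no quoted fields).
// Real spreadsheet data often violates these assumptions, which is one reason
// prompt-driven integrations behave inconsistently.
function parseCsv(text: string): Record<string, string>[] {
  const [headerLine, ...rows] = text.trim().split("\n");
  const headers = headerLine.split(",").map((h) => h.trim());
  return rows.map((row) => {
    const cells = row.split(",").map((c) => c.trim());
    return Object.fromEntries(headers.map((h, i) => [h, cells[i] ?? ""]));
  });
}

// Fetch the sheet and turn each row into a header-keyed object for tables/charts.
async function loadSheet(): Promise<Record<string, string>[]> {
  const res = await fetch(SHEET_CSV_URL);
  return parseCsv(await res.text());
}
```

When the parsing step is this naive, a single renamed column or an empty cell can silently shift data in the prototype, which matches the reliability issues observed in the experiment.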

Starting with the most complex component + incremental approach

In the second approach, I started the process by instructing the AI to build only a single page, focusing the effort on the most complex component of the dashboard: the table integrated with the spreadsheet containing real data.

Test 2: Building the most complex component first.
Test 2: Result generated in the first version.

From this core, subsequent prompts were used to gradually expand the structure, building the elements "around" the main component.

Test 2.2: Prompt used in the sequence. Some information was omitted to preserve the identification of the project.

From the perspective of control, predictability, and ability to correct throughout the process, this strategy was the one that brought me the best results in this challenge. In the subsequent prompts, it was possible to establish the navigation structure and add new components progressively, ensuring structural consistency.

Only after completing the structure did the focus shift to visual aspects and interface refinement, a stage that, I must say, was an exploration in itself.

It is worth noting that this was a personal experience, guided by a sequence of decisions that made sense in that context. There are certainly other ways to achieve the same result. One of the most interesting aspects of this type of project is how much it sparks curiosity about how other people would approach the same problem.

More tests throughout the process

Throughout the two approaches mentioned, I tested different prompt-writing strategies, observing how variations in the process affected the level of control and predictability during prototype construction.

I also compared highly detailed requirement specifications with less specific commands. The prompts went through cycles of writing, refinement, and review before execution, often with support from other generative tools, such as ChatGPT and Gemini.

Visual refinement was explored with different levels of specificity, as well as situations where gaps were intentionally left to observe how Make made decisions automatically.

Tests for improving the design of generated components

Finally, the experiment included stating restrictions and limitations in the prompts, evaluating how these instructions influenced the tool's behavior throughout the iterations.

4. Learnings

As I mentioned, for my specific challenge, an incremental approach yielded better results. Besides this finding, I share below other learnings and insights gained throughout the process:

  • Prototyping with Figma Make vs. traditional Figma: The speed with which Figma Make generates hover states, loading states, graphs, and variations of components is a relevant differential. This allows for testing behaviors and user experiences even in the early stages, without needing to build everything manually.

  • Writing and refining prompts beforehand, with the support of LLMs, reduces rework: Using tools like ChatGPT, Gemini, and others to structure, review, and detail prompts before executing them in Make helps organize thoughts, eliminate ambiguities, and anticipate decisions that, if left open, would be made automatically by AI.

  • Prompt clarity = output quality: Perhaps the most important learning from the experiment. AI does not interpret implicit context or vague intentions. The clearer, more specific, and better-structured the prompt, the greater the control over the result. The quality of the output is directly proportional to the quality of the input.

  • A single prompt does not solve everything, but successive micro-adjustments are also a problem: Small adjustments without a good structure generate dozens of new versions of the prototype and tend to produce inconsistent results in the medium term. Short, direct, yet complete commands (with task, context, elements, behavior, and limitations) yield more coherent and predictable responses.

  • Data-driven products require caution: Dashboards, tables, and graphs are still challenging to construct via prompt. Fine-tuning design on these elements, in some cases, is faster and more accurate when done manually.

  • Visual refinement requires a high degree of specificity: Typography, spacing, colors, hierarchy, and components need to be described precisely. Any unspecified detail becomes an automatic decision by AI.
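
To make the "task, context, elements, behavior, and limitations" structure from the learnings above concrete, here is a hypothetical prompt following it. This example is illustrative only, not one of the prompts used in the experiment:

```
Task: Add a filter bar above the data table on the dashboard's main page.
Context: The table shows monthly results per region, fed by the connected spreadsheet.
Elements: A region dropdown and a month selector, aligned to the left of the bar.
Behavior: Selecting a filter immediately updates the table and the chart below it.
Limitations: Do not change the table's columns, typography, or color palette.
```

Note how the Limitations line closes the gaps the AI would otherwise fill with automatic decisions, which was the recurring source of rework in the tests.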

What I have been testing currently

  • Integrate traditional Figma in intermediate stages: use “classic” Figma as a fine-tuning tool for details such as spacing, visual hierarchy, typography, alignment, and aesthetic micro-decisions. These decisions tend to be more predictable and controllable when made manually.

  • Build prototypes from an existing style guide: test the quick composition of screens in Figma Make by leveraging tokens, typography, colors, and components already defined in the style guide. The idea is to generate consistent layouts, accelerate the exploration of flows, and reduce repetitive visual decisions, maintaining the visual coherence of the product from the early versions.

  • Continue exploring integration with other tools: Assess new possibilities for integrating with real data, investigating how external sources and different data formats can be incorporated.

5. The role of the designer in the era of AI

More than just delving into a new tool, this experiment reinforced that the value of AI is directly proportional to the clarity, criteria, and repertoire of those who use it. When there is a well-defined intention, technology enhances results. Without it, it merely accelerates the noise.

Tools like Figma Make expand possibilities, accelerate cycles, and reduce execution barriers. But what sustains good decisions is still the designer's eye: critical thinking, judgment, and the ability to set direction remain fundamental.

Follow Taqtile's upcoming articles on LinkedIn to learn more about our experiments, discussions, and strategies in the use of Artificial Intelligence.