Do you also need to upskill in AI?
You don’t need to learn everything about Artificial Intelligence (AI) — you need to know your next step.
By Naomi Sato
Read to the end and discover how to deepen your self-knowledge.
This article will cover:
What I learned by doing…
What changed in practice…
What I’m doing differently today…
How you can test this…
I’m Naomi, an AI Product Manager and designer who graduated from USP. I lead diversity, equity, and inclusion (DE&I) initiatives such as digital accessibility and women’s leadership, and I am a JICA scholar and kenpi kenshu fellow (a Japanese government technical exchange program for Japanese descendants) in Okayama, the Japanese prefecture from which my paternal grandfather’s family emigrated in the 1930s.
Here is what I learned on this journey, and what might help you on yours.
What I learned by doing…
Experimenting with AI-based product solutions, applying AI to work processes of mine that were manual, repetitive, and intellectually demanding, and also training people, I learned that knowledge about AI expands faster than I can keep up with it. And this is true even for me, someone who has been curious and eager to learn and experiment since childhood.
With the popularization of generative AI, new concepts kept appearing to study and new tools to learn. All of this while I was still getting my head around the previous acronym and figuring out how to use the latest release.
Then, just like that, a new model would arrive promising to be better than the competition. A new way to create intelligent assistants. A new way to connect applications. So much to consume with excitement (situation 1) or with desperation and paralysis (situation 2).
I can only imagine the feeling of those who started this journey later, when the market shifted and AI in companies became a demand, with teams being pressured: “understand it,” “know how to use it,” “create,” “test,” “measure,” “evolve or fix this AI process, product, or service.”
Whatever the case, the result seems the same: too little time and/or energy to go deep, experiment, think critically, and apply things in a meaningful way.
Alongside that come more and more stories about how irresponsible use of AI amplifies problems that already existed, about how people and companies use it, and why they use it.
For me, at the time, that was what pushed me into situation 2, paralysis: not wanting to be involved anymore. That is why it was essential to understand that it is completely okay that the pace at which knowledge about AI expands is not the same pace at which I can keep up.
What changed in practice…
I understood that not being involved would not solve anything. Nor would it help prevent the problems caused by AI misuse.
I also understood that I was not alone in this concern (Tuanny Martins, thank you for existing!) and that I worked in an organization that cared about it.
This combination of factors made me realize there was still a path I could hold onto, seek out, and build: learning and teaching how to use AI in a more ethical and responsible way. It was through doing exactly this, for example, that I discovered the concept of Responsible AI, something that changed my perspective when engaging with the topic.
If the question of responsibility in AI use also moves you, we go deeper into that discussion here: Beyond compliance: the strategic value of responsible AI in generative AI adoption
But understanding the path was only the first part. What really changed was how I started choosing where to put my energy — and when.
What I’m doing differently today…
Focusing more on the WHY (why does this problem deserve attention right now?) and the HOW (what is the most sustainable approach to solving it?), giving myself permission to ‘go slow’ in order to choose right before moving.
In practice: before rushing to test a new AI approach on the first problem I spot, I evaluate which of the many problems has the necessary tension (not too much, not too little) to create a fast solution that can be sustained afterward.
And what is that “necessary tension”?
It is when the problem is not just something I am seeing. It is when other people also see it and are willing to do something to solve it alongside me.
How many solutions have you helped build that depended on other people to keep generating value? In my case, far fewer than I would like.
Do not get me wrong, I love working with people. I genuinely believe in the power of collective intelligence and teamwork that supports and builds together.
It is just that for it to be functional, it needs alignment. Looking back, maybe the trap was too much autonomy and proactivity, with an extra dose of anxiety. I do not regret what I was able to build during that phase because I did it following what I believed in, but perhaps I made the easy choice that ended up making the path harder later.
Today I try to identify the hard choices that will make the path easier.
An easy way to understand this is through an analogy. Think of a rope with people holding each end. The rope represents the problem. The two groups represent the parties involved. If one of the parties does not want to solve the problem, it will keep existing. It is like that in relationships too, isn’t it? Corporate problems are also part of relationships, whether between companies, employees, or clients.
There is actually a game that illustrates this scene perfectly: tug of war!
The difference is that in tug of war, whoever pulls the rope to their side wins. Here, the game is cooperative, not a competition. So the game ends when everyone stays firmly on their feet and, instead of holding the rope between them, they manage to shake hands, hug, and celebrate what they built together.
If the people on both sides are distracted, the game does not even start. If one side pulls when the other is not ready, everyone falls.

Personal archive: illustrated by Naomi Sato to represent absent tension
If both sides start pulling at the same time, each cancels out the other’s force. Even if they stay standing at first, eventually one side gives way and everyone ends up exhausted.

Personal archive: illustrated by Naomi Sato to represent exhausting tension
Now, when all participants are looking at the rope, adjusting the tension, keeping it taut but without pulling or releasing too much, that is when we are ready to begin. That is the sign that we have found the right moment to solve that problem.

Personal archive: illustrated by Naomi Sato to represent necessary tension
In short: we are talking about understanding the right moment to choose what to focus on.
And this applies directly to AI upskilling: the best next step is not the most advanced one, nor the most basic. It is the one where there is real tension, for you and for the people you work with.
For me, it was essential to look at myself honestly and understand my level of AI maturity. From there, I focused on learning what would take me to the next level. And I started paying less attention to the novelties I identified as either far above or far below where I was.
It is worth saying that even though I accepted that the pace at which I learn on my own will not be enough to keep me fully up to date, that does not mean I accepted falling behind.
I just understood that it is not worth walking and learning alone; it is better for all of us involved to mature together, supporting each other and exchanging learnings, to create solutions that work and evolve over time.
My journey may have helped me stay more up to date on one AI topic, but there is certainly another one where you know more than me.
Ready to figure this out together?
How you can test this…
How can you test a new direction for your AI upskilling? Start by identifying what you want to upskill in.
Considering your maturity in AI use: what knowledge or attitude would make not only you, but also the people you work with, see you at the next level?
AI is not a magic button. And it is not just for technical people.
Each level has a signal: if the idea sounds familiar, you have already been there. If it sounds like a possible challenge, that is your next step.
To see this maturity movement in practice: Ysabella Andrade, from Taqtile, used synthetic personas as a maturity filter to identify UX process improvements even before the first user interview.
Learning to use AI is like going up the floors of a building:
>> Start with the staircase that already exists.
Identify which AI features are already built into the tools you use every day. Many AI platforms for business embed these capabilities directly into familiar workflows, and you may not even know they are there.
>> Then, check if there’s an elevator.
Evaluate new AI-based tools that could help with an important process where you identified a problem. Sometimes, the most efficient path is not climbing step by step.
>> Time to customize the elevator.
Maybe the standard elevator is no longer enough: it is slow, hard to use, or no longer fits your needs. This is the moment you start building your own solution, combining tools and choices to create something better suited to the problem you want to solve.
>> Now, neither stairs nor elevator, you’re building something new.
This is the level where you create your own product. It will require deep technical knowledge in AI and other areas to test, validate, sustain, and scale the solution.
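As a playful, purely illustrative sketch of the ladder above: the four floors and the “sounds familiar vs. sounds like a challenge” signal could be encoded as a tiny self-assessment helper. The level names and descriptions below are my paraphrase of the article’s analogy, not an official framework.

```python
# Illustrative sketch of the "floors of a building" AI maturity ladder.
# Level names and descriptions paraphrase the article's analogy; this is
# not an official assessment framework.

LEVELS = [
    ("Staircase", "Use the AI features already built into your everyday tools"),
    ("Elevator", "Adopt a new AI-based tool for a problem you identified"),
    ("Custom elevator", "Combine tools into a solution tailored to the problem"),
    ("New building", "Build and sustain your own AI product"),
]

def next_step(sounds_familiar):
    """Given, per level, whether its idea 'sounds familiar' (True) or like
    a possible challenge (False), return the first level that is still a
    challenge -- your next step -- or None if every floor feels familiar."""
    for (name, description), familiar in zip(LEVELS, sounds_familiar):
        if not familiar:
            return name, description
    return None

# Example: the first two floors feel familiar, the third sounds like a challenge.
name, description = next_step([True, True, False, False])
print(f"Next step: {name} -- {description}")
```

The point of the sketch is only the signal itself: you climb until an idea stops sounding familiar, and that floor is where your energy goes.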
A real example of what this looks like in practice: Aline Vieira, a designer at Taqtile, built a functional prototype integrated with real Google Sheets data, entirely with prompts.
The point is: you need to know which floor you’re on and what the next step is.
To help you with that, I am sharing: Test your AI maturity level
I created this AI upskilling assistant in ChatGPT to solve a pain point of mine and the people I work with: knowing what to focus on to evolve in AI digital maturity. If you use it, leave your feedback so I can improve it. This is already the third version.
If it truly resonated with you and you believe it can help others, share it!
Just don’t forget: it is not AI that will decide your future (or at least it shouldn’t). It improves human decision-making, so use the result to improve your own choices about where to focus your learning. In the end, you are the one who decides what your next step will be.
Depending on your role, though, that decision may not only impact you, it might influence your entire company. It is worth pausing to invest time in the strategy of the next step. If that is your case, you may want to learn more about how to better approach artificial intelligence in business alongside your team. For content like that, I recommend following the updates on Taqtile’s page.
So, do you need to upskill in AI? Probably yes. But not the way the market pressures you to: on your own terms, at your own pace, with your own next step.
Recap:
What I learned by doing… it’s okay that the pace of AI development is faster than you can keep up with.
What changed in practice… realizing that not being involved would not solve anything.
What I’m doing differently today… focusing more on the WHY and the HOW, giving myself more time to choose the right problem and the best way to approach it.
How you can test this… start by identifying what you already know and what will propel you forward. Test your AI maturity level.