Beyond compliance: the strategic value of responsible AI and AI governance in generative AI adoption

An account of Taqtile's experiences in AI ethics and governance, and how the topic can be approached incrementally in the corporate environment

In the race to adopt Artificial Intelligence, it is common to view the ethical dimension as merely a set of restrictions or compliance barriers. Here at Taqtile, we see it differently: we believe that ethical responsibility is not a brake, but a competitive differentiator capable of protecting brand reputation and creating new value opportunities.

We know the topic is complex and often difficult to make tangible in the day-to-day of projects. However, ignoring this layer is not an option. Ethical care is, ultimately, care for the long-term resilience of the business. Even as regulatory and social requirements are still maturing, the brands that get ahead are building far more solid foundations for scaling their innovations.

In this article, Nicolás de Arriba shares practical learnings from dialogues with Tuanny Martins and Danilo Toledo, who lead these processes at Taqtile. Below, you will find strategies for moving the topic out of the theoretical and integrating it into innovation processes incrementally — treating ethics as a constant learning that evolves alongside the technology.

Responsibility and Artificial Intelligence: how to handle the ethical challenges of technology adoption

By Nicolás de Arriba

Concept/Prompt: Nicolás de Arriba · Image: DALL-E 3

A common tension in innovation and Artificial Intelligence projects is that ethical challenges, being vast and complex, risk becoming the proverbial elephant in the room. Everyone knows it is there, taking up space, but the difficulty of addressing all its ramifications often leads to postponement, or to the problem being forgotten altogether.

And in part, this makes sense: every innovation project faces many barriers, and a barrier of this magnitude can determine whether a project continues at all. Hence one of Taqtile's key learnings about the ethical challenge: the complexity of the discussion must not outgrow the scale of the initiative being discussed. It needs to grow as the project advances.

As will be explored below through conversations with Tuanny Martins and Danilo Toledo, who lead these discussions at Taqtile, the path lies between halting a project and avoiding the subject altogether: work through it incrementally.

The goal of this piece is therefore to share what we have learned at Taqtile about keeping the topic from being set aside. The key lies not in having all the answers and commitments on day one, but in turning ethics into a manageable process, one that is addressed and matures at the same pace as the project.

The importance of making it a subject

Following the development of this theme reminded me of my first encounter with the research of Sheila Jasanoff, an Indian-American social scientist and leading reference in Science, Technology, and Society (STS) studies. Studying the impact of new technologies on society, Jasanoff became known for exposing a gap in how we think about society: our failure to account for how much technologies shape the production of social orders.

In her work, Jasanoff introduces the concept of the sociotechnical imaginary: collectively held and publicly performed visions of desirable futures, seen as attainable through scientific and technological advances. In this scenario, she shows, we tend to look at innovation only through the lens of the solution, overlooking how it affects and produces new social orders.

In this sense, the author argues that the greatest risk arises not from the technology itself, but above all when aspects of responsibility and ethical challenge are left out of these sociotechnical imaginaries.

Not coincidentally, this is one of Taqtile's central concerns: going beyond compliance to study the topic with the aim of making it manageable and addressable in the corporate environment. To share what we have learned through internal discussions, we have consolidated three strategies for bringing the conversation into the corporate context.

1. The topic must keep pace with the project

How, then, to bring the subject into practice without stalling innovation? The first strategy suggested by Danilo is the incremental approach.

"Ethics cannot be an afterthought; it needs to be present at the starting point, and evolve in complexity alongside the project." For this reason, the consultancy's role is to raise awareness of broad risks from the outset, while addressing tensions in proportion to the stage of development. This prevents ethics from becoming a sensitive, insurmountable block, and allows it to be addressed step by step, keeping pace with the growing complexity and impact of the solution.

2. Approach it through the lens of efficiency: the cost of not doing it

Tuanny also points to a pragmatic dimension worth addressing: treating ethics from the beginning is a matter of efficiency.

For example, the cost of remediating a poorly built AI model is far greater than that of developing it with responsible criteria from the start. "AI models learn and amplify patterns. If data bias is not addressed at the source, it becomes embedded in the architecture of the system. Correcting this afterward can require anything from costly retraining to discarding the model entirely. Furthermore, technical aspects such as explainability, the system's capacity to detail how it reached a conclusion, often cannot be 'added in' after the fact."
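To make the idea of addressing bias "at the source" concrete, here is a minimal, hypothetical sketch of the kind of check that could run before training: it measures how each group is represented in a dataset and flags groups that deviate from a uniform split. The function name, threshold, and toy data are illustrative assumptions, not Taqtile's actual tooling.

```python
from collections import Counter

def label_balance(labels, tolerance=0.2):
    """Report each group's share of the dataset and flag groups whose
    share deviates from a uniform split by more than `tolerance`
    (an illustrative threshold, chosen here for demonstration only)."""
    counts = Counter(labels)
    expected = 1 / len(counts)          # share under a perfectly uniform split
    total = sum(counts.values())
    report = {}
    for label, count in counts.items():
        share = count / total
        report[label] = (round(share, 2), abs(share - expected) > tolerance)
    return report

# Toy dataset in which one group dominates the examples.
samples = ["masculine"] * 8 + ["feminine"] * 2
print(label_balance(samples))
# {'masculine': (0.8, True), 'feminine': (0.2, True)}
```

Running a check like this before training is cheap; retraining a deployed model after the imbalance has shaped its behavior is not, which is the efficiency argument in a nutshell.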

Beyond that, over the long term, thinking about AI governance and responsibility from "day zero" is about protecting the investment — especially in a market that is set to become increasingly regulated. Proactive AI risk management helps anticipate where regulation will arrive first, allowing companies to get ahead.

3. Ethics is also about generating value

Something Tuanny highlights is that "it is fundamental to shift the perspective: move from a defensive posture ('block AI to avoid harm') to a proactive one. Ethics does not just mitigate risks; it generates value opportunities."

A practical example she brings illustrates this point: "while developing an internal prototype, we noticed that the AI was using predominantly masculine language and examples, reflecting the bias in its training data. By identifying this, it was possible to adjust the model to adopt neutral, inclusive language with a precision and scale that would have been difficult to achieve manually. Here, the ethical lens was not a brake — it was a lever to deliver a technically superior and socially more appropriate product."
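As a rough illustration of how such a language audit could be automated, the sketch below counts gendered terms in a model's output. The word lists and function name are hypothetical placeholders, not the prototype described above; a real audit would rely on curated lexicons and human review.

```python
import re

# Illustrative (and deliberately incomplete) lists of gendered terms.
MASCULINE = {"he", "him", "his", "businessman", "chairman"}
FEMININE = {"she", "her", "hers", "businesswoman", "chairwoman"}

def gendered_term_counts(text):
    """Count occurrences of gendered terms in a piece of model output."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        "masculine": sum(w in MASCULINE for w in words),
        "feminine": sum(w in FEMININE for w in words),
    }

output = "The chairman said he would present his plan to the board."
print(gendered_term_counts(output))  # {'masculine': 3, 'feminine': 0}
```

A skewed count like this does not prove bias on its own, but it flags outputs for review at a scale that manual reading cannot match, which is precisely the "lever" Tuanny describes.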

Looking at the generative side of what Artificial Intelligence can deliver, therefore, is not only about contemplating its desirable (and distant) futures; it is also about understanding how the technology's capabilities can solve real problems today.

How Taqtile is going deeper on the challenge

At Taqtile, as in the broader market, the complexity of the topic is still being mapped and elaborated. By building proprietary tools, the company aims not only to raise awareness, but to bring the subject into the practical reality of companies and projects.

In this direction, a working group at Taqtile is finalizing a Responsible AI assessment checklist. Designed to evaluate specific vulnerabilities, it offers a structured way to assess and implement AI ethics and governance practices in Artificial Intelligence projects. Additionally, our enterprise AI consulting methodology, the AI Sprint, includes dedicated steps for raising companies' awareness of the topic and mapping risks within the project context.

Even so, acknowledging the distance between the complexity of the topic and the initiatives underway, Taqtile holds to its guiding reflection: deal with complexity incrementally; do not avoid or set aside the challenges.

Want to understand the patterns that separate AI projects that deliver results from those stuck in pilot? See what we learned by combining MIT data with our own enterprise experience.

Where does responsibility begin and end?

To close, I want to return to another author I studied during my academic training. In her book In Good Company (2011), the anthropologist Dinah Rajak offers an anatomy of the notion of Corporate Social Responsibility (CSR). Studying it as more than a modern phenomenon, she shows that the notion of responsibility is woven into the very history of how corporations formed.

As the author writes, although the idea of Corporate Social Responsibility is "often presented as a distinctly modern phenomenon, a product of millennial concerns about social and ecological sustainability in an era of globalization" (Rajak, 2011, p. 10, my translation), it emerges much earlier in the history of corporations.

With this, Rajak argues that fixing in advance the boundaries of companies' role and responsibility toward social dimensions, and particularly how that role is or should be exercised, does not fit a pragmatic perspective on the subject. It also ignores the fact that the notion of responsibility can be traced back to the philanthropic industrialists of nineteenth-century Victorian Britain, such as Rowntree, Cadbury, or Lever, with their self-proclaimed commitment to social improvement (Sartre, 2005; Rajak, 2011).

In other words, even if clear boundaries of responsibility can be drawn around technology adoption and its impacts, for the author these boundaries not only can but must always be rethought. Beyond raising awareness of possible negative impacts in technology adoption processes, it is therefore necessary to include responsibility within the sociotechnical imaginary itself, not to treat it as a subject belonging to some sphere outside the companies themselves. The risks involving the new technology also belong to companies.

While the strategies raised in this discussion have shown interesting results, we recognize there is still a long road ahead.

Want to discuss how to integrate Responsible AI practices into your project? Schedule a conversation with Taqtile.

To learn more about how Taqtile is implementing the technology in large enterprises, follow our page and stay up to date with experiences, learnings, and case studies produced by the company's own team.