Governance & Trust

Well-managed AI is successful AI

The study shows a clear correlation between AI success and governance. Companies with clear responsibilities, ethical guidelines, and transparent processes achieve better results than companies that primarily use AI for efficiency reasons.

Regulatory frameworks are also perceived ambivalently. While many companies see short-term restrictions, a majority recognize the potential of clear rules to build trust and ensure long-term competitiveness.

More than 80 percent of decision-makers point out that companies that invest in ethical, transparent, and well-managed AI will be more successful by 2030 than those that focus solely on speed and automation.

Governance and ethical conduct are factors for success – but is regulation too?

The establishment of governance structures for AI and data is one of the top success factors in the use of AI. With the right governance, companies ensure the ethical and transparent use of intelligent applications. This is very important, as more than 80 percent of respondents indicate that companies that invest in ethical, transparent, and well-managed AI will be more successful by 2030 than those that focus solely on speed and automation.

Ethical use requires companies to take responsibility for the impact of their AI deployments. Large companies in particular should keep this in mind, since they find it easier to provide the necessary resources; almost four-fifths of respondents emphasize this point.

Companies need clearly assigned responsibilities for this ethical approach, and for the strategic management of AI in general. Dedicated AI executives can fill this role: according to around 79 percent of respondents, successful companies are characterized by such executives. The main task of a Chief AI Officer, for example, would be to oversee all aspects of the use of intelligent applications, including prerequisites such as a suitable data basis and the necessary skills among the workforce. In this way, a company can draw the greatest possible benefit from the technology.

Decision-makers believe that by 2030, every successful company will need a dedicated AI executive—such as a Chief AI Officer, Agent Team Lead, or Agent Workforce Lead—to strategically manage the economic and ethical value of AI.

Regulation as an opportunity – or a risk for European companies?

Ethical conduct is not only the responsibility of individual companies, but can also be supported by appropriate framework conditions. Regulation plays a role here. For example, in mid-March 2024, the European Parliament passed the Artificial Intelligence Act (AI Act).

The EU AI Act regulates aspects such as the authorship of information and requirements for fair, transparent, and ethical algorithms. Depending on the risk posed by a given AI system, its use is regulated to varying degrees or, in extreme cases, prohibited. Companies that fail to meet requirements such as effective risk management or adequate quality assurance and (technical) documentation face financial penalties.
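A minimal, illustrative Python sketch of this risk-based logic: the tier names follow the Act's published risk categories, while the obligation lists are simplified placeholders based only on the requirements named in this section, not a legal or complete mapping.

from enum import Enum

class RiskTier(Enum):
    # Risk categories of the EU AI Act (simplified labels).
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strictest obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

# Hypothetical, simplified obligation list per tier for illustration only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["use prohibited"],
    RiskTier.HIGH: ["effective risk management", "quality assurance", "technical documentation"],
    RiskTier.LIMITED: ["transparency and disclosure obligations"],
    RiskTier.MINIMAL: [],
}

def required_obligations(tier: RiskTier) -> list[str]:
    """Return the simplified obligation list for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # Example: a high-risk system faces the strictest requirements.
    print(required_obligations(RiskTier.HIGH))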

Although this may serve ethical purposes, the majority of respondents view the current regulatory framework critically in terms of the competitive situation of European companies that use AI. For example, some of the experts interviewed point out that the EU AI Act involves significant bureaucratic requirements and thus a great deal of effort. The goal may be right, but the implementation is not.

However, the corporate decision-makers surveyed do not fundamentally reject regulation in the field of AI. If appropriately designed, it may even improve the competitiveness of European companies, because it creates trust: partners and customers who work with regulated companies in the field of AI can better assess any risks.

A majority of decision-makers criticize that the current design of the EU AI Act worsens the competitive situation of European companies that use AI.

A majority of decision-makers expect that adequate regulation of AI in the EU can improve the competitiveness of European companies.

When I look at the EU AI Act from the perspective of a company that uses AI applications, I find the regulation to be very positive. It raises awareness of the associated risks, so that not just any tools are used. The mandatory training and the associated knowledge building are also positive.

Gunnar Weider, SVP Enterprise Architecture, Evonik Industries AG
