IBM India/South Asia Blog

Responsible AI – the what and the why of it

Feb 15, 2024

By Geeta Gurnani

In the next few years, artificial intelligence (AI) adoption will continue to accelerate with more mainstream use cases, as organizations leverage it to transform processes, reduce costs and increase business value. Building and scaling AI models will become business critical, but in the haste to do so we should not forget that this transformation needs to be balanced with ethics and accountability so that users can trust the technology. Hence responsible AI, which adheres to the principles of explainability, fairness, robustness, transparency and privacy, should be considered an integral part of AI design and deployment.

There is a growing need – driven by real business impact – to proactively drive fair, responsible, ethical decisions and to comply with laws and regulations. No organization wants to be in the spotlight for the wrong reasons, and unfortunately there have already been many cases that brought issues of unfair, unexplainable or biased AI to the forefront. In addition, AI regulations are growing and changing at a rapid pace, and noncompliance can be very expensive for companies. This issue is compounded for global organizations with branches in multiple countries that must meet local and country-specific regulations. Organizations in highly regulated industries such as healthcare or BFSI face the additional challenge of even stricter industry-specific regulations.

So, what is responsible AI? The definition by Gartner is perhaps the easiest to understand – “the process of creating policies, assigning decision rights and ensuring organizational accountability for risks and investment decisions for the application and use of artificial intelligence techniques.”

While there are several aspects to achieving responsible AI, one key component is governance. Not all models are created equal, but every model needs governance to drive responsible and ethical decision-making throughout the business. A platform-centric approach to AI governance allows you to direct, manage and monitor your organization’s AI activities.

For many organizations, AI governance requires a great deal of manual work, which is amplified by changes in data and model versions and by the use of multiple tools, applications and platforms. Manual tools and processes can lead to costly errors, and the resulting models often lack transparency, proper cataloguing and monitoring. These “black box” models can produce analytic results that cannot be explained even by the data scientists and other key stakeholders.

The pace of transformation in this space will only accelerate over the next few years. Hence there will be a need to employ software automation to strengthen an organization’s ability to mitigate risks, manage regulatory requirements and address ethical concerns for both generative AI and machine learning (ML) models.
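To make the idea of automated governance checks a little more concrete, here is a minimal, illustrative sketch (not a description of any particular product) of how a simple fairness metric – the disparate impact ratio – could be computed and logged automatically as part of a model validation step. The model name, groups, threshold and data below are hypothetical and exist only for illustration.

```python
# Illustrative sketch only: a hypothetical automated governance check that
# computes a simple fairness metric (disparate impact ratio) for a model's
# predictions and flags the model for review if the ratio falls below a threshold.

def disparate_impact_ratio(predictions, groups, privileged, unprivileged):
    """Ratio of favourable-outcome rates between two groups (1.0 = parity)."""
    def favourable_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    priv_rate = favourable_rate(privileged)
    return favourable_rate(unprivileged) / priv_rate if priv_rate else 0.0

def governance_check(model_name, predictions, groups, threshold=0.8):
    """Record the metric and flag the model if it falls below the threshold."""
    ratio = disparate_impact_ratio(predictions, groups, "A", "B")
    record = {"model": model_name,
              "disparate_impact": round(ratio, 3),
              "status": "needs review" if ratio < threshold else "ok"}
    print(record)  # in practice this would be written to a model catalogue or audit log
    return record

if __name__ == "__main__":
    # Toy data: 1 = favourable prediction; "A" is the privileged group, "B" the unprivileged one
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    governance_check("credit-risk-model-v2", preds, grps)
```

In a governed environment, checks like this would run automatically on every model version and feed a central catalogue, rather than relying on individual teams to remember to run them by hand.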

To know more about how you can enable governance in your organization, visit our website.

Geeta Gurnani, IBM Technology CTO & Technical Sales Leader, India & South Asia
