Tech Term Decoded: Responsible AI by Design

Definition

Responsible AI is an approach to designing, developing, deploying, and using AI systems based on a framework of principles that covers the broader societal impact of AI systems and the measures needed to reduce the risks and negative outcomes that may arise from the use of AI, while still maximizing positive outcomes [1].

To better understand this concept, consider a scenario in which a ride-sharing safety AI is tasked with ensuring passenger and driver safety in a growing ride-sharing market while addressing gender-based concerns.

Using a Responsible AI approach, the following would have to be considered:

Safety First: Real-time monitoring of unusual route deviations, detection of unsafe driving patterns using local road conditions, and provision of emergency alerts in local languages.

Gender Sensitivity: Optional female driver matching for women passengers

Cultural Respect: Understanding local transportation customs, preferences, and cultural factors, such as avoiding certain areas during religious observances.

Economic Fairness: Prevention of surge-pricing exploitation during emergencies, and fair pricing that accounts for local economic conditions.
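Two of the safeguards above, opt-in female driver matching and surge-price capping, can be sketched in a few lines of code. This is a minimal illustrative sketch, not a real ride-sharing API: the class, function names, and the 1.5x surge cap are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    gender: str           # e.g. "female", "male"
    safety_rating: float  # 0.0 - 5.0

def match_driver(drivers, prefers_female_driver=False):
    """Return the highest-rated driver, honoring an opt-in
    female-driver preference when a match is available."""
    pool = drivers
    if prefers_female_driver:
        female = [d for d in drivers if d.gender == "female"]
        if female:  # fall back to the full pool if none available
            pool = female
    return max(pool, key=lambda d: d.safety_rating) if pool else None

def capped_fare(base_fare, surge_multiplier, emergency=False, cap=1.5):
    """Cap surge pricing, and disable surge entirely during
    emergencies to prevent exploitation (hypothetical policy)."""
    if emergency:
        surge_multiplier = 1.0
    return base_fare * min(surge_multiplier, cap)
```

Note the fallback in `match_driver`: if no female driver is available, the system still serves the rider rather than failing, which keeps the preference optional rather than a hard constraint.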


Responsible AI by Design

Figure: The basic principles of Responsible AI [2].

Origin

The origin of Responsible AI can be traced back to 2016, when Amazon, Google, IBM, Facebook (now Meta), and Microsoft formed the Partnership on AI, an alliance with the aim of studying and promoting the responsible use of artificial intelligence. This partnership set the stage for today's set of principles guiding responsible AI.

Since then, numerous public and private endeavors have joined the quest to refine voluntary guidelines for responsible AI. Some national and local governments such as US states (e.g., California, Colorado, Illinois, New York) already have laws in place related to AI, with more states set to follow [3].

Context and Usage

We can see the applications and benefits of responsible AI in healthcare, business, customer service (including chatbots), and human resources (recruitment).

In healthcare, organizations use responsible AI to improve patient outcomes while upholding ethical considerations. It also helps treatment recommendation systems guide treatment plans while avoiding discriminatory practices.

In business and customer service, responsible AI improves user experiences while maintaining ethical standards. Companies also use AI-powered chatbots to interact with customers without bias, ensuring consistent and fair responses.

In human resources departments, responsible AI is used to improve ethical recruitment and employee management. AI systems screen candidates impartially, ensuring fair hiring practices [4].

Why it Matters

With the increasing popularity and use of software with AI features, the Three Laws of Robotics created by science fiction writer Isaac Asimov are no longer a sufficient benchmark; standards for AI are needed beyond them. Responsible AI can be used to mitigate bias, build more transparent AI systems, and increase user trust in those systems.

Often, the datasets used to train the machine learning (ML) models behind AI systems introduce bias, whether through incomplete or faulty data or through the biases of the people training the model. A biased AI program can end up harming people: for instance, it can unfairly decline applications for financial loans or, in healthcare, misdiagnose a patient [5].

In Practice

Ada Health is a real-life case study of responsible AI in practice. Ada is an AI-powered chatbot that helps users assess their symptoms. The platform boasts over 13 million users globally and over 34 million symptom assessments. The company practices responsible AI by securing data, implementing ethical principles, testing thoroughly, and conducting regular audits [2].

References

  1. Stryker, C. (2024). What Is Responsible AI?
  2. Oleksandra. (2024). A Guide to Responsible AI: Best Practices and Examples.
  3. Vartak, M. (2023). Responsible AI Explained.
  4. Convin. (2024). Examples of Responsible AI in Action Across Industries.
  5. Hashemi-Pour, C., & Gillis, A. S. (2024). What Is Responsible AI?

Kelechi Egegbara

Hi, I'm a Computer Science lecturer with over 12 years of experience, an award-winning academic adviser, a member of the Computer Professionals of Nigeria, and the founder of Kelegan.com. With a background in tech education, I've dedicated the later years of my career to making technology education accessible to everyone by publishing papers that explore how emerging technologies transform sectors like education, healthcare, the economy, agriculture, governance, the environment, and photography. Beyond tech, I'm passionate about documentaries, sports, and storytelling, interests that help me create engaging technical content. Connect with me at kegegbara@fpno.edu.ng to explore the exciting world of technology together.
