October 22, 2024

Why should designers think about AI Human Experience rather than only AI User Experience? About direct and indirect users



Image generated by ImageFX

Picture yourself designing an AI application that streamlines the loan decision-making process within the banking sector. Bank staff will no longer spend countless hours checking the documentation of individual applicants. Instead, they’ll input data into the system from the documents provided by bank clients, along with their financial data and credit histories, allowing the AI system to propose decisions on whether to grant a loan to a specific individual.

Consequently, bank employees will become direct users of the AI system. But are they the only recipients of the AI output? Certainly not. Many applicants may even remain unaware that it was the AI model that decided whether or not to grant them a loan. In the end, the AI model we design has a significant impact on people’s lives: a bank client may be unable to buy their dream house or send their child to study abroad after their application has been rejected.

Furthermore, this second group significantly outnumbers the first among the recipients of AI output. In extreme cases, an individual who barely meets the loan conditions might secure credit at one bank while another bank denies them, assuming both banks rely on their own AI models, trained on different datasets and under different assumptions. Additionally, the AI model lacks the interpersonal skills that bank employees bring to their interactions with applicants. Drawing on their experience, employees can consider factors like a client’s motivation or the broader socio-economic context to make a fairer decision.

Although two distinct clients may exhibit similar numerical profiles, a bank employee’s deeper understanding of their circumstances may lead to assessments that diverge from those made by AI. AI models perceive numbers but lack the broader human context, which is often crucial for avoiding discriminatory decisions.

There are various ways to design how such a system delivers its decision. One approach could entail the model providing a binary decision — either yes or no. Alternatively, the model might express the outcome on a scale from 0% to 100%, categorising results into three groups: decisively no, indecisive, and decisively yes. The group labelled ‘indecisive’ could then undergo further scrutiny by the bank employee, who makes the final decision.
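To make the second approach concrete, here is a minimal sketch of how a model’s score could be routed into the three groups. The thresholds (30% and 70%) and the function name are purely illustrative assumptions for this article, not part of any real banking system or specific library.

```python
# Minimal sketch: routing an AI approval score into three decision groups.
# The 30% / 70% thresholds are illustrative assumptions, not recommended values.

def route_loan_decision(approval_score: float) -> str:
    """Map the model's 0-100% approval score to one of three outcomes."""
    if approval_score < 30:
        return "decisively no"    # automatic rejection
    if approval_score > 70:
        return "decisively yes"   # automatic approval
    return "indecisive"           # escalated to a bank employee for review


print(route_loan_decision(85))  # -> "decisively yes"
print(route_loan_decision(45))  # -> "indecisive" (a human makes the final call)
```

The design choice here is that only the middle band reaches a human reviewer, which is exactly where the employee’s contextual knowledge adds the most value.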

Designers are tasked with holistically considering the design of digital solutions at the nexus of technology and humans. We’ve already grasped the profound implications of application design for human life. Long-term consequences include the impact of social media design, of both interfaces and algorithms, on teenagers’ self-esteem, or the way dating apps have altered the dynamics of interpersonal relationships.

Hence, it’s vital to discuss not only a user-centered design approach but also a broader human-centered design approach, anticipating the long-term effects of specific solutions on individuals and societies. Naturally, technology also yields positive outcomes, exemplified by the proliferation of e-learning platforms, which have dismantled numerous geographical and economic barriers to education.

As the number of AI-based solutions grows, so does the number of direct and indirect users, many oblivious to the impact of AI model design quality on their lives. Therefore, it’s imperative for UX designers to consider the broad context when designing such digital solutions across various scenarios, elucidating the short- and long-term positive and negative effects of the software. Pondering “what if…” and considering potential extreme scenarios reflects the essence of UX designers’ work.

Image designed by Anna Maria Szlachta

When designing AI solutions, a critical aspect is crafting interactions with the AI system for direct users (e.g., bank employees) and considering aspects such as usability, accessibility, learnability, and user feedback mechanisms to continually enhance the AI user experience. Equally crucial is acknowledging the broader context of the indirect AI human experience (all bank clients applying for a loan), encompassing ethical, social, cultural, and psychological aspects of AI adoption and usage. These include considerations such as privacy, trust, transparency, bias, fairness, accountability, and the broader societal implications of AI deployment. This poses a formidable challenge for multidisciplinary teams, where, alongside technology and business experts, human factor specialists must actively contribute to creating AI systems that are inclusive for all.

*Of course, the example presented here is described in very general terms; an actual implementation requires an in-depth analysis of each case and an understanding of many technological, business, and human aspects of AI experience in order to make the best decisions while designing AI solutions.
