Words like analogue, non-AI, acoustic, artisan and handmade are used to describe human-made content. If authenticity becomes even more valued in the future, we should also consider how it affects trust in digital systems.
Read my previous blog → Made by human or made by AI, does it matter?
If we feel that humans put more effort into the outcome, would we consider content with the “made by human” label more trustworthy? Or are humans in fact more biased than AI, driving their own agendas, sometimes perhaps unconsciously?
We are teaching our children how to handle information found on the internet and social media, how to be critical and use media literacy skills. The same goes for the new era of AI-generated content and AI systems: knowing how generative AI works in practice helps us evaluate how trustworthy its output is.
Trust is hard to gain but easy to lose
Trust is one of the key factors in this new era of AI. We know from human relationships that building trust takes time, for example with a new friend or team member. Once trust has been built, it brings feelings of loyalty and stability. But you may also know from experience that when trust has been lost for some reason, it takes time to win it back.
The same goes for new technologies, new innovations and especially AI solutions. I have been doing user research for about twenty years now, and I have seen that users’ trust in technological solutions, machines, user interfaces and digital services can be quite fragile in the beginning. If trust gets broken, it is far more difficult to win users back than it was in the beginning, when everything was new and unknown.
So let’s be careful when experimenting with AI. I have certainly heard some quite wild ideas for ChatGPT analyses that would clearly violate users’ privacy and betray their trust. People get excited easily, and wild ideas can become a scary reality quite fast when the ethical risks are not considered.
Ethical aspects
The basic formula for evaluating ethical aspects is:
- What are the ethical risks?
- What are the possible positive outcomes?
- What is the relationship between the two? Which outweighs the other? Can we afford the risks in exchange for the positive outcomes?
Designing AI solutions, or any digital solutions, usually involves both: risks and positive impact. When I create new concept ideas, I like to evaluate them using our Ethical Design lenses. This value-based approach helps me deal with the dilemma and make informed decisions.
In our Ethical Design lenses, lens number 6 addresses trustworthiness. I find this lens useful when experimenting with AI. An AI solution should work in line with the user’s personal goals, not pursue hidden aspirations that cannot be openly communicated to the user.
Do not design systems that work behind the user’s back or against whatever they would deem as good.
Trustworthiness – Ethical Design lenses
It is important to recognize when it’s time to ask: Is this ethical or unethical? Does this promote a good life for users, as well as for society? Ethical consideration is always useful and very often needed. With this mindset we can anticipate the challenges digitalization might bring. Just asking ethical questions increases awareness and takes you further on the path towards ethical digital solutions.
The Ethical Design framework provides helpful tools for reflecting on AI-related questions.