Many of us use Artificial Intelligence (AI) systems built with Machine Learning (ML) methods every day. Especially when judges or doctors use assistive ML, the right level of trust in an AI system is critical: too much or even blind trust can lead to ill-considered decisions, while too little trust means that valuable information provided by the assistive AI is ignored.

In recent years, many methods have been proposed to make AI systems and their predictions more transparent and thereby foster trust in them. To what extent transparency actually increases trust in AI systems, however, has remained largely unexplored.

In collaboration with Philipp Schmidt (Amazon Research) and Prof. Timm Teubner (TU Berlin), we are investigating whether and when transparency actually increases trust in AI systems.

Preliminary results indicate that transparency can indeed increase trust and can substantially improve human-AI collaboration (Schmidt and Biessmann, 2018).

However, we also find that transparency in AI systems can have the opposite effect: in some cases it leads to blind trust in, or outright disregard of, an assistive AI's recommendations (Schmidt et al., 2020).


An important implication of our results is that quality metrics for transparency should always take human cognition into account (Biessmann and Refiano, 2019).

Our experiments further suggest that a wide range of factors modulate the effect of transparency on human-AI collaboration and trust; in particular, task difficulty and personality traits such as risk aversion can alter how transparency affects trust in AI systems (Schmidt and Biessmann, 2020).


Contact at the BHT:

Prof. Dr. Felix Bießmann