What is the role of human dignity in governing Artificial Intelligence?
Progress in science and technology makes our lives more comfortable and longer. Nevertheless, it also presents massive challenges, among them challenges related to the dignity of the users of new technologies. It is for this reason that, in recent times, the topic of human dignity has been raised in discussions on the way forward regarding technological development, in particular in relation to robotics and AI technologies. These debates have predominantly featured issues related to autonomy in driverless cars, the moral dilemmas of deploying ‘killer robots’, the challenges presented by systems that monitor our online searches, spam us with advertising, and influence voters’ decisions, but also applications in medicine and algorithms that determine police profiling or foreclosure on a mortgage.
In this regard, human dignity is generally considered to be the inviolable value upon which all fundamental rights are grounded. It is an established legal concept, widely regarded as the central value underpinning the entirety of international human rights law. Yet despite the large and erudite body of available literature, there is still little or no consensus as to what the concept of human dignity demands of lawmakers and adjudicators. This is so even though we continue to emphasise the importance of human dignity in order to ensure the development of ‘good’ technologies.
In this episode of The Law of Tech Podcast, I discussed the role of human dignity in AI governance with Lexo Zardiashvilli, Ph.D. researcher at Leiden University (the Netherlands).
Follow The Law of Tech on LinkedIn and Twitter to get behind the scenes and receive episode insights. If you enjoyed the episode, please make sure to share it with your network, and feel free to contact Hadassah Drukarch at thelawoftech@gmail.com for feedback and suggestions for future episodes.