In an age where deepfakes are becoming increasingly hard to distinguish from reality, what impact does that have on trust, ethics and interface innovation? Should we treat a digital-human AI as conversationally equivalent to an adult, or instead give it a child's voice to better reflect the linguistic maturity of the AI engines that sit beneath the surface? How do we help our audiences discern illusion from reality through carefully considered interaction design? And what do malicious actors mean for the cybersecurity of a robust design?
There's some great work being done by lauded institutions such as the Alan Turing Institute on the ethical implications of AI and the problems it will pose for effective regulatory oversight to protect consumers. Positioning the critique and regulation of AI as a domestic political and economic issue, though, ignores the global availability of these systems and the global addressable-market mindset of the vendors behind the engineering.
We're busily exploring the relevance of interaction innovations such as chatbots and "digital humans" as a way of supporting a variety of health-sector problem sets, since technology like this helps achieve scale in service delivery. But we really value the tough questions being posed in an age of increasingly blurred realities: in the long run, they will lead to better design and guide innovators' journeys towards what's viable, ethical and trustworthy.
Longer reads
https://www.parliament.uk/business/committees/committees-a-z/lords-select/ai-committee/news-parliament-2017/ai-report-published/
https://www.turing.ac.uk/media/news/turing-response-house-lords-ai-report/
https://www.technologyreview.com/s/603895/customer-service-chatbots-are-about-to-become-frighteningly-realistic
check out my opinion piece @TheEconomist on the role of ethics and law in a data driven society, and why ethics alone is not always good enough https://t.co/zuOVo1xEy3 @turinginst @oiioxford @UniofOxford
— Sandra Wachter (@SandraWachter5) May 1, 2018
Here it is! Our report on ‘Ethical, Social, and Political Challenges of #AI in Health’, supported by @wellcometrust. Thrilled to contribute to such an exciting and challenging field. #AIethics #AI4Health https://t.co/Xw6GYj9y39
— Future Advocacy (@FutureAdvocacy) April 30, 2018