In an age where DeepFakes are becoming increasingly hard to distinguish from reality, what impact does that have on trust, ethics and interface innovation? Should we treat a digital-human AI as a conversational equivalent of an adult speaker, or instead overlay a child's voice to better reflect the linguistic maturity of the AI engines that sit beneath the surface? How do we help our audiences discern between illusion and reality through carefully considered interaction design? And what are the implications of malicious actors for cybersecurity and robust design?
There's some great work being done by lauded institutions such as the Alan Turing Institute on the ethical implications of AI and the problems it will pose for effective regulatory oversight to protect consumers. Positioning AI critique and regulation as a domestic political and economic issue, though, ignores the global availability of these systems and the global addressable-market mindset of the vendors behind the engineering.
We're busily exploring the relevance of interaction innovations such as chatbots and "digital humans" as ways of supporting a variety of health-sector problem sets, since technology like this helps achieve scale in service delivery. At the same time, we genuinely value the tough questions being posed in an age of increasingly blurred realities. In the long run, those questions will lead to better design and guide innovators' journeys toward what's viable, ethical and trustworthy.