The future of man-machine interfaces? Artificial intelligence capable of dialoguing with users and understanding their needs.
The future of virtual assistants lies in augmented man-machine interfaces, i.e. Intelligent User Interfaces (IUIs), chatbots, and voicebots. These intelligent, augmented interfaces will be able to understand the needs users express in their environment.
What is the user’s state of mind? Are they alert? Sleepy? Under excessive cognitive load? IUIs will be able to answer these questions thanks to a system of sensors that tracks and analyses the person’s expression and measures pupil diameter, heart rate, perspiration, and so on.
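As a rough illustration of how such sensor readings might be combined, here is a minimal sketch of a cognitive-load estimator. The signals, baselines, thresholds, and weights are all illustrative assumptions, not a validated model or any specific product’s method.

```python
# Hypothetical sketch: estimating a user's cognitive load from sensor
# readings. All baselines, weights, and thresholds are illustrative
# assumptions, not validated physiological values.

def cognitive_load_score(pupil_diameter_mm, heart_rate_bpm, skin_conductance_us):
    """Combine normalised sensor readings into a rough 0-1 load score."""
    # Normalise each signal against an assumed resting baseline, clamped to [0, 1].
    pupil = max(0.0, min(1.0, (pupil_diameter_mm - 3.0) / 3.0))  # ~3 mm at rest
    heart = max(0.0, min(1.0, (heart_rate_bpm - 60.0) / 60.0))   # ~60 bpm at rest
    skin = max(0.0, min(1.0, skin_conductance_us / 20.0))        # microsiemens
    # Weighted average; the weights are arbitrary for illustration.
    return 0.4 * pupil + 0.4 * heart + 0.2 * skin

def classify_state(score):
    """Map the score onto coarse states such as those mentioned above."""
    if score < 0.3:
        return "relaxed"
    if score < 0.7:
        return "alert"
    return "overloaded"

print(classify_state(cognitive_load_score(5.2, 95, 12)))
```

A real system would of course replace these hand-picked rules with a model trained on physiological data, but the principle is the same: several noisy signals are fused into one estimate of the user’s state.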
What information does the user need to make the most appropriate decision? In what form should this information be provided? Virtual assistants will be able to detect the user’s intentions, whether expressed orally or in writing. They will also be able to “reason” with the support of databases and/or knowledge representation systems, taking into account parameters related to the user’s environment.
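To make the idea of detecting a user’s intentions concrete, here is a minimal sketch of a keyword-based intent detector, the kind of first pass an assistant might run before heavier language processing. The intents and keyword lists are illustrative assumptions.

```python
# Hypothetical sketch: a minimal keyword-overlap intent detector.
# The intents and keywords below are illustrative assumptions.

INTENT_KEYWORDS = {
    "navigation": {"route", "directions", "navigate", "map"},
    "weather": {"weather", "rain", "forecast", "temperature"},
    "reminder": {"remind", "reminder", "schedule", "alarm"},
}

def detect_intent(utterance):
    """Return the intent whose keywords best overlap the utterance, or None."""
    words = set(utterance.lower().split())
    best_intent, best_overlap = None, 0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

print(detect_intent("What is the weather forecast for tomorrow?"))
```

Production assistants use statistical or neural language models rather than keyword sets, but the pipeline shape is the same: map a spoken or written utterance onto an intent, then reason over knowledge sources to decide what information to return and in what form.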
Intelligent assistants will have the capacity to understand users’ needs as well as their behaviour. As a result, they will be able to assist humans in a personalised manner according to their degree of fatigue or emotional state, interacting with them naturally and adapting solutions to the user’s situation.
To do this, IUIs and other intelligent assistants will need contributions from the language sciences (linguistics, Natural Language Processing), applied cognitive sciences, and neuro-ergonomic technologies – “competencies” to be combined with what scientists and researchers call Explainable Artificial Intelligence (XAI), a sort of AI within AI, whose objective is to explain, or at least provide interpretive elements to, users so they understand what is happening inside the “black boxes”.