Abstract:
The development of humanoid robotics for sensitive environments, such as medical or pediatric settings, requires careful handling of information that should remain within those environments. This work package focuses on early childhood education and on AI applications that interact with pre-elementary school pupils. Recent global movements emphasize the need for trustworthiness, privacy, and fairness in autonomous systems and in technologies that use AI for automated decision-making (He, 2021; Hesken, 2021).
These concerns have significantly impacted the assistive robotics industry, where issues of trustworthiness remain unresolved among top market players (Chatzimichali, 2021). For example, SoftBank Robotics' Pepper operates in a closed environment, which complicates audits for fairness and bias. Similarly, humanoid robots such as Amy and Sanbot are constantly connected to servers in mainland China, raising serious privacy and trust concerns (Hägglund, 2022). Such scenarios are common in the industry and pose potential obstacles to the future deployment of robotic technologies in sensitive settings such as childcare facilities and hospitals.
This trustworthiness dilemma presents an opportunity to develop and study technologies for a new generation of autonomous robots. At Arcada, the researchers have substantial experience designing services for humanoid robots, with access to three humanoid robots and several non-humanoid ones (Majd, 2021). Through a range of projects, they have developed expertise in integrating non-proprietary applications into robot operating systems (Hägglund, 2022). This capability positions them to contribute to the advancement of secure and trustworthy autonomous robots for sensitive environments.
Reference:
https://techlabs.fi/projects/tais-wp-3