Assistant Researcher
Interactive Technologies Institute
Human-Computer Interaction, Artificial Intelligence, Computer Engineering, Social Psychology
Filipa Correia, Isabel Neto, Soraia Paulo, Patricia Piedade, Hadas Erel, Ana Paiva, and Hugo Nicolau
ACM
Isabel Neto, Yuhan Hu, Filipa Correia, Filipa Rocha, João Nogueira, Katharina Buckmayer, Guy Hoffman, Hugo Nicolau, and Ana Paiva
ACM
Filipa Correia, Isabel Neto, Margarida Fortes-Ferreira, Doenja Oogjes, and Teresa Almeida
ACM
Filipa Rocha, Filipa Correia, Isabel Neto, Ana Cristina Pires, João Guerreiro, Tiago Guerreiro, and Hugo Nicolau
ACM
Collaborative coding environments foster learning, social skills, computational thinking training, and supportive relationships. In the context of inclusive education, these environments have the potential to promote inclusive learning activities for children with mixed-visual abilities. However, there is limited research focusing on remote collaborative environments, despite the opportunity to design new modes of access and control of content to promote more equitable learning experiences. We investigated the tradeoffs between remote and co-located collaboration through a tangible coding kit. We asked ten pairs of mixed-visual ability children to collaborate in an interdependent and asymmetric coding game. We contribute insights on six dimensions - effectiveness, computational thinking, accessibility, communication, cooperation, and engagement - and reflect on differences, challenges, and advantages between collaborative settings related to communication, workspace awareness, and computational thinking training. Lastly, we discuss design opportunities of tangibles, audio, roles, and tasks to create inclusive learning activities in remote and co-located settings.
Katherine Harrison, Giulia Perugia, Filipa Correia, Kavyaa Somasundaram, Sanne van Waveren, Ana Paiva, and Amy Loutfi
ACM
Focusing on failure to improve human-robot interactions represents a novel approach that calls into question human expectations of robots, as well as posing ethical and methodological challenges to researchers. Fictional representations of robots (still for many non-expert users the primary source of expectations and assumptions about robots) often emphasize the ways in which robots surpass/perfect humans, rather than portraying them as fallible. Thus, to encounter robots that come too close, drop items or stop suddenly starts to close the gap between fiction and reality. These kinds of failures - if mitigated by explanation or recovery procedures - have the potential to make the robot a little more relatable and human-like. However, studying failures in human-robot interaction requires producing potentially difficult or uncomfortable interactions in which robots failing to behave as expected may seem counterintuitive and unethical. In this space, interdisciplinary conversations are the key to untangling the multiple challenges and bringing themes of power and context into view. In this workshop, we invite researchers from across the disciplines to an interactive, interdisciplinary discussion around failure in social robotics. Topics for discussion include (but are not limited to) methodological and ethical challenges around studying failure in HRI, epistemological gaps in defining and understanding failure in HRI, sociocultural expectations around failure and users' responses.
Isabel Neto, Filipa Correia, Filipa Rocha, Patricia Piedade, Ana Paiva, and Hugo Nicolau
ACM
Inclusion is key in group work and collaborative learning. We developed a mediator robot to support and promote inclusion in group conversations, particularly in groups composed of children with and without visual impairment. We investigate the effect of two mediation strategies on group dynamics, inclusion, and perception of the robot. We conducted a within-subjects study with 78 children, 26 of whom experienced visual impairments, in a decision-making activity. Results indicate that the robot can foster inclusion in mixed-visual ability group conversations. The robot succeeds in balancing participation, particularly when using a highly intervening mediating strategy (directive strategy). However, children feel more heard by their peers when the robot is less intervening (organic strategy). We extend prior work on social robots to assist group work and contribute with a mediator robot that enables children with visual impairments to engage equally in group conversations. We finish by discussing design implications for inclusive social robots.
Regina de Brito Duarte, Filipa Correia, Patrícia Arriaga, and Ana Paiva
Hindawi Limited
Explainable artificial intelligence (XAI), which produces explanations that make the predictions of AI models understandable, is commonly used to mitigate possible AI mistrust. The underlying premise is that the explanations of XAI models enhance AI trust. However, such an increase may depend on many factors. This article examined how trust in an AI recommendation system is affected by the presence of explanations, the performance of the system, and the level of risk. Our experimental study, conducted with 215 participants, showed that the presence of explanations increases AI trust, but only under certain conditions. AI trust was higher when explanations based on feature importance were provided than when counterfactual explanations were provided. Moreover, when system performance is not guaranteed, the use of explanations seems to lead to overreliance on the system. Lastly, system performance had a stronger impact on trust than the other factors (explanation and risk).
Filipa Correia, Joana Campos, Francisco S. Melo, and Ana Paiva
Springer Science and Business Media LLC
Diogo Rato, Filipa Correia, André Pereira, and Rui Prada
Springer Science and Business Media LLC
During the past two decades, robots have been increasingly deployed in games. Researchers use games to better understand human-robot interaction and, in turn, the inclusion of social robots during gameplay creates new opportunities for novel game experiences. The contributions from social robotics and games communities cover a large spectrum of research questions using a wide variety of scenarios. In this article, we present the first comprehensive survey of the deployment of robots in games. We organise our findings according to four dimensions: (1) the societal impact of robots in games, (2) games as a research platform, (3) social interactions in games, and (4) game scenarios and materials. We discuss some significant research achievements and potential research avenues for the gaming and social robotics communities. This article describes the state of the art of the research on robots in games in the hope that it will assist researchers to contextualise their work in the field, to adhere to best practices and to identify future areas of research and multidisciplinary collaboration.
Filipa Correia, Sean Christeson, Samuel F. Mascarenhas, Ana Paiva, and Marlena Fraune
Association for Computing Machinery (ACM)
As routinely working with robots spreads globally, it becomes important to understand how best to customize robots to each culture to maximize collaboration between humans and robots. In two distinct cultures (United States and Portugal) we examined group-based emotions toward robots with participants self-categorizing three different ways (ingroup, outgroup, and neutral). We tested and confirmed our baseline assumptions that Portuguese participants are more collectivistic and less individualistic, and feel closer with a team in negative, but not positive, scenarios, compared to United States participants. Supporting our hypotheses, the results showed that participants rated more positive emotions toward the robot in the ingroup condition than in the outgroup or neutral conditions. Moreover, an interaction effect between culture and self-categorization revealed that Portuguese participants had more positive group-based emotions toward the robot than United States participants when self-categorizing as an ingroup. We discuss the implications in terms of human-robot teaming and potential future research directions.
Filipa Correia, Francisco S. Melo, and Ana Paiva
Wiley
Creating effective teamwork between humans and robots involves not only addressing their performance as a team but also sustaining the quality and sense of unity among teammates, also known as cohesion. This paper explores the following research problem: how can we endow robotic teammates with social capabilities to improve their cohesive alliance with humans? By defining the concept of a human-robot cohesive alliance in the light of the multidimensional construct of cohesion from the social sciences, we propose to address this problem through the idea of multifaceted human-robot cohesion. We present our preliminary efforts from previous work to examine each of the five dimensions of cohesion: social, collective, emotional, structural, and task. We finish the paper with a discussion of how human-robot cohesion contributes to the key questions and ongoing challenges of creating robotic teammates. Overall, cohesion in human-robot teams might be a key factor in propelling team performance, and it should be considered in the design, development, and evaluation of robotic teammates.
Cristiana Antunes, Isabel Neto, Filipa Correia, Ana Paiva, and Hugo Nicolau
IEEE
Storytelling has the potential to be an inclusive and collaborative activity. However, it is unclear how interactive storytelling systems can support such activities, particularly when considering mixed-visual ability children. In this paper, we present an interactive multisensory storytelling system and explore the extent to which an emotional robot can be used to support inclusive experiences. We investigate the effect of the robot's emotional behavior on the joint storytelling process, resulting narratives, and collaboration dynamics. Results show that when children co-create stories with a robot that exhibits emotional behaviors, they include more emotive elements in their stories and explicitly accept more ideas from their peers. We contribute with a multisensory environment that enables children with visual impairments to engage in joint storytelling activities with their peers and analyze the effect of a robot's emotional behaviors on an inclusive storytelling experience.
Fernando P. Santos, Samuel Mascarenhas, Francisco C. Santos, Filipa Correia, Samuel Gomes, and Ana Paiva
Springer Science and Business Media LLC
Understanding how to design agents that sustain cooperation in multi-agent systems has been a long-lasting goal in distributed artificial intelligence. Proposed solutions rely on identifying free-riders and avoiding cooperating or interacting with them. These mechanisms of social control are traditionally studied in games with linear and deterministic payoffs, such as the prisoner’s dilemma or the public goods game. In reality, however, agents often face dilemmas in which payoffs are uncertain and non-linear, as collective success requires a minimum number of cooperators. The collective risk dilemma (CRD) is one of these games, and it is unclear whether the known mechanisms of cooperation remain effective in this case. Here we study the emergence of cooperation in CRD through partner-based selection. First, we discuss an experiment in which groups of humans and robots play a CRD. This experiment suggests that people only prefer cooperative partners when they lose a previous game (i.e., when collective success was not previously achieved). Secondly, we develop an evolutionary game theoretical model pointing out the evolutionary advantages of preferring cooperative partners only when a previous game was lost. We show that this strategy constitutes a favorable balance between strictness (only interact with cooperators) and softness (cooperate and interact with everyone), thus suggesting a new way of designing agents that promote cooperation in CRD. We confirm these theoretical results through computer simulations considering a more complex strategy space. Third, resorting to online human–agent experiments, we observe that participants are more likely to accept playing in a group with one defector when they won in a previous CRD, when compared to participants that lost the game. These empirical results provide additional support to the human predisposition to use outcome-based partner selection strategies in human–agent interactions.
Raquel Oliveira, Patrícia Arriaga, Filipa Correia, and Ana Paiva
Springer Science and Business Media LLC
Filipa Correia, Samuel Gomes, Samuel Mascarenhas, Francisco S. Melo, and Ana Paiva
Robotics: Science and Systems Foundation
In the past years, research on the embodiment of interactive social agents has been focused on comparisons between robots and virtually-displayed agents. Our work contributes to this line of research by providing a comparison between social robots and disembodied agents, exploring the role of embodiment within group interactions. We conducted a user study where participants formed a team with two agents to play a Collective Risk Dilemma (CRD). Besides having two levels of embodiment as a between-subjects variable (physically embodied and disembodied), we also manipulated the agents' degree of cooperation as a within-subjects variable: one of the agents used a prosocial strategy and the other a selfish strategy. Our results show that while trust levels were similar between the two conditions of embodiment, participants identified more with the team of embodied agents. Surprisingly, when the agents were disembodied, the prosocial agent was rated more positively and the selfish agent was rated more negatively, compared to when they were embodied. The obtained results support that embodied interactions might improve how humans relate with agents in team settings. However, if the social aspects can positively mask selfish behaviours, as our results suggest, a dark side of embodiment may emerge.
Filipa Correia, Ana Paiva, Shruti Chandra, Samuel Mascarenhas, Julien Charles-Nicolas, Justin Gally, Diana Lopes, Fernando P. Santos, Francisco C. Santos, and Francisco S. Melo
IEEE
This paper explores how robotic teammates can enhance and promote cooperation in collaborative settings. It presents a user study in which participants engaged with two fully autonomous robotic partners to play a game together, named “For The Record”, a variation of a public goods game. The game is played for a total of five rounds, and in each of them, players face a social dilemma: to cooperate, i.e., contributing towards the team’s goal while compromising individual benefits, or to defect, i.e., favouring individual benefits over the team’s goal. Each participant collaborates with two robotic partners that adopt opposite strategies to play the game: one of them is an unconditional cooperator (the pro-social robot), and the other is an unconditional defector (the selfish robot). In a between-subjects design, we manipulated which of the two robots criticizes behaviours, which consists of condemning participants when they opt to defect and represents either an alignment or a misalignment of the robot’s words and deeds. Two main findings should be highlighted: (1) the misalignment of words and deeds may affect the level of discomfort perceived in a robotic partner; (2) the perception a human has of a robotic partner that criticizes them is not damaged as long as the robot displays an alignment of words and deeds.
Filipa Correia, Francisco S. Melo, and Ana Paiva
IEEE
This PhD project aims to investigate how a social robot can adapt its behaviours to group members in order to achieve more positive group dynamics, which we identify as group intelligence. This goal is supported by our previous work, which contains relevant data and insightful results for understanding group interactions between humans and robots. Finally, we examine and discuss our planned future work and its contributions to the field of Human-Robot Interaction (HRI).
Filipa Correia, Raquel Oliveira, Mayara Bonani, Andre Rodrigues, Tiago Guerreiro, and Ana Paiva
IEEE
Our goal is to disseminate an exploratory investigation that examined how physical presence and collaboration can be important factors in the development of assistive robots that can go beyond information-giving technologies. In particular, this video exhibits the setting and procedures of a user study that explored different types of collaborative interactions between robots and blind people.
Raquel Oliveira, Patricia Arriaga, Filipa Correia, and Ana Paiva
IEEE
In this paper, we sought to understand how the display of different levels of warmth and competence, as well as different roles (opponent versus partner) portrayed by a robot, affects users' emotional responses towards robots and how these can be used to predict future intention to work with robots. For this purpose, we devised an entertainment card-game group scenario involving two humans and two robots (n = 54). The results suggest that different levels of warmth and competence are associated with distinct emotional responses from users and that these variables are useful in predicting future intention to work, thus hinting at the importance of considering warmth and competence stereotypes in Human-Robot Interaction.
Filipa Correia, Samuel F. Mascarenhas, Samuel Gomes, Patricia Arriaga, Iolanda Leite, Rui Prada, Francisco S. Melo, and Ana Paiva
IEEE
This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During the five rounds of the game, each player chooses between contributing to the team goal (cooperate) or contributing to their individual goal (defect). The prosociality level of the robots only affects their strategies to play the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants where we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results yield important concerns for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.
Filipa Correia, Sofia Petisca, Patrícia Alves-Oliveira, Tiago Ribeiro, Francisco S. Melo, and Ana Paiva
Springer Science and Business Media LLC
Although groups of robots are expected to interact with groups of humans in the near future, research related to teams of humans and robots is still scarce. This paper contributes to the study of human–robot teams by describing the development of two autonomous robotic partners and by investigating how humans choose robots to partner with in a multi-party game context. Our work concerns the successful development of two autonomous robots that are able to interact with a group of two humans in the execution of a task for social and entertainment purposes. The creation of these two characters was motivated by psychological research on learning goal theory, according to which we interpret and approach a given task differently depending on our learning goal. Thus, we developed two robotic characters implemented in two robots: Emys (a competitive robot, based on characteristics related to performance-orientation goals) and Glin (a relationship-driven robot, based on characteristics related to learning-orientation goals). In our study, a group of four (two humans and two autonomous robots) engaged in a card game for social and entertainment purposes. Our study yields several important conclusions regarding groups of humans and robots. (1) When a partner is chosen without previous partnering experience, people tend to prefer robots with relationship-driven characteristics as their partners compared with competitive robots. (2) After some partnering experience has been gained, the choice becomes less clear, and additional driving factors emerge as follows: (2a) participants with higher levels of competitiveness (personal characteristics) tend to prefer Emys, whereas those with lower levels prefer Glin, and (2b) the choice of which robot to partner with also depends on team performance, with the winning team being the preferred choice.
Silvia Tulli, Filipa Correia, Samuel Mascarenhas, Samuel Gomes, Francisco S. Melo, and Ana Paiva
Springer International Publishing
Transparency in the field of human-machine interaction and artificial intelligence has seen a growth of interest in the past few years. Nonetheless, there are still few experimental studies on how transparency affects teamwork, in particular in collaborative situations where the strategies of others, including agents, may seem obscure.
Joao Avelino, Filipa Correia, Joao Catarino, Pedro Ribeiro, Plinio Moreno, Alexandre Bernardino, and Ana Paiva
IEEE
In this paper, we study the influence of a handshake on the human emotional bond with a robot. In particular, we evaluate humans' willingness to help a robot depending on whether the robot first introduces itself with a handshake. In the tested paradigm, the robot and the human have to perform a joint task, but at a certain stage, the robot needs help to navigate around an obstacle. Without requesting explicit help from the human, the robot performs some attempts to navigate around the obstacle, suggesting to the human that it requires help. In a study with 45 participants, we measure the humans' perceptions of the social robot Vizzy, comparing the handshake and non-handshake conditions. In addition, we evaluate the influence of a handshake on the pro-social behaviour of helping the robot and the willingness to help it in the future. The results show that a handshake increases the perception of Warmth, Animacy, and Likeability, as well as the tendency to help the robot by removing the obstacle.