Modeling Human Adaptability in Human-Robot Teams

Robots are becoming ubiquitous. Many robot tasks can be thought of as team-oriented tasks carried out by robots and humans together. The decision processes of many robots are designed on the premise of reinforcement learning: the robot interacts with the environment, receives a response, and acts again. In this model, human actions are modeled implicitly, as one component of the environment. In this article, we present recent work by Nikolaidis et al. that attempts to model the human response explicitly. The proposed model of the human concentrates on adaptability. This human model is then incorporated into the robot's decision-making process. Nikolaidis et al. conducted an experiment with human subjects showing the superiority of the new model for the given task. We present that experiment as well.

Robots are destined to be everywhere in our lives. They carry out important tasks in factories and homes, and will soon do so in the streets. Interaction with humans is a crucial part of their success at these tasks. The coexistence of a human and a robot in one environment, trying to achieve a common goal, can be thought of as a human-robot team carrying out a collaborative task. Several studies of human-human teams have observed that ``coadaptation'' helps a team perform better. The work of \cite{Nikolaidis2017a} postulates that the same holds for human-robot teams. To that end, the authors propose a model of the human decision-making process that takes human adaptability into account. The model helps reason about human actions, which they incorporate into the robot's decision-making process. They then show in a human-subject experiment that the proposed model leads to increased team performance on a simple task.

\begin{figure}
  \centering
  % (figure image omitted)
  \caption{The table-carrying task. The goal is to get the table out of the room. (a)~A possible strategy in which the human faces away from the door and the robot faces the door. (b)~Another possible strategy in which the robot faces away from the door and the human faces the door. Courtesy of \cite{Nikolaidis2017a}.}
  \label{task}
\end{figure}

To ease the introduction of the abstract constructs presented here, we follow \cite{Nikolaidis2017a} in using the example presented there to streamline the discussion. The task in the example is a table-carrying task; see Figure~\ref{task}.
A human and a robot, in this case HERB \cite{Srinivasa2010}, share the task of getting a table out of a room. Two strategies are considered. In the first, the robot faces the door while the human faces away from it; we denote this Goal A. In the second, the human faces the door while the robot faces away from it; we denote this Goal B. The robot is assumed to prefer Goal A, because its sensors are better on its front than on its back. If the human, however, prefers Goal B, which is not unlikely, a conflict occurs. To resolve such a conflict, either the robot or the human has to adapt to the other's preference. If the human is adaptable, he will change his strategy to follow the robot's preferred one, thus carrying out the task optimally. On the other hand, consider the case where the human is not adaptable. If the robot insists on its preferred strategy, the task will not be carried out; the human will then lose trust in the robot, which will lead to the robot being disused in the future \cite{Hancock2011}. Alternatively, if the robot chooses to adapt to the human's preference, it will carry out the task suboptimally but will maintain the human's trust.

Based on this argument, one notices a trade-off between trust and performance, and a robot decision-making process should take that trade-off into account. \cite{Nikolaidis2017a} approaches a solution by introducing a real number called the human adaptability level: with the probability given by this number, the human adapts to the robot's preferred strategy; otherwise he sticks to his own preference. A further layer of complexity is that, when the robot encounters a new human on the same task, that person's adaptability level is unknown and must therefore be estimated from the interaction.

\cite{Nikolaidis2017a} propose a model of human adaptation named Bounded-Memory Adaptation (BAM) \cite{Nikolaidis2016}. It is a probabilistic finite-state controller. According to the model, the human agent operates in one of a set of modes, where a mode is a deterministic strategy \cite{Nikolaidis2017a}; the behavior of the human agent is defined by the selection of one of those modes. For simplicity, they assume that in the task presented above the human has two modes: the strategy for achieving Goal A and the strategy for achieving Goal B. The human's behavior can then be described as either rigid (sticking to his own mode) or adaptive (switching to the robot's). In BAM, the human chooses a mode by considering a finite history of the past interactions. The choice is made stochastically, with probability defined by the \textit{adaptation level} $\alpha$ of the human agent. The adaptation level is unknown to the robot and has to be learned from the interaction.
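To make the mechanics of BAM concrete, here is a minimal sketch in Python under the simplifying assumptions above: two modes ('A' and 'B'), a memory of the last $k$ robot actions, and a single adaptation probability $\alpha$. The class name \texttt{BAMHuman}, the memory-update rule, and the specific numbers are illustrative choices of ours, not details taken from the paper.

\begin{verbatim}
import random

class BAMHuman:
    """Illustrative sketch of a Bounded-Memory Adaptation (BAM) human.

    The human operates in one of a set of modes (deterministic
    strategies, e.g. 'A' or 'B' in the table-carrying task) and keeps
    only the last k robot actions in memory.  When the remembered
    robot mode disagrees with the human's own, the human switches to
    it with probability alpha (the adaptation level) and otherwise
    keeps the current mode.
    """

    def __init__(self, start_mode, alpha, k=1):
        self.mode = start_mode      # current mode, e.g. 'A' or 'B'
        self.alpha = alpha          # adaptation level in [0, 1]
        self.k = k                  # bounded memory length
        self.history = []           # last k observed robot modes

    def observe(self, robot_mode):
        """Record the robot's latest mode, keeping only k entries."""
        self.history = (self.history + [robot_mode])[-self.k:]

    def act(self):
        """Stochastically adapt toward the robot's remembered mode."""
        if self.history and self.history[-1] != self.mode:
            if random.random() < self.alpha:
                self.mode = self.history[-1]   # adapt to the robot
        return self.mode

# Example: a fairly adaptable human (alpha = 0.75) who prefers Goal B,
# paired with a robot that keeps demonstrating Goal A.
human = BAMHuman(start_mode='B', alpha=0.75, k=1)
for step in range(5):
    human.observe('A')
    print(step, human.act())
\end{verbatim}

With a high $\alpha$ the simulated human quickly converges to Goal A; with $\alpha$ close to zero he keeps choosing 'B', which is exactly the rigid behavior described above.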

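Since $\alpha$ is not observable, the robot has to infer it online. One simple way to sketch that inference, assuming a discretized set of candidate values and a uniform prior (both illustrative choices; the paper integrates this estimation into the robot's own decision-making process), is a Bayes update after every step in which the human had the opportunity to adapt:

\begin{verbatim}
import numpy as np

alphas = np.linspace(0.0, 1.0, 5)                 # candidate levels
belief = np.full(len(alphas), 1.0 / len(alphas))  # uniform prior

def update(belief, adapted):
    """Bayes update after a step where the two modes disagreed.

    adapted=True  -> human switched to the robot's mode (likelihood alpha)
    adapted=False -> human kept his own mode (likelihood 1 - alpha)
    """
    likelihood = alphas if adapted else 1.0 - alphas
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Example: the human sticks to his own strategy twice, then adapts once.
for adapted in [False, False, True]:
    belief = update(belief, adapted)
    print(np.round(belief, 3))
\end{verbatim}

After a few such observations the belief concentrates on the values of $\alpha$ consistent with the observed behavior, which the robot can then use to decide whether insisting on its preferred strategy is worth the risk to the human's trust.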