The number of studies on emotions has increased significantly over the past decades, and new theories and models of the emotional system have appeared. It has become clear to researchers that the phenomenon of emotions can be understood, used, and modeled through two fundamental processes: the generation of emotions and the effects of emotions. These processes define the associated modeling tasks, which serve as building blocks for affective models and, for both processes, include the following components: identification of the set of relevant domains; determination of the relationships between these domains (from emotion triggers to emotion generation, and from emotions to their effects); calculation of the intensities of emotions during generation and of the magnitudes of their effects during occurrence; and determination of the functions that connect and integrate complex emotions.

Interest in the problem of emotions has also grown with the development of a new research direction: artificial intelligence. There is an increasing requirement to demonstrate and reproduce human-like behavior, which is very difficult to achieve without attempting to model the emotional apparatus. Modeling emotions is especially important when creating agents whose functionality involves communication with a human. For many practical tasks (for example, recognizing emotions or realizing the effects and consequences of emotions), machine learning technologies seem promising. However, solving particular tasks (such as recognizing emotions from photos, text, etc.) has not yet led to a qualitative breakthrough in the modeling of emotional systems.
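The two fundamental processes named above, generation of emotions from triggers and effects of emotions on behavior, can be illustrated with a minimal sketch. This is not the article's model; all names (`Appraisal`, `EmotionalAgent`, the two emotion labels, the decay constant) are hypothetical and chosen only to show how the components (domains, trigger-to-emotion mapping, intensity calculation, effect magnitude) fit together in an agent loop.

```python
from dataclasses import dataclass, field

@dataclass
class Appraisal:
    """An emotion trigger: an event evaluated against the agent's goals."""
    desirability: float   # -1.0 (harmful) .. 1.0 (beneficial)
    likelihood: float     # 0.0 .. 1.0

@dataclass
class EmotionalAgent:
    # The "domain" of the model: a small set of emotion intensities.
    emotions: dict = field(
        default_factory=lambda: {"joy": 0.0, "distress": 0.0})
    decay: float = 0.9  # how quickly emotions fade per step

    def generate(self, a: Appraisal) -> None:
        """Process 1: map a trigger to emotion intensities."""
        intensity = abs(a.desirability) * a.likelihood
        if a.desirability >= 0:
            self.emotions["joy"] = max(self.emotions["joy"], intensity)
        else:
            self.emotions["distress"] = max(self.emotions["distress"],
                                            intensity)

    def effect(self) -> str:
        """Process 2: let the current emotional state bias behavior."""
        for e in self.emotions:
            self.emotions[e] *= self.decay  # intensities decay over time
        dominant = max(self.emotions, key=self.emotions.get)
        return "approach" if dominant == "joy" else "avoid"

agent = EmotionalAgent()
agent.generate(Appraisal(desirability=-0.8, likelihood=0.9))
print(agent.effect())  # → avoid
```

Even this toy version exhibits the structure the abstract describes: a function from triggers to intensities, a separate function from intensities to behavioral effects, and a dynamics (here, simple decay) connecting the two over time.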
Moreover, researchers increasingly decline to build a separate emotional system, arguing that the effects of emotions are realized in an agent's behavior automatically if they were present in the datasets used to train the agent. A practical example is Tay, a Microsoft chatbot released on Twitter: it quickly learned to write emotional texts, but its behavior is clearly not the product of an emotional system of its own. Nevertheless, many researchers in robotics, AI, human-computer interfaces, and the cognitive sciences still build computational models based on previously developed theories of the nature of emotions. The purpose of such models is to create more reliable, human-like, and effective artificial characters (including NPCs, non-player characters) and robots, and to improve the quality of human-computer interaction.

This article presents an analysis of the methodological difficulties of modeling the effects of emotions. The analysis represents a step toward formalizing the modeling of emotions and suggests a basis for developing a more systematic, general approach to modeling, as well as particular approaches to creating models of the effects and generation of emotions. The analysis reveals a number of modeling principles that must be taken into account. These principles, which form the basis of an agent's modeled emotional system, can help researchers move toward a human-like AI that uses the emotional system as a visual language in communication.

Keywords: visual language, effects of emotions, modeling of emotions, architecture of artificial agent, AI (artificial intelligence)