The basic underlying concept of this project is to program different ways of seeing, as well as to offer insight into how a program performs pattern recognition. The trick was to give the user visual feedback on how different parameters are manipulated to create the experience they see. The motivation was to create something that was not only visually engaging but also reflective. Consumers like to anthropomorphize software (chatbots, for instance) and often lead themselves to believe that the program is intelligent and can react appropriately to the user's inputs. As we quickly learn from looking at AI systems that apply machine learning, that is not the case. Instead, we mostly see human biases replicated in the pattern recognition algorithm or in the dataset itself. This project touches on that subject by providing direct feedback through a HUD system that displays the distances between the limbs, the RGB channels, and the presence of different states such as anger, shyness, warmth, and so on, expressed as percentage values tied to the limb distances and the color channels. The way the program frames those parameters plays a big role in how it communicates the idea of human bias in pattern recognition: it portrays the pattern recognition algorithm as a HUD component, giving direct insight into how the algorithm decides your overall mood. It fuses the aesthetic side of the program with the debug interface.
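To make the mechanism concrete, here is a minimal sketch of how mood percentages could be derived from a limb distance and mean color channels. All weights, the 400-pixel normalization cap, and the function names are illustrative assumptions for this sketch, not the project's actual values.

```python
import math

def limb_distance(a, b):
    """Euclidean distance between two limb keypoints given as (x, y) pixels."""
    return math.dist(a, b)

def mood_percentages(wrist_l, wrist_r, rgb):
    """Map arm spread and mean RGB channels to mood percentages (illustrative)."""
    spread = limb_distance(wrist_l, wrist_r)
    openness = min(spread / 400.0, 1.0)      # assumed max spread of 400 px
    r, g, b = (c / 255.0 for c in rgb)       # normalize channels to 0..1

    # Arbitrary weightings: each mood mixes posture openness with a channel.
    raw = {
        "anger":   0.6 * r + 0.4 * openness,
        "shyness": 0.7 * (1.0 - openness) + 0.3 * b,
        "warmth":  0.5 * g + 0.5 * openness,
    }
    total = sum(raw.values())
    # Normalize so the HUD can display the moods as percentages.
    return {mood: round(100 * v / total, 1) for mood, v in raw.items()}

hud = mood_percentages((120, 300), (420, 310), (200, 90, 60))
print(hud)
```

Even this toy version makes the point of the piece visible: the "mood" is nothing more than a handful of hand-picked weights over distances and colors, and changing those weights changes the verdict.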
Over the course of this project, I gained some insight into what it means to code an algorithm that ascribes a certain vibe or emotion to a human being. The interesting thing was how often I engaged with the problem in a very procedural manner. It made me realize that the medium and the tools I was using were extending their inherent biases onto me. Humans do not perceive other humans on a procedural level, yet I found myself coding a procedural way of perceiving simply by virtue of using a computational environment. It also made me realize how much context and nuance is needed to make these types of software work. In short, human behaviour is hard to simplify unless you have clearly defined parameters, and those parameters often end up reflecting our own biases and ideas about things.