The Emoji Bar

CPSC 581 Fall 2021 - Project 4

Created By: Abraham Beauferris, Marcello Di Benedetto, Christopher Rodriguez, Rory Skipper

This project seeks to create a novel interface that augments in-person and/or remote communication and collaboration. Our answer is an interface that lets users communicate remotely through emojis corresponding to their expression and pose: users emote in real life, and the corresponding caricature is shared, facilitating communication seamlessly and without the use of one’s direct likeness.

This system facilitates the private communication of emotions and gestures without sharing a live webcam feed throughout periods of collaboration. It is also more lightweight than video-conferencing software, and could be implemented as an “overlay” over full-screen applications, letting remote collaborators get a sense of your emotions or gestures without exposing your face.

We used a collaborative take on the 10+10 ideation method to decide on a final design with useful and intuitive features and use cases.

Initial Concept Sketches

Initial ideation sketches for a prototype to "augment in-person and/or remote communication and collaboration"

Choice of Concept

Moving forward from the ideation phase, our group chose to iterate on concept #1, “The Emoji Overlay”. We landed on this concept because it addresses a well-defined problem that is especially relevant to our position as students working remotely in collaborative environments. This relatively simple design provides a clear benefit to the user, while also letting us become acquainted with interesting libraries that identify changes in the user’s facial expressions and gestures.

Variations / Details

The following are some details and variations of our chosen concept, “The Emoji Overlay”.

Final Design

For the final design, we chose to implement refinement 9, “gesture-based inputs”, and refinement 4, “color coding”, to varying degrees. Gestures were included in a smaller scope, supporting only three gesture-based inputs, and color coding was applied on a per-emoji basis to support “at a glance” usage. Refinement 1 was implemented with networking functionality, to reinforce this tool’s intent as a remote collaboration and communication tool. These refinements were chosen because they provide reasonable breadth of functionality and design for a proof-of-concept prototype of this system.
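As a rough illustration of the per-emoji color coding, here is a minimal sketch of one possible state-to-display mapping. All labels, emoji, and colors here are hypothetical, not the project’s actual values:

```python
# Hypothetical mapping from detected states to emoji and accent colors,
# illustrating the per-emoji color coding; the real project's values may differ.
STATE_DISPLAY = {
    # Emotion labels (as produced by DeepFace)
    "happy":    ("\U0001F600", "#4CAF50"),  # grinning face, green
    "sad":      ("\U0001F622", "#2196F3"),  # crying face, blue
    "angry":    ("\U0001F620", "#F44336"),  # angry face, red
    "surprise": ("\U0001F632", "#FF9800"),  # astonished face, orange
    "fear":     ("\U0001F628", "#9C27B0"),  # fearful face, purple
    # Gesture labels (as trained in Teachable Machine)
    "thinking": ("\U0001F914", "#607D8B"),  # thinking face, grey-blue
    "ok":       ("\U0001F44C", "#8BC34A"),  # OK hand, light green
    "peace":    ("\u270C\uFE0F", "#00BCD4"),  # victory hand, cyan
}

def display_for(state: str):
    """Return (emoji, color) for a detected state, with a neutral fallback."""
    return STATE_DISPLAY.get(state, ("\U0001F610", "#9E9E9E"))  # neutral, grey
```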

Implementation

This project was implemented in Python 3.8.7, using the socket library for a client/server architecture that updates all remote applications as each individual’s detected emotion changes. Facial expressions are captured through the user’s webcam and run through the DeepFace library, which uses a trained neural network to identify emotions such as sadness, anger, happiness, surprise, and fear. Gesture-based inputs are identified with a model we trained ourselves in Google’s Teachable Machine, parsing webcam data into gestures including “thinking”, “ok”, and “peace”. These states are received by the client application, which shows the corresponding emoji representations. The GUI was implemented with Tkinter, which limited the aesthetic design of the application: the overlay-like display with transparency we had imagined was not possible with this library. In further iterations, this system would be adapted to a desktop JavaScript application to better support a responsive and aesthetically pleasing layout.
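To make the pipeline concrete, here is a minimal sketch of the client-side detection loop, assuming OpenCV for webcam capture, the DeepFace analyze API, and a hypothetical newline-delimited TCP protocol. HOST, PORT, and the wire format are illustrative assumptions, not the project’s actual implementation:

```python
# Minimal sketch of the client-side loop: capture a webcam frame, classify
# the emotion with DeepFace, and send the label to the server over TCP.
# HOST/PORT and the newline-delimited protocol are assumptions for illustration.
import socket

import cv2  # OpenCV, assumed here for webcam capture
from deepface import DeepFace

HOST, PORT = "127.0.0.1", 5005  # hypothetical server address


def dominant_emotion(frame) -> str:
    """Classify one BGR frame. DeepFace returns a dict in older releases
    and a list of dicts in newer ones, so handle both."""
    result = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    if isinstance(result, list):
        result = result[0]
    return result["dominant_emotion"]


def run_client() -> None:
    cap = cv2.VideoCapture(0)  # default webcam
    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            emotion = dominant_emotion(frame)
            # Newline-delimited labels keep the protocol trivial to parse.
            sock.sendall((emotion + "\n").encode("utf-8"))


if __name__ == "__main__":
    run_client()
```

A matching server would simply accept connections and rebroadcast each received label to every other client. In practice the analysis would also be throttled to every few frames, since running DeepFace inference on every frame is expensive.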

My personal contribution to this project was integrating the libraries for the software, along with pair programming the program logic. All group members contributed to the ideation and refinement stages of the 10+10 design process, along with the documentation of the portfolio.
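To make the GUI discussion above concrete, here is a minimal Tkinter sketch of an always-on-top emoji display. The widget layout, class name, and update mechanism are illustrative, not the project’s actual code:

```python
# Minimal sketch of a Tkinter emoji display: a small always-on-top window
# showing the current emoji against its color-coded background.
import tkinter as tk


class EmojiBar:
    def __init__(self) -> None:
        self.root = tk.Tk()
        self.root.title("Emoji Bar")
        self.root.attributes("-topmost", True)  # keep above other windows
        self.label = tk.Label(self.root, text="\U0001F610", font=("Arial", 48))
        self.label.pack(padx=10, pady=10)

    def show(self, emoji: str, color: str) -> None:
        """Update the displayed emoji and its color-coded background."""
        self.label.config(text=emoji, bg=color)

    def run(self) -> None:
        self.root.mainloop()


if __name__ == "__main__":
    bar = EmojiBar()
    bar.show("\U0001F600", "#4CAF50")  # e.g. happy, green
    bar.run()
```

As noted above, true overlay transparency is limited in Tkinter (platform-specific at best), which is what motivated the planned move to a desktop JavaScript application.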

Storyboard

Demo Video

Links

Source Code