
Building a Twitter Emote Robot

Reactions in Real Time

Social media is so pervasive these days that it’s hard to imagine life without it. But digital interactions can be isolating, because the physical feedback component gets lost. These three Cornell students built an emotionally expressive robot that physically reacts to tweets in a live setting. Users can tweet to the robot’s Twitter account and get instant feedback as the robot shares its feelings about the tweet through sounds, facial expressions and more.

Social media outlets like Twitter and Facebook have become dominant players in the field of human interaction. Indeed, many interactions are now mediated by digital technology. We believe the loss of the physical component of interaction has had negative effects on human relationships overall. In fact, according to an article in the American Journal of Preventive Medicine [1], people become more isolated through the use of social media.

Our project aimed to explore potential solutions to the lack of physical feedback in the world of social media. We built an emotionally expressive robot that physically reacts to tweets in a live setting. Users can tweet to the robot’s Twitter account and receive nearly instant feedback, as the robot plays a sound, moves on its surface, displays the tweet text, shows a facial expression and lights up with different colors and intensities to convey its feelings about the tweet.

HIGH-LEVEL DESIGN
The robot is a stand-alone unit intended for use on a desk or table (Figure 1). Users communicate with the robot via Twitter by including the robot’s Twitter handle—@BotCornell—in a tweet. A server application written in Python runs on a laptop computer and monitors the robot’s Twitter account in real time. When a tweet is received, the server processes the content and maps the text to an emotion. The Twitter user ID, tweet content and emotion are all sent to the robot via a Bluetooth serial connection, and are processed by two Microchip PIC32 microcontrollers (MCUs). Upon receipt, the tweet content is displayed on an LCD screen on the robot’s body. Simultaneously, the robot reacts to the tweet’s emotion by displaying the corresponding pre-programmed response. A visualization of the system is shown in Figure 2.

FIGURE 1 – The robot is a stand-alone unit intended for tabletop use. When in rest mode, it displays a positive face and fades the red and green lights.
FIGURE 2 – Top-level diagram depicting the connections among components in the system. The system originates on Twitter and ends with physical reactions from the robot.

The robot’s responses are designed to physically embody the detected emotion of the received tweet. The main response is a change in facial expression, displayed on an LCD screen on the robot’s face. Other responses include playing a Twitter notification sound, spinning in a circle and changing colors and intensities of lights. The robot reacts to the tweet for 20 seconds, giving the user enough time to read the tweet and experience the robot’s emotional reaction. After the 20 seconds, the robot returns to a rest state, where it waits for another tweet to arrive.

When considering the final design of the robot, it was important that it have human-like traits, so that users could connect with the robot in a personal way. We therefore gave it a head and a body, both rectangular for simplicity. To keep the robot portable, we designed the body box to have dimensions of only 6″ × 5.5″ × 4″ and the head box to be 4″ × 2.5″ × 2″. The structure was made using laser-cut plywood and hot glue, resulting in a sturdy, finished product.

The robot’s face is an Adafruit Color TFT LCD Screen mounted on the head piece with screws [2]. Two speakers are attached to the sides of the head to resemble ears, further contributing to the anthropomorphic design. On the body, we added a second TFT screen identical to the one used for the face, but intended to display the tweet content to the user. Separating the two TFT screens is a rectangular cut-out, in which we placed a series of LEDs to create visual appeal during notifications. The LEDs are covered by a piece of frosted plastic to protect the circuitry and create a softened effect by dispersing the light. Finally, the robot rests on two wheels attached to the sides of the robot and controlled by servo motors. The internal circuitry of the robot is shown in Figure 3 and the schematic in Figure 4.

FIGURE 3 – The circuitry is accessible via a rear access door. When the door is closed, no circuitry is exposed. Components are secured such that when the robot moves, connections remain closed.
FIGURE 4 – The full schematic for on-board circuitry shows the use of two PIC32 MCUs on custom PCBs. These MCUs communicate with a series of peripherals to create emotion outputs.

TWEET FETCHING & ANALYSIS
Operation requires real-time access to Twitter, and the ability to pull the tweet content and user ID for any tweet that mentions the robot’s handle immediately after the tweet is posted. This is accomplished using Tweepy, a library created for easily accessing the Twitter API in the Python programming language [3]. We first generated API keys using a Twitter developer account; Tweepy functions then use these keys to connect to the Twitter API. Our PC application spawns a thread, which asynchronously retrieves tweets in real time using Tweepy functions and enqueues them onto a thread-safe internal buffer.
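The listener side of that thread might look like the minimal sketch below. It assumes Tweepy 3.x (the version current when the project was built), placeholder API credentials, and a Python queue.Queue as the thread-safe buffer; the class and variable names are ours, not the project’s.

import queue
import tweepy

# Thread-safe buffer shared with the processing thread
tweet_buffer = queue.Queue()

class MentionListener(tweepy.StreamListener):
    # Enqueue every tweet that mentions the robot's handle
    def on_status(self, status):
        tweet_buffer.put((status.user.screen_name, status.text))

    def on_error(self, status_code):
        # Returning False disconnects the stream on errors such as rate limiting
        return False

# Placeholder credentials generated from a Twitter developer account
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

stream = tweepy.Stream(auth=auth, listener=MentionListener())
# is_async=True runs the stream on its own thread, leaving the main
# application free to process the buffer concurrently
stream.filter(track=["@BotCornell"], is_async=True)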

A second thread is also spawned, which monitors the internal buffer to retrieve and process tweets in the order in which they were enqueued. Maintaining an internal buffer is crucial to the scalability of the system, because it enables the application to process tweets while simultaneously retrieving new ones. Once a tweet has been dequeued, the script determines its emotion by making an API call to a text-to-emotion library called ParallelDots [4]. Several text-to-emotion libraries are available; we chose ParallelDots because it is free and allows up to 1,000 API hits per day at a rate of 20 hits per minute.
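As a rough sketch of that worker thread, and assuming the paralleldots Python package with its emotion() helper, the code could look like the following. The response field names and the single-character emotion codes are our assumptions, since the article does not list them.

import threading
import paralleldots

paralleldots.set_api_key("PARALLELDOTS_API_KEY")   # placeholder key

# Assumed single-character codes for the Bluetooth protocol described below
EMOTION_CODES = {"Happy": "H", "Excited": "E", "Sad": "S", "Angry": "A",
                 "Fear": "F", "Bored": "B", "Sarcasm": "C"}

def process_tweets():
    # Dequeue tweets in arrival order and tag each with its dominant emotion
    while True:
        user, text = tweet_buffer.get()           # blocks until a tweet arrives
        response = paralleldots.emotion(text)     # per-emotion confidence scores
        scores = response.get("emotion", {})
        emotion = max(scores, key=scores.get) if scores else "Bored"
        # send_to_robot() is sketched in the Bluetooth discussion below
        send_to_robot(user, EMOTION_CODES.get(emotion, "B"), text)

worker = threading.Thread(target=process_tweets, daemon=True)
worker.start()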

The server sends the tweet content, user ID and emotion choice to the robot via a serial Bluetooth channel. On the server end, we used the internal Bluetooth module of a laptop computer, controlled by the server application using the pySerial library. On the robot end, we used an HC-05 Bluetooth module with standard settings, interfaced with a PIC32 MCU mounted on a PCB via a wired serial UART connection [5][6]. Data are sent in only one direction, from server to robot, making the required software relatively simple.

To correctly and easily parse the data for each tweet on the receiving end, we send them in chunks of characters. First, we send the user ID, followed by the tweet emotion represented by a single character, and ending with the tweet content divided up so that a separate message is sent for each group of 63 characters. Because a tweet may contain up to 280 characters, five chunks of tweet content are sent with each tweet for robustness. When more than one tweet arrives, the tweets are transmitted sequentially, with enough time between full transmissions for the robot to react completely to each one.
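A sketch of that framing on the server side, using pySerial, is shown below. The serial port name, baud rate, fixed-width padding of the user ID and the post-transmission delay are assumptions for illustration; only the 63-character chunk size, the single-character emotion code and the five-chunk count come from the protocol described above.

import time
import serial

# Placeholder port for the laptop's Bluetooth serial link to the HC-05
bt = serial.Serial("COM5", 9600, timeout=1)

CHUNK = 63        # characters of tweet text per message
NUM_CHUNKS = 5    # 5 x 63 = 315 characters, covering the 280-character limit

def send_to_robot(user, emotion_char, text):
    # One tweet = user ID, one emotion character, five fixed-size content chunks
    bt.write(user.ljust(CHUNK).encode("ascii", "replace"))
    bt.write(emotion_char.encode("ascii"))
    padded = text.ljust(CHUNK * NUM_CHUNKS)
    for i in range(NUM_CHUNKS):
        bt.write(padded[i * CHUNK:(i + 1) * CHUNK].encode("ascii", "replace"))
    # Wait long enough for the robot's 20-second reaction before the next tweet
    time.sleep(25)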

The PIC32 MCU receives the data by reading the serial line from the Bluetooth HC-05 module. This is accomplished in software using a peripheral device library (provided by Bruce Land of Cornell University) to read from the UART buffer [7]. Once all the data have been transferred to the robot, the process of reacting begins, as described in the following section.

REACTION OUTPUT MAPPING
Affixed to the robot body are a series of output devices, all controlled by two PIC32 MCUs and powered by an on-board 9 V battery. In a sense, these output devices give the robot life through sound, movement, light and countenance.

After sitting still for some time waiting for an incoming tweet, the robot first grabs the user’s attention with a Twitter notification sound resembling the high-pitched chirping of a bird. We acquired the sound, which was sampled at 44,100 Hz, from an online sound effects library, and extracted the raw samples using a simple MathWorks MATLAB program. The samples were scaled between 0 and 4,095, because we synthesized the sound digitally with a 12-bit DAC, and values outside this range would have been misinterpreted by the hardware. To output the audio samples to the DAC, we set up a DMA channel to send the samples over an SPI channel. This involved setting the DMA channel to framed mode, which allows the DMA channel to move whole blocks of memory at a time into the SPI buffer. The transfer is initiated immediately after the robot receives a Bluetooth transmission for a new tweet.
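The authors did the extraction and scaling in MATLAB; the same conversion is easy to reproduce in Python, as in the sketch below, which assumes the chirp is available as a WAV file (the filename is a placeholder).

import numpy as np
from scipy.io import wavfile

# Placeholder filename for the 44,100 Hz notification chirp
rate, samples = wavfile.read("twitter_chirp.wav")
samples = samples.astype(np.float64)
if samples.ndim > 1:                      # collapse stereo to mono if needed
    samples = samples.mean(axis=1)

# Map the full sample range onto the 12-bit DAC range 0..4095
lo, hi = samples.min(), samples.max()
dac = np.round((samples - lo) / (hi - lo) * 4095).astype(np.uint16)

# Emit a C array that can be pasted into the PIC32 firmware as the DMA source
print("unsigned short chirp[%d] = {%s};" % (len(dac), ", ".join(map(str, dac))))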

The TFT screens are driven by the PIC32 MCUs over a serial connection, using a TFT software library provided by Syed Tahmid Mahbub [8]. Functionality of the library includes line-drawing functions, which are used to draw the robot’s facial expressions, and a write-string function, which is used to print the tweet content to the body screen. Because driving a single screen consumes a large number of I/O lines, two PIC32 MCUs are needed to drive the two screens. The MCUs communicate emotion data over a serial UART line, allowing them to display the correct emotion simultaneously.

The content screen is updated by writing the user ID and tweet content to the body screen with formatting, so that the text is readable. The face screen is driven by a case statement that selects a pre-programmed facial expression based on the underlying emotion (as determined by the text-to-emotion analysis) and draws it to the screen (Figure 5).

FIGURE 5 – The robot’s facial expressions for different emotions. A. Angry B. Resting C. Sad D. Excited E. Happy F. Bored G. Afraid H. Sarcastic. These expressions were drawn based on emojis.

PROGRAMMING THE LEDS
To achieve more visual stimulation, we programmed the LEDs on the robot to constantly change intensity. The robot has 12 LEDs (4 green, 4 yellow, 4 red), with all LEDs of the same color connected in series with resistors to limit current. Each group is controlled individually by a 2N7000 MOSFET acting as a low-side switch. 2N7000 MOSFETs are made by a variety of companies, including ON Semiconductor. The gates of the MOSFETs are connected to digital PWM signals from the PIC32 MCU, allowing individual control of the brightness of each color group.

Each color group has four states: on, off, blinking and fading. The blinking function toggles the group of lights at a settable frequency. The fading function is slightly more complex, fading the light off and on at a settable frequency according to a discrete exponential function, which gives the light an apparent “breathing” effect. These effects, in combination with the colors, are used to create a level of intensity associated with each emotion the robot displays.
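The duty-cycle table behind the fading effect can be generated offline. The sketch below shows one way to do it; the table length and the 10-bit PWM resolution are assumptions, since the article only states that a discrete exponential function is used.

import numpy as np

STEPS = 64         # entries in one half of the breathing cycle (assumed)
MAX_DUTY = 1023    # full-scale PWM duty cycle (assumed 10-bit resolution)

# An exponential ramp appears to brighten smoothly to the eye,
# whereas a linear ramp looks abrupt at the dim end.
ramp = np.exp(np.linspace(0.0, np.log(MAX_DUTY + 1), STEPS)) - 1.0
fade_table = np.round(np.concatenate([ramp, ramp[::-1]])).astype(int)

# The firmware steps through fade_table at a settable rate, writing each
# value to the PWM output compare register for that color group.
print(fade_table)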

Although the inclusion of visual and sonic stimulation allows for a wonderful means of expressing emotion, we felt that the robot was lackluster without some element of motion to catch the user’s attention. We added continuous rotation servos and wheels as a final component, to enable the robot to move on its surface. Upon receiving a tweet, the robot spins in place a full 360 degrees. We chose spinning in place for its simplicity, and because we wanted the robot to be functional in areas with limited space. To enable this movement, we power the servos with 5 V, which is stepped down from the 9 V battery by a DC-DC converter circuit. A PWM signal from the PIC32 controls the rotation direction and speed of the wheels. The signal is put on a timer, so that the wheels turn only for the time it takes to complete one full rotation—the amount of time was determined empirically.

RESULTS AND USER EXPERIENCE
When deployed, our robot responded successfully to tweets under a reasonable load, while maintaining user engagement. The system also demonstrated relatively low latency. When no other tweets are enqueued in its internal buffer, the robot responds an average of 6.89 seconds after the user tweets—effectively real time. The main bottleneck is the time it takes for Tweepy to retrieve an incoming tweet, which averages 6.25 seconds. After the tweet is received, the robot reacts almost instantaneously from the user’s point of view.

To capture the user’s attention, we carefully designed the robot’s facial expressions, lighting, sound, and motion. As shown in Figure 5, the robot was capable of displaying eight distinct facial expressions: resting, happy, sad, angry, excited, afraid, bored and sarcastic. We modeled the robot’s facial expressions after emojis, which we expected the user base to easily identify.

Lighting also had a major impact on the user’s perception of the robot. At rest, we wanted the robot to convey a generally relaxed yet friendly and upbeat demeanor. Consequently, we programmed the green lights to fade on and off slowly, so that the robot appeared to be “calmly breathing.” In general, we mapped the sentiment—positive, negative or neutral—of the emotion to the color of the LEDs, and the intensity of the emotion to the blinking speed. For example, when the robot was angry, it quickly flashed the red LEDs, conveying an intense negative feeling. Conversely, when the robot was happy, the yellow and green LEDs blinked on and off somewhat quickly.

Finally, the robot’s motion was chosen to be as unobtrusive as possible. Spinning in a circle allows the robot to be placed in a relatively confined space. To see the robot in action, check out our demonstration video here:

FUTURE IMPROVEMENTS
All in all, we were extremely satisfied with our final robot, which successfully implemented all of its intended features and looked like a finished product. That being said, we were severely limited in our device choices by a budget of only $120. If we were to revise the project with increased funding, there are many ways we could improve the overall user experience to create a more polished and professional product.

One of the most effective ways to enhance the quality of our robot would be to make changes to the two TFT displays. With a higher resolution screen and additional time, we would be able to improve the graphics on the face tremendously. This would include making the faces more realistic, and enhancing the expression of emotion by animating the faces. An example of this might be programming the face to shed a tear when a sad tweet is identified. Furthermore, the tweets that were displayed on the second screen were rather small and not formatted to be visually appealing. Therefore, we would edit these to be more eye-catching and easier for the user to read at a glance.

Animating the resting state would have also improved the user experience. However, implementing such animations would require a change to the protothread scheduler. We initially planned to add resting animations such as blinking, but that proved to be difficult because of the cooperative scheduler that managed all the running threads. For example, if we added a thread to handle blinking, the thread waiting on emotion information could miss an incoming UART transmission due to cooperative scheduling.

Another potential improvement is to strengthen the emotional appeal of the features already implemented. This could be done in two ways. First, we could use multicolor LEDs to allow for the inclusion of more colors. This would be an important enhancement, because many studies relate certain colors to certain emotions. With that in mind, including colors beyond red, yellow and green would let us associate the light display more strongly with the intended emotion.

In a similar fashion, we could program the motion of the robot to be more sophisticated, acting as a further reflection of emotion. For example, for a happy tweet the robot could spin in a circle, while for a sad tweet it could make a subtle scooting motion back and forth. Together, these improvements would make the overall user experience richer and more memorable as a physical embodiment of social media.

RESOURCES

References
[1] Primack, Brian A. et al. “Social Media Use and Perceived Social Isolation Among Young Adults in the U.S.” American Journal of Preventive Medicine, vol. 53, no. 1 (2017): 1-8.
[2] 18-bit color TFT
Adafruit Inc. | www.adafruit.com
[3] Twitter Python Library
Tweepy | www.tweepy.org
[4] Emotion Analyzer
ParallelDots | www.paralleldots.com
[5] Bluetooth HC-05
DSD Tech | www.dsdtech-global.com
[6] PIC32 PCB
Sean Carroll | people.ece.cornell.edu/land
[7] Peripheral Device Library
Bruce Land | people.ece.cornell.edu/land
[8] Modified Protothreads Library
Syed Tahmid Mahbub | tahmidmc.blogspot.com

Adafruit | www.adafruit.com
DSD Tech | www.dsdtech-global.com
MathWorks | www.mathworks.com
Microchip Technology | www.microchip.com
ON Semiconductor | www.onsemi.com
ParallelDots | www.paralleldots.com

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • AUGUST 2019 #349

Ian Kranz (imk33@cornell.edu) is a recent graduate of the Cornell University College of Engineering. He is currently pursuing a career in Electrical Engineering while keeping up some hobbyist electronics projects on the side.

Nikhil Dhawan (nd353@cornell.edu) is a recent graduate of the College of Engineering at Cornell University, where he obtained a Bachelor’s degree in Computer Science with minors in Electrical and Computer Engineering and Business. Currently, Nikhil is a software engineer at Microsoft in Redmond, WA.

Sofya Calvin (sec293@cornell.edu) graduated from Cornell University with a Bachelor’s degree in Electrical and Computer Engineering. She is currently a Technology Consultant at Accenture.
