
Asymmetric VR Game with Custom Microcontroller Peripherals

Collaborative Game

Cornell University students designed an asymmetric VR game that makes use of custom microcontroller peripherals.

  • What do collaborative games offer for interacting with others remotely?

  • How do you develop an asymmetric VR game?

  • How do Custom Microcontroller Peripherals work?

  • What is the sequence of gameplay?

  • How is serial communication handled between the PIC and a lab computer?

  • Valve Index

  • PIC32 microcontroller

  • Unity game engine

  • Moderated desktop computer

  • TFT display

At a time when it’s difficult to get together with friends, collaborative games offer a safe way to interact with others remotely. Although multiplayer video games have been around for nearly as long as video games themselves, new technologies make room for improving immersiveness and overall experience. Most recently, virtual reality (VR) has offered players the opportunity to enter the world of a game like never before, opening new options for developers to tell stories and entertain their audiences. Asymmetrical, multiplayer VR games—those that involve only a portion of the players in VR—show promise as a method of allowing people to come together remotely without feeling the distance. Adding custom, microcontroller-based peripherals furthers the immersiveness of the experience, creating the opportunity for exciting collaboration over long distances.


We imagined an asymmetrical VR game in which two or more players work together to navigate a maze. One player navigates within the maze using a VR headset (“VR player”), while the others (“non-VR players”) receive a map of the maze. The twist is that all players lack crucial information about the maze that would enable them to reach the end. The map provides non-VR players with a bird’s-eye view of the maze and the location of the exit but gives no information about the VR player’s position within the maze. Meanwhile, the player in VR wanders through the maze’s rooms and corridors but has no knowledge of its layout.

To solve the maze, the parties must verbally communicate the information that the others lack. Together, they must first determine the VR player’s position within the maze. Doing so involves moving through the rooms, noting the available paths, and trying to match the area around the player to a portion of the map. Once this has been done, the players with the map can guide the maze explorer to the exit. Each type of player’s unique view of the maze is shown in Figure 1.

Figure 1 Non-VR player's view through webcam (left) and VR player's view (right)

The maze that we created for the game contains no “true” entrance or exit; there is no path that leads to the outside of the maze. Rather than a series of connected hallways, our maze consists of a series of connected rooms. One room contains the entrance and another contains the exit, but there is no way to tell these rooms apart from the others. We designed the maze in this way so that the players could not accidentally solve the maze by themselves. To check whether the players have identified the correct room, the VR player within the maze uses an in-game object. The object changes color, depending on whether or not the room has been guessed correctly.


To implement the game, we leveraged several technologies. For the virtual reality hardware, we used a Valve Index. To design our custom peripherals, we used a PIC32 microcontroller equipped with a thin-film-transistor (TFT) display. To build and run the game world, we used the Unity game engine.

One of the advantages of using a PIC32 microcontroller for our custom peripheral is that it can be extended. For our project, the peripheral was a map display. However, the microcontroller provides the opportunity to add additional hardware that increases the amount of available interaction between the VR and non-VR players. For example, one could imagine adding buttons that allow players outside of the maze to manipulate objects within it.

Our project contains no networking code because our access to the computer to which our custom peripheral was connected was restricted. These restrictions were a result of the remote learning environment in which the course took place, due to COVID-19. Rather than a traditional laboratory setup, we were given access to the PIC via a remote desktop connection; the PIC was connected to a Cornell-moderated desktop computer in the lab space, which we were able to control using Cornell’s VPN. A webcam connected to the lab computer allowed us to see the TFT display.

Because we were unable to make external connections to transport data, we had to come up with a creative solution. We chose to use GitHub as a way to send information between computers. Information could be pushed to and then pulled from a GitHub repository, simulating the transfer of data over the Internet. A diagram of this setup is shown in Figure 2. More traditional network code would allow for more efficient communication between the VR and peripheral devices.

Figure 2 Diagram showing the remote learning lab setup

The sequence of gameplay is summarized as follows:

  • The VR environment is initialized in a “waiting room.” This room contains a couple of decorative objects, an elevator connected to the maze, status lights indicating the state of the maze, and a button to begin.
  • The VR player presses the button, which sets a flag in a JSON file.
  • A Python script running on the VR side (Local Machine) notices the flag, writes a command into another JSON file and pushes both files to GitHub. This script then begins repeatedly pulling from GitHub.
  • A Python script running on the PIC side (Lab Computer) is constantly pulling from GitHub and checking for new commands. Upon noticing the issued command, it clears the command flag and sends a serial packet to the PIC.
  • The PIC receives the serial packet, generates a new maze, and draws this maze to the TFT display.
  • The PIC streams the new maze over the serial connection back to the lab computer.
  • The same Python script that sent the serial command to the PIC receives the maze serial data, packages it into a JSON, sets a generation flag, and pushes the JSON and flag to GitHub.
  • The Python script running on the VR side (Local Machine) pulls the new maze JSON from GitHub.
  • A C# program in Unity notices the generation flag and begins parsing the maze JSON.
  • The VR environment is populated with maze rooms that match what is described in the JSON.
  • The location of the waiting room and player are changed to be directly above the starting room indicated by the maze JSON. A light in the waiting room turns green to indicate that the maze is ready and the elevator door opens, allowing the player to descend into the maze.
  • All players collaborate to find the exit.
  • The VR player interacts with a spherical light given to him or her in the waiting room to indicate guesses. If correct, the light turns blue. If incorrect, the light turns red and becomes dimmer. Subsequent incorrect guesses result in the light dimming further.

To implement the above gameplay sequence, we wrote code in three locations and three different languages: C for the PIC; Python to run on a PC connected to the PIC and on a separate computer running Unity; and C# in Unity. To get the maze information from the PC connected to the PIC, we used a series of GitHub pushes and pulls to repositories containing JSON files. An outline of the flow of information is shown in Figure 3. Although this communication protocol is far from ideal, it allowed us to send information in and out of Cornell’s secure network while still following all University security protocols.

Figure 3 Diagram showing the infrastructure for communication between the PIC and VR

To generate the maze for our game, we modified Prim’s algorithm. This algorithm begins with a grid of cells of a predetermined size. These cells eventually become the rooms of the maze. For our implementation, we made the grid 8×8.

To start, the algorithm chooses a random starting cell and “adds” it to the maze. The code then adds the four adjacent cells to a list of neighboring cells. A neighboring cell is chosen at random and added to the maze. The algorithm deletes the wall between the two aforementioned cells and adds all of the latter cell’s neighbors to the list of neighboring cells. This process is repeated until all nodes are connected to the maze—randomly choosing a new neighbor, adding it to the maze by deleting a wall, and then adding its neighbors to the list. This algorithm creates a “perfect maze” every time, that is, one in which every cell can be reached from any other cell. To implement this in C, we made use of a struct to store all the relevant information for each cell, which we referred to as a “node.” The implementation is shown in Listing 1.

Listing 1: Implementation of the “node” struct

// define the struct for a single maze node (or room)
// with the necessary fields
#define maze_size 8

typedef struct Maze_t {
    short north; // 0 if wall, 1 if door
    short south; // 0 if wall, 1 if door
    short east;  // 0 if wall, 1 if door
    short west;  // 0 if wall, 1 if door
    short x;     // location of this node in the maze
    short y;
    // for maze generation, not important for data transfer
    short isConnected; // is in the neighbors list
    short inMaze;      // is included in the valid maze area
} Maze_node;

// 2D array of maze node structs
Maze_node maze[maze_size][maze_size];

The code defines the struct with eight fields: four shorts for the cardinal directions; two shorts for the x and y position of the node within the maze; and two shorts to indicate the state of the node (isConnected and inMaze).

The cardinal direction fields contain either a 0 or a 1 to indicate whether there are walls or openings on those sides of the node, respectively. The x and y fields contain values between 0 and 7 to indicate the node’s location in the coordinate system of the maze; (0, 0) represents the bottom left, and (7, 7) represents the top right. The isConnected field begins as 0 and is set to 1 once the node is added to the list of neighboring nodes. This list ensures that the same neighbor node is not added multiple times, and holds the candidates for the next node to be added to the maze. The inMaze field begins as 0 and is set to 1 once the node is removed from the list of neighboring nodes and added to the maze itself. The maze is stored as a 2D array of node structs. The Unity equivalent of a node is shown in Figure 4.

Figure 4 View into a single maze room and associated hallways as seen in the Unity environment editor.

We use a function to control the generation of the maze. This function initializes the list of neighbors used in the algorithm as a 1D array, initializes the structs that store the node information, and chooses the random starting node with which the algorithm begins.

When the first element is added to the maze, a field is set within the struct to indicate its membership within the maze. We then use a helper function to add the neighbors of that node to the neighbors list. This helper function ensures that the added neighbors exist, that they aren’t outside the maze, and that they aren’t already part of the maze.

Then we randomly select a valid element of the neighbors’ list to connect to the current node. To connect the nodes, a helper function removes the wall between the two, adding the neighbor to the maze. This function also takes on the responsibility of removing the freshly added node from the neighbors’ list, which ensures it won’t be added twice.

After the second element of the maze has been added, we use a large do-while loop to perform the rest of the maze generation iteratively. A function checks if there are still neighbors in the neighbors’ list by iterating through the list and checking for non-NULL entries. As soon as it finds a valid entry, it returns a 1. The maze is complete when the function fails to find a valid neighbor.
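A minimal sketch of that completion check follows; the list length and identifiers are illustrative assumptions, not the exact PIC code:

```c
#include <stddef.h>
#include <assert.h>

/* Illustrative sketch of the completion check; the list length and names
   are assumptions. */
#define LIST_LEN 64

typedef struct { short x, y; } Maze_node; /* trimmed to the fields needed here */

Maze_node *neighbors[LIST_LEN]; /* NULL marks an empty slot */

int has_neighbors(void) {
    for (int i = 0; i < LIST_LEN; i++)
        if (neighbors[i] != NULL)
            return 1;   /* found a valid entry: keep generating */
    return 0;           /* no neighbors left: the maze is complete */
}
```

Because the list is never compacted, the scan must cover every slot; it returns as soon as it finds the first valid entry.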

Inside the do-while, we first pick a random element of the neighbor list to add to the maze. We then mark that node, indicating that it will be the next node added to the maze. We look for one of its neighbors that is already in the maze to connect it to, and if it has multiple, we randomly choose one to avoid bias in any direction. To accomplish this, we use another do-while loop that randomly selects a number between 0 and 3, where each number corresponds to a different side. The do-while will terminate when a neighboring node is found that is already in the maze.

Once we’ve identified both a node in the neighbors’ list and one of its neighbors that is already in the maze, they are connected using the helper function that was mentioned earlier. This function will also remove the node from the neighbors’ list, set the flag that indicates it is in the maze, and add its neighbors to the neighbors’ list. We assigned the exit of the maze to the node that was added to the maze last. Choosing the end this way has a couple of advantages. First, because the neighbors of the start node are the first to be added to the maze, the entrance and the exit can’t be right next to one another. Second, this results in a random exit without needing to generate an additional random value.
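The whole generation procedure can be sketched as follows. This is an illustrative, self-contained reconstruction rather than the exact PIC code: identifiers are assumed, and `rand()` stands in for the PIC’s seeded random number generator. As described above, emptied slots in the fixed-size neighbors list are set to NULL rather than compacted, so a random pick may need to retry.

```c
#include <stdlib.h>
#include <assert.h>

#define MAZE_SIZE 8
#define LIST_LEN (MAZE_SIZE * MAZE_SIZE)

typedef struct {
    short north, south, east, west; /* 0 = wall, 1 = door */
    short x, y;
    short isConnected; /* in the neighbors list */
    short inMaze;      /* already part of the maze */
} Maze_node;

static Maze_node maze[MAZE_SIZE][MAZE_SIZE];
static Maze_node *neighbors[LIST_LEN];
static int used = 0;       /* slots handed out so far */
static int remaining = 0;  /* non-NULL entries still in the list */

/* Add the valid, unvisited neighbors of (x, y) to the list. */
static void add_neighbors(int x, int y) {
    const int dx[] = {0, 0, 1, -1}, dy[] = {1, -1, 0, 0};
    for (int i = 0; i < 4; i++) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx < 0 || nx >= MAZE_SIZE || ny < 0 || ny >= MAZE_SIZE) continue;
        Maze_node *n = &maze[ny][nx];
        if (!n->inMaze && !n->isConnected) {
            n->isConnected = 1;
            neighbors[used++] = n;
            remaining++;
        }
    }
}

/* Knock out the shared wall between two adjacent nodes. */
static void connect_nodes(Maze_node *a, Maze_node *b) {
    if (b->x == a->x + 1)      { a->east = 1;  b->west = 1; }
    else if (b->x == a->x - 1) { a->west = 1;  b->east = 1; }
    else if (b->y == a->y + 1) { a->north = 1; b->south = 1; }
    else                       { a->south = 1; b->north = 1; }
}

/* Returns the last node added, which becomes the exit. */
Maze_node *generate_maze(unsigned seed) {
    srand(seed);
    for (int y = 0; y < MAZE_SIZE; y++)
        for (int x = 0; x < MAZE_SIZE; x++) {
            Maze_node zero = {0, 0, 0, 0, (short)x, (short)y, 0, 0};
            maze[y][x] = zero;
        }
    used = remaining = 0;

    Maze_node *last = &maze[rand() % MAZE_SIZE][rand() % MAZE_SIZE];
    last->inMaze = 1;
    add_neighbors(last->x, last->y);

    while (remaining > 0) {
        /* random pick; retry when the slot has already been emptied */
        int idx;
        do { idx = rand() % used; } while (neighbors[idx] == NULL);
        Maze_node *next = neighbors[idx];
        neighbors[idx] = NULL; /* empty the slot instead of resizing */
        remaining--;

        /* randomly pick a side whose node is already in the maze */
        const int dx[] = {0, 0, 1, -1}, dy[] = {1, -1, 0, 0};
        Maze_node *anchor;
        do {
            int s = rand() % 4;
            int ax = next->x + dx[s], ay = next->y + dy[s];
            anchor = (ax >= 0 && ax < MAZE_SIZE && ay >= 0 && ay < MAZE_SIZE)
                         ? &maze[ay][ax] : NULL;
        } while (anchor == NULL || !anchor->inMaze);

        connect_nodes(anchor, next);
        next->inMaze = 1;
        add_neighbors(next->x, next->y);
        last = next; /* the final node added becomes the exit */
    }
    return last;
}
```

Because exactly one wall is removed each time a node joins the maze, an 8×8 grid always ends up with 63 connections—a perfect maze.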

When the maze has finished being generated, the result is a 2D array of nodes with all fields accurately set. This 2D array can be operated on to both draw the map on the TFT and to send maze information to the Unity code.


We created a function that draws the maze generated by Prim’s algorithm onto the TFT display so that the player with the PIC can see a bird’s-eye view of the maze. It begins by drawing the outer borders of the map onto the TFT screen. A helper function then iterates through the 2D array of nodes produced by the maze generation, drawing the walls one node at a time. This helper function takes a pointer to a single node, uses its x and y fields to calculate the corresponding pixel coordinates on the TFT, and then uses the north, south, east, and west fields to determine whether or not to draw walls at the node’s boundaries.
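The coordinate math can be sketched as follows. The cell size, margins, and the `tft_drawLine()` stub are assumptions; on the real hardware the call would go to the TFT graphics library, and the stub here simply records segments so the math can be checked off-hardware:

```c
#include <assert.h>

/* Assumed layout constants for the sketch. */
#define CELL_PX 28   /* pixels per maze cell */
#define ORIGIN_X 8   /* left margin of the map */
#define ORIGIN_Y 232 /* bottom edge: maze y grows upward, screen y grows down */

typedef struct {
    short north, south, east, west; /* 0 = wall, 1 = door */
    short x, y;
} Maze_node;

static int lines_drawn = 0;
/* Stand-in for the TFT library's line primitive. */
static void tft_drawLine(short x0, short y0, short x1, short y1) {
    (void)x0; (void)y0; (void)x1; (void)y1;
    lines_drawn++;
}

/* Draw the walls of one node; a side is drawn only when it has no door. */
void draw_node(const Maze_node *n) {
    short px = ORIGIN_X + n->x * CELL_PX; /* left edge in pixels */
    short py = ORIGIN_Y - n->y * CELL_PX; /* bottom edge in pixels */
    if (!n->north) tft_drawLine(px, py - CELL_PX, px + CELL_PX, py - CELL_PX);
    if (!n->south) tft_drawLine(px, py, px + CELL_PX, py);
    if (!n->east)  tft_drawLine(px + CELL_PX, py - CELL_PX, px + CELL_PX, py);
    if (!n->west)  tft_drawLine(px, py - CELL_PX, px, py);
}
```

Flipping the y axis in the pixel conversion keeps the TFT map oriented the same way as the maze’s coordinate system, with (0, 0) at the bottom left.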

After the maze is fully drawn, we use the coordinates of the ending node as dictated by the generation to draw a red square in the middle of the exit on the TFT. To activate an easier mode of the game in which the start position is known, we also have code that draws the start node of the maze. We often used this for debugging purposes as we created the game, but feel that excluding this code makes the game more difficult and interesting. The PIC with a maze drawn on the TFT is shown on the left side of Figure 1.


The serial communication between the PIC and the lab computer is handled by a specific thread on the PIC. This thread uses the serial interface of the PIC to send and receive data.

Serial packets are parsed in two stages. The first stage looks at the start of a serial packet to determine what kind of information has been received. For our final implementation, we only used one type of serial packet. However, we imagined using different packet formats to separate serial data handling for different features.

The serial interface code on the PIC side is handled by a thread separate from the maze generation. This thread yields until the flag that indicates a new command has been set. When a packet is received that begins with the character ‘c’, which indicates a command, the packet is saved using strcpy(), and a flag is set. This signals the thread to begin parsing the command. In addition to the ‘c’ prefix, we also used ‘d’ to indicate debugging information, which was helpful while tracing the path of serial packets through the interface. Each header had a dedicated handler function; this enabled informative testing while isolating debugging code from implementation code.
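The first-stage dispatch can be sketched like this; the handler names and buffer size are assumptions, not the authors’ exact code:

```c
#include <string.h>
#include <assert.h>

/* Sketch of the two-stage parsing: the first character of a packet selects
   the handler. Handler names and the buffer size are assumptions. */
#define PKT_MAX 64

char saved_cmd[PKT_MAX];
int cmd_ready = 0;   /* flag polled by the command-parsing thread */
int debug_count = 0; /* how many debug packets were seen */

static void handle_debug(const char *pkt) {
    (void)pkt; /* would be echoed back for tracing on the real system */
    debug_count++;
}

/* First parsing stage: route a received packet by its header character. */
void dispatch_packet(const char *pkt) {
    switch (pkt[0]) {
    case 'c':  /* command packet: save it and wake the command thread */
        strncpy(saved_cmd, pkt, PKT_MAX - 1);
        saved_cmd[PKT_MAX - 1] = '\0';
        cmd_ready = 1;
        break;
    case 'd':  /* debugging information */
        handle_debug(pkt);
        break;
    default:   /* unknown header: ignore */
        break;
    }
}
```

Adding a new packet type then amounts to adding one case and one handler, which is what makes the format easy to extend.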

When the command to generate the maze is received, the code resets all the elements of the maze array. This ensures that the newly generated maze will be valid. To achieve “true” randomness, we seed our random number generator by reading from one of the hardware timers on the PIC. This timer starts counting when the program begins executing. Given the timer’s frequency of approximately 32 kHz, and the fact that the maze generation command is issued by a human, it is very unlikely that two mazes will share the same random seed.

Once the maze has been generated, the serial thread uses two nested for loops to stream the information over the serial connection. The thread sends the positions of each room in the maze, which of their walls contain doors, and which rooms are the start and end of the maze.
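A sketch of that streaming loop follows, with an assumed per-node packet format; `fputs()` stands in for the UART write on the real hardware:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Sketch of streaming the finished maze node by node; the packet format
   ("m x y n s e w") is an assumption. */
#define MAZE_SIZE 8

typedef struct {
    short north, south, east, west; /* 0 = wall, 1 = door */
    short x, y;
} Maze_node;

Maze_node maze[MAZE_SIZE][MAZE_SIZE];

/* Format one node as the serial payload the Python script will parse. */
int format_node(const Maze_node *n, char *buf, size_t len) {
    return snprintf(buf, len, "m %hd %hd %hd %hd %hd %hd\n",
                    n->x, n->y, n->north, n->south, n->east, n->west);
}

/* Two nested loops emit every room's position and door flags. */
void stream_maze(void) {
    char buf[48];
    for (int y = 0; y < MAZE_SIZE; y++)
        for (int x = 0; x < MAZE_SIZE; x++) {
            format_node(&maze[y][x], buf, sizeof buf);
            fputs(buf, stdout); /* UART send on the real hardware */
        }
}
```

The receiving Python script fills its own 2D array from these per-node lines; start and end markers would travel as additional fields or packets.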


The laboratory PC that communicates with the PIC and the PC that runs Unity must communicate game information. Each device runs some additional software to facilitate this. As was mentioned earlier, we used GitHub as a creative solution to our networking problem. Depending on the desired form of communication, the software discussed below may look very different from implementation to implementation.

We started by becoming familiar with how to send and receive serial messages to and from the PIC using a Python program running on the laboratory PC. Additionally, we learned how to use the json, os, and subprocess libraries to communicate with Unity by pushing JSONs to a GitHub repository.

We initially intended to have a Python program send and receive different types of command messages to and from the PIC, but due to time constraints, we only implemented a make maze command and some debugging commands. After sending the serial message that requests the PIC make a maze, the Python code waits for serial messages from the PIC containing information about each node in the maze. Python saves each received node in a 2D array of node objects and sends the full maze data to Unity once the array has been filled.

Getting the Python and Unity programs to communicate took many attempts before we found a relatively efficient medium that wasn’t restricted by Cornell’s IT policies for lab computers. We ultimately settled on GitHub repositories pushed to and pulled from both the Python program and the Unity code. Python starts by repeatedly pulling a command file to see if there is a command from Unity, before passing that command to the PIC. Once the full maze data has reached the Python code, it loops through all the maze nodes and stores their data in a JSON file, then pushes that file to the repository to be pulled on the Unity side. This is illustrated in Figure 3.


To turn our generated maze into an explorable virtual environment, we used Unity, a free game engine. Whereas the PIC generates the maze and draws a two-dimensional map onto the TFT screen, Unity constructs the maze in a three-dimensional, virtual space and allows the player to interact with it. Unity provided us with a medium through which we could build our game and interface with our chosen peripherals.

To integrate the VR headset into our project, we used Valve’s SteamVR assets. This Unity asset pack provided everything that we needed to incorporate the headset into the game. The Unity development environment is shown in Figure 5.

Figure 5 The “waiting” room, as seen from the Unity environment editor.

As mentioned earlier, the Python program that sends the maze JSON also checks for commands from Unity. This process starts with a button in the game. Once the player has pressed the button, Unity modifies a file and adds a command to it. This command is pushed to a GitHub repository, where it is picked up by the Python script. After sending the command, Unity checks the file valid.txt to see if the maze configuration file has been updated. Once it has been updated, Unity resets the valid status and begins constructing the maze.

The construction of the maze consists of instantiating several pre-made maze rooms. We built these rooms in the Unity editor; together, they make up everything contained within the maze. Each maze room has a floor, ceiling, lights, a set of four walls without doors, and a set of four walls with doors. Each component of a room can be toggled on or off within a C# script, which is how the maze is constructed. Unity reads the information from the JSON into a class containing a list of maze rooms. Each of these rooms contains information about the doors leading into and out of that room. Based on which connections should be made, either the wall or door component is enabled.

The rooms are spaced using their positions in the array and a constant offset. Information about which room is the start room and which room is the end room is contained within the script itself. Once the correct maze rooms have been placed, the room that the player starts in, along with all the objects in it, is moved on top of the correct starting room, and the elevator door (Figure 6) ominously swings open.

Figure 6 The elevator and maze generation status lights, as seen during gameplay

Numerous features are at work while the player is venturing through the maze. For instance, the doors and the corresponding buttons that open them are controlled by C# scripts. These scripts specify the amount of time that the doors should be open, how fast the doors open and close, what sound the doors make, and other features. In addition to controlling doors, other scripts exist to assist the player in picking up items, checking whether or not a player is at the end of the maze, and so on.

When players want to guess whether or not they are at the end of the maze, they pull the trigger on the hand that is holding the spherical light given to them in the starting room. A script attached to the sphere then converts the position of the player into an ID, and checks that against the ID of the ending room. The color and luminous intensity of the sphere are updated according to whether or not the player made a correct guess.
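The position-to-ID conversion happens in the C# script; its arithmetic can be sketched as follows (shown in C for consistency with the other listings, with an assumed constant room spacing):

```c
#include <assert.h>

/* Sketch of mapping a world-space position to a room ID; the spacing value
   and function names are assumptions. */
#define ROOM_OFFSET 10.0f /* assumed world-space distance between rooms */
#define MAZE_SIZE 8

/* Convert a world-space position to the ID of the room that contains it. */
int room_id(float world_x, float world_z) {
    int gx = (int)(world_x / ROOM_OFFSET + 0.5f); /* nearest grid column */
    int gy = (int)(world_z / ROOM_OFFSET + 0.5f); /* nearest grid row */
    return gy * MAZE_SIZE + gx;
}

/* A correct guess is simply a matching ID. */
int guess_is_correct(float px, float pz, int end_id) {
    return room_id(px, pz) == end_id;
}
```

Rounding to the nearest grid cell means the guess is judged by the room the player is standing in, not their exact coordinates.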


Using Unity enabled us to build a high-fidelity virtual environment without worrying about optimizations that affect performance. For the most part, performance in our game is consistent and high enough for an enjoyable experience. There is a moment of lag when the maze is first created. During this period, the Unity engine is instantiating and placing a large number of objects at once, which momentarily prevents other objects from updating.

On the PIC side of things, it is hard to quantify the time complexity of our maze generation algorithm. The algorithm at the core of our implementation, Prim’s algorithm, has a time complexity of O(n²). However, to simplify our implementation, we do not dynamically resize the list of neighbors. This means that when our algorithm picks a random location in the list of neighbors, it has the potential to pick an empty index, in which case the algorithm tries again.

In theory, the random selection could repeatedly land on empty indices and never terminate, though this does not happen in practice. Although it’s hard to define the time complexity, for this project we were more interested in the total time it took for the maze to be captured by the Python script after the command had been issued. Over 100 trials, the average time from the Python script issuing the make-maze command to the maze JSON being written was 0.249 seconds.

Our project should have no interference with other designs and is generally usable by anyone without visual or auditory disabilities. Colorblindness should not have a major impact on the game experience, since other cues, such as text and brightness, should indicate to the user the meaning of lights and colors in the maze. Those with hearing disabilities would require an add-in within the VR environment to be able to communicate to the other users without audio.

Our testing of the game was a satisfying experience. Despite being in separate locations, having the VR headset on and wandering around the maze allowed us to interact and have an immersive experience as we tried to find our way out. Not being face to face with one another felt like part of



Additional materials from the author are available at:

Microchip | Unity | Intel



Copyright © KCK Media Corp.
All Rights Reserved


by Cameron Haire, Michaela Bettez, and Daniel Batan