
Giga-Bug: A Retro Game Revamp

Written by Chris Cantrell

Using Raspberry Pi and Python

Long gone are the days when computer games were programmed on a pixel-by-pixel basis, but Chris remembers them. In this article, he comes full circle, sharing how he upgrades the old Mega-Bug game on a Raspberry Pi using Python and the PyGame library. He also shows you how he displays it on a giant RGB LED display.

  • How to build an upgraded Mega-Bug game on a Raspberry Pi using Python and the PyGame library

  • How to select and connect the hardware

  • How to use the Henner Zeller display library for frame buffering, etc.

  • How to program the images in an ASCII format

  • How to make use of the Pygame library

  • Raspberry Pi 3B+ SBC

  • 6x RGB LED Matrix panels (64×32) from Adafruit

  • Adafruit RGB Matrix Bonnet 

  • Sheet of Chemcast Black LED Acrylic

  • SAFFUN generic USB controllers

  • Henner Zeller’s Pi-Display Driver

I learned to program in the early 1980s, during the heyday of pixel-programming. Pixels were different critters back then. I sound like my grandpa—kids these days have it easy! Back in my day, the computer screen was a few thousand fuzzy squares we could turn on or off, and the rich kids could use four different colors. Each pixel was precious—our code choreographed them one by one. Today, the resolution of a display screen rivals that of the human eye, and each pixel has more color depth than the eye can distinguish. Today, pixels and colors are plentiful, and our code deals with shapes and images instead of individual dots.

Computer games have evolved along with the pixel. Today, we have massively multiplayer 3D game worlds. But I remember when two players took turns guiding a yellow circle to “eat” the dots in a small 2D maze—three lives for a quarter, and we liked it that way. Back then, it was more about clever gameplay than fancy graphics. Today’s game developer must master both.

Most of my dot-eating was done on the TRS-80 Color Computer (CoCo) with the game Mega-Bug. You may have played it as Dung Beetles on the Apple II. The game is a typical maze dot eater, but with a cool visual effect—a magnifying glass that follows the player around the screen. The lens magnifies everything, including the score and graphics when the player nears the edge of the maze.

In this article, I will describe how I revived the old Mega-Bug game on a Raspberry Pi, using Python and the Pygame library. Figure 1 shows the completed project. I’ll show you how to use my library for color palettes and frame buffers to make your own games. And I’ll show you how I revisited the pixels of my youth with a giant 128×96 RGB LED display. A link to my GitHub repository [1] for this project is available in the RESOURCES section at the end of the article.

FIGURE 1 – The Giga-Bug game running on the finished project. Here you see the giant 128×96 LED display, stereo speakers and two game controllers. The black LED acrylic sheet over the display makes a big difference in appearance.

The display is made from six RGB LED Matrix panels from Adafruit. I used the big 64×32-pixel panels with the 6mm pitch [2]. Links to the key bill of materials items are also available in RESOURCES. Figure 2 shows how the panels fit together to make a 128×96-pixel display, which is the exact resolution of the original Mega-Bug game. I first played Mega-Bug on a 13″ TV, but this display is nearly 3′ across!

FIGURE 2 – The frame is made of wood strips held together with screws, glue and metal brackets. Each panel comes with four magnetic standoff posts that anchor them nicely to the metal brackets.

Each panel has two 16-pin IDC connectors—one for input and one for output to another panel. There is no PWM (pulse width modulation) driver on the panel. Instead, you must shift the RGB on/off bits through the entire display chain, and implement your own PWM as you rapidly refresh the display.

With 37,000 individual LEDs (128×96×3) refreshed ten times a second (370,000 updates/sec), the display requires a beefy processor and constant attention. My Raspberry Pi 3B+ SBC was up to the task. I used an Adafruit RGB Matrix Bonnet [3] to drive the display chain. The board uses 13 of the Pi’s GPIO pins—six for data and seven for control. It boosts these signals to 5V, and maps them to the IDC connector for the display panel. Then it is up to the Pi to bit-bang those GPIO pins.

The Adafruit Bonnet has a single output IDC connector to drive one chain of displays—one chain of six in my case. Henner Zeller’s amazing GitHub repo [4] has detailed information on these display panels and a C++ library for the Raspberry Pi. The Adafruit tutorial [5] shows you how to download Zeller’s library and his Python bindings.

Mr. Zeller also designed several flavors of HATs for the Raspberry Pi. You can order them from OSH Park, but you must order the delicate SMT components separately and then do all the soldering yourself. Really, it is more like brain surgery, but I built one of the Zeller boards as a personal challenge. It has three output connectors that drive three display chains at once—in my case, three chains of two panels. Both the Adafruit and the Zeller boards work well, and I stuck with the pre-assembled Adafruit board for this project.

Figure 3 shows how the panels are chained together to make the display. The chain begins with the Bonnet in the lower right corner. The communication flows like the number “2” shape, back and forth from bottom to top, ending with the panel in the upper left corner. Note that the panels in the middle are mounted upside down. This keeps the input/output connectors close to their neighbor panels. Otherwise, I would need 3′ ribbon cables to span the wider distance.

FIGURE 3 – Block diagram for the project. I used 18-gauge hook-up wire to make the power harness for the six panels and driver board. The panels are chained together with IDC connectors. The speakers and controllers plug into the Raspberry Pi’s USB ports.

Each Adafruit panel comes with two short IDC connectors. These are perfect for the horizontal connections, but they were too short for the vertical runs. For those, I ended up buying a pack of 1′ ribbon cables from Amazon. Each display also comes with a power connector cable. I used some 18-gauge wire from the local hardware store to make a wiring harness for all six panels. I used solder and shrink tubing to join the wires, but wire nuts or electrical tape work fine too. While I was shopping at Amazon, I bought a large power supply—60A at 5V [6]. It does the job, but it is bulky, and its internal cooling fan is loud. If you build your own display, I recommend using a smaller, quieter power supply.

I built the frame for the display out of wood from the local hardware store. I used screws and glue to hold the frame together. And I used metal “L” braces to reinforce the corners. Each Adafruit display panel comes with four magnetic standoff posts. These posts hold the panels securely to the metal braces on the frame. This magnetic mounting allows for fine adjustments to get the panels perfectly aligned.

Finally, I bought a sheet of Chemcast Black LED Acrylic to cover the matrix. The acrylic diffuses the bright pixels and makes a clean, professional-looking display. I ordered this sheet from TAP Plastics. I put the exact dimensions into the form on their website, and the plastic came cut to size, shipped to my door. Many people use 3D printed clips to hold the plastic to their displays. I found that small pieces of clear packing tape on the edges work perfectly.

I wanted audio for my project, but the display driver board uses the same GPIO pins as the Pi’s built-in audio system. I disabled the onboard audio as per the instructions in the Adafruit tutorial [5]. Then I used an inexpensive USB sound card for the audio. I also bought a pair of small speakers from my local Walmart. They are USB-powered and plug into the Pi right next to the sound card. For user inputs, I added a pair of standard USB controllers (SAFFUN brand) [7]. There is nothing special (or expensive) about these controllers. For now, my game only uses one controller. Maybe my next game will be for two players.


The Henner Zeller display library for the Raspberry Pi is written in C++. You can use the library directly in your own C++ code. Or you can use the bindings for Python, Go, NodeJS or Rust. My language of choice is Python.

To display an image with the library, you first configure an RGBMatrix() object. This is where you specify the physical characteristics of your display, which include the PWM resolution, number of rows, columns, panels, chains, the type of display (Zeller or Adafruit) and many other parameters.

Next you call CreateFrameCanvas() on the matrix object to get a frame buffer object to draw on. You call SetPixel(x,y,r,g,b) on the frame buffer to set the colors of any pixels you want to twiddle. Finally, you call SwapOnVSync() on the matrix object and pass your created frame buffer. In a background task, the Zeller library rapidly refreshes the display using your frame buffer. Whenever you want the display to change, you create a new frame buffer, set the new pixels and tell the library to use the new buffer—easy!

The frame buffer object expects 3 bytes for each pixel—one each for red, green and blue. My game does not need that degree of color resolution. The original game only has four colors. Instead, my code uses a color palette that is just an array of 3-byte values. Pixels in my code are simple 1-byte values that are really indices into the color palette for the final display.
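A minimal sketch of this indexed-color scheme (the palette names and color values here are illustrative examples, not the game's actual colors):

```python
# Illustrative palette: index -> (r, g, b). The values and the
# comments naming them are my own examples.
PALETTE = [
    (0, 0, 0),       # 0: background
    (255, 0, 0),     # 1: maze walls
    (255, 255, 0),   # 2: player
    (0, 255, 255),   # 3: dots
]

def resolve(pixels, palette=PALETTE):
    """Expand a row of 1-byte palette indices into (r, g, b) tuples,
    as done just before handing pixels to the display library."""
    return [palette[p] for p in pixels]
```

Storing 1-byte indices instead of 3-byte colors keeps the game's frame buffers small, and lets you recolor the whole display by swapping the palette.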

By default, the Zeller library sees a chain of panels as one long horizontal display. It thinks my six panels are laid out side by side as a 384×32 display. The library comes with several “mappers” you can use to specify the physical layout of your panels, but my complex layout with the upside-down middle panels does not fit any of the stock mappers.

I could write my own custom mapper in C++, but I chose to implement the mapping function in my high-level Python application code. My own Frame() object keeps an array of 128×96 single-byte values. When I’m ready to draw the display, my code iterates through the array of pixel values one by one, looks up the 3-byte color for each and maps the pixel to the correct spot in the 384×32 frame buffer expected by the library.
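One plausible sketch of such a mapping function, assuming the chain order described earlier (bottom-right panel first, snaking up to the top-left, with the middle row rotated 180°); the author's actual code may order the panels differently:

```python
PANEL_W, PANEL_H = 64, 32  # one Adafruit panel

# Chain position guessed from the article's "2"-shaped chain:
# (band, column) -> panel index in the 384x32 chain buffer.
CHAIN_POS = {
    (2, 1): 0, (2, 0): 1,   # bottom row, right to left
    (1, 0): 2, (1, 1): 3,   # middle row (mounted upside down)
    (0, 1): 4, (0, 0): 5,   # top row, ending upper left
}

def map_pixel(x, y):
    """Map a logical 128x96 coordinate to the 384x32 chain buffer."""
    band, col = y // PANEL_H, x // PANEL_W
    lx, ly = x % PANEL_W, y % PANEL_H
    if band == 1:  # middle panels are rotated 180 degrees
        lx, ly = PANEL_W - 1 - lx, PANEL_H - 1 - ly
    chain = CHAIN_POS[(band, col)]
    return chain * PANEL_W + lx, ly
```

With a table like CHAIN_POS, rearranging the physical panels only means editing six dictionary entries.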

My Frame() buffer object includes a copy constructor to create a new frame from an existing frame. This copy is implemented in the native Python library and it is very fast. This allows you to create a new frame buffer starting with a base frame, and change only the pixels that are different for the new frame. You don’t need to waste compute cycles redrawing the same static pixels over and over.
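A sketch of how such a copy constructor can work, using a bytearray so the copy is a single C-level operation (this Frame is a simplified stand-in, not the article's full class):

```python
class Frame:
    """Simplified frame buffer of 1-byte palette indices."""
    WIDTH, HEIGHT = 128, 96

    def __init__(self, other=None):
        if other is None:
            self.pixels = bytearray(self.WIDTH * self.HEIGHT)
        else:
            # bytearray(bytearray) copies in one native call, far
            # faster than redrawing static pixels in a Python loop.
            self.pixels = bytearray(other.pixels)

    def set_pixel(self, x, y, value):
        self.pixels[y * self.WIDTH + x] = value
```

A typical frame then starts as `Frame(base_frame)` and only the moving sprites are drawn on top.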


In my code, images are stored as rectangular arrays of pixel values. Images usually come in groups. The game’s mouth, for instance, has two images (open and closed) for each of the four directions: north, south, east and west. I collected these eight images in the code as a list of lists. The first list index identifies the direction (0, 1, 2, 3), and the second index identifies the open/close animation (0 or 1) for that direction. Therefore MOUTH[0][0] is the picture of the mouth going up in the closed position, and MOUTH[3][1] is the picture of the mouth going left in the open position.

The images for text characters, however, are more naturally organized in a map of string-to-array, with the actual character string as the key. The image for the uppercase letter “A,” for instance, is in CHARS[‘A’], and the dollar sign is in CHARS[‘$’]. This map makes it easy to iterate over a string of characters and draw a message on the screen.
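A sketch of how such a character map can drive a text-drawing routine; the tiny glyphs and the 2D-list frame below are simplified stand-ins for the article's real data:

```python
def draw_text(frame, x, y, chars, text):
    """Draw a string by looking up each glyph in the character map
    and advancing x by the glyph width. frame is a 2D list of
    palette indices; chars maps a character to a rectangular glyph."""
    for ch in text:
        glyph = chars[ch]
        for dy, row in enumerate(glyph):
            for dx, pix in enumerate(row):
                if pix:
                    frame[y + dy][x + dx] = pix
        x += len(glyph[0])  # advance by glyph width

# Tiny 3x4 demo glyphs, not the game's real 6x9 characters
CHARS = {'A': [[0, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 1]],
         'B': [[1, 1, 0], [1, 0, 1], [1, 1, 0], [1, 0, 1]]}
frame = [[0] * 16 for _ in range(8)]
draw_text(frame, 1, 1, CHARS, 'AB')
```

Because the map is keyed by the character itself, the loop body never needs an ASCII-to-index conversion.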

Python’s built-in lists and maps are perfect for organizing images. But I wanted a developer-friendly way of defining the images themselves. I wanted to use string representations in the code without having to pull in externally generated bitmap images. My library function from_string() recurses through a data structure—any complex data structure you give it—and converts the string values from ASCII art to pixel arrays.

At the top of Listing 1, you see two alternate images for the mouth facing up. The other three sets of images have been removed from the listing for brevity. My from_string() function recurses the object I pass to it, and looks for any list of strings. It replaces every list of strings with a list of images where an image is a rectangular array of pixels.

LISTING 1 – No fancy bitmap editor is needed! All my game sprites and text characters are drawn in ASCII art in Python strings. The code converts these strings to two-dimensional pixel lists at runtime.

MOUTH = [
[ # Mouth moving UP
'''
11..11 .1111.
1....1 .1..1.
11..11 .1111.
1....1 11..11
111111 111111
..11.. ..11..
''',
], # Three more directions
]
MOUTH = image.from_string(MOUTH)

CHARS = {
'0|1|2|3' : '''
.111.. ..1... .111.. .111..
1...1. .11... 1...1. 1...1.
1..11. ..1... ....1. ....1.
1.1.1. ..1... ..11.. ..11..
11..1. ..1... .1.... ....1.
1...1. ..1... 1..... 1...1.
.111.. .111.. 11111. .111..
...... ...... ...... ......
...... ...... ...... ......
''',
'4|5|6|7' : '''
...1.. 11111. .111.. 11111.
..11.. 1..... 1...1. 1...1.
.1.1.. 1..... 1..... ...1..
11111. 1111.. 1111.. ...1..
...1.. ....1. 1...1. ..1...
...1.. ....1. 1...1. ..1...
...1.. 1111.. .111.. ..1...
...... ...... ...... ......
...... ...... ...... ......
''', # Many more images
}
CHARS = image.from_string(CHARS)

At the bottom of Listing 1, you see eight text characters defined visually in multi-line strings. The from_string() function recurses down the object I pass to it, and looks for any maps of string-to-string. It replaces these maps with new maps of string-to-images (pixel arrays). Multiple images can be defined in a single string, and the function uses the symbol in the key name to identify separate names for each of the images.
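A simplified sketch of how from_string() can work, handling both lists of ASCII-art strings and maps with “|”-separated key names (the real library surely differs in detail):

```python
def parse_images(text):
    """Split one multi-line ASCII-art string into a list of images.
    Images sit side by side, separated by spaces; '.' means pixel 0."""
    rows = [line.split() for line in text.strip().splitlines()]
    return [
        [[0 if ch == '.' else int(ch) for ch in row[i]] for row in rows]
        for i in range(len(rows[0]))
    ]

def from_string(obj):
    """Recurse a structure, replacing ASCII-art strings with images.
    Handles the two container shapes used in the listings."""
    if isinstance(obj, list):
        if obj and all(isinstance(v, str) for v in obj):
            # A list of strings flattens into one list of images
            return [img for s in obj for img in parse_images(s)]
        return [from_string(v) for v in obj]
    if isinstance(obj, dict):
        out = {}
        for key, val in obj.items():
            if isinstance(val, str):
                # A key like '0|1|2' names one image per '|' part
                for name, img in zip(key.split('|'), parse_images(val)):
                    out[name] = img
            else:
                out[key] = from_string(val)
        return out
    return obj
```

With this shape, `from_string(MOUTH)[0][0]` is the first image of the first string, and `from_string({'a|b': ...})['b']` is the second image in that string.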

LISTING 2 – This complete example shows how to use my library to make animations on the display. First, create a background frame. Then, in a loop, make a copy of the background frame and draw the dynamic pieces.

base_frame = Frame()

base_frame.draw_image(10,15, GR.CHARS['A']) # The letter 'A'
base_frame.draw_text(5,5, GR.CHARS,'Demonstration') # Line of text
base_frame.draw_image(20,25, GR.BIG_BUG['standing']) # Bug standing
base_frame.draw_image(50,25, GR.BIG_BUG['dancing'][0]) # Bug dancing ...
base_frame.draw_image(70,25, GR.BIG_BUG['dancing'][1]) # ... two animations

direction = 1 # 0=UP, 1=RIGHT, 2=DOWN, 3=LEFT
animation = 0 # 0 or 1 ... two animations

while True:
    frame = Frame(base_frame) # Make a copy of the base frame
    frame.draw_image(10,60, GR.MOUTH[direction][animation])
    animation = (animation + 1) % 2 # toggle between 0 and 1

The Frame() object has a draw_image() method that copies the pixels of an image into the frame buffer at the given (X,Y) coordinates. Listing 2 shows all the library functions in action. First, the code creates a frame buffer with several images and text. Then the while loop copies the base frame to a new frame on each pass and draws the mouth with alternating open/close images. Only the mouth changes on each new frame, and that is all the code needs to draw. That is very efficient!


I used the popular Pygame library to play audio files and read the USB controller. Pygame really shines when you use its graphics library to make Python games using the monitor of your PC or Raspberry Pi. But it also works well in “headless” mode, where it manages just audio and inputs. That’s how I used it.

Listing 3 shows a simple Pygame program to move a sprite around on my LED matrix, using the joystick. The main loop uses a Pygame Clock() object to control the delay between frames. I want the frames to change regularly—on evenly spaced ticks in time. In this example, I want 10 frames every second. I cannot use a simple “sleep” function for the timing, because my code loop is not deterministic. Some passes through the loop are quick, whereas other passes take longer, and I want my frames to be perfectly spaced.

LISTING 3 – This complete example shows how to use Pygame to play a sound effect, read the gamepad controls and move a sprite around on the display.

pygame.init() # Also starts the joystick subsystem
joystick = pygame.joystick.Joystick(0) # First USB controller

clock = pygame.time.Clock()

x,y = 62,46 # Near the center
img = GR.MOUTH[1][0] # Just for demo

sound_eat = pygame.mixer.Sound('eat.wav') # Load sound resource

while True:
    for evt in pygame.event.get():
        if evt.type==pygame.JOYBUTTONDOWN and evt.button==1:
            # Event is PRESS of the A button
            x,y = 62,46 # Back to center
            sound_eat.play() # Play the eat.wav file
    x = x + round(joystick.get_axis(0))
    y = y + round(joystick.get_axis(1))
    fr = Frame()
    fr.draw_image(x,y, img)
    clock.tick(10) # 10 frames per second

The clock object tracks the time that has passed since the last call to tick() and uses the value to calculate the wait time for the next tick. You tell the tick() method what the overall frame rate should be. Here I asked for 10 frames a second (one frame every tenth of a second). The clock object subtracts the elapsed time and performs the needed wait.

First, the main loop reads all the Pygame events that have happened since the last pass. I am only looking for one event here—pressing the “A” button. When the button is pressed, the code resets the sprite to the center of the display and plays the eat.wav file. The sound effect plays in the background while the code continues.



Next, the main loop calls joystick.get_axis() to read the joystick position. On some controllers, each axis is an analog value from -1.0 to 1.0, with 0.0 being the center. My controller has digital inputs. I get a -1, 0 or +1. The code simply adds that value to the (X,Y) coordinate of the sprite.

Finally, the code creates a new frame and draws the mouth image on the frame buffer at the new (X,Y) coordinates. Then it waits for the next clock tick, and renders the frame on the display. And that is all there is to it! The short example in Listing 3 shows you a fully functional program with animation, audio, inputs and frame timing.


What better way to remake Mega-Bug than by following the original source code? One of my nerdy hobbies is to disassemble and comment old ROM images. You can find a link to my Mega-Bug work (and other disassemblies) at [8]. There are so many clever ideas in the original assembly language, and I even uncovered a bug in the code—not just one of the game bugs.

Mega-Bug uses a graphics mode with a resolution of 128×96 pixels. Each pixel can be one of four colors, and there are 4 pixels per byte. That yields a screen buffer size of 3KB. The game uses three screen buffers, which is why a 4K CoCo will not play Mega-Bug. The first buffer is the base screen that holds the mostly-static things—the maze, the dots and the pictures along the sides of the maze. The other two buffers alternate as active/background screens. While one of these buffers is being displayed, the other is being updated.
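The 4-pixels-per-byte packing works out like this; the high-bits-first ordering is my assumption about the CoCo's layout:

```python
def pack_row(pixels):
    """Pack 2-bit pixel values (0-3) into bytes, four pixels per
    byte. The leftmost pixel lands in the high bits; this ordering
    is an assumption, the real VDG layout may differ."""
    out = bytearray()
    for i in range(0, len(pixels), 4):
        b = 0
        for p in pixels[i:i + 4]:
            b = (b << 2) | (p & 0x03)
        out.append(b)
    return out

# One 128x96 screen at 4 pixels/byte: 128 * 96 // 4 = 3,072 bytes
```

Three such 3KB buffers plus the code is why the game needs more than a 4K CoCo.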

The assembly code does not redraw the new frame buffers from scratch every time. That would take too much precious time. Instead, the code copies smaller areas of memory from the base frame to erase any changes made from the base state. Then it runs a pixel-doubler routine to draw the magnifying glass over the maze. Finally, it draws the mouth and any visible bugs on top of the magnifier.

My Python code flows the same way, but instead of erasing small sections of an existing buffer, it uses the copy constructor to quickly copy the base frame buffer into a new frame buffer. Then it draws the magnifier, mouth and bugs on top of that. Finally, the code hands the completed frame buffer to the Zeller library for rendering, and begins drawing on a new frame.

I translated the maze-generator algorithm from 6809 assembly language to Python for my code. The algorithm has a loopiness parameter that determines how many maze runs terminate in loops and how many terminate in dead ends. The more loops there are in the maze, the easier it is for the player to dodge the enemy bugs. The loopiness decreases with each completed screen, making the game harder and harder at each level. My commented disassembly of Mega-Bug includes a 6809 emulator on the HTML page. You can experiment with the loopiness and watch the maze being drawn on an HTML canvas in slow motion. It is amazing to watch, if I say so myself.

Since my hardware has more colors than the original hardware, I added a color enhancement to my pixel-doubler code. As the code turns a single pixel into a block of four, it sets bit 7 in each magnified pixel. Thus, I have two sets of colors: 0-127 for the normal (unmagnified) things and 128-255 for pixels in the magnifier. For the original look of the game, I set both these color sets to the same values. The maze wall is red whether it is magnified or not. But you can tweak these sets for some cool effects!
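A sketch of a pixel-doubler with the bit-7 trick; the frame here is a simplified 2D list, and in this sketch the source and destination regions must not overlap (the original copied between separate buffers):

```python
MAG_BIT = 0x80  # bit 7 selects the "magnified" half of the palette

def magnify(frame, src_x, src_y, dst_x, dst_y, w, h):
    """Turn each source pixel into a 2x2 block at the destination,
    setting bit 7 so magnified pixels use the second color set."""
    for y in range(h):
        for x in range(w):
            color = frame[src_y + y][src_x + x] | MAG_BIT
            for dy in (0, 1):
                for dx in (0, 1):
                    frame[dst_y + 2 * y + dy][dst_x + 2 * x + dx] = color
```

Mapping colors 0-127 and 128-255 to the same palette values reproduces the original look; splitting them gives the effects described below.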

For instance, you can set the maze color to black in the unmagnified color set. Now the maze is only visible in the magnifier. Or you can hide the uneaten dots outside the magnifier. Or you can make all unmagnified colors into shades of gray, so that only the screen under the magnifier is shown in color. For demonstration, I added several color effects to my game. You can press buttons to change color sets as you play.


If you build a giant LED display, the application ideas will come. At least I hope they do, because these matrix panels are not cheap! I want to get a lot of hobby time out of them. So, what have I planned for them next? I might focus on software, and enhance the game with power-ups, warps and even shaped maze levels. I could add different kinds of bugs with different speeds or abilities. Maybe I will write the code in a different language—C++ or Go. But a rewrite would be just for fun, since the Python code is plenty fast.

I might focus on hardware, and spin a driver board that uses the Parallax Propeller 2 chip. It has more than enough GPIO pins and processing power to drive four display chains at once. I could give it an SPI or I2C interface for the main processor to talk to. And the Propeller driver could implement a tile-and-sprite engine to cut down on the communication between the host and display board. That would open up these panels to low-power microcontrollers. 


RESOURCES
[1] GitHub repo for project:
[2] Adafruit Panel (BOM):
[3] Adafruit RGB Bonnet (BOM):
[4] Henner Zeller’s Pi-Display-Driver Repo:
[5] Adafruit RGB Matrix Bonnet tutorial:
[6] Power Supply (BOM):
[7] SAFFUN generic USB controllers (BOM):
[8] Disassembled Mega-Bug Assembly Code:

Adafruit | OSH Park | Parallax | Raspberry Pi Foundation | TAP Plastics



Chris Cantrell is a Staff Engineer for Vertiv. He
also teaches for Professional and Continuing Studies at the University of
Alabama in Huntsville. When he isn’t working, you’ll find him soldering on
some fun IoT project or digging around in the ROMs of an old arcade game.
Chris has written several articles for Circuit Cellar over the years.

Copyright © KCK Media Corp.
All Rights Reserved
