The PANTHER I Project
Drones are an efficient tool for capturing the visual data needed to build a 3D model of a large space. Learn how this Camosun College team used a custom-built drone, open-source photogrammetry software, and game design software to create a 3D model of a building on campus.
Long, boring bus rides to work are a daily reality for many around the world. Millions of people stare out bus windows for hours as monotonous, grey concrete jungles drift by. Now, imagine that same commute instead spent meandering through the ancient stones of Stonehenge on a warm summer day, all the while gazing out over the soft, rolling hills of the British countryside. Pure relaxation. This scenario is no longer just wishful thinking. Virtual reality (VR) escapism is becoming much more prevalent in today’s world, and advancements in photogrammetry, 3D modeling, drone, and AI/computer-vision technologies have made VR projects and applications accessible even to low-budget teams.
We are MADKAT Vision—one of Camosun College’s Electronics and Computer Engineering Technologist capstone teams from Victoria, BC. At the start of our project term, our vision was to put those technologies to use toward one goal: 3D modeling structures through aerial photogrammetry and implementing them in a video game. Specifically, what we had in mind was capturing a model of someone’s house. Surely, seeing a 3D model of a building like that would impress our instructors and get us an A+!
We chose to build a custom photogrammetry drone (Figure 1) to bring a real structure into the virtual world. To attain this goal, we combined our drone with the free, open-source photogrammetry software Meshroom and the game design software Unity. We called our drone the PANTHER I, and it was designed explicitly for aerial photogrammetry. (Photogrammetry is a 3D coordinate measuring technique that allows users to generate 3D models of an object or environment from a series of photos. It is the same technology that made possible Google Earth’s 3D representation of our planet.)

Figure 1: Our drone, the PANTHER I, as it goes through some pre-flight checks. This drone is the backbone of our project, designed to carry high-quality cameras for extended periods of time.
The drone is modular, user-friendly, and capable of long flight times with a high carrying capacity. These are ideal qualities, because photogrammetry requires taking several hundred photos in one flight, potentially hundreds of feet in the sky. Our drone’s payload is a high-resolution GoPro HERO9 camera, which we use to capture every angle of a structure. Those images are then combined using photogrammetry software and subsequently used in game design.
Over 14 weeks, we gradually brought the PANTHER I into existence while refining our photogrammetry-to-game process. Throughout, we faced many challenges. The most interesting obstacle we overcame, and the one we delve into here, can be summed up with one question: “How do we control a GoPro remotely from over a kilometer away?”
PAYLOAD CONTROL SYSTEM
The answer we found to that question is our Payload Control System (PCS) shown in Figure 2. This system uses the pre-existing radio-frequency (RF) connection between the handheld controller and on-board flight controller (FC). Thus, our system gains the long-range benefits of radio transmission, while concurrently supporting the necessary short-range Bluetooth Low Energy (BLE) connection between the ESP32 module and the GoPro HERO9.

Figure 2: A flow chart depicting our picture-taking process. You can see how using the ESP32 Feather Board allowed us to control any peripheral device, not just a GoPro.
Our PCS begins with the FrSky Taranis X9 Lite transmitter (RC controller) and the drone’s FrSky R-XSR receiver. These two RF devices operate at 2.4GHz and communicate over 24 channels. This connection is responsible for acquiring the user’s control inputs and sending them to the Holybro Kakute F7 V1.5 flight controller.
The RC controller is the user’s main interface with the drone. It is here that flight, failsafe, gimbal, and camera elements are controlled with the flick of a switch or push of a joystick. For the GoPro’s shutter control, we configured a switch on the Taranis X9 Lite to transmit a signal to the onboard flight controller over a specific radio channel, set up using the open-source flight controller firmware Betaflight.
Once the switch signal is received, the flight controller is configured to drive a 5V high signal on one of its UART pins. This pin is wired directly to a GPIO pin on our ESP32, which, upon reading the high signal, executes code that sends a “set shutter” command over BLE to the GoPro HERO9. And voilà, a picture is taken!
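To make that handoff concrete, here is a minimal sketch of the ESP32 side of the process. It is illustrative rather than our exact firmware: the pin number is arbitrary, the BLE connection is assumed to have been established in setup(), and the four command bytes are the “set shutter: on” packet documented by Open GoPro.

#include <Arduino.h>
#include <BLEDevice.h>

const int TRIGGER_PIN = 4;                       // GPIO wired to the FC UART pin (pin choice is illustrative)
BLERemoteCharacteristic* pRemoteCharacteristic;  // assigned during the BLE connection sequence
bool lastState = false;

// "Set shutter: on" packet, as documented in the Open GoPro BLE spec
uint8_t shutterOn[] = {0x03, 0x01, 0x01, 0x01};

void setup() {
  pinMode(TRIGGER_PIN, INPUT);
  // ... BLE scan, connect, and pairing happen here (covered later in the article) ...
}

void loop() {
  bool state = digitalRead(TRIGGER_PIN);
  if (pRemoteCharacteristic && state && !lastState) {
    // Rising edge: one flick of the transmitter switch equals one photo
    pRemoteCharacteristic->writeValue(shutterOn, sizeof(shutterOn));
  }
  lastState = state;
  delay(10);  // crude debounce
}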
The last part of our PCS, the BLE connection between the ESP32 and GoPro, may sound simple in theory; after all, how hard can it be to get a camera to take a picture? In fact, this turned out to be one of the more crucial and technical highlights of our entire project. To break down that process a little more, we shall start by explaining a bit about the GoPro HERO9.
GOPRO HERO9
We chose the GoPro HERO9 camera because it has the exact qualities students look for in an entry-level aerial photogrammetry camera: cost-effective, lightweight, robust, and high-resolution (20 megapixels). There was only one problem—we needed to find a way to make the GoPro take photos remotely. At this point, you might ask: “Doesn’t GoPro already have an app for that?” The answer is yes. The remote-control app, GoPro Quik, works very well at short range, since it uses a cell phone’s Wi-Fi and BLE connections. However, it does not function reliably over the distances we needed, because large-scale aerial photogrammetry demands operation hundreds or even thousands of meters away from the operator. Thus, we got to work engineering a way to make the GoPro take pictures on command from longer distances.
While brainstorming, we came up with a few potential solutions to this problem. Our very first thought was to try a direct-wired connection. Unfortunately, it turns out that GoPro cameras do not currently support device control through their USB-C connection. That left two choices: a mechanical button presser that could be integrated into our custom-designed GoPro enclosure, or triggering the internal electronic shutter over a wireless connection facilitated by an external wireless device. Since we are electronics and computer engineering technologists, we decided to give the wireless option a shot, leaving the mechanical option as a backup.
ESP32-TO-GOPRO CONNECTION
Shortly after deciding how to proceed, we began researching wireless connections to GoPro models. This proved to be a full-time job, and we were met with several roadblocks along the way. The first hurdle was an easy one—what device do we use to facilitate our wireless connection? We quickly selected an ESP32-WROOM microcontroller module.
For those unfamiliar, the ESP32-WROOM is a low-power microcontroller module that supports Wi-Fi and Bluetooth connectivity applications [1]. In addition, it costs only about $10, including shipping. Did we mention we are students? The thing is, those features are fairly standard among microcontroller devices of its class, so the real reason we chose the ESP32 was that we already had experience working with it in previous lab projects. The fact that we had five of them lying around made it an easy decision.
The second hurdle proved to be much more significant. We learned that the GoPro’s shutter controls could only be accessed through a Bluetooth Low Energy (BLE) connection, and not over Wi-Fi. This revelation was provided to us by Open GoPro, an open-source Application Programming Interface (API) project released by GoPro in the summer of 2021 [2]. The Open GoPro project is intended to assist third-party developers, such as ourselves, with integrating the latest GoPro cameras into their own development efforts.
Most importantly for our project, Open GoPro documents all the Wi-Fi and BLE specifications for the latest cameras, alongside some code examples for apps using those specifications. As you can guess, this wealth of knowledge was our key reference when putting together the code for our ESP32-GoPro interface application.
With that said, Open GoPro is still pretty new to the scene and doesn’t have many examples available; there were none at all in C. On top of that, none of us had any experience working with BLE, so much of the information on the website looked like an indecipherable alien transcript. Thankfully, there are reams of learning resources, discussions, and other BLE projects online, from places like Arduino, that we used to fill the gaps in our knowledge.
After the invaluable discovery of Open GoPro, we began steadily building our knowledge of BLE. We read online resources, built test programs, followed tutorials, and played with the code. The very first step was following an Arduino example and successfully getting the ESP32 to act as a server. Using a mobile app, we were able to read and write values stored in an on-board memory cache known as a characteristic.
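That first test looked much like Arduino’s stock BLE_server example; a condensed version is shown below. The UUIDs are the example’s placeholders (not anything GoPro-specific), and the device name is made up.

#include <BLEDevice.h>
#include <BLEUtils.h>
#include <BLEServer.h>

// Placeholder UUIDs from Arduino's BLE_server example; any valid UUIDs work
#define SERVICE_UUID        "4fafc201-1fb5-459e-8fcc-c5c9c331914b"
#define CHARACTERISTIC_UUID "beb5483e-36e1-4688-b7f5-ea07361b26a8"

void setup() {
  BLEDevice::init("MADKAT-ESP32");  // name the phone app will see (hypothetical)
  BLEServer* pServer = BLEDevice::createServer();
  BLEService* pService = pServer->createService(SERVICE_UUID);

  // The characteristic is the stored value a phone app can read and write
  BLECharacteristic* pCharacteristic = pService->createCharacteristic(
      CHARACTERISTIC_UUID,
      BLECharacteristic::PROPERTY_READ | BLECharacteristic::PROPERTY_WRITE);
  pCharacteristic->setValue("Hello from the ESP32");

  pService->start();
  pServer->getAdvertising()->start();  // make the server visible to the phone
}

void loop() {
  delay(1000);  // nothing to do here; BLE runs in the background
}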
Progress on the project continued, and we reached an instrumental milestone at a pivotal moment of low morale. We had hit a bit of a wall banging our heads against the C code, so we decided to try another avenue: some C# example code provided by Open GoPro that, in theory, did exactly what we wanted. Our intention was to attempt some reverse engineering, but with our limited knowledge of C#, the code ended up being more indecipherable than the BLE spec sheet. Despite flying blind, as a test we ran the code on a school computer, and lo and behold—we connected with the GoPro HERO9 and took our first picture over a BLE connection! This simple act proved that our goal was attainable, and we refocused on completing our C code with renewed vigor.
Within a week, we had some code that we felt confident would work. We had an amalgamation of Arduino’s BLE example programs, Open GoPro, and our own work. Our program recognized that it was connected to the GoPro, but to our disappointment, the GoPro wouldn’t leave pairing mode. This meant that the camera would not accept our commands. While the pairing symbol taunted us, we turned back to Open GoPro, whose only instruction at this stage was to “Finish pairing with the peripheral”.
Eventually, we learned that the GoPro might require some encryption in order to pair. We scoured the Internet, eventually making it all the way to the deepest darkest corner of the web: page 2 of Google. We found our answer at the very bottom—a relatively nondescript forum post where members were helping troubleshoot someone else’s GoPro remote-control app. It was here that user jpbusman casually supplied the exact solution to our problem [3].
As a reminder, at this point our ESP32 program could scan, find, and connect to the GoPro, but failed to pair. With this line of code (Figure 3) implemented before our connecting sequence, the ESP32 is allowed to pair with the GoPro. After successful pairing, the GoPro is able to receive commands, including the essential “shutter: on” command.
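// Write the four-byte packet to the GoPro's command characteristic over BLE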
uint8_t array[] = {0x03, 0x01, 0x01, 0x01};
pRemoteCharacteristic->writeValue(array, 4);

Figure 3: jpbusman’s encryption code snippet. Our entire project was held up until we came across this comment. Our previous work with wireless protocols didn’t focus on encryption, so we weren’t expecting to contend with anything like it in our project. Looking back, however, it seems obvious that modern devices would have some level of communications protection we would have to deal with!
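For context, here is roughly where that write sits in an ESP32 Arduino BLE client sequence. This is a sketch, not our full program: the scanning code is omitted, and the service and characteristic UUIDs are the ones the Open GoPro documentation listed at the time, so verify them against the current spec.

#include <BLEDevice.h>

// UUIDs per the Open GoPro BLE documentation (verify against the current spec)
static BLEUUID serviceUUID((uint16_t)0xFEA6);                     // GoPro control and query service
static BLEUUID charUUID("b5f90072-aa8d-11e3-9046-0002a5d5c51b");  // command characteristic

BLERemoteCharacteristic* pRemoteCharacteristic;

// 'device' comes from a BLE scan that matched the GoPro's advertisement
bool connectToGoPro(BLEAdvertisedDevice* device) {
  BLEClient* pClient = BLEDevice::createClient();
  if (!pClient->connect(device)) return false;

  BLERemoteService* pRemoteService = pClient->getService(serviceUUID);
  if (pRemoteService == nullptr) return false;

  pRemoteCharacteristic = pRemoteService->getCharacteristic(charUUID);
  if (pRemoteCharacteristic == nullptr) return false;

  // The Figure 3 write: after this, the camera leaves pairing mode
  // and starts accepting commands
  uint8_t array[] = {0x03, 0x01, 0x01, 0x01};
  pRemoteCharacteristic->writeValue(array, 4);
  return true;
}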
After a quick edit and a blessedly quick compile, our team held a collective breath as we watched the connection and pairing complete. After a press of our thrown-together button-clicker circuit, time seemed to slow as the milliseconds ticked by and the camera sat unresponsive. An eternity later, the GoPro made a small sound—and just like that, our first picture was taken. A huge hurdle had been cleared, and our project was nearing completion! We only had time for a small celebration, however, as our semester was quickly coming to an end, and we had lots to do.
The final part of the hardware portion of our project was the integration of the drone and the ESP32-GoPro interface. This required replacing the test pushbutton circuit with the toggle switch on our RC controller. We modified the settings on the RC controller to transmit the user’s toggle to the FC, and then programmed the FC to drive a 5V high signal on one of its UART pins. This pin was then wired directly to the same GPIO pin on the ESP32 that we had previously used to read the test button click. Without having to edit any code, our first integrated test worked perfectly, and our GoPro’s distance problem was solved.
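On the Betaflight side, driving a spare pad from a transmitter switch is normally done with the PINIO feature. The CLI fragment below sketches that style of configuration; the pad name (B10) and UART number are placeholders, not our exact Kakute F7 mapping.

# Free a UART TX pad and reassign it to PINIO (pad name is a placeholder)
resource SERIAL_TX 3 NONE
resource PINIO 1 B10
# Expose PINIO 1 as the USER1 mode; USER1 is then bound to a
# transmitter switch in the Betaflight Configurator's Modes tab
set pinio_box = 40,255,255,255
save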
PHOTOGRAMMETRY
With the drone now able to take photos, we moved on to the next phase of the project: photogrammetry.
Through research and practice, we developed a methodology for capturing good-quality images conducive to a good final model. We start by taking pictures of a stationary object, making sure to capture every angle, typically encircling it three times at three different altitudes. At least 60% overlap of the object is maintained between pictures. If we want more detail in a certain area, such as a doorway, we can take a set of close-up photos, and the software will incorporate them into the final product.
We also took care to limit the number of reflections and shadows on the structure. Ideal conditions for photogrammetry are a stationary structure in weather that is overcast but not raining. If an object and its surroundings change too much during the capturing process, or include too many reflections or shadows, the software used to construct the 3D model will likely fail to build those areas accurately, leaving big holes or other oddities in the final product.
Depending on the size of the subject and the amount of detail needed, it usually takes only about an hour to get a good capture. Once all the photos are taken, we import them into the photogrammetry software, Meshroom (Figure 4). Meshroom is a free, open-source, photogrammetric computer vision framework created by AliceVision. “Free” made it an easy choice, but Meshroom has the added benefit of providing quality comparable to that of some of its paid competitors. Its node-based workflow was another reason we chose this product; with it, we were able to modify the computational workflow extensively to improve our output. The downside of this free software is that it requires about double the rendering time of some of its paid peers, and rendering time is highly dependent on the quality of your computer’s graphics card. (As a reference, on our school computer with an NVIDIA GTX 1080 graphics card, a typical 225-image rendering took around 8 hours to complete.) We typically rendered overnight.
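For unattended overnight runs, Meshroom can also be driven from the command line with its default pipeline. The one-liner below is a sketch: the binary is called meshroom_batch in recent releases (meshroom_photogrammetry in older ones), and the paths are placeholders.

meshroom_batch --input ./photos --output ./model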

Figure 4: Photogrammetry in action. This is Meshroom showing us our point cloud of the Portable A building on our campus. Note the white floating boxes: Meshroom has figured out where in space each photo was taken relative to the others.
Capturing high-quality image sets that Meshroom can work with takes practice. Using a high-definition camera and the basic three-tier, three-angle capturing technique cuts out most of the uncertainty, but staying mindful of reflections, shadows, angles, and details takes experience. Developing an optimized, node-based photogrammetry pipeline in Meshroom requires a lot of research and understanding of the process, but the default configuration is more than adequate for a decent scan. After developing these skills (or following an online tutorial or two), there is truly no limit to what you can capture and reconstruct in a virtual world.
APPLICATION TO UNITY GAME DESIGN SOFTWARE
For the final step in our project, we brought our 3D models to life using Unity. Unity is a popular game design application with a built-in IDE in which users can view, interact with, and modify 3D structures and their environments. Since the start of our capstone project, we had envisioned being able to easily create a 3D model of one’s home, import it into Unity, and use it to plan out and, using VR, walk through renovations without touching the building in real life. Having a realistic virtual copy of a real-world structure like this would also allow people to view it from anywhere on the globe.
By the end of our project, we were able to demonstrate the feasibility of this idea (Figure 5). We placed a team member’s house, currently undergoing renovations, into the same game environment as an alien model representing a playable character. With Unity as the foundation, the possibilities for implementing real structures in the virtual world are limited only by your imagination.

Figure 5: A house we scanned that is under renovation, represented in a video game environment we created, with a placeholder character for users/players. You can see how useful an accurate model of a structure under construction can be!
CONCLUSION
Considering that this was our first team project at a semi-professional level, we think it was a huge success. We achieved everything we set out to accomplish. Our drone is capable of stable flight, the camera can be triggered remotely, and real-world objects captured by our device were successfully 3D modeled using Meshroom and can be interacted with in Unity. The end result is impressive, and the accurate virtual representations of places we have been to and lived in are exciting to see.
Our Payload Control System is one of the project’s highlights. It lays the foundation for our drone platform to be used for a wide variety of functions, with all the code well documented and easily modifiable.
All in all, the MADKAT Vision team is in total agreement that this project semester was an amazing learning experience, a resounding success, and most importantly, a lot of fun. We are proud of what we accomplished and are excited to have been able to share it with you. We became closer as a team, engineered solutions to many design challenges, and ultimately delivered a functional final product. The PANTHER I still has room to grow, but even so, we are happy we had the opportunity to fly a drone of our own design and make our vision a reality. For more information about our project, please visit our website at www.madkatvision.com.
Additional materials from the author are available at:
www.circuitcellar.com/article-materials
References [1] to [3] as marked in the article can be found there.
RESOURCES
Adafruit | www.adafruit.com
AliceVision | www.alicevision.org
Bitfab | www.bitfab.io
Drone Safety (Government of Canada) | https://tc.canada.ca/en/aviation/drone-safety
ElectroRules | www.electrorules.com
Open GoPro | https://gopro.github.io/OpenGoPro
Unity | www.unity.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • MAY 2022 #382