Using a PIC32 MCU
It’s not easy to hold a heavy camera for a long period. With that in mind, these three Cornell students designed and implemented a 2-axis, gesture-controlled camera platform based on a Microchip PIC32 MCU that maps the orientation of the user’s hand to that of the camera. It’s able to support a relatively large camera system, orient the camera accurately and respond quickly to user input.
While pulling the smartphone out of your pocket and taking a snapshot have become normal for most people, carrying several pounds of gear and manipulating heavy cameras are still part of the daily life of professional photographers. With that in mind, we designed and implemented an automated camera controller that photographers would dream about—a 2-Axis, gesture-controlled camera platform that removes the equipment weight from their hands.
Our design was inspired by the observation that holding a professional DSLR camera during a long photo-shooting event—such as a hockey game or an air show—can be tiring. Furthermore, the accuracy with which a photographer can track an object with the viewfinder tends to decrease as he or she becomes fatigued. Therefore, a platform that allows the photographer to accurately control the camera without directly lifting it would be a helpful accessory.
Most of the existing platforms that accomplish such goals are either too expensive (multiple-DOF (degrees of freedom) robot arms), restricted in their available motions (they can only stabilize the camera or follow pre-programmed sequences), controlled by joysticks, or limited in the cameras they can support due to mechanical or electrical constraints. We therefore wanted to design a camera controller that was accurate, low-cost, multifunctional, easy to manufacture and intuitive to use.
WHAT IT CAN DO
Our 2-DOF gesture-controlled platform can point the camera in any direction within a hemisphere, based on spherical coordinates. It is capable of rotating continuously in the horizontal direction (yaw) and traversing close to 180 degrees vertically (pitch). It can support a relatively large camera system (more than 3kg in total weight and 40cm in length), orient the camera accurately (error less than 3 degrees) and respond quickly to user input (traversing 180 degrees in under 3 seconds). In addition to orienting the camera, the system also provides simple control functionality, such as allowing the user to auto-focus and take photos remotely, which is achieved through the DSLR’s peripheral connections.
For the user interface, our design supports three user input modes. The first uses a joystick, and the other two use an inertial measurement unit (IMU). In the first mode, the X- and Y-axes of a joystick are mapped to the velocities of the camera in the yaw and pitch directions. A gamepad-style SparkFun Joystick Shield is used as the controller, which is connected to the analog and GPIO pins of the PIC32 (Figure 1, left). The Joystick Shield also has buttons for focusing the camera, releasing the shutter, adjusting the panning speed and switching control modes.
In the second mode, the roll and pitch angles of the user’s hand are mapped to the velocities of the camera in the yaw and pitch directions. The third mode maps the angles to the angular position of the camera in yaw and pitch. The angles are measured through the on-board LSM9DS1 9-axis inertial sensor (STMicroelectronics) of an Arduino Nano 33 BLE Sense (Figure 1, right). The Arduino is attached to the user’s hand and communicates with the microcontroller (MCU) via serial. It also serves as a fully functional remote controller that allows users to take photos through the motions of their fingers.
Figure 2 shows the overall logical structure of our design. Two servo motors actuate the camera, and an AS5600 Hall effect sensor from AMS Electronics measures its horizontal angle. A Microchip PIC32MX250F128B MCU provides the processing power for the system. It takes inputs from the user to determine the target orientation of the camera, possibly with commands to switch control modes. Then, based on the error between the target position and the current position given by the angle sensor, the MCU sends pulse-width modulation (PWM) signals to the servo motors, with the goal of reducing the error.
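As a sketch of this error-reducing feedback idea, the following hypothetical helper (not the authors' actual firmware; the function name and gain value are our assumptions) turns the angular error into a clamped proportional speed command for the continuous-rotation yaw axis:

```c
/* Hypothetical proportional speed command for the yaw axis: given target
   and measured angles in degrees, return a speed in [-1.0, 1.0].
   KP is an assumed gain, not a value from the article. */
float yaw_speed_command(float target_deg, float measured_deg) {
    const float KP = 0.02f;
    float error = target_deg - measured_deg;
    /* Wrap into (-180, 180] so the continuously rotating platform
       always takes the shorter way around. */
    while (error > 180.0f)   error -= 360.0f;
    while (error <= -180.0f) error += 360.0f;
    float cmd = KP * error;
    if (cmd > 1.0f)  cmd = 1.0f;   /* clamp to full speed */
    if (cmd < -1.0f) cmd = -1.0f;
    return cmd;
}
```

Scaling the command down as the error shrinks is what lets the servo slow near the target instead of overshooting it.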
Mechanical Assembly: No one wants to see his or her camera lying on the ground with a shattered lens. Therefore, considerable effort went into the design and manufacturing of the mechanical assembly, to ensure that the platform can support a heavy DSLR camera and lens. The main assembly of the platform is made of transparent acrylic sheets, and can be mounted on top of most tripods. Figure 3 and Figure 4 show the finished assembly with and without the camera mounted. The major structural support of the assembly is made of 5.4mm, laser-cut, clear acrylic sheets and 3D-printed ABS plastic. All servo motors, the power supply and the control circuitry rotate together with the platform, so that no wires become entangled during continuous rotation.
The lower part of the platform consists of two layers. The lower layer is mounted to a 6″ aluminum rotating bearing, and the top layer is mounted above it, supported by 45mm brass spacers. To increase the margin of safety, each layer consists of two circular acrylic sheets stacked together. The top part of the platform consists of two vertical stands, with the camera rotating between them. Each stand has three layers of acrylic and is fastened to the horizontal plate through a set of angle brackets. All acrylic components can be cut from a single 36″ × 24″ sheet. M3 screws are used throughout the design as fasteners.
Actuators and Sensors: Two 25kg high-torque servo motors are used as actuators in the camera platform. The bottom servo, shown in Figure 5, is a continuous-rotation servo (no-load speed: 0.14s/60 degrees at 7.2V). Mounted to the top part of the assembly, it rotates the whole platform continuously, using a set of steel bevel gears with a 38/13 gear ratio. The 360-degree analog Hall effect angle sensor monitors the orientation of the rotating platform.
A Hall effect sensor measures the flux density and polarity of a magnetic field. Therefore, if a magnet is placed near the sensor, the relative angle between the sensor and magnet can be determined through the magnetic flux in multiple vector components of the field. On the platform, the sensor is positioned near the bottom servo, directly above the larger bevel gear, which holds a diametrically magnetized disk magnet at its center. The magnet has the north and south poles partitioned at its diameter. The magnet and Hall effect sensor are approximately 2mm apart, as required by the sensor specification.
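Assuming the AS5600 is used in its analog-output mode with the full 0–360 degree range scaled to the supply rail (an assumption about its configuration, not stated in the article), converting a 10-bit ADC sample into an angle is a single scaling step:

```c
/* Map a 10-bit ADC reading (0-1023) of the AS5600's ratiometric analog
   output to an angle in degrees, assuming 0 V corresponds to 0 degrees
   and full scale to 360 degrees. */
float adc_to_degrees(unsigned adc_10bit) {
    return (float)adc_10bit * 360.0f / 1023.0f;
}
```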
The top servo (Figure 6) has a 180-degree range (no-load speed: 0.13s/60 degrees at 6.8V). It is directly connected to the square camera mount via an 8mm steel beam that runs through the bearing on the vertical stand and the coupling connectors. The design of the square bracket ensures that the camera’s center of gravity in the vertical direction is approximately aligned with the rotation axis of the top servo. The center of gravity in the horizontal direction can be adjusted through a common tripod quick-release plate, based on different camera and lens configurations. This design minimizes the required torque, and ensures that the camera can maintain its angle through the internal friction of the servo when the power is off—that is, when the PWM has a 0% duty cycle. This feature reduces the vibration of the camera while taking a photo, allowing longer exposure times.
Circuit, Power and Interface to Camera: The complete schematic diagram of the circuit is shown in Figure 7, including the pins used on the PIC32 and the connections among all the components mentioned above. A board designed by Sean Carroll is used as the development board for the PIC32.
The high-torque servos require a large current while running (200mA nominal, 3,400mA stall), which cannot be supplied by the PIC32 board. Therefore, a separate battery is provided for each servo motor. In our design, the LP-E6 battery (7.2V, 2,000mA-hours) is used. It is a lithium-ion battery used in many Canon DSLR cameras; camera batteries from other manufacturers could be used as well. Such compatibility ensures that photographers can always conveniently charge the batteries. A single 9V alkaline battery powers the PIC32 board and the Arduino Nano. All batteries and circuits are placed on top of the horizontal plate, which minimizes interference with the moving camera.
Most DSLR cameras provide a connector for external shutter release. There are typically three wires—ground, shutter signal and focus signal. Auto-focusing and shutter release are enabled by shorting the corresponding wire to ground. In our design, such functionality is achieved using an NPN BJT (bipolar junction transistor), with the base connected to a digital output pin on the PIC32. An audio jack, shown to the right of the PIC32 in the Figure 7 schematic diagram, is soldered to a protoboard for the cable connection to the camera.
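The focus/shutter logic can be separated from the actual pin writes. The sketch below (the function name and bit assignments are ours, not from the schematic) computes which BJT bases should be driven high, following the convention that a shutter release keeps the focus line asserted:

```c
#include <stdbool.h>

enum { FOCUS_BIT = 1, SHUTTER_BIT = 2 };  /* illustrative bit positions */

/* Return a bitmask of the BJT bases to drive high. Driving a base high
   shorts the corresponding camera wire to ground, which the DSLR reads
   as a half-press (focus) or a full press (shutter). */
unsigned camera_lines(bool focus, bool shutter) {
    unsigned out = 0;
    if (focus || shutter) out |= FOCUS_BIT;   /* shutter implies focus */
    if (shutter)          out |= SHUTTER_BIT;
    return out;
}
```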
The program running on the PIC32 is divided into multiple threads implemented with Protothreads—a lightweight, stackless threading library. Protothreads is a cooperative multithreading system that requires each thread to yield voluntarily. We used a scheduler called pt_cornell_1_3_3, written by Professor Bruce Land. There are two major threads (protothread_serial and protothread_mode) and one interrupt handler (ISR) in the program. All code for the PIC32 was written in C. Apart from the code on the PIC32, the code on the Arduino Nano—written with the Arduino IDE—also played a big role in this project. The connections and communication protocols between components are shown in Figure 8.
Timer interrupt handler (ISR): In the ISR, we implemented three control modes for this project: mode 0 (Joystick), mode 1 (IMU Speed) and mode 2 (IMU Angle). In mode 0, the platform is controlled by a joystick, which changes the angular velocity of the camera based on the joystick’s position. Mode 0 contains two speed settings. When the button on the joystick shield connected to RB5 on the PIC32 is pressed, the angular velocity of the platform is reduced to 25%, which allows for more accurate position control.
In mode 1, the roll and pitch angles of the user’s hand are mapped to the velocities of the camera in the yaw and pitch directions. A larger tilt angle of the hand will result in a higher angular velocity. In mode 2, the roll and pitch of the hand directly correspond to the target yaw and pitch of the camera. As a result, when the user’s hand is stationary, the camera will move to the target orientation and stabilize at that angle. In this mode, the duty-cycle of the bottom servo motor depends on the relative position between the current angle and target angle. The servo will turn more slowly as it approaches the target. This prevents the camera from overshooting the target and eliminates oscillations around the target angle. The full range of motion achievable by mode 2 is 180 degrees in both directions.
The algorithm in the ISR computes and sets the duty-cycle for the servo PWM based on the control mode. The variable servo_p is determined by the pitch angle of the gesture in modes 1 and 2, or the Y-axis of the joystick in mode 0. It controls the top servo and has a range of 0 – 1,023. The variable servo_r is calculated from the roll angle of the gesture in modes 1 and 2, or the X-axis of the joystick in mode 0. It controls the bottom servo and has a range of 312 – 1,562. The ISR runs at 50Hz (20ms interval), and the duty-cycle range of 312 – 1,562 corresponds to an on-time of 0.5ms – 2.5ms, as specified by the servo documentation. The flow chart in Figure 9 shows the overall structure of the ISR.
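The quoted numbers imply a timebase of 625 timer ticks per millisecond (12,500 ticks per 20ms period), since 312 and 1,562 then land on 0.5ms and 2.5ms. Under that inferred assumption (it is not taken from the source code), the pulse-width-to-duty conversion looks like this:

```c
/* Convert a desired servo pulse width (ms) into the PWM compare value.
   The 625 ticks/ms timebase is inferred from the 312-1,562 range quoted
   in the article, not read from the actual firmware. */
unsigned pulse_ms_to_duty(float ms) {
    const float TICKS_PER_MS = 625.0f;   /* 12,500 ticks / 20 ms */
    if (ms < 0.5f) ms = 0.5f;            /* clamp to the servo's limits */
    if (ms > 2.5f) ms = 2.5f;
    return (unsigned)(ms * TICKS_PER_MS);
}
```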
TWO MAJOR THREADS
protothread_serial thread: We used serial commands in the serial thread for the communication between the Arduino Nano 33 BLE Sense and the PIC32 board. The messages sent by the Arduino are pattern-matched against a single-letter command followed by an integer value. There are three types of serial messages: a message that starts with the character “p” is the pitch angle of the hand, a message that starts with “r” is the roll angle of the hand and messages that start with “s” indicate shutter status. When cmd (the first letter of the command) is “s,” if the follow-up integer value is 1, then the control signals for the camera shutter and auto-focus are both set to true. If the value is 0, the shutter is set to false and the auto-focus is set to true; otherwise, both are set to false.
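A minimal parser for this letter-plus-integer protocol might look like the following sketch (the struct and function names are ours; the actual firmware may differ):

```c
#include <stdbool.h>
#include <stdio.h>

/* State updated by incoming serial messages (illustrative names). */
typedef struct {
    int pitch, roll;        /* raw hand angles as sent by the Arduino */
    bool shutter, focus;    /* camera control signals */
} cam_state;

/* Parse one message such as "p512", "r120" or "s1".
   Returns false on a malformed or unknown command. */
bool parse_cmd(const char *msg, cam_state *st) {
    char cmd;
    int val;
    if (sscanf(msg, "%c%d", &cmd, &val) != 2) return false;
    switch (cmd) {
    case 'p': st->pitch = val; break;
    case 'r': st->roll  = val; break;
    case 's': /* s1: shutter+focus; s0: focus only; else neither */
        st->shutter = (val == 1);
        st->focus   = (val == 0 || val == 1);
        break;
    default: return false;
    }
    return true;
}
```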
protothread_mode thread: The mode thread is used to switch modes and process received serial commands for camera control. In this thread, we set the yield time to 50ms, which causes the protothread_mode thread to yield for 50ms without blocking other threads. The user can switch the mode by pressing the mode button connected to RA4 on the PIC32 (with internal pull-up), provided the button was not pressed in the previous cycle. For example, if the mode is 0 and the mode button is pressed but was not pressed in the previous cycle, we switch the mode to 1. Moreover, in mode 0 (Joystick), if the shutter button connected to RA3 is pressed, the camera releases the shutter and takes a picture. If the focus button connected to RB4 is pressed, the camera turns on auto-focus. In modes 1 and 2, if the control signal for the camera shutter, which is set in the serial thread, is true, the shutter is released. If the signal for focusing is true, auto-focus is turned on.
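The pressed-now-but-not-last-cycle rule is a simple rising-edge detector. A sketch (hypothetical helper, assuming the three modes cycle 0 → 1 → 2 → 0):

```c
#include <stdbool.h>

/* Advance the control mode only on a new press: the button reads as
   pressed in this 50ms cycle but was not pressed in the previous one.
   (With the internal pull-up, "pressed" corresponds to reading low.) */
int next_mode(int mode, bool pressed_now, bool *pressed_prev) {
    if (pressed_now && !*pressed_prev)
        mode = (mode + 1) % 3;   /* cycle through modes 0, 1, 2 */
    *pressed_prev = pressed_now;
    return mode;
}
```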
Arduino Nano 33 BLE Sense: To use the IMU as a user interface, we need to extract information about the pitch and roll angles from the sensor readings. We can calculate the pitch and roll angles from the accelerometer readings as follows:

pitch = arctan( X / √(Y² + Z²) )
roll = arctan( Y / √(X² + Z²) )

where X, Y and Z are the readings from the three axes. The resulting pitch and roll angles range from -90 to 90 degrees. To reduce noise from the vibration of the user’s hand, a low-pass filter is applied to the calculated angles.
The Arduino board reads the data from the IMU, calculates the pitch and roll angles, maps them to an integer from 0 – 999 and sends the data to PIC32 through serial. It also determines the camera control signal based on the position of the user’s finger, which could be detected by the on-board proximity and light sensors. If the finger is above the digital proximity sensor, the sensor will output 0 and the auto-focusing will be turned on through serial command. If the finger is covering the ambient light sensor, the sensor will read close to 0 in all three-color channels, and the shutter will be released through serial.
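Assuming a linear mapping (the exact scheme is not given in the article), scaling a -90-to-90-degree angle into the 0 – 999 serial range could look like:

```c
/* Linearly map an angle in [-90, 90] degrees to an integer in [0, 999]
   for transmission over serial. The rounding and clamping choices here
   are ours, not necessarily the authors'. */
int angle_to_serial(float deg) {
    if (deg < -90.0f) deg = -90.0f;
    if (deg >  90.0f) deg =  90.0f;
    return (int)((deg + 90.0f) * 999.0f / 180.0f + 0.5f);
}
```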
TESTING AND RESULTS
The final assembly of our 2-axis camera platform was tested with a 3.5kg camera system with a zoom lens, and was able to turn the camera with reasonable stability. The system was put into IMU Angle mode to test the accuracy and responsiveness of the platform. The average settling time from the current orientation to the target orientation was measured with a digital timer. The measured angle of the camera was compared with the desired target orientation printed by the Arduino to the serial monitor.
The angle in the yaw direction was measured against a marked reference angle on the base of the platform using a protractor, and the angle in the pitch direction was measured using a smartphone’s built-in angle sensor, placed parallel to the lens. The average error was 1.5 degrees in the pitch direction and 2.8 degrees in the yaw direction, and the average settling time was 2.91 seconds for a 120-degree traversal.
We designed a gesture-controlled camera platform that is precise, multifunctional, low-cost and intuitive to use. The performance (accuracy and responsiveness) of the design met our expectations. However, we see multiple ways through which we can improve the project. Most of the errors in the yaw direction were caused by the loose mechanical coupling between the two bevel gears. A redesign of the support structure for the bottom platform could potentially resolve the issue. Furthermore, the loose coupling makes implementing a more sophisticated controller for the bottom servo relatively difficult, since the behavior of the gear is unpredictable.
In our current implementation of mode 2, the bottom servo is never driven at full power, to prevent overshooting and oscillation around the target position. Meanwhile, to make the platform more usable, all the user interfaces could be changed to wireless, and pre-programmed control sequences could be stored inside the PIC32 to enable the photographer to take panoramic photos. We believe that the responsiveness and accuracy of the system open up great application potential, particularly because more advanced features, such as object tracking, could also be incorporated.
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • AUGUST 2020 #361