LIDAR 3D Imaging on a Budget

PIC32-Based Design

Demand is on the rise for 3D image data in a variety of applications. That has spurred research into LIDAR systems capable of keeping pace. Learn how this Cornell student leveraged inexpensive LIDAR sensors to build a 3D imaging system—all within a budget of around $200.

By Chris Graef

There’s a growing demand for 3D image data in a variety of applications, from autonomous cars to military base security. This has prompted research into high-precision LIDAR systems capable of creating extremely clear 3D images. While these high-end systems produce accurate and precise images, they can cost anywhere from several thousand to tens of thousands of dollars. A side effect of this research, however, is the growing availability of LIDAR devices at prices far more affordable for tinkerers, students, hobbyists and budget-constrained embedded system developers. Using this new supply of inexpensive LIDAR sensors, I was able to build a 3D imaging system on a budget of around $200. The major parts used for the system are listed in Table 1.

Table 1 Shown here are the cost and quantity of the major components used in the project. Not included are some smaller components such as wires, resistors and op amps.

At a glance, my LIDAR scanner works by sweeping a single-point LIDAR range finder through a scan pattern. A Microchip PIC32 microcontroller controls two analog feedback servos, one setting the azimuth angle and one setting the altitude angle, to move the mounted LIDAR distance sensor through that pattern. By synchronizing the feedback data of these two servos with the distance readings from the LIDAR sensor, the system defines one point in 3D space in spherical coordinate form. After the system has had time to collect 10,000 to 20,000 points, the result is a 3D image made up of distinct spatial points. These points are stored in a point cloud data file format, which can be displayed by graphing software such as MATLAB.
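
As a rough illustration of that conversion step, here is a minimal C sketch (the names and units are my own assumptions; the article’s actual firmware isn’t reproduced in this excerpt) that turns one synchronized sample of azimuth, altitude and range into a Cartesian point for the cloud:

    #include <math.h>

    /* One synchronized sample: servo feedback angles (radians)
       and the LIDAR range reading (meters). */
    typedef struct { double azimuth, altitude, range; } sample_t;
    typedef struct { double x, y, z; } point_t;

    /* Standard spherical-to-Cartesian conversion, assuming the
       sensor measures from the scanner's center of rotation. */
    point_t sample_to_point(sample_t s)
    {
        double horiz = s.range * cos(s.altitude); /* projection onto the XY plane */
        point_t p;
        p.x = horiz * cos(s.azimuth);
        p.y = horiz * sin(s.azimuth);
        p.z = s.range * sin(s.altitude);
        return p;
    }

Streaming these x, y, z triplets to a text file yields a point cloud that MATLAB’s scatter3 function can plot directly.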

MECHANICAL DESIGN

A CAD model of the imaging system is shown in Figure 1. The servos are shown in blue, the LIDAR is shown in red and the 3D printed mounts are shown in gray. All the components are connected using nuts and machine screws. The lower (azimuth) servo rotates the entire apparatus above it. The upper (altitude) servo rotates just the LIDAR sensor. The combined motion of the two servos results in the scan pattern of the system.

Figure 1
A CAD model of the LIDAR sensor and servo mounting. The LIDAR sensor is shown in red, the servos are shown in blue and the mounting brackets are shown in gray.

One thing to note in this design is the set of slots used on the mounting brackets to fasten both the altitude servo and the LIDAR sensor. One of the biggest requirements for the mechanical design of this project was to ensure that the center of rotation for the LIDAR sensor was at the center of the scanner. If the LIDAR sensor is positioned away from either axis of rotation, error is introduced into the system. Here’s why: when converting raw data to Cartesian points, we assume that the LIDAR sensor reports the distance to a point in 3D space from the origin of our spherical coordinate space. Any deviation from the center of rotation for the azimuth or altitude angle means we are actually recording a distance from somewhere else in the coordinate space.
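
To state that assumption precisely (the notation here is mine, not the article’s): with the sensor at the origin, a measured range d locates the target at d times the unit beam direction. If the sensor instead sits at an offset vector r_s that rotates with the servos, the true point is

    \[ \mathbf{p} = \mathbf{r}_s(\theta, \varphi) + d\,\hat{\mathbf{u}}(\theta, \varphi) \]

where θ and φ are the azimuth and altitude angles and û is the beam direction. Only when r_s = 0 does this collapse to the simple spherical-to-Cartesian conversion.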

It’s still possible to get accurate 3D points if the LIDAR sensor is not at the center of rotation, but this requires precise measurement of where the LIDAR sensor actually sits in the coordinate space, along with more complex mathematics (the general transform above) to convert the measured data into accurate 3D position points. I thought that adding a couple of slots to a 3D-printed bracket would be slightly easier and more effective. These slots allow micro-adjustments in two dimensions, so that the LIDAR sensor lies at the direct center of both axes of rotation.

ELECTRICAL DESIGN

There are two main electrical circuits in this design: the power/servo control circuit and the feedback amplifier circuit. The power/servo control circuit shown in Figure 2 was designed to let the PIC32 MCU send a pulse width modulation (PWM) signal to the servos while protecting the MCU from potentially harmful electrical noise generated by the servo motors. The first step in reducing noise was to use an opto-isolator as a switch for each servo motor’s control pin. By driving pins RPB9 and RPB7 high, the MCU connects the servo motors’ control pins to the 5-V source. This converts the PIC32’s 3.3-V PWM output into a 5-V PWM signal usable by the two servos, while isolating the PIC32’s output pins from any electrical noise.

Figure 2
Shown here is the circuit of the power supply module. The servos are drawn as motors. RPB9 and RPB7 are wires connected to output pins on the PIC32.
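
The firmware isn’t shown in this excerpt, but to give a sense of how a PIC32 can generate the 50 Hz servo PWM carried on RPB9 and RPB7, here is a hedged sketch for a PIC32MX2xx-class part. The 40 MHz peripheral clock, the register values and the OC3-to-RPB9 peripheral pin select mapping are my assumptions, not details from the article:

    #include <xc.h>   /* Microchip XC32 device headers */

    /* Assumptions: PIC32MX2xx, 40 MHz peripheral bus clock.
       Timer2 at 1:64 prescale gives a 625 kHz tick, so a
       20 ms (50 Hz) servo frame is 12,500 ticks. */
    void servo_pwm_init(void)
    {
        RPB9Rbits.RPB9R = 5;    /* PPS: route OC3 output to pin RPB9 */
        T2CON  = 0x0060;        /* Timer2 off, 1:64 prescale */
        PR2    = 12499;         /* 12,500 ticks = 20 ms period */
        OC3CON = 0x0006;        /* OC3 in PWM mode (uses Timer2), fault pin disabled */
        OC3RS  = 937;           /* ~1.5 ms pulse: servo mid-travel */
        T2CONSET  = 0x8000;     /* start Timer2 */
        OC3CONSET = 0x8000;     /* enable OC3 */
    }

Writing new values between roughly 625 (1 ms) and 1,250 (2 ms) into OC3RS then steps the servo across its travel; a second output compare channel drives the other servo the same way.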

If the servos were the only things that needed to be connected to the MCU, the opto-isolator configuration alone would have been enough to protect the PIC32 from the servo motors’ electrical noise. However, the MCU must share a common ground with the LIDAR sensor to be able to read the sensor’s analog output. This means electrical noise on the power/servo control circuit can travel to the PIC32 through this common ground. To reduce the chance of electrical damage, two capacitors, one ceramic and one electrolytic, were connected across the 5-V source and ground. The small ceramic capacitor attenuates low-amplitude, high-frequency noise, and the larger electrolytic capacitor attenuates the lower-frequency noise. Together, these two capacitors ideally stop any damaging noise from traveling through the power connection by shunting it to ground instead. …
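
The division of labor between the two capacitors follows from a capacitor’s impedance magnitude,

    \[ |Z| = \frac{1}{2 \pi f C} \]

which falls as frequency rises. With illustrative part values (my assumption; the excerpt doesn’t list the actual ones) of a 0.1 µF ceramic and a 100 µF electrolytic, the ceramic presents roughly 1.6 Ω to 1 MHz noise, while the electrolytic presents roughly 13 Ω to 120 Hz ripple, so each capacitor supplies the low-impedance path to ground in its own frequency band.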

Read the full article in the September issue of Circuit Cellar (issue #338).


Silicon APDs are Optimized for LIDAR Applications

The Series 9 from First Sensor offers a wide range of silicon avalanche photodiodes (APDs) with very high sensitivity in the near-infrared (NIR) wavelength range, especially at 905 nm. With their internal gain mechanism, large dynamic range and fast rise time, the APDs are well suited to LIDAR systems for optical distance measurement and object recognition using the time-of-flight method. Application examples include driver assistance systems, drones, safety laser scanners, 3D mapping and robotics.
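
For context, the time-of-flight method recovers range from the round-trip travel time Δt of a light pulse:

    \[ d = \frac{c\,\Delta t}{2} \]

At the speed of light, each meter of range adds only about 6.7 ns of round-trip delay, which is why the fast rise time of these detectors matters.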

The Series 9 offers detectors as single elements as well as linear and matrix arrays with multiple sensing elements. Package options include rugged TO housings and flat ceramic SMD packages. Because the gain of the Series 9 photodiodes increases gradually with the applied reverse bias voltage, high gain factors can be adjusted easily and precisely. For particularly low light levels, hybrid solutions are also available that further boost the APD signal with an internal transimpedance amplifier (TIA). The integrated amplifier is optimally matched to the photodiode and allows compact setups as well as very large signal-to-noise ratios.
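
As a reminder of what the TIA stage contributes (this is the generic transimpedance relation, not a First Sensor specification): it converts the APD’s photocurrent I_pd into a voltage through its feedback resistor R_f,

    \[ V_{out} = -I_{pd} \, R_f \]

so sensitivity is set by R_f, and placing the amplifier right next to the photodiode keeps input capacitance and noise pickup low, which supports the large signal-to-noise ratios mentioned above.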

Using its own semiconductor manufacturing facility and extensive development capabilities, First Sensor can adapt its silicon avalanche photodiodes to specific customer requirements, such as sensitivity, gain, rise time or design.

Important features of the Series 9 APDs:

  • Very high sensitivity at 905 nm
  • Large dynamic range and fast rise time
  • Single element photodiodes as well as linear and matrix arrays
  • Rugged TO housings or flat ceramic SMD packages
  • Hybrid solutions with integrated TIA

First Sensor | www.first-sensor.com

Kit for R-Car V3M SoC Speeds Development

Renesas Electronics has announced the R-Car V3M Starter Kit to simplify and speed up the development of New Car Assessment Program (NCAP) front camera applications, surround view systems and LIDAR systems. The new starter kit is based on the R-Car V3M image recognition system-on-chip (SoC), which delivers a combination of low power consumption and high performance for the growing NCAP front camera market. By combining the R-Car V3M Starter Kit with supporting software and tools, system developers can develop front camera applications more easily, reducing development effort and shortening time-to-market.

Renesas also announced an enhancement to the R-Car V3M by integrating a new, highly power-efficient hardware accelerator for high-performance convolutional neural networks (CNNs), which enables features such as road detection or object classification that are increasingly used in automotive applications. The R-Car V3M’s innovative hardware accelerator enables CNNs to execute at ultra-low power consumption levels that cannot be reached when CNNs are running on CPUs or GPUs.

The new R-Car V3M Starter Kit, the R-Car V3M SoC, and supporting software and tools, including Renesas’ open-source e² studio integrated development environment (IDE), are part of the open Renesas autonomy Platform for ADAS and automated driving, which delivers end-to-end solutions scaling from cloud to sensing and vehicle control.

The new starter kit is ready to use out of the box. In addition to the required interfaces and tools, the kit provides essential components for ADAS and automated driving development, including 2 GB of RAM, 4 GB of eMMC (embedded MultiMediaCard) onboard memory, Ethernet, display outputs and debug interfaces. The integrated 440-pin expansion port gives system manufacturers full freedom to develop application-specific expansion boards for a wide range of computing applications, from a simple advanced computer vision development environment to prototyping of multi-camera systems for applications such as surround view. This board flexibility reduces hardware development time while maintaining a high degree of software portability and reusability.

The R-Car V3M Starter Kit is supported by a Linux Board Support Package (BSP), which is available through elinux.org. Further commercial operating systems will become available starting next year. Codeplay will enable OpenCL and SYCL on the starter kit in Q1 2018. Further tools, sample code and application notes for computer vision and image processing will be provided throughout 2018. Renesas also provides several tools for the R-Car V3M Starter Kit, including the e² studio toolchain and debugging tools, which ease the development burden and enable faster time-to-market.

In addition to the R-Car V3M Starter Kit, Renesas has enabled ultra-low-power CNN execution for image recognition and image classification on the R-Car V3M SoC. The R-Car V3M allows the implementation of high-performance, low-power CNNs in NCAP cameras that cannot be realized with traditional, power-hungry CPU or GPU architectures. Renesas complements the IMP-X5, a computer vision subsystem composed of an image processor and a programmable CV engine, with a new CNN hardware accelerator developed in-house that runs high-performance CNNs at ultra-low power. With this new IP, system developers can choose between the IMP-X5 and the new hardware accelerator when deploying CNNs. This heterogeneous approach lets them pick the most efficient architecture for the required programming flexibility, performance and power consumption.

The Renesas R-Car V3M is available now. The R-Car V3M Starter Kit with a Linux BSP will be available in Q1 2018 initially in limited quantities. A complete offering with an extended software solution is scheduled for Q3 2018.

Renesas Electronics | www.renesas.com

Platform Enables Automated Vehicle Application Development

NXP Semiconductors has announced the availability of the NXP Automated Drive Kit, a software enabled platform for the development and testing of automated vehicle applications. The kit enables carmakers and suppliers to develop, test and deploy autonomous algorithms and applications quickly on an open and flexible platform with an expanding ecosystem of partners.

Taking on automated drive applications requires easy access to multiple hardware and software options. NXP has opened the door to hardware and software partners to foster a flexible development platform that meets the needs of a diverse set of developers. The NXP Automated Drive Kit provides a baseline for level 3 development and will expand to additional autonomy levels as the ecosystem’s performance scales.

The first release of the Automated Drive Kit includes a front vision system based on NXP’s S32V234 processor, allowing customers to deploy their algorithms of choice. The kit also includes front camera application software APIs and object detection algorithms provided by Neusoft, a leading IT solutions and services provider in China and a strategic advanced driver assistance system (ADAS) and AD partner to NXP. Additionally, the kit includes sophisticated radar options and GPS positioning technology. Customers can choose from various LIDAR options and can add LiDAR Object Processing (LOP) modular software from AutonomouStuff, which provides ground segmentation and object tracking.

The NXP Automated Drive Kit is now available for ordering from AutonomouStuff as a standalone package that can be deployed by the customer in their own vehicle or as an integrated package with an AutonomouStuff Automated Research Development Vehicle.

NXP Semiconductors | www.nxp.com