Projects Research & Design Hub

Laser 3D Scanner Uses Raspberry Pi

Meshes and Motors

Digital visualization of 3D objects has become more prevalent in the modern age, with the rise of 3D printing. Learn how these Cornell undergraduates used a Raspberry Pi, a camera and a laser to make an accurate 3D scanner capable of producing digital meshes of physical objects.

  • How to capture 3D images using a laser and a Pi Camera

  • How to design circuitry to drive a stepper motor

  • How to use motor control to rotate an object

  • How to integrate image processing with mesh creation

  • How to develop embedded software using Python

  • Raspberry Pi SBC

  • PiCam — Raspberry Pi camera

  • Autodesk Fusion 360

  • Stepper motor driver module (based on ST’s L298N)

  • Epilog Zing 40W laser cutter

  • Repetier 3D slicing software

  • An LED and metal pushbutton

  • 3D printer and raw material for laser cutting

Our Laser Scanner project is a Raspberry Pi-based embedded device that digitizes objects into .obj mesh files for reproduction using 3D printing. The device does this by using a line laser and the Pi Camera to perform computer vision. The laser is positioned 45 degrees askew from the camera and projects a bright red line onto one vertical slice of the object. The camera detects each point of that slice's distance from the center to produce a mesh slice. The object is spun on a rotating tray, and the process is repeated until the full object is scanned. The generated .obj file is then emailed to the user.

The inspiration for this project came from our past experience with 3D printing and rapid prototyping. The ability to quickly scan existing objects makes them easy to reproduce and facilitates designing other parts around them. However, such scanners are often quite expensive. With our background in computer graphics, embedded systems and mechanical design, we decided that this was a feasible and interesting project for our team to take on.

The central component in this design is the line laser, which projects onto a vertical slice of the object. This projection is captured by the Pi Camera (PiCam), has its perspective corrected, and is then filtered prior to image processing. In image processing, the distance of each segment of the line from the center of the object is collected. In cylindrical coordinates, this picture yields both the r and z components. The third dimension, Θ (theta), is then obtained by rotating the object to a new slice. This concept is shown in Figure 1.

FIGURE 1 – Laser scanning concept

To perform these actions, we used a Raspberry Pi as our central computing unit. We attached a stepper motor and a motor driver to the Pi, powered by an external 5V supply and controlled by the Pi's GPIO pins. The line laser was powered from the Pi's 3.3V line, and a PiCam was attached to the Pi's camera input. A simple pushbutton (wired as an active-low, pull-up input) was installed, and a status LED was added to indicate the state of the system to the user. A block diagram of the full system is shown in Figure 2.

FIGURE 2 – System block diagram

These pieces were housed in a sleek, laser-cut box with a hinged lid, and held together with T-joint M3 machine screws. The electronics are hidden from sight in a bottom compartment, and the lid allows easy access for object placement on the rotating tray. This lid minimizes the amount of light that leaks into the system, because external light can produce noise in the final scan.

HARDWARE
Before we began laser cutting or 3D printing, we used Autodesk Fusion 360 to make a detailed 3D model of our design (Figure 3). The device is a simple box with a lid on laser-cut hinges. It has two main layers, the electronics bed and the main bed, with holes for wires to run between them. The main parts we used included the Raspberry Pi, the line laser, the stepper motor, a stepper motor driver, the PiCam, a metal push button, an LED and raw material for laser cutting and 3D printing. The wiring for this project was very simple, because the 3D scanner did not require many peripherals.

FIGURE 3 – CAD prototype

As shown in Figure 4, we connected resistors in series with each pin, to protect the pins from shorts. One GPIO pin was dedicated to controlling the status LED, which would illuminate when the device was ready to be used and pulse with PWM when the device was operating. Another GPIO pin was connected to a pull-up button, registering HIGH when the button was not pressed and LOW when the button was pressed. Four GPIO pins were dedicated to driving the stepper motor.

FIGURE 4 – Wiring diagram to the Raspberry Pi’s GPIO pins

Since our motor only had to step a defined amount, without requiring speed control, we opted for a simple stepper motor driver module based on STMicroelectronics' L298N, which boosts the Pi's logic-level control lines to the voltage and current needed to drive the motor's inputs. To learn how to operate stepper motors at a very low level, we referred to both the L298N datasheet and the Arduino library.

Stepper motors have a magnetic core with protruding fingers of alternating polarity. The four wires are wrapped to control two electromagnets, each of which powers every other opposing finger in the motor. Thus, by switching the polarity of the fingers, we are able to push the stepper one step.

With this knowledge of how steppers work at the hardware level, we were able to control them much more easily. We opted to power our stepper motor from a 5V lab supply rather than from the Pi, because its maximum current draw of about 0.8A is more than the Pi can supply.

Most of our box was manufactured with a laser cutter. Designs were produced in Fusion 360 and cut using CorelDraw on an Epilog Zing 40W laser cutter. Our designs for the pieces are shown in Figure 5. From top left to right, the pieces are the main bed, the electronics bed, two pieces for the lid, the back, the front and the two sides. In the main bed, there are three major cutouts: one for mounting the stepper motor, one to route wires from the laser and one to route the PiCam’s wide cable.

FIGURE 5 – Laser-cut pieces: the main bed (a), the electronics bed (b), two pieces for the lid (c,d), the back (e), the front (f) and the two sides (g,h)

The electronics bed piece has mounting holes for securing the Pi, the breadboard and the motor driver, and a larger cutout for access to the stepper motor. The lid pieces snap together to form the triangular piece shown in Figure 5, and the hinge is a simple extrusion whose width matches the diameter of the holes in the side boards. The back and one of the side pieces have slots, so that the Pi's ports (HDMI, USB, Ethernet and power) can be accessed easily. The front is a simple piece in which we drilled holes to mount the button and LED.

All parts are held together by M3 hardware, using T-joints and slots. This is a method of holding laser-cut pieces orthogonally and securely. The fins of pieces line up with the slots of other pieces, and the T-shaped cuts on the edges give space for an M3 nut to be jammed in without spinning. This allows us to use an M3 screw to lock the pieces together with very little wiggle room.

Most of our pieces were made with a laser cutter, owing to its speed and ease of use. We still had to 3D-print some pieces, however, because their 3D geometry would have been difficult to create on the cutter. The first of these was the line laser holder. This piece is mounted on the main bed at 45 degrees from the camera's view, and has a hole into which the laser is snugly friction-fit.

We also had to create a motor mount, because the motor’s shaft was so long. The mount friction-fit into the laser-cut pieces and lowered the plane to which the motor was attached, such that the rotating platform was flush with the main bed. These 3D-printed pieces are shown in Figure 6.

FIGURE 6 – 3D-printed pieces: (a) line laser holder; (b) stepper motor holder; and (c) assembled stepper motor mounting with the rotating wheel

After manufacturing and assembly, our 3D scanner’s hardware was complete, and we were ready to tackle the software portion of the project. Figure 7 contains photos of various features of the hardware design.

FIGURE 7 – Various features of the hardware design: (a) assembled box outside; (b) assembled box inside with camera and laser; (c) inside view of electronics bed; (d) back of the Pi with access to Pi ports and the 5 V motor input; and (e) push button with LED ring and status light in the front of the device.

IMAGE PROCESSING
The software for this project comprises four main components that interact together: Image Processing, Motor Control, Mesh Creation and Embedded Functions.

The software components are depicted in the block diagram in Figure 8. When the system boots, the Pi automatically logs in and .bashrc starts our Python code. The system lights the status LED to let the user know that it has booted correctly, and waits for the button press. The user can then insert the item to be scanned and close the lid. After the button is pushed, the LED pulses to let the user know the device is working. The device loops between image processing and motor control until the full rotation is complete and all object data have been collected. Finally, the mesh is created, and the file is emailed to a preselected address. This restarts the cycle, and the device is ready to perform another scan at the press of a button.

FIGURE 8 – Software block diagram

A captured image was first processed to extract the information stored in it into a form that could be used to create an array of points in space. To do this, we photographed the object on the platform, along with all the background noise created by the laser shining onto the back of the box and dispersing. This raw image had two main problems: the object was viewed at an angle from an elevated perspective, and there was a lot of background noise. We needed to compensate for the viewing angle, because using the image as-is would not allow us to determine a consistent object height. As shown in Figure 9, the height of the upside-down "L" shape is consistent. However, because one side is longer than the other, the two sides appear to have different heights at the edge closest to the viewer.

FIGURE 9 – The effect of perspective on relative height

To fix this, we had to transform the workspace in the image from a trapezoid to a rectangle. To do this, we used the code from Perspective Transform in Python [1], which, given an image and four points, crops the image to the region between those points and corrects the perspective by transforming the trapezoid into a rectangle (Figure 10).

FIGURE 10 – Workspace before (left) and after (right) transformation from a trapezoid to a rectangle
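For reference, a minimal sketch of such a four-point perspective transform using OpenCV might look like the following. The corner coordinates here are placeholders, not the calibrated values used in the actual device.

```python
import cv2
import numpy as np

def four_point_transform(image, src_pts, out_w, out_h):
    """Warp the quadrilateral given by src_pts (top-left, top-right,
    bottom-right, bottom-left) into an out_w x out_h rectangle."""
    src = np.array(src_pts, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))

# Example usage; the corner points below are placeholders to be calibrated.
frame = cv2.imread("slice.jpg")
workspace = four_point_transform(
    frame, [(120, 80), (520, 80), (600, 470), (40, 470)], 480, 400)
```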

The next problem to address was background noise, in the form of outside light and laser light reflected off the box. To remove it, we filtered the image using OpenCV's inRange() function, setting the threshold to pass only sufficiently bright red light. To find the correct value, we started with a lenient threshold and kept raising it until the only light picked up was the laser line on the object being scanned.
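A minimal sketch of this filtering step is shown below; the BGR threshold values are assumptions that would be tuned empirically, as described above.

```python
import cv2
import numpy as np

# `workspace` is the perspective-corrected image from the previous step
# (loaded here from disk so the sketch stands alone).
workspace = cv2.imread("workspace.jpg")

# Threshold bounds are assumptions; in practice they are raised until only
# the laser line on the object remains visible.
lower_red = np.array([0, 0, 150])      # BGR lower bound
upper_red = np.array([120, 120, 255])  # BGR upper bound

mask = cv2.inRange(workspace, lower_red, upper_red)   # 255 where the laser is
```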

Once we had this image, we found the brightest pixel in each row, to get a line of one pixel per row that bordered the left-most side of the laser line. Each pixel was then converted to a vertex in 3D space and stored in an array (see later section on Mesh Creation). The results of these steps are shown in Figure 11.

FIGURE 11 – Image transformation and background noise reduction. (a) The raw image. (b) The raw image transformed to account for the perspective of the camera. (c) The transformed image after background noise is filtered out. (d) The final image with only one white pixel per row.
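A sketch of the per-row extraction is shown below. Note that np.argmax returns the left-most occurrence of the maximum in each row, which matches the goal of tracing the left edge of the laser line.

```python
import cv2
import numpy as np

# `mask` is the single-channel filtered image from the previous step.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Keep one pixel per row: the column of the brightest (left-most maximal)
# pixel. Rows in which the laser is absent are skipped.
laser_points = []                       # (row, col) pairs
for row in range(mask.shape[0]):
    col = int(np.argmax(mask[row]))
    if mask[row, col] > 0:
        laser_points.append((row, col))
```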

MOTOR CONTROL
After processing a single image successfully to get a slice of the object, we needed to rotate the object to take a new photo at a different angle. To do this, we controlled the stepper motor beneath the platform on which the scanned object sits. We based our stepping function on the Stepper Motor reference code [2]. In essence, the code continuously tracks the state of the motor and the position of the fingers (described in the Hardware section). This is key to stepping and microstepping. Microstepping gives us the option to perform half steps, doubling our steps per rotation from 200 to 400.

The motor is stepped by altering which of the motor pins are HIGH and which are LOW. We made a step function that takes the input of how many microsteps are needed to move the stepper motor, and the current state of the motor. It then moves the motor accordingly, and also outputs the new state. We had a simple int variable in Python to globally record the state. The state variable took eight different values representing the eight states (Table 1).

TABLE 1 – Stepper motor states and GPIO outputs

We called this step function each time the system was ready to take a new photo. To determine how many steps the motor should take each turn, we needed the desired number of photos for the scan, that is, the resolution about the rotational axis. Knowing this, and that the motor takes 400 microsteps to complete a 360-degree rotation, we found the desired number of microsteps per photo by dividing 400 by the angular resolution. The angular resolution can be changed in software, depending on how accurate a scan we need and how quickly we want to scan the object.
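A simplified sketch of the half-stepping logic and the steps-per-photo calculation is shown below. The GPIO pin numbers and step delay are assumptions, not the exact values used in our code, and the eight half-step states mirror the structure of Table 1.

```python
import time
import RPi.GPIO as GPIO

MOTOR_PINS = [6, 13, 19, 26]   # assumed BCM pin numbers; actual wiring may differ

GPIO.setmode(GPIO.BCM)
for pin in MOTOR_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

# Eight half-step states (one per row of Table 1).
HALF_STEPS = [
    (1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0),
    (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1), (1, 0, 0, 1),
]

def step(microsteps, state, delay=0.002):
    """Advance the motor by `microsteps` half steps, starting from `state`
    (0-7), and return the new state."""
    for _ in range(microsteps):
        state = (state + 1) % 8
        for pin, level in zip(MOTOR_PINS, HALF_STEPS[state]):
            GPIO.output(pin, level)
        time.sleep(delay)
    return state

# 400 microsteps per revolution divided by the number of photos per scan.
PHOTOS_PER_SCAN = 20
MICROSTEPS_PER_PHOTO = 400 // PHOTOS_PER_SCAN   # 20 microsteps between frames
```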

MESH CREATION
To create a mesh from all the processed images, we first had to convert each white pixel in a processed image into a vertex in 3D space. Because we collect individual slices of the object with cylindrical symmetry, it made sense to collect the data in cylindrical coordinates: the height in the picture represents the z-axis, the distance from the center of the rotating table represents the r-axis, and the rotation of the stepper motor represents the Θ-axis. Because the .obj format expects Cartesian coordinates, however, we had to convert each of these vertices from cylindrical to Cartesian coordinates. This is shown in Figure 12.

FIGURE 12 – Visualization of cylindrical coordinates and the conversion algorithm
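A minimal sketch of this conversion is shown below; the pixel-to-unit scale factor is an assumption, and the column offset is measured from the center of the rotating table.

```python
import math

def to_cartesian(r, theta, z):
    """Convert a cylindrical vertex (r, theta in radians, z) to Cartesian."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def pixel_to_vertex(row, col, frame_index, frames_per_scan, scale=1.0):
    """Map a laser pixel to a 3D vertex.

    row  - pixel rows above the table surface
    col  - pixel columns from the table's center
    frame_index / frames_per_scan - which slice around the full rotation
    scale - assumed pixel-to-unit factor
    """
    r = col * scale
    z = row * scale
    theta = 2.0 * math.pi * frame_index / frames_per_scan
    return to_cartesian(r, theta, z)
```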

After these vertices were created, they were stored in a list, and that list was stored in a second list containing the vertex lists created for each captured image. Next, we had to select the vertices we actually wanted represented in the final mesh. We wanted the top and bottom vertices to be included, and then, based on the resolution, we picked an evenly spaced set of vertices from each image (see the sketch below). Because the vertex lists had different lengths, we had to even them out. We did this by finding the list with the smallest number of vertices and removing vertices from all the other lists until they were all the same length.
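A sketch of this evening-out step, assuming each slice holds at least two vertices:

```python
def even_out(slices):
    """Trim every per-image vertex list to the length of the shortest one,
    keeping the first and last vertices and evenly spaced ones between."""
    target = min(len(s) for s in slices)
    evened = []
    for s in slices:
        if len(s) == target:
            evened.append(list(s))
        else:
            # Indices spread evenly from the first vertex to the last.
            idx = [round(k * (len(s) - 1) / (target - 1)) for k in range(target)]
            evened.append([s[j] for j in idx])
    return evened
```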

With the vertex lists created, we were now able to create a mesh. We formatted our mesh according to the .obj file standard, which is simple and 3D-printable. An .obj file has four parts: the positional coordinates, the texture coordinates, the face normals and the faces. We did not include texture coordinates or face normals, because we didn't have any textures, and most 3D mesh viewers assign basic normals if not told otherwise.

For the positional coordinates, the format is: v a b c, where a is the x coordinate, b is the y coordinate and c is the z coordinate. Each positional coordinate in use is listed in this manner at the beginning of the file. The faces follow the positional coordinates. For the faces, the format is: f a b c, where a is the index of the first positional coordinate of the face, b is the second and c is the third. By convention, the coordinates of a face are listed in a consistent winding order when viewed from the outside. For example, the .obj file text shown in Table 2 creates a square mesh in the x-y plane with the points (0,0), (0,1), (1,1) and (1,0) as its corners.

TABLE 2 – Sample OBJ file text

To construct the mesh in this way, we went through each vertex list and created faces connecting the current vertex list to the previous one. The challenge was that, to keep the mesh file as small as possible, we had to avoid writing any unnecessary positional coordinates. To do this, we kept track of each vertex's index in the .obj file whenever it would be needed to create a face later. This was especially important when making the final faces connecting the last vertex list back to the first. Once this was done, the .obj file was ready to be viewed or sent.
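The sketch below shows the general idea of writing such a mesh from the evened-out slices. Unlike our actual code, it naively writes every vertex rather than tracking indices to avoid duplicates, and the face winding order is illustrative only.

```python
def write_obj(slices, path):
    """Write evened-out vertex slices to a Wavefront .obj file, connecting
    each slice to the previous one (and the last back to the first)."""
    rows = len(slices[0])              # every slice has the same length
    with open(path, "w") as f:
        for s in slices:               # vertex lines: v x y z
            for x, y, z in s:
                f.write(f"v {x} {y} {z}\n")
        n = len(slices)
        for i in range(n):             # faces between slice i and slice i-1
            a = i * rows               # 0-based offset of slice i's vertices
            b = ((i - 1) % n) * rows   # 0-based offset of the previous slice
            for j in range(rows - 1):
                # .obj face indices are 1-based; two triangles per quad
                f.write(f"f {a + j + 1} {b + j + 1} {b + j + 2}\n")
                f.write(f"f {a + j + 1} {b + j + 2} {a + j + 2}\n")
```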

EMBEDDED FUNCTIONS
After our device was functional, we polished it by adding full embedded functionality. This meant removing the keyboard, mouse and monitor, and having the device wirelessly send us the .obj file after processing finished. We first configured the Pi to log in automatically and launch the main Python program on start-up. This was done by running sudo raspi-config and selecting "Console Autologin," and by adding the line "sudo python /home/pi/finalProject/FINAL.py" to /home/pi/.bashrc.

We also added a button and status LED for user input and output. The button would let the user tell the device when to start scanning, and the LED would tell the user the state of the machine. If the LED is on, the device is ready to start a new scan. If the LED is pulsing, the device is currently scanning. If the LED is OFF, there is a software error, calling for a system restart. To implement these functions, we found the GPIO Zero library very helpful. We defined pins for the button and the PWMLED (GPIO 23 and 18 respectively) and used the built-in functions. We used a simple polling while loop to wait for the button to be pressed, and a pulse() function on the LED that would slowly alter its PWM pulse width to simulate a pulsing LED.
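A minimal sketch of this control loop using GPIO Zero is shown below. The run_scan() function is a hypothetical placeholder for the capture, mesh-creation and email steps described elsewhere in this article.

```python
from gpiozero import Button, PWMLED

button = Button(23)   # GPIO 23; pull-up by default, so pressing pulls the pin LOW
led = PWMLED(18)      # GPIO 18

def run_scan():
    """Hypothetical placeholder for the image-capture / mesh / email pipeline."""
    pass

led.on()                      # solid LED: ready for a new scan
while True:
    button.wait_for_press()   # simple polling wait for the user
    led.pulse()               # slowly ramps the PWM duty cycle while scanning
    run_scan()
    led.on()                  # scan done: ready again
```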

Last, we enabled the device to send the .obj file by email, using the smtplib and email libraries. We built our code on Sam Lopez's example framework [3] and wrote a function to send the file to an email address. We created a dummy email account for our Pi to use, and gave its login credentials to the program. A MIME (Multipurpose Internet Mail Extensions) email object was constructed, and a subject line, message and attachment were appended. Next, a connection was opened to an SMTP (Simple Mail Transfer Protocol) server, and the Pi's email credentials were used to log in. The MIME message, with the 3d.obj file attached, was then sent through this account to the recipient's address. The ability to send emails gave us a convenient, wireless way to deliver the produced file to the user, accessible on many different platforms.
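A sketch of such an emailing function is shown below; the SMTP server, credentials and addresses are placeholders rather than the values used by our device.

```python
import smtplib
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def email_obj(path, sender, password, recipient):
    """Email the finished .obj file as an attachment (placeholder server/credentials)."""
    msg = MIMEMultipart()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, "Your 3D scan"
    msg.attach(MIMEText("Scan complete. The .obj file is attached."))

    part = MIMEBase("application", "octet-stream")   # the .obj attachment
    with open(path, "rb") as f:
        part.set_payload(f.read())
    encoders.encode_base64(part)
    part.add_header("Content-Disposition", "attachment; filename=3d.obj")
    msg.attach(part)

    with smtplib.SMTP("smtp.gmail.com", 587) as server:   # assumed provider
        server.starttls()
        server.login(sender, password)
        server.send_message(msg)
```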

RESULTS
As shown in Figure 13 and Figure 14, the laser 3D scanner was able to scan objects with adequate precision. The objects' features were distinct and recognizable, and the parts were easy to 3D-print using slicing software such as Repetier. Figure 13 shows one of the first scans we tried, together with the 3D-printed result. After successfully scanning many simple objects, we tried a more complex object. We chose a rubber duck and successfully captured its features in our scan (Figure 15).

FIGURE 13 – Wooden chair original piece and 3D-printed recreation (left), and scanned .obj file (right)

FIGURE 14 – Stacked wood blocks (left) and .obj scan (right)

FIGURE 15 – Rubber duck and .obj scan

One of the biggest successes we discovered during testing was the consistency of the device. Across multiple trials with the same object, the scanner produced a similar .obj file each time, even when we slightly altered the placement of the object. This is shown in Figure 16: the three scans look very similar, capturing the same details and the same amount of detail. Overall, we were most impressed with our system's consistency and robustness.

FIGURE 16 – Three separate scans of a rubber duck, showing reproducibility

One of the variables we can tune is the resolution of the scans. Because the stepper has 400 microsteps per revolution, we can choose how large each ΔΘ is to dictate the angular resolution. By default, the angular resolution is set to 20 iterations, meaning that between frames the motor rotates by 20 microsteps (400/20 = 20). This was chosen mainly in the interest of time; it takes about 45 seconds to complete a scan this way. For a higher-quality scan, however, we can increase the number of iterations all the way up to 400. This gives us many more points with which to construct the model, making for a much more detailed scan.

In addition to angular resolution, we can adjust the vertical resolution, or how many different points we choose to poll along the laser slice. For a similar interest in time, we set this default to 20, but we can increase it for better results.

By playing with the parameters of angular and spatial resolution, we compiled the results of different scans (Figure 17). Each label is formatted as angular resolution × spatial resolution. As seen in Figure 17b (the default scanning settings), the features of the duck are recognizable but not detailed. As the resolution is increased, however, individual precise features begin to show, including the eyes, beak, tail and wings of the duck. The highest-resolution image took about 5 minutes to scan. Achieving such high resolution was a great success.

FIGURE 17 – Scans made with various resolutions. (a) 4×4 resolution; (b) 20×20 resolution; (c) 50×40 resolution; (d) 100×40 resolution; (e) 200×40 resolution; (f) 200×80 resolution.

LIMITATIONS AND CONCLUSION
Despite the project's successful results, our design and implementation still have a few limitations. The use of a laser brings issues with how the light disperses. Objects that were translucent, shiny or very dark proved troublesome to scan, because of the way light reflected off their surfaces. With translucent objects, the light was absorbed and scattered, creating a very noisy reading of the slices. With shiny and dark objects, the light was either reflected away or absorbed so strongly that it was difficult to pick up.

Using a camera to capture the features of objects also was sometimes problematic, because its sensing is limited by its line of sight; concave objects and sharp angles often were blocked by other parts of the object. In our rubber duck scans, for example, the curvature of the tail was sometimes lost. Additionally, the camera can detect only surface structures, so holes or internal geometries cannot be captured. This is a common problem with many other scanning solutions.

Although we were happy with the results of our project, several things could be implemented to make it better. First, in the current state, the scan resolution can be adjusted only by changing the hard-coded resolution variables in our code. To make our project more embedded, we could create a resolution potentiometer, so that we could change the resolution without having to plug a monitor and keyboard into the scanner.

Second, our scanner creates images that sometimes look jagged. To fix this, we could explore mesh-smoothing techniques that would lessen irregularities and harsh corners on our finished mesh.

Third, we noticed that our image-filtering process sometimes did not work perfectly, and strange outlier points were added to the mesh. To address this, we could filter the white pixels based on their surrounding pixels, to exclude outliers from the final mesh.

Fourth, we found that pixel coordinates did not scale well into the real world. The meshes we created were six to seven times larger than the actual objects. In the future it would be advantageous to implement a method of scaling our meshes to sizes closer to the actual sizes of the objects.

Overall, we learned a great deal from this project and had fun building the system. From using the Pi Camera to controlling a stepper motor, processing images and sending emails, we overcame diverse obstacles and applied what we learned to create a successful device. We are quite proud of the scan quality achieved by the laser 3D scanner, and want to explore some of these ideas in future work. A video of this project is provided below.


Informational Video (Original video at https://www.youtube.com/watch?v=lAxufl-BqTc&feature=youtu.be)

RESOURCES

References:

[1] Adrian Rosebrock, "Perspective Transform in Python (4 Point OpenCV getPerspectiveTransform Example)," PyImageSearch, August 25, 2014. https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
[2] raviteja, "Raspberry Pi Stepper Motor Control using L298N," Electronics Hub, February 16, 2018. https://www.electronicshub.org/raspberry-pi-stepper-motor-control/
[3] Sam Lopez, "Python-Email" (send_email.py), GitHub, August 8, 2017. https://github.com/samlopezf/Python-Email/blob/master/send_email.py

Project Site
https://courses.ece.cornell.edu/ece5990/ECE5725_Spring2019_Projects/3D_Scanner_mfx2_tbs47/index.html

Autodesk | www.autodesk.com
PiCamera | https://picamera.readthedocs.io
Raspberry Pi Foundation | www.raspberrypi.org
Repetier | www.repetier.com
STMicroelectronics | www.st.com

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • APRIL 2020 #357


Michael Xiao (mfx2@cornell.edu) is a senior studying electrical and computer engineering at Cornell University. His interests include robotics, mechanical design, and embedded systems. Outside the classroom, Michael can be found rock climbing, combining cultural cuisine, or perusing local thrift stores.

Thomas Scavella (tbs47@cornell.edu) is a recent graduate of Cornell University where he studied computer science. In his studies, he has interests within the fields of data structures, operating systems, and embedded systems. Outside of classes and work, he enjoys surfing and rock climbing.

