Microcontroller-Based Wireless Pedometer & Pace Tracker Project

Anyone can easily order a pedometer or GPS sports watch on the Internet. But engineers like challenges, right? With the right parts and a little know-how, you can engineer your own microcontroller-based wireless Bluetooth pedometer and pace tracker.

In the article, “Run With It” (Circuit Cellar 293, December 2014), Ellen Chuang and Julie Wang explain how they built an Atmel ATmega1284P microcontroller-based wireless pedometer and pace tracker. They wrote:

There’s a simple question most runners, walkers, and joggers ask themselves: How fast am I going? There are various tools for measuring pace, including step counters, GPS units, and smartphone applications. Pedometers are common tools for tracking physical activity. However, pedometers are typically either self-contained units that are out of sight and out of mind or expensive products like the FitBit and Nike+. So, for our culminating design project for Cornell University’s ECE 4760 Microcontrollers course, we decided to create a low-budget wireless pedometer and pace tracker.


The wireless pedometer includes a foot module (a) and a wrist module (b).

Chuang and Wang’s design is notable because it is Bluetooth-enabled, which lets it connect to other Bluetooth devices. In addition, they separated the user interface from the measurement unit.

Our system contains two separate modules: a foot-mounted module that captures acceleration data and a wrist-mounted module with a user interface. The foot module captures your acceleration data, lightly processes it to find values of interest, and sends the data over Bluetooth to the wrist module. The wrist module then further processes the information to determine if a step has occurred. The wrist module also handles user input and displaying information on an LCD.
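The excerpt above notes that the wrist module decides whether a step has occurred. As a rough, hypothetical illustration of that idea (not the authors’ code), a threshold-crossing detector with a short refractory period might look like this in C; the threshold and timing values below are placeholders, not the values from the article:

```c
#include <stdint.h>
#include <stdbool.h>

/* Placeholder values: the real thresholds and timing are tuned in the authors' design. */
#define STEP_THRESHOLD  600u    /* z-axis ADC counts that count as a footfall (assumed) */
#define REFRACTORY_MS   250u    /* ignore new crossings right after a step (assumed)    */

static uint32_t step_count;
static uint32_t last_step_ms;

/* Call once per acceleration sample received from the foot module.
   Returns true when a new step is registered. */
bool detect_step(uint16_t accel_z, uint32_t now_ms)
{
    static bool armed = true;

    if (armed && accel_z > STEP_THRESHOLD && (now_ms - last_step_ms) > REFRACTORY_MS) {
        armed = false;           /* rising edge through the threshold = one step */
        last_step_ms = now_ms;
        step_count++;
        return true;
    }
    if (!armed && accel_z < STEP_THRESHOLD) {
        armed = true;            /* re-arm once the signal falls back below the threshold */
    }
    return false;
}
```

In the real design, the threshold and refractory period would be tuned against recorded acceleration data rather than picked by hand.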

To use the device, the foot unit is strapped on your lower leg and the wrist unit is either strapped to your wrist or held in your hand. At power-up, the units automatically pair over a Bluetooth link. The wrist unit powers up in Configuration mode, which enables you to use the two push buttons to adjust your stride length and also your desired pace in minutes per mile. You are first prompted for your stride length—the average length of one step—and then the targeted speed in minutes per mile. The Enter button enables you to confirm the parameters, exit Configuration mode, and enter into Pace Display mode. In Pace Display mode, the LCD displays the cumulative number of steps, your calculated pace, and your desired pace. At any point, you can reenter Configuration mode by toggling a switch on the wrist module.

Both modules contain an Atmel ATmega1284P microcontroller and an HC-05 Bluetooth master/slave module.

The foot unit contains a ±1.5-g, single-axis Freescale Semiconductor MMA2260D accelerometer oriented with its positive measurement axis pointed away from the center of the Earth. We’ll refer to this direction as the z-axis. The analog data coming from the accelerometer is very noisy. To filter out the excessive noise, the accelerometer data passes through a low-pass filter with a cutoff frequency of around 33 Hz, built from a 10-kΩ resistor and a 470-nF capacitor. Human stride frequency typically falls between 185 and 200 strides per minute, or around 3.33 Hz. To capture this frequency, as well as the higher-frequency signals that correspond to other foot movements, we set the low-pass filter’s cutoff frequency to 33 Hz. Additionally, to fully utilize the internal ADC’s 10-bit range, we dropped the voltage across a 20-kΩ resistor…
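For reference, the quoted RC values do put the filter’s corner frequency right where the authors say it is:

```latex
f_c = \frac{1}{2\pi R C}
    = \frac{1}{2\pi \cdot 10\,\mathrm{k\Omega} \cdot 470\,\mathrm{nF}}
    \approx 33.9\ \mathrm{Hz}
```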

This is the foot unit. The MAX233/RS-232 interface was used for debugging purposes only; it is not part of the final foot module.


The wrist unit includes a 16 × 2 LCD, a number of user-input buttons and switches, status-linked LEDs, and a Bluetooth transceiver (see Figure 2). The three buttons and the switch are used in Configuration mode. Additionally, we included the following LEDs: a red LED on the board that toggles every time a step is detected; a yellow LED that toggles every time a packet is received via Bluetooth; and a green “keep alive” LED. The software on the microcontroller manages all of these tasks, which we cover in the next section.

The complete article appears in Circuit Cellar 293 (December 2014).

Read Your Technical Documentation (EE Tip #145)

Last year we had a problem that showed up only after we started making the product in 1,000-piece runs. The problem was that some builds of the system took a very long time to power up. We had built about 10 prototypes and tested the design over thousands of power-ups, and it tested just fine (thanks to POC-IT). Then the 1,000-piece run uncovered about a half-dozen units that had variable power-up times—ranging from a few seconds to more than an hour! Replacing the watchdog chip that controlled the RESET line to an ARM9 processor fixed the problem.

But why did these half dozen fail?

Many hours into the analysis we discovered that the RESET line out of the watchdog chip on the failed units would pulse but stay low for long periods of time. A shot of cold air instantly caused the chip to release the RESET. Was it a faulty chip lot? Nope. Upon a closer read of the documentation, we found that you cannot have a pull-up resistor on the RESET line. For years we always had pull-ups on RESET lines. We’d missed that in the documentation.

Like it or not, we have to pore over the documentation of the chips and software library calls we use. We have to digest the content carefully. We cannot rely on what is intuitive.

Finally, and this is much more necessary than in years past, we have to pore over the errata sheets. And we need to do it before we commit the design. A number of years ago, a customer designed a major new product line around an Atmel ARM9. This ARM9 had the capability of directly addressing NOR memory up to 128 MB. Except that the errata said that, due to a bug, it could only address 16 MB. Ouch! Later we had problems with the I2C bus in the same chip. At times, the bus would lock up and nothing except a power cycle would unlock it. Enter the errata. Under some unmentioned conditions, the I2C state machine can lock up. Ouch! In this case, we were able to use a bit-bang algorithm rather than the built-in I2C—but obviously at the cost of money, scheduling, and real time.—Bob Japenga, CC25, 2013
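For readers who have never had to fall back on it, “bit-banging” I2C simply means driving SCL and SDA as open-drain GPIOs in software. The sketch below is a generic, hypothetical master-side fragment, not the code from Japenga’s project; SDA_HIGH(), SDA_LOW(), SCL_HIGH(), SCL_LOW(), SDA_READ(), and i2c_delay() stand in for whatever GPIO and timing primitives your platform provides (clock stretching is ignored here for brevity).

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical GPIO/timing primitives -- map these to your platform.
   "HIGH" means release the open-drain line and let the pull-up raise it. */
extern void SDA_HIGH(void); extern void SDA_LOW(void);
extern void SCL_HIGH(void); extern void SCL_LOW(void);
extern bool SDA_READ(void);
extern void i2c_delay(void);           /* roughly a quarter of a bit time */

void i2c_start(void)                   /* SDA falls while SCL is high */
{
    SDA_HIGH(); SCL_HIGH(); i2c_delay();
    SDA_LOW();  i2c_delay();
    SCL_LOW();  i2c_delay();
}

void i2c_stop(void)                    /* SDA rises while SCL is high */
{
    SDA_LOW();  i2c_delay();
    SCL_HIGH(); i2c_delay();
    SDA_HIGH(); i2c_delay();
}

bool i2c_write_byte(uint8_t b)         /* MSB first; returns true on ACK */
{
    for (uint8_t i = 0; i < 8; i++) {
        if (b & 0x80) SDA_HIGH(); else SDA_LOW();
        b <<= 1;
        i2c_delay(); SCL_HIGH(); i2c_delay(); SCL_LOW();
    }
    SDA_HIGH();                        /* release SDA for the ACK bit */
    i2c_delay(); SCL_HIGH(); i2c_delay();
    bool ack = !SDA_READ();            /* slave pulls SDA low to acknowledge */
    SCL_LOW();
    return ack;
}
```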

Got Problematic Little Bits? (EE Tip #142)

While testing a project, something strange happened (see the nearby image). The terminal showed nonsense, but the logic analyzer properly displayed “Elektor” in ASCII. The latter also indicated that the UART was operating at 4800 baud instead of the 19200 baud that I had programmed (at least that’s what I thought), a difference of a factor of four. The change I had made in my code was a fourfold increase in the clock speed of the dsPIC. The conclusion I had to draw was that the clock speed was not actually being changed. But why not?

Source: Raymond Vermeulen (Elektor, October 2011)


The inspiration came, where else, in the shower. In a hobby project, I had used an ATmega32u4 with a bootloader whose only limitation was being unable to program the fuse bits. “That’s not going to be…” I was thinking. But yes, the bootloader I used in my dsPIC cannot program the configuration bits either. Experienced programmers would have realized that long ago, but we all have our off-days. (The solution is to use a “real” programmer, such as the ICD3.)—By Raymond Vermeulen (Elektor Labs, Elektor, October 2011)
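The factor of four lines up exactly with the oscillator configuration never taking effect. As a sanity check, assuming the standard-speed UART baud formula used on dsPIC parts (shown for illustration; the exact register names depend on the device):

```latex
\mathrm{Baud} = \frac{F_{CY}}{16\,(\mathrm{UxBRG} + 1)}
```

If UxBRG is computed for a clock four times faster than the one actually running, the resulting baud rate is a quarter of the intended value: 19200/4 = 4800 baud, which is just what the logic analyzer reported.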


A Coding Interface for an Evaluation Tool

John Peck, a test engineer at Knowles Electronics in Itasca, IL, has used ASCII interfaces to test equipment since he was a graduate student.

“I love test equipment with open, well-documented, ASCII command sets,” he says. “The plain text commands give a complicated instrument a familiar interface and an easy way to automate measurements.”

So when Peck needed to automate the process of reading his ultrasonic range finder’s voltage output, he wanted an ASCII interface to a voltmeter. He also wanted the meter to convert volts into distance, so he added an Atmel AVR Butterfly microcontroller into the mix (see Photo 1). “I thought it would be easy to give it a plain text interface to a PC,” he says.

Atmel AVR Butterfly


The project became a bit more complex than he expected. But ultimately, Peck says, he came up with “a simple command interface that’s easy to customize and extend. It’s not at the level of a commercial instrument, but it works well for sending a few commands and getting some data back.”

If you would like to learn more about how to send commands from a PC to the AVR Butterfly and the basics of using the credit card-sized, single-board microcontroller to recognize, parse, and execute remote commands, read Peck’s article about his project in Circuit Cellar’s May issue.

In the italicized excerpts below, he describes his hardware connections to the board and the process of receiving remote characters (the first step in receiving remote commands). Other topics you’ll find in the full article include setting up a logging system to understand how commands are being processed, configuring the logger (which is the gatekeeper for messages from other subsystems), recognizing and adding commands to extend the system, and sending calibration values.

Peck programmed his system so that it has room to grow and can accommodate his future plans:

“I built the code with AVR-GCC, using the -Os optimization level. The output of avr-gcc --version is avr-gcc (Gentoo 4.6.3 p1.3, pie-0.5.1) 4.6.3.

“The resulting memory map file shows a 306-byte .data size, a 49-byte .bss size, and a 7.8-KB .text size. I used roughly half of the AVR Butterfly’s flash memory and about a third of its RAM. So there’s at least some space left to do more than just recognizing commands and calibrating voltages.”

“I’d like to work on extending the system to handle more types of arguments (e.g., signed integers and floats). And I’d like to port the system to a different part, one with more than one USART. Then I could have a dedicated logging port and log messages wouldn’t get in the way of other communication. Making well-documented interfaces to my designs would help me with my long-term goal of making them more modular.”


Figure 1: These are the connections needed for Atmel’s AVR Butterfly. Atmel’s AVRISP mkII user’s guide stresses that the programmer must be connected to the PC before the target (AVR Butterfly board).

WORKING WITH THE AVR BUTTERFLY
The AVR Butterfly board includes an Atmel ATmega169 microcontroller and some peripherals. Figure 1 shows the connections I made to it. I only used three wires from the DB9 connector for serial communication with the PC. There isn’t any hardware handshaking. While I could also use this serial channel for programming, I find that using a dedicated programmer makes iterating my code much faster.
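For illustration, a bare-bones three-wire setup (RXD, TXD, ground) with no flow control might be configured like this on the ATmega169. Register and bit names are as defined for this part in avr-libc; the clock and baud rate are assumptions for the sketch, not necessarily the values Peck used.

```c
#include <avr/io.h>
#include <stdint.h>

#ifndef F_CPU
#define F_CPU 8000000UL                 /* assumed clock; set to match the board   */
#endif
#define BAUD_RATE 9600UL                /* assumed baud rate                        */

/* Asynchronous, 8 data bits, no parity, 1 stop bit, no hardware handshaking. */
static void usart_init(void)
{
    uint16_t ubrr = (uint16_t)(F_CPU / (16UL * BAUD_RATE) - 1);
    UBRR0H = (uint8_t)(ubrr >> 8);
    UBRR0L = (uint8_t)ubrr;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);     /* enable receiver and transmitter    */
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);   /* 8N1 frame format                   */
    /* Add RXCIE0 here (and call sei()) if received characters are handled in an
       interrupt, as described later in the article. */
}
```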

A six-pin header soldered to the J403 position enabled me to use Atmel’s AVRISP mkII programmer. Finally, powering the board with an external supply at J401 meant I wouldn’t have to think about the AVR Butterfly’s button cell battery. However, I did need to worry about the minimum power-on reset slope rate. The microcontroller won’t reset at power-on unless the power supply can ramp from about 1 to 3 V at more than 0.1 V/ms. I had to reduce a filter capacitor in my power supply circuit to increase its power-on ramp rate. With that settled, the microcontroller started executing code when I turned on the power supply.

After the hardware was connected, I used the AVR downloader uploader (AVRDUDE) and GNU Make to automate building the code and programming the AVR Butterfly’s flash memory. I modified a makefile template from the WinAVR project to specify my part, programmer, and source files. The template file’s comments helped me understand how to customize the template and comprehend the general build process. Finally, I used Gentoo Linux’s cross-development package to install the AVR GNU Compiler Collection (AVR-GCC) and other cross-compilation tools. I could have added these last pieces “by hand,” but Gentoo conveniently updates the toolchain as new versions are released.


Figure 2: This is the program flow for processing characters received over the Atmel AVR Butterfly’s USART. Sending a command terminator (carriage return) will always result in an empty Receive buffer. This is a good way to ensure there’s no garbage in the buffer before writing to it.

HANDLING INCOMING CHARACTERS
To receive remote commands, you begin by receiving characters, which are sent to the AVR Butterfly via the USART connector shown in Figure 1. Reception of these characters triggers an interrupt service routine (ISR), which handles them according to the flow shown in Figure 2. The first step in this flow is loading the characters into the Receive buffer.
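A minimal sketch of that first step (mine, not Peck’s code) might look like the following: the receive-complete interrupt fires once per character and appends it to the Receive buffer. The buffer size and the handling details are assumptions; the vector and register names are those avr-libc defines for the ATmega169.

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define RECEIVE_BUFFER_SIZE 64            /* size not given in the excerpt; assumed */

volatile char    receive_buffer[RECEIVE_BUFFER_SIZE];
volatile uint8_t receive_index = 0;

/* Receive-complete ISR for the ATmega169's USART.
   Requires RXCIE0 to be set and global interrupts enabled. */
ISR(USART0_RX_vect)
{
    char c = UDR0;                                  /* reading UDR0 clears the flag */
    if (receive_index < RECEIVE_BUFFER_SIZE - 1) {
        receive_buffer[receive_index++] = c;        /* append to the Receive buffer */
    }
    if (c == '\r') {                                /* command terminator            */
        receive_buffer[receive_index] = '\0';
        /* hand the combined string off to the Parse buffer (discussed below),
           then start over with an empty Receive buffer */
        receive_index = 0;
    }
}
```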

Figure 3: The received character buffer and pointers used to fill it are shown. There is no limit to the size of commands and their arguments, as long as the entire combined string and terminator fit inside the RECEIVE_BUFFER_SIZE.


Figure 3 illustrates the Receive buffer loaded with a combined string. The buffer is accessed with a pointer to its beginning and another pointer to the next index to be written. These pointers are members of the recv_cmd_state_t-type variable recv_cmd_state.

This is just style. I like to try to organize a flow’s variables by making them members of their own structure. Naming conventions aside, it’s important to notice that no limitations are imposed on the command or argument size in this first step, provided the total character count stays below the RECEIVE_BUFFER_SIZE limit.
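In code, that organization might look something like the sketch below. Only the type name recv_cmd_state_t, the variable name recv_cmd_state, and RECEIVE_BUFFER_SIZE come from the article; the member names and the buffer size are my guesses for illustration.

```c
#define RECEIVE_BUFFER_SIZE 64            /* actual size not given in the excerpt; assumed */

typedef struct {
    char  rbuffer[RECEIVE_BUFFER_SIZE];   /* the Receive buffer itself                  */
    char *rbuffer_begin;                  /* pointer to the beginning of the buffer     */
    char *rbuffer_write_ptr;              /* pointer to the next index to be written    */
} recv_cmd_state_t;

recv_cmd_state_t recv_cmd_state;

void recv_cmd_state_init(void)
{
    recv_cmd_state.rbuffer_begin     = recv_cmd_state.rbuffer;
    recv_cmd_state.rbuffer_write_ptr = recv_cmd_state.rbuffer;
}
```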

When a combined string in the Receive buffer is finished with a carriage return, the string is copied over to a second buffer. I call this the “Parse buffer,” since this is where the string will be searched for recognized commands and arguments. This buffer is locked until its contents can be processed to keep it from being overwritten by new combined strings.

Sending commands faster than they can be processed will generate an error, and combined strings sent to a locked Parse buffer will be dropped. The maximum command-processing frequency will depend on the system clock and other system tasks. Not having larger Parse or Receive buffers is a limitation that places this project at the hobby level. Extending these buffers to hold more than just one command would make the system more robust.
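A hedged sketch of that hand-off (again illustrative, not Peck’s code): when the carriage return arrives, the combined string is copied to the Parse buffer only if that buffer is free; otherwise the command is dropped. The buffer size and function name are assumptions.

```c
#include <string.h>
#include <stdbool.h>

#define PARSE_BUFFER_SIZE 64              /* assumed; not specified in the excerpt */

static char parse_buffer[PARSE_BUFFER_SIZE];
static volatile bool parse_buffer_locked = false;   /* set here, cleared after parsing */

/* Called when a carriage return terminates the combined string in the Receive buffer.
   Returns false (command dropped) if the parser has not finished with the last one. */
bool hand_off_to_parser(const char *received, size_t len)
{
    if (parse_buffer_locked || len >= PARSE_BUFFER_SIZE) {
        return false;
    }
    memcpy(parse_buffer, received, len);
    parse_buffer[len] = '\0';
    parse_buffer_locked = true;           /* main loop parses the command, then clears this */
    return true;
}
```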

Editor’s Note: If you are interested in other projects utilizing the AVR Butterfly, check out the Talk Zombie, which won “Distinctive Excellence” in the AVR Design Contest 2006 sponsored by Atmel and administered by Circuit Cellar. The ATmega169-based multilingual talking device reports the ambient temperature and current time in the form of speech (English, Dutch, or French).

Q&A: Andrew Godbehere, Imaginative Engineering

Engineers are inherently imaginative. I recently spoke with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley, about how his ideas become realities, his design process, and his dream project. —Nan Price, Associate Editor

Andrew Godbehere


NAN: You are currently working toward your Electrical Engineering PhD at the University of California, Berkeley. Can you describe any of the electronics projects you’ve worked on?

ANDREW: In my final project at Cornell University, I worked with a friend of mine, Nathan Ward, to make wearable wireless accelerometers and find some way to translate a dancer’s movement into music, in a project we called CUMotive. The computational core was an Atmel ATmega644V connected to an Atmel AT86RF230 802.15.4 wireless transceiver. We designed the PCBs, including the transmission line to feed the ceramic chip antenna. Everything was hand-soldered, though I recommend using an oven instead. We used Kionix KXP74 tri-axis accelerometers, which we encased in a lot of hot glue to create easy-to-handle boards and to shield them from static.

This is the central control belt-pack to be worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to a synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.


The dancer had four accelerometers connected to a belt pack with an Atmel chip and transceiver. On the receiver side, a musical instrument digital interface (MIDI) communicated with a synthesizer. (Design details are available at http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2007/njw23_abg34/index.htm.)

I was excited about designing PCBs for 802.15.4 radios and making them work. I was also enthusiastic about trying to figure out how to make some sort of music with the product. We programmed several possibilities, one of which was a sort of theremin; another was a sort of drum kit. I found that this was the even more difficult part—not just the making, but the making sense.

When I got to Berkeley, my work switched to the theoretical. I tried to learn everything I could about robotic systems and how to make sense of them and their movements.

NAN: Describe the real-time machine vision-tracking algorithm and integrated vision system you developed for the “Are We There Yet?” installation.

ANDREW: I’ve always been interested in using electronics and robotics for art. Having a designated emphasis in New Media on my degree, I was fortunate enough to be invited to help a professor on a fascinating project.

This view of the Yud Gallery is from the installed camera with three visitors present. Note the specular reflections on the floor. They moved throughout the day with the sun. This movement needed to be discerned from a visitor’s typical movement.


For the “Are We There Yet?” installation, we used a PointGrey FireFlyMV camera with a wide-angle lens. The camera was situated a couple hundred feet away from the control computer, so we used a USB-to-Ethernet range extender to communicate with the camera.

We installed a color camera in a gallery in the Contemporary Jewish Museum in San Francisco, CA. We used Meyer Sound speakers with a high-end controller system, which enabled us to “position” sound in the space and to sweep audio tracks around at (the computer’s programmed) will. The Meyer Sound D-Mitri platform was controlled by the computer with Open Sound Control (OSC).

This view of the Yud Gallery is from the perspective of the computer running the analysis. This is a probabilistic view, where the brightness of each pixel represents the “belief” that the pixel is part of an interesting foreground object, such as a pedestrian. Note the hot spots corresponding nicely with the locations of the visitors in the image above.


The hard work was to then program the computer to discern humans from floors, furniture, shadows, sunbeams, and cloud reflections. The gallery had many skylights, which made the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful to determine whether detected color-perturbance was human or not.

Once complete, the experience of the installation was beautiful, enchanting, and maybe a little spooky. The audio tracks were all questions (e.g., “Are we there yet?”) and they were always spoken near you, as if addressed to you. They responded to your movement in a way that felt to me like dancing with a ghost. You can watch videos about the installation at www.are-we-there-yet.org.

The “Are We There Yet?” project opens itself up to possible use as an embedded system. I’ve been told that the software I wrote works on iOS devices by the start-up company Romo (www.kickstarter.com/projects/peterseid/romo-the-smartphone-robot-for-everyone), which was evaluating my vision-tracking code for use in its cute iPhone rover. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud-tracking system using a Raspberry Pi and a reasonable webcam.

I may create an automatic cloud-tracking system. I think computers could be capable of this kind of abstraction, even though we think of the leisurely pastime of watching clouds as the mark of a dreamer.

NAN: Some of the projects you’ve contributed to focus on switched linear systems, hybrid systems, wearable interfaces, and computation and control. Tell us about the projects and your research process.

ANDREW: I think my research is all driven by imagination. I try to imagine a world that could be, a world that I think would be nice, or better, or important. Once I have an idea that captivates my imagination in this way, I have no choice but to try to realize the idea and to seek out the knowledge necessary to do so.

For the wearable wireless accelerometers, it began with the thought: Wouldn’t it be cool if dance and music were inherently connected the way we try to make it seem when we’re dancing? From that thought, the designs started. I thought: The project has to be wireless and low power, it needs accelerometers to measure movement, it needs a reasonable processor to handle the data, it needs MIDI output, and so forth.

My switched linear systems research came about in a different way. As I was in class learning about theories regarding stabilization of hybrid systems, I thought: Why would we do it this complicated way, when I have this reasonably simple intuition that seems to solve the problem? I happened to see the problem a different way as my intuition was trying to grapple with a new concept. That naive accident ended up as a publication, “Stabilization of Planar Switched Linear Systems Using Polar Coordinates,” which I presented in 2010 at Hybrid Systems: Computation and Control (HSCC) in Stockholm, Sweden.

NAN: How did you become interested in electronics?

ANDREW: I always thought things that moved seemingly of their own volition were cool and inherently attention-grabbing. I would think: Did it really just do that? How is that possible?

Andrew worked on this project when computers still had parallel ports. a—This photo shows manually etched PCB traces for a digital EKG (the attempted EEG) with 8-bit LED optoisolation. The rainbow cable connects to a computer’s parallel port. The interface code was written in C++ and ran on DOS. b—The EKG circuitry and digitizer are shown on the left. The 8-bit parallel computer interface is on the right. Connecting the two boards is an array of coupled LEDs and phototransistors, encased in heat shrink tubing to shield against outside light.


Electric rally-car tracks and radio-controlled cars were a favorite of mine. I hadn’t really thought about working with electronics or computers until middle school. Before that, I was all about paleontology. Then, I saw an episode of Scientific American Frontiers, which featured Alan Alda excitedly interviewing RoboCup contestants. Watching RoboCup [a soccer game involving robotic players], I was absolutely enchanted.

While my childhood electronic toys moved and somehow acted as their own entities, they were puppets to my intentions. Watching RoboCup, I knew these robots were somehow making their own decisions on-the-fly, magically making beautiful passes and goals not as puppets, but as something more majestic. I didn’t know about the technical blood, sweat, and tears that went into it all, so I could have these romantic fantasies of what it was, but I was hooked from that moment.

That spurred me to apply to a specialized science and engineering high school program. It was there that I was fortunate enough to attend a fabulous electronics class (taught by David Peins), where I learned the basics of electronics, the joy of tinkering, and even PCB design and assembly (drilling included). I loved everything involved. Even before I became academically invested in the field, I fell in love with the manual craft of making a circuit.

NAN: Tell us about your first design.

ANDREW: Once I’d learned something about designing and making circuits, I jumped in whole-hog, to a comical degree. My very first project without any course direction was an electroencephalograph!

I wanted to make stuff move on my computer with my brain, the obvious first step. I started with a rough design and worked on tweaking parameters and finding components.

In retrospect, I think that first attempt was actually an electromyograph that read the movements of my eye muscles. And it definitely was an electrocardiograph. Success!

Someone suggested that it might not be a good idea to have a power supply hooked up in any reasonably direct path with your brain. So, in my second attempt, I tried something new: I digitized the signal on the brain side and hooked it up to eight white LEDs. On the other side, I had eight phototransistors coupled with the LEDs and covered with heat-shrink tubing to keep out outside light. That part worked, and I was excited about it, even though I was having some trouble properly tuning the op-amps in that version.

NAN: Describe your “dream project.”

ANDREW: Augmented reality goggles. I’m dead serious about that, too. If given enough time and money, I would start making them.

I would use some emerging organic light-emitting diode (OLED) technology. I’m eyeing the start-up MicroOLED (www.microoled.net) for its low-power “near-to-eye” display technologies. They aren’t available yet, but I’m hopeful they will be soon. I’d probably hook that up to a Raspberry Pi SBC, which is small enough to be worn reasonably comfortably.

Small, high-resolution cameras have proliferated with modern cell phones, which could easily be mounted into the sides of goggles, driving each OLED display independently. Then, it’s just a matter of creativity for how to use your newfound vision! The OpenCV computer vision library offers a great starting point for applications such as face detection, image segmentation, and tracking.

Google Glass is starting to get some notice as a sort of “heads-up” display, but in my opinion, it doesn’t go nearly far enough. Here’s the craziest part—please bear with me—I’m willing to give up directly viewing the world with my natural eyes. I would be willing to wear full field-of-vision goggles with high-resolution OLED displays showing stereoscopic views from two high-resolution smartphone-style cameras. (At least until the technology gets better, as described in Rainbows End by Vernor Vinge.) I think, for this version, all the components are just now becoming available.

Augmented reality goggles would do a number of things for vision and human-computer interaction (HCI). First, 3-D overlays in the real world would be possible.

Crude example: I’m really terrible with faces and names, but computers are now great with that, so why not get a little help and overlay nametags on people when I want? Another fascinating thing for me is that this concept of vision abstracts the body from the eyes. So, you could theoretically connect to the feed from any stereoscopic cameras around (e.g., on an airplane, in the Grand Canyon, or on the back of some wild animal), or you could even switch points of view with your friend!

Perhaps reality goggles are not commercially viable now, but I would unabashedly use them for myself. I dream about them, so why not make them?