The Future of Small Radar Technology

Directing the limited resources of Fighter Command to intercept a fleet of Luftwaffe bombers en route to London, or accurately engaging the Imperial Navy at 18,000 yards in the dead of night: this was our grandfather’s radar, the technology that evened the odds in World War II.

This is the combat information center aboard a World War II destroyer with two radar displays.

Today there is an insatiable demand for short-range sensors (i.e., small radar technology)—from autonomous vehicles to gaming consoles and consumer devices. State-of-the-art sensors that can provide full 3-D mapping of small-target scenes include laser radar and time-of-flight (ToF) cameras. Less expensive and less accurate acoustic and infrared devices sense proximity and coarse angle of arrival. The one sensor often overlooked by both the DIY and the professional designer is radar.

However, some are beginning to apply small radar technology to solve the world’s problems. Here are specific examples:

Autonomous vehicles: In 2007, the General Motors and Carnegie Mellon University Tartan Racing team won the Defense Advanced Research Projects Agency (DARPA) Urban Challenge, in which autonomous vehicles had to drive through a city in the shortest possible time. Numerous small radar devices aided their real-time decision making. Small radar devices will be a key enabling technology for autonomous vehicles—from self-driving automobiles to unmanned aerial drones.

Consumer products: Recently, Massachusetts Institute of Technology (MIT) researchers developed a radar sensor for gaming systems that is capable of detecting gestures and other complex movements inside a room, even through interior walls. Expect small radar devices to play a key role in enabling user interfaces on everything from gaming consoles to smartphones.

The Internet of Things (IoT): Fybr is a technology company that uses small radar sensors to detect the presence of parked automobiles, creating the most accurate parking detection system in the world for smart cities to manage parking and traffic congestion in real time. Small radar sensors will enable the IoT by providing accurate intelligence to data aggregators.

Automotive: Small radar devices are found in mid- to high-priced automobiles in automated cruise control, blind-spot detection, and parking aids. Small radar devices will soon play a key role in automatic braking, obstacle-avoidance systems, and eventually self-driving automobiles, greatly increasing passenger safety.

Through-Wall Imaging: Advances in small radar have numerous possible military applications, including recent MIT work on through-wall imaging of human targets through solid concrete walls. Expect more military uses of small radar technology.

What is taking so long? A tremendous knowledge gap exists between writing the application software and emitting microwave fields, detecting the scattered returns, and understanding the result. Radar was originally developed by physicists who had a deep understanding of electromagnetics and were interested in the theory of microwave propagation and scattering. They created everything from scratch, from antennas to specialized vacuum tubes.

Microwave tube development, for example, required a working knowledge of particle physics. Due to this legacy, radar textbooks are often intensely theoretical. Furthermore, microwave components were very expensive—handmade and gold-plated. Radar was primarily developed by governments and the military, which made high-dollar investments for national security.

Small radar devices such as the RFBeam Microwave K-LC1a radio transceiver cost less than $10 when purchased in quantity.

It’s time we make radar a viable option for DIY projects and consumer devices by developing low-cost, easy-to-use, capable technology and bridging the knowledge gap!

Today you can buy small radar sensors for less than $10. Couple this with learning practical radar processing methods, and you can solve a critical sensing problem for your project.
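To make that concrete, here is a minimal sketch of one such processing method: estimating a target’s speed from the Doppler shift in a continuous-wave sensor’s output. It assumes the radar’s baseband signal has already been digitized (e.g., with a sound card) and that the sensor operates at 24.125 GHz, as K-band modules like the K-LC1a do; the sample rate and synthetic test signal are illustrative.

```python
# Minimal CW Doppler processing sketch: estimate target speed from the
# digitized baseband output of a 24-GHz motion sensor. Assumes the IF
# signal was captured at audio rates, e.g., with a sound card.
import numpy as np

C = 3e8                  # speed of light (m/s)
F_CARRIER = 24.125e9     # K-band carrier frequency (Hz)
WAVELENGTH = C / F_CARRIER

def doppler_speed(samples, sample_rate):
    """Return the dominant target speed (m/s) from raw IF samples."""
    windowed = samples * np.hanning(len(samples))   # window to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum[0] = 0.0                    # ignore the DC bin
    f_doppler = freqs[np.argmax(spectrum)]
    return f_doppler * WAVELENGTH / 2.0  # v = f_d * lambda / 2

# Synthetic example: a 1-s capture at 8 kHz of a target moving ~3 m/s
# (f_d = 2 * 3 / 0.0124 ≈ 483 Hz).
rate = 8000
t = np.arange(rate) / rate
test_signal = np.sin(2 * np.pi * 483 * t)
print(f"estimated speed: {doppler_speed(test_signal, rate):.2f} m/s")
```

The Hann window and DC-bin suppression are standard precautions; without them, spectral leakage and the sensor’s DC offset can swamp the true Doppler peak.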

Learn by doing. I created the MIT short course “Build a Small Radar Sensor,” where students learn about radar by building a device from scratch. Those interested can take the online course for free through MIT OpenCourseWare or enroll in the five-day MIT Professional Education course.

Dive deeper. My soon-to-be published multimedia book, Small and Short-Range Radar Systems, explains the principles and building of numerous small radar devices and then demonstrates them so readers at all levels can create their own radar devices or learn how to use data from off-the-shelf radar sensors.

This is just the beginning. Soon small radar sensors will be everywhere.

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Because such sensors share the common I2C protocol, chains of them could be added with very little effort, as sketched below. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
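As a rough illustration of how simple polling such a sensor chain could be, here is a minimal Python sketch using the smbus2 package on a Raspberry Pi. The bus addresses and data register are hypothetical placeholders, not values from any particular part; a real sensor’s datasheet would supply them.

```python
# Minimal sketch of polling a chain of I2C sensors from a Raspberry Pi.
# Requires the smbus2 package (pip install smbus2).
from smbus2 import SMBus

SENSOR_ADDRESSES = [0x40, 0x41, 0x42]  # hypothetical bus addresses
DATA_REGISTER = 0x00                   # hypothetical data register

def read_sensors(bus_number=1):
    """Read one byte from each sensor on the chain and return the values."""
    readings = {}
    with SMBus(bus_number) as bus:
        for address in SENSOR_ADDRESSES:
            readings[address] = bus.read_byte_data(address, DATA_REGISTER)
    return readings

if __name__ == "__main__":
    for address, value in read_sensors().items():
        print(f"sensor 0x{address:02X}: {value}")
```

Adding another sensor to the chain is then just a matter of wiring it to the shared bus and appending its address to the list.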

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remote control and even for programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks, as in the sketch below.
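Here is a minimal sketch of that idea using Python’s Flask framework. Flask is one reasonable choice rather than a prescription, and the drive() function is a hypothetical stand-in for the real serial/I2C motor-control code.

```python
# Sketch of a web interface for a small rover: Flask exposes "forward",
# "left", and "right" endpoints that an AJAX front end can call.
# Requires the flask package (pip install flask).
from flask import Flask, jsonify

app = Flask(__name__)

def drive(left_speed, right_speed):
    """Placeholder: send wheel speeds to the motor controller over serial/I2C."""
    print(f"wheels: left={left_speed} right={right_speed}")

@app.route("/forward")
def forward():
    drive(1.0, 1.0)
    return jsonify(status="ok", command="forward")

@app.route("/left")
def left():
    drive(-0.5, 0.5)   # spin in place to the left
    return jsonify(status="ok", command="left")

@app.route("/right")
def right():
    drive(0.5, -0.5)   # spin in place to the right
    return jsonify(status="ok", command="right")

if __name__ == "__main__":
    # Serve on all interfaces so a phone or tablet on the LAN can connect.
    app.run(host="0.0.0.0", port=8080)
```

The HTML front end could then trigger these endpoints with a one-line AJAX call such as fetch("/forward"), keeping the student-facing interface as simple as a row of buttons.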

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize the user interface. This small superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve and the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

AAR Arduino Autonomous Mobile Robot

The AAR Arduino Robot is a small autonomous mobile robot designed for those new to robotics as well as for experienced Arduino designers. The robot is well suited for hobbyists and school projects. Built on the Arduino open-source prototyping platform, the robot is easy to program and run.

The AAR, which is delivered fully assembled, comes with a comprehensive CD that includes all the software needed to write, compile, and upload programs to your robot. It also includes firmware and hardware self-tests. For wireless control, the robot offers optional Bluetooth technology and a 433-MHz RF link.

The AAR robot’s features include an Atmel ATmega328P 8-bit AVR-RISC processor with a 16-MHz clock, Arduino open-source software, two independently controlled 3-VDC motors, an I2C bus, 14 digital I/Os on the processor, eight analog input lines, a USB programming interface, on-board odometry sensors on both wheels, a line-tracker sensor, and an ISP connector for bootloader programming.

The AAR’s many example programs help you get your robot up and running. With many expansion kits available, your creativity is unlimited.

Contact Global Specialties for pricing.

Global Specialties
http://globalspecialties.com

Using Socially Assistive Robots to Address the Caregiver Gap

David Feil-Seifer

Editor’s Note: David Feil-Seifer, a Postdoctoral Fellow in the Computer Science Department at Yale University, wrote this essay for Circuit Cellar. Feil-Seifer focuses his research on socially assistive robotics (SAR), particularly the study of human-robot interaction for children with autism spectrum disorders (ASD).

Health care and education crises loom on the horizon. Baby boomers are getting older and requiring more care, which puts pressure on caregivers. The US nursing shortage is projected to worsen. Similarly, the rapid growth in diagnoses of developmental disorders suggests a greater need for educators, one the education system is struggling to meet. These great and growing shortfalls in the number of caregivers and educators may be addressed (in part) through the use of socially assistive robotics.

In health care, non-contact repetitive tasks make up a large part of a caregiver’s day. Tasks such as monitoring instruments only require a check to verify that readings are within norms. By offloading these tasks to an automated system, a nurse or doctor could spend more time doing work that better leverages their medical training. A robot can effectively perform simple repetitive tasks (e.g., monitoring breath spirometry exercises or post-stroke rehabilitation compliance).

I coined the term “socially assistive robotics” (SAR) to describe robots that provide such assistance through social rather than physical interaction. My research centers on the development of SAR algorithms and complete systems for domains such as post-stroke rehabilitation, elder care, and therapeutic interaction for children with autism spectrum disorders (ASD). A key challenge for such autonomous SAR systems is the ability to sense, interpret, and properly respond to human social behavior.

One of my research priorities is developing a socially assistive robotic system for children with ASD. Children with ASD are characterized by social impairments, communication difficulties, and repetitive and stereotyped behaviors. Significant anecdotal evidence indicates that some children with ASD respond socially to robots, which could have therapeutic ramifications. We envision a robot that could act as a catalyst for social interaction, both human-robot and human-human, thus aiding ASD users’ human-human socialization. In such a scenario, the robot is not specifically generating social behavior or participating in social interaction, but instead behaves in a way known to provoke human-human interaction.

David Feil-Seifer developed an autonomous robot that recognizes and appropriately responds to a child’s free-form behavior in play contexts, similar to those seen in some more traditional autism spectrum disorder (ASD) therapies.

Enabling a robot to exhibit and understand social behavior with a child is challenging. Children are highly individual and thus technology used for social interaction needs to be robust to be effective. I developed an autonomous robot that recognizes and appropriately responds to a child’s free-form behavior in play contexts, similar to those seen in some more traditional ASD therapies.

To detect and mitigate child distress, I developed a methodology for learning and then applying a data-driven spatiotemporal model of social behavior, based on distance-based features, to automatically differentiate between typical and aversive child-robot interactions. Using a Gaussian mixture model learned over the distance-based feature data, the system was able to detect and interpret social behavior with sufficient accuracy to recognize child distress. The robot can use this information to change its own behavior and encourage positive social interaction.
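The following toy sketch illustrates the general approach with scikit-learn’s GaussianMixture. The two distance-based features, the synthetic training distribution, and the 5th-percentile threshold are illustrative assumptions, not the actual model from this work.

```python
# Illustrative sketch: fit a Gaussian mixture model to distance-based
# features from *typical* child-robot interactions, then flag
# low-likelihood observations as possible distress.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

# Each row: [child-robot distance (m), child speed (m/s)] -- synthetic.
rng = np.random.default_rng(0)
typical = np.column_stack([
    rng.normal(1.0, 0.2, 500),   # children tend to stay ~1 m away
    rng.normal(0.1, 0.05, 500),  # and move slowly during play
])

model = GaussianMixture(n_components=3, random_state=0).fit(typical)

# Threshold on log-likelihood, e.g., the 5th percentile of training scores.
threshold = np.percentile(model.score_samples(typical), 5)

def is_aversive(features):
    """Return True if the observation looks unlike typical interaction."""
    return model.score_samples(np.atleast_2d(features))[0] < threshold

print(is_aversive([1.0, 0.1]))   # typical spacing and speed -> False
print(is_aversive([4.0, 1.5]))   # child running away -> likely True
```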

To encourage human-human interaction once human-robot interaction was achieved, I developed a navigation planner that used the above spatiotemporal model. This was used to maintain the robot’s spatial relationship with a child to sustain interaction while also guiding the child to a particular location in a room. This could be used to encourage a child to move toward another interaction partner (e.g., a parent). The desired spatial interaction behavior is achieved by modifying an established trajectory planner to weigh candidate trajectories based on conformity to a trained model of the desired behavior.
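A minimal sketch of that weighting idea appears below. The Gaussian spacing model, the preferred distance, and the 70/30 blend between social conformity and goal progress are all illustrative assumptions; the actual work modified an established trajectory planner using a model trained on real interaction data.

```python
# Sketch: score candidate trajectory endpoints by how well they conform
# to a desired child-robot spacing, blended with progress toward a goal.
import math

DESIRED_DISTANCE = 1.2   # preferred robot-child spacing (m), assumed
SIGMA = 0.4              # tolerance around that spacing (m), assumed

def social_conformity(robot_pos, child_pos):
    """Gaussian score: 1.0 at the preferred spacing, falling off with error."""
    d = math.dist(robot_pos, child_pos)
    return math.exp(-((d - DESIRED_DISTANCE) ** 2) / (2 * SIGMA ** 2))

def goal_progress(robot_pos, goal_pos):
    """Score that grows as the trajectory endpoint nears the goal."""
    return 1.0 / (1.0 + math.dist(robot_pos, goal_pos))

def best_trajectory(candidates, child_pos, goal_pos, w_social=0.7):
    """Pick the candidate endpoint maximizing the blended score."""
    def score(endpoint):
        return (w_social * social_conformity(endpoint, child_pos)
                + (1 - w_social) * goal_progress(endpoint, goal_pos))
    return max(candidates, key=score)

child, goal = (0.0, 0.0), (3.0, 0.0)
candidates = [(1.2, 0.0), (2.5, 0.0), (0.3, 0.0)]
print(best_trajectory(candidates, child, goal))  # keeps ~1.2 m while advancing
```

Tuning the weight toward social conformity keeps the robot engaged with the child; shifting it toward goal progress gently draws the child across the room.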

I also developed a methodology for robot behavior that provides autonomous feedback for a robot-child imitation and turn-taking game. This was accomplished by incorporating an established therapeutic model of feedback along with a trained model of imitation behavior. This is used as part of an autonomous system that can play Simon Says, recognize when the rules have been violated, and provide appropriate feedback.
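As a toy illustration of the rule-checking portion of such a game, consider the sketch below. The feedback strings are placeholders for the established therapeutic feedback model, and the perception problem (recognizing whether the child actually imitated) is assumed away.

```python
# Toy Simon Says rule check: a command should only be imitated when
# prefixed by "Simon says"; imitating otherwise violates the rules.
def judge_turn(prompt, child_imitated):
    """Return feedback for one turn of Simon Says."""
    simon_said = prompt.lower().startswith("simon says")
    if simon_said and child_imitated:
        return "Great job copying me!"
    if simon_said and not child_imitated:
        return "Simon said to do it. Try copying me this time."
    if not child_imitated:
        return "Good catch! Simon didn't say."
    return "Oops! Simon didn't say to do that."

print(judge_turn("Simon says touch your nose", child_imitated=True))
print(judge_turn("Touch your nose", child_imitated=True))
```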

A growing body of data supports the hypothesis that robots have the potential to aid in addressing people’s needs through non-contact assistance. My research, along with that of many others, has resulted in technical advances for robots providing assistance to people. However, there is a long way to go before these systems can be deployed as therapeutic platforms. Given that the beneficiary populations are growing, and that therapeutic needs are increasing far more rapidly than the resources available to address them, SAR could provide lasting benefits to people in need.

David Feil-Seifer, a Postdoctoral Fellow in the Computer Science Department at Yale University, focuses his research on socially assistive robotics (SAR), particularly the study of human-robot interaction for children with autism spectrum disorders (ASD). His dissertation work addressed autonomous robot behavior so that socially assistive robots can recognize and respond to a child’s behavior in unstructured play. David received his MS and PhD in Computer Science from the University of Southern California and a BS in Computer Science from the University of Rochester, NY. He recently was hired as Assistant Professor of Computer Science at the University of Nevada, Reno.