Project Spotlight: “3D-Printed Mouse” Circuitry & Design

I get to meet and interact with creative engineers and researchers around the world who are working on innovative MCU-based projects. Some of them show up at our office to chat. Others I meet with when I travel to California for events like the Embedded Systems Conference. But many of the most interesting people and projects I find are on the Net. A perfect example is David Mellis, whose projects and research grabbed my attention recently while I was browsing the MIT Media Lab website. He is a PhD student in the High-Low Tech research group at the MIT Media Lab.

Mellis gave me permission to write about the projects and post some of the photos from his website, so let’s take a look at one of them—the “3D-Printed Mouse.”

The 3D-Printed Mouse design (Source: D. Mellis)

Check out the mouse strapped to a hand.

The mouse in hand (Source: D. Mellis)

Mellis writes:

This mouse combines a traditional electronic circuit board and components with a 3D-printed enclosure. The mouse is open-source: the original files necessary to make or modify its design are available for download below.

Download

Enclosure
Rhino: mouse.3dm
STLs: mouse-shell.stl, mouse-base.stl

Circuit board
Eagle files: mouse.brd, mouse.sch
Gerbers: mouse-gerbers.zip
Schematic: mouse.pdf

Component Datasheets

Button: SS-P_1110.pdf
Mouse Chip: ADNS2620.pdf

Code: hid-mouse.zip

Mellis notes that the circuitry and code are based on SparkFun’s ADNS2620 Evaluation Board, but “have been modified to include buttons.”

The first prototype with the SparkFun board (Source: D. Mellis)

Click here to access the project site.

 

User Interface Innovation: Collaborative Navigation in Virtual Search and Rescue

An engineering team from Virginia Tech’s Center for Human-Computer Interaction and Department of Computer Science recently won first place in the IEEE’s 2012 3DUI Contest for their Collaborative Augmented Rescue Navigation and Guidance Escort (CARNAGE) project. The project was designed to enable emergency responders to collaborate and safely navigate a dangerous environment, such as a disaster area.

The contest was open to researchers, students, and professionals working on 3-D user interface technologies. Entrants were challenged to design an application that would enable two users—situated in different locations, each with his or her own UI—to navigate a 3-D environment without speaking to each other.

Collaborative Augmented Rescue Navigation and Guidance Escort UI (Source: Virginia Tech News, YouTube)

The Virginia Tech team—comprising Felipe Bacim, Eric Ragan, Siroberto Scerbo, and Cheryl Stinson—described their design in a concise system description, which is currently available on the 3DUI 2012 contest website:

Our task specifically looks at communication between a scene commander and a disaster relief responder during a search and rescue operation. The responders inside the environment have great difficulty navigating because of hazards, reduced visibility, disorientation, and lack of survey knowledge of the environment. Observing the operation from outside of the disaster area, scene commanders work to help coordinate the response effort [1, 2]. With the responder’s notifications about the environment, scene commanders can provide new instructions, alert the responders to risks, and issue evacuation orders. Since neither the commander nor the responder has complete information about the environment, effective communication is essential.

As technology advances, the incorporation of new tools into search and rescue protocols shows promise for improving operation efficiency and safety. In this research, we explore the use of 3D user interfaces to assist collaborative search and rescue. Ideally, users should be able to focus on their primary tasks in the VE, rather than struggle with travel and wayfinding. Using virtual reality (VR) as a prototyping testbed, we implemented a proof-of-concept collaborative guidance system. Preliminary evaluation has demonstrated promising results for efficient rescue operations.

The team also created a 6-minute, 16-second explanatory project video:

Click here for more information about the IEEE Symposium on 3D User Interfaces in Costa Mesa, CA.

Source: Virginia Tech News

 

FPGA-Based VisualSonic Design Project

The VisualSonic Studio project on display at Design West last week was as innovative as it was fun to watch in operation. The design—which included an Altera DE2-115 FPGA development kit and a Terasic 5-megapixel CMOS Sensor (D5M)—used interactive tokens to control computer-generated music.

The VisualSonic Studio project at Design West 2012 in San Jose, CA (Photo: Circuit Cellar)

I spoke with Allen Houng, Strategic Marketing Manager for Terasic, about the project developed by students from National Taiwan University. He described the overall design and let me see the Altera kit and Terasic sensor installation.

A view of the kit and sensor (Photo: Circuit Cellar)

Houng also showed me the design in action. To operate the sound system, you simply move the tokens to create the sound effects of your choosing. Below is a video of the project in operation (Source: Terasic’s YouTube channel).
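The demo’s internals run on the FPGA and aren’t described here, so the following fragment is not the students’ implementation. It is a toy Python sketch of the general tangible-interface pattern the project suggests: once a camera tracker reports a token’s normalized (x, y) position, that position can drive synthesis parameters. The pitch/loudness mapping, sample rate, and function name are all illustrative assumptions.

```python
# Toy illustration only: maps an assumed camera-tracked token position
# in the unit square to a synthesized sine tone (x -> pitch, y -> loudness).
import numpy as np

SAMPLE_RATE = 44_100  # audio sample rate in Hz

def token_to_tone(x: float, y: float, seconds: float = 0.25):
    """Turn a token position in [0, 1] x [0, 1] into a short tone."""
    freq = 220.0 * 2.0 ** (3.0 * x)  # x sweeps three octaves up from A3
    amp = 0.1 + 0.9 * y              # y controls loudness, with a floor
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return amp * np.sin(2 * np.pi * freq * t)

# A token sitting at the center of the table:
samples = token_to_tone(0.5, 0.5)
```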

Robot Design with Microsoft Kinect, RDS 4, & Parallax’s Eddie

On March 8, Microsoft announced the availability of its Robotics Developer Studio 4 (RDS 4) software for robotics applications. RDS 4 was designed to work with the Kinect for Windows SDK. To demonstrate the capabilities of RDS 4, the Microsoft robotics team built the Follow Me Robot with a Parallax Eddie robot, a laptop running Windows 7, and a Kinect.

In the following short video, Microsoft software developer Harsha Kikkeri demonstrates Follow Me Robot.

Circuit Cellar readers are already experimenting with the Kinect and developing embedded systems to work with it in interesting ways. In an upcoming article, designer Miguel Sanchez describes an interesting Kinect-based 3-D imaging system.

Sanchez writes:

My project started as a simple enterprise that later became a bit more challenging. The idea of capturing the silhouette of an individual standing in front of the Kinect was based on isolating those points that are between two distance thresholds from the camera. As the depth image already provides the distance measurement, all the pixels of the subject will fall within a small range of distances, while other objects in the scene will be outside of this range. But I wanted to have just the contour line of a person and not all the pixels that belong to that person’s body.

OpenCV is a powerful computer vision library. I used it for my project because of its blobs function. This function extracts the contours of the different isolated objects in a scene. As my image would only contain one object—the person standing in front of the camera—the blobs function would return the exact list of coordinates of the contour of the person, which was what I needed. Please note that this function is heavy image processing made easy for the user. It provides not just one, but a list of all the different objects that have been detected in the image. It can also specify whether holes inside a blob are permitted, as well as the minimum and maximum areas of detected blobs. But for my project, I am only interested in the biggest blob returned, which will be the one with index zero, as blobs are stored in decreasing order of area in the array returned by the blobs function.

Though it is not a fault of the blobs function, I quickly realized that I was getting more detail than I needed and that there was a bit of noise in the edges of the contour. Filtering a bitmap can easily be accomplished with a blur function, but smoothing out a contour did not sound so obvious to me.

A contour line can be simplified by removing certain points. A clever algorithm can do this by removing those points that are close enough to the overall contour line. One of these algorithms is the Douglas-Peucker recursive contour-simplification algorithm. The algorithm starts with the two endpoints and accepts one point in between whose orthogonal distance from the line connecting the two endpoints is larger than a given threshold. Only the point with the largest distance is selected (or none if the threshold is not met). The process is repeated recursively, as new points are added, to create the list of accepted points (those that contribute the most to the general contour given a user-provided threshold). The larger the threshold, the rougher the resulting contour will be.

By simplifying the contour, human silhouettes now look better and the noise is gone, but they look a bit synthetic. The last step I performed was a cubic-spline interpolation, so the contour becomes a set of curves between the different original points of the simplified contour. It seems a bit twisted to simplify first only to add points back later with the spline interpolation, but this way it creates a more visually pleasant and curvy result, which was my goal.
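Sanchez’s article hasn’t been published yet, so the sketches below are not his code. This first one is a minimal Python rendering of the thresholding-plus-blobs step he describes, using OpenCV’s findContours in place of the blobs function he mentions; the depth array, threshold values, and function name are illustrative assumptions (a real project would pull depth frames from a Kinect driver such as libfreenect).

```python
# Isolate pixels between two distance thresholds and return the contour
# of the largest isolated region (the person in front of the camera).
import cv2
import numpy as np

def largest_silhouette(depth_mm: np.ndarray,
                       near_mm: int = 800, far_mm: int = 1500):
    # Pixels inside the depth band become white; everything else black.
    mask = cv2.inRange(depth_mm, near_mm, far_mm)
    # Extract the outer contour of each isolated object in the mask,
    # roughly what the blobs function provides.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    # Equivalent of taking the index-zero (biggest) blob: largest area.
    return max(contours, key=cv2.contourArea)
```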
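The Douglas-Peucker pass can be sketched the same way. OpenCV ships this exact algorithm as cv2.approxPolyDP, where epsilon plays the role of the distance threshold; the hand-rolled version below just makes the recursion Sanchez outlines explicit. The (N, 2) point layout and threshold value are assumptions.

```python
# Douglas-Peucker recursion as described above: keep the in-between
# point farthest from the start-end chord if it exceeds the threshold,
# then recurse on the two halves it creates.
import numpy as np

def douglas_peucker(points: np.ndarray, threshold: float) -> np.ndarray:
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy) or 1e-12
    # Orthogonal distance of every point from the chord start-end.
    dists = np.abs(dx * (points[:, 1] - start[1]) -
                   dy * (points[:, 0] - start[0])) / norm
    pivot = int(np.argmax(dists))
    if dists[pivot] <= threshold:
        return np.array([start, end])  # nothing sticks out far enough
    left = douglas_peucker(points[:pivot + 1], threshold)
    right = douglas_peucker(points[pivot:], threshold)
    return np.vstack([left[:-1], right])  # drop the duplicated pivot
```

The larger the threshold, the fewer points survive, which matches the rougher contours he describes.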
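Finally, the cubic-spline step he mentions might look like the following sketch, which uses SciPy to fit a closed cubic B-spline through the simplified points and resample it densely; the sample count and (N, 2) input shape are assumptions.

```python
# Fit a periodic cubic B-spline through the simplified contour so the
# silhouette becomes smooth curves between the surviving points.
import numpy as np
from scipy.interpolate import splev, splprep

def smooth_contour(simplified: np.ndarray, samples: int = 400):
    # Close the loop explicitly so the periodic fit is well defined.
    pts = np.vstack([simplified, simplified[:1]]).astype(float)
    # per=True closes the curve, k=3 makes it cubic, and s=0 forces the
    # spline through every point kept by the simplification step.
    tck, _ = splprep([pts[:, 0], pts[:, 1]], per=True, k=3, s=0)
    u = np.linspace(0.0, 1.0, samples)
    sx, sy = splev(u, tck)
    return np.column_stack([sx, sy])
```

This mirrors the trade-off Sanchez notes: simplification throws points away, and the spline resampling adds points back to get the curvy result.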

 

(Source: Miguel Sanchez)
(Source: Miguel Sanchez)

The nearby images show aspects of the process Sanchez describes in his article, where an offset between the human figure and the drawn silhouette is apparent.

The entire article is slated to appear in the June or July edition of Circuit Cellar.

Zero-Power Sensor (ZPS) Network

Recently, we featured two notable projects based on Echelon’s Pyxos technology: one about solid-state lighting solutions and one about a radiant floor heating zone controller. Here we present another innovative project: a zero-power sensor (ZPS) network on polymer.

The Zero Power Switch (Source: Wolfgang Richter, Faranak M. Zadeh)

The ZPS system—which was developed by Wolfgang Richter and Faranak M. Zadeh of Ident Technology AG—doesn’t require a battery or RF energy for operation. The sensors, developed on polymer foils, are fed by an electrical alternating field with a 200-kHz frequency. A Pyxos network enables the transmission of the wireless sensor data to various devices.

In their documentation, Wolfgang Richter and Faranak M. Zadeh write:

“The developed wireless zero-power sensors (ZPS) do not need power, a battery, or radio-frequency (RF) energy in order to operate. The system is realized on polymer foils in a printing process and/or additional silicon and is very eco-friendly in production and use. The sensors are fed by an electrical alternating field with a frequency of 200 kHz at up to a 5-m distance. The ZPS sensors can be mounted anywhere they are needed, e.g., on the body, in a room, a machine, or a car. One ZPS server can work for a number of ZPS-sensor clients and can be connected to any net to communicate with network intelligence and other servers. By modulating the electric field, the ZPS sensors can transmit a type of “sensor = o.k.” command. ZPS sensors can also be carried by humans (or animals) for vital-signs monitoring, so they are ideal for wireless monitoring systems (e.g., “aging at home”). The ZPS system is a wireless, powerless, and cordless system and works simultaneously, so it is a self-organized system …

The wireless Skinplex zero-power sensor network is a very simply structured but reliably functioning multiple-sensor system that combines classical physics, as taught by Kirchhoff, with the latest advances in (smart) sensor technology. It works with a virtually unlimited number of sensor nodes in inertial space, without a protocol, and without batteries, cables, and connectors. A chip not bigger than a particle of dust will be fabricated this year with the assistance of Cottbus University and Prof. Wegner. The system is ideal for communicating via Pyxos/Echelon with other instances and servers.

Pyxos networks help to bring wireless ZPS sensor data over distances to external instances, nets, and servers. With the advanced Echelon technology, even the AC power line (PL) can be used.

As most of a ZPS server is realized in software, it can easily be programmed into a Pyxos network device, a very cost-saving effect! Applications range from machine controls, smart office solutions, and smart homes up to homes for the elderly and medical facilities, as well as anywhere else where a power line (PL) exists.”
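Richter and Zadeh don’t specify how the sensors modulate the field, so the following Python sketch is a toy simulation rather than the ZPS protocol: it assumes simple on-off keying of the 200-kHz carrier named in their description and shows how a server-side receiver could, in principle, recover a one-bit “sensor = o.k.” status by envelope detection. The sample rate and bit period are invented for illustration.

```python
# Toy simulation of on-off keying a 200-kHz field (an assumed scheme,
# not the documented ZPS protocol) and recovering the bits server-side.
import numpy as np

FS = 2_000_000        # simulation sample rate in Hz (assumption)
CARRIER_HZ = 200_000  # field frequency given in the project description
BIT_S = 0.001         # one millisecond per bit (assumption)

def keyed_field(bits):
    """Sensor side: gate the 200-kHz carrier on or off for each bit."""
    t = np.arange(int(FS * BIT_S)) / FS
    tone = np.sin(2 * np.pi * CARRIER_HZ * t)
    return np.concatenate([tone * b for b in bits])

def recover_bits(signal, n_bits):
    """Server side: rectify, average each bit period, then threshold."""
    chunks = np.array_split(np.abs(signal), n_bits)
    levels = np.array([chunk.mean() for chunk in chunks])
    return (levels > levels.max() / 2).astype(int).tolist()

sent = [1, 0, 1, 1, 0, 1]  # stand-in pattern for a status message
assert recover_bits(keyed_field(sent), len(sent)) == sent
```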

Inside the ZPS project (Source: Wolfgang Richter, Faranak M. Zadeh)

For more information about Pyxos technology, visit www.echelon.com.

This project, as well as others, was promoted by Circuit Cellar based on a 2007 agreement with Echelon.