Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought about a new wave of interaction techniques that depart from the mouse and keyboard. The widespread adoption of smartphones and tablets with capacitive touchscreens demonstrates people’s preference for directly manipulating virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

Hand tracking results from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have miniaturized our clunky computer boxes into stylish, sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces. If sensors are like eyes and ears, software is analogous to the brain.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a high level, speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition over a time series of feature vectors. One of the main differences between the two is feature representation, because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries give only the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).
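
As a minimal sketch of how these cues could be combined (illustrative only, not our exact implementation), the Python snippet below uses OpenCV; the HSV skin range, the 4-m depth normalization, and the multiplicative weighting are placeholder assumptions.

import cv2
import numpy as np

def salience_map(frame_bgr, prev_gray, depth_mm):
    """Return (per-pixel salience probability map, current grayscale frame)."""
    # Skin-color cue: coarse HSV threshold (placeholder range).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    skin = cv2.inRange(hsv, lower, upper).astype(np.float32) / 255.0

    # Motion cue: simple frame differencing against the previous frame.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray).astype(np.float32)
    motion /= motion.max() + 1e-6

    # Closeness cue: pixels nearer the sensor (smaller depth) score higher.
    closeness = 1.0 - np.clip(depth_mm.astype(np.float32) / 4000.0, 0.0, 1.0)

    # Combine the cues and normalize so the map sums to 1.
    p = skin * motion * closeness
    return p / (p.sum() + 1e-6), gray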

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients (HOG) to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration as well as the hand shape descriptor. We also apply principal component analysis (PCA) to reduce the feature vector’s final dimension.
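
The per-frame feature computation might look like the following sketch, using scikit-image’s HOG implementation and scikit-learn’s PCA; the HOG parameters and the 30-dimensional target are illustrative assumptions rather than our published configuration.

import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def frame_feature(hand_depth_patch, pos, vel, acc):
    """Concatenate motion features with a HOG shape descriptor.

    hand_depth_patch: 2-D depth crop of the hand; pos/vel/acc: 3-vectors.
    """
    shape_desc = hog(hand_depth_patch, orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([pos, vel, acc, shape_desc])

def reduce_features(train_frames, n_components=30):
    """Fit PCA on the stacked training frames and project them."""
    pca = PCA(n_components=n_components).fit(train_frames)
    return pca, pca.transform(train_frames)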

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is the hidden Markov model (HMM), which is commonly used to model sequence data and was applied early on in speech recognition with great success.

There are two steps in gesture classification. First, we obtain training data to learn the models for different gestures. Then, during recognition, we find the model most likely to have produced the observed feature vectors, as sketched below. New developments in the area involve variations on the HMM, such as using hierarchical HMMs for real-time inference or using discriminative training to increase recognition accuracy.
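
A minimal sketch of this two-step scheme, assuming the hmmlearn library: fit one Gaussian HMM per gesture on labeled sequences, then label a new sequence with whichever model scores it highest. The state count is an arbitrary placeholder.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_gesture, n_states=5):
    """sequences_by_gesture maps a gesture name to a list of (T_i, D) arrays."""
    models = {}
    for name, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)               # all frames stacked
        lengths = [len(s) for s in seqs]  # sequence boundaries for the fit
        models[name] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def classify(models, seq):
    """Return the gesture whose HMM assigns seq the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(seq))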

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

The newest industry-scale development in speech recognition is a method called deep learning. Earlier machine-learning methods require careful hand-selection of feature vectors; the goal of deep learning is automatic discovery of powerful features from raw input data. It has shown promising results in speech recognition so far, and applying it to gesture recognition may further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become increasingly ubiquitous in our everyday lives.

3-D Printing with Liquid Metals

by Collin Ladd and Michael Dickey

Our research group at North Carolina State University has been studying new ways to use simple processes to print liquid metals into 3-D shapes at room temperature. 3-D printing is gaining popularity because it lets users go quickly from concept to reality when designing, replicating, or creating objects. For example, it is now possible to draw an object on a computer or scan a physical object into software and have a highly detailed replica within a few hours.

3-D printing with liquid metals: a line of dolls

Most 3-D printers currently pattern plastics, but printing metal objects is of particular interest because of metal’s physical strength and electrical conductivity. Because of the difficulty involved, metal printing is considered one of the “frontiers” of 3-D printing.
There are several approaches for 3-D printing of metals, but they all have limitations, including high temperatures (making it harder to co-print with other materials) and prohibitively expensive equipment. The most popular approach to printing metals is to use lasers or electron beams to sinter fine metal powders together at elevated temperatures, one layer at a time, to form solid metal parts.

Our approach uses a simple method to enable direct printing of liquid metals at room temperature. We print liquid metal alloys composed primarily of gallium. These alloys have metallic conductivity and a viscosity similar to that of water. Unlike mercury, gallium is not considered toxic, nor does it evaporate. We extrude this metal from a nozzle to create droplets that can be stacked to form 3-D structures. Normally, two droplets of liquid (e.g., water) merge into a single drop if stacked on each other. These metal droplets, however, do not succumb to surface-tension effects because the metal rapidly forms a solid oxide “skin” on its surface that mechanically stabilizes the printed structures. This skin also makes it possible to extrude wires or metal fibers.

This printing process is important for two reasons. First, it enables the printing of metallic structures at room temperature using a process that is compatible with other printed materials (e.g., plastics). Second, it results in metal structures that can be used for flexible and stretchable electronics.

Interest in stretchable electronics is motivated by the new applications that emerge from building electronic functionality on deformable substrates. The technology may enable wearable sensors and textiles that deform naturally with the human body, or even an elastic array of embedded sensors that could substitute for skin on a prosthetic or robot-controlled fingertip. Unlike the bendable polyimide-based circuits commonly seen on a ribbon cable or inside a digital camera, stretchable electronics require more mechanical robustness, including the ability to deform like a rubber band. However, a stretchable device need not be 100% elastic. Solid components embedded in a substrate (e.g., silicone) can be incorporated into a stretchable device if the connections between them can adequately deform.

Using our approach, we can direct-print freestanding wire bonds or circuit traces to directly connect components—without etching or solder—at room temperature. Encasing these structures in polymer enables the interconnects to be stretched tenfold without losing electrical conductivity. Liquid metal wires have also been shown to be self-healing, even after being completely severed. Our group has demonstrated several applications of the liquid metal in soft, stretchable components, including deformable antennas, soft-memory devices, ultra-stretchable wires, and soft optical components.

Although our approach is promising, there are notable limitations. Gallium alloys are expensive, and the price is expected to rise with gallium’s expanding industrial use. Nevertheless, it is possible to print microscale structures without using much volume, which helps keep the per-component cost down. Liquid metal structures must also be encased in a polymer substrate because they are not strong enough on their own for rugged applications.

Our current work is focused on optimizing this process and exploring new material possibilities for 3-D printing. We hope advancements will enable users to print new embedded electronic components that were previously challenging or impossible to construct using a 3-D printer.

Collin Ladd (claddc4@gmail.com) is pursuing a career in medicine at the Medical University of South Carolina in Charleston, SC. Since 2009, he has been the primary researcher for the 3-D printed liquid metals project at The Dickey Group, which is headed by Michael Dickey. Collin’s interests include circuit board design and robotics. He has been an avid electronics hobbyist since high school.

Michael Dickey (mddickey@ncsu.edu) is an associate professor at the North Carolina State University Department of Chemical and Biomolecular Engineering. His research includes studying soft materials, thin films and interfaces, and unconventional nanofabrication techniques. His research group’s projects include stretchable electronics, patterning gels, and self-folding sheets.

Q&A: Jeremy Blum, Electrical Engineer, Entrepreneur, Author

Jeremy Blum

Jeremy Blum, 23, has always been a self-proclaimed tinkerer. From Legos to 3-D printers, he has enjoyed learning about engineering both in and out of the classroom. A recent Cornell University College of Engineering graduate, Jeremy has written a book, started his own company, and traveled far to teach children about engineering and sustainable design. Jeremy, who lives in San Francisco, CA, is now working on Google’s Project Glass.—Nan Price, Associate Editor

NAN: When did you start working with electronics?

JEREMY: I’ve been tinkering, in some form or another, ever since I figured out how to use my opposable thumbs. Admittedly, it wasn’t electronics from the outset. As with most engineers, I started with Legos. I quickly progressed to woodworking and constructed several pieces of furniture over the course of a few years. It was only around the start of my high school career that I realized the extent to which I could express my creativity with electronics and software. I thrust myself into the (expensive) hobby of computer building and even built an online community around it. I financed my hobby through my two companies, which offered computer repair and video production services. After working exclusively with computer hardware for a few years, I began to dive deeper into analog circuits, robotics, microcontrollers, and more.

NAN: Tell us about some of your early, pre-college projects.

JEREMY: My most complex early project was the novel prosthetic hand I developed in high school. The project was a finalist in the prestigious Intel Science Talent Search. I also did a variety of robotics and custom-computer builds. The summer before starting college, my friends and I built a robot capable of playing “Guitar Hero” with nearly 100% accuracy. That was my first foray into circuit board design and parallel programming. My most ridiculous computer project was a mineral oil-cooled computer. We submerged an entire computer in a fish tank filled with mineral oil (it was actually a lot of baby oil, but they are basically the same thing).

DeepNote Guitar Hero Robot

Mineral Oil-Cooled Computer

NAN: You’re a recent Cornell University College of Engineering graduate. While you were there, you co-founded Cornell’s PopShop. Tell us about the workspace. Can you describe some PopShop projects?

Cornell University’s PopShop

JEREMY: I recently received my Master’s degree in Electrical and Computer Engineering from Cornell University, where I previously received my BS in the same field. During my time at Cornell, my peers and I took it upon ourselves to completely retool the entrepreneurial climate at Cornell. The PopShop, a co-working space that we formed a few steps off Cornell’s main campus, was our primary means of doing this. We wanted to create a collaborative space where students could come to explore their own ideas, learn what other entrepreneurial students were working on, and get involved themselves.

The PopShop is open to all Cornell students. I frequently hosted events there designed to get more students inspired about pursuing their own ideas. Common occurrences included peer office hours, hack-a-thons, speed networking sessions, 3-D printing workshops, and guest talks from seasoned venture capitalists.

Student startups that work (or have worked) out of the PopShop co-working space include clothing companies, financing companies, hardware startups, and more. Some specific companies include Rosie, SPLAT, LibeTech (mine), SUNN (also mine), Bora Wear, Yorango, Party Headphones, and CoVenture.

NAN: Give us a little background information about Cornell University Sustainable Design (CUSD). Why did you start the group? What types of CUSD projects were you involved with?

JEREMY: When I first arrived at Cornell my freshman year, I knew right away that I wanted to join a research lab and a project team (knowing that I learn best in hands-on environments instead of the classroom). I joined the Cornell Solar Decathlon Team, a very large group of mostly engineers and architects who were building a solar-powered home to enter in the biennial Solar Decathlon competition orchestrated by the Department of Energy.

By the end of my freshman year, I was the youngest team leader in the organization. After competing in the 2009 decathlon, I took over as chief director of the team and worked with my peers to re-form the organization into Cornell University Sustainable Design (CUSD), with the goal of building a more interdisciplinary team with far-reaching impacts.

Under my leadership, CUSD built a passive schoolhouse in South Africa (which has received numerous international awards), constructed a sustainable community in Nicaragua, served as the only student group consulting on sustainable design constraints for Cornell’s new Tech Campus in New York City, partnered with nonprofits to build affordable homes in upstate New York, taught workshops in museums and schools, contributed to the design of new sustainable buildings on Cornell’s Ithaca campus, and led a cross-country bus tour to teach engineering and sustainability concepts at K–12 schools across America. The group now comprises students from more than 25 different majors, with dozens of advisors and several simultaneous projects. The new team leaders are making it better every day. My current startup, SUNN, spun out of an EPA grant that CUSD won.

NAN: You spent two years working at MakerBot Industries, where you designed electronics for a 3-D printer and a 3-D scanner. Any highlights from working on those projects?

JEREMY: I had a tremendous opportunity to learn and grow while at MakerBot. When I joined, I was one of about two dozen total employees. Though I switched back and forth between consulting and full-time/part-time roles while class was in session, by the time I stopped working with MakerBot (in January 2013), the company had grown to more than 200 people. It was very exciting to be a part of that.

I designed all of the electronics for the original MakerBot Replicator. This constituted a complete redesign of the previous electronics used on the second-generation MakerBot 3-D printer. The knowledge I gained from doing this (e.g., PCB design, part sourcing, DFM, etc.) drastically outweighed much of what I had learned in school up to that point. I can’t say much about the 3-D scanner (the MakerBot Digitizer), as it has been announced, but not released (yet).

The last project I worked on before leaving MakerBot was designing the first working prototype of the Digitizer electronics and firmware. These components comprised the demo that was unveiled at SXSW this past April. This was a great opportunity to apply lessons learned from working on the Replicator electronics and find ways in which my personal design process and testing techniques could be improved. I frequently use my MakerBot printers to produce custom mechanical enclosures that complement the open-source electronics projects I’ve released.

NAN: Tell us about your company, Blum Idea Labs. What types of projects are you working on?

JEREMY: Blum Idea Labs is the entity I use to brand all my content and consulting services. I primarily use it as an outlet to facilitate working with educational organizations. For example, the St. Louis Hacker Scouts, the African TAHMO Sensor Workshop, and several other international organizations use a “Blum Idea Labs Arduino curriculum.” Most of my open-source projects, including my tutorials, are licensed via Blum Idea Labs. You can find all of them on my blog (www.jeremyblum.com/blog). I occasionally offer private design consulting through Blum Idea Labs, though I obviously can’t discuss work I do for clients.

NAN: Tell us about the blog you write for element14.

JEREMY: I generally use my personal blog to write about projects I’ve been working on. However, when I want to discuss more general engineering topics (e.g., sustainability, engineering education, etc.), I post on my element14 blog. I have a great working relationship with element14. It has sponsored the production of all my Arduino tutorials and also provided complete parts kits for my book. We cross-promote each other’s content in a mutually beneficial fashion that ensures the community gets better access to useful engineering content.

NAN: You recently wrote Exploring Arduino: Tools and Techniques for Engineering Wizardry. Do you consider this book introductory or is it written for the more experienced engineer?

JEREMY: As with all the video and written content that I produce on my website and on YouTube, I tried really hard to make this book useful and accessible to both engineering veterans and newbies. The book builds on itself and provides tons of optional excerpts that dive into greater technical detail for those who truly want to grasp the physics and programming concepts behind what I teach in the book. I’ve already had readers ranging from teenagers to senior citizens comment on the applicability of the book to their varying degrees of expertise. The Amazon reviews tell a similar story. I supplemented the book with a lot of free digital content including videos, part descriptions, and open-source code on the book website.

NAN: What can readers expect to learn from the book?

JEREMY: I wrote the book to serve as an engineering introduction and as an idea toolbox for those wanting to dive into concepts in electrical engineering, computer science, and human-computer interaction design. Though Exploring Arduino uses the Arduino as a platform to experiment with these concepts, readers can expect to come away from the book with new skills that can be applied to a variety of platforms, projects, and ideas. This is not a recipe book. The projects readers will undertake throughout the book are designed to teach important concepts in addition to traditional programming syntax and engineering theories.

NAN: I see you’ve spent some time introducing engineering concepts to children and teaching them about sustainable engineering and renewable energy. Tell us about those experiences. Any highlights?

JEREMY: The way I see it, there are two ways in which engineers can make the world a better place: they can design new products and technologies that solve global problems or they can teach others the skills they need to assist in the development of solutions to global problems. I try hard to do both, though the latter enables me to have a greater impact, because I am able to multiply my impact by the number of students I teach. I’ve taught workshops, written curriculums, produced videos, written books, and corresponded directly with thousands of students all around the world with the goal of transferring sufficient knowledge for these students to go out and make a difference.

Here are some highlights from my teaching work:

I taught BlueStamp Engineering, a summer program for high school students in NYC in the summer of 2012. I also guest-lectured at the program in 2011 and 2013.

I co-organized a cross-country bus tour where we taught sustainability concepts to school children across the country.

I was invited to speak at Techkriti 2013 in Kanpur, India. I had the opportunity to meet many students from IIT Kanpur who already followed my videos and used my tutorials to build their own projects.

Blum Idea Labs partnered with the St. Louis Hacker Scouts to construct a curriculum for teaching electronics to the students. Though I wasn’t there in person, I did welcome them all to the program with a personalized video.

Through CUSD, I organized multiple visits to the Brooklyn Children’s Zone, where my team and I taught students about sustainable architecture and engineering.

Again with CUSD, we visited the Intrepid museum to teach sustainable energy concepts using potato batteries.

NAN: Speaking of promoting engineering to children, what types of technologies do you think will be important in the near future?

JEREMY: I think technologies that make invention more widely accessible are going to be extremely important in the coming years. Cheaper tools, prototyping platforms such as the Arduino and the Raspberry Pi, 3-D printers, laser cutters, and open developer platforms (e.g., Android) are making it easier than ever for anyone to become an inventor or an engineer. Every year, I see younger and younger students learning to use these technologies, which makes me very optimistic about the things we’ll be able to do as a society.

Using Socially Assistive Robots to Address the Caregiver Gap

David Feil-Seifer

Editor’s Note: David Feil-Seifer, a Postdoctoral Fellow in the Computer Science Department at Yale University, wrote this essay for Circuit Cellar. Feil-Seifer focuses his research on socially assistive robotics (SAR), particularly the study of human-robot interaction for children with autism spectrum disorders (ASD). His dissertation work addressed autonomous robot behavior so that socially assistive robots can recognize and respond to a child’s behavior in unstructured play. He was recently hired as Assistant Professor of Computer Science at the University of Nevada, Reno.

Health care and education crises loom on the horizon. Baby boomers are getting older and requiring more care, which puts pressure on caregivers, and the US nursing shortage is projected to worsen. Similarly, the rapid growth in diagnoses of developmental disorders suggests a greater need for educators, one the education system is struggling to meet. These large and growing shortfalls in the number of caregivers and educators may be addressed (in part) through the use of socially assistive robotics.

In health care, non-contact repetitive tasks make up a large part of a caregiver’s day. Tasks such as monitoring instruments only require a check to verify that readings are within norms. By offloading these tasks to an automated system, a nurse or doctor could spend more time doing work that better leverages their medical training. A robot can effectively perform simple repetitive tasks (e.g., monitoring breath spirometry exercises or post-stroke rehabilitation compliance).

I coined the term “socially assistive robotics” (SAR) to describe robots that provide such assistance through social rather than physical interaction. My research involves developing SAR algorithms and complete systems relevant to domains such as post-stroke rehabilitation, elder care, and therapeutic interaction for children with autism spectrum disorders (ASD). A key challenge for such autonomous SAR systems is the ability to sense, interpret, and properly respond to human social behavior.

One of my research priorities is developing a socially assistive robotic system for children with ASD. Children with ASD are characterized by social impairments, communication difficulties, and repetitive and stereotyped behaviors. Significant anecdotal evidence indicates that some children with ASD respond socially to robots, which could have therapeutic ramifications. We envision a robot that could act as a catalyst for social interaction, both human-robot and human-human, thus aiding ASD users’ human-human socialization. In such a scenario, the robot is not specifically generating social behavior or participating in social interaction, but instead behaves in a way known to provoke human-human interaction.

David Feil-Seifer developed an autonomous robot that recognizes and appropriately responds to a child’s free-form behavior in play contexts, similar to those seen in some more traditional autism spectrum disorder (ASD) therapies.

Enabling a robot to exhibit and understand social behavior with a child is challenging. Children are highly individual and thus technology used for social interaction needs to be robust to be effective. I developed an autonomous robot that recognizes and appropriately responds to a child’s free-form behavior in play contexts, similar to those seen in some more traditional ASD therapies.

To detect and mitigate child distress, I developed a methodology for learning and then applying a data-driven spatiotemporal model of social behavior, based on distance-based features, to automatically differentiate between typical and aversive child-robot interactions. Using a Gaussian mixture model learned over distance-based feature data, the system detected and interpreted social behavior with sufficient accuracy to recognize child distress. The robot can use this to change its own behavior to encourage positive social interaction.
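
The core idea can be sketched as follows (a simplified stand-in for the actual system, using scikit-learn): fit a Gaussian mixture to distance-based features from typical interactions, then flag frames the model finds unlikely. The mixture size and threshold are placeholder assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_typical_model(typical_features, n_components=4):
    """typical_features: (N, D) array of distance-based features from typical play."""
    return GaussianMixture(n_components=n_components).fit(typical_features)

def is_aversive(model, feature_vec, log_lik_threshold=-10.0):
    """Flag a frame whose likelihood under the 'typical' model is too low."""
    return model.score_samples(feature_vec.reshape(1, -1))[0] < log_lik_threshold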

To encourage human-human interaction once human-robot interaction was achieved, I developed a navigation planner that uses the above spatiotemporal model to maintain the robot’s spatial relationship with a child, sustaining interaction while also guiding the child to a particular location in a room. This could be used to encourage a child to move toward another interaction partner (e.g., a parent). The desired spatial interaction behavior is achieved by modifying an established trajectory planner to weigh candidate trajectories based on their conformity to a trained model of the desired behavior.
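
In outline, the trajectory weighting might look like the sketch below, where each candidate trajectory is scored by the trained spatial model’s likelihood over its predicted interaction features; the feature extractor here is an abstract placeholder, not the actual planner interface.

def select_trajectory(candidates, spatial_model, extract_features):
    """Pick the candidate trajectory that best conforms to the trained model.

    candidates: iterable of candidate trajectories from the base planner.
    spatial_model: e.g., a fitted mixture model exposing score_samples().
    extract_features: maps a trajectory to a (T, D) array of spatial features.
    """
    def conformity(traj):
        # Total log-likelihood of the trajectory's features under the model.
        return spatial_model.score_samples(extract_features(traj)).sum()
    return max(candidates, key=conformity)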

I also developed a methodology for robot behavior that provides autonomous feedback for a robot-child imitation and turn-taking game. This was accomplished by incorporating an established therapeutic model of feedback along with a trained model of imitation behavior. This is used as part of an autonomous system that can play Simon Says, recognize when the rules have been violated, and provide appropriate feedback.
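
As a toy sketch of the rule logic only (the therapeutic feedback model and imitation recognizer are abstracted away), a single turn could be evaluated like this; the feedback labels are illustrative.

def simon_says_feedback(prompt_action, simon_said, observed_action):
    """Evaluate one turn; observed_action is the recognized action or None."""
    if simon_said:
        if observed_action == prompt_action:
            return "praise"            # correct imitation on a valid prompt
        return "encourage_retry"       # wrong action or no action
    # "Simon" did not say it: performing the action violates the rule.
    if observed_action is not None:
        return "gentle_correction"
    return "praise"                    # correctly withheld the action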

A growing body of data supports the hypothesis that robots have the potential to aid in addressing people’s needs through non-contact assistance. My research, along with that of many others, has resulted in technical advances for robots providing assistance to people. However, there is a long way to go before these systems can be deployed as therapeutic platforms. Given that the beneficiary populations are growing, and therapeutic needs are increasing far more rapidly than the resources to address them, SAR could provide lasting benefits to people in need.

David Feil-Seifer, a Postdoctoral Fellow in the Computer Science Department at Yale University, focuses his research on socially assistive robotics (SAR), particularly the study of human-robot interaction for children with autism spectrum disorders (ASD). His dissertation work addressed autonomous robot behavior so that socially assistive robots can recognize and respond to a child’s behavior in unstructured play. David received his MS and PhD in Computer Science from the University of Southern California and a BS in Computer Science from the University of Rochester, NY. He recently was hired as Assistant Professor of Computer Science at the University of Nevada, Reno.

CC275: Shape The Future

In January, Circuit Cellar introduced a new section, Tech the Future, which dedicates page 80 of our magazine to the insights of innovators in groundbreaking technologies.

We’ve reached out to a number of graduate students, professors, researchers, engineers, designers, and entrepreneurs, asking them to write short essays on their fields of expertise, with an emphasis on future trends.

Their topics have included high-speed data acquisition, Linux home automation, research into new materials to replace traditional silicon-based CMOS for circuitry design, control system theory for electronic device DIYers, and how open-source hardware will make world economies more democratic and efficient.

Our contributors have been diverse in more than just their topics. They have been talented young researchers and seasoned professionals. Male and female. American, Portuguese, Italian, Indian, and Australian.

Tech the Future essayist Fergus Dixon designed this DNA sequencer, the subject of an article in the May 2013 issue of Circuit Cellar.

The one thing they have in common? They keep a close eye on the ever-changing technological landscape. And their essays have helped our readers focus on what to watch. We compensate authors for the essays we choose to publish, and we are eager to hear your suggestions on subjects for Tech the Future.

If you are an innovator interested in writing an essay for Tech the Future, e-mail me (editor@circuitcellar.com) with the topic you’d like to address and some information about yourself. If you are a reader who wants to hear from someone in particular through Tech the Future or has a suggestion for an essay topic, please contact me.

The work of those we’ve featured so far can be found online at circuitcellar.com/category/tech-the-future. Here are just a few of the innovators you will find there:

Maurizio Di Paolo Emilio, a designer of data acquisition software for physics-related experiments and industrial applications, discussing the future of data acquisition technology.

Saptarshi Das, a nano materials researcher who holds a PhD in Electrical Engineering from Purdue University, focusing on the urgent need for alternatives to silicon-based CMOS. These alternative materials, now the subject of extensive scientific research, will be game changers for the microelectronics and nanoelectronics industries, he says.

Fergus Dixon, an Australian entrepreneur and designer of the popular software program “Simulator for Arduino,” explaining why open-source hardware is a valuable tool in the development of new medical devices. Design opportunities for such devices are countless. Hot technologies developed for 3-D printing and unmanned aerial vehicles (UAVs) have direct medical applications, including 3-D-printed prosthetic ears and nanorobots that utilize UAV technology.

Enjoy these articles and others online. In the meantime, I’ll be checking my e-mail for what you would like to see featured in Tech the Future.