The robot invasion isn’t coming. It’s already here. One would be hard-pressed to find anything in modern “industrialized” society that doesn’t rely on a substantial level of automation during its life cycle—whether in its production, consumption, use, or (most probably) all of the above. Regardless of the definition du jour, “robots” are indeed well on their way to taking over—and not in the terrifying, apocalyptic, “Skynet” kind of way, but in a way that will universally improve the human condition.
Of course, the success of this r/evolution relies on an almost incomprehensible level of compounding innovations and advancements accompanied by a mountain of associated technical complexities and challenges. The good news is many of these challenges have already been addressed—albeit in a piecemeal manner—by focused professionals in their respective fields. The real obstacle to progress, therefore, ultimately lies in the compilation and integration of a variety of technologies and techniques from heterogeneous industries, with the end goal being a collection of cohesive systems that can be easily and intuitively implemented in industry.
Two of the most promising and critical aspects of robotics and automation today are human-machine collaboration and flexible manufacturing. Interestingly (and, perhaps, fortuitously), their problem sets are virtually identical, as the functionality of both systems inherently revolves around constantly changing and wildly unpredictable environments and tasks. These machines, therefore, have to be heavily adaptable to and continuously “aware” of their surroundings in order to maintain not only a high level of performance, but also to consistently perform safely and reliably.
Not unlike humans, machines rely on their ability to collect, analyze, and act on external data, oftentimes in a deterministic fashion—in other words, certain things must happen in a pre-defined amount of time for the system to perform as intended. These data can range from the very slow and simple (e.g., calculating temperature by reading the voltage of a thermocouple once a second) to the extremely fast and complex (e.g., running control loops for eight brushless electric motors 25,000-plus times a second). Needless to say, giving a machine the ability to perceive—be it through sight, sound, and/or touch—and act on its surroundings in “real time” is no easy task.
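To make the slow end of that spectrum concrete, the sketch below reads a thermocouple once per second using an absolute-time sleep so the sampling period doesn't drift. It is a minimal illustration only: the read_thermocouple_voltage() helper and the rough type-K conversion factor are assumptions standing in for a real ADC driver, not production code.

/*
 * Minimal sketch of a deterministic 1 Hz sampling loop on Linux.
 * An absolute-time sleep keeps the period fixed even if the work
 * inside the loop takes a variable amount of time.
 * read_thermocouple_voltage() is a hypothetical stand-in for a
 * real ADC/driver call.
 */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000000L            /* 1 second */

static double read_thermocouple_voltage(void)
{
    return 0.00125;                      /* placeholder reading, in volts */
}

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        double v = read_thermocouple_voltage();
        double temp_c = v / 41e-6;       /* rough type-K figure: ~41 uV per degree C */
        printf("temperature: %.1f C\n", temp_c);

        /* Advance the deadline by exactly one period and sleep until it. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec  += next.tv_nsec / 1000000000L;
            next.tv_nsec %= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}

The fast end of the spectrum (multi-kilohertz motor control loops) follows the same pattern, but typically runs on a real-time core or in dedicated hardware rather than in a user-space loop like this one.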
Computer vision (sight and perception), speech recognition (sound and language), and precision motion control (touch and motor skills) are things most people take for granted, as they are collectively fundamental to survival. Machines, however, are not "born" with these abilities, nor have they evolved them. Pile on additional layers of complexity, like communication and the ability to acquire knowledge and learn new tasks, and it becomes menacingly apparent how substantial the challenge of creating intelligent and connected automated systems really is.
While the laundry list of requirements might seem nearly impossible to address, the tools used for integrating these exceedingly complex systems have fortunately undergone their own period of hypergrowth in the last several years. In much the same way that developers, researchers, engineers, and entrepreneurs have picked off industry- and application-specific problems related to the aforementioned technical hurdles, so too have the people behind the hardware and software that make it possible for these independently developed, otherwise standalone solutions to be combined and interwoven into truly world-changing innovations.
For developers, only in the last few years has it become practical to leverage the combination of embedded technologies like the power-efficient, developer-friendly mobile application processor with the flexibility and raw “horsepower” of programmable logic (i.e., field-programmable gate arrays, which have historically been reserved for the aerospace/defense and telecommunication industries) at scales never previously imagined. And with rapidly growing developer communities, the platforms built around these technologies are directly facilitating the advancement of automation, and doing it all under a single silicon “roof.” There’s little doubt that increasing access to these new tools will usher in a more nimble, intelligent, safe, and affordable wave of robotics.
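As a loose illustration of what that single silicon "roof" can look like in practice, the sketch below shows one common pattern on ARM-plus-FPGA devices: user-space code on the application processor talking to a block in the programmable logic through memory-mapped registers. The base address, register offsets, and /dev/mem access here are assumptions chosen for illustration; a real design takes these from its own hardware configuration and would normally use a kernel driver and interrupts rather than a busy-wait.

/*
 * Illustrative sketch: handing work from the application processor to a
 * block in the programmable logic on an ARM+FPGA SoC. The fabric block is
 * assumed to be exposed as memory-mapped registers reachable from Linux
 * user space via /dev/mem. All addresses and offsets are hypothetical.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define ACCEL_BASE   0x43C00000u   /* hypothetical AXI base address */
#define REG_CONTROL  0x00          /* bit 0: start  (assumed layout) */
#define REG_STATUS   0x04          /* bit 0: done   (assumed layout) */
#define REG_INPUT    0x08
#define REG_RESULT   0x0C

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, ACCEL_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[REG_INPUT / 4]   = 42;    /* operand for the fabric block */
    regs[REG_CONTROL / 4] = 1;     /* kick off the hardware */

    while ((regs[REG_STATUS / 4] & 1) == 0)
        ;                          /* busy-wait until the logic signals done */

    printf("result from programmable logic: %u\n", regs[REG_RESULT / 4]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}

The appeal of this arrangement is the division of labor: the processor handles networking, user interaction, and orchestration, while the programmable logic absorbs the deterministic, high-rate work described earlier.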
Looking forward, automation will undoubtedly play an increasingly central role in day-to-day life. As a result, careful consideration must be given to facilitating human-machine (and machine-machine) collaboration in order to accelerate innovation and overcome the technical and societal impacts bred from the disruption of the status quo. The pieces are already there; now it's time to assemble them.
This article appears in Circuit Cellar 320.
Ryan Cousins is cofounder and CEO of krtkl, Inc. ("critical"), a San Francisco-based embedded systems company. The krtkl team created snickerdoodle—an affordable and highly reconfigurable platform for developing mechatronic, audio/video, computer vision, networking, and wireless communication systems. Ryan has a BS in mechanical engineering from UCLA. He has experience in R&D, project management, and business development in the medical and embedded systems industries. Learn more at krtkl.com or snickerdoodle.io.