Kit for R-Car V3M SoC Speeds Development

Renesas Electronics has announced the R-Car V3M Starter Kit to simplify and speed up the development of New Car Assessment Program (NCAP) front camera applications, surround-view systems, and LiDAR systems. The new starter kit is based on the R-Car V3M image recognition system-on-chip (SoC), delivering a combination of low power consumption and high performance for the growing NCAP front camera market. By combining the R-Car V3M Starter Kit with supporting software and tools, system developers can easily develop front camera applications, reducing development effort and speeding time-to-market.

Renesas also announced an enhancement to the R-Car V3M by integrating a new, highly power-efficient hardware accelerator for high-performance convolutional neural networks (CNNs), which enables features such as road detection or object classification that are increasingly used in automotive applications. The R-Car V3M’s innovative hardware accelerator enables CNNs to execute at ultra-low power consumption levels that cannot be reached when CNNs are running on CPUs or GPUs.

The new R-Car V3M Starter Kit, the R-Car V3M SoC, and supporting software and tools, including Renesas’ open-source e² studio integrated development environment (IDE), are part of the company’s open, innovative, and trusted Renesas autonomy Platform for ADAS and automated driving, which delivers total end-to-end solutions scaling from cloud to sensing and vehicle control.

The new starter kit is a ready-to-use kit. In addition to the required interfaces and tools, the kit provides essential components for ADAS and automated driving development, including 2GB RAM, 4GB eMMC (embedded multimedia card) onboard memory, Ethernet, display outputs, and interfaces for debugging. The integrated 440-pin expansion port gives system manufacturers full freedom to develop application-specific expansion boards for a wide range of computing applications, from a simple advanced computer vision development environment to prototyping of multi-camera systems for applications such as surround view. This board flexibility reduces the time needed for hardware development while maintaining a high degree of software portability and reusability.

The R-Car V3M Starter Kit is supported by a Linux Board Support Package (BSP), which is available through elinux.org. Further commercial operating systems will be made available from next year onwards. Codeplay will enable OpenCL and SYCL on the starter kit in Q1 2018. Further tools, sample code and application notes for computer vision and image processing will be provided throughout 2018. Renesas enables several tools on the R-Car V3M Starter Kit including Renesas e² studio toolchain and tools for debugging, which ease the development burden and enable faster time-to-market.

In addition to the R-Car V3M Starter Kit, Renesas has enabled ultra-low-power CNN processing for image recognition and image classification on the R-Car V3M SoC. The R-Car V3M allows the implementation of high-performance, low-power CNNs in NCAP cameras that cannot be realized with traditional power-hungry CPU or GPU architectures. Renesas complements the IMP-X5, a computer vision subsystem composed of an image processor and a programmable CV engine, with a new, innovative CNN hardware accelerator developed in house that allows the implementation of high-performance CNNs at ultra-low power. With this new IP, Renesas lets system developers choose between the IMP-X5 and the new hardware accelerator to deploy CNNs. This heterogeneous approach allows system developers to choose the most efficient architecture, depending on the required programming flexibility, performance, and power consumption.

The Renesas R-Car V3M is available now. The R-Car V3M Starter Kit with a Linux BSP will be available in Q1 2018 initially in limited quantities. A complete offering with an extended software solution is scheduled for Q3 2018.

Renesas Electronics | www.renesas.com

Platform Enables Automated Vehicle Application Development

NXP Semiconductors has announced the availability of the NXP Automated Drive Kit, a software enabled platform for the development and testing of automated vehicle applications. The kit enables carmakers and suppliers to develop, test and deploy autonomous algorithms and applications quickly on an open and flexible platform with an expanding ecosystem of partners.

Taking on automated drive applications requires easy access to multiple hardware and software options. NXP has opened the door to hardware and software partners to foster a flexible development platform that meets the needs of a diverse set of developers. The NXP Automated Drive Kit provides a baseline for level 3 development and will expand to additional autonomy levels as the ecosystem’s performance scales.

The first release of the Automated Drive Kit will include a front vision system based on NXP’s S32V234 processor, allowing customers to deploy their algorithms of choice. The kit also includes front camera application software APIs and object detection algorithms provided by Neusoft, a leading IT solutions and services provider in China and a strategic advanced driver assistance system (ADAS) and AD partner to NXP. Additionally, the kit includes sophisticated radar options and GPS positioning technology. Customers can choose from various LiDAR options and can add LiDAR Object Processing (LOP) modular software from AutonomouStuff, which provides ground segmentation and object tracking.

The NXP Automated Drive Kit is now available for ordering from AutonomouStuff as a standalone package that can be deployed by the customer in their own vehicle or as an integrated package with an AutonomouStuff Automated Research Development Vehicle.

NXP Semiconductors | www.nxp.com

Circuit Cellar Flash Back – Motion Triggered Video Camera Multiplexer

The new year 2018 is almost upon us. It’s a special year for us because it marks Circuit Cellar’s 30th anniversary. In tribute to that, we thought we’d share an article from the very first issue.



Motion Triggered Video Camera Multiplexer
by Steve Ciarcia

One of the most successful Circuit Cellar projects ever was the ImageWise video digitizing and display system (BYTE, May-August ‘87). It seems to be finding its way into a lot of industrial applications. I suppose I should feel flattered that a whole segment of American industry might someday depend on a Circuit Cellar project, but I can’t let that hinder me from completing the project that was the original incentive for ImageWise. Let me explain.

How it all started

When I’m not in the Circuit Cellar I’m across town at INK or in an office that I use to meet a prospective consulting client so that he doesn’t think that I only lead a subterranean existence. Rather than discuss the work done for other clients to make my point, however, I usually demonstrate my engineering expertise more subtly by just leaving some of the electronic “toys” I’ve presented lying around. The Fraggle Rock lunchbox with the dual-disk SB180 in it gets them every time! ImageWise was initially conceived to be the “pièce de résistance” of these hardware toys. The fact that it may have had some commercial potential was secondary. I just wanted to see the expressions on the faces of usually stern businessmen when I explained that the monitor on the corner of my desk wasn’t a closed-circuit picture of the parking lot outside my office building. It was a live video data transmission from the driveway at my house in an adjacent town.

Implementing this video system took a lot of work and it seems like I’ve opened Pandora’s box in the process. It would have been a simple matter to just aim a camera at my house and transmit a picture to the monitor on the desk, but the Circuit Cellar creed is that hardware should actually work, not just impress business executives. ImageWise is a standalone serial video digitizer (there is a companion serial-input video display unit as well) which is not computer dependent. Attached to a standard video camera, it takes a “video snapshot” at timed intervals or when manually triggered. The 256×244-pixel (64-level grayscale) image is digitized and stored in a 62K-byte block of memory. It is then serially transmitted either as an uncompressed or run-length-encoded compressed file (this will generally reduce the 62K bytes to about 40K bytes per picture, depending upon content).

An ImageWise digitizer/transmitter normally communicates with its companion receiver/display at 28.8K bits per second. Digitized pictures therefore can be taken and displayed about every 14 seconds. While this might seem like a long time, it is quite adequate for surveillance activities and approximates the picture taking rate of bank security cameras.

“Real-Time” is relative

When we have to deal with remote rather than direct communication, “freeze-frame” imaging systems such as ImageWise can lose most of their “real time” effectiveness as continuous-activity monitors due to slow transmission mediums. Using a 9600-bps modem, a compressed image will take about 40 seconds to be displayed. At 1200 bps it will take over 5 minutes!

Of course, using such narrow logic could lead one to dismiss freeze-frame video imaging and opt for hardwired direct video, whatever the distance. However, unless you own a cable television or telephone company you might have a lot of trouble stringing wires across town. All humor aside, the only reason for using continuous monitoring systems at all is to capture and record asynchronous “events.” In the case of a surveillance system, the “event” is when someone enters the area under surveillance. For the rest of the time, you might as well be taking nature photos because, according to the definition of an event, nothing else is important. Most continuous surveillance video systems are, by necessity, real-time monitors as well. Because they have no way to ascertain that an event has occurred they simply record everything and ultimately capture the “event” by default. So what if there is 6 hours of video tape or 200 gigabytes of useless video data transmission around a 4-second “event.”

If we know exactly when an event occurs and take a freeze-frame picture exactly at that time, there is no difference between its record of the event and that of a real-time recorder or snapshot camera at the same instant. The only difference is that a freeze-frame recorder needs some local intelligence to ascertain that an event is occurring so that it knows when to snap a picture. Sounds simple, right?

To put real-timing into my driveway monitor, I combined a video camera and an infrared motion detector. When someone (or something) enters the trigger zone of the motion detector it will also be within the field of the video camera. If motion is detected, the controller triggers the ImageWise to capture that video frame at that instant and transmit the picture via modem immediately. The result is, in fact, real-time video, albeit delayed by 40 seconds. Using a 9600-bps modem, you will see what is going on 40 seconds after it has occurred. (Of course, you’ll see parts of the picture sooner as it is painting on the screen.) Subsequent motion will trigger additional pictures until eventually the system senses nothing and goes back to timed update. With such a system you’ll also gain new knowledge. You’ll know that it was the UPS truck that drove over the hedge because you were watching, but you aren’t quite sure who bagged the flower bed.

Of course knowing a little bit is sometimes worse than nothing at all. While a single video camera and motion detector might cover the average driveway, my driveway has multiple entrances and a variety of parking areas. When I first installed a single camera to cover the main entrance all it did was create frustration. I would see a car enter and park. If the person exited the vehicle they were soon out of view of the camera and I’d be thinking, “OK, what are they doing?” Rather than laying booby traps for some poor guy delivering newspapers, I decided to expand the system to cover additional territory. Ultimately, I installed three cameras and four motion detectors which could cover all important areas and provide enough video resolution to specifically recognize individuals. (Since I have four telephone lines into my house and only one is being used with ImageWise, I suppose the next step is to use one of them as a live intercom to speak to these visitors. A third line already goes to the home control system so I could entertain less-welcome visitors with a few special effects).

Motion Triggered Video MUX

Enough of how I got into this mess! What this is all leading to is the design of my motion triggered video camera multiplexing (MTVCM) system. I am presenting it because it was fun to do, it solved a particular personal problem, and if I don’t document it somehow, I’ll never remember what I’ve got wired the next time I work on it.

The MTVCM is a 3-board microcomputer-based 4-channel video multiplexer with optoisolated trigger control inputs (see Figure 1). Unlike the high-tech, totally solid-state audio/video multiplexer (AVMUX) which I presented a couple of years ago (BYTE, Feb ‘86), the MTVCM is designed to be simple, lightning-proof, reliable, and above all flexible.


The MTVCM is designed for relatively harsh environments. To minimize wire lengths from cameras and sensors, the MTVCM is mounted in an outside garage where its anticipated operating temperature range is -20°C to +85°C. The MTVCM operates as a standalone unit running a preprogrammed control program or can be remotely commanded to operate in a specific manner. It is connected to the ImageWise and additional electronics in the house via a twisted-pair RS-232 line, one TTL “camera ready” line, and a video output cable. At the heart of the MTVCM is an industrial temperature version of the Micromint BCC52 8052-based controller which has an onboard full floating-point 8K BASIC, EPROM programmer, 48K bytes of memory, 24 bits of parallel I/O and 2 serial ports (for more information on the BCC52 contact Micromint directly, see the Articles section of the Circuit Cellar BBS, see my article “Build the BASIC-52 Computer,” BYTE, Aug ‘85, or send $2 to CIRCUIT CELLAR INK for a reprint of the original BCC52 article).

Because the BCC52 is well documented, I will not discuss it here.
The MTVCM is nothing more than a specific application of a BCC52 process controller with a little custom I/O. In the MTVCM the custom I/O consists of a 4-channel relay multiplexer board and a 4-channel optoisolated input board (Micromint now manufactures a BCC40R 8-channel relay output board and a BCC40D direct decoding 8-channel optoisolated input/output board. Their design is different and should not be confused with my MTVCM custom I/O boards). Each of my custom circuits is mounted on a BCC55 decoded and buffered BCC-bus prototyping board.

Figure 2 details the basic circuitry of the BCC55 BCC-bus prototyping board. The 44-pin BCC-bus is a relatively straightforward connection system utilizing a low-order multiplexed address/data configuration directly compatible with many standard microprocessors such as the Z8, 8085, and the 8052. On the protoboard all the pertinent busses are fully latched and buffered. The full 16-bit address is presented on J19 and J20 while the 8-bit buffered data bus is available at J21. J22 presents eight decoded I/O strobes within an address range selected via JP2.

The Multiplexer Board

Figure 3 is the schematic of the relay multiplexer added to the prototyping board. The relay circuit is specifically addressed at C900H and any data sent to that location via an XBY command [typically XBY(0C900H)=X] will be latched into the 74LS273. Since it can be destructive to attach two video outputs together, the four relays are not directly controlled by the latch outputs. Instead, bits D0 and D1 are used to address a 74LS139 one-of-four decoder chip. The decoder is enabled by a high-level output on bit D3. Therefore, a 1000 (binary) code selects relay 4 while a 1011 code selects relay 1. An output of 0000 shuts off the relay mux (eliminating the decoder and going directly to the relay drivers allows parallel control of the four relays).
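
For readers following along in BASIC-52, the write side boils down to a couple of XBY statements. This is only a sketch built from the two codes given above (which camera hangs on which relay is whatever your wiring makes it), not a listing from the original article:

  100 REM select video sources on the relay mux at C900H (sketch only)
  110 XBY(0C900H)=0BH : REM binary 1011: decoder enabled, relay 1 (camera 1) to the output
  120 XBY(0C900H)=08H : REM binary 1000: decoder enabled, relay 4 to the output
  130 XBY(0C900H)=0 : REM binary 0000: decoder disabled, all relays off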


All the normally-open relay contacts are connected together as a common output. Since only a single relay is ever on at one time, that video signal will be routed to the output. If the computer fails or there is a power interrupt, the default output state of a 74LS273 is normally high. Therefore, the highest priority camera should be attached to that input. If the system gets deep-sixed, the output will default to that camera and will still be of some use (I could also have used one of the normally-closed contacts instead but chose not to).

Fools and Mother Nature

I’m sure you’re curious so I will anticipate your question and answer it at the same time. With all the high-tech stuff that I continually present, how come I used mechanical relays? The answer is lightning! Anyone familiar with my writings will remember that I live in a hazardous environment when it comes to Mother Nature. Every year I get blasted and it’s always the high-tech stuff that gets blitzed.

Because the MTVCM has to work continuously as well as be reliable I had to take measures to protect it from externally-induced calamities. This meant that all the inputs and outputs had to be isolated. In the case of the video mux, the only low-cost totally isolated switches are mechanical relays. CMOS multiplexer chips like the ones I’ve used in other projects are not isolated and would be too susceptible. (Just think of the MTVCM as a computer with three 150-foot lightning collectors running to the cameras.) Relays still serve a useful purpose whatever the level of integrated circuit technology. They also work.

Because the infrared motion sensors are connected to the AC power line and their outputs are common with it, these too had to be isolated to protect the MTVCM. Figure 4 details the circuit of the 4-channel optoisolator input board which connects to the motion detectors.

The Optoisolator Board

The opto board is addressed at CA00H. Reading location CA00H [typically X=XBY(0CA00H)] will read the 8 inputs of the 74LS244. Bits 0-3 are connected to the four optoisolators and bits 4-7 are connected to a 4-pole DIP switch which is used for configuration and setup. Between the optoisolators and the LS244 are four 74LS86 exclusive-OR gates. They function as selectable inverters. Depending upon the inputs to the optoisolators (normally high or low) and the settings of DIP SW2 you can select what level outputs you want for your program (guys like me who never got the hang of using PNP transistors have to design hardware so that whatever programming we are forced to do can at least be done in positive logic).
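
The read side is equally terse in BASIC-52. A minimal sketch, assuming the bit assignments described above (the variable names are mine):

  200 X=XBY(0CA00H) : REM read all eight buffered inputs from the 74LS244
  210 M=X.AND.0FH : REM bits 0-3: the four motion-detector inputs (polarity set by SW2)
  220 S=INT(X/16) : REM bits 4-7: the configuration DIP switch, as a value from 0 to 15
  230 IF M<>0 THEN PRINT "MOTION BITS = ",M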

The optoisolators are common units sold by Opto 22, Gordos, and other manufacturers. They are generically designated as either IAC5 or IDC5 modules depending upon whether the input voltage is 115 VAC or 5-48 VDC. Since the motion detectors I used were designed to control AC flood lights, I used the IAC5 units connected across the lights.

Now that we have the hardware I suppose we have to have some software. For all practical purposes, however, virtually none is required. Since the MTVCM is designed with hardcoded parallel port addressing, you only need about a three-line program to read the inputs, make a decision and select a video mux channel; you know, something like READ CA00H, juggle it, and OUT C900H. I love simple software.
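
Rendered in BASIC-52, that three-liner might look something like the sketch below. The two relay codes are the ones given earlier; how you “juggle” the motion bits into a camera choice depends entirely on your wiring:

  10 X=XBY(0CA00H).AND.0FH : REM read the four motion bits
  20 IF X=0 THEN C=0BH ELSE C=08H : REM no motion: camera 1 code; otherwise pick another relay code
  30 XBY(0C900H)=C : GOTO 10 : REM latch the selection and loop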

Of course, I got a little more carried away when I actually wrote my camera control program. I use a lot of REM statements to figure out what I did. Since it would take up too much room here, I’ve posted the MTVCM mux control software on the Circuit Cellar BBS (203-871-1988) where you can download it if you want to learn more. Basically, it just sits there looking at camera #1. If it receives a motion input from one of the sensors, it switches to the appropriate camera and generates a “camera ready” output (a TTL output which is optoisolated at the other end) to the ImageWise in the house. It stays on that camera if it senses additional motion or switches to other cameras if it senses motion in their surveillance area. Eventually, it times out and goes back to camera #1.
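
For the curious, here is a rough BASIC-52 skeleton of that behavior. It is only a sketch, not the program posted on the BBS: the mapping of motion bits to relay codes, the use of latch bit D4 as the “camera ready” line, and the 30-second hold time are all assumptions made for illustration.

  100 REM MTVCM control-loop skeleton (illustrative only)
  110 CLOCK1 : REM start the software real-time clock so TIME counts seconds
  120 DIM C(4) : C(1)=0BH : C(2)=0AH : C(3)=09H : C(4)=08H : REM assumed relay codes for cameras 1-4
  130 XBY(0C900H)=C(1) : REM idle on camera 1
  140 X=XBY(0CA00H).AND.0FH : REM poll the four motion inputs
  150 IF X=0 THEN GOTO 140
  160 K=1 : M=1
  170 FOR N=1 TO 4
  180 IF (X.AND.M)<>0 THEN K=N
  190 M=M+M : NEXT N
  200 XBY(0C900H)=C(K).OR.10H : REM select camera K, raise the assumed "camera ready" bit D4
  210 TIME=0
  220 IF TIME<30 THEN GOTO 220 : REM hold about 30 seconds (re-triggering omitted here)
  230 XBY(0C900H)=C(1) : GOTO 140 : REM drop the ready line and fall back to camera 1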

Basically, that’s all there is to the MTVCM. If you are an engineer you can think of it as a lightning-proof electrically-isolated process-control system. If not, just put it in your entertainment room and use it as a real neat camera controller. Now I’ve opened a real bag of worms. Remotely controlling the ImageWise digitizer/transmitter from my office through the house to the MTVCM is turning into a bigger task than I originally conceived. Getting the proper picture and tracking someone in the driveway is only part of the task.


I can already envision a rack of computer equipment in the house which has to synchronize this data traffic. My biggest worry is not how much coordination or equipment it will involve, but how I can design it so that I can do it all with a three-line BASIC program! Be assured that I’ll tell you how as the saga unfolds.

Article first appeared in Issue 1 of Circuit Cellar magazine – January/February 1988

Graphene Enables Broad Spectrum Sensor Development

Team successfully marries a CMOS IC with graphene, resulting in a camera able to image visible and infrared light simultaneously.

By Wisse Hettinga

Researchers at ICFO—the Institute of Photonic Sciences, located in Catalonia, Spain—have developed a broad-spectrum sensor by depositing graphene with colloidal quantum dots onto a standard, off-the-shelf read-out integrated circuit. It is the first time scientists and engineers have been able to integrate a CMOS circuit with graphene to create a camera capable of imaging visible and infrared light at the same time. Circuit Cellar visited ICFO and talked with Stijn Goossens, one of the lead researchers on the study.

Stijn Goossens is a Research Engineer at ICFO, the Institute of Photonic Sciences.

HETTINGA: What is ICFO?

GOOSSENS: ICFO is a research institute devoted to the science and technologies of light. We carry out frontier research in fundamental science in optics and photonics as well as applied research with the objective of developing products that can be brought to market. The institute is based in Castelldefels, in the metropolitan area of Barcelona (Catalonia region of Spain).

HETTINGA: Over the last 3 to 4 years, you did research on how to combine graphene and CMOS. What is the outcome?

GOOSSENS: We’ve been able to create a sensor that is capable of imaging both visible and infrared light at the same time. A sensor like this can be very useful for many applications—automotive solutions and food inspection, to name a few. Moreover, being able to image infrared light can enable night vision features in a smartphone.

HETTINGA: For your research, you are using a standard off-the-shelf CMOS read-out circuit, correct?

GOOSSENS: Indeed. We’re using a standard CMOS circuit. These circuits have all the electronics needed to read the charges induced in the graphene, the row and column selects, and the drivers to make the signal available for further processing by a computer or smartphone. For us, it’s a very easy platform to work on as a starting point. We can deposit the graphene and quantum dot layer on top of the CMOS sensor (Photo 1).

PHOTO 1
The CMOS image sensor serves as the base for the graphene layer.

HETTINGA: What is the shortcoming of normal sensors that can be overcome by using graphene?

GOOSSENS: Normal CMOS imaging sensors only work with visible light. Our solution can image visible and infrared light. We use the CMOS circuit for reading the signal from the graphene and quantum dot sensors. It acts more like an ‘infrastructure’ solution. Graphene is a 2D material with very special properties: it is strong, flexible, almost 100 percent transparent and is a very good conductor.

HETTINGA: How does the graphene sensor work?

GOOSSENS: There are different layers (Figure 1). There’s a layer of colloidal quantum dots. A quantum dot is a nano-sized semiconductor. Due to its small size, its optical and electronic properties differ from those of larger particles. The quantum dots turn the photons they receive into an electric charge. This electric charge is then transferred to the graphene layer, which acts like a highly sensitive charge sensor. With the CMOS circuit, we then read the change in resistance of the graphene and multiplex the signals from the different pixels onto one output line.

FIGURE 1
The graphene sensor is comprised of a layer of colloidal quantum dots, a graphene layer and a CMOS circuitry layer.

HETTINGA: What hurdles did you have to overcome in the development?

GOOSSENS: You always encounter difficulties during the course of a research study and sometimes you’re close to giving up. However, we knew it would work. And with the right team, the right technologies and the lab at ICFO we have shown it is indeed possible. The biggest problem was the mismatch we faced between the graphene layer and the CMOS layer. When there’s a mismatch, that means there’s a lack of an efficient resistance read-out of the graphene—but we were able to solve that problem.

HETTINGA: What is the next step in the research?

GOOSSENS: Together with the European Graphene Flagship project, we are developing a production machine that will allow us to start a more automated production process for these graphene sensors.

HETTINGA: Where will we see graphene-based cameras?

GOOSSENS: One of the most interesting applications will be related to self-driving cars. A self-driving car needs a clear vision to function efficiently. If you want to be able to drive a car through a foggy night or under extreme weather conditions, you’ll definitely need an infrared camera to see what’s ahead of you. Today’s infrared cameras are expensive. With our newly-developed image sensor, you will have a very effective, low-cost solution. Another application will be in the food inspection area. When fruit ripens, the infrared light absorption changes. With our camera, you can measure this change in absorption, which will allow you to identify which fruits to buy in the supermarket. We expect this technology to be integrated in smartphone cameras in the near future.

ICFO | www.icfo.eu

This article first appeared in Issue 326 of Circuit Cellar magazine – September 2017

New High-Performance VC Z Series Cameras

Vision Components recently announced the availability of its new intelligent camera series, VC Z. The embedded systems offer real-time image processing suitable for demanding high-speed and line scan applications. All models are equipped with a Xilinx Zynq module, which combines a dual-core ARM Cortex-A9 running at 866 MHz with an integrated FPGA.

The new camera is based on the board camera series VCSBC nano Z. With a footprint of 40 × 65 mm, these compact systems are especially easy to integrate into machines and plants. They are optionally available with one or two remote sensor heads and are thus suitable for stereo applications. You can choose between two enclosed camera types: the VC nano Z, which has housing dimensions of 80 × 45 × 20 mm, and the VC pro Z, which measures 90 × 58 × 36 mm and can be fitted with a lens and integrated LED illumination. The new VC Linux operating system ensures optimal interaction between hardware and software.

Source: Vision Components

Member Profile: Richard Lord

Richard Lord is an engineer, author, and photographer whose article series on an innovative digital camera controller project will begin in the October issue of Circuit Cellar.  Lord’s Photo-Pal design is an electronic flash-trigger camera controller built around a Microchip Technology PIC16F873. It features four modes of operation: triggered shutter, triggered flash, multiple flash, and time lapse. Now you too can take sound-triggered photos.

The Photo-Pal enables Richard to take amazing photos like this and capture high-speed action.

  • Member Name: Richard H. Lord
  • Location: Durham, NH, United States
  • Education: BS Electrical Engineering, 1969; MS Biomedical Engineering, 1971
  • Occupation: Retired electronics hardware design engineer
  • Member Status: Richard said he has subscribed to Circuit Cellar for at least 14 years, maybe longer.
  • Technical Interests: Richard’s interests include photography, model railroading, and microcontroller projects.
  • Most Recent Embedded Tech-Related Purchase: Richard’s most recent purchase was a Microchip Technology dsPIC30F4013 digital signal controller.
  • Current Project: Richard is working on a Microchip PIC16F886-based multipurpose front panel interface controller.
  • Thoughts on the Future of Embedded Technology: “With the ready availability of prepackaged 32-bit processor modules, it’s easy to forget there are many applications where 8-bit controllers are more appropriate,” Richard said. He continued by saying he gets a lot of enjoyment from the challenge of working within the capabilities and constraints of the smaller microcontrollers.