New Raspberry Pi Model B+

The Raspberry Pi Foundation announced what it calls “an evolution” of the Raspberry Pi SBC. Compared with the previous model, the new Raspberry Pi Model B+ has more GPIO pins and more USB ports. In addition, it uses a microSD memory card and consumes less power.

The GPIO header is now 40 pins, with the same pinout for the first 26 pins as the Model B. The B+ also has four USB 2.0 ports (compared to two on the Model B) and better hotplug and overcurrent behavior. In place of the old friction-fit SD card socket is a better push-push micro SD version.

The new board also lowers power consumption. Replacing linear regulators with switching regulators reduces power requirements by 0.5 W to 1 W, and the audio circuit incorporates a dedicated low-noise power supply, enabling better audio applications.

The board layout is also better organized. The USB connectors are aligned with the board edge, composite video has moved to a 3.5-mm jack, the corners are rounded, and there are four squarely placed mounting holes.

The Raspberry Pi Model B+ uses the same BCM2835 application processor as the Model B. It runs the same software and still has 512 MB of RAM.

If you want to adapt a current project to the new platform, be sure to study the new GPIO pins and mechanical specs. To ensure continuity of supply for industrial customers, the Model B will be kept in production for as long as there’s demand for it.

At $35, the new Model B+ is the same price as the older Model B and is already available from Farnell/element14/Newark and RS/Allied Components.

[Source: www.raspberrypi.org]

NESIT

NESIT aims to create, educate, and foster learning across a variety of technological and other disciplines. The group accomplishes its work through volunteer collaboration.

Location 290 Pratt St., Meriden CT 06450
Members 30
Website nesit.org

Read what Vice President Will Genovese has to say about NESIT.
Tell us about your meeting space!

NESIT meets in a 4,000-square-foot office in the Meriden Enterprise Center, a large office and manufacturing building that is home to over 60 businesses.

What tools do you have in your space? 

Soldering stations, an oscilloscope, a 3-D printer, a woodshop, a CNC machine, and a data center.

Are there any tools your group really wants or needs?

A laser cutter would be a nice addition to our arsenal.

What sort of embedded tech does your Hackerspace work with?

We work with PIC, Arduino, Raspberry Pi, and many more. In fact, one of our recent projects was a DIY PIC programmer.

Can you tell us about some of your group’s recent tech projects?

One of the group’s first tech projects was the “MAME,” a full-size arcade gaming cabinet. The project was going well until there was a break-in at the location and they lost some equipment; the MAME was put on the back burner. After they moved to their new location and gained a new member, an art teacher named John, the project garnered interest again. John came up with the design for the cabinet. Afterward it was painted, a coin mechanism was added, speakers were hooked up, and the software was installed and configured. It was finally finished.

Click here if you want to check it out.

What’s the craziest project you’ve completed?

At the moment we have not yet completed projects I would categorize as “crazy.”

Read more about NESIT on their website. 

Show us your hackerspace! Tell us about your group! Where does your group design, hack, create, program, debug, and innovate? Do you work in a 20′ × 20′ space in an old warehouse? Do you share a small space in a university lab? Do you meet at a local coffee shop or bar? What sort of electronics projects do you work on? Submit your hackerspace and we might feature you on our website!

Linux System Configuration (Part 1)

In Circuit Cellar’s June issue, Bob Japenga, in his Embedded in Thin Slices column, launches a series of articles on Linux system configuration. Part 1 of the series focuses on configuring the Linux kernel. “Linux kernels have hundreds of parameters you can configure for your specific application,” he says.

Part 1 is meant to help designers of embedded systems plan ahead. “Many of the options I discuss cost little in terms of memory and real-time usage,” Japenga says in Part 1. “This article will examine the kinds of features that can be configured to help you think about these things during your system design. At a minimum, it is important for you to know what features you have configured if you are using an off-the-shelf Linux kernel or a Linux kernel from a reference design. Of course, as always, I’ll examine this only in thin slices.”

In the following excerpt from Part 1, Japenga explains why it’s important to be able to configure the kernel. (You can read the full article in the June issue, available online for single-issue purchase or membership download.)

Why Configure the Kernel?
Certainly if you are designing a board from scratch you will need to know how to configure and build the Linux kernel. However, most of us don’t build a system from scratch. If we are building our own board, we still use some sort of reference design provided by the microprocessor manufacturer. My company thinks these are awesome. The reference designs usually come with a prebuilt kernel and file system.

Even if you use a reference design, you almost always change something. You use different memory chips, physical layers (PHY), or real-time clocks (RTCs). In those cases, you need to configure the kernel to add support for these hardware devices. If you are fortunate enough to use the same hardware, the reference design’s kernel may have unnecessary features and you are trying to reduce the memory footprint (which is needed not just because of your on-board memory but also because of the over-the-air costs of updating, as I mentioned in the introduction). Or, the reference design’s kernel may not have all of the software features you want.

For example, imagine you are using an off-the-shelf Linux board (e.g., a Raspberry Pi or BeagleBoard.org’s BeagleBone). It comes with everything you need, right? Not necessarily. As with the reference design, it may use too many resources and you want to trim it, or it may not have some features you want. So, whether you are using a reference design or an off-the-shelf single-board computer (SBC), you need to be able to configure the kernel.

Linux Kernel Configuration
Many things about the Linux kernel can be tweaked in real time. (This is the subject of a future article series.) However, some options (e.g., handling Sleep mode and support for new hardware) require a separate compilation and kernel build. The Linux kernel is written in the C programming language, which supports code that can be conditionally compiled into the software through what is called a preprocessor #define.

A #define is associated with each configurable feature. Configuring the kernel involves selecting the features you want with the associated #define, recompiling, and rebuilding the kernel.

Okay, I said I wasn’t going to tell you how to configure the Linux kernel, but here is a thin slice: One file contains all the #defines. Certainly, one could edit that file. But the classic way is to invoke menuconfig. Generally you would use the make ARCH=arm menuconfig command to identify the specific architecture.

There are other ways to configure the kernel—such as xconfig (Qt-based), gconfig (GTK+-based), and nconfig (ncurses-based)—that are graphical and purport to be a little more user-friendly. We have not found anything unfriendly about using the classic method. In fact, since it is terminal-based, it works well when we remotely log in to the device.

Photo 1—This opening screen includes well-grouped options for easy menu navigation.

Photo 1 shows the opening screen for one of our configurations. The options are reasonably well grouped to enable you to navigate the menus. Most importantly, the mutual dependencies of the #defines are built into the tool. Thus if you choose a feature that requires another to be enabled, that feature will also automatically be selected.

In addition to the out-of-the-box version, you can easily tailor all the configuration tools if you are adding your own drivers or drivers you obtain from a chip supplier. This means you can create your own unique menus and help system. It is so simple that I will leave it to you to find out how to do this. The structure is defined as Kconfig, for kernel configuration.

HackRVA: They provide the tools, you provide the enthusiasm

HackRVA is a Richmond-based makerspace. They like to take things apart, put them back together, figure out how they work, and create new things. Their mission? To learn and make stuff, sharing tools and knowledge in technology, including Arduino, MakerBot, Linux, and the Open Source movement.

Aaron Nipper will tell us a little more.

Location 1600 Roseneath Road, Suite E, Richmond, VA 23230
Members 65
Website www.hackrva.org

What’s your meeting space like? 

Our space is about 2,000 square feet. We have an AV and general meeting area, a tech lab, and a fab lab.

What’s in your “toolbox”?

  • Two 3D printers
  • Laser cutter
  • Lots of soldering stations
  • O-scopes
  • Hand and power tools
  • A computer lab

Are there any tools your group really wants or needs?

A CNC router, like a ShopBot. We can’t wait to build that first WikiHouse!

Arduino, Raspberry Pi, embedded security… which embedded technologies does your group work with most frequently? 

We use all that stuff. Arduino, R-Pi, whatever we can get our hands on! We’ve designed PCB badges from scratch for the RichSec security conference for the last three years. Click here to learn more about the PCB badges project.

What have you been working on lately?

For the past three years, we’ve designed those PCB badges for the RichSec security conference. In another recent build, a member took a Power Wheels car and made it drivable with an Xbox controller. Check out the video below or click here to read more about that project.

Do you have any events or initiatives you’d like to tell us about? Where can we learn more about them?

You can learn more about us at hackrva.org. We host the Richmond Maker Guild and have regular Saturday hackathons, as well as a Noise Night. Members are always coming up with creative events!

Any words of advice for fellow hackers?

My personal motto is fail often, teach others, and post to the web. All those things help me learn and think about projects better.

Want to know more about what HackRVA does? Check out their Facebook page and website.

Show us your hackerspace! Tell us about your group! Where does your group design, hack, create, program, debug, and innovate? Do you work in a 20′ × 20′ space in an old warehouse? Do you share a small space in a university lab? Do you meet at a local coffee shop or bar? What sort of electronics projects do you work on? Submit your hackerspace and we might feature you on our website!

Build an Automated Vehicle Locator

Several things inspired Electrical and Computer Engineering Professor Chris Coulston and his team at Penn State Erie, The Behrend College, to create an online vehicle-tracking system. Mainly, the team wanted to increase ridership on a shuttle bus the local transit authority provided to serve the expanding campus. Not enough students were “on board,” in part because it was difficult to know when the bus would be arriving at each stop.

So Coulston’s team created a system in which a mobile GPS tracker on the bus communicates its location over a radio link to a base station. Students, professors, or anyone else carrying a smartphone can call up the bus tracker web page, find out the bus’ current location, and receive reliable estimates of the bus’ arrival time at each of its stops. Coulston, computer engineering student Daniel Hankewycz, and computer science student Austin Kelleher wrote an article about the system, which appears in our June issue.

Circuit Cellar recently asked Coulston if the system, implemented in the fall 2013 semester, had accomplished its goals and might be expanded.

“The bus tracker team is tracking usage of the web site using Google Analytics,” Coulston said. “The data reveals that we get on average 100 hits a day during cold weather and fewer on warmer days. Ridership has increased during this past year, helping assure the long-term presence of the shuttle on our campus.”

“Over winter break, shuttle service was increased to a distant location on campus,” he added. “In order to better track the location of the shuttle, a second base station was added. The additional base station required a significant rework of the software architecture. The result is that the software is more modular and can accept an arbitrary number of base stations. There are no plans at present to add a second bus—a good thing, because this change would require another significant rework of the software architecture.”

Initially, Coulston looked to other real-time vehicle trackers for inspiration: “There are a variety of live bus trackers that motivated my early ideas, including the University of Utah’s Live Tracker and the Chicago Transit Authority’s CTA Bus Tracker. Given our single bus route on campus, I was motivated to keep the interface simple and clean to minimize the amount of time needed to figure out where the bus is and how long it’s going to take to get to my stop.”

The system, as it was originally implemented in August 2013, is fully described in the June issue, now available for single-issue purchase or membership download. The following article excerpt provides a broad overview and a description of the team’s hardware choices.

THE BIG PICTURE
Figure 1 shows the bus tracker’s hardware, which consists of three components: the user’s smartphone, the base station placed at a fixed location on campus, and the mobile tracker that rides around on the bus.

Figure 1: The bus tracking system includes a Digi International XTend radio, a Microchip Technology PIC18F26K22 microcontroller, and a Raspberry Pi single-board computer.

Early on, we decided against a cellular-based solution (think cell phone) as the mobile tracker. While this concept would have benefited from wide-ranging cellular coverage, it would have incurred monthly cellular network access fees. Figure 1 shows the final concept, which utilizes a 900-MHz radio link between the mobile tracker and the base station.

Figure 2 shows the software architecture running on the hardware from Figure 1. When the user’s smartphone loads the bus tracker webpage, the JavaScript on the page instructs the user’s web browser to use the Google Maps JavaScript API to load the campus map. The smartphone also makes an XMLHttpRequest for a file on the server (stamp.txt) containing the bus’ current location and breadcrumb index.

Figure 2: The bus tracker’s software architecture includes a GPS, the mobile tracker, a smartphone, and the base station.

This information, along with data about the bus stops, is used to position the bus icon on the map, determine the bus’ next stop, and predict the bus’ arrival time at each of the seven bus stops. The bus’ location contained in stamp.txt is generated by a GPS receiver (EM-408) in the form of an NMEA string. This string is sent to a microcontroller and then parsed. When the microcontroller receives a request for the bus’ location, it formats a message and sends it over the 900-MHz radio link. The base station compares the bus position against a canonical tour of campus (breadcrumb) and writes the best match to stamp.txt.
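
The article leaves the parsing itself to the PIC firmware, but as a rough Python sketch (not the authors’ code; the field positions follow the standard NMEA 0183 RMC layout), reducing an RMC sentence to latitude, longitude, and heading looks roughly like this:

# Rough sketch (not the authors' firmware): convert a $GPRMC sentence into
# decimal-degree latitude/longitude plus a heading, per the NMEA 0183 layout.

def nmea_to_degrees(value, hemisphere):
    # ddmm.mmmm (or dddmm.mmmm) plus N/S/E/W -> signed decimal degrees
    raw = float(value)
    degrees = int(raw // 100) + (raw % 100) / 60.0
    return -degrees if hemisphere in ("S", "W") else degrees

def parse_rmc(sentence):
    # Returns (lat, lon, heading) or None if the fix is flagged invalid.
    fields = sentence.split("*")[0].split(",")
    if not fields[0].endswith("RMC") or fields[2] != "A":
        return None
    lat = nmea_to_degrees(fields[3], fields[4])
    lon = nmea_to_degrees(fields[5], fields[6])
    heading = float(fields[8]) if fields[8] else 0.0   # course over ground
    return lat, lon, heading

print(parse_rmc("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))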

Early in the project development, we decided to collect the bus’ position and heading information at 2-s intervals during the bus’ campus tour. This collection of strings is called “breadcrumbs” because, like the breadcrumbs dropped by Hansel and Gretel in the fairy tale of the same name, we hope they will help us find our way around campus. Figure 3 shows a set of breadcrumbs (b1 through b10), which were collected as the bus traveled out and back along the same road.

Figure 3: Breadcrumbs (b1 through b10) containing the bus’ position and orientation information were taken every 2 s during a test-run campus tour.

The decision to collect breadcrumbs proved fortuitous as they serve an important role in each of the three hardware components shown in Figure 1.
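
The article doesn’t show the matching step, but the idea of snapping a reported fix to the nearest breadcrumb, using heading to tell the outbound and return legs of the same road apart, can be sketched in Python (the breadcrumb values and the flat-earth distance approximation here are purely illustrative):

import math

# Illustrative breadcrumb list: (lat, lon, heading) samples recorded every 2 s
# on a reference tour of campus. Values are placeholders, not real data.
BREADCRUMBS = [
    (42.1190, -79.9850, 90.0),
    (42.1191, -79.9843, 88.0),
    (42.1192, -79.9836, 271.0),   # return leg along the same road
]

def nearest_breadcrumb(lat, lon, heading, max_heading_err=45.0):
    # Prefer crumbs whose recorded heading roughly matches the bus heading so
    # that out-and-back travel on one road maps to the correct leg.
    best_i, best_d = None, float("inf")
    for i, (blat, blon, bhead) in enumerate(BREADCRUMBS):
        if abs((heading - bhead + 180.0) % 360.0 - 180.0) > max_heading_err:
            continue
        dx = (lon - blon) * math.cos(math.radians(lat))   # flat-earth approximation
        dy = lat - blat
        d = math.hypot(dx, dy)
        if d < best_d:
            best_i, best_d = i, d
    return best_i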

MOBILE TRACKER
The bus houses the mobile tracker (see Photo 1). Figure 4 shows the schematic, which is deceptively simple. What you see is the third iteration of the mobile tracker hardware.

Figure 4: The mobile tracker includes a Microchip Technology PIC18F26K22 microcontroller, a Micrel MIC5205 regulator, a Digi International XTend RF module, and a Texas Instruments TXS0102 bidirectional translator

An important starting point in the design was how to step down the bus’ 12-V supply to the 5-V required by our circuit. In terms of hardware, the best decision we made was to abandon the idea of trying to integrate a 12-to-5-V converter onto the mobile tracker PCB. Instead we purchased a $40 CUI VYB15W-T DC-DC converter and fed the mobile tracker 5-V inputs…

We used Micrel’s MIC5205 regulator to step down the 5 V for the 3.3-V GPS receiver, which easily supplied its peak 80 mA. Since we ran a Digi International XTend radio at 5 V for the best range, we ended up with mixed voltage signals. We used a Texas Instruments TXS0102 bidirectional voltage-level translator, which handles voltage-interfacing duties between the 5-V radio and the 3.3-V microcontroller.

Photo 1: The mobile tracker unit

We selected Microchip Technology’s PIC18F26K22 because it has two hardware serial ports, enabling it to simultaneously communicate with the GPS module and the radio modem when the bus is traveling around campus. We placed two switches in front of the serial ports. One switch toggles between the GPS module and the Microchip Technology PICkit 3 programming pins, which are necessary to program the microcontroller. The second switch toggles between the radio and a header connected to a PC serial port (via a Future Technology Devices FT232 USB-to-serial bridge). This is useful when debugging at your desk. An RGB LED in a compact PLCC4 package provides state information about the mobile tracker.

The XTend RF modules are the big brothers to Digi International’s popular XBee series. These radios come with an impressive 1 W of transmitting power over a 900-MHz frequency, enabling ranges up to a mile in our heavily wooded campus environment. The radios use a standard serial interface requiring three connections: TX, RX, and ground. They are simple to set up. You just drop them into the Command mode, set the module’s source and destination addresses, store this configuration in flash memory, and exit. You never have to deal with them again. Any character sent to the radio appears on the destination modem’s RX line.
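
That setup sequence could be scripted from a PC in a few lines of Python with pyserial; the AT command names below (ATMY, ATDT, ATWR, ATCN) are assumptions drawn from Digi’s XBee-style command set, so verify them against the XTend manual before relying on them:

import time
import serial

# Hypothetical one-time radio setup; command names are assumed, not verified.
def configure_xtend(port, my_addr="1", dest_addr="2"):
    with serial.Serial(port, 9600, timeout=2) as radio:
        time.sleep(1.1)                  # guard time before escape sequence
        radio.write(b"+++")              # enter Command mode (no carriage return)
        time.sleep(1.1)                  # guard time after escape sequence
        for cmd in ("ATMY " + my_addr,   # source address
                    "ATDT " + dest_addr, # destination address
                    "ATWR",              # store settings in flash
                    "ATCN"):             # exit Command mode
            radio.write((cmd + "\r").encode())
            print(cmd, "->", radio.readline().decode(errors="ignore").strip())

# configure_xtend("/dev/ttyUSB0")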

The GPS receiver utilizes the CSR SiRFstarIII chipset, which is configured to output a recommended minimum specific (RMC) string every 2 s…

The mobile tracker’s firmware listens for commands over the serial port and generates appropriate replies. Commands are issued by the developer or by the base station…

Burning breadcrumbs into the mobile tracker’s flash memory proved to be a good design decision. With this capability, the mobile tracker can generate a simulated tour of campus while sitting on the lab bench.

BASE STATION
The base station consists of an XTend RF module connected to a Raspberry Pi’s serial port (see Photo 2). The software running on the Raspberry Pi does everything from running an Nginx open-source web server to making requests for data from the mobile tracker.

From Figure 1, the only additional hardware associated with the base station is the 900-MHz XTend radio connected to the Raspberry Pi over a dedicated serial port on pins 8 (TX) and 10 (RX) of the Raspberry Pi’s GPIO header.

The only code that runs on the base station is the Python program, which periodically queries the mobile tracker to get the bus’ position and heading. The program starts by configuring the serial port in the common 9600,8,N,1 mode. Next, the program is put into an infinite loop to query the mobile tracker’s position every 2 s.
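
A stripped-down version of that loop, with pyserial on the Raspberry Pi’s GPIO-header UART, might look like the sketch below; the query byte, reply format, and file path are placeholders rather than the authors’ actual protocol:

import time
import serial

PORT = "/dev/ttyAMA0"            # UART on the Raspberry Pi GPIO header
STAMP = "/var/www/stamp.txt"     # file the bus tracker web page polls

with serial.Serial(PORT, baudrate=9600, bytesize=8, parity="N",
                   stopbits=1, timeout=1) as radio:
    while True:
        radio.write(b"P")        # placeholder "send me your position" command
        reply = radio.readline().decode(errors="ignore").strip()
        if reply:
            # Assume the reply carries the fix; map it to the nearest breadcrumb
            # index and publish it for the web page to fetch.
            with open(STAMP, "w") as f:
                f.write(reply)
        time.sleep(2)            # query every 2 s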

Photo 2: The base station includes an interface board, a Raspberry Pi, and a radio modem.

Q&A: Raspberry Pi Innovation

Orlando, FL-based web app developer and blogger Shea Silverman recently received Kickstarter funding for the latest version of PiPlay, his Raspberry Pi-based OS. Shea and I discussed his ongoing projects, his Raspberry Pi book, and what’s next for PiPlay.—Nan Price, Associate Editor

 

Shea Silverman

NAN: What is your current occupation?

SHEA: Web applications developer with the Center for Distributed Learning at the University of Central Florida (UCF).

NAN: Why and when did you decide to start your blog?

SHEA: I’ve been blogging on and off for years, but I could never keep to a schedule or really commit myself to writing. After I started working on side projects, I realized I needed a place to store tips and tricks I had figured out. I installed WordPress, posted some PhoneGap tips, and within a day got a comment from someone who had the same issue, and my tips helped them out. I have been blogging ever since. I make sure to post every Friday night.

NAN: Tell us about PiPlay, the Raspberry Pi OS. Why did you start the OS? What new developments, if any, are you working on?

Shea’s PiPlay Raspberry Pi OS recently reached 400% funding on Kickstarter.

SHEA: PiPlay is a gaming and emulation distribution for the Raspberry Pi single-board computer. It is built on top of the Raspbian OS and tries to make it as easy as possible to play games on your Raspberry Pi. My blog got really popular after I started posting binaries and tutorials on how to compile different emulators for the Raspberry Pi, but I kept getting asked the same questions and saw users struggling with the same recurring issues.

I decided I would release a disk image with everything preconfigured and ready to be loaded onto an SD card. I’ve been adding new emulators, games, and tools to it ever since.

I just recently completed a Kickstarter that is funding the next release, which includes a much nicer front end, a web GUI, and a better controller configuration system.

NAN: You wrote Instant Raspberry Pi Gaming. Do you consider this book introductory or is it written for the more experienced engineer?

SHEA: Instant Raspberry Pi Gaming is written like a cookbook with recipes for doing various tasks. Some of them are very simple, and they build up to some more advanced recipes. One of the easier tasks is creating your user account on the Pi Store, while the more advanced recipes have you working with Python and using an API to interact with Minecraft.

Readers will learn how to set up a Raspberry Pi, how to install and use various emulators and games, a bit about the Minecraft API, and some common troubleshooting tips.
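
To give a flavor of the Minecraft material, a minimal script against the mcpi library bundled with Minecraft: Pi Edition (an illustration only, not one of the book’s recipes) looks like this:

# Minimal Minecraft: Pi Edition scripting example using the bundled mcpi library.
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()                          # connect to the running game
mc.postToChat("Hello from Python!")              # print a chat message in-game

pos = mc.player.getTilePos()                     # player's current tile position
mc.setBlock(pos.x + 1, pos.y, pos.z, block.STONE.id)   # place a stone block beside the player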

The Pitroller is a joystick and buttons hooked up to the GPIO pins of a Raspberry Pi, which can act as a controller or keyboard for various emulators.

NAN: You are a member of FamiLAB, an Orlando, FL-based community lab/hackerspace. What types of projects have you worked on at the lab?

Disney director Rich Moore poses with Shea’s miniature arcade machine. The machine was based on Fix-It Felix Jr. from Disney’s Wreck-It Ralph.

SHEA: I spend a lot of time at the lab using the laser cutter. Creating a 2-D vector in Inkscape and then watching it be cut out of a piece of wood or acrylic is really inspiring. My favorite project was making a little arcade machine featuring Fix-It Felix Jr. from Wreck-It Ralph. A marketing person from Disney was able to get it into the hands of the director, Rich Moore. He sent me a bunch of pictures of himself holding my little arcade machine next to the full-size version.

NAN: Give us a little background information. How did you become interested in technology?

SHEA: My mom always likes to remind me that I’ve been using computers since I was 2. My parents were very interested in technology and encouraged my curiosity when it came to computers. I always liked to take something apart and see how it worked, and then try to put it back together. As the years went on, I’ve devoted more and more time to making technology a major part of my life.

NAN: Tell us about the first embedded system you designed.

SHEA: I have a lot of designs, but I don’t think I’ve ever finished one. I’ll be halfway into a project, learn about something new, then cannibalize what I was working on and repurpose it for my new idea. One of the first embedded projects I worked on was a paintball board made out of a PICAXE microcontroller. I never got it small enough to fit inside the paintball marker, but it was really cool to see everything in action. The best part was when I finally had that “ah-ha!” moment, and everything I was learning finally clicked.

NAN: What was the last electronics-design related product you purchased and what type of project did you use it with?

SHEA: At UCF, one of our teams utilizes a ticket system for dealing with requests. Our department does a hack day each semester, so my coworker and I decided to rig up a system that changes the color of the lights in the office depending on the urgency of requests in the box. We coded up an API and had a Raspberry Pi ping the API every few minutes for updates. We then hooked up two Arduinos to the Raspberry Pi and color-changing LED strips to the Arduinos. We set it up and it’s been working for the past year and a half, alerting the team with different colors when there is work to do.
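
A rough outline of that polling loop in Python might look like the following; the API URL, urgency values, and one-byte serial protocol are hypothetical stand-ins for UCF’s internal system:

import time
import requests
import serial

API_URL = "http://tickets.example.edu/api/urgency"    # hypothetical endpoint
COLORS = {"low": b"G", "medium": b"Y", "high": b"R"}   # byte sent to the Arduinos

arduino = serial.Serial("/dev/ttyACM0", 9600)

while True:
    urgency = requests.get(API_URL, timeout=10).json().get("urgency", "low")
    arduino.write(COLORS.get(urgency, b"G"))   # Arduino maps the byte to an LED color
    time.sleep(300)                            # check again in five minutes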

NAN: Are you currently working on or planning any projects?

SHEA: My Kickstarter for PiPlay just finished at 400% funding. So right now I’m busy working on fulfilling the rewards, and writing the latest version of PiPlay.

NAN: What do you consider to be the “next big thing” in the industry?

SHEA: Wearable computing. Google Glass, the Pebble smart watch, Galaxy Gear—I think these are all great indicators of where our technology is heading. We currently have very powerful computers in our pockets with all kinds of sensors and gadgets built in, but very limited ways to physically interact with them (via the screen, or a keypad). If we can make the input devices modular, be it your watch, a heads-up display, or something else, I think that is going to spark a new revolution in user experiences.

June Issue: Vehicle Tracking, Bit Banging, and More

Circuit Cellar’s June issue is now online, outlining DIY projects ranging from an automated real-time vehicle locator to a GPS-oriented solar tracker and offering solid advice on bit banging, FPGA reconfiguration, customizing the Linux kernel, and more.

A persistent problem typically sparks the invention of projects featured in our magazine. For example, when the campus at Penn State Erie, The Behrend College, had a growth spurt, the local transit authority provided a shuttle bus to help students who were rushing from class to class. But ridership was low because of the bus’ unpredictable schedule.

So a college engineering team constructed a mobile application to track the bus. That system inspired the cover of our June issue and complements its communications theme.

The three-part system consists of a user’s smartphone running an HTML5-compatible browser; a base station consisting of an XTend 900-MHz radio connected to a Raspberry Pi single-board computer; and a mobile tracker comprising a GPS receiver, a Microchip Technology PIC18F26K22 microcontroller, and an XTend module.

The Raspberry Pi runs a web server to handle requests from a user’s smartphone. The user then receives accurate bus arrival times.

Also aligning with June’s theme, we present an article about implementing serial data transmission through bit banging. You’ll gain a better understanding of how serial data is transmitted and received by a microprocessor’s hardware UART peripheral. You’ll also learn how bit banging can achieve serial communication in software, which is essential when your embedded system’s microprocessor lacks a built-in UART.
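
As a taste of the technique, here is a rough bit-banged transmit routine for a Raspberry Pi GPIO pin (not the article’s code; the pin number is arbitrary, and pure software timing at 9,600 bps is only approximate under Linux):

import time
import RPi.GPIO as GPIO

TX_PIN = 17             # arbitrary BCM pin number
BIT_TIME = 1.0 / 9600   # 9,600 bps

GPIO.setmode(GPIO.BCM)
GPIO.setup(TX_PIN, GPIO.OUT, initial=GPIO.HIGH)   # UART line idles high

def send_byte(byte):
    # One low start bit, eight data bits LSB-first, one high stop bit.
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    for bit in bits:
        GPIO.output(TX_PIN, bit)
        time.sleep(BIT_TIME)

for ch in b"hello\r\n":
    send_byte(ch)
GPIO.cleanup()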

Recognizing a rapidly unfolding communications trend, this issue includes an inventor’s essay about how the presence of Bluetooth Low Energy (BLE) in the latest mobile devices is sparking a big boom in innovative hardware/sensor add-ons that use your smartphone or tablet as an interface. Other communications-related articles include Part 2 of a close look at radio-frequency identification (RFID). This month’s installment describes the front-end analog circuitry for the RFID base station of a secure door-entry project.

In addition, we offer articles about adjusting your FPGA design while it’s operating, modifying the Linux kernel to suit your hardware and software designs, tools and techniques to boost your power supply, digital data encoding in wireless systems, GPS orientation of a solar panel, and an interview with Quinn Dunki, an embedded applications consultant and hacker.

The June issue is available for membership download or single-issue purchase.

Newcastle Makerspace’s first rule? Do not be on fire.

In Newcastle upon Tyne, located in North-East England, lies Newcastle Makerspace. This is an eclectic group of makers, creatives, programmers, scientists, and engineers. They’ve set up a space to meet, work, socialize, share ideas, and collaborate.

Gregory Fenton is a member and wants to tell us a little bit more about what they’re working on.

Location 18 New Bridge Street West, Newcastle upon Tyne, NE1 8AW, England
Members Lots and growing fast.

CW: Tell us about your meeting space.

We have two large rooms: one for relaxing and holding meetings, talks, and so on, and one for working on projects. We also have a fully networked computer room with spare monitors and keyboards for people who bring in their Raspberry Pis. Another room is dedicated to our lathe and laser cutter. There’s also a kitchen area where people can prepare meals and make drinks, and a well-organized storage rack.

CW: What sort of tools do you have at Makerspace Newcastle? 

  • Oscilloscopes
  • Soldering stations (including SMD soldering using heat)
  • Two 3D printers (both working and being built by members)
  • A lathe
  • A laser cutter (ordered, just waiting on delivery)
  • Computers
  • Bench drills and saws
  • Circular saws, sanders, grinders, and lots of general hand and power tools

CW: What’s on your wish list? 

A laser CNC and newer tables and chairs would be nice additions.

CW: What sort of embedded tech does your group work with? 

We use lots of embedded technology, such as Arduinos, BeagleBoards, Raspberry Pis, PICs, etc., for various projects.

CW:  What are some projects that your group has been working on?

We have so much going on, from projects coming to fruition to projects just being imagined, that I could go on forever!

  • One of our members is building a large quadcopter from scratch with a 3D camera mounted underneath it.
  • Another is working on a candy machine that feeds the Makers whenever someone tweets to it (give it a try by sending a tweet containing the word candy to @maker_space).
  • Several of our members are building 3D printers of various styles and sizes.
  • One of our members designs costumes for shows, circuses and events.
  • A different member is taking his children’s old baby clothes and making a quilted “memory blanket,” as well as creating wooden toys to give to them now that they are a little older.
  • Some of our junior members are learning about programming, interfacing to electronics and relays, and making toys by hand from balsa wood.
  • One of our members is creating a power extension that is controlled remotely, using Arduinos, servo motors, and a GSM shield to switch individual plugs on and off via text message (SMS).
  • A project that’s being done as a group is a Raspberry Pi media server that plays music and controls other devices such as an amplifier, lights and LED strips. I don’t think this project will ever truly be finished as every completed task leads to “wouldn’t it be cool if we did …”.

 


What’s the craziest project your group or group members have completed?

Easy. We decided we wanted a laser cutter, went on a member pledge drive, and had the money to buy it outright within a week! It is in China at the moment, but soon we’ll be cutting out plexiglass and wood like there is no tomorrow!

Do you have any events or initiatives you’d like to tell us about? Where can we learn more about it?

We regularly hold events both in the space itself and in other places in the surrounding area. Check our blog and mailing list from our website for upcoming and past events.

What would you like to say to fellow hackers out there?

  • Always follow rule zero: Do not be on fire.
  • Safety is everyone’s responsibility.
  • Don’t have a space local to you? Find a few like-minded individuals and set up your own! You can start small (a garage or shed) and expand as time passes and membership increases.
  • If a project interests you, tell the world: Circuit Cellar, a blog, Facebook, Twitter… Spread the word.

Want to know more about what Makerspace Newcastle does? Check out their Facebook and Twitter pages!

Show us your hackerspace! Tell us about your group! Where does your group design, hack, create, program, debug, and innovate? Do you work in a 20′ × 20′ space in an old warehouse? Do you share a small space in a university lab? Do you meet at a local coffee shop or bar? What sort of electronics projects do you work on? Submit your hackerspace and we might feature you on our website!

Raspberry Pi-Based Network Monitoring Device

In 2012, Al Anderson, IT director at Salish Kootenai College in Pablo, MT, and his team wired the dorms and student housing units at the small tribal college with fiber and outdoor CAT 5 cable to provide reliable Internet service to students. “Our prior setup was wireless and did not provide very good service,” Anderson says.

The 25 housing units, each with a small unmanaged Ethernet switch, were daisy chained in several different paths. Anderson needed a way to monitor the links from the system’s Simple Network Management Protocol (SNMP) network monitoring software, Help/Systems’s InterMapper. He also wanted to ensure the switches installed inside the sun-exposed utility boxes wouldn’t get too hot.

Photo 1: The Raspberry Pi is a small SBC based on an ARM processor. Its many I/O ports make it very useful for embedded devices that need a little more power than the typical 8-bit microcontroller.

His Raspberry Pi-based solution is the subject of an article appearing in Circuit Cellar’s April issue. “We chose the Raspberry Pi because it was less expensive, we had several on hand, and I wanted to see what I could do with it,” Anderson says (see Photo 1).

The article walks readers through each phase of the project:

“I installed a Debian Linux distro, added an I2C TMP102 temperature sensor from SparkFun Electronics, wrote a small Python program to get the temperature via I2C and convert it to Fahrenheit, installed an SNMP server on Linux, added a custom SNMP rule to display the temperature from the script, and finally wrote a custom SNMP MIB to access the temperature information as a string and integer.”

Setting up the SBC and Linux was simple, Anderson says. “The prototype Raspberry Pi has now been running since September 2012 without any problems,” he says in his article. “It has been interesting to see how the temperature fluctuates with the time of day and the level of network activity. As budget and time permit, we will be installing more of these onto our network.”

In the following excerpt, Anderson discusses the project’s design, implementation, and OS installation and configuration. For more details on a project inspired, in part, by the desire to see what a low-cost SBC can do, read Anderson’s full article in the April issue.

DESIGN AND IMPLEMENTATION
Figure 1 shows the overall system design. The TMP102 is connected to the Raspberry Pi via I2C. The Raspberry Pi is connected to the network via its Ethernet port. The monitoring system uses TCP/IP over the Ethernet network to query the Raspberry Pi via SNMP. The system is encased in a small acrylic Adafruit Industries case, which we used because it is inexpensive and easy to customize for the sensor.

Figure 1: The system is designed around the Raspberry Pi SBC. The Raspberry Pi uses the I2C protocol to query the Texas Instruments TMP102 temperature sensor. The Raspberry Pi is queried via SNMP.

Our first step was to set up the Raspberry Pi. We started by installing the OS and the various software packages needed. Next, we wrote the Python script that queries the I2C temperature sensor. Then we configured the SNMP daemon to run the Python script when it is queried. With all that in place, we then set up the SNMP monitoring software that is configured with a custom MIB and a timed query. Finally, we modified the Raspberry Pi case to expose the temperature sensor to the air and installed the device in its permanent location.

OS INSTALLATION AND CONFIGURATION
The Raspberry Pi has no hard drive; it requires a Linux OS, compiled to run on its ARM processor (the brain of the device), to be installed on an SD card. Setting up the SD card is straightforward, but you cannot simply copy the files onto the card. The OS has to be copied in such a way that the SD card has a boot sector and the Linux partitioning and file structure are properly maintained. Linux and Mac OS X users can use the dd command line utility to copy from the OS’s ISO image. Windows users can use a utility (e.g., Win32DiskImager) to accomplish the same thing. A couple of other utilities can be used to copy the OS onto the SD card, but I prefer using the command line.

A Debian-based distribution of Linux seems to be the most commonly used Linux distribution on the Raspberry Pi, with Raspbian “wheezy” as the recommended distribution. However, for this project I chose Adafruit Learning System’s Occidentalis V0.2 Linux distribution because it had several hardware-hacker features rolled into the distribution, including the kernel modules for the temperature sensor. This saved me some work getting those installed and debugged.

Before you can copy the OS to the SD card, you need to download the ISO image. The Resources section of this article lists several sources including a link to the Adafruit Linux distribution. Once you have an ISO image downloaded, you can copy it to the SD card. The Resources section also includes a link to an Embedded Linux Wiki webpage, “RPi Easy SD Card Setup,” which details this copying process for several OSes.

The quick and dirty instructions are to somehow get the SD card hooked up to your computer, either using a built-in SD reader or a peripheral card reader. I used a USB attached reader. Then you need to format the card. The best format is FAT32, since it will get reformatted by the copy command anyway. Next, use your chosen method to copy the OS onto the card. On Linux or Mac OS X, the command:

dd bs=4M if=~/linux_distro.img of=/dev/sdd

will properly copy the OS onto the SD card.

You will need to change two important things in this command for your system. First, the if parameter, which names the input file (i.e., your ISO image), needs to match the file you downloaded. Second, the of parameter (i.e., the output file, or our SD card in this case) needs to match the SD card device. In case you are wondering why your SD drive is considered a file: everything, including devices, is a file in Linux. We will see this again in a bit with the I2C device. You can toast your hard drive if you put the wrong device path in here. If you are unsure about this, you may want to use a GUI utility so you don’t overwrite your hard drive.

Once the OS is copied onto the SD card, it is time to boot up the Raspberry Pi. A default username and password are available from wherever you download the OS. With our OS, the defaults are “pi” and “raspberry.” Make it your first mission to change that password and maybe even add a new account if your project is going to be in production.

Another thing you may have to change is the IP address configuration on the Ethernet interface. By default, these distributions use DHCP to obtain an address. Unless you have a need otherwise, it is best to leave that be. If you need to use a static IP address, I have included a link in the Resources section with instructions on how to do this in Linux.

To access your Raspberry Pi, hook up a local keyboard and monitor to get to a command line. Once you have the network running and you know the IP address, you can use the SSH utility to gain access via the network.

To get SNMP working on the Raspberry Pi, you need to install two Debian packages: snmpd and snmp. The snmpd package is the actual SNMP server software that will enable other devices to query for SNMP on this device. The second package, snmp, is the client. It is nice to have this installed for local troubleshooting.

We used the Debian package manager, apt-get, to install these packages. The commands must also be run as the root or superuser.

The sudo apt-get install snmpd command installs the snmpd software. The sudo part runs the apt-get command as the superuser. The install and snmpd parts of the command are the arguments for the apt-get command.

Next we issued the sudo apt-get install snmp command, which installed the SNMP client. Issue the ps -ax | grep snmpd command to see if the snmpd daemon is running after the install. You should see something like this:

1444 ? S 14:22 /usr/sbin/snmpd -Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid

If you do not see a line similar to this, you can issue the sudo /etc/init.d/snmpd start command to start the service. Once it is running, it is time to turn your attention to the Python script that reads the temperature sensor. Configure the SNMP daemon after you get the Python script running.
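
Anderson’s script isn’t reprinted here, but the temperature-reading core can be sketched along these lines; bus 1 and address 0x48 are the usual Raspberry Pi and TMP102 defaults, so adjust for your wiring:

import smbus

BUS = smbus.SMBus(1)        # use SMBus(0) on the earliest Raspberry Pi boards
TMP102_ADDR = 0x48          # default TMP102 address

def read_temp_f():
    # Temperature register 0x00 returns two bytes: a 12-bit result, MSB first.
    msb, lsb = BUS.read_i2c_block_data(TMP102_ADDR, 0x00, 2)
    raw = ((msb << 8) | lsb) >> 4
    if raw & 0x800:          # two's complement for below-zero readings
        raw -= 1 << 12
    celsius = raw * 0.0625
    return celsius * 9.0 / 5.0 + 32.0

if __name__ == "__main__":
    print("%.2f" % read_temp_f())   # snmpd can report this value via a custom rule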

The Raspberry Pi’s final installation is shown. The clear acrylic case can be seen along with the Texas Instruments TMP102 temperature sensor, which is glued below the air hole drilled into the case. We used a modified ribbon cable to connect the various TMP102 pins to the Raspberry Pi.

An Organized Space for Programming, Writing, and Soldering

Photo 1—This is Anderson’s desk when he is not working on any project. “I store all my ‘gear’ in a big plastic bin with several smaller bins inside, which keeps the mess down. I have a few other smaller storage bins as well hidden here and there,” Anderson explained.

Photo 2—Here is Anderson’s area set up for soldering and running his oscilloscope. “I use a soldering mat to protect my desk surface,” he says. “The biggest issue I have is the power cords from different things getting in my way.”

Al Anderson’s den is the location for a variety of ongoing projects—from programming to writing to soldering. He uses several plastic bins to keep his equipment neatly organized.

Anderson is the IT Director for Salish Kootenai College, a small tribal college based in Pablo, MT. He described some of his workspace features via e-mail:

I work on many different projects. Lately I have been doing more programming. I am getting ready to write a book on the Xojo development system.

Another project I have in the works is using a Raspberry Pi to control my hot tub. The hot tub is about 20 years old, and I want to have better control over what it is doing. Plus I want it to have several features. One feature is a wireless interface that would be accessible from inside the house. The other is a web control of the hot tub so I can turn it on when we are still driving back from skiing to soak my tired old bones.

I am also working on a home yard sprinkler system. I laid some of the pipe last fall and have been working on the controller on and off. This spring I will put in the sprinkler heads and the rest of the pipe. I tend to like working with small controllers (e.g., the Raspberry Pi, BeagleBoard’s BeagleBone, and Arduino) and I have a lot of those boards in various states.

Anderson’s article about a Raspberry Pi-based monitoring device will appear in Circuit Cellar’s April issue. You can follow him on Twitter at @skcalanderson.

Q&A: Andrew Godbehere, Imaginative Engineering

Engineers are inherently imaginative. I recently spoke with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley, about how his ideas become realities, his design process, and his dream project. —Nan Price, Associate Editor

Andrew Godbehere

NAN: You are currently working toward your Electrical Engineering PhD at the University of California, Berkeley. Can you describe any of the electronics projects you’ve worked on?

ANDREW: In my final project at Cornell University, I worked with a friend of mine, Nathan Ward, to make wearable wireless accelerometers and find some way to translate a dancer’s movement into music, in a project we called CUMotive. The computational core was an Atmel ATmega644V connected to an Atmel AT86RF230 802.15.4 wireless transceiver. We designed the PCBs, including the transmission line to feed the ceramic chip antenna. Everything was hand-soldered, though I recommend using an oven instead. We used Kionix KXP74 tri-axis accelerometers, which we encased in a lot of hot glue to create easy-to-handle boards and to shield them from static.

This is the central control belt-pack to be worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to the synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.

The dancer had four accelerometers connected to a belt pack with an Atmel chip and transceiver. On the receiver side, a musical instrument digital interface (MIDI) communicated with a synthesizer. (Design details are available at http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2007/njw23_abg34/index.htm.)

I was excited about designing PCBs for 802.15.4 radios and making them work. I was also enthusiastic about trying to figure out how to make some sort of music with the product. We programmed several possibilities, one of which was a sort of theremin; another was a sort of drum kit. I found that this was the even more difficult part—not just the making, but the making sense.

When I got to Berkeley, my work switched to the theoretical. I tried to learn everything I could about robotic systems and how to make sense of them and their movements.

NAN: Describe the real-time machine vision-tracking algorithm and integrated vision system you developed for the “Are We There Yet?” installation.

ANDREW: I’ve always been interested in using electronics and robotics for art. Having a designated emphasis in New Media on my degree, I was fortunate enough to be invited to help a professor on a fascinating project.

This view of the Yud Gallery is from the installed camera with three visitors present. Note the specular reflections on the floor. They moved throughout the day with the sun. This movement needed to be discerned from a visitor’s typical movement.

For the “Are We There Yet?” installation, we used a PointGrey FireFlyMV camera with a wide-angle lens. The camera was situated a couple hundred feet away from the control computer, so we used a USB-to-Ethernet range extender to communicate with the camera.

We installed a color camera in a gallery in the Contemporary Jewish Museum in San Francisco, CA. We used Meyer Sound speakers with a high-end controller system, which enabled us to “position” sound in the space and to sweep audio tracks around at (the computer’s programmed) will. The Meyer Sound D-Mitri platform was controlled by the computer with Open Sound Control (OSC).

This view of the Yud Gallery is from the perspective of the computer running the analysis. This is a probabilistic view, where the brightness of each pixel represents the “belief” that the pixel is part of an interesting foreground object, such as a pedestrian. Note the hot spots corresponding nicely with the locations of the visitors in the image above.

The hard work was to then program the computer to discern humans from floors, furniture, shadows, sunbeams, and cloud reflections. The gallery had many skylights, which made the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful to determine whether detected color-perturbance was human or not.

Once complete, the experience of the installation was beautiful, enchanting, and maybe a little spooky. The audio tracks were all questions (e.g., “Are we there yet?”) and they were always spoken near you, as if addressed to you. They responded to your movement in a way that felt to me like dancing with a ghost. You can watch videos about the installation at www.are-we-there-yet.org.

The “Are We There Yet?” project opens itself up to possible use as an embedded system. I’ve been told by the start-up company Romo (www.kickstarter.com/projects/peterseid/romo-the-smartphone-robot-for-everyone), which was evaluating my vision-tracking code for use in its cute iPhone rover, that the software I wrote works on iOS devices. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud-tracking system using a Raspberry Pi and a reasonable webcam.
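
To give a sense of scale, a bare-bones foreground detector on a webcam stream takes only a few lines of Python with OpenCV; the stock MOG2 background subtractor below is a stand-in, not Godbehere’s algorithm:

import cv2

cap = cv2.VideoCapture(0)                       # any reasonable webcam
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # bright pixels = likely foreground
    cv2.imshow("camera", frame)
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()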

I may create an automatic cloud-tracking system to watch clouds. I think computers could be capable of this capacity for abstraction, even though we think of the leisurely pastime as the mark of a dreamer.

NAN: Some of the projects you’ve contributed to focus on switched linear systems, hybrid systems, wearable interfaces, and computation and control. Tell us about the projects and your research process.

ANDREW: I think my research is all driven by imagination. I try to imagine a world that could be, a world that I think would be nice, or better, or important. Once I have an idea that captivates my imagination in this way, I have no choice but to try to realize the idea and to seek out the knowledge necessary to do so.

For the wearable wireless accelerometers, it began with the thought: Wouldn’t it be cool if dance and music were inherently connected the way we try to make it seem when we’re dancing? From that thought, the designs started. I thought: The project has to be wireless and low power, it needs accelerometers to measure movement, it needs a reasonable processor to handle the data, it needs MIDI output, and so forth.

My switched linear systems research came about in a different way. As I was in class learning about theories regarding stabilization of hybrid systems, I thought: Why would we do it this complicated way, when I have this reasonably simple intuition that seems to solve the problem? I happened to see the problem a different way as my intuition was trying to grapple with a new concept. That naive accident ended up as a publication, “Stabilization of Planar Switched Linear Systems Using Polar Coordinates,” which I presented in 2010 at Hybrid Systems: Computation and Control (HSCC) in Stockholm, Sweden.

NAN: How did you become interested in electronics?

ANDREW: I always thought things that moved seemingly of their own volition were cool and inherently attention-grabbing. I would think: Did it really just do that? How is that possible?

Andrew worked on this project when computers still had parallel ports. a—This photo shows manually etched PCB traces for a digital EKG (the attempted EEG) with 8-bit LED optoisolation. The rainbow cable connects to a computer’s parallel port. The interface code was written in C++ and ran on DOS. b—The EKG circuitry and digitizer are shown on the left. The 8-bit parallel computer interface is on the right. Connecting the two boards is an array of coupled LEDs and phototransistors, encased in heat shrink tubing to shield against outside light.

Electric rally-car tracks and radio-controlled cars were a favorite of mine. I hadn’t really thought about working with electronics or computers until middle school. Before that, I was all about paleontology. Then, I saw an episode of Scientific American Frontiers, which featured Alan Alda excitedly interviewing RoboCup contestants. Watching RoboCup [a soccer game involving robotic players], I was absolutely enchanted.

While my childhood electronic toys moved and somehow acted as their own entities, they were puppets to my intentions. Watching RoboCup, I knew these robots were somehow making their own decisions on-the-fly, magically making beautiful passes and goals not as puppets, but as something more majestic. I didn’t know about the technical blood, sweat, and tears that went into it all, so I could have these romantic fantasies of what it was, but I was hooked from that moment.

That spurred me to apply to a specialized science and engineering high school program. It was there that I was fortunate enough to attend a fabulous electronics class (taught by David Peins), where I learned the basics of electronics, the joy of tinkering, and even PCB design and assembly (drilling included). I loved everything involved. Even before I became academically invested in the field, I fell in love with the manual craft of making a circuit.

NAN: Tell us about your first design.

ANDREW: Once I’d learned something about designing and making circuits, I jumped in whole-hog, to a comical degree. My very first project without any course direction was an electroencephalograph!

I wanted to make stuff move on my computer with my brain, the obvious first step. I started with a rough design and worked on tweaking parameters and finding components.

In retrospect, I think that first attempt was actually an electromyograph that read the movements of my eye muscles. And it definitely was an electrocardiograph. Success!

Someone suggested that it might not be a good idea to have a power supply hooked up in any reasonably direct path with your brain. So, in my second attempt, I digitized the signal on the brain side and hooked it up to eight white LEDs. On the other side, I had eight phototransistors coupled with the LEDs and covered with heat-shrink tubing to keep out outside light. That part worked, and I was excited about it, even though I was having some trouble properly tuning the op-amps in that version.

NAN: Describe your “dream project.”

ANDREW: Augmented reality goggles. I’m dead serious about that, too. If given enough time and money, I would start making them.

I would use some emerging organic light-emitting diode (OLED) technology. I’m eyeing the start-up MicroOLED (www.microoled.net) for its low-power “near-to-eye” display technologies. They aren’t available yet, but I’m hopeful they will be soon. I’d probably hook that up to a Raspberry Pi SBC, which is small enough to be worn reasonably comfortably.

Small, high-resolution cameras have proliferated with modern cell phones; these cameras could easily be mounted into the sides of goggles, driving each OLED display independently. Then, it’s just a matter of creativity for how to use your newfound vision! The OpenCV computer vision library offers a great starting point for applications such as face detection, image segmentation, and tracking.

Google Glass is starting to get some notice as a sort of “heads-up” display, but in my opinion, it doesn’t go nearly far enough. Here’s the craziest part (please bear with me): I’m willing to give up directly viewing the world with my natural eyes. I would be willing to wear full field-of-vision goggles with high-resolution OLED displays showing stereoscopic views from two high-resolution smartphone-style cameras. (At least until the technology gets better, as described in Rainbows End by Vernor Vinge.) I think, for this version, all the components are just now becoming available.

Augmented reality goggles would do a number of things for vision and human-computer interaction (HCI). First, 3-D overlays in the real world would be possible.

Crude example: I’m really terrible with faces and names, but computers are now great with that, so why not get a little help and overlay nametags on people when I want? Another fascinating thing for me is that this concept of vision abstracts the body from the eyes. So, you could theoretically connect to the feed from any stereoscopic cameras around (e.g., on an airplane, in the Grand Canyon, or on the back of some wild animal), or you could even switch points of view with your friend!

Perhaps reality goggles are not commercially viable now, but I would unabashedly use them for myself. I dream about them, so why not make them?
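
For readers who want to experiment with the OpenCV starting point Andrew mentions, a minimal face-detection loop in Python might look like the sketch below. It assumes the opencv-python package and a single attached camera, and it is only an illustration, not code from Andrew’s project.

```python
# Rough sketch: detect faces in a live camera feed with OpenCV's bundled
# Haar cascade. In a goggle application, the rectangle drawing below is
# where an overlay (such as a nametag) would be rendered instead.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # first attached camera (assumed)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```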

Member Profile: Walter O. Krawec

Walter O. Krawec

LOCATION:
Upstate New York

OCCUPATION:
Research Assistant and PhD Student, Stevens Institute of Technology

MEMBER STATUS:
Walter has been reading Circuit Cellar since he got his first issue in 1999. Free copies were available at the Trinity College Fire Fighting Robot Contest, which was his first experience with robotics. Circuit Cellar was the first magazine for which he wrote an article (“An HC11 File Manager,” two-part series, issues 129 and 130, 2001).

TECH INTERESTS:
Robotics, among other things. He is particularly interested in developmental and evolutionary robotics (where the robot’s strategies, controllers, and so forth are evolved instead of programmed in directly).

RECENT TECH ACQUISITION:
Walter is enjoying his Raspberry Pi. “What a remarkable product! I think it’s great that I can take my AI software, which I’ve been writing on a PC, copy it to the Raspberry Pi, compile it with GCC, then off it goes with little or no modification!”

CURRENT PROJECTS:
Walter is designing a new programming language and interpreter (for Windows/Mac/Linux, including the Raspberry Pi) that uses a simulated quantum computer to drive a robot. “What better way to learn the basics of quantum computing than by building a robot around one?” The first version of this language is available on his website (walterkrawec.org). He has plans to release an improved version.

THOUGHTS ON EMBEDDED TECH:
Walter said he is amazed by the power of the latest embedded technology, such as the Raspberry Pi. “For less than $40 you have a perfect controller for a robot that can handle incredibly complex programs. Slap on one of those USB battery packs and you have a fully mobile robot,” he said. He used a Pololu Maestro to interface with the motors and analog sensors. “It all works and it does everything I need.” However, he added, “If you want to build any of this yourself by hand it can be much harder, especially since most of the cool stuff is surface mount, making it difficult to get started.”

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become full-fledged robotic platforms. Because such sensors commonly speak the I2C protocol, chaining several of them onto a single bus is easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
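
As a small sketch of that idea, several I2C devices can share the Pi’s two-wire bus and be polled in turn by address. The addresses and register used here are placeholders rather than those of a specific sensor, and Python’s smbus module is assumed.

```python
# Minimal sketch: poll a chain of I2C sensors sharing the Raspberry Pi's bus.
# Replace the placeholder addresses and register with your sensors' own.
import smbus

bus = smbus.SMBus(1)                   # the Pi's user-accessible I2C bus
SENSOR_ADDRESSES = [0x48, 0x49, 0x4A]  # hypothetical devices on the chain

def poll_sensors():
    readings = {}
    for addr in SENSOR_ADDRESSES:
        try:
            # Read one byte from register 0x00 of each device.
            readings[addr] = bus.read_byte_data(addr, 0x00)
        except IOError:
            readings[addr] = None      # device missing or not responding
    return readings

if __name__ == "__main__":
    for addr, value in sorted(poll_sensors().items()):
        print("0x%02X: %s" % (addr, value))
```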

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remotely controlling and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
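
A minimal sketch of that kind of server-side wrapper is shown below, using Python with Flask. The drive() helper, the serial command format, and the port name are assumptions made for illustration; the essay itself only specifies a server-side language such as Python or PHP.

```python
# Minimal sketch: expose "forward," "left," and "right" robot commands
# as web endpoints that a browser or an AJAX call can hit directly.
from flask import Flask, jsonify
import serial

app = Flask(__name__)
motors = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # microcontroller link (assumed)

def drive(left, right):
    # Send a simple text command; the real protocol depends on the
    # firmware running on the motor controller.
    motors.write(("M %d %d\n" % (left, right)).encode())

@app.route("/forward")
def forward():
    drive(100, 100)
    return jsonify(status="ok", action="forward")

@app.route("/left")
def left():
    drive(-50, 50)
    return jsonify(status="ok", action="left")

@app.route("/right")
def right():
    drive(50, -50)
    return jsonify(status="ok", action="right")

if __name__ == "__main__":
    # Students can point a browser or an AJAX request at, for example,
    # http://<robot-address>:5000/forward
    app.run(host="0.0.0.0", port=5000)
```

Pulling back a programming layer then simply means opening this file and seeing exactly what “forward” does.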

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize the user interface. This small superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework for interacting with hardware on SBCs. As these frameworks evolve and the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

I/O Raspberry Pi Expansion Card

The RIO is an I/O expansion card intended for use with the Raspberry Pi SBC. The card stacks on top of a Raspberry Pi to create a powerful embedded control and navigation computer in a small 20-mm × 65-mm × 85-mm footprint. The RIO is well suited for applications requiring real-world interfacing, such as robotics, industrial and home automation, and data acquisition and control.

The RIO adds 13 inputs that can be configured as digital inputs, 0-to-5-V analog inputs with 12-bit resolution, or pulse inputs capable of pulse width, duty cycle, or frequency capture. Eight digital outputs are provided to drive loads of up to 1 A each at up to 24 V.

The RIO includes a 32-bit ARM Cortex-M4 microcontroller that processes and buffers the I/O and provides seamless communication with the Raspberry Pi. The RIO processor can be user-programmed with a simple BASIC-like programming language, enabling it to perform logic, conditioning, and other I/O processing in real time. On the Linux side, the RIO comes with drivers and a function library to quickly configure and access the I/O and to exchange data with the Raspberry Pi.

The RIO features several communication interfaces, including an RS-232 serial port to connect to standard serial devices, a TTL serial port to connect to Arduino and other microcontrollers that aren’t equipped with an RS-232 transceiver, and a CAN bus interface.

The RIO is available in two versions: the RIO-BASIC costs $85 and the RIO-AHRS costs $175.

Roboteq, Inc.
www.roboteq.com

Two Campuses, Two Problems, Two Solutions

In some ways, Salish Kootenai College (SKC), based in Pablo, MT, and Penn State Erie, The Behrend College, in Erie, PA, couldn’t be more different.

SKC, whose main campus is on the Flathead Reservation, is open to all students but primarily serves Native Americans of the Bitterroot Salish, Kootenai, and Pend d’Oreilles tribes. It has an enrollment of approximately 1,400 students. Penn State Erie has roughly 4,300.

But one thing the schools have in common is enterprising employees and students who recognized a problem on their campuses and came up with technical solutions. Al Anderson, IT director at the SKC, and Chris Coulston, head of the Computer Science and Software Engineering department at Penn State Erie, and his team have written articles about their “campus solutions” to be published in upcoming issues of Circuit Cellar.

In the summer of 2012, Anderson and the IT department he supervises direct-wired the SKC dorms and student housing units with fiber and outdoor CAT-5 cable to provide students with better Ethernet service.

The system is designed around a Raspberry Pi, which queries the TMP102 temperature sensor and is itself queried via the SNMP protocol.

“Prior to this, students accessed the Internet via a wireless network that provided very poor service,” Anderson says. “We wired 25 housing units, each with a small unmanaged Ethernet switch. These switches are daisy-chained in several different paths back to a central switch.”

To maintain the best service, the IT department needed to monitor the system’s links with Intermapper, a simple network management protocol (SNMP) monitoring tool. The department also had to monitor the temperature inside the utility boxes, because exposure to the sun could cause the switches to get too hot.

This is the final installation of the Raspberry Pi in the SKC system. The clear acrylic case can be seen along with the TMP102 glued below the air hole drilled into the case. A ribbon cable was modified to connect the various pins of the TMP102 to the Raspberry Pi.

“We decided to build our own monitoring system using a Raspberry Pi to gather temperature data and monitor the network,” Anderson says. “We installed a Debian Linux distro on the Raspberry Pi, added an I2C Texas Instruments TMP102 temperature sensor…, wrote a small Python program to get the temperature via I2C and convert it to Fahrenheit, installed SNMP server software on the Raspberry Pi, added a custom SNMP rule to display the temperature from the script, and finally wrote a custom SNMP MIB to access the temperature information as a string and integer.”
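
A minimal sketch of the temperature-reading step in that chain is shown below; it is not Anderson’s actual script. It reads a TMP102 over I2C with Python’s smbus module and prints the temperature in Fahrenheit, using the sensor’s common default address of 0x48 on I2C bus 1 (assumptions, since the article doesn’t give wiring details).

```python
#!/usr/bin/env python
# Minimal sketch: read a TI TMP102 over I2C on a Raspberry Pi and print
# the temperature in Fahrenheit on a single line.
import smbus

TMP102_ADDR = 0x48   # common default address (assumption)
TEMP_REG = 0x00      # temperature register

bus = smbus.SMBus(1)

def read_temp_f():
    # SMBus returns the two temperature bytes swapped, so reorder them.
    raw = bus.read_word_data(TMP102_ADDR, TEMP_REG)
    msb = raw & 0xFF
    lsb = (raw >> 8) & 0xFF
    value = ((msb << 8) | lsb) >> 4  # 12-bit reading
    if value > 0x7FF:                # two's complement for below-zero temps
        value -= 4096
    celsius = value * 0.0625         # 0.0625 degrees C per LSB
    return celsius * 9.0 / 5.0 + 32.0

if __name__ == "__main__":
    print("%.2f" % read_temp_f())
```

Because the script prints a single line, it can be hooked into Net-SNMP with an extend rule in snmpd.conf, something like “extend dormTemp /usr/local/bin/read_tmp102.py” (name and path hypothetical), which exposes the reading over SNMP in the NET-SNMP-EXTEND-MIB tree, the sort of extension rule the article describes.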

Anderson, 49, who has a BS in Computer Science, did all this even as he earned his MS in Computer Science, Networking, and Telecommunications through the Johns Hopkins University Engineering Professionals program.

Anderson’s article covers the SNMP server installation, the I2C TMP102 temperature integration, the Python temperature-monitoring script, the SNMP extension rule, and accessing the SNMP extension via a custom MIB.

“It has worked flawlessly, and made it through the hot summer fine,” Anderson said recently. “We designed it with robustness in mind.”

Meanwhile, Chris Coulston, head of the Computer Science and Software Engineering department at Penn State Erie, and his team noticed that the shuttle bus introduced as his school expanded had low ridership. Part of the cause was the unpredictable timing of the bus, which has seven regular stops but also picks up students who flag it down.

The mobile unit to be installed in the bus.

“In order to address the issue of low ridership, a team of engineering students and faculty constructed an automated vehicle locator (AVL), an application to track the campus shuttle and to provide accurate estimates of when the shuttle will arrive at each stop,” Coulston says.

The system’s three main hardware components are a user’s smartphone; a base station on campus; and a mobile tracker that stays on the traveling bus.

The base station consists of an XTend 900-MHz wireless modem connected to a Raspberry Pi, Coulston says. The Pi runs a web server to handle requests from users’ smartphones. The mobile tracker consists of a GPS receiver, a Microchip Technology PIC18F26K22 microcontroller, and an XTend 900-MHz wireless modem.

Coulston and his team completed a functional prototype by the time classes started in August. As a result, a student can call up a bus locator web page on a smartphone. The browser loads a map of the campus via the Google Maps JavaScript API, and JavaScript code overlays the bus and bus stops. You can see the bus locator page between 7:40 a.m. and 7 p.m. EST, Monday through Friday.

“The system works remarkably well, providing reliable, accurate information about our campus bus,” Coulston says. “Best of all, it does this autonomously, with very little supervision on our part.  It has worked so well, we have received additional funding to add another base station to campus to cover an extended route coming next year.”

The base station for the mobile tracker is a sandwich of Raspberry Pi, interface board, and wireless modem.

And while the system has helped Penn State Erie students make it to class on time, what does Coulston and his team’s article about it offer Circuit Cellar readers?

“This article should appeal to readers because it’s a web-enabled embedded application,” Coulston says. “We plan on providing users with enough information so that they can create their own embedded web applications.”

Look for the article in an upcoming issue. In the meantime, if you have a DIY wireless project you’d like to share with Circuit Cellar, please e-mail editor@circuitcellar.com.