Quick Bits Resources

Event Queues

Written by Andrew Levido

Writing firmware that is robust and maintainable is a challenge, especially as your application grows in complexity. The basic “super loop” architecture gets unwieldy very quickly: as functionality is added, it becomes difficult to manage asynchronous events in a sensible fashion. A real-time operating system (RTOS) is certainly one answer, but it comes with overhead that might not be justified for moderately complex projects.

One alternative that I have successfully used in a range of projects (including in some commercial ones) is the event queue. In this architecture the asynchronous processes—usually interrupt service routines—post events to a first in first out (FIFO) queue as and when they occur. The application can then remove the events from the queue one at a time and process them in sequence as shown in Figure 1.

FIGURE 1. Events which occur asynchronously are posted to the FIFO queue and are processed in the order they occur. The queue effectively decouples the processing from the events, allowing relatively complex applications to be built robustly.
/* Snippet 1 ********************************************/

/* Event data type */
typedef struct {
    eventType_t type;
    uint32_t payload;
} event_t;

/* Event queue */
volatile event_t queue[EVT_Q_SIZE];
volatile uint32_t qhead;
volatile uint32_t qtail;

/* Event to be processed */
event_t currentEvt;

The first step is to define an event type and the queue, which is implemented as a circular buffer (see Code Snippet 1 above). In this example the event structure contains an event type and a single integer payload; in practice, the payload could be expanded as necessary. The queue is simply an array of events of appropriate length. We also declare two indices, qhead and qtail, which point to the notional ends of the queue. Note that these must all be declared volatile because the queue can be updated asynchronously from interrupt context. The queue is implemented as a circular buffer as shown in Figure 2.
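For Snippet 1 to compile, eventType_t and EVT_Q_SIZE must be defined somewhere. The article does not show these, so the following supporting definitions are purely illustrative; the event names and queue length are assumptions chosen for the sketch.

```c
#include <stdint.h>

/* Illustrative event types - the article does not define these */
typedef enum {
    EVT_NONE = 0,
    EVT_TICK,       /* periodic timer tick */
    EVT_BUTTON,     /* user pressed a button */
    EVT_UART_RX     /* serial data received */
} eventType_t;

#define EVT_Q_SIZE 16u   /* queue length - sized to suit the application */

/* Event data type and queue, as in Snippet 1 */
typedef struct {
    eventType_t type;
    uint32_t payload;
} event_t;

volatile event_t queue[EVT_Q_SIZE];
volatile uint32_t qhead;
volatile uint32_t qtail;
```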

FIGURE 2. The FIFO queue is implemented as a circular buffer. Events are posted to the tail of the queue which points to the next “empty” slot and removed from the head of the queue. The head and tail pointers chase each other around the buffer.
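The head/tail arithmetic of Figure 2 can be captured in a few small helper predicates. These are not from the article; they are a sketch written as pure functions of the two indices, using one common convention in which a slot is kept unused so that head == tail always means empty (the article's enqueue snippet instead detects the collision after incrementing the tail).

```c
#include <stdint.h>
#include <stdbool.h>

#define EVT_Q_SIZE 8u   /* illustrative queue length */

/* Queue empty: head and tail coincide (left-hand case in Figure 2) */
static bool queueEmpty(uint32_t head, uint32_t tail)
{
    return head == tail;
}

/* Queue full (one-slot-unused convention): advancing the tail
   would land it on the head */
static bool queueFull(uint32_t head, uint32_t tail)
{
    return ((tail + 1u) % EVT_Q_SIZE) == head;
}

/* Number of pending events, allowing for wrap-around */
static uint32_t queueCount(uint32_t head, uint32_t tail)
{
    return (tail + EVT_Q_SIZE - head) % EVT_Q_SIZE;
}
```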

When the queue is empty, the head and the tail pointers point to the same location as shown on the left in Figure 2. Data is pushed onto the queue at the tail and extracted at the head. The function to enqueue an event is shown in Code Snippet 2 below.

/* Snippet 2 **********************************************/

/* Enqueue an event */
void enqueueEvent(eventType_t type, uint32_t payload)
{
    disableInterrupts();
    queue[qtail].type = type;
    queue[qtail].payload = payload;
    qtail++;
    if(qtail == EVT_Q_SIZE) { qtail = 0; }
    if(qtail == qhead) { /* Deal with full queue */ }
    enableInterrupts();
}

The event is copied into the queue at the tail, and then the tail pointer is advanced to the next position, wrapping around if at the end of the array. If the tail crashes into the head of the queue at this point, we know the queue is full. Note that interrupts are disabled during the enqueuing process to make sure that each event is pushed onto the queue atomically.
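As a concrete illustration, here is a hypothetical timer ISR posting a tick count through the queue. Everything in this sketch is simplified so it can run on a host PC: the event is reduced to a bare payload, and the interrupt enable/disable functions are empty stubs standing in for the real interrupt-mask operations (for example __disable_irq()/__enable_irq() on a Cortex-M).

```c
#include <stdint.h>

#define EVT_Q_SIZE 8u

typedef struct { uint32_t payload; } event_t;   /* payload-only for brevity */

static volatile event_t queue[EVT_Q_SIZE];
static volatile uint32_t qhead, qtail;

/* Host-side stubs: on real hardware these would mask interrupts */
static void disableInterrupts(void) { }
static void enableInterrupts(void)  { }

static void enqueueEvent(uint32_t payload)
{
    disableInterrupts();
    queue[qtail].payload = payload;
    qtail++;
    if (qtail == EVT_Q_SIZE) { qtail = 0; }
    if (qtail == qhead) { /* full: drop or flag an error in a real design */ }
    enableInterrupts();
}

static volatile uint32_t tickCount;

/* Hypothetical SysTick-style handler: post one event per tick */
void timerISR(void)
{
    tickCount++;
    enqueueEvent(tickCount);
}
```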

/* Snippet 3 *********************************************/

/* Process events */
void processEvent(void)
{
    /* Return if queue empty */
    if(qhead == qtail) return;

    /* pop event off the FIFO */
    disableInterrupts();
    currentEvt.type = queue[qhead].type;
    currentEvt.payload = queue[qhead].payload;
    qhead++;
    if(qhead == EVT_Q_SIZE) { qhead = 0; }
    enableInterrupts();
    
    /* Process event - probably using a state machine */
}

The function to extract items from the queue is shown in Code Snippet 3 above. If the queue is empty, this function returns without doing anything. Otherwise, the event at the head of the queue is copied out and the qhead pointer advanced, again wrapping around at the end of the array. As with enqueuing, the de-queueing process must not be interrupted, or all hell will break loose.


The application code then processes the event completely before the function returns. In most cases the processing function will be some kind of state machine where events are processed or ignored depending on the state of the application.
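To make the state-machine idea concrete, here is a minimal hypothetical example: a controller that ignores tick events while idle, and counts them (toggling an LED, say) while blinking, with a button event switching between the two states. The states and events are invented for illustration; only the switch-on-state structure reflects the pattern described above.

```c
#include <stdint.h>

/* Illustrative event and state types - not from the article */
typedef enum { EVT_TICK, EVT_BUTTON } eventType_t;
typedef enum { STATE_IDLE, STATE_BLINKING } state_t;

static state_t state = STATE_IDLE;
static uint32_t blinkCount;

/* Process one event according to the current state */
static void handleEvent(eventType_t type)
{
    switch (state) {
    case STATE_IDLE:
        if (type == EVT_BUTTON) { state = STATE_BLINKING; }
        /* EVT_TICK is deliberately ignored in this state */
        break;

    case STATE_BLINKING:
        if (type == EVT_TICK)   { blinkCount++; /* toggle LED here */ }
        if (type == EVT_BUTTON) { state = STATE_IDLE; }
        break;
    }
}
```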

/* Snippet 4 **************************************/

/* main function */
int main(void)
{
    /* Initialise things*/
   
    /* Loop forever waiting to process events*/
    while(1){
        processEvent();
    }
    /* Never get here */
}

The final step is to call the event processing function from the application’s main() function in a tight loop as shown in Code Snippet 4 above. The initialization step sets up the various event sources such as timers, user interface elements etc. to post events using the enqueueEvent() function, usually from an ISR. Now when a new event occurs, it will be posted to the queue regardless of what else is happening and processed strictly in order of arrival. This makes for a very robust framework for applications of medium complexity.
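The whole pattern can be exercised on a host PC with a small simulation. This sketch (all names illustrative, the event reduced to a bare payload, interrupt masking omitted) posts three events and then drains the queue the way main() would, demonstrating the claim that events come out strictly in order of arrival.

```c
#include <stdint.h>
#include <stdbool.h>

#define EVT_Q_SIZE 8u

typedef struct { uint32_t payload; } event_t;   /* payload-only for brevity */

static volatile event_t queue[EVT_Q_SIZE];
static volatile uint32_t qhead, qtail;

static void enqueueEvent(uint32_t payload)
{
    /* interrupts would be disabled here on real hardware */
    queue[qtail].payload = payload;
    qtail = (qtail + 1u) % EVT_Q_SIZE;
    /* full-queue handling omitted in this sketch */
}

static uint32_t processedOrder[EVT_Q_SIZE];
static uint32_t processedCount;

/* Returns true if an event was processed, false if the queue was empty */
static bool processEvent(void)
{
    if (qhead == qtail) return false;              /* queue empty */
    uint32_t payload = queue[qhead].payload;
    qhead = (qhead + 1u) % EVT_Q_SIZE;
    processedOrder[processedCount++] = payload;    /* state-machine stand-in */
    return true;
}

static void runDemo(void)
{
    enqueueEvent(10u);
    enqueueEvent(20u);
    enqueueEvent(30u);
    while (processEvent()) { /* drain the queue, as main() would */ }
}
```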



Andrew Levido ([email protected]) earned a bachelor’s degree in Electrical Engineering in Sydney, Australia, in 1986. He worked for several years in R&D for power electronics and telecommunication companies before moving into management roles. Andrew has maintained a hands-on interest in electronics, particularly embedded systems, power electronics, and control theory in his free time. Over the years he has written a number of articles for various electronics publications and occasionally provides consulting services as time allows.