# EQ #7  The C code below converts an unsigned 16-bit binary number into its decimal equivalent, in packed BCD format. Can you describe in simple terms how this works?

```c
void adjust (unsigned char *p)
{
    unsigned char t = *p + 3;
    if (t & 0x08) *p = t;    /* low nibble was >= 5: keep the +3 */
    t = *p + 0x30;
    if (t & 0x80) *p = t;    /* high nibble was >= 5: keep the +3 */
}

unsigned long binary2bcd (unsigned int n)
{
    unsigned char bcd[3] = {0, 0, 0};
    int i;

    for (i = 0; i < 16; ++i) {
        adjust (&bcd[0]);
        adjust (&bcd[1]);
        adjust (&bcd[2]);

        bcd[2] <<= 1;
        if (bcd[1] & 0x80) ++bcd[2];
        bcd[1] <<= 1;
        if (bcd[0] & 0x80) ++bcd[1];
        bcd[0] <<= 1;
        if (n & 0x8000) ++bcd[0];
        n <<= 1;
    }

    return ((unsigned long) bcd[2] << 16) | ((unsigned long) bcd[1] << 8) | bcd[0];
}
```

The algorithm is relatively straightforward. The combination of the adjust() calls and the left shift of all three bytes constitutes multiplying a 3-byte (6-digit) packed-BCD number by two. The original 16-bit number is converted to decimal one bit at a time, starting with the MSB: on each pass, the result is multiplied by two and the next bit of the binary number is shifted in as its least significant bit.

If you work through an example that has just one bit set in the binary number, you’ll see that bit’s decimal “weight” appear in the result. If multiple bits are set, their weights are summed with the proper decimal carries.

Note that the function adjust() is equivalent to the x86 instruction DAA (decimal adjust after addition), except that here the adjustment is done before the shift rather than after the addition, which is equally valid.
