Watching the Clock

By Chris Diorio

A tag's clock frequency is critically important to tag performance. Here's why.

Anyone who has shopped for a PC in recent years is familiar with the "clock-rate wars" that have, until recently, dominated computer marketing. As we all know, clock rate is only one of the many factors that determine a PC's performance. However, it is an important metric—if you buy a PC that is too slow to run your children's favorite video game, you'll hear no end of their unhappiness. Clock rate is an equally important parameter for Gen 2 tags—choose wrongly, and you will suffer a fate every bit as ignominious as buying a PC that's too slow. The following information will help you prevent this outcome.

The EPCglobal Gen 2 protocol defines a robust mechanism for communication between readers (interrogators) and tags. It does not, however, dictate how readers and tags are designed, nor how well they perform. In fact, performance is not even addressed in the EPCglobal certification process: certification ensures that Gen 2 readers and tags operate correctly, but it says nothing about how well they must perform. As such, while certification is certainly important, caveat emptor is still very much advised. The fact is, certified tags may perform poorly if their designers make poor engineering choices.

The selection of a tag's clock frequency is one engineering choice that is critically important to tag performance: Aim too low, and the tag may miss interrogator commands or return data at the wrong rate; aim too high, and it will consume excessive power, shortening read and write ranges.

Fortunately, the minimum clock frequency for a Gen 2 tag—1.92 MHz—can be calculated from purely theoretical considerations. Unfortunately, some legacy RFID tags use a 1.28 MHz clock frequency. If a Gen 2 tag designer cuts corners and reuses one of his existing 1.28 MHz clock oscillators, tag performance will be compromised. Worse yet, a tag designer is actually incentivized to use a 1.28 MHz clock: the resulting lower chip power translates into longer read range, which is easy to demonstrate to an end user, while the correspondingly degraded command decoding and off-frequency responses can be blamed on the reader or on the "noisy environment."

Tag vendors generally don't publicize the clock frequency their chips use—after all, 1.92 MHz doesn't carry nearly the same bragging potential as a 3.2 GHz microprocessor—nor can end users easily test a tag to uncover that value. So what should you do? The answer is simple. Ask your vendor what clock frequency its tags use. If your vendor asks why it matters, here's what you need to know as a savvy buyer:

Readers use digital signaling to send commands to tags. More specifically, they send unique trains of data-0s and data-1s to form a command. The Gen 2 protocol specifies that the data-0 and data-1 symbols must have different lengths, and that tags must measure the length of each incoming symbol to determine whether it is a data-0 or a data-1. In simple terms, think of the tag as a stopwatch. A reader sends a symbol; the tag starts its stopwatch at the beginning of the symbol and stops it at the end. The tag counts the number of clock ticks between start and stop and, based on the measured length, decides whether the symbol is a data-0 (a short symbol) or a data-1 (a long symbol). Easy enough, right? For communications in the tag-to-interrogator direction, the tag uses a related process to set the backscatter (response) data rate.
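
To make the stopwatch picture concrete, here is a minimal sketch of that tick-counting decision in Python. In Gen 2, the reader's preamble includes a calibration interval (RTcal) that the tag also measures in clock ticks, and half of that measurement serves as the pivot between short and long symbols; the specific durations below are illustrative values, not figures from any particular reader or tag.

```python
# Illustrative sketch only: a tag classifying reader symbols by counting
# clock ticks. Durations are example values, not vendor or spec test figures.

def ticks(duration_us: float, clock_mhz: float) -> int:
    """Whole clock ticks the tag's counter accumulates over a duration."""
    return int(duration_us * clock_mhz)

def decode_symbol(symbol_us: float, rtcal_us: float, clock_mhz: float) -> int:
    """Return 0 or 1: compare the symbol's tick count against the pivot,
    which is half the tick count measured for the preamble's RTcal interval."""
    pivot = ticks(rtcal_us, clock_mhz) / 2
    return 0 if ticks(symbol_us, clock_mhz) < pivot else 1

clock_mhz = 1.92                 # tag oscillator frequency
rtcal_us = 15.625                # example calibration interval from the reader
print(decode_symbol(6.25, rtcal_us, clock_mhz))    # short symbol -> data-0
print(decode_symbol(11.5, rtcal_us, clock_mhz))    # long symbol  -> data-1
```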

Now, here's the rub: If a tag's clock frequency is low, it won't count many clock ticks during the measurement period just described. That is, the granularity by which it decides whether an incoming symbol is a data-0 or a data-1 is coarse. If the clock frequency is high, it will be better able to discern a data-0 from a data-1. By analogy, imagine you have to time a 50-yard dash, where the racers run one at a time. If you have a digital stopwatch that reads only in 10-second increments, then every runner will be timed at 10 seconds, resulting in the runners' performances being indistinguishable from one another. On the other hand, if you have a digital stopwatch that reads in tenths of seconds, you will be able to determine which runner was the fastest. This same notion of sampling resolution applies to tags that must discern, with adequate margin, the difference between a data-0 and a data-1.
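
To put numbers on that granularity argument, the short calculation below compares the two clock rates mentioned earlier against Gen 2's shortest reader symbol, the 6.25-microsecond Tari. It is simple resolution arithmetic, not the full worst-case margin analysis, which also has to account for reader timing tolerances.

```python
# Rough resolution arithmetic: how finely does each clock sample a short
# Gen 2 reader symbol? (Tari = 6.25 us is the protocol's shortest data-0.)

TARI_US = 6.25

for clock_mhz in (1.28, 1.92):
    tick_us = 1.0 / clock_mhz                   # one "stopwatch" increment
    ticks_in_tari = TARI_US * clock_mhz         # ticks counted across one Tari
    resolution_pct = 100.0 * tick_us / TARI_US  # uncertainty of one tick, as % of Tari
    print(f"{clock_mhz} MHz clock: {tick_us:.2f} us per tick, "
          f"{ticks_in_tari:.0f} ticks per Tari, "
          f"one tick = {resolution_pct:.1f}% of a Tari")
```

Running it shows the 1.28 MHz stopwatch ticking only eight times across the shortest symbol, versus twelve ticks at 1.92 MHz: the coarse-versus-fine distinction of the 50-yard dash analogy, in numbers.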

As it turns out, for a tag using a 1.28 MHz clock, the decoding margin between data-0s and data-1s at Gen 2's higher data rates is eroded to zero. That is, the length of the measured data symbol (0 or 1) can be equal to the decision threshold! This means the tag will not be able to tell a data-0 from a data-1, and the command will fail. On the other hand, a tag with a 1.92 MHz clock operating under the same conditions always maintains a decoding margin greater than zero. Thus, incoming symbols are never indeterminate.

Similarly, in the tag-to-reader direction, if a tag's clock frequency is low (referring again to our 50-yard dash analogy), the granularity by which it can decide its backscatter data rate will be coarse. If the clock frequency is high, it will be better able to set the correct tag data rate. This is important, because Gen 2 specifies the tolerances for the various possible tag data rates, and if a tag responds outside the acceptable limits, a reader may (in fact, should) ignore it.
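
For the return link, Gen 2 has the reader convey the desired backscatter rate through another preamble calibration interval (TRcal): the tag's reply frequency is a protocol divide ratio (DR) divided by that interval, so the tag must measure TRcal in clock ticks, too. The sketch below, which assumes the top 640 kHz reply rate with DR = 64/3 and ignores the other error sources a full analysis would include, shows how one-tick measurement granularity alone becomes a percentage error in the tag's response frequency.

```python
# Rough sketch: reply-frequency error from quantizing the TRcal measurement.
# Assumes the 640 kHz top backscatter rate with divide ratio DR = 64/3 and
# ignores oscillator tolerance and reply-rate synthesis, which add further error.

DR = 64.0 / 3.0                        # divide ratio selected by the reader
TARGET_BLF_HZ = 640e3                  # highest Gen 2 backscatter link frequency
TRCAL_US = DR / TARGET_BLF_HZ * 1e6    # calibration interval the reader sends

for clock_mhz in (1.28, 1.92):
    measured_ticks = round(TRCAL_US * clock_mhz)        # tag's whole-tick measurement
    inferred_trcal_us = measured_ticks / clock_mhz      # TRcal as the tag perceives it
    reply_khz = DR / (inferred_trcal_us * 1e-6) / 1e3   # frequency the tag will target
    quantization_pct = 100.0 * 0.5 / (TRCAL_US * clock_mhz)  # half-tick rounding bound
    print(f"{clock_mhz} MHz clock: TRcal read as {measured_ticks} ticks, "
          f"reply target {reply_khz:.1f} kHz, "
          f"rounding alone contributes up to ~{quantization_pct:.1f}% error")
```

The coarser clock spends more of the protocol's tolerance budget on quantization before any other error source is even counted.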

The bottom line is that tags operating with a 1.28 MHz clock cannot meet Gen 2's error-rate requirements, whereas tags with a 1.92 MHz clock meet the Gen 2 requirements with margin.

To compound matters, EPCglobal's certification process does not actually test whether a vendor's tag meets the Gen 2 symbol-measurement and backscatter-accuracy requirements. Rather, it allows a vendor to self-certify (guarantee by design) that its tag meets the requirements. If a tag vendor performs an overly simplified analysis and blindly picks a low clock frequency, the tag may pass certification, but RFID system performance will suffer.

So why do we believe a tag should use a 1.92 MHz clock, for both reader-to-tag symbol decoding and tag-to-reader backscatter? The answer is simple: The team that wrote the Gen 2 protocol performed an exacting analysis of possible clock rates before finalizing the protocol.

The more pertinent question, though, is this: Does the choice of clock frequency really matter? The answer is a resounding yes. Tags with poor decoding margin, or that respond at the wrong data rate (because they're out of tolerance), simply will not perform as well in the real world as tags that get it right. More insidiously, such tags may appear to perform better in laboratory testing, because the longer range bought by choosing too low a clock frequency makes them seem superior. And in the field, it's easy to blame the reader rather than the tag for readability problems.

Now, just as clock rate doesn't tell the whole story of a PC, it doesn't tell the whole story of an RFID tag, either. Parameters such as read sensitivity and interference rejection also figure prominently in establishing a tag's ultimate performance capability. If the fundamentals of the design aren't right, however, it's a bit like building a house on sand.

So what should an end user do? Well, for the same reasons no one would ever buy a computer without understanding its most critical attributes, you shouldn't procure tags without appreciating their important characteristics. Simply ask your tag vendor this vital question: What clock frequency does your tag use?

Chris Diorio is cofounder and chairman of Impinj, a fabless semiconductor company in Seattle that makes Gen 2-based RFID chips, inlays and readers. He is also one of the chief architects of the EPCglobal Gen 2 specification. John Schroeter, responsible for technical and product marketing communications at Impinj, assisted in the writing of this article. A detailed analysis and report of the tag clock rate issues summarized in this article is available for download from Impinj's Web site.