
Watching the Clock

A tag's clock frequency is critically important to tag performance. Here's why.
By Chris Diorio
Readers use digital signaling to send commands to tags. More specifically, they send unique trains of data-0s and data-1s to form a command. The Gen 2 protocol specifies that the data-0 and data-1 symbols must have different lengths, and that tags must measure the length of each incoming symbol to determine whether it is a data-0 or a data-1. In simple terms, think of the tag as a stopwatch. A reader sends a symbol; the tag starts its stopwatch at the beginning of the symbol and stops it at the end. The tag counts the number of clock ticks between start and stop and, based on the measured length, decides whether the symbol is a data-0 (a short symbol) or a data-1 (a long symbol). Easy enough, right? For communications in the tag-to-reader direction, the tag uses a related process to set the backscatter (response) data rate.
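
To make the stopwatch idea concrete, here is a minimal sketch of a tick-counting decoder. It is purely illustrative, not what any real tag silicon runs: the Tari value (the data-0 length), the 1.5-Tari data-1 and the midpoint threshold are assumptions chosen for the example, not the protocol's exact decision rule.

```python
# A minimal sketch of the "stopwatch" idea: count clock ticks during each
# incoming symbol and compare the count against a decision threshold to
# call it a data-0 or a data-1.

def ticks(duration_us: float, clock_hz: float) -> int:
    """Clock ticks counted during an interval (nearest whole tick)."""
    return round(duration_us * 1e-6 * clock_hz)

def decode_symbol(symbol_us: float, threshold_ticks: int, clock_hz: float) -> int:
    """Shorter than the threshold -> data-0; otherwise -> data-1."""
    return 0 if ticks(symbol_us, clock_hz) < threshold_ticks else 1

# Assumed timing for the example: Tari (data-0 length) of 6.25 microseconds,
# a data-1 of 1.5 Tari, and a threshold placed midway between them.
clock_hz = 1.92e6
tari_us = 6.25
data0_us, data1_us = tari_us, 1.5 * tari_us
threshold = ticks((data0_us + data1_us) / 2, clock_hz)

print(decode_symbol(data0_us, threshold, clock_hz))  # prints 0
print(decode_symbol(data1_us, threshold, clock_hz))  # prints 1
```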

Now, here's the rub: If a tag's clock frequency is low, it won't count many clock ticks during the measurement period just described. That is, the granularity by which it decides whether an incoming symbol is a data-0 or a data-1 is coarse. If the clock frequency is high, it will be better able to discern a data-0 from a data-1. By analogy, imagine you have to time a 50-yard dash, where the racers run one at a time. If you have a digital stopwatch that reads only in 10-second increments, then every runner will be timed at 10 seconds, resulting in the runners' performances being indistinguishable from one another. On the other hand, if you have a digital stopwatch that reads in tenths of seconds, you will be able to determine which runner was the fastest. This same notion of sampling resolution applies to tags that must discern, with adequate margin, the difference between a data-0 and a data-1.

As it turns out, for a tag using a 1.28 MHz clock, the decoding margin between data-0s and data-1s at Gen 2's higher data rates is eroded to zero. That is, the length of the measured data symbol (0 or 1) can be equal to the decision threshold! This means the tag will not be able to tell a data-0 from a data-1, and the command will fail. On the other hand, a tag with a 1.92 MHz clock operating under the same conditions always maintains a decoding margin greater than zero. Thus, incoming symbols are never indeterminate.
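
The arithmetic below is only an illustration of that erosion, assuming Gen 2's fastest reader-to-tag timing (a Tari of 6.25 microseconds) and the shortest allowed data-1 (1.5 Tari). It ignores the reader timing tolerances and the tag's quantized measurement of the preamble, which are what consume the remaining margin in the worst case, but it shows how much coarser the tick counts are at 1.28 MHz.

```python
# Illustrative arithmetic: tick-count separation between a data-0 and the
# shortest allowed data-1 at the two clock frequencies discussed above.
TARI_US = 6.25              # assumed data-0 length (microseconds)
DATA1_US = 1.5 * TARI_US    # shortest allowed data-1

for clock_hz in (1.28e6, 1.92e6):
    tick_us = 1e6 / clock_hz
    d0 = TARI_US / tick_us       # data-0 length in clock ticks
    d1 = DATA1_US / tick_us      # data-1 length in clock ticks
    print(f"{clock_hz/1e6:.2f} MHz: data-0 = {d0:.1f} ticks, "
          f"data-1 = {d1:.1f} ticks, separation = {d1 - d0:.1f} ticks")
```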

Similarly, in the tag-to-reader direction, if a tag's clock frequency is low (recall the 50-yard dash analogy), the granularity with which it can set its backscatter data rate will be coarse. If the clock frequency is high, the tag will be better able to hit the correct data rate. This is important because Gen 2 specifies tolerances for the various possible tag data rates, and if a tag responds outside the acceptable limits, a reader may (in fact, should) ignore it.
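
As a rough sketch of that quantization, assume (hypothetically) that the tag produces its backscatter link frequency by dividing its clock by an integer; the 500 kHz target is likewise just an example value. A faster clock offers finer divider steps, so the rate the tag can actually produce lands closer to what the reader asked for.

```python
# Hypothetical model: the tag's backscatter link frequency (BLF) is its
# clock divided by an integer, so the achievable rates are quantized.

def closest_blf(clock_hz: float, target_blf_hz: float) -> float:
    """Best BLF the tag can produce with an integer clock divider."""
    divider = max(1, round(clock_hz / target_blf_hz))
    return clock_hz / divider

target = 500e3   # example BLF target (Hz)
for clock_hz in (1.28e6, 1.92e6):
    blf = closest_blf(clock_hz, target)
    err = (blf - target) / target * 100
    print(f"{clock_hz/1e6:.2f} MHz clock -> BLF {blf/1e3:.1f} kHz "
          f"({err:+.1f}% from target)")
```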
