University of Arkansas Researchers Study How to Link Visual Identification Technology With RFID

By Claire Swedberg

The school's RFID Research Center and the Center for Advanced Spatial Technologies will also try to develop standardized formats for VIT data, while testing the use of visual data with radio frequency identification for inventory tracking.


Two University of Arkansas research groups—the RFID Research Center, at the Sam M. Walton College of Business, and the J. William Fulbright College of Arts and Sciences’ Center for Advanced Spatial Technologies (CAST)—have teamed up to study ways in which to adapt emerging visual identification technologies (VIT) for retail applications, as well as how VIT systems could be used to complement RFID technology. VIT employs inexpensive 2D and 3D optical imaging technologies, commonly found in cell phones and videogame devices, to identify objects by color, shape and size, without the need for bar codes or product numbers. According to the researchers, VIT-based systems could be used to quickly recognize products on store shelves, add those goods to inventory lists, verify that they are at their correct locations and remove them from inventory upon checkout.

The researchers will test the VIT technology within the RFID Research Center’s retail environment laboratory, in order to learn how 3D VIT-based data regarding the objects’ positions within a retail store could be coupled with RFID read data obtained from ultrahigh-frequency (UHF) readers to provide a more detailed item-level inventory count. The team intends to release the results of its testing in the fall of 2013.

The efforts are intended to not only standardize the use of VIT-based data, but also determine how an RFID system coupled with VIT technology could enhance the tracking of inventory on a store’s shelves and at sales terminals, or other solutions intended to enhance the customer shopping experience.

CAST has developed software that analyzes data gathered by two- and three-dimensional optical imaging hardware. The software can be used for geospatial location and mapping, as well as for shape analysis of objects, to help users equipped with cameras identify those items and determine their locations relative to other nearby objects. CAST and the RFID Research Center are now working to create a set of standards for collecting and storing such data.

Justin Patton, the RFID Research Center’s managing director, explains that VIT technology incorporates any camera-based device that takes an image and provides information about that image based on its size, shape or location. Google Goggles and Amazon Remembers applications, Patton says, are two examples of a 2D version of this technology. A person can take a picture using a mobile phone camera, and then upload that photograph. The Google Goggles application (for Android or iPhone handsets) or the Amazon Remembers app (for either the Android or Apple iOS operating system) will then compare that photo against a database of images and attempt to identify that natural or manmade object, based on its shape and color.

A 3D version of the same technology is being used for video games. The Xbox 360’s Kinect sensor, for example, employs an optical camera feed: it emits a field of light, determines an object’s location based on the way that light is reflected off the item within that field, and then overlays what it has perceived onto the video game’s image in a three-dimensional format.

The RFID Research Center team has been watching the evolution of this optical-based technology, as well as CAST’s efforts, for several years, Patton explains. In fact, the CAST laboratory is located within the same building as the RFID Research Center. According to Patton, the groups have teamed up to address two concerns: standardization and the possibility of using VIT and RFID technologies together within the retail environment, in order to improve inventory information.

“A lot of people [potential hardware and software vendors] are working on VIT technology,” Patton states. Currently, he says, there are no standard rules for storing visual data and matching that data to objects in a photograph. In other words, there is no standard set of descriptors or unique identifiers that would be utilized with each image of an object. If there were an open standard, Patton notes, retailers or other companies could share the standardized data related to an item. He says he hopes to see standards group GS1 lead efforts in this area, adding that he is presently in discussions with GS1—which has established a number of standards for bar-code and RFID technologies—about this option.

When it comes to testing within its own laboratory, Patton says, the RFID Research Center (together with CAST) will conduct proof-of-concept studies for both 3D and 2D VIT technologies, to be used in tandem with RFID. Some of this testing is already underway, he reports. Patton has fitted an Xbox Kinect camera on handheld passive UHF RFID readers, to determine whether the technology can be used not only to identify specific items (such as sweaters or jackets) based on pictures taken with the camera, but also to create a map of a store that includes each product’s 3D location within that store. The VIT data can be used to reinforce RFID data, thereby providing more precise location information than either technology might be able to provide independently (see Kinect Gives RFID Another Pair of Eyes).
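The pairing Patton describes—RFID supplying each item's unique identity while the camera supplies its precise position—can be sketched roughly as follows. This is a minimal illustration, not the center's actual software; the product types, EPC strings, coordinates and the type-based matching rule are all invented for clarity.

```python
# Hypothetical sketch: fusing camera-based item detections with RFID reads
# to build an item-level 3D store map. All names and data are illustrative.

from dataclasses import dataclass

@dataclass
class CameraDetection:
    product_type: str   # e.g. "sweater", recognized by shape and color
    position: tuple     # (x, y, z) position in meters, from depth data

@dataclass
class RfidRead:
    epc: str            # unique item ID read from the UHF tag
    product_type: str   # product type decoded from the EPC

def build_store_map(detections, reads):
    """Pair each unique RFID ID with a 3D position seen by the camera.

    RFID supplies item identity; the camera supplies precise location.
    Matching here is by product type only -- a real system would need a
    stronger association (timing, antenna zone, repeated sightings)."""
    store_map = {}
    unplaced = list(detections)
    for read in reads:
        for det in unplaced:
            if det.product_type == read.product_type:
                store_map[read.epc] = det.position
                unplaced.remove(det)
                break
    return store_map

reads = [RfidRead("urn:epc:1001", "sweater"), RfidRead("urn:epc:1002", "jacket")]
detections = [CameraDetection("jacket", (2.0, 0.5, 1.1)),
              CameraDetection("sweater", (2.3, 0.5, 1.1))]
print(build_store_map(detections, reads))
# {'urn:epc:1001': (2.3, 0.5, 1.1), 'urn:epc:1002': (2.0, 0.5, 1.1)}
```

The point of the sketch is the division of labor: neither data stream alone yields both a unique identity and a precise 3D position, but the join of the two does.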

On its own, Patton says, “RFID is good for precise inventory counts.” The technology can read the unique ID number of each item’s label without line of sight, providing a specific list of what is, for example, packed in a box or on a shelf, regardless of whether or not an RFID reader’s user would be able to see those items. Pinpointing an object’s exact location based on its proximity to other items within read range, however, is more difficult.

In some cases, the proximity and locations of items can make a big difference regarding the accuracy of an RFID-based inventory count. Patton cites the example of shoes set up on a store’s display table; the table could be situated against a wall separating the storefront from a storage area in which many other shoes are located. The reader might capture footwear on both sides of that wall, thus providing inaccurate information regarding the quantity and styles of shoes actually on the table. By using the reader with a camera employing VIT technology, however, the store could identify the exact shoes on the table, and where they were positioned, screening out stray RFID reads from behind the backroom wall.
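The screening idea in the display-table example boils down to capping the RFID count for each style at what the camera actually sees. A minimal sketch, with made-up shoe styles and counts (a real deployment would match on richer visual features than a style name):

```python
# Illustrative sketch of screening stray RFID reads with camera data.
# Styles and counts are invented for this example.

from collections import Counter

def screen_stray_reads(rfid_styles, camera_counts):
    """Keep at most as many reads of each shoe style as the camera
    actually sees on the table; surplus reads are assumed to come
    from tagged stock behind the backroom wall."""
    kept, seen = [], Counter()
    for style in rfid_styles:
        if seen[style] < camera_counts.get(style, 0):
            kept.append(style)
            seen[style] += 1
    return kept

# The reader captures five reads, but the camera sees only
# two loafers and one boot on the table itself.
rfid_styles = ["loafer", "loafer", "loafer", "boot", "sandal"]
camera_counts = {"loafer": 2, "boot": 1}
print(screen_stray_reads(rfid_styles, camera_counts))
# ['loafer', 'loafer', 'boot']
```

The extra loafer and the sandal are discarded as likely backroom reads, leaving an inventory count that matches what is visibly on the table.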

A solution that could employ 2D technology, Patton says, would be deployed at a store’s point of sale. Currently, some RFID companies are working to create solutions that would allow the self-checkout of goods at a store, based on UHF tag reads, by simply passing a shopping cart through a reader gate. To date, however, such solutions have not been adopted by retailers in great volume, due to accuracy concerns. Some items may not be accounted for if they are missing a tag, or if their tags are not properly read. Coupled with VIT technology, however, the system could be more precise. Patton notes that an optical camera could measure the approximate volume of goods within a shopping cart, and software could then compare that figure against the expected volume of goods, based on the RFID reads, and determine whether the read data is accurate and the transaction can be approved.
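The volume cross-check Patton outlines could be sketched as below. The per-item volumes and the tolerance are invented for illustration; an actual system would calibrate both from real products and camera measurements.

```python
# Hedged sketch of a point-of-sale volume cross-check: compare the cart
# volume the camera measures against the volume implied by RFID reads.
# Product volumes (liters) and the tolerance are made up.

ITEM_VOLUMES_L = {"sweater": 3.0, "jeans": 4.0, "shoebox": 8.5}

def transaction_plausible(tagged_items, measured_volume_l, tolerance=0.15):
    """Approve only if the measured cart volume is within tolerance of
    the volume expected from the RFID reads; measured volume well above
    expected suggests a missing or unread tag."""
    expected = sum(ITEM_VOLUMES_L[item] for item in tagged_items)
    return abs(measured_volume_l - expected) <= tolerance * expected

# Reads roughly match the camera's volume estimate: approve.
print(transaction_plausible(["sweater", "jeans"], 7.2))   # True
# Camera sees far more volume than the reads explain: hold for review.
print(transaction_plausible(["sweater"], 7.2))            # False
```

The second case is the accuracy concern Patton raises: an untagged or unread item adds bulk the camera can see but the reader gate cannot account for.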

However, Patton says, the group initially intends to test only whether 2D or 3D VIT and RFID systems could be employed to reinforce data for retailers throughout the inventory process. First, he adds, the team plans to “nail down the data requirements at the item level” for creating a standard regarding VIT data, and then evaluate the VIT technology’s capabilities within a store environment, in combination with RFID reads.

The group intends to release the preliminary results of the testing this autumn.