Disney Research Explores Ways to Add RFID Intelligence to Robots, Toys

Walt Disney Co.'s lab network, together with scientists from MIT, the University of Washington and Carnegie Mellon, has developed systems that the company could use to help robots identify individuals, as well as to track everyday interactions between people and things.
Published: May 30, 2016

Disney Research, a division of Walt Disney Co., recently published a spate of academic papers describing novel uses of radio frequency identification (RFID) and related technologies. These projects could one day help the company to track everyday interactions between people and things, as well as create robots able to identify individuals.

Alanson Sample, a research scientist at Disney Research, described the projects at the RFID Journal LIVE! 2016 conference and exhibition, held earlier this month in Orlando, Fla.

Using the ID-Match system, an autonomous robot hosting an interactive quiz game could accurately identify each of the five human participants.

The authors of one such research paper describe a prototype system called ID-Match, which uses passive ultrahigh-frequency (UHF) RFID technology to facilitate “natural interactions” between humans and robots. The group has found, through laboratory research and testing, that a hybrid computer-vision and RFID solution can be employed to enable an autonomous robot to quickly identify and localize individuals within a group, thereby allowing for a natural and personalized interaction.

Disney Research scientist Alanson Sample

RapID, one of the other three Disney Research projects described in recently published studies, utilizes passive RFID to create inexpensive, wireless sensing devices for gaming purposes. A third project, known as PaperID, explores methods for tracking how individuals move or touch a tagged piece of paper, including recording drawn or marked responses via a special pen. The fourth project, EM-ID, focuses on the unique patterns of electromagnetic (EM) waves emitted by electronic devices, such as a smartphone, a laptop or a Hasbro Lightsaber toy. Rather than relying on RFID tags or bar codes, the researchers measured each object's distinctive EM emissions in order to identify it.
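The EM-ID matching algorithm is not detailed in this article, but the underlying idea can be sketched in a few lines of code: record an emission signature for each known device ahead of time, then compare a new measurement against that library. The signature function, cosine-similarity matching and device names below are illustrative assumptions, not Disney's published method.

```python
# Illustrative sketch (not Disney's implementation): identify an electronic
# device by comparing its electromagnetic-emission spectrum against a library
# of previously recorded signatures, using cosine similarity.
import numpy as np

def em_signature(samples, n_bins=256):
    """Reduce a raw EM-emission capture to a normalized magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))[:n_bins]
    norm = np.linalg.norm(spectrum)
    return spectrum / norm if norm > 0 else spectrum

def identify_device(samples, library):
    """Return the library device whose stored signature best matches."""
    sig = em_signature(samples)
    best_name, best_score = None, -1.0
    for name, ref in library.items():
        score = float(np.dot(sig, ref))   # cosine similarity (both normalized)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical usage, with signatures recorded earlier for each device:
# library = {"lightsaber": em_signature(capture_a), "phone": em_signature(capture_b)}
# print(identify_device(new_capture, library))
```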

Disney Research consists of three labs, located in Los Angeles, Pittsburgh and Zurich, comprising 230 principal researchers, as well as 60 staff members and postdoctoral students. The RFID research has been underway at the Pittsburgh lab, though the four projects were also the result of collaborations with scientists at the Massachusetts Institute of Technology (MIT), the University of Washington and Carnegie Mellon University.

With ID-Match, the group has developed a hybrid system combining RFID and computer-vision technologies, in order to enable robots not only to identify a person and his or her location, but also to recognize him or her within a group. The system employs a technique known as reverse synthetic aperture to identify the motion paths of RFID tags worn by people, and to correlate that data with the physical paths of those individuals as measured via a 3D depth camera. This allows a robot to accomplish what humans do naturally: discern one person from another.

The company used a variety of off-the-shelf passive EPC Gen 2 RFID tags, an Impinj Speedway Revolution RFID reader and a single Impinj far-field reader antenna mounted behind a robot. The system employs a Microsoft Kinect 3D depth camera to determine how many people are in front of it and to track their movements. It also utilizes RFID to determine each individual's identity, and the reader's received signal strength indicator (RSSI) to approximate where that person is located.
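The article does not give the localization math itself; as a rough sketch, a reader's RSSI value can be converted into an approximate range with a standard log-distance path-loss model. The reference values below are placeholders rather than calibration figures from the ID-Match work.

```python
# Rough illustration only: approximate a tag's range from the reader antenna
# using a log-distance path-loss model. The reference values are placeholders,
# not calibration numbers from the ID-Match paper.
def estimate_range_m(rssi_dbm, rssi_at_1m_dbm=-45.0, path_loss_exponent=2.2):
    """Coarse range estimate (meters) from received signal strength."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

if __name__ == "__main__":
    for rssi in (-45.0, -55.0, -65.0):
        print(f"RSSI {rssi:6.1f} dBm  ->  ~{estimate_range_m(rssi):.1f} m")
```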

This RFID-enabled pop-up book uses the PaperID system to trigger sound effects when a page is opened and when the wheel is rotated to expose the images of different animals.

The data is transmitted to software—developed by Disney Research scientists, as well as University of Washington researchers—that identifies who the individual is, along with his or her location.

In that way, a robot could, for instance, identify and greet members of a family. Each individual would need to wear an RFID tag containing an ID number linked to his or her name and other information provided ahead of time. As the family members approach the robot, it could identify each person by name using the UHF RFID reader, while its Kinect vision system could identify the exact number of people and each individual’s position based on his or her movements (by recognizing individual skeleton movement patterns). The system would then link that movement data, based on the Kinect system, with the tags’ low-level communication channel parameters, as measured by the RFID reader (such as RSSI, RF phase, and reverse synthetic aperture path and read rate), thereby enabling the robot to greet each person individually.
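A minimal sketch of that linking step, assuming per-tag RSSI time series from the reader and per-person distance-to-antenna time series from the Kinect, might look as follows. The correlation-and-greedy-assignment approach shown here is an illustration, not the published reverse-synthetic-aperture algorithm.

```python
# Illustrative sketch of the matching step described above (not the published
# ID-Match algorithm): a tag's RSSI tends to rise as its wearer moves toward
# the antenna, so each tag's RSSI time series is correlated against each
# Kinect-tracked person's distance-to-antenna time series, and tags are
# greedily assigned to the best-matching person.
import numpy as np

def correlation(tag_rssi, person_distance):
    """Correlate RSSI with negated distance (closer person, stronger signal)."""
    a = np.asarray(tag_rssi, dtype=float)
    b = -np.asarray(person_distance, dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def match_tags_to_people(tag_series, person_series):
    """tag_series: {tag_id: [rssi...]}, person_series: {person_id: [dist_m...]}."""
    scores = [(correlation(r, d), tag, person)
              for tag, r in tag_series.items()
              for person, d in person_series.items()]
    assignment, used_tags, used_people = {}, set(), set()
    for score, tag, person in sorted(scores, reverse=True):
        if tag not in used_tags and person not in used_people:
            assignment[tag] = person
            used_tags.add(tag)
            used_people.add(person)
    return assignment
```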

During tests involving a robot-hosted interactive quiz, the ID-Match technologies enabled the robot to recognize each individual in a group of five, along with his or her location, in less than eight seconds, with 94 percent accuracy.

In addition, the researchers tested the ID-Match system in an office environment in which 40 participants entered and exited by passing through a hallway in which a Kinect camera and an RFID reader were installed. Sixteen participants carried RFID tags linked to their identities, moving throughout the office for a day, with 302 instances of movement altogether. Of the 129 instances in which an RFID tag was being carried, the system correctly recognized the users 94.5 percent of the time.

This test found that the ID-Match system could identify individuals with sufficient speed and accuracy to determine their direction of travel through a virtual portal scenario. Moreover, it could enable high-accuracy occupancy counting.
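The portal logic itself is not described in the paper summaries; one simple, hypothetical way to turn tracked positions into a direction of travel and an occupancy count is sketched below.

```python
# Hypothetical portal logic (not from the paper): once a person has been
# identified and tracked through the doorway, direction of travel follows from
# the sign of their displacement along the hallway axis, and occupancy is a
# running count of entries minus exits.
def direction_of_travel(positions_m):
    """positions_m: a person's successive positions along the hallway axis."""
    return "entering" if positions_m[-1] > positions_m[0] else "exiting"

class OccupancyCounter:
    def __init__(self):
        self.count = 0

    def record(self, positions_m):
        if direction_of_travel(positions_m) == "entering":
            self.count += 1
        else:
            self.count = max(0, self.count - 1)
        return self.count

# counter = OccupancyCounter()
# counter.record([0.2, 1.1, 2.4])   # walking inward -> occupancy 1
```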

Thanks to RapID, the computer is monitoring this tic-tac-toe game.

RapID utilizes EPC Gen 2 passive UHF RFID tags for applications that had previously required battery-powered transponders. With RapID, researchers developed a method for reducing the latency of detecting and responding to user interaction with tags. This method, which employs a predictive model based on signal-level information, could be used to track game pieces or other moveable items. For example, the researchers attached EPC Gen 2 passive RFID tags to game pieces representing the X's and O's in a tic-tac-toe game. To create a RapID RFID-enabled tic-tac-toe board, the researchers used a laser-cut piece of plywood fitted with an RFID tag under each of the nine squares onto which an X or O could be placed. When a game piece was set on a square, metallic foil attached to the back of the piece blocked that square's tag from responding, enabling the system to identify the location of each playing piece (X or O) used during the game.
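The published RapID model is not reproduced here, but the blocked-tag mechanic lends itself to a simple sketch: squares whose tags stop responding during a polling window are treated as occupied. The read-rate threshold below is an assumed placeholder.

```python
# Illustrative sketch (not the published RapID model): infer which tic-tac-toe
# squares are occupied by checking which of the nine square tags stopped
# responding during the latest polling window. A square whose tag read rate
# falls below a threshold is treated as covered by a game piece.
def board_state(read_counts, expected_reads_per_window=20, covered_fraction=0.2):
    """read_counts: dict mapping square index (0-8) to tag reads in the window."""
    threshold = expected_reads_per_window * covered_fraction
    return {square: (count < threshold)          # True -> piece on this square
            for square, count in read_counts.items()}

# Example window: tags under squares 0 and 4 went quiet, so pieces sit there.
# print(board_state({0: 1, 1: 19, 2: 21, 3: 18, 4: 0, 5: 20, 6: 22, 7: 19, 8: 20}))
```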

RapID is similar to an older Disney Research project called ID-Sense (see RFID for Reading People’s Reactions). The goal of the ID-Sense project was to develop a system for identifying human-object interactions within a home or store environment, and thereby determine what is happening at that location—what jacket was tried on in a store, for instance, or whether someone poured himself or herself a bowl of cereal in a kitchen. As part of this project, researchers affixed off-the-shelf UHF tags to a toy, as well as to various household objects, and monitored the change in low-level communication parameters (such as RSSI, RF phase, read rate and Doppler shift) between the RFID tag and the reader. This enabled them to determine if the tag was being moved or interacted with.

With ID-Sense, the RFID technology could track a person’s activities, such as making breakfast, drinking milk, wearing glasses, reading a book or turning a TV on and off. During testing, the group attached tags to such items as a bowl, a container of cereal and a milk carton. The system could not only identify when a tag was moved, but also when it was touched or covered. By placing a reader on the ceiling, the team was able to detect the movements of items, such as a cup of tea being lifted and put back onto the table.
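As an illustration of how such low-level channel parameters might be turned into interaction labels, the rule-based sketch below summarizes one tag's reads over a short window; the thresholds are assumptions, not values from the ID-Sense paper.

```python
# Illustrative rule-based classifier (thresholds are placeholders, not values
# from the ID-Sense paper): summarize one tag's reads over a short window and
# label the interaction from changes in the low-level channel parameters.
import statistics

def classify_interaction(rssi_dbm, phase_rad, reads, expected_reads=30):
    """rssi_dbm/phase_rad: per-read measurements in the window; reads: count."""
    if reads < 0.2 * expected_reads:
        return "covered or touched"          # tag largely blocked or detuned
    if len(rssi_dbm) > 1 and statistics.pstdev(rssi_dbm) > 3.0:
        return "moved"                       # large signal-strength swings
    if len(phase_rad) > 1 and (max(phase_rad) - min(phase_rad)) > 1.0:
        return "moved"                       # phase changes with tag motion
    return "idle"
```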

The system can then gauge human interaction with everyday objects, which could be used not only to understand someone's movements within his or her home, but also for interactive storytelling using RFID-tagged toys and computer-based media. In the case of a toy, researchers tested the system with a stuffed toy lion wearing an RFID collar that transmits a signal to an RFID reader built into a game console. When a child moved the toy, the reader could detect that movement. If the reader detected a swiping motion over the tag, the ID-Sense system could determine that the child was petting the lion, and the animation on the screen could then respond accordingly, such as showing the lion taking a nap. Shaking the lion could result in the digital version dancing. ID-Sense could also be used in stores to identify which products customers try on or handle. To simulate a retail store environment, five participants tried on clothes drawn from 20 tagged items on two racks. The system's RFID reader captured each movement and detected whether an item had been put on.

In a simulated retail store, ID-Sense could identify which products customers tried on or handled.

For the PaperID project, Disney employees worked with scientists at the University of Washington and Carnegie Mellon to create a method of recording responses to questions printed on RFID-enabled paper. Ultra-thin UHF tags were attached to pieces of paper and paper objects, enabling the linking of writing or drawing on paper to software on a computer device.

The tags contain silver nanoparticles, and conductive-ink pens were used in conjunction with them. The system detected movements related to a tag: covering, touching, sliding, turning (as with a knob) and swiping. If something like an RFID-enabled pop-up book were opened near an RFID reader linked to a computer running the researchers' software, the system could identify what movements were taking place and display related video and audio content. The solution can also be used to control a wireless lamp: when a user waves a hand over the tag, the computer linked to the RFID reader interprets that gesture and turns the lamp on or off. Users could also select answers during a multiple-choice test by marking the RFID-tagged options with a conductive-ink pen, and the software linked to the reader would then detect and interpret those responses.
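A minimal sketch of that gesture handling, assuming the reader reports how long each paper tag went unread, might distinguish a quick swipe from a sustained cover and map tag EPCs to answer choices. All identifiers and thresholds below are hypothetical, not the PaperID implementation.

```python
# Illustrative sketch (not the PaperID implementation): distinguish a quick
# swipe over a paper tag from a sustained cover by how long the tag stays
# silent, then map tag IDs to the multiple-choice answers they are printed on.
def classify_gesture(silent_seconds, swipe_max_s=0.5):
    """A short dropout looks like a swipe; a longer one looks like a cover."""
    if silent_seconds == 0:
        return "none"
    return "swipe" if silent_seconds <= swipe_max_s else "cover"

ANSWER_TAGS = {"e200-0001": "A", "e200-0002": "B", "e200-0003": "C"}  # hypothetical EPCs

def selected_answer(dropouts):
    """dropouts: {tag_epc: seconds the tag was unreadable during the question}."""
    for epc, silent in dropouts.items():
        if classify_gesture(silent) == "cover":   # option held covered or marked
            return ANSWER_TAGS.get(epc)
    return None
```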

With any of the solutions being trialed, Disney reports, interactive objects could be created at a much lower cost than was previously feasible, since traditional systems require batteries and usually involve circuit boards. By contrast, Sample explained during his RFID Journal LIVE! session, RFID tags cost approximately 10 cents apiece and can be made disposable or recyclable.