Man and Machine Collaborating on the Factory Floor: A Nightmare or a Match Made in Heaven?

By Pieter Simoens

Whenever mankind works together with machinery, new methods are needed to cater to human unpredictability—and to ensure that robots can anticipate it.

Industry 5.0: Where Smart Robotics Meets Human Creativity
Industry has evolved at breakneck speed during the last 300 years. It all began in the 18th century, when the rural societies in Europe and the United States underwent the process of urbanization and the iron and textile industries started to blossom, in part thanks to the invention of the steam engine.

Just before the First World War, new industries, such as steel and oil, emerged. Meanwhile, the advent of electricity allowed us to start mass-producing goods. That marked the start of Industry 2.0.

Since then, the pace of development has become ever faster. In the 1970s, we witnessed the start of Industry 3.0, featuring digital technology, the automation of industrial processes and the introduction of robots. Now, we are at the dawn of Industry 4.0, which largely builds on the Internet of Things (IoT) revolution: devices of all sorts, including robots, are connected to the Internet and produce a continuous stream of data. This data can be used to generate more insights into industrial processes, and to support those processes' further optimization.

We've evolved from the steam engine to the IoT in only three centuries. Impressive, right? Of course, we have to add a note of caution about these developments. As automation and optimization have become more important throughout the years, human involvement has been increasingly threatened. Yet, it is precisely this threat that will be halted with the coming of Industry 5.0. In a world in which every individual wants to fully express themselves, there will be increasing demand for unique, customized and personalized products. In such an era, the holy grail will no longer be robot-controlled mass production, but rather human creativity.

As such, in the smart factories of 2035, a new collaboration model will need to be put into place. A marriage, you might say, must take place between man and machine, with robots doing the heavy mechanical labor and their human co-workers being the creative architects, inventing new, custom-made products and overseeing their production in tomorrow's factories.

The questions, then, are these: How can we foster a partnership between man and machine in such a setting? And how can we forge an optimal pairing, so that 1 + 1 effectively becomes 3? It will all boil down to effective communication between the different parties.

Digital Twins for Our Smart Factories
To give Industry 5.0 every chance of success, it will be crucial to advance communication between the different actors (humans and machines). Of course, machines already communicate with each other. For instance, at large car factories, integrators, with the help of standardized protocols, ensure that machines (sometimes from different providers) know enough about each other to meet production targets. But let us be honest: in today's factories, every machine basically does its own bit of (assembly line) work, and little real communication is required.

In the future, once machines become more autonomous and need to anticipate each other's actions, communication will become more difficult. For example, imagine two robots approaching each other on the factory floor. In this situation, how can one robot anticipate how the other is going to move? ("Will he go left or right? And what should I do?") That is before taking into account the positions, actions and reactions of other robots nearby.
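
To make the anticipation problem concrete, here is a toy sketch (in Python) of one robot trying to anticipate another. The grid positions, the straight-line prediction and the yield-or-proceed rule are purely illustrative assumptions, not a description of any real factory control system.

```python
# Toy sketch: robot A extrapolates robot B's last observed motion to predict
# its next cell, then decides whether to keep going or to yield.
# The grid model and the decision rule are illustrative assumptions.

def predict_next(position, velocity):
    """Naive straight-line prediction of the other robot's next grid cell."""
    return (position[0] + velocity[0], position[1] + velocity[1])

def choose_action(my_next, other_position, other_velocity):
    """Yield if our planned cell collides with the other robot's predicted cell."""
    other_next = predict_next(other_position, other_velocity)
    return "yield" if my_next == other_next else "proceed"

# Robot A plans to move into cell (3, 4); robot B sits at (3, 5), heading toward it.
print(choose_action(my_next=(3, 4), other_position=(3, 5), other_velocity=(0, -1)))
# -> "yield"
```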

To manage this type of situation, you could make a digital copy (or twin) of the factory in the cloud. As such, you would create a digital model of the physical factory floor, a model that would continuously update itself based on real-time sensor data, in which all decisions and their outcomes were simulated in real time. In this scenario, all authority would be hosted at a central location from which all instructions would depart, and the robots and machines on the factory floor would be the physical result of what was happening in that virtual world.
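
A minimal sketch of what such a cloud-hosted twin loop could look like is given below. The class, the sensor format and the toy scoring function are all hypothetical placeholders; a real twin would run a detailed process or physics model of the factory floor.

```python
# Minimal sketch of a centrally hosted "digital twin" control loop.
# The sensor feed, the state model and the instruction format are hypothetical.

class FactoryTwin:
    def __init__(self):
        self.state = {}  # latest known position/status per machine

    def ingest(self, sensor_readings):
        """Update the virtual model from real-time sensor data."""
        self.state.update(sensor_readings)

    def simulate(self, instructions, part_location=(3, 4)):
        """Score a candidate instruction set. As a stand-in for a real
        simulation, prefer giving the pick task to the closest robot."""
        score = 0
        for machine_id, action in instructions.items():
            if action.startswith("pick"):
                pos = self.state[machine_id]
                score -= abs(pos["x"] - part_location[0]) + abs(pos["y"] - part_location[1])
        return score

    def decide(self, candidates):
        """Pick the instruction set whose simulated outcome scores best."""
        return max(candidates, key=self.simulate)

twin = FactoryTwin()
twin.ingest({"robot_1": {"x": 3, "y": 4}, "robot_2": {"x": 3, "y": 6}})
plan = twin.decide([
    {"robot_1": "pick part A", "robot_2": "wait"},
    {"robot_1": "wait", "robot_2": "pick part A"},
])
print(plan)  # instructions are then pushed back down to the physical robots
```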

At first glance, this dictator model seems an ideal solution to deal with complex situations on the factory floor while ensuring that production targets are met. Technically, such a scenario is already perfectly feasible; the only things you would need are a fast data connection between the physical machines in the production area and the virtual brain, and a lot of processing power.

There are, however, two caveats to this. The first is purely economic in nature. Let us not forget that industrial settings are often complicated and competitive places where many actors collaborate (suppliers and partners, and sometimes also competitors). In such a context, the protection of data, privacy and information is enormously important—which does not fit the dictator model scenario, in which the central brain must have access to all possible types of data (including competitively sensitive data) to do its job properly. For many business leaders, having to share this information would be the ultimate nightmare.

The second caveat? Human unpredictability. Even if we could operate a factory in which the commercial interests of only one party were involved, the centrally controlled scenario would fall to pieces as soon as one person walked around the factory—a person with his or her own autonomy and authority.

Imagine, for example, that the human employee (the creative architect, as we labeled that individual earlier) notices that a robot is doing something wrong and gets involved to rectify the fault. At that moment, the whole system would come to a standstill, as the virtual brain would have lost all control. Hence, this model might only be valid for industrial facilities that focus on the production of bulk goods, and where the role of humans is minimal—or, in the long run, perhaps even non-existent.

A New Form of Artificial Intelligence: Complex Reasoning
In other words, whenever man and machine have to work together, we will need to use different methods to cater to human unpredictability, and to ensure that robots can anticipate it.

One particularly promising principle is complex reasoning, a new form of artificial intelligence (AI) that can be used to teach machines how to reason autonomously and anticipate the actions of something or someone else. However, there is still a long way to go before we can put the principle of complex reasoning into practice. After all, AI as we know it today is based on deep learning, a powerful technology designed to recognize patterns in huge amounts of data. By now, we have mastered this technology, so the goal is to take the next step and have machines ask themselves, "How do my actions affect the actions of people around me?"
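
As a rough illustration of this kind of reasoning, the sketch below lets a robot score each of its own candidate actions by the human reaction it is expected to trigger. The action names, the predicted reactions and the scores are illustrative assumptions, not the output of an actual reasoning engine.

```python
# Toy sketch of "How do my actions affect the actions of people around me?"
# The hand-written table maps each candidate robot action to the reaction we
# expect from a nearby worker and a score for the resulting joint outcome.
# All entries are illustrative assumptions.

PREDICTED_REACTION = {
    "start welding now":       ("worker steps back and pauses their own task", 2),
    "wait for worker to pass": ("worker continues walking, no interruption", 5),
    "reroute around worker":   ("worker keeps working, small detour cost", 4),
}

def choose_action(candidates):
    """Pick the action with the best predicted joint outcome and return a
    human-readable justification alongside it."""
    best = max(candidates, key=lambda action: PREDICTED_REACTION[action][1])
    reaction, _ = PREDICTED_REACTION[best]
    return best, f"chose '{best}' because the expected reaction is: {reaction}"

action, why = choose_action(list(PREDICTED_REACTION))
print(action)  # -> "wait for worker to pass"
print(why)
```

Returning a plain-language justification alongside the decision also hints at the transparency requirement discussed below.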

To make things even more complicated, we must throw this extra consideration into the mix: in an industrial setting, the foremost requirement is transparency, in order to make sure production targets can be met. But deep learning is actually the opposite of this; you train the system to recognize patterns, but you have little insight into how that system comes to its conclusions. Hence, an extra requirement of complex reasoning is that it must be sufficiently transparent (or explainable) for people to accept it, meaning that in the future, we will be talking about explainable AI.

Lifelong Learning: Also for Robots
In the run-up to 2035, complex reasoning will become a new strategic research topic, with teams from across the globe studying how the underlying algorithms must be developed, implemented and optimized.

Furthermore, we will be confronted with the issue of how machines can continually improve their reactions and ways of anticipating actions. This means that new reward systems based on implicit and explicit feedback signals must be developed. You can bet that, in the future, the concept of lifelong learning will no longer apply only to man, but to machines as well.
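
To make the idea of such a reward system a little more tangible, here is a minimal sketch in which a robot keeps a running value per action and nudges it after every explicit signal (an operator's correction or approval) and implicit signal (the task finished without intervention). The weighting of the two signals and the learning rate are assumptions chosen purely for illustration.

```python
# Toy sketch of lifelong learning from feedback: keep a running value estimate
# per action and update it from explicit and implicit feedback signals.
# The 0.7/0.3 weighting and the learning rate are illustrative assumptions.

from collections import defaultdict

class FeedbackLearner:
    def __init__(self, learning_rate=0.1):
        self.values = defaultdict(float)   # estimated value of each action
        self.learning_rate = learning_rate

    def update(self, action, explicit=0.0, implicit=0.0):
        """Blend explicit and implicit feedback into one reward and move the
        action's value estimate a small step toward it."""
        reward = 0.7 * explicit + 0.3 * implicit
        self.values[action] += self.learning_rate * (reward - self.values[action])

learner = FeedbackLearner()
learner.update("hand over tool slowly", explicit=+1.0, implicit=+1.0)   # operator approves
learner.update("hand over tool quickly", explicit=-1.0, implicit=+0.5)  # operator corrects
print(dict(learner.values))
```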

Want to Know More?
• The 3D reconstruction of, for example, tunnels or industrial buildings is a time-consuming and expensive process. Read more about LiBorg 2.0, a robot for on-the-fly 3D mapping of environments, based on LIDAR technology.
• This article explains how you can inspect the inside of complex, high-quality products and avoid damaging them.
• Learn more about Antwerp startup Aloxy, a spinoff of imec and the University of Antwerp, which delivers plug-and-play IoT solutions for digitizing manual valves in the petrochemical industry and for asset management during maintenance and shutdowns.
• In The Internet of Unexpected Things, a selection of IoT projects is presented in which imec collaborates closely with industrial partners.
• How can we plug robots into the IoT? Find out more in this article.

Pieter Simoens is a professor at Ghent University, in Belgium, and is affiliated with imec. He received an M.Sc. degree in electronic engineering in 2005 and a Ph.D. degree in 2011 from Ghent University. Pieter is an assistant professor affiliated with Ghent's Department of Information Technology, as well as with iMinds, and teaches courses on mobile application development and software engineering. He is the author or co-author of more than 70 papers published in international journals or in the proceedings of international conferences, and has been involved in several national and European research projects, including FP6 MUSE, FP7 MobiThin and H2020 FUSION.