Bio: Yongli Chen is the Founder & CEO of Edgenesis. Edgenesis focuses on addressing IoT interoperability problems, with its core product, Shifu, adopted worldwide and connecting more than 2 million devices. Prior to Edgenesis, Chen spent several years at Microsoft Azure Core Compute and Networking.
For AI to be applied to the physical world, there needs to be some way to transmit signals between the AI model and physical devices. In human terms, we can view AI as the brain and physical devices as parts of the body. The transmission layer between them can be seen as the nervous system, which in technical terms is known as middleware.
The Internet of Things
The Internet of Things (IoT) has been developing over the past two decades, and smart devices, from toasters to televisions, are now ubiquitous in our lives. But IoT growth in industry has been far less successful so far, largely because of difficulties with interoperability between different devices.
Computer-controlled devices such as drones, industrial robots and power station modules are developed and produced by a range of companies. But like people from different countries speaking different languages, these devices often use different communication protocols, usually determined by manufacturer specifications, and thus cannot communicate effectively with each other, making it difficult to streamline workflows.
Over the past decade or so, attempts have been made to solve the problem by creating global consortiums to develop a single governing protocol for all devices. But such a collective standard would eliminate hardware manufacturers' ability to differentiate themselves in the market, and it has therefore not caught on. It would be akin to asking Apple to use Android: it simply isn't going to happen.
Given that a global protocol is not a realistic option, middleware is needed that addresses the problem in a different way: by acting as a translator between different device specifications and protocols.
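To make the translator idea concrete, here is a minimal sketch in Python of middleware exposing one canonical interface over two invented vendor protocols. Every class, register and field name below is an assumption made for illustration; none of it is Shifu's actual API or any real device's specification.

```python
# A minimal sketch of middleware-as-translator: one canonical interface,
# with per-vendor adapters hiding each device's native protocol.
# All class, register and field names are invented for illustration.
from abc import ABC, abstractmethod


class DeviceAdapter(ABC):
    """Canonical interface the application programs against."""

    @abstractmethod
    def read(self, quantity: str) -> float:
        ...


class ModbusStyleThermometer(DeviceAdapter):
    """Hypothetical vendor A: readings live in Modbus-style registers."""

    REGISTERS = {"temperature": 0x0001}

    def read(self, quantity: str) -> float:
        raw = self._read_holding_register(self.REGISTERS[quantity])
        return raw / 10.0  # this vendor reports tenths of a degree

    def _read_holding_register(self, register: int) -> int:
        return 215  # stand-in for a real serial/TCP transaction


class JsonHttpThermometer(DeviceAdapter):
    """Hypothetical vendor B: readings come back as HTTP/JSON."""

    FIELDS = {"temperature": "temp_c"}

    def read(self, quantity: str) -> float:
        payload = {"temp_c": 21.5}  # stand-in for an HTTP GET
        return float(payload[self.FIELDS[quantity]])


# The application issues one canonical call, whatever the device speaks.
for device in (ModbusStyleThermometer(), JsonHttpThermometer()):
    print(device.read("temperature"))  # 21.5 from both
```

The hard part in practice is writing each adapter; that is exactly the work the AI described below can take over.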
The key driver that now makes this possible, compared with 10 years ago, has been the rapid and recent proliferation of AI technologies, and more specifically the growing power and scope of large language models (LLMs) such as those that underpin ChatGPT, DeepSeek and Alibaba's Qwen. Although device languages are significantly simpler than human languages, before these powerful LLMs emerged the translation problem was still too complex for non-AI software to solve.
Now, however, by taking open-source AI models and fine-tuning them, it is possible to create middleware that translates between device languages faster than humans or non-AI programs, and that continues to learn and improve as it works. This allows industrial IoT devices to interface efficiently, shrinking the interoperability problem.
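One plausible pattern for wiring this up (an assumption on my part about how such middleware could work, not a description of Shifu's internals) is to have the fine-tuned model propose a field mapping once, then apply that mapping deterministically so the model is not in the hot path of every device message. The `generate` function below is a stand-in for a call to whatever fine-tuned open-source model is actually deployed.

```python
# Sketch: an LLM proposes a mapping from a vendor payload to a canonical
# schema; the middleware caches and replays that mapping deterministically.
# `generate` is a stand-in for a real call to a fine-tuned model.
import json

PROMPT = """Map the vendor payload fields to the canonical schema.
Canonical schema: {"temperature_c": float, "humidity_pct": float}
Vendor payload sample: {"tmp": 215, "rh": 40}  # tmp is deci-celsius
Return JSON mapping each canonical field to an arithmetic expression."""


def generate(prompt: str) -> str:
    # Stand-in: imagine the fine-tuned model returning this completion.
    return '{"temperature_c": "tmp / 10", "humidity_pct": "rh"}'


mapping = json.loads(generate(PROMPT))

# At runtime, apply the cached mapping to every incoming message.
payload = {"tmp": 215, "rh": 40}
canonical = {
    # eval is for brevity only; a real system would parse expressions safely.
    field: eval(expr, {"__builtins__": {}}, dict(payload))
    for field, expr in mapping.items()
}
print(canonical)  # {'temperature_c': 21.5, 'humidity_pct': 40}
```

Because the model's output is cached and reviewed rather than re-generated per message, translation stays fast and deterministic while the model keeps improving offline.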
To see the middleware in context, consider the work of a synthetic biology lab in the US. The task in question was the isolation and extraction of DNA samples, previously undertaken manually by six or seven PhD-level scientists. On a good afternoon they might successfully extract two samples; on an unlucky one, zero.
The process has now been completely automated using a network of robotic devices that provide sensor readings, transfer objects and conduct intricate procedures, all driven by the AI-enabled middleware. Thanks to the faster and more accurate work, the lab can now extract 700 samples in a single afternoon, and further yield improvements are possible as the underlying AI iterates.
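As a rough illustration only (the step and device names below are invented, not the lab's actual protocol), the middleware's job is to drive a multi-vendor workflow like this unattended:

```python
# Toy workflow sketch: each step runs on a different vendor's device, but
# the middleware drives them all through one canonical call. Step and
# device names are invented for illustration.
def extract_dna(sample_id: str) -> str:
    steps = [
        ("pipetting-robot", "lyse cells"),
        ("centrifuge", "separate phases"),
        ("plate-handler", "transfer supernatant"),
        ("spectrometer", "verify purity"),
    ]
    for device, action in steps:
        # Stand-in for a canonical middleware command to one device.
        print(f"[{sample_id}] {device}: {action}")
    return f"{sample_id} extracted"


# Throughput comes from looping this unattended, sample after sample.
for i in range(3):
    print(extract_dna(f"sample-{i:03d}"))
```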
Applications
The range of possible future applications for the technology is so wide that it is limited mostly by imagination. But if we focus on manufacturing, there is an obvious need for middleware to help producers keep up with rapidly changing consumer demand.
Manufacturers must become increasingly agile as demand shifts, and it is now common to see producers making products reactively, based on specific consumer needs, rather than simply creating something and sending it into the market. But this responsive approach requires quick turnaround times and changes to production lines, which is difficult in the conveyor-belt-dominated factories of Industry 3.0. In some cases, it can even require an entirely new factory.
In the robot-based factories of Industry 4.0, however, there is an opportunity to make many different products on the same production line by reprogramming machines and switching material inputs. As long as the robots have the physical capability, they can create anything. An AI application that streamlines this shift in production enables near real-time responses to consumer demand, as well as smaller factory footprints and less wasted product. One of our customers is a battery maker that produces four different products. Before implementing the system, switching between product types took a week; now it takes only a few hours.
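A hypothetical sketch of what such a software-driven changeover might look like: each product becomes a recipe of machine parameters, and switching production is just a broadcast of a new recipe to every robot on the line. The product names and parameter values are invented, not our customer's actual data.

```python
# Hypothetical changeover sketch: a product is a recipe of parameters,
# and a changeover is a broadcast of that recipe to the line's robots.
# Product names and parameter values are invented for illustration.
RECIPES = {
    "cell_a": {"material": "LFP", "press_kN": 400, "cycle_s": 12},
    "cell_b": {"material": "NMC", "press_kN": 550, "cycle_s": 9},
}


def changeover(line: list[str], product: str) -> None:
    recipe = RECIPES[product]
    for robot_id in line:
        # Stand-in for the middleware call that reprograms one robot.
        print(f"{robot_id}: loading {product} -> {recipe}")


changeover(["press-1", "welder-1", "packer-1"], "cell_b")
```

When the changeover is a data push rather than a physical rebuild, the time to switch products collapses from days to hours.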
Another application of our middleware model outside manufacturing is in power generation. Power grids can be dangerous places for humans, but they still require upkeep, inspections and maintenance. Here, technology such as drones and all-terrain vehicles (ATVs) equipped with robot arms can conduct those inspections and maintenance in place of humans.
Given the lack of interoperability, the difficulty lies in controlling these machines and integrating the sensors needed to provide the requisite data. Middleware can solve this, allowing a single operator to control a number of devices at once and removing the need for human intervention in the most dangerous areas of the grid.
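To illustrate the single-operator idea (again a sketch under invented names, not a real control API), the middleware can fan one canonical command out to heterogeneous devices:

```python
# Sketch of fleet control behind one interface: the operator issues one
# canonical command and the middleware fans it out to each device in its
# own protocol. All names are invented for illustration.
from dataclasses import dataclass
from typing import Callable


@dataclass
class InspectionTarget:
    tower_id: str
    lat: float
    lon: float


class Drone:
    def goto(self, lat: float, lon: float) -> None:
        print(f"drone: flying to ({lat}, {lon})")  # stand-in flight command


class RobotArmATV:
    def drive_to(self, lat: float, lon: float) -> None:
        print(f"ATV: driving to ({lat}, {lon})")  # stand-in vendor API


class FleetController:
    """One operator-facing interface; per-device translation behind it."""

    def __init__(self) -> None:
        self._fleet: list[tuple[object, Callable]] = []

    def register(self, device: object, move: Callable) -> None:
        self._fleet.append((device, move))

    def inspect(self, target: InspectionTarget) -> None:
        print(f"inspecting tower {target.tower_id}")
        for device, move in self._fleet:
            move(device, target.lat, target.lon)


fleet = FleetController()
fleet.register(Drone(), lambda d, lat, lon: d.goto(lat, lon))
fleet.register(RobotArmATV(), lambda d, lat, lon: d.drive_to(lat, lon))
fleet.inspect(InspectionTarget("T-042", 39.90, 116.40))
```

The registration step is where the per-vendor translation lives; once a device is registered, the operator never sees its native protocol again.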
Without AI to boost interoperability, integrating the technology to work in the same fashion would require a whole department of people and would still take around 10 times as long. Moreover, given the learning capabilities of AI, the more often the integration process is performed, the quicker it becomes in future applications, particularly given the relative simplicity of device languages.
Barriers and impacts
There are not many other examples of middleware of this kind being produced just yet, but our project is open source and therefore available to everyone. In terms of where it is being put to use, around 30% of our users are in China and the other 70% are spread across the rest of the world, but generally there is little difference between China and other markets in this sector. Many hardware manufacturers are based in China, but the languages used by the devices are global, differing according to company specifications.
Looking forward, we are now past the stage where hardware is a barrier to progress, meaning software is where the critical improvements can be made. But even then, the real barrier at the moment lies in educating the market on the benefits and commercial viability of the product.
The technology is in its infancy, and while some innovators are willing to look past the risk and identify the potential value, new and immature product offerings are generally met with skepticism. The more trust that can be built through successful applications of the technology, and the more business cases that can be demonstrated, the more we will see it used.
The other thing that needs to be addressed is that adoption of the technology will have a huge impact on employment, and not just for low-wage manual workers. In the case of the biotechnology lab, automation has already replaced the need for highly educated scientists, and more broadly we have even seen software developers who played a role in creating AI lose their jobs. This needs a government response in countries all over the world, either to safeguard livelihoods or to provide alternative means of making a living.
The next steps
There are two things that will really accelerate the use of AI across the world. The first is improvements to interoperability, as discussed above; the other is reducing the cost of AI. If more open-source projects emerge and, as a result, the cost for companies of deploying proprietary AI falls, adoption will speed up.
Both of these issues are being addressed globally by governments and businesses, and there is no reason to think that we won’t see massive changes over the next five years. After that we will reach a point where whoever wants to build an integrated AI system will be able to do so, and it will be relatively cheap.
We are almost certainly on the cusp of exponential acceleration in AI.