
What Does Interoperability Mean for the Future of Machine Learning?

Published on September 22, 2020

Interoperability, or the ability of two systems to communicate effectively, is a key factor in the future of machine learning. In banking, healthcare, and other everyday industries, we’ve come to expect that the platforms we use to exchange information can communicate seamlessly whenever we need them to. Because each of us has hundreds of thousands of data points tied to our health, our finances, and other major facets of our lives, it makes sense that recent developments in machine learning and artificial intelligence (AI) could be used to make all of this data work together for our benefit.

Interoperability in action: Healthcare

Let’s use healthcare as an example of how interoperable machine learning technology can enhance our lives. Consider high-tech medical procedures like CT scans, which automatically generate large volumes of sensor data for a single patient. This is very different from the health information your doctor manually enters into a proprietary database during a routine check-up. Without a way to quickly and automatically integrate these two data types for analysis, the potential for fast diagnosis of critical illnesses is lost, which has created demand for optimization across different information models. Current methods and legacy systems simply weren’t built with interoperability in mind. Recent developments in machine learning algorithms are opening the door to stronger, faster translation between information platforms, and the future of machine learning will enable vastly improved medical care and optimized research practices.

The role of neural networks

Modeled after the human brain, neural networks are sets of algorithms designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data (images, sound, text, or time series) must be translated.

According to a 2017 article in MIT News, neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department. Since then, the approach has fallen in and out of favor, but its return is central to the future of machine learning.

In 2017, the Open Neural Network eXchange (ONNX) format was created as a community-driven, open-source standard for deep learning and traditional machine learning models. The goal of the project was to tackle the limitations of a disjointed AI ecosystem by creating a standardized framework. With support from numerous companies in the AI community, ONNX has gained traction in both the software and hardware industries. It allows developers and data science teams to minimize future performance and compatibility challenges while opening doors for innovation across a variety of tech-centric fields. Because ONNX lets developers use the machine learning framework of their choice and removes compatibility barriers, hardware and software products can work together more easily, now and in future iterations.

In terms of compatibility, according to a Medium article by Microsoft’s Faith Xu, “…the ONNX community has contributed many different tools to convert and performantly run models. Models trained on various frameworks can be converted to the ONNX format using tools such as TensorFlow-ONNX and ONNXMLTools (Keras, Scikit-Learn, CoreML, and more). Native ONNX export capabilities are already supported in PyTorch 1.2. Additionally, the ONNX model zoo provides popular, ready-to-use models.”
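To make that workflow concrete, here is a minimal sketch, not taken from the article, of exporting a small PyTorch model to the ONNX format and running it with ONNX Runtime. The toy model, the file name, and the input shape are illustrative assumptions.

```python
# Minimal sketch: export a toy PyTorch model to ONNX, then run it with ONNX Runtime.
# The architecture, file name, and shapes here are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A small stand-in network; in practice this would be any trained model.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# PyTorch ships a native ONNX exporter; a dummy input defines the graph's shapes.
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Any ONNX-compatible runtime can now load the same file, regardless of the
# framework that produced it.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
result = session.run(None, {"input": np.random.randn(1, 10).astype(np.float32)})
print(result[0].shape)  # (1, 2)
```

Models trained in other frameworks would take the converter route instead (for example TensorFlow-ONNX or ONNXMLTools, as noted in the quote above), but they end up in the same portable format.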

Semantic interoperability – a requirement for successful AI

While the ONNX format has already helped unify the AI and machine learning efforts of many large companies, it has become apparent that simply having all your data in the same format does not automatically guarantee success. This has spurred an ongoing focus on semantic interoperability, the ability of computer systems to exchange data with unambiguous, shared meaning. You can’t reliably learn the patterns, predictions, or anomalies in data when that data is a mash-up of sources that do not mean the same thing. To this end, high-quality, human-annotated datasets are needed to accurately train machine learning models, whether your data comes from a single source or has been aggregated from heterogeneous sources and converted via an ONNX-style format.

Still have questions about machine learning? Take a look at our Machine Learning FAQ.
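As a purely hypothetical illustration of the semantic gap that formats alone can’t close, the sketch below maps two record sources that describe the same clinical facts under different field names and units into one shared schema before the data is pooled for training. All field names and values are invented for the example.

```python
# Hypothetical sketch of semantic harmonization: two sources describe the same
# facts with different field names and units, so each is mapped to one shared
# schema before the records can be pooled into a single training set.

def from_clinic_a(record: dict) -> dict:
    # Source A stores temperature in Fahrenheit and calls the diagnosis field "dx".
    return {
        "patient_id": record["id"],
        "temp_c": round((record["temp_f"] - 32) * 5 / 9, 1),
        "diagnosis_code": record["dx"],
    }

def from_clinic_b(record: dict) -> dict:
    # Source B already uses Celsius but labels its fields differently.
    return {
        "patient_id": record["patient"],
        "temp_c": record["temperature_celsius"],
        "diagnosis_code": record["icd10"],
    }

unified = [
    from_clinic_a({"id": "a-001", "temp_f": 101.3, "dx": "J10.1"}),
    from_clinic_b({"patient": "b-042", "temperature_celsius": 37.0, "icd10": "J10.1"}),
]
# Every record now carries the same fields with the same meaning and units,
# which is the shared, unambiguous semantics a downstream model actually needs.
print(unified)
```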
