How Knowledge Graphs Will Transform Data Management And Business



In late November the U.S. Food and Drug Administration approved Benevolent AI’s recommended arthritis drug Baricitinib as a COVID-19 treatment, just nine months after the hypothesis was developed. The connection between the properties of this existing Eli Lilly drug and a potential treatment for seriously ill COVID-19 patients was made with the help of knowledge graphs, which represent data in context, in a manner that both humans and machines can readily understand.

Knowledge graphs apply semantics to give context and relationships to data, providing a framework for data integration, unification, analytics and sharing. Think of them as a flexible means of discovering facts and relationships between people, processes, applications and data, in ways that give companies new insights into their businesses, create new services and improve R&D.
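To make that idea concrete, the core of a knowledge graph is nothing more exotic than a collection of subject-predicate-object statements. The minimal Python sketch below uses a handful of simplified, invented relationships (not drawn from any real dataset) to show how each fact carries its own context and can be retrieved by entity or relationship.

```python
# A minimal sketch: a knowledge graph stores facts as (subject, predicate, object)
# statements, so every data point keeps its context. The entities and relations
# below are simplified illustrations, not taken from any real biomedical source.
triples = {
    ("Baricitinib", "is_a", "JAK inhibitor"),
    ("Baricitinib", "approved_for", "rheumatoid arthritis"),
    ("JAK inhibitor", "modulates", "immune response"),
    ("COVID-19", "involves", "immune response"),
}

def facts_about(entity):
    """Return every statement in which the entity appears as subject or object."""
    return [t for t in triples if entity in (t[0], t[2])]

# Asking about one concept surfaces its relationships to everything else.
for subject, predicate, obj in sorted(facts_about("immune response")):
    print(subject, predicate, obj)
```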

Benevolent AI, a six-year-old London-based company which has developed a platform of computational and experimental technologies and processes that can draw on vast quantities of biomedical data to advance drug development, built in the use of knowledge graphs from day one. “In the human genome there are about 20,000 genes with a much smaller number of approved drugs. The number of entities out there isn’t very large but the number of connections between them ranks in the hundreds of millions and that is only scratching the surface of what’s known,” says Olly Oechsle, a senior software engineer at Benevolent AI. “Having a graph system that is able to help you navigate between all of those connections is vital.”
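Oechsle’s point about navigating connections can be illustrated with a small, hypothetical graph. The sketch below uses the open-source networkx library as a stand-in for a production graph store; the gene, protein, drug and disease names are invented placeholders, not Benevolent AI data.

```python
# A hedged sketch of navigating connections between biomedical entities.
# networkx stands in for a production graph database; every node and edge
# here is an invented placeholder, not real company data.
import networkx as nx

kg = nx.Graph()
kg.add_edge("GeneX", "ProteinY", relation="encodes")
kg.add_edge("DrugZ", "ProteinY", relation="inhibits")
kg.add_edge("ProteinY", "Inflammation", relation="regulates")
kg.add_edge("DiseaseQ", "Inflammation", relation="characterised_by")

# Find a chain of evidence linking an existing drug to a disease.
path = nx.shortest_path(kg, source="DrugZ", target="DiseaseQ")
print(" -> ".join(path))  # DrugZ -> ProteinY -> Inflammation -> DiseaseQ
```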

Oechsle was one of seven panelists who participated in a December 7 roundtable discussion on the power of graphs organized by DataSeries, a global network of data leaders led by venture capital firm OpenOcean and moderated by The Innovator Editor-in-Chief Jennifer L. Schenker.

Until recently knowledge graphs were mainly leveraged by young companies like Benevolent AI or tech companies like Google, Facebook, LinkedIn and Amazon. But large corporates in traditional sectors such as chemicals and finance are starting to discover the power of graphs, and as adoption grows it is expected to cause a paradigm shift in data management. What’s more, as graphs progress they could lead to more robust and reliable AI systems.

Knowledge graphs are a game changer, helping companies move away from relational databases and use the power of natural language processing, semantic understanding and machine learning to better leverage their data, says panelist Michael Atkin, a principal at agnos.ai, a specialist consultancy that designs and implements enterprise knowledge graphs. The advantages to business are clear.

Graphs “are a prerequisite for achieving smart, semantic AI-powered applications that can help you discover facts from your content, data and organizational knowledge which would otherwise go unnoticed,” says Atkin. They help corporates organize the information from disparate data sources to facilitate intelligent search. They make data understandable in business terms rather than in symbols only understood by a handful of specialized personnel. And they speed digital transformation by delivering a “digital twin” of a company that encompasses all data points as well as the relationships between data elements. “By fundamentally understanding the way all data relates throughout the organization, graphs offer an added dimension of context which informs everything from initial data discovery to flexible analytics,” he says. “Graphs give corporates the ability to ask business questions and get business answers and value. These developments promise to enhance productivity and usher in a new era of business opportunity.”

Breaking Data Silos

Today, the infrastructure for managing data in most major corporations is based on decades-old technology. Line-of-business and functional silos are everywhere. They are exacerbated by relational database management systems based on physical data elements that are stored as columns in tables and constrained by the lack of data standards. “Data meaning is tied to proprietary data models and managed independently,” explains Atkin. These data silos, combined with external models for glossaries, entity relationship diagrams, databases and metadata repositories, lead to incongruent data, and the explosion of uniquely labeled elements makes the silos nearly impossible to align. As a result, corporates end up with “data that is hard to access, blend, analyze and use, impeding application development, data science, analytics, process automation, reporting and compliance,” he says.
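A simple, hypothetical example shows the alignment problem Atkin describes. Two departmental systems below label the same concept differently; a shared mapping, the seed of an enterprise knowledge graph, turns both rows into statements about a single entity. All records, fields and names are invented for illustration.

```python
# Two silos describe the same customer with different, proprietary labels.
# (Hypothetical records and column names, invented for illustration.)
crm_record = {"cust_id": "C-17", "cust_nm": "Acme GmbH"}
billing_record = {"client_ref": "C-17", "legal_name": "Acme GmbH"}

# In a knowledge graph the mapping to shared business terms lives as data,
# not buried in each application's code.
schema_map = {
    "cust_id": "customer:identifier", "client_ref": "customer:identifier",
    "cust_nm": "customer:legalName", "legal_name": "customer:legalName",
}

def to_statements(record, subject):
    """Translate a silo-specific row into shared (subject, property, value) facts."""
    return [(subject, schema_map[column], value) for column, value in record.items()]

# Both silos now contribute statements about one and the same node.
print(to_statements(crm_record, "customer/C-17"))
print(to_statements(billing_record, "customer/C-17"))
```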

Advantages of Semantic Technology

Semantic technology, which uses formal semantics to help AI systems understand language and process information the way humans do, is seen as the best way to handle large volumes of data in multiple forms. It was designed specifically for interconnected data and is good at unraveling complex relationships.

Semantic processing, whose core standards are maintained by the World Wide Web Consortium, was a huge breakthrough for content management, says Atkin. And because those standards are open, they have propelled lots of companies into the world of knowledge management. Semantic processing is the backbone of the Semantic Web. It is the infrastructure for biomedical engineering in areas such as cancer research and for the Human Genome Project. And it is the basis for what Google and other tech companies are doing with knowledge graphs and graph neural networks (GNNs), a type of neural network that learns directly from graph structure, helping make recommendations for applications like search, e-commerce and computer vision.
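For readers who want to see what those W3C standards look like in practice, the sketch below stores two facts as RDF triples and asks a question in SPARQL, the standard query language, using the open-source rdflib library. The namespace, entities and relationship names are placeholders, not a published vocabulary.

```python
# A hedged sketch using the W3C standards mentioned above: facts stored as RDF
# triples and queried with SPARQL. rdflib is one open-source option; the
# namespace and entity names are placeholders, not a real vocabulary.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/biomed/")
g = Graph()
g.add((EX.DrugZ, EX.inhibits, EX.ProteinY))
g.add((EX.ProteinY, EX.implicatedIn, EX.DiseaseQ))

# SPARQL asks for relationships rather than rows:
# "which drugs act on something implicated in DiseaseQ?"
query = """
PREFIX ex: <http://example.org/biomed/>
SELECT ?drug WHERE {
    ?drug ex:inhibits ?target .
    ?target ex:implicatedIn ex:DiseaseQ .
}
"""
for row in g.query(query):
    print(row.drug)  # http://example.org/biomed/DrugZ
```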
