
Managing the Knowledge Graph Lifecycle: How Semedy is Already Solving Recognized Challenges

Aug 29 · 4 min read · Updated: Sep 12



Authors: Jürgen Baier, Semedy GmbH, and Saverio Maviglia, Semedy Inc


In a recent paper published in the proceedings of the 51st International Conference on Very Large Data Bases (VLDB 2025), Geisler et al. [1] present a comprehensive overview of the key challenges and research directions for lifecycle management in Knowledge Graphs (KGs). These challenges are often regarded as low-priority issues to be solved through ad-hoc processes and extensive manual effort. They are, however, highly relevant, especially in complex, real-world domains like healthcare.


At Semedy, we were encouraged to see that many of the problems identified in the paper are already being addressed by our Knowledge Management System (KMS). In fact, lifecycle management of knowledge graphs is one of our core strengths. The authors identify several pressing challenges in the ecosystem of KG lifecycle management. Here's how KMS is already tackling them:


1. Integration of Heterogeneous Data Sources

KMS supports comprehensive and customizable ETL (Extract, Transform, Load) pipelines to integrate diverse data sources, from narrative to structured formats. This integration supports the creation of a unified knowledge graph that reflects the multifaceted nature of real-world information, and enables cross-domain reasoning and downstream reuse.
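To make the idea concrete, here is a minimal sketch of such an extract-transform-load step. The source formats, field names, and the naive line-based mining of narrative text are all illustrative assumptions, not KMS internals:

```python
# Minimal ETL sketch: normalize records from two hypothetical sources
# (structured rows and narrative text) into a common node table.

def extract_structured(rows):
    """Yield (id, label) pairs from structured, CSV-like rows."""
    for row in rows:
        yield row["id"], row["label"]

def extract_narrative(texts):
    """Naively mine 'id: label' lines from narrative text (toy transform)."""
    for line in texts:
        if ":" in line:
            node_id, label = line.split(":", 1)
            yield node_id.strip(), label.strip()

def load(pairs):
    """Deduplicate into a node table keyed by id, ready for graph loading."""
    graph = {}
    for node_id, label in pairs:
        graph.setdefault(node_id, {"label": label, "edges": []})
    return graph

structured = [{"id": "C001", "label": "Hypertension"}]
narrative = ["C002: Dizziness reported by patient"]
graph = load(list(extract_structured(structured)) + list(extract_narrative(narrative)))
```

Both sources end up in one uniform node representation, which is the precondition for cross-domain reasoning over the merged graph.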


2. Supporting Evolving Knowledge Graphs

KMS is designed to accommodate the dynamic nature of knowledge. It supports versioning of knowledge entities at the node level (i.e., at the most granular level), allowing multiple versions to coexist and nodes to evolve independently over time. This means multiple workflows (such as import, editing, analysis, and publication) can progress in parallel, while always preserving the semantic integrity of the knowledge. This capability allows continuous, efficient evolution of knowledge, without losing control.
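A minimal sketch of what node-level versioning can look like, assuming an append-only version history per node (the class and field names are hypothetical, not the KMS data model):

```python
from dataclasses import dataclass, field

@dataclass
class VersionedNode:
    """A graph node whose versions coexist in an append-only history."""
    node_id: str
    versions: list = field(default_factory=list)

    def commit(self, content, state="Work in Progress"):
        """Append a new version; earlier versions remain untouched."""
        version = {"v": len(self.versions) + 1, "content": content, "state": state}
        self.versions.append(version)
        return version["v"]

    def latest(self, state=None):
        """Newest version, optionally filtered by lifecycle state."""
        for version in reversed(self.versions):
            if state is None or version["state"] == state:
                return version
        return None

node = VersionedNode("symptom-dizziness")
node.commit({"label": "Dizziness"}, state="Published")
node.commit({"label": "Dizziness (patient-reported)"})  # a draft coexists
```

Because the published version and the in-progress draft coexist on the same node, an editing workflow can proceed while consumers keep reading the published state.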


3. Enabling Interoperability Across Ecosystem Components

Via customizable ETL pipelines, KMS can import and export knowledge in standards-adherent formats (for example, HL7 FHIR in the healthcare domain) to ensure knowledge flows across clinical systems and tools. Furthermore, our integrated authoring workbench includes tools powered by large language models (LLMs) for ontology mapping and matching, to align different vocabularies and terminologies semantically, not just syntactically. For example, the patient-reported symptom "dizziness" must be converted to the SNOMED reference term "Vertigo (finding) [399153001]". Or facts about a patient represented in OMOP (Observational Medical Outcomes Partnership) must be converted to equivalent facts represented as mCODE (minimal Common Oncology Data Elements) FHIR resources.
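The dizziness example can be sketched as a terminology alignment step. In practice this mapping would come from a terminology server or LLM-assisted matching; the lookup table below is a toy stand-in, though the SNOMED code shown is the one from the example above:

```python
# Toy terminology alignment: map a patient-reported term to a SNOMED CT
# reference concept. The one-entry table is illustrative only.
SNOMED_MAP = {
    "dizziness": {"code": "399153001", "display": "Vertigo (finding)"},
}

def to_snomed(term):
    """Return a FHIR-Coding-shaped dict for a patient-reported term."""
    concept = SNOMED_MAP.get(term.lower().strip())
    if concept is None:
        raise KeyError(f"no SNOMED mapping for {term!r}")
    return {"system": "http://snomed.info/sct", **concept}

coding = to_snomed("Dizziness")
```

The point of the semantic (rather than syntactic) alignment is that "dizziness" and "Vertigo (finding)" match on meaning even though the strings share nothing.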


4. Scalability of Ecosystem Operations

KMS is engineered to scale from pilot use cases to enterprise-level deployments, with flexible configuration, granular permissions, and robust automation options to support different concurrent workflows and governance policies. The largest knowledge graph managed in KMS has over 20 million nodes to date.


5. Ensuring Data and Knowledge Quality

At the heart of KMS is a powerful rule-based validation engine built on an extended Datalog foundation. This engine enables logical validation of content with a performant top-down and bottom-up reasoner, ensuring that the knowledge graph maintains high standards of accuracy, consistency, and semantic integrity. The same reasoner used for validation also powers ad-hoc queries via Advanced Search, giving knowledge engineers, reviewers, and domain experts a unified interface for content inspection, exploratory analysis, and debugging. In this way, an error reported in one node of a knowledge graph defines an error pattern that can be used to identify the same error everywhere else in the graph. Furthermore, once the errors are corrected, the search query can be converted into a validation rule that prevents the error pattern from ever appearing again.
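As a flavor of rule-based validation, here is a bottom-up evaluation of a single Datalog-style rule over a tiny fact base. The rule, the predicates, and the facts are invented for illustration and have nothing to do with the actual KMS rule language:

```python
# One Datalog-style validation rule, evaluated bottom-up:
#   missing_label(X) :- node(X), published(X), not has_label(X).
# i.e., flag every published node that lacks a label.

facts = {
    "node": {"n1", "n2", "n3"},
    "published": {"n1", "n3"},
    "has_label": {"n1"},
}

def missing_label(facts):
    """Derive all nodes that violate the rule above."""
    return {x for x in facts["node"]
            if x in facts["published"] and x not in facts["has_label"]}

violations = missing_label(facts)
```

An ad-hoc search for one reported defect and a standing validation rule are, in this view, the same query evaluated at different times, which is exactly what makes the search-to-rule conversion natural.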


6. Defining and Managing KGE Lifecycles

KMS provides fully configurable lifecycle states for knowledge entities, tailored to their specific types. The default lifecycle progression, from "Work in Progress" to "Under Review," "Approved," and "Published," can be customized to align with organizational workflows and governance models.
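A configurable lifecycle of this kind can be sketched as a small state machine whose allowed transitions are plain data, so an organization can substitute its own. The transition table below (including the review send-back) is an assumed example, not the shipped default:

```python
# Sketch of a configurable lifecycle: allowed transitions are data, so
# each organization can swap in its own governance model.
DEFAULT_LIFECYCLE = {
    "Work in Progress": {"Under Review"},
    "Under Review": {"Approved", "Work in Progress"},  # reviewers may send back
    "Approved": {"Published"},
    "Published": set(),  # terminal state in this toy model
}

def advance(state, target, lifecycle=DEFAULT_LIFECYCLE):
    """Move an entity to `target`, rejecting transitions the model forbids."""
    if target not in lifecycle.get(state, set()):
        raise ValueError(f"illegal transition {state!r} -> {target!r}")
    return target

state = advance("Work in Progress", "Under Review")
```

Customizing governance then amounts to editing the transition table rather than changing code.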


7. Tracing, Validation, and Explainability Across the Ecosystem

The system maintains comprehensive provenance information and change history, tracking the origins and transformations of knowledge entities. This traceability supports transparency and accountability, essential for applications where decision-making relies on the integrity of the underlying knowledge. One high-stakes example is the prescription of the optimal treatment plan for a particular patient's cancer diagnosis and stage.
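The kind of record such a change history keeps can be sketched as follows; the field names and the source identifier are hypothetical, chosen only to show the who/what/when/where-from shape of a provenance entry:

```python
import datetime

# Hypothetical provenance log: each change to a knowledge entity records
# who acted, what they did, when, and from which source the change came.
def record_change(history, node_id, actor, action, source):
    entry = {
        "node": node_id,
        "actor": actor,
        "action": action,
        "source": source,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    history.append(entry)
    return entry

history = []
record_change(history, "plan-123", "j.doe", "edit", "import:oncology-guidelines-v2")
```

Given such a log, every assertion feeding a treatment recommendation can be traced back to who introduced it and from which source it was imported.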


8. Supporting Role-Specific Interactions

KMS offers role-based interfaces that cater to the specific needs of different stakeholders, including domain experts, data scientists, and knowledge engineers. These tailored interfaces facilitate efficient interaction with the system, enhancing user experience and productivity.


Conclusion

Semedy's KMS exemplifies a practical implementation that addresses many of the challenges identified in the lifecycle management of knowledge graphs. By integrating heterogeneous data sources, supporting evolving knowledge structures, ensuring data quality, and providing role-specific interactions, KMS offers a comprehensive framework for managing knowledge graphs, small or large, simple or complex.


Ready to see what your knowledge graph can become? Request a Demo and visit semedy.com to learn more.



References

1. Geisler, S., Cappiello, C., Celino, I., Chaves-Fraga, D., Dimou, A., Iglesias-Molina, A., Lenzerini, M., Rula, A., Van Assche, D., Welten, S., & Vidal, M.-E. (2025). Managing the Lifecycle of Knowledge Graphs: Challenges, Ecosystem, and Research Directions. Proceedings of the VLDB Endowment, 18(5), 1390–1397.


