Unifying Large Language Models and Knowledge Graphs: A Forward-Looking Roadmap
Large Language Models (LLMs), such as ChatGPT and GPT-4, are making new waves in natural language processing and artificial intelligence, owing to their emergent capabilities and broad applicability. Even so, LLMs are black-box models that often fall short of capturing and accessing factual knowledge.
In contrast, Knowledge Graphs (KGs) such as Wikipedia and Huapu are structured knowledge models that explicitly store rich factual information. KGs can augment LLMs by supplying external knowledge for better inference and interpretability.
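Concretely, a KG can be viewed as a collection of (head, relation, tail) triples that support exact lookup. The snippet below is a minimal sketch of this representation in Python; the entities, relations, and the `lookup` helper are illustrative assumptions, not drawn from any particular KG.

```python
# A KG stored as a set of (head, relation, tail) triples; all entries are
# illustrative assumptions, not drawn from any real KG.
KG = {
    ("Albert Einstein", "born_in", "Ulm"),
    ("Albert Einstein", "field", "Physics"),
    ("Ulm", "located_in", "Germany"),
}

def lookup(kg, entity):
    """Return every triple in which the entity appears as head or tail."""
    return [t for t in kg if entity in (t[0], t[2])]

print(lookup(KG, "Albert Einstein"))  # explicit, inspectable factual knowledge
```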
However, KGs are difficult to construct and evolve by nature, which challenges existing KG methods in generating new facts and representing unseen knowledge. It is therefore natural to unify LLMs and KGs and leverage the strengths of each.
In this article, we present a forward-looking roadmap for unifying LLMs and KGs. The roadmap consists of three general frameworks:
- KG-enhanced LLMs: incorporating KGs during the pre-training and inference phases of LLMs, or using KGs to deepen the understanding of the knowledge learned by LLMs (a minimal sketch follows this list).
- LLM-augmented KGs: leveraging LLMs for diverse KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering (a second sketch follows this list).
- Synergized LLMs + KGs: LLMs and KGs playing equal roles and working in a mutually beneficial way, enhancing both for bidirectional reasoning driven by data and knowledge.
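To make the first framework concrete, here is a minimal sketch of KG-enhanced LLM inference: triples relevant to a question are retrieved from a toy KG and prepended to the prompt as grounding context. The toy facts, the naive entity matching, and the `query_llm` stand-in are all assumptions for illustration, not the paper's method or any specific API.

```python
# Minimal sketch of KG-enhanced LLM inference: retrieve triples related to a
# question and prepend them to the prompt as grounding context.

KG = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),  # toy facts, illustrative only
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
}

def retrieve_facts(kg, question):
    """Naive retrieval: keep triples whose head entity appears in the question."""
    return [t for t in kg if t[0].lower() in question.lower()]

def build_prompt(question, facts):
    """Serialize the retrieved triples and attach them to the question."""
    context = "\n".join(f"({h}, {r}, {t})" for h, r, t in facts)
    return (
        "Answer the question using the knowledge-graph facts below.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

def query_llm(prompt):
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("replace with a real LLM client")

question = "Where was Marie Curie born?"
prompt = build_prompt(question, retrieve_facts(KG, question))
print(prompt)  # this grounded prompt would then be passed to query_llm
```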
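In the reverse direction, here is a minimal sketch of LLM-augmented KG completion: an LLM is prompted to predict the missing tail of a (head, relation, ?) triple, and the prediction is packaged as a candidate fact. The prompt template and the callable `llm` interface are hypothetical assumptions, not a fixed interface from the literature.

```python
# Minimal sketch of LLM-augmented KG completion: prompt an LLM to fill in the
# missing tail of a (head, relation, ?) triple. The template and the callable
# `llm` interface are hypothetical assumptions.

def completion_prompt(head: str, relation: str) -> str:
    """Build a fill-in-the-tail prompt for one incomplete triple."""
    return (
        "Complete the knowledge-graph triple with a single entity.\n"
        f"Head: {head}\nRelation: {relation}\nTail:"
    )

def complete_triple(head: str, relation: str, llm) -> tuple:
    """Ask the LLM for the missing tail and return the candidate triple."""
    tail = llm(completion_prompt(head, relation)).strip()
    return (head, relation, tail)

# Usage with any callable that maps a prompt string to a completion string:
# candidate = complete_triple("Ulm", "located_in", query_llm)
```

A candidate triple produced this way would typically be validated against the existing KG before being added, which is precisely where the two technologies complement each other.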
We critically review and summarize existing efforts within these three frameworks and highlight the open problems that point toward promising directions for future research.