GraphRAG constructs knowledge networks and offers a new perspective on traditional RAG. However, there are some issues worth exploring further.
🎾 Entity Resolution: In unstructured text, the same entity might have different names, such as Harry, Harry Potter, and Potter.
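One simple way to handle this is to collapse surface forms to a canonical entity ID before building the graph. The sketch below uses a hand-written alias table for illustration; in practice, the table could be produced by coreference resolution or an LLM pass over the extracted entities (the names here are assumptions, not any library's API):

```python
# Minimal entity-resolution sketch: map surface forms to one canonical entity.
# ALIASES is hand-written here; in a real pipeline it would be generated
# automatically (e.g., via coreference resolution or an LLM pass).
ALIASES = {
    "harry": "Harry Potter",
    "potter": "Harry Potter",
    "harry potter": "Harry Potter",
    "ron": "Ron Weasley",
    "ron weasley": "Ron Weasley",
}

def resolve(mention: str) -> str:
    """Return the canonical entity for a mention, or the mention itself if unknown."""
    return ALIASES.get(mention.strip().lower(), mention)
```

With this normalization, "Harry", "Potter", and "Harry Potter" all collapse to a single node instead of three separate ones.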
🎾 Entity Matching: Entities extracted from user queries may not exist in the database. For instance, if a user asks, "When did the most exciting fight happen?", the focus is on "exciting fight," which is not an entity.
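The entity-matching gap can be made concrete with a fuzzy lookup: a misspelled entity name still matches, but a descriptive phrase like "exciting fight" matches nothing. A minimal sketch using only the standard library (the entity list is invented for illustration):

```python
import difflib

# A toy entity index; in a real system this would come from the knowledge graph.
ENTITIES = ["Harry Potter", "Ron Weasley", "Hermione Granger", "Hogwarts"]

def match_entity(phrase: str, cutoff: float = 0.6):
    """Return the closest known entity, or None when nothing is similar enough."""
    hits = difflib.get_close_matches(phrase, ENTITIES, n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

`match_entity("Hary Potter")` recovers "Harry Potter", while `match_entity("exciting fight")` returns `None`, which is exactly the failure mode described above: the query's focus never existed as a node.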
🚩 Objective Facts vs. Subjective Impressions
This highlights a core difference between LLM-constructed knowledge networks and human memory: LLMs only capture objective facts, while human memory consists of both objective facts and subjective impressions.
🎾 For example, when we read about Harry and Ron rescuing Hermione in the bathroom, we might subjectively perceive them as loyal and responsible. In contrast, an LLM would simply record the event. If we then used entity matching to ask about Harry or Ron's heroic deeds, "heroic deeds" might not exist as an entity in the knowledge network, making it unlikely that this event would be retrieved.
🎾 Similarly, if we had delicious Kung Pao chicken today, the subjective impression of "delicious" becomes part of our memory alongside the objective fact of eating the dish. However, an LLM would record only the fact that we ate Kung Pao chicken, without noting "delicious," thus lacking the associative ability based on subjective impressions.
A bold approach is to segment all potentially relevant text based on the user's query, use an LLM to answer the question against each segment, and finally aggregate the partial answers into a final response. This method sidesteps the need for associative retrieval and guarantees a thorough review of all relevant documents, although it requires substantial computational power.
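This segment-then-aggregate idea is essentially a map-reduce over the corpus. A minimal sketch, assuming a placeholder `ask_llm` function standing in for a real model call (all names here are illustrative, not any framework's API):

```python
# Map-reduce Q&A sketch: answer the query against every chunk, then aggregate.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; replace with an actual client."""
    raise NotImplementedError

def chunk(text: str, size: int = 1000) -> list:
    """Split the corpus into fixed-size segments."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def map_reduce_qa(question: str, document: str, llm=ask_llm) -> str:
    # Map: ask the question against each segment independently.
    partial = [
        llm(f"Context:\n{c}\n\nQuestion: {question}\n"
            "Answer from this context only, or reply NONE.")
        for c in chunk(document)
    ]
    # Filter out segments that contained nothing relevant.
    relevant = [a for a in partial if a.strip() != "NONE"]
    # Reduce: merge the partial answers into one response.
    return llm("Combine these partial answers into one response:\n"
               + "\n".join(relevant))
```

The cost is one LLM call per chunk plus one aggregation call, which is why this approach is thorough but computationally expensive.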
I put more details in the article:
Article Link: Click here to view