Large Language Models (LLMs) such as ChatGPT can learn and do things computers have never achieved before. That capability comes at a substantial cost, especially when the goal is outcomes specific to your business. Organizations with massive IT budgets and resources can afford to acquire and build the ecosystem needed to train, manage, and effectively use LLMs; smaller companies cannot. A typical ecosystem looks like the image below:

Besides the cost of getting started before any business benefit is possible, there is another hidden cost. The “P” in GPT stands for “pre-trained”: the model learns everything in a single training process and cannot absorb incremental knowledge without complete retraining.
Our solution works differently. Any source of new knowledge, such as a social media feed, streaming log data from manufacturing equipment, or transactions from an ERP system, can be ingested and added to the central knowledge catalog incrementally. The metadata catalog supports self-service analytics and search, and our curation tools let Subject Matter Experts and data stewards confirm that new knowledge is classified correctly. Once classified correctly (that is, understood), it can be matched with inquiries immediately and surfaced in verbal and visual insights for qualitative and prescriptive intelligence. A sketch of this flow appears below.
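To make the contrast with retraining concrete, here is a minimal, illustrative sketch in Python of how incremental ingestion, curation, and matching might fit together. The names (KnowledgeCatalog, ingest, curate) are hypothetical, and a real deployment would use a search index or embeddings rather than naive keyword matching; this is not our production implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class KnowledgeItem:
    """A single unit of ingested knowledge plus its catalog metadata."""
    source: str               # e.g. "social_media", "equipment_logs", "erp"
    content: str
    tags: set[str] = field(default_factory=set)
    classified: bool = False  # flipped by a curator / data steward
    ingested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class KnowledgeCatalog:
    """Incremental catalog: items are added one at a time, with no
    retraining step, and become searchable once curated."""
    def __init__(self) -> None:
        self.items: list[KnowledgeItem] = []

    def ingest(self, source: str, content: str) -> KnowledgeItem:
        item = KnowledgeItem(source=source, content=content)
        self.items.append(item)
        return item

    def curate(self, item: KnowledgeItem, tags: set[str]) -> None:
        """A Subject Matter Expert confirms classification and assigns tags."""
        item.tags = tags
        item.classified = True

    def search(self, query: str) -> list[KnowledgeItem]:
        """Naive keyword match over curated items only."""
        terms = query.lower().split()
        return [i for i in self.items
                if i.classified and any(t in i.content.lower() for t in terms)]

catalog = KnowledgeCatalog()
item = catalog.ingest("equipment_logs", "Line 3 extruder temperature drift detected")
catalog.curate(item, {"manufacturing", "maintenance"})
print(catalog.search("extruder temperature"))  # matched immediately, no retraining
```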

The Knowledge Catalog contains proprietary knowledge that each customer controls and that is never shared. The Catalog works with the General DLU Knowledge Model, which brings in Empathi’s deep conceptual knowledge of the natural associations between all the digital assets a business uses to compete and grow. Because not all users may access all data, the Knowledge Catalog’s metadata model uses tags to enforce least-privilege access management.
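To illustrate the idea, here is a minimal sketch of tag-based, least-privilege filtering: an entry is visible only when a user holds every tag on its metadata, and anything unmatched is denied by default. The names (TaggedAsset, visible_assets) are hypothetical, not our actual schema.

```python
from dataclasses import dataclass

@dataclass
class TaggedAsset:
    """A catalog entry with access-control tags in its metadata."""
    name: str
    tags: frozenset[str]

def visible_assets(assets: list[TaggedAsset],
                   user_clearances: set[str]) -> list[TaggedAsset]:
    """Least-privilege filter: visible only if the user holds every
    tag attached to the asset (deny by default)."""
    return [a for a in assets if a.tags <= user_clearances]

assets = [
    TaggedAsset("q3_revenue.xlsx", frozenset({"finance"})),
    TaggedAsset("extruder_logs.csv", frozenset({"manufacturing"})),
    TaggedAsset("merger_memo.pdf", frozenset({"finance", "legal"})),
]

# An analyst cleared only for manufacturing data:
print([a.name for a in visible_assets(assets, {"manufacturing"})])
# -> ['extruder_logs.csv']
```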