February 15th, 2024

How AI Can Transform Knowledge Management

by Dev Nag at QueryPal

Much of a DevOps engineer’s job is providing internal support for their platforms, pipelines, docs, and more. Teams often take on-call shifts to resolve internal questions across multiple communication platforms. These on-call shifts pull DevOps engineers and SREs away from building tools that make reliability processes efficient and consistent, and they often lead to burnout and turnover.

While DevOps teams may produce documentation so developers can self-serve, most people don’t want to manually search through documents (often living in different knowledge stores) to find answers. Instead, team chat is where we now collaborate, ask questions, and get meaningful answers in real time. The problem with traditional knowledge management is that it focuses on capturing and storing information. It doesn’t deliver on the promise of knowledge sharing and distribution; that burden still falls on the asker. As a result, DevOps teams are bombarded with messages in their team chat, and among them are the same questions, asked over and over by different developers who don’t realize their question was answered just a few days ago in the same channel. How would they know?

And the consequences of these questions go far beyond the time taken to answer them. There’s the cost of context switching: researchers at UC Irvine found that it takes about 25 minutes after an interruption to fully return to the previous task. And there’s the opportunity cost of the higher-impact tasks that get squeezed out by repeat questions, and of backlogs that seem to get longer and longer.

This is the problem QueryPal is tackling with retrieval-augmented generation (RAG) and large language models (LLMs).

We help teams resolve internal requests efficiently because on-call and day-to-day activities that don’t add new value shouldn’t consume your critical resources. We do that by auto-answering repetitive questions in team chats — instantly — using your company documents and previously answered questions within chat channels. The asker doesn’t have to change how they ask questions or where they ask questions. We intercept questions right where they live.

How it works 

Imagine all of your best practices, how-to guides, processes, resolved conversations, and frequently asked questions served to any authorized employee in real time — all in chat. QueryPal is not just a chatbot; it’s a new way to interact with enterprise data. It’s about having information come to you instead of having to search through different repositories. 

When you give QueryPal access to your chat channels and knowledge bases, the chatbot pulls information from a wide variety of sources, including Slack and Microsoft Teams conversations, Notion, Google Docs, Confluence, uploaded documents, admin-specified websites, and community-generated content on GitHub, to automatically answer internal queries in chat. QueryPal goes the extra mile by providing context and citing the sources its answers draw on. These sources enable use cases such as new-developer onboarding and productivity, internal and external customer support, and many others. We’re shrinking the distance between questions and answers.
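To make that concrete, here is a rough sketch of what scoping those sources could look like on the admin side. The connector names and fields below are illustrative assumptions for this example, not QueryPal’s actual configuration format.

    # Illustrative Python sketch of an admin-side source configuration.
    # Connector types and field names are assumptions, not QueryPal's API.
    SOURCES = [
        {"type": "slack",       "channels": ["#devops-help", "#platform"]},
        {"type": "confluence",  "spaces": ["ENG", "SRE"]},
        {"type": "notion",      "pages": ["Runbooks", "Onboarding"]},
        {"type": "google_docs", "folders": ["Best Practices"]},
        {"type": "github",      "repos": ["acme/infra-docs"]},
        {"type": "web",         "urls": ["https://handbook.example.com"]},
    ]

However the product actually exposes this, the point is that each source is scoped explicitly, which is what makes the access controls described below possible.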

We securely store all the data for fast, semantic search that matches on the meaning of a query rather than exact keywords. When the search finds one or more matches, it fetches the related data and passes it to the LLM to compose the best response to the query. Not only does QueryPal answer questions that may be ill-formed or imprecise, it also summarizes its findings and cites its sources. That can save hours of time and energy otherwise spent manually answering redundant questions.
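For readers who want to picture the flow, here is a minimal sketch of that retrieve-then-generate loop. The keyword-overlap scoring is a crude stand-in for real embedding-based semantic search, and call_llm() is a placeholder for whichever model client you use; none of this is QueryPal’s internal code.

    # Minimal retrieval-augmented generation (RAG) sketch, illustrative only.
    DOCS = [
        {"source": "confluence/deploys",
         "text": "Production deploys run through the release pipeline; rollbacks use the revert job."},
        {"source": "slack/#devops-help",
         "text": "To rotate the staging DB credentials, run the rotate-creds job in CI."},
    ]

    def score(question, passage):
        # Stand-in for cosine similarity between embeddings.
        q, p = set(question.lower().split()), set(passage.lower().split())
        return len(q & p) / (len(q) or 1)

    def retrieve(question, k=2):
        ranked = sorted(DOCS, key=lambda d: score(question, d["text"]), reverse=True)
        return ranked[:k]

    def call_llm(prompt):
        # Placeholder: send the prompt to your model of choice.
        return "(model response with citations)"

    def answer(question):
        passages = retrieve(question)
        context = "\n".join(f'[{d["source"]}] {d["text"]}' for d in passages)
        prompt = ("Answer the question using only the context below, and cite the "
                  f"bracketed sources you used.\n\nContext:\n{context}\n\nQuestion: {question}")
        return call_llm(prompt)

    print(answer("How do I rotate the staging DB credentials?"))

The citation step here is just prompt construction: because each retrieved passage is labeled with its source, the model can point back to where the answer came from.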

Once installed, and after an initial 10-minute training period, we re-index your data every five minutes, pulling in changes to wikis and other knowledge bases as they happen. That means up-to-date answers, without any administrative work. We also support strict access control: you pick the channels where the chatbot can listen in, and you specify the spaces or pages the chatbot can access in Notion, Google Docs, and Confluence.
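A simple way to picture that refresh cycle is a loop that wakes every five minutes, pulls whatever changed since the last pass, and upserts it into the search index. The index and fetch_changed_docs arguments below are hypothetical stand-ins, not QueryPal internals.

    # Sketch of a five-minute incremental re-indexing loop (illustrative only).
    import time
    from datetime import datetime, timezone

    REFRESH_SECONDS = 5 * 60

    def sync_loop(index, fetch_changed_docs):
        last_run = datetime.now(timezone.utc)
        while True:
            time.sleep(REFRESH_SECONDS)
            cutoff, last_run = last_run, datetime.now(timezone.utc)
            for doc in fetch_changed_docs(since=cutoff):
                index.upsert(doc)  # re-embed and store only what changed

Access control can then be enforced at retrieval time, the usual pattern in systems like this: only passages from channels, spaces, or pages the chatbot has been granted are ever indexed or eligible to be surfaced.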


Connecting People with the Right Knowledge

This AI chatbot is not about replacing humans. It’s about supporting us and taking away the mundane so our teams can focus on higher-impact tasks. If a question has never been answered before, anywhere, then it requires a human to dig in and diagnose the issue. But the second, third, or fourth time someone asks that same question, we can automatically surface the previous conversation or documentation and tell you how to resolve the problem. It’s just-in-time knowledge, delivered right where enterprises have moved over the past decade: team chat.

Nobody wants to waste an expert’s time and energy responding to those repetitive questions, but we need a better way to find the information we need without switching across multiple portals.

In reality, much of the knowledge still lives inside people’s heads. You might be able to find the information you need, but what if something has changed since that document was published, and it takes a discussion with the team to figure out the right answer? We all know what we do: go into Slack or Microsoft Teams and ask the question, knowing that the expert will provide the correct answer.

Try it in Slack for free or, if you would like a demo from our team, click here.
