Wouldn’t it be convenient to have an AI-powered chatbot that could answer questions about your app data in Kintone? ChatGPT and other LLMs are promising for this use case, but not sufficient on their own: consistently relevant results require retrieving context. This project explores Retrieval Augmented Generation (RAG), which fetches contextual data related to a user’s question. This allows the chatbot to give a much more relevant answer to the user’s inquiry, with sources referenced.
A chatbox is used to answer an employee user’s questions about HR documents stored in the Data Source app.
Agenda:
- Introduction
- Setting up the Data Source app
- Cloning the Repository
- Configuring Settings
- Running the Backend
- Running the Frontend
Introduction:
The project repository can be found at this GitHub link.
Retrieval Augmented Generation (RAG) architecture improves the relevance of an AI chatbot’s answers. This RAG-powered chatbox allows a user to ask specific questions about data stored in a Kintone app. In this example, an employee user asks about HR department documents stored in Kintone via a chatbox frontend app.
More specifically, the LangChain framework uses an OpenAI model to answer the user’s questions based on context stored in a Chroma database, served via a FastAPI Python server.
LangChain supports many processing steps (from sentence transformation to API calls) and integrations with Large Language Models (LLMs) such as ChatGPT. LangChain also integrates with vector databases like Chroma, which can store documents and contextual data. A chain of steps defined in LangChain helps optimize the responses of the LLM chatbot.
Chroma is a vector database that converts natural language data (for example, HR documents) into high-quality embeddings via sentence transformers. When the user asks a question, only the embeddings related to that question are retrieved from Chroma. These embeddings are used as context in the prompt provided to the OpenAI model, which then formulates the response to the user.
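To illustrate the retrieval step, here is a deliberately stripped-down sketch. This is not the project’s actual code: it replaces sentence-transformer embeddings and Chroma with a toy bag-of-words vector and cosine similarity, purely to show how only the stored data most related to the question is selected as context.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a sentence-transformer embedding:
    # a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank stored documents by similarity to the question and return
    # the top-k as context, as a vector database like Chroma would.
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Employees accrue 10 days of paid vacation per year.",
    "The store opens at 9 AM and closes at 8 PM.",
    "Expense reports are due by the 5th of each month.",
]
context = retrieve("How many vacation days do employees get?", docs)
print(context[0])  # the vacation-policy document is selected as context
```

In the real project, the retrieved context is then inserted into the prompt sent to the OpenAI model rather than printed.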
This AI-powered chatbot app returns an answer related to the user’s question by querying a Chroma database that stores data from the “Retail Store Internal FAQ” Kintone app.
An app in Kintone called “Retail Store Internal FAQ” stores PDFs, Excel files, and record data related to the store’s company policy. This app is referred to as the Data Source app (see diagram below).
Note: Here is an app template of the “Retail Store Internal FAQ” app, along with sample data.
Setting up the Data Source app:
The data source app called “Retail Store Internal FAQ” contains question and answer data related to the company policy.
The API Token of this app and the subdomain are needed for the RAG app. The Python backend script from this project will pull data from this app, transform the sentences into embeddings, and store them in the Chroma database.
Cloning the Repository:
Create a new empty folder.
Open the folder in Visual Studio Code.
Clone the repository in the folder by running the following command in the terminal:
git clone https://github.com/yamaryu0508/kintone-rag-example
The backend folder contains the Python server for generating responses.
The frontend folder contains the TypeScript project for the chatbox user interface.
The images folder contains diagrams of the architecture and a sample screenshot of the chatbox.
Next, install the python packages for the backend by running:
cd kintone-rag-example
cd backend
cd app
pip install -r requirements.txt
Next, install python-dotenv. Navigate back up to the backend folder in the terminal, then run:
cd ..
pip install python-dotenv
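python-dotenv loads environment variables from a `.env` file in the backend folder. The variable names below are illustrative placeholders (an OpenAI API key plus the Kintone subdomain, app ID, and API token mentioned above); check the repository’s configuration instructions for the names its code actually expects.

```
# .env — illustrative variable names; keep this file out of version control
OPENAI_API_KEY=sk-...
KINTONE_SUBDOMAIN=example
KINTONE_APP_ID=123
KINTONE_API_TOKEN=your-app-api-token
```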
Want to learn more?
If you're ready to try your hand at app building using Kintone’s drag-and-drop builder, sign up for a free 30-day trial today, no strings attached.
About the Author