Subham-Maity/QChatAi

Chat with PDF - RAG

Demo video: Qchat.1.mp4

Article

🔗 Logic

source

1. Document Ingestion

  • User Upload: Users upload a PDF file (or any other file) to S3/Cloudinary.
  • Extract Content: The content of the PDF is extracted.
  • Split into Chunks: The extracted content is split into manageable chunks.
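The chunking step above can be sketched as a sliding window over the extracted text. This is a minimal illustration, not the repo's actual splitter; real pipelines often split on sentence or token boundaries, and the chunk size and overlap here are arbitrary:

```typescript
// Minimal sketch of fixed-size chunking with overlap.
// Overlapping windows keep context that would otherwise be cut at chunk borders.
function splitIntoChunks(text: string, chunkSize = 1000, overlap = 200): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step back by `overlap` so chunks share context
  }
  return chunks;
}
```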

2. Generate Embeddings

  • Generate Embeddings: Each chunk is processed to generate vector embeddings using an embedding API (e.g., OpenAI Embeddings).
  • Embeddings: These are numeric representations of the chunks, which capture the semantic meaning.
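Embedding generation is essentially a batched call to the embedding API. A hedged sketch of the batching logic, with the embedder injected so it can be swapped in; `embedBatch` is a stand-in here, not this repo's actual code:

```typescript
// An embedder maps a batch of text chunks to one vector per chunk.
type Embedder = (batch: string[]) => Promise<number[][]>;

// Send chunks to the embedding API in batches to stay under request limits.
async function embedChunks(
  chunks: string[],
  embedBatch: Embedder,
  batchSize = 100,
): Promise<number[][]> {
  const vectors: number[][] = [];
  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    vectors.push(...(await embedBatch(batch)));
  }
  return vectors;
}
```

With the official `openai` package, `embedBatch` would wrap a call such as `openai.embeddings.create({ model, input: batch })`.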

3. Knowledge Base

  • Store Embeddings: The embeddings and document chunks are stored in a database (like PostgreSQL) that acts as the knowledge base.
  • Embedding Database: Tools like pgvector, Pinecone, FAISS, or ChromaDB can be used for storing and indexing these embeddings.
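Conceptually the knowledge base is a table of (chunk, vector) rows plus a similarity index. A toy in-memory stand-in for a vector database (purely illustrative; the real store is pgvector/Pinecone, and vectors are assumed L2-normalized so a dot product ranks the same way cosine similarity would):

```typescript
interface StoredChunk {
  id: string;
  text: string;
  vector: number[];
}

// Toy in-memory stand-in for a vector database such as pgvector or Pinecone.
class TinyVectorStore {
  private rows: StoredChunk[] = [];

  // Insert a chunk, replacing any existing row with the same id.
  upsert(row: StoredChunk): void {
    this.rows = this.rows.filter((r) => r.id !== row.id);
    this.rows.push(row);
  }

  // Return the ids of the k chunks whose vectors score highest against `query`.
  query(query: number[], k: number): string[] {
    const dot = (a: number[], b: number[]) =>
      a.reduce((sum, x, i) => sum + x * b[i], 0);
    return [...this.rows]
      .sort((a, b) => dot(b.vector, query) - dot(a.vector, query))
      .slice(0, k)
      .map((r) => r.id);
  }
}
```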

4. Retrieval

  • User Question: A user asks a question.
  • Generate Query Embedding: The question is converted into a vector embedding.
  • Semantic Search: Using the question embedding, a semantic search is performed on the stored document embeddings.
  • Ranked Result: Results are ranked based on similarity scores (e.g., cosine similarity).
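The similarity ranking in the retrieval step can be sketched as follows (illustrative only; a production vector store computes this inside its index):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunk vectors against the question vector, highest score first.
function rankBySimilarity(query: number[], docs: number[][]): number[] {
  return docs
    .map((vec, index) => ({ index, score: cosineSimilarity(query, vec) }))
    .sort((a, b) => b.score - a.score)
    .map((r) => r.index);
}
```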

5. Context to LLM (Large Language Model)

  • Context: The top-k most similar results are used to provide context to the LLM (e.g., GPT-4, LLaMA 2, Mistral 7B, ChatGPT).
  • Answer: The LLM uses this context to generate and return a relevant answer to the user.
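Handing the top-k chunks to the LLM usually amounts to stuffing them into the prompt. A hedged sketch of that assembly step (the actual prompt template used in this repo may differ):

```typescript
// Build a grounded prompt from the retrieved chunks and the user question.
function buildPrompt(question: string, topChunks: string[]): string {
  const context = topChunks
    .map((chunk, i) => `[${i + 1}] ${chunk}`) // number chunks for easy reference
    .join("\n\n");
  return [
    "Answer the question using only the context below.",
    'If the answer is not in the context, say "I don\'t know."',
    "",
    "Context:",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```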

🔗 How to Use

  1. Sign In: Start by signing in from the landing page. Once signed in, you'll see a Let's Start button. Click on this button to begin.

  2. Create a Project: After clicking Let's Start, you can create a project. Provide a title, description, and upload a PDF file. The PDF will be uploaded to a cloud bucket (such as S3) and stored in PostgreSQL.

  3. PDF Processing: On the backend, the PDF file is processed to generate vector embeddings of the content. These embeddings are then stored for future use.

  4. Asynchronous Processing: All processing is handled asynchronously with BullMQ, so it stays efficient and does not block other operations; you can continue working without waiting for the chat interface to be ready.

  5. Dashboard Monitoring: You can view all your projects on the frontend dashboard. Each project will display a status: 'creating', 'failed', or 'created'. This allows you to track the progress and know when your project is ready. If any issues occur, you will be able to see the status and take appropriate action.

  6. Chat Interface: Once a project is ready, you can open it to access a user-friendly chat interface. Here, you can ask questions and receive relevant answers based on the content of your PDF.
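The non-blocking pattern from step 4 can be illustrated with a toy in-process queue. BullMQ itself persists jobs in Redis and runs workers in separate processes; this stand-in only shows the enqueue-now, process-later flow:

```typescript
type Job<T> = { id: number; data: T };

// Toy stand-in for a BullMQ queue: add() returns immediately with a job id,
// and the worker drains jobs asynchronously on the microtask queue.
class TinyQueue<T> {
  private jobs: Job<T>[] = [];
  private nextId = 1;

  constructor(private worker: (job: Job<T>) => Promise<void>) {}

  add(data: T): number {
    const job = { id: this.nextId++, data };
    this.jobs.push(job);
    queueMicrotask(() => this.drain()); // caller is not blocked while the job runs
    return job.id;
  }

  private async drain(): Promise<void> {
    while (this.jobs.length > 0) {
      await this.worker(this.jobs.shift()!);
    }
  }
}
```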

🔗 How to Run

Without Docker

  1. Install Node.js from Node.js Downloads.
  2. Install dependencies:
    cd client
    pnpm i
    
    cd server
    yarn
  3. Run the code:
    cd client
    pnpm run dev
    
    cd server
    yarn start:dev

With Docker

  1. Run the following command:
    docker compose up

    Note: We use both Next.js server actions and a NestJS server, and Docker may occasionally throw an error. If you hit one, please raise an issue on the repository.

🔗 Environment Setup

⇉⟭ Server

  1. Create a .env file in the server directory (server/.env):
    # Server port
    PORT=3333
    DATABASE_URL="postgresql://neondb_owner:********/neondb?sslmode=require"
    
    # S3
    AWS_ACCESS_KEY_ID=A********P**T********VN
    AWS_SECRET_ACCESS_KEY=M********U9J********aYr4********Yostzb
    AWS_S3_REGION=us-east-1
    AWS_S3_BUCKET_NAME=********
    
    # Rate Limit
    UPLOAD_RATE_TTL=60
    UPLOAD_RATE_LIMIT=3
    
    # Pinecone
    PINECONE_API_KEY=e1******-56**-43**-8f**-f**7**3**2**
    
    # OpenAI
    OPENAI_API_KEY=sk-p******j-f******og******Nr0P******FJt******JiBl******EvExEK
    
    # Clerk
    CLERK_SECRET_KEY=sk_t******rL5******BkF******7******2hF******aL
    
    # Redis
    REDIS_HOST="redis-*****tysub*****1-8*****.a.aivencloud.com"
    REDIS_PORT=*****
    REDIS_USERNAME="*****"
    REDIS_PASSWORD="AVNS_*****Mi*****S*****a"
  2. Set up the following services and fill in their credentials above: Neon (PostgreSQL), AWS S3, Pinecone, OpenAI, Clerk, and Redis.

⇉⟭ Client

  1. Create a .env.local file in the client directory (client/.env.local):
    NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_******reWx******M******su******b3******uZG******A
    CLERK_SECRET_KEY=sk_test_SI******B******Kw******Qgdx7V******9aL
    
    NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
    NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up
    NEXT_PUBLIC_CLERK_AFTER_SIGN_IN_URL=/
    NEXT_PUBLIC_CLERK_AFTER_SIGN_UP_URL=/
    NEXT_PUBLIC_CLERK_AFTER_SIGN_OUT_URL=/
    
    # OpenAI
    OPENAI_API_KEY=sk-p******j-f******og******Nr0P******FJt******JiBl******EvExEK
  2. Set up Clerk and copy the publishable and secret keys from your Clerk dashboard into the file above.

🔗 Frameworks

⇉⟭ Server

NestJS, Prisma, BullMQ, Redis

⇉⟭ Client

Next.js, Clerk

🔗 Features

  1. Chat
  2. User Dashboard
  3. Authentication
  4. Landing Page

🔗 Key Notes

  1. NestJS: NestJS provides a robust MVC architecture and excellent scalability, incorporating built-in features like dependency injection and a modular structure. It's ideal for production-grade enterprise applications, supporting microservices and ensuring maintainability, which traditional Node.js setups often lack.

    NestJS builds on Express (Node.js), achieving performance and scalability that are difficult to match with a conventional setup.

  2. All backend APIs are protected by your Clerk session token. Without this, backend access is restricted, complicating Postman testing. To test with Postman, temporarily remove or comment out the following line in server/src/app.module.ts:
    providers: [
      {
        provide: APP_GUARD,
        useClass: ClerkAuthGuard,
      }
    ]
  3. All processing occurs within BullMQ workers, ensuring high scalability.
  4. We use Redis for data caching. You can use it locally or connect to the production URL.

🔗 Helper Notes

a. Use Prisma commands:

npx prisma studio
npx prisma migrate dev --name init

b. S3

  1. Create the bucket:
  • ACLs disabled (recommended)
  • Block all public access - Uncheck
  • Bucket Versioning - Disable
  • Object Lock - Disable
  • Default encryption
    • Encryption type - Server-side encryption with Amazon S3 managed keys (SSE-S3)
    • Bucket Key - Enable
  2. Bucket -> Permissions -> Bucket policy -> Edit

Paste the policy below, but first make sure Block public access is disabled (Edit Block public access - Disable).

2.1 S3 policy (arn:aws:s3:::<bucket-name>):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}

2.2 CORS configuration:

Cross-origin resource sharing (CORS) - edit

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST", "DELETE", "GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]