Innovaccer - India (UTC +05:30) - in/sparsh-dutta-4b3a8857
Stars
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
JAX implementation of OpenAI's Whisper model for up to 70x speed-up on TPU.
Robust Speech Recognition via Large-Scale Weak Supervision
Visualize and compare datasets, target values and associations, with one line of code.
An idiomatic, lean, fast & safe pure Rust implementation of Git
DSPy: The framework for programming—not prompting—foundation models
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
A cloud-native vector database, storage for next generation AI applications
The AI-native open-source embedding database
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
AutoMQ is a cloud-first alternative to Kafka that decouples durability to S3 and EBS. 10x more cost-effective. Autoscales in seconds. Single-digit-ms latency.
🦜🔗 Build context-aware reasoning applications
A programming framework for agentic AI 🤖
Python library and shell utilities to monitor filesystem events.
OnnxTR: an ONNX pipeline wrapper for the docTR (Document Text Recognition) library - for seamless, high-performing & accessible OCR
Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
docTR (Document Text Recognition) - a seamless, high-performing & accessible library for OCR-related tasks powered by Deep Learning.
A smarter cd command. Supports all major shells.
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Stable Diffusion web UI
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
LlamaIndex is a data framework for your LLM applications
Chronon is a data platform for serving data to AI/ML applications.
Scalene: a high-performance, high-precision CPU, GPU, and memory profiler for Python with AI-powered optimization proposals