- Foundation29 (@foundation29org)
- Madrid, Spain
- https://foundation29.org
- https://www.linkedin.com/in/juanjodoolmo/
- @claxterix
Stars
The unofficial DSPy framework. Build LLM-powered agents and agentic workflows based on the Stanford DSP paper.
Claude Engineer is an interactive command-line interface (CLI) that uses Anthropic's Claude-3.5-Sonnet model to assist with software development tasks. This tool combines the capa…
TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients.
Turn any glasses into AI-powered smart glasses
AutoGroq is a groundbreaking tool that revolutionizes the way users interact with Autogen™ and other AI assistants. By dynamically generating tailored teams of AI agents based on your project requi…
Perplexica is an AI-powered search engine and an open-source alternative to Perplexity AI.
An open-source visual programming environment for battle-testing prompts to LLMs.
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
The #1 open-source voice interface for desktop, mobile, and ESP32 chips.
SkyPilot: Run AI and batch jobs on any infra (Kubernetes or 12+ clouds). Get unified execution, cost savings, and high GPU availability via a simple interface.
A feature-rich command-line audio/video downloader
Open-source observability for your LLM application, based on OpenTelemetry
A framework to enable multimodal models to operate a computer.
Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)
Incredibly fast Whisper-large-v3
A Streamlit ChatGPT app that can query the clinical-trials.gov API. Works better with GPT-4.
RAG architecture: index and query any data using an LLM and natural language, with source tracking, citations, and asynchronous memory patterns.
Monitors and processes traffic to and from Azure OpenAI endpoints.
From the Transistor to the Web Browser, a rough outline for a 12-week course
The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.