Production-ready Python framework for building AI applications fast
RapidAI is designed for one thing: getting from idea to deployed AI application in under an hour. When your boss asks you to POC the latest AI tool, this is the framework you reach for.
A web framework that bridges the gap between Flask's simplicity and Django's batteries-included approach, optimized specifically for modern AI development. Think of it as "the Rails of AI apps" - convention over configuration, but for LLM-powered applications.
```python
from rapidai import App, LLM

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
async def chat(message: str):
    response = await llm.complete(message)
    return {"response": response}

if __name__ == "__main__":
    app.run()
```
```python
from rapidai import App, LLM

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
async def chat(message: str):
    async for chunk in llm.stream(message):
        yield chunk

if __name__ == "__main__":
    app.run()
```
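The streaming handler above is just a plain async generator: it yields chunks as they arrive instead of waiting for the full completion. Here is a framework-free sketch of that pattern, with a simulated chunk source standing in for `llm.stream()` (the `fake_stream` and `collect` names are illustrative, not part of RapidAI):

```python
import asyncio

# Simulated stream: yields fixed-size chunks of a string, ceding control
# to the event loop between chunks the way a real network stream would.
async def fake_stream(text: str, chunk_size: int = 4):
    for i in range(0, len(text), chunk_size):
        await asyncio.sleep(0)  # let other tasks run between chunks
        yield text[i:i + chunk_size]

# Consume the stream the same way the route handler does with `async for`.
async def collect():
    return [chunk async for chunk in fake_stream("hello world!")]

chunks = asyncio.run(collect())
print(chunks)  # ['hell', 'o wo', 'rld!']
```

Because the handler yields rather than returns, clients start receiving output as soon as the first chunk is ready.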
```python
from rapidai import App, background

app = App()

@background(max_retries=3)
async def process_document(doc_id: str):
    # Long-running task runs in the background
    await analyze_document(doc_id)  # analyze_document is your own coroutine

@app.route("/process", methods=["POST"])
async def start_processing(doc_id: str):
    job = await process_document(doc_id)
    return {"job_id": job.id, "status": job.status}
```
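The `max_retries=3` behavior boils down to a familiar decorator pattern: re-await the coroutine until it succeeds or the retry budget is exhausted. A minimal sketch of that idea, independent of RapidAI's actual internals (the `retrying` name and the zero-delay sleep are assumptions; real code would add backoff and job tracking):

```python
import asyncio

# Hypothetical retry decorator: re-runs a coroutine up to max_retries
# times, re-raising the last exception if every attempt fails.
def retrying(max_retries: int):
    def decorator(fn):
        async def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(max_retries):
                try:
                    return await fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    await asyncio.sleep(0)  # real code would back off here
            raise last_exc
        return wrapper
    return decorator

calls = {"n": 0}

@retrying(max_retries=3)
async def flaky():
    # Fails twice, then succeeds - a stand-in for a transient error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result = asyncio.run(flaky())
print(result, calls["n"])  # done 3
```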
```python
from rapidai import App, LLM, monitor

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
@monitor()  # Automatically tracks tokens and costs
async def chat(message: str):
    return await llm.complete(message)

@app.route("/metrics")
async def metrics():
    return app.get_metrics_html()  # Built-in dashboard
```
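Conceptually, a monitoring decorator like `@monitor()` wraps the handler, records per-call stats, and lets the request through unchanged. The sketch below illustrates that shape only; the `metrics` dict and the rough 4-characters-per-token estimate are assumptions for the example, not RapidAI's API:

```python
import asyncio

# Toy metrics store; a real implementation would track cost per model too.
metrics = {"calls": 0, "approx_tokens": 0}

def monitor():
    def decorator(fn):
        async def wrapper(message: str):
            metrics["calls"] += 1
            # Crude heuristic: ~4 characters per token.
            metrics["approx_tokens"] += len(message) // 4
            return await fn(message)
        return wrapper
    return decorator

@monitor()
async def chat(message: str):
    return message.upper()  # stand-in for an LLM call

out = asyncio.run(chat("hello there, model"))
print(out, metrics)  # HELLO THERE, MODEL {'calls': 1, 'approx_tokens': 4}
```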
```bash
pip install rapidai-framework
```
Install with specific features:
```bash
# Anthropic Claude support
pip install "rapidai-framework[anthropic]"

# OpenAI support
pip install "rapidai-framework[openai]"

# RAG (document loading, embeddings, vector DB)
pip install "rapidai-framework[rag]"

# Redis (for caching and memory)
pip install "rapidai-framework[redis]"

# Everything
pip install "rapidai-framework[all]"

# Development tools
pip install "rapidai-framework[dev]"
```
- `@background` decorator with retry logic and job tracking
- `@monitor` decorator with token/cost tracking and HTML dashboard
- CLI commands: `rapidai new`, `rapidai dev`, `rapidai deploy`, `rapidai test`

Version: 1.0.0 - Production Ready 🎉
See CHANGELOG.md for release notes.
Perfect for building chatbots, RAG systems, AI agents, and LLM-backed APIs.
Complete documentation available at https://shaungehring.github.io/rapidai/
RapidAI includes a powerful CLI for project scaffolding and management:
```bash
# Create a new project from template
rapidai new my-chatbot --template chatbot

# Start development server with hot reload
rapidai dev

# Run tests
rapidai test

# Deploy to cloud platforms
rapidai deploy --platform vercel

# Generate documentation
rapidai docs
```
Available templates:
- `chatbot` - Simple chat application
- `rag` - RAG system with document Q&A
- `agent` - AI agent with tools
- `api` - REST API with LLM endpoints

```bash
# Clone the repository
git clone https://github.com/shaungehring/rapidai.git
cd rapidai

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in editable mode with dev dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run tests
pytest

# Run tests with coverage
pytest --cov=rapidai tests/

# Type check
mypy rapidai

# Lint and format
ruff check rapidai
ruff format rapidai
```
RapidAI is available on PyPI. To publish a new version:
```bash
# Test on TestPyPI first
./scripts/publish.sh test

# Publish to production PyPI
./scripts/publish.sh prod
```
See PUBLISHING.md for complete publishing guide.
MIT License - see LICENSE file for details.
We welcome contributions of all kinds - bug reports, features, docs, and examples.
See CONTRIBUTING.md for guidelines on how to contribute.
If you find RapidAI helpful, please consider supporting the project.
Built with ❤️ for AI engineers who move fast
Version: 1.0.0 | Status: Production Ready 🎉