Frugal AI: The Green Model Router
Frugal AI Router is an open-source proxy layer that sits between your application and AI providers. It analyzes incoming requests, benchmarks them against efficiency metrics, and routes them to the most resource-appropriate model. Think of it as a “green load balancer” for AI.
Core Components
- Request Analyzer: Classifies task complexity (simple Q&A vs. complex reasoning)
- Model Efficiency Index: Database of models ranked by performance-per-watt
- Smart Router: Directs requests to the most efficient capable model
- Dashboard: Visualizes energy savings, cost reduction, and carbon impact
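To make the architecture concrete, here is a minimal sketch of how the Request Analyzer, Model Efficiency Index, and Smart Router could fit together. All model names, wattage figures, and the complexity heuristic are illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    capability: int          # 1 = simple Q&A ... 3 = complex reasoning
    perf_per_watt: float     # higher = more efficient (illustrative units)

# Toy Model Efficiency Index
INDEX = [
    ModelProfile("small-chat", capability=1, perf_per_watt=9.0),
    ModelProfile("mid-general", capability=2, perf_per_watt=4.0),
    ModelProfile("frontier", capability=3, perf_per_watt=1.0),
]

def classify(prompt: str) -> int:
    """Very rough Request Analyzer: long or reasoning-heavy prompts score higher."""
    score = 1
    if len(prompt) > 500:
        score += 1
    if any(k in prompt.lower() for k in ("prove", "step by step", "analyze")):
        score += 1
    return min(score, 3)

def route(prompt: str) -> ModelProfile:
    """Smart Router: pick the most efficient model that is still capable enough."""
    needed = classify(prompt)
    candidates = [m for m in INDEX if m.capability >= needed]
    return max(candidates, key=lambda m: m.perf_per_watt)
```

A simple factual question would land on the small model, while a request for step-by-step reasoning would be escalated to a more capable (and less efficient) one.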
Problem to Solve
- Wasteful AI defaults: Developers use powerful models for simple tasks (summarization, classification, basic chat) that smaller models handle equally well
- No visibility: There’s no easy way to measure or compare the environmental footprint of AI choices
- Cost explosion: Overpowered model usage drives up API costs unnecessarily
- Hosting blind spot: Web hosts offering AI features have no tools to optimize or report on sustainability
Target Audience
- Web hosting providers adding AI features to their platforms
- WordPress plugin developers integrating AI functionality
- Agencies and developers building AI-powered sites for clients
- Enterprise teams with sustainability reporting requirements
- Open-source projects wanting responsible AI integration
- …
| Role | Contribution |
|---|---|
| Backend Developers | Build the routing logic, API proxy, model abstraction layer |
| Frontend/UI Developers | Create the dashboard and configuration interface |
| DevOps/Platform Engineers | Containerization, deployment, scaling considerations |
| Data/ML Enthusiasts | Help build the task classification and benchmarking system |
| Sustainability Advocates | Research carbon metrics, validate methodology |
| Technical Writers | Documentation, pitch materials, demo scripts |
Hackathon Goals
- Deliver an MVP routing proxy across multiple AI providers. Build a working proxy that can route requests between 2–3 model providers (e.g., OpenAI + Anthropic) and different models, with a stable API surface that teams can drop into existing apps.
- Add an initial task complexity signal to drive routing. Implement a basic complexity classifier (rule-based or lightweight ML) that predicts “cheap vs. capable” needs and selects an appropriate model accordingly.
- Make footprint and cost observable by default. Log every request/response with estimated cost and energy/CO₂ proxy metrics (even if coarse at first), ensuring audits and comparisons are possible.
- Provide a minimal dashboard that proves value. Ship a simple dashboard that explains routing decisions and quantifies savings (cost, estimated energy/CO₂), so users can validate impact quickly.
- Enable configurable routing strategies beyond the default. Support switchable policies (cost-first vs. green-first vs. performance-first) via configuration and rules, so teams can align routing with their priorities.
- Lay the foundation for ecosystem adoption and benchmarking. Package integrations (e.g., a WordPress wrapper) and a model-efficiency benchmarking harness, with a path toward later stretch features such as a developer browser extension and a public “greenest implementation” leaderboard.
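The switchable policies mentioned above (cost-first vs. green-first vs. performance-first) can be sketched as selection functions over a shared candidate table. The model data, numbers, and policy names here are illustrative assumptions that mirror the goals, not the team's actual configuration format.

```python
CANDIDATES = [
    # (name, cost_per_1k_tokens_usd, est_watt_hours_per_request, quality_score)
    ("small-chat",  0.0005, 0.3, 0.70),
    ("mid-general", 0.0030, 1.2, 0.85),
    ("frontier",    0.0150, 4.0, 0.95),
]

POLICIES = {
    "cost-first":        lambda m: m[1],   # minimize price
    "green-first":       lambda m: m[2],   # minimize energy
    "performance-first": lambda m: -m[3],  # maximize quality
}

def select(policy: str, min_quality: float = 0.0) -> str:
    """Pick the best candidate under the chosen policy, subject to a quality floor."""
    pool = [m for m in CANDIDATES if m[3] >= min_quality]
    return min(pool, key=POLICIES[policy])[0]
```

The quality floor is what keeps a green-first or cost-first policy from degrading results: teams can demand a minimum quality and let the router optimize within that constraint.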
Results
The Frugal AI team tackled an increasingly important challenge in AI: how to use large language models more efficiently without sacrificing quality. Their project focused on building an intelligent routing solution that analyzes incoming tasks, estimates their complexity, and routes them to the most suitable model rather than automatically relying on the largest and most resource-intensive one. The core idea was simple but highly relevant from a sustainability perspective: not every query needs a frontier model. By avoiding unnecessary use of large-scale models for simpler tasks, the project aimed to reduce energy consumption, lower computational overhead, and support a more sustainable use of AI systems.
What We Built
At the core of the project is a modular proxy layer that connects different task assessment methods with different language models. Rather than relying on a single fixed approach, the system allows plugging in multiple ways of estimating task complexity and combining them with a range of model options. During the hackathon, the team explored several approaches, ranging from keyword search as a baseline to more advanced methods such as vector search, SVMs, BERT-based models, and LLM-driven techniques. This allowed them to compare how different methods perform when balancing quality, speed, cost, and efficiency. The modular design also provides a strong foundation for future research, facilitating exploration of new task assessment methods, routing strategies, and model combinations.
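The plug-in idea can be illustrated as a shared interface that every task assessor implements, so a keyword baseline and heavier estimators (vector search, SVMs, BERT-based models) remain interchangeable. The class and method names below are assumptions for illustration, not the team's actual code.

```python
from typing import Protocol

class ComplexityAssessor(Protocol):
    def assess(self, prompt: str) -> float:
        """Return a complexity score in [0, 1]."""
        ...

class KeywordAssessor:
    """Baseline: score by presence of reasoning-heavy keywords."""
    KEYWORDS = ("explain", "derive", "compare", "multi-step")
    def assess(self, prompt: str) -> float:
        hits = sum(k in prompt.lower() for k in self.KEYWORDS)
        return min(hits / len(self.KEYWORDS), 1.0)

class LengthAssessor:
    """Another cheap signal: longer prompts tend to be harder."""
    def assess(self, prompt: str) -> float:
        return min(len(prompt) / 2000, 1.0)

def combined_score(prompt: str, assessors: list[ComplexityAssessor]) -> float:
    """Average the pluggable signals; a vector-search or BERT-based
    assessor could be added to the list without touching the router."""
    return sum(a.assess(prompt) for a in assessors) / len(assessors)
```

Because each assessor is just one method behind a common interface, comparing methods on quality, speed, and cost becomes a matter of swapping list entries.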
Alongside the routing logic, the team also worked on benchmarking and a dashboard to make efficiency, cost, and environmental impact more visible. The result was a practical prototype that not only demonstrated the value of smarter model selection but also raised broader research questions about how to estimate query complexity efficiently.
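The coarse footprint accounting described above might look like the following. The energy figure per thousand tokens and the grid carbon intensity are illustrative assumptions, not measured values.

```python
def estimate_footprint(tokens: int,
                       wh_per_1k_tokens: float = 2.0,
                       grid_g_co2_per_kwh: float = 400.0) -> dict:
    """Return rough per-request energy (Wh) and CO2 (grams) proxy metrics."""
    energy_wh = tokens / 1000 * wh_per_1k_tokens
    co2_g = energy_wh / 1000 * grid_g_co2_per_kwh
    return {"energy_wh": energy_wh, "co2_g": co2_g}
```

Logging these estimates per request, even when coarse, is what makes routing decisions auditable and savings comparable across models.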
What’s Next
These questions are now being explored beyond the hackathon, with ongoing research building on the project’s modular architecture and early findings. With its combination of practical relevance, modular design, and strong potential for further development, Frugal AI stood out as a thoughtful contribution to more efficient and responsible AI systems.
Team Members

- Ida Meier
- Jonas Liebschner
- Tobias Winter
- Eduard Marbach
- Benjamin Burkhardt
- Andreas Biberacher
- Chandan Das Adhikari
- Dmitry Rybakov
- Fabian Genes
- Julian Haupt
- Leon Knauer
- Michael Schmitz
- Blanca Vitallowitz
- Farzaneh Shams
- Oliver Bartsch
Project Leads

Daniel Heinz
Postdoctoral Researcher,
Karlsruhe Institute of Technology

Jonas Liebschner
Research Associate,
Karlsruhe Institute of Technology

Ida Meier
Research Associate,
Karlsruhe Institute of Technology
#FrugalAI
#GreenModelRouter