LLMeter is a tool for testing and evaluating prompts across language models. It surfaces performance and cost metrics for each run, helping you optimize prompts and model choice effectively.

- Multi-Model Support: Test prompts across GPT-3.5-turbo, Gemini-2.0-flash, and more
- Cost Intelligence: Real-time token cost estimation with currency conversion
- Performance Metrics: Response time tracking and quality comparisons
- Prompt Versioning: Save and compare different prompt variations
- API Agnostic: Works with multiple LLM providers simultaneously
- Security First: API keys stored locally in browser storage
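The cost-intelligence feature boils down to multiplying token counts by per-token prices. A minimal sketch of that calculation, with hypothetical prices (real per-1K-token rates vary by provider and change over time, and these names are illustrative, not LLMeter's actual internals):

```javascript
// Hypothetical per-1K-token prices in USD -- for illustration only.
const PRICES = {
  "gpt-3.5-turbo": { input: 0.0005, output: 0.0015 },
  "gemini-2.0-flash": { input: 0.0001, output: 0.0004 },
};

// Estimate the cost of one request from its input/output token counts.
function estimateCost(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  return (inputTokens / 1000) * p.input + (outputTokens / 1000) * p.output;
}
```

For example, a 2,000-token prompt with a 500-token completion on `gpt-3.5-turbo` would cost roughly `estimateCost("gpt-3.5-turbo", 2000, 500)` ≈ $0.00175 under these sample prices.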
Access the production deployment directly:
👉 llmeter.vercel.app
To run locally:
- Clone the repository: `git clone https://github.com/IremOztimur/llmeter.git`
- Install dependencies: `cd llmeter && npm install`
- Start the development server: `npm run dev`
- Open http://localhost:3000 in your browser
- Click Config in the navigation bar
- Select your target model from the dropdown
- Add your API keys:
  - OpenAI API key
  - Gemini API key
- Save the configuration
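Per the "Security First" note above, saved API keys stay in the browser's local storage rather than being sent to a server. A minimal sketch of that pattern, assuming a browser environment (the storage key name is illustrative, not LLMeter's actual one; the in-memory fallback just lets the sketch run outside a browser):

```javascript
// Use real localStorage in the browser; fall back to an in-memory
// stand-in elsewhere so the sketch stays runnable.
const store = typeof localStorage !== "undefined" ? localStorage : {
  map: new Map(),
  setItem(k, v) { this.map.set(k, v); },
  getItem(k) { return this.map.has(k) ? this.map.get(k) : null; },
};

const STORAGE_KEY = "llmeter.apiKeys"; // illustrative name

// Persist the keys locally; they never leave the user's machine.
function saveApiKeys(keys) {
  store.setItem(STORAGE_KEY, JSON.stringify(keys));
}

// Load previously saved keys, or an empty object if none exist.
function loadApiKeys() {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : {};
}
```

The trade-off of this design is that keys are scoped to one browser profile and are cleared with site data, but no backend ever handles the credentials.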

Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request