# Code Reviewer AI

Code Reviewer AI is a powerful tool that leverages artificial intelligence to analyze your code and provide feedback. It helps you identify potential bugs, improve code quality, and enforce best practices.
## Table of Contents

- Features
- Technologies
- Running Locally
- AI Provider Setup
- Comparing AI Providers
- Recommended Configurations
- Contributing
- License
- Author
- Acknowledgments
- Project Status
- Future Plans
- Bug Reports
- Feature Requests
- Support
- Code of Conduct
## Features

- **AI-powered Code Review:** Get intelligent feedback on your code.
- **Multiple AI Providers:** Choose between Google Gemini, Ollama (Local), and LM Studio (Local).
- **File Tree Navigation:** Easily browse and select files from your project.
- **Code Formatting:** Format your code with the click of a button.
- **Linter Integration:** Identify and fix syntax errors and style issues.
- **Welcome Message:** Get started quickly with a helpful welcome message.
## Running Locally

1. Clone the repository:

   ```bash
   git clone https://github.com/deoninja/Code-Reviewer-AI.git
   cd Code-Reviewer-AI
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Run the development server:

   ```bash
   npm run dev
   ```

4. Open your browser and navigate to `http://localhost:5173` (or the address shown in your terminal).
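These steps assume Node.js and npm are already installed; if `npm run dev` fails, verify your toolchain first:

```bash
# Confirm Node.js and npm are installed and on your PATH.
node --version
npm --version
```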
## AI Provider Setup

### Google Gemini (Cloud)

1. Visit [Google AI Studio](https://aistudio.google.com/)
2. Sign in with your Google account
3. Click "Create API Key"
4. Choose "Create API key in new project" or select an existing project
5. Copy the generated API key

**Important:** Keep your API key secure and never share it publicly.

Usage notes:

- Gemini API has generous free tier limits
- Rate limits apply (requests per minute)
- Monitor your usage in the Google AI Studio dashboard
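Before configuring the app, you can sanity-check a new key from the command line. This is a minimal sketch against the Gemini REST API, assuming the `gemini-1.5-flash` model and a key exported as `GEMINI_API_KEY` (both are placeholders; substitute your own):

```bash
# Smoke test for a freshly created Gemini API key.
# GEMINI_API_KEY is a placeholder environment variable holding your key.
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${GEMINI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "Reply with OK if you can read this."}]}]}'
```

A JSON response containing generated text confirms the key is active; an error body indicates a key or quota problem.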
### Ollama (Local)

Ollama allows you to run large language models locally on your machine, providing complete privacy and offline functionality.

#### Installation
**Windows:**

- Download Ollama from https://ollama.ai/download
- Run the installer and follow the setup wizard
- Ollama will automatically start as a service

**macOS:**

- Download Ollama from https://ollama.ai/download
- Drag Ollama to your Applications folder
- Run Ollama from Applications

**Linux:**

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```
#### Model Setup

1. Install a recommended model (choose one based on your system):

   ```bash
   # For systems with 8GB+ RAM - Good balance of speed and quality
   ollama pull llama3.1:8b

   # For systems with 16GB+ RAM - Better quality
   ollama pull codellama:13b

   # For systems with 4-8GB RAM - Faster but lower quality
   ollama pull llama3.2:3b

   # Alternative: Code-specific model
   ollama pull codellama:7b
   ```
2. Verify Ollama is running:

   ```bash
   ollama list
   ```

3. Test the model:

   ```bash
   ollama run llama3.1:8b "Hello, can you help me review code?"
   ```
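Each pull downloads several gigabytes. Beyond `ollama list`, you can inspect a pulled model's parameter count, quantization, and context window (the model name here is an example; use whichever one you installed):

```bash
# Print details (architecture, parameters, quantization, context length)
# for a locally installed model.
ollama show llama3.1:8b
```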
#### Configuring the App

1. Open the Code Reviewer AI application
2. Click the settings icon
3. Change the AI Provider to "Ollama (Local)"
4. Configure the settings:
   - URL: `http://localhost:11434/v1/chat/completions` (default)
   - Model: Enter the model name you installed (e.g., `llama3.1:8b`)
5. Click "Save Configuration"
#### Troubleshooting

**Ollama not responding:**

- Check if the Ollama service is running: `ollama ps`
- Restart Ollama: `ollama serve`
- Check firewall settings (port 11434)

**Model not found:**

- List installed models: `ollama list`
- Pull the model again: `ollama pull <model-name>`

**Performance issues:**

- Use a smaller model for faster responses
- Close other memory-intensive applications
- Consider upgrading RAM for larger models
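For connection problems specifically, a quick reachability check helps separate firewall issues from model issues; Ollama answers plain GET requests on its default port:

```bash
# Should print "Ollama is running" if the service is reachable.
curl http://localhost:11434
```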
### LM Studio (Local)

LM Studio provides a user-friendly interface for running local language models with excellent performance optimization.

#### Installation and Setup
1. Download LM Studio:
   - Visit https://lmstudio.ai/
   - Download for your operating system (Windows, macOS, Linux)
   - Install following the standard process

2. First-time Setup:
   - Launch LM Studio
   - The application will guide you through initial setup
   - No account required - everything runs locally

3. Browse and Download Models:
   - Open LM Studio
   - Go to the "Discover" tab
   - Search for recommended models:
     - Llama 3.1 8B - Good balance for most systems
     - Code Llama 7B - Optimized for code tasks
     - Mistral 7B - Fast and efficient
     - Phi-3 Mini - Lightweight option

4. Download a Model:
   - Click on your chosen model
   - Select the quantization level:
     - Q4_K_M - Good balance of quality and speed (recommended)
     - Q8_0 - Higher quality, needs more RAM
     - Q2_K - Faster, lower quality
   - Click "Download"

5. Load the Model:
   - Go to the "Chat" tab
   - Click "Select a model to load"
   - Choose your downloaded model
   - Wait for it to load (may take a few minutes)

6. Enable Server Mode:
   - Go to the "Local Server" tab in LM Studio
   - Select your loaded model
   - Click "Start Server"
   - Note the server URL (usually `http://localhost:1234`)

7. Verify Server is Running:
   - You should see "Server running on http://localhost:1234"
   - The status indicator should be green
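To confirm the server answers requests outside LM Studio's own UI, you can send a test request to its OpenAI-compatible endpoint. A minimal sketch, assuming the default port:

```bash
# Test chat request against LM Studio's local server.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hello"}]}'
```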
#### Configuring the App

1. Open the Code Reviewer AI application
2. Click the settings icon
3. Change the AI Provider to "LM Studio (Local)"
4. Configure the settings:
   - URL: `http://localhost:1234/v1/chat/completions` (default)
   - Model: Use `local-model` or check LM Studio for the exact model name
5. Click "Save Configuration"
#### Troubleshooting

**Server won't start:**

- Ensure a model is loaded in the Chat tab first
- Check if port 1234 is available
- Restart LM Studio

**Connection refused:**

- Verify the server is running (green status in the Local Server tab)
- Check that the URL in settings matches LM Studio's server URL
- Disable firewall/antivirus temporarily to test

**Poor performance:**

- Try a smaller model or lower quantization
- Adjust context length in LM Studio settings
- Close other applications to free up RAM
## Comparing AI Providers

| Feature | Gemini (Cloud) | Ollama (Local) | LM Studio (Local) |
|---|---|---|---|
| Setup Difficulty | Easy | Medium | Easy |
| Internet Required | Yes | No | No |
| Privacy | Data sent to Google | Complete privacy | Complete privacy |
| Speed | Fast | Depends on hardware | Depends on hardware |
| Cost | Free tier, then paid | Free | Free |
| Model Quality | Very High | High (varies by model) | High (varies by model) |
| RAM Requirements | None | 4-16GB+ | 4-16GB+ |
| Best For | Quick setup, high quality | Privacy, offline use | User-friendly local AI |
## Recommended Configurations

**Getting started:**

- Start with: Gemini (easiest setup)
- Upgrade to: LM Studio (user-friendly local option)

**Privacy-focused:**

- Recommended: Ollama with Llama 3.1 8B
- Alternative: LM Studio with Code Llama 7B

**Low-spec systems (4-8GB RAM):**

- Ollama: `llama3.2:3b` or `phi3:mini`
- LM Studio: Phi-3 Mini with Q4_K_M quantization

**High-spec systems (16GB+ RAM):**

- Ollama: `codellama:13b`
- LM Studio: Llama 3.1 8B with Q8_0 quantization
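As a rough sanity check on these tiers (a general rule of thumb, not something from this project): a quantized model's weights take about parameters × bits-per-weight / 8 bytes, plus runtime overhead for the context window.

```bash
# Back-of-the-envelope RAM estimate for quantized model weights.
PARAMS_B=8   # model size in billions of parameters (e.g., an 8B model)
BITS=4       # bits per weight (Q4 quantization ~ 4, Q8 ~ 8)
echo "~$((PARAMS_B * BITS / 8)) GB for weights, plus 1-2 GB of runtime overhead"
```

By this estimate, an 8B model at Q4_K_M needs roughly 4 GB for weights and fits in 8 GB of total RAM, while Q8_0 doubles the weight footprint, which is why it lands in the 16GB+ tier.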
## Contributing

Contributions are welcome! Please see our Contributing Guide for more information.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- This project was inspired by the need for a simple and effective code review tool.
## Project Status

This project is currently in active development.
## Future Plans

- Add support for more AI providers.
- Implement user authentication.
- Add more options for code formatting.
- Improve the UI/UX.
## Bug Reports

Please report any bugs or issues you find by opening an issue on the GitHub repository.
## Feature Requests

If you have an idea for a new feature, please open an issue on the GitHub repository to discuss it.
## Support

For support, please open an issue on the GitHub repository.
## Code of Conduct

Please read our Code of Conduct for details on expected behavior and the process for reporting violations.
## Author

Deo Trinidad

© 2025 Deo Trinidad. All rights reserved.
