Front End Developer · Back End Developer · Trending · Free & Open Source
Ollama
Run large language models locally on your machine
Overview
Ollama makes it easy to run open-source LLMs locally. It supports models such as Llama, Mistral, Gemma, DeepSeek, and more. It is a good fit for developers who want privacy, offline access, or freedom from per-token API costs.
Features
One-command model download
Support for 100+ models
Built-in REST API
GPU acceleration
Custom model creation
Cross-platform (macOS, Linux, Windows)
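The built-in REST API can be exercised with a plain HTTP request once the Ollama server is running locally (it listens on port 11434 by default). A minimal sketch, assuming the llama3 model has already been pulled:

```shell
# Ask a locally running Ollama server for a one-shot completion.
# Assumes `ollama serve` is running and llama3 has been pulled;
# "stream": false returns a single JSON response instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The response is a JSON object whose "response" field holds the generated text, so the same endpoint can back any app or script that can make HTTP requests.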
How to Get Started
Download the installer for your OS from ollama.com. On macOS or Linux you can instead run: curl -fsSL https://ollama.com/install.sh | sh. Then pull a model with ollama pull llama3 and start chatting with ollama run llama3.
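Put together, the steps above look like this (macOS/Linux; Windows users download the installer from ollama.com instead):

```shell
# Install Ollama via the official install script (macOS/Linux).
curl -fsSL https://ollama.com/install.sh | sh

# Download a model's weights, then start an interactive chat session with it.
ollama pull llama3
ollama run llama3
```

ollama run will also pull the model automatically if it is not present, so the explicit pull step is optional.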
Pricing
Free & Open Source
FAQ
What models can I run?
Ollama supports Llama 3, Mistral, Gemma, DeepSeek, Phi, Qwen, CodeLlama, and many more open-source models.
How much RAM do I need?
As a rough guide, 7B models need ~8 GB of RAM, 13B models ~16 GB, and 70B models ~64 GB.