Front End Developer · Back End Developer · Trending · Free & Open Source
Ollama

Run large language models locally on your machine

Overview

Ollama makes it easy to run open-source LLMs locally. It supports models such as Llama, Mistral, Gemma, and DeepSeek, among many others. It is a good fit for developers who want privacy, offline access, or freedom from API costs.

Features

One-command model download
Support for 100+ models
REST API compatible
GPU acceleration
Custom model creation
Cross-platform (macOS, Linux, Windows)
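
Since the list above mentions a REST API, here is a minimal sketch of talking to it from Python. Assumptions flagged: the server listens on Ollama's default port 11434, the /api/generate endpoint accepts a JSON body with model, prompt, and stream fields, and the helper name build_generate_request is mine, not part of Ollama.

```python
import json

# Ollama's default local endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for a single-shot completion request.

    Setting stream=False asks the server to return one JSON object
    instead of a stream of partial responses.
    """
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_request("llama3", "Why is the sky blue?")
body = json.dumps(payload)
# POST `body` to OLLAMA_URL with urllib.request or curl to get a completion.
```

The request itself is omitted here because it needs a running local server with the model already pulled; the payload shape is the portable part.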

Getting Started

Download the installer for your OS from ollama.com, or on macOS/Linux run: curl -fsSL https://ollama.com/install.sh | sh. Then pull a model with ollama pull llama3 and start chatting with ollama run llama3.

Pricing

Free & Open Source

Frequently Asked Questions

What models can I run?

Ollama supports Llama 3, Mistral, Gemma, DeepSeek, Phi, Qwen, CodeLlama, and many more open-source models.

How much RAM do I need?

7B models need ~8GB RAM, 13B models need ~16GB, and 70B models need ~64GB.
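
The rule of thumb above can be captured in a small lookup helper. This is only a restatement of the rough figures in the answer; the function name and the dictionary are mine, and the numbers are approximate minimums, not guarantees.

```python
# Rough minimum RAM guidance, keyed by model parameter count.
# Values restate the FAQ figures above: 7B ~8 GB, 13B ~16 GB, 70B ~64 GB.
RAM_GUIDE_GB = {"7b": 8, "13b": 16, "70b": 64}

def min_ram_gb(model_size: str) -> int:
    """Return the approximate minimum RAM (in GB) for a model size tag."""
    try:
        return RAM_GUIDE_GB[model_size.lower()]
    except KeyError:
        raise ValueError(f"No guidance for size {model_size!r}")

print(min_ram_gb("13B"))  # prints 16
```

Actual usage varies with quantization and context length, so treat these as starting points when picking a model for your machine.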