Local AI on Linux: Alternatives to Mac’s MLX Using LLM and Ollama

After reading Anthony Lewis’ excellent post on running local AI using MLX on macOS, I wanted to document a similar approach tailored for Linux users.

Quick Setup Guide for Linux

Here’s how you can run your own local AI models on Linux using the llm tool with Ollama support.

1. Install uv (a package manager)

curl -LsSf https://astral.sh/uv/install.sh | sh

After installation, ensure uv is in your PATH. Restart your shell or source your profile if needed.
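
For example, on a Bash setup where the installer dropped uv into ~/.local/bin (its default location; your profile file may be ~/.zshrc or similar instead):

# reload the profile so the new PATH entry takes effect
source ~/.bashrc

# confirm the binary is found
uv --version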

2. Install the llm CLI

uv tool install llm --python 3.12

This installs llm in a virtual environment using Python 3.12.
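
A quick sanity check:

# should print the installed llm version
llm --version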

3. Install the Ollama plugin for llm

llm install llm-ollama
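
To verify the plugin was picked up, list the models llm now knows about; once Ollama itself is installed and running (see the next step), any locally pulled Ollama models should show up here:

llm models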

4. List Available Local Models
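
This step assumes Ollama itself is already installed and its server is running. If not, the upstream install script (check ollama.com for the current command) is:

curl -fsSL https://ollama.com/install.sh | sh

With Ollama in place, list the models already on disk: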

ollama list

Example output:

NAME                 ID              SIZE      MODIFIED
dolphin3:latest      d5ab9ae8e1f2    4.9 GB    3 months ago
deepscaler:latest    0031bcf7459f    3.6 GB    3 months ago
deepseek-r1:latest   0a8c26691023    4.7 GB    3 months ago
llama3.2:latest      a80c4f17acd5    2.0 GB    3 months ago

5. Interact With a Model

You can run prompts directly or open an interactive session:

llm -m dolphin3:latest "42 is the answer to which question?"
# start interactive session
llm chat -m dolphin3:latest
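
Because llm reads standard input, you can also pipe text in as context. A small sketch (notes.txt here is just a placeholder file):

# summarize a local file by piping it into the model
cat notes.txt | llm -m dolphin3:latest "Summarize this file in three bullet points"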

6. Pull Additional Models

Visit the Ollama model library to discover more. For example, to install Mistral:

ollama pull mistral
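
Once the pull finishes, the model is immediately usable through the llm-ollama plugin under whatever name llm models reports (typically the Ollama name, e.g. mistral:latest):

llm -m mistral:latest "Explain the difference between a process and a thread"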

Model Management

Command                 Functionality
ollama list             check what is locally available
ollama rm <model>       remove a model
ollama pull <model>     pull a model from the registry
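
For example, to free disk space by removing one of the models listed earlier:

ollama rm deepscaler:latest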

Key Differences – macOS vs Linux

  • macOS: Uses Apple’s MLX framework, with optimized performance for M-series chips.
  • Linux: MLX isn’t available, so Linux users rely on tools like Ollama instead.

Potential Misconceptions Around llm-llama

The original post suggests: "On a PC, skip the steps about MLX and use Ollama to download a model. Then install the llm-llama plugin instead of llm-mlx."

Source: https://anthonylewis.com/2025/06/01/run-your-own-ai/

Following that advice verbatim doesn't work:

$ llm install llm-llama
ERROR: Could not find a version that satisfies the requirement llm-llama (from versions: none)
ERROR: No matching distribution found for llm-llama

llm-llama is not a published plugin on PyPI (the Python Package Index), which is where the llm install command fetches plugins from.

For Simon Willison's llm tool, the plugin that actually provides Ollama support is llm-ollama, which is exactly what step 3 above installs; there is no llm-llama.
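
You can check which plugins are installed in your llm environment with:

llm plugins

After step 3 above, llm-ollama should appear in that list.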

Conclusion

Converting Mac AI commands to Linux requires understanding the underlying platform differences, particularly around AI frameworks. While the uv installation is identical, the AI execution stack needs platform-specific alternatives: Linux users should replace the MLX-based components with Ollama and the llm-ollama plugin, and may prefer APT-managed packages for system-level dependencies. The resulting setup provides much the same functionality as the Mac workflow while leveraging Linux-native AI tooling and package management.
