Large language models (LLMs) such as GPT and Llama have transformed the way we handle language tasks, from building smart chatbots to generating complex code snippets. Cloud platforms such as Hugging Face make these models easy to use, but in some cases it is smarter to run an LLM locally on your own computer. Why? Because local inference gives you greater privacy, lets you customize models to your specific needs, and can significantly reduce costs. Running an LLM locally puts you in full control, letting you take advantage of its power on your own terms.
Let's see how to run an LLM on your system with Ollama and Hugging Face in just a few simple steps!
The following video explains the process step by step:
"How to Run LLMs Locally in One Minute [Beginner Friendly]", using Ollama and Hugging Face — dylan (@dylanebert), January 6, 2025
Steps to run an LLM locally
Step 1: Download Ollama
First, search for "Ollama" in your browser, then download and install it on your system.
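Once the installer finishes, it's worth confirming from the terminal that everything works. A minimal sanity check, assuming the installer added the ollama command-line tool to your PATH:

```
# Check that the Ollama CLI is installed and reachable
ollama --version

# Start the Ollama server manually if the desktop app isn't already running it
ollama serve
```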
Step 2: Find the best open source LLM
Next, search for the "Hugging Face LLM Leaderboard" to find a ranked list of top open source language models.
Step 3: Filter models based on your device
After the list loads, apply filters to find the best model for your setup. For example:
- Select home, consumer-grade hardware.
- Show only official providers to avoid unofficial or unverified models.
- If your laptop has a low-end GPU, choose a model designed for edge devices.
Click a top-ranked model, such as Qwen/Qwen2.5-32B. In the upper-right corner of the page, click "Use this model". However, Ollama doesn't appear as an option here.
This is because Ollama uses a special format called GGUF, a smaller, faster, quantized version of a model.
(Note: quantization slightly reduces quality, but makes the model much more practical for local use.)
Step 4: Get the model in GGUF format
- Go to the model's "Quantizations" section - about 80 quantized versions are available there. Sort them by most downloads.
- Look for models with "GGUF" in their names, such as those published by Bartowski; these are a good choice.
- Select one of these models and click "Use this model with Ollama".
- For the quantization setting, pick a file that is 1-2GB smaller than your GPU's VRAM, or go with the recommended option, such as Q5_K_M (a quick way to check your VRAM follows this list).
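If you're not sure how much VRAM your GPU has, you can check from the terminal. A minimal sketch for NVIDIA cards, assuming the nvidia-smi utility that ships with the GPU driver is available:

```
# Report the GPU name plus total and currently used memory (NVIDIA only)
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```

On Apple Silicon Macs, the GPU shares system memory, so the machine's total RAM is the figure to compare against.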
Step 5: Download and start using the model
Copy the commands provided for the model of your choice and paste them into your terminal. Press Enter and wait for the download to complete.
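The command Ollama gives you follows Hugging Face's standard pattern for pulling GGUF models directly from the Hub: ollama run hf.co/{username}/{repository}:{quantization}. The repository and quantization tag below are illustrative examples, not necessarily the ones you selected:

```
# Download (on first run) and start chatting with a GGUF model from Hugging Face
ollama run hf.co/bartowski/Qwen2.5-7B-Instruct-GGUF:Q5_K_M
```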
After the download is complete, you can start chatting with the model just as you would with any other LLM. Simple and fun!
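A couple of everyday commands once the model is on disk; the model name is whatever "ollama list" reports for your download, so the one below is only a placeholder:

```
# See which models are installed locally
ollama list

# Ask a one-off question without entering the interactive chat
ollama run hf.co/bartowski/Qwen2.5-7B-Instruct-GGUF:Q5_K_M "Explain GGUF in one sentence."
```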
That's it! You are now running a powerful LLM locally on your device. Let me know in the comments section below whether these steps worked for you.