LLM Fine-tuning & Deployment

Master LLM fine-tuning (LoRA/RLHF), quantization, and best practices for local and server-side production deployment.

6 Articles in This Series · Created 2026-02-21

LoRA Fine-Tuning: QLoRA Setup & PEFT Guide

Fine-tune LLMs efficiently with LoRA and QLoRA. Step-by-step PEFT setup, key hyperparameters, and memory optimization for Hugging Face model customization.
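The core idea behind LoRA is to freeze the pretrained weight matrix W and learn only a low-rank update ΔW = BA, scaled by alpha/r. Before diving into the PEFT setup, here is a minimal NumPy sketch of that mechanism; the dimensions and hyperparameter values are illustrative assumptions, not recommendations:

```python
import numpy as np

# Illustrative dimensions: a 64x64 frozen weight matrix.
d, k, r = 64, 64, 8          # r is the LoRA rank (r << min(d, k))
alpha = 16                   # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / r.
    # B is zero at init, so the adapted model starts identical to the base.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, k))         # a batch of two inputs
y = lora_forward(x)

full_params = d * k                     # parameters if fine-tuning W directly
lora_params = A.size + B.size           # trainable LoRA parameters
print(y.shape, full_params, lora_params)
```

Only A and B are trained (1,024 parameters here versus 4,096 for full fine-tuning), which is where the memory savings come from; QLoRA pushes this further by keeping W in 4-bit precision.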


Ollama Advanced Practical Guide: Running and Fine-tuning Open Source LLMs Locally

With growing demands for data privacy and offline computing, running Large Language Models (LLMs) locally has become the preferred option for many enterprises and developers. This article covers advanced usage of Ollama, including custom Modelfiles, REST API integration, and lightweight fine-tuning with external data.
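As a taste of the Modelfile customization covered in the article, a minimal Modelfile layers a system prompt and sampling parameters on top of a base model. This is a sketch only; the base model name and parameter values are assumptions for illustration:

```
# Hypothetical Modelfile: derive a custom assistant from a local base model.
FROM llama3

# Lower temperature for more deterministic answers (illustrative value).
PARAMETER temperature 0.2

# Fixed role for every conversation with this model.
SYSTEM "You are a concise technical assistant."
```

Built with `ollama create my-assistant -f Modelfile`, the resulting model can then be served through Ollama's REST API like any other local model.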