Getting started

Install dependencies

  • Install torch/tensorflow depending on what each project needs
# Install
pip install huggingface_hub

# (If the project needs it) install the GPU build of PyTorch, plus the huggingface_hub torch extras
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install 'huggingface_hub[torch]'

# (If the project needs it) install TensorFlow, plus the huggingface_hub tensorflow extras
pip install tensorflow
pip install 'huggingface_hub[tensorflow]'


# Other commonly used dependencies
pip install transformers accelerate

# Each project may have its own dependencies; check the project's README and install accordingly

Register / Log in

  • Registering/logging in currently requires a proxy/VPN to reach the site
  • Many repos require authorization to download, so set up an access token first

    • Register a HuggingFace account and log in
    • To create an Access Token: click your avatar in the top-right corner > "Settings" > "Access Tokens" > "Create new token" > switch "Token Type" to "Read" > enter any name > "Create token" > copy the token; it has the form "hf_*"
  • Log in with the code below

    • After logging in, the token is stored at "~/.cache/huggingface/token"
    • You only need to log in (run it) once, unless the token becomes invalid
from huggingface_hub import login

token = 'hf_***'
login(token)
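A small variant (just a sketch): read the token from an environment variable instead of hard-coding it in the script. The variable name HF_TOKEN below is a choice for this sketch; recent versions of huggingface_hub also recognize an HF_TOKEN environment variable on their own, so check the docs of your installed version.

import os
from huggingface_hub import login

# Read the token from the environment instead of committing it into the code
token = os.environ.get('HF_TOKEN')
if token:
    login(token)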

Download models

  • Mirror for users in mainland China: https://hf-mirror.com/

    • Its homepage summarizes the available download methods
  • Large models easily run to several GB each; keep an eye on free disk space (a size-check sketch follows this list)
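If you are not sure whether a repo will fit on disk, you can sum the file sizes before downloading. A minimal sketch using the huggingface_hub API; the endpoint argument and the size attribute are based on recent library versions, so adjust if yours differs:

from huggingface_hub import HfApi

api = HfApi(endpoint="https://hf-mirror.com")  # omit endpoint to query huggingface.co directly
info = api.model_info("stabilityai/stable-diffusion-3-medium-diffusers", files_metadata=True)

# files_metadata=True fills in the per-file sizes on info.siblings
total = sum(f.size or 0 for f in info.siblings)
print(f"Repo size: {total / 1024 ** 3:.1f} GiB")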

Using the CLI

# Install the tool
pip install huggingface_hub
# Set the mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com


# Download the whole repo
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' --local-dir 'sd'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' --local-dir 'sd' --token 'hf_****'

# Download a specific file
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' 'xxx.safetensors'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' 'xxx.safetensors' --local-dir 'sd'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' 'xxx.safetensors' --local-dir 'sd' --token 'hf_****'

Using Python

  • Download

    • When the local_dir argument is not given, the default download path is ~/.cache/huggingface/
import os

# Set the mirror endpoint before importing huggingface_hub (the endpoint is read at import time)
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

from huggingface_hub import hf_hub_download, snapshot_download

# Download the whole repo
snapshot_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers")
snapshot_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", local_dir="sd")
snapshot_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", local_dir="sd", token="hd_***")

# Download a specific file
hf_hub_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", filename="xxx.safetensors")
hf_hub_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", filename="xxx.safetensors", local_dir="sd")
hf_hub_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers", filename="xxx.safetensors", local_dir="sd", token="hf_***")
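To save disk space, snapshot_download can also be limited to files matching glob patterns via allow_patterns; the pattern list below is only an example:

# Download only the config files and safetensors weights from the repo
snapshot_download(repo_id="stabilityai/stable-diffusion-3-medium-diffusers",
                  local_dir="sd",
                  allow_patterns=["*.json", "*.safetensors"])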

Examples

  • Every project runs a little differently; read the project's README

    • Some are run by loading the repo's "model_index.json" config
    • Some are run through ComfyUI

Text-to-image (emilianJR/epiCRealism)

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install 'huggingface_hub[torch]'

pip install transformers accelerate diffusers
  • Download the model

    • Default download path: ~/.cache/huggingface/
# Set the mirror
export HF_ENDPOINT=https://hf-mirror.com

# Download the model
huggingface-cli download 'emilianJR/epiCRealism'
# If the default location doesn't have enough space, use --local-dir to choose a download path, then point the code at that path instead
huggingface-cli download 'emilianJR/epiCRealism' --local-dir 'models/emilianJR/epiCRealism'
  • Run
from diffusers import StableDiffusionPipeline
import torch

model_id = "emilianJR/epiCRealism"
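# model_id = "models/emilianJR/epiCRealism"  # if you downloaded with --local-dir, use that path instead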
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "提示词,要用英文,这个模型中文效果差"
image = pipe(prompt).images[0]

image.save("image.png")
  • Wrap it in a Gradio UI
pip install gradio
from diffusers import StableDiffusionPipeline
import torch
import gradio as gr

model_id = "emilianJR/epiCRealism"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")


def generate(prompt: str):
    image = pipe(prompt).images[0]
    return image


demo = gr.Interface(fn=generate,
                    inputs=gr.Textbox(label="Prompt (use English; Chinese support is poor)"),
                    outputs=gr.Image(),
                    examples=["A girl smiling", "A boy smiling", "A dog running"])
demo.launch()
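By default demo.launch() only listens on localhost; to reach the demo from another machine on the LAN, you can pass server_name (and optionally server_port), e.g. demo.launch(server_name="0.0.0.0", server_port=7860).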

Text-to-video (ByteDance/AnimateDiff-Lightning)

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip install 'huggingface_hub[torch]'

pip install transformers accelerate diffusers
  • A lightweight text-to-video model: it relies on a text-to-image base model, on top of which it generates the video

    • The text-to-image base model used here is emilianJR/epiCRealism from above
# Set the mirror
export HF_ENDPOINT=https://hf-mirror.com

# Base text-to-image model
huggingface-cli download 'emilianJR/epiCRealism'
# Download the text-to-video (motion) model
huggingface-cli download 'ByteDance/AnimateDiff-Lightning' 'animatediff_lightning_4step_diffusers.safetensors'

# You can also download the whole repo; it contains checkpoints for several step counts
huggingface-cli download 'ByteDance/AnimateDiff-Lightning'
  • Example

    • The output is an animated GIF
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, EulerDiscreteScheduler
from diffusers.utils import export_to_gif
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

device = "cuda"
dtype = torch.float16

step = 4  # Options: [1,2,4,8]
repo = "ByteDance/AnimateDiff-Lightning"
ckpt = f"animatediff_lightning_{step}step_diffusers.safetensors"
base = "emilianJR/epiCRealism"  # Choose to your favorite base model.

adapter = MotionAdapter().to(device, dtype)
adapter.load_state_dict(load_file(hf_hub_download(repo, ckpt), device=device))
pipe = AnimateDiffPipeline.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing", beta_schedule="linear")

output = pipe(prompt="A girl smiling", guidance_scale=1.0, num_inference_steps=step)
export_to_gif(output.frames[0], "animation.gif")
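If you would rather have an .mp4 than a GIF, diffusers also ships an export_to_video helper. Continuing from the example above (note it depends on OpenCV, e.g. pip install opencv-python):

from diffusers.utils import export_to_video

# Write the same generated frames out as an mp4 instead of a gif
export_to_video(output.frames[0], "animation.mp4", fps=10)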

Text-to-image (stable-diffusion-3)

pip install huggingface_hub

# Install the CUDA (GPU) build of PyTorch; see the PyTorch site for the exact command: https://pytorch.org/
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
# Or the CPU build of torch; a GPU is strongly recommended, as CPU inference is far too slow
pip install 'huggingface_hub[torch]'

# Other dependencies
pip install transformers
pip install accelerate
pip install diffusers
pip install sentencepiece
pip install protobuf
  • Download the model

    • Default download path: ~/.cache/huggingface/
    • --local-dir sets the download directory; when running the code, just point the model path at that same directory
# Set the environment variable
export HF_ENDPOINT=https://hf-mirror.com

# Download the whole repo
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers'
huggingface-cli download 'stabilityai/stable-diffusion-3-medium-diffusers' --local-dir 'sd'
  • Run the code

    • Use English prompts; Chinese gives very poor results
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
# pipe = StableDiffusion3Pipeline.from_pretrained("sd", torch_dtype=torch.float16)  # load from the --local-dir directory instead

pipe = pipe.to("cuda")

image = pipe("A cat holding a sign that says hello world",
             negative_prompt="",
             num_inference_steps=28,
             guidance_scale=7.0).images[0]

# Save as an image file
image.save('cat.jpg')
  • With less than 16 GB of GPU VRAM you get an out-of-memory error (a workaround sketch follows the error message below)
torch.OutOfMemoryError: CUDA out of memory. 
Tried to allocate 512.00 MiB. GPU 0 has a total capacity of 11.00 GiB of which 0 bytes is free. 
Of the allocated memory 16.80 GiB is allocated by PyTorch, and 574.00 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
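A common workaround on smaller GPUs is to let diffusers offload sub-models to the CPU and only move them to the GPU while they run. A sketch under the assumption that accelerate is installed; it trades speed for memory, and very small cards may still run out:

import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers",
                                                torch_dtype=torch.float16)

# Replaces pipe.to("cuda"): submodules stay on the CPU and are moved to the GPU only while running
pipe.enable_model_cpu_offload()

image = pipe("A cat holding a sign that says hello world",
             negative_prompt="",
             num_inference_steps=28,
             guidance_scale=7.0).images[0]
image.save('cat.jpg')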

Possible issues

Error: missing 'fbgemm.dll'

  • Error message
OSError: [WinError 126] The specified module could not be found. Error loading "...\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
