Preface
This article walks through how to deploy DeepSeek Janus-Pro on a Tencent Cloud HAI GPU server for text-to-image generation.
Steps
Choose a GPU server
Try out a GPU-equipped server through the deepseek2025 promotion.
Download Janus
git clone https://github.com/deepseek-ai/Janus.git
Install dependencies
cd Janus
pip install -e .
Install gradio
pip install gradio
Install torch
Replace any preinstalled CPU-only build with the CUDA 12.1 wheels:
pip uninstall torch torchvision torchaudio -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
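Before launching the demo, it is worth double-checking that the CUDA build of torch actually got installed. A minimal sanity check, nothing Janus-specific:

import torch
print(torch.__version__)              # should end with +cu121 for the CUDA 12.1 wheel
print(torch.cuda.is_available())      # should print True on the GPU server
print(torch.cuda.get_device_name(0))  # the GPU the demo will run on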
Run the demo
python demo/app_januspro.py --device cuda
Sample output:
Python version is above 3.10, patching the collections module.
/root/miniforge3/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py:594: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
warnings.warn(
pytorch_model-00001-of-00002.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 9.99G/9.99G [09:34<00:00, 11.9MB/s]
pytorch_model-00002-of-00002.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 4.85G/4.85G [06:46<00:00, 11.9MB/s]
Downloading shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [16:21<00:00, 490.70s/it]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:04<00:00, 2.47s/it]
preprocessor_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 346/346 [00:00<00:00, 3.40MB/s]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 285/285 [00:00<00:00, 2.94MB/s]
tokenizer.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.72M/4.72M [00:00<00:00, 18.1MB/s]
special_tokens_map.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 344/344 [00:00<00:00, 2.93MB/s]
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
processor_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 210/210 [00:00<00:00, 2.00MB/s]
Some kwargs in processor config are unused and will not have any effect: ignore_id, add_special_token, num_image_tokens, mask_prompt, sft_format, image_tag.
* Running on local URL: http://127.0.0.1:7860
* Running on public URL: https://xxxxx.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
You can now open this public URL in a browser.
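Besides the browser UI, the running demo can also be driven programmatically through the Gradio API. The sketch below is hedged: gradio_client and its Client/view_api calls are real, but the endpoint name and parameters that app_januspro.py exposes are assumptions, so list them first:

# pip install gradio_client
from gradio_client import Client

client = Client("https://xxxxx.gradio.live")  # the public URL printed above
print(client.view_api())  # shows the endpoints and parameters the demo actually exposes
# result = client.predict("a cute corgi on the beach", api_name="/generate")
# ^ hypothetical endpoint name; substitute the one reported by view_api()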
Usage
It takes roughly 120 seconds to generate an image. Note that demo/app.py uses the deepseek-ai/Janus-1.3B model, whereas the app_januspro.py run above pulls a much larger checkpoint (the two weight shards totalling roughly 15 GB in the log).
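For reference, the Gradio demo is a thin wrapper around the model itself. Loading Janus directly looks roughly like the sketch below, adapted from the Janus repository's README; the full text-to-image sampling loop (classifier-free guidance over image tokens) lives in the repo's generation scripts, so this only covers loading:

import torch
from transformers import AutoModelForCausalLM
from janus.models import VLChatProcessor

model_path = "deepseek-ai/Janus-Pro-7B"  # or "deepseek-ai/Janus-1.3B" for the smaller model
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

# trust_remote_code is required because Janus registers a custom model class
vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()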
Summary
Self-hosting this turned out to be quite a hassle. I first tried running it on a Mac and hit a CUDA_HOME error; I then looked for a CPU build and ran into the no-GPU problem; it only worked after switching to a GPU server. Along the way you run into all kinds of dependency issues, GPU configuration issues, and network-access issues. The practical takeaway: unless you have a specific need to self-host, just use a cloud provider's API.