Multiple data source compatibility: whether your data is a single image, a collection of images, a video file, or a live video stream, predict mode can handle it.
Streaming mode: use the streaming feature to produce a memory-efficient generator of Results objects. Enable it by setting stream=True in the predictor's call method (see the short sketch after this list).
Batch processing: the ability to process multiple images or video frames in a single batch, further speeding up inference.
Integration friendly: thanks to its flexible API, it integrates easily with existing data pipelines and other software components.
Ultralytics YOLO models return either a Python list of Results objects during inference, or a memory-efficient generator of Results objects when stream=True is passed:
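As a minimal sketch of the difference the stream flag makes (the image paths im1.jpg and im2.jpg are placeholders), compare the two return types:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Default call: every Results object is built up front and returned as a list
results_list = model(["im1.jpg", "im2.jpg"])

# stream=True: Results objects are yielded lazily, one source at a time
results_gen = model(["im1.jpg", "im2.jpg"], stream=True)

print(type(results_list))  # list
print(type(results_gen))   # generator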
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")  # pretrained YOLOv8n model

# Run batched inference on a list of images
results = model(["im1.jpg", "im2.jpg"], stream=True)  # return a generator of Results objects

# Process results generator
for result in results:
    boxes = result.boxes  # Boxes object for bounding box outputs
    masks = result.masks  # Masks object for segmentation masks outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs  # Probs object for classification outputs
    obb = result.obb  # Oriented boxes object for OBB outputs
    result.show()  # display to screen
    result.save(filename="result.jpg")  # save to disk
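Each Results object also exposes its detections as tensors. A minimal sketch of reading box coordinates, confidences, and class names from a single result (assuming a detection model such as yolov8n.pt and a placeholder image path):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
result = model("im1.jpg")[0]  # first Results object from the returned list

for box in result.boxes:
    xyxy = box.xyxy[0].tolist()  # [x1, y1, x2, y2] in pixels
    conf = float(box.conf[0])    # confidence score
    cls_id = int(box.cls[0])     # class index
    print(result.names[cls_id], round(conf, 2), xyxy)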
from ultralytics import YOLO

# Load a pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Define remote image or video URL
source = "https://ultralytics.com/images/bus.jpg"

# Run inference on the source
results = model(source)  # list of Results objects
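To visualize the prediction on the downloaded image, one option (a sketch, assuming OpenCV is available) is the plot() method of the Results object, which returns an annotated BGR NumPy array:

import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("https://ultralytics.com/images/bus.jpg")

annotated = results[0].plot()  # BGR NumPy array with boxes and labels drawn
cv2.imwrite("bus_annotated.jpg", annotated)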
from ultralytics import YOLO

# Load a pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Define a path to a CSV file with images, URLs, videos and directories
source = "path/to/file.csv"

# Run inference on the source
results = model(source)  # list of Results objects
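The CSV file itself is just a plain list of sources. A minimal sketch of producing one with Python's csv module, assuming one source per row (all paths and URLs here are placeholders):

import csv

sources = [
    "path/to/image.jpg",
    "https://ultralytics.com/images/bus.jpg",
    "path/to/video.mp4",
    "path/to/dir",
]

with open("path/to/file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for src in sources:
        writer.writerow([src])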
from ultralytics import YOLO

# Load a pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Define a glob search for all JPG files in a directory
source = "path/to/dir/*.jpg"

# OR define a recursive glob search for all JPG files including subdirectories
source = "path/to/dir/**/*.jpg"

# Run inference on the source
results = model(source, stream=True)  # generator of Results objects
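An equivalent way to build the recursive search explicitly (a sketch using pathlib instead of a glob string, with a placeholder directory) is to collect the paths yourself and pass the list:

from pathlib import Path

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Recursively collect all JPG files under the directory
jpg_files = [str(p) for p in Path("path/to/dir").rglob("*.jpg")]

results = model(jpg_files, stream=True)  # generator of Results objects
for result in results:
    print(result.path, len(result.boxes))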
from ultralytics import YOLO

# Load a pretrained YOLOv8n model
model = YOLO("yolov8n.pt")

# Single stream with batch-size 1 inference
source = "rtsp://example.com/media.mp4"  # RTSP, RTMP, TCP or IP streaming address

# Multiple streams with batched inference (i.e. batch-size 8 for 8 streams)
source = "path/to/list.streams"  # *.streams text file with one streaming address per row

# Run inference on the source
results = model(source, stream=True)  # generator of Results objects
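The *.streams file is a plain text file. A sketch of writing one and consuming the resulting generator (the RTSP addresses and file path are placeholders):

from ultralytics import YOLO

# Write one streaming address per row
stream_urls = [
    "rtsp://example.com/media1.mp4",
    "rtsp://example.com/media2.mp4",
]
with open("path/to/list.streams", "w") as f:
    f.write("\n".join(stream_urls))

model = YOLO("yolov8n.pt")
results = model("path/to/list.streams", stream=True)  # batched inference across streams

for result in results:
    print(result.path, len(result.boxes))  # one Results object per processed frame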