- Introduction: The Vision framework was introduced by Apple at WWDC 2017 as part of iOS 11. It marked a turning point in on-device computer vision by providing native tools for image analysis. At launch it offered text detection, face detection, rectangle detection, and barcode and QR code recognition. Apple has enhanced it continuously since then: by 2024, with iOS 18, it offers improved text recognition accuracy across many languages, face and facial-feature detection, movement analysis, pose recognition, object tracking in video, better integration with CoreML, and deep integration with related frameworks.
- VNRequest: Vision defines an abstract class, `VNRequest`, that describes the structure of an analysis request; concrete subclasses implement specific requests. The `VNRequest` initializer takes a completion handler. `VNRequestCompletionHandler` is a typealias for a closure that receives the completed `VNRequest` with its results, or an error. The `VNRecognizeTextRequest` class performs text recognition. An example shows how to implement text recognition: create the request, handle the results as `VNRecognizedTextObservation` values, and process the image, as in the sketch below.
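A minimal sketch of that flow in Swift, assuming a `UIImage` input and an illustrative function name `recognizeText(in:)` (neither is given in the original), with the request run through a `VNImageRequestHandler`:

```swift
import UIKit
import Vision

// Illustrative helper: recognize text in a UIImage and print the result.
func recognizeText(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // The completion handler receives the finished request (or an error).
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else { return }

        // Each observation holds ranked candidate strings; take the best one.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        print(lines.joined(separator: "\n"))
    }
    request.recognitionLevel = .accurate      // favor accuracy over speed
    request.usesLanguageCorrection = true

    // Run the request against the image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Text recognition failed: \(error)")
    }
}
```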
- VNDetectFaceRectanglesRequest: This class finds faces in an image and returns their bounding-box coordinates. An example shows how to implement face detection: create a `VNDetectFaceRectanglesRequest`, handle the results as `VNFaceObservation` values, and process the image, as in the sketch below. It can be used in KYC onboarding to confirm that a real person's face is present.
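A minimal sketch in Swift, again assuming a `UIImage` input and an illustrative function name `detectFaces(in:)`:

```swift
import UIKit
import Vision

// Illustrative helper: detect faces in a UIImage and print their bounding boxes.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNFaceObservation] else { return }

        // boundingBox is in normalized coordinates (0...1, origin at the bottom left).
        for face in observations {
            print("Face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Face detection failed: \(error)")
    }
}
```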
- VNDetectBarcodesRequest: This class recognizes and reads barcodes and QR codes from an image. An example shows how to implement barcode and QR code recognition: create a `VNDetectBarcodesRequest`, handle the results as `VNBarcodeObservation` values, and process the image, as in the sketch below. It can be used to build a QR scanner.
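A minimal sketch in Swift, assuming a `UIImage` input and an illustrative function name `readBarcodes(in:)`; restricting `symbologies` to QR codes is an optional assumption for the QR-scanner use case:

```swift
import UIKit
import Vision

// Illustrative helper: decode barcodes/QR codes in a UIImage and print their payloads.
func readBarcodes(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectBarcodesRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNBarcodeObservation] else { return }

        for barcode in observations {
            // payloadStringValue holds the decoded contents, e.g. the URL inside a QR code.
            print("\(barcode.symbology.rawValue): \(barcode.payloadStringValue ?? "<no string payload>")")
        }
    }
    request.symbologies = [.qr]   // optional: limit detection to QR codes

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Barcode detection failed: \(error)")
    }
}
```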