Vision Detector
Run your Vision CoreML model
Free
1.5.7 for iPhone, iPad and more
About Vision Detector
Vision Detector performs image processing with a CoreML model on iPhones and iPads. Normally, a CoreML model can only be previewed in Xcode, or run on an iPhone by building an app with Xcode. Vision Detector instead lets you run CoreML models on your iPhone directly.
To use the app, first prepare a machine learning model in CoreML format using CreateML or coremltools. Then copy the model into the iPhone/iPad file system, which is accessible through the iPhone's 'Files' app. This includes local storage as well as various cloud services (iCloud Drive, OneDrive, Google Drive, Dropbox, etc.). You can also use AirDrop to store the CoreML model in the 'Files' app. After launching the app, select and load your machine learning model.
You can choose the input source image from:
- Video captured by the iPhone/iPad's built-in camera
- Still images from the built-in camera
- The photo library
- The file system
For video inputs, continuous inference is performed on the camera feed. However, the frame rate and other parameters depend on the device.
The supported types of machine learning models include:
- Image classification
- Object detection
- Style transfer
Models lacking a non-maximum suppression layer, or those that use MultiArray for input/output data, are not supported.
In the local 'Vision Detector' documents folder, you'll find an empty tab-separated values (TSV) file named 'customMessage.tsv'. This file is for defining custom messages to be displayed. The data should be organized into a table with two columns as follows:
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
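As a sketch of that two-column layout, the snippet below writes and reads back a sample customMessage.tsv using Python's standard csv module with a tab delimiter. The labels ("person", "dog") and the messages are hypothetical examples; use the labels your own model actually outputs.

```python
# Sketch: build a customMessage.tsv of the kind described above.
# Labels and messages here are illustrative placeholders only.
import csv

rows = [
    ("person", "Human detected!"),
    ("dog", "A dog is in view."),
]

# Each row becomes: label <tab> message <newline>
with open("customMessage.tsv", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)

# Reading it back shows the two-column structure:
with open("customMessage.tsv", newline="") as f:
    parsed = list(csv.reader(f, delimiter="\t"))
print(parsed)  # [['person', 'Human detected!'], ['dog', 'A dog is in view.']]
```

Any plain-text editor that can insert real tab characters produces the same result; the csv module is used here only to guarantee the tab-separated layout.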
Note: This application does not include a machine learning model.
On the iPhone, you can use the LED torch feature. When the screen is in landscape orientation, touching the screen will hide the UI and switch to full-screen mode.
What's New in the Latest Version 1.5.7
Last updated on Mar 15, 2024
Added support for files in the .mlpackage and .mlmodelc formats.
Version History
1.5.7
Mar 15, 2024
Added support for files in the .mlpackage and .mlmodelc formats.
1.5.6
Feb 4, 2024
Changed the algorithm for scaling still images.
Optimized internal variable management, potentially enhancing memory efficiency.
Fixed a crash triggered by the video capture button before a model was loaded.
Resolved an issue in style transfer.
The model's description data is now displayed after loading.
1.5.5
Jan 17, 2024
The app now reprocesses still images after device rotation.
Fixed an issue where overlays were misaligned when processing still images from the camera.
Importing still images from 'Files' app now works correctly.
Fixed an issue where the model used for still images could not be changed.
1.5.4
Jan 10, 2024
Models can be accessed and opened directly from the 'Files' app using the export menu.
1.5.3
Feb 28, 2023
Fixed issue in style transfer model.
1.5.2
Jan 24, 2023
Reduced memory usage and stabilized performance.
1.5.1
Jan 19, 2023
Ver. 1.5.1
Tap to hide toolbar in landscape mode.
Ver. 1.5
LED torch is available in live video mode.
The screen layout now respects the safe area of iPhone X and later models.
Improved visual quality of detection overlay for small images.
The format of the custom message file has been changed from CSV to TSV.
1.5
Jan 16, 2023
LED torch is available in live video mode.
The screen layout now respects the safe area of iPhone X and later models.
Improved visual quality of detection overlay for small images.
The format of the custom message file has been changed from CSV to TSV.
1.4
Dec 7, 2022
Some crashing bugs have been fixed.
1.3
Nov 21, 2022
iOS 13.0 or later is now required.
Dark mode compatible.
Improved custom messages for object detection.
FPS counter is added.
Default language is set to English (US).
macOS version is released.
1.2
Nov 9, 2022
Optimized displayed messages.
1.1
Oct 15, 2022
1.0
Oct 9, 2022
Vision Detector FAQ
Check the following list to see the minimum requirements of Vision Detector.
iPhone
Requires iOS 13.0 or later.
iPad
Requires iPadOS 13.0 or later.
Mac
Requires macOS 11.0 or later.
iPod touch
Requires iOS 13.0 or later.
Vision Detector supports English.