Code examples for new APIs of iOS 10.
Vision Framework Demo on Text Detection
CarLens - Recognize and Collect Cars
Trains a model, then generates a complete Xcode project that uses it - no code necessary
A simple demo for Core ML
This application is designed for effective interaction between patients and doctors
Object Tracking using Apple's VISION Framework
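As a rough illustration of the approach a project like this typically takes, the sketch below seeds a VNTrackObjectRequest from an initial bounding box and updates it frame by frame with a VNSequenceRequestHandler; the initial box and the pixel buffers are assumed to come from the host app's camera pipeline.

```swift
import Vision
import CoreVideo

/// Tracks a single object across video frames.
/// `initialBox` is the normalized bounding box of the object in the first frame.
final class ObjectTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation

    init(initialBox: CGRect) {
        lastObservation = VNDetectedObjectObservation(boundingBox: initialBox)
    }

    /// Call once per camera frame; returns the updated bounding box if tracking succeeded.
    func track(in pixelBuffer: CVPixelBuffer) -> CGRect? {
        let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
        request.trackingLevel = .accurate

        try? sequenceHandler.perform([request], on: pixelBuffer)

        guard let observation = request.results?.first as? VNDetectedObjectObservation,
              observation.confidence > 0.3 else { return nil }

        lastObservation = observation
        return observation.boundingBox
    }
}
```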
ARKit 1.5 - Image Recognition Demo
This project shows how to use CoreML and Vision with a pre-trained deep learning SSD (Single Shot MultiBox Detector) model. There are many variations of SSD; the one used here has MobileNetV2 as the backbone and separable convolutions for the SSD layers, a combination also known as SSDLite. This app can find the locations of several di…
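A minimal sketch of how such a detection pipeline is usually wired up with Vision and Core ML follows. `SSDLiteMobileNetV2` stands in for whatever class Xcode generates from the bundled .mlmodel, and the snippet assumes the model includes the detection-decoding layers so Vision returns VNRecognizedObjectObservation results; both are assumptions, not the project's actual API.

```swift
import Vision
import CoreML

// Hypothetical generated model class name; substitute the class Xcode
// produces for the project's compiled SSDLite .mlmodel.
func detectObjects(in image: CGImage) throws {
    let coreMLModel = try SSDLiteMobileNetV2(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            // boundingBox is normalized (0...1) with the origin at the lower-left corner.
            let label = observation.labels.first?.identifier ?? "unknown"
            print("\(label) (\(observation.confidence)) at \(observation.boundingBox)")
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```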
Uses iOS 11 and Apple's CoreML to add nutrition data to your food diary based on pictures. CoreML handles the image recognition (Inceptionv3), and Alamofire (via CocoaPods) is used for REST requests against the Nutritionix API for nutrition data.
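A hedged sketch of that flow, classify first with a Vision-wrapped Inceptionv3 model and then query a nutrition REST endpoint via Alamofire, might look like the following; the endpoint URL, query parameter, and NutritionInfo type are placeholders, not the real Nutritionix contract.

```swift
import Vision
import CoreML
import Alamofire

struct NutritionInfo: Decodable {   // placeholder response shape
    let calories: Double
}

// Classify the photo, then look the top label up against a nutrition API.
// "Inceptionv3" is assumed to be the class Xcode generates from the bundled model.
func addToDiary(photo: CGImage) throws {
    let model = try VNCoreMLModel(for: Inceptionv3(configuration: MLModelConfiguration()).model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }

        // Placeholder endpoint; the real Nutritionix API also requires app ID/key headers.
        AF.request("https://api.example.com/v1/foods",
                   parameters: ["query": best.identifier])
            .responseDecodable(of: NutritionInfo.self) { response in
                print("\(best.identifier): \(response.value?.calories ?? 0) kcal")
            }
    }

    try VNImageRequestHandler(cgImage: photo, options: [:]).perform([request])
}
```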
ImageOCRUI is a powerful OCR package developed in SwiftUI, allowing you to seamlessly scan and extract text from images.
Swift API client for iOS, iPadOS, tvOS, macOS, and Linux handles uploads and further operations with files by wrapping Uploadcare Upload and REST APIs.
Core ML and Vision object classifier with a lightweight trained model. The model is trained and tested with Create ML straight from Xcode Playgrounds with the dataset I provided.
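For reference, training such a lightweight classifier from an Xcode Playground generally comes down to a few lines of Create ML; the directory paths below are placeholders for the dataset shipped with the repo (one subfolder per class label).

```swift
import CreateML
import Foundation

// Placeholder paths: each directory contains one subfolder per class label.
let trainingDir = URL(fileURLWithPath: "/path/to/dataset/train")
let testingDir  = URL(fileURLWithPath: "/path/to/dataset/test")

// Train the image classifier on the labeled training folders.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on the held-out set.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Classification error: \(evaluation.classificationError)")

// Export the lightweight model for use with Core ML and Vision.
try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))
```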
Face Comparison App with SwiftUI and AWS Face Rekognition
Basic image recognition practice: a CoreML & Vision application using Places205-GoogLeNet.
Simple proof-of-concept app combining image recognition via ARKit and a SwiftUI view to display athlete metrics.
iOS application using deep learning to detect pneumonia caused by NCOV-19 from X-ray images
SNAP a pic of your surplus food, and SHARE it with people in need. (iOS App)
Project provides a basic idea and approach for implementing text recognition in images using Apple's Vision framework.
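The core of that approach, assuming the iOS 13 VNRecognizeTextRequest API rather than the older rectangle-detection request, can be sketched like this:

```swift
import Vision

// Runs Vision's text recognizer on a CGImage and prints the recognized lines.
func recognizeText(in image: CGImage) throws {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        print(lines.joined(separator: "\n"))
    }
    request.recognitionLevel = .accurate       // slower but more precise
    request.usesLanguageCorrection = true

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```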
VisionSnap is a uniquely designed iOS app, employing Swift for seamless camera interactions and sophisticated image recognition. Leveraging the YOLOv8 model on a Google Cloud server, VisionSnap allows real-time image capturing and analysis, returning results to the device instantly. A blend of AI and user experience.
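The device-to-server round trip that description implies could be sketched with URLSession as below; the endpoint URL and JSON response shape are assumptions, since the actual Google Cloud service interface isn't documented here.

```swift
import Foundation
import UIKit

struct Detection: Decodable {       // assumed response shape from the inference server
    let label: String
    let confidence: Double
}

// POSTs a captured frame as JPEG data to a (hypothetical) YOLOv8 inference
// endpoint and decodes the detections it returns.
func analyze(frame: UIImage) async throws -> [Detection] {
    guard let jpeg = frame.jpegData(compressionQuality: 0.8) else { return [] }

    var request = URLRequest(url: URL(string: "https://example.com/yolov8/detect")!)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")

    let (data, _) = try await URLSession.shared.upload(for: request, from: jpeg)
    return try JSONDecoder().decode([Detection].self, from: data)
}
```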