# saveSession-ARKit-CoreML

ARKit is Apple's framework for Augmented Reality. It can do amazing things, but it cannot be used to add layers to the existing world (for example, information inside a museum), because the position of the virtual objects is given by the GPS coordinates of the phone, so the precision of the objects' positions is on the order of meters.

But, as shown in the video, using machine learning it is possible to get a more precise position, on the order of centimeters, even where GPS has no signal.

Demo video on YouTube

The aim of this project is to make an augmented reality app in which every user can add and share layers on the real world, everywhere, in the easiest way possible.

Based on CoreML-in-ARKit by hanleyweng

Model: Inception V3

Language: Swift 4.0

Written in: Xcode 9.0

Content Technology: SceneKit

Tested on iPhone 6s running iOS 11.2.6 (15D100)

## Instructions

You'll have to download "Inceptionv3.mlmodel" from Apple's Machine Learning page and copy it into your Xcode project (as depicted in the following gif).

Gif showing dragging and dropping of the model into Xcode

(Gif via Atomic14)

If you're having issues, double-check that the model is part of a target (source: Stack Overflow).
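Once the model is in the project, Xcode generates an `Inceptionv3` Swift class for it. A minimal sketch of how the camera image of the current `ARFrame` can be classified through Vision (the function name and the `sceneView` parameter are illustrative, not taken from this repository):

```swift
import ARKit
import Vision

// Classify what the AR camera currently sees, using the compiled
// Inceptionv3 model added to the Xcode project.
func classifyCurrentFrame(in sceneView: ARSCNView) {
    guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage,
          let model = try? VNCoreMLModel(for: Inceptionv3().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Take the top classification result, if any.
        guard let best = (request.results as? [VNClassificationObservation])?.first
        else { return }
        print("Saw: \(best.identifier) (confidence \(best.confidence))")
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

This mirrors the CoreML-in-ARKit approach the project is based on: the recognized label can then be attached to a node in the scene.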

## Footnotes

- The objects are the real objects seen by the camera.
- The anchor is an object that you set as the reference point (the origin of the reference system) in every ARKit session.
- The nodes are ARKit nodes composed of a position and the name of an object.
- You can change the anchor using the Change Anchor button, but the nodes will be reset.
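The node placement described above can be sketched as follows. This is a hypothetical helper, assuming an `ARSCNView` and a recognized object name; it places a labeled node at the world position hit by a screen tap:

```swift
import ARKit
import SceneKit

// Place a text node, labeled with the recognized object's name,
// at the feature point hit by a tap on the screen.
func addNode(named name: String, at screenPoint: CGPoint, in sceneView: ARSCNView) {
    guard let hit = sceneView.hitTest(screenPoint, types: .featurePoint).first
    else { return }

    let text = SCNText(string: name, extrusionDepth: 0.01)
    let node = SCNNode(geometry: text)
    node.scale = SCNVector3(0.005, 0.005, 0.005)

    // The hit-test transform gives the position in world coordinates,
    // i.e. relative to the session's origin (the anchor in this project).
    let t = hit.worldTransform
    node.position = SCNVector3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
    sceneView.scene.rootNode.addChildNode(node)
}
```

Because node positions are stored relative to the session origin, changing the anchor changes that origin, which is why existing nodes are reset.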

Right now the orientation precision is limited by the precision of the compass: the more still you hold the device, the better the precision. The automatic addition of multiple anchors is planned to improve this.