
NFT marker detection #65

Open
wants to merge 8 commits into master

Conversation

actarus1975

Hi @qian256 😄

here is the pull request with the NFT markers detection integration, as promised.
I wanted to perform a few tests before releasing it.
However, I have only been able to test it on my HoloLens 2, so I cannot guarantee it also works on first-generation HoloLens devices (although I believe it should).

I added a new scene named HL2ARToolKitNFT to the Unity project which shows the NFT detection of the ARToolKit "pinball" image marker.

Flaws and potential improvements
The NFT marker detection seems a little too slow (sometimes it takes 3 to 4 seconds to show the virtual content, especially the first time). This is true even when the preview video is disabled.
I tried reintroducing the "color pattern detection mode" based on your previous implementation (the v0.2 release), leaving the choice of detection mode to the user. My guess was that the color mode might ease and speed up the detection phase (I did not have time to dig into the original mechanism adopted by the ARToolKit libraries).
However, I am still having trouble showing the color video stream in the Texture2D object, even after creating a dedicated color shader.
I think I am making a mistake related to the bitmap buffer locking required during the conversion stage, and/or something related to thread management.
Maybe you can shed some light on this topic...

Used software
This is the list of software versions used to compile the DLL and deploy the app into the HL2:

  • Windows 10 SDK 10.0.18362.0
  • Visual Studio 2019
  • Unity 2019.4.5f1

I'm open to discuss any further improvement or give explanations if required.

actarus1975 and others added 8 commits June 12, 2021 16:36
All projects retargeted to "Platform Toolset Visual Studio 2019 (v142)".
Added a post build event to copy the built DLL into the Unity project's "Plugins" folder.
…dle the correct placement of the virtual content based on the scale ratio between the sizes of the trained image soft-copy and the image hard-copy.
@actarus1975
Author

Hi again @qian256,

I've just updated the pull request, adding a few improvements to the NFT detection feature.
In particular, I added the "Scale Factor" option (in the ARUWPMarker.cs script), which is NFT-marker specific and which, in fact, I had missed.

This option, which is set to 1.0 by default, is essential for ARToolKit to calculate the proper "depth" of the virtual content's position relative to the physical marker.
It defines the ratio between the sizes of the "trained" image soft-copy and the image hard-copy.

For example, say the image soft-copy is 14 x 18 cm but the printed hard-copy is magnified by a factor of 3 (its physical size is 42 x 54 cm).
In this case the Scale Factor option must be set to 3.0 if we want the virtual content to overlap the printed image and be positioned correctly; otherwise it will always be placed far from the printed image in the mixed reality scene.
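As a sanity check, the Scale Factor for the example above can be derived directly from the two widths (or heights). A minimal C# sketch; the variable names here are mine, for illustration only, not the actual ARUWPMarker.cs API:

```csharp
// Illustrative arithmetic for the NFT Scale Factor described above.
// In the project the value is entered via the "Scale Factor" field of
// ARUWPMarker.cs in the Unity Inspector.
float trainedWidthCm = 14.0f;  // width of the soft-copy used for training
float printedWidthCm = 42.0f;  // width of the printed hard-copy
float scaleFactor = printedWidthCm / trainedWidthCm;  // 42 / 14 = 3.0

// The same ratio must hold for the height (54 / 18 = 3.0), since the
// printout is a uniform magnification of the trained image.
```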

Another important thing I'd like to share with you (you are probably already aware of it) concerns the process the ARToolKit SDK uses to train an image for detection and tracking.
The choices made when launching the genTexData tool are very important in defining how detection and tracking happen.
At some point the genTexData executable asks you to "select the extraction level for the initializing features" and for the "tracking features", in a range from 0 (few) to 3 (many).
Setting the initializing features level to the maximum value (i.e. 3) will, of course, increase the final size of the .fset/.fset3 files, but on the other hand the time required to detect the image the first time is greatly reduced.
The same applies to the choice of the tracking features level, which improves robustness.
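For reference, a typical training session with the tool looks roughly like the following (prompts paraphrased from memory; the exact wording, ranges, and defaults depend on the ARToolKit version you use):

```
$ ./genTexData pinball.jpg
Select the extraction level for the tracking features (0=few ... 3=many): 3
Select the extraction level for the initializing features (0=few ... 3=many): 3
...
Done. Generated pinball.iset, pinball.fset, pinball.fset3
```

The generated .iset/.fset/.fset3 files are the ones the runtime loads for the "pinball" marker.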

These considerations address one of the potential flaws described in my previous comment, which actually isn't defective behaviour but a lack of knowledge on my part ;)

@actarus1975
Author

Ah, one last thing.

For whoever might be interested, I managed to migrate the HoloLensARToolKit project to Unity 2020.3.4f1. However, I didn't include the changes in the nft_detection branch involved in this pull request.
The reason is that I would have had to add the MRTK Foundation package to the repository, which is quite heavy, so I preferred to avoid that for now.

As we all know, starting from Unity 2020.1 the built-in Legacy VR feature is no longer supported and has been replaced by the XR SDK. Moreover, the XR SDK supports two possible plugins: OpenXR and Windows Mixed Reality (the latter itself deprecated starting from Unity 2021.2).

So far I have been able to upgrade the HoloLensARToolKit project to Unity 2020.3.4f1 using the Windows Mixed Reality plugin. To achieve that you need to:

  1. Install the MRTK Foundation package into the project (I used version 2.6.1, but the latest version should also be fine), then install the "XR Plug-In Management" feature, choosing the Windows Mixed Reality plugin for the UWP platform.

  2. Replace the following instruction (file ARUWPVideo.cs, line 740):

WorldOriginPtr = WorldManager.GetNativeISpatialCoordinateSystemPtr();

with

WorldOriginPtr = UnityEngine.XR.WindowsMR.WindowsMREnvironment.OriginSpatialCoordinateSystem;

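If you want a single code base that compiles under both the legacy VR setup and the XR SDK, one option is to guard the two calls with conditional compilation. A sketch only: USE_XR_SDK is a custom scripting define symbol you would add yourself in Player Settings when building with the XR SDK, not a built-in Unity define:

```csharp
// Sketch: pick the spatial coordinate system pointer for either setup.
// USE_XR_SDK is a hypothetical, user-defined scripting define symbol.
#if USE_XR_SDK
    // Unity 2020+ with the Windows Mixed Reality XR plugin
    WorldOriginPtr =
        UnityEngine.XR.WindowsMR.WindowsMREnvironment.OriginSpatialCoordinateSystem;
#else
    // Legacy built-in VR (Unity 2019.x)
    WorldOriginPtr = WorldManager.GetNativeISpatialCoordinateSystemPtr();
#endif
```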
That's it.
Hope this helps someone :)
