Thanks @bnco-dev! I agree this needs to be done; here are some purely personal thoughts on how... I am coming at this from two main considerations:
In my experience, even one additional step can derail junior developers, especially when it's not common/unexpected. (It's why I never use git submodules, for example.)

For Us

I like a lot how you have set up the project so that when we release we get the Unity package in a branch. I would recommend leaning into this and minimising how much we split the repo. Personally, doing this server refactor, I already regret splitting the extended browser samples into a separate repo - there was no choice, but it's still a pain! I think we should avoid this as much as possible. The way the repo is configured at the moment is that each root folder represents a 'platform'. Perhaps we can keep this pattern, have multiple Unity projects for different SDKs, and have the automated build tools create branches for the different UPMs from this one repo? Originally, the recommended way for users to consume Ubiq was by cloning the repo into their project, so we needed to be careful with the file structure of the repo. Now that we have UPM support, I think we can be a little more flexible with the repo itself. I do agree that some SDKs will need their own repo though, simply due to their size - Meta's SDK for one. However, if these can be included as UPM dependencies, can we avoid committing the package contents and rely on Unity downloading them when the project is first opened? Maybe we can even use this pattern for our own code...

For Users

For projects like this, I personally prefer the "path of least resistance" approach: the default way of doing everything just works and covers everything very junior people need, and if you go off the beaten path then you are expected to know what you're doing. To that end, I think it is OK to have 'favoured' dependencies.

XRIT

For example, rather than splitting ubiq and ubiq-xri into two different packages, I would recommend that core components (i.e. anything in the Runtime folder) never reference XRIT, while the samples fully embrace it (a rough sketch of how that could look follows at the end of this comment). We can have similar discussions around other features, like Avatars. For example, I think in principle we've settled on having a small number of avatars that are compatible with RPM avatars, to make the samples work out of the box, and on recommending that users use RPM. If we could avoid a dependency on RPM's package (for various reasons: size, licensing, etc.), I would personally say it's OK to include a Component that brings in RPM avatars, at least in the Samples, if not in Runtime too in the future. The main thing is simply that XRIT doesn't preclude anything else. So long as nothing in Runtime depends on it, non-XR projects can still follow the same instructions and just use the Components in Runtime along with the traditional UX API, never interacting with XRIT even if it's cloned. And regarding other projects: realistically, if someone is using Unity, the only other platform we haven't already considered is Android - if users are going to write Peers for native C# apps or microcontrollers, they won't use Unity. So, if our samples support Desktop out of the box, which they should continue to do, we've already covered 98%, with just a slight gap for touch screens that needs to be filled.

Other SDKs

Things are different for, e.g., MRTK or WebXR though, I recognise (and, as you say, other feature packages, etc.), for which we will need separate UPMs. I will think about this and put up some more comments, but maybe it is something to be solved through Editor tooling.
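To make the XRIT point above concrete: Unity's assembly definition files support Version Defines, which add a scripting symbol only while a given package (here com.unity.xr.interaction.toolkit) is installed. The sketch below uses that pattern with an assumed symbol name (UBIQ_XRI_PRESENT), a hypothetical component, and XRIT 2.x namespaces - it is an illustration of how 'favoured' but optional dependencies could be handled, not existing Ubiq code.

```csharp
#if UBIQ_XRI_PRESENT
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sample-only helper, compiled only when the XR Interaction Toolkit package is
// installed. The UBIQ_XRI_PRESENT symbol is assumed to come from a Version
// Defines entry keyed to com.unity.xr.interaction.toolkit on the samples'
// asmdef; nothing in the core Runtime assembly references XRIT.
public class XriGrabbableSetup : MonoBehaviour
{
    void Reset()
    {
        // Make the object usable with XRIT's standard grab interaction.
        if (!TryGetComponent<XRGrabInteractable>(out _))
        {
            gameObject.AddComponent<XRGrabInteractable>();
        }
    }
}
#endif
```

The same trick could cover a Samples-level component that brings in RPM avatars without the package itself depending on RPM.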
We need to consider what to do if, for example, someone has both the Meta and MRTK packages pulled in. In that case there is going to have to be some user intervention, since there's no way to decide which takes priority automatically.
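One option is to at least surface the conflict rather than guessing. A minimal sketch, assuming hypothetical UBIQ_META_PRESENT / UBIQ_MRTK_PRESENT symbols supplied by Version Defines; the actual choice of rig is still left to the developer:

```csharp
using UnityEngine;

// Sketch: surface the conflict instead of silently picking a winner. The
// UBIQ_META_PRESENT / UBIQ_MRTK_PRESENT symbols are hypothetical and would be
// supplied by Version Defines entries keyed to the respective vendor packages.
public static class VendorIntegrationCheck
{
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)]
    static void WarnOnConflict()
    {
        var installed = 0;
#if UBIQ_META_PRESENT
        installed++;
#endif
#if UBIQ_MRTK_PRESENT
        installed++;
#endif
        if (installed > 1)
        {
            // No automatic priority: ask the developer to choose a rig explicitly.
            Debug.LogWarning("More than one vendor integration is installed. " +
                             "Please choose which player rig/prefab to use.");
        }
    }
}
```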
Issue
Many Unity toolkits and plugins exist for adding VR/AR functionality. Particularly for mixed-reality apps, vendor plugins like Meta's and MRTK contain essential functionality that cannot be easily replicated in another package. Ubiq's core features (messaging, avatars, distributed logging, VOIP, etc) run parallel to these other plugins. Ideally we want to exist alongside them, adding the kind of networking functionality one might need for social VR/AR.
Currently Ubiq's player prefab, custom interaction code and samples make this difficult, as they're not compatible with any other library out of the box. We've been working to remove the core features' reliance on them, and have now reached the point where all core features (Messaging, Avatars, VOIP) work across Windows, Mac, Linux, Android, WebGL and UWP without them. This is great, but it still requires developers to do some wrangling to connect other libraries, and it's not always clear what steps that wrangling involves for each library. Even then, the samples will break, as they're built on Ubiq's own interactions.
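For a flavour of that wrangling, here is a hypothetical glue component of the kind developers currently write by hand. None of the names below are Ubiq (or vendor) API, and the real hookup differs per toolkit:

```csharp
using UnityEngine;

// Hypothetical glue of the kind developers currently write by hand: mirror a
// vendor rig's tracked transforms onto whatever transforms drive the local
// networked avatar. Field names are illustrative, not Ubiq or vendor API.
public class RigToAvatarMirror : MonoBehaviour
{
    [Header("Source: the vendor toolkit's rig (Meta, MRTK, XRI, ...)")]
    public Transform sourceHead;
    public Transform sourceLeftHand;
    public Transform sourceRightHand;

    [Header("Target: the transforms the networked avatar reads from")]
    public Transform targetHead;
    public Transform targetLeftHand;
    public Transform targetRightHand;

    void LateUpdate()
    {
        // Copy poses after the vendor toolkit has updated its rig this frame.
        Mirror(sourceHead, targetHead);
        Mirror(sourceLeftHand, targetLeftHand);
        Mirror(sourceRightHand, targetRightHand);
    }

    static void Mirror(Transform source, Transform target)
    {
        if (source == null || target == null) return;
        target.SetPositionAndRotation(source.position, source.rotation);
    }
}
```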
We need to think about how to organise Ubiq for the greatest possible compatibility, hopefully without losing our current 'click-play-to-start' ethos!
Proposal
Ubiq will be separated out into a `ubiq` package with minimal samples (networking-only) and a `ubiq-xri` package with the current samples updated for XRI. WebXR will be made a lot more straightforward to use with `ubiq-webxr`, which will build on top of `ubiq-xri` so that applications can be built for both by swapping the player prefab out at runtime.
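As a rough sketch of that runtime swap, one bootstrap component could instantiate whichever rig matches the build target. The prefab fields are placeholders; the real packages may expose this differently:

```csharp
using UnityEngine;

// Sketch of the proposed swap: one bootstrap scene, with the player rig picked
// per build target. The prefab fields are placeholders assigned in the
// inspector; the real ubiq-xri / ubiq-webxr packages may expose this differently.
public class PlayerRigBootstrap : MonoBehaviour
{
    public GameObject xriPlayerPrefab;    // placeholder for the ubiq-xri rig
    public GameObject webXrPlayerPrefab;  // placeholder for the ubiq-webxr rig

    void Awake()
    {
#if UNITY_WEBGL && !UNITY_EDITOR
        // WebGL builds get the WebXR rig...
        Instantiate(webXrPlayerPrefab);
#else
        // ...everything else gets the XRI rig.
        Instantiate(xriPlayerPrefab);
#endif
    }
}
```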
Here's what the result would look like (dotted packages are external dependencies):

Avatars can be separated into their own packages too: `ubiq-avatars-floating`, `ubiq-avatars-rocketbox`, `ubiq-avatars-readyplayerme`, etc. They'll depend on `ubiq`, and users could import as many or as few as they like. See the current ubiq-avatars-readyplayerme package for an example.

Discussion
The really good news is that most of the vendor packages seem to be converging on Unity's XR Interaction Toolkit for interaction. For example, both MRTKv3 and Meta's new mixed reality tools for the Quest 3 build on it, which makes it a good base for us to build on too. Separating core Ubiq into its own package has the added benefit of making it easier to integrate Ubiq into other (non-XRI, even non-XR) projects. This comes at the cost of managing an extra repo. We'd be interested in thoughts on this: would it be worth it to you? Might you use Ubiq in non-XR projects?
This should not require any server changes, but in the future it may make sense to separate the server into its own project and encourage use through NPM or a Docker image.
Thanks for reading. Any and all comments welcome.