SDKs tested and researched:
a) Wikitude
b) Vuforia
c) EasyAR
d) AR Foundation (ARCore / ARKit)
My AR framework (SDK) of choice is Unity's AR Foundation, for several reasons.
Applications done so far:
-First
- Object placement AR
(Extended with XR Interaction Toolkit)
In this kind of experience I normally use a secondary menu to choose which object should be placed in the scene, driven by plane detection / point clouds.
Why XR Interaction Toolkit?
a) Object manipulation (translation, rotation, scale) with proper HUDs and nice shaders.
b) Proximity annotations (adding a range of +/- 1.5 meters to show/hide extra info).
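The proximity-annotation idea boils down to a per-frame distance check between the AR camera and the annotated object. A minimal sketch follows, in TypeScript rather than the Unity C# the apps actually use (where this would be a `Vector3.Distance` call in `Update`); the hysteresis band is my own addition to avoid flickering at the boundary, and all names are illustrative:

```typescript
// Minimal proximity-annotation gate: show extra info only when the user
// (the AR camera) is within `radius` meters of the annotated object.
// A small hysteresis band prevents on/off flicker right at the boundary.
type Vec3 = { x: number; y: number; z: number };

function distance(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

class ProximityAnnotation {
  visible = false;
  constructor(
    private anchor: Vec3,
    private radius = 1.5,      // show threshold (meters), as in the text
    private hysteresis = 0.2,  // extra distance required before hiding again
  ) {}

  // Call once per frame with the current camera position.
  update(camera: Vec3): boolean {
    const d = distance(camera, this.anchor);
    if (!this.visible && d <= this.radius) this.visible = true;
    else if (this.visible && d > this.radius + this.hysteresis) this.visible = false;
    return this.visible;
  }
}
```

In Unity the same gate would drive `SetActive` on the annotation's canvas.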
Tricks / Ideas?
1) Use light estimation (with HDRI).
2) Create fake contact shadows (emulating ambient occlusion).
3) Add an Animator Controller to the prefab that plays random, short, blended animations for more realism.
4) More...
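On trick 1: AR Foundation reports per-frame light estimates (average brightness, color temperature) that can be fed into the scene's lighting. A rough TypeScript sketch of the kind of mapping involved; the specific constants and ranges here are illustrative assumptions, not values from the real projects:

```typescript
// Illustrative mapping from an AR frame's light estimate to scene-light
// settings. In Unity/AR Foundation this data arrives via
// ARCameraManager.frameReceived; the mapping below is an assumption.
interface LightEstimate {
  averageBrightness: number; // normalized 0..1
  colorTemperatureK: number; // Kelvin
}

interface SceneLight {
  intensity: number; // arbitrary light-intensity units
  warmth: number;    // 0 = cool/neutral, 1 = fully warm tint
}

const clamp01 = (v: number) => Math.min(1, Math.max(0, v));

function applyEstimate(e: LightEstimate): SceneLight {
  // Scale brightness into a usable range, keeping a small floor so the
  // virtual object never goes fully black when the estimate bottoms out.
  const intensity = 0.2 + 1.3 * clamp01(e.averageBrightness);
  // Treat ~6500K (daylight) as neutral; lower temperatures read warmer.
  const warmth = clamp01((6500 - e.colorTemperatureK) / 4000);
  return { intensity, warmth };
}
```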
-Second
- Image Tracker
(Single or Library based usage)
Great for catalogs, museums, technical brochures, and QR-code-style advertising.
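Library-based image tracking usually reduces to a dispatch step: the tracker reports which reference image it found, and the app looks up the content to spawn. A tiny TypeScript sketch of that lookup (in AR Foundation this would hang off `ARTrackedImageManager.trackedImagesChanged`); the image and content names are placeholders, not the real catalog entries:

```typescript
// Dispatch step in library-based image tracking: map each detected
// reference-image name to the content (model, video, etc.) to show.
const contentByImage = new Map<string, string>([
  ["catalog-page-01", "pump-model"],     // placeholder names
  ["catalog-page-02", "valve-model"],
  ["museum-poster",   "exhibit-video"],
]);

function onImageTracked(imageName: string): string | undefined {
  // undefined means the tracked image has no content mapped to it.
  return contentByImage.get(imageName);
}
```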
Where and How did I use them?
I did a deliverable assignment for the postgraduate degree, which consisted of making an Adventure Zoo (that was my choice).
The mechanic was simple: kids could spawn different animals (there were 14 of them) in their gardens or any open space.
Music, details, and the ability to get info about each animal were the goals of this app.
I did a refinery facility builder, in which the user was able to generate and place giant structures in the plant facility and compose a layout to be presented to stakeholders.
The interesting part was the ability to add "Annotations" with the Pen Tool, then capture the scene (render the screen) and share the final result via email or PDF, or save it.
For the firm "Angst & Pfister" (Zurich, Switzerland) I did an AR application called Railway.
It consisted of an animated train arriving at the initial screen and a floating hotspot to get more info about their different solutions for each area.
For the firm "Zimm GmbH" (Vorarlberg, Austria) I did an AR application (also a 3D app) that lets the user track images in their commercial catalogs and: (A) animate the device, (B) rotate and scale it, and (C) highlight certain areas of the device (with shaders), offering extra information about it.
-Third
- Other innovative POCs
(done for University and myself)
The City Augmented
Using GPS positioning and object placement (spawn on proximity), the user can discover and get augmented information
based on the device's geolocation, with a precision of +/- 350 cm (error 30% *).
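Spawn-on-proximity needs the distance between the current GPS fix and each point of interest; the standard tool is the haversine great-circle formula. A self-contained TypeScript sketch (the 50 m spawn radius is an illustrative assumption, not the value used in the actual POC):

```typescript
// Great-circle (haversine) distance in meters between two GPS fixes,
// used to decide whether a point of interest is close enough to spawn.
function haversineMeters(
  lat1: number, lon1: number,
  lat2: number, lon2: number,
): number {
  const R = 6371000; // mean Earth radius in meters
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(lat2 - lat1);
  const dLon = rad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Spawn when within `radiusM` meters (50 m here is just an example value).
const shouldSpawn = (dM: number, radiusM = 50) => dM <= radiusM;
```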
The indoor guide system
Using NavMesh & QR trackers to re-position (if tracking is lost).
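The re-positioning trick: each QR code sits at a known position in the building's map, so when one is scanned, the gap between where the tracker *thinks* the code is and where it really is gives the offset needed to re-anchor the session. A simplified, translation-only TypeScript sketch (a full solution would also correct heading/rotation; names are illustrative):

```typescript
// Translation-only drift correction for an indoor guide: compare a QR
// code's known map-frame position against where drifted tracking observed
// it, and apply the difference to future tracked positions.
type Vec3 = { x: number; y: number; z: number };

function driftOffset(knownQrPos: Vec3, observedQrPos: Vec3): Vec3 {
  return {
    x: knownQrPos.x - observedQrPos.x,
    y: knownQrPos.y - observedQrPos.y,
    z: knownQrPos.z - observedQrPos.z,
  };
}

function relocalize(trackedPos: Vec3, offset: Vec3): Vec3 {
  return {
    x: trackedPos.x + offset.x,
    y: trackedPos.y + offset.y,
    z: trackedPos.z + offset.z,
  };
}
```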
The fruit health calculator
Years ago, I came up with the idea of using IBM Watson (an image classification model) to calculate and estimate the state and quality of fruit being
sold by a supermarket. The idea was quite simple (yet interesting): an AR device is mounted close to the cash register, and every time a fruit
appeared on the conveyor belt, the app would:
a) First -> Detect the kind of fruit (the image-recognition part).
b) Second -> Once the app identified it as, say, a banana, it would query the trained model for how good (or bad) that banana was.
The more references (image samples) the trained model received, the more accurate its results became.
I never took this POC to a real-world production state, but it was able to identify and count, for example, 3 excellent bananas,
2 overripe ones, and 1 of bad quality.
I thought of integrating the model into a "Yield price action" but never had the chance.
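The counting step above is just a tally over the classifier's per-item output. A minimal TypeScript sketch, with hypothetical type and field names (the real POC consumed IBM Watson classification responses):

```typescript
// Tally quality grades coming off the classifier for one conveyor pass,
// e.g. "3 excellent, 2 overripe, 1 bad" bananas.
type Grade = "excellent" | "overripe" | "bad";

interface Detection {
  fruit: string; // e.g. "banana", from the image-recognition step
  grade: Grade;  // from the quality model
}

function tally(detections: Detection[], fruit: string): Record<Grade, number> {
  const counts: Record<Grade, number> = { excellent: 0, overripe: 0, bad: 0 };
  for (const d of detections) {
    if (d.fruit === fruit) counts[d.grade]++;
  }
  return counts;
}
```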
[VR] Virtual Reality
Regarding VR applications, I have used and tested:
a) VRTK
b) XR Interaction Toolkit (modifying some base scripts)
c) VR Interaction Framework (paid Asset)
Where and How did I use those?
As a starter, I have to mention that I did my thesis (university XR postgraduate degree) based on VR.
I could have chosen any of the 13 assignments we went through throughout the year (AR, VR, games, 360, narrative, shaders, lighting, etc.),
but I chose the one that was the most simple and the most complex at the same time: a "VR 360 3D stereoscopic" experience.
Since I wanted to get as close as possible to AAA graphics, I decided to use HDRP (High Definition Render Pipeline) to bake and pre-render the whole experience.
It is a roughly 5-minute journey for kids around the Pyramids and life in the desert.
Target: Educational
You can take a look in VR or simply in the browser at: VR Experience Youtube
What did I do on this project?
I did everything myself, as follows:
a) Storyboard and mockups
b) Sound effects
c) 3D assets (at least 51% of them, because that was one of the rules)
d) 3D materials and Shader Graph for a couple of elements
e) Asset animations
f) Camera movements (travelling) and controllers
g) Scripts to trigger narratives (voices & actions)
h) Particle and special effects (Unity Particle System and VFX Graph)
Total hours invested: 280 h (including prototypes, development, asset creation, sound and VFX, scripting, cameras, baking, lighting, and rendering).
- Other VR developments
To be completed with AEC projects such as Gas Connect (Gas Austria), as well as Doppelmayr (mixed reality with Nreal) and GFM for an international exhibition fair.
[3D] Knowledge / Assets
I use Blender 3.x, and I can model, texture, and light pretty much anything. I also used to work a lot with Cinema 4D and Octane.
Generally speaking, I receive 3D-ready assets from my team (which does not mean I cannot make them myself), so occasionally, in some circumstances,
I have to do some retopology, re-create materials following the render pipeline standards, and/or create a specific shader for a particular use case.
I can do basic rigging (nowadays a large number of add-ons exist for auto-rigging, etc.), and I am able to handle those situations.
Regarding organic modeling, sculpting, etc., I have done some practice and I understand the workflow.
Regarding materials, nodes, etc., I prefer using Substance Painter and then baking down to PBR.
Using this technique I can ease the tedious process of bringing FBX assets into Unity.
I care a lot about performance, doing profiling and plenty of calculations in order to establish the right balance between
device processing capacity vs. FPS and scene management (among other things). In VR apps and gaming you cannot afford to waste CPU or GPU resources; moreover, RAM usage should always be a sensitive topic to pay attention to.
I care about things such as Asset Bundles (Addressables), data management, loading times, and the UX of every single app I develop or manage.
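The FPS-vs-capacity balance starts from the per-frame time budget: roughly 16.7 ms at 60 FPS, or about 13.9 ms at the 72 Hz common on standalone VR headsets. A small TypeScript sketch of that arithmetic; the 10% headroom figure is an illustrative rule of thumb, not a number from the original projects:

```typescript
// Per-frame time budget in milliseconds for a target refresh rate, plus a
// simple profiling check: does a measured frame time fit the budget?
const frameBudgetMs = (targetFps: number): number => 1000 / targetFps;

function fitsBudget(frameMs: number, targetFps: number, headroom = 0.9): boolean {
  // Keep ~10% headroom so spikes (GC, texture uploads) don't drop frames.
  return frameMs <= frameBudgetMs(targetFps) * headroom;
}
```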
Please check some of my 3D work in the POW section. Watch now
I am currently working on reaching a senior level in WebGL 3D experiences,
learning React Three Fiber, Drei, and native Three.js.
For other WebGL experiences I tend to use Unity WebGL.
3D Configurator
Here you have a video pitch (sample) of an application made 100% by me for one of the biggest textile corporations in Europe (Getzner).
It consists of 3 big, separate modules.
a) API (endpoint): REST data - SQL / controller and data layer (with JWT and SHA-256 encrypted token security).
b) MVC .NET Core 6 - EF - backend administration; view layer (including controllers and repositories).
c) Unity WebGL app (real-time 3D experience with C# and repositories).
More than 12,800 lines of code, capable of searching/filtering and fetching 3,000 images (textures) in less than 1.5 seconds, and of course real-time 3D.
The client is able to manage everything, and they use it as their best tool for marketing and sales, with dozens of custom options and features.
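Filtering thousands of texture records in well under a second usually comes down to a prebuilt in-memory index rather than scanning every record per query. A TypeScript sketch of that idea; the record shape and tag names are hypothetical, not the configurator's actual data model:

```typescript
// Prebuilt inverted index: map each tag to the texture ids carrying it,
// so a filter query is a single lookup instead of a scan over ~3,000 records.
interface TextureRecord { id: string; tags: string[] }

function buildIndex(records: TextureRecord[]): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const r of records) {
    for (const tag of r.tags) {
      const bucket = index.get(tag) ?? [];
      bucket.push(r.id);
      index.set(tag, bucket);
    }
  }
  return index;
}

const search = (index: Map<string, string[]>, tag: string): string[] =>
  index.get(tag) ?? [];
```

The index is built once (e.g. when the WebGL app loads), after which each filter operation is O(1) on the tag.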