I’M CURRENTLY WORKING ON LOCATION-BASED AND ML/AI XR EXPERIENCES
I work at the forefront of XR development, with a new focus on integrating Machine Learning (ML) and Artificial Intelligence (AI). I am currently exploring the fusion of machine learning and computer vision using generative synthetic datasets in Unity Perception, with the primary objective of developing a product prototype built on ML and AI models. I also use complementary tools such as TensorFlow for model training, Python for dataset analysis, Houdini for procedural modeling, and 3ds Max for modeling and texturing. The resulting system will account for a range of use cases, including human interaction, drone applications, and static cameras. The generated dataset will be used to train a model for an XR application capable of real-time object classification in field environments. Beyond that, I strive to keep creating engaging and innovative XR applications that bridge the gap between the real and virtual worlds.
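A typical step in this pipeline is sanity-checking the synthetic dataset before training, e.g. confirming that the generated classes are reasonably balanced. The sketch below illustrates the idea in Python; the JSON schema (a `captures` list whose entries carry annotations with a `label_name` field) is a simplified stand-in I am assuming for illustration, not the exact format Unity Perception writes out.

```python
import json
from collections import Counter

def class_distribution(annotation_json: str) -> Counter:
    """Count labeled instances per class in a (simplified) annotation file.

    Assumes each capture record carries a list of annotations with a
    'label_name' field -- a hypothetical, simplified schema standing in
    for real synthetic-dataset output.
    """
    records = json.loads(annotation_json)
    counts = Counter()
    for capture in records["captures"]:
        for annotation in capture.get("annotations", []):
            counts[annotation["label_name"]] += 1
    return counts

# Tiny synthetic example: two captures, three labeled instances.
sample = json.dumps({
    "captures": [
        {"annotations": [{"label_name": "drone"}, {"label_name": "person"}]},
        {"annotations": [{"label_name": "drone"}]},
    ]
})
print(class_distribution(sample))  # Counter({'drone': 2, 'person': 1})
```

A per-class histogram like this is a quick way to catch a skewed synthetic dataset before spending GPU time training the classifier on it.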