I am working at the forefront of XR experience development, with a new focus on integrating Machine Learning (ML) and Artificial Intelligence (AI). I am currently exploring the fusion of machine learning and computer vision using synthetic datasets generated with Unity Perception. The primary objective is to develop a product prototype built on ML and AI models. I’m also using complementary tools such as TensorFlow for model training, Python for dataset analysis, Houdini for procedural modelling, and 3ds Max for modelling and texturing. The resulting system will address several use cases, including human interaction, drone applications, and static cameras. The generated dataset will be used to train a model for an XR application capable of real-time object classification in field environments. Beyond that, I’m striving to continue creating engaging and innovative XR applications that bridge the gap between the real and virtual worlds.
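
As a rough illustration of the training step in this pipeline, the sketch below trains a lightweight image classifier in TensorFlow on synthetic frames. It is a minimal example under stated assumptions, not the project's actual code: the folder path, image size, and per-class directory layout are hypothetical (Unity Perception's native output is JSON-annotated captures, which would first need to be exported into such a layout), and a frozen MobileNetV2 backbone is chosen here only because a small model is easier to run in real time on XR hardware.

```python
# Illustrative sketch only: a small classifier trained on synthetic frames.
# Paths, image size, and the per-class folder layout are assumptions for this
# example, not the actual project structure.
import tensorflow as tf

IMAGE_SIZE = (224, 224)          # assumed resolution after resizing
BATCH_SIZE = 32
DATA_DIR = "synthetic_dataset/"  # hypothetical export of Unity Perception frames,
                                 # organised into one sub-folder per object class

# Load the synthetic frames and split them into training / validation sets.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMAGE_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMAGE_SIZE, batch_size=BATCH_SIZE)
num_classes = len(train_ds.class_names)

# Transfer learning from a frozen MobileNetV2 backbone keeps the model small
# enough for real-time inference on XR devices.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMAGE_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # train only the classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

In practice the trained model would then be converted to a mobile-friendly format and queried frame by frame from the XR application; that deployment step is outside the scope of this sketch.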