
Chase Morgan

Unity_Ltsvhs5wNR.gif

Object Detection

Abbey of New Clairvaux Sprint 3

04/12/2026

Sprint Overview

Sprint three is over, and with it came some good work on the project! This sprint we were able to get an actual build out for testing. A lot was learned along the way, especially about dealing with lower-performance platforms like Unity web builds. Unfortunately, due to the complexity of the tasks assigned, only three cards were completed this sprint. The main focus was implementing the core feature of this product: the scanning feature.

The scanning feature uses a common machine learning architecture known as YOLO (You Only Look Once), which divides the image into a grid and has each grid cell predict bounding boxes and class probabilities in a single pass. This is useful for us since a trained model can pick out objects with impressive accuracy. YOLO is usually used in situations where you need to detect many objects that can differ slightly in shape and size; however, it is still great for general scanning purposes.
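To make that grid idea concrete, here is a minimal sketch of decoding one candidate detection from a YOLO-style output. The tensor layout (x, y, w, h, objectness, then per-class scores) is an assumption for illustration; real YOLO variants lay their outputs out differently.

```csharp
// Sketch only: decode one YOLO-style detection row into a usable struct.
// Layout assumed: [x, y, w, h, objectness, classScore0, classScore1, ...]
public struct Detection
{
    public float X, Y, W, H;   // box centre and size (normalized)
    public float Confidence;   // objectness * best class score
    public int ClassIndex;     // index of the most likely class
}

public static class YoloDecode
{
    public static Detection DecodeRow(float[] row, int numClasses)
    {
        int bestClass = 0;
        float bestScore = 0f;
        // Pick the class with the highest predicted score for this cell.
        for (int c = 0; c < numClasses; c++)
        {
            float s = row[5 + c];
            if (s > bestScore) { bestScore = s; bestClass = c; }
        }
        return new Detection
        {
            X = row[0], Y = row[1], W = row[2], H = row[3],
            Confidence = row[4] * bestScore,
            ClassIndex = bestClass
        };
    }
}
```

In practice you would decode every row, drop detections below a confidence threshold, and run non-maximum suppression to merge overlapping boxes.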

The scanning feature takes a snapshot of the current camera texture every ~30 frames and runs it through the model, which was trained on thousands of images. It only steps through roughly five layers of the model per frame to keep performance at around 30 frames per second, so we don't accidentally crash the user's device. YOLO is designed to be performant; however, we are still doing billions of floating-point operations per second, so we also want to offload work to the GPU where possible. Unfortunately, WebGL gives us neither CPU multi-threading nor GPU compute shaders, so we have to use GPU pixel shaders to move work onto the GPU. This does cost some performance, but it isn't really noticeable in practice.
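The snapshot-then-step loop described above could be sketched roughly like this with Unity Sentis on the GPUPixel backend. Exact API names vary between Sentis versions (this assumes the `Worker`/`ScheduleIterable` style), and the field names here are placeholders, not our actual classes.

```csharp
using System.Collections;
using Unity.Sentis;
using UnityEngine;

// Sketch: periodically snapshot the camera texture and run the model
// a few layers per frame on the GPUPixel backend (the WebGL-safe path).
public class ScanRunner : MonoBehaviour
{
    public ModelAsset modelAsset;
    public RenderTexture cameraTexture; // snapshot source (assumed)
    const int LayersPerFrame = 5;       // keep each frame's cost small

    Worker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = new Worker(model, BackendType.GPUPixel);
        StartCoroutine(ScanLoop());
    }

    IEnumerator ScanLoop()
    {
        while (true)
        {
            using var input = TextureConverter.ToTensor(cameraTexture);
            IEnumerator schedule = worker.ScheduleIterable(input);

            // Step the model a handful of layers at a time so no single
            // frame stalls long enough to drop below ~30 fps.
            int steps = 0;
            while (schedule.MoveNext())
            {
                if (++steps % LayersPerFrame == 0) yield return null;
            }

            // ...read results here, then wait ~30 frames before rescanning.
            for (int i = 0; i < 30; i++) yield return null;
        }
    }

    void OnDestroy() => worker?.Dispose();
}
```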

Scan Logic

After we take that snapshot, we start up a worker using Unity Sentis, which loads the model at runtime in a way that impacts performance the least. Using C# asynchronous methods lets us clone each tensor off of the main thread and read its data back from the GPU. From this data we can determine things like whether anything was detected, the confidence level of the detection, the location of the detection, and the class of the detected object. Even more data is available, like intermediate tensor values, but it isn't relevant for display.
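The async readback might look something like the sketch below. `ReadbackAndCloneAsync` is the Sentis 2.x name for the non-blocking GPU readback; the `[numBoxes, 6]` output layout (box, score, class) is a hypothetical example, not our model's actual shape.

```csharp
using System.Threading.Tasks;
using Unity.Sentis;

public static class DetectionReader
{
    // Sketch: pull the output tensor back from the GPU without blocking
    // the main thread, then scan it for the highest-confidence hit.
    public static async Task<(float confidence, int classIndex)?> ReadBestDetection(Worker worker)
    {
        var output = worker.PeekOutput() as Tensor<float>;
        if (output == null) return null;

        // Asynchronous GPU -> CPU copy; awaiting keeps the frame responsive.
        using var cpu = await output.ReadbackAndCloneAsync();

        // Hypothetical layout: rows of [x, y, w, h, score, class].
        float bestScore = 0f;
        int bestClass = -1;
        for (int i = 0; i < cpu.shape[0]; i++)
        {
            float score = cpu[i, 4];
            if (score > bestScore) { bestScore = score; bestClass = (int)cpu[i, 5]; }
        }
        return bestClass < 0 ? null : (bestScore, bestClass);
    }
}
```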

Worker Setup

Overall, it may not seem like much was added, but a lot of work was done outside the codebase to get to this point. To get YOLO working, we first had to build a small test dataset. After taking the pictures, you must label each image one by one, and then sort every image into three categories: training, validation, and test. During the next sprint we will be photographing our actual finalized dataset, which will include thousands of images and will take at least a week to fully label. Thank you for reading!
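For the curious, the three-way sort at the end can be automated once the labeling is done. This is a plain .NET sketch; the folder names and the 80/10/10 ratios are our own choices, not anything YOLO requires.

```csharp
using System;
using System.IO;
using System.Linq;

public static class DatasetSplitter
{
    // Sketch: shuffle labeled images and copy them into
    // training/validation/test folders with an 80/10/10 split.
    public static void Split(string sourceDir, string destDir)
    {
        var rng = new Random(42); // fixed seed so the split is reproducible
        var images = Directory.GetFiles(sourceDir, "*.jpg")
                              .OrderBy(_ => rng.Next())
                              .ToArray();

        int trainEnd = (int)(images.Length * 0.8);
        int valEnd = (int)(images.Length * 0.9);

        for (int i = 0; i < images.Length; i++)
        {
            string bucket = i < trainEnd ? "training"
                          : i < valEnd ? "validation"
                          : "test";
            string target = Path.Combine(destDir, bucket);
            Directory.CreateDirectory(target);
            File.Copy(images[i], Path.Combine(target, Path.GetFileName(images[i])), true);
        }
    }
}
```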

Unity_MWLsjqfDHr.gif

POI Tracking

Unity_H3FgYAWiEU.gif

POI Completion

Abbey of New Clairvaux Sprint 2

03/29/2026

Sprint Overview

Sprint two is over, and with it came a fair amount of work on the project. Unfortunately, this sprint was only one week long, so there was less time to complete work than in other sprints. I completed around six cards, all related to the project's POI system.

The POI system is a relatively straightforward state system that holds task-related data and is spawned at runtime from pre-defined data assets that the designer creates. It automatically binds to its UI Toolkit element using UI Toolkit's built-in binding system, so whenever the POI updates its state, the UI reflects the change! Another thing completed this sprint was the POI tracking and info display feature.
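The designer-asset-plus-binding pattern could be sketched as below, using UI Toolkit's runtime data binding (Unity 2023.2+). All class, property, and element names here are illustrative stand-ins, not our actual code.

```csharp
using Unity.Properties;
using UnityEngine;
using UnityEngine.UIElements;

// Designer-authored data asset the POI is spawned from.
[CreateAssetMenu(menuName = "POI/Definition")]
public class PoiDefinition : ScriptableObject
{
    public string title;
    public int totalTasks;
}

// Runtime state object that the UI binds against.
public class PoiState
{
    [CreateProperty] public string Title { get; set; }
    [CreateProperty] public int CompletedTasks { get; set; }
    [CreateProperty] public int TotalTasks { get; set; }
}

public static class PoiBinder
{
    public static void Bind(VisualElement poiRoot, PoiState state)
    {
        // Point the element tree at the state object; child bindings
        // resolve their paths against this data source.
        poiRoot.dataSource = state;

        var label = poiRoot.Q<Label>("poi-title");
        label.SetBinding("text", new DataBinding
        {
            dataSourcePath = new PropertyPath(nameof(PoiState.Title)),
            bindingMode = BindingMode.ToTarget
        });
    }
}
```

With this in place, updating `state.Title` (or the task counts bound the same way) is enough for the UI to refresh on its own.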

When a user clicks a POI, it opens a nice little menu showing the POI and its current task completion out of the total task count. The values are tracked using UI Toolkit bindings, and the menu itself is a child of the POI container, since the menu needs to follow the POI wherever it goes and this was the fastest way to achieve that. When the user double clicks, the POI becomes tracked, which marks it blue.
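The single-click/double-click behavior could be wired up along these lines; UI Toolkit's `ClickEvent` carries a `clickCount`, and the blue "tracked" look here is applied via a USS class whose name is made up for the example.

```csharp
using UnityEngine.UIElements;

public static class PoiClickHandler
{
    // Sketch: single click opens the info menu, double click toggles tracking.
    public static void Register(VisualElement poi, VisualElement infoMenu)
    {
        poi.RegisterCallback<ClickEvent>(evt =>
        {
            if (evt.clickCount >= 2)
            {
                // Double click: toggle the tracked state (blue highlight
                // via a USS class; "poi--tracked" is an assumed name).
                poi.ToggleInClassList("poi--tracked");
            }
            else
            {
                // Single click: show the task-completion menu. Because the
                // menu is a child of the POI container, it follows the POI.
                infoMenu.style.display = DisplayStyle.Flex;
            }
        });
    }
}
```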

Unfortunately, none of what was created actually functions yet; it is just data bindings under the hood that will need real implementation behind them in the future. Overall, this sprint went fairly slowly, and if we want a finished product we will have to put in some extra hours. Thanks for reading!

Unity_MWLsjqfDHr.gif

POI Tracking
