As a Christmas present for a family member, I built an application that indexes clothes you upload and lets you query them with an AI prompt.
It is a Dart / Flutter app with a Python backend. It is optimized for mobile; the screenshots below, however, are from a desktop web browser.
Main Page
[Screenshot: main page]
This is the main page of the application. It shows the weather for your current location at the top; the app fetches that first, then asks the Gemma model, "Xyla", to generate the best outfit combination for that weather.
Below each analyzed item you can see the model's explanation of why it was chosen.
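As a rough sketch of that flow, here is how the backend might fold the fetched weather and the indexed wardrobe into a single prompt for the model. The function and field names here are illustrative assumptions, not the app's actual API:

```python
# Hypothetical sketch: combine current weather with the wardrobe index
# before asking the Gemma model ("Xyla") for an outfit recommendation.
# All names (build_outfit_prompt, 'condition', 'temp_c') are assumptions.

def build_outfit_prompt(weather: dict, items: list[dict]) -> str:
    """Fold the fetched weather and the wardrobe into one prompt string."""
    lines = [
        f"Weather: {weather['condition']}, {weather['temp_c']}C.",
        "Pick the best outfit from these items and explain each choice:",
    ]
    for item in items:
        lines.append(f"- {item['name']}: {item['description']}")
    return "\n".join(lines)

prompt = build_outfit_prompt(
    {"condition": "light rain", "temp_c": 8},
    [{"name": "Wool coat", "description": "long navy coat, 80% navy / 20% grey"}],
)
```

The prompt string would then be sent to the model along with any system instructions; fetching the weather first means the model never has to guess the conditions.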
Wardrobe Page
[Screenshot: Wardrobe page]
This is the secondary Wardrobe page, where you can asynchronously upload or take photos, which the Gemma model identifies and analyzes before organizing them into categories. Note in the screenshots of these coats how the AI first describes the clothing item and then provides a color composition; both are stored and considered by the model when choosing an outfit for a query. Lastly, the "Failed Jobs" section at the top collects any uploads that aren't clothing, showing a photo of the failed item along with the reason the image failed parsing (for example, if you upload a picture of a hotdog).
How It Works
This application works by using the vision (image-understanding) capabilities of the Gemma LLM. When a photo is ingested, the model analyzes its color composition, style, length, and clothing item type, and the item is then organized accordingly.
Once organized, items can be browsed via search or requested directly via the "Ask Xyla" button, which connects straight to the LLM.
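A minimal sketch of that per-photo analysis step: ask the model for a strict JSON object covering those attributes, then validate the reply before storing it in the index. The prompt wording and key names below are assumptions, not the app's real schema:

```python
# Hedged sketch of the analysis step: request the clothing attributes as
# JSON and validate the model's reply. Prompt text and keys are assumptions.

import json

ANALYSIS_PROMPT = (
    "Describe this clothing item as JSON with keys: "
    '"type", "style", "length", and "colors" (color name -> fraction, summing to 1).'
)

def parse_analysis(raw: str) -> dict:
    """Validate the model's JSON reply before indexing it."""
    data = json.loads(raw)
    for key in ("type", "style", "length", "colors"):
        if key not in data:
            raise ValueError(f"model reply missing {key!r}")
    return data

# Example reply the model might produce for a navy peacoat:
reply = '{"type": "coat", "style": "peacoat", "length": "knee", "colors": {"navy": 0.8, "grey": 0.2}}'
attrs = parse_analysis(reply)
```

Validating up front keeps malformed model output out of the index; a reply missing a key would be rejected rather than stored half-parsed.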
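The plain-search path can be as simple as an attribute filter over the stored analyses, with "Ask Xyla" reserved for free-text queries that go to the LLM. A minimal sketch of the search side, with illustrative names:

```python
# Hypothetical sketch of the non-LLM search path: exact-match filtering
# over the stored item attributes. Function and field names are assumptions.

def search_items(items: list[dict], **filters: str) -> list[dict]:
    """Return items whose attributes match every given filter,
    e.g. search_items(items, type="coat")."""
    return [i for i in items if all(i.get(k) == v for k, v in filters.items())]

items = [
    {"name": "Peacoat", "type": "coat", "style": "peacoat"},
    {"name": "Red dress", "type": "dress", "style": "a-line"},
]
coats = search_items(items, type="coat")
```

Free-text questions that don't map cleanly to attributes would instead be forwarded to the model along with the wardrobe, as in the prompt-building step described earlier.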
Fun Example
Here's a fun example from asking the model "give me an outfit for a Valentine's Day date". Check out its explanations for an all-red outfit :D
[Screenshot: Valentine's Day outfit example]