Recommendation systems are everywhere, from Netflix and Spotify to Amazon. But what if you wanted to build a visual recommendation engine? One that looks at the image itself, not just the title or tags? In this article, you’ll build a men’s fashion recommendation system using image embeddings and the Qdrant vector database. You’ll go from raw image data to real-time visual recommendations.
Table of contents
- Learning Objectives
- Use Case: Visual Recommendations for T-shirts and Polos
- Step 1: Understanding Image Embeddings
- Step 2: Getting the Dataset
- Step 3: Store and Search Vectors with Qdrant
- Step 4: Create the Recommendation Engine with Feedback
- Step 5: Build a UI with Streamlit
- Conclusion
- Frequently Asked Questions
Learning Objectives
- How image embeddings represent visual content
- How to use FastEmbed for vector generation
- How to store and search vectors using Qdrant
- How to build a feedback-driven recommendation engine
- How to create a simple UI with Streamlit
Use Case: Visual Recommendations for T-shirts and Polos
Imagine a user clicks on a stylish polo shirt. Instead of using product tags, your fashion recommendation system will recommend T-shirts and polos that look similar. It uses the image itself to make that decision.
Let’s explore how.
Step 1: Understanding Image Embeddings
What Are Image Embeddings?
An image embedding is a vector. It is a list of numbers. These numbers represent the key features in the image. Two similar images have embeddings that are close together in vector space. This allows the system to measure visual similarity.
For example, two different T-shirts may look different pixel-wise. But their embeddings will be close if they have similar colors, patterns, and textures. This is a crucial ability for a fashion recommendation system.
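To make “close together in vector space” concrete, here is a minimal sketch of cosine similarity, the metric used later in this article. The vectors are toy stand-ins for real embeddings, not actual model output:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means the vectors point in the same direction; values near 0 mean unrelated
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional vectors standing in for real image embeddings
striped_tee  = np.array([0.9, 0.1, 0.3])
striped_polo = np.array([0.85, 0.15, 0.35])
plain_hoodie = np.array([0.1, 0.9, 0.2])

print(cosine_similarity(striped_tee, striped_polo))  # high: visually similar
print(cosine_similarity(striped_tee, plain_hoodie))  # low: visually different
```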
How Are Embeddings Generated?
Most embedding models use deep learning. CNNs (Convolutional Neural Networks) extract visual patterns. These patterns become part of the vector.
In our case, we use FastEmbed. The embedding model used here is Qdrant/Unicom-ViT-B-32:
```python
from fastembed import ImageEmbedding
from typing import List
from dotenv import load_dotenv
import os

load_dotenv()
model = ImageEmbedding(os.getenv("IMAGE_EMBEDDING_MODEL"))

def compute_image_embedding(image_paths: List[str]) -> List[List[float]]:
    # Returns one embedding vector per input image
    return list(model.embed(image_paths))
```
This function takes a list of image paths. It returns vectors that capture the essence of those images.
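As a quick sanity check, you might call it like this. The file names are placeholders for images on disk:

```python
# Hypothetical usage: embed two local images (paths are placeholders)
vectors = compute_image_embedding(["shirt_01.jpg", "shirt_02.jpg"])
print(len(vectors))     # 2 — one vector per image
print(len(vectors[0]))  # the embedding dimension of the model
```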
Step 2: Getting the Dataset
We used a dataset of around 2000 men’s fashion images. You can find it on Kaggle. Here is how we load the dataset:
```python
import shutil
import os
import kagglehub
from dotenv import load_dotenv

load_dotenv()
kaggle_repo = os.getenv("KAGGLE_REPO")
path = kagglehub.dataset_download(kaggle_repo)
target_folder = os.getenv("DATA_PATH")

def getData():
    # Copy the downloaded dataset into the project folder once
    if not os.path.exists(target_folder):
        shutil.copytree(path, target_folder)
```
This script checks if the target folder exists. If not, it copies the images there.
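To feed the next step, you still need the list of image paths to embed. One way to collect them is sketched below; the recursive pattern and the .jpg extension are assumptions about the dataset layout, so adjust them to match the actual folder structure:

```python
import glob
import os

getData()  # ensure the dataset has been copied locally

# Recursively collect image paths; the "*.jpg" pattern is an assumption
image_paths = glob.glob(os.path.join(target_folder, "**", "*.jpg"), recursive=True)
print(f"Found {len(image_paths)} images")
```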
Step 3: Store and Search Vectors with Qdrant
Once we have embeddings, we need to store and search them. This is where Qdrant comes in. It’s a fast and scalable vector database.
Here is how to connect to the Qdrant vector database:
```python
from qdrant_client import QdrantClient

client = QdrantClient(
    url=os.getenv("QDRANT_URL"),
    api_key=os.getenv("QDRANT_API_KEY"),
)
```

This is how to insert the images, paired with their embeddings, into a Qdrant collection:

```python
import uuid

from qdrant_client import models

class VectorStore:
    def __init__(self, embed_batch: int = 64, upload_batch: int = 32, parallel_uploads: int = 3):
        # ... (initializer code omitted for brevity) ...
        pass

    def insert_images(self, image_paths: List[str]):
        def chunked(iterable, size):
            # Yield successive size-sized slices of the input list
            for i in range(0, len(iterable), size):
                yield iterable[i:i + size]

        for batch in chunked(image_paths, self.embed_batch):
            embeddings = compute_image_embedding(batch)  # Batch embed
            points = [
                models.PointStruct(id=str(uuid.uuid4()), vector=emb, payload={"image_path": img})
                for emb, img in zip(embeddings, batch)
            ]
            # Upload each sub-batch
            self.client.upload_points(
                collection_name=self.collection_name,
                points=points,
                batch_size=self.upload_batch,
                parallel=self.parallel_uploads,
                max_retries=3,
                wait=True,
            )
```
This code takes a list of image file paths, turns them into embeddings in batches, and uploads those embeddings to a Qdrant collection. The initializer (omitted above) checks whether the collection exists and creates it if needed; a sketch of that logic follows below. Each image gets a unique ID and is wrapped into a “Point” together with its embedding and file path. These points are then uploaded to Qdrant in parallel sub-batches to speed things up.
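Here is a rough sketch of what that omitted initializer might do. This is not the article’s exact code: the 512-dimension value matches the usual output size of Qdrant/Unicom-ViT-B-32 and cosine distance is assumed, so verify both against your setup:

```python
from qdrant_client import models

# Sketch of the omitted initializer logic: create the collection once.
# Assumes 512-dim vectors and cosine distance; adjust to your model.
if not client.collection_exists("fashion_images"):
    client.create_collection(
        collection_name="fashion_images",
        vectors_config=models.VectorParams(size=512, distance=models.Distance.COSINE),
    )
```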
Search Similar Images
```python
def search_similar(query_image_path: str, limit: int = 5):
    emb_list = compute_image_embedding([query_image_path])
    hits = client.search(
        collection_name="fashion_images",
        query_vector=emb_list[0],
        limit=limit,
    )
    return [{"id": h.id, "image_path": h.payload.get("image_path")} for h in hits]
```
You give a query image, and the system returns visually similar images, ranked by cosine similarity.
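Putting it together, a call might look like this; the query path is a placeholder:

```python
# Hypothetical query: find the 3 images most similar to a local photo
results = search_similar("data/polo_001.jpg", limit=3)
for r in results:
    print(r["id"], r["image_path"])
```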
Step 4: Create the Recommendation Engine with Feedback
We now go a step further. What if the user likes some images and dislikes others? Can the fashion recommendation system learn from this?
Yes. Qdrant allows us to give positive and negative feedback. It then returns better, more personalized results.
```python
class RecommendationEngine:
    def get_recommendations(self, liked_images: List[str], disliked_images: List[str], limit=10):
        recommended = client.recommend(
            collection_name="fashion_images",
            positive=liked_images,
            negative=disliked_images,
            limit=limit,
        )
        return [{"id": hit.id, "image_path": hit.payload.get("image_path")} for hit in recommended]
```
Here are the inputs of this function:
- liked_images: A list of image IDs representing items the user has liked.
- disliked_images: A list of image IDs representing items the user has disliked.
- limit (optional): An integer specifying the maximum number of recommendations to return (defaults to 10).
This returns recommended clothing items using the embedding-vector similarity presented previously.
This lets your system adapt. It learns user preferences quickly.
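As a quick sketch of how this is called, the IDs below are placeholders for point IDs returned by earlier searches:

```python
engine = RecommendationEngine()

# Placeholder IDs: use real point IDs from previous search/recommend results
recs = engine.get_recommendations(
    liked_images=["id-of-liked-item"],
    disliked_images=["id-of-disliked-item"],
    limit=5,
)
for rec in recs:
    print(rec["id"], rec["image_path"])
```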
Step 5: Build a UI with Streamlit
We use Streamlit to build the interface. It’s simple, fast, and written in Python.
Users can:
- Browse clothing
- Like or dislike items
- View new, better recommendations
Here is the Streamlit code:
```python
import streamlit as st
from PIL import Image
import os

from src.recommendation.engine import RecommendationEngine
from src.vector_database.vectorstore import VectorStore
from src.data.get_data import getData

# -------------- Config --------------
st.set_page_config(page_title="Men's Fashion Recommender", layout="wide")
IMAGES_PER_PAGE = 12

# -------------- Ensure Dataset Exists (once) --------------
@st.cache_resource
def initialize_data():
    getData()
    return VectorStore(), RecommendationEngine()

vector_store, recommendation_engine = initialize_data()

# -------------- Session State Defaults --------------
session_defaults = {
    "liked": {},
    "disliked": {},
    "current_page": 0,
    "recommended_images": vector_store.points,
    "vector_store": vector_store,
    "recommendation_engine": recommendation_engine,
}
for key, value in session_defaults.items():
    if key not in st.session_state:
        st.session_state[key] = value

# -------------- Sidebar Info --------------
with st.sidebar:
    st.title("Men's Fashion Recommender")
    st.markdown("""
    **Discover fashion styles that suit your taste.**
    Like 👍 or dislike 👎 outfits and receive AI-powered recommendations tailored to you.
    """)

    st.markdown("### Dataset")
    st.markdown("""
    - Source: [Kaggle – virat164/fashion-database](https://www.kaggle.com/datasets/virat164/fashion-database)
    - ~2,000 fashion images
    """)

    st.markdown("### How It Works")
    st.markdown("""
    1. Images are embedded into vector space
    2. You provide preferences via Like/Dislike
    3. Qdrant finds visually similar images
    4. Results are updated in real-time
    """)

    st.markdown("### Technologies")
    st.markdown("""
    - **Streamlit** UI
    - **Qdrant** vector DB
    - **Python** backend
    - **PIL** for image handling
    - **Kaggle API** for data
    """)
    st.markdown("---")

# -------------- Core Logic Functions --------------
def get_recommendations(liked_ids, disliked_ids):
    return st.session_state.recommendation_engine.get_recommendations(
        liked_images=liked_ids,
        disliked_images=disliked_ids,
        limit=3 * IMAGES_PER_PAGE,
    )

def refresh_recommendations():
    liked_ids = list(st.session_state.liked.keys())
    disliked_ids = list(st.session_state.disliked.keys())
    st.session_state.recommended_images = get_recommendations(liked_ids, disliked_ids)

# -------------- Display: Selected Preferences --------------
def display_selected_images():
    if not st.session_state.liked and not st.session_state.disliked:
        return
    st.markdown("### Your Picks")
    cols = st.columns(6)
    images = st.session_state.vector_store.points
    for i, (img_id, status) in enumerate(
        list(st.session_state.liked.items()) + list(st.session_state.disliked.items())
    ):
        img_path = next((img["image_path"] for img in images if img["id"] == img_id), None)
        if img_path and os.path.exists(img_path):
            with cols[i % 6]:
                st.image(img_path, use_container_width=True, caption=f"{img_id} ({status})")
                col1, col2 = st.columns(2)
                if col1.button("Remove", key=f"remove_{img_id}"):
                    if status == "liked":
                        del st.session_state.liked[img_id]
                    else:
                        del st.session_state.disliked[img_id]
                    refresh_recommendations()
                    st.rerun()
                if col2.button("Switch", key=f"switch_{img_id}"):
                    if status == "liked":
                        del st.session_state.liked[img_id]
                        st.session_state.disliked[img_id] = "disliked"
                    else:
                        del st.session_state.disliked[img_id]
                        st.session_state.liked[img_id] = "liked"
                    refresh_recommendations()
                    st.rerun()

# -------------- Display: Recommended Gallery --------------
def display_gallery():
    st.markdown("### Smart Suggestions")
    page = st.session_state.current_page
    start_idx = page * IMAGES_PER_PAGE
    end_idx = start_idx + IMAGES_PER_PAGE
    current_images = st.session_state.recommended_images[start_idx:end_idx]

    cols = st.columns(4)
    for idx, img in enumerate(current_images):
        with cols[idx % 4]:
            if os.path.exists(img["image_path"]):
                st.image(img["image_path"], use_container_width=True)
            else:
                st.warning("Image not found")

            col1, col2 = st.columns(2)
            if col1.button("👍 Like", key=f"like_{img['id']}"):
                st.session_state.liked[img["id"]] = "liked"
                refresh_recommendations()
                st.rerun()
            if col2.button("👎 Dislike", key=f"dislike_{img['id']}"):
                st.session_state.disliked[img["id"]] = "disliked"
                refresh_recommendations()
                st.rerun()

    # Pagination
    col1, _, col3 = st.columns([1, 2, 1])
    with col1:
        if st.button("⬅️ Previous") and page > 0:
            st.session_state.current_page -= 1
            st.rerun()
    with col3:
        if st.button("➡️ Next") and end_idx < len(st.session_state.recommended_images):
            st.session_state.current_page += 1
            st.rerun()

# -------------- Render the App --------------
display_selected_images()
display_gallery()
```

Conclusion

You just built a complete fashion recommendation system. It sees images, understands visual features, and makes smart suggestions.

Using FastEmbed, Qdrant, and Streamlit, you now have a powerful recommendation system. It works for T-shirts, polos, and any other men’s clothing, and it can be adapted to any other image-based recommendation task.

Frequently Asked Questions

**Do the numbers in image embeddings represent pixel intensities?**

Not exactly. The numbers in embeddings capture semantic features like shapes, colors, and textures—not raw pixel values. This helps the system understand the meaning behind the image rather than just the pixel data.

**Does this recommendation system require training?**

No. It leverages vector similarity (like cosine similarity) in the embedding space to find visually similar items without needing to train a traditional model from scratch.

**Can I fine-tune or train my own image embedding model?**

Yes, you can. Training or fine-tuning image embedding models typically involves frameworks like TensorFlow or PyTorch and a labeled dataset. This lets you customize embeddings for specific needs.

**Is it possible to query image embeddings using text?**

Yes, if you use a multimodal model that maps both images and text into the same vector space. This way, you can search images with text queries or vice versa.

**Should I always use FastEmbed for embeddings?**

FastEmbed is a great choice for quick and efficient embeddings. But there are many alternatives, including models from OpenAI, Google, or Groq. Choosing depends on your use case and performance needs.

**Can I use vector databases other than Qdrant?**

Absolutely. Popular alternatives include Pinecone, Weaviate, Milvus, and Vespa. Each has unique features, so pick what best fits your project requirements.

**Is this system similar to Retrieval Augmented Generation (RAG)?**

No. While both use vector searches, RAG integrates retrieval with language generation for tasks like question answering. Here, the focus is purely on visual similarity recommendations.