Email Data Extraction and Lead Generation from PST Files: Turning Historical Emails into Qualified Leads with AI
Introduction
Businesses are increasingly adopting data-driven models, yet they often overlook one of the richest sources of untapped lead data: email archives. Buried within old Outlook backups are potential goldmines of sales intelligence, contact information, and engagement patterns. This project focused on unlocking that potential by automating the extraction of valuable lead data from PST files using Python and artificial intelligence (AI). The result was a scalable system that not only parsed emails but also enriched, validated, and prepared the data for smooth CRM integration.
Brief Description
The solution was built to mine lead data from large archives of historical emails stored in PST format, Outlook's native archive format. These archives contained years of business communication holding valuable lead information when mined intelligently.
We developed a Python-based automation tool to handle this task end to end. It parsed PST files to extract email metadata and content, used artificial intelligence to interpret unstructured text, and generated organized CSV files ready for import into the CRM platform.
The key processes included deduplication, validation via external APIs, and filtering of irrelevant or internal communication. The tool revived forgotten email threads and uncovered active sales opportunities.

Objectives
Client-specific goals
The tool was engineered to meet several targeted objectives:
- Automatic extraction of lead data from old Outlook emails: The system eliminates the need for manual data mining by parsing PST backups automatically. These backups contain thousands of emails from previous years which, when correctly parsed, reveal valuable insights and contacts.
- Structured dataset generation for the sales team: Rather than presenting raw data, the tool structures extracted information into clearly defined fields, including names, email addresses, job titles, company names, and phone numbers. This gives the sales team an actionable dataset they can filter, sort, and analyze as needed.
- Cleaning, deduplication, and validation of extracted contacts: To ensure high data quality, duplicate contacts are removed using a session-wide comparison, and syntax checks and blacklist filters validate email addresses so teams can focus solely on usable, high-value leads.

Conclusion
The project illustrates the powerful combination of AI, automation, and data validation in transforming legacy email archives into actionable lead intelligence. Google Gemini handled intelligent parsing and NeverBounce handled email validation, ensuring accuracy and relevance, while deduplication, logging, and domain filtering kept the dataset clean.
This case is a clear example of how old communications, when combined with modern technology, can fuel new opportunities and streamline lead generation workflows.

Wider Business Purposes
Beyond the immediate use case, the solution supported broader business development goals:
- Lead generation: Identification of high-quality, engaged contacts
The automation identifies contacts who have previously interacted with the business and are therefore already familiar with the company. It prioritizes leads that are likely to engage again, surfacing high-value targets for outreach.
- Data enrichment: Converting unstructured email data into structured information
By using AI, the system adds structure and intelligence to unstructured email content. Details such as job titles and inferred company types turn basic emails into strategic sales leads.
- CRM Readiness: Generating importable CSVs for HubSpot/Salesforce
The organized output is formatted for compatibility with the most frequently used CRM platforms. This ensures a smooth import process, allowing the sales team to start engagement activities without delay.
- Personalization for Outreach: Using Roles and Industry to Tailor Campaigns
Detailed job titles and company information make hyper-targeted messaging easy. For example, marketing executives can receive campaign pitches, while IT heads can get product specifications relevant to their business.
- Email validation: Improving deliverability via external APIs
Email validation via APIs reduces bounce rates, improving campaign efficiency and protecting sender reputation for future outreach.
- Competitor insights: Identifying prior company engagement
By analyzing sender domains and message content, the tool identifies which competitor companies were involved in previous conversations. This information informs competitive strategy and reveals potential partnership opportunities.
Technical Base
The entire system was built using modular and scalable technologies:
- Programming language: Python 3.x
The solution was developed in Python, chosen for its versatility, wide range of libraries, and robustness in data manipulation and automation.
- AI API: Google Gemini
The tool integrates with Google Gemini to perform natural language parsing, extracting names, job roles, companies, and phone numbers and inferring organizational structure from contextual clues.
- Email validation: NeverBounce API
To ensure data accuracy, emails are validated through the NeverBounce API, which checks deliverability, syntax correctness, and domain reputation.
- Data Storage: CSV via Pandas
Structured data was stored using Pandas DataFrames and exported as CSV files. This format facilitates universal compatibility and ease of use in CRMs.
- Logging: Custom module
A dedicated logging module tracks every step of the extraction process, from successful parses to errors flagged for debugging.
Key features
PST Email Extraction
At the core of the tool is its ability to extract information from PST files:
- Parses .pst Outlook Backups
This tool uses a PST parser to read and iterate over each item in the backup file. This helps in navigating folders, subfolders, and threads.
- Extracts email bodies and metadata
Each email's subject, body, sender and recipient metadata, and timestamp are captured.
- Filters out internal or irrelevant domains
Internal company domains and spam-like sources are filtered out using a configurable blacklist (failed.json).
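For illustration, here is a minimal sketch of how the PST traversal step might look, assuming the open-source pypff bindings (the case study does not name the exact parser used); the file name and the fields collected are placeholders.

```python
# Minimal sketch of PST traversal, assuming the pypff (libpff) bindings.
import pypff

def walk_folder(folder, records):
    """Recursively collect basic metadata from every message in a folder."""
    for i in range(folder.number_of_sub_messages):
        msg = folder.get_sub_message(i)
        records.append({
            "subject": msg.subject,
            "sender": msg.sender_name,
            "body": msg.plain_text_body,   # may be bytes or None for HTML-only mail
        })
    for j in range(folder.number_of_sub_folders):
        walk_folder(folder.get_sub_folder(j), records)

pst = pypff.file()
pst.open("archive.pst")                    # hypothetical backup file name
records = []
walk_folder(pst.get_root_folder(), records)
pst.close()
print(f"Extracted {len(records)} messages")
```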
AI-based parsing
Once raw emails are extracted, Google Gemini powers the intelligent interpretation:
- Contact names- Names are pulled from both email metadata and content, accounting for signatures and context within threads.
- Job titles- The AI reads email signatures and introductory lines to deduce any professional roles.
- Company names- It detects company names with the help of domain references, email signatures, and mentions in the content.
- Phone numbers and addresses - Contact details embedded in signatures or within emails are extracted.
- Company type inference- Based on domain names and context, the AI attempts to infer the industry or function of the organization.
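A hedged sketch of this AI parsing step follows, using the google-generativeai SDK; the model name, prompt wording, and JSON schema are assumptions rather than the project's actual configuration.

```python
# Illustrative only: one way to prompt Gemini for structured contact fields.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")   # model choice is an assumption

PROMPT = """Extract the sender's name, job title, company, phone number and
company type from the email below. Reply with a JSON object using the keys
name, job_title, company, phone, company_type. Use null for missing fields.

EMAIL:
{email_text}
"""

def parse_contact(email_text: str) -> dict:
    response = model.generate_content(PROMPT.format(email_text=email_text))
    # The model is asked to return JSON; strip possible code fences defensively.
    raw = response.text.strip().strip("`").removeprefix("json").strip()
    return json.loads(raw)
```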
Validation and deduplication
To produce the best possible output, the following processes are undertaken:
- Removal of duplicate entries across sessions - The tool maintains a cache of processed entries to prevent redundancy, even across multiple runs.
- Validation of email syntax - Regex patterns check the structure and validity of each email address before further processing.
- Skipping of blacklisted domains - Internal domains can be excluded using a configurable list, keeping the focus on external leads.
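The validation and deduplication pass could look roughly like the sketch below; the regex, the structure of failed.json, and the in-memory cache are simplified assumptions.

```python
# Sketch of syntax validation, blacklist filtering and deduplication.
import json
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

with open("failed.json") as fh:              # assumed: a JSON list of blocked domains
    blacklisted_domains = set(json.load(fh))

seen = set()                                 # in a real run this cache would be persisted

def keep_contact(email: str) -> bool:
    email = email.strip().lower()
    if not EMAIL_RE.match(email):            # basic syntax validation
        return False
    domain = email.split("@", 1)[1]
    if domain in blacklisted_domains:        # skip internal/irrelevant domains
        return False
    if email in seen:                        # drop duplicates across runs
        return False
    seen.add(email)
    return True
```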
Data Output
The final dataset is a clean, well-organized CSV file containing:
- Cleaned output
- Fields for each row, namely Name, Company, Job Title, Email, Website, Phone Number, and other valuable attributes that assist with segmentation and targeting.
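As a simple illustration of the export step, assuming the cleaned records are held as a list of dictionaries (the sample row and exact column set are placeholders):

```python
import pandas as pd

# cleaned_records would normally come from the parsing and validation steps;
# a single hard-coded row stands in for it here.
cleaned_records = [{
    "Name": "Jane Doe", "Company": "Acme Corp", "Job Title": "Head of IT",
    "Email": "jane.doe@acme.example", "Website": "acme.example", "Phone Number": "+1 555 0100",
}]

df = pd.DataFrame(cleaned_records)
df = df.drop_duplicates(subset="Email")      # final safety net against duplicates
df.to_csv("leads.csv", index=False)          # CRM-ready CSV, one contact per row
```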
Customization and notes
- Domain filtering via failed.json
A JSON file allows dynamic updates to the domain exclusion list without changing code.
- Rate limiting with time.sleep(1)
To comply with API usage quotas, a one-second delay was added between requests to Google Gemini (a brief sketch of this pattern appears after this list).
- Logging errors and duplicates
Detailed logs were used to enable traceability and help troubleshoot any skipped or failed entries.
- Future extensibility
While the current version's output CSVs are available, the architecture was designed to support direct integration with CRM APIs, such as HubSpot and Salesforce, in future iterations.
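A brief sketch of the rate-limiting and logging pattern noted above; the one-second pause mirrors the time.sleep(1) detail, while the log file name and wrapper function are illustrative.

```python
# Hedged sketch of rate limiting and logging around the AI parsing calls.
import logging
import time

logging.basicConfig(filename="extraction.log", level=logging.INFO)

def call_with_rate_limit(parse_fn, email_text):
    """Call the AI parser, pausing one second between requests to respect quotas."""
    try:
        result = parse_fn(email_text)
        logging.info("Parsed contact: %s", result.get("name"))
        return result
    except Exception as exc:          # log failures instead of aborting the whole run
        logging.error("Failed to parse email: %s", exc)
        return None
    finally:
        time.sleep(1)                 # comply with Gemini API usage quotas
```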

Outcome
Achievements
- Massive Time savings by processing thousands of emails in an hour.
- High-quality leads with enriched metadata to ensure data isn’t just complete but meaningful and ready for outreach.
- Focused sales efforts, with high-intent contacts prioritized for outreach.
Potential ROI
- An 80-90% reduction in lead research time, resulting in a decrease in manual labor required to identify qualified leads.
- Real-time validation and domain filtering further reduced the bounce rates.
- Historical emails, once considered digital clutter, are now an active resource in the business development arsenal.
Automating High-Volume Lottery Ticket Generation with the Toto-TKT Project
Overview
The Toto-TKT project involved developing an automated ticket creation system to replace manual formatting and production processes for branded lottery and raffle tickets. This project was created to read Excel data and generate structured, print-ready tickets with precise layouts, essentially functioning like an automated ticket machine for high-volume output.
The goal was to turn digital inputs into high-quality physical materials, making it easier to produce tickets for campaigns, sweepstakes, and other large-scale events.
Client Background
The client works in promotional events, where ticket-based systems are an important part of operations.
Whether running a sweepstake, community giveaway, or corporate campaign, the client frequently has to produce a large volume of tickets with custom layouts and branding elements.
Previously, these tickets were created manually by editing data in Excel and mapping it onto physical templates. This was not only labour-intensive but also inconsistent, especially at higher volumes. Small layout mistakes, printing issues, and the risk of duplication led to operational setbacks. They needed a solution that would eliminate these problems while saving time. We endeavoured to set up an automated ticket creation system.

Challenges & Objectives
The following were some of the challenges the client faced and the objectives formulated for the auto lottery processor:
Challenges
- The manual formatting process was error-prone, especially at high ticket volumes.
- Inconsistencies in layout often led to misalignment during printing.
- Physical ticket templates required precise mark placement to remain legible and usable.
- The ticket generation process had to accommodate varied ticket lengths and structures, depending on the campaign.
Objectives
- Develop a tool that could convert structured Excel rows into printable tickets.
- Maintain strict grid alignment for marks and numbers to match physical layouts.
- Output professional-quality PDFs that require no additional editing.
- Build a process that could be reused for future campaigns with minimal setup.

Conclusion
Relu Consultancy built an auto lottery processor that solved a clear operational pain point for the client. The Toto-TKT Project turned spreadsheet data into clean, event-ready tickets with speed and accuracy.
The system’s modular design made it easy to reuse across different types of events, including raffles, lotteries, and entry passes. The combination of precision formatting, batch processing, and user-friendly documentation made it easy for the client to adopt and scale.
Overall, the project made high-volume ticket creation faster and more consistent, helping the client deliver professional, branded materials for their campaigns.

Approach & Implementation
Relu Consultancy created a Python-based automated ticket generation system capable of reading Excel files and converting them into layout-specific, high-resolution PDFs. The system was built to handle variations in ticket structure, support branding requirements, and generate outputs ready for printing on standard A4 paper. To support adoption, the final solution included walkthrough documentation and a screen recording for internal training.
Key Features
- The script reads six-number combinations from a designated "Combination" column in Excel.
- It supports multiple formats: 10, 12, or 13 values per combination mapped precisely on a grid.
- Each ticket displays either dots or crosses based on client preference.
- Ticket numbers are rendered upside-down to suit the physical layout's readability.
- Outputs are saved as A4-sized PDFs at 300 DPI for high-quality print compatibility.
- Error handling, validation checks, and automatically created output folders help maintain order during large batch generation.
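To make the workflow concrete, here is an illustrative sketch of the Excel-to-PDF step using pandas and reportlab; the column name mirrors the "Combination" column mentioned above, but the grid spacing, mark positions, and file names are assumptions, not the production layout.

```python
# Illustrative Excel-to-PDF ticket sketch, assuming pandas + reportlab.
import pandas as pd
from reportlab.lib.pagesizes import A4
from reportlab.lib.units import mm
from reportlab.pdfgen import canvas

df = pd.read_excel("tickets.xlsx")                    # hypothetical input file

pdf = canvas.Canvas("tickets.pdf", pagesize=A4)
for ticket_no, row in df.iterrows():
    # Assumes space-separated values in the "Combination" column.
    numbers = [int(n) for n in str(row["Combination"]).split()]
    for value in numbers:
        # Map each value onto a fixed grid cell; the 6 mm spacing is an assumption.
        x = 20 * mm + (value % 10) * 6 * mm
        y = 250 * mm - (value // 10) * 6 * mm
        pdf.drawCentredString(x, y, "x")              # cross marks; dots are an option
    # Render the ticket number upside-down, as the physical layout requires.
    pdf.saveState()
    pdf.translate(180 * mm, 20 * mm)
    pdf.rotate(180)
    pdf.drawString(0, 0, f"Ticket {ticket_no + 1:05d}")
    pdf.restoreState()
    pdf.showPage()                                    # one ticket per A4 page
pdf.save()
```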
Common Use Cases
The Toto-TKT Project’s functionality made it suitable for a variety of event and campaign needs. Some of the most frequent applications included:
- Lottery Tickets: Generates unique combinations for official lottery draws, laid out for immediate printing.
- Raffle Entries: Creates hundreds or thousands of entries for community or commercial raffles, all formatted consistently.
- Event Entry Passes: Custom tickets with individual identifiers and formatting tailored to specific event themes.
- Survey or Exam Sheets: Marked layouts aligned with answer sheets or feedback forms, where precise placement is crucial for scanning or review.

Results & Outcomes
The automated ticket generation system delivered significant improvements across multiple areas:
- Faster Turnaround: The manual formatting process that once took hours was reduced to just minutes.
- Accuracy at Scale: Mark placements were accurate to the millimetre, helping avoid printing misalignment and formatting problems.
- Consistent Branding: Branded tickets followed a standard design across batches, improving presentation at events.
- Scalable Outputs: The client could generate large quantities of tickets in one run without worrying about duplication or formatting breakdowns.
By delivering consistent, professional tickets that could be printed and distributed immediately, the client gained greater control over promotional materials. The tool also opened opportunities for new use cases, such as interactive surveys and educational events.
Key Takeaways
Several important lessons emerged from the Toto-TKT Project:
- Automating layout-based tasks significantly reduced the risk of human mistakes and the time spent on repetitive formatting.
- Using a grid-based logic system helped maintain precise alignment across ticket designs and batches.
- Including a screen recording and walkthrough documentation made onboarding internal users simpler and more effective.
- The solution bridged digital inputs (Excel) and physical outputs (printed tickets), giving the client a reliable way to manage ticketing for events and campaigns of any size.
AI-Powered Web Scraping for Smarter Data Extraction
Introduction
Managing and analyzing large volumes of data can be challenging, especially when information is spread across multiple sources. This project focused on developing an automated web scraping system combined with AI-driven data structuring to simplify data extraction, improve accuracy, and enhance decision-making. By leveraging web scraping with Python, SerpApi, and Selenium, alongside Gemini 1.5 Pro AI, the solution provided a structured approach to gathering and processing part number data while minimizing manual effort.
Client Background
The client needed a streamlined solution to collect and process part number data from multiple websites. Their existing method relied on manual data entry, which was slow, prone to errors, and increasingly difficult to manage as the volume of information grew. Extracting, analyzing, and organizing this data required significant time and effort, limiting their ability to make timely decisions. To address these challenges, they required automated web scraping tools capable of handling these tasks while ensuring accuracy and adaptability.

Challenges & Goals
The following were some of the challenges and goals of the project:
Challenges
Collecting part number data manually was time-consuming and required a considerable amount of effort. This method not only slowed down the process but also led to inconsistencies, making it difficult to maintain accuracy.
Many websites posed additional challenges, such as requiring logins, incorporating captchas, and using dynamically loaded content, all of which complicated data extraction. These barriers made it difficult to gather information efficiently and required constant manual adjustments.
Even when data was successfully retrieved, it often lacked a structured format. This made it challenging to compare and analyze, further slowing down decision-making processes. As the need for data grew, the limitations of manual collection became even more apparent, highlighting the necessity for a more effective and scalable approach.
Goals
The first goal of the project was to create a system for web scraping using Selenium, SerpApi, and Python to collect part number data from multiple websites. By automating this process, the aim was to reduce reliance on manual entry and improve the reliability of data collection.
Another key objective was to apply AI-based processing to analyze and organize the extracted data. The system needed to identify alternate and equivalent part numbers, allowing for a more comprehensive understanding of available components and their relationships.
Ensuring data retrieval remained accurate and consistent despite website restrictions was also a priority, which raised the question of how to handle captchas during web scraping. The solution also had to navigate logins and dynamically loaded content without disrupting the flow of information.
Finally, the extracted data needed to be presented in structured formats, such as CSV and Google Sheets. This would allow for seamless integration into the client’s existing workflows, making the information easily accessible and actionable.

Conclusion
This project improved how the client collects and processes data, replacing manual methods with an automated system that organizes and structures information effectively. By combining web scraping with AI, Relu Consultancy provided a reliable solution tailored to the client’s needs. The result was a more accessible, accurate, and manageable data collection process, allowing for better decision-making and reduced workload.

Implementation & Results
A custom web scraping workflow was built using SerpApi, Selenium, and Python. The system was designed to handle various website structures, extract part numbers accurately, and minimize errors. With this approach, data retrieval became faster and required less manual input.
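A simplified sketch of such a scraping flow is shown below, assuming the google-search-results (SerpApi) client and Selenium; the query, CSS selector, and API key are placeholders rather than the client's actual configuration.

```python
# Simplified SerpApi + Selenium scraping sketch.
from serpapi import GoogleSearch
from selenium import webdriver
from selenium.webdriver.common.by import By

def find_candidate_pages(part_number: str) -> list[str]:
    """Use SerpApi to locate pages that likely list the part number."""
    search = GoogleSearch({
        "q": f'"{part_number}" datasheet OR distributor',
        "api_key": "YOUR_SERPAPI_KEY",
        "num": 10,
    })
    results = search.get_dict().get("organic_results", [])
    return [r["link"] for r in results if "link" in r]

def scrape_part_numbers(url: str) -> list[str]:
    """Load a result page in a real browser and pull visible part numbers."""
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        cells = driver.find_elements(By.CSS_SELECTOR, "td.part-number")  # assumed selector
        return [c.text.strip() for c in cells if c.text.strip()]
    finally:
        driver.quit()
```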
AI-Powered Data Structuring
Once the data was collected, Gemini 1.5 Pro AI processed and structured the information. This AI-powered data extraction:
- Identified alternate and equivalent part numbers, ensuring a broader scope of data.
- Formatted the extracted information into structured files for better usability.
- Generated reports in CSV and Google Sheets, making data more accessible for analysis.
Reliable System for Long-Term Use
To maintain accuracy and consistency, the system was built to:
- Adjust to changing website structures, reducing the need for constant manual updates.
- Bypass obstacles like logins, captchas, and dynamic content without compromising reliability.
- Require minimal manual intervention while being adaptable to increasing data demands.

Business Impact
By implementing this system, the client saw significant improvements in their workflow:
- Reduced manual data collection, lowering errors and saving valuable time.
- Faster data retrieval, enabling quicker responses to business needs.
- Structured insights made data easier to analyze, improving decision-making.
- A system built to handle growing data needs, ensuring continued usability.
Key Insights
- Reducing manual processes saves time and minimizes errors.
- AI-powered structuring makes data more practical for analysis.
- Addressing website restrictions ensures reliable data extraction over time.
- Systems that adapt to growing data requirements remain useful in the long run.
Backup Email Parsing Automation System: Streamline Monitoring and Instant Access to Insights
Introduction
Reading emails and extracting crucial data from them is a tedious and time-consuming task. This is especially true for IT teams that spend their valuable time going through the text-heavy backup emails received from backup and disaster recovery service providers.
The emails from the service providers usually contain critical information about the status and health of data backups. These emails help the IT team monitor the backup processes and take necessary actions if any problem is detected.
However, if any critical detail is missed during manual reading, it can lead to delayed responses, unnoticed backup failures, and potential data loss. A simple negligence can disrupt business continuity and affect infrastructure security.
With automation enhancing efficiency in traditionally time-consuming processes, Relu experts helped build a system that automated data extraction from backup emails sent by service providers like Acronis and Datto.

Project Scope
Reading emails manually is tedious, especially when they carry critical data related to security and backups. The manual approach slows down response times and increases the risk of overlooking essential details, which can lead to missed alerts and potential data protection failures.
The client faced the same issues and required an automated system to parse backup alert emails from Acronis and Datto. Acronis and Datto are well-known providers of backup, disaster recovery, and cybersecurity solutions.
Email parsing for IT management is the automatic extraction of structured data from emails and helps the teams collect specific data accurately. In this case, the business wanted an intelligent automated email extraction solution for fetching details, like backup alerts, timestamps, and backup size.

Objectives
The objective of this Acronis and Datto backup alert monitoring process is to solve common problems such as the following:
- Eliminate Manual Tracking of Backup Alert Emails
With the backup email parsing automation solution, the need for IT teams to sift through numerous emails is completely eliminated. The team can focus on resolving the issues rather than searching for them.
- Accurate Parsing of Critical Data
Automated solutions can parse key details, like backup status, timestamps, and affected systems, with precision, reducing the risk of human errors that occur during manual extraction.
- Filter Out Irrelevant Details From Email
The intelligent Datto backup monitoring solution smartly filters out non-essential emails, like promotional emails and routine confirmations. This allows the team to focus on important alerts that indicate potential failures or security threats.
- Store Extracted Data in a Structured Format
The extracted backup data is stored in a centralized and structured format such that the team can easily access the data and go through it. The structured data can be used for further processing, providing actionable insights.

The Bottom Line
Manual data entry is prone to errors, and a small error in security and data backup processes can lead to severe consequences. That’s why Relu’s automated email parsing for IT monitoring ensures that all the critical details are extracted accurately and stored in a structured manner. The stored data can be exported to other formats, like CSV, JSON, or Excel, for further analysis. It can also be exposed through an API, built with Flask or FastAPI, to fetch the MySQL data dynamically and present it in JSON format.
The automated email extraction solution for backup alerts provides a scalable and error-resilient framework for backup monitoring. With automated error handling, logging mechanisms, and server deployment, IT teams can keep track of everything easily with minimal intervention.
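As a rough illustration of the JSON export idea mentioned above, a minimal FastAPI endpoint reading from MySQL might look like this; the table name, columns, and credentials are invented for the example.

```python
# Minimal FastAPI + MySQL sketch for exposing parsed backup alerts as JSON.
import mysql.connector
from fastapi import FastAPI

app = FastAPI()

def get_connection():
    return mysql.connector.connect(
        host="localhost", user="backup_bot", password="secret", database="backups"
    )

@app.get("/backups/latest")
def latest_backups(limit: int = 50):
    """Return the most recent parsed backup alerts as JSON."""
    conn = get_connection()
    cursor = conn.cursor(dictionary=True)
    cursor.execute(
        "SELECT device_name, backup_status, backup_size, event_time "
        "FROM backup_alerts ORDER BY event_time DESC LIMIT %s",
        (limit,),
    )
    rows = cursor.fetchall()
    cursor.close()
    conn.close()
    return rows
```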

Solution
To build the automated email parsing solution for IT monitoring, Relu experts designed a platform that automates the processing, parsing, and tracking of backup alert emails from Acronis and Datto. The platform is built using Python libraries and frameworks.
Here’s how our solution works:
1. Email Processing and Parsing:
The backup email parsing automation solution fetches the backup email alerts from Acronis and Datto using the Microsoft Emails API. The implemented parsing rules extract the data systematically to capture the relevant details, like:
- Backup Status
- Timestamp
- Device/Server Name
- Backup Size
- Error Messages (if any)
- Backup Location
- Next Scheduled Backup
Once the processing of a set of emails is done, the email status is updated to prevent duplicate parsing.
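A hedged sketch of this fetch-and-parse step is shown below; the case study's "Microsoft Emails API" is interpreted here as Microsoft Graph, and the regular expressions stand in for the actual parsing rules.

```python
# Illustrative fetch-and-parse sketch: Microsoft Graph + regex field extraction.
import re
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/me/messages"

def fetch_backup_emails(access_token: str) -> list[dict]:
    headers = {"Authorization": f"Bearer {access_token}"}
    params = {"$search": '"backup"', "$top": 50}
    resp = requests.get(GRAPH_URL, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])

STATUS_RE = re.compile(r"Backup (succeeded|failed|completed with warnings)", re.I)
SIZE_RE = re.compile(r"Backup size:\s*([\d.]+\s*[GMK]B)", re.I)

def parse_alert(message: dict) -> dict:
    """Pull a few example fields out of the subject and body preview."""
    text = message.get("subject", "") + "\n" + message.get("bodyPreview", "")
    status = STATUS_RE.search(text)
    size = SIZE_RE.search(text)
    return {
        "status": status.group(1).lower() if status else None,
        "backup_size": size.group(1) if size else None,
        "received": message.get("receivedDateTime"),
    }
```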
2. Acronis Parsing
The Datto and Acronis backup alert processing solution extracted the backup job details from both the subject and the body. The key data points included the start time and end time of the backup, the duration of the backup process, and backup size.
3. Datto Parsing
For Datto emails, the Acronis and Datto backup monitoring solution extracted the essential details for accurate status tracking. The applied filters remove irrelevant emails that do not contain backup alert data.
4. Error Handling and Logging
Custom error messages were implemented to detect and log issues, like missing or malformed backup data, email parsing failures, and connectivity or API issues. The logging mechanism was designed to track errors and capture debugging insights for better system maintenance.
5. Data Storage and Integration
The parsed data was stored in a MySQL database, which gives the client quick retrieval of backup history and efficient monitoring and reporting. The email records are updated with their processing status to ensure transparency.
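Persisting a parsed alert could look roughly like the following, assuming the mysql-connector-python driver and the same hypothetical backup_alerts table used in the API sketch above.

```python
# Sketch of storing one parsed alert in MySQL.
import mysql.connector

def store_alert(alert: dict) -> None:
    conn = mysql.connector.connect(
        host="localhost", user="backup_bot", password="secret", database="backups"
    )
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO backup_alerts (device_name, backup_status, backup_size, event_time) "
        "VALUES (%s, %s, %s, %s)",
        (alert.get("device"), alert.get("status"), alert.get("backup_size"), alert.get("received")),
    )
    conn.commit()
    cursor.close()
    conn.close()
```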
6. Deployment and Automation
The backup email parsing automation solution is deployed on a server to ensure it remains up and running at all times. The automated scripts monitor and parse the incoming emails from Acronis and Datto in real time, reducing the need for manual intervention.

Results & Impact
The implementation of backup email parsing automation transformed backup monitoring. The system improved the team’s efficiency and productivity, which had previously been drained by manually checking and reading the emails from Acronis and Datto. Manually checking each email in the stack, extracting the details, and copying them into a database is an error-prone process.
However, this automated solution substantially reduced the manual efforts, allowing the IT team to focus more on proactive issue resolution and less on email checking.
Chrome Extension for Sports Betting Automation
Client Background
The client operates in the overseas sports betting space. Their operations involved monitoring and placing bets across several sports betting platforms, including PS3838 and CoolBet. Previously, this process was handled manually, which was time-consuming and introduced room for human error. The client sought a sports betting software solution that could automate these actions while adapting to platform changes and maintaining data reliability.

Challenges & Objectives
The objectives and potential challenges of the project were as follows:
Challenges
- Each sports betting platform used a different site structure, which made creating a consistent automation logic difficult.
- Layouts, CSS selectors, and API structures changed frequently, often breaking existing scripts.
- Many platforms employed anti-bot systems, such as CAPTCHAs, behavioral detection, and IP restrictions.
- The client needed up-to-date and accurate data at all times to make informed decisions, which required validation and error-handling mechanisms.
Objectives
- Build a flexible Chrome extension capable of sports betting automation across multiple platforms.
- Design it to adapt quickly to frontend changes and different site architectures.
- Implement basic bot avoidance features such as proxy rotation and request timing.
- Maintain and update the system regularly to support long-term use.

Conclusion
The browser-based automation system developed by Relu Consultancy gave the client a reliable way to manage betting tasks across platforms like PS3838 and CoolBet. Each platform had its quirks, so custom scripts were built to handle the different site structures. To keep things running smoothly, the system dealt with common issues like CAPTCHAs, changing layouts, and bot detection using proxy rotation, timed requests, and fallback strategies.
The sports betting software continued to perform well even as betting platforms evolved. With steady updates and bug fixes, it reduced the amount of manual work involved and helped create a more consistent, automated process. Overall, the project showed how important it is to build adaptable tools that can grow with changing online environments.

Approach & Implementation
Custom browser scripts were developed for each supported website, allowing the extension to interact with the site’s elements as a user would. The code was structured in a modular fashion, making it easier to isolate and update individual components when a platform changed. This modularity also simplified testing and future feature integration.
A lightweight design was prioritized to ensure the extension ran smoothly on standard user systems without needing significant resources or complex setup.
The AI betting software incorporated dynamic learning mechanisms to adapt to platform changes efficiently.
Maintenance & Updates
Frequent platform updates often caused selector breakage. To address this, regular bug-fixing cycles were introduced to inspect and update affected scripts. Code refactoring accompanied these updates to maintain a clean codebase.
The selector logic was improved with strategies to handle minor layout shifts, reducing the need for constant manual changes. The automated betting strategies integrated into the system ensured that users could adjust betting logic without overhauling the software.
Anti-Bot Considerations
Several strategies were used to reduce the risk of bot detection:
- Proxy rotation was implemented to distribute traffic and avoid IP bans.
- Request timing and user-like behaviors were randomized to mimic human actions.
- Fallback mechanisms were added to maintain functionality during temporary access issues or data gaps.
Monitoring & Support
Basic logging captured session data, including timestamps, responses, and errors, enabling faster issue identification. Retry logic helped the system recover from failed or timed-out requests.
Ongoing support involved regular performance reviews, updates, and the rollout of new features in response to evolving needs. The integration of web scraping for betting sites helped ensure that real-time odds and data were always accessible.

Results & Outcomes
The Chrome extension reduced the need for manual interaction in betting tasks. Processes such as monitoring odds, placing bets, and navigating between platforms became partially or fully automated.
Response times improved across multiple platforms, and the system remained stable even during frequent front-end changes. The automated sports betting system continued to perform well despite evolving platform restrictions.
The solution also scaled over time. As new sports and platforms were introduced into the client’s workflow, the extension continued to deliver reliable performance thanks to its maintainable design and structured update process.
Key Takeaways
Upon completing the project, we identified the following key takeaways:
- In a fast-moving environment like online sports betting, purpose-built automation software significantly improved efficiency and accuracy.
- Planning for constant change early on helped the system stay one step ahead of platform updates and anti-bot measures.
- Regular maintenance, whether updating broken selectors or fixing subtle bugs, was key to keeping the extension stable over time.
- Having a modular code structure allowed new platforms and features to be added without reworking the entire system.
- Even small improvements in betting bot development freed up time and allowed the client to focus on decision-making rather than manual data gathering.
Watch at Web: The AI-Powered Watch Marketplace Intelligence Platform
Project Background & Objectives
- The web dashboard serves as an all-in-one hub for watch collectors, sellers, and resellers, offering smarter search, reliable verification, and time-saving tools for better outcomes.
- Manual tracking across platforms: Users had to jump between websites like eBay, Chrono24, and niche watch forums to track listings. This process was time-consuming and tedious, especially when trying to monitor changes in availability or price.
- Lack of verified information: Most listings do not clearly show whether the watch is genuine or comes with original packaging and proper documentation. This creates uncertainty and increases the risk of buying counterfeit or overpriced items.
- Slow, time-consuming decisions: Without consolidated data and easy comparison tools, buyers and sellers spend hours searching for watches, which delays purchases, sales, and overall market opportunities.

Target Users
- Watch collectors: Individuals looking for limited-edition, rare, and vintage watches, with a strong preference for authenticity and originality.
- Vintage watch lovers: People deeply interested in heritage timepieces and historical models who seek complete sets, including boxes and papers.
- Resellers: Professionals identifying price differences between platforms to profit from quick buys and resales.
- Online sellers: Sellers managing inventory on multiple platforms who need to monitor competition and market trends.
- Luxury buyers: Individuals purchasing high-value watches who need assurance of legitimacy and pricing accuracy.
Project Goals
- Automate watch data collection: Eliminate the need for manual browsing by automatically pulling listings from multiple sources in real time.
- Verify listings with AI: Use AI to confirm listing authenticity based on photos, description quality, and metadata.
- Provide actionable market information: Present users with pricing history, demand signals, and comparable listings through a clear, interactive dashboard.
- Enable proactive tracking: Let users set alerts and automated searches to be notified when a desirable listing becomes available.

Conclusion
Relu's web dashboard solved the core problem of fragmented, unverified watch listings by delivering an AI-powered platform and comprehensive solution. This solution unified search, alert creation, and analysis. It empowered users to act faster, smarter, and with greater confidence in the high-stakes luxury watch marketplace.
By reducing manual effort and boosting trust through intelligent automation, this web dashboard refined timepiece tracking.

Solution Overview
The web dashboard delivers an intelligent, end-to-end solution that transforms fragmented online listings into usable insights:
User-focused design: A dashboard with filters to compare and track watches easily.
AI-powered validation: It processes images and descriptions to assess the authenticity and completeness of each listing.
Data aggregation engine: Uses web scraping to pull watch listings from key platforms and forums.
Phases of Development
Phase 1 – Data Aggregation: Automated scripts crawl and extract watch listings from marketplaces like eBay, Chrono24, and others:
- Rotating proxies: Uses rotating proxies and user agents to avoid detection and IP bans.
- Multi-source parsing: Adapts to different site structures, ensuring consistent, clean data extraction from multiple sources.
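A toy sketch of the rotating-proxy and user-agent idea from this phase, using the requests library; the proxy addresses, user-agent strings, and URL are placeholders.

```python
# Rotating proxies and user agents for polite, ban-resistant fetching.
import random
import requests

PROXIES = ["http://proxy1.example:8080", "http://proxy2.example:8080"]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def fetch_listing_page(url: str) -> str:
    """Fetch one listings page through a randomly chosen proxy and user agent."""
    proxy = random.choice(PROXIES)
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    resp = requests.get(
        url,
        headers=headers,
        proxies={"http": proxy, "https": proxy},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text
```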
Phase 2 – AI Verification: Every listing is analyzed to assess its credibility:
- NLP: Parses the product title and description to extract key details such as brand, model, reference number, and packaging information.
- Image recognition: AI models review uploaded images to identify box presence, documents, and counterfeiting indicators.
- Confidence scoring: Assigns a confidence score that reflects authenticity and listing quality.
Phase 3 – Dashboard & UX: The dashboard and UX are built on an attractive interface that makes interaction straightforward:
- Saved searches: Personalized settings allow commonly used queries to be re-run quickly.
- Price trend graphs: Historical data is displayed visually, helping users see how prices have changed over time.
- Advanced filters: Users can filter by price, brand, model, location, documentation, and more.
Phase 4 – Smart Alerts: These alerts help users stay on top of the market:
- Email notifications: Alerts for specific keywords, price drops, and newly listed items.
- Search automation: Continuously monitors user-defined parameters and triggers notifications instantly.
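For illustration, a keyword-based alert check and email notification might look like the sketch below; the SMTP settings, addresses, and listing fields are assumptions.

```python
# Keyword-match alert sketch using the standard library SMTP client.
import smtplib
from email.message import EmailMessage

def send_alert(listing: dict, user_email: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"New match: {listing['title']} at {listing['price']}"
    msg["From"] = "alerts@example.com"
    msg["To"] = user_email
    msg.set_content(f"A listing matching your saved search is live:\n{listing['url']}")
    with smtplib.SMTP("smtp.example.com", 587) as smtp:
        smtp.starttls()
        smtp.login("alerts@example.com", "app-password")
        smtp.send_message(msg)

def check_saved_search(listings: list[dict], keywords: list[str], user_email: str) -> None:
    """Notify the user about any new listing whose title contains a saved keyword."""
    for listing in listings:
        title = listing["title"].lower()
        if any(kw.lower() in title for kw in keywords):
            send_alert(listing, user_email)
```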
Phase 5 – Deployment: A scalable deployment ensures long-term reliability and growth readiness:
- Cloud-native architecture: The platform is hosted on cloud infrastructure that dynamically allocates resources to maintain performance under load.
- Job queues: Background job queues handle the timely processing of alerts and scraping tasks.
- Automated backup: Backup and monitoring tools safeguard data integrity and platform availability.
Challenges & Solutions
- IP bans during scraping - Used rotating proxy servers and diversified user-agent strings to mimic natural user behavior.
- Duplicate listings - Employed AI that matches images and descriptions to detect duplicates across platforms, even with slight changes.
- Authenticity validation - Developed custom-trained machine learning models that analyze product images and metadata to improve trust in listings.
- Multi-currency confusion - Integrated live currency conversion APIs so that listings reflect current exchange rates.
- Timely updates - Implemented asynchronous job queues that allow for frequent email alerts and listing updates without overloading the system.

Business Impact
General Impact
- Unified marketplace view: The web dashboard combines data from scattered platforms into one centralized place, allowing users to monitor the entire market from a single dashboard.
- Massive time savings: Thanks to the automation and verification tools, users report reducing their research and comparison time by nearly 80%.
- Improved buyer confidence: AI-backed authenticity scoring adds a trusted layer, especially for high-ticket purchases.
- Better pricing strategy: With access to historical data, users can negotiate better and spot undervalued listings.
User-Specific Impact
For Watch Collectors:
- Identify rare and vintage models matching specific criteria like original boxes, papers, and regional availability.
- Reduce the risk of buying incomplete sets or counterfeit items by relying on AI-authenticated insights.
- Save personalized searches and receive real-time alerts.
For Resellers:
- Gain a competitive edge by spotting underpriced listings across platforms.
- Use historical data and trend graphs to determine ideal buy and sell windows.
- Maximize resale margins by understanding and targeting high-demand watches.
For Watch Sellers:
- Benchmark prices against similar listings across different marketplaces.
- Understand clearly which features increase desirability, such as specific model references and included packaging.
- Gauge real-time market demand to help with inventory planning.
User Feedback & Early Metrics
- Simplicity: Users praised the platform’s attractive UI and how much faster it made their decision process.
- High engagement: The average session time was over 12 minutes, indicating deep user interaction.
- Effective alerts: Email open rates were 63%, confirming that notifications were timely and relevant.
- Reliable data: Marketplace data scraping accuracy stood at 94%, verified through sampling and user validation.