🎓 AI enthusiast with a Master’s in Artificial Intelligence and a deep passion for building intelligent systems for the real world. I’ve engineered Transformer-based models for radio map estimation 📡, built and deployed chatbots using ViT and custom NER pipelines, and published peer-reviewed research in IEEE venues.
☁️ Experienced in designing secure, scalable AI systems on AWS, with hands-on work in SageMaker, S3, IAM, and containerized MLOps workflows. I focus on resource-efficient models that run smoothly on CPUs, ideal for lightweight, production-grade assistants 🤖💬.
📊 My work blends ML modeling, data cleaning, and pipeline automation, whether that’s enhancing radio-map estimation with Deep Progressive Networks 🛰️ or building Kafka-powered YouTube analytics pipelines.
⚙️ I thrive at the intersection of ML, cloud, and software engineering: always curious, always learning, always building. Let’s connect and geek out on AI, cloud, or anything in between! 🚀🔍
The languages I am proficient in and would always love to work with.
Data plays a critical role in AI, and having access to big data provides a significant advantage. I have utilized various technologies to effectively handle and analyze large datasets, allowing me to harness the power of data in my AI endeavors.
Deep learning frameworks are truly works of art, empowering us to construct intricate models and effectively navigate the complexities of training and testing. Through my learning journey, I have gained proficiency in utilizing libraries that not only enhance the functionality of these frameworks but also provide valuable visualization capabilities and support.
My favorite tools for version control, code editing, and container orchestration.
Developed a Research Companion application powered by Gemini Pro and the SERP API that lets students and researchers upload research papers and index them as vector embeddings with FAISS. Built a user-friendly Streamlit interface so users can interact with the LLM and ask questions directly about their PDF documents. Implemented a response comparison system that combines RAG (Retrieval-Augmented Generation) answers with SERP (Search Engine Results Page) data to return highly relevant responses to user queries. Applied techniques such as LangChain agents for SERP API integration, prompt compression for improved efficiency, and LangChain memory to maintain contextual understanding across interactions. The result is an efficient, intuitive tool that streamlines information retrieval and analysis during research.
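Below is a minimal sketch of the core upload → chunk → embed → retrieve → answer loop, assuming the streamlit, pypdf, langchain, langchain-community, and langchain-google-genai packages (class names and defaults shift between LangChain versions); the SERP comparison, prompt compression, and memory pieces are omitted here.

```python
# Minimal sketch of the PDF question-answering flow (chunk sizes and model ids
# are illustrative assumptions). Requires GOOGLE_API_KEY in the environment.
import streamlit as st
from pypdf import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings

st.title("Research Companion")

uploaded = st.file_uploader("Upload a research paper (PDF)", type="pdf")
question = st.text_input("Ask a question about the paper")

if uploaded and question:
    # 1. Extract raw text from the uploaded PDF.
    text = "".join(page.extract_text() or "" for page in PdfReader(uploaded).pages)

    # 2. Split into overlapping chunks and index them in FAISS.
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=200
    ).split_text(text)
    embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
    vectorstore = FAISS.from_texts(chunks, embedding=embeddings)

    # 3. Retrieval-augmented generation: fetch relevant chunks, answer with Gemini.
    qa = RetrievalQA.from_chain_type(
        llm=ChatGoogleGenerativeAI(model="gemini-pro"),
        retriever=vectorstore.as_retriever(),
    )
    st.write(qa.invoke({"query": question})["result"])
```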
Check it out! I built a merge model using LSTM and GRU layers that generates image captions, enabling us to compare images and analyse their sentiment through the generated captions.
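As a rough illustration of the merge architecture, here is a hedged Keras sketch that combines a pre-extracted image-feature branch with an LSTM + GRU caption branch; the feature dimension, vocabulary size, and layer widths are illustrative assumptions, not the project's actual configuration.

```python
# Minimal "merge" captioning sketch: CNN image features and a partial caption
# are encoded separately, added together, and used to predict the next word.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size = 5000      # hypothetical caption vocabulary size
max_caption_len = 30   # hypothetical maximum caption length
feature_dim = 2048     # e.g. features from a pretrained CNN encoder

# Image branch: project CNN features into the decoder's hidden space.
image_input = layers.Input(shape=(feature_dim,), name="image_features")
img = layers.Dropout(0.3)(image_input)
img = layers.Dense(256, activation="relu")(img)

# Text branch: embed the partial caption and encode it with LSTM + GRU.
caption_input = layers.Input(shape=(max_caption_len,), name="caption_tokens")
txt = layers.Embedding(vocab_size, 256, mask_zero=True)(caption_input)
txt = layers.LSTM(256, return_sequences=True)(txt)
txt = layers.GRU(256)(txt)

# Merge the two branches and predict the next word of the caption.
merged = layers.add([img, txt])
merged = layers.Dense(256, activation="relu")(merged)
output = layers.Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[image_input, caption_input], outputs=output)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()
```

Sentiment analysis can then be run on the captions the decoder generates, which is how caption-level sentiment comparison between images fits on top of this model.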
Check it out! Streamed data from the YouTube API into Kafka over 30-minute windows and generated real-time analytics with PySpark.
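A minimal PySpark Structured Streaming sketch of the windowed analytics step, assuming JSON messages on a hypothetical youtube-stats Kafka topic and the spark-sql-kafka connector on the classpath; the field names and watermark are illustrative.

```python
# Read video statistics from Kafka and aggregate them in 30-minute windows.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType, TimestampType

spark = SparkSession.builder.appName("yt-analytics").getOrCreate()

schema = StructType([
    StructField("video_id", StringType()),
    StructField("view_count", LongType()),
    StructField("like_count", LongType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")  # requires the spark-sql-kafka package
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "youtube-stats")
       .load())

# Parse the JSON payload produced by the YouTube API poller.
events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

# Aggregate per video over 30-minute event-time windows.
windowed = (events
            .withWatermark("event_time", "10 minutes")
            .groupBy(F.window("event_time", "30 minutes"), "video_id")
            .agg(F.max("view_count").alias("views"),
                 F.max("like_count").alias("likes")))

query = (windowed.writeStream
         .outputMode("update")
         .format("console")
         .option("truncate", False)
         .start())
query.awaitTermination()
```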
Check it out! Implemented a Markov Decision Process (MDP) to optimize a ride-sharing application. The MDP produces the state values and directions needed for an optimal pick-up and drop-off schedule for any given pair of passengers.
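To illustrate the idea, here is a small value-iteration sketch on a hypothetical grid: it computes state values and the greedy directions toward a pickup cell, mirroring the values and directions the MDP produces. The grid size, rewards, and discount factor are assumptions, not the project's actual model.

```python
# Value iteration on a tiny deterministic grid-world MDP.
import numpy as np

ROWS, COLS = 4, 4
PICKUP = (3, 3)                 # hypothetical pickup/drop-off target cell
ACTIONS = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
GAMMA, THETA = 0.9, 1e-6        # discount factor and convergence threshold

def step(state, move):
    """Deterministic transition: move if in bounds, otherwise stay put."""
    r, c = state[0] + move[0], state[1] + move[1]
    return (r, c) if 0 <= r < ROWS and 0 <= c < COLS else state

V = np.zeros((ROWS, COLS))
while True:
    delta = 0.0
    for r in range(ROWS):
        for c in range(COLS):
            if (r, c) == PICKUP:
                continue  # terminal state keeps value 0
            # Bellman optimality update with a -1 step cost (shorter routes win).
            best = max(-1 + GAMMA * V[step((r, c), mv)] for mv in ACTIONS.values())
            delta = max(delta, abs(best - V[r, c]))
            V[r, c] = best
    if delta < THETA:
        break

# Extract the greedy policy: the "direction" to take from each cell.
policy = {}
for r in range(ROWS):
    for c in range(COLS):
        if (r, c) == PICKUP:
            policy[(r, c)] = "*"
        else:
            policy[(r, c)] = max(ACTIONS, key=lambda a: V[step((r, c), ACTIONS[a])])

print(np.round(V, 2))
print(policy)
```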
Check it out! This project classified real and fake news articles using a range of techniques. The best model reached 96.83% accuracy by combining an artificial neural network with TF-IDF-weighted Word2Vec vectors and text characteristics. Ensemble methods and CNN models were also competitive, while the NMF-based model performed relatively poorly. The study highlights the importance of selecting appropriate techniques for text classification.
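As an illustration of the TF-IDF-weighted Word2Vec representation, here is a hedged sketch using gensim and scikit-learn; the toy corpus, labels, and hyperparameters are assumptions and not the project's dataset.

```python
# TF-IDF-weighted Word2Vec document vectors feeding a small neural classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

docs = [
    "government announces new economic policy",    # hypothetical real article
    "celebrity spotted riding a dragon downtown",   # hypothetical fake article
    "scientists publish study on climate change",
    "miracle pill cures every disease overnight",
]
labels = [1, 0, 1, 0]  # 1 = real, 0 = fake

tokenized = [d.lower().split() for d in docs]

# Learn word vectors and per-word IDF weights on the same corpus.
w2v = Word2Vec(tokenized, vector_size=50, window=5, min_count=1, seed=42)
tfidf = TfidfVectorizer()
tfidf.fit(docs)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def doc_vector(tokens):
    """Average the word vectors of a document, each weighted by its IDF score."""
    vecs = [w2v.wv[t] * idf[t] for t in tokens if t in w2v.wv and t in idf]
    weights = [idf[t] for t in tokens if t in w2v.wv and t in idf]
    if not vecs:
        return np.zeros(w2v.vector_size)
    return np.sum(vecs, axis=0) / np.sum(weights)

X = np.vstack([doc_vector(t) for t in tokenized])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=42)
clf.fit(X, labels)
print(clf.predict(X))
```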