<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Building As I Learn]]></title><description><![CDATA[Building As I Learn]]></description><link>https://blog.venkatkolasani.xyz</link><generator>RSS for Node</generator><lastBuildDate>Sat, 16 May 2026 18:05:00 GMT</lastBuildDate><atom:link href="https://blog.venkatkolasani.xyz/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[My Journey Through Cohere Labs ML Summer School: From Beginner to AI Enthusiast]]></title><description><![CDATA[Ever wondered how Netflix knows exactly what show you'll binge next, or how self-driving cars "see" the road? That's the magic of machine learning (ML for short) — a type of AI where computers spot patterns in data all on their own. If that sounds in...]]></description><link>https://blog.venkatkolasani.xyz/cohere-labs-ml-summer-school</link><guid isPermaLink="true">https://blog.venkatkolasani.xyz/cohere-labs-ml-summer-school</guid><category><![CDATA[cohere ]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[openscience]]></category><category><![CDATA[community]]></category><category><![CDATA[ML]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI enthusiast]]></category><dc:creator><![CDATA[Kolasani Venkat]]></dc:creator><pubDate>Wed, 23 Jul 2025 17:56:07 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://media.licdn.com/dms/image/v2/D5622AQH_V9eyrJKsTQ/feedshare-shrink_800/B56Zei2XV1HoAk-/0/1750783850439?e=1755734400&amp;v=beta&amp;t=g9IIhcLf5ObHfIBcF9TEUhkAjAotjxXxGYG7CpNDAuk" alt="No alternative text description for this image" /></p>
<p>Ever wondered how Netflix knows exactly what show you'll binge next, or how self-driving cars "see" the road? That's the magic of machine learning (ML for short) — a type of AI where computers spot patterns in data all on their own. If that sounds intriguing but a bit mysterious, you're in the right place!</p>
<p>I recently joined the <strong>Cohere Labs Open Science Community ML Summer School</strong>, a super accessible program designed for anyone curious about machine learning. As a complete beginner, I stumbled upon it and immediately got excited; honestly, it turned out to be one of the best ways I could’ve spent my summer.</p>
<p>Every session was packed with <strong>cool ideas, exciting research paper reviews, and mind-blowing concepts</strong>. Sure, there were moments when I didn’t fully understand everything, but I learned to <strong>embrace the confusion</strong>, trust the process, and just keep exploring. And that made all the difference.</p>
<hr />
<h2 id="heading-what-is-the-cohere-labs-ml-summer-school"><strong>What is the Cohere Labs ML Summer School?</strong></h2>
<p>The <strong>Cohere Labs Open Science Community ML Summer School</strong> is based on a simple but powerful idea: machine learning should be accessible to anyone, no matter their background, location, or experience level. It’s about learning together, staying curious, and building with others.</p>
<p>This summer, Cohere Labs launched an amazing learning initiative featuring speakers from <strong>INRIA, Meta (FAIR), Google DeepMind, Cohere</strong>, and more. These are some of the leading minds in the field, and they shared insights on topics like <strong>foundation models, retrieval systems, multimodal learning</strong>, and even how AI can be used for <strong>social good</strong>.</p>
<p>The best part? It was completely open and beginner-friendly. Whether you were just getting started or already experimenting with models, there was something for everyone. At the end of the program, every participant received a <strong>digital certificate</strong> recognizing their participation.</p>
<p>But what really made it special was the community. Being part of a global group of learners who were all equally excited to explore ML made the experience even more inspiring. It wasn’t just about learning concepts — it was about growing together.</p>
<hr />
<h2 id="heading-a-quick-look-at-the-sessions"><strong>A Quick Look at the Sessions</strong></h2>
<p>The summer school was structured around a series of live sessions, each led by experts working at the cutting edge of machine learning. Every session brought something new, from core concepts to advanced techniques, and made them surprisingly approachable.</p>
<h3 id="heading-session-1-ml-math-refresher"><strong>Session 1: ML Math Refresher</strong></h3>
<p><strong>Speaker:</strong> <em>Katrina Lawrence</em><br />Applied Mathematician<br /><strong>Topic:</strong> <em>Foundational Math for Machine Learning</em></p>
<p>We began the summer school with a back-to-basics session that felt essential, especially for someone like me coming in without a strong math background.</p>
<p>Katrina walked us through the core mathematical concepts that underpin most machine learning algorithms:</p>
<ul>
<li><p><strong>Derivatives</strong></p>
</li>
<li><p><strong>Vector Calculus</strong></p>
</li>
<li><p><strong>Linear Algebra</strong></p>
</li>
</ul>
<p>What I appreciated most was how clearly she explained things. She emphasized that understanding these basics isn’t about memorizing formulas, but about building <strong>intuition</strong>. Whether it’s calculating gradients for optimization or working with matrices in neural networks, this session helped demystify the math behind ML.</p>
<p>If you're looking for an approachable way to refresh your math skills, check out her <a target="_blank" href="https://www.youtube.com/@MathUnlockedWithKatrina">YouTube channel, Math Unlocked</a>.</p>
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQH9wT4oyNdS7A/feedshare-shrink_2048_1536/B56ZfM5Si.HoA0-/0/1751489268078?e=1755734400&amp;v=beta&amp;t=xDch_2Opk8QWZ2-2j9ayTJs80RtJ8sBuWygcUj1dXT4" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-2-introduction-to-embeddings-amp-retrieval">🔍 <strong>Session 2: Introduction to Embeddings &amp; Retrieval</strong></h3>
<p><strong>Speaker:</strong> <em>Nils Reimers</em><br />VP of AI Search at Cohere<br /><strong>Topic:</strong> <em>How Embeddings Power Modern Search Systems</em></p>
<p>In the second session, we dove into the world of <strong>embeddings and neural retrieval</strong> with Nils Reimers, who leads AI Search at Cohere.</p>
<p>This was a big shift from theory to real-world application. Nils explained how transformer-based models (like BERT) are used to <strong>generate embeddings</strong> — dense numerical representations that capture the meaning of text. These embeddings allow models to search and compare information in a far more nuanced way than traditional keyword methods.</p>
<p>Key concepts he covered:</p>
<ul>
<li><p><strong>Retriever + Ranker architecture</strong></p>
</li>
<li><p><strong>Dense vs. Sparse Embeddings</strong></p>
</li>
<li><p><strong>Challenges with limited labeled data</strong></p>
</li>
<li><p><strong>Context Engineering for better search relevance</strong></p>
</li>
</ul>
<p>What stood out most to me was how <strong>context engineering</strong> and smart architecture choices can make or break a search system. It was a powerful reminder that great models alone aren’t enough; how you use them really matters.</p>
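<p>To make the retrieval idea concrete, here is a minimal sketch (not Cohere’s actual stack) of what happens once you have embeddings: represent the query and each document as vectors, then rank documents by cosine similarity. The tiny 4-dimensional vectors below are made up for illustration; real models produce hundreds of dimensions.</p>

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" standing in for real model output.
query = np.array([0.9, 0.1, 0.0, 0.2])
docs = {
    "dense retrieval basics": np.array([0.8, 0.2, 0.1, 0.3]),
    "cooking pasta at home":  np.array([0.0, 0.9, 0.8, 0.1]),
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # prints: dense retrieval basics
```

<p>In a full retriever + ranker system, this similarity search is only the first stage; a heavier model then re-scores the top candidates.</p>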
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQFpfQLUuG-b8A/feedshare-shrink_2048_1536/B56ZfM5SiVH8Ao-/0/1751489267026?e=1755734400&amp;v=beta&amp;t=wsmEpRpIsy9LJDb-pnw19yJopGvIjXwOGYTRa_11S9o" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-3-introduction-to-transformers-and-the-evolution-of-large-language-models"><strong>Session 3: Introduction to Transformers and the Evolution of Large Language Models</strong></h3>
<p><strong>Speaker:</strong> <em>Siddhant Gupta</em><br />NLP Community Lead, Cohere Labs<br /><strong>Topic:</strong> <em>How Transformers Changed the Game in NLP</em></p>
<p>On Day 2, we explored the architecture that powers modern AI, <strong>Transformers</strong>, in a session led by <strong>Siddhant Gupta</strong>, who leads the NLP community at Cohere Labs.</p>
<p>We started with a brief history of models that came before transformers:</p>
<ul>
<li><p><strong>RNNs (Recurrent Neural Networks):</strong> Designed for sequential data, but struggled with long-term memory.</p>
</li>
<li><p><strong>LSTMs (Long Short-Term Memory networks):</strong> An improvement over RNNs with gated memory units for handling longer sequences better.</p>
</li>
</ul>
<p>Then came the real highlight — understanding <strong>Transformers</strong>, which completely reshaped how language models work. Instead of relying on sequential processing, transformers introduced <strong>attention mechanisms</strong>, allowing models to process entire sequences in parallel while still maintaining context.</p>
<h4 id="heading-key-components-of-a-transformer">Key Components of a Transformer:</h4>
<ul>
<li><p><strong>Embedding Layer</strong>: Converts tokens (words or subwords) into dense vectors.</p>
</li>
<li><p><strong>Positional Encoding</strong>: Adds information about token order to embeddings.</p>
</li>
<li><p><strong>Self-Attention &amp; Multi-Head Attention</strong>: Enables the model to focus on relevant words throughout the sequence.</p>
</li>
<li><p><strong>Feedforward Neural Networks</strong>: Processes the attended information.</p>
</li>
<li><p><strong>Stacked Layers</strong>: Allow deeper understanding by layering multiple transformer blocks.</p>
</li>
</ul>
<h4 id="heading-concepts-covered">Concepts Covered:</h4>
<ul>
<li><p><strong>Tokenization</strong>: Splitting text into smaller units like words or subwords.</p>
</li>
<li><p><strong>Word Embeddings</strong>: Vector representations that help the model understand meaning and relationships.</p>
</li>
<li><p><strong>Attention Mechanisms</strong>:</p>
<ul>
<li><p><em>Self-Attention</em>: Each word attends to others in the same input.</p>
</li>
<li><p><em>Cross-Attention</em>: Used in models like encoder-decoder architectures.</p>
</li>
</ul>
</li>
</ul>
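<p>The self-attention mechanism described above can be sketched in a few lines of numpy. This is a toy single-head version with invented dimensions and random (not learned) weight matrices; real transformers add multiple heads, masking, and learned parameters.</p>

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X: np.ndarray, Wq: np.ndarray, Wk: np.ndarray, Wv: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every token scores every other token
    weights = softmax(scores)                 # each row is a distribution over tokens
    return weights @ V                        # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 8, 4            # made-up sizes for illustration
X = rng.normal(size=(seq_len, d_model))       # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

<p>Because the whole sequence is processed at once via matrix multiplications, this is what lets transformers run in parallel rather than token by token like RNNs.</p>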
<h4 id="heading-transformer-variants">Transformer Variants:</h4>
<p>We also explored different transformer-based models and how they work:</p>
<ul>
<li><p><strong>BERT</strong> (Encoder-only): For tasks like classification, sentiment analysis, and question answering.</p>
</li>
<li><p><strong>GPT</strong> (Decoder-only): Ideal for text generation and conversational AI.</p>
</li>
<li><p><strong>RAG</strong> (Retrieval-Augmented Generation): Merges retrieval and generation, useful for providing accurate and up-to-date responses.</p>
</li>
</ul>
<p>Siddhant did a great job breaking down what can be an overwhelming topic into something digestible. I particularly appreciated how he showed real-world applications like <strong>Google Docs suggestions</strong> and <strong>Gmail’s Smart Compose</strong> using BERT, or how GPT is behind models like ChatGPT.</p>
<p>If you're interested in going through the session slides, you can check them out <a target="_blank" href="https://lnkd.in/gjzs_e3E">here</a>.</p>
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQF5B3bq8ELoDQ/feedshare-shrink_2048_1536/B56ZfRrYcwHQAs-/0/1751569509124?e=1755734400&amp;v=beta&amp;t=2amvi9aRoMq1I7vxvRBEmmvfyXTWSN83tj0xc8_FgO4" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-3-scaling-self-supervised-learning-for-vision-an-introduction-to-dinov2"><strong>Session 4: Scaling Self-Supervised Learning for Vision — An Introduction to DINOv2</strong></h3>
<p><strong>Speaker:</strong> <em>Timothée Darcet</em><br />PhD Researcher, Meta AI (FAIR) &amp; Inria<br /><strong>Topic:</strong> <em>Self-Supervised Learning in Computer Vision with DINOv2</em></p>
<p>This session introduced us to the exciting world of <strong>self-supervised learning (SSL)</strong> in computer vision — a method where models learn to understand images without needing manually labeled data. Instead of relying on external annotations, these models generate their own pseudo-labels during training.</p>
<p>Timothée Darcet explained the motivation behind SSL and walked us through key techniques like <strong>contrastive learning</strong> and <strong>masked image modeling</strong>. The highlight was a deep dive into <strong>DINOv2</strong>, a cutting-edge SSL model used for learning high-quality visual representations.</p>
<h4 id="heading-key-takeaways">Key Takeaways:</h4>
<ul>
<li><p><strong>DINOv2</strong> is trained on a curated dataset of <strong>142 million images</strong> using a mix of loss functions (DINO, iBOT, KoLeo).</p>
</li>
<li><p>It outperforms models like <strong>CLIP</strong> in tasks such as segmentation and feature extraction.</p>
</li>
<li><p>Its general-purpose nature makes it suitable for specialized domains, including <strong>medical imaging</strong>.</p>
</li>
<li><p>DINOv2 is particularly strong in feature map quality and interpretability, enabling precise image understanding without labels.</p>
</li>
</ul>
<p>This session helped bridge the gap between complex CV models and practical applications, offering a fresh perspective on how vision models are evolving beyond supervised learning.</p>
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQHrvOIQeiyIqQ/feedshare-shrink_2048_1536/B56ZfrcyIFHQAo-/0/1752001885378?e=1755734400&amp;v=beta&amp;t=f59pKwDt7ZDzLcaAzjj0AjJ7UGaY89z0e-r6ll364gc" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-4-a-temperature-check-on-web-agents"><strong>Session 5: A Temperature Check on Web Agents</strong></h3>
<p><strong>Speaker:</strong> <em>Lawrence Jang</em><br />Researcher at Meta<br /><strong>Topic:</strong> <em>Autonomous Web Interaction with Language Models</em></p>
<p>With large language models gaining the ability to understand and generate text, the next frontier is getting them to <strong>act</strong>, especially on the web. In this session, Lawrence Jang explored the emerging field of <strong>LLM-powered web agents</strong>, which can autonomously navigate websites, click buttons, scroll pages, and even fill out forms using natural language instructions.</p>
<h4 id="heading-highlights">Highlights:</h4>
<ul>
<li><p><strong>WebArena</strong> was introduced as a benchmark, where <strong>humans achieve 80% task success</strong>, while LLM agents currently achieve only around <strong>14%</strong>, highlighting how early the field still is.</p>
</li>
<li><p>Advanced benchmarks like <strong>VisualWebArena</strong> and <strong>VideoWebArena</strong> extend evaluation to visual and video-based tasks.</p>
</li>
<li><p><strong>ICAL</strong> is one approach that uses <strong>human feedback</strong> to fine-tune web agents for better task performance.</p>
</li>
<li><p>The session addressed major challenges:</p>
<ul>
<li><p>Following instructions accurately</p>
</li>
<li><p>Aligning text with visual content</p>
</li>
<li><p>Memory and long-term planning</p>
</li>
<li><p>Preventing hallucinations and ensuring safe behavior</p>
</li>
</ul>
</li>
<li><p>We also got a glimpse into practical tools like <strong>LangChain</strong>, and discussions on future directions such as <strong>multi-agent systems</strong>, <strong>visual grounding</strong>, and <strong>ethical considerations</strong>.</p>
</li>
</ul>
<p>Together, these two sessions gave us a look into the cutting edge of AI: from models that learn to see without supervision to agents that learn to act in the digital world. The possibilities, and the challenges, are both massive and inspiring.</p>
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQHt6CDzddYcHg/feedshare-shrink_2048_1536/B56Zfrcv8VHEAs-/0/1752001876533?e=1755734400&amp;v=beta&amp;t=Uw2Bg6kPltqpje5PleokLhlnEuyLd4NgBFvPgYaR6Rc" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-5-test-time-scaling-small-lms-to-o1-level">Session 6: Test-Time Scaling Small LMs to o1 Level</h3>
<p><strong>Speaker:</strong> Isha Puri<br />AI PhD Student at MIT<br /><strong>Date:</strong> July 10, 2025</p>
<p>As large language models reach diminishing returns from scale, Isha Puri presented a compelling direction: achieving high performance at <strong>test-time</strong> without retraining. Her method, rooted in <strong>particle-based inference</strong> and <strong>process reward models</strong>, emphasizes diversity, balancing exploration and exploitation during decoding.</p>
<p>The results are remarkable: small models (1.5B parameters) were shown to outperform GPT-4o in just four inference rollouts. For 7B models, scaling up to o1-level capabilities took only 32 rollouts.</p>
<p>What stood out was how this technique bypasses the early pruning limitations of greedy or beam search. By unlocking latent capabilities through smarter inference rather than brute-force training, this approach opens the door to <strong>democratizing powerful LLM reasoning at lower cost, latency, and compute.</strong></p>
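<p>Her actual method relies on trained process reward models and language models; purely as a toy illustration of the underlying sample-score-resample loop, here is a sketch where a made-up reward function steers a population of candidate "generations" (here, just numbers) toward a target, instead of committing early the way greedy or beam search does.</p>

```python
import numpy as np

def particle_search(propose, reward, n_particles=8, n_steps=4, rng=None):
    """Toy particle-style inference: keep a population of partial outputs,
    score each with a (stand-in) reward, and resample in proportion to it."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = [propose(None, rng) for _ in range(n_particles)]
    for _ in range(n_steps - 1):
        scores = np.array([reward(p) for p in particles], dtype=float)
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # Resample: promising partial outputs get extended more often.
        idx = rng.choice(n_particles, size=n_particles, p=probs)
        particles = [propose(particles[i], rng) for i in idx]
    return max(particles, key=reward)

# Stand-in task: "generate" a number close to 42 by random steps.
propose = lambda prev, rng: (0.0 if prev is None else prev) + rng.normal(scale=10.0)
reward = lambda x: -abs(x - 42.0)

best = particle_search(propose, reward)
```

<p>The key property this toy shares with the real technique is that low-scoring candidates are pruned softly and probabilistically, so diversity survives longer than under greedy decoding.</p>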
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQFeh6vg1d-DbA/feedshare-shrink_2048_1536/B56ZgCP9zhHQAo-/0/1752384404498?e=1756339200&amp;v=beta&amp;t=Q3we6IZ_rUJn2W1NszdysezFXsIXZklda4FjzY5JxmI" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-6-secret-life-of-noise-understanding-diffusion-models">Session 7: Secret Life of Noise — Understanding Diffusion Models</h3>
<p><strong>Speaker:</strong> Gowthami Somepalli<br />Research Scientist at Adobe Firefly<br /><strong>Date:</strong> July 11, 2025</p>
<p>Gowthami walked us through the evolution of <strong>diffusion models</strong> — the engines behind modern generative art tools like Firefly. Beginning with <strong>DDPMs (Denoising Diffusion Probabilistic Models)</strong> and extending to <strong>DDIMs (Denoising Diffusion Implicit Models, their deterministic variant)</strong>, the session was a deep dive into how structured noise can be harnessed to produce realistic, diverse outputs.</p>
<p>She clarified how <strong>noise schedules</strong> determine the quality and control of generated content, while also introducing <strong>flow matching</strong>, a deterministic framework offering more direct distribution transformation, potentially bridging the gap between variational autoencoders and diffusion models.</p>
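<p>As a concrete illustration of the forward (noising) half of the process, here is a small numpy sketch using the linear beta schedule from the original DDPM paper; the 16-element vector is a made-up stand-in for an image. The closed form lets you jump straight to any noise level <em>t</em> without simulating every step.</p>

```python
import numpy as np

def forward_diffuse(x0: np.ndarray, t: int, betas: np.ndarray, rng) -> np.ndarray:
    """Sample x_t from x_0 in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]   # cumulative signal retention up to step t
    eps = rng.normal(size=x0.shape)     # fresh Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule from the DDPM paper
x0 = np.ones(16)                        # a stand-in "image"
x_early = forward_diffuse(x0, 10, betas, rng)     # barely noised
x_late = forward_diffuse(x0, T - 1, betas, rng)   # close to pure noise
```

<p>The generative model is then trained to run this process in reverse, predicting the noise so it can be stripped away step by step.</p>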
<p>Notably, these models offer:</p>
<ul>
<li><p><strong>Superior sample quality</strong> over GANs</p>
</li>
<li><p><strong>Stable training dynamics</strong></p>
</li>
<li><p><strong>Mathematical rigor</strong></p>
</li>
<li><p><strong>Inference-time flexibility</strong>, making them ideal for creative applications.</p>
</li>
</ul>
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQGWP4jurAy_xg/feedshare-shrink_2048_1536/B56ZgCP9ztHYAo-/0/1752384402432?e=1756339200&amp;v=beta&amp;t=Tm-n5sjQBiLRyhfAFZMZsvkBJDfdNewIpUeRczjnJY0" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-7-understanding-transformers-via-n-gram-statistics">Session 8: Understanding Transformers via N-gram Statistics</h3>
<p><strong>Speaker:</strong> Timothy Nguyen<br />AI Researcher at Google DeepMind<br /><strong>Date:</strong> July 11, 2025</p>
<p>This session reimagined the transformer’s inner workings not as black boxes, but as <strong>statistical machines</strong>. Timothy Nguyen revealed how <strong>up to 79% of transformer predictions</strong> on the TinyStories dataset could be explained using <strong>optimal N-gram rules</strong> derived from training data.</p>
<p>Key takeaways:</p>
<ul>
<li><p>Low-variance predictions align closely with N-gram patterns</p>
</li>
<li><p>Transformers exhibit <strong>curriculum-like learning</strong>, progressing from simple to complex rules</p>
</li>
<li><p>Introduced a novel, <strong>training-intrinsic metric</strong> to detect overfitting <em>without needing a validation set</em></p>
</li>
</ul>
<p>This reframing provides practical tools to better understand <strong>when LLMs memorize, generalize, or hallucinate</strong>, offering a statistically grounded perspective on model interpretability.</p>
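<p>To see what an N-gram rule looks like in practice, here is a minimal bigram predictor, a toy stand-in for the paper’s optimal N-gram rules: it simply counts which token most often follows a given context in the training data and predicts that.</p>

```python
from collections import Counter, defaultdict

def train_ngram(tokens, n=2):
    """Count next-token frequencies for each (n-1)-token context."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i : i + n - 1])
        table[context][tokens[i + n - 1]] += 1
    return table

def predict(table, context):
    """Most frequent continuation of the context, or None if unseen."""
    counts = table.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat slept".split()
model = train_ngram(corpus, n=2)
print(predict(model, ["the"]))  # prints: cat (follows "the" twice, vs "mat" once)
```

<p>The surprising finding is that rules this simple, chosen optimally per position, already account for a large share of a small transformer’s predictions.</p>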
<p>🔗 <a target="_blank" href="https://lnkd.in/gXXXkS5m">Research Paper</a></p>
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQFegc27BY5aQQ/feedshare-shrink_2048_1536/B56ZgCP9z3GUAo-/0/1752384403131?e=1756339200&amp;v=beta&amp;t=ZzPFnvOrtDGYcEjJVfPi7dtNZeSTSuSZQ71eZv_Uc4g" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-8-distributed-training-in-machine-learning"><strong>Session 9: Distributed Training in Machine Learning</strong></h3>
<p><strong>Speaker:</strong> Arthur Douillard<br /><strong>Senior Researcher, Google DeepMind</strong><br /><strong>Topic:</strong> Distributed Training Strategies for Large Language Models</p>
<p>In this session, Arthur Douillard took us behind the scenes of what it really takes to train large language models (LLMs). With their enormous size, these models can’t be trained on a single GPU; distributed training is essential. Arthur unpacked the core strategies used in practice today, like Fully Sharded Data Parallelism (FSDP), Tensor and Pipeline Parallelism, and Expert Parallelism.</p>
<p>What stood out was his dive into experimental methods like DiLoCo, SWARM, PowerSGD, and DeMo. These techniques aim to scale LLM training across devices even when they’re not co-located, though often at the cost of some accuracy or performance. He also touched on the real-world challenges: GPU hardware failures, communication bottlenecks, and the inherent complexity of coordinating planetary-scale training. While we’re not fully there yet, we’re inching closer to a future where training across global clusters is a reality.</p>
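<p>The simplest of these strategies, plain data parallelism, can be illustrated in a few lines: each worker computes gradients on its own data shard, and an all-reduce averages them before one shared parameter update. This toy sketch deliberately ignores sharding, communication, and failure handling.</p>

```python
import numpy as np

def data_parallel_step(params, grads_per_worker, lr=0.1):
    """Toy data parallelism: each worker computes gradients on its own shard;
    an all-reduce averages them before a single shared update."""
    avg_grad = np.mean(grads_per_worker, axis=0)  # the "all-reduce"
    return params - lr * avg_grad

params = np.zeros(4)
# Pretend 3 workers each computed a gradient on a different data shard.
grads = np.array([[1.0, 2.0, 0.0, 4.0],
                  [3.0, 0.0, 2.0, 0.0],
                  [2.0, 1.0, 1.0, 2.0]])
params = data_parallel_step(params, grads)
print(params)  # each entry moved by -lr times the mean of that gradient column
```

<p>Methods like FSDP and the experimental approaches above exist precisely because, at LLM scale, even this averaging step becomes a bandwidth and reliability problem.</p>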
<p><img src="https://media.licdn.com/dms/image/v2/D5622AQGByE-iXh_5ng/feedshare-shrink_2048_1536/B56ZgNDeUbHcAo-/0/1752565680453?e=1756339200&amp;v=beta&amp;t=BIlVE_C48HU8qf3XRwGV-eke0ap0NdGtlCuzREdTc_E" alt="No alternative text description for this image" /></p>
<hr />
<h3 id="heading-session-9-research-mentorship"><strong>Session 10: Research Mentorship</strong></h3>
<p><strong>Speaker:</strong> Sara Hooker<br /><strong>Head of Cohere Labs</strong><br /><strong>Topic:</strong> Finding Meaningful Directions in ML Research</p>
<p>Sara Hooker’s mentorship session felt like a compass for anyone early in their ML research journey. She began with a reflection on the evolution of AI research, urging us to think deeply about how and why we choose problems to work on. Instead of chasing incremental papers or buzzwords, she encouraged us to:</p>
<ul>
<li><p>Master a topic deeply and thoroughly</p>
</li>
<li><p>Collaborate openly and generously</p>
</li>
<li><p>Learn by teaching others</p>
</li>
<li><p>Constantly ask: "Is this scientifically meaningful?"</p>
</li>
</ul>
<p>Sara also introduced the idea of a “third path” between academia and industry, represented by Cohere Labs and other open science communities. These spaces provide an alternative for those who want to contribute to cutting-edge research without being bound by the formal structures of universities or corporate labs. Her session was as inspiring as it was practical, offering a vision of research that is both rigorous and radically accessible.</p>
<hr />
<h3 id="heading-session-10-ml-open-science-social"><strong>Session 11: ML Open Science Social</strong></h3>
<p><strong>Speakers:</strong> Madeline Smith &amp; Brittawnya Prince<br /><strong>Team: Cohere Labs Operations</strong><br /><strong>Topic:</strong> Building Community Through Open Science</p>
<p>To wrap up the summer school, Cohere Labs hosted a virtual social, an informal yet deeply meaningful session. It was a space for researchers from across the globe to connect, share stories, and brainstorm future ideas together. The event captured the spirit of open science: diverse voices, shared curiosity, and a collective drive to explore the unknown.</p>
<p>More than just a networking event, it felt like a celebration of everything we had learned, unlearned, and reimagined during the program. It was a fitting finale to a summer spent not just learning machine learning but living it as a collaborative, creative, and community-first endeavor.</p>
<hr />
<h3 id="heading-reflections-more-than-just-a-summer-school"><strong>Reflections: More Than Just a Summer School</strong></h3>
<p>Looking back, the Cohere Labs ML Summer School wasn’t just a series of lectures — it was a turning point. Coming in with beginner-level knowledge, I walked away not only understanding complex topics like self-supervised learning, distributed training, and tokenization (maybe not always fully, but still learning a great deal), but also feeling part of a vibrant open science community.</p>
<p>What stood out most was the spirit of accessibility. The sessions weren’t about gatekeeping knowledge; they were about opening doors. Each speaker, from leading researchers at Meta and DeepMind to pioneers at Cohere Labs, made the content feel approachable without watering it down.</p>
<p>I also learned that doing machine learning research isn't about knowing everything from the start — it’s about being curious, collaborative, and resilient. Whether it's contributing to open-source projects, diving deeper into topics like explainability or fairness, or just asking better questions, I now feel equipped to take meaningful next steps in my ML journey.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753292885070/8b129176-89fa-4d39-b623-3701b5df34ef.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-whats-next"><strong>What’s Next?</strong></h3>
<p>This summer school planted the seed and now it’s up to me (and all of us who joined) to keep it growing. I’m planning to build hands-on projects, explore open research challenges, and stay connected with the community I’ve found here.</p>
<p>If you’ve ever felt like machine learning was too vast or too complex to dive into, trust me, you’re not alone. But with communities like Cohere Labs and the right mindset, you can absolutely get started.</p>
<p>Let the exploration continue 🚀</p>
<h3 id="heading-explore-more-amp-stay-connected"><strong>Explore More &amp; Stay Connected</strong></h3>
<p>If you’re interested in watching the recorded sessions or learning more about the Cohere Labs Open Science Community, you can visit:<br />🔗 <a target="_blank" href="https://sites.google.com/cohere.com/coherelabs-community/community-programs/summer-school">https://sites.google.com/cohere.com/coherelabs-community/community-programs/summer-school</a></p>
<p>🔗 <a target="_blank" href="https://cohere.com/research">https://cohere.com/research</a></p>
<p>They regularly host talks, reading groups, and other open learning initiatives - highly recommended for anyone passionate about ML and open science!</p>
<p>Feel free to connect with me if you’d like to discuss anything from the sessions, share ideas, or collaborate on projects.</p>
<p>📬 <strong>Connect with me on</strong> <a target="_blank" href="https://www.linkedin.com/in/kolasani-venkat/"><strong>LinkedIn</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Winning at TechXcelerate 2025: A Hackathon Experience That Redefined My Learning Curve]]></title><description><![CDATA[About TechXcelerate 2025
TechXcelerate 2025 is a national-level hackathon organized by CodeBeat and BharatVersity, hosted at BITS Pilani, Hyderabad Campus, bringing together some of the most passionate student innovators from across the country.
The ...]]></description><link>https://blog.venkatkolasani.xyz/techxcelerate-2025-hackathon</link><guid isPermaLink="true">https://blog.venkatkolasani.xyz/techxcelerate-2025-hackathon</guid><category><![CDATA[techxcelerate]]></category><category><![CDATA[hackathon]]></category><category><![CDATA[StudentDeveloper]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Kolasani Venkat]]></dc:creator><pubDate>Sun, 22 Jun 2025 18:09:55 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-about-techxcelerate-2025"><strong>About TechXcelerate 2025</strong></h2>
<p><strong>TechXcelerate 2025</strong> is a national-level hackathon organized by <strong>CodeBeat</strong> and <strong>BharatVersity</strong>, hosted at <strong>BITS Pilani, Hyderabad Campus</strong>, bringing together some of the most passionate student innovators from across the country.</p>
<p>The event featured:</p>
<p>→ <strong>Hands-on workshops</strong> across domains like <strong>AI/ML, MERN Stack, DevOps, and IoT</strong>, led by top industry mentors</p>
<p>→ A <strong>20-hour hackathon</strong> to solve real-world problems and compete for a ₹5,00,000+ prize pool across multiple innovation tracks</p>
<p>With <strong>hundreds of teams and thousands of participants</strong> competing across tracks such as <strong>Open Innovation, Generative AI, Health Care, SaaS</strong>, and more, the hackathon followed a <strong>three-round structure</strong>:</p>
<ul>
<li><p><strong>Round 1: Idea &amp; Team Submission</strong></p>
<p>  Teams submitted their project ideas, selected tech stacks, described team members’ technical skills, and evaluated business viability and future scope.</p>
</li>
<li><p><strong>Round 2: Virtual Progress Check</strong></p>
<p>  A mid-hackathon check-in via progress update forms, assessing technical progress and roadmap alignment.</p>
</li>
<li><p><strong>Round 3: Evaluation &amp; Elimination Round</strong></p>
<p>  Based on a detailed review of project advancement and clarity of implementation — this was the final cut before the finals.</p>
</li>
</ul>
<p>Each round was scored out of <strong>100 points</strong>, and only the <strong>top teams per track</strong> were selected for the final stage.</p>
<ul>
<li><p><strong>Final Round: Jury Pitch &amp; Live Evaluation</strong></p>
<p>  Teams had to present their working MVPs to a jury panel, showcasing both technical execution and real-world impact.</p>
</li>
</ul>
<p>And that’s exactly where our journey and surprise comeback began.</p>
<hr />
<h2 id="heading-our-journey-begins">✨ Our Journey Begins</h2>
<p>I went into <strong>TechXcelerate 2025</strong> with zero expectations — just a backpack, some Red Bulls, and a hope to learn something new. I came out of it with an incredible team experience, a working product, and a <strong>hackathon win I’ll never forget</strong>.</p>
<p>As beginners stepping into our very first hackathon, we didn’t expect much. We came for the experience, the learning, and maybe the late-night hustle. But what unfolded over the next 24 hours turned out to be one of the most unforgettable moments in our tech journey.</p>
<p>What happened still feels surreal: <strong>we made it to the final round… and actually WON 2nd place in the Open Innovation track!</strong> With hundreds of teams, advanced projects, and three intense elimination rounds, the odds didn’t seem in our favor. But somehow, through chaos, collaboration, and a lot of coffee, we pulled off something we’re truly proud of.</p>
<p>Our project, <strong>StudySync</strong>, was built to solve real problems students like us face every day and in the end, it resonated not just with us, but with the judges too.</p>
<p>This blog is my attempt to bottle up the adrenaline, the laughter, the panic-googling, the 3 a.m. bugs, and that heart-stopping moment when they called out our team name as finalists.</p>
<hr />
<h2 id="heading-problem-statement-amp-inspiration">Problem Statement &amp; Inspiration</h2>
<p>The idea for <strong>StudySync</strong> came from a place every student knows too well: the chaos of last-minute exam prep, scattered resources, disconnected study groups, and the constant “Who has the notes?” messages flying around.</p>
<p>We’ve all been there: trying to find the right people to study with, struggling to organize group sessions, and drowning in endless PDFs, links, and screenshots across a dozen platforms. Learning should be collaborative — but too often, it’s fragmented and overwhelming.</p>
<p>That’s where the idea clicked.</p>
<p><strong>What if we built a single platform where students could connect, collaborate, and thrive — all in one place?</strong> With that question in mind, <strong>StudySync</strong> was born. Our goal was to create a centralized ecosystem where students could:</p>
<ul>
<li><p>Form or join <strong>subject-specific study groups</strong></p>
</li>
<li><p><strong>Chat in real-time</strong></p>
</li>
<li><p><strong>Share learning materials</strong></p>
</li>
<li><p><strong>Plan study sessions effortlessly</strong></p>
</li>
</ul>
<p>We didn’t just want to build another chat app. We wanted to build a focused, student-first platform that encourages productivity, reduces isolation, and brings structure to group learning.</p>
<hr />
<h2 id="heading-the-build-how-we-built-studysync">The Build – How We Built StudySync</h2>
<p>Once the idea was locked in, it was time to turn our vision into a real, working product, and to do it in less than 24 hours.</p>
<p>As a team of five beginner developers, we knew the challenge wasn’t just building fast — it was about building smart. We divided responsibilities based on our interests and strengths, and kept communication flowing through quick stand-ups, checklists, and chaotic but effective coordination.</p>
<p>We also made great use of <strong>AI tools like Windsurf and Cursor</strong>, which helped us code faster and debug smarter; they became our silent sixth and seventh teammates.</p>
<p>We went with a <strong>modern and efficient tech stack</strong> that gave us both speed and flexibility:</p>
<ul>
<li><p><strong>Frontend:</strong> React + TypeScript</p>
</li>
<li><p><strong>Build Tool:</strong> Vite (for a lightning-fast dev experience)</p>
</li>
<li><p><strong>UI Components:</strong> shadcn-ui</p>
</li>
<li><p><strong>Styling:</strong> Tailwind CSS</p>
</li>
<li><p><strong>Backend:</strong> Supabase (for authentication, database, and file storage)</p>
</li>
</ul>
<p>Supabase turned out to be a massive time-saver: setting up user auth and managing resource uploads became quick and seamless, letting us focus on the features that really mattered:</p>
<p>✅ Creating &amp; joining study groups</p>
<p>✅ Uploading and viewing study resources</p>
<p>✅ A clean, distraction-free user interface</p>
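<p>To make the “groups + resources” shape concrete, here is a minimal, self-contained TypeScript sketch of the data model behind those features. This is <em>not</em> our actual Supabase schema (the real table and column names differed); it is just the core create/join/upload logic, modeled in memory for illustration:</p>
<pre><code class="lang-typescript">// In-memory sketch of StudySync's core entities. In the real app these
// lived in Supabase tables and a storage bucket; all names here are
// illustrative, not the production schema.
interface StudyGroup {
  id: number;
  name: string;
  subject: string;
  members: string[];   // user ids
  resources: string[]; // storage paths of uploaded files
}

class StudySyncStore {
  private groups = new Map&lt;number, StudyGroup&gt;();
  private nextId = 1;

  // Creating a group automatically enrolls its creator.
  createGroup(name: string, subject: string, creator: string): StudyGroup {
    const group: StudyGroup = {
      id: this.nextId++,
      name,
      subject,
      members: [creator],
      resources: [],
    };
    this.groups.set(group.id, group);
    return group;
  }

  // Joining is idempotent: a user cannot be added twice.
  joinGroup(groupId: number, userId: string): boolean {
    const group = this.groups.get(groupId);
    if (!group || group.members.includes(userId)) return false;
    group.members.push(userId);
    return true;
  }

  // Mirrors a storage upload: files are keyed by group so the UI can
  // list everything a group has shared.
  addResource(groupId: number, filename: string): string | null {
    const group = this.groups.get(groupId);
    if (!group) return null;
    const path = `groups/${groupId}/${filename}`;
    group.resources.push(path);
    return path;
  }
}
</code></pre>
<p>In the hackathon build, Supabase’s client SDK replaced all of this bookkeeping for us: roughly, auth supplied the user id, a database table held the group rows, and the storage API produced the resource paths.</p>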
<p>Of course, no hackathon is complete without bugs. We faced weird UI glitches, broken state updates, and a few "why-is-this-not-rendering" moments. But with some <em>panic-googling, console.log therapy,</em> and lots of team support, we made it through.</p>
<p>At some point, we realized we couldn’t possibly build <strong>everything</strong> we had planned in 24 hours. So, we made a strategic decision to <strong>pivot to an MVP</strong>: focus on the core features, make the frontend beautiful and intuitive, and let the idea shine — even if a few checkboxes remained unticked.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750612812093/9e4de5f6-0bbc-44a2-9c6c-7ee6d5e08fa5.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-pitching-amp-presentation-from-no-hopes-to-finalists">Pitching &amp; Presentation – From No Hopes to Finalists</h2>
<p>After Round 3, the last elimination stage, we were mentally prepared to pack up and cheer for the remaining teams. We thought we had done well, but with so many participants and experienced teams in the pool, we had <strong>zero expectations</strong>.</p>
<p>And then it happened.</p>
<p><strong>StudySync was announced as one of the top finalists in the Open Innovation track.</strong></p>
<p>We froze. A mix of adrenaline, shock, and pure happiness took over, and we had exactly <strong>30 minutes</strong> to pull ourselves together and prepare a pitch. No time to celebrate, no time to panic - just go.</p>
<p>While setting up the demo, a few features that worked minutes ago suddenly bugged out. Classic last-minute chaos. But we didn’t let it show. We shifted gears and decided to lean into what we did best at that point: storytelling, design clarity, and <strong>clear communication</strong>.</p>
<p>Our pitch focused on the <strong>why</strong> behind StudySync — the problem every student faces and how our solution creates a structured, collaborative learning space. We kept it authentic, highlighted our core features, and emphasized our clean UI and thoughtful UX. The MVP wasn’t feature-heavy, but it was solid and well-designed, and we made sure that came through.</p>
<p>The judges were receptive and encouraging. We shared how we tackled challenges, made last-minute decisions, and how five beginners built something that worked, looked good, and solved a real problem.</p>
<p>When the winners were announced, seeing “StudySync” as the <strong>2nd place winner</strong> in the Open Innovation track didn’t just feel like a victory — it felt like a dream that we somehow debugged into reality.</p>
<hr />
<h2 id="heading-lessons-learned-amp-whats-next">Lessons Learned &amp; What’s Next</h2>
<p>TechXcelerate 2025 wasn’t just a hackathon — it was a crash course in creativity, problem-solving, and teamwork under pressure. As first-time participants, we went in with beginner-level skills and walked out with a ton of real-world experience (and a trophy to match).</p>
<p>Here are some of our biggest takeaways:</p>
<ul>
<li><strong>Build the MVP, Not Everything:</strong></li>
</ul>
<p>We had a long list of features, but limited time forced us to focus on the essentials. Learning to <strong>prioritize what matters most</strong> was a game changer.</p>
<ul>
<li><strong>Communication Beats Perfection:</strong></li>
</ul>
<p>When features failed, it was our <strong>presentation, clarity, and storytelling</strong> that carried us. Being able to explain your "why" is just as important as showing your "what."</p>
<ul>
<li><strong>Teamwork Makes It Happen:</strong></li>
</ul>
<p>Working with a team of passionate learners made everything smoother - from debugging late-night errors to boosting each other’s morale in crunch time.</p>
<ul>
<li><strong>Adaptability is Key:</strong></li>
</ul>
<p>We learned to pivot fast, simplify our architecture, and even redesign parts of the UI mid-way. Being flexible saved our project more than once.</p>
<ul>
<li><strong>AI Can Accelerate You, Not Replace You:</strong></li>
</ul>
<p>Tools like Cursor and Windsurf helped us speed up development, but they were just that — tools. Our <strong>logic, creativity, and decisions</strong> still made the difference.</p>
<hr />
<h2 id="heading-whats-next">What’s Next?</h2>
<p>This is just the beginning for <strong>StudySync</strong>. Our hackathon version was the MVP; now we’re planning to:</p>
<ul>
<li><p>Add more collaboration features like <strong>group calendars and shared notes</strong></p>
</li>
<li><p>Improve the real-time <strong>chat experience with notifications and threads</strong></p>
</li>
<li><p>Launch a public <strong>beta version</strong> with student feedback</p>
</li>
<li><p>Possibly open-source the project and build in public</p>
</li>
</ul>
<p>We also want to keep participating in more hackathons, learning as we go, and building tools that solve real-world problems. If there’s one thing TechXcelerate taught us, it’s this:</p>
<p><strong>You don’t have to be an expert to build something impactful, just be willing to start.</strong></p>
<hr />
<h2 id="heading-final-reflections-amp-gratitude">Final Reflections &amp; Gratitude</h2>
<p>Looking back, <strong>TechXcelerate 2025</strong> wasn’t just a hackathon — it was a reminder that taking a leap, even with limited experience, can lead to something extraordinary. From ideation to MVP, from bug-fixes at 3 AM to presenting in front of a jury panel, every moment was intense, real, and unforgettable.</p>
<p>Winning <strong>2nd place in the Open Innovation track</strong> as first-time participants proved to us that it’s not always about having the most experience or the perfect product; it’s about building with intent, solving real problems, and showing up with passion and persistence.</p>
<p>We’re incredibly grateful to the organizers at <strong>CodeBeat</strong>, <strong>BharatVersity</strong>, and <strong>BITS Pilani Hyderabad Campus</strong> for hosting such a student-focused event. A big shoutout to the volunteers and judges who made the experience even more valuable with their insights and support.</p>
<p>Most importantly, thank you to my amazing team — the minds behind <strong>StudySync</strong>. We started as a bunch of curious learners and walked out with not just a win, but a project we genuinely believe in.</p>
<p>And this is just the beginning.</p>
<p>If you’d like to connect, collaborate, or try out <strong>StudySync</strong>, feel free to reach out!</p>
<hr />
<h2 id="heading-lets-connect">Let’s Connect!</h2>
<p>🔗 <strong>Try out StudySync</strong> (Coming Soon!)</p>
<p>📬 Got ideas or feedback? DM me or email at <a target="_blank" href="mailto:kolasanivenkat2@gmail.com">kolasanivenkat2@gmail.com</a></p>
<p>💼 Let’s connect on <a target="_blank" href="https://www.linkedin.com/in/kolasani-venkat/">LinkedIn</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750613202897/e01c747e-8a8c-4857-8995-c0eb31952fba.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item></channel></rss>