<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sarthak Malhotra</title>
    <description>The latest articles on Forem by Sarthak Malhotra (@sarthakwer).</description>
    <link>https://forem.com/sarthakwer</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2922056%2F4925db54-55d0-449e-b897-702fc5018f9e.jpg</url>
      <title>Forem: Sarthak Malhotra</title>
      <link>https://forem.com/sarthakwer</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sarthakwer"/>
    <language>en</language>
    <item>
      <title>Hugging Face is looking for reasoning datasets beyond math, science and coding</title>
      <dc:creator>Sarthak Malhotra</dc:creator>
      <pubDate>Wed, 16 Apr 2025 16:21:12 +0000</pubDate>
      <link>https://forem.com/sarthakwer/hugging-face-is-looking-for-reasoning-datasets-beyond-math-science-and-coding-3e9c</link>
      <guid>https://forem.com/sarthakwer/hugging-face-is-looking-for-reasoning-datasets-beyond-math-science-and-coding-3e9c</guid>
      <description>&lt;h1&gt;
  
  
  Reasoning Datasets Competition
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Bespoke Labs, Hugging Face, and Together.ai are launching a competition to find the most innovative reasoning datasets. Create a great proof-of-concept reasoning dataset and win prizes to help you scale your work!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Deepseek moment for datasets
&lt;/h2&gt;

&lt;p&gt;Since the launch of DeepSeek-R1 in January 2025, we've seen remarkable growth in reasoning-focused datasets on the Hugging Face Hub, such as &lt;a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k" rel="noopener noreferrer"&gt;OpenThoughts-114k&lt;/a&gt;, &lt;a href="https://huggingface.co/datasets/nvidia/OpenCodeReasoning" rel="noopener noreferrer"&gt;OpenCodeReasoning&lt;/a&gt;, and &lt;a href="https://huggingface.co/datasets/open-r1/codeforces-cots" rel="noopener noreferrer"&gt;codeforces-cots&lt;/a&gt;. These primarily cover math, coding, and science: domains with clearly verifiable answers.&lt;/p&gt;

&lt;p&gt;Now, reasoning is expanding into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/papers/2502.08127" rel="noopener noreferrer"&gt;Financial analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT" rel="noopener noreferrer"&gt;Medical reasoning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/datasets/virtuoussy/Multi-subject-RLVR" rel="noopener noreferrer"&gt;Multi-domain reasoning&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OpenThoughts-114k alone has helped train over 230 models! We believe future breakthroughs won’t come from architecture alone, but from &lt;strong&gt;better data&lt;/strong&gt;: datasets that reflect real-world complexity, uncertainty, and richness.&lt;/p&gt;

&lt;p&gt;To accelerate progress, we're launching a &lt;strong&gt;Reasoning Dataset Competition&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtlxz8itxxeq9vhkmxz9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtlxz8itxxeq9vhkmxz9.png" alt="image/png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How the competition works
&lt;/h2&gt;

&lt;p&gt;The goal: create impactful &lt;strong&gt;proof-of-concept reasoning datasets&lt;/strong&gt; and share them on the Hugging Face Hub. The best submissions will win prizes to help scale these datasets and train models using them.&lt;/p&gt;

&lt;h2&gt;
  
  
  🗓️ Timeline
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Launch Date&lt;/strong&gt;: April 9, 2025
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Submission Deadline&lt;/strong&gt;: May 1, 2025 (11:59 PM PT)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Winners Announced&lt;/strong&gt;: May 5, 2025
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🚀 Submission Instructions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a dataset with at least &lt;strong&gt;100 examples&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Upload to the &lt;strong&gt;Hugging Face Hub&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Tag it with &lt;code&gt;reasoning-datasets-competition&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We'll evaluate 100 examples per submission (or all if you submit exactly 100).&lt;/p&gt;
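&lt;p&gt;As a minimal sketch (not an official submission script), the three steps above might look like this with the &lt;code&gt;datasets&lt;/code&gt; and &lt;code&gt;huggingface_hub&lt;/code&gt; libraries. The repo id, field names, and &lt;code&gt;submit&lt;/code&gt; helper are illustrative placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datasets import Dataset
from huggingface_hub import DatasetCard, DatasetCardData

# Step 1: build a toy reasoning dataset with (at least) 100 examples.
# The field names here are illustrative, not mandated by the competition.
rows = [
    {"prompt": f"Toy reasoning question #{i}",
     "reasoning": "Step-by-step chain of thought...",
     "answer": "Final answer"}
    for i in range(100)
]
ds = Dataset.from_list(rows)

# Steps 2 and 3: upload to the Hub, then tag it via the dataset card.
def submit(repo_id):
    ds.push_to_hub(repo_id)  # requires a logged-in HF token
    card = DatasetCard.from_template(
        DatasetCardData(tags=["reasoning-datasets-competition"])
    )
    card.push_to_hub(repo_id, repo_type="dataset")

# submit("your-username/my-reasoning-dataset")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Remember that the dataset card pushed this way still needs the documentation sections listed below filled in by hand.&lt;/p&gt;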

&lt;h3&gt;
  
  
  ✅ Submission Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Size&lt;/strong&gt;: Minimum 100 examples
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: Include a dataset card with:

&lt;ul&gt;
&lt;li&gt;Purpose and scope
&lt;/li&gt;
&lt;li&gt;Dataset creation method
&lt;/li&gt;
&lt;li&gt;Example uses
&lt;/li&gt;
&lt;li&gt;Limitations or biases
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Viewer Preview&lt;/strong&gt;: Must work on the HF viewer
&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Tag&lt;/strong&gt;: &lt;code&gt;reasoning-datasets-competition&lt;/code&gt;
&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;License&lt;/strong&gt;: Clear licensing info for research use
&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 While these are the minimum requirements, we encourage you to go beyond them! Think of your dataset card as your pitch. It’s your chance to showcase what makes your dataset stand out and to help the judges see why you deserve a high score across our evaluation criteria: Approach, Domain, and Quality.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🔍 What We're Looking For
&lt;/h2&gt;

&lt;h3&gt;
  
  
  New Domains
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Legal reasoning: Judgments based on laws and precedents
&lt;/li&gt;
&lt;li&gt;Financial analysis: Evaluation of investments
&lt;/li&gt;
&lt;li&gt;Literary interpretation: Symbolism and theme analysis
&lt;/li&gt;
&lt;li&gt;Ethics/philosophy: Moral reasoning and frameworks
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Novel Tasks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Structured data extraction from unstructured text &lt;a href="https://huggingface.co/blog/Ihor/replicating-deepseek-r1-for-information-extraction" rel="noopener noreferrer"&gt;example&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Zero-shot classification: Datasets focused on training smaller models to be more effective zero-shot classifiers through reasoning&lt;/li&gt;
&lt;li&gt;Search improvement: Reasoning datasets designed to enhance search relevance and accuracy&lt;/li&gt;
&lt;li&gt;Diagrammatic reasoning: Datasets that train models to interpret, analyze, and reason about visual representations like flowcharts, system diagrams, or decision trees&lt;/li&gt;
&lt;li&gt;Constraint satisfaction problems: Collections teaching models to reason through complex scheduling, resource allocation, or optimization scenarios with multiple interdependent constraints&lt;/li&gt;
&lt;li&gt;Evidence evaluation: Datasets demonstrating how to assess source credibility and weigh conflicting information&lt;/li&gt;
&lt;li&gt;Counterfactual reasoning: Collections developing "what if" thinking by systematically altering variables and exploring potential outcomes &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Reasoning Distillation
&lt;/h3&gt;

&lt;p&gt;Inspired by the &lt;a href="https://huggingface.co/papers/2501.12948" rel="noopener noreferrer"&gt;DeepSeek paper&lt;/a&gt;: distill reasoning from large to smaller models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Support a Reasoning Ecosystem
&lt;/h3&gt;

&lt;p&gt;Beyond direct reasoning datasets, we're interested in collections that help build a robust reasoning ecosystem. This could include:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reasoning classification: Datasets for training models to classify or annotate different types of reasoning&lt;/li&gt;
&lt;li&gt;Error detection: Datasets for training models to identify flaws in reasoning processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This area is one where you can potentially make a big impact without needing a lot of resources to get started. &lt;/p&gt;
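&lt;p&gt;To make this concrete, here is one illustrative (in no way prescribed) shape for an error-detection record: a flawed reasoning trace paired with labels for where and why it fails:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical schema for a single error-detection example.
record = {
    "question": "A train travels 60 km in 1.5 hours. What is its average speed?",
    "reasoning_trace": [
        "Average speed is distance divided by time.",
        "60 / 1.5 = 45, so the speed is 45 km/h.",  # flawed step
    ],
    "error_step": 1,             # index of the flawed step
    "error_type": "arithmetic",  # 60 / 1.5 is actually 40
    "correction": "60 / 1.5 = 40, so the speed is 40 km/h.",
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;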

&lt;h2&gt;
  
  
  🧪 Evaluation Criteria
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;What It Covers&lt;/th&gt;
&lt;th&gt;What We Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Approach&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Dataset creation method: tools, prompts, pipelines&lt;/td&gt;
&lt;td&gt;Novelty and scalability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Domain&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Domain or skill covered&lt;/td&gt;
&lt;td&gt;Real-world relevance and coverage of underexplored fields&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Quality&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clarity, diversity, and structure of examples&lt;/td&gt;
&lt;td&gt;Reasoning-rich prompts and minimal hallucination&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  🏆 Prizes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🥇 First Place
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;$1,500 API credits from Together.ai
&lt;/li&gt;
&lt;li&gt;$1,500 Amazon (or country-specific equivalent) gift card
&lt;/li&gt;
&lt;li&gt;Hugging Face Pro subscription + compute credits&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🥈 1st &amp;amp; 2nd Runners-Up
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;$500 Amazon (or country-specific equivalent) gift card
&lt;/li&gt;
&lt;li&gt;HF Pro subscription + compute credits&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🌟 Spotlight Awards
&lt;/h3&gt;

&lt;p&gt;The four most innovative uses of &lt;a href="https://github.com/bespokelabsai/curator" rel="noopener noreferrer"&gt;Curator&lt;/a&gt; each receive a $250 Amazon (or country-specific equivalent) gift card.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎁 All Participants
&lt;/h3&gt;

&lt;p&gt;$50 in Together.ai API credits (see the FAQ below for how to claim them)&lt;/p&gt;

&lt;h2&gt;
  
  
  📝 Signup Instructions
&lt;/h2&gt;

&lt;p&gt;Step 1: &lt;a href="https://forms.gle/gwvvvCKfmNJxVZDR6" rel="noopener noreferrer"&gt;Register here&lt;/a&gt; to receive Together.ai credits and updates on the competition&lt;/p&gt;

&lt;p&gt;Step 2: Join the &lt;a href="https://huggingface.co/reasoning-datasets-competition" rel="noopener noreferrer"&gt;discussion thread&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Step 3: Join &lt;code&gt;#reasoning-dataset-competition&lt;/code&gt; on &lt;a href="https://discord.com/invite/KqpXvpzVBS" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧰 Helpful Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://colab.research.google.com/drive/1YoA23-cBcWpaSErULzBI2bo2LPGo37GQ" rel="noopener noreferrer"&gt;GPU-based distillation example (Colab)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://colab.research.google.com/drive/1Zfl3g7POsqqYQqkzXdyhYRSAymLhZugn?usp=sharing" rel="noopener noreferrer"&gt;API-only distillation example (Colab)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/open-thoughts/open-thoughts/tree/main/open_thoughts" rel="noopener noreferrer"&gt;OpenThoughts-114k generation code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.bespokelabs.ai/bespoke-curator/how-to-guides/getting-claude-3.7-sonnets-reasoning-traces" rel="noopener noreferrer"&gt;Claude 3.7 Sonnet traces&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.bespokelabs.ai/bespoke-curator/how-to-guides/using-kluster.ai-for-batch-inference" rel="noopener noreferrer"&gt;Batch inference with Kluster.ai&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ❓ FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: Can I submit multiple datasets?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: Yes!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Can I collaborate with others?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: Absolutely. Teams are welcome.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How do I claim Together AI credits?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: Fill out &lt;a href="https://www.together.ai/forms/hackathon" rel="noopener noreferrer"&gt;this questionnaire&lt;/a&gt; on Together's website, entering 'Reasoning datasets competition' as the hackathon name (question 6). Here's a &lt;a href="https://cdn-uploads.huggingface.co/production/uploads/679813734a10be7109c56a2d/kmJiMuVxjE-vx9Zl1e5Ai.qt" rel="noopener noreferrer"&gt;video walkthrough&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Do I have to use Curator?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: No. Use any tools or methods you like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Do I have to use LLMs or synthetic data?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: Not at all. All methodologies are welcome.&lt;/p&gt;

&lt;p&gt;Got more questions? Drop by the &lt;a href="https://huggingface.co/reasoning-datasets-competition" rel="noopener noreferrer"&gt;HF discussion thread&lt;/a&gt; or chat on &lt;a href="https://discord.com/invite/KqpXvpzVBS" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;!&lt;/p&gt;




</description>
      <category>ai</category>
      <category>opensource</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>1000 stars on GitHub feels like a Million likes on any other platform</title>
      <dc:creator>Sarthak Malhotra</dc:creator>
      <pubDate>Fri, 21 Mar 2025 20:03:33 +0000</pubDate>
      <link>https://forem.com/sarthakwer/1000-stars-on-github-feels-like-a-million-likes-on-any-other-platform-4ie1</link>
      <guid>https://forem.com/sarthakwer/1000-stars-on-github-feels-like-a-million-likes-on-any-other-platform-4ie1</guid>
      <description>&lt;p&gt;Check it out: (&lt;a href="https://github.com/bespokelabsai/curator" rel="noopener noreferrer"&gt;https://github.com/bespokelabsai/curator&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57x3qc0p1h2fm642aj19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57x3qc0p1h2fm642aj19.png" alt="Image description" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Absolutely love the open source community!!&lt;/p&gt;

</description>
      <category>github</category>
      <category>opensource</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>Gemini 50% cheaper with Batch API in Curator</title>
      <dc:creator>Sarthak Malhotra</dc:creator>
      <pubDate>Fri, 14 Mar 2025 16:36:00 +0000</pubDate>
      <link>https://forem.com/sarthakwer/gemini-50-cheaper-with-batch-api-in-curator-3k3h</link>
      <guid>https://forem.com/sarthakwer/gemini-50-cheaper-with-batch-api-in-curator-3k3h</guid>
      <description>&lt;p&gt;Generating synthetic data at scale can be expensive. So, several LLM API providers, including Google, offer 50%-70% discounts through batch mode, which processes large requests asynchronously. However, batch API with Gemini is notoriously tricky due to many steps involved and scattered documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The challenge with Gemini batch API
&lt;/h2&gt;

&lt;p&gt;Let’s go over the steps required for a simple Gemini batch processing (when not using Curator):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create request files in JSONL format (must follow Gemini’s request structure!).&lt;/li&gt;
&lt;li&gt;Upload this file to a GCP bucket and get the cloud storage URL (and keep track of this).&lt;/li&gt;
&lt;li&gt;Create a batch prediction job on Vertex AI with the same cloud storage URL.&lt;/li&gt;
&lt;li&gt;Split any job exceeding 150k requests, repeating steps 1 and 2 for each batch.&lt;/li&gt;
&lt;li&gt;Poll status manually from Vertex AI using batch IDs (this gets complicated when multiple batch files are uploaded).&lt;/li&gt;
&lt;li&gt;Persist responses manually for basic caching.&lt;/li&gt;
&lt;/ul&gt;
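
&lt;p&gt;For a sense of the friction, step 1 alone might look like the sketch below. The request structure shown is our reading of the Vertex AI batch-prediction JSONL format for Gemini; double-check it against the current Vertex docs before uploading to your bucket:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

prompts = [
    "What is the capital of Montana?",
    "Who wrote the novel 'Pride and Prejudice'?",
]

# One JSON object per line, each wrapping a Gemini-style request.
with open("gemini_batch_requests.jsonl", "w") as f:
    for p in prompts:
        request = {"request": {"contents": [
            {"role": "user", "parts": [{"text": p}]}
        ]}}
        f.write(json.dumps(request) + "\n")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;And that is only the first step; the upload, job creation, splitting, and polling still remain.&lt;/p&gt;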

&lt;p&gt;These steps add a lot of friction, causing many users to stick to online processing and miss out on significant cost savings. Curator solves this by making Gemini’s batch APIs easy to use!&lt;/p&gt;

&lt;h2&gt;
  
  
  Curator Gemini Batch mode: 50% cheaper and infinitely easier
&lt;/h2&gt;

&lt;p&gt;No manual polling, no file management, just cost-efficient batch processing in a few lines of code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

from bespokelabs import curator

os.environ["HOSTED_CURATOR_VIEWER"]="1"
os.environ["GOOGLE_CLOUD_PROJECT"] = "&amp;lt;project-id&amp;gt;"
os.environ["GEMINI_BUCKET_NAME"] = "&amp;lt;bucket-name&amp;gt;"
os.environ["GOOGLE_CLOUD_REGION "] = "us-central1"  # us-central1 is default

llm = curator.LLM(model_name="gemini-1.5-flash-001", backend="gemini", batch=True)
questions = [
    {"prompt": "What is the capital of Montana?"},
    {"prompt": "Who wrote the novel 'Pride and Prejudice'?"},
    {"prompt": "What is the largest planet in our solar system?"},
    {"prompt": "In what year did World War II end?"},
    {"prompt": "What is the chemical symbol for gold?"},
]
ds = llm(questions)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read more about Curator's batch processing support for OpenAI, Anthropic, and &lt;a href="https://github.com/bespokelabsai/curator/tree/main/examples/providers" rel="noopener noreferrer"&gt;other providers here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Please give Curator feedback and show your support by starring it on &lt;a href="https://github.com/bespokelabsai/curator" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqai3eg8l259xcpktcmu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqai3eg8l259xcpktcmu.png" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gemini</category>
      <category>opensource</category>
      <category>programming</category>
      <category>github</category>
    </item>
  </channel>
</rss>
