Forem

Machine Learning

A branch of artificial intelligence (AI) and computer science that focuses on using data and algorithms to imitate the way humans learn, gradually improving accuracy.

Posts

RAG Redefined: Ready-to-Deploy RAG for Organizations at Scale · 1 reaction · 2 comments · 1 min read
Day 1 of 30: Machine Learning · 11 reactions · 6 comments · 2 min read
AI is Not Going to Steal Your Keyboard (Unless You Let Them Write the Code) · 1 reaction · 2 min read
Get Hired Faster: How to use Lyzr-Automata to draft personalised cold emails · 1 reaction · 4 min read
Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies · 5 reactions · 3 min read
SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes · 5 reactions · 4 min read
Rho-1: Not All Tokens Are What You Need · 5 reactions · 4 min read
Vision Transformers Need Registers · 5 reactions · 4 min read
The Expressive Power of Transformers with Chain of Thought · 5 reactions · 5 min read
CodecLM: Aligning Language Models with Tailored Synthetic Data · 6 reactions · 4 min read
RecurrentGemma: Moving Past Transformers for Efficient Open Language Models · 5 reactions · 4 min read
InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD · 5 reactions · 3 min read
MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies · 5 reactions · 3 min read
JetMoE: Reaching Llama2 Performance with 0.1M Dollars · 4 reactions · 4 min read
The Impact of Depth on Compositional Generalization in Transformer Language Models · 5 reactions · 4 min read