Recently, I was asked to add a button on a frontend dashboard that would allow users to bulk download Excel reports for multiple users. Having never implemented this before, my initial approach was to download all the files from S3 on the backend, convert them to base64 strings, send them as an array to the frontend, and then convert those base64 strings back into files for download on the user's machine.
It seemed simple enough, but it doesn't scale at all.
Think about it:
- What if there are thousands of users whose reports need to be downloaded?
- What if each report is large?
- What if multiple users initiate the download at the same time?
This approach would quickly become a bottleneck, putting unnecessary load on the backend server, increasing memory usage, and ultimately hurting performance.
That’s when I discovered a better way: streaming file downloads directly from S3 to the frontend.
In this article, I’ll walk you through how to implement this properly using Node.js on the backend and React.js on the frontend, so users can download large files efficiently without choking your server.
How Does It Work?
This is the theory part; feel free to skip ahead to the implementation if you're already familiar with how streaming works.
Here’s a simplified overview of the process:
- The user clicks a button on the frontend to initiate a bulk download.
- A request is sent from the frontend to the backend API endpoint.
- The backend connects to AWS S3 and requests the file(s), creating a ReadableStream from the S3 response.
- This stream is piped directly to the HTTP response, which sends the file data in chunks to the frontend as it arrives.
- On the frontend, the browser receives the data chunk by chunk and assembles it into a downloadable file, so the server never has to hold the entire file in memory.
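At its core, that whole pipeline is a single pipe call. Here's a minimal sketch for one file, assuming the aws-sdk v2 client used later in this article and an Express-style handler; the key and filename are placeholders:
import AWS from 'aws-sdk';
const streamSingleFile = (req, res) => {
const s3 = new AWS.S3();
// Tell the browser to treat the response as a file download
res.setHeader('Content-Type', 'application/octet-stream');
res.setHeader('Content-Disposition', 'attachment; filename=report.xlsx');
s3.getObject({ Bucket: process.env.S3_BUCKET_NAME, Key: 'reports/report.xlsx' })
.createReadStream()
.on('error', (err) => res.destroy(err)) // abort the response if S3 fails
.pipe(res); // chunks flow from S3 straight to the client
};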
Why Is This Better?
Reduced Server Load: The backend doesn't store or buffer the entire file. It simply acts as a proxy, streaming the data directly from S3 to the client. This means far less memory and CPU usage on your server.
Handles Large Files Gracefully: Whether you're downloading a 1MB file or a 1GB report, the process is the same: data flows in chunks and never overwhelms the server. The only limiting factors are the user's network speed and local storage.
Scales with Multiple Users: Because the backend never buffers whole files, each concurrent download holds only a few small in-flight chunks in server memory. The heavy lifting of assembling and storing the file happens on each client's machine.
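If it helps to see the difference in code, compare the buffered approach with the streamed one. This is a sketch using the same aws-sdk v2 API as the implementation below; the s3, bucket, key, and res parameters are placeholders:
const bufferedDownload = async (s3, bucket, key, res) => {
// Buffered: the entire object is read into server memory before responding.
// A 1GB report means roughly 1GB of RAM for this one download.
const { Body } = await s3.getObject({ Bucket: bucket, Key: key }).promise();
res.send(Body);
};
const streamedDownload = (s3, bucket, key, res) => {
// Streamed: only small chunks are in memory at any moment,
// no matter how large the file is.
s3.getObject({ Bucket: bucket, Key: key }).createReadStream().pipe(res);
};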
Backend Implementation
On the backend, we create a ZIP archive of all the requested files from S3 and stream it directly in the response without storing anything temporarily on the server.
import archiver from 'archiver';
import { getAwsClient } from './aws.js'; // Add your own import
const downloadFiles = async (req, res) => {
const { files } = req.body;
const AWS = getAwsClient();
const s3 = new AWS.S3();
res.setHeader('Content-Type', 'application/zip');
res.setHeader('Content-Disposition', 'attachment; filename=files.zip');
const archive = archiver('zip', {
zlib: { level: 9 }, // Best compression
});
archive.on('error', (err) => {
console.error('Archive error:', err);
// Once streaming has started we can no longer send a status code,
// so abort the response instead
if (!res.headersSent) {
res.status(500).send('Internal Server Error');
} else {
res.destroy(err);
}
});
archive.pipe(res); // Connect zip stream to the HTTP response
for (const key of files) {
const s3Stream = s3
.getObject({
Bucket: process.env.S3_BUCKET_NAME, // Make sure this is defined
Key: key,
})
.createReadStream();
archive.append(s3Stream, { name: key }); // Name the zip entry after the S3 key
}
await archive.finalize(); // Triggers the streaming download
};
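To expose this handler, mount it on a route. Here's a minimal sketch, assuming an Express app and that downloadFiles is exported from the module above; the file path and port are placeholders:
import express from 'express';
import { downloadFiles } from './downloadFiles.js'; // hypothetical path
const app = express();
app.use(express.json()); // parses the { files: [...] } JSON body
// The frontend POSTs the list of S3 keys to this endpoint
app.post('/bulk-download', downloadFiles);
app.listen(3000, () => console.log('Server listening on port 3000'));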
- We use archiver to zip files on the fly.
- Files are streamed from S3 directly, not buffered or saved temporarily.
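One caveat: if a key doesn't exist in S3, the error is emitted on that individual read stream, not on the archive, and the response can be left hanging. One way to guard against this is a small addition inside the loop, before archive.append:
// Forward stream-level failures (e.g. a missing key) to the archive's
// error handler registered earlier, instead of leaving the download hanging.
s3Stream.on('error', (err) => {
console.error(`Failed to stream ${key} from S3:`, err);
archive.emit('error', err);
});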
Frontend Implementation
On the frontend, we send a request to the backend to start the bulk download. The backend streams the zipped file, which we receive as a Blob and trigger the browser to download it directly.
import request from './axios/request'; // Replace with your actual axios instance
export const downloadAllReports = async () => {
try {
// Send POST request to initiate bulk download
const res = await request.post(
'/bulk-download',
{ files: ['file1.pdf', 'file2.jpg'] }, // List your files here
{
responseType: 'blob', // Important: tells axios to handle response as Blob
}
);
// Create a Blob from the response data
const blob = new Blob([res.data], { type: 'application/zip' });
// Create a temporary URL for the Blob object
const url = URL.createObjectURL(blob);
// Create a hidden anchor element and trigger the download
const a = document.createElement('a');
a.href = url;
a.download = 'report.zip'; // Filename for the downloaded file
document.body.appendChild(a);
a.click();
// Clean up by removing the anchor and revoking the object URL
a.remove();
URL.revokeObjectURL(url);
} catch (err) {
console.error('Download failed:', err);
}
};
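Wiring this into the UI is then straightforward. Here's a sketch with a hypothetical DownloadButton component that disables itself while the download is in flight; adjust the import path to wherever you keep the function above:
import React, { useState } from 'react';
import { downloadAllReports } from './downloadAllReports'; // hypothetical path
export const DownloadButton = () => {
const [downloading, setDownloading] = useState(false);
const handleClick = async () => {
setDownloading(true);
await downloadAllReports();
setDownloading(false);
};
return (
<button onClick={handleClick} disabled={downloading}>
{downloading ? 'Downloading...' : 'Download All Reports'}
</button>
);
};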
Conclusion
Streaming file downloads is a powerful technique to handle large or multiple files efficiently without overloading your backend server. By leveraging streams, you can:
- Reduce server memory and CPU usage
- Enable scalable downloads for many users simultaneously
- Provide a smooth download experience even for large files
If you’re building any app that involves bulk file downloads or large reports, consider using streaming to keep your system fast and scalable.
Hi, I'm Samit, a software developer and freelancer who's always on the lookout for exciting, real-world projects to build and contribute to. I love hearing from people, whether it's to collaborate, share ideas, or work together.
If you're looking to hire a passionate developer or even if you just want to say hi, feel free to check out my portfolio and reach out. I'd love to connect!