Matheus Bernardes Spilari

Asynchronous File Upload in Java with Spring

Uploading files to cloud storage like AWS S3 is a common task in modern applications. At first, you might handle this in a simple, synchronous way: one file at a time, blocking the thread. But this approach doesn't scale well.

In this post, we'll explore how to use asynchronous uploads in Java with Spring Boot, why it's a better approach, the trade-offs you need to be aware of, and how to make it more resilient using Resilience4j with retry and exponential backoff.


🐢 The Problem with Synchronous Uploads

Let’s start with a basic implementation using a for loop:

// Each file waits for the previous upload to finish
for (MultipartFile file : files) {
    uploadToS3(file);
}

This is easy to write, but uploads are sequential and slow — especially if you're uploading many files or large ones. Every file waits for the previous one to finish.

This leads to:

  • Higher latency
  • Poor user experience
  • Underutilization of server resources

⚡ The Power of Asynchronous Uploads

By uploading files concurrently, several uploads can be in flight at the same time without blocking the calling thread. This is especially useful when:

  • Uploading to remote services like S3 (network-bound operations)
  • Handling multiple large files
  • Reducing total execution time

Benefits of Asynchronous Uploads

  • 🔄 Parallel execution: Uploads happen simultaneously, drastically reducing total upload time.
  • 🧵 Non-blocking: Frees up the main thread for other tasks.
  • 🚀 Scalability: Ideal for handling large batches of uploads.

✅ Using CompletableFuture for Concurrent Uploads

Java’s CompletableFuture combined with a custom ExecutorService provides a clean and scalable way to handle concurrent tasks.

Here’s a simple implementation:

import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;

import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

@Service
public class S3UploadService {

    private final ExecutorService executor = Executors.newFixedThreadPool(5); // Adjust pool size as needed

    public void uploadMultipleFiles(List<MultipartFile> files) {
        List<CompletableFuture<Void>> futures = files.stream()
            .map(file -> CompletableFuture.runAsync(() -> uploadToS3(file), executor))
            .collect(Collectors.toList());

        // Wait for all uploads to complete
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    }

    private void uploadToS3(MultipartFile file) {
        // Perform actual upload logic using S3Client
        System.out.println("Uploading: " + file.getOriginalFilename());
    }
}

🛠️ Optional: Use Spring’s @Async

For even cleaner integration, Spring allows asynchronous methods via the @Async annotation.

@Async
public CompletableFuture<Void> uploadAsync(MultipartFile file) {
    uploadToS3(file); // runs on Spring's async executor, not the caller's thread
    return CompletableFuture.completedFuture(null);
}

Just remember to enable async in your config:

@EnableAsync
@Configuration
public class AsyncConfig {}
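
Keep in mind that @Async only takes effect when the method is called from another Spring bean (through the proxy), and that by default Spring picks a general-purpose executor for these methods. If you want to bound how many uploads run at once, you can declare your own task executor and reference it with @Async("uploadExecutor") on the method above. A minimal sketch (the bean name and pool sizes are just illustrative; tune them for your workload):

import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@EnableAsync
@Configuration
public class AsyncConfig {

    @Bean(name = "uploadExecutor")
    public Executor uploadExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);      // threads kept ready for uploads
        executor.setMaxPoolSize(10);      // upper bound under load
        executor.setQueueCapacity(100);   // pending uploads before new tasks are rejected
        executor.setThreadNamePrefix("upload-");
        executor.initialize();
        return executor;
    }
}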

🔁 Making It Robust: Retry with Backoff

To handle temporary failures (like network hiccups or AWS rate limits), you can implement retry logic with exponential backoff — increasing the delay between attempts to avoid overwhelming the service.
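
Hand-rolled, that logic looks roughly like the sketch below (the attempt count and base delay are arbitrary, and a real implementation would also decide which exceptions are worth retrying):

// Naive retry with exponential backoff: wait 1s, then 2s, then 4s between attempts
private void uploadWithRetry(MultipartFile file) throws InterruptedException {
    int maxAttempts = 3;
    long delayMillis = 1000;

    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            uploadToS3(file);
            return; // success, stop retrying
        } catch (RuntimeException e) {
            if (attempt == maxAttempts) {
                throw e; // out of attempts, propagate the failure
            }
            Thread.sleep(delayMillis);
            delayMillis *= 2; // double the wait before the next try
        }
    }
}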

A better way to manage this in Spring Boot is using Resilience4j.

✳️ Add Resilience4j to your pom.xml

<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-spring-boot3</artifactId>
</dependency>
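
Note that Spring Boot's dependency management doesn't cover Resilience4j, so you'll need to add a <version> here (or import the resilience4j-bom). The @Retry annotation is applied through Spring AOP, so spring-boot-starter-aop should also be on the classpath if it isn't already:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>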

⚙️ Configure Retry with Backoff in application.yml

resilience4j.retry:
  instances:
    uploadRetry:
      max-attempts: 3
      wait-duration: 2s
      enable-exponential-backoff: true
      exponential-backoff-multiplier: 2

🧩 Create a Service with Resilience Retry

import io.github.resilience4j.retry.annotation.Retry;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

@Service
public class UploadService {

    @Retry(name = "uploadRetry")
    public void uploadToS3(MultipartFile file) {
        // Your upload logic here
        System.out.println("Uploading: " + file.getOriginalFilename());
        // Simulate error for testing
        if (Math.random() < 0.2) {
            throw new RuntimeException("Simulated failure");
        }
    }
}

🧵 Use it Asynchronously

@Autowired
private UploadService uploadService;

@PostMapping("/upload")
public ResponseEntity<?> upload(@RequestParam("files") List<MultipartFile> files) {
    // A fresh pool per request keeps the example simple; in production, reuse a shared executor
    ExecutorService executor = Executors.newFixedThreadPool(5);

    // uploadService is the injected Spring proxy, so @Retry applies to each call
    List<CompletableFuture<Void>> futures = files.stream()
        .map(file -> CompletableFuture.runAsync(() -> uploadService.uploadToS3(file), executor))
        .toList();

    // Block until every upload (including its retries) has finished
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    executor.shutdown();

    return ResponseEntity.ok("Upload complete!");
}

🤔 Why Is This Better?

| Feature              | Synchronous (for loop) | Asynchronous (CompletableFuture) |
| -------------------- | ---------------------- | -------------------------------- |
| Total time           | Longer                 | Significantly faster             |
| Resource utilization | Low                    | High (more parallelism)          |
| User experience      | Laggy                  | Responsive and smooth            |
| Code flexibility     | Limited                | Easy to scale and customize      |

⚠️ The Risks of Concurrency

Although we're using concurrency to speed up uploads by executing tasks independently, it's important to note that concurrency isn't the same as parallelism. While concurrency deals with managing multiple tasks at once, parallelism actually runs them simultaneously. Depending on your thread pool and hardware, concurrent uploads may or may not run in parallel — but the risks are similar either way.

  1. Too many threads: Uploading dozens of files at once can create too many concurrent threads, exhausting system resources.
  2. Race conditions: If files depend on each other, concurrent execution might break the logic.
  3. Error handling is harder: one failed upload can go unnoticed unless you inspect each future's result (see the sketch after this list).
  4. Network or API throttling: AWS S3 has rate limits. Uploading too much, too fast, may lead to SlowDown or 503 errors.
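
One way to keep failures visible is to attach a handler to each future instead of relying on allOf alone. A minimal sketch, meant to live in a service like S3UploadService above (the method name and the idea of returning the failed file names are just for illustration; it assumes java.util.Collections and ArrayList imports alongside the ones shown earlier):

// Collect the names of files whose upload failed instead of letting them disappear
public List<String> uploadAndCollectFailures(List<MultipartFile> files, ExecutorService executor) {
    List<String> failedFiles = Collections.synchronizedList(new ArrayList<>());

    List<CompletableFuture<Void>> futures = files.stream()
        .map(file -> CompletableFuture
            .runAsync(() -> uploadToS3(file), executor)
            .exceptionally(ex -> {
                failedFiles.add(file.getOriginalFilename()); // record the failure instead of rethrowing
                return null;
            }))
        .toList();

    // Safe to join: exceptions were already handled per future
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    return failedFiles;
}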

💡 Bonus Tip

If you're planning to run this setup locally with services like LocalStack, and eventually move to the cloud with Terraform, remember:

Creating AWS resources programmatically (like buckets or tables) is fine for learning. In real-world apps, you should use Infrastructure as Code (IaC) tools like Terraform to provision your cloud resources, and just connect your application to them.


🧠 Final Thoughts

Asynchronous uploads are a powerful technique for improving performance and scalability, especially when dealing with cloud services. However, with this power comes the need for proper error handling, resource management, and backoff strategies to avoid overwhelming your system or external APIs.

Using Resilience4j for retry logic makes your code cleaner, more maintainable, and production-ready.


📍 Reference

💻 Project Repository

👋 Talk to me
