<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: mahmoudabbasi</title>
    <description>The latest articles on Forem by mahmoudabbasi (@mahmoudabbasi).</description>
    <link>https://forem.com/mahmoudabbasi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F552486%2Fe88b04bc-2e03-4b54-af1b-30a77e59df3f.jpeg</url>
      <title>Forem: mahmoudabbasi</title>
      <link>https://forem.com/mahmoudabbasi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mahmoudabbasi"/>
    <language>en</language>
    <item>
      <title>Building a Self-Learning Recommender System (Without Needing Netflix Data!)</title>
      <dc:creator>mahmoudabbasi</dc:creator>
      <pubDate>Wed, 24 Sep 2025 10:14:10 +0000</pubDate>
      <link>https://forem.com/mahmoudabbasi/building-a-self-learning-recommender-system-without-needing-netflix-data-550j</link>
      <guid>https://forem.com/mahmoudabbasi/building-a-self-learning-recommender-system-without-needing-netflix-data-550j</guid>
      <description>&lt;p&gt;Many people think you need huge datasets like Netflix or Amazon to build a recommender system. The truth is: you can start small — with a simple model — and let it improve itself over time as users interact with your product.&lt;/p&gt;

&lt;p&gt;In this post, we'll build a &lt;strong&gt;self-learning recommender system&lt;/strong&gt; that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Starts with a pre-trained model (solves the cold-start problem)&lt;/li&gt;
&lt;li&gt;Learns from user interactions over time (incremental learning)&lt;/li&gt;
&lt;li&gt;Updates recommendations to stay relevant&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you first launch a product, you usually have no idea what each user likes. If your recommender starts giving random suggestions, users might leave before the system learns anything.&lt;/p&gt;

&lt;p&gt;To avoid this, we start with a &lt;strong&gt;pre-trained model&lt;/strong&gt; based on your historical sales or click data. This allows your system to give at least reasonable recommendations on day one.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Building the Initial Model&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's assume you have a small e-commerce store selling shoes, t-shirts, and accessories. You have some historical purchase data like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd


# Historical data
data = pd.DataFrame({
"userId": [1, 1, 2, 2, 3],
"productId": ["T-Shirt", "Shoes", "Shoes", "Jacket", "Hat"],
"category": ["Clothes", "Shoes", "Shoes", "Clothes", "Accessories"]
})


# Step 1: Find most popular categories
category_popularity = data.groupby("category")["productId"].count().sort_values(ascending=False)


def initial_recommendations(user_id):
return list(category_popularity.index) # Recommend popular categories first


print(initial_recommendations(1))
# Output: ['Shoes', 'Clothes', 'Accessories']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With just a few lines of code, you have a model that suggests the most popular categories for new users.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Making It Self-Learning&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now let's say user 1 buys another pair of shoes. Our system should learn from this and boost the weight for "Shoes" in that user's profile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# A simple way to store user preferences
user_profiles = {
1: {"Shoes": 1, "Clothes": 1},
2: {"Shoes": 2, "Clothes": 1}
}


# Update function when a new purchase happens
def update_profile(user_id, category):
user_profiles.setdefault(user_id, {})
user_profiles[user_id][category] = user_profiles[user_id].get(category, 0) + 1


# User 1 buys new shoes
update_profile(1, "Shoes")
print(user_profiles[1]) # {'Shoes': 2, 'Clothes': 1}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can recommend categories based on this updated profile — focusing on what each user really likes.&lt;/p&gt;
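&lt;p&gt;As a minimal sketch of that idea (the &lt;code&gt;recommend_for_user&lt;/code&gt; helper and its fallback list are ours, building on the &lt;code&gt;user_profiles&lt;/code&gt; dict above), we can rank categories by their learned weights and fall back to global popularity for unknown users:&lt;/p&gt;

```python
# Learned preferences per user (from the update step above)
user_profiles = {
    1: {"Shoes": 2, "Clothes": 1},
    2: {"Shoes": 2, "Clothes": 1}
}

# Global fallback order for users we know nothing about (cold start)
popular_categories = ["Shoes", "Clothes", "Accessories"]

def recommend_for_user(user_id, top_n=3):
    profile = user_profiles.get(user_id)
    if not profile:
        return popular_categories[:top_n]
    # Rank the user's categories by learned weight, highest first
    return sorted(profile, key=profile.get, reverse=True)[:top_n]

print(recommend_for_user(1))   # ['Shoes', 'Clothes']
print(recommend_for_user(99))  # ['Shoes', 'Clothes', 'Accessories']
```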

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;The Learning Cycle&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmshutkncruyfjygibn3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmshutkncruyfjygibn3l.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start with pre-trained model (based on historical data)&lt;/li&gt;
&lt;li&gt;Serve recommendations to the user&lt;/li&gt;
&lt;li&gt;Track interactions (click, purchase, like/dislike)&lt;/li&gt;
&lt;li&gt;Update user profile or model weights&lt;/li&gt;
&lt;li&gt;Generate improved recommendations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This cycle continues, and your system becomes smarter with every interaction.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Advantages of This Approach&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;✅ &lt;strong&gt;Better cold-start performance&lt;/strong&gt; – users get meaningful suggestions from day one&lt;br&gt;
✅ &lt;strong&gt;Personalization over time&lt;/strong&gt; – recommendations adapt to user behavior&lt;br&gt;
✅ &lt;strong&gt;Scalable&lt;/strong&gt; – works with small data, can grow into ML models like ALS, bandits, or deep learning later&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Next Steps&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want to take this further, you could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;ALS (Alternating Least Squares)&lt;/strong&gt; in Spark for collaborative filtering&lt;/li&gt;
&lt;li&gt;Implement &lt;strong&gt;multi-armed bandits&lt;/strong&gt; for real-time optimization&lt;/li&gt;
&lt;li&gt;Combine content-based + collaborative filtering for hybrid recommenders&lt;/li&gt;
&lt;/ul&gt;
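&lt;p&gt;To make the bandit idea concrete, here is a toy epsilon-greedy sketch (illustrative only; the items and reward values are made up):&lt;/p&gt;

```python
import random

random.seed(0)  # deterministic for the demo

n_items = 3            # e.g. three product categories
epsilon = 0.1          # exploration rate
counts = [0] * n_items
values = [0.0] * n_items

def recommend():
    # With probability epsilon explore a random item, otherwise exploit the best
    if random.random() >= epsilon:
        return max(range(n_items), key=lambda i: values[i])
    return random.randrange(n_items)

def update(item, reward):
    # Incremental mean of observed rewards per item
    counts[item] += 1
    values[item] += (reward - values[item]) / counts[item]

update(0, 1.0)   # user clicked item 0
update(1, 0.0)   # user ignored item 1
print(recommend())
```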

&lt;p&gt;Would you like me to write a follow-up on building this with Spark or deep learning? Leave a comment below!&lt;/p&gt;

&lt;p&gt;This approach lets you build a recommender that doesn't stay static — it learns and adapts with your users.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>recommendersystem</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Database Connection Settings in Java: How to Optimize MongoDB Usage</title>
      <dc:creator>mahmoudabbasi</dc:creator>
      <pubDate>Wed, 24 Sep 2025 09:30:08 +0000</pubDate>
      <link>https://forem.com/mahmoudabbasi/database-connection-settings-in-java-how-to-optimize-mongodb-usage-5ejn</link>
      <guid>https://forem.com/mahmoudabbasi/database-connection-settings-in-java-how-to-optimize-mongodb-usage-5ejn</guid>
      <description>&lt;p&gt;When building Java applications that connect to a database like MongoDB, it’s not enough to just provide a connection string. Proper configuration ensures &lt;strong&gt;performance, stability, and scalability&lt;/strong&gt;. In this guide, we’ll explore the key database settings in Java, what each does, and the best practices for connecting to MongoDB — including &lt;strong&gt;rate limiting&lt;/strong&gt; to protect your app during peak load.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Connection Settings&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These define how your Java app connects to MongoDB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;URI / Host / Port&lt;/strong&gt;&lt;br&gt;
Example: mongodb://localhost:27017&lt;br&gt;
If you use a replica set, list all nodes for proper failover.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Authentication&lt;/strong&gt;&lt;br&gt;
Username, password, and authentication database. Essential for security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connect Timeout&lt;/strong&gt;&lt;br&gt;
Maximum time to wait for a connection to establish.&lt;br&gt;
Default: 10 seconds; can reduce to 3–5 seconds for latency-sensitive apps.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Socket Timeout&lt;/strong&gt;&lt;br&gt;
Maximum time to wait for a response after a connection is established.&lt;br&gt;
Be careful not to set this too low, or long-running queries may fail.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Connection Pool Settings&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MongoDB uses connection pools in Java via MongoClient. Key settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Max Pool Size&lt;/strong&gt;&lt;br&gt;
Maximum open connections. Default: 100.&lt;br&gt;
Choose based on concurrent threads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Min Pool Size&lt;/strong&gt;&lt;br&gt;
Minimum connections kept alive for quick allocation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Max Connection Idle Time&lt;/strong&gt;&lt;br&gt;
Maximum idle time before a connection is closed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wait Queue Timeout&lt;/strong&gt;&lt;br&gt;
Time a thread waits for a free connection before failing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Proper connection pooling prevents &lt;em&gt;connection pool exhausted&lt;/em&gt; errors under load.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Read &amp;amp; Write Concerns&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MongoDB offers guarantees for &lt;strong&gt;data safety&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Write Concern&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;w=1: write acknowledged by the primary only&lt;/li&gt;
&lt;li&gt;w=majority: write acknowledged by a majority of nodes (safer but slower)&lt;/li&gt;
&lt;li&gt;w=0: unacknowledged write (fast, risky)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read Preference&lt;/strong&gt;
Choose from primary, primaryPreferred, secondary, secondaryPreferred, nearest based on load balancing and consistency needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;SSL / TLS&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use TLS if your database is in the cloud or on an insecure network. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongodb+srv://user:password@cluster.mongodb.net/test?tls=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Monitoring &amp;amp; Heartbeat&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;MongoDB uses heartbeat intervals to check replica set status.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Default: 10 seconds. Shorter intervals allow faster failover.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Recommended Java Settings for MongoDB&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s a robust example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MongoClientSettings settings = MongoClientSettings.builder()
    .applyConnectionString(new ConnectionString("mongodb://user:pass@host1,host2,host3/?replicaSet=myRepl"))
    .applyToConnectionPoolSettings(builder -&amp;gt; builder
        .maxSize(100)
        .minSize(10)
        .maxConnectionIdleTime(60, TimeUnit.SECONDS)
        .maxWaitTime(5000, TimeUnit.MILLISECONDS)
    )
    .applyToSocketSettings(builder -&amp;gt; builder
        .connectTimeout(3000, TimeUnit.MILLISECONDS)
        .readTimeout(30000, TimeUnit.MILLISECONDS)
    )
    .readPreference(ReadPreference.primaryPreferred())
    .writeConcern(WriteConcern.MAJORITY)
    .build();

MongoClient mongoClient = MongoClients.create(settings);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Tips:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Use a singleton MongoClient to reuse connections.&lt;/li&gt;
&lt;li&gt; Use WriteConcern.MAJORITY for data safety.&lt;/li&gt;
&lt;li&gt; Choose ReadPreference based on your application's load and latency needs.&lt;/li&gt;
&lt;li&gt; Proper timeouts and idle settings prevent threads from blocking indefinitely.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Rate Limiting (Highly Recommended)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even with a properly tuned connection pool, &lt;strong&gt;sudden spikes of traffic&lt;/strong&gt; can overwhelm MongoDB. Rate limiting is a safety net that ensures your app stays responsive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Implement in Java&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1: Using Guava RateLimiter&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import com.google.common.util.concurrent.RateLimiter;

RateLimiter limiter = RateLimiter.create(100); // 100 requests per second

public void handleRequest() {
    limiter.acquire(); // blocks until a permit is available
    // Perform MongoDB query
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Option 2: Bucket4j for Spring Boot APIs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Bean
public FilterRegistrationBean&amp;lt;Filter&amp;gt; rateLimitingFilter() {
    Bandwidth limit = Bandwidth.simple(100, Duration.ofSeconds(1));
    Bucket bucket = Bucket4j.builder().addLimit(limit).build();

    return new FilterRegistrationBean&amp;lt;&amp;gt;(new RateLimitFilter(bucket));
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rate limiting protects both your &lt;strong&gt;database&lt;/strong&gt; and your &lt;strong&gt;application&lt;/strong&gt; from cascading failures during peak load.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Queue &amp;amp; Retry&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Combine rate limiting with &lt;strong&gt;queueing + exponential backoff retries&lt;/strong&gt; to avoid dropping user requests immediately when the system is under stress.&lt;/p&gt;
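&lt;p&gt;The retry pattern itself is language-agnostic; here is a minimal sketch in Python (the &lt;code&gt;flaky_query&lt;/code&gt; stub is hypothetical, standing in for a database call under load):&lt;/p&gt;

```python
import random
import time

def with_retries(operation, max_attempts=5, base_delay=0.1):
    # Retry with exponential backoff plus jitter instead of failing immediately
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            time.sleep(delay)

# Hypothetical flaky operation: fails twice, then succeeds
calls = {"n": 0}

def flaky_query():
    calls["n"] += 1
    if calls["n"] >= 3:
        return "ok"
    raise RuntimeError("server overloaded")

print(with_retries(flaky_query, base_delay=0.01))  # ok
```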

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Correct database configuration is crucial for reliable and scalable Java applications. By tuning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connection pool&lt;/li&gt;
&lt;li&gt;Timeouts&lt;/li&gt;
&lt;li&gt;Read/write concerns&lt;/li&gt;
&lt;li&gt;Monitoring &amp;amp; alerting&lt;/li&gt;
&lt;li&gt;Rate limiting and retries
you can ensure your MongoDB-backed app performs smoothly under load...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Architecture diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0gq8rua4beunz6xbrls.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0gq8rua4beunz6xbrls.jpg" alt=" " width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💡 Pro Tip: Always perform load testing to validate your settings before production.&lt;/p&gt;

</description>
      <category>database</category>
      <category>mongodb</category>
      <category>performance</category>
      <category>java</category>
    </item>
    <item>
      <title>🧠 Analyzing SOLID Principles in an Epsilon-Greedy Recommender (Java)</title>
      <dc:creator>mahmoudabbasi</dc:creator>
      <pubDate>Wed, 24 Sep 2025 04:53:05 +0000</pubDate>
      <link>https://forem.com/mahmoudabbasi/analyzing-solid-principles-in-an-epsilon-greedy-recommender-java-21lm</link>
      <guid>https://forem.com/mahmoudabbasi/analyzing-solid-principles-in-an-epsilon-greedy-recommender-java-21lm</guid>
      <description>&lt;p&gt;In this post, we’ll take a simple implementation of an Epsilon-Greedy Recommender in Java and check whether it follows the SOLID principles. Then, we’ll see how to refactor it for better maintainability, extensibility, and testability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Example Code&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class EpsilonGreedyRecommender {
    private int nItems;
    private double epsilon;
    private int[] counts;
    private double[] values;
    private Random random;

    public EpsilonGreedyRecommender(int nItems, double epsilon) {
        this.nItems = nItems;
        this.epsilon = epsilon;
        this.counts = new int[nItems];
        this.values = new double[nItems];
        this.random = new Random();
    }

    public int recommend() {
        if (random.nextDouble() &amp;lt; epsilon) {
            return random.nextInt(nItems);
        }
        int bestIndex = 0;
        for (int i = 1; i &amp;lt; nItems; i++) {
            if (values[i] &amp;gt; values[bestIndex]) {
                bestIndex = i;
            }
        }
        return bestIndex;
    }

    public void update(int item, double reward) {
        counts[item]++;
        values[item] += (reward - values[item]) / counts[item];
    }

    public double[] getValues() {
        return values;
    }

    public int[] getCounts() {
        return counts;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;SRP – Single Responsibility Principle&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;📖 &lt;strong&gt;Definition&lt;/strong&gt;:&lt;br&gt;
A class should have only one reason to change – it should have a single responsibility.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Analysis&lt;/strong&gt;:&lt;br&gt;
This class is doing multiple things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storing the bandit state (counts, values)&lt;/li&gt;
&lt;li&gt;Implementing the selection policy (recommend())&lt;/li&gt;
&lt;li&gt;Updating statistics (update())&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means any change to the policy logic, or to how state is stored, requires modifying the same class.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Verdict: SRP is partially violated&lt;/strong&gt; – we have multiple responsibilities in one place.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;OCP – Open/Closed Principle&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;📖 &lt;strong&gt;Definition&lt;/strong&gt;:&lt;br&gt;
Classes should be open for extension but closed for modification.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Analysis&lt;/strong&gt;:&lt;br&gt;
If we want to switch to a different policy (e.g., Softmax, UCB), we would have to edit the recommend() method directly.&lt;br&gt;
Better design: define a SelectionPolicy interface and plug in different implementations.&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Verdict: OCP is violated&lt;/strong&gt; – adding new policies requires modifying the class.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;LSP – Liskov Substitution Principle&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;📖 &lt;strong&gt;Definition&lt;/strong&gt;:&lt;br&gt;
Subtypes must be substitutable for their base types without changing program correctness.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Analysis&lt;/strong&gt;:&lt;br&gt;
We don’t have inheritance here, so there is nothing to violate.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Verdict: LSP is respected&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;ISP – Interface Segregation Principle&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;📖 &lt;strong&gt;Definition&lt;/strong&gt;:&lt;br&gt;
Clients should not be forced to depend on interfaces they do not use.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Analysis&lt;/strong&gt;:&lt;br&gt;
Since we have no interfaces at all, there’s no problem here.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Verdict: ISP is respected.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;DIP – Dependency Inversion Principle&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;📖 &lt;strong&gt;Definition&lt;/strong&gt;:&lt;br&gt;
Depend on abstractions, not on concrete implementations.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Analysis&lt;/strong&gt;:&lt;br&gt;
The class creates its own Random instance. This is a direct dependency on a concrete class, which makes testing harder (no way to inject a predictable RNG).&lt;/p&gt;

&lt;p&gt;Better design: inject Random as a dependency via the constructor (or use an interface).&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Verdict: DIP is violated&lt;/strong&gt; – we depend on a concrete Random implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary Table&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Principle&lt;/th&gt;&lt;th&gt;Status&lt;/th&gt;&lt;th&gt;Notes&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;SRP&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;td&gt;Multiple responsibilities (state + policy + update logic)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;OCP&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;td&gt;Cannot add new policies without modifying code&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;LSP&lt;/td&gt;&lt;td&gt;✅&lt;/td&gt;&lt;td&gt;No inheritance, no violation&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;ISP&lt;/td&gt;&lt;td&gt;✅&lt;/td&gt;&lt;td&gt;No large interfaces, no violation&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;DIP&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;td&gt;Direct dependency on Random, hard to test&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Refactored Design&lt;/strong&gt;&lt;br&gt;
Let’s refactor the code to follow &lt;strong&gt;SOLID&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Introduce&lt;/strong&gt; a SelectionPolicy &lt;strong&gt;interface&lt;/strong&gt; (Strategy Pattern)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inject&lt;/strong&gt; Random from outside to improve testability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Define the Policy Interface&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface SelectionPolicy {
    int select(double[] values);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Implement Epsilon-Greedy Policy&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.Random;

public class EpsilonGreedyPolicy implements SelectionPolicy {
    private final double epsilon;
    private final Random random;

    public EpsilonGreedyPolicy(double epsilon, Random random) {
        this.epsilon = epsilon;
        this.random = random;
    }

    @Override
    public int select(double[] values) {
        int nItems = values.length;
        if (random.nextDouble() &amp;lt; epsilon) {
            return random.nextInt(nItems);
        }
        int bestIndex = 0;
        for (int i = 1; i &amp;lt; nItems; i++) {
            if (values[i] &amp;gt; values[bestIndex]) {
                bestIndex = i;
            }
        }
        return bestIndex;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Make the Bandit Class Focus on State&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Bandit {
    private final int[] counts;
    private final double[] values;
    private final SelectionPolicy policy;

    public Bandit(int nItems, SelectionPolicy policy) {
        this.counts = new int[nItems];
        this.values = new double[nItems];
        this.policy = policy;
    }

    public int recommend() {
        return policy.select(values);
    }

    public void update(int item, double reward) {
        counts[item]++;
        values[item] += (reward - values[item]) / counts[item];
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Now:&lt;/p&gt;

&lt;p&gt;SRP is respected → Bandit only manages state, EpsilonGreedyPolicy only handles selection.&lt;/p&gt;

&lt;p&gt;OCP is respected → We can add new policies without touching Bandit.&lt;/p&gt;

&lt;p&gt;DIP is respected → Random is injected, so we can pass a mock RNG in tests.&lt;/p&gt;

&lt;p&gt;Key Takeaways&lt;/p&gt;

&lt;p&gt;Applying SOLID makes your code easier to extend and maintain.&lt;/p&gt;

&lt;p&gt;Using interfaces and dependency injection helps make your code testable and more robust.&lt;/p&gt;

&lt;p&gt;Even small classes can benefit from SOLID – especially if you expect the algorithm to evolve over time.&lt;/p&gt;

&lt;p&gt;💡 What do you think? Would you keep the state and policy together for small projects, or always split them like this?&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>architecture</category>
      <category>softwareengineering</category>
      <category>java</category>
    </item>
    <item>
      <title>Persian OCR with YOLO + CRNN: Building a Custom Text Recognition Pipeline</title>
      <dc:creator>mahmoudabbasi</dc:creator>
      <pubDate>Tue, 23 Sep 2025 14:19:32 +0000</pubDate>
      <link>https://forem.com/mahmoudabbasi/persian-ocr-with-yolo-crnn-building-a-custom-text-recognition-pipeline-4hid</link>
      <guid>https://forem.com/mahmoudabbasi/persian-ocr-with-yolo-crnn-building-a-custom-text-recognition-pipeline-4hid</guid>
      <description>&lt;p&gt;Running OCR for Persian text is tricky. Unlike English, Persian (and Arabic) scripts are right‑to‑left, letters change shape based on position, and there are fewer open‑source datasets available. In this post, we’ll build a &lt;strong&gt;custom OCR pipeline&lt;/strong&gt; using &lt;strong&gt;YOLO&lt;/strong&gt; for text detection and &lt;strong&gt;CRNN&lt;/strong&gt; for character recognition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why YOLO + CRNN?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YOLO&lt;/strong&gt; is great at detecting objects — here, the objects are text regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CRNN&lt;/strong&gt; (Convolutional Recurrent Neural Network) is ideal for sequence recognition like text.&lt;/p&gt;

&lt;p&gt;Combined, they form a two‑stage pipeline: detect → crop → recognize.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Preparing the Dataset&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For Persian OCR we need two datasets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Detection dataset (for YOLO): images with bounding boxes around words or lines.&lt;/li&gt;
&lt;li&gt;Recognition dataset (for CRNN): cropped images of words/lines with their correct text.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;p&gt;Use tools like &lt;strong&gt;labelImg&lt;/strong&gt; or &lt;strong&gt;Roboflow&lt;/strong&gt; to annotate bounding boxes.&lt;/p&gt;

&lt;p&gt;Generate synthetic data: render Persian text on random backgrounds using different fonts to increase data size.&lt;/p&gt;

&lt;p&gt;YOLO expects annotations in this format:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;class x_center y_center width height&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
where values are normalized between 0 and 1.&lt;/p&gt;
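&lt;p&gt;Converting a corner-format pixel box into that normalized form is a small calculation (the helper name and the numbers below are illustrative):&lt;/p&gt;

```python
def to_yolo(x_min, y_min, x_max, y_max, img_w, img_h, cls=0):
    # Convert a corner-format pixel box to YOLO's normalized center format
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{cls} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x40 px word box at (50, 80) in a 640x480 image (illustrative numbers)
print(to_yolo(50, 80, 150, 120, 640, 480))
```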

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Training YOLO for Text Detection&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use YOLOv8 for best results:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yolo detect train data=persian_text.yaml model=yolov8s.pt epochs=50 imgsz=640&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After training, YOLO will output bounding boxes for text regions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Training CRNN for Text Recognition&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CRNN = CNN + RNN + CTC loss.&lt;/p&gt;

&lt;p&gt;Define your Persian character set (32 letters + space) and encode labels as sequences.&lt;/p&gt;
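&lt;p&gt;A minimal sketch of that encoding step (our own helper functions; index 0 is reserved for the CTC blank symbol):&lt;/p&gt;

```python
# Persian letters plus space (assumed charset; index 0 is the CTC blank)
PERSIAN_CHARS = "ابپتثجچحخدذرزژسشصضطظعغفقکگلمنوهی "
char_to_idx = {c: i + 1 for i, c in enumerate(PERSIAN_CHARS)}
idx_to_char = {i: c for c, i in char_to_idx.items()}

def encode(text):
    # Map a label string to the integer sequence the CTC loss expects
    return [char_to_idx[c] for c in text]

def decode(indices):
    # Greedy CTC decoding: collapse repeats, then drop blanks
    out = []
    prev = 0
    for i in indices:
        if i != 0 and i != prev:
            out.append(idx_to_char[i])
        prev = i
    return "".join(out)

seq = encode("سلام")
print(decode(seq))
```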

&lt;p&gt;Example PyTorch model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torch
import torch.nn as nn


class CRNN(nn.Module):
def __init__(self, num_classes):
super(CRNN, self).__init__()
self.cnn = nn.Sequential(
nn.Conv2d(1, 64, 3, 1, 1), nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Conv2d(64, 128, 3, 1, 1), nn.ReLU(),
nn.MaxPool2d(2, 2)
)
self.rnn = nn.LSTM(128*8, 256, bidirectional=True, num_layers=2)
self.fc = nn.Linear(512, num_classes)


def forward(self, x):
x = self.cnn(x)
b, c, h, w = x.size()
x = x.permute(3, 0, 1, 2).contiguous().view(w, b, c*h)
x, _ = self.rnn(x)
x = self.fc(x)
return x # [T, B, num_classes]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use CTC Loss to align predictions with ground truth.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Combining YOLO + CRNN&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Feed image → YOLO → bounding boxes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Crop each box and resize to a fixed height.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pass to CRNN → predicted text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Concatenate results (right‑to‑left ordering).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Challenges and Tips&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Right‑to‑left text&lt;/strong&gt;: reverse CRNN output sequences before final join.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fonts and noise&lt;/strong&gt;: use data augmentation (blur, rotation, brightness) to improve generalization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small dataset?&lt;/strong&gt; Consider transfer learning or fine‑tuning PaddleOCR models for Persian.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By combining YOLO and CRNN, we created a flexible OCR pipeline that works for Persian text. This approach can be extended to other right‑to‑left scripts like Arabic or Urdu.&lt;/p&gt;

&lt;p&gt;You can check out the GitHub repo for sample code and try it on your own dataset!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Handling Distributed Transactions with Orchestrator Pattern (Withdrawal &amp; Deposit Example)</title>
      <dc:creator>mahmoudabbasi</dc:creator>
      <pubDate>Tue, 23 Sep 2025 12:06:35 +0000</pubDate>
      <link>https://forem.com/mahmoudabbasi/handling-distributed-transactions-with-orchestrator-pattern-withdrawal-deposit-example-dap</link>
      <guid>https://forem.com/mahmoudabbasi/handling-distributed-transactions-with-orchestrator-pattern-withdrawal-deposit-example-dap</guid>
      <description>&lt;p&gt;When building microservices, one of the common challenges is dealing with distributed transactions — ensuring data consistency when multiple services need to work together.&lt;/p&gt;

&lt;p&gt;Let's consider a simple but very real-world example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service A&lt;/strong&gt;: Withdrawal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service B&lt;/strong&gt;: Deposit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We want to withdraw money from one account and deposit into another.&lt;br&gt;
But what if one of them fails?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine the following flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Withdrawal Service reduces the balance.&lt;/li&gt;
&lt;li&gt;Deposit Service adds the amount to the destination account.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If step 1 succeeds but step 2 fails, you have just lost money in the system.&lt;br&gt;
This is where distributed transaction management becomes critical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common Approaches&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Two-Phase Commit (2PC)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A transaction coordinator asks each service to prepare and commit.&lt;br&gt;
If all services agree, the transaction is committed.&lt;br&gt;
If any fail, everything is rolled back.&lt;/p&gt;

&lt;p&gt;✅ Strong consistency&lt;/p&gt;

&lt;p&gt;❌ High complexity, risk of blocking, not always a good fit for microservices&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Saga Pattern with Orchestrator&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Saga pattern breaks the big transaction into a sequence of smaller, local transactions.&lt;br&gt;
Each step has a compensating transaction (rollback action) if something goes wrong.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Step&lt;/th&gt;&lt;th&gt;Action&lt;/th&gt;&lt;th&gt;Compensation&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;1&lt;/td&gt;&lt;td&gt;Withdraw from Account A&lt;/td&gt;&lt;td&gt;Deposit back to Account A&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Deposit to Account B&lt;/td&gt;&lt;td&gt;Withdraw from Account B&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The &lt;strong&gt;orchestrator&lt;/strong&gt; is a service that manages this workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Starts with &lt;strong&gt;withdrawal&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;If successful, triggers &lt;strong&gt;deposit&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;If deposit fails, runs &lt;strong&gt;compensation&lt;/strong&gt; (refund) for the withdrawal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is much more scalable and microservice-friendly.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Event-Driven &amp;amp; Eventually Consistent&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Another approach is using message queues (Kafka, RabbitMQ):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Send a WithdrawalCompleted event&lt;/li&gt;
&lt;li&gt;Deposit service consumes and processes it&lt;/li&gt;
&lt;li&gt;Retry on failure until success&lt;/li&gt;
&lt;li&gt;Make services idempotent (safe to retry without double processing)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures eventual consistency even if failures occur.&lt;/p&gt;
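&lt;p&gt;As a minimal sketch of the idempotency point above: the consumer remembers which event ids it has already applied, so a redelivered WithdrawalCompleted event is ignored. The class and method names here are illustrative, and the in-memory Properties store stands in for a persisted one.&lt;/p&gt;

```java
// Hypothetical sketch of an idempotent deposit consumer. In production the
// processed-id store would be persisted (e.g. a database table), not an
// in-memory Properties object.
import java.util.Properties;

public class DepositConsumer {
    private final Properties processed = new Properties(); // event ids already applied
    private double balance = 0.0;

    // Applies a WithdrawalCompleted event exactly once, even if the broker redelivers it.
    public void onWithdrawalCompleted(String eventId, double amount) {
        if (processed.containsKey(eventId)) {
            return; // duplicate delivery: already applied, safe to skip
        }
        balance += amount;
        processed.setProperty(eventId, "applied");
    }

    public double balance() {
        return balance;
    }
}
```

&lt;p&gt;Retrying the same event id leaves the balance unchanged, which is exactly what makes blind retries safe.&lt;/p&gt;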

&lt;p&gt;&lt;strong&gt;Orchestrator Workflow Example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6iuq6fpko0821lbyalz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6iuq6fpko0821lbyalz.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Orchestrator Implementation (Pseudo-Code)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class TransferOrchestrator {

    public void transfer(String fromAccount, String toAccount, BigDecimal amount) {
        boolean withdrawSuccess = withdrawalService.withdraw(fromAccount, amount);

        if (!withdrawSuccess) {
            log.error("Withdrawal failed");
            return;
        }

        boolean depositSuccess = depositService.deposit(toAccount, amount);

        if (!depositSuccess) {
            log.error("Deposit failed, triggering compensation...");
            withdrawalService.compensate(fromAccount, amount); // rollback
        } else {
            log.info("Transfer completed successfully");
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Idempotent APIs&lt;/strong&gt; – handle retries safely&lt;br&gt;
✅ &lt;strong&gt;Proper Logging&lt;/strong&gt; – so you can trace what happened&lt;br&gt;
✅ &lt;strong&gt;Dead Letter Queues&lt;/strong&gt; – for failed events that need manual review&lt;br&gt;
✅ &lt;strong&gt;Monitoring &amp;amp; Alerts&lt;/strong&gt; – you don’t want silent failures&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Distributed transactions are challenging, but with the &lt;strong&gt;Saga pattern and orchestration&lt;/strong&gt;, you can build resilient and scalable systems.&lt;/p&gt;

&lt;p&gt;The orchestrator gives you full control over the transaction flow and lets you recover gracefully from failures — which is critical for financial and mission-critical systems.&lt;/p&gt;

</description>
      <category>java</category>
      <category>python</category>
      <category>dataengineering</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Real-Time Fraud Detection Using Apache Flink</title>
      <dc:creator>mahmoudabbasi</dc:creator>
      <pubDate>Tue, 23 Sep 2025 08:14:26 +0000</pubDate>
      <link>https://forem.com/mahmoudabbasi/real-time-fraud-detection-using-apache-flink-4l2a</link>
      <guid>https://forem.com/mahmoudabbasi/real-time-fraud-detection-using-apache-flink-4l2a</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Every second, millions of financial transactions happen worldwide. How can banks instantly detect fraudulent activity and prevent losses in real-time?&lt;/p&gt;

&lt;p&gt;This is where Apache Flink comes in — a powerful stream processing engine that can analyze millions of events as they happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Real-Time Fraud Detection?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fraud in financial systems can be extremely costly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Examples: Credit cards, online payments, bank transfers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Batch vs. Stream processing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Batch: Data is analyzed after being collected → too late&lt;/li&gt;
&lt;li&gt;Stream: Every transaction is analyzed instantly → fast and     effective&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Flink?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Event-driven: Processes data streams with low latency&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stateful computation: Maintains account-level or user-level state for detecting anomalies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalable: Handles millions of transactions per second&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Proposed Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kafka as the transaction source&lt;/li&gt;
&lt;li&gt;Flink Job for real-time analysis and fraud scoring&lt;/li&gt;
&lt;li&gt;Rules &amp;amp; ML model to detect suspicious patterns&lt;/li&gt;
&lt;li&gt;Alerts → sent to dashboards or monitoring systems&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Architecture Diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajutvwq4f775s0jkjh2s.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajutvwq4f775s0jkjh2s.jpg" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream&amp;lt;Transaction&amp;gt; transactions = env
    .addSource(new KafkaTransactionSource());

DataStream&amp;lt;Transaction&amp;gt; flagged = transactions
    .keyBy(Transaction::getAccountId)
    .process(new FraudDetectionFunction());

flagged.addSink(new AlertSink());

env.execute("Real-Time Fraud Detection");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;keyBy: Groups transactions by account&lt;/li&gt;
&lt;li&gt;process: Runs fraud detection logic&lt;/li&gt;
&lt;li&gt;AlertSink: Sends alerts when suspicious activity is detected&lt;/li&gt;
&lt;/ul&gt;
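&lt;p&gt;The detection logic inside FraudDetectionFunction is up to the application. As an illustrative, self-contained sketch (plain Java, not Flink API code), here is the kind of per-account rule it might run: flag a transaction when the amount is unusually large, or when it arrives too soon after the previous one. The threshold values are made up for the example.&lt;/p&gt;

```java
// Illustrative per-account fraud rule. In a real Flink job this state would
// live in keyed state inside a KeyedProcessFunction, one instance per account.
public class FraudRule {
    private static final double LARGE_AMOUNT = 10_000.0; // example threshold
    private static final long RAPID_WINDOW_MS = 1_000;   // example window

    private long lastTimestampMs = Long.MIN_VALUE; // no previous transaction yet

    // Returns true when the transaction looks suspicious.
    public boolean isSuspicious(double amount, long timestampMs) {
        boolean tooLarge = amount > LARGE_AMOUNT;

        boolean rapid = false;
        if (lastTimestampMs != Long.MIN_VALUE) {
            // suspicious when the gap to the previous transaction is under the window
            rapid = RAPID_WINDOW_MS > timestampMs - lastTimestampMs;
        }

        lastTimestampMs = timestampMs;
        return tooLarge || rapid;
    }
}
```

&lt;p&gt;Because the rule is stateful per account, keyBy is what makes it correct: each account gets its own lastTimestampMs.&lt;/p&gt;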

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Flink enables real-time fraud detection.&lt;br&gt;
It helps reduce financial losses and increase customer trust.&lt;br&gt;
It offers high scalability and flexibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CTA (Call to Action)&lt;/strong&gt;&lt;br&gt;
Curious how real-time fraud detection can save millions? Let’s connect and discuss streaming analytics with Flink!&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>dataengineering</category>
      <category>security</category>
      <category>java</category>
    </item>
    <item>
      <title>How Self-Learning Recommender Systems Adapt to User Behavior</title>
      <dc:creator>mahmoudabbasi</dc:creator>
      <pubDate>Tue, 23 Sep 2025 07:42:23 +0000</pubDate>
      <link>https://forem.com/mahmoudabbasi/how-self-learning-recommender-systems-adapt-to-user-behavior-4bk4</link>
      <guid>https://forem.com/mahmoudabbasi/how-self-learning-recommender-systems-adapt-to-user-behavior-4bk4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Recommender systems are key components in many applications—from online stores to content platforms. Traditional models often require manual updates and may struggle to adapt to users’ changing preferences.&lt;/p&gt;

&lt;p&gt;In this article, we introduce a self-learning recommender system in Java that dynamically learns from user interactions and provides more accurate, personalized suggestions.&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system has three main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data Collection – Track user interactions, e.g., views, ratings, clicks.&lt;/li&gt;
&lt;li&gt;Self-Learning Model – Updates recommendations automatically as user behavior changes.&lt;/li&gt;
&lt;li&gt;Recommendation Engine – Provides personalized suggestions to each user.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7ipsozp32rk693jm88i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7ipsozp32rk693jm88i.png" alt=" " width="800" height="531"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.*;

public class SelfLearningRecommender {
    private Map&amp;lt;String, Map&amp;lt;String, Integer&amp;gt;&amp;gt; userRatings = new HashMap&amp;lt;&amp;gt;();

    public SelfLearningRecommender() {
        userRatings.put("Alice", new HashMap&amp;lt;&amp;gt;(Map.of("Book A", 5, "Book B", 3)));
        userRatings.put("Bob", new HashMap&amp;lt;&amp;gt;(Map.of("Book A", 2, "Book B", 4)));
    }

    // Add a new rating (self-learning updates automatically)
    public void addRating(String user, String item, int rating) {
        userRatings.computeIfAbsent(user, k -&amp;gt; new HashMap&amp;lt;&amp;gt;()).put(item, rating);
    }

    // Recommend the user's current top-rated item (a real system would suggest unseen items)
    public String recommend(String user) {
        Map&amp;lt;String, Integer&amp;gt; ratings = userRatings.getOrDefault(user, new HashMap&amp;lt;&amp;gt;());
        return ratings.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("No recommendation available");
    }

    public static void main(String[] args) {
        SelfLearningRecommender recommender = new SelfLearningRecommender();
        System.out.println("Recommendation for Alice: " + recommender.recommend("Alice"));

        // Simulate new behavior
        recommender.addRating("Alice", "Book C", 6);
        System.out.println("Updated Recommendation for Alice: " + recommender.recommend("Alice"));
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Features of this self-learning model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updates recommendations automatically with new data&lt;/li&gt;
&lt;li&gt;Adapts to changing user behavior&lt;/li&gt;
&lt;li&gt;Simple, scalable, and easy to extend&lt;/li&gt;
&lt;/ul&gt;
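&lt;p&gt;One way to make the self-learning step smoother: addRating above simply overwrites the previous rating, but a model can instead blend new feedback with history, for example via an exponential moving average. The sketch below is a hypothetical extension, not part of the original class; ALPHA is an assumed learning rate.&lt;/p&gt;

```java
// Hypothetical variation on addRating: instead of overwriting a rating,
// blend the new value with the old one so the model adapts gradually.
public class EmaRating {
    private static final double ALPHA = 0.3; // assumed learning rate

    private double value;
    private boolean initialized = false;

    // Folds a newly observed rating into the running estimate.
    public void update(double newRating) {
        if (!initialized) {
            value = newRating;
            initialized = true;
        } else {
            value = ALPHA * newRating + (1 - ALPHA) * value;
        }
    }

    public double value() {
        return value;
    }
}
```

&lt;p&gt;Plugging something like this in per user–item pair lets old preferences fade out gradually instead of being replaced outright.&lt;/p&gt;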

&lt;p&gt;🔹 &lt;strong&gt;Results and Benefits&lt;/strong&gt;&lt;br&gt;
Users get more personalized suggestions&lt;br&gt;
No need for manual model retraining&lt;br&gt;
Can scale from small apps to large e-commerce platforms&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Self-learning recommender systems represent the future of personalized user experiences. By combining data collection, dynamic learning, and personalized recommendations, apps can better serve their users and improve engagement.&lt;/p&gt;

</description>
      <category>java</category>
      <category>python</category>
      <category>dataengineering</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
