<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: IT Solutions Pro</title>
    <description>The latest articles on Forem by IT Solutions Pro (@it_solutions_pro).</description>
    <link>https://forem.com/it_solutions_pro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3714793%2Faae9ba53-837e-4015-bf13-12ed647f7954.png</url>
      <title>Forem: IT Solutions Pro</title>
      <link>https://forem.com/it_solutions_pro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/it_solutions_pro"/>
    <language>en</language>
    <item>
      <title>Build a "Military-Grade" Network Scanner in Python (Scapy Tutorial)</title>
      <dc:creator>IT Solutions Pro</dc:creator>
      <pubDate>Tue, 03 Feb 2026 18:35:38 +0000</pubDate>
      <link>https://forem.com/it_solutions_pro/build-a-military-grade-network-scanner-in-python-scapy-tutorial-1a4a</link>
      <guid>https://forem.com/it_solutions_pro/build-a-military-grade-network-scanner-in-python-scapy-tutorial-1a4a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn2kbdrlkwr71k24d5w3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn2kbdrlkwr71k24d5w3.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the world of cybersecurity, information is power. Before an attack happens, there is a phase of observation—watching, listening, and mapping out the network.&lt;/p&gt;

&lt;p&gt;Imagine knowing exactly every device connected to your Wi-Fi right now. Your neighbor’s phone, your smart TV, or maybe even an intruder lurking on your network.&lt;/p&gt;

&lt;p&gt;Most people rely on pre-made tools like Nmap. But today, we are going to &lt;strong&gt;build our own&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will write a Python script that leverages the &lt;strong&gt;ARP (Address Resolution Protocol)&lt;/strong&gt; to discover devices on any network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we write code, we need to install a packet-capture driver. Python cannot speak to the network card at Layer 2 without help.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. For Windows Users (Critical)
&lt;/h3&gt;

&lt;p&gt;You must install &lt;strong&gt;Npcap&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to &lt;a href="https://npcap.com" rel="noopener noreferrer"&gt;npcap.com&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; During installation, check the box &lt;strong&gt;"Install Npcap in WinPcap API-compatible Mode"&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Install Scapy
&lt;/h3&gt;

&lt;p&gt;We will use &lt;code&gt;scapy&lt;/code&gt;, a powerful packet manipulation library.&lt;br&gt;
Open your terminal/command prompt and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install scapy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  The Code Construction
&lt;/h2&gt;

&lt;p&gt;We will build this in three phases using a single file: &lt;code&gt;scanner.py&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  Phase 1: Handling Arguments
&lt;/h3&gt;

&lt;p&gt;We don't want to hard-code the IP. We want to pass it via the command line (e.g., &lt;code&gt;-t 10.0.0.1/24&lt;/code&gt;).&lt;/p&gt;
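&lt;p&gt;A quick aside on the &lt;code&gt;/24&lt;/code&gt; notation: Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module (not used by the scanner itself; shown only for illustration) expands such a range as follows:&lt;/p&gt;

```python
import ipaddress

# A /24 mask covers 256 addresses in total; Scapy will sweep the
# usable hosts in this range with ARP requests.
net = ipaddress.ip_network("10.0.0.1/24", strict=False)
print(net.num_addresses)       # 256 addresses in total
print(net.network_address)     # 10.0.0.0
print(len(list(net.hosts())))  # 254 usable host addresses
```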

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import scapy.all as scapy
import argparse
import time

def get_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("-t", "--target", dest="target", help="Target IP / IP range.")
    options = parser.parse_args()

    # Smart Logic: If user forgets the IP, default to a test range
    if not options.target:
        print("[-] No target specified. Defaulting to 192.168.1.1/24 for demo.")
        return "192.168.1.1/24"

    return options.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  Phase 2: The Scanner Engine
&lt;/h3&gt;

&lt;p&gt;This is where the magic happens. We create an Ethernet frame destined for &lt;code&gt;ff:ff:ff:ff:ff:ff&lt;/code&gt; (broadcast), ensuring every device hears us.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def scan(ip):
    print(f"\n[+] Starting Scan on {ip}...\n")
    print("-" * 50)
    print("IP Address\t\tMAC Address")
    print("-" * 50)

    # 1. Create ARP Request
    arp_request = scapy.ARP(pdst=ip)

    # 2. Create Broadcast Frame
    broadcast = scapy.Ether(dst="ff:ff:ff:ff:ff:ff")

    # 3. Combine Frame
    arp_request_broadcast = broadcast/arp_request

    # 4. Send &amp;amp; Receive
    # verbose=False keeps the terminal clean
    answered_list = scapy.srp(arp_request_broadcast, timeout=1, verbose=False)[0]

    clients_list = []

    for element in answered_list:
        client_dict = {"ip": element[1].psrc, "mac": element[1].hwsrc}
        clients_list.append(client_dict)

        # visual effect: print immediately
        print(f"{element[1].psrc}\t\t{element[1].hwsrc}")
        time.sleep(0.1) # Cinematic delay

    return clients_list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  The Full Source Code
&lt;/h2&gt;

&lt;p&gt;Here is the complete script. Copy this into a file named &lt;code&gt;scanner.py&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import scapy.all as scapy
import argparse
import time

def get_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("-t", "--target", dest="target", help="Target IP / IP range.")
    options = parser.parse_args()
    if not options.target:
        print("[-] Please specify a target IP range. Use --help for more info.")
        # Default to local range if not provided
        return "192.168.1.1/24" 
    return options.target

def scan(ip):
    print(f"\n[+] Starting Scan on {ip}...\n")
    print("-" * 50)
    print("IP Address\t\tMAC Address")
    print("-" * 50)

    arp_request = scapy.ARP(pdst=ip)
    broadcast = scapy.Ether(dst="ff:ff:ff:ff:ff:ff")
    arp_request_broadcast = broadcast/arp_request

    # Send packet and wait for response
    answered_list = scapy.srp(arp_request_broadcast, timeout=1, verbose=False)[0]

    clients_list = []
    for element in answered_list:
        client_dict = {"ip": element[1].psrc, "mac": element[1].hwsrc}
        clients_list.append(client_dict)
        print(f"{element[1].psrc}\t\t{element[1].hwsrc}")
        time.sleep(0.1) # Added delay for visual effect

    return clients_list

def print_result(results_list):
    print("-" * 50)
    print(f"[+] Scan Complete. Found {len(results_list)} devices.")

if __name__ == "__main__":
    target_ip = get_arguments()
    scan_result = scan(target_ip)
    print_result(scan_result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You just built a professional network reconnaissance tool in under 60 lines of code. This is the foundation of network security.&lt;/p&gt;

&lt;p&gt;If you enjoyed this, I have a full video breakdown explaining every single line of code below.&lt;/p&gt;

&lt;h3&gt;
  📺 Watch the Masterclass
&lt;/h3&gt;

&lt;p&gt;
  &lt;iframe src="https://www.youtube.com/embed/4HKf8_5a8nY"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;For more IT Masterclasses, subscribe to: &lt;a class="mentioned-user" href="https://dev.to/it_solutions_pro"&gt;@it_solutions_pro&lt;/a&gt; &lt;/p&gt;

</description>
      <category>python</category>
      <category>security</category>
      <category>networking</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Build a Self-Hosted AI Server (n8n + Docker + OpenAI)</title>
      <dc:creator>IT Solutions Pro</dc:creator>
      <pubDate>Fri, 30 Jan 2026 17:45:17 +0000</pubDate>
      <link>https://forem.com/it_solutions_pro/build-a-self-hosted-ai-server-n8n-docker-openai-m3n</link>
      <guid>https://forem.com/it_solutions_pro/build-a-self-hosted-ai-server-n8n-docker-openai-m3n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45gsot2flf5vutvsgfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45gsot2flf5vutvsgfb.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are serious about automation in 2026, you have two choices: Pay monthly fees for services like Zapier, or become the architect of your own infrastructure.&lt;/p&gt;

&lt;p&gt;Today, we choose option two.&lt;/p&gt;

&lt;p&gt;We are building a self-hosted AI server that monitors inputs, analyzes them with &lt;strong&gt;GPT-4&lt;/strong&gt;, and sends intelligent alerts directly to Telegram—all running locally via &lt;strong&gt;Docker&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This isn't just a toy project; this is an enterprise-grade solution that you can run on a local machine, a Raspberry Pi, or a cloud VPS for pennies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Self-Host?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Privacy:&lt;/strong&gt; Your data stays on your machine.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Cost:&lt;/strong&gt; No more "pay per task" limits like Zapier.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Power:&lt;/strong&gt; You get full control over the AI logic and database.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Desktop&lt;/strong&gt; installed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code&lt;/strong&gt; text editor.&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;OpenAI API Key&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Secure Configuration (.env)
&lt;/h2&gt;

&lt;p&gt;First, we need to set up our environment variables. This keeps our passwords safe and out of the main code.&lt;br&gt;
Create a file named &lt;code&gt;.env&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .env file configuration
POSTGRES_USER=n8n
POSTGRES_PASSWORD=mysecretpassword
POSTGRES_DB=n8n
N8N_ENCRYPTION_KEY=supersecretkey123
N8N_HOST=localhost
N8N_PORT=5678
GENERIC_TIMEZONE=America/New_York
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
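&lt;p&gt;Docker Compose reads this file automatically and substitutes the &lt;code&gt;${VAR}&lt;/code&gt;-style placeholders in &lt;code&gt;docker-compose.yml&lt;/code&gt;. The substitution behaves much like Python's standard &lt;code&gt;string.Template&lt;/code&gt;, which you can use to sanity-check a line (an illustration, not part of the deployment):&lt;/p&gt;

```python
from string import Template

# Values as they would be read from the .env file above.
env = {"POSTGRES_USER": "n8n", "POSTGRES_DB": "n8n"}

# A line from docker-compose.yml, with its placeholder filled in.
line = Template("- POSTGRES_USER=${POSTGRES_USER}").substitute(env)
print(line)  # - POSTGRES_USER=n8n
```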


&lt;h2&gt;
  Step 2: The Architecture (docker-compose.yml)
&lt;/h2&gt;

&lt;p&gt;We are using Docker Compose to spin up two containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PostgreSQL:&lt;/strong&gt; A robust database to store workflow history.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;n8n:&lt;/strong&gt; The workflow automation tool itself.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create a file named &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_HOST=${N8N_HOST}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

volumes:
  postgres_data:
  n8n_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  How to Run It
&lt;/h2&gt;

&lt;p&gt;Open your terminal in the same folder and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then visit &lt;a href="http://localhost:5678" rel="noopener noreferrer"&gt;http://localhost:5678&lt;/a&gt; in your browser.&lt;/p&gt;

&lt;h2&gt;
  Step 3: Programming the AI Agent
&lt;/h2&gt;

&lt;p&gt;To make this system smart, we need to give the AI specific instructions. When setting up the AI Agent Node in n8n, use this System Prompt to enforce "Structured Output" (JSON):&lt;/p&gt;

&lt;p&gt;You are a Senior IT Support Automation Agent.&lt;br&gt;
Your job is to analyze incoming text and categorize it.&lt;/p&gt;

&lt;p&gt;RULES:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If the text contains words like "crash", "down", "urgent", or "fire", categorize as "HIGH_PRIORITY".&lt;/li&gt;
&lt;li&gt;Otherwise, categorize as "LOW_PRIORITY".&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;OUTPUT FORMAT (JSON ONLY):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
 "category": "HIGH_PRIORITY" | "LOW_PRIORITY",&lt;br&gt;
 "suggested_reply": "Write a short, professional response to the user here."&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  Step 4: Connecting Telegram
&lt;/h2&gt;

&lt;p&gt;To receive alerts on your phone:&lt;/p&gt;

&lt;p&gt;Search for &lt;a class="mentioned-user" href="https://dev.to/botfather"&gt;@botfather&lt;/a&gt; on Telegram.&lt;/p&gt;

&lt;p&gt;Type /newbot to get your API Token.&lt;/p&gt;

&lt;p&gt;Search for @userinfobot to get your personal Chat ID.&lt;/p&gt;

&lt;p&gt;Use these credentials in the Telegram Node in n8n.&lt;/p&gt;
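&lt;p&gt;Before relying on the model, you can sanity-check the routing logic from Step 3 locally. Here is a hypothetical helper in plain Python (the keyword set and the &lt;code&gt;categorize&lt;/code&gt;/&lt;code&gt;validate_reply&lt;/code&gt; names are ours, not part of n8n or OpenAI):&lt;/p&gt;

```python
import json

# Local sketch of the routing rule the system prompt enforces
# (a test harness for the logic, not part of the n8n workflow).
URGENT_KEYWORDS = {"crash", "down", "urgent", "fire"}

def categorize(text: str) -> str:
    """Mirror the prompt's rules: urgent keywords -> HIGH_PRIORITY."""
    lowered = text.lower()
    if any(kw in lowered for kw in URGENT_KEYWORDS):
        return "HIGH_PRIORITY"
    return "LOW_PRIORITY"

def validate_reply(raw: str) -> dict:
    """Check that a model reply matches the OUTPUT FORMAT contract."""
    data = json.loads(raw)
    assert set(data) == {"category", "suggested_reply"}
    return data

print(categorize("The payment server is down, urgent!"))  # HIGH_PRIORITY
print(categorize("Please update my mailing address."))    # LOW_PRIORITY
```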

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have just built a system that would cost enterprise companies thousands of dollars, and you did it for free on your own hardware. That is the power of open-source IT.&lt;/p&gt;

&lt;p&gt;In the next tutorial, we will secure this server with SSL and a custom domain.&lt;/p&gt;

&lt;p&gt;Happy Automating!&lt;/p&gt;

&lt;h3&gt;
  📺 Watch the Full Masterclass
&lt;/h3&gt;

&lt;p&gt;
  &lt;iframe src="https://www.youtube.com/embed/VBsCJN3-78k"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you prefer text, check out the channel here: &lt;a class="mentioned-user" href="https://dev.to/it_solutions_pro"&gt;@it_solutions_pro&lt;/a&gt; &lt;/p&gt;

</description>
      <category>docker</category>
      <category>n8nbrightdatachallenge</category>
      <category>automation</category>
      <category>ai</category>
    </item>
    <item>
      <title>Build a "Stateful" AI Chatbot with Python &amp; OpenAI</title>
      <dc:creator>IT Solutions Pro</dc:creator>
      <pubDate>Wed, 28 Jan 2026 20:40:32 +0000</pubDate>
      <link>https://forem.com/it_solutions_pro/build-a-stateful-ai-chatbot-with-python-openai-5857</link>
      <guid>https://forem.com/it_solutions_pro/build-a-stateful-ai-chatbot-with-python-openai-5857</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstmqrmdcuj63wjq6wnxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstmqrmdcuj63wjq6wnxv.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most beginners make a critical mistake when working with the OpenAI API: they assume the AI remembers them.&lt;/p&gt;

&lt;p&gt;By default, Large Language Models (LLMs) are &lt;strong&gt;"Stateless"&lt;/strong&gt;. This means if you say "My name is Shakar," and then ask "What is my name?" in the next request, the API will have no idea who you are.&lt;/p&gt;

&lt;p&gt;In this tutorial, we are going to fix that. We will build a &lt;strong&gt;Stateful&lt;/strong&gt; chatbot in Python that maintains conversation history, handles errors gracefully, and runs locally in your terminal.&lt;/p&gt;
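&lt;p&gt;To see why the history list is the "memory", here is a dependency-free sketch of the same pattern. The &lt;code&gt;fake_model&lt;/code&gt; function is a hypothetical stand-in for the API call; like a real LLM request, it can only use what is in the history it receives:&lt;/p&gt;

```python
# Dependency-free sketch of the stateful pattern: the "memory" is just
# the growing messages list that is re-sent with every request.
messages = [{"role": "system", "content": "You are a helpful IT Assistant."}]

def fake_model(history):
    """Hypothetical stand-in for the OpenAI call: it can only 'remember'
    what is present in the history it receives."""
    for msg in reversed(history):
        if msg["role"] == "user" and msg["content"].startswith("My name is "):
            return "Your name is " + msg["content"][len("My name is "):] + "."
    return "I don't know your name yet."

for user_input in ["My name is Shakar", "What is my name?"]:
    messages.append({"role": "user", "content": user_input})
    reply = fake_model(messages)
    messages.append({"role": "assistant", "content": reply})

print(messages[-1]["content"])  # Your name is Shakar.
print(len(messages))            # 5: system + 2 user + 2 assistant
```

&lt;p&gt;Drop the history from the request (send only the latest message) and the second answer becomes "I don't know your name yet." — exactly the stateless behavior described above.&lt;/p&gt;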

&lt;h3&gt;
  
  
  📺 Watch the Full Masterclass
&lt;/h3&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/yelLBd-FPB8"&gt;
  &lt;/iframe&gt;


&lt;br&gt;
&lt;em&gt;If you prefer video, check out the channel here: &lt;a href="https://www.youtube.com/@IT_Solutions_Pro" rel="noopener noreferrer"&gt;IT Solutions Pro&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
from dotenv import load_dotenv
from openai import OpenAI

# 1. Load environment variables securely
load_dotenv()

# 2. Initialize the Client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

print("--- AI Chatbot Initialized (Type 'quit' to exit) ---")

# 3. Setup Memory (System Context)
# This sets the behavior of the AI
messages = [
    {"role": "system", "content": "You are a helpful, friendly IT Assistant."},
]

# 4. The Main Loop
while True:
    try:
        user_input = input("\nYou: ")

        # Exit Condition
        if user_input.lower() in ['quit', 'exit']:
            print("Shutting down...")
            break

        # STEP A: Add User Input to Memory
        messages.append({"role": "user", "content": user_input})

        # STEP B: Send the WHOLE history to the API
        response = client.chat.completions.create(
            model="gpt-4o", # You can use "gpt-3.5-turbo" to save cost
            messages=messages,
            temperature=0.7
        )

        # STEP C: Extract Answer &amp;amp; Add to Memory
        ai_response = response.choices[0].message.content

        # Crucial Step: Save the AI's own words back to the list
        messages.append({"role": "assistant", "content": ai_response})

        print(f"AI: {ai_response}")

    except Exception as e:
        print(f"An error occurred: {e}")
        break
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>python</category>
      <category>ai</category>
      <category>openai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Build a Military-Grade SOC for $0 (Wazuh + Docker + Python)</title>
      <dc:creator>IT Solutions Pro</dc:creator>
      <pubDate>Tue, 27 Jan 2026 16:29:41 +0000</pubDate>
      <link>https://forem.com/it_solutions_pro/build-a-military-grade-soc-for-0-wazuh-docker-python-3kam</link>
      <guid>https://forem.com/it_solutions_pro/build-a-military-grade-soc-for-0-wazuh-docker-python-3kam</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvhoqjss6pyiu3q741pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvhoqjss6pyiu3q741pv.png" alt=" "&gt;&lt;/a&gt;**STOP paying $5,000/month for enterprise security tools like Splunk or Datadog just to monitor your home lab or small business server.&lt;/p&gt;

&lt;p&gt;You can build a &lt;strong&gt;Military-Grade Security Operations Center (SOC)&lt;/strong&gt; entirely for free using Open Source tools.&lt;/p&gt;

&lt;p&gt;In this masterclass, I’ll show you how to deploy &lt;strong&gt;Wazuh&lt;/strong&gt; (the open-source SIEM) using Docker, and then we will write a custom &lt;strong&gt;Python Attack Bot&lt;/strong&gt; to test our defenses in real time.&lt;/p&gt;
&lt;h3&gt;
  
  
  📺 Watch the Full Masterclass
&lt;/h3&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/u8S71azxz_E"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;




&lt;h3&gt;
  
  
  🛠️ What We Build in This Video:
&lt;/h3&gt;




&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;The Architecture:&lt;/strong&gt; Setting up the Wazuh Manager (The Brain) and Agents (The Eyes).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Docker Deployment:&lt;/strong&gt; Getting the stack up in under 3 minutes.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Attack:&lt;/strong&gt; Writing a Python script (&lt;code&gt;audit_tool.py&lt;/code&gt;) to simulate a brute-force attack.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Defense:&lt;/strong&gt; Configuring a Custom XML Rule to detect the pattern and auto-ban the IP.&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  👨‍💻 The Code
&lt;/h3&gt;

&lt;p&gt;Don't want to type everything from the video? Here is the source code for the tools we built.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. The Python Attack Bot (&lt;code&gt;audit_tool.py&lt;/code&gt;)
&lt;/h4&gt;

&lt;p&gt;Use this script to simulate an attack on your own server (Do NOT use this on servers you don't own).&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import paramiko
import socket
import time

# CHANGE THIS to your local server IP
TARGET_IP = "192.168.1.XX" 
USER = "root"

print(f"[*] Starting Audit Tool targeting {TARGET_IP}...")

while True:
    password = input("Enter Password to Test: ")

    try:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

        # Attempt Connection
        client.connect(TARGET_IP, username=USER, password=password, timeout=3)
        print("[+] SUCCESS: Password Found!")
        client.close()
        break

    except paramiko.AuthenticationException:
        print("[-] Auth Failed: Wrong Credentials.")
    except socket.error:
        print("[!!!] CONNECTION REFUSED: Server blocked us! (Active Response Worked)")
        break
    except Exception as e:
        print(f"[!] Error: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  2. The Custom Detection Rule (XML)
&lt;/h4&gt;

&lt;p&gt;Add this custom rule on the Wazuh Manager to flag the flood pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;rule id="100003" level="10" frequency="15" timeframe="10"&amp;gt;
  &amp;lt;if_matched_sid&amp;gt;60137&amp;lt;/if_matched_sid&amp;gt;
  &amp;lt;description&amp;gt;Critical: Massive Logoff Flood Detected (Possible Brute Force)&amp;lt;/description&amp;gt;
&amp;lt;/rule&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
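&lt;p&gt;For intuition, &lt;code&gt;frequency="15"&lt;/code&gt; with &lt;code&gt;timeframe="10"&lt;/code&gt; means "fire when 15 matching events arrive within 10 seconds". A sliding-window counter sketches the idea (a conceptual illustration, not Wazuh's actual engine):&lt;/p&gt;

```python
from collections import deque

def make_detector(threshold=15, window=10.0):
    """Fire once `threshold` events land within `window` seconds,
    mirroring frequency="15" timeframe="10" in the rule above."""
    events = deque()
    def on_event(timestamp):
        events.append(timestamp)
        # Drop events that fell out of the sliding window.
        while events and timestamp - events[0] > window:
            events.popleft()
        return len(events) >= threshold
    return on_event

detect = make_detector()
# 15 failed-auth events arriving 50 ms apart: only the 15th trips the alert.
alerts = [detect(t * 0.05) for t in range(15)]
print(alerts[0], alerts[-1])  # False True
```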



</description>
      <category>devops</category>
      <category>security</category>
      <category>python</category>
      <category>docker</category>
    </item>
    <item>
      <title>Build Your Own Private AI Station: Llama + Open WebUI with Docker</title>
      <dc:creator>IT Solutions Pro</dc:creator>
      <pubDate>Fri, 16 Jan 2026 12:39:40 +0000</pubDate>
      <link>https://forem.com/it_solutions_pro/build-your-own-private-ai-station-llama-open-webui-with-docker-2ii8</link>
      <guid>https://forem.com/it_solutions_pro/build-your-own-private-ai-station-llama-open-webui-with-docker-2ii8</guid>
<description>&lt;p&gt;Privacy in AI is no longer a luxury; it’s a necessity. If you are tired of sending your sensitive data to cloud-based LLMs and paying monthly subscriptions, it's time to host your own.&lt;/p&gt;

&lt;p&gt;In this guide, I will show you how to deploy Llama 3.1 using Ollama and Open WebUI inside Docker. This setup gives you a ChatGPT-like experience running 100% locally on your hardware.&lt;/p&gt;

&lt;h2&gt;
  Why Llama 3.1 + Open WebUI?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Complete Privacy:&lt;/strong&gt; Your prompts never leave your local network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rich UI:&lt;/strong&gt; Open WebUI provides a professional interface with Markdown support, image generation integration, and document RAG.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; Llama 3.1 (8B) is highly optimized for consumer-grade GPUs like the RTX 30-series or 40-series.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  🛠 The Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ollama:&lt;/strong&gt; The engine that runs the model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open WebUI:&lt;/strong&gt; The professional frontend.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker:&lt;/strong&gt; To keep the environment clean and isolated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  Quick Deployment (Docker Compose)
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;docker-compose.yml&lt;/code&gt; file and paste this configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  ollama:
    volumes:
      - ./ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - ./open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 3000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
    extra_hosts:
      - "host.docker.internal:host-gateway"
      - "host.docker.internal:host-gateway"
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  🚀 How to Run
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install Docker and Docker Compose.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;docker-compose up -d&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access the UI at &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the model inside the UI by typing &lt;code&gt;llama3.1&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  🎥 Full Step-by-Step Masterclass
&lt;/h2&gt;

&lt;p&gt;If you want to see the full installation, including GPU optimization settings, persistent storage configuration, and a tour of the best features, I have recorded a detailed 15-minute Masterclass.&lt;/p&gt;

&lt;p&gt;Watch the full tutorial on IT Solutions Pro:
  &lt;iframe src="https://www.youtube.com/embed/lRziiN7sJUA"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Running Llama 3.1 locally is a game-changer for developers and IT professionals. It’s fast, secure, and free.&lt;/p&gt;

&lt;p&gt;If you encounter any issues with the Docker configuration, feel free to drop a comment below or on my YouTube channel!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>docker</category>
      <category>llm</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
