<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: NeuralLang</title>
    <description>The latest articles on Forem by NeuralLang (@neural).</description>
    <link>https://forem.com/neural</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2879314%2F031c9630-16e4-4869-ae57-53a23c459fa3.jpg</url>
      <title>Forem: NeuralLang</title>
      <link>https://forem.com/neural</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/neural"/>
    <language>en</language>
    <item>
      <title>Neural DSL v0.2.9: Early Preview of Aquarium IDE for Visual Neural Network Design</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Mon, 05 May 2025 15:51:38 +0000</pubDate>
      <link>https://forem.com/neural/neural-dsl-v029-early-preview-of-aquarium-ide-for-visual-neural-network-design-52he</link>
      <guid>https://forem.com/neural/neural-dsl-v029-early-preview-of-aquarium-ide-for-visual-neural-network-design-52he</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhiq5wstwuq8z7ex86yu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhiq5wstwuq8z7ex86yu.png" alt="Neural DSL Logo" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're pleased to announce the release of Neural DSL v0.2.9, which includes an early preview of Aquarium IDE, a new development environment for neural network design. This initial release provides basic visual tools for network design and integrates with Neural's shape propagation system.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Aquarium IDE is our first step toward making neural network development more visual and accessible. While still in early development, we believe this approach will help both beginners and experienced developers better understand their network architectures." — Neural DSL Team&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🚀 Spotlight Feature: Aquarium IDE (Early Preview)
&lt;/h2&gt;

&lt;p&gt;Aquarium IDE is a new development environment for neural network design that we're releasing as an early preview. In this initial version, it provides a basic visual interface for designing simple neural networks and viewing tensor shapes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Current Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Basic Visual Designer&lt;/strong&gt;: Simple interface for adding and configuring common layer types&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shape Calculation&lt;/strong&gt;: View tensor dimensions for each layer in your network&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neural DSL Code Generation&lt;/strong&gt;: Generate basic Neural DSL code from your visual design&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Estimation&lt;/strong&gt;: Basic calculation of parameter counts for each layer&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Technology Stack
&lt;/h3&gt;

&lt;p&gt;Aquarium IDE is built with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: Tauri with JavaScript/HTML/CSS for cross-platform compatibility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Rust components for shape calculation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neural Integration&lt;/strong&gt;: Integration with Neural's shape propagator for tensor dimension calculations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔍 How Aquarium IDE Works (Current Implementation)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Basic Network Design
&lt;/h3&gt;

&lt;p&gt;In this early preview, Aquarium IDE provides a simple interface where you can add layers to your network. The current version supports a limited set of common layer types (Input, Conv2D, MaxPooling2D, Flatten, Dense, and Output). Each layer can be configured through a basic properties panel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+----------------+     +----------------+     +----------------+
|    Input       |     |    Conv2D      |     |  MaxPooling2D  |
| (28, 28, 1)    | --&amp;gt; | filters=32     | --&amp;gt; | pool_size=(2,2)|
|                |     | kernel=(3,3)   |     |                |
+----------------+     +----------------+     +----------------+
        |
        v
+----------------+     +----------------+     +-------------------+
|    Flatten     |     |     Dense      |     |      Output       |
|                | --&amp;gt; | units=128      | --&amp;gt; | units=10          |
|                |     | activation=relu|     | activation=softmax|
+----------------+     +----------------+     +-------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Shape Calculation
&lt;/h3&gt;

&lt;p&gt;The current version calculates basic tensor dimensions for each layer in your network. This is a simplified implementation that works for common layer types and configurations but may not handle all edge cases or complex architectures.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Layer         | Input Shape      | Output Shape     | Parameters
--------------|------------------|------------------|------------
Input Layer   | -                | [null,28,28,1]   | 0
Conv2D        | [null,28,28,1]   | [null,28,28,32]  | 320
MaxPooling2D  | [null,28,28,32]  | [null,14,14,32]  | 0
Flatten       | [null,14,14,32]  | [null,6272]      | 0
Dense         | [null,6272]      | [null,128]       | 802,944
Output        | [null,128]       | [null,10]        | 1,290
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
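&lt;p&gt;The numbers above follow from the standard shape and parameter formulas. As a rough illustration (plain Python, not Aquarium's actual Rust implementation), the table can be reproduced like this:&lt;/p&gt;

```python
# Illustrative shape/parameter math for the example network above.
# This mirrors what Aquarium reports; it is NOT the IDE's actual Rust code.

def conv2d(h, w, c_in, filters, kernel, padding="same"):
    """Output shape and parameter count for a stride-1 Conv2D."""
    kh, kw = kernel
    params = kh * kw * c_in * filters + filters  # weights + biases
    if padding == "same":
        return (h, w, filters), params
    return (h - kh + 1, w - kw + 1, filters), params  # "valid"

def max_pool(h, w, c, pool=(2, 2)):
    return (h // pool[0], w // pool[1], c), 0

def dense(n_in, units):
    return (units,), n_in * units + units  # weights + biases

shape, p1 = conv2d(28, 28, 1, 32, (3, 3))   # (28, 28, 32), 320 params
shape, _ = max_pool(*shape)                 # (14, 14, 32)
flat = shape[0] * shape[1] * shape[2]       # 6272
_, p2 = dense(flat, 128)                    # 802944 params
_, p3 = dense(128, 10)                      # 1290 params
```

&lt;p&gt;These match the Parameters column in the table: 320 for Conv2D, 802,944 for Dense, and 1,290 for Output.&lt;/p&gt;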



&lt;h3&gt;
  
  
  3. Basic Code Generation
&lt;/h3&gt;

&lt;p&gt;The current version generates simple Neural DSL code from your visual design. The code generation is limited to the supported layer types and basic configurations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Neural DSL Model&lt;/span&gt;

&lt;span class="s"&gt;Input(shape=[28, 28, 1])&lt;/span&gt;
&lt;span class="s"&gt;Conv2D(filters=32, kernel_size=[3, 3], padding="same", activation="relu")&lt;/span&gt;
&lt;span class="s"&gt;MaxPooling2D(pool_size=[2, 2])&lt;/span&gt;
&lt;span class="s"&gt;Flatten()&lt;/span&gt;
&lt;span class="s"&gt;Dense(units=128, activation="relu")&lt;/span&gt;
&lt;span class="s"&gt;Output(units=10, activation="softmax")&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Current Limitations
&lt;/h3&gt;

&lt;p&gt;It's important to note that this early preview has several limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only supports a small set of layer types&lt;/li&gt;
&lt;li&gt;Limited parameter configuration options&lt;/li&gt;
&lt;li&gt;Basic shape calculation that may not handle all edge cases&lt;/li&gt;
&lt;li&gt;Simple code generation without advanced features&lt;/li&gt;
&lt;li&gt;No support for complex network architectures (e.g., multi-input/output, skip connections)&lt;/li&gt;
&lt;li&gt;Limited error checking and validation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠️ Getting Started with Aquarium IDE
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Aquarium IDE is included as a submodule in the Neural repository. To try this early preview:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the Neural repository&lt;/span&gt;
git clone https://github.com/Lemniscate-world/Neural.git
&lt;span class="nb"&gt;cd &lt;/span&gt;Neural

&lt;span class="c"&gt;# Update submodules to get Aquarium&lt;/span&gt;
git submodule update &lt;span class="nt"&gt;--init&lt;/span&gt; &lt;span class="nt"&gt;--recursive&lt;/span&gt;

&lt;span class="c"&gt;# Install Rust if you don't have it already&lt;/span&gt;
&lt;span class="c"&gt;# https://www.rust-lang.org/tools/install&lt;/span&gt;

&lt;span class="c"&gt;# Install Tauri CLI&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;tauri-cli

&lt;span class="c"&gt;# Navigate to the Aquarium directory&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;Aquarium

&lt;span class="c"&gt;# Install Node.js dependencies&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Run the development server (this may take a few minutes the first time)&lt;/span&gt;
cargo tauri dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: As this is an early preview, you may encounter some issues during installation or runtime. Please report any problems on our GitHub issues page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trying the Basic Features
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Add Layers&lt;/strong&gt;: Use the buttons in the left panel to add some basic layers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure Parameters&lt;/strong&gt;: Try adjusting some simple parameters like units or filters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;View Shapes&lt;/strong&gt;: Switch to the shape tab to see basic tensor dimensions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;See Generated Code&lt;/strong&gt;: Check the code tab to view the generated Neural DSL code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Experiment&lt;/strong&gt;: This is an early preview, so feel free to experiment and provide feedback&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  🔧 Code Quality Improvements
&lt;/h2&gt;

&lt;p&gt;In addition to the Aquarium IDE preview, Neural v0.2.9 includes some code quality improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixed trailing whitespace and missing newlines at end of files across the codebase&lt;/li&gt;
&lt;li&gt;Improved code consistency and adherence to style guidelines&lt;/li&gt;
&lt;li&gt;Enhanced readability and maintainability of the codebase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These changes, while not user-facing, help maintain a healthy codebase for future development.&lt;/p&gt;

&lt;h2&gt;
  
  
  📦 Installation
&lt;/h2&gt;

&lt;p&gt;To try Neural DSL v0.2.9 with the Aquarium IDE preview:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install the core Neural DSL package&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;neural-dsl&lt;span class="o"&gt;==&lt;/span&gt;0.2.9

&lt;span class="c"&gt;# To try Aquarium IDE, follow the installation instructions above&lt;/span&gt;
&lt;span class="c"&gt;# as it requires additional dependencies (Rust, Node.js, etc.)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or upgrade from a previous version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; neural-dsl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🔍 Roadmap for Aquarium IDE
&lt;/h2&gt;

&lt;p&gt;Aquarium IDE is in very early development, and we have a long roadmap ahead. Some of the features we're planning to work on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Support for More Layer Types&lt;/strong&gt;: Add support for additional layer types beyond the basic ones&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Shape Propagation&lt;/strong&gt;: More accurate and detailed shape calculations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Error Handling&lt;/strong&gt;: Provide more helpful error messages and validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Connections&lt;/strong&gt;: Allow creating connections between layers visually&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Save/Load Functionality&lt;/strong&gt;: Save and load network designs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Export to Multiple Formats&lt;/strong&gt;: Export to different backends and formats&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We welcome feedback and contributions to help shape the future of Aquarium IDE.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔗 Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-world/Neural" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-world/Neural/blob/main/docs/dsl.md" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.gg/KFku4KvS" rel="noopener noreferrer"&gt;Discord Community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-world/Neural/tree/main/examples" rel="noopener noreferrer"&gt;Example Notebooks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-world/Neural/tree/main/docs/blog" rel="noopener noreferrer"&gt;Blog Archive&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🙏 Feedback and Contributions
&lt;/h2&gt;

&lt;p&gt;As Aquarium IDE is in early development, we're especially interested in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bug Reports&lt;/strong&gt;: If you encounter issues, please report them on GitHub&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature Requests&lt;/strong&gt;: Let us know what features would be most useful to you&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Usability Feedback&lt;/strong&gt;: Tell us about your experience using the early preview&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contributions&lt;/strong&gt;: If you're interested in contributing to the development, check out our &lt;a href="https://github.com/Lemniscate-world/Neural/blob/main/CONTRIBUTING.md" rel="noopener noreferrer"&gt;Contributing Guidelines&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🏁 Conclusion
&lt;/h2&gt;

&lt;p&gt;Neural DSL v0.2.9 introduces an early preview of Aquarium IDE, our first step toward making neural network development more visual and accessible. While this is just the beginning and the current implementation has limitations, we believe this approach has the potential to help both beginners and experienced developers better understand their network architectures.&lt;/p&gt;


&lt;p&gt;We're looking forward to your feedback as we continue to develop Aquarium IDE. Please share your thoughts, suggestions, and questions with us on &lt;a href="https://discord.gg/KFku4KvS" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or &lt;a href="https://github.com/Lemniscate-world/Neural/discussions" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Neural DSL v0.2.7: Enhanced HPO Support and Parser Improvements</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Wed, 23 Apr 2025 12:15:05 +0000</pubDate>
      <link>https://forem.com/neural/neural-dsl-v027-enhanced-hpo-support-and-parser-improvements-27fg</link>
      <guid>https://forem.com/neural/neural-dsl-v027-enhanced-hpo-support-and-parser-improvements-27fg</guid>
      <description>&lt;p&gt;We're excited to announce the release of Neural DSL v0.2.7, which significantly improves hyperparameter optimization (HPO) support, particularly for convolutional layers and learning rate schedules.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's New in v0.2.7
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Enhanced HPO Support for Conv2D Layers
&lt;/h3&gt;

&lt;p&gt;One of the most significant improvements in v0.2.7 is the enhanced HPO support for Conv2D layers. You can now optimize the &lt;code&gt;kernel_size&lt;/code&gt; parameter using HPO, allowing for more flexible architecture search:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Conv2D with HPO for both filters and kernel_size&lt;/span&gt;
&lt;span class="s"&gt;Conv2D(&lt;/span&gt;
  &lt;span class="s"&gt;filters=HPO(choice(32, 64)),&lt;/span&gt;
  &lt;span class="s"&gt;kernel_size=HPO(choice((3,3), (5,5))),&lt;/span&gt;
  &lt;span class="s"&gt;padding=HPO(choice("same", "valid")),&lt;/span&gt;
  &lt;span class="s"&gt;activation="relu"&lt;/span&gt;
&lt;span class="s"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enhancement allows you to automatically search for the optimal kernel size configuration, which can significantly impact model performance, especially for computer vision tasks.&lt;/p&gt;
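&lt;p&gt;Conceptually, each &lt;code&gt;HPO(...)&lt;/code&gt; wraps a search space that the optimizer samples once per trial. A minimal random-search sketch (plain Python, assuming nothing about Neural's internal HPO engine, which is more sophisticated than this):&lt;/p&gt;

```python
import math
import random

# Hypothetical illustration of how HPO search spaces can be sampled per trial.
# Neural DSL's real HPO engine is more sophisticated; this is random search.
def sample(space):
    kind, args = space
    if kind == "choice":
        return random.choice(args)
    if kind == "log_range":
        lo, hi = args  # sample uniformly in log space
        return math.exp(random.uniform(math.log(lo), math.log(hi)))
    raise ValueError(kind)

# The Conv2D block above, written as explicit search spaces:
conv_space = {
    "filters": ("choice", [32, 64]),
    "kernel_size": ("choice", [(3, 3), (5, 5)]),
    "padding": ("choice", ["same", "valid"]),
}
trial = {name: sample(space) for name, space in conv_space.items()}
```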

&lt;h3&gt;
  
  
  Improved ExponentialDecay Parameter Structure
&lt;/h3&gt;

&lt;p&gt;We've also improved the ExponentialDecay parameter structure to support more complex decay schedules with better parameter handling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Enhanced ExponentialDecay with HPO for all parameters&lt;/span&gt;
&lt;span class="na"&gt;optimizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Adam(&lt;/span&gt;
  &lt;span class="s"&gt;learning_rate=ExponentialDecay(&lt;/span&gt;
    &lt;span class="s"&gt;HPO(log_range(1e-3, 1e-1)),&lt;/span&gt;       &lt;span class="c1"&gt;# Initial learning rate&lt;/span&gt;
    &lt;span class="s"&gt;HPO(choice(500, 1000, 2000)),&lt;/span&gt;      &lt;span class="c1"&gt;# Variable decay steps&lt;/span&gt;
    &lt;span class="s"&gt;HPO(range(0.9, 0.99, step=0.01))&lt;/span&gt;   &lt;span class="c1"&gt;# Decay rate&lt;/span&gt;
  &lt;span class="s"&gt;)&lt;/span&gt;
&lt;span class="s"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This improvement allows for more flexible learning rate schedule optimization, leading to better convergence and performance.&lt;/p&gt;
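&lt;p&gt;For reference, the three positional arguments map onto the usual exponential-decay formula, where the learning rate at a given step is the initial rate multiplied by the decay rate raised to the power of step over decay steps (the same formula as Keras's &lt;code&gt;ExponentialDecay&lt;/code&gt; with &lt;code&gt;staircase=False&lt;/code&gt;):&lt;/p&gt;

```python
# Sketch of the schedule the three HPO-tuned arguments feed into
# (same formula as tf.keras.optimizers.schedules.ExponentialDecay,
# staircase=False); shown here in plain Python for clarity.
def exponential_decay(step, initial_lr, decay_steps, decay_rate):
    return initial_lr * decay_rate ** (step / decay_steps)

# e.g. initial_lr=0.01, decay_steps=1000, decay_rate=0.95:
lr0 = exponential_decay(0, 0.01, 1000, 0.95)     # 0.01 at step 0
lr1 = exponential_decay(1000, 0.01, 1000, 0.95)  # 0.0095 after one decay period
```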

&lt;h3&gt;
  
  
  Extended Padding Options in Layers
&lt;/h3&gt;

&lt;p&gt;We've extended HPO support to padding parameters, allowing you to optimize the padding strategy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Conv2D with HPO for padding&lt;/span&gt;
&lt;span class="s"&gt;Conv2D(&lt;/span&gt;
  &lt;span class="s"&gt;filters=32,&lt;/span&gt;
  &lt;span class="s"&gt;kernel_size=(3,3),&lt;/span&gt;
  &lt;span class="s"&gt;padding=HPO(choice("same", "valid")),&lt;/span&gt;
  &lt;span class="s"&gt;activation="relu"&lt;/span&gt;
&lt;span class="s"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enhancement is particularly useful for computer vision tasks where the padding strategy can significantly impact the model's ability to capture features at the edges of images.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parser Improvements
&lt;/h3&gt;

&lt;p&gt;We've made several improvements to the parser:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixed metrics processing logic that was incorrectly placed in the exponential_decay method&lt;/li&gt;
&lt;li&gt;Improved HPO log_range parameter naming from low/high to min/max for consistency&lt;/li&gt;
&lt;li&gt;Enhanced HPO range handling with better step parameter defaults&lt;/li&gt;
&lt;li&gt;Removed redundant code in Conv2D kernel_size validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These improvements make the Neural DSL more robust and easier to use, with more consistent parameter naming and better error handling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with v0.2.7
&lt;/h2&gt;

&lt;p&gt;You can install Neural DSL v0.2.7 using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;neural-dsl&lt;span class="o"&gt;==&lt;/span&gt;0.2.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or upgrade from a previous version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; neural-dsl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Example: Advanced HPO Configuration
&lt;/h2&gt;

&lt;p&gt;Here's a complete example that demonstrates the new HPO features in v0.2.7:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;network AdvancedHPOExample {&lt;/span&gt;
  &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28, 1)&lt;/span&gt;
  &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Conv2D with HPO for filters, kernel_size, and padding&lt;/span&gt;
    &lt;span class="s"&gt;Conv2D(&lt;/span&gt;
      &lt;span class="s"&gt;filters=HPO(choice(32, 64)),&lt;/span&gt;
      &lt;span class="s"&gt;kernel_size=HPO(choice((3,3), (5,5))),&lt;/span&gt;
      &lt;span class="s"&gt;padding=HPO(choice("same", "valid")),&lt;/span&gt;
      &lt;span class="s"&gt;activation="relu"&lt;/span&gt;
    &lt;span class="s"&gt;)&lt;/span&gt;
    &lt;span class="s"&gt;MaxPooling2D(pool_size=(2,2))&lt;/span&gt;

    &lt;span class="s"&gt;# Another conv block with HPO&lt;/span&gt;
    &lt;span class="s"&gt;Conv2D(&lt;/span&gt;
      &lt;span class="s"&gt;filters=HPO(choice(64, 128)),&lt;/span&gt;
      &lt;span class="s"&gt;kernel_size=HPO(choice((3,3), (5,5))),&lt;/span&gt;
      &lt;span class="s"&gt;padding="same",&lt;/span&gt;
      &lt;span class="s"&gt;activation="relu"&lt;/span&gt;
    &lt;span class="s"&gt;)&lt;/span&gt;
    &lt;span class="s"&gt;MaxPooling2D(pool_size=(2,2))&lt;/span&gt;

    &lt;span class="s"&gt;# Flatten and dense layers&lt;/span&gt;
    &lt;span class="s"&gt;Flatten()&lt;/span&gt;
    &lt;span class="s"&gt;Dense(HPO(choice(128, 256, 512)), activation="relu")&lt;/span&gt;
    &lt;span class="s"&gt;Dropout(HPO(range(0.3, 0.7, step=0.1)))&lt;/span&gt;
    &lt;span class="s"&gt;Output(10, "softmax")&lt;/span&gt;

  &lt;span class="s"&gt;# Advanced optimizer configuration with HPO&lt;/span&gt;
  &lt;span class="s"&gt;optimizer&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Adam(&lt;/span&gt;
    &lt;span class="s"&gt;learning_rate=ExponentialDecay(&lt;/span&gt;
      &lt;span class="s"&gt;HPO(log_range(1e-3, 1e-1)),&lt;/span&gt;       &lt;span class="c1"&gt;# Initial learning rate&lt;/span&gt;
      &lt;span class="s"&gt;HPO(choice(500, 1000, 2000)),&lt;/span&gt;      &lt;span class="c1"&gt;# Variable decay steps&lt;/span&gt;
      &lt;span class="s"&gt;HPO(range(0.9, 0.99, step=0.01))&lt;/span&gt;   &lt;span class="c1"&gt;# Decay rate&lt;/span&gt;
    &lt;span class="s"&gt;)&lt;/span&gt;
  &lt;span class="s"&gt;)&lt;/span&gt;

  &lt;span class="s"&gt;loss&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sparse_categorical_crossentropy"&lt;/span&gt;

  &lt;span class="c1"&gt;# Training configuration with HPO&lt;/span&gt;
  &lt;span class="s"&gt;train {&lt;/span&gt;
    &lt;span class="s"&gt;epochs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
    &lt;span class="na"&gt;batch_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HPO(choice(32, 64, 128))&lt;/span&gt;
    &lt;span class="na"&gt;validation_split&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2&lt;/span&gt;
    &lt;span class="na"&gt;search_method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bayesian"&lt;/span&gt;  &lt;span class="c1"&gt;# Use Bayesian optimization&lt;/span&gt;
&lt;span class="err"&gt;  }&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;We're continuously working to improve Neural DSL and make it more powerful and user-friendly. In upcoming releases, we plan to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Further enhance the NeuralPaper.ai integration for better model visualization and annotation&lt;/li&gt;
&lt;li&gt;Expand PyTorch support to match TensorFlow capabilities&lt;/li&gt;
&lt;li&gt;Improve documentation with more examples and tutorials&lt;/li&gt;
&lt;li&gt;Add support for more advanced HPO techniques&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stay tuned for more updates, and as always, we welcome your feedback and contributions!&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Involved
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/Lemniscate-world/Neural" rel="noopener noreferrer"&gt;https://github.com/Lemniscate-world/Neural&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Documentation: &lt;a href="https://github.com/Lemniscate-world/Neural/blob/main/docs/dsl.md" rel="noopener noreferrer"&gt;https://github.com/Lemniscate-world/Neural/blob/main/docs/dsl.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/KFku4KvS" rel="noopener noreferrer"&gt;https://discord.gg/KFku4KvS&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy coding with Neural DSL!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>tensorflow</category>
    </item>
    <item>
      <title>Neural DSL v0.2.6: Enhanced Dashboard UI &amp; Blog Support</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Sun, 06 Apr 2025 18:08:40 +0000</pubDate>
      <link>https://forem.com/neural/neural-dsl-v026-enhanced-dashboard-ui-blog-support-4n4k</link>
      <guid>https://forem.com/neural/neural-dsl-v026-enhanced-dashboard-ui-blog-support-4n4k</guid>
      <description>&lt;p&gt;We're excited to announce the release of Neural DSL v0.2.6! This update brings significant improvements to the NeuralDbg dashboard with a more aesthetic design, along with blog support and several other enhancements and fixes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhanced Dashboard UI
&lt;/h2&gt;

&lt;p&gt;The standout feature in v0.2.6 is the completely redesigned NeuralDbg dashboard with a sleek dark theme and improved visualization components. The new dashboard provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dark Mode Theme&lt;/strong&gt;: A modern, eye-friendly dark interface using Dash Bootstrap components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsive Design&lt;/strong&gt;: Better layout that adapts to different screen sizes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Visualizations&lt;/strong&gt;: Enhanced tensor flow animations and shape propagation charts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Updates&lt;/strong&gt;: Fixed WebSocket connectivity for smoother data streaming&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These improvements make debugging and visualizing your neural networks more intuitive and aesthetically pleasing, helping you better understand model behavior during training and inference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the New Dashboard
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Basic usage with default dark theme&lt;/span&gt;
neural debug my_model.neural

&lt;span class="c"&gt;# Explicitly specify dark theme&lt;/span&gt;
neural debug my_model.neural &lt;span class="nt"&gt;--theme&lt;/span&gt; dark

&lt;span class="c"&gt;# Or use light theme if preferred&lt;/span&gt;
neural debug my_model.neural &lt;span class="nt"&gt;--theme&lt;/span&gt; light
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Dashboard Components
&lt;/h3&gt;

&lt;p&gt;The dashboard now includes several enhanced visualization components:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Example model to visualize in the dashboard
&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="n"&gt;MNISTClassifier&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nc"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;Dropout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;Output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;softmax&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;learning_rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.001&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this model, you can explore various dashboard features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run with gradient analysis enabled&lt;/span&gt;
neural debug my_model.neural &lt;span class="nt"&gt;--gradients&lt;/span&gt;

&lt;span class="c"&gt;# Run with dead neuron detection&lt;/span&gt;
neural debug my_model.neural &lt;span class="nt"&gt;--dead-neurons&lt;/span&gt;

&lt;span class="c"&gt;# Run with anomaly detection&lt;/span&gt;
neural debug my_model.neural &lt;span class="nt"&gt;--anomalies&lt;/span&gt;

&lt;span class="c"&gt;# Run with step-by-step debugging&lt;/span&gt;
neural debug my_model.neural &lt;span class="nt"&gt;--step&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Blog Support &amp;amp; Documentation
&lt;/h2&gt;

&lt;p&gt;We've added infrastructure for blog content with markdown support, making it easier to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Share updates about Neural DSL development&lt;/li&gt;
&lt;li&gt;Provide tutorials and examples&lt;/li&gt;
&lt;li&gt;Publish content both on our website and Dev.to&lt;/li&gt;
&lt;li&gt;Engage with the community through detailed technical content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This release also includes enhanced documentation with more detailed examples for HPO usage and error handling, making it easier for new users to get started with Neural DSL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Blog Directory Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docs/
  blog/
    README.md             # Blog overview and guidelines
    blog-list.json        # Metadata for all blog posts
    website_*.md          # Posts for the website
    devto_*.md            # Posts formatted for Dev.to
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating a Blog Post
&lt;/h3&gt;

&lt;p&gt;Here's an example of how to create a new blog post:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Title of Your Blog Post&lt;/span&gt;

&lt;span class="p"&gt;![&lt;/span&gt;&lt;span class="nv"&gt;Optional Image&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;../assets/images/your-image.png&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="ge"&gt;*Posted on Month Day, Year by Your Name*&lt;/span&gt;

First paragraph of your blog post...

&lt;span class="gu"&gt;## Section Heading&lt;/span&gt;

Content of your section...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Dev.to Integration
&lt;/h3&gt;

&lt;p&gt;For posts that will also be published on Dev.to, use the following frontmatter format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Title&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Here"&lt;/span&gt;
&lt;span class="na"&gt;published&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Brief&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;description&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;of&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;post"&lt;/span&gt;
&lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;machinelearning, python, deeplearning, opensource&lt;/span&gt;
&lt;span class="na"&gt;cover_image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://url-to-your-cover-image.png&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="gh"&gt;# Your Content Here&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced HPO Examples
&lt;/h2&gt;

&lt;p&gt;For users working with hyperparameter optimization, we've added comprehensive examples demonstrating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex nested HPO configurations&lt;/li&gt;
&lt;li&gt;Multi-framework optimization strategies&lt;/li&gt;
&lt;li&gt;Advanced parameter search spaces&lt;/li&gt;
&lt;li&gt;Integration with training loops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These examples make it easier to leverage Neural DSL's powerful HPO capabilities across both PyTorch and TensorFlow backends.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1072996525" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Example: Complex Nested HPO Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="n"&gt;AdvancedHPOExample&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Convolutional layers with HPO parameters
&lt;/span&gt;    &lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="c1"&gt;# Another conv block with HPO
&lt;/span&gt;    &lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="c1"&gt;# Flatten and dense layers
&lt;/span&gt;    &lt;span class="nc"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;Dropout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
    &lt;span class="nc"&gt;Output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;softmax&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;# Advanced optimizer configuration with HPO
&lt;/span&gt;  &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;SGD&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;learning_rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ExponentialDecay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.05&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.05&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;  &lt;span class="c1"&gt;# Initial learning rate
&lt;/span&gt;      &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;                              &lt;span class="c1"&gt;# Decay steps
&lt;/span&gt;      &lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.99&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;   &lt;span class="c1"&gt;# Decay rate
&lt;/span&gt;    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;momentum&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.99&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;# Training configuration with HPO
&lt;/span&gt;  &lt;span class="n"&gt;train&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;epochs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
    &lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;validation_split&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;
    &lt;span class="n"&gt;search_method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bayesian&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# Use Bayesian optimization
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
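&lt;p&gt;To build intuition for what these search spaces mean, here's a minimal pure-Python sketch of how &lt;code&gt;HPO(choice(...))&lt;/code&gt; and &lt;code&gt;HPO(range(...))&lt;/code&gt; parameters might be sampled for a single trial. This is illustrative only; the helper names below are hypothetical and not Neural DSL's actual implementation (which delegates to Optuna):&lt;/p&gt;

```python
import random

def choice(*options):
    """Hypothetical categorical search space: pick one of the given options."""
    return lambda: random.choice(options)

def hpo_range(low, high, step):
    """Hypothetical discrete range: low, low+step, ..., up to high."""
    n = int(round((high - low) / step))
    return lambda: low + step * random.randint(0, n)

# Sample one trial's worth of hyperparameters, mirroring the model above
trial = {
    "filters": choice(32, 64)(),
    "dense_units": choice(128, 256, 512)(),
    "dropout": hpo_range(0.3, 0.7, 0.1)(),
}
print(trial)
```

Each trial draws one concrete configuration like this; the search method (e.g. Bayesian optimization) then decides which configurations to try next based on earlier results.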



&lt;h3&gt;
  
  
  Running HPO Optimization
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run HPO with 50 trials&lt;/span&gt;
neural optimize my_model.neural &lt;span class="nt"&gt;--trials&lt;/span&gt; 50 &lt;span class="nt"&gt;--backend&lt;/span&gt; tensorflow

&lt;span class="c"&gt;# Run HPO with PyTorch backend&lt;/span&gt;
neural optimize my_model.neural &lt;span class="nt"&gt;--trials&lt;/span&gt; 30 &lt;span class="nt"&gt;--backend&lt;/span&gt; pytorch

&lt;span class="c"&gt;# Generate optimized model with best parameters&lt;/span&gt;
neural optimize my_model.neural &lt;span class="nt"&gt;--generate&lt;/span&gt; optimized_model.neural
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Other Improvements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CLI Version Display&lt;/strong&gt;: Updated version command to dynamically fetch package version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Reporting&lt;/strong&gt;: Improved error context with precise line/column information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Optimizations&lt;/strong&gt;: Faster shape propagation and tensor flow visualization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt;: Streamlined GitHub Actions workflows with better error reporting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Suite Stability&lt;/strong&gt;: Resolved flaky tests in dashboard and HPO components&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CLI Version Command Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run the version command to see details&lt;/span&gt;
neural version

&lt;span class="c"&gt;# Output:&lt;/span&gt;
&lt;span class="c"&gt;# Neural CLI v0.2.6&lt;/span&gt;
&lt;span class="c"&gt;# Python: 3.10.12&lt;/span&gt;
&lt;span class="c"&gt;# Click: 8.1.7&lt;/span&gt;
&lt;span class="c"&gt;# Lark: 1.1.7&lt;/span&gt;
&lt;span class="c"&gt;# Torch: 2.1.0&lt;/span&gt;
&lt;span class="c"&gt;# Tensorflow: 2.15.0&lt;/span&gt;
&lt;span class="c"&gt;# Optuna: 3.4.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Performance Improvements
&lt;/h3&gt;

&lt;p&gt;The shape propagation and tensor flow visualization have been optimized for better performance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before optimization: ~500ms for complex models
# After optimization: ~150ms for the same models
&lt;/span&gt;
&lt;span class="c1"&gt;# Example of visualizing shape propagation
&lt;/span&gt;&lt;span class="n"&gt;neural&lt;/span&gt; &lt;span class="n"&gt;visualize&lt;/span&gt; &lt;span class="n"&gt;my_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;neural&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt; &lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;show&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;shapes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bug Fixes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Fixed edge cases in HPO parameter validation and parsing&lt;/li&gt;
&lt;li&gt;Resolved WebSocket connection issues in the dashboard&lt;/li&gt;
&lt;li&gt;Improved error context in validation messages&lt;/li&gt;
&lt;li&gt;Enhanced validation for layer parameters&lt;/li&gt;
&lt;li&gt;Fixed test suite stability issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  HPO Parameter Validation Example
&lt;/h3&gt;

&lt;p&gt;Previously, certain nested HPO configurations would cause validation errors. Now they work correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# This would previously fail with a validation error
&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="n"&gt;ComplexHPO&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;)))))&lt;/span&gt;
    &lt;span class="nc"&gt;Output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;learning_rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.001&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  WebSocket Connection Fix
&lt;/h3&gt;

&lt;p&gt;The dashboard now maintains stable WebSocket connections for real-time updates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Internal implementation improvement&lt;/span&gt;
&lt;span class="c1"&gt;// Before: Connection would drop after ~30 seconds of inactivity&lt;/span&gt;
&lt;span class="c1"&gt;// After: Connections remain stable with proper ping/pong mechanism&lt;/span&gt;

&lt;span class="c1"&gt;// Example of how to connect to the dashboard API&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;socket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ws://localhost:8050/socket&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onmessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Received real-time update:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;neural-dsl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Get Involved
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-SHA-256/Neural" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-SHA-256/Neural/blob/main/docs/dsl.md" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.gg/KFku4KvS" rel="noopener noreferrer"&gt;Discord Community&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you find Neural DSL useful, please consider giving us a star on GitHub ⭐ and sharing this project with your friends and colleagues. The more developers we reach, the more likely we are to build something truly revolutionary together!&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>opensource</category>
      <category>ai</category>
    </item>
    <item>
      <title>Neural DSL v0.2.5 Update: Explore Multi-Framework HPO Features</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Sun, 30 Mar 2025 10:10:17 +0000</pubDate>
      <link>https://forem.com/neural/neural-dsl-v025-update-explore-multi-framework-hpo-features-4h10</link>
      <guid>https://forem.com/neural/neural-dsl-v025-update-explore-multi-framework-hpo-features-4h10</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhiq5wstwuq8z7ex86yu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhiq5wstwuq8z7ex86yu.png" alt="Neural DSL Logo" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're excited to announce the release of Neural DSL v0.2.5! This update brings significant improvements to hyperparameter optimization (HPO), making it seamlessly work across both PyTorch and TensorFlow backends, along with several other enhancements and fixes.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Spotlight Feature: Multi-Framework HPO Support
&lt;/h2&gt;

&lt;p&gt;The standout feature in v0.2.5 is the unified hyperparameter optimization system that works consistently across both PyTorch and TensorFlow backends. This means you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define your model and HPO parameters once&lt;/li&gt;
&lt;li&gt;Run optimization with either backend&lt;/li&gt;
&lt;li&gt;Compare results across frameworks&lt;/li&gt;
&lt;li&gt;Leverage the strengths of each framework&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's how easy it is to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;network HPOExample {&lt;/span&gt;
  &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28, 1)&lt;/span&gt;
  &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;Conv2D(filters=HPO(choice(32, 64)), kernel_size=(3,3))&lt;/span&gt;
    &lt;span class="s"&gt;MaxPooling2D(pool_size=(2,2))&lt;/span&gt;
    &lt;span class="s"&gt;Flatten()&lt;/span&gt;
    &lt;span class="s"&gt;Dense(HPO(choice(128, 256, 512)))&lt;/span&gt;
    &lt;span class="s"&gt;Output(10, "softmax")&lt;/span&gt;
  &lt;span class="s"&gt;optimizer&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))&lt;/span&gt;
  &lt;span class="s"&gt;train {&lt;/span&gt;
    &lt;span class="s"&gt;epochs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;search_method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bayesian"&lt;/span&gt;
&lt;span class="err"&gt;  }&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run with either backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# PyTorch backend&lt;/span&gt;
neural compile model.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; pytorch &lt;span class="nt"&gt;--hpo&lt;/span&gt;

&lt;span class="c"&gt;# TensorFlow backend&lt;/span&gt;
neural compile model.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; tensorflow &lt;span class="nt"&gt;--hpo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ✨ Enhanced Optimizer Handling
&lt;/h2&gt;

&lt;p&gt;We've significantly improved how optimizers are handled in the DSL:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No-Quote Syntax&lt;/strong&gt;: Cleaner syntax for optimizer parameters without quotes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nested HPO Parameters&lt;/strong&gt;: Full support for HPO within learning rate schedules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scientific Notation&lt;/strong&gt;: Better handling of scientific notation (e.g., &lt;code&gt;1e-4&lt;/code&gt; vs &lt;code&gt;0.0001&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;optimizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Adam(learning_rate=HPO(log_range(1e-4,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1e-2)))"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;optimizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Advanced example with learning rate schedules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;optimizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SGD(&lt;/span&gt;
  &lt;span class="s"&gt;learning_rate=ExponentialDecay(&lt;/span&gt;
    &lt;span class="s"&gt;HPO(range(0.05, 0.2, step=0.05)),&lt;/span&gt;  &lt;span class="c1"&gt;# Initial learning rate&lt;/span&gt;
    &lt;span class="s"&gt;1000,&lt;/span&gt;                              &lt;span class="c1"&gt;# Decay steps&lt;/span&gt;
    &lt;span class="s"&gt;HPO(range(0.9, 0.99, step=0.01))&lt;/span&gt;   &lt;span class="c1"&gt;# Decay rate&lt;/span&gt;
  &lt;span class="s"&gt;),&lt;/span&gt;
  &lt;span class="s"&gt;momentum=HPO(range(0.8, 0.99, step=0.01))&lt;/span&gt;
&lt;span class="s"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
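&lt;p&gt;For reference, an exponential-decay schedule like the one above follows the standard formula &lt;code&gt;lr = initial_lr * decay_rate ** (step / decay_steps)&lt;/code&gt;. A minimal sketch, using illustrative values drawn from the search ranges shown (not output from Neural DSL itself):&lt;/p&gt;

```python
def exponential_decay(initial_lr, decay_steps, decay_rate, step):
    """Standard exponential decay: lr = initial_lr * decay_rate ** (step / decay_steps)."""
    return initial_lr * decay_rate ** (step / decay_steps)

# e.g. initial_lr=0.1, decay_steps=1000, decay_rate=0.95
for step in (0, 1000, 2000):
    print(step, exponential_decay(0.1, 1000, 0.95, step))
```

So with these values the learning rate drops from 0.1 to 0.095 after 1000 steps, and to about 0.09 after 2000; HPO then searches over the initial rate and decay rate jointly with the other hyperparameters.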



&lt;h2&gt;
  
  
  📊 Precision &amp;amp; Recall Metrics
&lt;/h2&gt;

&lt;p&gt;Training loops now report precision and recall alongside loss and accuracy, giving you a more comprehensive view of your model's performance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;precision&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;recall&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;train_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val_loader&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
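&lt;p&gt;For a binary task, precision and recall reduce to counts of true positives, false positives, and false negatives. A minimal illustrative sketch of the computation (not the actual &lt;code&gt;train_model&lt;/code&gt; internals):&lt;/p&gt;

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

p, r = precision_recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
print(p, r)
```

Precision tells you how many flagged positives were real; recall tells you how many real positives you caught. Seeing both alongside loss and accuracy helps catch models that look accurate only because the classes are imbalanced.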



&lt;h2&gt;
  
  
  🛠️ Other Improvements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Error Message Enhancements&lt;/strong&gt;: More detailed error messages with line/column information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer Validation&lt;/strong&gt;: Better validation for MaxPooling2D, BatchNormalization, Dropout, and Conv2D layers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TensorRT Integration&lt;/strong&gt;: Added conditional TensorRT setup in CI pipeline for GPU environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VSCode Snippets&lt;/strong&gt;: Added code snippets for faster Neural DSL development in VSCode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt;: Enhanced GitHub Actions workflows with better error handling and reporting&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🐛 Bug Fixes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Fixed parsing of optimizer HPO parameters without quotes&lt;/li&gt;
&lt;li&gt;Corrected string representation handling in HPO parameters&lt;/li&gt;
&lt;li&gt;Resolved issues with nested HPO parameters in learning rate schedules&lt;/li&gt;
&lt;li&gt;Enhanced validation for various layer types&lt;/li&gt;
&lt;li&gt;Fixed parameter handling in Concatenate, Activation, Lambda, and Embedding layers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📦 Installation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;neural-dsl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🔗 Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-SHA-256/Neural" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-SHA-256/Neural/blob/main/docs/dsl.md" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord.gg/KFku4KvS" rel="noopener noreferrer"&gt;Discord Community&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🙏 Support Us
&lt;/h2&gt;

&lt;p&gt;If you find Neural DSL useful, please consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Giving us a star on GitHub ⭐&lt;/li&gt;
&lt;li&gt;Sharing this project with your friends and colleagues&lt;/li&gt;
&lt;li&gt;Contributing to the codebase or documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The more developers we reach, the more likely we are to build something truly revolutionary together!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Neural DSL is a domain-specific language for defining, training, debugging, and deploying neural networks with declarative syntax, cross-framework support, and built-in execution tracing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Neural DSL is a work-in-progress DSL and debugger; bugs exist, and feedback is welcome! This project is under active development and not yet production-ready!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Boosting Neural Networks with Automated Hyperparameter Optimization</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Wed, 26 Mar 2025 06:05:47 +0000</pubDate>
      <link>https://forem.com/neural/boosting-neural-networks-with-automated-hyperparameter-optimization-2g7n</link>
      <guid>https://forem.com/neural/boosting-neural-networks-with-automated-hyperparameter-optimization-2g7n</guid>
      <description>&lt;p&gt;Note: Neural is a work-in-progress DSL and debugger — bugs exist, and I’m eager for your feedback on &lt;a href="https://form.typeform.com/to/xcibBdKD#name=xxxxx&amp;amp;email=xxxxx&amp;amp;phone_number=xxxxx&amp;amp;user_id=xxxxx&amp;amp;product_id=xxxxx&amp;amp;auth_code=xxxxx" rel="noopener noreferrer"&gt;Typeform&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi0z92jfnfvwa2rvaq5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi0z92jfnfvwa2rvaq5t.png" alt="Image description" width="720" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
   Introduction: From Guesswork to Precision⚡️
&lt;/h2&gt;

&lt;p&gt;If you’ve ever built a neural network, you know the drill: tweak the learning rate, adjust the batch size, fiddle with layer sizes, rinse and repeat until something works. &lt;br&gt;
It’s a critical step, but it’s also a grind. What if you could hand this off to an intelligent system that finds the sweet spot for you? &lt;/p&gt;

&lt;p&gt;That’s where the Hyperparameter Optimization (HPO) feature in Neural comes in. Built into our DSL, it automates the tuning process with a single function call, whether you’re targeting PyTorch or TensorFlow. &lt;/p&gt;

&lt;p&gt;In this post, I’ll show you how it works, demo it on MNIST, and peek under the hood at how we made it robust across edge cases and full pipelines. Ready to ditch the guesswork? Let’s dive in.&lt;/p&gt;


&lt;h2&gt;
  
  
  Why HPO Matters in Neural☄️
&lt;/h2&gt;

&lt;p&gt;Neural is all about solving deep learning pain points (shape mismatches, debugging complexity, framework switching), and HPO is a cornerstone of that mission. As our README highlights, it tackles Medium Criticality, High Impact challenges like “HPO Inconsistency” by unifying tuning across frameworks. With Neural’s declarative syntax, you tag parameters with &lt;code&gt;HPO()&lt;/code&gt;, and the tool does the rest: no more fragmented scripts or framework-specific hacks.&lt;/p&gt;


&lt;h2&gt;
  
  
  The HPO Feature: What It Does🌚
&lt;/h2&gt;

&lt;p&gt;Our HPO feature, introduced in issue #434, lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define tunable parameters in the DSL (e.g., &lt;code&gt;Dense(HPO(choice(128, 256)))&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Run optimization with &lt;code&gt;optimize_and_return&lt;/code&gt; to get the best settings.&lt;/li&gt;
&lt;li&gt;Generate an optimized config with &lt;code&gt;generate_optimized_dsl&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s multi-framework, powered by Optuna, and handles everything from bare-minimum models to complex architectures. &lt;/p&gt;
&lt;h2&gt;
  
  
  Here’s how it fits into Neural’s ecosystem:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Shape Propagation: Ensures your optimized model is structurally sound.&lt;/li&gt;
&lt;li&gt;NeuralDbg: Lets you debug the tuned model’s execution.&lt;/li&gt;
&lt;li&gt;CLI Integration: Run &lt;code&gt;neural run --hpo&lt;/code&gt; to optimize on the fly.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  VIDEO
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://drive.google.com/file/d/1D9Gzk5Y3A6ejqUcDtJGnTQhNmBbc7pLF/view?usp=drive_link" rel="noopener noreferrer"&gt;https://drive.google.com/file/d/1D9Gzk5Y3A6ejqUcDtJGnTQhNmBbc7pLF/view?usp=drive_link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Forgive me for my sins, I have a low-end pc… 🤣&lt;/p&gt;


&lt;h2&gt;
  
  
  How to Use It: A Quick Demo👨🏿‍💻
&lt;/h2&gt;

&lt;p&gt;Let’s optimize a simple MNIST classifier.&lt;/p&gt;

&lt;p&gt;First, define the model in mnist_hpo.neural:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;network MNISTClassifier {&lt;/span&gt;
  &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28, 1)&lt;/span&gt;
  &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;Dense(HPO(choice(128, 256)))&lt;/span&gt;
    &lt;span class="s"&gt;Dropout(HPO(range(0.3, 0.7, step=0.1)))&lt;/span&gt;
    &lt;span class="s"&gt;Output(10, "softmax")&lt;/span&gt;
  &lt;span class="s"&gt;optimizer&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))&lt;/span&gt;
  &lt;span class="s"&gt;train {&lt;/span&gt;
    &lt;span class="s"&gt;epochs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;search_method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;random"&lt;/span&gt;  &lt;span class="c1"&gt;# or "bayesian"&lt;/span&gt;
&lt;span class="err"&gt;  }&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;neural run mnist_hpo.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; pytorch &lt;span class="nt"&gt;--hpo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logs:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;....
....


INFO: best_params: {'batch_size': 16, 'dense_units': 256, 'dropout_rate': 0.3, 'learning_rate': 0.0004154879407857402}
INFO: hpo_params: [{'layer_type': 'Dense', 'param_name': 'units', 'hpo': {'type': 'categorical', 'values': [128, 256]}, 'node': [{'hpo': {'type': 'categorical', 'values': [128, 256]}}]}, {'layer_type': 'Dropout', 'param_name': 'rate', 'hpo': {'type': 'range', 'start': 0.3, 'end': 0.7, 'step': 0.1}, 'node': [{'hpo': {'type': 'range', 'start': 0.3, 'end': 0.7, 'step': 0.1}}]}]
INFO: Processing hpo: {'layer_type': 'Dense', 'param_name': 'units', 'hpo': {'type': 'categorical', 'values': [128, 256]}, 'node': [{'hpo': {'type': 'categorical', 'values': [128, 256]}}]}, param_key: dense_units, hpo_str: choice(128, 256)
DEBUG: Line 0 'network MNISTClassifier {' does not contain 'HPO(choice(128, 256))'
DEBUG: Line 1 '  input: (28, 28, 1)' does not contain 'HPO(choice(128, 256))'
DEBUG: Line 2 '  layers:' does not contain 'HPO(choice(128, 256))'
INFO: Replaced line 3: '    Dense(HPO(choice(128, 256)))' -&amp;gt; '    Dense(256)'
INFO: Processing hpo: {'layer_type': 'Dropout', 'param_name': 'rate', 'hpo': {'type': 'range', 'start': 0.3, 'end': 0.7, 'step': 0.1}, 'node': [{'hpo': {'type': 'range', 'start': 0.3, 'end': 0.7, 'step': 0.1}}]}, param_key: dropout_rate, hpo_str: range(0.3, 0.7, step=0.1)
DEBUG: Line 0 'network MNISTClassifier {' does not contain 'HPO(range(0.3, 0.7, step=0.1))'
DEBUG: Line 1 '  input: (28, 28, 1)' does not contain 'HPO(range(0.3, 0.7, step=0.1))'
DEBUG: Line 2 '  layers:' does not contain 'HPO(range(0.3, 0.7, step=0.1))'
DEBUG: Line 3 '    Dense(256)' does not contain 'HPO(range(0.3, 0.7, step=0.1))'
INFO: Replaced line 4: '    Dropout(HPO(range(0.3, 0.7, step=0.1)))' -&amp;gt; '    Dropout(0.3)'
INFO: Replaced line 6 (learning_rate): '  optimizer: Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))' -&amp;gt; '  optimizer: Adam(learning_rate=0.0004154879407857402)'
INFO: Final lines: ['network MNISTClassifier {', '  input: (28, 28, 1)', '  layers:', '    Dense(256)', '    Dropout(0.3)', '    Output(10, "softmax")', '  optimizer: Adam(learning_rate=0.0004154879407857402)', '  train {', '    epochs: 10', '    search_method: "bayesian"', '  }', '}']
DEBUG: Network items: ['MNISTClassifier', "{'type': 'Input', 'shape': (28, 28, 1)}", "[{'type': 'Dense', 'params': {'units': 256}, 'sublayers': []}, {'type': 'Dropout', 'params': {'rate': 0.3}, 'sublayers': []}, {'type': 'Output', 'params': {'units': 10, 'activation': 'softmax'}, 'sublayers': []}]", "{'type': 'Adam', 'params': {'learning_rate': 0.0004154879407857402}}", "{'type': 'training_config', 'params': {'epochs': 10, 'search_method': 'bayesian'}}"]
DEBUG: Item 3: type=&amp;lt;class 'dict'&amp;gt;, data=N/A, value={'type': 'Adam', 'params': {'learning_rate': 0.0004154879407857402}}
DEBUG: Item 4: type=&amp;lt;class 'dict'&amp;gt;, data=N/A, value={'type': 'training_config', 'params': {'epochs': 10, 'search_method': 'bayesian'}}
INFO: Compiled optimized mnist_hpo.neural to mnist_hpo_optimized_pytorch.py
Epoch 1/10 - Loss: 1.5841
Epoch 2/10 - Loss: 1.5561
Epoch 3/10 - Loss: 1.5517
Epoch 4/10 - Loss: 1.5490
Epoch 5/10 - Loss: 1.5477
Epoch 6/10 - Loss: 1.5463
Epoch 7/10 - Loss: 1.5465
Epoch 8/10 - Loss: 1.5468
Epoch 9/10 - Loss: 1.5450
Epoch 10/10 - Loss: 1.5446
Accuracy: 92.02%
INFO: Execution completed successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbajbnguacdqkj7498w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbajbnguacdqkj7498w5.png" alt="Image description" width="592" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlvaab6m66ypika4ibmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlvaab6m66ypika4ibmx.png" alt="Image description" width="720" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Steps Explained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Tag Parameters: Use &lt;code&gt;HPO(choice())&lt;/code&gt;, &lt;code&gt;HPO(range())&lt;/code&gt;, or &lt;code&gt;HPO(log_range())&lt;/code&gt; as per the DSL docs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimize: &lt;code&gt;optimize_and_return&lt;/code&gt; runs 3 Optuna trials, testing combinations of batch size, units, dropout rate, and learning rate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply: &lt;code&gt;generate_optimized_dsl&lt;/code&gt; swaps &lt;code&gt;HPO()&lt;/code&gt; tags with the best values.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
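&lt;p&gt;The replacement pass in the logs above (the "Replaced line N" messages) boils down to a line-by-line text substitution. Here is a minimal sketch of that idea; &lt;code&gt;apply_best_params&lt;/code&gt; is an illustrative helper, not Neural's actual &lt;code&gt;generate_optimized_dsl&lt;/code&gt;:&lt;/p&gt;

```python
def apply_best_params(dsl_text, replacements):
    # Swap each HPO(...) tag for its tuned value, first match wins,
    # mirroring the "Replaced line N" messages in the log above.
    lines = dsl_text.splitlines()
    for hpo_tag, best_value in replacements.items():
        for i, line in enumerate(lines):
            if hpo_tag in line:
                lines[i] = line.replace(hpo_tag, str(best_value))
                break
    return "\n".join(lines)

dsl = """network MNISTClassifier {
  layers:
    Dense(HPO(choice(128, 256)))
    Dropout(HPO(range(0.3, 0.7, step=0.1)))
}"""

optimized = apply_best_params(dsl, {
    "HPO(choice(128, 256))": 256,
    "HPO(range(0.3, 0.7, step=0.1))": 0.3,
})
```

&lt;p&gt;After the pass, the config contains plain values like &lt;code&gt;Dense(256)&lt;/code&gt; and &lt;code&gt;Dropout(0.3)&lt;/code&gt; and is ready to compile.&lt;/p&gt;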




&lt;h2&gt;
  
  
  Edge Case: Minimal Models
&lt;/h2&gt;

&lt;p&gt;What about a super-simple model?&lt;/p&gt;

&lt;p&gt;Here’s an edge case we’ve polished:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;network Tiny {&lt;/span&gt;
  &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28, 1)&lt;/span&gt;
  &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;Output(10)&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running still works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;neural run tiny.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; pytorch &lt;span class="nt"&gt;--hpo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logs: Best parameters found: {'batch_size': 16, 'learning_rate': 0.001}&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No Dense or Dropout layers? No problem: Neural defaults to Adam with a 0.001 learning rate and optimizes batch size, keeping things smooth.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup444rmw7jgpyj8n69gt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup444rmw7jgpyj8n69gt.png" alt="Image description" width="720" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🫴🏿 Real-World Impact: MNIST Results
&lt;/h2&gt;

&lt;p&gt;Using the MNIST example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Manual Tuning: Batch size 64, 256 units, 0.5 dropout, 0.01 learning rate → 85% accuracy, 15s/epoch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HPO-Tuned: {'batch_size': 16, 'dense_units': 256, 'dropout_rate': 0.3, 'learning_rate': 0.0004154879407857402} → 92.02% accuracy, 12s/epoch.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s a 7-point accuracy boost (85% → 92.02%) and faster training, all automated. Imagine scaling this to a Vision Transformer or a custom NLP model.&lt;/p&gt;




&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;Here’s the tech magic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Optuna: Drives multi-objective optimization (loss, accuracy, etc.), as seen in optimize_and_return.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dynamic Models: &lt;code&gt;create_dynamic_model&lt;/code&gt; builds PyTorch layers on the fly, e.g., &lt;code&gt;trial.suggest_categorical("Dense_units", [128, 256])&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Normalization: I wrestled with key mismatches (e.g., 'Dense_units' vs 'dense_units') and edge cases (no HPO params), settling on conditional logic to keep it robust.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
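&lt;p&gt;The key-normalization fix can be sketched in a few lines; &lt;code&gt;normalize_key&lt;/code&gt; is an illustrative helper, not Neural's actual code:&lt;/p&gt;

```python
def normalize_key(layer_type, param_name):
    # Collapse 'Dense_units' / 'dense_units' style variants into one
    # canonical lowercase key before looking up trial results.
    return "{}_{}".format(layer_type, param_name).lower()

best_params = {"Dense_units": 256, "dropout_rate": 0.3}
normalized = {key.lower(): value for key, value in best_params.items()}
```

&lt;p&gt;With every lookup going through one canonical form, 'Dense_units' from Optuna and 'dense_units' from the parser resolve to the same entry.&lt;/p&gt;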




&lt;h2&gt;
  
  
  Challenges Conquered:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Edge Case Fix: A KeyError in minimal configs was squashed by defaulting optimizer settings and skipping absent parameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Full Pipeline: Case-sensitive key handling ensured all HPO params (units, dropout, learning rate) made it to the output.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
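&lt;p&gt;The defaulting fix amounts to falling back gracefully instead of indexing into missing keys. A hedged sketch, where &lt;code&gt;resolve_optimizer&lt;/code&gt; and the &lt;code&gt;model_dict&lt;/code&gt; keys are illustrative rather than Neural's real API:&lt;/p&gt;

```python
def resolve_optimizer(model_dict):
    # Fall back to Adam with lr=0.001 when the config omits an optimizer,
    # instead of raising KeyError on minimal models like Tiny above.
    opt = model_dict.get("optimizer") or {"type": "Adam", "params": {}}
    params = opt.get("params", {})
    params.setdefault("learning_rate", 0.001)
    return opt["type"], params

# A minimal config with no optimizer block at all still resolves cleanly.
name, params = resolve_optimizer({"input": {"shape": (28, 28, 1)}})
```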

&lt;p&gt;Check the &lt;a href="https://github.com/Lemniscate-world/Neural/blob/main/docs/dsl.md" rel="noopener noreferrer"&gt;DSL Docs&lt;/a&gt; (#hyperparameter-optimization) for supported HPO types and validation rules.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why You’ll Love It👍🏿
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Time Savings: Hours of tuning → minutes of automation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consistency: Same HPO logic across PyTorch and TensorFlow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accessibility: No ML PhD required — just tag and run.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try It Out!👈🏿
&lt;/h2&gt;

&lt;p&gt;Clone Neural, spin up this example, and let us know how it goes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Lemniscate-SHA-256/Neural.git
&lt;span class="nb"&gt;cd &lt;/span&gt;Neural
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
python your_script.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;💪🏿Star it on &lt;a href="https://github.com/Lemniscate-world/Neural" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, join our &lt;a href="https://discord.gg/KFku4KvS" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;, or share your thoughts on Twitter &lt;a href="https://x.com/NLang4438" rel="noopener noreferrer"&gt;@NLang4438&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What hyperparameters do you want to optimize next?&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>opensource</category>
      <category>ai</category>
    </item>
    <item>
      <title>Increase Productivity with Neural DSL v0.2.4: Automatic Shape Propagation Explained</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Sun, 23 Mar 2025 07:05:30 +0000</pubDate>
      <link>https://forem.com/neural/neural-dsl-v024-automatic-shape-propagation-and-more-ol1</link>
      <guid>https://forem.com/neural/neural-dsl-v024-automatic-shape-propagation-and-more-ol1</guid>
      <description>&lt;p&gt;Explore how Neural DSL’s automatic shape propagation catches dimension errors pre-runtime, alongside fixes that make deep learning development smoother.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupltlkxkyz66wuay51n8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupltlkxkyz66wuay51n8.jpg" alt="Image description" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hey Dev.to folks! 🏊🏽‍♂️&lt;/p&gt;

&lt;p&gt;I’ve been pouring my heart into &lt;strong&gt;Neural DSL&lt;/strong&gt;, a domain-specific language (DSL) for crafting, training, and debugging neural networks without the usual headaches. &lt;/p&gt;

&lt;p&gt;Our latest drop, &lt;strong&gt;v0.2.4&lt;/strong&gt; (March 23, 2025), is live, and the killer feature this time is &lt;strong&gt;Automatic Shape Propagation&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;It’s like a pre-flight check for your tensor shapes, catching mismatches before they crash your runtime. Let’s unpack this, plus some other goodies from the update.&lt;/p&gt;




&lt;h2&gt;
  
  
  🌟 Automatic Shape Propagation: No More Shape Guessing
&lt;/h2&gt;

&lt;p&gt;Ever debugged a &lt;code&gt;RuntimeError: size mismatch&lt;/code&gt; at 2 AM? Me too. &lt;/p&gt;

&lt;p&gt;Neural DSL’s &lt;code&gt;ShapePropagator&lt;/code&gt; now auto-tracks tensor shapes through every layer, flagging issues &lt;em&gt;before&lt;/em&gt; you hit run. &lt;/p&gt;

&lt;p&gt;It’s baked into v0.2.4 and makes defining networks like this a breeze:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;network MNISTClassifier {&lt;/span&gt;
  &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28, 1)&lt;/span&gt;  &lt;span class="c1"&gt;# Channels-last&lt;/span&gt;
  &lt;span class="na"&gt;layers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;Conv2D(filters=32, kernel_size=(3,3))&lt;/span&gt;  &lt;span class="c1"&gt;# Shape: (26, 26, 32)&lt;/span&gt;
    &lt;span class="s"&gt;MaxPooling2D(pool_size=(2,2))&lt;/span&gt;          &lt;span class="c1"&gt;# Shape: (13, 13, 32)&lt;/span&gt;
    &lt;span class="s"&gt;Flatten()&lt;/span&gt;                              &lt;span class="c1"&gt;# Shape: (5408)&lt;/span&gt;
    &lt;span class="s"&gt;Dense(units=128)&lt;/span&gt;                       &lt;span class="c1"&gt;# Shape: (128)&lt;/span&gt;
    &lt;span class="s"&gt;Output(units=10, activation="softmax")&lt;/span&gt; &lt;span class="c1"&gt;# Shape: (10)&lt;/span&gt;
  &lt;span class="na"&gt;loss&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sparse_categorical_crossentropy"&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;neural visualize mnist.neural --format html&lt;/code&gt;, and you get an interactive shape flow diagram. &lt;/p&gt;

&lt;p&gt;No more manual math or surprise errors: v0.2.4 fixed the &lt;code&gt;in_features&lt;/code&gt; calculation (test &lt;code&gt;test_model_forward_flat_input&lt;/code&gt;) to compute shapes before propagation overwrites them. &lt;/p&gt;

&lt;p&gt;It’s a lifesaver for complex architectures.&lt;/p&gt;
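&lt;p&gt;The shape arithmetic behind the comments in the network above is easy to check by hand. A minimal sketch, assuming 'valid' padding and stride 1 (this is back-of-the-envelope math, not Neural's actual &lt;code&gt;ShapePropagator&lt;/code&gt;):&lt;/p&gt;

```python
def conv2d_shape(h, w, c_in, filters, kernel):
    # 'valid' convolution, stride 1: each spatial dim shrinks by (kernel - 1)
    kh, kw = kernel
    return (h - kh + 1, w - kw + 1, filters)

def maxpool2d_shape(h, w, c, pool):
    # Non-overlapping pooling: each spatial dim divided by the pool size
    ph, pw = pool
    return (h // ph, w // pw, c)

shape = (28, 28, 1)                                      # input
shape = conv2d_shape(*shape, filters=32, kernel=(3, 3))  # (26, 26, 32)
shape = maxpool2d_shape(*shape, pool=(2, 2))             # (13, 13, 32)
flat = shape[0] * shape[1] * shape[2]                    # 5408
```

&lt;p&gt;This reproduces exactly the per-layer comments in the DSL snippet, including the 5408 features feeding the Dense layer.&lt;/p&gt;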




&lt;h2&gt;
  
  
  🤏🏽 Other v0.2.4 Wins
&lt;/h2&gt;

&lt;p&gt;Shape propagation shines, but the release also polishes rough edges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Conv2D Fix (#427): PyTorch now uses channels-first (None, 1, 28, 28) properly, with TensorFlow data loader support added. Vision models just work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Training Stability (#428): Swapped None loaders for mocked ones, added precision metrics, and optimized device selection (execution_optimization picks CPU/GPU automatically).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimizer Tests (#429): New MockDataset and MockDataLoader ensure edge cases don’t slip through.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tackle core pain points (shape mismatches and debugging woes) straight from our &lt;a href="https://github.com/Lemniscate-world/Neural?tab=readme-ov-file#-pain-points-solved" rel="noopener noreferrer"&gt;Criticality vs. Impact table&lt;/a&gt;.&lt;/p&gt;
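&lt;p&gt;The channels-first conversion from #427 is, at heart, an axis reorder. A minimal sketch with illustrative helper names (not the actual fix):&lt;/p&gt;

```python
def to_channels_first(shape):
    # Reorder channels-last (H, W, C) into PyTorch's (C, H, W) convention
    h, w, c = shape
    return (c, h, w)

def with_batch(shape):
    # Prepend the batch dimension as None
    return (None,) + shape

# A channels-last (28, 28, 1) MNIST input becomes (None, 1, 28, 28)
pytorch_shape = with_batch(to_channels_first((28, 28, 1)))
```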




&lt;h2&gt;
  
  
  🤖 Get Started
&lt;/h2&gt;

&lt;p&gt;Clone it, play with it, break it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Lemniscate-world/Neural.git
&lt;span class="nb"&gt;cd &lt;/span&gt;Neural
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
neural run examples/mnist.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; pytorch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
   👾 Join Us
&lt;/h2&gt;

&lt;p&gt;Bugs linger (e.g., TensorFlow loader validation), but that’s where you come in. &lt;/p&gt;

&lt;p&gt;Star us on &lt;a href="https://github.com/Lemniscate-world/Neural" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, hit up &lt;a href="https://discord.gg/KFku4KvS" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Your feedback drives this.&lt;/p&gt;

&lt;p&gt;Comment below or ping me on Twitter &lt;a href="https://x.com/NLang4438"&gt;@NLang4438&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s make deep learning less painful together!&lt;/p&gt;

&lt;p&gt;Full Changelog: &lt;a href="https://github.com/Lemniscate-world/Neural/blob/main/CHANGELOG.md" rel="noopener noreferrer"&gt;v0.2.4&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tensorflow</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Hyperparameter Optimization Across Frameworks Made Simple - Neural DSL v0.2.3</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Sun, 16 Mar 2025 20:02:49 +0000</pubDate>
      <link>https://forem.com/neural/hyperparameter-optimization-across-frameworks-made-simple-neural-dsl-v023-338e</link>
      <guid>https://forem.com/neural/hyperparameter-optimization-across-frameworks-made-simple-neural-dsl-v023-338e</guid>
      <description>&lt;p&gt;Hey Dev.to community! &lt;br&gt;
I’m excited to share the latest update to &lt;a href="https://github.com/Lemniscate-world/Neural" rel="noopener noreferrer"&gt;Neural DSL&lt;/a&gt;, a work-in-progress domain-specific language for defining, training, and debugging neural networks. With &lt;strong&gt;v0.2.3&lt;/strong&gt; (released March 16, 2025), I supercharged one feature I think you’ll love: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;multi-framework Hyperparameter Optimization (HPO)&lt;/strong&gt;. Plus, I squashed some bugs and added new layer support to keep things moving forward. Let’s dive in!&lt;/p&gt;


&lt;h2&gt;
  
  
  🌟 Spotlight: Multi-Framework HPO (#434)
&lt;/h2&gt;

&lt;p&gt;Imagine defining a neural network once and optimizing its hyperparameters for &lt;em&gt;both&lt;/em&gt; PyTorch and TensorFlow without rewriting a single line. That’s what v0.2.3 brings to the table. Whether you’re tuning layer sizes, dropout rates, or learning rates, Neural DSL now handles HPO seamlessly across frameworks all from a single declarative config.&lt;/p&gt;
&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;Define your model with HPO parameters in the DSL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;network HPOExample {&lt;/span&gt;
  &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28, 1)&lt;/span&gt;  &lt;span class="c1"&gt;# MNIST input&lt;/span&gt;
  &lt;span class="na"&gt;layers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;Dense(HPO(choice(128, 256)))&lt;/span&gt;  &lt;span class="c1"&gt;# Sample units&lt;/span&gt;
    &lt;span class="s"&gt;Dropout(HPO(range(0.3, 0.7, step=0.1)))&lt;/span&gt;  &lt;span class="c1"&gt;# Sample dropout rate&lt;/span&gt;
    &lt;span class="s"&gt;Output(10, "softmax")&lt;/span&gt;
  &lt;span class="na"&gt;optimizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))&lt;/span&gt;  &lt;span class="c1"&gt;# Log-scale LR&lt;/span&gt;
  &lt;span class="s"&gt;train {&lt;/span&gt;
    &lt;span class="s"&gt;epochs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;search_method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;random"&lt;/span&gt;  &lt;span class="c1"&gt;# Or "bayesian"&lt;/span&gt;
  &lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it with a single command, switching frameworks on the fly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
neural run hpo_example.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; pytorch &lt;span class="nt"&gt;--output&lt;/span&gt; model_torch.py
neural run hpo_example.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; tensorflow  &lt;span class="nt"&gt;--output&lt;/span&gt; model_tf.py

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
   Behind the scenes, Neural DSL:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Parses the HPO specs into a framework-agnostic &lt;code&gt;model_dict&lt;/code&gt;. &lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;DynamicModel&lt;/code&gt; (PyTorch) or &lt;code&gt;DynamicTFModel&lt;/code&gt; (TensorFlow) to sample parameters via Optuna.&lt;/li&gt;
&lt;li&gt;Evaluates trials with a unified &lt;code&gt;train_model&lt;/code&gt; function, supporting both backends.&lt;/li&gt;
&lt;/ol&gt;
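&lt;p&gt;The steps above can be illustrated with a toy &lt;code&gt;model_dict&lt;/code&gt;; the nesting loosely follows the parsed layer dicts Neural prints in its debug logs, and the field names for the HPO tags are illustrative:&lt;/p&gt;

```python
# A toy framework-agnostic model_dict (field names are illustrative).
model_dict = {
    "input": {"type": "Input", "shape": (28, 28, 1)},
    "layers": [
        {"type": "Dense",
         "params": {"units": {"hpo": "choice", "values": [128, 256]}}},
        {"type": "Dropout",
         "params": {"rate": {"hpo": "range", "start": 0.3, "end": 0.7}}},
        {"type": "Output",
         "params": {"units": 10, "activation": "softmax"}},
    ],
    "optimizer": {"type": "Adam",
                  "params": {"learning_rate": {"hpo": "log_range",
                                               "low": 1e-4, "high": 1e-2}}},
}

def hpo_params_of(model_dict):
    # Collect every parameter tagged for HPO so the sampler can see them;
    # untagged params (like Output's units=10) pass through untouched.
    tagged = []
    for layer in model_dict["layers"]:
        for name, value in layer["params"].items():
            if isinstance(value, dict) and "hpo" in value:
                tagged.append((layer["type"], name, value["hpo"]))
    return tagged
```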

&lt;h2&gt;
  
  
   Here’s a peek at the magic in hpo.py:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;objective&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;trial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hpo_params&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val_loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pytorch&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;lr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trial&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;suggest_loguniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;lr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;hpo_params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;lr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;range&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pytorch&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DynamicModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;trial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hpo_params&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cuda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_available&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cpu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;optim&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;tensorflow&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DynamicTFModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;trial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hpo_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;optimizers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;learning_rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;val_loss&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;accuracy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;train_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val_loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;val_loss&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_hpo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hpo_params&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pytorch&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;study&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;optuna&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_study&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;direction&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;minimize&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;study&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;optimize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;trial&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;objective&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;trial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hpo_params&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nf"&gt;get_data&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;n_trials&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🏄🏽‍♂️ Why It’s Awesome
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;One Config, Two Frameworks: no more duplicating effort for PyTorch vs. TensorFlow experiments.&lt;/li&gt;
&lt;li&gt;Flexible HPO: supports &lt;code&gt;choice&lt;/code&gt; (discrete), &lt;code&gt;range&lt;/code&gt; (linear), and &lt;code&gt;log_range&lt;/code&gt; (log-scale) for parameters like units, dropout rates, and learning rates.&lt;/li&gt;
&lt;li&gt;Scalable: Ready to extend to ONNX or JAX with minimal tweaks.&lt;/li&gt;
&lt;/ul&gt;
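&lt;p&gt;As a sketch, a tuned network definition might combine all three search spaces. The &lt;code&gt;HPO(choice(...))&lt;/code&gt; form is shown elsewhere in these notes; the &lt;code&gt;range&lt;/code&gt;/&lt;code&gt;log_range&lt;/code&gt; argument forms here are illustrative, so check the docs for the exact syntax:&lt;/p&gt;

```yaml
network TunedNet {
    input: (28, 28, 1)
    layers:
        Dense(units=HPO(choice(64, 128, 256)), activation="relu")
        Dropout(rate=HPO(range(0.3, 0.7)))
        Output(units=10, activation="softmax")
    optimizer: Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))
}
```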

&lt;p&gt;This feature came from fixing &lt;code&gt;test_hpo_integration_full_pipeline&lt;/code&gt; (#434), where we tackled optimizer HPO parsing and 3D input shape issues. Now, it’s a cornerstone of Neural’s cross-framework vision.&lt;/p&gt;




&lt;h2&gt;
  
  
  👨🏿‍💻 Other Goodies in v0.2.3
&lt;/h2&gt;

&lt;p&gt;While HPO steals the show, here’s what else I’ve been up to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New Layers: Added LayerNormalization, InstanceNormalization, GroupNormalization, SqueezeExcitation, and Attention to the parser (#105, #106, #107, #118, #307). More building blocks for your models!&lt;/li&gt;
&lt;li&gt;Parser Fixes: Squashed bugs in Concatenate, Activation, Lambda, and Embedding parameter handling (#140, #329, etc.), plus better macro and device support (#136, #327, #328).&lt;/li&gt;
&lt;li&gt;Validation Boost: Enhanced checks for MaxPooling2D, BatchNormalization, Dropout, and Conv2D to catch errors early (#179, #363, #367, #368).&lt;/li&gt;
&lt;li&gt;Error Handling: Improved VisitError wrapping with line/column details (#159) for clearer debugging.&lt;/li&gt;
&lt;/ul&gt;
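&lt;p&gt;For illustration, a network using a few of the new layers might look like the sketch below. The parameter names (&lt;code&gt;groups&lt;/code&gt;, &lt;code&gt;ratio&lt;/code&gt;, &lt;code&gt;num_heads&lt;/code&gt;) are assumptions borrowed from common framework conventions, not confirmed DSL signatures:&lt;/p&gt;

```yaml
network NormDemo {
    input: (32, 32, 3)
    layers:
        Conv2D(filters=32, kernel_size=(3, 3), activation="relu")
        GroupNormalization(groups=8)
        SqueezeExcitation(ratio=16)
        Attention(num_heads=4)
        Output(units=10, activation="softmax")
}
```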

&lt;p&gt;Check the full &lt;a href="https://github.com/Lemniscate-SHA-256/Neural/blob/main/CHANGELOG.md" rel="noopener noreferrer"&gt;changelog&lt;/a&gt; for all the nitty-gritty.&lt;/p&gt;




&lt;h2&gt;
  
  
  🦾 What’s Next?
&lt;/h2&gt;

&lt;p&gt;Neural DSL is still a work in progress: bugs lurk, and features are missing (like full ONNX HPO support).&lt;/p&gt;

&lt;p&gt;Upcoming goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stabilize macro parsing for nested blocks.&lt;/li&gt;
&lt;li&gt;Expand layer support (more PyTorch layers, anyone?).&lt;/li&gt;
&lt;li&gt;Add interactive HPO visualizations with NeuralDbg.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Got ideas or bug reports? Join me on Discord or file an issue. Feedback keeps this project alive!&lt;/p&gt;




&lt;h2&gt;
  
  
  👾 Join the Journey
&lt;/h2&gt;

&lt;p&gt;v0.2.3 is a step toward making neural network development declarative, flexible, and debuggable. &lt;br&gt;
The multi-framework HPO feature is just the beginning: imagine tuning models across PyTorch, TensorFlow, and beyond with one tool.&lt;br&gt;
What do you think—how would you use HPO in your projects? Drop a comment below!&lt;br&gt;
Happy coding,&lt;br&gt;
Lemniscate-SHA-256&lt;br&gt;
Twitter: &lt;a href="https://x.com/NLang4438" rel="noopener noreferrer"&gt;@NLang4438&lt;/a&gt; | &lt;a href="https://github.com/Lemniscate-SHA-256/Neural/" rel="noopener noreferrer"&gt;Neural DSL GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Neural DSL v0.2.2 Release Notes</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Mon, 10 Mar 2025 17:16:22 +0000</pubDate>
      <link>https://forem.com/neural/neural-dsl-v022-release-notes-22f9</link>
      <guid>https://forem.com/neural/neural-dsl-v022-release-notes-22f9</guid>
      <description>&lt;h2&gt;
  
  
  🚀 Major Changes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Fixed Parameter Parsing
&lt;/h3&gt;

&lt;p&gt;Layer parameter handling has been significantly improved:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Now correctly handles both styles:
&lt;/span&gt;&lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;     &lt;span class="c1"&gt;# Positional params
&lt;/span&gt;&lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Named params
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Validation Enhancements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Strict positive integer validation for critical parameters
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# These will now raise clear validation errors:
&lt;/span&gt;&lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=-&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# ERROR: filters must be positive
&lt;/span&gt;&lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;       &lt;span class="c1"&gt;# ERROR: units must be positive
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Improved Error Messages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Added line/column information for better debugging
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ERROR at line 4, column 32: Conv2D filters must be positive integer, got &lt;span class="nt"&gt;-32&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🛠️ Technical Improvements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Layer Parameter Processing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Unified parameter merging across layers:

&lt;ul&gt;
&lt;li&gt;Dense&lt;/li&gt;
&lt;li&gt;LSTM&lt;/li&gt;
&lt;li&gt;GRUCell&lt;/li&gt;
&lt;li&gt;GaussianNoise&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Grammar Refinements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Resolved token conflicts between:

&lt;ul&gt;
&lt;li&gt;NUMBER&lt;/li&gt;
&lt;li&gt;FLOAT&lt;/li&gt;
&lt;li&gt;INT&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Simplified &lt;code&gt;param_style1&lt;/code&gt; rules&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  HPO Support Updates
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Now correctly supports:
&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;# Units choice
&lt;/span&gt;&lt;span class="nc"&gt;HPO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tanh&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;# Activation choice
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🐛 Bug Fixes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Layer-Specific Fixes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fixed nested list flattening in GaussianNoise&lt;/li&gt;
&lt;li&gt;Corrected STRING token regex for activation functions&lt;/li&gt;
&lt;li&gt;Resolved VisitError wrapping issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Macro System
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fixed parameter override logic during expansion
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Now correctly handles:
&lt;/span&gt;&lt;span class="n"&gt;define&lt;/span&gt; &lt;span class="n"&gt;MyBlock&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;Dropout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nc"&gt;MyBlock&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Properly overrides Dense units
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚧 Known Issues
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;PyTorch Support&lt;/strong&gt;: Limited layer support (work in progress)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Macro Stability&lt;/strong&gt;: Potential parser issues with nested layer blocks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HPO Limitations&lt;/strong&gt;: &lt;code&gt;log_range()&lt;/code&gt; requires explicit integer casting&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  📝 Migration Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Updating from v0.2.1
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Old style (might fail):
&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="n"&gt;MyNet&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# String number
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# New style (recommended):
&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="n"&gt;MyNet&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="c1"&gt;# Integer number
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🔜 Next Steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Complete PyTorch layer support&lt;/li&gt;
&lt;li&gt;Stabilize macro system&lt;/li&gt;
&lt;li&gt;Enhance HPO functionality&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;For full changelog, see &lt;a href="https://github.com/Lemniscate-SHA-256/Neural/blob/main/CHANGELOG.md" rel="noopener noreferrer"&gt;CHANGELOG.md&lt;/a&gt;&lt;br&gt;
For documentation, visit &lt;a href="https://github.com/Lemniscate-SHA-256/Neural/tree/main/docs" rel="noopener noreferrer"&gt;docs/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Neural v0.2.1: Macros, Fixes, and PyTorch Training Loop!</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Tue, 04 Mar 2025 15:25:54 +0000</pubDate>
      <link>https://forem.com/neural/neural-v021-macros-fixes-and-pytorch-training-loop-n8b</link>
      <guid>https://forem.com/neural/neural-v021-macros-fixes-and-pytorch-training-loop-n8b</guid>
      <description>&lt;h2&gt;
  
  
  🚀 Neural v0.2.1: Macros, Fixes, and PyTorch Training Loop!
&lt;/h2&gt;

&lt;p&gt;A new version of &lt;strong&gt;Neural&lt;/strong&gt; is here! 🎉 This update introduces &lt;strong&gt;DSL Macros&lt;/strong&gt;, &lt;strong&gt;major bug fixes&lt;/strong&gt;, &lt;strong&gt;improvements in TensorFlow and PyTorch code generation&lt;/strong&gt;, and &lt;strong&gt;an enhanced debugging experience&lt;/strong&gt;. It is still &lt;strong&gt;very buggy and a WIP&lt;/strong&gt;, but it shows progress! 🚧&lt;/p&gt;




&lt;h2&gt;
  
  
  🔥 New Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🏗️ 1. Macros for the DSL with &lt;code&gt;define&lt;/code&gt; Blocks
&lt;/h3&gt;

&lt;p&gt;Macros now allow reusing predefined layer structures! Define once, use multiple times. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;define MyDense {&lt;/span&gt;
    &lt;span class="s"&gt;Dense(units=128, activation="relu")&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;network ExampleNet {&lt;/span&gt;
    &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28)&lt;/span&gt;
    &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;MyDense&lt;/span&gt;
        &lt;span class="s"&gt;Dropout(rate=0.5)&lt;/span&gt;
        &lt;span class="s"&gt;Output(units=10, activation="softmax")&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;strong&gt;Benefits&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduce redundancy in large models.&lt;/li&gt;
&lt;li&gt;Maintain consistency across layers.&lt;/li&gt;
&lt;li&gt;Simplify network definitions.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛠️ Fixes and Enhancements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ 2. TensorFlow Code Generation Fixes
&lt;/h3&gt;

&lt;p&gt;Test failure: &lt;code&gt;test_code_generator.py::test_generate_tensorflow_complex #68&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Loss and optimizers now include their parameters&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimizer imports&lt;/strong&gt; are now explicit (e.g., &lt;code&gt;from tensorflow.keras.optimizers import Adam&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model compilation consistency&lt;/strong&gt;: Ensured correct formatting in &lt;code&gt;model.compile()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loss handling improvement&lt;/strong&gt;: Properly extracts dictionary-based loss functions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✅ 3. Layer Multiplication Bug Fix
&lt;/h3&gt;

&lt;p&gt;Test failure: &lt;code&gt;test_code_generator.py::test_layer_multiplication #69&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixed incorrect key: &lt;code&gt;pop('multiply', 1)&lt;/code&gt; → &lt;code&gt;pop('*', 1)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Code now correctly counts layers using &lt;code&gt;Dense(units=64)&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
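&lt;p&gt;As a hedged example, the repetition form below is inferred from the fixed &lt;code&gt;'*'&lt;/code&gt; key and may differ slightly from the exact DSL syntax:&lt;/p&gt;

```yaml
network RepeatDemo {
    input: (28, 28)
    layers:
        Dense(units=64) * 3    # expands into three identical Dense layers
        Output(units=10, activation="softmax")
}
```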




&lt;h2&gt;
  
  
  🏋️ PyTorch Improvements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔄 4. PyTorch Training Loop
&lt;/h3&gt;

&lt;p&gt;Added a &lt;strong&gt;basic PyTorch training loop&lt;/strong&gt; using &lt;code&gt;training_config&lt;/code&gt;. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch.nn&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch.optim&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;optim&lt;/span&gt;

&lt;span class="c1"&gt;# Define model
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MyNeuralModel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;optim&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.001&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;loss_fn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;CrossEntropyLoss&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;train_loop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dataloader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt;
        &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;zero_grad&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;loss_fn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;backward&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Users must provide their &lt;strong&gt;own dataset (DataLoader)&lt;/strong&gt;, but this serves as a template.&lt;/p&gt;

&lt;h3&gt;
  
  
  📝 5. Improved Comments in Generated Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;More detailed inline comments for TensorFlow and PyTorch.&lt;/li&gt;
&lt;li&gt;Easier debugging and learning.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔍 6. Optimizer Configuration Extraction
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Extracts &lt;code&gt;optimizer_config['params']&lt;/code&gt;, defaults to &lt;code&gt;lr=0.001&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;repr()&lt;/code&gt; for correct string/numeric value handling.&lt;/li&gt;
&lt;/ul&gt;
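&lt;p&gt;A minimal Python sketch of that extraction logic (the function and argument names are illustrative, not Neural’s actual internals):&lt;/p&gt;

```python
def extract_optimizer_args(optimizer_config):
    """Render optimizer keyword arguments for generated code.

    Illustrative sketch: pulls 'params' out of the config dict and
    falls back to lr=0.001 when no learning rate is given.
    """
    params = dict(optimizer_config.get('params', {}))
    params.setdefault('lr', 0.001)  # default learning rate
    # repr() keeps strings quoted and numbers bare in the generated source
    return ', '.join(f"{k}={repr(v)}" for k, v in params.items())
```

Using `repr()` here is what lets a string value like `"sgd"` stay quoted while `0.001` is emitted as a bare number.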

&lt;h3&gt;
  
  
  📜 7. Logging Instead of Print Statements
&lt;/h3&gt;

&lt;p&gt;Replaced &lt;code&gt;print()&lt;/code&gt; with &lt;code&gt;logger.warning()&lt;/code&gt; for unsupported PyTorch layers.&lt;/p&gt;
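&lt;p&gt;Sketched in isolation, the pattern looks like this (the logger name and the supported-layer set are placeholders, and a real backend would map &lt;code&gt;Dense&lt;/code&gt; to &lt;code&gt;nn.Linear&lt;/code&gt; rather than the literal name):&lt;/p&gt;

```python
import logging

logger = logging.getLogger("neural.codegen")  # logger name is illustrative

# Layers the hypothetical PyTorch backend knows how to emit
SUPPORTED = {"Dense", "Conv2D", "Dropout", "Flatten"}

def emit_layer(layer_type):
    """Return generated code for a layer, or None if unsupported."""
    if layer_type not in SUPPORTED:
        # logger.warning() instead of print(): callers can filter,
        # silence, or redirect this via standard logging configuration
        logger.warning("Unsupported PyTorch layer: %s", layer_type)
        return None
    return "nn." + layer_type
```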




&lt;h2&gt;
  
  
  🛠 Macro Parsing &amp;amp; Error Fixes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚨 8. Macro Parsing Fixes
&lt;/h3&gt;

&lt;p&gt;Test failure: &lt;code&gt;test_parser.py::test_macro_parsing[macro-basic]&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Macros now store their layer definitions correctly&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expands macros properly when referenced&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supports both named and ordered parameters in macros&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error messages improved for better debugging&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;define MyDense {&lt;/span&gt;
    &lt;span class="s"&gt;Dense(units=128, activation="relu")&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;network ExampleNet {&lt;/span&gt;
    &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;MyDense(units=256)&lt;/span&gt;  &lt;span class="c1"&gt;# Overrides default units&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔹 &lt;strong&gt;Macros now allow parameter overrides!&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🛠 9. Fixed Layer Tokenization Errors
&lt;/h3&gt;

&lt;p&gt;Test failure: &lt;code&gt;test_parser.py::test_layer_parsing[custom-shape]&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard layer names like &lt;code&gt;LSTM&lt;/code&gt;, &lt;code&gt;GRU&lt;/code&gt; were mistakenly treated as macros.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Now explicitly defined in the grammar&lt;/strong&gt; to prevent conflicts.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📜 Miscellaneous Improvements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  📑 10. JSON Schema for Code Editors
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Introduced &lt;code&gt;neural-schema.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Provides &lt;strong&gt;syntax highlighting, autocompletion, and validation&lt;/strong&gt; in code editors.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🎨 11. Dashboard Visualization Test Fixes
&lt;/h3&gt;

&lt;p&gt;Test failure: &lt;code&gt;test_dashboard.py::test_dashboard_visualization #72&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixed page title assertion errors.&lt;/li&gt;
&lt;li&gt;Cleaned up resources properly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔄 12. Nested Layer Configurations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Layers can now contain &lt;strong&gt;sub-layers&lt;/strong&gt; using &lt;code&gt;{}&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Used for complex architectures like Transformers and Residual Networks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;network NestedExample {&lt;/span&gt;
    &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;TransformerEncoder {&lt;/span&gt;
            &lt;span class="s"&gt;SelfAttention(num_heads=8)&lt;/span&gt;
            &lt;span class="s"&gt;FeedForward(hidden_dim=512)&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;strong&gt;Easier deep learning model structuring!&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🏁 Conclusion
&lt;/h2&gt;

&lt;p&gt;This update brings &lt;strong&gt;powerful macros, better error handling, PyTorch improvements, and key bug fixes&lt;/strong&gt;. 🚀&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Neural is still in an experimental state and very buggy&lt;/strong&gt;—this release is just to show progress!&lt;/p&gt;

&lt;p&gt;📥 &lt;strong&gt;Upgrade Now&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; neural-dsl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💬 &lt;strong&gt;Feedback? Join the discussion!&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/645a6Yd5" rel="noopener noreferrer"&gt;Neural Community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/Lemniscate-SHA-256/Neural" rel="noopener noreferrer"&gt;Lemniscate-SHA-256/Neural&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔥 Stay tuned for more improvements! Happy coding! 🧠💡&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>ai</category>
      <category>showdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Neural DSL 0.2.0 Release: Smarter Validation and Developer-First Tooling</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Wed, 26 Feb 2025 20:13:12 +0000</pubDate>
      <link>https://forem.com/neural/neural-dsl-020-release-smarter-validation-and-developer-first-tooling-34kp</link>
      <guid>https://forem.com/neural/neural-dsl-020-release-smarter-validation-and-developer-first-tooling-34kp</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flonoulzf49sqszt35vyy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flonoulzf49sqszt35vyy.jpg" alt="Neural DSL Banner" width="125" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're excited to announce &lt;strong&gt;Neural DSL 0.2.0&lt;/strong&gt; - a major update focused on &lt;strong&gt;error prevention&lt;/strong&gt; and &lt;strong&gt;developer experience&lt;/strong&gt; for deep learning workflows. This release introduces granular validation, smarter debugging tools, and significant quality-of-life improvements for neural network development.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 What's New in 0.2.0
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Semantic Error Validation Engine
&lt;/h3&gt;

&lt;p&gt;Catch configuration errors &lt;strong&gt;before runtime&lt;/strong&gt; with our new validation system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Now throws ERROR: "Dropout rate must be ≤ 1.0"&lt;/span&gt;
&lt;span class="s"&gt;Dropout(1.5)&lt;/span&gt;

&lt;span class="c1"&gt;# ERROR: "Conv2D filters must be positive" &lt;/span&gt;
&lt;span class="s"&gt;Conv2D(filters=-32, kernel_size=(3,3))&lt;/span&gt;

&lt;span class="c1"&gt;# WARNING: "Dense(128.0) → units coerced to integer"&lt;/span&gt;
&lt;span class="s"&gt;Dense(128.0, activation="relu")&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key validation rules&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Layer parameter ranges (0 ≤ dropout ≤ 1)&lt;/li&gt;
&lt;li&gt;Positive integer checks (filters, units, etc.)&lt;/li&gt;
&lt;li&gt;Framework-specific constraints&lt;/li&gt;
&lt;li&gt;Custom error severity levels (ERROR/WARNING/INFO)&lt;/li&gt;
&lt;/ul&gt;
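&lt;p&gt;One such rule can be sketched in plain Python (illustrative only, not Neural’s actual validator code):&lt;/p&gt;

```python
def validate_dropout(rate):
    """Sketch of the dropout-rate rule: probability must lie in [0, 1].

    ERROR severity: out-of-range values are rejected outright rather
    than silently clamped.
    """
    if not 0.0 <= rate <= 1.0:
        raise ValueError(f"Dropout rate must be between 0 and 1, got {rate}")
    return float(rate)
```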

&lt;h3&gt;
  
  
  2. Enhanced CLI Experience
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# New dry-run mode&lt;/span&gt;
neural compile model.neural &lt;span class="nt"&gt;--dry-run&lt;/span&gt;

&lt;span class="c"&gt;# Step debugging&lt;/span&gt;
neural debug model.neural &lt;span class="nt"&gt;--step&lt;/span&gt;

&lt;span class="c"&gt;# Launch GUI dashboard&lt;/span&gt;
neural no-code &lt;span class="nt"&gt;--port&lt;/span&gt; 8051
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;CLI Improvements&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structured logging with &lt;code&gt;--verbose&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Progress bars for long operations&lt;/li&gt;
&lt;li&gt;Cached visualizations (30% faster repeats)&lt;/li&gt;
&lt;li&gt;Unified error handling across commands&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Debugging Superpowers with NeuralDbg
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn50bxaqm1am1p0j5j2hz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn50bxaqm1am1p0j5j2hz.png" alt="Debugging Dashboard" width="700" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;New debugging features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Gradient flow analysis&lt;/span&gt;
neural debug model.neural &lt;span class="nt"&gt;--gradients&lt;/span&gt;

&lt;span class="c"&gt;# Find inactive neurons&lt;/span&gt;
neural debug model.neural &lt;span class="nt"&gt;--dead-neurons&lt;/span&gt;

&lt;span class="c"&gt;# Interactive step debugging&lt;/span&gt;
neural debug model.neural &lt;span class="nt"&gt;--step&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Debugging Capabilities&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time memory/FLOP profiling&lt;/li&gt;
&lt;li&gt;Layer-wise execution tracing&lt;/li&gt;
&lt;li&gt;NaN/overflow detection&lt;/li&gt;
&lt;li&gt;Interactive tensor inspection&lt;/li&gt;
&lt;/ul&gt;
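&lt;p&gt;As a rough sketch of the idea behind &lt;code&gt;--dead-neurons&lt;/code&gt; (an illustration using NumPy, not NeuralDbg's actual implementation): a ReLU unit can be flagged as dead when it outputs zero for nearly every sample in a probe batch.&lt;/p&gt;

```python
import numpy as np

# Illustrative sketch of dead-ReLU detection; not NeuralDbg's actual code.
def dead_neuron_mask(activations, threshold=0.95):
    """Flag units that output zero for at least `threshold` of the batch.

    activations: array of shape (batch, units) from a ReLU layer.
    """
    zero_fraction = np.mean(activations == 0.0, axis=0)
    return np.greater_equal(zero_fraction, threshold)

# Toy probe batch: unit 0 fires on some inputs, unit 1 never does.
acts = np.array([[0.7, 0.0],
                 [1.2, 0.0],
                 [0.0, 0.0]])
mask = dead_neuron_mask(acts)  # array([False,  True])
```

&lt;p&gt;In practice the probe batch would come from a forward pass over real data, and the mask would be reported per layer alongside the gradient-flow statistics.&lt;/p&gt;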

&lt;h2&gt;
  
  
  🛠 Migration Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Breaking Changes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;TransformerEncoder now requires explicit parameters&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before (v0.1.x)&lt;/span&gt;
&lt;span class="s"&gt;TransformerEncoder()&lt;/span&gt;

&lt;span class="c1"&gt;# Now (v0.2.0)&lt;/span&gt;
&lt;span class="s"&gt;TransformerEncoder(num_heads=8, ff_dim=512)&lt;/span&gt; &lt;span class="c1"&gt;# Default values&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Stricter validation&lt;/strong&gt;: checks that previously only emitted warnings now raise errors by default&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  🚀 Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;neural-dsl&lt;span class="o"&gt;==&lt;/span&gt;0.2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Quick Example&lt;/strong&gt; (MNIST Classifier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# mnist.neural&lt;/span&gt;
&lt;span class="s"&gt;network MNISTClassifier {&lt;/span&gt;
  &lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(28, 28, 1)&lt;/span&gt;
  &lt;span class="s"&gt;layers&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;Conv2D(32, (3,3), activation="relu")&lt;/span&gt;
    &lt;span class="s"&gt;MaxPooling2D(pool_size=(2,2))&lt;/span&gt;
    &lt;span class="s"&gt;Flatten()&lt;/span&gt;
    &lt;span class="s"&gt;Dense(128, activation="relu")&lt;/span&gt;
    &lt;span class="s"&gt;Dropout(0.5)&lt;/span&gt;
    &lt;span class="s"&gt;Output(10, activation="softmax")&lt;/span&gt;

  &lt;span class="s"&gt;train {&lt;/span&gt;
    &lt;span class="s"&gt;epochs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;
    &lt;span class="na"&gt;batch_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;64&lt;/span&gt;
    &lt;span class="na"&gt;validation_split&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2&lt;/span&gt;
&lt;span class="err"&gt;  }&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compile to framework code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;neural compile mnist.neural &lt;span class="nt"&gt;--backend&lt;/span&gt; pytorch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  📊 Benchmarks
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;v0.1.1&lt;/th&gt;
&lt;th&gt;v0.2.0&lt;/th&gt;
&lt;th&gt;Improvement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Validation Time&lt;/td&gt;
&lt;td&gt;142ms&lt;/td&gt;
&lt;td&gt;89ms&lt;/td&gt;
&lt;td&gt;1.6x faster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error Message Quality&lt;/td&gt;
&lt;td&gt;6.8/10&lt;/td&gt;
&lt;td&gt;9.1/10&lt;/td&gt;
&lt;td&gt;34% clearer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debug Setup Time&lt;/td&gt;
&lt;td&gt;8min&lt;/td&gt;
&lt;td&gt;2min&lt;/td&gt;
&lt;td&gt;4x faster&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  🛠 Under the Hood
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key Technical Improvements&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lark parser upgrades with position tracking&lt;/li&gt;
&lt;li&gt;Type coercion system with warnings&lt;/li&gt;
&lt;li&gt;Unified error handling architecture&lt;/li&gt;
&lt;li&gt;CI/CD pipeline hardening (100% test coverage)&lt;/li&gt;
&lt;/ul&gt;
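&lt;p&gt;The type coercion system mentioned above can be sketched roughly like this (a hypothetical illustration; &lt;code&gt;coerce_int&lt;/code&gt; is an invented name, not the project's actual helper):&lt;/p&gt;

```python
# Hypothetical sketch of float-to-int coercion with warnings; the helper
# name and message format are invented for illustration.
def coerce_int(layer, value, warnings):
    """Accept whole-number floats for integer params, recording a WARNING."""
    if isinstance(value, float):
        if value != int(value):
            raise ValueError(f"{layer}: expected an integer, got {value}")
        warnings.append(f"{layer}({value}) -> units coerced to integer")
        return int(value)
    return value
```

&lt;p&gt;With this shape, &lt;code&gt;coerce_int("Dense", 128.0, warnings)&lt;/code&gt; returns &lt;code&gt;128&lt;/code&gt; and records one warning, while a true fractional value like &lt;code&gt;1.5&lt;/code&gt; is rejected outright.&lt;/p&gt;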

&lt;h2&gt;
  
  
  🤝 Community &amp;amp; Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Lemniscate-world/Neural" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try Neural DSL 0.2.0 today and let us know what you build! 🚀&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>machinelearning</category>
      <category>python</category>
      <category>datascience</category>
    </item>
    <item>
      <title>⚡ Simplifying NNs: Simple MNIST Classifier!</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Tue, 25 Feb 2025 16:43:49 +0000</pubDate>
      <link>https://forem.com/neural/simplifying-nns-with-neural-first-code-generation-example-simple-mnist-classifier-2636</link>
      <guid>https://forem.com/neural/simplifying-nns-with-neural-first-code-generation-example-simple-mnist-classifier-2636</guid>
      <description>&lt;p&gt;As a developer passionate about machine learning, I don't want to write repetitive boilerplate code for neural networks. Whether it’s TensorFlow, PyTorch, or ONNX, the process of defining layers, compiling models, and setting up training can feel tedious. &lt;/p&gt;

&lt;p&gt;Defining neural networks in raw TensorFlow/PyTorch can be verbose.&lt;/p&gt;

&lt;p&gt;What if you could write models more intuitively and compile them seamlessly?&lt;/p&gt;

&lt;p&gt;Neural DSL lets you define models concisely and compile them into executable TensorFlow or PyTorch code.&lt;/p&gt;

&lt;p&gt;This is a basic feedforward neural network generated from Neural DSL, designed for classifying 28x28 images into 10 categories. It’s perfect for handwritten digit recognition (like MNIST), small-scale image tasks, teaching neural network basics, or as a quick baseline for multi-class problems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Neural Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;network MyModel {
    input: (None, 28, 28)
    layers:
        Dense(128, activation="relu")
        Dropout(rate=0.2)
        Output(units=10, activation="softmax")
    loss: "categorical_crossentropy"
    optimizer: "Adam"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  TensorFlow Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MyModel&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_shape&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Dropout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;units&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;softmax&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;categorical_crossentropy&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Adam&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Compile to TensorFlow
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;neural compile example.neural
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t5fl4k0yduj90hc67z3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t5fl4k0yduj90hc67z3.gif" alt="Image description" width="587" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example above, I define a neural network using Neural DSL and compile it into TensorFlow code. The resulting Python file is ready for training!&lt;/p&gt;

&lt;p&gt;Try Neural DSL yourself! Here’s the repo: (&lt;a href="https://github.com/Lemniscate-world/Neural" rel="noopener noreferrer"&gt;https://github.com/Lemniscate-world/Neural&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;🛠 What features would you like to see next? Drop your ideas in the comments!&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>tensorflow</category>
    </item>
    <item>
      <title>🚀 Release 0.1.2: Smoother, Faster, and Better! 🎉</title>
      <dc:creator>NeuralLang</dc:creator>
      <pubDate>Mon, 24 Feb 2025 11:55:59 +0000</pubDate>
      <link>https://forem.com/neural/release-012-smoother-faster-and-better-115h</link>
      <guid>https://forem.com/neural/release-012-smoother-faster-and-better-115h</guid>
      <description>&lt;p&gt;Hey, fellow developers! 🖐️ &lt;/p&gt;

&lt;p&gt;We’re back with an exciting new release that’s packed with improvements to help you build faster, smoother, and with fewer headaches. &lt;strong&gt;Version 0.1.2&lt;/strong&gt; has officially dropped, and it comes with some crucial fixes that will enhance the neural network layer parsing, make your CLI operations run like a dream, and give you a more reliable WebSocket connection. Let’s dive in and explore what’s new! 👇&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 &lt;strong&gt;Layer Parsing Gets an Upgrade!&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;MaxPooling2D Strides&lt;/strong&gt; 🚧
&lt;/h3&gt;

&lt;p&gt;We all know how frustrating it is when your pooling layers don’t quite work as expected. Well, say goodbye to that issue! We've fixed the parsing for &lt;strong&gt;MaxPooling2D strides&lt;/strong&gt;—so your pooling layers now behave exactly as you want them to.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conv2D Layers: More Reliable Than Ever!&lt;/strong&gt; 🔥
&lt;/h3&gt;

&lt;p&gt;Conv2D layers are at the heart of many neural networks, so we made sure that &lt;strong&gt;filters&lt;/strong&gt;, &lt;strong&gt;kernel_size&lt;/strong&gt;, and &lt;strong&gt;activation&lt;/strong&gt; (like &lt;code&gt;conv2d-relu&lt;/code&gt; and &lt;code&gt;conv2d-tanh&lt;/code&gt;) are all captured accurately. No more missed parameters—just seamless convolutional layers.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;💥 Pro Tip:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If you’ve been having issues with the &lt;code&gt;AttributeError&lt;/code&gt; in the conv2d method, no worries! We used the new &lt;code&gt;_extract_value&lt;/code&gt; helper to handle parameters better. Now, it all works smoothly, even with edge cases! 🙌&lt;/p&gt;




&lt;h2&gt;
  
  
  🖥️ &lt;strong&gt;CLI Fixes for the Win!&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For all you command-line lovers, we’ve made some important fixes:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Test Compilation Done Right!&lt;/strong&gt; ✅
&lt;/h3&gt;

&lt;p&gt;We noticed a few errors with &lt;strong&gt;imports&lt;/strong&gt;, &lt;strong&gt;file creation&lt;/strong&gt;, &lt;strong&gt;data types&lt;/strong&gt;, and &lt;strong&gt;exit codes&lt;/strong&gt; in &lt;code&gt;test_compile_command&lt;/code&gt;. Those bugs have been squashed, so your CLI commands should now execute without any hitches. 🎯&lt;/p&gt;

&lt;p&gt;No more wondering whether the tests will run! You can trust that your automated workflows will be glitch-free from now on.&lt;/p&gt;




&lt;h2&gt;
  
  
  🌐 &lt;strong&gt;WebSocket &amp;amp; Dashboard Magic ✨&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When it comes to real-time data and smooth visualizations, we know how important it is to get everything just right. So, we focused on:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;WebSocket Connection Refusal Fixed&lt;/strong&gt; 🔌
&lt;/h3&gt;

&lt;p&gt;Ever run into issues where your WebSocket just doesn’t connect? That’s a thing of the past now. Our fix ensures that &lt;strong&gt;server-client communication&lt;/strong&gt; is seamless and the connection flows without interruption.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Dashboard Visualization Trouble? Gone!&lt;/strong&gt; 🚀
&lt;/h3&gt;

&lt;p&gt;We’ve also patched an annoying &lt;strong&gt;&lt;code&gt;ERR_CONNECTION_REFUSED&lt;/code&gt;&lt;/strong&gt; error during Selenium-driven dashboard visualizations. Your dashboard should now load without the dreaded connection issues. Your models are ready for their close-up! 🎥&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ &lt;strong&gt;Code Generation Now Foolproof!&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Last but not least, we tackled an issue with the &lt;strong&gt;TensorFlow code generator&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Fixed the Pesky NoneType Error&lt;/strong&gt; 💥
&lt;/h3&gt;

&lt;p&gt;We were experiencing some &lt;strong&gt;NoneType&lt;/strong&gt; errors during code generation, but no more! The TensorFlow code generation is now smooth and reliable, so you can focus on building models, not debugging code generation issues. &lt;/p&gt;




&lt;h2&gt;
  
  
  💡 &lt;strong&gt;Why This Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;So, why should you care about all of these updates? Here's why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reliability:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
We’ve nailed down the neural network layer parsing so that you can focus on your model, not debugging layer configurations. Fewer surprises means faster development!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developer Happiness:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
CLI and dashboard improvements mean fewer headaches and more efficient workflows. Trust us, your productivity will thank you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration Confidence:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
With &lt;strong&gt;WebSocket&lt;/strong&gt; and &lt;strong&gt;code generation&lt;/strong&gt; now more stable, integrating our tools into your projects has never been easier. Say hello to fewer connection and code issues! 👋&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔮 &lt;strong&gt;Looking Ahead&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This release is just the beginning! We’re not stopping here—expect even more improvements and new features down the road. As always, &lt;strong&gt;your feedback&lt;/strong&gt; is crucial to making this project better. If you encounter any bugs or have cool suggestions, let us know on &lt;a href="https://github.com/Lemniscate-SHA-256/Neural/issues/" rel="noopener noreferrer"&gt;GitHub Issues&lt;/a&gt; or connect with us via our community channels.&lt;/p&gt;

&lt;p&gt;Stay tuned for what’s coming next, and happy coding! 💻🎉&lt;/p&gt;




&lt;p&gt;🔔 &lt;strong&gt;Thanks for reading!&lt;/strong&gt; Hope you enjoy the update and feel free to reach out with questions or comments. The journey to a better development experience continues! 🚀&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>tensorflow</category>
      <category>testing</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
