
Neural DSL v0.2.5 Update: Explore Multi-Framework HPO Features

We're excited to announce the release of Neural DSL v0.2.5! This update brings significant improvements to hyperparameter optimization (HPO), making it work seamlessly across both the PyTorch and TensorFlow backends, along with several other enhancements and fixes.

🚀 Spotlight Feature: Multi-Framework HPO Support

The standout feature in v0.2.5 is the unified hyperparameter optimization system that works consistently across both PyTorch and TensorFlow backends. This means you can:

  • Define your model and HPO parameters once
  • Run optimization with either backend
  • Compare results across frameworks
  • Leverage the strengths of each framework

Here's how easy it is to use:

network HPOExample {
  input: (28, 28, 1)
  layers:
    Conv2D(filters=HPO(choice(32, 64)), kernel_size=(3,3))
    MaxPooling2D(pool_size=(2,2))
    Flatten()
    Dense(HPO(choice(128, 256, 512)))
    Output(10, "softmax")
  optimizer: Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))
  train {
    epochs: 10
    search_method: "bayesian"
  }
}

Run with either backend:

# PyTorch backend
neural compile model.neural --backend pytorch --hpo

# TensorFlow backend
neural compile model.neural --backend tensorflow --hpo
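
Under the hood, the HPO() declarations define a single search space that drives trials regardless of backend. As a rough mental model, here's a minimal Python sketch of an equivalent search loop using Optuna (whether Neural DSL uses Optuna internally is an assumption on our part; build_model and train_and_eval are hypothetical stand-ins for the backend-specific code the compiler generates):

import optuna

def build_model(filters, hidden):
    # Hypothetical stand-in for the backend-specific model builder
    # (PyTorch or TensorFlow) that Neural DSL generates.
    return None

def train_and_eval(model, lr):
    # Hypothetical stand-in: train for the configured epochs and
    # return a validation loss for the study to minimize.
    return 0.0

def objective(trial):
    # Mirror the HPO() declarations from the DSL example above
    filters = trial.suggest_categorical("filters", [32, 64])
    hidden = trial.suggest_categorical("hidden", [128, 256, 512])
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)
    return train_and_eval(build_model(filters, hidden), lr)

# search_method: "bayesian" corresponds roughly to a TPE-style sampler,
# which is Optuna's default
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)

Because the search space lives in the DSL rather than in backend code, the same study definition applies whether a trial trains a PyTorch or a TensorFlow model.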

✨ Enhanced Optimizer Handling

We've significantly improved how optimizers are handled in the DSL:

  • No-Quote Syntax: Cleaner syntax for optimizer parameters without quotes
  • Nested HPO Parameters: Full support for HPO within learning rate schedules
  • Scientific Notation: Better handling of scientific notation (e.g., 1e-4 vs 0.0001)

Before:

optimizer: "Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))"

After:

optimizer: Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))
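
For reference, log_range samples on a logarithmic scale, so each decade between 1e-4 and 1e-2 gets equal attention; a linear range would spend nearly all of its trials above 1e-3. A minimal sketch of that sampling behavior (our own illustration of the assumed semantics, not Neural DSL's implementation):

import math
import random

def log_range_sample(low, high):
    # Sample uniformly in log space: exp(U(log(low), log(high)))
    return math.exp(random.uniform(math.log(low), math.log(high)))

# Neighborhoods like 2e-4 and 2e-3 are now equally likely to be explored
print(log_range_sample(1e-4, 1e-2))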

Advanced example with learning rate schedules:

optimizer: SGD(
  learning_rate=ExponentialDecay(
    HPO(range(0.05, 0.2, step=0.05)),  # Initial learning rate
    1000,                              # Decay steps
    HPO(range(0.9, 0.99, step=0.01))   # Decay rate
  ),
  momentum=HPO(range(0.8, 0.99, step=0.01))
)
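
Once a trial fixes concrete values, the TensorFlow backend presumably lowers this to the standard Keras schedule API. A sketch of what one such trial might produce (the sampled values below are hypothetical; the tf.keras calls are standard):

import tensorflow as tf

# Hypothetical values one HPO trial might draw from the ranges above
initial_lr = 0.1    # HPO(range(0.05, 0.2, step=0.05))
decay_rate = 0.95   # HPO(range(0.9, 0.99, step=0.01))
momentum = 0.9      # HPO(range(0.8, 0.99, step=0.01))

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=initial_lr,
    decay_steps=1000,
    decay_rate=decay_rate,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=momentum)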

📊 Precision & Recall Metrics

Training loops now report precision and recall alongside loss and accuracy, giving you a more comprehensive view of your model's performance:

loss, acc, precision, recall = train_model(model, optimizer, train_loader, val_loader)
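
Neural DSL computes these for you, but if you want to reproduce the same numbers in a hand-rolled PyTorch validation pass, here's a minimal sketch using scikit-learn (evaluate is a hypothetical helper; it assumes a multi-class classifier that outputs logits):

import torch
from sklearn.metrics import precision_score, recall_score

def evaluate(model, val_loader, device="cpu"):
    # Collect predictions and labels over the validation set, then
    # compute macro-averaged precision and recall with scikit-learn.
    model.eval()
    preds, labels = [], []
    with torch.no_grad():
        for x, y in val_loader:
            logits = model(x.to(device))
            preds.extend(logits.argmax(dim=1).cpu().tolist())
            labels.extend(y.tolist())
    precision = precision_score(labels, preds, average="macro", zero_division=0)
    recall = recall_score(labels, preds, average="macro", zero_division=0)
    return precision, recall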

πŸ› οΈ Other Improvements

  • Error Message Enhancements: More detailed error messages with line/column information
  • Layer Validation: Better validation for MaxPooling2D, BatchNormalization, Dropout, and Conv2D layers
  • TensorRT Integration: Added conditional TensorRT setup in CI pipeline for GPU environments
  • VSCode Snippets: Added code snippets for faster Neural DSL development in VSCode
  • CI/CD Pipeline: Enhanced GitHub Actions workflows with better error handling and reporting

πŸ› Bug Fixes

  • Fixed parsing of optimizer HPO parameters without quotes
  • Corrected string representation handling in HPO parameters
  • Resolved issues with nested HPO parameters in learning rate schedules
  • Enhanced validation for various layer types
  • Fixed parameter handling in Concatenate, Activation, Lambda, and Embedding layers

📦 Installation

pip install neural-dsl

πŸ™ Support Us

If you find Neural DSL useful, please consider:

  • Giving us a star on GitHub ⭐
  • Sharing this project with your friends and colleagues
  • Contributing to the codebase or documentation

The more developers we reach, the more likely we are to build something truly revolutionary together!


Neural DSL is a domain-specific language for defining, training, debugging, and deploying neural networks with declarative syntax, cross-framework support, and built-in execution tracing.

Neural DSL is a work-in-progress DSL and debugger; bugs exist, and feedback is welcome! This project is under active development and not yet production-ready!
