Hyperparameter Optimization Across Frameworks Made Simple – Neural DSL v0.2.3

Hey Dev.to community!
I’m excited to share the latest update to Neural DSL, a work-in-progress domain-specific language for defining, training, and debugging neural networks. With v0.2.3 (released March 16, 2025), I supercharged one feature I think you’ll love: multi-framework Hyperparameter Optimization (HPO). Plus, I squashed some bugs and added new layer support to keep things moving forward. Let’s dive in!

🌟 Spotlight: Multi-Framework HPO (#434)

Imagine defining a neural network once and optimizing its hyperparameters for both PyTorch and TensorFlow without rewriting a single line. That’s what v0.2.3 brings to the table. Whether you’re tuning layer sizes, dropout rates, or learning rates, Neural DSL now handles HPO seamlessly across frameworks, all from a single declarative config.

How It Works

Define your model with HPO parameters in the DSL:

network HPOExample {
  input: (28, 28, 1)  # MNIST input
  layers:
    Dense(HPO(choice(128, 256)))  # Sample units
    Dropout(HPO(range(0.3, 0.7, step=0.1)))  # Sample dropout rate
    Output(10, "softmax")
  optimizer: Adam(learning_rate=HPO(log_range(1e-4, 1e-2)))  # Log-scale LR
  train {
    epochs: 10
    search_method: "random"  # Or "bayesian"
  }
}

Run it with a single command, switching frameworks on the fly:


neural run hpo_example.neural --backend pytorch --output model_torch.py
neural run hpo_example.neural --backend tensorflow --output model_tf.py

Behind the scenes, Neural DSL:

  1. Parses the HPO specs into a framework-agnostic model_dict.
  2. Uses DynamicModel (PyTorch) or DynamicTFModel (TensorFlow) to sample parameters via Optuna.
  3. Evaluates trials with a unified train_model function, supporting both backends.
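For a sense of step 1, here’s a hypothetical sketch of the framework-agnostic model_dict the parser might produce for the HPOExample network above (the field names are illustrative, not Neural DSL’s actual internal schema):

# Illustrative only: a framework-agnostic description of the network,
# with HPO() expressions preserved as searchable specs.
model_dict = {
    'input': {'shape': (28, 28, 1)},
    'layers': [
        {'type': 'Dense', 'units': {'hpo': 'choice', 'values': [128, 256]}},
        {'type': 'Dropout', 'rate': {'hpo': 'range', 'low': 0.3, 'high': 0.7, 'step': 0.1}},
        {'type': 'Output', 'units': 10, 'activation': 'softmax'},
    ],
    'optimizer': {'type': 'Adam',
                  'learning_rate': {'hpo': 'log_range', 'low': 1e-4, 'high': 1e-2}},
    'train': {'epochs': 10, 'search_method': 'random'},
}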

Here’s a peek at the magic in hpo.py:

import optuna
import torch
import torch.optim as optim
import tensorflow as tf

# DynamicModel, DynamicTFModel, train_model, and get_data are Neural DSL internals.

def objective(trial, model_dict, hpo_params, train_loader, val_loader, backend='pytorch'):
    # Log-scale learning-rate sampling (suggest_float replaces the deprecated suggest_loguniform)
    lr = trial.suggest_float('lr', *hpo_params['lr']['range'], log=True)
    if backend == 'pytorch':
        device = 'cuda' if torch.cuda.is_available() else 'cpu'
        model = DynamicModel(model_dict, trial, hpo_params).to(device)
        optimizer = optim.Adam(model.parameters(), lr=lr)
    elif backend == 'tensorflow':
        model = DynamicTFModel(model_dict, trial, hpo_params)
        optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    # train_model runs one trial end to end for either backend
    val_loss, accuracy = train_model(model, optimizer, train_loader, val_loader, backend)
    return val_loss

def run_hpo(model_dict, hpo_params, backend='pytorch'):
    study = optuna.create_study(direction='minimize')
    study.optimize(lambda trial: objective(trial, model_dict, hpo_params, *get_data(), backend),
                   n_trials=10)
    return study.best_params  # expose the best hyperparameters found
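Wiring it up by hand looks roughly like this (a hypothetical sketch; in practice the neural run CLI drives this for you, and hpo_params mirrors the HPO() declarations in the DSL config):

# Illustrative call; hpo_params here covers only the learning rate.
hpo_params = {'lr': {'range': (1e-4, 1e-2)}}
best = run_hpo(model_dict, hpo_params, backend='pytorch')
print(f'Best hyperparameters: {best}')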

🏄🏽‍♂️ Why It’s Awesome

  • One Config, Two Frameworks: No more duplicating effort for PyTorch vs. TensorFlow experiments.
  • Flexible HPO: Supports choice (discrete), range (linear), and log_range (log-scale) for parameters like units, rates, and learning rates (see the mapping sketch after this list).
  • Scalable: Ready to extend to ONNX or JAX with minimal tweaks.
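To make those expression types concrete, here’s a minimal sketch of how they could map onto Optuna’s sampling API (the actual translation lives inside DynamicModel/DynamicTFModel; the spec dict format follows the illustrative model_dict above):

import optuna

# Illustrative mapping from Neural DSL HPO expressions to Optuna samplers.
def suggest_from_spec(trial: optuna.Trial, name: str, spec: dict):
    if spec['hpo'] == 'choice':       # HPO(choice(128, 256))
        return trial.suggest_categorical(name, spec['values'])
    if spec['hpo'] == 'range':        # HPO(range(0.3, 0.7, step=0.1))
        return trial.suggest_float(name, spec['low'], spec['high'], step=spec.get('step'))
    if spec['hpo'] == 'log_range':    # HPO(log_range(1e-4, 1e-2))
        return trial.suggest_float(name, spec['low'], spec['high'], log=True)
    raise ValueError(f"Unknown HPO spec: {spec['hpo']}")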

This feature came from fixing test_hpo_integration_full_pipeline (#434), where we tackled optimizer HPO parsing and 3D input shape issues. Now, it’s a cornerstone of Neural’s cross-framework vision.

👨🏿‍💻 Other Goodies in v0.2.3

While HPO steals the show, here’s what else I’ve been up to:

  • New Layers: Added LayerNormalization, InstanceNormalization, GroupNormalization, SqueezeExcitation, and Attention to the parser (#105, #106, #107, #118, #307). More building blocks for your models!
  • Parser Fixes: Squashed bugs in Concatenate, Activation, Lambda, and Embedding parameter handling (#140, #329, etc.), plus better macro and device support (#136, #327, #328).
  • Validation Boost: Enhanced checks for MaxPooling2D, BatchNormalization, Dropout, and Conv2D to catch errors early (#179, #363, #367, #368).
  • Error Handling: Improved VisitError wrapping with line/column details (#159) for clearer debugging.

Check the full changelog for all the nitty-gritty.

🦾 What’s Next?

Neural DSL is still a WIP: bugs lurk, and features are missing (like full ONNX HPO support).

Upcoming goals:

  • Stabilize macro parsing for nested blocks.
  • Expand layer support (more PyTorch layers, anyone?).
  • Add interactive HPO visualizations with NeuralDbg.

Got ideas or bug reports? Join me on Discord or file an issue. Feedback keeps this project alive!

👾 Join the Journey

v0.2.3 is a step toward making neural network development declarative, flexible, and debuggable.
The multi-framework HPO feature is just the beginning: imagine tuning models across PyTorch, TensorFlow, and beyond with one tool.
What do you think—how would you use HPO in your projects? Drop a comment below!
Happy coding,
Lemniscate-SHA-256
Twitter: @NLang4438 | Neural DSL GitHub

