
Releases: deepmodeling/deepmd-kit

v3.1.0a0

30 Mar 01:47
52f8ece
Pre-release

What's Changed

Highlights

DPA-3

DPA-3 is an advanced interatomic potential leveraging the message-passing architecture. Designed as a large atomic model (LAM), DPA-3 is tailored to integrate and simultaneously train on datasets from various disciplines, encompassing diverse chemical and materials systems across different research domains. Its model design ensures exceptional fitting accuracy and robust generalization within and beyond the training domain. Furthermore, DPA-3 maintains energy conservation and respects the physical symmetries of the potential energy surface, making it a dependable tool for a wide range of scientific applications.

Refer to examples/water/dpa3/input_torch.json for the training script. After training, the PyTorch model can be converted to the JAX model.
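
The end-to-end workflow can be sketched with the commands used elsewhere in these notes; the output filenames below are the defaults assumed here:

```shell
# Train DPA-3 with the PyTorch backend using the example input
dp --pt train examples/water/dpa3/input_torch.json
# Freeze the trained checkpoint into a PyTorch model file
dp --pt freeze
# Convert the frozen PyTorch model to the JAX format
dp convert-backend frozen_model.pth frozen_model.savedmodel
```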

PaddlePaddle backend

The PaddlePaddle backend offers a Python interface similar to the PyTorch backend, ensuring compatibility and flexibility in model development. It brings dynamic-to-static conversion and the PaddlePaddle JIT compiler (CINN) to DeePMD-kit, enabling dynamic shapes and higher-order differentiation. The dynamic-to-static functionality automatically captures the user's dynamic-graph code and converts it into a static graph; the CINN compiler then optimizes the computational graph, improving the efficiency of both training and inference. In experiments with the DPA-2 model, this reduced training time by approximately 40% compared to the dynamic graph.
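
A minimal training sketch, assuming the backend selection flag --pd by analogy with the --tf and --pt flags shown elsewhere in these notes:

```shell
# Train with the PaddlePaddle backend
# (the --pd flag is an assumption, by analogy with --tf and --pt)
dp --pd train input.json
```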

Other new features

All changes in v3.0.1 and v3.0.2 are included.

Full Changelog: v3.0.0...v3.1.0a0

v3.0.2

02 Mar 03:32
70bc6d8

What's Changed

This patch version only contains minor features, bug fixes, enhancements, and documentation improvements.

New features

  • feat(tf): support tensor fitting with hybrid descriptor by @njzjz in #4542

Documentation

  • docs: fix the header of the scaling test table by @njzjz in #4507
  • docs: add sphinx.configuration to .readthedocs.yml by @njzjz in #4553
  • docs: add v3 paper citations by @njzjz in #4619
  • docs: add PyTorch Profiler support details to TensorBoard documentation by @caic99 in #4615

CI/CD

  • CI: switch linux_aarch64 to GitHub hosted runners by @njzjz in #4557

Full Changelog: v3.0.1...v3.0.2

v3.0.1

23 Dec 20:14

This patch version only contains bug fixes, enhancements, and documentation improvements.

What's Changed

Enhancements

  • Perf: print summary on rank 0 (#4434)
  • perf: optimize training loop (#4426)
  • chore: refactor training loop (#4435)
  • Perf: remove redundant checks on data integrity (#4433)
  • Perf: use fused Adam optimizer (#4463)

Bug fixes

  • Fix: add model_def_script to ZBL (#4423)
  • fix: add pairtab compression (#4432)
  • fix(tf): pass type_one_side & exclude_types to DPTabulate in se_r (#4446)
  • fix: print dlerror if dlopen fails (#4485)

Documentation

  • chore(pt): update multitask example (#4419)
  • docs: update DPA-2 citation (#4483)
  • docs: update deepmd-gnn URL (#4482)
  • docs: fix a minor typo on the title of install-from-c-library.md (#4484)

Full Changelog: v3.0.0...v3.0.1

v3.0.0

23 Nov 08:10
e695a91

DeePMD-kit v3: Multiple-backend Framework, DPA-2 Large Atomic Model, and Plugin Mechanisms

After eight months of public tests, we are excited to present the first stable version of DeePMD-kit v3, an advanced version that enables deep potential models with TensorFlow, PyTorch, or JAX backends. Additionally, DeePMD-kit v3 introduces support for the DPA-2 model, a novel architecture optimized for large atomic models. This release also enhances the plugin mechanisms, making it easier to integrate and develop new models.

Highlights

Multiple-backend framework: TensorFlow, PyTorch, and JAX support

DeePMD-kit v3 adds a versatile, pluggable framework providing a consistent training and inference experience across multiple backends. Version 3.0.0 includes:

  • TensorFlow backend: Known for its computational efficiency with a static graph design.
  • PyTorch backend: A dynamic graph backend that simplifies model extension and development.
  • DP backend: Built with NumPy and Array API, a reference backend for development without heavy deep-learning frameworks.
  • JAX backend: Based on the DP backend via Array API, a static graph backend.

The per-backend feature matrix compares the TensorFlow, PyTorch, JAX, and DP backends across the local frame, se_e2_a, se_e2_r, se_e3, se_e3_tebd, DPA1, DPA2, and Hybrid descriptors; the energy, dipole, polar, DOS, and property fittings; the ZBL, DPLR, DPRc, and Spin models; gradient calculation; model training; model compression; and Python and C++ inference.

Critical features of the multiple-backend framework include the ability to:

  • Train models using different backends with the same training data and input script, allowing backend switching based on your efficiency or convenience needs.
# Training a model using the TensorFlow backend
dp --tf train input.json
dp --tf freeze
dp --tf compress

# Training a model using the PyTorch backend
dp --pt train input.json
dp --pt freeze
dp --pt compress
  • Convert models between backends using dp convert-backend, with backend-specific file extensions (e.g., .pb for TensorFlow and .pth for PyTorch).
# Convert from a TensorFlow model to a PyTorch model
dp convert-backend frozen_model.pb frozen_model.pth
# Convert from a PyTorch model to a TensorFlow model
dp convert-backend frozen_model.pth frozen_model.pb
# Convert from a PyTorch model to a JAX model
dp convert-backend frozen_model.pth frozen_model.savedmodel
# Convert from a PyTorch model to the backend-independent DP format
dp convert-backend frozen_model.pth frozen_model.dp
  • Run inference across backends via interfaces like dp test, Python/C++/C interfaces, or third-party packages (e.g., dpdata, ASE, LAMMPS, AMBER, Gromacs, i-PI, CP2K, OpenMM, ABACUS, etc.).
# In a LAMMPS file:
# run LAMMPS with a TensorFlow backend model
pair_style deepmd frozen_model.pb
# run LAMMPS with a PyTorch backend model
pair_style deepmd frozen_model.pth
# run LAMMPS with a JAX backend model
pair_style deepmd frozen_model.savedmodel
# Calculate model deviation using different models
pair_style deepmd frozen_model.pb frozen_model.pth frozen_model.savedmodel out_file md.out out_freq 100
  • Add a new backend to DeePMD-kit much more quickly if you would like to contribute one.
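
Once a model is frozen in any backend, it can be evaluated from Python through the same interface. A minimal sketch, assuming a frozen model file frozen_model.pth and an illustrative water geometry and type map:

```python
import numpy as np
from deepmd.infer import DeepPot

# The backend is selected by the model file extension
# (.pb for TensorFlow, .pth for PyTorch, .savedmodel for JAX).
dp = DeepPot("frozen_model.pth")

# One frame; coordinates are flattened to shape (nframes, natoms * 3).
coords = np.array([[0.0, 0.0, 0.0, 0.96, 0.0, 0.0, -0.24, 0.93, 0.0]])
cells = None  # non-periodic system
atom_types = [0, 1, 1]  # indices into the model's type map (assumed [O, H])

energy, force, virial = dp.eval(coords, cells, atom_types)
```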

DPA-2 model: a large atomic model as a multi-task learner

The DPA-2 model offers a robust architecture for large atomic models (LAM), accurately representing diverse chemical systems for high-quality simulations. In this release, DPA-2 can be trained using the PyTorch backend, supporting both single-task (see examples/water/dpa2) and multi-task (see examples/water_multi_task/pytorch_example) training schemes. DPA-2 is available for Python/C++ inference in the JAX backend.

The DPA-2 descriptor comprises two components, repinit and repformer.

The PyTorch backend supports training strategies for large atomic models, including:

  • Parallel training: Train large atomic models on multiple GPUs for efficiency.
torchrun --nproc_per_node=4 --no-python dp --pt train input.json
  • Multi-task training: Train large atomic models across a broad range of data computed at different DFT levels, sharing a common descriptor. An example is given in examples/water_multi_task/pytorch_example/input_torch.json.
  • Fine-tuning: Train a pre-trained large atomic model on a smaller, task-specific dataset. The PyTorch backend supports the --finetune argument in the dp --pt train command line.
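
The fine-tuning workflow above can be sketched as follows; the pre-trained checkpoint name is an assumption:

```shell
# Fine-tune a pre-trained large atomic model on a task-specific dataset;
# pretrained.pt is a placeholder for the actual checkpoint file
dp --pt train input.json --finetune pretrained.pt
```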

Plugin mechanisms for external models

In version 3.0.0, plugin capabilities have been implemented to support the development and integration of potential energy models using the TensorFlow, PyTorch, or JAX backends, leveraging DeePMD-kit's trainer, loss functions, and interfaces. A plugin example is deepmd-gnn, which supports training the MACE and NequIP models in DeePMD-kit with the familiar commands:

dp --pt train mace.json
dp --pt freeze
dp --pt test -m frozen_model.pth -s ../data/

Other new features

  • Descriptor se_e3_tebd. (#4066)
  • Fitting the property (#3867).
  • New training parameters: max_ckpt_keep (#3441), change_bias_after_training (#3993), and stat_file.
  • New command line interface: dp change-bias (#3993) and dp show (#3796).
  • Support generating JSON schema for integration with VSCode (#3849).
  • The latest LAMMPS version (stable_29Aug2024_update1) is supported. (#4088, #4179)

Breaking changes

  • The deepmodeling conda channel is deprecated. Use the conda-forge channel instead. (#3462, #4385)
  • The offline package and conda packages for CUDA 11 are dropped.
  • Support for Python 3.7 and 3.8 is dropped. (#3185, #4185)
  • The minimum supported versions of the deep learning frameworks are TensorFlow 2.7, PyTorch 2.1, JAX 0.4.33, and NumPy 1.21.
  • We require all model files to have the correct filename extension for all interfaces so that the corresponding backend can load them. TensorFlow model files must end with the .pb extension.
  • Bias is removed by default from type embedding. (#3958)
  • The spin model is refactored, and its usage in the LAMMPS module has been changed. (#3301, #4321)
  • Multi-task training support is removed from the TensorFlow backend. (#3763)
  • The set_prefix key is deprecated. (#3753)
  • dp test now uses all sets for training and test. In previous versions, only the last set was used as the test set in dp test. (#3862)
  • The Python module structure is fully refactored. The old deepmd module was moved to deepmd.tf without other API changes, and deepmd_utils was moved to deepmd without other API changes. (#3177, #3178)
  • The Python class DeepTensor (including DeepDipole and DeepPolar) now returns atomic tensors with dimension natoms instead of nsel_atoms. (#3390)
  • C++ 11 support is dropped. (#4068)

For other changes, refer to Full Changelog: v2.2.11...v3.0.0rc0

Contributors

The PyTorch backend was developed in the dptech-corp/deepmd-pytorch repository and was later fully merged into the deepmd-kit repository in #3180.

v3.0.0rc0

14 Nov 19:36
0ad4289
Pre-release

DeePMD-kit v3: Multiple-backend Framework, DPA-2 Large Atomic Model, and Plugin Mechanisms

We are excited to present the first release candidate of DeePMD-kit v3, an advanced version that enables deep potential models with TensorFlow, PyTorch, or JAX backends. Additionally, DeePMD-kit v3 introduces support for the DPA-2 model, a novel architecture optimized for large atomic models. This release also enhances the plugin mechanisms, making it easier to integrate and develop new models.

Highlights

Multiple-backend framework: TensorFlow, PyTorch, and JAX support

DeePMD-kit v3 adds a versatile, pluggable framework providing a consistent training and inference experience across multiple backends. Version 3.0.0 includes:

  • TensorFlow backend: Known for its computational efficiency with a static graph design.
  • PyTorch backend: A dynamic graph backend that simplifies model extension and development.
  • DP backend: Built with NumPy and Array API, a reference backend for development without heavy deep-learning frameworks.
  • JAX backend: Based on the DP backend via Array API, a static graph backend.

The per-backend feature matrix compares the TensorFlow, PyTorch, JAX, and DP backends across the local frame, se_e2_a, se_e2_r, se_e3, se_e3_tebd, DPA1, DPA2, and Hybrid descriptors; the energy, dipole, polar, DOS, and property fittings; the ZBL, DPLR, DPRc, and Spin models; gradient calculation; model training; model compression; and Python and C++ inference.

Critical features of the multiple-backend framework include the ability to:

  • Train models using different backends with the same training data and input script, allowing backend switching based on your efficiency or convenience needs.
# Training a model using the TensorFlow backend
dp --tf train input.json
dp --tf freeze
dp --tf compress

# Training a model using the PyTorch backend
dp --pt train input.json
dp --pt freeze
dp --pt compress
  • Convert models between backends using dp convert-backend, with backend-specific file extensions (e.g., .pb for TensorFlow and .pth for PyTorch).
# Convert from a TensorFlow model to a PyTorch model
dp convert-backend frozen_model.pb frozen_model.pth
# Convert from a PyTorch model to a TensorFlow model
dp convert-backend frozen_model.pth frozen_model.pb
# Convert from a PyTorch model to a JAX model
dp convert-backend frozen_model.pth frozen_model.savedmodel
# Convert from a PyTorch model to the backend-independent DP format
dp convert-backend frozen_model.pth frozen_model.dp
  • Run inference across backends via interfaces like dp test, Python/C++/C interfaces, or third-party packages (e.g., dpdata, ASE, LAMMPS, AMBER, Gromacs, i-PI, CP2K, OpenMM, ABACUS, etc.).
# In a LAMMPS file:
# run LAMMPS with a TensorFlow backend model
pair_style deepmd frozen_model.pb
# run LAMMPS with a PyTorch backend model
pair_style deepmd frozen_model.pth
# run LAMMPS with a JAX backend model
pair_style deepmd frozen_model.savedmodel
# Calculate model deviation using different models
pair_style deepmd frozen_model.pb frozen_model.pth frozen_model.savedmodel out_file md.out out_freq 100
  • Add a new backend to DeePMD-kit much more quickly if you would like to contribute one.

DPA-2 model: Towards a universal large atomic model for molecular and material simulation

The DPA-2 model offers a robust architecture for large atomic models (LAM), accurately representing diverse chemical systems for high-quality simulations. In this release, DPA-2 is trainable in the PyTorch backend, with an example configuration available in examples/water/dpa2. DPA-2 is available for Python inference in the JAX backend.

The DPA-2 descriptor comprises two components, repinit and repformer.

The PyTorch backend supports training strategies for large atomic models, including:

  • Parallel training: Train large atomic models on multiple GPUs for efficiency.
torchrun --nproc_per_node=4 --no-python dp --pt train input.json
  • Multi-task training: Train large atomic models across a broad range of data computed at different DFT levels, sharing a common descriptor. An example is given in examples/water_multi_task/pytorch_example/input_torch.json.
  • Fine-tuning: Train a pre-trained large atomic model on a smaller, task-specific dataset. The PyTorch backend supports the --finetune argument in the dp --pt train command line.

Plugin mechanisms for external models

In v3.0.0, plugin capabilities allow you to develop models with TensorFlow, PyTorch, or JAX, leveraging DeePMD-kit's trainer, loss functions, and interfaces. A plugin example is deepmd-gnn, which supports training the MACE and NequIP models in DeePMD-kit with the familiar commands:

dp --pt train mace.json
dp --pt freeze
dp --pt test -m frozen_model.pth -s ../data/

Other new features

  • Descriptor se_e3_tebd. (#4066)
  • Fitting the property (#3867).
  • New training parameters: max_ckpt_keep (#3441), change_bias_after_training (#3993), and stat_file.
  • New command line interface: dp change-bias (#3993) and dp show (#3796).
  • Support generating JSON schema for integration with VSCode (#3849).
  • The latest LAMMPS version (stable_29Aug2024_update1) is supported. (#4088, #4179)

Breaking changes

  • Support for Python 3.7 and 3.8 is dropped. (#3185, #4185)
  • We require all model files to have the correct filename extension for all interfaces so that the corresponding backend can load them. TensorFlow model files must end with the .pb extension.
  • Bias is removed by default from type embedding. (#3958)
  • The spin model is refactored, and its usage in the LAMMPS module has been changed. (#3301, #4321)
  • Multi-task training support is removed from the TensorFlow backend. (#3763)
  • The set_prefix key is deprecated. (#3753)
  • dp test now uses all sets for training and test. In previous versions, only the last set was used as the test set in dp test. (#3862)
  • The Python module structure is fully refactored. The old deepmd module was moved to deepmd.tf without other API changes, and deepmd_utils was moved to deepmd without other API changes. (#3177, #3178)
  • The Python class DeepTensor (including DeepDipole and DeepPolar) now returns atomic tensors with dimension natoms instead of nsel_atoms. (#3390)
  • C++ 11 support is dropped. (#4068)

For other changes, refer to Full Changelog: v2.2.11...v3.0.0rc0

Contributors

The PyTorch backend was developed in the dptech-corp/deepmd-pytorch repository and was later fully merged into the deepmd-kit repository in #3180.

v3.0.0b4

25 Sep 16:01
0b3f860
Pre-release

What's Changed

Breaking changes

  • breaking: drop C++ 11 by @njzjz in #4068
  • breaking(pt/dp): tune new sub-structures for DPA2 by @iProzd in #4089
    The default values of the new options g1_out_conv and g1_out_mlp are set to True; in previous versions, they behaved as False.

Enhancement

  • fix: bump LAMMPS to stable_29Aug2024 by @njzjz in #4088
  • chore(pt): cleanup deadcode by @wanghan-iapcm in #4142
  • chore(pt): make comm_dict for dpa2 noncompulsory when nghost is 0 by @njzjz in #4144
  • Set ROCM_ROOT to ROCM_PATH when it exist by @sigbjobo in #4150
  • chore(pt): move deepmd.pt.infer.deep_eval.eval_model to tests by @njzjz in #4153

Documentation

  • docs: improve docs for environment variables by @njzjz in #4070
  • docs: dynamically generate command outputs by @njzjz in #4071
  • docs: improve error message for inconsistent type maps by @njzjz in #4074
  • docs: add multiple packages to intersphinx_mapping by @njzjz in #4075
  • docs: document CMake variables using Sphinx styles by @njzjz in #4079
  • docs: update ipi installation command by @njzjz in #4081
  • docs: fix the default value of DP_ENABLE_PYTORCH by @njzjz in #4083
  • docs: fix defination of se_e3 by @njzjz in #4113
  • docs: update DeepModeling URLs by @njzjz-bot in #4119
  • docs(pt): examples for new dpa2 model by @iProzd in #4138

Bugfix

  • fix: fix PT AutoBatchSize OOM bug and merge execute_all into base by @njzjz in #4047
  • fix: replace datetime.datetime.utcnow which is deprecated by @njzjz in #4067
  • fix:fix LAMMPS MPI tests with mpi4py 4.0.0 by @njzjz in #4032
  • fix(pt): invalid type_map when multitask training by @Cloudac7 in #4031
  • fix: manage testing models in a standard way by @njzjz in #4028
  • fix(pt): fix ValueError when array byte order is not native by @njzjz in #4100
  • fix(pt): convert torch.__version__ to str when serializing by @njzjz in #4106
  • fix(tests): fix skip_dp by @njzjz in #4111
  • [Fix] Wrap log_path with Path by @HydrogenSulfate in #4117
  • fix: bugs in uts for property fit by @Chengqian-Zhang in #4120
  • fix: type of the preset out bias by @wanghan-iapcm in #4135
  • fix(pt): fix zero inputs for LayerNorm by @njzjz in #4134
  • fix(pt/dp): share params of repinit_three_body by @iProzd in #4139
  • fix(pt): move entry point from deepmd.pt.model to deepmd.pt by @njzjz in #4146
  • fix: fix DPH5Path.glob for new keys by @njzjz in #4152
  • fix(pt): make state_dict safe for weights_only by @iProzd in #4148
  • fix(pt): fix compute_output_stats_global when atomic_output is None by @njzjz in #4155
  • fix(pt ut): make separated uts deterministic by @iProzd in #4162
  • fix(pt): finetuning property/dipole/polar/dos fitting with multi-dimensional data causes error by @Chengqian-Zhang in #4145

Dependency updates

  • chore(deps): bump scikit-build-core to 0.9.x by @njzjz in #4038
  • build(deps): bump pypa/cibuildwheel from 2.19 to 2.20 by @dependabot in #4045
  • build(deps): bump pypa/cibuildwheel from 2.20 to 2.21 by @dependabot in #4127

CI/CD

  • ci: add include-hidden-files to actions/upload-artifact by @njzjz in #4095
  • ci: test Python 3.12 by @njzjz in #4059
  • CI(codecov): do not notify until all reports are ready by @njzjz in #4136

Full Changelog: v3.0.0b3...v3.0.0b4

v3.0.0b3

27 Jul 04:25
0e0fc1a
Pre-release

What's Changed

Full Changelog: v3.0.0b2...v3.0.0b3

v3.0.0b2

26 Jul 18:33
7f61048
Pre-release

What's Changed

New features

  • feat: add documentation and options for multi-task arguments by @njzjz in #3989
  • feat: plain text model format by @njzjz in #4025
  • feat: allow model arguments to be registered outside by @njzjz in #3995
  • feat: add get_model classmethod to BaseModel by @njzjz in #4002

Bugfixes

  • fix(cmake): fix set_if_higher by @njzjz in #3977
  • fix(pt): ensure suffix of --init_model and --restart is .pt by @njzjz in #3980
  • fix(pt): do not overwrite disp_file when restarting training by @njzjz in #3985
  • fix(cc): compile select_map<int> when TensorFlow backend is off by @njzjz in #3987
  • fix(pt): make 'find_' to be float in get data by @iProzd in #3992
  • fix float precision problem of se_atten in line 217 (#3961) by @LiuGroupHNU in #3978
  • fix: fix errors for zero atom inputs by @njzjz in #4005
  • fix(pt): optimize graph memory usage by @iProzd in #4006
  • fix(pt): fix lammps nlist sort with large sel by @iProzd in #3993
  • fix(cc): add atomic argument to DeepPotBase::computew by @njzjz in #3996
  • fix(lmp): call model deviation interface without atomic properties when they are not requested by @njzjz in #4012
  • fix(c): call C++ interface without atomic properties when they are not requested by @njzjz in #4010
  • fix(pt): fix get_dim for DescrptDPA1Compat by @iProzd in #4007
  • fix(cc): fix message passing when nloc is 0 by @njzjz in #4021
  • fix(pt): use user seed in DpLoaderSet by @iProzd in #4015

CI/CD

  • ci: pin PT to 2.3.1 when using CUDA by @njzjz in #4009

Full Changelog: v3.0.0b1...v3.0.0b2

v3.0.0b1

14 Jul 07:11
ad96750
Pre-release

What's Changed

Breaking Changes

  • breaking(pt/tf/dp): disable bias in type embedding by @iProzd in #3958
    This change means PyTorch checkpoints generated by v3.0.0b0 may not be usable in v3.0.0b1.

New features

  • feat: add plugin entry point for PT by @njzjz in #3965
  • feat(tf): improve the activation setting in tebd by @iProzd in #3971

Full Changelog: v3.0.0b0...v3.0.0b1

v3.0.0b0

03 Jul 19:22
29db791
Pre-release

What's Changed

Compared to v3.0.0a0, v3.0.0b0 contains all changes in v2.2.10 and v2.2.11, as well as:

Breaking changes

  • breaking: remove multi-task support in tf by @iProzd in #3763
  • breaking: deprecate set_prefix by @njzjz in #3753
  • breaking: use all sets for training and test by @njzjz in #3862. In previous versions, only the last set was used as the test set in dp test.
  • PyTorch models trained in v3.0.0a0 cannot be used in v3.0.0b0 due to several changes. As mentioned in the release note of v3.0.0a0, we didn't promise backward compatibility for v3.0.0a0.
  • The DPA-2 configurations have been changed by @iProzd in #3768. The old format in v3.0.0a0 is no longer supported.

Major new features

  • Latest supported features in the PyTorch and DP backends, which are consistent with the TensorFlow backend where possible:
    • Descriptor: se_e2_a, se_e2_r, se_e3, se_atten, se_atten_v2, dpa2, hybrid;
    • Fitting: energy, dipole, polar, dos, fparam/aparam support
    • Model: standard, DPRc, frozen, ZBL, Spin
    • Python inference interface
    • PyTorch only: C++ inference interface for energy only
    • PyTorch only: TensorBoard
  • Support using the DPA-2 model in LAMMPS by @CaRoLZhangxy in #3657. If you install the Python interface from the source, you must set the environment variable DP_ENABLE_PYTORCH=1 to build the PyTorch customized OPs.
  • New command line options dp show by @Chengqian-Zhang in #3796 and dp change-bias by @iProzd in #3933.
  • New training options max_ckpt_keep by @iProzd in #3441 and change_bias_after_training by @iProzd in #3933. Several training options now take effect in the PyTorch backend, such as seed by @iProzd in #3773, disp_training and time_training by @iProzd in #3775, and profiling by @njzjz in #3897.
  • Performance improvement of the PyTorch backend by @njzjz in #3422, #3424, #3425 and by @iProzd in #3826
  • Support generating JSON schema for integration with VSCode by @njzjz in #3849
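
For the source installation mentioned above, enabling the PyTorch customized OPs can be sketched as follows (run from the repository root; the pip invocation is illustrative):

```shell
# Build the PyTorch customized OPs when installing the Python
# interface from source
export DP_ENABLE_PYTORCH=1
pip install .
```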

Minor enhancements and code refactoring are listed at v3.0.0a0...v3.0.0b0.

Full Changelog: v3.0.0a0...v3.0.0b0

For discussion of v3, please go to #3401