Interdisciplinary Deep-Learning Platform

DeepMuon

1.23.52 Public Alpha Edition Released

Next-Generation AI For Science

Install DeepMuon

DeepMuon can be installed from several sources, all of which are stable releases. Please make sure you have met the prerequisites (such as numpy) first. Here we list some important prerequisites of DeepMuon:

Package Name: click, tqdm, numpy, pandas, nni, prettytable, ptflops, torchinfo, captum, monai, pynvml, psutil, GPUtil, matplotlib, timm, SimpleITK, scikit-learn, scikit-image, tensorboard, yapf, parso, opencv-python, rdkit, pymatgen

Edition: the latest release of each package is recommended.

Most of the prerequisites listed above will be installed automatically when installing DeepMuon. However, you need to make sure a CUDA build of PyTorch >= 1.12 is installed properly before you install DeepMuon. The lowest supported version of PyTorch is 1.10.0; with anything older, DeepMuon may fail with fatal errors.

We strongly suggest installing torchvision and torchaudio at the same time. You also need to install nni manually; it is not yet available on macOS, so the neural network hyperparameter searching (NNHS for short) feature will be disabled without nni. In addition, the dgl module is required if you want to use the graph neural networks provided by DeepMuon.

As for the Python environment, the minimum supported version is Python 3.6.0.

DeepMuon supports Windows/macOS/Linux platforms. We recommend installing DeepMuon from source for the most customizable experience.

DeepMuon is built on the latest (>= 1.12) PyTorch, and any version newer than 1.10 is supported. Because not every platform supports the latest PyTorch, the Fully Sharded Data Parallel (FSDP) feature is disabled when the installed torch version is too low (such as torch == 1.10.0).
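The version gate described above can be sketched as follows. This is a minimal illustration, not DeepMuon's actual code; the helper name `fsdp_available` is hypothetical:

```python
def parse_version(version: str) -> tuple:
    """Turn a version string like '1.12.1+cu113' into a comparable tuple."""
    core = version.split("+")[0]  # drop local build tags such as '+cu113'
    return tuple(int(part) for part in core.split(".")[:2])

def fsdp_available(torch_version: str) -> bool:
    """FSDP is usable only on PyTorch >= 1.12 (hypothetical helper)."""
    return parse_version(torch_version) >= (1, 12)

print(fsdp_available("1.12.1+cu113"))  # True
print(fsdp_available("1.10.0"))        # False
```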

From source step by step:
git clone git@github.com:Airscker/DeepMuon.git
cd DeepMuon
pip install -v -e ./
Make Deep-Learning
Simple & Efficient

Create

Practice

Solve

Progress

Key Features

Based on PyTorch

Supports almost all features of the newest PyTorch and can be used with minimal effort to realize your ideas.

Distributed Training

Uses DDP/FSDP parallelism to accelerate your experiments and make large-model training as practical as possible.

Gradient Accumulation

Helps relieve shortages of computing resources, so that more research projects can escape the near-online training regime forced by small batch sizes.
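Gradient accumulation sums the gradients of several micro-batches before taking one optimizer step, so the effective batch size grows without extra memory. A minimal framework-agnostic sketch (the helper name and scalar gradients are illustrative, not DeepMuon's API):

```python
def accumulate_gradients(micro_batch_grads, accumulation_steps):
    """Average gradients over `accumulation_steps` micro-batches,
    yielding one effective gradient per optimizer step."""
    effective_grads = []
    running = 0.0
    for i, grad in enumerate(micro_batch_grads, start=1):
        running += grad / accumulation_steps  # scale so the sum is a mean
        if i % accumulation_steps == 0:
            effective_grads.append(running)  # here a real loop would call optimizer.step()
            running = 0.0
    return effective_grads

# Four micro-batches with accumulation_steps=2 -> two optimizer steps.
print(accumulate_gradients([1.0, 3.0, 5.0, 7.0], 2))  # [2.0, 6.0]
```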

Gradient Clip

Restricts the gradient norm to keep optimization from overshooting the global minimum, helping to mitigate gradient vanishing/exploding problems.
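Clipping by global norm rescales all gradients whenever their combined L2 norm exceeds a threshold, in the spirit of PyTorch's `torch.nn.utils.clip_grad_norm_`. A minimal sketch with scalar gradients:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient values so their global L2 norm
    does not exceed `max_norm` (no-op when already within bounds)."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm <= max_norm:
        return grads
    scale = max_norm / total_norm
    return [g * scale for g in grads]

print(clip_by_global_norm([3.0, 4.0], 1.0))  # norm 5.0 clipped to 1.0
print(clip_by_global_norm([0.3, 0.4], 1.0))  # unchanged, norm is only 0.5
```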

Mixed Precision Training

Accelerates model training with fewer computing resources and reduces the time spent waiting for final results.

Attribution Analysis

Find out what the neural network is learning and understand the function of different neurons/layers of the model.

Excellent Logging System

Uses Tensorboard to visualize evaluation metrics and the training procedure, and records detailed results of training & testing.

Configuration Mechanism

Customize your model/loss/dataset by providing the proper interfaces, then start your experiment with a simple configuration.
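A configuration-driven experiment could look like the sketch below. The keys, component names, and the toy `build` factory are illustrative placeholders, not DeepMuon's exact schema:

```python
# Hypothetical experiment configuration: each block names a registered
# component ("backbone") plus the keyword arguments used to build it.
config = dict(
    model=dict(backbone="ResNet", layers=18, num_classes=2),
    loss_fn=dict(backbone="CrossEntropyLoss"),
    optimizer=dict(backbone="AdamW", lr=1e-3, weight_decay=0.01),
    hyperpara=dict(epochs=100, batch_size=32),
)

def build(block):
    """Toy factory: split a config block into component name and kwargs."""
    kwargs = {k: v for k, v in block.items() if k != "backbone"}
    return block["backbone"], kwargs

name, kwargs = build(config["optimizer"])
print(name, kwargs)  # AdamW {'lr': 0.001, 'weight_decay': 0.01}
```

In a registry-based design like this, swapping a model or optimizer means editing one line of the configuration rather than the training code.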

Neural network hyperparameter searching

Based on NNI, it searches for the best hyperparameter combinations automatically and visualizes the optimization course.
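NNI drives such a search from a declarative search space using its documented `_type`/`_value` convention. The specific hyperparameters and ranges below are only an example:

```python
import json

# Candidate hyperparameters for NNI to explore; each entry follows
# NNI's search-space convention: {"_type": sampler, "_value": options}.
search_space = {
    "lr": {"_type": "loguniform", "_value": [1e-5, 1e-1]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64, 128]},
    "dropout": {"_type": "uniform", "_value": [0.1, 0.5]},
}
print(json.dumps(search_space, indent=2))
```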

Universities using DeepMuon

Where DeepMuon was born

Reconstructing neutrino traces in the TRIDENT Neutrino Telescope based on deep learning.

First ball of DeepMuon

Artificial Intelligence-enabled CMR interpretation.

Next shot coming

Let's see what will happen.

Blogs
PandaX-4T III radioactive source localization

The starting point of DeepMuon and how we used it to solve physics problems.

Read More...
TRIDENT neutrino telescope (Hailing Plan) tracing task

Creative work based on Residual CNN and Spatial Pyramid Pooling.

Read More...
Artificial Intelligence-enabled CMR interpretation

The powerful Video Swin-Transformer that gives hope to patients all over the world.

Read More...