
Releases: OpenBMB/BMTrain

BMTrain v1.0.0

26 Feb 05:37

What's Changed

BMTrain v0.2.3

17 Aug 07:49

What's Changed

Full Changelog: 0.2.2...0.2.3

BMTrain v0.2.2

05 May 02:08

What's Changed

  • Undo a deletion of detach in the previous version by @Achazwl in #69
  • avoid empty state when justify scale by @Achazwl in #68
  • fix running BMTrain with one GPU without torchrun by @Achazwl in #70
  • fix inspector grad when a tensor is not recorded in some layers by @Achazwl in #90
  • fix: make the load stream wait for the default stream after init_parameters by @Achazwl in #78
  • temporary fix for loading a state dict with BMTrain + OpenDelta by @Achazwl in #77
  • support multiple inputs/outputs in TransformerBlockList by @Achazwl in #92 #91

Full Changelog: 0.2.1...0.2.2

BMTrain v0.2.1

30 Jan 03:18

What's Changed

  • fix output shape mismatch after CheckpointBlock by @Achazwl in #64
  • add tests for gradient accumulation and the state_dict interface by @MayDomine in #61
  • fix inspected grad mean/std to report 0 instead of None by @Achazwl in #60

Full Changelog: 0.2.0...0.2.1

v0.2.0

15 Dec 09:44

Update Log 0.2.0

New Features

1. Added an Optimizer Manager to support various optimizer algorithms.

Before 0.2.0, the optimizer was tightly coupled to the loss scaler, which meant users could not use multiple optimizers at the same time when training a model in fp16.

======= Before 0.2.0 =======

for iteration in range(1000):
    # zero grad
    optimizer.zero_grad()

    # ...
    # loss scale and backward
    loss = optimizer.loss_scale(loss)
    loss.backward()

    # optimizer step
    bmtrain.optim_step(optimizer, lr_scheduler)

bmtrain.optim_step allows only one optimizer and at most one lr_scheduler, which cannot handle more complex scenarios.

======= After 0.2.0 =======

# create a new instance of optimizer manager
optim_manager = bmtrain.optim.OptimManager(loss_scale=1024)
# let optim_manager handle all the optimizer and (optional) their corresponding lr_scheduler
optim_manager.add_optimizer(optimizer, lr_scheduler)
# add_optimizer can be called multiple times to add other optimizers.

for iteration in range(1000):
    # zero grad
    optim_manager.zero_grad() # calling zero_grad for each optimizer
    
    # ...
    # loss scale and backward
    optim_manager.backward(loss)

    # optimizer step
    optim_manager.step()

Starting from BMTrain 0.2.0, we provide OptimManager to manage optimizers and lr schedulers. OptimManager supports managing multiple optimizers and lr_schedulers at the same time, and allows setting the loss scale independently. OptimManager can also manage PyTorch native optimizers, such as SGD and AdamW.
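
For instance, managing two PyTorch native optimizers under one manager might look like the sketch below. This is a minimal illustration: the model, dtypes, and hyperparameters are placeholders and not part of the release notes; only the OptimManager calls follow the API described above.

import torch
import bmtrain

bmtrain.init_distributed()  # BMTrain must be initialized first (normally launched via torchrun)

# two independent parameter groups, each with its own PyTorch native optimizer (illustrative)
encoder = torch.nn.Linear(16, 16).cuda().half()
decoder = torch.nn.Linear(16, 16).cuda().half()
opt_enc = torch.optim.AdamW(encoder.parameters(), lr=1e-3)
opt_dec = torch.optim.SGD(decoder.parameters(), lr=1e-2)

# one manager owns the loss scale and drives both optimizers
optim_manager = bmtrain.optim.OptimManager(loss_scale=1024)
optim_manager.add_optimizer(opt_enc)
optim_manager.add_optimizer(opt_dec)  # the lr_scheduler argument is optional

for iteration in range(1000):
    optim_manager.zero_grad()                  # calls zero_grad on every managed optimizer
    x = torch.randn(4, 16, device="cuda", dtype=torch.half)
    loss = decoder(encoder(x)).float().mean()
    optim_manager.backward(loss)               # scales the loss, then calls backward
    optim_manager.step()                       # steps every managed optimizer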

2. Pipeline Parallelism

In this version, BMTrain adds a new parallel algorithm: pipeline parallelism.
To enable it, only one line of code needs to be modified.

======= ZeRO =======

layers = bmt.TransformerBlockList([
  # ...
])

======= Pipeline =======

layers = bmt.PipelineTransformerBlockList([
  # ...
])

Replacing TransformerBlockList with PipelineTransformerBlockList switches the parallel algorithm from ZeRO to pipeline parallelism.
The number of pipeline stages can be set by passing the pipe_size parameter to bmtrain.init_distributed.
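
Put together, switching an existing ZeRO setup over to pipeline parallelism might look like the following sketch. The pipe_size value is illustrative; the blocks inside the list stay exactly as they were.

import bmtrain as bmt

# pipe_size sets the number of pipeline stages (illustrative value)
bmt.init_distributed(pipe_size=4)

layers = bmt.PipelineTransformerBlockList([
  # ... the same transformer blocks (e.g. wrapped in bmt.CheckpointBlock)
  #     that were previously passed to bmt.TransformerBlockList
])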

3. Others

  • Supports BF16.
  • Tensors recorded in the inspector support backward propagation.
  • Adds new tests.

What's Changed

Full Changelog: 0.1.8...0.2.0

Release 0.1.8 patch 1

22 Sep 03:23

What's Changed

  • Fix bug: requires_grad_ is now usable for parameters in CheckpointBlock by @zh-zheng in #48

Full Changelog: 0.1.8...0.1.8.post1

Release v0.1.8

08 Jul 08:52

What's Changed

  • Support the maximize parameter for Adam when dtype is torch.half by @alphaGem in #35
  • add __iter__ to make TransformerBlockList iterable by @MayDomine in #37
  • Support PyTorch 1.12.0 #38
  • Set default rank and world_size when BMTrain is not initialized. #38

New Contributors

Full Changelog: 0.1.7...0.1.8

v0.1.7 Patch

15 Jun 07:27

BMTrain v0.1.7 was unable to release GPU memory in some cases, causing OOM problems. This patch fixes the issue.

What's Changed

  • FIX: release the parameter in some special cases by @MayDomine in #32

Full Changelog: 0.1.7...0.1.7.post1

Release v0.1.7

14 Jun 05:50

What's Changed

  • NEW: add ZeRO-2 by @MayDomine in #29
  • FIX: load optimizer state dict

New Contributors

Full Changelog: 0.1.6...0.1.7

Release v0.1.6

18 May 08:48

What's Changed

  • FIX: load state dict by @a710128 in #26
  • FIX: remove CUDA events from optimizer state_dict. FIX: F.adam maximize… by @a710128 in #25

Full Changelog: 0.1.5...0.1.6