We have released Chainer v7.0.0b2! The release notes are as follows.
This is the release note of v7.0.0b2. See here for the complete list of solved issues and merged PRs.
ChainerX has several new backproppable ops, such as the ELU and softplus activation functions, and loss functions including absolute error, squared error, Huber loss, and Gaussian KL divergence. ChainerX is also supported in all OptimizerHooks when used through Chainer. TabularDataset has also been improved with new features.
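Several of the new ops are simple elementwise formulas. As a rough illustration of what they compute, here is a plain-Python sketch (scalar functions with assumed default parameters alpha, beta, and delta; not ChainerX's actual implementation or exact signatures):

```python
import math

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs, alpha * (exp(x) - 1) otherwise.
    return x if x > 0 else alpha * math.expm1(x)

def softplus(x, beta=1.0):
    # Softplus: log(1 + exp(beta * x)) / beta, a smooth approximation of ReLU.
    return math.log1p(math.exp(beta * x)) / beta

def huber_loss(y, t, delta=1.0):
    # Huber loss: quadratic for small residuals, linear for large ones.
    a = abs(y - t)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)
```

The ChainerX versions operate on arrays and support backpropagation through each op.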
Changes without compatibility:

- The Variable.grad getter now raises an error when it is called before calling cleargrad or zerograd, or before setting the gradient directly. (#7146)
- The usage of epsilon in BatchRenormalization is fixed; this affects inference behavior. (#7202)
- HierarchicalCommunicator, SingleNodeCommunicator and TwoDimensionalCommunicator are no longer necessary, as NCCL now supports inter-node communication. (#7697)

New Features:

- WeightStandardization link hook (#6678, thanks @hitsgub!)
- chainerx.dsplit (#7031, thanks @ishanrai05!)
- chainerx.left_shift and chainerx.right_shift (#7339, thanks @sky58!)
- chainerx.elu (#7439, thanks @aksub99!)
- TabularDataset (#7493)
- TabularDataset.__iter__ (#7601)
- Variable.mean (#7670)
- chainerx.softplus (#7679, thanks @aksub99!)

Enhancements:

- top_data as -np.inf and argmax_data as -1 in F.roi_max_pooling_2d (#6237, thanks @knorth55!)
- cleargrad (#7146)
- chainerx.grad from chainer.grad (#7464)
- ImportError (#7518)
- Make the device argument a keyword-only argument (#7537, thanks @kshitij12345!)
- Array::At and __getitem__ (#7561)
- chainerx.ndarray._is_chained (#7565)
- squared_difference and fix docs (#7582)
- allreduce_grad() and functions related to it (#7604)
- Raise IndexError if the index __getitem__ takes is out of bounds (#7614)
- Use six.integer_types for the axis check in F.concat (#7632, thanks @knorth55!)
- optimizer_hooks.GradientClipping for ChainerX (#7641)
- optimizer_hooks.GradientHardClipping for ChainerX (#7656, thanks @kshitij12345!)
- IntervalTrigger.__str__ (#7664, thanks @ktns!)
- GradientLARS optimizer hook working with ChainerX (#7669)
- Use absl::Span and related helpers instead of gsl::span (#7671)
- Use six.integer_types for axis checks (#7713)
- CHAINERX_BUILD_CUDA is set (#7752)

Bug Fixes:

- None array in FunctionNode NaN check (#6283)
- CupyMemoryProfiler (#7003)
- running_var of F.batch_renormalization (#7202)
- MultiprocessIterator (#7486)
- initializers.Identity for the iDeep backend (#7548)
- chainermn.links.create_mnbn_model (#7603)
- PickleDataset crash when using multiprocessing (#7625, thanks @zaltoprofen!)
- AMSGrad with the intel64 backend (#7661)
- chainer.grad for multiple devices (#7692)
- chainerx::Flip (#7727)
- Parameter.dtype for uninitialized parameter (#7735)
- UpdateRule.use_fp32_update for uninitialized parameter (#7736)

Code Fixes:

- Use backend.get_array_module, not cuda.get_array_module (#7514, thanks @crcrpar!)
- squared_difference alias of squared_error (#7547)
- Optimizer and GradientMethod (#7585)
- Use chainerx.clipped_relu in F.clipped_relu (#7588)
- CMakeList.txt (#7647)

Documentation:

- Links (#6512)
- CHAINERX_CUDNN_USE_CUPY (#7574)
- ResNet prepare method (#7577)
- BackwardContext comment (#7595, thanks @crcrpar!)
- expand_dims.py (#7602)
- FunctionNode docs (#7622)
- chainer/functions/math/average.py (#7653, thanks @ktns!)
- F.squeeze documentation (#7682)

Examples:

- examples/vae/train_vae.py (#7578, thanks @m4saka!)

Tests:

- F.polygamma test (#6970, thanks @ishanrai05!)
- F.cast test (#7034)
- y_shape not used in tests (#7610)
- optimizer_hooks.Lasso for ChainerX (#7657, thanks @kshitij12345!)
- GroupNormalization tests (#7684)
- optimizer_hooks.GradientNoise for ChainerX (#7709, thanks @kshitij12345!)
- protobuf (#7715)
- optimizer_hooks.WeightDecay for ChainerX (#7716, thanks @kshitij12345!)
- atol/rtol of the chainerx.erf float16 test (#7721)
- TestHuberLoss (#7723)
- Contrastive.backward (#7745)
- TestContrastive (#7747)
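To make the stricter Variable.grad behavior (#7146) listed above concrete: reading the gradient before it has ever been initialized now raises an error instead of silently returning an unset value. A minimal plain-Python sketch of this guard pattern (hypothetical VariableLike class and error message; not Chainer's actual code):

```python
class VariableLike:
    """Sketch of a gradient-holding variable with a guarded .grad getter."""

    _UNSET = object()  # sentinel: gradient has never been initialized

    def __init__(self):
        self._grad = VariableLike._UNSET

    @property
    def grad(self):
        if self._grad is VariableLike._UNSET:
            # Accessed before cleargrad()/zerograd() or direct assignment.
            raise RuntimeError(
                "gradient is not set; call cleargrad() or assign .grad first")
        return self._grad

    @grad.setter
    def grad(self, value):
        self._grad = value

    def cleargrad(self):
        # After cleargrad, reading .grad is allowed and returns None.
        self._grad = None

v = VariableLike()
v.cleargrad()        # initializes the gradient slot
assert v.grad is None
```

Code that previously relied on reading an untouched gradient should call cleargrad (or assign .grad) first.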