Chainer v6.0.0rc1 has been released! The release notes are as follows.
Highlights

- CHAINER_DTYPE=mixed16 makes Chainer choose appropriate dtypes for mixed precision training (in most places it is float16, but it automatically chooses float32 when it's better for precision and performance reasons).
- Loss scaling is available via (optimizer).loss_scaling(). See the documentation for details.

New Features

- variable.item() (#5797, thanks @crcrpar!)
- Link.to_device family (#5986)
- unit to CupyMemoryProfileHook.print_report() (#6256, thanks @hitsgub!)
- distributions.Independent (#6324, thanks @ganow!)
- FloorDivide (#6350)
- testing.FunctionTestCase (#6444)
- mixed16 mode and its support in L.BatchNormalization (#6456)
- F.relu6 as an alias to F.clipped_relu (#6463, thanks @aksub99!)
- minimum to chainerx (#6477, thanks @aksub99!)
- square to chainerx (#6486, thanks @aksub99!)
- chainerx.testing.integral_dtypes (#6526)
- chainer.mixed16 data type in PureNcclCommunicator (#6548)
- LinkTestCase to simplify link tests (#6559)
- Sin and Cos to chainerx (#6601, thanks @kshitij12345!)
- MultiNodeBatchNormalization of ChainerMN (#6619)
- tan, arcsin, arccos, arctan to ChainerX (#6703, thanks @IvanYashchuk!)

Enhancements

- F.resize_images speed (#5753, thanks @grafi-tt!)
- F.group_normalization via cuDNN call (#5924, thanks @grafi-tt!)
- F.average_pooling_nd with pad_value of None (#6332, thanks @crcrpar!)
- F.log_ndtr to avoid NaN (#6340)
- y.grad on y.backward(retain_grad=False) (#6348)
- requires_grad explicitly in gradient_check and function test (#6364)
- get_fans (#6365)
- ResultType to take kind into account (#6419)
- FunctionTestCase error message (#6426)
- Adam for float16 parameters to float32 (#6442)
- chainerx.Scalar (#6481)
- BatchNorm and FixedBatchNorm (#6484)
- chainerx::Take indices other dtype than int64 (#6485)
- cupy.cudnn.batch_normalization_forward_training (#6497)
- chainerx::conv and chainerx::conv_transpose (#6510)
- F.cast (#6518)
- x.dtype == b.dtype in F.convolution_nd and F.deconvolution_nd (#6524)
- chainerx.Scalar to Python (#6535)
- parameterize_pytest to allow parameterizing with tuples (#6554)
- chainerx.linear (#6569)
- chainer.grad (#6580)
- PerformanceWarning (#6617)
- testing.product (#6635)
- BatchNormalization to only allocate dummy mean and var in cuDNN path (#6656)
- F.layer_normalization (#6680, thanks @hitsgub!)
- F.l2_normalization (#6681, thanks @hitsgub!)
- D.Normal (#6709)
- minimum and maximum (#6713)

Bug Fixes

- Sequential (#6304)
- F.softmax_cross_entropy float16 under/overflow (#6366)
- BatchNormalization link (#6369)
- str.join TypeError in FunctionTestCase helper (#6370)
- chainer.links.NStepRNN and its variants (#6415, thanks @crcrpar!)
- chainerx::Array (#6540)
- chainerx::Slice (#6557)
- chainerx::Linear (#6593, thanks @crcrpar!)
- DeviceResident.to_gpu fallback argument (#6712)

Code Fixes

- == / != to compare str (#6346)
- # NOQA in docstrings (cont.) (#6356)
- op_utils.py (#6421)
- chainerx::Linear (#6425)
- ResultTypeResolver multiple definitions (#6439)
- .clang-tidy (#6445)
- AsContiguous in CudaConv::ConvGradWeight (#6520)
- _BNMode (#6582)
- collections (#6645)
- ArrayBody::GetArrayNode to return null (#6658)
- BackwardBuilder::Target less stateful (#6659)

Documentation

- TimerHook (#6433, thanks @hitsgub!)
- F.prelu (#6455, thanks @fiarabbit!)
- Dot backward cast (#6537)
- forward in LinkHook documentation (#6546, thanks @crcrpar!)
- F.rrelu documentation (#6581, thanks @fiarabbit!)
- gradient_check.check_double_backward in reference (#6584)
- :meth: link (#6603, thanks @23pointsNorth!)
- chainerx.md (#6610, thanks @kshitij12345!)
- F.erfcx, F.erfcinv and F.erfinv (#6618)
- chainer.backend.get_array_module documentation (#6663)
- CMAKE_BUILD_TYPE (#6664)

Examples

- args.out in train_cifar_custom_loop.py (#6378, thanks @crcrpar!)
- __future__.division in imagenet example with Python2 (#6462)
- __future__.division for Python2 (#6562)
- F.matmul instead of F.batch_matmul in memnn example (#6611)
- unchain_backward() in pix2pix example (#6634, thanks @hayato-maki!)
- mushrooms.csv (#6693)
- download.py (#6694)

Tests

- guides/functions.rst (#6194)
- F.swish test (#6306, thanks @ishanrai05!)
- F.log_softmax test (#6320, thanks @ishanrai05!)
- F.softmax_cross_entropy test (#6363)
- F.softmax test (#6371, thanks @aksub99!)
- F.fliplr test (#6389, thanks @ishanrai05!)
- F.flipud test (#6390, thanks @ishanrai05!)
- F.moveaxis test (#6392, thanks @ishanrai05!)
- F.pad test (#6393, thanks @ishanrai05!)
- F.squared_difference test (#6395, thanks @aksub99!)
- F.minimum test (#6396, thanks @aksub99!)
- F.maximum test (#6400, thanks @aksub99!)
- F.convolution_2d and F.convolution_nd (#6406, thanks @crcrpar!)
- F.rollaxis test (#6408, thanks @ishanrai05!)
- F.vstack test (#6410, thanks @ishanrai05!)
- F.transpose test (#6458, thanks @ishanrai05!)
- F.tile test (#6459, thanks @ishanrai05!)
- F.swapaxes test (#6460, thanks @ishanrai05!)
- F.resize_image test (#6464, thanks @ishanrai05!)
- F.expand_dims test (#6473, thanks @ishanrai05!)
- F.prod test (#6479, thanks @aksub99!)
- F.squeeze test (#6487, thanks @ishanrai05!)
- examples/.gitignore (#6391, thanks @crcrpar!)
- FunctionTestCases (#6416)
- SPHINXOPTS env from Makefile (#6417)
- test_print_report (#6430)
- NumpyOpTest (#6437)
- F.group_normalization test (#6468, thanks @crcrpar!)
- F.pad test for Python2 (#6478)
- F.vstack to a list of ndarrays (#6494, thanks @crcrpar!)
- OpTest (#6507)
- batch_norm test (#6542)
- fixed_batch_norm test (#6558)
- chainerx.divide test (#6573)
- F.einsum tests (#6588)
- FunctionTestBase class attributes (#6599)
- LinkTestCase and LinkInitializersTestCase class attributes (#6600)
- op_test decorator remove the previous class (#6602)
- compute_60 instead of compute_50 to run test on P100 (#6633)
- BatchNormalizationMultiGpuTest (#6652)
- TestConvTranspose (#6691)
- F.convolution_nd test for flake8 (#6711)
- convolution_nd function test (#6728)
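As background on the mixed precision highlight: the general idea behind a mixed16 mode is to run the forward/backward computation in float16 while keeping "master" weights and updates in float32, and to scale gradients up so small values survive float16's narrow range, unscaling them in float32 before the update. The following is a minimal NumPy sketch of that idea under these assumptions; it is an illustration, not Chainer's implementation (the function name `mixed16_sgd_step` is hypothetical).

```python
import numpy as np

def mixed16_sgd_step(w_master, x, y, lr=0.1, loss_scale=1024.0):
    """One SGD step of linear regression, computed in float16 but with
    float32 master weights (conceptual sketch, not Chainer's code)."""
    w16 = w_master.astype(np.float16)           # cast weights down for compute
    x16 = x.astype(np.float16)
    err = x16 @ w16 - y.astype(np.float16)      # forward pass in float16
    # Scale the gradient while still in float16 so tiny values do not
    # underflow (for linear ops this is equivalent to scaling the loss) ...
    grad16 = (2.0 * (x16.T @ err) / len(x)) * np.float16(loss_scale)
    # ... then unscale in float32 and update the float32 master weights.
    grad32 = grad16.astype(np.float32) / np.float32(loss_scale)
    return w_master - np.float32(lr) * grad32

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3)).astype(np.float32)
w_true = np.array([1.0, -2.0, 0.5], dtype=np.float32)
y = x @ w_true
w = np.zeros(3, dtype=np.float32)
for _ in range(300):
    w = mixed16_sgd_step(w, x, y)
```

In Chainer itself, per the notes above, the mode is selected with the CHAINER_DTYPE=mixed16 environment variable and loss scaling is enabled through (optimizer).loss_scaling().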
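The notes above add F.relu6 as an alias to F.clipped_relu. ReLU6 is the clipped ReLU with cap 6, i.e. min(max(x, 0), 6); here is a NumPy sketch of the computed function (an illustration, not Chainer's implementation):

```python
import numpy as np

def relu6(x):
    # Clipped ReLU capped at 6: min(max(x, 0), 6).
    return np.minimum(np.maximum(x, 0.0), 6.0)

# Negative inputs clamp to 0, values above 6 clamp to 6:
relu6(np.array([-1.0, 0.5, 3.0, 10.0]))  # yields 0, 0.5, 3, 6
```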