[PATCH v1 0/5] treewide cleanup of random integer usage

0 views
Skip to first unread message

Jason A. Donenfeld

unread,
Oct 5, 2022, 5:49:30 PMOct 5
to linux-...@vger.kernel.org, Jason A. Donenfeld, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
Hi folks,

This is a five part treewide cleanup of random integer handling. The
rules for random integers are:

- If you want a secure or an insecure random u64, use get_random_u64().
- If you want a secure or an insecure random u32, use get_random_u32().
* The old function prandom_u32() has been deprecated for a while now
and is just a wrapper around get_random_u32().
- If you want a secure or an insecure random u16, use get_random_u16().
- If you want a secure or an insecure random u8, use get_random_u8().
- If you want secure or insecure random bytes, use get_random_bytes().
* The old function prandom_bytes() has been deprecated for a while now
and has long been a wrapper around get_random_bytes().
- If you want a non-uniform random u32, u16, or u8 bounded by a certain
open interval maximum, use prandom_u32_max().
* I say "non-uniform", because it doesn't do any rejection sampling or
divisions. Hence, it stays within the prandom_* namespace.

These rules ought to be applied uniformly, so that we can clean up the
deprecated functions, and earn the benefits of using the modern
functions. In particular, in addition to the boring substitutions, this
patchset accomplishes a few nice effects:

- By using prandom_u32_max() with an upper-bound that the compiler can
prove at compile-time is ≤65536 or ≤256, internally get_random_u16()
or get_random_u8() is used, which wastes fewer batched random bytes,
and hence has higher throughput.

- By using prandom_u32_max() instead of %, when the upper-bound is not a
constant, division is still avoided, because prandom_u32_max() uses
a faster multiplication-based trick instead.

- By using get_random_u16() or get_random_u8() in cases where the return
value is intended to indeed be a u16 or a u8, we waste fewer batched
random bytes, and hence have higher throughput.

So, based on those rules and benefits from following them, this patchset
breaks down into the following five steps:

1) Replace `prandom_u32() % max` and variants thereof with
prandom_u32_max(max).

2) Replace `(type)get_random_u32()` and variants thereof with
get_random_u16() or get_random_u8(). I took the pains to actually
look and see what every lvalue type was across the entire tree.

3) Replace remaining deprecated uses of prandom_u32() with
get_random_u32().

4) Replace remaining deprecated uses of prandom_bytes() with
get_random_bytes().

5) Remove the deprecated and now-unused prandom_u32() and
prandom_bytes() inline wrapper functions.

I was thinking of taking this through my random.git tree (on which this
series is currently based) and submitting it near the end of the merge
window, or waiting for the very end of the 6.1 cycle when there will be
the fewest new patches brewing. If somebody with some treewide-cleanup
experience might share some wisdom about what the best timing usually
winds up being, I'm all ears.

I've CC'd get_maintainers.pl, which is a pretty big list. Probably some
portion of those are going to bounce, too, and everytime you reply to
this thread, you'll have to deal with a bunch of bounces coming
immediately after. And a recipient list this big will probably dock my
email domain's spam reputation, at least temporarily. Sigh. I think
that's just how it goes with treewide cleanups though. Again, let me
know if I'm doing it wrong.

Please take a look!

Thanks,
Jason

Cc: Ajay Singh <ajay....@microchip.com>
Cc: Akinobu Mita <akinob...@gmail.com>
Cc: Alexandre Torgue <alexandr...@foss.st.com>
Cc: Amitkumar Karwar <amitk...@gmail.com>
Cc: Andreas Dilger <adilger...@dilger.ca>
Cc: Andreas Färber <afae...@suse.de>
Cc: Andreas Noever <andreas...@gmail.com>
Cc: Andrew Lunn <and...@lunn.ch>
Cc: Andrew Morton <ak...@linux-foundation.org>
Cc: Andrii Nakryiko <and...@kernel.org>
Cc: Andy Gospodarek <an...@greyhouse.net>
Cc: Andy Lutomirski <lu...@kernel.org>
Cc: Andy Shevchenko <andriy.s...@linux.intel.com>
Cc: Anil S Keshavamurthy <anil.s.kes...@intel.com>
Cc: Anna Schumaker <an...@kernel.org>
Cc: Arend van Spriel <asp...@gmail.com>
Cc: Ayush Sawal <ayush...@chelsio.com>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Chao Yu <ch...@kernel.org>
Cc: Christoph Böhmwalder <christoph....@linbit.com>
Cc: Christoph Hellwig <h...@lst.de>
Cc: Christophe Leroy <christop...@csgroup.eu>
Cc: Chuck Lever <chuck...@oracle.com>
Cc: Claudiu Beznea <claudiu...@microchip.com>
Cc: Cong Wang <xiyou.w...@gmail.com>
Cc: Dan Williams <dan.j.w...@intel.com>
Cc: Daniel Borkmann <dan...@iogearbox.net>
Cc: Darrick J. Wong <djw...@kernel.org>
Cc: Dave Hansen <dave....@linux.intel.com>
Cc: David Ahern <dsa...@kernel.org>
Cc: David S. Miller <da...@davemloft.net>
Cc: Dennis Dalessandro <dennis.da...@cornelisnetworks.com>
Cc: Dick Kennedy <dick.k...@broadcom.com>
Cc: Dmitry Vyukov <dvy...@google.com>
Cc: Eric Dumazet <edum...@google.com>
Cc: Florian Westphal <f...@strlen.de>
Cc: Franky Lin <frank...@broadcom.com>
Cc: Ganapathi Bhat <ganapa...@gmail.com>
Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
Cc: Gregory Greenman <gregory....@intel.com>
Cc: H. Peter Anvin <h...@zytor.com>
Cc: Hannes Reinecke <ha...@suse.de>
Cc: Hans Verkuil <hver...@xs4all.nl>
Cc: Hante Meuleman <hante.m...@broadcom.com>
Cc: Hao Luo <hao...@google.com>
Cc: Haoyue Xu <xuha...@hisilicon.com>
Cc: Heiner Kallweit <hkall...@gmail.com>
Cc: Helge Deller <del...@gmx.de>
Cc: Herbert Xu <her...@gondor.apana.org.au>
Cc: Hideaki YOSHIFUJI <yosh...@linux-ipv6.org>
Cc: Hugh Dickins <hu...@google.com>
Cc: Igor Mitsyanko <imits...@quantenna.com>
Cc: Ilya Dryomov <idry...@gmail.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Jack Wang <jinpu...@ionos.com>
Cc: Jaegeuk Kim <jae...@kernel.org>
Cc: Jaehoon Chung <jh80....@samsung.com>
Cc: Jakub Kicinski <ku...@kernel.org>
Cc: Jamal Hadi Salim <j...@mojatatu.com>
Cc: James E.J. Bottomley <je...@linux.ibm.com>
Cc: James Smart <james...@broadcom.com>
Cc: Jan Kara <ja...@suse.com>
Cc: Jason Gunthorpe <j...@ziepe.ca>
Cc: Jay Vosburgh <j.vos...@gmail.com>
Cc: Jean-Paul Roubelat <j...@f6fbb.org>
Cc: Jeff Layton <jla...@kernel.org>
Cc: Jens Axboe <ax...@kernel.dk>
Cc: Jiri Olsa <jo...@kernel.org>
Cc: Jiri Pirko <ji...@resnulli.us>
Cc: Johannes Berg <joha...@sipsolutions.net>
Cc: John Fastabend <john.fa...@gmail.com>
Cc: John Stultz <jst...@google.com>
Cc: Jon Maloy <jma...@redhat.com>
Cc: Jonathan Corbet <cor...@lwn.net>
Cc: Jozsef Kadlecsik <kad...@netfilter.org>
Cc: Julian Anastasov <j...@ssi.bg>
Cc: KP Singh <kps...@kernel.org>
Cc: Kalle Valo <kv...@kernel.org>
Cc: Kees Cook <kees...@chromium.org>
Cc: Keith Busch <kbu...@kernel.org>
Cc: Lars Ellenberg <lars.el...@linbit.com>
Cc: Leon Romanovsky <le...@kernel.org>
Cc: Manish Rangankar <mrang...@marvell.com>
Cc: Manivannan Sadhasivam <ma...@kernel.org>
Cc: Marcelo Ricardo Leitner <marcelo...@gmail.com>
Cc: Marco Elver <el...@google.com>
Cc: Martin K. Petersen <martin....@oracle.com>
Cc: Martin KaFai Lau <marti...@linux.dev>
Cc: Masami Hiramatsu <mhir...@kernel.org>
Cc: Mauro Carvalho Chehab <mch...@kernel.org>
Cc: Maxime Coquelin <mcoquel...@gmail.com>
Cc: Md. Haris Iqbal <haris...@ionos.com>
Cc: Michael Chan <michae...@broadcom.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Michael Jamet <michae...@intel.com>
Cc: Michal Januszewski <sp...@gentoo.org>
Cc: Mika Westerberg <mika.we...@linux.intel.com>
Cc: Miquel Raynal <miquel...@bootlin.com>
Cc: Namjae Jeon <linki...@kernel.org>
Cc: Naveen N. Rao <naveen...@linux.ibm.com>
Cc: Neil Horman <nho...@tuxdriver.com>
Cc: Nicholas Piggin <npi...@gmail.com>
Cc: Nilesh Javali <nja...@marvell.com>
Cc: OGAWA Hirofumi <hiro...@mail.parknet.co.jp>
Cc: Pablo Neira Ayuso <pa...@netfilter.org>
Cc: Paolo Abeni <pab...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Philipp Reisner <philipp...@linbit.com>
Cc: Potnuri Bharat Teja <bha...@chelsio.com>
Cc: Pravin B Shelar <psh...@ovn.org>
Cc: Rasmus Villemoes <li...@rasmusvillemoes.dk>
Cc: Richard Weinberger <ric...@nod.at>
Cc: Rohit Maheshwari <roh...@chelsio.com>
Cc: Russell King <li...@armlinux.org.uk>
Cc: Sagi Grimberg <sa...@grimberg.me>
Cc: Santosh Shilimkar <santosh....@oracle.com>
Cc: Sergey Matyukevich <geom...@gmail.com>
Cc: Sharvari Harisangam <sharvari....@nxp.com>
Cc: Simon Horman <ho...@verge.net.au>
Cc: Song Liu <so...@kernel.org>
Cc: Stanislav Fomichev <s...@google.com>
Cc: Steffen Klassert <steffen....@secunet.com>
Cc: Stephen Boyd <sb...@kernel.org>
Cc: Stephen Hemminger <ste...@networkplumber.org>
Cc: Sungjong Seo <sj155...@samsung.com>
Cc: Theodore Ts'o <ty...@mit.edu>
Cc: Thomas Gleixner <tg...@linutronix.de>
Cc: Thomas Graf <tg...@suug.ch>
Cc: Thomas Sailer <t.sa...@alumni.ethz.ch>
Cc: Toke Høiland-Jørgensen <to...@toke.dk>
Cc: Trond Myklebust <trond.m...@hammerspace.com>
Cc: Ulf Hansson <ulf.h...@linaro.org>
Cc: Varun Prakash <va...@chelsio.com>
Cc: Veaceslav Falico <vfa...@gmail.com>
Cc: Vignesh Raghavendra <vign...@ti.com>
Cc: Vinay Kumar Yadav <vinay...@chelsio.com>
Cc: Vinod Koul <vk...@kernel.org>
Cc: Vlad Yasevich <vyas...@gmail.com>
Cc: Wenpeng Liang <liangw...@huawei.com>
Cc: Xinming Hu <huxinm...@gmail.com>
Cc: Xiubo Li <xiu...@redhat.com>
Cc: Yehezkel Bernat <Yehez...@gmail.com>
Cc: Ying Xue <ying...@windriver.com>
Cc: Yishai Hadas <yis...@nvidia.com>
Cc: Yonghong Song <y...@fb.com>
Cc: Yury Norov <yury....@gmail.com>
Cc: brcm80211-d...@broadcom.com
Cc: ca...@lists.bufferbloat.net
Cc: ceph-...@vger.kernel.org
Cc: core...@netfilter.org
Cc: dc...@vger.kernel.org
Cc: d...@openvswitch.org
Cc: dmae...@vger.kernel.org
Cc: drbd...@lists.linbit.com
Cc: dri-...@lists.freedesktop.org
Cc: kasa...@googlegroups.com
Cc: linux-...@lists.infradead.org
Cc: linux-ar...@lists.infradead.org
Cc: linux...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: linu...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: linux-f2...@lists.sourceforge.net
Cc: linux...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: linu...@kvack.org
Cc: linu...@vger.kernel.org
Cc: linu...@lists.infradead.org
Cc: linu...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: linux...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: linux...@st-md-mailman.stormreply.com
Cc: linu...@vger.kernel.org
Cc: linux-w...@vger.kernel.org
Cc: linu...@vger.kernel.org
Cc: linuxp...@lists.ozlabs.org
Cc: lvs-...@vger.kernel.org
Cc: net...@vger.kernel.org
Cc: netfilt...@vger.kernel.org
Cc: rds-...@oss.oracle.com
Cc: SHA-cyfma...@infineon.com
Cc: target...@vger.kernel.org
Cc: tipc-di...@lists.sourceforge.net

Jason A. Donenfeld (5):
treewide: use prandom_u32_max() when possible
treewide: use get_random_{u8,u16}() when possible
treewide: use get_random_u32() when possible
treewide: use get_random_bytes when possible
prandom: remove unused functions

Documentation/networking/filter.rst | 2 +-
arch/powerpc/crypto/crc-vpmsum_test.c | 2 +-
arch/x86/mm/pat/cpa-test.c | 4 +-
block/blk-crypto-fallback.c | 2 +-
crypto/async_tx/raid6test.c | 2 +-
crypto/testmgr.c | 94 +++++++++----------
drivers/block/drbd/drbd_receiver.c | 4 +-
drivers/dma/dmatest.c | 2 +-
drivers/infiniband/core/cma.c | 2 +-
drivers/infiniband/hw/cxgb4/cm.c | 4 +-
drivers/infiniband/hw/cxgb4/id_table.c | 4 +-
drivers/infiniband/hw/hfi1/tid_rdma.c | 2 +-
drivers/infiniband/hw/hns/hns_roce_ah.c | 5 +-
drivers/infiniband/hw/mlx4/mad.c | 2 +-
drivers/infiniband/ulp/ipoib/ipoib_cm.c | 2 +-
drivers/infiniband/ulp/rtrs/rtrs-clt.c | 3 +-
drivers/md/raid5-cache.c | 2 +-
drivers/media/common/v4l2-tpg/v4l2-tpg-core.c | 2 +-
.../media/test-drivers/vivid/vivid-radio-rx.c | 4 +-
drivers/mmc/core/core.c | 4 +-
drivers/mmc/host/dw_mmc.c | 2 +-
drivers/mtd/nand/raw/nandsim.c | 8 +-
drivers/mtd/tests/mtd_nandecctest.c | 12 +--
drivers/mtd/tests/speedtest.c | 2 +-
drivers/mtd/tests/stresstest.c | 19 +---
drivers/mtd/ubi/debug.c | 2 +-
drivers/mtd/ubi/debug.h | 6 +-
drivers/net/bonding/bond_main.c | 2 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 2 +-
drivers/net/ethernet/broadcom/cnic.c | 5 +-
.../chelsio/inline_crypto/chtls/chtls_cm.c | 4 +-
.../chelsio/inline_crypto/chtls/chtls_io.c | 4 +-
drivers/net/ethernet/rocker/rocker_main.c | 8 +-
drivers/net/hamradio/baycom_epp.c | 2 +-
drivers/net/hamradio/hdlcdrv.c | 2 +-
drivers/net/hamradio/yam.c | 2 +-
drivers/net/phy/at803x.c | 2 +-
drivers/net/wireguard/selftest/allowedips.c | 16 ++--
.../broadcom/brcm80211/brcmfmac/p2p.c | 2 +-
.../net/wireless/intel/iwlwifi/mvm/mac-ctxt.c | 2 +-
.../net/wireless/marvell/mwifiex/cfg80211.c | 4 +-
.../wireless/microchip/wilc1000/cfg80211.c | 2 +-
.../net/wireless/quantenna/qtnfmac/cfg80211.c | 2 +-
drivers/nvme/common/auth.c | 2 +-
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c | 4 +-
drivers/scsi/fcoe/fcoe_ctlr.c | 4 +-
drivers/scsi/lpfc/lpfc_hbadisc.c | 6 +-
drivers/scsi/qedi/qedi_main.c | 2 +-
drivers/target/iscsi/cxgbit/cxgbit_cm.c | 2 +-
drivers/thunderbolt/xdomain.c | 2 +-
drivers/video/fbdev/uvesafb.c | 2 +-
fs/ceph/inode.c | 2 +-
fs/ceph/mdsmap.c | 2 +-
fs/exfat/inode.c | 2 +-
fs/ext2/ialloc.c | 2 +-
fs/ext4/ialloc.c | 4 +-
fs/ext4/ioctl.c | 4 +-
fs/ext4/mmp.c | 2 +-
fs/ext4/super.c | 7 +-
fs/f2fs/gc.c | 2 +-
fs/f2fs/namei.c | 2 +-
fs/f2fs/segment.c | 8 +-
fs/fat/inode.c | 2 +-
fs/nfsd/nfs4state.c | 4 +-
fs/ubifs/debug.c | 10 +-
fs/ubifs/journal.c | 2 +-
fs/ubifs/lpt_commit.c | 14 +--
fs/ubifs/tnc_commit.c | 2 +-
fs/xfs/libxfs/xfs_alloc.c | 2 +-
fs/xfs/libxfs/xfs_ialloc.c | 4 +-
fs/xfs/xfs_error.c | 2 +-
fs/xfs/xfs_icache.c | 2 +-
fs/xfs/xfs_log.c | 2 +-
include/linux/prandom.h | 12 ---
include/net/netfilter/nf_queue.h | 2 +-
include/net/red.h | 2 +-
include/net/sock.h | 2 +-
kernel/kcsan/selftest.c | 4 +-
kernel/time/clocksource.c | 2 +-
lib/fault-inject.c | 2 +-
lib/find_bit_benchmark.c | 4 +-
lib/random32.c | 4 +-
lib/reed_solomon/test_rslib.c | 12 +--
lib/sbitmap.c | 4 +-
lib/test_fprobe.c | 2 +-
lib/test_kprobes.c | 2 +-
lib/test_list_sort.c | 2 +-
lib/test_objagg.c | 2 +-
lib/test_rhashtable.c | 6 +-
lib/test_vmalloc.c | 19 +---
lib/uuid.c | 2 +-
mm/shmem.c | 2 +-
net/802/garp.c | 2 +-
net/802/mrp.c | 2 +-
net/ceph/mon_client.c | 2 +-
net/ceph/osd_client.c | 2 +-
net/core/neighbour.c | 2 +-
net/core/pktgen.c | 47 +++++-----
net/core/stream.c | 2 +-
net/dccp/ipv4.c | 4 +-
net/ipv4/datagram.c | 2 +-
net/ipv4/igmp.c | 6 +-
net/ipv4/inet_connection_sock.c | 2 +-
net/ipv4/inet_hashtables.c | 2 +-
net/ipv4/ip_output.c | 2 +-
net/ipv4/route.c | 2 +-
net/ipv4/tcp_cdg.c | 2 +-
net/ipv4/tcp_ipv4.c | 4 +-
net/ipv4/udp.c | 2 +-
net/ipv6/addrconf.c | 8 +-
net/ipv6/ip6_flowlabel.c | 2 +-
net/ipv6/mcast.c | 10 +-
net/ipv6/output_core.c | 2 +-
net/mac80211/rc80211_minstrel_ht.c | 2 +-
net/mac80211/scan.c | 2 +-
net/netfilter/ipvs/ip_vs_conn.c | 2 +-
net/netfilter/ipvs/ip_vs_twos.c | 4 +-
net/netfilter/nf_nat_core.c | 4 +-
net/netfilter/xt_statistic.c | 2 +-
net/openvswitch/actions.c | 2 +-
net/packet/af_packet.c | 2 +-
net/rds/bind.c | 2 +-
net/sched/act_gact.c | 2 +-
net/sched/act_sample.c | 2 +-
net/sched/sch_cake.c | 8 +-
net/sched/sch_netem.c | 22 ++---
net/sched/sch_pie.c | 2 +-
net/sched/sch_sfb.c | 2 +-
net/sctp/socket.c | 4 +-
net/sunrpc/auth_gss/gss_krb5_wrap.c | 4 +-
net/sunrpc/cache.c | 2 +-
net/sunrpc/xprt.c | 2 +-
net/sunrpc/xprtsock.c | 2 +-
net/tipc/socket.c | 2 +-
net/unix/af_unix.c | 2 +-
net/xfrm/xfrm_state.c | 2 +-
136 files changed, 304 insertions(+), 339 deletions(-)

--
2.37.3

Jason A. Donenfeld

unread,
Oct 5, 2022, 5:49:45 PMOct 5
to linux-...@vger.kernel.org, Jason A. Donenfeld, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
Rather than incurring a division or requesting too many random bytes for
the given range, use the prandom_u32_max() function, which only takes
the minimum required bytes from the RNG and avoids divisions.

Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---
arch/x86/mm/pat/cpa-test.c | 4 +-
crypto/testmgr.c | 86 +++++++++----------
drivers/block/drbd/drbd_receiver.c | 4 +-
drivers/infiniband/core/cma.c | 2 +-
drivers/infiniband/hw/cxgb4/id_table.c | 4 +-
drivers/infiniband/hw/hns/hns_roce_ah.c | 5 +-
drivers/infiniband/ulp/rtrs/rtrs-clt.c | 3 +-
drivers/mmc/core/core.c | 4 +-
drivers/mmc/host/dw_mmc.c | 2 +-
drivers/mtd/nand/raw/nandsim.c | 4 +-
drivers/mtd/tests/mtd_nandecctest.c | 10 +--
drivers/mtd/tests/stresstest.c | 17 +---
drivers/mtd/ubi/debug.c | 2 +-
drivers/mtd/ubi/debug.h | 6 +-
drivers/net/ethernet/broadcom/cnic.c | 3 +-
.../chelsio/inline_crypto/chtls/chtls_io.c | 4 +-
drivers/net/hamradio/baycom_epp.c | 2 +-
drivers/net/hamradio/hdlcdrv.c | 2 +-
drivers/net/hamradio/yam.c | 2 +-
drivers/net/phy/at803x.c | 2 +-
.../broadcom/brcm80211/brcmfmac/p2p.c | 2 +-
.../net/wireless/intel/iwlwifi/mvm/mac-ctxt.c | 2 +-
drivers/scsi/fcoe/fcoe_ctlr.c | 4 +-
drivers/scsi/qedi/qedi_main.c | 2 +-
fs/ceph/inode.c | 2 +-
fs/ceph/mdsmap.c | 2 +-
fs/ext4/super.c | 7 +-
fs/f2fs/gc.c | 2 +-
fs/f2fs/segment.c | 8 +-
fs/ubifs/debug.c | 8 +-
fs/ubifs/lpt_commit.c | 14 +--
fs/ubifs/tnc_commit.c | 2 +-
fs/xfs/libxfs/xfs_alloc.c | 2 +-
fs/xfs/libxfs/xfs_ialloc.c | 2 +-
fs/xfs/xfs_error.c | 2 +-
kernel/time/clocksource.c | 2 +-
lib/fault-inject.c | 2 +-
lib/find_bit_benchmark.c | 4 +-
lib/reed_solomon/test_rslib.c | 6 +-
lib/sbitmap.c | 4 +-
lib/test_list_sort.c | 2 +-
lib/test_vmalloc.c | 17 +---
net/ceph/mon_client.c | 2 +-
net/ceph/osd_client.c | 2 +-
net/core/neighbour.c | 2 +-
net/core/pktgen.c | 43 +++++-----
net/core/stream.c | 2 +-
net/ipv4/igmp.c | 6 +-
net/ipv4/inet_connection_sock.c | 2 +-
net/ipv4/inet_hashtables.c | 2 +-
net/ipv6/addrconf.c | 8 +-
net/ipv6/mcast.c | 10 +--
net/netfilter/ipvs/ip_vs_twos.c | 4 +-
net/packet/af_packet.c | 2 +-
net/sched/act_gact.c | 2 +-
net/sched/act_sample.c | 2 +-
net/sched/sch_netem.c | 4 +-
net/sctp/socket.c | 2 +-
net/sunrpc/cache.c | 2 +-
net/sunrpc/xprtsock.c | 2 +-
net/tipc/socket.c | 2 +-
net/xfrm/xfrm_state.c | 2 +-
62 files changed, 173 insertions(+), 196 deletions(-)

diff --git a/arch/x86/mm/pat/cpa-test.c b/arch/x86/mm/pat/cpa-test.c
index 0612a73638a8..423b21e80929 100644
--- a/arch/x86/mm/pat/cpa-test.c
+++ b/arch/x86/mm/pat/cpa-test.c
@@ -136,10 +136,10 @@ static int pageattr_test(void)
failed += print_split(&sa);

for (i = 0; i < NTEST; i++) {
- unsigned long pfn = prandom_u32() % max_pfn_mapped;
+ unsigned long pfn = prandom_u32_max(max_pfn_mapped);

addr[i] = (unsigned long)__va(pfn << PAGE_SHIFT);
- len[i] = prandom_u32() % NPAGES;
+ len[i] = prandom_u32_max(NPAGES);
len[i] = min_t(unsigned long, len[i], max_pfn_mapped - pfn - 1);

if (len[i] == 0)
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 5349ffee6bbd..be45217acde4 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -855,9 +855,9 @@ static int prepare_keybuf(const u8 *key, unsigned int ksize,
/* Generate a random length in range [0, max_len], but prefer smaller values */
static unsigned int generate_random_length(unsigned int max_len)
{
- unsigned int len = prandom_u32() % (max_len + 1);
+ unsigned int len = prandom_u32_max(max_len + 1);

- switch (prandom_u32() % 4) {
+ switch (prandom_u32_max(4)) {
case 0:
return len % 64;
case 1:
@@ -874,14 +874,14 @@ static void flip_random_bit(u8 *buf, size_t size)
{
size_t bitpos;

- bitpos = prandom_u32() % (size * 8);
+ bitpos = prandom_u32_max(size * 8);
buf[bitpos / 8] ^= 1 << (bitpos % 8);
}

/* Flip a random byte in the given nonempty data buffer */
static void flip_random_byte(u8 *buf, size_t size)
{
- buf[prandom_u32() % size] ^= 0xff;
+ buf[prandom_u32_max(size)] ^= 0xff;
}

/* Sometimes make some random changes to the given nonempty data buffer */
@@ -891,15 +891,15 @@ static void mutate_buffer(u8 *buf, size_t size)
size_t i;

/* Sometimes flip some bits */
- if (prandom_u32() % 4 == 0) {
- num_flips = min_t(size_t, 1 << (prandom_u32() % 8), size * 8);
+ if (prandom_u32_max(4) == 0) {
+ num_flips = min_t(size_t, 1 << prandom_u32_max(8), size * 8);
for (i = 0; i < num_flips; i++)
flip_random_bit(buf, size);
}

/* Sometimes flip some bytes */
- if (prandom_u32() % 4 == 0) {
- num_flips = min_t(size_t, 1 << (prandom_u32() % 8), size);
+ if (prandom_u32_max(4) == 0) {
+ num_flips = min_t(size_t, 1 << prandom_u32_max(8), size);
for (i = 0; i < num_flips; i++)
flip_random_byte(buf, size);
}
@@ -915,11 +915,11 @@ static void generate_random_bytes(u8 *buf, size_t count)
if (count == 0)
return;

- switch (prandom_u32() % 8) { /* Choose a generation strategy */
+ switch (prandom_u32_max(8)) { /* Choose a generation strategy */
case 0:
case 1:
/* All the same byte, plus optional mutations */
- switch (prandom_u32() % 4) {
+ switch (prandom_u32_max(4)) {
case 0:
b = 0x00;
break;
@@ -959,24 +959,24 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
unsigned int this_len;
const char *flushtype_str;

- if (div == &divs[max_divs - 1] || prandom_u32() % 2 == 0)
+ if (div == &divs[max_divs - 1] || prandom_u32_max(2) == 0)
this_len = remaining;
else
- this_len = 1 + (prandom_u32() % remaining);
+ this_len = 1 + prandom_u32_max(remaining);
div->proportion_of_total = this_len;

- if (prandom_u32() % 4 == 0)
- div->offset = (PAGE_SIZE - 128) + (prandom_u32() % 128);
- else if (prandom_u32() % 2 == 0)
- div->offset = prandom_u32() % 32;
+ if (prandom_u32_max(4) == 0)
+ div->offset = (PAGE_SIZE - 128) + prandom_u32_max(128);
+ else if (prandom_u32_max(2) == 0)
+ div->offset = prandom_u32_max(32);
else
- div->offset = prandom_u32() % PAGE_SIZE;
- if (prandom_u32() % 8 == 0)
+ div->offset = prandom_u32_max(PAGE_SIZE);
+ if (prandom_u32_max(8) == 0)
div->offset_relative_to_alignmask = true;

div->flush_type = FLUSH_TYPE_NONE;
if (gen_flushes) {
- switch (prandom_u32() % 4) {
+ switch (prandom_u32_max(4)) {
case 0:
div->flush_type = FLUSH_TYPE_REIMPORT;
break;
@@ -988,7 +988,7 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,

if (div->flush_type != FLUSH_TYPE_NONE &&
!(req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
- prandom_u32() % 2 == 0)
+ prandom_u32_max(2) == 0)
div->nosimd = true;

switch (div->flush_type) {
@@ -1035,7 +1035,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,

p += scnprintf(p, end - p, "random:");

- switch (prandom_u32() % 4) {
+ switch (prandom_u32_max(4)) {
case 0:
case 1:
cfg->inplace_mode = OUT_OF_PLACE;
@@ -1050,12 +1050,12 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
break;
}

- if (prandom_u32() % 2 == 0) {
+ if (prandom_u32_max(2) == 0) {
cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
p += scnprintf(p, end - p, " may_sleep");
}

- switch (prandom_u32() % 4) {
+ switch (prandom_u32_max(4)) {
case 0:
cfg->finalization_type = FINALIZATION_TYPE_FINAL;
p += scnprintf(p, end - p, " use_final");
@@ -1071,7 +1071,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
}

if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
- prandom_u32() % 2 == 0) {
+ prandom_u32_max(2) == 0) {
cfg->nosimd = true;
p += scnprintf(p, end - p, " nosimd");
}
@@ -1084,7 +1084,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
cfg->req_flags);
p += scnprintf(p, end - p, "]");

- if (cfg->inplace_mode == OUT_OF_PLACE && prandom_u32() % 2 == 0) {
+ if (cfg->inplace_mode == OUT_OF_PLACE && prandom_u32_max(2) == 0) {
p += scnprintf(p, end - p, " dst_divs=[");
p = generate_random_sgl_divisions(cfg->dst_divs,
ARRAY_SIZE(cfg->dst_divs),
@@ -1093,13 +1093,13 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
p += scnprintf(p, end - p, "]");
}

- if (prandom_u32() % 2 == 0) {
- cfg->iv_offset = 1 + (prandom_u32() % MAX_ALGAPI_ALIGNMASK);
+ if (prandom_u32_max(2) == 0) {
+ cfg->iv_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK);
p += scnprintf(p, end - p, " iv_offset=%u", cfg->iv_offset);
}

- if (prandom_u32() % 2 == 0) {
- cfg->key_offset = 1 + (prandom_u32() % MAX_ALGAPI_ALIGNMASK);
+ if (prandom_u32_max(2) == 0) {
+ cfg->key_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK);
p += scnprintf(p, end - p, " key_offset=%u", cfg->key_offset);
}

@@ -1652,8 +1652,8 @@ static void generate_random_hash_testvec(struct shash_desc *desc,
vec->ksize = 0;
if (maxkeysize) {
vec->ksize = maxkeysize;
- if (prandom_u32() % 4 == 0)
- vec->ksize = 1 + (prandom_u32() % maxkeysize);
+ if (prandom_u32_max(4) == 0)
+ vec->ksize = 1 + prandom_u32_max(maxkeysize);
generate_random_bytes((u8 *)vec->key, vec->ksize);

vec->setkey_error = crypto_shash_setkey(desc->tfm, vec->key,
@@ -2218,13 +2218,13 @@ static void mutate_aead_message(struct aead_testvec *vec, bool aad_iv,
const unsigned int aad_tail_size = aad_iv ? ivsize : 0;
const unsigned int authsize = vec->clen - vec->plen;

- if (prandom_u32() % 2 == 0 && vec->alen > aad_tail_size) {
+ if (prandom_u32_max(2) == 0 && vec->alen > aad_tail_size) {
/* Mutate the AAD */
flip_random_bit((u8 *)vec->assoc, vec->alen - aad_tail_size);
- if (prandom_u32() % 2 == 0)
+ if (prandom_u32_max(2) == 0)
return;
}
- if (prandom_u32() % 2 == 0) {
+ if (prandom_u32_max(2) == 0) {
/* Mutate auth tag (assuming it's at the end of ciphertext) */
flip_random_bit((u8 *)vec->ctext + vec->plen, authsize);
} else {
@@ -2249,7 +2249,7 @@ static void generate_aead_message(struct aead_request *req,
const unsigned int ivsize = crypto_aead_ivsize(tfm);
const unsigned int authsize = vec->clen - vec->plen;
const bool inauthentic = (authsize >= MIN_COLLISION_FREE_AUTHSIZE) &&
- (prefer_inauthentic || prandom_u32() % 4 == 0);
+ (prefer_inauthentic || prandom_u32_max(4) == 0);

/* Generate the AAD. */
generate_random_bytes((u8 *)vec->assoc, vec->alen);
@@ -2257,7 +2257,7 @@ static void generate_aead_message(struct aead_request *req,
/* Avoid implementation-defined behavior. */
memcpy((u8 *)vec->assoc + vec->alen - ivsize, vec->iv, ivsize);

- if (inauthentic && prandom_u32() % 2 == 0) {
+ if (inauthentic && prandom_u32_max(2) == 0) {
/* Generate a random ciphertext. */
generate_random_bytes((u8 *)vec->ctext, vec->clen);
} else {
@@ -2321,8 +2321,8 @@ static void generate_random_aead_testvec(struct aead_request *req,

/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
vec->klen = maxkeysize;
- if (prandom_u32() % 4 == 0)
- vec->klen = prandom_u32() % (maxkeysize + 1);
+ if (prandom_u32_max(4) == 0)
+ vec->klen = prandom_u32_max(maxkeysize + 1);
generate_random_bytes((u8 *)vec->key, vec->klen);
vec->setkey_error = crypto_aead_setkey(tfm, vec->key, vec->klen);

@@ -2331,8 +2331,8 @@ static void generate_random_aead_testvec(struct aead_request *req,

/* Tag length: in [0, maxauthsize], but usually choose maxauthsize */
authsize = maxauthsize;
- if (prandom_u32() % 4 == 0)
- authsize = prandom_u32() % (maxauthsize + 1);
+ if (prandom_u32_max(4) == 0)
+ authsize = prandom_u32_max(maxauthsize + 1);
if (prefer_inauthentic && authsize < MIN_COLLISION_FREE_AUTHSIZE)
authsize = MIN_COLLISION_FREE_AUTHSIZE;
if (WARN_ON(authsize > maxdatasize))
@@ -2342,7 +2342,7 @@ static void generate_random_aead_testvec(struct aead_request *req,

/* AAD, plaintext, and ciphertext lengths */
total_len = generate_random_length(maxdatasize);
- if (prandom_u32() % 4 == 0)
+ if (prandom_u32_max(4) == 0)
vec->alen = 0;
else
vec->alen = generate_random_length(total_len);
@@ -2958,8 +2958,8 @@ static void generate_random_cipher_testvec(struct skcipher_request *req,

/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
vec->klen = maxkeysize;
- if (prandom_u32() % 4 == 0)
- vec->klen = prandom_u32() % (maxkeysize + 1);
+ if (prandom_u32_max(4) == 0)
+ vec->klen = prandom_u32_max(maxkeysize + 1);
generate_random_bytes((u8 *)vec->key, vec->klen);
vec->setkey_error = crypto_skcipher_setkey(tfm, vec->key, vec->klen);

diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index af4c7d65490b..d8b1417dc503 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -781,7 +781,7 @@ static struct socket *drbd_wait_for_connect(struct drbd_connection *connection,

timeo = connect_int * HZ;
/* 28.5% random jitter */
- timeo += (prandom_u32() & 1) ? timeo / 7 : -timeo / 7;
+ timeo += prandom_u32_max(2) ? timeo / 7 : -timeo / 7;

err = wait_for_completion_interruptible_timeout(&ad->door_bell, timeo);
if (err <= 0)
@@ -1004,7 +1004,7 @@ static int conn_connect(struct drbd_connection *connection)
drbd_warn(connection, "Error receiving initial packet\n");
sock_release(s);
randomize:
- if (prandom_u32() & 1)
+ if (prandom_u32_max(2))
goto retry;
}
}
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index be317f2665a9..d460935e89eb 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -3759,7 +3759,7 @@ static int cma_alloc_any_port(enum rdma_ucm_port_space ps,

inet_get_local_port_range(net, &low, &high);
remaining = (high - low) + 1;
- rover = prandom_u32() % remaining + low;
+ rover = prandom_u32_max(remaining) + low;
retry:
if (last_used_port != rover) {
struct rdma_bind_list *bind_list;
diff --git a/drivers/infiniband/hw/cxgb4/id_table.c b/drivers/infiniband/hw/cxgb4/id_table.c
index f64e7e02b129..280d61466855 100644
--- a/drivers/infiniband/hw/cxgb4/id_table.c
+++ b/drivers/infiniband/hw/cxgb4/id_table.c
@@ -54,7 +54,7 @@ u32 c4iw_id_alloc(struct c4iw_id_table *alloc)

if (obj < alloc->max) {
if (alloc->flags & C4IW_ID_TABLE_F_RANDOM)
- alloc->last += prandom_u32() % RANDOM_SKIP;
+ alloc->last += prandom_u32_max(RANDOM_SKIP);
else
alloc->last = obj + 1;
if (alloc->last >= alloc->max)
@@ -85,7 +85,7 @@ int c4iw_id_table_alloc(struct c4iw_id_table *alloc, u32 start, u32 num,
alloc->start = start;
alloc->flags = flags;
if (flags & C4IW_ID_TABLE_F_RANDOM)
- alloc->last = prandom_u32() % RANDOM_SKIP;
+ alloc->last = prandom_u32_max(RANDOM_SKIP);
else
alloc->last = 0;
alloc->max = num;
diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
index 492b122d0521..480c062dd04f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
+++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
@@ -41,9 +41,8 @@ static inline u16 get_ah_udp_sport(const struct rdma_ah_attr *ah_attr)
u16 sport;

if (!fl)
- sport = get_random_u32() %
- (IB_ROCE_UDP_ENCAP_VALID_PORT_MAX + 1 -
- IB_ROCE_UDP_ENCAP_VALID_PORT_MIN) +
+ sport = prandom_u32_max(IB_ROCE_UDP_ENCAP_VALID_PORT_MAX + 1 -
+ IB_ROCE_UDP_ENCAP_VALID_PORT_MIN) +
IB_ROCE_UDP_ENCAP_VALID_PORT_MIN;
else
sport = rdma_flow_label_to_udp_sport(fl);
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 449904dac0a9..e2a89d7f52df 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1511,8 +1511,7 @@ static void rtrs_clt_err_recovery_work(struct work_struct *work)
rtrs_clt_stop_and_destroy_conns(clt_path);
queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork,
msecs_to_jiffies(delay_ms +
- prandom_u32() %
- RTRS_RECONNECT_SEED));
+ prandom_u32_max(RTRS_RECONNECT_SEED)));
}

static struct rtrs_clt_path *alloc_path(struct rtrs_clt_sess *clt,
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index ef53a2578824..95fa8fb1d45f 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -97,8 +97,8 @@ static void mmc_should_fail_request(struct mmc_host *host,
!should_fail(&host->fail_mmc_request, data->blksz * data->blocks))
return;

- data->error = data_errors[prandom_u32() % ARRAY_SIZE(data_errors)];
- data->bytes_xfered = (prandom_u32() % (data->bytes_xfered >> 9)) << 9;
+ data->error = data_errors[prandom_u32_max(ARRAY_SIZE(data_errors))];
+ data->bytes_xfered = prandom_u32_max(data->bytes_xfered >> 9) << 9;
}

#else /* CONFIG_FAIL_MMC_REQUEST */
diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
index 581614196a84..c78bbc22e0d1 100644
--- a/drivers/mmc/host/dw_mmc.c
+++ b/drivers/mmc/host/dw_mmc.c
@@ -1858,7 +1858,7 @@ static void dw_mci_start_fault_timer(struct dw_mci *host)
* Try to inject the error at random points during the data transfer.
*/
hrtimer_start(&host->fault_timer,
- ms_to_ktime(prandom_u32() % 25),
+ ms_to_ktime(prandom_u32_max(25)),
HRTIMER_MODE_REL);
}

diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c
index 24beade95c7f..50bcf745e816 100644
--- a/drivers/mtd/nand/raw/nandsim.c
+++ b/drivers/mtd/nand/raw/nandsim.c
@@ -1405,9 +1405,9 @@ static void ns_do_bit_flips(struct nandsim *ns, int num)
if (bitflips && prandom_u32() < (1 << 22)) {
int flips = 1;
if (bitflips > 1)
- flips = (prandom_u32() % (int) bitflips) + 1;
+ flips = prandom_u32_max(bitflips) + 1;
while (flips--) {
- int pos = prandom_u32() % (num * 8);
+ int pos = prandom_u32_max(num * 8);
ns->buf.byte[pos / 8] ^= (1 << (pos % 8));
NS_WARN("read_page: flipping bit %d in page %d "
"reading from %d ecc: corrected=%u failed=%u\n",
diff --git a/drivers/mtd/tests/mtd_nandecctest.c b/drivers/mtd/tests/mtd_nandecctest.c
index c4f271314f52..1c7201b0f372 100644
--- a/drivers/mtd/tests/mtd_nandecctest.c
+++ b/drivers/mtd/tests/mtd_nandecctest.c
@@ -47,7 +47,7 @@ struct nand_ecc_test {
static void single_bit_error_data(void *error_data, void *correct_data,
size_t size)
{
- unsigned int offset = prandom_u32() % (size * BITS_PER_BYTE);
+ unsigned int offset = prandom_u32_max(size * BITS_PER_BYTE);

memcpy(error_data, correct_data, size);
__change_bit_le(offset, error_data);
@@ -58,9 +58,9 @@ static void double_bit_error_data(void *error_data, void *correct_data,
{
unsigned int offset[2];

- offset[0] = prandom_u32() % (size * BITS_PER_BYTE);
+ offset[0] = prandom_u32_max(size * BITS_PER_BYTE);
do {
- offset[1] = prandom_u32() % (size * BITS_PER_BYTE);
+ offset[1] = prandom_u32_max(size * BITS_PER_BYTE);
} while (offset[0] == offset[1]);

memcpy(error_data, correct_data, size);
@@ -71,7 +71,7 @@ static void double_bit_error_data(void *error_data, void *correct_data,

static unsigned int random_ecc_bit(size_t size)
{
- unsigned int offset = prandom_u32() % (3 * BITS_PER_BYTE);
+ unsigned int offset = prandom_u32_max(3 * BITS_PER_BYTE);

if (size == 256) {
/*
@@ -79,7 +79,7 @@ static unsigned int random_ecc_bit(size_t size)
* and 17th bit) in ECC code for 256 byte data block
*/
while (offset == 16 || offset == 17)
- offset = prandom_u32() % (3 * BITS_PER_BYTE);
+ offset = prandom_u32_max(3 * BITS_PER_BYTE);
}

return offset;
diff --git a/drivers/mtd/tests/stresstest.c b/drivers/mtd/tests/stresstest.c
index cb29c8c1b370..d2faaca7f19d 100644
--- a/drivers/mtd/tests/stresstest.c
+++ b/drivers/mtd/tests/stresstest.c
@@ -45,9 +45,8 @@ static int rand_eb(void)
unsigned int eb;

again:
- eb = prandom_u32();
/* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */
- eb %= (ebcnt - 1);
+ eb = prandom_u32_max(ebcnt - 1);
if (bbt[eb])
goto again;
return eb;
@@ -55,20 +54,12 @@ static int rand_eb(void)

static int rand_offs(void)
{
- unsigned int offs;
-
- offs = prandom_u32();
- offs %= bufsize;
- return offs;
+ return prandom_u32_max(bufsize);
}

static int rand_len(int offs)
{
- unsigned int len;
-
- len = prandom_u32();
- len %= (bufsize - offs);
- return len;
+ return prandom_u32_max(bufsize - offs);
}

static int do_read(void)
@@ -127,7 +118,7 @@ static int do_write(void)

static int do_operation(void)
{
- if (prandom_u32() & 1)
+ if (prandom_u32_max(2))
return do_read();
else
return do_write();
diff --git a/drivers/mtd/ubi/debug.c b/drivers/mtd/ubi/debug.c
index 31d427ee191a..908d0e088557 100644
--- a/drivers/mtd/ubi/debug.c
+++ b/drivers/mtd/ubi/debug.c
@@ -590,7 +590,7 @@ int ubi_dbg_power_cut(struct ubi_device *ubi, int caller)

if (ubi->dbg.power_cut_max > ubi->dbg.power_cut_min) {
range = ubi->dbg.power_cut_max - ubi->dbg.power_cut_min;
- ubi->dbg.power_cut_counter += prandom_u32() % range;
+ ubi->dbg.power_cut_counter += prandom_u32_max(range);
}
return 0;
}
diff --git a/drivers/mtd/ubi/debug.h b/drivers/mtd/ubi/debug.h
index 118248a5d7d4..4236c799a47c 100644
--- a/drivers/mtd/ubi/debug.h
+++ b/drivers/mtd/ubi/debug.h
@@ -73,7 +73,7 @@ static inline int ubi_dbg_is_bgt_disabled(const struct ubi_device *ubi)
static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
{
if (ubi->dbg.emulate_bitflips)
- return !(prandom_u32() % 200);
+ return !(prandom_u32_max(200));
return 0;
}

@@ -87,7 +87,7 @@ static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
{
if (ubi->dbg.emulate_io_failures)
- return !(prandom_u32() % 500);
+ return !(prandom_u32_max(500));
return 0;
}

@@ -101,7 +101,7 @@ static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
static inline int ubi_dbg_is_erase_failure(const struct ubi_device *ubi)
{
if (ubi->dbg.emulate_io_failures)
- return !(prandom_u32() % 400);
+ return !(prandom_u32_max(400));
return 0;
}

diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
index e86503d97f32..f597b313acaa 100644
--- a/drivers/net/ethernet/broadcom/cnic.c
+++ b/drivers/net/ethernet/broadcom/cnic.c
@@ -4105,8 +4105,7 @@ static int cnic_cm_alloc_mem(struct cnic_dev *dev)
for (i = 0; i < MAX_CM_SK_TBL_SZ; i++)
atomic_set(&cp->csk_tbl[i].ref_count, 0);

- port_id = prandom_u32();
- port_id %= CNIC_LOCAL_PORT_RANGE;
+ port_id = prandom_u32_max(CNIC_LOCAL_PORT_RANGE);
if (cnic_init_id_tbl(&cp->csk_port_tbl, CNIC_LOCAL_PORT_RANGE,
CNIC_LOCAL_PORT_MIN, port_id)) {
cnic_cm_free_mem(dev);
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
index 539992dad8ba..a4256087ac82 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
@@ -919,8 +919,8 @@ static int csk_wait_memory(struct chtls_dev *cdev,
current_timeo = *timeo_p;
noblock = (*timeo_p ? false : true);
if (csk_mem_free(cdev, sk)) {
- current_timeo = (prandom_u32() % (HZ / 5)) + 2;
- vm_wait = (prandom_u32() % (HZ / 5)) + 2;
+ current_timeo = prandom_u32_max(HZ / 5) + 2;
+ vm_wait = prandom_u32_max(HZ / 5) + 2;
}

add_wait_queue(sk_sleep(sk), &wait);
diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
index 3e69079ed694..7df78a721b04 100644
--- a/drivers/net/hamradio/baycom_epp.c
+++ b/drivers/net/hamradio/baycom_epp.c
@@ -438,7 +438,7 @@ static int transmit(struct baycom_state *bc, int cnt, unsigned char stat)
if ((--bc->hdlctx.slotcnt) > 0)
return 0;
bc->hdlctx.slotcnt = bc->ch_params.slottime;
- if ((prandom_u32() % 256) > bc->ch_params.ppersist)
+ if (prandom_u32_max(256) > bc->ch_params.ppersist)
return 0;
}
}
diff --git a/drivers/net/hamradio/hdlcdrv.c b/drivers/net/hamradio/hdlcdrv.c
index 8297411e87ea..360d041a62c4 100644
--- a/drivers/net/hamradio/hdlcdrv.c
+++ b/drivers/net/hamradio/hdlcdrv.c
@@ -377,7 +377,7 @@ void hdlcdrv_arbitrate(struct net_device *dev, struct hdlcdrv_state *s)
if ((--s->hdlctx.slotcnt) > 0)
return;
s->hdlctx.slotcnt = s->ch_params.slottime;
- if ((prandom_u32() % 256) > s->ch_params.ppersist)
+ if (prandom_u32_max(256) > s->ch_params.ppersist)
return;
start_tx(dev, s);
}
diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
index 980f2be32f05..97a6cc5c7ae8 100644
--- a/drivers/net/hamradio/yam.c
+++ b/drivers/net/hamradio/yam.c
@@ -626,7 +626,7 @@ static void yam_arbitrate(struct net_device *dev)
yp->slotcnt = yp->slot / 10;

/* is random > persist ? */
- if ((prandom_u32() % 256) > yp->pers)
+ if (prandom_u32_max(256) > yp->pers)
return;

yam_start_tx(dev, yp);
diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c
index 59fe356942b5..2a7108361246 100644
--- a/drivers/net/phy/at803x.c
+++ b/drivers/net/phy/at803x.c
@@ -1732,7 +1732,7 @@ static int qca808x_phy_fast_retrain_config(struct phy_device *phydev)

static int qca808x_phy_ms_random_seed_set(struct phy_device *phydev)
{
- u16 seed_value = (prandom_u32() % QCA808X_MASTER_SLAVE_SEED_RANGE);
+ u16 seed_value = prandom_u32_max(QCA808X_MASTER_SLAVE_SEED_RANGE);

return at803x_debug_reg_mask(phydev, QCA808X_PHY_DEBUG_LOCAL_SEED,
QCA808X_MASTER_SLAVE_SEED_CFG,
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
index 479041f070f9..10d9d9c63b28 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
@@ -1128,7 +1128,7 @@ static void brcmf_p2p_afx_handler(struct work_struct *work)
if (afx_hdl->is_listen && afx_hdl->my_listen_chan)
/* 100ms ~ 300ms */
err = brcmf_p2p_discover_listen(p2p, afx_hdl->my_listen_chan,
- 100 * (1 + prandom_u32() % 3));
+ 100 * (1 + prandom_u32_max(3)));
else
err = brcmf_p2p_act_frm_search(p2p, afx_hdl->peer_listen_chan);

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
index ed586e6d7d64..de0c545d50fd 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
@@ -1099,7 +1099,7 @@ static void iwl_mvm_mac_ctxt_cmd_fill_ap(struct iwl_mvm *mvm,
iwl_mvm_mac_ap_iterator, &data);

if (data.beacon_device_ts) {
- u32 rand = (prandom_u32() % (64 - 36)) + 36;
+ u32 rand = prandom_u32_max(64 - 36) + 36;
mvmvif->ap_beacon_time = data.beacon_device_ts +
ieee80211_tu_to_usec(data.beacon_int * rand /
100);
diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
index 39e16eab47aa..ddc048069af2 100644
--- a/drivers/scsi/fcoe/fcoe_ctlr.c
+++ b/drivers/scsi/fcoe/fcoe_ctlr.c
@@ -2233,7 +2233,7 @@ static void fcoe_ctlr_vn_restart(struct fcoe_ctlr *fip)

if (fip->probe_tries < FIP_VN_RLIM_COUNT) {
fip->probe_tries++;
- wait = prandom_u32() % FIP_VN_PROBE_WAIT;
+ wait = prandom_u32_max(FIP_VN_PROBE_WAIT);
} else
wait = FIP_VN_RLIM_INT;
mod_timer(&fip->timer, jiffies + msecs_to_jiffies(wait));
@@ -3125,7 +3125,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
fcoe_all_vn2vn, 0);
fip->port_ka_time = jiffies +
msecs_to_jiffies(FIP_VN_BEACON_INT +
- (prandom_u32() % FIP_VN_BEACON_FUZZ));
+ prandom_u32_max(FIP_VN_BEACON_FUZZ));
}
if (time_before(fip->port_ka_time, next_time))
next_time = fip->port_ka_time;
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index cecfb2cb4c7b..df2fe7bd26d1 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -618,7 +618,7 @@ static int qedi_cm_alloc_mem(struct qedi_ctx *qedi)
sizeof(struct qedi_endpoint *)), GFP_KERNEL);
if (!qedi->ep_tbl)
return -ENOMEM;
- port_id = prandom_u32() % QEDI_LOCAL_PORT_RANGE;
+ port_id = prandom_u32_max(QEDI_LOCAL_PORT_RANGE);
if (qedi_init_id_tbl(&qedi->lcl_port_tbl, QEDI_LOCAL_PORT_RANGE,
QEDI_LOCAL_PORT_MIN, port_id)) {
qedi_cm_free_mem(qedi);
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index 42351d7a0dd6..f0c6e7e7b92b 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -362,7 +362,7 @@ static int ceph_fill_fragtree(struct inode *inode,
if (nsplits != ci->i_fragtree_nsplits) {
update = true;
} else if (nsplits) {
- i = prandom_u32() % nsplits;
+ i = prandom_u32_max(nsplits);
id = le32_to_cpu(fragtree->splits[i].frag);
if (!__ceph_find_frag(ci, id))
update = true;
diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
index 8d0a6d2c2da4..3fbabc98e1f7 100644
--- a/fs/ceph/mdsmap.c
+++ b/fs/ceph/mdsmap.c
@@ -29,7 +29,7 @@ static int __mdsmap_get_random_mds(struct ceph_mdsmap *m, bool ignore_laggy)
return -1;

/* pick */
- n = prandom_u32() % n;
+ n = prandom_u32_max(n);
for (j = 0, i = 0; i < m->possible_max_rank; i++) {
if (CEPH_MDS_IS_READY(i, ignore_laggy))
j++;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 9a66abcca1a8..4af351320075 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -3811,8 +3811,7 @@ static int ext4_lazyinit_thread(void *arg)
}
if (!progress) {
elr->lr_next_sched = jiffies +
- (prandom_u32()
- % (EXT4_DEF_LI_MAX_START_DELAY * HZ));
+ prandom_u32_max(EXT4_DEF_LI_MAX_START_DELAY * HZ);
}
if (time_before(elr->lr_next_sched, next_wakeup))
next_wakeup = elr->lr_next_sched;
@@ -3959,8 +3958,8 @@ static struct ext4_li_request *ext4_li_request_new(struct super_block *sb,
* spread the inode table initialization requests
* better.
*/
- elr->lr_next_sched = jiffies + (prandom_u32() %
- (EXT4_DEF_LI_MAX_START_DELAY * HZ));
+ elr->lr_next_sched = jiffies + prandom_u32_max(
+ EXT4_DEF_LI_MAX_START_DELAY * HZ);
return elr;
}

diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 6da21d405ce1..2c5fd1db3a3e 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -285,7 +285,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,

/* let's select beginning hot/small space first in no_heap mode*/
if (f2fs_need_rand_seg(sbi))
- p->offset = prandom_u32() % (MAIN_SECS(sbi) * sbi->segs_per_sec);
+ p->offset = prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec);
else if (test_opt(sbi, NOHEAP) &&
(type == CURSEG_HOT_DATA || IS_NODESEG(type)))
p->offset = 0;
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 0de21f82d7bc..507f77f839f3 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -2535,7 +2535,7 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)

sanity_check_seg_type(sbi, seg_type);
if (f2fs_need_rand_seg(sbi))
- return prandom_u32() % (MAIN_SECS(sbi) * sbi->segs_per_sec);
+ return prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec);

/* if segs_per_sec is large than 1, we need to keep original policy. */
if (__is_large_section(sbi))
@@ -2589,7 +2589,7 @@ static void new_curseg(struct f2fs_sb_info *sbi, int type, bool new_sec)
curseg->alloc_type = LFS;
if (F2FS_OPTION(sbi).fs_mode == FS_MODE_FRAGMENT_BLK)
curseg->fragment_remained_chunk =
- prandom_u32() % sbi->max_fragment_chunk + 1;
+ prandom_u32_max(sbi->max_fragment_chunk) + 1;
}

static int __next_free_blkoff(struct f2fs_sb_info *sbi,
@@ -2626,9 +2626,9 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
/* To allocate block chunks in different sizes, use random number */
if (--seg->fragment_remained_chunk <= 0) {
seg->fragment_remained_chunk =
- prandom_u32() % sbi->max_fragment_chunk + 1;
+ prandom_u32_max(sbi->max_fragment_chunk) + 1;
seg->next_blkoff +=
- prandom_u32() % sbi->max_fragment_hole + 1;
+ prandom_u32_max(sbi->max_fragment_hole) + 1;
}
}
}
diff --git a/fs/ubifs/debug.c b/fs/ubifs/debug.c
index fc718f6178f2..f4d3b568aa64 100644
--- a/fs/ubifs/debug.c
+++ b/fs/ubifs/debug.c
@@ -2467,7 +2467,7 @@ int dbg_check_nondata_nodes_order(struct ubifs_info *c, struct list_head *head)

static inline int chance(unsigned int n, unsigned int out_of)
{
- return !!((prandom_u32() % out_of) + 1 <= n);
+ return !!(prandom_u32_max(out_of) + 1 <= n);

}

@@ -2485,13 +2485,13 @@ static int power_cut_emulated(struct ubifs_info *c, int lnum, int write)
if (chance(1, 2)) {
d->pc_delay = 1;
/* Fail within 1 minute */
- delay = prandom_u32() % 60000;
+ delay = prandom_u32_max(60000);
d->pc_timeout = jiffies;
d->pc_timeout += msecs_to_jiffies(delay);
ubifs_warn(c, "failing after %lums", delay);
} else {
d->pc_delay = 2;
- delay = prandom_u32() % 10000;
+ delay = prandom_u32_max(10000);
/* Fail within 10000 operations */
d->pc_cnt_max = delay;
ubifs_warn(c, "failing after %lu calls", delay);
@@ -2571,7 +2571,7 @@ static int corrupt_data(const struct ubifs_info *c, const void *buf,
unsigned int from, to, ffs = chance(1, 2);
unsigned char *p = (void *)buf;

- from = prandom_u32() % len;
+ from = prandom_u32_max(len);
/* Corruption span max to end of write unit */
to = min(len, ALIGN(from + 1, c->max_write_size));

diff --git a/fs/ubifs/lpt_commit.c b/fs/ubifs/lpt_commit.c
index d76a19e460cd..cfbc31f709f4 100644
--- a/fs/ubifs/lpt_commit.c
+++ b/fs/ubifs/lpt_commit.c
@@ -1970,28 +1970,28 @@ static int dbg_populate_lsave(struct ubifs_info *c)

if (!dbg_is_chk_gen(c))
return 0;
- if (prandom_u32() & 3)
+ if (prandom_u32_max(4))
return 0;

for (i = 0; i < c->lsave_cnt; i++)
c->lsave[i] = c->main_first;

list_for_each_entry(lprops, &c->empty_list, list)
- c->lsave[prandom_u32() % c->lsave_cnt] = lprops->lnum;
+ c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
list_for_each_entry(lprops, &c->freeable_list, list)
- c->lsave[prandom_u32() % c->lsave_cnt] = lprops->lnum;
+ c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
list_for_each_entry(lprops, &c->frdi_idx_list, list)
- c->lsave[prandom_u32() % c->lsave_cnt] = lprops->lnum;
+ c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;

heap = &c->lpt_heap[LPROPS_DIRTY_IDX - 1];
for (i = 0; i < heap->cnt; i++)
- c->lsave[prandom_u32() % c->lsave_cnt] = heap->arr[i]->lnum;
+ c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
heap = &c->lpt_heap[LPROPS_DIRTY - 1];
for (i = 0; i < heap->cnt; i++)
- c->lsave[prandom_u32() % c->lsave_cnt] = heap->arr[i]->lnum;
+ c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
heap = &c->lpt_heap[LPROPS_FREE - 1];
for (i = 0; i < heap->cnt; i++)
- c->lsave[prandom_u32() % c->lsave_cnt] = heap->arr[i]->lnum;
+ c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;

return 1;
}
diff --git a/fs/ubifs/tnc_commit.c b/fs/ubifs/tnc_commit.c
index 58c92c96ecef..01362ad5f804 100644
--- a/fs/ubifs/tnc_commit.c
+++ b/fs/ubifs/tnc_commit.c
@@ -700,7 +700,7 @@ static int alloc_idx_lebs(struct ubifs_info *c, int cnt)
c->ilebs[c->ileb_cnt++] = lnum;
dbg_cmt("LEB %d", lnum);
}
- if (dbg_is_chk_index(c) && !(prandom_u32() & 7))
+ if (dbg_is_chk_index(c) && !prandom_u32_max(8))
return -ENOSPC;
return 0;
}
diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index e2bdf089c0a3..6261599bb389 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -1520,7 +1520,7 @@ xfs_alloc_ag_vextent_lastblock(

#ifdef DEBUG
/* Randomly don't execute the first algorithm. */
- if (prandom_u32() & 1)
+ if (prandom_u32_max(2))
return 0;
#endif

diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index 6cdfd64bc56b..7838b31126e2 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -636,7 +636,7 @@ xfs_ialloc_ag_alloc(
/* randomly do sparse inode allocations */
if (xfs_has_sparseinodes(tp->t_mountp) &&
igeo->ialloc_min_blks < igeo->ialloc_blks)
- do_sparse = prandom_u32() & 1;
+ do_sparse = prandom_u32_max(2);
#endif

/*
diff --git a/fs/xfs/xfs_error.c b/fs/xfs/xfs_error.c
index 296faa41d81d..7db588ed0be5 100644
--- a/fs/xfs/xfs_error.c
+++ b/fs/xfs/xfs_error.c
@@ -274,7 +274,7 @@ xfs_errortag_test(

ASSERT(error_tag < XFS_ERRTAG_MAX);
randfactor = mp->m_errortag[error_tag];
- if (!randfactor || prandom_u32() % randfactor)
+ if (!randfactor || prandom_u32_max(randfactor))
return false;

xfs_warn_ratelimited(mp,
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index cee5da1e54c4..8058bec87ace 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -310,7 +310,7 @@ static void clocksource_verify_choose_cpus(void)
* CPUs that are currently online.
*/
for (i = 1; i < n; i++) {
- cpu = prandom_u32() % nr_cpu_ids;
+ cpu = prandom_u32_max(nr_cpu_ids);
cpu = cpumask_next(cpu - 1, cpu_online_mask);
if (cpu >= nr_cpu_ids)
cpu = cpumask_first(cpu_online_mask);
diff --git a/lib/fault-inject.c b/lib/fault-inject.c
index 423784d9c058..96e092de5b72 100644
--- a/lib/fault-inject.c
+++ b/lib/fault-inject.c
@@ -139,7 +139,7 @@ bool should_fail(struct fault_attr *attr, ssize_t size)
return false;
}

- if (attr->probability <= prandom_u32() % 100)
+ if (attr->probability <= prandom_u32_max(100))
return false;

if (!fail_stacktrace(attr))
diff --git a/lib/find_bit_benchmark.c b/lib/find_bit_benchmark.c
index db904b57d4b8..1a6466c64bb6 100644
--- a/lib/find_bit_benchmark.c
+++ b/lib/find_bit_benchmark.c
@@ -157,8 +157,8 @@ static int __init find_bit_test(void)
bitmap_zero(bitmap2, BITMAP_LEN);

while (nbits--) {
- __set_bit(prandom_u32() % BITMAP_LEN, bitmap);
- __set_bit(prandom_u32() % BITMAP_LEN, bitmap2);
+ __set_bit(prandom_u32_max(BITMAP_LEN), bitmap);
+ __set_bit(prandom_u32_max(BITMAP_LEN), bitmap2);
}

test_find_next_bit(bitmap, BITMAP_LEN);
diff --git a/lib/reed_solomon/test_rslib.c b/lib/reed_solomon/test_rslib.c
index d9d1c33aebda..4d241bdc88aa 100644
--- a/lib/reed_solomon/test_rslib.c
+++ b/lib/reed_solomon/test_rslib.c
@@ -183,7 +183,7 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,

do {
/* Must not choose the same location twice */
- errloc = prandom_u32() % len;
+ errloc = prandom_u32_max(len);
} while (errlocs[errloc] != 0);

errlocs[errloc] = 1;
@@ -194,12 +194,12 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,
for (i = 0; i < eras; i++) {
do {
/* Must not choose the same location twice */
- errloc = prandom_u32() % len;
+ errloc = prandom_u32_max(len);
} while (errlocs[errloc] != 0);

derrlocs[i] = errloc;

- if (ewsc && (prandom_u32() & 1)) {
+ if (ewsc && prandom_u32_max(2)) {
/* Erasure with the symbol intact */
errlocs[errloc] = 2;
} else {
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 29eb0484215a..ef0661504561 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -21,7 +21,7 @@ static int init_alloc_hint(struct sbitmap *sb, gfp_t flags)
int i;

for_each_possible_cpu(i)
- *per_cpu_ptr(sb->alloc_hint, i) = prandom_u32() % depth;
+ *per_cpu_ptr(sb->alloc_hint, i) = prandom_u32_max(depth);
}
return 0;
}
@@ -33,7 +33,7 @@ static inline unsigned update_alloc_hint_before_get(struct sbitmap *sb,

hint = this_cpu_read(*sb->alloc_hint);
if (unlikely(hint >= depth)) {
- hint = depth ? prandom_u32() % depth : 0;
+ hint = depth ? prandom_u32_max(depth) : 0;
this_cpu_write(*sb->alloc_hint, hint);
}

diff --git a/lib/test_list_sort.c b/lib/test_list_sort.c
index ade7a1ea0c8e..19ff229b9c3a 100644
--- a/lib/test_list_sort.c
+++ b/lib/test_list_sort.c
@@ -71,7 +71,7 @@ static void list_sort_test(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, el);

/* force some equivalencies */
- el->value = prandom_u32() % (TEST_LIST_LEN / 3);
+ el->value = prandom_u32_max(TEST_LIST_LEN / 3);
el->serial = i;
el->poison1 = TEST_POISON1;
el->poison2 = TEST_POISON2;
diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index 4f2f2d1bac56..56ffaa8dd3f6 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -151,9 +151,7 @@ static int random_size_alloc_test(void)
int i;

for (i = 0; i < test_loop_count; i++) {
- n = prandom_u32();
- n = (n % 100) + 1;
-
+ n = prandom_u32_max(n % 100) + 1;
p = vmalloc(n * PAGE_SIZE);

if (!p)
@@ -293,16 +291,12 @@ pcpu_alloc_test(void)
return -1;

for (i = 0; i < 35000; i++) {
- unsigned int r;
-
- r = prandom_u32();
- size = (r % (PAGE_SIZE / 4)) + 1;
+ size = prandom_u32_max(PAGE_SIZE / 4) + 1;

/*
* Maximum PAGE_SIZE
*/
- r = prandom_u32();
- align = 1 << ((r % 11) + 1);
+ align = 1 << (prandom_u32_max(11) + 1);

pcpu[i] = __alloc_percpu(size, align);
if (!pcpu[i])
@@ -393,14 +387,11 @@ static struct test_driver {

static void shuffle_array(int *arr, int n)
{
- unsigned int rnd;
int i, j;

for (i = n - 1; i > 0; i--) {
- rnd = prandom_u32();
-
/* Cut the range. */
- j = rnd % i;
+ j = prandom_u32_max(i);

/* Swap indexes. */
swap(arr[i], arr[j]);
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index 6a6898ee4049..db60217f911b 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -222,7 +222,7 @@ static void pick_new_mon(struct ceph_mon_client *monc)
max--;
}

- n = prandom_u32() % max;
+ n = prandom_u32_max(max);
if (o >= 0 && n >= o)
n++;

diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 87b883c7bfd6..4e4f1e4bc265 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -1479,7 +1479,7 @@ static bool target_should_be_paused(struct ceph_osd_client *osdc,

static int pick_random_replica(const struct ceph_osds *acting)
{
- int i = prandom_u32() % acting->size;
+ int i = prandom_u32_max(acting->size);

dout("%s picked osd%d, primary osd%d\n", __func__,
acting->osds[i], acting->primary);
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 78cc8fb68814..85d497cb58d8 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -111,7 +111,7 @@ static void neigh_cleanup_and_release(struct neighbour *neigh)

unsigned long neigh_rand_reach_time(unsigned long base)
{
- return base ? (prandom_u32() % base) + (base >> 1) : 0;
+ return base ? prandom_u32_max(base) + (base >> 1) : 0;
}
EXPORT_SYMBOL(neigh_rand_reach_time);

diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 88906ba6d9a7..5ca4f953034c 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2324,7 +2324,7 @@ static inline int f_pick(struct pktgen_dev *pkt_dev)
pkt_dev->curfl = 0; /*reset */
}
} else {
- flow = prandom_u32() % pkt_dev->cflows;
+ flow = prandom_u32_max(pkt_dev->cflows);
pkt_dev->curfl = flow;

if (pkt_dev->flows[flow].count > pkt_dev->lflow) {
@@ -2380,10 +2380,9 @@ static void set_cur_queue_map(struct pktgen_dev *pkt_dev)
else if (pkt_dev->queue_map_min <= pkt_dev->queue_map_max) {
__u16 t;
if (pkt_dev->flags & F_QUEUE_MAP_RND) {
- t = prandom_u32() %
- (pkt_dev->queue_map_max -
- pkt_dev->queue_map_min + 1)
- + pkt_dev->queue_map_min;
+ t = prandom_u32_max(pkt_dev->queue_map_max -
+ pkt_dev->queue_map_min + 1) +
+ pkt_dev->queue_map_min;
} else {
t = pkt_dev->cur_queue_map + 1;
if (t > pkt_dev->queue_map_max)
@@ -2412,7 +2411,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
__u32 tmp;

if (pkt_dev->flags & F_MACSRC_RND)
- mc = prandom_u32() % pkt_dev->src_mac_count;
+ mc = prandom_u32_max(pkt_dev->src_mac_count);
else {
mc = pkt_dev->cur_src_mac_offset++;
if (pkt_dev->cur_src_mac_offset >=
@@ -2438,7 +2437,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
__u32 tmp;

if (pkt_dev->flags & F_MACDST_RND)
- mc = prandom_u32() % pkt_dev->dst_mac_count;
+ mc = prandom_u32_max(pkt_dev->dst_mac_count);

else {
mc = pkt_dev->cur_dst_mac_offset++;
@@ -2470,18 +2469,18 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
}

if ((pkt_dev->flags & F_VID_RND) && (pkt_dev->vlan_id != 0xffff)) {
- pkt_dev->vlan_id = prandom_u32() & (4096 - 1);
+ pkt_dev->vlan_id = prandom_u32_max(4096);
}

if ((pkt_dev->flags & F_SVID_RND) && (pkt_dev->svlan_id != 0xffff)) {
- pkt_dev->svlan_id = prandom_u32() & (4096 - 1);
+ pkt_dev->svlan_id = prandom_u32_max(4096);
}

if (pkt_dev->udp_src_min < pkt_dev->udp_src_max) {
if (pkt_dev->flags & F_UDPSRC_RND)
- pkt_dev->cur_udp_src = prandom_u32() %
- (pkt_dev->udp_src_max - pkt_dev->udp_src_min)
- + pkt_dev->udp_src_min;
+ pkt_dev->cur_udp_src = prandom_u32_max(
+ pkt_dev->udp_src_max - pkt_dev->udp_src_min) +
+ pkt_dev->udp_src_min;

else {
pkt_dev->cur_udp_src++;
@@ -2492,9 +2491,9 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)

if (pkt_dev->udp_dst_min < pkt_dev->udp_dst_max) {
if (pkt_dev->flags & F_UDPDST_RND) {
- pkt_dev->cur_udp_dst = prandom_u32() %
- (pkt_dev->udp_dst_max - pkt_dev->udp_dst_min)
- + pkt_dev->udp_dst_min;
+ pkt_dev->cur_udp_dst = prandom_u32_max(
+ pkt_dev->udp_dst_max - pkt_dev->udp_dst_min) +
+ pkt_dev->udp_dst_min;
} else {
pkt_dev->cur_udp_dst++;
if (pkt_dev->cur_udp_dst >= pkt_dev->udp_dst_max)
@@ -2509,7 +2508,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
if (imn < imx) {
__u32 t;
if (pkt_dev->flags & F_IPSRC_RND)
- t = prandom_u32() % (imx - imn) + imn;
+ t = prandom_u32_max(imx - imn) + imn;
else {
t = ntohl(pkt_dev->cur_saddr);
t++;
@@ -2531,8 +2530,8 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
if (pkt_dev->flags & F_IPDST_RND) {

do {
- t = prandom_u32() %
- (imx - imn) + imn;
+ t = prandom_u32_max(imx - imn) +
+ imn;
s = htonl(t);
} while (ipv4_is_loopback(s) ||
ipv4_is_multicast(s) ||
@@ -2579,9 +2578,9 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
if (pkt_dev->min_pkt_size < pkt_dev->max_pkt_size) {
__u32 t;
if (pkt_dev->flags & F_TXSIZE_RND) {
- t = prandom_u32() %
- (pkt_dev->max_pkt_size - pkt_dev->min_pkt_size)
- + pkt_dev->min_pkt_size;
+ t = prandom_u32_max(pkt_dev->max_pkt_size -
+ pkt_dev->min_pkt_size) +
+ pkt_dev->min_pkt_size;
} else {
t = pkt_dev->cur_pkt_size + 1;
if (t > pkt_dev->max_pkt_size)
@@ -2590,7 +2589,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
pkt_dev->cur_pkt_size = t;
} else if (pkt_dev->n_imix_entries > 0) {
struct imix_pkt *entry;
- __u32 t = prandom_u32() % IMIX_PRECISION;
+ __u32 t = prandom_u32_max(IMIX_PRECISION);
__u8 entry_index = pkt_dev->imix_distribution[t];

entry = &pkt_dev->imix_entries[entry_index];
diff --git a/net/core/stream.c b/net/core/stream.c
index ccc083cdef23..4780558ea314 100644
--- a/net/core/stream.c
+++ b/net/core/stream.c
@@ -123,7 +123,7 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
DEFINE_WAIT_FUNC(wait, woken_wake_function);

if (sk_stream_memory_free(sk))
- current_timeo = vm_wait = (prandom_u32() % (HZ / 5)) + 2;
+ current_timeo = vm_wait = prandom_u32_max(HZ / 5) + 2;

add_wait_queue(sk_sleep(sk), &wait);

diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
index e3ab0cb61624..9149e78beea5 100644
--- a/net/ipv4/igmp.c
+++ b/net/ipv4/igmp.c
@@ -213,7 +213,7 @@ static void igmp_stop_timer(struct ip_mc_list *im)
/* It must be called with locked im->lock */
static void igmp_start_timer(struct ip_mc_list *im, int max_delay)
{
- int tv = prandom_u32() % max_delay;
+ int tv = prandom_u32_max(max_delay);

im->tm_running = 1;
if (!mod_timer(&im->timer, jiffies+tv+2))
@@ -222,7 +222,7 @@ static void igmp_start_timer(struct ip_mc_list *im, int max_delay)

static void igmp_gq_start_timer(struct in_device *in_dev)
{
- int tv = prandom_u32() % in_dev->mr_maxdelay;
+ int tv = prandom_u32_max(in_dev->mr_maxdelay);
unsigned long exp = jiffies + tv + 2;

if (in_dev->mr_gq_running &&
@@ -236,7 +236,7 @@ static void igmp_gq_start_timer(struct in_device *in_dev)

static void igmp_ifc_start_timer(struct in_device *in_dev, int delay)
{
- int tv = prandom_u32() % delay;
+ int tv = prandom_u32_max(delay);

if (!mod_timer(&in_dev->mr_ifc_timer, jiffies+tv+2))
in_dev_hold(in_dev);
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index eb31c7158b39..0c3eab1347cd 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -223,7 +223,7 @@ inet_csk_find_open_port(struct sock *sk, struct inet_bind_bucket **tb_ret, int *
if (likely(remaining > 1))
remaining &= ~1U;

- offset = prandom_u32() % remaining;
+ offset = prandom_u32_max(remaining);
/* __inet_hash_connect() favors ports having @low parity
* We do the opposite to not pollute connect() users.
*/
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index b9d995b5ce24..9dc070f2018e 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -794,7 +794,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
* on low contention the randomness is maximal and on high contention
* it may be inexistent.
*/
- i = max_t(int, i, (prandom_u32() & 7) * 2);
+ i = max_t(int, i, prandom_u32_max(8) * 2);
WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2);

/* Head lock still held and bh's disabled */
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 10ce86bf228e..417834b7169d 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -104,7 +104,7 @@ static inline u32 cstamp_delta(unsigned long cstamp)
static inline s32 rfc3315_s14_backoff_init(s32 irt)
{
/* multiply 'initial retransmission time' by 0.9 .. 1.1 */
- u64 tmp = (900000 + prandom_u32() % 200001) * (u64)irt;
+ u64 tmp = (900000 + prandom_u32_max(200001)) * (u64)irt;
do_div(tmp, 1000000);
return (s32)tmp;
}
@@ -112,11 +112,11 @@ static inline s32 rfc3315_s14_backoff_init(s32 irt)
static inline s32 rfc3315_s14_backoff_update(s32 rt, s32 mrt)
{
/* multiply 'retransmission timeout' by 1.9 .. 2.1 */
- u64 tmp = (1900000 + prandom_u32() % 200001) * (u64)rt;
+ u64 tmp = (1900000 + prandom_u32_max(200001)) * (u64)rt;
do_div(tmp, 1000000);
if ((s32)tmp > mrt) {
/* multiply 'maximum retransmission time' by 0.9 .. 1.1 */
- tmp = (900000 + prandom_u32() % 200001) * (u64)mrt;
+ tmp = (900000 + prandom_u32_max(200001)) * (u64)mrt;
do_div(tmp, 1000000);
}
return (s32)tmp;
@@ -3967,7 +3967,7 @@ static void addrconf_dad_kick(struct inet6_ifaddr *ifp)
if (ifp->flags & IFA_F_OPTIMISTIC)
rand_num = 0;
else
- rand_num = prandom_u32() % (idev->cnf.rtr_solicit_delay ? : 1);
+ rand_num = prandom_u32_max(idev->cnf.rtr_solicit_delay ?: 1);

nonce = 0;
if (idev->cnf.enhanced_dad ||
diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
index 87c699d57b36..bf4f5edb3c3e 100644
--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -1050,7 +1050,7 @@ bool ipv6_chk_mcast_addr(struct net_device *dev, const struct in6_addr *group,
/* called with mc_lock */
static void mld_gq_start_work(struct inet6_dev *idev)
{
- unsigned long tv = prandom_u32() % idev->mc_maxdelay;
+ unsigned long tv = prandom_u32_max(idev->mc_maxdelay);

idev->mc_gq_running = 1;
if (!mod_delayed_work(mld_wq, &idev->mc_gq_work, tv + 2))
@@ -1068,7 +1068,7 @@ static void mld_gq_stop_work(struct inet6_dev *idev)
/* called with mc_lock */
static void mld_ifc_start_work(struct inet6_dev *idev, unsigned long delay)
{
- unsigned long tv = prandom_u32() % delay;
+ unsigned long tv = prandom_u32_max(delay);

if (!mod_delayed_work(mld_wq, &idev->mc_ifc_work, tv + 2))
in6_dev_hold(idev);
@@ -1085,7 +1085,7 @@ static void mld_ifc_stop_work(struct inet6_dev *idev)
/* called with mc_lock */
static void mld_dad_start_work(struct inet6_dev *idev, unsigned long delay)
{
- unsigned long tv = prandom_u32() % delay;
+ unsigned long tv = prandom_u32_max(delay);

if (!mod_delayed_work(mld_wq, &idev->mc_dad_work, tv + 2))
in6_dev_hold(idev);
@@ -1130,7 +1130,7 @@ static void igmp6_group_queried(struct ifmcaddr6 *ma, unsigned long resptime)
}

if (delay >= resptime)
- delay = prandom_u32() % resptime;
+ delay = prandom_u32_max(resptime);

if (!mod_delayed_work(mld_wq, &ma->mca_work, delay))
refcount_inc(&ma->mca_refcnt);
@@ -2574,7 +2574,7 @@ static void igmp6_join_group(struct ifmcaddr6 *ma)

igmp6_send(&ma->mca_addr, ma->idev->dev, ICMPV6_MGM_REPORT);

- delay = prandom_u32() % unsolicited_report_interval(ma->idev);
+ delay = prandom_u32_max(unsolicited_report_interval(ma->idev));

if (cancel_delayed_work(&ma->mca_work)) {
refcount_dec(&ma->mca_refcnt);
diff --git a/net/netfilter/ipvs/ip_vs_twos.c b/net/netfilter/ipvs/ip_vs_twos.c
index acb55d8393ef..f2579fc9c75b 100644
--- a/net/netfilter/ipvs/ip_vs_twos.c
+++ b/net/netfilter/ipvs/ip_vs_twos.c
@@ -71,8 +71,8 @@ static struct ip_vs_dest *ip_vs_twos_schedule(struct ip_vs_service *svc,
* from 0 to total_weight
*/
total_weight += 1;
- rweight1 = prandom_u32() % total_weight;
- rweight2 = prandom_u32() % total_weight;
+ rweight1 = prandom_u32_max(total_weight);
+ rweight2 = prandom_u32_max(total_weight);

/* Pick two weighted servers */
list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 5cbe07116e04..331f80e12779 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -1350,7 +1350,7 @@ static bool fanout_flow_is_huge(struct packet_sock *po, struct sk_buff *skb)
if (READ_ONCE(history[i]) == rxhash)
count++;

- victim = prandom_u32() % ROLLOVER_HLEN;
+ victim = prandom_u32_max(ROLLOVER_HLEN);

/* Avoid dirtying the cache line if possible */
if (READ_ONCE(history[victim]) != rxhash)
diff --git a/net/sched/act_gact.c b/net/sched/act_gact.c
index ac29d1065232..1accaedef54f 100644
--- a/net/sched/act_gact.c
+++ b/net/sched/act_gact.c
@@ -26,7 +26,7 @@ static struct tc_action_ops act_gact_ops;
static int gact_net_rand(struct tcf_gact *gact)
{
smp_rmb(); /* coupled with smp_wmb() in tcf_gact_init() */
- if (prandom_u32() % gact->tcfg_pval)
+ if (prandom_u32_max(gact->tcfg_pval))
return gact->tcf_action;
return gact->tcfg_paction;
}
diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
index 2f7f5e44d28c..55c9f961fb0f 100644
--- a/net/sched/act_sample.c
+++ b/net/sched/act_sample.c
@@ -169,7 +169,7 @@ static int tcf_sample_act(struct sk_buff *skb, const struct tc_action *a,
psample_group = rcu_dereference_bh(s->psample_group);

/* randomly sample packets according to rate */
- if (psample_group && (prandom_u32() % s->rate == 0)) {
+ if (psample_group && (prandom_u32_max(s->rate) == 0)) {
if (!skb_at_tc_ingress(skb)) {
md.in_ifindex = skb->skb_iif;
md.out_ifindex = skb->dev->ifindex;
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 5449ed114e40..3ca320f1a031 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -513,8 +513,8 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
goto finish_segs;
}

- skb->data[prandom_u32() % skb_headlen(skb)] ^=
- 1<<(prandom_u32() % 8);
+ skb->data[prandom_u32_max(skb_headlen(skb))] ^=
+ 1<<prandom_u32_max(8);
}

if (unlikely(sch->q.qlen >= sch->limit)) {
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 171f1a35d205..1e354ba44960 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -8319,7 +8319,7 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)

inet_get_local_port_range(net, &low, &high);
remaining = (high - low) + 1;
- rover = prandom_u32() % remaining + low;
+ rover = prandom_u32_max(remaining) + low;

do {
rover++;
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index c3c693b51c94..f075a9fb5ccc 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -677,7 +677,7 @@ static void cache_limit_defers(void)

/* Consider removing either the first or the last */
if (cache_defer_cnt > DFR_MAX) {
- if (prandom_u32() & 1)
+ if (prandom_u32_max(2))
discard = list_entry(cache_defer_list.next,
struct cache_deferred_req, recent);
else
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index e976007f4fd0..c2caee703d2c 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1619,7 +1619,7 @@ static int xs_get_random_port(void)
if (max < min)
return -EADDRINUSE;
range = max - min + 1;
- rand = (unsigned short) prandom_u32() % range;
+ rand = (unsigned short) prandom_u32_max(range);
return rand + min;
}

diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index f1c3b8eb4b3d..e902b01ea3cb 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -3010,7 +3010,7 @@ static int tipc_sk_insert(struct tipc_sock *tsk)
struct net *net = sock_net(sk);
struct tipc_net *tn = net_generic(net, tipc_net_id);
u32 remaining = (TIPC_MAX_PORT - TIPC_MIN_PORT) + 1;
- u32 portid = prandom_u32() % remaining + TIPC_MIN_PORT;
+ u32 portid = prandom_u32_max(remaining) + TIPC_MIN_PORT;

while (remaining--) {
portid++;
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 91c32a3b6924..b213c89cfb8a 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -2072,7 +2072,7 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high)
} else {
u32 spi = 0;
for (h = 0; h < high-low+1; h++) {
- spi = low + prandom_u32()%(high-low+1);
+ spi = low + prandom_u32_max(high - low + 1);
x0 = xfrm_state_lookup(net, mark, &x->id.daddr, htonl(spi), x->id.proto, x->props.family);
if (x0 == NULL) {
newspi = htonl(spi);
--
2.37.3

Jason A. Donenfeld

unread,
Oct 5, 2022, 5:50:01 PMOct 5
to linux-...@vger.kernel.org, Jason A. Donenfeld, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
Rather than truncate a 32-bit value to a 16-bit value or an 8-bit value,
simply use the get_random_{u8,u16}() functions, which are faster than
wasting the additional bytes from a 32-bit value.

Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---
crypto/testmgr.c | 8 ++++----
drivers/media/common/v4l2-tpg/v4l2-tpg-core.c | 2 +-
drivers/media/test-drivers/vivid/vivid-radio-rx.c | 4 ++--
.../net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c | 2 +-
drivers/net/hamradio/baycom_epp.c | 2 +-
drivers/net/hamradio/hdlcdrv.c | 2 +-
drivers/net/hamradio/yam.c | 2 +-
drivers/net/wireguard/selftest/allowedips.c | 4 ++--
drivers/scsi/lpfc/lpfc_hbadisc.c | 6 +++---
lib/test_vmalloc.c | 2 +-
net/dccp/ipv4.c | 4 ++--
net/ipv4/datagram.c | 2 +-
net/ipv4/ip_output.c | 2 +-
net/ipv4/tcp_ipv4.c | 4 ++--
net/mac80211/scan.c | 2 +-
net/netfilter/nf_nat_core.c | 4 ++--
net/sched/sch_cake.c | 6 +++---
net/sched/sch_sfb.c | 2 +-
net/sctp/socket.c | 2 +-
19 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index be45217acde4..981c637fa2ed 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -927,7 +927,7 @@ static void generate_random_bytes(u8 *buf, size_t count)
b = 0xff;
break;
default:
- b = (u8)prandom_u32();
+ b = get_random_u8();
break;
}
memset(buf, b, count);
@@ -935,8 +935,8 @@ static void generate_random_bytes(u8 *buf, size_t count)
break;
case 2:
/* Ascending or descending bytes, plus optional mutations */
- increment = (u8)prandom_u32();
- b = (u8)prandom_u32();
+ increment = get_random_u8();
+ b = get_random_u8();
for (i = 0; i < count; i++, b += increment)
buf[i] = b;
mutate_buffer(buf, count);
@@ -944,7 +944,7 @@ static void generate_random_bytes(u8 *buf, size_t count)
default:
/* Fully random bytes */
for (i = 0; i < count; i++)
- buf[i] = (u8)prandom_u32();
+ buf[i] = get_random_u8();
}
}

diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
index 9b7bcdce6e44..303d02b1d71c 100644
--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
@@ -870,7 +870,7 @@ static void precalculate_color(struct tpg_data *tpg, int k)
g = tpg_colors[col].g;
b = tpg_colors[col].b;
} else if (tpg->pattern == TPG_PAT_NOISE) {
- r = g = b = prandom_u32_max(256);
+ r = g = b = get_random_u8();
} else if (k == TPG_COLOR_RANDOM) {
r = g = b = tpg->qual_offset + prandom_u32_max(196);
} else if (k >= TPG_COLOR_RAMP) {
diff --git a/drivers/media/test-drivers/vivid/vivid-radio-rx.c b/drivers/media/test-drivers/vivid/vivid-radio-rx.c
index 232cab508f48..8bd09589fb15 100644
--- a/drivers/media/test-drivers/vivid/vivid-radio-rx.c
+++ b/drivers/media/test-drivers/vivid/vivid-radio-rx.c
@@ -104,8 +104,8 @@ ssize_t vivid_radio_rx_read(struct file *file, char __user *buf,
break;
case 2:
rds.block |= V4L2_RDS_BLOCK_ERROR;
- rds.lsb = prandom_u32_max(256);
- rds.msb = prandom_u32_max(256);
+ rds.lsb = get_random_u8();
+ rds.msb = get_random_u8();
break;
case 3: /* Skip block altogether */
if (i)
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
index ddfe9208529a..ac452a0111a9 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
@@ -1467,7 +1467,7 @@ static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt)
tp->write_seq = snd_isn;
tp->snd_nxt = snd_isn;
tp->snd_una = snd_isn;
- inet_sk(sk)->inet_id = prandom_u32();
+ inet_sk(sk)->inet_id = get_random_u16();
assign_rxopt(sk, opt);

if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))
diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
index 7df78a721b04..791b4a53d69f 100644
--- a/drivers/net/hamradio/baycom_epp.c
+++ b/drivers/net/hamradio/baycom_epp.c
@@ -438,7 +438,7 @@ static int transmit(struct baycom_state *bc, int cnt, unsigned char stat)
if ((--bc->hdlctx.slotcnt) > 0)
return 0;
bc->hdlctx.slotcnt = bc->ch_params.slottime;
- if (prandom_u32_max(256) > bc->ch_params.ppersist)
+ if (get_random_u8() > bc->ch_params.ppersist)
return 0;
}
}
diff --git a/drivers/net/hamradio/hdlcdrv.c b/drivers/net/hamradio/hdlcdrv.c
index 360d041a62c4..6c6f11d3d0aa 100644
--- a/drivers/net/hamradio/hdlcdrv.c
+++ b/drivers/net/hamradio/hdlcdrv.c
@@ -377,7 +377,7 @@ void hdlcdrv_arbitrate(struct net_device *dev, struct hdlcdrv_state *s)
if ((--s->hdlctx.slotcnt) > 0)
return;
s->hdlctx.slotcnt = s->ch_params.slottime;
- if (prandom_u32_max(256) > s->ch_params.ppersist)
+ if (get_random_u8() > s->ch_params.ppersist)
return;
start_tx(dev, s);
}
diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
index 97a6cc5c7ae8..2ed2f836f09a 100644
--- a/drivers/net/hamradio/yam.c
+++ b/drivers/net/hamradio/yam.c
@@ -626,7 +626,7 @@ static void yam_arbitrate(struct net_device *dev)
yp->slotcnt = yp->slot / 10;

/* is random > persist ? */
- if (prandom_u32_max(256) > yp->pers)
+ if (get_random_u8() > yp->pers)
return;

yam_start_tx(dev, yp);
diff --git a/drivers/net/wireguard/selftest/allowedips.c b/drivers/net/wireguard/selftest/allowedips.c
index 41db10f9be49..dd897c0740a2 100644
--- a/drivers/net/wireguard/selftest/allowedips.c
+++ b/drivers/net/wireguard/selftest/allowedips.c
@@ -310,7 +310,7 @@ static __init bool randomized_test(void)
for (k = 0; k < 4; ++k)
mutated[k] = (mutated[k] & mutate_mask[k]) |
(~mutate_mask[k] &
- prandom_u32_max(256));
+ get_random_u8());
cidr = prandom_u32_max(32) + 1;
peer = peers[prandom_u32_max(NUM_PEERS)];
if (wg_allowedips_insert_v4(&t,
@@ -354,7 +354,7 @@ static __init bool randomized_test(void)
for (k = 0; k < 4; ++k)
mutated[k] = (mutated[k] & mutate_mask[k]) |
(~mutate_mask[k] &
- prandom_u32_max(256));
+ get_random_u8());
cidr = prandom_u32_max(128) + 1;
peer = peers[prandom_u32_max(NUM_PEERS)];
if (wg_allowedips_insert_v6(&t,
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index 2645def612e6..26d1779cb570 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -2150,8 +2150,8 @@ lpfc_check_pending_fcoe_event(struct lpfc_hba *phba, uint8_t unreg_fcf)
* This function makes an running random selection decision on FCF record to
* use through a sequence of @fcf_cnt eligible FCF records with equal
* probability. To perform integer manunipulation of random numbers with
- * size unit32_t, the lower 16 bits of the 32-bit random number returned
- * from prandom_u32() are taken as the random random number generated.
+ * size unit32_t, a 16-bit random number returned from get_random_u16() is
+ * taken as the random random number generated.
*
* Returns true when outcome is for the newly read FCF record should be
* chosen; otherwise, return false when outcome is for keeping the previously
@@ -2163,7 +2163,7 @@ lpfc_sli4_new_fcf_random_select(struct lpfc_hba *phba, uint32_t fcf_cnt)
uint32_t rand_num;

/* Get 16-bit uniform random number */
- rand_num = 0xFFFF & prandom_u32();
+ rand_num = get_random_u16();

/* Decision with probability 1/fcf_cnt */
if ((fcf_cnt * rand_num) < 0xFFFF)
diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index 56ffaa8dd3f6..0131ed2cd1bd 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -80,7 +80,7 @@ static int random_size_align_alloc_test(void)
int i;

for (i = 0; i < test_loop_count; i++) {
- rnd = prandom_u32();
+ rnd = get_random_u8();

/*
* Maximum 1024 pages, if PAGE_SIZE is 4096.
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index da6e3b20cd75..301799e7fa56 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -123,7 +123,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
inet->inet_daddr,
inet->inet_sport,
inet->inet_dport);
- inet->inet_id = prandom_u32();
+ inet->inet_id = get_random_u16();

err = dccp_connect(sk);
rt = NULL;
@@ -422,7 +422,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
newinet->mc_index = inet_iif(skb);
newinet->mc_ttl = ip_hdr(skb)->ttl;
- newinet->inet_id = prandom_u32();
+ newinet->inet_id = get_random_u16();

if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL)
goto put_and_exit;
diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
index ffd57523331f..fefc5d855a66 100644
--- a/net/ipv4/datagram.c
+++ b/net/ipv4/datagram.c
@@ -71,7 +71,7 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
reuseport_has_conns(sk, true);
sk->sk_state = TCP_ESTABLISHED;
sk_set_txhash(sk);
- inet->inet_id = prandom_u32();
+ inet->inet_id = get_random_u16();

sk_dst_set(sk, &rt->dst);
err = 0;
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 04e2034f2f8e..a4fbdbff14b3 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -172,7 +172,7 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
* Avoid using the hashed IP ident generator.
*/
if (sk->sk_protocol == IPPROTO_TCP)
- iph->id = (__force __be16)prandom_u32();
+ iph->id = (__force __be16)get_random_u16();
else
__ip_select_ident(net, iph, 1);
}
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 5b019ba2b9d2..747752980983 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -303,7 +303,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
inet->inet_daddr);
}

- inet->inet_id = prandom_u32();
+ inet->inet_id = get_random_u16();

if (tcp_fastopen_defer_connect(sk, &err))
return err;
@@ -1523,7 +1523,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
inet_csk(newsk)->icsk_ext_hdr_len = 0;
if (inet_opt)
inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
- newinet->inet_id = prandom_u32();
+ newinet->inet_id = get_random_u16();

/* Set ToS of the new socket based upon the value of incoming SYN.
* ECT bits are set later in tcp_init_transfer().
diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
index c4f2aeb31da3..6cab549cc421 100644
--- a/net/mac80211/scan.c
+++ b/net/mac80211/scan.c
@@ -641,7 +641,7 @@ static void ieee80211_send_scan_probe_req(struct ieee80211_sub_if_data *sdata,
if (flags & IEEE80211_PROBE_FLAG_RANDOM_SN) {
struct ieee80211_hdr *hdr = (void *)skb->data;
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
- u16 sn = get_random_u32();
+ u16 sn = get_random_u16();

info->control.flags |= IEEE80211_TX_CTRL_NO_SEQNO;
hdr->seq_ctrl =
diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
index 7981be526f26..57c7686ac485 100644
--- a/net/netfilter/nf_nat_core.c
+++ b/net/netfilter/nf_nat_core.c
@@ -468,7 +468,7 @@ static void nf_nat_l4proto_unique_tuple(struct nf_conntrack_tuple *tuple,
if (range->flags & NF_NAT_RANGE_PROTO_OFFSET)
off = (ntohs(*keyptr) - ntohs(range->base_proto.all));
else
- off = prandom_u32();
+ off = get_random_u16();

attempts = range_size;
if (attempts > max_attempts)
@@ -490,7 +490,7 @@ static void nf_nat_l4proto_unique_tuple(struct nf_conntrack_tuple *tuple,
if (attempts >= range_size || attempts < 16)
return;
attempts /= 2;
- off = prandom_u32();
+ off = get_random_u16();
goto another_round;
}

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index a43a58a73d09..637ef1757931 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -2092,11 +2092,11 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)

WARN_ON(host_load > CAKE_QUEUES);

- /* The shifted prandom_u32() is a way to apply dithering to
- * avoid accumulating roundoff errors
+ /* The get_random_u16() is a way to apply dithering to avoid
+ * accumulating roundoff errors
*/
flow->deficit += (b->flow_quantum * quantum_div[host_load] +
- (prandom_u32() >> 16)) >> 16;
+ get_random_u16()) >> 16;
list_move_tail(&flow->flowchain, &b->old_flows);

goto retry;
diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index 2829455211f8..7eb70acb4d58 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -379,7 +379,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
goto enqueue;
}

- r = prandom_u32() & SFB_MAX_PROB;
+ r = get_random_u16() & SFB_MAX_PROB;

if (unlikely(r < p_min)) {
if (unlikely(p_min > SFB_MAX_PROB / 2)) {
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 1e354ba44960..83628c347744 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -9448,7 +9448,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
newinet->inet_rcv_saddr = inet->inet_rcv_saddr;
newinet->inet_dport = htons(asoc->peer.port);
newinet->pmtudisc = inet->pmtudisc;
- newinet->inet_id = prandom_u32();
+ newinet->inet_id = get_random_u16();

newinet->uc_ttl = inet->uc_ttl;
newinet->mc_loop = 1;
--
2.37.3

Jason A. Donenfeld

unread,
Oct 5, 2022, 5:50:16 PMOct 5
to linux-...@vger.kernel.org, Jason A. Donenfeld, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
The prandom_u32() function has been a deprecated inline wrapper around
get_random_u32() for several releases now, and compiles down to the
exact same code. Replace the deprecated wrapper with a direct call to
the real function.

Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---
Documentation/networking/filter.rst | 2 +-
drivers/infiniband/hw/cxgb4/cm.c | 4 ++--
drivers/infiniband/hw/hfi1/tid_rdma.c | 2 +-
drivers/infiniband/hw/mlx4/mad.c | 2 +-
drivers/infiniband/ulp/ipoib/ipoib_cm.c | 2 +-
drivers/md/raid5-cache.c | 2 +-
drivers/mtd/nand/raw/nandsim.c | 2 +-
drivers/net/bonding/bond_main.c | 2 +-
drivers/net/ethernet/broadcom/cnic.c | 2 +-
.../chelsio/inline_crypto/chtls/chtls_cm.c | 2 +-
drivers/net/ethernet/rocker/rocker_main.c | 6 +++---
.../net/wireless/marvell/mwifiex/cfg80211.c | 4 ++--
.../net/wireless/microchip/wilc1000/cfg80211.c | 2 +-
.../net/wireless/quantenna/qtnfmac/cfg80211.c | 2 +-
drivers/nvme/common/auth.c | 2 +-
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c | 4 ++--
drivers/target/iscsi/cxgbit/cxgbit_cm.c | 2 +-
drivers/thunderbolt/xdomain.c | 2 +-
drivers/video/fbdev/uvesafb.c | 2 +-
fs/exfat/inode.c | 2 +-
fs/ext2/ialloc.c | 2 +-
fs/ext4/ialloc.c | 4 ++--
fs/ext4/ioctl.c | 4 ++--
fs/ext4/mmp.c | 2 +-
fs/f2fs/namei.c | 2 +-
fs/fat/inode.c | 2 +-
fs/nfsd/nfs4state.c | 4 ++--
fs/ubifs/journal.c | 2 +-
fs/xfs/libxfs/xfs_ialloc.c | 2 +-
fs/xfs/xfs_icache.c | 2 +-
fs/xfs/xfs_log.c | 2 +-
include/net/netfilter/nf_queue.h | 2 +-
include/net/red.h | 2 +-
include/net/sock.h | 2 +-
kernel/kcsan/selftest.c | 2 +-
lib/random32.c | 2 +-
lib/reed_solomon/test_rslib.c | 6 +++---
lib/test_fprobe.c | 2 +-
lib/test_kprobes.c | 2 +-
lib/test_rhashtable.c | 6 +++---
mm/shmem.c | 2 +-
net/802/garp.c | 2 +-
net/802/mrp.c | 2 +-
net/core/pktgen.c | 4 ++--
net/ipv4/tcp_cdg.c | 2 +-
net/ipv4/udp.c | 2 +-
net/ipv6/ip6_flowlabel.c | 2 +-
net/ipv6/output_core.c | 2 +-
net/netfilter/ipvs/ip_vs_conn.c | 2 +-
net/netfilter/xt_statistic.c | 2 +-
net/openvswitch/actions.c | 2 +-
net/rds/bind.c | 2 +-
net/sched/sch_cake.c | 2 +-
net/sched/sch_netem.c | 18 +++++++++---------
net/sunrpc/auth_gss/gss_krb5_wrap.c | 4 ++--
net/sunrpc/xprt.c | 2 +-
net/unix/af_unix.c | 2 +-
57 files changed, 79 insertions(+), 79 deletions(-)

diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index 43cdc4d34745..f69da5074860 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -305,7 +305,7 @@ Possible BPF extensions are shown in the following table:
vlan_tci skb_vlan_tag_get(skb)
vlan_avail skb_vlan_tag_present(skb)
vlan_tpid skb->vlan_proto
- rand prandom_u32()
+ rand get_random_u32()
=================================== =================================================

These extensions can also be prefixed with '#'.
diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
index 14392c942f49..499a425a3379 100644
--- a/drivers/infiniband/hw/cxgb4/cm.c
+++ b/drivers/infiniband/hw/cxgb4/cm.c
@@ -734,7 +734,7 @@ static int send_connect(struct c4iw_ep *ep)
&ep->com.remote_addr;
int ret;
enum chip_type adapter_type = ep->com.dev->rdev.lldi.adapter_type;
- u32 isn = (prandom_u32() & ~7UL) - 1;
+ u32 isn = (get_random_u32() & ~7UL) - 1;
struct net_device *netdev;
u64 params;

@@ -2469,7 +2469,7 @@ static int accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
}

if (!is_t4(adapter_type)) {
- u32 isn = (prandom_u32() & ~7UL) - 1;
+ u32 isn = (get_random_u32() & ~7UL) - 1;

skb = get_skb(skb, roundup(sizeof(*rpl5), 16), GFP_KERNEL);
rpl5 = __skb_put_zero(skb, roundup(sizeof(*rpl5), 16));
diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
index 2a7abf7a1f7f..18b05ffb415a 100644
--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
+++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
@@ -850,7 +850,7 @@ void hfi1_kern_init_ctxt_generations(struct hfi1_ctxtdata *rcd)
int i;

for (i = 0; i < RXE_NUM_TID_FLOWS; i++) {
- rcd->flows[i].generation = mask_generation(prandom_u32());
+ rcd->flows[i].generation = mask_generation(get_random_u32());
kern_set_hw_flow(rcd, KERN_GENERATION_RESERVED, i);
}
}
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index d13ecbdd4391..a37cfac5e23f 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -96,7 +96,7 @@ static void __propagate_pkey_ev(struct mlx4_ib_dev *dev, int port_num,
__be64 mlx4_ib_gen_node_guid(void)
{
#define NODE_GUID_HI ((u64) (((u64)IB_OPENIB_OUI) << 40))
- return cpu_to_be64(NODE_GUID_HI | prandom_u32());
+ return cpu_to_be64(NODE_GUID_HI | get_random_u32());
}

__be64 mlx4_ib_get_new_demux_tid(struct mlx4_ib_demux_ctx *ctx)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index fd9d7f2c4d64..a605cf66b83e 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -465,7 +465,7 @@ static int ipoib_cm_req_handler(struct ib_cm_id *cm_id,
goto err_qp;
}

- psn = prandom_u32() & 0xffffff;
+ psn = get_random_u32() & 0xffffff;
ret = ipoib_cm_modify_rx_qp(dev, cm_id, p->qp, psn);
if (ret)
goto err_modify;
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index f4e1cc1ece43..5b0fc783bf01 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -2993,7 +2993,7 @@ static int r5l_load_log(struct r5l_log *log)
}
create:
if (create_super) {
- log->last_cp_seq = prandom_u32();
+ log->last_cp_seq = get_random_u32();
cp = 0;
r5l_log_write_empty_meta_block(log, cp, log->last_cp_seq);
/*
diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c
index 50bcf745e816..4bdaf4aa7007 100644
--- a/drivers/mtd/nand/raw/nandsim.c
+++ b/drivers/mtd/nand/raw/nandsim.c
@@ -1402,7 +1402,7 @@ static int ns_do_read_error(struct nandsim *ns, int num)

static void ns_do_bit_flips(struct nandsim *ns, int num)
{
- if (bitflips && prandom_u32() < (1 << 22)) {
+ if (bitflips && get_random_u32() < (1 << 22)) {
int flips = 1;
if (bitflips > 1)
flips = prandom_u32_max(bitflips) + 1;
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 86d42306aa5e..c8543394a3bb 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4806,7 +4806,7 @@ static u32 bond_rr_gen_slave_id(struct bonding *bond)

switch (packets_per_slave) {
case 0:
- slave_id = prandom_u32();
+ slave_id = get_random_u32();
break;
case 1:
slave_id = this_cpu_inc_return(*bond->rr_tx_counter);
diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
index f597b313acaa..2198e35d9e18 100644
--- a/drivers/net/ethernet/broadcom/cnic.c
+++ b/drivers/net/ethernet/broadcom/cnic.c
@@ -4164,7 +4164,7 @@ static int cnic_cm_init_bnx2_hw(struct cnic_dev *dev)
{
u32 seed;

- seed = prandom_u32();
+ seed = get_random_u32();
cnic_ctx_wr(dev, 45, 0, seed);
return 0;
}
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
index ac452a0111a9..b71ce6c5b512 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
@@ -1063,7 +1063,7 @@ static void chtls_pass_accept_rpl(struct sk_buff *skb,
opt2 |= WND_SCALE_EN_V(WSCALE_OK(tp));
rpl5->opt0 = cpu_to_be64(opt0);
rpl5->opt2 = cpu_to_be32(opt2);
- rpl5->iss = cpu_to_be32((prandom_u32() & ~7UL) - 1);
+ rpl5->iss = cpu_to_be32((get_random_u32() & ~7UL) - 1);
set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->port_id);
t4_set_arp_err_handler(skb, sk, chtls_accept_rpl_arp_failure);
cxgb4_l2t_send(csk->egress_dev, skb, csk->l2t_entry);
diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c
index fc83ec23bd1d..8c3bbafabb07 100644
--- a/drivers/net/ethernet/rocker/rocker_main.c
+++ b/drivers/net/ethernet/rocker/rocker_main.c
@@ -129,7 +129,7 @@ static int rocker_reg_test(const struct rocker *rocker)
u64 test_reg;
u64 rnd;

- rnd = prandom_u32();
+ rnd = get_random_u32();
rnd >>= 1;
rocker_write32(rocker, TEST_REG, rnd);
test_reg = rocker_read32(rocker, TEST_REG);
@@ -139,9 +139,9 @@ static int rocker_reg_test(const struct rocker *rocker)
return -EIO;
}

- rnd = prandom_u32();
+ rnd = get_random_u32();
rnd <<= 31;
- rnd |= prandom_u32();
+ rnd |= get_random_u32();
rocker_write64(rocker, TEST_REG64, rnd);
test_reg = rocker_read64(rocker, TEST_REG64);
if (test_reg != rnd * 2) {
diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
index 134114ac1ac0..4fbb5c876b12 100644
--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
@@ -238,7 +238,7 @@ mwifiex_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
tx_info->pkt_len = pkt_len;

mwifiex_form_mgmt_frame(skb, buf, len);
- *cookie = prandom_u32() | 1;
+ *cookie = get_random_u32() | 1;

if (ieee80211_is_action(mgmt->frame_control))
skb = mwifiex_clone_skb_for_tx_status(priv,
@@ -302,7 +302,7 @@ mwifiex_cfg80211_remain_on_channel(struct wiphy *wiphy,
duration);

if (!ret) {
- *cookie = prandom_u32() | 1;
+ *cookie = get_random_u32() | 1;
priv->roc_cfg.cookie = *cookie;
priv->roc_cfg.chan = *chan;

diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
index 3ac373d29d93..84b9d3454e57 100644
--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
@@ -1159,7 +1159,7 @@ static int mgmt_tx(struct wiphy *wiphy,
const u8 *vendor_ie;
int ret = 0;

- *cookie = prandom_u32();
+ *cookie = get_random_u32();
priv->tx_cookie = *cookie;
mgmt = (const struct ieee80211_mgmt *)buf;

diff --git a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
index 1593e810b3ca..9c5416141d3c 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
@@ -449,7 +449,7 @@ qtnf_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
{
struct qtnf_vif *vif = qtnf_netdev_get_priv(wdev->netdev);
const struct ieee80211_mgmt *mgmt_frame = (void *)params->buf;
- u32 short_cookie = prandom_u32();
+ u32 short_cookie = get_random_u32();
u16 flags = 0;
u16 freq;

diff --git a/drivers/nvme/common/auth.c b/drivers/nvme/common/auth.c
index 04bd28f17dcc..d90e4f0c08b7 100644
--- a/drivers/nvme/common/auth.c
+++ b/drivers/nvme/common/auth.c
@@ -23,7 +23,7 @@ u32 nvme_auth_get_seqnum(void)

mutex_lock(&nvme_dhchap_mutex);
if (!nvme_dhchap_seqnum)
- nvme_dhchap_seqnum = prandom_u32();
+ nvme_dhchap_seqnum = get_random_u32();
else {
nvme_dhchap_seqnum++;
if (!nvme_dhchap_seqnum)
diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
index 53d91bf9c12a..c07d2e3b4bcf 100644
--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
+++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
@@ -254,7 +254,7 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
} else if (is_t5(lldi->adapter_type)) {
struct cpl_t5_act_open_req *req =
(struct cpl_t5_act_open_req *)skb->head;
- u32 isn = (prandom_u32() & ~7UL) - 1;
+ u32 isn = (get_random_u32() & ~7UL) - 1;

INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
@@ -282,7 +282,7 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
} else {
struct cpl_t6_act_open_req *req =
(struct cpl_t6_act_open_req *)skb->head;
- u32 isn = (prandom_u32() & ~7UL) - 1;
+ u32 isn = (get_random_u32() & ~7UL) - 1;

INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
diff --git a/drivers/target/iscsi/cxgbit/cxgbit_cm.c b/drivers/target/iscsi/cxgbit/cxgbit_cm.c
index 3336d2b78bf7..d9204c590d9a 100644
--- a/drivers/target/iscsi/cxgbit/cxgbit_cm.c
+++ b/drivers/target/iscsi/cxgbit/cxgbit_cm.c
@@ -1202,7 +1202,7 @@ cxgbit_pass_accept_rpl(struct cxgbit_sock *csk, struct cpl_pass_accept_req *req)
opt2 |= CONG_CNTRL_V(CONG_ALG_NEWRENO);

opt2 |= T5_ISS_F;
- rpl5->iss = cpu_to_be32((prandom_u32() & ~7UL) - 1);
+ rpl5->iss = cpu_to_be32((get_random_u32() & ~7UL) - 1);

opt2 |= T5_OPT_2_VALID_F;

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index c31c0d94d8b3..76075e29696c 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -2438,7 +2438,7 @@ int tb_xdomain_init(void)
tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);

- xdomain_property_block_gen = prandom_u32();
+ xdomain_property_block_gen = get_random_u32();
return 0;
}

diff --git a/drivers/video/fbdev/uvesafb.c b/drivers/video/fbdev/uvesafb.c
index 4df6772802d7..285b83c20326 100644
--- a/drivers/video/fbdev/uvesafb.c
+++ b/drivers/video/fbdev/uvesafb.c
@@ -167,7 +167,7 @@ static int uvesafb_exec(struct uvesafb_ktask *task)
memcpy(&m->id, &uvesafb_cn_id, sizeof(m->id));
m->seq = seq;
m->len = len;
- m->ack = prandom_u32();
+ m->ack = get_random_u32();

/* uvesafb_task structure */
memcpy(m + 1, &task->t, sizeof(task->t));
diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
index a795437b86d0..5590a1e83126 100644
--- a/fs/exfat/inode.c
+++ b/fs/exfat/inode.c
@@ -552,7 +552,7 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
inode->i_uid = sbi->options.fs_uid;
inode->i_gid = sbi->options.fs_gid;
inode_inc_iversion(inode);
- inode->i_generation = prandom_u32();
+ inode->i_generation = get_random_u32();

if (info->attr & ATTR_SUBDIR) { /* directory */
inode->i_generation &= ~1;
diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
index 998dd2ac8008..e439a872c398 100644
--- a/fs/ext2/ialloc.c
+++ b/fs/ext2/ialloc.c
@@ -277,7 +277,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
int best_ndir = inodes_per_group;
int best_group = -1;

- group = prandom_u32();
+ group = get_random_u32();
parent_group = (unsigned)group % ngroups;
for (i = 0; i < ngroups; i++) {
group = (parent_group + i) % ngroups;
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index f73e5eb43eae..954ec9736a8d 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -465,7 +465,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
ext4fs_dirhash(parent, qstr->name, qstr->len, &hinfo);
grp = hinfo.hash;
} else
- grp = prandom_u32();
+ grp = get_random_u32();
parent_group = (unsigned)grp % ngroups;
for (i = 0; i < ngroups; i++) {
g = (parent_group + i) % ngroups;
@@ -1280,7 +1280,7 @@ struct inode *__ext4_new_inode(struct user_namespace *mnt_userns,
EXT4_GROUP_INFO_IBITMAP_CORRUPT);
goto out;
}
- inode->i_generation = prandom_u32();
+ inode->i_generation = get_random_u32();

/* Precompute checksum seed for inode metadata */
if (ext4_has_metadata_csum(sb)) {
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index 3cf3ec4b1c21..99df5b8ae149 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -453,8 +453,8 @@ static long swap_inode_boot_loader(struct super_block *sb,

inode->i_ctime = inode_bl->i_ctime = current_time(inode);

- inode->i_generation = prandom_u32();
- inode_bl->i_generation = prandom_u32();
+ inode->i_generation = get_random_u32();
+ inode_bl->i_generation = get_random_u32();
ext4_reset_inode_seed(inode);
ext4_reset_inode_seed(inode_bl);

diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
index 9af68a7ecdcf..588cb09c5291 100644
--- a/fs/ext4/mmp.c
+++ b/fs/ext4/mmp.c
@@ -265,7 +265,7 @@ static unsigned int mmp_new_seq(void)
u32 new_seq;

do {
- new_seq = prandom_u32();
+ new_seq = get_random_u32();
} while (new_seq > EXT4_MMP_SEQ_MAX);

return new_seq;
diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
index bf00d5057abb..939536982c3e 100644
--- a/fs/f2fs/namei.c
+++ b/fs/f2fs/namei.c
@@ -50,7 +50,7 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
inode->i_blocks = 0;
inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
F2FS_I(inode)->i_crtime = inode->i_mtime;
- inode->i_generation = prandom_u32();
+ inode->i_generation = get_random_u32();

if (S_ISDIR(inode->i_mode))
F2FS_I(inode)->i_current_depth = 1;
diff --git a/fs/fat/inode.c b/fs/fat/inode.c
index a38238d75c08..1cbcc4608dc7 100644
--- a/fs/fat/inode.c
+++ b/fs/fat/inode.c
@@ -523,7 +523,7 @@ int fat_fill_inode(struct inode *inode, struct msdos_dir_entry *de)
inode->i_uid = sbi->options.fs_uid;
inode->i_gid = sbi->options.fs_gid;
inode_inc_iversion(inode);
- inode->i_generation = prandom_u32();
+ inode->i_generation = get_random_u32();

if ((de->attr & ATTR_DIR) && !IS_FREE(de->name)) {
inode->i_generation &= ~1;
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index c5d199d7e6b4..e10c16cd7881 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4346,8 +4346,8 @@ void nfsd4_init_leases_net(struct nfsd_net *nn)
nn->nfsd4_grace = 90;
nn->somebody_reclaimed = false;
nn->track_reclaim_completes = false;
- nn->clverifier_counter = prandom_u32();
- nn->clientid_base = prandom_u32();
+ nn->clverifier_counter = get_random_u32();
+ nn->clientid_base = get_random_u32();
nn->clientid_counter = nn->clientid_base + 1;
nn->s2s_cp_cl_id = nn->clientid_counter++;

diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
index 75dab0ae3939..4619652046cf 100644
--- a/fs/ubifs/journal.c
+++ b/fs/ubifs/journal.c
@@ -503,7 +503,7 @@ static void mark_inode_clean(struct ubifs_info *c, struct ubifs_inode *ui)
static void set_dent_cookie(struct ubifs_info *c, struct ubifs_dent_node *dent)
{
if (c->double_hash)
- dent->cookie = (__force __le32) prandom_u32();
+ dent->cookie = (__force __le32) get_random_u32();
else
dent->cookie = 0;
}
diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index 7838b31126e2..94db50eb706a 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -805,7 +805,7 @@ xfs_ialloc_ag_alloc(
* number from being easily guessable.
*/
error = xfs_ialloc_inode_init(args.mp, tp, NULL, newlen, pag->pag_agno,
- args.agbno, args.len, prandom_u32());
+ args.agbno, args.len, get_random_u32());

if (error)
return error;
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 2bbe7916a998..eae7427062cf 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -596,7 +596,7 @@ xfs_iget_cache_miss(
*/
if (xfs_has_v3inodes(mp) &&
(flags & XFS_IGET_CREATE) && !xfs_has_ikeep(mp)) {
- VFS_I(ip)->i_generation = prandom_u32();
+ VFS_I(ip)->i_generation = get_random_u32();
} else {
struct xfs_buf *bp;

diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 386b0307aed8..ad8652cbf245 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -3544,7 +3544,7 @@ xlog_ticket_alloc(
tic->t_curr_res = unit_res;
tic->t_cnt = cnt;
tic->t_ocnt = cnt;
- tic->t_tid = prandom_u32();
+ tic->t_tid = get_random_u32();
if (permanent)
tic->t_flags |= XLOG_TIC_PERM_RESERV;

diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
index 980daa6e1e3a..c81021ab07aa 100644
--- a/include/net/netfilter/nf_queue.h
+++ b/include/net/netfilter/nf_queue.h
@@ -43,7 +43,7 @@ void nf_queue_entry_free(struct nf_queue_entry *entry);
static inline void init_hashrandom(u32 *jhash_initval)
{
while (*jhash_initval == 0)
- *jhash_initval = prandom_u32();
+ *jhash_initval = get_random_u32();
}

static inline u32 hash_v4(const struct iphdr *iph, u32 initval)
diff --git a/include/net/red.h b/include/net/red.h
index be11dbd26492..56d0647d7356 100644
--- a/include/net/red.h
+++ b/include/net/red.h
@@ -364,7 +364,7 @@ static inline unsigned long red_calc_qavg(const struct red_parms *p,

static inline u32 red_random(const struct red_parms *p)
{
- return reciprocal_divide(prandom_u32(), p->max_P_reciprocal);
+ return reciprocal_divide(get_random_u32(), p->max_P_reciprocal);
}

static inline int red_mark_probability(const struct red_parms *p,
diff --git a/include/net/sock.h b/include/net/sock.h
index d08cfe190a78..ca2b26686677 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2091,7 +2091,7 @@ static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)

static inline u32 net_tx_rndhash(void)
{
- u32 v = prandom_u32();
+ u32 v = get_random_u32();

return v ?: 1;
}
diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c
index 75712959c84e..58b94deae5c0 100644
--- a/kernel/kcsan/selftest.c
+++ b/kernel/kcsan/selftest.c
@@ -26,7 +26,7 @@
static bool __init test_requires(void)
{
/* random should be initialized for the below tests */
- return prandom_u32() + prandom_u32() != 0;
+ return get_random_u32() + get_random_u32() != 0;
}

/*
diff --git a/lib/random32.c b/lib/random32.c
index d5d9029362cb..d4f19e1a69d4 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -47,7 +47,7 @@
* @state: pointer to state structure holding seeded state.
*
* This is used for pseudo-randomness with no outside seeding.
- * For more random results, use prandom_u32().
+ * For more random results, use get_random_u32().
*/
u32 prandom_u32_state(struct rnd_state *state)
{
diff --git a/lib/reed_solomon/test_rslib.c b/lib/reed_solomon/test_rslib.c
index 4d241bdc88aa..848e7eb5da92 100644
--- a/lib/reed_solomon/test_rslib.c
+++ b/lib/reed_solomon/test_rslib.c
@@ -164,7 +164,7 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,

/* Load c with random data and encode */
for (i = 0; i < dlen; i++)
- c[i] = prandom_u32() & nn;
+ c[i] = get_random_u32() & nn;

memset(c + dlen, 0, nroots * sizeof(*c));
encode_rs16(rs, c, dlen, c + dlen, 0);
@@ -178,7 +178,7 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,
for (i = 0; i < errs; i++) {
do {
/* Error value must be nonzero */
- errval = prandom_u32() & nn;
+ errval = get_random_u32() & nn;
} while (errval == 0);

do {
@@ -206,7 +206,7 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,
/* Erasure with corrupted symbol */
do {
/* Error value must be nonzero */
- errval = prandom_u32() & nn;
+ errval = get_random_u32() & nn;
} while (errval == 0);

errlocs[errloc] = 1;
diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c
index ed70637a2ffa..e0381b3ec410 100644
--- a/lib/test_fprobe.c
+++ b/lib/test_fprobe.c
@@ -145,7 +145,7 @@ static unsigned long get_ftrace_location(void *func)
static int fprobe_test_init(struct kunit *test)
{
do {
- rand1 = prandom_u32();
+ rand1 = get_random_u32();
} while (rand1 <= div_factor);

target = fprobe_selftest_target;
diff --git a/lib/test_kprobes.c b/lib/test_kprobes.c
index a5edc2ebc947..eeb1d728d974 100644
--- a/lib/test_kprobes.c
+++ b/lib/test_kprobes.c
@@ -341,7 +341,7 @@ static int kprobes_test_init(struct kunit *test)
stacktrace_driver = kprobe_stacktrace_driver;

do {
- rand1 = prandom_u32();
+ rand1 = get_random_u32();
} while (rand1 <= div_factor);
return 0;
}
diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
index 5a1dd4736b56..b358a74ed7ed 100644
--- a/lib/test_rhashtable.c
+++ b/lib/test_rhashtable.c
@@ -291,7 +291,7 @@ static int __init test_rhltable(unsigned int entries)
if (WARN_ON(err))
goto out_free;

- k = prandom_u32();
+ k = get_random_u32();
ret = 0;
for (i = 0; i < entries; i++) {
rhl_test_objects[i].value.id = k;
@@ -369,12 +369,12 @@ static int __init test_rhltable(unsigned int entries)
pr_info("test %d random rhlist add/delete operations\n", entries);
for (j = 0; j < entries; j++) {
u32 i = prandom_u32_max(entries);
- u32 prand = prandom_u32();
+ u32 prand = get_random_u32();

cond_resched();

if (prand == 0)
- prand = prandom_u32();
+ prand = get_random_u32();

if (prand & 1) {
prand >>= 1;
diff --git a/mm/shmem.c b/mm/shmem.c
index 42e5888bf84d..6f2cef73808d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2330,7 +2330,7 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir,
inode_init_owner(&init_user_ns, inode, dir, mode);
inode->i_blocks = 0;
inode->i_atime = inode->i_mtime = inode->i_ctime = current_time(inode);
- inode->i_generation = prandom_u32();
+ inode->i_generation = get_random_u32();
info = SHMEM_I(inode);
memset(info, 0, (char *)inode - (char *)info);
spin_lock_init(&info->lock);
diff --git a/net/802/garp.c b/net/802/garp.c
index f6012f8e59f0..c1bb67e25430 100644
--- a/net/802/garp.c
+++ b/net/802/garp.c
@@ -407,7 +407,7 @@ static void garp_join_timer_arm(struct garp_applicant *app)
{
unsigned long delay;

- delay = (u64)msecs_to_jiffies(garp_join_time) * prandom_u32() >> 32;
+ delay = (u64)msecs_to_jiffies(garp_join_time) * get_random_u32() >> 32;
mod_timer(&app->join_timer, jiffies + delay);
}

diff --git a/net/802/mrp.c b/net/802/mrp.c
index 35e04cc5390c..3e9fe9f5d9bf 100644
--- a/net/802/mrp.c
+++ b/net/802/mrp.c
@@ -592,7 +592,7 @@ static void mrp_join_timer_arm(struct mrp_applicant *app)
{
unsigned long delay;

- delay = (u64)msecs_to_jiffies(mrp_join_time) * prandom_u32() >> 32;
+ delay = (u64)msecs_to_jiffies(mrp_join_time) * get_random_u32() >> 32;
mod_timer(&app->join_timer, jiffies + delay);
}

diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 5ca4f953034c..c3763056c554 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2464,7 +2464,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
for (i = 0; i < pkt_dev->nr_labels; i++)
if (pkt_dev->labels[i] & MPLS_STACK_BOTTOM)
pkt_dev->labels[i] = MPLS_STACK_BOTTOM |
- ((__force __be32)prandom_u32() &
+ ((__force __be32)get_random_u32() &
htonl(0x000fffff));
}

@@ -2568,7 +2568,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)

for (i = 0; i < 4; i++) {
pkt_dev->cur_in6_daddr.s6_addr32[i] =
- (((__force __be32)prandom_u32() |
+ (((__force __be32)get_random_u32() |
pkt_dev->min_in6_daddr.s6_addr32[i]) &
pkt_dev->max_in6_daddr.s6_addr32[i]);
}
diff --git a/net/ipv4/tcp_cdg.c b/net/ipv4/tcp_cdg.c
index ddc7ba0554bd..efcd145f06db 100644
--- a/net/ipv4/tcp_cdg.c
+++ b/net/ipv4/tcp_cdg.c
@@ -243,7 +243,7 @@ static bool tcp_cdg_backoff(struct sock *sk, u32 grad)
struct cdg *ca = inet_csk_ca(sk);
struct tcp_sock *tp = tcp_sk(sk);

- if (prandom_u32() <= nexp_u32(grad * backoff_factor))
+ if (get_random_u32() <= nexp_u32(grad * backoff_factor))
return false;

if (use_ineff) {
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 560d9eadeaa5..1a5b2464548e 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -246,7 +246,7 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
inet_get_local_port_range(net, &low, &high);
remaining = (high - low) + 1;

- rand = prandom_u32();
+ rand = get_random_u32();
first = reciprocal_scale(rand, remaining) + low;
/*
* force rand to be an odd multiple of UDP_HTABLE_SIZE
diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
index ceb85c67ce39..18481eb76a0a 100644
--- a/net/ipv6/ip6_flowlabel.c
+++ b/net/ipv6/ip6_flowlabel.c
@@ -220,7 +220,7 @@ static struct ip6_flowlabel *fl_intern(struct net *net,
spin_lock_bh(&ip6_fl_lock);
if (label == 0) {
for (;;) {
- fl->label = htonl(prandom_u32())&IPV6_FLOWLABEL_MASK;
+ fl->label = htonl(get_random_u32())&IPV6_FLOWLABEL_MASK;
if (fl->label) {
lfl = __fl_lookup(net, fl->label);
if (!lfl)
diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
index 2880dc7d9a49..2685c3f15e9d 100644
--- a/net/ipv6/output_core.c
+++ b/net/ipv6/output_core.c
@@ -18,7 +18,7 @@ static u32 __ipv6_select_ident(struct net *net,
u32 id;

do {
- id = prandom_u32();
+ id = get_random_u32();
} while (!id);

return id;
diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
index fb67f1ca2495..8c04bb57dd6f 100644
--- a/net/netfilter/ipvs/ip_vs_conn.c
+++ b/net/netfilter/ipvs/ip_vs_conn.c
@@ -1308,7 +1308,7 @@ void ip_vs_random_dropentry(struct netns_ipvs *ipvs)
* Randomly scan 1/32 of the whole table every second
*/
for (idx = 0; idx < (ip_vs_conn_tab_size>>5); idx++) {
- unsigned int hash = prandom_u32() & ip_vs_conn_tab_mask;
+ unsigned int hash = get_random_u32() & ip_vs_conn_tab_mask;

hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[hash], c_list) {
if (cp->ipvs != ipvs)
diff --git a/net/netfilter/xt_statistic.c b/net/netfilter/xt_statistic.c
index 203e24ae472c..b26c1dcfc27b 100644
--- a/net/netfilter/xt_statistic.c
+++ b/net/netfilter/xt_statistic.c
@@ -34,7 +34,7 @@ statistic_mt(const struct sk_buff *skb, struct xt_action_param *par)

switch (info->mode) {
case XT_STATISTIC_MODE_RANDOM:
- if ((prandom_u32() & 0x7FFFFFFF) < info->u.random.probability)
+ if ((get_random_u32() & 0x7FFFFFFF) < info->u.random.probability)
ret = !ret;
break;
case XT_STATISTIC_MODE_NTH:
diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
index 868db4669a29..ca3ebfdb3023 100644
--- a/net/openvswitch/actions.c
+++ b/net/openvswitch/actions.c
@@ -1033,7 +1033,7 @@ static int sample(struct datapath *dp, struct sk_buff *skb,
actions = nla_next(sample_arg, &rem);

if ((arg->probability != U32_MAX) &&
- (!arg->probability || prandom_u32() > arg->probability)) {
+ (!arg->probability || get_random_u32() > arg->probability)) {
if (last)
consume_skb(skb);
return 0;
diff --git a/net/rds/bind.c b/net/rds/bind.c
index 5b5fb4ca8d3e..052776ddcc34 100644
--- a/net/rds/bind.c
+++ b/net/rds/bind.c
@@ -104,7 +104,7 @@ static int rds_add_bound(struct rds_sock *rs, const struct in6_addr *addr,
return -EINVAL;
last = rover;
} else {
- rover = max_t(u16, prandom_u32(), 2);
+ rover = max_t(u16, get_random_u32(), 2);
last = rover - 1;
}

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 637ef1757931..48e3e05228a1 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -573,7 +573,7 @@ static bool cobalt_should_drop(struct cobalt_vars *vars,

/* Simple BLUE implementation. Lack of ECN is deliberate. */
if (vars->p_drop)
- drop |= (prandom_u32() < vars->p_drop);
+ drop |= (get_random_u32() < vars->p_drop);

/* Overload the drop_next field as an activity timeout */
if (!vars->count)
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 3ca320f1a031..88c1fa2e1d15 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -171,7 +171,7 @@ static inline struct netem_skb_cb *netem_skb_cb(struct sk_buff *skb)
static void init_crandom(struct crndstate *state, unsigned long rho)
{
state->rho = rho;
- state->last = prandom_u32();
+ state->last = get_random_u32();
}

/* get_crandom - correlated random number generator
@@ -184,9 +184,9 @@ static u32 get_crandom(struct crndstate *state)
unsigned long answer;

if (!state || state->rho == 0) /* no correlation */
- return prandom_u32();
+ return get_random_u32();

- value = prandom_u32();
+ value = get_random_u32();
rho = (u64)state->rho + 1;
answer = (value * ((1ull<<32) - rho) + state->last * rho) >> 32;
state->last = answer;
@@ -200,7 +200,7 @@ static u32 get_crandom(struct crndstate *state)
static bool loss_4state(struct netem_sched_data *q)
{
struct clgstate *clg = &q->clg;
- u32 rnd = prandom_u32();
+ u32 rnd = get_random_u32();

/*
* Makes a comparison between rnd and the transition
@@ -268,15 +268,15 @@ static bool loss_gilb_ell(struct netem_sched_data *q)

switch (clg->state) {
case GOOD_STATE:
- if (prandom_u32() < clg->a1)
+ if (get_random_u32() < clg->a1)
clg->state = BAD_STATE;
- if (prandom_u32() < clg->a4)
+ if (get_random_u32() < clg->a4)
return true;
break;
case BAD_STATE:
- if (prandom_u32() < clg->a2)
+ if (get_random_u32() < clg->a2)
clg->state = GOOD_STATE;
- if (prandom_u32() > clg->a3)
+ if (get_random_u32() > clg->a3)
return true;
}

@@ -632,7 +632,7 @@ static void get_slot_next(struct netem_sched_data *q, u64 now)

if (!q->slot_dist)
next_delay = q->slot_config.min_delay +
- (prandom_u32() *
+ (get_random_u32() *
(q->slot_config.max_delay -
q->slot_config.min_delay) >> 32);
else
diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 5f96e75f9eec..48337687848c 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -130,8 +130,8 @@ gss_krb5_make_confounder(char *p, u32 conflen)

/* initialize to random value */
if (i == 0) {
- i = prandom_u32();
- i = (i << 32) | prandom_u32();
+ i = get_random_u32();
+ i = (i << 32) | get_random_u32();
}

switch (conflen) {
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index f8fae7815649..9407007f47ae 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -1868,7 +1868,7 @@ xprt_alloc_xid(struct rpc_xprt *xprt)
static void
xprt_init_xid(struct rpc_xprt *xprt)
{
- xprt->xid = prandom_u32();
+ xprt->xid = get_random_u32();
}

static void
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index bf338b782fc4..35bd8132113f 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1116,7 +1116,7 @@ static int unix_autobind(struct sock *sk)
addr->name->sun_family = AF_UNIX;
refcount_set(&addr->refcnt, 1);

- ordernum = prandom_u32();
+ ordernum = get_random_u32();
lastnum = ordernum & 0xFFFFF;
retry:
ordernum = (ordernum + 1) & 0xFFFFF;
--
2.37.3

Jason A. Donenfeld

unread,
Oct 5, 2022, 5:50:31 PMOct 5
to linux-...@vger.kernel.org, Jason A. Donenfeld, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
The prandom_bytes() function has been a deprecated inline wrapper around
get_random_bytes() for several releases now, and compiles down to the
exact same code. Replace the deprecated wrapper with a direct call to
the real function.

Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---
arch/powerpc/crypto/crc-vpmsum_test.c | 2 +-
block/blk-crypto-fallback.c | 2 +-
crypto/async_tx/raid6test.c | 2 +-
drivers/dma/dmatest.c | 2 +-
drivers/mtd/nand/raw/nandsim.c | 2 +-
drivers/mtd/tests/mtd_nandecctest.c | 2 +-
drivers/mtd/tests/speedtest.c | 2 +-
drivers/mtd/tests/stresstest.c | 2 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 2 +-
drivers/net/ethernet/rocker/rocker_main.c | 2 +-
drivers/net/wireguard/selftest/allowedips.c | 12 ++++++------
fs/ubifs/debug.c | 2 +-
kernel/kcsan/selftest.c | 2 +-
lib/random32.c | 2 +-
lib/test_objagg.c | 2 +-
lib/uuid.c | 2 +-
net/ipv4/route.c | 2 +-
net/mac80211/rc80211_minstrel_ht.c | 2 +-
net/sched/sch_pie.c | 2 +-
19 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/crypto/crc-vpmsum_test.c b/arch/powerpc/crypto/crc-vpmsum_test.c
index c1c1ef9457fb..273c527868db 100644
--- a/arch/powerpc/crypto/crc-vpmsum_test.c
+++ b/arch/powerpc/crypto/crc-vpmsum_test.c
@@ -82,7 +82,7 @@ static int __init crc_test_init(void)

if (len <= offset)
continue;
- prandom_bytes(data, len);
+ get_random_bytes(data, len);
len -= offset;

crypto_shash_update(crct10dif_shash, data+offset, len);
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 621abd1b0e4d..ad9844c5b40c 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -539,7 +539,7 @@ static int blk_crypto_fallback_init(void)
if (blk_crypto_fallback_inited)
return 0;

- prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
+ get_random_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);

err = bioset_init(&crypto_bio_split, 64, 0, 0);
if (err)
diff --git a/crypto/async_tx/raid6test.c b/crypto/async_tx/raid6test.c
index c9d218e53bcb..f74505f2baf0 100644
--- a/crypto/async_tx/raid6test.c
+++ b/crypto/async_tx/raid6test.c
@@ -37,7 +37,7 @@ static void makedata(int disks)
int i;

for (i = 0; i < disks; i++) {
- prandom_bytes(page_address(data[i]), PAGE_SIZE);
+ get_random_bytes(page_address(data[i]), PAGE_SIZE);
dataptrs[i] = data[i];
dataoffs[i] = 0;
}
diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index 9fe2ae794316..ffe621695e47 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -312,7 +312,7 @@ static unsigned long dmatest_random(void)
{
unsigned long buf;

- prandom_bytes(&buf, sizeof(buf));
+ get_random_bytes(&buf, sizeof(buf));
return buf;
}

diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c
index 4bdaf4aa7007..c941a5a41ea6 100644
--- a/drivers/mtd/nand/raw/nandsim.c
+++ b/drivers/mtd/nand/raw/nandsim.c
@@ -1393,7 +1393,7 @@ static int ns_do_read_error(struct nandsim *ns, int num)
unsigned int page_no = ns->regs.row;

if (ns_read_error(page_no)) {
- prandom_bytes(ns->buf.byte, num);
+ get_random_bytes(ns->buf.byte, num);
NS_WARN("simulating read error in page %u\n", page_no);
return 1;
}
diff --git a/drivers/mtd/tests/mtd_nandecctest.c b/drivers/mtd/tests/mtd_nandecctest.c
index 1c7201b0f372..440988562cfd 100644
--- a/drivers/mtd/tests/mtd_nandecctest.c
+++ b/drivers/mtd/tests/mtd_nandecctest.c
@@ -266,7 +266,7 @@ static int nand_ecc_test_run(const size_t size)
goto error;
}

- prandom_bytes(correct_data, size);
+ get_random_bytes(correct_data, size);
ecc_sw_hamming_calculate(correct_data, size, correct_ecc, sm_order);
for (i = 0; i < ARRAY_SIZE(nand_ecc_test); i++) {
nand_ecc_test[i].prepare(error_data, error_ecc,
diff --git a/drivers/mtd/tests/speedtest.c b/drivers/mtd/tests/speedtest.c
index c9ec7086bfa1..075bce32caa5 100644
--- a/drivers/mtd/tests/speedtest.c
+++ b/drivers/mtd/tests/speedtest.c
@@ -223,7 +223,7 @@ static int __init mtd_speedtest_init(void)
if (!iobuf)
goto out;

- prandom_bytes(iobuf, mtd->erasesize);
+ get_random_bytes(iobuf, mtd->erasesize);

bbt = kzalloc(ebcnt, GFP_KERNEL);
if (!bbt)
diff --git a/drivers/mtd/tests/stresstest.c b/drivers/mtd/tests/stresstest.c
index d2faaca7f19d..75b6ddc5dc4d 100644
--- a/drivers/mtd/tests/stresstest.c
+++ b/drivers/mtd/tests/stresstest.c
@@ -183,7 +183,7 @@ static int __init mtd_stresstest_init(void)
goto out;
for (i = 0; i < ebcnt; i++)
offsets[i] = mtd->erasesize;
- prandom_bytes(writebuf, bufsize);
+ get_random_bytes(writebuf, bufsize);

bbt = kzalloc(ebcnt, GFP_KERNEL);
if (!bbt)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 96da0ba3d507..354953df46a1 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3874,7 +3874,7 @@ static void bnxt_init_vnics(struct bnxt *bp)

if (bp->vnic_info[i].rss_hash_key) {
if (i == 0)
- prandom_bytes(vnic->rss_hash_key,
+ get_random_bytes(vnic->rss_hash_key,
HW_HASH_KEY_SIZE);
else
memcpy(vnic->rss_hash_key,
diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c
index 8c3bbafabb07..cd4488efe0a4 100644
--- a/drivers/net/ethernet/rocker/rocker_main.c
+++ b/drivers/net/ethernet/rocker/rocker_main.c
@@ -224,7 +224,7 @@ static int rocker_dma_test_offset(const struct rocker *rocker,
if (err)
goto unmap;

- prandom_bytes(buf, ROCKER_TEST_DMA_BUF_SIZE);
+ get_random_bytes(buf, ROCKER_TEST_DMA_BUF_SIZE);
for (i = 0; i < ROCKER_TEST_DMA_BUF_SIZE; i++)
expect[i] = ~buf[i];
err = rocker_dma_test_one(rocker, wait, ROCKER_TEST_DMA_CTRL_INVERT,
diff --git a/drivers/net/wireguard/selftest/allowedips.c b/drivers/net/wireguard/selftest/allowedips.c
index dd897c0740a2..19eac00b2381 100644
--- a/drivers/net/wireguard/selftest/allowedips.c
+++ b/drivers/net/wireguard/selftest/allowedips.c
@@ -284,7 +284,7 @@ static __init bool randomized_test(void)
mutex_lock(&mutex);

for (i = 0; i < NUM_RAND_ROUTES; ++i) {
- prandom_bytes(ip, 4);
+ get_random_bytes(ip, 4);
cidr = prandom_u32_max(32) + 1;
peer = peers[prandom_u32_max(NUM_PEERS)];
if (wg_allowedips_insert_v4(&t, (struct in_addr *)ip, cidr,
@@ -299,7 +299,7 @@ static __init bool randomized_test(void)
}
for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
memcpy(mutated, ip, 4);
- prandom_bytes(mutate_mask, 4);
+ get_random_bytes(mutate_mask, 4);
mutate_amount = prandom_u32_max(32);
for (k = 0; k < mutate_amount / 8; ++k)
mutate_mask[k] = 0xff;
@@ -328,7 +328,7 @@ static __init bool randomized_test(void)
}

for (i = 0; i < NUM_RAND_ROUTES; ++i) {
- prandom_bytes(ip, 16);
+ get_random_bytes(ip, 16);
cidr = prandom_u32_max(128) + 1;
peer = peers[prandom_u32_max(NUM_PEERS)];
if (wg_allowedips_insert_v6(&t, (struct in6_addr *)ip, cidr,
@@ -343,7 +343,7 @@ static __init bool randomized_test(void)
}
for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
memcpy(mutated, ip, 16);
- prandom_bytes(mutate_mask, 16);
+ get_random_bytes(mutate_mask, 16);
mutate_amount = prandom_u32_max(128);
for (k = 0; k < mutate_amount / 8; ++k)
mutate_mask[k] = 0xff;
@@ -381,13 +381,13 @@ static __init bool randomized_test(void)

for (j = 0;; ++j) {
for (i = 0; i < NUM_QUERIES; ++i) {
- prandom_bytes(ip, 4);
+ get_random_bytes(ip, 4);
if (lookup(t.root4, 32, ip) != horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) {
horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip);
pr_err("allowedips random v4 self-test: FAIL\n");
goto free;
}
- prandom_bytes(ip, 16);
+ get_random_bytes(ip, 16);
if (lookup(t.root6, 128, ip) != horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) {
pr_err("allowedips random v6 self-test: FAIL\n");
goto free;
diff --git a/fs/ubifs/debug.c b/fs/ubifs/debug.c
index f4d3b568aa64..3f128b9fdfbb 100644
--- a/fs/ubifs/debug.c
+++ b/fs/ubifs/debug.c
@@ -2581,7 +2581,7 @@ static int corrupt_data(const struct ubifs_info *c, const void *buf,
if (ffs)
memset(p + from, 0xFF, to - from);
else
- prandom_bytes(p + from, to - from);
+ get_random_bytes(p + from, to - from);

return to;
}
diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c
index 58b94deae5c0..00cdf8fa5693 100644
--- a/kernel/kcsan/selftest.c
+++ b/kernel/kcsan/selftest.c
@@ -46,7 +46,7 @@ static bool __init test_encode_decode(void)
unsigned long addr;
size_t verif_size;

- prandom_bytes(&addr, sizeof(addr));
+ get_random_bytes(&addr, sizeof(addr));
if (addr < PAGE_SIZE)
addr = PAGE_SIZE;

diff --git a/lib/random32.c b/lib/random32.c
index d4f19e1a69d4..32060b852668 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -69,7 +69,7 @@ EXPORT_SYMBOL(prandom_u32_state);
* @bytes: the requested number of bytes
*
* This is used for pseudo-randomness with no outside seeding.
- * For more random results, use prandom_bytes().
+ * For more random results, use get_random_bytes().
*/
void prandom_bytes_state(struct rnd_state *state, void *buf, size_t bytes)
{
diff --git a/lib/test_objagg.c b/lib/test_objagg.c
index da137939a410..c0c957c50635 100644
--- a/lib/test_objagg.c
+++ b/lib/test_objagg.c
@@ -157,7 +157,7 @@ static int test_nodelta_obj_get(struct world *world, struct objagg *objagg,
int err;

if (should_create_root)
- prandom_bytes(world->next_root_buf,
+ get_random_bytes(world->next_root_buf,
sizeof(world->next_root_buf));

objagg_obj = world_obj_get(world, objagg, key_id);
diff --git a/lib/uuid.c b/lib/uuid.c
index 562d53977cab..e309b4c5be3d 100644
--- a/lib/uuid.c
+++ b/lib/uuid.c
@@ -52,7 +52,7 @@ EXPORT_SYMBOL(generate_random_guid);

static void __uuid_gen_common(__u8 b[16])
{
- prandom_bytes(b, 16);
+ get_random_bytes(b, 16);
/* reversion 0b10 */
b[8] = (b[8] & 0x3F) | 0x80;
}
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 795cbe1de912..c3fd6c62897d 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -3719,7 +3719,7 @@ int __init ip_rt_init(void)

ip_idents = idents_hash;

- prandom_bytes(ip_idents, (ip_idents_mask + 1) * sizeof(*ip_idents));
+ get_random_bytes(ip_idents, (ip_idents_mask + 1) * sizeof(*ip_idents));

ip_tstamps = idents_hash + (ip_idents_mask + 1) * sizeof(*ip_idents);

diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
index 5f27e6746762..39fb4e2d141a 100644
--- a/net/mac80211/rc80211_minstrel_ht.c
+++ b/net/mac80211/rc80211_minstrel_ht.c
@@ -2033,7 +2033,7 @@ static void __init init_sample_table(void)

memset(sample_table, 0xff, sizeof(sample_table));
for (col = 0; col < SAMPLE_COLUMNS; col++) {
- prandom_bytes(rnd, sizeof(rnd));
+ get_random_bytes(rnd, sizeof(rnd));
for (i = 0; i < MCS_GROUP_RATES; i++) {
new_idx = (i + rnd[i]) % MCS_GROUP_RATES;
while (sample_table[col][new_idx] != 0xff)
diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c
index 5a457ff61acd..66b2b23e8cd1 100644
--- a/net/sched/sch_pie.c
+++ b/net/sched/sch_pie.c
@@ -72,7 +72,7 @@ bool pie_drop_early(struct Qdisc *sch, struct pie_params *params,
if (vars->accu_prob >= (MAX_PROB / 2) * 17)
return true;

- prandom_bytes(&rnd, 8);
+ get_random_bytes(&rnd, 8);
if ((rnd >> BITS_PER_BYTE) < local_prob) {
vars->accu_prob = 0;
return true;
--
2.37.3

Jason A. Donenfeld

unread,
Oct 5, 2022, 5:50:46 PMOct 5
to linux-...@vger.kernel.org, Jason A. Donenfeld, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
With no callers left of prandom_u32() and prandom_bytes(), remove these
deprecated wrappers.

Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>
---
include/linux/prandom.h | 12 ------------
1 file changed, 12 deletions(-)

diff --git a/include/linux/prandom.h b/include/linux/prandom.h
index 78db003bc290..e0a0759dd09c 100644
--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -12,18 +12,6 @@
#include <linux/percpu.h>
#include <linux/random.h>

-/* Deprecated: use get_random_u32 instead. */
-static inline u32 prandom_u32(void)
-{
- return get_random_u32();
-}
-
-/* Deprecated: use get_random_bytes instead. */
-static inline void prandom_bytes(void *buf, size_t nbytes)
-{
- return get_random_bytes(buf, nbytes);
-}
-
struct rnd_state {
__u32 s1, s2, s3, s4;
};
--
2.37.3

Kees Cook

unread,
Oct 6, 2022, 12:16:53 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 11:48:40PM +0200, Jason A. Donenfeld wrote:
> Rather than incurring a division or requesting too many random bytes for
> the given range, use the prandom_u32_max() function, which only takes
> the minimum required bytes from the RNG and avoids divisions.

Yes please!

Since this is a treewide patch, it's helpful for (me at least) doing
reviews to detail the mechanism of the transformation.

e.g. I imagine this could be done with something like Coccinelle and

@no_modulo@
expression E;
@@

- (prandom_u32() % (E))
+ prandom_u32_max(E)

> diff --git a/drivers/mtd/ubi/debug.h b/drivers/mtd/ubi/debug.h
> index 118248a5d7d4..4236c799a47c 100644
> --- a/drivers/mtd/ubi/debug.h
> +++ b/drivers/mtd/ubi/debug.h
> @@ -73,7 +73,7 @@ static inline int ubi_dbg_is_bgt_disabled(const struct ubi_device *ubi)
> static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
> {
> if (ubi->dbg.emulate_bitflips)
> - return !(prandom_u32() % 200);
> + return !(prandom_u32_max(200));
> return 0;
> }
>

Because some looks automated (why the parens?)

> @@ -393,14 +387,11 @@ static struct test_driver {
>
> static void shuffle_array(int *arr, int n)
> {
> - unsigned int rnd;
> int i, j;
>
> for (i = n - 1; i > 0; i--) {
> - rnd = prandom_u32();
> -
> /* Cut the range. */
> - j = rnd % i;
> + j = prandom_u32_max(i);
>
> /* Swap indexes. */
> swap(arr[i], arr[j]);

And some by hand. :)

Reviewed-by: Kees Cook <kees...@chromium.org>

--
Kees Cook

KP Singh

unread,
Oct 6, 2022, 12:22:46 AMOct 6
to Kees Cook, Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
Thanks!

Reviewed-by: KP Singh <kps...@kernel.org>


>
> --
> Kees Cook

Kees Cook

unread,
Oct 6, 2022, 12:38:06 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 11:48:41PM +0200, Jason A. Donenfeld wrote:
> Rather than truncate a 32-bit value to a 16-bit value or an 8-bit value,
> simply use the get_random_{u8,u16}() functions, which are faster than
> wasting the additional bytes from a 32-bit value.
>
> Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>

Same question about "mechanism of transformation".

> diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
> index ddfe9208529a..ac452a0111a9 100644
> --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
> +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
> @@ -1467,7 +1467,7 @@ static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt)
> tp->write_seq = snd_isn;
> tp->snd_nxt = snd_isn;
> tp->snd_una = snd_isn;
> - inet_sk(sk)->inet_id = prandom_u32();
> + inet_sk(sk)->inet_id = get_random_u16();
> assign_rxopt(sk, opt);
>
> if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))

This one I had to go look at -- inet_id is u16, so yeah. :)

> diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
> index 56ffaa8dd3f6..0131ed2cd1bd 100644
> --- a/lib/test_vmalloc.c
> +++ b/lib/test_vmalloc.c
> @@ -80,7 +80,7 @@ static int random_size_align_alloc_test(void)
> int i;
>
> for (i = 0; i < test_loop_count; i++) {
> - rnd = prandom_u32();
> + rnd = get_random_u8();
>
> /*
> * Maximum 1024 pages, if PAGE_SIZE is 4096.

This wasn't obvious either, but it looks like it's because it never
consumes more than u8?

> diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
> index 7981be526f26..57c7686ac485 100644
> --- a/net/netfilter/nf_nat_core.c
> +++ b/net/netfilter/nf_nat_core.c
> @@ -468,7 +468,7 @@ static void nf_nat_l4proto_unique_tuple(struct nf_conntrack_tuple *tuple,
> if (range->flags & NF_NAT_RANGE_PROTO_OFFSET)
> off = (ntohs(*keyptr) - ntohs(range->base_proto.all));
> else
> - off = prandom_u32();
> + off = get_random_u16();
>
> attempts = range_size;

Yup, u16 off;

> diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
> index 2829455211f8..7eb70acb4d58 100644
> --- a/net/sched/sch_sfb.c
> +++ b/net/sched/sch_sfb.c
> @@ -379,7 +379,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
> goto enqueue;
> }
>
> - r = prandom_u32() & SFB_MAX_PROB;
> + r = get_random_u16() & SFB_MAX_PROB;
>
> if (unlikely(r < p_min)) {
> if (unlikely(p_min > SFB_MAX_PROB / 2)) {

include/uapi/linux/pkt_sched.h:#define SFB_MAX_PROB 0xFFFF

Kees Cook

unread,
Oct 6, 2022, 12:39:08 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 11:48:44PM +0200, Jason A. Donenfeld wrote:
> With no callers left of prandom_u32() and prandom_bytes(), remove these
> deprecated wrappers.
>
> Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>

Kees Cook

unread,
Oct 6, 2022, 12:45:27 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 11:48:43PM +0200, Jason A. Donenfeld wrote:
> The prandom_bytes() function has been a deprecated inline wrapper around
> get_random_bytes() for several releases now, and compiles down to the
> exact same code. Replace the deprecated wrapper with a direct call to
> the real function.

Global search/replace matches. :)

Kees Cook

unread,
Oct 6, 2022, 12:48:13 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 11:48:43PM +0200, Jason A. Donenfeld wrote:
> The prandom_bytes() function has been a deprecated inline wrapper around
> get_random_bytes() for several releases now, and compiles down to the
> exact same code. Replace the deprecated wrapper with a direct call to
> the real function.
>
> Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>

Kees Cook

unread,
Oct 6, 2022, 12:55:46 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
It'd be nice to capture some (all?) of the above somewhere. Perhaps just
a massive comment in the header?

> I've CC'd get_maintainers.pl, which is a pretty big list. Probably some
> portion of those are going to bounce, too, and everytime you reply to
> this thread, you'll have to deal with a bunch of bounces coming
> immediately after. And a recipient list this big will probably dock my
> email domain's spam reputation, at least temporarily. Sigh. I think
> that's just how it goes with treewide cleanups though. Again, let me
> know if I'm doing it wrong.

I usually stick to just mailing lists and subsystem maintainers.

If any of the subsystems ask you to break this up (I hope not), I've got
this[1], which does a reasonable job of splitting a commit up into
separate commits for each matching subsystem.

Showing that a treewide change can be reproduced mechanically helps with
keeping it together as one bit treewide patch, too, I've found. :)

Thank you for the cleanup! The "u8 rnd = get_random_u32()" in the tree
has bothered me for a loooong time.

-Kees

--
Kees Cook

Kees Cook

unread,
Oct 6, 2022, 1:40:41 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 09:55:43PM -0700, Kees Cook wrote:
> If any of the subsystems ask you to break this up (I hope not), I've got
> this[1], which does a reasonable job of splitting a commit up into
> separate commits for each matching subsystem.

[1] https://github.com/kees/kernel-tools/blob/trunk/split-on-maintainer

--
Kees Cook

Jan Kara

unread,
Oct 6, 2022, 4:43:34 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed 05-10-22 23:48:42, Jason A. Donenfeld wrote:
> The prandom_u32() function has been a deprecated inline wrapper around
> get_random_u32() for several releases now, and compiles down to the
> exact same code. Replace the deprecated wrapper with a direct call to
> the real function.
>
> Signed-off-by: Jason A. Donenfeld <Ja...@zx2c4.com>

...

> diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
> index 998dd2ac8008..e439a872c398 100644
> --- a/fs/ext2/ialloc.c
> +++ b/fs/ext2/ialloc.c
> @@ -277,7 +277,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
> int best_ndir = inodes_per_group;
> int best_group = -1;
>
> - group = prandom_u32();
> + group = get_random_u32();
> parent_group = (unsigned)group % ngroups;
> for (i = 0; i < ngroups; i++) {
> group = (parent_group + i) % ngroups;

The code here is effectively doing the

parent_group = prandom_u32_max(ngroups);

> diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
> index f73e5eb43eae..954ec9736a8d 100644
> --- a/fs/ext4/ialloc.c
> +++ b/fs/ext4/ialloc.c
> @@ -465,7 +465,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
> ext4fs_dirhash(parent, qstr->name, qstr->len, &hinfo);
> grp = hinfo.hash;
> } else
> - grp = prandom_u32();
> + grp = get_random_u32();

Similarly here we can use prandom_u32_max(ngroups) like:

if (qstr) {
...
parent_group = hinfo.hash % ngroups;
} else
parent_group = prandom_u32_max(ngroups);

> diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
> index 9af68a7ecdcf..588cb09c5291 100644
> --- a/fs/ext4/mmp.c
> +++ b/fs/ext4/mmp.c
> @@ -265,7 +265,7 @@ static unsigned int mmp_new_seq(void)
> u32 new_seq;
>
> do {
> - new_seq = prandom_u32();
> + new_seq = get_random_u32();
> } while (new_seq > EXT4_MMP_SEQ_MAX);

OK, here we again effectively implement prandom_u32_max(EXT4_MMP_SEQ_MAX + 1).
Just presumably we didn't want to use modulo here because EXT4_MMP_SEQ_MAX
is rather big and so the resulting 'new_seq' would be seriously
non-uniform.

Honza
--
Jan Kara <ja...@suse.com>
SUSE Labs, CR

Jason A. Donenfeld

unread,
Oct 6, 2022, 8:28:57 AMOct 6
to Kees Cook, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 09:38:02PM -0700, Kees Cook wrote:
> > diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
> > index 56ffaa8dd3f6..0131ed2cd1bd 100644
> > --- a/lib/test_vmalloc.c
> > +++ b/lib/test_vmalloc.c
> > @@ -80,7 +80,7 @@ static int random_size_align_alloc_test(void)
> > int i;
> >
> > for (i = 0; i < test_loop_count; i++) {
> > - rnd = prandom_u32();
> > + rnd = get_random_u8();
> >
> > /*
> > * Maximum 1024 pages, if PAGE_SIZE is 4096.
>
> This wasn't obvious either, but it looks like it's because it never
> consumes more than u8?

Right. The only uses of that are %23 and %10 later on down.

Jason

Jason A. Donenfeld

unread,
Oct 6, 2022, 8:33:45 AMOct 6
to Jan Kara, Andrew Lunn, Darrick J . Wong, Ulf Hansson, dri-...@lists.freedesktop.org, Andrii Nakryiko, Hans Verkuil, linux...@vger.kernel.org, Md . Haris Iqbal, Miquel Raynal, Christoph Hellwig, Andy Gospodarek, Sergey Matyukevich, Rohit Maheshwari, Michael Ellerman, ceph-...@vger.kernel.org, Christophe Leroy, Jozsef Kadlecsik, Nilesh Javali, Jean-Paul Roubelat, Dick Kennedy, Jay Vosburgh, Potnuri Bharat Teja, Vinay Kumar Yadav, linu...@vger.kernel.org, Nicholas Piggin, Igor Mitsyanko, Andy Lutomirski, linux...@vger.kernel.org, Thomas Gleixner, Trond Myklebust, linux...@vger.kernel.org, Neil Horman, Hante Meuleman, Greg Kroah-Hartman, linu...@vger.kernel.org, Michael Chan, linux-...@vger.kernel.org, Varun Prakash, Chuck Lever, netfilt...@vger.kernel.org, Masami Hiramatsu, Jiri Olsa, Jan Kara, linux-...@vger.kernel.org, Lars Ellenberg, linux...@vger.kernel.org, Claudiu Beznea, Sharvari Harisangam, linux...@vger.kernel.org, linu...@vger.kernel.org, linu...@vger.kernel.org, Dave Hansen, Song Liu, Eric Dumazet, target...@vger.kernel.org, John Stultz, Stanislav Fomichev, Gregory Greenman, drbd...@lists.linbit.com, d...@openvswitch.org, Leon Romanovsky, Helge Deller, Hugh Dickins, James Smart, Anil S Keshavamurthy, Pravin B Shelar, Julian Anastasov, core...@netfilter.org, Veaceslav Falico, Yonghong Song, Namjae Jeon, linux-...@vger.kernel.org, Santosh Shilimkar, Ganapathi Bhat, linux-...@lists.infradead.org, Simon Horman, Jaegeuk Kim, Mika Westerberg, Andrew Morton, OGAWA Hirofumi, Hao Luo, Theodore Ts'o, Stephen Boyd, Dennis Dalessandro, Florian Westphal, Andreas Färber, Jon Maloy, Vlad Yasevich, Anna Schumaker, Yehezkel Bernat, Haoyue Xu, Heiner Kallweit, linux-w...@vger.kernel.org, Marcelo Ricardo Leitner, Rasmus Villemoes, linux...@lists.infradead.org, Michal Januszewski, linu...@lists.infradead.org, kasa...@googlegroups.com, Cong Wang, Thomas Sailer, Ajay Singh, Xiubo Li, Sagi Grimberg, Daniel Borkmann, Jonathan Corbet, linux...@vger.kernel.org, lvs-...@vger.kernel.org, linux-ar...@lists.infradead.org, Naveen N . Rao, Ilya Dryomov, Paolo Abeni, Pablo Neira Ayuso, Marco Elver, Kees Cook, Yury Norov, James E . J . Bottomley, Jamal Hadi Salim, KP Singh, Borislav Petkov, Keith Busch, Dan Williams, Mauro Carvalho Chehab, Franky Lin, Arend van Spriel, linux...@vger.kernel.org, Wenpeng Liang, Martin K . Petersen, Xinming Hu, linux...@st-md-mailman.stormreply.com, Jeff Layton, linu...@vger.kernel.org, net...@vger.kernel.org, Ying Xue, Manish Rangankar, David S . Miller, Toke Høiland-Jørgensen, Vignesh Raghavendra, Peter Zijlstra, H . Peter Anvin, Alexandre Torgue, Amitkumar Karwar, linu...@kvack.org, Andreas Dilger, Ayush Sawal, Andreas Noever, Jiri Pirko, linux-f2...@lists.sourceforge.net, Jack Wang, Steffen Klassert, rds-...@oss.oracle.com, Herbert Xu, linux...@vger.kernel.org, dc...@vger.kernel.org, Richard Weinberger, Russell King, Jason Gunthorpe, SHA-cyfma...@infineon.com, Ingo Molnar, Jakub Kicinski, John Fastabend, Maxime Coquelin, Manivannan Sadhasivam, Michael Jamet, Kalle Valo, Akinobu Mita, linux...@vger.kernel.org, dmae...@vger.kernel.org, Hannes Reinecke, Andy Shevchenko, Dmitry Vyukov, Jens Axboe, ca...@lists.bufferbloat.net, brcm80211-d...@broadcom.com, Yishai Hadas, Hideaki YOSHIFUJI, linuxp...@lists.ozlabs.org, David Ahern, Philipp Reisner, Stephen Hemminger, Christoph Böhmwalder, Vinod Koul, tipc-di...@lists.sourceforge.net, Thomas Graf, Johannes Berg, Sungjong Seo, Martin KaFai Lau
On Thu, Oct 06, 2022 at 10:43:31AM +0200, Jan Kara wrote:
> The code here is effectively doing the
>
> parent_group = prandom_u32_max(ngroups);
>
> Similarly here we can use prandom_u32_max(ngroups) like:
>
> if (qstr) {
> ...
> parent_group = hinfo.hash % ngroups;
> } else
> parent_group = prandom_u32_max(ngroups);

Nice catch. I'll move these to patch #1.


> > diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
> > index 9af68a7ecdcf..588cb09c5291 100644
> > --- a/fs/ext4/mmp.c
> > +++ b/fs/ext4/mmp.c
> > @@ -265,7 +265,7 @@ static unsigned int mmp_new_seq(void)
> > u32 new_seq;
> >
> > do {
> > - new_seq = prandom_u32();
> > + new_seq = get_random_u32();
> > } while (new_seq > EXT4_MMP_SEQ_MAX);
>
> OK, here we again effectively implement prandom_u32_max(EXT4_MMP_SEQ_MAX + 1).
> Just presumably we didn't want to use modulo here because EXT4_MMP_SEQ_MAX
> is rather big and so the resulting 'new_seq' would be seriously
> non-uniform.

I'm not handling this during this patch set, but if in the course of
review we find enough places that want actually uniformly bounded
integers, I'll implement efficient rejection sampling to clean up these
cases, with something faster and general, and add a new function for it.
So far this is the first case to come up, but we'll probably eventually
find others. So I'll make note of this.

Jason

Jason A. Donenfeld

unread,
Oct 6, 2022, 8:45:57 AMOct 6
to Kees Cook, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
Hi Kees,

On Wed, Oct 05, 2022 at 09:16:50PM -0700, Kees Cook wrote:
> On Wed, Oct 05, 2022 at 11:48:40PM +0200, Jason A. Donenfeld wrote:
> > Rather than incurring a division or requesting too many random bytes for
> > the given range, use the prandom_u32_max() function, which only takes
> > the minimum required bytes from the RNG and avoids divisions.
>
> Yes please!
>
> Since this is a treewide patch, it's helpful for (me at least) doing
> reviews to detail the mechanism of the transformation.

This is hand done. There were also various wrong seds done. And then I'd
edit the .diff manually, and then reapply it, as an iterative process.
No internet on the airplane, and oddly no spatch already on my laptop (I
think I had some Gentoo ocaml issues at some point and removed it?).

> e.g. I imagine this could be done with something like Coccinelle and

Feel free to check the work here by using Coccinelle if you're into
that.

> > static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
> > {
> > if (ubi->dbg.emulate_bitflips)
> > - return !(prandom_u32() % 200);
> > + return !(prandom_u32_max(200));
> > return 0;
> > }
> >
>
> Because some looks automated (why the parens?)

I saw this before going out and thought I'd fixed it but I guess I sent
the wrong one.

Jason

Jason Gunthorpe

unread,
Oct 6, 2022, 8:47:53 AMOct 6
to Jason A. Donenfeld, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Kees Cook, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 11:48:42PM +0200, Jason A. Donenfeld wrote:

> index 14392c942f49..499a425a3379 100644
> --- a/drivers/infiniband/hw/cxgb4/cm.c
> +++ b/drivers/infiniband/hw/cxgb4/cm.c
> @@ -734,7 +734,7 @@ static int send_connect(struct c4iw_ep *ep)
> &ep->com.remote_addr;
> int ret;
> enum chip_type adapter_type = ep->com.dev->rdev.lldi.adapter_type;
> - u32 isn = (prandom_u32() & ~7UL) - 1;
> + u32 isn = (get_random_u32() & ~7UL) - 1;

Maybe this wants to be written as

(prandom_max(U32_MAX >> 7) << 7) | 7

?

> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> index fd9d7f2c4d64..a605cf66b83e 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> @@ -465,7 +465,7 @@ static int ipoib_cm_req_handler(struct ib_cm_id *cm_id,
> goto err_qp;
> }
>
> - psn = prandom_u32() & 0xffffff;
> + psn = get_random_u32() & 0xffffff;

prandom_max(0xffffff + 1)

?

Jason

Jason A. Donenfeld

unread,
Oct 6, 2022, 8:53:46 AMOct 6
to Kees Cook, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jason Gunthorpe, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Wed, Oct 05, 2022 at 09:55:43PM -0700, Kees Cook wrote:
> It'd be nice to capture some (all?) of the above somewhere. Perhaps just
> a massive comment in the header?

I'll include (something like) this in some "how to use" documentation
I'm working on separately.

> > I've CC'd get_maintainers.pl, which is a pretty big list. Probably some
> > portion of those are going to bounce, too, and everytime you reply to
> > this thread, you'll have to deal with a bunch of bounces coming
> > immediately after. And a recipient list this big will probably dock my
> > email domain's spam reputation, at least temporarily. Sigh. I think
> > that's just how it goes with treewide cleanups though. Again, let me
> > know if I'm doing it wrong.
>
> I usually stick to just mailing lists and subsystem maintainers.

Lord have mercy I really wish I had done that. I supremely butchered the
sending of this, and then tried to save it by resubmitting directly to
vger with the same message ID but truncated CC, which mostly worked, but
the whole thing is a mess. I'll trim this to subsystem maintainers and
resubmit a v2 right away, rather than having people wade through the
mess.

To any one who's reading this: no more replies to v1! It clogs the
tubes.

> If any of the subsystems ask you to break this up (I hope not), I've got

Oh god I surely hope not. Sounds like a massive waste of time and
paperwork.

Jason

Jason Gunthorpe

unread,
Oct 6, 2022, 8:55:23 AMOct 6
to Jason A. Donenfeld, Kees Cook, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Andy Shevchenko, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
On Thu, Oct 06, 2022 at 06:45:25AM -0600, Jason A. Donenfeld wrote:
> Hi Kees,
>
> On Wed, Oct 05, 2022 at 09:16:50PM -0700, Kees Cook wrote:
> > On Wed, Oct 05, 2022 at 11:48:40PM +0200, Jason A. Donenfeld wrote:
> > > Rather than incurring a division or requesting too many random bytes for
> > > the given range, use the prandom_u32_max() function, which only takes
> > > the minimum required bytes from the RNG and avoids divisions.
> >
> > Yes please!
> >
> > Since this is a treewide patch, it's helpful for (me at least) doing
> > reviews to detail the mechanism of the transformation.
>
> This is hand done. There were also various wrong seds done. And then I'd
> edit the .diff manually, and then reapply it, as an iterative process.
> No internet on the airplane, and oddly no spatch already on my laptop (I
> think I had some Gentoo ocaml issues at some point and removed it?).
>
> > e.g. I imagine this could be done with something like Coccinelle and
>
> Feel free to check the work here by using Coccinelle if you're into
> that.

Generally these series are a lot easier to review if it is structured
as a patches doing all the unusual stuff that had to be by hand
followed by an unmodified Coccinelle/sed/etc handling the simple
stuff.

Especially stuff that is reworking the logic beyond simple
substitution should be one patch per subsystem not rolled into a giant
one patch conversion.

This makes the whole workflow better because the hand-done stuff can
have a chance to flow through subsystem trees.

Thanks,
Jason

Andy Shevchenko

unread,
Oct 6, 2022, 9:01:48 AMOct 6
to Jason A. Donenfeld, Jan Kara, Andrew Lunn, Darrick J . Wong, Ulf Hansson, dri-...@lists.freedesktop.org, Andrii Nakryiko, Hans Verkuil, linux...@vger.kernel.org, Md . Haris Iqbal, Miquel Raynal, Christoph Hellwig, Andy Gospodarek, Sergey Matyukevich, Rohit Maheshwari, Michael Ellerman, ceph-...@vger.kernel.org, Christophe Leroy, Jozsef Kadlecsik, Nilesh Javali, Jean-Paul Roubelat, Dick Kennedy, Jay Vosburgh, Potnuri Bharat Teja, Vinay Kumar Yadav, linu...@vger.kernel.org, Nicholas Piggin, Igor Mitsyanko, Andy Lutomirski, linux...@vger.kernel.org, Thomas Gleixner, Trond Myklebust, linux...@vger.kernel.org, Neil Horman, Hante Meuleman, Greg Kroah-Hartman, linu...@vger.kernel.org, Michael Chan, linux-...@vger.kernel.org, Varun Prakash, Chuck Lever, netfilt...@vger.kernel.org, Masami Hiramatsu, Jiri Olsa, Jan Kara, linux-...@vger.kernel.org, Lars Ellenberg, linux...@vger.kernel.org, Claudiu Beznea, Sharvari Harisangam, linux...@vger.kernel.org, linu...@vger.kernel.org, linu...@vger.kernel.org, Dave Hansen, Song Liu, Eric Dumazet, target...@vger.kernel.org, John Stultz, Stanislav Fomichev, Gregory Greenman, drbd...@lists.linbit.com, d...@openvswitch.org, Leon Romanovsky, Helge Deller, Hugh Dickins, James Smart, Anil S Keshavamurthy, Pravin B Shelar, Julian Anastasov, core...@netfilter.org, Veaceslav Falico, Yonghong Song, Namjae Jeon, linux-...@vger.kernel.org, Santosh Shilimkar, Ganapathi Bhat, linux-...@lists.infradead.org, Simon Horman, Jaegeuk Kim, Mika Westerberg, Andrew Morton, OGAWA Hirofumi, Hao Luo, Theodore Ts'o, Stephen Boyd, Dennis Dalessandro, Florian Westphal, Andreas Färber, Jon Maloy, Vlad Yasevich, Anna Schumaker, Yehezkel Bernat, Haoyue Xu, Heiner Kallweit, linux-w...@vger.kernel.org, Marcelo Ricardo Leitner, Rasmus Villemoes, linux...@lists.infradead.org, Michal Januszewski, linu...@lists.infradead.org, kasa...@googlegroups.com, Cong Wang, Thomas Sailer, Ajay Singh, Xiubo Li, Sagi Grimberg, Daniel Borkmann, Jonathan Corbet, linux...@vger.kernel.org, lvs-...@vger.kernel.org, linux-ar...@lists.infradead.org, Naveen N . Rao, Ilya Dryomov, Paolo Abeni, Pablo Neira Ayuso, Marco Elver, Kees Cook, Yury Norov, James E . J . Bottomley, Jamal Hadi Salim, KP Singh, Borislav Petkov, Keith Busch, Dan Williams, Mauro Carvalho Chehab, Franky Lin, Arend van Spriel, linux...@vger.kernel.org, Wenpeng Liang, Martin K . Petersen, Xinming Hu, linux...@st-md-mailman.stormreply.com, Jeff Layton, linu...@vger.kernel.org, net...@vger.kernel.org, Ying Xue, Manish Rangankar, David S . Miller, Toke Høiland-Jørgensen, Vignesh Raghavendra, Peter Zijlstra, H . Peter Anvin, Alexandre Torgue, Amitkumar Karwar, linu...@kvack.org, Andreas Dilger, Ayush Sawal, Andreas Noever, Jiri Pirko, linux-f2...@lists.sourceforge.net, Jack Wang, Steffen Klassert, rds-...@oss.oracle.com, Herbert Xu, linux...@vger.kernel.org, dc...@vger.kernel.org, Richard Weinberger, Russell King, Jason Gunthorpe, SHA-cyfma...@infineon.com, Ingo Molnar, Jakub Kicinski, John Fastabend, Maxime Coquelin, Manivannan Sadhasivam, Michael Jamet, Kalle Valo, Akinobu Mita, linux...@vger.kernel.org, dmae...@vger.kernel.org, Hannes Reinecke, Dmitry Vyukov, Jens Axboe, ca...@lists.bufferbloat.net, brcm80211-d...@broadcom.com, Yishai Hadas, Hideaki YOSHIFUJI, linuxp...@lists.ozlabs.org, David Ahern, Philipp Reisner, Stephen Hemminger, Christoph Böhmwalder, Vinod Koul, tipc-di...@lists.sourceforge.net, Thomas Graf, Johannes Berg, Sungjong Seo, Martin KaFai Lau
On Thu, Oct 06, 2022 at 06:33:15AM -0600, Jason A. Donenfeld wrote:
> On Thu, Oct 06, 2022 at 10:43:31AM +0200, Jan Kara wrote:

...

> > The code here is effectively doing the
> >
> > parent_group = prandom_u32_max(ngroups);
> >
> > Similarly here we can use prandom_u32_max(ngroups) like:
> >
> > if (qstr) {
> > ...
> > parent_group = hinfo.hash % ngroups;
> > } else
> > parent_group = prandom_u32_max(ngroups);
>
> Nice catch. I'll move these to patch #1.

I believe coccinelle is able to handle this kind of code as well, so Kees'
proposal to use it seems more plausible since it's less error prone and more
flexible / powerful.

--
With Best Regards,
Andy Shevchenko


Andy Shevchenko

unread,
Oct 6, 2022, 9:05:58 AMOct 6
to Jason Gunthorpe, Jason A. Donenfeld, Kees Cook, linux-...@vger.kernel.org, Ajay Singh, Akinobu Mita, Alexandre Torgue, Amitkumar Karwar, Andreas Dilger, Andreas Färber, Andreas Noever, Andrew Lunn, Andrew Morton, Andrii Nakryiko, Andy Gospodarek, Andy Lutomirski, Anil S Keshavamurthy, Anna Schumaker, Arend van Spriel, Ayush Sawal, Borislav Petkov, Chao Yu, Christoph Böhmwalder, Christoph Hellwig, Christophe Leroy, Chuck Lever, Claudiu Beznea, Cong Wang, Dan Williams, Daniel Borkmann, Darrick J . Wong, Dave Hansen, David Ahern, David S . Miller, Dennis Dalessandro, Dick Kennedy, Dmitry Vyukov, Eric Dumazet, Florian Westphal, Franky Lin, Ganapathi Bhat, Greg Kroah-Hartman, Gregory Greenman, H . Peter Anvin, Hannes Reinecke, Hans Verkuil, Hante Meuleman, Hao Luo, Haoyue Xu, Heiner Kallweit, Helge Deller, Herbert Xu, Hideaki YOSHIFUJI, Hugh Dickins, Igor Mitsyanko, Ilya Dryomov, Ingo Molnar, Jack Wang, Jaegeuk Kim, Jaehoon Chung, Jakub Kicinski, Jamal Hadi Salim, James E . J . Bottomley, James Smart, Jan Kara, Jay Vosburgh, Jean-Paul Roubelat, Jeff Layton, Jens Axboe, Jiri Olsa, Jiri Pirko, Johannes Berg, John Fastabend, John Stultz, Jon Maloy, Jonathan Corbet, Jozsef Kadlecsik, Julian Anastasov, KP Singh, Kalle Valo, Keith Busch, Lars Ellenberg, Leon Romanovsky, Manish Rangankar, Manivannan Sadhasivam, Marcelo Ricardo Leitner, Marco Elver, Martin K . Petersen, Martin KaFai Lau, Masami Hiramatsu, Mauro Carvalho Chehab, Maxime Coquelin, Md . Haris Iqbal, Michael Chan, Michael Ellerman, Michael Jamet, Michal Januszewski, Mika Westerberg, Miquel Raynal, Namjae Jeon, Naveen N . Rao, Neil Horman, Nicholas Piggin, Nilesh Javali, OGAWA Hirofumi, Pablo Neira Ayuso, Paolo Abeni, Peter Zijlstra, Philipp Reisner, Potnuri Bharat Teja, Pravin B Shelar, Rasmus Villemoes, Richard Weinberger, Rohit Maheshwari, Russell King, Sagi Grimberg, Santosh Shilimkar, Sergey Matyukevich, Sharvari Harisangam, Simon Horman, Song Liu, Stanislav Fomichev, Steffen Klassert, Stephen Boyd, Stephen Hemminger, Sungjong Seo, Theodore Ts'o, Thomas Gleixner, Thomas Graf, Thomas Sailer, Toke Høiland-Jørgensen, Trond Myklebust, Ulf Hansson, Varun Prakash, Veaceslav Falico, Vignesh Raghavendra, Vinay Kumar Yadav, Vinod Koul, Vlad Yasevich, Wenpeng Liang, Xinming Hu, Xiubo Li, Yehezkel Bernat, Ying Xue, Yishai Hadas, Yonghong Song, Yury Norov, brcm80211-d...@broadcom.com, ca...@lists.bufferbloat.net, ceph-...@vger.kernel.org, core...@netfilter.org, dc...@vger.kernel.org, d...@openvswitch.org, dmae...@vger.kernel.org, drbd...@lists.linbit.com, dri-...@lists.freedesktop.org, kasa...@googlegroups.com, linux-...@lists.infradead.org, linux-ar...@lists.infradead.org, linux...@vger.kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, linux...@vger.kernel.org, linux-f2...@lists.sourceforge.net, linux...@vger.kernel.org, linux-...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linu...@kvack.org, linu...@vger.kernel.org, linu...@lists.infradead.org, linu...@vger.kernel.org, linux...@lists.infradead.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@vger.kernel.org, linux...@st-md-mailman.stormreply.com, linu...@vger.kernel.org, linux-w...@vger.kernel.org, linu...@vger.kernel.org, linuxp...@lists.ozlabs.org, lvs-...@vger.kernel.org, net...@vger.kernel.org, netfilt...@vger.kernel.org, rds-...@oss.oracle.com, SHA-cyfma...@infineon.com, target...@vger.kernel.org, tipc-di...@lists.sourceforge.net
+1 to all arguments for the splitting.

I looked a bit into the code I have the interest to, but I won't spam people
with not-so-important questions / comments / tags, etc.