From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.19 commit in: /
Date: Sat, 26 Sep 2020 22:00:46
Message-Id: 1601157627.de1ca00a7d2ca5589f537b73addb0c569d7821e5.mpagano@gentoo
commit: de1ca00a7d2ca5589f537b73addb0c569d7821e5
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 26 22:00:27 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 26 22:00:27 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=de1ca00a

Linux patch 4.19.148

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

0000_README | 4 +
1147_linux-4.19.148.patch | 1529 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1533 insertions(+)

diff --git a/0000_README b/0000_README
index 4996714..9707ae7 100644
--- a/0000_README
+++ b/0000_README
@@ -627,6 +627,10 @@ Patch: 1146_linux-4.19.147.patch
From: https://www.kernel.org
Desc: Linux 4.19.147

+Patch: 1147_linux-4.19.148.patch
+From: https://www.kernel.org
+Desc: Linux 4.19.148
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1147_linux-4.19.148.patch b/1147_linux-4.19.148.patch
new file mode 100644
index 0000000..30160d9
--- /dev/null
+++ b/1147_linux-4.19.148.patch
@@ -0,0 +1,1529 @@
+diff --git a/Documentation/kbuild/llvm.rst b/Documentation/kbuild/llvm.rst
+new file mode 100644
+index 0000000000000..c776b6eee969f
+--- /dev/null
++++ b/Documentation/kbuild/llvm.rst
+@@ -0,0 +1,87 @@
++==============================
++Building Linux with Clang/LLVM
++==============================
++
++This document covers how to build the Linux kernel with Clang and LLVM
++utilities.
++
++About
++-----
++
++The Linux kernel has always traditionally been compiled with GNU toolchains
++such as GCC and binutils. Ongoing work has allowed for `Clang
++<https://clang.llvm.org/>`_ and `LLVM <https://llvm.org/>`_ utilities to be
++used as viable substitutes. Distributions such as `Android
++<https://www.android.com/>`_, `ChromeOS
++<https://www.chromium.org/chromium-os>`_, and `OpenMandriva
++<https://www.openmandriva.org/>`_ use Clang built kernels. `LLVM is a
++collection of toolchain components implemented in terms of C++ objects
++<https://www.aosabook.org/en/llvm.html>`_. Clang is a front-end to LLVM that
++supports C and the GNU C extensions required by the kernel, and is pronounced
++"klang," not "see-lang."
++
++Clang
++-----
++
++The compiler used can be swapped out via `CC=` command line argument to `make`.
++`CC=` should be set when selecting a config and during a build.
++
++ make CC=clang defconfig
++
++ make CC=clang
++
++Cross Compiling
++---------------
++
++A single Clang compiler binary will typically contain all supported backends,
++which can help simplify cross compiling.
++
++ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make CC=clang
++
++`CROSS_COMPILE` is not used to prefix the Clang compiler binary, instead
++`CROSS_COMPILE` is used to set a command line flag: `--target <triple>`. For
++example:
++
++ clang --target aarch64-linux-gnu foo.c
++
++LLVM Utilities
++--------------
++
++LLVM has substitutes for GNU binutils utilities. Kbuild supports `LLVM=1`
++to enable them.
++
++ make LLVM=1
++
++They can be enabled individually. The full list of the parameters:
++
++ make CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm STRIP=llvm-strip \\
++ OBJCOPY=llvm-objcopy OBJDUMP=llvm-objdump OBJSIZE=llvm-size \\
++ READELF=llvm-readelf HOSTCC=clang HOSTCXX=clang++ HOSTAR=llvm-ar \\
++ HOSTLD=ld.lld
++
++Currently, the integrated assembler is disabled by default. You can pass
++`LLVM_IAS=1` to enable it.
++
++Getting Help
++------------
++
++- `Website <https://clangbuiltlinux.github.io/>`_
++- `Mailing List <https://groups.google.com/forum/#!forum/clang-built-linux>`_: <clang-built-linux@××××××××××××.com>
++- `Issue Tracker <https://github.com/ClangBuiltLinux/linux/issues>`_
++- IRC: #clangbuiltlinux on chat.freenode.net
++- `Telegram <https://t.me/ClangBuiltLinux>`_: @ClangBuiltLinux
++- `Wiki <https://github.com/ClangBuiltLinux/linux/wiki>`_
++- `Beginner Bugs <https://github.com/ClangBuiltLinux/linux/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22>`_
++
++Getting LLVM
++-------------
++
++- http://releases.llvm.org/download.html
++- https://github.com/llvm/llvm-project
++- https://llvm.org/docs/GettingStarted.html
++- https://llvm.org/docs/CMake.html
++- https://apt.llvm.org/
++- https://www.archlinux.org/packages/extra/x86_64/llvm/
++- https://github.com/ClangBuiltLinux/tc-build
++- https://github.com/ClangBuiltLinux/linux/wiki/Building-Clang-from-source
++- https://android.googlesource.com/platform/prebuilts/clang/host/linux-x86/
+diff --git a/MAINTAINERS b/MAINTAINERS
+index b9f9da0b886f5..1061db6fbc326 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -3613,6 +3613,15 @@ M: Miguel Ojeda <miguel.ojeda.sandonis@×××××.com>
+ S: Maintained
+ F: .clang-format
+
++CLANG/LLVM BUILD SUPPORT
++L: clang-built-linux@××××××××××××.com
++W: https://clangbuiltlinux.github.io/
++B: https://github.com/ClangBuiltLinux/linux/issues
++C: irc://chat.freenode.net/clangbuiltlinux
++S: Supported
++K: \b(?i:clang|llvm)\b
++F: Documentation/kbuild/llvm.rst
++
+ CLEANCACHE API
+ M: Konrad Rzeszutek Wilk <konrad.wilk@××××××.com>
+ L: linux-kernel@×××××××××××.org
+diff --git a/Makefile b/Makefile
+index ee648a902ce31..3ffd5b03e6ddf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 19
+-SUBLEVEL = 147
++SUBLEVEL = 148
+ EXTRAVERSION =
+ NAME = "People's Front"
+
+@@ -358,8 +358,13 @@ HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+ HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+ HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
+
+-HOSTCC = gcc
+-HOSTCXX = g++
++ifneq ($(LLVM),)
++HOSTCC = clang
++HOSTCXX = clang++
++else
++HOSTCC = gcc
++HOSTCXX = g++
++endif
+ KBUILD_HOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 \
+ -fomit-frame-pointer -std=gnu89 $(HOST_LFS_CFLAGS) \
+ $(HOSTCFLAGS)
+@@ -368,15 +373,28 @@ KBUILD_HOSTLDFLAGS := $(HOST_LFS_LDFLAGS) $(HOSTLDFLAGS)
+ KBUILD_HOSTLDLIBS := $(HOST_LFS_LIBS) $(HOSTLDLIBS)
+
+ # Make variables (CC, etc...)
+-AS = $(CROSS_COMPILE)as
+-LD = $(CROSS_COMPILE)ld
+-CC = $(CROSS_COMPILE)gcc
+ CPP = $(CC) -E
++ifneq ($(LLVM),)
++CC = clang
++LD = ld.lld
++AR = llvm-ar
++NM = llvm-nm
++OBJCOPY = llvm-objcopy
++OBJDUMP = llvm-objdump
++READELF = llvm-readelf
++OBJSIZE = llvm-size
++STRIP = llvm-strip
++else
++CC = $(CROSS_COMPILE)gcc
++LD = $(CROSS_COMPILE)ld
+ AR = $(CROSS_COMPILE)ar
+ NM = $(CROSS_COMPILE)nm
+-STRIP = $(CROSS_COMPILE)strip
+ OBJCOPY = $(CROSS_COMPILE)objcopy
+ OBJDUMP = $(CROSS_COMPILE)objdump
++READELF = $(CROSS_COMPILE)readelf
++OBJSIZE = $(CROSS_COMPILE)size
++STRIP = $(CROSS_COMPILE)strip
++endif
+ LEX = flex
+ YACC = bison
+ AWK = awk
+@@ -432,8 +450,8 @@ KBUILD_LDFLAGS :=
+ GCC_PLUGINS_CFLAGS :=
+ CLANG_FLAGS :=
+
+-export ARCH SRCARCH CONFIG_SHELL HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE AS LD CC
+-export CPP AR NM STRIP OBJCOPY OBJDUMP KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS
++export ARCH SRCARCH CONFIG_SHELL HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC
++export CPP AR NM STRIP OBJCOPY OBJDUMP OBJSIZE READELF KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS
+ export MAKE LEX YACC AWK GENKSYMS INSTALLKERNEL PERL PYTHON PYTHON2 PYTHON3 UTS_MACHINE
+ export HOSTCXX KBUILD_HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
+
+@@ -491,7 +509,9 @@ endif
+ ifneq ($(GCC_TOOLCHAIN),)
+ CLANG_FLAGS += --gcc-toolchain=$(GCC_TOOLCHAIN)
+ endif
++ifneq ($(LLVM_IAS),1)
+ CLANG_FLAGS += -no-integrated-as
++endif
+ CLANG_FLAGS += -Werror=unknown-warning-option
+ KBUILD_CFLAGS += $(CLANG_FLAGS)
+ KBUILD_AFLAGS += $(CLANG_FLAGS)
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index b337a0cd58ba4..5642f025b397c 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -102,7 +102,7 @@ vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
+ quiet_cmd_check_data_rel = DATAREL $@
+ define cmd_check_data_rel
+ for obj in $(filter %.o,$^); do \
+- ${CROSS_COMPILE}readelf -S $$obj | grep -qF .rel.local && { \
++ $(READELF) -S $$obj | grep -qF .rel.local && { \
+ echo "error: $$obj has data relocations!" >&2; \
+ exit 1; \
+ } || true; \
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index 7e27c9aff9b72..430988f797225 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -452,13 +452,19 @@ int rtl8366_vlan_del(struct dsa_switch *ds, int port,
+ return ret;
+
+ if (vid == vlanmc.vid) {
+- /* clear VLAN member configurations */
+- vlanmc.vid = 0;
+- vlanmc.priority = 0;
+- vlanmc.member = 0;
+- vlanmc.untag = 0;
+- vlanmc.fid = 0;
+-
++ /* Remove this port from the VLAN */
++ vlanmc.member &= ~BIT(port);
++ vlanmc.untag &= ~BIT(port);
++ /*
++ * If no ports are members of this VLAN
++ * anymore then clear the whole member
++ * config so it can be reused.
++ */
++ if (!vlanmc.member && vlanmc.untag) {
++ vlanmc.vid = 0;
++ vlanmc.priority = 0;
++ vlanmc.fid = 0;
++ }
+ ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+ if (ret) {
+ dev_err(smi->dev,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index a267380b267d7..c3f04fb319556 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -6837,18 +6837,16 @@ static ssize_t bnxt_show_temp(struct device *dev,
+ struct hwrm_temp_monitor_query_output *resp;
+ struct bnxt *bp = dev_get_drvdata(dev);
+ u32 len = 0;
++ int rc;
+
+ resp = bp->hwrm_cmd_resp_addr;
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
+ mutex_lock(&bp->hwrm_cmd_lock);
+- if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
++ rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
++ if (!rc)
+ len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
+ mutex_unlock(&bp->hwrm_cmd_lock);
+-
+- if (len)
+- return len;
+-
+- return sprintf(buf, "unknown\n");
++ return rc ?: len;
+ }
+ static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
+
+@@ -6868,7 +6866,16 @@ static void bnxt_hwmon_close(struct bnxt *bp)
+
+ static void bnxt_hwmon_open(struct bnxt *bp)
+ {
++ struct hwrm_temp_monitor_query_input req = {0};
+ struct pci_dev *pdev = bp->pdev;
++ int rc;
++
++ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
++ rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
++ if (rc == -EACCES || rc == -EOPNOTSUPP) {
++ bnxt_hwmon_close(bp);
++ return;
++ }
+
+ bp->hwmon_dev = hwmon_device_register_with_groups(&pdev->dev,
+ DRV_MODULE_NAME, bp,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index a1cb99110092d..1ea81c23039f5 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -1369,9 +1369,12 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ if (!BNXT_SINGLE_PF(bp))
+ return -EOPNOTSUPP;
+
++ mutex_lock(&bp->link_lock);
+ if (epause->autoneg) {
+- if (!(link_info->autoneg & BNXT_AUTONEG_SPEED))
+- return -EINVAL;
++ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
++ rc = -EINVAL;
++ goto pause_exit;
++ }
+
+ link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+ if (bp->hwrm_spec_code >= 0x10201)
+@@ -1392,11 +1395,11 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ if (epause->tx_pause)
+ link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
+
+- if (netif_running(dev)) {
+- mutex_lock(&bp->link_lock);
++ if (netif_running(dev))
+ rc = bnxt_hwrm_set_pause(bp);
+- mutex_unlock(&bp->link_lock);
+- }
++
++pause_exit:
++ mutex_unlock(&bp->link_lock);
+ return rc;
+ }
+
+@@ -2113,8 +2116,7 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
+ struct bnxt *bp = netdev_priv(dev);
+ struct ethtool_eee *eee = &bp->eee;
+ struct bnxt_link_info *link_info = &bp->link_info;
+- u32 advertising =
+- _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
++ u32 advertising;
+ int rc = 0;
+
+ if (!BNXT_SINGLE_PF(bp))
+@@ -2123,19 +2125,23 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
+ if (!(bp->flags & BNXT_FLAG_EEE_CAP))
+ return -EOPNOTSUPP;
+
++ mutex_lock(&bp->link_lock);
++ advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+ if (!edata->eee_enabled)
+ goto eee_ok;
+
+ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+ netdev_warn(dev, "EEE requires autoneg\n");
+- return -EINVAL;
++ rc = -EINVAL;
++ goto eee_exit;
+ }
+ if (edata->tx_lpi_enabled) {
+ if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi ||
+ edata->tx_lpi_timer < bp->lpi_tmr_lo)) {
+ netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n",
+ bp->lpi_tmr_lo, bp->lpi_tmr_hi);
+- return -EINVAL;
++ rc = -EINVAL;
++ goto eee_exit;
+ } else if (!bp->lpi_tmr_hi) {
+ edata->tx_lpi_timer = eee->tx_lpi_timer;
+ }
+@@ -2145,7 +2151,8 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
+ } else if (edata->advertised & ~advertising) {
+ netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n",
+ edata->advertised, advertising);
+- return -EINVAL;
++ rc = -EINVAL;
++ goto eee_exit;
+ }
+
+ eee->advertised = edata->advertised;
+@@ -2157,6 +2164,8 @@ eee_ok:
+ if (netif_running(dev))
+ rc = bnxt_hwrm_set_link_setting(bp, false, true);
+
++eee_exit:
++ mutex_unlock(&bp->link_lock);
+ return rc;
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 97d97de9accc5..bb3ee55cb72cb 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -1591,13 +1591,16 @@ out:
+ static int configure_filter_tcb(struct adapter *adap, unsigned int tid,
+ struct filter_entry *f)
+ {
+- if (f->fs.hitcnts)
++ if (f->fs.hitcnts) {
+ set_tcb_field(adap, f, tid, TCB_TIMESTAMP_W,
+- TCB_TIMESTAMP_V(TCB_TIMESTAMP_M) |
++ TCB_TIMESTAMP_V(TCB_TIMESTAMP_M),
++ TCB_TIMESTAMP_V(0ULL),
++ 1);
++ set_tcb_field(adap, f, tid, TCB_RTT_TS_RECENT_AGE_W,
+ TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M),
+- TCB_TIMESTAMP_V(0ULL) |
+ TCB_RTT_TS_RECENT_AGE_V(0ULL),
+ 1);
++ }
+
+ if (f->fs.newdmac)
+ set_tcb_tflag(adap, f, tid, TF_CCTRL_ECE_S, 1,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index 6a79c8e4a7a40..9043d2cadd5de 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -744,8 +744,8 @@ nfp_port_get_fecparam(struct net_device *netdev,
+ struct nfp_eth_table_port *eth_port;
+ struct nfp_port *port;
+
+- param->active_fec = ETHTOOL_FEC_NONE_BIT;
+- param->fec = ETHTOOL_FEC_NONE_BIT;
++ param->active_fec = ETHTOOL_FEC_NONE;
++ param->fec = ETHTOOL_FEC_NONE;
+
+ port = nfp_port_from_netdev(netdev);
+ eth_port = nfp_port_get_eth_port(port);
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 817c290b78cd9..d0b5844c8a315 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -721,7 +721,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ struct net_device *dev,
+ struct geneve_sock *gs4,
+ struct flowi4 *fl4,
+- const struct ip_tunnel_info *info)
++ const struct ip_tunnel_info *info,
++ __be16 dport, __be16 sport)
+ {
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct geneve_dev *geneve = netdev_priv(dev);
+@@ -737,6 +738,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ fl4->flowi4_proto = IPPROTO_UDP;
+ fl4->daddr = info->key.u.ipv4.dst;
+ fl4->saddr = info->key.u.ipv4.src;
++ fl4->fl4_dport = dport;
++ fl4->fl4_sport = sport;
+
+ tos = info->key.tos;
+ if ((tos == 1) && !geneve->collect_md) {
+@@ -771,7 +774,8 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ struct net_device *dev,
+ struct geneve_sock *gs6,
+ struct flowi6 *fl6,
+- const struct ip_tunnel_info *info)
++ const struct ip_tunnel_info *info,
++ __be16 dport, __be16 sport)
+ {
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct geneve_dev *geneve = netdev_priv(dev);
+@@ -787,6 +791,9 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ fl6->flowi6_proto = IPPROTO_UDP;
+ fl6->daddr = info->key.u.ipv6.dst;
+ fl6->saddr = info->key.u.ipv6.src;
++ fl6->fl6_dport = dport;
++ fl6->fl6_sport = sport;
++
+ prio = info->key.tos;
+ if ((prio == 1) && !geneve->collect_md) {
+ prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
+@@ -833,14 +840,15 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ __be16 df;
+ int err;
+
+- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
++ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
++ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+ skb_tunnel_check_pmtu(skb, &rt->dst,
+ GENEVE_IPV4_HLEN + info->options_len);
+
+- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ if (geneve->collect_md) {
+ tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
+ ttl = key->ttl;
+@@ -875,13 +883,14 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ __be16 sport;
+ int err;
+
+- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
++ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
++ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(dst))
+ return PTR_ERR(dst);
+
+ skb_tunnel_check_pmtu(skb, dst, GENEVE_IPV6_HLEN + info->options_len);
+
+- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ if (geneve->collect_md) {
+ prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
+ ttl = key->ttl;
+@@ -958,13 +967,18 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ {
+ struct ip_tunnel_info *info = skb_tunnel_info(skb);
+ struct geneve_dev *geneve = netdev_priv(dev);
++ __be16 sport;
+
+ if (ip_tunnel_info_af(info) == AF_INET) {
+ struct rtable *rt;
+ struct flowi4 fl4;
++
+ struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
++ sport = udp_flow_src_port(geneve->net, skb,
++ 1, USHRT_MAX, true);
+
+- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
++ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+@@ -974,9 +988,13 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ } else if (ip_tunnel_info_af(info) == AF_INET6) {
+ struct dst_entry *dst;
+ struct flowi6 fl6;
++
+ struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
++ sport = udp_flow_src_port(geneve->net, skb,
++ 1, USHRT_MAX, true);
+
+- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
++ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(dst))
+ return PTR_ERR(dst);
+
+@@ -987,8 +1005,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ return -EINVAL;
+ }
+
+- info->key.tp_src = udp_flow_src_port(geneve->net, skb,
+- 1, USHRT_MAX, true);
++ info->key.tp_src = sport;
+ info->key.tp_dst = geneve->info.key.tp_dst;
+ return 0;
+ }
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 54ac599cffb4d..b884b681d5c52 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1154,7 +1154,8 @@ void phy_detach(struct phy_device *phydev)
+
+ phy_led_triggers_unregister(phydev);
+
+- module_put(phydev->mdio.dev.driver->owner);
++ if (phydev->mdio.dev.driver)
++ module_put(phydev->mdio.dev.driver->owner);
+
+ /* If the device had no specific driver before (i.e. - it
+ * was using the generic driver), we unbind the device
+diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
+index 4e9fe75d70675..21190dfbabb16 100644
+--- a/drivers/net/wan/Kconfig
++++ b/drivers/net/wan/Kconfig
+@@ -199,7 +199,7 @@ config WANXL_BUILD_FIRMWARE
+ depends on WANXL && !PREVENT_FIRMWARE_BUILD
+ help
+ Allows you to rebuild firmware run by the QUICC processor.
+- It requires as68k, ld68k and hexdump programs.
++ It requires m68k toolchains and hexdump programs.
+
+ You should never need this option, say N.
+
+diff --git a/drivers/net/wan/Makefile b/drivers/net/wan/Makefile
+index 9532e69fda878..0500282e176e0 100644
+--- a/drivers/net/wan/Makefile
++++ b/drivers/net/wan/Makefile
+@@ -41,17 +41,17 @@ $(obj)/wanxl.o: $(obj)/wanxlfw.inc
+
+ ifeq ($(CONFIG_WANXL_BUILD_FIRMWARE),y)
+ ifeq ($(ARCH),m68k)
+- AS68K = $(AS)
+- LD68K = $(LD)
++ M68KCC = $(CC)
++ M68KLD = $(LD)
+ else
+- AS68K = as68k
+- LD68K = ld68k
++ M68KCC = $(CROSS_COMPILE_M68K)gcc
++ M68KLD = $(CROSS_COMPILE_M68K)ld
+ endif
+
+ quiet_cmd_build_wanxlfw = BLD FW $@
+ cmd_build_wanxlfw = \
+- $(CPP) -D__ASSEMBLY__ -Wp,-MD,$(depfile) -I$(srctree)/include/uapi $< | $(AS68K) -m68360 -o $(obj)/wanxlfw.o; \
+- $(LD68K) --oformat binary -Ttext 0x1000 $(obj)/wanxlfw.o -o $(obj)/wanxlfw.bin; \
++ $(M68KCC) -D__ASSEMBLY__ -Wp,-MD,$(depfile) -I$(srctree)/include/uapi -c -o $(obj)/wanxlfw.o $<; \
++ $(M68KLD) --oformat binary -Ttext 0x1000 $(obj)/wanxlfw.o -o $(obj)/wanxlfw.bin; \
+ hexdump -ve '"\n" 16/1 "0x%02X,"' $(obj)/wanxlfw.bin | sed 's/0x ,//g;1s/^/static const u8 firmware[]={/;$$s/,$$/\n};\n/' >$(obj)/wanxlfw.inc; \
+ rm -f $(obj)/wanxlfw.bin $(obj)/wanxlfw.o
+
+diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c
+index ab8b3cbbb205c..85844f26547dd 100644
+--- a/drivers/net/wan/hdlc_ppp.c
++++ b/drivers/net/wan/hdlc_ppp.c
+@@ -386,11 +386,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
+ }
+
+ for (opt = data; len; len -= opt[1], opt += opt[1]) {
+- if (len < 2 || len < opt[1]) {
+- dev->stats.rx_errors++;
+- kfree(out);
+- return; /* bad packet, drop silently */
+- }
++ if (len < 2 || opt[1] < 2 || len < opt[1])
++ goto err_out;
+
+ if (pid == PID_LCP)
+ switch (opt[0]) {
+ continue; /* MRU always OK and > 1500 bytes? */
+
+ case LCP_OPTION_ACCM: /* async control character map */
++ if (opt[1] < sizeof(valid_accm))
++ goto err_out;
+ if (!memcmp(opt, valid_accm,
+ sizeof(valid_accm)))
+ continue;
+@@ -409,6 +408,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
+ }
+ break;
+ case LCP_OPTION_MAGIC:
++ if (len < 6)
++ goto err_out;
+ if (opt[1] != 6 || (!opt[2] && !opt[3] &&
+ !opt[4] && !opt[5]))
+ break; /* reject invalid magic number */
+@@ -427,6 +428,11 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
+ ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data);
+
+ kfree(out);
++ return;
++
++err_out:
++ dev->stats.rx_errors++;
++ kfree(out);
+ }
+
+ static int ppp_rx(struct sk_buff *skb)
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index e1a5887b6d91d..d2df7d71d6667 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -1062,8 +1062,10 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
+ serial8250_apply_quirks(uart);
+ ret = uart_add_one_port(&serial8250_reg,
+ &uart->port);
+- if (ret == 0)
+- ret = uart->port.line;
++ if (ret)
++ goto err;
++
++ ret = uart->port.line;
+ } else {
+ dev_info(uart->port.dev,
+ "skipping CIR port at 0x%lx / 0x%llx, IRQ %d\n",
+@@ -1088,6 +1090,11 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
+ mutex_unlock(&serial_mutex);
+
+ return ret;
++
++err:
++ uart->port.dev = NULL;
++ mutex_unlock(&serial_mutex);
++ return ret;
+ }
+ EXPORT_SYMBOL(serial8250_register_8250_port);
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 25407c206e732..cbc0294f39899 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3014,8 +3014,9 @@ static inline int skb_padto(struct sk_buff *skb, unsigned int len)
+ * is untouched. Otherwise it is extended. Returns zero on
+ * success. The skb is freed on error if @free_on_error is true.
+ */
+-static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
+- bool free_on_error)
++static inline int __must_check __skb_put_padto(struct sk_buff *skb,
++ unsigned int len,
++ bool free_on_error)
+ {
+ unsigned int size = skb->len;
+
+@@ -3038,7 +3039,7 @@ static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
+ * is untouched. Otherwise it is extended. Returns zero on
+ * success. The skb is freed on error.
+ */
+-static inline int skb_put_padto(struct sk_buff *skb, unsigned int len)
++static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int len)
+ {
+ return __skb_put_padto(skb, len, true);
+ }
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 2d5220ab0600b..fc9d6e37552d3 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -139,8 +139,8 @@ struct inet_connection_sock {
+ } icsk_mtup;
+ u32 icsk_user_timeout;
+
+- u64 icsk_ca_priv[88 / sizeof(u64)];
+-#define ICSK_CA_PRIV_SIZE (11 * sizeof(u64))
++ u64 icsk_ca_priv[104 / sizeof(u64)];
++#define ICSK_CA_PRIV_SIZE (13 * sizeof(u64))
+ };
+
+ #define ICSK_TIME_RETRANS 1 /* Retransmit timer */
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index eb4bffe6d764d..230d9d599b5aa 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2061,6 +2061,9 @@ static void kill_kprobe(struct kprobe *p)
+ {
+ struct kprobe *kp;
+
++ if (WARN_ON_ONCE(kprobe_gone(p)))
++ return;
++
+ p->flags |= KPROBE_FLAG_GONE;
+ if (kprobe_aggrprobe(p)) {
+ /*
+@@ -2243,7 +2246,10 @@ static int kprobes_module_callback(struct notifier_block *nb,
+ mutex_lock(&kprobe_mutex);
+ for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
+ head = &kprobe_table[i];
+- hlist_for_each_entry_rcu(p, head, hlist)
++ hlist_for_each_entry_rcu(p, head, hlist) {
++ if (kprobe_gone(p))
++ continue;
++
+ if (within_module_init((unsigned long)p->addr, mod) ||
+ (checkcore &&
+ within_module_core((unsigned long)p->addr, mod))) {
+@@ -2260,6 +2266,7 @@ static int kprobes_module_callback(struct notifier_block *nb,
+ */
+ kill_kprobe(p);
+ }
++ }
+ }
+ mutex_unlock(&kprobe_mutex);
+ return NOTIFY_DONE;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 1443ae6fee9bd..8b137248b146d 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2145,7 +2145,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ put_page(page);
+ add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
+ return;
+- } else if (is_huge_zero_pmd(*pmd)) {
++ } else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
+ /*
+ * FIXME: Do we want to invalidate secondary mmu by calling
+ * mmu_notifier_invalidate_range() see comments below inside
+@@ -2233,27 +2233,33 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ pte = pte_offset_map(&_pmd, addr);
+ BUG_ON(!pte_none(*pte));
+ set_pte_at(mm, addr, pte, entry);
+- atomic_inc(&page[i]._mapcount);
+- pte_unmap(pte);
+- }
+-
+- /*
+- * Set PG_double_map before dropping compound_mapcount to avoid
+- * false-negative page_mapped().
+- */
+- if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
+- for (i = 0; i < HPAGE_PMD_NR; i++)
++ if (!pmd_migration)
+ atomic_inc(&page[i]._mapcount);
++ pte_unmap(pte);
+ }
+
+- if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+- /* Last compound_mapcount is gone. */
+- __dec_node_page_state(page, NR_ANON_THPS);
+- if (TestClearPageDoubleMap(page)) {
+- /* No need in mapcount reference anymore */
++ if (!pmd_migration) {
++ /*
++ * Set PG_double_map before dropping compound_mapcount to avoid
++ * false-negative page_mapped().
++ */
++ if (compound_mapcount(page) > 1 &&
++ !TestSetPageDoubleMap(page)) {
+ for (i = 0; i < HPAGE_PMD_NR; i++)
+- atomic_dec(&page[i]._mapcount);
++ atomic_inc(&page[i]._mapcount);
++ }
++
++ lock_page_memcg(page);
++ if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
++ /* Last compound_mapcount is gone. */
++ __dec_lruvec_page_state(page, NR_ANON_THPS);
++ if (TestClearPageDoubleMap(page)) {
++ /* No need in mapcount reference anymore */
++ for (i = 0; i < HPAGE_PMD_NR; i++)
++ atomic_dec(&page[i]._mapcount);
++ }
+ }
++ unlock_page_memcg(page);
+ }
+
+ smp_wmb(); /* make pte visible before pmd */
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index bc2ecd43251ad..b93dc8fc6007f 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2708,6 +2708,14 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
+ unsigned long reclaimed;
+ unsigned long scanned;
+
++ /*
++ * This loop can become CPU-bound when target memcgs
++ * aren't eligible for reclaim - either because they
++ * don't have any reclaimable pages, or because their
++ * memory is explicitly protected. Avoid soft lockups.
++ */
++ cond_resched();
++
+ switch (mem_cgroup_protected(root, memcg)) {
+ case MEMCG_PROT_MIN:
+ /*
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index a556cd708885a..5ee6b94131b23 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -1421,6 +1421,7 @@ static int dcbnl_ieee_set(struct net_device *netdev, struct nlmsghdr *nlh,
+ {
+ const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops;
+ struct nlattr *ieee[DCB_ATTR_IEEE_MAX + 1];
++ int prio;
+ int err;
+
+ if (!ops)
+@@ -1469,6 +1470,13 @@ static int dcbnl_ieee_set(struct net_device *netdev, struct nlmsghdr *nlh,
+ struct dcbnl_buffer *buffer =
+ nla_data(ieee[DCB_ATTR_DCB_BUFFER]);
+
++ for (prio = 0; prio < ARRAY_SIZE(buffer->prio2buffer); prio++) {
++ if (buffer->prio2buffer[prio] >= DCBX_MAX_BUFFERS) {
++ err = -EINVAL;
++ goto err;
++ }
++ }
++
+ err = ops->dcbnl_setbuffer(netdev, buffer);
+ if (err)
+ goto err;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index fbf30122e8bf2..f0faf1193dd89 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -73,6 +73,7 @@
+ #include <net/icmp.h>
+ #include <net/checksum.h>
+ #include <net/inetpeer.h>
++#include <net/inet_ecn.h>
+ #include <net/lwtunnel.h>
+ #include <linux/bpf-cgroup.h>
+ #include <linux/igmp.h>
+@@ -1582,7 +1583,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ if (IS_ERR(rt))
+ return;
+
+- inet_sk(sk)->tos = arg->tos;
++ inet_sk(sk)->tos = arg->tos & ~INET_ECN_MASK;
+
+ sk->sk_priority = skb->priority;
+ sk->sk_protocol = ip_hdr(skb)->protocol;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index f752d22cc8a59..84de87b7eedcd 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -777,8 +777,10 @@ static void __ip_do_redirect(struct rtable *rt, struct sk_buff *skb, struct flow
+ neigh_event_send(n, NULL);
+ } else {
+ if (fib_lookup(net, fl4, &res, 0) == 0) {
+- struct fib_nh *nh = &FIB_RES_NH(res);
++ struct fib_nh *nh;
+
++ fib_select_path(net, &res, fl4, skb);
++ nh = &FIB_RES_NH(res);
+ update_or_create_fnhe(nh, fl4->daddr, new_gw,
+ 0, false,
+ jiffies + ip_rt_gc_timeout);
+@@ -1004,6 +1006,7 @@ out: kfree_skb(skb);
+ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ {
+ struct dst_entry *dst = &rt->dst;
++ struct net *net = dev_net(dst->dev);
+ u32 old_mtu = ipv4_mtu(dst);
+ struct fib_result res;
+ bool lock = false;
+@@ -1024,9 +1027,11 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ return;
+
+ rcu_read_lock();
+- if (fib_lookup(dev_net(dst->dev), fl4, &res, 0) == 0) {
+- struct fib_nh *nh = &FIB_RES_NH(res);
++ if (fib_lookup(net, fl4, &res, 0) == 0) {
++ struct fib_nh *nh;
+
++ fib_select_path(net, &res, fl4, NULL);
++ nh = &FIB_RES_NH(res);
+ update_or_create_fnhe(nh, fl4->daddr, 0, mtu, lock,
+ jiffies + ip_rt_mtu_expires);
+ }
+@@ -2536,8 +2541,6 @@ struct rtable *ip_route_output_key_hash_rcu(struct net *net, struct flowi4 *fl4,
+ fib_select_path(net, res, fl4, skb);
+
+ dev_out = FIB_RES_DEV(*res);
+- fl4->flowi4_oif = dev_out->ifindex;
+-
+
+ make_route:
+ rth = __mkroute_output(res, fl4, orig_oif, dev_out, flags);
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index b371e66502c36..93f1763362977 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -115,6 +115,14 @@ struct bbr {
+ unused_b:5;
+ u32 prior_cwnd; /* prior cwnd upon entering loss recovery */
+ u32 full_bw; /* recent bw, to estimate if pipe is full */
++
++ /* For tracking ACK aggregation: */
++ u64 ack_epoch_mstamp; /* start of ACK sampling epoch */
++ u16 extra_acked[2]; /* max excess data ACKed in epoch */
++ u32 ack_epoch_acked:20, /* packets (S)ACKed in sampling epoch */
++ extra_acked_win_rtts:5, /* age of extra_acked, in round trips */
++ extra_acked_win_idx:1, /* current index in extra_acked array */
++ unused_c:6;
+ };
+
+ #define CYCLE_LEN 8 /* number of phases in a pacing gain cycle */
+@@ -174,6 +182,15 @@ static const u32 bbr_lt_bw_diff = 4000 / 8;
+ /* If we estimate we're policed, use lt_bw for this many round trips: */
+ static const u32 bbr_lt_bw_max_rtts = 48;
+
++/* Gain factor for adding extra_acked to target cwnd: */
++static const int bbr_extra_acked_gain = BBR_UNIT;
++/* Window length of extra_acked window. */
++static const u32 bbr_extra_acked_win_rtts = 5;
++/* Max allowed val for ack_epoch_acked, after which sampling epoch is reset */
++static const u32 bbr_ack_epoch_acked_reset_thresh = 1U << 20;
++/* Time period for clamping cwnd increment due to ack aggregation */
++static const u32 bbr_extra_acked_max_us = 100 * 1000;
++
+ static void bbr_check_probe_rtt_done(struct sock *sk);
+
+ /* Do we estimate that STARTUP filled the pipe? */
+@@ -200,6 +217,16 @@ static u32 bbr_bw(const struct sock *sk)
+ return bbr->lt_use_bw ? bbr->lt_bw : bbr_max_bw(sk);
+ }
+
++/* Return maximum extra acked in past k-2k round trips,
++ * where k = bbr_extra_acked_win_rtts.
++ */
++static u16 bbr_extra_acked(const struct sock *sk)
++{
++ struct bbr *bbr = inet_csk_ca(sk);
++
++ return max(bbr->extra_acked[0], bbr->extra_acked[1]);
++}
++
+ /* Return rate in bytes per second, optionally with a gain.
+ * The order here is chosen carefully to avoid overflow of u64. This should
+ * work for input rates of up to 2.9Tbit/sec and gain of 2.89x.
+@@ -305,6 +332,8 @@ static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+
+ if (event == CA_EVENT_TX_START && tp->app_limited) {
+ bbr->idle_restart = 1;
++ bbr->ack_epoch_mstamp = tp->tcp_mstamp;
++ bbr->ack_epoch_acked = 0;
+ /* Avoid pointless buffer overflows: pace at est. bw if we don't
+ * need more speed (we're restarting from idle and app-limited).
+ */
+@@ -315,30 +344,19 @@ static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+ }
+ }
+
+-/* Find target cwnd. Right-size the cwnd based on min RTT and the
+- * estimated bottleneck bandwidth:
++/* Calculate bdp based on min RTT and the estimated bottleneck bandwidth:
+ *
+- * cwnd = bw * min_rtt * gain = BDP * gain
++ * bdp = bw * min_rtt * gain
+ *
+ * The key factor, gain, controls the amount of queue. While a small gain
+ * builds a smaller queue, it becomes more vulnerable to noise in RTT
+ * measurements (e.g., delayed ACKs or other ACK compression effects). This
+ * noise may cause BBR to under-estimate the rate.
+- *
+- * To achieve full performance in high-speed paths, we budget enough cwnd to
+- * fit full-sized skbs in-flight on both end hosts to fully utilize the path:
+- * - one skb in sending host Qdisc,
+- * - one skb in sending host TSO/GSO engine
+- * - one skb being received by receiver host LRO/GRO/delayed-ACK engine
+- * Don't worry, at low rates (bbr_min_tso_rate) this won't bloat cwnd because
+- * in such cases tso_segs_goal is 1. The minimum cwnd is 4 packets,
+- * which allows 2 outstanding 2-packet sequences, to try to keep pipe
+- * full even with ACK-every-other-packet delayed ACKs.
+ */
+-static u32 bbr_target_cwnd(struct sock *sk, u32 bw, int gain)
++static u32 bbr_bdp(struct sock *sk, u32 bw, int gain)
+ {
+ struct bbr *bbr = inet_csk_ca(sk);
+- u32 cwnd;
++ u32 bdp;
+ u64 w;
+
+ /* If we've never had a valid RTT sample, cap cwnd at the initial
+@@ -353,7 +371,24 @@ static u32 bbr_target_cwnd(struct sock *sk, u32 bw, int gain)
+ w = (u64)bw * bbr->min_rtt_us;
+
+ /* Apply a gain to the given value, then remove the BW_SCALE shift. */
+- cwnd = (((w * gain) >> BBR_SCALE) + BW_UNIT - 1) / BW_UNIT;
++ bdp = (((w * gain) >> BBR_SCALE) + BW_UNIT - 1) / BW_UNIT;
++
++ return bdp;
++}
++
++/* To achieve full performance in high-speed paths, we budget enough cwnd to
++ * fit full-sized skbs in-flight on both end hosts to fully utilize the path:
++ * - one skb in sending host Qdisc,
++ * - one skb in sending host TSO/GSO engine
++ * - one skb being received by receiver host LRO/GRO/delayed-ACK engine
++ * Don't worry, at low rates (bbr_min_tso_rate) this won't bloat cwnd because
++ * in such cases tso_segs_goal is 1. The minimum cwnd is 4 packets,
++ * which allows 2 outstanding 2-packet sequences, to try to keep pipe
++ * full even with ACK-every-other-packet delayed ACKs.
++ */
++static u32 bbr_quantization_budget(struct sock *sk, u32 cwnd, int gain)
++{
++ struct bbr *bbr = inet_csk_ca(sk);
+
+ /* Allow enough full-sized skbs in flight to utilize end systems. */
+ cwnd += 3 * bbr_tso_segs_goal(sk);
+@@ -368,6 +403,33 @@ static u32 bbr_target_cwnd(struct sock *sk, u32 bw, int gain)
+ return cwnd;
+ }
+
++/* Find inflight based on min RTT and the estimated bottleneck bandwidth. */
++static u32 bbr_inflight(struct sock *sk, u32 bw, int gain)
++{
++ u32 inflight;
++
++ inflight = bbr_bdp(sk, bw, gain);
++ inflight = bbr_quantization_budget(sk, inflight, gain);
++
++ return inflight;
++}
++
++/* Find the cwnd increment based on estimate of ack aggregation */
++static u32 bbr_ack_aggregation_cwnd(struct sock *sk)
++{
++ u32 max_aggr_cwnd, aggr_cwnd = 0;
++
++ if (bbr_extra_acked_gain && bbr_full_bw_reached(sk)) {
++ max_aggr_cwnd = ((u64)bbr_bw(sk) * bbr_extra_acked_max_us)
++ / BW_UNIT;
++ aggr_cwnd = (bbr_extra_acked_gain * bbr_extra_acked(sk))
++ >> BBR_SCALE;
++ aggr_cwnd = min(aggr_cwnd, max_aggr_cwnd);
++ }
++
++ return aggr_cwnd;
++}
++
+ /* An optimization in BBR to reduce losses: On the first round of recovery, we
+ * follow the packet conservation principle: send P packets per P packets acked.
+ * After that, we slow-start and send at most 2*P packets per P packets acked.
+@@ -428,8 +490,15 @@ static void bbr_set_cwnd(struct sock *sk, const struct rate_sample *rs,
+ if (bbr_set_cwnd_to_recover_or_restore(sk, rs, acked, &cwnd))
+ goto done;
+
++ target_cwnd = bbr_bdp(sk, bw, gain);
++
++ /* Increment the cwnd to account for excess ACKed data that seems
++ * due to aggregation (of data and/or ACKs) visible in the ACK stream.
++ */
++ target_cwnd += bbr_ack_aggregation_cwnd(sk);
++ target_cwnd = bbr_quantization_budget(sk, target_cwnd, gain);
++
+ /* If we're below target cwnd, slow start cwnd toward target cwnd. */
+- target_cwnd = bbr_target_cwnd(sk, bw, gain);
+ if (bbr_full_bw_reached(sk)) /* only cut cwnd if we filled the pipe */
+ cwnd = min(cwnd + acked, target_cwnd);
+ else if (cwnd < target_cwnd || tp->delivered < TCP_INIT_CWND)
+@@ -470,14 +539,14 @@ static bool bbr_is_next_cycle_phase(struct sock *sk,
+ if (bbr->pacing_gain > BBR_UNIT)
+ return is_full_length &&
+ (rs->losses || /* perhaps pacing_gain*BDP won't fit */
+- inflight >= bbr_target_cwnd(sk, bw, bbr->pacing_gain));
++ inflight >= bbr_inflight(sk, bw, bbr->pacing_gain));
+
+ /* A pacing_gain < 1.0 tries to drain extra queue we added if bw
+ * probing didn't find more bw. If inflight falls to match BDP then we
+ * estimate queue is drained; persisting would underutilize the pipe.
+ */
+ return is_full_length ||
+- inflight <= bbr_target_cwnd(sk, bw, BBR_UNIT);
++ inflight <= bbr_inflight(sk, bw, BBR_UNIT);
+ }
+
+ static void bbr_advance_cycle_phase(struct sock *sk)
+@@ -699,6 +768,67 @@ static void bbr_update_bw(struct sock *sk, const struct rate_sample *rs)
+ }
+ }
+
++/* Estimates the windowed max degree of ack aggregation.
++ * This is used to provision extra in-flight data to keep sending during
++ * inter-ACK silences.
++ *
++ * Degree of ack aggregation is estimated as extra data acked beyond expected.
++ *
++ * max_extra_acked = "maximum recent excess data ACKed beyond max_bw * interval"
++ * cwnd += max_extra_acked
++ *
++ * Max extra_acked is clamped by cwnd and bw * bbr_extra_acked_max_us (100 ms).
++ * Max filter is an approximate sliding window of 5-10 (packet timed) round
++ * trips.
++ */
++static void bbr_update_ack_aggregation(struct sock *sk,
++ const struct rate_sample *rs)
++{
++ u32 epoch_us, expected_acked, extra_acked;
++ struct bbr *bbr = inet_csk_ca(sk);
++ struct tcp_sock *tp = tcp_sk(sk);
++
++ if (!bbr_extra_acked_gain || rs->acked_sacked <= 0 ||
++ rs->delivered < 0 || rs->interval_us <= 0)
++ return;
++
++ if (bbr->round_start) {
++ bbr->extra_acked_win_rtts = min(0x1F,
++ bbr->extra_acked_win_rtts + 1);
++ if (bbr->extra_acked_win_rtts >= bbr_extra_acked_win_rtts) {
++ bbr->extra_acked_win_rtts = 0;
++ bbr->extra_acked_win_idx = bbr->extra_acked_win_idx ?
++ 0 : 1;
++ bbr->extra_acked[bbr->extra_acked_win_idx] = 0;
++ }
++ }
++
++ /* Compute how many packets we expected to be delivered over epoch. */
++ epoch_us = tcp_stamp_us_delta(tp->delivered_mstamp,
++ bbr->ack_epoch_mstamp);
++ expected_acked = ((u64)bbr_bw(sk) * epoch_us) / BW_UNIT;
++
++ /* Reset the aggregation epoch if ACK rate is below expected rate or
++ * significantly large no. of ack received since epoch (potentially
++ * quite old epoch).
++ */
++ if (bbr->ack_epoch_acked <= expected_acked ||
++ (bbr->ack_epoch_acked + rs->acked_sacked >=
++ bbr_ack_epoch_acked_reset_thresh)) {
++ bbr->ack_epoch_acked = 0;
++ bbr->ack_epoch_mstamp = tp->delivered_mstamp;
++ expected_acked = 0;
++ }
++
++ /* Compute excess data delivered, beyond what was expected. */
++ bbr->ack_epoch_acked = min_t(u32, 0xFFFFF,
++ bbr->ack_epoch_acked + rs->acked_sacked);
++ extra_acked = bbr->ack_epoch_acked - expected_acked;
++ extra_acked = min(extra_acked, tp->snd_cwnd);
++ if (extra_acked > bbr->extra_acked[bbr->extra_acked_win_idx])
++ bbr->extra_acked[bbr->extra_acked_win_idx] = extra_acked;
++}
++
+ /* Estimate when the pipe is full, using the change in delivery rate: BBR
+ * estimates that STARTUP filled the pipe if the estimated bw hasn't changed by
+ * at least bbr_full_bw_thresh (25%) after bbr_full_bw_cnt (3) non-app-limited
+@@ -736,11 +866,11 @@ static void bbr_check_drain(struct sock *sk, const struct rate_sample *rs)
+ bbr->pacing_gain = bbr_drain_gain; /* pace slow to drain */
+ bbr->cwnd_gain = bbr_high_gain; /* maintain cwnd */
+ tcp_sk(sk)->snd_ssthresh =
+- bbr_target_cwnd(sk, bbr_max_bw(sk), BBR_UNIT);
++ bbr_inflight(sk, bbr_max_bw(sk), BBR_UNIT);
+ } /* fall through to check if in-flight is already small: */
+ if (bbr->mode == BBR_DRAIN &&
+ tcp_packets_in_flight(tcp_sk(sk)) <=
+- bbr_target_cwnd(sk, bbr_max_bw(sk), BBR_UNIT))
++ bbr_inflight(sk, bbr_max_bw(sk), BBR_UNIT))
+ bbr_reset_probe_bw_mode(sk); /* we estimate queue is drained */
+ }
+
+@@ -828,6 +958,7 @@ static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs)
+ static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
+ {
+ bbr_update_bw(sk, rs);
++ bbr_update_ack_aggregation(sk, rs);
+ bbr_update_cycle_phase(sk, rs);
+ bbr_check_full_bw_reached(sk, rs);
+ bbr_check_drain(sk, rs);
+@@ -878,6 +1009,13 @@ static void bbr_init(struct sock *sk)
+ bbr_reset_lt_bw_sampling(sk);
+ bbr_reset_startup_mode(sk);
+
++ bbr->ack_epoch_mstamp = tp->tcp_mstamp;
++ bbr->ack_epoch_acked = 0;
++ bbr->extra_acked_win_rtts = 0;
++ bbr->extra_acked_win_idx = 0;
++ bbr->extra_acked[0] = 0;
++ bbr->extra_acked[1] = 0;
++
+ cmpxchg(&sk->sk_pacing_status, SK_PACING_NONE, SK_PACING_NEEDED);
+ }
+
+diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig
+index 613282c65a107..a32cf50c237d8 100644
+--- a/net/ipv6/Kconfig
++++ b/net/ipv6/Kconfig
+@@ -321,6 +321,7 @@ config IPV6_SEG6_LWTUNNEL
+ config IPV6_SEG6_HMAC
+ bool "IPv6: Segment Routing HMAC support"
+ depends on IPV6
++ select CRYPTO
+ select CRYPTO_HMAC
+ select CRYPTO_SHA1
+ select CRYPTO_SHA256
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 5e8979c1f76d8..05a206202e23d 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1811,14 +1811,19 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ /* Need to own table->tb6_lock */
+ int fib6_del(struct fib6_info *rt, struct nl_info *info)
+ {
+- struct fib6_node *fn = rcu_dereference_protected(rt->fib6_node,
+- lockdep_is_held(&rt->fib6_table->tb6_lock));
+- struct fib6_table *table = rt->fib6_table;
+ struct net *net = info->nl_net;
+ struct fib6_info __rcu **rtp;
+ struct fib6_info __rcu **rtp_next;
++ struct fib6_table *table;
++ struct fib6_node *fn;
++
++ if (rt == net->ipv6.fib6_null_entry)
++ return -ENOENT;
+
+- if (!fn || rt == net->ipv6.fib6_null_entry)
++ table = rt->fib6_table;
++ fn = rcu_dereference_protected(rt->fib6_node,
++ lockdep_is_held(&table->tb6_lock));
++ if (!fn)
+ return -ENOENT;
+
+ WARN_ON(!(fn->fn_flags & RTN_RTINFO));
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 1982f9f31debb..e340e97224c3a 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1855,6 +1855,13 @@ static int pfkey_dump(struct sock *sk, struct sk_buff *skb, const struct sadb_ms
+ if (ext_hdrs[SADB_X_EXT_FILTER - 1]) {
+ struct sadb_x_filter *xfilter = ext_hdrs[SADB_X_EXT_FILTER - 1];
+
++ if ((xfilter->sadb_x_filter_splen >=
++ (sizeof(xfrm_address_t) << 3)) ||
++ (xfilter->sadb_x_filter_dplen >=
++ (sizeof(xfrm_address_t) << 3))) {
++ mutex_unlock(&pfk->dump_lock);
++ return -EINVAL;
++ }
+ filter = kmalloc(sizeof(*filter), GFP_KERNEL);
+ if (filter == NULL) {
+ mutex_unlock(&pfk->dump_lock);
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 42bd1e74f78c1..a05c5cb3429c0 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -185,7 +185,7 @@ static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ {
+ struct qrtr_hdr_v1 *hdr;
+ size_t len = skb->len;
+- int rc = -ENODEV;
++ int rc;
+
+ hdr = skb_push(skb, sizeof(*hdr));
+ hdr->version = cpu_to_le32(QRTR_PROTO_VER_1);
+@@ -203,15 +203,17 @@ static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ hdr->size = cpu_to_le32(len);
+ hdr->confirm_rx = 0;
+
+- skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
+-
+- mutex_lock(&node->ep_lock);
+- if (node->ep)
+- rc = node->ep->xmit(node->ep, skb);
+- else
+- kfree_skb(skb);
+- mutex_unlock(&node->ep_lock);
++ rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
+
++ if (!rc) {
++ mutex_lock(&node->ep_lock);
++ rc = -ENODEV;
++ if (node->ep)
++ rc = node->ep->xmit(node->ep, skb);
++ else
++ kfree_skb(skb);
++ mutex_unlock(&node->ep_lock);
++ }
+ return rc;
+ }
+
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 119e20cad662b..bd96fd261dba3 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1115,27 +1115,36 @@ static void dev_deactivate_queue(struct net_device *dev,
+ struct netdev_queue *dev_queue,
+ void *_qdisc_default)
+ {
+- struct Qdisc *qdisc_default = _qdisc_default;
+- struct Qdisc *qdisc;
++ struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc);
+
+- qdisc = rtnl_dereference(dev_queue->qdisc);
+ if (qdisc) {
+- bool nolock = qdisc->flags & TCQ_F_NOLOCK;
+-
+- if (nolock)
+- spin_lock_bh(&qdisc->seqlock);
+- spin_lock_bh(qdisc_lock(qdisc));
+-
+ if (!(qdisc->flags & TCQ_F_BUILTIN))
+ set_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state);
++ }
++}
+
+- rcu_assign_pointer(dev_queue->qdisc, qdisc_default);
+- qdisc_reset(qdisc);
++static void dev_reset_queue(struct net_device *dev,
++ struct netdev_queue *dev_queue,
++ void *_unused)
++{
++ struct Qdisc *qdisc;
++ bool nolock;
+
+- spin_unlock_bh(qdisc_lock(qdisc));
+- if (nolock)
+- spin_unlock_bh(&qdisc->seqlock);
+- }
++ qdisc = dev_queue->qdisc_sleeping;
++ if (!qdisc)
++ return;
++
++ nolock = qdisc->flags & TCQ_F_NOLOCK;
++
++ if (nolock)
++ spin_lock_bh(&qdisc->seqlock);
++ spin_lock_bh(qdisc_lock(qdisc));
++
++ qdisc_reset(qdisc);
++
++ spin_unlock_bh(qdisc_lock(qdisc));
++ if (nolock)
++ spin_unlock_bh(&qdisc->seqlock);
+ }
+
+ static bool some_qdisc_is_busy(struct net_device *dev)
+@@ -1196,12 +1205,20 @@ void dev_deactivate_many(struct list_head *head)
+ dev_watchdog_down(dev);
+ }
+
+- /* Wait for outstanding qdisc-less dev_queue_xmit calls.
++ /* Wait for outstanding qdisc-less dev_queue_xmit calls or
++ * outstanding qdisc enqueuing calls.
+ * This is avoided if all devices are in dismantle phase :
+ * Caller will call synchronize_net() for us
+ */
+ synchronize_net();
+
++ list_for_each_entry(dev, head, close_list) {
++ netdev_for_each_tx_queue(dev, dev_reset_queue, NULL);
++
++ if (dev_ingress_queue(dev))
++ dev_reset_queue(dev, dev_ingress_queue(dev), NULL);
++ }
++
+ /* Wait for outstanding qdisc_run calls. */
+ list_for_each_entry(dev, head, close_list) {
+ while (some_qdisc_is_busy(dev))
+diff --git a/net/tipc/group.c b/net/tipc/group.c
+index 9a9138de4eca5..b656385efad65 100644
+--- a/net/tipc/group.c
++++ b/net/tipc/group.c
+@@ -273,8 +273,8 @@ static struct tipc_member *tipc_group_find_node(struct tipc_group *grp,
+ return NULL;
+ }
+
+-static void tipc_group_add_to_tree(struct tipc_group *grp,
+- struct tipc_member *m)
++static int tipc_group_add_to_tree(struct tipc_group *grp,
++ struct tipc_member *m)
+ {
+ u64 nkey, key = (u64)m->node << 32 | m->port;
+ struct rb_node **n, *parent = NULL;
+@@ -291,10 +291,11 @@ static void tipc_group_add_to_tree(struct tipc_group *grp,
+ else if (key > nkey)
+ n = &(*n)->rb_right;
+ else
+- return;
++ return -EEXIST;
+ }
+ rb_link_node(&m->tree_node, parent, n);
+ rb_insert_color(&m->tree_node, &grp->members);
++ return 0;
+ }
+
+ static struct tipc_member *tipc_group_create_member(struct tipc_group *grp,
+@@ -302,6 +303,7 @@ static struct tipc_member *tipc_group_create_member(struct tipc_group *grp,
+ u32 instance, int state)
+ {
+ struct tipc_member *m;
++ int ret;
+
+ m = kzalloc(sizeof(*m), GFP_ATOMIC);
+ if (!m)
+@@ -314,8 +316,12 @@ static struct tipc_member *tipc_group_create_member(struct tipc_group *grp,
+ m->port = port;
+ m->instance = instance;
+ m->bc_acked = grp->bc_snd_nxt - 1;
++ ret = tipc_group_add_to_tree(grp, m);
++ if (ret < 0) {
++ kfree(m);
++ return NULL;
++ }
+ grp->member_cnt++;
+- tipc_group_add_to_tree(grp, m);
+ tipc_nlist_add(&grp->dests, m->node);
+ m->state = state;
+ return m;
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index cbccf1791d3c5..b078b77620f18 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -140,7 +140,8 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ if (fragid == FIRST_FRAGMENT) {
+ if (unlikely(head))
+ goto err;
+- if (unlikely(skb_unclone(frag, GFP_ATOMIC)))
++ frag = skb_unshare(frag, GFP_ATOMIC);
++ if (unlikely(!frag))
+ goto err;
+ head = *headbuf = frag;
+ *buf = NULL;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index d0cf7169f08c8..16e2af3a00ccb 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2565,10 +2565,7 @@ static int tipc_shutdown(struct socket *sock, int how)
+ lock_sock(sk);
+
+ __tipc_shutdown(sock, TIPC_CONN_SHUTDOWN);
+- if (tipc_sk_type_connectionless(sk))
+- sk->sk_shutdown = SHUTDOWN_MASK;
+- else
+- sk->sk_shutdown = SEND_SHUTDOWN;
++ sk->sk_shutdown = SHUTDOWN_MASK;
+
+ if (sk->sk_state == TIPC_DISCONNECTING) {
+ /* Discard any unreceived messages */
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index 20f67fcf378d5..baa92279c137e 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -7,9 +7,15 @@ ARCH := x86
+ endif
+
+ # always use the host compiler
++ifneq ($(LLVM),)
++HOSTAR ?= llvm-ar
++HOSTCC ?= clang
++HOSTLD ?= ld.lld
++else
+ HOSTAR ?= ar
+ HOSTCC ?= gcc
+ HOSTLD ?= ld
++endif
+ AR = $(HOSTAR)
+ CC = $(HOSTCC)
+ LD = $(HOSTLD)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 2155b52b17eca..6bd01d12df2ec 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3844,7 +3844,7 @@ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ struct kvm_io_device *dev)
+ {
+- int i;
++ int i, j;
+ struct kvm_io_bus *new_bus, *bus;
+
+ bus = kvm_get_bus(kvm, bus_idx);
+@@ -3861,17 +3861,20 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+
+ new_bus = kmalloc(sizeof(*bus) + ((bus->dev_count - 1) *
+ sizeof(struct kvm_io_range)), GFP_KERNEL);
+- if (!new_bus) {
++ if (new_bus) {
++ memcpy(new_bus, bus, sizeof(*bus) + i * sizeof(struct kvm_io_range));
++ new_bus->dev_count--;
++ memcpy(new_bus->range + i, bus->range + i + 1,
++ (new_bus->dev_count - i) * sizeof(struct kvm_io_range));
++ } else {
+ pr_err("kvm: failed to shrink bus, removing it completely\n");
+- goto broken;
++ for (j = 0; j < bus->dev_count; j++) {
++ if (j == i)
++ continue;
++ kvm_iodevice_destructor(bus->range[j].dev);
++ }
+ }
+
+- memcpy(new_bus, bus, sizeof(*bus) + i * sizeof(struct kvm_io_range));
+- new_bus->dev_count--;
+- memcpy(new_bus->range + i, bus->range + i + 1,
+- (new_bus->dev_count - i) * sizeof(struct kvm_io_range));
+-
+-broken:
+ rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
+ synchronize_srcu_expedited(&kvm->srcu);
+ kfree(bus);