From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.19 commit in: /
Date: Sat, 01 Dec 2018 15:08:25
Message-Id: 1543676871.3c846c91f5b30fcc7c4adbb519f42d972fd9cfd4.mpagano@gentoo
1 commit: 3c846c91f5b30fcc7c4adbb519f42d972fd9cfd4
2 Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
3 AuthorDate: Sat Dec 1 15:07:51 2018 +0000
4 Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
5 CommitDate: Sat Dec 1 15:07:51 2018 +0000
6 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3c846c91
7
8 proj/linux-patches: Linux patch 4.19.6
9
10 Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
11
12 0000_README | 4 +
13 1005_linux-4.19.6.patch | 4624 +++++++++++++++++++++++++++++++++++++++++++++++
14 2 files changed, 4628 insertions(+)
15
16 diff --git a/0000_README b/0000_README
17 index c0b6ddf..c4c0a77 100644
18 --- a/0000_README
19 +++ b/0000_README
20 @@ -63,6 +63,10 @@ Patch: 1004_linux-4.19.5.patch
21 From: http://www.kernel.org
22 Desc: Linux 4.19.5
23
24 +Patch: 1005_linux-4.19.6.patch
25 +From: http://www.kernel.org
26 +Desc: Linux 4.19.6
27 +
28 Patch: 1500_XATTR_USER_PREFIX.patch
29 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
30 Desc: Support for namespace user.pax.* on tmpfs.
31
32 diff --git a/1005_linux-4.19.6.patch b/1005_linux-4.19.6.patch
33 new file mode 100644
34 index 0000000..91b0881
35 --- /dev/null
36 +++ b/1005_linux-4.19.6.patch
37 @@ -0,0 +1,4624 @@
38 +diff --git a/Documentation/admin-guide/security-bugs.rst b/Documentation/admin-guide/security-bugs.rst
39 +index 30491d91e93d..30187d49dc2c 100644
40 +--- a/Documentation/admin-guide/security-bugs.rst
41 ++++ b/Documentation/admin-guide/security-bugs.rst
42 +@@ -26,23 +26,35 @@ information is helpful. Any exploit code is very helpful and will not
43 + be released without consent from the reporter unless it has already been
44 + made public.
45 +
46 +-Disclosure
47 +-----------
48 +-
49 +-The goal of the Linux kernel security team is to work with the bug
50 +-submitter to understand and fix the bug. We prefer to publish the fix as
51 +-soon as possible, but try to avoid public discussion of the bug itself
52 +-and leave that to others.
53 +-
54 +-Publishing the fix may be delayed when the bug or the fix is not yet
55 +-fully understood, the solution is not well-tested or for vendor
56 +-coordination. However, we expect these delays to be short, measurable in
57 +-days, not weeks or months. A release date is negotiated by the security
58 +-team working with the bug submitter as well as vendors. However, the
59 +-kernel security team holds the final say when setting a timeframe. The
60 +-timeframe varies from immediate (esp. if it's already publicly known bug)
61 +-to a few weeks. As a basic default policy, we expect report date to
62 +-release date to be on the order of 7 days.
63 ++Disclosure and embargoed information
64 ++------------------------------------
65 ++
66 ++The security list is not a disclosure channel. For that, see Coordination
67 ++below.
68 ++
69 ++Once a robust fix has been developed, the release process starts. Fixes
70 ++for publicly known bugs are released immediately.
71 ++
72 ++Although our preference is to release fixes for publicly undisclosed bugs
73 ++as soon as they become available, this may be postponed at the request of
74 ++the reporter or an affected party for up to 7 calendar days from the start
75 ++of the release process, with an exceptional extension to 14 calendar days
76 ++if it is agreed that the criticality of the bug requires more time. The
77 ++only valid reason for deferring the publication of a fix is to accommodate
78 ++the logistics of QA and large scale rollouts which require release
79 ++coordination.
80 ++
81 ++Whilst embargoed information may be shared with trusted individuals in
82 ++order to develop a fix, such information will not be published alongside
83 ++the fix or on any other disclosure channel without the permission of the
84 ++reporter. This includes but is not limited to the original bug report
85 ++and followup discussions (if any), exploits, CVE information or the
86 ++identity of the reporter.
87 ++
88 ++In other words our only interest is in getting bugs fixed. All other
89 ++information submitted to the security list and any followup discussions
90 ++of the report are treated confidentially even after the embargo has been
91 ++lifted, in perpetuity.
92 +
93 + Coordination
94 + ------------
95 +@@ -68,7 +80,7 @@ may delay the bug handling. If a reporter wishes to have a CVE identifier
96 + assigned ahead of public disclosure, they will need to contact the private
97 + linux-distros list, described above. When such a CVE identifier is known
98 + before a patch is provided, it is desirable to mention it in the commit
99 +-message, though.
100 ++message if the reporter agrees.
101 +
102 + Non-disclosure agreements
103 + -------------------------
104 +diff --git a/Documentation/devicetree/bindings/net/can/holt_hi311x.txt b/Documentation/devicetree/bindings/net/can/holt_hi311x.txt
105 +index 903a78da65be..3a9926f99937 100644
106 +--- a/Documentation/devicetree/bindings/net/can/holt_hi311x.txt
107 ++++ b/Documentation/devicetree/bindings/net/can/holt_hi311x.txt
108 +@@ -17,7 +17,7 @@ Example:
109 + reg = <1>;
110 + clocks = <&clk32m>;
111 + interrupt-parent = <&gpio4>;
112 +- interrupts = <13 IRQ_TYPE_EDGE_RISING>;
113 ++ interrupts = <13 IRQ_TYPE_LEVEL_HIGH>;
114 + vdd-supply = <&reg5v0>;
115 + xceiver-supply = <&reg5v0>;
116 + };
117 +diff --git a/MAINTAINERS b/MAINTAINERS
118 +index b2f710eee67a..9e9b19ecf6f7 100644
119 +--- a/MAINTAINERS
120 ++++ b/MAINTAINERS
121 +@@ -13769,6 +13769,7 @@ F: drivers/i2c/busses/i2c-stm32*
122 +
123 + STABLE BRANCH
124 + M: Greg Kroah-Hartman <gregkh@×××××××××××××××.org>
125 ++M: Sasha Levin <sashal@××××××.org>
126 + L: stable@×××××××××××.org
127 + S: Supported
128 + F: Documentation/process/stable-kernel-rules.rst
129 +diff --git a/Makefile b/Makefile
130 +index a07830185bdf..20cbb8e84650 100644
131 +--- a/Makefile
132 ++++ b/Makefile
133 +@@ -1,7 +1,7 @@
134 + # SPDX-License-Identifier: GPL-2.0
135 + VERSION = 4
136 + PATCHLEVEL = 19
137 +-SUBLEVEL = 5
138 ++SUBLEVEL = 6
139 + EXTRAVERSION =
140 + NAME = "People's Front"
141 +
142 +diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
143 +index e0331e754568..b855f56489ac 100644
144 +--- a/arch/powerpc/include/asm/io.h
145 ++++ b/arch/powerpc/include/asm/io.h
146 +@@ -285,19 +285,13 @@ extern void _memcpy_toio(volatile void __iomem *dest, const void *src,
147 + * their hooks, a bitfield is reserved for use by the platform near the
148 + * top of MMIO addresses (not PIO, those have to cope the hard way).
149 + *
150 +- * This bit field is 12 bits and is at the top of the IO virtual
151 +- * addresses PCI_IO_INDIRECT_TOKEN_MASK.
152 ++ * The highest address in the kernel virtual space are:
153 + *
154 +- * The kernel virtual space is thus:
155 ++ * d0003fffffffffff # with Hash MMU
156 ++ * c00fffffffffffff # with Radix MMU
157 + *
158 +- * 0xD000000000000000 : vmalloc
159 +- * 0xD000080000000000 : PCI PHB IO space
160 +- * 0xD000080080000000 : ioremap
161 +- * 0xD0000fffffffffff : end of ioremap region
162 +- *
163 +- * Since the top 4 bits are reserved as the region ID, we use thus
164 +- * the next 12 bits and keep 4 bits available for the future if the
165 +- * virtual address space is ever to be extended.
166 ++ * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits
167 ++ * that can be used for the field.
168 + *
169 + * The direct IO mapping operations will then mask off those bits
170 + * before doing the actual access, though that only happen when
171 +@@ -309,8 +303,8 @@ extern void _memcpy_toio(volatile void __iomem *dest, const void *src,
172 + */
173 +
174 + #ifdef CONFIG_PPC_INDIRECT_MMIO
175 +-#define PCI_IO_IND_TOKEN_MASK 0x0fff000000000000ul
176 +-#define PCI_IO_IND_TOKEN_SHIFT 48
177 ++#define PCI_IO_IND_TOKEN_SHIFT 52
178 ++#define PCI_IO_IND_TOKEN_MASK (0xfful << PCI_IO_IND_TOKEN_SHIFT)
179 + #define PCI_FIX_ADDR(addr) \
180 + ((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK))
181 + #define PCI_GET_ADDR_TOKEN(addr) \
182 +diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
183 +index 491b0f715d6b..ea1d7c808319 100644
184 +--- a/arch/powerpc/kvm/trace.h
185 ++++ b/arch/powerpc/kvm/trace.h
186 +@@ -6,8 +6,6 @@
187 +
188 + #undef TRACE_SYSTEM
189 + #define TRACE_SYSTEM kvm
190 +-#define TRACE_INCLUDE_PATH .
191 +-#define TRACE_INCLUDE_FILE trace
192 +
193 + /*
194 + * Tracepoint for guest mode entry.
195 +@@ -120,4 +118,10 @@ TRACE_EVENT(kvm_check_requests,
196 + #endif /* _TRACE_KVM_H */
197 +
198 + /* This part must be outside protection */
199 ++#undef TRACE_INCLUDE_PATH
200 ++#undef TRACE_INCLUDE_FILE
201 ++
202 ++#define TRACE_INCLUDE_PATH .
203 ++#define TRACE_INCLUDE_FILE trace
204 ++
205 + #include <trace/define_trace.h>
206 +diff --git a/arch/powerpc/kvm/trace_booke.h b/arch/powerpc/kvm/trace_booke.h
207 +index ac640e81fdc5..3837842986aa 100644
208 +--- a/arch/powerpc/kvm/trace_booke.h
209 ++++ b/arch/powerpc/kvm/trace_booke.h
210 +@@ -6,8 +6,6 @@
211 +
212 + #undef TRACE_SYSTEM
213 + #define TRACE_SYSTEM kvm_booke
214 +-#define TRACE_INCLUDE_PATH .
215 +-#define TRACE_INCLUDE_FILE trace_booke
216 +
217 + #define kvm_trace_symbol_exit \
218 + {0, "CRITICAL"}, \
219 +@@ -218,4 +216,11 @@ TRACE_EVENT(kvm_booke_queue_irqprio,
220 + #endif
221 +
222 + /* This part must be outside protection */
223 ++
224 ++#undef TRACE_INCLUDE_PATH
225 ++#undef TRACE_INCLUDE_FILE
226 ++
227 ++#define TRACE_INCLUDE_PATH .
228 ++#define TRACE_INCLUDE_FILE trace_booke
229 ++
230 + #include <trace/define_trace.h>
231 +diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
232 +index bcfe8a987f6a..8a1e3b0047f1 100644
233 +--- a/arch/powerpc/kvm/trace_hv.h
234 ++++ b/arch/powerpc/kvm/trace_hv.h
235 +@@ -9,8 +9,6 @@
236 +
237 + #undef TRACE_SYSTEM
238 + #define TRACE_SYSTEM kvm_hv
239 +-#define TRACE_INCLUDE_PATH .
240 +-#define TRACE_INCLUDE_FILE trace_hv
241 +
242 + #define kvm_trace_symbol_hcall \
243 + {H_REMOVE, "H_REMOVE"}, \
244 +@@ -497,4 +495,11 @@ TRACE_EVENT(kvmppc_run_vcpu_exit,
245 + #endif /* _TRACE_KVM_HV_H */
246 +
247 + /* This part must be outside protection */
248 ++
249 ++#undef TRACE_INCLUDE_PATH
250 ++#undef TRACE_INCLUDE_FILE
251 ++
252 ++#define TRACE_INCLUDE_PATH .
253 ++#define TRACE_INCLUDE_FILE trace_hv
254 ++
255 + #include <trace/define_trace.h>
256 +diff --git a/arch/powerpc/kvm/trace_pr.h b/arch/powerpc/kvm/trace_pr.h
257 +index 2f9a8829552b..46a46d328fbf 100644
258 +--- a/arch/powerpc/kvm/trace_pr.h
259 ++++ b/arch/powerpc/kvm/trace_pr.h
260 +@@ -8,8 +8,6 @@
261 +
262 + #undef TRACE_SYSTEM
263 + #define TRACE_SYSTEM kvm_pr
264 +-#define TRACE_INCLUDE_PATH .
265 +-#define TRACE_INCLUDE_FILE trace_pr
266 +
267 + TRACE_EVENT(kvm_book3s_reenter,
268 + TP_PROTO(int r, struct kvm_vcpu *vcpu),
269 +@@ -257,4 +255,11 @@ TRACE_EVENT(kvm_exit,
270 + #endif /* _TRACE_KVM_H */
271 +
272 + /* This part must be outside protection */
273 ++
274 ++#undef TRACE_INCLUDE_PATH
275 ++#undef TRACE_INCLUDE_FILE
276 ++
277 ++#define TRACE_INCLUDE_PATH .
278 ++#define TRACE_INCLUDE_FILE trace_pr
279 ++
280 + #include <trace/define_trace.h>
281 +diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
282 +index 055b211b7126..5500e4edabc6 100644
283 +--- a/arch/powerpc/mm/numa.c
284 ++++ b/arch/powerpc/mm/numa.c
285 +@@ -1179,7 +1179,7 @@ static long vphn_get_associativity(unsigned long cpu,
286 +
287 + switch (rc) {
288 + case H_FUNCTION:
289 +- printk(KERN_INFO
290 ++ printk_once(KERN_INFO
291 + "VPHN is not supported. Disabling polling...\n");
292 + stop_topology_update();
293 + break;
294 +diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
295 +index 61ec42405ec9..110be14e6122 100644
296 +--- a/arch/riscv/Makefile
297 ++++ b/arch/riscv/Makefile
298 +@@ -82,4 +82,8 @@ core-y += arch/riscv/kernel/ arch/riscv/mm/
299 +
300 + libs-y += arch/riscv/lib/
301 +
302 ++PHONY += vdso_install
303 ++vdso_install:
304 ++ $(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso $@
305 ++
306 + all: vmlinux
307 +diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
308 +index 3303ed2cd419..7dd308129b40 100644
309 +--- a/arch/riscv/kernel/module.c
310 ++++ b/arch/riscv/kernel/module.c
311 +@@ -21,7 +21,7 @@ static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
312 + {
313 + if (v != (u32)v) {
314 + pr_err("%s: value %016llx out of range for 32-bit field\n",
315 +- me->name, v);
316 ++ me->name, (long long)v);
317 + return -EINVAL;
318 + }
319 + *location = v;
320 +@@ -102,7 +102,7 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
321 + if (offset != (s32)offset) {
322 + pr_err(
323 + "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
324 +- me->name, v, location);
325 ++ me->name, (long long)v, location);
326 + return -EINVAL;
327 + }
328 +
329 +@@ -144,7 +144,7 @@ static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
330 + if (IS_ENABLED(CMODEL_MEDLOW)) {
331 + pr_err(
332 + "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
333 +- me->name, v, location);
334 ++ me->name, (long long)v, location);
335 + return -EINVAL;
336 + }
337 +
338 +@@ -188,7 +188,7 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
339 + } else {
340 + pr_err(
341 + "%s: can not generate the GOT entry for symbol = %016llx from PC = %p\n",
342 +- me->name, v, location);
343 ++ me->name, (long long)v, location);
344 + return -EINVAL;
345 + }
346 +
347 +@@ -212,7 +212,7 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
348 + } else {
349 + pr_err(
350 + "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
351 +- me->name, v, location);
352 ++ me->name, (long long)v, location);
353 + return -EINVAL;
354 + }
355 + }
356 +@@ -234,7 +234,7 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
357 + if (offset != fill_v) {
358 + pr_err(
359 + "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
360 +- me->name, v, location);
361 ++ me->name, (long long)v, location);
362 + return -EINVAL;
363 + }
364 +
365 +diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
366 +index 8527c3e1038b..bfa25814fe5f 100644
367 +--- a/arch/x86/events/intel/uncore_snb.c
368 ++++ b/arch/x86/events/intel/uncore_snb.c
369 +@@ -15,6 +15,25 @@
370 + #define PCI_DEVICE_ID_INTEL_SKL_HQ_IMC 0x1910
371 + #define PCI_DEVICE_ID_INTEL_SKL_SD_IMC 0x190f
372 + #define PCI_DEVICE_ID_INTEL_SKL_SQ_IMC 0x191f
373 ++#define PCI_DEVICE_ID_INTEL_KBL_Y_IMC 0x590c
374 ++#define PCI_DEVICE_ID_INTEL_KBL_U_IMC 0x5904
375 ++#define PCI_DEVICE_ID_INTEL_KBL_UQ_IMC 0x5914
376 ++#define PCI_DEVICE_ID_INTEL_KBL_SD_IMC 0x590f
377 ++#define PCI_DEVICE_ID_INTEL_KBL_SQ_IMC 0x591f
378 ++#define PCI_DEVICE_ID_INTEL_CFL_2U_IMC 0x3ecc
379 ++#define PCI_DEVICE_ID_INTEL_CFL_4U_IMC 0x3ed0
380 ++#define PCI_DEVICE_ID_INTEL_CFL_4H_IMC 0x3e10
381 ++#define PCI_DEVICE_ID_INTEL_CFL_6H_IMC 0x3ec4
382 ++#define PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC 0x3e0f
383 ++#define PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC 0x3e1f
384 ++#define PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC 0x3ec2
385 ++#define PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC 0x3e30
386 ++#define PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC 0x3e18
387 ++#define PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC 0x3ec6
388 ++#define PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC 0x3e31
389 ++#define PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC 0x3e33
390 ++#define PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC 0x3eca
391 ++#define PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC 0x3e32
392 +
393 + /* SNB event control */
394 + #define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff
395 +@@ -569,7 +588,82 @@ static const struct pci_device_id skl_uncore_pci_ids[] = {
396 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SQ_IMC),
397 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
398 + },
399 +-
400 ++ { /* IMC */
401 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_Y_IMC),
402 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
403 ++ },
404 ++ { /* IMC */
405 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_U_IMC),
406 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
407 ++ },
408 ++ { /* IMC */
409 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_UQ_IMC),
410 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
411 ++ },
412 ++ { /* IMC */
413 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SD_IMC),
414 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
415 ++ },
416 ++ { /* IMC */
417 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBL_SQ_IMC),
418 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
419 ++ },
420 ++ { /* IMC */
421 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2U_IMC),
422 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
423 ++ },
424 ++ { /* IMC */
425 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4U_IMC),
426 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
427 ++ },
428 ++ { /* IMC */
429 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4H_IMC),
430 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
431 ++ },
432 ++ { /* IMC */
433 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6H_IMC),
434 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
435 ++ },
436 ++ { /* IMC */
437 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_2S_D_IMC),
438 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
439 ++ },
440 ++ { /* IMC */
441 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_D_IMC),
442 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
443 ++ },
444 ++ { /* IMC */
445 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_D_IMC),
446 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
447 ++ },
448 ++ { /* IMC */
449 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_D_IMC),
450 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
451 ++ },
452 ++ { /* IMC */
453 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_W_IMC),
454 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
455 ++ },
456 ++ { /* IMC */
457 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_W_IMC),
458 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
459 ++ },
460 ++ { /* IMC */
461 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_W_IMC),
462 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
463 ++ },
464 ++ { /* IMC */
465 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_4S_S_IMC),
466 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
467 ++ },
468 ++ { /* IMC */
469 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_6S_S_IMC),
470 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
471 ++ },
472 ++ { /* IMC */
473 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CFL_8S_S_IMC),
474 ++ .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
475 ++ },
476 + { /* end: all zeroes */ },
477 + };
478 +
479 +@@ -618,6 +712,25 @@ static const struct imc_uncore_pci_dev desktop_imc_pci_ids[] = {
480 + IMC_DEV(SKL_HQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core H Quad Core */
481 + IMC_DEV(SKL_SD_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Dual Core */
482 + IMC_DEV(SKL_SQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Quad Core */
483 ++ IMC_DEV(KBL_Y_IMC, &skl_uncore_pci_driver), /* 7th Gen Core Y */
484 ++ IMC_DEV(KBL_U_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U */
485 ++ IMC_DEV(KBL_UQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core U Quad Core */
486 ++ IMC_DEV(KBL_SD_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Dual Core */
487 ++ IMC_DEV(KBL_SQ_IMC, &skl_uncore_pci_driver), /* 7th Gen Core S Quad Core */
488 ++ IMC_DEV(CFL_2U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 2 Cores */
489 ++ IMC_DEV(CFL_4U_IMC, &skl_uncore_pci_driver), /* 8th Gen Core U 4 Cores */
490 ++ IMC_DEV(CFL_4H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 4 Cores */
491 ++ IMC_DEV(CFL_6H_IMC, &skl_uncore_pci_driver), /* 8th Gen Core H 6 Cores */
492 ++ IMC_DEV(CFL_2S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 2 Cores Desktop */
493 ++ IMC_DEV(CFL_4S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Desktop */
494 ++ IMC_DEV(CFL_6S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Desktop */
495 ++ IMC_DEV(CFL_8S_D_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Desktop */
496 ++ IMC_DEV(CFL_4S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Work Station */
497 ++ IMC_DEV(CFL_6S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Work Station */
498 ++ IMC_DEV(CFL_8S_W_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Work Station */
499 ++ IMC_DEV(CFL_4S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 4 Cores Server */
500 ++ IMC_DEV(CFL_6S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 6 Cores Server */
501 ++ IMC_DEV(CFL_8S_S_IMC, &skl_uncore_pci_driver), /* 8th Gen Core S 8 Cores Server */
502 + { /* end marker */ }
503 + };
504 +
505 +diff --git a/block/bio.c b/block/bio.c
506 +index 41173710430c..c4ef8aa46452 100644
507 +--- a/block/bio.c
508 ++++ b/block/bio.c
509 +@@ -605,6 +605,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
510 + if (bio_flagged(bio_src, BIO_THROTTLED))
511 + bio_set_flag(bio, BIO_THROTTLED);
512 + bio->bi_opf = bio_src->bi_opf;
513 ++ bio->bi_ioprio = bio_src->bi_ioprio;
514 + bio->bi_write_hint = bio_src->bi_write_hint;
515 + bio->bi_iter = bio_src->bi_iter;
516 + bio->bi_io_vec = bio_src->bi_io_vec;
517 +diff --git a/block/bounce.c b/block/bounce.c
518 +index 418677dcec60..abb50e7e5fab 100644
519 +--- a/block/bounce.c
520 ++++ b/block/bounce.c
521 +@@ -248,6 +248,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
522 + return NULL;
523 + bio->bi_disk = bio_src->bi_disk;
524 + bio->bi_opf = bio_src->bi_opf;
525 ++ bio->bi_ioprio = bio_src->bi_ioprio;
526 + bio->bi_write_hint = bio_src->bi_write_hint;
527 + bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector;
528 + bio->bi_iter.bi_size = bio_src->bi_iter.bi_size;
529 +diff --git a/crypto/simd.c b/crypto/simd.c
530 +index ea7240be3001..78e8d037ae2b 100644
531 +--- a/crypto/simd.c
532 ++++ b/crypto/simd.c
533 +@@ -124,8 +124,9 @@ static int simd_skcipher_init(struct crypto_skcipher *tfm)
534 +
535 + ctx->cryptd_tfm = cryptd_tfm;
536 +
537 +- reqsize = sizeof(struct skcipher_request);
538 +- reqsize += crypto_skcipher_reqsize(&cryptd_tfm->base);
539 ++ reqsize = crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm));
540 ++ reqsize = max(reqsize, crypto_skcipher_reqsize(&cryptd_tfm->base));
541 ++ reqsize += sizeof(struct skcipher_request);
542 +
543 + crypto_skcipher_set_reqsize(tfm, reqsize);
544 +
545 +diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
546 +index e9fb0bf3c8d2..78f9de260d5f 100644
547 +--- a/drivers/acpi/acpica/dsopcode.c
548 ++++ b/drivers/acpi/acpica/dsopcode.c
549 +@@ -417,6 +417,10 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
550 + ACPI_FORMAT_UINT64(obj_desc->region.address),
551 + obj_desc->region.length));
552 +
553 ++ status = acpi_ut_add_address_range(obj_desc->region.space_id,
554 ++ obj_desc->region.address,
555 ++ obj_desc->region.length, node);
556 ++
557 + /* Now the address and length are valid for this opregion */
558 +
559 + obj_desc->region.flags |= AOPOBJ_DATA_VALID;
560 +diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
561 +index f2b6f4da1034..fdabd0b74492 100644
562 +--- a/drivers/block/floppy.c
563 ++++ b/drivers/block/floppy.c
564 +@@ -4151,10 +4151,11 @@ static int __floppy_read_block_0(struct block_device *bdev, int drive)
565 + bio.bi_end_io = floppy_rb0_cb;
566 + bio_set_op_attrs(&bio, REQ_OP_READ, 0);
567 +
568 ++ init_completion(&cbdata.complete);
569 ++
570 + submit_bio(&bio);
571 + process_fd_request();
572 +
573 +- init_completion(&cbdata.complete);
574 + wait_for_completion(&cbdata.complete);
575 +
576 + __free_page(page);
577 +diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
578 +index b2ff423ad7f8..f4880a4f865b 100644
579 +--- a/drivers/cpufreq/imx6q-cpufreq.c
580 ++++ b/drivers/cpufreq/imx6q-cpufreq.c
581 +@@ -159,8 +159,13 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
582 + /* Ensure the arm clock divider is what we expect */
583 + ret = clk_set_rate(clks[ARM].clk, new_freq * 1000);
584 + if (ret) {
585 ++ int ret1;
586 ++
587 + dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);
588 +- regulator_set_voltage_tol(arm_reg, volt_old, 0);
589 ++ ret1 = regulator_set_voltage_tol(arm_reg, volt_old, 0);
590 ++ if (ret1)
591 ++ dev_warn(cpu_dev,
592 ++ "failed to restore vddarm voltage: %d\n", ret1);
593 + return ret;
594 + }
595 +
596 +diff --git a/drivers/firmware/efi/arm-init.c b/drivers/firmware/efi/arm-init.c
597 +index 388a929baf95..1a6a77df8a5e 100644
598 +--- a/drivers/firmware/efi/arm-init.c
599 ++++ b/drivers/firmware/efi/arm-init.c
600 +@@ -265,6 +265,10 @@ void __init efi_init(void)
601 + (params.mmap & ~PAGE_MASK)));
602 +
603 + init_screen_info();
604 ++
605 ++ /* ARM does not permit early mappings to persist across paging_init() */
606 ++ if (IS_ENABLED(CONFIG_ARM))
607 ++ efi_memmap_unmap();
608 + }
609 +
610 + static int __init register_gop_device(void)
611 +diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
612 +index 922cfb813109..a00934d263c5 100644
613 +--- a/drivers/firmware/efi/arm-runtime.c
614 ++++ b/drivers/firmware/efi/arm-runtime.c
615 +@@ -110,7 +110,7 @@ static int __init arm_enable_runtime_services(void)
616 + {
617 + u64 mapsize;
618 +
619 +- if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {
620 ++ if (!efi_enabled(EFI_BOOT)) {
621 + pr_info("EFI services will not be available.\n");
622 + return 0;
623 + }
624 +diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
625 +index 14c40a7750d1..c51627660dbb 100644
626 +--- a/drivers/firmware/efi/libstub/Makefile
627 ++++ b/drivers/firmware/efi/libstub/Makefile
628 +@@ -16,7 +16,8 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ -O2 \
629 + cflags-$(CONFIG_ARM64) := $(subst -pg,,$(KBUILD_CFLAGS)) -fpie \
630 + $(DISABLE_STACKLEAK_PLUGIN)
631 + cflags-$(CONFIG_ARM) := $(subst -pg,,$(KBUILD_CFLAGS)) \
632 +- -fno-builtin -fpic -mno-single-pic-base
633 ++ -fno-builtin -fpic \
634 ++ $(call cc-option,-mno-single-pic-base)
635 +
636 + cflags-$(CONFIG_EFI_ARMSTUB) += -I$(srctree)/scripts/dtc/libfdt
637 +
638 +diff --git a/drivers/firmware/efi/memmap.c b/drivers/firmware/efi/memmap.c
639 +index 5fc70520e04c..1907db2b38d8 100644
640 +--- a/drivers/firmware/efi/memmap.c
641 ++++ b/drivers/firmware/efi/memmap.c
642 +@@ -118,6 +118,9 @@ int __init efi_memmap_init_early(struct efi_memory_map_data *data)
643 +
644 + void __init efi_memmap_unmap(void)
645 + {
646 ++ if (!efi_enabled(EFI_MEMMAP))
647 ++ return;
648 ++
649 + if (!efi.memmap.late) {
650 + unsigned long size;
651 +
652 +diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
653 +index 25187403e3ac..a8e01d99919c 100644
654 +--- a/drivers/gpio/gpiolib.c
655 ++++ b/drivers/gpio/gpiolib.c
656 +@@ -1285,7 +1285,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *chip, void *data,
657 + gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);
658 + if (!gdev->descs) {
659 + status = -ENOMEM;
660 +- goto err_free_gdev;
661 ++ goto err_free_ida;
662 + }
663 +
664 + if (chip->ngpio == 0) {
665 +@@ -1413,8 +1413,9 @@ err_free_label:
666 + kfree_const(gdev->label);
667 + err_free_descs:
668 + kfree(gdev->descs);
669 +-err_free_gdev:
670 ++err_free_ida:
671 + ida_simple_remove(&gpio_ida, gdev->id);
672 ++err_free_gdev:
673 + /* failures here can mean systems won't boot... */
674 + pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__,
675 + gdev->base, gdev->base + gdev->ngpio - 1,
676 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
677 +index 0c791e35acf0..79bd8bd97fae 100644
678 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
679 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
680 +@@ -496,8 +496,11 @@ void amdgpu_amdkfd_set_compute_idle(struct kgd_dev *kgd, bool idle)
681 + {
682 + struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
683 +
684 +- amdgpu_dpm_switch_power_profile(adev,
685 +- PP_SMC_POWER_PROFILE_COMPUTE, !idle);
686 ++ if (adev->powerplay.pp_funcs &&
687 ++ adev->powerplay.pp_funcs->switch_power_profile)
688 ++ amdgpu_dpm_switch_power_profile(adev,
689 ++ PP_SMC_POWER_PROFILE_COMPUTE,
690 ++ !idle);
691 + }
692 +
693 + bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid)
694 +diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
695 +index ad151fefa41f..db406a35808f 100644
696 +--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
697 ++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
698 +@@ -45,6 +45,7 @@ MODULE_FIRMWARE("amdgpu/tahiti_mc.bin");
699 + MODULE_FIRMWARE("amdgpu/pitcairn_mc.bin");
700 + MODULE_FIRMWARE("amdgpu/verde_mc.bin");
701 + MODULE_FIRMWARE("amdgpu/oland_mc.bin");
702 ++MODULE_FIRMWARE("amdgpu/hainan_mc.bin");
703 + MODULE_FIRMWARE("amdgpu/si58_mc.bin");
704 +
705 + #define MC_SEQ_MISC0__MT__MASK 0xf0000000
706 +diff --git a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
707 +index 5ae5ed2e62d6..21bc12e02311 100644
708 +--- a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
709 ++++ b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
710 +@@ -129,7 +129,7 @@ static int vega10_ih_irq_init(struct amdgpu_device *adev)
711 + else
712 + wptr_off = adev->wb.gpu_addr + (adev->irq.ih.wptr_offs * 4);
713 + WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_LO, lower_32_bits(wptr_off));
714 +- WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFF);
715 ++ WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI, upper_32_bits(wptr_off) & 0xFFFF);
716 +
717 + /* set rptr, wptr to 0 */
718 + WREG32_SOC15(OSSSYS, 0, mmIH_RB_RPTR, 0);
719 +diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
720 +index 69dab82a3771..bf589c53b908 100644
721 +--- a/drivers/gpu/drm/ast/ast_drv.c
722 ++++ b/drivers/gpu/drm/ast/ast_drv.c
723 +@@ -60,8 +60,29 @@ static const struct pci_device_id pciidlist[] = {
724 +
725 + MODULE_DEVICE_TABLE(pci, pciidlist);
726 +
727 ++static void ast_kick_out_firmware_fb(struct pci_dev *pdev)
728 ++{
729 ++ struct apertures_struct *ap;
730 ++ bool primary = false;
731 ++
732 ++ ap = alloc_apertures(1);
733 ++ if (!ap)
734 ++ return;
735 ++
736 ++ ap->ranges[0].base = pci_resource_start(pdev, 0);
737 ++ ap->ranges[0].size = pci_resource_len(pdev, 0);
738 ++
739 ++#ifdef CONFIG_X86
740 ++ primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
741 ++#endif
742 ++ drm_fb_helper_remove_conflicting_framebuffers(ap, "astdrmfb", primary);
743 ++ kfree(ap);
744 ++}
745 ++
746 + static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
747 + {
748 ++ ast_kick_out_firmware_fb(pdev);
749 ++
750 + return drm_get_pci_dev(pdev, ent, &driver);
751 + }
752 +
753 +diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
754 +index 5e77d456d9bb..7c6ac3cadb6b 100644
755 +--- a/drivers/gpu/drm/ast/ast_mode.c
756 ++++ b/drivers/gpu/drm/ast/ast_mode.c
757 +@@ -568,6 +568,7 @@ static int ast_crtc_do_set_base(struct drm_crtc *crtc,
758 + }
759 + ast_bo_unreserve(bo);
760 +
761 ++ ast_set_offset_reg(crtc);
762 + ast_set_start_address_crt1(crtc, (u32)gpu_addr);
763 +
764 + return 0;
765 +@@ -1254,7 +1255,7 @@ static int ast_cursor_move(struct drm_crtc *crtc,
766 + ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, ((y >> 8) & 0x07));
767 +
768 + /* dummy write to fire HWC */
769 +- ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xCB, 0xFF, 0x00);
770 ++ ast_show_cursor(crtc);
771 +
772 + return 0;
773 + }
774 +diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
775 +index 9628dd617826..9214c8b02484 100644
776 +--- a/drivers/gpu/drm/drm_fb_helper.c
777 ++++ b/drivers/gpu/drm/drm_fb_helper.c
778 +@@ -200,6 +200,9 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
779 + mutex_lock(&fb_helper->lock);
780 + drm_connector_list_iter_begin(dev, &conn_iter);
781 + drm_for_each_connector_iter(connector, &conn_iter) {
782 ++ if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
783 ++ continue;
784 ++
785 + ret = __drm_fb_helper_add_one_connector(fb_helper, connector);
786 + if (ret)
787 + goto fail;
788 +diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
789 +index 43ae9de12ba3..c3a64d6a18df 100644
790 +--- a/drivers/gpu/drm/i915/intel_pm.c
791 ++++ b/drivers/gpu/drm/i915/intel_pm.c
792 +@@ -2492,6 +2492,9 @@ static uint32_t ilk_compute_pri_wm(const struct intel_crtc_state *cstate,
793 + uint32_t method1, method2;
794 + int cpp;
795 +
796 ++ if (mem_value == 0)
797 ++ return U32_MAX;
798 ++
799 + if (!intel_wm_plane_visible(cstate, pstate))
800 + return 0;
801 +
802 +@@ -2521,6 +2524,9 @@ static uint32_t ilk_compute_spr_wm(const struct intel_crtc_state *cstate,
803 + uint32_t method1, method2;
804 + int cpp;
805 +
806 ++ if (mem_value == 0)
807 ++ return U32_MAX;
808 ++
809 + if (!intel_wm_plane_visible(cstate, pstate))
810 + return 0;
811 +
812 +@@ -2544,6 +2550,9 @@ static uint32_t ilk_compute_cur_wm(const struct intel_crtc_state *cstate,
813 + {
814 + int cpp;
815 +
816 ++ if (mem_value == 0)
817 ++ return U32_MAX;
818 ++
819 + if (!intel_wm_plane_visible(cstate, pstate))
820 + return 0;
821 +
822 +@@ -2998,6 +3007,34 @@ static void snb_wm_latency_quirk(struct drm_i915_private *dev_priv)
823 + intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
824 + }
825 +
826 ++static void snb_wm_lp3_irq_quirk(struct drm_i915_private *dev_priv)
827 ++{
828 ++ /*
829 ++ * On some SNB machines (Thinkpad X220 Tablet at least)
830 ++ * LP3 usage can cause vblank interrupts to be lost.
831 ++ * The DEIIR bit will go high but it looks like the CPU
832 ++ * never gets interrupted.
833 ++ *
834 ++ * It's not clear whether other interrupt source could
835 ++ * be affected or if this is somehow limited to vblank
836 ++ * interrupts only. To play it safe we disable LP3
837 ++ * watermarks entirely.
838 ++ */
839 ++ if (dev_priv->wm.pri_latency[3] == 0 &&
840 ++ dev_priv->wm.spr_latency[3] == 0 &&
841 ++ dev_priv->wm.cur_latency[3] == 0)
842 ++ return;
843 ++
844 ++ dev_priv->wm.pri_latency[3] = 0;
845 ++ dev_priv->wm.spr_latency[3] = 0;
846 ++ dev_priv->wm.cur_latency[3] = 0;
847 ++
848 ++ DRM_DEBUG_KMS("LP3 watermarks disabled due to potential for lost interrupts\n");
849 ++ intel_print_wm_latency(dev_priv, "Primary", dev_priv->wm.pri_latency);
850 ++ intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);
851 ++ intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
852 ++}
853 ++
854 + static void ilk_setup_wm_latency(struct drm_i915_private *dev_priv)
855 + {
856 + intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency);
857 +@@ -3014,8 +3051,10 @@ static void ilk_setup_wm_latency(struct drm_i915_private *dev_priv)
858 + intel_print_wm_latency(dev_priv, "Sprite", dev_priv->wm.spr_latency);
859 + intel_print_wm_latency(dev_priv, "Cursor", dev_priv->wm.cur_latency);
860 +
861 +- if (IS_GEN6(dev_priv))
862 ++ if (IS_GEN6(dev_priv)) {
863 + snb_wm_latency_quirk(dev_priv);
864 ++ snb_wm_lp3_irq_quirk(dev_priv);
865 ++ }
866 + }
867 +
868 + static void skl_setup_wm_latency(struct drm_i915_private *dev_priv)
869 +diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
870 +index ca5aa7fba769..f4d8a730e821 100644
871 +--- a/drivers/gpu/drm/vc4/vc4_kms.c
872 ++++ b/drivers/gpu/drm/vc4/vc4_kms.c
873 +@@ -216,6 +216,12 @@ static int vc4_atomic_commit(struct drm_device *dev,
874 + return 0;
875 + }
876 +
877 ++ /* We know for sure we don't want an async update here. Set
878 ++ * state->legacy_cursor_update to false to prevent
879 ++ * drm_atomic_helper_setup_commit() from auto-completing
880 ++ * commit->flip_done.
881 ++ */
882 ++ state->legacy_cursor_update = false;
883 + ret = drm_atomic_helper_setup_commit(state, nonblock);
884 + if (ret)
885 + return ret;
886 +diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
887 +index 0422ec2b13d2..dc4128bfe2ca 100644
888 +--- a/drivers/hid/hid-steam.c
889 ++++ b/drivers/hid/hid-steam.c
890 +@@ -23,8 +23,9 @@
891 + * In order to avoid breaking them this driver creates a layered hidraw device,
892 + * so it can detect when the client is running and then:
893 + * - it will not send any command to the controller.
894 +- * - this input device will be disabled, to avoid double input of the same
895 ++ * - this input device will be removed, to avoid double input of the same
896 + * user action.
897 ++ * When the client is closed, this input device will be created again.
898 + *
899 + * For additional functions, such as changing the right-pad margin or switching
900 + * the led, you can use the user-space tool at:
901 +@@ -113,7 +114,7 @@ struct steam_device {
902 + spinlock_t lock;
903 + struct hid_device *hdev, *client_hdev;
904 + struct mutex mutex;
905 +- bool client_opened, input_opened;
906 ++ bool client_opened;
907 + struct input_dev __rcu *input;
908 + unsigned long quirks;
909 + struct work_struct work_connect;
910 +@@ -279,18 +280,6 @@ static void steam_set_lizard_mode(struct steam_device *steam, bool enable)
911 + }
912 + }
913 +
914 +-static void steam_update_lizard_mode(struct steam_device *steam)
915 +-{
916 +- mutex_lock(&steam->mutex);
917 +- if (!steam->client_opened) {
918 +- if (steam->input_opened)
919 +- steam_set_lizard_mode(steam, false);
920 +- else
921 +- steam_set_lizard_mode(steam, lizard_mode);
922 +- }
923 +- mutex_unlock(&steam->mutex);
924 +-}
925 +-
926 + static int steam_input_open(struct input_dev *dev)
927 + {
928 + struct steam_device *steam = input_get_drvdata(dev);
929 +@@ -301,7 +290,6 @@ static int steam_input_open(struct input_dev *dev)
930 + return ret;
931 +
932 + mutex_lock(&steam->mutex);
933 +- steam->input_opened = true;
934 + if (!steam->client_opened && lizard_mode)
935 + steam_set_lizard_mode(steam, false);
936 + mutex_unlock(&steam->mutex);
937 +@@ -313,7 +301,6 @@ static void steam_input_close(struct input_dev *dev)
938 + struct steam_device *steam = input_get_drvdata(dev);
939 +
940 + mutex_lock(&steam->mutex);
941 +- steam->input_opened = false;
942 + if (!steam->client_opened && lizard_mode)
943 + steam_set_lizard_mode(steam, true);
944 + mutex_unlock(&steam->mutex);
945 +@@ -400,7 +387,7 @@ static int steam_battery_register(struct steam_device *steam)
946 + return 0;
947 + }
948 +
949 +-static int steam_register(struct steam_device *steam)
950 ++static int steam_input_register(struct steam_device *steam)
951 + {
952 + struct hid_device *hdev = steam->hdev;
953 + struct input_dev *input;
954 +@@ -414,17 +401,6 @@ static int steam_register(struct steam_device *steam)
955 + return 0;
956 + }
957 +
958 +- /*
959 +- * Unlikely, but getting the serial could fail, and it is not so
960 +- * important, so make up a serial number and go on.
961 +- */
962 +- if (steam_get_serial(steam) < 0)
963 +- strlcpy(steam->serial_no, "XXXXXXXXXX",
964 +- sizeof(steam->serial_no));
965 +-
966 +- hid_info(hdev, "Steam Controller '%s' connected",
967 +- steam->serial_no);
968 +-
969 + input = input_allocate_device();
970 + if (!input)
971 + return -ENOMEM;
972 +@@ -492,11 +468,6 @@ static int steam_register(struct steam_device *steam)
973 + goto input_register_fail;
974 +
975 + rcu_assign_pointer(steam->input, input);
976 +-
977 +- /* ignore battery errors, we can live without it */
978 +- if (steam->quirks & STEAM_QUIRK_WIRELESS)
979 +- steam_battery_register(steam);
980 +-
981 + return 0;
982 +
983 + input_register_fail:
984 +@@ -504,27 +475,88 @@ input_register_fail:
985 + return ret;
986 + }
987 +
988 +-static void steam_unregister(struct steam_device *steam)
989 ++static void steam_input_unregister(struct steam_device *steam)
990 + {
991 + struct input_dev *input;
992 ++ rcu_read_lock();
993 ++ input = rcu_dereference(steam->input);
994 ++ rcu_read_unlock();
995 ++ if (!input)
996 ++ return;
997 ++ RCU_INIT_POINTER(steam->input, NULL);
998 ++ synchronize_rcu();
999 ++ input_unregister_device(input);
1000 ++}
1001 ++
1002 ++static void steam_battery_unregister(struct steam_device *steam)
1003 ++{
1004 + struct power_supply *battery;
1005 +
1006 + rcu_read_lock();
1007 +- input = rcu_dereference(steam->input);
1008 + battery = rcu_dereference(steam->battery);
1009 + rcu_read_unlock();
1010 +
1011 +- if (battery) {
1012 +- RCU_INIT_POINTER(steam->battery, NULL);
1013 +- synchronize_rcu();
1014 +- power_supply_unregister(battery);
1015 ++ if (!battery)
1016 ++ return;
1017 ++ RCU_INIT_POINTER(steam->battery, NULL);
1018 ++ synchronize_rcu();
1019 ++ power_supply_unregister(battery);
1020 ++}
1021 ++
1022 ++static int steam_register(struct steam_device *steam)
1023 ++{
1024 ++ int ret;
1025 ++
1026 ++ /*
1027 ++ * This function can be called several times in a row with the
1028 ++ * wireless adaptor, without steam_unregister() between them, because
1029 ++ * another client send a get_connection_status command, for example.
1030 ++ * The battery and serial number are set just once per device.
1031 ++ */
1032 ++ if (!steam->serial_no[0]) {
1033 ++ /*
1034 ++ * Unlikely, but getting the serial could fail, and it is not so
1035 ++ * important, so make up a serial number and go on.
1036 ++ */
1037 ++ if (steam_get_serial(steam) < 0)
1038 ++ strlcpy(steam->serial_no, "XXXXXXXXXX",
1039 ++ sizeof(steam->serial_no));
1040 ++
1041 ++ hid_info(steam->hdev, "Steam Controller '%s' connected",
1042 ++ steam->serial_no);
1043 ++
1044 ++ /* ignore battery errors, we can live without it */
1045 ++ if (steam->quirks & STEAM_QUIRK_WIRELESS)
1046 ++ steam_battery_register(steam);
1047 ++
1048 ++ mutex_lock(&steam_devices_lock);
1049 ++ list_add(&steam->list, &steam_devices);
1050 ++ mutex_unlock(&steam_devices_lock);
1051 + }
1052 +- if (input) {
1053 +- RCU_INIT_POINTER(steam->input, NULL);
1054 +- synchronize_rcu();
1055 ++
1056 ++ mutex_lock(&steam->mutex);
1057 ++ if (!steam->client_opened) {
1058 ++ steam_set_lizard_mode(steam, lizard_mode);
1059 ++ ret = steam_input_register(steam);
1060 ++ } else {
1061 ++ ret = 0;
1062 ++ }
1063 ++ mutex_unlock(&steam->mutex);
1064 ++
1065 ++ return ret;
1066 ++}
1067 ++
1068 ++static void steam_unregister(struct steam_device *steam)
1069 ++{
1070 ++ steam_battery_unregister(steam);
1071 ++ steam_input_unregister(steam);
1072 ++ if (steam->serial_no[0]) {
1073 + hid_info(steam->hdev, "Steam Controller '%s' disconnected",
1074 + steam->serial_no);
1075 +- input_unregister_device(input);
1076 ++ mutex_lock(&steam_devices_lock);
1077 ++ list_del(&steam->list);
1078 ++ mutex_unlock(&steam_devices_lock);
1079 ++ steam->serial_no[0] = 0;
1080 + }
1081 + }
1082 +
1083 +@@ -600,6 +632,9 @@ static int steam_client_ll_open(struct hid_device *hdev)
1084 + mutex_lock(&steam->mutex);
1085 + steam->client_opened = true;
1086 + mutex_unlock(&steam->mutex);
1087 ++
1088 ++ steam_input_unregister(steam);
1089 ++
1090 + return ret;
1091 + }
1092 +
1093 +@@ -609,13 +644,13 @@ static void steam_client_ll_close(struct hid_device *hdev)
1094 +
1095 + mutex_lock(&steam->mutex);
1096 + steam->client_opened = false;
1097 +- if (steam->input_opened)
1098 +- steam_set_lizard_mode(steam, false);
1099 +- else
1100 +- steam_set_lizard_mode(steam, lizard_mode);
1101 + mutex_unlock(&steam->mutex);
1102 +
1103 + hid_hw_close(steam->hdev);
1104 ++ if (steam->connected) {
1105 ++ steam_set_lizard_mode(steam, lizard_mode);
1106 ++ steam_input_register(steam);
1107 ++ }
1108 + }
1109 +
1110 + static int steam_client_ll_raw_request(struct hid_device *hdev,
1111 +@@ -744,11 +779,6 @@ static int steam_probe(struct hid_device *hdev,
1112 + }
1113 + }
1114 +
1115 +- mutex_lock(&steam_devices_lock);
1116 +- steam_update_lizard_mode(steam);
1117 +- list_add(&steam->list, &steam_devices);
1118 +- mutex_unlock(&steam_devices_lock);
1119 +-
1120 + return 0;
1121 +
1122 + hid_hw_open_fail:
1123 +@@ -774,10 +804,6 @@ static void steam_remove(struct hid_device *hdev)
1124 + return;
1125 + }
1126 +
1127 +- mutex_lock(&steam_devices_lock);
1128 +- list_del(&steam->list);
1129 +- mutex_unlock(&steam_devices_lock);
1130 +-
1131 + hid_destroy_device(steam->client_hdev);
1132 + steam->client_opened = false;
1133 + cancel_work_sync(&steam->work_connect);
1134 +@@ -792,12 +818,14 @@ static void steam_remove(struct hid_device *hdev)
1135 + static void steam_do_connect_event(struct steam_device *steam, bool connected)
1136 + {
1137 + unsigned long flags;
1138 ++ bool changed;
1139 +
1140 + spin_lock_irqsave(&steam->lock, flags);
1141 ++ changed = steam->connected != connected;
1142 + steam->connected = connected;
1143 + spin_unlock_irqrestore(&steam->lock, flags);
1144 +
1145 +- if (schedule_work(&steam->work_connect) == 0)
1146 ++ if (changed && schedule_work(&steam->work_connect) == 0)
1147 + dbg_hid("%s: connected=%d event already queued\n",
1148 + __func__, connected);
1149 + }
1150 +@@ -1019,13 +1047,8 @@ static int steam_raw_event(struct hid_device *hdev,
1151 + return 0;
1152 + rcu_read_lock();
1153 + input = rcu_dereference(steam->input);
1154 +- if (likely(input)) {
1155 ++ if (likely(input))
1156 + steam_do_input_event(steam, input, data);
1157 +- } else {
1158 +- dbg_hid("%s: input data without connect event\n",
1159 +- __func__);
1160 +- steam_do_connect_event(steam, true);
1161 +- }
1162 + rcu_read_unlock();
1163 + break;
1164 + case STEAM_EV_CONNECT:
1165 +@@ -1074,7 +1097,10 @@ static int steam_param_set_lizard_mode(const char *val,
1166 +
1167 + mutex_lock(&steam_devices_lock);
1168 + list_for_each_entry(steam, &steam_devices, list) {
1169 +- steam_update_lizard_mode(steam);
1170 ++ mutex_lock(&steam->mutex);
1171 ++ if (!steam->client_opened)
1172 ++ steam_set_lizard_mode(steam, lizard_mode);
1173 ++ mutex_unlock(&steam->mutex);
1174 + }
1175 + mutex_unlock(&steam_devices_lock);
1176 + return 0;
1177 +diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
1178 +index 5c88706121c1..39134dd305f5 100644
1179 +--- a/drivers/infiniband/hw/hfi1/user_sdma.c
1180 ++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
1181 +@@ -328,7 +328,6 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
1182 + u8 opcode, sc, vl;
1183 + u16 pkey;
1184 + u32 slid;
1185 +- int req_queued = 0;
1186 + u16 dlid;
1187 + u32 selector;
1188 +
1189 +@@ -392,7 +391,6 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
1190 + req->data_len = 0;
1191 + req->pq = pq;
1192 + req->cq = cq;
1193 +- req->status = -1;
1194 + req->ahg_idx = -1;
1195 + req->iov_idx = 0;
1196 + req->sent = 0;
1197 +@@ -400,12 +398,14 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
1198 + req->seqcomp = 0;
1199 + req->seqsubmitted = 0;
1200 + req->tids = NULL;
1201 +- req->done = 0;
1202 + req->has_error = 0;
1203 + INIT_LIST_HEAD(&req->txps);
1204 +
1205 + memcpy(&req->info, &info, sizeof(info));
1206 +
1207 ++ /* The request is initialized, count it */
1208 ++ atomic_inc(&pq->n_reqs);
1209 ++
1210 + if (req_opcode(info.ctrl) == EXPECTED) {
1211 + /* expected must have a TID info and at least one data vector */
1212 + if (req->data_iovs < 2) {
1213 +@@ -500,7 +500,6 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
1214 + ret = pin_vector_pages(req, &req->iovs[i]);
1215 + if (ret) {
1216 + req->data_iovs = i;
1217 +- req->status = ret;
1218 + goto free_req;
1219 + }
1220 + req->data_len += req->iovs[i].iov.iov_len;
1221 +@@ -561,14 +560,10 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
1222 + req->ahg_idx = sdma_ahg_alloc(req->sde);
1223 +
1224 + set_comp_state(pq, cq, info.comp_idx, QUEUED, 0);
1225 +- atomic_inc(&pq->n_reqs);
1226 +- req_queued = 1;
1227 + /* Send the first N packets in the request to buy us some time */
1228 + ret = user_sdma_send_pkts(req, pcount);
1229 +- if (unlikely(ret < 0 && ret != -EBUSY)) {
1230 +- req->status = ret;
1231 ++ if (unlikely(ret < 0 && ret != -EBUSY))
1232 + goto free_req;
1233 +- }
1234 +
1235 + /*
1236 + * It is possible that the SDMA engine would have processed all the
1237 +@@ -588,14 +583,8 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
1238 + while (req->seqsubmitted != req->info.npkts) {
1239 + ret = user_sdma_send_pkts(req, pcount);
1240 + if (ret < 0) {
1241 +- if (ret != -EBUSY) {
1242 +- req->status = ret;
1243 +- WRITE_ONCE(req->has_error, 1);
1244 +- if (READ_ONCE(req->seqcomp) ==
1245 +- req->seqsubmitted - 1)
1246 +- goto free_req;
1247 +- return ret;
1248 +- }
1249 ++ if (ret != -EBUSY)
1250 ++ goto free_req;
1251 + wait_event_interruptible_timeout(
1252 + pq->busy.wait_dma,
1253 + (pq->state == SDMA_PKT_Q_ACTIVE),
1254 +@@ -606,10 +595,19 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
1255 + *count += idx;
1256 + return 0;
1257 + free_req:
1258 +- user_sdma_free_request(req, true);
1259 +- if (req_queued)
1260 ++ /*
1261 ++ * If the submitted seqsubmitted == npkts, the completion routine
1262 ++ * controls the final state. If sequbmitted < npkts, wait for any
1263 ++ * outstanding packets to finish before cleaning up.
1264 ++ */
1265 ++ if (req->seqsubmitted < req->info.npkts) {
1266 ++ if (req->seqsubmitted)
1267 ++ wait_event(pq->busy.wait_dma,
1268 ++ (req->seqcomp == req->seqsubmitted - 1));
1269 ++ user_sdma_free_request(req, true);
1270 + pq_update(pq);
1271 +- set_comp_state(pq, cq, info.comp_idx, ERROR, req->status);
1272 ++ set_comp_state(pq, cq, info.comp_idx, ERROR, ret);
1273 ++ }
1274 + return ret;
1275 + }
1276 +
1277 +@@ -917,7 +915,6 @@ dosend:
1278 + ret = sdma_send_txlist(req->sde, &pq->busy, &req->txps, &count);
1279 + req->seqsubmitted += count;
1280 + if (req->seqsubmitted == req->info.npkts) {
1281 +- WRITE_ONCE(req->done, 1);
1282 + /*
1283 + * The txreq has already been submitted to the HW queue
1284 + * so we can free the AHG entry now. Corruption will not
1285 +@@ -1365,11 +1362,15 @@ static int set_txreq_header_ahg(struct user_sdma_request *req,
1286 + return idx;
1287 + }
1288 +
1289 +-/*
1290 +- * SDMA tx request completion callback. Called when the SDMA progress
1291 +- * state machine gets notification that the SDMA descriptors for this
1292 +- * tx request have been processed by the DMA engine. Called in
1293 +- * interrupt context.
1294 ++/**
1295 ++ * user_sdma_txreq_cb() - SDMA tx request completion callback.
1296 ++ * @txreq: valid sdma tx request
1297 ++ * @status: success/failure of request
1298 ++ *
1299 ++ * Called when the SDMA progress state machine gets notification that
1300 ++ * the SDMA descriptors for this tx request have been processed by the
1301 ++ * DMA engine. Called in interrupt context.
1302 ++ * Only do work on completed sequences.
1303 + */
1304 + static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status)
1305 + {
1306 +@@ -1378,7 +1379,7 @@ static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status)
1307 + struct user_sdma_request *req;
1308 + struct hfi1_user_sdma_pkt_q *pq;
1309 + struct hfi1_user_sdma_comp_q *cq;
1310 +- u16 idx;
1311 ++ enum hfi1_sdma_comp_state state = COMPLETE;
1312 +
1313 + if (!tx->req)
1314 + return;
1315 +@@ -1391,31 +1392,19 @@ static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status)
1316 + SDMA_DBG(req, "SDMA completion with error %d",
1317 + status);
1318 + WRITE_ONCE(req->has_error, 1);
1319 ++ state = ERROR;
1320 + }
1321 +
1322 + req->seqcomp = tx->seqnum;
1323 + kmem_cache_free(pq->txreq_cache, tx);
1324 +- tx = NULL;
1325 +-
1326 +- idx = req->info.comp_idx;
1327 +- if (req->status == -1 && status == SDMA_TXREQ_S_OK) {
1328 +- if (req->seqcomp == req->info.npkts - 1) {
1329 +- req->status = 0;
1330 +- user_sdma_free_request(req, false);
1331 +- pq_update(pq);
1332 +- set_comp_state(pq, cq, idx, COMPLETE, 0);
1333 +- }
1334 +- } else {
1335 +- if (status != SDMA_TXREQ_S_OK)
1336 +- req->status = status;
1337 +- if (req->seqcomp == (READ_ONCE(req->seqsubmitted) - 1) &&
1338 +- (READ_ONCE(req->done) ||
1339 +- READ_ONCE(req->has_error))) {
1340 +- user_sdma_free_request(req, false);
1341 +- pq_update(pq);
1342 +- set_comp_state(pq, cq, idx, ERROR, req->status);
1343 +- }
1344 +- }
1345 ++
1346 ++ /* sequence isn't complete? We are done */
1347 ++ if (req->seqcomp != req->info.npkts - 1)
1348 ++ return;
1349 ++
1350 ++ user_sdma_free_request(req, false);
1351 ++ set_comp_state(pq, cq, req->info.comp_idx, state, status);
1352 ++ pq_update(pq);
1353 + }
1354 +
1355 + static inline void pq_update(struct hfi1_user_sdma_pkt_q *pq)
1356 +@@ -1448,6 +1437,8 @@ static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
1357 + if (!node)
1358 + continue;
1359 +
1360 ++ req->iovs[i].node = NULL;
1361 ++
1362 + if (unpin)
1363 + hfi1_mmu_rb_remove(req->pq->handler,
1364 + &node->rb);
1365 +diff --git a/drivers/infiniband/hw/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
1366 +index d2bc77f75253..0ae06456c868 100644
1367 +--- a/drivers/infiniband/hw/hfi1/user_sdma.h
1368 ++++ b/drivers/infiniband/hw/hfi1/user_sdma.h
1369 +@@ -205,8 +205,6 @@ struct user_sdma_request {
1370 + /* Writeable fields shared with interrupt */
1371 + u64 seqcomp ____cacheline_aligned_in_smp;
1372 + u64 seqsubmitted;
1373 +- /* status of the last txreq completed */
1374 +- int status;
1375 +
1376 + /* Send side fields */
1377 + struct list_head txps ____cacheline_aligned_in_smp;
1378 +@@ -228,7 +226,6 @@ struct user_sdma_request {
1379 + u16 tididx;
1380 + /* progress index moving along the iovs array */
1381 + u8 iov_idx;
1382 +- u8 done;
1383 + u8 has_error;
1384 +
1385 + struct user_sdma_iovec iovs[MAX_VECTORS_PER_REQ];
1386 +diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
1387 +index 55d33500d55e..5e85f3cca867 100644
1388 +--- a/drivers/input/mouse/synaptics.c
1389 ++++ b/drivers/input/mouse/synaptics.c
1390 +@@ -99,9 +99,7 @@ static int synaptics_mode_cmd(struct psmouse *psmouse, u8 mode)
1391 + int synaptics_detect(struct psmouse *psmouse, bool set_properties)
1392 + {
1393 + struct ps2dev *ps2dev = &psmouse->ps2dev;
1394 +- u8 param[4];
1395 +-
1396 +- param[0] = 0;
1397 ++ u8 param[4] = { 0 };
1398 +
1399 + ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
1400 + ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
1401 +diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
1402 +index 7b662bd1c7a0..30b15e91d8be 100644
1403 +--- a/drivers/media/i2c/ov5640.c
1404 ++++ b/drivers/media/i2c/ov5640.c
1405 +@@ -288,10 +288,10 @@ static const struct reg_value ov5640_init_setting_30fps_VGA[] = {
1406 + {0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
1407 + {0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x3000, 0x00, 0, 0},
1408 + {0x3002, 0x1c, 0, 0}, {0x3004, 0xff, 0, 0}, {0x3006, 0xc3, 0, 0},
1409 +- {0x300e, 0x45, 0, 0}, {0x302e, 0x08, 0, 0}, {0x4300, 0x3f, 0, 0},
1410 ++ {0x302e, 0x08, 0, 0}, {0x4300, 0x3f, 0, 0},
1411 + {0x501f, 0x00, 0, 0}, {0x4713, 0x03, 0, 0}, {0x4407, 0x04, 0, 0},
1412 + {0x440e, 0x00, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
1413 +- {0x4837, 0x0a, 0, 0}, {0x4800, 0x04, 0, 0}, {0x3824, 0x02, 0, 0},
1414 ++ {0x4837, 0x0a, 0, 0}, {0x3824, 0x02, 0, 0},
1415 + {0x5000, 0xa7, 0, 0}, {0x5001, 0xa3, 0, 0}, {0x5180, 0xff, 0, 0},
1416 + {0x5181, 0xf2, 0, 0}, {0x5182, 0x00, 0, 0}, {0x5183, 0x14, 0, 0},
1417 + {0x5184, 0x25, 0, 0}, {0x5185, 0x24, 0, 0}, {0x5186, 0x09, 0, 0},
1418 +@@ -910,6 +910,26 @@ static int ov5640_mod_reg(struct ov5640_dev *sensor, u16 reg,
1419 + }
1420 +
1421 + /* download ov5640 settings to sensor through i2c */
1422 ++static int ov5640_set_timings(struct ov5640_dev *sensor,
1423 ++ const struct ov5640_mode_info *mode)
1424 ++{
1425 ++ int ret;
1426 ++
1427 ++ ret = ov5640_write_reg16(sensor, OV5640_REG_TIMING_DVPHO, mode->hact);
1428 ++ if (ret < 0)
1429 ++ return ret;
1430 ++
1431 ++ ret = ov5640_write_reg16(sensor, OV5640_REG_TIMING_DVPVO, mode->vact);
1432 ++ if (ret < 0)
1433 ++ return ret;
1434 ++
1435 ++ ret = ov5640_write_reg16(sensor, OV5640_REG_TIMING_HTS, mode->htot);
1436 ++ if (ret < 0)
1437 ++ return ret;
1438 ++
1439 ++ return ov5640_write_reg16(sensor, OV5640_REG_TIMING_VTS, mode->vtot);
1440 ++}
1441 ++
1442 + static int ov5640_load_regs(struct ov5640_dev *sensor,
1443 + const struct ov5640_mode_info *mode)
1444 + {
1445 +@@ -937,7 +957,13 @@ static int ov5640_load_regs(struct ov5640_dev *sensor,
1446 + usleep_range(1000 * delay_ms, 1000 * delay_ms + 100);
1447 + }
1448 +
1449 +- return ret;
1450 ++ return ov5640_set_timings(sensor, mode);
1451 ++}
1452 ++
1453 ++static int ov5640_set_autoexposure(struct ov5640_dev *sensor, bool on)
1454 ++{
1455 ++ return ov5640_mod_reg(sensor, OV5640_REG_AEC_PK_MANUAL,
1456 ++ BIT(0), on ? 0 : BIT(0));
1457 + }
1458 +
1459 + /* read exposure, in number of line periods */
1460 +@@ -996,6 +1022,18 @@ static int ov5640_get_gain(struct ov5640_dev *sensor)
1461 + return gain & 0x3ff;
1462 + }
1463 +
1464 ++static int ov5640_set_gain(struct ov5640_dev *sensor, int gain)
1465 ++{
1466 ++ return ov5640_write_reg16(sensor, OV5640_REG_AEC_PK_REAL_GAIN,
1467 ++ (u16)gain & 0x3ff);
1468 ++}
1469 ++
1470 ++static int ov5640_set_autogain(struct ov5640_dev *sensor, bool on)
1471 ++{
1472 ++ return ov5640_mod_reg(sensor, OV5640_REG_AEC_PK_MANUAL,
1473 ++ BIT(1), on ? 0 : BIT(1));
1474 ++}
1475 ++
1476 + static int ov5640_set_stream_dvp(struct ov5640_dev *sensor, bool on)
1477 + {
1478 + int ret;
1479 +@@ -1104,12 +1142,25 @@ static int ov5640_set_stream_mipi(struct ov5640_dev *sensor, bool on)
1480 + {
1481 + int ret;
1482 +
1483 +- ret = ov5640_mod_reg(sensor, OV5640_REG_MIPI_CTRL00, BIT(5),
1484 +- on ? 0 : BIT(5));
1485 +- if (ret)
1486 +- return ret;
1487 +- ret = ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT00,
1488 +- on ? 0x00 : 0x70);
1489 ++ /*
1490 ++ * Enable/disable the MIPI interface
1491 ++ *
1492 ++ * 0x300e = on ? 0x45 : 0x40
1493 ++ *
1494 ++ * FIXME: the sensor manual (version 2.03) reports
1495 ++ * [7:5] = 000 : 1 data lane mode
1496 ++ * [7:5] = 001 : 2 data lanes mode
1497 ++ * But these settings do not work, while the following ones
1498 ++ * have been validated for 2 data lanes mode.
1499 ++ *
1500 ++ * [7:5] = 010 : 2 data lanes mode
1501 ++ * [4] = 0 : Power up MIPI HS Tx
1502 ++ * [3] = 0 : Power up MIPI LS Rx
1503 ++ * [2] = 1/0 : MIPI interface enable/disable
1504 ++ * [1:0] = 01/00: FIXME: 'debug'
1505 ++ */
1506 ++ ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00,
1507 ++ on ? 0x45 : 0x40);
1508 + if (ret)
1509 + return ret;
1510 +
1511 +@@ -1333,7 +1384,7 @@ static int ov5640_set_ae_target(struct ov5640_dev *sensor, int target)
1512 + return ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL1F, fast_low);
1513 + }
1514 +
1515 +-static int ov5640_binning_on(struct ov5640_dev *sensor)
1516 ++static int ov5640_get_binning(struct ov5640_dev *sensor)
1517 + {
1518 + u8 temp;
1519 + int ret;
1520 +@@ -1341,8 +1392,8 @@ static int ov5640_binning_on(struct ov5640_dev *sensor)
1521 + ret = ov5640_read_reg(sensor, OV5640_REG_TIMING_TC_REG21, &temp);
1522 + if (ret)
1523 + return ret;
1524 +- temp &= 0xfe;
1525 +- return temp ? 1 : 0;
1526 ++
1527 ++ return temp & BIT(0);
1528 + }
1529 +
1530 + static int ov5640_set_binning(struct ov5640_dev *sensor, bool enable)
1531 +@@ -1387,30 +1438,6 @@ static int ov5640_set_virtual_channel(struct ov5640_dev *sensor)
1532 + return ov5640_write_reg(sensor, OV5640_REG_DEBUG_MODE, temp);
1533 + }
1534 +
1535 +-static int ov5640_set_timings(struct ov5640_dev *sensor,
1536 +- const struct ov5640_mode_info *mode)
1537 +-{
1538 +- int ret;
1539 +-
1540 +- ret = ov5640_write_reg16(sensor, OV5640_REG_TIMING_DVPHO, mode->hact);
1541 +- if (ret < 0)
1542 +- return ret;
1543 +-
1544 +- ret = ov5640_write_reg16(sensor, OV5640_REG_TIMING_DVPVO, mode->vact);
1545 +- if (ret < 0)
1546 +- return ret;
1547 +-
1548 +- ret = ov5640_write_reg16(sensor, OV5640_REG_TIMING_HTS, mode->htot);
1549 +- if (ret < 0)
1550 +- return ret;
1551 +-
1552 +- ret = ov5640_write_reg16(sensor, OV5640_REG_TIMING_VTS, mode->vtot);
1553 +- if (ret < 0)
1554 +- return ret;
1555 +-
1556 +- return 0;
1557 +-}
1558 +-
1559 + static const struct ov5640_mode_info *
1560 + ov5640_find_mode(struct ov5640_dev *sensor, enum ov5640_frame_rate fr,
1561 + int width, int height, bool nearest)
1562 +@@ -1452,7 +1479,7 @@ static int ov5640_set_mode_exposure_calc(struct ov5640_dev *sensor,
1563 + if (ret < 0)
1564 + return ret;
1565 + prev_shutter = ret;
1566 +- ret = ov5640_binning_on(sensor);
1567 ++ ret = ov5640_get_binning(sensor);
1568 + if (ret < 0)
1569 + return ret;
1570 + if (ret && mode->id != OV5640_MODE_720P_1280_720 &&
1571 +@@ -1573,7 +1600,7 @@ static int ov5640_set_mode_exposure_calc(struct ov5640_dev *sensor,
1572 + }
1573 +
1574 + /* set capture gain */
1575 +- ret = __v4l2_ctrl_s_ctrl(sensor->ctrls.gain, cap_gain16);
1576 ++ ret = ov5640_set_gain(sensor, cap_gain16);
1577 + if (ret)
1578 + return ret;
1579 +
1580 +@@ -1586,7 +1613,7 @@ static int ov5640_set_mode_exposure_calc(struct ov5640_dev *sensor,
1581 + }
1582 +
1583 + /* set exposure */
1584 +- return __v4l2_ctrl_s_ctrl(sensor->ctrls.exposure, cap_shutter);
1585 ++ return ov5640_set_exposure(sensor, cap_shutter);
1586 + }
1587 +
1588 + /*
1589 +@@ -1594,25 +1621,13 @@ static int ov5640_set_mode_exposure_calc(struct ov5640_dev *sensor,
1590 + * change mode directly
1591 + */
1592 + static int ov5640_set_mode_direct(struct ov5640_dev *sensor,
1593 +- const struct ov5640_mode_info *mode,
1594 +- s32 exposure)
1595 ++ const struct ov5640_mode_info *mode)
1596 + {
1597 +- int ret;
1598 +-
1599 + if (!mode->reg_data)
1600 + return -EINVAL;
1601 +
1602 + /* Write capture setting */
1603 +- ret = ov5640_load_regs(sensor, mode);
1604 +- if (ret < 0)
1605 +- return ret;
1606 +-
1607 +- /* turn auto gain/exposure back on for direct mode */
1608 +- ret = __v4l2_ctrl_s_ctrl(sensor->ctrls.auto_gain, 1);
1609 +- if (ret)
1610 +- return ret;
1611 +-
1612 +- return __v4l2_ctrl_s_ctrl(sensor->ctrls.auto_exp, exposure);
1613 ++ return ov5640_load_regs(sensor, mode);
1614 + }
1615 +
1616 + static int ov5640_set_mode(struct ov5640_dev *sensor)
1617 +@@ -1620,27 +1635,31 @@ static int ov5640_set_mode(struct ov5640_dev *sensor)
1618 + const struct ov5640_mode_info *mode = sensor->current_mode;
1619 + const struct ov5640_mode_info *orig_mode = sensor->last_mode;
1620 + enum ov5640_downsize_mode dn_mode, orig_dn_mode;
1621 +- s32 exposure;
1622 ++ bool auto_gain = sensor->ctrls.auto_gain->val == 1;
1623 ++ bool auto_exp = sensor->ctrls.auto_exp->val == V4L2_EXPOSURE_AUTO;
1624 + int ret;
1625 +
1626 + dn_mode = mode->dn_mode;
1627 + orig_dn_mode = orig_mode->dn_mode;
1628 +
1629 + /* auto gain and exposure must be turned off when changing modes */
1630 +- ret = __v4l2_ctrl_s_ctrl(sensor->ctrls.auto_gain, 0);
1631 +- if (ret)
1632 +- return ret;
1633 ++ if (auto_gain) {
1634 ++ ret = ov5640_set_autogain(sensor, false);
1635 ++ if (ret)
1636 ++ return ret;
1637 ++ }
1638 +
1639 +- exposure = sensor->ctrls.auto_exp->val;
1640 +- ret = ov5640_set_exposure(sensor, V4L2_EXPOSURE_MANUAL);
1641 +- if (ret)
1642 +- return ret;
1643 ++ if (auto_exp) {
1644 ++ ret = ov5640_set_autoexposure(sensor, false);
1645 ++ if (ret)
1646 ++ goto restore_auto_gain;
1647 ++ }
1648 +
1649 + if ((dn_mode == SUBSAMPLING && orig_dn_mode == SCALING) ||
1650 + (dn_mode == SCALING && orig_dn_mode == SUBSAMPLING)) {
1651 + /*
1652 + * change between subsampling and scaling
1653 +- * go through exposure calucation
1654 ++ * go through exposure calculation
1655 + */
1656 + ret = ov5640_set_mode_exposure_calc(sensor, mode);
1657 + } else {
1658 +@@ -1648,15 +1667,16 @@ static int ov5640_set_mode(struct ov5640_dev *sensor)
1659 + * change inside subsampling or scaling
1660 + * download firmware directly
1661 + */
1662 +- ret = ov5640_set_mode_direct(sensor, mode, exposure);
1663 ++ ret = ov5640_set_mode_direct(sensor, mode);
1664 + }
1665 +-
1666 + if (ret < 0)
1667 +- return ret;
1668 ++ goto restore_auto_exp_gain;
1669 +
1670 +- ret = ov5640_set_timings(sensor, mode);
1671 +- if (ret < 0)
1672 +- return ret;
1673 ++ /* restore auto gain and exposure */
1674 ++ if (auto_gain)
1675 ++ ov5640_set_autogain(sensor, true);
1676 ++ if (auto_exp)
1677 ++ ov5640_set_autoexposure(sensor, true);
1678 +
1679 + ret = ov5640_set_binning(sensor, dn_mode != SCALING);
1680 + if (ret < 0)
1681 +@@ -1678,6 +1698,15 @@ static int ov5640_set_mode(struct ov5640_dev *sensor)
1682 + sensor->last_mode = mode;
1683 +
1684 + return 0;
1685 ++
1686 ++restore_auto_exp_gain:
1687 ++ if (auto_exp)
1688 ++ ov5640_set_autoexposure(sensor, true);
1689 ++restore_auto_gain:
1690 ++ if (auto_gain)
1691 ++ ov5640_set_autogain(sensor, true);
1692 ++
1693 ++ return ret;
1694 + }
1695 +
1696 + static int ov5640_set_framefmt(struct ov5640_dev *sensor,
1697 +@@ -1790,23 +1819,69 @@ static int ov5640_set_power(struct ov5640_dev *sensor, bool on)
1698 + if (ret)
1699 + goto power_off;
1700 +
1701 ++ /* We're done here for DVP bus, while CSI-2 needs setup. */
1702 ++ if (sensor->ep.bus_type != V4L2_MBUS_CSI2)
1703 ++ return 0;
1704 ++
1705 ++ /*
1706 ++ * Power up MIPI HS Tx and LS Rx; 2 data lanes mode
1707 ++ *
1708 ++ * 0x300e = 0x40
1709 ++ * [7:5] = 010 : 2 data lanes mode (see FIXME note in
1710 ++ * "ov5640_set_stream_mipi()")
1711 ++ * [4] = 0 : Power up MIPI HS Tx
1712 ++ * [3] = 0 : Power up MIPI LS Rx
1713 ++ * [2] = 0 : MIPI interface disabled
1714 ++ */
1715 ++ ret = ov5640_write_reg(sensor,
1716 ++ OV5640_REG_IO_MIPI_CTRL00, 0x40);
1717 ++ if (ret)
1718 ++ goto power_off;
1719 ++
1720 ++ /*
1721 ++ * Gate clock and set LP11 in 'no packets mode' (idle)
1722 ++ *
1723 ++ * 0x4800 = 0x24
1724 ++ * [5] = 1 : Gate clock when 'no packets'
1725 ++ * [2] = 1 : MIPI bus in LP11 when 'no packets'
1726 ++ */
1727 ++ ret = ov5640_write_reg(sensor,
1728 ++ OV5640_REG_MIPI_CTRL00, 0x24);
1729 ++ if (ret)
1730 ++ goto power_off;
1731 ++
1732 ++ /*
1733 ++ * Set data lanes and clock in LP11 when 'sleeping'
1734 ++ *
1735 ++ * 0x3019 = 0x70
1736 ++ * [6] = 1 : MIPI data lane 2 in LP11 when 'sleeping'
1737 ++ * [5] = 1 : MIPI data lane 1 in LP11 when 'sleeping'
1738 ++ * [4] = 1 : MIPI clock lane in LP11 when 'sleeping'
1739 ++ */
1740 ++ ret = ov5640_write_reg(sensor,
1741 ++ OV5640_REG_PAD_OUTPUT00, 0x70);
1742 ++ if (ret)
1743 ++ goto power_off;
1744 ++
1745 ++ /* Give lanes some time to coax into LP11 state. */
1746 ++ usleep_range(500, 1000);
1747 ++
1748 ++ } else {
1749 + if (sensor->ep.bus_type == V4L2_MBUS_CSI2) {
1750 +- /*
1751 +- * start streaming briefly followed by stream off in
1752 +- * order to coax the clock lane into LP-11 state.
1753 +- */
1754 +- ret = ov5640_set_stream_mipi(sensor, true);
1755 +- if (ret)
1756 +- goto power_off;
1757 +- usleep_range(1000, 2000);
1758 +- ret = ov5640_set_stream_mipi(sensor, false);
1759 +- if (ret)
1760 +- goto power_off;
1761 ++ /* Reset MIPI bus settings to their default values. */
1762 ++ ov5640_write_reg(sensor,
1763 ++ OV5640_REG_IO_MIPI_CTRL00, 0x58);
1764 ++ ov5640_write_reg(sensor,
1765 ++ OV5640_REG_MIPI_CTRL00, 0x04);
1766 ++ ov5640_write_reg(sensor,
1767 ++ OV5640_REG_PAD_OUTPUT00, 0x00);
1768 + }
1769 +
1770 +- return 0;
1771 ++ ov5640_set_power_off(sensor);
1772 + }
1773 +
1774 ++ return 0;
1775 ++
1776 + power_off:
1777 + ov5640_set_power_off(sensor);
1778 + return ret;
1779 +@@ -2144,20 +2219,20 @@ static int ov5640_set_ctrl_white_balance(struct ov5640_dev *sensor, int awb)
1780 + return ret;
1781 + }
1782 +
1783 +-static int ov5640_set_ctrl_exposure(struct ov5640_dev *sensor, int exp)
1784 ++static int ov5640_set_ctrl_exposure(struct ov5640_dev *sensor,
1785 ++ enum v4l2_exposure_auto_type auto_exposure)
1786 + {
1787 + struct ov5640_ctrls *ctrls = &sensor->ctrls;
1788 +- bool auto_exposure = (exp == V4L2_EXPOSURE_AUTO);
1789 ++ bool auto_exp = (auto_exposure == V4L2_EXPOSURE_AUTO);
1790 + int ret = 0;
1791 +
1792 + if (ctrls->auto_exp->is_new) {
1793 +- ret = ov5640_mod_reg(sensor, OV5640_REG_AEC_PK_MANUAL,
1794 +- BIT(0), auto_exposure ? 0 : BIT(0));
1795 ++ ret = ov5640_set_autoexposure(sensor, auto_exp);
1796 + if (ret)
1797 + return ret;
1798 + }
1799 +
1800 +- if (!auto_exposure && ctrls->exposure->is_new) {
1801 ++ if (!auto_exp && ctrls->exposure->is_new) {
1802 + u16 max_exp;
1803 +
1804 + ret = ov5640_read_reg16(sensor, OV5640_REG_AEC_PK_VTS,
1805 +@@ -2177,25 +2252,19 @@ static int ov5640_set_ctrl_exposure(struct ov5640_dev *sensor, int exp)
1806 + return ret;
1807 + }
1808 +
1809 +-static int ov5640_set_ctrl_gain(struct ov5640_dev *sensor, int auto_gain)
1810 ++static int ov5640_set_ctrl_gain(struct ov5640_dev *sensor, bool auto_gain)
1811 + {
1812 + struct ov5640_ctrls *ctrls = &sensor->ctrls;
1813 + int ret = 0;
1814 +
1815 + if (ctrls->auto_gain->is_new) {
1816 +- ret = ov5640_mod_reg(sensor, OV5640_REG_AEC_PK_MANUAL,
1817 +- BIT(1),
1818 +- ctrls->auto_gain->val ? 0 : BIT(1));
1819 ++ ret = ov5640_set_autogain(sensor, auto_gain);
1820 + if (ret)
1821 + return ret;
1822 + }
1823 +
1824 +- if (!auto_gain && ctrls->gain->is_new) {
1825 +- u16 gain = (u16)ctrls->gain->val;
1826 +-
1827 +- ret = ov5640_write_reg16(sensor, OV5640_REG_AEC_PK_REAL_GAIN,
1828 +- gain & 0x3ff);
1829 +- }
1830 ++ if (!auto_gain && ctrls->gain->is_new)
1831 ++ ret = ov5640_set_gain(sensor, ctrls->gain->val);
1832 +
1833 + return ret;
1834 + }
1835 +@@ -2268,16 +2337,12 @@ static int ov5640_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
1836 +
1837 + switch (ctrl->id) {
1838 + case V4L2_CID_AUTOGAIN:
1839 +- if (!ctrl->val)
1840 +- return 0;
1841 + val = ov5640_get_gain(sensor);
1842 + if (val < 0)
1843 + return val;
1844 + sensor->ctrls.gain->val = val;
1845 + break;
1846 + case V4L2_CID_EXPOSURE_AUTO:
1847 +- if (ctrl->val == V4L2_EXPOSURE_MANUAL)
1848 +- return 0;
1849 + val = ov5640_get_exposure(sensor);
1850 + if (val < 0)
1851 + return val;
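The ov5640 hunks above replace v4l2 control round-trips with direct register helpers that flip single bits in the AEC manual-mode register (BIT(0) for exposure, BIT(1) for gain, where a set bit means manual). A compilable userspace sketch of that read-modify-write pattern, using a fake register file and an illustrative register address:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t regs[0x10000];		/* fake register file, not real hardware */

/* read-modify-write one register, touching only the masked bits */
static int mod_reg(uint16_t reg, uint8_t mask, uint8_t val)
{
	regs[reg] = (regs[reg] & ~mask) | (val & mask);
	return 0;
}

#define AEC_PK_MANUAL 0x3503		/* illustrative address */

/* a set bit means manual mode, so "on" clears it, as in the patch */
static int set_autoexposure(bool on)
{
	return mod_reg(AEC_PK_MANUAL, 1u << 0, on ? 0 : 1u << 0);
}

static int set_autogain(bool on)
{
	return mod_reg(AEC_PK_MANUAL, 1u << 1, on ? 0 : 1u << 1);
}

int main(void)
{
	set_autoexposure(false);
	set_autogain(false);
	printf("reg = %#x\n", regs[AEC_PK_MANUAL]);	/* prints 0x3 */
	return 0;
}

Keeping the restore calls in the error path of ov5640_set_mode then mirrors the usual kernel unwind style: each goto label undoes exactly the steps that succeeded before the failure.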
1852 +diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
1853 +index 7bfd366d970d..c4115bae5db1 100644
1854 +--- a/drivers/mmc/host/sdhci-pci-core.c
1855 ++++ b/drivers/mmc/host/sdhci-pci-core.c
1856 +@@ -12,6 +12,7 @@
1857 + * - JMicron (hardware and technical support)
1858 + */
1859 +
1860 ++#include <linux/bitfield.h>
1861 + #include <linux/string.h>
1862 + #include <linux/delay.h>
1863 + #include <linux/highmem.h>
1864 +@@ -462,6 +463,9 @@ struct intel_host {
1865 + u32 dsm_fns;
1866 + int drv_strength;
1867 + bool d3_retune;
1868 ++ bool rpm_retune_ok;
1869 ++ u32 glk_rx_ctrl1;
1870 ++ u32 glk_tun_val;
1871 + };
1872 +
1873 + static const guid_t intel_dsm_guid =
1874 +@@ -791,6 +795,77 @@ cleanup:
1875 + return ret;
1876 + }
1877 +
1878 ++#ifdef CONFIG_PM
1879 ++#define GLK_RX_CTRL1 0x834
1880 ++#define GLK_TUN_VAL 0x840
1881 ++#define GLK_PATH_PLL GENMASK(13, 8)
1882 ++#define GLK_DLY GENMASK(6, 0)
1883 ++/* Workaround firmware failing to restore the tuning value */
1884 ++static void glk_rpm_retune_wa(struct sdhci_pci_chip *chip, bool susp)
1885 ++{
1886 ++ struct sdhci_pci_slot *slot = chip->slots[0];
1887 ++ struct intel_host *intel_host = sdhci_pci_priv(slot);
1888 ++ struct sdhci_host *host = slot->host;
1889 ++ u32 glk_rx_ctrl1;
1890 ++ u32 glk_tun_val;
1891 ++ u32 dly;
1892 ++
1893 ++ if (intel_host->rpm_retune_ok || !mmc_can_retune(host->mmc))
1894 ++ return;
1895 ++
1896 ++ glk_rx_ctrl1 = sdhci_readl(host, GLK_RX_CTRL1);
1897 ++ glk_tun_val = sdhci_readl(host, GLK_TUN_VAL);
1898 ++
1899 ++ if (susp) {
1900 ++ intel_host->glk_rx_ctrl1 = glk_rx_ctrl1;
1901 ++ intel_host->glk_tun_val = glk_tun_val;
1902 ++ return;
1903 ++ }
1904 ++
1905 ++ if (!intel_host->glk_tun_val)
1906 ++ return;
1907 ++
1908 ++ if (glk_rx_ctrl1 != intel_host->glk_rx_ctrl1) {
1909 ++ intel_host->rpm_retune_ok = true;
1910 ++ return;
1911 ++ }
1912 ++
1913 ++ dly = FIELD_PREP(GLK_DLY, FIELD_GET(GLK_PATH_PLL, glk_rx_ctrl1) +
1914 ++ (intel_host->glk_tun_val << 1));
1915 ++ if (dly == FIELD_GET(GLK_DLY, glk_rx_ctrl1))
1916 ++ return;
1917 ++
1918 ++ glk_rx_ctrl1 = (glk_rx_ctrl1 & ~GLK_DLY) | dly;
1919 ++ sdhci_writel(host, glk_rx_ctrl1, GLK_RX_CTRL1);
1920 ++
1921 ++ intel_host->rpm_retune_ok = true;
1922 ++ chip->rpm_retune = true;
1923 ++ mmc_retune_needed(host->mmc);
1924 ++ pr_info("%s: Requiring re-tune after rpm resume", mmc_hostname(host->mmc));
1925 ++}
1926 ++
1927 ++static void glk_rpm_retune_chk(struct sdhci_pci_chip *chip, bool susp)
1928 ++{
1929 ++ if (chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC &&
1930 ++ !chip->rpm_retune)
1931 ++ glk_rpm_retune_wa(chip, susp);
1932 ++}
1933 ++
1934 ++static int glk_runtime_suspend(struct sdhci_pci_chip *chip)
1935 ++{
1936 ++ glk_rpm_retune_chk(chip, true);
1937 ++
1938 ++ return sdhci_cqhci_runtime_suspend(chip);
1939 ++}
1940 ++
1941 ++static int glk_runtime_resume(struct sdhci_pci_chip *chip)
1942 ++{
1943 ++ glk_rpm_retune_chk(chip, false);
1944 ++
1945 ++ return sdhci_cqhci_runtime_resume(chip);
1946 ++}
1947 ++#endif
1948 ++
1949 + #ifdef CONFIG_ACPI
1950 + static int ni_set_max_freq(struct sdhci_pci_slot *slot)
1951 + {
1952 +@@ -879,8 +954,8 @@ static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = {
1953 + .resume = sdhci_cqhci_resume,
1954 + #endif
1955 + #ifdef CONFIG_PM
1956 +- .runtime_suspend = sdhci_cqhci_runtime_suspend,
1957 +- .runtime_resume = sdhci_cqhci_runtime_resume,
1958 ++ .runtime_suspend = glk_runtime_suspend,
1959 ++ .runtime_resume = glk_runtime_resume,
1960 + #endif
1961 + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
1962 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
1963 +@@ -1762,8 +1837,13 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
1964 + device_init_wakeup(&pdev->dev, true);
1965 +
1966 + if (slot->cd_idx >= 0) {
1967 +- ret = mmc_gpiod_request_cd(host->mmc, NULL, slot->cd_idx,
1968 ++ ret = mmc_gpiod_request_cd(host->mmc, "cd", slot->cd_idx,
1969 + slot->cd_override_level, 0, NULL);
1970 ++ if (ret && ret != -EPROBE_DEFER)
1971 ++ ret = mmc_gpiod_request_cd(host->mmc, NULL,
1972 ++ slot->cd_idx,
1973 ++ slot->cd_override_level,
1974 ++ 0, NULL);
1975 + if (ret == -EPROBE_DEFER)
1976 + goto remove;
1977 +
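The Gemini Lake workaround above recomputes a delay field from a saved tuning value using the kernel's GENMASK/FIELD_GET/FIELD_PREP helpers. A standalone sketch with simplified stand-ins for those macros and made-up register values, showing the same field arithmetic:

#include <stdint.h>
#include <stdio.h>

/* simplified userspace stand-ins for the kernel helpers */
#define GENMASK(h, l)	 ((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(m, r)	 (((r) & (m)) >> __builtin_ctz(m))
#define FIELD_PREP(m, v) (((v) << __builtin_ctz(m)) & (m))

#define GLK_PATH_PLL	GENMASK(13, 8)
#define GLK_DLY		GENMASK(6, 0)

int main(void)
{
	uint32_t rx_ctrl1 = 0x00001203;	/* made-up register snapshot */
	uint32_t tun_val = 0x5;		/* made-up saved tuning value */

	/* same arithmetic as the workaround: dly = path_pll + 2 * tun_val */
	uint32_t dly = FIELD_PREP(GLK_DLY,
				  FIELD_GET(GLK_PATH_PLL, rx_ctrl1) +
				  (tun_val << 1));

	rx_ctrl1 = (rx_ctrl1 & ~GLK_DLY) | dly;
	printf("rx_ctrl1 = %#x, dly = %u\n",
	       rx_ctrl1, FIELD_GET(GLK_DLY, rx_ctrl1));
	return 0;
}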
1978 +diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
1979 +index 49163570a63a..3b3f88ffab53 100644
1980 +--- a/drivers/net/can/dev.c
1981 ++++ b/drivers/net/can/dev.c
1982 +@@ -477,6 +477,34 @@ void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
1983 + }
1984 + EXPORT_SYMBOL_GPL(can_put_echo_skb);
1985 +
1986 ++struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)
1987 ++{
1988 ++ struct can_priv *priv = netdev_priv(dev);
1989 ++ struct sk_buff *skb = priv->echo_skb[idx];
1990 ++ struct canfd_frame *cf;
1991 ++
1992 ++ if (idx >= priv->echo_skb_max) {
1993 ++ netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",
1994 ++ __func__, idx, priv->echo_skb_max);
1995 ++ return NULL;
1996 ++ }
1997 ++
1998 ++ if (!skb) {
1999 ++ netdev_err(dev, "%s: BUG! Trying to echo a non-existent skb: can_priv::echo_skb[%u]\n",
2000 ++ __func__, idx);
2001 ++ return NULL;
2002 ++ }
2003 ++
2004 ++ /* Using "struct canfd_frame::len" for the frame
2005 ++ * length is supported on both CAN and CANFD frames.
2006 ++ */
2007 ++ cf = (struct canfd_frame *)skb->data;
2008 ++ *len_ptr = cf->len;
2009 ++ priv->echo_skb[idx] = NULL;
2010 ++
2011 ++ return skb;
2012 ++}
2013 ++
2014 + /*
2015 + * Get the skb from the stack and loop it back locally
2016 + *
2017 +@@ -486,22 +514,16 @@ EXPORT_SYMBOL_GPL(can_put_echo_skb);
2018 + */
2019 + unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
2020 + {
2021 +- struct can_priv *priv = netdev_priv(dev);
2022 +-
2023 +- BUG_ON(idx >= priv->echo_skb_max);
2024 +-
2025 +- if (priv->echo_skb[idx]) {
2026 +- struct sk_buff *skb = priv->echo_skb[idx];
2027 +- struct can_frame *cf = (struct can_frame *)skb->data;
2028 +- u8 dlc = cf->can_dlc;
2029 ++ struct sk_buff *skb;
2030 ++ u8 len;
2031 +
2032 +- netif_rx(priv->echo_skb[idx]);
2033 +- priv->echo_skb[idx] = NULL;
2034 ++ skb = __can_get_echo_skb(dev, idx, &len);
2035 ++ if (!skb)
2036 ++ return 0;
2037 +
2038 +- return dlc;
2039 +- }
2040 ++ netif_rx(skb);
2041 +
2042 +- return 0;
2043 ++ return len;
2044 + }
2045 + EXPORT_SYMBOL_GPL(can_get_echo_skb);
2046 +
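__can_get_echo_skb above turns the old BUG_ON into a recoverable bounds check and hands back both the buffer and its length while clearing the slot. A toy sketch of that accessor pattern, where struct buf and the array stand in for sk_buff and priv->echo_skb:

#include <stdint.h>
#include <stdio.h>

#define ECHO_MAX 4u

struct buf { uint8_t len; };		/* stands in for sk_buff */

static struct buf *echo_slot[ECHO_MAX];

/* validate the index, hand back buffer + length, clear the slot */
static struct buf *get_echo(unsigned int idx, uint8_t *len_ptr)
{
	struct buf *b;

	if (idx >= ECHO_MAX) {		/* recoverable, unlike BUG_ON() */
		fprintf(stderr, "out of bounds (%u/max %u)\n", idx, ECHO_MAX);
		return NULL;
	}
	b = echo_slot[idx];
	if (!b)
		return NULL;

	*len_ptr = b->len;
	echo_slot[idx] = NULL;
	return b;
}

int main(void)
{
	struct buf one = { .len = 8 };
	uint8_t len = 0;

	echo_slot[0] = &one;
	if (get_echo(0, &len))
		printf("len=%u, slot cleared=%d\n",
		       (unsigned)len, echo_slot[0] == NULL);
	get_echo(9, &len);		/* rejected with an error message */
	return 0;
}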
2047 +diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
2048 +index 8e972ef08637..75ce11395ee8 100644
2049 +--- a/drivers/net/can/flexcan.c
2050 ++++ b/drivers/net/can/flexcan.c
2051 +@@ -135,13 +135,12 @@
2052 +
2053 + /* FLEXCAN interrupt flag register (IFLAG) bits */
2054 + /* Errata ERR005829 step7: Reserve first valid MB */
2055 +-#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8
2056 +-#define FLEXCAN_TX_MB_OFF_FIFO 9
2057 ++#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8
2058 + #define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP 0
2059 +-#define FLEXCAN_TX_MB_OFF_TIMESTAMP 1
2060 +-#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_OFF_TIMESTAMP + 1)
2061 +-#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST 63
2062 +-#define FLEXCAN_IFLAG_MB(x) BIT(x)
2063 ++#define FLEXCAN_TX_MB 63
2064 ++#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1)
2065 ++#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST (FLEXCAN_TX_MB - 1)
2066 ++#define FLEXCAN_IFLAG_MB(x) BIT(x & 0x1f)
2067 + #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7)
2068 + #define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6)
2069 + #define FLEXCAN_IFLAG_RX_FIFO_AVAILABLE BIT(5)
2070 +@@ -259,9 +258,7 @@ struct flexcan_priv {
2071 + struct can_rx_offload offload;
2072 +
2073 + struct flexcan_regs __iomem *regs;
2074 +- struct flexcan_mb __iomem *tx_mb;
2075 + struct flexcan_mb __iomem *tx_mb_reserved;
2076 +- u8 tx_mb_idx;
2077 + u32 reg_ctrl_default;
2078 + u32 reg_imask1_default;
2079 + u32 reg_imask2_default;
2080 +@@ -515,6 +512,7 @@ static int flexcan_get_berr_counter(const struct net_device *dev,
2081 + static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
2082 + {
2083 + const struct flexcan_priv *priv = netdev_priv(dev);
2084 ++ struct flexcan_regs __iomem *regs = priv->regs;
2085 + struct can_frame *cf = (struct can_frame *)skb->data;
2086 + u32 can_id;
2087 + u32 data;
2088 +@@ -537,17 +535,17 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
2089 +
2090 + if (cf->can_dlc > 0) {
2091 + data = be32_to_cpup((__be32 *)&cf->data[0]);
2092 +- priv->write(data, &priv->tx_mb->data[0]);
2093 ++ priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[0]);
2094 + }
2095 + if (cf->can_dlc > 4) {
2096 + data = be32_to_cpup((__be32 *)&cf->data[4]);
2097 +- priv->write(data, &priv->tx_mb->data[1]);
2098 ++ priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[1]);
2099 + }
2100 +
2101 + can_put_echo_skb(skb, dev, 0);
2102 +
2103 +- priv->write(can_id, &priv->tx_mb->can_id);
2104 +- priv->write(ctrl, &priv->tx_mb->can_ctrl);
2105 ++ priv->write(can_id, &regs->mb[FLEXCAN_TX_MB].can_id);
2106 ++ priv->write(ctrl, &regs->mb[FLEXCAN_TX_MB].can_ctrl);
2107 +
2108 + /* Errata ERR005829 step8:
2109 + * Write twice INACTIVE(0x8) code to first MB.
2110 +@@ -563,9 +561,13 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
2111 + static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
2112 + {
2113 + struct flexcan_priv *priv = netdev_priv(dev);
2114 ++ struct flexcan_regs __iomem *regs = priv->regs;
2115 + struct sk_buff *skb;
2116 + struct can_frame *cf;
2117 + bool rx_errors = false, tx_errors = false;
2118 ++ u32 timestamp;
2119 ++
2120 ++ timestamp = priv->read(&regs->timer) << 16;
2121 +
2122 + skb = alloc_can_err_skb(dev, &cf);
2123 + if (unlikely(!skb))
2124 +@@ -612,17 +614,21 @@ static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
2125 + if (tx_errors)
2126 + dev->stats.tx_errors++;
2127 +
2128 +- can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
2129 ++ can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
2130 + }
2131 +
2132 + static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
2133 + {
2134 + struct flexcan_priv *priv = netdev_priv(dev);
2135 ++ struct flexcan_regs __iomem *regs = priv->regs;
2136 + struct sk_buff *skb;
2137 + struct can_frame *cf;
2138 + enum can_state new_state, rx_state, tx_state;
2139 + int flt;
2140 + struct can_berr_counter bec;
2141 ++ u32 timestamp;
2142 ++
2143 ++ timestamp = priv->read(&regs->timer) << 16;
2144 +
2145 + flt = reg_esr & FLEXCAN_ESR_FLT_CONF_MASK;
2146 + if (likely(flt == FLEXCAN_ESR_FLT_CONF_ACTIVE)) {
2147 +@@ -652,7 +658,7 @@ static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
2148 + if (unlikely(new_state == CAN_STATE_BUS_OFF))
2149 + can_bus_off(dev);
2150 +
2151 +- can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
2152 ++ can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
2153 + }
2154 +
2155 + static inline struct flexcan_priv *rx_offload_to_priv(struct can_rx_offload *offload)
2156 +@@ -720,9 +726,14 @@ static unsigned int flexcan_mailbox_read(struct can_rx_offload *offload,
2157 + priv->write(BIT(n - 32), &regs->iflag2);
2158 + } else {
2159 + priv->write(FLEXCAN_IFLAG_RX_FIFO_AVAILABLE, &regs->iflag1);
2160 +- priv->read(&regs->timer);
2161 + }
2162 +
2163 ++ /* Read the Free Running Timer. Doing so is optional but
2164 ++ * recommended: it unlocks the mailbox as soon as possible
2165 ++ * and makes it available for reception.
2166 ++ */
2167 ++ priv->read(&regs->timer);
2168 ++
2169 + return 1;
2170 + }
2171 +
2172 +@@ -732,9 +743,9 @@ static inline u64 flexcan_read_reg_iflag_rx(struct flexcan_priv *priv)
2173 + struct flexcan_regs __iomem *regs = priv->regs;
2174 + u32 iflag1, iflag2;
2175 +
2176 +- iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default;
2177 +- iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default &
2178 +- ~FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
2179 ++ iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default &
2180 ++ ~FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
2181 ++ iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default;
2182 +
2183 + return (u64)iflag2 << 32 | iflag1;
2184 + }
2185 +@@ -746,11 +757,9 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
2186 + struct flexcan_priv *priv = netdev_priv(dev);
2187 + struct flexcan_regs __iomem *regs = priv->regs;
2188 + irqreturn_t handled = IRQ_NONE;
2189 +- u32 reg_iflag1, reg_esr;
2190 ++ u32 reg_iflag2, reg_esr;
2191 + enum can_state last_state = priv->can.state;
2192 +
2193 +- reg_iflag1 = priv->read(&regs->iflag1);
2194 +-
2195 + /* reception interrupt */
2196 + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
2197 + u64 reg_iflag;
2198 +@@ -764,6 +773,9 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
2199 + break;
2200 + }
2201 + } else {
2202 ++ u32 reg_iflag1;
2203 ++
2204 ++ reg_iflag1 = priv->read(&regs->iflag1);
2205 + if (reg_iflag1 & FLEXCAN_IFLAG_RX_FIFO_AVAILABLE) {
2206 + handled = IRQ_HANDLED;
2207 + can_rx_offload_irq_offload_fifo(&priv->offload);
2208 +@@ -779,17 +791,22 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
2209 + }
2210 + }
2211 +
2212 ++ reg_iflag2 = priv->read(&regs->iflag2);
2213 ++
2214 + /* transmission complete interrupt */
2215 +- if (reg_iflag1 & FLEXCAN_IFLAG_MB(priv->tx_mb_idx)) {
2216 ++ if (reg_iflag2 & FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB)) {
2217 ++ u32 reg_ctrl = priv->read(&regs->mb[FLEXCAN_TX_MB].can_ctrl);
2218 ++
2219 + handled = IRQ_HANDLED;
2220 +- stats->tx_bytes += can_get_echo_skb(dev, 0);
2221 ++ stats->tx_bytes += can_rx_offload_get_echo_skb(&priv->offload,
2222 ++ 0, reg_ctrl << 16);
2223 + stats->tx_packets++;
2224 + can_led_event(dev, CAN_LED_EVENT_TX);
2225 +
2226 + /* after sending a RTR frame MB is in RX mode */
2227 + priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
2228 +- &priv->tx_mb->can_ctrl);
2229 +- priv->write(FLEXCAN_IFLAG_MB(priv->tx_mb_idx), &regs->iflag1);
2230 ++ &regs->mb[FLEXCAN_TX_MB].can_ctrl);
2231 ++ priv->write(FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB), &regs->iflag2);
2232 + netif_wake_queue(dev);
2233 + }
2234 +
2235 +@@ -931,15 +948,13 @@ static int flexcan_chip_start(struct net_device *dev)
2236 + reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
2237 + reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
2238 + FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_SRX_DIS | FLEXCAN_MCR_IRMQ |
2239 +- FLEXCAN_MCR_IDAM_C;
2240 ++ FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(FLEXCAN_TX_MB);
2241 +
2242 +- if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
2243 ++ if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
2244 + reg_mcr &= ~FLEXCAN_MCR_FEN;
2245 +- reg_mcr |= FLEXCAN_MCR_MAXMB(priv->offload.mb_last);
2246 +- } else {
2247 +- reg_mcr |= FLEXCAN_MCR_FEN |
2248 +- FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
2249 +- }
2250 ++ else
2251 ++ reg_mcr |= FLEXCAN_MCR_FEN;
2252 ++
2253 + netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr);
2254 + priv->write(reg_mcr, &regs->mcr);
2255 +
2256 +@@ -982,16 +997,17 @@ static int flexcan_chip_start(struct net_device *dev)
2257 + priv->write(reg_ctrl2, &regs->ctrl2);
2258 + }
2259 +
2260 +- /* clear and invalidate all mailboxes first */
2261 +- for (i = priv->tx_mb_idx; i < ARRAY_SIZE(regs->mb); i++) {
2262 +- priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
2263 +- &regs->mb[i].can_ctrl);
2264 +- }
2265 +-
2266 + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
2267 +- for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++)
2268 ++ for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) {
2269 + priv->write(FLEXCAN_MB_CODE_RX_EMPTY,
2270 + &regs->mb[i].can_ctrl);
2271 ++ }
2272 ++ } else {
2273 ++ /* clear and invalidate unused mailboxes first */
2274 ++ for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i < ARRAY_SIZE(regs->mb); i++) {
2275 ++ priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
2276 ++ &regs->mb[i].can_ctrl);
2277 ++ }
2278 + }
2279 +
2280 + /* Errata ERR005829: mark first TX mailbox as INACTIVE */
2281 +@@ -1000,7 +1016,7 @@ static int flexcan_chip_start(struct net_device *dev)
2282 +
2283 + /* mark TX mailbox as INACTIVE */
2284 + priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
2285 +- &priv->tx_mb->can_ctrl);
2286 ++ &regs->mb[FLEXCAN_TX_MB].can_ctrl);
2287 +
2288 + /* acceptance mask/acceptance code (accept everything) */
2289 + priv->write(0x0, &regs->rxgmask);
2290 +@@ -1355,17 +1371,13 @@ static int flexcan_probe(struct platform_device *pdev)
2291 + priv->devtype_data = devtype_data;
2292 + priv->reg_xceiver = reg_xceiver;
2293 +
2294 +- if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
2295 +- priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_TIMESTAMP;
2296 ++ if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
2297 + priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP];
2298 +- } else {
2299 +- priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_FIFO;
2300 ++ else
2301 + priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_FIFO];
2302 +- }
2303 +- priv->tx_mb = &regs->mb[priv->tx_mb_idx];
2304 +
2305 +- priv->reg_imask1_default = FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
2306 +- priv->reg_imask2_default = 0;
2307 ++ priv->reg_imask1_default = 0;
2308 ++ priv->reg_imask2_default = FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
2309 +
2310 + priv->offload.mailbox_read = flexcan_mailbox_read;
2311 +
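With the TX mailbox moved to index 63, its interrupt flag now lives in iflag2, and FLEXCAN_IFLAG_MB(x) masks the index with 0x1f to get the bit position within one 32-bit flag register. A small sketch of that index-to-register/bit mapping:

#include <stdio.h>

#define TX_MB		63u
#define IFLAG_MB(x)	(1u << ((x) & 0x1f))	/* bit inside one 32-bit iflag */

int main(void)
{
	unsigned int mb = TX_MB;
	unsigned int reg = mb / 32;	/* 0 -> iflag1, 1 -> iflag2 */

	printf("MB %u -> iflag%u bit %u (mask %#x)\n",
	       mb, reg + 1, mb & 0x1f, IFLAG_MB(mb));
	return 0;
}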
2312 +diff --git a/drivers/net/can/rx-offload.c b/drivers/net/can/rx-offload.c
2313 +index d94dae216820..727691dd08fb 100644
2314 +--- a/drivers/net/can/rx-offload.c
2315 ++++ b/drivers/net/can/rx-offload.c
2316 +@@ -209,7 +209,54 @@ int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload)
2317 + }
2318 + EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo);
2319 +
2320 +-int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb)
2321 ++int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
2322 ++ struct sk_buff *skb, u32 timestamp)
2323 ++{
2324 ++ struct can_rx_offload_cb *cb;
2325 ++ unsigned long flags;
2326 ++
2327 ++ if (skb_queue_len(&offload->skb_queue) >
2328 ++ offload->skb_queue_len_max)
2329 ++ return -ENOMEM;
2330 ++
2331 ++ cb = can_rx_offload_get_cb(skb);
2332 ++ cb->timestamp = timestamp;
2333 ++
2334 ++ spin_lock_irqsave(&offload->skb_queue.lock, flags);
2335 ++ __skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare);
2336 ++ spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
2337 ++
2338 ++ can_rx_offload_schedule(offload);
2339 ++
2340 ++ return 0;
2341 ++}
2342 ++EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted);
2343 ++
2344 ++unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
2345 ++ unsigned int idx, u32 timestamp)
2346 ++{
2347 ++ struct net_device *dev = offload->dev;
2348 ++ struct net_device_stats *stats = &dev->stats;
2349 ++ struct sk_buff *skb;
2350 ++ u8 len;
2351 ++ int err;
2352 ++
2353 ++ skb = __can_get_echo_skb(dev, idx, &len);
2354 ++ if (!skb)
2355 ++ return 0;
2356 ++
2357 ++ err = can_rx_offload_queue_sorted(offload, skb, timestamp);
2358 ++ if (err) {
2359 ++ stats->rx_errors++;
2360 ++ stats->tx_fifo_errors++;
2361 ++ }
2362 ++
2363 ++ return len;
2364 ++}
2365 ++EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb);
2366 ++
2367 ++int can_rx_offload_queue_tail(struct can_rx_offload *offload,
2368 ++ struct sk_buff *skb)
2369 + {
2370 + if (skb_queue_len(&offload->skb_queue) >
2371 + offload->skb_queue_len_max)
2372 +@@ -220,7 +267,7 @@ int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_b
2373 +
2374 + return 0;
2375 + }
2376 +-EXPORT_SYMBOL_GPL(can_rx_offload_irq_queue_err_skb);
2377 ++EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail);
2378 +
2379 + static int can_rx_offload_init_queue(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight)
2380 + {
2381 +diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
2382 +index 53e320c92a8b..ddaf46239e39 100644
2383 +--- a/drivers/net/can/spi/hi311x.c
2384 ++++ b/drivers/net/can/spi/hi311x.c
2385 +@@ -760,7 +760,7 @@ static int hi3110_open(struct net_device *net)
2386 + {
2387 + struct hi3110_priv *priv = netdev_priv(net);
2388 + struct spi_device *spi = priv->spi;
2389 +- unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_RISING;
2390 ++ unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_HIGH;
2391 + int ret;
2392 +
2393 + ret = open_candev(net);
2394 +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
2395 +index 5444e6213d45..64a794be7fcb 100644
2396 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
2397 ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
2398 +@@ -5997,7 +5997,8 @@ static int brcmf_construct_chaninfo(struct brcmf_cfg80211_info *cfg,
2399 + * for subsequent chanspecs.
2400 + */
2401 + channel->flags = IEEE80211_CHAN_NO_HT40 |
2402 +- IEEE80211_CHAN_NO_80MHZ;
2403 ++ IEEE80211_CHAN_NO_80MHZ |
2404 ++ IEEE80211_CHAN_NO_160MHZ;
2405 + ch.bw = BRCMU_CHAN_BW_20;
2406 + cfg->d11inf.encchspec(&ch);
2407 + chaninfo = ch.chspec;
2408 +diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
2409 +index cb5f32c1d705..0b3b1223cff7 100644
2410 +--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
2411 ++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
2412 +@@ -6,6 +6,7 @@
2413 + * GPL LICENSE SUMMARY
2414 + *
2415 + * Copyright(c) 2017 Intel Deutschland GmbH
2416 ++ * Copyright(c) 2018 Intel Corporation
2417 + *
2418 + * This program is free software; you can redistribute it and/or modify
2419 + * it under the terms of version 2 of the GNU General Public License as
2420 +@@ -29,6 +30,7 @@
2421 + * BSD LICENSE
2422 + *
2423 + * Copyright(c) 2017 Intel Deutschland GmbH
2424 ++ * Copyright(c) 2018 Intel Corporation
2425 + * All rights reserved.
2426 + *
2427 + * Redistribution and use in source and binary forms, with or without
2428 +@@ -84,7 +86,7 @@
2429 + #define ACPI_WRDS_WIFI_DATA_SIZE (ACPI_SAR_TABLE_SIZE + 2)
2430 + #define ACPI_EWRD_WIFI_DATA_SIZE ((ACPI_SAR_PROFILE_NUM - 1) * \
2431 + ACPI_SAR_TABLE_SIZE + 3)
2432 +-#define ACPI_WGDS_WIFI_DATA_SIZE 18
2433 ++#define ACPI_WGDS_WIFI_DATA_SIZE 19
2434 + #define ACPI_WRDD_WIFI_DATA_SIZE 2
2435 + #define ACPI_SPLC_WIFI_DATA_SIZE 2
2436 +
2437 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
2438 +index 48a3611d6a31..4d49a1a3f504 100644
2439 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
2440 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
2441 +@@ -880,7 +880,7 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
2442 + IWL_DEBUG_RADIO(mvm, "Sending GEO_TX_POWER_LIMIT\n");
2443 +
2444 + BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES * ACPI_WGDS_NUM_BANDS *
2445 +- ACPI_WGDS_TABLE_SIZE != ACPI_WGDS_WIFI_DATA_SIZE);
2446 ++ ACPI_WGDS_TABLE_SIZE + 1 != ACPI_WGDS_WIFI_DATA_SIZE);
2447 +
2448 + BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES > IWL_NUM_GEO_PROFILES);
2449 +
2450 +@@ -915,6 +915,11 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm)
2451 + return -ENOENT;
2452 + }
2453 +
2454 ++static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm)
2455 ++{
2456 ++ return -ENOENT;
2457 ++}
2458 ++
2459 + static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
2460 + {
2461 + return 0;
2462 +@@ -941,8 +946,11 @@ static int iwl_mvm_sar_init(struct iwl_mvm *mvm)
2463 + IWL_DEBUG_RADIO(mvm,
2464 + "WRDS SAR BIOS table invalid or unavailable. (%d)\n",
2465 + ret);
2466 +- /* if not available, don't fail and don't bother with EWRD */
2467 +- return 0;
2468 ++ /*
2469 ++ * If not available, don't fail and don't bother with EWRD.
2470 ++ * Return 1 to indicate that WGDS cannot be used either.
2471 ++ */
2472 ++ return 1;
2473 + }
2474 +
2475 + ret = iwl_mvm_sar_get_ewrd_table(mvm);
2476 +@@ -955,9 +963,13 @@ static int iwl_mvm_sar_init(struct iwl_mvm *mvm)
2477 + /* choose profile 1 (WRDS) as default for both chains */
2478 + ret = iwl_mvm_sar_select_profile(mvm, 1, 1);
2479 +
2480 +- /* if we don't have profile 0 from BIOS, just skip it */
2481 ++ /*
2482 ++ * If we don't have profile 0 from BIOS, just skip it. This
2483 ++ * means that SAR Geo will not be enabled either, even if we
2484 ++ * have other valid profiles.
2485 ++ */
2486 + if (ret == -ENOENT)
2487 +- return 0;
2488 ++ return 1;
2489 +
2490 + return ret;
2491 + }
2492 +@@ -1155,11 +1167,19 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
2493 + iwl_mvm_unref(mvm, IWL_MVM_REF_UCODE_DOWN);
2494 +
2495 + ret = iwl_mvm_sar_init(mvm);
2496 +- if (ret)
2497 +- goto error;
2498 ++ if (ret == 0) {
2499 ++ ret = iwl_mvm_sar_geo_init(mvm);
2500 ++ } else if (ret > 0 && !iwl_mvm_sar_get_wgds_table(mvm)) {
2501 ++ /*
2502 ++ * If basic SAR is not available, we check for WGDS,
2503 ++ * which should *not* be available either. If it is
2504 ++ * available, issue an error, because we can't use SAR
2505 ++ * Geo without basic SAR.
2506 ++ */
2507 ++ IWL_ERR(mvm, "BIOS contains WGDS but no WRDS\n");
2508 ++ }
2509 +
2510 +- ret = iwl_mvm_sar_geo_init(mvm);
2511 +- if (ret)
2512 ++ if (ret < 0)
2513 + goto error;
2514 +
2515 + iwl_mvm_leds_sync(mvm);
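iwl_mvm_sar_init now uses a three-way return: 0 for success, a positive value when the BIOS tables are simply absent (not fatal, but SAR Geo must be skipped), and a negative value for hard errors. A compact sketch of that calling convention with stub functions; the -2 return merely models -ENOENT:

#include <stdio.h>

/* stubs modelling the three outcomes the caller must distinguish */
static int sar_init(int have_wrds)	{ return have_wrds ? 0 : 1; }
static int get_wgds_table(void)		{ return -2; /* models -ENOENT */ }
static int geo_init(void)		{ return 0; }

static int bring_up(int have_wrds)
{
	int ret = sar_init(have_wrds);

	if (ret == 0)
		ret = geo_init();
	else if (ret > 0 && !get_wgds_table())
		fprintf(stderr, "BIOS contains WGDS but no WRDS\n");

	return ret < 0 ? ret : 0;	/* only negative values are fatal */
}

int main(void)
{
	printf("with WRDS: %d, without: %d\n", bring_up(1), bring_up(0));
	return 0;
}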
2516 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
2517 +index 155cc2ac0120..afed549f5645 100644
2518 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
2519 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
2520 +@@ -306,8 +306,12 @@ struct ieee80211_regdomain *iwl_mvm_get_regdomain(struct wiphy *wiphy,
2521 + goto out;
2522 + }
2523 +
2524 +- if (changed)
2525 +- *changed = (resp->status == MCC_RESP_NEW_CHAN_PROFILE);
2526 ++ if (changed) {
2527 ++ u32 status = le32_to_cpu(resp->status);
2528 ++
2529 ++ *changed = (status == MCC_RESP_NEW_CHAN_PROFILE ||
2530 ++ status == MCC_RESP_ILLEGAL);
2531 ++ }
2532 +
2533 + regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg,
2534 + __le32_to_cpu(resp->n_channels),
2535 +@@ -4416,10 +4420,6 @@ static void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw,
2536 + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
2537 + }
2538 +
2539 +- if (!fw_has_capa(&mvm->fw->ucode_capa,
2540 +- IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS))
2541 +- return;
2542 +-
2543 + /* if beacon filtering isn't on mac80211 does it anyway */
2544 + if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER))
2545 + return;
2546 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
2547 +index cf48517944ec..f2579c94ffdb 100644
2548 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
2549 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
2550 +@@ -545,9 +545,8 @@ iwl_mvm_update_mcc(struct iwl_mvm *mvm, const char *alpha2,
2551 + }
2552 +
2553 + IWL_DEBUG_LAR(mvm,
2554 +- "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') change: %d n_chans: %d\n",
2555 +- status, mcc, mcc >> 8, mcc & 0xff,
2556 +- !!(status == MCC_RESP_NEW_CHAN_PROFILE), n_channels);
2557 ++ "MCC response status: 0x%x. new MCC: 0x%x ('%c%c') n_chans: %d\n",
2558 ++ status, mcc, mcc >> 8, mcc & 0xff, n_channels);
2559 +
2560 + exit:
2561 + iwl_free_resp(&cmd);
2562 +diff --git a/drivers/opp/ti-opp-supply.c b/drivers/opp/ti-opp-supply.c
2563 +index 9e5a9a3112c9..3f4fb4dbbe33 100644
2564 +--- a/drivers/opp/ti-opp-supply.c
2565 ++++ b/drivers/opp/ti-opp-supply.c
2566 +@@ -288,7 +288,10 @@ static int ti_opp_supply_set_opp(struct dev_pm_set_opp_data *data)
2567 + int ret;
2568 +
2569 + vdd_uv = _get_optimal_vdd_voltage(dev, &opp_data,
2570 +- new_supply_vbb->u_volt);
2571 ++ new_supply_vdd->u_volt);
2572 ++
2573 ++ if (new_supply_vdd->u_volt_min < vdd_uv)
2574 ++ new_supply_vdd->u_volt_min = vdd_uv;
2575 +
2576 + /* Scaling up? Scale voltage before frequency */
2577 + if (freq > old_freq) {
2578 +diff --git a/drivers/pinctrl/meson/pinctrl-meson-gxbb.c b/drivers/pinctrl/meson/pinctrl-meson-gxbb.c
2579 +index 4ceb06f8a33c..4edeb4cae72a 100644
2580 +--- a/drivers/pinctrl/meson/pinctrl-meson-gxbb.c
2581 ++++ b/drivers/pinctrl/meson/pinctrl-meson-gxbb.c
2582 +@@ -830,7 +830,7 @@ static struct meson_bank meson_gxbb_periphs_banks[] = {
2583 +
2584 + static struct meson_bank meson_gxbb_aobus_banks[] = {
2585 + /* name first last irq pullen pull dir out in */
2586 +- BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
2587 ++ BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
2588 + };
2589 +
2590 + static struct meson_pinctrl_data meson_gxbb_periphs_pinctrl_data = {
2591 +diff --git a/drivers/pinctrl/meson/pinctrl-meson-gxl.c b/drivers/pinctrl/meson/pinctrl-meson-gxl.c
2592 +index 7dae1d7bf6b0..158f618f1695 100644
2593 +--- a/drivers/pinctrl/meson/pinctrl-meson-gxl.c
2594 ++++ b/drivers/pinctrl/meson/pinctrl-meson-gxl.c
2595 +@@ -807,7 +807,7 @@ static struct meson_bank meson_gxl_periphs_banks[] = {
2596 +
2597 + static struct meson_bank meson_gxl_aobus_banks[] = {
2598 + /* name first last irq pullen pull dir out in */
2599 +- BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
2600 ++ BANK("AO", GPIOAO_0, GPIOAO_9, 0, 9, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
2601 + };
2602 +
2603 + static struct meson_pinctrl_data meson_gxl_periphs_pinctrl_data = {
2604 +diff --git a/drivers/pinctrl/meson/pinctrl-meson.c b/drivers/pinctrl/meson/pinctrl-meson.c
2605 +index 29a458da78db..4f3ab18636a3 100644
2606 +--- a/drivers/pinctrl/meson/pinctrl-meson.c
2607 ++++ b/drivers/pinctrl/meson/pinctrl-meson.c
2608 +@@ -192,7 +192,7 @@ static int meson_pinconf_set(struct pinctrl_dev *pcdev, unsigned int pin,
2609 + dev_dbg(pc->dev, "pin %u: disable bias\n", pin);
2610 +
2611 + meson_calc_reg_and_bit(bank, pin, REG_PULL, &reg, &bit);
2612 +- ret = regmap_update_bits(pc->reg_pull, reg,
2613 ++ ret = regmap_update_bits(pc->reg_pullen, reg,
2614 + BIT(bit), 0);
2615 + if (ret)
2616 + return ret;
2617 +diff --git a/drivers/pinctrl/meson/pinctrl-meson8.c b/drivers/pinctrl/meson/pinctrl-meson8.c
2618 +index c6d79315218f..86466173114d 100644
2619 +--- a/drivers/pinctrl/meson/pinctrl-meson8.c
2620 ++++ b/drivers/pinctrl/meson/pinctrl-meson8.c
2621 +@@ -1053,7 +1053,7 @@ static struct meson_bank meson8_cbus_banks[] = {
2622 +
2623 + static struct meson_bank meson8_aobus_banks[] = {
2624 + /* name first last irq pullen pull dir out in */
2625 +- BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
2626 ++ BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
2627 + };
2628 +
2629 + static struct meson_pinctrl_data meson8_cbus_pinctrl_data = {
2630 +diff --git a/drivers/pinctrl/meson/pinctrl-meson8b.c b/drivers/pinctrl/meson/pinctrl-meson8b.c
2631 +index bb2a30964fc6..647ad15d5c3c 100644
2632 +--- a/drivers/pinctrl/meson/pinctrl-meson8b.c
2633 ++++ b/drivers/pinctrl/meson/pinctrl-meson8b.c
2634 +@@ -906,7 +906,7 @@ static struct meson_bank meson8b_cbus_banks[] = {
2635 +
2636 + static struct meson_bank meson8b_aobus_banks[] = {
2637 + /* name first lastc irq pullen pull dir out in */
2638 +- BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 0, 0, 16, 0, 0, 0, 16, 1, 0),
2639 ++ BANK("AO", GPIOAO_0, GPIO_TEST_N, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0),
2640 + };
2641 +
2642 + static struct meson_pinctrl_data meson8b_cbus_pinctrl_data = {
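The BANK table fixes above swap the pull-enable and pull-value bit offsets for the AO banks, and the pinctrl-meson.c hunk makes bias-disable write the pull-enable regmap rather than the pull one. A toy sketch of how a bank descriptor maps a pin to a register/bit pair per function; the field layout and offsets here are illustrative only:

#include <stdio.h>

/* toy bank descriptor: a register word and first-bit offset per function */
struct bank { int pullen_reg, pullen_bit, pull_reg, pull_bit; };

/* AO bank after the fix: pull-enable starts at bit 16, pull value at bit 0 */
static const struct bank ao = { 0, 16, 0, 0 };

static void calc_reg_and_bit(const struct bank *b, int pin,
			     int want_pull, int *reg, int *bit)
{
	*reg = want_pull ? b->pull_reg : b->pullen_reg;
	*bit = (want_pull ? b->pull_bit : b->pullen_bit) + pin;
}

int main(void)
{
	int reg, bit;

	calc_reg_and_bit(&ao, 3, 0, &reg, &bit);
	printf("pin 3 pull-enable: reg %d bit %d\n", reg, bit);	/* bit 19 */
	calc_reg_and_bit(&ao, 3, 1, &reg, &bit);
	printf("pin 3 pull value:  reg %d bit %d\n", reg, bit);	/* bit 3 */
	return 0;
}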
2643 +diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
2644 +index df0c5776d49b..a5a19ff10535 100644
2645 +--- a/drivers/rtc/rtc-cmos.c
2646 ++++ b/drivers/rtc/rtc-cmos.c
2647 +@@ -257,6 +257,7 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
2648 + struct cmos_rtc *cmos = dev_get_drvdata(dev);
2649 + unsigned char rtc_control;
2650 +
2651 ++ /* This is not only an rtc_op, but is also called directly */
2652 + if (!is_valid_irq(cmos->irq))
2653 + return -EIO;
2654 +
2655 +@@ -452,6 +453,7 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
2656 + unsigned char mon, mday, hrs, min, sec, rtc_control;
2657 + int ret;
2658 +
2659 ++ /* This is not only an rtc_op, but is also called directly */
2660 + if (!is_valid_irq(cmos->irq))
2661 + return -EIO;
2662 +
2663 +@@ -516,9 +518,6 @@ static int cmos_alarm_irq_enable(struct device *dev, unsigned int enabled)
2664 + struct cmos_rtc *cmos = dev_get_drvdata(dev);
2665 + unsigned long flags;
2666 +
2667 +- if (!is_valid_irq(cmos->irq))
2668 +- return -EINVAL;
2669 +-
2670 + spin_lock_irqsave(&rtc_lock, flags);
2671 +
2672 + if (enabled)
2673 +@@ -579,6 +578,12 @@ static const struct rtc_class_ops cmos_rtc_ops = {
2674 + .alarm_irq_enable = cmos_alarm_irq_enable,
2675 + };
2676 +
2677 ++static const struct rtc_class_ops cmos_rtc_ops_no_alarm = {
2678 ++ .read_time = cmos_read_time,
2679 ++ .set_time = cmos_set_time,
2680 ++ .proc = cmos_procfs,
2681 ++};
2682 ++
2683 + /*----------------------------------------------------------------*/
2684 +
2685 + /*
2686 +@@ -855,9 +860,12 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
2687 + dev_dbg(dev, "IRQ %d is already in use\n", rtc_irq);
2688 + goto cleanup1;
2689 + }
2690 ++
2691 ++ cmos_rtc.rtc->ops = &cmos_rtc_ops;
2692 ++ } else {
2693 ++ cmos_rtc.rtc->ops = &cmos_rtc_ops_no_alarm;
2694 + }
2695 +
2696 +- cmos_rtc.rtc->ops = &cmos_rtc_ops;
2697 + cmos_rtc.rtc->nvram_old_abi = true;
2698 + retval = rtc_register_device(cmos_rtc.rtc);
2699 + if (retval)
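Rather than checking the IRQ in every alarm callback, the driver now registers a second ops table without alarm methods when no IRQ is wired up, letting the RTC core reject alarm requests generically. A minimal sketch of selecting between two ops tables:

#include <stdio.h>

struct rtc_ops {
	int (*read_time)(void);
	int (*set_alarm)(void);	/* NULL when no alarm IRQ is available */
};

static int read_time(void) { return 0; }
static int set_alarm(void) { return 0; }

static const struct rtc_ops ops_full     = { read_time, set_alarm };
static const struct rtc_ops ops_no_alarm = { read_time, NULL };

int main(void)
{
	int have_irq = 0;
	const struct rtc_ops *ops = have_irq ? &ops_full : &ops_no_alarm;

	/* core code tests the method pointer instead of the IRQ number */
	printf("alarm supported: %s\n", ops->set_alarm ? "yes" : "no");
	return 0;
}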
2700 +diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
2701 +index 9f99a0966550..7cb786d76e3c 100644
2702 +--- a/drivers/rtc/rtc-pcf2127.c
2703 ++++ b/drivers/rtc/rtc-pcf2127.c
2704 +@@ -303,6 +303,9 @@ static int pcf2127_i2c_gather_write(void *context,
2705 + memcpy(buf + 1, val, val_size);
2706 +
2707 + ret = i2c_master_send(client, buf, val_size + 1);
2708 ++
2709 ++ kfree(buf);
2710 ++
2711 + if (ret != val_size + 1)
2712 + return ret < 0 ? ret : -EIO;
2713 +
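The pcf2127 fix frees the temporary gather-write buffer on every path, not just implicitly via the error return. A standalone sketch of the corrected flow, where fake_send stands in for i2c_master_send:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* stand-in for i2c_master_send(): pretend the whole message went out */
static int fake_send(const unsigned char *buf, size_t n)
{
	return (int)n;
}

/* gather write: prepend the register address, send, then always free */
static int gather_write(unsigned char reg, const void *val, size_t n)
{
	unsigned char *buf = malloc(n + 1);
	int ret;

	if (!buf)
		return -1;
	buf[0] = reg;
	memcpy(buf + 1, val, n);

	ret = fake_send(buf, n + 1);
	free(buf);		/* freed on every path: the leak the patch plugs */

	return ret == (int)(n + 1) ? 0 : -1;
}

int main(void)
{
	unsigned char v[2] = { 0x12, 0x34 };

	printf("write: %d\n", gather_write(0x03, v, sizeof(v)));
	return 0;
}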
2714 +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
2715 +index 8f60f0e04599..410eccf0bc5e 100644
2716 +--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
2717 ++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
2718 +@@ -904,11 +904,9 @@ static void start_delivery_v1_hw(struct hisi_sas_dq *dq)
2719 + {
2720 + struct hisi_hba *hisi_hba = dq->hisi_hba;
2721 + struct hisi_sas_slot *s, *s1, *s2 = NULL;
2722 +- struct list_head *dq_list;
2723 + int dlvry_queue = dq->id;
2724 + int wp;
2725 +
2726 +- dq_list = &dq->list;
2727 + list_for_each_entry_safe(s, s1, &dq->list, delivery) {
2728 + if (!s->ready)
2729 + break;
2730 +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
2731 +index 9c5c5a601332..1c4ea58da1ae 100644
2732 +--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
2733 ++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
2734 +@@ -1666,11 +1666,9 @@ static void start_delivery_v2_hw(struct hisi_sas_dq *dq)
2735 + {
2736 + struct hisi_hba *hisi_hba = dq->hisi_hba;
2737 + struct hisi_sas_slot *s, *s1, *s2 = NULL;
2738 +- struct list_head *dq_list;
2739 + int dlvry_queue = dq->id;
2740 + int wp;
2741 +
2742 +- dq_list = &dq->list;
2743 + list_for_each_entry_safe(s, s1, &dq->list, delivery) {
2744 + if (!s->ready)
2745 + break;
2746 +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
2747 +index 08b503e274b8..687ff61bba9f 100644
2748 +--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
2749 ++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
2750 +@@ -883,11 +883,9 @@ static void start_delivery_v3_hw(struct hisi_sas_dq *dq)
2751 + {
2752 + struct hisi_hba *hisi_hba = dq->hisi_hba;
2753 + struct hisi_sas_slot *s, *s1, *s2 = NULL;
2754 +- struct list_head *dq_list;
2755 + int dlvry_queue = dq->id;
2756 + int wp;
2757 +
2758 +- dq_list = &dq->list;
2759 + list_for_each_entry_safe(s, s1, &dq->list, delivery) {
2760 + if (!s->ready)
2761 + break;
2762 +diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
2763 +index aec5b10a8c85..ca6c3982548d 100644
2764 +--- a/drivers/scsi/lpfc/lpfc_debugfs.c
2765 ++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
2766 +@@ -700,6 +700,8 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
2767 + rport = lpfc_ndlp_get_nrport(ndlp);
2768 + if (rport)
2769 + nrport = rport->remoteport;
2770 ++ else
2771 ++ nrport = NULL;
2772 + spin_unlock(&phba->hbalock);
2773 + if (!nrport)
2774 + continue;
2775 +diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
2776 +index 431742201709..3ad460219fd6 100644
2777 +--- a/drivers/tty/n_tty.c
2778 ++++ b/drivers/tty/n_tty.c
2779 +@@ -152,17 +152,28 @@ static inline unsigned char *echo_buf_addr(struct n_tty_data *ldata, size_t i)
2780 + return &ldata->echo_buf[i & (N_TTY_BUF_SIZE - 1)];
2781 + }
2782 +
2783 ++/* If we are not echoing the data, perhaps this is a secret so erase it */
2784 ++static void zero_buffer(struct tty_struct *tty, u8 *buffer, int size)
2785 ++{
2786 ++ bool icanon = !!L_ICANON(tty);
2787 ++ bool no_echo = !L_ECHO(tty);
2788 ++
2789 ++ if (icanon && no_echo)
2790 ++ memset(buffer, 0x00, size);
2791 ++}
2792 ++
2793 + static int tty_copy_to_user(struct tty_struct *tty, void __user *to,
2794 + size_t tail, size_t n)
2795 + {
2796 + struct n_tty_data *ldata = tty->disc_data;
2797 + size_t size = N_TTY_BUF_SIZE - tail;
2798 +- const void *from = read_buf_addr(ldata, tail);
2799 ++ void *from = read_buf_addr(ldata, tail);
2800 + int uncopied;
2801 +
2802 + if (n > size) {
2803 + tty_audit_add_data(tty, from, size);
2804 + uncopied = copy_to_user(to, from, size);
2805 ++ zero_buffer(tty, from, size - uncopied);
2806 + if (uncopied)
2807 + return uncopied;
2808 + to += size;
2809 +@@ -171,7 +182,9 @@ static int tty_copy_to_user(struct tty_struct *tty, void __user *to,
2810 + }
2811 +
2812 + tty_audit_add_data(tty, from, n);
2813 +- return copy_to_user(to, from, n);
2814 ++ uncopied = copy_to_user(to, from, n);
2815 ++ zero_buffer(tty, from, n - uncopied);
2816 ++ return uncopied;
2817 + }
2818 +
2819 + /**
2820 +@@ -1960,11 +1973,12 @@ static int copy_from_read_buf(struct tty_struct *tty,
2821 + n = min(head - ldata->read_tail, N_TTY_BUF_SIZE - tail);
2822 + n = min(*nr, n);
2823 + if (n) {
2824 +- const unsigned char *from = read_buf_addr(ldata, tail);
2825 ++ unsigned char *from = read_buf_addr(ldata, tail);
2826 + retval = copy_to_user(*b, from, n);
2827 + n -= retval;
2828 + is_eof = n == 1 && *from == EOF_CHAR(tty);
2829 + tty_audit_add_data(tty, from, n);
2830 ++ zero_buffer(tty, from, n);
2831 + smp_store_release(&ldata->read_tail, ldata->read_tail + n);
2832 + /* Turn single EOF into zero-length read */
2833 + if (L_EXTPROC(tty) && ldata->icanon && is_eof &&
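zero_buffer above wipes tty read-buffer contents once they have been copied out with echo off in canonical mode, on the theory that unechoed input may be a password. A small sketch of copy-then-wipe, where copy_out mimics copy_to_user's convention of returning the bytes not copied:

#include <stdio.h>
#include <string.h>

/* mimics copy_to_user(): returns the number of bytes NOT copied */
static size_t copy_out(char *dst, const char *src, size_t n)
{
	memcpy(dst, src, n);
	return 0;
}

/* wipe the source once the data has left the buffer and echo is off */
static void zero_buffer(int echoing, char *buf, size_t n)
{
	if (!echoing)			/* possibly a password: erase it */
		memset(buf, 0, n);
}

int main(void)
{
	char ring[8] = "secret!";
	char user[8];
	size_t uncopied = copy_out(user, ring, sizeof(ring));

	zero_buffer(0, ring, sizeof(ring) - uncopied);
	printf("user=%s ring[0]=%d\n", user, ring[0]);	/* ring is wiped */
	return 0;
}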
2834 +diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
2835 +index c996b6859c5e..ae3ce330200e 100644
2836 +--- a/drivers/tty/tty_buffer.c
2837 ++++ b/drivers/tty/tty_buffer.c
2838 +@@ -468,11 +468,15 @@ receive_buf(struct tty_port *port, struct tty_buffer *head, int count)
2839 + {
2840 + unsigned char *p = char_buf_ptr(head, head->read);
2841 + char *f = NULL;
2842 ++ int n;
2843 +
2844 + if (~head->flags & TTYB_NORMAL)
2845 + f = flag_buf_ptr(head, head->read);
2846 +
2847 +- return port->client_ops->receive_buf(port, p, f, count);
2848 ++ n = port->client_ops->receive_buf(port, p, f, count);
2849 ++ if (n > 0)
2850 ++ memset(p, 0, n);
2851 ++ return n;
2852 + }
2853 +
2854 + /**
2855 +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
2856 +index 6e0823790bee..f79979ae482a 100644
2857 +--- a/drivers/usb/core/hub.c
2858 ++++ b/drivers/usb/core/hub.c
2859 +@@ -2847,7 +2847,9 @@ static int hub_port_reset(struct usb_hub *hub, int port1,
2860 + USB_PORT_FEAT_C_BH_PORT_RESET);
2861 + usb_clear_port_feature(hub->hdev, port1,
2862 + USB_PORT_FEAT_C_PORT_LINK_STATE);
2863 +- usb_clear_port_feature(hub->hdev, port1,
2864 ++
2865 ++ if (udev)
2866 ++ usb_clear_port_feature(hub->hdev, port1,
2867 + USB_PORT_FEAT_C_CONNECTION);
2868 +
2869 + /*
2870 +diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
2871 +index 88c80fcc39f5..fec97465ccac 100644
2872 +--- a/drivers/usb/dwc3/core.c
2873 ++++ b/drivers/usb/dwc3/core.c
2874 +@@ -1499,6 +1499,7 @@ static int dwc3_probe(struct platform_device *pdev)
2875 +
2876 + err5:
2877 + dwc3_event_buffers_cleanup(dwc);
2878 ++ dwc3_ulpi_exit(dwc);
2879 +
2880 + err4:
2881 + dwc3_free_scratch_buffers(dwc);
2882 +diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
2883 +index 1286076a8890..842795856bf4 100644
2884 +--- a/drivers/usb/dwc3/dwc3-pci.c
2885 ++++ b/drivers/usb/dwc3/dwc3-pci.c
2886 +@@ -283,8 +283,10 @@ err:
2887 + static void dwc3_pci_remove(struct pci_dev *pci)
2888 + {
2889 + struct dwc3_pci *dwc = pci_get_drvdata(pci);
2890 ++ struct pci_dev *pdev = dwc->pci;
2891 +
2892 +- gpiod_remove_lookup_table(&platform_bytcr_gpios);
2893 ++ if (pdev->device == PCI_DEVICE_ID_INTEL_BYT)
2894 ++ gpiod_remove_lookup_table(&platform_bytcr_gpios);
2895 + #ifdef CONFIG_PM
2896 + cancel_work_sync(&dwc->wakeup_work);
2897 + #endif
2898 +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
2899 +index 2b53194081ba..2de1a3971a26 100644
2900 +--- a/drivers/usb/dwc3/gadget.c
2901 ++++ b/drivers/usb/dwc3/gadget.c
2902 +@@ -1072,7 +1072,7 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
2903 + /* Now prepare one extra TRB to align transfer size */
2904 + trb = &dep->trb_pool[dep->trb_enqueue];
2905 + __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr,
2906 +- maxp - rem, false, 0,
2907 ++ maxp - rem, false, 1,
2908 + req->request.stream_id,
2909 + req->request.short_not_ok,
2910 + req->request.no_interrupt);
2911 +@@ -1116,7 +1116,7 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
2912 + /* Now prepare one extra TRB to align transfer size */
2913 + trb = &dep->trb_pool[dep->trb_enqueue];
2914 + __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp - rem,
2915 +- false, 0, req->request.stream_id,
2916 ++ false, 1, req->request.stream_id,
2917 + req->request.short_not_ok,
2918 + req->request.no_interrupt);
2919 + } else if (req->request.zero && req->request.length &&
2920 +@@ -1132,7 +1132,7 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
2921 + /* Now prepare one extra TRB to handle ZLP */
2922 + trb = &dep->trb_pool[dep->trb_enqueue];
2923 + __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0,
2924 +- false, 0, req->request.stream_id,
2925 ++ false, 1, req->request.stream_id,
2926 + req->request.short_not_ok,
2927 + req->request.no_interrupt);
2928 + } else {
2929 +@@ -2250,7 +2250,7 @@ static int dwc3_gadget_ep_reclaim_completed_trb(struct dwc3_ep *dep,
2930 + * with one TRB pending in the ring. We need to manually clear HWO bit
2931 + * from that TRB.
2932 + */
2933 +- if ((req->zero || req->unaligned) && (trb->ctrl & DWC3_TRB_CTRL_HWO)) {
2934 ++ if ((req->zero || req->unaligned) && !(trb->ctrl & DWC3_TRB_CTRL_CHN)) {
2935 + trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
2936 + return 1;
2937 + }
2938 +diff --git a/drivers/usb/host/xhci-histb.c b/drivers/usb/host/xhci-histb.c
2939 +index 27f00160332e..3c4abb5a1c3f 100644
2940 +--- a/drivers/usb/host/xhci-histb.c
2941 ++++ b/drivers/usb/host/xhci-histb.c
2942 +@@ -325,14 +325,16 @@ static int xhci_histb_remove(struct platform_device *dev)
2943 + struct xhci_hcd_histb *histb = platform_get_drvdata(dev);
2944 + struct usb_hcd *hcd = histb->hcd;
2945 + struct xhci_hcd *xhci = hcd_to_xhci(hcd);
2946 ++ struct usb_hcd *shared_hcd = xhci->shared_hcd;
2947 +
2948 + xhci->xhc_state |= XHCI_STATE_REMOVING;
2949 +
2950 +- usb_remove_hcd(xhci->shared_hcd);
2951 ++ usb_remove_hcd(shared_hcd);
2952 ++ xhci->shared_hcd = NULL;
2953 + device_wakeup_disable(&dev->dev);
2954 +
2955 + usb_remove_hcd(hcd);
2956 +- usb_put_hcd(xhci->shared_hcd);
2957 ++ usb_put_hcd(shared_hcd);
2958 +
2959 + xhci_histb_host_disable(histb);
2960 + usb_put_hcd(hcd);
2961 +diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
2962 +index 12eea73d9f20..94aca1b5ac8a 100644
2963 +--- a/drivers/usb/host/xhci-hub.c
2964 ++++ b/drivers/usb/host/xhci-hub.c
2965 +@@ -876,7 +876,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
2966 + status |= USB_PORT_STAT_SUSPEND;
2967 + }
2968 + if ((raw_port_status & PORT_PLS_MASK) == XDEV_RESUME &&
2969 +- !DEV_SUPERSPEED_ANY(raw_port_status)) {
2970 ++ !DEV_SUPERSPEED_ANY(raw_port_status) && hcd->speed < HCD_USB3) {
2971 + if ((raw_port_status & PORT_RESET) ||
2972 + !(raw_port_status & PORT_PE))
2973 + return 0xffffffff;
2974 +@@ -921,7 +921,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
2975 + time_left = wait_for_completion_timeout(
2976 + &bus_state->rexit_done[wIndex],
2977 + msecs_to_jiffies(
2978 +- XHCI_MAX_REXIT_TIMEOUT));
2979 ++ XHCI_MAX_REXIT_TIMEOUT_MS));
2980 + spin_lock_irqsave(&xhci->lock, flags);
2981 +
2982 + if (time_left) {
2983 +@@ -935,7 +935,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
2984 + } else {
2985 + int port_status = readl(port->addr);
2986 + xhci_warn(xhci, "Port resume took longer than %i msec, port status = 0x%x\n",
2987 +- XHCI_MAX_REXIT_TIMEOUT,
2988 ++ XHCI_MAX_REXIT_TIMEOUT_MS,
2989 + port_status);
2990 + status |= USB_PORT_STAT_SUSPEND;
2991 + clear_bit(wIndex, &bus_state->rexit_ports);
2992 +@@ -1474,15 +1474,18 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
2993 + unsigned long flags;
2994 + struct xhci_hub *rhub;
2995 + struct xhci_port **ports;
2996 ++ u32 portsc_buf[USB_MAXCHILDREN];
2997 ++ bool wake_enabled;
2998 +
2999 + rhub = xhci_get_rhub(hcd);
3000 + ports = rhub->ports;
3001 + max_ports = rhub->num_ports;
3002 + bus_state = &xhci->bus_state[hcd_index(hcd)];
3003 ++ wake_enabled = hcd->self.root_hub->do_remote_wakeup;
3004 +
3005 + spin_lock_irqsave(&xhci->lock, flags);
3006 +
3007 +- if (hcd->self.root_hub->do_remote_wakeup) {
3008 ++ if (wake_enabled) {
3009 + if (bus_state->resuming_ports || /* USB2 */
3010 + bus_state->port_remote_wakeup) { /* USB3 */
3011 + spin_unlock_irqrestore(&xhci->lock, flags);
3012 +@@ -1490,26 +1493,36 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
3013 + return -EBUSY;
3014 + }
3015 + }
3016 +-
3017 +- port_index = max_ports;
3018 ++ /*
3019 ++ * Prepare ports for suspend, but don't write anything before all ports
3020 ++ * are checked and we know bus suspend can proceed
3021 ++ */
3022 + bus_state->bus_suspended = 0;
3023 ++ port_index = max_ports;
3024 + while (port_index--) {
3025 +- /* suspend the port if the port is not suspended */
3026 + u32 t1, t2;
3027 +- int slot_id;
3028 +
3029 + t1 = readl(ports[port_index]->addr);
3030 + t2 = xhci_port_state_to_neutral(t1);
3031 ++ portsc_buf[port_index] = 0;
3032 +
3033 +- if ((t1 & PORT_PE) && !(t1 & PORT_PLS_MASK)) {
3034 +- xhci_dbg(xhci, "port %d not suspended\n", port_index);
3035 +- slot_id = xhci_find_slot_id_by_port(hcd, xhci,
3036 +- port_index + 1);
3037 +- if (slot_id) {
3038 ++ /* Bail out if a USB3 port has a new device in link training */
3039 ++ if ((t1 & PORT_PLS_MASK) == XDEV_POLLING) {
3040 ++ bus_state->bus_suspended = 0;
3041 ++ spin_unlock_irqrestore(&xhci->lock, flags);
3042 ++ xhci_dbg(xhci, "Bus suspend bailout, port in polling\n");
3043 ++ return -EBUSY;
3044 ++ }
3045 ++
3046 ++ /* suspend ports in U0, or bail out for new connect changes */
3047 ++ if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
3048 ++ if ((t1 & PORT_CSC) && wake_enabled) {
3049 ++ bus_state->bus_suspended = 0;
3050 + spin_unlock_irqrestore(&xhci->lock, flags);
3051 +- xhci_stop_device(xhci, slot_id, 1);
3052 +- spin_lock_irqsave(&xhci->lock, flags);
3053 ++ xhci_dbg(xhci, "Bus suspend bailout, port connect change\n");
3054 ++ return -EBUSY;
3055 + }
3056 ++ xhci_dbg(xhci, "port %d not suspended\n", port_index);
3057 + t2 &= ~PORT_PLS_MASK;
3058 + t2 |= PORT_LINK_STROBE | XDEV_U3;
3059 + set_bit(port_index, &bus_state->bus_suspended);
3060 +@@ -1518,7 +1531,7 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
3061 + * including the USB 3.0 roothub, but only if CONFIG_PM
3062 + * is enabled, so also enable remote wake here.
3063 + */
3064 +- if (hcd->self.root_hub->do_remote_wakeup) {
3065 ++ if (wake_enabled) {
3066 + if (t1 & PORT_CONNECT) {
3067 + t2 |= PORT_WKOC_E | PORT_WKDISC_E;
3068 + t2 &= ~PORT_WKCONN_E;
3069 +@@ -1538,7 +1551,26 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
3070 +
3071 + t1 = xhci_port_state_to_neutral(t1);
3072 + if (t1 != t2)
3073 +- writel(t2, ports[port_index]->addr);
3074 ++ portsc_buf[port_index] = t2;
3075 ++ }
3076 ++
3077 ++ /* write port settings, stopping and suspending ports if needed */
3078 ++ port_index = max_ports;
3079 ++ while (port_index--) {
3080 ++ if (!portsc_buf[port_index])
3081 ++ continue;
3082 ++ if (test_bit(port_index, &bus_state->bus_suspended)) {
3083 ++ int slot_id;
3084 ++
3085 ++ slot_id = xhci_find_slot_id_by_port(hcd, xhci,
3086 ++ port_index + 1);
3087 ++ if (slot_id) {
3088 ++ spin_unlock_irqrestore(&xhci->lock, flags);
3089 ++ xhci_stop_device(xhci, slot_id, 1);
3090 ++ spin_lock_irqsave(&xhci->lock, flags);
3091 ++ }
3092 ++ }
3093 ++ writel(portsc_buf[port_index], ports[port_index]->addr);
3094 + }
3095 + hcd->state = HC_STATE_SUSPENDED;
3096 + bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
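
The rewritten xhci_bus_suspend() above is a two-phase operation: the first loop only reads PORTSC, can still bail out with -EBUSY having written nothing, and stages the values to apply in portsc_buf[]; the second loop commits them. A stripped-down sketch of the pattern (port_must_bail() and suspend_value() are placeholders, not xhci code):

/* Two-phase "validate, then commit" sketch. */
static int suspend_ports(void __iomem **addr, u32 *staged, int nports)
{
        int i;

        for (i = 0; i < nports; i++) {
                u32 portsc = readl(addr[i]);

                if (port_must_bail(portsc))     /* e.g. link training, connect change */
                        return -EBUSY;          /* nothing has been written yet */
                staged[i] = suspend_value(portsc);
        }
        for (i = 0; i < nports; i++)            /* point of no return */
                writel(staged[i], addr[i]);
        return 0;
}
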
3097 +diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
3098 +index 71d0d33c3286..60987c787e44 100644
3099 +--- a/drivers/usb/host/xhci-mtk.c
3100 ++++ b/drivers/usb/host/xhci-mtk.c
3101 +@@ -590,12 +590,14 @@ static int xhci_mtk_remove(struct platform_device *dev)
3102 + struct xhci_hcd_mtk *mtk = platform_get_drvdata(dev);
3103 + struct usb_hcd *hcd = mtk->hcd;
3104 + struct xhci_hcd *xhci = hcd_to_xhci(hcd);
3105 ++ struct usb_hcd *shared_hcd = xhci->shared_hcd;
3106 +
3107 +- usb_remove_hcd(xhci->shared_hcd);
3108 ++ usb_remove_hcd(shared_hcd);
3109 ++ xhci->shared_hcd = NULL;
3110 + device_init_wakeup(&dev->dev, false);
3111 +
3112 + usb_remove_hcd(hcd);
3113 +- usb_put_hcd(xhci->shared_hcd);
3114 ++ usb_put_hcd(shared_hcd);
3115 + usb_put_hcd(hcd);
3116 + xhci_mtk_sch_exit(mtk);
3117 + xhci_mtk_clks_disable(mtk);
3118 +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
3119 +index 51dd8e00c4f8..beeda27b3789 100644
3120 +--- a/drivers/usb/host/xhci-pci.c
3121 ++++ b/drivers/usb/host/xhci-pci.c
3122 +@@ -231,6 +231,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
3123 + if (pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241)
3124 + xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_7;
3125 +
3126 ++ if ((pdev->vendor == PCI_VENDOR_ID_BROADCOM ||
3127 ++ pdev->vendor == PCI_VENDOR_ID_CAVIUM) &&
3128 ++ pdev->device == 0x9026)
3129 ++ xhci->quirks |= XHCI_RESET_PLL_ON_DISCONNECT;
3130 ++
3131 + if (xhci->quirks & XHCI_RESET_ON_RESUME)
3132 + xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
3133 + "QUIRK: Resetting on resume");
3134 +@@ -356,6 +361,7 @@ static void xhci_pci_remove(struct pci_dev *dev)
3135 + if (xhci->shared_hcd) {
3136 + usb_remove_hcd(xhci->shared_hcd);
3137 + usb_put_hcd(xhci->shared_hcd);
3138 ++ xhci->shared_hcd = NULL;
3139 + }
3140 +
3141 + /* Workaround for spurious wakeups at shutdown with HSW */
3142 +diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
3143 +index 94e939249b2b..e5da8ce62914 100644
3144 +--- a/drivers/usb/host/xhci-plat.c
3145 ++++ b/drivers/usb/host/xhci-plat.c
3146 +@@ -359,14 +359,16 @@ static int xhci_plat_remove(struct platform_device *dev)
3147 + struct xhci_hcd *xhci = hcd_to_xhci(hcd);
3148 + struct clk *clk = xhci->clk;
3149 + struct clk *reg_clk = xhci->reg_clk;
3150 ++ struct usb_hcd *shared_hcd = xhci->shared_hcd;
3151 +
3152 + xhci->xhc_state |= XHCI_STATE_REMOVING;
3153 +
3154 +- usb_remove_hcd(xhci->shared_hcd);
3155 ++ usb_remove_hcd(shared_hcd);
3156 ++ xhci->shared_hcd = NULL;
3157 + usb_phy_shutdown(hcd->usb_phy);
3158 +
3159 + usb_remove_hcd(hcd);
3160 +- usb_put_hcd(xhci->shared_hcd);
3161 ++ usb_put_hcd(shared_hcd);
3162 +
3163 + clk_disable_unprepare(clk);
3164 + clk_disable_unprepare(reg_clk);
3165 +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
3166 +index cd4659703647..9ae17a666bdb 100644
3167 +--- a/drivers/usb/host/xhci-ring.c
3168 ++++ b/drivers/usb/host/xhci-ring.c
3169 +@@ -1517,6 +1517,35 @@ static void handle_device_notification(struct xhci_hcd *xhci,
3170 + usb_wakeup_notification(udev->parent, udev->portnum);
3171 + }
3172 +
3173 ++/*
3174 ++ * Quirk handler for an erratum seen on the Cavium ThunderX2 processor
3175 ++ * XHCI controller.
3176 ++ * As per ThunderX2 errata-129, a USB 2 device may come up as USB 1
3177 ++ * if a connection to a USB 1 device is followed by another connection
3178 ++ * to a USB 2 device.
3179 ++ *
3180 ++ * Reset the PHY after the USB device is disconnected if the device
3181 ++ * speed is less than HCD_USB3.
3182 ++ * Retry the reset sequence a maximum of 4 times, checking the PLL lock status.
3183 ++ *
3184 ++ */
3185 ++static void xhci_cavium_reset_phy_quirk(struct xhci_hcd *xhci)
3186 ++{
3187 ++ struct usb_hcd *hcd = xhci_to_hcd(xhci);
3188 ++ u32 pll_lock_check;
3189 ++ u32 retry_count = 4;
3190 ++
3191 ++ do {
3192 ++ /* Assert PHY reset */
3193 ++ writel(0x6F, hcd->regs + 0x1048);
3194 ++ udelay(10);
3195 ++ /* De-assert the PHY reset */
3196 ++ writel(0x7F, hcd->regs + 0x1048);
3197 ++ udelay(200);
3198 ++ pll_lock_check = readl(hcd->regs + 0x1070);
3199 ++ } while (!(pll_lock_check & 0x1) && --retry_count);
3200 ++}
3201 ++
3202 + static void handle_port_status(struct xhci_hcd *xhci,
3203 + union xhci_trb *event)
3204 + {
3205 +@@ -1552,6 +1581,13 @@ static void handle_port_status(struct xhci_hcd *xhci,
3206 + goto cleanup;
3207 + }
3208 +
3209 ++ /* We might get interrupts after shared_hcd is removed */
3210 ++ if (port->rhub == &xhci->usb3_rhub && xhci->shared_hcd == NULL) {
3211 ++ xhci_dbg(xhci, "ignore port event for removed USB3 hcd\n");
3212 ++ bogus_port_status = true;
3213 ++ goto cleanup;
3214 ++ }
3215 ++
3216 + hcd = port->rhub->hcd;
3217 + bus_state = &xhci->bus_state[hcd_index(hcd)];
3218 + hcd_portnum = port->hcd_portnum;
3219 +@@ -1635,7 +1671,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
3220 + * RExit to a disconnect state). If so, let the driver know it's
3221 + * out of the RExit state.
3222 + */
3223 +- if (!DEV_SUPERSPEED_ANY(portsc) &&
3224 ++ if (!DEV_SUPERSPEED_ANY(portsc) && hcd->speed < HCD_USB3 &&
3225 + test_and_clear_bit(hcd_portnum,
3226 + &bus_state->rexit_ports)) {
3227 + complete(&bus_state->rexit_done[hcd_portnum]);
3228 +@@ -1643,8 +1679,12 @@ static void handle_port_status(struct xhci_hcd *xhci,
3229 + goto cleanup;
3230 + }
3231 +
3232 +- if (hcd->speed < HCD_USB3)
3233 ++ if (hcd->speed < HCD_USB3) {
3234 + xhci_test_and_clear_bit(xhci, port, PORT_PLC);
3235 ++ if ((xhci->quirks & XHCI_RESET_PLL_ON_DISCONNECT) &&
3236 ++ (portsc & PORT_CSC) && !(portsc & PORT_CONNECT))
3237 ++ xhci_cavium_reset_phy_quirk(xhci);
3238 ++ }
3239 +
3240 + cleanup:
3241 + /* Update event ring dequeue pointer before dropping the lock */
3242 +@@ -2247,6 +2287,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
3243 + goto cleanup;
3244 + case COMP_RING_UNDERRUN:
3245 + case COMP_RING_OVERRUN:
3246 ++ case COMP_STOPPED_LENGTH_INVALID:
3247 + goto cleanup;
3248 + default:
3249 + xhci_err(xhci, "ERROR Transfer event for unknown stream ring slot %u ep %u\n",
3250 +diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
3251 +index 4b463e5202a4..b1cce989bd12 100644
3252 +--- a/drivers/usb/host/xhci-tegra.c
3253 ++++ b/drivers/usb/host/xhci-tegra.c
3254 +@@ -1240,6 +1240,7 @@ static int tegra_xusb_remove(struct platform_device *pdev)
3255 +
3256 + usb_remove_hcd(xhci->shared_hcd);
3257 + usb_put_hcd(xhci->shared_hcd);
3258 ++ xhci->shared_hcd = NULL;
3259 + usb_remove_hcd(tegra->hcd);
3260 + usb_put_hcd(tegra->hcd);
3261 +
3262 +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
3263 +index 0420eefa647a..c928dbbff881 100644
3264 +--- a/drivers/usb/host/xhci.c
3265 ++++ b/drivers/usb/host/xhci.c
3266 +@@ -719,8 +719,6 @@ static void xhci_stop(struct usb_hcd *hcd)
3267 +
3268 + /* Only halt host and free memory after both hcds are removed */
3269 + if (!usb_hcd_is_primary_hcd(hcd)) {
3270 +- /* usb core will free this hcd shortly, unset pointer */
3271 +- xhci->shared_hcd = NULL;
3272 + mutex_unlock(&xhci->mutex);
3273 + return;
3274 + }
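
xhci-histb, xhci-mtk, xhci-plat, xhci-pci and xhci-tegra all receive the same treatment: the USB 3 (shared) hcd is cached in a local variable, xhci->shared_hcd is cleared as soon as that hcd has been removed, and handle_port_status() in xhci-ring.c learns to drop USB 3 port events once it reads NULL. The core of the pattern, as a sketch:

/* Removal-order sketch: clear the published pointer before dropping
 * the last reference, so late interrupts see "already removed". */
struct usb_hcd *shared_hcd = xhci->shared_hcd;

usb_remove_hcd(shared_hcd);
xhci->shared_hcd = NULL;        /* USB 3 port events now bail out */
usb_remove_hcd(hcd);
usb_put_hcd(shared_hcd);        /* drop the reference via the local copy */
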
3275 +diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
3276 +index 6230a578324c..e936e4c8af98 100644
3277 +--- a/drivers/usb/host/xhci.h
3278 ++++ b/drivers/usb/host/xhci.h
3279 +@@ -1678,7 +1678,7 @@ struct xhci_bus_state {
3280 + * It can take up to 20 ms to transition from RExit to U0 on the
3281 + * Intel Lynx Point LP xHCI host.
3282 + */
3283 +-#define XHCI_MAX_REXIT_TIMEOUT (20 * 1000)
3284 ++#define XHCI_MAX_REXIT_TIMEOUT_MS 20
3285 +
3286 + static inline unsigned int hcd_index(struct usb_hcd *hcd)
3287 + {
3288 +@@ -1846,6 +1846,7 @@ struct xhci_hcd {
3289 + #define XHCI_SUSPEND_DELAY BIT_ULL(30)
3290 + #define XHCI_INTEL_USB_ROLE_SW BIT_ULL(31)
3291 + #define XHCI_ZERO_64B_REGS BIT_ULL(32)
3292 ++#define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34)
3293 +
3294 + unsigned int num_active_eps;
3295 + unsigned int limit_active_eps;
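
The rename above is more than cosmetic. The comment documents a transition of up to 20 ms, but the old constant was (20 * 1000) and was passed to msecs_to_jiffies(), which takes milliseconds, so the hub code actually waited 20 seconds. With the _MS suffix the unit is part of the name:

/* Before: 20000 ms (20 s), despite the "20 ms" comment. */
wait_for_completion_timeout(&done, msecs_to_jiffies(20 * 1000));

/* After: the intended 20 ms, unit encoded in the name. */
wait_for_completion_timeout(&done, msecs_to_jiffies(XHCI_MAX_REXIT_TIMEOUT_MS));
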
3296 +diff --git a/fs/9p/vfs_dir.c b/fs/9p/vfs_dir.c
3297 +index b0405d6aac85..48db9a9f13f9 100644
3298 +--- a/fs/9p/vfs_dir.c
3299 ++++ b/fs/9p/vfs_dir.c
3300 +@@ -76,15 +76,6 @@ static inline int dt_type(struct p9_wstat *mistat)
3301 + return rettype;
3302 + }
3303 +
3304 +-static void p9stat_init(struct p9_wstat *stbuf)
3305 +-{
3306 +- stbuf->name = NULL;
3307 +- stbuf->uid = NULL;
3308 +- stbuf->gid = NULL;
3309 +- stbuf->muid = NULL;
3310 +- stbuf->extension = NULL;
3311 +-}
3312 +-
3313 + /**
3314 + * v9fs_alloc_rdir_buf - Allocate buffer used for read and readdir
3315 + * @filp: opened file structure
3316 +@@ -145,12 +136,10 @@ static int v9fs_dir_readdir(struct file *file, struct dir_context *ctx)
3317 + rdir->tail = n;
3318 + }
3319 + while (rdir->head < rdir->tail) {
3320 +- p9stat_init(&st);
3321 + err = p9stat_read(fid->clnt, rdir->buf + rdir->head,
3322 + rdir->tail - rdir->head, &st);
3323 + if (err) {
3324 + p9_debug(P9_DEBUG_VFS, "returned %d\n", err);
3325 +- p9stat_free(&st);
3326 + return -EIO;
3327 + }
3328 + reclen = st.size+2;
3329 +diff --git a/fs/bfs/inode.c b/fs/bfs/inode.c
3330 +index 9a69392f1fb3..d81c148682e7 100644
3331 +--- a/fs/bfs/inode.c
3332 ++++ b/fs/bfs/inode.c
3333 +@@ -350,7 +350,8 @@ static int bfs_fill_super(struct super_block *s, void *data, int silent)
3334 +
3335 + s->s_magic = BFS_MAGIC;
3336 +
3337 +- if (le32_to_cpu(bfs_sb->s_start) > le32_to_cpu(bfs_sb->s_end)) {
3338 ++ if (le32_to_cpu(bfs_sb->s_start) > le32_to_cpu(bfs_sb->s_end) ||
3339 ++ le32_to_cpu(bfs_sb->s_start) < BFS_BSIZE) {
3340 + printf("Superblock is corrupted\n");
3341 + goto out1;
3342 + }
3343 +@@ -359,9 +360,11 @@ static int bfs_fill_super(struct super_block *s, void *data, int silent)
3344 + sizeof(struct bfs_inode)
3345 + + BFS_ROOT_INO - 1;
3346 + imap_len = (info->si_lasti / 8) + 1;
3347 +- info->si_imap = kzalloc(imap_len, GFP_KERNEL);
3348 +- if (!info->si_imap)
3349 ++ info->si_imap = kzalloc(imap_len, GFP_KERNEL | __GFP_NOWARN);
3350 ++ if (!info->si_imap) {
3351 ++ printf("Cannot allocate %u bytes\n", imap_len);
3352 + goto out1;
3353 ++ }
3354 + for (i = 0; i < BFS_ROOT_INO; i++)
3355 + set_bit(i, info->si_imap);
3356 +
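
Both bfs hunks harden mounting against a crafted image: s_start is now also rejected when it falls inside the superblock block itself (s_start < BFS_BSIZE), and the imap allocation, whose size derives from on-disk data, gains __GFP_NOWARN so an absurd si_lasti yields a clean mount failure rather than an allocation warning. The shape of the guard, as a sketch:

/* Sketch: treat on-disk superblock fields as untrusted input. */
u32 start = le32_to_cpu(bfs_sb->s_start);
u32 end = le32_to_cpu(bfs_sb->s_end);

if (start > end || start < BFS_BSIZE)  /* must lie beyond the superblock */
        goto corrupted;
imap = kzalloc(imap_len, GFP_KERNEL | __GFP_NOWARN);    /* size is disk-derived */
if (!imap)
        goto corrupted;                 /* report the failure, don't WARN */
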
3357 +diff --git a/fs/dax.c b/fs/dax.c
3358 +index 0fb270f0a0ef..b0cd1364c68f 100644
3359 +--- a/fs/dax.c
3360 ++++ b/fs/dax.c
3361 +@@ -217,6 +217,9 @@ static inline void *unlock_slot(struct address_space *mapping, void **slot)
3362 + return (void *)entry;
3363 + }
3364 +
3365 ++static void put_unlocked_mapping_entry(struct address_space *mapping,
3366 ++ pgoff_t index, void *entry);
3367 ++
3368 + /*
3369 + * Lookup entry in radix tree, wait for it to become unlocked if it is
3370 + * an exceptional entry and return it. The caller must call
3371 +@@ -256,8 +259,10 @@ static void *__get_unlocked_mapping_entry(struct address_space *mapping,
3372 + revalidate = wait_fn();
3373 + finish_wait(wq, &ewait.wait);
3374 + xa_lock_irq(&mapping->i_pages);
3375 +- if (revalidate)
3376 ++ if (revalidate) {
3377 ++ put_unlocked_mapping_entry(mapping, index, entry);
3378 + return ERR_PTR(-EAGAIN);
3379 ++ }
3380 + }
3381 + }
3382 +
3383 +diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
3384 +index 8748539c04ed..7f8bb0868c0f 100644
3385 +--- a/fs/gfs2/bmap.c
3386 ++++ b/fs/gfs2/bmap.c
3387 +@@ -826,7 +826,7 @@ static int gfs2_iomap_get(struct inode *inode, loff_t pos, loff_t length,
3388 + ret = gfs2_meta_inode_buffer(ip, &dibh);
3389 + if (ret)
3390 + goto unlock;
3391 +- iomap->private = dibh;
3392 ++ mp->mp_bh[0] = dibh;
3393 +
3394 + if (gfs2_is_stuffed(ip)) {
3395 + if (flags & IOMAP_WRITE) {
3396 +@@ -863,9 +863,6 @@ unstuff:
3397 + len = lblock_stop - lblock + 1;
3398 + iomap->length = len << inode->i_blkbits;
3399 +
3400 +- get_bh(dibh);
3401 +- mp->mp_bh[0] = dibh;
3402 +-
3403 + height = ip->i_height;
3404 + while ((lblock + 1) * sdp->sd_sb.sb_bsize > sdp->sd_heightsize[height])
3405 + height++;
3406 +@@ -898,8 +895,6 @@ out:
3407 + iomap->bdev = inode->i_sb->s_bdev;
3408 + unlock:
3409 + up_read(&ip->i_rw_mutex);
3410 +- if (ret && dibh)
3411 +- brelse(dibh);
3412 + return ret;
3413 +
3414 + do_alloc:
3415 +@@ -980,9 +975,9 @@ static void gfs2_iomap_journaled_page_done(struct inode *inode, loff_t pos,
3416 +
3417 + static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,
3418 + loff_t length, unsigned flags,
3419 +- struct iomap *iomap)
3420 ++ struct iomap *iomap,
3421 ++ struct metapath *mp)
3422 + {
3423 +- struct metapath mp = { .mp_aheight = 1, };
3424 + struct gfs2_inode *ip = GFS2_I(inode);
3425 + struct gfs2_sbd *sdp = GFS2_SB(inode);
3426 + unsigned int data_blocks = 0, ind_blocks = 0, rblocks;
3427 +@@ -996,9 +991,9 @@ static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,
3428 + unstuff = gfs2_is_stuffed(ip) &&
3429 + pos + length > gfs2_max_stuffed_size(ip);
3430 +
3431 +- ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);
3432 ++ ret = gfs2_iomap_get(inode, pos, length, flags, iomap, mp);
3433 + if (ret)
3434 +- goto out_release;
3435 ++ goto out_unlock;
3436 +
3437 + alloc_required = unstuff || iomap->type == IOMAP_HOLE;
3438 +
3439 +@@ -1013,7 +1008,7 @@ static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,
3440 +
3441 + ret = gfs2_quota_lock_check(ip, &ap);
3442 + if (ret)
3443 +- goto out_release;
3444 ++ goto out_unlock;
3445 +
3446 + ret = gfs2_inplace_reserve(ip, &ap);
3447 + if (ret)
3448 +@@ -1038,17 +1033,15 @@ static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,
3449 + ret = gfs2_unstuff_dinode(ip, NULL);
3450 + if (ret)
3451 + goto out_trans_end;
3452 +- release_metapath(&mp);
3453 +- brelse(iomap->private);
3454 +- iomap->private = NULL;
3455 ++ release_metapath(mp);
3456 + ret = gfs2_iomap_get(inode, iomap->offset, iomap->length,
3457 +- flags, iomap, &mp);
3458 ++ flags, iomap, mp);
3459 + if (ret)
3460 + goto out_trans_end;
3461 + }
3462 +
3463 + if (iomap->type == IOMAP_HOLE) {
3464 +- ret = gfs2_iomap_alloc(inode, iomap, flags, &mp);
3465 ++ ret = gfs2_iomap_alloc(inode, iomap, flags, mp);
3466 + if (ret) {
3467 + gfs2_trans_end(sdp);
3468 + gfs2_inplace_release(ip);
3469 +@@ -1056,7 +1049,6 @@ static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,
3470 + goto out_qunlock;
3471 + }
3472 + }
3473 +- release_metapath(&mp);
3474 + if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip))
3475 + iomap->page_done = gfs2_iomap_journaled_page_done;
3476 + return 0;
3477 +@@ -1069,10 +1061,7 @@ out_trans_fail:
3478 + out_qunlock:
3479 + if (alloc_required)
3480 + gfs2_quota_unlock(ip);
3481 +-out_release:
3482 +- if (iomap->private)
3483 +- brelse(iomap->private);
3484 +- release_metapath(&mp);
3485 ++out_unlock:
3486 + gfs2_write_unlock(inode);
3487 + return ret;
3488 + }
3489 +@@ -1088,10 +1077,10 @@ static int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
3490 +
3491 + trace_gfs2_iomap_start(ip, pos, length, flags);
3492 + if ((flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)) {
3493 +- ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap);
3494 ++ ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp);
3495 + } else {
3496 + ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);
3497 +- release_metapath(&mp);
3498 ++
3499 + /*
3500 + * Silently fall back to buffered I/O for stuffed files or if
3501 + * we've hit a hole (see gfs2_file_direct_write).
3502 +@@ -1100,6 +1089,11 @@ static int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
3503 + iomap->type != IOMAP_MAPPED)
3504 + ret = -ENOTBLK;
3505 + }
3506 ++ if (!ret) {
3507 ++ get_bh(mp.mp_bh[0]);
3508 ++ iomap->private = mp.mp_bh[0];
3509 ++ }
3510 ++ release_metapath(&mp);
3511 + trace_gfs2_iomap_end(ip, iomap, ret);
3512 + return ret;
3513 + }
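
After this refactor the metapath is owned by the one caller, gfs2_iomap_begin(), which becomes the single place where the dinode buffer head gains its long-lived reference, and it does so only on success, while release_metapath() now runs unconditionally. Condensed from the new tail of gfs2_iomap_begin():

ret = gfs2_iomap_get(inode, pos, length, flags, iomap, &mp);
if (!ret) {
        get_bh(mp.mp_bh[0]);            /* reference handed to iomap->private */
        iomap->private = mp.mp_bh[0];
}
release_metapath(&mp);                  /* always drops the lookup's own refs */
return ret;
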
3514 +diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
3515 +index 6b84ef6ccff3..b041cb8ae383 100644
3516 +--- a/fs/gfs2/ops_fstype.c
3517 ++++ b/fs/gfs2/ops_fstype.c
3518 +@@ -72,13 +72,13 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
3519 + if (!sdp)
3520 + return NULL;
3521 +
3522 +- sb->s_fs_info = sdp;
3523 + sdp->sd_vfs = sb;
3524 + sdp->sd_lkstats = alloc_percpu(struct gfs2_pcpu_lkstats);
3525 + if (!sdp->sd_lkstats) {
3526 + kfree(sdp);
3527 + return NULL;
3528 + }
3529 ++ sb->s_fs_info = sdp;
3530 +
3531 + set_bit(SDF_NOJOURNALID, &sdp->sd_flags);
3532 + gfs2_tune_init(&sdp->sd_tune);
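
A one-line move with a real effect: sb->s_fs_info is now set only after every allocation in init_sbd() has succeeded, so no error path, and nothing reading the superblock concurrently, can observe a half-initialized sdp. The general idiom:

/* Publish-last sketch: fully construct the object, then publish it. */
sdp = kzalloc(sizeof(*sdp), GFP_KERNEL);
if (!sdp)
        return NULL;
sdp->sd_lkstats = alloc_percpu(struct gfs2_pcpu_lkstats);
if (!sdp->sd_lkstats) {
        kfree(sdp);             /* sb->s_fs_info never pointed here */
        return NULL;
}
sb->s_fs_info = sdp;            /* publish only after full setup */
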
3533 +diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
3534 +index fa515d5ea5ba..7b861bbc0b43 100644
3535 +--- a/fs/nfs/callback_proc.c
3536 ++++ b/fs/nfs/callback_proc.c
3537 +@@ -66,7 +66,7 @@ __be32 nfs4_callback_getattr(void *argp, void *resp,
3538 + out_iput:
3539 + rcu_read_unlock();
3540 + trace_nfs4_cb_getattr(cps->clp, &args->fh, inode, -ntohl(res->status));
3541 +- iput(inode);
3542 ++ nfs_iput_and_deactive(inode);
3543 + out:
3544 + dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status));
3545 + return res->status;
3546 +@@ -108,7 +108,7 @@ __be32 nfs4_callback_recall(void *argp, void *resp,
3547 + }
3548 + trace_nfs4_cb_recall(cps->clp, &args->fh, inode,
3549 + &args->stateid, -ntohl(res));
3550 +- iput(inode);
3551 ++ nfs_iput_and_deactive(inode);
3552 + out:
3553 + dprintk("%s: exit with status = %d\n", __func__, ntohl(res));
3554 + return res;
3555 +diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
3556 +index f033f3a69a3b..75fe92eaa681 100644
3557 +--- a/fs/nfs/delegation.c
3558 ++++ b/fs/nfs/delegation.c
3559 +@@ -849,16 +849,23 @@ nfs_delegation_find_inode_server(struct nfs_server *server,
3560 + const struct nfs_fh *fhandle)
3561 + {
3562 + struct nfs_delegation *delegation;
3563 +- struct inode *res = NULL;
3564 ++ struct inode *freeme, *res = NULL;
3565 +
3566 + list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
3567 + spin_lock(&delegation->lock);
3568 + if (delegation->inode != NULL &&
3569 + nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) {
3570 +- res = igrab(delegation->inode);
3571 ++ freeme = igrab(delegation->inode);
3572 ++ if (freeme && nfs_sb_active(freeme->i_sb))
3573 ++ res = freeme;
3574 + spin_unlock(&delegation->lock);
3575 + if (res != NULL)
3576 + return res;
3577 ++ if (freeme) {
3578 ++ rcu_read_unlock();
3579 ++ iput(freeme);
3580 ++ rcu_read_lock();
3581 ++ }
3582 + return ERR_PTR(-EAGAIN);
3583 + }
3584 + spin_unlock(&delegation->lock);
3585 +diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
3586 +index 94b52157bf8d..29dee9630eec 100644
3587 +--- a/fs/notify/fanotify/fanotify.c
3588 ++++ b/fs/notify/fanotify/fanotify.c
3589 +@@ -115,12 +115,12 @@ static bool fanotify_should_send_event(struct fsnotify_iter_info *iter_info,
3590 + continue;
3591 + mark = iter_info->marks[type];
3592 + /*
3593 +- * if the event is for a child and this inode doesn't care about
3594 +- * events on the child, don't send it!
3595 ++ * If the event is for a child and this mark doesn't care about
3596 ++ * events on a child, don't send it!
3597 + */
3598 +- if (type == FSNOTIFY_OBJ_TYPE_INODE &&
3599 +- (event_mask & FS_EVENT_ON_CHILD) &&
3600 +- !(mark->mask & FS_EVENT_ON_CHILD))
3601 ++ if (event_mask & FS_EVENT_ON_CHILD &&
3602 ++ (type != FSNOTIFY_OBJ_TYPE_INODE ||
3603 ++ !(mark->mask & FS_EVENT_ON_CHILD)))
3604 + continue;
3605 +
3606 + marks_mask |= mark->mask;
3607 +diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
3608 +index f43ea1aad542..170a733454f7 100644
3609 +--- a/fs/notify/fsnotify.c
3610 ++++ b/fs/notify/fsnotify.c
3611 +@@ -161,9 +161,9 @@ int __fsnotify_parent(const struct path *path, struct dentry *dentry, __u32 mask
3612 + parent = dget_parent(dentry);
3613 + p_inode = parent->d_inode;
3614 +
3615 +- if (unlikely(!fsnotify_inode_watches_children(p_inode)))
3616 ++ if (unlikely(!fsnotify_inode_watches_children(p_inode))) {
3617 + __fsnotify_update_child_dentry_flags(p_inode);
3618 +- else if (p_inode->i_fsnotify_mask & mask) {
3619 ++ } else if (p_inode->i_fsnotify_mask & mask & ALL_FSNOTIFY_EVENTS) {
3620 + struct name_snapshot name;
3621 +
3622 + /* we are notifying a parent so come up with the new mask which
3623 +@@ -193,7 +193,7 @@ static int send_to_group(struct inode *to_tell,
3624 + struct fsnotify_iter_info *iter_info)
3625 + {
3626 + struct fsnotify_group *group = NULL;
3627 +- __u32 test_mask = (mask & ~FS_EVENT_ON_CHILD);
3628 ++ __u32 test_mask = (mask & ALL_FSNOTIFY_EVENTS);
3629 + __u32 marks_mask = 0;
3630 + __u32 marks_ignored_mask = 0;
3631 + struct fsnotify_mark *mark;
3632 +@@ -324,14 +324,17 @@ int fsnotify(struct inode *to_tell, __u32 mask, const void *data, int data_is,
3633 + struct fsnotify_iter_info iter_info = {};
3634 + struct mount *mnt;
3635 + int ret = 0;
3636 +- /* global tests shouldn't care about events on child only the specific event */
3637 +- __u32 test_mask = (mask & ~FS_EVENT_ON_CHILD);
3638 ++ __u32 test_mask = (mask & ALL_FSNOTIFY_EVENTS);
3639 +
3640 + if (data_is == FSNOTIFY_EVENT_PATH)
3641 + mnt = real_mount(((const struct path *)data)->mnt);
3642 + else
3643 + mnt = NULL;
3644 +
3645 ++ /* An event "on child" is not intended for a mount mark */
3646 ++ if (mask & FS_EVENT_ON_CHILD)
3647 ++ mnt = NULL;
3648 ++
3649 + /*
3650 + * Optimization: srcu_read_lock() has a memory barrier which can
3651 + * be expensive. It protects walking the *_fsnotify_marks lists.
3652 +@@ -389,7 +392,7 @@ static __init int fsnotify_init(void)
3653 + {
3654 + int ret;
3655 +
3656 +- BUG_ON(hweight32(ALL_FSNOTIFY_EVENTS) != 23);
3657 ++ BUG_ON(hweight32(ALL_FSNOTIFY_BITS) != 23);
3658 +
3659 + ret = init_srcu_struct(&fsnotify_mark_srcu);
3660 + if (ret)
3661 +diff --git a/include/linux/can/dev.h b/include/linux/can/dev.h
3662 +index a83e1f632eb7..f01623aef2f7 100644
3663 +--- a/include/linux/can/dev.h
3664 ++++ b/include/linux/can/dev.h
3665 +@@ -169,6 +169,7 @@ void can_change_state(struct net_device *dev, struct can_frame *cf,
3666 +
3667 + void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
3668 + unsigned int idx);
3669 ++struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr);
3670 + unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx);
3671 + void can_free_echo_skb(struct net_device *dev, unsigned int idx);
3672 +
3673 +diff --git a/include/linux/can/rx-offload.h b/include/linux/can/rx-offload.h
3674 +index cb31683bbe15..8268811a697e 100644
3675 +--- a/include/linux/can/rx-offload.h
3676 ++++ b/include/linux/can/rx-offload.h
3677 +@@ -41,7 +41,12 @@ int can_rx_offload_add_timestamp(struct net_device *dev, struct can_rx_offload *
3678 + int can_rx_offload_add_fifo(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight);
3679 + int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload, u64 reg);
3680 + int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload);
3681 +-int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb);
3682 ++int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
3683 ++ struct sk_buff *skb, u32 timestamp);
3684 ++unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
3685 ++ unsigned int idx, u32 timestamp);
3686 ++int can_rx_offload_queue_tail(struct can_rx_offload *offload,
3687 ++ struct sk_buff *skb);
3688 + void can_rx_offload_reset(struct can_rx_offload *offload);
3689 + void can_rx_offload_del(struct can_rx_offload *offload);
3690 + void can_rx_offload_enable(struct can_rx_offload *offload);
3691 +diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
3692 +index b8f4182f42f1..4599d1c95f8c 100644
3693 +--- a/include/linux/fsnotify_backend.h
3694 ++++ b/include/linux/fsnotify_backend.h
3695 +@@ -68,15 +68,20 @@
3696 +
3697 + #define ALL_FSNOTIFY_PERM_EVENTS (FS_OPEN_PERM | FS_ACCESS_PERM)
3698 +
3699 ++/* Events that can be reported to backends */
3700 + #define ALL_FSNOTIFY_EVENTS (FS_ACCESS | FS_MODIFY | FS_ATTRIB | \
3701 + FS_CLOSE_WRITE | FS_CLOSE_NOWRITE | FS_OPEN | \
3702 + FS_MOVED_FROM | FS_MOVED_TO | FS_CREATE | \
3703 + FS_DELETE | FS_DELETE_SELF | FS_MOVE_SELF | \
3704 + FS_UNMOUNT | FS_Q_OVERFLOW | FS_IN_IGNORED | \
3705 +- FS_OPEN_PERM | FS_ACCESS_PERM | FS_EXCL_UNLINK | \
3706 +- FS_ISDIR | FS_IN_ONESHOT | FS_DN_RENAME | \
3707 ++ FS_OPEN_PERM | FS_ACCESS_PERM | FS_DN_RENAME)
3708 ++
3709 ++/* Extra flags that may be reported with an event or control event handling */
3710 ++#define ALL_FSNOTIFY_FLAGS (FS_EXCL_UNLINK | FS_ISDIR | FS_IN_ONESHOT | \
3711 + FS_DN_MULTISHOT | FS_EVENT_ON_CHILD)
3712 +
3713 ++#define ALL_FSNOTIFY_BITS (ALL_FSNOTIFY_EVENTS | ALL_FSNOTIFY_FLAGS)
3714 ++
3715 + struct fsnotify_group;
3716 + struct fsnotify_event;
3717 + struct fsnotify_mark;
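
The header now distinguishes genuine events, which can be reported to backends, from modifier flags that only annotate an event or control its handling; ALL_FSNOTIFY_BITS is their union, and it is what the hweight32() sanity check in fsnotify_init() now counts. If one wanted to pin down the split at build time, a check along these lines would do (illustrative, not part of the patch):

/* The two halves of ALL_FSNOTIFY_BITS must never overlap. */
BUILD_BUG_ON(ALL_FSNOTIFY_EVENTS & ALL_FSNOTIFY_FLAGS);
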
3718 +diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h
3719 +index 21713dc14ce2..673546ba7342 100644
3720 +--- a/include/linux/pfn_t.h
3721 ++++ b/include/linux/pfn_t.h
3722 +@@ -10,7 +10,7 @@
3723 + * PFN_DEV - pfn is not covered by system memmap by default
3724 + * PFN_MAP - pfn has a dynamic page mapping established by a device driver
3725 + */
3726 +-#define PFN_FLAGS_MASK (((u64) ~PAGE_MASK) << (BITS_PER_LONG_LONG - PAGE_SHIFT))
3727 ++#define PFN_FLAGS_MASK (((u64) (~PAGE_MASK)) << (BITS_PER_LONG_LONG - PAGE_SHIFT))
3728 + #define PFN_SG_CHAIN (1ULL << (BITS_PER_LONG_LONG - 1))
3729 + #define PFN_SG_LAST (1ULL << (BITS_PER_LONG_LONG - 2))
3730 + #define PFN_DEV (1ULL << (BITS_PER_LONG_LONG - 3))
3731 +diff --git a/include/net/sock.h b/include/net/sock.h
3732 +index c64a1cff9eb3..f18dbd6da906 100644
3733 +--- a/include/net/sock.h
3734 ++++ b/include/net/sock.h
3735 +@@ -1491,6 +1491,7 @@ static inline void lock_sock(struct sock *sk)
3736 + lock_sock_nested(sk, 0);
3737 + }
3738 +
3739 ++void __release_sock(struct sock *sk);
3740 + void release_sock(struct sock *sk);
3741 +
3742 + /* BH context may only use the following locking interface. */
3743 +diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
3744 +index ed5d34925ad0..6a4b41484afe 100644
3745 +--- a/kernel/debug/kdb/kdb_io.c
3746 ++++ b/kernel/debug/kdb/kdb_io.c
3747 +@@ -216,7 +216,7 @@ static char *kdb_read(char *buffer, size_t bufsize)
3748 + int count;
3749 + int i;
3750 + int diag, dtab_count;
3751 +- int key;
3752 ++ int key, buf_size, ret;
3753 +
3754 +
3755 + diag = kdbgetintenv("DTABCOUNT", &dtab_count);
3756 +@@ -336,9 +336,8 @@ poll_again:
3757 + else
3758 + p_tmp = tmpbuffer;
3759 + len = strlen(p_tmp);
3760 +- count = kallsyms_symbol_complete(p_tmp,
3761 +- sizeof(tmpbuffer) -
3762 +- (p_tmp - tmpbuffer));
3763 ++ buf_size = sizeof(tmpbuffer) - (p_tmp - tmpbuffer);
3764 ++ count = kallsyms_symbol_complete(p_tmp, buf_size);
3765 + if (tab == 2 && count > 0) {
3766 + kdb_printf("\n%d symbols are found.", count);
3767 + if (count > dtab_count) {
3768 +@@ -350,9 +349,13 @@ poll_again:
3769 + }
3770 + kdb_printf("\n");
3771 + for (i = 0; i < count; i++) {
3772 +- if (WARN_ON(!kallsyms_symbol_next(p_tmp, i)))
3773 ++ ret = kallsyms_symbol_next(p_tmp, i, buf_size);
3774 ++ if (WARN_ON(!ret))
3775 + break;
3776 +- kdb_printf("%s ", p_tmp);
3777 ++ if (ret != -E2BIG)
3778 ++ kdb_printf("%s ", p_tmp);
3779 ++ else
3780 ++ kdb_printf("%s... ", p_tmp);
3781 + *(p_tmp + len) = '\0';
3782 + }
3783 + if (i >= dtab_count)
3784 +diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
3785 +index 1e5a502ba4a7..2118d8258b7c 100644
3786 +--- a/kernel/debug/kdb/kdb_private.h
3787 ++++ b/kernel/debug/kdb/kdb_private.h
3788 +@@ -83,7 +83,7 @@ typedef struct __ksymtab {
3789 + unsigned long sym_start;
3790 + unsigned long sym_end;
3791 + } kdb_symtab_t;
3792 +-extern int kallsyms_symbol_next(char *prefix_name, int flag);
3793 ++extern int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size);
3794 + extern int kallsyms_symbol_complete(char *prefix_name, int max_len);
3795 +
3796 + /* Exported Symbols for kernel loadable modules to use. */
3797 +diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
3798 +index 987eb73284d2..b14b0925c184 100644
3799 +--- a/kernel/debug/kdb/kdb_support.c
3800 ++++ b/kernel/debug/kdb/kdb_support.c
3801 +@@ -221,11 +221,13 @@ int kallsyms_symbol_complete(char *prefix_name, int max_len)
3802 + * Parameters:
3803 + * prefix_name prefix of a symbol name to lookup
3804 + * flag 0 means search from the head, 1 means continue search.
3805 ++ * buf_size maximum length that can be written to prefix_name
3806 ++ * buffer
3807 + * Returns:
3808 + * 1 if a symbol matches the given prefix.
3809 + * 0 if no string found
3810 + */
3811 +-int kallsyms_symbol_next(char *prefix_name, int flag)
3812 ++int kallsyms_symbol_next(char *prefix_name, int flag, int buf_size)
3813 + {
3814 + int prefix_len = strlen(prefix_name);
3815 + static loff_t pos;
3816 +@@ -235,10 +237,8 @@ int kallsyms_symbol_next(char *prefix_name, int flag)
3817 + pos = 0;
3818 +
3819 + while ((name = kdb_walk_kallsyms(&pos))) {
3820 +- if (strncmp(name, prefix_name, prefix_len) == 0) {
3821 +- strncpy(prefix_name, name, strlen(name)+1);
3822 +- return 1;
3823 +- }
3824 ++ if (!strncmp(name, prefix_name, prefix_len))
3825 ++ return strscpy(prefix_name, name, buf_size);
3826 + }
3827 + return 0;
3828 + }
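
The strncpy() being removed was bounded by strlen(name) + 1, the *source* length, so a sufficiently long symbol could overrun prefix_name. strscpy() is bounded by the destination, always NUL-terminates, and returns the number of characters copied or -E2BIG on truncation, which is exactly what the kdb_io.c hunk earlier uses to print a trailing "...". Condensed:

ret = kallsyms_symbol_next(p_tmp, i, buf_size); /* strscpy() result, or 0 */
if (ret == -E2BIG)
        kdb_printf("%s... ", p_tmp);    /* match was truncated to fit */
else if (ret)
        kdb_printf("%s ", p_tmp);       /* complete match */
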
3829 +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
3830 +index 0b760c1369f7..15301ed19da6 100644
3831 +--- a/kernel/rcu/tree.c
3832 ++++ b/kernel/rcu/tree.c
3833 +@@ -2662,6 +2662,15 @@ void rcu_check_callbacks(int user)
3834 + rcu_bh_qs();
3835 + }
3836 + rcu_preempt_check_callbacks();
3837 ++ /* The load-acquire pairs with the store-release that set this to true. */
3838 ++ if (smp_load_acquire(this_cpu_ptr(&rcu_dynticks.rcu_urgent_qs))) {
3839 ++ /* Idle and userspace execution already are quiescent states. */
3840 ++ if (!rcu_is_cpu_rrupt_from_idle() && !user) {
3841 ++ set_tsk_need_resched(current);
3842 ++ set_preempt_need_resched();
3843 ++ }
3844 ++ __this_cpu_write(rcu_dynticks.rcu_urgent_qs, false);
3845 ++ }
3846 + if (rcu_pending())
3847 + invoke_rcu_core();
3848 +
3849 +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
3850 +index 908c9cdae2f0..1162552dc3cc 100644
3851 +--- a/kernel/sched/fair.c
3852 ++++ b/kernel/sched/fair.c
3853 +@@ -5672,11 +5672,11 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
3854 + return target;
3855 + }
3856 +
3857 +-static unsigned long cpu_util_wake(int cpu, struct task_struct *p);
3858 ++static unsigned long cpu_util_without(int cpu, struct task_struct *p);
3859 +
3860 +-static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
3861 ++static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
3862 + {
3863 +- return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);
3864 ++ return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
3865 + }
3866 +
3867 + /*
3868 +@@ -5736,7 +5736,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
3869 +
3870 + avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
3871 +
3872 +- spare_cap = capacity_spare_wake(i, p);
3873 ++ spare_cap = capacity_spare_without(i, p);
3874 +
3875 + if (spare_cap > max_spare_cap)
3876 + max_spare_cap = spare_cap;
3877 +@@ -5887,8 +5887,8 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
3878 + return prev_cpu;
3879 +
3880 + /*
3881 +- * We need task's util for capacity_spare_wake, sync it up to prev_cpu's
3882 +- * last_update_time.
3883 ++ * We need task's util for capacity_spare_without, sync it up to
3884 ++ * prev_cpu's last_update_time.
3885 + */
3886 + if (!(sd_flag & SD_BALANCE_FORK))
3887 + sync_entity_load_avg(&p->se);
3888 +@@ -6214,10 +6214,19 @@ static inline unsigned long cpu_util(int cpu)
3889 + }
3890 +
3891 + /*
3892 +- * cpu_util_wake: Compute CPU utilization with any contributions from
3893 +- * the waking task p removed.
3894 ++ * cpu_util_without: compute cpu utilization without any contributions from *p
3895 ++ * @cpu: the CPU whose utilization is requested
3896 ++ * @p: the task whose utilization should be discounted
3897 ++ *
3898 ++ * The utilization of a CPU is defined by the utilization of tasks currently
3899 ++ * enqueued on that CPU as well as tasks which are currently sleeping after an
3900 ++ * execution on that CPU.
3901 ++ *
3902 ++ * This method returns the utilization of the specified CPU by discounting the
3903 ++ * utilization of the specified task, whenever the task is currently
3904 ++ * contributing to the CPU utilization.
3905 + */
3906 +-static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
3907 ++static unsigned long cpu_util_without(int cpu, struct task_struct *p)
3908 + {
3909 + struct cfs_rq *cfs_rq;
3910 + unsigned int util;
3911 +@@ -6229,7 +6238,7 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
3912 + cfs_rq = &cpu_rq(cpu)->cfs;
3913 + util = READ_ONCE(cfs_rq->avg.util_avg);
3914 +
3915 +- /* Discount task's blocked util from CPU's util */
3916 ++ /* Discount task's util from CPU's util */
3917 + util -= min_t(unsigned int, util, task_util(p));
3918 +
3919 + /*
3920 +@@ -6238,14 +6247,14 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
3921 + * a) if *p is the only task sleeping on this CPU, then:
3922 + * cpu_util (== task_util) > util_est (== 0)
3923 + * and thus we return:
3924 +- * cpu_util_wake = (cpu_util - task_util) = 0
3925 ++ * cpu_util_without = (cpu_util - task_util) = 0
3926 + *
3927 + * b) if other tasks are SLEEPING on this CPU, which is now exiting
3928 + * IDLE, then:
3929 + * cpu_util >= task_util
3930 + * cpu_util > util_est (== 0)
3931 + * and thus we discount *p's blocked utilization to return:
3932 +- * cpu_util_wake = (cpu_util - task_util) >= 0
3933 ++ * cpu_util_without = (cpu_util - task_util) >= 0
3934 + *
3935 + * c) if other tasks are RUNNABLE on that CPU and
3936 + * util_est > cpu_util
3937 +@@ -6258,8 +6267,33 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
3938 + * covered by the following code when estimated utilization is
3939 + * enabled.
3940 + */
3941 +- if (sched_feat(UTIL_EST))
3942 +- util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
3943 ++ if (sched_feat(UTIL_EST)) {
3944 ++ unsigned int estimated =
3945 ++ READ_ONCE(cfs_rq->avg.util_est.enqueued);
3946 ++
3947 ++ /*
3948 ++ * Despite the following checks we still have a small window
3949 ++ * for a possible race, when an execl's select_task_rq_fair()
3950 ++ * races with LB's detach_task():
3951 ++ *
3952 ++ * detach_task()
3953 ++ * p->on_rq = TASK_ON_RQ_MIGRATING;
3954 ++ * ---------------------------------- A
3955 ++ * deactivate_task() \
3956 ++ * dequeue_task() + RaceTime
3957 ++ * util_est_dequeue() /
3958 ++ * ---------------------------------- B
3959 ++ *
3960 ++ * The additional check on "current == p" is required to
3961 ++ * properly fix the execl regression, and it helps further
3962 ++ * reduce the chances of the above race.
3963 ++ */
3964 ++ if (unlikely(task_on_rq_queued(p) || current == p)) {
3965 ++ estimated -= min_t(unsigned int, estimated,
3966 ++ (_task_util_est(p) | UTIL_AVG_UNCHANGED));
3967 ++ }
3968 ++ util = max(util, estimated);
3969 ++ }
3970 +
3971 + /*
3972 + * Utilization (estimated) can exceed the CPU capacity, thus let's
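
A worked example for the cases documented above (all numbers invented):

/*
 * util_avg = 600, task_util(p) = 250, util_est.enqueued = 400:
 *
 *   util = 600 - min(600, 250) = 350          (p's contribution removed)
 *   p neither queued nor current:
 *     cpu_util_without = max(350, 400) = 400  (the estimate wins)
 *   p queued or current (the race window above):
 *     estimated = 400 - min(400, _task_util_est(p))
 *     cpu_util_without = max(350, estimated)
 */
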
3973 +diff --git a/mm/memory.c b/mm/memory.c
3974 +index c467102a5cbc..5c5df53dbdf9 100644
3975 +--- a/mm/memory.c
3976 ++++ b/mm/memory.c
3977 +@@ -3745,10 +3745,36 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
3978 + struct vm_area_struct *vma = vmf->vma;
3979 + vm_fault_t ret;
3980 +
3981 +- /* The VMA was not fully populated on mmap() or missing VM_DONTEXPAND */
3982 +- if (!vma->vm_ops->fault)
3983 +- ret = VM_FAULT_SIGBUS;
3984 +- else if (!(vmf->flags & FAULT_FLAG_WRITE))
3985 ++ /*
3986 ++ * The VMA was not fully populated on mmap() or missing VM_DONTEXPAND
3987 ++ */
3988 ++ if (!vma->vm_ops->fault) {
3989 ++ /*
3990 ++ * If we find a migration pmd entry or a none pmd entry, which
3991 ++ * should never happen, return SIGBUS
3992 ++ */
3993 ++ if (unlikely(!pmd_present(*vmf->pmd)))
3994 ++ ret = VM_FAULT_SIGBUS;
3995 ++ else {
3996 ++ vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm,
3997 ++ vmf->pmd,
3998 ++ vmf->address,
3999 ++ &vmf->ptl);
4000 ++ /*
4001 ++ * Make sure this is not a temporary clearing of the pte
4002 ++ * by holding the ptl and checking again. An R/M/W update
4003 ++ * of the pte involves taking the ptl, clearing the pte so
4004 ++ * that we don't have concurrent modification by hardware,
4005 ++ * followed by an update.
4006 ++ */
4007 ++ if (unlikely(pte_none(*vmf->pte)))
4008 ++ ret = VM_FAULT_SIGBUS;
4009 ++ else
4010 ++ ret = VM_FAULT_NOPAGE;
4011 ++
4012 ++ pte_unmap_unlock(vmf->pte, vmf->ptl);
4013 ++ }
4014 ++ } else if (!(vmf->flags & FAULT_FLAG_WRITE))
4015 + ret = do_read_fault(vmf);
4016 + else if (!(vma->vm_flags & VM_SHARED))
4017 + ret = do_cow_fault(vmf);
4018 +diff --git a/mm/page_alloc.c b/mm/page_alloc.c
4019 +index e2ef1c17942f..b721631d78ab 100644
4020 +--- a/mm/page_alloc.c
4021 ++++ b/mm/page_alloc.c
4022 +@@ -4055,17 +4055,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
4023 + unsigned int cpuset_mems_cookie;
4024 + int reserve_flags;
4025 +
4026 +- /*
4027 +- * In the slowpath, we sanity check order to avoid ever trying to
4028 +- * reclaim >= MAX_ORDER areas which will never succeed. Callers may
4029 +- * be using allocators in order of preference for an area that is
4030 +- * too large.
4031 +- */
4032 +- if (order >= MAX_ORDER) {
4033 +- WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
4034 +- return NULL;
4035 +- }
4036 +-
4037 + /*
4038 + * We also sanity check to catch abuse of atomic reserves being used by
4039 + * callers that are not in atomic context.
4040 +@@ -4359,6 +4348,15 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
4041 + gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
4042 + struct alloc_context ac = { };
4043 +
4044 ++ /*
4045 ++ * There are several places where we assume that the order value is sane,
4046 ++ * so bail out early if the request is out of bounds.
4047 ++ */
4048 ++ if (unlikely(order >= MAX_ORDER)) {
4049 ++ WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
4050 ++ return NULL;
4051 ++ }
4052 ++
4053 + gfp_mask &= gfp_allowed_mask;
4054 + alloc_mask = gfp_mask;
4055 + if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
4056 +@@ -7690,6 +7688,14 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
4057 + if (PageReserved(page))
4058 + goto unmovable;
4059 +
4060 ++ /*
4061 ++ * If the zone is movable and we have ruled out all reserved
4062 ++ * pages then it should be reasonably safe to assume the rest
4063 ++ * is movable.
4064 ++ */
4065 ++ if (zone_idx(zone) == ZONE_MOVABLE)
4066 ++ continue;
4067 ++
4068 + /*
4069 + * Hugepages are not in LRU lists, but they're movable.
4070 + * We need not scan over tail pages because we don't
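
Moving the order check from the slowpath into __alloc_pages_nodemask() means even fastpath allocations can no longer reach the buddy lists with order >= MAX_ORDER. The deleted comment names the legitimate pattern that still relies on a quiet NULL here: callers probing downward for the largest area they can get, passing __GFP_NOWARN. For instance:

/* Sketch of a downward-probing caller served by the early guard. */
struct page *page = NULL;
int order;

for (order = 12; order >= 0; order--) { /* 12 may exceed MAX_ORDER */
        page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, order);
        if (page)
                break;                  /* largest area available */
}
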
4071 +diff --git a/mm/shmem.c b/mm/shmem.c
4072 +index 446942677cd4..38d228a30fdc 100644
4073 +--- a/mm/shmem.c
4074 ++++ b/mm/shmem.c
4075 +@@ -2610,9 +2610,7 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
4076 + inode_lock(inode);
4077 + /* We're holding i_mutex so we can access i_size directly */
4078 +
4079 +- if (offset < 0)
4080 +- offset = -EINVAL;
4081 +- else if (offset >= inode->i_size)
4082 ++ if (offset < 0 || offset >= inode->i_size)
4083 + offset = -ENXIO;
4084 + else {
4085 + start = offset >> PAGE_SHIFT;
4086 +diff --git a/mm/slab.c b/mm/slab.c
4087 +index aa76a70e087e..d73c7a4820a4 100644
4088 +--- a/mm/slab.c
4089 ++++ b/mm/slab.c
4090 +@@ -3675,6 +3675,8 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
4091 + struct kmem_cache *cachep;
4092 + void *ret;
4093 +
4094 ++ if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
4095 ++ return NULL;
4096 + cachep = kmalloc_slab(size, flags);
4097 + if (unlikely(ZERO_OR_NULL_PTR(cachep)))
4098 + return cachep;
4099 +@@ -3710,6 +3712,8 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
4100 + struct kmem_cache *cachep;
4101 + void *ret;
4102 +
4103 ++ if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
4104 ++ return NULL;
4105 + cachep = kmalloc_slab(size, flags);
4106 + if (unlikely(ZERO_OR_NULL_PTR(cachep)))
4107 + return cachep;
4108 +diff --git a/mm/slab_common.c b/mm/slab_common.c
4109 +index fea3376f9816..3a7ac4f15194 100644
4110 +--- a/mm/slab_common.c
4111 ++++ b/mm/slab_common.c
4112 +@@ -1027,18 +1027,18 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
4113 + {
4114 + unsigned int index;
4115 +
4116 +- if (unlikely(size > KMALLOC_MAX_SIZE)) {
4117 +- WARN_ON_ONCE(!(flags & __GFP_NOWARN));
4118 +- return NULL;
4119 +- }
4120 +-
4121 + if (size <= 192) {
4122 + if (!size)
4123 + return ZERO_SIZE_PTR;
4124 +
4125 + index = size_index[size_index_elem(size)];
4126 +- } else
4127 ++ } else {
4128 ++ if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
4129 ++ WARN_ON(1);
4130 ++ return NULL;
4131 ++ }
4132 + index = fls(size - 1);
4133 ++ }
4134 +
4135 + #ifdef CONFIG_ZONE_DMA
4136 + if (unlikely((flags & GFP_DMA)))
4137 +diff --git a/mm/z3fold.c b/mm/z3fold.c
4138 +index 4b366d181f35..aee9b0b8d907 100644
4139 +--- a/mm/z3fold.c
4140 ++++ b/mm/z3fold.c
4141 +@@ -99,6 +99,7 @@ struct z3fold_header {
4142 + #define NCHUNKS ((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)
4143 +
4144 + #define BUDDY_MASK (0x3)
4145 ++#define BUDDY_SHIFT 2
4146 +
4147 + /**
4148 + * struct z3fold_pool - stores metadata for each z3fold pool
4149 +@@ -145,7 +146,7 @@ enum z3fold_page_flags {
4150 + MIDDLE_CHUNK_MAPPED,
4151 + NEEDS_COMPACTING,
4152 + PAGE_STALE,
4153 +- UNDER_RECLAIM
4154 ++ PAGE_CLAIMED, /* by either reclaim or free */
4155 + };
4156 +
4157 + /*****************
4158 +@@ -174,7 +175,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page,
4159 + clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
4160 + clear_bit(NEEDS_COMPACTING, &page->private);
4161 + clear_bit(PAGE_STALE, &page->private);
4162 +- clear_bit(UNDER_RECLAIM, &page->private);
4163 ++ clear_bit(PAGE_CLAIMED, &page->private);
4164 +
4165 + spin_lock_init(&zhdr->page_lock);
4166 + kref_init(&zhdr->refcount);
4167 +@@ -223,8 +224,11 @@ static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud)
4168 + unsigned long handle;
4169 +
4170 + handle = (unsigned long)zhdr;
4171 +- if (bud != HEADLESS)
4172 +- handle += (bud + zhdr->first_num) & BUDDY_MASK;
4173 ++ if (bud != HEADLESS) {
4174 ++ handle |= (bud + zhdr->first_num) & BUDDY_MASK;
4175 ++ if (bud == LAST)
4176 ++ handle |= (zhdr->last_chunks << BUDDY_SHIFT);
4177 ++ }
4178 + return handle;
4179 + }
4180 +
4181 +@@ -234,6 +238,12 @@ static struct z3fold_header *handle_to_z3fold_header(unsigned long handle)
4182 + return (struct z3fold_header *)(handle & PAGE_MASK);
4183 + }
4184 +
4185 ++/* only for LAST bud, returns zero otherwise */
4186 ++static unsigned short handle_to_chunks(unsigned long handle)
4187 ++{
4188 ++ return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;
4189 ++}
4190 ++
4191 + /*
4192 + * (handle & BUDDY_MASK) < zhdr->first_num is possible in encode_handle
4193 + * but that doesn't matter, because the masking will result in the
4194 +@@ -720,37 +730,39 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
4195 + page = virt_to_page(zhdr);
4196 +
4197 + if (test_bit(PAGE_HEADLESS, &page->private)) {
4198 +- /* HEADLESS page stored */
4199 +- bud = HEADLESS;
4200 +- } else {
4201 +- z3fold_page_lock(zhdr);
4202 +- bud = handle_to_buddy(handle);
4203 +-
4204 +- switch (bud) {
4205 +- case FIRST:
4206 +- zhdr->first_chunks = 0;
4207 +- break;
4208 +- case MIDDLE:
4209 +- zhdr->middle_chunks = 0;
4210 +- zhdr->start_middle = 0;
4211 +- break;
4212 +- case LAST:
4213 +- zhdr->last_chunks = 0;
4214 +- break;
4215 +- default:
4216 +- pr_err("%s: unknown bud %d\n", __func__, bud);
4217 +- WARN_ON(1);
4218 +- z3fold_page_unlock(zhdr);
4219 +- return;
4220 ++ /* if a headless page is under reclaim, just leave.
4221 ++ * NB: we use test_and_set_bit for a reason: if the bit
4222 ++ * has not been set before, we release this page
4223 ++ * immediately so we don't care about its value any more.
4224 ++ */
4225 ++ if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {
4226 ++ spin_lock(&pool->lock);
4227 ++ list_del(&page->lru);
4228 ++ spin_unlock(&pool->lock);
4229 ++ free_z3fold_page(page);
4230 ++ atomic64_dec(&pool->pages_nr);
4231 + }
4232 ++ return;
4233 + }
4234 +
4235 +- if (bud == HEADLESS) {
4236 +- spin_lock(&pool->lock);
4237 +- list_del(&page->lru);
4238 +- spin_unlock(&pool->lock);
4239 +- free_z3fold_page(page);
4240 +- atomic64_dec(&pool->pages_nr);
4241 ++ /* Non-headless case */
4242 ++ z3fold_page_lock(zhdr);
4243 ++ bud = handle_to_buddy(handle);
4244 ++
4245 ++ switch (bud) {
4246 ++ case FIRST:
4247 ++ zhdr->first_chunks = 0;
4248 ++ break;
4249 ++ case MIDDLE:
4250 ++ zhdr->middle_chunks = 0;
4251 ++ break;
4252 ++ case LAST:
4253 ++ zhdr->last_chunks = 0;
4254 ++ break;
4255 ++ default:
4256 ++ pr_err("%s: unknown bud %d\n", __func__, bud);
4257 ++ WARN_ON(1);
4258 ++ z3fold_page_unlock(zhdr);
4259 + return;
4260 + }
4261 +
4262 +@@ -758,7 +770,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
4263 + atomic64_dec(&pool->pages_nr);
4264 + return;
4265 + }
4266 +- if (test_bit(UNDER_RECLAIM, &page->private)) {
4267 ++ if (test_bit(PAGE_CLAIMED, &page->private)) {
4268 + z3fold_page_unlock(zhdr);
4269 + return;
4270 + }
4271 +@@ -836,20 +848,30 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
4272 + }
4273 + list_for_each_prev(pos, &pool->lru) {
4274 + page = list_entry(pos, struct page, lru);
4275 ++
4276 ++ /* this bit could have been set by free, in which case
4277 ++ * we pass over to the next page in the pool.
4278 ++ */
4279 ++ if (test_and_set_bit(PAGE_CLAIMED, &page->private))
4280 ++ continue;
4281 ++
4282 ++ zhdr = page_address(page);
4283 + if (test_bit(PAGE_HEADLESS, &page->private))
4284 +- /* candidate found */
4285 + break;
4286 +
4287 +- zhdr = page_address(page);
4288 +- if (!z3fold_page_trylock(zhdr))
4289 ++ if (!z3fold_page_trylock(zhdr)) {
4290 ++ zhdr = NULL;
4291 + continue; /* can't evict at this point */
4292 ++ }
4293 + kref_get(&zhdr->refcount);
4294 + list_del_init(&zhdr->buddy);
4295 + zhdr->cpu = -1;
4296 +- set_bit(UNDER_RECLAIM, &page->private);
4297 + break;
4298 + }
4299 +
4300 ++ if (!zhdr)
4301 ++ break;
4302 ++
4303 + list_del_init(&page->lru);
4304 + spin_unlock(&pool->lock);
4305 +
4306 +@@ -898,6 +920,7 @@ next:
4307 + if (test_bit(PAGE_HEADLESS, &page->private)) {
4308 + if (ret == 0) {
4309 + free_z3fold_page(page);
4310 ++ atomic64_dec(&pool->pages_nr);
4311 + return 0;
4312 + }
4313 + spin_lock(&pool->lock);
4314 +@@ -905,7 +928,7 @@ next:
4315 + spin_unlock(&pool->lock);
4316 + } else {
4317 + z3fold_page_lock(zhdr);
4318 +- clear_bit(UNDER_RECLAIM, &page->private);
4319 ++ clear_bit(PAGE_CLAIMED, &page->private);
4320 + if (kref_put(&zhdr->refcount,
4321 + release_z3fold_page_locked)) {
4322 + atomic64_dec(&pool->pages_nr);
4323 +@@ -964,7 +987,7 @@ static void *z3fold_map(struct z3fold_pool *pool, unsigned long handle)
4324 + set_bit(MIDDLE_CHUNK_MAPPED, &page->private);
4325 + break;
4326 + case LAST:
4327 +- addr += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);
4328 ++ addr += PAGE_SIZE - (handle_to_chunks(handle) << CHUNK_SHIFT);
4329 + break;
4330 + default:
4331 + pr_err("unknown buddy id %d\n", buddy);
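
Because the z3fold header is page-aligned, the low PAGE_SHIFT bits of a handle are free for metadata. They used to carry only the buddy id; encode_handle() now also caches last_chunks there, so z3fold_map() can compute the LAST buddy's offset from the handle itself via handle_to_chunks() instead of rereading zhdr->last_chunks. The resulting layout:

/*
 * handle bit layout (LAST buddy):
 *   [PAGE_SHIFT..63]            z3fold_header pointer (handle & PAGE_MASK)
 *   [BUDDY_SHIFT..PAGE_SHIFT-1] last_chunks           (handle_to_chunks())
 *   [0..BUDDY_SHIFT-1]          buddy id              (handle & BUDDY_MASK)
 */
unsigned long handle = (unsigned long)zhdr;

handle |= (bud + zhdr->first_num) & BUDDY_MASK;
if (bud == LAST)
        handle |= zhdr->last_chunks << BUDDY_SHIFT;
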
4332 +diff --git a/net/can/raw.c b/net/can/raw.c
4333 +index 1051eee82581..3aab7664933f 100644
4334 +--- a/net/can/raw.c
4335 ++++ b/net/can/raw.c
4336 +@@ -745,18 +745,19 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
4337 + } else
4338 + ifindex = ro->ifindex;
4339 +
4340 +- if (ro->fd_frames) {
4341 ++ dev = dev_get_by_index(sock_net(sk), ifindex);
4342 ++ if (!dev)
4343 ++ return -ENXIO;
4344 ++
4345 ++ err = -EINVAL;
4346 ++ if (ro->fd_frames && dev->mtu == CANFD_MTU) {
4347 + if (unlikely(size != CANFD_MTU && size != CAN_MTU))
4348 +- return -EINVAL;
4349 ++ goto put_dev;
4350 + } else {
4351 + if (unlikely(size != CAN_MTU))
4352 +- return -EINVAL;
4353 ++ goto put_dev;
4354 + }
4355 +
4356 +- dev = dev_get_by_index(sock_net(sk), ifindex);
4357 +- if (!dev)
4358 +- return -ENXIO;
4359 +-
4360 + skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv),
4361 + msg->msg_flags & MSG_DONTWAIT, &err);
4362 + if (!skb)
4363 +diff --git a/net/core/sock.c b/net/core/sock.c
4364 +index 3730eb855095..748765e35423 100644
4365 +--- a/net/core/sock.c
4366 ++++ b/net/core/sock.c
4367 +@@ -2317,7 +2317,7 @@ static void __lock_sock(struct sock *sk)
4368 + finish_wait(&sk->sk_lock.wq, &wait);
4369 + }
4370 +
4371 +-static void __release_sock(struct sock *sk)
4372 ++void __release_sock(struct sock *sk)
4373 + __releases(&sk->sk_lock.slock)
4374 + __acquires(&sk->sk_lock.slock)
4375 + {
4376 +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
4377 +index bbd07736fb0f..a32a0f4cc138 100644
4378 +--- a/net/ipv4/tcp.c
4379 ++++ b/net/ipv4/tcp.c
4380 +@@ -2403,16 +2403,10 @@ adjudge_to_death:
4381 + sock_hold(sk);
4382 + sock_orphan(sk);
4383 +
4384 +- /* It is the last release_sock in its life. It will remove backlog. */
4385 +- release_sock(sk);
4386 +-
4387 +-
4388 +- /* Now socket is owned by kernel and we acquire BH lock
4389 +- * to finish close. No need to check for user refs.
4390 +- */
4391 + local_bh_disable();
4392 + bh_lock_sock(sk);
4393 +- WARN_ON(sock_owned_by_user(sk));
4394 ++ /* remove backlog if any, without releasing ownership. */
4395 ++ __release_sock(sk);
4396 +
4397 + percpu_counter_inc(sk->sk_prot->orphan_count);
4398 +
4399 +@@ -2481,6 +2475,7 @@ adjudge_to_death:
4400 + out:
4401 + bh_unlock_sock(sk);
4402 + local_bh_enable();
4403 ++ release_sock(sk);
4404 + sock_put(sk);
4405 + }
4406 + EXPORT_SYMBOL(tcp_close);
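
The reordering removes the window in which tcp_close() had fully released the socket with release_sock() and only then re-took it with bh_lock_sock(); in between, the socket was briefly unowned. Now ownership is never given up: the backlog is drained in place through the newly exported __release_sock(), and the single release_sock() moves to the very end. In outline:

/* Condensed sketch of the new teardown order in tcp_close(). */
local_bh_disable();
bh_lock_sock(sk);
__release_sock(sk);     /* drain backlog without giving up ownership */
/* ... orphan accounting and close work, socket still owned ... */
bh_unlock_sock(sk);
local_bh_enable();
release_sock(sk);       /* the one final release */
sock_put(sk);
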
4407 +diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
4408 +index 1beeea9549fa..b99e73a7e7e0 100644
4409 +--- a/net/llc/af_llc.c
4410 ++++ b/net/llc/af_llc.c
4411 +@@ -730,7 +730,6 @@ static int llc_ui_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
4412 + struct sk_buff *skb = NULL;
4413 + struct sock *sk = sock->sk;
4414 + struct llc_sock *llc = llc_sk(sk);
4415 +- unsigned long cpu_flags;
4416 + size_t copied = 0;
4417 + u32 peek_seq = 0;
4418 + u32 *seq, skb_len;
4419 +@@ -855,9 +854,8 @@ static int llc_ui_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
4420 + goto copy_uaddr;
4421 +
4422 + if (!(flags & MSG_PEEK)) {
4423 +- spin_lock_irqsave(&sk->sk_receive_queue.lock, cpu_flags);
4424 +- sk_eat_skb(sk, skb);
4425 +- spin_unlock_irqrestore(&sk->sk_receive_queue.lock, cpu_flags);
4426 ++ skb_unlink(skb, &sk->sk_receive_queue);
4427 ++ kfree_skb(skb);
4428 + *seq = 0;
4429 + }
4430 +
4431 +@@ -878,9 +876,8 @@ copy_uaddr:
4432 + llc_cmsg_rcv(msg, skb);
4433 +
4434 + if (!(flags & MSG_PEEK)) {
4435 +- spin_lock_irqsave(&sk->sk_receive_queue.lock, cpu_flags);
4436 +- sk_eat_skb(sk, skb);
4437 +- spin_unlock_irqrestore(&sk->sk_receive_queue.lock, cpu_flags);
4438 ++ skb_unlink(skb, &sk->sk_receive_queue);
4439 ++ kfree_skb(skb);
4440 + *seq = 0;
4441 + }
4442 +
4443 +diff --git a/net/sctp/associola.c b/net/sctp/associola.c
4444 +index a827a1f562bf..6a28b96e779e 100644
4445 +--- a/net/sctp/associola.c
4446 ++++ b/net/sctp/associola.c
4447 +@@ -499,8 +499,9 @@ void sctp_assoc_set_primary(struct sctp_association *asoc,
4448 + void sctp_assoc_rm_peer(struct sctp_association *asoc,
4449 + struct sctp_transport *peer)
4450 + {
4451 +- struct list_head *pos;
4452 +- struct sctp_transport *transport;
4453 ++ struct sctp_transport *transport;
4454 ++ struct list_head *pos;
4455 ++ struct sctp_chunk *ch;
4456 +
4457 + pr_debug("%s: association:%p addr:%pISpc\n",
4458 + __func__, asoc, &peer->ipaddr.sa);
4459 +@@ -564,7 +565,6 @@ void sctp_assoc_rm_peer(struct sctp_association *asoc,
4460 + */
4461 + if (!list_empty(&peer->transmitted)) {
4462 + struct sctp_transport *active = asoc->peer.active_path;
4463 +- struct sctp_chunk *ch;
4464 +
4465 + /* Reset the transport of each chunk on this list */
4466 + list_for_each_entry(ch, &peer->transmitted,
4467 +@@ -586,6 +586,10 @@ void sctp_assoc_rm_peer(struct sctp_association *asoc,
4468 + sctp_transport_hold(active);
4469 + }
4470 +
4471 ++ list_for_each_entry(ch, &asoc->outqueue.out_chunk_list, list)
4472 ++ if (ch->transport == peer)
4473 ++ ch->transport = NULL;
4474 ++
4475 + asoc->peer.transport_count--;
4476 +
4477 + sctp_transport_free(peer);
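The associola.c hunk extends sctp_assoc_rm_peer() to walk the association's out_chunk_list and clear any chunk still pointing at the transport about to be freed, closing a use-after-free when such a chunk is touched later. The general pattern, sketched in userspace:

    #include <stdio.h>
    #include <stdlib.h>

    struct transport { int id; };
    struct chunk { struct transport *transport; };

    /* Before freeing an object, null out every queued reference to it,
     * as the new out_chunk_list walk does for the removed peer. */
    static void remove_peer(struct chunk *queue, size_t n, struct transport *peer)
    {
        for (size_t i = 0; i < n; i++)
            if (queue[i].transport == peer)
                queue[i].transport = NULL;
        free(peer);
    }

    int main(void)
    {
        struct transport *peer = malloc(sizeof(*peer));
        struct chunk queue[2] = { { peer }, { NULL } };
        remove_peer(queue, 2, peer);
        printf("%p\n", (void *)queue[0].transport);  /* (nil): no dangling pointer */
        return 0;
    }

Hoisting the ch declaration to the top of the function is just what lets the new loop share it with the existing transmitted-list walk.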
4478 +diff --git a/net/sunrpc/auth_generic.c b/net/sunrpc/auth_generic.c
4479 +index f1df9837f1ac..1ac08dcbf85d 100644
4480 +--- a/net/sunrpc/auth_generic.c
4481 ++++ b/net/sunrpc/auth_generic.c
4482 +@@ -281,13 +281,7 @@ static bool generic_key_to_expire(struct rpc_cred *cred)
4483 + {
4484 + struct auth_cred *acred = &container_of(cred, struct generic_cred,
4485 + gc_base)->acred;
4486 +- bool ret;
4487 +-
4488 +- get_rpccred(cred);
4489 +- ret = test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags);
4490 +- put_rpccred(cred);
4491 +-
4492 +- return ret;
4493 ++ return test_bit(RPC_CRED_KEY_EXPIRE_SOON, &acred->ac_flags);
4494 + }
4495 +
4496 + static const struct rpc_credops generic_credops = {
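The auth_generic.c hunk reduces generic_key_to_expire() to a bare test_bit(): pinning the cred with get_rpccred()/put_rpccred() just to read one bit of its flags word buys nothing, since reading a flag needs no reference of its own. A minimal model of the remaining check (bit position illustrative):

    #include <stdio.h>

    #define RPC_CRED_KEY_EXPIRE_SOON 0   /* bit index, for illustration */

    /* A reader's model of test_bit(): inspect one bit of a flags word. */
    static int key_to_expire(unsigned long flags)
    {
        return (flags >> RPC_CRED_KEY_EXPIRE_SOON) & 1UL;
    }

    int main(void)
    {
        printf("%d\n", key_to_expire(1UL << RPC_CRED_KEY_EXPIRE_SOON)); /* 1 */
        return 0;
    }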
4497 +diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c
4498 +index e9394e7adc84..f4eadd3f7350 100644
4499 +--- a/security/selinux/ss/policydb.c
4500 ++++ b/security/selinux/ss/policydb.c
4501 +@@ -1101,7 +1101,7 @@ static int str_read(char **strp, gfp_t flags, void *fp, u32 len)
4502 + if ((len == 0) || (len == (u32)-1))
4503 + return -EINVAL;
4504 +
4505 +- str = kmalloc(len + 1, flags);
4506 ++ str = kmalloc(len + 1, flags | __GFP_NOWARN);
4507 + if (!str)
4508 + return -ENOMEM;
4509 +
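In the SELinux hunk, str_read() sizes its allocation from a length field read out of the binary policy image, so a malformed policy can request an enormous buffer; adding __GFP_NOWARN lets such an allocation fail quietly instead of spraying a kernel warning, while the existing -ENOMEM path remains the real defense. A userspace analogue of allocating from an untrusted length:

    #include <stdint.h>
    #include <stdlib.h>

    /* An allocation sized by untrusted input must be allowed to fail;
     * the caller's error path handles it, no warning needed. */
    static char *read_str(uint32_t len)
    {
        if (len == 0 || len == UINT32_MAX)   /* same sanity check as str_read() */
            return NULL;
        char *str = malloc((size_t)len + 1);
        if (!str)
            return NULL;                     /* maps to -ENOMEM in the kernel */
        str[len] = '\0';
        return str;
    }

    int main(void)
    {
        char *s = read_str(16);
        free(s);
        return 0;
    }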
4510 +diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
4511 +index f8d4a419f3af..467039b342b5 100644
4512 +--- a/sound/core/oss/pcm_oss.c
4513 ++++ b/sound/core/oss/pcm_oss.c
4514 +@@ -1062,8 +1062,8 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
4515 + runtime->oss.channels = params_channels(params);
4516 + runtime->oss.rate = params_rate(params);
4517 +
4518 +- vfree(runtime->oss.buffer);
4519 +- runtime->oss.buffer = vmalloc(runtime->oss.period_bytes);
4520 ++ kvfree(runtime->oss.buffer);
4521 ++ runtime->oss.buffer = kvzalloc(runtime->oss.period_bytes, GFP_KERNEL);
4522 + if (!runtime->oss.buffer) {
4523 + err = -ENOMEM;
4524 + goto failure;
4525 +@@ -2328,7 +2328,7 @@ static void snd_pcm_oss_release_substream(struct snd_pcm_substream *substream)
4526 + {
4527 + struct snd_pcm_runtime *runtime;
4528 + runtime = substream->runtime;
4529 +- vfree(runtime->oss.buffer);
4530 ++ kvfree(runtime->oss.buffer);
4531 + runtime->oss.buffer = NULL;
4532 + #ifdef CONFIG_SND_PCM_OSS_PLUGINS
4533 + snd_pcm_oss_plugin_clear(substream);
4534 +diff --git a/sound/core/oss/pcm_plugin.c b/sound/core/oss/pcm_plugin.c
4535 +index 0391cb1a4f19..71571d992159 100644
4536 +--- a/sound/core/oss/pcm_plugin.c
4537 ++++ b/sound/core/oss/pcm_plugin.c
4538 +@@ -66,8 +66,8 @@ static int snd_pcm_plugin_alloc(struct snd_pcm_plugin *plugin, snd_pcm_uframes_t
4539 + return -ENXIO;
4540 + size /= 8;
4541 + if (plugin->buf_frames < frames) {
4542 +- vfree(plugin->buf);
4543 +- plugin->buf = vmalloc(size);
4544 ++ kvfree(plugin->buf);
4545 ++ plugin->buf = kvzalloc(size, GFP_KERNEL);
4546 + plugin->buf_frames = frames;
4547 + }
4548 + if (!plugin->buf) {
4549 +@@ -191,7 +191,7 @@ int snd_pcm_plugin_free(struct snd_pcm_plugin *plugin)
4550 + if (plugin->private_free)
4551 + plugin->private_free(plugin);
4552 + kfree(plugin->buf_channels);
4553 +- vfree(plugin->buf);
4554 ++ kvfree(plugin->buf);
4555 + kfree(plugin);
4556 + return 0;
4557 + }
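Both OSS hunks swap vmalloc()/vfree() for kvzalloc()/kvfree(): the buffers involved are usually small enough for an ordinary kmalloc, with vmalloc needed only as a large-size fallback, and kvfree() accepts a pointer from either path (with zeroing thrown in by the z variant). A userspace model of that fallback pairing; the stub_* allocators are stand-ins for illustration, not kernel API:

    #include <stdlib.h>
    #include <string.h>

    /* Stand-ins: pretend the fast allocator only serves small requests. */
    static void *stub_kmalloc(size_t n) { return n <= 4096 ? malloc(n) : NULL; }
    static void *stub_vmalloc(size_t n) { return malloc(n); }

    /* Like kvzalloc(): fast path first, fallback second, zeroed either way. */
    static void *kv_zalloc(size_t n)
    {
        void *p = stub_kmalloc(n);
        if (!p)
            p = stub_vmalloc(n);
        if (p)
            memset(p, 0, n);
        return p;
    }

    /* Like kvfree(): one free that is valid for both allocation paths. */
    static void kv_free(void *p) { free(p); }

    int main(void)
    {
        void *buf = kv_zalloc(64 * 1024);   /* large: takes the fallback path */
        kv_free(buf);
        return 0;
    }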
4558 +diff --git a/tools/power/cpupower/bench/Makefile b/tools/power/cpupower/bench/Makefile
4559 +index d79ab161cc75..f68b4bc55273 100644
4560 +--- a/tools/power/cpupower/bench/Makefile
4561 ++++ b/tools/power/cpupower/bench/Makefile
4562 +@@ -9,7 +9,7 @@ endif
4563 + ifeq ($(strip $(STATIC)),true)
4564 + LIBS = -L../ -L$(OUTPUT) -lm
4565 + OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o \
4566 +- $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/sysfs.o
4567 ++ $(OUTPUT)../lib/cpufreq.o $(OUTPUT)../lib/cpupower.o
4568 + else
4569 + LIBS = -L../ -L$(OUTPUT) -lm -lcpupower
4570 + OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o
4571 +diff --git a/tools/power/cpupower/lib/cpufreq.c b/tools/power/cpupower/lib/cpufreq.c
4572 +index 1b993fe1ce23..0c0f3e3f0d80 100644
4573 +--- a/tools/power/cpupower/lib/cpufreq.c
4574 ++++ b/tools/power/cpupower/lib/cpufreq.c
4575 +@@ -28,7 +28,7 @@ static unsigned int sysfs_cpufreq_read_file(unsigned int cpu, const char *fname,
4576 +
4577 + snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/cpufreq/%s",
4578 + cpu, fname);
4579 +- return sysfs_read_file(path, buf, buflen);
4580 ++ return cpupower_read_sysfs(path, buf, buflen);
4581 + }
4582 +
4583 + /* helper function to write a new value to a /sys file */
4584 +diff --git a/tools/power/cpupower/lib/cpuidle.c b/tools/power/cpupower/lib/cpuidle.c
4585 +index 9bd4c7655fdb..852d25462388 100644
4586 +--- a/tools/power/cpupower/lib/cpuidle.c
4587 ++++ b/tools/power/cpupower/lib/cpuidle.c
4588 +@@ -319,7 +319,7 @@ static unsigned int sysfs_cpuidle_read_file(const char *fname, char *buf,
4589 +
4590 + snprintf(path, sizeof(path), PATH_TO_CPU "cpuidle/%s", fname);
4591 +
4592 +- return sysfs_read_file(path, buf, buflen);
4593 ++ return cpupower_read_sysfs(path, buf, buflen);
4594 + }
4595 +
4596 +
4597 +diff --git a/tools/power/cpupower/lib/cpupower.c b/tools/power/cpupower/lib/cpupower.c
4598 +index 9c395ec924de..9711d628b0f4 100644
4599 +--- a/tools/power/cpupower/lib/cpupower.c
4600 ++++ b/tools/power/cpupower/lib/cpupower.c
4601 +@@ -15,7 +15,7 @@
4602 + #include "cpupower.h"
4603 + #include "cpupower_intern.h"
4604 +
4605 +-unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen)
4606 ++unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen)
4607 + {
4608 + int fd;
4609 + ssize_t numread;
4610 +@@ -95,7 +95,7 @@ static int sysfs_topology_read_file(unsigned int cpu, const char *fname, int *re
4611 +
4612 + snprintf(path, sizeof(path), PATH_TO_CPU "cpu%u/topology/%s",
4613 + cpu, fname);
4614 +- if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0)
4615 ++ if (cpupower_read_sysfs(path, linebuf, MAX_LINE_LEN) == 0)
4616 + return -1;
4617 + *result = strtol(linebuf, &endp, 0);
4618 + if (endp == linebuf || errno == ERANGE)
4619 +diff --git a/tools/power/cpupower/lib/cpupower_intern.h b/tools/power/cpupower/lib/cpupower_intern.h
4620 +index 92affdfbe417..4887c76d23f8 100644
4621 +--- a/tools/power/cpupower/lib/cpupower_intern.h
4622 ++++ b/tools/power/cpupower/lib/cpupower_intern.h
4623 +@@ -3,4 +3,4 @@
4624 + #define MAX_LINE_LEN 4096
4625 + #define SYSFS_PATH_MAX 255
4626 +
4627 +-unsigned int sysfs_read_file(const char *path, char *buf, size_t buflen);
4628 ++unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen);
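The cpupower hunks are one logical change: the library-internal sysfs reader gains a cpupower_ prefix so its symbol cannot collide with another sysfs_read_file definition when the bench tool is linked statically, and the bench Makefile now pulls in cpupower.o, where the renamed helper lives, in place of the dropped sysfs.o. The helper is roughly this shape (a userspace sketch of the open/read/NUL-terminate sequence):

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    /* Approximately cpupower_read_sysfs(): read a sysfs attribute into
     * buf and NUL-terminate it, returning the byte count (0 on error). */
    static unsigned int cpupower_read_sysfs(const char *path, char *buf, size_t buflen)
    {
        ssize_t numread;
        int fd = open(path, O_RDONLY);

        if (fd == -1)
            return 0;
        numread = read(fd, buf, buflen - 1);
        close(fd);
        if (numread < 1)
            return 0;
        buf[numread] = '\0';
        return (unsigned int)numread;
    }

    int main(void)
    {
        char buf[64];
        if (cpupower_read_sysfs("/sys/devices/system/cpu/online", buf, sizeof(buf)))
            printf("online cpus: %s", buf);
        return 0;
    }

Prefixing exported-but-internal helpers with the project name is the standard guard against exactly this kind of static-link symbol clash.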
4629 +diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
4630 +index cffc2c5a778d..ec50d2a95076 100644
4631 +--- a/tools/testing/nvdimm/test/nfit.c
4632 ++++ b/tools/testing/nvdimm/test/nfit.c
4633 +@@ -139,8 +139,8 @@ static u32 handle[] = {
4634 + [6] = NFIT_DIMM_HANDLE(1, 0, 0, 0, 1),
4635 + };
4636 +
4637 +-static unsigned long dimm_fail_cmd_flags[NUM_DCR];
4638 +-static int dimm_fail_cmd_code[NUM_DCR];
4639 ++static unsigned long dimm_fail_cmd_flags[ARRAY_SIZE(handle)];
4640 ++static int dimm_fail_cmd_code[ARRAY_SIZE(handle)];
4641 +
4642 + static const struct nd_intel_smart smart_def = {
4643 + .flags = ND_INTEL_SMART_HEALTH_VALID
4644 +@@ -203,7 +203,7 @@ struct nfit_test {
4645 + unsigned long deadline;
4646 + spinlock_t lock;
4647 + } ars_state;
4648 +- struct device *dimm_dev[NUM_DCR];
4649 ++ struct device *dimm_dev[ARRAY_SIZE(handle)];
4650 + struct nd_intel_smart *smart;
4651 + struct nd_intel_smart_threshold *smart_threshold;
4652 + struct badrange badrange;
4653 +@@ -2678,7 +2678,7 @@ static int nfit_test_probe(struct platform_device *pdev)
4654 + u32 nfit_handle = __to_nfit_memdev(nfit_mem)->device_handle;
4655 + int i;
4656 +
4657 +- for (i = 0; i < NUM_DCR; i++)
4658 ++ for (i = 0; i < ARRAY_SIZE(handle); i++)
4659 + if (nfit_handle == handle[i])
4660 + dev_set_drvdata(nfit_test->dimm_dev[i],
4661 + nfit_mem);
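Finally, the nfit test hunk sizes dimm_fail_cmd_flags, dimm_fail_cmd_code and dimm_dev from ARRAY_SIZE(handle) rather than NUM_DCR: the handle[] table has more entries than NUM_DCR covered, so indexing the companion arrays by handle position could run past their end. The idiom in isolation, with illustrative values:

    #include <stdio.h>
    #include <stddef.h>

    /* The ARRAY_SIZE idiom: derive companion-array sizes from the table
     * they parallel, so the sizes cannot silently diverge. */
    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static unsigned int handle[] = { 10, 11, 12, 20, 21, 22, 30 };
    static unsigned long fail_flags[ARRAY_SIZE(handle)];   /* always in step */

    int main(void)
    {
        for (size_t i = 0; i < ARRAY_SIZE(handle); i++)
            printf("handle %u -> flags %lu\n", handle[i], fail_flags[i]);
        return 0;
    }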