From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: /
Date: Wed, 27 Jan 2021 11:29:24
Message-Id: 1611746939.42b6a29af6c32b8480be206fc4489da99d58ab2c.mpagano@gentoo
1 commit: 42b6a29af6c32b8480be206fc4489da99d58ab2c
2 Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
3 AuthorDate: Wed Jan 27 11:28:59 2021 +0000
4 Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
5 CommitDate: Wed Jan 27 11:28:59 2021 +0000
6 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=42b6a29a
7
8 Linux patch 5.10.11
9
10 Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
11
12 0000_README | 4 +
13 1010_linux-5.10.11.patch | 7021 ++++++++++++++++++++++++++++++++++++++++++++++
14 2 files changed, 7025 insertions(+)
15
16 diff --git a/0000_README b/0000_README
17 index 4ad6d69..fe8a778 100644
18 --- a/0000_README
19 +++ b/0000_README
20 @@ -83,6 +83,10 @@ Patch: 1009_linux-5.10.10.patch
21 From: http://www.kernel.org
22 Desc: Linux 5.10.10
23
24 +Patch: 1010_linux-5.10.11.patch
25 +From: http://www.kernel.org
26 +Desc: Linux 5.10.11
27 +
28 Patch: 1500_XATTR_USER_PREFIX.patch
29 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
30 Desc: Support for namespace user.pax.* on tmpfs.
31
32 diff --git a/1010_linux-5.10.11.patch b/1010_linux-5.10.11.patch
33 new file mode 100644
34 index 0000000..a4b4a65
35 --- /dev/null
36 +++ b/1010_linux-5.10.11.patch
37 @@ -0,0 +1,7021 @@
38 +diff --git a/Documentation/ABI/testing/sysfs-class-devlink b/Documentation/ABI/testing/sysfs-class-devlink
39 +index b662f747c83eb..8a21ce515f61f 100644
40 +--- a/Documentation/ABI/testing/sysfs-class-devlink
41 ++++ b/Documentation/ABI/testing/sysfs-class-devlink
42 +@@ -5,8 +5,8 @@ Description:
43 + Provide a place in sysfs for the device link objects in the
44 + kernel at any given time. The name of a device link directory,
45 + denoted as ... above, is of the form <supplier>--<consumer>
46 +- where <supplier> is the supplier device name and <consumer> is
47 +- the consumer device name.
48 ++ where <supplier> is the supplier bus:device name and <consumer>
49 ++ is the consumer bus:device name.
50 +
51 + What: /sys/class/devlink/.../auto_remove_on
52 + Date: May 2020
53 +diff --git a/Documentation/ABI/testing/sysfs-devices-consumer b/Documentation/ABI/testing/sysfs-devices-consumer
54 +index 1f06d74d1c3cc..0809fda092e66 100644
55 +--- a/Documentation/ABI/testing/sysfs-devices-consumer
56 ++++ b/Documentation/ABI/testing/sysfs-devices-consumer
57 +@@ -4,5 +4,6 @@ Contact: Saravana Kannan <saravanak@××××××.com>
58 + Description:
59 + The /sys/devices/.../consumer:<consumer> are symlinks to device
60 + links where this device is the supplier. <consumer> denotes the
61 +- name of the consumer in that device link. There can be zero or
62 +- more of these symlinks for a given device.
63 ++ name of the consumer in that device link and is of the form
64 ++ bus:device name. There can be zero or more of these symlinks
65 ++ for a given device.
66 +diff --git a/Documentation/ABI/testing/sysfs-devices-supplier b/Documentation/ABI/testing/sysfs-devices-supplier
67 +index a919e0db5e902..207f5972e98d8 100644
68 +--- a/Documentation/ABI/testing/sysfs-devices-supplier
69 ++++ b/Documentation/ABI/testing/sysfs-devices-supplier
70 +@@ -4,5 +4,6 @@ Contact: Saravana Kannan <saravanak@××××××.com>
71 + Description:
72 + The /sys/devices/.../supplier:<supplier> are symlinks to device
73 + links where this device is the consumer. <supplier> denotes the
74 +- name of the supplier in that device link. There can be zero or
75 +- more of these symlinks for a given device.
76 ++ name of the supplier in that device link and is of the form
77 ++ bus:device name. There can be zero or more of these symlinks
78 ++ for a given device.
79 +diff --git a/Documentation/admin-guide/device-mapper/dm-integrity.rst b/Documentation/admin-guide/device-mapper/dm-integrity.rst
80 +index 3ab4f7756a6e6..bf878c879afb6 100644
81 +--- a/Documentation/admin-guide/device-mapper/dm-integrity.rst
82 ++++ b/Documentation/admin-guide/device-mapper/dm-integrity.rst
83 +@@ -177,14 +177,20 @@ bitmap_flush_interval:number
84 + The bitmap flush interval in milliseconds. The metadata buffers
85 + are synchronized when this interval expires.
86 +
87 ++allow_discards
88 ++ Allow block discard requests (a.k.a. TRIM) for the integrity device.
89 ++ Discards are only allowed to devices using internal hash.
90 ++
91 + fix_padding
92 + Use a smaller padding of the tag area that is more
93 + space-efficient. If this option is not present, large padding is
94 + used - that is for compatibility with older kernels.
95 +
96 +-allow_discards
97 +- Allow block discard requests (a.k.a. TRIM) for the integrity device.
98 +- Discards are only allowed to devices using internal hash.
99 ++legacy_recalculate
100 ++ Allow recalculating of volumes with HMAC keys. This is disabled by
101 ++ default for security reasons - an attacker could modify the volume,
102 ++ set recalc_sector to zero, and the kernel would not detect the
103 ++ modification.
104 +
105 + The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and
106 + allow_discards can be changed when reloading the target (load an inactive
107 +diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
108 +index f6a1513dfb76c..26bfe7ae711b8 100644
109 +--- a/Documentation/admin-guide/kernel-parameters.txt
110 ++++ b/Documentation/admin-guide/kernel-parameters.txt
111 +@@ -5965,6 +5965,10 @@
112 + This option is obsoleted by the "nopv" option, which
113 + has equivalent effect for XEN platform.
114 +
115 ++ xen_no_vector_callback
116 ++ [KNL,X86,XEN] Disable the vector callback for Xen
117 ++ event channel interrupts.
118 ++
119 + xen_scrub_pages= [XEN]
120 + Boolean option to control scrubbing pages before giving them back
121 + to Xen, for use by other domains. Can be also changed at runtime
122 +diff --git a/Makefile b/Makefile
123 +index 7d86ad6ad36cc..7a5d906f6ee36 100644
124 +--- a/Makefile
125 ++++ b/Makefile
126 +@@ -1,7 +1,7 @@
127 + # SPDX-License-Identifier: GPL-2.0
128 + VERSION = 5
129 + PATCHLEVEL = 10
130 +-SUBLEVEL = 10
131 ++SUBLEVEL = 11
132 + EXTRAVERSION =
133 + NAME = Kleptomaniac Octopus
134 +
135 +diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
136 +index 60e901cd0de6a..5a957a9a09843 100644
137 +--- a/arch/arm/xen/enlighten.c
138 ++++ b/arch/arm/xen/enlighten.c
139 +@@ -371,7 +371,7 @@ static int __init xen_guest_init(void)
140 + }
141 + gnttab_init();
142 + if (!xen_initial_domain())
143 +- xenbus_probe(NULL);
144 ++ xenbus_probe();
145 +
146 + /*
147 + * Making sure board specific code will not set up ops for
148 +diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
149 +index 015ddffaf6caa..b56a4b2bc2486 100644
150 +--- a/arch/arm64/include/asm/atomic.h
151 ++++ b/arch/arm64/include/asm/atomic.h
152 +@@ -17,7 +17,7 @@
153 + #include <asm/lse.h>
154 +
155 + #define ATOMIC_OP(op) \
156 +-static inline void arch_##op(int i, atomic_t *v) \
157 ++static __always_inline void arch_##op(int i, atomic_t *v) \
158 + { \
159 + __lse_ll_sc_body(op, i, v); \
160 + }
161 +@@ -32,7 +32,7 @@ ATOMIC_OP(atomic_sub)
162 + #undef ATOMIC_OP
163 +
164 + #define ATOMIC_FETCH_OP(name, op) \
165 +-static inline int arch_##op##name(int i, atomic_t *v) \
166 ++static __always_inline int arch_##op##name(int i, atomic_t *v) \
167 + { \
168 + return __lse_ll_sc_body(op##name, i, v); \
169 + }
170 +@@ -56,7 +56,7 @@ ATOMIC_FETCH_OPS(atomic_sub_return)
171 + #undef ATOMIC_FETCH_OPS
172 +
173 + #define ATOMIC64_OP(op) \
174 +-static inline void arch_##op(long i, atomic64_t *v) \
175 ++static __always_inline void arch_##op(long i, atomic64_t *v) \
176 + { \
177 + __lse_ll_sc_body(op, i, v); \
178 + }
179 +@@ -71,7 +71,7 @@ ATOMIC64_OP(atomic64_sub)
180 + #undef ATOMIC64_OP
181 +
182 + #define ATOMIC64_FETCH_OP(name, op) \
183 +-static inline long arch_##op##name(long i, atomic64_t *v) \
184 ++static __always_inline long arch_##op##name(long i, atomic64_t *v) \
185 + { \
186 + return __lse_ll_sc_body(op##name, i, v); \
187 + }
188 +@@ -94,7 +94,7 @@ ATOMIC64_FETCH_OPS(atomic64_sub_return)
189 + #undef ATOMIC64_FETCH_OP
190 + #undef ATOMIC64_FETCH_OPS
191 +
192 +-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
193 ++static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
194 + {
195 + return __lse_ll_sc_body(atomic64_dec_if_positive, v);
196 + }
197 +diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
198 +index a8184cad88907..50852992752b0 100644
199 +--- a/arch/arm64/kernel/signal.c
200 ++++ b/arch/arm64/kernel/signal.c
201 +@@ -914,13 +914,6 @@ static void do_signal(struct pt_regs *regs)
202 + asmlinkage void do_notify_resume(struct pt_regs *regs,
203 + unsigned long thread_flags)
204 + {
205 +- /*
206 +- * The assembly code enters us with IRQs off, but it hasn't
207 +- * informed the tracing code of that for efficiency reasons.
208 +- * Update the trace code with the current status.
209 +- */
210 +- trace_hardirqs_off();
211 +-
212 + do {
213 + /* Check valid user FS if needed */
214 + addr_limit_user_check();
215 +diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
216 +index f8f758e4a3064..6fa8cfb8232aa 100644
217 +--- a/arch/arm64/kernel/syscall.c
218 ++++ b/arch/arm64/kernel/syscall.c
219 +@@ -165,15 +165,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
220 + if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
221 + local_daif_mask();
222 + flags = current_thread_info()->flags;
223 +- if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
224 +- /*
225 +- * We're off to userspace, where interrupts are
226 +- * always enabled after we restore the flags from
227 +- * the SPSR.
228 +- */
229 +- trace_hardirqs_on();
230 ++ if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
231 + return;
232 +- }
233 + local_daif_restore(DAIF_PROCCTX);
234 + }
235 +
236 +diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
237 +index 1d32b174ab6ae..c1a8aac01cf91 100644
238 +--- a/arch/powerpc/include/asm/exception-64s.h
239 ++++ b/arch/powerpc/include/asm/exception-64s.h
240 +@@ -63,6 +63,12 @@
241 + nop; \
242 + nop;
243 +
244 ++#define SCV_ENTRY_FLUSH_SLOT \
245 ++ SCV_ENTRY_FLUSH_FIXUP_SECTION; \
246 ++ nop; \
247 ++ nop; \
248 ++ nop;
249 ++
250 + /*
251 + * r10 must be free to use, r13 must be paca
252 + */
253 +@@ -70,6 +76,13 @@
254 + STF_ENTRY_BARRIER_SLOT; \
255 + ENTRY_FLUSH_SLOT
256 +
257 ++/*
258 ++ * r10, ctr must be free to use, r13 must be paca
259 ++ */
260 ++#define SCV_INTERRUPT_TO_KERNEL \
261 ++ STF_ENTRY_BARRIER_SLOT; \
262 ++ SCV_ENTRY_FLUSH_SLOT
263 ++
264 + /*
265 + * Macros for annotating the expected destination of (h)rfid
266 + *
267 +diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
268 +index fbd406cd6916c..8d100059e266c 100644
269 +--- a/arch/powerpc/include/asm/feature-fixups.h
270 ++++ b/arch/powerpc/include/asm/feature-fixups.h
271 +@@ -221,6 +221,14 @@ label##3: \
272 + FTR_ENTRY_OFFSET 957b-958b; \
273 + .popsection;
274 +
275 ++#define SCV_ENTRY_FLUSH_FIXUP_SECTION \
276 ++957: \
277 ++ .pushsection __scv_entry_flush_fixup,"a"; \
278 ++ .align 2; \
279 ++958: \
280 ++ FTR_ENTRY_OFFSET 957b-958b; \
281 ++ .popsection;
282 ++
283 + #define RFI_FLUSH_FIXUP_SECTION \
284 + 951: \
285 + .pushsection __rfi_flush_fixup,"a"; \
286 +@@ -254,10 +262,12 @@ label##3: \
287 +
288 + extern long stf_barrier_fallback;
289 + extern long entry_flush_fallback;
290 ++extern long scv_entry_flush_fallback;
291 + extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup;
292 + extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup;
293 + extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup;
294 + extern long __start___entry_flush_fixup, __stop___entry_flush_fixup;
295 ++extern long __start___scv_entry_flush_fixup, __stop___scv_entry_flush_fixup;
296 + extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup;
297 + extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
298 + extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
299 +diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
300 +index 2f3846192ec7d..2831b0aa92b15 100644
301 +--- a/arch/powerpc/kernel/entry_64.S
302 ++++ b/arch/powerpc/kernel/entry_64.S
303 +@@ -75,7 +75,7 @@ BEGIN_FTR_SECTION
304 + bne .Ltabort_syscall
305 + END_FTR_SECTION_IFSET(CPU_FTR_TM)
306 + #endif
307 +- INTERRUPT_TO_KERNEL
308 ++ SCV_INTERRUPT_TO_KERNEL
309 + mr r10,r1
310 + ld r1,PACAKSAVE(r13)
311 + std r10,0(r1)
312 +diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
313 +index 4d01f09ecf808..3cde2fbd74fce 100644
314 +--- a/arch/powerpc/kernel/exceptions-64s.S
315 ++++ b/arch/powerpc/kernel/exceptions-64s.S
316 +@@ -2993,6 +2993,25 @@ TRAMP_REAL_BEGIN(entry_flush_fallback)
317 + ld r11,PACA_EXRFI+EX_R11(r13)
318 + blr
319 +
320 ++/*
321 ++ * The SCV entry flush happens with interrupts enabled, so it must disable
322 ++ * to prevent EXRFI being clobbered by NMIs (e.g., soft_nmi_common). r10
323 ++ * (containing LR) does not need to be preserved here because scv entry
324 ++ * puts 0 in the pt_regs, CTR can be clobbered for the same reason.
325 ++ */
326 ++TRAMP_REAL_BEGIN(scv_entry_flush_fallback)
327 ++ li r10,0
328 ++ mtmsrd r10,1
329 ++ lbz r10,PACAIRQHAPPENED(r13)
330 ++ ori r10,r10,PACA_IRQ_HARD_DIS
331 ++ stb r10,PACAIRQHAPPENED(r13)
332 ++ std r11,PACA_EXRFI+EX_R11(r13)
333 ++ L1D_DISPLACEMENT_FLUSH
334 ++ ld r11,PACA_EXRFI+EX_R11(r13)
335 ++ li r10,MSR_RI
336 ++ mtmsrd r10,1
337 ++ blr
338 ++
339 + TRAMP_REAL_BEGIN(rfi_flush_fallback)
340 + SET_SCRATCH0(r13);
341 + GET_PACA(r13);
342 +diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
343 +index f887f9d5b9e84..4a1f494ef03f3 100644
344 +--- a/arch/powerpc/kernel/vmlinux.lds.S
345 ++++ b/arch/powerpc/kernel/vmlinux.lds.S
346 +@@ -145,6 +145,13 @@ SECTIONS
347 + __stop___entry_flush_fixup = .;
348 + }
349 +
350 ++ . = ALIGN(8);
351 ++ __scv_entry_flush_fixup : AT(ADDR(__scv_entry_flush_fixup) - LOAD_OFFSET) {
352 ++ __start___scv_entry_flush_fixup = .;
353 ++ *(__scv_entry_flush_fixup)
354 ++ __stop___scv_entry_flush_fixup = .;
355 ++ }
356 ++
357 + . = ALIGN(8);
358 + __stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) {
359 + __start___stf_exit_barrier_fixup = .;
360 +@@ -187,6 +194,12 @@ SECTIONS
361 + .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
362 + _sinittext = .;
363 + INIT_TEXT
364 ++
365 ++ /*
366 ++ *.init.text might be RO so we must ensure this section ends on
367 ++ * a page boundary.
368 ++ */
369 ++ . = ALIGN(PAGE_SIZE);
370 + _einittext = .;
371 + #ifdef CONFIG_PPC64
372 + *(.tramp.ftrace.init);
373 +@@ -200,21 +213,9 @@ SECTIONS
374 + EXIT_TEXT
375 + }
376 +
377 +- .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
378 +- INIT_DATA
379 +- }
380 +-
381 +- .init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
382 +- INIT_SETUP(16)
383 +- }
384 +-
385 +- .initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
386 +- INIT_CALLS
387 +- }
388 ++ . = ALIGN(PAGE_SIZE);
389 +
390 +- .con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
391 +- CON_INITCALL
392 +- }
393 ++ INIT_DATA_SECTION(16)
394 +
395 + . = ALIGN(8);
396 + __ftr_fixup : AT(ADDR(__ftr_fixup) - LOAD_OFFSET) {
397 +@@ -242,9 +243,6 @@ SECTIONS
398 + __stop___fw_ftr_fixup = .;
399 + }
400 + #endif
401 +- .init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
402 +- INIT_RAM_FS
403 +- }
404 +
405 + PERCPU_SECTION(L1_CACHE_BYTES)
406 +
407 +diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
408 +index 321c12a9ef6b8..92705d6dfb6e0 100644
409 +--- a/arch/powerpc/lib/feature-fixups.c
410 ++++ b/arch/powerpc/lib/feature-fixups.c
411 +@@ -290,9 +290,6 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
412 + long *start, *end;
413 + int i;
414 +
415 +- start = PTRRELOC(&__start___entry_flush_fixup);
416 +- end = PTRRELOC(&__stop___entry_flush_fixup);
417 +-
418 + instrs[0] = 0x60000000; /* nop */
419 + instrs[1] = 0x60000000; /* nop */
420 + instrs[2] = 0x60000000; /* nop */
421 +@@ -312,6 +309,8 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
422 + if (types & L1D_FLUSH_MTTRIG)
423 + instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */
424 +
425 ++ start = PTRRELOC(&__start___entry_flush_fixup);
426 ++ end = PTRRELOC(&__stop___entry_flush_fixup);
427 + for (i = 0; start < end; start++, i++) {
428 + dest = (void *)start + *start;
429 +
430 +@@ -328,6 +327,25 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
431 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
432 + }
433 +
434 ++ start = PTRRELOC(&__start___scv_entry_flush_fixup);
435 ++ end = PTRRELOC(&__stop___scv_entry_flush_fixup);
436 ++ for (; start < end; start++, i++) {
437 ++ dest = (void *)start + *start;
438 ++
439 ++ pr_devel("patching dest %lx\n", (unsigned long)dest);
440 ++
441 ++ patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0]));
442 ++
443 ++ if (types == L1D_FLUSH_FALLBACK)
444 ++ patch_branch((struct ppc_inst *)(dest + 1), (unsigned long)&scv_entry_flush_fallback,
445 ++ BRANCH_SET_LINK);
446 ++ else
447 ++ patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1]));
448 ++
449 ++ patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
450 ++ }
451 ++
452 ++
453 + printk(KERN_DEBUG "entry-flush: patched %d locations (%s flush)\n", i,
454 + (types == L1D_FLUSH_NONE) ? "no" :
455 + (types == L1D_FLUSH_FALLBACK) ? "fallback displacement" :
456 +diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
457 +index 44377fd7860e4..234a21d26f674 100644
458 +--- a/arch/riscv/Kconfig
459 ++++ b/arch/riscv/Kconfig
460 +@@ -134,7 +134,7 @@ config PA_BITS
461 +
462 + config PAGE_OFFSET
463 + hex
464 +- default 0xC0000000 if 32BIT && MAXPHYSMEM_2GB
465 ++ default 0xC0000000 if 32BIT && MAXPHYSMEM_1GB
466 + default 0x80000000 if 64BIT && !MMU
467 + default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB
468 + default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB
469 +@@ -247,10 +247,12 @@ config MODULE_SECTIONS
470 +
471 + choice
472 + prompt "Maximum Physical Memory"
473 +- default MAXPHYSMEM_2GB if 32BIT
474 ++ default MAXPHYSMEM_1GB if 32BIT
475 + default MAXPHYSMEM_2GB if 64BIT && CMODEL_MEDLOW
476 + default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY
477 +
478 ++ config MAXPHYSMEM_1GB
479 ++ bool "1GiB"
480 + config MAXPHYSMEM_2GB
481 + bool "2GiB"
482 + config MAXPHYSMEM_128GB
483 +diff --git a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
484 +index 4a2729f5ca3f0..24d75a146e02d 100644
485 +--- a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
486 ++++ b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
487 +@@ -88,7 +88,9 @@
488 + phy-mode = "gmii";
489 + phy-handle = <&phy0>;
490 + phy0: ethernet-phy@0 {
491 ++ compatible = "ethernet-phy-id0007.0771";
492 + reg = <0>;
493 ++ reset-gpios = <&gpio 12 GPIO_ACTIVE_LOW>;
494 + };
495 + };
496 +
497 +diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
498 +index d222d353d86d4..8c3d1e4517031 100644
499 +--- a/arch/riscv/configs/defconfig
500 ++++ b/arch/riscv/configs/defconfig
501 +@@ -64,6 +64,8 @@ CONFIG_HW_RANDOM=y
502 + CONFIG_HW_RANDOM_VIRTIO=y
503 + CONFIG_SPI=y
504 + CONFIG_SPI_SIFIVE=y
505 ++CONFIG_GPIOLIB=y
506 ++CONFIG_GPIO_SIFIVE=y
507 + # CONFIG_PTP_1588_CLOCK is not set
508 + CONFIG_POWER_RESET=y
509 + CONFIG_DRM=y
510 +diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
511 +index de59dd457b415..d867813570442 100644
512 +--- a/arch/riscv/kernel/cacheinfo.c
513 ++++ b/arch/riscv/kernel/cacheinfo.c
514 +@@ -26,7 +26,16 @@ cache_get_priv_group(struct cacheinfo *this_leaf)
515 +
516 + static struct cacheinfo *get_cacheinfo(u32 level, enum cache_type type)
517 + {
518 +- struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(smp_processor_id());
519 ++ /*
520 ++ * Using raw_smp_processor_id() elides a preemptability check, but this
521 ++ * is really indicative of a larger problem: the cacheinfo UABI assumes
522 ++ * that cores have a homonogenous view of the cache hierarchy. That
523 ++ * happens to be the case for the current set of RISC-V systems, but
524 ++ * likely won't be true in general. Since there's no way to provide
525 ++ * correct information for these systems via the current UABI we're
526 ++ * just eliding the check for now.
527 ++ */
528 ++ struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(raw_smp_processor_id());
529 + struct cacheinfo *this_leaf;
530 + int index;
531 +
532 +diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
533 +index 835e45bb59c40..744f3209c48d0 100644
534 +--- a/arch/riscv/kernel/entry.S
535 ++++ b/arch/riscv/kernel/entry.S
536 +@@ -155,6 +155,15 @@ skip_context_tracking:
537 + tail do_trap_unknown
538 +
539 + handle_syscall:
540 ++#ifdef CONFIG_RISCV_M_MODE
541 ++ /*
542 ++ * When running is M-Mode (no MMU config), MPIE does not get set.
543 ++ * As a result, we need to force enable interrupts here because
544 ++ * handle_exception did not do set SR_IE as it always sees SR_PIE
545 ++ * being cleared.
546 ++ */
547 ++ csrs CSR_STATUS, SR_IE
548 ++#endif
549 + #if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING)
550 + /* Recover a0 - a7 for system calls */
551 + REG_L a0, PT_A0(sp)
552 +diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
553 +index 4d3a1048ad8b1..8a5cf99c07762 100644
554 +--- a/arch/riscv/kernel/time.c
555 ++++ b/arch/riscv/kernel/time.c
556 +@@ -4,6 +4,7 @@
557 + * Copyright (C) 2017 SiFive
558 + */
559 +
560 ++#include <linux/of_clk.h>
561 + #include <linux/clocksource.h>
562 + #include <linux/delay.h>
563 + #include <asm/sbi.h>
564 +@@ -24,6 +25,8 @@ void __init time_init(void)
565 + riscv_timebase = prop;
566 +
567 + lpj_fine = riscv_timebase / HZ;
568 ++
569 ++ of_clk_init(NULL);
570 + timer_probe();
571 + }
572 +
573 +diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
574 +index e4133c20744ce..608082fb9a6c6 100644
575 +--- a/arch/riscv/mm/init.c
576 ++++ b/arch/riscv/mm/init.c
577 +@@ -155,9 +155,10 @@ disable:
578 + void __init setup_bootmem(void)
579 + {
580 + phys_addr_t mem_start = 0;
581 +- phys_addr_t start, end = 0;
582 ++ phys_addr_t start, dram_end, end = 0;
583 + phys_addr_t vmlinux_end = __pa_symbol(&_end);
584 + phys_addr_t vmlinux_start = __pa_symbol(&_start);
585 ++ phys_addr_t max_mapped_addr = __pa(~(ulong)0);
586 + u64 i;
587 +
588 + /* Find the memory region containing the kernel */
589 +@@ -179,7 +180,18 @@ void __init setup_bootmem(void)
590 + /* Reserve from the start of the kernel to the end of the kernel */
591 + memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
592 +
593 +- max_pfn = PFN_DOWN(memblock_end_of_DRAM());
594 ++ dram_end = memblock_end_of_DRAM();
595 ++
596 ++ /*
597 ++ * memblock allocator is not aware of the fact that last 4K bytes of
598 ++ * the addressable memory can not be mapped because of IS_ERR_VALUE
599 ++ * macro. Make sure that last 4k bytes are not usable by memblock
600 ++ * if end of dram is equal to maximum addressable memory.
601 ++ */
602 ++ if (max_mapped_addr == (dram_end - 1))
603 ++ memblock_set_current_limit(max_mapped_addr - 4096);
604 ++
605 ++ max_pfn = PFN_DOWN(dram_end);
606 + max_low_pfn = max_pfn;
607 + set_max_mapnr(max_low_pfn);
608 +
609 +diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
610 +index 159da4ed578f2..b6f3d49991d37 100644
611 +--- a/arch/sh/Kconfig
612 ++++ b/arch/sh/Kconfig
613 +@@ -30,7 +30,6 @@ config SUPERH
614 + select HAVE_ARCH_KGDB
615 + select HAVE_ARCH_SECCOMP_FILTER
616 + select HAVE_ARCH_TRACEHOOK
617 +- select HAVE_COPY_THREAD_TLS
618 + select HAVE_DEBUG_BUGVERBOSE
619 + select HAVE_DEBUG_KMEMLEAK
620 + select HAVE_DYNAMIC_FTRACE
621 +diff --git a/arch/sh/drivers/dma/Kconfig b/arch/sh/drivers/dma/Kconfig
622 +index d0de378beefe5..7d54f284ce10f 100644
623 +--- a/arch/sh/drivers/dma/Kconfig
624 ++++ b/arch/sh/drivers/dma/Kconfig
625 +@@ -63,8 +63,7 @@ config PVR2_DMA
626 +
627 + config G2_DMA
628 + tristate "G2 Bus DMA support"
629 +- depends on SH_DREAMCAST
630 +- select SH_DMA_API
631 ++ depends on SH_DREAMCAST && SH_DMA_API
632 + help
633 + This enables support for the DMA controller for the Dreamcast's
634 + G2 bus. Drivers that want this will generally enable this on
635 +diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
636 +index 870efeec8bdac..94c6e6330e043 100644
637 +--- a/arch/x86/entry/common.c
638 ++++ b/arch/x86/entry/common.c
639 +@@ -73,10 +73,8 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs,
640 + unsigned int nr)
641 + {
642 + if (likely(nr < IA32_NR_syscalls)) {
643 +- instrumentation_begin();
644 + nr = array_index_nospec(nr, IA32_NR_syscalls);
645 + regs->ax = ia32_sys_call_table[nr](regs);
646 +- instrumentation_end();
647 + }
648 + }
649 +
650 +@@ -91,8 +89,11 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
651 + * or may not be necessary, but it matches the old asm behavior.
652 + */
653 + nr = (unsigned int)syscall_enter_from_user_mode(regs, nr);
654 ++ instrumentation_begin();
655 +
656 + do_syscall_32_irqs_on(regs, nr);
657 ++
658 ++ instrumentation_end();
659 + syscall_exit_to_user_mode(regs);
660 + }
661 +
662 +@@ -121,11 +122,12 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
663 + res = get_user(*(u32 *)&regs->bp,
664 + (u32 __user __force *)(unsigned long)(u32)regs->sp);
665 + }
666 +- instrumentation_end();
667 +
668 + if (res) {
669 + /* User code screwed up. */
670 + regs->ax = -EFAULT;
671 ++
672 ++ instrumentation_end();
673 + syscall_exit_to_user_mode(regs);
674 + return false;
675 + }
676 +@@ -135,6 +137,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
677 +
678 + /* Now this is just like a normal syscall. */
679 + do_syscall_32_irqs_on(regs, nr);
680 ++
681 ++ instrumentation_end();
682 + syscall_exit_to_user_mode(regs);
683 + return true;
684 + }
685 +diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
686 +index 6fb8cb7b9bcc6..6375967a8244d 100644
687 +--- a/arch/x86/hyperv/hv_init.c
688 ++++ b/arch/x86/hyperv/hv_init.c
689 +@@ -16,6 +16,7 @@
690 + #include <asm/hyperv-tlfs.h>
691 + #include <asm/mshyperv.h>
692 + #include <asm/idtentry.h>
693 ++#include <linux/kexec.h>
694 + #include <linux/version.h>
695 + #include <linux/vmalloc.h>
696 + #include <linux/mm.h>
697 +@@ -26,6 +27,8 @@
698 + #include <linux/syscore_ops.h>
699 + #include <clocksource/hyperv_timer.h>
700 +
701 ++int hyperv_init_cpuhp;
702 ++
703 + void *hv_hypercall_pg;
704 + EXPORT_SYMBOL_GPL(hv_hypercall_pg);
705 +
706 +@@ -424,6 +427,7 @@ void __init hyperv_init(void)
707 +
708 + register_syscore_ops(&hv_syscore_ops);
709 +
710 ++ hyperv_init_cpuhp = cpuhp;
711 + return;
712 +
713 + remove_cpuhp_state:
714 +diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
715 +index dcd9503b10983..38f4936045ab6 100644
716 +--- a/arch/x86/include/asm/fpu/api.h
717 ++++ b/arch/x86/include/asm/fpu/api.h
718 +@@ -16,14 +16,25 @@
719 + * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It
720 + * disables preemption so be careful if you intend to use it for long periods
721 + * of time.
722 +- * If you intend to use the FPU in softirq you need to check first with
723 ++ * If you intend to use the FPU in irq/softirq you need to check first with
724 + * irq_fpu_usable() if it is possible.
725 + */
726 +-extern void kernel_fpu_begin(void);
727 ++
728 ++/* Kernel FPU states to initialize in kernel_fpu_begin_mask() */
729 ++#define KFPU_387 _BITUL(0) /* 387 state will be initialized */
730 ++#define KFPU_MXCSR _BITUL(1) /* MXCSR will be initialized */
731 ++
732 ++extern void kernel_fpu_begin_mask(unsigned int kfpu_mask);
733 + extern void kernel_fpu_end(void);
734 + extern bool irq_fpu_usable(void);
735 + extern void fpregs_mark_activate(void);
736 +
737 ++/* Code that is unaware of kernel_fpu_begin_mask() can use this */
738 ++static inline void kernel_fpu_begin(void)
739 ++{
740 ++ kernel_fpu_begin_mask(KFPU_387 | KFPU_MXCSR);
741 ++}
742 ++
743 + /*
744 + * Use fpregs_lock() while editing CPU's FPU registers or fpu->state.
745 + * A context switch will (and softirq might) save CPU's FPU registers to
746 +diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
747 +index ffc289992d1b0..30f76b9668579 100644
748 +--- a/arch/x86/include/asm/mshyperv.h
749 ++++ b/arch/x86/include/asm/mshyperv.h
750 +@@ -74,6 +74,8 @@ static inline void hv_disable_stimer0_percpu_irq(int irq) {}
751 +
752 +
753 + #if IS_ENABLED(CONFIG_HYPERV)
754 ++extern int hyperv_init_cpuhp;
755 ++
756 + extern void *hv_hypercall_pg;
757 + extern void __percpu **hyperv_pcpu_input_arg;
758 +
759 +diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
760 +index f4234575f3fdb..1f6caceccbb02 100644
761 +--- a/arch/x86/include/asm/topology.h
762 ++++ b/arch/x86/include/asm/topology.h
763 +@@ -110,6 +110,8 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu);
764 + #define topology_die_id(cpu) (cpu_data(cpu).cpu_die_id)
765 + #define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id)
766 +
767 ++extern unsigned int __max_die_per_package;
768 ++
769 + #ifdef CONFIG_SMP
770 + #define topology_die_cpumask(cpu) (per_cpu(cpu_die_map, cpu))
771 + #define topology_core_cpumask(cpu) (per_cpu(cpu_core_map, cpu))
772 +@@ -118,8 +120,6 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu);
773 + extern unsigned int __max_logical_packages;
774 + #define topology_max_packages() (__max_logical_packages)
775 +
776 +-extern unsigned int __max_die_per_package;
777 +-
778 + static inline int topology_max_die_per_package(void)
779 + {
780 + return __max_die_per_package;
781 +diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
782 +index 2f1fbd8150af7..a2551b10780c6 100644
783 +--- a/arch/x86/kernel/cpu/amd.c
784 ++++ b/arch/x86/kernel/cpu/amd.c
785 +@@ -569,12 +569,12 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
786 + u32 ecx;
787 +
788 + ecx = cpuid_ecx(0x8000001e);
789 +- nodes_per_socket = ((ecx >> 8) & 7) + 1;
790 ++ __max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1;
791 + } else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
792 + u64 value;
793 +
794 + rdmsrl(MSR_FAM10H_NODE_ID, value);
795 +- nodes_per_socket = ((value >> 3) & 7) + 1;
796 ++ __max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
797 + }
798 +
799 + if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
800 +diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
801 +index 05ef1f4550cbd..6cc50ab07bded 100644
802 +--- a/arch/x86/kernel/cpu/mshyperv.c
803 ++++ b/arch/x86/kernel/cpu/mshyperv.c
804 +@@ -135,14 +135,32 @@ static void hv_machine_shutdown(void)
805 + {
806 + if (kexec_in_progress && hv_kexec_handler)
807 + hv_kexec_handler();
808 ++
809 ++ /*
810 ++ * Call hv_cpu_die() on all the CPUs, otherwise later the hypervisor
811 ++ * corrupts the old VP Assist Pages and can crash the kexec kernel.
812 ++ */
813 ++ if (kexec_in_progress && hyperv_init_cpuhp > 0)
814 ++ cpuhp_remove_state(hyperv_init_cpuhp);
815 ++
816 ++ /* The function calls stop_other_cpus(). */
817 + native_machine_shutdown();
818 ++
819 ++ /* Disable the hypercall page when there is only 1 active CPU. */
820 ++ if (kexec_in_progress)
821 ++ hyperv_cleanup();
822 + }
823 +
824 + static void hv_machine_crash_shutdown(struct pt_regs *regs)
825 + {
826 + if (hv_crash_handler)
827 + hv_crash_handler(regs);
828 ++
829 ++ /* The function calls crash_smp_send_stop(). */
830 + native_machine_crash_shutdown(regs);
831 ++
832 ++ /* Disable the hypercall page when there is only 1 active CPU. */
833 ++ hyperv_cleanup();
834 + }
835 + #endif /* CONFIG_KEXEC_CORE */
836 + #endif /* CONFIG_HYPERV */
837 +diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
838 +index d3a0791bc052a..91288da295995 100644
839 +--- a/arch/x86/kernel/cpu/topology.c
840 ++++ b/arch/x86/kernel/cpu/topology.c
841 +@@ -25,10 +25,10 @@
842 + #define BITS_SHIFT_NEXT_LEVEL(eax) ((eax) & 0x1f)
843 + #define LEVEL_MAX_SIBLINGS(ebx) ((ebx) & 0xffff)
844 +
845 +-#ifdef CONFIG_SMP
846 + unsigned int __max_die_per_package __read_mostly = 1;
847 + EXPORT_SYMBOL(__max_die_per_package);
848 +
849 ++#ifdef CONFIG_SMP
850 + /*
851 + * Check if given CPUID extended toplogy "leaf" is implemented
852 + */
853 +diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
854 +index eb86a2b831b15..571220ac8beaa 100644
855 +--- a/arch/x86/kernel/fpu/core.c
856 ++++ b/arch/x86/kernel/fpu/core.c
857 +@@ -121,7 +121,7 @@ int copy_fpregs_to_fpstate(struct fpu *fpu)
858 + }
859 + EXPORT_SYMBOL(copy_fpregs_to_fpstate);
860 +
861 +-void kernel_fpu_begin(void)
862 ++void kernel_fpu_begin_mask(unsigned int kfpu_mask)
863 + {
864 + preempt_disable();
865 +
866 +@@ -141,13 +141,14 @@ void kernel_fpu_begin(void)
867 + }
868 + __cpu_invalidate_fpregs_state();
869 +
870 +- if (boot_cpu_has(X86_FEATURE_XMM))
871 ++ /* Put sane initial values into the control registers. */
872 ++ if (likely(kfpu_mask & KFPU_MXCSR) && boot_cpu_has(X86_FEATURE_XMM))
873 + ldmxcsr(MXCSR_DEFAULT);
874 +
875 +- if (boot_cpu_has(X86_FEATURE_FPU))
876 ++ if (unlikely(kfpu_mask & KFPU_387) && boot_cpu_has(X86_FEATURE_FPU))
877 + asm volatile ("fninit");
878 + }
879 +-EXPORT_SYMBOL_GPL(kernel_fpu_begin);
880 ++EXPORT_SYMBOL_GPL(kernel_fpu_begin_mask);
881 +
882 + void kernel_fpu_end(void)
883 + {
884 +diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
885 +index 84f581c91db45..098015b739993 100644
886 +--- a/arch/x86/kernel/setup.c
887 ++++ b/arch/x86/kernel/setup.c
888 +@@ -665,17 +665,6 @@ static void __init trim_platform_memory_ranges(void)
889 +
890 + static void __init trim_bios_range(void)
891 + {
892 +- /*
893 +- * A special case is the first 4Kb of memory;
894 +- * This is a BIOS owned area, not kernel ram, but generally
895 +- * not listed as such in the E820 table.
896 +- *
897 +- * This typically reserves additional memory (64KiB by default)
898 +- * since some BIOSes are known to corrupt low memory. See the
899 +- * Kconfig help text for X86_RESERVE_LOW.
900 +- */
901 +- e820__range_update(0, PAGE_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED);
902 +-
903 + /*
904 + * special case: Some BIOSes report the PC BIOS
905 + * area (640Kb -> 1Mb) as RAM even though it is not.
906 +@@ -733,6 +722,15 @@ early_param("reservelow", parse_reservelow);
907 +
908 + static void __init trim_low_memory_range(void)
909 + {
910 ++ /*
911 ++ * A special case is the first 4Kb of memory;
912 ++ * This is a BIOS owned area, not kernel ram, but generally
913 ++ * not listed as such in the E820 table.
914 ++ *
915 ++ * This typically reserves additional memory (64KiB by default)
916 ++ * since some BIOSes are known to corrupt low memory. See the
917 ++ * Kconfig help text for X86_RESERVE_LOW.
918 ++ */
919 + memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
920 + }
921 +
922 +diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
923 +index 0bd1a0fc587e0..84c1821819afb 100644
924 +--- a/arch/x86/kernel/sev-es.c
925 ++++ b/arch/x86/kernel/sev-es.c
926 +@@ -225,7 +225,7 @@ static inline u64 sev_es_rd_ghcb_msr(void)
927 + return __rdmsr(MSR_AMD64_SEV_ES_GHCB);
928 + }
929 +
930 +-static inline void sev_es_wr_ghcb_msr(u64 val)
931 ++static __always_inline void sev_es_wr_ghcb_msr(u64 val)
932 + {
933 + u32 low, high;
934 +
935 +@@ -286,6 +286,12 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
936 + u16 d2;
937 + u8 d1;
938 +
939 ++ /* If instruction ran in kernel mode and the I/O buffer is in kernel space */
940 ++ if (!user_mode(ctxt->regs) && !access_ok(target, size)) {
941 ++ memcpy(dst, buf, size);
942 ++ return ES_OK;
943 ++ }
944 ++
945 + switch (size) {
946 + case 1:
947 + memcpy(&d1, buf, 1);
948 +@@ -335,6 +341,12 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
949 + u16 d2;
950 + u8 d1;
951 +
952 ++ /* If instruction ran in kernel mode and the I/O buffer is in kernel space */
953 ++ if (!user_mode(ctxt->regs) && !access_ok(s, size)) {
954 ++ memcpy(buf, src, size);
955 ++ return ES_OK;
956 ++ }
957 ++
958 + switch (size) {
959 + case 1:
960 + if (get_user(d1, s))
961 +diff --git a/arch/x86/lib/mmx_32.c b/arch/x86/lib/mmx_32.c
962 +index 4321fa02e18df..419365c48b2ad 100644
963 +--- a/arch/x86/lib/mmx_32.c
964 ++++ b/arch/x86/lib/mmx_32.c
965 +@@ -26,6 +26,16 @@
966 + #include <asm/fpu/api.h>
967 + #include <asm/asm.h>
968 +
969 ++/*
970 ++ * Use KFPU_387. MMX instructions are not affected by MXCSR,
971 ++ * but both AMD and Intel documentation states that even integer MMX
972 ++ * operations will result in #MF if an exception is pending in FCW.
973 ++ *
974 ++ * EMMS is not needed afterwards because, after calling kernel_fpu_end(),
975 ++ * any subsequent user of the 387 stack will reinitialize it using
976 ++ * KFPU_387.
977 ++ */
978 ++
979 + void *_mmx_memcpy(void *to, const void *from, size_t len)
980 + {
981 + void *p;
982 +@@ -37,7 +47,7 @@ void *_mmx_memcpy(void *to, const void *from, size_t len)
983 + p = to;
984 + i = len >> 6; /* len/64 */
985 +
986 +- kernel_fpu_begin();
987 ++ kernel_fpu_begin_mask(KFPU_387);
988 +
989 + __asm__ __volatile__ (
990 + "1: prefetch (%0)\n" /* This set is 28 bytes */
991 +@@ -127,7 +137,7 @@ static void fast_clear_page(void *page)
992 + {
993 + int i;
994 +
995 +- kernel_fpu_begin();
996 ++ kernel_fpu_begin_mask(KFPU_387);
997 +
998 + __asm__ __volatile__ (
999 + " pxor %%mm0, %%mm0\n" : :
1000 +@@ -160,7 +170,7 @@ static void fast_copy_page(void *to, void *from)
1001 + {
1002 + int i;
1003 +
1004 +- kernel_fpu_begin();
1005 ++ kernel_fpu_begin_mask(KFPU_387);
1006 +
1007 + /*
1008 + * maybe the prefetch stuff can go before the expensive fnsave...
1009 +@@ -247,7 +257,7 @@ static void fast_clear_page(void *page)
1010 + {
1011 + int i;
1012 +
1013 +- kernel_fpu_begin();
1014 ++ kernel_fpu_begin_mask(KFPU_387);
1015 +
1016 + __asm__ __volatile__ (
1017 + " pxor %%mm0, %%mm0\n" : :
1018 +@@ -282,7 +292,7 @@ static void fast_copy_page(void *to, void *from)
1019 + {
1020 + int i;
1021 +
1022 +- kernel_fpu_begin();
1023 ++ kernel_fpu_begin_mask(KFPU_387);
1024 +
1025 + __asm__ __volatile__ (
1026 + "1: prefetch (%0)\n"
1027 +diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
1028 +index 9e87ab010c82b..ec50b7423a4c8 100644
1029 +--- a/arch/x86/xen/enlighten_hvm.c
1030 ++++ b/arch/x86/xen/enlighten_hvm.c
1031 +@@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
1032 + return 0;
1033 + }
1034 +
1035 ++static bool no_vector_callback __initdata;
1036 ++
1037 + static void __init xen_hvm_guest_init(void)
1038 + {
1039 + if (xen_pv_domain())
1040 +@@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void)
1041 +
1042 + xen_panic_handler_init();
1043 +
1044 +- if (xen_feature(XENFEAT_hvm_callback_vector))
1045 ++ if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
1046 + xen_have_vector_callback = 1;
1047 +
1048 + xen_hvm_smp_init();
1049 +@@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg)
1050 + }
1051 + early_param("xen_nopv", xen_parse_nopv);
1052 +
1053 ++static __init int xen_parse_no_vector_callback(char *arg)
1054 ++{
1055 ++ no_vector_callback = true;
1056 ++ return 0;
1057 ++}
1058 ++early_param("xen_no_vector_callback", xen_parse_no_vector_callback);
1059 ++
1060 + bool __init xen_hvm_need_lapic(void)
1061 + {
1062 + if (xen_pv_domain())
1063 +diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
1064 +index f5e7db4f82abb..6ff3c887e0b99 100644
1065 +--- a/arch/x86/xen/smp_hvm.c
1066 ++++ b/arch/x86/xen/smp_hvm.c
1067 +@@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
1068 + int cpu;
1069 +
1070 + native_smp_prepare_cpus(max_cpus);
1071 +- WARN_ON(xen_smp_intr_init(0));
1072 +
1073 +- xen_init_lock_cpu(0);
1074 ++ if (xen_have_vector_callback) {
1075 ++ WARN_ON(xen_smp_intr_init(0));
1076 ++ xen_init_lock_cpu(0);
1077 ++ }
1078 +
1079 + for_each_possible_cpu(cpu) {
1080 + if (cpu == 0)
1081 +@@ -50,9 +52,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
1082 + static void xen_hvm_cpu_die(unsigned int cpu)
1083 + {
1084 + if (common_cpu_die(cpu) == 0) {
1085 +- xen_smp_intr_free(cpu);
1086 +- xen_uninit_lock_cpu(cpu);
1087 +- xen_teardown_timer(cpu);
1088 ++ if (xen_have_vector_callback) {
1089 ++ xen_smp_intr_free(cpu);
1090 ++ xen_uninit_lock_cpu(cpu);
1091 ++ xen_teardown_timer(cpu);
1092 ++ }
1093 + }
1094 + }
1095 + #else
1096 +@@ -64,14 +68,19 @@ static void xen_hvm_cpu_die(unsigned int cpu)
1097 +
1098 + void __init xen_hvm_smp_init(void)
1099 + {
1100 +- if (!xen_have_vector_callback)
1101 ++ smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
1102 ++ smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
1103 ++ smp_ops.smp_cpus_done = xen_smp_cpus_done;
1104 ++ smp_ops.cpu_die = xen_hvm_cpu_die;
1105 ++
1106 ++ if (!xen_have_vector_callback) {
1107 ++#ifdef CONFIG_PARAVIRT_SPINLOCKS
1108 ++ nopvspin = true;
1109 ++#endif
1110 + return;
1111 ++ }
1112 +
1113 +- smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
1114 + smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
1115 +- smp_ops.cpu_die = xen_hvm_cpu_die;
1116 + smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
1117 + smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
1118 +- smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
1119 +- smp_ops.smp_cpus_done = xen_smp_cpus_done;
1120 + }
1121 +diff --git a/crypto/xor.c b/crypto/xor.c
1122 +index eacbf4f939900..8f899f898ec9f 100644
1123 +--- a/crypto/xor.c
1124 ++++ b/crypto/xor.c
1125 +@@ -107,6 +107,8 @@ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
1126 + preempt_enable();
1127 +
1128 + // bytes/ns == GB/s, multiply by 1000 to get MB/s [not MiB/s]
1129 ++ if (!min)
1130 ++ min = 1;
1131 + speed = (1000 * REPS * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
1132 + tmpl->speed = speed;
1133 +
1134 +diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
1135 +index f23ef508fe88c..dca5cc423cd41 100644
1136 +--- a/drivers/acpi/scan.c
1137 ++++ b/drivers/acpi/scan.c
1138 +@@ -586,6 +586,8 @@ static int acpi_get_device_data(acpi_handle handle, struct acpi_device **device,
1139 + if (!device)
1140 + return -EINVAL;
1141 +
1142 ++ *device = NULL;
1143 ++
1144 + status = acpi_get_data_full(handle, acpi_scan_drop_device,
1145 + (void **)device, callback);
1146 + if (ACPI_FAILURE(status) || !*device) {
1147 +diff --git a/drivers/base/core.c b/drivers/base/core.c
1148 +index a6187f6380d8d..96f73aaf71da3 100644
1149 +--- a/drivers/base/core.c
1150 ++++ b/drivers/base/core.c
1151 +@@ -115,6 +115,16 @@ int device_links_read_lock_held(void)
1152 + #endif
1153 + #endif /* !CONFIG_SRCU */
1154 +
1155 ++static bool device_is_ancestor(struct device *dev, struct device *target)
1156 ++{
1157 ++ while (target->parent) {
1158 ++ target = target->parent;
1159 ++ if (dev == target)
1160 ++ return true;
1161 ++ }
1162 ++ return false;
1163 ++}
1164 ++
1165 + /**
1166 + * device_is_dependent - Check if one device depends on another one
1167 + * @dev: Device to check dependencies for.
1168 +@@ -128,7 +138,12 @@ int device_is_dependent(struct device *dev, void *target)
1169 + struct device_link *link;
1170 + int ret;
1171 +
1172 +- if (dev == target)
1173 ++ /*
1174 ++ * The "ancestors" check is needed to catch the case when the target
1175 ++ * device has not been completely initialized yet and it is still
1176 ++ * missing from the list of children of its parent device.
1177 ++ */
1178 ++ if (dev == target || device_is_ancestor(dev, target))
1179 + return 1;
1180 +
1181 + ret = device_for_each_child(dev, target, device_is_dependent);
1182 +@@ -363,7 +378,9 @@ static int devlink_add_symlinks(struct device *dev,
1183 + struct device *con = link->consumer;
1184 + char *buf;
1185 +
1186 +- len = max(strlen(dev_name(sup)), strlen(dev_name(con)));
1187 ++ len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)),
1188 ++ strlen(dev_bus_name(con)) + strlen(dev_name(con)));
1189 ++ len += strlen(":");
1190 + len += strlen("supplier:") + 1;
1191 + buf = kzalloc(len, GFP_KERNEL);
1192 + if (!buf)
1193 +@@ -377,12 +394,12 @@ static int devlink_add_symlinks(struct device *dev,
1194 + if (ret)
1195 + goto err_con;
1196 +
1197 +- snprintf(buf, len, "consumer:%s", dev_name(con));
1198 ++ snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
1199 + ret = sysfs_create_link(&sup->kobj, &link->link_dev.kobj, buf);
1200 + if (ret)
1201 + goto err_con_dev;
1202 +
1203 +- snprintf(buf, len, "supplier:%s", dev_name(sup));
1204 ++ snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
1205 + ret = sysfs_create_link(&con->kobj, &link->link_dev.kobj, buf);
1206 + if (ret)
1207 + goto err_sup_dev;
1208 +@@ -390,7 +407,7 @@ static int devlink_add_symlinks(struct device *dev,
1209 + goto out;
1210 +
1211 + err_sup_dev:
1212 +- snprintf(buf, len, "consumer:%s", dev_name(con));
1213 ++ snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
1214 + sysfs_remove_link(&sup->kobj, buf);
1215 + err_con_dev:
1216 + sysfs_remove_link(&link->link_dev.kobj, "consumer");
1217 +@@ -413,7 +430,9 @@ static void devlink_remove_symlinks(struct device *dev,
1218 + sysfs_remove_link(&link->link_dev.kobj, "consumer");
1219 + sysfs_remove_link(&link->link_dev.kobj, "supplier");
1220 +
1221 +- len = max(strlen(dev_name(sup)), strlen(dev_name(con)));
1222 ++ len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)),
1223 ++ strlen(dev_bus_name(con)) + strlen(dev_name(con)));
1224 ++ len += strlen(":");
1225 + len += strlen("supplier:") + 1;
1226 + buf = kzalloc(len, GFP_KERNEL);
1227 + if (!buf) {
1228 +@@ -421,9 +440,9 @@ static void devlink_remove_symlinks(struct device *dev,
1229 + return;
1230 + }
1231 +
1232 +- snprintf(buf, len, "supplier:%s", dev_name(sup));
1233 ++ snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
1234 + sysfs_remove_link(&con->kobj, buf);
1235 +- snprintf(buf, len, "consumer:%s", dev_name(con));
1236 ++ snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
1237 + sysfs_remove_link(&sup->kobj, buf);
1238 + kfree(buf);
1239 + }
1240 +@@ -633,8 +652,9 @@ struct device_link *device_link_add(struct device *consumer,
1241 +
1242 + link->link_dev.class = &devlink_class;
1243 + device_set_pm_not_required(&link->link_dev);
1244 +- dev_set_name(&link->link_dev, "%s--%s",
1245 +- dev_name(supplier), dev_name(consumer));
1246 ++ dev_set_name(&link->link_dev, "%s:%s--%s:%s",
1247 ++ dev_bus_name(supplier), dev_name(supplier),
1248 ++ dev_bus_name(consumer), dev_name(consumer));
1249 + if (device_register(&link->link_dev)) {
1250 + put_device(consumer);
1251 + put_device(supplier);
1252 +@@ -1652,9 +1672,7 @@ const char *dev_driver_string(const struct device *dev)
1253 + * never change once they are set, so they don't need special care.
1254 + */
1255 + drv = READ_ONCE(dev->driver);
1256 +- return drv ? drv->name :
1257 +- (dev->bus ? dev->bus->name :
1258 +- (dev->class ? dev->class->name : ""));
1259 ++ return drv ? drv->name : dev_bus_name(dev);
1260 + }
1261 + EXPORT_SYMBOL(dev_driver_string);
1262 +
1263 +diff --git a/drivers/base/dd.c b/drivers/base/dd.c
1264 +index 148e81969e046..3c94ebc8d4bb0 100644
1265 +--- a/drivers/base/dd.c
1266 ++++ b/drivers/base/dd.c
1267 +@@ -612,6 +612,8 @@ dev_groups_failed:
1268 + else if (drv->remove)
1269 + drv->remove(dev);
1270 + probe_failed:
1271 ++ kfree(dev->dma_range_map);
1272 ++ dev->dma_range_map = NULL;
1273 + if (dev->bus)
1274 + blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
1275 + BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
1276 +diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
1277 +index 37244a7e68c22..9cf249c344d9e 100644
1278 +--- a/drivers/clk/tegra/clk-tegra30.c
1279 ++++ b/drivers/clk/tegra/clk-tegra30.c
1280 +@@ -1256,6 +1256,8 @@ static struct tegra_clk_init_table init_table[] __initdata = {
1281 + { TEGRA30_CLK_I2S3_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
1282 + { TEGRA30_CLK_I2S4_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
1283 + { TEGRA30_CLK_VIMCLK_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
1284 ++ { TEGRA30_CLK_HDA, TEGRA30_CLK_PLL_P, 102000000, 0 },
1285 ++ { TEGRA30_CLK_HDA2CODEC_2X, TEGRA30_CLK_PLL_P, 48000000, 0 },
1286 + /* must be the last entry */
1287 + { TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 },
1288 + };
1289 +diff --git a/drivers/counter/ti-eqep.c b/drivers/counter/ti-eqep.c
1290 +index a60aee1a1a291..65df9ef5b5bc0 100644
1291 +--- a/drivers/counter/ti-eqep.c
1292 ++++ b/drivers/counter/ti-eqep.c
1293 +@@ -235,36 +235,6 @@ static ssize_t ti_eqep_position_ceiling_write(struct counter_device *counter,
1294 + return len;
1295 + }
1296 +
1297 +-static ssize_t ti_eqep_position_floor_read(struct counter_device *counter,
1298 +- struct counter_count *count,
1299 +- void *ext_priv, char *buf)
1300 +-{
1301 +- struct ti_eqep_cnt *priv = counter->priv;
1302 +- u32 qposinit;
1303 +-
1304 +- regmap_read(priv->regmap32, QPOSINIT, &qposinit);
1305 +-
1306 +- return sprintf(buf, "%u\n", qposinit);
1307 +-}
1308 +-
1309 +-static ssize_t ti_eqep_position_floor_write(struct counter_device *counter,
1310 +- struct counter_count *count,
1311 +- void *ext_priv, const char *buf,
1312 +- size_t len)
1313 +-{
1314 +- struct ti_eqep_cnt *priv = counter->priv;
1315 +- int err;
1316 +- u32 res;
1317 +-
1318 +- err = kstrtouint(buf, 0, &res);
1319 +- if (err < 0)
1320 +- return err;
1321 +-
1322 +- regmap_write(priv->regmap32, QPOSINIT, res);
1323 +-
1324 +- return len;
1325 +-}
1326 +-
1327 + static ssize_t ti_eqep_position_enable_read(struct counter_device *counter,
1328 + struct counter_count *count,
1329 + void *ext_priv, char *buf)
1330 +@@ -301,11 +271,6 @@ static struct counter_count_ext ti_eqep_position_ext[] = {
1331 + .read = ti_eqep_position_ceiling_read,
1332 + .write = ti_eqep_position_ceiling_write,
1333 + },
1334 +- {
1335 +- .name = "floor",
1336 +- .read = ti_eqep_position_floor_read,
1337 +- .write = ti_eqep_position_floor_write,
1338 +- },
1339 + {
1340 + .name = "enable",
1341 + .read = ti_eqep_position_enable_read,
1342 +diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
1343 +index 9d6645b1f0abe..ff5e85eefbf69 100644
1344 +--- a/drivers/crypto/Kconfig
1345 ++++ b/drivers/crypto/Kconfig
1346 +@@ -366,6 +366,7 @@ if CRYPTO_DEV_OMAP
1347 + config CRYPTO_DEV_OMAP_SHAM
1348 + tristate "Support for OMAP MD5/SHA1/SHA2 hw accelerator"
1349 + depends on ARCH_OMAP2PLUS
1350 ++ select CRYPTO_ENGINE
1351 + select CRYPTO_SHA1
1352 + select CRYPTO_MD5
1353 + select CRYPTO_SHA256
1354 +diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
1355 +index 5d4de5cd67595..f20ac3d694246 100644
1356 +--- a/drivers/gpio/Kconfig
1357 ++++ b/drivers/gpio/Kconfig
1358 +@@ -508,7 +508,8 @@ config GPIO_SAMA5D2_PIOBU
1359 +
1360 + config GPIO_SIFIVE
1361 + bool "SiFive GPIO support"
1362 +- depends on OF_GPIO && IRQ_DOMAIN_HIERARCHY
1363 ++ depends on OF_GPIO
1364 ++ select IRQ_DOMAIN_HIERARCHY
1365 + select GPIO_GENERIC
1366 + select GPIOLIB_IRQCHIP
1367 + select REGMAP_MMIO
1368 +diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
1369 +index e9faeaf65d14f..689c06cbbb457 100644
1370 +--- a/drivers/gpio/gpiolib-cdev.c
1371 ++++ b/drivers/gpio/gpiolib-cdev.c
1372 +@@ -1960,6 +1960,21 @@ struct gpio_chardev_data {
1373 + #endif
1374 + };
1375 +
1376 ++static int chipinfo_get(struct gpio_chardev_data *cdev, void __user *ip)
1377 ++{
1378 ++ struct gpio_device *gdev = cdev->gdev;
1379 ++ struct gpiochip_info chipinfo;
1380 ++
1381 ++ memset(&chipinfo, 0, sizeof(chipinfo));
1382 ++
1383 ++ strscpy(chipinfo.name, dev_name(&gdev->dev), sizeof(chipinfo.name));
1384 ++ strscpy(chipinfo.label, gdev->label, sizeof(chipinfo.label));
1385 ++ chipinfo.lines = gdev->ngpio;
1386 ++ if (copy_to_user(ip, &chipinfo, sizeof(chipinfo)))
1387 ++ return -EFAULT;
1388 ++ return 0;
1389 ++}
1390 ++
1391 + #ifdef CONFIG_GPIO_CDEV_V1
1392 + /*
1393 + * returns 0 if the versions match, else the previously selected ABI version
1394 +@@ -1974,6 +1989,41 @@ static int lineinfo_ensure_abi_version(struct gpio_chardev_data *cdata,
1395 +
1396 + return abiv;
1397 + }
1398 ++
1399 ++static int lineinfo_get_v1(struct gpio_chardev_data *cdev, void __user *ip,
1400 ++ bool watch)
1401 ++{
1402 ++ struct gpio_desc *desc;
1403 ++ struct gpioline_info lineinfo;
1404 ++ struct gpio_v2_line_info lineinfo_v2;
1405 ++
1406 ++ if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
1407 ++ return -EFAULT;
1408 ++
1409 ++ /* this doubles as a range check on line_offset */
1410 ++ desc = gpiochip_get_desc(cdev->gdev->chip, lineinfo.line_offset);
1411 ++ if (IS_ERR(desc))
1412 ++ return PTR_ERR(desc);
1413 ++
1414 ++ if (watch) {
1415 ++ if (lineinfo_ensure_abi_version(cdev, 1))
1416 ++ return -EPERM;
1417 ++
1418 ++ if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines))
1419 ++ return -EBUSY;
1420 ++ }
1421 ++
1422 ++ gpio_desc_to_lineinfo(desc, &lineinfo_v2);
1423 ++ gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
1424 ++
1425 ++ if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) {
1426 ++ if (watch)
1427 ++ clear_bit(lineinfo.line_offset, cdev->watched_lines);
1428 ++ return -EFAULT;
1429 ++ }
1430 ++
1431 ++ return 0;
1432 ++}
1433 + #endif
1434 +
1435 + static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip,
1436 +@@ -2011,6 +2061,22 @@ static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip,
1437 + return 0;
1438 + }
1439 +
1440 ++static int lineinfo_unwatch(struct gpio_chardev_data *cdev, void __user *ip)
1441 ++{
1442 ++ __u32 offset;
1443 ++
1444 ++ if (copy_from_user(&offset, ip, sizeof(offset)))
1445 ++ return -EFAULT;
1446 ++
1447 ++ if (offset >= cdev->gdev->ngpio)
1448 ++ return -EINVAL;
1449 ++
1450 ++ if (!test_and_clear_bit(offset, cdev->watched_lines))
1451 ++ return -EBUSY;
1452 ++
1453 ++ return 0;
1454 ++}
1455 ++
1456 + /*
1457 + * gpio_ioctl() - ioctl handler for the GPIO chardev
1458 + */
1459 +@@ -2018,80 +2084,24 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
1460 + {
1461 + struct gpio_chardev_data *cdev = file->private_data;
1462 + struct gpio_device *gdev = cdev->gdev;
1463 +- struct gpio_chip *gc = gdev->chip;
1464 + void __user *ip = (void __user *)arg;
1465 +- __u32 offset;
1466 +
1467 + /* We fail any subsequent ioctl():s when the chip is gone */
1468 +- if (!gc)
1469 ++ if (!gdev->chip)
1470 + return -ENODEV;
1471 +
1472 + /* Fill in the struct and pass to userspace */
1473 + if (cmd == GPIO_GET_CHIPINFO_IOCTL) {
1474 +- struct gpiochip_info chipinfo;
1475 +-
1476 +- memset(&chipinfo, 0, sizeof(chipinfo));
1477 +-
1478 +- strscpy(chipinfo.name, dev_name(&gdev->dev),
1479 +- sizeof(chipinfo.name));
1480 +- strscpy(chipinfo.label, gdev->label,
1481 +- sizeof(chipinfo.label));
1482 +- chipinfo.lines = gdev->ngpio;
1483 +- if (copy_to_user(ip, &chipinfo, sizeof(chipinfo)))
1484 +- return -EFAULT;
1485 +- return 0;
1486 ++ return chipinfo_get(cdev, ip);
1487 + #ifdef CONFIG_GPIO_CDEV_V1
1488 +- } else if (cmd == GPIO_GET_LINEINFO_IOCTL) {
1489 +- struct gpio_desc *desc;
1490 +- struct gpioline_info lineinfo;
1491 +- struct gpio_v2_line_info lineinfo_v2;
1492 +-
1493 +- if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
1494 +- return -EFAULT;
1495 +-
1496 +- /* this doubles as a range check on line_offset */
1497 +- desc = gpiochip_get_desc(gc, lineinfo.line_offset);
1498 +- if (IS_ERR(desc))
1499 +- return PTR_ERR(desc);
1500 +-
1501 +- gpio_desc_to_lineinfo(desc, &lineinfo_v2);
1502 +- gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
1503 +-
1504 +- if (copy_to_user(ip, &lineinfo, sizeof(lineinfo)))
1505 +- return -EFAULT;
1506 +- return 0;
1507 + } else if (cmd == GPIO_GET_LINEHANDLE_IOCTL) {
1508 + return linehandle_create(gdev, ip);
1509 + } else if (cmd == GPIO_GET_LINEEVENT_IOCTL) {
1510 + return lineevent_create(gdev, ip);
1511 +- } else if (cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) {
1512 +- struct gpio_desc *desc;
1513 +- struct gpioline_info lineinfo;
1514 +- struct gpio_v2_line_info lineinfo_v2;
1515 +-
1516 +- if (copy_from_user(&lineinfo, ip, sizeof(lineinfo)))
1517 +- return -EFAULT;
1518 +-
1519 +- /* this doubles as a range check on line_offset */
1520 +- desc = gpiochip_get_desc(gc, lineinfo.line_offset);
1521 +- if (IS_ERR(desc))
1522 +- return PTR_ERR(desc);
1523 +-
1524 +- if (lineinfo_ensure_abi_version(cdev, 1))
1525 +- return -EPERM;
1526 +-
1527 +- if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines))
1528 +- return -EBUSY;
1529 +-
1530 +- gpio_desc_to_lineinfo(desc, &lineinfo_v2);
1531 +- gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo);
1532 +-
1533 +- if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) {
1534 +- clear_bit(lineinfo.line_offset, cdev->watched_lines);
1535 +- return -EFAULT;
1536 +- }
1537 +-
1538 +- return 0;
1539 ++ } else if (cmd == GPIO_GET_LINEINFO_IOCTL ||
1540 ++ cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) {
1541 ++ return lineinfo_get_v1(cdev, ip,
1542 ++ cmd == GPIO_GET_LINEINFO_WATCH_IOCTL);
1543 + #endif /* CONFIG_GPIO_CDEV_V1 */
1544 + } else if (cmd == GPIO_V2_GET_LINEINFO_IOCTL ||
1545 + cmd == GPIO_V2_GET_LINEINFO_WATCH_IOCTL) {
1546 +@@ -2100,16 +2110,7 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
1547 + } else if (cmd == GPIO_V2_GET_LINE_IOCTL) {
1548 + return linereq_create(gdev, ip);
1549 + } else if (cmd == GPIO_GET_LINEINFO_UNWATCH_IOCTL) {
1550 +- if (copy_from_user(&offset, ip, sizeof(offset)))
1551 +- return -EFAULT;
1552 +-
1553 +- if (offset >= cdev->gdev->ngpio)
1554 +- return -EINVAL;
1555 +-
1556 +- if (!test_and_clear_bit(offset, cdev->watched_lines))
1557 +- return -EBUSY;
1558 +-
1559 +- return 0;
1560 ++ return lineinfo_unwatch(cdev, ip);
1561 + }
1562 + return -EINVAL;
1563 + }
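
A note on the gpio_ioctl() rework above: the two v1 line-info paths,
GPIO_GET_LINEINFO_IOCTL and GPIO_GET_LINEINFO_WATCH_IOCTL, previously
duplicated the copy_from_user/desc-lookup/convert/copy_to_user sequence
inline; they now share lineinfo_get_v1(), with the watch flag deciding
whether the offset is also marked in cdev->watched_lines, and the unwatch
path moves into lineinfo_unwatch(). Userspace behaviour is unchanged; a
minimal sketch against the v1 uapi (assuming a chip at /dev/gpiochip0 with a
line at offset 4):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/gpio.h>

    int main(void)
    {
        struct gpioline_info info;
        __u32 offset = 4;
        int fd = open("/dev/gpiochip0", O_RDWR);

        if (fd < 0)
            return 1;

        memset(&info, 0, sizeof(info));
        info.line_offset = offset;

        /* served by lineinfo_get_v1(cdev, ip, true) after this patch */
        if (ioctl(fd, GPIO_GET_LINEINFO_WATCH_IOCTL, &info) == 0)
            printf("watching line %u (%s)\n", info.line_offset, info.name);

        /* served by the new lineinfo_unwatch() helper */
        ioctl(fd, GPIO_GET_LINEINFO_UNWATCH_IOCTL, &offset);
        close(fd);
        return 0;
    }
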
1564 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
1565 +index 2ddbcfe0a72ff..76d10f1c579ba 100644
1566 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
1567 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
1568 +@@ -80,7 +80,6 @@ MODULE_FIRMWARE("amdgpu/renoir_gpu_info.bin");
1569 + MODULE_FIRMWARE("amdgpu/navi10_gpu_info.bin");
1570 + MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin");
1571 + MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin");
1572 +-MODULE_FIRMWARE("amdgpu/green_sardine_gpu_info.bin");
1573 +
1574 + #define AMDGPU_RESUME_MS 2000
1575 +
1576 +diff --git a/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
1577 +index 4137dc710aafd..7ad0434be293b 100644
1578 +--- a/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
1579 ++++ b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
1580 +@@ -47,7 +47,7 @@ enum psp_gfx_crtl_cmd_id
1581 + GFX_CTRL_CMD_ID_DISABLE_INT = 0x00060000, /* disable PSP-to-Gfx interrupt */
1582 + GFX_CTRL_CMD_ID_MODE1_RST = 0x00070000, /* trigger the Mode 1 reset */
1583 + GFX_CTRL_CMD_ID_GBR_IH_SET = 0x00080000, /* set Gbr IH_RB_CNTL registers */
1584 +- GFX_CTRL_CMD_ID_CONSUME_CMD = 0x000A0000, /* send interrupt to psp for updating write pointer of vf */
1585 ++ GFX_CTRL_CMD_ID_CONSUME_CMD = 0x00090000, /* send interrupt to psp for updating write pointer of vf */
1586 + GFX_CTRL_CMD_ID_DESTROY_GPCOM_RING = 0x000C0000, /* destroy GPCOM ring */
1587 +
1588 + GFX_CTRL_CMD_ID_MAX = 0x000F0000, /* max command ID */
1589 +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
1590 +index d7f67620f57ba..31d793ee0836e 100644
1591 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
1592 ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
1593 +@@ -1034,11 +1034,14 @@ static int kfd_create_vcrat_image_cpu(void *pcrat_image, size_t *size)
1594 + (struct crat_subtype_iolink *)sub_type_hdr);
1595 + if (ret < 0)
1596 + return ret;
1597 +- crat_table->length += (sub_type_hdr->length * entries);
1598 +- crat_table->total_entries += entries;
1599 +
1600 +- sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
1601 +- sub_type_hdr->length * entries);
1602 ++ if (entries) {
1603 ++ crat_table->length += (sub_type_hdr->length * entries);
1604 ++ crat_table->total_entries += entries;
1605 ++
1606 ++ sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
1607 ++ sub_type_hdr->length * entries);
1608 ++ }
1609 + #else
1610 + pr_info("IO link not available for non x86 platforms\n");
1611 + #endif
1612 +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
1613 +index d0699e98db929..e00a30e7d2529 100644
1614 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
1615 ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
1616 +@@ -113,7 +113,7 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
1617 + mutex_lock(&adev->dm.dc_lock);
1618 +
1619 + /* Enable CRTC CRC generation if necessary. */
1620 +- if (dm_is_crc_source_crtc(source)) {
1621 ++ if (dm_is_crc_source_crtc(source) || source == AMDGPU_DM_PIPE_CRC_SOURCE_NONE) {
1622 + if (!dc_stream_configure_crc(stream_state->ctx->dc,
1623 + stream_state, enable, enable)) {
1624 + ret = -EINVAL;
1625 +diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
1626 +index 462d3d981ea5e..0a01be38ee1b8 100644
1627 +--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
1628 ++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
1629 +@@ -608,8 +608,8 @@ static const struct dc_debug_options debug_defaults_drv = {
1630 + .disable_pplib_clock_request = false,
1631 + .disable_pplib_wm_range = false,
1632 + .pplib_wm_report_mode = WM_REPORT_DEFAULT,
1633 +- .pipe_split_policy = MPC_SPLIT_DYNAMIC,
1634 +- .force_single_disp_pipe_split = true,
1635 ++ .pipe_split_policy = MPC_SPLIT_AVOID,
1636 ++ .force_single_disp_pipe_split = false,
1637 + .disable_dcc = DCC_ENABLE,
1638 + .voltage_align_fclk = true,
1639 + .disable_stereo_support = true,
1640 +diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
1641 +index d50a9c3706372..a92f6e4b2eb8f 100644
1642 +--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
1643 ++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
1644 +@@ -2520,8 +2520,7 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
1645 + * if this primary pipe has a bottom pipe in prev. state
1646 + * and if the bottom pipe is still available (which it should be),
1647 + * pick that pipe as secondary
1648 +- * Same logic applies for ODM pipes. Since mpo is not allowed with odm
1649 +- * check in else case.
1650 ++ * Same logic applies for ODM pipes
1651 + */
1652 + if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe) {
1653 + preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe->pipe_idx;
1654 +@@ -2529,7 +2528,9 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
1655 + secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
1656 + secondary_pipe->pipe_idx = preferred_pipe_idx;
1657 + }
1658 +- } else if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) {
1659 ++ }
1660 ++ if (secondary_pipe == NULL &&
1661 ++ dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) {
1662 + preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe->pipe_idx;
1663 + if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
1664 + secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
1665 +diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
1666 +index f9170b4b22e7e..8a871e5c3e26b 100644
1667 +--- a/drivers/gpu/drm/drm_atomic_helper.c
1668 ++++ b/drivers/gpu/drm/drm_atomic_helper.c
1669 +@@ -3007,7 +3007,7 @@ int drm_atomic_helper_set_config(struct drm_mode_set *set,
1670 +
1671 + ret = handle_conflicting_encoders(state, true);
1672 + if (ret)
1673 +- return ret;
1674 ++ goto fail;
1675 +
1676 + ret = drm_atomic_commit(state);
1677 +
1678 +diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
1679 +index 6e74e6745ecae..3491460498491 100644
1680 +--- a/drivers/gpu/drm/drm_syncobj.c
1681 ++++ b/drivers/gpu/drm/drm_syncobj.c
1682 +@@ -388,19 +388,18 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
1683 + return -ENOENT;
1684 +
1685 + *fence = drm_syncobj_fence_get(syncobj);
1686 +- drm_syncobj_put(syncobj);
1687 +
1688 + if (*fence) {
1689 + ret = dma_fence_chain_find_seqno(fence, point);
1690 + if (!ret)
1691 +- return 0;
1692 ++ goto out;
1693 + dma_fence_put(*fence);
1694 + } else {
1695 + ret = -EINVAL;
1696 + }
1697 +
1698 + if (!(flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
1699 +- return ret;
1700 ++ goto out;
1701 +
1702 + memset(&wait, 0, sizeof(wait));
1703 + wait.task = current;
1704 +@@ -432,6 +431,9 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
1705 + if (wait.node.next)
1706 + drm_syncobj_remove_wait(syncobj, &wait);
1707 +
1708 ++out:
1709 ++ drm_syncobj_put(syncobj);
1710 ++
1711 + return ret;
1712 + }
1713 + EXPORT_SYMBOL(drm_syncobj_find_fence);
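
The drm_syncobj_find_fence() fix above is the classic single-exit cleanup
idiom: the old code dropped the syncobj reference right after the fence
lookup, so the later wait path touched a syncobj it no longer held; now every
return funnels through one out: label that does the put. A freestanding
sketch of the pattern (illustrative names, not the DRM API):

    #include <stdio.h>

    struct obj { int refs; };

    static void obj_get(struct obj *o) { o->refs++; }
    static void obj_put(struct obj *o) { o->refs--; }

    static int use_obj(struct obj *o, int want_wait)
    {
        int ret = 0;

        obj_get(o);        /* hold the ref for the whole function */

        if (!want_wait) {
            ret = -1;
            goto out;      /* early exits share the single put */
        }

        /* ... work that relies on o staying alive ... */

    out:
        obj_put(o);        /* dropped exactly once on every path */
        return ret;
    }

    int main(void)
    {
        struct obj o = { .refs = 1 };

        use_obj(&o, 0);
        printf("refs = %d\n", o.refs);    /* still 1: balanced */
        return 0;
    }
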
1714 +diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
1715 +index cdcb7b1034ae4..3f2bbd9370a86 100644
1716 +--- a/drivers/gpu/drm/i915/display/intel_ddi.c
1717 ++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
1718 +@@ -3387,7 +3387,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
1719 + intel_ddi_init_dp_buf_reg(encoder);
1720 +
1721 + if (!is_mst)
1722 +- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
1723 ++ intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
1724 +
1725 + intel_dp_sink_set_decompression_state(intel_dp, crtc_state, true);
1726 + /*
1727 +@@ -3469,8 +3469,8 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
1728 +
1729 + intel_ddi_init_dp_buf_reg(encoder);
1730 + if (!is_mst)
1731 +- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
1732 +- intel_dp_configure_protocol_converter(intel_dp);
1733 ++ intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
1734 ++ intel_dp_configure_protocol_converter(intel_dp, crtc_state);
1735 + intel_dp_sink_set_decompression_state(intel_dp, crtc_state,
1736 + true);
1737 + intel_dp_sink_set_fec_ready(intel_dp, crtc_state);
1738 +@@ -3647,7 +3647,7 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state,
1739 + * Power down sink before disabling the port, otherwise we end
1740 + * up getting interrupts from the sink on detecting link loss.
1741 + */
1742 +- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
1743 ++ intel_dp_set_power(intel_dp, DP_SET_POWER_D3);
1744 +
1745 + if (INTEL_GEN(dev_priv) >= 12) {
1746 + if (is_mst) {
1747 +diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
1748 +index 1901c88d418fa..1937b3d6342ae 100644
1749 +--- a/drivers/gpu/drm/i915/display/intel_dp.c
1750 ++++ b/drivers/gpu/drm/i915/display/intel_dp.c
1751 +@@ -3496,22 +3496,22 @@ void intel_dp_sink_set_decompression_state(struct intel_dp *intel_dp,
1752 + enable ? "enable" : "disable");
1753 + }
1754 +
1755 +-/* If the sink supports it, try to set the power state appropriately */
1756 +-void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
1757 ++/* If the device supports it, try to set the power state appropriately */
1758 ++void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode)
1759 + {
1760 +- struct drm_i915_private *i915 = dp_to_i915(intel_dp);
1761 ++ struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
1762 ++ struct drm_i915_private *i915 = to_i915(encoder->base.dev);
1763 + int ret, i;
1764 +
1765 + /* Should have a valid DPCD by this point */
1766 + if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
1767 + return;
1768 +
1769 +- if (mode != DRM_MODE_DPMS_ON) {
1770 ++ if (mode != DP_SET_POWER_D0) {
1771 + if (downstream_hpd_needs_d0(intel_dp))
1772 + return;
1773 +
1774 +- ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER,
1775 +- DP_SET_POWER_D3);
1776 ++ ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
1777 + } else {
1778 + struct intel_lspcon *lspcon = dp_to_lspcon(intel_dp);
1779 +
1780 +@@ -3520,8 +3520,7 @@ void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
1781 + * time to wake up.
1782 + */
1783 + for (i = 0; i < 3; i++) {
1784 +- ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER,
1785 +- DP_SET_POWER_D0);
1786 ++ ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
1787 + if (ret == 1)
1788 + break;
1789 + msleep(1);
1790 +@@ -3532,8 +3531,9 @@ void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode)
1791 + }
1792 +
1793 + if (ret != 1)
1794 +- drm_dbg_kms(&i915->drm, "failed to %s sink power state\n",
1795 +- mode == DRM_MODE_DPMS_ON ? "enable" : "disable");
1796 ++ drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Set power to %s failed\n",
1797 ++ encoder->base.base.id, encoder->base.name,
1798 ++ mode == DP_SET_POWER_D0 ? "D0" : "D3");
1799 + }
1800 +
1801 + static bool cpt_dp_port_selected(struct drm_i915_private *dev_priv,
1802 +@@ -3707,7 +3707,7 @@ static void intel_disable_dp(struct intel_atomic_state *state,
1803 + * ensure that we have vdd while we switch off the panel. */
1804 + intel_edp_panel_vdd_on(intel_dp);
1805 + intel_edp_backlight_off(old_conn_state);
1806 +- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
1807 ++ intel_dp_set_power(intel_dp, DP_SET_POWER_D3);
1808 + intel_edp_panel_off(intel_dp);
1809 + }
1810 +
1811 +@@ -3856,7 +3856,8 @@ static void intel_dp_enable_port(struct intel_dp *intel_dp,
1812 + intel_de_posting_read(dev_priv, intel_dp->output_reg);
1813 + }
1814 +
1815 +-void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp)
1816 ++void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
1817 ++ const struct intel_crtc_state *crtc_state)
1818 + {
1819 + struct drm_i915_private *i915 = dp_to_i915(intel_dp);
1820 + u8 tmp;
1821 +@@ -3875,8 +3876,8 @@ void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp)
1822 + drm_dbg_kms(&i915->drm, "Failed to set protocol converter HDMI mode to %s\n",
1823 + enableddisabled(intel_dp->has_hdmi_sink));
1824 +
1825 +- tmp = intel_dp->dfp.ycbcr_444_to_420 ?
1826 +- DP_CONVERSION_TO_YCBCR420_ENABLE : 0;
1827 ++ tmp = crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR444 &&
1828 ++ intel_dp->dfp.ycbcr_444_to_420 ? DP_CONVERSION_TO_YCBCR420_ENABLE : 0;
1829 +
1830 + if (drm_dp_dpcd_writeb(&intel_dp->aux,
1831 + DP_PROTOCOL_CONVERTER_CONTROL_1, tmp) != 1)
1832 +@@ -3929,8 +3930,8 @@ static void intel_enable_dp(struct intel_atomic_state *state,
1833 + lane_mask);
1834 + }
1835 +
1836 +- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
1837 +- intel_dp_configure_protocol_converter(intel_dp);
1838 ++ intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
1839 ++ intel_dp_configure_protocol_converter(intel_dp, pipe_config);
1840 + intel_dp_start_link_train(intel_dp);
1841 + intel_dp_stop_link_train(intel_dp);
1842 +
1843 +diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
1844 +index 08a1c0aa8b94b..2dd934182471e 100644
1845 +--- a/drivers/gpu/drm/i915/display/intel_dp.h
1846 ++++ b/drivers/gpu/drm/i915/display/intel_dp.h
1847 +@@ -50,8 +50,9 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
1848 + int link_rate, u8 lane_count);
1849 + int intel_dp_retrain_link(struct intel_encoder *encoder,
1850 + struct drm_modeset_acquire_ctx *ctx);
1851 +-void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
1852 +-void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp);
1853 ++void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode);
1854 ++void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
1855 ++ const struct intel_crtc_state *crtc_state);
1856 + void intel_dp_sink_set_decompression_state(struct intel_dp *intel_dp,
1857 + const struct intel_crtc_state *crtc_state,
1858 + bool enable);
1859 +diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
1860 +index 64d885539e94a..5d745d9b99b2a 100644
1861 +--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
1862 ++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
1863 +@@ -488,7 +488,7 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
1864 + intel_dp->active_mst_links);
1865 +
1866 + if (first_mst_stream)
1867 +- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
1868 ++ intel_dp_set_power(intel_dp, DP_SET_POWER_D0);
1869 +
1870 + drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port, true);
1871 +
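
Context for the intel_dp_sink_dpms() to intel_dp_set_power() conversion in
the files above: the helper now takes the raw DPCD DP_SET_POWER value rather
than a DRM DPMS mode, but keeps the D0 retry loop, since a sink coming out of
D3 may need up to 1 ms before it acks AUX writes. A compilable sketch of that
retry shape (the DPCD write is stubbed; the register values mirror the
DisplayPort spec):

    #include <stdio.h>

    #define DP_SET_POWER_D0 0x1    /* DPCD power states */
    #define DP_SET_POWER_D3 0x2

    /* Stub for drm_dp_dpcd_writeb(): pretend the sink needs one retry. */
    static int dpcd_writeb(unsigned char val)
    {
        static int calls;

        (void)val;
        return ++calls > 1 ? 1 : 0;    /* 1 == one byte written */
    }

    static int dp_set_power(unsigned char mode)
    {
        int ret = 0, i;

        if (mode != DP_SET_POWER_D0)
            return dpcd_writeb(mode) == 1 ? 0 : -1;

        /* A just-woken sink may ignore writes for ~1 ms: retry. */
        for (i = 0; i < 3; i++) {
            ret = dpcd_writeb(mode);
            if (ret == 1)
                break;
            /* msleep(1) in the real driver */
        }
        return ret == 1 ? 0 : -1;
    }

    int main(void)
    {
        printf("%d\n", dp_set_power(DP_SET_POWER_D0));    /* 0 after retry */
        return 0;
    }
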
1872 +diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c
1873 +index 5492076d1ae09..17a8c2e73a820 100644
1874 +--- a/drivers/gpu/drm/i915/display/intel_hdcp.c
1875 ++++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
1876 +@@ -2187,6 +2187,7 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
1877 + if (content_protection_type_changed) {
1878 + mutex_lock(&hdcp->mutex);
1879 + hdcp->value = DRM_MODE_CONTENT_PROTECTION_DESIRED;
1880 ++ drm_connector_get(&connector->base);
1881 + schedule_work(&hdcp->prop_work);
1882 + mutex_unlock(&hdcp->mutex);
1883 + }
1884 +@@ -2198,6 +2199,14 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state,
1885 + desired_and_not_enabled =
1886 + hdcp->value != DRM_MODE_CONTENT_PROTECTION_ENABLED;
1887 + mutex_unlock(&hdcp->mutex);
1888 ++ /*
1889 ++ * If HDCP already ENABLED and CP property is DESIRED, schedule
1890 ++ * prop_work to update correct CP property to user space.
1891 ++ */
1892 ++ if (!desired_and_not_enabled && !content_protection_type_changed) {
1893 ++ drm_connector_get(&connector->base);
1894 ++ schedule_work(&hdcp->prop_work);
1895 ++ }
1896 + }
1897 +
1898 + if (desired_and_not_enabled || content_protection_type_changed)
1899 +diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
1900 +index a24cc1ff08a0c..0625cbb3b4312 100644
1901 +--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
1902 ++++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
1903 +@@ -134,11 +134,6 @@ static bool remove_signaling_context(struct intel_breadcrumbs *b,
1904 + return true;
1905 + }
1906 +
1907 +-static inline bool __request_completed(const struct i915_request *rq)
1908 +-{
1909 +- return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno);
1910 +-}
1911 +-
1912 + __maybe_unused static bool
1913 + check_signal_order(struct intel_context *ce, struct i915_request *rq)
1914 + {
1915 +@@ -257,7 +252,7 @@ static void signal_irq_work(struct irq_work *work)
1916 + list_for_each_entry_rcu(rq, &ce->signals, signal_link) {
1917 + bool release;
1918 +
1919 +- if (!__request_completed(rq))
1920 ++ if (!__i915_request_is_complete(rq))
1921 + break;
1922 +
1923 + if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL,
1924 +@@ -379,7 +374,7 @@ static void insert_breadcrumb(struct i915_request *rq)
1925 + * straight onto a signaled list, and queue the irq worker for
1926 + * its signal completion.
1927 + */
1928 +- if (__request_completed(rq)) {
1929 ++ if (__i915_request_is_complete(rq)) {
1930 + if (__signal_request(rq) &&
1931 + llist_add(&rq->signal_node, &b->signaled_requests))
1932 + irq_work_queue(&b->irq_work);
1933 +diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
1934 +index 724b2cb897d33..ee9b33c3aff83 100644
1935 +--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
1936 ++++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
1937 +@@ -3936,6 +3936,9 @@ err:
1938 + static void lrc_destroy_wa_ctx(struct intel_engine_cs *engine)
1939 + {
1940 + i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0);
1941 ++
1942 ++ /* Called on error unwind, clear all flags to prevent further use */
1943 ++ memset(&engine->wa_ctx, 0, sizeof(engine->wa_ctx));
1944 + }
1945 +
1946 + typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch);
1947 +diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
1948 +index 7ea94d201fe6f..8015964043eb7 100644
1949 +--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
1950 ++++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
1951 +@@ -126,6 +126,10 @@ static void __rcu_cacheline_free(struct rcu_head *rcu)
1952 + struct intel_timeline_cacheline *cl =
1953 + container_of(rcu, typeof(*cl), rcu);
1954 +
1955 ++ /* Must wait until after all *rq->hwsp are complete before removing */
1956 ++ i915_gem_object_unpin_map(cl->hwsp->vma->obj);
1957 ++ __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
1958 ++
1959 + i915_active_fini(&cl->active);
1960 + kfree(cl);
1961 + }
1962 +@@ -133,11 +137,6 @@ static void __rcu_cacheline_free(struct rcu_head *rcu)
1963 + static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
1964 + {
1965 + GEM_BUG_ON(!i915_active_is_idle(&cl->active));
1966 +-
1967 +- i915_gem_object_unpin_map(cl->hwsp->vma->obj);
1968 +- i915_vma_put(cl->hwsp->vma);
1969 +- __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
1970 +-
1971 + call_rcu(&cl->rcu, __rcu_cacheline_free);
1972 + }
1973 +
1974 +@@ -179,7 +178,6 @@ cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
1975 + return ERR_CAST(vaddr);
1976 + }
1977 +
1978 +- i915_vma_get(hwsp->vma);
1979 + cl->hwsp = hwsp;
1980 + cl->vaddr = page_pack_bits(vaddr, cacheline);
1981 +
1982 +diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
1983 +index 620b6fab2c5cf..92adfee30c7c0 100644
1984 +--- a/drivers/gpu/drm/i915/i915_request.h
1985 ++++ b/drivers/gpu/drm/i915/i915_request.h
1986 +@@ -434,7 +434,7 @@ static inline u32 hwsp_seqno(const struct i915_request *rq)
1987 +
1988 + static inline bool __i915_request_has_started(const struct i915_request *rq)
1989 + {
1990 +- return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno - 1);
1991 ++ return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno - 1);
1992 + }
1993 +
1994 + /**
1995 +@@ -465,11 +465,19 @@ static inline bool __i915_request_has_started(const struct i915_request *rq)
1996 + */
1997 + static inline bool i915_request_started(const struct i915_request *rq)
1998 + {
1999 ++ bool result;
2000 ++
2001 + if (i915_request_signaled(rq))
2002 + return true;
2003 +
2004 +- /* Remember: started but may have since been preempted! */
2005 +- return __i915_request_has_started(rq);
2006 ++ result = true;
2007 ++ rcu_read_lock(); /* the HWSP may be freed at runtime */
2008 ++ if (likely(!i915_request_signaled(rq)))
2009 ++ /* Remember: started but may have since been preempted! */
2010 ++ result = __i915_request_has_started(rq);
2011 ++ rcu_read_unlock();
2012 ++
2013 ++ return result;
2014 + }
2015 +
2016 + /**
2017 +@@ -482,10 +490,16 @@ static inline bool i915_request_started(const struct i915_request *rq)
2018 + */
2019 + static inline bool i915_request_is_running(const struct i915_request *rq)
2020 + {
2021 ++ bool result;
2022 ++
2023 + if (!i915_request_is_active(rq))
2024 + return false;
2025 +
2026 +- return __i915_request_has_started(rq);
2027 ++ rcu_read_lock();
2028 ++ result = __i915_request_has_started(rq) && i915_request_is_active(rq);
2029 ++ rcu_read_unlock();
2030 ++
2031 ++ return result;
2032 + }
2033 +
2034 + /**
2035 +@@ -509,12 +523,25 @@ static inline bool i915_request_is_ready(const struct i915_request *rq)
2036 + return !list_empty(&rq->sched.link);
2037 + }
2038 +
2039 ++static inline bool __i915_request_is_complete(const struct i915_request *rq)
2040 ++{
2041 ++ return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno);
2042 ++}
2043 ++
2044 + static inline bool i915_request_completed(const struct i915_request *rq)
2045 + {
2046 ++ bool result;
2047 ++
2048 + if (i915_request_signaled(rq))
2049 + return true;
2050 +
2051 +- return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno);
2052 ++ result = true;
2053 ++ rcu_read_lock(); /* the HWSP may be freed at runtime */
2054 ++ if (likely(!i915_request_signaled(rq)))
2055 ++ result = __i915_request_is_complete(rq);
2056 ++ rcu_read_unlock();
2057 ++
2058 ++ return result;
2059 + }
2060 +
2061 + static inline void i915_request_mark_complete(struct i915_request *rq)
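
The i915_request.h hunks above all follow one pattern: the HWSP (hardware
status page) backing the seqno can now be freed at runtime, so a seqno read
is only safe after the cheap signaled check, and then only inside an RCU
read-side section. A standalone sketch of that shape (RCU is stubbed out so
the file builds; the types are illustrative, not the i915 ones):

    #include <stdbool.h>

    /* Stubbed RCU markers so the sketch builds standalone. */
    #define rcu_read_lock()   do { } while (0)
    #define rcu_read_unlock() do { } while (0)

    struct request {
        bool signaled;                   /* set once the fence fired */
        const unsigned int *hwsp_seqno;  /* may be freed at runtime */
        unsigned int fence_seqno;
    };

    static bool request_completed(const struct request *rq)
    {
        bool result = true;

        if (rq->signaled)        /* fast path: no HWSP dereference */
            return true;

        rcu_read_lock();         /* the HWSP may be freed at runtime */
        if (!rq->signaled)       /* re-check under the lock */
            result = (int)(*rq->hwsp_seqno - rq->fence_seqno) >= 0;
        rcu_read_unlock();

        return result;
    }

    int main(void)
    {
        unsigned int hwsp = 2;
        struct request rq = { false, &hwsp, 2 };

        return request_completed(&rq) ? 0 : 1;    /* seqno 2 has passed 2 */
    }
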
2062 +diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
2063 +index 36d6b6093d16d..5b8cabb099eb1 100644
2064 +--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
2065 ++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
2066 +@@ -221,7 +221,7 @@ nv50_dmac_wait(struct nvif_push *push, u32 size)
2067 +
2068 + int
2069 + nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
2070 +- const s32 *oclass, u8 head, void *data, u32 size, u64 syncbuf,
2071 ++ const s32 *oclass, u8 head, void *data, u32 size, s64 syncbuf,
2072 + struct nv50_dmac *dmac)
2073 + {
2074 + struct nouveau_cli *cli = (void *)device->object.client;
2075 +@@ -270,7 +270,7 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
2076 + if (ret)
2077 + return ret;
2078 +
2079 +- if (!syncbuf)
2080 ++ if (syncbuf < 0)
2081 + return 0;
2082 +
2083 + ret = nvif_object_ctor(&dmac->base.user, "kmsSyncCtxDma", NV50_DISP_HANDLE_SYNCBUF,
2084 +diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.h b/drivers/gpu/drm/nouveau/dispnv50/disp.h
2085 +index 92bddc0836171..38dec11e7dda5 100644
2086 +--- a/drivers/gpu/drm/nouveau/dispnv50/disp.h
2087 ++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.h
2088 +@@ -95,7 +95,7 @@ struct nv50_outp_atom {
2089 +
2090 + int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
2091 + const s32 *oclass, u8 head, void *data, u32 size,
2092 +- u64 syncbuf, struct nv50_dmac *dmac);
2093 ++ s64 syncbuf, struct nv50_dmac *dmac);
2094 + void nv50_dmac_destroy(struct nv50_dmac *);
2095 +
2096 + /*
2097 +diff --git a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
2098 +index 685b708713242..b390029c69ec1 100644
2099 +--- a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
2100 ++++ b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
2101 +@@ -76,7 +76,7 @@ wimmc37b_init_(const struct nv50_wimm_func *func, struct nouveau_drm *drm,
2102 + int ret;
2103 +
2104 + ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
2105 +- &oclass, 0, &args, sizeof(args), 0,
2106 ++ &oclass, 0, &args, sizeof(args), -1,
2107 + &wndw->wimm);
2108 + if (ret) {
2109 + NV_ERROR(drm, "wimm%04x allocation failed: %d\n", oclass, ret);
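
On the nouveau change above: a syncbuf offset of 0 is perfectly valid, so it
could not keep doubling as the "no sync buffer" marker; widening the
parameter to s64 frees -1 to act as the sentinel, which is what wimmc37b now
passes. The idea in miniature:

    #include <stdint.h>
    #include <stdio.h>

    /* -1 means "no sync buffer"; any offset >= 0, including 0, is real. */
    static int setup(int64_t syncbuf)
    {
        if (syncbuf < 0) {
            puts("no syncbuf requested");
            return 0;
        }
        printf("binding syncbuf at offset %lld\n", (long long)syncbuf);
        return 0;
    }

    int main(void)
    {
        setup(0);     /* valid offset, previously misread as "none" */
        setup(-1);    /* explicit "none" */
        return 0;
    }
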
2110 +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
2111 +index 7deb81b6dbac6..4b571cc6bc70f 100644
2112 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
2113 ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
2114 +@@ -75,7 +75,7 @@ shadow_image(struct nvkm_bios *bios, int idx, u32 offset, struct shadow *mthd)
2115 + nvkm_debug(subdev, "%08x: type %02x, %d bytes\n",
2116 + image.base, image.type, image.size);
2117 +
2118 +- if (!shadow_fetch(bios, mthd, image.size)) {
2119 ++ if (!shadow_fetch(bios, mthd, image.base + image.size)) {
2120 + nvkm_debug(subdev, "%08x: fetch failed\n", image.base);
2121 + return 0;
2122 + }
2123 +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
2124 +index edb6148cbca04..d0e80ad526845 100644
2125 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
2126 ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
2127 +@@ -33,7 +33,7 @@ static void
2128 + gm200_i2c_aux_fini(struct gm200_i2c_aux *aux)
2129 + {
2130 + struct nvkm_device *device = aux->base.pad->i2c->subdev.device;
2131 +- nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00310000, 0x00000000);
2132 ++ nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00710000, 0x00000000);
2133 + }
2134 +
2135 + static int
2136 +@@ -54,10 +54,10 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux)
2137 + AUX_ERR(&aux->base, "begin idle timeout %08x", ctrl);
2138 + return -EBUSY;
2139 + }
2140 +- } while (ctrl & 0x03010000);
2141 ++ } while (ctrl & 0x07010000);
2142 +
2143 + /* set some magic, and wait up to 1ms for it to appear */
2144 +- nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00300000, ureq);
2145 ++ nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00700000, ureq);
2146 + timeout = 1000;
2147 + do {
2148 + ctrl = nvkm_rd32(device, 0x00d954 + (aux->ch * 0x50));
2149 +@@ -67,7 +67,7 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux)
2150 + gm200_i2c_aux_fini(aux);
2151 + return -EBUSY;
2152 + }
2153 +- } while ((ctrl & 0x03000000) != urep);
2154 ++ } while ((ctrl & 0x07000000) != urep);
2155 +
2156 + return 0;
2157 + }
2158 +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c
2159 +index 2340040942c93..1115376bc85f5 100644
2160 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c
2161 ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c
2162 +@@ -22,6 +22,7 @@
2163 + * Authors: Ben Skeggs
2164 + */
2165 + #include "priv.h"
2166 ++#include <subdev/timer.h>
2167 +
2168 + static void
2169 + gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
2170 +@@ -31,7 +32,6 @@ gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
2171 + u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0400));
2172 + u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0400));
2173 + nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
2174 +- nvkm_mask(device, 0x122128 + (i * 0x0400), 0x00000200, 0x00000000);
2175 + }
2176 +
2177 + static void
2178 +@@ -42,7 +42,6 @@ gf100_ibus_intr_rop(struct nvkm_subdev *ibus, int i)
2179 + u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0400));
2180 + u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0400));
2181 + nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
2182 +- nvkm_mask(device, 0x124128 + (i * 0x0400), 0x00000200, 0x00000000);
2183 + }
2184 +
2185 + static void
2186 +@@ -53,7 +52,6 @@ gf100_ibus_intr_gpc(struct nvkm_subdev *ibus, int i)
2187 + u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0400));
2188 + u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0400));
2189 + nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
2190 +- nvkm_mask(device, 0x128128 + (i * 0x0400), 0x00000200, 0x00000000);
2191 + }
2192 +
2193 + void
2194 +@@ -90,6 +88,12 @@ gf100_ibus_intr(struct nvkm_subdev *ibus)
2195 + intr1 &= ~stat;
2196 + }
2197 + }
2198 ++
2199 ++ nvkm_mask(device, 0x121c4c, 0x0000003f, 0x00000002);
2200 ++ nvkm_msec(device, 2000,
2201 ++ if (!(nvkm_rd32(device, 0x121c4c) & 0x0000003f))
2202 ++ break;
2203 ++ );
2204 + }
2205 +
2206 + static int
2207 +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c
2208 +index f3915f85838ed..22e487b493ad1 100644
2209 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c
2210 ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c
2211 +@@ -22,6 +22,7 @@
2212 + * Authors: Ben Skeggs
2213 + */
2214 + #include "priv.h"
2215 ++#include <subdev/timer.h>
2216 +
2217 + static void
2218 + gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
2219 +@@ -31,7 +32,6 @@ gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i)
2220 + u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0800));
2221 + u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0800));
2222 + nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat);
2223 +- nvkm_mask(device, 0x122128 + (i * 0x0800), 0x00000200, 0x00000000);
2224 + }
2225 +
2226 + static void
2227 +@@ -42,7 +42,6 @@ gk104_ibus_intr_rop(struct nvkm_subdev *ibus, int i)
2228 + u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0800));
2229 + u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0800));
2230 + nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat);
2231 +- nvkm_mask(device, 0x124128 + (i * 0x0800), 0x00000200, 0x00000000);
2232 + }
2233 +
2234 + static void
2235 +@@ -53,7 +52,6 @@ gk104_ibus_intr_gpc(struct nvkm_subdev *ibus, int i)
2236 + u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0800));
2237 + u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0800));
2238 + nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat);
2239 +- nvkm_mask(device, 0x128128 + (i * 0x0800), 0x00000200, 0x00000000);
2240 + }
2241 +
2242 + void
2243 +@@ -90,6 +88,12 @@ gk104_ibus_intr(struct nvkm_subdev *ibus)
2244 + intr1 &= ~stat;
2245 + }
2246 + }
2247 ++
2248 ++ nvkm_mask(device, 0x12004c, 0x0000003f, 0x00000002);
2249 ++ nvkm_msec(device, 2000,
2250 ++ if (!(nvkm_rd32(device, 0x12004c) & 0x0000003f))
2251 ++ break;
2252 ++ );
2253 + }
2254 +
2255 + static int
2256 +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c
2257 +index de91e9a261725..6d5212ae2fd57 100644
2258 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c
2259 ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c
2260 +@@ -316,9 +316,9 @@ nvkm_mmu_vram(struct nvkm_mmu *mmu)
2261 + {
2262 + struct nvkm_device *device = mmu->subdev.device;
2263 + struct nvkm_mm *mm = &device->fb->ram->vram;
2264 +- const u32 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
2265 +- const u32 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
2266 +- const u32 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
2267 ++ const u64 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL);
2268 ++ const u64 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP);
2269 ++ const u64 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED);
2270 + u8 type = NVKM_MEM_KIND * !!mmu->func->kind;
2271 + u8 heap = NVKM_MEM_VRAM;
2272 + int heapM, heapN, heapU;
2273 +diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
2274 +index afc178b0d89f4..eaba98e15de46 100644
2275 +--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
2276 ++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
2277 +@@ -1268,6 +1268,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
2278 + card->dai_link = dai_link;
2279 + card->num_links = 1;
2280 + card->name = vc4_hdmi->variant->card_name;
2281 ++ card->driver_name = "vc4-hdmi";
2282 + card->dev = dev;
2283 + card->owner = THIS_MODULE;
2284 +
2285 +diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
2286 +index 612629678c845..9b56226ce0d1c 100644
2287 +--- a/drivers/hid/Kconfig
2288 ++++ b/drivers/hid/Kconfig
2289 +@@ -899,6 +899,7 @@ config HID_SONY
2290 + depends on NEW_LEDS
2291 + depends on LEDS_CLASS
2292 + select POWER_SUPPLY
2293 ++ select CRC32
2294 + help
2295 + Support for
2296 +
2297 +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
2298 +index f170feaac40ba..94180c63571ed 100644
2299 +--- a/drivers/hid/hid-ids.h
2300 ++++ b/drivers/hid/hid-ids.h
2301 +@@ -387,6 +387,7 @@
2302 + #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401
2303 + #define USB_DEVICE_ID_HP_X2 0x074d
2304 + #define USB_DEVICE_ID_HP_X2_10_COVER 0x0755
2305 ++#define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706
2306 +
2307 + #define USB_VENDOR_ID_ELECOM 0x056e
2308 + #define USB_DEVICE_ID_ELECOM_BM084 0x0061
2309 +diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
2310 +index 4dca113924593..32024905fd70f 100644
2311 +--- a/drivers/hid/hid-input.c
2312 ++++ b/drivers/hid/hid-input.c
2313 +@@ -322,6 +322,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
2314 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
2315 + USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD),
2316 + HID_BATTERY_QUIRK_IGNORE },
2317 ++ { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
2318 ++ HID_BATTERY_QUIRK_IGNORE },
2319 + {}
2320 + };
2321 +
2322 +diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
2323 +index 1ffcfc9a1e033..45e7e0bdd382b 100644
2324 +--- a/drivers/hid/hid-logitech-dj.c
2325 ++++ b/drivers/hid/hid-logitech-dj.c
2326 +@@ -1869,6 +1869,10 @@ static const struct hid_device_id logi_dj_receivers[] = {
2327 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
2328 + 0xc531),
2329 + .driver_data = recvr_type_gaming_hidpp},
2330 ++ { /* Logitech G602 receiver (0xc537) */
2331 ++ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
2332 ++ 0xc537),
2333 ++ .driver_data = recvr_type_gaming_hidpp},
2334 + { /* Logitech lightspeed receiver (0xc539) */
2335 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
2336 + USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1),
2337 +diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
2338 +index 0ca7231195473..74ebfb12c360e 100644
2339 +--- a/drivers/hid/hid-logitech-hidpp.c
2340 ++++ b/drivers/hid/hid-logitech-hidpp.c
2341 +@@ -4051,6 +4051,8 @@ static const struct hid_device_id hidpp_devices[] = {
2342 + { /* MX Master mouse over Bluetooth */
2343 + HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb012),
2344 + .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
2345 ++ { /* MX Ergo trackball over Bluetooth */
2346 ++ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01d) },
2347 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01e),
2348 + .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
2349 + { /* MX Master 3 mouse over Bluetooth */
2350 +diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
2351 +index d670bcd57bdef..0743ef51d3b24 100644
2352 +--- a/drivers/hid/hid-multitouch.c
2353 ++++ b/drivers/hid/hid-multitouch.c
2354 +@@ -2054,6 +2054,10 @@ static const struct hid_device_id mt_devices[] = {
2355 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
2356 + USB_VENDOR_ID_SYNAPTICS, 0xce08) },
2357 +
2358 ++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
2359 ++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
2360 ++ USB_VENDOR_ID_SYNAPTICS, 0xce09) },
2361 ++
2362 + /* TopSeed panels */
2363 + { .driver_data = MT_CLS_TOPSEED,
2364 + MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2,
2365 +diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
2366 +index 4fad3e6745e53..a5a402e776c77 100644
2367 +--- a/drivers/hv/vmbus_drv.c
2368 ++++ b/drivers/hv/vmbus_drv.c
2369 +@@ -2542,7 +2542,6 @@ static void hv_kexec_handler(void)
2370 + /* Make sure conn_state is set as hv_synic_cleanup checks for it */
2371 + mb();
2372 + cpuhp_remove_state(hyperv_cpuhp_online);
2373 +- hyperv_cleanup();
2374 + };
2375 +
2376 + static void hv_crash_handler(struct pt_regs *regs)
2377 +@@ -2558,7 +2557,6 @@ static void hv_crash_handler(struct pt_regs *regs)
2378 + cpu = smp_processor_id();
2379 + hv_stimer_cleanup(cpu);
2380 + hv_synic_disable_regs(cpu);
2381 +- hyperv_cleanup();
2382 + };
2383 +
2384 + static int hv_synic_suspend(void)
2385 +diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
2386 +index 52acd77438ede..251e75c9ba9d0 100644
2387 +--- a/drivers/hwtracing/intel_th/pci.c
2388 ++++ b/drivers/hwtracing/intel_th/pci.c
2389 +@@ -268,6 +268,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
2390 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7aa6),
2391 + .driver_data = (kernel_ulong_t)&intel_th_2x,
2392 + },
2393 ++ {
2394 ++ /* Alder Lake-P */
2395 ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x51a6),
2396 ++ .driver_data = (kernel_ulong_t)&intel_th_2x,
2397 ++ },
2398 + {
2399 + /* Alder Lake CPU */
2400 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
2401 +diff --git a/drivers/hwtracing/stm/heartbeat.c b/drivers/hwtracing/stm/heartbeat.c
2402 +index 3e7df1c0477f7..81d7b21d31ec2 100644
2403 +--- a/drivers/hwtracing/stm/heartbeat.c
2404 ++++ b/drivers/hwtracing/stm/heartbeat.c
2405 +@@ -64,7 +64,7 @@ static void stm_heartbeat_unlink(struct stm_source_data *data)
2406 +
2407 + static int stm_heartbeat_init(void)
2408 + {
2409 +- int i, ret = -ENOMEM;
2410 ++ int i, ret;
2411 +
2412 + if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX)
2413 + return -EINVAL;
2414 +@@ -72,8 +72,10 @@ static int stm_heartbeat_init(void)
2415 + for (i = 0; i < nr_devs; i++) {
2416 + stm_heartbeat[i].data.name =
2417 + kasprintf(GFP_KERNEL, "heartbeat.%d", i);
2418 +- if (!stm_heartbeat[i].data.name)
2419 ++ if (!stm_heartbeat[i].data.name) {
2420 ++ ret = -ENOMEM;
2421 + goto fail_unregister;
2422 ++ }
2423 +
2424 + stm_heartbeat[i].data.nr_chans = 1;
2425 + stm_heartbeat[i].data.link = stm_heartbeat_link;
2426 +diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
2427 +index a49e0ed4a599d..7e693dcbdd196 100644
2428 +--- a/drivers/i2c/busses/Kconfig
2429 ++++ b/drivers/i2c/busses/Kconfig
2430 +@@ -1012,6 +1012,7 @@ config I2C_SIRF
2431 + config I2C_SPRD
2432 + tristate "Spreadtrum I2C interface"
2433 + depends on I2C=y && (ARCH_SPRD || COMPILE_TEST)
2434 ++ depends on COMMON_CLK
2435 + help
2436 + If you say yes to this option, support will be included for the
2437 + Spreadtrum I2C interface.
2438 +diff --git a/drivers/i2c/busses/i2c-octeon-core.c b/drivers/i2c/busses/i2c-octeon-core.c
2439 +index d9607905dc2f1..845eda70b8cab 100644
2440 +--- a/drivers/i2c/busses/i2c-octeon-core.c
2441 ++++ b/drivers/i2c/busses/i2c-octeon-core.c
2442 +@@ -347,7 +347,7 @@ static int octeon_i2c_read(struct octeon_i2c *i2c, int target,
2443 + if (result)
2444 + return result;
2445 + if (recv_len && i == 0) {
2446 +- if (data[i] > I2C_SMBUS_BLOCK_MAX + 1)
2447 ++ if (data[i] > I2C_SMBUS_BLOCK_MAX)
2448 + return -EPROTO;
2449 + length += data[i];
2450 + }
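
For the octeon hunk above: an SMBus block read starts with a length byte that
may be at most I2C_SMBUS_BLOCK_MAX (32); the old bound of BLOCK_MAX + 1 was
off by one and accepted a bogus length of 33. The corrected check in
isolation:

    #include <errno.h>
    #include <stdio.h>

    #define I2C_SMBUS_BLOCK_MAX 32    /* as in <linux/i2c.h> */

    static int check_block_len(unsigned int len)
    {
        /* a block read may carry at most 32 data bytes */
        if (len > I2C_SMBUS_BLOCK_MAX)
            return -EPROTO;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", check_block_len(32));    /* 0: accepted */
        printf("%d\n", check_block_len(33));    /* rejected; passed before */
        return 0;
    }
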
2451 +diff --git a/drivers/i2c/busses/i2c-tegra-bpmp.c b/drivers/i2c/busses/i2c-tegra-bpmp.c
2452 +index ec7a7e917eddb..c0c7d01473f2b 100644
2453 +--- a/drivers/i2c/busses/i2c-tegra-bpmp.c
2454 ++++ b/drivers/i2c/busses/i2c-tegra-bpmp.c
2455 +@@ -80,7 +80,7 @@ static int tegra_bpmp_xlate_flags(u16 flags, u16 *out)
2456 + flags &= ~I2C_M_RECV_LEN;
2457 + }
2458 +
2459 +- return (flags != 0) ? -EINVAL : 0;
2460 ++ return 0;
2461 + }
2462 +
2463 + /**
2464 +diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
2465 +index 6f08c0c3238d5..0727383f49402 100644
2466 +--- a/drivers/i2c/busses/i2c-tegra.c
2467 ++++ b/drivers/i2c/busses/i2c-tegra.c
2468 +@@ -533,7 +533,7 @@ static int tegra_i2c_poll_register(struct tegra_i2c_dev *i2c_dev,
2469 + void __iomem *addr = i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg);
2470 + u32 val;
2471 +
2472 +- if (!i2c_dev->atomic_mode)
2473 ++ if (!i2c_dev->atomic_mode && !in_irq())
2474 + return readl_relaxed_poll_timeout(addr, val, !(val & mask),
2475 + delay_us, timeout_us);
2476 +
2477 +diff --git a/drivers/iio/adc/ti_am335x_adc.c b/drivers/iio/adc/ti_am335x_adc.c
2478 +index b11c8c47ba2aa..e946903b09936 100644
2479 +--- a/drivers/iio/adc/ti_am335x_adc.c
2480 ++++ b/drivers/iio/adc/ti_am335x_adc.c
2481 +@@ -397,16 +397,12 @@ static int tiadc_iio_buffered_hardware_setup(struct device *dev,
2482 + ret = devm_request_threaded_irq(dev, irq, pollfunc_th, pollfunc_bh,
2483 + flags, indio_dev->name, indio_dev);
2484 + if (ret)
2485 +- goto error_kfifo_free;
2486 ++ return ret;
2487 +
2488 + indio_dev->setup_ops = setup_ops;
2489 + indio_dev->modes |= INDIO_BUFFER_SOFTWARE;
2490 +
2491 + return 0;
2492 +-
2493 +-error_kfifo_free:
2494 +- iio_kfifo_free(indio_dev->buffer);
2495 +- return ret;
2496 + }
2497 +
2498 + static const char * const chan_name_ain[] = {
2499 +diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c
2500 +index 0507283bd4c1d..2dbd2646e44e9 100644
2501 +--- a/drivers/iio/common/st_sensors/st_sensors_trigger.c
2502 ++++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c
2503 +@@ -23,35 +23,31 @@
2504 + * @sdata: Sensor data.
2505 + *
2506 + * returns:
2507 +- * 0 - no new samples available
2508 +- * 1 - new samples available
2509 +- * negative - error or unknown
2510 ++ * false - no new samples available or read error
2511 ++ * true - new samples available
2512 + */
2513 +-static int st_sensors_new_samples_available(struct iio_dev *indio_dev,
2514 +- struct st_sensor_data *sdata)
2515 ++static bool st_sensors_new_samples_available(struct iio_dev *indio_dev,
2516 ++ struct st_sensor_data *sdata)
2517 + {
2518 + int ret, status;
2519 +
2520 + /* How would I know if I can't check it? */
2521 + if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr)
2522 +- return -EINVAL;
2523 ++ return true;
2524 +
2525 + /* No scan mask, no interrupt */
2526 + if (!indio_dev->active_scan_mask)
2527 +- return 0;
2528 ++ return false;
2529 +
2530 + ret = regmap_read(sdata->regmap,
2531 + sdata->sensor_settings->drdy_irq.stat_drdy.addr,
2532 + &status);
2533 + if (ret < 0) {
2534 + dev_err(sdata->dev, "error checking samples available\n");
2535 +- return ret;
2536 ++ return false;
2537 + }
2538 +
2539 +- if (status & sdata->sensor_settings->drdy_irq.stat_drdy.mask)
2540 +- return 1;
2541 +-
2542 +- return 0;
2543 ++ return !!(status & sdata->sensor_settings->drdy_irq.stat_drdy.mask);
2544 + }
2545 +
2546 + /**
2547 +@@ -180,9 +176,15 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
2548 +
2549 + /* Tell the interrupt handler that we're dealing with edges */
2550 + if (irq_trig == IRQF_TRIGGER_FALLING ||
2551 +- irq_trig == IRQF_TRIGGER_RISING)
2552 ++ irq_trig == IRQF_TRIGGER_RISING) {
2553 ++ if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) {
2554 ++ dev_err(&indio_dev->dev,
2555 ++ "edge IRQ not supported w/o stat register.\n");
2556 ++ err = -EOPNOTSUPP;
2557 ++ goto iio_trigger_free;
2558 ++ }
2559 + sdata->edge_irq = true;
2560 +- else
2561 ++ } else {
2562 + /*
2563 + * If we're not using edges (i.e. level interrupts) we
2564 + * just mask off the IRQ, handle one interrupt, then
2565 +@@ -190,6 +192,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
2566 + * interrupt handler top half again and start over.
2567 + */
2568 + irq_trig |= IRQF_ONESHOT;
2569 ++ }
2570 +
2571 + /*
2572 + * If the interrupt pin is Open Drain, by definition this
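
The st_sensors change above does two things: the sample-available helper
becomes a plain bool, where a missing status register now means "assume
samples are there" instead of an error, and edge-triggered IRQs are refused
outright when there is no status register, since an edge that fires while the
handler is running would otherwise be lost for good. The decision in sketch
form (the IRQF_* values mirror <linux/interrupt.h>; the rest is
illustrative):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define IRQF_TRIGGER_RISING  0x00000001
    #define IRQF_TRIGGER_FALLING 0x00000002
    #define IRQF_ONESHOT         0x00002000

    static int setup_trigger(unsigned long *irq_trig, bool has_stat_reg,
                             bool *edge_irq)
    {
        if (*irq_trig & (IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING)) {
            if (!has_stat_reg)
                return -EOPNOTSUPP;    /* edges need a drdy register */
            *edge_irq = true;
        } else {
            /* level IRQ: mask, handle one interrupt, re-enable */
            *irq_trig |= IRQF_ONESHOT;
        }
        return 0;
    }

    int main(void)
    {
        unsigned long trig = IRQF_TRIGGER_RISING;
        bool edge = false;

        printf("%d\n", setup_trigger(&trig, false, &edge));  /* -EOPNOTSUPP */
        return 0;
    }
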
2573 +diff --git a/drivers/iio/dac/ad5504.c b/drivers/iio/dac/ad5504.c
2574 +index 28921b62e6420..e9297c25d4ef6 100644
2575 +--- a/drivers/iio/dac/ad5504.c
2576 ++++ b/drivers/iio/dac/ad5504.c
2577 +@@ -187,9 +187,9 @@ static ssize_t ad5504_write_dac_powerdown(struct iio_dev *indio_dev,
2578 + return ret;
2579 +
2580 + if (pwr_down)
2581 +- st->pwr_down_mask |= (1 << chan->channel);
2582 +- else
2583 + st->pwr_down_mask &= ~(1 << chan->channel);
2584 ++ else
2585 ++ st->pwr_down_mask |= (1 << chan->channel);
2586 +
2587 + ret = ad5504_spi_write(st, AD5504_ADDR_CTRL,
2588 + AD5504_DAC_PWRDWN_MODE(st->pwr_down_mode) |
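
The ad5504 fix above is a straight logic inversion: pwr_down_mask is written
into the DAC power-control register, where a set bit means the channel is
powered up (the powerdown read-back negates it), so a power-down request has
to clear the bit, not set it. In miniature:

    #include <stdbool.h>
    #include <stdio.h>

    /* a set bit in pwr_down_mask means the channel is powered UP */
    static unsigned int pwr_down_mask;

    static void set_powerdown(unsigned int chan, bool pwr_down)
    {
        if (pwr_down)
            pwr_down_mask &= ~(1u << chan);    /* power the channel down */
        else
            pwr_down_mask |= (1u << chan);     /* power the channel up */
    }

    int main(void)
    {
        set_powerdown(0, false);
        set_powerdown(1, true);
        printf("mask = 0x%x\n", pwr_down_mask);    /* 0x1: channel 0 up */
        return 0;
    }
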
2589 +diff --git a/drivers/iio/temperature/mlx90632.c b/drivers/iio/temperature/mlx90632.c
2590 +index 503fe54a0bb93..608ccb1d8bc82 100644
2591 +--- a/drivers/iio/temperature/mlx90632.c
2592 ++++ b/drivers/iio/temperature/mlx90632.c
2593 +@@ -248,6 +248,12 @@ static int mlx90632_set_meas_type(struct regmap *regmap, u8 type)
2594 + if (ret < 0)
2595 + return ret;
2596 +
2597 ++ /*
2598 ++ * Give the mlx90632 some time to reset properly before sending a new I2C command;
2599 ++ * if this is not done, the following I2C command(s) will not be accepted.
2600 ++ */
2601 ++ usleep_range(150, 200);
2602 ++
2603 + ret = regmap_write_bits(regmap, MLX90632_REG_CONTROL,
2604 + (MLX90632_CFG_MTYP_MASK | MLX90632_CFG_PWR_MASK),
2605 + (MLX90632_MTYP_STATUS(type) | MLX90632_PWR_STATUS_HALT));
2606 +diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c
2607 +index 7ec4af2ed87ab..35d1ec1095f9c 100644
2608 +--- a/drivers/infiniband/core/cma_configfs.c
2609 ++++ b/drivers/infiniband/core/cma_configfs.c
2610 +@@ -131,8 +131,10 @@ static ssize_t default_roce_mode_store(struct config_item *item,
2611 + return ret;
2612 +
2613 + gid_type = ib_cache_gid_parse_type_str(buf);
2614 +- if (gid_type < 0)
2615 ++ if (gid_type < 0) {
2616 ++ cma_configfs_params_put(cma_dev);
2617 + return -EINVAL;
2618 ++ }
2619 +
2620 + ret = cma_set_default_gid_type(cma_dev, group->port_num, gid_type);
2621 +
2622 +diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
2623 +index ffe2563ad3456..2cc785c1970b4 100644
2624 +--- a/drivers/infiniband/core/ucma.c
2625 ++++ b/drivers/infiniband/core/ucma.c
2626 +@@ -95,8 +95,6 @@ struct ucma_context {
2627 + u64 uid;
2628 +
2629 + struct list_head list;
2630 +- /* sync between removal event and id destroy, protected by file mut */
2631 +- int destroying;
2632 + struct work_struct close_work;
2633 + };
2634 +
2635 +@@ -122,7 +120,7 @@ static DEFINE_XARRAY_ALLOC(ctx_table);
2636 + static DEFINE_XARRAY_ALLOC(multicast_table);
2637 +
2638 + static const struct file_operations ucma_fops;
2639 +-static int __destroy_id(struct ucma_context *ctx);
2640 ++static int ucma_destroy_private_ctx(struct ucma_context *ctx);
2641 +
2642 + static inline struct ucma_context *_ucma_find_context(int id,
2643 + struct ucma_file *file)
2644 +@@ -179,19 +177,14 @@ static void ucma_close_id(struct work_struct *work)
2645 +
2646 + /* once all inflight tasks are finished, we close all underlying
2647 + * resources. The context is still alive till its explicit destroying
2648 +- * by its creator.
2649 ++ * by its creator. This puts back the xarray's reference.
2650 + */
2651 + ucma_put_ctx(ctx);
2652 + wait_for_completion(&ctx->comp);
2653 + /* No new events will be generated after destroying the id. */
2654 + rdma_destroy_id(ctx->cm_id);
2655 +
2656 +- /*
2657 +- * At this point ctx->ref is zero so the only place the ctx can be is in
2658 +- * a uevent or in __destroy_id(). Since the former doesn't touch
2659 +- * ctx->cm_id and the latter sync cancels this, there is no races with
2660 +- * this store.
2661 +- */
2662 ++ /* Reading the cm_id without holding a positive ref is not allowed */
2663 + ctx->cm_id = NULL;
2664 + }
2665 +
2666 +@@ -204,7 +197,6 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
2667 + return NULL;
2668 +
2669 + INIT_WORK(&ctx->close_work, ucma_close_id);
2670 +- refcount_set(&ctx->ref, 1);
2671 + init_completion(&ctx->comp);
2672 + /* So list_del() will work if we don't do ucma_finish_ctx() */
2673 + INIT_LIST_HEAD(&ctx->list);
2674 +@@ -218,6 +210,13 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
2675 + return ctx;
2676 + }
2677 +
2678 ++static void ucma_set_ctx_cm_id(struct ucma_context *ctx,
2679 ++ struct rdma_cm_id *cm_id)
2680 ++{
2681 ++ refcount_set(&ctx->ref, 1);
2682 ++ ctx->cm_id = cm_id;
2683 ++}
2684 ++
2685 + static void ucma_finish_ctx(struct ucma_context *ctx)
2686 + {
2687 + lockdep_assert_held(&ctx->file->mut);
2688 +@@ -303,7 +302,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id,
2689 + ctx = ucma_alloc_ctx(listen_ctx->file);
2690 + if (!ctx)
2691 + goto err_backlog;
2692 +- ctx->cm_id = cm_id;
2693 ++ ucma_set_ctx_cm_id(ctx, cm_id);
2694 +
2695 + uevent = ucma_create_uevent(listen_ctx, event);
2696 + if (!uevent)
2697 +@@ -321,8 +320,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id,
2698 + return 0;
2699 +
2700 + err_alloc:
2701 +- xa_erase(&ctx_table, ctx->id);
2702 +- kfree(ctx);
2703 ++ ucma_destroy_private_ctx(ctx);
2704 + err_backlog:
2705 + atomic_inc(&listen_ctx->backlog);
2706 + /* Returning error causes the new ID to be destroyed */
2707 +@@ -356,8 +354,12 @@ static int ucma_event_handler(struct rdma_cm_id *cm_id,
2708 + wake_up_interruptible(&ctx->file->poll_wait);
2709 + }
2710 +
2711 +- if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL && !ctx->destroying)
2712 +- queue_work(system_unbound_wq, &ctx->close_work);
2713 ++ if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
2714 ++ xa_lock(&ctx_table);
2715 ++ if (xa_load(&ctx_table, ctx->id) == ctx)
2716 ++ queue_work(system_unbound_wq, &ctx->close_work);
2717 ++ xa_unlock(&ctx_table);
2718 ++ }
2719 + return 0;
2720 + }
2721 +
2722 +@@ -461,13 +463,12 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
2723 + ret = PTR_ERR(cm_id);
2724 + goto err1;
2725 + }
2726 +- ctx->cm_id = cm_id;
2727 ++ ucma_set_ctx_cm_id(ctx, cm_id);
2728 +
2729 + resp.id = ctx->id;
2730 + if (copy_to_user(u64_to_user_ptr(cmd.response),
2731 + &resp, sizeof(resp))) {
2732 +- xa_erase(&ctx_table, ctx->id);
2733 +- __destroy_id(ctx);
2734 ++ ucma_destroy_private_ctx(ctx);
2735 + return -EFAULT;
2736 + }
2737 +
2738 +@@ -477,8 +478,7 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
2739 + return 0;
2740 +
2741 + err1:
2742 +- xa_erase(&ctx_table, ctx->id);
2743 +- kfree(ctx);
2744 ++ ucma_destroy_private_ctx(ctx);
2745 + return ret;
2746 + }
2747 +
2748 +@@ -516,68 +516,73 @@ static void ucma_cleanup_mc_events(struct ucma_multicast *mc)
2749 + rdma_unlock_handler(mc->ctx->cm_id);
2750 + }
2751 +
2752 +-/*
2753 +- * ucma_free_ctx is called after the underlying rdma CM-ID is destroyed. At
2754 +- * this point, no new events will be reported from the hardware. However, we
2755 +- * still need to cleanup the UCMA context for this ID. Specifically, there
2756 +- * might be events that have not yet been consumed by the user space software.
2757 +- * mutex. After that we release them as needed.
2758 +- */
2759 +-static int ucma_free_ctx(struct ucma_context *ctx)
2760 ++static int ucma_cleanup_ctx_events(struct ucma_context *ctx)
2761 + {
2762 + int events_reported;
2763 + struct ucma_event *uevent, *tmp;
2764 + LIST_HEAD(list);
2765 +
2766 +- ucma_cleanup_multicast(ctx);
2767 +-
2768 +- /* Cleanup events not yet reported to the user. */
2769 ++ /* Cleanup events not yet reported to the user.*/
2770 + mutex_lock(&ctx->file->mut);
2771 + list_for_each_entry_safe(uevent, tmp, &ctx->file->event_list, list) {
2772 +- if (uevent->ctx == ctx || uevent->conn_req_ctx == ctx)
2773 ++ if (uevent->ctx != ctx)
2774 ++ continue;
2775 ++
2776 ++ if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
2777 ++ xa_cmpxchg(&ctx_table, uevent->conn_req_ctx->id,
2778 ++ uevent->conn_req_ctx, XA_ZERO_ENTRY,
2779 ++ GFP_KERNEL) == uevent->conn_req_ctx) {
2780 + list_move_tail(&uevent->list, &list);
2781 ++ continue;
2782 ++ }
2783 ++ list_del(&uevent->list);
2784 ++ kfree(uevent);
2785 + }
2786 + list_del(&ctx->list);
2787 + events_reported = ctx->events_reported;
2788 + mutex_unlock(&ctx->file->mut);
2789 +
2790 + /*
2791 +- * If this was a listening ID then any connections spawned from it
2792 +- * that have not been delivered to userspace are cleaned up too.
2793 +- * Must be done outside any locks.
2794 ++ * If this was a listening ID then any connections spawned from it that
2795 ++ * have not been delivered to userspace are cleaned up too. Must be done
2796 ++ * outside any locks.
2797 + */
2798 + list_for_each_entry_safe(uevent, tmp, &list, list) {
2799 +- list_del(&uevent->list);
2800 +- if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST &&
2801 +- uevent->conn_req_ctx != ctx)
2802 +- __destroy_id(uevent->conn_req_ctx);
2803 ++ ucma_destroy_private_ctx(uevent->conn_req_ctx);
2804 + kfree(uevent);
2805 + }
2806 +-
2807 +- mutex_destroy(&ctx->mutex);
2808 +- kfree(ctx);
2809 + return events_reported;
2810 + }
2811 +
2812 +-static int __destroy_id(struct ucma_context *ctx)
2813 ++/*
2814 ++ * When this is called the xarray must have a XA_ZERO_ENTRY in the ctx->id (ie
2815 ++ * the ctx is not public to the user). This either because:
2816 ++ * - ucma_finish_ctx() hasn't been called
2817 ++ * - xa_cmpxchg() succeed to remove the entry (only one thread can succeed)
2818 ++ */
2819 ++static int ucma_destroy_private_ctx(struct ucma_context *ctx)
2820 + {
2821 ++ int events_reported;
2822 ++
2823 + /*
2824 +- * If the refcount is already 0 then ucma_close_id() has already
2825 +- * destroyed the cm_id, otherwise holding the refcount keeps cm_id
2826 +- * valid. Prevent queue_work() from being called.
2827 ++ * Destroy the underlying cm_id. New work queuing is prevented now by
2828 ++ * the removal from the xarray. Once the work is cancelled, the ref will either
2829 ++ * be 0 because the work ran to completion and consumed the ref from the
2830 ++ * xarray, or it will be positive because we still have the ref from the
2831 ++ * xarray. This can also be 0 in cases where cm_id was never set
2832 + */
2833 +- if (refcount_inc_not_zero(&ctx->ref)) {
2834 +- rdma_lock_handler(ctx->cm_id);
2835 +- ctx->destroying = 1;
2836 +- rdma_unlock_handler(ctx->cm_id);
2837 +- ucma_put_ctx(ctx);
2838 +- }
2839 +-
2840 + cancel_work_sync(&ctx->close_work);
2841 +- /* At this point it's guaranteed that there is no inflight closing task */
2842 +- if (ctx->cm_id)
2843 ++ if (refcount_read(&ctx->ref))
2844 + ucma_close_id(&ctx->close_work);
2845 +- return ucma_free_ctx(ctx);
2846 ++
2847 ++ events_reported = ucma_cleanup_ctx_events(ctx);
2848 ++ ucma_cleanup_multicast(ctx);
2849 ++
2850 ++ WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, XA_ZERO_ENTRY, NULL,
2851 ++ GFP_KERNEL) != NULL);
2852 ++ mutex_destroy(&ctx->mutex);
2853 ++ kfree(ctx);
2854 ++ return events_reported;
2855 + }
2856 +
2857 + static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
2858 +@@ -596,14 +601,17 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf,
2859 +
2860 + xa_lock(&ctx_table);
2861 + ctx = _ucma_find_context(cmd.id, file);
2862 +- if (!IS_ERR(ctx))
2863 +- __xa_erase(&ctx_table, ctx->id);
2864 ++ if (!IS_ERR(ctx)) {
2865 ++ if (__xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
2866 ++ GFP_KERNEL) != ctx)
2867 ++ ctx = ERR_PTR(-ENOENT);
2868 ++ }
2869 + xa_unlock(&ctx_table);
2870 +
2871 + if (IS_ERR(ctx))
2872 + return PTR_ERR(ctx);
2873 +
2874 +- resp.events_reported = __destroy_id(ctx);
2875 ++ resp.events_reported = ucma_destroy_private_ctx(ctx);
2876 + if (copy_to_user(u64_to_user_ptr(cmd.response),
2877 + &resp, sizeof(resp)))
2878 + ret = -EFAULT;
2879 +@@ -1777,15 +1785,16 @@ static int ucma_close(struct inode *inode, struct file *filp)
2880 + * prevented by this being a FD release function. The list_add_tail() in
2881 + * ucma_connect_event_handler() can run concurrently, however it only
2882 + * adds to the list *after* a listening ID. By only reading the first of
2883 +- * the list, and relying on __destroy_id() to block
2884 ++ * the list, and relying on ucma_destroy_private_ctx() to block
2885 + * ucma_connect_event_handler(), no additional locking is needed.
2886 + */
2887 + while (!list_empty(&file->ctx_list)) {
2888 + struct ucma_context *ctx = list_first_entry(
2889 + &file->ctx_list, struct ucma_context, list);
2890 +
2891 +- xa_erase(&ctx_table, ctx->id);
2892 +- __destroy_id(ctx);
2893 ++ WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY,
2894 ++ GFP_KERNEL) != ctx);
2895 ++ ucma_destroy_private_ctx(ctx);
2896 + }
2897 + kfree(file);
2898 + return 0;
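
The ucma rework above makes context teardown single-winner: whichever thread swaps the public xarray entry for XA_ZERO_ENTRY via xa_cmpxchg() owns destruction, and the zero entry keeps the id reserved so concurrent lookups fail instead of racing. A minimal userspace sketch of the same claim-once idea, with a C11 compare-and-swap standing in for xa_cmpxchg() (claim_destroy() and slot are illustrative names, not kernel API):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct ctx { int id; };

    /* The slot plays the role of the xarray entry: atomically replacing
     * the public pointer (here with NULL, in the kernel with
     * XA_ZERO_ENTRY) elects exactly one destroying thread. */
    static _Atomic(struct ctx *) slot;

    static int claim_destroy(struct ctx *expected)
    {
            return atomic_compare_exchange_strong(&slot, &expected, NULL);
    }

    int main(void)
    {
            struct ctx *c = malloc(sizeof(*c));

            c->id = 42;
            atomic_store(&slot, c);

            if (claim_destroy(c))   /* first caller wins the teardown */
                    printf("destroying ctx %d\n", c->id);
            if (!claim_destroy(c))  /* any later caller loses cleanly */
                    printf("already claimed\n");

            free(c);
            return 0;
    }
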
2899 +diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
2900 +index e9fecbdf391bc..5157ae29a4460 100644
2901 +--- a/drivers/infiniband/core/umem.c
2902 ++++ b/drivers/infiniband/core/umem.c
2903 +@@ -126,7 +126,7 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
2904 + */
2905 + if (mask)
2906 + pgsz_bitmap &= GENMASK(count_trailing_zeros(mask), 0);
2907 +- return rounddown_pow_of_two(pgsz_bitmap);
2908 ++ return pgsz_bitmap ? rounddown_pow_of_two(pgsz_bitmap) : 0;
2909 + }
2910 + EXPORT_SYMBOL(ib_umem_find_best_pgsz);
2911 +
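
The one-line umem fix above guards against pgsz_bitmap reaching zero: the kernel's rounddown_pow_of_two() is built on ilog2()/fls(), for which an input of 0 is undefined, so the caller must return 0 explicitly. A small standalone version of the guarded helper (a loop-based reimplementation for illustration, not the kernel's):

    #include <assert.h>
    #include <stdio.h>

    static unsigned long rounddown_pow_of_two_safe(unsigned long x)
    {
            if (!x)
                    return 0;       /* the case the patch adds a check for */
            while (x & (x - 1))
                    x &= x - 1;     /* clear low bits until one remains */
            return x;
    }

    int main(void)
    {
            assert(rounddown_pow_of_two_safe(0) == 0);
            assert(rounddown_pow_of_two_safe(4096) == 4096);
            assert(rounddown_pow_of_two_safe(0x3000) == 0x2000);
            printf("ok\n");
            return 0;
    }
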
2912 +diff --git a/drivers/interconnect/imx/imx8mq.c b/drivers/interconnect/imx/imx8mq.c
2913 +index ba43a15aefec0..d7768d3c6d8aa 100644
2914 +--- a/drivers/interconnect/imx/imx8mq.c
2915 ++++ b/drivers/interconnect/imx/imx8mq.c
2916 +@@ -7,6 +7,7 @@
2917 +
2918 + #include <linux/module.h>
2919 + #include <linux/platform_device.h>
2920 ++#include <linux/interconnect-provider.h>
2921 + #include <dt-bindings/interconnect/imx8mq.h>
2922 +
2923 + #include "imx.h"
2924 +@@ -94,6 +95,7 @@ static struct platform_driver imx8mq_icc_driver = {
2925 + .remove = imx8mq_icc_remove,
2926 + .driver = {
2927 + .name = "imx8mq-interconnect",
2928 ++ .sync_state = icc_sync_state,
2929 + },
2930 + };
2931 +
2932 +diff --git a/drivers/irqchip/irq-mips-cpu.c b/drivers/irqchip/irq-mips-cpu.c
2933 +index 95d4fd8f7a968..0bbb0b2d0dd5f 100644
2934 +--- a/drivers/irqchip/irq-mips-cpu.c
2935 ++++ b/drivers/irqchip/irq-mips-cpu.c
2936 +@@ -197,6 +197,13 @@ static int mips_cpu_ipi_alloc(struct irq_domain *domain, unsigned int virq,
2937 + if (ret)
2938 + return ret;
2939 +
2940 ++ ret = irq_domain_set_hwirq_and_chip(domain->parent, virq + i, hwirq,
2941 ++ &mips_mt_cpu_irq_controller,
2942 ++ NULL);
2943 ++
2944 ++ if (ret)
2945 ++ return ret;
2946 ++
2947 + ret = irq_set_irq_type(virq + i, IRQ_TYPE_LEVEL_HIGH);
2948 + if (ret)
2949 + return ret;
2950 +diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
2951 +index c1bcac71008c6..28ddcaa5358b1 100644
2952 +--- a/drivers/lightnvm/core.c
2953 ++++ b/drivers/lightnvm/core.c
2954 +@@ -844,11 +844,10 @@ static int nvm_bb_chunk_sense(struct nvm_dev *dev, struct ppa_addr ppa)
2955 + rqd.ppa_addr = generic_to_dev_addr(dev, ppa);
2956 +
2957 + ret = nvm_submit_io_sync_raw(dev, &rqd);
2958 ++ __free_page(page);
2959 + if (ret)
2960 + return ret;
2961 +
2962 +- __free_page(page);
2963 +-
2964 + return rqd.error;
2965 + }
2966 +
2967 +diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
2968 +index 0e04d3718af3c..2cefb075b2b84 100644
2969 +--- a/drivers/md/Kconfig
2970 ++++ b/drivers/md/Kconfig
2971 +@@ -585,6 +585,7 @@ config DM_INTEGRITY
2972 + select BLK_DEV_INTEGRITY
2973 + select DM_BUFIO
2974 + select CRYPTO
2975 ++ select CRYPTO_SKCIPHER
2976 + select ASYNC_XOR
2977 + help
2978 + This device-mapper target emulates a block device that has
2979 +diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
2980 +index 89de9cde02028..875823d6ee7e0 100644
2981 +--- a/drivers/md/dm-crypt.c
2982 ++++ b/drivers/md/dm-crypt.c
2983 +@@ -1481,9 +1481,9 @@ static int crypt_alloc_req_skcipher(struct crypt_config *cc,
2984 + static int crypt_alloc_req_aead(struct crypt_config *cc,
2985 + struct convert_context *ctx)
2986 + {
2987 +- if (!ctx->r.req) {
2988 +- ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
2989 +- if (!ctx->r.req)
2990 ++ if (!ctx->r.req_aead) {
2991 ++ ctx->r.req_aead = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO);
2992 ++ if (!ctx->r.req_aead)
2993 + return -ENOMEM;
2994 + }
2995 +
2996 +diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
2997 +index 81df019ab284a..b64fede032dc5 100644
2998 +--- a/drivers/md/dm-integrity.c
2999 ++++ b/drivers/md/dm-integrity.c
3000 +@@ -257,8 +257,9 @@ struct dm_integrity_c {
3001 + bool journal_uptodate;
3002 + bool just_formatted;
3003 + bool recalculate_flag;
3004 +- bool fix_padding;
3005 + bool discard;
3006 ++ bool fix_padding;
3007 ++ bool legacy_recalculate;
3008 +
3009 + struct alg_spec internal_hash_alg;
3010 + struct alg_spec journal_crypt_alg;
3011 +@@ -386,6 +387,14 @@ static int dm_integrity_failed(struct dm_integrity_c *ic)
3012 + return READ_ONCE(ic->failed);
3013 + }
3014 +
3015 ++static bool dm_integrity_disable_recalculate(struct dm_integrity_c *ic)
3016 ++{
3017 ++ if ((ic->internal_hash_alg.key || ic->journal_mac_alg.key) &&
3018 ++ !ic->legacy_recalculate)
3019 ++ return true;
3020 ++ return false;
3021 ++}
3022 ++
3023 + static commit_id_t dm_integrity_commit_id(struct dm_integrity_c *ic, unsigned i,
3024 + unsigned j, unsigned char seq)
3025 + {
3026 +@@ -3140,6 +3149,7 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
3027 + arg_count += !!ic->journal_crypt_alg.alg_string;
3028 + arg_count += !!ic->journal_mac_alg.alg_string;
3029 + arg_count += (ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0;
3030 ++ arg_count += ic->legacy_recalculate;
3031 + DMEMIT("%s %llu %u %c %u", ic->dev->name, ic->start,
3032 + ic->tag_size, ic->mode, arg_count);
3033 + if (ic->meta_dev)
3034 +@@ -3163,6 +3173,8 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
3035 + }
3036 + if ((ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0)
3037 + DMEMIT(" fix_padding");
3038 ++ if (ic->legacy_recalculate)
3039 ++ DMEMIT(" legacy_recalculate");
3040 +
3041 + #define EMIT_ALG(a, n) \
3042 + do { \
3043 +@@ -3792,7 +3804,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
3044 + unsigned extra_args;
3045 + struct dm_arg_set as;
3046 + static const struct dm_arg _args[] = {
3047 +- {0, 15, "Invalid number of feature args"},
3048 ++ {0, 16, "Invalid number of feature args"},
3049 + };
3050 + unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec;
3051 + bool should_write_sb;
3052 +@@ -3940,6 +3952,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
3053 + ic->discard = true;
3054 + } else if (!strcmp(opt_string, "fix_padding")) {
3055 + ic->fix_padding = true;
3056 ++ } else if (!strcmp(opt_string, "legacy_recalculate")) {
3057 ++ ic->legacy_recalculate = true;
3058 + } else {
3059 + r = -EINVAL;
3060 + ti->error = "Invalid argument";
3061 +@@ -4235,6 +4249,20 @@ try_smaller_buffer:
3062 + r = -ENOMEM;
3063 + goto bad;
3064 + }
3065 ++ } else {
3066 ++ if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) {
3067 ++ ti->error = "Recalculate can only be specified with internal_hash";
3068 ++ r = -EINVAL;
3069 ++ goto bad;
3070 ++ }
3071 ++ }
3072 ++
3073 ++ if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING) &&
3074 ++ le64_to_cpu(ic->sb->recalc_sector) < ic->provided_data_sectors &&
3075 ++ dm_integrity_disable_recalculate(ic)) {
3076 ++ ti->error = "Recalculating with HMAC is disabled for security reasons - if you really need it, use the argument \"legacy_recalculate\"";
3077 ++ r = -EOPNOTSUPP;
3078 ++ goto bad;
3079 + }
3080 +
3081 + ic->bufio = dm_bufio_client_create(ic->meta_dev ? ic->meta_dev->bdev : ic->dev->bdev,
3082 +diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
3083 +index 7eeb7c4169c94..09ded08cbb609 100644
3084 +--- a/drivers/md/dm-table.c
3085 ++++ b/drivers/md/dm-table.c
3086 +@@ -370,14 +370,23 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
3087 + {
3088 + int r;
3089 + dev_t dev;
3090 ++ unsigned int major, minor;
3091 ++ char dummy;
3092 + struct dm_dev_internal *dd;
3093 + struct dm_table *t = ti->table;
3094 +
3095 + BUG_ON(!t);
3096 +
3097 +- dev = dm_get_dev_t(path);
3098 +- if (!dev)
3099 +- return -ENODEV;
3100 ++ if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) == 2) {
3101 ++ /* Extract the major/minor numbers */
3102 ++ dev = MKDEV(major, minor);
3103 ++ if (MAJOR(dev) != major || MINOR(dev) != minor)
3104 ++ return -EOVERFLOW;
3105 ++ } else {
3106 ++ dev = dm_get_dev_t(path);
3107 ++ if (!dev)
3108 ++ return -ENODEV;
3109 ++ }
3110 +
3111 + dd = find_device(&t->devices, dev);
3112 + if (!dd) {
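
The dm_get_device() change above accepts a literal major:minor string before falling back to path lookup. Two details carry the fix: sscanf()'s trailing %c rejects any text after the pair (a return of exactly 2 means the string was a bare major:minor), and re-reading MAJOR()/MINOR() from the packed value catches numbers too large for dev_t. A self-contained sketch, assuming the kernel-style 12-bit/20-bit dev_t packing:

    #include <stdio.h>

    typedef unsigned int devt;              /* 32-bit, like kernel dev_t */
    #define MKDEV(ma, mi)  ((devt)(ma) << 20 | (devt)(mi))
    #define MAJOR(d)       ((unsigned int)((d) >> 20))
    #define MINOR(d)       ((unsigned int)((d) & 0xfffffu))

    static int parse_devt(const char *path, devt *out)
    {
            unsigned int major, minor;
            char dummy;

            /* == 2: both numbers matched and nothing followed the minor */
            if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) != 2)
                    return -1;              /* not a bare major:minor */
            *out = MKDEV(major, minor);
            if (MAJOR(*out) != major || MINOR(*out) != minor)
                    return -2;              /* didn't round-trip: overflow */
            return 0;
    }

    int main(void)
    {
            devt d;

            printf("%d\n", parse_devt("254:3", &d));    /*  0: accepted */
            printf("%d\n", parse_devt("254:3x", &d));   /* -1: trailing junk */
            printf("%d\n", parse_devt("5000:0", &d));   /* -2: major too big */
            return 0;
    }
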
3113 +diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
3114 +index de7cb0369c308..002426e3cf76c 100644
3115 +--- a/drivers/mmc/core/queue.c
3116 ++++ b/drivers/mmc/core/queue.c
3117 +@@ -384,8 +384,10 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
3118 + "merging was advertised but not possible");
3119 + blk_queue_max_segments(mq->queue, mmc_get_max_segments(host));
3120 +
3121 +- if (mmc_card_mmc(card))
3122 ++ if (mmc_card_mmc(card) && card->ext_csd.data_sector_size) {
3123 + block_size = card->ext_csd.data_sector_size;
3124 ++ WARN_ON(block_size != 512 && block_size != 4096);
3125 ++ }
3126 +
3127 + blk_queue_logical_block_size(mq->queue, block_size);
3128 + /*
3129 +diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c
3130 +index bbf3496f44955..f9780c65ebe98 100644
3131 +--- a/drivers/mmc/host/sdhci-brcmstb.c
3132 ++++ b/drivers/mmc/host/sdhci-brcmstb.c
3133 +@@ -314,11 +314,7 @@ err_clk:
3134 +
3135 + static void sdhci_brcmstb_shutdown(struct platform_device *pdev)
3136 + {
3137 +- int ret;
3138 +-
3139 +- ret = sdhci_pltfm_unregister(pdev);
3140 +- if (ret)
3141 +- dev_err(&pdev->dev, "failed to shutdown\n");
3142 ++ sdhci_pltfm_suspend(&pdev->dev);
3143 + }
3144 +
3145 + MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match);
3146 +diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c
3147 +index 4b673792b5a42..d90020ed36227 100644
3148 +--- a/drivers/mmc/host/sdhci-of-dwcmshc.c
3149 ++++ b/drivers/mmc/host/sdhci-of-dwcmshc.c
3150 +@@ -16,6 +16,8 @@
3151 +
3152 + #include "sdhci-pltfm.h"
3153 +
3154 ++#define SDHCI_DWCMSHC_ARG2_STUFF GENMASK(31, 16)
3155 ++
3156 + /* DWCMSHC specific Mode Select value */
3157 + #define DWCMSHC_CTRL_HS400 0x7
3158 +
3159 +@@ -49,6 +51,29 @@ static void dwcmshc_adma_write_desc(struct sdhci_host *host, void **desc,
3160 + sdhci_adma_write_desc(host, desc, addr, len, cmd);
3161 + }
3162 +
3163 ++static void dwcmshc_check_auto_cmd23(struct mmc_host *mmc,
3164 ++ struct mmc_request *mrq)
3165 ++{
3166 ++ struct sdhci_host *host = mmc_priv(mmc);
3167 ++
3168 ++ /*
3169 ++ * Whether or not V4 is enabled, the ARGUMENT2 register is a 32-bit
3170 ++ * block count register which doesn't support the stuff bits of the
3171 ++ * CMD23 argument on the dwcmshc host controller.
3172 ++ */
3173 ++ if (mrq->sbc && (mrq->sbc->arg & SDHCI_DWCMSHC_ARG2_STUFF))
3174 ++ host->flags &= ~SDHCI_AUTO_CMD23;
3175 ++ else
3176 ++ host->flags |= SDHCI_AUTO_CMD23;
3177 ++}
3178 ++
3179 ++static void dwcmshc_request(struct mmc_host *mmc, struct mmc_request *mrq)
3180 ++{
3181 ++ dwcmshc_check_auto_cmd23(mmc, mrq);
3182 ++
3183 ++ sdhci_request(mmc, mrq);
3184 ++}
3185 ++
3186 + static void dwcmshc_set_uhs_signaling(struct sdhci_host *host,
3187 + unsigned int timing)
3188 + {
3189 +@@ -133,6 +158,8 @@ static int dwcmshc_probe(struct platform_device *pdev)
3190 +
3191 + sdhci_get_of_property(pdev);
3192 +
3193 ++ host->mmc_host_ops.request = dwcmshc_request;
3194 ++
3195 + err = sdhci_add_host(host);
3196 + if (err)
3197 + goto err_clk;
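
The new dwcmshc hook above decides per request whether auto-CMD23 is usable: GENMASK(31, 16) builds the mask of CMD23 argument flag ("stuff") bits that the controller's ARGUMENT2 block-count register cannot convey, and any such bit forces a fallback to a separately issued CMD23. A compact sketch of the test, with GENMASK reimplemented locally for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Bits h..l inclusive, mirroring the kernel's GENMASK() for 32-bit use */
    #define GENMASK32(h, l) \
            ((~UINT32_C(0) << (l)) & (~UINT32_C(0) >> (31 - (h))))

    #define ARG2_STUFF GENMASK32(31, 16)  /* CMD23 flag bits, not block count */

    static int can_use_auto_cmd23(uint32_t sbc_arg)
    {
            /* Any flag bit set means ARGUMENT2 can't carry the argument. */
            return !(sbc_arg & ARG2_STUFF);
    }

    int main(void)
    {
            printf("%d\n", can_use_auto_cmd23(0x0000ffff)); /* 1: plain count */
            printf("%d\n", can_use_auto_cmd23(0x80000001)); /* 0: flag bit 31 */
            return 0;
    }
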
3198 +diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
3199 +index 24c978de2a3f1..0e5234a5ca224 100644
3200 +--- a/drivers/mmc/host/sdhci-xenon.c
3201 ++++ b/drivers/mmc/host/sdhci-xenon.c
3202 +@@ -167,7 +167,12 @@ static void xenon_reset_exit(struct sdhci_host *host,
3203 + /* Disable tuning request and auto-retuning again */
3204 + xenon_retune_setup(host);
3205 +
3206 +- xenon_set_acg(host, true);
3207 ++ /*
3208 ++ * The ACG should be turned off at early init time, in order
3209 ++ * to avoid possible issues with 1.8V regulator stabilization.
3210 ++ * The feature is enabled at a later stage.
3211 ++ */
3212 ++ xenon_set_acg(host, false);
3213 +
3214 + xenon_set_sdclk_off_idle(host, sdhc_id, false);
3215 +
3216 +diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
3217 +index 81028ba35f35d..31a6210eb5d44 100644
3218 +--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
3219 ++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
3220 +@@ -1613,7 +1613,7 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
3221 + /* Extract interleaved payload data and ECC bits */
3222 + for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
3223 + if (buf)
3224 +- nand_extract_bits(buf, step * eccsize, tmp_buf,
3225 ++ nand_extract_bits(buf, step * eccsize * 8, tmp_buf,
3226 + src_bit_off, eccsize * 8);
3227 + src_bit_off += eccsize * 8;
3228 +
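
The gpmi-nand fix above is an off-by-8x in units: nand_extract_bits() takes its destination offset in bits, so the byte count step * eccsize has to be scaled by 8, as the source offset already is. A self-contained bit-copy sketch making the unit explicit (extract_bits() here is a simplified local stand-in, not the kernel helper):

    #include <stdio.h>

    /* Copy nbits from src starting at bit src_off to dst at bit dst_off. */
    static void extract_bits(unsigned char *dst, unsigned int dst_off,
                             const unsigned char *src, unsigned int src_off,
                             unsigned int nbits)
    {
            for (unsigned int i = 0; i < nbits; i++) {
                    unsigned int s = src_off + i, d = dst_off + i;
                    unsigned int bit = (src[s / 8] >> (s % 8)) & 1;

                    dst[d / 8] &= (unsigned char)~(1u << (d % 8));
                    dst[d / 8] |= (unsigned char)(bit << (d % 8));
            }
    }

    int main(void)
    {
            unsigned char src[4] = { 0xaa, 0xbb, 0xcc, 0xdd }, dst[4] = { 0 };
            unsigned int step = 1, eccsize = 2;     /* illustrative geometry */

            /* Destination offset must be in BITS: step * eccsize * 8. */
            extract_bits(dst, step * eccsize * 8, src, 0, eccsize * 8);
            printf("%02x %02x %02x %02x\n", dst[0], dst[1], dst[2], dst[3]);
            return 0;
    }
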
3229 +diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c
3230 +index a8048cb8d2205..9a9f1c24d8321 100644
3231 +--- a/drivers/mtd/nand/raw/nandsim.c
3232 ++++ b/drivers/mtd/nand/raw/nandsim.c
3233 +@@ -2211,6 +2211,9 @@ static int ns_attach_chip(struct nand_chip *chip)
3234 + {
3235 + unsigned int eccsteps, eccbytes;
3236 +
3237 ++ chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
3238 ++ chip->ecc.algo = bch ? NAND_ECC_ALGO_BCH : NAND_ECC_ALGO_HAMMING;
3239 ++
3240 + if (!bch)
3241 + return 0;
3242 +
3243 +@@ -2234,8 +2237,6 @@ static int ns_attach_chip(struct nand_chip *chip)
3244 + return -EINVAL;
3245 + }
3246 +
3247 +- chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
3248 +- chip->ecc.algo = NAND_ECC_ALGO_BCH;
3249 + chip->ecc.size = 512;
3250 + chip->ecc.strength = bch;
3251 + chip->ecc.bytes = eccbytes;
3252 +@@ -2274,8 +2275,6 @@ static int __init ns_init_module(void)
3253 + nsmtd = nand_to_mtd(chip);
3254 + nand_set_controller_data(chip, (void *)ns);
3255 +
3256 +- chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
3257 +- chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
3258 + /* The NAND_SKIP_BBTSCAN option is necessary for 'overridesize' */
3259 + /* and 'badblocks' parameters to work */
3260 + chip->options |= NAND_SKIP_BBTSCAN;
3261 +diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
3262 +index 81e39d7507d8f..09879aea9f7cc 100644
3263 +--- a/drivers/net/can/dev.c
3264 ++++ b/drivers/net/can/dev.c
3265 +@@ -592,11 +592,11 @@ static void can_restart(struct net_device *dev)
3266 +
3267 + cf->can_id |= CAN_ERR_RESTARTED;
3268 +
3269 +- netif_rx_ni(skb);
3270 +-
3271 + stats->rx_packets++;
3272 + stats->rx_bytes += cf->can_dlc;
3273 +
3274 ++ netif_rx_ni(skb);
3275 ++
3276 + restart:
3277 + netdev_dbg(dev, "restarted\n");
3278 + priv->can_stats.restarts++;
3279 +diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
3280 +index d29d20525588c..d565922838186 100644
3281 +--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
3282 ++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
3283 +@@ -512,11 +512,11 @@ static int pcan_usb_fd_decode_canmsg(struct pcan_usb_fd_if *usb_if,
3284 + else
3285 + memcpy(cfd->data, rm->d, cfd->len);
3286 +
3287 +- peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(rm->ts_low));
3288 +-
3289 + netdev->stats.rx_packets++;
3290 + netdev->stats.rx_bytes += cfd->len;
3291 +
3292 ++ peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(rm->ts_low));
3293 ++
3294 + return 0;
3295 + }
3296 +
3297 +@@ -578,11 +578,11 @@ static int pcan_usb_fd_decode_status(struct pcan_usb_fd_if *usb_if,
3298 + if (!skb)
3299 + return -ENOMEM;
3300 +
3301 +- peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(sm->ts_low));
3302 +-
3303 + netdev->stats.rx_packets++;
3304 + netdev->stats.rx_bytes += cf->can_dlc;
3305 +
3306 ++ peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(sm->ts_low));
3307 ++
3308 + return 0;
3309 + }
3310 +
3311 +diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
3312 +index d6ba9426be4de..b1baa4ac1d537 100644
3313 +--- a/drivers/net/can/vxcan.c
3314 ++++ b/drivers/net/can/vxcan.c
3315 +@@ -39,6 +39,7 @@ static netdev_tx_t vxcan_xmit(struct sk_buff *skb, struct net_device *dev)
3316 + struct net_device *peer;
3317 + struct canfd_frame *cfd = (struct canfd_frame *)skb->data;
3318 + struct net_device_stats *peerstats, *srcstats = &dev->stats;
3319 ++ u8 len;
3320 +
3321 + if (can_dropped_invalid_skb(dev, skb))
3322 + return NETDEV_TX_OK;
3323 +@@ -61,12 +62,13 @@ static netdev_tx_t vxcan_xmit(struct sk_buff *skb, struct net_device *dev)
3324 + skb->dev = peer;
3325 + skb->ip_summed = CHECKSUM_UNNECESSARY;
3326 +
3327 ++ len = cfd->len;
3328 + if (netif_rx_ni(skb) == NET_RX_SUCCESS) {
3329 + srcstats->tx_packets++;
3330 +- srcstats->tx_bytes += cfd->len;
3331 ++ srcstats->tx_bytes += len;
3332 + peerstats = &peer->stats;
3333 + peerstats->rx_packets++;
3334 +- peerstats->rx_bytes += cfd->len;
3335 ++ peerstats->rx_bytes += len;
3336 + }
3337 +
3338 + out_unlock:
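
This vxcan hunk, like the can_restart() and peak_usb ones above, fixes a use-after-free: netif_rx_ni() hands the skb to the network stack, which may free it, so the frame length has to be snapshotted before delivery and the stats updated from the copy. The shape of the fix, with a hypothetical deliver() standing in for netif_rx_ni():

    #include <stdio.h>
    #include <stdlib.h>

    struct frame { unsigned char len; unsigned char data[64]; };

    /* Stand-in for netif_rx_ni(): ownership transfers; buffer may be freed. */
    static int deliver(struct frame *f)
    {
            free(f);
            return 0;
    }

    int main(void)
    {
            struct frame *f = calloc(1, sizeof(*f));
            unsigned long tx_bytes = 0;
            unsigned char len;

            if (!f)
                    return 1;
            f->len = 8;
            len = f->len;                   /* snapshot BEFORE handing f away */
            if (deliver(f) == 0)
                    tx_bytes += len;        /* f must not be touched here */

            printf("tx_bytes=%lu\n", tx_bytes);
            return 0;
    }
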
3339 +diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
3340 +index 288b5a5c3e0db..95c7fa171e35a 100644
3341 +--- a/drivers/net/dsa/b53/b53_common.c
3342 ++++ b/drivers/net/dsa/b53/b53_common.c
3343 +@@ -1404,7 +1404,7 @@ int b53_vlan_prepare(struct dsa_switch *ds, int port,
3344 + !(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED))
3345 + return -EINVAL;
3346 +
3347 +- if (vlan->vid_end > dev->num_vlans)
3348 ++ if (vlan->vid_end >= dev->num_vlans)
3349 + return -ERANGE;
3350 +
3351 + b53_enable_vlan(dev, true, ds->vlan_filtering);
3352 +diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
3353 +index 1048509a849bc..0938caccc62ac 100644
3354 +--- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c
3355 ++++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
3356 +@@ -351,6 +351,10 @@ int mv88e6250_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
3357 + if (err)
3358 + return err;
3359 +
3360 ++ err = mv88e6185_g1_stu_data_read(chip, entry);
3361 ++ if (err)
3362 ++ return err;
3363 ++
3364 + /* VTU DBNum[3:0] are located in VTU Operation 3:0
3365 + * VTU DBNum[5:4] are located in VTU Operation 9:8
3366 + */
3367 +diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
3368 +index b1ae9eb8f2479..0404aafd5ce56 100644
3369 +--- a/drivers/net/ethernet/broadcom/bcmsysport.c
3370 ++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
3371 +@@ -2503,8 +2503,10 @@ static int bcm_sysport_probe(struct platform_device *pdev)
3372 + priv = netdev_priv(dev);
3373 +
3374 + priv->clk = devm_clk_get_optional(&pdev->dev, "sw_sysport");
3375 +- if (IS_ERR(priv->clk))
3376 +- return PTR_ERR(priv->clk);
3377 ++ if (IS_ERR(priv->clk)) {
3378 ++ ret = PTR_ERR(priv->clk);
3379 ++ goto err_free_netdev;
3380 ++ }
3381 +
3382 + /* Allocate number of TX rings */
3383 + priv->tx_rings = devm_kcalloc(&pdev->dev, txq,
3384 +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
3385 +index fa9152ff5e2a0..f4ecc755eaff1 100644
3386 +--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
3387 ++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
3388 +@@ -454,6 +454,9 @@ int rvu_mbox_handler_cgx_mac_addr_set(struct rvu *rvu,
3389 + int pf = rvu_get_pf(req->hdr.pcifunc);
3390 + u8 cgx_id, lmac_id;
3391 +
3392 ++ if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
3393 ++ return -EPERM;
3394 ++
3395 + rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
3396 +
3397 + cgx_lmac_addr_set(cgx_id, lmac_id, req->mac_addr);
3398 +@@ -470,6 +473,9 @@ int rvu_mbox_handler_cgx_mac_addr_get(struct rvu *rvu,
3399 + int rc = 0, i;
3400 + u64 cfg;
3401 +
3402 ++ if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
3403 ++ return -EPERM;
3404 ++
3405 + rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
3406 +
3407 + rsp->hdr.rc = rc;
3408 +diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
3409 +index a53bd36b11c60..d4768dcb6c699 100644
3410 +--- a/drivers/net/ethernet/mscc/ocelot.c
3411 ++++ b/drivers/net/ethernet/mscc/ocelot.c
3412 +@@ -60,14 +60,27 @@ int ocelot_mact_learn(struct ocelot *ocelot, int port,
3413 + const unsigned char mac[ETH_ALEN],
3414 + unsigned int vid, enum macaccess_entry_type type)
3415 + {
3416 ++ u32 cmd = ANA_TABLES_MACACCESS_VALID |
3417 ++ ANA_TABLES_MACACCESS_DEST_IDX(port) |
3418 ++ ANA_TABLES_MACACCESS_ENTRYTYPE(type) |
3419 ++ ANA_TABLES_MACACCESS_MAC_TABLE_CMD(MACACCESS_CMD_LEARN);
3420 ++ unsigned int mc_ports;
3421 ++
3422 ++ /* Set MAC_CPU_COPY if the CPU port is used by a multicast entry */
3423 ++ if (type == ENTRYTYPE_MACv4)
3424 ++ mc_ports = (mac[1] << 8) | mac[2];
3425 ++ else if (type == ENTRYTYPE_MACv6)
3426 ++ mc_ports = (mac[0] << 8) | mac[1];
3427 ++ else
3428 ++ mc_ports = 0;
3429 ++
3430 ++ if (mc_ports & BIT(ocelot->num_phys_ports))
3431 ++ cmd |= ANA_TABLES_MACACCESS_MAC_CPU_COPY;
3432 ++
3433 + ocelot_mact_select(ocelot, mac, vid);
3434 +
3435 + /* Issue a write command */
3436 +- ocelot_write(ocelot, ANA_TABLES_MACACCESS_VALID |
3437 +- ANA_TABLES_MACACCESS_DEST_IDX(port) |
3438 +- ANA_TABLES_MACACCESS_ENTRYTYPE(type) |
3439 +- ANA_TABLES_MACACCESS_MAC_TABLE_CMD(MACACCESS_CMD_LEARN),
3440 +- ANA_TABLES_MACACCESS);
3441 ++ ocelot_write(ocelot, cmd, ANA_TABLES_MACACCESS);
3442 +
3443 + return ocelot_mact_wait_for_completion(ocelot);
3444 + }
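
For MACv4/MACv6 entries the Ocelot MAC table overloads the address bytes themselves as a 16-bit destination-ports bitmap, which is why the hunk above rebuilds mc_ports from two MAC bytes and tests the bit at the CPU port index before adding MAC_CPU_COPY. A sketch of the extraction, with illustrative values:

    #include <stdio.h>

    enum entry_type { MACv4, MACv6, OTHER };

    /* Rebuild the ports bitmap embedded in the MAC bytes, per the patch:
     * mac[1..2] for MACv4 entries, mac[0..1] for MACv6 entries. */
    static unsigned int mc_ports(const unsigned char mac[6], enum entry_type t)
    {
            if (t == MACv4)
                    return (mac[1] << 8) | mac[2];
            if (t == MACv6)
                    return (mac[0] << 8) | mac[1];
            return 0;
    }

    int main(void)
    {
            /* Embedded mask 0x0805: bits 0, 2 and 11 set */
            unsigned char mac[6] = { 0x01, 0x08, 0x05, 0x00, 0x00, 0x01 };
            unsigned int ports = mc_ports(mac, MACv4);
            int cpu_port = 11;              /* hypothetical num_phys_ports */

            printf("mask=0x%04x cpu_copy=%d\n", ports,
                   !!(ports & (1u << cpu_port)));
            return 0;
    }
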
3445 +diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
3446 +index b34da11acf65b..d60cd4326f4cd 100644
3447 +--- a/drivers/net/ethernet/mscc/ocelot_net.c
3448 ++++ b/drivers/net/ethernet/mscc/ocelot_net.c
3449 +@@ -952,10 +952,8 @@ static int ocelot_netdevice_event(struct notifier_block *unused,
3450 + struct net_device *dev = netdev_notifier_info_to_dev(ptr);
3451 + int ret = 0;
3452 +
3453 +- if (!ocelot_netdevice_dev_check(dev))
3454 +- return 0;
3455 +-
3456 + if (event == NETDEV_PRECHANGEUPPER &&
3457 ++ ocelot_netdevice_dev_check(dev) &&
3458 + netif_is_lag_master(info->upper_dev)) {
3459 + struct netdev_lag_upper_info *lag_upper_info = info->upper_info;
3460 + struct netlink_ext_ack *extack;
3461 +diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
3462 +index c633046329352..d5d236d687e9e 100644
3463 +--- a/drivers/net/ethernet/renesas/sh_eth.c
3464 ++++ b/drivers/net/ethernet/renesas/sh_eth.c
3465 +@@ -2606,10 +2606,10 @@ static int sh_eth_close(struct net_device *ndev)
3466 + /* Free all the skbuffs in the Rx queue and the DMA buffer. */
3467 + sh_eth_ring_free(ndev);
3468 +
3469 +- pm_runtime_put_sync(&mdp->pdev->dev);
3470 +-
3471 + mdp->is_opened = 0;
3472 +
3473 ++ pm_runtime_put(&mdp->pdev->dev);
3474 ++
3475 + return 0;
3476 + }
3477 +
3478 +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
3479 +index a89d74c5cd1a7..77f615568194d 100644
3480 +--- a/drivers/nvme/host/pci.c
3481 ++++ b/drivers/nvme/host/pci.c
3482 +@@ -542,50 +542,71 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req)
3483 + return true;
3484 + }
3485 +
3486 +-static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
3487 ++static void nvme_free_prps(struct nvme_dev *dev, struct request *req)
3488 + {
3489 +- struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
3490 + const int last_prp = NVME_CTRL_PAGE_SIZE / sizeof(__le64) - 1;
3491 +- dma_addr_t dma_addr = iod->first_dma, next_dma_addr;
3492 ++ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
3493 ++ dma_addr_t dma_addr = iod->first_dma;
3494 + int i;
3495 +
3496 +- if (iod->dma_len) {
3497 +- dma_unmap_page(dev->dev, dma_addr, iod->dma_len,
3498 +- rq_dma_dir(req));
3499 +- return;
3500 ++ for (i = 0; i < iod->npages; i++) {
3501 ++ __le64 *prp_list = nvme_pci_iod_list(req)[i];
3502 ++ dma_addr_t next_dma_addr = le64_to_cpu(prp_list[last_prp]);
3503 ++
3504 ++ dma_pool_free(dev->prp_page_pool, prp_list, dma_addr);
3505 ++ dma_addr = next_dma_addr;
3506 + }
3507 +
3508 +- WARN_ON_ONCE(!iod->nents);
3509 ++}
3510 +
3511 +- if (is_pci_p2pdma_page(sg_page(iod->sg)))
3512 +- pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
3513 +- rq_dma_dir(req));
3514 +- else
3515 +- dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
3516 ++static void nvme_free_sgls(struct nvme_dev *dev, struct request *req)
3517 ++{
3518 ++ const int last_sg = SGES_PER_PAGE - 1;
3519 ++ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
3520 ++ dma_addr_t dma_addr = iod->first_dma;
3521 ++ int i;
3522 +
3523 ++ for (i = 0; i < iod->npages; i++) {
3524 ++ struct nvme_sgl_desc *sg_list = nvme_pci_iod_list(req)[i];
3525 ++ dma_addr_t next_dma_addr = le64_to_cpu((sg_list[last_sg]).addr);
3526 +
3527 +- if (iod->npages == 0)
3528 +- dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
3529 +- dma_addr);
3530 ++ dma_pool_free(dev->prp_page_pool, sg_list, dma_addr);
3531 ++ dma_addr = next_dma_addr;
3532 ++ }
3533 +
3534 +- for (i = 0; i < iod->npages; i++) {
3535 +- void *addr = nvme_pci_iod_list(req)[i];
3536 ++}
3537 +
3538 +- if (iod->use_sgl) {
3539 +- struct nvme_sgl_desc *sg_list = addr;
3540 ++static void nvme_unmap_sg(struct nvme_dev *dev, struct request *req)
3541 ++{
3542 ++ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
3543 +
3544 +- next_dma_addr =
3545 +- le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr);
3546 +- } else {
3547 +- __le64 *prp_list = addr;
3548 ++ if (is_pci_p2pdma_page(sg_page(iod->sg)))
3549 ++ pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
3550 ++ rq_dma_dir(req));
3551 ++ else
3552 ++ dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
3553 ++}
3554 +
3555 +- next_dma_addr = le64_to_cpu(prp_list[last_prp]);
3556 +- }
3557 ++static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
3558 ++{
3559 ++ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
3560 +
3561 +- dma_pool_free(dev->prp_page_pool, addr, dma_addr);
3562 +- dma_addr = next_dma_addr;
3563 ++ if (iod->dma_len) {
3564 ++ dma_unmap_page(dev->dev, iod->first_dma, iod->dma_len,
3565 ++ rq_dma_dir(req));
3566 ++ return;
3567 + }
3568 +
3569 ++ WARN_ON_ONCE(!iod->nents);
3570 ++
3571 ++ nvme_unmap_sg(dev, req);
3572 ++ if (iod->npages == 0)
3573 ++ dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
3574 ++ iod->first_dma);
3575 ++ else if (iod->use_sgl)
3576 ++ nvme_free_sgls(dev, req);
3577 ++ else
3578 ++ nvme_free_prps(dev, req);
3579 + mempool_free(iod->sg, dev->iod_mempool);
3580 + }
3581 +
3582 +@@ -661,7 +682,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
3583 + __le64 *old_prp_list = prp_list;
3584 + prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
3585 + if (!prp_list)
3586 +- return BLK_STS_RESOURCE;
3587 ++ goto free_prps;
3588 + list[iod->npages++] = prp_list;
3589 + prp_list[0] = old_prp_list[i - 1];
3590 + old_prp_list[i - 1] = cpu_to_le64(prp_dma);
3591 +@@ -681,14 +702,14 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
3592 + dma_addr = sg_dma_address(sg);
3593 + dma_len = sg_dma_len(sg);
3594 + }
3595 +-
3596 + done:
3597 + cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg));
3598 + cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma);
3599 +-
3600 + return BLK_STS_OK;
3601 +-
3602 +- bad_sgl:
3603 ++free_prps:
3604 ++ nvme_free_prps(dev, req);
3605 ++ return BLK_STS_RESOURCE;
3606 ++bad_sgl:
3607 + WARN(DO_ONCE(nvme_print_sgl, iod->sg, iod->nents),
3608 + "Invalid SGL for payload:%d nents:%d\n",
3609 + blk_rq_payload_bytes(req), iod->nents);
3610 +@@ -760,7 +781,7 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
3611 +
3612 + sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &sgl_dma);
3613 + if (!sg_list)
3614 +- return BLK_STS_RESOURCE;
3615 ++ goto free_sgls;
3616 +
3617 + i = 0;
3618 + nvme_pci_iod_list(req)[iod->npages++] = sg_list;
3619 +@@ -773,6 +794,9 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
3620 + } while (--entries > 0);
3621 +
3622 + return BLK_STS_OK;
3623 ++free_sgls:
3624 ++ nvme_free_sgls(dev, req);
3625 ++ return BLK_STS_RESOURCE;
3626 + }
3627 +
3628 + static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev,
3629 +@@ -841,7 +865,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
3630 + sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
3631 + iod->nents = blk_rq_map_sg(req->q, req, iod->sg);
3632 + if (!iod->nents)
3633 +- goto out;
3634 ++ goto out_free_sg;
3635 +
3636 + if (is_pci_p2pdma_page(sg_page(iod->sg)))
3637 + nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg,
3638 +@@ -850,16 +874,21 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
3639 + nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
3640 + rq_dma_dir(req), DMA_ATTR_NO_WARN);
3641 + if (!nr_mapped)
3642 +- goto out;
3643 ++ goto out_free_sg;
3644 +
3645 + iod->use_sgl = nvme_pci_use_sgls(dev, req);
3646 + if (iod->use_sgl)
3647 + ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped);
3648 + else
3649 + ret = nvme_pci_setup_prps(dev, req, &cmnd->rw);
3650 +-out:
3651 + if (ret != BLK_STS_OK)
3652 +- nvme_unmap_data(dev, req);
3653 ++ goto out_unmap_sg;
3654 ++ return BLK_STS_OK;
3655 ++
3656 ++out_unmap_sg:
3657 ++ nvme_unmap_sg(dev, req);
3658 ++out_free_sg:
3659 ++ mempool_free(iod->sg, dev->iod_mempool);
3660 + return ret;
3661 + }
3662 +
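
nvme_free_prps() and nvme_free_sgls() above both walk a chain of descriptor pages in which the final slot of each page stores the bus address of the next page, so the free loop has to read that link before releasing the page. A userspace sketch of the walk, with plain pointers standing in for DMA addresses and ENTRIES for NVME_CTRL_PAGE_SIZE / 8:

    #include <stdio.h>
    #include <stdlib.h>

    #define ENTRIES 4   /* slots per page; the last one is the chain link */

    int main(void)
    {
            void **head = NULL, **prev = NULL;

            /* Build a three-page chain: page[ENTRIES - 1] -> next page. */
            for (int i = 0; i < 3; i++) {
                    void **page = calloc(ENTRIES, sizeof(void *));

                    if (!page)
                            return 1;
                    if (!head)
                            head = page;
                    if (prev)
                            prev[ENTRIES - 1] = page;
                    prev = page;
            }

            /* Free it the way nvme_free_prps() does: save the link first. */
            for (void **page = head; page; ) {
                    void **next = page[ENTRIES - 1];

                    free(page);
                    page = next;
            }
            printf("chain freed\n");
            return 0;
    }
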
3663 +diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
3664 +index 34803a6c76643..5c1a109842a76 100644
3665 +--- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
3666 ++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
3667 +@@ -347,7 +347,7 @@ FUNC_GROUP_DECL(RMII4, F24, E23, E24, E25, C25, C24, B26, B25, B24);
3668 +
3669 + #define D22 40
3670 + SIG_EXPR_LIST_DECL_SESG(D22, SD1CLK, SD1, SIG_DESC_SET(SCU414, 8));
3671 +-SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU414, 8));
3672 ++SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU4B4, 8));
3673 + PIN_DECL_2(D22, GPIOF0, SD1CLK, PWM8);
3674 + GROUP_DECL(PWM8G0, D22);
3675 +
3676 +diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
3677 +index 7e950f5d62d0f..7815426e7aeaa 100644
3678 +--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
3679 ++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
3680 +@@ -926,6 +926,10 @@ int mtk_pinconf_adv_pull_set(struct mtk_pinctrl *hw,
3681 + err = hw->soc->bias_set(hw, desc, pullup);
3682 + if (err)
3683 + return err;
3684 ++ } else if (hw->soc->bias_set_combo) {
3685 ++ err = hw->soc->bias_set_combo(hw, desc, pullup, arg);
3686 ++ if (err)
3687 ++ return err;
3688 + } else {
3689 + return -ENOTSUPP;
3690 + }
3691 +diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
3692 +index 621909b01debd..033d142f0c272 100644
3693 +--- a/drivers/pinctrl/pinctrl-ingenic.c
3694 ++++ b/drivers/pinctrl/pinctrl-ingenic.c
3695 +@@ -2052,7 +2052,7 @@ static inline bool ingenic_gpio_get_value(struct ingenic_gpio_chip *jzgc,
3696 + static void ingenic_gpio_set_value(struct ingenic_gpio_chip *jzgc,
3697 + u8 offset, int value)
3698 + {
3699 +- if (jzgc->jzpc->info->version >= ID_JZ4760)
3700 ++ if (jzgc->jzpc->info->version >= ID_JZ4770)
3701 + ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_PAT0, offset, !!value);
3702 + else
3703 + ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, offset, !!value);
3704 +@@ -2082,7 +2082,7 @@ static void irq_set_type(struct ingenic_gpio_chip *jzgc,
3705 + break;
3706 + }
3707 +
3708 +- if (jzgc->jzpc->info->version >= ID_JZ4760) {
3709 ++ if (jzgc->jzpc->info->version >= ID_JZ4770) {
3710 + reg1 = JZ4760_GPIO_PAT1;
3711 + reg2 = JZ4760_GPIO_PAT0;
3712 + } else {
3713 +@@ -2122,7 +2122,7 @@ static void ingenic_gpio_irq_enable(struct irq_data *irqd)
3714 + struct ingenic_gpio_chip *jzgc = gpiochip_get_data(gc);
3715 + int irq = irqd->hwirq;
3716 +
3717 +- if (jzgc->jzpc->info->version >= ID_JZ4760)
3718 ++ if (jzgc->jzpc->info->version >= ID_JZ4770)
3719 + ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, true);
3720 + else
3721 + ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, true);
3722 +@@ -2138,7 +2138,7 @@ static void ingenic_gpio_irq_disable(struct irq_data *irqd)
3723 +
3724 + ingenic_gpio_irq_mask(irqd);
3725 +
3726 +- if (jzgc->jzpc->info->version >= ID_JZ4760)
3727 ++ if (jzgc->jzpc->info->version >= ID_JZ4770)
3728 + ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, false);
3729 + else
3730 + ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, false);
3731 +@@ -2163,7 +2163,7 @@ static void ingenic_gpio_irq_ack(struct irq_data *irqd)
3732 + irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH);
3733 + }
3734 +
3735 +- if (jzgc->jzpc->info->version >= ID_JZ4760)
3736 ++ if (jzgc->jzpc->info->version >= ID_JZ4770)
3737 + ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_FLAG, irq, false);
3738 + else
3739 + ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, irq, true);
3740 +@@ -2220,7 +2220,7 @@ static void ingenic_gpio_irq_handler(struct irq_desc *desc)
3741 +
3742 + chained_irq_enter(irq_chip, desc);
3743 +
3744 +- if (jzgc->jzpc->info->version >= ID_JZ4760)
3745 ++ if (jzgc->jzpc->info->version >= ID_JZ4770)
3746 + flag = ingenic_gpio_read_reg(jzgc, JZ4760_GPIO_FLAG);
3747 + else
3748 + flag = ingenic_gpio_read_reg(jzgc, JZ4740_GPIO_FLAG);
3749 +@@ -2302,7 +2302,7 @@ static int ingenic_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
3750 + struct ingenic_pinctrl *jzpc = jzgc->jzpc;
3751 + unsigned int pin = gc->base + offset;
3752 +
3753 +- if (jzpc->info->version >= ID_JZ4760) {
3754 ++ if (jzpc->info->version >= ID_JZ4770) {
3755 + if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) ||
3756 + ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
3757 + return GPIO_LINE_DIRECTION_IN;
3758 +@@ -2360,7 +2360,7 @@ static int ingenic_pinmux_set_pin_fn(struct ingenic_pinctrl *jzpc,
3759 + ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2);
3760 + ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, func & 0x1);
3761 + ingenic_shadow_config_pin_load(jzpc, pin);
3762 +- } else if (jzpc->info->version >= ID_JZ4760) {
3763 ++ } else if (jzpc->info->version >= ID_JZ4770) {
3764 + ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false);
3765 + ingenic_config_pin(jzpc, pin, GPIO_MSK, false);
3766 + ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2);
3767 +@@ -2368,7 +2368,7 @@ static int ingenic_pinmux_set_pin_fn(struct ingenic_pinctrl *jzpc,
3768 + } else {
3769 + ingenic_config_pin(jzpc, pin, JZ4740_GPIO_FUNC, true);
3770 + ingenic_config_pin(jzpc, pin, JZ4740_GPIO_TRIG, func & 0x2);
3771 +- ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func > 0);
3772 ++ ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func & 0x1);
3773 + }
3774 +
3775 + return 0;
3776 +@@ -2418,7 +2418,7 @@ static int ingenic_pinmux_gpio_set_direction(struct pinctrl_dev *pctldev,
3777 + ingenic_shadow_config_pin(jzpc, pin, GPIO_MSK, true);
3778 + ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input);
3779 + ingenic_shadow_config_pin_load(jzpc, pin);
3780 +- } else if (jzpc->info->version >= ID_JZ4760) {
3781 ++ } else if (jzpc->info->version >= ID_JZ4770) {
3782 + ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false);
3783 + ingenic_config_pin(jzpc, pin, GPIO_MSK, true);
3784 + ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input);
3785 +@@ -2448,7 +2448,7 @@ static int ingenic_pinconf_get(struct pinctrl_dev *pctldev,
3786 + unsigned int offt = pin / PINS_PER_GPIO_CHIP;
3787 + bool pull;
3788 +
3789 +- if (jzpc->info->version >= ID_JZ4760)
3790 ++ if (jzpc->info->version >= ID_JZ4770)
3791 + pull = !ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PEN);
3792 + else
3793 + pull = !ingenic_get_pin_config(jzpc, pin, JZ4740_GPIO_PULL_DIS);
3794 +@@ -2498,7 +2498,7 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc,
3795 + REG_SET(X1830_GPIO_PEH), bias << idxh);
3796 + }
3797 +
3798 +- } else if (jzpc->info->version >= ID_JZ4760) {
3799 ++ } else if (jzpc->info->version >= ID_JZ4770) {
3800 + ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PEN, !bias);
3801 + } else {
3802 + ingenic_config_pin(jzpc, pin, JZ4740_GPIO_PULL_DIS, !bias);
3803 +@@ -2508,7 +2508,7 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc,
3804 + static void ingenic_set_output_level(struct ingenic_pinctrl *jzpc,
3805 + unsigned int pin, bool high)
3806 + {
3807 +- if (jzpc->info->version >= ID_JZ4760)
3808 ++ if (jzpc->info->version >= ID_JZ4770)
3809 + ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, high);
3810 + else
3811 + ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DATA, high);
3812 +diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
3813 +index 77a25bdf0da70..37526aa1fb2c4 100644
3814 +--- a/drivers/pinctrl/qcom/pinctrl-msm.c
3815 ++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
3816 +@@ -51,6 +51,7 @@
3817 + * @dual_edge_irqs: Bitmap of irqs that need sw emulated dual edge
3818 + * detection.
3819 + * @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller
3820 ++ * @disabled_for_mux: These IRQs were disabled because we muxed away.
3821 + * @soc: Reference to soc_data of platform specific data.
3822 + * @regs: Base addresses for the TLMM tiles.
3823 + * @phys_base: Physical base address
3824 +@@ -72,6 +73,7 @@ struct msm_pinctrl {
3825 + DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO);
3826 + DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO);
3827 + DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO);
3828 ++ DECLARE_BITMAP(disabled_for_mux, MAX_NR_GPIO);
3829 +
3830 + const struct msm_pinctrl_soc_data *soc;
3831 + void __iomem *regs[MAX_NR_TILES];
3832 +@@ -96,6 +98,14 @@ MSM_ACCESSOR(intr_cfg)
3833 + MSM_ACCESSOR(intr_status)
3834 + MSM_ACCESSOR(intr_target)
3835 +
3836 ++static void msm_ack_intr_status(struct msm_pinctrl *pctrl,
3837 ++ const struct msm_pingroup *g)
3838 ++{
3839 ++ u32 val = g->intr_ack_high ? BIT(g->intr_status_bit) : 0;
3840 ++
3841 ++ msm_writel_intr_status(val, pctrl, g);
3842 ++}
3843 ++
3844 + static int msm_get_groups_count(struct pinctrl_dev *pctldev)
3845 + {
3846 + struct msm_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
3847 +@@ -171,6 +181,10 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
3848 + unsigned group)
3849 + {
3850 + struct msm_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
3851 ++ struct gpio_chip *gc = &pctrl->chip;
3852 ++ unsigned int irq = irq_find_mapping(gc->irq.domain, group);
3853 ++ struct irq_data *d = irq_get_irq_data(irq);
3854 ++ unsigned int gpio_func = pctrl->soc->gpio_func;
3855 + const struct msm_pingroup *g;
3856 + unsigned long flags;
3857 + u32 val, mask;
3858 +@@ -187,6 +201,20 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
3859 + if (WARN_ON(i == g->nfuncs))
3860 + return -EINVAL;
3861 +
3862 ++ /*
3863 ++ * If a GPIO interrupt is set up on this pin then we need special
3864 ++ * handling. Specifically, the interrupt detection logic will still
3865 ++ * see the pin toggle even when we're muxed away.
3866 ++ *
3867 ++ * When we see a pin with an interrupt set up on it, we'll disable
3868 ++ * (mask) interrupts on it when we mux away, until we mux back. Note
3869 ++ * that disable_irq() refcounts, and interrupts stay disabled as long
3870 ++ * as at least one disable_irq() has been called.
3871 ++ */
3872 ++ if (d && i != gpio_func &&
3873 ++ !test_and_set_bit(d->hwirq, pctrl->disabled_for_mux))
3874 ++ disable_irq(irq);
3875 ++
3876 + raw_spin_lock_irqsave(&pctrl->lock, flags);
3877 +
3878 + val = msm_readl_ctl(pctrl, g);
3879 +@@ -196,6 +224,20 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
3880 +
3881 + raw_spin_unlock_irqrestore(&pctrl->lock, flags);
3882 +
3883 ++ if (d && i == gpio_func &&
3884 ++ test_and_clear_bit(d->hwirq, pctrl->disabled_for_mux)) {
3885 ++ /*
3886 ++ * Clear interrupts detected while the pin was not muxed as
3887 ++ * GPIO, since we only masked them.
3888 ++ */
3889 ++ if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs))
3890 ++ irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, false);
3891 ++ else
3892 ++ msm_ack_intr_status(pctrl, g);
3893 ++
3894 ++ enable_irq(irq);
3895 ++ }
3896 ++
3897 + return 0;
3898 + }
3899 +
3900 +@@ -210,8 +252,7 @@ static int msm_pinmux_request_gpio(struct pinctrl_dev *pctldev,
3901 + if (!g->nfuncs)
3902 + return 0;
3903 +
3904 +- /* For now assume function 0 is GPIO because it always is */
3905 +- return msm_pinmux_set_mux(pctldev, g->funcs[0], offset);
3906 ++ return msm_pinmux_set_mux(pctldev, g->funcs[pctrl->soc->gpio_func], offset);
3907 + }
3908 +
3909 + static const struct pinmux_ops msm_pinmux_ops = {
3910 +@@ -774,7 +815,7 @@ static void msm_gpio_irq_mask(struct irq_data *d)
3911 + raw_spin_unlock_irqrestore(&pctrl->lock, flags);
3912 + }
3913 +
3914 +-static void msm_gpio_irq_clear_unmask(struct irq_data *d, bool status_clear)
3915 ++static void msm_gpio_irq_unmask(struct irq_data *d)
3916 + {
3917 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
3918 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
3919 +@@ -792,17 +833,6 @@ static void msm_gpio_irq_clear_unmask(struct irq_data *d, bool status_clear)
3920 +
3921 + raw_spin_lock_irqsave(&pctrl->lock, flags);
3922 +
3923 +- if (status_clear) {
3924 +- /*
3925 +- * clear the interrupt status bit before unmask to avoid
3926 +- * any erroneous interrupts that would have got latched
3927 +- * when the interrupt is not in use.
3928 +- */
3929 +- val = msm_readl_intr_status(pctrl, g);
3930 +- val &= ~BIT(g->intr_status_bit);
3931 +- msm_writel_intr_status(val, pctrl, g);
3932 +- }
3933 +-
3934 + val = msm_readl_intr_cfg(pctrl, g);
3935 + val |= BIT(g->intr_raw_status_bit);
3936 + val |= BIT(g->intr_enable_bit);
3937 +@@ -822,7 +852,7 @@ static void msm_gpio_irq_enable(struct irq_data *d)
3938 + irq_chip_enable_parent(d);
3939 +
3940 + if (!test_bit(d->hwirq, pctrl->skip_wake_irqs))
3941 +- msm_gpio_irq_clear_unmask(d, true);
3942 ++ msm_gpio_irq_unmask(d);
3943 + }
3944 +
3945 + static void msm_gpio_irq_disable(struct irq_data *d)
3946 +@@ -837,11 +867,6 @@ static void msm_gpio_irq_disable(struct irq_data *d)
3947 + msm_gpio_irq_mask(d);
3948 + }
3949 +
3950 +-static void msm_gpio_irq_unmask(struct irq_data *d)
3951 +-{
3952 +- msm_gpio_irq_clear_unmask(d, false);
3953 +-}
3954 +-
3955 + /**
3956 + * msm_gpio_update_dual_edge_parent() - Prime next edge for IRQs handled by parent.
3957 + * @d: The irq data.
3958 +@@ -894,7 +919,6 @@ static void msm_gpio_irq_ack(struct irq_data *d)
3959 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
3960 + const struct msm_pingroup *g;
3961 + unsigned long flags;
3962 +- u32 val;
3963 +
3964 + if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) {
3965 + if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
3966 +@@ -906,12 +930,7 @@ static void msm_gpio_irq_ack(struct irq_data *d)
3967 +
3968 + raw_spin_lock_irqsave(&pctrl->lock, flags);
3969 +
3970 +- val = msm_readl_intr_status(pctrl, g);
3971 +- if (g->intr_ack_high)
3972 +- val |= BIT(g->intr_status_bit);
3973 +- else
3974 +- val &= ~BIT(g->intr_status_bit);
3975 +- msm_writel_intr_status(val, pctrl, g);
3976 ++ msm_ack_intr_status(pctrl, g);
3977 +
3978 + if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
3979 + msm_gpio_update_dual_edge_pos(pctrl, g, d);
3980 +@@ -936,6 +955,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
3981 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
3982 + const struct msm_pingroup *g;
3983 + unsigned long flags;
3984 ++ bool was_enabled;
3985 + u32 val;
3986 +
3987 + if (msm_gpio_needs_dual_edge_parent_workaround(d, type)) {
3988 +@@ -997,6 +1017,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
3989 + * could cause the INTR_STATUS to be set for EDGE interrupts.
3990 + */
3991 + val = msm_readl_intr_cfg(pctrl, g);
3992 ++ was_enabled = val & BIT(g->intr_raw_status_bit);
3993 + val |= BIT(g->intr_raw_status_bit);
3994 + if (g->intr_detection_width == 2) {
3995 + val &= ~(3 << g->intr_detection_bit);
3996 +@@ -1046,6 +1067,14 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
3997 + }
3998 + msm_writel_intr_cfg(val, pctrl, g);
3999 +
4000 ++ /*
4001 ++ * The first time we set RAW_STATUS_EN it could trigger an interrupt.
4002 ++ * Clear the interrupt. This is safe because we have
4003 ++ * IRQCHIP_SET_TYPE_MASKED.
4004 ++ */
4005 ++ if (!was_enabled)
4006 ++ msm_ack_intr_status(pctrl, g);
4007 ++
4008 + if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
4009 + msm_gpio_update_dual_edge_pos(pctrl, g, d);
4010 +
4011 +@@ -1099,16 +1128,11 @@ static int msm_gpio_irq_reqres(struct irq_data *d)
4012 + }
4013 +
4014 + /*
4015 +- * Clear the interrupt that may be pending before we enable
4016 +- * the line.
4017 +- * This is especially a problem with the GPIOs routed to the
4018 +- * PDC. These GPIOs are direct-connect interrupts to the GIC.
4019 +- * Disabling the interrupt line at the PDC does not prevent
4020 +- * the interrupt from being latched at the GIC. The state at
4021 +- * GIC needs to be cleared before enabling.
4022 ++ * The disable / clear-enable workaround we do in msm_pinmux_set_mux()
4023 ++ * only works if disable is not lazy since we only clear any bogus
4024 ++ * interrupt in hardware. Explicitly mark the interrupt as UNLAZY.
4025 + */
4026 +- if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs))
4027 +- irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, 0);
4028 ++ irq_set_status_flags(d->irq, IRQ_DISABLE_UNLAZY);
4029 +
4030 + return 0;
4031 + out:
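
The pinctrl-msm rework hinges on keeping disable_irq()/enable_irq() balanced: disable_irq() is refcounted, so a per-pin bit claimed with test_and_set_bit() ensures only the transition into the muxed-away state disables, and test_and_clear_bit() pairs exactly one enable with it. The pairing, sketched single-threaded with plain ints in place of the atomic bitmap:

    #include <stdio.h>

    static int depth;               /* stand-in for the IRQ disable refcount */
    static int disabled_for_mux;    /* stand-in for the per-pin bitmap bit */

    static void mux_set(int to_gpio)
    {
            if (!to_gpio && !disabled_for_mux) {      /* test_and_set_bit() */
                    disabled_for_mux = 1;
                    depth++;                          /* disable_irq() */
            } else if (to_gpio && disabled_for_mux) { /* test_and_clear_bit() */
                    disabled_for_mux = 0;
                    depth--;                          /* enable_irq() */
            }
    }

    int main(void)
    {
            mux_set(0);     /* mux away: one disable */
            mux_set(0);     /* mux away again: no double disable */
            mux_set(1);     /* back to GPIO: exactly one enable */
            printf("depth=%d (balanced iff 0)\n", depth);
            return 0;
    }
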
4032 +diff --git a/drivers/pinctrl/qcom/pinctrl-msm.h b/drivers/pinctrl/qcom/pinctrl-msm.h
4033 +index 333f99243c43a..e31a5167c91ec 100644
4034 +--- a/drivers/pinctrl/qcom/pinctrl-msm.h
4035 ++++ b/drivers/pinctrl/qcom/pinctrl-msm.h
4036 +@@ -118,6 +118,7 @@ struct msm_gpio_wakeirq_map {
4037 + * @wakeirq_dual_edge_errata: If true then GPIOs using the wakeirq_map need
4038 + * to be aware that their parent can't handle dual
4039 + * edge interrupts.
4040 ++ * @gpio_func: Which function number is GPIO (usually 0).
4041 + */
4042 + struct msm_pinctrl_soc_data {
4043 + const struct pinctrl_pin_desc *pins;
4044 +@@ -134,6 +135,7 @@ struct msm_pinctrl_soc_data {
4045 + const struct msm_gpio_wakeirq_map *wakeirq_map;
4046 + unsigned int nwakeirq_map;
4047 + bool wakeirq_dual_edge_errata;
4048 ++ unsigned int gpio_func;
4049 + };
4050 +
4051 + extern const struct dev_pm_ops msm_pinctrl_dev_pm_ops;
4052 +diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
4053 +index ecd477964d117..18bf8aeb5f870 100644
4054 +--- a/drivers/platform/x86/hp-wmi.c
4055 ++++ b/drivers/platform/x86/hp-wmi.c
4056 +@@ -247,7 +247,8 @@ static int hp_wmi_perform_query(int query, enum hp_wmi_command command,
4057 + ret = bios_return->return_code;
4058 +
4059 + if (ret) {
4060 +- if (ret != HPWMI_RET_UNKNOWN_CMDTYPE)
4061 ++ if (ret != HPWMI_RET_UNKNOWN_COMMAND &&
4062 ++ ret != HPWMI_RET_UNKNOWN_CMDTYPE)
4063 + pr_warn("query 0x%x returned error 0x%x\n", query, ret);
4064 + goto out_free;
4065 + }
4066 +diff --git a/drivers/platform/x86/i2c-multi-instantiate.c b/drivers/platform/x86/i2c-multi-instantiate.c
4067 +index 6acc8457866e1..d3b5afbe4833e 100644
4068 +--- a/drivers/platform/x86/i2c-multi-instantiate.c
4069 ++++ b/drivers/platform/x86/i2c-multi-instantiate.c
4070 +@@ -166,13 +166,29 @@ static const struct i2c_inst_data bsg2150_data[] = {
4071 + {}
4072 + };
4073 +
4074 +-static const struct i2c_inst_data int3515_data[] = {
4075 +- { "tps6598x", IRQ_RESOURCE_APIC, 0 },
4076 +- { "tps6598x", IRQ_RESOURCE_APIC, 1 },
4077 +- { "tps6598x", IRQ_RESOURCE_APIC, 2 },
4078 +- { "tps6598x", IRQ_RESOURCE_APIC, 3 },
4079 +- {}
4080 +-};
4081 ++/*
4082 ++ * Device with _HID INT3515 (TI PD controllers) has some unresolved interrupt
4083 ++ * issues. The most common problem seen is interrupt flood.
4084 ++ *
4085 ++ * There are at least two known causes. Firstly, on some boards, the
4086 ++ * I2CSerialBus resource index does not match the Interrupt resource, i.e. they
4087 ++ * are not mapped one-to-one as in the array below. Secondly, on some boards
4088 ++ * the IRQ line from the PD controller is not actually connected at all. But the
4089 ++ * interrupt flood is also seen on some boards where neither is a problem, so
4090 ++ * there must be other problems as well.
4091 ++ *
4092 ++ * Because of the issues with the interrupt, the device is disabled for now. If
4093 ++ * you wish to debug the issues, uncomment the below, and add an entry for the
4094 ++ * INT3515 device to the i2c_multi_instance_ids table.
4095 ++ *
4096 ++ * static const struct i2c_inst_data int3515_data[] = {
4097 ++ * { "tps6598x", IRQ_RESOURCE_APIC, 0 },
4098 ++ * { "tps6598x", IRQ_RESOURCE_APIC, 1 },
4099 ++ * { "tps6598x", IRQ_RESOURCE_APIC, 2 },
4100 ++ * { "tps6598x", IRQ_RESOURCE_APIC, 3 },
4101 ++ * { }
4102 ++ * };
4103 ++ */
4104 +
4105 + /*
4106 + * Note new device-ids must also be added to i2c_multi_instantiate_ids in
4107 +@@ -181,7 +197,6 @@ static const struct i2c_inst_data int3515_data[] = {
4108 + static const struct acpi_device_id i2c_multi_inst_acpi_ids[] = {
4109 + { "BSG1160", (unsigned long)bsg1160_data },
4110 + { "BSG2150", (unsigned long)bsg2150_data },
4111 +- { "INT3515", (unsigned long)int3515_data },
4112 + { }
4113 + };
4114 + MODULE_DEVICE_TABLE(acpi, i2c_multi_inst_acpi_ids);
4115 +diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
4116 +index 7598cd46cf606..5b81bafa5c165 100644
4117 +--- a/drivers/platform/x86/ideapad-laptop.c
4118 ++++ b/drivers/platform/x86/ideapad-laptop.c
4119 +@@ -92,6 +92,7 @@ struct ideapad_private {
4120 + struct dentry *debug;
4121 + unsigned long cfg;
4122 + bool has_hw_rfkill_switch;
4123 ++ bool has_touchpad_switch;
4124 + const char *fnesc_guid;
4125 + };
4126 +
4127 +@@ -535,7 +536,9 @@ static umode_t ideapad_is_visible(struct kobject *kobj,
4128 + } else if (attr == &dev_attr_fn_lock.attr) {
4129 + supported = acpi_has_method(priv->adev->handle, "HALS") &&
4130 + acpi_has_method(priv->adev->handle, "SALS");
4131 +- } else
4132 ++ } else if (attr == &dev_attr_touchpad.attr)
4133 ++ supported = priv->has_touchpad_switch;
4134 ++ else
4135 + supported = true;
4136 +
4137 + return supported ? attr->mode : 0;
4138 +@@ -867,6 +870,9 @@ static void ideapad_sync_touchpad_state(struct ideapad_private *priv)
4139 + {
4140 + unsigned long value;
4141 +
4142 ++ if (!priv->has_touchpad_switch)
4143 ++ return;
4144 ++
4145 + /* Without reading from the EC, the touchpad LED doesn't switch state */
4146 + if (!read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value)) {
4147 + /* Some IdeaPads don't really turn off touchpad - they only
4148 +@@ -989,6 +995,9 @@ static int ideapad_acpi_add(struct platform_device *pdev)
4149 + priv->platform_device = pdev;
4150 + priv->has_hw_rfkill_switch = dmi_check_system(hw_rfkill_list);
4151 +
4152 ++ /* Most ideapads with ELAN0634 touchpad don't use EC touchpad switch */
4153 ++ priv->has_touchpad_switch = !acpi_dev_present("ELAN0634", NULL, -1);
4154 ++
4155 + ret = ideapad_sysfs_init(priv);
4156 + if (ret)
4157 + return ret;
4158 +@@ -1006,6 +1015,10 @@ static int ideapad_acpi_add(struct platform_device *pdev)
4159 + if (!priv->has_hw_rfkill_switch)
4160 + write_ec_cmd(priv->adev->handle, VPCCMD_W_RF, 1);
4161 +
4162 ++ /* The same for Touchpad */
4163 ++ if (!priv->has_touchpad_switch)
4164 ++ write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, 1);
4165 ++
4166 + for (i = 0; i < IDEAPAD_RFKILL_DEV_NUM; i++)
4167 + if (test_bit(ideapad_rfk_data[i].cfgbit, &priv->cfg))
4168 + ideapad_register_rfkill(priv, i);
4169 +diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
4170 +index 3b49a1f4061bc..65fb3a3031470 100644
4171 +--- a/drivers/platform/x86/intel-vbtn.c
4172 ++++ b/drivers/platform/x86/intel-vbtn.c
4173 +@@ -204,12 +204,6 @@ static const struct dmi_system_id dmi_switches_allow_list[] = {
4174 + DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"),
4175 + },
4176 + },
4177 +- {
4178 +- .matches = {
4179 +- DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
4180 +- DMI_MATCH(DMI_PRODUCT_NAME, "HP Stream x360 Convertible PC 11"),
4181 +- },
4182 +- },
4183 + {
4184 + .matches = {
4185 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
4186 +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
4187 +index 9ebeb031329d9..cc45cdac13844 100644
4188 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c
4189 ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
4190 +@@ -8232,11 +8232,9 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
4191 + goto out;
4192 + }
4193 +
4194 ++ /* always store 64 bits regardless of addressing */
4195 + sense_ptr = (void *)cmd->frame + ioc->sense_off;
4196 +- if (instance->consistent_mask_64bit)
4197 +- put_unaligned_le64(sense_handle, sense_ptr);
4198 +- else
4199 +- put_unaligned_le32(sense_handle, sense_ptr);
4200 ++ put_unaligned_le64(sense_handle, sense_ptr);
4201 + }
4202 +
4203 + /*
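
The megaraid change above always stores the sense handle as 64-bit little-endian at an offset inside the frame that need not be naturally aligned; put_unaligned_le64() amounts to a byte-wise store that is safe on any architecture and endianness. A standalone equivalent:

    #include <stdint.h>
    #include <stdio.h>

    /* Same contract as the kernel helper: no alignment assumption,
     * explicit little-endian byte order. */
    static void put_unaligned_le64(uint64_t v, void *p)
    {
            unsigned char *b = p;

            for (int i = 0; i < 8; i++)
                    b[i] = (unsigned char)(v >> (8 * i));
    }

    int main(void)
    {
            unsigned char frame[16] = { 0 };

            put_unaligned_le64(0x1122334455667788ULL, frame + 3); /* odd offset */
            printf("%02x %02x\n", frame[3], frame[10]);           /* 88 11 */
            return 0;
    }
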
4204 +diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
4205 +index f5fc7f518f8af..47ad64b066236 100644
4206 +--- a/drivers/scsi/qedi/qedi_main.c
4207 ++++ b/drivers/scsi/qedi/qedi_main.c
4208 +@@ -2245,7 +2245,7 @@ qedi_show_boot_tgt_info(struct qedi_ctx *qedi, int type,
4209 + chap_name);
4210 + break;
4211 + case ISCSI_BOOT_TGT_CHAP_SECRET:
4212 +- rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
4213 ++ rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN,
4214 + chap_secret);
4215 + break;
4216 + case ISCSI_BOOT_TGT_REV_CHAP_NAME:
4217 +@@ -2253,7 +2253,7 @@ qedi_show_boot_tgt_info(struct qedi_ctx *qedi, int type,
4218 + mchap_name);
4219 + break;
4220 + case ISCSI_BOOT_TGT_REV_CHAP_SECRET:
4221 +- rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
4222 ++ rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN,
4223 + mchap_secret);
4224 + break;
4225 + case ISCSI_BOOT_TGT_FLAGS:
4226 +diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
4227 +index 24c0f7ec03511..4a08c450b756f 100644
4228 +--- a/drivers/scsi/scsi_debug.c
4229 ++++ b/drivers/scsi/scsi_debug.c
4230 +@@ -6740,7 +6740,7 @@ static int __init scsi_debug_init(void)
4231 + k = sdeb_zbc_model_str(sdeb_zbc_model_s);
4232 + if (k < 0) {
4233 + ret = k;
4234 +- goto free_vm;
4235 ++ goto free_q_arr;
4236 + }
4237 + sdeb_zbc_model = k;
4238 + switch (sdeb_zbc_model) {
4239 +@@ -6753,7 +6753,8 @@ static int __init scsi_debug_init(void)
4240 + break;
4241 + default:
4242 + pr_err("Invalid ZBC model\n");
4243 +- return -EINVAL;
4244 ++ ret = -EINVAL;
4245 ++ goto free_q_arr;
4246 + }
4247 + }
4248 + if (sdeb_zbc_model != BLK_ZONED_NONE) {
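
Both scsi_debug hunks above repair goto-based unwinding: an early validation failure has to jump to the label that frees only what exists so far (free_q_arr), and a bare return in the middle of init leaks everything allocated before it. The ordering in miniature (init(), q_arr and vm are illustrative stand-ins, not the driver's code):

    #include <stdio.h>
    #include <stdlib.h>

    static int init(int fail_stage)
    {
            int ret = 0;
            void *q_arr, *vm;

            q_arr = malloc(16);
            if (!q_arr)
                    return -1;

            if (fail_stage == 1) {
                    ret = -22;              /* -EINVAL */
                    goto free_q_arr;        /* vm doesn't exist yet */
            }

            vm = malloc(16);
            if (!vm) {
                    ret = -1;
                    goto free_q_arr;
            }

            if (fail_stage == 2) {
                    ret = -22;
                    goto free_vm;           /* both allocations exist now */
            }

            free(vm);
            free(q_arr);
            return 0;

    free_vm:
            free(vm);
    free_q_arr:
            free(q_arr);
            return ret;
    }

    int main(void)
    {
            printf("%d %d %d\n", init(1), init(2), init(0));
            return 0;
    }
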
4249 +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
4250 +index 656bcf4940d6d..fedb89d4ac3f0 100644
4251 +--- a/drivers/scsi/sd.c
4252 ++++ b/drivers/scsi/sd.c
4253 +@@ -986,8 +986,10 @@ static blk_status_t sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd)
4254 + }
4255 + }
4256 +
4257 +- if (sdp->no_write_same)
4258 ++ if (sdp->no_write_same) {
4259 ++ rq->rq_flags |= RQF_QUIET;
4260 + return BLK_STS_TARGET;
4261 ++ }
4262 +
4263 + if (sdkp->ws16 || lba > 0xffffffff || nr_blocks > 0xffff)
4264 + return sd_setup_write_same16_cmnd(cmd, false);
4265 +diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
4266 +index dcdb4eb1f90ba..c339517b7a094 100644
4267 +--- a/drivers/scsi/ufs/Kconfig
4268 ++++ b/drivers/scsi/ufs/Kconfig
4269 +@@ -72,6 +72,7 @@ config SCSI_UFS_DWC_TC_PCI
4270 + config SCSI_UFSHCD_PLATFORM
4271 + tristate "Platform bus based UFS Controller support"
4272 + depends on SCSI_UFSHCD
4273 ++ depends on HAS_IOMEM
4274 + help
4275 + This selects the UFS host controller support. Select this if
4276 + you have an UFS controller on Platform bus.
4277 +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
4278 +index 7b9a9a771b11b..8132893284670 100644
4279 +--- a/drivers/scsi/ufs/ufshcd.c
4280 ++++ b/drivers/scsi/ufs/ufshcd.c
4281 +@@ -283,7 +283,8 @@ static inline void ufshcd_wb_config(struct ufs_hba *hba)
4282 + if (ret)
4283 + dev_err(hba->dev, "%s: En WB flush during H8: failed: %d\n",
4284 + __func__, ret);
4285 +- ufshcd_wb_toggle_flush(hba, true);
4286 ++ if (!(hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL))
4287 ++ ufshcd_wb_toggle_flush(hba, true);
4288 + }
4289 +
4290 + static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba)
4291 +@@ -4912,7 +4913,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
4292 + break;
4293 + } /* end of switch */
4294 +
4295 +- if ((host_byte(result) != DID_OK) && !hba->silence_err_logs)
4296 ++ if ((host_byte(result) != DID_OK) &&
4297 ++ (host_byte(result) != DID_REQUEUE) && !hba->silence_err_logs)
4298 + ufshcd_print_trs(hba, 1 << lrbp->task_tag, true);
4299 + return result;
4300 + }
4301 +@@ -5353,9 +5355,6 @@ static int ufshcd_wb_toggle_flush_during_h8(struct ufs_hba *hba, bool set)
4302 +
4303 + static inline void ufshcd_wb_toggle_flush(struct ufs_hba *hba, bool enable)
4304 + {
4305 +- if (hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL)
4306 +- return;
4307 +-
4308 + if (enable)
4309 + ufshcd_wb_buf_flush_enable(hba);
4310 + else
4311 +@@ -6210,9 +6209,13 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
4312 + intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
4313 + }
4314 +
4315 +- if (enabled_intr_status && retval == IRQ_NONE) {
4316 +- dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n",
4317 +- __func__, intr_status);
4318 ++ if (enabled_intr_status && retval == IRQ_NONE &&
4319 ++ !ufshcd_eh_in_progress(hba)) {
4320 ++ dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x (0x%08x, 0x%08x)\n",
4321 ++ __func__,
4322 ++ intr_status,
4323 ++ hba->ufs_stats.last_intr_status,
4324 ++ enabled_intr_status);
4325 + ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
4326 + }
4327 +
4328 +@@ -6256,7 +6259,10 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
4329 + * Even though we use wait_event() which sleeps indefinitely,
4330 + * the maximum wait time is bounded by %TM_CMD_TIMEOUT.
4331 + */
4332 +- req = blk_get_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_RESERVED);
4333 ++ req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
4334 ++ if (IS_ERR(req))
4335 ++ return PTR_ERR(req);
4336 ++
4337 + req->end_io_data = &wait;
4338 + free_slot = req->tag;
4339 + WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs);
4340 +@@ -6569,19 +6575,16 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
4341 + {
4342 + struct Scsi_Host *host;
4343 + struct ufs_hba *hba;
4344 +- unsigned int tag;
4345 + u32 pos;
4346 + int err;
4347 +- u8 resp = 0xF;
4348 +- struct ufshcd_lrb *lrbp;
4349 ++ u8 resp = 0xF, lun;
4350 + unsigned long flags;
4351 +
4352 + host = cmd->device->host;
4353 + hba = shost_priv(host);
4354 +- tag = cmd->request->tag;
4355 +
4356 +- lrbp = &hba->lrb[tag];
4357 +- err = ufshcd_issue_tm_cmd(hba, lrbp->lun, 0, UFS_LOGICAL_RESET, &resp);
4358 ++ lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
4359 ++ err = ufshcd_issue_tm_cmd(hba, lun, 0, UFS_LOGICAL_RESET, &resp);
4360 + if (err || resp != UPIU_TASK_MANAGEMENT_FUNC_COMPL) {
4361 + if (!err)
4362 + err = resp;
4363 +@@ -6590,7 +6593,7 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
4364 +
4365 + /* clear the commands that were pending for corresponding LUN */
4366 + for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) {
4367 +- if (hba->lrb[pos].lun == lrbp->lun) {
4368 ++ if (hba->lrb[pos].lun == lun) {
4369 + err = ufshcd_clear_cmd(hba, pos);
4370 + if (err)
4371 + break;
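
Among the ufshcd changes, the device-reset handler no longer reads the LUN out of an LRB slot indexed by the request tag, which may be stale by the time error handling runs; it derives the LUN from the SCSI command itself and then sweeps the outstanding-request bitmap. A simplified userspace sketch of that sweep, with all names illustrative:

    #include <stdint.h>
    #include <stdio.h>

    struct slot { uint8_t lun; };

    /* Walk the outstanding-request bitmap (for_each_set_bit() in the
     * kernel) and clear every slot whose LUN matches the failed command. */
    static void clear_lun(const struct slot *slots, uint32_t outstanding,
                          int nslots, uint8_t lun)
    {
        for (int pos = 0; pos < nslots; pos++) {
            if (!(outstanding & (1u << pos)))
                continue;
            if (slots[pos].lun == lun)
                printf("clearing slot %d\n", pos);
        }
    }
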
4372 +diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
4373 +index 590e6d0722281..7d5814a95e1ed 100644
4374 +--- a/drivers/target/target_core_user.c
4375 ++++ b/drivers/target/target_core_user.c
4376 +@@ -562,8 +562,6 @@ tcmu_get_block_page(struct tcmu_dev *udev, uint32_t dbi)
4377 +
4378 + static inline void tcmu_free_cmd(struct tcmu_cmd *tcmu_cmd)
4379 + {
4380 +- if (tcmu_cmd->se_cmd)
4381 +- tcmu_cmd->se_cmd->priv = NULL;
4382 + kfree(tcmu_cmd->dbi);
4383 + kmem_cache_free(tcmu_cmd_cache, tcmu_cmd);
4384 + }
4385 +@@ -1188,11 +1186,12 @@ tcmu_queue_cmd(struct se_cmd *se_cmd)
4386 + return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
4387 +
4388 + mutex_lock(&udev->cmdr_lock);
4389 +- se_cmd->priv = tcmu_cmd;
4390 + if (!(se_cmd->transport_state & CMD_T_ABORTED))
4391 + ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
4392 + if (ret < 0)
4393 + tcmu_free_cmd(tcmu_cmd);
4394 ++ else
4395 ++ se_cmd->priv = tcmu_cmd;
4396 + mutex_unlock(&udev->cmdr_lock);
4397 + return scsi_ret;
4398 + }
4399 +@@ -1255,6 +1254,7 @@ tcmu_tmr_notify(struct se_device *se_dev, enum tcm_tmreq_table tmf,
4400 +
4401 + list_del_init(&cmd->queue_entry);
4402 + tcmu_free_cmd(cmd);
4403 ++ se_cmd->priv = NULL;
4404 + target_complete_cmd(se_cmd, SAM_STAT_TASK_ABORTED);
4405 + unqueued = true;
4406 + }
4407 +@@ -1346,6 +1346,7 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
4408 + }
4409 +
4410 + done:
4411 ++ se_cmd->priv = NULL;
4412 + if (read_len_valid) {
4413 + pr_debug("read_len = %d\n", read_len);
4414 + target_complete_cmd_with_length(cmd->se_cmd,
4415 +@@ -1492,6 +1493,7 @@ static void tcmu_check_expired_queue_cmd(struct tcmu_cmd *cmd)
4416 + se_cmd = cmd->se_cmd;
4417 + tcmu_free_cmd(cmd);
4418 +
4419 ++ se_cmd->priv = NULL;
4420 + target_complete_cmd(se_cmd, SAM_STAT_TASK_SET_FULL);
4421 + }
4422 +
4423 +@@ -1606,6 +1608,7 @@ static void run_qfull_queue(struct tcmu_dev *udev, bool fail)
4424 + * removed then LIO core will do the right thing and
4425 + * fail the retry.
4426 + */
4427 ++ tcmu_cmd->se_cmd->priv = NULL;
4428 + target_complete_cmd(tcmu_cmd->se_cmd, SAM_STAT_BUSY);
4429 + tcmu_free_cmd(tcmu_cmd);
4430 + continue;
4431 +@@ -1619,6 +1622,7 @@ static void run_qfull_queue(struct tcmu_dev *udev, bool fail)
4432 + * Ignore scsi_ret for now. target_complete_cmd
4433 + * drops it.
4434 + */
4435 ++ tcmu_cmd->se_cmd->priv = NULL;
4436 + target_complete_cmd(tcmu_cmd->se_cmd,
4437 + SAM_STAT_CHECK_CONDITION);
4438 + tcmu_free_cmd(tcmu_cmd);
4439 +@@ -2226,6 +2230,7 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
4440 + if (!test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) {
4441 + WARN_ON(!cmd->se_cmd);
4442 + list_del_init(&cmd->queue_entry);
4443 ++ cmd->se_cmd->priv = NULL;
4444 + if (err_level == 1) {
4445 + /*
4446 + * Userspace was not able to start the
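
The target_core_user changes tighten the lifetime of se_cmd->priv: the back-pointer is set only once the command is really queued, and every completion or free path clears it first, so no path can follow a pointer to a freed driver command. A toy sketch of the invariant, with hypothetical types:

    #include <stdio.h>
    #include <stdlib.h>

    struct drv_cmd { int opcode; };
    struct generic_cmd { struct drv_cmd *priv; };

    static void finish(struct generic_cmd *cmd)
    {
        /* completion must never observe a freed driver command */
        if (cmd->priv)
            printf("BUG: stale driver pointer\n");
    }

    static void complete_and_free(struct generic_cmd *cmd, struct drv_cmd *dcmd)
    {
        cmd->priv = NULL;   /* sever the back-pointer before freeing */
        free(dcmd);
        finish(cmd);
    }
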
4447 +diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
4448 +index 7e5e363152607..c2869489ba681 100644
4449 +--- a/drivers/tty/n_tty.c
4450 ++++ b/drivers/tty/n_tty.c
4451 +@@ -2079,9 +2079,6 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
4452 + return 0;
4453 + }
4454 +
4455 +-extern ssize_t redirected_tty_write(struct file *, const char __user *,
4456 +- size_t, loff_t *);
4457 +-
4458 + /**
4459 + * job_control - check job control
4460 + * @tty: tty
4461 +@@ -2103,7 +2100,7 @@ static int job_control(struct tty_struct *tty, struct file *file)
4462 + /* NOTE: not yet done after every sleep pending a thorough
4463 + check of the logic of this change. -- jlc */
4464 + /* don't stop on /dev/console */
4465 +- if (file->f_op->write == redirected_tty_write)
4466 ++ if (file->f_op->write_iter == redirected_tty_write)
4467 + return 0;
4468 +
4469 + return __tty_check_change(tty, SIGTTIN);
4470 +@@ -2307,7 +2304,7 @@ static ssize_t n_tty_write(struct tty_struct *tty, struct file *file,
4471 + ssize_t retval = 0;
4472 +
4473 + /* Job control check -- must be done at start (POSIX.1 7.1.1.4). */
4474 +- if (L_TOSTOP(tty) && file->f_op->write != redirected_tty_write) {
4475 ++ if (L_TOSTOP(tty) && file->f_op->write_iter != redirected_tty_write) {
4476 + retval = tty_check_change(tty);
4477 + if (retval)
4478 + return retval;
4479 +diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
4480 +index 118b299122898..e0c00a1b07639 100644
4481 +--- a/drivers/tty/serial/mvebu-uart.c
4482 ++++ b/drivers/tty/serial/mvebu-uart.c
4483 +@@ -648,6 +648,14 @@ static void wait_for_xmitr(struct uart_port *port)
4484 + (val & STAT_TX_RDY(port)), 1, 10000);
4485 + }
4486 +
4487 ++static void wait_for_xmite(struct uart_port *port)
4488 ++{
4489 ++ u32 val;
4490 ++
4491 ++ readl_poll_timeout_atomic(port->membase + UART_STAT, val,
4492 ++ (val & STAT_TX_EMP), 1, 10000);
4493 ++}
4494 ++
4495 + static void mvebu_uart_console_putchar(struct uart_port *port, int ch)
4496 + {
4497 + wait_for_xmitr(port);
4498 +@@ -675,7 +683,7 @@ static void mvebu_uart_console_write(struct console *co, const char *s,
4499 +
4500 + uart_console_write(port, s, count, mvebu_uart_console_putchar);
4501 +
4502 +- wait_for_xmitr(port);
4503 ++ wait_for_xmite(port);
4504 +
4505 + if (ier)
4506 + writel(ier, port->membase + UART_CTRL(port));
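
The mvebu-uart fix waits for the transmitter to be fully empty, not merely ready to accept another byte, before restoring interrupt state after a console write. A userspace sketch of the readl_poll_timeout_atomic()-style loop it relies on, with read_status() standing in for the register read:

    #include <stdbool.h>
    #include <stdint.h>

    /* Spin on a status register until a bit is set or the poll budget
     * is spent (roughly 10ms at one poll per microsecond). */
    static bool wait_for_bit(uint32_t (*read_status)(void), uint32_t mask)
    {
        for (int i = 0; i < 10000; i++) {
            if (read_status() & mask)
                return true;
            /* udelay(1) in the kernel version */
        }
        return false;
    }
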
4507 +diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c
4508 +index 13eadcb8aec4e..214bf3086c68a 100644
4509 +--- a/drivers/tty/serial/sifive.c
4510 ++++ b/drivers/tty/serial/sifive.c
4511 +@@ -999,6 +999,7 @@ static int sifive_serial_probe(struct platform_device *pdev)
4512 + /* Set up clock divider */
4513 + ssp->clkin_rate = clk_get_rate(ssp->clk);
4514 + ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE;
4515 ++ ssp->port.uartclk = ssp->baud_rate * 16;
4516 + __ssp_update_div(ssp);
4517 +
4518 + platform_set_drvdata(pdev, ssp);
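
The sifive change advertises port->uartclk as 16 times the default baud rate; the serial core conventionally treats uartclk / 16 as the highest supported rate, so leaving the field at zero can make baud negotiation fail. A trivial illustration of that 16x oversampling convention:

    #include <stdio.h>

    int main(void)
    {
        unsigned int baud = 115200;
        unsigned int uartclk = baud * 16;   /* classic 16x oversampling */

        printf("uartclk=%u allows up to %u baud\n", uartclk, uartclk / 16);
        return 0;
    }
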
4519 +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
4520 +index 56ade99ef99f4..2f8223b2ffa45 100644
4521 +--- a/drivers/tty/tty_io.c
4522 ++++ b/drivers/tty/tty_io.c
4523 +@@ -143,12 +143,9 @@ LIST_HEAD(tty_drivers); /* linked list of tty drivers */
4524 + DEFINE_MUTEX(tty_mutex);
4525 +
4526 + static ssize_t tty_read(struct file *, char __user *, size_t, loff_t *);
4527 +-static ssize_t tty_write(struct file *, const char __user *, size_t, loff_t *);
4528 +-ssize_t redirected_tty_write(struct file *, const char __user *,
4529 +- size_t, loff_t *);
4530 ++static ssize_t tty_write(struct kiocb *, struct iov_iter *);
4531 + static __poll_t tty_poll(struct file *, poll_table *);
4532 + static int tty_open(struct inode *, struct file *);
4533 +-long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
4534 + #ifdef CONFIG_COMPAT
4535 + static long tty_compat_ioctl(struct file *file, unsigned int cmd,
4536 + unsigned long arg);
4537 +@@ -438,8 +435,7 @@ static ssize_t hung_up_tty_read(struct file *file, char __user *buf,
4538 + return 0;
4539 + }
4540 +
4541 +-static ssize_t hung_up_tty_write(struct file *file, const char __user *buf,
4542 +- size_t count, loff_t *ppos)
4543 ++static ssize_t hung_up_tty_write(struct kiocb *iocb, struct iov_iter *from)
4544 + {
4545 + return -EIO;
4546 + }
4547 +@@ -478,7 +474,8 @@ static void tty_show_fdinfo(struct seq_file *m, struct file *file)
4548 + static const struct file_operations tty_fops = {
4549 + .llseek = no_llseek,
4550 + .read = tty_read,
4551 +- .write = tty_write,
4552 ++ .write_iter = tty_write,
4553 ++ .splice_write = iter_file_splice_write,
4554 + .poll = tty_poll,
4555 + .unlocked_ioctl = tty_ioctl,
4556 + .compat_ioctl = tty_compat_ioctl,
4557 +@@ -491,7 +488,8 @@ static const struct file_operations tty_fops = {
4558 + static const struct file_operations console_fops = {
4559 + .llseek = no_llseek,
4560 + .read = tty_read,
4561 +- .write = redirected_tty_write,
4562 ++ .write_iter = redirected_tty_write,
4563 ++ .splice_write = iter_file_splice_write,
4564 + .poll = tty_poll,
4565 + .unlocked_ioctl = tty_ioctl,
4566 + .compat_ioctl = tty_compat_ioctl,
4567 +@@ -503,7 +501,7 @@ static const struct file_operations console_fops = {
4568 + static const struct file_operations hung_up_tty_fops = {
4569 + .llseek = no_llseek,
4570 + .read = hung_up_tty_read,
4571 +- .write = hung_up_tty_write,
4572 ++ .write_iter = hung_up_tty_write,
4573 + .poll = hung_up_tty_poll,
4574 + .unlocked_ioctl = hung_up_tty_ioctl,
4575 + .compat_ioctl = hung_up_tty_compat_ioctl,
4576 +@@ -607,9 +605,9 @@ static void __tty_hangup(struct tty_struct *tty, int exit_session)
4577 + /* This breaks for file handles being sent over AF_UNIX sockets ? */
4578 + list_for_each_entry(priv, &tty->tty_files, list) {
4579 + filp = priv->file;
4580 +- if (filp->f_op->write == redirected_tty_write)
4581 ++ if (filp->f_op->write_iter == redirected_tty_write)
4582 + cons_filp = filp;
4583 +- if (filp->f_op->write != tty_write)
4584 ++ if (filp->f_op->write_iter != tty_write)
4585 + continue;
4586 + closecount++;
4587 + __tty_fasync(-1, filp, 0); /* can't block */
4588 +@@ -902,9 +900,9 @@ static inline ssize_t do_tty_write(
4589 + ssize_t (*write)(struct tty_struct *, struct file *, const unsigned char *, size_t),
4590 + struct tty_struct *tty,
4591 + struct file *file,
4592 +- const char __user *buf,
4593 +- size_t count)
4594 ++ struct iov_iter *from)
4595 + {
4596 ++ size_t count = iov_iter_count(from);
4597 + ssize_t ret, written = 0;
4598 + unsigned int chunk;
4599 +
4600 +@@ -956,14 +954,20 @@ static inline ssize_t do_tty_write(
4601 + size_t size = count;
4602 + if (size > chunk)
4603 + size = chunk;
4604 ++
4605 + ret = -EFAULT;
4606 +- if (copy_from_user(tty->write_buf, buf, size))
4607 ++ if (copy_from_iter(tty->write_buf, size, from) != size)
4608 + break;
4609 ++
4610 + ret = write(tty, file, tty->write_buf, size);
4611 + if (ret <= 0)
4612 + break;
4613 ++
4614 ++ /* FIXME! Have Al check this! */
4615 ++ if (ret != size)
4616 ++ iov_iter_revert(from, size-ret);
4617 ++
4618 + written += ret;
4619 +- buf += ret;
4620 + count -= ret;
4621 + if (!count)
4622 + break;
4623 +@@ -1023,9 +1027,9 @@ void tty_write_message(struct tty_struct *tty, char *msg)
4624 + * write method will not be invoked in parallel for each device.
4625 + */
4626 +
4627 +-static ssize_t tty_write(struct file *file, const char __user *buf,
4628 +- size_t count, loff_t *ppos)
4629 ++static ssize_t tty_write(struct kiocb *iocb, struct iov_iter *from)
4630 + {
4631 ++ struct file *file = iocb->ki_filp;
4632 + struct tty_struct *tty = file_tty(file);
4633 + struct tty_ldisc *ld;
4634 + ssize_t ret;
4635 +@@ -1039,17 +1043,16 @@ static ssize_t tty_write(struct file *file, const char __user *buf,
4636 + tty_err(tty, "missing write_room method\n");
4637 + ld = tty_ldisc_ref_wait(tty);
4638 + if (!ld)
4639 +- return hung_up_tty_write(file, buf, count, ppos);
4640 ++ return hung_up_tty_write(iocb, from);
4641 + if (!ld->ops->write)
4642 + ret = -EIO;
4643 + else
4644 +- ret = do_tty_write(ld->ops->write, tty, file, buf, count);
4645 ++ ret = do_tty_write(ld->ops->write, tty, file, from);
4646 + tty_ldisc_deref(ld);
4647 + return ret;
4648 + }
4649 +
4650 +-ssize_t redirected_tty_write(struct file *file, const char __user *buf,
4651 +- size_t count, loff_t *ppos)
4652 ++ssize_t redirected_tty_write(struct kiocb *iocb, struct iov_iter *iter)
4653 + {
4654 + struct file *p = NULL;
4655 +
4656 +@@ -1060,11 +1063,11 @@ ssize_t redirected_tty_write(struct file *file, const char __user *buf,
4657 +
4658 + if (p) {
4659 + ssize_t res;
4660 +- res = vfs_write(p, buf, count, &p->f_pos);
4661 ++ res = vfs_iocb_iter_write(p, iocb, iter);
4662 + fput(p);
4663 + return res;
4664 + }
4665 +- return tty_write(file, buf, count, ppos);
4666 ++ return tty_write(iocb, iter);
4667 + }
4668 +
4669 + /**
4670 +@@ -2293,7 +2296,7 @@ static int tioccons(struct file *file)
4671 + {
4672 + if (!capable(CAP_SYS_ADMIN))
4673 + return -EPERM;
4674 +- if (file->f_op->write == redirected_tty_write) {
4675 ++ if (file->f_op->write_iter == redirected_tty_write) {
4676 + struct file *f;
4677 + spin_lock(&redirect_lock);
4678 + f = redirect;
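
The tty_io rework converts tty writes from the old copy_from_user() path to the iov_iter interface: data is pulled from the iterator in bounded chunks, and when the line discipline accepts less than a full chunk the unconsumed tail is pushed back with iov_iter_revert(). A self-contained userspace sketch of that loop, where struct iter, copy_from() and revert() are simplified stand-ins for the iov_iter machinery:

    #include <stddef.h>
    #include <string.h>

    struct iter { const char *buf; size_t count; };

    static size_t copy_from(struct iter *it, char *dst, size_t n)
    {
        if (n > it->count)
            n = it->count;
        memcpy(dst, it->buf, n);
        it->buf += n;
        it->count -= n;
        return n;
    }

    static void revert(struct iter *it, size_t n)
    {
        it->buf -= n;
        it->count += n;
    }

    static long write_loop(struct iter *it, size_t chunk,
                           long (*dev_write)(const char *, size_t))
    {
        char tmp[256];
        long written = 0;

        if (chunk > sizeof(tmp))
            chunk = sizeof(tmp);
        while (it->count) {
            size_t size = copy_from(it, tmp, chunk);
            long ret = dev_write(tmp, size);

            if (ret <= 0)
                return written ? written : ret;
            if ((size_t)ret != size)            /* short device write: */
                revert(it, size - (size_t)ret); /* give the tail back */
            written += ret;
        }
        return written;
    }
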
4679 +diff --git a/drivers/usb/cdns3/cdns3-imx.c b/drivers/usb/cdns3/cdns3-imx.c
4680 +index 54a2d70a9c730..7e728aab64755 100644
4681 +--- a/drivers/usb/cdns3/cdns3-imx.c
4682 ++++ b/drivers/usb/cdns3/cdns3-imx.c
4683 +@@ -184,7 +184,11 @@ static int cdns_imx_probe(struct platform_device *pdev)
4684 + }
4685 +
4686 + data->num_clks = ARRAY_SIZE(imx_cdns3_core_clks);
4687 +- data->clks = (struct clk_bulk_data *)imx_cdns3_core_clks;
4688 ++ data->clks = devm_kmemdup(dev, imx_cdns3_core_clks,
4689 ++ sizeof(imx_cdns3_core_clks), GFP_KERNEL);
4690 ++ if (!data->clks)
4691 ++ return -ENOMEM;
4692 ++
4693 + ret = devm_clk_bulk_get(dev, data->num_clks, data->clks);
4694 + if (ret)
4695 + return ret;
4696 +@@ -214,20 +218,11 @@ err:
4697 + return ret;
4698 + }
4699 +
4700 +-static int cdns_imx_remove_core(struct device *dev, void *data)
4701 +-{
4702 +- struct platform_device *pdev = to_platform_device(dev);
4703 +-
4704 +- platform_device_unregister(pdev);
4705 +-
4706 +- return 0;
4707 +-}
4708 +-
4709 + static int cdns_imx_remove(struct platform_device *pdev)
4710 + {
4711 + struct device *dev = &pdev->dev;
4712 +
4713 +- device_for_each_child(dev, NULL, cdns_imx_remove_core);
4714 ++ of_platform_depopulate(dev);
4715 + platform_set_drvdata(pdev, NULL);
4716 +
4717 + return 0;
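
The cdns3-imx fix replaces a const-cast of the static clock template with devm_kmemdup(): the template is shared by every device instance, so each probe must work on its own writable copy rather than writing through the cast. A minimal sketch of the duplication, with plain malloc/memcpy standing in for devm_kmemdup():

    #include <stdlib.h>
    #include <string.h>

    struct clk_desc { const char *id; };

    static const struct clk_desc core_clks[] = {
        { "lpm" }, { "bus" }, { "aclk" },   /* illustrative names */
    };

    /* Per-instance heap copy of the shared const template. */
    static struct clk_desc *dup_clks(void)
    {
        struct clk_desc *clks = malloc(sizeof(core_clks));

        if (clks)
            memcpy(clks, core_clks, sizeof(core_clks));
        return clks;
    }
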
4718 +diff --git a/drivers/usb/gadget/udc/aspeed-vhub/epn.c b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
4719 +index 0bd6b20435b8a..02d8bfae58fb1 100644
4720 +--- a/drivers/usb/gadget/udc/aspeed-vhub/epn.c
4721 ++++ b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
4722 +@@ -420,7 +420,10 @@ static void ast_vhub_stop_active_req(struct ast_vhub_ep *ep,
4723 + u32 state, reg, loops;
4724 +
4725 + /* Stop DMA activity */
4726 +- writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
4727 ++ if (ep->epn.desc_mode)
4728 ++ writel(VHUB_EP_DMA_CTRL_RESET, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
4729 ++ else
4730 ++ writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
4731 +
4732 + /* Wait for it to complete */
4733 + for (loops = 0; loops < 1000; loops++) {
4734 +diff --git a/drivers/usb/gadget/udc/bdc/Kconfig b/drivers/usb/gadget/udc/bdc/Kconfig
4735 +index 3e88c7670b2ed..fb01ff47b64cf 100644
4736 +--- a/drivers/usb/gadget/udc/bdc/Kconfig
4737 ++++ b/drivers/usb/gadget/udc/bdc/Kconfig
4738 +@@ -17,7 +17,7 @@ if USB_BDC_UDC
4739 + comment "Platform Support"
4740 + config USB_BDC_PCI
4741 + tristate "BDC support for PCIe based platforms"
4742 +- depends on USB_PCI
4743 ++ depends on USB_PCI && BROKEN
4744 + default USB_BDC_UDC
4745 + help
4746 + Enable support for platforms which have BDC connected through PCIe, such as Lego3 FPGA platform.
4747 +diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
4748 +index debf54205d22e..da691a69fec10 100644
4749 +--- a/drivers/usb/gadget/udc/core.c
4750 ++++ b/drivers/usb/gadget/udc/core.c
4751 +@@ -1532,10 +1532,13 @@ static ssize_t soft_connect_store(struct device *dev,
4752 + struct device_attribute *attr, const char *buf, size_t n)
4753 + {
4754 + struct usb_udc *udc = container_of(dev, struct usb_udc, dev);
4755 ++ ssize_t ret;
4756 +
4757 ++ mutex_lock(&udc_lock);
4758 + if (!udc->driver) {
4759 + dev_err(dev, "soft-connect without a gadget driver\n");
4760 +- return -EOPNOTSUPP;
4761 ++ ret = -EOPNOTSUPP;
4762 ++ goto out;
4763 + }
4764 +
4765 + if (sysfs_streq(buf, "connect")) {
4766 +@@ -1546,10 +1549,14 @@ static ssize_t soft_connect_store(struct device *dev,
4767 + usb_gadget_udc_stop(udc);
4768 + } else {
4769 + dev_err(dev, "unsupported command '%s'\n", buf);
4770 +- return -EINVAL;
4771 ++ ret = -EINVAL;
4772 ++ goto out;
4773 + }
4774 +
4775 +- return n;
4776 ++ ret = n;
4777 ++out:
4778 ++ mutex_unlock(&udc_lock);
4779 ++ return ret;
4780 + }
4781 + static DEVICE_ATTR_WO(soft_connect);
4782 +
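
The soft_connect_store() fix extends udc_lock over the whole store operation and converts the early returns into a single unlock-and-return exit. A pthread sketch of that single-exit shape, with illustrative names:

    #include <errno.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/types.h>

    static pthread_mutex_t udc_lock = PTHREAD_MUTEX_INITIALIZER;
    static void *driver;

    static ssize_t soft_connect_store(const char *buf, size_t n)
    {
        ssize_t ret;

        pthread_mutex_lock(&udc_lock);
        if (!driver) {
            ret = -EOPNOTSUPP;
            goto out;               /* not "return": lock is held */
        }
        if (strcmp(buf, "connect") && strcmp(buf, "disconnect")) {
            ret = -EINVAL;
            goto out;
        }
        ret = (ssize_t)n;
    out:
        pthread_mutex_unlock(&udc_lock);
        return ret;
    }
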
4783 +diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
4784 +index 016937579ed97..17704ee2d7f54 100644
4785 +--- a/drivers/usb/gadget/udc/dummy_hcd.c
4786 ++++ b/drivers/usb/gadget/udc/dummy_hcd.c
4787 +@@ -2266,17 +2266,20 @@ static int dummy_hub_control(
4788 + }
4789 + fallthrough;
4790 + case USB_PORT_FEAT_RESET:
4791 ++ if (!(dum_hcd->port_status & USB_PORT_STAT_CONNECTION))
4792 ++ break;
4793 + /* if it's already enabled, disable */
4794 + if (hcd->speed == HCD_USB3) {
4795 +- dum_hcd->port_status = 0;
4796 + dum_hcd->port_status =
4797 + (USB_SS_PORT_STAT_POWER |
4798 + USB_PORT_STAT_CONNECTION |
4799 + USB_PORT_STAT_RESET);
4800 +- } else
4801 ++ } else {
4802 + dum_hcd->port_status &= ~(USB_PORT_STAT_ENABLE
4803 + | USB_PORT_STAT_LOW_SPEED
4804 + | USB_PORT_STAT_HIGH_SPEED);
4805 ++ dum_hcd->port_status |= USB_PORT_STAT_RESET;
4806 ++ }
4807 + /*
4808 + * We want to reset device status. All but the
4809 + * Self powered feature
4810 +@@ -2288,7 +2291,8 @@ static int dummy_hub_control(
4811 + * interval? Is it still 50msec as for HS?
4812 + */
4813 + dum_hcd->re_timeout = jiffies + msecs_to_jiffies(50);
4814 +- fallthrough;
4815 ++ set_link_state(dum_hcd);
4816 ++ break;
4817 + case USB_PORT_FEAT_C_CONNECTION:
4818 + case USB_PORT_FEAT_C_RESET:
4819 + case USB_PORT_FEAT_C_ENABLE:
4820 +diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
4821 +index 3575b72018810..b5db2b2d0901a 100644
4822 +--- a/drivers/usb/host/ehci-hcd.c
4823 ++++ b/drivers/usb/host/ehci-hcd.c
4824 +@@ -574,6 +574,7 @@ static int ehci_run (struct usb_hcd *hcd)
4825 + struct ehci_hcd *ehci = hcd_to_ehci (hcd);
4826 + u32 temp;
4827 + u32 hcc_params;
4828 ++ int rc;
4829 +
4830 + hcd->uses_new_polling = 1;
4831 +
4832 +@@ -629,9 +630,20 @@ static int ehci_run (struct usb_hcd *hcd)
4833 + down_write(&ehci_cf_port_reset_rwsem);
4834 + ehci->rh_state = EHCI_RH_RUNNING;
4835 + ehci_writel(ehci, FLAG_CF, &ehci->regs->configured_flag);
4836 ++
4837 ++ /* Wait until HC become operational */
4838 + ehci_readl(ehci, &ehci->regs->command); /* unblock posted writes */
4839 + msleep(5);
4840 ++ rc = ehci_handshake(ehci, &ehci->regs->status, STS_HALT, 0, 100 * 1000);
4841 ++
4842 + up_write(&ehci_cf_port_reset_rwsem);
4843 ++
4844 ++ if (rc) {
4845 ++ ehci_err(ehci, "USB %x.%x, controller refused to start: %d\n",
4846 ++ ((ehci->sbrn & 0xf0)>>4), (ehci->sbrn & 0x0f), rc);
4847 ++ return rc;
4848 ++ }
4849 ++
4850 + ehci->last_periodic_enable = ktime_get_real();
4851 +
4852 + temp = HC_VERSION(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase));
4853 +diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
4854 +index 087402aec5cbe..9f9ab5ccea889 100644
4855 +--- a/drivers/usb/host/ehci-hub.c
4856 ++++ b/drivers/usb/host/ehci-hub.c
4857 +@@ -345,6 +345,9 @@ static int ehci_bus_suspend (struct usb_hcd *hcd)
4858 +
4859 + unlink_empty_async_suspended(ehci);
4860 +
4861 ++ /* Some Synopsys controllers mistakenly leave IAA turned on */
4862 ++ ehci_writel(ehci, STS_IAA, &ehci->regs->status);
4863 ++
4864 + /* Any IAA cycle that started before the suspend is now invalid */
4865 + end_iaa_cycle(ehci);
4866 + ehci_handle_start_intr_unlinks(ehci);
4867 +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
4868 +index 167dae117f738..db8612ec82d3e 100644
4869 +--- a/drivers/usb/host/xhci-ring.c
4870 ++++ b/drivers/usb/host/xhci-ring.c
4871 +@@ -2930,6 +2930,8 @@ static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
4872 + trb->field[0] = cpu_to_le32(field1);
4873 + trb->field[1] = cpu_to_le32(field2);
4874 + trb->field[2] = cpu_to_le32(field3);
4875 ++ /* make sure TRB is fully written before giving it to the controller */
4876 ++ wmb();
4877 + trb->field[3] = cpu_to_le32(field4);
4878 +
4879 + trace_xhci_queue_trb(ring, trb);
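
The xhci-ring hunk inserts a write barrier so the first three TRB words are globally visible before the fourth word, whose cycle bit hands the TRB to the controller, is written. A sketch using a C11 release fence as the wmb() analogue:

    #include <stdatomic.h>
    #include <stdint.h>

    struct trb { uint32_t field[4]; };

    static void queue_trb(struct trb *t, uint32_t f0, uint32_t f1,
                          uint32_t f2, uint32_t f3_with_cycle)
    {
        t->field[0] = f0;
        t->field[1] = f1;
        t->field[2] = f2;
        atomic_thread_fence(memory_order_release); /* wmb() analogue */
        t->field[3] = f3_with_cycle;  /* publishes the TRB to hardware */
    }
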
4880 +diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
4881 +index 934be16863523..50bb91b6a4b8d 100644
4882 +--- a/drivers/usb/host/xhci-tegra.c
4883 ++++ b/drivers/usb/host/xhci-tegra.c
4884 +@@ -623,6 +623,13 @@ static void tegra_xusb_mbox_handle(struct tegra_xusb *tegra,
4885 + enable);
4886 + if (err < 0)
4887 + break;
4888 ++
4889 ++ /*
4890 ++ * wait 500us for LFPS detector to be disabled before
4891 ++ * sending ACK
4892 ++ */
4893 ++ if (!enable)
4894 ++ usleep_range(500, 1000);
4895 + }
4896 +
4897 + if (err < 0) {
4898 +diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
4899 +index 6038c4c35db5a..bbebe248b7264 100644
4900 +--- a/drivers/xen/events/events_base.c
4901 ++++ b/drivers/xen/events/events_base.c
4902 +@@ -2010,16 +2010,6 @@ static struct irq_chip xen_percpu_chip __read_mostly = {
4903 + .irq_ack = ack_dynirq,
4904 + };
4905 +
4906 +-int xen_set_callback_via(uint64_t via)
4907 +-{
4908 +- struct xen_hvm_param a;
4909 +- a.domid = DOMID_SELF;
4910 +- a.index = HVM_PARAM_CALLBACK_IRQ;
4911 +- a.value = via;
4912 +- return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
4913 +-}
4914 +-EXPORT_SYMBOL_GPL(xen_set_callback_via);
4915 +-
4916 + #ifdef CONFIG_XEN_PVHVM
4917 + /* Vector callbacks are better than PCI interrupts to receive event
4918 + * channel notifications because we can receive vector callbacks on any
4919 +diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
4920 +index dd911e1ff782c..9db557b76511b 100644
4921 +--- a/drivers/xen/platform-pci.c
4922 ++++ b/drivers/xen/platform-pci.c
4923 +@@ -149,7 +149,6 @@ static int platform_pci_probe(struct pci_dev *pdev,
4924 + ret = gnttab_init();
4925 + if (ret)
4926 + goto grant_out;
4927 +- xenbus_probe(NULL);
4928 + return 0;
4929 + grant_out:
4930 + gnttab_free_auto_xlat_frames();
4931 +diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
4932 +index 2a93b7c9c1599..dc15373354144 100644
4933 +--- a/drivers/xen/xenbus/xenbus.h
4934 ++++ b/drivers/xen/xenbus/xenbus.h
4935 +@@ -115,6 +115,7 @@ int xenbus_probe_node(struct xen_bus_type *bus,
4936 + const char *type,
4937 + const char *nodename);
4938 + int xenbus_probe_devices(struct xen_bus_type *bus);
4939 ++void xenbus_probe(void);
4940 +
4941 + void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
4942 +
4943 +diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
4944 +index eb5151fc8efab..e5fda0256feb3 100644
4945 +--- a/drivers/xen/xenbus/xenbus_comms.c
4946 ++++ b/drivers/xen/xenbus/xenbus_comms.c
4947 +@@ -57,16 +57,8 @@ DEFINE_MUTEX(xs_response_mutex);
4948 + static int xenbus_irq;
4949 + static struct task_struct *xenbus_task;
4950 +
4951 +-static DECLARE_WORK(probe_work, xenbus_probe);
4952 +-
4953 +-
4954 + static irqreturn_t wake_waiting(int irq, void *unused)
4955 + {
4956 +- if (unlikely(xenstored_ready == 0)) {
4957 +- xenstored_ready = 1;
4958 +- schedule_work(&probe_work);
4959 +- }
4960 +-
4961 + wake_up(&xb_waitq);
4962 + return IRQ_HANDLED;
4963 + }
4964 +diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
4965 +index 44634d970a5ca..c8f0282bb6497 100644
4966 +--- a/drivers/xen/xenbus/xenbus_probe.c
4967 ++++ b/drivers/xen/xenbus/xenbus_probe.c
4968 +@@ -683,29 +683,76 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
4969 + }
4970 + EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
4971 +
4972 +-void xenbus_probe(struct work_struct *unused)
4973 ++void xenbus_probe(void)
4974 + {
4975 + xenstored_ready = 1;
4976 +
4977 ++ /*
4978 ++ * In the HVM case, xenbus_init() deferred its call to
4979 ++ * xs_init() in case callbacks were not operational yet.
4980 ++ * So do it now.
4981 ++ */
4982 ++ if (xen_store_domain_type == XS_HVM)
4983 ++ xs_init();
4984 ++
4985 + /* Notify others that xenstore is up */
4986 + blocking_notifier_call_chain(&xenstore_chain, 0, NULL);
4987 + }
4988 +-EXPORT_SYMBOL_GPL(xenbus_probe);
4989 +
4990 +-static int __init xenbus_probe_initcall(void)
4991 ++/*
4992 ++ * Returns true when XenStore init must be deferred in order to
4993 ++ * allow the PCI platform device to be initialised, before we
4994 ++ * can actually have event channel interrupts working.
4995 ++ */
4996 ++static bool xs_hvm_defer_init_for_callback(void)
4997 + {
4998 +- if (!xen_domain())
4999 +- return -ENODEV;
5000 ++#ifdef CONFIG_XEN_PVHVM
5001 ++ return xen_store_domain_type == XS_HVM &&
5002 ++ !xen_have_vector_callback;
5003 ++#else
5004 ++ return false;
5005 ++#endif
5006 ++}
5007 +
5008 +- if (xen_initial_domain() || xen_hvm_domain())
5009 +- return 0;
5010 ++static int __init xenbus_probe_initcall(void)
5011 ++{
5012 ++ /*
5013 ++ * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
5014 ++ * need to wait for the platform PCI device to come up.
5015 ++ */
5016 ++ if (xen_store_domain_type == XS_PV ||
5017 ++ (xen_store_domain_type == XS_HVM &&
5018 ++ !xs_hvm_defer_init_for_callback()))
5019 ++ xenbus_probe();
5020 +
5021 +- xenbus_probe(NULL);
5022 + return 0;
5023 + }
5024 +-
5025 + device_initcall(xenbus_probe_initcall);
5026 +
5027 ++int xen_set_callback_via(uint64_t via)
5028 ++{
5029 ++ struct xen_hvm_param a;
5030 ++ int ret;
5031 ++
5032 ++ a.domid = DOMID_SELF;
5033 ++ a.index = HVM_PARAM_CALLBACK_IRQ;
5034 ++ a.value = via;
5035 ++
5036 ++ ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
5037 ++ if (ret)
5038 ++ return ret;
5039 ++
5040 ++ /*
5041 ++ * If xenbus_probe_initcall() deferred the xenbus_probe()
5042 ++ * due to the callback not functioning yet, we can do it now.
5043 ++ */
5044 ++ if (!xenstored_ready && xs_hvm_defer_init_for_callback())
5045 ++ xenbus_probe();
5046 ++
5047 ++ return ret;
5048 ++}
5049 ++EXPORT_SYMBOL_GPL(xen_set_callback_via);
5050 ++
5051 + /* Set up event channel for xenstored which is run as a local process
5052 + * (this is normally used only in dom0)
5053 + */
5054 +@@ -818,11 +865,17 @@ static int __init xenbus_init(void)
5055 + break;
5056 + }
5057 +
5058 +- /* Initialize the interface to xenstore. */
5059 +- err = xs_init();
5060 +- if (err) {
5061 +- pr_warn("Error initializing xenstore comms: %i\n", err);
5062 +- goto out_error;
5063 ++ /*
5064 ++ * HVM domains may not have a functional callback yet. In that
5065 ++ * case let xs_init() be called from xenbus_probe(), which will
5066 ++ * get invoked at an appropriate time.
5067 ++ */
5068 ++ if (xen_store_domain_type != XS_HVM) {
5069 ++ err = xs_init();
5070 ++ if (err) {
5071 ++ pr_warn("Error initializing xenstore comms: %i\n", err);
5072 ++ goto out_error;
5073 ++ }
5074 + }
5075 +
5076 + if ((xen_store_domain_type != XS_LOCAL) &&
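
The xenbus series above reorders startup so that, on HVM guests without a working event-channel callback, xs_init() and xenbus_probe() are deferred until xen_set_callback_via() has actually installed the callback; everyone else still probes from the initcall. A stripped-down sketch of that deferral logic, with illustrative flags in place of xen_store_domain_type and friends:

    #include <stdbool.h>

    static bool ready;
    static bool must_defer;   /* e.g. HVM without a vector callback */

    static void xs_init_sketch(void) { ready = true; }

    static void initcall_sketch(void)
    {
        if (!must_defer)
            xs_init_sketch();        /* the common, immediate path */
    }

    static void callback_installed_sketch(void)
    {
        if (!ready && must_defer)
            xs_init_sketch();        /* the deferred probe runs now */
    }
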
5077 +diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
5078 +index 771a036867dc0..553b4f6ec8639 100644
5079 +--- a/fs/btrfs/backref.c
5080 ++++ b/fs/btrfs/backref.c
5081 +@@ -3124,7 +3124,7 @@ void btrfs_backref_error_cleanup(struct btrfs_backref_cache *cache,
5082 + list_del_init(&lower->list);
5083 + if (lower == node)
5084 + node = NULL;
5085 +- btrfs_backref_free_node(cache, lower);
5086 ++ btrfs_backref_drop_node(cache, lower);
5087 + }
5088 +
5089 + btrfs_backref_cleanup_node(cache, node);
5090 +diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
5091 +index 3ba6f3839d392..cef2f080fdcd5 100644
5092 +--- a/fs/btrfs/block-group.c
5093 ++++ b/fs/btrfs/block-group.c
5094 +@@ -2687,7 +2687,8 @@ again:
5095 + * Go through delayed refs for all the stuff we've just kicked off
5096 + * and then loop back (just once)
5097 + */
5098 +- ret = btrfs_run_delayed_refs(trans, 0);
5099 ++ if (!ret)
5100 ++ ret = btrfs_run_delayed_refs(trans, 0);
5101 + if (!ret && loops == 0) {
5102 + loops++;
5103 + spin_lock(&cur_trans->dirty_bgs_lock);
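
The block-group hunk guards the follow-up btrfs_run_delayed_refs() call with "if (!ret)" so that a successful second step can never overwrite an earlier failure. The idiom in isolation:

    /* Run step two only if step one succeeded; the first error recorded
     * in ret is then the one returned. */
    static int flush_then_run(int (*flush_step)(void), int (*delayed_step)(void))
    {
        int ret = flush_step();

        if (!ret)
            ret = delayed_step();
        return ret;             /* first error wins */
    }
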
5104 +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
5105 +index af97ddcc6b3e8..56f3b9acd2154 100644
5106 +--- a/fs/btrfs/disk-io.c
5107 ++++ b/fs/btrfs/disk-io.c
5108 +@@ -1482,7 +1482,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
5109 + root = list_first_entry(&fs_info->allocated_roots,
5110 + struct btrfs_root, leak_list);
5111 + btrfs_err(fs_info, "leaked root %s refcount %d",
5112 +- btrfs_root_name(root->root_key.objectid, buf),
5113 ++ btrfs_root_name(&root->root_key, buf),
5114 + refcount_read(&root->refs));
5115 + while (refcount_read(&root->refs) > 1)
5116 + btrfs_put_root(root);
5117 +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
5118 +index 4209dbd6286e4..8fba1c219b190 100644
5119 +--- a/fs/btrfs/extent-tree.c
5120 ++++ b/fs/btrfs/extent-tree.c
5121 +@@ -5571,7 +5571,15 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
5122 + goto out_free;
5123 + }
5124 +
5125 +- trans = btrfs_start_transaction(tree_root, 0);
5126 ++ /*
5127 ++ * Use join to avoid potential EINTR from transaction
5128 ++ * start. See wait_reserve_ticket and the whole
5129 ++ * reservation callchain.
5130 ++ */
5131 ++ if (for_reloc)
5132 ++ trans = btrfs_join_transaction(tree_root);
5133 ++ else
5134 ++ trans = btrfs_start_transaction(tree_root, 0);
5135 + if (IS_ERR(trans)) {
5136 + err = PTR_ERR(trans);
5137 + goto out_free;
5138 +diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
5139 +index 7695c4783d33b..c62771f3af8c6 100644
5140 +--- a/fs/btrfs/print-tree.c
5141 ++++ b/fs/btrfs/print-tree.c
5142 +@@ -26,22 +26,22 @@ static const struct root_name_map root_map[] = {
5143 + { BTRFS_DATA_RELOC_TREE_OBJECTID, "DATA_RELOC_TREE" },
5144 + };
5145 +
5146 +-const char *btrfs_root_name(u64 objectid, char *buf)
5147 ++const char *btrfs_root_name(const struct btrfs_key *key, char *buf)
5148 + {
5149 + int i;
5150 +
5151 +- if (objectid == BTRFS_TREE_RELOC_OBJECTID) {
5152 ++ if (key->objectid == BTRFS_TREE_RELOC_OBJECTID) {
5153 + snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN,
5154 +- "TREE_RELOC offset=%llu", objectid);
5155 ++ "TREE_RELOC offset=%llu", key->offset);
5156 + return buf;
5157 + }
5158 +
5159 + for (i = 0; i < ARRAY_SIZE(root_map); i++) {
5160 +- if (root_map[i].id == objectid)
5161 ++ if (root_map[i].id == key->objectid)
5162 + return root_map[i].name;
5163 + }
5164 +
5165 +- snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", objectid);
5166 ++ snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", key->objectid);
5167 + return buf;
5168 + }
5169 +
5170 +diff --git a/fs/btrfs/print-tree.h b/fs/btrfs/print-tree.h
5171 +index 78b99385a503f..8c3e9319ec4ef 100644
5172 +--- a/fs/btrfs/print-tree.h
5173 ++++ b/fs/btrfs/print-tree.h
5174 +@@ -11,6 +11,6 @@
5175 +
5176 + void btrfs_print_leaf(struct extent_buffer *l);
5177 + void btrfs_print_tree(struct extent_buffer *c, bool follow);
5178 +-const char *btrfs_root_name(u64 objectid, char *buf);
5179 ++const char *btrfs_root_name(const struct btrfs_key *key, char *buf);
5180 +
5181 + #endif
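
The print-tree change passes the whole key into btrfs_root_name(): reloc roots all share BTRFS_TREE_RELOC_OBJECTID, so the value that distinguishes them is key->offset, which the old objectid-only signature printed incorrectly. A compact sketch (the TREE_RELOC constant and buffer length below are hypothetical stand-ins):

    #include <stdint.h>
    #include <stdio.h>

    #define NAME_BUF_LEN 48
    #define TREE_RELOC_OBJECTID ((uint64_t)-8)  /* stand-in value */

    struct key { uint64_t objectid, offset; };

    static const char *root_name(const struct key *key, char *buf)
    {
        if (key->objectid == TREE_RELOC_OBJECTID) {
            /* print the offset: the objectid is the shared constant */
            snprintf(buf, NAME_BUF_LEN, "TREE_RELOC offset=%llu",
                     (unsigned long long)key->offset);
            return buf;
        }
        snprintf(buf, NAME_BUF_LEN, "%llu",
                 (unsigned long long)key->objectid);
        return buf;
    }
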
5182 +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
5183 +index 9e08ddb629685..9e5809118c34d 100644
5184 +--- a/fs/btrfs/send.c
5185 ++++ b/fs/btrfs/send.c
5186 +@@ -5512,6 +5512,21 @@ static int clone_range(struct send_ctx *sctx,
5187 + break;
5188 + offset += clone_len;
5189 + clone_root->offset += clone_len;
5190 ++
5191 ++ /*
5192 ++ * If we are cloning from the file we are currently processing,
5193 ++ * and using the send root as the clone root, we must stop once
5194 ++ * the current clone offset reaches the current eof of the file
5195 ++ * at the receiver, otherwise we would issue an invalid clone
5196 ++ * operation (source range going beyond eof) and cause the
5197 ++ * receiver to fail. So if we reach the current eof, bail out
5198 ++ * and fallback to a regular write.
5199 ++ */
5200 ++ if (clone_root->root == sctx->send_root &&
5201 ++ clone_root->ino == sctx->cur_ino &&
5202 ++ clone_root->offset >= sctx->cur_inode_next_write_offset)
5203 ++ break;
5204 ++
5205 + data_offset += clone_len;
5206 + next:
5207 + path->slots[0]++;
5208 +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
5209 +index 78637665166e0..6311308b32beb 100644
5210 +--- a/fs/btrfs/volumes.c
5211 ++++ b/fs/btrfs/volumes.c
5212 +@@ -4288,6 +4288,8 @@ int btrfs_recover_balance(struct btrfs_fs_info *fs_info)
5213 + btrfs_warn(fs_info,
5214 + "balance: cannot set exclusive op status, resume manually");
5215 +
5216 ++ btrfs_release_path(path);
5217 ++
5218 + mutex_lock(&fs_info->balance_mutex);
5219 + BUG_ON(fs_info->balance_ctl);
5220 + spin_lock(&fs_info->balance_lock);
5221 +diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
5222 +index 8bda092e60c5a..e027c718ca01a 100644
5223 +--- a/fs/cachefiles/rdwr.c
5224 ++++ b/fs/cachefiles/rdwr.c
5225 +@@ -413,7 +413,6 @@ int cachefiles_read_or_alloc_page(struct fscache_retrieval *op,
5226 +
5227 + inode = d_backing_inode(object->backer);
5228 + ASSERT(S_ISREG(inode->i_mode));
5229 +- ASSERT(inode->i_mapping->a_ops->readpages);
5230 +
5231 + /* calculate the shift required to use bmap */
5232 + shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;
5233 +@@ -713,7 +712,6 @@ int cachefiles_read_or_alloc_pages(struct fscache_retrieval *op,
5234 +
5235 + inode = d_backing_inode(object->backer);
5236 + ASSERT(S_ISREG(inode->i_mode));
5237 +- ASSERT(inode->i_mapping->a_ops->readpages);
5238 +
5239 + /* calculate the shift required to use bmap */
5240 + shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;
5241 +diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
5242 +index 36b2ece434037..b1c2f416b9bd9 100644
5243 +--- a/fs/cifs/transport.c
5244 ++++ b/fs/cifs/transport.c
5245 +@@ -338,7 +338,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
5246 + if (ssocket == NULL)
5247 + return -EAGAIN;
5248 +
5249 +- if (signal_pending(current)) {
5250 ++ if (fatal_signal_pending(current)) {
5251 + cifs_dbg(FYI, "signal pending before send request\n");
5252 + return -ERESTARTSYS;
5253 + }
5254 +@@ -429,7 +429,7 @@ unmask:
5255 +
5256 + if (signal_pending(current) && (total_len != send_length)) {
5257 + cifs_dbg(FYI, "signal is pending after attempt to send\n");
5258 +- rc = -EINTR;
5259 ++ rc = -ERESTARTSYS;
5260 + }
5261 +
5262 + /* uncork it */
5263 +diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
5264 +index e6005c78bfa93..90dddb507e4af 100644
5265 +--- a/fs/fs-writeback.c
5266 ++++ b/fs/fs-writeback.c
5267 +@@ -1474,21 +1474,25 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
5268 + }
5269 +
5270 + /*
5271 +- * Some filesystems may redirty the inode during the writeback
5272 +- * due to delalloc, clear dirty metadata flags right before
5273 +- * write_inode()
5274 ++ * If the inode has dirty timestamps and we need to write them, call
5275 ++ * mark_inode_dirty_sync() to notify the filesystem about it and to
5276 ++ * change I_DIRTY_TIME into I_DIRTY_SYNC.
5277 + */
5278 +- spin_lock(&inode->i_lock);
5279 +-
5280 +- dirty = inode->i_state & I_DIRTY;
5281 + if ((inode->i_state & I_DIRTY_TIME) &&
5282 +- ((dirty & I_DIRTY_INODE) ||
5283 +- wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync ||
5284 ++ (wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync ||
5285 + time_after(jiffies, inode->dirtied_time_when +
5286 + dirtytime_expire_interval * HZ))) {
5287 +- dirty |= I_DIRTY_TIME;
5288 + trace_writeback_lazytime(inode);
5289 ++ mark_inode_dirty_sync(inode);
5290 + }
5291 ++
5292 ++ /*
5293 ++ * Some filesystems may redirty the inode during the writeback
5294 ++ * due to delalloc, clear dirty metadata flags right before
5295 ++ * write_inode()
5296 ++ */
5297 ++ spin_lock(&inode->i_lock);
5298 ++ dirty = inode->i_state & I_DIRTY;
5299 + inode->i_state &= ~dirty;
5300 +
5301 + /*
5302 +@@ -1509,8 +1513,6 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
5303 +
5304 + spin_unlock(&inode->i_lock);
5305 +
5306 +- if (dirty & I_DIRTY_TIME)
5307 +- mark_inode_dirty_sync(inode);
5308 + /* Don't write the inode if only I_DIRTY_PAGES was set */
5309 + if (dirty & ~I_DIRTY_PAGES) {
5310 + int err = write_inode(inode, wbc);
5311 +diff --git a/fs/io_uring.c b/fs/io_uring.c
5312 +index 265aea2cd7bc8..8cb0db187d90f 100644
5313 +--- a/fs/io_uring.c
5314 ++++ b/fs/io_uring.c
5315 +@@ -353,6 +353,7 @@ struct io_ring_ctx {
5316 + unsigned cq_entries;
5317 + unsigned cq_mask;
5318 + atomic_t cq_timeouts;
5319 ++ unsigned cq_last_tm_flush;
5320 + unsigned long cq_check_overflow;
5321 + struct wait_queue_head cq_wait;
5322 + struct fasync_struct *cq_fasync;
5323 +@@ -1521,19 +1522,38 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
5324 +
5325 + static void io_flush_timeouts(struct io_ring_ctx *ctx)
5326 + {
5327 +- while (!list_empty(&ctx->timeout_list)) {
5328 ++ u32 seq;
5329 ++
5330 ++ if (list_empty(&ctx->timeout_list))
5331 ++ return;
5332 ++
5333 ++ seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
5334 ++
5335 ++ do {
5336 ++ u32 events_needed, events_got;
5337 + struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
5338 + struct io_kiocb, timeout.list);
5339 +
5340 + if (io_is_timeout_noseq(req))
5341 + break;
5342 +- if (req->timeout.target_seq != ctx->cached_cq_tail
5343 +- - atomic_read(&ctx->cq_timeouts))
5344 ++
5345 ++ /*
5346 ++ * Since seq can easily wrap around over time, subtract
5347 ++ * the last seq at which timeouts were flushed before comparing.
5348 ++ * Assuming not more than 2^31-1 events have happened since,
5349 ++ * these subtractions won't have wrapped, so we can check if
5350 ++ * target is in [last_seq, current_seq] by comparing the two.
5351 ++ */
5352 ++ events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
5353 ++ events_got = seq - ctx->cq_last_tm_flush;
5354 ++ if (events_got < events_needed)
5355 + break;
5356 +
5357 + list_del_init(&req->timeout.list);
5358 + io_kill_timeout(req);
5359 +- }
5360 ++ } while (!list_empty(&ctx->timeout_list));
5361 ++
5362 ++ ctx->cq_last_tm_flush = seq;
5363 + }
5364 +
5365 + static void io_commit_cqring(struct io_ring_ctx *ctx)
5366 +@@ -2147,6 +2167,8 @@ static void io_req_free_batch_finish(struct io_ring_ctx *ctx,
5367 + struct io_uring_task *tctx = rb->task->io_uring;
5368 +
5369 + percpu_counter_sub(&tctx->inflight, rb->task_refs);
5370 ++ if (atomic_read(&tctx->in_idle))
5371 ++ wake_up(&tctx->wait);
5372 + put_task_struct_many(rb->task, rb->task_refs);
5373 + rb->task = NULL;
5374 + }
5375 +@@ -2166,6 +2188,8 @@ static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req)
5376 + struct io_uring_task *tctx = rb->task->io_uring;
5377 +
5378 + percpu_counter_sub(&tctx->inflight, rb->task_refs);
5379 ++ if (atomic_read(&tctx->in_idle))
5380 ++ wake_up(&tctx->wait);
5381 + put_task_struct_many(rb->task, rb->task_refs);
5382 + }
5383 + rb->task = req->task;
5384 +@@ -3437,7 +3461,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
5385 +
5386 + /* read it all, or we did blocking attempt. no retry. */
5387 + if (!iov_iter_count(iter) || !force_nonblock ||
5388 +- (req->file->f_flags & O_NONBLOCK))
5389 ++ (req->file->f_flags & O_NONBLOCK) || !(req->flags & REQ_F_ISREG))
5390 + goto done;
5391 +
5392 + io_size -= ret;
5393 +@@ -4226,7 +4250,6 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
5394 + * io_wq_work.flags, so initialize io_wq_work firstly.
5395 + */
5396 + io_req_init_async(req);
5397 +- req->work.flags |= IO_WQ_WORK_NO_CANCEL;
5398 +
5399 + if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
5400 + return -EINVAL;
5401 +@@ -4259,6 +4282,8 @@ static int io_close(struct io_kiocb *req, bool force_nonblock,
5402 +
5403 + /* if the file has a flush method, be safe and punt to async */
5404 + if (close->put_file->f_op->flush && force_nonblock) {
5405 ++ /* not safe to cancel at this point */
5406 ++ req->work.flags |= IO_WQ_WORK_NO_CANCEL;
5407 + /* was never set, but play safe */
5408 + req->flags &= ~REQ_F_NOWAIT;
5409 + /* avoid grabbing files - we don't need the files */
5410 +@@ -5582,6 +5607,12 @@ static int io_timeout(struct io_kiocb *req)
5411 + tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
5412 + req->timeout.target_seq = tail + off;
5413 +
5414 ++ /* Update the last seq here in case io_flush_timeouts() hasn't.
5415 ++ * This is safe because ->completion_lock is held, and submissions
5416 ++ * and completions are never mixed in the same ->completion_lock section.
5417 ++ */
5418 ++ ctx->cq_last_tm_flush = tail;
5419 ++
5420 + /*
5421 + * Insertion sort, ensuring the first entry in the list is always
5422 + * the one we need first.
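
The io_uring timeout fix compares 32-bit sequence numbers in a wraparound-safe way: both the target and the current tail are measured as unsigned distances from the last flush point, which stays correct across a u32 wrap as long as fewer than 2^31 events intervene. The test in isolation:

    #include <stdbool.h>
    #include <stdint.h>

    /* True when target lies in (last_flush, cur], even if the counters
     * have wrapped around zero in between. */
    static bool timeout_expired(uint32_t target, uint32_t last_flush,
                                uint32_t cur)
    {
        uint32_t events_needed = target - last_flush;
        uint32_t events_got = cur - last_flush;

        return events_got >= events_needed;
    }
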
5423 +diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
5424 +index f277d023ebcd1..c757193121475 100644
5425 +--- a/fs/kernfs/file.c
5426 ++++ b/fs/kernfs/file.c
5427 +@@ -14,6 +14,7 @@
5428 + #include <linux/pagemap.h>
5429 + #include <linux/sched/mm.h>
5430 + #include <linux/fsnotify.h>
5431 ++#include <linux/uio.h>
5432 +
5433 + #include "kernfs-internal.h"
5434 +
5435 +@@ -180,11 +181,10 @@ static const struct seq_operations kernfs_seq_ops = {
5436 + * it difficult to use seq_file. Implement simplistic custom buffering for
5437 + * bin files.
5438 + */
5439 +-static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
5440 +- char __user *user_buf, size_t count,
5441 +- loff_t *ppos)
5442 ++static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
5443 + {
5444 +- ssize_t len = min_t(size_t, count, PAGE_SIZE);
5445 ++ struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
5446 ++ ssize_t len = min_t(size_t, iov_iter_count(iter), PAGE_SIZE);
5447 + const struct kernfs_ops *ops;
5448 + char *buf;
5449 +
5450 +@@ -210,7 +210,7 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
5451 + of->event = atomic_read(&of->kn->attr.open->event);
5452 + ops = kernfs_ops(of->kn);
5453 + if (ops->read)
5454 +- len = ops->read(of, buf, len, *ppos);
5455 ++ len = ops->read(of, buf, len, iocb->ki_pos);
5456 + else
5457 + len = -EINVAL;
5458 +
5459 +@@ -220,12 +220,12 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
5460 + if (len < 0)
5461 + goto out_free;
5462 +
5463 +- if (copy_to_user(user_buf, buf, len)) {
5464 ++ if (copy_to_iter(buf, len, iter) != len) {
5465 + len = -EFAULT;
5466 + goto out_free;
5467 + }
5468 +
5469 +- *ppos += len;
5470 ++ iocb->ki_pos += len;
5471 +
5472 + out_free:
5473 + if (buf == of->prealloc_buf)
5474 +@@ -235,31 +235,14 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
5475 + return len;
5476 + }
5477 +
5478 +-/**
5479 +- * kernfs_fop_read - kernfs vfs read callback
5480 +- * @file: file pointer
5481 +- * @user_buf: data to write
5482 +- * @count: number of bytes
5483 +- * @ppos: starting offset
5484 +- */
5485 +-static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf,
5486 +- size_t count, loff_t *ppos)
5487 ++static ssize_t kernfs_fop_read_iter(struct kiocb *iocb, struct iov_iter *iter)
5488 + {
5489 +- struct kernfs_open_file *of = kernfs_of(file);
5490 +-
5491 +- if (of->kn->flags & KERNFS_HAS_SEQ_SHOW)
5492 +- return seq_read(file, user_buf, count, ppos);
5493 +- else
5494 +- return kernfs_file_direct_read(of, user_buf, count, ppos);
5495 ++ if (kernfs_of(iocb->ki_filp)->kn->flags & KERNFS_HAS_SEQ_SHOW)
5496 ++ return seq_read_iter(iocb, iter);
5497 ++ return kernfs_file_read_iter(iocb, iter);
5498 + }
5499 +
5500 +-/**
5501 +- * kernfs_fop_write - kernfs vfs write callback
5502 +- * @file: file pointer
5503 +- * @user_buf: data to write
5504 +- * @count: number of bytes
5505 +- * @ppos: starting offset
5506 +- *
5507 ++/*
5508 + * Copy data in from userland and pass it to the matching kernfs write
5509 + * operation.
5510 + *
5511 +@@ -269,20 +252,18 @@ static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf,
5512 + * modify only the the value you're changing, then write entire buffer
5513 + * back.
5514 + */
5515 +-static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
5516 +- size_t count, loff_t *ppos)
5517 ++static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
5518 + {
5519 +- struct kernfs_open_file *of = kernfs_of(file);
5520 ++ struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
5521 ++ ssize_t len = iov_iter_count(iter);
5522 + const struct kernfs_ops *ops;
5523 +- ssize_t len;
5524 + char *buf;
5525 +
5526 + if (of->atomic_write_len) {
5527 +- len = count;
5528 + if (len > of->atomic_write_len)
5529 + return -E2BIG;
5530 + } else {
5531 +- len = min_t(size_t, count, PAGE_SIZE);
5532 ++ len = min_t(size_t, len, PAGE_SIZE);
5533 + }
5534 +
5535 + buf = of->prealloc_buf;
5536 +@@ -293,7 +274,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
5537 + if (!buf)
5538 + return -ENOMEM;
5539 +
5540 +- if (copy_from_user(buf, user_buf, len)) {
5541 ++ if (copy_from_iter(buf, len, iter) != len) {
5542 + len = -EFAULT;
5543 + goto out_free;
5544 + }
5545 +@@ -312,7 +293,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
5546 +
5547 + ops = kernfs_ops(of->kn);
5548 + if (ops->write)
5549 +- len = ops->write(of, buf, len, *ppos);
5550 ++ len = ops->write(of, buf, len, iocb->ki_pos);
5551 + else
5552 + len = -EINVAL;
5553 +
5554 +@@ -320,7 +301,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
5555 + mutex_unlock(&of->mutex);
5556 +
5557 + if (len > 0)
5558 +- *ppos += len;
5559 ++ iocb->ki_pos += len;
5560 +
5561 + out_free:
5562 + if (buf == of->prealloc_buf)
5563 +@@ -673,7 +654,7 @@ static int kernfs_fop_open(struct inode *inode, struct file *file)
5564 +
5565 + /*
5566 + * Write path needs to atomic_write_len outside active reference.
5567 +- * Cache it in open_file. See kernfs_fop_write() for details.
5568 ++ * Cache it in open_file. See kernfs_fop_write_iter() for details.
5569 + */
5570 + of->atomic_write_len = ops->atomic_write_len;
5571 +
5572 +@@ -960,14 +941,16 @@ void kernfs_notify(struct kernfs_node *kn)
5573 + EXPORT_SYMBOL_GPL(kernfs_notify);
5574 +
5575 + const struct file_operations kernfs_file_fops = {
5576 +- .read = kernfs_fop_read,
5577 +- .write = kernfs_fop_write,
5578 ++ .read_iter = kernfs_fop_read_iter,
5579 ++ .write_iter = kernfs_fop_write_iter,
5580 + .llseek = generic_file_llseek,
5581 + .mmap = kernfs_fop_mmap,
5582 + .open = kernfs_fop_open,
5583 + .release = kernfs_fop_release,
5584 + .poll = kernfs_fop_poll,
5585 + .fsync = noop_fsync,
5586 ++ .splice_read = generic_file_splice_read,
5587 ++ .splice_write = iter_file_splice_write,
5588 + };
5589 +
5590 + /**
5591 +diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
5592 +index 833a2c64dfe80..5f5169b9c2e90 100644
5593 +--- a/fs/nfsd/nfs4xdr.c
5594 ++++ b/fs/nfsd/nfs4xdr.c
5595 +@@ -4632,6 +4632,7 @@ nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
5596 + resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof);
5597 + if (nfserr)
5598 + return nfserr;
5599 ++ xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount));
5600 +
5601 + tmp = htonl(NFS4_CONTENT_DATA);
5602 + write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4);
5603 +@@ -4639,6 +4640,10 @@ nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
5604 + write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp64, 8);
5605 + tmp = htonl(*maxcount);
5606 + write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp, 4);
5607 ++
5608 ++ tmp = xdr_zero;
5609 ++ write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp,
5610 ++ xdr_pad_size(*maxcount));
5611 + return nfs_ok;
5612 + }
5613 +
5614 +@@ -4731,14 +4736,15 @@ out:
5615 + if (nfserr && segments == 0)
5616 + xdr_truncate_encode(xdr, starting_len);
5617 + else {
5618 +- tmp = htonl(eof);
5619 +- write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4);
5620 +- tmp = htonl(segments);
5621 +- write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
5622 + if (nfserr) {
5623 + xdr_truncate_encode(xdr, last_segment);
5624 + nfserr = nfs_ok;
5625 ++ eof = 0;
5626 + }
5627 ++ tmp = htonl(eof);
5628 ++ write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4);
5629 ++ tmp = htonl(segments);
5630 ++ write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
5631 + }
5632 +
5633 + return nfserr;
5634 +diff --git a/fs/pipe.c b/fs/pipe.c
5635 +index 0ac197658a2d6..412b3b618994c 100644
5636 +--- a/fs/pipe.c
5637 ++++ b/fs/pipe.c
5638 +@@ -1206,6 +1206,7 @@ const struct file_operations pipefifo_fops = {
5639 + .unlocked_ioctl = pipe_ioctl,
5640 + .release = pipe_release,
5641 + .fasync = pipe_fasync,
5642 ++ .splice_write = iter_file_splice_write,
5643 + };
5644 +
5645 + /*
5646 +diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
5647 +index 317899222d7fd..d2018f70d1fae 100644
5648 +--- a/fs/proc/proc_sysctl.c
5649 ++++ b/fs/proc/proc_sysctl.c
5650 +@@ -1770,6 +1770,12 @@ static int process_sysctl_arg(char *param, char *val,
5651 + return 0;
5652 + }
5653 +
5654 ++ if (!val)
5655 ++ return -EINVAL;
5656 ++ len = strlen(val);
5657 ++ if (len == 0)
5658 ++ return -EINVAL;
5659 ++
5660 + /*
5661 + * To set sysctl options, we use a temporary mount of proc, look up the
5662 + * respective sys/ file and write to it. To avoid mounting it when no
5663 +@@ -1811,7 +1817,6 @@ static int process_sysctl_arg(char *param, char *val,
5664 + file, param, val);
5665 + goto out;
5666 + }
5667 +- len = strlen(val);
5668 + wret = kernel_write(file, val, len, &pos);
5669 + if (wret < 0) {
5670 + err = wret;
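
The proc_sysctl hunk validates the boot-line value before anything uses it: a "sysctl.param" argument with no value could previously reach strlen(NULL). A sketch of the added guard:

    #include <errno.h>
    #include <stddef.h>
    #include <string.h>

    /* Reject a missing or empty value before any strlen() or write. */
    static int check_sysctl_val(const char *val, size_t *len)
    {
        if (!val)
            return -EINVAL;
        *len = strlen(val);
        if (*len == 0)
            return -EINVAL;
        return 0;
    }
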
5671 +diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
5672 +index dd90c9792909d..0e7316a86240b 100644
5673 +--- a/include/asm-generic/bitops/atomic.h
5674 ++++ b/include/asm-generic/bitops/atomic.h
5675 +@@ -11,19 +11,19 @@
5676 + * See Documentation/atomic_bitops.txt for details.
5677 + */
5678 +
5679 +-static inline void set_bit(unsigned int nr, volatile unsigned long *p)
5680 ++static __always_inline void set_bit(unsigned int nr, volatile unsigned long *p)
5681 + {
5682 + p += BIT_WORD(nr);
5683 + atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
5684 + }
5685 +
5686 +-static inline void clear_bit(unsigned int nr, volatile unsigned long *p)
5687 ++static __always_inline void clear_bit(unsigned int nr, volatile unsigned long *p)
5688 + {
5689 + p += BIT_WORD(nr);
5690 + atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
5691 + }
5692 +
5693 +-static inline void change_bit(unsigned int nr, volatile unsigned long *p)
5694 ++static __always_inline void change_bit(unsigned int nr, volatile unsigned long *p)
5695 + {
5696 + p += BIT_WORD(nr);
5697 + atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
5698 +diff --git a/include/linux/device.h b/include/linux/device.h
5699 +index 5ed101be7b2e7..2b39de35525a9 100644
5700 +--- a/include/linux/device.h
5701 ++++ b/include/linux/device.h
5702 +@@ -615,6 +615,18 @@ static inline const char *dev_name(const struct device *dev)
5703 + return kobject_name(&dev->kobj);
5704 + }
5705 +
5706 ++/**
5707 ++ * dev_bus_name - Return a device's bus/class name, if at all possible
5708 ++ * @dev: struct device to get the bus/class name of
5709 ++ *
5710 ++ * Will return the name of the bus/class the device is attached to. If it is
5711 ++ * not attached to a bus/class, an empty string will be returned.
5712 ++ */
5713 ++static inline const char *dev_bus_name(const struct device *dev)
5714 ++{
5715 ++ return dev->bus ? dev->bus->name : (dev->class ? dev->class->name : "");
5716 ++}
5717 ++
5718 + __printf(2, 3) int dev_set_name(struct device *dev, const char *name, ...);
5719 +
5720 + #ifdef CONFIG_NUMA
5721 +diff --git a/include/linux/tty.h b/include/linux/tty.h
5722 +index eb33d948788cc..bc8caac390fce 100644
5723 +--- a/include/linux/tty.h
5724 ++++ b/include/linux/tty.h
5725 +@@ -422,6 +422,7 @@ extern void tty_kclose(struct tty_struct *tty);
5726 + extern int tty_dev_name_to_number(const char *name, dev_t *number);
5727 + extern int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
5728 + extern void tty_ldisc_unlock(struct tty_struct *tty);
5729 ++extern ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *);
5730 + #else
5731 + static inline void tty_kref_put(struct tty_struct *tty)
5732 + { }
5733 +diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
5734 +index 7338b3865a2a3..111d7771b2081 100644
5735 +--- a/include/net/inet_connection_sock.h
5736 ++++ b/include/net/inet_connection_sock.h
5737 +@@ -76,6 +76,8 @@ struct inet_connection_sock_af_ops {
5738 + * @icsk_ext_hdr_len: Network protocol overhead (IP/IPv6 options)
5739 + * @icsk_ack: Delayed ACK control data
5740 + * @icsk_mtup; MTU probing control data
5741 ++ * @icsk_probes_tstamp: Probe timestamp (cleared by non-zero window ack)
5742 ++ * @icsk_user_timeout: TCP_USER_TIMEOUT value
5743 + */
5744 + struct inet_connection_sock {
5745 + /* inet_sock has to be the first member! */
5746 +@@ -129,6 +131,7 @@ struct inet_connection_sock {
5747 +
5748 + u32 probe_timestamp;
5749 + } icsk_mtup;
5750 ++ u32 icsk_probes_tstamp;
5751 + u32 icsk_user_timeout;
5752 +
5753 + u64 icsk_ca_priv[104 / sizeof(u64)];
5754 +diff --git a/include/net/sock.h b/include/net/sock.h
5755 +index a5c6ae78df77d..253202dcc5e61 100644
5756 +--- a/include/net/sock.h
5757 ++++ b/include/net/sock.h
5758 +@@ -1903,10 +1903,13 @@ static inline void sk_set_txhash(struct sock *sk)
5759 + sk->sk_txhash = net_tx_rndhash();
5760 + }
5761 +
5762 +-static inline void sk_rethink_txhash(struct sock *sk)
5763 ++static inline bool sk_rethink_txhash(struct sock *sk)
5764 + {
5765 +- if (sk->sk_txhash)
5766 ++ if (sk->sk_txhash) {
5767 + sk_set_txhash(sk);
5768 ++ return true;
5769 ++ }
5770 ++ return false;
5771 + }
5772 +
5773 + static inline struct dst_entry *
5774 +@@ -1929,12 +1932,10 @@ sk_dst_get(struct sock *sk)
5775 + return dst;
5776 + }
5777 +
5778 +-static inline void dst_negative_advice(struct sock *sk)
5779 ++static inline void __dst_negative_advice(struct sock *sk)
5780 + {
5781 + struct dst_entry *ndst, *dst = __sk_dst_get(sk);
5782 +
5783 +- sk_rethink_txhash(sk);
5784 +-
5785 + if (dst && dst->ops->negative_advice) {
5786 + ndst = dst->ops->negative_advice(dst);
5787 +
5788 +@@ -1946,6 +1947,12 @@ static inline void dst_negative_advice(struct sock *sk)
5789 + }
5790 + }
5791 +
5792 ++static inline void dst_negative_advice(struct sock *sk)
5793 ++{
5794 ++ sk_rethink_txhash(sk);
5795 ++ __dst_negative_advice(sk);
5796 ++}
5797 ++
5798 + static inline void
5799 + __sk_dst_set(struct sock *sk, struct dst_entry *dst)
5800 + {
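Two related changes above: sk_rethink_txhash() now reports whether a rehash actually happened, and the old dst_negative_advice() is split into __dst_negative_advice() (route advice only) plus a wrapper that still rehashes first. A sketch of the caller pattern the bool return enables, mirroring the tcp_timer.c hunk later in this patch:

	if (sk_rethink_txhash(sk)) {
		/* count only rehashes that really occurred */
		tp->timeout_rehash++;
		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPTIMEOUTREHASH);
	}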
5801 +diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
5802 +index 00c7235ae93e7..2c43b0ef1e4d5 100644
5803 +--- a/include/xen/xenbus.h
5804 ++++ b/include/xen/xenbus.h
5805 +@@ -192,7 +192,7 @@ void xs_suspend_cancel(void);
5806 +
5807 + struct work_struct;
5808 +
5809 +-void xenbus_probe(struct work_struct *);
5810 ++void xenbus_probe(void);
5811 +
5812 + #define XENBUS_IS_ERR_READ(str) ({ \
5813 + if (!IS_ERR(str) && strlen(str) == 0) { \
5814 +diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
5815 +index 6edff97ad594b..dbc1dbdd2cbf0 100644
5816 +--- a/kernel/bpf/bpf_inode_storage.c
5817 ++++ b/kernel/bpf/bpf_inode_storage.c
5818 +@@ -176,7 +176,7 @@ BPF_CALL_4(bpf_inode_storage_get, struct bpf_map *, map, struct inode *, inode,
5819 + * bpf_local_storage_update expects the owner to have a
5820 + * valid storage pointer.
5821 + */
5822 +- if (!inode_storage_ptr(inode))
5823 ++ if (!inode || !inode_storage_ptr(inode))
5824 + return (unsigned long)NULL;
5825 +
5826 + sdata = inode_storage_lookup(inode, map, true);
5827 +@@ -200,6 +200,9 @@ BPF_CALL_4(bpf_inode_storage_get, struct bpf_map *, map, struct inode *, inode,
5828 + BPF_CALL_2(bpf_inode_storage_delete,
5829 + struct bpf_map *, map, struct inode *, inode)
5830 + {
5831 ++ if (!inode)
5832 ++ return -EINVAL;
5833 ++
5834 + /* This helper must only be called from where the inode is guaranteed
5835 + * to have a refcount and cannot be freed.
5836 + */
5837 +diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
5838 +index 8f50c9c19f1b0..9433ab9995cd7 100644
5839 +--- a/kernel/bpf/syscall.c
5840 ++++ b/kernel/bpf/syscall.c
5841 +@@ -2717,7 +2717,6 @@ out_unlock:
5842 + out_put_prog:
5843 + if (tgt_prog_fd && tgt_prog)
5844 + bpf_prog_put(tgt_prog);
5845 +- bpf_prog_put(prog);
5846 + return err;
5847 + }
5848 +
5849 +@@ -2830,7 +2829,10 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
5850 + tp_name = prog->aux->attach_func_name;
5851 + break;
5852 + }
5853 +- return bpf_tracing_prog_attach(prog, 0, 0);
5854 ++ err = bpf_tracing_prog_attach(prog, 0, 0);
5855 ++ if (err >= 0)
5856 ++ return err;
5857 ++ goto out_put_prog;
5858 + case BPF_PROG_TYPE_RAW_TRACEPOINT:
5859 + case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE:
5860 + if (strncpy_from_user(buf,
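The two bpf/syscall.c hunks above fix a reference-count bug by moving ownership of the prog reference to the caller: bpf_tracing_prog_attach() no longer puts prog on its shared exit path, and bpf_raw_tracepoint_open() now puts it itself when the attach fails. The resulting ownership rule, annotated (comments are editorial, not from the patch):

	err = bpf_tracing_prog_attach(prog, 0, 0);
	if (err >= 0)
		return err;	/* success: the returned fd now owns the prog ref */
	goto out_put_prog;	/* failure: this caller still owns it and must put it */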
5861 +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
5862 +index c1418b47f625a..02bc5b8f1eb27 100644
5863 +--- a/kernel/locking/lockdep.c
5864 ++++ b/kernel/locking/lockdep.c
5865 +@@ -79,7 +79,7 @@ module_param(lock_stat, int, 0644);
5866 + DEFINE_PER_CPU(unsigned int, lockdep_recursion);
5867 + EXPORT_PER_CPU_SYMBOL_GPL(lockdep_recursion);
5868 +
5869 +-static inline bool lockdep_enabled(void)
5870 ++static __always_inline bool lockdep_enabled(void)
5871 + {
5872 + if (!debug_locks)
5873 + return false;
5874 +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
5875 +index bc1e3b5a97bdd..801f8bc52b34f 100644
5876 +--- a/kernel/printk/printk.c
5877 ++++ b/kernel/printk/printk.c
5878 +@@ -3376,7 +3376,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog,
5879 + while (prb_read_valid_info(prb, seq, &info, &line_count)) {
5880 + if (r.info->seq >= dumper->next_seq)
5881 + break;
5882 +- l += get_record_print_text_size(&info, line_count, true, time);
5883 ++ l += get_record_print_text_size(&info, line_count, syslog, time);
5884 + seq = r.info->seq + 1;
5885 + }
5886 +
5887 +@@ -3386,7 +3386,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog,
5888 + &info, &line_count)) {
5889 + if (r.info->seq >= dumper->next_seq)
5890 + break;
5891 +- l -= get_record_print_text_size(&info, line_count, true, time);
5892 ++ l -= get_record_print_text_size(&info, line_count, syslog, time);
5893 + seq = r.info->seq + 1;
5894 + }
5895 +
5896 +diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
5897 +index 74e25a1704f2b..617dd63589650 100644
5898 +--- a/kernel/printk/printk_ringbuffer.c
5899 ++++ b/kernel/printk/printk_ringbuffer.c
5900 +@@ -1720,7 +1720,7 @@ static bool copy_data(struct prb_data_ring *data_ring,
5901 +
5902 + /* Caller interested in the line count? */
5903 + if (line_count)
5904 +- *line_count = count_lines(data, data_size);
5905 ++ *line_count = count_lines(data, len);
5906 +
5907 + /* Caller interested in the data content? */
5908 + if (!buf || !buf_size)
5909 +diff --git a/lib/iov_iter.c b/lib/iov_iter.c
5910 +index 1635111c5bd2a..a21e6a5792c5a 100644
5911 +--- a/lib/iov_iter.c
5912 ++++ b/lib/iov_iter.c
5913 +@@ -1658,7 +1658,7 @@ static int copy_compat_iovec_from_user(struct iovec *iov,
5914 + (const struct compat_iovec __user *)uvec;
5915 + int ret = -EFAULT, i;
5916 +
5917 +- if (!user_access_begin(uvec, nr_segs * sizeof(*uvec)))
5918 ++ if (!user_access_begin(uiov, nr_segs * sizeof(*uiov)))
5919 + return -EFAULT;
5920 +
5921 + for (i = 0; i < nr_segs; i++) {
5922 +diff --git a/mm/kasan/init.c b/mm/kasan/init.c
5923 +index fe6be0be1f763..b8c6ec172bb22 100644
5924 +--- a/mm/kasan/init.c
5925 ++++ b/mm/kasan/init.c
5926 +@@ -377,9 +377,10 @@ static void kasan_remove_pmd_table(pmd_t *pmd, unsigned long addr,
5927 +
5928 + if (kasan_pte_table(*pmd)) {
5929 + if (IS_ALIGNED(addr, PMD_SIZE) &&
5930 +- IS_ALIGNED(next, PMD_SIZE))
5931 ++ IS_ALIGNED(next, PMD_SIZE)) {
5932 + pmd_clear(pmd);
5933 +- continue;
5934 ++ continue;
5935 ++ }
5936 + }
5937 + pte = pte_offset_kernel(pmd, addr);
5938 + kasan_remove_pte_table(pte, addr, next);
5939 +@@ -402,9 +403,10 @@ static void kasan_remove_pud_table(pud_t *pud, unsigned long addr,
5940 +
5941 + if (kasan_pmd_table(*pud)) {
5942 + if (IS_ALIGNED(addr, PUD_SIZE) &&
5943 +- IS_ALIGNED(next, PUD_SIZE))
5944 ++ IS_ALIGNED(next, PUD_SIZE)) {
5945 + pud_clear(pud);
5946 +- continue;
5947 ++ continue;
5948 ++ }
5949 + }
5950 + pmd = pmd_offset(pud, addr);
5951 + pmd_base = pmd_offset(pud, 0);
5952 +@@ -428,9 +430,10 @@ static void kasan_remove_p4d_table(p4d_t *p4d, unsigned long addr,
5953 +
5954 + if (kasan_pud_table(*p4d)) {
5955 + if (IS_ALIGNED(addr, P4D_SIZE) &&
5956 +- IS_ALIGNED(next, P4D_SIZE))
5957 ++ IS_ALIGNED(next, P4D_SIZE)) {
5958 + p4d_clear(p4d);
5959 +- continue;
5960 ++ continue;
5961 ++ }
5962 + }
5963 + pud = pud_offset(p4d, addr);
5964 + kasan_remove_pud_table(pud, addr, next);
5965 +@@ -462,9 +465,10 @@ void kasan_remove_zero_shadow(void *start, unsigned long size)
5966 +
5967 + if (kasan_p4d_table(*pgd)) {
5968 + if (IS_ALIGNED(addr, PGDIR_SIZE) &&
5969 +- IS_ALIGNED(next, PGDIR_SIZE))
5970 ++ IS_ALIGNED(next, PGDIR_SIZE)) {
5971 + pgd_clear(pgd);
5972 +- continue;
5973 ++ continue;
5974 ++ }
5975 + }
5976 +
5977 + p4d = p4d_offset(pgd, addr);
5978 +@@ -488,7 +492,6 @@ int kasan_add_zero_shadow(void *start, unsigned long size)
5979 +
5980 + ret = kasan_populate_early_shadow(shadow_start, shadow_end);
5981 + if (ret)
5982 +- kasan_remove_zero_shadow(shadow_start,
5983 +- size >> KASAN_SHADOW_SCALE_SHIFT);
5984 ++ kasan_remove_zero_shadow(start, size);
5985 + return ret;
5986 + }
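All four kasan table-removal hunks above fix the same misleading-indentation bug: the continue sat inside the outer table check but outside the alignment test, so whenever a lower-level table existed the loop skipped ahead without either clearing it or walking into it. A self-contained illustration of the pattern (names invented for the example):

	#include <stdio.h>

	int main(void)
	{
		for (int i = 0; i < 4; i++) {
			if (i % 2 == 0) {	/* outer test, like kasan_pte_table() */
				if (i == 2)	/* inner test, like IS_ALIGNED() */
					printf("clear %d\n", i);
				continue;	/* BUG: also runs for i == 0, skipping the walk */
			}
			printf("walk %d\n", i);
		}
		return 0;
	}

The kasan_add_zero_shadow() hunk is a separate fix: on failure the cleanup must undo the original (start, size) range, not the already-scaled shadow range.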
5987 +diff --git a/mm/memcontrol.c b/mm/memcontrol.c
5988 +index a717728cc7b4a..8fc23d53f5500 100644
5989 +--- a/mm/memcontrol.c
5990 ++++ b/mm/memcontrol.c
5991 +@@ -3083,9 +3083,7 @@ void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
5992 + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
5993 + page_counter_uncharge(&memcg->kmem, nr_pages);
5994 +
5995 +- page_counter_uncharge(&memcg->memory, nr_pages);
5996 +- if (do_memsw_account())
5997 +- page_counter_uncharge(&memcg->memsw, nr_pages);
5998 ++ refill_stock(memcg, nr_pages);
5999 + }
6000 +
6001 + /**
6002 +diff --git a/mm/migrate.c b/mm/migrate.c
6003 +index 8ea0c65f10756..9d7ca1bd7f4b3 100644
6004 +--- a/mm/migrate.c
6005 ++++ b/mm/migrate.c
6006 +@@ -406,6 +406,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
6007 + struct zone *oldzone, *newzone;
6008 + int dirty;
6009 + int expected_count = expected_page_refs(mapping, page) + extra_count;
6010 ++ int nr = thp_nr_pages(page);
6011 +
6012 + if (!mapping) {
6013 + /* Anonymous page without mapping */
6014 +@@ -441,7 +442,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
6015 + */
6016 + newpage->index = page->index;
6017 + newpage->mapping = page->mapping;
6018 +- page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
6019 ++ page_ref_add(newpage, nr); /* add cache reference */
6020 + if (PageSwapBacked(page)) {
6021 + __SetPageSwapBacked(newpage);
6022 + if (PageSwapCache(page)) {
6023 +@@ -463,7 +464,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
6024 + if (PageTransHuge(page)) {
6025 + int i;
6026 +
6027 +- for (i = 1; i < HPAGE_PMD_NR; i++) {
6028 ++ for (i = 1; i < nr; i++) {
6029 + xas_next(&xas);
6030 + xas_store(&xas, newpage);
6031 + }
6032 +@@ -474,7 +475,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
6033 + * to one less reference.
6034 + * We know this isn't the last reference.
6035 + */
6036 +- page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
6037 ++ page_ref_unfreeze(page, expected_count - nr);
6038 +
6039 + xas_unlock(&xas);
6040 + /* Leave irq disabled to prevent preemption while updating stats */
6041 +@@ -497,17 +498,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
6042 + old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
6043 + new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
6044 +
6045 +- __dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
6046 +- __inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
6047 ++ __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
6048 ++ __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
6049 + if (PageSwapBacked(page) && !PageSwapCache(page)) {
6050 +- __dec_lruvec_state(old_lruvec, NR_SHMEM);
6051 +- __inc_lruvec_state(new_lruvec, NR_SHMEM);
6052 ++ __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
6053 ++ __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
6054 + }
6055 + if (dirty && mapping_can_writeback(mapping)) {
6056 +- __dec_node_state(oldzone->zone_pgdat, NR_FILE_DIRTY);
6057 +- __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
6058 +- __inc_node_state(newzone->zone_pgdat, NR_FILE_DIRTY);
6059 +- __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
6060 ++ __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
6061 ++ __mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);
6062 ++ __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
6063 ++ __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
6064 + }
6065 + }
6066 + local_irq_enable();
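The migrate.c hunk samples thp_nr_pages() once into nr at the top: after the xarray stores and page_ref_unfreeze() the old page's compound state no longer matches what was migrated, so recomputing the page count mid-function could corrupt the NR_FILE_PAGES/NR_SHMEM/dirty accounting (which now also moves by nr pages instead of 1). The snapshot idiom in miniature (illustrative, user-space):

	#include <stdio.h>

	struct fake_page { int nr_pages; };

	int main(void)
	{
		struct fake_page page = { .nr_pages = 512 };
		int nr = page.nr_pages;		/* snapshot before any state change */

		page.nr_pages = 1;		/* state mutates mid-operation */
		printf("account %d pages, not %d\n", nr, page.nr_pages);
		return 0;
	}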
6067 +diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
6068 +index c1c30a9f76f34..8b796c499cbb2 100644
6069 +--- a/net/bpf/test_run.c
6070 ++++ b/net/bpf/test_run.c
6071 +@@ -272,7 +272,8 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
6072 + kattr->test.repeat)
6073 + return -EINVAL;
6074 +
6075 +- if (ctx_size_in < prog->aux->max_ctx_offset)
6076 ++ if (ctx_size_in < prog->aux->max_ctx_offset ||
6077 ++ ctx_size_in > MAX_BPF_FUNC_ARGS * sizeof(u64))
6078 + return -EINVAL;
6079 +
6080 + if ((kattr->test.flags & BPF_F_TEST_RUN_ON_CPU) == 0 && cpu != 0)
6081 +diff --git a/net/core/dev.c b/net/core/dev.c
6082 +index 38412e70f7618..81e5d482c238e 100644
6083 +--- a/net/core/dev.c
6084 ++++ b/net/core/dev.c
6085 +@@ -9602,6 +9602,11 @@ static netdev_features_t netdev_fix_features(struct net_device *dev,
6086 + }
6087 + }
6088 +
6089 ++ if ((features & NETIF_F_HW_TLS_RX) && !(features & NETIF_F_RXCSUM)) {
6090 ++ netdev_dbg(dev, "Dropping TLS RX HW offload feature since no RXCSUM feature.\n");
6091 ++ features &= ~NETIF_F_HW_TLS_RX;
6092 ++ }
6093 ++
6094 + return features;
6095 + }
6096 +
6097 +diff --git a/net/core/devlink.c b/net/core/devlink.c
6098 +index 8c5ddffd707de..5d397838bceb6 100644
6099 +--- a/net/core/devlink.c
6100 ++++ b/net/core/devlink.c
6101 +@@ -4134,7 +4134,7 @@ out:
6102 + static int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb,
6103 + struct genl_info *info)
6104 + {
6105 +- struct devlink_port *devlink_port = info->user_ptr[0];
6106 ++ struct devlink_port *devlink_port = info->user_ptr[1];
6107 + struct devlink_param_item *param_item;
6108 + struct sk_buff *msg;
6109 + int err;
6110 +@@ -4163,7 +4163,7 @@ static int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb,
6111 + static int devlink_nl_cmd_port_param_set_doit(struct sk_buff *skb,
6112 + struct genl_info *info)
6113 + {
6114 +- struct devlink_port *devlink_port = info->user_ptr[0];
6115 ++ struct devlink_port *devlink_port = info->user_ptr[1];
6116 +
6117 + return __devlink_nl_cmd_param_set_doit(devlink_port->devlink,
6118 + devlink_port->index,
6119 +diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
6120 +index 80dbf2f4016e2..8e582e29a41e3 100644
6121 +--- a/net/core/gen_estimator.c
6122 ++++ b/net/core/gen_estimator.c
6123 +@@ -80,11 +80,11 @@ static void est_timer(struct timer_list *t)
6124 + u64 rate, brate;
6125 +
6126 + est_fetch_counters(est, &b);
6127 +- brate = (b.bytes - est->last_bytes) << (10 - est->ewma_log - est->intvl_log);
6128 +- brate -= (est->avbps >> est->ewma_log);
6129 ++ brate = (b.bytes - est->last_bytes) << (10 - est->intvl_log);
6130 ++ brate = (brate >> est->ewma_log) - (est->avbps >> est->ewma_log);
6131 +
6132 +- rate = (b.packets - est->last_packets) << (10 - est->ewma_log - est->intvl_log);
6133 +- rate -= (est->avpps >> est->ewma_log);
6134 ++ rate = (b.packets - est->last_packets) << (10 - est->intvl_log);
6135 ++ rate = (rate >> est->ewma_log) - (est->avpps >> est->ewma_log);
6136 +
6137 + write_seqcount_begin(&est->seq);
6138 + est->avbps += brate;
6139 +@@ -143,6 +143,9 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
6140 + if (parm->interval < -2 || parm->interval > 3)
6141 + return -EINVAL;
6142 +
6143 ++ if (parm->ewma_log == 0 || parm->ewma_log >= 31)
6144 ++ return -EINVAL;
6145 ++
6146 + est = kzalloc(sizeof(*est), GFP_KERNEL);
6147 + if (!est)
6148 + return -ENOBUFS;
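The est_timer() change reorders the EWMA update so the sample and the running average are each shifted by ewma_log separately, instead of folding ewma_log into the initial scaling shift, where ewma_log + intvl_log > 10 would have produced a negative shift count; the new gen_new_estimator() check correspondingly rejects ewma_log values of 0 or >= 31. A minimal user-space sketch of the corrected update step:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t avbps = 0;			/* scaled running average */
		const unsigned int ewma_log = 5;	/* weight 1/32 */
		const uint64_t samples[] = { 1000, 2000, 1500, 1800 };

		for (int i = 0; i < 4; i++) {
			uint64_t brate = samples[i] << 10;	/* scaling, as (10 - intvl_log) */

			/* avg += (sample - avg) / 2^ewma_log, without a fused shift */
			brate = (brate >> ewma_log) - (avbps >> ewma_log);
			avbps += brate;
			printf("avg ~ %llu\n", (unsigned long long)(avbps >> 10));
		}
		return 0;
	}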
6149 +diff --git a/net/core/skbuff.c b/net/core/skbuff.c
6150 +index f0d6dba37b43d..7ab56796bd3a9 100644
6151 +--- a/net/core/skbuff.c
6152 ++++ b/net/core/skbuff.c
6153 +@@ -432,7 +432,11 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
6154 +
6155 + len += NET_SKB_PAD;
6156 +
6157 +- if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
6158 ++ /* If requested length is either too small or too big,
6159 ++ * we use kmalloc() for skb->head allocation.
6160 ++ */
6161 ++ if (len <= SKB_WITH_OVERHEAD(1024) ||
6162 ++ len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
6163 + (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
6164 + skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
6165 + if (!skb)
6166 +diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
6167 +index f60869acbef02..48d2b615edc26 100644
6168 +--- a/net/ipv4/inet_connection_sock.c
6169 ++++ b/net/ipv4/inet_connection_sock.c
6170 +@@ -851,6 +851,7 @@ struct sock *inet_csk_clone_lock(const struct sock *sk,
6171 + newicsk->icsk_retransmits = 0;
6172 + newicsk->icsk_backoff = 0;
6173 + newicsk->icsk_probes_out = 0;
6174 ++ newicsk->icsk_probes_tstamp = 0;
6175 +
6176 + /* Deinitialize accept_queue to trap illegal accesses. */
6177 + memset(&newicsk->icsk_accept_queue, 0, sizeof(newicsk->icsk_accept_queue));
6178 +diff --git a/net/ipv4/netfilter/ipt_rpfilter.c b/net/ipv4/netfilter/ipt_rpfilter.c
6179 +index cc23f1ce239c2..8cd3224d913e0 100644
6180 +--- a/net/ipv4/netfilter/ipt_rpfilter.c
6181 ++++ b/net/ipv4/netfilter/ipt_rpfilter.c
6182 +@@ -76,7 +76,7 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
6183 + flow.daddr = iph->saddr;
6184 + flow.saddr = rpfilter_get_saddr(iph->daddr);
6185 + flow.flowi4_mark = info->flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0;
6186 +- flow.flowi4_tos = RT_TOS(iph->tos);
6187 ++ flow.flowi4_tos = iph->tos & IPTOS_RT_MASK;
6188 + flow.flowi4_scope = RT_SCOPE_UNIVERSE;
6189 + flow.flowi4_oif = l3mdev_master_ifindex_rcu(xt_in(par));
6190 +
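This rpfilter hunk and the udp.c early-demux hunk later in the patch make the same correction: mask the header TOS with IPTOS_RT_MASK, the mask the fib lookup itself uses, rather than RT_TOS(), which keeps one extra low bit. A stand-alone illustration with values mirrored from the kernel's ip.h/route.h headers (the sample tos is arbitrary):

	#include <stdio.h>

	#define IPTOS_TOS_MASK	0x1E			/* uapi/linux/ip.h */
	#define IPTOS_RT_MASK	(IPTOS_TOS_MASK & ~3)	/* net/route.h: 0x1C */
	#define RT_TOS(tos)	((tos) & IPTOS_TOS_MASK)

	int main(void)
	{
		unsigned char tos = 0x17;

		printf("RT_TOS: 0x%02x, IPTOS_RT_MASK: 0x%02x\n",
		       RT_TOS(tos), tos & IPTOS_RT_MASK);
		return 0;
	}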
6191 +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
6192 +index b2bc3d7fe9e80..41d03683b13d6 100644
6193 +--- a/net/ipv4/tcp.c
6194 ++++ b/net/ipv4/tcp.c
6195 +@@ -2685,6 +2685,7 @@ int tcp_disconnect(struct sock *sk, int flags)
6196 +
6197 + icsk->icsk_backoff = 0;
6198 + icsk->icsk_probes_out = 0;
6199 ++ icsk->icsk_probes_tstamp = 0;
6200 + icsk->icsk_rto = TCP_TIMEOUT_INIT;
6201 + icsk->icsk_rto_min = TCP_RTO_MIN;
6202 + icsk->icsk_delack_max = TCP_DELACK_MAX;
6203 +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
6204 +index ef4bdb038a4bb..6bf066f924c15 100644
6205 +--- a/net/ipv4/tcp_input.c
6206 ++++ b/net/ipv4/tcp_input.c
6207 +@@ -3370,6 +3370,7 @@ static void tcp_ack_probe(struct sock *sk)
6208 + return;
6209 + if (!after(TCP_SKB_CB(head)->end_seq, tcp_wnd_end(tp))) {
6210 + icsk->icsk_backoff = 0;
6211 ++ icsk->icsk_probes_tstamp = 0;
6212 + inet_csk_clear_xmit_timer(sk, ICSK_TIME_PROBE0);
6213 + /* Socket must be waked up by subsequent tcp_data_snd_check().
6214 + * This function is not for random use!
6215 +@@ -4379,10 +4380,9 @@ static void tcp_rcv_spurious_retrans(struct sock *sk, const struct sk_buff *skb)
6216 + * The receiver remembers and reflects via DSACKs. Leverage the
6217 + * DSACK state and change the txhash to re-route speculatively.
6218 + */
6219 +- if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq) {
6220 +- sk_rethink_txhash(sk);
6221 ++ if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq &&
6222 ++ sk_rethink_txhash(sk))
6223 + NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDUPLICATEDATAREHASH);
6224 +- }
6225 + }
6226 +
6227 + static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
6228 +diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
6229 +index 595dcc3afac5c..ab8ed0fc47697 100644
6230 +--- a/net/ipv4/tcp_ipv4.c
6231 ++++ b/net/ipv4/tcp_ipv4.c
6232 +@@ -1590,6 +1590,8 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
6233 + tcp_move_syn(newtp, req);
6234 + ireq->ireq_opt = NULL;
6235 + } else {
6236 ++ newinet->inet_opt = NULL;
6237 ++
6238 + if (!req_unhash && found_dup_sk) {
6239 + /* This code path should only be executed in the
6240 + * syncookie case only
6241 +@@ -1597,8 +1599,6 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
6242 + bh_unlock_sock(newsk);
6243 + sock_put(newsk);
6244 + newsk = NULL;
6245 +- } else {
6246 +- newinet->inet_opt = NULL;
6247 + }
6248 + }
6249 + return newsk;
6250 +@@ -1755,6 +1755,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
6251 + bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
6252 + {
6253 + u32 limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf);
6254 ++ u32 tail_gso_size, tail_gso_segs;
6255 + struct skb_shared_info *shinfo;
6256 + const struct tcphdr *th;
6257 + struct tcphdr *thtail;
6258 +@@ -1762,6 +1763,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
6259 + unsigned int hdrlen;
6260 + bool fragstolen;
6261 + u32 gso_segs;
6262 ++ u32 gso_size;
6263 + int delta;
6264 +
6265 + /* In case all data was pulled from skb frags (in __pskb_pull_tail()),
6266 +@@ -1787,13 +1789,6 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
6267 + */
6268 + th = (const struct tcphdr *)skb->data;
6269 + hdrlen = th->doff * 4;
6270 +- shinfo = skb_shinfo(skb);
6271 +-
6272 +- if (!shinfo->gso_size)
6273 +- shinfo->gso_size = skb->len - hdrlen;
6274 +-
6275 +- if (!shinfo->gso_segs)
6276 +- shinfo->gso_segs = 1;
6277 +
6278 + tail = sk->sk_backlog.tail;
6279 + if (!tail)
6280 +@@ -1816,6 +1811,15 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
6281 + goto no_coalesce;
6282 +
6283 + __skb_pull(skb, hdrlen);
6284 ++
6285 ++ shinfo = skb_shinfo(skb);
6286 ++ gso_size = shinfo->gso_size ?: skb->len;
6287 ++ gso_segs = shinfo->gso_segs ?: 1;
6288 ++
6289 ++ shinfo = skb_shinfo(tail);
6290 ++ tail_gso_size = shinfo->gso_size ?: (tail->len - hdrlen);
6291 ++ tail_gso_segs = shinfo->gso_segs ?: 1;
6292 ++
6293 + if (skb_try_coalesce(tail, skb, &fragstolen, &delta)) {
6294 + TCP_SKB_CB(tail)->end_seq = TCP_SKB_CB(skb)->end_seq;
6295 +
6296 +@@ -1842,11 +1846,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
6297 + }
6298 +
6299 + /* Not as strict as GRO. We only need to carry mss max value */
6300 +- skb_shinfo(tail)->gso_size = max(shinfo->gso_size,
6301 +- skb_shinfo(tail)->gso_size);
6302 +-
6303 +- gso_segs = skb_shinfo(tail)->gso_segs + shinfo->gso_segs;
6304 +- skb_shinfo(tail)->gso_segs = min_t(u32, gso_segs, 0xFFFF);
6305 ++ shinfo->gso_size = max(gso_size, tail_gso_size);
6306 ++ shinfo->gso_segs = min_t(u32, gso_segs + tail_gso_segs, 0xFFFF);
6307 +
6308 + sk->sk_backlog.len += delta;
6309 + __NET_INC_STATS(sock_net(sk),
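The tcp_add_backlog() rework stops writing fallback gso_size/gso_segs into the incoming skb before the coalescing attempt; the values are now computed into locals for both the new skb and the backlog tail, using GNU C's a ?: b shorthand (a ? a : b with a evaluated once). The operator in isolation, for reference:

	#include <stdio.h>

	int main(void)
	{
		unsigned int gso_size = 0, payload_len = 1400;

		/* yields payload_len only because gso_size is 0 */
		unsigned int effective = gso_size ?: payload_len;

		printf("effective gso_size: %u\n", effective);
		return 0;
	}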
6310 +diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
6311 +index 99011768c2640..e58e2589d7f98 100644
6312 +--- a/net/ipv4/tcp_output.c
6313 ++++ b/net/ipv4/tcp_output.c
6314 +@@ -4080,6 +4080,7 @@ void tcp_send_probe0(struct sock *sk)
6315 + /* Cancel probe timer, if it is not required. */
6316 + icsk->icsk_probes_out = 0;
6317 + icsk->icsk_backoff = 0;
6318 ++ icsk->icsk_probes_tstamp = 0;
6319 + return;
6320 + }
6321 +
6322 +diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
6323 +index 6c62b9ea1320d..faa92948441ba 100644
6324 +--- a/net/ipv4/tcp_timer.c
6325 ++++ b/net/ipv4/tcp_timer.c
6326 +@@ -219,14 +219,8 @@ static int tcp_write_timeout(struct sock *sk)
6327 + int retry_until;
6328 +
6329 + if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
6330 +- if (icsk->icsk_retransmits) {
6331 +- dst_negative_advice(sk);
6332 +- } else {
6333 +- sk_rethink_txhash(sk);
6334 +- tp->timeout_rehash++;
6335 +- __NET_INC_STATS(sock_net(sk),
6336 +- LINUX_MIB_TCPTIMEOUTREHASH);
6337 +- }
6338 ++ if (icsk->icsk_retransmits)
6339 ++ __dst_negative_advice(sk);
6340 + retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries;
6341 + expired = icsk->icsk_retransmits >= retry_until;
6342 + } else {
6343 +@@ -234,12 +228,7 @@ static int tcp_write_timeout(struct sock *sk)
6344 + /* Black hole detection */
6345 + tcp_mtu_probing(icsk, sk);
6346 +
6347 +- dst_negative_advice(sk);
6348 +- } else {
6349 +- sk_rethink_txhash(sk);
6350 +- tp->timeout_rehash++;
6351 +- __NET_INC_STATS(sock_net(sk),
6352 +- LINUX_MIB_TCPTIMEOUTREHASH);
6353 ++ __dst_negative_advice(sk);
6354 + }
6355 +
6356 + retry_until = net->ipv4.sysctl_tcp_retries2;
6357 +@@ -270,6 +259,11 @@ static int tcp_write_timeout(struct sock *sk)
6358 + return 1;
6359 + }
6360 +
6361 ++ if (sk_rethink_txhash(sk)) {
6362 ++ tp->timeout_rehash++;
6363 ++ __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPTIMEOUTREHASH);
6364 ++ }
6365 ++
6366 + return 0;
6367 + }
6368 +
6369 +@@ -349,6 +343,7 @@ static void tcp_probe_timer(struct sock *sk)
6370 +
6371 + if (tp->packets_out || !skb) {
6372 + icsk->icsk_probes_out = 0;
6373 ++ icsk->icsk_probes_tstamp = 0;
6374 + return;
6375 + }
6376 +
6377 +@@ -360,13 +355,12 @@ static void tcp_probe_timer(struct sock *sk)
6378 + * corresponding system limit. We also implement similar policy when
6379 + * we use RTO to probe window in tcp_retransmit_timer().
6380 + */
6381 +- if (icsk->icsk_user_timeout) {
6382 +- u32 elapsed = tcp_model_timeout(sk, icsk->icsk_probes_out,
6383 +- tcp_probe0_base(sk));
6384 +-
6385 +- if (elapsed >= icsk->icsk_user_timeout)
6386 +- goto abort;
6387 +- }
6388 ++ if (!icsk->icsk_probes_tstamp)
6389 ++ icsk->icsk_probes_tstamp = tcp_jiffies32;
6390 ++ else if (icsk->icsk_user_timeout &&
6391 ++ (s32)(tcp_jiffies32 - icsk->icsk_probes_tstamp) >=
6392 ++ msecs_to_jiffies(icsk->icsk_user_timeout))
6393 ++ goto abort;
6394 +
6395 + max_probes = sock_net(sk)->ipv4.sysctl_tcp_retries2;
6396 + if (sock_flag(sk, SOCK_DEAD)) {
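The icsk_probes_tstamp field introduced across the TCP hunks above records when zero-window probing started, so tcp_probe_timer() can enforce the user timeout against real elapsed time; the old calculation modeled elapsed time from icsk_probes_out, which other paths reset, letting a dead connection outlive its limit. The timeout being enforced is the per-socket one applications request; a minimal user-space sketch of setting it (error handling elided):

	#include <netinet/tcp.h>
	#include <sys/socket.h>

	/* Abort the connection if transmitted data stays unacknowledged
	 * for `ms` milliseconds. */
	static int set_user_timeout(int fd, unsigned int ms)
	{
		return setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
				  &ms, sizeof(ms));
	}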
6397 +diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
6398 +index 9eeebd4a00542..e37a2fa65c294 100644
6399 +--- a/net/ipv4/udp.c
6400 ++++ b/net/ipv4/udp.c
6401 +@@ -2553,7 +2553,8 @@ int udp_v4_early_demux(struct sk_buff *skb)
6402 + */
6403 + if (!inet_sk(sk)->inet_daddr && in_dev)
6404 + return ip_mc_validate_source(skb, iph->daddr,
6405 +- iph->saddr, iph->tos,
6406 ++ iph->saddr,
6407 ++ iph->tos & IPTOS_RT_MASK,
6408 + skb->dev, in_dev, &itag);
6409 + }
6410 + return 0;
6411 +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
6412 +index 8b6eb384bac7c..4c881f5d9080c 100644
6413 +--- a/net/ipv6/addrconf.c
6414 ++++ b/net/ipv6/addrconf.c
6415 +@@ -2466,8 +2466,9 @@ static void addrconf_add_mroute(struct net_device *dev)
6416 + .fc_ifindex = dev->ifindex,
6417 + .fc_dst_len = 8,
6418 + .fc_flags = RTF_UP,
6419 +- .fc_type = RTN_UNICAST,
6420 ++ .fc_type = RTN_MULTICAST,
6421 + .fc_nlinfo.nl_net = dev_net(dev),
6422 ++ .fc_protocol = RTPROT_KERNEL,
6423 + };
6424 +
6425 + ipv6_addr_set(&cfg.fc_dst, htonl(0xFF000000), 0, 0, 0);
6426 +diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
6427 +index 1319986693fc8..84f932532db7d 100644
6428 +--- a/net/sched/cls_flower.c
6429 ++++ b/net/sched/cls_flower.c
6430 +@@ -1272,6 +1272,10 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
6431 +
6432 + nla_opt_msk = nla_data(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]);
6433 + msk_depth = nla_len(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]);
6434 ++ if (!nla_ok(nla_opt_msk, msk_depth)) {
6435 ++ NL_SET_ERR_MSG(extack, "Invalid nested attribute for masks");
6436 ++ return -EINVAL;
6437 ++ }
6438 + }
6439 +
6440 + nla_for_each_attr(nla_opt_key, nla_enc_key,
6441 +@@ -1307,9 +1311,6 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
6442 + NL_SET_ERR_MSG(extack, "Key and mask miss aligned");
6443 + return -EINVAL;
6444 + }
6445 +-
6446 +- if (msk_depth)
6447 +- nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
6448 + break;
6449 + case TCA_FLOWER_KEY_ENC_OPTS_VXLAN:
6450 + if (key->enc_opts.dst_opt_type) {
6451 +@@ -1340,9 +1341,6 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
6452 + NL_SET_ERR_MSG(extack, "Key and mask miss aligned");
6453 + return -EINVAL;
6454 + }
6455 +-
6456 +- if (msk_depth)
6457 +- nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
6458 + break;
6459 + case TCA_FLOWER_KEY_ENC_OPTS_ERSPAN:
6460 + if (key->enc_opts.dst_opt_type) {
6461 +@@ -1373,14 +1371,20 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key,
6462 + NL_SET_ERR_MSG(extack, "Key and mask miss aligned");
6463 + return -EINVAL;
6464 + }
6465 +-
6466 +- if (msk_depth)
6467 +- nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
6468 + break;
6469 + default:
6470 + NL_SET_ERR_MSG(extack, "Unknown tunnel option type");
6471 + return -EINVAL;
6472 + }
6473 ++
6474 ++ if (!msk_depth)
6475 ++ continue;
6476 ++
6477 ++ if (!nla_ok(nla_opt_msk, msk_depth)) {
6478 ++ NL_SET_ERR_MSG(extack, "A mask attribute is invalid");
6479 ++ return -EINVAL;
6480 ++ }
6481 ++ nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
6482 + }
6483 +
6484 + return 0;
6485 +diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
6486 +index 78bec347b8b66..c4007b9cd16d6 100644
6487 +--- a/net/sched/cls_tcindex.c
6488 ++++ b/net/sched/cls_tcindex.c
6489 +@@ -366,9 +366,13 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
6490 + if (tb[TCA_TCINDEX_MASK])
6491 + cp->mask = nla_get_u16(tb[TCA_TCINDEX_MASK]);
6492 +
6493 +- if (tb[TCA_TCINDEX_SHIFT])
6494 ++ if (tb[TCA_TCINDEX_SHIFT]) {
6495 + cp->shift = nla_get_u32(tb[TCA_TCINDEX_SHIFT]);
6496 +-
6497 ++ if (cp->shift > 16) {
6498 ++ err = -EINVAL;
6499 ++ goto errout;
6500 ++ }
6501 ++ }
6502 + if (!cp->hash) {
6503 + /* Hash not specified, use perfect hash if the upper limit
6504 + * of the hashing index is below the threshold.
6505 +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
6506 +index 2a76a2f5ed88c..5e8e49c4ab5ca 100644
6507 +--- a/net/sched/sch_api.c
6508 ++++ b/net/sched/sch_api.c
6509 +@@ -412,7 +412,8 @@ struct qdisc_rate_table *qdisc_get_rtab(struct tc_ratespec *r,
6510 + {
6511 + struct qdisc_rate_table *rtab;
6512 +
6513 +- if (tab == NULL || r->rate == 0 || r->cell_log == 0 ||
6514 ++ if (tab == NULL || r->rate == 0 ||
6515 ++ r->cell_log == 0 || r->cell_log >= 32 ||
6516 + nla_len(tab) != TC_RTAB_SIZE) {
6517 + NL_SET_ERR_MSG(extack, "Invalid rate table parameters for searching");
6518 + return NULL;
6519 +diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
6520 +index c2752e2b9ce34..4404c491eb388 100644
6521 +--- a/net/sunrpc/svcsock.c
6522 ++++ b/net/sunrpc/svcsock.c
6523 +@@ -1062,6 +1062,90 @@ err_noclose:
6524 + return 0; /* record not complete */
6525 + }
6526 +
6527 ++static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
6528 ++ int flags)
6529 ++{
6530 ++ return kernel_sendpage(sock, virt_to_page(vec->iov_base),
6531 ++ offset_in_page(vec->iov_base),
6532 ++ vec->iov_len, flags);
6533 ++}
6534 ++
6535 ++/*
6536 ++ * kernel_sendpage() is used exclusively to reduce the number of
6537 ++ * copy operations in this path. Therefore the caller must ensure
6538 ++ * that the pages backing @xdr are unchanging.
6539 ++ *
6540 ++ * In addition, the logic assumes that * .bv_len is never larger
6541 ++ * than PAGE_SIZE.
6542 ++ */
6543 ++static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
6544 ++ struct xdr_buf *xdr, rpc_fraghdr marker,
6545 ++ unsigned int *sentp)
6546 ++{
6547 ++ const struct kvec *head = xdr->head;
6548 ++ const struct kvec *tail = xdr->tail;
6549 ++ struct kvec rm = {
6550 ++ .iov_base = &marker,
6551 ++ .iov_len = sizeof(marker),
6552 ++ };
6553 ++ int flags, ret;
6554 ++
6555 ++ *sentp = 0;
6556 ++ xdr_alloc_bvec(xdr, GFP_KERNEL);
6557 ++
6558 ++ msg->msg_flags = MSG_MORE;
6559 ++ ret = kernel_sendmsg(sock, msg, &rm, 1, rm.iov_len);
6560 ++ if (ret < 0)
6561 ++ return ret;
6562 ++ *sentp += ret;
6563 ++ if (ret != rm.iov_len)
6564 ++ return -EAGAIN;
6565 ++
6566 ++ flags = head->iov_len < xdr->len ? MSG_MORE | MSG_SENDPAGE_NOTLAST : 0;
6567 ++ ret = svc_tcp_send_kvec(sock, head, flags);
6568 ++ if (ret < 0)
6569 ++ return ret;
6570 ++ *sentp += ret;
6571 ++ if (ret != head->iov_len)
6572 ++ goto out;
6573 ++
6574 ++ if (xdr->page_len) {
6575 ++ unsigned int offset, len, remaining;
6576 ++ struct bio_vec *bvec;
6577 ++
6578 ++ bvec = xdr->bvec;
6579 ++ offset = xdr->page_base;
6580 ++ remaining = xdr->page_len;
6581 ++ flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
6582 ++ while (remaining > 0) {
6583 ++ if (remaining <= PAGE_SIZE && tail->iov_len == 0)
6584 ++ flags = 0;
6585 ++ len = min(remaining, bvec->bv_len);
6586 ++ ret = kernel_sendpage(sock, bvec->bv_page,
6587 ++ bvec->bv_offset + offset,
6588 ++ len, flags);
6589 ++ if (ret < 0)
6590 ++ return ret;
6591 ++ *sentp += ret;
6592 ++ if (ret != len)
6593 ++ goto out;
6594 ++ remaining -= len;
6595 ++ offset = 0;
6596 ++ bvec++;
6597 ++ }
6598 ++ }
6599 ++
6600 ++ if (tail->iov_len) {
6601 ++ ret = svc_tcp_send_kvec(sock, tail, 0);
6602 ++ if (ret < 0)
6603 ++ return ret;
6604 ++ *sentp += ret;
6605 ++ }
6606 ++
6607 ++out:
6608 ++ return 0;
6609 ++}
6610 ++
6611 + /**
6612 + * svc_tcp_sendto - Send out a reply on a TCP socket
6613 + * @rqstp: completed svc_rqst
6614 +@@ -1089,7 +1173,7 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp)
6615 + mutex_lock(&xprt->xpt_mutex);
6616 + if (svc_xprt_is_dead(xprt))
6617 + goto out_notconn;
6618 +- err = xprt_sock_sendmsg(svsk->sk_sock, &msg, xdr, 0, marker, &sent);
6619 ++ err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent);
6620 + xdr_free_bvec(xdr);
6621 + trace_svcsock_tcp_send(xprt, err < 0 ? err : sent);
6622 + if (err < 0 || sent != (xdr->len + sizeof(marker)))
6623 +diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
6624 +index d5f42c62fd79e..52fd1f96b241e 100644
6625 +--- a/net/xdp/xsk.c
6626 ++++ b/net/xdp/xsk.c
6627 +@@ -107,9 +107,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
6628 +
6629 + void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
6630 + {
6631 +- if (queue_id < dev->real_num_rx_queues)
6632 ++ if (queue_id < dev->num_rx_queues)
6633 + dev->_rx[queue_id].pool = NULL;
6634 +- if (queue_id < dev->real_num_tx_queues)
6635 ++ if (queue_id < dev->num_tx_queues)
6636 + dev->_tx[queue_id].pool = NULL;
6637 + }
6638 +
6639 +diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
6640 +index 11554d0412f06..1b8409ec2c97f 100644
6641 +--- a/sound/core/seq/oss/seq_oss_synth.c
6642 ++++ b/sound/core/seq/oss/seq_oss_synth.c
6643 +@@ -611,7 +611,8 @@ snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_in
6644 +
6645 + if (info->is_midi) {
6646 + struct midi_info minf;
6647 +- snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf);
6648 ++ if (snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf))
6649 ++ return -ENXIO;
6650 + inf->synth_type = SYNTH_TYPE_MIDI;
6651 + inf->synth_subtype = 0;
6652 + inf->nr_voices = 16;
6653 +diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
6654 +index 687216e745267..eec1775dfffe9 100644
6655 +--- a/sound/pci/hda/hda_codec.c
6656 ++++ b/sound/pci/hda/hda_codec.c
6657 +@@ -2934,7 +2934,7 @@ static void hda_call_codec_resume(struct hda_codec *codec)
6658 + snd_hdac_leave_pm(&codec->core);
6659 + }
6660 +
6661 +-static int hda_codec_suspend(struct device *dev)
6662 ++static int hda_codec_runtime_suspend(struct device *dev)
6663 + {
6664 + struct hda_codec *codec = dev_to_hda_codec(dev);
6665 + unsigned int state;
6666 +@@ -2953,7 +2953,7 @@ static int hda_codec_suspend(struct device *dev)
6667 + return 0;
6668 + }
6669 +
6670 +-static int hda_codec_resume(struct device *dev)
6671 ++static int hda_codec_runtime_resume(struct device *dev)
6672 + {
6673 + struct hda_codec *codec = dev_to_hda_codec(dev);
6674 +
6675 +@@ -2968,16 +2968,6 @@ static int hda_codec_resume(struct device *dev)
6676 + return 0;
6677 + }
6678 +
6679 +-static int hda_codec_runtime_suspend(struct device *dev)
6680 +-{
6681 +- return hda_codec_suspend(dev);
6682 +-}
6683 +-
6684 +-static int hda_codec_runtime_resume(struct device *dev)
6685 +-{
6686 +- return hda_codec_resume(dev);
6687 +-}
6688 +-
6689 + #endif /* CONFIG_PM */
6690 +
6691 + #ifdef CONFIG_PM_SLEEP
6692 +@@ -2998,31 +2988,31 @@ static void hda_codec_pm_complete(struct device *dev)
6693 + static int hda_codec_pm_suspend(struct device *dev)
6694 + {
6695 + dev->power.power_state = PMSG_SUSPEND;
6696 +- return hda_codec_suspend(dev);
6697 ++ return pm_runtime_force_suspend(dev);
6698 + }
6699 +
6700 + static int hda_codec_pm_resume(struct device *dev)
6701 + {
6702 + dev->power.power_state = PMSG_RESUME;
6703 +- return hda_codec_resume(dev);
6704 ++ return pm_runtime_force_resume(dev);
6705 + }
6706 +
6707 + static int hda_codec_pm_freeze(struct device *dev)
6708 + {
6709 + dev->power.power_state = PMSG_FREEZE;
6710 +- return hda_codec_suspend(dev);
6711 ++ return pm_runtime_force_suspend(dev);
6712 + }
6713 +
6714 + static int hda_codec_pm_thaw(struct device *dev)
6715 + {
6716 + dev->power.power_state = PMSG_THAW;
6717 +- return hda_codec_resume(dev);
6718 ++ return pm_runtime_force_resume(dev);
6719 + }
6720 +
6721 + static int hda_codec_pm_restore(struct device *dev)
6722 + {
6723 + dev->power.power_state = PMSG_RESTORE;
6724 +- return hda_codec_resume(dev);
6725 ++ return pm_runtime_force_resume(dev);
6726 + }
6727 + #endif /* CONFIG_PM_SLEEP */
6728 +
6729 +diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
6730 +index 70164d1428d40..361cf2041911a 100644
6731 +--- a/sound/pci/hda/hda_tegra.c
6732 ++++ b/sound/pci/hda/hda_tegra.c
6733 +@@ -388,7 +388,7 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
6734 + * in powers of 2, next available ratio is 16 which can be
6735 + * used as a limiting factor here.
6736 + */
6737 +- if (of_device_is_compatible(np, "nvidia,tegra194-hda"))
6738 ++ if (of_device_is_compatible(np, "nvidia,tegra30-hda"))
6739 + chip->bus.core.sdo_limit = 16;
6740 +
6741 + /* codec detection */
6742 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
6743 +index dd82ff2bd5d65..ed5b6b894dc19 100644
6744 +--- a/sound/pci/hda/patch_realtek.c
6745 ++++ b/sound/pci/hda/patch_realtek.c
6746 +@@ -6371,6 +6371,7 @@ enum {
6747 + ALC256_FIXUP_HP_HEADSET_MIC,
6748 + ALC236_FIXUP_DELL_AIO_HEADSET_MIC,
6749 + ALC282_FIXUP_ACER_DISABLE_LINEOUT,
6750 ++ ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
6751 + };
6752 +
6753 + static const struct hda_fixup alc269_fixups[] = {
6754 +@@ -7808,6 +7809,12 @@ static const struct hda_fixup alc269_fixups[] = {
6755 + .chained = true,
6756 + .chain_id = ALC269_FIXUP_HEADSET_MODE
6757 + },
6758 ++ [ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST] = {
6759 ++ .type = HDA_FIXUP_FUNC,
6760 ++ .v.func = alc269_fixup_limit_int_mic_boost,
6761 ++ .chained = true,
6762 ++ .chain_id = ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
6763 ++ },
6764 + };
6765 +
6766 + static const struct snd_pci_quirk alc269_fixup_tbl[] = {
6767 +@@ -7826,6 +7833,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
6768 + SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
6769 + SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
6770 + SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
6771 ++ SND_PCI_QUIRK(0x1025, 0x1094, "Acer Aspire E5-575T", ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST),
6772 + SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
6773 + SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
6774 + SND_PCI_QUIRK(0x1025, 0x1166, "Acer Veriton N4640G", ALC269_FIXUP_LIFEBOOK),
6775 +diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
6776 +index 0ab40a8a68fb5..834367dd54e1b 100644
6777 +--- a/sound/pci/hda/patch_via.c
6778 ++++ b/sound/pci/hda/patch_via.c
6779 +@@ -113,6 +113,7 @@ static struct via_spec *via_new_spec(struct hda_codec *codec)
6780 + spec->codec_type = VT1708S;
6781 + spec->gen.indep_hp = 1;
6782 + spec->gen.keep_eapd_on = 1;
6783 ++ spec->gen.dac_min_mute = 1;
6784 + spec->gen.pcm_playback_hook = via_playback_pcm_hook;
6785 + spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO;
6786 + codec->power_save_node = 1;
6787 +diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
6788 +index 65b59dbfb43c8..a9b1b4180c471 100644
6789 +--- a/sound/soc/codecs/rt711.c
6790 ++++ b/sound/soc/codecs/rt711.c
6791 +@@ -462,6 +462,8 @@ static int rt711_set_amp_gain_put(struct snd_kcontrol *kcontrol,
6792 + unsigned int read_ll, read_rl;
6793 + int i;
6794 +
6795 ++ mutex_lock(&rt711->calibrate_mutex);
6796 ++
6797 + /* Can't use update bit function, so read the original value first */
6798 + addr_h = mc->reg;
6799 + addr_l = mc->rreg;
6800 +@@ -547,6 +549,8 @@ static int rt711_set_amp_gain_put(struct snd_kcontrol *kcontrol,
6801 + if (dapm->bias_level <= SND_SOC_BIAS_STANDBY)
6802 + regmap_write(rt711->regmap,
6803 + RT711_SET_AUDIO_POWER_STATE, AC_PWRST_D3);
6804 ++
6805 ++ mutex_unlock(&rt711->calibrate_mutex);
6806 + return 0;
6807 + }
6808 +
6809 +@@ -859,9 +863,11 @@ static int rt711_set_bias_level(struct snd_soc_component *component,
6810 + break;
6811 +
6812 + case SND_SOC_BIAS_STANDBY:
6813 ++ mutex_lock(&rt711->calibrate_mutex);
6814 + regmap_write(rt711->regmap,
6815 + RT711_SET_AUDIO_POWER_STATE,
6816 + AC_PWRST_D3);
6817 ++ mutex_unlock(&rt711->calibrate_mutex);
6818 + break;
6819 +
6820 + default:
6821 +diff --git a/sound/soc/intel/boards/haswell.c b/sound/soc/intel/boards/haswell.c
6822 +index c55d1239e705b..c763bfeb1f38f 100644
6823 +--- a/sound/soc/intel/boards/haswell.c
6824 ++++ b/sound/soc/intel/boards/haswell.c
6825 +@@ -189,6 +189,7 @@ static struct platform_driver haswell_audio = {
6826 + .probe = haswell_audio_probe,
6827 + .driver = {
6828 + .name = "haswell-audio",
6829 ++ .pm = &snd_soc_pm_ops,
6830 + },
6831 + };
6832 +
6833 +diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
6834 +index 6875fa570c2c5..8b0ddc4b8227b 100644
6835 +--- a/sound/soc/sof/intel/hda-codec.c
6836 ++++ b/sound/soc/sof/intel/hda-codec.c
6837 +@@ -156,7 +156,8 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
6838 + if (!hdev->bus->audio_component) {
6839 + dev_dbg(sdev->dev,
6840 + "iDisp hw present but no driver\n");
6841 +- goto error;
6842 ++ ret = -ENOENT;
6843 ++ goto out;
6844 + }
6845 + hda_priv->need_display_power = true;
6846 + }
6847 +@@ -173,24 +174,23 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
6848 + * other return codes without modification
6849 + */
6850 + if (ret == 0)
6851 +- goto error;
6852 ++ ret = -ENOENT;
6853 + }
6854 +
6855 +- return ret;
6856 +-
6857 +-error:
6858 +- snd_hdac_ext_bus_device_exit(hdev);
6859 +- return -ENOENT;
6860 +-
6861 ++out:
6862 ++ if (ret < 0) {
6863 ++ snd_hdac_device_unregister(hdev);
6864 ++ put_device(&hdev->dev);
6865 ++ }
6866 + #else
6867 + hdev = devm_kzalloc(sdev->dev, sizeof(*hdev), GFP_KERNEL);
6868 + if (!hdev)
6869 + return -ENOMEM;
6870 +
6871 + ret = snd_hdac_ext_bus_device_init(&hbus->core, address, hdev, HDA_DEV_ASOC);
6872 ++#endif
6873 +
6874 + return ret;
6875 +-#endif
6876 + }
6877 +
6878 + /* Codec initialization */
6879 +diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c
6880 +index 18ff1c2f5376e..2dbc1273e56bd 100644
6881 +--- a/sound/soc/sof/intel/hda-dsp.c
6882 ++++ b/sound/soc/sof/intel/hda-dsp.c
6883 +@@ -683,8 +683,10 @@ static int hda_resume(struct snd_sof_dev *sdev, bool runtime_resume)
6884 +
6885 + #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
6886 + /* check jack status */
6887 +- if (runtime_resume)
6888 +- hda_codec_jack_check(sdev);
6889 ++ if (runtime_resume) {
6890 ++ if (sdev->system_suspend_target == SOF_SUSPEND_NONE)
6891 ++ hda_codec_jack_check(sdev);
6892 ++ }
6893 +
6894 + /* turn off the links that were off before suspend */
6895 + list_for_each_entry(hlink, &bus->hlink_list, list) {
6896 +diff --git a/tools/gpio/gpio-event-mon.c b/tools/gpio/gpio-event-mon.c
6897 +index 90c3155f05b1e..84ae1039b0a87 100644
6898 +--- a/tools/gpio/gpio-event-mon.c
6899 ++++ b/tools/gpio/gpio-event-mon.c
6900 +@@ -107,8 +107,8 @@ int monitor_device(const char *device_name,
6901 + ret = -EIO;
6902 + break;
6903 + }
6904 +- fprintf(stdout, "GPIO EVENT at %llu on line %d (%d|%d) ",
6905 +- event.timestamp_ns, event.offset, event.line_seqno,
6906 ++ fprintf(stdout, "GPIO EVENT at %" PRIu64 " on line %d (%d|%d) ",
6907 ++ (uint64_t)event.timestamp_ns, event.offset, event.line_seqno,
6908 + event.seqno);
6909 + switch (event.id) {
6910 + case GPIO_V2_LINE_EVENT_RISING_EDGE:
6911 +diff --git a/tools/gpio/gpio-watch.c b/tools/gpio/gpio-watch.c
6912 +index f229ec62301b7..41e76d2441922 100644
6913 +--- a/tools/gpio/gpio-watch.c
6914 ++++ b/tools/gpio/gpio-watch.c
6915 +@@ -10,6 +10,7 @@
6916 + #include <ctype.h>
6917 + #include <errno.h>
6918 + #include <fcntl.h>
6919 ++#include <inttypes.h>
6920 + #include <linux/gpio.h>
6921 + #include <poll.h>
6922 + #include <stdbool.h>
6923 +@@ -86,8 +87,8 @@ int main(int argc, char **argv)
6924 + return EXIT_FAILURE;
6925 + }
6926 +
6927 +- printf("line %u: %s at %llu\n",
6928 +- chg.info.offset, event, chg.timestamp_ns);
6929 ++ printf("line %u: %s at %" PRIu64 "\n",
6930 ++ chg.info.offset, event, (uint64_t)chg.timestamp_ns);
6931 + }
6932 + }
6933 +
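Both gpio tools print the kernel ABI's __u64 timestamp, and __u64 maps to unsigned long long on some architectures but unsigned long on others, so a hard-coded %llu triggers format warnings or worse. The fix casts to uint64_t and prints with PRIu64; stand-alone:

	#include <inttypes.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t timestamp_ns = 1611746939000000000ULL;

		/* PRIu64 expands to the right conversion for this platform */
		printf("GPIO EVENT at %" PRIu64 " ns\n", timestamp_ns);
		return 0;
	}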
6934 +diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
6935 +index cfcdbd7be066e..17465d454a0e3 100644
6936 +--- a/tools/lib/perf/evlist.c
6937 ++++ b/tools/lib/perf/evlist.c
6938 +@@ -367,21 +367,13 @@ static struct perf_mmap* perf_evlist__alloc_mmap(struct perf_evlist *evlist, boo
6939 + return map;
6940 + }
6941 +
6942 +-static void perf_evlist__set_sid_idx(struct perf_evlist *evlist,
6943 +- struct perf_evsel *evsel, int idx, int cpu,
6944 +- int thread)
6945 ++static void perf_evsel__set_sid_idx(struct perf_evsel *evsel, int idx, int cpu, int thread)
6946 + {
6947 + struct perf_sample_id *sid = SID(evsel, cpu, thread);
6948 +
6949 + sid->idx = idx;
6950 +- if (evlist->cpus && cpu >= 0)
6951 +- sid->cpu = evlist->cpus->map[cpu];
6952 +- else
6953 +- sid->cpu = -1;
6954 +- if (!evsel->system_wide && evlist->threads && thread >= 0)
6955 +- sid->tid = perf_thread_map__pid(evlist->threads, thread);
6956 +- else
6957 +- sid->tid = -1;
6958 ++ sid->cpu = perf_cpu_map__cpu(evsel->cpus, cpu);
6959 ++ sid->tid = perf_thread_map__pid(evsel->threads, thread);
6960 + }
6961 +
6962 + static struct perf_mmap*
6963 +@@ -500,8 +492,7 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops,
6964 + if (perf_evlist__id_add_fd(evlist, evsel, cpu, thread,
6965 + fd) < 0)
6966 + return -1;
6967 +- perf_evlist__set_sid_idx(evlist, evsel, idx, cpu,
6968 +- thread);
6969 ++ perf_evsel__set_sid_idx(evsel, idx, cpu, thread);
6970 + }
6971 + }
6972 +
6973 +diff --git a/tools/lib/perf/tests/test-cpumap.c b/tools/lib/perf/tests/test-cpumap.c
6974 +index c8d45091e7c26..c70e9e03af3e9 100644
6975 +--- a/tools/lib/perf/tests/test-cpumap.c
6976 ++++ b/tools/lib/perf/tests/test-cpumap.c
6977 +@@ -27,5 +27,5 @@ int main(int argc, char **argv)
6978 + perf_cpu_map__put(cpus);
6979 +
6980 + __T_END;
6981 +- return 0;
6982 ++ return tests_failed == 0 ? 0 : -1;
6983 + }
6984 +diff --git a/tools/lib/perf/tests/test-evlist.c b/tools/lib/perf/tests/test-evlist.c
6985 +index 6d8ebe0c25042..bd19cabddaf62 100644
6986 +--- a/tools/lib/perf/tests/test-evlist.c
6987 ++++ b/tools/lib/perf/tests/test-evlist.c
6988 +@@ -215,6 +215,7 @@ static int test_mmap_thread(void)
6989 + sysfs__mountpoint());
6990 +
6991 + if (filename__read_int(path, &id)) {
6992 ++ tests_failed++;
6993 + fprintf(stderr, "error: failed to get tracepoint id: %s\n", path);
6994 + return -1;
6995 + }
6996 +@@ -409,5 +410,5 @@ int main(int argc, char **argv)
6997 + test_mmap_cpus();
6998 +
6999 + __T_END;
7000 +- return 0;
7001 ++ return tests_failed == 0 ? 0 : -1;
7002 + }
7003 +diff --git a/tools/lib/perf/tests/test-evsel.c b/tools/lib/perf/tests/test-evsel.c
7004 +index 135722ac965bf..0ad82d7a2a51b 100644
7005 +--- a/tools/lib/perf/tests/test-evsel.c
7006 ++++ b/tools/lib/perf/tests/test-evsel.c
7007 +@@ -131,5 +131,5 @@ int main(int argc, char **argv)
7008 + test_stat_thread_enable();
7009 +
7010 + __T_END;
7011 +- return 0;
7012 ++ return tests_failed == 0 ? 0 : -1;
7013 + }
7014 +diff --git a/tools/lib/perf/tests/test-threadmap.c b/tools/lib/perf/tests/test-threadmap.c
7015 +index 7dc4d6fbeddee..384471441b484 100644
7016 +--- a/tools/lib/perf/tests/test-threadmap.c
7017 ++++ b/tools/lib/perf/tests/test-threadmap.c
7018 +@@ -27,5 +27,5 @@ int main(int argc, char **argv)
7019 + perf_thread_map__put(threads);
7020 +
7021 + __T_END;
7022 +- return 0;
7023 ++ return tests_failed == 0 ? 0 : -1;
7024 + }
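The four libperf test mains above used to return 0 unconditionally, so a failing test suite still exited successfully and CI never noticed (test-evlist.c additionally forgot to bump tests_failed on one error path). The convention they now follow, in miniature:

	#include <stdio.h>

	static int tests_failed;

	static void check(int cond, const char *name)
	{
		if (!cond) {
			tests_failed++;
			fprintf(stderr, "FAILED %s\n", name);
		}
	}

	int main(void)
	{
		check(1 + 1 == 2, "arithmetic");
		return tests_failed == 0 ? 0 : -1;
	}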
7025 +diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
7026 +index 84205c3a55ebe..2b5707738609e 100755
7027 +--- a/tools/testing/selftests/net/fib_tests.sh
7028 ++++ b/tools/testing/selftests/net/fib_tests.sh
7029 +@@ -1055,7 +1055,6 @@ ipv6_addr_metric_test()
7030 +
7031 + check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 260"
7032 + log_test $? 0 "Set metric with peer route on local side"
7033 +- log_test $? 0 "User specified metric on local address"
7034 + check_route6 "2001:db8:104::2 dev dummy2 proto kernel metric 260"
7035 + log_test $? 0 "Set metric with peer route on peer side"
7036 +
7037 +diff --git a/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c b/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
7038 +index 9e5c7f3f498a7..0af4f02669a11 100644
7039 +--- a/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
7040 ++++ b/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
7041 +@@ -290,5 +290,5 @@ static int test(void)
7042 +
7043 + int main(void)
7044 + {
7045 +- test_harness(test, "pkey_exec_prot");
7046 ++ return test_harness(test, "pkey_exec_prot");
7047 + }
7048 +diff --git a/tools/testing/selftests/powerpc/mm/pkey_siginfo.c b/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
7049 +index 4f815d7c12145..2db76e56d4cb9 100644
7050 +--- a/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
7051 ++++ b/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
7052 +@@ -329,5 +329,5 @@ static int test(void)
7053 +
7054 + int main(void)
7055 + {
7056 +- test_harness(test, "pkey_siginfo");
7057 ++ return test_harness(test, "pkey_siginfo");
7058 + }