Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.18 commit in: /
Date: Wed, 14 Nov 2018 13:16:13
Message-Id: 1542201341.4c43869360643e3f80b1ebd018e1524787e16a3f.mpagano@gentoo
1 commit: 4c43869360643e3f80b1ebd018e1524787e16a3f
2 Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
3 AuthorDate: Tue Nov 13 21:16:56 2018 +0000
4 Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
5 CommitDate: Wed Nov 14 13:15:41 2018 +0000
6 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4c438693
7
8 proj/linux-patches: Linux patch 4.18.19
9
10 Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
11
12 0000_README | 4 +
13 1018_linux-4.18.19.patch | 15151 +++++++++++++++++++++++++++++++++++++++++++++
14 2 files changed, 15155 insertions(+)
15
16 diff --git a/0000_README b/0000_README
17 index bdc7ee9..afaac7a 100644
18 --- a/0000_README
19 +++ b/0000_README
20 @@ -115,6 +115,10 @@ Patch: 1017_linux-4.18.18.patch
21 From: http://www.kernel.org
22 Desc: Linux 4.18.18
23
24 +Patch: 1018_linux-4.18.19.patch
25 +From: http://www.kernel.org
26 +Desc: Linux 4.18.19
27 +
28 Patch: 1500_XATTR_USER_PREFIX.patch
29 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
30 Desc: Support for namespace user.pax.* on tmpfs.
31
32 diff --git a/1018_linux-4.18.19.patch b/1018_linux-4.18.19.patch
33 new file mode 100644
34 index 0000000..40499cf
35 --- /dev/null
36 +++ b/1018_linux-4.18.19.patch
37 @@ -0,0 +1,15151 @@
38 +diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
39 +index 48b424de85bb..cfbc18f0d9c9 100644
40 +--- a/Documentation/filesystems/fscrypt.rst
41 ++++ b/Documentation/filesystems/fscrypt.rst
42 +@@ -191,21 +191,11 @@ Currently, the following pairs of encryption modes are supported:
43 +
44 + - AES-256-XTS for contents and AES-256-CTS-CBC for filenames
45 + - AES-128-CBC for contents and AES-128-CTS-CBC for filenames
46 +-- Speck128/256-XTS for contents and Speck128/256-CTS-CBC for filenames
47 +
48 + It is strongly recommended to use AES-256-XTS for contents encryption.
49 + AES-128-CBC was added only for low-powered embedded devices with
50 + crypto accelerators such as CAAM or CESA that do not support XTS.
51 +
52 +-Similarly, Speck128/256 support was only added for older or low-end
53 +-CPUs which cannot do AES fast enough -- especially ARM CPUs which have
54 +-NEON instructions but not the Cryptography Extensions -- and for which
55 +-it would not otherwise be feasible to use encryption at all. It is
56 +-not recommended to use Speck on CPUs that have AES instructions.
57 +-Speck support is only available if it has been enabled in the crypto
58 +-API via CONFIG_CRYPTO_SPECK. Also, on ARM platforms, to get
59 +-acceptable performance CONFIG_CRYPTO_SPECK_NEON must be enabled.
60 +-
61 + New encryption modes can be added relatively easily, without changes
62 + to individual filesystems. However, authenticated encryption (AE)
63 + modes are not currently supported because of the difficulty of dealing
64 +diff --git a/Documentation/media/uapi/cec/cec-ioc-receive.rst b/Documentation/media/uapi/cec/cec-ioc-receive.rst
65 +index e964074cd15b..b25e48afaa08 100644
66 +--- a/Documentation/media/uapi/cec/cec-ioc-receive.rst
67 ++++ b/Documentation/media/uapi/cec/cec-ioc-receive.rst
68 +@@ -16,10 +16,10 @@ CEC_RECEIVE, CEC_TRANSMIT - Receive or transmit a CEC message
69 + Synopsis
70 + ========
71 +
72 +-.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg *argp )
73 ++.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg \*argp )
74 + :name: CEC_RECEIVE
75 +
76 +-.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg *argp )
77 ++.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg \*argp )
78 + :name: CEC_TRANSMIT
79 +
80 + Arguments
81 +@@ -272,6 +272,19 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
82 + - The transmit failed after one or more retries. This status bit is
83 + mutually exclusive with :ref:`CEC_TX_STATUS_OK <CEC-TX-STATUS-OK>`.
84 + Other bits can still be set to explain which failures were seen.
85 ++ * .. _`CEC-TX-STATUS-ABORTED`:
86 ++
87 ++ - ``CEC_TX_STATUS_ABORTED``
88 ++ - 0x40
89 ++ - The transmit was aborted due to an HDMI disconnect, or the adapter
90 ++ was unconfigured, or a transmit was interrupted, or the driver
91 ++ returned an error when attempting to start a transmit.
92 ++ * .. _`CEC-TX-STATUS-TIMEOUT`:
93 ++
94 ++ - ``CEC_TX_STATUS_TIMEOUT``
95 ++ - 0x80
96 ++ - The transmit timed out. This should not normally happen and this
97 ++ indicates a driver problem.
98 +
99 +
100 + .. tabularcolumns:: |p{5.6cm}|p{0.9cm}|p{11.0cm}|
101 +@@ -300,6 +313,14 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
102 + - The message was received successfully but the reply was
103 + ``CEC_MSG_FEATURE_ABORT``. This status is only set if this message
104 + was the reply to an earlier transmitted message.
105 ++ * .. _`CEC-RX-STATUS-ABORTED`:
106 ++
107 ++ - ``CEC_RX_STATUS_ABORTED``
108 ++ - 0x08
109 ++ - The wait for a reply to an earlier transmitted message was aborted
110 ++ because the HDMI cable was disconnected, the adapter was unconfigured
111 ++ or the :ref:`CEC_TRANSMIT <CEC_RECEIVE>` that waited for a
112 ++ reply was interrupted.
113 +
114 +
115 +
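[Editorial aside, not part of the patch: the hunk above documents the new CEC_TX_STATUS_ABORTED (0x40) and CEC_TX_STATUS_TIMEOUT (0x80) bits. A hypothetical sketch of decoding a `tx_status` bitmask using exactly the values documented in that hunk:]

```python
# Bit values taken from the documentation hunk above.
CEC_TX_STATUS_ABORTED = 0x40
CEC_TX_STATUS_TIMEOUT = 0x80

def describe_tx(status: int) -> list:
    """Return human-readable names for the new failure bits set in status."""
    names = []
    if status & CEC_TX_STATUS_ABORTED:
        names.append("aborted")
    if status & CEC_TX_STATUS_TIMEOUT:
        names.append("timeout")
    return names

print(describe_tx(CEC_TX_STATUS_ABORTED | CEC_TX_STATUS_TIMEOUT))
```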
116 +diff --git a/Documentation/media/uapi/v4l/biblio.rst b/Documentation/media/uapi/v4l/biblio.rst
117 +index 1cedcfc04327..386d6cf83e9c 100644
118 +--- a/Documentation/media/uapi/v4l/biblio.rst
119 ++++ b/Documentation/media/uapi/v4l/biblio.rst
120 +@@ -226,16 +226,6 @@ xvYCC
121 +
122 + :author: International Electrotechnical Commission (http://www.iec.ch)
123 +
124 +-.. _adobergb:
125 +-
126 +-AdobeRGB
127 +-========
128 +-
129 +-
130 +-:title: Adobe© RGB (1998) Color Image Encoding Version 2005-05
131 +-
132 +-:author: Adobe Systems Incorporated (http://www.adobe.com)
133 +-
134 + .. _oprgb:
135 +
136 + opRGB
137 +diff --git a/Documentation/media/uapi/v4l/colorspaces-defs.rst b/Documentation/media/uapi/v4l/colorspaces-defs.rst
138 +index 410907fe9415..f24615544792 100644
139 +--- a/Documentation/media/uapi/v4l/colorspaces-defs.rst
140 ++++ b/Documentation/media/uapi/v4l/colorspaces-defs.rst
141 +@@ -51,8 +51,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
142 + - See :ref:`col-rec709`.
143 + * - ``V4L2_COLORSPACE_SRGB``
144 + - See :ref:`col-srgb`.
145 +- * - ``V4L2_COLORSPACE_ADOBERGB``
146 +- - See :ref:`col-adobergb`.
147 ++ * - ``V4L2_COLORSPACE_OPRGB``
148 ++ - See :ref:`col-oprgb`.
149 + * - ``V4L2_COLORSPACE_BT2020``
150 + - See :ref:`col-bt2020`.
151 + * - ``V4L2_COLORSPACE_DCI_P3``
152 +@@ -90,8 +90,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
153 + - Use the Rec. 709 transfer function.
154 + * - ``V4L2_XFER_FUNC_SRGB``
155 + - Use the sRGB transfer function.
156 +- * - ``V4L2_XFER_FUNC_ADOBERGB``
157 +- - Use the AdobeRGB transfer function.
158 ++ * - ``V4L2_XFER_FUNC_OPRGB``
159 ++ - Use the opRGB transfer function.
160 + * - ``V4L2_XFER_FUNC_SMPTE240M``
161 + - Use the SMPTE 240M transfer function.
162 + * - ``V4L2_XFER_FUNC_NONE``
163 +diff --git a/Documentation/media/uapi/v4l/colorspaces-details.rst b/Documentation/media/uapi/v4l/colorspaces-details.rst
164 +index b5d551b9cc8f..09fabf4cd412 100644
165 +--- a/Documentation/media/uapi/v4l/colorspaces-details.rst
166 ++++ b/Documentation/media/uapi/v4l/colorspaces-details.rst
167 +@@ -290,15 +290,14 @@ Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range
168 + 170M/BT.601. The Y'CbCr quantization is limited range.
169 +
170 +
171 +-.. _col-adobergb:
172 ++.. _col-oprgb:
173 +
174 +-Colorspace Adobe RGB (V4L2_COLORSPACE_ADOBERGB)
175 ++Colorspace opRGB (V4L2_COLORSPACE_OPRGB)
176 + ===============================================
177 +
178 +-The :ref:`adobergb` standard defines the colorspace used by computer
179 +-graphics that use the AdobeRGB colorspace. This is also known as the
180 +-:ref:`oprgb` standard. The default transfer function is
181 +-``V4L2_XFER_FUNC_ADOBERGB``. The default Y'CbCr encoding is
182 ++The :ref:`oprgb` standard defines the colorspace used by computer
183 ++graphics that use the opRGB colorspace. The default transfer function is
184 ++``V4L2_XFER_FUNC_OPRGB``. The default Y'CbCr encoding is
185 + ``V4L2_YCBCR_ENC_601``. The default Y'CbCr quantization is limited
186 + range.
187 +
188 +@@ -312,7 +311,7 @@ The chromaticities of the primary colors and the white reference are:
189 +
190 + .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
191 +
192 +-.. flat-table:: Adobe RGB Chromaticities
193 ++.. flat-table:: opRGB Chromaticities
194 + :header-rows: 1
195 + :stub-columns: 0
196 + :widths: 1 1 2
197 +diff --git a/Documentation/media/videodev2.h.rst.exceptions b/Documentation/media/videodev2.h.rst.exceptions
198 +index a5cb0a8686ac..8813ff9c42b9 100644
199 +--- a/Documentation/media/videodev2.h.rst.exceptions
200 ++++ b/Documentation/media/videodev2.h.rst.exceptions
201 +@@ -56,7 +56,8 @@ replace symbol V4L2_MEMORY_USERPTR :c:type:`v4l2_memory`
202 + # Documented enum v4l2_colorspace
203 + replace symbol V4L2_COLORSPACE_470_SYSTEM_BG :c:type:`v4l2_colorspace`
204 + replace symbol V4L2_COLORSPACE_470_SYSTEM_M :c:type:`v4l2_colorspace`
205 +-replace symbol V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
206 ++replace symbol V4L2_COLORSPACE_OPRGB :c:type:`v4l2_colorspace`
207 ++replace define V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
208 + replace symbol V4L2_COLORSPACE_BT2020 :c:type:`v4l2_colorspace`
209 + replace symbol V4L2_COLORSPACE_DCI_P3 :c:type:`v4l2_colorspace`
210 + replace symbol V4L2_COLORSPACE_DEFAULT :c:type:`v4l2_colorspace`
211 +@@ -69,7 +70,8 @@ replace symbol V4L2_COLORSPACE_SRGB :c:type:`v4l2_colorspace`
212 +
213 + # Documented enum v4l2_xfer_func
214 + replace symbol V4L2_XFER_FUNC_709 :c:type:`v4l2_xfer_func`
215 +-replace symbol V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
216 ++replace symbol V4L2_XFER_FUNC_OPRGB :c:type:`v4l2_xfer_func`
217 ++replace define V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
218 + replace symbol V4L2_XFER_FUNC_DCI_P3 :c:type:`v4l2_xfer_func`
219 + replace symbol V4L2_XFER_FUNC_DEFAULT :c:type:`v4l2_xfer_func`
220 + replace symbol V4L2_XFER_FUNC_NONE :c:type:`v4l2_xfer_func`
221 +diff --git a/Makefile b/Makefile
222 +index 7b35c1ec0427..71642133ba22 100644
223 +--- a/Makefile
224 ++++ b/Makefile
225 +@@ -1,7 +1,7 @@
226 + # SPDX-License-Identifier: GPL-2.0
227 + VERSION = 4
228 + PATCHLEVEL = 18
229 +-SUBLEVEL = 18
230 ++SUBLEVEL = 19
231 + EXTRAVERSION =
232 + NAME = Merciless Moray
233 +
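[Editorial aside, not part of the patch: the Makefile hunk above bumps SUBLEVEL from 18 to 19. A quick sketch of how these variables compose into the kernel release string, assuming the standard $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) composition used by the top-level Makefile:]

```python
# Values from the patched Makefile above.
VERSION, PATCHLEVEL, SUBLEVEL, EXTRAVERSION = 4, 18, 19, ""

kernelversion = f"{VERSION}.{PATCHLEVEL}.{SUBLEVEL}{EXTRAVERSION}"
print(kernelversion)  # 4.18.19
```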
234 +diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
235 +index a0ddf497e8cd..2cb45ddd2ae3 100644
236 +--- a/arch/arm/boot/dts/dra7.dtsi
237 ++++ b/arch/arm/boot/dts/dra7.dtsi
238 +@@ -354,7 +354,7 @@
239 + ti,hwmods = "pcie1";
240 + phys = <&pcie1_phy>;
241 + phy-names = "pcie-phy0";
242 +- ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
243 ++ ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
244 + status = "disabled";
245 + };
246 + };
247 +diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
248 +index 962af97c1883..aff5d66ae058 100644
249 +--- a/arch/arm/boot/dts/exynos3250.dtsi
250 ++++ b/arch/arm/boot/dts/exynos3250.dtsi
251 +@@ -78,6 +78,22 @@
252 + compatible = "arm,cortex-a7";
253 + reg = <1>;
254 + clock-frequency = <1000000000>;
255 ++ clocks = <&cmu CLK_ARM_CLK>;
256 ++ clock-names = "cpu";
257 ++ #cooling-cells = <2>;
258 ++
259 ++ operating-points = <
260 ++ 1000000 1150000
261 ++ 900000 1112500
262 ++ 800000 1075000
263 ++ 700000 1037500
264 ++ 600000 1000000
265 ++ 500000 962500
266 ++ 400000 925000
267 ++ 300000 887500
268 ++ 200000 850000
269 ++ 100000 850000
270 ++ >;
271 + };
272 + };
273 +
274 +diff --git a/arch/arm/boot/dts/exynos4210-origen.dts b/arch/arm/boot/dts/exynos4210-origen.dts
275 +index 2ab99f9f3d0a..dd9ec05eb0f7 100644
276 +--- a/arch/arm/boot/dts/exynos4210-origen.dts
277 ++++ b/arch/arm/boot/dts/exynos4210-origen.dts
278 +@@ -151,6 +151,8 @@
279 + reg = <0x66>;
280 + interrupt-parent = <&gpx0>;
281 + interrupts = <4 IRQ_TYPE_NONE>, <3 IRQ_TYPE_NONE>;
282 ++ pinctrl-names = "default";
283 ++ pinctrl-0 = <&max8997_irq>;
284 +
285 + max8997,pmic-buck1-dvs-voltage = <1350000>;
286 + max8997,pmic-buck2-dvs-voltage = <1100000>;
287 +@@ -288,6 +290,13 @@
288 + };
289 + };
290 +
291 ++&pinctrl_1 {
292 ++ max8997_irq: max8997-irq {
293 ++ samsung,pins = "gpx0-3", "gpx0-4";
294 ++ samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
295 ++ };
296 ++};
297 ++
298 + &sdhci_0 {
299 + bus-width = <4>;
300 + pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus4 &sd0_cd>;
301 +diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
302 +index 88fb47cef9a8..b6091c27f155 100644
303 +--- a/arch/arm/boot/dts/exynos4210.dtsi
304 ++++ b/arch/arm/boot/dts/exynos4210.dtsi
305 +@@ -55,6 +55,19 @@
306 + device_type = "cpu";
307 + compatible = "arm,cortex-a9";
308 + reg = <0x901>;
309 ++ clocks = <&clock CLK_ARM_CLK>;
310 ++ clock-names = "cpu";
311 ++ clock-latency = <160000>;
312 ++
313 ++ operating-points = <
314 ++ 1200000 1250000
315 ++ 1000000 1150000
316 ++ 800000 1075000
317 ++ 500000 975000
318 ++ 400000 975000
319 ++ 200000 950000
320 ++ >;
321 ++ #cooling-cells = <2>; /* min followed by max */
322 + };
323 + };
324 +
325 +diff --git a/arch/arm/boot/dts/exynos4412.dtsi b/arch/arm/boot/dts/exynos4412.dtsi
326 +index 7b43c10c510b..51f72f0327e5 100644
327 +--- a/arch/arm/boot/dts/exynos4412.dtsi
328 ++++ b/arch/arm/boot/dts/exynos4412.dtsi
329 +@@ -49,21 +49,30 @@
330 + device_type = "cpu";
331 + compatible = "arm,cortex-a9";
332 + reg = <0xA01>;
333 ++ clocks = <&clock CLK_ARM_CLK>;
334 ++ clock-names = "cpu";
335 + operating-points-v2 = <&cpu0_opp_table>;
336 ++ #cooling-cells = <2>; /* min followed by max */
337 + };
338 +
339 + cpu@a02 {
340 + device_type = "cpu";
341 + compatible = "arm,cortex-a9";
342 + reg = <0xA02>;
343 ++ clocks = <&clock CLK_ARM_CLK>;
344 ++ clock-names = "cpu";
345 + operating-points-v2 = <&cpu0_opp_table>;
346 ++ #cooling-cells = <2>; /* min followed by max */
347 + };
348 +
349 + cpu@a03 {
350 + device_type = "cpu";
351 + compatible = "arm,cortex-a9";
352 + reg = <0xA03>;
353 ++ clocks = <&clock CLK_ARM_CLK>;
354 ++ clock-names = "cpu";
355 + operating-points-v2 = <&cpu0_opp_table>;
356 ++ #cooling-cells = <2>; /* min followed by max */
357 + };
358 + };
359 +
360 +diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
361 +index 2daf505b3d08..f04adf72b80e 100644
362 +--- a/arch/arm/boot/dts/exynos5250.dtsi
363 ++++ b/arch/arm/boot/dts/exynos5250.dtsi
364 +@@ -54,36 +54,106 @@
365 + device_type = "cpu";
366 + compatible = "arm,cortex-a15";
367 + reg = <0>;
368 +- clock-frequency = <1700000000>;
369 + clocks = <&clock CLK_ARM_CLK>;
370 + clock-names = "cpu";
371 +- clock-latency = <140000>;
372 +-
373 +- operating-points = <
374 +- 1700000 1300000
375 +- 1600000 1250000
376 +- 1500000 1225000
377 +- 1400000 1200000
378 +- 1300000 1150000
379 +- 1200000 1125000
380 +- 1100000 1100000
381 +- 1000000 1075000
382 +- 900000 1050000
383 +- 800000 1025000
384 +- 700000 1012500
385 +- 600000 1000000
386 +- 500000 975000
387 +- 400000 950000
388 +- 300000 937500
389 +- 200000 925000
390 +- >;
391 ++ operating-points-v2 = <&cpu0_opp_table>;
392 + #cooling-cells = <2>; /* min followed by max */
393 + };
394 + cpu@1 {
395 + device_type = "cpu";
396 + compatible = "arm,cortex-a15";
397 + reg = <1>;
398 +- clock-frequency = <1700000000>;
399 ++ clocks = <&clock CLK_ARM_CLK>;
400 ++ clock-names = "cpu";
401 ++ operating-points-v2 = <&cpu0_opp_table>;
402 ++ #cooling-cells = <2>; /* min followed by max */
403 ++ };
404 ++ };
405 ++
406 ++ cpu0_opp_table: opp_table0 {
407 ++ compatible = "operating-points-v2";
408 ++ opp-shared;
409 ++
410 ++ opp-200000000 {
411 ++ opp-hz = /bits/ 64 <200000000>;
412 ++ opp-microvolt = <925000>;
413 ++ clock-latency-ns = <140000>;
414 ++ };
415 ++ opp-300000000 {
416 ++ opp-hz = /bits/ 64 <300000000>;
417 ++ opp-microvolt = <937500>;
418 ++ clock-latency-ns = <140000>;
419 ++ };
420 ++ opp-400000000 {
421 ++ opp-hz = /bits/ 64 <400000000>;
422 ++ opp-microvolt = <950000>;
423 ++ clock-latency-ns = <140000>;
424 ++ };
425 ++ opp-500000000 {
426 ++ opp-hz = /bits/ 64 <500000000>;
427 ++ opp-microvolt = <975000>;
428 ++ clock-latency-ns = <140000>;
429 ++ };
430 ++ opp-600000000 {
431 ++ opp-hz = /bits/ 64 <600000000>;
432 ++ opp-microvolt = <1000000>;
433 ++ clock-latency-ns = <140000>;
434 ++ };
435 ++ opp-700000000 {
436 ++ opp-hz = /bits/ 64 <700000000>;
437 ++ opp-microvolt = <1012500>;
438 ++ clock-latency-ns = <140000>;
439 ++ };
440 ++ opp-800000000 {
441 ++ opp-hz = /bits/ 64 <800000000>;
442 ++ opp-microvolt = <1025000>;
443 ++ clock-latency-ns = <140000>;
444 ++ };
445 ++ opp-900000000 {
446 ++ opp-hz = /bits/ 64 <900000000>;
447 ++ opp-microvolt = <1050000>;
448 ++ clock-latency-ns = <140000>;
449 ++ };
450 ++ opp-1000000000 {
451 ++ opp-hz = /bits/ 64 <1000000000>;
452 ++ opp-microvolt = <1075000>;
453 ++ clock-latency-ns = <140000>;
454 ++ opp-suspend;
455 ++ };
456 ++ opp-1100000000 {
457 ++ opp-hz = /bits/ 64 <1100000000>;
458 ++ opp-microvolt = <1100000>;
459 ++ clock-latency-ns = <140000>;
460 ++ };
461 ++ opp-1200000000 {
462 ++ opp-hz = /bits/ 64 <1200000000>;
463 ++ opp-microvolt = <1125000>;
464 ++ clock-latency-ns = <140000>;
465 ++ };
466 ++ opp-1300000000 {
467 ++ opp-hz = /bits/ 64 <1300000000>;
468 ++ opp-microvolt = <1150000>;
469 ++ clock-latency-ns = <140000>;
470 ++ };
471 ++ opp-1400000000 {
472 ++ opp-hz = /bits/ 64 <1400000000>;
473 ++ opp-microvolt = <1200000>;
474 ++ clock-latency-ns = <140000>;
475 ++ };
476 ++ opp-1500000000 {
477 ++ opp-hz = /bits/ 64 <1500000000>;
478 ++ opp-microvolt = <1225000>;
479 ++ clock-latency-ns = <140000>;
480 ++ };
481 ++ opp-1600000000 {
482 ++ opp-hz = /bits/ 64 <1600000000>;
483 ++ opp-microvolt = <1250000>;
484 ++ clock-latency-ns = <140000>;
485 ++ };
486 ++ opp-1700000000 {
487 ++ opp-hz = /bits/ 64 <1700000000>;
488 ++ opp-microvolt = <1300000>;
489 ++ clock-latency-ns = <140000>;
490 + };
491 + };
492 +
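[Editorial aside, not part of the patch: the exynos5250 hunk above converts a legacy `operating-points` table of <kHz microvolt> pairs into `operating-points-v2` nodes. A hypothetical generator showing how one legacy pair maps onto a node of the shape added above (node names and properties follow the hunk; the helper itself is illustrative only):]

```python
def opp_node(khz: int, uv: int, latency_ns: int = 140000) -> str:
    """Render one legacy <kHz microvolt> pair as an opp-v2 node."""
    hz = khz * 1000  # legacy operating-points are in kHz, opp-hz is in Hz
    return (f"opp-{hz} {{\n"
            f"\topp-hz = /bits/ 64 <{hz}>;\n"
            f"\topp-microvolt = <{uv}>;\n"
            f"\tclock-latency-ns = <{latency_ns}>;\n"
            "};")

# Legacy pair "200000 925000" becomes the opp-200000000 node above.
print(opp_node(200000, 925000))
```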
493 +diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
494 +index 791ca15c799e..bd1985694bca 100644
495 +--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
496 ++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
497 +@@ -601,7 +601,7 @@
498 + status = "disabled";
499 + };
500 +
501 +- sdr: sdr@ffc25000 {
502 ++ sdr: sdr@ffcfb100 {
503 + compatible = "altr,sdr-ctl", "syscon";
504 + reg = <0xffcfb100 0x80>;
505 + };
506 +diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
507 +index 925d1364727a..b8e69fe282b8 100644
508 +--- a/arch/arm/crypto/Kconfig
509 ++++ b/arch/arm/crypto/Kconfig
510 +@@ -121,10 +121,4 @@ config CRYPTO_CHACHA20_NEON
511 + select CRYPTO_BLKCIPHER
512 + select CRYPTO_CHACHA20
513 +
514 +-config CRYPTO_SPECK_NEON
515 +- tristate "NEON accelerated Speck cipher algorithms"
516 +- depends on KERNEL_MODE_NEON
517 +- select CRYPTO_BLKCIPHER
518 +- select CRYPTO_SPECK
519 +-
520 + endif
521 +diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
522 +index 8de542c48ade..bd5bceef0605 100644
523 +--- a/arch/arm/crypto/Makefile
524 ++++ b/arch/arm/crypto/Makefile
525 +@@ -10,7 +10,6 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
526 + obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
527 + obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
528 + obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
529 +-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
530 +
531 + ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
532 + ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
533 +@@ -54,7 +53,6 @@ ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o
534 + crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
535 + crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
536 + chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
537 +-speck-neon-y := speck-neon-core.o speck-neon-glue.o
538 +
539 + ifdef REGENERATE_ARM_CRYPTO
540 + quiet_cmd_perl = PERL $@
541 +diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S
542 +deleted file mode 100644
543 +index 57caa742016e..000000000000
544 +--- a/arch/arm/crypto/speck-neon-core.S
545 ++++ /dev/null
546 +@@ -1,434 +0,0 @@
547 +-// SPDX-License-Identifier: GPL-2.0
548 +-/*
549 +- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
550 +- *
551 +- * Copyright (c) 2018 Google, Inc
552 +- *
553 +- * Author: Eric Biggers <ebiggers@××××××.com>
554 +- */
555 +-
556 +-#include <linux/linkage.h>
557 +-
558 +- .text
559 +- .fpu neon
560 +-
561 +- // arguments
562 +- ROUND_KEYS .req r0 // const {u64,u32} *round_keys
563 +- NROUNDS .req r1 // int nrounds
564 +- DST .req r2 // void *dst
565 +- SRC .req r3 // const void *src
566 +- NBYTES .req r4 // unsigned int nbytes
567 +- TWEAK .req r5 // void *tweak
568 +-
569 +- // registers which hold the data being encrypted/decrypted
570 +- X0 .req q0
571 +- X0_L .req d0
572 +- X0_H .req d1
573 +- Y0 .req q1
574 +- Y0_H .req d3
575 +- X1 .req q2
576 +- X1_L .req d4
577 +- X1_H .req d5
578 +- Y1 .req q3
579 +- Y1_H .req d7
580 +- X2 .req q4
581 +- X2_L .req d8
582 +- X2_H .req d9
583 +- Y2 .req q5
584 +- Y2_H .req d11
585 +- X3 .req q6
586 +- X3_L .req d12
587 +- X3_H .req d13
588 +- Y3 .req q7
589 +- Y3_H .req d15
590 +-
591 +- // the round key, duplicated in all lanes
592 +- ROUND_KEY .req q8
593 +- ROUND_KEY_L .req d16
594 +- ROUND_KEY_H .req d17
595 +-
596 +- // index vector for vtbl-based 8-bit rotates
597 +- ROTATE_TABLE .req d18
598 +-
599 +- // multiplication table for updating XTS tweaks
600 +- GF128MUL_TABLE .req d19
601 +- GF64MUL_TABLE .req d19
602 +-
603 +- // current XTS tweak value(s)
604 +- TWEAKV .req q10
605 +- TWEAKV_L .req d20
606 +- TWEAKV_H .req d21
607 +-
608 +- TMP0 .req q12
609 +- TMP0_L .req d24
610 +- TMP0_H .req d25
611 +- TMP1 .req q13
612 +- TMP2 .req q14
613 +- TMP3 .req q15
614 +-
615 +- .align 4
616 +-.Lror64_8_table:
617 +- .byte 1, 2, 3, 4, 5, 6, 7, 0
618 +-.Lror32_8_table:
619 +- .byte 1, 2, 3, 0, 5, 6, 7, 4
620 +-.Lrol64_8_table:
621 +- .byte 7, 0, 1, 2, 3, 4, 5, 6
622 +-.Lrol32_8_table:
623 +- .byte 3, 0, 1, 2, 7, 4, 5, 6
624 +-.Lgf128mul_table:
625 +- .byte 0, 0x87
626 +- .fill 14
627 +-.Lgf64mul_table:
628 +- .byte 0, 0x1b, (0x1b << 1), (0x1b << 1) ^ 0x1b
629 +- .fill 12
630 +-
631 +-/*
632 +- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
633 +- *
634 +- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
635 +- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
636 +- * of ROUND_KEY. 'n' is the lane size: 64 for Speck128, or 32 for Speck64.
637 +- *
638 +- * The 8-bit rotates are implemented using vtbl instead of vshr + vsli because
639 +- * the vtbl approach is faster on some processors and the same speed on others.
640 +- */
641 +-.macro _speck_round_128bytes n
642 +-
643 +- // x = ror(x, 8)
644 +- vtbl.8 X0_L, {X0_L}, ROTATE_TABLE
645 +- vtbl.8 X0_H, {X0_H}, ROTATE_TABLE
646 +- vtbl.8 X1_L, {X1_L}, ROTATE_TABLE
647 +- vtbl.8 X1_H, {X1_H}, ROTATE_TABLE
648 +- vtbl.8 X2_L, {X2_L}, ROTATE_TABLE
649 +- vtbl.8 X2_H, {X2_H}, ROTATE_TABLE
650 +- vtbl.8 X3_L, {X3_L}, ROTATE_TABLE
651 +- vtbl.8 X3_H, {X3_H}, ROTATE_TABLE
652 +-
653 +- // x += y
654 +- vadd.u\n X0, Y0
655 +- vadd.u\n X1, Y1
656 +- vadd.u\n X2, Y2
657 +- vadd.u\n X3, Y3
658 +-
659 +- // x ^= k
660 +- veor X0, ROUND_KEY
661 +- veor X1, ROUND_KEY
662 +- veor X2, ROUND_KEY
663 +- veor X3, ROUND_KEY
664 +-
665 +- // y = rol(y, 3)
666 +- vshl.u\n TMP0, Y0, #3
667 +- vshl.u\n TMP1, Y1, #3
668 +- vshl.u\n TMP2, Y2, #3
669 +- vshl.u\n TMP3, Y3, #3
670 +- vsri.u\n TMP0, Y0, #(\n - 3)
671 +- vsri.u\n TMP1, Y1, #(\n - 3)
672 +- vsri.u\n TMP2, Y2, #(\n - 3)
673 +- vsri.u\n TMP3, Y3, #(\n - 3)
674 +-
675 +- // y ^= x
676 +- veor Y0, TMP0, X0
677 +- veor Y1, TMP1, X1
678 +- veor Y2, TMP2, X2
679 +- veor Y3, TMP3, X3
680 +-.endm
681 +-
682 +-/*
683 +- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
684 +- *
685 +- * This is the inverse of _speck_round_128bytes().
686 +- */
687 +-.macro _speck_unround_128bytes n
688 +-
689 +- // y ^= x
690 +- veor TMP0, Y0, X0
691 +- veor TMP1, Y1, X1
692 +- veor TMP2, Y2, X2
693 +- veor TMP3, Y3, X3
694 +-
695 +- // y = ror(y, 3)
696 +- vshr.u\n Y0, TMP0, #3
697 +- vshr.u\n Y1, TMP1, #3
698 +- vshr.u\n Y2, TMP2, #3
699 +- vshr.u\n Y3, TMP3, #3
700 +- vsli.u\n Y0, TMP0, #(\n - 3)
701 +- vsli.u\n Y1, TMP1, #(\n - 3)
702 +- vsli.u\n Y2, TMP2, #(\n - 3)
703 +- vsli.u\n Y3, TMP3, #(\n - 3)
704 +-
705 +- // x ^= k
706 +- veor X0, ROUND_KEY
707 +- veor X1, ROUND_KEY
708 +- veor X2, ROUND_KEY
709 +- veor X3, ROUND_KEY
710 +-
711 +- // x -= y
712 +- vsub.u\n X0, Y0
713 +- vsub.u\n X1, Y1
714 +- vsub.u\n X2, Y2
715 +- vsub.u\n X3, Y3
716 +-
717 +- // x = rol(x, 8);
718 +- vtbl.8 X0_L, {X0_L}, ROTATE_TABLE
719 +- vtbl.8 X0_H, {X0_H}, ROTATE_TABLE
720 +- vtbl.8 X1_L, {X1_L}, ROTATE_TABLE
721 +- vtbl.8 X1_H, {X1_H}, ROTATE_TABLE
722 +- vtbl.8 X2_L, {X2_L}, ROTATE_TABLE
723 +- vtbl.8 X2_H, {X2_H}, ROTATE_TABLE
724 +- vtbl.8 X3_L, {X3_L}, ROTATE_TABLE
725 +- vtbl.8 X3_H, {X3_H}, ROTATE_TABLE
726 +-.endm
727 +-
728 +-.macro _xts128_precrypt_one dst_reg, tweak_buf, tmp
729 +-
730 +- // Load the next source block
731 +- vld1.8 {\dst_reg}, [SRC]!
732 +-
733 +- // Save the current tweak in the tweak buffer
734 +- vst1.8 {TWEAKV}, [\tweak_buf:128]!
735 +-
736 +- // XOR the next source block with the current tweak
737 +- veor \dst_reg, TWEAKV
738 +-
739 +- /*
740 +- * Calculate the next tweak by multiplying the current one by x,
741 +- * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
742 +- */
743 +- vshr.u64 \tmp, TWEAKV, #63
744 +- vshl.u64 TWEAKV, #1
745 +- veor TWEAKV_H, \tmp\()_L
746 +- vtbl.8 \tmp\()_H, {GF128MUL_TABLE}, \tmp\()_H
747 +- veor TWEAKV_L, \tmp\()_H
748 +-.endm
749 +-
750 +-.macro _xts64_precrypt_two dst_reg, tweak_buf, tmp
751 +-
752 +- // Load the next two source blocks
753 +- vld1.8 {\dst_reg}, [SRC]!
754 +-
755 +- // Save the current two tweaks in the tweak buffer
756 +- vst1.8 {TWEAKV}, [\tweak_buf:128]!
757 +-
758 +- // XOR the next two source blocks with the current two tweaks
759 +- veor \dst_reg, TWEAKV
760 +-
761 +- /*
762 +- * Calculate the next two tweaks by multiplying the current ones by x^2,
763 +- * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
764 +- */
765 +- vshr.u64 \tmp, TWEAKV, #62
766 +- vshl.u64 TWEAKV, #2
767 +- vtbl.8 \tmp\()_L, {GF64MUL_TABLE}, \tmp\()_L
768 +- vtbl.8 \tmp\()_H, {GF64MUL_TABLE}, \tmp\()_H
769 +- veor TWEAKV, \tmp
770 +-.endm
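[Editorial aside, not part of the patch: the deleted macros above update XTS tweaks by multiplying by x modulo p(x) = x^128 + x^7 + x^2 + x + 1 (reduction constant 0x87) and, for the 64-bit variant, modulo x^64 + x^4 + x^3 + x + 1 (0x1b, matching .Lgf64mul_table). A plain-Python sketch of those two GF multiplications:]

```python
def xts_double128(tweak: int) -> int:
    """Multiply by x in GF(2^128) mod x^128 + x^7 + x^2 + x + 1."""
    tweak <<= 1
    if tweak >> 128:  # carry out of bit 127: reduce by the polynomial
        tweak = (tweak ^ 0x87) & ((1 << 128) - 1)
    return tweak

def xts_double64(tweak: int) -> int:
    """Multiply by x in GF(2^64) mod x^64 + x^4 + x^3 + x + 1."""
    tweak <<= 1
    if tweak >> 64:  # carry out of bit 63: reduce by the polynomial
        tweak = (tweak ^ 0x1B) & ((1 << 64) - 1)
    return tweak

print(hex(xts_double128(1 << 127)))  # 0x87
```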
771 +-
772 +-/*
773 +- * _speck_xts_crypt() - Speck-XTS encryption/decryption
774 +- *
775 +- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
776 +- * using Speck-XTS, specifically the variant with a block size of '2n' and round
777 +- * count given by NROUNDS. The expanded round keys are given in ROUND_KEYS, and
778 +- * the current XTS tweak value is given in TWEAK. It's assumed that NBYTES is a
779 +- * nonzero multiple of 128.
780 +- */
781 +-.macro _speck_xts_crypt n, decrypting
782 +- push {r4-r7}
783 +- mov r7, sp
784 +-
785 +- /*
786 +- * The first four parameters were passed in registers r0-r3. Load the
787 +- * additional parameters, which were passed on the stack.
788 +- */
789 +- ldr NBYTES, [sp, #16]
790 +- ldr TWEAK, [sp, #20]
791 +-
792 +- /*
793 +- * If decrypting, modify the ROUND_KEYS parameter to point to the last
794 +- * round key rather than the first, since for decryption the round keys
795 +- * are used in reverse order.
796 +- */
797 +-.if \decrypting
798 +-.if \n == 64
799 +- add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #3
800 +- sub ROUND_KEYS, #8
801 +-.else
802 +- add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #2
803 +- sub ROUND_KEYS, #4
804 +-.endif
805 +-.endif
806 +-
807 +- // Load the index vector for vtbl-based 8-bit rotates
808 +-.if \decrypting
809 +- ldr r12, =.Lrol\n\()_8_table
810 +-.else
811 +- ldr r12, =.Lror\n\()_8_table
812 +-.endif
813 +- vld1.8 {ROTATE_TABLE}, [r12:64]
814 +-
815 +- // One-time XTS preparation
816 +-
817 +- /*
818 +- * Allocate stack space to store 128 bytes worth of tweaks. For
819 +- * performance, this space is aligned to a 16-byte boundary so that we
820 +- * can use the load/store instructions that declare 16-byte alignment.
821 +- * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
822 +- */
823 +- sub r12, sp, #128
824 +- bic r12, #0xf
825 +- mov sp, r12
826 +-
827 +-.if \n == 64
828 +- // Load first tweak
829 +- vld1.8 {TWEAKV}, [TWEAK]
830 +-
831 +- // Load GF(2^128) multiplication table
832 +- ldr r12, =.Lgf128mul_table
833 +- vld1.8 {GF128MUL_TABLE}, [r12:64]
834 +-.else
835 +- // Load first tweak
836 +- vld1.8 {TWEAKV_L}, [TWEAK]
837 +-
838 +- // Load GF(2^64) multiplication table
839 +- ldr r12, =.Lgf64mul_table
840 +- vld1.8 {GF64MUL_TABLE}, [r12:64]
841 +-
842 +- // Calculate second tweak, packing it together with the first
843 +- vshr.u64 TMP0_L, TWEAKV_L, #63
844 +- vtbl.u8 TMP0_L, {GF64MUL_TABLE}, TMP0_L
845 +- vshl.u64 TWEAKV_H, TWEAKV_L, #1
846 +- veor TWEAKV_H, TMP0_L
847 +-.endif
848 +-
849 +-.Lnext_128bytes_\@:
850 +-
851 +- /*
852 +- * Load the source blocks into {X,Y}[0-3], XOR them with their XTS tweak
853 +- * values, and save the tweaks on the stack for later. Then
854 +- * de-interleave the 'x' and 'y' elements of each block, i.e. make it so
855 +- * that the X[0-3] registers contain only the second halves of blocks,
856 +- * and the Y[0-3] registers contain only the first halves of blocks.
857 +- * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
858 +- */
859 +- mov r12, sp
860 +-.if \n == 64
861 +- _xts128_precrypt_one X0, r12, TMP0
862 +- _xts128_precrypt_one Y0, r12, TMP0
863 +- _xts128_precrypt_one X1, r12, TMP0
864 +- _xts128_precrypt_one Y1, r12, TMP0
865 +- _xts128_precrypt_one X2, r12, TMP0
866 +- _xts128_precrypt_one Y2, r12, TMP0
867 +- _xts128_precrypt_one X3, r12, TMP0
868 +- _xts128_precrypt_one Y3, r12, TMP0
869 +- vswp X0_L, Y0_H
870 +- vswp X1_L, Y1_H
871 +- vswp X2_L, Y2_H
872 +- vswp X3_L, Y3_H
873 +-.else
874 +- _xts64_precrypt_two X0, r12, TMP0
875 +- _xts64_precrypt_two Y0, r12, TMP0
876 +- _xts64_precrypt_two X1, r12, TMP0
877 +- _xts64_precrypt_two Y1, r12, TMP0
878 +- _xts64_precrypt_two X2, r12, TMP0
879 +- _xts64_precrypt_two Y2, r12, TMP0
880 +- _xts64_precrypt_two X3, r12, TMP0
881 +- _xts64_precrypt_two Y3, r12, TMP0
882 +- vuzp.32 Y0, X0
883 +- vuzp.32 Y1, X1
884 +- vuzp.32 Y2, X2
885 +- vuzp.32 Y3, X3
886 +-.endif
887 +-
888 +- // Do the cipher rounds
889 +-
890 +- mov r12, ROUND_KEYS
891 +- mov r6, NROUNDS
892 +-
893 +-.Lnext_round_\@:
894 +-.if \decrypting
895 +-.if \n == 64
896 +- vld1.64 ROUND_KEY_L, [r12]
897 +- sub r12, #8
898 +- vmov ROUND_KEY_H, ROUND_KEY_L
899 +-.else
900 +- vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]
901 +- sub r12, #4
902 +-.endif
903 +- _speck_unround_128bytes \n
904 +-.else
905 +-.if \n == 64
906 +- vld1.64 ROUND_KEY_L, [r12]!
907 +- vmov ROUND_KEY_H, ROUND_KEY_L
908 +-.else
909 +- vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]!
910 +-.endif
911 +- _speck_round_128bytes \n
912 +-.endif
913 +- subs r6, r6, #1
914 +- bne .Lnext_round_\@
915 +-
916 +- // Re-interleave the 'x' and 'y' elements of each block
917 +-.if \n == 64
918 +- vswp X0_L, Y0_H
919 +- vswp X1_L, Y1_H
920 +- vswp X2_L, Y2_H
921 +- vswp X3_L, Y3_H
922 +-.else
923 +- vzip.32 Y0, X0
924 +- vzip.32 Y1, X1
925 +- vzip.32 Y2, X2
926 +- vzip.32 Y3, X3
927 +-.endif
928 +-
929 +- // XOR the encrypted/decrypted blocks with the tweaks we saved earlier
930 +- mov r12, sp
931 +- vld1.8 {TMP0, TMP1}, [r12:128]!
932 +- vld1.8 {TMP2, TMP3}, [r12:128]!
933 +- veor X0, TMP0
934 +- veor Y0, TMP1
935 +- veor X1, TMP2
936 +- veor Y1, TMP3
937 +- vld1.8 {TMP0, TMP1}, [r12:128]!
938 +- vld1.8 {TMP2, TMP3}, [r12:128]!
939 +- veor X2, TMP0
940 +- veor Y2, TMP1
941 +- veor X3, TMP2
942 +- veor Y3, TMP3
943 +-
944 +- // Store the ciphertext in the destination buffer
945 +- vst1.8 {X0, Y0}, [DST]!
946 +- vst1.8 {X1, Y1}, [DST]!
947 +- vst1.8 {X2, Y2}, [DST]!
948 +- vst1.8 {X3, Y3}, [DST]!
949 +-
950 +- // Continue if there are more 128-byte chunks remaining, else return
951 +- subs NBYTES, #128
952 +- bne .Lnext_128bytes_\@
953 +-
954 +- // Store the next tweak
955 +-.if \n == 64
956 +- vst1.8 {TWEAKV}, [TWEAK]
957 +-.else
958 +- vst1.8 {TWEAKV_L}, [TWEAK]
959 +-.endif
960 +-
961 +- mov sp, r7
962 +- pop {r4-r7}
963 +- bx lr
964 +-.endm
965 +-
966 +-ENTRY(speck128_xts_encrypt_neon)
967 +- _speck_xts_crypt n=64, decrypting=0
968 +-ENDPROC(speck128_xts_encrypt_neon)
969 +-
970 +-ENTRY(speck128_xts_decrypt_neon)
971 +- _speck_xts_crypt n=64, decrypting=1
972 +-ENDPROC(speck128_xts_decrypt_neon)
973 +-
974 +-ENTRY(speck64_xts_encrypt_neon)
975 +- _speck_xts_crypt n=32, decrypting=0
976 +-ENDPROC(speck64_xts_encrypt_neon)
977 +-
978 +-ENTRY(speck64_xts_decrypt_neon)
979 +- _speck_xts_crypt n=32, decrypting=1
980 +-ENDPROC(speck64_xts_decrypt_neon)
981 +diff --git a/arch/arm/crypto/speck-neon-glue.c b/arch/arm/crypto/speck-neon-glue.c
982 +deleted file mode 100644
983 +index f012c3ea998f..000000000000
984 +--- a/arch/arm/crypto/speck-neon-glue.c
985 ++++ /dev/null
986 +@@ -1,288 +0,0 @@
987 +-// SPDX-License-Identifier: GPL-2.0
988 +-/*
989 +- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
990 +- *
991 +- * Copyright (c) 2018 Google, Inc
992 +- *
993 +- * Note: the NIST recommendation for XTS only specifies a 128-bit block size,
994 +- * but a 64-bit version (needed for Speck64) is fairly straightforward; the math
995 +- * is just done in GF(2^64) instead of GF(2^128), with the reducing polynomial
996 +- * x^64 + x^4 + x^3 + x + 1 from the original XEX paper (Rogaway, 2004:
997 +- * "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes
998 +- * OCB and PMAC"), represented as 0x1B.
999 +- */
1000 +-
1001 +-#include <asm/hwcap.h>
1002 +-#include <asm/neon.h>
1003 +-#include <asm/simd.h>
1004 +-#include <crypto/algapi.h>
1005 +-#include <crypto/gf128mul.h>
1006 +-#include <crypto/internal/skcipher.h>
1007 +-#include <crypto/speck.h>
1008 +-#include <crypto/xts.h>
1009 +-#include <linux/kernel.h>
1010 +-#include <linux/module.h>
1011 +-
1012 +-/* The assembly functions only handle multiples of 128 bytes */
1013 +-#define SPECK_NEON_CHUNK_SIZE 128
1014 +-
1015 +-/* Speck128 */
1016 +-
1017 +-struct speck128_xts_tfm_ctx {
1018 +- struct speck128_tfm_ctx main_key;
1019 +- struct speck128_tfm_ctx tweak_key;
1020 +-};
1021 +-
1022 +-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
1023 +- void *dst, const void *src,
1024 +- unsigned int nbytes, void *tweak);
1025 +-
1026 +-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
1027 +- void *dst, const void *src,
1028 +- unsigned int nbytes, void *tweak);
1029 +-
1030 +-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
1031 +- u8 *, const u8 *);
1032 +-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
1033 +- const void *, unsigned int, void *);
1034 +-
1035 +-static __always_inline int
1036 +-__speck128_xts_crypt(struct skcipher_request *req,
1037 +- speck128_crypt_one_t crypt_one,
1038 +- speck128_xts_crypt_many_t crypt_many)
1039 +-{
1040 +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
1041 +- const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1042 +- struct skcipher_walk walk;
1043 +- le128 tweak;
1044 +- int err;
1045 +-
1046 +- err = skcipher_walk_virt(&walk, req, true);
1047 +-
1048 +- crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
1049 +-
1050 +- while (walk.nbytes > 0) {
1051 +- unsigned int nbytes = walk.nbytes;
1052 +- u8 *dst = walk.dst.virt.addr;
1053 +- const u8 *src = walk.src.virt.addr;
1054 +-
1055 +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
1056 +- unsigned int count;
1057 +-
1058 +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
1059 +- kernel_neon_begin();
1060 +- (*crypt_many)(ctx->main_key.round_keys,
1061 +- ctx->main_key.nrounds,
1062 +- dst, src, count, &tweak);
1063 +- kernel_neon_end();
1064 +- dst += count;
1065 +- src += count;
1066 +- nbytes -= count;
1067 +- }
1068 +-
1069 +- /* Handle any remainder with generic code */
1070 +- while (nbytes >= sizeof(tweak)) {
1071 +- le128_xor((le128 *)dst, (const le128 *)src, &tweak);
1072 +- (*crypt_one)(&ctx->main_key, dst, dst);
1073 +- le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
1074 +- gf128mul_x_ble(&tweak, &tweak);
1075 +-
1076 +- dst += sizeof(tweak);
1077 +- src += sizeof(tweak);
1078 +- nbytes -= sizeof(tweak);
1079 +- }
1080 +- err = skcipher_walk_done(&walk, nbytes);
1081 +- }
1082 +-
1083 +- return err;
1084 +-}
1085 +-
1086 +-static int speck128_xts_encrypt(struct skcipher_request *req)
1087 +-{
1088 +- return __speck128_xts_crypt(req, crypto_speck128_encrypt,
1089 +- speck128_xts_encrypt_neon);
1090 +-}
1091 +-
1092 +-static int speck128_xts_decrypt(struct skcipher_request *req)
1093 +-{
1094 +- return __speck128_xts_crypt(req, crypto_speck128_decrypt,
1095 +- speck128_xts_decrypt_neon);
1096 +-}
1097 +-
1098 +-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
1099 +- unsigned int keylen)
1100 +-{
1101 +- struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1102 +- int err;
1103 +-
1104 +- err = xts_verify_key(tfm, key, keylen);
1105 +- if (err)
1106 +- return err;
1107 +-
1108 +- keylen /= 2;
1109 +-
1110 +- err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
1111 +- if (err)
1112 +- return err;
1113 +-
1114 +- return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
1115 +-}
1116 +-
1117 +-/* Speck64 */
1118 +-
1119 +-struct speck64_xts_tfm_ctx {
1120 +- struct speck64_tfm_ctx main_key;
1121 +- struct speck64_tfm_ctx tweak_key;
1122 +-};
1123 +-
1124 +-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
1125 +- void *dst, const void *src,
1126 +- unsigned int nbytes, void *tweak);
1127 +-
1128 +-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
1129 +- void *dst, const void *src,
1130 +- unsigned int nbytes, void *tweak);
1131 +-
1132 +-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
1133 +- u8 *, const u8 *);
1134 +-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
1135 +- const void *, unsigned int, void *);
1136 +-
1137 +-static __always_inline int
1138 +-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
1139 +- speck64_xts_crypt_many_t crypt_many)
1140 +-{
1141 +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
1142 +- const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1143 +- struct skcipher_walk walk;
1144 +- __le64 tweak;
1145 +- int err;
1146 +-
1147 +- err = skcipher_walk_virt(&walk, req, true);
1148 +-
1149 +- crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
1150 +-
1151 +- while (walk.nbytes > 0) {
1152 +- unsigned int nbytes = walk.nbytes;
1153 +- u8 *dst = walk.dst.virt.addr;
1154 +- const u8 *src = walk.src.virt.addr;
1155 +-
1156 +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
1157 +- unsigned int count;
1158 +-
1159 +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
1160 +- kernel_neon_begin();
1161 +- (*crypt_many)(ctx->main_key.round_keys,
1162 +- ctx->main_key.nrounds,
1163 +- dst, src, count, &tweak);
1164 +- kernel_neon_end();
1165 +- dst += count;
1166 +- src += count;
1167 +- nbytes -= count;
1168 +- }
1169 +-
1170 +- /* Handle any remainder with generic code */
1171 +- while (nbytes >= sizeof(tweak)) {
1172 +- *(__le64 *)dst = *(__le64 *)src ^ tweak;
1173 +- (*crypt_one)(&ctx->main_key, dst, dst);
1174 +- *(__le64 *)dst ^= tweak;
1175 +- tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
1176 +- ((tweak & cpu_to_le64(1ULL << 63)) ?
1177 +- 0x1B : 0));
1178 +- dst += sizeof(tweak);
1179 +- src += sizeof(tweak);
1180 +- nbytes -= sizeof(tweak);
1181 +- }
1182 +- err = skcipher_walk_done(&walk, nbytes);
1183 +- }
1184 +-
1185 +- return err;
1186 +-}
1187 +-
1188 +-static int speck64_xts_encrypt(struct skcipher_request *req)
1189 +-{
1190 +- return __speck64_xts_crypt(req, crypto_speck64_encrypt,
1191 +- speck64_xts_encrypt_neon);
1192 +-}
1193 +-
1194 +-static int speck64_xts_decrypt(struct skcipher_request *req)
1195 +-{
1196 +- return __speck64_xts_crypt(req, crypto_speck64_decrypt,
1197 +- speck64_xts_decrypt_neon);
1198 +-}
1199 +-
1200 +-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
1201 +- unsigned int keylen)
1202 +-{
1203 +- struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1204 +- int err;
1205 +-
1206 +- err = xts_verify_key(tfm, key, keylen);
1207 +- if (err)
1208 +- return err;
1209 +-
1210 +- keylen /= 2;
1211 +-
1212 +- err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
1213 +- if (err)
1214 +- return err;
1215 +-
1216 +- return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
1217 +-}
1218 +-
1219 +-static struct skcipher_alg speck_algs[] = {
1220 +- {
1221 +- .base.cra_name = "xts(speck128)",
1222 +- .base.cra_driver_name = "xts-speck128-neon",
1223 +- .base.cra_priority = 300,
1224 +- .base.cra_blocksize = SPECK128_BLOCK_SIZE,
1225 +- .base.cra_ctxsize = sizeof(struct speck128_xts_tfm_ctx),
1226 +- .base.cra_alignmask = 7,
1227 +- .base.cra_module = THIS_MODULE,
1228 +- .min_keysize = 2 * SPECK128_128_KEY_SIZE,
1229 +- .max_keysize = 2 * SPECK128_256_KEY_SIZE,
1230 +- .ivsize = SPECK128_BLOCK_SIZE,
1231 +- .walksize = SPECK_NEON_CHUNK_SIZE,
1232 +- .setkey = speck128_xts_setkey,
1233 +- .encrypt = speck128_xts_encrypt,
1234 +- .decrypt = speck128_xts_decrypt,
1235 +- }, {
1236 +- .base.cra_name = "xts(speck64)",
1237 +- .base.cra_driver_name = "xts-speck64-neon",
1238 +- .base.cra_priority = 300,
1239 +- .base.cra_blocksize = SPECK64_BLOCK_SIZE,
1240 +- .base.cra_ctxsize = sizeof(struct speck64_xts_tfm_ctx),
1241 +- .base.cra_alignmask = 7,
1242 +- .base.cra_module = THIS_MODULE,
1243 +- .min_keysize = 2 * SPECK64_96_KEY_SIZE,
1244 +- .max_keysize = 2 * SPECK64_128_KEY_SIZE,
1245 +- .ivsize = SPECK64_BLOCK_SIZE,
1246 +- .walksize = SPECK_NEON_CHUNK_SIZE,
1247 +- .setkey = speck64_xts_setkey,
1248 +- .encrypt = speck64_xts_encrypt,
1249 +- .decrypt = speck64_xts_decrypt,
1250 +- }
1251 +-};
1252 +-
1253 +-static int __init speck_neon_module_init(void)
1254 +-{
1255 +- if (!(elf_hwcap & HWCAP_NEON))
1256 +- return -ENODEV;
1257 +- return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
1258 +-}
1259 +-
1260 +-static void __exit speck_neon_module_exit(void)
1261 +-{
1262 +- crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
1263 +-}
1264 +-
1265 +-module_init(speck_neon_module_init);
1266 +-module_exit(speck_neon_module_exit);
1267 +-
1268 +-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
1269 +-MODULE_LICENSE("GPL");
1270 +-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
1271 +-MODULE_ALIAS_CRYPTO("xts(speck128)");
1272 +-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
1273 +-MODULE_ALIAS_CRYPTO("xts(speck64)");
1274 +-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
1275 +diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
1276 +index 67dac595dc72..3989876ab699 100644
1277 +--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
1278 ++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
1279 +@@ -327,7 +327,7 @@
1280 +
1281 + sysmgr: sysmgr@ffd12000 {
1282 + compatible = "altr,sys-mgr", "syscon";
1283 +- reg = <0xffd12000 0x1000>;
1284 ++ reg = <0xffd12000 0x228>;
1285 + };
1286 +
1287 + /* Local timer */
1288 +diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
1289 +index e3fdb0fd6f70..d51944ff9f91 100644
1290 +--- a/arch/arm64/crypto/Kconfig
1291 ++++ b/arch/arm64/crypto/Kconfig
1292 +@@ -119,10 +119,4 @@ config CRYPTO_AES_ARM64_BS
1293 + select CRYPTO_AES_ARM64
1294 + select CRYPTO_SIMD
1295 +
1296 +-config CRYPTO_SPECK_NEON
1297 +- tristate "NEON accelerated Speck cipher algorithms"
1298 +- depends on KERNEL_MODE_NEON
1299 +- select CRYPTO_BLKCIPHER
1300 +- select CRYPTO_SPECK
1301 +-
1302 + endif
1303 +diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
1304 +index bcafd016618e..7bc4bda6d9c6 100644
1305 +--- a/arch/arm64/crypto/Makefile
1306 ++++ b/arch/arm64/crypto/Makefile
1307 +@@ -56,9 +56,6 @@ sha512-arm64-y := sha512-glue.o sha512-core.o
1308 + obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
1309 + chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
1310 +
1311 +-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
1312 +-speck-neon-y := speck-neon-core.o speck-neon-glue.o
1313 +-
1314 + obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o
1315 + aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
1316 +
1317 +diff --git a/arch/arm64/crypto/speck-neon-core.S b/arch/arm64/crypto/speck-neon-core.S
1318 +deleted file mode 100644
1319 +index b14463438b09..000000000000
1320 +--- a/arch/arm64/crypto/speck-neon-core.S
1321 ++++ /dev/null
1322 +@@ -1,352 +0,0 @@
1323 +-// SPDX-License-Identifier: GPL-2.0
1324 +-/*
1325 +- * ARM64 NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
1326 +- *
1327 +- * Copyright (c) 2018 Google, Inc
1328 +- *
1329 +- * Author: Eric Biggers <ebiggers@google.com>
1330 +- */
1331 +-
1332 +-#include <linux/linkage.h>
1333 +-
1334 +- .text
1335 +-
1336 +- // arguments
1337 +- ROUND_KEYS .req x0 // const {u64,u32} *round_keys
1338 +- NROUNDS .req w1 // int nrounds
1339 +- NROUNDS_X .req x1
1340 +- DST .req x2 // void *dst
1341 +- SRC .req x3 // const void *src
1342 +- NBYTES .req w4 // unsigned int nbytes
1343 +- TWEAK .req x5 // void *tweak
1344 +-
1345 +- // registers which hold the data being encrypted/decrypted
1346 +- // (underscores avoid a naming collision with ARM64 registers x0-x3)
1347 +- X_0 .req v0
1348 +- Y_0 .req v1
1349 +- X_1 .req v2
1350 +- Y_1 .req v3
1351 +- X_2 .req v4
1352 +- Y_2 .req v5
1353 +- X_3 .req v6
1354 +- Y_3 .req v7
1355 +-
1356 +- // the round key, duplicated in all lanes
1357 +- ROUND_KEY .req v8
1358 +-
1359 +- // index vector for tbl-based 8-bit rotates
1360 +- ROTATE_TABLE .req v9
1361 +- ROTATE_TABLE_Q .req q9
1362 +-
1363 +- // temporary registers
1364 +- TMP0 .req v10
1365 +- TMP1 .req v11
1366 +- TMP2 .req v12
1367 +- TMP3 .req v13
1368 +-
1369 +- // multiplication table for updating XTS tweaks
1370 +- GFMUL_TABLE .req v14
1371 +- GFMUL_TABLE_Q .req q14
1372 +-
1373 +- // next XTS tweak value(s)
1374 +- TWEAKV_NEXT .req v15
1375 +-
1376 +- // XTS tweaks for the blocks currently being encrypted/decrypted
1377 +- TWEAKV0 .req v16
1378 +- TWEAKV1 .req v17
1379 +- TWEAKV2 .req v18
1380 +- TWEAKV3 .req v19
1381 +- TWEAKV4 .req v20
1382 +- TWEAKV5 .req v21
1383 +- TWEAKV6 .req v22
1384 +- TWEAKV7 .req v23
1385 +-
1386 +- .align 4
1387 +-.Lror64_8_table:
1388 +- .octa 0x080f0e0d0c0b0a090007060504030201
1389 +-.Lror32_8_table:
1390 +- .octa 0x0c0f0e0d080b0a090407060500030201
1391 +-.Lrol64_8_table:
1392 +- .octa 0x0e0d0c0b0a09080f0605040302010007
1393 +-.Lrol32_8_table:
1394 +- .octa 0x0e0d0c0f0a09080b0605040702010003
1395 +-.Lgf128mul_table:
1396 +- .octa 0x00000000000000870000000000000001
1397 +-.Lgf64mul_table:
1398 +- .octa 0x0000000000000000000000002d361b00
1399 +-
1400 +-/*
1401 +- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
1402 +- *
1403 +- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
1404 +- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
1405 +- * of ROUND_KEY. 'n' is the lane size: 64 for Speck128, or 32 for Speck64.
1406 +- * 'lanes' is the lane specifier: "2d" for Speck128 or "4s" for Speck64.
1407 +- */
1408 +-.macro _speck_round_128bytes n, lanes
1409 +-
1410 +- // x = ror(x, 8)
1411 +- tbl X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
1412 +- tbl X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
1413 +- tbl X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
1414 +- tbl X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
1415 +-
1416 +- // x += y
1417 +- add X_0.\lanes, X_0.\lanes, Y_0.\lanes
1418 +- add X_1.\lanes, X_1.\lanes, Y_1.\lanes
1419 +- add X_2.\lanes, X_2.\lanes, Y_2.\lanes
1420 +- add X_3.\lanes, X_3.\lanes, Y_3.\lanes
1421 +-
1422 +- // x ^= k
1423 +- eor X_0.16b, X_0.16b, ROUND_KEY.16b
1424 +- eor X_1.16b, X_1.16b, ROUND_KEY.16b
1425 +- eor X_2.16b, X_2.16b, ROUND_KEY.16b
1426 +- eor X_3.16b, X_3.16b, ROUND_KEY.16b
1427 +-
1428 +- // y = rol(y, 3)
1429 +- shl TMP0.\lanes, Y_0.\lanes, #3
1430 +- shl TMP1.\lanes, Y_1.\lanes, #3
1431 +- shl TMP2.\lanes, Y_2.\lanes, #3
1432 +- shl TMP3.\lanes, Y_3.\lanes, #3
1433 +- sri TMP0.\lanes, Y_0.\lanes, #(\n - 3)
1434 +- sri TMP1.\lanes, Y_1.\lanes, #(\n - 3)
1435 +- sri TMP2.\lanes, Y_2.\lanes, #(\n - 3)
1436 +- sri TMP3.\lanes, Y_3.\lanes, #(\n - 3)
1437 +-
1438 +- // y ^= x
1439 +- eor Y_0.16b, TMP0.16b, X_0.16b
1440 +- eor Y_1.16b, TMP1.16b, X_1.16b
1441 +- eor Y_2.16b, TMP2.16b, X_2.16b
1442 +- eor Y_3.16b, TMP3.16b, X_3.16b
1443 +-.endm
1444 +-
1445 +-/*
1446 +- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
1447 +- *
1448 +- * This is the inverse of _speck_round_128bytes().
1449 +- */
1450 +-.macro _speck_unround_128bytes n, lanes
1451 +-
1452 +- // y ^= x
1453 +- eor TMP0.16b, Y_0.16b, X_0.16b
1454 +- eor TMP1.16b, Y_1.16b, X_1.16b
1455 +- eor TMP2.16b, Y_2.16b, X_2.16b
1456 +- eor TMP3.16b, Y_3.16b, X_3.16b
1457 +-
1458 +- // y = ror(y, 3)
1459 +- ushr Y_0.\lanes, TMP0.\lanes, #3
1460 +- ushr Y_1.\lanes, TMP1.\lanes, #3
1461 +- ushr Y_2.\lanes, TMP2.\lanes, #3
1462 +- ushr Y_3.\lanes, TMP3.\lanes, #3
1463 +- sli Y_0.\lanes, TMP0.\lanes, #(\n - 3)
1464 +- sli Y_1.\lanes, TMP1.\lanes, #(\n - 3)
1465 +- sli Y_2.\lanes, TMP2.\lanes, #(\n - 3)
1466 +- sli Y_3.\lanes, TMP3.\lanes, #(\n - 3)
1467 +-
1468 +- // x ^= k
1469 +- eor X_0.16b, X_0.16b, ROUND_KEY.16b
1470 +- eor X_1.16b, X_1.16b, ROUND_KEY.16b
1471 +- eor X_2.16b, X_2.16b, ROUND_KEY.16b
1472 +- eor X_3.16b, X_3.16b, ROUND_KEY.16b
1473 +-
1474 +- // x -= y
1475 +- sub X_0.\lanes, X_0.\lanes, Y_0.\lanes
1476 +- sub X_1.\lanes, X_1.\lanes, Y_1.\lanes
1477 +- sub X_2.\lanes, X_2.\lanes, Y_2.\lanes
1478 +- sub X_3.\lanes, X_3.\lanes, Y_3.\lanes
1479 +-
1480 +- // x = rol(x, 8)
1481 +- tbl X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
1482 +- tbl X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
1483 +- tbl X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
1484 +- tbl X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
1485 +-.endm
1486 +-
1487 +-.macro _next_xts_tweak next, cur, tmp, n
1488 +-.if \n == 64
1489 +- /*
1490 +- * Calculate the next tweak by multiplying the current one by x,
1491 +- * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
1492 +- */
1493 +- sshr \tmp\().2d, \cur\().2d, #63
1494 +- and \tmp\().16b, \tmp\().16b, GFMUL_TABLE.16b
1495 +- shl \next\().2d, \cur\().2d, #1
1496 +- ext \tmp\().16b, \tmp\().16b, \tmp\().16b, #8
1497 +- eor \next\().16b, \next\().16b, \tmp\().16b
1498 +-.else
1499 +- /*
1500 +- * Calculate the next two tweaks by multiplying the current ones by x^2,
1501 +- * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
1502 +- */
1503 +- ushr \tmp\().2d, \cur\().2d, #62
1504 +- shl \next\().2d, \cur\().2d, #2
1505 +- tbl \tmp\().16b, {GFMUL_TABLE.16b}, \tmp\().16b
1506 +- eor \next\().16b, \next\().16b, \tmp\().16b
1507 +-.endif
1508 +-.endm
1509 +-
1510 +-/*
1511 +- * _speck_xts_crypt() - Speck-XTS encryption/decryption
1512 +- *
1513 +- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
1514 +- * using Speck-XTS, specifically the variant with a block size of '2n' and round
1515 +- * count given by NROUNDS. The expanded round keys are given in ROUND_KEYS, and
1516 +- * the current XTS tweak value is given in TWEAK. It's assumed that NBYTES is a
1517 +- * nonzero multiple of 128.
1518 +- */
1519 +-.macro _speck_xts_crypt n, lanes, decrypting
1520 +-
1521 +- /*
1522 +- * If decrypting, modify the ROUND_KEYS parameter to point to the last
1523 +- * round key rather than the first, since for decryption the round keys
1524 +- * are used in reverse order.
1525 +- */
1526 +-.if \decrypting
1527 +- mov NROUNDS, NROUNDS /* zero the high 32 bits */
1528 +-.if \n == 64
1529 +- add ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #3
1530 +- sub ROUND_KEYS, ROUND_KEYS, #8
1531 +-.else
1532 +- add ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #2
1533 +- sub ROUND_KEYS, ROUND_KEYS, #4
1534 +-.endif
1535 +-.endif
1536 +-
1537 +- // Load the index vector for tbl-based 8-bit rotates
1538 +-.if \decrypting
1539 +- ldr ROTATE_TABLE_Q, .Lrol\n\()_8_table
1540 +-.else
1541 +- ldr ROTATE_TABLE_Q, .Lror\n\()_8_table
1542 +-.endif
1543 +-
1544 +- // One-time XTS preparation
1545 +-.if \n == 64
1546 +- // Load first tweak
1547 +- ld1 {TWEAKV0.16b}, [TWEAK]
1548 +-
1549 +- // Load GF(2^128) multiplication table
1550 +- ldr GFMUL_TABLE_Q, .Lgf128mul_table
1551 +-.else
1552 +- // Load first tweak
1553 +- ld1 {TWEAKV0.8b}, [TWEAK]
1554 +-
1555 +- // Load GF(2^64) multiplication table
1556 +- ldr GFMUL_TABLE_Q, .Lgf64mul_table
1557 +-
1558 +- // Calculate second tweak, packing it together with the first
1559 +- ushr TMP0.2d, TWEAKV0.2d, #63
1560 +- shl TMP1.2d, TWEAKV0.2d, #1
1561 +- tbl TMP0.8b, {GFMUL_TABLE.16b}, TMP0.8b
1562 +- eor TMP0.8b, TMP0.8b, TMP1.8b
1563 +- mov TWEAKV0.d[1], TMP0.d[0]
1564 +-.endif
1565 +-
1566 +-.Lnext_128bytes_\@:
1567 +-
1568 +- // Calculate XTS tweaks for next 128 bytes
1569 +- _next_xts_tweak TWEAKV1, TWEAKV0, TMP0, \n
1570 +- _next_xts_tweak TWEAKV2, TWEAKV1, TMP0, \n
1571 +- _next_xts_tweak TWEAKV3, TWEAKV2, TMP0, \n
1572 +- _next_xts_tweak TWEAKV4, TWEAKV3, TMP0, \n
1573 +- _next_xts_tweak TWEAKV5, TWEAKV4, TMP0, \n
1574 +- _next_xts_tweak TWEAKV6, TWEAKV5, TMP0, \n
1575 +- _next_xts_tweak TWEAKV7, TWEAKV6, TMP0, \n
1576 +- _next_xts_tweak TWEAKV_NEXT, TWEAKV7, TMP0, \n
1577 +-
1578 +- // Load the next source blocks into {X,Y}[0-3]
1579 +- ld1 {X_0.16b-Y_1.16b}, [SRC], #64
1580 +- ld1 {X_2.16b-Y_3.16b}, [SRC], #64
1581 +-
1582 +- // XOR the source blocks with their XTS tweaks
1583 +- eor TMP0.16b, X_0.16b, TWEAKV0.16b
1584 +- eor Y_0.16b, Y_0.16b, TWEAKV1.16b
1585 +- eor TMP1.16b, X_1.16b, TWEAKV2.16b
1586 +- eor Y_1.16b, Y_1.16b, TWEAKV3.16b
1587 +- eor TMP2.16b, X_2.16b, TWEAKV4.16b
1588 +- eor Y_2.16b, Y_2.16b, TWEAKV5.16b
1589 +- eor TMP3.16b, X_3.16b, TWEAKV6.16b
1590 +- eor Y_3.16b, Y_3.16b, TWEAKV7.16b
1591 +-
1592 +- /*
1593 +- * De-interleave the 'x' and 'y' elements of each block, i.e. make it so
1594 +- * that the X[0-3] registers contain only the second halves of blocks,
1595 +- * and the Y[0-3] registers contain only the first halves of blocks.
1596 +- * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
1597 +- */
1598 +- uzp2 X_0.\lanes, TMP0.\lanes, Y_0.\lanes
1599 +- uzp1 Y_0.\lanes, TMP0.\lanes, Y_0.\lanes
1600 +- uzp2 X_1.\lanes, TMP1.\lanes, Y_1.\lanes
1601 +- uzp1 Y_1.\lanes, TMP1.\lanes, Y_1.\lanes
1602 +- uzp2 X_2.\lanes, TMP2.\lanes, Y_2.\lanes
1603 +- uzp1 Y_2.\lanes, TMP2.\lanes, Y_2.\lanes
1604 +- uzp2 X_3.\lanes, TMP3.\lanes, Y_3.\lanes
1605 +- uzp1 Y_3.\lanes, TMP3.\lanes, Y_3.\lanes
1606 +-
1607 +- // Do the cipher rounds
1608 +- mov x6, ROUND_KEYS
1609 +- mov w7, NROUNDS
1610 +-.Lnext_round_\@:
1611 +-.if \decrypting
1612 +- ld1r {ROUND_KEY.\lanes}, [x6]
1613 +- sub x6, x6, #( \n / 8 )
1614 +- _speck_unround_128bytes \n, \lanes
1615 +-.else
1616 +- ld1r {ROUND_KEY.\lanes}, [x6], #( \n / 8 )
1617 +- _speck_round_128bytes \n, \lanes
1618 +-.endif
1619 +- subs w7, w7, #1
1620 +- bne .Lnext_round_\@
1621 +-
1622 +- // Re-interleave the 'x' and 'y' elements of each block
1623 +- zip1 TMP0.\lanes, Y_0.\lanes, X_0.\lanes
1624 +- zip2 Y_0.\lanes, Y_0.\lanes, X_0.\lanes
1625 +- zip1 TMP1.\lanes, Y_1.\lanes, X_1.\lanes
1626 +- zip2 Y_1.\lanes, Y_1.\lanes, X_1.\lanes
1627 +- zip1 TMP2.\lanes, Y_2.\lanes, X_2.\lanes
1628 +- zip2 Y_2.\lanes, Y_2.\lanes, X_2.\lanes
1629 +- zip1 TMP3.\lanes, Y_3.\lanes, X_3.\lanes
1630 +- zip2 Y_3.\lanes, Y_3.\lanes, X_3.\lanes
1631 +-
1632 +- // XOR the encrypted/decrypted blocks with the tweaks calculated earlier
1633 +- eor X_0.16b, TMP0.16b, TWEAKV0.16b
1634 +- eor Y_0.16b, Y_0.16b, TWEAKV1.16b
1635 +- eor X_1.16b, TMP1.16b, TWEAKV2.16b
1636 +- eor Y_1.16b, Y_1.16b, TWEAKV3.16b
1637 +- eor X_2.16b, TMP2.16b, TWEAKV4.16b
1638 +- eor Y_2.16b, Y_2.16b, TWEAKV5.16b
1639 +- eor X_3.16b, TMP3.16b, TWEAKV6.16b
1640 +- eor Y_3.16b, Y_3.16b, TWEAKV7.16b
1641 +- mov TWEAKV0.16b, TWEAKV_NEXT.16b
1642 +-
1643 +- // Store the ciphertext in the destination buffer
1644 +- st1 {X_0.16b-Y_1.16b}, [DST], #64
1645 +- st1 {X_2.16b-Y_3.16b}, [DST], #64
1646 +-
1647 +- // Continue if there are more 128-byte chunks remaining
1648 +- subs NBYTES, NBYTES, #128
1649 +- bne .Lnext_128bytes_\@
1650 +-
1651 +- // Store the next tweak and return
1652 +-.if \n == 64
1653 +- st1 {TWEAKV_NEXT.16b}, [TWEAK]
1654 +-.else
1655 +- st1 {TWEAKV_NEXT.8b}, [TWEAK]
1656 +-.endif
1657 +- ret
1658 +-.endm
1659 +-
1660 +-ENTRY(speck128_xts_encrypt_neon)
1661 +- _speck_xts_crypt n=64, lanes=2d, decrypting=0
1662 +-ENDPROC(speck128_xts_encrypt_neon)
1663 +-
1664 +-ENTRY(speck128_xts_decrypt_neon)
1665 +- _speck_xts_crypt n=64, lanes=2d, decrypting=1
1666 +-ENDPROC(speck128_xts_decrypt_neon)
1667 +-
1668 +-ENTRY(speck64_xts_encrypt_neon)
1669 +- _speck_xts_crypt n=32, lanes=4s, decrypting=0
1670 +-ENDPROC(speck64_xts_encrypt_neon)
1671 +-
1672 +-ENTRY(speck64_xts_decrypt_neon)
1673 +- _speck_xts_crypt n=32, lanes=4s, decrypting=1
1674 +-ENDPROC(speck64_xts_decrypt_neon)
1675 +diff --git a/arch/arm64/crypto/speck-neon-glue.c b/arch/arm64/crypto/speck-neon-glue.c
1676 +deleted file mode 100644
1677 +index 6e233aeb4ff4..000000000000
1678 +--- a/arch/arm64/crypto/speck-neon-glue.c
1679 ++++ /dev/null
1680 +@@ -1,282 +0,0 @@
1681 +-// SPDX-License-Identifier: GPL-2.0
1682 +-/*
1683 +- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
1684 +- * (64-bit version; based on the 32-bit version)
1685 +- *
1686 +- * Copyright (c) 2018 Google, Inc
1687 +- */
1688 +-
1689 +-#include <asm/hwcap.h>
1690 +-#include <asm/neon.h>
1691 +-#include <asm/simd.h>
1692 +-#include <crypto/algapi.h>
1693 +-#include <crypto/gf128mul.h>
1694 +-#include <crypto/internal/skcipher.h>
1695 +-#include <crypto/speck.h>
1696 +-#include <crypto/xts.h>
1697 +-#include <linux/kernel.h>
1698 +-#include <linux/module.h>
1699 +-
1700 +-/* The assembly functions only handle multiples of 128 bytes */
1701 +-#define SPECK_NEON_CHUNK_SIZE 128
1702 +-
1703 +-/* Speck128 */
1704 +-
1705 +-struct speck128_xts_tfm_ctx {
1706 +- struct speck128_tfm_ctx main_key;
1707 +- struct speck128_tfm_ctx tweak_key;
1708 +-};
1709 +-
1710 +-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
1711 +- void *dst, const void *src,
1712 +- unsigned int nbytes, void *tweak);
1713 +-
1714 +-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
1715 +- void *dst, const void *src,
1716 +- unsigned int nbytes, void *tweak);
1717 +-
1718 +-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
1719 +- u8 *, const u8 *);
1720 +-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
1721 +- const void *, unsigned int, void *);
1722 +-
1723 +-static __always_inline int
1724 +-__speck128_xts_crypt(struct skcipher_request *req,
1725 +- speck128_crypt_one_t crypt_one,
1726 +- speck128_xts_crypt_many_t crypt_many)
1727 +-{
1728 +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
1729 +- const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1730 +- struct skcipher_walk walk;
1731 +- le128 tweak;
1732 +- int err;
1733 +-
1734 +- err = skcipher_walk_virt(&walk, req, true);
1735 +-
1736 +- crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
1737 +-
1738 +- while (walk.nbytes > 0) {
1739 +- unsigned int nbytes = walk.nbytes;
1740 +- u8 *dst = walk.dst.virt.addr;
1741 +- const u8 *src = walk.src.virt.addr;
1742 +-
1743 +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
1744 +- unsigned int count;
1745 +-
1746 +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
1747 +- kernel_neon_begin();
1748 +- (*crypt_many)(ctx->main_key.round_keys,
1749 +- ctx->main_key.nrounds,
1750 +- dst, src, count, &tweak);
1751 +- kernel_neon_end();
1752 +- dst += count;
1753 +- src += count;
1754 +- nbytes -= count;
1755 +- }
1756 +-
1757 +- /* Handle any remainder with generic code */
1758 +- while (nbytes >= sizeof(tweak)) {
1759 +- le128_xor((le128 *)dst, (const le128 *)src, &tweak);
1760 +- (*crypt_one)(&ctx->main_key, dst, dst);
1761 +- le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
1762 +- gf128mul_x_ble(&tweak, &tweak);
1763 +-
1764 +- dst += sizeof(tweak);
1765 +- src += sizeof(tweak);
1766 +- nbytes -= sizeof(tweak);
1767 +- }
1768 +- err = skcipher_walk_done(&walk, nbytes);
1769 +- }
1770 +-
1771 +- return err;
1772 +-}
1773 +-
1774 +-static int speck128_xts_encrypt(struct skcipher_request *req)
1775 +-{
1776 +- return __speck128_xts_crypt(req, crypto_speck128_encrypt,
1777 +- speck128_xts_encrypt_neon);
1778 +-}
1779 +-
1780 +-static int speck128_xts_decrypt(struct skcipher_request *req)
1781 +-{
1782 +- return __speck128_xts_crypt(req, crypto_speck128_decrypt,
1783 +- speck128_xts_decrypt_neon);
1784 +-}
1785 +-
1786 +-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
1787 +- unsigned int keylen)
1788 +-{
1789 +- struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1790 +- int err;
1791 +-
1792 +- err = xts_verify_key(tfm, key, keylen);
1793 +- if (err)
1794 +- return err;
1795 +-
1796 +- keylen /= 2;
1797 +-
1798 +- err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
1799 +- if (err)
1800 +- return err;
1801 +-
1802 +- return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
1803 +-}
1804 +-
1805 +-/* Speck64 */
1806 +-
1807 +-struct speck64_xts_tfm_ctx {
1808 +- struct speck64_tfm_ctx main_key;
1809 +- struct speck64_tfm_ctx tweak_key;
1810 +-};
1811 +-
1812 +-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
1813 +- void *dst, const void *src,
1814 +- unsigned int nbytes, void *tweak);
1815 +-
1816 +-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
1817 +- void *dst, const void *src,
1818 +- unsigned int nbytes, void *tweak);
1819 +-
1820 +-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
1821 +- u8 *, const u8 *);
1822 +-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
1823 +- const void *, unsigned int, void *);
1824 +-
1825 +-static __always_inline int
1826 +-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
1827 +- speck64_xts_crypt_many_t crypt_many)
1828 +-{
1829 +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
1830 +- const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1831 +- struct skcipher_walk walk;
1832 +- __le64 tweak;
1833 +- int err;
1834 +-
1835 +- err = skcipher_walk_virt(&walk, req, true);
1836 +-
1837 +- crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
1838 +-
1839 +- while (walk.nbytes > 0) {
1840 +- unsigned int nbytes = walk.nbytes;
1841 +- u8 *dst = walk.dst.virt.addr;
1842 +- const u8 *src = walk.src.virt.addr;
1843 +-
1844 +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
1845 +- unsigned int count;
1846 +-
1847 +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
1848 +- kernel_neon_begin();
1849 +- (*crypt_many)(ctx->main_key.round_keys,
1850 +- ctx->main_key.nrounds,
1851 +- dst, src, count, &tweak);
1852 +- kernel_neon_end();
1853 +- dst += count;
1854 +- src += count;
1855 +- nbytes -= count;
1856 +- }
1857 +-
1858 +- /* Handle any remainder with generic code */
1859 +- while (nbytes >= sizeof(tweak)) {
1860 +- *(__le64 *)dst = *(__le64 *)src ^ tweak;
1861 +- (*crypt_one)(&ctx->main_key, dst, dst);
1862 +- *(__le64 *)dst ^= tweak;
1863 +- tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
1864 +- ((tweak & cpu_to_le64(1ULL << 63)) ?
1865 +- 0x1B : 0));
1866 +- dst += sizeof(tweak);
1867 +- src += sizeof(tweak);
1868 +- nbytes -= sizeof(tweak);
1869 +- }
1870 +- err = skcipher_walk_done(&walk, nbytes);
1871 +- }
1872 +-
1873 +- return err;
1874 +-}
1875 +-
1876 +-static int speck64_xts_encrypt(struct skcipher_request *req)
1877 +-{
1878 +- return __speck64_xts_crypt(req, crypto_speck64_encrypt,
1879 +- speck64_xts_encrypt_neon);
1880 +-}
1881 +-
1882 +-static int speck64_xts_decrypt(struct skcipher_request *req)
1883 +-{
1884 +- return __speck64_xts_crypt(req, crypto_speck64_decrypt,
1885 +- speck64_xts_decrypt_neon);
1886 +-}
1887 +-
1888 +-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
1889 +- unsigned int keylen)
1890 +-{
1891 +- struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
1892 +- int err;
1893 +-
1894 +- err = xts_verify_key(tfm, key, keylen);
1895 +- if (err)
1896 +- return err;
1897 +-
1898 +- keylen /= 2;
1899 +-
1900 +- err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
1901 +- if (err)
1902 +- return err;
1903 +-
1904 +- return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
1905 +-}
1906 +-
1907 +-static struct skcipher_alg speck_algs[] = {
1908 +- {
1909 +- .base.cra_name = "xts(speck128)",
1910 +- .base.cra_driver_name = "xts-speck128-neon",
1911 +- .base.cra_priority = 300,
1912 +- .base.cra_blocksize = SPECK128_BLOCK_SIZE,
1913 +- .base.cra_ctxsize = sizeof(struct speck128_xts_tfm_ctx),
1914 +- .base.cra_alignmask = 7,
1915 +- .base.cra_module = THIS_MODULE,
1916 +- .min_keysize = 2 * SPECK128_128_KEY_SIZE,
1917 +- .max_keysize = 2 * SPECK128_256_KEY_SIZE,
1918 +- .ivsize = SPECK128_BLOCK_SIZE,
1919 +- .walksize = SPECK_NEON_CHUNK_SIZE,
1920 +- .setkey = speck128_xts_setkey,
1921 +- .encrypt = speck128_xts_encrypt,
1922 +- .decrypt = speck128_xts_decrypt,
1923 +- }, {
1924 +- .base.cra_name = "xts(speck64)",
1925 +- .base.cra_driver_name = "xts-speck64-neon",
1926 +- .base.cra_priority = 300,
1927 +- .base.cra_blocksize = SPECK64_BLOCK_SIZE,
1928 +- .base.cra_ctxsize = sizeof(struct speck64_xts_tfm_ctx),
1929 +- .base.cra_alignmask = 7,
1930 +- .base.cra_module = THIS_MODULE,
1931 +- .min_keysize = 2 * SPECK64_96_KEY_SIZE,
1932 +- .max_keysize = 2 * SPECK64_128_KEY_SIZE,
1933 +- .ivsize = SPECK64_BLOCK_SIZE,
1934 +- .walksize = SPECK_NEON_CHUNK_SIZE,
1935 +- .setkey = speck64_xts_setkey,
1936 +- .encrypt = speck64_xts_encrypt,
1937 +- .decrypt = speck64_xts_decrypt,
1938 +- }
1939 +-};
1940 +-
1941 +-static int __init speck_neon_module_init(void)
1942 +-{
1943 +- if (!(elf_hwcap & HWCAP_ASIMD))
1944 +- return -ENODEV;
1945 +- return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
1946 +-}
1947 +-
1948 +-static void __exit speck_neon_module_exit(void)
1949 +-{
1950 +- crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
1951 +-}
1952 +-
1953 +-module_init(speck_neon_module_init);
1954 +-module_exit(speck_neon_module_exit);
1955 +-
1956 +-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
1957 +-MODULE_LICENSE("GPL");
1958 +-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
1959 +-MODULE_ALIAS_CRYPTO("xts(speck128)");
1960 +-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
1961 +-MODULE_ALIAS_CRYPTO("xts(speck64)");
1962 +-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
1963 +diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
1964 +index e4103b718a7c..b687c80a9c10 100644
1965 +--- a/arch/arm64/kernel/cpufeature.c
1966 ++++ b/arch/arm64/kernel/cpufeature.c
1967 +@@ -847,15 +847,29 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
1968 + }
1969 +
1970 + static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
1971 +- int __unused)
1972 ++ int scope)
1973 + {
1974 +- return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_IDC_SHIFT);
1975 ++ u64 ctr;
1976 ++
1977 ++ if (scope == SCOPE_SYSTEM)
1978 ++ ctr = arm64_ftr_reg_ctrel0.sys_val;
1979 ++ else
1980 ++ ctr = read_cpuid_cachetype();
1981 ++
1982 ++ return ctr & BIT(CTR_IDC_SHIFT);
1983 + }
1984 +
1985 + static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
1986 +- int __unused)
1987 ++ int scope)
1988 + {
1989 +- return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
1990 ++ u64 ctr;
1991 ++
1992 ++ if (scope == SCOPE_SYSTEM)
1993 ++ ctr = arm64_ftr_reg_ctrel0.sys_val;
1994 ++ else
1995 ++ ctr = read_cpuid_cachetype();
1996 ++
1997 ++ return ctr & BIT(CTR_DIC_SHIFT);
1998 + }
1999 +
2000 + #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
2001 +diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
2002 +index 28ad8799406f..b0db91eefbde 100644
2003 +--- a/arch/arm64/kernel/entry.S
2004 ++++ b/arch/arm64/kernel/entry.S
2005 +@@ -599,7 +599,7 @@ el1_undef:
2006 + inherit_daif pstate=x23, tmp=x2
2007 + mov x0, sp
2008 + bl do_undefinstr
2009 +- ASM_BUG()
2010 ++ kernel_exit 1
2011 + el1_dbg:
2012 + /*
2013 + * Debug exception handling
2014 +diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
2015 +index d399d459397b..9fa3d69cceaa 100644
2016 +--- a/arch/arm64/kernel/traps.c
2017 ++++ b/arch/arm64/kernel/traps.c
2018 +@@ -310,10 +310,12 @@ static int call_undef_hook(struct pt_regs *regs)
2019 + int (*fn)(struct pt_regs *regs, u32 instr) = NULL;
2020 + void __user *pc = (void __user *)instruction_pointer(regs);
2021 +
2022 +- if (!user_mode(regs))
2023 +- return 1;
2024 +-
2025 +- if (compat_thumb_mode(regs)) {
2026 ++ if (!user_mode(regs)) {
2027 ++ __le32 instr_le;
2028 ++ if (probe_kernel_address((__force __le32 *)pc, instr_le))
2029 ++ goto exit;
2030 ++ instr = le32_to_cpu(instr_le);
2031 ++ } else if (compat_thumb_mode(regs)) {
2032 + /* 16-bit Thumb instruction */
2033 + __le16 instr_le;
2034 + if (get_user(instr_le, (__le16 __user *)pc))
2035 +@@ -407,6 +409,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
2036 + return;
2037 +
2038 + force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
2039 ++ BUG_ON(!user_mode(regs));
2040 + }
2041 +
2042 + void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
2043 +diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
2044 +index 137710f4dac3..5105bb044aa5 100644
2045 +--- a/arch/arm64/lib/Makefile
2046 ++++ b/arch/arm64/lib/Makefile
2047 +@@ -12,7 +12,7 @@ lib-y := bitops.o clear_user.o delay.o copy_from_user.o \
2048 + # when supported by the CPU. Result and argument registers are handled
2049 + # correctly, based on the function prototype.
2050 + lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
2051 +-CFLAGS_atomic_ll_sc.o := -fcall-used-x0 -ffixed-x1 -ffixed-x2 \
2052 ++CFLAGS_atomic_ll_sc.o := -ffixed-x1 -ffixed-x2 \
2053 + -ffixed-x3 -ffixed-x4 -ffixed-x5 -ffixed-x6 \
2054 + -ffixed-x7 -fcall-saved-x8 -fcall-saved-x9 \
2055 + -fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12 \
2056 +diff --git a/arch/m68k/configs/amiga_defconfig b/arch/m68k/configs/amiga_defconfig
2057 +index a874e54404d1..4d4c76ab0bac 100644
2058 +--- a/arch/m68k/configs/amiga_defconfig
2059 ++++ b/arch/m68k/configs/amiga_defconfig
2060 +@@ -650,7 +650,6 @@ CONFIG_CRYPTO_SALSA20=m
2061 + CONFIG_CRYPTO_SEED=m
2062 + CONFIG_CRYPTO_SERPENT=m
2063 + CONFIG_CRYPTO_SM4=m
2064 +-CONFIG_CRYPTO_SPECK=m
2065 + CONFIG_CRYPTO_TEA=m
2066 + CONFIG_CRYPTO_TWOFISH=m
2067 + CONFIG_CRYPTO_LZO=m
2068 +diff --git a/arch/m68k/configs/apollo_defconfig b/arch/m68k/configs/apollo_defconfig
2069 +index 8ce39e23aa42..0fd006c19fa3 100644
2070 +--- a/arch/m68k/configs/apollo_defconfig
2071 ++++ b/arch/m68k/configs/apollo_defconfig
2072 +@@ -609,7 +609,6 @@ CONFIG_CRYPTO_SALSA20=m
2073 + CONFIG_CRYPTO_SEED=m
2074 + CONFIG_CRYPTO_SERPENT=m
2075 + CONFIG_CRYPTO_SM4=m
2076 +-CONFIG_CRYPTO_SPECK=m
2077 + CONFIG_CRYPTO_TEA=m
2078 + CONFIG_CRYPTO_TWOFISH=m
2079 + CONFIG_CRYPTO_LZO=m
2080 +diff --git a/arch/m68k/configs/atari_defconfig b/arch/m68k/configs/atari_defconfig
2081 +index 346c4e75edf8..9343e8d5cf60 100644
2082 +--- a/arch/m68k/configs/atari_defconfig
2083 ++++ b/arch/m68k/configs/atari_defconfig
2084 +@@ -631,7 +631,6 @@ CONFIG_CRYPTO_SALSA20=m
2085 + CONFIG_CRYPTO_SEED=m
2086 + CONFIG_CRYPTO_SERPENT=m
2087 + CONFIG_CRYPTO_SM4=m
2088 +-CONFIG_CRYPTO_SPECK=m
2089 + CONFIG_CRYPTO_TEA=m
2090 + CONFIG_CRYPTO_TWOFISH=m
2091 + CONFIG_CRYPTO_LZO=m
2092 +diff --git a/arch/m68k/configs/bvme6000_defconfig b/arch/m68k/configs/bvme6000_defconfig
2093 +index fca9c7aa71a3..a10fff6e7b50 100644
2094 +--- a/arch/m68k/configs/bvme6000_defconfig
2095 ++++ b/arch/m68k/configs/bvme6000_defconfig
2096 +@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
2097 + CONFIG_CRYPTO_SEED=m
2098 + CONFIG_CRYPTO_SERPENT=m
2099 + CONFIG_CRYPTO_SM4=m
2100 +-CONFIG_CRYPTO_SPECK=m
2101 + CONFIG_CRYPTO_TEA=m
2102 + CONFIG_CRYPTO_TWOFISH=m
2103 + CONFIG_CRYPTO_LZO=m
2104 +diff --git a/arch/m68k/configs/hp300_defconfig b/arch/m68k/configs/hp300_defconfig
2105 +index f9eab174915c..db81d8ea9d03 100644
2106 +--- a/arch/m68k/configs/hp300_defconfig
2107 ++++ b/arch/m68k/configs/hp300_defconfig
2108 +@@ -611,7 +611,6 @@ CONFIG_CRYPTO_SALSA20=m
2109 + CONFIG_CRYPTO_SEED=m
2110 + CONFIG_CRYPTO_SERPENT=m
2111 + CONFIG_CRYPTO_SM4=m
2112 +-CONFIG_CRYPTO_SPECK=m
2113 + CONFIG_CRYPTO_TEA=m
2114 + CONFIG_CRYPTO_TWOFISH=m
2115 + CONFIG_CRYPTO_LZO=m
2116 +diff --git a/arch/m68k/configs/mac_defconfig b/arch/m68k/configs/mac_defconfig
2117 +index b52e597899eb..2546617a1147 100644
2118 +--- a/arch/m68k/configs/mac_defconfig
2119 ++++ b/arch/m68k/configs/mac_defconfig
2120 +@@ -633,7 +633,6 @@ CONFIG_CRYPTO_SALSA20=m
2121 + CONFIG_CRYPTO_SEED=m
2122 + CONFIG_CRYPTO_SERPENT=m
2123 + CONFIG_CRYPTO_SM4=m
2124 +-CONFIG_CRYPTO_SPECK=m
2125 + CONFIG_CRYPTO_TEA=m
2126 + CONFIG_CRYPTO_TWOFISH=m
2127 + CONFIG_CRYPTO_LZO=m
2128 +diff --git a/arch/m68k/configs/multi_defconfig b/arch/m68k/configs/multi_defconfig
2129 +index 2a84eeec5b02..dc9b0d885e8b 100644
2130 +--- a/arch/m68k/configs/multi_defconfig
2131 ++++ b/arch/m68k/configs/multi_defconfig
2132 +@@ -713,7 +713,6 @@ CONFIG_CRYPTO_SALSA20=m
2133 + CONFIG_CRYPTO_SEED=m
2134 + CONFIG_CRYPTO_SERPENT=m
2135 + CONFIG_CRYPTO_SM4=m
2136 +-CONFIG_CRYPTO_SPECK=m
2137 + CONFIG_CRYPTO_TEA=m
2138 + CONFIG_CRYPTO_TWOFISH=m
2139 + CONFIG_CRYPTO_LZO=m
2140 +diff --git a/arch/m68k/configs/mvme147_defconfig b/arch/m68k/configs/mvme147_defconfig
2141 +index 476e69994340..0d815a375ba0 100644
2142 +--- a/arch/m68k/configs/mvme147_defconfig
2143 ++++ b/arch/m68k/configs/mvme147_defconfig
2144 +@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
2145 + CONFIG_CRYPTO_SEED=m
2146 + CONFIG_CRYPTO_SERPENT=m
2147 + CONFIG_CRYPTO_SM4=m
2148 +-CONFIG_CRYPTO_SPECK=m
2149 + CONFIG_CRYPTO_TEA=m
2150 + CONFIG_CRYPTO_TWOFISH=m
2151 + CONFIG_CRYPTO_LZO=m
2152 +diff --git a/arch/m68k/configs/mvme16x_defconfig b/arch/m68k/configs/mvme16x_defconfig
2153 +index 1477cda9146e..0cb8109b4c9e 100644
2154 +--- a/arch/m68k/configs/mvme16x_defconfig
2155 ++++ b/arch/m68k/configs/mvme16x_defconfig
2156 +@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
2157 + CONFIG_CRYPTO_SEED=m
2158 + CONFIG_CRYPTO_SERPENT=m
2159 + CONFIG_CRYPTO_SM4=m
2160 +-CONFIG_CRYPTO_SPECK=m
2161 + CONFIG_CRYPTO_TEA=m
2162 + CONFIG_CRYPTO_TWOFISH=m
2163 + CONFIG_CRYPTO_LZO=m
2164 +diff --git a/arch/m68k/configs/q40_defconfig b/arch/m68k/configs/q40_defconfig
2165 +index b3a543dc48a0..e91a1c28bba7 100644
2166 +--- a/arch/m68k/configs/q40_defconfig
2167 ++++ b/arch/m68k/configs/q40_defconfig
2168 +@@ -624,7 +624,6 @@ CONFIG_CRYPTO_SALSA20=m
2169 + CONFIG_CRYPTO_SEED=m
2170 + CONFIG_CRYPTO_SERPENT=m
2171 + CONFIG_CRYPTO_SM4=m
2172 +-CONFIG_CRYPTO_SPECK=m
2173 + CONFIG_CRYPTO_TEA=m
2174 + CONFIG_CRYPTO_TWOFISH=m
2175 + CONFIG_CRYPTO_LZO=m
2176 +diff --git a/arch/m68k/configs/sun3_defconfig b/arch/m68k/configs/sun3_defconfig
2177 +index d543ed5dfa96..3b2f0914c34f 100644
2178 +--- a/arch/m68k/configs/sun3_defconfig
2179 ++++ b/arch/m68k/configs/sun3_defconfig
2180 +@@ -602,7 +602,6 @@ CONFIG_CRYPTO_SALSA20=m
2181 + CONFIG_CRYPTO_SEED=m
2182 + CONFIG_CRYPTO_SERPENT=m
2183 + CONFIG_CRYPTO_SM4=m
2184 +-CONFIG_CRYPTO_SPECK=m
2185 + CONFIG_CRYPTO_TEA=m
2186 + CONFIG_CRYPTO_TWOFISH=m
2187 + CONFIG_CRYPTO_LZO=m
2188 +diff --git a/arch/m68k/configs/sun3x_defconfig b/arch/m68k/configs/sun3x_defconfig
2189 +index a67e54246023..e4365ef4f5ed 100644
2190 +--- a/arch/m68k/configs/sun3x_defconfig
2191 ++++ b/arch/m68k/configs/sun3x_defconfig
2192 +@@ -603,7 +603,6 @@ CONFIG_CRYPTO_SALSA20=m
2193 + CONFIG_CRYPTO_SEED=m
2194 + CONFIG_CRYPTO_SERPENT=m
2195 + CONFIG_CRYPTO_SM4=m
2196 +-CONFIG_CRYPTO_SPECK=m
2197 + CONFIG_CRYPTO_TEA=m
2198 + CONFIG_CRYPTO_TWOFISH=m
2199 + CONFIG_CRYPTO_LZO=m
2200 +diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
2201 +index 75108ec669eb..6c79e8a16a26 100644
2202 +--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
2203 ++++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
2204 +@@ -67,7 +67,7 @@ void (*cvmx_override_pko_queue_priority) (int pko_port,
2205 + void (*cvmx_override_ipd_port_setup) (int ipd_port);
2206 +
2207 + /* Port count per interface */
2208 +-static int interface_port_count[5];
2209 ++static int interface_port_count[9];
2210 +
2211 + /**
2212 + * Return the number of interfaces the chip has. Each interface
2213 +diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
2214 +index fac26ce64b2f..e76e88222a4b 100644
2215 +--- a/arch/mips/lib/memset.S
2216 ++++ b/arch/mips/lib/memset.S
2217 +@@ -262,9 +262,11 @@
2218 + nop
2219 +
2220 + .Lsmall_fixup\@:
2221 ++ .set reorder
2222 + PTR_SUBU a2, t1, a0
2223 ++ PTR_ADDIU a2, 1
2224 + jr ra
2225 +- PTR_ADDIU a2, 1
2226 ++ .set noreorder
2227 +
2228 + .endm
2229 +
2230 +diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
2231 +index 1b4732e20137..843825a7e6e2 100644
2232 +--- a/arch/parisc/kernel/entry.S
2233 ++++ b/arch/parisc/kernel/entry.S
2234 +@@ -185,7 +185,7 @@
2235 + bv,n 0(%r3)
2236 + nop
2237 + .word 0 /* checksum (will be patched) */
2238 +- .word PA(os_hpmc) /* address of handler */
2239 ++ .word 0 /* address of handler */
2240 + .word 0 /* length of handler */
2241 + .endm
2242 +
2243 +diff --git a/arch/parisc/kernel/hpmc.S b/arch/parisc/kernel/hpmc.S
2244 +index 781c3b9a3e46..fde654115564 100644
2245 +--- a/arch/parisc/kernel/hpmc.S
2246 ++++ b/arch/parisc/kernel/hpmc.S
2247 +@@ -85,7 +85,7 @@ END(hpmc_pim_data)
2248 +
2249 + .import intr_save, code
2250 + .align 16
2251 +-ENTRY_CFI(os_hpmc)
2252 ++ENTRY(os_hpmc)
2253 + .os_hpmc:
2254 +
2255 + /*
2256 +@@ -302,7 +302,6 @@ os_hpmc_6:
2257 + b .
2258 + nop
2259 + .align 16 /* make function length multiple of 16 bytes */
2260 +-ENDPROC_CFI(os_hpmc)
2261 + .os_hpmc_end:
2262 +
2263 +
2264 +diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
2265 +index 4309ad31a874..2cb35e1e0099 100644
2266 +--- a/arch/parisc/kernel/traps.c
2267 ++++ b/arch/parisc/kernel/traps.c
2268 +@@ -827,7 +827,8 @@ void __init initialize_ivt(const void *iva)
2269 + * the Length/4 words starting at Address is zero.
2270 + */
2271 +
2272 +- /* Compute Checksum for HPMC handler */
2273 ++ /* Setup IVA and compute checksum for HPMC handler */
2274 ++ ivap[6] = (u32)__pa(os_hpmc);
2275 + length = os_hpmc_size;
2276 + ivap[7] = length;
2277 +
2278 +diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
2279 +index 2607d2d33405..db6cd857c8c0 100644
2280 +--- a/arch/parisc/mm/init.c
2281 ++++ b/arch/parisc/mm/init.c
2282 +@@ -495,12 +495,8 @@ static void __init map_pages(unsigned long start_vaddr,
2283 + pte = pte_mkhuge(pte);
2284 + }
2285 +
2286 +- if (address >= end_paddr) {
2287 +- if (force)
2288 +- break;
2289 +- else
2290 +- pte_val(pte) = 0;
2291 +- }
2292 ++ if (address >= end_paddr)
2293 ++ break;
2294 +
2295 + set_pte(pg_table, pte);
2296 +
2297 +diff --git a/arch/powerpc/include/asm/mpic.h b/arch/powerpc/include/asm/mpic.h
2298 +index fad8ddd697ac..0abf2e7fd222 100644
2299 +--- a/arch/powerpc/include/asm/mpic.h
2300 ++++ b/arch/powerpc/include/asm/mpic.h
2301 +@@ -393,7 +393,14 @@ extern struct bus_type mpic_subsys;
2302 + #define MPIC_REGSET_TSI108 MPIC_REGSET(1) /* Tsi108/109 PIC */
2303 +
2304 + /* Get the version of primary MPIC */
2305 ++#ifdef CONFIG_MPIC
2306 + extern u32 fsl_mpic_primary_get_version(void);
2307 ++#else
2308 ++static inline u32 fsl_mpic_primary_get_version(void)
2309 ++{
2310 ++ return 0;
2311 ++}
2312 ++#endif
2313 +
2314 + /* Allocate the controller structure and setup the linux irq descs
2315 + * for the range if interrupts passed in. No HW initialization is
2316 +diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
2317 +index 38c5b4764bfe..a74ffd5ad15c 100644
2318 +--- a/arch/powerpc/kernel/mce_power.c
2319 ++++ b/arch/powerpc/kernel/mce_power.c
2320 +@@ -97,6 +97,13 @@ static void flush_and_reload_slb(void)
2321 +
2322 + static void flush_erat(void)
2323 + {
2324 ++#ifdef CONFIG_PPC_BOOK3S_64
2325 ++ if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
2326 ++ flush_and_reload_slb();
2327 ++ return;
2328 ++ }
2329 ++#endif
2330 ++ /* PPC_INVALIDATE_ERAT can only be used on ISA v3 and newer */
2331 + asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
2332 + }
2333 +
2334 +diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
2335 +index 225bc5f91049..03dd2f9d60cf 100644
2336 +--- a/arch/powerpc/kernel/setup_64.c
2337 ++++ b/arch/powerpc/kernel/setup_64.c
2338 +@@ -242,13 +242,19 @@ static void cpu_ready_for_interrupts(void)
2339 + }
2340 +
2341 + /*
2342 +- * Fixup HFSCR:TM based on CPU features. The bit is set by our
2343 +- * early asm init because at that point we haven't updated our
2344 +- * CPU features from firmware and device-tree. Here we have,
2345 +- * so let's do it.
2346 ++ * Set HFSCR:TM based on CPU features:
2347 ++ * In the special case of TM no suspend (P9N DD2.1), Linux is
2348 ++ * told TM is off via the dt-ftrs but told to (partially) use
2349 ++ * it via OPAL_REINIT_CPUS_TM_SUSPEND_DISABLED. So HFSCR[TM]
2350 ++ * will be off from dt-ftrs but we need to turn it on for the
2351 ++ * no suspend case.
2352 + */
2353 +- if (cpu_has_feature(CPU_FTR_HVMODE) && !cpu_has_feature(CPU_FTR_TM_COMP))
2354 +- mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
2355 ++ if (cpu_has_feature(CPU_FTR_HVMODE)) {
2356 ++ if (cpu_has_feature(CPU_FTR_TM_COMP))
2357 ++ mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) | HFSCR_TM);
2358 ++ else
2359 ++ mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
2360 ++ }
2361 +
2362 + /* Set IR and DR in PACA MSR */
2363 + get_paca()->kernel_msr = MSR_KERNEL;
2364 +diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
2365 +index 1d049c78c82a..2e45e5fbad5b 100644
2366 +--- a/arch/powerpc/mm/hash_native_64.c
2367 ++++ b/arch/powerpc/mm/hash_native_64.c
2368 +@@ -115,6 +115,8 @@ static void tlbiel_all_isa300(unsigned int num_sets, unsigned int is)
2369 + tlbiel_hash_set_isa300(0, is, 0, 2, 1);
2370 +
2371 + asm volatile("ptesync": : :"memory");
2372 ++
2373 ++ asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
2374 + }
2375 +
2376 + void hash__tlbiel_all(unsigned int action)
2377 +@@ -140,8 +142,6 @@ void hash__tlbiel_all(unsigned int action)
2378 + tlbiel_all_isa206(POWER7_TLB_SETS, is);
2379 + else
2380 + WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
2381 +-
2382 +- asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
2383 + }
2384 +
2385 + static inline unsigned long ___tlbie(unsigned long vpn, int psize,
2386 +diff --git a/arch/s390/defconfig b/arch/s390/defconfig
2387 +index f40600eb1762..5134c71a4937 100644
2388 +--- a/arch/s390/defconfig
2389 ++++ b/arch/s390/defconfig
2390 +@@ -221,7 +221,6 @@ CONFIG_CRYPTO_SALSA20=m
2391 + CONFIG_CRYPTO_SEED=m
2392 + CONFIG_CRYPTO_SERPENT=m
2393 + CONFIG_CRYPTO_SM4=m
2394 +-CONFIG_CRYPTO_SPECK=m
2395 + CONFIG_CRYPTO_TEA=m
2396 + CONFIG_CRYPTO_TWOFISH=m
2397 + CONFIG_CRYPTO_DEFLATE=m
2398 +diff --git a/arch/s390/kernel/sthyi.c b/arch/s390/kernel/sthyi.c
2399 +index 0859cde36f75..888cc2f166db 100644
2400 +--- a/arch/s390/kernel/sthyi.c
2401 ++++ b/arch/s390/kernel/sthyi.c
2402 +@@ -183,17 +183,19 @@ static void fill_hdr(struct sthyi_sctns *sctns)
2403 + static void fill_stsi_mac(struct sthyi_sctns *sctns,
2404 + struct sysinfo_1_1_1 *sysinfo)
2405 + {
2406 ++ sclp_ocf_cpc_name_copy(sctns->mac.infmname);
2407 ++ if (*(u64 *)sctns->mac.infmname != 0)
2408 ++ sctns->mac.infmval1 |= MAC_NAME_VLD;
2409 ++
2410 + if (stsi(sysinfo, 1, 1, 1))
2411 + return;
2412 +
2413 +- sclp_ocf_cpc_name_copy(sctns->mac.infmname);
2414 +-
2415 + memcpy(sctns->mac.infmtype, sysinfo->type, sizeof(sctns->mac.infmtype));
2416 + memcpy(sctns->mac.infmmanu, sysinfo->manufacturer, sizeof(sctns->mac.infmmanu));
2417 + memcpy(sctns->mac.infmpman, sysinfo->plant, sizeof(sctns->mac.infmpman));
2418 + memcpy(sctns->mac.infmseq, sysinfo->sequence, sizeof(sctns->mac.infmseq));
2419 +
2420 +- sctns->mac.infmval1 |= MAC_ID_VLD | MAC_NAME_VLD;
2421 ++ sctns->mac.infmval1 |= MAC_ID_VLD;
2422 + }
2423 +
2424 + static void fill_stsi_par(struct sthyi_sctns *sctns,
2425 +diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
2426 +index d4e6cd4577e5..bf0e82400358 100644
2427 +--- a/arch/x86/boot/tools/build.c
2428 ++++ b/arch/x86/boot/tools/build.c
2429 +@@ -391,6 +391,13 @@ int main(int argc, char ** argv)
2430 + die("Unable to mmap '%s': %m", argv[2]);
2431 + /* Number of 16-byte paragraphs, including space for a 4-byte CRC */
2432 + sys_size = (sz + 15 + 4) / 16;
2433 ++#ifdef CONFIG_EFI_STUB
2434 ++ /*
2435 ++ * COFF requires minimum 32-byte alignment of sections, and
2436 ++ * adding a signature is problematic without that alignment.
2437 ++ */
2438 ++ sys_size = (sys_size + 1) & ~1;
2439 ++#endif
2440 +
2441 + /* Patch the setup code with the appropriate size parameters */
2442 + buf[0x1f1] = setup_sectors-1;
2443 +diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
2444 +index acbe7e8336d8..e4b78f962874 100644
2445 +--- a/arch/x86/crypto/aesni-intel_glue.c
2446 ++++ b/arch/x86/crypto/aesni-intel_glue.c
2447 +@@ -817,7 +817,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
2448 + /* Linearize assoc, if not already linear */
2449 + if (req->src->length >= assoclen && req->src->length &&
2450 + (!PageHighMem(sg_page(req->src)) ||
2451 +- req->src->offset + req->src->length < PAGE_SIZE)) {
2452 ++ req->src->offset + req->src->length <= PAGE_SIZE)) {
2453 + scatterwalk_start(&assoc_sg_walk, req->src);
2454 + assoc = scatterwalk_map(&assoc_sg_walk);
2455 + } else {
2456 +diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
2457 +index 64aaa3f5f36c..c8ac84e90d0f 100644
2458 +--- a/arch/x86/include/asm/cpufeatures.h
2459 ++++ b/arch/x86/include/asm/cpufeatures.h
2460 +@@ -220,6 +220,7 @@
2461 + #define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */
2462 + #define X86_FEATURE_ZEN ( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
2463 + #define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */
2464 ++#define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* Enhanced IBRS */
2465 +
2466 + /* Virtualization flags: Linux defined, word 8 */
2467 + #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
2468 +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
2469 +index 0722b7745382..ccc23203b327 100644
2470 +--- a/arch/x86/include/asm/kvm_host.h
2471 ++++ b/arch/x86/include/asm/kvm_host.h
2472 +@@ -176,6 +176,7 @@ enum {
2473 +
2474 + #define DR6_BD (1 << 13)
2475 + #define DR6_BS (1 << 14)
2476 ++#define DR6_BT (1 << 15)
2477 + #define DR6_RTM (1 << 16)
2478 + #define DR6_FIXED_1 0xfffe0ff0
2479 + #define DR6_INIT 0xffff0ff0
2480 +diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
2481 +index f6f6c63da62f..e7c8086e570e 100644
2482 +--- a/arch/x86/include/asm/nospec-branch.h
2483 ++++ b/arch/x86/include/asm/nospec-branch.h
2484 +@@ -215,6 +215,7 @@ enum spectre_v2_mitigation {
2485 + SPECTRE_V2_RETPOLINE_GENERIC,
2486 + SPECTRE_V2_RETPOLINE_AMD,
2487 + SPECTRE_V2_IBRS,
2488 ++ SPECTRE_V2_IBRS_ENHANCED,
2489 + };
2490 +
2491 + /* The Speculative Store Bypass disable variants */
2492 +diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
2493 +index 0af97e51e609..6f293d9a0b07 100644
2494 +--- a/arch/x86/include/asm/tlbflush.h
2495 ++++ b/arch/x86/include/asm/tlbflush.h
2496 +@@ -469,6 +469,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
2497 + */
2498 + static inline void __flush_tlb_all(void)
2499 + {
2500 ++ /*
2501 ++ * This is to catch users with enabled preemption and the PGE feature
2502 ++ * and don't trigger the warning in __native_flush_tlb().
2503 ++ */
2504 ++ VM_WARN_ON_ONCE(preemptible());
2505 ++
2506 + if (boot_cpu_has(X86_FEATURE_PGE)) {
2507 + __flush_tlb_global();
2508 + } else {
2509 +diff --git a/arch/x86/kernel/check.c b/arch/x86/kernel/check.c
2510 +index 33399426793e..cc8258a5378b 100644
2511 +--- a/arch/x86/kernel/check.c
2512 ++++ b/arch/x86/kernel/check.c
2513 +@@ -31,6 +31,11 @@ static __init int set_corruption_check(char *arg)
2514 + ssize_t ret;
2515 + unsigned long val;
2516 +
2517 ++ if (!arg) {
2518 ++ pr_err("memory_corruption_check config string not provided\n");
2519 ++ return -EINVAL;
2520 ++ }
2521 ++
2522 + ret = kstrtoul(arg, 10, &val);
2523 + if (ret)
2524 + return ret;
2525 +@@ -45,6 +50,11 @@ static __init int set_corruption_check_period(char *arg)
2526 + ssize_t ret;
2527 + unsigned long val;
2528 +
2529 ++ if (!arg) {
2530 ++ pr_err("memory_corruption_check_period config string not provided\n");
2531 ++ return -EINVAL;
2532 ++ }
2533 ++
2534 + ret = kstrtoul(arg, 10, &val);
2535 + if (ret)
2536 + return ret;
2537 +@@ -59,6 +69,11 @@ static __init int set_corruption_check_size(char *arg)
2538 + char *end;
2539 + unsigned size;
2540 +
2541 ++ if (!arg) {
2542 ++ pr_err("memory_corruption_check_size config string not provided\n");
2543 ++ return -EINVAL;
2544 ++ }
2545 ++
2546 + size = memparse(arg, &end);
2547 +
2548 + if (*end == '\0')
2549 +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
2550 +index 4891a621a752..91e5e086606c 100644
2551 +--- a/arch/x86/kernel/cpu/bugs.c
2552 ++++ b/arch/x86/kernel/cpu/bugs.c
2553 +@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
2554 + static void __init ssb_select_mitigation(void);
2555 + static void __init l1tf_select_mitigation(void);
2556 +
2557 +-/*
2558 +- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
2559 +- * writes to SPEC_CTRL contain whatever reserved bits have been set.
2560 +- */
2561 +-u64 __ro_after_init x86_spec_ctrl_base;
2562 ++/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
2563 ++u64 x86_spec_ctrl_base;
2564 + EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
2565 ++static DEFINE_MUTEX(spec_ctrl_mutex);
2566 +
2567 + /*
2568 + * The vendor and possibly platform specific bits which can be modified in
2569 +@@ -141,6 +139,7 @@ static const char *spectre_v2_strings[] = {
2570 + [SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline",
2571 + [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline",
2572 + [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline",
2573 ++ [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS",
2574 + };
2575 +
2576 + #undef pr_fmt
2577 +@@ -324,6 +323,46 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
2578 + return cmd;
2579 + }
2580 +
2581 ++static bool stibp_needed(void)
2582 ++{
2583 ++ if (spectre_v2_enabled == SPECTRE_V2_NONE)
2584 ++ return false;
2585 ++
2586 ++ if (!boot_cpu_has(X86_FEATURE_STIBP))
2587 ++ return false;
2588 ++
2589 ++ return true;
2590 ++}
2591 ++
2592 ++static void update_stibp_msr(void *info)
2593 ++{
2594 ++ wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
2595 ++}
2596 ++
2597 ++void arch_smt_update(void)
2598 ++{
2599 ++ u64 mask;
2600 ++
2601 ++ if (!stibp_needed())
2602 ++ return;
2603 ++
2604 ++ mutex_lock(&spec_ctrl_mutex);
2605 ++ mask = x86_spec_ctrl_base;
2606 ++ if (cpu_smt_control == CPU_SMT_ENABLED)
2607 ++ mask |= SPEC_CTRL_STIBP;
2608 ++ else
2609 ++ mask &= ~SPEC_CTRL_STIBP;
2610 ++
2611 ++ if (mask != x86_spec_ctrl_base) {
2612 ++ pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
2613 ++ cpu_smt_control == CPU_SMT_ENABLED ?
2614 ++ "Enabling" : "Disabling");
2615 ++ x86_spec_ctrl_base = mask;
2616 ++ on_each_cpu(update_stibp_msr, NULL, 1);
2617 ++ }
2618 ++ mutex_unlock(&spec_ctrl_mutex);
2619 ++}
2620 ++
2621 + static void __init spectre_v2_select_mitigation(void)
2622 + {
2623 + enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
2624 +@@ -343,6 +382,13 @@ static void __init spectre_v2_select_mitigation(void)
2625 +
2626 + case SPECTRE_V2_CMD_FORCE:
2627 + case SPECTRE_V2_CMD_AUTO:
2628 ++ if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
2629 ++ mode = SPECTRE_V2_IBRS_ENHANCED;
2630 ++ /* Force it so VMEXIT will restore correctly */
2631 ++ x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
2632 ++ wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
2633 ++ goto specv2_set_mode;
2634 ++ }
2635 + if (IS_ENABLED(CONFIG_RETPOLINE))
2636 + goto retpoline_auto;
2637 + break;
2638 +@@ -380,6 +426,7 @@ retpoline_auto:
2639 + setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
2640 + }
2641 +
2642 ++specv2_set_mode:
2643 + spectre_v2_enabled = mode;
2644 + pr_info("%s\n", spectre_v2_strings[mode]);
2645 +
2646 +@@ -402,12 +449,22 @@ retpoline_auto:
2647 +
2648 + /*
2649 + * Retpoline means the kernel is safe because it has no indirect
2650 +- * branches. But firmware isn't, so use IBRS to protect that.
2651 ++ * branches. Enhanced IBRS protects firmware too, so, enable restricted
2652 ++ * speculation around firmware calls only when Enhanced IBRS isn't
2653 ++ * supported.
2654 ++ *
2655 ++ * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
2656 ++ * the user might select retpoline on the kernel command line and if
2657 ++ * the CPU supports Enhanced IBRS, kernel might un-intentionally not
2658 ++ * enable IBRS around firmware calls.
2659 + */
2660 +- if (boot_cpu_has(X86_FEATURE_IBRS)) {
2661 ++ if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
2662 + setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
2663 + pr_info("Enabling Restricted Speculation for firmware calls\n");
2664 + }
2665 ++
2666 ++ /* Enable STIBP if appropriate */
2667 ++ arch_smt_update();
2668 + }
2669 +
2670 + #undef pr_fmt
2671 +@@ -798,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
2672 + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
2673 + char *buf, unsigned int bug)
2674 + {
2675 ++ int ret;
2676 ++
2677 + if (!boot_cpu_has_bug(bug))
2678 + return sprintf(buf, "Not affected\n");
2679 +
2680 +@@ -815,10 +874,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
2681 + return sprintf(buf, "Mitigation: __user pointer sanitization\n");
2682 +
2683 + case X86_BUG_SPECTRE_V2:
2684 +- return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
2685 ++ ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
2686 + boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
2687 + boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
2688 ++ (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
2689 + spectre_v2_module_string());
2690 ++ return ret;
2691 +
2692 + case X86_BUG_SPEC_STORE_BYPASS:
2693 + return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
2694 +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
2695 +index 1ee8ea36af30..79561bfcfa87 100644
2696 +--- a/arch/x86/kernel/cpu/common.c
2697 ++++ b/arch/x86/kernel/cpu/common.c
2698 +@@ -1015,6 +1015,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
2699 + !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
2700 + setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
2701 +
2702 ++ if (ia32_cap & ARCH_CAP_IBRS_ALL)
2703 ++ setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
2704 ++
2705 + if (x86_match_cpu(cpu_no_meltdown))
2706 + return;
2707 +
2708 +diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
2709 +index 749856a2e736..bc3801985d73 100644
2710 +--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
2711 ++++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
2712 +@@ -2032,6 +2032,13 @@ static int rdtgroup_show_options(struct seq_file *seq, struct kernfs_root *kf)
2713 + {
2714 + if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled)
2715 + seq_puts(seq, ",cdp");
2716 ++
2717 ++ if (rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled)
2718 ++ seq_puts(seq, ",cdpl2");
2719 ++
2720 ++ if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA]))
2721 ++ seq_puts(seq, ",mba_MBps");
2722 ++
2723 + return 0;
2724 + }
2725 +
2726 +diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
2727 +index 23f1691670b6..61a949d84dfa 100644
2728 +--- a/arch/x86/kernel/fpu/signal.c
2729 ++++ b/arch/x86/kernel/fpu/signal.c
2730 +@@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
2731 + * thread's fpu state, reconstruct fxstate from the fsave
2732 + * header. Validate and sanitize the copied state.
2733 + */
2734 +- struct fpu *fpu = &tsk->thread.fpu;
2735 + struct user_i387_ia32_struct env;
2736 + int err = 0;
2737 +
2738 +diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
2739 +index 203d398802a3..1467f966cfec 100644
2740 +--- a/arch/x86/kernel/kprobes/opt.c
2741 ++++ b/arch/x86/kernel/kprobes/opt.c
2742 +@@ -179,7 +179,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
2743 + opt_pre_handler(&op->kp, regs);
2744 + __this_cpu_write(current_kprobe, NULL);
2745 + }
2746 +- preempt_enable_no_resched();
2747 ++ preempt_enable();
2748 + }
2749 + NOKPROBE_SYMBOL(optimized_callback);
2750 +
2751 +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
2752 +index 9efe130ea2e6..9fcc3ec3ab78 100644
2753 +--- a/arch/x86/kvm/vmx.c
2754 ++++ b/arch/x86/kvm/vmx.c
2755 +@@ -3160,10 +3160,13 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
2756 + }
2757 + } else {
2758 + if (vmcs12->exception_bitmap & (1u << nr)) {
2759 +- if (nr == DB_VECTOR)
2760 ++ if (nr == DB_VECTOR) {
2761 + *exit_qual = vcpu->arch.dr6;
2762 +- else
2763 ++ *exit_qual &= ~(DR6_FIXED_1 | DR6_BT);
2764 ++ *exit_qual ^= DR6_RTM;
2765 ++ } else {
2766 + *exit_qual = 0;
2767 ++ }
2768 + return 1;
2769 + }
2770 + }
2771 +diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
2772 +index 8d6c34fe49be..800de88208d7 100644
2773 +--- a/arch/x86/mm/pageattr.c
2774 ++++ b/arch/x86/mm/pageattr.c
2775 +@@ -2063,9 +2063,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
2776 +
2777 + /*
2778 + * We should perform an IPI and flush all tlbs,
2779 +- * but that can deadlock->flush only current cpu:
2780 ++ * but that can deadlock->flush only current cpu.
2781 ++ * Preemption needs to be disabled around __flush_tlb_all() due to
2782 ++ * CR3 reload in __native_flush_tlb().
2783 + */
2784 ++ preempt_disable();
2785 + __flush_tlb_all();
2786 ++ preempt_enable();
2787 +
2788 + arch_flush_lazy_mmu_mode();
2789 + }
2790 +diff --git a/arch/x86/platform/olpc/olpc-xo1-rtc.c b/arch/x86/platform/olpc/olpc-xo1-rtc.c
2791 +index a2b4efddd61a..8e7ddd7e313a 100644
2792 +--- a/arch/x86/platform/olpc/olpc-xo1-rtc.c
2793 ++++ b/arch/x86/platform/olpc/olpc-xo1-rtc.c
2794 +@@ -16,6 +16,7 @@
2795 +
2796 + #include <asm/msr.h>
2797 + #include <asm/olpc.h>
2798 ++#include <asm/x86_init.h>
2799 +
2800 + static void rtc_wake_on(struct device *dev)
2801 + {
2802 +@@ -75,6 +76,8 @@ static int __init xo1_rtc_init(void)
2803 + if (r)
2804 + return r;
2805 +
2806 ++ x86_platform.legacy.rtc = 0;
2807 ++
2808 + device_init_wakeup(&xo1_rtc_device.dev, 1);
2809 + return 0;
2810 + }
2811 +diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
2812 +index c85d1a88f476..f7f77023288a 100644
2813 +--- a/arch/x86/xen/enlighten_pvh.c
2814 ++++ b/arch/x86/xen/enlighten_pvh.c
2815 +@@ -75,7 +75,7 @@ static void __init init_pvh_bootparams(void)
2816 + * Version 2.12 supports Xen entry point but we will use default x86/PC
2817 + * environment (i.e. hardware_subarch 0).
2818 + */
2819 +- pvh_bootparams.hdr.version = 0x212;
2820 ++ pvh_bootparams.hdr.version = (2 << 8) | 12;
2821 + pvh_bootparams.hdr.type_of_loader = (9 << 4) | 0; /* Xen loader */
2822 +
2823 + x86_init.acpi.get_root_pointer = pvh_get_root_pointer;
2824 +diff --git a/arch/x86/xen/platform-pci-unplug.c b/arch/x86/xen/platform-pci-unplug.c
2825 +index 33a783c77d96..184b36922397 100644
2826 +--- a/arch/x86/xen/platform-pci-unplug.c
2827 ++++ b/arch/x86/xen/platform-pci-unplug.c
2828 +@@ -146,6 +146,10 @@ void xen_unplug_emulated_devices(void)
2829 + {
2830 + int r;
2831 +
2832 ++ /* PVH guests don't have emulated devices. */
2833 ++ if (xen_pvh_domain())
2834 ++ return;
2835 ++
2836 + /* user explicitly requested no unplug */
2837 + if (xen_emul_unplug & XEN_UNPLUG_NEVER)
2838 + return;
2839 +diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
2840 +index cd97a62394e7..a970a2aa4456 100644
2841 +--- a/arch/x86/xen/spinlock.c
2842 ++++ b/arch/x86/xen/spinlock.c
2843 +@@ -9,6 +9,7 @@
2844 + #include <linux/log2.h>
2845 + #include <linux/gfp.h>
2846 + #include <linux/slab.h>
2847 ++#include <linux/atomic.h>
2848 +
2849 + #include <asm/paravirt.h>
2850 + #include <asm/qspinlock.h>
2851 +@@ -21,6 +22,7 @@
2852 +
2853 + static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
2854 + static DEFINE_PER_CPU(char *, irq_name);
2855 ++static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
2856 + static bool xen_pvspin = true;
2857 +
2858 + static void xen_qlock_kick(int cpu)
2859 +@@ -40,33 +42,24 @@ static void xen_qlock_kick(int cpu)
2860 + static void xen_qlock_wait(u8 *byte, u8 val)
2861 + {
2862 + int irq = __this_cpu_read(lock_kicker_irq);
2863 ++ atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
2864 +
2865 + /* If kicker interrupts not initialized yet, just spin */
2866 +- if (irq == -1)
2867 ++ if (irq == -1 || in_nmi())
2868 + return;
2869 +
2870 +- /* clear pending */
2871 +- xen_clear_irq_pending(irq);
2872 +- barrier();
2873 +-
2874 +- /*
2875 +- * We check the byte value after clearing pending IRQ to make sure
2876 +- * that we won't miss a wakeup event because of the clearing.
2877 +- *
2878 +- * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
2879 +- * So it is effectively a memory barrier for x86.
2880 +- */
2881 +- if (READ_ONCE(*byte) != val)
2882 +- return;
2883 ++ /* Detect reentry. */
2884 ++ atomic_inc(nest_cnt);
2885 +
2886 +- /*
2887 +- * If an interrupt happens here, it will leave the wakeup irq
2888 +- * pending, which will cause xen_poll_irq() to return
2889 +- * immediately.
2890 +- */
2891 ++ /* If irq pending already and no nested call clear it. */
2892 ++ if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
2893 ++ xen_clear_irq_pending(irq);
2894 ++ } else if (READ_ONCE(*byte) == val) {
2895 ++ /* Block until irq becomes pending (or a spurious wakeup) */
2896 ++ xen_poll_irq(irq);
2897 ++ }
2898 +
2899 +- /* Block until irq becomes pending (or perhaps a spurious wakeup) */
2900 +- xen_poll_irq(irq);
2901 ++ atomic_dec(nest_cnt);
2902 + }
2903 +
2904 + static irqreturn_t dummy_handler(int irq, void *dev_id)
2905 +diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
2906 +index ca2d3b2bf2af..58722a052f9c 100644
2907 +--- a/arch/x86/xen/xen-pvh.S
2908 ++++ b/arch/x86/xen/xen-pvh.S
2909 +@@ -181,7 +181,7 @@ canary:
2910 + .fill 48, 1, 0
2911 +
2912 + early_stack:
2913 +- .fill 256, 1, 0
2914 ++ .fill BOOT_STACK_SIZE, 1, 0
2915 + early_stack_end:
2916 +
2917 + ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,
2918 +diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
2919 +index 4498c43245e2..681498e5d40a 100644
2920 +--- a/block/bfq-wf2q.c
2921 ++++ b/block/bfq-wf2q.c
2922 +@@ -1178,10 +1178,17 @@ bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)
2923 + st = bfq_entity_service_tree(entity);
2924 + is_in_service = entity == sd->in_service_entity;
2925 +
2926 +- if (is_in_service) {
2927 +- bfq_calc_finish(entity, entity->service);
2928 ++ bfq_calc_finish(entity, entity->service);
2929 ++
2930 ++ if (is_in_service)
2931 + sd->in_service_entity = NULL;
2932 +- }
2933 ++ else
2934 ++ /*
2935 ++ * Non in-service entity: nobody will take care of
2936 ++ * resetting its service counter on expiration. Do it
2937 ++ * now.
2938 ++ */
2939 ++ entity->service = 0;
2940 +
2941 + if (entity->tree == &st->active)
2942 + bfq_active_extract(st, entity);
2943 +diff --git a/block/blk-lib.c b/block/blk-lib.c
2944 +index d1b9dd03da25..1f196cf0aa5d 100644
2945 +--- a/block/blk-lib.c
2946 ++++ b/block/blk-lib.c
2947 +@@ -29,9 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
2948 + {
2949 + struct request_queue *q = bdev_get_queue(bdev);
2950 + struct bio *bio = *biop;
2951 +- unsigned int granularity;
2952 + unsigned int op;
2953 +- int alignment;
2954 + sector_t bs_mask;
2955 +
2956 + if (!q)
2957 +@@ -54,38 +52,15 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
2958 + if ((sector | nr_sects) & bs_mask)
2959 + return -EINVAL;
2960 +
2961 +- /* Zero-sector (unknown) and one-sector granularities are the same. */
2962 +- granularity = max(q->limits.discard_granularity >> 9, 1U);
2963 +- alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
2964 +-
2965 + while (nr_sects) {
2966 +- unsigned int req_sects;
2967 +- sector_t end_sect, tmp;
2968 ++ unsigned int req_sects = nr_sects;
2969 ++ sector_t end_sect;
2970 +
2971 +- /*
2972 +- * Issue in chunks of the user defined max discard setting,
2973 +- * ensuring that bi_size doesn't overflow
2974 +- */
2975 +- req_sects = min_t(sector_t, nr_sects,
2976 +- q->limits.max_discard_sectors);
2977 + if (!req_sects)
2978 + goto fail;
2979 +- if (req_sects > UINT_MAX >> 9)
2980 +- req_sects = UINT_MAX >> 9;
2981 ++ req_sects = min(req_sects, bio_allowed_max_sectors(q));
2982 +
2983 +- /*
2984 +- * If splitting a request, and the next starting sector would be
2985 +- * misaligned, stop the discard at the previous aligned sector.
2986 +- */
2987 + end_sect = sector + req_sects;
2988 +- tmp = end_sect;
2989 +- if (req_sects < nr_sects &&
2990 +- sector_div(tmp, granularity) != alignment) {
2991 +- end_sect = end_sect - alignment;
2992 +- sector_div(end_sect, granularity);
2993 +- end_sect = end_sect * granularity + alignment;
2994 +- req_sects = end_sect - sector;
2995 +- }
2996 +
2997 + bio = next_bio(bio, 0, gfp_mask);
2998 + bio->bi_iter.bi_sector = sector;
2999 +@@ -186,7 +161,7 @@ static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
3000 + return -EOPNOTSUPP;
3001 +
3002 + /* Ensure that max_write_same_sectors doesn't overflow bi_size */
3003 +- max_write_same_sectors = UINT_MAX >> 9;
3004 ++ max_write_same_sectors = bio_allowed_max_sectors(q);
3005 +
3006 + while (nr_sects) {
3007 + bio = next_bio(bio, 1, gfp_mask);
3008 +diff --git a/block/blk-merge.c b/block/blk-merge.c
3009 +index aaec38cc37b8..2e042190a4f1 100644
3010 +--- a/block/blk-merge.c
3011 ++++ b/block/blk-merge.c
3012 +@@ -27,7 +27,8 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
3013 + /* Zero-sector (unknown) and one-sector granularities are the same. */
3014 + granularity = max(q->limits.discard_granularity >> 9, 1U);
3015 +
3016 +- max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
3017 ++ max_discard_sectors = min(q->limits.max_discard_sectors,
3018 ++ bio_allowed_max_sectors(q));
3019 + max_discard_sectors -= max_discard_sectors % granularity;
3020 +
3021 + if (unlikely(!max_discard_sectors)) {
3022 +diff --git a/block/blk.h b/block/blk.h
3023 +index a8f0f7986cfd..a26a8fb257a4 100644
3024 +--- a/block/blk.h
3025 ++++ b/block/blk.h
3026 +@@ -326,6 +326,16 @@ static inline unsigned long blk_rq_deadline(struct request *rq)
3027 + return rq->__deadline & ~0x1UL;
3028 + }
3029 +
3030 ++/*
3031 ++ * The max size one bio can handle is UINT_MAX because bvec_iter.bi_size
3032 ++ * is defined as 'unsigned int'; meanwhile it has to be aligned to the logical
3033 ++ * block size which is the minimum accepted unit by hardware.
3034 ++ */
3035 ++static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
3036 ++{
3037 ++ return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
3038 ++}
3039 ++
3040 + /*
3041 + * Internal io_context interface
3042 + */
3043 +diff --git a/block/bounce.c b/block/bounce.c
3044 +index fd31347b7836..5849535296b9 100644
3045 +--- a/block/bounce.c
3046 ++++ b/block/bounce.c
3047 +@@ -31,6 +31,24 @@
3048 + static struct bio_set bounce_bio_set, bounce_bio_split;
3049 + static mempool_t page_pool, isa_page_pool;
3050 +
3051 ++static void init_bounce_bioset(void)
3052 ++{
3053 ++ static bool bounce_bs_setup;
3054 ++ int ret;
3055 ++
3056 ++ if (bounce_bs_setup)
3057 ++ return;
3058 ++
3059 ++ ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
3060 ++ BUG_ON(ret);
3061 ++ if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
3062 ++ BUG_ON(1);
3063 ++
3064 ++ ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
3065 ++ BUG_ON(ret);
3066 ++ bounce_bs_setup = true;
3067 ++}
3068 ++
3069 + #if defined(CONFIG_HIGHMEM)
3070 + static __init int init_emergency_pool(void)
3071 + {
3072 +@@ -44,14 +62,7 @@ static __init int init_emergency_pool(void)
3073 + BUG_ON(ret);
3074 + pr_info("pool size: %d pages\n", POOL_SIZE);
3075 +
3076 +- ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
3077 +- BUG_ON(ret);
3078 +- if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
3079 +- BUG_ON(1);
3080 +-
3081 +- ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
3082 +- BUG_ON(ret);
3083 +-
3084 ++ init_bounce_bioset();
3085 + return 0;
3086 + }
3087 +
3088 +@@ -86,6 +97,8 @@ static void *mempool_alloc_pages_isa(gfp_t gfp_mask, void *data)
3089 + return mempool_alloc_pages(gfp_mask | GFP_DMA, data);
3090 + }
3091 +
3092 ++static DEFINE_MUTEX(isa_mutex);
3093 ++
3094 + /*
3095 + * gets called "every" time someone init's a queue with BLK_BOUNCE_ISA
3096 + * as the max address, so check if the pool has already been created.
3097 +@@ -94,14 +107,20 @@ int init_emergency_isa_pool(void)
3098 + {
3099 + int ret;
3100 +
3101 +- if (mempool_initialized(&isa_page_pool))
3102 ++ mutex_lock(&isa_mutex);
3103 ++
3104 ++ if (mempool_initialized(&isa_page_pool)) {
3105 ++ mutex_unlock(&isa_mutex);
3106 + return 0;
3107 ++ }
3108 +
3109 + ret = mempool_init(&isa_page_pool, ISA_POOL_SIZE, mempool_alloc_pages_isa,
3110 + mempool_free_pages, (void *) 0);
3111 + BUG_ON(ret);
3112 +
3113 + pr_info("isa pool size: %d pages\n", ISA_POOL_SIZE);
3114 ++ init_bounce_bioset();
3115 ++ mutex_unlock(&isa_mutex);
3116 + return 0;
3117 + }
3118 +
3119 +diff --git a/crypto/Kconfig b/crypto/Kconfig
3120 +index f3e40ac56d93..59e32623a7ce 100644
3121 +--- a/crypto/Kconfig
3122 ++++ b/crypto/Kconfig
3123 +@@ -1590,20 +1590,6 @@ config CRYPTO_SM4
3124 +
3125 + If unsure, say N.
3126 +
3127 +-config CRYPTO_SPECK
3128 +- tristate "Speck cipher algorithm"
3129 +- select CRYPTO_ALGAPI
3130 +- help
3131 +- Speck is a lightweight block cipher that is tuned for optimal
3132 +- performance in software (rather than hardware).
3133 +-
3134 +- Speck may not be as secure as AES, and should only be used on systems
3135 +- where AES is not fast enough.
3136 +-
3137 +- See also: <https://eprint.iacr.org/2013/404.pdf>
3138 +-
3139 +- If unsure, say N.
3140 +-
3141 + config CRYPTO_TEA
3142 + tristate "TEA, XTEA and XETA cipher algorithms"
3143 + select CRYPTO_ALGAPI
3144 +diff --git a/crypto/Makefile b/crypto/Makefile
3145 +index 6d1d40eeb964..f6a234d08882 100644
3146 +--- a/crypto/Makefile
3147 ++++ b/crypto/Makefile
3148 +@@ -115,7 +115,6 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
3149 + obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
3150 + obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
3151 + obj-$(CONFIG_CRYPTO_SEED) += seed.o
3152 +-obj-$(CONFIG_CRYPTO_SPECK) += speck.o
3153 + obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
3154 + obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
3155 + obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
3156 +diff --git a/crypto/aegis.h b/crypto/aegis.h
3157 +index f1c6900ddb80..405e025fc906 100644
3158 +--- a/crypto/aegis.h
3159 ++++ b/crypto/aegis.h
3160 +@@ -21,7 +21,7 @@
3161 +
3162 + union aegis_block {
3163 + __le64 words64[AEGIS_BLOCK_SIZE / sizeof(__le64)];
3164 +- u32 words32[AEGIS_BLOCK_SIZE / sizeof(u32)];
3165 ++ __le32 words32[AEGIS_BLOCK_SIZE / sizeof(__le32)];
3166 + u8 bytes[AEGIS_BLOCK_SIZE];
3167 + };
3168 +
3169 +@@ -57,24 +57,22 @@ static void crypto_aegis_aesenc(union aegis_block *dst,
3170 + const union aegis_block *src,
3171 + const union aegis_block *key)
3172 + {
3173 +- u32 *d = dst->words32;
3174 + const u8 *s = src->bytes;
3175 +- const u32 *k = key->words32;
3176 + const u32 *t0 = crypto_ft_tab[0];
3177 + const u32 *t1 = crypto_ft_tab[1];
3178 + const u32 *t2 = crypto_ft_tab[2];
3179 + const u32 *t3 = crypto_ft_tab[3];
3180 + u32 d0, d1, d2, d3;
3181 +
3182 +- d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]] ^ k[0];
3183 +- d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]] ^ k[1];
3184 +- d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]] ^ k[2];
3185 +- d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]] ^ k[3];
3186 ++ d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]];
3187 ++ d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]];
3188 ++ d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]];
3189 ++ d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]];
3190 +
3191 +- d[0] = d0;
3192 +- d[1] = d1;
3193 +- d[2] = d2;
3194 +- d[3] = d3;
3195 ++ dst->words32[0] = cpu_to_le32(d0) ^ key->words32[0];
3196 ++ dst->words32[1] = cpu_to_le32(d1) ^ key->words32[1];
3197 ++ dst->words32[2] = cpu_to_le32(d2) ^ key->words32[2];
3198 ++ dst->words32[3] = cpu_to_le32(d3) ^ key->words32[3];
3199 + }
3200 +
3201 + #endif /* _CRYPTO_AEGIS_H */
3202 +diff --git a/crypto/lrw.c b/crypto/lrw.c
3203 +index 954a7064a179..7657bebd060c 100644
3204 +--- a/crypto/lrw.c
3205 ++++ b/crypto/lrw.c
3206 +@@ -143,7 +143,12 @@ static inline int get_index128(be128 *block)
3207 + return x + ffz(val);
3208 + }
3209 +
3210 +- return x;
3211 ++ /*
3212 ++ * If we get here, then x == 128 and we are incrementing the counter
3213 ++ * from all ones to all zeros. This means we must return index 127, i.e.
3214 ++ * the one corresponding to key2*{ 1,...,1 }.
3215 ++ */
3216 ++ return 127;
3217 + }
3218 +
3219 + static int post_crypt(struct skcipher_request *req)
3220 +diff --git a/crypto/morus1280.c b/crypto/morus1280.c
3221 +index 6180b2557836..8f1952d96ebd 100644
3222 +--- a/crypto/morus1280.c
3223 ++++ b/crypto/morus1280.c
3224 +@@ -385,14 +385,11 @@ static void crypto_morus1280_final(struct morus1280_state *state,
3225 + struct morus1280_block *tag_xor,
3226 + u64 assoclen, u64 cryptlen)
3227 + {
3228 +- u64 assocbits = assoclen * 8;
3229 +- u64 cryptbits = cryptlen * 8;
3230 +-
3231 + struct morus1280_block tmp;
3232 + unsigned int i;
3233 +
3234 +- tmp.words[0] = cpu_to_le64(assocbits);
3235 +- tmp.words[1] = cpu_to_le64(cryptbits);
3236 ++ tmp.words[0] = assoclen * 8;
3237 ++ tmp.words[1] = cryptlen * 8;
3238 + tmp.words[2] = 0;
3239 + tmp.words[3] = 0;
3240 +
3241 +diff --git a/crypto/morus640.c b/crypto/morus640.c
3242 +index 5eede3749e64..6ccb901934c3 100644
3243 +--- a/crypto/morus640.c
3244 ++++ b/crypto/morus640.c
3245 +@@ -384,21 +384,13 @@ static void crypto_morus640_final(struct morus640_state *state,
3246 + struct morus640_block *tag_xor,
3247 + u64 assoclen, u64 cryptlen)
3248 + {
3249 +- u64 assocbits = assoclen * 8;
3250 +- u64 cryptbits = cryptlen * 8;
3251 +-
3252 +- u32 assocbits_lo = (u32)assocbits;
3253 +- u32 assocbits_hi = (u32)(assocbits >> 32);
3254 +- u32 cryptbits_lo = (u32)cryptbits;
3255 +- u32 cryptbits_hi = (u32)(cryptbits >> 32);
3256 +-
3257 + struct morus640_block tmp;
3258 + unsigned int i;
3259 +
3260 +- tmp.words[0] = cpu_to_le32(assocbits_lo);
3261 +- tmp.words[1] = cpu_to_le32(assocbits_hi);
3262 +- tmp.words[2] = cpu_to_le32(cryptbits_lo);
3263 +- tmp.words[3] = cpu_to_le32(cryptbits_hi);
3264 ++ tmp.words[0] = lower_32_bits(assoclen * 8);
3265 ++ tmp.words[1] = upper_32_bits(assoclen * 8);
3266 ++ tmp.words[2] = lower_32_bits(cryptlen * 8);
3267 ++ tmp.words[3] = upper_32_bits(cryptlen * 8);
3268 +
3269 + for (i = 0; i < MORUS_BLOCK_WORDS; i++)
3270 + state->s[4].words[i] ^= state->s[0].words[i];
3271 +diff --git a/crypto/speck.c b/crypto/speck.c
3272 +deleted file mode 100644
3273 +index 58aa9f7f91f7..000000000000
3274 +--- a/crypto/speck.c
3275 ++++ /dev/null
3276 +@@ -1,307 +0,0 @@
3277 +-// SPDX-License-Identifier: GPL-2.0
3278 +-/*
3279 +- * Speck: a lightweight block cipher
3280 +- *
3281 +- * Copyright (c) 2018 Google, Inc
3282 +- *
3283 +- * Speck has 10 variants, including 5 block sizes. For now we only implement
3284 +- * the variants Speck128/128, Speck128/192, Speck128/256, Speck64/96, and
3285 +- * Speck64/128. Speck${B}/${K} denotes the variant with a block size of B bits
3286 +- * and a key size of K bits. The Speck128 variants are believed to be the most
3287 +- * secure variants, and they use the same block size and key sizes as AES. The
3288 +- * Speck64 variants are less secure, but on 32-bit processors are usually
3289 +- * faster. The remaining variants (Speck32, Speck48, and Speck96) are even less
3290 +- * secure and/or not as well suited for implementation on either 32-bit or
3291 +- * 64-bit processors, so are omitted.
3292 +- *
3293 +- * Reference: "The Simon and Speck Families of Lightweight Block Ciphers"
3294 +- * https://eprint.iacr.org/2013/404.pdf
3295 +- *
3296 +- * In a correspondence, the Speck designers have also clarified that the words
3297 +- * should be interpreted in little-endian format, and the words should be
3298 +- * ordered such that the first word of each block is 'y' rather than 'x', and
3299 +- * the first key word (rather than the last) becomes the first round key.
3300 +- */
3301 +-
3302 +-#include <asm/unaligned.h>
3303 +-#include <crypto/speck.h>
3304 +-#include <linux/bitops.h>
3305 +-#include <linux/crypto.h>
3306 +-#include <linux/init.h>
3307 +-#include <linux/module.h>
3308 +-
3309 +-/* Speck128 */
3310 +-
3311 +-static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
3312 +-{
3313 +- *x = ror64(*x, 8);
3314 +- *x += *y;
3315 +- *x ^= k;
3316 +- *y = rol64(*y, 3);
3317 +- *y ^= *x;
3318 +-}
3319 +-
3320 +-static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
3321 +-{
3322 +- *y ^= *x;
3323 +- *y = ror64(*y, 3);
3324 +- *x ^= k;
3325 +- *x -= *y;
3326 +- *x = rol64(*x, 8);
3327 +-}
3328 +-
3329 +-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
3330 +- u8 *out, const u8 *in)
3331 +-{
3332 +- u64 y = get_unaligned_le64(in);
3333 +- u64 x = get_unaligned_le64(in + 8);
3334 +- int i;
3335 +-
3336 +- for (i = 0; i < ctx->nrounds; i++)
3337 +- speck128_round(&x, &y, ctx->round_keys[i]);
3338 +-
3339 +- put_unaligned_le64(y, out);
3340 +- put_unaligned_le64(x, out + 8);
3341 +-}
3342 +-EXPORT_SYMBOL_GPL(crypto_speck128_encrypt);
3343 +-
3344 +-static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
3345 +-{
3346 +- crypto_speck128_encrypt(crypto_tfm_ctx(tfm), out, in);
3347 +-}
3348 +-
3349 +-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
3350 +- u8 *out, const u8 *in)
3351 +-{
3352 +- u64 y = get_unaligned_le64(in);
3353 +- u64 x = get_unaligned_le64(in + 8);
3354 +- int i;
3355 +-
3356 +- for (i = ctx->nrounds - 1; i >= 0; i--)
3357 +- speck128_unround(&x, &y, ctx->round_keys[i]);
3358 +-
3359 +- put_unaligned_le64(y, out);
3360 +- put_unaligned_le64(x, out + 8);
3361 +-}
3362 +-EXPORT_SYMBOL_GPL(crypto_speck128_decrypt);
3363 +-
3364 +-static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
3365 +-{
3366 +- crypto_speck128_decrypt(crypto_tfm_ctx(tfm), out, in);
3367 +-}
3368 +-
3369 +-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
3370 +- unsigned int keylen)
3371 +-{
3372 +- u64 l[3];
3373 +- u64 k;
3374 +- int i;
3375 +-
3376 +- switch (keylen) {
3377 +- case SPECK128_128_KEY_SIZE:
3378 +- k = get_unaligned_le64(key);
3379 +- l[0] = get_unaligned_le64(key + 8);
3380 +- ctx->nrounds = SPECK128_128_NROUNDS;
3381 +- for (i = 0; i < ctx->nrounds; i++) {
3382 +- ctx->round_keys[i] = k;
3383 +- speck128_round(&l[0], &k, i);
3384 +- }
3385 +- break;
3386 +- case SPECK128_192_KEY_SIZE:
3387 +- k = get_unaligned_le64(key);
3388 +- l[0] = get_unaligned_le64(key + 8);
3389 +- l[1] = get_unaligned_le64(key + 16);
3390 +- ctx->nrounds = SPECK128_192_NROUNDS;
3391 +- for (i = 0; i < ctx->nrounds; i++) {
3392 +- ctx->round_keys[i] = k;
3393 +- speck128_round(&l[i % 2], &k, i);
3394 +- }
3395 +- break;
3396 +- case SPECK128_256_KEY_SIZE:
3397 +- k = get_unaligned_le64(key);
3398 +- l[0] = get_unaligned_le64(key + 8);
3399 +- l[1] = get_unaligned_le64(key + 16);
3400 +- l[2] = get_unaligned_le64(key + 24);
3401 +- ctx->nrounds = SPECK128_256_NROUNDS;
3402 +- for (i = 0; i < ctx->nrounds; i++) {
3403 +- ctx->round_keys[i] = k;
3404 +- speck128_round(&l[i % 3], &k, i);
3405 +- }
3406 +- break;
3407 +- default:
3408 +- return -EINVAL;
3409 +- }
3410 +-
3411 +- return 0;
3412 +-}
3413 +-EXPORT_SYMBOL_GPL(crypto_speck128_setkey);
3414 +-
3415 +-static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
3416 +- unsigned int keylen)
3417 +-{
3418 +- return crypto_speck128_setkey(crypto_tfm_ctx(tfm), key, keylen);
3419 +-}
3420 +-
3421 +-/* Speck64 */
3422 +-
3423 +-static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
3424 +-{
3425 +- *x = ror32(*x, 8);
3426 +- *x += *y;
3427 +- *x ^= k;
3428 +- *y = rol32(*y, 3);
3429 +- *y ^= *x;
3430 +-}
3431 +-
3432 +-static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
3433 +-{
3434 +- *y ^= *x;
3435 +- *y = ror32(*y, 3);
3436 +- *x ^= k;
3437 +- *x -= *y;
3438 +- *x = rol32(*x, 8);
3439 +-}
3440 +-
3441 +-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
3442 +- u8 *out, const u8 *in)
3443 +-{
3444 +- u32 y = get_unaligned_le32(in);
3445 +- u32 x = get_unaligned_le32(in + 4);
3446 +- int i;
3447 +-
3448 +- for (i = 0; i < ctx->nrounds; i++)
3449 +- speck64_round(&x, &y, ctx->round_keys[i]);
3450 +-
3451 +- put_unaligned_le32(y, out);
3452 +- put_unaligned_le32(x, out + 4);
3453 +-}
3454 +-EXPORT_SYMBOL_GPL(crypto_speck64_encrypt);
3455 +-
3456 +-static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
3457 +-{
3458 +- crypto_speck64_encrypt(crypto_tfm_ctx(tfm), out, in);
3459 +-}
3460 +-
3461 +-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
3462 +- u8 *out, const u8 *in)
3463 +-{
3464 +- u32 y = get_unaligned_le32(in);
3465 +- u32 x = get_unaligned_le32(in + 4);
3466 +- int i;
3467 +-
3468 +- for (i = ctx->nrounds - 1; i >= 0; i--)
3469 +- speck64_unround(&x, &y, ctx->round_keys[i]);
3470 +-
3471 +- put_unaligned_le32(y, out);
3472 +- put_unaligned_le32(x, out + 4);
3473 +-}
3474 +-EXPORT_SYMBOL_GPL(crypto_speck64_decrypt);
3475 +-
3476 +-static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
3477 +-{
3478 +- crypto_speck64_decrypt(crypto_tfm_ctx(tfm), out, in);
3479 +-}
3480 +-
3481 +-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
3482 +- unsigned int keylen)
3483 +-{
3484 +- u32 l[3];
3485 +- u32 k;
3486 +- int i;
3487 +-
3488 +- switch (keylen) {
3489 +- case SPECK64_96_KEY_SIZE:
3490 +- k = get_unaligned_le32(key);
3491 +- l[0] = get_unaligned_le32(key + 4);
3492 +- l[1] = get_unaligned_le32(key + 8);
3493 +- ctx->nrounds = SPECK64_96_NROUNDS;
3494 +- for (i = 0; i < ctx->nrounds; i++) {
3495 +- ctx->round_keys[i] = k;
3496 +- speck64_round(&l[i % 2], &k, i);
3497 +- }
3498 +- break;
3499 +- case SPECK64_128_KEY_SIZE:
3500 +- k = get_unaligned_le32(key);
3501 +- l[0] = get_unaligned_le32(key + 4);
3502 +- l[1] = get_unaligned_le32(key + 8);
3503 +- l[2] = get_unaligned_le32(key + 12);
3504 +- ctx->nrounds = SPECK64_128_NROUNDS;
3505 +- for (i = 0; i < ctx->nrounds; i++) {
3506 +- ctx->round_keys[i] = k;
3507 +- speck64_round(&l[i % 3], &k, i);
3508 +- }
3509 +- break;
3510 +- default:
3511 +- return -EINVAL;
3512 +- }
3513 +-
3514 +- return 0;
3515 +-}
3516 +-EXPORT_SYMBOL_GPL(crypto_speck64_setkey);
3517 +-
3518 +-static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
3519 +- unsigned int keylen)
3520 +-{
3521 +- return crypto_speck64_setkey(crypto_tfm_ctx(tfm), key, keylen);
3522 +-}
3523 +-
3524 +-/* Algorithm definitions */
3525 +-
3526 +-static struct crypto_alg speck_algs[] = {
3527 +- {
3528 +- .cra_name = "speck128",
3529 +- .cra_driver_name = "speck128-generic",
3530 +- .cra_priority = 100,
3531 +- .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
3532 +- .cra_blocksize = SPECK128_BLOCK_SIZE,
3533 +- .cra_ctxsize = sizeof(struct speck128_tfm_ctx),
3534 +- .cra_module = THIS_MODULE,
3535 +- .cra_u = {
3536 +- .cipher = {
3537 +- .cia_min_keysize = SPECK128_128_KEY_SIZE,
3538 +- .cia_max_keysize = SPECK128_256_KEY_SIZE,
3539 +- .cia_setkey = speck128_setkey,
3540 +- .cia_encrypt = speck128_encrypt,
3541 +- .cia_decrypt = speck128_decrypt
3542 +- }
3543 +- }
3544 +- }, {
3545 +- .cra_name = "speck64",
3546 +- .cra_driver_name = "speck64-generic",
3547 +- .cra_priority = 100,
3548 +- .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
3549 +- .cra_blocksize = SPECK64_BLOCK_SIZE,
3550 +- .cra_ctxsize = sizeof(struct speck64_tfm_ctx),
3551 +- .cra_module = THIS_MODULE,
3552 +- .cra_u = {
3553 +- .cipher = {
3554 +- .cia_min_keysize = SPECK64_96_KEY_SIZE,
3555 +- .cia_max_keysize = SPECK64_128_KEY_SIZE,
3556 +- .cia_setkey = speck64_setkey,
3557 +- .cia_encrypt = speck64_encrypt,
3558 +- .cia_decrypt = speck64_decrypt
3559 +- }
3560 +- }
3561 +- }
3562 +-};
3563 +-
3564 +-static int __init speck_module_init(void)
3565 +-{
3566 +- return crypto_register_algs(speck_algs, ARRAY_SIZE(speck_algs));
3567 +-}
3568 +-
3569 +-static void __exit speck_module_exit(void)
3570 +-{
3571 +- crypto_unregister_algs(speck_algs, ARRAY_SIZE(speck_algs));
3572 +-}
3573 +-
3574 +-module_init(speck_module_init);
3575 +-module_exit(speck_module_exit);
3576 +-
3577 +-MODULE_DESCRIPTION("Speck block cipher (generic)");
3578 +-MODULE_LICENSE("GPL");
3579 +-MODULE_AUTHOR("Eric Biggers <ebiggers@ƗƗƗƗƗƗ.com>");
3580 +-MODULE_ALIAS_CRYPTO("speck128");
3581 +-MODULE_ALIAS_CRYPTO("speck128-generic");
3582 +-MODULE_ALIAS_CRYPTO("speck64");
3583 +-MODULE_ALIAS_CRYPTO("speck64-generic");
3584 +diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
3585 +index d5bcdd905007..ee4f2a175bda 100644
3586 +--- a/crypto/tcrypt.c
3587 ++++ b/crypto/tcrypt.c
3588 +@@ -1097,6 +1097,9 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
3589 + break;
3590 + }
3591 +
3592 ++ if (speed[i].klen)
3593 ++ crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
3594 ++
3595 + pr_info("test%3u "
3596 + "(%5u byte blocks,%5u bytes per update,%4u updates): ",
3597 + i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen);
3598 +diff --git a/crypto/testmgr.c b/crypto/testmgr.c
3599 +index 11e45352fd0b..1ed03bf6a977 100644
3600 +--- a/crypto/testmgr.c
3601 ++++ b/crypto/testmgr.c
3602 +@@ -3000,18 +3000,6 @@ static const struct alg_test_desc alg_test_descs[] = {
3603 + .suite = {
3604 + .cipher = __VECS(sm4_tv_template)
3605 + }
3606 +- }, {
3607 +- .alg = "ecb(speck128)",
3608 +- .test = alg_test_skcipher,
3609 +- .suite = {
3610 +- .cipher = __VECS(speck128_tv_template)
3611 +- }
3612 +- }, {
3613 +- .alg = "ecb(speck64)",
3614 +- .test = alg_test_skcipher,
3615 +- .suite = {
3616 +- .cipher = __VECS(speck64_tv_template)
3617 +- }
3618 + }, {
3619 + .alg = "ecb(tea)",
3620 + .test = alg_test_skcipher,
3621 +@@ -3539,18 +3527,6 @@ static const struct alg_test_desc alg_test_descs[] = {
3622 + .suite = {
3623 + .cipher = __VECS(serpent_xts_tv_template)
3624 + }
3625 +- }, {
3626 +- .alg = "xts(speck128)",
3627 +- .test = alg_test_skcipher,
3628 +- .suite = {
3629 +- .cipher = __VECS(speck128_xts_tv_template)
3630 +- }
3631 +- }, {
3632 +- .alg = "xts(speck64)",
3633 +- .test = alg_test_skcipher,
3634 +- .suite = {
3635 +- .cipher = __VECS(speck64_xts_tv_template)
3636 +- }
3637 + }, {
3638 + .alg = "xts(twofish)",
3639 + .test = alg_test_skcipher,
3640 +diff --git a/crypto/testmgr.h b/crypto/testmgr.h
3641 +index b950aa234e43..36572c665026 100644
3642 +--- a/crypto/testmgr.h
3643 ++++ b/crypto/testmgr.h
3644 +@@ -10141,744 +10141,6 @@ static const struct cipher_testvec sm4_tv_template[] = {
3645 + }
3646 + };
3647 +
3648 +-/*
3649 +- * Speck test vectors taken from the original paper:
3650 +- * "The Simon and Speck Families of Lightweight Block Ciphers"
3651 +- * https://eprint.iacr.org/2013/404.pdf
3652 +- *
3653 +- * Note that the paper does not make byte and word order clear. But it was
3654 +- * confirmed with the authors that the intended orders are little endian byte
3655 +- * order and (y, x) word order. Equivalently, the printed test vectors, when
3656 +- * looking at only the bytes (ignoring the whitespace that divides them into
3657 +- * words), are backwards: the left-most byte is actually the one with the
3658 +- * highest memory address, while the right-most byte is actually the one with
3659 +- * the lowest memory address.
3660 +- */
3661 +-
3662 +-static const struct cipher_testvec speck128_tv_template[] = {
3663 +- { /* Speck128/128 */
3664 +- .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
3665 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
3666 +- .klen = 16,
3667 +- .ptext = "\x20\x6d\x61\x64\x65\x20\x69\x74"
3668 +- "\x20\x65\x71\x75\x69\x76\x61\x6c",
3669 +- .ctext = "\x18\x0d\x57\x5c\xdf\xfe\x60\x78"
3670 +- "\x65\x32\x78\x79\x51\x98\x5d\xa6",
3671 +- .len = 16,
3672 +- }, { /* Speck128/192 */
3673 +- .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
3674 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
3675 +- "\x10\x11\x12\x13\x14\x15\x16\x17",
3676 +- .klen = 24,
3677 +- .ptext = "\x65\x6e\x74\x20\x74\x6f\x20\x43"
3678 +- "\x68\x69\x65\x66\x20\x48\x61\x72",
3679 +- .ctext = "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9"
3680 +- "\x66\x55\x13\x13\x3a\xcf\xe4\x1b",
3681 +- .len = 16,
3682 +- }, { /* Speck128/256 */
3683 +- .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
3684 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
3685 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
3686 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
3687 +- .klen = 32,
3688 +- .ptext = "\x70\x6f\x6f\x6e\x65\x72\x2e\x20"
3689 +- "\x49\x6e\x20\x74\x68\x6f\x73\x65",
3690 +- .ctext = "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e"
3691 +- "\x3e\xf5\xc0\x05\x04\x01\x09\x41",
3692 +- .len = 16,
3693 +- },
3694 +-};
3695 +-
3696 +-/*
3697 +- * Speck128-XTS test vectors, taken from the AES-XTS test vectors with the
3698 +- * ciphertext recomputed with Speck128 as the cipher
3699 +- */
3700 +-static const struct cipher_testvec speck128_xts_tv_template[] = {
3701 +- {
3702 +- .key = "\x00\x00\x00\x00\x00\x00\x00\x00"
3703 +- "\x00\x00\x00\x00\x00\x00\x00\x00"
3704 +- "\x00\x00\x00\x00\x00\x00\x00\x00"
3705 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
3706 +- .klen = 32,
3707 +- .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
3708 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
3709 +- .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00"
3710 +- "\x00\x00\x00\x00\x00\x00\x00\x00"
3711 +- "\x00\x00\x00\x00\x00\x00\x00\x00"
3712 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
3713 +- .ctext = "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
3714 +- "\x3b\x99\x4a\x64\x74\x77\xac\xed"
3715 +- "\xd8\xf4\xa6\xcf\xae\xb9\x07\x42"
3716 +- "\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54",
3717 +- .len = 32,
3718 +- }, {
3719 +- .key = "\x11\x11\x11\x11\x11\x11\x11\x11"
3720 +- "\x11\x11\x11\x11\x11\x11\x11\x11"
3721 +- "\x22\x22\x22\x22\x22\x22\x22\x22"
3722 +- "\x22\x22\x22\x22\x22\x22\x22\x22",
3723 +- .klen = 32,
3724 +- .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
3725 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
3726 +- .ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
3727 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
3728 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
3729 +- "\x44\x44\x44\x44\x44\x44\x44\x44",
3730 +- .ctext = "\xfb\x53\x81\x75\x6f\x9f\x34\xad"
3731 +- "\x7e\x01\xed\x7b\xcc\xda\x4e\x4a"
3732 +- "\xd4\x84\xa4\x53\xd5\x88\x73\x1b"
3733 +- "\xfd\xcb\xae\x0d\xf3\x04\xee\xe6",
3734 +- .len = 32,
3735 +- }, {
3736 +- .key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
3737 +- "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
3738 +- "\x22\x22\x22\x22\x22\x22\x22\x22"
3739 +- "\x22\x22\x22\x22\x22\x22\x22\x22",
3740 +- .klen = 32,
3741 +- .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
3742 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
3743 +- .ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
3744 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
3745 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
3746 +- "\x44\x44\x44\x44\x44\x44\x44\x44",
3747 +- .ctext = "\x21\x52\x84\x15\xd1\xf7\x21\x55"
3748 +- "\xd9\x75\x4a\xd3\xc5\xdb\x9f\x7d"
3749 +- "\xda\x63\xb2\xf1\x82\xb0\x89\x59"
3750 +- "\x86\xd4\xaa\xaa\xdd\xff\x4f\x92",
3751 +- .len = 32,
3752 +- }, {
3753 +- .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
3754 +- "\x23\x53\x60\x28\x74\x71\x35\x26"
3755 +- "\x31\x41\x59\x26\x53\x58\x97\x93"
3756 +- "\x23\x84\x62\x64\x33\x83\x27\x95",
3757 +- .klen = 32,
3758 +- .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
3759 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
3760 +- .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
3761 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
3762 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
3763 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
3764 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
3765 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
3766 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
3767 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
3768 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
3769 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
3770 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
3771 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
3772 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
3773 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
3774 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
3775 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
3776 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
3777 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
3778 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
3779 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
3780 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
3781 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
3782 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
3783 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
3784 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
3785 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
3786 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
3787 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
3788 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
3789 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
3790 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
3791 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
3792 +- "\x00\x01\x02\x03\x04\x05\x06\x07"
3793 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
3794 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
3795 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
3796 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
3797 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
3798 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
3799 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
3800 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
3801 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
3802 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
3803 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
3804 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
3805 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
3806 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
3807 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
3808 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
3809 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
3810 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
3811 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
3812 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
3813 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
3814 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
3815 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
3816 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
3817 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
3818 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
3819 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
3820 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
3821 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
3822 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
3823 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
3824 +- .ctext = "\x57\xb5\xf8\x71\x6e\x6d\xdd\x82"
3825 +- "\x53\xd0\xed\x2d\x30\xc1\x20\xef"
3826 +- "\x70\x67\x5e\xff\x09\x70\xbb\xc1"
3827 +- "\x3a\x7b\x48\x26\xd9\x0b\xf4\x48"
3828 +- "\xbe\xce\xb1\xc7\xb2\x67\xc4\xa7"
3829 +- "\x76\xf8\x36\x30\xb7\xb4\x9a\xd9"
3830 +- "\xf5\x9d\xd0\x7b\xc1\x06\x96\x44"
3831 +- "\x19\xc5\x58\x84\x63\xb9\x12\x68"
3832 +- "\x68\xc7\xaa\x18\x98\xf2\x1f\x5c"
3833 +- "\x39\xa6\xd8\x32\x2b\xc3\x51\xfd"
3834 +- "\x74\x79\x2e\xb4\x44\xd7\x69\xc4"
3835 +- "\xfc\x29\xe6\xed\x26\x1e\xa6\x9d"
3836 +- "\x1c\xbe\x00\x0e\x7f\x3a\xca\xfb"
3837 +- "\x6d\x13\x65\xa0\xf9\x31\x12\xe2"
3838 +- "\x26\xd1\xec\x2b\x0a\x8b\x59\x99"
3839 +- "\xa7\x49\xa0\x0e\x09\x33\x85\x50"
3840 +- "\xc3\x23\xca\x7a\xdd\x13\x45\x5f"
3841 +- "\xde\x4c\xa7\xcb\x00\x8a\x66\x6f"
3842 +- "\xa2\xb6\xb1\x2e\xe1\xa0\x18\xf6"
3843 +- "\xad\xf3\xbd\xeb\xc7\xef\x55\x4f"
3844 +- "\x79\x91\x8d\x36\x13\x7b\xd0\x4a"
3845 +- "\x6c\x39\xfb\x53\xb8\x6f\x02\x51"
3846 +- "\xa5\x20\xac\x24\x1c\x73\x59\x73"
3847 +- "\x58\x61\x3a\x87\x58\xb3\x20\x56"
3848 +- "\x39\x06\x2b\x4d\xd3\x20\x2b\x89"
3849 +- "\x3f\xa2\xf0\x96\xeb\x7f\xa4\xcd"
3850 +- "\x11\xae\xbd\xcb\x3a\xb4\xd9\x91"
3851 +- "\x09\x35\x71\x50\x65\xac\x92\xe3"
3852 +- "\x7b\x32\xc0\x7a\xdd\xd4\xc3\x92"
3853 +- "\x6f\xeb\x79\xde\x6f\xd3\x25\xc9"
3854 +- "\xcd\x63\xf5\x1e\x7a\x3b\x26\x9d"
3855 +- "\x77\x04\x80\xa9\xbf\x38\xb5\xbd"
3856 +- "\xb8\x05\x07\xbd\xfd\xab\x7b\xf8"
3857 +- "\x2a\x26\xcc\x49\x14\x6d\x55\x01"
3858 +- "\x06\x94\xd8\xb2\x2d\x53\x83\x1b"
3859 +- "\x8f\xd4\xdd\x57\x12\x7e\x18\xba"
3860 +- "\x8e\xe2\x4d\x80\xef\x7e\x6b\x9d"
3861 +- "\x24\xa9\x60\xa4\x97\x85\x86\x2a"
3862 +- "\x01\x00\x09\xf1\xcb\x4a\x24\x1c"
3863 +- "\xd8\xf6\xe6\x5b\xe7\x5d\xf2\xc4"
3864 +- "\x97\x1c\x10\xc6\x4d\x66\x4f\x98"
3865 +- "\x87\x30\xac\xd5\xea\x73\x49\x10"
3866 +- "\x80\xea\xe5\x5f\x4d\x5f\x03\x33"
3867 +- "\x66\x02\x35\x3d\x60\x06\x36\x4f"
3868 +- "\x14\x1c\xd8\x07\x1f\x78\xd0\xf8"
3869 +- "\x4f\x6c\x62\x7c\x15\xa5\x7c\x28"
3870 +- "\x7c\xcc\xeb\x1f\xd1\x07\x90\x93"
3871 +- "\x7e\xc2\xa8\x3a\x80\xc0\xf5\x30"
3872 +- "\xcc\x75\xcf\x16\x26\xa9\x26\x3b"
3873 +- "\xe7\x68\x2f\x15\x21\x5b\xe4\x00"
3874 +- "\xbd\x48\x50\xcd\x75\x70\xc4\x62"
3875 +- "\xbb\x41\xfb\x89\x4a\x88\x3b\x3b"
3876 +- "\x51\x66\x02\x69\x04\x97\x36\xd4"
3877 +- "\x75\xae\x0b\xa3\x42\xf8\xca\x79"
3878 +- "\x8f\x93\xe9\xcc\x38\xbd\xd6\xd2"
3879 +- "\xf9\x70\x4e\xc3\x6a\x8e\x25\xbd"
3880 +- "\xea\x15\x5a\xa0\x85\x7e\x81\x0d"
3881 +- "\x03\xe7\x05\x39\xf5\x05\x26\xee"
3882 +- "\xec\xaa\x1f\x3d\xc9\x98\x76\x01"
3883 +- "\x2c\xf4\xfc\xa3\x88\x77\x38\xc4"
3884 +- "\x50\x65\x50\x6d\x04\x1f\xdf\x5a"
3885 +- "\xaa\xf2\x01\xa9\xc1\x8d\xee\xca"
3886 +- "\x47\x26\xef\x39\xb8\xb4\xf2\xd1"
3887 +- "\xd6\xbb\x1b\x2a\xc1\x34\x14\xcf",
3888 +- .len = 512,
3889 +- }, {
3890 +- .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
3891 +- "\x23\x53\x60\x28\x74\x71\x35\x26"
3892 +- "\x62\x49\x77\x57\x24\x70\x93\x69"
3893 +- "\x99\x59\x57\x49\x66\x96\x76\x27"
3894 +- "\x31\x41\x59\x26\x53\x58\x97\x93"
3895 +- "\x23\x84\x62\x64\x33\x83\x27\x95"
3896 +- "\x02\x88\x41\x97\x16\x93\x99\x37"
3897 +- "\x51\x05\x82\x09\x74\x94\x45\x92",
3898 +- .klen = 64,
3899 +- .iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
3900 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
3901 +- .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
3902 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
3903 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
3904 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
3905 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
3906 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
3907 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
3908 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
3909 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
3910 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
3911 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
3912 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
3913 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
3914 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
3915 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
3916 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
3917 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
3918 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
3919 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
3920 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
3921 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
3922 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
3923 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
3924 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
3925 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
3926 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
3927 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
3928 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
3929 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
3930 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
3931 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
3932 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
3933 +- "\x00\x01\x02\x03\x04\x05\x06\x07"
3934 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
3935 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
3936 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
3937 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
3938 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
3939 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
3940 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
3941 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
3942 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
3943 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
3944 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
3945 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
3946 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
3947 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
3948 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
3949 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
3950 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
3951 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
3952 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
3953 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
3954 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
3955 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
3956 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
3957 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
3958 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
3959 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
3960 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
3961 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
3962 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
3963 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
3964 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
3965 +- .ctext = "\xc5\x85\x2a\x4b\x73\xe4\xf6\xf1"
3966 +- "\x7e\xf9\xf6\xe9\xa3\x73\x36\xcb"
3967 +- "\xaa\xb6\x22\xb0\x24\x6e\x3d\x73"
3968 +- "\x92\x99\xde\xd3\x76\xed\xcd\x63"
3969 +- "\x64\x3a\x22\x57\xc1\x43\x49\xd4"
3970 +- "\x79\x36\x31\x19\x62\xae\x10\x7e"
3971 +- "\x7d\xcf\x7a\xe2\x6b\xce\x27\xfa"
3972 +- "\xdc\x3d\xd9\x83\xd3\x42\x4c\xe0"
3973 +- "\x1b\xd6\x1d\x1a\x6f\xd2\x03\x00"
3974 +- "\xfc\x81\x99\x8a\x14\x62\xf5\x7e"
3975 +- "\x0d\xe7\x12\xe8\x17\x9d\x0b\xec"
3976 +- "\xe2\xf7\xc9\xa7\x63\xd1\x79\xb6"
3977 +- "\x62\x62\x37\xfe\x0a\x4c\x4a\x37"
3978 +- "\x70\xc7\x5e\x96\x5f\xbc\x8e\x9e"
3979 +- "\x85\x3c\x4f\x26\x64\x85\xbc\x68"
3980 +- "\xb0\xe0\x86\x5e\x26\x41\xce\x11"
3981 +- "\x50\xda\x97\x14\xe9\x9e\xc7\x6d"
3982 +- "\x3b\xdc\x43\xde\x2b\x27\x69\x7d"
3983 +- "\xfc\xb0\x28\xbd\x8f\xb1\xc6\x31"
3984 +- "\x14\x4d\xf0\x74\x37\xfd\x07\x25"
3985 +- "\x96\x55\xe5\xfc\x9e\x27\x2a\x74"
3986 +- "\x1b\x83\x4d\x15\x83\xac\x57\xa0"
3987 +- "\xac\xa5\xd0\x38\xef\x19\x56\x53"
3988 +- "\x25\x4b\xfc\xce\x04\x23\xe5\x6b"
3989 +- "\xf6\xc6\x6c\x32\x0b\xb3\x12\xc5"
3990 +- "\xed\x22\x34\x1c\x5d\xed\x17\x06"
3991 +- "\x36\xa3\xe6\x77\xb9\x97\x46\xb8"
3992 +- "\xe9\x3f\x7e\xc7\xbc\x13\x5c\xdc"
3993 +- "\x6e\x3f\x04\x5e\xd1\x59\xa5\x82"
3994 +- "\x35\x91\x3d\x1b\xe4\x97\x9f\x92"
3995 +- "\x1c\x5e\x5f\x6f\x41\xd4\x62\xa1"
3996 +- "\x8d\x39\xfc\x42\xfb\x38\x80\xb9"
3997 +- "\x0a\xe3\xcc\x6a\x93\xd9\x7a\xb1"
3998 +- "\xe9\x69\xaf\x0a\x6b\x75\x38\xa7"
3999 +- "\xa1\xbf\xf7\xda\x95\x93\x4b\x78"
4000 +- "\x19\xf5\x94\xf9\xd2\x00\x33\x37"
4001 +- "\xcf\xf5\x9e\x9c\xf3\xcc\xa6\xee"
4002 +- "\x42\xb2\x9e\x2c\x5f\x48\x23\x26"
4003 +- "\x15\x25\x17\x03\x3d\xfe\x2c\xfc"
4004 +- "\xeb\xba\xda\xe0\x00\x05\xb6\xa6"
4005 +- "\x07\xb3\xe8\x36\x5b\xec\x5b\xbf"
4006 +- "\xd6\x5b\x00\x74\xc6\x97\xf1\x6a"
4007 +- "\x49\xa1\xc3\xfa\x10\x52\xb9\x14"
4008 +- "\xad\xb7\x73\xf8\x78\x12\xc8\x59"
4009 +- "\x17\x80\x4c\x57\x39\xf1\x6d\x80"
4010 +- "\x25\x77\x0f\x5e\x7d\xf0\xaf\x21"
4011 +- "\xec\xce\xb7\xc8\x02\x8a\xed\x53"
4012 +- "\x2c\x25\x68\x2e\x1f\x85\x5e\x67"
4013 +- "\xd1\x07\x7a\x3a\x89\x08\xe0\x34"
4014 +- "\xdc\xdb\x26\xb4\x6b\x77\xfc\x40"
4015 +- "\x31\x15\x72\xa0\xf0\x73\xd9\x3b"
4016 +- "\xd5\xdb\xfe\xfc\x8f\xa9\x44\xa2"
4017 +- "\x09\x9f\xc6\x33\xe5\xe2\x88\xe8"
4018 +- "\xf3\xf0\x1a\xf4\xce\x12\x0f\xd6"
4019 +- "\xf7\x36\xe6\xa4\xf4\x7a\x10\x58"
4020 +- "\xcc\x1f\x48\x49\x65\x47\x75\xe9"
4021 +- "\x28\xe1\x65\x7b\xf2\xc4\xb5\x07"
4022 +- "\xf2\xec\x76\xd8\x8f\x09\xf3\x16"
4023 +- "\xa1\x51\x89\x3b\xeb\x96\x42\xac"
4024 +- "\x65\xe0\x67\x63\x29\xdc\xb4\x7d"
4025 +- "\xf2\x41\x51\x6a\xcb\xde\x3c\xfb"
4026 +- "\x66\x8d\x13\xca\xe0\x59\x2a\x00"
4027 +- "\xc9\x53\x4c\xe6\x9e\xe2\x73\xd5"
4028 +- "\x67\x19\xb2\xbd\x9a\x63\xd7\x5c",
4029 +- .len = 512,
4030 +- .also_non_np = 1,
4031 +- .np = 3,
4032 +- .tap = { 512 - 20, 4, 16 },
4033 +- }
4034 +-};
4035 +-
4036 +-static const struct cipher_testvec speck64_tv_template[] = {
4037 +- { /* Speck64/96 */
4038 +- .key = "\x00\x01\x02\x03\x08\x09\x0a\x0b"
4039 +- "\x10\x11\x12\x13",
4040 +- .klen = 12,
4041 +- .ptext = "\x65\x61\x6e\x73\x20\x46\x61\x74",
4042 +- .ctext = "\x6c\x94\x75\x41\xec\x52\x79\x9f",
4043 +- .len = 8,
4044 +- }, { /* Speck64/128 */
4045 +- .key = "\x00\x01\x02\x03\x08\x09\x0a\x0b"
4046 +- "\x10\x11\x12\x13\x18\x19\x1a\x1b",
4047 +- .klen = 16,
4048 +- .ptext = "\x2d\x43\x75\x74\x74\x65\x72\x3b",
4049 +- .ctext = "\x8b\x02\x4e\x45\x48\xa5\x6f\x8c",
4050 +- .len = 8,
4051 +- },
4052 +-};
4053 +-
4054 +-/*
4055 +- * Speck64-XTS test vectors, taken from the AES-XTS test vectors with the
4056 +- * ciphertext recomputed with Speck64 as the cipher, and key lengths adjusted
4057 +- */
4058 +-static const struct cipher_testvec speck64_xts_tv_template[] = {
4059 +- {
4060 +- .key = "\x00\x00\x00\x00\x00\x00\x00\x00"
4061 +- "\x00\x00\x00\x00\x00\x00\x00\x00"
4062 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
4063 +- .klen = 24,
4064 +- .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
4065 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
4066 +- .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00"
4067 +- "\x00\x00\x00\x00\x00\x00\x00\x00"
4068 +- "\x00\x00\x00\x00\x00\x00\x00\x00"
4069 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
4070 +- .ctext = "\x84\xaf\x54\x07\x19\xd4\x7c\xa6"
4071 +- "\xe4\xfe\xdf\xc4\x1f\x34\xc3\xc2"
4072 +- "\x80\xf5\x72\xe7\xcd\xf0\x99\x22"
4073 +- "\x35\xa7\x2f\x06\xef\xdc\x51\xaa",
4074 +- .len = 32,
4075 +- }, {
4076 +- .key = "\x11\x11\x11\x11\x11\x11\x11\x11"
4077 +- "\x11\x11\x11\x11\x11\x11\x11\x11"
4078 +- "\x22\x22\x22\x22\x22\x22\x22\x22",
4079 +- .klen = 24,
4080 +- .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
4081 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
4082 +- .ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
4083 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
4084 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
4085 +- "\x44\x44\x44\x44\x44\x44\x44\x44",
4086 +- .ctext = "\x12\x56\x73\xcd\x15\x87\xa8\x59"
4087 +- "\xcf\x84\xae\xd9\x1c\x66\xd6\x9f"
4088 +- "\xb3\x12\x69\x7e\x36\xeb\x52\xff"
4089 +- "\x62\xdd\xba\x90\xb3\xe1\xee\x99",
4090 +- .len = 32,
4091 +- }, {
4092 +- .key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
4093 +- "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
4094 +- "\x22\x22\x22\x22\x22\x22\x22\x22",
4095 +- .klen = 24,
4096 +- .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
4097 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
4098 +- .ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
4099 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
4100 +- "\x44\x44\x44\x44\x44\x44\x44\x44"
4101 +- "\x44\x44\x44\x44\x44\x44\x44\x44",
4102 +- .ctext = "\x15\x1b\xe4\x2c\xa2\x5a\x2d\x2c"
4103 +- "\x27\x36\xc0\xbf\x5d\xea\x36\x37"
4104 +- "\x2d\x1a\x88\xbc\x66\xb5\xd0\x0b"
4105 +- "\xa1\xbc\x19\xb2\x0f\x3b\x75\x34",
4106 +- .len = 32,
4107 +- }, {
4108 +- .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
4109 +- "\x23\x53\x60\x28\x74\x71\x35\x26"
4110 +- "\x31\x41\x59\x26\x53\x58\x97\x93",
4111 +- .klen = 24,
4112 +- .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
4113 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
4114 +- .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
4115 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
4116 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
4117 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
4118 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
4119 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
4120 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
4121 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
4122 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
4123 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
4124 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
4125 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
4126 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
4127 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
4128 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
4129 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
4130 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
4131 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
4132 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
4133 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
4134 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
4135 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
4136 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
4137 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
4138 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
4139 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
4140 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
4141 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
4142 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
4143 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
4144 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
4145 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
4146 +- "\x00\x01\x02\x03\x04\x05\x06\x07"
4147 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
4148 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
4149 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
4150 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
4151 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
4152 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
4153 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
4154 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
4155 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
4156 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
4157 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
4158 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
4159 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
4160 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
4161 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
4162 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
4163 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
4164 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
4165 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
4166 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
4167 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
4168 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
4169 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
4170 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
4171 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
4172 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
4173 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
4174 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
4175 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
4176 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
4177 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
4178 +- .ctext = "\xaf\xa1\x81\xa6\x32\xbb\x15\x8e"
4179 +- "\xf8\x95\x2e\xd3\xe6\xee\x7e\x09"
4180 +- "\x0c\x1a\xf5\x02\x97\x8b\xe3\xb3"
4181 +- "\x11\xc7\x39\x96\xd0\x95\xf4\x56"
4182 +- "\xf4\xdd\x03\x38\x01\x44\x2c\xcf"
4183 +- "\x88\xae\x8e\x3c\xcd\xe7\xaa\x66"
4184 +- "\xfe\x3d\xc6\xfb\x01\x23\x51\x43"
4185 +- "\xd5\xd2\x13\x86\x94\x34\xe9\x62"
4186 +- "\xf9\x89\xe3\xd1\x7b\xbe\xf8\xef"
4187 +- "\x76\x35\x04\x3f\xdb\x23\x9d\x0b"
4188 +- "\x85\x42\xb9\x02\xd6\xcc\xdb\x96"
4189 +- "\xa7\x6b\x27\xb6\xd4\x45\x8f\x7d"
4190 +- "\xae\xd2\x04\xd5\xda\xc1\x7e\x24"
4191 +- "\x8c\x73\xbe\x48\x7e\xcf\x65\x28"
4192 +- "\x29\xe5\xbe\x54\x30\xcb\x46\x95"
4193 +- "\x4f\x2e\x8a\x36\xc8\x27\xc5\xbe"
4194 +- "\xd0\x1a\xaf\xab\x26\xcd\x9e\x69"
4195 +- "\xa1\x09\x95\x71\x26\xe9\xc4\xdf"
4196 +- "\xe6\x31\xc3\x46\xda\xaf\x0b\x41"
4197 +- "\x1f\xab\xb1\x8e\xd6\xfc\x0b\xb3"
4198 +- "\x82\xc0\x37\x27\xfc\x91\xa7\x05"
4199 +- "\xfb\xc5\xdc\x2b\x74\x96\x48\x43"
4200 +- "\x5d\x9c\x19\x0f\x60\x63\x3a\x1f"
4201 +- "\x6f\xf0\x03\xbe\x4d\xfd\xc8\x4a"
4202 +- "\xc6\xa4\x81\x6d\xc3\x12\x2a\x5c"
4203 +- "\x07\xff\xf3\x72\x74\x48\xb5\x40"
4204 +- "\x50\xb5\xdd\x90\x43\x31\x18\x15"
4205 +- "\x7b\xf2\xa6\xdb\x83\xc8\x4b\x4a"
4206 +- "\x29\x93\x90\x8b\xda\x07\xf0\x35"
4207 +- "\x6d\x90\x88\x09\x4e\x83\xf5\x5b"
4208 +- "\x94\x12\xbb\x33\x27\x1d\x3f\x23"
4209 +- "\x51\xa8\x7c\x07\xa2\xae\x77\xa6"
4210 +- "\x50\xfd\xcc\xc0\x4f\x80\x7a\x9f"
4211 +- "\x66\xdd\xcd\x75\x24\x8b\x33\xf7"
4212 +- "\x20\xdb\x83\x9b\x4f\x11\x63\x6e"
4213 +- "\xcf\x37\xef\xc9\x11\x01\x5c\x45"
4214 +- "\x32\x99\x7c\x3c\x9e\x42\x89\xe3"
4215 +- "\x70\x6d\x15\x9f\xb1\xe6\xb6\x05"
4216 +- "\xfe\x0c\xb9\x49\x2d\x90\x6d\xcc"
4217 +- "\x5d\x3f\xc1\xfe\x89\x0a\x2e\x2d"
4218 +- "\xa0\xa8\x89\x3b\x73\x39\xa5\x94"
4219 +- "\x4c\xa4\xa6\xbb\xa7\x14\x46\x89"
4220 +- "\x10\xff\xaf\xef\xca\xdd\x4f\x80"
4221 +- "\xb3\xdf\x3b\xab\xd4\xe5\x5a\xc7"
4222 +- "\x33\xca\x00\x8b\x8b\x3f\xea\xec"
4223 +- "\x68\x8a\xc2\x6d\xfd\xd4\x67\x0f"
4224 +- "\x22\x31\xe1\x0e\xfe\x5a\x04\xd5"
4225 +- "\x64\xa3\xf1\x1a\x76\x28\xcc\x35"
4226 +- "\x36\xa7\x0a\x74\xf7\x1c\x44\x9b"
4227 +- "\xc7\x1b\x53\x17\x02\xea\xd1\xad"
4228 +- "\x13\x51\x73\xc0\xa0\xb2\x05\x32"
4229 +- "\xa8\xa2\x37\x2e\xe1\x7a\x3a\x19"
4230 +- "\x26\xb4\x6c\x62\x5d\xb3\x1a\x1d"
4231 +- "\x59\xda\xee\x1a\x22\x18\xda\x0d"
4232 +- "\x88\x0f\x55\x8b\x72\x62\xfd\xc1"
4233 +- "\x69\x13\xcd\x0d\x5f\xc1\x09\x52"
4234 +- "\xee\xd6\xe3\x84\x4d\xee\xf6\x88"
4235 +- "\xaf\x83\xdc\x76\xf4\xc0\x93\x3f"
4236 +- "\x4a\x75\x2f\xb0\x0b\x3e\xc4\x54"
4237 +- "\x7d\x69\x8d\x00\x62\x77\x0d\x14"
4238 +- "\xbe\x7c\xa6\x7d\xc5\x24\x4f\xf3"
4239 +- "\x50\xf7\x5f\xf4\xc2\xca\x41\x97"
4240 +- "\x37\xbe\x75\x74\xcd\xf0\x75\x6e"
4241 +- "\x25\x23\x94\xbd\xda\x8d\xb0\xd4",
4242 +- .len = 512,
4243 +- }, {
4244 +- .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
4245 +- "\x23\x53\x60\x28\x74\x71\x35\x26"
4246 +- "\x62\x49\x77\x57\x24\x70\x93\x69"
4247 +- "\x99\x59\x57\x49\x66\x96\x76\x27",
4248 +- .klen = 32,
4249 +- .iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
4250 +- "\x00\x00\x00\x00\x00\x00\x00\x00",
4251 +- .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
4252 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
4253 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
4254 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
4255 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
4256 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
4257 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
4258 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
4259 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
4260 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
4261 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
4262 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
4263 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
4264 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
4265 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
4266 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
4267 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
4268 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
4269 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
4270 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
4271 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
4272 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
4273 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
4274 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
4275 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
4276 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
4277 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
4278 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
4279 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
4280 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
4281 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
4282 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
4283 +- "\x00\x01\x02\x03\x04\x05\x06\x07"
4284 +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
4285 +- "\x10\x11\x12\x13\x14\x15\x16\x17"
4286 +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
4287 +- "\x20\x21\x22\x23\x24\x25\x26\x27"
4288 +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
4289 +- "\x30\x31\x32\x33\x34\x35\x36\x37"
4290 +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
4291 +- "\x40\x41\x42\x43\x44\x45\x46\x47"
4292 +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
4293 +- "\x50\x51\x52\x53\x54\x55\x56\x57"
4294 +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
4295 +- "\x60\x61\x62\x63\x64\x65\x66\x67"
4296 +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
4297 +- "\x70\x71\x72\x73\x74\x75\x76\x77"
4298 +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
4299 +- "\x80\x81\x82\x83\x84\x85\x86\x87"
4300 +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
4301 +- "\x90\x91\x92\x93\x94\x95\x96\x97"
4302 +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
4303 +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
4304 +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
4305 +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
4306 +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
4307 +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
4308 +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
4309 +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
4310 +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
4311 +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
4312 +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
4313 +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
4314 +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
4315 +- .ctext = "\x55\xed\x71\xd3\x02\x8e\x15\x3b"
4316 +- "\xc6\x71\x29\x2d\x3e\x89\x9f\x59"
4317 +- "\x68\x6a\xcc\x8a\x56\x97\xf3\x95"
4318 +- "\x4e\x51\x08\xda\x2a\xf8\x6f\x3c"
4319 +- "\x78\x16\xea\x80\xdb\x33\x75\x94"
4320 +- "\xf9\x29\xc4\x2b\x76\x75\x97\xc7"
4321 +- "\xf2\x98\x2c\xf9\xff\xc8\xd5\x2b"
4322 +- "\x18\xf1\xaf\xcf\x7c\xc5\x0b\xee"
4323 +- "\xad\x3c\x76\x7c\xe6\x27\xa2\x2a"
4324 +- "\xe4\x66\xe1\xab\xa2\x39\xfc\x7c"
4325 +- "\xf5\xec\x32\x74\xa3\xb8\x03\x88"
4326 +- "\x52\xfc\x2e\x56\x3f\xa1\xf0\x9f"
4327 +- "\x84\x5e\x46\xed\x20\x89\xb6\x44"
4328 +- "\x8d\xd0\xed\x54\x47\x16\xbe\x95"
4329 +- "\x8a\xb3\x6b\x72\xc4\x32\x52\x13"
4330 +- "\x1b\xb0\x82\xbe\xac\xf9\x70\xa6"
4331 +- "\x44\x18\xdd\x8c\x6e\xca\x6e\x45"
4332 +- "\x8f\x1e\x10\x07\x57\x25\x98\x7b"
4333 +- "\x17\x8c\x78\xdd\x80\xa7\xd9\xd8"
4334 +- "\x63\xaf\xb9\x67\x57\xfd\xbc\xdb"
4335 +- "\x44\xe9\xc5\x65\xd1\xc7\x3b\xff"
4336 +- "\x20\xa0\x80\x1a\xc3\x9a\xad\x5e"
4337 +- "\x5d\x3b\xd3\x07\xd9\xf5\xfd\x3d"
4338 +- "\x4a\x8b\xa8\xd2\x6e\x7a\x51\x65"
4339 +- "\x6c\x8e\x95\xe0\x45\xc9\x5f\x4a"
4340 +- "\x09\x3c\x3d\x71\x7f\x0c\x84\x2a"
4341 +- "\xc8\x48\x52\x1a\xc2\xd5\xd6\x78"
4342 +- "\x92\x1e\xa0\x90\x2e\xea\xf0\xf3"
4343 +- "\xdc\x0f\xb1\xaf\x0d\x9b\x06\x2e"
4344 +- "\x35\x10\x30\x82\x0d\xe7\xc5\x9b"
4345 +- "\xde\x44\x18\xbd\x9f\xd1\x45\xa9"
4346 +- "\x7b\x7a\x4a\xad\x35\x65\x27\xca"
4347 +- "\xb2\xc3\xd4\x9b\x71\x86\x70\xee"
4348 +- "\xf1\x89\x3b\x85\x4b\x5b\xaa\xaf"
4349 +- "\xfc\x42\xc8\x31\x59\xbe\x16\x60"
4350 +- "\x4f\xf9\xfa\x12\xea\xd0\xa7\x14"
4351 +- "\xf0\x7a\xf3\xd5\x8d\xbd\x81\xef"
4352 +- "\x52\x7f\x29\x51\x94\x20\x67\x3c"
4353 +- "\xd1\xaf\x77\x9f\x22\x5a\x4e\x63"
4354 +- "\xe7\xff\x73\x25\xd1\xdd\x96\x8a"
4355 +- "\x98\x52\x6d\xf3\xac\x3e\xf2\x18"
4356 +- "\x6d\xf6\x0a\x29\xa6\x34\x3d\xed"
4357 +- "\xe3\x27\x0d\x9d\x0a\x02\x44\x7e"
4358 +- "\x5a\x7e\x67\x0f\x0a\x9e\xd6\xad"
4359 +- "\x91\xe6\x4d\x81\x8c\x5c\x59\xaa"
4360 +- "\xfb\xeb\x56\x53\xd2\x7d\x4c\x81"
4361 +- "\x65\x53\x0f\x41\x11\xbd\x98\x99"
4362 +- "\xf9\xc6\xfa\x51\x2e\xa3\xdd\x8d"
4363 +- "\x84\x98\xf9\x34\xed\x33\x2a\x1f"
4364 +- "\x82\xed\xc1\x73\x98\xd3\x02\xdc"
4365 +- "\xe6\xc2\x33\x1d\xa2\xb4\xca\x76"
4366 +- "\x63\x51\x34\x9d\x96\x12\xae\xce"
4367 +- "\x83\xc9\x76\x5e\xa4\x1b\x53\x37"
4368 +- "\x17\xd5\xc0\x80\x1d\x62\xf8\x3d"
4369 +- "\x54\x27\x74\xbb\x10\x86\x57\x46"
4370 +- "\x68\xe1\xed\x14\xe7\x9d\xfc\x84"
4371 +- "\x47\xbc\xc2\xf8\x19\x4b\x99\xcf"
4372 +- "\x7a\xe9\xc4\xb8\x8c\x82\x72\x4d"
4373 +- "\x7b\x4f\x38\x55\x36\x71\x64\xc1"
4374 +- "\xfc\x5c\x75\x52\x33\x02\x18\xf8"
4375 +- "\x17\xe1\x2b\xc2\x43\x39\xbd\x76"
4376 +- "\x9b\x63\x76\x32\x2f\x19\x72\x10"
4377 +- "\x9f\x21\x0c\xf1\x66\x50\x7f\xa5"
4378 +- "\x0d\x1f\x46\xe0\xba\xd3\x2f\x3c",
4379 +- .len = 512,
4380 +- .also_non_np = 1,
4381 +- .np = 3,
4382 +- .tap = { 512 - 20, 4, 16 },
4383 +- }
4384 +-};
4385 +-
4386 + /* Cast6 test vectors from RFC 2612 */
4387 + static const struct cipher_testvec cast6_tv_template[] = {
4388 + {
4389 +diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
4390 +index cf4fc0161164..e43cb71b6972 100644
4391 +--- a/drivers/acpi/acpi_lpit.c
4392 ++++ b/drivers/acpi/acpi_lpit.c
4393 +@@ -117,11 +117,17 @@ static void lpit_update_residency(struct lpit_residency_info *info,
4394 + if (!info->iomem_addr)
4395 + return;
4396 +
4397 ++ if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
4398 ++ return;
4399 ++
4400 + /* Silently fail, if cpuidle attribute group is not present */
4401 + sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
4402 + &dev_attr_low_power_idle_system_residency_us.attr,
4403 + "cpuidle");
4404 + } else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
4405 ++ if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
4406 ++ return;
4407 ++
4408 + /* Silently fail, if cpuidle attribute group is not present */
4409 + sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
4410 + &dev_attr_low_power_idle_cpu_residency_us.attr,
4411 +diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
4412 +index bf64cfa30feb..969bf8d515c0 100644
4413 +--- a/drivers/acpi/acpi_lpss.c
4414 ++++ b/drivers/acpi/acpi_lpss.c
4415 +@@ -327,9 +327,11 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
4416 + { "INT33FC", },
4417 +
4418 + /* Braswell LPSS devices */
4419 ++ { "80862286", LPSS_ADDR(lpss_dma_desc) },
4420 + { "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
4421 + { "8086228A", LPSS_ADDR(bsw_uart_dev_desc) },
4422 + { "8086228E", LPSS_ADDR(bsw_spi_dev_desc) },
4423 ++ { "808622C0", LPSS_ADDR(lpss_dma_desc) },
4424 + { "808622C1", LPSS_ADDR(bsw_i2c_dev_desc) },
4425 +
4426 + /* Broadwell LPSS devices */
4427 +diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
4428 +index 449d86d39965..fc447410ae4d 100644
4429 +--- a/drivers/acpi/acpi_processor.c
4430 ++++ b/drivers/acpi/acpi_processor.c
4431 +@@ -643,7 +643,7 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
4432 +
4433 + status = acpi_get_type(handle, &acpi_type);
4434 + if (ACPI_FAILURE(status))
4435 +- return false;
4436 ++ return status;
4437 +
4438 + switch (acpi_type) {
4439 + case ACPI_TYPE_PROCESSOR:
4440 +@@ -663,11 +663,12 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
4441 + }
4442 +
4443 + processor_validated_ids_update(uid);
4444 +- return true;
4445 ++ return AE_OK;
4446 +
4447 + err:
4448 ++ /* Exit on error, but don't abort the namespace walk */
4449 + acpi_handle_info(handle, "Invalid processor object\n");
4450 +- return false;
4451 ++ return AE_OK;
4452 +
4453 + }
4454 +
4455 +diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
4456 +index e9fb0bf3c8d2..78f9de260d5f 100644
4457 +--- a/drivers/acpi/acpica/dsopcode.c
4458 ++++ b/drivers/acpi/acpica/dsopcode.c
4459 +@@ -417,6 +417,10 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
4460 + ACPI_FORMAT_UINT64(obj_desc->region.address),
4461 + obj_desc->region.length));
4462 +
4463 ++ status = acpi_ut_add_address_range(obj_desc->region.space_id,
4464 ++ obj_desc->region.address,
4465 ++ obj_desc->region.length, node);
4466 ++
4467 + /* Now the address and length are valid for this opregion */
4468 +
4469 + obj_desc->region.flags |= AOPOBJ_DATA_VALID;
4470 +diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
4471 +index 0f0bdc9d24c6..314276779f57 100644
4472 +--- a/drivers/acpi/acpica/psloop.c
4473 ++++ b/drivers/acpi/acpica/psloop.c
4474 +@@ -417,6 +417,7 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
4475 + union acpi_parse_object *op = NULL; /* current op */
4476 + struct acpi_parse_state *parser_state;
4477 + u8 *aml_op_start = NULL;
4478 ++ u8 opcode_length;
4479 +
4480 + ACPI_FUNCTION_TRACE_PTR(ps_parse_loop, walk_state);
4481 +
4482 +@@ -540,8 +541,19 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
4483 + "Skip parsing opcode %s",
4484 + acpi_ps_get_opcode_name
4485 + (walk_state->opcode)));
4486 ++
4487 ++ /*
4488 ++ * Determine the opcode length before skipping the opcode.
4489 ++ * An opcode can be 1 byte or 2 bytes in length.
4490 ++ */
4491 ++ opcode_length = 1;
4492 ++ if ((walk_state->opcode & 0xFF00) ==
4493 ++ AML_EXTENDED_OPCODE) {
4494 ++ opcode_length = 2;
4495 ++ }
4496 + walk_state->parser_state.aml =
4497 +- walk_state->aml + 1;
4498 ++ walk_state->aml + opcode_length;
4499 ++
4500 + walk_state->parser_state.aml =
4501 + acpi_ps_get_next_package_end
4502 + (&walk_state->parser_state);
4503 +diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
4504 +index 7c479002e798..c0db96e8a81a 100644
4505 +--- a/drivers/acpi/nfit/core.c
4506 ++++ b/drivers/acpi/nfit/core.c
4507 +@@ -2456,7 +2456,8 @@ static int ars_get_cap(struct acpi_nfit_desc *acpi_desc,
4508 + return cmd_rc;
4509 + }
4510 +
4511 +-static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa)
4512 ++static int ars_start(struct acpi_nfit_desc *acpi_desc,
4513 ++ struct nfit_spa *nfit_spa, enum nfit_ars_state req_type)
4514 + {
4515 + int rc;
4516 + int cmd_rc;
4517 +@@ -2467,7 +2468,7 @@ static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa
4518 + memset(&ars_start, 0, sizeof(ars_start));
4519 + ars_start.address = spa->address;
4520 + ars_start.length = spa->length;
4521 +- if (test_bit(ARS_SHORT, &nfit_spa->ars_state))
4522 ++ if (req_type == ARS_REQ_SHORT)
4523 + ars_start.flags = ND_ARS_RETURN_PREV_DATA;
4524 + if (nfit_spa_type(spa) == NFIT_SPA_PM)
4525 + ars_start.type = ND_ARS_PERSISTENT;
4526 +@@ -2524,6 +2525,15 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
4527 + struct nd_region *nd_region = nfit_spa->nd_region;
4528 + struct device *dev;
4529 +
4530 ++ lockdep_assert_held(&acpi_desc->init_mutex);
4531 ++ /*
4532 ++ * Only advance the ARS state for ARS runs initiated by the
4533 ++ * kernel, ignore ARS results from BIOS initiated runs for scrub
4534 ++ * completion tracking.
4535 ++ */
4536 ++ if (acpi_desc->scrub_spa != nfit_spa)
4537 ++ return;
4538 ++
4539 + if ((ars_status->address >= spa->address && ars_status->address
4540 + < spa->address + spa->length)
4541 + || (ars_status->address < spa->address)) {
4542 +@@ -2543,23 +2553,13 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
4543 + } else
4544 + return;
4545 +
4546 +- if (test_bit(ARS_DONE, &nfit_spa->ars_state))
4547 +- return;
4548 +-
4549 +- if (!test_and_clear_bit(ARS_REQ, &nfit_spa->ars_state))
4550 +- return;
4551 +-
4552 ++ acpi_desc->scrub_spa = NULL;
4553 + if (nd_region) {
4554 + dev = nd_region_dev(nd_region);
4555 + nvdimm_region_notify(nd_region, NVDIMM_REVALIDATE_POISON);
4556 + } else
4557 + dev = acpi_desc->dev;
4558 +-
4559 +- dev_dbg(dev, "ARS: range %d %s complete\n", spa->range_index,
4560 +- test_bit(ARS_SHORT, &nfit_spa->ars_state)
4561 +- ? "short" : "long");
4562 +- clear_bit(ARS_SHORT, &nfit_spa->ars_state);
4563 +- set_bit(ARS_DONE, &nfit_spa->ars_state);
4564 ++ dev_dbg(dev, "ARS: range %d complete\n", spa->range_index);
4565 + }
4566 +
4567 + static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc)
4568 +@@ -2840,46 +2840,55 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc)
4569 + return 0;
4570 + }
4571 +
4572 +-static int ars_register(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa,
4573 +- int *query_rc)
4574 ++static int ars_register(struct acpi_nfit_desc *acpi_desc,
4575 ++ struct nfit_spa *nfit_spa)
4576 + {
4577 +- int rc = *query_rc;
4578 ++ int rc;
4579 +
4580 +- if (no_init_ars)
4581 ++ if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
4582 + return acpi_nfit_register_region(acpi_desc, nfit_spa);
4583 +
4584 +- set_bit(ARS_REQ, &nfit_spa->ars_state);
4585 +- set_bit(ARS_SHORT, &nfit_spa->ars_state);
4586 ++ set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
4587 ++ set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
4588 +
4589 +- switch (rc) {
4590 ++ switch (acpi_nfit_query_poison(acpi_desc)) {
4591 + case 0:
4592 + case -EAGAIN:
4593 +- rc = ars_start(acpi_desc, nfit_spa);
4594 +- if (rc == -EBUSY) {
4595 +- *query_rc = rc;
4596 ++ rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
4597 ++ /* shouldn't happen, try again later */
4598 ++ if (rc == -EBUSY)
4599 + break;
4600 +- } else if (rc == 0) {
4601 +- rc = acpi_nfit_query_poison(acpi_desc);
4602 +- } else {
4603 ++ if (rc) {
4604 + set_bit(ARS_FAILED, &nfit_spa->ars_state);
4605 + break;
4606 + }
4607 +- if (rc == -EAGAIN)
4608 +- clear_bit(ARS_SHORT, &nfit_spa->ars_state);
4609 +- else if (rc == 0)
4610 +- ars_complete(acpi_desc, nfit_spa);
4611 ++ clear_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
4612 ++ rc = acpi_nfit_query_poison(acpi_desc);
4613 ++ if (rc)
4614 ++ break;
4615 ++ acpi_desc->scrub_spa = nfit_spa;
4616 ++ ars_complete(acpi_desc, nfit_spa);
4617 ++ /*
4618 ++ * If ars_complete() says we didn't complete the
4619 ++ * short scrub, we'll try again with a long
4620 ++ * request.
4621 ++ */
4622 ++ acpi_desc->scrub_spa = NULL;
4623 + break;
4624 + case -EBUSY:
4625 ++ case -ENOMEM:
4626 + case -ENOSPC:
4627 ++ /*
4628 ++ * BIOS was using ARS, wait for it to complete (or
4629 ++ * resources to become available) and then perform our
4630 ++ * own scrubs.
4631 ++ */
4632 + break;
4633 + default:
4634 + set_bit(ARS_FAILED, &nfit_spa->ars_state);
4635 + break;
4636 + }
4637 +
4638 +- if (test_and_clear_bit(ARS_DONE, &nfit_spa->ars_state))
4639 +- set_bit(ARS_REQ, &nfit_spa->ars_state);
4640 +-
4641 + return acpi_nfit_register_region(acpi_desc, nfit_spa);
4642 + }
4643 +
4644 +@@ -2901,6 +2910,8 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
4645 + struct device *dev = acpi_desc->dev;
4646 + struct nfit_spa *nfit_spa;
4647 +
4648 ++ lockdep_assert_held(&acpi_desc->init_mutex);
4649 ++
4650 + if (acpi_desc->cancel)
4651 + return 0;
4652 +
4653 +@@ -2924,21 +2935,49 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
4654 +
4655 + ars_complete_all(acpi_desc);
4656 + list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
4657 ++ enum nfit_ars_state req_type;
4658 ++ int rc;
4659 ++
4660 + if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
4661 + continue;
4662 +- if (test_bit(ARS_REQ, &nfit_spa->ars_state)) {
4663 +- int rc = ars_start(acpi_desc, nfit_spa);
4664 +-
4665 +- clear_bit(ARS_DONE, &nfit_spa->ars_state);
4666 +- dev = nd_region_dev(nfit_spa->nd_region);
4667 +- dev_dbg(dev, "ARS: range %d ARS start (%d)\n",
4668 +- nfit_spa->spa->range_index, rc);
4669 +- if (rc == 0 || rc == -EBUSY)
4670 +- return 1;
4671 +- dev_err(dev, "ARS: range %d ARS failed (%d)\n",
4672 +- nfit_spa->spa->range_index, rc);
4673 +- set_bit(ARS_FAILED, &nfit_spa->ars_state);
4674 ++
4675 ++ /* prefer short ARS requests first */
4676 ++ if (test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state))
4677 ++ req_type = ARS_REQ_SHORT;
4678 ++ else if (test_bit(ARS_REQ_LONG, &nfit_spa->ars_state))
4679 ++ req_type = ARS_REQ_LONG;
4680 ++ else
4681 ++ continue;
4682 ++ rc = ars_start(acpi_desc, nfit_spa, req_type);
4683 ++
4684 ++ dev = nd_region_dev(nfit_spa->nd_region);
4685 ++ dev_dbg(dev, "ARS: range %d ARS start %s (%d)\n",
4686 ++ nfit_spa->spa->range_index,
4687 ++ req_type == ARS_REQ_SHORT ? "short" : "long",
4688 ++ rc);
4689 ++ /*
4690 ++ * Hmm, we raced someone else starting ARS? Try again in
4691 ++ * a bit.
4692 ++ */
4693 ++ if (rc == -EBUSY)
4694 ++ return 1;
4695 ++ if (rc == 0) {
4696 ++ dev_WARN_ONCE(dev, acpi_desc->scrub_spa,
4697 ++ "scrub start while range %d active\n",
4698 ++ acpi_desc->scrub_spa->spa->range_index);
4699 ++ clear_bit(req_type, &nfit_spa->ars_state);
4700 ++ acpi_desc->scrub_spa = nfit_spa;
4701 ++ /*
4702 ++ * Consider this spa last for future scrub
4703 ++ * requests
4704 ++ */
4705 ++ list_move_tail(&nfit_spa->list, &acpi_desc->spas);
4706 ++ return 1;
4707 + }
4708 ++
4709 ++ dev_err(dev, "ARS: range %d ARS failed (%d)\n",
4710 ++ nfit_spa->spa->range_index, rc);
4711 ++ set_bit(ARS_FAILED, &nfit_spa->ars_state);
4712 + }
4713 + return 0;
4714 + }
4715 +@@ -2994,6 +3033,7 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
4716 + struct nd_cmd_ars_cap ars_cap;
4717 + int rc;
4718 +
4719 ++ set_bit(ARS_FAILED, &nfit_spa->ars_state);
4720 + memset(&ars_cap, 0, sizeof(ars_cap));
4721 + rc = ars_get_cap(acpi_desc, &ars_cap, nfit_spa);
4722 + if (rc < 0)
4723 +@@ -3010,16 +3050,14 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
4724 + nfit_spa->clear_err_unit = ars_cap.clear_err_unit;
4725 + acpi_desc->max_ars = max(nfit_spa->max_ars, acpi_desc->max_ars);
4726 + clear_bit(ARS_FAILED, &nfit_spa->ars_state);
4727 +- set_bit(ARS_REQ, &nfit_spa->ars_state);
4728 + }
4729 +
4730 + static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
4731 + {
4732 + struct nfit_spa *nfit_spa;
4733 +- int rc, query_rc;
4734 ++ int rc;
4735 +
4736 + list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
4737 +- set_bit(ARS_FAILED, &nfit_spa->ars_state);
4738 + switch (nfit_spa_type(nfit_spa->spa)) {
4739 + case NFIT_SPA_VOLATILE:
4740 + case NFIT_SPA_PM:
4741 +@@ -3028,20 +3066,12 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
4742 + }
4743 + }
4744 +
4745 +- /*
4746 +- * Reap any results that might be pending before starting new
4747 +- * short requests.
4748 +- */
4749 +- query_rc = acpi_nfit_query_poison(acpi_desc);
4750 +- if (query_rc == 0)
4751 +- ars_complete_all(acpi_desc);
4752 +-
4753 + list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
4754 + switch (nfit_spa_type(nfit_spa->spa)) {
4755 + case NFIT_SPA_VOLATILE:
4756 + case NFIT_SPA_PM:
4757 + /* register regions and kick off initial ARS run */
4758 +- rc = ars_register(acpi_desc, nfit_spa, &query_rc);
4759 ++ rc = ars_register(acpi_desc, nfit_spa);
4760 + if (rc)
4761 + return rc;
4762 + break;
4763 +@@ -3236,7 +3266,8 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
4764 + return 0;
4765 + }
4766 +
4767 +-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
4768 ++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
4769 ++ enum nfit_ars_state req_type)
4770 + {
4771 + struct device *dev = acpi_desc->dev;
4772 + int scheduled = 0, busy = 0;
4773 +@@ -3256,13 +3287,10 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
4774 + if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
4775 + continue;
4776 +
4777 +- if (test_and_set_bit(ARS_REQ, &nfit_spa->ars_state))
4778 ++ if (test_and_set_bit(req_type, &nfit_spa->ars_state))
4779 + busy++;
4780 +- else {
4781 +- if (test_bit(ARS_SHORT, &flags))
4782 +- set_bit(ARS_SHORT, &nfit_spa->ars_state);
4783 ++ else
4784 + scheduled++;
4785 +- }
4786 + }
4787 + if (scheduled) {
4788 + sched_ars(acpi_desc);
4789 +@@ -3448,10 +3476,11 @@ static void acpi_nfit_update_notify(struct device *dev, acpi_handle handle)
4790 + static void acpi_nfit_uc_error_notify(struct device *dev, acpi_handle handle)
4791 + {
4792 + struct acpi_nfit_desc *acpi_desc = dev_get_drvdata(dev);
4793 +- unsigned long flags = (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON) ?
4794 +- 0 : 1 << ARS_SHORT;
4795 +
4796 +- acpi_nfit_ars_rescan(acpi_desc, flags);
4797 ++ if (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON)
4798 ++ acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG);
4799 ++ else
4800 ++ acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_SHORT);
4801 + }
4802 +
4803 + void __acpi_nfit_notify(struct device *dev, acpi_handle handle, u32 event)
4804 +diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
4805 +index a97ff42fe311..02c10de50386 100644
4806 +--- a/drivers/acpi/nfit/nfit.h
4807 ++++ b/drivers/acpi/nfit/nfit.h
4808 +@@ -118,9 +118,8 @@ enum nfit_dimm_notifiers {
4809 + };
4810 +
4811 + enum nfit_ars_state {
4812 +- ARS_REQ,
4813 +- ARS_DONE,
4814 +- ARS_SHORT,
4815 ++ ARS_REQ_SHORT,
4816 ++ ARS_REQ_LONG,
4817 + ARS_FAILED,
4818 + };
4819 +
4820 +@@ -197,6 +196,7 @@ struct acpi_nfit_desc {
4821 + struct device *dev;
4822 + u8 ars_start_flags;
4823 + struct nd_cmd_ars_status *ars_status;
4824 ++ struct nfit_spa *scrub_spa;
4825 + struct delayed_work dwork;
4826 + struct list_head list;
4827 + struct kernfs_node *scrub_count_state;
4828 +@@ -251,7 +251,8 @@ struct nfit_blk {
4829 +
4830 + extern struct list_head acpi_descs;
4831 + extern struct mutex acpi_desc_lock;
4832 +-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags);
4833 ++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
4834 ++ enum nfit_ars_state req_type);
4835 +
4836 + #ifdef CONFIG_X86_MCE
4837 + void nfit_mce_register(void);
4838 +diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
4839 +index 8df9abfa947b..ed73f6fb0779 100644
4840 +--- a/drivers/acpi/osl.c
4841 ++++ b/drivers/acpi/osl.c
4842 +@@ -617,15 +617,18 @@ void acpi_os_stall(u32 us)
4843 + }
4844 +
4845 + /*
4846 +- * Support ACPI 3.0 AML Timer operand
4847 +- * Returns 64-bit free-running, monotonically increasing timer
4848 +- * with 100ns granularity
4849 ++ * Support ACPI 3.0 AML Timer operand. Returns a 64-bit free-running,
4850 ++ * monotonically increasing timer with 100ns granularity. Do not use
4851 ++ * ktime_get() to implement this function because this function may get
4852 ++ * called after timekeeping has been suspended. Note: calling this function
4853 ++ * after timekeeping has been suspended may lead to unexpected results
4854 ++ * because when timekeeping is suspended the jiffies counter is not
4855 ++ * incremented. See also timekeeping_suspend().
4856 + */
4857 + u64 acpi_os_get_timer(void)
4858 + {
4859 +- u64 time_ns = ktime_to_ns(ktime_get());
4860 +- do_div(time_ns, 100);
4861 +- return time_ns;
4862 ++ return (get_jiffies_64() - INITIAL_JIFFIES) *
4863 ++ (ACPI_100NSEC_PER_SEC / HZ);
4864 + }
4865 +
4866 + acpi_status acpi_os_read_port(acpi_io_address port, u32 * value, u32 width)
4867 +diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
4868 +index d1e26cb599bf..da031b1df6f5 100644
4869 +--- a/drivers/acpi/pptt.c
4870 ++++ b/drivers/acpi/pptt.c
4871 +@@ -338,9 +338,6 @@ static struct acpi_pptt_cache *acpi_find_cache_node(struct acpi_table_header *ta
4872 + return found;
4873 + }
4874 +
4875 +-/* total number of attributes checked by the properties code */
4876 +-#define PPTT_CHECKED_ATTRIBUTES 4
4877 +-
4878 + /**
4879 + * update_cache_properties() - Update cacheinfo for the given processor
4880 + * @this_leaf: Kernel cache info structure being updated
4881 +@@ -357,25 +354,15 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
4882 + struct acpi_pptt_cache *found_cache,
4883 + struct acpi_pptt_processor *cpu_node)
4884 + {
4885 +- int valid_flags = 0;
4886 +-
4887 + this_leaf->fw_token = cpu_node;
4888 +- if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) {
4889 ++ if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
4890 + this_leaf->size = found_cache->size;
4891 +- valid_flags++;
4892 +- }
4893 +- if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) {
4894 ++ if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
4895 + this_leaf->coherency_line_size = found_cache->line_size;
4896 +- valid_flags++;
4897 +- }
4898 +- if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) {
4899 ++ if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
4900 + this_leaf->number_of_sets = found_cache->number_of_sets;
4901 +- valid_flags++;
4902 +- }
4903 +- if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) {
4904 ++ if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
4905 + this_leaf->ways_of_associativity = found_cache->associativity;
4906 +- valid_flags++;
4907 +- }
4908 + if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) {
4909 + switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
4910 + case ACPI_PPTT_CACHE_POLICY_WT:
4911 +@@ -402,11 +389,17 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
4912 + }
4913 + }
4914 + /*
4915 +- * If the above flags are valid, and the cache type is NOCACHE
4916 +- * update the cache type as well.
4917 ++ * If cache type is NOCACHE, then the cache hasn't been specified
4918 ++ * via other mechanisms. Update the type if a cache type has been
4919 ++ * provided.
4920 ++ *
4921 ++ * Note, we assume such caches are unified based on conventional system
4922 ++ * design and known examples. Significant work is required elsewhere to
4923 ++ * fully support data/instruction only type caches which are only
4924 ++ * specified in PPTT.
4925 + */
4926 + if (this_leaf->type == CACHE_TYPE_NOCACHE &&
4927 +- valid_flags == PPTT_CHECKED_ATTRIBUTES)
4928 ++ found_cache->flags & ACPI_PPTT_CACHE_TYPE_VALID)
4929 + this_leaf->type = CACHE_TYPE_UNIFIED;
4930 + }
4931 +
4932 +diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
4933 +index 99bf0c0394f8..321a9579556d 100644
4934 +--- a/drivers/ata/libata-core.c
4935 ++++ b/drivers/ata/libata-core.c
4936 +@@ -4552,6 +4552,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
4937 + /* These specific Samsung models/firmware-revs do not handle LPM well */
4938 + { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
4939 + { "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },
4940 ++ { "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
4941 +
4942 + /* devices that don't properly handle queued TRIM commands */
4943 + { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
4944 +diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
4945 +index dfb2c2622e5a..822e3060d834 100644
4946 +--- a/drivers/block/ataflop.c
4947 ++++ b/drivers/block/ataflop.c
4948 +@@ -1935,6 +1935,11 @@ static int __init atari_floppy_init (void)
4949 + unit[i].disk = alloc_disk(1);
4950 + if (!unit[i].disk)
4951 + goto Enomem;
4952 ++
4953 ++ unit[i].disk->queue = blk_init_queue(do_fd_request,
4954 ++ &ataflop_lock);
4955 ++ if (!unit[i].disk->queue)
4956 ++ goto Enomem;
4957 + }
4958 +
4959 + if (UseTrackbuffer < 0)
4960 +@@ -1966,10 +1971,6 @@ static int __init atari_floppy_init (void)
4961 + sprintf(unit[i].disk->disk_name, "fd%d", i);
4962 + unit[i].disk->fops = &floppy_fops;
4963 + unit[i].disk->private_data = &unit[i];
4964 +- unit[i].disk->queue = blk_init_queue(do_fd_request,
4965 +- &ataflop_lock);
4966 +- if (!unit[i].disk->queue)
4967 +- goto Enomem;
4968 + set_capacity(unit[i].disk, MAX_DISK_SIZE * 2);
4969 + add_disk(unit[i].disk);
4970 + }
4971 +@@ -1984,13 +1985,17 @@ static int __init atari_floppy_init (void)
4972 +
4973 + return 0;
4974 + Enomem:
4975 +- while (i--) {
4976 +- struct request_queue *q = unit[i].disk->queue;
4977 ++ do {
4978 ++ struct gendisk *disk = unit[i].disk;
4979 +
4980 +- put_disk(unit[i].disk);
4981 +- if (q)
4982 +- blk_cleanup_queue(q);
4983 +- }
4984 ++ if (disk) {
4985 ++ if (disk->queue) {
4986 ++ blk_cleanup_queue(disk->queue);
4987 ++ disk->queue = NULL;
4988 ++ }
4989 ++ put_disk(unit[i].disk);
4990 ++ }
4991 ++ } while (i--);
4992 +
4993 + unregister_blkdev(FLOPPY_MAJOR, "fd");
4994 + return -ENOMEM;
4995 +diff --git a/drivers/block/swim.c b/drivers/block/swim.c
4996 +index 0e31884a9519..cbe909c51847 100644
4997 +--- a/drivers/block/swim.c
4998 ++++ b/drivers/block/swim.c
4999 +@@ -887,8 +887,17 @@ static int swim_floppy_init(struct swim_priv *swd)
5000 +
5001 + exit_put_disks:
5002 + unregister_blkdev(FLOPPY_MAJOR, "fd");
5003 +- while (drive--)
5004 +- put_disk(swd->unit[drive].disk);
5005 ++ do {
5006 ++ struct gendisk *disk = swd->unit[drive].disk;
5007 ++
5008 ++ if (disk) {
5009 ++ if (disk->queue) {
5010 ++ blk_cleanup_queue(disk->queue);
5011 ++ disk->queue = NULL;
5012 ++ }
5013 ++ put_disk(disk);
5014 ++ }
5015 ++ } while (drive--);
5016 + return err;
5017 + }
5018 +
5019 +diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
5020 +index b5cedccb5d7d..144df6830b82 100644
5021 +--- a/drivers/block/xen-blkfront.c
5022 ++++ b/drivers/block/xen-blkfront.c
5023 +@@ -1911,6 +1911,7 @@ static int negotiate_mq(struct blkfront_info *info)
5024 + GFP_KERNEL);
5025 + if (!info->rinfo) {
5026 + xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
5027 ++ info->nr_rings = 0;
5028 + return -ENOMEM;
5029 + }
5030 +
5031 +@@ -2475,6 +2476,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
5032 +
5033 + dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
5034 +
5035 ++ if (!info)
5036 ++ return 0;
5037 ++
5038 + blkif_free(info, 0);
5039 +
5040 + mutex_lock(&info->mutex);
5041 +diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
5042 +index 99cde1f9467d..e3e4d929e74f 100644
5043 +--- a/drivers/bluetooth/btbcm.c
5044 ++++ b/drivers/bluetooth/btbcm.c
5045 +@@ -324,6 +324,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
5046 + { 0x4103, "BCM4330B1" }, /* 002.001.003 */
5047 + { 0x410e, "BCM43341B0" }, /* 002.001.014 */
5048 + { 0x4406, "BCM4324B3" }, /* 002.004.006 */
5049 ++ { 0x6109, "BCM4335C0" }, /* 003.001.009 */
5050 + { 0x610c, "BCM4354" }, /* 003.001.012 */
5051 + { 0x2122, "BCM4343A0" }, /* 001.001.034 */
5052 + { 0x2209, "BCM43430A1" }, /* 001.002.009 */
5053 +diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
5054 +index 265d6a6583bc..e33fefd6ceae 100644
5055 +--- a/drivers/char/ipmi/ipmi_ssif.c
5056 ++++ b/drivers/char/ipmi/ipmi_ssif.c
5057 +@@ -606,8 +606,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
5058 + flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
5059 + ssif_info->waiting_alert = true;
5060 + ssif_info->rtc_us_timer = SSIF_MSG_USEC;
5061 +- mod_timer(&ssif_info->retry_timer,
5062 +- jiffies + SSIF_MSG_JIFFIES);
5063 ++ if (!ssif_info->stopping)
5064 ++ mod_timer(&ssif_info->retry_timer,
5065 ++ jiffies + SSIF_MSG_JIFFIES);
5066 + ipmi_ssif_unlock_cond(ssif_info, flags);
5067 + return;
5068 + }
5069 +@@ -939,8 +940,9 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
5070 + ssif_info->waiting_alert = true;
5071 + ssif_info->retries_left = SSIF_RECV_RETRIES;
5072 + ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
5073 +- mod_timer(&ssif_info->retry_timer,
5074 +- jiffies + SSIF_MSG_PART_JIFFIES);
5075 ++ if (!ssif_info->stopping)
5076 ++ mod_timer(&ssif_info->retry_timer,
5077 ++ jiffies + SSIF_MSG_PART_JIFFIES);
5078 + ipmi_ssif_unlock_cond(ssif_info, flags);
5079 + }
5080 + }
5081 +diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
5082 +index 3a3a7a548a85..e8822b3d10e1 100644
5083 +--- a/drivers/char/tpm/tpm-interface.c
5084 ++++ b/drivers/char/tpm/tpm-interface.c
5085 +@@ -664,7 +664,8 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
5086 + return len;
5087 +
5088 + err = be32_to_cpu(header->return_code);
5089 +- if (err != 0 && desc)
5090 ++ if (err != 0 && err != TPM_ERR_DISABLED && err != TPM_ERR_DEACTIVATED
5091 ++ && desc)
5092 + dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
5093 + desc);
5094 + if (err)
5095 +diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
5096 +index 911475d36800..b150f87f38f5 100644
5097 +--- a/drivers/char/tpm/xen-tpmfront.c
5098 ++++ b/drivers/char/tpm/xen-tpmfront.c
5099 +@@ -264,7 +264,7 @@ static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
5100 + return -ENOMEM;
5101 + }
5102 +
5103 +- rv = xenbus_grant_ring(dev, &priv->shr, 1, &gref);
5104 ++ rv = xenbus_grant_ring(dev, priv->shr, 1, &gref);
5105 + if (rv < 0)
5106 + return rv;
5107 +
5108 +diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
5109 +index 0a9ebf00be46..e58bfcb1169e 100644
5110 +--- a/drivers/cpufreq/cpufreq-dt.c
5111 ++++ b/drivers/cpufreq/cpufreq-dt.c
5112 +@@ -32,6 +32,7 @@ struct private_data {
5113 + struct device *cpu_dev;
5114 + struct thermal_cooling_device *cdev;
5115 + const char *reg_name;
5116 ++ bool have_static_opps;
5117 + };
5118 +
5119 + static struct freq_attr *cpufreq_dt_attr[] = {
5120 +@@ -204,6 +205,15 @@ static int cpufreq_init(struct cpufreq_policy *policy)
5121 + }
5122 + }
5123 +
5124 ++ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
5125 ++ if (!priv) {
5126 ++ ret = -ENOMEM;
5127 ++ goto out_put_regulator;
5128 ++ }
5129 ++
5130 ++ priv->reg_name = name;
5131 ++ priv->opp_table = opp_table;
5132 ++
5133 + /*
5134 + * Initialize OPP tables for all policy->cpus. They will be shared by
5135 + * all CPUs which have marked their CPUs shared with OPP bindings.
5136 +@@ -214,7 +224,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
5137 + *
5138 + * OPPs might be populated at runtime, don't check for error here
5139 + */
5140 +- dev_pm_opp_of_cpumask_add_table(policy->cpus);
5141 ++ if (!dev_pm_opp_of_cpumask_add_table(policy->cpus))
5142 ++ priv->have_static_opps = true;
5143 +
5144 + /*
5145 + * But we need OPP table to function so if it is not there let's
5146 +@@ -240,19 +251,10 @@ static int cpufreq_init(struct cpufreq_policy *policy)
5147 + __func__, ret);
5148 + }
5149 +
5150 +- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
5151 +- if (!priv) {
5152 +- ret = -ENOMEM;
5153 +- goto out_free_opp;
5154 +- }
5155 +-
5156 +- priv->reg_name = name;
5157 +- priv->opp_table = opp_table;
5158 +-
5159 + ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
5160 + if (ret) {
5161 + dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
5162 +- goto out_free_priv;
5163 ++ goto out_free_opp;
5164 + }
5165 +
5166 + priv->cpu_dev = cpu_dev;
5167 +@@ -282,10 +284,11 @@ static int cpufreq_init(struct cpufreq_policy *policy)
5168 +
5169 + out_free_cpufreq_table:
5170 + dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
5171 +-out_free_priv:
5172 +- kfree(priv);
5173 + out_free_opp:
5174 +- dev_pm_opp_of_cpumask_remove_table(policy->cpus);
5175 ++ if (priv->have_static_opps)
5176 ++ dev_pm_opp_of_cpumask_remove_table(policy->cpus);
5177 ++ kfree(priv);
5178 ++out_put_regulator:
5179 + if (name)
5180 + dev_pm_opp_put_regulators(opp_table);
5181 + out_put_clk:
5182 +@@ -300,7 +303,8 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
5183 +
5184 + cpufreq_cooling_unregister(priv->cdev);
5185 + dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
5186 +- dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
5187 ++ if (priv->have_static_opps)
5188 ++ dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
5189 + if (priv->reg_name)
5190 + dev_pm_opp_put_regulators(priv->opp_table);
5191 +
5192 +diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
5193 +index f20f20a77d4d..4268f87e99fc 100644
5194 +--- a/drivers/cpufreq/cpufreq_conservative.c
5195 ++++ b/drivers/cpufreq/cpufreq_conservative.c
5196 +@@ -80,8 +80,10 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
5197 + * changed in the meantime, so fall back to current frequency in that
5198 + * case.
5199 + */
5200 +- if (requested_freq > policy->max || requested_freq < policy->min)
5201 ++ if (requested_freq > policy->max || requested_freq < policy->min) {
5202 + requested_freq = policy->cur;
5203 ++ dbs_info->requested_freq = requested_freq;
5204 ++ }
5205 +
5206 + freq_step = get_freq_step(cs_tuners, policy);
5207 +
5208 +@@ -92,7 +94,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
5209 + if (policy_dbs->idle_periods < UINT_MAX) {
5210 + unsigned int freq_steps = policy_dbs->idle_periods * freq_step;
5211 +
5212 +- if (requested_freq > freq_steps)
5213 ++ if (requested_freq > policy->min + freq_steps)
5214 + requested_freq -= freq_steps;
5215 + else
5216 + requested_freq = policy->min;
5217 +diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
5218 +index 4fb91ba39c36..ce3f9ad7120f 100644
5219 +--- a/drivers/crypto/caam/regs.h
5220 ++++ b/drivers/crypto/caam/regs.h
5221 +@@ -70,22 +70,22 @@
5222 + extern bool caam_little_end;
5223 + extern bool caam_imx;
5224 +
5225 +-#define caam_to_cpu(len) \
5226 +-static inline u##len caam##len ## _to_cpu(u##len val) \
5227 +-{ \
5228 +- if (caam_little_end) \
5229 +- return le##len ## _to_cpu(val); \
5230 +- else \
5231 +- return be##len ## _to_cpu(val); \
5232 ++#define caam_to_cpu(len) \
5233 ++static inline u##len caam##len ## _to_cpu(u##len val) \
5234 ++{ \
5235 ++ if (caam_little_end) \
5236 ++ return le##len ## _to_cpu((__force __le##len)val); \
5237 ++ else \
5238 ++ return be##len ## _to_cpu((__force __be##len)val); \
5239 + }
5240 +
5241 +-#define cpu_to_caam(len) \
5242 +-static inline u##len cpu_to_caam##len(u##len val) \
5243 +-{ \
5244 +- if (caam_little_end) \
5245 +- return cpu_to_le##len(val); \
5246 +- else \
5247 +- return cpu_to_be##len(val); \
5248 ++#define cpu_to_caam(len) \
5249 ++static inline u##len cpu_to_caam##len(u##len val) \
5250 ++{ \
5251 ++ if (caam_little_end) \
5252 ++ return (__force u##len)cpu_to_le##len(val); \
5253 ++ else \
5254 ++ return (__force u##len)cpu_to_be##len(val); \
5255 + }
5256 +
5257 + caam_to_cpu(16)
5258 +diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
5259 +index 85820a2d69d4..987899610b46 100644
5260 +--- a/drivers/dma/dma-jz4780.c
5261 ++++ b/drivers/dma/dma-jz4780.c
5262 +@@ -761,6 +761,11 @@ static int jz4780_dma_probe(struct platform_device *pdev)
5263 + struct resource *res;
5264 + int i, ret;
5265 +
5266 ++ if (!dev->of_node) {
5267 ++ dev_err(dev, "This driver must be probed from devicetree\n");
5268 ++ return -EINVAL;
5269 ++ }
5270 ++
5271 + jzdma = devm_kzalloc(dev, sizeof(*jzdma), GFP_KERNEL);
5272 + if (!jzdma)
5273 + return -ENOMEM;
5274 +diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
5275 +index 4fa4c06c9edb..21a5708985bc 100644
5276 +--- a/drivers/dma/ioat/init.c
5277 ++++ b/drivers/dma/ioat/init.c
5278 +@@ -1205,8 +1205,15 @@ static void ioat_shutdown(struct pci_dev *pdev)
5279 +
5280 + spin_lock_bh(&ioat_chan->prep_lock);
5281 + set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
5282 +- del_timer_sync(&ioat_chan->timer);
5283 + spin_unlock_bh(&ioat_chan->prep_lock);
5284 ++ /*
5285 ++ * Synchronization rule for del_timer_sync():
5286 ++ * - The caller must not hold locks which would prevent
5287 ++ * completion of the timer's handler.
5288 ++ * So prep_lock cannot be held before calling it.
5289 ++ */
5290 ++ del_timer_sync(&ioat_chan->timer);
5291 ++
5292 + /* this should quiesce then reset */
5293 + ioat_reset_hw(ioat_chan);
5294 + }
5295 +diff --git a/drivers/dma/ppc4xx/adma.c b/drivers/dma/ppc4xx/adma.c
5296 +index 4cf0d4d0cecf..25610286979f 100644
5297 +--- a/drivers/dma/ppc4xx/adma.c
5298 ++++ b/drivers/dma/ppc4xx/adma.c
5299 +@@ -4360,7 +4360,7 @@ static ssize_t enable_store(struct device_driver *dev, const char *buf,
5300 + }
5301 + static DRIVER_ATTR_RW(enable);
5302 +
5303 +-static ssize_t poly_store(struct device_driver *dev, char *buf)
5304 ++static ssize_t poly_show(struct device_driver *dev, char *buf)
5305 + {
5306 + ssize_t size = 0;
5307 + u32 reg;
5308 +diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
5309 +index 18aeabb1d5ee..e2addb2bca29 100644
5310 +--- a/drivers/edac/amd64_edac.c
5311 ++++ b/drivers/edac/amd64_edac.c
5312 +@@ -2200,6 +2200,15 @@ static struct amd64_family_type family_types[] = {
5313 + .dbam_to_cs = f17_base_addr_to_cs_size,
5314 + }
5315 + },
5316 ++ [F17_M10H_CPUS] = {
5317 ++ .ctl_name = "F17h_M10h",
5318 ++ .f0_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F0,
5319 ++ .f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6,
5320 ++ .ops = {
5321 ++ .early_channel_count = f17_early_channel_count,
5322 ++ .dbam_to_cs = f17_base_addr_to_cs_size,
5323 ++ }
5324 ++ },
5325 + };
5326 +
5327 + /*
5328 +@@ -3188,6 +3197,11 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
5329 + break;
5330 +
5331 + case 0x17:
5332 ++ if (pvt->model >= 0x10 && pvt->model <= 0x2f) {
5333 ++ fam_type = &family_types[F17_M10H_CPUS];
5334 ++ pvt->ops = &family_types[F17_M10H_CPUS].ops;
5335 ++ break;
5336 ++ }
5337 + fam_type = &family_types[F17_CPUS];
5338 + pvt->ops = &family_types[F17_CPUS].ops;
5339 + break;
5340 +diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
5341 +index 1d4b74e9a037..4242f8e39c18 100644
5342 +--- a/drivers/edac/amd64_edac.h
5343 ++++ b/drivers/edac/amd64_edac.h
5344 +@@ -115,6 +115,8 @@
5345 + #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F2 0x1582
5346 + #define PCI_DEVICE_ID_AMD_17H_DF_F0 0x1460
5347 + #define PCI_DEVICE_ID_AMD_17H_DF_F6 0x1466
5348 ++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F0 0x15e8
5349 ++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee
5350 +
5351 + /*
5352 + * Function 1 - Address Map
5353 +@@ -281,6 +283,7 @@ enum amd_families {
5354 + F16_CPUS,
5355 + F16_M30H_CPUS,
5356 + F17_CPUS,
5357 ++ F17_M10H_CPUS,
5358 + NUM_FAMILIES,
5359 + };
5360 +
5361 +diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
5362 +index 8e120bf60624..f1d19504a028 100644
5363 +--- a/drivers/edac/i7core_edac.c
5364 ++++ b/drivers/edac/i7core_edac.c
5365 +@@ -1711,6 +1711,7 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
5366 + u32 errnum = find_first_bit(&error, 32);
5367 +
5368 + if (uncorrected_error) {
5369 ++ core_err_cnt = 1;
5370 + if (ripv)
5371 + tp_event = HW_EVENT_ERR_FATAL;
5372 + else
5373 +diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
5374 +index 4a89c8093307..498d253a3b7e 100644
5375 +--- a/drivers/edac/sb_edac.c
5376 ++++ b/drivers/edac/sb_edac.c
5377 +@@ -2881,6 +2881,7 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
5378 + recoverable = GET_BITFIELD(m->status, 56, 56);
5379 +
5380 + if (uncorrected_error) {
5381 ++ core_err_cnt = 1;
5382 + if (ripv) {
5383 + type = "FATAL";
5384 + tp_event = HW_EVENT_ERR_FATAL;
5385 +diff --git a/drivers/edac/skx_edac.c b/drivers/edac/skx_edac.c
5386 +index fae095162c01..4ba92f1dd0f7 100644
5387 +--- a/drivers/edac/skx_edac.c
5388 ++++ b/drivers/edac/skx_edac.c
5389 +@@ -668,7 +668,7 @@ sad_found:
5390 + break;
5391 + case 2:
5392 + lchan = (addr >> shift) % 2;
5393 +- lchan = (lchan << 1) | ~lchan;
5394 ++ lchan = (lchan << 1) | !lchan;
5395 + break;
5396 + case 3:
5397 + lchan = ((addr >> shift) % 2) << 1;
5398 +@@ -959,6 +959,7 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
5399 + recoverable = GET_BITFIELD(m->status, 56, 56);
5400 +
5401 + if (uncorrected_error) {
5402 ++ core_err_cnt = 1;
5403 + if (ripv) {
5404 + type = "FATAL";
5405 + tp_event = HW_EVENT_ERR_FATAL;
5406 +diff --git a/drivers/firmware/google/coreboot_table.c b/drivers/firmware/google/coreboot_table.c
5407 +index 19db5709ae28..898bb9abc41f 100644
5408 +--- a/drivers/firmware/google/coreboot_table.c
5409 ++++ b/drivers/firmware/google/coreboot_table.c
5410 +@@ -110,7 +110,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
5411 +
5412 + if (strncmp(header.signature, "LBIO", sizeof(header.signature))) {
5413 + pr_warn("coreboot_table: coreboot table missing or corrupt!\n");
5414 +- return -ENODEV;
5415 ++ ret = -ENODEV;
5416 ++ goto out;
5417 + }
5418 +
5419 + ptr_entry = (void *)ptr_header + header.header_bytes;
5420 +@@ -137,7 +138,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
5421 +
5422 + ptr_entry += entry.size;
5423 + }
5424 +-
5425 ++out:
5426 ++ iounmap(ptr);
5427 + return ret;
5428 + }
5429 + EXPORT_SYMBOL(coreboot_table_init);
5430 +@@ -146,7 +148,6 @@ int coreboot_table_exit(void)
5431 + {
5432 + if (ptr_header) {
5433 + bus_unregister(&coreboot_bus_type);
5434 +- iounmap(ptr_header);
5435 + ptr_header = NULL;
5436 + }
5437 +
5438 +diff --git a/drivers/gpio/gpio-brcmstb.c b/drivers/gpio/gpio-brcmstb.c
5439 +index 16c7f9f49416..af936dcca659 100644
5440 +--- a/drivers/gpio/gpio-brcmstb.c
5441 ++++ b/drivers/gpio/gpio-brcmstb.c
5442 +@@ -664,6 +664,18 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
5443 + struct brcmstb_gpio_bank *bank;
5444 + struct gpio_chip *gc;
5445 +
5446 ++ /*
5447 ++ * If bank_width is 0, then there is an empty bank in the
5448 ++ * register block. Special handling for this case.
5449 ++ */
5450 ++ if (bank_width == 0) {
5451 ++ dev_dbg(dev, "Width 0 found: Empty bank @ %d\n",
5452 ++ num_banks);
5453 ++ num_banks++;
5454 ++ gpio_base += MAX_GPIO_PER_BANK;
5455 ++ continue;
5456 ++ }
5457 ++
5458 + bank = devm_kzalloc(dev, sizeof(*bank), GFP_KERNEL);
5459 + if (!bank) {
5460 + err = -ENOMEM;
5461 +@@ -740,9 +752,6 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
5462 + goto fail;
5463 + }
5464 +
5465 +- dev_info(dev, "Registered %d banks (GPIO(s): %d-%d)\n",
5466 +- num_banks, priv->gpio_base, gpio_base - 1);
5467 +-
5468 + if (priv->parent_wake_irq && need_wakeup_event)
5469 + pm_wakeup_event(dev, 0);
5470 +
5471 +diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
5472 +index 895741e9cd7d..52ccf1c31855 100644
5473 +--- a/drivers/gpu/drm/drm_atomic.c
5474 ++++ b/drivers/gpu/drm/drm_atomic.c
5475 +@@ -173,6 +173,11 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
5476 + state->crtcs[i].state = NULL;
5477 + state->crtcs[i].old_state = NULL;
5478 + state->crtcs[i].new_state = NULL;
5479 ++
5480 ++ if (state->crtcs[i].commit) {
5481 ++ drm_crtc_commit_put(state->crtcs[i].commit);
5482 ++ state->crtcs[i].commit = NULL;
5483 ++ }
5484 + }
5485 +
5486 + for (i = 0; i < config->num_total_plane; i++) {
5487 +diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
5488 +index 81e32199d3ef..abca95b970ea 100644
5489 +--- a/drivers/gpu/drm/drm_atomic_helper.c
5490 ++++ b/drivers/gpu/drm/drm_atomic_helper.c
5491 +@@ -1384,15 +1384,16 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_vblanks);
5492 + void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,
5493 + struct drm_atomic_state *old_state)
5494 + {
5495 +- struct drm_crtc_state *new_crtc_state;
5496 + struct drm_crtc *crtc;
5497 + int i;
5498 +
5499 +- for_each_new_crtc_in_state(old_state, crtc, new_crtc_state, i) {
5500 +- struct drm_crtc_commit *commit = new_crtc_state->commit;
5501 ++ for (i = 0; i < dev->mode_config.num_crtc; i++) {
5502 ++ struct drm_crtc_commit *commit = old_state->crtcs[i].commit;
5503 + int ret;
5504 +
5505 +- if (!commit)
5506 ++ crtc = old_state->crtcs[i].ptr;
5507 ++
5508 ++ if (!crtc || !commit)
5509 + continue;
5510 +
5511 + ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
5512 +@@ -1906,6 +1907,9 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
5513 + drm_crtc_commit_get(commit);
5514 +
5515 + commit->abort_completion = true;
5516 ++
5517 ++ state->crtcs[i].commit = commit;
5518 ++ drm_crtc_commit_get(commit);
5519 + }
5520 +
5521 + for_each_oldnew_connector_in_state(state, conn, old_conn_state, new_conn_state, i) {
5522 +diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
5523 +index 98a36e6c69ad..bd207857a964 100644
5524 +--- a/drivers/gpu/drm/drm_crtc.c
5525 ++++ b/drivers/gpu/drm/drm_crtc.c
5526 +@@ -560,9 +560,9 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
5527 + struct drm_mode_crtc *crtc_req = data;
5528 + struct drm_crtc *crtc;
5529 + struct drm_plane *plane;
5530 +- struct drm_connector **connector_set = NULL, *connector;
5531 +- struct drm_framebuffer *fb = NULL;
5532 +- struct drm_display_mode *mode = NULL;
5533 ++ struct drm_connector **connector_set, *connector;
5534 ++ struct drm_framebuffer *fb;
5535 ++ struct drm_display_mode *mode;
5536 + struct drm_mode_set set;
5537 + uint32_t __user *set_connectors_ptr;
5538 + struct drm_modeset_acquire_ctx ctx;
5539 +@@ -591,6 +591,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
5540 + mutex_lock(&crtc->dev->mode_config.mutex);
5541 + drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
5542 + retry:
5543 ++ connector_set = NULL;
5544 ++ fb = NULL;
5545 ++ mode = NULL;
5546 ++
5547 + ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
5548 + if (ret)
5549 + goto out;
5550 +diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
5551 +index 59a11026dceb..45a8ba42c8f4 100644
5552 +--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
5553 ++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
5554 +@@ -1446,8 +1446,7 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
5555 + }
5556 +
5557 + /* The CEC module handles HDMI hotplug detection */
5558 +- cec_np = of_find_compatible_node(np->parent, NULL,
5559 +- "mediatek,mt8173-cec");
5560 ++ cec_np = of_get_compatible_child(np->parent, "mediatek,mt8173-cec");
5561 + if (!cec_np) {
5562 + dev_err(dev, "Failed to find CEC node\n");
5563 + return -EINVAL;
5564 +@@ -1457,8 +1456,10 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
5565 + if (!cec_pdev) {
5566 + dev_err(hdmi->dev, "Waiting for CEC device %pOF\n",
5567 + cec_np);
5568 ++ of_node_put(cec_np);
5569 + return -EPROBE_DEFER;
5570 + }
5571 ++ of_node_put(cec_np);
5572 + hdmi->cec_dev = &cec_pdev->dev;
5573 +
5574 + /*
5575 +diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c
5576 +index 23872d08308c..a746017fac17 100644
5577 +--- a/drivers/hid/usbhid/hiddev.c
5578 ++++ b/drivers/hid/usbhid/hiddev.c
5579 +@@ -512,14 +512,24 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
5580 + if (cmd == HIDIOCGCOLLECTIONINDEX) {
5581 + if (uref->usage_index >= field->maxusage)
5582 + goto inval;
5583 ++ uref->usage_index =
5584 ++ array_index_nospec(uref->usage_index,
5585 ++ field->maxusage);
5586 + } else if (uref->usage_index >= field->report_count)
5587 + goto inval;
5588 + }
5589 +
5590 +- if ((cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) &&
5591 +- (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
5592 +- uref->usage_index + uref_multi->num_values > field->report_count))
5593 +- goto inval;
5594 ++ if (cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) {
5595 ++ if (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
5596 ++ uref->usage_index + uref_multi->num_values >
5597 ++ field->report_count)
5598 ++ goto inval;
5599 ++
5600 ++ uref->usage_index =
5601 ++ array_index_nospec(uref->usage_index,
5602 ++ field->report_count -
5603 ++ uref_multi->num_values);
5604 ++ }
5605 +
5606 + switch (cmd) {
5607 + case HIDIOCGUSAGE:
5608 +diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
5609 +index ad7afa74d365..ff9a1d8e90f7 100644
5610 +--- a/drivers/hid/wacom_wac.c
5611 ++++ b/drivers/hid/wacom_wac.c
5612 +@@ -3335,6 +3335,7 @@ static void wacom_setup_intuos(struct wacom_wac *wacom_wac)
5613 +
5614 + void wacom_setup_device_quirks(struct wacom *wacom)
5615 + {
5616 ++ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
5617 + struct wacom_features *features = &wacom->wacom_wac.features;
5618 +
5619 + /* The pen and pad share the same interface on most devices */
5620 +@@ -3464,6 +3465,24 @@ void wacom_setup_device_quirks(struct wacom *wacom)
5621 +
5622 + if (features->type == REMOTE)
5623 + features->device_type |= WACOM_DEVICETYPE_WL_MONITOR;
5624 ++
5625 ++ /* HID descriptor for DTK-2451 / DTH-2452 claims to report lots
5626 ++ * of things it shouldn't. Let's fix up the damage...
5627 ++ */
5628 ++ if (wacom->hdev->product == 0x382 || wacom->hdev->product == 0x37d) {
5629 ++ features->quirks &= ~WACOM_QUIRK_TOOLSERIAL;
5630 ++ __clear_bit(BTN_TOOL_BRUSH, wacom_wac->pen_input->keybit);
5631 ++ __clear_bit(BTN_TOOL_PENCIL, wacom_wac->pen_input->keybit);
5632 ++ __clear_bit(BTN_TOOL_AIRBRUSH, wacom_wac->pen_input->keybit);
5633 ++ __clear_bit(ABS_Z, wacom_wac->pen_input->absbit);
5634 ++ __clear_bit(ABS_DISTANCE, wacom_wac->pen_input->absbit);
5635 ++ __clear_bit(ABS_TILT_X, wacom_wac->pen_input->absbit);
5636 ++ __clear_bit(ABS_TILT_Y, wacom_wac->pen_input->absbit);
5637 ++ __clear_bit(ABS_WHEEL, wacom_wac->pen_input->absbit);
5638 ++ __clear_bit(ABS_MISC, wacom_wac->pen_input->absbit);
5639 ++ __clear_bit(MSC_SERIAL, wacom_wac->pen_input->mscbit);
5640 ++ __clear_bit(EV_MSC, wacom_wac->pen_input->evbit);
5641 ++ }
5642 + }
5643 +
5644 + int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
5645 +diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
5646 +index 0f0e091c117c..c4a1ebcfffb6 100644
5647 +--- a/drivers/hv/channel_mgmt.c
5648 ++++ b/drivers/hv/channel_mgmt.c
5649 +@@ -606,16 +606,18 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
5650 + bool perf_chn = vmbus_devs[dev_type].perf_device;
5651 + struct vmbus_channel *primary = channel->primary_channel;
5652 + int next_node;
5653 +- struct cpumask available_mask;
5654 ++ cpumask_var_t available_mask;
5655 + struct cpumask *alloced_mask;
5656 +
5657 + if ((vmbus_proto_version == VERSION_WS2008) ||
5658 +- (vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
5659 ++ (vmbus_proto_version == VERSION_WIN7) || (!perf_chn) ||
5660 ++ !alloc_cpumask_var(&available_mask, GFP_KERNEL)) {
5661 + /*
5662 + * Prior to win8, all channel interrupts are
5663 + * delivered on cpu 0.
5664 + * Also if the channel is not a performance critical
5665 + * channel, bind it to cpu 0.
5666 ++ * In case alloc_cpumask_var() fails, bind it to cpu 0.
5667 + */
5668 + channel->numa_node = 0;
5669 + channel->target_cpu = 0;
5670 +@@ -653,7 +655,7 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
5671 + cpumask_clear(alloced_mask);
5672 + }
5673 +
5674 +- cpumask_xor(&available_mask, alloced_mask,
5675 ++ cpumask_xor(available_mask, alloced_mask,
5676 + cpumask_of_node(primary->numa_node));
5677 +
5678 + cur_cpu = -1;
5679 +@@ -671,10 +673,10 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
5680 + }
5681 +
5682 + while (true) {
5683 +- cur_cpu = cpumask_next(cur_cpu, &available_mask);
5684 ++ cur_cpu = cpumask_next(cur_cpu, available_mask);
5685 + if (cur_cpu >= nr_cpu_ids) {
5686 + cur_cpu = -1;
5687 +- cpumask_copy(&available_mask,
5688 ++ cpumask_copy(available_mask,
5689 + cpumask_of_node(primary->numa_node));
5690 + continue;
5691 + }
5692 +@@ -704,6 +706,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
5693 +
5694 + channel->target_cpu = cur_cpu;
5695 + channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu);
5696 ++
5697 ++ free_cpumask_var(available_mask);
5698 + }
5699 +
5700 + static void vmbus_wait_for_unload(void)
5701 +diff --git a/drivers/hwmon/pmbus/pmbus.c b/drivers/hwmon/pmbus/pmbus.c
5702 +index 7718e58dbda5..7688dab32f6e 100644
5703 +--- a/drivers/hwmon/pmbus/pmbus.c
5704 ++++ b/drivers/hwmon/pmbus/pmbus.c
5705 +@@ -118,6 +118,8 @@ static int pmbus_identify(struct i2c_client *client,
5706 + } else {
5707 + info->pages = 1;
5708 + }
5709 ++
5710 ++ pmbus_clear_faults(client);
5711 + }
5712 +
5713 + if (pmbus_check_byte_register(client, 0, PMBUS_VOUT_MODE)) {
5714 +diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
5715 +index 82c3754e21e3..2e2b5851139c 100644
5716 +--- a/drivers/hwmon/pmbus/pmbus_core.c
5717 ++++ b/drivers/hwmon/pmbus/pmbus_core.c
5718 +@@ -2015,7 +2015,10 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
5719 + if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK))
5720 + client->flags |= I2C_CLIENT_PEC;
5721 +
5722 +- pmbus_clear_faults(client);
5723 ++ if (data->info->pages)
5724 ++ pmbus_clear_faults(client);
5725 ++ else
5726 ++ pmbus_clear_fault_page(client, -1);
5727 +
5728 + if (info->identify) {
5729 + ret = (*info->identify)(client, info);
5730 +diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
5731 +index 7838af58f92d..9d611dd268e1 100644
5732 +--- a/drivers/hwmon/pwm-fan.c
5733 ++++ b/drivers/hwmon/pwm-fan.c
5734 +@@ -290,9 +290,19 @@ static int pwm_fan_remove(struct platform_device *pdev)
5735 + static int pwm_fan_suspend(struct device *dev)
5736 + {
5737 + struct pwm_fan_ctx *ctx = dev_get_drvdata(dev);
5738 ++ struct pwm_args args;
5739 ++ int ret;
5740 ++
5741 ++ pwm_get_args(ctx->pwm, &args);
5742 ++
5743 ++ if (ctx->pwm_value) {
5744 ++ ret = pwm_config(ctx->pwm, 0, args.period);
5745 ++ if (ret < 0)
5746 ++ return ret;
5747 +
5748 +- if (ctx->pwm_value)
5749 + pwm_disable(ctx->pwm);
5750 ++ }
5751 ++
5752 + return 0;
5753 + }
5754 +
5755 +diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
5756 +index 320d29df17e1..8c1d53f7af83 100644
5757 +--- a/drivers/hwtracing/coresight/coresight-etb10.c
5758 ++++ b/drivers/hwtracing/coresight/coresight-etb10.c
5759 +@@ -147,6 +147,10 @@ static int etb_enable(struct coresight_device *csdev, u32 mode)
5760 + if (val == CS_MODE_PERF)
5761 + return -EBUSY;
5762 +
5763 ++ /* Don't let perf disturb sysFS sessions */
5764 ++ if (val == CS_MODE_SYSFS && mode == CS_MODE_PERF)
5765 ++ return -EBUSY;
5766 ++
5767 + /* Nothing to do, the tracer is already enabled. */
5768 + if (val == CS_MODE_SYSFS)
5769 + goto out;
5770 +diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
5771 +index 3c1c817f6968..e152716bf07f 100644
5772 +--- a/drivers/i2c/busses/i2c-rcar.c
5773 ++++ b/drivers/i2c/busses/i2c-rcar.c
5774 +@@ -812,8 +812,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
5775 +
5776 + time_left = wait_event_timeout(priv->wait, priv->flags & ID_DONE,
5777 + num * adap->timeout);
5778 +- if (!time_left) {
5779 ++
5780 ++ /* cleanup DMA if it couldn't complete properly due to an error */
5781 ++ if (priv->dma_direction != DMA_NONE)
5782 + rcar_i2c_cleanup_dma(priv);
5783 ++
5784 ++ if (!time_left) {
5785 + rcar_i2c_init(priv);
5786 + ret = -ETIMEDOUT;
5787 + } else if (priv->flags & ID_NACK) {
5788 +diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
5789 +index 44b516863c9d..75d2f73582a3 100644
5790 +--- a/drivers/iio/adc/at91_adc.c
5791 ++++ b/drivers/iio/adc/at91_adc.c
5792 +@@ -248,12 +248,14 @@ static irqreturn_t at91_adc_trigger_handler(int irq, void *p)
5793 + struct iio_poll_func *pf = p;
5794 + struct iio_dev *idev = pf->indio_dev;
5795 + struct at91_adc_state *st = iio_priv(idev);
5796 ++ struct iio_chan_spec const *chan;
5797 + int i, j = 0;
5798 +
5799 + for (i = 0; i < idev->masklength; i++) {
5800 + if (!test_bit(i, idev->active_scan_mask))
5801 + continue;
5802 +- st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, i));
5803 ++ chan = idev->channels + i;
5804 ++ st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, chan->channel));
5805 + j++;
5806 + }
5807 +
5808 +@@ -279,6 +281,8 @@ static void handle_adc_eoc_trigger(int irq, struct iio_dev *idev)
5809 + iio_trigger_poll(idev->trig);
5810 + } else {
5811 + st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb));
5812 ++ /* Needed to ACK the DRDY interruption */
5813 ++ at91_adc_readl(st, AT91_ADC_LCDR);
5814 + st->done = true;
5815 + wake_up_interruptible(&st->wq_data_avail);
5816 + }
5817 +diff --git a/drivers/iio/adc/fsl-imx25-gcq.c b/drivers/iio/adc/fsl-imx25-gcq.c
5818 +index ea264fa9e567..929c617db364 100644
5819 +--- a/drivers/iio/adc/fsl-imx25-gcq.c
5820 ++++ b/drivers/iio/adc/fsl-imx25-gcq.c
5821 +@@ -209,12 +209,14 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
5822 + ret = of_property_read_u32(child, "reg", &reg);
5823 + if (ret) {
5824 + dev_err(dev, "Failed to get reg property\n");
5825 ++ of_node_put(child);
5826 + return ret;
5827 + }
5828 +
5829 + if (reg >= MX25_NUM_CFGS) {
5830 + dev_err(dev,
5831 + "reg value is greater than the number of available configuration registers\n");
5832 ++ of_node_put(child);
5833 + return -EINVAL;
5834 + }
5835 +
5836 +@@ -228,6 +230,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
5837 + if (IS_ERR(priv->vref[refp])) {
5838 + dev_err(dev, "Error, trying to use external voltage reference without a vref-%s regulator.",
5839 + mx25_gcq_refp_names[refp]);
5840 ++ of_node_put(child);
5841 + return PTR_ERR(priv->vref[refp]);
5842 + }
5843 + priv->channel_vref_mv[reg] =
5844 +@@ -240,6 +243,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
5845 + break;
5846 + default:
5847 + dev_err(dev, "Invalid positive reference %d\n", refp);
5848 ++ of_node_put(child);
5849 + return -EINVAL;
5850 + }
5851 +
5852 +@@ -254,10 +258,12 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
5853 +
5854 + if ((refp & MX25_ADCQ_CFG_REFP_MASK) != refp) {
5855 + dev_err(dev, "Invalid fsl,adc-refp property value\n");
5856 ++ of_node_put(child);
5857 + return -EINVAL;
5858 + }
5859 + if ((refn & MX25_ADCQ_CFG_REFN_MASK) != refn) {
5860 + dev_err(dev, "Invalid fsl,adc-refn property value\n");
5861 ++ of_node_put(child);
5862 + return -EINVAL;
5863 + }
5864 +
5865 +diff --git a/drivers/iio/dac/ad5064.c b/drivers/iio/dac/ad5064.c
5866 +index bf4fc40ec84d..2f98cb2a3b96 100644
5867 +--- a/drivers/iio/dac/ad5064.c
5868 ++++ b/drivers/iio/dac/ad5064.c
5869 +@@ -808,6 +808,40 @@ static int ad5064_set_config(struct ad5064_state *st, unsigned int val)
5870 + return ad5064_write(st, cmd, 0, val, 0);
5871 + }
5872 +
5873 ++static int ad5064_request_vref(struct ad5064_state *st, struct device *dev)
5874 ++{
5875 ++ unsigned int i;
5876 ++ int ret;
5877 ++
5878 ++ for (i = 0; i < ad5064_num_vref(st); ++i)
5879 ++ st->vref_reg[i].supply = ad5064_vref_name(st, i);
5880 ++
5881 ++ if (!st->chip_info->internal_vref)
5882 ++ return devm_regulator_bulk_get(dev, ad5064_num_vref(st),
5883 ++ st->vref_reg);
5884 ++
5885 ++ /*
5886 ++ * This assumes that when the regulator has an internal VREF
5887 ++ * there is only one external VREF connection, which is
5888 ++ * currently the case for all supported devices.
5889 ++ */
5890 ++ st->vref_reg[0].consumer = devm_regulator_get_optional(dev, "vref");
5891 ++ if (!IS_ERR(st->vref_reg[0].consumer))
5892 ++ return 0;
5893 ++
5894 ++ ret = PTR_ERR(st->vref_reg[0].consumer);
5895 ++ if (ret != -ENODEV)
5896 ++ return ret;
5897 ++
5898 ++ /* If no external regulator was supplied use the internal VREF */
5899 ++ st->use_internal_vref = true;
5900 ++ ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
5901 ++ if (ret)
5902 ++ dev_err(dev, "Failed to enable internal vref: %d\n", ret);
5903 ++
5904 ++ return ret;
5905 ++}
5906 ++
5907 + static int ad5064_probe(struct device *dev, enum ad5064_type type,
5908 + const char *name, ad5064_write_func write)
5909 + {
5910 +@@ -828,22 +862,11 @@ static int ad5064_probe(struct device *dev, enum ad5064_type type,
5911 + st->dev = dev;
5912 + st->write = write;
5913 +
5914 +- for (i = 0; i < ad5064_num_vref(st); ++i)
5915 +- st->vref_reg[i].supply = ad5064_vref_name(st, i);
5916 ++ ret = ad5064_request_vref(st, dev);
5917 ++ if (ret)
5918 ++ return ret;
5919 +
5920 +- ret = devm_regulator_bulk_get(dev, ad5064_num_vref(st),
5921 +- st->vref_reg);
5922 +- if (ret) {
5923 +- if (!st->chip_info->internal_vref)
5924 +- return ret;
5925 +- st->use_internal_vref = true;
5926 +- ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
5927 +- if (ret) {
5928 +- dev_err(dev, "Failed to enable internal vref: %d\n",
5929 +- ret);
5930 +- return ret;
5931 +- }
5932 +- } else {
5933 ++ if (!st->use_internal_vref) {
5934 + ret = regulator_bulk_enable(ad5064_num_vref(st), st->vref_reg);
5935 + if (ret)
5936 + return ret;
5937 +diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
5938 +index 31c7efaf8e7a..63406cd212a7 100644
5939 +--- a/drivers/infiniband/core/sysfs.c
5940 ++++ b/drivers/infiniband/core/sysfs.c
5941 +@@ -516,7 +516,7 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
5942 + ret = get_perf_mad(p->ibdev, p->port_num, tab_attr->attr_id, &data,
5943 + 40 + offset / 8, sizeof(data));
5944 + if (ret < 0)
5945 +- return sprintf(buf, "N/A (no PMA)\n");
5946 ++ return ret;
5947 +
5948 + switch (width) {
5949 + case 4:
5950 +@@ -1061,10 +1061,12 @@ static int add_port(struct ib_device *device, int port_num,
5951 + goto err_put;
5952 + }
5953 +
5954 +- p->pma_table = get_counter_table(device, port_num);
5955 +- ret = sysfs_create_group(&p->kobj, p->pma_table);
5956 +- if (ret)
5957 +- goto err_put_gid_attrs;
5958 ++ if (device->process_mad) {
5959 ++ p->pma_table = get_counter_table(device, port_num);
5960 ++ ret = sysfs_create_group(&p->kobj, p->pma_table);
5961 ++ if (ret)
5962 ++ goto err_put_gid_attrs;
5963 ++ }
5964 +
5965 + p->gid_group.name = "gids";
5966 + p->gid_group.attrs = alloc_group_attrs(show_port_gid, attr.gid_tbl_len);
5967 +@@ -1177,7 +1179,8 @@ err_free_gid:
5968 + p->gid_group.attrs = NULL;
5969 +
5970 + err_remove_pma:
5971 +- sysfs_remove_group(&p->kobj, p->pma_table);
5972 ++ if (p->pma_table)
5973 ++ sysfs_remove_group(&p->kobj, p->pma_table);
5974 +
5975 + err_put_gid_attrs:
5976 + kobject_put(&p->gid_attr_group->kobj);
5977 +@@ -1289,7 +1292,9 @@ static void free_port_list_attributes(struct ib_device *device)
5978 + kfree(port->hw_stats);
5979 + free_hsag(&port->kobj, port->hw_stats_ag);
5980 + }
5981 +- sysfs_remove_group(p, port->pma_table);
5982 ++
5983 ++ if (port->pma_table)
5984 ++ sysfs_remove_group(p, port->pma_table);
5985 + sysfs_remove_group(p, &port->pkey_group);
5986 + sysfs_remove_group(p, &port->gid_group);
5987 + sysfs_remove_group(&port->gid_attr_group->kobj,
5988 +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
5989 +index 6ad0d46ab879..249efa0a6aba 100644
5990 +--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
5991 ++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
5992 +@@ -360,7 +360,8 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
5993 + }
5994 +
5995 + /* Make sure the HW is stopped! */
5996 +- bnxt_qplib_nq_stop_irq(nq, true);
5997 ++ if (nq->requested)
5998 ++ bnxt_qplib_nq_stop_irq(nq, true);
5999 +
6000 + if (nq->bar_reg_iomem)
6001 + iounmap(nq->bar_reg_iomem);
6002 +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
6003 +index 2852d350ada1..6637df77d236 100644
6004 +--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
6005 ++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
6006 +@@ -309,8 +309,17 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
6007 + rcfw->aeq_handler(rcfw, qp_event, qp);
6008 + break;
6009 + default:
6010 +- /* Command Response */
6011 +- spin_lock_irqsave(&cmdq->lock, flags);
6012 ++ /*
6013 ++ * Command Response
6014 ++ * cmdq->lock needs to be acquired to synchronize
6015 ++ * the command send and completion reaping. This function
6016 ++ * is always called with creq->lock held. Using
6017 ++ * the nested variant of spin_lock.
6018 ++ *
6019 ++ */
6020 ++
6021 ++ spin_lock_irqsave_nested(&cmdq->lock, flags,
6022 ++ SINGLE_DEPTH_NESTING);
6023 + cookie = le16_to_cpu(qp_event->cookie);
6024 + mcookie = qp_event->cookie;
6025 + blocked = cookie & RCFW_CMD_IS_BLOCKING;
6026 +diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
6027 +index 73339fd47dd8..addd432f3f38 100644
6028 +--- a/drivers/infiniband/hw/mlx5/mr.c
6029 ++++ b/drivers/infiniband/hw/mlx5/mr.c
6030 +@@ -691,7 +691,6 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
6031 + init_completion(&ent->compl);
6032 + INIT_WORK(&ent->work, cache_work_func);
6033 + INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
6034 +- queue_work(cache->wq, &ent->work);
6035 +
6036 + if (i > MR_CACHE_LAST_STD_ENTRY) {
6037 + mlx5_odp_init_mr_cache_entry(ent);
6038 +@@ -711,6 +710,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
6039 + ent->limit = dev->mdev->profile->mr_cache[i].limit;
6040 + else
6041 + ent->limit = 0;
6042 ++ queue_work(cache->wq, &ent->work);
6043 + }
6044 +
6045 + err = mlx5_mr_cache_debugfs_init(dev);
6046 +diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
6047 +index 01eae67d5a6e..e260f6a156ed 100644
6048 +--- a/drivers/infiniband/hw/mlx5/qp.c
6049 ++++ b/drivers/infiniband/hw/mlx5/qp.c
6050 +@@ -3264,7 +3264,9 @@ static bool modify_dci_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state new
6051 + int req = IB_QP_STATE;
6052 + int opt = 0;
6053 +
6054 +- if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
6055 ++ if (new_state == IB_QPS_RESET) {
6056 ++ return is_valid_mask(attr_mask, req, opt);
6057 ++ } else if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
6058 + req |= IB_QP_PKEY_INDEX | IB_QP_PORT;
6059 + return is_valid_mask(attr_mask, req, opt);
6060 + } else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) {
6061 +diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
6062 +index 5b57de30dee4..b8104d50b1a0 100644
6063 +--- a/drivers/infiniband/sw/rxe/rxe_resp.c
6064 ++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
6065 +@@ -682,6 +682,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
6066 + rxe_advance_resp_resource(qp);
6067 +
6068 + res->type = RXE_READ_MASK;
6069 ++ res->replay = 0;
6070 +
6071 + res->read.va = qp->resp.va;
6072 + res->read.va_org = qp->resp.va;
6073 +@@ -752,7 +753,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
6074 + state = RESPST_DONE;
6075 + } else {
6076 + qp->resp.res = NULL;
6077 +- qp->resp.opcode = -1;
6078 ++ if (!res->replay)
6079 ++ qp->resp.opcode = -1;
6080 + if (psn_compare(res->cur_psn, qp->resp.psn) >= 0)
6081 + qp->resp.psn = res->cur_psn;
6082 + state = RESPST_CLEANUP;
6083 +@@ -814,6 +816,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
6084 +
6085 + /* next expected psn, read handles this separately */
6086 + qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
6087 ++ qp->resp.ack_psn = qp->resp.psn;
6088 +
6089 + qp->resp.opcode = pkt->opcode;
6090 + qp->resp.status = IB_WC_SUCCESS;
6091 +@@ -1060,7 +1063,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
6092 + struct rxe_pkt_info *pkt)
6093 + {
6094 + enum resp_states rc;
6095 +- u32 prev_psn = (qp->resp.psn - 1) & BTH_PSN_MASK;
6096 ++ u32 prev_psn = (qp->resp.ack_psn - 1) & BTH_PSN_MASK;
6097 +
6098 + if (pkt->mask & RXE_SEND_MASK ||
6099 + pkt->mask & RXE_WRITE_MASK) {
6100 +@@ -1103,6 +1106,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
6101 + res->state = (pkt->psn == res->first_psn) ?
6102 + rdatm_res_state_new :
6103 + rdatm_res_state_replay;
6104 ++ res->replay = 1;
6105 +
6106 + /* Reset the resource, except length. */
6107 + res->read.va_org = iova;
6108 +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
6109 +index af1470d29391..332a16dad2a7 100644
6110 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
6111 ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
6112 +@@ -171,6 +171,7 @@ enum rdatm_res_state {
6113 +
6114 + struct resp_res {
6115 + int type;
6116 ++ int replay;
6117 + u32 first_psn;
6118 + u32 last_psn;
6119 + u32 cur_psn;
6120 +@@ -195,6 +196,7 @@ struct rxe_resp_info {
6121 + enum rxe_qp_state state;
6122 + u32 msn;
6123 + u32 psn;
6124 ++ u32 ack_psn;
6125 + int opcode;
6126 + int drop_msg;
6127 + int goto_error;
6128 +diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
6129 +index a620701f9d41..1ac2bbc84671 100644
6130 +--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
6131 ++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
6132 +@@ -1439,11 +1439,15 @@ static void ipoib_cm_skb_reap(struct work_struct *work)
6133 + spin_unlock_irqrestore(&priv->lock, flags);
6134 + netif_tx_unlock_bh(dev);
6135 +
6136 +- if (skb->protocol == htons(ETH_P_IP))
6137 ++ if (skb->protocol == htons(ETH_P_IP)) {
6138 ++ memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
6139 + icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
6140 ++ }
6141 + #if IS_ENABLED(CONFIG_IPV6)
6142 +- else if (skb->protocol == htons(ETH_P_IPV6))
6143 ++ else if (skb->protocol == htons(ETH_P_IPV6)) {
6144 ++ memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
6145 + icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
6146 ++ }
6147 + #endif
6148 + dev_kfree_skb_any(skb);
6149 +
6150 +diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
6151 +index 5349e22b5c78..29646004a4a7 100644
6152 +--- a/drivers/iommu/arm-smmu.c
6153 ++++ b/drivers/iommu/arm-smmu.c
6154 +@@ -469,6 +469,9 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
6155 + bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
6156 + void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx);
6157 +
6158 ++ if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
6159 ++ wmb();
6160 ++
6161 + if (stage1) {
6162 + reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
6163 +
6164 +@@ -510,6 +513,9 @@ static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size,
6165 + struct arm_smmu_domain *smmu_domain = cookie;
6166 + void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu);
6167 +
6168 ++ if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
6169 ++ wmb();
6170 ++
6171 + writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID);
6172 + }
6173 +
6174 +diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
6175 +index b1b47a40a278..faa7d61b9d6c 100644
6176 +--- a/drivers/irqchip/qcom-pdc.c
6177 ++++ b/drivers/irqchip/qcom-pdc.c
6178 +@@ -124,6 +124,7 @@ static int qcom_pdc_gic_set_type(struct irq_data *d, unsigned int type)
6179 + break;
6180 + case IRQ_TYPE_EDGE_BOTH:
6181 + pdc_type = PDC_EDGE_DUAL;
6182 ++ type = IRQ_TYPE_EDGE_RISING;
6183 + break;
6184 + case IRQ_TYPE_LEVEL_HIGH:
6185 + pdc_type = PDC_LEVEL_HIGH;
6186 +diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
6187 +index ed9cc977c8b3..f6427e805150 100644
6188 +--- a/drivers/lightnvm/pblk-core.c
6189 ++++ b/drivers/lightnvm/pblk-core.c
6190 +@@ -1538,13 +1538,14 @@ struct pblk_line *pblk_line_replace_data(struct pblk *pblk)
6191 + struct pblk_line *cur, *new = NULL;
6192 + unsigned int left_seblks;
6193 +
6194 +- cur = l_mg->data_line;
6195 + new = l_mg->data_next;
6196 + if (!new)
6197 + goto out;
6198 +- l_mg->data_line = new;
6199 +
6200 + spin_lock(&l_mg->free_lock);
6201 ++ cur = l_mg->data_line;
6202 ++ l_mg->data_line = new;
6203 ++
6204 + pblk_line_setup_metadata(new, l_mg, &pblk->lm);
6205 + spin_unlock(&l_mg->free_lock);
6206 +
6207 +diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
6208 +index d83466b3821b..958bda8a69b7 100644
6209 +--- a/drivers/lightnvm/pblk-recovery.c
6210 ++++ b/drivers/lightnvm/pblk-recovery.c
6211 +@@ -956,12 +956,14 @@ next:
6212 + }
6213 + }
6214 +
6215 +- spin_lock(&l_mg->free_lock);
6216 + if (!open_lines) {
6217 ++ spin_lock(&l_mg->free_lock);
6218 + WARN_ON_ONCE(!test_and_clear_bit(meta_line,
6219 + &l_mg->meta_bitmap));
6220 ++ spin_unlock(&l_mg->free_lock);
6221 + pblk_line_replace_data(pblk);
6222 + } else {
6223 ++ spin_lock(&l_mg->free_lock);
6224 + /* Allocate next line for preparation */
6225 + l_mg->data_next = pblk_line_get(pblk);
6226 + if (l_mg->data_next) {
6227 +@@ -969,8 +971,8 @@ next:
6228 + l_mg->data_next->type = PBLK_LINETYPE_DATA;
6229 + is_next = 1;
6230 + }
6231 ++ spin_unlock(&l_mg->free_lock);
6232 + }
6233 +- spin_unlock(&l_mg->free_lock);
6234 +
6235 + if (is_next)
6236 + pblk_line_erase(pblk, l_mg->data_next);
6237 +diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
6238 +index 88a0a7c407aa..432f7d94d369 100644
6239 +--- a/drivers/lightnvm/pblk-sysfs.c
6240 ++++ b/drivers/lightnvm/pblk-sysfs.c
6241 +@@ -262,8 +262,14 @@ static ssize_t pblk_sysfs_lines(struct pblk *pblk, char *page)
6242 + sec_in_line = l_mg->data_line->sec_in_line;
6243 + meta_weight = bitmap_weight(&l_mg->meta_bitmap,
6244 + PBLK_DATA_LINES);
6245 +- map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
6246 ++
6247 ++ spin_lock(&l_mg->data_line->lock);
6248 ++ if (l_mg->data_line->map_bitmap)
6249 ++ map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
6250 + lm->sec_per_line);
6251 ++ else
6252 ++ map_weight = 0;
6253 ++ spin_unlock(&l_mg->data_line->lock);
6254 + }
6255 + spin_unlock(&l_mg->free_lock);
6256 +
6257 +diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
6258 +index f353e52941f5..89ac60d4849e 100644
6259 +--- a/drivers/lightnvm/pblk-write.c
6260 ++++ b/drivers/lightnvm/pblk-write.c
6261 +@@ -417,12 +417,11 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
6262 + rqd->ppa_list[i] = addr_to_gen_ppa(pblk, paddr, id);
6263 + }
6264 +
6265 ++ spin_lock(&l_mg->close_lock);
6266 + emeta->mem += rq_len;
6267 +- if (emeta->mem >= lm->emeta_len[0]) {
6268 +- spin_lock(&l_mg->close_lock);
6269 ++ if (emeta->mem >= lm->emeta_len[0])
6270 + list_del(&meta_line->list);
6271 +- spin_unlock(&l_mg->close_lock);
6272 +- }
6273 ++ spin_unlock(&l_mg->close_lock);
6274 +
6275 + pblk_down_page(pblk, rqd->ppa_list, rqd->nr_ppas);
6276 +
6277 +@@ -491,14 +490,15 @@ static struct pblk_line *pblk_should_submit_meta_io(struct pblk *pblk,
6278 + struct pblk_line *meta_line;
6279 +
6280 + spin_lock(&l_mg->close_lock);
6281 +-retry:
6282 + if (list_empty(&l_mg->emeta_list)) {
6283 + spin_unlock(&l_mg->close_lock);
6284 + return NULL;
6285 + }
6286 + meta_line = list_first_entry(&l_mg->emeta_list, struct pblk_line, list);
6287 +- if (meta_line->emeta->mem >= lm->emeta_len[0])
6288 +- goto retry;
6289 ++ if (meta_line->emeta->mem >= lm->emeta_len[0]) {
6290 ++ spin_unlock(&l_mg->close_lock);
6291 ++ return NULL;
6292 ++ }
6293 + spin_unlock(&l_mg->close_lock);
6294 +
6295 + if (!pblk_valid_meta_ppa(pblk, meta_line, data_rqd))
6296 +diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
6297 +index 311e91b1a14f..256f18b67e8a 100644
6298 +--- a/drivers/mailbox/pcc.c
6299 ++++ b/drivers/mailbox/pcc.c
6300 +@@ -461,8 +461,11 @@ static int __init acpi_pcc_probe(void)
6301 + count = acpi_table_parse_entries_array(ACPI_SIG_PCCT,
6302 + sizeof(struct acpi_table_pcct), proc,
6303 + ACPI_PCCT_TYPE_RESERVED, MAX_PCC_SUBSPACES);
6304 +- if (count == 0 || count > MAX_PCC_SUBSPACES) {
6305 +- pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
6306 ++ if (count <= 0 || count > MAX_PCC_SUBSPACES) {
6307 ++ if (count < 0)
6308 ++ pr_warn("Error parsing PCC subspaces from PCCT\n");
6309 ++ else
6310 ++ pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
6311 + return -EINVAL;
6312 + }
6313 +
6314 +diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
6315 +index 547c9eedc2f4..d681524f82a4 100644
6316 +--- a/drivers/md/bcache/btree.c
6317 ++++ b/drivers/md/bcache/btree.c
6318 +@@ -2380,7 +2380,7 @@ static int refill_keybuf_fn(struct btree_op *op, struct btree *b,
6319 + struct keybuf *buf = refill->buf;
6320 + int ret = MAP_CONTINUE;
6321 +
6322 +- if (bkey_cmp(k, refill->end) >= 0) {
6323 ++ if (bkey_cmp(k, refill->end) > 0) {
6324 + ret = MAP_DONE;
6325 + goto out;
6326 + }
6327 +diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
6328 +index ae67f5fa8047..9d2fa1359029 100644
6329 +--- a/drivers/md/bcache/request.c
6330 ++++ b/drivers/md/bcache/request.c
6331 +@@ -843,7 +843,7 @@ static void cached_dev_read_done_bh(struct closure *cl)
6332 +
6333 + bch_mark_cache_accounting(s->iop.c, s->d,
6334 + !s->cache_missed, s->iop.bypass);
6335 +- trace_bcache_read(s->orig_bio, !s->cache_miss, s->iop.bypass);
6336 ++ trace_bcache_read(s->orig_bio, !s->cache_missed, s->iop.bypass);
6337 +
6338 + if (s->iop.status)
6339 + continue_at_nobarrier(cl, cached_dev_read_error, bcache_wq);
6340 +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
6341 +index fa4058e43202..6e5220554220 100644
6342 +--- a/drivers/md/bcache/super.c
6343 ++++ b/drivers/md/bcache/super.c
6344 +@@ -1131,11 +1131,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
6345 + }
6346 +
6347 + if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {
6348 +- bch_sectors_dirty_init(&dc->disk);
6349 + atomic_set(&dc->has_dirty, 1);
6350 + bch_writeback_queue(dc);
6351 + }
6352 +
6353 ++ bch_sectors_dirty_init(&dc->disk);
6354 ++
6355 + bch_cached_dev_run(dc);
6356 + bcache_device_link(&dc->disk, c, "bdev");
6357 +
6358 +diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
6359 +index 225b15aa0340..34819f2c257d 100644
6360 +--- a/drivers/md/bcache/sysfs.c
6361 ++++ b/drivers/md/bcache/sysfs.c
6362 +@@ -263,6 +263,7 @@ STORE(__cached_dev)
6363 + 1, WRITEBACK_RATE_UPDATE_SECS_MAX);
6364 + d_strtoul(writeback_rate_i_term_inverse);
6365 + d_strtoul_nonzero(writeback_rate_p_term_inverse);
6366 ++ d_strtoul_nonzero(writeback_rate_minimum);
6367 +
6368 + sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
6369 +
6370 +@@ -389,6 +390,7 @@ static struct attribute *bch_cached_dev_files[] = {
6371 + &sysfs_writeback_rate_update_seconds,
6372 + &sysfs_writeback_rate_i_term_inverse,
6373 + &sysfs_writeback_rate_p_term_inverse,
6374 ++ &sysfs_writeback_rate_minimum,
6375 + &sysfs_writeback_rate_debug,
6376 + &sysfs_errors,
6377 + &sysfs_io_error_limit,
6378 +diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
6379 +index b810ea77e6b1..f666778ad237 100644
6380 +--- a/drivers/md/dm-ioctl.c
6381 ++++ b/drivers/md/dm-ioctl.c
6382 +@@ -1720,8 +1720,7 @@ static void free_params(struct dm_ioctl *param, size_t param_size, int param_fla
6383 + }
6384 +
6385 + static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kernel,
6386 +- int ioctl_flags,
6387 +- struct dm_ioctl **param, int *param_flags)
6388 ++ int ioctl_flags, struct dm_ioctl **param, int *param_flags)
6389 + {
6390 + struct dm_ioctl *dmi;
6391 + int secure_data;
6392 +@@ -1762,18 +1761,13 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
6393 +
6394 + *param_flags |= DM_PARAMS_MALLOC;
6395 +
6396 +- if (copy_from_user(dmi, user, param_kernel->data_size))
6397 +- goto bad;
6398 ++ /* Copy from param_kernel (which was already copied from user) */
6399 ++ memcpy(dmi, param_kernel, minimum_data_size);
6400 +
6401 +-data_copied:
6402 +- /*
6403 +- * Abort if something changed the ioctl data while it was being copied.
6404 +- */
6405 +- if (dmi->data_size != param_kernel->data_size) {
6406 +- DMERR("rejecting ioctl: data size modified while processing parameters");
6407 ++ if (copy_from_user(&dmi->data, (char __user *)user + minimum_data_size,
6408 ++ param_kernel->data_size - minimum_data_size))
6409 + goto bad;
6410 +- }
6411 +-
6412 ++data_copied:
6413 + /* Wipe the user buffer so we do not return it to userspace */
6414 + if (secure_data && clear_user(user, param_kernel->data_size))
6415 + goto bad;
6416 +diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
6417 +index 969954915566..fa68336560c3 100644
6418 +--- a/drivers/md/dm-zoned-metadata.c
6419 ++++ b/drivers/md/dm-zoned-metadata.c
6420 +@@ -99,7 +99,7 @@ struct dmz_mblock {
6421 + struct rb_node node;
6422 + struct list_head link;
6423 + sector_t no;
6424 +- atomic_t ref;
6425 ++ unsigned int ref;
6426 + unsigned long state;
6427 + struct page *page;
6428 + void *data;
6429 +@@ -296,7 +296,7 @@ static struct dmz_mblock *dmz_alloc_mblock(struct dmz_metadata *zmd,
6430 +
6431 + RB_CLEAR_NODE(&mblk->node);
6432 + INIT_LIST_HEAD(&mblk->link);
6433 +- atomic_set(&mblk->ref, 0);
6434 ++ mblk->ref = 0;
6435 + mblk->state = 0;
6436 + mblk->no = mblk_no;
6437 + mblk->data = page_address(mblk->page);
6438 +@@ -339,10 +339,11 @@ static void dmz_insert_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk)
6439 + }
6440 +
6441 + /*
6442 +- * Lookup a metadata block in the rbtree.
6443 ++ * Lookup a metadata block in the rbtree. If the block is found, increment
6444 ++ * its reference count.
6445 + */
6446 +-static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
6447 +- sector_t mblk_no)
6448 ++static struct dmz_mblock *dmz_get_mblock_fast(struct dmz_metadata *zmd,
6449 ++ sector_t mblk_no)
6450 + {
6451 + struct rb_root *root = &zmd->mblk_rbtree;
6452 + struct rb_node *node = root->rb_node;
6453 +@@ -350,8 +351,17 @@ static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
6454 +
6455 + while (node) {
6456 + mblk = container_of(node, struct dmz_mblock, node);
6457 +- if (mblk->no == mblk_no)
6458 ++ if (mblk->no == mblk_no) {
6459 ++ /*
6460 ++ * If this is the first reference to the block,
6461 ++ * remove it from the LRU list.
6462 ++ */
6463 ++ mblk->ref++;
6464 ++ if (mblk->ref == 1 &&
6465 ++ !test_bit(DMZ_META_DIRTY, &mblk->state))
6466 ++ list_del_init(&mblk->link);
6467 + return mblk;
6468 ++ }
6469 + node = (mblk->no < mblk_no) ? node->rb_left : node->rb_right;
6470 + }
6471 +
6472 +@@ -382,32 +392,47 @@ static void dmz_mblock_bio_end_io(struct bio *bio)
6473 + }
6474 +
6475 + /*
6476 +- * Read a metadata block from disk.
6477 ++ * Read an uncached metadata block from disk and add it to the cache.
6478 + */
6479 +-static struct dmz_mblock *dmz_fetch_mblock(struct dmz_metadata *zmd,
6480 +- sector_t mblk_no)
6481 ++static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd,
6482 ++ sector_t mblk_no)
6483 + {
6484 +- struct dmz_mblock *mblk;
6485 ++ struct dmz_mblock *mblk, *m;
6486 + sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no;
6487 + struct bio *bio;
6488 +
6489 +- /* Get block and insert it */
6490 ++ /* Get a new block and a BIO to read it */
6491 + mblk = dmz_alloc_mblock(zmd, mblk_no);
6492 + if (!mblk)
6493 + return NULL;
6494 +
6495 +- spin_lock(&zmd->mblk_lock);
6496 +- atomic_inc(&mblk->ref);
6497 +- set_bit(DMZ_META_READING, &mblk->state);
6498 +- dmz_insert_mblock(zmd, mblk);
6499 +- spin_unlock(&zmd->mblk_lock);
6500 +-
6501 + bio = bio_alloc(GFP_NOIO, 1);
6502 + if (!bio) {
6503 + dmz_free_mblock(zmd, mblk);
6504 + return NULL;
6505 + }
6506 +
6507 ++ spin_lock(&zmd->mblk_lock);
6508 ++
6509 ++ /*
6510 ++ * Make sure that another context did not start reading
6511 ++ * the block already.
6512 ++ */
6513 ++ m = dmz_get_mblock_fast(zmd, mblk_no);
6514 ++ if (m) {
6515 ++ spin_unlock(&zmd->mblk_lock);
6516 ++ dmz_free_mblock(zmd, mblk);
6517 ++ bio_put(bio);
6518 ++ return m;
6519 ++ }
6520 ++
6521 ++ mblk->ref++;
6522 ++ set_bit(DMZ_META_READING, &mblk->state);
6523 ++ dmz_insert_mblock(zmd, mblk);
6524 ++
6525 ++ spin_unlock(&zmd->mblk_lock);
6526 ++
6527 ++ /* Submit read BIO */
6528 + bio->bi_iter.bi_sector = dmz_blk2sect(block);
6529 + bio_set_dev(bio, zmd->dev->bdev);
6530 + bio->bi_private = mblk;
6531 +@@ -484,7 +509,8 @@ static void dmz_release_mblock(struct dmz_metadata *zmd,
6532 +
6533 + spin_lock(&zmd->mblk_lock);
6534 +
6535 +- if (atomic_dec_and_test(&mblk->ref)) {
6536 ++ mblk->ref--;
6537 ++ if (mblk->ref == 0) {
6538 + if (test_bit(DMZ_META_ERROR, &mblk->state)) {
6539 + rb_erase(&mblk->node, &zmd->mblk_rbtree);
6540 + dmz_free_mblock(zmd, mblk);
6541 +@@ -508,18 +534,12 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd,
6542 +
6543 + /* Check rbtree */
6544 + spin_lock(&zmd->mblk_lock);
6545 +- mblk = dmz_lookup_mblock(zmd, mblk_no);
6546 +- if (mblk) {
6547 +- /* Cache hit: remove block from LRU list */
6548 +- if (atomic_inc_return(&mblk->ref) == 1 &&
6549 +- !test_bit(DMZ_META_DIRTY, &mblk->state))
6550 +- list_del_init(&mblk->link);
6551 +- }
6552 ++ mblk = dmz_get_mblock_fast(zmd, mblk_no);
6553 + spin_unlock(&zmd->mblk_lock);
6554 +
6555 + if (!mblk) {
6556 + /* Cache miss: read the block from disk */
6557 +- mblk = dmz_fetch_mblock(zmd, mblk_no);
6558 ++ mblk = dmz_get_mblock_slow(zmd, mblk_no);
6559 + if (!mblk)
6560 + return ERR_PTR(-ENOMEM);
6561 + }
6562 +@@ -753,7 +773,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
6563 +
6564 + spin_lock(&zmd->mblk_lock);
6565 + clear_bit(DMZ_META_DIRTY, &mblk->state);
6566 +- if (atomic_read(&mblk->ref) == 0)
6567 ++ if (mblk->ref == 0)
6568 + list_add_tail(&mblk->link, &zmd->mblk_lru_list);
6569 + spin_unlock(&zmd->mblk_lock);
6570 + }
6571 +@@ -2308,7 +2328,7 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
6572 + mblk = list_first_entry(&zmd->mblk_dirty_list,
6573 + struct dmz_mblock, link);
6574 + dmz_dev_warn(zmd->dev, "mblock %llu still in dirty list (ref %u)",
6575 +- (u64)mblk->no, atomic_read(&mblk->ref));
6576 ++ (u64)mblk->no, mblk->ref);
6577 + list_del_init(&mblk->link);
6578 + rb_erase(&mblk->node, &zmd->mblk_rbtree);
6579 + dmz_free_mblock(zmd, mblk);
6580 +@@ -2326,8 +2346,8 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
6581 + root = &zmd->mblk_rbtree;
6582 + rbtree_postorder_for_each_entry_safe(mblk, next, root, node) {
6583 + dmz_dev_warn(zmd->dev, "mblock %llu ref %u still in rbtree",
6584 +- (u64)mblk->no, atomic_read(&mblk->ref));
6585 +- atomic_set(&mblk->ref, 0);
6586 ++ (u64)mblk->no, mblk->ref);
6587 ++ mblk->ref = 0;
6588 + dmz_free_mblock(zmd, mblk);
6589 + }
6590 +
6591 +diff --git a/drivers/md/md.c b/drivers/md/md.c
6592 +index 994aed2f9dff..71665e2c30eb 100644
6593 +--- a/drivers/md/md.c
6594 ++++ b/drivers/md/md.c
6595 +@@ -455,10 +455,11 @@ static void md_end_flush(struct bio *fbio)
6596 + rdev_dec_pending(rdev, mddev);
6597 +
6598 + if (atomic_dec_and_test(&fi->flush_pending)) {
6599 +- if (bio->bi_iter.bi_size == 0)
6600 ++ if (bio->bi_iter.bi_size == 0) {
6601 + /* an empty barrier - all done */
6602 + bio_endio(bio);
6603 +- else {
6604 ++ mempool_free(fi, mddev->flush_pool);
6605 ++ } else {
6606 + INIT_WORK(&fi->flush_work, submit_flushes);
6607 + queue_work(md_wq, &fi->flush_work);
6608 + }
6609 +@@ -512,10 +513,11 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
6610 + rcu_read_unlock();
6611 +
6612 + if (atomic_dec_and_test(&fi->flush_pending)) {
6613 +- if (bio->bi_iter.bi_size == 0)
6614 ++ if (bio->bi_iter.bi_size == 0) {
6615 + /* an empty barrier - all done */
6616 + bio_endio(bio);
6617 +- else {
6618 ++ mempool_free(fi, mddev->flush_pool);
6619 ++ } else {
6620 + INIT_WORK(&fi->flush_work, submit_flushes);
6621 + queue_work(md_wq, &fi->flush_work);
6622 + }
6623 +@@ -5907,14 +5909,6 @@ static void __md_stop(struct mddev *mddev)
6624 + mddev->to_remove = &md_redundancy_group;
6625 + module_put(pers->owner);
6626 + clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
6627 +-}
6628 +-
6629 +-void md_stop(struct mddev *mddev)
6630 +-{
6631 +- /* stop the array and free an attached data structures.
6632 +- * This is called from dm-raid
6633 +- */
6634 +- __md_stop(mddev);
6635 + if (mddev->flush_bio_pool) {
6636 + mempool_destroy(mddev->flush_bio_pool);
6637 + mddev->flush_bio_pool = NULL;
6638 +@@ -5923,6 +5917,14 @@ void md_stop(struct mddev *mddev)
6639 + mempool_destroy(mddev->flush_pool);
6640 + mddev->flush_pool = NULL;
6641 + }
6642 ++}
6643 ++
6644 ++void md_stop(struct mddev *mddev)
6645 ++{
6646 ++ /* stop the array and free an attached data structures.
6647 ++ * This is called from dm-raid
6648 ++ */
6649 ++ __md_stop(mddev);
6650 + bioset_exit(&mddev->bio_set);
6651 + bioset_exit(&mddev->sync_set);
6652 + }
6653 +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
6654 +index 8e05c1092aef..c9362463d266 100644
6655 +--- a/drivers/md/raid1.c
6656 ++++ b/drivers/md/raid1.c
6657 +@@ -1736,6 +1736,7 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
6658 + */
6659 + if (rdev->saved_raid_disk >= 0 &&
6660 + rdev->saved_raid_disk >= first &&
6661 ++ rdev->saved_raid_disk < conf->raid_disks &&
6662 + conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
6663 + first = last = rdev->saved_raid_disk;
6664 +
6665 +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
6666 +index 8c93d44a052c..e555221fb75b 100644
6667 +--- a/drivers/md/raid10.c
6668 ++++ b/drivers/md/raid10.c
6669 +@@ -1808,6 +1808,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
6670 + first = last = rdev->raid_disk;
6671 +
6672 + if (rdev->saved_raid_disk >= first &&
6673 ++ rdev->saved_raid_disk < conf->geo.raid_disks &&
6674 + conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
6675 + mirror = rdev->saved_raid_disk;
6676 + else
6677 +diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c
6678 +index b7fad0ec5710..fecba7ddcd00 100644
6679 +--- a/drivers/media/cec/cec-adap.c
6680 ++++ b/drivers/media/cec/cec-adap.c
6681 +@@ -325,7 +325,7 @@ static void cec_data_completed(struct cec_data *data)
6682 + *
6683 + * This function is called with adap->lock held.
6684 + */
6685 +-static void cec_data_cancel(struct cec_data *data)
6686 ++static void cec_data_cancel(struct cec_data *data, u8 tx_status)
6687 + {
6688 + /*
6689 + * It's either the current transmit, or it is a pending
6690 +@@ -340,13 +340,11 @@ static void cec_data_cancel(struct cec_data *data)
6691 + }
6692 +
6693 + if (data->msg.tx_status & CEC_TX_STATUS_OK) {
6694 +- /* Mark the canceled RX as a timeout */
6695 + data->msg.rx_ts = ktime_get_ns();
6696 +- data->msg.rx_status = CEC_RX_STATUS_TIMEOUT;
6697 ++ data->msg.rx_status = CEC_RX_STATUS_ABORTED;
6698 + } else {
6699 +- /* Mark the canceled TX as an error */
6700 + data->msg.tx_ts = ktime_get_ns();
6701 +- data->msg.tx_status |= CEC_TX_STATUS_ERROR |
6702 ++ data->msg.tx_status |= tx_status |
6703 + CEC_TX_STATUS_MAX_RETRIES;
6704 + data->msg.tx_error_cnt++;
6705 + data->attempts = 0;
6706 +@@ -374,15 +372,15 @@ static void cec_flush(struct cec_adapter *adap)
6707 + while (!list_empty(&adap->transmit_queue)) {
6708 + data = list_first_entry(&adap->transmit_queue,
6709 + struct cec_data, list);
6710 +- cec_data_cancel(data);
6711 ++ cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
6712 + }
6713 + if (adap->transmitting)
6714 +- cec_data_cancel(adap->transmitting);
6715 ++ cec_data_cancel(adap->transmitting, CEC_TX_STATUS_ABORTED);
6716 +
6717 + /* Cancel the pending timeout work. */
6718 + list_for_each_entry_safe(data, n, &adap->wait_queue, list) {
6719 + if (cancel_delayed_work(&data->work))
6720 +- cec_data_cancel(data);
6721 ++ cec_data_cancel(data, CEC_TX_STATUS_OK);
6722 + /*
6723 + * If cancel_delayed_work returned false, then
6724 + * the cec_wait_timeout function is running,
6725 +@@ -458,12 +456,13 @@ int cec_thread_func(void *_adap)
6726 + * so much traffic on the bus that the adapter was
6727 + * unable to transmit for CEC_XFER_TIMEOUT_MS (2.1s).
6728 + */
6729 +- dprintk(1, "%s: message %*ph timed out\n", __func__,
6730 ++ pr_warn("cec-%s: message %*ph timed out\n", adap->name,
6731 + adap->transmitting->msg.len,
6732 + adap->transmitting->msg.msg);
6733 + adap->tx_timeouts++;
6734 + /* Just give up on this. */
6735 +- cec_data_cancel(adap->transmitting);
6736 ++ cec_data_cancel(adap->transmitting,
6737 ++ CEC_TX_STATUS_TIMEOUT);
6738 + goto unlock;
6739 + }
6740 +
6741 +@@ -498,9 +497,11 @@ int cec_thread_func(void *_adap)
6742 + if (data->attempts) {
6743 + /* should be >= 3 data bit periods for a retry */
6744 + signal_free_time = CEC_SIGNAL_FREE_TIME_RETRY;
6745 +- } else if (data->new_initiator) {
6746 ++ } else if (adap->last_initiator !=
6747 ++ cec_msg_initiator(&data->msg)) {
6748 + /* should be >= 5 data bit periods for new initiator */
6749 + signal_free_time = CEC_SIGNAL_FREE_TIME_NEW_INITIATOR;
6750 ++ adap->last_initiator = cec_msg_initiator(&data->msg);
6751 + } else {
6752 + /*
6753 + * should be >= 7 data bit periods for sending another
6754 +@@ -514,7 +515,7 @@ int cec_thread_func(void *_adap)
6755 + /* Tell the adapter to transmit, cancel on error */
6756 + if (adap->ops->adap_transmit(adap, data->attempts,
6757 + signal_free_time, &data->msg))
6758 +- cec_data_cancel(data);
6759 ++ cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
6760 +
6761 + unlock:
6762 + mutex_unlock(&adap->lock);
6763 +@@ -685,9 +686,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
6764 + struct cec_fh *fh, bool block)
6765 + {
6766 + struct cec_data *data;
6767 +- u8 last_initiator = 0xff;
6768 +- unsigned int timeout;
6769 +- int res = 0;
6770 +
6771 + msg->rx_ts = 0;
6772 + msg->tx_ts = 0;
6773 +@@ -797,23 +795,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
6774 + data->adap = adap;
6775 + data->blocking = block;
6776 +
6777 +- /*
6778 +- * Determine if this message follows a message from the same
6779 +- * initiator. Needed to determine the free signal time later on.
6780 +- */
6781 +- if (msg->len > 1) {
6782 +- if (!(list_empty(&adap->transmit_queue))) {
6783 +- const struct cec_data *last;
6784 +-
6785 +- last = list_last_entry(&adap->transmit_queue,
6786 +- const struct cec_data, list);
6787 +- last_initiator = cec_msg_initiator(&last->msg);
6788 +- } else if (adap->transmitting) {
6789 +- last_initiator =
6790 +- cec_msg_initiator(&adap->transmitting->msg);
6791 +- }
6792 +- }
6793 +- data->new_initiator = last_initiator != cec_msg_initiator(msg);
6794 + init_completion(&data->c);
6795 + INIT_DELAYED_WORK(&data->work, cec_wait_timeout);
6796 +
6797 +@@ -829,48 +810,23 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
6798 + if (!block)
6799 + return 0;
6800 +
6801 +- /*
6802 +- * If we don't get a completion before this time something is really
6803 +- * wrong and we time out.
6804 +- */
6805 +- timeout = CEC_XFER_TIMEOUT_MS;
6806 +- /* Add the requested timeout if we have to wait for a reply as well */
6807 +- if (msg->timeout)
6808 +- timeout += msg->timeout;
6809 +-
6810 + /*
6811 + * Release the lock and wait, retake the lock afterwards.
6812 + */
6813 + mutex_unlock(&adap->lock);
6814 +- res = wait_for_completion_killable_timeout(&data->c,
6815 +- msecs_to_jiffies(timeout));
6816 ++ wait_for_completion_killable(&data->c);
6817 ++ if (!data->completed)
6818 ++ cancel_delayed_work_sync(&data->work);
6819 + mutex_lock(&adap->lock);
6820 +
6821 +- if (data->completed) {
6822 +- /* The transmit completed (possibly with an error) */
6823 +- *msg = data->msg;
6824 +- kfree(data);
6825 +- return 0;
6826 +- }
6827 +- /*
6828 +- * The wait for completion timed out or was interrupted, so mark this
6829 +- * as non-blocking and disconnect from the filehandle since it is
6830 +- * still 'in flight'. When it finally completes it will just drop the
6831 +- * result silently.
6832 +- */
6833 +- data->blocking = false;
6834 +- if (data->fh)
6835 +- list_del(&data->xfer_list);
6836 +- data->fh = NULL;
6837 ++ /* Cancel the transmit if it was interrupted */
6838 ++ if (!data->completed)
6839 ++ cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
6840 +
6841 +- if (res == 0) { /* timed out */
6842 +- /* Check if the reply or the transmit failed */
6843 +- if (msg->timeout && (msg->tx_status & CEC_TX_STATUS_OK))
6844 +- msg->rx_status = CEC_RX_STATUS_TIMEOUT;
6845 +- else
6846 +- msg->tx_status = CEC_TX_STATUS_MAX_RETRIES;
6847 +- }
6848 +- return res > 0 ? 0 : res;
6849 ++ /* The transmit completed (possibly with an error) */
6850 ++ *msg = data->msg;
6851 ++ kfree(data);
6852 ++ return 0;
6853 + }
6854 +
6855 + /* Helper function to be used by drivers and this framework. */
6856 +@@ -1028,6 +984,8 @@ void cec_received_msg_ts(struct cec_adapter *adap,
6857 + mutex_lock(&adap->lock);
6858 + dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg);
6859 +
6860 ++ adap->last_initiator = 0xff;
6861 ++
6862 + /* Check if this message was for us (directed or broadcast). */
6863 + if (!cec_msg_is_broadcast(msg))
6864 + valid_la = cec_has_log_addr(adap, msg_dest);
6865 +@@ -1490,6 +1448,8 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
6866 + }
6867 +
6868 + mutex_lock(&adap->devnode.lock);
6869 ++ adap->last_initiator = 0xff;
6870 ++
6871 + if ((adap->needs_hpd || list_empty(&adap->devnode.fhs)) &&
6872 + adap->ops->adap_enable(adap, true)) {
6873 + mutex_unlock(&adap->devnode.lock);
6874 +diff --git a/drivers/media/cec/cec-api.c b/drivers/media/cec/cec-api.c
6875 +index 10b67fc40318..0199765fbae6 100644
6876 +--- a/drivers/media/cec/cec-api.c
6877 ++++ b/drivers/media/cec/cec-api.c
6878 +@@ -101,6 +101,23 @@ static long cec_adap_g_phys_addr(struct cec_adapter *adap,
6879 + return 0;
6880 + }
6881 +
6882 ++static int cec_validate_phys_addr(u16 phys_addr)
6883 ++{
6884 ++ int i;
6885 ++
6886 ++ if (phys_addr == CEC_PHYS_ADDR_INVALID)
6887 ++ return 0;
6888 ++ for (i = 0; i < 16; i += 4)
6889 ++ if (phys_addr & (0xf << i))
6890 ++ break;
6891 ++ if (i == 16)
6892 ++ return 0;
6893 ++ for (i += 4; i < 16; i += 4)
6894 ++ if ((phys_addr & (0xf << i)) == 0)
6895 ++ return -EINVAL;
6896 ++ return 0;
6897 ++}
6898 ++
6899 + static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
6900 + bool block, __u16 __user *parg)
6901 + {
6902 +@@ -112,7 +129,7 @@ static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
6903 + if (copy_from_user(&phys_addr, parg, sizeof(phys_addr)))
6904 + return -EFAULT;
6905 +
6906 +- err = cec_phys_addr_validate(phys_addr, NULL, NULL);
6907 ++ err = cec_validate_phys_addr(phys_addr);
6908 + if (err)
6909 + return err;
6910 + mutex_lock(&adap->lock);
6911 +diff --git a/drivers/media/cec/cec-edid.c b/drivers/media/cec/cec-edid.c
6912 +index ec72ac1c0b91..f587e8eaefd8 100644
6913 +--- a/drivers/media/cec/cec-edid.c
6914 ++++ b/drivers/media/cec/cec-edid.c
6915 +@@ -10,66 +10,6 @@
6916 + #include <linux/types.h>
6917 + #include <media/cec.h>
6918 +
6919 +-/*
6920 +- * This EDID is expected to be a CEA-861 compliant, which means that there are
6921 +- * at least two blocks and one or more of the extensions blocks are CEA-861
6922 +- * blocks.
6923 +- *
6924 +- * The returned location is guaranteed to be < size - 1.
6925 +- */
6926 +-static unsigned int cec_get_edid_spa_location(const u8 *edid, unsigned int size)
6927 +-{
6928 +- unsigned int blocks = size / 128;
6929 +- unsigned int block;
6930 +- u8 d;
6931 +-
6932 +- /* Sanity check: at least 2 blocks and a multiple of the block size */
6933 +- if (blocks < 2 || size % 128)
6934 +- return 0;
6935 +-
6936 +- /*
6937 +- * If there are fewer extension blocks than the size, then update
6938 +- * 'blocks'. It is allowed to have more extension blocks than the size,
6939 +- * since some hardware can only read e.g. 256 bytes of the EDID, even
6940 +- * though more blocks are present. The first CEA-861 extension block
6941 +- * should normally be in block 1 anyway.
6942 +- */
6943 +- if (edid[0x7e] + 1 < blocks)
6944 +- blocks = edid[0x7e] + 1;
6945 +-
6946 +- for (block = 1; block < blocks; block++) {
6947 +- unsigned int offset = block * 128;
6948 +-
6949 +- /* Skip any non-CEA-861 extension blocks */
6950 +- if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
6951 +- continue;
6952 +-
6953 +- /* search Vendor Specific Data Block (tag 3) */
6954 +- d = edid[offset + 2] & 0x7f;
6955 +- /* Check if there are Data Blocks */
6956 +- if (d <= 4)
6957 +- continue;
6958 +- if (d > 4) {
6959 +- unsigned int i = offset + 4;
6960 +- unsigned int end = offset + d;
6961 +-
6962 +- /* Note: 'end' is always < 'size' */
6963 +- do {
6964 +- u8 tag = edid[i] >> 5;
6965 +- u8 len = edid[i] & 0x1f;
6966 +-
6967 +- if (tag == 3 && len >= 5 && i + len <= end &&
6968 +- edid[i + 1] == 0x03 &&
6969 +- edid[i + 2] == 0x0c &&
6970 +- edid[i + 3] == 0x00)
6971 +- return i + 4;
6972 +- i += len + 1;
6973 +- } while (i < end);
6974 +- }
6975 +- }
6976 +- return 0;
6977 +-}
6978 +-
6979 + u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
6980 + unsigned int *offset)
6981 + {
6982 +diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
6983 +index 3a3dc23c560c..a4341205c197 100644
6984 +--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
6985 ++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
6986 +@@ -602,14 +602,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
6987 + [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
6988 + [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
6989 + [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
6990 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
6991 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
6992 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
6993 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
6994 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
6995 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
6996 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
6997 +- [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
6998 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
6999 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
7000 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
7001 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
7002 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
7003 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
7004 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
7005 ++ [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7006 + [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7007 + [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
7008 + [V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
7009 +@@ -658,14 +658,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7010 + [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
7011 + [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
7012 + [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
7013 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7014 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
7015 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
7016 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
7017 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
7018 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
7019 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
7020 +- [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7021 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7022 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
7023 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
7024 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
7025 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
7026 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
7027 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
7028 ++ [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7029 + [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7030 + [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
7031 + [V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
7032 +@@ -714,14 +714,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7033 + [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
7034 + [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
7035 + [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
7036 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7037 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
7038 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
7039 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
7040 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
7041 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
7042 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
7043 +- [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7044 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7045 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
7046 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
7047 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
7048 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
7049 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
7050 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
7051 ++ [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7052 + [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7053 + [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
7054 + [V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
7055 +@@ -770,14 +770,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7056 + [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][5] = { 2599, 901, 909 },
7057 + [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][6] = { 991, 0, 2966 },
7058 + [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
7059 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7060 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][1] = { 2989, 3120, 1180 },
7061 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][2] = { 1913, 3011, 3009 },
7062 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][3] = { 1836, 3099, 1105 },
7063 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][4] = { 2627, 413, 2966 },
7064 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][5] = { 2576, 943, 951 },
7065 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][6] = { 1026, 0, 2942 },
7066 +- [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7067 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7068 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][1] = { 2989, 3120, 1180 },
7069 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][2] = { 1913, 3011, 3009 },
7070 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][3] = { 1836, 3099, 1105 },
7071 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][4] = { 2627, 413, 2966 },
7072 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][5] = { 2576, 943, 951 },
7073 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][6] = { 1026, 0, 2942 },
7074 ++ [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7075 + [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7076 + [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2879, 3022, 874 },
7077 + [V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][2] = { 1688, 2903, 2901 },
7078 +@@ -826,14 +826,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7079 + [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][5] = { 3001, 800, 799 },
7080 + [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3071 },
7081 + [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
7082 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7083 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 776 },
7084 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][2] = { 1068, 3033, 3033 },
7085 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][3] = { 1068, 3033, 776 },
7086 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][4] = { 2977, 851, 3048 },
7087 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][5] = { 2977, 851, 851 },
7088 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3048 },
7089 +- [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7090 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7091 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 776 },
7092 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][2] = { 1068, 3033, 3033 },
7093 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][3] = { 1068, 3033, 776 },
7094 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][4] = { 2977, 851, 3048 },
7095 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][5] = { 2977, 851, 851 },
7096 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3048 },
7097 ++ [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7098 + [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7099 + [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 423 },
7100 + [V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][2] = { 749, 2926, 2926 },
7101 +@@ -882,14 +882,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7102 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
7103 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
7104 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
7105 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7106 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
7107 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
7108 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
7109 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
7110 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
7111 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
7112 +- [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7113 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7114 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
7115 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
7116 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
7117 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
7118 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
7119 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
7120 ++ [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7121 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7122 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
7123 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
7124 +@@ -922,62 +922,62 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7125 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1812, 886, 886 },
7126 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1812 },
7127 + [V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
7128 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
7129 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
7130 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
7131 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
7132 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
7133 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
7134 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
7135 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
7136 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
7137 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
7138 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
7139 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
7140 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
7141 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
7142 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
7143 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
7144 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7145 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 1063 },
7146 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 1828, 3033, 3033 },
7147 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 1828, 3033, 1063 },
7148 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 2633, 851, 2979 },
7149 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 2633, 851, 851 },
7150 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 2979 },
7151 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7152 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7153 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
7154 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
7155 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
7156 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
7157 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
7158 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
7159 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
7160 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
7161 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
7162 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
7163 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
7164 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
7165 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
7166 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
7167 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
7168 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
7169 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
7170 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
7171 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
7172 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
7173 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
7174 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
7175 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
7176 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
7177 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
7178 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
7179 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
7180 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
7181 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
7182 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
7183 +- [V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
7184 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
7185 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
7186 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
7187 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
7188 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
7189 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
7190 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
7191 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
7192 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
7193 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
7194 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
7195 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
7196 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
7197 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
7198 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
7199 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
7200 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7201 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 1063 },
7202 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][2] = { 1828, 3033, 3033 },
7203 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][3] = { 1828, 3033, 1063 },
7204 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][4] = { 2633, 851, 2979 },
7205 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][5] = { 2633, 851, 851 },
7206 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 2979 },
7207 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7208 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7209 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
7210 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
7211 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
7212 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
7213 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
7214 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
7215 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
7216 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
7217 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
7218 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
7219 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
7220 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
7221 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
7222 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
7223 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
7224 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
7225 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
7226 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
7227 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
7228 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
7229 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
7230 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
7231 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
7232 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
7233 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
7234 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
7235 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
7236 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
7237 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
7238 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
7239 ++ [V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
7240 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
7241 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][1] = { 2877, 2923, 1058 },
7242 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][2] = { 1837, 2840, 2916 },
7243 +@@ -994,14 +994,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7244 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][5] = { 2517, 1159, 900 },
7245 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][6] = { 1042, 870, 2917 },
7246 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
7247 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7248 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][1] = { 2976, 3018, 1315 },
7249 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][2] = { 2024, 2942, 3011 },
7250 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][3] = { 1930, 2926, 1256 },
7251 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][4] = { 2563, 1227, 2916 },
7252 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][5] = { 2494, 1183, 943 },
7253 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][6] = { 1073, 916, 2894 },
7254 +- [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7255 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7256 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][1] = { 2976, 3018, 1315 },
7257 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][2] = { 2024, 2942, 3011 },
7258 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][3] = { 1930, 2926, 1256 },
7259 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][4] = { 2563, 1227, 2916 },
7260 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][5] = { 2494, 1183, 943 },
7261 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][6] = { 1073, 916, 2894 },
7262 ++ [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7263 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7264 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][1] = { 2864, 2910, 1024 },
7265 + [V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][2] = { 1811, 2826, 2903 },
7266 +@@ -1050,14 +1050,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
7267 + [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][5] = { 2880, 998, 902 },
7268 + [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][6] = { 816, 823, 2940 },
7269 + [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
7270 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
7271 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][1] = { 3029, 3028, 1255 },
7272 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][2] = { 1406, 2988, 3011 },
7273 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][3] = { 1398, 2983, 1190 },
7274 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][4] = { 2860, 1050, 2939 },
7275 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][5] = { 2857, 1033, 945 },
7276 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][6] = { 866, 873, 2916 },
7277 +- [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
7278 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
7279 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][1] = { 3029, 3028, 1255 },
7280 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][2] = { 1406, 2988, 3011 },
7281 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][3] = { 1398, 2983, 1190 },
7282 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][4] = { 2860, 1050, 2939 },
7283 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][5] = { 2857, 1033, 945 },
7284 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][6] = { 866, 873, 2916 },
7285 ++ [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
7286 + [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
7287 + [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][1] = { 2923, 2921, 957 },
7288 + [V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][2] = { 1125, 2877, 2902 },
7289 +@@ -1128,7 +1128,7 @@ static const double rec709_to_240m[3][3] = {
7290 + { 0.0016327, 0.0044133, 0.9939540 },
7291 + };
7292 +
7293 +-static const double rec709_to_adobergb[3][3] = {
7294 ++static const double rec709_to_oprgb[3][3] = {
7295 + { 0.7151627, 0.2848373, -0.0000000 },
7296 + { 0.0000000, 1.0000000, 0.0000000 },
7297 + { -0.0000000, 0.0411705, 0.9588295 },
7298 +@@ -1195,7 +1195,7 @@ static double transfer_rec709_to_rgb(double v)
7299 + return (v < 0.081) ? v / 4.5 : pow((v + 0.099) / 1.099, 1.0 / 0.45);
7300 + }
7301 +
7302 +-static double transfer_rgb_to_adobergb(double v)
7303 ++static double transfer_rgb_to_oprgb(double v)
7304 + {
7305 + return pow(v, 1.0 / 2.19921875);
7306 + }
7307 +@@ -1251,8 +1251,8 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
7308 + case V4L2_COLORSPACE_470_SYSTEM_M:
7309 + mult_matrix(r, g, b, rec709_to_ntsc1953);
7310 + break;
7311 +- case V4L2_COLORSPACE_ADOBERGB:
7312 +- mult_matrix(r, g, b, rec709_to_adobergb);
7313 ++ case V4L2_COLORSPACE_OPRGB:
7314 ++ mult_matrix(r, g, b, rec709_to_oprgb);
7315 + break;
7316 + case V4L2_COLORSPACE_BT2020:
7317 + mult_matrix(r, g, b, rec709_to_bt2020);
7318 +@@ -1284,10 +1284,10 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
7319 + *g = transfer_rgb_to_srgb(*g);
7320 + *b = transfer_rgb_to_srgb(*b);
7321 + break;
7322 +- case V4L2_XFER_FUNC_ADOBERGB:
7323 +- *r = transfer_rgb_to_adobergb(*r);
7324 +- *g = transfer_rgb_to_adobergb(*g);
7325 +- *b = transfer_rgb_to_adobergb(*b);
7326 ++ case V4L2_XFER_FUNC_OPRGB:
7327 ++ *r = transfer_rgb_to_oprgb(*r);
7328 ++ *g = transfer_rgb_to_oprgb(*g);
7329 ++ *b = transfer_rgb_to_oprgb(*b);
7330 + break;
7331 + case V4L2_XFER_FUNC_DCI_P3:
7332 + *r = transfer_rgb_to_dcip3(*r);
7333 +@@ -1321,7 +1321,7 @@ int main(int argc, char **argv)
7334 + V4L2_COLORSPACE_470_SYSTEM_BG,
7335 + 0,
7336 + V4L2_COLORSPACE_SRGB,
7337 +- V4L2_COLORSPACE_ADOBERGB,
7338 ++ V4L2_COLORSPACE_OPRGB,
7339 + V4L2_COLORSPACE_BT2020,
7340 + 0,
7341 + V4L2_COLORSPACE_DCI_P3,
7342 +@@ -1336,7 +1336,7 @@ int main(int argc, char **argv)
7343 + "V4L2_COLORSPACE_470_SYSTEM_BG",
7344 + "",
7345 + "V4L2_COLORSPACE_SRGB",
7346 +- "V4L2_COLORSPACE_ADOBERGB",
7347 ++ "V4L2_COLORSPACE_OPRGB",
7348 + "V4L2_COLORSPACE_BT2020",
7349 + "",
7350 + "V4L2_COLORSPACE_DCI_P3",
7351 +@@ -1345,7 +1345,7 @@ int main(int argc, char **argv)
7352 + "",
7353 + "V4L2_XFER_FUNC_709",
7354 + "V4L2_XFER_FUNC_SRGB",
7355 +- "V4L2_XFER_FUNC_ADOBERGB",
7356 ++ "V4L2_XFER_FUNC_OPRGB",
7357 + "V4L2_XFER_FUNC_SMPTE240M",
7358 + "V4L2_XFER_FUNC_NONE",
7359 + "V4L2_XFER_FUNC_DCI_P3",
7360 +diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
7361 +index abd4c788dffd..f40ab5704bf0 100644
7362 +--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
7363 ++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
7364 +@@ -1770,7 +1770,7 @@ typedef struct { u16 __; u8 _; } __packed x24;
7365 + pos[7] = (chr & (0x01 << 0) ? fg : bg); \
7366 + } \
7367 + \
7368 +- pos += (tpg->hflip ? -8 : 8) / hdiv; \
7369 ++ pos += (tpg->hflip ? -8 : 8) / (int)hdiv; \
7370 + } \
7371 + } \
7372 + } while (0)
7373 +diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
7374 +index 5731751d3f2a..cd6e7372ef9c 100644
7375 +--- a/drivers/media/i2c/adv7511.c
7376 ++++ b/drivers/media/i2c/adv7511.c
7377 +@@ -1355,10 +1355,10 @@ static int adv7511_set_fmt(struct v4l2_subdev *sd,
7378 + state->xfer_func = format->format.xfer_func;
7379 +
7380 + switch (format->format.colorspace) {
7381 +- case V4L2_COLORSPACE_ADOBERGB:
7382 ++ case V4L2_COLORSPACE_OPRGB:
7383 + c = HDMI_COLORIMETRY_EXTENDED;
7384 +- ec = y ? HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601 :
7385 +- HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB;
7386 ++ ec = y ? HDMI_EXTENDED_COLORIMETRY_OPYCC_601 :
7387 ++ HDMI_EXTENDED_COLORIMETRY_OPRGB;
7388 + break;
7389 + case V4L2_COLORSPACE_SMPTE170M:
7390 + c = y ? HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE;
7391 +diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
7392 +index cac2081e876e..2437f72f7caf 100644
7393 +--- a/drivers/media/i2c/adv7604.c
7394 ++++ b/drivers/media/i2c/adv7604.c
7395 +@@ -2284,8 +2284,10 @@ static int adv76xx_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
7396 + state->aspect_ratio.numerator = 16;
7397 + state->aspect_ratio.denominator = 9;
7398 +
7399 +- if (!state->edid.present)
7400 ++ if (!state->edid.present) {
7401 + state->edid.blocks = 0;
7402 ++ cec_phys_addr_invalidate(state->cec_adap);
7403 ++ }
7404 +
7405 + v4l2_dbg(2, debug, sd, "%s: clear EDID pad %d, edid.present = 0x%x\n",
7406 + __func__, edid->pad, state->edid.present);
7407 +@@ -2474,7 +2476,7 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
7408 + "YCbCr Bt.601 (16-235)", "YCbCr Bt.709 (16-235)",
7409 + "xvYCC Bt.601", "xvYCC Bt.709",
7410 + "YCbCr Bt.601 (0-255)", "YCbCr Bt.709 (0-255)",
7411 +- "sYCC", "Adobe YCC 601", "AdobeRGB", "invalid", "invalid",
7412 ++ "sYCC", "opYCC 601", "opRGB", "invalid", "invalid",
7413 + "invalid", "invalid", "invalid"
7414 + };
7415 + static const char * const rgb_quantization_range_txt[] = {
7416 +diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
7417 +index fddac32e5051..ceca6be13ca9 100644
7418 +--- a/drivers/media/i2c/adv7842.c
7419 ++++ b/drivers/media/i2c/adv7842.c
7420 +@@ -786,8 +786,10 @@ static int edid_write_hdmi_segment(struct v4l2_subdev *sd, u8 port)
7421 + /* Disable I2C access to internal EDID ram from HDMI DDC ports */
7422 + rep_write_and_or(sd, 0x77, 0xf3, 0x00);
7423 +
7424 +- if (!state->hdmi_edid.present)
7425 ++ if (!state->hdmi_edid.present) {
7426 ++ cec_phys_addr_invalidate(state->cec_adap);
7427 + return 0;
7428 ++ }
7429 +
7430 + pa = cec_get_edid_phys_addr(edid, 256, &spa_loc);
7431 + err = cec_phys_addr_validate(pa, &pa, NULL);
7432 +diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
7433 +index 3474ef832c1e..480edeebac60 100644
7434 +--- a/drivers/media/i2c/ov7670.c
7435 ++++ b/drivers/media/i2c/ov7670.c
7436 +@@ -1810,17 +1810,24 @@ static int ov7670_probe(struct i2c_client *client,
7437 + info->pclk_hb_disable = true;
7438 + }
7439 +
7440 +- info->clk = devm_clk_get(&client->dev, "xclk");
7441 +- if (IS_ERR(info->clk))
7442 +- return PTR_ERR(info->clk);
7443 +- ret = clk_prepare_enable(info->clk);
7444 +- if (ret)
7445 +- return ret;
7446 ++ info->clk = devm_clk_get(&client->dev, "xclk"); /* optional */
7447 ++ if (IS_ERR(info->clk)) {
7448 ++ ret = PTR_ERR(info->clk);
7449 ++ if (ret == -ENOENT)
7450 ++ info->clk = NULL;
7451 ++ else
7452 ++ return ret;
7453 ++ }
7454 ++ if (info->clk) {
7455 ++ ret = clk_prepare_enable(info->clk);
7456 ++ if (ret)
7457 ++ return ret;
7458 +
7459 +- info->clock_speed = clk_get_rate(info->clk) / 1000000;
7460 +- if (info->clock_speed < 10 || info->clock_speed > 48) {
7461 +- ret = -EINVAL;
7462 +- goto clk_disable;
7463 ++ info->clock_speed = clk_get_rate(info->clk) / 1000000;
7464 ++ if (info->clock_speed < 10 || info->clock_speed > 48) {
7465 ++ ret = -EINVAL;
7466 ++ goto clk_disable;
7467 ++ }
7468 + }
7469 +
7470 + ret = ov7670_init_gpio(client, info);
7471 +diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
7472 +index 393bbbbbaad7..865639587a97 100644
7473 +--- a/drivers/media/i2c/tc358743.c
7474 ++++ b/drivers/media/i2c/tc358743.c
7475 +@@ -1243,9 +1243,9 @@ static int tc358743_log_status(struct v4l2_subdev *sd)
7476 + u8 vi_status3 = i2c_rd8(sd, VI_STATUS3);
7477 + const int deep_color_mode[4] = { 8, 10, 12, 16 };
7478 + static const char * const input_color_space[] = {
7479 +- "RGB", "YCbCr 601", "Adobe RGB", "YCbCr 709", "NA (4)",
7480 ++ "RGB", "YCbCr 601", "opRGB", "YCbCr 709", "NA (4)",
7481 + "xvYCC 601", "NA(6)", "xvYCC 709", "NA(8)", "sYCC601",
7482 +- "NA(10)", "NA(11)", "NA(12)", "Adobe YCC 601"};
7483 ++ "NA(10)", "NA(11)", "NA(12)", "opYCC 601"};
7484 +
7485 + v4l2_info(sd, "-----Chip status-----\n");
7486 + v4l2_info(sd, "Chip ID: 0x%02x\n",
7487 +diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
7488 +index 76e6bed5a1da..805bd9c65940 100644
7489 +--- a/drivers/media/i2c/tvp5150.c
7490 ++++ b/drivers/media/i2c/tvp5150.c
7491 +@@ -1534,7 +1534,7 @@ static int tvp5150_probe(struct i2c_client *c,
7492 + 27000000, 1, 27000000);
7493 + v4l2_ctrl_new_std_menu_items(&core->hdl, &tvp5150_ctrl_ops,
7494 + V4L2_CID_TEST_PATTERN,
7495 +- ARRAY_SIZE(tvp5150_test_patterns),
7496 ++ ARRAY_SIZE(tvp5150_test_patterns) - 1,
7497 + 0, 0, tvp5150_test_patterns);
7498 + sd->ctrl_handler = &core->hdl;
7499 + if (core->hdl.error) {
7500 +diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
7501 +index 477c80a4d44c..cd4c8230563c 100644
7502 +--- a/drivers/media/platform/vivid/vivid-core.h
7503 ++++ b/drivers/media/platform/vivid/vivid-core.h
7504 +@@ -111,7 +111,7 @@ enum vivid_colorspace {
7505 + VIVID_CS_170M,
7506 + VIVID_CS_709,
7507 + VIVID_CS_SRGB,
7508 +- VIVID_CS_ADOBERGB,
7509 ++ VIVID_CS_OPRGB,
7510 + VIVID_CS_2020,
7511 + VIVID_CS_DCI_P3,
7512 + VIVID_CS_240M,
7513 +diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
7514 +index 6b0bfa091592..e1185f0f6607 100644
7515 +--- a/drivers/media/platform/vivid/vivid-ctrls.c
7516 ++++ b/drivers/media/platform/vivid/vivid-ctrls.c
7517 +@@ -348,7 +348,7 @@ static int vivid_vid_cap_s_ctrl(struct v4l2_ctrl *ctrl)
7518 + V4L2_COLORSPACE_SMPTE170M,
7519 + V4L2_COLORSPACE_REC709,
7520 + V4L2_COLORSPACE_SRGB,
7521 +- V4L2_COLORSPACE_ADOBERGB,
7522 ++ V4L2_COLORSPACE_OPRGB,
7523 + V4L2_COLORSPACE_BT2020,
7524 + V4L2_COLORSPACE_DCI_P3,
7525 + V4L2_COLORSPACE_SMPTE240M,
7526 +@@ -729,7 +729,7 @@ static const char * const vivid_ctrl_colorspace_strings[] = {
7527 + "SMPTE 170M",
7528 + "Rec. 709",
7529 + "sRGB",
7530 +- "AdobeRGB",
7531 ++ "opRGB",
7532 + "BT.2020",
7533 + "DCI-P3",
7534 + "SMPTE 240M",
7535 +@@ -752,7 +752,7 @@ static const char * const vivid_ctrl_xfer_func_strings[] = {
7536 + "Default",
7537 + "Rec. 709",
7538 + "sRGB",
7539 +- "AdobeRGB",
7540 ++ "opRGB",
7541 + "SMPTE 240M",
7542 + "None",
7543 + "DCI-P3",
7544 +diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
7545 +index 51fec66d8d45..50248e2176a0 100644
7546 +--- a/drivers/media/platform/vivid/vivid-vid-out.c
7547 ++++ b/drivers/media/platform/vivid/vivid-vid-out.c
7548 +@@ -413,7 +413,7 @@ int vivid_try_fmt_vid_out(struct file *file, void *priv,
7549 + mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
7550 + } else if (mp->colorspace != V4L2_COLORSPACE_SMPTE170M &&
7551 + mp->colorspace != V4L2_COLORSPACE_REC709 &&
7552 +- mp->colorspace != V4L2_COLORSPACE_ADOBERGB &&
7553 ++ mp->colorspace != V4L2_COLORSPACE_OPRGB &&
7554 + mp->colorspace != V4L2_COLORSPACE_BT2020 &&
7555 + mp->colorspace != V4L2_COLORSPACE_SRGB) {
7556 + mp->colorspace = V4L2_COLORSPACE_REC709;
7557 +diff --git a/drivers/media/usb/dvb-usb-v2/dvbsky.c b/drivers/media/usb/dvb-usb-v2/dvbsky.c
7558 +index 1aa88d94e57f..e28bd8836751 100644
7559 +--- a/drivers/media/usb/dvb-usb-v2/dvbsky.c
7560 ++++ b/drivers/media/usb/dvb-usb-v2/dvbsky.c
7561 +@@ -31,6 +31,7 @@ MODULE_PARM_DESC(disable_rc, "Disable inbuilt IR receiver.");
7562 + DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
7563 +
7564 + struct dvbsky_state {
7565 ++ struct mutex stream_mutex;
7566 + u8 ibuf[DVBSKY_BUF_LEN];
7567 + u8 obuf[DVBSKY_BUF_LEN];
7568 + u8 last_lock;
7569 +@@ -67,17 +68,18 @@ static int dvbsky_usb_generic_rw(struct dvb_usb_device *d,
7570 +
7571 + static int dvbsky_stream_ctrl(struct dvb_usb_device *d, u8 onoff)
7572 + {
7573 ++ struct dvbsky_state *state = d_to_priv(d);
7574 + int ret;
7575 +- static u8 obuf_pre[3] = { 0x37, 0, 0 };
7576 +- static u8 obuf_post[3] = { 0x36, 3, 0 };
7577 ++ u8 obuf_pre[3] = { 0x37, 0, 0 };
7578 ++ u8 obuf_post[3] = { 0x36, 3, 0 };
7579 +
7580 +- mutex_lock(&d->usb_mutex);
7581 +- ret = dvb_usbv2_generic_rw_locked(d, obuf_pre, 3, NULL, 0);
7582 ++ mutex_lock(&state->stream_mutex);
7583 ++ ret = dvbsky_usb_generic_rw(d, obuf_pre, 3, NULL, 0);
7584 + if (!ret && onoff) {
7585 + msleep(20);
7586 +- ret = dvb_usbv2_generic_rw_locked(d, obuf_post, 3, NULL, 0);
7587 ++ ret = dvbsky_usb_generic_rw(d, obuf_post, 3, NULL, 0);
7588 + }
7589 +- mutex_unlock(&d->usb_mutex);
7590 ++ mutex_unlock(&state->stream_mutex);
7591 + return ret;
7592 + }
7593 +
7594 +@@ -606,6 +608,8 @@ static int dvbsky_init(struct dvb_usb_device *d)
7595 + if (ret)
7596 + return ret;
7597 + */
7598 ++ mutex_init(&state->stream_mutex);
7599 ++
7600 + state->last_lock = 0;
7601 +
7602 + return 0;
7603 +diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
7604 +index ff5e41ac4723..98d6c8fcd262 100644
7605 +--- a/drivers/media/usb/em28xx/em28xx-cards.c
7606 ++++ b/drivers/media/usb/em28xx/em28xx-cards.c
7607 +@@ -2141,13 +2141,13 @@ const struct em28xx_board em28xx_boards[] = {
7608 + .input = { {
7609 + .type = EM28XX_VMUX_COMPOSITE,
7610 + .vmux = TVP5150_COMPOSITE1,
7611 +- .amux = EM28XX_AUDIO_SRC_LINE,
7612 ++ .amux = EM28XX_AMUX_LINE_IN,
7613 + .gpio = terratec_av350_unmute_gpio,
7614 +
7615 + }, {
7616 + .type = EM28XX_VMUX_SVIDEO,
7617 + .vmux = TVP5150_SVIDEO,
7618 +- .amux = EM28XX_AUDIO_SRC_LINE,
7619 ++ .amux = EM28XX_AMUX_LINE_IN,
7620 + .gpio = terratec_av350_unmute_gpio,
7621 + } },
7622 + },
7623 +@@ -3041,6 +3041,9 @@ static int em28xx_hint_board(struct em28xx *dev)
7624 +
7625 + static void em28xx_card_setup(struct em28xx *dev)
7626 + {
7627 ++ int i, j, idx;
7628 ++ bool duplicate_entry;
7629 ++
7630 + /*
7631 + * If the device can be a webcam, seek for a sensor.
7632 + * If sensor is not found, then it isn't a webcam.
7633 +@@ -3197,6 +3200,32 @@ static void em28xx_card_setup(struct em28xx *dev)
7634 + /* Allow override tuner type by a module parameter */
7635 + if (tuner >= 0)
7636 + dev->tuner_type = tuner;
7637 ++
7638 ++ /*
7639 ++ * Dynamically generate a list of valid audio inputs for this
7640 ++ * specific board, mapping them via enum em28xx_amux.
7641 ++ */
7642 ++
7643 ++ idx = 0;
7644 ++ for (i = 0; i < MAX_EM28XX_INPUT; i++) {
7645 ++ if (!INPUT(i)->type)
7646 ++ continue;
7647 ++
7648 ++ /* Skip already mapped audio inputs */
7649 ++ duplicate_entry = false;
7650 ++ for (j = 0; j < idx; j++) {
7651 ++ if (INPUT(i)->amux == dev->amux_map[j]) {
7652 ++ duplicate_entry = true;
7653 ++ break;
7654 ++ }
7655 ++ }
7656 ++ if (duplicate_entry)
7657 ++ continue;
7658 ++
7659 ++ dev->amux_map[idx++] = INPUT(i)->amux;
7660 ++ }
7661 ++ for (; idx < MAX_EM28XX_INPUT; idx++)
7662 ++ dev->amux_map[idx] = EM28XX_AMUX_UNUSED;
7663 + }
7664 +
7665 + void em28xx_setup_xc3028(struct em28xx *dev, struct xc2028_ctrl *ctl)
7666 +diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c
7667 +index 68571bf36d28..3bf98ac897ec 100644
7668 +--- a/drivers/media/usb/em28xx/em28xx-video.c
7669 ++++ b/drivers/media/usb/em28xx/em28xx-video.c
7670 +@@ -1093,6 +1093,8 @@ int em28xx_start_analog_streaming(struct vb2_queue *vq, unsigned int count)
7671 +
7672 + em28xx_videodbg("%s\n", __func__);
7673 +
7674 ++ dev->v4l2->field_count = 0;
7675 ++
7676 + /*
7677 + * Make sure streaming is not already in progress for this type
7678 + * of filehandle (e.g. video, vbi)
7679 +@@ -1471,9 +1473,9 @@ static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
7680 +
7681 + fmt = format_by_fourcc(f->fmt.pix.pixelformat);
7682 + if (!fmt) {
7683 +- em28xx_videodbg("Fourcc format (%08x) invalid.\n",
7684 +- f->fmt.pix.pixelformat);
7685 +- return -EINVAL;
7686 ++ fmt = &format[0];
7687 ++ em28xx_videodbg("Fourcc format (%08x) invalid. Using default (%08x).\n",
7688 ++ f->fmt.pix.pixelformat, fmt->fourcc);
7689 + }
7690 +
7691 + if (dev->board.is_em2800) {
7692 +@@ -1666,6 +1668,7 @@ static int vidioc_enum_input(struct file *file, void *priv,
7693 + {
7694 + struct em28xx *dev = video_drvdata(file);
7695 + unsigned int n;
7696 ++ int j;
7697 +
7698 + n = i->index;
7699 + if (n >= MAX_EM28XX_INPUT)
7700 +@@ -1685,6 +1688,12 @@ static int vidioc_enum_input(struct file *file, void *priv,
7701 + if (dev->is_webcam)
7702 + i->capabilities = 0;
7703 +
7704 ++ /* Dynamically generates an audioset bitmask */
7705 ++ i->audioset = 0;
7706 ++ for (j = 0; j < MAX_EM28XX_INPUT; j++)
7707 ++ if (dev->amux_map[j] != EM28XX_AMUX_UNUSED)
7708 ++ i->audioset |= 1 << j;
7709 ++
7710 + return 0;
7711 + }
7712 +
7713 +@@ -1710,11 +1719,24 @@ static int vidioc_s_input(struct file *file, void *priv, unsigned int i)
7714 + return 0;
7715 + }
7716 +
7717 +-static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
7718 ++static int em28xx_fill_audio_input(struct em28xx *dev,
7719 ++ const char *s,
7720 ++ struct v4l2_audio *a,
7721 ++ unsigned int index)
7722 + {
7723 +- struct em28xx *dev = video_drvdata(file);
7724 ++ unsigned int idx = dev->amux_map[index];
7725 ++
7726 ++ /*
7727 ++ * With msp3400, almost all mappings use the default (amux = 0).
7728 ++ * The only one may use a different value is WinTV USB2, where it
7729 ++ * can also be SCART1 input.
7730 ++ * As it is very doubtful that we would see new boards with msp3400,
7731 ++ * let's just reuse the existing switch.
7732 ++ */
7733 ++ if (dev->has_msp34xx && idx != EM28XX_AMUX_UNUSED)
7734 ++ idx = EM28XX_AMUX_LINE_IN;
7735 +
7736 +- switch (a->index) {
7737 ++ switch (idx) {
7738 + case EM28XX_AMUX_VIDEO:
7739 + strcpy(a->name, "Television");
7740 + break;
7741 +@@ -1739,32 +1761,79 @@ static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
7742 + case EM28XX_AMUX_PCM_OUT:
7743 + strcpy(a->name, "PCM");
7744 + break;
7745 ++ case EM28XX_AMUX_UNUSED:
7746 + default:
7747 + return -EINVAL;
7748 + }
7749 +-
7750 +- a->index = dev->ctl_ainput;
7751 ++ a->index = index;
7752 + a->capability = V4L2_AUDCAP_STEREO;
7753 +
7754 ++ em28xx_videodbg("%s: audio input index %d is '%s'\n",
7755 ++ s, a->index, a->name);
7756 ++
7757 + return 0;
7758 + }
7759 +
7760 ++static int vidioc_enumaudio(struct file *file, void *fh, struct v4l2_audio *a)
7761 ++{
7762 ++ struct em28xx *dev = video_drvdata(file);
7763 ++
7764 ++ if (a->index >= MAX_EM28XX_INPUT)
7765 ++ return -EINVAL;
7766 ++
7767 ++ return em28xx_fill_audio_input(dev, __func__, a, a->index);
7768 ++}
7769 ++
7770 ++static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
7771 ++{
7772 ++ struct em28xx *dev = video_drvdata(file);
7773 ++ int i;
7774 ++
7775 ++ for (i = 0; i < MAX_EM28XX_INPUT; i++)
7776 ++ if (dev->ctl_ainput == dev->amux_map[i])
7777 ++ return em28xx_fill_audio_input(dev, __func__, a, i);
7778 ++
7779 ++ /* Should never happen! */
7780 ++ return -EINVAL;
7781 ++}
7782 ++
7783 + static int vidioc_s_audio(struct file *file, void *priv,
7784 + const struct v4l2_audio *a)
7785 + {
7786 + struct em28xx *dev = video_drvdata(file);
7787 ++ int idx, i;
7788 +
7789 + if (a->index >= MAX_EM28XX_INPUT)
7790 + return -EINVAL;
7791 +- if (!INPUT(a->index)->type)
7792 ++
7793 ++ idx = dev->amux_map[a->index];
7794 ++
7795 ++ if (idx == EM28XX_AMUX_UNUSED)
7796 + return -EINVAL;
7797 +
7798 +- dev->ctl_ainput = INPUT(a->index)->amux;
7799 +- dev->ctl_aoutput = INPUT(a->index)->aout;
7800 ++ dev->ctl_ainput = idx;
7801 ++
7802 ++ /*
7803 ++ * FIXME: This is wrong, as different inputs at em28xx_cards
7804 ++ * may have different audio outputs. So, the right thing
7805 ++ * to do is to implement VIDIOC_G_AUDOUT/VIDIOC_S_AUDOUT.
7806 ++ * With the current board definitions, this would work fine,
7807 ++ * as, currently, all boards fit.
7808 ++ */
7809 ++ for (i = 0; i < MAX_EM28XX_INPUT; i++)
7810 ++ if (idx == dev->amux_map[i])
7811 ++ break;
7812 ++ if (i == MAX_EM28XX_INPUT)
7813 ++ return -EINVAL;
7814 ++
7815 ++ dev->ctl_aoutput = INPUT(i)->aout;
7816 +
7817 + if (!dev->ctl_aoutput)
7818 + dev->ctl_aoutput = EM28XX_AOUT_MASTER;
7819 +
7820 ++ em28xx_videodbg("%s: set audio input to %d\n", __func__,
7821 ++ dev->ctl_ainput);
7822 ++
7823 + return 0;
7824 + }
7825 +
7826 +@@ -2302,6 +2371,7 @@ static const struct v4l2_ioctl_ops video_ioctl_ops = {
7827 + .vidioc_try_fmt_vbi_cap = vidioc_g_fmt_vbi_cap,
7828 + .vidioc_s_fmt_vbi_cap = vidioc_g_fmt_vbi_cap,
7829 + .vidioc_enum_framesizes = vidioc_enum_framesizes,
7830 ++ .vidioc_enumaudio = vidioc_enumaudio,
7831 + .vidioc_g_audio = vidioc_g_audio,
7832 + .vidioc_s_audio = vidioc_s_audio,
7833 +
7834 +diff --git a/drivers/media/usb/em28xx/em28xx.h b/drivers/media/usb/em28xx/em28xx.h
7835 +index 953caac025f2..a551072e62ed 100644
7836 +--- a/drivers/media/usb/em28xx/em28xx.h
7837 ++++ b/drivers/media/usb/em28xx/em28xx.h
7838 +@@ -335,6 +335,9 @@ enum em28xx_usb_audio_type {
7839 + /**
7840 + * em28xx_amux - describes the type of audio input used by em28xx
7841 + *
7842 ++ * @EM28XX_AMUX_UNUSED:
7843 ++ * Used only on em28xx dev->map field, in order to mark an entry
7844 ++ * as unused.
7845 + * @EM28XX_AMUX_VIDEO:
7846 + * On devices without AC97, this is the only value that it is currently
7847 + * allowed.
7848 +@@ -369,7 +372,8 @@ enum em28xx_usb_audio_type {
7849 + * same time, via the alsa mux.
7850 + */
7851 + enum em28xx_amux {
7852 +- EM28XX_AMUX_VIDEO,
7853 ++ EM28XX_AMUX_UNUSED = -1,
7854 ++ EM28XX_AMUX_VIDEO = 0,
7855 + EM28XX_AMUX_LINE_IN,
7856 +
7857 + /* Some less-common mixer setups */
7858 +@@ -692,6 +696,8 @@ struct em28xx {
7859 + unsigned int ctl_input; // selected input
7860 + unsigned int ctl_ainput;// selected audio input
7861 + unsigned int ctl_aoutput;// selected audio output
7862 ++ enum em28xx_amux amux_map[MAX_EM28XX_INPUT];
7863 ++
7864 + int mute;
7865 + int volume;
7866 +
7867 +diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
7868 +index c81faea96fba..c7c600c1f63b 100644
7869 +--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
7870 ++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
7871 +@@ -837,9 +837,9 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
7872 + switch (avi->colorimetry) {
7873 + case HDMI_COLORIMETRY_EXTENDED:
7874 + switch (avi->extended_colorimetry) {
7875 +- case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
7876 +- c.colorspace = V4L2_COLORSPACE_ADOBERGB;
7877 +- c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
7878 ++ case HDMI_EXTENDED_COLORIMETRY_OPRGB:
7879 ++ c.colorspace = V4L2_COLORSPACE_OPRGB;
7880 ++ c.xfer_func = V4L2_XFER_FUNC_OPRGB;
7881 + break;
7882 + case HDMI_EXTENDED_COLORIMETRY_BT2020:
7883 + c.colorspace = V4L2_COLORSPACE_BT2020;
7884 +@@ -908,10 +908,10 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
7885 + c.ycbcr_enc = V4L2_YCBCR_ENC_601;
7886 + c.xfer_func = V4L2_XFER_FUNC_SRGB;
7887 + break;
7888 +- case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
7889 +- c.colorspace = V4L2_COLORSPACE_ADOBERGB;
7890 ++ case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
7891 ++ c.colorspace = V4L2_COLORSPACE_OPRGB;
7892 + c.ycbcr_enc = V4L2_YCBCR_ENC_601;
7893 +- c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
7894 ++ c.xfer_func = V4L2_XFER_FUNC_OPRGB;
7895 + break;
7896 + case HDMI_EXTENDED_COLORIMETRY_BT2020:
7897 + c.colorspace = V4L2_COLORSPACE_BT2020;
7898 +diff --git a/drivers/mfd/menelaus.c b/drivers/mfd/menelaus.c
7899 +index 29b7164a823b..d28ebe7ecd21 100644
7900 +--- a/drivers/mfd/menelaus.c
7901 ++++ b/drivers/mfd/menelaus.c
7902 +@@ -1094,6 +1094,7 @@ static void menelaus_rtc_alarm_work(struct menelaus_chip *m)
7903 + static inline void menelaus_rtc_init(struct menelaus_chip *m)
7904 + {
7905 + int alarm = (m->client->irq > 0);
7906 ++ int err;
7907 +
7908 + /* assume 32KDETEN pin is pulled high */
7909 + if (!(menelaus_read_reg(MENELAUS_OSC_CTRL) & 0x80)) {
7910 +@@ -1101,6 +1102,12 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
7911 + return;
7912 + }
7913 +
7914 ++ m->rtc = devm_rtc_allocate_device(&m->client->dev);
7915 ++ if (IS_ERR(m->rtc))
7916 ++ return;
7917 ++
7918 ++ m->rtc->ops = &menelaus_rtc_ops;
7919 ++
7920 + /* support RTC alarm; it can issue wakeups */
7921 + if (alarm) {
7922 + if (menelaus_add_irq_work(MENELAUS_RTCALM_IRQ,
7923 +@@ -1125,10 +1132,8 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
7924 + menelaus_write_reg(MENELAUS_RTC_CTRL, m->rtc_control);
7925 + }
7926 +
7927 +- m->rtc = rtc_device_register(DRIVER_NAME,
7928 +- &m->client->dev,
7929 +- &menelaus_rtc_ops, THIS_MODULE);
7930 +- if (IS_ERR(m->rtc)) {
7931 ++ err = rtc_register_device(m->rtc);
7932 ++ if (err) {
7933 + if (alarm) {
7934 + menelaus_remove_irq_work(MENELAUS_RTCALM_IRQ);
7935 + device_init_wakeup(&m->client->dev, 0);
7936 +diff --git a/drivers/misc/genwqe/card_base.h b/drivers/misc/genwqe/card_base.h
7937 +index 1c3967f10f55..1f94fb436c3c 100644
7938 +--- a/drivers/misc/genwqe/card_base.h
7939 ++++ b/drivers/misc/genwqe/card_base.h
7940 +@@ -408,7 +408,7 @@ struct genwqe_file {
7941 + struct file *filp;
7942 +
7943 + struct fasync_struct *async_queue;
7944 +- struct task_struct *owner;
7945 ++ struct pid *opener;
7946 + struct list_head list; /* entry in list of open files */
7947 +
7948 + spinlock_t map_lock; /* lock for dma_mappings */
7949 +diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
7950 +index 0dd6b5ef314a..66f222f24da3 100644
7951 +--- a/drivers/misc/genwqe/card_dev.c
7952 ++++ b/drivers/misc/genwqe/card_dev.c
7953 +@@ -52,7 +52,7 @@ static void genwqe_add_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
7954 + {
7955 + unsigned long flags;
7956 +
7957 +- cfile->owner = current;
7958 ++ cfile->opener = get_pid(task_tgid(current));
7959 + spin_lock_irqsave(&cd->file_lock, flags);
7960 + list_add(&cfile->list, &cd->file_list);
7961 + spin_unlock_irqrestore(&cd->file_lock, flags);
7962 +@@ -65,6 +65,7 @@ static int genwqe_del_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
7963 + spin_lock_irqsave(&cd->file_lock, flags);
7964 + list_del(&cfile->list);
7965 + spin_unlock_irqrestore(&cd->file_lock, flags);
7966 ++ put_pid(cfile->opener);
7967 +
7968 + return 0;
7969 + }
7970 +@@ -275,7 +276,7 @@ static int genwqe_kill_fasync(struct genwqe_dev *cd, int sig)
7971 + return files;
7972 + }
7973 +
7974 +-static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
7975 ++static int genwqe_terminate(struct genwqe_dev *cd)
7976 + {
7977 + unsigned int files = 0;
7978 + unsigned long flags;
7979 +@@ -283,7 +284,7 @@ static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
7980 +
7981 + spin_lock_irqsave(&cd->file_lock, flags);
7982 + list_for_each_entry(cfile, &cd->file_list, list) {
7983 +- force_sig(sig, cfile->owner);
7984 ++ kill_pid(cfile->opener, SIGKILL, 1);
7985 + files++;
7986 + }
7987 + spin_unlock_irqrestore(&cd->file_lock, flags);
7988 +@@ -1357,7 +1358,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd)
7989 + dev_warn(&pci_dev->dev,
7990 + "[%s] send SIGKILL and wait ...\n", __func__);
7991 +
7992 +- rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */
7993 ++ rc = genwqe_terminate(cd);
7994 + if (rc) {
7995 + /* Give kill_timout more seconds to end processes */
7996 + for (i = 0; (i < GENWQE_KILL_TIMEOUT) &&
7997 +diff --git a/drivers/misc/ocxl/config.c b/drivers/misc/ocxl/config.c
7998 +index 2e30de9c694a..57a6bb1fd3c9 100644
7999 +--- a/drivers/misc/ocxl/config.c
8000 ++++ b/drivers/misc/ocxl/config.c
8001 +@@ -280,7 +280,9 @@ int ocxl_config_check_afu_index(struct pci_dev *dev,
8002 + u32 val;
8003 + int rc, templ_major, templ_minor, len;
8004 +
8005 +- pci_write_config_word(dev, fn->dvsec_afu_info_pos, afu_idx);
8006 ++ pci_write_config_byte(dev,
8007 ++ fn->dvsec_afu_info_pos + OCXL_DVSEC_AFU_INFO_AFU_IDX,
8008 ++ afu_idx);
8009 + rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_VERSION, &val);
8010 + if (rc)
8011 + return rc;
8012 +diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
8013 +index d7eaf1eb11e7..003bfba40758 100644
8014 +--- a/drivers/misc/vmw_vmci/vmci_driver.c
8015 ++++ b/drivers/misc/vmw_vmci/vmci_driver.c
8016 +@@ -113,5 +113,5 @@ module_exit(vmci_drv_exit);
8017 +
8018 + MODULE_AUTHOR("VMware, Inc.");
8019 + MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
8020 +-MODULE_VERSION("1.1.5.0-k");
8021 ++MODULE_VERSION("1.1.6.0-k");
8022 + MODULE_LICENSE("GPL v2");
8023 +diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
8024 +index 1ab6e8737a5f..da1ee2e1ba99 100644
8025 +--- a/drivers/misc/vmw_vmci/vmci_resource.c
8026 ++++ b/drivers/misc/vmw_vmci/vmci_resource.c
8027 +@@ -57,7 +57,8 @@ static struct vmci_resource *vmci_resource_lookup(struct vmci_handle handle,
8028 +
8029 + if (r->type == type &&
8030 + rid == handle.resource &&
8031 +- (cid == handle.context || cid == VMCI_INVALID_ID)) {
8032 ++ (cid == handle.context || cid == VMCI_INVALID_ID ||
8033 ++ handle.context == VMCI_INVALID_ID)) {
8034 + resource = r;
8035 + break;
8036 + }
8037 +diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
8038 +index 32321bd596d8..c61109f7b793 100644
8039 +--- a/drivers/mmc/host/sdhci-acpi.c
8040 ++++ b/drivers/mmc/host/sdhci-acpi.c
8041 +@@ -76,6 +76,7 @@ struct sdhci_acpi_slot {
8042 + size_t priv_size;
8043 + int (*probe_slot)(struct platform_device *, const char *, const char *);
8044 + int (*remove_slot)(struct platform_device *);
8045 ++ int (*free_slot)(struct platform_device *pdev);
8046 + int (*setup_host)(struct platform_device *pdev);
8047 + };
8048 +
8049 +@@ -756,6 +757,9 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
8050 + err_cleanup:
8051 + sdhci_cleanup_host(c->host);
8052 + err_free:
8053 ++ if (c->slot && c->slot->free_slot)
8054 ++ c->slot->free_slot(pdev);
8055 ++
8056 + sdhci_free_host(c->host);
8057 + return err;
8058 + }
8059 +@@ -777,6 +781,10 @@ static int sdhci_acpi_remove(struct platform_device *pdev)
8060 +
8061 + dead = (sdhci_readl(c->host, SDHCI_INT_STATUS) == ~0);
8062 + sdhci_remove_host(c->host, dead);
8063 ++
8064 ++ if (c->slot && c->slot->free_slot)
8065 ++ c->slot->free_slot(pdev);
8066 ++
8067 + sdhci_free_host(c->host);
8068 +
8069 + return 0;
8070 +diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
8071 +index 555970a29c94..34326d95d254 100644
8072 +--- a/drivers/mmc/host/sdhci-pci-o2micro.c
8073 ++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
8074 +@@ -367,6 +367,9 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
8075 + pci_write_config_byte(chip->pdev, O2_SD_LOCK_WP, scratch);
8076 + break;
8077 + case PCI_DEVICE_ID_O2_SEABIRD0:
8078 ++ if (chip->pdev->revision == 0x01)
8079 ++ chip->quirks |= SDHCI_QUIRK_DELAY_AFTER_POWER;
8080 ++ /* fall through */
8081 + case PCI_DEVICE_ID_O2_SEABIRD1:
8082 + /* UnLock WP */
8083 + ret = pci_read_config_byte(chip->pdev,
8084 +diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
8085 +index e686fe73159e..a1fd6f6f5414 100644
8086 +--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
8087 ++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
8088 +@@ -2081,6 +2081,10 @@ atmel_hsmc_nand_controller_legacy_init(struct atmel_hsmc_nand_controller *nc)
8089 + nand_np = dev->of_node;
8090 + nfc_np = of_find_compatible_node(dev->of_node, NULL,
8091 + "atmel,sama5d3-nfc");
8092 ++ if (!nfc_np) {
8093 ++ dev_err(dev, "Could not find device node for sama5d3-nfc\n");
8094 ++ return -ENODEV;
8095 ++ }
8096 +
8097 + nc->clk = of_clk_get(nfc_np, 0);
8098 + if (IS_ERR(nc->clk)) {
8099 +diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
8100 +index c502075e5721..ff955f085351 100644
8101 +--- a/drivers/mtd/nand/raw/denali.c
8102 ++++ b/drivers/mtd/nand/raw/denali.c
8103 +@@ -28,6 +28,7 @@
8104 + MODULE_LICENSE("GPL");
8105 +
8106 + #define DENALI_NAND_NAME "denali-nand"
8107 ++#define DENALI_DEFAULT_OOB_SKIP_BYTES 8
8108 +
8109 + /* for Indexed Addressing */
8110 + #define DENALI_INDEXED_CTRL 0x00
8111 +@@ -1106,12 +1107,17 @@ static void denali_hw_init(struct denali_nand_info *denali)
8112 + denali->revision = swab16(ioread32(denali->reg + REVISION));
8113 +
8114 + /*
8115 +- * tell driver how many bit controller will skip before
8116 +- * writing ECC code in OOB, this register may be already
8117 +- * set by firmware. So we read this value out.
8118 +- * if this value is 0, just let it be.
8119 ++ * Set how many bytes should be skipped before writing data in OOB.
8120 ++ * If a non-zero value has already been set (by firmware or something),
8121 ++ * just use it. Otherwise, set the driver default.
8122 + */
8123 + denali->oob_skip_bytes = ioread32(denali->reg + SPARE_AREA_SKIP_BYTES);
8124 ++ if (!denali->oob_skip_bytes) {
8125 ++ denali->oob_skip_bytes = DENALI_DEFAULT_OOB_SKIP_BYTES;
8126 ++ iowrite32(denali->oob_skip_bytes,
8127 ++ denali->reg + SPARE_AREA_SKIP_BYTES);
8128 ++ }
8129 ++
8130 + denali_detect_max_banks(denali);
8131 + iowrite32(0x0F, denali->reg + RB_PIN_ENABLED);
8132 + iowrite32(CHIP_EN_DONT_CARE__FLAG, denali->reg + CHIP_ENABLE_DONT_CARE);
8133 +diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
8134 +index c88588815ca1..a3477cbf6115 100644
8135 +--- a/drivers/mtd/nand/raw/marvell_nand.c
8136 ++++ b/drivers/mtd/nand/raw/marvell_nand.c
8137 +@@ -691,7 +691,7 @@ static irqreturn_t marvell_nfc_isr(int irq, void *dev_id)
8138 +
8139 + marvell_nfc_disable_int(nfc, st & NDCR_ALL_INT);
8140 +
8141 +- if (!(st & (NDSR_RDDREQ | NDSR_WRDREQ | NDSR_WRCMDREQ)))
8142 ++ if (st & (NDSR_RDY(0) | NDSR_RDY(1)))
8143 + complete(&nfc->complete);
8144 +
8145 + return IRQ_HANDLED;
8146 +diff --git a/drivers/mtd/spi-nor/fsl-quadspi.c b/drivers/mtd/spi-nor/fsl-quadspi.c
8147 +index 7d9620c7ff6c..1ff3430f82c8 100644
8148 +--- a/drivers/mtd/spi-nor/fsl-quadspi.c
8149 ++++ b/drivers/mtd/spi-nor/fsl-quadspi.c
8150 +@@ -478,6 +478,7 @@ static int fsl_qspi_get_seqid(struct fsl_qspi *q, u8 cmd)
8151 + {
8152 + switch (cmd) {
8153 + case SPINOR_OP_READ_1_1_4:
8154 ++ case SPINOR_OP_READ_1_1_4_4B:
8155 + return SEQID_READ;
8156 + case SPINOR_OP_WREN:
8157 + return SEQID_WREN;
8158 +@@ -543,6 +544,9 @@ fsl_qspi_runcmd(struct fsl_qspi *q, u8 cmd, unsigned int addr, int len)
8159 +
8160 + /* trigger the LUT now */
8161 + seqid = fsl_qspi_get_seqid(q, cmd);
8162 ++ if (seqid < 0)
8163 ++ return seqid;
8164 ++
8165 + qspi_writel(q, (seqid << QUADSPI_IPCR_SEQID_SHIFT) | len,
8166 + base + QUADSPI_IPCR);
8167 +
8168 +@@ -671,7 +675,7 @@ static void fsl_qspi_set_map_addr(struct fsl_qspi *q)
8169 + * causes the controller to clear the buffer, and use the sequence pointed
8170 + * by the QUADSPI_BFGENCR[SEQID] to initiate a read from the flash.
8171 + */
8172 +-static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
8173 ++static int fsl_qspi_init_ahb_read(struct fsl_qspi *q)
8174 + {
8175 + void __iomem *base = q->iobase;
8176 + int seqid;
8177 +@@ -696,8 +700,13 @@ static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
8178 +
8179 + /* Set the default lut sequence for AHB Read. */
8180 + seqid = fsl_qspi_get_seqid(q, q->nor[0].read_opcode);
8181 ++ if (seqid < 0)
8182 ++ return seqid;
8183 ++
8184 + qspi_writel(q, seqid << QUADSPI_BFGENCR_SEQID_SHIFT,
8185 + q->iobase + QUADSPI_BFGENCR);
8186 ++
8187 ++ return 0;
8188 + }
8189 +
8190 + /* This function was used to prepare and enable QSPI clock */
8191 +@@ -805,9 +814,7 @@ static int fsl_qspi_nor_setup_last(struct fsl_qspi *q)
8192 + fsl_qspi_init_lut(q);
8193 +
8194 + /* Init for AHB read */
8195 +- fsl_qspi_init_ahb_read(q);
8196 +-
8197 +- return 0;
8198 ++ return fsl_qspi_init_ahb_read(q);
8199 + }
8200 +
8201 + static const struct of_device_id fsl_qspi_dt_ids[] = {
8202 +diff --git a/drivers/mtd/spi-nor/intel-spi-pci.c b/drivers/mtd/spi-nor/intel-spi-pci.c
8203 +index c0976f2e3dd1..872b40922608 100644
8204 +--- a/drivers/mtd/spi-nor/intel-spi-pci.c
8205 ++++ b/drivers/mtd/spi-nor/intel-spi-pci.c
8206 +@@ -65,6 +65,7 @@ static void intel_spi_pci_remove(struct pci_dev *pdev)
8207 + static const struct pci_device_id intel_spi_pci_ids[] = {
8208 + { PCI_VDEVICE(INTEL, 0x18e0), (unsigned long)&bxt_info },
8209 + { PCI_VDEVICE(INTEL, 0x19e0), (unsigned long)&bxt_info },
8210 ++ { PCI_VDEVICE(INTEL, 0x34a4), (unsigned long)&bxt_info },
8211 + { PCI_VDEVICE(INTEL, 0xa1a4), (unsigned long)&bxt_info },
8212 + { PCI_VDEVICE(INTEL, 0xa224), (unsigned long)&bxt_info },
8213 + { },
8214 +diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
8215 +index 46af8052e535..152a65d46e0b 100644
8216 +--- a/drivers/net/dsa/mv88e6xxx/phy.c
8217 ++++ b/drivers/net/dsa/mv88e6xxx/phy.c
8218 +@@ -110,6 +110,9 @@ int mv88e6xxx_phy_page_write(struct mv88e6xxx_chip *chip, int phy,
8219 + err = mv88e6xxx_phy_page_get(chip, phy, page);
8220 + if (!err) {
8221 + err = mv88e6xxx_phy_write(chip, phy, MV88E6XXX_PHY_PAGE, page);
8222 ++ if (!err)
8223 ++ err = mv88e6xxx_phy_write(chip, phy, reg, val);
8224 ++
8225 + mv88e6xxx_phy_page_put(chip, phy);
8226 + }
8227 +
8228 +diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
8229 +index 34af5f1569c8..de0e24d912fe 100644
8230 +--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
8231 ++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
8232 +@@ -342,7 +342,7 @@ static struct device_node *bcmgenet_mii_of_find_mdio(struct bcmgenet_priv *priv)
8233 + if (!compat)
8234 + return NULL;
8235 +
8236 +- priv->mdio_dn = of_find_compatible_node(dn, NULL, compat);
8237 ++ priv->mdio_dn = of_get_compatible_child(dn, compat);
8238 + kfree(compat);
8239 + if (!priv->mdio_dn) {
8240 + dev_err(kdev, "unable to find MDIO bus node\n");
8241 +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
8242 +index 9d69621f5ab4..542f16074dc9 100644
8243 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
8244 ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
8245 +@@ -1907,6 +1907,7 @@ static int is_valid_clean_head(struct hns3_enet_ring *ring, int h)
8246 + bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
8247 + {
8248 + struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
8249 ++ struct hns3_nic_priv *priv = netdev_priv(netdev);
8250 + struct netdev_queue *dev_queue;
8251 + int bytes, pkts;
8252 + int head;
8253 +@@ -1953,7 +1954,8 @@ bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
8254 + * sees the new next_to_clean.
8255 + */
8256 + smp_mb();
8257 +- if (netif_tx_queue_stopped(dev_queue)) {
8258 ++ if (netif_tx_queue_stopped(dev_queue) &&
8259 ++ !test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) {
8260 + netif_tx_wake_queue(dev_queue);
8261 + ring->stats.restart_queue++;
8262 + }
8263 +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
8264 +index 11620e003a8e..967a625c040d 100644
8265 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
8266 ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
8267 +@@ -310,7 +310,7 @@ static void hns3_self_test(struct net_device *ndev,
8268 + h->flags & HNAE3_SUPPORT_MAC_LOOPBACK;
8269 +
8270 + if (if_running)
8271 +- dev_close(ndev);
8272 ++ ndev->netdev_ops->ndo_stop(ndev);
8273 +
8274 + #if IS_ENABLED(CONFIG_VLAN_8021Q)
8275 + /* Disable the vlan filter for selftest does not support it */
8276 +@@ -348,7 +348,7 @@ static void hns3_self_test(struct net_device *ndev,
8277 + #endif
8278 +
8279 + if (if_running)
8280 +- dev_open(ndev);
8281 ++ ndev->netdev_ops->ndo_open(ndev);
8282 + }
8283 +
8284 + static int hns3_get_sset_count(struct net_device *netdev, int stringset)
8285 +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
8286 +index 955f0e3d5c95..b4c0597a392d 100644
8287 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
8288 ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
8289 +@@ -79,6 +79,7 @@ static int hclge_ieee_getets(struct hnae3_handle *h, struct ieee_ets *ets)
8290 + static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
8291 + u8 *tc, bool *changed)
8292 + {
8293 ++ bool has_ets_tc = false;
8294 + u32 total_ets_bw = 0;
8295 + u8 max_tc = 0;
8296 + u8 i;
8297 +@@ -106,13 +107,14 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
8298 + *changed = true;
8299 +
8300 + total_ets_bw += ets->tc_tx_bw[i];
8301 +- break;
8302 ++ has_ets_tc = true;
8303 ++ break;
8304 + default:
8305 + return -EINVAL;
8306 + }
8307 + }
8308 +
8309 +- if (total_ets_bw != BW_PERCENT)
8310 ++ if (has_ets_tc && total_ets_bw != BW_PERCENT)
8311 + return -EINVAL;
8312 +
8313 + *tc = max_tc + 1;
8314 +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
8315 +index 13f43b74fd6d..9f2bea64c522 100644
8316 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
8317 ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
8318 +@@ -1669,11 +1669,13 @@ static int hclge_tx_buffer_calc(struct hclge_dev *hdev,
8319 + static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
8320 + struct hclge_pkt_buf_alloc *buf_alloc)
8321 + {
8322 +- u32 rx_all = hdev->pkt_buf_size;
8323 ++#define HCLGE_BUF_SIZE_UNIT 128
8324 ++ u32 rx_all = hdev->pkt_buf_size, aligned_mps;
8325 + int no_pfc_priv_num, pfc_priv_num;
8326 + struct hclge_priv_buf *priv;
8327 + int i;
8328 +
8329 ++ aligned_mps = round_up(hdev->mps, HCLGE_BUF_SIZE_UNIT);
8330 + rx_all -= hclge_get_tx_buff_alloced(buf_alloc);
8331 +
8332 + /* When DCB is not supported, rx private
8333 +@@ -1692,13 +1694,13 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
8334 + if (hdev->hw_tc_map & BIT(i)) {
8335 + priv->enable = 1;
8336 + if (hdev->tm_info.hw_pfc_map & BIT(i)) {
8337 +- priv->wl.low = hdev->mps;
8338 +- priv->wl.high = priv->wl.low + hdev->mps;
8339 ++ priv->wl.low = aligned_mps;
8340 ++ priv->wl.high = priv->wl.low + aligned_mps;
8341 + priv->buf_size = priv->wl.high +
8342 + HCLGE_DEFAULT_DV;
8343 + } else {
8344 + priv->wl.low = 0;
8345 +- priv->wl.high = 2 * hdev->mps;
8346 ++ priv->wl.high = 2 * aligned_mps;
8347 + priv->buf_size = priv->wl.high;
8348 + }
8349 + } else {
8350 +@@ -1730,11 +1732,11 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
8351 +
8352 + if (hdev->tm_info.hw_pfc_map & BIT(i)) {
8353 + priv->wl.low = 128;
8354 +- priv->wl.high = priv->wl.low + hdev->mps;
8355 ++ priv->wl.high = priv->wl.low + aligned_mps;
8356 + priv->buf_size = priv->wl.high + HCLGE_DEFAULT_DV;
8357 + } else {
8358 + priv->wl.low = 0;
8359 +- priv->wl.high = hdev->mps;
8360 ++ priv->wl.high = aligned_mps;
8361 + priv->buf_size = priv->wl.high;
8362 + }
8363 + }
8364 +@@ -2396,6 +2398,9 @@ static int hclge_get_mac_phy_link(struct hclge_dev *hdev)
8365 + int mac_state;
8366 + int link_stat;
8367 +
8368 ++ if (test_bit(HCLGE_STATE_DOWN, &hdev->state))
8369 ++ return 0;
8370 ++
8371 + mac_state = hclge_get_mac_link_status(hdev);
8372 +
8373 + if (hdev->hw.mac.phydev) {
8374 +@@ -3789,6 +3794,8 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
8375 + struct hclge_dev *hdev = vport->back;
8376 + int i;
8377 +
8378 ++ set_bit(HCLGE_STATE_DOWN, &hdev->state);
8379 ++
8380 + del_timer_sync(&hdev->service_timer);
8381 + cancel_work_sync(&hdev->service_task);
8382 + clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state);
8383 +@@ -4679,9 +4686,17 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
8384 + "Add vf vlan filter fail, ret =%d.\n",
8385 + req0->resp_code);
8386 + } else {
8387 ++#define HCLGE_VF_VLAN_DEL_NO_FOUND 1
8388 + if (!req0->resp_code)
8389 + return 0;
8390 +
8391 ++ if (req0->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND) {
8392 ++ dev_warn(&hdev->pdev->dev,
8393 ++ "vlan %d filter is not in vf vlan table\n",
8394 ++ vlan);
8395 ++ return 0;
8396 ++ }
8397 ++
8398 + dev_err(&hdev->pdev->dev,
8399 + "Kill vf vlan filter fail, ret =%d.\n",
8400 + req0->resp_code);
8401 +@@ -4725,6 +4740,9 @@ static int hclge_set_vlan_filter_hw(struct hclge_dev *hdev, __be16 proto,
8402 + u16 vport_idx, vport_num = 0;
8403 + int ret;
8404 +
8405 ++ if (is_kill && !vlan_id)
8406 ++ return 0;
8407 ++
8408 + ret = hclge_set_vf_vlan_common(hdev, vport_id, is_kill, vlan_id,
8409 + 0, proto);
8410 + if (ret) {
8411 +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
8412 +index 12aa1f1b99ef..6090a7cd83e1 100644
8413 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
8414 ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
8415 +@@ -299,6 +299,9 @@ void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
8416 +
8417 + client = handle->client;
8418 +
8419 ++ link_state =
8420 ++ test_bit(HCLGEVF_STATE_DOWN, &hdev->state) ? 0 : link_state;
8421 ++
8422 + if (link_state != hdev->hw.mac.link) {
8423 + client->ops->link_status_change(handle, !!link_state);
8424 + hdev->hw.mac.link = link_state;
8425 +@@ -1439,6 +1442,8 @@ static void hclgevf_ae_stop(struct hnae3_handle *handle)
8426 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
8427 + int i, queue_id;
8428 +
8429 ++ set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
8430 ++
8431 + for (i = 0; i < hdev->num_tqps; i++) {
8432 + /* Ring disable */
8433 + queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
8434 +diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
8435 +index ed071ea75f20..ce12824a8325 100644
8436 +--- a/drivers/net/ethernet/intel/ice/ice.h
8437 ++++ b/drivers/net/ethernet/intel/ice/ice.h
8438 +@@ -39,9 +39,9 @@
8439 + extern const char ice_drv_ver[];
8440 + #define ICE_BAR0 0
8441 + #define ICE_DFLT_NUM_DESC 128
8442 +-#define ICE_MIN_NUM_DESC 8
8443 +-#define ICE_MAX_NUM_DESC 8160
8444 + #define ICE_REQ_DESC_MULTIPLE 32
8445 ++#define ICE_MIN_NUM_DESC ICE_REQ_DESC_MULTIPLE
8446 ++#define ICE_MAX_NUM_DESC 8160
8447 + #define ICE_DFLT_TRAFFIC_CLASS BIT(0)
8448 + #define ICE_INT_NAME_STR_LEN (IFNAMSIZ + 16)
8449 + #define ICE_ETHTOOL_FWVER_LEN 32
8450 +diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
8451 +index 62be72fdc8f3..e783976c401d 100644
8452 +--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
8453 ++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
8454 +@@ -518,22 +518,31 @@ shutdown_sq_out:
8455 +
8456 + /**
8457 + * ice_aq_ver_check - Check the reported AQ API version.
8458 +- * @fw_branch: The "branch" of FW, typically describes the device type
8459 +- * @fw_major: The major version of the FW API
8460 +- * @fw_minor: The minor version increment of the FW API
8461 ++ * @hw: pointer to the hardware structure
8462 + *
8463 + * Checks if the driver should load on a given AQ API version.
8464 + *
8465 + * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
8466 + */
8467 +-static bool ice_aq_ver_check(u8 fw_branch, u8 fw_major, u8 fw_minor)
8468 ++static bool ice_aq_ver_check(struct ice_hw *hw)
8469 + {
8470 +- if (fw_branch != EXP_FW_API_VER_BRANCH)
8471 +- return false;
8472 +- if (fw_major != EXP_FW_API_VER_MAJOR)
8473 +- return false;
8474 +- if (fw_minor != EXP_FW_API_VER_MINOR)
8475 ++ if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
8476 ++ /* Major API version is newer than expected, don't load */
8477 ++ dev_warn(ice_hw_to_dev(hw),
8478 ++ "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
8479 + return false;
8480 ++ } else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
8481 ++ if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
8482 ++ dev_info(ice_hw_to_dev(hw),
8483 ++ "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
8484 ++ else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
8485 ++ dev_info(ice_hw_to_dev(hw),
8486 ++ "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
8487 ++ } else {
8488 ++ /* Major API version is older than expected, log a warning */
8489 ++ dev_info(ice_hw_to_dev(hw),
8490 ++ "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
8491 ++ }
8492 + return true;
8493 + }
8494 +
8495 +@@ -588,8 +597,7 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
8496 + if (status)
8497 + goto init_ctrlq_free_rq;
8498 +
8499 +- if (!ice_aq_ver_check(hw->api_branch, hw->api_maj_ver,
8500 +- hw->api_min_ver)) {
8501 ++ if (!ice_aq_ver_check(hw)) {
8502 + status = ICE_ERR_FW_API_VER;
8503 + goto init_ctrlq_free_rq;
8504 + }
8505 +diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
8506 +index c71a9b528d6d..9d6754f65a1a 100644
8507 +--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
8508 ++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
8509 +@@ -478,9 +478,11 @@ ice_get_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
8510 + ring->tx_max_pending = ICE_MAX_NUM_DESC;
8511 + ring->rx_pending = vsi->rx_rings[0]->count;
8512 + ring->tx_pending = vsi->tx_rings[0]->count;
8513 +- ring->rx_mini_pending = ICE_MIN_NUM_DESC;
8514 ++
8515 ++ /* Rx mini and jumbo rings are not supported */
8516 + ring->rx_mini_max_pending = 0;
8517 + ring->rx_jumbo_max_pending = 0;
8518 ++ ring->rx_mini_pending = 0;
8519 + ring->rx_jumbo_pending = 0;
8520 + }
8521 +
8522 +@@ -498,14 +500,23 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
8523 + ring->tx_pending < ICE_MIN_NUM_DESC ||
8524 + ring->rx_pending > ICE_MAX_NUM_DESC ||
8525 + ring->rx_pending < ICE_MIN_NUM_DESC) {
8526 +- netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d]\n",
8527 ++ netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
8528 + ring->tx_pending, ring->rx_pending,
8529 +- ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC);
8530 ++ ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC,
8531 ++ ICE_REQ_DESC_MULTIPLE);
8532 + return -EINVAL;
8533 + }
8534 +
8535 + new_tx_cnt = ALIGN(ring->tx_pending, ICE_REQ_DESC_MULTIPLE);
8536 ++ if (new_tx_cnt != ring->tx_pending)
8537 ++ netdev_info(netdev,
8538 ++ "Requested Tx descriptor count rounded up to %d\n",
8539 ++ new_tx_cnt);
8540 + new_rx_cnt = ALIGN(ring->rx_pending, ICE_REQ_DESC_MULTIPLE);
8541 ++ if (new_rx_cnt != ring->rx_pending)
8542 ++ netdev_info(netdev,
8543 ++ "Requested Rx descriptor count rounded up to %d\n",
8544 ++ new_rx_cnt);
8545 +
8546 + /* if nothing to do return success */
8547 + if (new_tx_cnt == vsi->tx_rings[0]->count &&
8548 +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
8549 +index da4322e4daed..add124e0381d 100644
8550 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
8551 ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
8552 +@@ -676,6 +676,9 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
8553 + } else {
8554 + struct tx_sa tsa;
8555 +
8556 ++ if (adapter->num_vfs)
8557 ++ return -EOPNOTSUPP;
8558 ++
8559 + /* find the first unused index */
8560 + ret = ixgbe_ipsec_find_empty_idx(ipsec, false);
8561 + if (ret < 0) {
8562 +diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
8563 +index 59416eddd840..ce28d474b929 100644
8564 +--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
8565 ++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
8566 +@@ -3849,6 +3849,10 @@ static void ixgbevf_tx_csum(struct ixgbevf_ring *tx_ring,
8567 + skb_checksum_help(skb);
8568 + goto no_csum;
8569 + }
8570 ++
8571 ++ if (first->protocol == htons(ETH_P_IP))
8572 ++ type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
8573 ++
8574 + /* update TX checksum flag */
8575 + first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
8576 + vlan_macip_lens = skb_checksum_start_offset(skb) -
8577 +diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
8578 +index 4a6d2db75071..417fbcc64f00 100644
8579 +--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
8580 ++++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
8581 +@@ -314,12 +314,14 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
8582 +
8583 + switch (off) {
8584 + case offsetof(struct iphdr, daddr):
8585 +- set_ip_addr->ipv4_dst_mask = mask;
8586 +- set_ip_addr->ipv4_dst = exact;
8587 ++ set_ip_addr->ipv4_dst_mask |= mask;
8588 ++ set_ip_addr->ipv4_dst &= ~mask;
8589 ++ set_ip_addr->ipv4_dst |= exact & mask;
8590 + break;
8591 + case offsetof(struct iphdr, saddr):
8592 +- set_ip_addr->ipv4_src_mask = mask;
8593 +- set_ip_addr->ipv4_src = exact;
8594 ++ set_ip_addr->ipv4_src_mask |= mask;
8595 ++ set_ip_addr->ipv4_src &= ~mask;
8596 ++ set_ip_addr->ipv4_src |= exact & mask;
8597 + break;
8598 + default:
8599 + return -EOPNOTSUPP;
8600 +@@ -333,11 +335,12 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
8601 + }
8602 +
8603 + static void
8604 +-nfp_fl_set_ip6_helper(int opcode_tag, int idx, __be32 exact, __be32 mask,
8605 ++nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask,
8606 + struct nfp_fl_set_ipv6_addr *ip6)
8607 + {
8608 +- ip6->ipv6[idx % 4].mask = mask;
8609 +- ip6->ipv6[idx % 4].exact = exact;
8610 ++ ip6->ipv6[word].mask |= mask;
8611 ++ ip6->ipv6[word].exact &= ~mask;
8612 ++ ip6->ipv6[word].exact |= exact & mask;
8613 +
8614 + ip6->reserved = cpu_to_be16(0);
8615 + ip6->head.jump_id = opcode_tag;
8616 +@@ -350,6 +353,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
8617 + struct nfp_fl_set_ipv6_addr *ip_src)
8618 + {
8619 + __be32 exact, mask;
8620 ++ u8 word;
8621 +
8622 + /* We are expecting tcf_pedit to return a big endian value */
8623 + mask = (__force __be32)~tcf_pedit_mask(action, idx);
8624 +@@ -358,17 +362,20 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
8625 + if (exact & ~mask)
8626 + return -EOPNOTSUPP;
8627 +
8628 +- if (off < offsetof(struct ipv6hdr, saddr))
8629 ++ if (off < offsetof(struct ipv6hdr, saddr)) {
8630 + return -EOPNOTSUPP;
8631 +- else if (off < offsetof(struct ipv6hdr, daddr))
8632 +- nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, idx,
8633 ++ } else if (off < offsetof(struct ipv6hdr, daddr)) {
8634 ++ word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact);
8635 ++ nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word,
8636 + exact, mask, ip_src);
8637 +- else if (off < offsetof(struct ipv6hdr, daddr) +
8638 +- sizeof(struct in6_addr))
8639 +- nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, idx,
8640 ++ } else if (off < offsetof(struct ipv6hdr, daddr) +
8641 ++ sizeof(struct in6_addr)) {
8642 ++ word = (off - offsetof(struct ipv6hdr, daddr)) / sizeof(exact);
8643 ++ nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word,
8644 + exact, mask, ip_dst);
8645 +- else
8646 ++ } else {
8647 + return -EOPNOTSUPP;
8648 ++ }
8649 +
8650 + return 0;
8651 + }
8652 +diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
8653 +index db463e20a876..e9a4179e7e48 100644
8654 +--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
8655 ++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
8656 +@@ -96,6 +96,7 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
8657 + {
8658 + struct nfp_pf *pf = devlink_priv(devlink);
8659 + struct nfp_eth_table_port eth_port;
8660 ++ unsigned int lanes;
8661 + int ret;
8662 +
8663 + if (count < 2)
8664 +@@ -114,8 +115,12 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
8665 + goto out;
8666 + }
8667 +
8668 +- ret = nfp_devlink_set_lanes(pf, eth_port.index,
8669 +- eth_port.port_lanes / count);
8670 ++ /* Special case the 100G CXP -> 2x40G split */
8671 ++ lanes = eth_port.port_lanes / count;
8672 ++ if (eth_port.lanes == 10 && count == 2)
8673 ++ lanes = 8 / count;
8674 ++
8675 ++ ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
8676 + out:
8677 + mutex_unlock(&pf->lock);
8678 +
8679 +@@ -128,6 +133,7 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
8680 + {
8681 + struct nfp_pf *pf = devlink_priv(devlink);
8682 + struct nfp_eth_table_port eth_port;
8683 ++ unsigned int lanes;
8684 + int ret;
8685 +
8686 + mutex_lock(&pf->lock);
8687 +@@ -143,7 +149,12 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
8688 + goto out;
8689 + }
8690 +
8691 +- ret = nfp_devlink_set_lanes(pf, eth_port.index, eth_port.port_lanes);
8692 ++ /* Special case the 100G CXP -> 2x40G unsplit */
8693 ++ lanes = eth_port.port_lanes;
8694 ++ if (eth_port.port_lanes == 8)
8695 ++ lanes = 10;
8696 ++
8697 ++ ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
8698 + out:
8699 + mutex_unlock(&pf->lock);
8700 +
8701 +diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
8702 +index b48f76182049..10b075bc5959 100644
8703 +--- a/drivers/net/ethernet/qlogic/qla3xxx.c
8704 ++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
8705 +@@ -380,8 +380,6 @@ static void fm93c56a_select(struct ql3_adapter *qdev)
8706 +
8707 + qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1;
8708 + ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data);
8709 +- ql_write_nvram_reg(qdev, spir,
8710 +- ((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data));
8711 + }
8712 +
8713 + /*
8714 +diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
8715 +index f18087102d40..41bcbdd355f0 100644
8716 +--- a/drivers/net/ethernet/realtek/r8169.c
8717 ++++ b/drivers/net/ethernet/realtek/r8169.c
8718 +@@ -7539,20 +7539,12 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
8719 + {
8720 + unsigned int flags;
8721 +
8722 +- switch (tp->mac_version) {
8723 +- case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
8724 ++ if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
8725 + RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
8726 + RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
8727 + RTL_W8(tp, Cfg9346, Cfg9346_Lock);
8728 + flags = PCI_IRQ_LEGACY;
8729 +- break;
8730 +- case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
8731 +- /* This version was reported to have issues with resume
8732 +- * from suspend when using MSI-X
8733 +- */
8734 +- flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
8735 +- break;
8736 +- default:
8737 ++ } else {
8738 + flags = PCI_IRQ_ALL_TYPES;
8739 + }
8740 +
8741 +diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
8742 +index e080d3e7c582..4d7d53fbc0ef 100644
8743 +--- a/drivers/net/ethernet/socionext/netsec.c
8744 ++++ b/drivers/net/ethernet/socionext/netsec.c
8745 +@@ -945,6 +945,9 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
8746 + dring->head = 0;
8747 + dring->tail = 0;
8748 + dring->pkt_cnt = 0;
8749 ++
8750 ++ if (id == NETSEC_RING_TX)
8751 ++ netdev_reset_queue(priv->ndev);
8752 + }
8753 +
8754 + static void netsec_free_dring(struct netsec_priv *priv, int id)
8755 +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
8756 +index f9a61f90cfbc..0f660af01a4b 100644
8757 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
8758 ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
8759 +@@ -714,8 +714,9 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
8760 + return -ENODEV;
8761 + }
8762 +
8763 +- mdio_internal = of_find_compatible_node(mdio_mux, NULL,
8764 ++ mdio_internal = of_get_compatible_child(mdio_mux,
8765 + "allwinner,sun8i-h3-mdio-internal");
8766 ++ of_node_put(mdio_mux);
8767 + if (!mdio_internal) {
8768 + dev_err(priv->device, "Cannot get internal_mdio node\n");
8769 + return -ENODEV;
8770 +@@ -729,13 +730,20 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
8771 + gmac->rst_ephy = of_reset_control_get_exclusive(iphynode, NULL);
8772 + if (IS_ERR(gmac->rst_ephy)) {
8773 + ret = PTR_ERR(gmac->rst_ephy);
8774 +- if (ret == -EPROBE_DEFER)
8775 ++ if (ret == -EPROBE_DEFER) {
8776 ++ of_node_put(iphynode);
8777 ++ of_node_put(mdio_internal);
8778 + return ret;
8779 ++ }
8780 + continue;
8781 + }
8782 + dev_info(priv->device, "Found internal PHY node\n");
8783 ++ of_node_put(iphynode);
8784 ++ of_node_put(mdio_internal);
8785 + return 0;
8786 + }
8787 ++
8788 ++ of_node_put(mdio_internal);
8789 + return -ENODEV;
8790 + }
8791 +
8792 +diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
8793 +index 4f390fa557e4..8ec02f1a3be8 100644
8794 +--- a/drivers/net/net_failover.c
8795 ++++ b/drivers/net/net_failover.c
8796 +@@ -602,6 +602,9 @@ static int net_failover_slave_unregister(struct net_device *slave_dev,
8797 + primary_dev = rtnl_dereference(nfo_info->primary_dev);
8798 + standby_dev = rtnl_dereference(nfo_info->standby_dev);
8799 +
8800 ++ if (WARN_ON_ONCE(slave_dev != primary_dev && slave_dev != standby_dev))
8801 ++ return -ENODEV;
8802 ++
8803 + vlan_vids_del_by_dev(slave_dev, failover_dev);
8804 + dev_uc_unsync(slave_dev, failover_dev);
8805 + dev_mc_unsync(slave_dev, failover_dev);
8806 +diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
8807 +index 5827fccd4f29..44a0770de142 100644
8808 +--- a/drivers/net/phy/phylink.c
8809 ++++ b/drivers/net/phy/phylink.c
8810 +@@ -907,6 +907,9 @@ void phylink_start(struct phylink *pl)
8811 + phylink_an_mode_str(pl->link_an_mode),
8812 + phy_modes(pl->link_config.interface));
8813 +
8814 ++ /* Always set the carrier off */
8815 ++ netif_carrier_off(pl->netdev);
8816 ++
8817 + /* Apply the link configuration to the MAC when starting. This allows
8818 + * a fixed-link to start with the correct parameters, and also
8819 + * ensures that we set the appropriate advertisement for Serdes links.
8820 +diff --git a/drivers/net/tun.c b/drivers/net/tun.c
8821 +index 725dd63f8413..546081993ecf 100644
8822 +--- a/drivers/net/tun.c
8823 ++++ b/drivers/net/tun.c
8824 +@@ -2304,6 +2304,8 @@ static void tun_setup(struct net_device *dev)
8825 + static int tun_validate(struct nlattr *tb[], struct nlattr *data[],
8826 + struct netlink_ext_ack *extack)
8827 + {
8828 ++ if (!data)
8829 ++ return 0;
8830 + return -EINVAL;
8831 + }
8832 +
8833 +diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
8834 +index 2319f79b34f0..e6d23b6895bd 100644
8835 +--- a/drivers/net/wireless/ath/ath10k/wmi.c
8836 ++++ b/drivers/net/wireless/ath/ath10k/wmi.c
8837 +@@ -1869,6 +1869,12 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
8838 + if (ret)
8839 + dev_kfree_skb_any(skb);
8840 +
8841 ++ if (ret == -EAGAIN) {
8842 ++ ath10k_warn(ar, "wmi command %d timeout, restarting hardware\n",
8843 ++ cmd_id);
8844 ++ queue_work(ar->workqueue, &ar->restart_work);
8845 ++ }
8846 ++
8847 + return ret;
8848 + }
8849 +
8850 +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
8851 +index d8b79cb72b58..e7584b842dce 100644
8852 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
8853 ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
8854 +@@ -77,6 +77,8 @@ static u16 d11ac_bw(enum brcmu_chan_bw bw)
8855 + return BRCMU_CHSPEC_D11AC_BW_40;
8856 + case BRCMU_CHAN_BW_80:
8857 + return BRCMU_CHSPEC_D11AC_BW_80;
8858 ++ case BRCMU_CHAN_BW_160:
8859 ++ return BRCMU_CHSPEC_D11AC_BW_160;
8860 + default:
8861 + WARN_ON(1);
8862 + }
8863 +@@ -190,8 +192,38 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch)
8864 + break;
8865 + }
8866 + break;
8867 +- case BRCMU_CHSPEC_D11AC_BW_8080:
8868 + case BRCMU_CHSPEC_D11AC_BW_160:
8869 ++ switch (ch->sb) {
8870 ++ case BRCMU_CHAN_SB_LLL:
8871 ++ ch->control_ch_num -= CH_70MHZ_APART;
8872 ++ break;
8873 ++ case BRCMU_CHAN_SB_LLU:
8874 ++ ch->control_ch_num -= CH_50MHZ_APART;
8875 ++ break;
8876 ++ case BRCMU_CHAN_SB_LUL:
8877 ++ ch->control_ch_num -= CH_30MHZ_APART;
8878 ++ break;
8879 ++ case BRCMU_CHAN_SB_LUU:
8880 ++ ch->control_ch_num -= CH_10MHZ_APART;
8881 ++ break;
8882 ++ case BRCMU_CHAN_SB_ULL:
8883 ++ ch->control_ch_num += CH_10MHZ_APART;
8884 ++ break;
8885 ++ case BRCMU_CHAN_SB_ULU:
8886 ++ ch->control_ch_num += CH_30MHZ_APART;
8887 ++ break;
8888 ++ case BRCMU_CHAN_SB_UUL:
8889 ++ ch->control_ch_num += CH_50MHZ_APART;
8890 ++ break;
8891 ++ case BRCMU_CHAN_SB_UUU:
8892 ++ ch->control_ch_num += CH_70MHZ_APART;
8893 ++ break;
8894 ++ default:
8895 ++ WARN_ON_ONCE(1);
8896 ++ break;
8897 ++ }
8898 ++ break;
8899 ++ case BRCMU_CHSPEC_D11AC_BW_8080:
8900 + default:
8901 + WARN_ON_ONCE(1);
8902 + break;
8903 +diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
8904 +index 7b9a77981df1..75b2a0438cfa 100644
8905 +--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
8906 ++++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
8907 +@@ -29,6 +29,8 @@
8908 + #define CH_UPPER_SB 0x01
8909 + #define CH_LOWER_SB 0x02
8910 + #define CH_EWA_VALID 0x04
8911 ++#define CH_70MHZ_APART 14
8912 ++#define CH_50MHZ_APART 10
8913 + #define CH_30MHZ_APART 6
8914 + #define CH_20MHZ_APART 4
8915 + #define CH_10MHZ_APART 2
8916 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
8917 +index 866c91c923be..dd674dcf1a0a 100644
8918 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
8919 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
8920 +@@ -669,8 +669,12 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm)
8921 + enabled = !!(wifi_pkg->package.elements[1].integer.value);
8922 + n_profiles = wifi_pkg->package.elements[2].integer.value;
8923 +
8924 +- /* in case of BIOS bug */
8925 +- if (n_profiles <= 0) {
8926 ++ /*
8927 ++ * Check the validity of n_profiles. The EWRD profiles start
8928 ++ * from index 1, so the maximum value allowed here is
8929 ++ * ACPI_SAR_PROFILES_NUM - 1.
8930 ++ */
8931 ++ if (n_profiles <= 0 || n_profiles >= ACPI_SAR_PROFILE_NUM) {
8932 + ret = -EINVAL;
8933 + goto out_free;
8934 + }
8935 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
8936 +index a6e072234398..da45dc972889 100644
8937 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
8938 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
8939 +@@ -1232,12 +1232,15 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
8940 + iwl_mvm_del_aux_sta(mvm);
8941 +
8942 + /*
8943 +- * Clear IN_HW_RESTART flag when stopping the hw (as restart_complete()
8944 +- * won't be called in this case).
8945 ++ * Clear IN_HW_RESTART and HW_RESTART_REQUESTED flag when stopping the
8946 ++ * hw (as restart_complete() won't be called in this case) and mac80211
8947 ++ * won't execute the restart.
8948 + * But make sure to cleanup interfaces that have gone down before/during
8949 + * HW restart was requested.
8950 + */
8951 +- if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
8952 ++ if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
8953 ++ test_and_clear_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
8954 ++ &mvm->status))
8955 + ieee80211_iterate_interfaces(mvm->hw, 0,
8956 + iwl_mvm_cleanup_iterator, mvm);
8957 +
8958 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
8959 +index 642da10b0b7f..fccb3a4f9d57 100644
8960 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
8961 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
8962 +@@ -1218,7 +1218,11 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
8963 + !(info->flags & IEEE80211_TX_STAT_AMPDU))
8964 + return;
8965 +
8966 +- rs_rate_from_ucode_rate(tx_resp_hwrate, info->band, &tx_resp_rate);
8967 ++ if (rs_rate_from_ucode_rate(tx_resp_hwrate, info->band,
8968 ++ &tx_resp_rate)) {
8969 ++ WARN_ON_ONCE(1);
8970 ++ return;
8971 ++ }
8972 +
8973 + #ifdef CONFIG_MAC80211_DEBUGFS
8974 + /* Disable last tx check if we are debugging with fixed rate but
8975 +@@ -1269,7 +1273,10 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
8976 + */
8977 + table = &lq_sta->lq;
8978 + lq_hwrate = le32_to_cpu(table->rs_table[0]);
8979 +- rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate);
8980 ++ if (rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate)) {
8981 ++ WARN_ON_ONCE(1);
8982 ++ return;
8983 ++ }
8984 +
8985 + /* Here we actually compare this rate to the latest LQ command */
8986 + if (lq_color != LQ_FLAG_COLOR_GET(table->flags)) {
8987 +@@ -1371,8 +1378,12 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
8988 + /* Collect data for each rate used during failed TX attempts */
8989 + for (i = 0; i <= retries; ++i) {
8990 + lq_hwrate = le32_to_cpu(table->rs_table[i]);
8991 +- rs_rate_from_ucode_rate(lq_hwrate, info->band,
8992 +- &lq_rate);
8993 ++ if (rs_rate_from_ucode_rate(lq_hwrate, info->band,
8994 ++ &lq_rate)) {
8995 ++ WARN_ON_ONCE(1);
8996 ++ return;
8997 ++ }
8998 ++
8999 + /*
9000 + * Only collect stats if retried rate is in the same RS
9001 + * table as active/search.
9002 +@@ -3241,7 +3252,10 @@ static void rs_build_rates_table_from_fixed(struct iwl_mvm *mvm,
9003 + for (i = 0; i < num_rates; i++)
9004 + lq_cmd->rs_table[i] = ucode_rate_le32;
9005 +
9006 +- rs_rate_from_ucode_rate(ucode_rate, band, &rate);
9007 ++ if (rs_rate_from_ucode_rate(ucode_rate, band, &rate)) {
9008 ++ WARN_ON_ONCE(1);
9009 ++ return;
9010 ++ }
9011 +
9012 + if (is_mimo(&rate))
9013 + lq_cmd->mimo_delim = num_rates - 1;
9014 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
9015 +index cf2591f2ac23..2d35b70de2ab 100644
9016 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
9017 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
9018 +@@ -1385,6 +1385,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
9019 + while (!skb_queue_empty(&skbs)) {
9020 + struct sk_buff *skb = __skb_dequeue(&skbs);
9021 + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
9022 ++ struct ieee80211_hdr *hdr = (void *)skb->data;
9023 + bool flushed = false;
9024 +
9025 + skb_freed++;
9026 +@@ -1429,11 +1430,11 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
9027 + info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK;
9028 + info->flags &= ~IEEE80211_TX_CTL_AMPDU;
9029 +
9030 +- /* W/A FW bug: seq_ctl is wrong when the status isn't success */
9031 +- if (status != TX_STATUS_SUCCESS) {
9032 +- struct ieee80211_hdr *hdr = (void *)skb->data;
9033 ++ /* W/A FW bug: seq_ctl is wrong upon failure / BAR frame */
9034 ++ if (ieee80211_is_back_req(hdr->frame_control))
9035 ++ seq_ctl = 0;
9036 ++ else if (status != TX_STATUS_SUCCESS)
9037 + seq_ctl = le16_to_cpu(hdr->seq_ctrl);
9038 +- }
9039 +
9040 + if (unlikely(!seq_ctl)) {
9041 + struct ieee80211_hdr *hdr = (void *)skb->data;
9042 +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
9043 +index d15f5ba2dc77..cb5631c85d16 100644
9044 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
9045 ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
9046 +@@ -1050,6 +1050,14 @@ void iwl_pcie_rx_free(struct iwl_trans *trans)
9047 + kfree(trans_pcie->rxq);
9048 + }
9049 +
9050 ++static void iwl_pcie_rx_move_to_allocator(struct iwl_rxq *rxq,
9051 ++ struct iwl_rb_allocator *rba)
9052 ++{
9053 ++ spin_lock(&rba->lock);
9054 ++ list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
9055 ++ spin_unlock(&rba->lock);
9056 ++}
9057 ++
9058 + /*
9059 + * iwl_pcie_rx_reuse_rbd - Recycle used RBDs
9060 + *
9061 +@@ -1081,9 +1089,7 @@ static void iwl_pcie_rx_reuse_rbd(struct iwl_trans *trans,
9062 + if ((rxq->used_count % RX_CLAIM_REQ_ALLOC) == RX_POST_REQ_ALLOC) {
9063 + /* Move the 2 RBDs to the allocator ownership.
9064 + Allocator has another 6 from pool for the request completion*/
9065 +- spin_lock(&rba->lock);
9066 +- list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
9067 +- spin_unlock(&rba->lock);
9068 ++ iwl_pcie_rx_move_to_allocator(rxq, rba);
9069 +
9070 + atomic_inc(&rba->req_pending);
9071 + queue_work(rba->alloc_wq, &rba->rx_alloc);
9072 +@@ -1261,10 +1267,18 @@ restart:
9073 + IWL_DEBUG_RX(trans, "Q %d: HW = SW = %d\n", rxq->id, r);
9074 +
9075 + while (i != r) {
9076 ++ struct iwl_rb_allocator *rba = &trans_pcie->rba;
9077 + struct iwl_rx_mem_buffer *rxb;
9078 +-
9079 +- if (unlikely(rxq->used_count == rxq->queue_size / 2))
9080 ++ /* number of RBDs still waiting for page allocation */
9081 ++ u32 rb_pending_alloc =
9082 ++ atomic_read(&trans_pcie->rba.req_pending) *
9083 ++ RX_CLAIM_REQ_ALLOC;
9084 ++
9085 ++ if (unlikely(rb_pending_alloc >= rxq->queue_size / 2 &&
9086 ++ !emergency)) {
9087 ++ iwl_pcie_rx_move_to_allocator(rxq, rba);
9088 + emergency = true;
9089 ++ }
9090 +
9091 + if (trans->cfg->mq_rx_supported) {
9092 + /*
9093 +@@ -1307,17 +1321,13 @@ restart:
9094 + iwl_pcie_rx_allocator_get(trans, rxq);
9095 +
9096 + if (rxq->used_count % RX_CLAIM_REQ_ALLOC == 0 && !emergency) {
9097 +- struct iwl_rb_allocator *rba = &trans_pcie->rba;
9098 +-
9099 + /* Add the remaining empty RBDs for allocator use */
9100 +- spin_lock(&rba->lock);
9101 +- list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
9102 +- spin_unlock(&rba->lock);
9103 ++ iwl_pcie_rx_move_to_allocator(rxq, rba);
9104 + } else if (emergency) {
9105 + count++;
9106 + if (count == 8) {
9107 + count = 0;
9108 +- if (rxq->used_count < rxq->queue_size / 3)
9109 ++ if (rb_pending_alloc < rxq->queue_size / 3)
9110 + emergency = false;
9111 +
9112 + rxq->read = i;
9113 +diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
9114 +index ffea610f67e2..10ba94c2b35b 100644
9115 +--- a/drivers/net/wireless/marvell/libertas/if_usb.c
9116 ++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
9117 +@@ -456,8 +456,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
9118 + MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn,
9119 + cardp);
9120 +
9121 +- cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
9122 +-
9123 + lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb);
9124 + if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) {
9125 + lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret);
9126 +diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
9127 +index 8985446570bd..190c699d6e3b 100644
9128 +--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
9129 ++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
9130 +@@ -725,8 +725,7 @@ __mt76x2_mac_set_beacon(struct mt76x2_dev *dev, u8 bcn_idx, struct sk_buff *skb)
9131 + if (skb) {
9132 + ret = mt76_write_beacon(dev, beacon_addr, skb);
9133 + if (!ret)
9134 +- dev->beacon_data_mask |= BIT(bcn_idx) &
9135 +- dev->beacon_mask;
9136 ++ dev->beacon_data_mask |= BIT(bcn_idx);
9137 + } else {
9138 + dev->beacon_data_mask &= ~BIT(bcn_idx);
9139 + for (i = 0; i < beacon_len; i += 4)
9140 +diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
9141 +index 6ce6b754df12..45a1b86491b6 100644
9142 +--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
9143 ++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
9144 +@@ -266,15 +266,17 @@ static void rsi_rx_done_handler(struct urb *urb)
9145 + if (urb->status)
9146 + goto out;
9147 +
9148 +- if (urb->actual_length <= 0) {
9149 +- rsi_dbg(INFO_ZONE, "%s: Zero length packet\n", __func__);
9150 ++ if (urb->actual_length <= 0 ||
9151 ++ urb->actual_length > rx_cb->rx_skb->len) {
9152 ++ rsi_dbg(INFO_ZONE, "%s: Invalid packet length = %d\n",
9153 ++ __func__, urb->actual_length);
9154 + goto out;
9155 + }
9156 + if (skb_queue_len(&dev->rx_q) >= RSI_MAX_RX_PKTS) {
9157 + rsi_dbg(INFO_ZONE, "Max RX packets reached\n");
9158 + goto out;
9159 + }
9160 +- skb_put(rx_cb->rx_skb, urb->actual_length);
9161 ++ skb_trim(rx_cb->rx_skb, urb->actual_length);
9162 + skb_queue_tail(&dev->rx_q, rx_cb->rx_skb);
9163 +
9164 + rsi_set_event(&dev->rx_thread.event);
9165 +@@ -308,6 +310,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
9166 + if (!skb)
9167 + return -ENOMEM;
9168 + skb_reserve(skb, MAX_DWORD_ALIGN_BYTES);
9169 ++ skb_put(skb, RSI_MAX_RX_USB_PKT_SIZE - MAX_DWORD_ALIGN_BYTES);
9170 + dword_align_bytes = (unsigned long)skb->data & 0x3f;
9171 + if (dword_align_bytes > 0)
9172 + skb_push(skb, dword_align_bytes);
9173 +@@ -319,7 +322,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
9174 + usb_rcvbulkpipe(dev->usbdev,
9175 + dev->bulkin_endpoint_addr[ep_num - 1]),
9176 + urb->transfer_buffer,
9177 +- RSI_MAX_RX_USB_PKT_SIZE,
9178 ++ skb->len,
9179 + rsi_rx_done_handler,
9180 + rx_cb);
9181 +
9182 +diff --git a/drivers/nfc/nfcmrvl/uart.c b/drivers/nfc/nfcmrvl/uart.c
9183 +index 91162f8e0366..9a22056e8d9e 100644
9184 +--- a/drivers/nfc/nfcmrvl/uart.c
9185 ++++ b/drivers/nfc/nfcmrvl/uart.c
9186 +@@ -73,10 +73,9 @@ static int nfcmrvl_uart_parse_dt(struct device_node *node,
9187 + struct device_node *matched_node;
9188 + int ret;
9189 +
9190 +- matched_node = of_find_compatible_node(node, NULL, "marvell,nfc-uart");
9191 ++ matched_node = of_get_compatible_child(node, "marvell,nfc-uart");
9192 + if (!matched_node) {
9193 +- matched_node = of_find_compatible_node(node, NULL,
9194 +- "mrvl,nfc-uart");
9195 ++ matched_node = of_get_compatible_child(node, "mrvl,nfc-uart");
9196 + if (!matched_node)
9197 + return -ENODEV;
9198 + }
9199 +diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
9200 +index 8aae6dcc839f..9148015ed803 100644
9201 +--- a/drivers/nvdimm/bus.c
9202 ++++ b/drivers/nvdimm/bus.c
9203 +@@ -488,6 +488,8 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
9204 + put_device(dev);
9205 + }
9206 + put_device(dev);
9207 ++ if (dev->parent)
9208 ++ put_device(dev->parent);
9209 + }
9210 +
9211 + static void nd_async_device_unregister(void *d, async_cookie_t cookie)
9212 +@@ -507,6 +509,8 @@ void __nd_device_register(struct device *dev)
9213 + if (!dev)
9214 + return;
9215 + dev->bus = &nvdimm_bus_type;
9216 ++ if (dev->parent)
9217 ++ get_device(dev->parent);
9218 + get_device(dev);
9219 + async_schedule_domain(nd_async_device_register, dev,
9220 + &nd_async_domain);
9221 +diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
9222 +index 8b1fd7f1a224..2245cfb8c6ab 100644
9223 +--- a/drivers/nvdimm/pmem.c
9224 ++++ b/drivers/nvdimm/pmem.c
9225 +@@ -393,9 +393,11 @@ static int pmem_attach_disk(struct device *dev,
9226 + addr = devm_memremap_pages(dev, &pmem->pgmap);
9227 + pmem->pfn_flags |= PFN_MAP;
9228 + memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
9229 +- } else
9230 ++ } else {
9231 + addr = devm_memremap(dev, pmem->phys_addr,
9232 + pmem->size, ARCH_MEMREMAP_PMEM);
9233 ++ memcpy(&bb_res, &nsio->res, sizeof(bb_res));
9234 ++ }
9235 +
9236 + /*
9237 + * At release time the queue must be frozen before
9238 +diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
9239 +index c30d5af02cc2..63cb01ef4ef0 100644
9240 +--- a/drivers/nvdimm/region_devs.c
9241 ++++ b/drivers/nvdimm/region_devs.c
9242 +@@ -545,10 +545,17 @@ static ssize_t region_badblocks_show(struct device *dev,
9243 + struct device_attribute *attr, char *buf)
9244 + {
9245 + struct nd_region *nd_region = to_nd_region(dev);
9246 ++ ssize_t rc;
9247 +
9248 +- return badblocks_show(&nd_region->bb, buf, 0);
9249 +-}
9250 ++ device_lock(dev);
9251 ++ if (dev->driver)
9252 ++ rc = badblocks_show(&nd_region->bb, buf, 0);
9253 ++ else
9254 ++ rc = -ENXIO;
9255 ++ device_unlock(dev);
9256 +
9257 ++ return rc;
9258 ++}
9259 + static DEVICE_ATTR(badblocks, 0444, region_badblocks_show, NULL);
9260 +
9261 + static ssize_t resource_show(struct device *dev,
9262 +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
9263 +index bf65501e6ed6..f1f375fb362b 100644
9264 +--- a/drivers/nvme/host/core.c
9265 ++++ b/drivers/nvme/host/core.c
9266 +@@ -3119,8 +3119,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
9267 + }
9268 +
9269 + mutex_lock(&ns->ctrl->subsys->lock);
9270 +- nvme_mpath_clear_current_path(ns);
9271 + list_del_rcu(&ns->siblings);
9272 ++ nvme_mpath_clear_current_path(ns);
9273 + mutex_unlock(&ns->ctrl->subsys->lock);
9274 +
9275 + down_write(&ns->ctrl->namespaces_rwsem);
9276 +diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
9277 +index 514d1dfc5630..122b52d0ebfd 100644
9278 +--- a/drivers/nvmem/core.c
9279 ++++ b/drivers/nvmem/core.c
9280 +@@ -518,11 +518,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
9281 + goto err_device_del;
9282 + }
9283 +
9284 +- if (config->cells)
9285 +- nvmem_add_cells(nvmem, config->cells, config->ncells);
9286 ++ if (config->cells) {
9287 ++ rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
9288 ++ if (rval)
9289 ++ goto err_teardown_compat;
9290 ++ }
9291 +
9292 + return nvmem;
9293 +
9294 ++err_teardown_compat:
9295 ++ if (config->compat)
9296 ++ device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
9297 + err_device_del:
9298 + device_del(&nvmem->dev);
9299 + err_put_device:
9300 +diff --git a/drivers/opp/of.c b/drivers/opp/of.c
9301 +index 7af0ddec936b..20988c426650 100644
9302 +--- a/drivers/opp/of.c
9303 ++++ b/drivers/opp/of.c
9304 +@@ -425,6 +425,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
9305 + dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
9306 + count, pstate_count);
9307 + ret = -ENOENT;
9308 ++ _dev_pm_opp_remove_table(opp_table, dev, false);
9309 + goto put_opp_table;
9310 + }
9311 +
9312 +diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
9313 +index 345aab56ce8b..78ed6cc8d521 100644
9314 +--- a/drivers/pci/controller/dwc/pci-dra7xx.c
9315 ++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
9316 +@@ -542,7 +542,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
9317 + };
9318 +
9319 + /*
9320 +- * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
9321 ++ * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
9322 + * @dra7xx: the dra7xx device where the workaround should be applied
9323 + *
9324 + * Access to the PCIe slave port that are not 32-bit aligned will result
9325 +@@ -552,7 +552,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
9326 + *
9327 + * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1.
9328 + */
9329 +-static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev)
9330 ++static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
9331 + {
9332 + int ret;
9333 + struct device_node *np = dev->of_node;
9334 +@@ -704,6 +704,11 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
9335 +
9336 + dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
9337 + DEVICE_TYPE_RC);
9338 ++
9339 ++ ret = dra7xx_pcie_unaligned_memaccess(dev);
9340 ++ if (ret)
9341 ++ dev_err(dev, "WA for Errata i870 not applied\n");
9342 ++
9343 + ret = dra7xx_add_pcie_port(dra7xx, pdev);
9344 + if (ret < 0)
9345 + goto err_gpio;
9346 +@@ -717,7 +722,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
9347 + dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
9348 + DEVICE_TYPE_EP);
9349 +
9350 +- ret = dra7xx_pcie_ep_unaligned_memaccess(dev);
9351 ++ ret = dra7xx_pcie_unaligned_memaccess(dev);
9352 + if (ret)
9353 + goto err_gpio;
9354 +
9355 +diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c
9356 +index e3fe4124e3af..a67dc91261f5 100644
9357 +--- a/drivers/pci/controller/pcie-cadence-ep.c
9358 ++++ b/drivers/pci/controller/pcie-cadence-ep.c
9359 +@@ -259,7 +259,6 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
9360 + u8 intx, bool is_asserted)
9361 + {
9362 + struct cdns_pcie *pcie = &ep->pcie;
9363 +- u32 r = ep->max_regions - 1;
9364 + u32 offset;
9365 + u16 status;
9366 + u8 msg_code;
9367 +@@ -269,8 +268,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
9368 + /* Set the outbound region if needed. */
9369 + if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
9370 + ep->irq_pci_fn != fn)) {
9371 +- /* Last region was reserved for IRQ writes. */
9372 +- cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r,
9373 ++ /* First region was reserved for IRQ writes. */
9374 ++ cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0,
9375 + ep->irq_phys_addr);
9376 + ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
9377 + ep->irq_pci_fn = fn;
9378 +@@ -348,8 +347,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
9379 + /* Set the outbound region if needed. */
9380 + if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
9381 + ep->irq_pci_fn != fn)) {
9382 +- /* Last region was reserved for IRQ writes. */
9383 +- cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1,
9384 ++ /* First region was reserved for IRQ writes. */
9385 ++ cdns_pcie_set_outbound_region(pcie, fn, 0,
9386 + false,
9387 + ep->irq_phys_addr,
9388 + pci_addr & ~pci_addr_mask,
9389 +@@ -510,6 +509,8 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
9390 + goto free_epc_mem;
9391 + }
9392 + ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
9393 ++ /* Reserve region 0 for IRQs */
9394 ++ set_bit(0, &ep->ob_region_map);
9395 +
9396 + return 0;
9397 +
9398 +diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
9399 +index 861dda69f366..c5ff6ca65eab 100644
9400 +--- a/drivers/pci/controller/pcie-mediatek.c
9401 ++++ b/drivers/pci/controller/pcie-mediatek.c
9402 +@@ -337,6 +337,17 @@ static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus,
9403 + {
9404 + struct mtk_pcie *pcie = bus->sysdata;
9405 + struct mtk_pcie_port *port;
9406 ++ struct pci_dev *dev = NULL;
9407 ++
9408 ++ /*
9409 ++ * Walk the bus hierarchy to get the devfn value
9410 ++ * of the port in the root bus.
9411 ++ */
9412 ++ while (bus && bus->number) {
9413 ++ dev = bus->self;
9414 ++ bus = dev->bus;
9415 ++ devfn = dev->devfn;
9416 ++ }
9417 +
9418 + list_for_each_entry(port, &pcie->ports, list)
9419 + if (port->slot == PCI_SLOT(devfn))
9420 +diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
9421 +index 942b64fc7f1f..fd2dbd7eed7b 100644
9422 +--- a/drivers/pci/controller/vmd.c
9423 ++++ b/drivers/pci/controller/vmd.c
9424 +@@ -197,9 +197,20 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
9425 + int i, best = 1;
9426 + unsigned long flags;
9427 +
9428 +- if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1)
9429 ++ if (vmd->msix_count == 1)
9430 + return &vmd->irqs[0];
9431 +
9432 ++ /*
9433 ++ * White list for fast-interrupt handlers. All others will share the
9434 ++ * "slow" interrupt vector.
9435 ++ */
9436 ++ switch (msi_desc_to_pci_dev(desc)->class) {
9437 ++ case PCI_CLASS_STORAGE_EXPRESS:
9438 ++ break;
9439 ++ default:
9440 ++ return &vmd->irqs[0];
9441 ++ }
9442 ++
9443 + raw_spin_lock_irqsave(&list_lock, flags);
9444 + for (i = 1; i < vmd->msix_count; i++)
9445 + if (vmd->irqs[i].count < vmd->irqs[best].count)
9446 +diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
9447 +index 4d88afdfc843..f7b7cb7189eb 100644
9448 +--- a/drivers/pci/msi.c
9449 ++++ b/drivers/pci/msi.c
9450 +@@ -958,7 +958,6 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
9451 + }
9452 + }
9453 + }
9454 +- WARN_ON(!!dev->msix_enabled);
9455 +
9456 + /* Check whether driver already requested for MSI irq */
9457 + if (dev->msi_enabled) {
9458 +@@ -1028,8 +1027,6 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
9459 + if (!pci_msi_supported(dev, minvec))
9460 + return -EINVAL;
9461 +
9462 +- WARN_ON(!!dev->msi_enabled);
9463 +-
9464 + /* Check whether driver already requested MSI-X irqs */
9465 + if (dev->msix_enabled) {
9466 + pci_info(dev, "can't enable MSI (MSI-X already enabled)\n");
9467 +@@ -1039,6 +1036,9 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
9468 + if (maxvec < minvec)
9469 + return -ERANGE;
9470 +
9471 ++ if (WARN_ON_ONCE(dev->msi_enabled))
9472 ++ return -EINVAL;
9473 ++
9474 + nvec = pci_msi_vec_count(dev);
9475 + if (nvec < 0)
9476 + return nvec;
9477 +@@ -1087,6 +1087,9 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
9478 + if (maxvec < minvec)
9479 + return -ERANGE;
9480 +
9481 ++ if (WARN_ON_ONCE(dev->msix_enabled))
9482 ++ return -EINVAL;
9483 ++
9484 + for (;;) {
9485 + if (affd) {
9486 + nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
9487 +diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
9488 +index 5d1698265da5..d2b04ab37308 100644
9489 +--- a/drivers/pci/pci-acpi.c
9490 ++++ b/drivers/pci/pci-acpi.c
9491 +@@ -779,19 +779,33 @@ static void pci_acpi_setup(struct device *dev)
9492 + return;
9493 +
9494 + device_set_wakeup_capable(dev, true);
9495 ++ /*
9496 ++ * For bridges that can do D3 we enable wake automatically (as
9497 ++ * we do for the power management itself in that case). The
9498 ++ * reason is that the bridge may have additional methods such as
9499 ++ * _DSW that need to be called.
9500 ++ */
9501 ++ if (pci_dev->bridge_d3)
9502 ++ device_wakeup_enable(dev);
9503 ++
9504 + acpi_pci_wakeup(pci_dev, false);
9505 + }
9506 +
9507 + static void pci_acpi_cleanup(struct device *dev)
9508 + {
9509 + struct acpi_device *adev = ACPI_COMPANION(dev);
9510 ++ struct pci_dev *pci_dev = to_pci_dev(dev);
9511 +
9512 + if (!adev)
9513 + return;
9514 +
9515 + pci_acpi_remove_pm_notifier(adev);
9516 +- if (adev->wakeup.flags.valid)
9517 ++ if (adev->wakeup.flags.valid) {
9518 ++ if (pci_dev->bridge_d3)
9519 ++ device_wakeup_disable(dev);
9520 ++
9521 + device_set_wakeup_capable(dev, false);
9522 ++ }
9523 + }
9524 +
9525 + static bool pci_acpi_bus_match(struct device *dev)
9526 +diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
9527 +index c687c817b47d..6322c3f446bc 100644
9528 +--- a/drivers/pci/pcie/aspm.c
9529 ++++ b/drivers/pci/pcie/aspm.c
9530 +@@ -991,7 +991,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
9531 + * All PCIe functions are in one slot, remove one function will remove
9532 + * the whole slot, so just wait until we are the last function left.
9533 + */
9534 +- if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices))
9535 ++ if (!list_empty(&parent->subordinate->devices))
9536 + goto out;
9537 +
9538 + link = parent->link_state;
9539 +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
9540 +index d1e2d175c10b..a4d11d14b196 100644
9541 +--- a/drivers/pci/quirks.c
9542 ++++ b/drivers/pci/quirks.c
9543 +@@ -3177,7 +3177,11 @@ static void disable_igfx_irq(struct pci_dev *dev)
9544 +
9545 + pci_iounmap(dev, regs);
9546 + }
9547 ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq);
9548 ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq);
9549 ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq);
9550 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
9551 ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq);
9552 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
9553 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
9554 +
9555 +diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
9556 +index 5e3d0dced2b8..b08945a7bbfd 100644
9557 +--- a/drivers/pci/remove.c
9558 ++++ b/drivers/pci/remove.c
9559 +@@ -26,9 +26,6 @@ static void pci_stop_dev(struct pci_dev *dev)
9560 +
9561 + pci_dev_assign_added(dev, false);
9562 + }
9563 +-
9564 +- if (dev->bus->self)
9565 +- pcie_aspm_exit_link_state(dev);
9566 + }
9567 +
9568 + static void pci_destroy_dev(struct pci_dev *dev)
9569 +@@ -42,6 +39,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
9570 + list_del(&dev->bus_list);
9571 + up_write(&pci_bus_sem);
9572 +
9573 ++ pcie_aspm_exit_link_state(dev);
9574 + pci_bridge_d3_update(dev);
9575 + pci_free_resources(dev);
9576 + put_device(&dev->dev);
9577 +diff --git a/drivers/pcmcia/ricoh.h b/drivers/pcmcia/ricoh.h
9578 +index 01098c841f87..8ac7b138c094 100644
9579 +--- a/drivers/pcmcia/ricoh.h
9580 ++++ b/drivers/pcmcia/ricoh.h
9581 +@@ -119,6 +119,10 @@
9582 + #define RL5C4XX_MISC_CONTROL 0x2F /* 8 bit */
9583 + #define RL5C4XX_ZV_ENABLE 0x08
9584 +
9585 ++/* Misc Control 3 Register */
9586 ++#define RL5C4XX_MISC3 0x00A2 /* 16 bit */
9587 ++#define RL5C47X_MISC3_CB_CLKRUN_DIS BIT(1)
9588 ++
9589 + #ifdef __YENTA_H
9590 +
9591 + #define rl_misc(socket) ((socket)->private[0])
9592 +@@ -156,6 +160,35 @@ static void ricoh_set_zv(struct yenta_socket *socket)
9593 + }
9594 + }
9595 +
9596 ++static void ricoh_set_clkrun(struct yenta_socket *socket, bool quiet)
9597 ++{
9598 ++ u16 misc3;
9599 ++
9600 ++ /*
9601 ++ * RL5C475II likely has this setting, too, however no datasheet
9602 ++ * is publicly available for this chip
9603 ++ */
9604 ++ if (socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C476 &&
9605 ++ socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C478)
9606 ++ return;
9607 ++
9608 ++ if (socket->dev->revision < 0x80)
9609 ++ return;
9610 ++
9611 ++ misc3 = config_readw(socket, RL5C4XX_MISC3);
9612 ++ if (misc3 & RL5C47X_MISC3_CB_CLKRUN_DIS) {
9613 ++ if (!quiet)
9614 ++ dev_dbg(&socket->dev->dev,
9615 ++ "CLKRUN feature already disabled\n");
9616 ++ } else if (disable_clkrun) {
9617 ++ if (!quiet)
9618 ++ dev_info(&socket->dev->dev,
9619 ++ "Disabling CLKRUN feature\n");
9620 ++ misc3 |= RL5C47X_MISC3_CB_CLKRUN_DIS;
9621 ++ config_writew(socket, RL5C4XX_MISC3, misc3);
9622 ++ }
9623 ++}
9624 ++
9625 + static void ricoh_save_state(struct yenta_socket *socket)
9626 + {
9627 + rl_misc(socket) = config_readw(socket, RL5C4XX_MISC);
9628 +@@ -172,6 +205,7 @@ static void ricoh_restore_state(struct yenta_socket *socket)
9629 + config_writew(socket, RL5C4XX_16BIT_IO_0, rl_io(socket));
9630 + config_writew(socket, RL5C4XX_16BIT_MEM_0, rl_mem(socket));
9631 + config_writew(socket, RL5C4XX_CONFIG, rl_config(socket));
9632 ++ ricoh_set_clkrun(socket, true);
9633 + }
9634 +
9635 +
9636 +@@ -197,6 +231,7 @@ static int ricoh_override(struct yenta_socket *socket)
9637 + config_writew(socket, RL5C4XX_CONFIG, config);
9638 +
9639 + ricoh_set_zv(socket);
9640 ++ ricoh_set_clkrun(socket, false);
9641 +
9642 + return 0;
9643 + }
9644 +diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
9645 +index ab3da2262f0f..ac6a3f46b1e6 100644
9646 +--- a/drivers/pcmcia/yenta_socket.c
9647 ++++ b/drivers/pcmcia/yenta_socket.c
9648 +@@ -26,7 +26,8 @@
9649 +
9650 + static bool disable_clkrun;
9651 + module_param(disable_clkrun, bool, 0444);
9652 +-MODULE_PARM_DESC(disable_clkrun, "If PC card doesn't function properly, please try this option");
9653 ++MODULE_PARM_DESC(disable_clkrun,
9654 ++ "If PC card doesn't function properly, please try this option (TI and Ricoh bridges only)");
9655 +
9656 + static bool isa_probe = 1;
9657 + module_param(isa_probe, bool, 0444);
9658 +diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
9659 +index 6556dbeae65e..ac251c62bc66 100644
9660 +--- a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
9661 ++++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
9662 +@@ -319,6 +319,8 @@ static int pmic_mpp_set_mux(struct pinctrl_dev *pctldev, unsigned function,
9663 + pad->function = function;
9664 +
9665 + ret = pmic_mpp_write_mode_ctl(state, pad);
9666 ++ if (ret < 0)
9667 ++ return ret;
9668 +
9669 + val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
9670 +
9671 +@@ -343,13 +345,12 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
9672 +
9673 + switch (param) {
9674 + case PIN_CONFIG_BIAS_DISABLE:
9675 +- arg = pad->pullup == PMIC_MPP_PULL_UP_OPEN;
9676 ++ if (pad->pullup != PMIC_MPP_PULL_UP_OPEN)
9677 ++ return -EINVAL;
9678 ++ arg = 1;
9679 + break;
9680 + case PIN_CONFIG_BIAS_PULL_UP:
9681 + switch (pad->pullup) {
9682 +- case PMIC_MPP_PULL_UP_OPEN:
9683 +- arg = 0;
9684 +- break;
9685 + case PMIC_MPP_PULL_UP_0P6KOHM:
9686 + arg = 600;
9687 + break;
9688 +@@ -364,13 +365,17 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
9689 + }
9690 + break;
9691 + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
9692 +- arg = !pad->is_enabled;
9693 ++ if (pad->is_enabled)
9694 ++ return -EINVAL;
9695 ++ arg = 1;
9696 + break;
9697 + case PIN_CONFIG_POWER_SOURCE:
9698 + arg = pad->power_source;
9699 + break;
9700 + case PIN_CONFIG_INPUT_ENABLE:
9701 +- arg = pad->input_enabled;
9702 ++ if (!pad->input_enabled)
9703 ++ return -EINVAL;
9704 ++ arg = 1;
9705 + break;
9706 + case PIN_CONFIG_OUTPUT:
9707 + arg = pad->out_value;
9708 +@@ -382,7 +387,9 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
9709 + arg = pad->amux_input;
9710 + break;
9711 + case PMIC_MPP_CONF_PAIRED:
9712 +- arg = pad->paired;
9713 ++ if (!pad->paired)
9714 ++ return -EINVAL;
9715 ++ arg = 1;
9716 + break;
9717 + case PIN_CONFIG_DRIVE_STRENGTH:
9718 + arg = pad->drive_strength;
9719 +@@ -455,7 +462,7 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
9720 + pad->dtest = arg;
9721 + break;
9722 + case PIN_CONFIG_DRIVE_STRENGTH:
9723 +- arg = pad->drive_strength;
9724 ++ pad->drive_strength = arg;
9725 + break;
9726 + case PMIC_MPP_CONF_AMUX_ROUTE:
9727 + if (arg >= PMIC_MPP_AMUX_ROUTE_ABUS4)
9728 +@@ -502,6 +509,10 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
9729 + if (ret < 0)
9730 + return ret;
9731 +
9732 ++ ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_SINK_CTL, pad->drive_strength);
9733 ++ if (ret < 0)
9734 ++ return ret;
9735 ++
9736 + val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
9737 +
9738 + return pmic_mpp_write(state, pad, PMIC_MPP_REG_EN_CTL, val);
9739 +diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
9740 +index f53e32a9d8fc..0e153bae322e 100644
9741 +--- a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
9742 ++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
9743 +@@ -260,22 +260,32 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
9744 +
9745 + switch (param) {
9746 + case PIN_CONFIG_BIAS_DISABLE:
9747 +- arg = pin->bias == PM8XXX_GPIO_BIAS_NP;
9748 ++ if (pin->bias != PM8XXX_GPIO_BIAS_NP)
9749 ++ return -EINVAL;
9750 ++ arg = 1;
9751 + break;
9752 + case PIN_CONFIG_BIAS_PULL_DOWN:
9753 +- arg = pin->bias == PM8XXX_GPIO_BIAS_PD;
9754 ++ if (pin->bias != PM8XXX_GPIO_BIAS_PD)
9755 ++ return -EINVAL;
9756 ++ arg = 1;
9757 + break;
9758 + case PIN_CONFIG_BIAS_PULL_UP:
9759 +- arg = pin->bias <= PM8XXX_GPIO_BIAS_PU_1P5_30;
9760 ++ if (pin->bias > PM8XXX_GPIO_BIAS_PU_1P5_30)
9761 ++ return -EINVAL;
9762 ++ arg = 1;
9763 + break;
9764 + case PM8XXX_QCOM_PULL_UP_STRENGTH:
9765 + arg = pin->pull_up_strength;
9766 + break;
9767 + case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
9768 +- arg = pin->disable;
9769 ++ if (!pin->disable)
9770 ++ return -EINVAL;
9771 ++ arg = 1;
9772 + break;
9773 + case PIN_CONFIG_INPUT_ENABLE:
9774 +- arg = pin->mode == PM8XXX_GPIO_MODE_INPUT;
9775 ++ if (pin->mode != PM8XXX_GPIO_MODE_INPUT)
9776 ++ return -EINVAL;
9777 ++ arg = 1;
9778 + break;
9779 + case PIN_CONFIG_OUTPUT:
9780 + if (pin->mode & PM8XXX_GPIO_MODE_OUTPUT)
9781 +@@ -290,10 +300,14 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
9782 + arg = pin->output_strength;
9783 + break;
9784 + case PIN_CONFIG_DRIVE_PUSH_PULL:
9785 +- arg = !pin->open_drain;
9786 ++ if (pin->open_drain)
9787 ++ return -EINVAL;
9788 ++ arg = 1;
9789 + break;
9790 + case PIN_CONFIG_DRIVE_OPEN_DRAIN:
9791 +- arg = pin->open_drain;
9792 ++ if (!pin->open_drain)
9793 ++ return -EINVAL;
9794 ++ arg = 1;
9795 + break;
9796 + default:
9797 + return -EINVAL;
9798 +diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
9799 +index 4d9bf9b3e9f3..26ebedc1f6d3 100644
9800 +--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
9801 ++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
9802 +@@ -1079,10 +1079,9 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
9803 + * We suppose that we won't have any more functions than pins,
9804 + * we'll reallocate that later anyway
9805 + */
9806 +- pctl->functions = devm_kcalloc(&pdev->dev,
9807 +- pctl->ngroups,
9808 +- sizeof(*pctl->functions),
9809 +- GFP_KERNEL);
9810 ++ pctl->functions = kcalloc(pctl->ngroups,
9811 ++ sizeof(*pctl->functions),
9812 ++ GFP_KERNEL);
9813 + if (!pctl->functions)
9814 + return -ENOMEM;
9815 +
9816 +@@ -1133,8 +1132,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
9817 +
9818 + func_item = sunxi_pinctrl_find_function_by_name(pctl,
9819 + func->name);
9820 +- if (!func_item)
9821 ++ if (!func_item) {
9822 ++ kfree(pctl->functions);
9823 + return -EINVAL;
9824 ++ }
9825 +
9826 + if (!func_item->groups) {
9827 + func_item->groups =
9828 +@@ -1142,8 +1143,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
9829 + func_item->ngroups,
9830 + sizeof(*func_item->groups),
9831 + GFP_KERNEL);
9832 +- if (!func_item->groups)
9833 ++ if (!func_item->groups) {
9834 ++ kfree(pctl->functions);
9835 + return -ENOMEM;
9836 ++ }
9837 + }
9838 +
9839 + func_grp = func_item->groups;
9840 +diff --git a/drivers/power/supply/twl4030_charger.c b/drivers/power/supply/twl4030_charger.c
9841 +index bbcaee56db9d..b6a7d9f74cf3 100644
9842 +--- a/drivers/power/supply/twl4030_charger.c
9843 ++++ b/drivers/power/supply/twl4030_charger.c
9844 +@@ -996,12 +996,13 @@ static int twl4030_bci_probe(struct platform_device *pdev)
9845 + if (bci->dev->of_node) {
9846 + struct device_node *phynode;
9847 +
9848 +- phynode = of_find_compatible_node(bci->dev->of_node->parent,
9849 +- NULL, "ti,twl4030-usb");
9850 ++ phynode = of_get_compatible_child(bci->dev->of_node->parent,
9851 ++ "ti,twl4030-usb");
9852 + if (phynode) {
9853 + bci->usb_nb.notifier_call = twl4030_bci_usb_ncb;
9854 + bci->transceiver = devm_usb_get_phy_by_node(
9855 + bci->dev, phynode, &bci->usb_nb);
9856 ++ of_node_put(phynode);
9857 + if (IS_ERR(bci->transceiver)) {
9858 + ret = PTR_ERR(bci->transceiver);
9859 + if (ret == -EPROBE_DEFER)
9860 +diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
9861 +index 6437bbeebc91..e026a7817013 100644
9862 +--- a/drivers/rpmsg/qcom_smd.c
9863 ++++ b/drivers/rpmsg/qcom_smd.c
9864 +@@ -1114,8 +1114,10 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
9865 +
9866 + channel->edge = edge;
9867 + channel->name = kstrdup(name, GFP_KERNEL);
9868 +- if (!channel->name)
9869 +- return ERR_PTR(-ENOMEM);
9870 ++ if (!channel->name) {
9871 ++ ret = -ENOMEM;
9872 ++ goto free_channel;
9873 ++ }
9874 +
9875 + spin_lock_init(&channel->tx_lock);
9876 + spin_lock_init(&channel->recv_lock);
9877 +@@ -1165,6 +1167,7 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
9878 +
9879 + free_name_and_channel:
9880 + kfree(channel->name);
9881 ++free_channel:
9882 + kfree(channel);
9883 +
9884 + return ERR_PTR(ret);
9885 +diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
9886 +index cd3a2411bc2f..df0c5776d49b 100644
9887 +--- a/drivers/rtc/rtc-cmos.c
9888 ++++ b/drivers/rtc/rtc-cmos.c
9889 +@@ -50,6 +50,7 @@
9890 + /* this is for "generic access to PC-style RTC" using CMOS_READ/CMOS_WRITE */
9891 + #include <linux/mc146818rtc.h>
9892 +
9893 ++#ifdef CONFIG_ACPI
9894 + /*
9895 + * Use ACPI SCI to replace HPET interrupt for RTC Alarm event
9896 + *
9897 +@@ -61,6 +62,18 @@
9898 + static bool use_acpi_alarm;
9899 + module_param(use_acpi_alarm, bool, 0444);
9900 +
9901 ++static inline int cmos_use_acpi_alarm(void)
9902 ++{
9903 ++ return use_acpi_alarm;
9904 ++}
9905 ++#else /* !CONFIG_ACPI */
9906 ++
9907 ++static inline int cmos_use_acpi_alarm(void)
9908 ++{
9909 ++ return 0;
9910 ++}
9911 ++#endif
9912 ++
9913 + struct cmos_rtc {
9914 + struct rtc_device *rtc;
9915 + struct device *dev;
9916 +@@ -167,9 +180,9 @@ static inline int hpet_unregister_irq_handler(irq_handler_t handler)
9917 + #endif
9918 +
9919 + /* Don't use HPET for RTC Alarm event if ACPI Fixed event is used */
9920 +-static int use_hpet_alarm(void)
9921 ++static inline int use_hpet_alarm(void)
9922 + {
9923 +- return is_hpet_enabled() && !use_acpi_alarm;
9924 ++ return is_hpet_enabled() && !cmos_use_acpi_alarm();
9925 + }
9926 +
9927 + /*----------------------------------------------------------------*/
9928 +@@ -340,7 +353,7 @@ static void cmos_irq_enable(struct cmos_rtc *cmos, unsigned char mask)
9929 + if (use_hpet_alarm())
9930 + hpet_set_rtc_irq_bit(mask);
9931 +
9932 +- if ((mask & RTC_AIE) && use_acpi_alarm) {
9933 ++ if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
9934 + if (cmos->wake_on)
9935 + cmos->wake_on(cmos->dev);
9936 + }
9937 +@@ -358,7 +371,7 @@ static void cmos_irq_disable(struct cmos_rtc *cmos, unsigned char mask)
9938 + if (use_hpet_alarm())
9939 + hpet_mask_rtc_irq_bit(mask);
9940 +
9941 +- if ((mask & RTC_AIE) && use_acpi_alarm) {
9942 ++ if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
9943 + if (cmos->wake_off)
9944 + cmos->wake_off(cmos->dev);
9945 + }
9946 +@@ -980,7 +993,7 @@ static int cmos_suspend(struct device *dev)
9947 + }
9948 + spin_unlock_irq(&rtc_lock);
9949 +
9950 +- if ((tmp & RTC_AIE) && !use_acpi_alarm) {
9951 ++ if ((tmp & RTC_AIE) && !cmos_use_acpi_alarm()) {
9952 + cmos->enabled_wake = 1;
9953 + if (cmos->wake_on)
9954 + cmos->wake_on(dev);
9955 +@@ -1031,7 +1044,7 @@ static void cmos_check_wkalrm(struct device *dev)
9956 + * ACPI RTC wake event is cleared after resume from STR,
9957 + * ACK the rtc irq here
9958 + */
9959 +- if (t_now >= cmos->alarm_expires && use_acpi_alarm) {
9960 ++ if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
9961 + cmos_interrupt(0, (void *)cmos->rtc);
9962 + return;
9963 + }
9964 +@@ -1053,7 +1066,7 @@ static int __maybe_unused cmos_resume(struct device *dev)
9965 + struct cmos_rtc *cmos = dev_get_drvdata(dev);
9966 + unsigned char tmp;
9967 +
9968 +- if (cmos->enabled_wake && !use_acpi_alarm) {
9969 ++ if (cmos->enabled_wake && !cmos_use_acpi_alarm()) {
9970 + if (cmos->wake_off)
9971 + cmos->wake_off(dev);
9972 + else
9973 +@@ -1132,7 +1145,7 @@ static u32 rtc_handler(void *context)
9974 + * Or else, ACPI SCI is enabled during suspend/resume only,
9975 + * update rtc irq in that case.
9976 + */
9977 +- if (use_acpi_alarm)
9978 ++ if (cmos_use_acpi_alarm())
9979 + cmos_interrupt(0, (void *)cmos->rtc);
9980 + else {
9981 + /* Fix me: can we use cmos_interrupt() here as well? */
9982 +diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
9983 +index e9ec4160d7f6..83fa875b89cd 100644
9984 +--- a/drivers/rtc/rtc-ds1307.c
9985 ++++ b/drivers/rtc/rtc-ds1307.c
9986 +@@ -1372,7 +1372,6 @@ static void ds1307_clks_register(struct ds1307 *ds1307)
9987 + static const struct regmap_config regmap_config = {
9988 + .reg_bits = 8,
9989 + .val_bits = 8,
9990 +- .max_register = 0x9,
9991 + };
9992 +
9993 + static int ds1307_probe(struct i2c_client *client,
9994 +diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c
9995 +index c3fc34b9964d..9e5d3f7d29ae 100644
9996 +--- a/drivers/scsi/esp_scsi.c
9997 ++++ b/drivers/scsi/esp_scsi.c
9998 +@@ -1338,6 +1338,7 @@ static int esp_data_bytes_sent(struct esp *esp, struct esp_cmd_entry *ent,
9999 +
10000 + bytes_sent = esp->data_dma_len;
10001 + bytes_sent -= ecount;
10002 ++ bytes_sent -= esp->send_cmd_residual;
10003 +
10004 + /*
10005 + * The am53c974 has a DMA 'pecularity'. The doc states:
10006 +diff --git a/drivers/scsi/esp_scsi.h b/drivers/scsi/esp_scsi.h
10007 +index 8163dca2071b..a77772777a30 100644
10008 +--- a/drivers/scsi/esp_scsi.h
10009 ++++ b/drivers/scsi/esp_scsi.h
10010 +@@ -540,6 +540,8 @@ struct esp {
10011 +
10012 + void *dma;
10013 + int dmarev;
10014 ++
10015 ++ u32 send_cmd_residual;
10016 + };
10017 +
10018 + /* A front-end driver for the ESP chip should do the following in
10019 +diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
10020 +index a94fb9f8bb44..3b3af1459008 100644
10021 +--- a/drivers/scsi/lpfc/lpfc_scsi.c
10022 ++++ b/drivers/scsi/lpfc/lpfc_scsi.c
10023 +@@ -4140,9 +4140,17 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
10024 + }
10025 + lpfc_scsi_unprep_dma_buf(phba, lpfc_cmd);
10026 +
10027 +- spin_lock_irqsave(&phba->hbalock, flags);
10028 +- lpfc_cmd->pCmd = NULL;
10029 +- spin_unlock_irqrestore(&phba->hbalock, flags);
10030 ++ /* If pCmd was set to NULL from abort path, do not call scsi_done */
10031 ++ if (xchg(&lpfc_cmd->pCmd, NULL) == NULL) {
10032 ++ lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
10033 ++ "0711 FCP cmd already NULL, sid: 0x%06x, "
10034 ++ "did: 0x%06x, oxid: 0x%04x\n",
10035 ++ vport->fc_myDID,
10036 ++ (pnode) ? pnode->nlp_DID : 0,
10037 ++ phba->sli_rev == LPFC_SLI_REV4 ?
10038 ++ lpfc_cmd->cur_iocbq.sli4_xritag : 0xffff);
10039 ++ return;
10040 ++ }
10041 +
10042 + /* The sdev is not guaranteed to be valid post scsi_done upcall. */
10043 + cmd->scsi_done(cmd);
10044 +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
10045 +index 6f3c00a233ec..4f8d459d2378 100644
10046 +--- a/drivers/scsi/lpfc/lpfc_sli.c
10047 ++++ b/drivers/scsi/lpfc/lpfc_sli.c
10048 +@@ -3790,6 +3790,7 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
10049 + struct hbq_dmabuf *dmabuf;
10050 + struct lpfc_cq_event *cq_event;
10051 + unsigned long iflag;
10052 ++ int count = 0;
10053 +
10054 + spin_lock_irqsave(&phba->hbalock, iflag);
10055 + phba->hba_flag &= ~HBA_SP_QUEUE_EVT;
10056 +@@ -3811,16 +3812,22 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
10057 + if (irspiocbq)
10058 + lpfc_sli_sp_handle_rspiocb(phba, pring,
10059 + irspiocbq);
10060 ++ count++;
10061 + break;
10062 + case CQE_CODE_RECEIVE:
10063 + case CQE_CODE_RECEIVE_V1:
10064 + dmabuf = container_of(cq_event, struct hbq_dmabuf,
10065 + cq_event);
10066 + lpfc_sli4_handle_received_buffer(phba, dmabuf);
10067 ++ count++;
10068 + break;
10069 + default:
10070 + break;
10071 + }
10072 ++
10073 ++ /* Limit the number of events to 64 to avoid soft lockups */
10074 ++ if (count == 64)
10075 ++ break;
10076 + }
10077 + }
10078 +
10079 +diff --git a/drivers/scsi/mac_esp.c b/drivers/scsi/mac_esp.c
10080 +index eb551f3cc471..71879f2207e0 100644
10081 +--- a/drivers/scsi/mac_esp.c
10082 ++++ b/drivers/scsi/mac_esp.c
10083 +@@ -427,6 +427,8 @@ static void mac_esp_send_pio_cmd(struct esp *esp, u32 addr, u32 esp_count,
10084 + scsi_esp_cmd(esp, ESP_CMD_TI);
10085 + }
10086 + }
10087 ++
10088 ++ esp->send_cmd_residual = esp_count;
10089 + }
10090 +
10091 + static int mac_esp_irq_pending(struct esp *esp)
10092 +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
10093 +index 8e84e3fb648a..2d6f6414a2a2 100644
10094 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c
10095 ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
10096 +@@ -7499,6 +7499,9 @@ static int megasas_mgmt_compat_ioctl_fw(struct file *file, unsigned long arg)
10097 + get_user(user_sense_off, &cioc->sense_off))
10098 + return -EFAULT;
10099 +
10100 ++ if (local_sense_off != user_sense_off)
10101 ++ return -EINVAL;
10102 ++
10103 + if (local_sense_len) {
10104 + void __user **sense_ioc_ptr =
10105 + (void __user **)((u8 *)((unsigned long)&ioc->frame.raw) + local_sense_off);
10106 +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
10107 +index 397081d320b1..83f71c266c66 100644
10108 +--- a/drivers/scsi/ufs/ufshcd.c
10109 ++++ b/drivers/scsi/ufs/ufshcd.c
10110 +@@ -1677,8 +1677,9 @@ static void __ufshcd_release(struct ufs_hba *hba)
10111 +
10112 + hba->clk_gating.state = REQ_CLKS_OFF;
10113 + trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
10114 +- schedule_delayed_work(&hba->clk_gating.gate_work,
10115 +- msecs_to_jiffies(hba->clk_gating.delay_ms));
10116 ++ queue_delayed_work(hba->clk_gating.clk_gating_workq,
10117 ++ &hba->clk_gating.gate_work,
10118 ++ msecs_to_jiffies(hba->clk_gating.delay_ms));
10119 + }
10120 +
10121 + void ufshcd_release(struct ufs_hba *hba)
10122 +diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
10123 +index 8a3678c2e83c..97bb5989aa21 100644
10124 +--- a/drivers/soc/qcom/rmtfs_mem.c
10125 ++++ b/drivers/soc/qcom/rmtfs_mem.c
10126 +@@ -212,6 +212,11 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
10127 + dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
10128 + goto remove_cdev;
10129 + } else if (!ret) {
10130 ++ if (!qcom_scm_is_available()) {
10131 ++ ret = -EPROBE_DEFER;
10132 ++ goto remove_cdev;
10133 ++ }
10134 ++
10135 + perms[0].vmid = QCOM_SCM_VMID_HLOS;
10136 + perms[0].perm = QCOM_SCM_PERM_RW;
10137 + perms[1].vmid = vmid;
10138 +diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
10139 +index 2d6f3fcf3211..ed71a4c9c8b2 100644
10140 +--- a/drivers/soc/tegra/pmc.c
10141 ++++ b/drivers/soc/tegra/pmc.c
10142 +@@ -1288,7 +1288,7 @@ static void tegra_pmc_init_tsense_reset(struct tegra_pmc *pmc)
10143 + if (!pmc->soc->has_tsense_reset)
10144 + return;
10145 +
10146 +- np = of_find_node_by_name(pmc->dev->of_node, "i2c-thermtrip");
10147 ++ np = of_get_child_by_name(pmc->dev->of_node, "i2c-thermtrip");
10148 + if (!np) {
10149 + dev_warn(dev, "i2c-thermtrip node not found, %s.\n", disabled);
10150 + return;
10151 +diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
10152 +index 8612525fa4e3..584bcb018a62 100644
10153 +--- a/drivers/spi/spi-bcm-qspi.c
10154 ++++ b/drivers/spi/spi-bcm-qspi.c
10155 +@@ -89,7 +89,7 @@
10156 + #define BSPI_BPP_MODE_SELECT_MASK BIT(8)
10157 + #define BSPI_BPP_ADDR_SELECT_MASK BIT(16)
10158 +
10159 +-#define BSPI_READ_LENGTH 512
10160 ++#define BSPI_READ_LENGTH 256
10161 +
10162 + /* MSPI register offsets */
10163 + #define MSPI_SPCR0_LSB 0x000
10164 +@@ -355,7 +355,7 @@ static int bcm_qspi_bspi_set_flex_mode(struct bcm_qspi *qspi,
10165 + int bpc = 0, bpp = 0;
10166 + u8 command = op->cmd.opcode;
10167 + int width = op->cmd.buswidth ? op->cmd.buswidth : SPI_NBITS_SINGLE;
10168 +- int addrlen = op->addr.nbytes * 8;
10169 ++ int addrlen = op->addr.nbytes;
10170 + int flex_mode = 1;
10171 +
10172 + dev_dbg(&qspi->pdev->dev, "set flex mode w %x addrlen %x hp %d\n",
10173 +diff --git a/drivers/spi/spi-ep93xx.c b/drivers/spi/spi-ep93xx.c
10174 +index f1526757aaf6..79fc3940245a 100644
10175 +--- a/drivers/spi/spi-ep93xx.c
10176 ++++ b/drivers/spi/spi-ep93xx.c
10177 +@@ -246,6 +246,19 @@ static int ep93xx_spi_read_write(struct spi_master *master)
10178 + return -EINPROGRESS;
10179 + }
10180 +
10181 ++static enum dma_transfer_direction
10182 ++ep93xx_dma_data_to_trans_dir(enum dma_data_direction dir)
10183 ++{
10184 ++ switch (dir) {
10185 ++ case DMA_TO_DEVICE:
10186 ++ return DMA_MEM_TO_DEV;
10187 ++ case DMA_FROM_DEVICE:
10188 ++ return DMA_DEV_TO_MEM;
10189 ++ default:
10190 ++ return DMA_TRANS_NONE;
10191 ++ }
10192 ++}
10193 ++
10194 + /**
10195 + * ep93xx_spi_dma_prepare() - prepares a DMA transfer
10196 + * @master: SPI master
10197 +@@ -257,7 +270,7 @@ static int ep93xx_spi_read_write(struct spi_master *master)
10198 + */
10199 + static struct dma_async_tx_descriptor *
10200 + ep93xx_spi_dma_prepare(struct spi_master *master,
10201 +- enum dma_transfer_direction dir)
10202 ++ enum dma_data_direction dir)
10203 + {
10204 + struct ep93xx_spi *espi = spi_master_get_devdata(master);
10205 + struct spi_transfer *xfer = master->cur_msg->state;
10206 +@@ -277,9 +290,9 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
10207 + buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
10208 +
10209 + memset(&conf, 0, sizeof(conf));
10210 +- conf.direction = dir;
10211 ++ conf.direction = ep93xx_dma_data_to_trans_dir(dir);
10212 +
10213 +- if (dir == DMA_DEV_TO_MEM) {
10214 ++ if (dir == DMA_FROM_DEVICE) {
10215 + chan = espi->dma_rx;
10216 + buf = xfer->rx_buf;
10217 + sgt = &espi->rx_sgt;
10218 +@@ -343,7 +356,8 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
10219 + if (!nents)
10220 + return ERR_PTR(-ENOMEM);
10221 +
10222 +- txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir, DMA_CTRL_ACK);
10223 ++ txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, conf.direction,
10224 ++ DMA_CTRL_ACK);
10225 + if (!txd) {
10226 + dma_unmap_sg(chan->device->dev, sgt->sgl, sgt->nents, dir);
10227 + return ERR_PTR(-ENOMEM);
10228 +@@ -360,13 +374,13 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
10229 + * unmapped.
10230 + */
10231 + static void ep93xx_spi_dma_finish(struct spi_master *master,
10232 +- enum dma_transfer_direction dir)
10233 ++ enum dma_data_direction dir)
10234 + {
10235 + struct ep93xx_spi *espi = spi_master_get_devdata(master);
10236 + struct dma_chan *chan;
10237 + struct sg_table *sgt;
10238 +
10239 +- if (dir == DMA_DEV_TO_MEM) {
10240 ++ if (dir == DMA_FROM_DEVICE) {
10241 + chan = espi->dma_rx;
10242 + sgt = &espi->rx_sgt;
10243 + } else {
10244 +@@ -381,8 +395,8 @@ static void ep93xx_spi_dma_callback(void *callback_param)
10245 + {
10246 + struct spi_master *master = callback_param;
10247 +
10248 +- ep93xx_spi_dma_finish(master, DMA_MEM_TO_DEV);
10249 +- ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
10250 ++ ep93xx_spi_dma_finish(master, DMA_TO_DEVICE);
10251 ++ ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
10252 +
10253 + spi_finalize_current_transfer(master);
10254 + }
10255 +@@ -392,15 +406,15 @@ static int ep93xx_spi_dma_transfer(struct spi_master *master)
10256 + struct ep93xx_spi *espi = spi_master_get_devdata(master);
10257 + struct dma_async_tx_descriptor *rxd, *txd;
10258 +
10259 +- rxd = ep93xx_spi_dma_prepare(master, DMA_DEV_TO_MEM);
10260 ++ rxd = ep93xx_spi_dma_prepare(master, DMA_FROM_DEVICE);
10261 + if (IS_ERR(rxd)) {
10262 + dev_err(&master->dev, "DMA RX failed: %ld\n", PTR_ERR(rxd));
10263 + return PTR_ERR(rxd);
10264 + }
10265 +
10266 +- txd = ep93xx_spi_dma_prepare(master, DMA_MEM_TO_DEV);
10267 ++ txd = ep93xx_spi_dma_prepare(master, DMA_TO_DEVICE);
10268 + if (IS_ERR(txd)) {
10269 +- ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
10270 ++ ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
10271 + dev_err(&master->dev, "DMA TX failed: %ld\n", PTR_ERR(txd));
10272 + return PTR_ERR(txd);
10273 + }
10274 +diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
10275 +index 3b518ead504e..b82b47152b18 100644
10276 +--- a/drivers/spi/spi-gpio.c
10277 ++++ b/drivers/spi/spi-gpio.c
10278 +@@ -282,9 +282,11 @@ static int spi_gpio_request(struct device *dev,
10279 + spi_gpio->miso = devm_gpiod_get_optional(dev, "miso", GPIOD_IN);
10280 + if (IS_ERR(spi_gpio->miso))
10281 + return PTR_ERR(spi_gpio->miso);
10282 +- if (!spi_gpio->miso)
10283 +- /* HW configuration without MISO pin */
10284 +- *mflags |= SPI_MASTER_NO_RX;
10285 ++ /*
10286 ++ * No setting SPI_MASTER_NO_RX here - if there is only a MOSI
10287 ++ * pin connected the host can still do RX by changing the
10288 ++ * direction of the line.
10289 ++ */
10290 +
10291 + spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);
10292 + if (IS_ERR(spi_gpio->sck))
10293 +@@ -408,7 +410,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
10294 + spi_gpio->bitbang.master = master;
10295 + spi_gpio->bitbang.chipselect = spi_gpio_chipselect;
10296 +
10297 +- if ((master_flags & (SPI_MASTER_NO_TX | SPI_MASTER_NO_RX)) == 0) {
10298 ++ if ((master_flags & SPI_MASTER_NO_TX) == 0) {
10299 + spi_gpio->bitbang.txrx_word[SPI_MODE_0] = spi_gpio_txrx_word_mode0;
10300 + spi_gpio->bitbang.txrx_word[SPI_MODE_1] = spi_gpio_txrx_word_mode1;
10301 + spi_gpio->bitbang.txrx_word[SPI_MODE_2] = spi_gpio_txrx_word_mode2;
10302 +diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
10303 +index 990770dfa5cf..ec0c24e873cd 100644
10304 +--- a/drivers/spi/spi-mem.c
10305 ++++ b/drivers/spi/spi-mem.c
10306 +@@ -328,10 +328,25 @@ EXPORT_SYMBOL_GPL(spi_mem_exec_op);
10307 + int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
10308 + {
10309 + struct spi_controller *ctlr = mem->spi->controller;
10310 ++ size_t len;
10311 ++
10312 ++ len = sizeof(op->cmd.opcode) + op->addr.nbytes + op->dummy.nbytes;
10313 +
10314 + if (ctlr->mem_ops && ctlr->mem_ops->adjust_op_size)
10315 + return ctlr->mem_ops->adjust_op_size(mem, op);
10316 +
10317 ++ if (!ctlr->mem_ops || !ctlr->mem_ops->exec_op) {
10318 ++ if (len > spi_max_transfer_size(mem->spi))
10319 ++ return -EINVAL;
10320 ++
10321 ++ op->data.nbytes = min3((size_t)op->data.nbytes,
10322 ++ spi_max_transfer_size(mem->spi),
10323 ++ spi_max_message_size(mem->spi) -
10324 ++ len);
10325 ++ if (!op->data.nbytes)
10326 ++ return -EINVAL;
10327 ++ }
10328 ++
10329 + return 0;
10330 + }
10331 + EXPORT_SYMBOL_GPL(spi_mem_adjust_op_size);
10332 +diff --git a/drivers/tc/tc.c b/drivers/tc/tc.c
10333 +index 3be9519654e5..cf3fad2cb871 100644
10334 +--- a/drivers/tc/tc.c
10335 ++++ b/drivers/tc/tc.c
10336 +@@ -2,7 +2,7 @@
10337 + * TURBOchannel bus services.
10338 + *
10339 + * Copyright (c) Harald Koerfgen, 1998
10340 +- * Copyright (c) 2001, 2003, 2005, 2006 Maciej W. Rozycki
10341 ++ * Copyright (c) 2001, 2003, 2005, 2006, 2018 Maciej W. Rozycki
10342 + * Copyright (c) 2005 James Simmons
10343 + *
10344 + * This file is subject to the terms and conditions of the GNU
10345 +@@ -10,6 +10,7 @@
10346 + * directory of this archive for more details.
10347 + */
10348 + #include <linux/compiler.h>
10349 ++#include <linux/dma-mapping.h>
10350 + #include <linux/errno.h>
10351 + #include <linux/init.h>
10352 + #include <linux/ioport.h>
10353 +@@ -92,6 +93,11 @@ static void __init tc_bus_add_devices(struct tc_bus *tbus)
10354 + tdev->dev.bus = &tc_bus_type;
10355 + tdev->slot = slot;
10356 +
10357 ++ /* TURBOchannel has 34-bit DMA addressing (16GiB space). */
10358 ++ tdev->dma_mask = DMA_BIT_MASK(34);
10359 ++ tdev->dev.dma_mask = &tdev->dma_mask;
10360 ++ tdev->dev.coherent_dma_mask = DMA_BIT_MASK(34);
10361 ++
10362 + for (i = 0; i < 8; i++) {
10363 + tdev->firmware[i] =
10364 + readb(module + offset + TC_FIRM_VER + 4 * i);
10365 +diff --git a/drivers/thermal/da9062-thermal.c b/drivers/thermal/da9062-thermal.c
10366 +index dd8dd947b7f0..01b0cb994457 100644
10367 +--- a/drivers/thermal/da9062-thermal.c
10368 ++++ b/drivers/thermal/da9062-thermal.c
10369 +@@ -106,7 +106,7 @@ static void da9062_thermal_poll_on(struct work_struct *work)
10370 + THERMAL_EVENT_UNSPECIFIED);
10371 +
10372 + delay = msecs_to_jiffies(thermal->zone->passive_delay);
10373 +- schedule_delayed_work(&thermal->work, delay);
10374 ++ queue_delayed_work(system_freezable_wq, &thermal->work, delay);
10375 + return;
10376 + }
10377 +
10378 +@@ -125,7 +125,7 @@ static irqreturn_t da9062_thermal_irq_handler(int irq, void *data)
10379 + struct da9062_thermal *thermal = data;
10380 +
10381 + disable_irq_nosync(thermal->irq);
10382 +- schedule_delayed_work(&thermal->work, 0);
10383 ++ queue_delayed_work(system_freezable_wq, &thermal->work, 0);
10384 +
10385 + return IRQ_HANDLED;
10386 + }
10387 +diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
10388 +index e77e63070e99..5844e26bd372 100644
10389 +--- a/drivers/thermal/rcar_thermal.c
10390 ++++ b/drivers/thermal/rcar_thermal.c
10391 +@@ -465,6 +465,7 @@ static int rcar_thermal_remove(struct platform_device *pdev)
10392 +
10393 + rcar_thermal_for_each_priv(priv, common) {
10394 + rcar_thermal_irq_disable(priv);
10395 ++ cancel_delayed_work_sync(&priv->work);
10396 + if (priv->chip->use_of_thermal)
10397 + thermal_remove_hwmon_sysfs(priv->zone);
10398 + else
10399 +diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
10400 +index b4ba2b1dab76..f4d0ef695225 100644
10401 +--- a/drivers/tty/serial/kgdboc.c
10402 ++++ b/drivers/tty/serial/kgdboc.c
10403 +@@ -130,6 +130,11 @@ static void kgdboc_unregister_kbd(void)
10404 +
10405 + static int kgdboc_option_setup(char *opt)
10406 + {
10407 ++ if (!opt) {
10408 ++ pr_err("kgdboc: config string not provided\n");
10409 ++ return -EINVAL;
10410 ++ }
10411 ++
10412 + if (strlen(opt) >= MAX_CONFIG_LEN) {
10413 + printk(KERN_ERR "kgdboc: config string too long\n");
10414 + return -ENOSPC;
10415 +diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
10416 +index 6c58ad1abd7e..d5b2efae82fc 100644
10417 +--- a/drivers/uio/uio.c
10418 ++++ b/drivers/uio/uio.c
10419 +@@ -275,6 +275,8 @@ static struct class uio_class = {
10420 + .dev_groups = uio_groups,
10421 + };
10422 +
10423 ++bool uio_class_registered;
10424 ++
10425 + /*
10426 + * device functions
10427 + */
10428 +@@ -877,6 +879,9 @@ static int init_uio_class(void)
10429 + printk(KERN_ERR "class_register failed for uio\n");
10430 + goto err_class_register;
10431 + }
10432 ++
10433 ++ uio_class_registered = true;
10434 ++
10435 + return 0;
10436 +
10437 + err_class_register:
10438 +@@ -887,6 +892,7 @@ exit:
10439 +
10440 + static void release_uio_class(void)
10441 + {
10442 ++ uio_class_registered = false;
10443 + class_unregister(&uio_class);
10444 + uio_major_cleanup();
10445 + }
10446 +@@ -913,6 +919,9 @@ int __uio_register_device(struct module *owner,
10447 + struct uio_device *idev;
10448 + int ret = 0;
10449 +
10450 ++ if (!uio_class_registered)
10451 ++ return -EPROBE_DEFER;
10452 ++
10453 + if (!parent || !info || !info->name || !info->version)
10454 + return -EINVAL;
10455 +
10456 +diff --git a/drivers/usb/chipidea/otg.h b/drivers/usb/chipidea/otg.h
10457 +index 7e7428e48bfa..4f8b8179ec96 100644
10458 +--- a/drivers/usb/chipidea/otg.h
10459 ++++ b/drivers/usb/chipidea/otg.h
10460 +@@ -17,7 +17,8 @@ void ci_handle_vbus_change(struct ci_hdrc *ci);
10461 + static inline void ci_otg_queue_work(struct ci_hdrc *ci)
10462 + {
10463 + disable_irq_nosync(ci->irq);
10464 +- queue_work(ci->wq, &ci->work);
10465 ++ if (queue_work(ci->wq, &ci->work) == false)
10466 ++ enable_irq(ci->irq);
10467 + }
10468 +
10469 + #endif /* __DRIVERS_USB_CHIPIDEA_OTG_H */
10470 +diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
10471 +index 6e2cdd7b93d4..05a68f035d19 100644
10472 +--- a/drivers/usb/dwc2/hcd.c
10473 ++++ b/drivers/usb/dwc2/hcd.c
10474 +@@ -4394,6 +4394,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
10475 + struct dwc2_hsotg *hsotg = dwc2_hcd_to_hsotg(hcd);
10476 + struct usb_bus *bus = hcd_to_bus(hcd);
10477 + unsigned long flags;
10478 ++ int ret;
10479 +
10480 + dev_dbg(hsotg->dev, "DWC OTG HCD START\n");
10481 +
10482 +@@ -4409,6 +4410,13 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
10483 +
10484 + dwc2_hcd_reinit(hsotg);
10485 +
10486 ++ /* enable external vbus supply before resuming root hub */
10487 ++ spin_unlock_irqrestore(&hsotg->lock, flags);
10488 ++ ret = dwc2_vbus_supply_init(hsotg);
10489 ++ if (ret)
10490 ++ return ret;
10491 ++ spin_lock_irqsave(&hsotg->lock, flags);
10492 ++
10493 + /* Initialize and connect root hub if one is not already attached */
10494 + if (bus->root_hub) {
10495 + dev_dbg(hsotg->dev, "DWC OTG HCD Has Root Hub\n");
10496 +@@ -4418,7 +4426,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
10497 +
10498 + spin_unlock_irqrestore(&hsotg->lock, flags);
10499 +
10500 +- return dwc2_vbus_supply_init(hsotg);
10501 ++ return 0;
10502 + }
10503 +
10504 + /*
10505 +diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
10506 +index 17147b8c771e..8f267be1745d 100644
10507 +--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
10508 ++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
10509 +@@ -2017,6 +2017,8 @@ static struct usba_ep * atmel_udc_of_init(struct platform_device *pdev,
10510 +
10511 + udc->errata = match->data;
10512 + udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9g45-pmc");
10513 ++ if (IS_ERR(udc->pmc))
10514 ++ udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9rl-pmc");
10515 + if (IS_ERR(udc->pmc))
10516 + udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9x5-pmc");
10517 + if (udc->errata && IS_ERR(udc->pmc))
10518 +diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
10519 +index 5b5f1c8b47c9..104b80c28636 100644
10520 +--- a/drivers/usb/gadget/udc/renesas_usb3.c
10521 ++++ b/drivers/usb/gadget/udc/renesas_usb3.c
10522 +@@ -2377,6 +2377,9 @@ static ssize_t renesas_usb3_b_device_write(struct file *file,
10523 + else
10524 + usb3->forced_b_device = false;
10525 +
10526 ++ if (usb3->workaround_for_vbus)
10527 ++ usb3_disconnect(usb3);
10528 ++
10529 + /* Let this driver call usb3_connect() anyway */
10530 + usb3_check_id(usb3);
10531 +
10532 +diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
10533 +index e98673954020..ec6739ef3129 100644
10534 +--- a/drivers/usb/host/ohci-at91.c
10535 ++++ b/drivers/usb/host/ohci-at91.c
10536 +@@ -551,6 +551,8 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
10537 + pdata->overcurrent_pin[i] =
10538 + devm_gpiod_get_index_optional(&pdev->dev, "atmel,oc",
10539 + i, GPIOD_IN);
10540 ++ if (!pdata->overcurrent_pin[i])
10541 ++ continue;
10542 + if (IS_ERR(pdata->overcurrent_pin[i])) {
10543 + err = PTR_ERR(pdata->overcurrent_pin[i]);
10544 + dev_err(&pdev->dev, "unable to claim gpio \"overcurrent\": %d\n", err);
10545 +diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
10546 +index a4b95d019f84..1f7eeee2ebca 100644
10547 +--- a/drivers/usb/host/xhci-hub.c
10548 ++++ b/drivers/usb/host/xhci-hub.c
10549 +@@ -900,6 +900,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
10550 + set_bit(wIndex, &bus_state->resuming_ports);
10551 + bus_state->resume_done[wIndex] = timeout;
10552 + mod_timer(&hcd->rh_timer, timeout);
10553 ++ usb_hcd_start_port_resume(&hcd->self, wIndex);
10554 + }
10555 + /* Has resume been signalled for USB_RESUME_TIME yet? */
10556 + } else if (time_after_eq(jiffies,
10557 +@@ -940,6 +941,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
10558 + clear_bit(wIndex, &bus_state->rexit_ports);
10559 + }
10560 +
10561 ++ usb_hcd_end_port_resume(&hcd->self, wIndex);
10562 + bus_state->port_c_suspend |= 1 << wIndex;
10563 + bus_state->suspended_ports &= ~(1 << wIndex);
10564 + } else {
10565 +@@ -962,6 +964,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
10566 + (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) {
10567 + bus_state->resume_done[wIndex] = 0;
10568 + clear_bit(wIndex, &bus_state->resuming_ports);
10569 ++ usb_hcd_end_port_resume(&hcd->self, wIndex);
10570 + }
10571 +
10572 +
10573 +@@ -1337,6 +1340,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
10574 + goto error;
10575 +
10576 + set_bit(wIndex, &bus_state->resuming_ports);
10577 ++ usb_hcd_start_port_resume(&hcd->self, wIndex);
10578 + xhci_set_link_state(xhci, ports[wIndex],
10579 + XDEV_RESUME);
10580 + spin_unlock_irqrestore(&xhci->lock, flags);
10581 +@@ -1345,6 +1349,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
10582 + xhci_set_link_state(xhci, ports[wIndex],
10583 + XDEV_U0);
10584 + clear_bit(wIndex, &bus_state->resuming_ports);
10585 ++ usb_hcd_end_port_resume(&hcd->self, wIndex);
10586 + }
10587 + bus_state->port_c_suspend |= 1 << wIndex;
10588 +
10589 +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
10590 +index f0a99aa0ac58..cd4659703647 100644
10591 +--- a/drivers/usb/host/xhci-ring.c
10592 ++++ b/drivers/usb/host/xhci-ring.c
10593 +@@ -1602,6 +1602,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
10594 + set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
10595 + mod_timer(&hcd->rh_timer,
10596 + bus_state->resume_done[hcd_portnum]);
10597 ++ usb_hcd_start_port_resume(&hcd->self, hcd_portnum);
10598 + bogus_port_status = true;
10599 + }
10600 + }
10601 +diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c
10602 +index d1d20252bad8..a7e231ccb0a1 100644
10603 +--- a/drivers/usb/typec/tcpm.c
10604 ++++ b/drivers/usb/typec/tcpm.c
10605 +@@ -1383,8 +1383,8 @@ static enum pdo_err tcpm_caps_err(struct tcpm_port *port, const u32 *pdo,
10606 + if (pdo_apdo_type(pdo[i]) != APDO_TYPE_PPS)
10607 + break;
10608 +
10609 +- if (pdo_pps_apdo_max_current(pdo[i]) <
10610 +- pdo_pps_apdo_max_current(pdo[i - 1]))
10611 ++ if (pdo_pps_apdo_max_voltage(pdo[i]) <
10612 ++ pdo_pps_apdo_max_voltage(pdo[i - 1]))
10613 + return PDO_ERR_PPS_APDO_NOT_SORTED;
10614 + else if (pdo_pps_apdo_min_voltage(pdo[i]) ==
10615 + pdo_pps_apdo_min_voltage(pdo[i - 1]) &&
10616 +@@ -4018,6 +4018,9 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
10617 + goto port_unlock;
10618 + }
10619 +
10620 ++ /* Round down operating current to align with PPS valid steps */
10621 ++ op_curr = op_curr - (op_curr % RDO_PROG_CURR_MA_STEP);
10622 ++
10623 + reinit_completion(&port->pps_complete);
10624 + port->pps_data.op_curr = op_curr;
10625 + port->pps_status = 0;
10626 +@@ -4071,6 +4074,9 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
10627 + goto port_unlock;
10628 + }
10629 +
10630 ++ /* Round down output voltage to align with PPS valid steps */
10631 ++ out_volt = out_volt - (out_volt % RDO_PROG_VOLT_MV_STEP);
10632 ++
10633 + reinit_completion(&port->pps_complete);
10634 + port->pps_data.out_volt = out_volt;
10635 + port->pps_status = 0;
10636 +diff --git a/drivers/usb/usbip/vudc_main.c b/drivers/usb/usbip/vudc_main.c
10637 +index 3fc22037a82f..390733e6937e 100644
10638 +--- a/drivers/usb/usbip/vudc_main.c
10639 ++++ b/drivers/usb/usbip/vudc_main.c
10640 +@@ -73,6 +73,10 @@ static int __init init(void)
10641 + cleanup:
10642 + list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
10643 + list_del(&udc_dev->dev_entry);
10644 ++ /*
10645 ++ * Just do platform_device_del() here, put_vudc_device()
10646 ++ * calls the platform_device_put()
10647 ++ */
10648 + platform_device_del(udc_dev->pdev);
10649 + put_vudc_device(udc_dev);
10650 + }
10651 +@@ -89,7 +93,11 @@ static void __exit cleanup(void)
10652 +
10653 + list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
10654 + list_del(&udc_dev->dev_entry);
10655 +- platform_device_unregister(udc_dev->pdev);
10656 ++ /*
10657 ++ * Just do platform_device_del() here, put_vudc_device()
10658 ++ * calls the platform_device_put()
10659 ++ */
10660 ++ platform_device_del(udc_dev->pdev);
10661 + put_vudc_device(udc_dev);
10662 + }
10663 + platform_driver_unregister(&vudc_driver);
10664 +diff --git a/drivers/video/hdmi.c b/drivers/video/hdmi.c
10665 +index 38716eb50408..8a3e8f61b991 100644
10666 +--- a/drivers/video/hdmi.c
10667 ++++ b/drivers/video/hdmi.c
10668 +@@ -592,10 +592,10 @@ hdmi_extended_colorimetry_get_name(enum hdmi_extended_colorimetry ext_col)
10669 + return "xvYCC 709";
10670 + case HDMI_EXTENDED_COLORIMETRY_S_YCC_601:
10671 + return "sYCC 601";
10672 +- case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
10673 +- return "Adobe YCC 601";
10674 +- case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
10675 +- return "Adobe RGB";
10676 ++ case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
10677 ++ return "opYCC 601";
10678 ++ case HDMI_EXTENDED_COLORIMETRY_OPRGB:
10679 ++ return "opRGB";
10680 + case HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM:
10681 + return "BT.2020 Constant Luminance";
10682 + case HDMI_EXTENDED_COLORIMETRY_BT2020:
10683 +diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
10684 +index 83fc9aab34e8..3099052e1243 100644
10685 +--- a/drivers/w1/masters/omap_hdq.c
10686 ++++ b/drivers/w1/masters/omap_hdq.c
10687 +@@ -763,6 +763,8 @@ static int omap_hdq_remove(struct platform_device *pdev)
10688 + /* remove module dependency */
10689 + pm_runtime_disable(&pdev->dev);
10690 +
10691 ++ w1_remove_master_device(&omap_w1_master);
10692 ++
10693 + return 0;
10694 + }
10695 +
10696 +diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
10697 +index df1ed37c3269..de01a6d0059d 100644
10698 +--- a/drivers/xen/privcmd-buf.c
10699 ++++ b/drivers/xen/privcmd-buf.c
10700 +@@ -21,15 +21,9 @@
10701 +
10702 + MODULE_LICENSE("GPL");
10703 +
10704 +-static unsigned int limit = 64;
10705 +-module_param(limit, uint, 0644);
10706 +-MODULE_PARM_DESC(limit, "Maximum number of pages that may be allocated by "
10707 +- "the privcmd-buf device per open file");
10708 +-
10709 + struct privcmd_buf_private {
10710 + struct mutex lock;
10711 + struct list_head list;
10712 +- unsigned int allocated;
10713 + };
10714 +
10715 + struct privcmd_buf_vma_private {
10716 +@@ -60,13 +54,10 @@ static void privcmd_buf_vmapriv_free(struct privcmd_buf_vma_private *vma_priv)
10717 + {
10718 + unsigned int i;
10719 +
10720 +- vma_priv->file_priv->allocated -= vma_priv->n_pages;
10721 +-
10722 + list_del(&vma_priv->list);
10723 +
10724 + for (i = 0; i < vma_priv->n_pages; i++)
10725 +- if (vma_priv->pages[i])
10726 +- __free_page(vma_priv->pages[i]);
10727 ++ __free_page(vma_priv->pages[i]);
10728 +
10729 + kfree(vma_priv);
10730 + }
10731 +@@ -146,8 +137,7 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
10732 + unsigned int i;
10733 + int ret = 0;
10734 +
10735 +- if (!(vma->vm_flags & VM_SHARED) || count > limit ||
10736 +- file_priv->allocated + count > limit)
10737 ++ if (!(vma->vm_flags & VM_SHARED))
10738 + return -EINVAL;
10739 +
10740 + vma_priv = kzalloc(sizeof(*vma_priv) + count * sizeof(void *),
10741 +@@ -155,19 +145,15 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
10742 + if (!vma_priv)
10743 + return -ENOMEM;
10744 +
10745 +- vma_priv->n_pages = count;
10746 +- count = 0;
10747 +- for (i = 0; i < vma_priv->n_pages; i++) {
10748 ++ for (i = 0; i < count; i++) {
10749 + vma_priv->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
10750 + if (!vma_priv->pages[i])
10751 + break;
10752 +- count++;
10753 ++ vma_priv->n_pages++;
10754 + }
10755 +
10756 + mutex_lock(&file_priv->lock);
10757 +
10758 +- file_priv->allocated += count;
10759 +-
10760 + vma_priv->file_priv = file_priv;
10761 + vma_priv->users = 1;
10762 +
10763 +diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
10764 +index a6f9ba85dc4b..aa081f806728 100644
10765 +--- a/drivers/xen/swiotlb-xen.c
10766 ++++ b/drivers/xen/swiotlb-xen.c
10767 +@@ -303,6 +303,9 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
10768 + */
10769 + flags &= ~(__GFP_DMA | __GFP_HIGHMEM);
10770 +
10771 ++ /* Convert the size to actually allocated. */
10772 ++ size = 1UL << (order + XEN_PAGE_SHIFT);
10773 ++
10774 + /* On ARM this function returns an ioremap'ped virtual address for
10775 + * which virt_to_phys doesn't return the corresponding physical
10776 + * address. In fact on ARM virt_to_phys only works for kernel direct
10777 +@@ -351,6 +354,9 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
10778 + * physical address */
10779 + phys = xen_bus_to_phys(dev_addr);
10780 +
10781 ++ /* Convert the size to actually allocated. */
10782 ++ size = 1UL << (order + XEN_PAGE_SHIFT);
10783 ++
10784 + if (((dev_addr + size - 1 <= dma_mask)) ||
10785 + range_straddles_page_boundary(phys, size))
10786 + xen_destroy_contiguous_region(phys, order);
10787 +diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
10788 +index 294f35ce9e46..cf8ef8cee5a0 100644
10789 +--- a/drivers/xen/xen-balloon.c
10790 ++++ b/drivers/xen/xen-balloon.c
10791 +@@ -75,12 +75,15 @@ static void watch_target(struct xenbus_watch *watch,
10792 +
10793 + if (!watch_fired) {
10794 + watch_fired = true;
10795 +- err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
10796 +- &static_max);
10797 +- if (err != 1)
10798 +- static_max = new_target;
10799 +- else
10800 ++
10801 ++ if ((xenbus_scanf(XBT_NIL, "memory", "static-max",
10802 ++ "%llu", &static_max) == 1) ||
10803 ++ (xenbus_scanf(XBT_NIL, "memory", "memory_static_max",
10804 ++ "%llu", &static_max) == 1))
10805 + static_max >>= PAGE_SHIFT - 10;
10806 ++ else
10807 ++ static_max = new_target;
10808 ++
10809 + target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
10810 + : static_max - balloon_stats.target_pages;
10811 + }
10812 +diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
10813 +index 4bc326df472e..4a7ae216977d 100644
10814 +--- a/fs/btrfs/ctree.c
10815 ++++ b/fs/btrfs/ctree.c
10816 +@@ -1054,9 +1054,26 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
10817 + if ((root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) && parent)
10818 + parent_start = parent->start;
10819 +
10820 ++ /*
10821 ++ * If we are COWing a node/leaf from the extent, chunk or device trees,
10822 ++ * make sure that we do not finish block group creation of pending block
10823 ++ * groups. We do this to avoid a deadlock.
10824 ++ * COWing can result in allocation of a new chunk, and flushing pending
10825 ++ * block groups (btrfs_create_pending_block_groups()) can be triggered
10826 ++ * when finishing allocation of a new chunk. Creation of a pending block
10827 ++ * group modifies the extent, chunk and device trees, therefore we could
10828 ++ * deadlock with ourselves since we are holding a lock on an extent
10829 ++ * buffer that btrfs_create_pending_block_groups() may try to COW later.
10830 ++ */
10831 ++ if (root == fs_info->extent_root ||
10832 ++ root == fs_info->chunk_root ||
10833 ++ root == fs_info->dev_root)
10834 ++ trans->can_flush_pending_bgs = false;
10835 ++
10836 + cow = btrfs_alloc_tree_block(trans, root, parent_start,
10837 + root->root_key.objectid, &disk_key, level,
10838 + search_start, empty_size);
10839 ++ trans->can_flush_pending_bgs = true;
10840 + if (IS_ERR(cow))
10841 + return PTR_ERR(cow);
10842 +
10843 +diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
10844 +index d20b244623f2..e129a595f811 100644
10845 +--- a/fs/btrfs/dev-replace.c
10846 ++++ b/fs/btrfs/dev-replace.c
10847 +@@ -445,6 +445,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
10848 + break;
10849 + case BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED:
10850 + case BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED:
10851 ++ ASSERT(0);
10852 + ret = BTRFS_IOCTL_DEV_REPLACE_RESULT_ALREADY_STARTED;
10853 + goto leave;
10854 + }
10855 +@@ -487,6 +488,10 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
10856 + if (IS_ERR(trans)) {
10857 + ret = PTR_ERR(trans);
10858 + btrfs_dev_replace_write_lock(dev_replace);
10859 ++ dev_replace->replace_state =
10860 ++ BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED;
10861 ++ dev_replace->srcdev = NULL;
10862 ++ dev_replace->tgtdev = NULL;
10863 + goto leave;
10864 + }
10865 +
10866 +@@ -508,8 +513,6 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
10867 + return ret;
10868 +
10869 + leave:
10870 +- dev_replace->srcdev = NULL;
10871 +- dev_replace->tgtdev = NULL;
10872 + btrfs_dev_replace_write_unlock(dev_replace);
10873 + btrfs_destroy_dev_replace_tgtdev(fs_info, tgt_device);
10874 + return ret;
10875 +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
10876 +index 4ab0bccfa281..e67de6a9805b 100644
10877 +--- a/fs/btrfs/extent-tree.c
10878 ++++ b/fs/btrfs/extent-tree.c
10879 +@@ -2490,6 +2490,9 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
10880 + insert_reserved);
10881 + else
10882 + BUG();
10883 ++ if (ret && insert_reserved)
10884 ++ btrfs_pin_extent(trans->fs_info, node->bytenr,
10885 ++ node->num_bytes, 1);
10886 + return ret;
10887 + }
10888 +
10889 +@@ -3034,7 +3037,6 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
10890 + struct btrfs_delayed_ref_head *head;
10891 + int ret;
10892 + int run_all = count == (unsigned long)-1;
10893 +- bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
10894 +
10895 + /* We'll clean this up in btrfs_cleanup_transaction */
10896 + if (trans->aborted)
10897 +@@ -3051,7 +3053,6 @@ again:
10898 + #ifdef SCRAMBLE_DELAYED_REFS
10899 + delayed_refs->run_delayed_start = find_middle(&delayed_refs->root);
10900 + #endif
10901 +- trans->can_flush_pending_bgs = false;
10902 + ret = __btrfs_run_delayed_refs(trans, count);
10903 + if (ret < 0) {
10904 + btrfs_abort_transaction(trans, ret);
10905 +@@ -3082,7 +3083,6 @@ again:
10906 + goto again;
10907 + }
10908 + out:
10909 +- trans->can_flush_pending_bgs = can_flush_pending_bgs;
10910 + return 0;
10911 + }
10912 +
10913 +@@ -4664,6 +4664,7 @@ again:
10914 + goto out;
10915 + } else {
10916 + ret = 1;
10917 ++ space_info->max_extent_size = 0;
10918 + }
10919 +
10920 + space_info->force_alloc = CHUNK_ALLOC_NO_FORCE;
10921 +@@ -4685,11 +4686,9 @@ out:
10922 + * the block groups that were made dirty during the lifetime of the
10923 + * transaction.
10924 + */
10925 +- if (trans->can_flush_pending_bgs &&
10926 +- trans->chunk_bytes_reserved >= (u64)SZ_2M) {
10927 ++ if (trans->chunk_bytes_reserved >= (u64)SZ_2M)
10928 + btrfs_create_pending_block_groups(trans);
10929 +- btrfs_trans_release_chunk_metadata(trans);
10930 +- }
10931 ++
10932 + return ret;
10933 + }
10934 +
10935 +@@ -6581,6 +6580,7 @@ static int btrfs_free_reserved_bytes(struct btrfs_block_group_cache *cache,
10936 + space_info->bytes_readonly += num_bytes;
10937 + cache->reserved -= num_bytes;
10938 + space_info->bytes_reserved -= num_bytes;
10939 ++ space_info->max_extent_size = 0;
10940 +
10941 + if (delalloc)
10942 + cache->delalloc_bytes -= num_bytes;
10943 +@@ -7412,6 +7412,7 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
10944 + struct btrfs_block_group_cache *block_group = NULL;
10945 + u64 search_start = 0;
10946 + u64 max_extent_size = 0;
10947 ++ u64 max_free_space = 0;
10948 + u64 empty_cluster = 0;
10949 + struct btrfs_space_info *space_info;
10950 + int loop = 0;
10951 +@@ -7707,8 +7708,8 @@ unclustered_alloc:
10952 + spin_lock(&ctl->tree_lock);
10953 + if (ctl->free_space <
10954 + num_bytes + empty_cluster + empty_size) {
10955 +- if (ctl->free_space > max_extent_size)
10956 +- max_extent_size = ctl->free_space;
10957 ++ max_free_space = max(max_free_space,
10958 ++ ctl->free_space);
10959 + spin_unlock(&ctl->tree_lock);
10960 + goto loop;
10961 + }
10962 +@@ -7877,6 +7878,8 @@ loop:
10963 + }
10964 + out:
10965 + if (ret == -ENOSPC) {
10966 ++ if (!max_extent_size)
10967 ++ max_extent_size = max_free_space;
10968 + spin_lock(&space_info->lock);
10969 + space_info->max_extent_size = max_extent_size;
10970 + spin_unlock(&space_info->lock);
10971 +@@ -8158,21 +8161,14 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans,
10972 + }
10973 +
10974 + path = btrfs_alloc_path();
10975 +- if (!path) {
10976 +- btrfs_free_and_pin_reserved_extent(fs_info,
10977 +- extent_key.objectid,
10978 +- fs_info->nodesize);
10979 ++ if (!path)
10980 + return -ENOMEM;
10981 +- }
10982 +
10983 + path->leave_spinning = 1;
10984 + ret = btrfs_insert_empty_item(trans, fs_info->extent_root, path,
10985 + &extent_key, size);
10986 + if (ret) {
10987 + btrfs_free_path(path);
10988 +- btrfs_free_and_pin_reserved_extent(fs_info,
10989 +- extent_key.objectid,
10990 +- fs_info->nodesize);
10991 + return ret;
10992 + }
10993 +
10994 +@@ -8301,6 +8297,19 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
10995 + if (IS_ERR(buf))
10996 + return buf;
10997 +
10998 ++ /*
10999 ++ * Extra safety check in case the extent tree is corrupted and extent
11000 ++ * allocator chooses to use a tree block which is already used and
11001 ++ * locked.
11002 ++ */
11003 ++ if (buf->lock_owner == current->pid) {
11004 ++ btrfs_err_rl(fs_info,
11005 ++"tree block %llu owner %llu already locked by pid=%d, extent tree corruption detected",
11006 ++ buf->start, btrfs_header_owner(buf), current->pid);
11007 ++ free_extent_buffer(buf);
11008 ++ return ERR_PTR(-EUCLEAN);
11009 ++ }
11010 ++
11011 + btrfs_set_header_generation(buf, trans->transid);
11012 + btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
11013 + btrfs_tree_lock(buf);
11014 +@@ -8938,15 +8947,14 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
11015 + if (eb == root->node) {
11016 + if (wc->flags[level] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
11017 + parent = eb->start;
11018 +- else
11019 +- BUG_ON(root->root_key.objectid !=
11020 +- btrfs_header_owner(eb));
11021 ++ else if (root->root_key.objectid != btrfs_header_owner(eb))
11022 ++ goto owner_mismatch;
11023 + } else {
11024 + if (wc->flags[level + 1] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
11025 + parent = path->nodes[level + 1]->start;
11026 +- else
11027 +- BUG_ON(root->root_key.objectid !=
11028 +- btrfs_header_owner(path->nodes[level + 1]));
11029 ++ else if (root->root_key.objectid !=
11030 ++ btrfs_header_owner(path->nodes[level + 1]))
11031 ++ goto owner_mismatch;
11032 + }
11033 +
11034 + btrfs_free_tree_block(trans, root, eb, parent, wc->refs[level] == 1);
11035 +@@ -8954,6 +8962,11 @@ out:
11036 + wc->refs[level] = 0;
11037 + wc->flags[level] = 0;
11038 + return 0;
11039 ++
11040 ++owner_mismatch:
11041 ++ btrfs_err_rl(fs_info, "unexpected tree owner, have %llu expect %llu",
11042 ++ btrfs_header_owner(eb), root->root_key.objectid);
11043 ++ return -EUCLEAN;
11044 + }
11045 +
11046 + static noinline int walk_down_tree(struct btrfs_trans_handle *trans,
11047 +@@ -9007,6 +9020,8 @@ static noinline int walk_up_tree(struct btrfs_trans_handle *trans,
11048 + ret = walk_up_proc(trans, root, path, wc);
11049 + if (ret > 0)
11050 + return 0;
11051 ++ if (ret < 0)
11052 ++ return ret;
11053 +
11054 + if (path->locks[level]) {
11055 + btrfs_tree_unlock_rw(path->nodes[level],
11056 +@@ -9772,6 +9787,7 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
11057 +
11058 + block_group = btrfs_lookup_first_block_group(info, last);
11059 + while (block_group) {
11060 ++ wait_block_group_cache_done(block_group);
11061 + spin_lock(&block_group->lock);
11062 + if (block_group->iref)
11063 + break;
11064 +@@ -10184,15 +10200,19 @@ error:
11065 + void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
11066 + {
11067 + struct btrfs_fs_info *fs_info = trans->fs_info;
11068 +- struct btrfs_block_group_cache *block_group, *tmp;
11069 ++ struct btrfs_block_group_cache *block_group;
11070 + struct btrfs_root *extent_root = fs_info->extent_root;
11071 + struct btrfs_block_group_item item;
11072 + struct btrfs_key key;
11073 + int ret = 0;
11074 +- bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
11075 +
11076 +- trans->can_flush_pending_bgs = false;
11077 +- list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
11078 ++ if (!trans->can_flush_pending_bgs)
11079 ++ return;
11080 ++
11081 ++ while (!list_empty(&trans->new_bgs)) {
11082 ++ block_group = list_first_entry(&trans->new_bgs,
11083 ++ struct btrfs_block_group_cache,
11084 ++ bg_list);
11085 + if (ret)
11086 + goto next;
11087 +
11088 +@@ -10214,7 +10234,7 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
11089 + next:
11090 + list_del_init(&block_group->bg_list);
11091 + }
11092 +- trans->can_flush_pending_bgs = can_flush_pending_bgs;
11093 ++ btrfs_trans_release_chunk_metadata(trans);
11094 + }
11095 +
11096 + int btrfs_make_block_group(struct btrfs_trans_handle *trans,
11097 +@@ -10869,14 +10889,16 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
11098 + * We don't want a transaction for this since the discard may take a
11099 + * substantial amount of time. We don't require that a transaction be
11100 + * running, but we do need to take a running transaction into account
11101 +- * to ensure that we're not discarding chunks that were released in
11102 +- * the current transaction.
11103 ++ * to ensure that we're not discarding chunks that were released or
11104 ++ * allocated in the current transaction.
11105 + *
11106 + * Holding the chunks lock will prevent other threads from allocating
11107 + * or releasing chunks, but it won't prevent a running transaction
11108 + * from committing and releasing the memory that the pending chunks
11109 + * list head uses. For that, we need to take a reference to the
11110 +- * transaction.
11111 ++ * transaction and hold the commit root sem. We only need to hold
11112 ++ * it while performing the free space search since we have already
11113 ++ * held back allocations.
11114 + */
11115 + static int btrfs_trim_free_extents(struct btrfs_device *device,
11116 + u64 minlen, u64 *trimmed)
11117 +@@ -10886,6 +10908,10 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
11118 +
11119 + *trimmed = 0;
11120 +
11121 ++ /* Discard not supported = nothing to do. */
11122 ++ if (!blk_queue_discard(bdev_get_queue(device->bdev)))
11123 ++ return 0;
11124 ++
11125 + /* Not writeable = nothing to do. */
11126 + if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state))
11127 + return 0;
11128 +@@ -10903,9 +10929,13 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
11129 +
11130 + ret = mutex_lock_interruptible(&fs_info->chunk_mutex);
11131 + if (ret)
11132 +- return ret;
11133 ++ break;
11134 +
11135 +- down_read(&fs_info->commit_root_sem);
11136 ++ ret = down_read_killable(&fs_info->commit_root_sem);
11137 ++ if (ret) {
11138 ++ mutex_unlock(&fs_info->chunk_mutex);
11139 ++ break;
11140 ++ }
11141 +
11142 + spin_lock(&fs_info->trans_lock);
11143 + trans = fs_info->running_transaction;
11144 +@@ -10913,13 +10943,17 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
11145 + refcount_inc(&trans->use_count);
11146 + spin_unlock(&fs_info->trans_lock);
11147 +
11148 ++ if (!trans)
11149 ++ up_read(&fs_info->commit_root_sem);
11150 ++
11151 + ret = find_free_dev_extent_start(trans, device, minlen, start,
11152 + &start, &len);
11153 +- if (trans)
11154 ++ if (trans) {
11155 ++ up_read(&fs_info->commit_root_sem);
11156 + btrfs_put_transaction(trans);
11157 ++ }
11158 +
11159 + if (ret) {
11160 +- up_read(&fs_info->commit_root_sem);
11161 + mutex_unlock(&fs_info->chunk_mutex);
11162 + if (ret == -ENOSPC)
11163 + ret = 0;
11164 +@@ -10927,7 +10961,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
11165 + }
11166 +
11167 + ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
11168 +- up_read(&fs_info->commit_root_sem);
11169 + mutex_unlock(&fs_info->chunk_mutex);
11170 +
11171 + if (ret)
11172 +@@ -10947,6 +10980,15 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
11173 + return ret;
11174 + }
11175 +
11176 ++/*
11177 ++ * Trim the whole filesystem by:
11178 ++ * 1) trimming the free space in each block group
11179 ++ * 2) trimming the unallocated space on each device
11180 ++ *
11181 ++ * This will also continue trimming even if a block group or device encounters
11182 ++ * an error. The return value will be the last error, or 0 if nothing bad
11183 ++ * happens.
11184 ++ */
11185 + int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
11186 + {
11187 + struct btrfs_block_group_cache *cache = NULL;
11188 +@@ -10956,18 +10998,14 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
11189 + u64 start;
11190 + u64 end;
11191 + u64 trimmed = 0;
11192 +- u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
11193 ++ u64 bg_failed = 0;
11194 ++ u64 dev_failed = 0;
11195 ++ int bg_ret = 0;
11196 ++ int dev_ret = 0;
11197 + int ret = 0;
11198 +
11199 +- /*
11200 +- * try to trim all FS space, our block group may start from non-zero.
11201 +- */
11202 +- if (range->len == total_bytes)
11203 +- cache = btrfs_lookup_first_block_group(fs_info, range->start);
11204 +- else
11205 +- cache = btrfs_lookup_block_group(fs_info, range->start);
11206 +-
11207 +- while (cache) {
11208 ++ cache = btrfs_lookup_first_block_group(fs_info, range->start);
11209 ++ for (; cache; cache = next_block_group(fs_info, cache)) {
11210 + if (cache->key.objectid >= (range->start + range->len)) {
11211 + btrfs_put_block_group(cache);
11212 + break;
11213 +@@ -10981,13 +11019,15 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
11214 + if (!block_group_cache_done(cache)) {
11215 + ret = cache_block_group(cache, 0);
11216 + if (ret) {
11217 +- btrfs_put_block_group(cache);
11218 +- break;
11219 ++ bg_failed++;
11220 ++ bg_ret = ret;
11221 ++ continue;
11222 + }
11223 + ret = wait_block_group_cache_done(cache);
11224 + if (ret) {
11225 +- btrfs_put_block_group(cache);
11226 +- break;
11227 ++ bg_failed++;
11228 ++ bg_ret = ret;
11229 ++ continue;
11230 + }
11231 + }
11232 + ret = btrfs_trim_block_group(cache,
11233 +@@ -10998,28 +11038,40 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
11234 +
11235 + trimmed += group_trimmed;
11236 + if (ret) {
11237 +- btrfs_put_block_group(cache);
11238 +- break;
11239 ++ bg_failed++;
11240 ++ bg_ret = ret;
11241 ++ continue;
11242 + }
11243 + }
11244 +-
11245 +- cache = next_block_group(fs_info, cache);
11246 + }
11247 +
11248 ++ if (bg_failed)
11249 ++ btrfs_warn(fs_info,
11250 ++ "failed to trim %llu block group(s), last error %d",
11251 ++ bg_failed, bg_ret);
11252 + mutex_lock(&fs_info->fs_devices->device_list_mutex);
11253 +- devices = &fs_info->fs_devices->alloc_list;
11254 +- list_for_each_entry(device, devices, dev_alloc_list) {
11255 ++ devices = &fs_info->fs_devices->devices;
11256 ++ list_for_each_entry(device, devices, dev_list) {
11257 + ret = btrfs_trim_free_extents(device, range->minlen,
11258 + &group_trimmed);
11259 +- if (ret)
11260 ++ if (ret) {
11261 ++ dev_failed++;
11262 ++ dev_ret = ret;
11263 + break;
11264 ++ }
11265 +
11266 + trimmed += group_trimmed;
11267 + }
11268 + mutex_unlock(&fs_info->fs_devices->device_list_mutex);
11269 +
11270 ++ if (dev_failed)
11271 ++ btrfs_warn(fs_info,
11272 ++ "failed to trim %llu device(s), last error %d",
11273 ++ dev_failed, dev_ret);
11274 + range->len = trimmed;
11275 +- return ret;
11276 ++ if (bg_ret)
11277 ++ return bg_ret;
11278 ++ return dev_ret;
11279 + }
11280 +
11281 + /*
11282 +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
11283 +index 51e77d72068a..22c2f38cd9b3 100644
11284 +--- a/fs/btrfs/file.c
11285 ++++ b/fs/btrfs/file.c
11286 +@@ -534,6 +534,14 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
11287 +
11288 + end_of_last_block = start_pos + num_bytes - 1;
11289 +
11290 ++ /*
11291 ++ * The pages may have already been dirty, clear out old accounting so
11292 ++ * we can set things up properly
11293 ++ */
11294 ++ clear_extent_bit(&BTRFS_I(inode)->io_tree, start_pos, end_of_last_block,
11295 ++ EXTENT_DIRTY | EXTENT_DELALLOC |
11296 ++ EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 0, 0, cached);
11297 ++
11298 + if (!btrfs_is_free_space_inode(BTRFS_I(inode))) {
11299 + if (start_pos >= isize &&
11300 + !(BTRFS_I(inode)->flags & BTRFS_INODE_PREALLOC)) {
11301 +@@ -1504,18 +1512,27 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
11302 + }
11303 + if (ordered)
11304 + btrfs_put_ordered_extent(ordered);
11305 +- clear_extent_bit(&inode->io_tree, start_pos, last_pos,
11306 +- EXTENT_DIRTY | EXTENT_DELALLOC |
11307 +- EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG,
11308 +- 0, 0, cached_state);
11309 ++
11310 + *lockstart = start_pos;
11311 + *lockend = last_pos;
11312 + ret = 1;
11313 + }
11314 +
11315 ++ /*
11316 ++ * It's possible the pages are dirty right now, but we don't want
11317 ++ * to clean them yet because copy_from_user may catch a page fault
11318 ++ * and we might have to fall back to one page at a time. If that
11319 ++ * happens, we'll unlock these pages and we'd have a window where
11320 ++ * reclaim could sneak in and drop the once-dirty page on the floor
11321 ++ * without writing it.
11322 ++ *
11323 ++ * We have the pages locked and the extent range locked, so there's
11324 ++ * no way someone can start IO on any dirty pages in this range.
11325 ++ *
11326 ++ * We'll call btrfs_dirty_pages() later on, and that will flip around
11327 ++ * delalloc bits and dirty the pages as required.
11328 ++ */
11329 + for (i = 0; i < num_pages; i++) {
11330 +- if (clear_page_dirty_for_io(pages[i]))
11331 +- account_page_redirty(pages[i]);
11332 + set_page_extent_mapped(pages[i]);
11333 + WARN_ON(!PageLocked(pages[i]));
11334 + }
11335 +@@ -2065,6 +2082,14 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
11336 + goto out;
11337 +
11338 + inode_lock(inode);
11339 ++
11340 ++ /*
11341 ++ * We take the dio_sem here because the tree log stuff can race with
11342 ++ * lockless dio writes and get an extent map logged for an extent we
11343 ++ * never waited on. We need it this high up for lockdep reasons.
11344 ++ */
11345 ++ down_write(&BTRFS_I(inode)->dio_sem);
11346 ++
11347 + atomic_inc(&root->log_batch);
11348 + full_sync = test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
11349 + &BTRFS_I(inode)->runtime_flags);
11350 +@@ -2116,6 +2141,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
11351 + ret = start_ordered_ops(inode, start, end);
11352 + }
11353 + if (ret) {
11354 ++ up_write(&BTRFS_I(inode)->dio_sem);
11355 + inode_unlock(inode);
11356 + goto out;
11357 + }
11358 +@@ -2171,6 +2197,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
11359 + * checked called fsync.
11360 + */
11361 + ret = filemap_check_wb_err(inode->i_mapping, file->f_wb_err);
11362 ++ up_write(&BTRFS_I(inode)->dio_sem);
11363 + inode_unlock(inode);
11364 + goto out;
11365 + }
11366 +@@ -2189,6 +2216,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
11367 + trans = btrfs_start_transaction(root, 0);
11368 + if (IS_ERR(trans)) {
11369 + ret = PTR_ERR(trans);
11370 ++ up_write(&BTRFS_I(inode)->dio_sem);
11371 + inode_unlock(inode);
11372 + goto out;
11373 + }
11374 +@@ -2210,6 +2238,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
11375 + * file again, but that will end up using the synchronization
11376 + * inside btrfs_sync_log to keep things safe.
11377 + */
11378 ++ up_write(&BTRFS_I(inode)->dio_sem);
11379 + inode_unlock(inode);
11380 +
11381 + /*
11382 +diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
11383 +index d5f80cb300be..a5f18333aa8c 100644
11384 +--- a/fs/btrfs/free-space-cache.c
11385 ++++ b/fs/btrfs/free-space-cache.c
11386 +@@ -10,6 +10,7 @@
11387 + #include <linux/math64.h>
11388 + #include <linux/ratelimit.h>
11389 + #include <linux/error-injection.h>
11390 ++#include <linux/sched/mm.h>
11391 + #include "ctree.h"
11392 + #include "free-space-cache.h"
11393 + #include "transaction.h"
11394 +@@ -47,6 +48,7 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
11395 + struct btrfs_free_space_header *header;
11396 + struct extent_buffer *leaf;
11397 + struct inode *inode = NULL;
11398 ++ unsigned nofs_flag;
11399 + int ret;
11400 +
11401 + key.objectid = BTRFS_FREE_SPACE_OBJECTID;
11402 +@@ -68,7 +70,13 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
11403 + btrfs_disk_key_to_cpu(&location, &disk_key);
11404 + btrfs_release_path(path);
11405 +
11406 ++ /*
11407 ++ * We are often under a trans handle at this point, so we need to make
11408 ++ * sure NOFS is set to keep us from deadlocking.
11409 ++ */
11410 ++ nofs_flag = memalloc_nofs_save();
11411 + inode = btrfs_iget(fs_info->sb, &location, root, NULL);
11412 ++ memalloc_nofs_restore(nofs_flag);
11413 + if (IS_ERR(inode))
11414 + return inode;
11415 + if (is_bad_inode(inode)) {
11416 +@@ -1686,6 +1694,8 @@ static inline void __bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
11417 + bitmap_clear(info->bitmap, start, count);
11418 +
11419 + info->bytes -= bytes;
11420 ++ if (info->max_extent_size > ctl->unit)
11421 ++ info->max_extent_size = 0;
11422 + }
11423 +
11424 + static void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
11425 +@@ -1769,6 +1779,13 @@ static int search_bitmap(struct btrfs_free_space_ctl *ctl,
11426 + return -1;
11427 + }
11428 +
11429 ++static inline u64 get_max_extent_size(struct btrfs_free_space *entry)
11430 ++{
11431 ++ if (entry->bitmap)
11432 ++ return entry->max_extent_size;
11433 ++ return entry->bytes;
11434 ++}
11435 ++
11436 + /* Cache the size of the max extent in bytes */
11437 + static struct btrfs_free_space *
11438 + find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
11439 +@@ -1790,8 +1807,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
11440 + for (node = &entry->offset_index; node; node = rb_next(node)) {
11441 + entry = rb_entry(node, struct btrfs_free_space, offset_index);
11442 + if (entry->bytes < *bytes) {
11443 +- if (entry->bytes > *max_extent_size)
11444 +- *max_extent_size = entry->bytes;
11445 ++ *max_extent_size = max(get_max_extent_size(entry),
11446 ++ *max_extent_size);
11447 + continue;
11448 + }
11449 +
11450 +@@ -1809,8 +1826,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
11451 + }
11452 +
11453 + if (entry->bytes < *bytes + align_off) {
11454 +- if (entry->bytes > *max_extent_size)
11455 +- *max_extent_size = entry->bytes;
11456 ++ *max_extent_size = max(get_max_extent_size(entry),
11457 ++ *max_extent_size);
11458 + continue;
11459 + }
11460 +
11461 +@@ -1822,8 +1839,10 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
11462 + *offset = tmp;
11463 + *bytes = size;
11464 + return entry;
11465 +- } else if (size > *max_extent_size) {
11466 +- *max_extent_size = size;
11467 ++ } else {
11468 ++ *max_extent_size =
11469 ++ max(get_max_extent_size(entry),
11470 ++ *max_extent_size);
11471 + }
11472 + continue;
11473 + }
11474 +@@ -2447,6 +2466,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
11475 + struct rb_node *n;
11476 + int count = 0;
11477 +
11478 ++ spin_lock(&ctl->tree_lock);
11479 + for (n = rb_first(&ctl->free_space_offset); n; n = rb_next(n)) {
11480 + info = rb_entry(n, struct btrfs_free_space, offset_index);
11481 + if (info->bytes >= bytes && !block_group->ro)
11482 +@@ -2455,6 +2475,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
11483 + info->offset, info->bytes,
11484 + (info->bitmap) ? "yes" : "no");
11485 + }
11486 ++ spin_unlock(&ctl->tree_lock);
11487 + btrfs_info(fs_info, "block group has cluster?: %s",
11488 + list_empty(&block_group->cluster_list) ? "no" : "yes");
11489 + btrfs_info(fs_info,
11490 +@@ -2683,8 +2704,8 @@ static u64 btrfs_alloc_from_bitmap(struct btrfs_block_group_cache *block_group,
11491 +
11492 + err = search_bitmap(ctl, entry, &search_start, &search_bytes, true);
11493 + if (err) {
11494 +- if (search_bytes > *max_extent_size)
11495 +- *max_extent_size = search_bytes;
11496 ++ *max_extent_size = max(get_max_extent_size(entry),
11497 ++ *max_extent_size);
11498 + return 0;
11499 + }
11500 +
11501 +@@ -2721,8 +2742,9 @@ u64 btrfs_alloc_from_cluster(struct btrfs_block_group_cache *block_group,
11502 +
11503 + entry = rb_entry(node, struct btrfs_free_space, offset_index);
11504 + while (1) {
11505 +- if (entry->bytes < bytes && entry->bytes > *max_extent_size)
11506 +- *max_extent_size = entry->bytes;
11507 ++ if (entry->bytes < bytes)
11508 ++ *max_extent_size = max(get_max_extent_size(entry),
11509 ++ *max_extent_size);
11510 +
11511 + if (entry->bytes < bytes ||
11512 + (!entry->bitmap && entry->offset < min_start)) {
11513 +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
11514 +index d3736fbf6774..dc0f9d089b19 100644
11515 +--- a/fs/btrfs/inode.c
11516 ++++ b/fs/btrfs/inode.c
11517 +@@ -507,6 +507,7 @@ again:
11518 + pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
11519 + if (!pages) {
11520 + /* just bail out to the uncompressed code */
11521 ++ nr_pages = 0;
11522 + goto cont;
11523 + }
11524 +
11525 +@@ -2950,6 +2951,7 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
11526 + bool truncated = false;
11527 + bool range_locked = false;
11528 + bool clear_new_delalloc_bytes = false;
11529 ++ bool clear_reserved_extent = true;
11530 +
11531 + if (!test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
11532 + !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags) &&
11533 +@@ -3053,10 +3055,12 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
11534 + logical_len, logical_len,
11535 + compress_type, 0, 0,
11536 + BTRFS_FILE_EXTENT_REG);
11537 +- if (!ret)
11538 ++ if (!ret) {
11539 ++ clear_reserved_extent = false;
11540 + btrfs_release_delalloc_bytes(fs_info,
11541 + ordered_extent->start,
11542 + ordered_extent->disk_len);
11543 ++ }
11544 + }
11545 + unpin_extent_cache(&BTRFS_I(inode)->extent_tree,
11546 + ordered_extent->file_offset, ordered_extent->len,
11547 +@@ -3117,8 +3121,13 @@ out:
11548 + * wrong we need to return the space for this ordered extent
11549 + * back to the allocator. We only free the extent in the
11550 + * truncated case if we didn't write out the extent at all.
11551 ++ *
11552 ++ * If we made it past insert_reserved_file_extent before we
11553 ++ * errored out then we don't need to do this as the accounting
11554 ++ * has already been done.
11555 + */
11556 + if ((ret || !logical_len) &&
11557 ++ clear_reserved_extent &&
11558 + !test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
11559 + !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags))
11560 + btrfs_free_reserved_extent(fs_info,
11561 +@@ -5293,11 +5302,13 @@ static void evict_inode_truncate_pages(struct inode *inode)
11562 + struct extent_state *cached_state = NULL;
11563 + u64 start;
11564 + u64 end;
11565 ++ unsigned state_flags;
11566 +
11567 + node = rb_first(&io_tree->state);
11568 + state = rb_entry(node, struct extent_state, rb_node);
11569 + start = state->start;
11570 + end = state->end;
11571 ++ state_flags = state->state;
11572 + spin_unlock(&io_tree->lock);
11573 +
11574 + lock_extent_bits(io_tree, start, end, &cached_state);
11575 +@@ -5310,7 +5321,7 @@ static void evict_inode_truncate_pages(struct inode *inode)
11576 + *
11577 + * Note, end is the bytenr of last byte, so we need + 1 here.
11578 + */
11579 +- if (state->state & EXTENT_DELALLOC)
11580 ++ if (state_flags & EXTENT_DELALLOC)
11581 + btrfs_qgroup_free_data(inode, NULL, start, end - start + 1);
11582 +
11583 + clear_extent_bit(io_tree, start, end,
11584 +diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
11585 +index ef7159646615..c972920701a3 100644
11586 +--- a/fs/btrfs/ioctl.c
11587 ++++ b/fs/btrfs/ioctl.c
11588 +@@ -496,7 +496,6 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
11589 + struct fstrim_range range;
11590 + u64 minlen = ULLONG_MAX;
11591 + u64 num_devices = 0;
11592 +- u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
11593 + int ret;
11594 +
11595 + if (!capable(CAP_SYS_ADMIN))
11596 +@@ -520,11 +519,15 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
11597 + return -EOPNOTSUPP;
11598 + if (copy_from_user(&range, arg, sizeof(range)))
11599 + return -EFAULT;
11600 +- if (range.start > total_bytes ||
11601 +- range.len < fs_info->sb->s_blocksize)
11602 ++
11603 ++ /*
11604 ++ * NOTE: Don't truncate the range using super->total_bytes. Bytenr of
11605 ++ * block group is in the logical address space, which can be any
11606 ++ * sectorsize aligned bytenr in the range [0, U64_MAX].
11607 ++ */
11608 ++ if (range.len < fs_info->sb->s_blocksize)
11609 + return -EINVAL;
11610 +
11611 +- range.len = min(range.len, total_bytes - range.start);
11612 + range.minlen = max(range.minlen, minlen);
11613 + ret = btrfs_trim_fs(fs_info, &range);
11614 + if (ret < 0)
11615 +diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
11616 +index c25dc47210a3..7407f5a5d682 100644
11617 +--- a/fs/btrfs/qgroup.c
11618 ++++ b/fs/btrfs/qgroup.c
11619 +@@ -2856,6 +2856,7 @@ qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info)
11620 + qgroup->rfer_cmpr = 0;
11621 + qgroup->excl = 0;
11622 + qgroup->excl_cmpr = 0;
11623 ++ qgroup_dirty(fs_info, qgroup);
11624 + }
11625 + spin_unlock(&fs_info->qgroup_lock);
11626 + }
11627 +@@ -3065,6 +3066,10 @@ static int __btrfs_qgroup_release_data(struct inode *inode,
11628 + int trace_op = QGROUP_RELEASE;
11629 + int ret;
11630 +
11631 ++ if (!test_bit(BTRFS_FS_QUOTA_ENABLED,
11632 ++ &BTRFS_I(inode)->root->fs_info->flags))
11633 ++ return 0;
11634 ++
11635 + /* In release case, we shouldn't have @reserved */
11636 + WARN_ON(!free && reserved);
11637 + if (free && reserved)
11638 +diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
11639 +index d60dd06445ce..cad73ed7aebc 100644
11640 +--- a/fs/btrfs/qgroup.h
11641 ++++ b/fs/btrfs/qgroup.h
11642 +@@ -261,6 +261,8 @@ void btrfs_qgroup_free_refroot(struct btrfs_fs_info *fs_info,
11643 + static inline void btrfs_qgroup_free_delayed_ref(struct btrfs_fs_info *fs_info,
11644 + u64 ref_root, u64 num_bytes)
11645 + {
11646 ++ if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
11647 ++ return;
11648 + trace_btrfs_qgroup_free_delayed_ref(fs_info, ref_root, num_bytes);
11649 + btrfs_qgroup_free_refroot(fs_info, ref_root, num_bytes,
11650 + BTRFS_QGROUP_RSV_DATA);
11651 +diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
11652 +index be94c65bb4d2..5ee49b796815 100644
11653 +--- a/fs/btrfs/relocation.c
11654 ++++ b/fs/btrfs/relocation.c
11655 +@@ -1321,7 +1321,7 @@ static void __del_reloc_root(struct btrfs_root *root)
11656 + struct mapping_node *node = NULL;
11657 + struct reloc_control *rc = fs_info->reloc_ctl;
11658 +
11659 +- if (rc) {
11660 ++ if (rc && root->node) {
11661 + spin_lock(&rc->reloc_root_tree.lock);
11662 + rb_node = tree_search(&rc->reloc_root_tree.rb_root,
11663 + root->node->start);
11664 +diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
11665 +index ff5f6c719976..9ee0aca134fc 100644
11666 +--- a/fs/btrfs/transaction.c
11667 ++++ b/fs/btrfs/transaction.c
11668 +@@ -1930,6 +1930,9 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
11669 + return ret;
11670 + }
11671 +
11672 ++ btrfs_trans_release_metadata(trans);
11673 ++ trans->block_rsv = NULL;
11674 ++
11675 + /* make a pass through all the delayed refs we have so far
11676 + * any runnings procs may add more while we are here
11677 + */
11678 +@@ -1939,9 +1942,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
11679 + return ret;
11680 + }
11681 +
11682 +- btrfs_trans_release_metadata(trans);
11683 +- trans->block_rsv = NULL;
11684 +-
11685 + cur_trans = trans->transaction;
11686 +
11687 + /*
11688 +@@ -2281,15 +2281,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
11689 +
11690 + kmem_cache_free(btrfs_trans_handle_cachep, trans);
11691 +
11692 +- /*
11693 +- * If fs has been frozen, we can not handle delayed iputs, otherwise
11694 +- * it'll result in deadlock about SB_FREEZE_FS.
11695 +- */
11696 +- if (current != fs_info->transaction_kthread &&
11697 +- current != fs_info->cleaner_kthread &&
11698 +- !test_bit(BTRFS_FS_FROZEN, &fs_info->flags))
11699 +- btrfs_run_delayed_iputs(fs_info);
11700 +-
11701 + return ret;
11702 +
11703 + scrub_continue:
11704 +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
11705 +index 84b00a29d531..8b3f14a1adf0 100644
11706 +--- a/fs/btrfs/tree-log.c
11707 ++++ b/fs/btrfs/tree-log.c
11708 +@@ -258,6 +258,13 @@ struct walk_control {
11709 + /* what stage of the replay code we're currently in */
11710 + int stage;
11711 +
11712 ++ /*
11713 ++ * Ignore any items from the inode currently being processed. Needs
11714 ++ * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
11715 ++ * the LOG_WALK_REPLAY_INODES stage.
11716 ++ */
11717 ++ bool ignore_cur_inode;
11718 ++
11719 + /* the root we are currently replaying */
11720 + struct btrfs_root *replay_dest;
11721 +
11722 +@@ -2492,6 +2499,20 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
11723 +
11724 + inode_item = btrfs_item_ptr(eb, i,
11725 + struct btrfs_inode_item);
11726 ++ /*
11727 ++ * If we have a tmpfile (O_TMPFILE) that got fsync'ed
11728 ++ * and never got linked before the fsync, skip it, as
11729 ++ * replaying it is pointless since it would be deleted
11730 ++ * later. We skip logging tmpfiles, but it's always
11731 ++ * possible we are replaying a log created with a kernel
11732 ++ * that used to log tmpfiles.
11733 ++ */
11734 ++ if (btrfs_inode_nlink(eb, inode_item) == 0) {
11735 ++ wc->ignore_cur_inode = true;
11736 ++ continue;
11737 ++ } else {
11738 ++ wc->ignore_cur_inode = false;
11739 ++ }
11740 + ret = replay_xattr_deletes(wc->trans, root, log,
11741 + path, key.objectid);
11742 + if (ret)
11743 +@@ -2529,16 +2550,8 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
11744 + root->fs_info->sectorsize);
11745 + ret = btrfs_drop_extents(wc->trans, root, inode,
11746 + from, (u64)-1, 1);
11747 +- /*
11748 +- * If the nlink count is zero here, the iput
11749 +- * will free the inode. We bump it to make
11750 +- * sure it doesn't get freed until the link
11751 +- * count fixup is done.
11752 +- */
11753 + if (!ret) {
11754 +- if (inode->i_nlink == 0)
11755 +- inc_nlink(inode);
11756 +- /* Update link count and nbytes. */
11757 ++ /* Update the inode's nbytes. */
11758 + ret = btrfs_update_inode(wc->trans,
11759 + root, inode);
11760 + }
11761 +@@ -2553,6 +2566,9 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
11762 + break;
11763 + }
11764 +
11765 ++ if (wc->ignore_cur_inode)
11766 ++ continue;
11767 ++
11768 + if (key.type == BTRFS_DIR_INDEX_KEY &&
11769 + wc->stage == LOG_WALK_REPLAY_DIR_INDEX) {
11770 + ret = replay_one_dir_item(wc->trans, root, path,
11771 +@@ -3209,9 +3225,12 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
11772 + };
11773 +
11774 + ret = walk_log_tree(trans, log, &wc);
11775 +- /* I don't think this can happen but just in case */
11776 +- if (ret)
11777 +- btrfs_abort_transaction(trans, ret);
11778 ++ if (ret) {
11779 ++ if (trans)
11780 ++ btrfs_abort_transaction(trans, ret);
11781 ++ else
11782 ++ btrfs_handle_fs_error(log->fs_info, ret, NULL);
11783 ++ }
11784 +
11785 + while (1) {
11786 + ret = find_first_extent_bit(&log->dirty_log_pages,
11787 +@@ -4505,7 +4524,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans,
11788 +
11789 + INIT_LIST_HEAD(&extents);
11790 +
11791 +- down_write(&inode->dio_sem);
11792 + write_lock(&tree->lock);
11793 + test_gen = root->fs_info->last_trans_committed;
11794 + logged_start = start;
11795 +@@ -4586,7 +4604,6 @@ process:
11796 + }
11797 + WARN_ON(!list_empty(&extents));
11798 + write_unlock(&tree->lock);
11799 +- up_write(&inode->dio_sem);
11800 +
11801 + btrfs_release_path(path);
11802 + if (!ret)
11803 +@@ -4784,7 +4801,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
11804 + ASSERT(len == i_size ||
11805 + (len == fs_info->sectorsize &&
11806 + btrfs_file_extent_compression(leaf, extent) !=
11807 +- BTRFS_COMPRESS_NONE));
11808 ++ BTRFS_COMPRESS_NONE) ||
11809 ++ (len < i_size && i_size < fs_info->sectorsize));
11810 + return 0;
11811 + }
11812 +
11813 +@@ -5718,9 +5736,33 @@ static int btrfs_log_all_parents(struct btrfs_trans_handle *trans,
11814 +
11815 + dir_inode = btrfs_iget(fs_info->sb, &inode_key,
11816 + root, NULL);
11817 +- /* If parent inode was deleted, skip it. */
11818 +- if (IS_ERR(dir_inode))
11819 +- continue;
11820 ++ /*
11821 ++ * If the parent inode was deleted, return an error to
11822 ++ * fallback to a transaction commit. This is to prevent
11823 ++ * getting an inode that was moved from one parent A to
11824 ++ * a parent B, got its former parent A deleted and then
11825 ++ * it got fsync'ed, from existing at both parents after
11826 ++ * a log replay (and the old parent still existing).
11827 ++ * Example:
11828 ++ *
11829 ++ * mkdir /mnt/A
11830 ++ * mkdir /mnt/B
11831 ++ * touch /mnt/B/bar
11832 ++ * sync
11833 ++ * mv /mnt/B/bar /mnt/A/bar
11834 ++ * mv -T /mnt/A /mnt/B
11835 ++ * fsync /mnt/B/bar
11836 ++ * <power fail>
11837 ++ *
11838 ++ * If we ignore the old parent B which got deleted,
11839 ++ * after a log replay we would have file bar linked
11840 ++ * at both parents and the old parent B would still
11841 ++ * exist.
11842 ++ */
11843 ++ if (IS_ERR(dir_inode)) {
11844 ++ ret = PTR_ERR(dir_inode);
11845 ++ goto out;
11846 ++ }
11847 +
11848 + if (ctx)
11849 + ctx->log_new_dentries = false;
11850 +@@ -5794,7 +5836,13 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
11851 + if (ret)
11852 + goto end_no_trans;
11853 +
11854 +- if (btrfs_inode_in_log(inode, trans->transid)) {
11855 ++ /*
11856 ++ * Skip already logged inodes or inodes corresponding to tmpfiles
11857 ++ * (since logging them is pointless, a link count of 0 means they
11858 ++ * will never be accessible).
11859 ++ */
11860 ++ if (btrfs_inode_in_log(inode, trans->transid) ||
11861 ++ inode->vfs_inode.i_nlink == 0) {
11862 + ret = BTRFS_NO_LOG_SYNC;
11863 + goto end_no_trans;
11864 + }
11865 +diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
11866 +index b20297988fe0..c1261b7fd292 100644
11867 +--- a/fs/cifs/cifs_debug.c
11868 ++++ b/fs/cifs/cifs_debug.c
11869 +@@ -383,6 +383,9 @@ static ssize_t cifs_stats_proc_write(struct file *file,
11870 + atomic_set(&totBufAllocCount, 0);
11871 + atomic_set(&totSmBufAllocCount, 0);
11872 + #endif /* CONFIG_CIFS_STATS2 */
11873 ++ atomic_set(&tcpSesReconnectCount, 0);
11874 ++ atomic_set(&tconInfoReconnectCount, 0);
11875 ++
11876 + spin_lock(&GlobalMid_Lock);
11877 + GlobalMaxActiveXid = 0;
11878 + GlobalCurrentXid = 0;
11879 +diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
11880 +index b611fc2e8984..7f01c6e60791 100644
11881 +--- a/fs/cifs/cifs_spnego.c
11882 ++++ b/fs/cifs/cifs_spnego.c
11883 +@@ -147,8 +147,10 @@ cifs_get_spnego_key(struct cifs_ses *sesInfo)
11884 + sprintf(dp, ";sec=krb5");
11885 + else if (server->sec_mskerberos)
11886 + sprintf(dp, ";sec=mskrb5");
11887 +- else
11888 +- goto out;
11889 ++ else {
11890 ++ cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n");
11891 ++ sprintf(dp, ";sec=krb5");
11892 ++ }
11893 +
11894 + dp = description + strlen(description);
11895 + sprintf(dp, ";uid=0x%x",
11896 +diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
11897 +index d279fa5472db..334b2b3d21a3 100644
11898 +--- a/fs/cifs/inode.c
11899 ++++ b/fs/cifs/inode.c
11900 +@@ -779,7 +779,15 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
11901 + } else if (rc == -EREMOTE) {
11902 + cifs_create_dfs_fattr(&fattr, sb);
11903 + rc = 0;
11904 +- } else if (rc == -EACCES && backup_cred(cifs_sb)) {
11905 ++ } else if ((rc == -EACCES) && backup_cred(cifs_sb) &&
11906 ++ (strcmp(server->vals->version_string, SMB1_VERSION_STRING)
11907 ++ == 0)) {
11908 ++ /*
11909 ++ * For SMB2 and later the backup intent flag is already
11910 ++ * sent if needed on open and there is no path based
11911 ++ * FindFirst operation to use to retry with
11912 ++ */
11913 ++
11914 + srchinf = kzalloc(sizeof(struct cifs_search_info),
11915 + GFP_KERNEL);
11916 + if (srchinf == NULL) {
11917 +diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
11918 +index f408994fc632..6e000392e4a4 100644
11919 +--- a/fs/cramfs/inode.c
11920 ++++ b/fs/cramfs/inode.c
11921 +@@ -202,7 +202,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
11922 + continue;
11923 + blk_offset = (blocknr - buffer_blocknr[i]) << PAGE_SHIFT;
11924 + blk_offset += offset;
11925 +- if (blk_offset + len > BUFFER_SIZE)
11926 ++ if (blk_offset > BUFFER_SIZE ||
11927 ++ blk_offset + len > BUFFER_SIZE)
11928 + continue;
11929 + return read_buffers[i] + blk_offset;
11930 + }
11931 +diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
11932 +index 39c20ef26db4..79debfc9cef9 100644
11933 +--- a/fs/crypto/fscrypt_private.h
11934 ++++ b/fs/crypto/fscrypt_private.h
11935 +@@ -83,10 +83,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
11936 + filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
11937 + return true;
11938 +
11939 +- if (contents_mode == FS_ENCRYPTION_MODE_SPECK128_256_XTS &&
11940 +- filenames_mode == FS_ENCRYPTION_MODE_SPECK128_256_CTS)
11941 +- return true;
11942 +-
11943 + return false;
11944 + }
11945 +
11946 +diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
11947 +index e997ca51192f..7874c9bb2fc5 100644
11948 +--- a/fs/crypto/keyinfo.c
11949 ++++ b/fs/crypto/keyinfo.c
11950 +@@ -174,16 +174,6 @@ static struct fscrypt_mode {
11951 + .cipher_str = "cts(cbc(aes))",
11952 + .keysize = 16,
11953 + },
11954 +- [FS_ENCRYPTION_MODE_SPECK128_256_XTS] = {
11955 +- .friendly_name = "Speck128/256-XTS",
11956 +- .cipher_str = "xts(speck128)",
11957 +- .keysize = 64,
11958 +- },
11959 +- [FS_ENCRYPTION_MODE_SPECK128_256_CTS] = {
11960 +- .friendly_name = "Speck128/256-CTS-CBC",
11961 +- .cipher_str = "cts(cbc(speck128))",
11962 +- .keysize = 32,
11963 +- },
11964 + };
11965 +
11966 + static struct fscrypt_mode *
11967 +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
11968 +index aa1ce53d0c87..7fcc11fcbbbd 100644
11969 +--- a/fs/ext4/ext4.h
11970 ++++ b/fs/ext4/ext4.h
11971 +@@ -1387,7 +1387,8 @@ struct ext4_sb_info {
11972 + u32 s_min_batch_time;
11973 + struct block_device *journal_bdev;
11974 + #ifdef CONFIG_QUOTA
11975 +- char *s_qf_names[EXT4_MAXQUOTAS]; /* Names of quota files with journalled quota */
11976 ++ /* Names of quota files with journalled quota */
11977 ++ char __rcu *s_qf_names[EXT4_MAXQUOTAS];
11978 + int s_jquota_fmt; /* Format of quota to use */
11979 + #endif
11980 + unsigned int s_want_extra_isize; /* New inodes should reserve # bytes */
11981 +diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
11982 +index 7b4736022761..9c4bac18cc6c 100644
11983 +--- a/fs/ext4/inline.c
11984 ++++ b/fs/ext4/inline.c
11985 +@@ -863,7 +863,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
11986 + handle_t *handle;
11987 + struct page *page;
11988 + struct ext4_iloc iloc;
11989 +- int retries;
11990 ++ int retries = 0;
11991 +
11992 + ret = ext4_get_inode_loc(inode, &iloc);
11993 + if (ret)
11994 +diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
11995 +index a7074115d6f6..0edee31913d1 100644
11996 +--- a/fs/ext4/ioctl.c
11997 ++++ b/fs/ext4/ioctl.c
11998 +@@ -67,7 +67,6 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
11999 + ei1 = EXT4_I(inode1);
12000 + ei2 = EXT4_I(inode2);
12001 +
12002 +- swap(inode1->i_flags, inode2->i_flags);
12003 + swap(inode1->i_version, inode2->i_version);
12004 + swap(inode1->i_blocks, inode2->i_blocks);
12005 + swap(inode1->i_bytes, inode2->i_bytes);
12006 +@@ -85,6 +84,21 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
12007 + i_size_write(inode2, isize);
12008 + }
12009 +
12010 ++static void reset_inode_seed(struct inode *inode)
12011 ++{
12012 ++ struct ext4_inode_info *ei = EXT4_I(inode);
12013 ++ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
12014 ++ __le32 inum = cpu_to_le32(inode->i_ino);
12015 ++ __le32 gen = cpu_to_le32(inode->i_generation);
12016 ++ __u32 csum;
12017 ++
12018 ++ if (!ext4_has_metadata_csum(inode->i_sb))
12019 ++ return;
12020 ++
12021 ++ csum = ext4_chksum(sbi, sbi->s_csum_seed, (__u8 *)&inum, sizeof(inum));
12022 ++ ei->i_csum_seed = ext4_chksum(sbi, csum, (__u8 *)&gen, sizeof(gen));
12023 ++}
12024 ++
12025 + /**
12026 + * Swap the information from the given @inode and the inode
12027 + * EXT4_BOOT_LOADER_INO. It will basically swap i_data and all other
12028 +@@ -102,10 +116,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
12029 + struct inode *inode_bl;
12030 + struct ext4_inode_info *ei_bl;
12031 +
12032 +- if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode))
12033 ++ if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
12034 ++ IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
12035 ++ ext4_has_inline_data(inode))
12036 + return -EINVAL;
12037 +
12038 +- if (!inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
12039 ++ if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
12040 ++ !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
12041 + return -EPERM;
12042 +
12043 + inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO);
12044 +@@ -120,13 +137,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
12045 + * that only 1 swap_inode_boot_loader is running. */
12046 + lock_two_nondirectories(inode, inode_bl);
12047 +
12048 +- truncate_inode_pages(&inode->i_data, 0);
12049 +- truncate_inode_pages(&inode_bl->i_data, 0);
12050 +-
12051 + /* Wait for all existing dio workers */
12052 + inode_dio_wait(inode);
12053 + inode_dio_wait(inode_bl);
12054 +
12055 ++ truncate_inode_pages(&inode->i_data, 0);
12056 ++ truncate_inode_pages(&inode_bl->i_data, 0);
12057 ++
12058 + handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
12059 + if (IS_ERR(handle)) {
12060 + err = -EINVAL;
12061 +@@ -159,6 +176,8 @@ static long swap_inode_boot_loader(struct super_block *sb,
12062 +
12063 + inode->i_generation = prandom_u32();
12064 + inode_bl->i_generation = prandom_u32();
12065 ++ reset_inode_seed(inode);
12066 ++ reset_inode_seed(inode_bl);
12067 +
12068 + ext4_discard_preallocations(inode);
12069 +
12070 +@@ -169,6 +188,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
12071 + inode->i_ino, err);
12072 + /* Revert all changes: */
12073 + swap_inode_data(inode, inode_bl);
12074 ++ ext4_mark_inode_dirty(handle, inode);
12075 + } else {
12076 + err = ext4_mark_inode_dirty(handle, inode_bl);
12077 + if (err < 0) {
12078 +@@ -178,6 +198,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
12079 + /* Revert all changes: */
12080 + swap_inode_data(inode, inode_bl);
12081 + ext4_mark_inode_dirty(handle, inode);
12082 ++ ext4_mark_inode_dirty(handle, inode_bl);
12083 + }
12084 + }
12085 + ext4_journal_stop(handle);
12086 +@@ -339,19 +360,14 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
12087 + if (projid_eq(kprojid, EXT4_I(inode)->i_projid))
12088 + return 0;
12089 +
12090 +- err = mnt_want_write_file(filp);
12091 +- if (err)
12092 +- return err;
12093 +-
12094 + err = -EPERM;
12095 +- inode_lock(inode);
12096 + /* Is it quota file? Do not allow user to mess with it */
12097 + if (ext4_is_quota_file(inode))
12098 +- goto out_unlock;
12099 ++ return err;
12100 +
12101 + err = ext4_get_inode_loc(inode, &iloc);
12102 + if (err)
12103 +- goto out_unlock;
12104 ++ return err;
12105 +
12106 + raw_inode = ext4_raw_inode(&iloc);
12107 + if (!EXT4_FITS_IN_INODE(raw_inode, ei, i_projid)) {
12108 +@@ -359,20 +375,20 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
12109 + EXT4_SB(sb)->s_want_extra_isize,
12110 + &iloc);
12111 + if (err)
12112 +- goto out_unlock;
12113 ++ return err;
12114 + } else {
12115 + brelse(iloc.bh);
12116 + }
12117 +
12118 +- dquot_initialize(inode);
12119 ++ err = dquot_initialize(inode);
12120 ++ if (err)
12121 ++ return err;
12122 +
12123 + handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
12124 + EXT4_QUOTA_INIT_BLOCKS(sb) +
12125 + EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
12126 +- if (IS_ERR(handle)) {
12127 +- err = PTR_ERR(handle);
12128 +- goto out_unlock;
12129 +- }
12130 ++ if (IS_ERR(handle))
12131 ++ return PTR_ERR(handle);
12132 +
12133 + err = ext4_reserve_inode_write(handle, inode, &iloc);
12134 + if (err)
12135 +@@ -400,9 +416,6 @@ out_dirty:
12136 + err = rc;
12137 + out_stop:
12138 + ext4_journal_stop(handle);
12139 +-out_unlock:
12140 +- inode_unlock(inode);
12141 +- mnt_drop_write_file(filp);
12142 + return err;
12143 + }
12144 + #else
12145 +@@ -626,6 +639,30 @@ group_add_out:
12146 + return err;
12147 + }
12148 +
12149 ++static int ext4_ioctl_check_project(struct inode *inode, struct fsxattr *fa)
12150 ++{
12151 ++ /*
12152 ++ * Project Quota ID state is only allowed to change from within the init
12153 ++ * namespace. Enforce that restriction only if we are trying to change
12154 ++ * the quota ID state. Everything else is allowed in user namespaces.
12155 ++ */
12156 ++ if (current_user_ns() == &init_user_ns)
12157 ++ return 0;
12158 ++
12159 ++ if (__kprojid_val(EXT4_I(inode)->i_projid) != fa->fsx_projid)
12160 ++ return -EINVAL;
12161 ++
12162 ++ if (ext4_test_inode_flag(inode, EXT4_INODE_PROJINHERIT)) {
12163 ++ if (!(fa->fsx_xflags & FS_XFLAG_PROJINHERIT))
12164 ++ return -EINVAL;
12165 ++ } else {
12166 ++ if (fa->fsx_xflags & FS_XFLAG_PROJINHERIT)
12167 ++ return -EINVAL;
12168 ++ }
12169 ++
12170 ++ return 0;
12171 ++}
12172 ++
12173 + long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
12174 + {
12175 + struct inode *inode = file_inode(filp);
12176 +@@ -1025,19 +1062,19 @@ resizefs_out:
12177 + return err;
12178 +
12179 + inode_lock(inode);
12180 ++ err = ext4_ioctl_check_project(inode, &fa);
12181 ++ if (err)
12182 ++ goto out;
12183 + flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) |
12184 + (flags & EXT4_FL_XFLAG_VISIBLE);
12185 + err = ext4_ioctl_setflags(inode, flags);
12186 +- inode_unlock(inode);
12187 +- mnt_drop_write_file(filp);
12188 + if (err)
12189 +- return err;
12190 +-
12191 ++ goto out;
12192 + err = ext4_ioctl_setproject(filp, fa.fsx_projid);
12193 +- if (err)
12194 +- return err;
12195 +-
12196 +- return 0;
12197 ++out:
12198 ++ inode_unlock(inode);
12199 ++ mnt_drop_write_file(filp);
12200 ++ return err;
12201 + }
12202 + case EXT4_IOC_SHUTDOWN:
12203 + return ext4_shutdown(sb, arg);
12204 +diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
12205 +index 8e17efdcbf11..887353875060 100644
12206 +--- a/fs/ext4/move_extent.c
12207 ++++ b/fs/ext4/move_extent.c
12208 +@@ -518,9 +518,13 @@ mext_check_arguments(struct inode *orig_inode,
12209 + orig_inode->i_ino, donor_inode->i_ino);
12210 + return -EINVAL;
12211 + }
12212 +- if (orig_eof < orig_start + *len - 1)
12213 ++ if (orig_eof <= orig_start)
12214 ++ *len = 0;
12215 ++ else if (orig_eof < orig_start + *len - 1)
12216 + *len = orig_eof - orig_start;
12217 +- if (donor_eof < donor_start + *len - 1)
12218 ++ if (donor_eof <= donor_start)
12219 ++ *len = 0;
12220 ++ else if (donor_eof < donor_start + *len - 1)
12221 + *len = donor_eof - donor_start;
12222 + if (!*len) {
12223 + ext4_debug("ext4 move extent: len should not be 0 "
12224 +diff --git a/fs/ext4/super.c b/fs/ext4/super.c
12225 +index a7a0fffc3ae8..8d91d50ccf42 100644
12226 +--- a/fs/ext4/super.c
12227 ++++ b/fs/ext4/super.c
12228 +@@ -895,6 +895,18 @@ static inline void ext4_quota_off_umount(struct super_block *sb)
12229 + for (type = 0; type < EXT4_MAXQUOTAS; type++)
12230 + ext4_quota_off(sb, type);
12231 + }
12232 ++
12233 ++/*
12234 ++ * This is a helper function which is used in the mount/remount
12235 ++ * codepaths (which holds s_umount) to fetch the quota file name.
12236 ++ */
12237 ++static inline char *get_qf_name(struct super_block *sb,
12238 ++ struct ext4_sb_info *sbi,
12239 ++ int type)
12240 ++{
12241 ++ return rcu_dereference_protected(sbi->s_qf_names[type],
12242 ++ lockdep_is_held(&sb->s_umount));
12243 ++}
12244 + #else
12245 + static inline void ext4_quota_off_umount(struct super_block *sb)
12246 + {
12247 +@@ -946,7 +958,7 @@ static void ext4_put_super(struct super_block *sb)
12248 + percpu_free_rwsem(&sbi->s_journal_flag_rwsem);
12249 + #ifdef CONFIG_QUOTA
12250 + for (i = 0; i < EXT4_MAXQUOTAS; i++)
12251 +- kfree(sbi->s_qf_names[i]);
12252 ++ kfree(get_qf_name(sb, sbi, i));
12253 + #endif
12254 +
12255 + /* Debugging code just in case the in-memory inode orphan list
12256 +@@ -1511,11 +1523,10 @@ static const char deprecated_msg[] =
12257 + static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
12258 + {
12259 + struct ext4_sb_info *sbi = EXT4_SB(sb);
12260 +- char *qname;
12261 ++ char *qname, *old_qname = get_qf_name(sb, sbi, qtype);
12262 + int ret = -1;
12263 +
12264 +- if (sb_any_quota_loaded(sb) &&
12265 +- !sbi->s_qf_names[qtype]) {
12266 ++ if (sb_any_quota_loaded(sb) && !old_qname) {
12267 + ext4_msg(sb, KERN_ERR,
12268 + "Cannot change journaled "
12269 + "quota options when quota turned on");
12270 +@@ -1532,8 +1543,8 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
12271 + "Not enough memory for storing quotafile name");
12272 + return -1;
12273 + }
12274 +- if (sbi->s_qf_names[qtype]) {
12275 +- if (strcmp(sbi->s_qf_names[qtype], qname) == 0)
12276 ++ if (old_qname) {
12277 ++ if (strcmp(old_qname, qname) == 0)
12278 + ret = 1;
12279 + else
12280 + ext4_msg(sb, KERN_ERR,
12281 +@@ -1546,7 +1557,7 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
12282 + "quotafile must be on filesystem root");
12283 + goto errout;
12284 + }
12285 +- sbi->s_qf_names[qtype] = qname;
12286 ++ rcu_assign_pointer(sbi->s_qf_names[qtype], qname);
12287 + set_opt(sb, QUOTA);
12288 + return 1;
12289 + errout:
12290 +@@ -1558,15 +1569,16 @@ static int clear_qf_name(struct super_block *sb, int qtype)
12291 + {
12292 +
12293 + struct ext4_sb_info *sbi = EXT4_SB(sb);
12294 ++ char *old_qname = get_qf_name(sb, sbi, qtype);
12295 +
12296 +- if (sb_any_quota_loaded(sb) &&
12297 +- sbi->s_qf_names[qtype]) {
12298 ++ if (sb_any_quota_loaded(sb) && old_qname) {
12299 + ext4_msg(sb, KERN_ERR, "Cannot change journaled quota options"
12300 + " when quota turned on");
12301 + return -1;
12302 + }
12303 +- kfree(sbi->s_qf_names[qtype]);
12304 +- sbi->s_qf_names[qtype] = NULL;
12305 ++ rcu_assign_pointer(sbi->s_qf_names[qtype], NULL);
12306 ++ synchronize_rcu();
12307 ++ kfree(old_qname);
12308 + return 1;
12309 + }
12310 + #endif
12311 +@@ -1941,7 +1953,7 @@ static int parse_options(char *options, struct super_block *sb,
12312 + int is_remount)
12313 + {
12314 + struct ext4_sb_info *sbi = EXT4_SB(sb);
12315 +- char *p;
12316 ++ char *p, __maybe_unused *usr_qf_name, __maybe_unused *grp_qf_name;
12317 + substring_t args[MAX_OPT_ARGS];
12318 + int token;
12319 +
12320 +@@ -1972,11 +1984,13 @@ static int parse_options(char *options, struct super_block *sb,
12321 + "Cannot enable project quota enforcement.");
12322 + return 0;
12323 + }
12324 +- if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA]) {
12325 +- if (test_opt(sb, USRQUOTA) && sbi->s_qf_names[USRQUOTA])
12326 ++ usr_qf_name = get_qf_name(sb, sbi, USRQUOTA);
12327 ++ grp_qf_name = get_qf_name(sb, sbi, GRPQUOTA);
12328 ++ if (usr_qf_name || grp_qf_name) {
12329 ++ if (test_opt(sb, USRQUOTA) && usr_qf_name)
12330 + clear_opt(sb, USRQUOTA);
12331 +
12332 +- if (test_opt(sb, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA])
12333 ++ if (test_opt(sb, GRPQUOTA) && grp_qf_name)
12334 + clear_opt(sb, GRPQUOTA);
12335 +
12336 + if (test_opt(sb, GRPQUOTA) || test_opt(sb, USRQUOTA)) {
12337 +@@ -2010,6 +2024,7 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
12338 + {
12339 + #if defined(CONFIG_QUOTA)
12340 + struct ext4_sb_info *sbi = EXT4_SB(sb);
12341 ++ char *usr_qf_name, *grp_qf_name;
12342 +
12343 + if (sbi->s_jquota_fmt) {
12344 + char *fmtname = "";
12345 +@@ -2028,11 +2043,14 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
12346 + seq_printf(seq, ",jqfmt=%s", fmtname);
12347 + }
12348 +
12349 +- if (sbi->s_qf_names[USRQUOTA])
12350 +- seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
12351 +-
12352 +- if (sbi->s_qf_names[GRPQUOTA])
12353 +- seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
12354 ++ rcu_read_lock();
12355 ++ usr_qf_name = rcu_dereference(sbi->s_qf_names[USRQUOTA]);
12356 ++ grp_qf_name = rcu_dereference(sbi->s_qf_names[GRPQUOTA]);
12357 ++ if (usr_qf_name)
12358 ++ seq_show_option(seq, "usrjquota", usr_qf_name);
12359 ++ if (grp_qf_name)
12360 ++ seq_show_option(seq, "grpjquota", grp_qf_name);
12361 ++ rcu_read_unlock();
12362 + #endif
12363 + }
12364 +
12365 +@@ -5081,6 +5099,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
12366 + int err = 0;
12367 + #ifdef CONFIG_QUOTA
12368 + int i, j;
12369 ++ char *to_free[EXT4_MAXQUOTAS];
12370 + #endif
12371 + char *orig_data = kstrdup(data, GFP_KERNEL);
12372 +
12373 +@@ -5097,8 +5116,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
12374 + old_opts.s_jquota_fmt = sbi->s_jquota_fmt;
12375 + for (i = 0; i < EXT4_MAXQUOTAS; i++)
12376 + if (sbi->s_qf_names[i]) {
12377 +- old_opts.s_qf_names[i] = kstrdup(sbi->s_qf_names[i],
12378 +- GFP_KERNEL);
12379 ++ char *qf_name = get_qf_name(sb, sbi, i);
12380 ++
12381 ++ old_opts.s_qf_names[i] = kstrdup(qf_name, GFP_KERNEL);
12382 + if (!old_opts.s_qf_names[i]) {
12383 + for (j = 0; j < i; j++)
12384 + kfree(old_opts.s_qf_names[j]);
12385 +@@ -5327,9 +5347,12 @@ restore_opts:
12386 + #ifdef CONFIG_QUOTA
12387 + sbi->s_jquota_fmt = old_opts.s_jquota_fmt;
12388 + for (i = 0; i < EXT4_MAXQUOTAS; i++) {
12389 +- kfree(sbi->s_qf_names[i]);
12390 +- sbi->s_qf_names[i] = old_opts.s_qf_names[i];
12391 ++ to_free[i] = get_qf_name(sb, sbi, i);
12392 ++ rcu_assign_pointer(sbi->s_qf_names[i], old_opts.s_qf_names[i]);
12393 + }
12394 ++ synchronize_rcu();
12395 ++ for (i = 0; i < EXT4_MAXQUOTAS; i++)
12396 ++ kfree(to_free[i]);
12397 + #endif
12398 + kfree(orig_data);
12399 + return err;
12400 +@@ -5520,7 +5543,7 @@ static int ext4_write_info(struct super_block *sb, int type)
12401 + */
12402 + static int ext4_quota_on_mount(struct super_block *sb, int type)
12403 + {
12404 +- return dquot_quota_on_mount(sb, EXT4_SB(sb)->s_qf_names[type],
12405 ++ return dquot_quota_on_mount(sb, get_qf_name(sb, EXT4_SB(sb), type),
12406 + EXT4_SB(sb)->s_jquota_fmt, type);
12407 + }
12408 +
12409 +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
12410 +index b61954d40c25..e397515261dc 100644
12411 +--- a/fs/f2fs/data.c
12412 ++++ b/fs/f2fs/data.c
12413 +@@ -80,7 +80,8 @@ static void __read_end_io(struct bio *bio)
12414 + /* PG_error was set if any post_read step failed */
12415 + if (bio->bi_status || PageError(page)) {
12416 + ClearPageUptodate(page);
12417 +- SetPageError(page);
12418 ++ /* will re-read again later */
12419 ++ ClearPageError(page);
12420 + } else {
12421 + SetPageUptodate(page);
12422 + }
12423 +@@ -453,12 +454,16 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
12424 + bio_put(bio);
12425 + return -EFAULT;
12426 + }
12427 +- bio_set_op_attrs(bio, fio->op, fio->op_flags);
12428 +
12429 +- __submit_bio(fio->sbi, bio, fio->type);
12430 ++ if (fio->io_wbc && !is_read_io(fio->op))
12431 ++ wbc_account_io(fio->io_wbc, page, PAGE_SIZE);
12432 ++
12433 ++ bio_set_op_attrs(bio, fio->op, fio->op_flags);
12434 +
12435 + if (!is_read_io(fio->op))
12436 + inc_page_count(fio->sbi, WB_DATA_TYPE(fio->page));
12437 ++
12438 ++ __submit_bio(fio->sbi, bio, fio->type);
12439 + return 0;
12440 + }
12441 +
12442 +@@ -580,6 +585,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
12443 + bio_put(bio);
12444 + return -EFAULT;
12445 + }
12446 ++ ClearPageError(page);
12447 + __submit_bio(F2FS_I_SB(inode), bio, DATA);
12448 + return 0;
12449 + }
12450 +@@ -1524,6 +1530,7 @@ submit_and_realloc:
12451 + if (bio_add_page(bio, page, blocksize, 0) < blocksize)
12452 + goto submit_and_realloc;
12453 +
12454 ++ ClearPageError(page);
12455 + last_block_in_bio = block_nr;
12456 + goto next_page;
12457 + set_error_page:
12458 +@@ -2494,10 +2501,6 @@ static int f2fs_set_data_page_dirty(struct page *page)
12459 + if (!PageUptodate(page))
12460 + SetPageUptodate(page);
12461 +
12462 +- /* don't remain PG_checked flag which was set during GC */
12463 +- if (is_cold_data(page))
12464 +- clear_cold_data(page);
12465 +-
12466 + if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
12467 + if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
12468 + f2fs_register_inmem_page(inode, page);
12469 +diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
12470 +index 231b77ef5a53..a70cd2580eae 100644
12471 +--- a/fs/f2fs/extent_cache.c
12472 ++++ b/fs/f2fs/extent_cache.c
12473 +@@ -308,14 +308,13 @@ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
12474 + return count - atomic_read(&et->node_cnt);
12475 + }
12476 +
12477 +-static void __drop_largest_extent(struct inode *inode,
12478 ++static void __drop_largest_extent(struct extent_tree *et,
12479 + pgoff_t fofs, unsigned int len)
12480 + {
12481 +- struct extent_info *largest = &F2FS_I(inode)->extent_tree->largest;
12482 +-
12483 +- if (fofs < largest->fofs + largest->len && fofs + len > largest->fofs) {
12484 +- largest->len = 0;
12485 +- f2fs_mark_inode_dirty_sync(inode, true);
12486 ++ if (fofs < et->largest.fofs + et->largest.len &&
12487 ++ fofs + len > et->largest.fofs) {
12488 ++ et->largest.len = 0;
12489 ++ et->largest_updated = true;
12490 + }
12491 + }
12492 +
12493 +@@ -416,12 +415,11 @@ out:
12494 + return ret;
12495 + }
12496 +
12497 +-static struct extent_node *__try_merge_extent_node(struct inode *inode,
12498 ++static struct extent_node *__try_merge_extent_node(struct f2fs_sb_info *sbi,
12499 + struct extent_tree *et, struct extent_info *ei,
12500 + struct extent_node *prev_ex,
12501 + struct extent_node *next_ex)
12502 + {
12503 +- struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
12504 + struct extent_node *en = NULL;
12505 +
12506 + if (prev_ex && __is_back_mergeable(ei, &prev_ex->ei)) {
12507 +@@ -443,7 +441,7 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
12508 + if (!en)
12509 + return NULL;
12510 +
12511 +- __try_update_largest_extent(inode, et, en);
12512 ++ __try_update_largest_extent(et, en);
12513 +
12514 + spin_lock(&sbi->extent_lock);
12515 + if (!list_empty(&en->list)) {
12516 +@@ -454,12 +452,11 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
12517 + return en;
12518 + }
12519 +
12520 +-static struct extent_node *__insert_extent_tree(struct inode *inode,
12521 ++static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
12522 + struct extent_tree *et, struct extent_info *ei,
12523 + struct rb_node **insert_p,
12524 + struct rb_node *insert_parent)
12525 + {
12526 +- struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
12527 + struct rb_node **p;
12528 + struct rb_node *parent = NULL;
12529 + struct extent_node *en = NULL;
12530 +@@ -476,7 +473,7 @@ do_insert:
12531 + if (!en)
12532 + return NULL;
12533 +
12534 +- __try_update_largest_extent(inode, et, en);
12535 ++ __try_update_largest_extent(et, en);
12536 +
12537 + /* update in global extent list */
12538 + spin_lock(&sbi->extent_lock);
12539 +@@ -497,6 +494,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
12540 + struct rb_node **insert_p = NULL, *insert_parent = NULL;
12541 + unsigned int end = fofs + len;
12542 + unsigned int pos = (unsigned int)fofs;
12543 ++ bool updated = false;
12544 +
12545 + if (!et)
12546 + return;
12547 +@@ -517,7 +515,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
12548 + * drop largest extent before lookup, in case it's already
12549 + * been shrunk from extent tree
12550 + */
12551 +- __drop_largest_extent(inode, fofs, len);
12552 ++ __drop_largest_extent(et, fofs, len);
12553 +
12554 + /* 1. lookup first extent node in range [fofs, fofs + len - 1] */
12555 + en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
12556 +@@ -550,7 +548,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
12557 + set_extent_info(&ei, end,
12558 + end - dei.fofs + dei.blk,
12559 + org_end - end);
12560 +- en1 = __insert_extent_tree(inode, et, &ei,
12561 ++ en1 = __insert_extent_tree(sbi, et, &ei,
12562 + NULL, NULL);
12563 + next_en = en1;
12564 + } else {
12565 +@@ -570,7 +568,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
12566 + }
12567 +
12568 + if (parts)
12569 +- __try_update_largest_extent(inode, et, en);
12570 ++ __try_update_largest_extent(et, en);
12571 + else
12572 + __release_extent_node(sbi, et, en);
12573 +
12574 +@@ -590,15 +588,16 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
12575 + if (blkaddr) {
12576 +
12577 + set_extent_info(&ei, fofs, blkaddr, len);
12578 +- if (!__try_merge_extent_node(inode, et, &ei, prev_en, next_en))
12579 +- __insert_extent_tree(inode, et, &ei,
12580 ++ if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en))
12581 ++ __insert_extent_tree(sbi, et, &ei,
12582 + insert_p, insert_parent);
12583 +
12584 + /* give up extent_cache, if split and small updates happen */
12585 + if (dei.len >= 1 &&
12586 + prev.len < F2FS_MIN_EXTENT_LEN &&
12587 + et->largest.len < F2FS_MIN_EXTENT_LEN) {
12588 +- __drop_largest_extent(inode, 0, UINT_MAX);
12589 ++ et->largest.len = 0;
12590 ++ et->largest_updated = true;
12591 + set_inode_flag(inode, FI_NO_EXTENT);
12592 + }
12593 + }
12594 +@@ -606,7 +605,15 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
12595 + if (is_inode_flag_set(inode, FI_NO_EXTENT))
12596 + __free_extent_tree(sbi, et);
12597 +
12598 ++ if (et->largest_updated) {
12599 ++ et->largest_updated = false;
12600 ++ updated = true;
12601 ++ }
12602 ++
12603 + write_unlock(&et->lock);
12604 ++
12605 ++ if (updated)
12606 ++ f2fs_mark_inode_dirty_sync(inode, true);
12607 + }
12608 +
12609 + unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
12610 +@@ -705,6 +712,7 @@ void f2fs_drop_extent_tree(struct inode *inode)
12611 + {
12612 + struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
12613 + struct extent_tree *et = F2FS_I(inode)->extent_tree;
12614 ++ bool updated = false;
12615 +
12616 + if (!f2fs_may_extent_tree(inode))
12617 + return;
12618 +@@ -713,8 +721,13 @@ void f2fs_drop_extent_tree(struct inode *inode)
12619 +
12620 + write_lock(&et->lock);
12621 + __free_extent_tree(sbi, et);
12622 +- __drop_largest_extent(inode, 0, UINT_MAX);
12623 ++ if (et->largest.len) {
12624 ++ et->largest.len = 0;
12625 ++ updated = true;
12626 ++ }
12627 + write_unlock(&et->lock);
12628 ++ if (updated)
12629 ++ f2fs_mark_inode_dirty_sync(inode, true);
12630 + }
12631 +
12632 + void f2fs_destroy_extent_tree(struct inode *inode)
12633 +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
12634 +index b6f2dc8163e1..181aade161e8 100644
12635 +--- a/fs/f2fs/f2fs.h
12636 ++++ b/fs/f2fs/f2fs.h
12637 +@@ -556,6 +556,7 @@ struct extent_tree {
12638 + struct list_head list; /* to be used by sbi->zombie_list */
12639 + rwlock_t lock; /* protect extent info rb-tree */
12640 + atomic_t node_cnt; /* # of extent node in rb-tree*/
12641 ++ bool largest_updated; /* largest extent updated */
12642 + };
12643 +
12644 + /*
12645 +@@ -736,12 +737,12 @@ static inline bool __is_front_mergeable(struct extent_info *cur,
12646 + }
12647 +
12648 + extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync);
12649 +-static inline void __try_update_largest_extent(struct inode *inode,
12650 +- struct extent_tree *et, struct extent_node *en)
12651 ++static inline void __try_update_largest_extent(struct extent_tree *et,
12652 ++ struct extent_node *en)
12653 + {
12654 + if (en->ei.len > et->largest.len) {
12655 + et->largest = en->ei;
12656 +- f2fs_mark_inode_dirty_sync(inode, true);
12657 ++ et->largest_updated = true;
12658 + }
12659 + }
12660 +
12661 +diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
12662 +index cf0f944fcaea..4a2e75bce36a 100644
12663 +--- a/fs/f2fs/inode.c
12664 ++++ b/fs/f2fs/inode.c
12665 +@@ -287,6 +287,12 @@ static int do_read_inode(struct inode *inode)
12666 + if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode))
12667 + __recover_inline_status(inode, node_page);
12668 +
12669 ++ /* try to recover cold bit for non-dir inode */
12670 ++ if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_page)) {
12671 ++ set_cold_node(node_page, false);
12672 ++ set_page_dirty(node_page);
12673 ++ }
12674 ++
12675 + /* get rdev by using inline_info */
12676 + __get_inode_rdev(inode, ri);
12677 +
12678 +diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
12679 +index 52ed02b0327c..ec22e7c5b37e 100644
12680 +--- a/fs/f2fs/node.c
12681 ++++ b/fs/f2fs/node.c
12682 +@@ -2356,7 +2356,7 @@ retry:
12683 + if (!PageUptodate(ipage))
12684 + SetPageUptodate(ipage);
12685 + fill_node_footer(ipage, ino, ino, 0, true);
12686 +- set_cold_node(page, false);
12687 ++ set_cold_node(ipage, false);
12688 +
12689 + src = F2FS_INODE(page);
12690 + dst = F2FS_INODE(ipage);
12691 +@@ -2379,6 +2379,13 @@ retry:
12692 + F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
12693 + i_projid))
12694 + dst->i_projid = src->i_projid;
12695 ++
12696 ++ if (f2fs_sb_has_inode_crtime(sbi->sb) &&
12697 ++ F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
12698 ++ i_crtime_nsec)) {
12699 ++ dst->i_crtime = src->i_crtime;
12700 ++ dst->i_crtime_nsec = src->i_crtime_nsec;
12701 ++ }
12702 + }
12703 +
12704 + new_ni = old_ni;
12705 +diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
12706 +index ad70e62c5da4..a69a2c5c6682 100644
12707 +--- a/fs/f2fs/recovery.c
12708 ++++ b/fs/f2fs/recovery.c
12709 +@@ -221,6 +221,7 @@ static void recover_inode(struct inode *inode, struct page *page)
12710 + inode->i_mtime.tv_nsec = le32_to_cpu(raw->i_mtime_nsec);
12711 +
12712 + F2FS_I(inode)->i_advise = raw->i_advise;
12713 ++ F2FS_I(inode)->i_flags = le32_to_cpu(raw->i_flags);
12714 +
12715 + recover_inline_flags(inode, raw);
12716 +
12717 +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
12718 +index 742147cbe759..a3e90e6f72a8 100644
12719 +--- a/fs/f2fs/super.c
12720 ++++ b/fs/f2fs/super.c
12721 +@@ -1820,7 +1820,9 @@ static int f2fs_quota_off(struct super_block *sb, int type)
12722 + if (!inode || !igrab(inode))
12723 + return dquot_quota_off(sb, type);
12724 +
12725 +- f2fs_quota_sync(sb, type);
12726 ++ err = f2fs_quota_sync(sb, type);
12727 ++ if (err)
12728 ++ goto out_put;
12729 +
12730 + err = dquot_quota_off(sb, type);
12731 + if (err || f2fs_sb_has_quota_ino(sb))
12732 +@@ -1839,9 +1841,20 @@ out_put:
12733 + void f2fs_quota_off_umount(struct super_block *sb)
12734 + {
12735 + int type;
12736 ++ int err;
12737 ++
12738 ++ for (type = 0; type < MAXQUOTAS; type++) {
12739 ++ err = f2fs_quota_off(sb, type);
12740 ++ if (err) {
12741 ++ int ret = dquot_quota_off(sb, type);
12742 +
12743 +- for (type = 0; type < MAXQUOTAS; type++)
12744 +- f2fs_quota_off(sb, type);
12745 ++ f2fs_msg(sb, KERN_ERR,
12746 ++ "Fail to turn off disk quota "
12747 ++ "(type: %d, err: %d, ret:%d), Please "
12748 ++ "run fsck to fix it.", type, err, ret);
12749 ++ set_sbi_flag(F2FS_SB(sb), SBI_NEED_FSCK);
12750 ++ }
12751 ++ }
12752 + }
12753 +
12754 + static int f2fs_get_projid(struct inode *inode, kprojid_t *projid)
12755 +diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
12756 +index c2469833b4fb..6b84ef6ccff3 100644
12757 +--- a/fs/gfs2/ops_fstype.c
12758 ++++ b/fs/gfs2/ops_fstype.c
12759 +@@ -1333,6 +1333,9 @@ static struct dentry *gfs2_mount_meta(struct file_system_type *fs_type,
12760 + struct path path;
12761 + int error;
12762 +
12763 ++ if (!dev_name || !*dev_name)
12764 ++ return ERR_PTR(-EINVAL);
12765 ++
12766 + error = kern_path(dev_name, LOOKUP_FOLLOW, &path);
12767 + if (error) {
12768 + pr_warn("path_lookup on %s returned error %d\n",
12769 +diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
12770 +index c125d662777c..26f8d7e46462 100644
12771 +--- a/fs/jbd2/checkpoint.c
12772 ++++ b/fs/jbd2/checkpoint.c
12773 +@@ -251,8 +251,8 @@ restart:
12774 + bh = jh2bh(jh);
12775 +
12776 + if (buffer_locked(bh)) {
12777 +- spin_unlock(&journal->j_list_lock);
12778 + get_bh(bh);
12779 ++ spin_unlock(&journal->j_list_lock);
12780 + wait_on_buffer(bh);
12781 + /* the journal_head may have gone by now */
12782 + BUFFER_TRACE(bh, "brelse");
12783 +@@ -333,8 +333,8 @@ restart2:
12784 + jh = transaction->t_checkpoint_io_list;
12785 + bh = jh2bh(jh);
12786 + if (buffer_locked(bh)) {
12787 +- spin_unlock(&journal->j_list_lock);
12788 + get_bh(bh);
12789 ++ spin_unlock(&journal->j_list_lock);
12790 + wait_on_buffer(bh);
12791 + /* the journal_head may have gone by now */
12792 + BUFFER_TRACE(bh, "brelse");
12793 +diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
12794 +index 87bdf0f4cba1..902a7dd10e5c 100644
12795 +--- a/fs/jffs2/super.c
12796 ++++ b/fs/jffs2/super.c
12797 +@@ -285,10 +285,8 @@ static int jffs2_fill_super(struct super_block *sb, void *data, int silent)
12798 + sb->s_fs_info = c;
12799 +
12800 + ret = jffs2_parse_options(c, data);
12801 +- if (ret) {
12802 +- kfree(c);
12803 ++ if (ret)
12804 + return -EINVAL;
12805 +- }
12806 +
12807 + /* Initialize JFFS2 superblock locks, the further initialization will
12808 + * be done later */
12809 +diff --git a/fs/lockd/host.c b/fs/lockd/host.c
12810 +index d35cd6be0675..93fb7cf0b92b 100644
12811 +--- a/fs/lockd/host.c
12812 ++++ b/fs/lockd/host.c
12813 +@@ -341,7 +341,7 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
12814 + };
12815 + struct lockd_net *ln = net_generic(net, lockd_net_id);
12816 +
12817 +- dprintk("lockd: %s(host='%*s', vers=%u, proto=%s)\n", __func__,
12818 ++ dprintk("lockd: %s(host='%.*s', vers=%u, proto=%s)\n", __func__,
12819 + (int)hostname_len, hostname, rqstp->rq_vers,
12820 + (rqstp->rq_prot == IPPROTO_UDP ? "udp" : "tcp"));
12821 +
12822 +diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
12823 +index d7124fb12041..5df68d79d661 100644
12824 +--- a/fs/nfs/nfs4client.c
12825 ++++ b/fs/nfs/nfs4client.c
12826 +@@ -935,10 +935,10 @@ EXPORT_SYMBOL_GPL(nfs4_set_ds_client);
12827 +
12828 + /*
12829 + * Session has been established, and the client marked ready.
12830 +- * Set the mount rsize and wsize with negotiated fore channel
12831 +- * attributes which will be bound checked in nfs_server_set_fsinfo.
12832 ++ * Limit the mount rsize, wsize and dtsize using negotiated fore
12833 ++ * channel attributes.
12834 + */
12835 +-static void nfs4_session_set_rwsize(struct nfs_server *server)
12836 ++static void nfs4_session_limit_rwsize(struct nfs_server *server)
12837 + {
12838 + #ifdef CONFIG_NFS_V4_1
12839 + struct nfs4_session *sess;
12840 +@@ -951,9 +951,11 @@ static void nfs4_session_set_rwsize(struct nfs_server *server)
12841 + server_resp_sz = sess->fc_attrs.max_resp_sz - nfs41_maxread_overhead;
12842 + server_rqst_sz = sess->fc_attrs.max_rqst_sz - nfs41_maxwrite_overhead;
12843 +
12844 +- if (!server->rsize || server->rsize > server_resp_sz)
12845 ++ if (server->dtsize > server_resp_sz)
12846 ++ server->dtsize = server_resp_sz;
12847 ++ if (server->rsize > server_resp_sz)
12848 + server->rsize = server_resp_sz;
12849 +- if (!server->wsize || server->wsize > server_rqst_sz)
12850 ++ if (server->wsize > server_rqst_sz)
12851 + server->wsize = server_rqst_sz;
12852 + #endif /* CONFIG_NFS_V4_1 */
12853 + }
12854 +@@ -1000,12 +1002,12 @@ static int nfs4_server_common_setup(struct nfs_server *server,
12855 + (unsigned long long) server->fsid.minor);
12856 + nfs_display_fhandle(mntfh, "Pseudo-fs root FH");
12857 +
12858 +- nfs4_session_set_rwsize(server);
12859 +-
12860 + error = nfs_probe_fsinfo(server, mntfh, fattr);
12861 + if (error < 0)
12862 + goto out;
12863 +
12864 ++ nfs4_session_limit_rwsize(server);
12865 ++
12866 + if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN)
12867 + server->namelen = NFS4_MAXNAMLEN;
12868 +
12869 +diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
12870 +index 67d19cd92e44..7e6425791388 100644
12871 +--- a/fs/nfs/pagelist.c
12872 ++++ b/fs/nfs/pagelist.c
12873 +@@ -1110,6 +1110,20 @@ static int nfs_pageio_add_request_mirror(struct nfs_pageio_descriptor *desc,
12874 + return ret;
12875 + }
12876 +
12877 ++static void nfs_pageio_error_cleanup(struct nfs_pageio_descriptor *desc)
12878 ++{
12879 ++ u32 midx;
12880 ++ struct nfs_pgio_mirror *mirror;
12881 ++
12882 ++ if (!desc->pg_error)
12883 ++ return;
12884 ++
12885 ++ for (midx = 0; midx < desc->pg_mirror_count; midx++) {
12886 ++ mirror = &desc->pg_mirrors[midx];
12887 ++ desc->pg_completion_ops->error_cleanup(&mirror->pg_list);
12888 ++ }
12889 ++}
12890 ++
12891 + int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
12892 + struct nfs_page *req)
12893 + {
12894 +@@ -1160,25 +1174,11 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
12895 + return 1;
12896 +
12897 + out_failed:
12898 +- /*
12899 +- * We might have failed before sending any reqs over wire.
12900 +- * Clean up rest of the reqs in mirror pg_list.
12901 +- */
12902 +- if (desc->pg_error) {
12903 +- struct nfs_pgio_mirror *mirror;
12904 +- void (*func)(struct list_head *);
12905 +-
12906 +- /* remember fatal errors */
12907 +- if (nfs_error_is_fatal(desc->pg_error))
12908 +- nfs_context_set_write_error(req->wb_context,
12909 +- desc->pg_error);
12910 +-
12911 +- func = desc->pg_completion_ops->error_cleanup;
12912 +- for (midx = 0; midx < desc->pg_mirror_count; midx++) {
12913 +- mirror = &desc->pg_mirrors[midx];
12914 +- func(&mirror->pg_list);
12915 +- }
12916 +- }
12917 ++ /* remember fatal errors */
12918 ++ if (nfs_error_is_fatal(desc->pg_error))
12919 ++ nfs_context_set_write_error(req->wb_context,
12920 ++ desc->pg_error);
12921 ++ nfs_pageio_error_cleanup(desc);
12922 + return 0;
12923 + }
12924 +
12925 +@@ -1250,6 +1250,8 @@ void nfs_pageio_complete(struct nfs_pageio_descriptor *desc)
12926 + for (midx = 0; midx < desc->pg_mirror_count; midx++)
12927 + nfs_pageio_complete_mirror(desc, midx);
12928 +
12929 ++ if (desc->pg_error < 0)
12930 ++ nfs_pageio_error_cleanup(desc);
12931 + if (desc->pg_ops->pg_cleanup)
12932 + desc->pg_ops->pg_cleanup(desc);
12933 + nfs_pageio_cleanup_mirroring(desc);
12934 +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
12935 +index 4a17fad93411..18fa7fd3bae9 100644
12936 +--- a/fs/nfsd/nfs4state.c
12937 ++++ b/fs/nfsd/nfs4state.c
12938 +@@ -4361,7 +4361,7 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
12939 +
12940 + fl = nfs4_alloc_init_lease(dp, NFS4_OPEN_DELEGATE_READ);
12941 + if (!fl)
12942 +- goto out_stid;
12943 ++ goto out_clnt_odstate;
12944 +
12945 + status = vfs_setlease(fp->fi_deleg_file, fl->fl_type, &fl, NULL);
12946 + if (fl)
12947 +@@ -4386,7 +4386,6 @@ out_unlock:
12948 + vfs_setlease(fp->fi_deleg_file, F_UNLCK, NULL, (void **)&dp);
12949 + out_clnt_odstate:
12950 + put_clnt_odstate(dp->dl_clnt_odstate);
12951 +-out_stid:
12952 + nfs4_put_stid(&dp->dl_stid);
12953 + out_delegees:
12954 + put_deleg_file(fp);
12955 +diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
12956 +index ababdbfab537..f43ea1aad542 100644
12957 +--- a/fs/notify/fsnotify.c
12958 ++++ b/fs/notify/fsnotify.c
12959 +@@ -96,6 +96,9 @@ void fsnotify_unmount_inodes(struct super_block *sb)
12960 +
12961 + if (iput_inode)
12962 + iput(iput_inode);
12963 ++ /* Wait for outstanding inode references from connectors */
12964 ++ wait_var_event(&sb->s_fsnotify_inode_refs,
12965 ++ !atomic_long_read(&sb->s_fsnotify_inode_refs));
12966 + }
12967 +
12968 + /*
12969 +diff --git a/fs/notify/mark.c b/fs/notify/mark.c
12970 +index 61f4c5fa34c7..75394ae96673 100644
12971 +--- a/fs/notify/mark.c
12972 ++++ b/fs/notify/mark.c
12973 +@@ -161,15 +161,18 @@ static void fsnotify_connector_destroy_workfn(struct work_struct *work)
12974 + }
12975 + }
12976 +
12977 +-static struct inode *fsnotify_detach_connector_from_object(
12978 +- struct fsnotify_mark_connector *conn)
12979 ++static void *fsnotify_detach_connector_from_object(
12980 ++ struct fsnotify_mark_connector *conn,
12981 ++ unsigned int *type)
12982 + {
12983 + struct inode *inode = NULL;
12984 +
12985 ++ *type = conn->type;
12986 + if (conn->type == FSNOTIFY_OBJ_TYPE_INODE) {
12987 + inode = conn->inode;
12988 + rcu_assign_pointer(inode->i_fsnotify_marks, NULL);
12989 + inode->i_fsnotify_mask = 0;
12990 ++ atomic_long_inc(&inode->i_sb->s_fsnotify_inode_refs);
12991 + conn->inode = NULL;
12992 + conn->type = FSNOTIFY_OBJ_TYPE_DETACHED;
12993 + } else if (conn->type == FSNOTIFY_OBJ_TYPE_VFSMOUNT) {
12994 +@@ -193,10 +196,29 @@ static void fsnotify_final_mark_destroy(struct fsnotify_mark *mark)
12995 + fsnotify_put_group(group);
12996 + }
12997 +
12998 ++/* Drop object reference originally held by a connector */
12999 ++static void fsnotify_drop_object(unsigned int type, void *objp)
13000 ++{
13001 ++ struct inode *inode;
13002 ++ struct super_block *sb;
13003 ++
13004 ++ if (!objp)
13005 ++ return;
13006 ++ /* Currently only inode references are passed to be dropped */
13007 ++ if (WARN_ON_ONCE(type != FSNOTIFY_OBJ_TYPE_INODE))
13008 ++ return;
13009 ++ inode = objp;
13010 ++ sb = inode->i_sb;
13011 ++ iput(inode);
13012 ++ if (atomic_long_dec_and_test(&sb->s_fsnotify_inode_refs))
13013 ++ wake_up_var(&sb->s_fsnotify_inode_refs);
13014 ++}
13015 ++
13016 + void fsnotify_put_mark(struct fsnotify_mark *mark)
13017 + {
13018 + struct fsnotify_mark_connector *conn;
13019 +- struct inode *inode = NULL;
13020 ++ void *objp = NULL;
13021 ++ unsigned int type = FSNOTIFY_OBJ_TYPE_DETACHED;
13022 + bool free_conn = false;
13023 +
13024 + /* Catch marks that were actually never attached to object */
13025 +@@ -216,7 +238,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
13026 + conn = mark->connector;
13027 + hlist_del_init_rcu(&mark->obj_list);
13028 + if (hlist_empty(&conn->list)) {
13029 +- inode = fsnotify_detach_connector_from_object(conn);
13030 ++ objp = fsnotify_detach_connector_from_object(conn, &type);
13031 + free_conn = true;
13032 + } else {
13033 + __fsnotify_recalc_mask(conn);
13034 +@@ -224,7 +246,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
13035 + mark->connector = NULL;
13036 + spin_unlock(&conn->lock);
13037 +
13038 +- iput(inode);
13039 ++ fsnotify_drop_object(type, objp);
13040 +
13041 + if (free_conn) {
13042 + spin_lock(&destroy_lock);
13043 +@@ -702,7 +724,8 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
13044 + {
13045 + struct fsnotify_mark_connector *conn;
13046 + struct fsnotify_mark *mark, *old_mark = NULL;
13047 +- struct inode *inode;
13048 ++ void *objp;
13049 ++ unsigned int type;
13050 +
13051 + conn = fsnotify_grab_connector(connp);
13052 + if (!conn)
13053 +@@ -728,11 +751,11 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
13054 + * mark references get dropped. It would lead to strange results such
13055 + * as delaying inode deletion or blocking unmount.
13056 + */
13057 +- inode = fsnotify_detach_connector_from_object(conn);
13058 ++ objp = fsnotify_detach_connector_from_object(conn, &type);
13059 + spin_unlock(&conn->lock);
13060 + if (old_mark)
13061 + fsnotify_put_mark(old_mark);
13062 +- iput(inode);
13063 ++ fsnotify_drop_object(type, objp);
13064 + }
13065 +
13066 + /*
13067 +diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
13068 +index dfd73a4616ce..3437da437099 100644
13069 +--- a/fs/proc/task_mmu.c
13070 ++++ b/fs/proc/task_mmu.c
13071 +@@ -767,6 +767,8 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
13072 + smaps_walk.private = mss;
13073 +
13074 + #ifdef CONFIG_SHMEM
13075 ++ /* In case of smaps_rollup, reset the value from previous vma */
13076 ++ mss->check_shmem_swap = false;
13077 + if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
13078 + /*
13079 + * For shared or readonly shmem mappings we know that all
13080 +@@ -782,7 +784,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
13081 +
13082 + if (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
13083 + !(vma->vm_flags & VM_WRITE)) {
13084 +- mss->swap = shmem_swapped;
13085 ++ mss->swap += shmem_swapped;
13086 + } else {
13087 + mss->check_shmem_swap = true;
13088 + smaps_walk.pte_hole = smaps_pte_hole;
13089 +diff --git a/include/crypto/speck.h b/include/crypto/speck.h
13090 +deleted file mode 100644
13091 +index 73cfc952d405..000000000000
13092 +--- a/include/crypto/speck.h
13093 ++++ /dev/null
13094 +@@ -1,62 +0,0 @@
13095 +-// SPDX-License-Identifier: GPL-2.0
13096 +-/*
13097 +- * Common values for the Speck algorithm
13098 +- */
13099 +-
13100 +-#ifndef _CRYPTO_SPECK_H
13101 +-#define _CRYPTO_SPECK_H
13102 +-
13103 +-#include <linux/types.h>
13104 +-
13105 +-/* Speck128 */
13106 +-
13107 +-#define SPECK128_BLOCK_SIZE 16
13108 +-
13109 +-#define SPECK128_128_KEY_SIZE 16
13110 +-#define SPECK128_128_NROUNDS 32
13111 +-
13112 +-#define SPECK128_192_KEY_SIZE 24
13113 +-#define SPECK128_192_NROUNDS 33
13114 +-
13115 +-#define SPECK128_256_KEY_SIZE 32
13116 +-#define SPECK128_256_NROUNDS 34
13117 +-
13118 +-struct speck128_tfm_ctx {
13119 +- u64 round_keys[SPECK128_256_NROUNDS];
13120 +- int nrounds;
13121 +-};
13122 +-
13123 +-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
13124 +- u8 *out, const u8 *in);
13125 +-
13126 +-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
13127 +- u8 *out, const u8 *in);
13128 +-
13129 +-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
13130 +- unsigned int keysize);
13131 +-
13132 +-/* Speck64 */
13133 +-
13134 +-#define SPECK64_BLOCK_SIZE 8
13135 +-
13136 +-#define SPECK64_96_KEY_SIZE 12
13137 +-#define SPECK64_96_NROUNDS 26
13138 +-
13139 +-#define SPECK64_128_KEY_SIZE 16
13140 +-#define SPECK64_128_NROUNDS 27
13141 +-
13142 +-struct speck64_tfm_ctx {
13143 +- u32 round_keys[SPECK64_128_NROUNDS];
13144 +- int nrounds;
13145 +-};
13146 +-
13147 +-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
13148 +- u8 *out, const u8 *in);
13149 +-
13150 +-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
13151 +- u8 *out, const u8 *in);
13152 +-
13153 +-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
13154 +- unsigned int keysize);
13155 +-
13156 +-#endif /* _CRYPTO_SPECK_H */
13157 +diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
13158 +index a57a8aa90ffb..2b0d02458a18 100644
13159 +--- a/include/drm/drm_atomic.h
13160 ++++ b/include/drm/drm_atomic.h
13161 +@@ -153,6 +153,17 @@ struct __drm_planes_state {
13162 + struct __drm_crtcs_state {
13163 + struct drm_crtc *ptr;
13164 + struct drm_crtc_state *state, *old_state, *new_state;
13165 ++
13166 ++ /**
13167 ++ * @commit:
13168 ++ *
13169 ++ * A reference to the CRTC commit object that is kept for use by
13170 ++ * drm_atomic_helper_wait_for_flip_done() after
13171 ++ * drm_atomic_helper_commit_hw_done() is called. This ensures that a
13172 ++ * concurrent commit won't free a commit object that is still in use.
13173 ++ */
13174 ++ struct drm_crtc_commit *commit;
13175 ++
13176 + s32 __user *out_fence_ptr;
13177 + u64 last_vblank_count;
13178 + };
13179 +diff --git a/include/linux/compat.h b/include/linux/compat.h
13180 +index c68acc47da57..47041c7fed28 100644
13181 +--- a/include/linux/compat.h
13182 ++++ b/include/linux/compat.h
13183 +@@ -103,6 +103,9 @@ typedef struct compat_sigaltstack {
13184 + compat_size_t ss_size;
13185 + } compat_stack_t;
13186 + #endif
13187 ++#ifndef COMPAT_MINSIGSTKSZ
13188 ++#define COMPAT_MINSIGSTKSZ MINSIGSTKSZ
13189 ++#endif
13190 +
13191 + #define compat_jiffies_to_clock_t(x) \
13192 + (((unsigned long)(x) * COMPAT_USER_HZ) / HZ)
13193 +diff --git a/include/linux/fs.h b/include/linux/fs.h
13194 +index e73363bd8646..cf23c128ac46 100644
13195 +--- a/include/linux/fs.h
13196 ++++ b/include/linux/fs.h
13197 +@@ -1416,6 +1416,9 @@ struct super_block {
13198 + /* Number of inodes with nlink == 0 but still referenced */
13199 + atomic_long_t s_remove_count;
13200 +
13201 ++ /* Pending fsnotify inode refs */
13202 ++ atomic_long_t s_fsnotify_inode_refs;
13203 ++
13204 + /* Being remounted read-only */
13205 + int s_readonly_remount;
13206 +
13207 +diff --git a/include/linux/hdmi.h b/include/linux/hdmi.h
13208 +index d271ff23984f..4f3febc0f971 100644
13209 +--- a/include/linux/hdmi.h
13210 ++++ b/include/linux/hdmi.h
13211 +@@ -101,8 +101,8 @@ enum hdmi_extended_colorimetry {
13212 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_601,
13213 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_709,
13214 + HDMI_EXTENDED_COLORIMETRY_S_YCC_601,
13215 +- HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601,
13216 +- HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB,
13217 ++ HDMI_EXTENDED_COLORIMETRY_OPYCC_601,
13218 ++ HDMI_EXTENDED_COLORIMETRY_OPRGB,
13219 +
13220 + /* The following EC values are only defined in CEA-861-F. */
13221 + HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM,
13222 +diff --git a/include/linux/signal.h b/include/linux/signal.h
13223 +index 3c5200137b24..42ba31da534f 100644
13224 +--- a/include/linux/signal.h
13225 ++++ b/include/linux/signal.h
13226 +@@ -36,7 +36,7 @@ enum siginfo_layout {
13227 + SIL_SYS,
13228 + };
13229 +
13230 +-enum siginfo_layout siginfo_layout(int sig, int si_code);
13231 ++enum siginfo_layout siginfo_layout(unsigned sig, int si_code);
13232 +
13233 + /*
13234 + * Define some primitives to manipulate sigset_t.
13235 +diff --git a/include/linux/tc.h b/include/linux/tc.h
13236 +index f92511e57cdb..a60639f37963 100644
13237 +--- a/include/linux/tc.h
13238 ++++ b/include/linux/tc.h
13239 +@@ -84,6 +84,7 @@ struct tc_dev {
13240 + device. */
13241 + struct device dev; /* Generic device interface. */
13242 + struct resource resource; /* Address space of this device. */
13243 ++ u64 dma_mask; /* DMA addressable range. */
13244 + char vendor[9];
13245 + char name[9];
13246 + char firmware[9];
13247 +diff --git a/include/media/cec.h b/include/media/cec.h
13248 +index 580ab1042898..71cc0272b053 100644
13249 +--- a/include/media/cec.h
13250 ++++ b/include/media/cec.h
13251 +@@ -63,7 +63,6 @@ struct cec_data {
13252 + struct delayed_work work;
13253 + struct completion c;
13254 + u8 attempts;
13255 +- bool new_initiator;
13256 + bool blocking;
13257 + bool completed;
13258 + };
13259 +@@ -174,6 +173,7 @@ struct cec_adapter {
13260 + bool is_configuring;
13261 + bool is_configured;
13262 + bool cec_pin_is_high;
13263 ++ u8 last_initiator;
13264 + u32 monitor_all_cnt;
13265 + u32 monitor_pin_cnt;
13266 + u32 follower_cnt;
13267 +@@ -451,4 +451,74 @@ static inline void cec_phys_addr_invalidate(struct cec_adapter *adap)
13268 + cec_s_phys_addr(adap, CEC_PHYS_ADDR_INVALID, false);
13269 + }
13270 +
13271 ++/**
13272 ++ * cec_get_edid_spa_location() - find location of the Source Physical Address
13273 ++ *
13274 ++ * @edid: the EDID
13275 ++ * @size: the size of the EDID
13276 ++ *
13277 ++ * This EDID is expected to be a CEA-861 compliant, which means that there are
13278 ++ * at least two blocks and one or more of the extensions blocks are CEA-861
13279 ++ * blocks.
13280 ++ *
13281 ++ * The returned location is guaranteed to be <= size-2.
13282 ++ *
13283 ++ * This is an inline function since it is used by both CEC and V4L2.
13284 ++ * Ideally this would go in a module shared by both, but it is overkill to do
13285 ++ * that for just a single function.
13286 ++ */
13287 ++static inline unsigned int cec_get_edid_spa_location(const u8 *edid,
13288 ++ unsigned int size)
13289 ++{
13290 ++ unsigned int blocks = size / 128;
13291 ++ unsigned int block;
13292 ++ u8 d;
13293 ++
13294 ++ /* Sanity check: at least 2 blocks and a multiple of the block size */
13295 ++ if (blocks < 2 || size % 128)
13296 ++ return 0;
13297 ++
13298 ++ /*
13299 ++ * If there are fewer extension blocks than the size, then update
13300 ++ * 'blocks'. It is allowed to have more extension blocks than the size,
13301 ++ * since some hardware can only read e.g. 256 bytes of the EDID, even
13302 ++ * though more blocks are present. The first CEA-861 extension block
13303 ++ * should normally be in block 1 anyway.
13304 ++ */
13305 ++ if (edid[0x7e] + 1 < blocks)
13306 ++ blocks = edid[0x7e] + 1;
13307 ++
13308 ++ for (block = 1; block < blocks; block++) {
13309 ++ unsigned int offset = block * 128;
13310 ++
13311 ++ /* Skip any non-CEA-861 extension blocks */
13312 ++ if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
13313 ++ continue;
13314 ++
13315 ++ /* search Vendor Specific Data Block (tag 3) */
13316 ++ d = edid[offset + 2] & 0x7f;
13317 ++ /* Check if there are Data Blocks */
13318 ++ if (d <= 4)
13319 ++ continue;
13320 ++ if (d > 4) {
13321 ++ unsigned int i = offset + 4;
13322 ++ unsigned int end = offset + d;
13323 ++
13324 ++ /* Note: 'end' is always < 'size' */
13325 ++ do {
13326 ++ u8 tag = edid[i] >> 5;
13327 ++ u8 len = edid[i] & 0x1f;
13328 ++
13329 ++ if (tag == 3 && len >= 5 && i + len <= end &&
13330 ++ edid[i + 1] == 0x03 &&
13331 ++ edid[i + 2] == 0x0c &&
13332 ++ edid[i + 3] == 0x00)
13333 ++ return i + 4;
13334 ++ i += len + 1;
13335 ++ } while (i < end);
13336 ++ }
13337 ++ }
13338 ++ return 0;
13339 ++}
13340 ++
13341 + #endif /* _MEDIA_CEC_H */
13342 +diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
13343 +index 6c003995347a..59185fbbd202 100644
13344 +--- a/include/rdma/ib_verbs.h
13345 ++++ b/include/rdma/ib_verbs.h
13346 +@@ -1296,21 +1296,27 @@ struct ib_qp_attr {
13347 + };
13348 +
13349 + enum ib_wr_opcode {
13350 +- IB_WR_RDMA_WRITE,
13351 +- IB_WR_RDMA_WRITE_WITH_IMM,
13352 +- IB_WR_SEND,
13353 +- IB_WR_SEND_WITH_IMM,
13354 +- IB_WR_RDMA_READ,
13355 +- IB_WR_ATOMIC_CMP_AND_SWP,
13356 +- IB_WR_ATOMIC_FETCH_AND_ADD,
13357 +- IB_WR_LSO,
13358 +- IB_WR_SEND_WITH_INV,
13359 +- IB_WR_RDMA_READ_WITH_INV,
13360 +- IB_WR_LOCAL_INV,
13361 +- IB_WR_REG_MR,
13362 +- IB_WR_MASKED_ATOMIC_CMP_AND_SWP,
13363 +- IB_WR_MASKED_ATOMIC_FETCH_AND_ADD,
13364 ++ /* These are shared with userspace */
13365 ++ IB_WR_RDMA_WRITE = IB_UVERBS_WR_RDMA_WRITE,
13366 ++ IB_WR_RDMA_WRITE_WITH_IMM = IB_UVERBS_WR_RDMA_WRITE_WITH_IMM,
13367 ++ IB_WR_SEND = IB_UVERBS_WR_SEND,
13368 ++ IB_WR_SEND_WITH_IMM = IB_UVERBS_WR_SEND_WITH_IMM,
13369 ++ IB_WR_RDMA_READ = IB_UVERBS_WR_RDMA_READ,
13370 ++ IB_WR_ATOMIC_CMP_AND_SWP = IB_UVERBS_WR_ATOMIC_CMP_AND_SWP,
13371 ++ IB_WR_ATOMIC_FETCH_AND_ADD = IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD,
13372 ++ IB_WR_LSO = IB_UVERBS_WR_TSO,
13373 ++ IB_WR_SEND_WITH_INV = IB_UVERBS_WR_SEND_WITH_INV,
13374 ++ IB_WR_RDMA_READ_WITH_INV = IB_UVERBS_WR_RDMA_READ_WITH_INV,
13375 ++ IB_WR_LOCAL_INV = IB_UVERBS_WR_LOCAL_INV,
13376 ++ IB_WR_MASKED_ATOMIC_CMP_AND_SWP =
13377 ++ IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP,
13378 ++ IB_WR_MASKED_ATOMIC_FETCH_AND_ADD =
13379 ++ IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD,
13380 ++
13381 ++ /* These are kernel only and can not be issued by userspace */
13382 ++ IB_WR_REG_MR = 0x20,
13383 + IB_WR_REG_SIG_MR,
13384 ++
13385 + /* reserve values for low level drivers' internal use.
13386 + * These values will not be used at all in the ib core layer.
13387 + */
13388 +diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
13389 +index 20fe091b7e96..bc2a1b98d9dd 100644
13390 +--- a/include/uapi/linux/cec.h
13391 ++++ b/include/uapi/linux/cec.h
13392 +@@ -152,10 +152,13 @@ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
13393 + #define CEC_TX_STATUS_LOW_DRIVE (1 << 3)
13394 + #define CEC_TX_STATUS_ERROR (1 << 4)
13395 + #define CEC_TX_STATUS_MAX_RETRIES (1 << 5)
13396 ++#define CEC_TX_STATUS_ABORTED (1 << 6)
13397 ++#define CEC_TX_STATUS_TIMEOUT (1 << 7)
13398 +
13399 + #define CEC_RX_STATUS_OK (1 << 0)
13400 + #define CEC_RX_STATUS_TIMEOUT (1 << 1)
13401 + #define CEC_RX_STATUS_FEATURE_ABORT (1 << 2)
13402 ++#define CEC_RX_STATUS_ABORTED (1 << 3)
13403 +
13404 + static inline int cec_msg_status_is_ok(const struct cec_msg *msg)
13405 + {
13406 +diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
13407 +index 73e01918f996..a441ea1bfe6d 100644
13408 +--- a/include/uapi/linux/fs.h
13409 ++++ b/include/uapi/linux/fs.h
13410 +@@ -279,8 +279,8 @@ struct fsxattr {
13411 + #define FS_ENCRYPTION_MODE_AES_256_CTS 4
13412 + #define FS_ENCRYPTION_MODE_AES_128_CBC 5
13413 + #define FS_ENCRYPTION_MODE_AES_128_CTS 6
13414 +-#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7
13415 +-#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8
13416 ++#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* Removed, do not use. */
13417 ++#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* Removed, do not use. */
13418 +
13419 + struct fscrypt_policy {
13420 + __u8 version;
13421 +diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
13422 +index 7e27070b9440..2f2c43d633c5 100644
13423 +--- a/include/uapi/linux/ndctl.h
13424 ++++ b/include/uapi/linux/ndctl.h
13425 +@@ -128,37 +128,31 @@ enum {
13426 +
13427 + static inline const char *nvdimm_bus_cmd_name(unsigned cmd)
13428 + {
13429 +- static const char * const names[] = {
13430 +- [ND_CMD_ARS_CAP] = "ars_cap",
13431 +- [ND_CMD_ARS_START] = "ars_start",
13432 +- [ND_CMD_ARS_STATUS] = "ars_status",
13433 +- [ND_CMD_CLEAR_ERROR] = "clear_error",
13434 +- [ND_CMD_CALL] = "cmd_call",
13435 +- };
13436 +-
13437 +- if (cmd < ARRAY_SIZE(names) && names[cmd])
13438 +- return names[cmd];
13439 +- return "unknown";
13440 ++ switch (cmd) {
13441 ++ case ND_CMD_ARS_CAP: return "ars_cap";
13442 ++ case ND_CMD_ARS_START: return "ars_start";
13443 ++ case ND_CMD_ARS_STATUS: return "ars_status";
13444 ++ case ND_CMD_CLEAR_ERROR: return "clear_error";
13445 ++ case ND_CMD_CALL: return "cmd_call";
13446 ++ default: return "unknown";
13447 ++ }
13448 + }
13449 +
13450 + static inline const char *nvdimm_cmd_name(unsigned cmd)
13451 + {
13452 +- static const char * const names[] = {
13453 +- [ND_CMD_SMART] = "smart",
13454 +- [ND_CMD_SMART_THRESHOLD] = "smart_thresh",
13455 +- [ND_CMD_DIMM_FLAGS] = "flags",
13456 +- [ND_CMD_GET_CONFIG_SIZE] = "get_size",
13457 +- [ND_CMD_GET_CONFIG_DATA] = "get_data",
13458 +- [ND_CMD_SET_CONFIG_DATA] = "set_data",
13459 +- [ND_CMD_VENDOR_EFFECT_LOG_SIZE] = "effect_size",
13460 +- [ND_CMD_VENDOR_EFFECT_LOG] = "effect_log",
13461 +- [ND_CMD_VENDOR] = "vendor",
13462 +- [ND_CMD_CALL] = "cmd_call",
13463 +- };
13464 +-
13465 +- if (cmd < ARRAY_SIZE(names) && names[cmd])
13466 +- return names[cmd];
13467 +- return "unknown";
13468 ++ switch (cmd) {
13469 ++ case ND_CMD_SMART: return "smart";
13470 ++ case ND_CMD_SMART_THRESHOLD: return "smart_thresh";
13471 ++ case ND_CMD_DIMM_FLAGS: return "flags";
13472 ++ case ND_CMD_GET_CONFIG_SIZE: return "get_size";
13473 ++ case ND_CMD_GET_CONFIG_DATA: return "get_data";
13474 ++ case ND_CMD_SET_CONFIG_DATA: return "set_data";
13475 ++ case ND_CMD_VENDOR_EFFECT_LOG_SIZE: return "effect_size";
13476 ++ case ND_CMD_VENDOR_EFFECT_LOG: return "effect_log";
13477 ++ case ND_CMD_VENDOR: return "vendor";
13478 ++ case ND_CMD_CALL: return "cmd_call";
13479 ++ default: return "unknown";
13480 ++ }
13481 + }
13482 +
13483 + #define ND_IOCTL 'N'
13484 +diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
13485 +index 600877be5c22..082dc1439a50 100644
13486 +--- a/include/uapi/linux/videodev2.h
13487 ++++ b/include/uapi/linux/videodev2.h
13488 +@@ -225,8 +225,8 @@ enum v4l2_colorspace {
13489 + /* For RGB colorspaces such as produces by most webcams. */
13490 + V4L2_COLORSPACE_SRGB = 8,
13491 +
13492 +- /* AdobeRGB colorspace */
13493 +- V4L2_COLORSPACE_ADOBERGB = 9,
13494 ++ /* opRGB colorspace */
13495 ++ V4L2_COLORSPACE_OPRGB = 9,
13496 +
13497 + /* BT.2020 colorspace, used for UHDTV. */
13498 + V4L2_COLORSPACE_BT2020 = 10,
13499 +@@ -258,7 +258,7 @@ enum v4l2_xfer_func {
13500 + *
13501 + * V4L2_COLORSPACE_SRGB, V4L2_COLORSPACE_JPEG: V4L2_XFER_FUNC_SRGB
13502 + *
13503 +- * V4L2_COLORSPACE_ADOBERGB: V4L2_XFER_FUNC_ADOBERGB
13504 ++ * V4L2_COLORSPACE_OPRGB: V4L2_XFER_FUNC_OPRGB
13505 + *
13506 + * V4L2_COLORSPACE_SMPTE240M: V4L2_XFER_FUNC_SMPTE240M
13507 + *
13508 +@@ -269,7 +269,7 @@ enum v4l2_xfer_func {
13509 + V4L2_XFER_FUNC_DEFAULT = 0,
13510 + V4L2_XFER_FUNC_709 = 1,
13511 + V4L2_XFER_FUNC_SRGB = 2,
13512 +- V4L2_XFER_FUNC_ADOBERGB = 3,
13513 ++ V4L2_XFER_FUNC_OPRGB = 3,
13514 + V4L2_XFER_FUNC_SMPTE240M = 4,
13515 + V4L2_XFER_FUNC_NONE = 5,
13516 + V4L2_XFER_FUNC_DCI_P3 = 6,
13517 +@@ -281,7 +281,7 @@ enum v4l2_xfer_func {
13518 + * This depends on the colorspace.
13519 + */
13520 + #define V4L2_MAP_XFER_FUNC_DEFAULT(colsp) \
13521 +- ((colsp) == V4L2_COLORSPACE_ADOBERGB ? V4L2_XFER_FUNC_ADOBERGB : \
13522 ++ ((colsp) == V4L2_COLORSPACE_OPRGB ? V4L2_XFER_FUNC_OPRGB : \
13523 + ((colsp) == V4L2_COLORSPACE_SMPTE240M ? V4L2_XFER_FUNC_SMPTE240M : \
13524 + ((colsp) == V4L2_COLORSPACE_DCI_P3 ? V4L2_XFER_FUNC_DCI_P3 : \
13525 + ((colsp) == V4L2_COLORSPACE_RAW ? V4L2_XFER_FUNC_NONE : \
13526 +@@ -295,7 +295,7 @@ enum v4l2_ycbcr_encoding {
13527 + *
13528 + * V4L2_COLORSPACE_SMPTE170M, V4L2_COLORSPACE_470_SYSTEM_M,
13529 + * V4L2_COLORSPACE_470_SYSTEM_BG, V4L2_COLORSPACE_SRGB,
13530 +- * V4L2_COLORSPACE_ADOBERGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
13531 ++ * V4L2_COLORSPACE_OPRGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
13532 + *
13533 + * V4L2_COLORSPACE_REC709 and V4L2_COLORSPACE_DCI_P3: V4L2_YCBCR_ENC_709
13534 + *
13535 +@@ -382,6 +382,17 @@ enum v4l2_quantization {
13536 + (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \
13537 + V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE))
13538 +
13539 ++/*
13540 ++ * Deprecated names for opRGB colorspace (IEC 61966-2-5)
13541 ++ *
13542 ++ * WARNING: Please don't use these deprecated defines in your code, as
13543 ++ * there is a chance we have to remove them in the future.
13544 ++ */
13545 ++#ifndef __KERNEL__
13546 ++#define V4L2_COLORSPACE_ADOBERGB V4L2_COLORSPACE_OPRGB
13547 ++#define V4L2_XFER_FUNC_ADOBERGB V4L2_XFER_FUNC_OPRGB
13548 ++#endif
13549 ++
13550 + enum v4l2_priority {
13551 + V4L2_PRIORITY_UNSET = 0, /* not initialized */
13552 + V4L2_PRIORITY_BACKGROUND = 1,
13553 +diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
13554 +index 4f9991de8e3a..8345ca799ad8 100644
13555 +--- a/include/uapi/rdma/ib_user_verbs.h
13556 ++++ b/include/uapi/rdma/ib_user_verbs.h
13557 +@@ -762,10 +762,28 @@ struct ib_uverbs_sge {
13558 + __u32 lkey;
13559 + };
13560 +
13561 ++enum ib_uverbs_wr_opcode {
13562 ++ IB_UVERBS_WR_RDMA_WRITE = 0,
13563 ++ IB_UVERBS_WR_RDMA_WRITE_WITH_IMM = 1,
13564 ++ IB_UVERBS_WR_SEND = 2,
13565 ++ IB_UVERBS_WR_SEND_WITH_IMM = 3,
13566 ++ IB_UVERBS_WR_RDMA_READ = 4,
13567 ++ IB_UVERBS_WR_ATOMIC_CMP_AND_SWP = 5,
13568 ++ IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD = 6,
13569 ++ IB_UVERBS_WR_LOCAL_INV = 7,
13570 ++ IB_UVERBS_WR_BIND_MW = 8,
13571 ++ IB_UVERBS_WR_SEND_WITH_INV = 9,
13572 ++ IB_UVERBS_WR_TSO = 10,
13573 ++ IB_UVERBS_WR_RDMA_READ_WITH_INV = 11,
13574 ++ IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP = 12,
13575 ++ IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD = 13,
13576 ++ /* Review enum ib_wr_opcode before modifying this */
13577 ++};
13578 ++
13579 + struct ib_uverbs_send_wr {
13580 + __aligned_u64 wr_id;
13581 + __u32 num_sge;
13582 +- __u32 opcode;
13583 ++ __u32 opcode; /* see enum ib_uverbs_wr_opcode */
13584 + __u32 send_flags;
13585 + union {
13586 + __be32 imm_data;
13587 +diff --git a/kernel/bounds.c b/kernel/bounds.c
13588 +index c373e887c066..9795d75b09b2 100644
13589 +--- a/kernel/bounds.c
13590 ++++ b/kernel/bounds.c
13591 +@@ -13,7 +13,7 @@
13592 + #include <linux/log2.h>
13593 + #include <linux/spinlock_types.h>
13594 +
13595 +-void foo(void)
13596 ++int main(void)
13597 + {
13598 + /* The enum constants to put into include/generated/bounds.h */
13599 + DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
13600 +@@ -23,4 +23,6 @@ void foo(void)
13601 + #endif
13602 + DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
13603 + /* End of constants */
13604 ++
13605 ++ return 0;
13606 + }
13607 +diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
13608 +index a31a1ba0f8ea..0f5d2e66cd6b 100644
13609 +--- a/kernel/bpf/syscall.c
13610 ++++ b/kernel/bpf/syscall.c
13611 +@@ -683,6 +683,17 @@ err_put:
13612 + return err;
13613 + }
13614 +
13615 ++static void maybe_wait_bpf_programs(struct bpf_map *map)
13616 ++{
13617 ++ /* Wait for any running BPF programs to complete so that
13618 ++ * userspace, when we return to it, knows that all programs
13619 ++ * that could be running use the new map value.
13620 ++ */
13621 ++ if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS ||
13622 ++ map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS)
13623 ++ synchronize_rcu();
13624 ++}
13625 ++
13626 + #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags
13627 +
13628 + static int map_update_elem(union bpf_attr *attr)
13629 +@@ -769,6 +780,7 @@ static int map_update_elem(union bpf_attr *attr)
13630 + }
13631 + __this_cpu_dec(bpf_prog_active);
13632 + preempt_enable();
13633 ++ maybe_wait_bpf_programs(map);
13634 + out:
13635 + free_value:
13636 + kfree(value);
13637 +@@ -821,6 +833,7 @@ static int map_delete_elem(union bpf_attr *attr)
13638 + rcu_read_unlock();
13639 + __this_cpu_dec(bpf_prog_active);
13640 + preempt_enable();
13641 ++ maybe_wait_bpf_programs(map);
13642 + out:
13643 + kfree(key);
13644 + err_put:
13645 +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
13646 +index b000686fa1a1..d565ec6af97c 100644
13647 +--- a/kernel/bpf/verifier.c
13648 ++++ b/kernel/bpf/verifier.c
13649 +@@ -553,7 +553,9 @@ static void __mark_reg_not_init(struct bpf_reg_state *reg);
13650 + */
13651 + static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
13652 + {
13653 +- reg->id = 0;
13654 ++ /* Clear id, off, and union(map_ptr, range) */
13655 ++ memset(((u8 *)reg) + sizeof(reg->type), 0,
13656 ++ offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
13657 + reg->var_off = tnum_const(imm);
13658 + reg->smin_value = (s64)imm;
13659 + reg->smax_value = (s64)imm;
13660 +@@ -572,7 +574,6 @@ static void __mark_reg_known_zero(struct bpf_reg_state *reg)
13661 + static void __mark_reg_const_zero(struct bpf_reg_state *reg)
13662 + {
13663 + __mark_reg_known(reg, 0);
13664 +- reg->off = 0;
13665 + reg->type = SCALAR_VALUE;
13666 + }
13667 +
13668 +@@ -683,9 +684,12 @@ static void __mark_reg_unbounded(struct bpf_reg_state *reg)
13669 + /* Mark a register as having a completely unknown (scalar) value. */
13670 + static void __mark_reg_unknown(struct bpf_reg_state *reg)
13671 + {
13672 ++ /*
13673 ++ * Clear type, id, off, and union(map_ptr, range) and
13674 ++ * padding between 'type' and union
13675 ++ */
13676 ++ memset(reg, 0, offsetof(struct bpf_reg_state, var_off));
13677 + reg->type = SCALAR_VALUE;
13678 +- reg->id = 0;
13679 +- reg->off = 0;
13680 + reg->var_off = tnum_unknown;
13681 + reg->frameno = 0;
13682 + __mark_reg_unbounded(reg);
13683 +@@ -1726,9 +1730,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
13684 + else
13685 + mark_reg_known_zero(env, regs,
13686 + value_regno);
13687 +- regs[value_regno].id = 0;
13688 +- regs[value_regno].off = 0;
13689 +- regs[value_regno].range = 0;
13690 + regs[value_regno].type = reg_type;
13691 + }
13692 +
13693 +@@ -2549,7 +2550,6 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
13694 + regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
13695 + /* There is no offset yet applied, variable or fixed */
13696 + mark_reg_known_zero(env, regs, BPF_REG_0);
13697 +- regs[BPF_REG_0].off = 0;
13698 + /* remember map_ptr, so that check_map_access()
13699 + * can check 'value_size' boundary of memory access
13700 + * to map element returned from bpf_map_lookup_elem()
13701 +diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
13702 +index b3c557476a8d..c98501a04742 100644
13703 +--- a/kernel/bpf/xskmap.c
13704 ++++ b/kernel/bpf/xskmap.c
13705 +@@ -191,11 +191,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
13706 + sock_hold(sock->sk);
13707 +
13708 + old_xs = xchg(&m->xsk_map[i], xs);
13709 +- if (old_xs) {
13710 +- /* Make sure we've flushed everything. */
13711 +- synchronize_net();
13712 ++ if (old_xs)
13713 + sock_put((struct sock *)old_xs);
13714 +- }
13715 +
13716 + sockfd_put(sock);
13717 + return 0;
13718 +@@ -211,11 +208,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
13719 + return -EINVAL;
13720 +
13721 + old_xs = xchg(&m->xsk_map[k], NULL);
13722 +- if (old_xs) {
13723 +- /* Make sure we've flushed everything. */
13724 +- synchronize_net();
13725 ++ if (old_xs)
13726 + sock_put((struct sock *)old_xs);
13727 +- }
13728 +
13729 + return 0;
13730 + }
13731 +diff --git a/kernel/cpu.c b/kernel/cpu.c
13732 +index 517907b082df..3ec5a37e3068 100644
13733 +--- a/kernel/cpu.c
13734 ++++ b/kernel/cpu.c
13735 +@@ -2033,6 +2033,12 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
13736 + kobject_uevent(&dev->kobj, KOBJ_ONLINE);
13737 + }
13738 +
13739 ++/*
13740 ++ * Architectures that need SMT-specific errata handling during SMT hotplug
13741 ++ * should override this.
13742 ++ */
13743 ++void __weak arch_smt_update(void) { };
13744 ++
13745 + static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
13746 + {
13747 + int cpu, ret = 0;
13748 +@@ -2059,8 +2065,10 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
13749 + */
13750 + cpuhp_offline_cpu_device(cpu);
13751 + }
13752 +- if (!ret)
13753 ++ if (!ret) {
13754 + cpu_smt_control = ctrlval;
13755 ++ arch_smt_update();
13756 ++ }
13757 + cpu_maps_update_done();
13758 + return ret;
13759 + }
13760 +@@ -2071,6 +2079,7 @@ static int cpuhp_smt_enable(void)
13761 +
13762 + cpu_maps_update_begin();
13763 + cpu_smt_control = CPU_SMT_ENABLED;
13764 ++ arch_smt_update();
13765 + for_each_present_cpu(cpu) {
13766 + /* Skip online CPUs and CPUs on offline nodes */
13767 + if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
13768 +diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
13769 +index d987dcd1bd56..54a33337680f 100644
13770 +--- a/kernel/dma/contiguous.c
13771 ++++ b/kernel/dma/contiguous.c
13772 +@@ -49,7 +49,11 @@ static phys_addr_t limit_cmdline;
13773 +
13774 + static int __init early_cma(char *p)
13775 + {
13776 +- pr_debug("%s(%s)\n", __func__, p);
13777 ++ if (!p) {
13778 ++ pr_err("Config string not provided\n");
13779 ++ return -EINVAL;
13780 ++ }
13781 ++
13782 + size_cmdline = memparse(p, &p);
13783 + if (*p != '@')
13784 + return 0;
13785 +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
13786 +index 9a8b7ba9aa88..c4e31f44a0ff 100644
13787 +--- a/kernel/irq/manage.c
13788 ++++ b/kernel/irq/manage.c
13789 +@@ -920,6 +920,9 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
13790 +
13791 + local_bh_disable();
13792 + ret = action->thread_fn(action->irq, action->dev_id);
13793 ++ if (ret == IRQ_HANDLED)
13794 ++ atomic_inc(&desc->threads_handled);
13795 ++
13796 + irq_finalize_oneshot(desc, action);
13797 + local_bh_enable();
13798 + return ret;
13799 +@@ -936,6 +939,9 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
13800 + irqreturn_t ret;
13801 +
13802 + ret = action->thread_fn(action->irq, action->dev_id);
13803 ++ if (ret == IRQ_HANDLED)
13804 ++ atomic_inc(&desc->threads_handled);
13805 ++
13806 + irq_finalize_oneshot(desc, action);
13807 + return ret;
13808 + }
13809 +@@ -1013,8 +1019,6 @@ static int irq_thread(void *data)
13810 + irq_thread_check_affinity(desc, action);
13811 +
13812 + action_ret = handler_fn(desc, action);
13813 +- if (action_ret == IRQ_HANDLED)
13814 +- atomic_inc(&desc->threads_handled);
13815 + if (action_ret == IRQ_WAKE_THREAD)
13816 + irq_wake_secondary(desc, action);
13817 +
13818 +diff --git a/kernel/kprobes.c b/kernel/kprobes.c
13819 +index f3183ad10d96..07f912b765db 100644
13820 +--- a/kernel/kprobes.c
13821 ++++ b/kernel/kprobes.c
13822 +@@ -700,9 +700,10 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
13823 + }
13824 +
13825 + /* Cancel unoptimizing for reusing */
13826 +-static void reuse_unused_kprobe(struct kprobe *ap)
13827 ++static int reuse_unused_kprobe(struct kprobe *ap)
13828 + {
13829 + struct optimized_kprobe *op;
13830 ++ int ret;
13831 +
13832 + BUG_ON(!kprobe_unused(ap));
13833 + /*
13834 +@@ -714,8 +715,12 @@ static void reuse_unused_kprobe(struct kprobe *ap)
13835 + /* Enable the probe again */
13836 + ap->flags &= ~KPROBE_FLAG_DISABLED;
13837 + /* Optimize it again (remove from op->list) */
13838 +- BUG_ON(!kprobe_optready(ap));
13839 ++ ret = kprobe_optready(ap);
13840 ++ if (ret)
13841 ++ return ret;
13842 ++
13843 + optimize_kprobe(ap);
13844 ++ return 0;
13845 + }
13846 +
13847 + /* Remove optimized instructions */
13848 +@@ -940,11 +945,16 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
13849 + #define kprobe_disarmed(p) kprobe_disabled(p)
13850 + #define wait_for_kprobe_optimizer() do {} while (0)
13851 +
13852 +-/* There should be no unused kprobes can be reused without optimization */
13853 +-static void reuse_unused_kprobe(struct kprobe *ap)
13854 ++static int reuse_unused_kprobe(struct kprobe *ap)
13855 + {
13856 ++ /*
13857 ++ * If the optimized kprobe is NOT supported, the aggr kprobe is
13858 ++ * released at the same time that the last aggregated kprobe is
13859 ++ * unregistered.
13860 ++ * Thus there should be no chance to reuse unused kprobe.
13861 ++ */
13862 + printk(KERN_ERR "Error: There should be no unused kprobe here.\n");
13863 +- BUG_ON(kprobe_unused(ap));
13864 ++ return -EINVAL;
13865 + }
13866 +
13867 + static void free_aggr_kprobe(struct kprobe *p)
13868 +@@ -1343,9 +1353,12 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
13869 + goto out;
13870 + }
13871 + init_aggr_kprobe(ap, orig_p);
13872 +- } else if (kprobe_unused(ap))
13873 ++ } else if (kprobe_unused(ap)) {
13874 + /* This probe is going to die. Rescue it */
13875 +- reuse_unused_kprobe(ap);
13876 ++ ret = reuse_unused_kprobe(ap);
13877 ++ if (ret)
13878 ++ goto out;
13879 ++ }
13880 +
13881 + if (kprobe_gone(ap)) {
13882 + /*
13883 +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
13884 +index 5fa4d3138bf1..aa6ebb799f16 100644
13885 +--- a/kernel/locking/lockdep.c
13886 ++++ b/kernel/locking/lockdep.c
13887 +@@ -4148,7 +4148,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip)
13888 + {
13889 + unsigned long flags;
13890 +
13891 +- if (unlikely(!lock_stat))
13892 ++ if (unlikely(!lock_stat || !debug_locks))
13893 + return;
13894 +
13895 + if (unlikely(current->lockdep_recursion))
13896 +@@ -4168,7 +4168,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip)
13897 + {
13898 + unsigned long flags;
13899 +
13900 +- if (unlikely(!lock_stat))
13901 ++ if (unlikely(!lock_stat || !debug_locks))
13902 + return;
13903 +
13904 + if (unlikely(current->lockdep_recursion))
13905 +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
13906 +index 1d1513215c22..72de8cc5a13e 100644
13907 +--- a/kernel/printk/printk.c
13908 ++++ b/kernel/printk/printk.c
13909 +@@ -1047,7 +1047,12 @@ static void __init log_buf_len_update(unsigned size)
13910 + /* save requested log_buf_len since it's too early to process it */
13911 + static int __init log_buf_len_setup(char *str)
13912 + {
13913 +- unsigned size = memparse(str, &str);
13914 ++ unsigned int size;
13915 ++
13916 ++ if (!str)
13917 ++ return -EINVAL;
13918 ++
13919 ++ size = memparse(str, &str);
13920 +
13921 + log_buf_len_update(size);
13922 +
13923 +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
13924 +index b27b9509ea89..9e4f550e4797 100644
13925 +--- a/kernel/sched/fair.c
13926 ++++ b/kernel/sched/fair.c
13927 +@@ -4321,7 +4321,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
13928 + * put back on, and if we advance min_vruntime, we'll be placed back
13929 + * further than we started -- ie. we'll be penalized.
13930 + */
13931 +- if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
13932 ++ if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
13933 + update_min_vruntime(cfs_rq);
13934 + }
13935 +
13936 +diff --git a/kernel/signal.c b/kernel/signal.c
13937 +index 8d8a940422a8..dce9859f6547 100644
13938 +--- a/kernel/signal.c
13939 ++++ b/kernel/signal.c
13940 +@@ -1009,7 +1009,7 @@ static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
13941 +
13942 + result = TRACE_SIGNAL_IGNORED;
13943 + if (!prepare_signal(sig, t,
13944 +- from_ancestor_ns || (info == SEND_SIG_FORCED)))
13945 ++ from_ancestor_ns || (info == SEND_SIG_PRIV) || (info == SEND_SIG_FORCED)))
13946 + goto ret;
13947 +
13948 + pending = group ? &t->signal->shared_pending : &t->pending;
13949 +@@ -2804,7 +2804,7 @@ COMPAT_SYSCALL_DEFINE2(rt_sigpending, compat_sigset_t __user *, uset,
13950 + }
13951 + #endif
13952 +
13953 +-enum siginfo_layout siginfo_layout(int sig, int si_code)
13954 ++enum siginfo_layout siginfo_layout(unsigned sig, int si_code)
13955 + {
13956 + enum siginfo_layout layout = SIL_KILL;
13957 + if ((si_code > SI_USER) && (si_code < SI_KERNEL)) {
13958 +@@ -3417,7 +3417,8 @@ int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact)
13959 + }
13960 +
13961 + static int
13962 +-do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
13963 ++do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp,
13964 ++ size_t min_ss_size)
13965 + {
13966 + struct task_struct *t = current;
13967 +
13968 +@@ -3447,7 +3448,7 @@ do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
13969 + ss_size = 0;
13970 + ss_sp = NULL;
13971 + } else {
13972 +- if (unlikely(ss_size < MINSIGSTKSZ))
13973 ++ if (unlikely(ss_size < min_ss_size))
13974 + return -ENOMEM;
13975 + }
13976 +
13977 +@@ -3465,7 +3466,8 @@ SYSCALL_DEFINE2(sigaltstack,const stack_t __user *,uss, stack_t __user *,uoss)
13978 + if (uss && copy_from_user(&new, uss, sizeof(stack_t)))
13979 + return -EFAULT;
13980 + err = do_sigaltstack(uss ? &new : NULL, uoss ? &old : NULL,
13981 +- current_user_stack_pointer());
13982 ++ current_user_stack_pointer(),
13983 ++ MINSIGSTKSZ);
13984 + if (!err && uoss && copy_to_user(uoss, &old, sizeof(stack_t)))
13985 + err = -EFAULT;
13986 + return err;
13987 +@@ -3476,7 +3478,8 @@ int restore_altstack(const stack_t __user *uss)
13988 + stack_t new;
13989 + if (copy_from_user(&new, uss, sizeof(stack_t)))
13990 + return -EFAULT;
13991 +- (void)do_sigaltstack(&new, NULL, current_user_stack_pointer());
13992 ++ (void)do_sigaltstack(&new, NULL, current_user_stack_pointer(),
13993 ++ MINSIGSTKSZ);
13994 + /* squash all but EFAULT for now */
13995 + return 0;
13996 + }
13997 +@@ -3510,7 +3513,8 @@ static int do_compat_sigaltstack(const compat_stack_t __user *uss_ptr,
13998 + uss.ss_size = uss32.ss_size;
13999 + }
14000 + ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss,
14001 +- compat_user_stack_pointer());
14002 ++ compat_user_stack_pointer(),
14003 ++ COMPAT_MINSIGSTKSZ);
14004 + if (ret >= 0 && uoss_ptr) {
14005 + compat_stack_t old;
14006 + memset(&old, 0, sizeof(old));
14007 +diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
14008 +index 6c78bc2b7fff..b3482eed270c 100644
14009 +--- a/kernel/trace/trace_events_hist.c
14010 ++++ b/kernel/trace/trace_events_hist.c
14011 +@@ -1072,8 +1072,10 @@ static int create_synth_event(int argc, char **argv)
14012 + event = NULL;
14013 + ret = -EEXIST;
14014 + goto out;
14015 +- } else if (delete_event)
14016 ++ } else if (delete_event) {
14017 ++ ret = -ENOENT;
14018 + goto out;
14019 ++ }
14020 +
14021 + if (argc < 2) {
14022 + ret = -EINVAL;
14023 +diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
14024 +index e5222b5fb4fe..923414a246e9 100644
14025 +--- a/kernel/user_namespace.c
14026 ++++ b/kernel/user_namespace.c
14027 +@@ -974,10 +974,6 @@ static ssize_t map_write(struct file *file, const char __user *buf,
14028 + if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
14029 + goto out;
14030 +
14031 +- ret = sort_idmaps(&new_map);
14032 +- if (ret < 0)
14033 +- goto out;
14034 +-
14035 + ret = -EPERM;
14036 + /* Map the lower ids from the parent user namespace to the
14037 + * kernel global id space.
14038 +@@ -1004,6 +1000,14 @@ static ssize_t map_write(struct file *file, const char __user *buf,
14039 + e->lower_first = lower_first;
14040 + }
14041 +
14042 ++ /*
14043 ++ * If we want to use binary search for lookup, this clones the extent
14044 ++ * array and sorts both copies.
14045 ++ */
14046 ++ ret = sort_idmaps(&new_map);
14047 ++ if (ret < 0)
14048 ++ goto out;
14049 ++
14050 + /* Install the map */
14051 + if (new_map.nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS) {
14052 + memcpy(map->extent, new_map.extent,
14053 +diff --git a/lib/debug_locks.c b/lib/debug_locks.c
14054 +index 96c4c633d95e..124fdf238b3d 100644
14055 +--- a/lib/debug_locks.c
14056 ++++ b/lib/debug_locks.c
14057 +@@ -37,7 +37,7 @@ EXPORT_SYMBOL_GPL(debug_locks_silent);
14058 + */
14059 + int debug_locks_off(void)
14060 + {
14061 +- if (__debug_locks_off()) {
14062 ++ if (debug_locks && __debug_locks_off()) {
14063 + if (!debug_locks_silent) {
14064 + console_verbose();
14065 + return 1;
14066 +diff --git a/mm/hmm.c b/mm/hmm.c
14067 +index f9d1d89dec4d..49e3db686348 100644
14068 +--- a/mm/hmm.c
14069 ++++ b/mm/hmm.c
14070 +@@ -91,16 +91,6 @@ static struct hmm *hmm_register(struct mm_struct *mm)
14071 + spin_lock_init(&hmm->lock);
14072 + hmm->mm = mm;
14073 +
14074 +- /*
14075 +- * We should only get here if hold the mmap_sem in write mode ie on
14076 +- * registration of first mirror through hmm_mirror_register()
14077 +- */
14078 +- hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
14079 +- if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) {
14080 +- kfree(hmm);
14081 +- return NULL;
14082 +- }
14083 +-
14084 + spin_lock(&mm->page_table_lock);
14085 + if (!mm->hmm)
14086 + mm->hmm = hmm;
14087 +@@ -108,12 +98,27 @@ static struct hmm *hmm_register(struct mm_struct *mm)
14088 + cleanup = true;
14089 + spin_unlock(&mm->page_table_lock);
14090 +
14091 +- if (cleanup) {
14092 +- mmu_notifier_unregister(&hmm->mmu_notifier, mm);
14093 +- kfree(hmm);
14094 +- }
14095 ++ if (cleanup)
14096 ++ goto error;
14097 ++
14098 ++ /*
14099 ++ * We should only get here if hold the mmap_sem in write mode ie on
14100 ++ * registration of first mirror through hmm_mirror_register()
14101 ++ */
14102 ++ hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
14103 ++ if (__mmu_notifier_register(&hmm->mmu_notifier, mm))
14104 ++ goto error_mm;
14105 +
14106 + return mm->hmm;
14107 ++
14108 ++error_mm:
14109 ++ spin_lock(&mm->page_table_lock);
14110 ++ if (mm->hmm == hmm)
14111 ++ mm->hmm = NULL;
14112 ++ spin_unlock(&mm->page_table_lock);
14113 ++error:
14114 ++ kfree(hmm);
14115 ++ return NULL;
14116 + }
14117 +
14118 + void hmm_mm_destroy(struct mm_struct *mm)
14119 +@@ -275,12 +280,13 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror)
14120 + if (!should_unregister || mm == NULL)
14121 + return;
14122 +
14123 ++ mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
14124 ++
14125 + spin_lock(&mm->page_table_lock);
14126 + if (mm->hmm == hmm)
14127 + mm->hmm = NULL;
14128 + spin_unlock(&mm->page_table_lock);
14129 +
14130 +- mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
14131 + kfree(hmm);
14132 + }
14133 + EXPORT_SYMBOL(hmm_mirror_unregister);
14134 +diff --git a/mm/hugetlb.c b/mm/hugetlb.c
14135 +index f469315a6a0f..5b38fbef9441 100644
14136 +--- a/mm/hugetlb.c
14137 ++++ b/mm/hugetlb.c
14138 +@@ -3678,6 +3678,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
14139 + return err;
14140 + ClearPagePrivate(page);
14141 +
14142 ++ /*
14143 ++ * set page dirty so that it will not be removed from cache/file
14144 ++ * by non-hugetlbfs specific code paths.
14145 ++ */
14146 ++ set_page_dirty(page);
14147 ++
14148 + spin_lock(&inode->i_lock);
14149 + inode->i_blocks += blocks_per_huge_page(h);
14150 + spin_unlock(&inode->i_lock);
14151 +diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
14152 +index ae3c2a35d61b..11df03e71288 100644
14153 +--- a/mm/page_vma_mapped.c
14154 ++++ b/mm/page_vma_mapped.c
14155 +@@ -21,7 +21,29 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
14156 + if (!is_swap_pte(*pvmw->pte))
14157 + return false;
14158 + } else {
14159 +- if (!pte_present(*pvmw->pte))
14160 ++ /*
14161 ++ * We get here when we are trying to unmap a private
14162 ++ * device page from the process address space. Such
14163 ++ * page is not CPU accessible and thus is mapped as
14164 ++ * a special swap entry, nonetheless it still does
14165 ++ * count as a valid regular mapping for the page (and
14166 ++ * is accounted as such in page maps count).
14167 ++ *
14168 ++ * So handle this special case as if it was a normal
14169 ++ * page mapping ie lock CPU page table and returns
14170 ++ * true.
14171 ++ *
14172 ++ * For more details on device private memory see HMM
14173 ++ * (include/linux/hmm.h or mm/hmm.c).
14174 ++ */
14175 ++ if (is_swap_pte(*pvmw->pte)) {
14176 ++ swp_entry_t entry;
14177 ++
14178 ++ /* Handle un-addressable ZONE_DEVICE memory */
14179 ++ entry = pte_to_swp_entry(*pvmw->pte);
14180 ++ if (!is_device_private_entry(entry))
14181 ++ return false;
14182 ++ } else if (!pte_present(*pvmw->pte))
14183 + return false;
14184 + }
14185 + }
14186 +diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
14187 +index 5e4f04004a49..7bf833598615 100644
14188 +--- a/net/core/netclassid_cgroup.c
14189 ++++ b/net/core/netclassid_cgroup.c
14190 +@@ -106,6 +106,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
14191 + iterate_fd(p->files, 0, update_classid_sock,
14192 + (void *)(unsigned long)cs->classid);
14193 + task_unlock(p);
14194 ++ cond_resched();
14195 + }
14196 + css_task_iter_end(&it);
14197 +
14198 +diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
14199 +index 82178cc69c96..777fa3b7fb13 100644
14200 +--- a/net/ipv4/cipso_ipv4.c
14201 ++++ b/net/ipv4/cipso_ipv4.c
14202 +@@ -1512,7 +1512,7 @@ static int cipso_v4_parsetag_loc(const struct cipso_v4_doi *doi_def,
14203 + *
14204 + * Description:
14205 + * Parse the packet's IP header looking for a CIPSO option. Returns a pointer
14206 +- * to the start of the CIPSO option on success, NULL if one if not found.
14207 ++ * to the start of the CIPSO option on success, NULL if one is not found.
14208 + *
14209 + */
14210 + unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
14211 +@@ -1522,10 +1522,8 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
14212 + int optlen;
14213 + int taglen;
14214 +
14215 +- for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 0; ) {
14216 ++ for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 1; ) {
14217 + switch (optptr[0]) {
14218 +- case IPOPT_CIPSO:
14219 +- return optptr;
14220 + case IPOPT_END:
14221 + return NULL;
14222 + case IPOPT_NOOP:
14223 +@@ -1534,6 +1532,11 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
14224 + default:
14225 + taglen = optptr[1];
14226 + }
14227 ++ if (!taglen || taglen > optlen)
14228 ++ return NULL;
14229 ++ if (optptr[0] == IPOPT_CIPSO)
14230 ++ return optptr;
14231 ++
14232 + optlen -= taglen;
14233 + optptr += taglen;
14234 + }
14235 +diff --git a/net/netfilter/xt_nat.c b/net/netfilter/xt_nat.c
14236 +index 8af9707f8789..ac91170fc8c8 100644
14237 +--- a/net/netfilter/xt_nat.c
14238 ++++ b/net/netfilter/xt_nat.c
14239 +@@ -216,6 +216,8 @@ static struct xt_target xt_nat_target_reg[] __read_mostly = {
14240 + {
14241 + .name = "DNAT",
14242 + .revision = 2,
14243 ++ .checkentry = xt_nat_checkentry,
14244 ++ .destroy = xt_nat_destroy,
14245 + .target = xt_dnat_target_v2,
14246 + .targetsize = sizeof(struct nf_nat_range2),
14247 + .table = "nat",
14248 +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
14249 +index 57f71765febe..ce852f8c1d27 100644
14250 +--- a/net/sched/sch_api.c
14251 ++++ b/net/sched/sch_api.c
14252 +@@ -1306,7 +1306,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
14253 +
14254 + const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
14255 + [TCA_KIND] = { .type = NLA_STRING },
14256 +- [TCA_OPTIONS] = { .type = NLA_NESTED },
14257 + [TCA_RATE] = { .type = NLA_BINARY,
14258 + .len = sizeof(struct tc_estimator) },
14259 + [TCA_STAB] = { .type = NLA_NESTED },
14260 +diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
14261 +index 5185efb9027b..83ccd0221c98 100644
14262 +--- a/net/sunrpc/svc_xprt.c
14263 ++++ b/net/sunrpc/svc_xprt.c
14264 +@@ -989,7 +989,7 @@ static void call_xpt_users(struct svc_xprt *xprt)
14265 + spin_lock(&xprt->xpt_lock);
14266 + while (!list_empty(&xprt->xpt_users)) {
14267 + u = list_first_entry(&xprt->xpt_users, struct svc_xpt_user, list);
14268 +- list_del(&u->list);
14269 ++ list_del_init(&u->list);
14270 + u->callback(u);
14271 + }
14272 + spin_unlock(&xprt->xpt_lock);
14273 +diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
14274 +index a68180090554..b9827665ff35 100644
14275 +--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
14276 ++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
14277 +@@ -248,6 +248,7 @@ static void
14278 + xprt_rdma_bc_close(struct rpc_xprt *xprt)
14279 + {
14280 + dprintk("svcrdma: %s: xprt %p\n", __func__, xprt);
14281 ++ xprt->cwnd = RPC_CWNDSHIFT;
14282 + }
14283 +
14284 + static void
14285 +diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
14286 +index 143ce2579ba9..98cbc7b060ba 100644
14287 +--- a/net/sunrpc/xprtrdma/transport.c
14288 ++++ b/net/sunrpc/xprtrdma/transport.c
14289 +@@ -468,6 +468,12 @@ xprt_rdma_close(struct rpc_xprt *xprt)
14290 + xprt->reestablish_timeout = 0;
14291 + xprt_disconnect_done(xprt);
14292 + rpcrdma_ep_disconnect(ep, ia);
14293 ++
14294 ++ /* Prepare @xprt for the next connection by reinitializing
14295 ++ * its credit grant to one (see RFC 8166, Section 3.3.3).
14296 ++ */
14297 ++ r_xprt->rx_buf.rb_credits = 1;
14298 ++ xprt->cwnd = RPC_CWNDSHIFT;
14299 + }
14300 +
14301 + /**
14302 +diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
14303 +index 4e937cd7c17d..661504042d30 100644
14304 +--- a/net/xdp/xsk.c
14305 ++++ b/net/xdp/xsk.c
14306 +@@ -744,6 +744,8 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
14307 + sk->sk_destruct = xsk_destruct;
14308 + sk_refcnt_debug_inc(sk);
14309 +
14310 ++ sock_set_flag(sk, SOCK_RCU_FREE);
14311 ++
14312 + xs = xdp_sk(sk);
14313 + mutex_init(&xs->mutex);
14314 + spin_lock_init(&xs->tx_completion_lock);
14315 +diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
14316 +index 526e6814ed4b..1d2e0a90c0ca 100644
14317 +--- a/net/xfrm/xfrm_policy.c
14318 ++++ b/net/xfrm/xfrm_policy.c
14319 +@@ -625,9 +625,9 @@ static void xfrm_hash_rebuild(struct work_struct *work)
14320 + break;
14321 + }
14322 + if (newpos)
14323 +- hlist_add_behind(&policy->bydst, newpos);
14324 ++ hlist_add_behind_rcu(&policy->bydst, newpos);
14325 + else
14326 +- hlist_add_head(&policy->bydst, chain);
14327 ++ hlist_add_head_rcu(&policy->bydst, chain);
14328 + }
14329 +
14330 + spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
14331 +@@ -766,9 +766,9 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
14332 + break;
14333 + }
14334 + if (newpos)
14335 +- hlist_add_behind(&policy->bydst, newpos);
14336 ++ hlist_add_behind_rcu(&policy->bydst, newpos);
14337 + else
14338 +- hlist_add_head(&policy->bydst, chain);
14339 ++ hlist_add_head_rcu(&policy->bydst, chain);
14340 + __xfrm_policy_link(policy, dir);
14341 +
14342 + /* After previous checking, family can either be AF_INET or AF_INET6 */
14343 +diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
14344 +index ae9d5c766a3c..cfb8cc3b975e 100644
14345 +--- a/security/integrity/ima/ima_fs.c
14346 ++++ b/security/integrity/ima/ima_fs.c
14347 +@@ -42,14 +42,14 @@ static int __init default_canonical_fmt_setup(char *str)
14348 + __setup("ima_canonical_fmt", default_canonical_fmt_setup);
14349 +
14350 + static int valid_policy = 1;
14351 +-#define TMPBUFLEN 12
14352 ++
14353 + static ssize_t ima_show_htable_value(char __user *buf, size_t count,
14354 + loff_t *ppos, atomic_long_t *val)
14355 + {
14356 +- char tmpbuf[TMPBUFLEN];
14357 ++ char tmpbuf[32]; /* greater than largest 'long' string value */
14358 + ssize_t len;
14359 +
14360 +- len = scnprintf(tmpbuf, TMPBUFLEN, "%li\n", atomic_long_read(val));
14361 ++ len = scnprintf(tmpbuf, sizeof(tmpbuf), "%li\n", atomic_long_read(val));
14362 + return simple_read_from_buffer(buf, count, ppos, tmpbuf, len);
14363 + }
14364 +
14365 +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
14366 +index 2b5ee5fbd652..4680a217d0fa 100644
14367 +--- a/security/selinux/hooks.c
14368 ++++ b/security/selinux/hooks.c
14369 +@@ -1509,6 +1509,11 @@ static int selinux_genfs_get_sid(struct dentry *dentry,
14370 + }
14371 + rc = security_genfs_sid(&selinux_state, sb->s_type->name,
14372 + path, tclass, sid);
14373 ++ if (rc == -ENOENT) {
14374 ++ /* No match in policy, mark as unlabeled. */
14375 ++ *sid = SECINITSID_UNLABELED;
14376 ++ rc = 0;
14377 ++ }
14378 + }
14379 + free_page((unsigned long)buffer);
14380 + return rc;
14381 +diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
14382 +index 8b6cd5a79bfa..a81d815c81f3 100644
14383 +--- a/security/smack/smack_lsm.c
14384 ++++ b/security/smack/smack_lsm.c
14385 +@@ -420,6 +420,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
14386 + struct smk_audit_info ad, *saip = NULL;
14387 + struct task_smack *tsp;
14388 + struct smack_known *tracer_known;
14389 ++ const struct cred *tracercred;
14390 +
14391 + if ((mode & PTRACE_MODE_NOAUDIT) == 0) {
14392 + smk_ad_init(&ad, func, LSM_AUDIT_DATA_TASK);
14393 +@@ -428,7 +429,8 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
14394 + }
14395 +
14396 + rcu_read_lock();
14397 +- tsp = __task_cred(tracer)->security;
14398 ++ tracercred = __task_cred(tracer);
14399 ++ tsp = tracercred->security;
14400 + tracer_known = smk_of_task(tsp);
14401 +
14402 + if ((mode & PTRACE_MODE_ATTACH) &&
14403 +@@ -438,7 +440,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
14404 + rc = 0;
14405 + else if (smack_ptrace_rule == SMACK_PTRACE_DRACONIAN)
14406 + rc = -EACCES;
14407 +- else if (capable(CAP_SYS_PTRACE))
14408 ++ else if (smack_privileged_cred(CAP_SYS_PTRACE, tracercred))
14409 + rc = 0;
14410 + else
14411 + rc = -EACCES;
14412 +@@ -1840,6 +1842,7 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
14413 + {
14414 + struct smack_known *skp;
14415 + struct smack_known *tkp = smk_of_task(tsk->cred->security);
14416 ++ const struct cred *tcred;
14417 + struct file *file;
14418 + int rc;
14419 + struct smk_audit_info ad;
14420 +@@ -1853,8 +1856,12 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
14421 + skp = file->f_security;
14422 + rc = smk_access(skp, tkp, MAY_DELIVER, NULL);
14423 + rc = smk_bu_note("sigiotask", skp, tkp, MAY_DELIVER, rc);
14424 +- if (rc != 0 && has_capability(tsk, CAP_MAC_OVERRIDE))
14425 ++
14426 ++ rcu_read_lock();
14427 ++ tcred = __task_cred(tsk);
14428 ++ if (rc != 0 && smack_privileged_cred(CAP_MAC_OVERRIDE, tcred))
14429 + rc = 0;
14430 ++ rcu_read_unlock();
14431 +
14432 + smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_TASK);
14433 + smk_ad_setfield_u_tsk(&ad, tsk);
14434 +diff --git a/sound/pci/ca0106/ca0106.h b/sound/pci/ca0106/ca0106.h
14435 +index 04402c14cb23..9847b669cf3c 100644
14436 +--- a/sound/pci/ca0106/ca0106.h
14437 ++++ b/sound/pci/ca0106/ca0106.h
14438 +@@ -582,7 +582,7 @@
14439 + #define SPI_PL_BIT_R_R (2<<7) /* right channel = right */
14440 + #define SPI_PL_BIT_R_C (3<<7) /* right channel = (L+R)/2 */
14441 + #define SPI_IZD_REG 2
14442 +-#define SPI_IZD_BIT (1<<4) /* infinite zero detect */
14443 ++#define SPI_IZD_BIT (0<<4) /* infinite zero detect */
14444 +
14445 + #define SPI_FMT_REG 3
14446 + #define SPI_FMT_BIT_RJ (0<<0) /* right justified mode */
14447 +diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
14448 +index a68e75b00ea3..53c3cd28bc99 100644
14449 +--- a/sound/pci/hda/hda_controller.h
14450 ++++ b/sound/pci/hda/hda_controller.h
14451 +@@ -160,6 +160,7 @@ struct azx {
14452 + unsigned int msi:1;
14453 + unsigned int probing:1; /* codec probing phase */
14454 + unsigned int snoop:1;
14455 ++ unsigned int uc_buffer:1; /* non-cached pages for stream buffers */
14456 + unsigned int align_buffer_size:1;
14457 + unsigned int region_requested:1;
14458 + unsigned int disabled:1; /* disabled by vga_switcheroo */
14459 +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
14460 +index 28dc5e124995..6f6703e53a05 100644
14461 +--- a/sound/pci/hda/hda_intel.c
14462 ++++ b/sound/pci/hda/hda_intel.c
14463 +@@ -410,7 +410,7 @@ static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool
14464 + #ifdef CONFIG_SND_DMA_SGBUF
14465 + if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) {
14466 + struct snd_sg_buf *sgbuf = dmab->private_data;
14467 +- if (chip->driver_type == AZX_DRIVER_CMEDIA)
14468 ++ if (!chip->uc_buffer)
14469 + return; /* deal with only CORB/RIRB buffers */
14470 + if (on)
14471 + set_pages_array_wc(sgbuf->page_table, sgbuf->pages);
14472 +@@ -1636,6 +1636,7 @@ static void azx_check_snoop_available(struct azx *chip)
14473 + dev_info(chip->card->dev, "Force to %s mode by module option\n",
14474 + snoop ? "snoop" : "non-snoop");
14475 + chip->snoop = snoop;
14476 ++ chip->uc_buffer = !snoop;
14477 + return;
14478 + }
14479 +
14480 +@@ -1656,8 +1657,12 @@ static void azx_check_snoop_available(struct azx *chip)
14481 + snoop = false;
14482 +
14483 + chip->snoop = snoop;
14484 +- if (!snoop)
14485 ++ if (!snoop) {
14486 + dev_info(chip->card->dev, "Force to non-snoop mode\n");
14487 ++ /* C-Media requires non-cached pages only for CORB/RIRB */
14488 ++ if (chip->driver_type != AZX_DRIVER_CMEDIA)
14489 ++ chip->uc_buffer = true;
14490 ++ }
14491 + }
14492 +
14493 + static void azx_probe_work(struct work_struct *work)
14494 +@@ -2096,7 +2101,7 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
14495 + #ifdef CONFIG_X86
14496 + struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
14497 + struct azx *chip = apcm->chip;
14498 +- if (!azx_snoop(chip) && chip->driver_type != AZX_DRIVER_CMEDIA)
14499 ++ if (chip->uc_buffer)
14500 + area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
14501 + #endif
14502 + }
14503 +@@ -2215,8 +2220,12 @@ static struct snd_pci_quirk power_save_blacklist[] = {
14504 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1581607 */
14505 + SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", 0),
14506 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
14507 ++ SND_PCI_QUIRK(0x1028, 0x0497, "Dell Precision T3600", 0),
14508 ++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
14509 + /* Note the P55A-UD3 and Z87-D3HP share the subsys id for the HDA dev */
14510 + SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte P55A-UD3 / Z87-D3HP", 0),
14511 ++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
14512 ++ SND_PCI_QUIRK(0x8086, 0x2040, "Intel DZ77BH-55K", 0),
14513 + /* https://bugzilla.kernel.org/show_bug.cgi?id=199607 */
14514 + SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
14515 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
14516 +diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
14517 +index 1a8a2d440fbd..7d6c3cebb0e3 100644
14518 +--- a/sound/pci/hda/patch_conexant.c
14519 ++++ b/sound/pci/hda/patch_conexant.c
14520 +@@ -980,6 +980,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
14521 + SND_PCI_QUIRK(0x17aa, 0x21da, "Lenovo X220", CXT_PINCFG_LENOVO_TP410),
14522 + SND_PCI_QUIRK(0x17aa, 0x21db, "Lenovo X220-tablet", CXT_PINCFG_LENOVO_TP410),
14523 + SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo IdeaPad Z560", CXT_FIXUP_MUTE_LED_EAPD),
14524 ++ SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
14525 + SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
14526 + SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
14527 + SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
14528 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
14529 +index 08b6369f930b..23dd4bb026d1 100644
14530 +--- a/sound/pci/hda/patch_realtek.c
14531 ++++ b/sound/pci/hda/patch_realtek.c
14532 +@@ -6799,6 +6799,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
14533 + {0x1a, 0x02a11040},
14534 + {0x1b, 0x01014020},
14535 + {0x21, 0x0221101f}),
14536 ++ SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
14537 ++ {0x14, 0x90170110},
14538 ++ {0x19, 0x02a11030},
14539 ++ {0x1a, 0x02a11040},
14540 ++ {0x1b, 0x01011020},
14541 ++ {0x21, 0x0221101f}),
14542 + SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
14543 + {0x14, 0x90170110},
14544 + {0x19, 0x02a11020},
14545 +@@ -7690,6 +7696,8 @@ enum {
14546 + ALC662_FIXUP_ASUS_Nx50,
14547 + ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
14548 + ALC668_FIXUP_ASUS_Nx51,
14549 ++ ALC668_FIXUP_MIC_COEF,
14550 ++ ALC668_FIXUP_ASUS_G751,
14551 + ALC891_FIXUP_HEADSET_MODE,
14552 + ALC891_FIXUP_DELL_MIC_NO_PRESENCE,
14553 + ALC662_FIXUP_ACER_VERITON,
14554 +@@ -7959,6 +7967,23 @@ static const struct hda_fixup alc662_fixups[] = {
14555 + .chained = true,
14556 + .chain_id = ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
14557 + },
14558 ++ [ALC668_FIXUP_MIC_COEF] = {
14559 ++ .type = HDA_FIXUP_VERBS,
14560 ++ .v.verbs = (const struct hda_verb[]) {
14561 ++ { 0x20, AC_VERB_SET_COEF_INDEX, 0xc3 },
14562 ++ { 0x20, AC_VERB_SET_PROC_COEF, 0x4000 },
14563 ++ {}
14564 ++ },
14565 ++ },
14566 ++ [ALC668_FIXUP_ASUS_G751] = {
14567 ++ .type = HDA_FIXUP_PINS,
14568 ++ .v.pins = (const struct hda_pintbl[]) {
14569 ++ { 0x16, 0x0421101f }, /* HP */
14570 ++ {}
14571 ++ },
14572 ++ .chained = true,
14573 ++ .chain_id = ALC668_FIXUP_MIC_COEF
14574 ++ },
14575 + [ALC891_FIXUP_HEADSET_MODE] = {
14576 + .type = HDA_FIXUP_FUNC,
14577 + .v.func = alc_fixup_headset_mode,
14578 +@@ -8032,6 +8057,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
14579 + SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
14580 + SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
14581 + SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
14582 ++ SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
14583 + SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
14584 + SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
14585 + SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
14586 +diff --git a/sound/soc/codecs/sta32x.c b/sound/soc/codecs/sta32x.c
14587 +index d5035f2f2b2b..ce508b4cc85c 100644
14588 +--- a/sound/soc/codecs/sta32x.c
14589 ++++ b/sound/soc/codecs/sta32x.c
14590 +@@ -879,6 +879,9 @@ static int sta32x_probe(struct snd_soc_component *component)
14591 + struct sta32x_priv *sta32x = snd_soc_component_get_drvdata(component);
14592 + struct sta32x_platform_data *pdata = sta32x->pdata;
14593 + int i, ret = 0, thermal = 0;
14594 ++
14595 ++ sta32x->component = component;
14596 ++
14597 + ret = regulator_bulk_enable(ARRAY_SIZE(sta32x->supplies),
14598 + sta32x->supplies);
14599 + if (ret != 0) {
14600 +diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
14601 +index fcdc716754b6..bde2effde861 100644
14602 +--- a/sound/soc/intel/skylake/skl-topology.c
14603 ++++ b/sound/soc/intel/skylake/skl-topology.c
14604 +@@ -2458,6 +2458,7 @@ static int skl_tplg_get_token(struct device *dev,
14605 +
14606 + case SKL_TKN_U8_CORE_ID:
14607 + mconfig->core_id = tkn_elem->value;
14608 ++ break;
14609 +
14610 + case SKL_TKN_U8_MOD_TYPE:
14611 + mconfig->m_type = tkn_elem->value;
14612 +diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
14613 +index 67b042738ed7..986151732d68 100644
14614 +--- a/tools/perf/Makefile.config
14615 ++++ b/tools/perf/Makefile.config
14616 +@@ -831,7 +831,7 @@ ifndef NO_JVMTI
14617 + JDIR=$(shell /usr/sbin/update-java-alternatives -l | head -1 | awk '{print $$3}')
14618 + else
14619 + ifneq (,$(wildcard /usr/sbin/alternatives))
14620 +- JDIR=$(shell alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
14621 ++ JDIR=$(shell /usr/sbin/alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
14622 + endif
14623 + endif
14624 + ifndef JDIR
14625 +diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
14626 +index c04dc7b53797..82a3c8be19ee 100644
14627 +--- a/tools/perf/builtin-report.c
14628 ++++ b/tools/perf/builtin-report.c
14629 +@@ -981,6 +981,7 @@ int cmd_report(int argc, const char **argv)
14630 + .id_index = perf_event__process_id_index,
14631 + .auxtrace_info = perf_event__process_auxtrace_info,
14632 + .auxtrace = perf_event__process_auxtrace,
14633 ++ .event_update = perf_event__process_event_update,
14634 + .feature = process_feature_event,
14635 + .ordered_events = true,
14636 + .ordering_requires_timestamps = true,
14637 +diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
14638 +index d40498f2cb1e..635c09fda1d9 100644
14639 +--- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
14640 ++++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
14641 +@@ -188,7 +188,7 @@
14642 + "Counter": "0,1,2,3",
14643 + "EventCode": "0xb",
14644 + "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
14645 +- "Filter": "filter_band0=1200",
14646 ++ "Filter": "filter_band0=12",
14647 + "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14648 + "MetricName": "freq_ge_1200mhz_cycles %",
14649 + "PerPkg": "1",
14650 +@@ -199,7 +199,7 @@
14651 + "Counter": "0,1,2,3",
14652 + "EventCode": "0xc",
14653 + "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
14654 +- "Filter": "filter_band1=2000",
14655 ++ "Filter": "filter_band1=20",
14656 + "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14657 + "MetricName": "freq_ge_2000mhz_cycles %",
14658 + "PerPkg": "1",
14659 +@@ -210,7 +210,7 @@
14660 + "Counter": "0,1,2,3",
14661 + "EventCode": "0xd",
14662 + "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
14663 +- "Filter": "filter_band2=3000",
14664 ++ "Filter": "filter_band2=30",
14665 + "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14666 + "MetricName": "freq_ge_3000mhz_cycles %",
14667 + "PerPkg": "1",
14668 +@@ -221,7 +221,7 @@
14669 + "Counter": "0,1,2,3",
14670 + "EventCode": "0xe",
14671 + "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
14672 +- "Filter": "filter_band3=4000",
14673 ++ "Filter": "filter_band3=40",
14674 + "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14675 + "MetricName": "freq_ge_4000mhz_cycles %",
14676 + "PerPkg": "1",
14677 +@@ -232,7 +232,7 @@
14678 + "Counter": "0,1,2,3",
14679 + "EventCode": "0xb",
14680 + "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
14681 +- "Filter": "edge=1,filter_band0=1200",
14682 ++ "Filter": "edge=1,filter_band0=12",
14683 + "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14684 + "MetricName": "freq_ge_1200mhz_cycles %",
14685 + "PerPkg": "1",
14686 +@@ -243,7 +243,7 @@
14687 + "Counter": "0,1,2,3",
14688 + "EventCode": "0xc",
14689 + "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
14690 +- "Filter": "edge=1,filter_band1=2000",
14691 ++ "Filter": "edge=1,filter_band1=20",
14692 + "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14693 + "MetricName": "freq_ge_2000mhz_cycles %",
14694 + "PerPkg": "1",
14695 +@@ -254,7 +254,7 @@
14696 + "Counter": "0,1,2,3",
14697 + "EventCode": "0xd",
14698 + "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
14699 +- "Filter": "edge=1,filter_band2=4000",
14700 ++ "Filter": "edge=1,filter_band2=30",
14701 + "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14702 + "MetricName": "freq_ge_3000mhz_cycles %",
14703 + "PerPkg": "1",
14704 +@@ -265,7 +265,7 @@
14705 + "Counter": "0,1,2,3",
14706 + "EventCode": "0xe",
14707 + "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
14708 +- "Filter": "edge=1,filter_band3=4000",
14709 ++ "Filter": "edge=1,filter_band3=40",
14710 + "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14711 + "MetricName": "freq_ge_4000mhz_cycles %",
14712 + "PerPkg": "1",
14713 +diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
14714 +index 16034bfd06dd..8755693d86c6 100644
14715 +--- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
14716 ++++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
14717 +@@ -187,7 +187,7 @@
14718 + "Counter": "0,1,2,3",
14719 + "EventCode": "0xb",
14720 + "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
14721 +- "Filter": "filter_band0=1200",
14722 ++ "Filter": "filter_band0=12",
14723 + "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14724 + "MetricName": "freq_ge_1200mhz_cycles %",
14725 + "PerPkg": "1",
14726 +@@ -198,7 +198,7 @@
14727 + "Counter": "0,1,2,3",
14728 + "EventCode": "0xc",
14729 + "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
14730 +- "Filter": "filter_band1=2000",
14731 ++ "Filter": "filter_band1=20",
14732 + "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14733 + "MetricName": "freq_ge_2000mhz_cycles %",
14734 + "PerPkg": "1",
14735 +@@ -209,7 +209,7 @@
14736 + "Counter": "0,1,2,3",
14737 + "EventCode": "0xd",
14738 + "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
14739 +- "Filter": "filter_band2=3000",
14740 ++ "Filter": "filter_band2=30",
14741 + "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14742 + "MetricName": "freq_ge_3000mhz_cycles %",
14743 + "PerPkg": "1",
14744 +@@ -220,7 +220,7 @@
14745 + "Counter": "0,1,2,3",
14746 + "EventCode": "0xe",
14747 + "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
14748 +- "Filter": "filter_band3=4000",
14749 ++ "Filter": "filter_band3=40",
14750 + "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14751 + "MetricName": "freq_ge_4000mhz_cycles %",
14752 + "PerPkg": "1",
14753 +@@ -231,7 +231,7 @@
14754 + "Counter": "0,1,2,3",
14755 + "EventCode": "0xb",
14756 + "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
14757 +- "Filter": "edge=1,filter_band0=1200",
14758 ++ "Filter": "edge=1,filter_band0=12",
14759 + "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14760 + "MetricName": "freq_ge_1200mhz_cycles %",
14761 + "PerPkg": "1",
14762 +@@ -242,7 +242,7 @@
14763 + "Counter": "0,1,2,3",
14764 + "EventCode": "0xc",
14765 + "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
14766 +- "Filter": "edge=1,filter_band1=2000",
14767 ++ "Filter": "edge=1,filter_band1=20",
14768 + "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14769 + "MetricName": "freq_ge_2000mhz_cycles %",
14770 + "PerPkg": "1",
14771 +@@ -253,7 +253,7 @@
14772 + "Counter": "0,1,2,3",
14773 + "EventCode": "0xd",
14774 + "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
14775 +- "Filter": "edge=1,filter_band2=4000",
14776 ++ "Filter": "edge=1,filter_band2=30",
14777 + "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14778 + "MetricName": "freq_ge_3000mhz_cycles %",
14779 + "PerPkg": "1",
14780 +@@ -264,7 +264,7 @@
14781 + "Counter": "0,1,2,3",
14782 + "EventCode": "0xe",
14783 + "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
14784 +- "Filter": "edge=1,filter_band3=4000",
14785 ++ "Filter": "edge=1,filter_band3=40",
14786 + "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
14787 + "MetricName": "freq_ge_4000mhz_cycles %",
14788 + "PerPkg": "1",
14789 +diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
14790 +index 3013ac8f83d0..cab7b0aea6ea 100755
14791 +--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
14792 ++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
14793 +@@ -48,7 +48,7 @@ trace_libc_inet_pton_backtrace() {
14794 + *)
14795 + eventattr='max-stack=3'
14796 + echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
14797 +- echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
14798 ++ echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
14799 + ;;
14800 + esac
14801 +
14802 +diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
14803 +index 0c8ecf0c78a4..6f3db78efe39 100644
14804 +--- a/tools/perf/util/event.c
14805 ++++ b/tools/perf/util/event.c
14806 +@@ -1074,6 +1074,7 @@ void *cpu_map_data__alloc(struct cpu_map *map, size_t *size, u16 *type, int *max
14807 + }
14808 +
14809 + *size += sizeof(struct cpu_map_data);
14810 ++ *size = PERF_ALIGN(*size, sizeof(u64));
14811 + return zalloc(*size);
14812 + }
14813 +
14814 +diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
14815 +index 6324afba8fdd..86ad1389ff5a 100644
14816 +--- a/tools/perf/util/evsel.c
14817 ++++ b/tools/perf/util/evsel.c
14818 +@@ -1078,6 +1078,9 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
14819 + attr->exclude_user = 1;
14820 + }
14821 +
14822 ++ if (evsel->own_cpus)
14823 ++ evsel->attr.read_format |= PERF_FORMAT_ID;
14824 ++
14825 + /*
14826 + * Apply event specific term settings,
14827 + * it overloads any global configuration.
14828 +diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
14829 +index 3ba6a1742f91..02580f3ded1a 100644
14830 +--- a/tools/perf/util/pmu.c
14831 ++++ b/tools/perf/util/pmu.c
14832 +@@ -936,13 +936,14 @@ static void pmu_format_value(unsigned long *format, __u64 value, __u64 *v,
14833 +
14834 + static __u64 pmu_format_max_value(const unsigned long *format)
14835 + {
14836 +- __u64 w = 0;
14837 +- int fbit;
14838 +-
14839 +- for_each_set_bit(fbit, format, PERF_PMU_FORMAT_BITS)
14840 +- w |= (1ULL << fbit);
14841 ++ int w;
14842 +
14843 +- return w;
14844 ++ w = bitmap_weight(format, PERF_PMU_FORMAT_BITS);
14845 ++ if (!w)
14846 ++ return 0;
14847 ++ if (w < 64)
14848 ++ return (1ULL << w) - 1;
14849 ++ return -1;
14850 + }
14851 +
14852 + /*
14853 +diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
14854 +index 09d6746e6ec8..e767c4a9d4d2 100644
14855 +--- a/tools/perf/util/srcline.c
14856 ++++ b/tools/perf/util/srcline.c
14857 +@@ -85,6 +85,9 @@ static struct symbol *new_inline_sym(struct dso *dso,
14858 + struct symbol *inline_sym;
14859 + char *demangled = NULL;
14860 +
14861 ++ if (!funcname)
14862 ++ funcname = "??";
14863 ++
14864 + if (dso) {
14865 + demangled = dso__demangle_sym(dso, 0, funcname);
14866 + if (demangled)
14867 +diff --git a/tools/perf/util/strbuf.c b/tools/perf/util/strbuf.c
14868 +index 3d1cf5bf7f18..9005fbe0780e 100644
14869 +--- a/tools/perf/util/strbuf.c
14870 ++++ b/tools/perf/util/strbuf.c
14871 +@@ -98,19 +98,25 @@ static int strbuf_addv(struct strbuf *sb, const char *fmt, va_list ap)
14872 +
14873 + va_copy(ap_saved, ap);
14874 + len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap);
14875 +- if (len < 0)
14876 ++ if (len < 0) {
14877 ++ va_end(ap_saved);
14878 + return len;
14879 ++ }
14880 + if (len > strbuf_avail(sb)) {
14881 + ret = strbuf_grow(sb, len);
14882 +- if (ret)
14883 ++ if (ret) {
14884 ++ va_end(ap_saved);
14885 + return ret;
14886 ++ }
14887 + len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved);
14888 + va_end(ap_saved);
14889 + if (len > strbuf_avail(sb)) {
14890 + pr_debug("this should not happen, your vsnprintf is broken");
14891 ++ va_end(ap_saved);
14892 + return -EINVAL;
14893 + }
14894 + }
14895 ++ va_end(ap_saved);
14896 + return strbuf_setlen(sb, sb->len + len);
14897 + }
14898 +
14899 +diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
14900 +index 7b0ca7cbb7de..8ad8e755127b 100644
14901 +--- a/tools/perf/util/trace-event-info.c
14902 ++++ b/tools/perf/util/trace-event-info.c
14903 +@@ -531,12 +531,14 @@ struct tracing_data *tracing_data_get(struct list_head *pattrs,
14904 + "/tmp/perf-XXXXXX");
14905 + if (!mkstemp(tdata->temp_file)) {
14906 + pr_debug("Can't make temp file");
14907 ++ free(tdata);
14908 + return NULL;
14909 + }
14910 +
14911 + temp_fd = open(tdata->temp_file, O_RDWR);
14912 + if (temp_fd < 0) {
14913 + pr_debug("Can't read '%s'", tdata->temp_file);
14914 ++ free(tdata);
14915 + return NULL;
14916 + }
14917 +
14918 +diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
14919 +index 40b425949aa3..2d50e4384c72 100644
14920 +--- a/tools/perf/util/trace-event-read.c
14921 ++++ b/tools/perf/util/trace-event-read.c
14922 +@@ -349,9 +349,12 @@ static int read_event_files(struct pevent *pevent)
14923 + for (x=0; x < count; x++) {
14924 + size = read8(pevent);
14925 + ret = read_event_file(pevent, sys, size);
14926 +- if (ret)
14927 ++ if (ret) {
14928 ++ free(sys);
14929 + return ret;
14930 ++ }
14931 + }
14932 ++ free(sys);
14933 + }
14934 + return 0;
14935 + }
14936 +diff --git a/tools/power/cpupower/utils/cpufreq-info.c b/tools/power/cpupower/utils/cpufreq-info.c
14937 +index df43cd45d810..ccd08dd00996 100644
14938 +--- a/tools/power/cpupower/utils/cpufreq-info.c
14939 ++++ b/tools/power/cpupower/utils/cpufreq-info.c
14940 +@@ -200,6 +200,8 @@ static int get_boost_mode(unsigned int cpu)
14941 + printf(_(" Boost States: %d\n"), b_states);
14942 + printf(_(" Total States: %d\n"), pstate_no);
14943 + for (i = 0; i < pstate_no; i++) {
14944 ++ if (!pstates[i])
14945 ++ continue;
14946 + if (i < b_states)
14947 + printf(_(" Pstate-Pb%d: %luMHz (boost state)"
14948 + "\n"), i, pstates[i]);
14949 +diff --git a/tools/power/cpupower/utils/helpers/amd.c b/tools/power/cpupower/utils/helpers/amd.c
14950 +index bb41cdd0df6b..9607ada5b29a 100644
14951 +--- a/tools/power/cpupower/utils/helpers/amd.c
14952 ++++ b/tools/power/cpupower/utils/helpers/amd.c
14953 +@@ -33,7 +33,7 @@ union msr_pstate {
14954 + unsigned vid:8;
14955 + unsigned iddval:8;
14956 + unsigned idddiv:2;
14957 +- unsigned res1:30;
14958 ++ unsigned res1:31;
14959 + unsigned en:1;
14960 + } fam17h_bits;
14961 + unsigned long long val;
14962 +@@ -119,6 +119,11 @@ int decode_pstates(unsigned int cpu, unsigned int cpu_family,
14963 + }
14964 + if (read_msr(cpu, MSR_AMD_PSTATE + i, &pstate.val))
14965 + return -1;
14966 ++ if ((cpu_family == 0x17) && (!pstate.fam17h_bits.en))
14967 ++ continue;
14968 ++ else if (!pstate.bits.en)
14969 ++ continue;
14970 ++
14971 + pstates[i] = get_cof(cpu_family, pstate);
14972 + }
14973 + *no = i;
14974 +diff --git a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
14975 +index 1893d0f59ad7..059b7e81b922 100755
14976 +--- a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
14977 ++++ b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
14978 +@@ -143,6 +143,10 @@ echo "Import devices from localhost - should work"
14979 + src/usbip attach -r localhost -b $busid;
14980 + echo "=============================================================="
14981 +
14982 ++# Wait for sysfs file to be updated. Without this sleep, usbip port
14983 ++# shows no imported devices.
14984 ++sleep 3;
14985 ++
14986 + echo "List imported devices - expect to see imported devices";
14987 + src/usbip port;
14988 + echo "=============================================================="
14989 +diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
14990 +index cef11377dcbd..c604438df13b 100644
14991 +--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
14992 ++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
14993 +@@ -35,18 +35,18 @@ fi
14994 +
14995 + reset_trigger
14996 +
14997 +-echo "Test create synthetic event with an error"
14998 +-echo 'wakeup_latency u64 lat pid_t pid char' > synthetic_events > /dev/null
14999 ++echo "Test remove synthetic event"
15000 ++echo '!wakeup_latency u64 lat pid_t pid char comm[16]' >> synthetic_events
15001 + if [ -d events/synthetic/wakeup_latency ]; then
15002 +- fail "Created wakeup_latency synthetic event with an invalid format"
15003 ++ fail "Failed to delete wakeup_latency synthetic event"
15004 + fi
15005 +
15006 + reset_trigger
15007 +
15008 +-echo "Test remove synthetic event"
15009 +-echo '!wakeup_latency u64 lat pid_t pid char comm[16]' > synthetic_events
15010 ++echo "Test create synthetic event with an error"
15011 ++echo 'wakeup_latency u64 lat pid_t pid char' > synthetic_events > /dev/null
15012 + if [ -d events/synthetic/wakeup_latency ]; then
15013 +- fail "Failed to delete wakeup_latency synthetic event"
15014 ++ fail "Created wakeup_latency synthetic event with an invalid format"
15015 + fi
15016 +
15017 + do_reset
15018 +diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
15019 +new file mode 100644
15020 +index 000000000000..88e6c3f43006
15021 +--- /dev/null
15022 ++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
15023 +@@ -0,0 +1,80 @@
15024 ++#!/bin/sh
15025 ++# SPDX-License-Identifier: GPL-2.0
15026 ++# description: event trigger - test synthetic_events syntax parser
15027 ++
15028 ++do_reset() {
15029 ++ reset_trigger
15030 ++ echo > set_event
15031 ++ clear_trace
15032 ++}
15033 ++
15034 ++fail() { #msg
15035 ++ do_reset
15036 ++ echo $1
15037 ++ exit_fail
15038 ++}
15039 ++
15040 ++if [ ! -f set_event ]; then
15041 ++ echo "event tracing is not supported"
15042 ++ exit_unsupported
15043 ++fi
15044 ++
15045 ++if [ ! -f synthetic_events ]; then
15046 ++ echo "synthetic event is not supported"
15047 ++ exit_unsupported
15048 ++fi
15049 ++
15050 ++reset_tracer
15051 ++do_reset
15052 ++
15053 ++echo "Test synthetic_events syntax parser"
15054 ++
15055 ++echo > synthetic_events
15056 ++
15057 ++# synthetic event must have a field
15058 ++! echo "myevent" >> synthetic_events
15059 ++echo "myevent u64 var1" >> synthetic_events
15060 ++
15061 ++# synthetic event must be found in synthetic_events
15062 ++grep "myevent[[:space:]]u64 var1" synthetic_events
15063 ++
15064 ++# it is not possible to add same name event
15065 ++! echo "myevent u64 var2" >> synthetic_events
15066 ++
15067 ++# Non-append open will cleanup all events and add new one
15068 ++echo "myevent u64 var2" > synthetic_events
15069 ++
15070 ++# multiple fields with different spaces
15071 ++echo "myevent u64 var1; u64 var2;" > synthetic_events
15072 ++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
15073 ++echo "myevent u64 var1 ; u64 var2 ;" > synthetic_events
15074 ++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
15075 ++echo "myevent u64 var1 ;u64 var2" > synthetic_events
15076 ++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
15077 ++
15078 ++# test field types
15079 ++echo "myevent u32 var" > synthetic_events
15080 ++echo "myevent u16 var" > synthetic_events
15081 ++echo "myevent u8 var" > synthetic_events
15082 ++echo "myevent s64 var" > synthetic_events
15083 ++echo "myevent s32 var" > synthetic_events
15084 ++echo "myevent s16 var" > synthetic_events
15085 ++echo "myevent s8 var" > synthetic_events
15086 ++
15087 ++echo "myevent char var" > synthetic_events
15088 ++echo "myevent int var" > synthetic_events
15089 ++echo "myevent long var" > synthetic_events
15090 ++echo "myevent pid_t var" > synthetic_events
15091 ++
15092 ++echo "myevent unsigned char var" > synthetic_events
15093 ++echo "myevent unsigned int var" > synthetic_events
15094 ++echo "myevent unsigned long var" > synthetic_events
15095 ++grep "myevent[[:space:]]unsigned long var" synthetic_events
15096 ++
15097 ++# test string type
15098 ++echo "myevent char var[10]" > synthetic_events
15099 ++grep "myevent[[:space:]]char\[10\] var" synthetic_events
15100 ++
15101 ++do_reset
15102 ++
15103 ++exit 0
15104 +diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
15105 +index cad14cd0ea92..b5277106df1f 100644
15106 +--- a/tools/testing/selftests/net/reuseport_bpf.c
15107 ++++ b/tools/testing/selftests/net/reuseport_bpf.c
15108 +@@ -437,14 +437,19 @@ void enable_fastopen(void)
15109 + }
15110 + }
15111 +
15112 +-static struct rlimit rlim_old, rlim_new;
15113 ++static struct rlimit rlim_old;
15114 +
15115 + static __attribute__((constructor)) void main_ctor(void)
15116 + {
15117 + getrlimit(RLIMIT_MEMLOCK, &rlim_old);
15118 +- rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
15119 +- rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
15120 +- setrlimit(RLIMIT_MEMLOCK, &rlim_new);
15121 ++
15122 ++ if (rlim_old.rlim_cur != RLIM_INFINITY) {
15123 ++ struct rlimit rlim_new;
15124 ++
15125 ++ rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
15126 ++ rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
15127 ++ setrlimit(RLIMIT_MEMLOCK, &rlim_new);
15128 ++ }
15129 + }
15130 +
15131 + static __attribute__((destructor)) void main_dtor(void)
15132 +diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
15133 +index 327fa943c7f3..dbdffa2e2c82 100644
15134 +--- a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
15135 ++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
15136 +@@ -67,8 +67,8 @@ trans:
15137 + "3: ;"
15138 + : [res] "=r" (result), [texasr] "=r" (texasr)
15139 + : [gpr_1]"i"(GPR_1), [gpr_2]"i"(GPR_2), [gpr_4]"i"(GPR_4),
15140 +- [sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "r" (&a),
15141 +- [flt_2] "r" (&b), [flt_4] "r" (&d)
15142 ++ [sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "b" (&a),
15143 ++ [flt_4] "b" (&d)
15144 + : "memory", "r5", "r6", "r7",
15145 + "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
15146 + "r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
15147 +diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
15148 +index 04e554cae3a2..f8c2b9e7c19c 100644
15149 +--- a/virt/kvm/arm/arm.c
15150 ++++ b/virt/kvm/arm/arm.c
15151 +@@ -1244,8 +1244,6 @@ static void cpu_init_hyp_mode(void *dummy)
15152 +
15153 + __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
15154 + __cpu_init_stage2();
15155 +-
15156 +- kvm_arm_init_debug();
15157 + }
15158 +
15159 + static void cpu_hyp_reset(void)
15160 +@@ -1269,6 +1267,8 @@ static void cpu_hyp_reinit(void)
15161 + cpu_init_hyp_mode(NULL);
15162 + }
15163 +
15164 ++ kvm_arm_init_debug();
15165 ++
15166 + if (vgic_present)
15167 + kvm_vgic_init_cpu_hardware();
15168 + }
15169 +diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
15170 +index fd8c88463928..fbba603caf1b 100644
15171 +--- a/virt/kvm/arm/mmu.c
15172 ++++ b/virt/kvm/arm/mmu.c
15173 +@@ -1201,8 +1201,14 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
15174 + {
15175 + kvm_pfn_t pfn = *pfnp;
15176 + gfn_t gfn = *ipap >> PAGE_SHIFT;
15177 ++ struct page *page = pfn_to_page(pfn);
15178 +
15179 +- if (PageTransCompoundMap(pfn_to_page(pfn))) {
15180 ++ /*
15181 ++ * PageTransCompoungMap() returns true for THP and
15182 ++ * hugetlbfs. Make sure the adjustment is done only for THP
15183 ++ * pages.
15184 ++ */
15185 ++ if (!PageHuge(page) && PageTransCompoundMap(page)) {
15186 + unsigned long mask;
15187 + /*
15188 + * The address we faulted on is backed by a transparent huge