From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
Date: Thu, 22 Feb 2018 23:23:45
Message-Id: 1519341805.d603d53d71f6a32e75ad20ea841b50b6b9c15bf7.mpagano@gentoo
commit: d603d53d71f6a32e75ad20ea841b50b6b9c15bf7
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 22 23:23:25 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb 22 23:23:25 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d603d53d

Linux patch 4.14.21

0000_README | 4 +
1020_linux-4.14.21.patch | 11566 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 11570 insertions(+)

diff --git a/0000_README b/0000_README
index 7fd6d67..f9abc2d 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch: 1019_linux-4.14.20.patch
From: http://www.kernel.org
Desc: Linux 4.14.20

+Patch: 1020_linux-4.14.21.patch
+From: http://www.kernel.org
+Desc: Linux 4.14.21
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1020_linux-4.14.21.patch b/1020_linux-4.14.21.patch
new file mode 100644
index 0000000..f26afa9
--- /dev/null
+++ b/1020_linux-4.14.21.patch
@@ -0,0 +1,11566 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index c76afdcafbef..fb385af482ff 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1841,13 +1841,6 @@
+ Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
+ the default is off.
+
+- kmemcheck= [X86] Boot-time kmemcheck enable/disable/one-shot mode
+- Valid arguments: 0, 1, 2
+- kmemcheck=0 (disabled)
+- kmemcheck=1 (enabled)
+- kmemcheck=2 (one-shot mode)
+- Default: 2 (one-shot mode)
+-
+ kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
+ Default is 0 (don't ignore, but inject #GP)
+
+diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
+index a81787cd47d7..e313925fb0fa 100644
+--- a/Documentation/dev-tools/index.rst
++++ b/Documentation/dev-tools/index.rst
+@@ -21,7 +21,6 @@ whole; patches welcome!
+ kasan
+ ubsan
+ kmemleak
+- kmemcheck
+ gdb-kernel-debugging
+ kgdb
+ kselftest
+diff --git a/Documentation/dev-tools/kmemcheck.rst b/Documentation/dev-tools/kmemcheck.rst
+deleted file mode 100644
+index 7f3d1985de74..000000000000
+--- a/Documentation/dev-tools/kmemcheck.rst
++++ /dev/null
+@@ -1,733 +0,0 @@
+-Getting started with kmemcheck
+-==============================
+-
+-Vegard Nossum <vegardno@×××××××.no>
+-
+-
+-Introduction
+-------------
+-
+-kmemcheck is a debugging feature for the Linux Kernel. More specifically, it
+-is a dynamic checker that detects and warns about some uses of uninitialized
+-memory.
+-
+-Userspace programmers might be familiar with Valgrind's memcheck. The main
+-difference between memcheck and kmemcheck is that memcheck works for userspace
+-programs only, and kmemcheck works for the kernel only. The implementations
+-are of course vastly different. Because of this, kmemcheck is not as accurate
+-as memcheck, but it turns out to be good enough in practice to discover real
+-programmer errors that the compiler is not able to find through static
+-analysis.
+-
+-Enabling kmemcheck on a kernel will probably slow it down to the extent that
+-the machine will not be usable for normal workloads such as e.g. an
+-interactive desktop. kmemcheck will also cause the kernel to use about twice
+-as much memory as normal. For this reason, kmemcheck is strictly a debugging
+-feature.
+-
+-
+-Downloading
+------------
+-
+-As of version 2.6.31-rc1, kmemcheck is included in the mainline kernel.
+-
+-
+-Configuring and compiling
+--------------------------
+-
+-kmemcheck only works for the x86 (both 32- and 64-bit) platform. A number of
+-configuration variables must have specific settings in order for the kmemcheck
+-menu to even appear in "menuconfig". These are:
+-
+-- ``CONFIG_CC_OPTIMIZE_FOR_SIZE=n``
+- This option is located under "General setup" / "Optimize for size".
+-
+- Without this, gcc will use certain optimizations that usually lead to
+- false positive warnings from kmemcheck. An example of this is a 16-bit
+- field in a struct, where gcc may load 32 bits, then discard the upper
+- 16 bits. kmemcheck sees only the 32-bit load, and may trigger a
+- warning for the upper 16 bits (if they're uninitialized).
+-
+-- ``CONFIG_SLAB=y`` or ``CONFIG_SLUB=y``
+- This option is located under "General setup" / "Choose SLAB
+- allocator".
+-
+-- ``CONFIG_FUNCTION_TRACER=n``
+- This option is located under "Kernel hacking" / "Tracers" / "Kernel
+- Function Tracer"
+-
+- When function tracing is compiled in, gcc emits a call to another
+- function at the beginning of every function. This means that when the
+- page fault handler is called, the ftrace framework will be called
+- before kmemcheck has had a chance to handle the fault. If ftrace then
+- modifies memory that was tracked by kmemcheck, the result is an
+- endless recursive page fault.
+-
+-- ``CONFIG_DEBUG_PAGEALLOC=n``
+- This option is located under "Kernel hacking" / "Memory Debugging"
+- / "Debug page memory allocations".
+-
+-In addition, I highly recommend turning on ``CONFIG_DEBUG_INFO=y``. This is also
+-located under "Kernel hacking". With this, you will be able to get line number
+-information from the kmemcheck warnings, which is extremely valuable in
+-debugging a problem. This option is not mandatory, however, because it slows
+-down the compilation process and produces a much bigger kernel image.
+-
+-Now the kmemcheck menu should be visible (under "Kernel hacking" / "Memory
+-Debugging" / "kmemcheck: trap use of uninitialized memory"). Here follows
+-a description of the kmemcheck configuration variables:
+-
+-- ``CONFIG_KMEMCHECK``
+- This must be enabled in order to use kmemcheck at all...
+-
+-- ``CONFIG_KMEMCHECK_``[``DISABLED`` | ``ENABLED`` | ``ONESHOT``]``_BY_DEFAULT``
+- This option controls the status of kmemcheck at boot-time. "Enabled"
+- will enable kmemcheck right from the start, "disabled" will boot the
+- kernel as normal (but with the kmemcheck code compiled in, so it can
+- be enabled at run-time after the kernel has booted), and "one-shot" is
+- a special mode which will turn kmemcheck off automatically after
+- detecting the first use of uninitialized memory.
+-
+- If you are using kmemcheck to actively debug a problem, then you
+- probably want to choose "enabled" here.
+-
+- The one-shot mode is mostly useful in automated test setups because it
+- can prevent floods of warnings and increase the chances of the machine
+- surviving in case something is really wrong. In other cases, the one-
+- shot mode could actually be counter-productive because it would turn
+- itself off at the very first error -- in the case of a false positive
+- too -- and this would come in the way of debugging the specific
+- problem you were interested in.
+-
+- If you would like to use your kernel as normal, but with a chance to
+- enable kmemcheck in case of some problem, it might be a good idea to
+- choose "disabled" here. When kmemcheck is disabled, most of the run-
+- time overhead is not incurred, and the kernel will be almost as fast
+- as normal.
+-
+-- ``CONFIG_KMEMCHECK_QUEUE_SIZE``
+- Select the maximum number of error reports to store in an internal
+- (fixed-size) buffer. Since errors can occur virtually anywhere and in
+- any context, we need a temporary storage area which is guaranteed not
+- to generate any other page faults when accessed. The queue will be
+- emptied as soon as a tasklet may be scheduled. If the queue is full,
+- new error reports will be lost.
+-
+- The default value of 64 is probably fine. If some code produces more
+- than 64 errors within an irqs-off section, then the code is likely to
+- produce many, many more, too, and these additional reports seldom give
+- any more information (the first report is usually the most valuable
+- anyway).
+-
+- This number might have to be adjusted if you are not using serial
+- console or similar to capture the kernel log. If you are using the
+- "dmesg" command to save the log, then getting a lot of kmemcheck
+- warnings might overflow the kernel log itself, and the earlier reports
+- will get lost in that way instead. Try setting this to 10 or so on
+- such a setup.
+-
+-- ``CONFIG_KMEMCHECK_SHADOW_COPY_SHIFT``
+- Select the number of shadow bytes to save along with each entry of the
+- error-report queue. These bytes indicate what parts of an allocation
+- are initialized, uninitialized, etc. and will be displayed when an
+- error is detected to help the debugging of a particular problem.
+-
+- The number entered here is actually the logarithm of the number of
+- bytes that will be saved. So if you pick for example 5 here, kmemcheck
+- will save 2^5 = 32 bytes.
+-
+- The default value should be fine for debugging most problems. It also
+- fits nicely within 80 columns.
+-
+-- ``CONFIG_KMEMCHECK_PARTIAL_OK``
+- This option (when enabled) works around certain GCC optimizations that
+- produce 32-bit reads from 16-bit variables where the upper 16 bits are
+- thrown away afterwards.
+-
+- The default value (enabled) is recommended. This may of course hide
+- some real errors, but disabling it would probably produce a lot of
+- false positives.
+-
+-- ``CONFIG_KMEMCHECK_BITOPS_OK``
+- This option silences warnings that would be generated for bit-field
+- accesses where not all the bits are initialized at the same time. This
+- may also hide some real bugs.
+-
+- This option is probably obsolete, or it should be replaced with
+- the kmemcheck-/bitfield-annotations for the code in question. The
+- default value is therefore fine.
+-
+-Now compile the kernel as usual.
+-
+-
+-How to use
+-----------
+-
+-Booting
+-~~~~~~~
+-
+-First some information about the command-line options. There is only one
+-option specific to kmemcheck, and this is called "kmemcheck". It can be used
+-to override the default mode as chosen by the ``CONFIG_KMEMCHECK_*_BY_DEFAULT``
+-option. Its possible settings are:
+-
+-- ``kmemcheck=0`` (disabled)
+-- ``kmemcheck=1`` (enabled)
+-- ``kmemcheck=2`` (one-shot mode)
+-
+-If SLUB debugging has been enabled in the kernel, it may take precedence over
+-kmemcheck in such a way that the slab caches which are under SLUB debugging
+-will not be tracked by kmemcheck. In order to ensure that this doesn't happen
+-(even though it shouldn't by default), use SLUB's boot option ``slub_debug``,
+-like this: ``slub_debug=-``
+-
+-In fact, this option may also be used for fine-grained control over SLUB vs.
+-kmemcheck. For example, if the command line includes
+-``kmemcheck=1 slub_debug=,dentry``, then SLUB debugging will be used only
+-for the "dentry" slab cache, and with kmemcheck tracking all the other
+-caches. This is advanced usage, however, and is not generally recommended.
+-
+-
+-Run-time enable/disable
+-~~~~~~~~~~~~~~~~~~~~~~~
+-
+-When the kernel has booted, it is possible to enable or disable kmemcheck at
+-run-time. WARNING: This feature is still experimental and may cause false
+-positive warnings to appear. Therefore, try not to use this. If you find that
+-it doesn't work properly (e.g. you see an unreasonable amount of warnings), I
+-will be happy to take bug reports.
+-
+-Use the file ``/proc/sys/kernel/kmemcheck`` for this purpose, e.g.::
+-
+- $ echo 0 > /proc/sys/kernel/kmemcheck # disables kmemcheck
+-
+-The numbers are the same as for the ``kmemcheck=`` command-line option.
+-
+-
+-Debugging
+-~~~~~~~~~
+-
+-A typical report will look something like this::
+-
+- WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88003e4a2024)
+- 80000000000000000000000000000000000000000088ffff0000000000000000
+- i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
+- ^
+-
+- Pid: 1856, comm: ntpdate Not tainted 2.6.29-rc5 #264 945P-A
+- RIP: 0010:[<ffffffff8104ede8>] [<ffffffff8104ede8>] __dequeue_signal+0xc8/0x190
+- RSP: 0018:ffff88003cdf7d98 EFLAGS: 00210002
+- RAX: 0000000000000030 RBX: ffff88003d4ea968 RCX: 0000000000000009
+- RDX: ffff88003e5d6018 RSI: ffff88003e5d6024 RDI: ffff88003cdf7e84
+- RBP: ffff88003cdf7db8 R08: ffff88003e5d6000 R09: 0000000000000000
+- R10: 0000000000000080 R11: 0000000000000000 R12: 000000000000000e
+- R13: ffff88003cdf7e78 R14: ffff88003d530710 R15: ffff88003d5a98c8
+- FS: 0000000000000000(0000) GS:ffff880001982000(0063) knlGS:00000
+- CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
+- CR2: ffff88003f806ea0 CR3: 000000003c036000 CR4: 00000000000006a0
+- DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+- DR3: 0000000000000000 DR6: 00000000ffff4ff0 DR7: 0000000000000400
+- [<ffffffff8104f04e>] dequeue_signal+0x8e/0x170
+- [<ffffffff81050bd8>] get_signal_to_deliver+0x98/0x390
+- [<ffffffff8100b87d>] do_notify_resume+0xad/0x7d0
+- [<ffffffff8100c7b5>] int_signal+0x12/0x17
+- [<ffffffffffffffff>] 0xffffffffffffffff
+-
+-The single most valuable information in this report is the RIP (or EIP on 32-
+-bit) value. This will help us pinpoint exactly which instruction that caused
+-the warning.
+-
+-If your kernel was compiled with ``CONFIG_DEBUG_INFO=y``, then all we have to do
+-is give this address to the addr2line program, like this::
+-
+- $ addr2line -e vmlinux -i ffffffff8104ede8
+- arch/x86/include/asm/string_64.h:12
+- include/asm-generic/siginfo.h:287
+- kernel/signal.c:380
+- kernel/signal.c:410
+-
+-The "``-e vmlinux``" tells addr2line which file to look in. **IMPORTANT:**
+-This must be the vmlinux of the kernel that produced the warning in the
+-first place! If not, the line number information will almost certainly be
+-wrong.
+-
+-The "``-i``" tells addr2line to also print the line numbers of inlined
+-functions. In this case, the flag was very important, because otherwise,
+-it would only have printed the first line, which is just a call to
+-``memcpy()``, which could be called from a thousand places in the kernel, and
+-is therefore not very useful. These inlined functions would not show up in
+-the stack trace above, simply because the kernel doesn't load the extra
+-debugging information. This technique can of course be used with ordinary
+-kernel oopses as well.
+-
+-In this case, it's the caller of ``memcpy()`` that is interesting, and it can be
+-found in ``include/asm-generic/siginfo.h``, line 287::
+-
+- 281 static inline void copy_siginfo(struct siginfo *to, struct siginfo *from)
+- 282 {
+- 283 if (from->si_code < 0)
+- 284 memcpy(to, from, sizeof(*to));
+- 285 else
+- 286 /* _sigchld is currently the largest know union member */
+- 287 memcpy(to, from, __ARCH_SI_PREAMBLE_SIZE + sizeof(from->_sifields._sigchld));
+- 288 }
+-
+-Since this was a read (kmemcheck usually warns about reads only, though it can
+-warn about writes to unallocated or freed memory as well), it was probably the
+-"from" argument which contained some uninitialized bytes. Following the chain
+-of calls, we move upwards to see where "from" was allocated or initialized,
+-``kernel/signal.c``, line 380::
+-
+- 359 static void collect_signal(int sig, struct sigpending *list, siginfo_t *info)
+- 360 {
+- ...
+- 367 list_for_each_entry(q, &list->list, list) {
+- 368 if (q->info.si_signo == sig) {
+- 369 if (first)
+- 370 goto still_pending;
+- 371 first = q;
+- ...
+- 377 if (first) {
+- 378 still_pending:
+- 379 list_del_init(&first->list);
+- 380 copy_siginfo(info, &first->info);
+- 381 __sigqueue_free(first);
+- ...
+- 392 }
+- 393 }
+-
+-Here, it is ``&first->info`` that is being passed on to ``copy_siginfo()``. The
+-variable ``first`` was found on a list -- passed in as the second argument to
+-``collect_signal()``. We continue our journey through the stack, to figure out
+-where the item on "list" was allocated or initialized. We move to line 410::
+-
+- 395 static int __dequeue_signal(struct sigpending *pending, sigset_t *mask,
+- 396 siginfo_t *info)
+- 397 {
+- ...
+- 410 collect_signal(sig, pending, info);
+- ...
+- 414 }
+-
+-Now we need to follow the ``pending`` pointer, since that is being passed on to
+-``collect_signal()`` as ``list``. At this point, we've run out of lines from the
+-"addr2line" output. Not to worry, we just paste the next addresses from the
+-kmemcheck stack dump, i.e.::
+-
+- [<ffffffff8104f04e>] dequeue_signal+0x8e/0x170
+- [<ffffffff81050bd8>] get_signal_to_deliver+0x98/0x390
+- [<ffffffff8100b87d>] do_notify_resume+0xad/0x7d0
+- [<ffffffff8100c7b5>] int_signal+0x12/0x17
+-
+- $ addr2line -e vmlinux -i ffffffff8104f04e ffffffff81050bd8 \
+- ffffffff8100b87d ffffffff8100c7b5
+- kernel/signal.c:446
+- kernel/signal.c:1806
+- arch/x86/kernel/signal.c:805
+- arch/x86/kernel/signal.c:871
+- arch/x86/kernel/entry_64.S:694
+-
+-Remember that since these addresses were found on the stack and not as the
+-RIP value, they actually point to the _next_ instruction (they are return
+-addresses). This becomes obvious when we look at the code for line 446::
+-
+- 422 int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info)
+- 423 {
+- ...
+- 431 signr = __dequeue_signal(&tsk->signal->shared_pending,
+- 432 mask, info);
+- 433 /*
+- 434 * itimer signal ?
+- 435 *
+- 436 * itimers are process shared and we restart periodic
+- 437 * itimers in the signal delivery path to prevent DoS
+- 438 * attacks in the high resolution timer case. This is
+- 439 * compliant with the old way of self restarting
+- 440 * itimers, as the SIGALRM is a legacy signal and only
+- 441 * queued once. Changing the restart behaviour to
+- 442 * restart the timer in the signal dequeue path is
+- 443 * reducing the timer noise on heavy loaded !highres
+- 444 * systems too.
+- 445 */
+- 446 if (unlikely(signr == SIGALRM)) {
+- ...
+- 489 }
+-
+-So instead of looking at 446, we should be looking at 431, which is the line
+-that executes just before 446. Here we see that what we are looking for is
+-``&tsk->signal->shared_pending``.
+-
+-Our next task is now to figure out which function that puts items on this
+-``shared_pending`` list. A crude, but efficient tool, is ``git grep``::
+-
+- $ git grep -n 'shared_pending' kernel/
+- ...
+- kernel/signal.c:828: pending = group ? &t->signal->shared_pending : &t->pending;
+- kernel/signal.c:1339: pending = group ? &t->signal->shared_pending : &t->pending;
+- ...
+-
+-There were more results, but none of them were related to list operations,
+-and these were the only assignments. We inspect the line numbers more closely
+-and find that this is indeed where items are being added to the list::
+-
+- 816 static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
+- 817 int group)
+- 818 {
+- ...
+- 828 pending = group ? &t->signal->shared_pending : &t->pending;
+- ...
+- 851 q = __sigqueue_alloc(t, GFP_ATOMIC, (sig < SIGRTMIN &&
+- 852 (is_si_special(info) ||
+- 853 info->si_code >= 0)));
+- 854 if (q) {
+- 855 list_add_tail(&q->list, &pending->list);
+- ...
+- 890 }
+-
+-and::
+-
+- 1309 int send_sigqueue(struct sigqueue *q, struct task_struct *t, int group)
+- 1310 {
+- ....
+- 1339 pending = group ? &t->signal->shared_pending : &t->pending;
+- 1340 list_add_tail(&q->list, &pending->list);
+- ....
+- 1347 }
+-
+-In the first case, the list element we are looking for, ``q``, is being
+-returned from the function ``__sigqueue_alloc()``, which looks like an
+-allocation function. Let's take a look at it::
+-
+- 187 static struct sigqueue *__sigqueue_alloc(struct task_struct *t, gfp_t flags,
+- 188 int override_rlimit)
+- 189 {
+- 190 struct sigqueue *q = NULL;
+- 191 struct user_struct *user;
+- 192
+- 193 /*
+- 194 * We won't get problems with the target's UID changing under us
+- 195 * because changing it requires RCU be used, and if t != current, the
+- 196 * caller must be holding the RCU readlock (by way of a spinlock) and
+- 197 * we use RCU protection here
+- 198 */
+- 199 user = get_uid(__task_cred(t)->user);
+- 200 atomic_inc(&user->sigpending);
+- 201 if (override_rlimit ||
+- 202 atomic_read(&user->sigpending) <=
+- 203 t->signal->rlim[RLIMIT_SIGPENDING].rlim_cur)
+- 204 q = kmem_cache_alloc(sigqueue_cachep, flags);
+- 205 if (unlikely(q == NULL)) {
+- 206 atomic_dec(&user->sigpending);
+- 207 free_uid(user);
+- 208 } else {
+- 209 INIT_LIST_HEAD(&q->list);
+- 210 q->flags = 0;
+- 211 q->user = user;
+- 212 }
+- 213
+- 214 return q;
+- 215 }
+-
+-We see that this function initializes ``q->list``, ``q->flags``, and
+-``q->user``. It seems that now is the time to look at the definition of
+-``struct sigqueue``, e.g.::
+-
+- 14 struct sigqueue {
+- 15 struct list_head list;
+- 16 int flags;
+- 17 siginfo_t info;
+- 18 struct user_struct *user;
+- 19 };
+-
+-And, you might remember, it was a ``memcpy()`` on ``&first->info`` that
+-caused the warning, so this makes perfect sense. It also seems reasonable
+-to assume that it is the caller of ``__sigqueue_alloc()`` that has the
+-responsibility of filling out (initializing) this member.
+-
+-But just which fields of the struct were uninitialized? Let's look at
+-kmemcheck's report again::
+-
+- WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88003e4a2024)
+- 80000000000000000000000000000000000000000088ffff0000000000000000
+- i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
+- ^
+-
+-These first two lines are the memory dump of the memory object itself, and
+-the shadow bytemap, respectively. The memory object itself is in this case
+-``&first->info``. Just beware that the start of this dump is NOT the start
+-of the object itself! The position of the caret (^) corresponds with the
+-address of the read (ffff88003e4a2024).
+-
+-The shadow bytemap dump legend is as follows:
+-
+-- i: initialized
+-- u: uninitialized
+-- a: unallocated (memory has been allocated by the slab layer, but has not
+- yet been handed off to anybody)
+-- f: freed (memory has been allocated by the slab layer, but has been freed
+- by the previous owner)
+-
+-In order to figure out where (relative to the start of the object) the
+-uninitialized memory was located, we have to look at the disassembly. For
+-that, we'll need the RIP address again::
+-
+- RIP: 0010:[<ffffffff8104ede8>] [<ffffffff8104ede8>] __dequeue_signal+0xc8/0x190
+-
+- $ objdump -d --no-show-raw-insn vmlinux | grep -C 8 ffffffff8104ede8:
+- ffffffff8104edc8: mov %r8,0x8(%r8)
+- ffffffff8104edcc: test %r10d,%r10d
+- ffffffff8104edcf: js ffffffff8104ee88 <__dequeue_signal+0x168>
+- ffffffff8104edd5: mov %rax,%rdx
+- ffffffff8104edd8: mov $0xc,%ecx
+- ffffffff8104eddd: mov %r13,%rdi
+- ffffffff8104ede0: mov $0x30,%eax
+- ffffffff8104ede5: mov %rdx,%rsi
+- ffffffff8104ede8: rep movsl %ds:(%rsi),%es:(%rdi)
+- ffffffff8104edea: test $0x2,%al
+- ffffffff8104edec: je ffffffff8104edf0 <__dequeue_signal+0xd0>
+- ffffffff8104edee: movsw %ds:(%rsi),%es:(%rdi)
+- ffffffff8104edf0: test $0x1,%al
+- ffffffff8104edf2: je ffffffff8104edf5 <__dequeue_signal+0xd5>
+- ffffffff8104edf4: movsb %ds:(%rsi),%es:(%rdi)
+- ffffffff8104edf5: mov %r8,%rdi
+- ffffffff8104edf8: callq ffffffff8104de60 <__sigqueue_free>
+-
+-As expected, it's the "``rep movsl``" instruction from the ``memcpy()``
+-that causes the warning. We know about ``REP MOVSL`` that it uses the register
+-``RCX`` to count the number of remaining iterations. By taking a look at the
+-register dump again (from the kmemcheck report), we can figure out how many
+-bytes were left to copy::
+-
+- RAX: 0000000000000030 RBX: ffff88003d4ea968 RCX: 0000000000000009
+-
+-By looking at the disassembly, we also see that ``%ecx`` is being loaded
+-with the value ``$0xc`` just before (ffffffff8104edd8), so we are very
+-lucky. Keep in mind that this is the number of iterations, not bytes. And
+-since this is a "long" operation, we need to multiply by 4 to get the
+-number of bytes. So this means that the uninitialized value was encountered
+-at 4 * (0xc - 0x9) = 12 bytes from the start of the object.
+-
+-We can now try to figure out which field of the "``struct siginfo``" that
+-was not initialized. This is the beginning of the struct::
+-
+- 40 typedef struct siginfo {
+- 41 int si_signo;
+- 42 int si_errno;
+- 43 int si_code;
+- 44
+- 45 union {
+- ..
+- 92 } _sifields;
+- 93 } siginfo_t;
+-
+-On 64-bit, the int is 4 bytes long, so it must the union member that has
+-not been initialized. We can verify this using gdb::
+-
+- $ gdb vmlinux
+- ...
+- (gdb) p &((struct siginfo *) 0)->_sifields
+- $1 = (union {...} *) 0x10
+-
+-Actually, it seems that the union member is located at offset 0x10 -- which
+-means that gcc has inserted 4 bytes of padding between the members ``si_code``
+-and ``_sifields``. We can now get a fuller picture of the memory dump::
+-
+- _----------------------------=> si_code
+- / _--------------------=> (padding)
+- | / _------------=> _sifields(._kill._pid)
+- | | / _----=> _sifields(._kill._uid)
+- | | | /
+- -------|-------|-------|-------|
+- 80000000000000000000000000000000000000000088ffff0000000000000000
+- i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
+-
+-This allows us to realize another important fact: ``si_code`` contains the
+-value 0x80. Remember that x86 is little endian, so the first 4 bytes
+-"80000000" are really the number 0x00000080. With a bit of research, we
+-find that this is actually the constant ``SI_KERNEL`` defined in
+-``include/asm-generic/siginfo.h``::
+-
+- 144 #define SI_KERNEL 0x80 /* sent by the kernel from somewhere */
+-
+-This macro is used in exactly one place in the x86 kernel: In ``send_signal()``
+-in ``kernel/signal.c``::
+-
+- 816 static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
+- 817 int group)
+- 818 {
+- ...
+- 828 pending = group ? &t->signal->shared_pending : &t->pending;
+- ...
+- 851 q = __sigqueue_alloc(t, GFP_ATOMIC, (sig < SIGRTMIN &&
+- 852 (is_si_special(info) ||
+- 853 info->si_code >= 0)));
+- 854 if (q) {
+- 855 list_add_tail(&q->list, &pending->list);
+- 856 switch ((unsigned long) info) {
+- ...
+- 865 case (unsigned long) SEND_SIG_PRIV:
+- 866 q->info.si_signo = sig;
+- 867 q->info.si_errno = 0;
+- 868 q->info.si_code = SI_KERNEL;
+- 869 q->info.si_pid = 0;
+- 870 q->info.si_uid = 0;
+- 871 break;
+- ...
+- 890 }
+-
+-Not only does this match with the ``.si_code`` member, it also matches the place
+-we found earlier when looking for where siginfo_t objects are enqueued on the
+-``shared_pending`` list.
+-
+-So to sum up: It seems that it is the padding introduced by the compiler
+-between two struct fields that is uninitialized, and this gets reported when
+-we do a ``memcpy()`` on the struct. This means that we have identified a false
+-positive warning.
+-
+-Normally, kmemcheck will not report uninitialized accesses in ``memcpy()`` calls
+-when both the source and destination addresses are tracked. (Instead, we copy
+-the shadow bytemap as well). In this case, the destination address clearly
+-was not tracked. We can dig a little deeper into the stack trace from above::
+-
+- arch/x86/kernel/signal.c:805
+- arch/x86/kernel/signal.c:871
+- arch/x86/kernel/entry_64.S:694
+-
+-And we clearly see that the destination siginfo object is located on the
+-stack::
+-
+- 782 static void do_signal(struct pt_regs *regs)
+- 783 {
+- 784 struct k_sigaction ka;
+- 785 siginfo_t info;
+- ...
+- 804 signr = get_signal_to_deliver(&info, &ka, regs, NULL);
+- ...
+- 854 }
+-
+-And this ``&info`` is what eventually gets passed to ``copy_siginfo()`` as the
+-destination argument.
+-
+-Now, even though we didn't find an actual error here, the example is still a
+-good one, because it shows how one would go about to find out what the report
+-was all about.
+-
+-
+-Annotating false positives
+-~~~~~~~~~~~~~~~~~~~~~~~~~~
+-
+-There are a few different ways to make annotations in the source code that
+-will keep kmemcheck from checking and reporting certain allocations. Here
+-they are:
+-
+-- ``__GFP_NOTRACK_FALSE_POSITIVE``
+- This flag can be passed to ``kmalloc()`` or ``kmem_cache_alloc()``
+- (therefore also to other functions that end up calling one of
+- these) to indicate that the allocation should not be tracked
+- because it would lead to a false positive report. This is a "big
+- hammer" way of silencing kmemcheck; after all, even if the false
+- positive pertains to particular field in a struct, for example, we
+- will now lose the ability to find (real) errors in other parts of
+- the same struct.
+-
+- Example::
+-
+- /* No warnings will ever trigger on accessing any part of x */
+- x = kmalloc(sizeof *x, GFP_KERNEL | __GFP_NOTRACK_FALSE_POSITIVE);
+-
+-- ``kmemcheck_bitfield_begin(name)``/``kmemcheck_bitfield_end(name)`` and
+- ``kmemcheck_annotate_bitfield(ptr, name)``
+- The first two of these three macros can be used inside struct
+- definitions to signal, respectively, the beginning and end of a
+- bitfield. Additionally, this will assign the bitfield a name, which
+- is given as an argument to the macros.
+-
+- Having used these markers, one can later use
+- kmemcheck_annotate_bitfield() at the point of allocation, to indicate
+- which parts of the allocation is part of a bitfield.
+-
+- Example::
+-
+- struct foo {
+- int x;
+-
+- kmemcheck_bitfield_begin(flags);
+- int flag_a:1;
+- int flag_b:1;
+- kmemcheck_bitfield_end(flags);
+-
+- int y;
+- };
+-
+- struct foo *x = kmalloc(sizeof *x);
+-
+- /* No warnings will trigger on accessing the bitfield of x */
+- kmemcheck_annotate_bitfield(x, flags);
+-
+- Note that ``kmemcheck_annotate_bitfield()`` can be used even before the
+- return value of ``kmalloc()`` is checked -- in other words, passing NULL
+- as the first argument is legal (and will do nothing).
+-
+-
+-Reporting errors
+-----------------
+-
+-As we have seen, kmemcheck will produce false positive reports. Therefore, it
+-is not very wise to blindly post kmemcheck warnings to mailing lists and
+-maintainers. Instead, I encourage maintainers and developers to find errors
+-in their own code. If you get a warning, you can try to work around it, try
+-to figure out if it's a real error or not, or simply ignore it. Most
+-developers know their own code and will quickly and efficiently determine the
+-root cause of a kmemcheck report. This is therefore also the most efficient
+-way to work with kmemcheck.
+-
+-That said, we (the kmemcheck maintainers) will always be on the lookout for
+-false positives that we can annotate and silence. So whatever you find,
+-please drop us a note privately! Kernel configs and steps to reproduce (if
+-available) are of course a great help too.
+-
+-Happy hacking!
+-
+-
+-Technical description
+----------------------
+-
+-kmemcheck works by marking memory pages non-present. This means that whenever
+-somebody attempts to access the page, a page fault is generated. The page
+-fault handler notices that the page was in fact only hidden, and so it calls
+-on the kmemcheck code to make further investigations.
+-
+-When the investigations are completed, kmemcheck "shows" the page by marking
+-it present (as it would be under normal circumstances). This way, the
+-interrupted code can continue as usual.
+-
+-But after the instruction has been executed, we should hide the page again, so
+-that we can catch the next access too! Now kmemcheck makes use of a debugging
+-feature of the processor, namely single-stepping. When the processor has
+-finished the one instruction that generated the memory access, a debug
+-exception is raised. From here, we simply hide the page again and continue
+-execution, this time with the single-stepping feature turned off.
+-
+-kmemcheck requires some assistance from the memory allocator in order to work.
+-The memory allocator needs to
+-
+- 1. Tell kmemcheck about newly allocated pages and pages that are about to
+- be freed. This allows kmemcheck to set up and tear down the shadow memory
+- for the pages in question. The shadow memory stores the status of each
+- byte in the allocation proper, e.g. whether it is initialized or
+- uninitialized.
+-
+- 2. Tell kmemcheck which parts of memory should be marked uninitialized.
+- There are actually a few more states, such as "not yet allocated" and
+- "recently freed".
+-
+-If a slab cache is set up using the SLAB_NOTRACK flag, it will never return
+-memory that can take page faults because of kmemcheck.
+-
+-If a slab cache is NOT set up using the SLAB_NOTRACK flag, callers can still
+-request memory with the __GFP_NOTRACK or __GFP_NOTRACK_FALSE_POSITIVE flags.
+-This does not prevent the page faults from occurring, however, but marks the
+-object in question as being initialized so that no warnings will ever be
+-produced for this object.
+-
+-Currently, the SLAB and SLUB allocators are supported by kmemcheck.
+diff --git a/Documentation/devicetree/bindings/dma/snps-dma.txt b/Documentation/devicetree/bindings/dma/snps-dma.txt
+index a122723907ac..99acc712f83a 100644
+--- a/Documentation/devicetree/bindings/dma/snps-dma.txt
++++ b/Documentation/devicetree/bindings/dma/snps-dma.txt
+@@ -64,6 +64,6 @@ Example:
+ reg = <0xe0000000 0x1000>;
+ interrupts = <0 35 0x4>;
+ dmas = <&dmahost 12 0 1>,
+- <&dmahost 13 0 1 0>;
++ <&dmahost 13 1 0>;
+ dma-names = "rx", "rx";
+ };
+diff --git a/Documentation/filesystems/ext4.txt b/Documentation/filesystems/ext4.txt
+index 5a8f7f4d2bca..7449893dc039 100644
+--- a/Documentation/filesystems/ext4.txt
++++ b/Documentation/filesystems/ext4.txt
+@@ -233,7 +233,7 @@ data_err=ignore(*) Just print an error message if an error occurs
+ data_err=abort Abort the journal if an error occurs in a file
+ data buffer in ordered mode.
+
+-grpid Give objects the same group ID as their creator.
++grpid New objects have the group ID of their parent.
+ bsdgroups
+
+ nogrpid (*) New objects have the group ID of their creator.
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 2811a211632c..76ea063d8083 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -7670,16 +7670,6 @@ F: include/linux/kdb.h
+ F: include/linux/kgdb.h
+ F: kernel/debug/
+
+-KMEMCHECK
+-M: Vegard Nossum <vegardno@×××××××.no>
+-M: Pekka Enberg <penberg@××××××.org>
+-S: Maintained
+-F: Documentation/dev-tools/kmemcheck.rst
+-F: arch/x86/include/asm/kmemcheck.h
+-F: arch/x86/mm/kmemcheck/
+-F: include/linux/kmemcheck.h
+-F: mm/kmemcheck.c
+-
+ KMEMLEAK
+ M: Catalin Marinas <catalin.marinas@×××.com>
+ S: Maintained
+diff --git a/Makefile b/Makefile
+index 33176140f133..68d70485b088 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 14
+-SUBLEVEL = 20
++SUBLEVEL = 21
+ EXTRAVERSION =
+ NAME = Petit Gorille
+
+diff --git a/arch/arm/boot/dts/arm-realview-eb-mp.dtsi b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
+index 7b8d90b7aeea..29b636fce23f 100644
+--- a/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
++++ b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
+@@ -150,11 +150,6 @@
+ interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+-&charlcd {
+- interrupt-parent = <&intc>;
+- interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
+-};
+-
+ &serial0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/exynos5410.dtsi b/arch/arm/boot/dts/exynos5410.dtsi
+index 7eab4bc07cec..7628bbb02324 100644
+--- a/arch/arm/boot/dts/exynos5410.dtsi
++++ b/arch/arm/boot/dts/exynos5410.dtsi
+@@ -333,7 +333,6 @@
+ &rtc {
+ clocks = <&clock CLK_RTC>;
+ clock-names = "rtc";
+- interrupt-parent = <&pmu_system_controller>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/lpc3250-ea3250.dts b/arch/arm/boot/dts/lpc3250-ea3250.dts
+index 52b3ed10283a..e2bc731079be 100644
+--- a/arch/arm/boot/dts/lpc3250-ea3250.dts
++++ b/arch/arm/boot/dts/lpc3250-ea3250.dts
+@@ -156,8 +156,8 @@
+ uda1380: uda1380@18 {
+ compatible = "nxp,uda1380";
+ reg = <0x18>;
+- power-gpio = <&gpio 0x59 0>;
+- reset-gpio = <&gpio 0x51 0>;
++ power-gpio = <&gpio 3 10 0>;
++ reset-gpio = <&gpio 3 2 0>;
+ dac-clk = "wspll";
+ };
+
+diff --git a/arch/arm/boot/dts/lpc3250-phy3250.dts b/arch/arm/boot/dts/lpc3250-phy3250.dts
+index fd95e2b10357..b7bd3a110a8d 100644
+--- a/arch/arm/boot/dts/lpc3250-phy3250.dts
++++ b/arch/arm/boot/dts/lpc3250-phy3250.dts
+@@ -81,8 +81,8 @@
+ uda1380: uda1380@18 {
+ compatible = "nxp,uda1380";
+ reg = <0x18>;
+- power-gpio = <&gpio 0x59 0>;
+- reset-gpio = <&gpio 0x51 0>;
++ power-gpio = <&gpio 3 10 0>;
++ reset-gpio = <&gpio 3 2 0>;
+ dac-clk = "wspll";
+ };
+
+diff --git a/arch/arm/boot/dts/mt2701.dtsi b/arch/arm/boot/dts/mt2701.dtsi
+index afe12e5b51f9..f936000f0699 100644
+--- a/arch/arm/boot/dts/mt2701.dtsi
++++ b/arch/arm/boot/dts/mt2701.dtsi
+@@ -593,6 +593,7 @@
+ compatible = "mediatek,mt2701-hifsys", "syscon";
+ reg = <0 0x1a000000 0 0x1000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ usb0: usb@1a1c0000 {
+@@ -677,6 +678,7 @@
+ compatible = "mediatek,mt2701-ethsys", "syscon";
+ reg = <0 0x1b000000 0 0x1000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ eth: ethernet@1b100000 {
+diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
+index ec8a07415cb3..36983a7d7cfd 100644
+--- a/arch/arm/boot/dts/mt7623.dtsi
++++ b/arch/arm/boot/dts/mt7623.dtsi
+@@ -753,6 +753,7 @@
+ "syscon";
+ reg = <0 0x1b000000 0 0x1000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ eth: ethernet@1b100000 {
+diff --git a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+index 688a86378cee..7bf5aa2237c9 100644
+--- a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
++++ b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+@@ -204,7 +204,7 @@
+ bus-width = <4>;
+ max-frequency = <50000000>;
+ cap-sd-highspeed;
+- cd-gpios = <&pio 261 0>;
++ cd-gpios = <&pio 261 GPIO_ACTIVE_LOW>;
+ vmmc-supply = <&mt6323_vmch_reg>;
+ vqmmc-supply = <&mt6323_vio18_reg>;
+ };
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index 726c5d0dbd5b..b290a5abb901 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -463,6 +463,7 @@
+ compatible = "samsung,exynos4210-ohci";
+ reg = <0xec300000 0x100>;
+ interrupts = <23>;
++ interrupt-parent = <&vic1>;
+ clocks = <&clocks CLK_USB_HOST>;
+ clock-names = "usbhost";
+ #address-cells = <1>;
+diff --git a/arch/arm/boot/dts/spear1310-evb.dts b/arch/arm/boot/dts/spear1310-evb.dts
+index 84101e4eebbf..0f5f379323a8 100644
+--- a/arch/arm/boot/dts/spear1310-evb.dts
++++ b/arch/arm/boot/dts/spear1310-evb.dts
+@@ -349,7 +349,7 @@
+ spi0: spi@e0100000 {
+ status = "okay";
+ num-cs = <3>;
+- cs-gpios = <&gpio1 7 0>, <&spics 0>, <&spics 1>;
++ cs-gpios = <&gpio1 7 0>, <&spics 0 0>, <&spics 1 0>;
+
+ stmpe610@0 {
+ compatible = "st,stmpe610";
+diff --git a/arch/arm/boot/dts/spear1340.dtsi b/arch/arm/boot/dts/spear1340.dtsi
+index 5f347054527d..d4dbc4098653 100644
+--- a/arch/arm/boot/dts/spear1340.dtsi
++++ b/arch/arm/boot/dts/spear1340.dtsi
+@@ -142,8 +142,8 @@
+ reg = <0xb4100000 0x1000>;
+ interrupts = <0 105 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 0x600 0 0 1>, /* 0xC << 11 */
+- <&dwdma0 0x680 0 1 0>; /* 0xD << 7 */
++ dmas = <&dwdma0 12 0 1>,
++ <&dwdma0 13 1 0>;
+ dma-names = "tx", "rx";
+ };
+
+diff --git a/arch/arm/boot/dts/spear13xx.dtsi b/arch/arm/boot/dts/spear13xx.dtsi
+index 17ea0abcdbd7..086b4b333249 100644
+--- a/arch/arm/boot/dts/spear13xx.dtsi
++++ b/arch/arm/boot/dts/spear13xx.dtsi
+@@ -100,7 +100,7 @@
+ reg = <0xb2800000 0x1000>;
+ interrupts = <0 29 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 0 0 0 0>;
++ dmas = <&dwdma0 0 0 0>;
+ dma-names = "data";
+ };
+
+@@ -290,8 +290,8 @@
+ #size-cells = <0>;
+ interrupts = <0 31 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 0x2000 0 0 0>, /* 0x4 << 11 */
+- <&dwdma0 0x0280 0 0 0>; /* 0x5 << 7 */
++ dmas = <&dwdma0 4 0 0>,
++ <&dwdma0 5 0 0>;
+ dma-names = "tx", "rx";
+ };
+
+diff --git a/arch/arm/boot/dts/spear600.dtsi b/arch/arm/boot/dts/spear600.dtsi
+index 6b32d20acc9f..00166eb9be86 100644
+--- a/arch/arm/boot/dts/spear600.dtsi
++++ b/arch/arm/boot/dts/spear600.dtsi
+@@ -194,6 +194,7 @@
+ rtc: rtc@fc900000 {
+ compatible = "st,spear600-rtc";
+ reg = <0xfc900000 0x1000>;
++ interrupt-parent = <&vic0>;
+ interrupts = <10>;
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+index 68aab50a73ab..733678b75b88 100644
+--- a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
++++ b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+@@ -750,6 +750,7 @@
+ reg = <0x10120000 0x1000>;
+ interrupt-names = "combined";
+ interrupts = <14>;
++ interrupt-parent = <&vica>;
+ clocks = <&clcdclk>, <&hclkclcd>;
+ clock-names = "clcdclk", "apb_pclk";
+ status = "disabled";
+diff --git a/arch/arm/boot/dts/stih407.dtsi b/arch/arm/boot/dts/stih407.dtsi
+index fa149837df14..11fdecd9312e 100644
+--- a/arch/arm/boot/dts/stih407.dtsi
++++ b/arch/arm/boot/dts/stih407.dtsi
+@@ -8,6 +8,7 @@
+ */
+ #include "stih407-clock.dtsi"
+ #include "stih407-family.dtsi"
++#include <dt-bindings/gpio/gpio.h>
+ / {
+ soc {
+ sti-display-subsystem {
+@@ -122,7 +123,7 @@
+ <&clk_s_d2_quadfs 0>,
+ <&clk_s_d2_quadfs 1>;
+
+- hdmi,hpd-gpio = <&pio5 3>;
++ hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>;
+ reset-names = "hdmi";
+ resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
+ ddc = <&hdmiddc>;
+diff --git a/arch/arm/boot/dts/stih410.dtsi b/arch/arm/boot/dts/stih410.dtsi
+index 21fe72b183d8..96eed0dc08b8 100644
+--- a/arch/arm/boot/dts/stih410.dtsi
++++ b/arch/arm/boot/dts/stih410.dtsi
+@@ -9,6 +9,7 @@
+ #include "stih410-clock.dtsi"
+ #include "stih407-family.dtsi"
+ #include "stih410-pinctrl.dtsi"
++#include <dt-bindings/gpio/gpio.h>
+ / {
+ aliases {
+ bdisp0 = &bdisp0;
+@@ -213,7 +214,7 @@
+ <&clk_s_d2_quadfs 0>,
+ <&clk_s_d2_quadfs 1>;
+
+- hdmi,hpd-gpio = <&pio5 3>;
++ hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>;
+ reset-names = "hdmi";
+ resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
+ ddc = <&hdmiddc>;
+diff --git a/arch/arm/include/asm/dma-iommu.h b/arch/arm/include/asm/dma-iommu.h
+index 0722ec6be692..6821f1249300 100644
+--- a/arch/arm/include/asm/dma-iommu.h
++++ b/arch/arm/include/asm/dma-iommu.h
+@@ -7,7 +7,6 @@
+ #include <linux/mm_types.h>
+ #include <linux/scatterlist.h>
+ #include <linux/dma-debug.h>
+-#include <linux/kmemcheck.h>
+ #include <linux/kref.h>
+
+ #define ARM_MAPPING_ERROR (~(dma_addr_t)0x0)
+diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
+index b2902a5cd780..2d7344f0e208 100644
+--- a/arch/arm/include/asm/pgalloc.h
++++ b/arch/arm/include/asm/pgalloc.h
+@@ -57,7 +57,7 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+ extern pgd_t *pgd_alloc(struct mm_struct *mm);
+ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
+
+-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
+
+ static inline void clean_pte_table(pte_t *pte)
+ {
+diff --git a/arch/arm/mach-pxa/tosa-bt.c b/arch/arm/mach-pxa/tosa-bt.c
+index 107f37210fb9..83606087edc7 100644
+--- a/arch/arm/mach-pxa/tosa-bt.c
++++ b/arch/arm/mach-pxa/tosa-bt.c
+@@ -132,3 +132,7 @@ static struct platform_driver tosa_bt_driver = {
+ },
+ };
+ module_platform_driver(tosa_bt_driver);
++
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Dmitry Baryshkov");
++MODULE_DESCRIPTION("Bluetooth built-in chip control");
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index dc3817593e14..61da6e65900b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -901,6 +901,7 @@
+ "dsi_phy_regulator";
+
+ #clock-cells = <1>;
++ #phy-cells = <0>;
+
+ clocks = <&gcc GCC_MDSS_AHB_CLK>;
+ clock-names = "iface_clk";
+@@ -1430,8 +1431,8 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+- qcom,ipc-1 = <&apcs 0 13>;
+- qcom,ipc-6 = <&apcs 0 19>;
++ qcom,ipc-1 = <&apcs 8 13>;
++ qcom,ipc-3 = <&apcs 8 19>;
+
+ apps_smsm: apps@0 {
+ reg = <0>;
+diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
+index d25f4f137c2a..5ca6a573a701 100644
+--- a/arch/arm64/include/asm/pgalloc.h
++++ b/arch/arm64/include/asm/pgalloc.h
+@@ -26,7 +26,7 @@
+
+ #define check_pgt_cache() do { } while (0)
+
+-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
+ #define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))
+
+ #if CONFIG_PGTABLE_LEVELS > 2
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 07823595b7f0..52f15cd896e1 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -406,6 +406,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
+ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
++ .enable = qcom_enable_link_stack_sanitization,
++ },
++ {
++ .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
++ },
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index 79364d3455c0..e08ae6b6b63e 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -371,8 +371,10 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+ u32 midr = read_cpuid_id();
+
+ /* Apply BTAC predictors mitigation to all Falkor chips */
+- if ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)
++ if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
++ ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)) {
+ __qcom_hyp_sanitize_btac_predictors();
++ }
+ }
+
+ fp_enabled = __fpsimd_enabled();
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 27058f3fd132..329a1c43365e 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -190,7 +190,8 @@ ENDPROC(idmap_cpu_replace_ttbr1)
+ dc cvac, cur_\()\type\()p // Ensure any existing dirty
+ dmb sy // lines are written back before
+ ldr \type, [cur_\()\type\()p] // loading the entry
+- tbz \type, #0, next_\()\type // Skip invalid entries
++ tbz \type, #0, skip_\()\type // Skip invalid and
++ tbnz \type, #11, skip_\()\type // non-global entries
+ .endm
+
+ .macro __idmap_kpti_put_pgtable_ent_ng, type
+@@ -249,8 +250,9 @@ ENTRY(idmap_kpti_install_ng_mappings)
+ add end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
+ do_pgd: __idmap_kpti_get_pgtable_ent pgd
+ tbnz pgd, #1, walk_puds
+- __idmap_kpti_put_pgtable_ent_ng pgd
+ next_pgd:
++ __idmap_kpti_put_pgtable_ent_ng pgd
++skip_pgd:
+ add cur_pgdp, cur_pgdp, #8
+ cmp cur_pgdp, end_pgdp
+ b.ne do_pgd
+@@ -278,8 +280,9 @@ walk_puds:
+ add end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
+ do_pud: __idmap_kpti_get_pgtable_ent pud
+ tbnz pud, #1, walk_pmds
+- __idmap_kpti_put_pgtable_ent_ng pud
+ next_pud:
++ __idmap_kpti_put_pgtable_ent_ng pud
++skip_pud:
+ add cur_pudp, cur_pudp, 8
+ cmp cur_pudp, end_pudp
+ b.ne do_pud
+@@ -298,8 +301,9 @@ walk_pmds:
+ add end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
+ do_pmd: __idmap_kpti_get_pgtable_ent pmd
+ tbnz pmd, #1, walk_ptes
+- __idmap_kpti_put_pgtable_ent_ng pmd
+ next_pmd:
++ __idmap_kpti_put_pgtable_ent_ng pmd
++skip_pmd:
+ add cur_pmdp, cur_pmdp, #8
+ cmp cur_pmdp, end_pmdp
+ b.ne do_pmd
+@@ -317,7 +321,7 @@ walk_ptes:
+ add end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
+ do_pte: __idmap_kpti_get_pgtable_ent pte
+ __idmap_kpti_put_pgtable_ent_ng pte
+-next_pte:
+skip_pte:
+ add cur_ptep, cur_ptep, #8
+ cmp cur_ptep, end_ptep
+ b.ne do_pte
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index c3d798b44030..c82457b0e733 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -119,12 +119,12 @@ config MIPS_GENERIC
+ select SYS_SUPPORTS_MULTITHREADING
+ select SYS_SUPPORTS_RELOCATABLE
+ select SYS_SUPPORTS_SMARTMIPS
+- select USB_EHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
+- select USB_EHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
+- select USB_OHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
+- select USB_OHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
+- select USB_UHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
+- select USB_UHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
++ select USB_EHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
++ select USB_EHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
++ select USB_OHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
++ select USB_OHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
++ select USB_UHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
++ select USB_UHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
+ select USE_OF
+ help
+ Select this to build a kernel which aims to support multiple boards,
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index fe3939726765..795caa763da3 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -374,6 +374,7 @@ static void __init bootmem_init(void)
+ unsigned long reserved_end;
+ unsigned long mapstart = ~0UL;
+ unsigned long bootmap_size;
++ phys_addr_t ramstart = (phys_addr_t)ULLONG_MAX;
+ bool bootmap_valid = false;
+ int i;
+
+@@ -394,7 +395,8 @@ static void __init bootmem_init(void)
+ max_low_pfn = 0;
+
+ /*
+- * Find the highest page frame number we have available.
++ * Find the highest page frame number we have available
++ * and the lowest used RAM address
+ */
+ for (i = 0; i < boot_mem_map.nr_map; i++) {
+ unsigned long start, end;
+@@ -406,6 +408,8 @@ static void __init bootmem_init(void)
+ end = PFN_DOWN(boot_mem_map.map[i].addr
+ + boot_mem_map.map[i].size);
+
++ ramstart = min(ramstart, boot_mem_map.map[i].addr);
+
+#ifndef CONFIG_HIGHMEM
+ /*
+ * Skip highmem here so we get an accurate max_low_pfn if low
+@@ -435,6 +439,13 @@ static void __init bootmem_init(void)
+ mapstart = max(reserved_end, start);
+ }
+
++ /*
++ * Reserve any memory between the start of RAM and PHYS_OFFSET
++ */
++ if (ramstart > PHYS_OFFSET)
++ add_memory_region(PHYS_OFFSET, ramstart - PHYS_OFFSET,
++ BOOT_MEM_RESERVED);
++
+ if (min_low_pfn >= max_low_pfn)
+ panic("Incorrect memory mapping !!!");
+ if (min_low_pfn > ARCH_PFN_OFFSET) {
+@@ -663,9 +674,6 @@ static int __init early_parse_mem(char *p)
+
+ add_memory_region(start, size, BOOT_MEM_RAM);
+
+- if (start && start > PHYS_OFFSET)
+- add_memory_region(PHYS_OFFSET, start - PHYS_OFFSET,
1336 +- BOOT_MEM_RESERVED);
1337 + return 0;
1338 + }
1339 + early_param("mem", early_parse_mem);
1340 +diff --git a/arch/openrisc/include/asm/dma-mapping.h b/arch/openrisc/include/asm/dma-mapping.h
1341 +index f41bd3cb76d9..e212a1f0b6d2 100644
1342 +--- a/arch/openrisc/include/asm/dma-mapping.h
1343 ++++ b/arch/openrisc/include/asm/dma-mapping.h
1344 +@@ -23,7 +23,6 @@
1345 + */
1346 +
1347 + #include <linux/dma-debug.h>
1348 +-#include <linux/kmemcheck.h>
1349 + #include <linux/dma-mapping.h>
1350 +
1351 + extern const struct dma_map_ops or1k_dma_map_ops;
1352 +diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
1353 +index a14203c005f1..e11f03007b57 100644
1354 +--- a/arch/powerpc/include/asm/pgalloc.h
1355 ++++ b/arch/powerpc/include/asm/pgalloc.h
1356 +@@ -18,7 +18,7 @@ static inline gfp_t pgtable_gfp_flags(struct mm_struct *mm, gfp_t gfp)
1357 + }
1358 + #endif /* MODULE */
1359 +
1360 +-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
1361 ++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
1362 +
1363 + #ifdef CONFIG_PPC_BOOK3S
1364 + #include <asm/book3s/pgalloc.h>
1365 +diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
1366 +index 023ff9f17501..d5f2ee882f74 100644
1367 +--- a/arch/powerpc/include/asm/topology.h
1368 ++++ b/arch/powerpc/include/asm/topology.h
1369 +@@ -44,6 +44,11 @@ extern int sysfs_add_device_to_node(struct device *dev, int nid);
1370 + extern void sysfs_remove_device_from_node(struct device *dev, int nid);
1371 + extern int numa_update_cpu_topology(bool cpus_locked);
1372 +
1373 ++static inline void update_numa_cpu_lookup_table(unsigned int cpu, int node)
1374 ++{
1375 ++ numa_cpu_lookup_table[cpu] = node;
1376 ++}
1377 ++
1378 + static inline int early_cpu_to_node(int cpu)
1379 + {
1380 + int nid;
1381 +diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
1382 +index e9f72abc52b7..e91b40aa5417 100644
1383 +--- a/arch/powerpc/kernel/exceptions-64s.S
1384 ++++ b/arch/powerpc/kernel/exceptions-64s.S
1385 +@@ -1617,7 +1617,7 @@ USE_TEXT_SECTION()
1386 + .balign IFETCH_ALIGN_BYTES
1387 + do_hash_page:
1388 + #ifdef CONFIG_PPC_STD_MMU_64
1389 +- lis r0,DSISR_BAD_FAULT_64S@h
1390 ++ lis r0,(DSISR_BAD_FAULT_64S|DSISR_DABRMATCH)@h
1391 + ori r0,r0,DSISR_BAD_FAULT_64S@l
1392 + and. r0,r4,r0 /* weird error? */
1393 + bne- handle_page_fault /* if not, try to insert a HPTE */
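[The do_hash_page fix widens the "weird error?" mask so DSISR_DABRMATCH (a data breakpoint hit) also takes the generic fault path instead of attempting a HPTE insert. The lis/ori pair builds the 32-bit mask sixteen bits at a time from the @h and @l halves; a small sketch of that split, using a hypothetical mask value rather than the real DSISR constants:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t mask = 0x42000400;               /* hypothetical, not the real DSISR bits */
        uint16_t hi = (uint16_t)(mask >> 16);     /* what @h feeds to lis */
        uint16_t lo = (uint16_t)(mask & 0xffff);  /* what @l feeds to ori */
        uint32_t rebuilt = ((uint32_t)hi << 16) | lo;

        printf("%#010x -> hi %#06x, lo %#06x -> %#010x\n",
               (unsigned)mask, (unsigned)hi, (unsigned)lo, (unsigned)rebuilt);
        return 0;
    }
]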
1394 +diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
1395 +index 8c54166491e7..29b2fed93289 100644
1396 +--- a/arch/powerpc/kernel/head_32.S
1397 ++++ b/arch/powerpc/kernel/head_32.S
1398 +@@ -388,7 +388,7 @@ DataAccess:
1399 + EXCEPTION_PROLOG
1400 + mfspr r10,SPRN_DSISR
1401 + stw r10,_DSISR(r11)
1402 +- andis. r0,r10,DSISR_BAD_FAULT_32S@h
1403 ++ andis. r0,r10,(DSISR_BAD_FAULT_32S|DSISR_DABRMATCH)@h
1404 + bne 1f /* if not, try to put a PTE */
1405 + mfspr r4,SPRN_DAR /* into the hash table */
1406 + rlwinm r3,r10,32-15,21,21 /* DSISR_STORE -> _PAGE_RW */
1407 +diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
1408 +index a51df9ef529d..a81279249bfb 100644
1409 +--- a/arch/powerpc/mm/numa.c
1410 ++++ b/arch/powerpc/mm/numa.c
1411 +@@ -142,11 +142,6 @@ static void reset_numa_cpu_lookup_table(void)
1412 + numa_cpu_lookup_table[cpu] = -1;
1413 + }
1414 +
1415 +-static void update_numa_cpu_lookup_table(unsigned int cpu, int node)
1416 +-{
1417 +- numa_cpu_lookup_table[cpu] = node;
1418 +-}
1419 +-
1420 + static void map_cpu_to_node(int cpu, int node)
1421 + {
1422 + update_numa_cpu_lookup_table(cpu, node);
1423 +diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
1424 +index cfbbee941a76..17ae5c15a9e0 100644
1425 +--- a/arch/powerpc/mm/pgtable-radix.c
1426 ++++ b/arch/powerpc/mm/pgtable-radix.c
1427 +@@ -17,6 +17,7 @@
1428 + #include <linux/of_fdt.h>
1429 + #include <linux/mm.h>
1430 + #include <linux/string_helpers.h>
1431 ++#include <linux/stop_machine.h>
1432 +
1433 + #include <asm/pgtable.h>
1434 + #include <asm/pgalloc.h>
1435 +@@ -671,6 +672,30 @@ static void free_pmd_table(pmd_t *pmd_start, pud_t *pud)
1436 + pud_clear(pud);
1437 + }
1438 +
1439 ++struct change_mapping_params {
1440 ++ pte_t *pte;
1441 ++ unsigned long start;
1442 ++ unsigned long end;
1443 ++ unsigned long aligned_start;
1444 ++ unsigned long aligned_end;
1445 ++};
1446 ++
1447 ++static int stop_machine_change_mapping(void *data)
1448 ++{
1449 ++ struct change_mapping_params *params =
1450 ++ (struct change_mapping_params *)data;
1451 ++
1452 ++ if (!data)
1453 ++ return -1;
1454 ++
1455 ++ spin_unlock(&init_mm.page_table_lock);
1456 ++ pte_clear(&init_mm, params->aligned_start, params->pte);
1457 ++ create_physical_mapping(params->aligned_start, params->start);
1458 ++ create_physical_mapping(params->end, params->aligned_end);
1459 ++ spin_lock(&init_mm.page_table_lock);
1460 ++ return 0;
1461 ++}
1462 ++
1463 + static void remove_pte_table(pte_t *pte_start, unsigned long addr,
1464 + unsigned long end)
1465 + {
1466 +@@ -699,6 +724,52 @@ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
1467 + }
1468 + }
1469 +
1470 ++/*
1471 ++ * Helper to clear the PTE and potentially split the mapping.
1472 ++ */
1473 ++static void split_kernel_mapping(unsigned long addr, unsigned long end,
1474 ++ unsigned long size, pte_t *pte)
1475 ++{
1476 ++ unsigned long mask = ~(size - 1);
1477 ++ unsigned long aligned_start = addr & mask;
1478 ++ unsigned long aligned_end = addr + size;
1479 ++ struct change_mapping_params params;
1480 ++ bool split_region = false;
1481 ++
1482 ++ if ((end - addr) < size) {
1483 ++ /*
1484 ++ * We're going to clear the PTE, but we have not yet
1485 ++ * flushed the mapping, so it is time to remap and
1486 ++ * flush. If the effects are visible outside the
1487 ++ * processor, or if we are running in code close to
1488 ++ * the mapping we cleared, we are in trouble.
1489 ++ */
1490 ++ if (overlaps_kernel_text(aligned_start, addr) ||
1491 ++ overlaps_kernel_text(end, aligned_end)) {
1492 ++ /*
1493 ++ * Hack, just return, don't pte_clear
1494 ++ */
1495 ++ WARN_ONCE(1, "Linear mapping %lx->%lx overlaps kernel "
1496 ++ "text, not splitting\n", addr, end);
1497 ++ return;
1498 ++ }
1499 ++ split_region = true;
1500 ++ }
1501 ++
1502 ++ if (split_region) {
1503 ++ params.pte = pte;
1504 ++ params.start = addr;
1505 ++ params.end = end;
1506 ++ params.aligned_start = addr & ~(size - 1);
1507 ++ params.aligned_end = min_t(unsigned long, aligned_end,
1508 ++ (unsigned long)__va(memblock_end_of_DRAM()));
1509 ++ stop_machine(stop_machine_change_mapping, &params, NULL);
1510 ++ return;
1511 ++ }
1512 ++
1513 ++ pte_clear(&init_mm, addr, pte);
1514 ++}
1515 ++
1516 + static void remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
1517 + unsigned long end)
1518 + {
1519 +@@ -714,13 +785,7 @@ static void remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
1520 + continue;
1521 +
1522 + if (pmd_huge(*pmd)) {
1523 +- if (!IS_ALIGNED(addr, PMD_SIZE) ||
1524 +- !IS_ALIGNED(next, PMD_SIZE)) {
1525 +- WARN_ONCE(1, "%s: unaligned range\n", __func__);
1526 +- continue;
1527 +- }
1528 +-
1529 +- pte_clear(&init_mm, addr, (pte_t *)pmd);
1530 ++ split_kernel_mapping(addr, end, PMD_SIZE, (pte_t *)pmd);
1531 + continue;
1532 + }
1533 +
1534 +@@ -745,13 +810,7 @@ static void remove_pud_table(pud_t *pud_start, unsigned long addr,
1535 + continue;
1536 +
1537 + if (pud_huge(*pud)) {
1538 +- if (!IS_ALIGNED(addr, PUD_SIZE) ||
1539 +- !IS_ALIGNED(next, PUD_SIZE)) {
1540 +- WARN_ONCE(1, "%s: unaligned range\n", __func__);
1541 +- continue;
1542 +- }
1543 +-
1544 +- pte_clear(&init_mm, addr, (pte_t *)pud);
1545 ++ split_kernel_mapping(addr, end, PUD_SIZE, (pte_t *)pud);
1546 + continue;
1547 + }
1548 +
1549 +@@ -777,13 +836,7 @@ static void remove_pagetable(unsigned long start, unsigned long end)
1550 + continue;
1551 +
1552 + if (pgd_huge(*pgd)) {
1553 +- if (!IS_ALIGNED(addr, PGDIR_SIZE) ||
1554 +- !IS_ALIGNED(next, PGDIR_SIZE)) {
1555 +- WARN_ONCE(1, "%s: unaligned range\n", __func__);
1556 +- continue;
1557 +- }
1558 +-
1559 +- pte_clear(&init_mm, addr, (pte_t *)pgd);
1560 ++ split_kernel_mapping(addr, end, PGDIR_SIZE, (pte_t *)pgd);
1561 + continue;
1562 + }
1563 +
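[Rather than refusing unaligned hot-unplug ranges, split_kernel_mapping() now rounds out to the huge-page boundary under stop_machine(), clears the large PTE, and recreates the surviving pieces on either side of the removed range. The boundary arithmetic is the heart of it; a standalone sketch with hypothetical addresses, mirroring the aligned_start/aligned_end computation in the hunk above:

    #include <stdio.h>

    int main(void)
    {
        unsigned long size = 0x200000UL;            /* e.g. a 2MB PMD mapping */
        unsigned long addr = 0x40100000UL;          /* hypothetical unmap start */
        unsigned long end  = 0x40180000UL;          /* hypothetical unmap end */
        unsigned long mask = ~(size - 1);
        unsigned long aligned_start = addr & mask;
        unsigned long aligned_end   = addr + size;  /* as computed in the hunk above */

        /* the huge PTE is cleared, then the pieces outside [addr, end)
         * are recreated by create_physical_mapping() */
        printf("clear %#lx..%#lx\n", aligned_start, aligned_end);
        printf("remap %#lx..%#lx and %#lx..%#lx\n",
               aligned_start, addr, end, aligned_end);
        return 0;
    }
]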
1564 +diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
1565 +index ac0717a90ca6..12f95b1f7d07 100644
1566 +--- a/arch/powerpc/mm/pgtable_64.c
1567 ++++ b/arch/powerpc/mm/pgtable_64.c
1568 +@@ -483,6 +483,8 @@ void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
1569 + if (old & PATB_HR) {
1570 + asm volatile(PPC_TLBIE_5(%0,%1,2,0,1) : :
1571 + "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
1572 ++ asm volatile(PPC_TLBIE_5(%0,%1,2,1,1) : :
1573 ++ "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
1574 + trace_tlbie(lpid, 0, TLBIEL_INVAL_SET_LPID, lpid, 2, 0, 1);
1575 + } else {
1576 + asm volatile(PPC_TLBIE_5(%0,%1,2,0,0) : :
1577 +diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
1578 +index d304028641a2..4b295cfd5f7e 100644
1579 +--- a/arch/powerpc/mm/tlb-radix.c
1580 ++++ b/arch/powerpc/mm/tlb-radix.c
1581 +@@ -453,14 +453,12 @@ void radix__flush_tlb_all(void)
1582 + */
1583 + asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
1584 + : : "r"(rb), "i"(r), "i"(1), "i"(ric), "r"(rs) : "memory");
1585 +- trace_tlbie(0, 0, rb, rs, ric, prs, r);
1586 + /*
1587 + * now flush host entries by passing PRS = 0 and LPID == 0
1588 + */
1589 + asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
1590 + : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(0) : "memory");
1591 + asm volatile("eieio; tlbsync; ptesync": : :"memory");
1592 +- trace_tlbie(0, 0, rb, 0, ric, prs, r);
1593 + }
1594 +
1595 + void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
1596 +diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
1597 +index fadb95efbb9e..b1ac8ac38434 100644
1598 +--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
1599 ++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
1600 +@@ -36,6 +36,7 @@
1601 + #include <asm/xics.h>
1602 + #include <asm/xive.h>
1603 + #include <asm/plpar_wrappers.h>
1604 ++#include <asm/topology.h>
1605 +
1606 + #include "pseries.h"
1607 + #include "offline_states.h"
1608 +@@ -331,6 +332,7 @@ static void pseries_remove_processor(struct device_node *np)
1609 + BUG_ON(cpu_online(cpu));
1610 + set_cpu_present(cpu, false);
1611 + set_hard_smp_processor_id(cpu, -1);
1612 ++ update_numa_cpu_lookup_table(cpu, -1);
1613 + break;
1614 + }
1615 + if (cpu >= nr_cpu_ids)
1616 +diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
1617 +index d9c4c9366049..091f1d0d0af1 100644
1618 +--- a/arch/powerpc/sysdev/xive/spapr.c
1619 ++++ b/arch/powerpc/sysdev/xive/spapr.c
1620 +@@ -356,7 +356,8 @@ static int xive_spapr_configure_queue(u32 target, struct xive_q *q, u8 prio,
1621 +
1622 + rc = plpar_int_get_queue_info(0, target, prio, &esn_page, &esn_size);
1623 + if (rc) {
1624 +- pr_err("Error %lld getting queue info prio %d\n", rc, prio);
1625 ++ pr_err("Error %lld getting queue info CPU %d prio %d\n", rc,
1626 ++ target, prio);
1627 + rc = -EIO;
1628 + goto fail;
1629 + }
1630 +@@ -370,7 +371,8 @@ static int xive_spapr_configure_queue(u32 target, struct xive_q *q, u8 prio,
1631 + /* Configure and enable the queue in HW */
1632 + rc = plpar_int_set_queue_config(flags, target, prio, qpage_phys, order);
1633 + if (rc) {
1634 +- pr_err("Error %lld setting queue for prio %d\n", rc, prio);
1635 ++ pr_err("Error %lld setting queue for CPU %d prio %d\n", rc,
1636 ++ target, prio);
1637 + rc = -EIO;
1638 + } else {
1639 + q->qpage = qpage;
1640 +@@ -389,8 +391,8 @@ static int xive_spapr_setup_queue(unsigned int cpu, struct xive_cpu *xc,
1641 + if (IS_ERR(qpage))
1642 + return PTR_ERR(qpage);
1643 +
1644 +- return xive_spapr_configure_queue(cpu, q, prio, qpage,
1645 +- xive_queue_shift);
1646 ++ return xive_spapr_configure_queue(get_hard_smp_processor_id(cpu),
1647 ++ q, prio, qpage, xive_queue_shift);
1648 + }
1649 +
1650 + static void xive_spapr_cleanup_queue(unsigned int cpu, struct xive_cpu *xc,
1651 +@@ -399,10 +401,12 @@ static void xive_spapr_cleanup_queue(unsigned int cpu, struct xive_cpu *xc,
1652 + struct xive_q *q = &xc->queue[prio];
1653 + unsigned int alloc_order;
1654 + long rc;
1655 ++ int hw_cpu = get_hard_smp_processor_id(cpu);
1656 +
1657 +- rc = plpar_int_set_queue_config(0, cpu, prio, 0, 0);
1658 ++ rc = plpar_int_set_queue_config(0, hw_cpu, prio, 0, 0);
1659 + if (rc)
1660 +- pr_err("Error %ld setting queue for prio %d\n", rc, prio);
1661 ++ pr_err("Error %ld setting queue for CPU %d prio %d\n", rc,
1662 ++ hw_cpu, prio);
1663 +
1664 + alloc_order = xive_alloc_order(xive_queue_shift);
1665 + free_pages((unsigned long)q->qpage, alloc_order);
1666 +diff --git a/arch/s390/kernel/compat_linux.c b/arch/s390/kernel/compat_linux.c
1667 +index 59eea9c65d3e..79b7a3438d54 100644
1668 +--- a/arch/s390/kernel/compat_linux.c
1669 ++++ b/arch/s390/kernel/compat_linux.c
1670 +@@ -110,7 +110,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setregid16, u16, rgid, u16, egid)
1671 +
1672 + COMPAT_SYSCALL_DEFINE1(s390_setgid16, u16, gid)
1673 + {
1674 +- return sys_setgid((gid_t)gid);
1675 ++ return sys_setgid(low2highgid(gid));
1676 + }
1677 +
1678 + COMPAT_SYSCALL_DEFINE2(s390_setreuid16, u16, ruid, u16, euid)
1679 +@@ -120,7 +120,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setreuid16, u16, ruid, u16, euid)
1680 +
1681 + COMPAT_SYSCALL_DEFINE1(s390_setuid16, u16, uid)
1682 + {
1683 +- return sys_setuid((uid_t)uid);
1684 ++ return sys_setuid(low2highuid(uid));
1685 + }
1686 +
1687 + COMPAT_SYSCALL_DEFINE3(s390_setresuid16, u16, ruid, u16, euid, u16, suid)
1688 +@@ -173,12 +173,12 @@ COMPAT_SYSCALL_DEFINE3(s390_getresgid16, u16 __user *, rgidp,
1689 +
1690 + COMPAT_SYSCALL_DEFINE1(s390_setfsuid16, u16, uid)
1691 + {
1692 +- return sys_setfsuid((uid_t)uid);
1693 ++ return sys_setfsuid(low2highuid(uid));
1694 + }
1695 +
1696 + COMPAT_SYSCALL_DEFINE1(s390_setfsgid16, u16, gid)
1697 + {
1698 +- return sys_setfsgid((gid_t)gid);
1699 ++ return sys_setfsgid(low2highgid(gid));
1700 + }
1701 +
1702 + static int groups16_to_user(u16 __user *grouplist, struct group_info *group_info)
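[The s390 compat fix matters because the 16-bit ID ABI uses 0xffff as the "no change" sentinel: a plain (uid_t) cast turns it into UID 65535 instead of (uid_t)-1. The low2highuid()/low2highgid() helpers preserve the sentinel while zero-extending everything else. A userspace sketch of the same conversion, where low2high() is an illustrative mirror of the kernel macro:

    #include <stdio.h>
    #include <stdint.h>

    /* illustrative mirror of the kernel's low2highuid(): the 16-bit
     * sentinel -1 stays -1, everything else is zero-extended */
    static uint32_t low2high(uint16_t id)
    {
        return id == (uint16_t)-1 ? (uint32_t)-1 : (uint32_t)id;
    }

    int main(void)
    {
        printf("plain cast: %u\n", (uint32_t)(uint16_t)-1);  /* 65535: wrong */
        printf("low2high:   %u\n", low2high((uint16_t)-1));  /* 4294967295 == -1 */
        printf("ordinary:   %u\n", low2high(1000));          /* unchanged */
        return 0;
    }
]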
1703 +diff --git a/arch/sh/kernel/dwarf.c b/arch/sh/kernel/dwarf.c
1704 +index e1d751ae2498..1a2526676a87 100644
1705 +--- a/arch/sh/kernel/dwarf.c
1706 ++++ b/arch/sh/kernel/dwarf.c
1707 +@@ -1172,11 +1172,11 @@ static int __init dwarf_unwinder_init(void)
1708 +
1709 + dwarf_frame_cachep = kmem_cache_create("dwarf_frames",
1710 + sizeof(struct dwarf_frame), 0,
1711 +- SLAB_PANIC | SLAB_HWCACHE_ALIGN | SLAB_NOTRACK, NULL);
1712 ++ SLAB_PANIC | SLAB_HWCACHE_ALIGN, NULL);
1713 +
1714 + dwarf_reg_cachep = kmem_cache_create("dwarf_regs",
1715 + sizeof(struct dwarf_reg), 0,
1716 +- SLAB_PANIC | SLAB_HWCACHE_ALIGN | SLAB_NOTRACK, NULL);
1717 ++ SLAB_PANIC | SLAB_HWCACHE_ALIGN, NULL);
1718 +
1719 + dwarf_frame_pool = mempool_create_slab_pool(DWARF_FRAME_MIN_REQ,
1720 + dwarf_frame_cachep);
1721 +diff --git a/arch/sh/kernel/process.c b/arch/sh/kernel/process.c
1722 +index b2d9963d5978..68b1a67533ce 100644
1723 +--- a/arch/sh/kernel/process.c
1724 ++++ b/arch/sh/kernel/process.c
1725 +@@ -59,7 +59,7 @@ void arch_task_cache_init(void)
1726 +
1727 + task_xstate_cachep = kmem_cache_create("task_xstate", xstate_size,
1728 + __alignof__(union thread_xstate),
1729 +- SLAB_PANIC | SLAB_NOTRACK, NULL);
1730 ++ SLAB_PANIC, NULL);
1731 + }
1732 +
1733 + #ifdef CONFIG_SH_FPU_EMU
1734 +diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
1735 +index a0cc1be767c8..984e9d65ea0d 100644
1736 +--- a/arch/sparc/mm/init_64.c
1737 ++++ b/arch/sparc/mm/init_64.c
1738 +@@ -2934,7 +2934,7 @@ void __flush_tlb_all(void)
1739 + pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
1740 + unsigned long address)
1741 + {
1742 +- struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
1743 ++ struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
1744 + pte_t *pte = NULL;
1745 +
1746 + if (page)
1747 +@@ -2946,7 +2946,7 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
1748 + pgtable_t pte_alloc_one(struct mm_struct *mm,
1749 + unsigned long address)
1750 + {
1751 +- struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
1752 ++ struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
1753 + if (!page)
1754 + return NULL;
1755 + if (!pgtable_page_ctor(page)) {
1756 +diff --git a/arch/unicore32/include/asm/pgalloc.h b/arch/unicore32/include/asm/pgalloc.h
1757 +index 26775793c204..f0fdb268f8f2 100644
1758 +--- a/arch/unicore32/include/asm/pgalloc.h
1759 ++++ b/arch/unicore32/include/asm/pgalloc.h
1760 +@@ -28,7 +28,7 @@ extern void free_pgd_slow(struct mm_struct *mm, pgd_t *pgd);
1761 + #define pgd_alloc(mm) get_pgd_slow(mm)
1762 + #define pgd_free(mm, pgd) free_pgd_slow(mm, pgd)
1763 +
1764 +-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
1765 ++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
1766 +
1767 + /*
1768 + * Allocate one PTE table.
1769 +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
1770 +index 17de6acc0eab..559b37bf5a2e 100644
1771 +--- a/arch/x86/Kconfig
1772 ++++ b/arch/x86/Kconfig
1773 +@@ -111,7 +111,6 @@ config X86
1774 + select HAVE_ARCH_JUMP_LABEL
1775 + select HAVE_ARCH_KASAN if X86_64
1776 + select HAVE_ARCH_KGDB
1777 +- select HAVE_ARCH_KMEMCHECK
1778 + select HAVE_ARCH_MMAP_RND_BITS if MMU
1779 + select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
1780 + select HAVE_ARCH_COMPAT_MMAP_BASES if MMU && COMPAT
1781 +@@ -1443,7 +1442,7 @@ config ARCH_DMA_ADDR_T_64BIT
1782 +
1783 + config X86_DIRECT_GBPAGES
1784 + def_bool y
1785 +- depends on X86_64 && !DEBUG_PAGEALLOC && !KMEMCHECK
1786 ++ depends on X86_64 && !DEBUG_PAGEALLOC
1787 + ---help---
1788 + Certain kernel features effectively disable kernel
1789 + linear 1 GB mappings (even if the CPU otherwise
1790 +diff --git a/arch/x86/Makefile b/arch/x86/Makefile
1791 +index 504b1a4535ac..fad55160dcb9 100644
1792 +--- a/arch/x86/Makefile
1793 ++++ b/arch/x86/Makefile
1794 +@@ -158,11 +158,6 @@ ifdef CONFIG_X86_X32
1795 + endif
1796 + export CONFIG_X86_X32_ABI
1797 +
1798 +-# Don't unroll struct assignments with kmemcheck enabled
1799 +-ifeq ($(CONFIG_KMEMCHECK),y)
1800 +- KBUILD_CFLAGS += $(call cc-option,-fno-builtin-memcpy)
1801 +-endif
1802 +-
1803 + #
1804 + # If the function graph tracer is used with mcount instead of fentry,
1805 + # '-maccumulate-outgoing-args' is needed to prevent a GCC bug
1806 +diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
1807 +index 3f48f695d5e6..dce7092ab24a 100644
1808 +--- a/arch/x86/entry/calling.h
1809 ++++ b/arch/x86/entry/calling.h
1810 +@@ -97,80 +97,69 @@ For 32-bit we have the following conventions - kernel is built with
1811 +
1812 + #define SIZEOF_PTREGS 21*8
1813 +
1814 +- .macro ALLOC_PT_GPREGS_ON_STACK
1815 +- addq $-(15*8), %rsp
1816 +- .endm
1817 ++.macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax
1818 ++ /*
1819 ++ * Push registers and sanitize registers of values that a
1820 ++ * speculation attack might otherwise want to exploit. The
1821 ++ * lower registers are likely clobbered well before they
1822 ++ * could be put to use in a speculative execution gadget.
1823 ++ * Interleave XOR with PUSH for better uop scheduling:
1824 ++ */
1825 ++ pushq %rdi /* pt_regs->di */
1826 ++ pushq %rsi /* pt_regs->si */
1827 ++ pushq \rdx /* pt_regs->dx */
1828 ++ pushq %rcx /* pt_regs->cx */
1829 ++ pushq \rax /* pt_regs->ax */
1830 ++ pushq %r8 /* pt_regs->r8 */
1831 ++ xorq %r8, %r8 /* nospec r8 */
1832 ++ pushq %r9 /* pt_regs->r9 */
1833 ++ xorq %r9, %r9 /* nospec r9 */
1834 ++ pushq %r10 /* pt_regs->r10 */
1835 ++ xorq %r10, %r10 /* nospec r10 */
1836 ++ pushq %r11 /* pt_regs->r11 */
1837 ++ xorq %r11, %r11 /* nospec r11 */
1838 ++ pushq %rbx /* pt_regs->rbx */
1839 ++ xorl %ebx, %ebx /* nospec rbx */
1840 ++ pushq %rbp /* pt_regs->rbp */
1841 ++ xorl %ebp, %ebp /* nospec rbp */
1842 ++ pushq %r12 /* pt_regs->r12 */
1843 ++ xorq %r12, %r12 /* nospec r12 */
1844 ++ pushq %r13 /* pt_regs->r13 */
1845 ++ xorq %r13, %r13 /* nospec r13 */
1846 ++ pushq %r14 /* pt_regs->r14 */
1847 ++ xorq %r14, %r14 /* nospec r14 */
1848 ++ pushq %r15 /* pt_regs->r15 */
1849 ++ xorq %r15, %r15 /* nospec r15 */
1850 ++ UNWIND_HINT_REGS
1851 ++.endm
1852 +
1853 +- .macro SAVE_C_REGS_HELPER offset=0 rax=1 rcx=1 r8910=1 r11=1
1854 +- .if \r11
1855 +- movq %r11, 6*8+\offset(%rsp)
1856 +- .endif
1857 +- .if \r8910
1858 +- movq %r10, 7*8+\offset(%rsp)
1859 +- movq %r9, 8*8+\offset(%rsp)
1860 +- movq %r8, 9*8+\offset(%rsp)
1861 +- .endif
1862 +- .if \rax
1863 +- movq %rax, 10*8+\offset(%rsp)
1864 +- .endif
1865 +- .if \rcx
1866 +- movq %rcx, 11*8+\offset(%rsp)
1867 +- .endif
1868 +- movq %rdx, 12*8+\offset(%rsp)
1869 +- movq %rsi, 13*8+\offset(%rsp)
1870 +- movq %rdi, 14*8+\offset(%rsp)
1871 +- UNWIND_HINT_REGS offset=\offset extra=0
1872 +- .endm
1873 +- .macro SAVE_C_REGS offset=0
1874 +- SAVE_C_REGS_HELPER \offset, 1, 1, 1, 1
1875 +- .endm
1876 +- .macro SAVE_C_REGS_EXCEPT_RAX_RCX offset=0
1877 +- SAVE_C_REGS_HELPER \offset, 0, 0, 1, 1
1878 +- .endm
1879 +- .macro SAVE_C_REGS_EXCEPT_R891011
1880 +- SAVE_C_REGS_HELPER 0, 1, 1, 0, 0
1881 +- .endm
1882 +- .macro SAVE_C_REGS_EXCEPT_RCX_R891011
1883 +- SAVE_C_REGS_HELPER 0, 1, 0, 0, 0
1884 +- .endm
1885 +- .macro SAVE_C_REGS_EXCEPT_RAX_RCX_R11
1886 +- SAVE_C_REGS_HELPER 0, 0, 0, 1, 0
1887 +- .endm
1888 +-
1889 +- .macro SAVE_EXTRA_REGS offset=0
1890 +- movq %r15, 0*8+\offset(%rsp)
1891 +- movq %r14, 1*8+\offset(%rsp)
1892 +- movq %r13, 2*8+\offset(%rsp)
1893 +- movq %r12, 3*8+\offset(%rsp)
1894 +- movq %rbp, 4*8+\offset(%rsp)
1895 +- movq %rbx, 5*8+\offset(%rsp)
1896 +- UNWIND_HINT_REGS offset=\offset
1897 +- .endm
1898 +-
1899 +- .macro POP_EXTRA_REGS
1900 ++.macro POP_REGS pop_rdi=1 skip_r11rcx=0
1901 + popq %r15
1902 + popq %r14
1903 + popq %r13
1904 + popq %r12
1905 + popq %rbp
1906 + popq %rbx
1907 +- .endm
1908 +-
1909 +- .macro POP_C_REGS
1910 ++ .if \skip_r11rcx
1911 ++ popq %rsi
1912 ++ .else
1913 + popq %r11
1914 ++ .endif
1915 + popq %r10
1916 + popq %r9
1917 + popq %r8
1918 + popq %rax
1919 ++ .if \skip_r11rcx
1920 ++ popq %rsi
1921 ++ .else
1922 + popq %rcx
1923 ++ .endif
1924 + popq %rdx
1925 + popq %rsi
1926 ++ .if \pop_rdi
1927 + popq %rdi
1928 +- .endm
1929 +-
1930 +- .macro icebp
1931 +- .byte 0xf1
1932 +- .endm
1933 ++ .endif
1934 ++.endm
1935 +
1936 + /*
1937 + * This is a sneaky trick to help the unwinder find pt_regs on the stack. The
1938 +@@ -178,7 +167,7 @@ For 32-bit we have the following conventions - kernel is built with
1939 + * is just setting the LSB, which makes it an invalid stack address and is also
1940 + * a signal to the unwinder that it's a pt_regs pointer in disguise.
1941 + *
1942 +- * NOTE: This macro must be used *after* SAVE_EXTRA_REGS because it corrupts
1943 ++ * NOTE: This macro must be used *after* PUSH_AND_CLEAR_REGS because it corrupts
1944 + * the original rbp.
1945 + */
1946 + .macro ENCODE_FRAME_POINTER ptregs_offset=0
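[PUSH_AND_CLEAR_REGS builds pt_regs in a single pass, pushing in reverse field order (so %r15 lands at the lowest address) and zeroing each register as it is saved, so that stale user-controlled values cannot feed a speculative-execution gadget later in the kernel. A small userspace check of the layout implied by that push order; the struct is a hypothetical mirror of the x86-64 pt_regs layout, not the kernel header:

    #include <stdio.h>
    #include <stddef.h>

    /* hypothetical mirror of the x86-64 pt_regs layout implied by the
     * push order: %rdi is pushed first (highest address), %r15 last
     * (lowest), so the struct begins with r15 */
    struct fake_pt_regs {
        unsigned long r15, r14, r13, r12, rbp, rbx;
        unsigned long r11, r10, r9, r8;
        unsigned long ax, cx, dx, si, di;
        unsigned long orig_ax, ip, cs, flags, sp, ss;
    };

    int main(void)
    {
        /* 21 slots of 8 bytes, matching SIZEOF_PTREGS (21*8) */
        printf("sizeof = %zu (expect %d)\n",
               sizeof(struct fake_pt_regs), 21 * 8);
        printf("di sits at offset %zu, right below orig_ax at %zu\n",
               offsetof(struct fake_pt_regs, di),
               offsetof(struct fake_pt_regs, orig_ax));
        return 0;
    }
]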
1947 +diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
1948 +index 16e2d72e79a0..68a2d76e4f8f 100644
1949 +--- a/arch/x86/entry/entry_64.S
1950 ++++ b/arch/x86/entry/entry_64.S
1951 +@@ -209,7 +209,7 @@ ENTRY(entry_SYSCALL_64)
1952 +
1953 + swapgs
1954 + /*
1955 +- * This path is not taken when PAGE_TABLE_ISOLATION is disabled so it
1956 ++ * This path is only taken when PAGE_TABLE_ISOLATION is disabled so it
1957 + * is not required to switch CR3.
1958 + */
1959 + movq %rsp, PER_CPU_VAR(rsp_scratch)
1960 +@@ -223,22 +223,8 @@ ENTRY(entry_SYSCALL_64)
1961 + pushq %rcx /* pt_regs->ip */
1962 + GLOBAL(entry_SYSCALL_64_after_hwframe)
1963 + pushq %rax /* pt_regs->orig_ax */
1964 +- pushq %rdi /* pt_regs->di */
1965 +- pushq %rsi /* pt_regs->si */
1966 +- pushq %rdx /* pt_regs->dx */
1967 +- pushq %rcx /* pt_regs->cx */
1968 +- pushq $-ENOSYS /* pt_regs->ax */
1969 +- pushq %r8 /* pt_regs->r8 */
1970 +- pushq %r9 /* pt_regs->r9 */
1971 +- pushq %r10 /* pt_regs->r10 */
1972 +- pushq %r11 /* pt_regs->r11 */
1973 +- pushq %rbx /* pt_regs->rbx */
1974 +- pushq %rbp /* pt_regs->rbp */
1975 +- pushq %r12 /* pt_regs->r12 */
1976 +- pushq %r13 /* pt_regs->r13 */
1977 +- pushq %r14 /* pt_regs->r14 */
1978 +- pushq %r15 /* pt_regs->r15 */
1979 +- UNWIND_HINT_REGS
1980 ++
1981 ++ PUSH_AND_CLEAR_REGS rax=$-ENOSYS
1982 +
1983 + TRACE_IRQS_OFF
1984 +
1985 +@@ -317,15 +303,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
1986 + syscall_return_via_sysret:
1987 + /* rcx and r11 are already restored (see code above) */
1988 + UNWIND_HINT_EMPTY
1989 +- POP_EXTRA_REGS
1990 +- popq %rsi /* skip r11 */
1991 +- popq %r10
1992 +- popq %r9
1993 +- popq %r8
1994 +- popq %rax
1995 +- popq %rsi /* skip rcx */
1996 +- popq %rdx
1997 +- popq %rsi
1998 ++ POP_REGS pop_rdi=0 skip_r11rcx=1
1999 +
2000 + /*
2001 + * Now all regs are restored except RSP and RDI.
2002 +@@ -555,9 +533,7 @@ END(irq_entries_start)
2003 + call switch_to_thread_stack
2004 + 1:
2005 +
2006 +- ALLOC_PT_GPREGS_ON_STACK
2007 +- SAVE_C_REGS
2008 +- SAVE_EXTRA_REGS
2009 ++ PUSH_AND_CLEAR_REGS
2010 + ENCODE_FRAME_POINTER
2011 +
2012 + testb $3, CS(%rsp)
2013 +@@ -618,15 +594,7 @@ GLOBAL(swapgs_restore_regs_and_return_to_usermode)
2014 + ud2
2015 + 1:
2016 + #endif
2017 +- POP_EXTRA_REGS
2018 +- popq %r11
2019 +- popq %r10
2020 +- popq %r9
2021 +- popq %r8
2022 +- popq %rax
2023 +- popq %rcx
2024 +- popq %rdx
2025 +- popq %rsi
2026 ++ POP_REGS pop_rdi=0
2027 +
2028 + /*
2029 + * The stack is now user RDI, orig_ax, RIP, CS, EFLAGS, RSP, SS.
2030 +@@ -684,8 +652,7 @@ GLOBAL(restore_regs_and_return_to_kernel)
2031 + ud2
2032 + 1:
2033 + #endif
2034 +- POP_EXTRA_REGS
2035 +- POP_C_REGS
2036 ++ POP_REGS
2037 + addq $8, %rsp /* skip regs->orig_ax */
2038 + INTERRUPT_RETURN
2039 +
2040 +@@ -900,7 +867,9 @@ ENTRY(\sym)
2041 + pushq $-1 /* ORIG_RAX: no syscall to restart */
2042 + .endif
2043 +
2044 +- ALLOC_PT_GPREGS_ON_STACK
2045 ++ /* Save all registers in pt_regs */
2046 ++ PUSH_AND_CLEAR_REGS
2047 ++ ENCODE_FRAME_POINTER
2048 +
2049 + .if \paranoid < 2
2050 + testb $3, CS(%rsp) /* If coming from userspace, switch stacks */
2051 +@@ -1111,9 +1080,7 @@ ENTRY(xen_failsafe_callback)
2052 + addq $0x30, %rsp
2053 + UNWIND_HINT_IRET_REGS
2054 + pushq $-1 /* orig_ax = -1 => not a system call */
2055 +- ALLOC_PT_GPREGS_ON_STACK
2056 +- SAVE_C_REGS
2057 +- SAVE_EXTRA_REGS
2058 ++ PUSH_AND_CLEAR_REGS
2059 + ENCODE_FRAME_POINTER
2060 + jmp error_exit
2061 + END(xen_failsafe_callback)
2062 +@@ -1150,16 +1117,13 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
2063 + #endif
2064 +
2065 + /*
2066 +- * Save all registers in pt_regs, and switch gs if needed.
2067 ++ * Switch gs if needed.
2068 + * Use slow, but surefire "are we in kernel?" check.
2069 + * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
2070 + */
2071 + ENTRY(paranoid_entry)
2072 + UNWIND_HINT_FUNC
2073 + cld
2074 +- SAVE_C_REGS 8
2075 +- SAVE_EXTRA_REGS 8
2076 +- ENCODE_FRAME_POINTER 8
2077 + movl $1, %ebx
2078 + movl $MSR_GS_BASE, %ecx
2079 + rdmsr
2080 +@@ -1198,21 +1162,18 @@ ENTRY(paranoid_exit)
2081 + jmp .Lparanoid_exit_restore
2082 + .Lparanoid_exit_no_swapgs:
2083 + TRACE_IRQS_IRETQ_DEBUG
2084 ++ RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
2085 + .Lparanoid_exit_restore:
2086 + jmp restore_regs_and_return_to_kernel
2087 + END(paranoid_exit)
2088 +
2089 + /*
2090 +- * Save all registers in pt_regs, and switch gs if needed.
2091 ++ * Switch gs if needed.
2092 + * Return: EBX=0: came from user mode; EBX=1: otherwise
2093 + */
2094 + ENTRY(error_entry)
2095 +- UNWIND_HINT_FUNC
2096 ++ UNWIND_HINT_REGS offset=8
2097 + cld
2098 +- SAVE_C_REGS 8
2099 +- SAVE_EXTRA_REGS 8
2100 +- ENCODE_FRAME_POINTER 8
2101 +- xorl %ebx, %ebx
2102 + testb $3, CS+8(%rsp)
2103 + jz .Lerror_kernelspace
2104 +
2105 +@@ -1393,22 +1354,7 @@ ENTRY(nmi)
2106 + pushq 1*8(%rdx) /* pt_regs->rip */
2107 + UNWIND_HINT_IRET_REGS
2108 + pushq $-1 /* pt_regs->orig_ax */
2109 +- pushq %rdi /* pt_regs->di */
2110 +- pushq %rsi /* pt_regs->si */
2111 +- pushq (%rdx) /* pt_regs->dx */
2112 +- pushq %rcx /* pt_regs->cx */
2113 +- pushq %rax /* pt_regs->ax */
2114 +- pushq %r8 /* pt_regs->r8 */
2115 +- pushq %r9 /* pt_regs->r9 */
2116 +- pushq %r10 /* pt_regs->r10 */
2117 +- pushq %r11 /* pt_regs->r11 */
2118 +- pushq %rbx /* pt_regs->rbx */
2119 +- pushq %rbp /* pt_regs->rbp */
2120 +- pushq %r12 /* pt_regs->r12 */
2121 +- pushq %r13 /* pt_regs->r13 */
2122 +- pushq %r14 /* pt_regs->r14 */
2123 +- pushq %r15 /* pt_regs->r15 */
2124 +- UNWIND_HINT_REGS
2125 ++ PUSH_AND_CLEAR_REGS rdx=(%rdx)
2126 + ENCODE_FRAME_POINTER
2127 +
2128 + /*
2129 +@@ -1618,7 +1564,8 @@ end_repeat_nmi:
2130 + * frame to point back to repeat_nmi.
2131 + */
2132 + pushq $-1 /* ORIG_RAX: no syscall to restart */
2133 +- ALLOC_PT_GPREGS_ON_STACK
2134 ++ PUSH_AND_CLEAR_REGS
2135 ++ ENCODE_FRAME_POINTER
2136 +
2137 + /*
2138 + * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
2139 +@@ -1642,8 +1589,7 @@ end_repeat_nmi:
2140 + nmi_swapgs:
2141 + SWAPGS_UNSAFE_STACK
2142 + nmi_restore:
2143 +- POP_EXTRA_REGS
2144 +- POP_C_REGS
2145 ++ POP_REGS
2146 +
2147 + /*
2148 + * Skip orig_ax and the "outermost" frame to point RSP at the "iret"
2149 +diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
2150 +index 98d5358e4041..fd65e016e413 100644
2151 +--- a/arch/x86/entry/entry_64_compat.S
2152 ++++ b/arch/x86/entry/entry_64_compat.S
2153 +@@ -85,15 +85,25 @@ ENTRY(entry_SYSENTER_compat)
2154 + pushq %rcx /* pt_regs->cx */
2155 + pushq $-ENOSYS /* pt_regs->ax */
2156 + pushq $0 /* pt_regs->r8 = 0 */
2157 ++ xorq %r8, %r8 /* nospec r8 */
2158 + pushq $0 /* pt_regs->r9 = 0 */
2159 ++ xorq %r9, %r9 /* nospec r9 */
2160 + pushq $0 /* pt_regs->r10 = 0 */
2161 ++ xorq %r10, %r10 /* nospec r10 */
2162 + pushq $0 /* pt_regs->r11 = 0 */
2163 ++ xorq %r11, %r11 /* nospec r11 */
2164 + pushq %rbx /* pt_regs->rbx */
2165 ++ xorl %ebx, %ebx /* nospec rbx */
2166 + pushq %rbp /* pt_regs->rbp (will be overwritten) */
2167 ++ xorl %ebp, %ebp /* nospec rbp */
2168 + pushq $0 /* pt_regs->r12 = 0 */
2169 ++ xorq %r12, %r12 /* nospec r12 */
2170 + pushq $0 /* pt_regs->r13 = 0 */
2171 ++ xorq %r13, %r13 /* nospec r13 */
2172 + pushq $0 /* pt_regs->r14 = 0 */
2173 ++ xorq %r14, %r14 /* nospec r14 */
2174 + pushq $0 /* pt_regs->r15 = 0 */
2175 ++ xorq %r15, %r15 /* nospec r15 */
2176 + cld
2177 +
2178 + /*
2179 +@@ -214,15 +224,25 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
2180 + pushq %rbp /* pt_regs->cx (stashed in bp) */
2181 + pushq $-ENOSYS /* pt_regs->ax */
2182 + pushq $0 /* pt_regs->r8 = 0 */
2183 ++ xorq %r8, %r8 /* nospec r8 */
2184 + pushq $0 /* pt_regs->r9 = 0 */
2185 ++ xorq %r9, %r9 /* nospec r9 */
2186 + pushq $0 /* pt_regs->r10 = 0 */
2187 ++ xorq %r10, %r10 /* nospec r10 */
2188 + pushq $0 /* pt_regs->r11 = 0 */
2189 ++ xorq %r11, %r11 /* nospec r11 */
2190 + pushq %rbx /* pt_regs->rbx */
2191 ++ xorl %ebx, %ebx /* nospec rbx */
2192 + pushq %rbp /* pt_regs->rbp (will be overwritten) */
2193 ++ xorl %ebp, %ebp /* nospec rbp */
2194 + pushq $0 /* pt_regs->r12 = 0 */
2195 ++ xorq %r12, %r12 /* nospec r12 */
2196 + pushq $0 /* pt_regs->r13 = 0 */
2197 ++ xorq %r13, %r13 /* nospec r13 */
2198 + pushq $0 /* pt_regs->r14 = 0 */
2199 ++ xorq %r14, %r14 /* nospec r14 */
2200 + pushq $0 /* pt_regs->r15 = 0 */
2201 ++ xorq %r15, %r15 /* nospec r15 */
2202 +
2203 + /*
2204 + * User mode is traced as though IRQs are on, and SYSENTER
2205 +@@ -338,15 +358,25 @@ ENTRY(entry_INT80_compat)
2206 + pushq %rcx /* pt_regs->cx */
2207 + pushq $-ENOSYS /* pt_regs->ax */
2208 + pushq $0 /* pt_regs->r8 = 0 */
2209 ++ xorq %r8, %r8 /* nospec r8 */
2210 + pushq $0 /* pt_regs->r9 = 0 */
2211 ++ xorq %r9, %r9 /* nospec r9 */
2212 + pushq $0 /* pt_regs->r10 = 0 */
2213 ++ xorq %r10, %r10 /* nospec r10 */
2214 + pushq $0 /* pt_regs->r11 = 0 */
2215 ++ xorq %r11, %r11 /* nospec r11 */
2216 + pushq %rbx /* pt_regs->rbx */
2217 ++ xorl %ebx, %ebx /* nospec rbx */
2218 + pushq %rbp /* pt_regs->rbp */
2219 ++ xorl %ebp, %ebp /* nospec rbp */
2220 + pushq %r12 /* pt_regs->r12 */
2221 ++ xorq %r12, %r12 /* nospec r12 */
2222 + pushq %r13 /* pt_regs->r13 */
2223 ++ xorq %r13, %r13 /* nospec r13 */
2224 + pushq %r14 /* pt_regs->r14 */
2225 ++ xorq %r14, %r14 /* nospec r14 */
2226 + pushq %r15 /* pt_regs->r15 */
2227 ++ xorq %r15, %r15 /* nospec r15 */
2228 + cld
2229 +
2230 + /*
2231 +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
2232 +index 09c26a4f139c..1c2558430cf0 100644
2233 +--- a/arch/x86/events/intel/core.c
2234 ++++ b/arch/x86/events/intel/core.c
2235 +@@ -3559,7 +3559,7 @@ static int intel_snb_pebs_broken(int cpu)
2236 + break;
2237 +
2238 + case INTEL_FAM6_SANDYBRIDGE_X:
2239 +- switch (cpu_data(cpu).x86_mask) {
2240 ++ switch (cpu_data(cpu).x86_stepping) {
2241 + case 6: rev = 0x618; break;
2242 + case 7: rev = 0x70c; break;
2243 + }
2244 +diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
2245 +index ae64d0b69729..cf372b90557e 100644
2246 +--- a/arch/x86/events/intel/lbr.c
2247 ++++ b/arch/x86/events/intel/lbr.c
2248 +@@ -1186,7 +1186,7 @@ void __init intel_pmu_lbr_init_atom(void)
2249 + * on PMU interrupt
2250 + */
2251 + if (boot_cpu_data.x86_model == 28
2252 +- && boot_cpu_data.x86_mask < 10) {
2253 ++ && boot_cpu_data.x86_stepping < 10) {
2254 + pr_cont("LBR disabled due to erratum");
2255 + return;
2256 + }
2257 +diff --git a/arch/x86/events/intel/p6.c b/arch/x86/events/intel/p6.c
2258 +index a5604c352930..408879b0c0d4 100644
2259 +--- a/arch/x86/events/intel/p6.c
2260 ++++ b/arch/x86/events/intel/p6.c
2261 +@@ -234,7 +234,7 @@ static __initconst const struct x86_pmu p6_pmu = {
2262 +
2263 + static __init void p6_pmu_rdpmc_quirk(void)
2264 + {
2265 +- if (boot_cpu_data.x86_mask < 9) {
2266 ++ if (boot_cpu_data.x86_stepping < 9) {
2267 + /*
2268 + * PPro erratum 26; fixed in stepping 9 and above.
2269 + */
2270 +diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
2271 +index 8d0ec9df1cbe..f077401869ee 100644
2272 +--- a/arch/x86/include/asm/acpi.h
2273 ++++ b/arch/x86/include/asm/acpi.h
2274 +@@ -94,7 +94,7 @@ static inline unsigned int acpi_processor_cstate_check(unsigned int max_cstate)
2275 + if (boot_cpu_data.x86 == 0x0F &&
2276 + boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
2277 + boot_cpu_data.x86_model <= 0x05 &&
2278 +- boot_cpu_data.x86_mask < 0x0A)
2279 ++ boot_cpu_data.x86_stepping < 0x0A)
2280 + return 1;
2281 + else if (boot_cpu_has(X86_BUG_AMD_APIC_C1E))
2282 + return 1;
2283 +diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
2284 +index 1e7c955b6303..4db77731e130 100644
2285 +--- a/arch/x86/include/asm/barrier.h
2286 ++++ b/arch/x86/include/asm/barrier.h
2287 +@@ -40,7 +40,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
2288 +
2289 + asm ("cmp %1,%2; sbb %0,%0;"
2290 + :"=r" (mask)
2291 +- :"r"(size),"r" (index)
2292 ++ :"g"(size),"r" (index)
2293 + :"cc");
2294 + return mask;
2295 + }
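[array_index_mask_nospec() produces an all-ones mask when index < size and zero otherwise, without a conditional branch the CPU could mispredict: cmp sets the carry flag for an unsigned index < size, and sbb %0,%0 expands that single flag bit into 0 or ~0UL. (The hunk merely relaxes the size constraint from "r" to "g" so an immediate operand can be used.) A portable sketch of how callers clamp an index with the mask; note that a compiler may still emit a branch for this C form, which is exactly why the kernel uses the asm version:

    #include <stdio.h>

    /* branchless equivalent of the cmp/sbb pair: ~0UL iff index < size */
    static unsigned long mask_nospec(unsigned long index, unsigned long size)
    {
        return 0UL - (unsigned long)(index < size);
    }

    int main(void)
    {
        unsigned long size = 16;
        unsigned long idx[] = { 3, 15, 16, 1000 };
        int i;

        for (i = 0; i < 4; i++) {
            /* out-of-range indexes collapse to 0 instead of remaining
             * usable as a speculative out-of-bounds offset */
            unsigned long safe = idx[i] & mask_nospec(idx[i], size);
            printf("index %4lu -> %lu\n", idx[i], safe);
        }
        return 0;
    }
]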
2296 +diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
2297 +index 34d99af43994..6804d6642767 100644
2298 +--- a/arch/x86/include/asm/bug.h
2299 ++++ b/arch/x86/include/asm/bug.h
2300 +@@ -5,23 +5,20 @@
2301 + #include <linux/stringify.h>
2302 +
2303 + /*
2304 +- * Since some emulators terminate on UD2, we cannot use it for WARN.
2305 +- * Since various instruction decoders disagree on the length of UD1,
2306 +- * we cannot use it either. So use UD0 for WARN.
2307 ++ * Even though some emulators terminate on UD2, we use it for WARN().
2308 + *
2309 +- * (binutils knows about "ud1" but {en,de}codes it as 2 bytes, whereas
2310 +- * our kernel decoder thinks it takes a ModRM byte, which seems consistent
2311 +- * with various things like the Intel SDM instruction encoding rules)
2312 ++ * Various instruction decoders/specs disagree on the encoding of
2313 ++ * UD0/UD1, so we avoid them for this purpose.
2314 + */
2315 +
2316 +-#define ASM_UD0 ".byte 0x0f, 0xff"
2317 ++#define ASM_UD0 ".byte 0x0f, 0xff" /* + ModRM (for Intel) */
2318 + #define ASM_UD1 ".byte 0x0f, 0xb9" /* + ModRM */
2319 + #define ASM_UD2 ".byte 0x0f, 0x0b"
2320 +
2321 + #define INSN_UD0 0xff0f
2322 + #define INSN_UD2 0x0b0f
2323 +
2324 +-#define LEN_UD0 2
2325 ++#define LEN_UD2 2
2326 +
2327 + #ifdef CONFIG_GENERIC_BUG
2328 +
2329 +@@ -77,7 +74,11 @@ do { \
2330 + unreachable(); \
2331 + } while (0)
2332 +
2333 +-#define __WARN_FLAGS(flags) _BUG_FLAGS(ASM_UD0, BUGFLAG_WARNING|(flags))
2334 ++#define __WARN_FLAGS(flags) \
2335 ++do { \
2336 ++ _BUG_FLAGS(ASM_UD2, BUGFLAG_WARNING|(flags)); \
2337 ++ annotate_reachable(); \
2338 ++} while (0)
2339 +
2340 + #include <asm-generic/bug.h>
2341 +
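[WARN now emits a genuine ud2 (bytes 0x0f 0x0b) followed by annotate_reachable(), since objtool would otherwise treat everything after a trapping instruction as unreachable. INSN_UD2 (0x0b0f) is those same two bytes read as a little-endian 16-bit word, which is how the #UD handler matches the faulting opcode. A quick illustrative check of that byte order:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        const unsigned char ud2[] = { 0x0f, 0x0b };  /* the ASM_UD2 bytes */
        uint16_t insn;

        /* read the opcode as a little-endian u16, as the #UD handler
         * effectively does when comparing against INSN_UD2 */
        memcpy(&insn, ud2, sizeof(insn));
        printf("insn = %#06x (expect 0x0b0f on little-endian)\n",
               (unsigned)insn);
        return 0;
    }
]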
2342 +diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
2343 +index 836ca1178a6a..69f16f0729d0 100644
2344 +--- a/arch/x86/include/asm/dma-mapping.h
2345 ++++ b/arch/x86/include/asm/dma-mapping.h
2346 +@@ -7,7 +7,6 @@
2347 + * Documentation/DMA-API.txt for documentation.
2348 + */
2349 +
2350 +-#include <linux/kmemcheck.h>
2351 + #include <linux/scatterlist.h>
2352 + #include <linux/dma-debug.h>
2353 + #include <asm/io.h>
2354 +diff --git a/arch/x86/include/asm/kmemcheck.h b/arch/x86/include/asm/kmemcheck.h
2355 +deleted file mode 100644
2356 +index 945a0337fbcf..000000000000
2357 +--- a/arch/x86/include/asm/kmemcheck.h
2358 ++++ /dev/null
2359 +@@ -1,43 +0,0 @@
2360 +-/* SPDX-License-Identifier: GPL-2.0 */
2361 +-#ifndef ASM_X86_KMEMCHECK_H
2362 +-#define ASM_X86_KMEMCHECK_H
2363 +-
2364 +-#include <linux/types.h>
2365 +-#include <asm/ptrace.h>
2366 +-
2367 +-#ifdef CONFIG_KMEMCHECK
2368 +-bool kmemcheck_active(struct pt_regs *regs);
2369 +-
2370 +-void kmemcheck_show(struct pt_regs *regs);
2371 +-void kmemcheck_hide(struct pt_regs *regs);
2372 +-
2373 +-bool kmemcheck_fault(struct pt_regs *regs,
2374 +- unsigned long address, unsigned long error_code);
2375 +-bool kmemcheck_trap(struct pt_regs *regs);
2376 +-#else
2377 +-static inline bool kmemcheck_active(struct pt_regs *regs)
2378 +-{
2379 +- return false;
2380 +-}
2381 +-
2382 +-static inline void kmemcheck_show(struct pt_regs *regs)
2383 +-{
2384 +-}
2385 +-
2386 +-static inline void kmemcheck_hide(struct pt_regs *regs)
2387 +-{
2388 +-}
2389 +-
2390 +-static inline bool kmemcheck_fault(struct pt_regs *regs,
2391 +- unsigned long address, unsigned long error_code)
2392 +-{
2393 +- return false;
2394 +-}
2395 +-
2396 +-static inline bool kmemcheck_trap(struct pt_regs *regs)
2397 +-{
2398 +- return false;
2399 +-}
2400 +-#endif /* CONFIG_KMEMCHECK */
2401 +-
2402 +-#endif
2403 +diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
2404 +index 4d57894635f2..76b058533e47 100644
2405 +--- a/arch/x86/include/asm/nospec-branch.h
2406 ++++ b/arch/x86/include/asm/nospec-branch.h
2407 +@@ -6,6 +6,7 @@
2408 + #include <asm/alternative.h>
2409 + #include <asm/alternative-asm.h>
2410 + #include <asm/cpufeatures.h>
2411 ++#include <asm/msr-index.h>
2412 +
2413 + #ifdef __ASSEMBLY__
2414 +
2415 +@@ -164,10 +165,15 @@ static inline void vmexit_fill_RSB(void)
2416 +
2417 + static inline void indirect_branch_prediction_barrier(void)
2418 + {
2419 +- alternative_input("",
2420 +- "call __ibp_barrier",
2421 +- X86_FEATURE_USE_IBPB,
2422 +- ASM_NO_INPUT_CLOBBER("eax", "ecx", "edx", "memory"));
2423 ++ asm volatile(ALTERNATIVE("",
2424 ++ "movl %[msr], %%ecx\n\t"
2425 ++ "movl %[val], %%eax\n\t"
2426 ++ "movl $0, %%edx\n\t"
2427 ++ "wrmsr",
2428 ++ X86_FEATURE_USE_IBPB)
2429 ++ : : [msr] "i" (MSR_IA32_PRED_CMD),
2430 ++ [val] "i" (PRED_CMD_IBPB)
2431 ++ : "eax", "ecx", "edx", "memory");
2432 + }
2433 +
2434 + #endif /* __ASSEMBLY__ */
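[The rewritten barrier open-codes the WRMSR sequence (%ecx = MSR index, %edx:%eax = value) inside an ALTERNATIVE instead of calling the now-removed __ibp_barrier(), avoiding a call the compiler must treat as clobbering registers. MSR_IA32_PRED_CMD is 0x49 and PRED_CMD_IBPB is bit 0. For reference, the same write can be driven from userspace through the msr driver; a root-only, illustrative sketch that assumes the msr module is loaded and /dev/cpu/0/msr exists:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>

    #define MSR_IA32_PRED_CMD 0x49        /* write-only command MSR */
    #define PRED_CMD_IBPB     (1ULL << 0)

    int main(void)
    {
        uint64_t val = PRED_CMD_IBPB;
        int fd = open("/dev/cpu/0/msr", O_WRONLY);

        /* the msr driver maps an 8-byte pwrite at offset <msr> onto WRMSR */
        if (fd < 0 || pwrite(fd, &val, sizeof(val), MSR_IA32_PRED_CMD) != 8)
            perror("IBPB via /dev/cpu/0/msr");
        else
            printf("issued IBPB on CPU 0\n");
        return 0;
    }
]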
2435 +diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
2436 +index 4baa6bceb232..d652a3808065 100644
2437 +--- a/arch/x86/include/asm/page_64.h
2438 ++++ b/arch/x86/include/asm/page_64.h
2439 +@@ -52,10 +52,6 @@ static inline void clear_page(void *page)
2440 +
2441 + void copy_page(void *to, void *from);
2442 +
2443 +-#ifdef CONFIG_X86_MCE
2444 +-#define arch_unmap_kpfn arch_unmap_kpfn
2445 +-#endif
2446 +-
2447 + #endif /* !__ASSEMBLY__ */
2448 +
2449 + #ifdef CONFIG_X86_VSYSCALL_EMULATION
2450 +diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
2451 +index 892df375b615..554841fab717 100644
2452 +--- a/arch/x86/include/asm/paravirt.h
2453 ++++ b/arch/x86/include/asm/paravirt.h
2454 +@@ -297,9 +297,9 @@ static inline void __flush_tlb_global(void)
2455 + {
2456 + PVOP_VCALL0(pv_mmu_ops.flush_tlb_kernel);
2457 + }
2458 +-static inline void __flush_tlb_single(unsigned long addr)
2459 ++static inline void __flush_tlb_one_user(unsigned long addr)
2460 + {
2461 +- PVOP_VCALL1(pv_mmu_ops.flush_tlb_single, addr);
2462 ++ PVOP_VCALL1(pv_mmu_ops.flush_tlb_one_user, addr);
2463 + }
2464 +
2465 + static inline void flush_tlb_others(const struct cpumask *cpumask,
2466 +diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
2467 +index 6ec54d01972d..f624f1f10316 100644
2468 +--- a/arch/x86/include/asm/paravirt_types.h
2469 ++++ b/arch/x86/include/asm/paravirt_types.h
2470 +@@ -217,7 +217,7 @@ struct pv_mmu_ops {
2471 + /* TLB operations */
2472 + void (*flush_tlb_user)(void);
2473 + void (*flush_tlb_kernel)(void);
2474 +- void (*flush_tlb_single)(unsigned long addr);
2475 ++ void (*flush_tlb_one_user)(unsigned long addr);
2476 + void (*flush_tlb_others)(const struct cpumask *cpus,
2477 + const struct flush_tlb_info *info);
2478 +
2479 +diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
2480 +index 211368922cad..8b8f1f14a0bf 100644
2481 +--- a/arch/x86/include/asm/pgtable.h
2482 ++++ b/arch/x86/include/asm/pgtable.h
2483 +@@ -668,11 +668,6 @@ static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
2484 + return false;
2485 + }
2486 +
2487 +-static inline int pte_hidden(pte_t pte)
2488 +-{
2489 +- return pte_flags(pte) & _PAGE_HIDDEN;
2490 +-}
2491 +-
2492 + static inline int pmd_present(pmd_t pmd)
2493 + {
2494 + /*
2495 +diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
2496 +index e67c0620aec2..e55466760ff8 100644
2497 +--- a/arch/x86/include/asm/pgtable_32.h
2498 ++++ b/arch/x86/include/asm/pgtable_32.h
2499 +@@ -61,7 +61,7 @@ void paging_init(void);
2500 + #define kpte_clear_flush(ptep, vaddr) \
2501 + do { \
2502 + pte_clear(&init_mm, (vaddr), (ptep)); \
2503 +- __flush_tlb_one((vaddr)); \
2504 ++ __flush_tlb_one_kernel((vaddr)); \
2505 + } while (0)
2506 +
2507 + #endif /* !__ASSEMBLY__ */
2508 +diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
2509 +index 9e9b05fc4860..3696398a9475 100644
2510 +--- a/arch/x86/include/asm/pgtable_types.h
2511 ++++ b/arch/x86/include/asm/pgtable_types.h
2512 +@@ -32,7 +32,6 @@
2513 +
2514 + #define _PAGE_BIT_SPECIAL _PAGE_BIT_SOFTW1
2515 + #define _PAGE_BIT_CPA_TEST _PAGE_BIT_SOFTW1
2516 +-#define _PAGE_BIT_HIDDEN _PAGE_BIT_SOFTW3 /* hidden by kmemcheck */
2517 + #define _PAGE_BIT_SOFT_DIRTY _PAGE_BIT_SOFTW3 /* software dirty tracking */
2518 + #define _PAGE_BIT_DEVMAP _PAGE_BIT_SOFTW4
2519 +
2520 +@@ -79,18 +78,6 @@
2521 + #define _PAGE_KNL_ERRATUM_MASK 0
2522 + #endif
2523 +
2524 +-#ifdef CONFIG_KMEMCHECK
2525 +-#define _PAGE_HIDDEN (_AT(pteval_t, 1) << _PAGE_BIT_HIDDEN)
2526 +-#else
2527 +-#define _PAGE_HIDDEN (_AT(pteval_t, 0))
2528 +-#endif
2529 +-
2530 +-/*
2531 +- * The same hidden bit is used by kmemcheck, but since kmemcheck
2532 +- * works on kernel pages while soft-dirty engine on user space,
2533 +- * they do not conflict with each other.
2534 +- */
2535 +-
2536 + #ifdef CONFIG_MEM_SOFT_DIRTY
2537 + #define _PAGE_SOFT_DIRTY (_AT(pteval_t, 1) << _PAGE_BIT_SOFT_DIRTY)
2538 + #else
2539 +diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
2540 +index c57c6e77c29f..15fc074bd628 100644
2541 +--- a/arch/x86/include/asm/processor.h
2542 ++++ b/arch/x86/include/asm/processor.h
2543 +@@ -91,7 +91,7 @@ struct cpuinfo_x86 {
2544 + __u8 x86; /* CPU family */
2545 + __u8 x86_vendor; /* CPU vendor */
2546 + __u8 x86_model;
2547 +- __u8 x86_mask;
2548 ++ __u8 x86_stepping;
2549 + #ifdef CONFIG_X86_64
2550 + /* Number of 4K pages in DTLB/ITLB combined(in pages): */
2551 + int x86_tlbsize;
2552 +@@ -109,7 +109,7 @@ struct cpuinfo_x86 {
2553 + char x86_vendor_id[16];
2554 + char x86_model_id[64];
2555 + /* in KB - valid for CPUS which support this call: */
2556 +- int x86_cache_size;
2557 ++ unsigned int x86_cache_size;
2558 + int x86_cache_alignment; /* In bytes */
2559 + /* Cache QoS architectural values: */
2560 + int x86_cache_max_rmid; /* max index */
2561 +@@ -968,7 +968,4 @@ bool xen_set_default_idle(void);
2562 +
2563 + void stop_this_cpu(void *dummy);
2564 + void df_debug(struct pt_regs *regs, long error_code);
2565 +-
2566 +-void __ibp_barrier(void);
2567 +-
2568 + #endif /* _ASM_X86_PROCESSOR_H */
2569 +diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
2570 +index 076502241eae..55d392c6bd29 100644
2571 +--- a/arch/x86/include/asm/string_32.h
2572 ++++ b/arch/x86/include/asm/string_32.h
2573 +@@ -179,8 +179,6 @@ static inline void *__memcpy3d(void *to, const void *from, size_t len)
2574 + * No 3D Now!
2575 + */
2576 +
2577 +-#ifndef CONFIG_KMEMCHECK
2578 +-
2579 + #if (__GNUC__ >= 4)
2580 + #define memcpy(t, f, n) __builtin_memcpy(t, f, n)
2581 + #else
2582 +@@ -189,13 +187,6 @@ static inline void *__memcpy3d(void *to, const void *from, size_t len)
2583 + ? __constant_memcpy((t), (f), (n)) \
2584 + : __memcpy((t), (f), (n)))
2585 + #endif
2586 +-#else
2587 +-/*
2588 +- * kmemcheck becomes very happy if we use the REP instructions unconditionally,
2589 +- * because it means that we know both memory operands in advance.
2590 +- */
2591 +-#define memcpy(t, f, n) __memcpy((t), (f), (n))
2592 +-#endif
2593 +
2594 + #endif
2595 + #endif /* !CONFIG_FORTIFY_SOURCE */
2596 +diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
2597 +index 0b1b4445f4c5..533f74c300c2 100644
2598 +--- a/arch/x86/include/asm/string_64.h
2599 ++++ b/arch/x86/include/asm/string_64.h
2600 +@@ -33,7 +33,6 @@ extern void *memcpy(void *to, const void *from, size_t len);
2601 + extern void *__memcpy(void *to, const void *from, size_t len);
2602 +
2603 + #ifndef CONFIG_FORTIFY_SOURCE
2604 +-#ifndef CONFIG_KMEMCHECK
2605 + #if (__GNUC__ == 4 && __GNUC_MINOR__ < 3) || __GNUC__ < 4
2606 + #define memcpy(dst, src, len) \
2607 + ({ \
2608 +@@ -46,13 +45,6 @@ extern void *__memcpy(void *to, const void *from, size_t len);
2609 + __ret; \
2610 + })
2611 + #endif
2612 +-#else
2613 +-/*
2614 +- * kmemcheck becomes very happy if we use the REP instructions unconditionally,
2615 +- * because it means that we know both memory operands in advance.
2616 +- */
2617 +-#define memcpy(dst, src, len) __inline_memcpy((dst), (src), (len))
2618 +-#endif
2619 + #endif /* !CONFIG_FORTIFY_SOURCE */
2620 +
2621 + #define __HAVE_ARCH_MEMSET
2622 +diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
2623 +index 4405c4b308e8..704f31315dde 100644
2624 +--- a/arch/x86/include/asm/tlbflush.h
2625 ++++ b/arch/x86/include/asm/tlbflush.h
2626 +@@ -140,7 +140,7 @@ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
2627 + #else
2628 + #define __flush_tlb() __native_flush_tlb()
2629 + #define __flush_tlb_global() __native_flush_tlb_global()
2630 +-#define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
2631 ++#define __flush_tlb_one_user(addr) __native_flush_tlb_one_user(addr)
2632 + #endif
2633 +
2634 + static inline bool tlb_defer_switch_to_init_mm(void)
2635 +@@ -397,7 +397,7 @@ static inline void __native_flush_tlb_global(void)
2636 + /*
2637 + * flush one page in the user mapping
2638 + */
2639 +-static inline void __native_flush_tlb_single(unsigned long addr)
2640 ++static inline void __native_flush_tlb_one_user(unsigned long addr)
2641 + {
2642 + u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
2643 +
2644 +@@ -434,18 +434,31 @@ static inline void __flush_tlb_all(void)
2645 + /*
2646 + * flush one page in the kernel mapping
2647 + */
2648 +-static inline void __flush_tlb_one(unsigned long addr)
2649 ++static inline void __flush_tlb_one_kernel(unsigned long addr)
2650 + {
2651 + count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
2652 +- __flush_tlb_single(addr);
2653 ++
2654 ++ /*
2655 ++ * If PTI is off, then __flush_tlb_one_user() is just INVLPG or its
2656 ++ * paravirt equivalent. Even with PCID, this is sufficient: we only
2657 ++ * use PCID if we also use global PTEs for the kernel mapping, and
2658 ++ * INVLPG flushes global translations across all address spaces.
2659 ++ *
2660 ++ * If PTI is on, then the kernel is mapped with non-global PTEs, and
2661 ++ * __flush_tlb_one_user() will flush the given address for the current
2662 ++ * kernel address space and for its usermode counterpart, but it does
2663 ++ * not flush it for other address spaces.
2664 ++ */
2665 ++ __flush_tlb_one_user(addr);
2666 +
2667 + if (!static_cpu_has(X86_FEATURE_PTI))
2668 + return;
2669 +
2670 + /*
2671 +- * __flush_tlb_single() will have cleared the TLB entry for this ASID,
2672 +- * but since kernel space is replicated across all, we must also
2673 +- * invalidate all others.
2674 ++ * See above. We need to propagate the flush to all other address
2675 ++ * spaces. In principle, we only need to propagate it to kernelmode
2676 ++ * address spaces, but the extra bookkeeping we would need is not
2677 ++ * worth it.
2678 + */
2679 + invalidate_other_asid();
2680 + }
2681 +diff --git a/arch/x86/include/asm/xor.h b/arch/x86/include/asm/xor.h
2682 +index 1f5c5161ead6..45c8605467f1 100644
2683 +--- a/arch/x86/include/asm/xor.h
2684 ++++ b/arch/x86/include/asm/xor.h
2685 +@@ -1,7 +1,4 @@
2686 +-#ifdef CONFIG_KMEMCHECK
2687 +-/* kmemcheck doesn't handle MMX/SSE/SSE2 instructions */
2688 +-# include <asm-generic/xor.h>
2689 +-#elif !defined(_ASM_X86_XOR_H)
2690 ++#ifndef _ASM_X86_XOR_H
2691 + #define _ASM_X86_XOR_H
2692 +
2693 + /*
2694 +diff --git a/arch/x86/kernel/acpi/apei.c b/arch/x86/kernel/acpi/apei.c
2695 +index ea3046e0b0cf..28d70ac93faf 100644
2696 +--- a/arch/x86/kernel/acpi/apei.c
2697 ++++ b/arch/x86/kernel/acpi/apei.c
2698 +@@ -55,5 +55,5 @@ void arch_apei_report_mem_error(int sev, struct cper_sec_mem_err *mem_err)
2699 +
2700 + void arch_apei_flush_tlb_one(unsigned long addr)
2701 + {
2702 +- __flush_tlb_one(addr);
2703 ++ __flush_tlb_one_kernel(addr);
2704 + }
2705 +diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
2706 +index 6db28f17ff28..c88e0b127810 100644
2707 +--- a/arch/x86/kernel/amd_nb.c
2708 ++++ b/arch/x86/kernel/amd_nb.c
2709 +@@ -235,7 +235,7 @@ int amd_cache_northbridges(void)
2710 + if (boot_cpu_data.x86 == 0x10 &&
2711 + boot_cpu_data.x86_model >= 0x8 &&
2712 + (boot_cpu_data.x86_model > 0x9 ||
2713 +- boot_cpu_data.x86_mask >= 0x1))
2714 ++ boot_cpu_data.x86_stepping >= 0x1))
2715 + amd_northbridges.flags |= AMD_NB_L3_INDEX_DISABLE;
2716 +
2717 + if (boot_cpu_data.x86 == 0x15)
2718 +diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
2719 +index 89c7c8569e5e..5942aa5f569b 100644
2720 +--- a/arch/x86/kernel/apic/apic.c
2721 ++++ b/arch/x86/kernel/apic/apic.c
2722 +@@ -553,7 +553,7 @@ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
2723 +
2724 + static u32 hsx_deadline_rev(void)
2725 + {
2726 +- switch (boot_cpu_data.x86_mask) {
2727 ++ switch (boot_cpu_data.x86_stepping) {
2728 + case 0x02: return 0x3a; /* EP */
2729 + case 0x04: return 0x0f; /* EX */
2730 + }
2731 +@@ -563,7 +563,7 @@ static u32 hsx_deadline_rev(void)
2732 +
2733 + static u32 bdx_deadline_rev(void)
2734 + {
2735 +- switch (boot_cpu_data.x86_mask) {
2736 ++ switch (boot_cpu_data.x86_stepping) {
2737 + case 0x02: return 0x00000011;
2738 + case 0x03: return 0x0700000e;
2739 + case 0x04: return 0x0f00000c;
2740 +@@ -575,7 +575,7 @@ static u32 bdx_deadline_rev(void)
2741 +
2742 + static u32 skx_deadline_rev(void)
2743 + {
2744 +- switch (boot_cpu_data.x86_mask) {
2745 ++ switch (boot_cpu_data.x86_stepping) {
2746 + case 0x03: return 0x01000136;
2747 + case 0x04: return 0x02000014;
2748 + }
2749 +diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
2750 +index e4b0d92b3ae0..2a7fd56e67b3 100644
2751 +--- a/arch/x86/kernel/apm_32.c
2752 ++++ b/arch/x86/kernel/apm_32.c
2753 +@@ -2389,6 +2389,7 @@ static int __init apm_init(void)
2754 + if (HZ != 100)
2755 + idle_period = (idle_period * HZ) / 100;
2756 + if (idle_threshold < 100) {
2757 ++ cpuidle_poll_state_init(&apm_idle_driver);
2758 + if (!cpuidle_register_driver(&apm_idle_driver))
2759 + if (cpuidle_register_device(&apm_cpuidle_device))
2760 + cpuidle_unregister_driver(&apm_idle_driver);
2761 +diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
2762 +index fa1261eefa16..f91ba53e06c8 100644
2763 +--- a/arch/x86/kernel/asm-offsets_32.c
2764 ++++ b/arch/x86/kernel/asm-offsets_32.c
2765 +@@ -18,7 +18,7 @@ void foo(void)
2766 + OFFSET(CPUINFO_x86, cpuinfo_x86, x86);
2767 + OFFSET(CPUINFO_x86_vendor, cpuinfo_x86, x86_vendor);
2768 + OFFSET(CPUINFO_x86_model, cpuinfo_x86, x86_model);
2769 +- OFFSET(CPUINFO_x86_mask, cpuinfo_x86, x86_mask);
2770 ++ OFFSET(CPUINFO_x86_stepping, cpuinfo_x86, x86_stepping);
2771 + OFFSET(CPUINFO_cpuid_level, cpuinfo_x86, cpuid_level);
2772 + OFFSET(CPUINFO_x86_capability, cpuinfo_x86, x86_capability);
2773 + OFFSET(CPUINFO_x86_vendor_id, cpuinfo_x86, x86_vendor_id);
2774 +diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
2775 +index ea831c858195..e7d5a7883632 100644
2776 +--- a/arch/x86/kernel/cpu/amd.c
2777 ++++ b/arch/x86/kernel/cpu/amd.c
2778 +@@ -119,7 +119,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
2779 + return;
2780 + }
2781 +
2782 +- if (c->x86_model == 6 && c->x86_mask == 1) {
2783 ++ if (c->x86_model == 6 && c->x86_stepping == 1) {
2784 + const int K6_BUG_LOOP = 1000000;
2785 + int n;
2786 + void (*f_vide)(void);
2787 +@@ -149,7 +149,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
2788 +
2789 + /* K6 with old style WHCR */
2790 + if (c->x86_model < 8 ||
2791 +- (c->x86_model == 8 && c->x86_mask < 8)) {
2792 ++ (c->x86_model == 8 && c->x86_stepping < 8)) {
2793 + /* We can only write allocate on the low 508Mb */
2794 + if (mbytes > 508)
2795 + mbytes = 508;
2796 +@@ -168,7 +168,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
2797 + return;
2798 + }
2799 +
2800 +- if ((c->x86_model == 8 && c->x86_mask > 7) ||
2801 ++ if ((c->x86_model == 8 && c->x86_stepping > 7) ||
2802 + c->x86_model == 9 || c->x86_model == 13) {
2803 + /* The more serious chips .. */
2804 +
2805 +@@ -221,7 +221,7 @@ static void init_amd_k7(struct cpuinfo_x86 *c)
2806 + * are more robust with CLK_CTL set to 200xxxxx instead of 600xxxxx
2807 + * As per AMD technical note 27212 0.2
2808 + */
2809 +- if ((c->x86_model == 8 && c->x86_mask >= 1) || (c->x86_model > 8)) {
2810 ++ if ((c->x86_model == 8 && c->x86_stepping >= 1) || (c->x86_model > 8)) {
2811 + rdmsr(MSR_K7_CLK_CTL, l, h);
2812 + if ((l & 0xfff00000) != 0x20000000) {
2813 + pr_info("CPU: CLK_CTL MSR was %x. Reprogramming to %x\n",
2814 +@@ -241,12 +241,12 @@ static void init_amd_k7(struct cpuinfo_x86 *c)
2815 + * but they are not certified as MP capable.
2816 + */
2817 + /* Athlon 660/661 is valid. */
2818 +- if ((c->x86_model == 6) && ((c->x86_mask == 0) ||
2819 +- (c->x86_mask == 1)))
2820 ++ if ((c->x86_model == 6) && ((c->x86_stepping == 0) ||
2821 ++ (c->x86_stepping == 1)))
2822 + return;
2823 +
2824 + /* Duron 670 is valid */
2825 +- if ((c->x86_model == 7) && (c->x86_mask == 0))
2826 ++ if ((c->x86_model == 7) && (c->x86_stepping == 0))
2827 + return;
2828 +
2829 + /*
2830 +@@ -256,8 +256,8 @@ static void init_amd_k7(struct cpuinfo_x86 *c)
2831 + * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for
2832 + * more.
2833 + */
2834 +- if (((c->x86_model == 6) && (c->x86_mask >= 2)) ||
2835 +- ((c->x86_model == 7) && (c->x86_mask >= 1)) ||
2836 ++ if (((c->x86_model == 6) && (c->x86_stepping >= 2)) ||
2837 ++ ((c->x86_model == 7) && (c->x86_stepping >= 1)) ||
2838 + (c->x86_model > 7))
2839 + if (cpu_has(c, X86_FEATURE_MP))
2840 + return;
2841 +@@ -583,7 +583,7 @@ static void early_init_amd(struct cpuinfo_x86 *c)
2842 + /* Set MTRR capability flag if appropriate */
2843 + if (c->x86 == 5)
2844 + if (c->x86_model == 13 || c->x86_model == 9 ||
2845 +- (c->x86_model == 8 && c->x86_mask >= 8))
2846 ++ (c->x86_model == 8 && c->x86_stepping >= 8))
2847 + set_cpu_cap(c, X86_FEATURE_K6_MTRR);
2848 + #endif
2849 + #if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_PCI)
2850 +@@ -769,7 +769,7 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
2851 + * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
2852 + * all up to and including B1.
2853 + */
2854 +- if (c->x86_model <= 1 && c->x86_mask <= 1)
2855 ++ if (c->x86_model <= 1 && c->x86_stepping <= 1)
2856 + set_cpu_cap(c, X86_FEATURE_CPB);
2857 + }
2858 +
2859 +@@ -880,11 +880,11 @@ static unsigned int amd_size_cache(struct cpuinfo_x86 *c, unsigned int size)
2860 + /* AMD errata T13 (order #21922) */
2861 + if ((c->x86 == 6)) {
2862 + /* Duron Rev A0 */
2863 +- if (c->x86_model == 3 && c->x86_mask == 0)
2864 ++ if (c->x86_model == 3 && c->x86_stepping == 0)
2865 + size = 64;
2866 + /* Tbird rev A1/A2 */
2867 + if (c->x86_model == 4 &&
2868 +- (c->x86_mask == 0 || c->x86_mask == 1))
2869 ++ (c->x86_stepping == 0 || c->x86_stepping == 1))
2870 + size = 256;
2871 + }
2872 + return size;
2873 +@@ -1021,7 +1021,7 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
2874 + }
2875 +
2876 + /* OSVW unavailable or ID unknown, match family-model-stepping range */
2877 +- ms = (cpu->x86_model << 4) | cpu->x86_mask;
2878 ++ ms = (cpu->x86_model << 4) | cpu->x86_stepping;
2879 + while ((range = *erratum++))
2880 + if ((cpu->x86 == AMD_MODEL_RANGE_FAMILY(range)) &&
2881 + (ms >= AMD_MODEL_RANGE_START(range)) &&
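Note on the amd.c hunk above: it is part of the tree-wide x86_mask -> x86_stepping rename, and the OSVW fallback it touches packs model and stepping into one value so an erratum can be expressed as a family plus a model/stepping range. A minimal standalone sketch of that packing and range test (the wrapper function fms_in_range() and its parameter layout are illustrative, not kernel API; only the `(model << 4) | stepping` encoding comes from the code above):

    #include <stdbool.h>

    /* Pack model/stepping the same way the hunk does:
     * ms = (model << 4) | stepping. */
    static bool fms_in_range(unsigned family, unsigned model, unsigned stepping,
                             unsigned range_family,
                             unsigned start_model, unsigned start_stepping,
                             unsigned end_model, unsigned end_stepping)
    {
            unsigned ms    = (model << 4) | stepping;
            unsigned start = (start_model << 4) | start_stepping;
            unsigned end   = (end_model << 4) | end_stepping;

            return family == range_family && ms >= start && ms <= end;
    }

For example, an erratum covering family 0x10, model 0x2 stepping 1 through model 0xf stepping 0xf, would be tested as fms_in_range(c->x86, c->x86_model, c->x86_stepping, 0x10, 0x2, 1, 0xf, 0xf).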
2882 +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
2883 +index 71949bf2de5a..d71c8b54b696 100644
2884 +--- a/arch/x86/kernel/cpu/bugs.c
2885 ++++ b/arch/x86/kernel/cpu/bugs.c
2886 +@@ -162,8 +162,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
2887 + if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
2888 + return SPECTRE_V2_CMD_NONE;
2889 + else {
2890 +- ret = cmdline_find_option(boot_command_line, "spectre_v2", arg,
2891 +- sizeof(arg));
2892 ++ ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
2893 + if (ret < 0)
2894 + return SPECTRE_V2_CMD_AUTO;
2895 +
2896 +@@ -175,8 +174,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
2897 + }
2898 +
2899 + if (i >= ARRAY_SIZE(mitigation_options)) {
2900 +- pr_err("unknown option (%s). Switching to AUTO select\n",
2901 +- mitigation_options[i].option);
2902 ++ pr_err("unknown option (%s). Switching to AUTO select\n", arg);
2903 + return SPECTRE_V2_CMD_AUTO;
2904 + }
2905 + }
2906 +@@ -185,8 +183,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
2907 + cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
2908 + cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
2909 + !IS_ENABLED(CONFIG_RETPOLINE)) {
2910 +- pr_err("%s selected but not compiled in. Switching to AUTO select\n",
2911 +- mitigation_options[i].option);
2912 ++ pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option);
2913 + return SPECTRE_V2_CMD_AUTO;
2914 + }
2915 +
2916 +@@ -256,14 +253,14 @@ static void __init spectre_v2_select_mitigation(void)
2917 + goto retpoline_auto;
2918 + break;
2919 + }
2920 +- pr_err("kernel not compiled with retpoline; no mitigation available!");
2921 ++ pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
2922 + return;
2923 +
2924 + retpoline_auto:
2925 + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
2926 + retpoline_amd:
2927 + if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
2928 +- pr_err("LFENCE not serializing. Switching to generic retpoline\n");
2929 ++ pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
2930 + goto retpoline_generic;
2931 + }
2932 + mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD :
2933 +@@ -281,7 +278,7 @@ static void __init spectre_v2_select_mitigation(void)
2934 + pr_info("%s\n", spectre_v2_strings[mode]);
2935 +
2936 + /*
2937 +- * If neither SMEP or KPTI are available, there is a risk of
2938 ++ * If neither SMEP nor PTI are available, there is a risk of
2939 + * hitting userspace addresses in the RSB after a context switch
2940 + * from a shallow call stack to a deeper one. To prevent this fill
2941 + * the entire RSB, even when using IBRS.
2942 +@@ -295,21 +292,20 @@ static void __init spectre_v2_select_mitigation(void)
2943 + if ((!boot_cpu_has(X86_FEATURE_PTI) &&
2944 + !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
2945 + setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
2946 +- pr_info("Filling RSB on context switch\n");
2947 ++ pr_info("Spectre v2 mitigation: Filling RSB on context switch\n");
2948 + }
2949 +
2950 + /* Initialize Indirect Branch Prediction Barrier if supported */
2951 + if (boot_cpu_has(X86_FEATURE_IBPB)) {
2952 + setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
2953 +- pr_info("Enabling Indirect Branch Prediction Barrier\n");
2954 ++ pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
2955 + }
2956 + }
2957 +
2958 + #undef pr_fmt
2959 +
2960 + #ifdef CONFIG_SYSFS
2961 +-ssize_t cpu_show_meltdown(struct device *dev,
2962 +- struct device_attribute *attr, char *buf)
2963 ++ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
2964 + {
2965 + if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
2966 + return sprintf(buf, "Not affected\n");
2967 +@@ -318,16 +314,14 @@ ssize_t cpu_show_meltdown(struct device *dev,
2968 + return sprintf(buf, "Vulnerable\n");
2969 + }
2970 +
2971 +-ssize_t cpu_show_spectre_v1(struct device *dev,
2972 +- struct device_attribute *attr, char *buf)
2973 ++ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
2974 + {
2975 + if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
2976 + return sprintf(buf, "Not affected\n");
2977 + return sprintf(buf, "Mitigation: __user pointer sanitization\n");
2978 + }
2979 +
2980 +-ssize_t cpu_show_spectre_v2(struct device *dev,
2981 +- struct device_attribute *attr, char *buf)
2982 ++ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
2983 + {
2984 + if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
2985 + return sprintf(buf, "Not affected\n");
2986 +@@ -337,9 +331,3 @@ ssize_t cpu_show_spectre_v2(struct device *dev,
2987 + spectre_v2_module_string());
2988 + }
2989 + #endif
2990 +-
2991 +-void __ibp_barrier(void)
2992 +-{
2993 +- __wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
2994 +-}
2995 +-EXPORT_SYMBOL_GPL(__ibp_barrier);
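Note on the bugs.c hunk above: besides adding "Spectre (v2) mitigation:" prefixes to the pr_info/pr_err messages and removing __ibp_barrier(), it fixes an out-of-bounds read. When no option matches, the scan index equals ARRAY_SIZE(mitigation_options), so the old pr_err() dereferenced mitigation_options[i] one past the table; the fix prints the raw user argument instead. A standalone sketch of the corrected pattern (the table contents and the function name parse_option() are illustrative):

    #include <stdio.h>
    #include <string.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const char *const options[] = { "off", "on", "auto", "retpoline" };

    /* Return the matched index, or -1 for an unknown option. */
    static int parse_option(const char *arg)
    {
            unsigned int i;

            for (i = 0; i < ARRAY_SIZE(options); i++)
                    if (!strcmp(arg, options[i]))
                            return i;

            /* On no match, i == ARRAY_SIZE(options); indexing options[i]
             * here would read past the table -- report the raw argument. */
            printf("unknown option (%s)\n", arg);
            return -1;
    }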
2996 +diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
2997 +index 68bc6d9b3132..595be776727d 100644
2998 +--- a/arch/x86/kernel/cpu/centaur.c
2999 ++++ b/arch/x86/kernel/cpu/centaur.c
3000 +@@ -136,7 +136,7 @@ static void init_centaur(struct cpuinfo_x86 *c)
3001 + clear_cpu_cap(c, X86_FEATURE_TSC);
3002 + break;
3003 + case 8:
3004 +- switch (c->x86_mask) {
3005 ++ switch (c->x86_stepping) {
3006 + default:
3007 + name = "2";
3008 + break;
3009 +@@ -211,7 +211,7 @@ centaur_size_cache(struct cpuinfo_x86 *c, unsigned int size)
3010 + * - Note, it seems this may only be in engineering samples.
3011 + */
3012 + if ((c->x86 == 6) && (c->x86_model == 9) &&
3013 +- (c->x86_mask == 1) && (size == 65))
3014 ++ (c->x86_stepping == 1) && (size == 65))
3015 + size -= 1;
3016 + return size;
3017 + }
3018 +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
3019 +index 92b66e21bae5..651b7afed4da 100644
3020 +--- a/arch/x86/kernel/cpu/common.c
3021 ++++ b/arch/x86/kernel/cpu/common.c
3022 +@@ -707,7 +707,7 @@ void cpu_detect(struct cpuinfo_x86 *c)
3023 + cpuid(0x00000001, &tfms, &misc, &junk, &cap0);
3024 + c->x86 = x86_family(tfms);
3025 + c->x86_model = x86_model(tfms);
3026 +- c->x86_mask = x86_stepping(tfms);
3027 ++ c->x86_stepping = x86_stepping(tfms);
3028 +
3029 + if (cap0 & (1<<19)) {
3030 + c->x86_clflush_size = ((misc >> 8) & 0xff) * 8;
3031 +@@ -1160,9 +1160,9 @@ static void identify_cpu(struct cpuinfo_x86 *c)
3032 + int i;
3033 +
3034 + c->loops_per_jiffy = loops_per_jiffy;
3035 +- c->x86_cache_size = -1;
3036 ++ c->x86_cache_size = 0;
3037 + c->x86_vendor = X86_VENDOR_UNKNOWN;
3038 +- c->x86_model = c->x86_mask = 0; /* So far unknown... */
3039 ++ c->x86_model = c->x86_stepping = 0; /* So far unknown... */
3040 + c->x86_vendor_id[0] = '\0'; /* Unset */
3041 + c->x86_model_id[0] = '\0'; /* Unset */
3042 + c->x86_max_cores = 1;
3043 +@@ -1353,8 +1353,8 @@ void print_cpu_info(struct cpuinfo_x86 *c)
3044 +
3045 + pr_cont(" (family: 0x%x, model: 0x%x", c->x86, c->x86_model);
3046 +
3047 +- if (c->x86_mask || c->cpuid_level >= 0)
3048 +- pr_cont(", stepping: 0x%x)\n", c->x86_mask);
3049 ++ if (c->x86_stepping || c->cpuid_level >= 0)
3050 ++ pr_cont(", stepping: 0x%x)\n", c->x86_stepping);
3051 + else
3052 + pr_cont(")\n");
3053 + }
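Note on the common.c hunk above: cpu_detect() fills the renamed x86_stepping field by decoding the CPUID leaf 1 signature. For reference, the decode implemented by x86_family()/x86_model()/x86_stepping() (the arch/x86/lib/cpu.c hunk appears further down in this patch) works like this, as a self-contained sketch:

    /* CPUID leaf 1, EAX layout: [3:0] stepping, [7:4] model, [11:8] family,
     * [19:16] extended model, [27:20] extended family. */
    static unsigned int sig_family(unsigned int sig)
    {
            unsigned int fam = (sig >> 8) & 0xf;

            if (fam == 0xf)                 /* extended family kicks in at 0xf */
                    fam += (sig >> 20) & 0xff;
            return fam;
    }

    static unsigned int sig_model(unsigned int sig)
    {
            unsigned int fam = sig_family(sig);
            unsigned int model = (sig >> 4) & 0xf;

            if (fam >= 0x6)                 /* extended model for family 6+ */
                    model += ((sig >> 16) & 0xf) << 4;
            return model;
    }

    static unsigned int sig_stepping(unsigned int sig)
    {
            return sig & 0xf;               /* the field renamed to x86_stepping */
    }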
3054 +diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c
3055 +index 6b4bb335641f..8949b7ae6d92 100644
3056 +--- a/arch/x86/kernel/cpu/cyrix.c
3057 ++++ b/arch/x86/kernel/cpu/cyrix.c
3058 +@@ -215,7 +215,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
3059 +
3060 + /* common case step number/rev -- exceptions handled below */
3061 + c->x86_model = (dir1 >> 4) + 1;
3062 +- c->x86_mask = dir1 & 0xf;
3063 ++ c->x86_stepping = dir1 & 0xf;
3064 +
3065 + /* Now cook; the original recipe is by Channing Corn, from Cyrix.
3066 + * We do the same thing for each generation: we work out
3067 +diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
3068 +index 4cf4f8cbc69d..d19e903214b4 100644
3069 +--- a/arch/x86/kernel/cpu/intel.c
3070 ++++ b/arch/x86/kernel/cpu/intel.c
3071 +@@ -116,14 +116,13 @@ struct sku_microcode {
3072 + u32 microcode;
3073 + };
3074 + static const struct sku_microcode spectre_bad_microcodes[] = {
3075 +- { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0B, 0x84 },
3076 +- { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0A, 0x84 },
3077 +- { INTEL_FAM6_KABYLAKE_DESKTOP, 0x09, 0x84 },
3078 +- { INTEL_FAM6_KABYLAKE_MOBILE, 0x0A, 0x84 },
3079 +- { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x84 },
3080 ++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0B, 0x80 },
3081 ++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0A, 0x80 },
3082 ++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x09, 0x80 },
3083 ++ { INTEL_FAM6_KABYLAKE_MOBILE, 0x0A, 0x80 },
3084 ++ { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x80 },
3085 + { INTEL_FAM6_SKYLAKE_X, 0x03, 0x0100013e },
3086 + { INTEL_FAM6_SKYLAKE_X, 0x04, 0x0200003c },
3087 +- { INTEL_FAM6_SKYLAKE_MOBILE, 0x03, 0xc2 },
3088 + { INTEL_FAM6_SKYLAKE_DESKTOP, 0x03, 0xc2 },
3089 + { INTEL_FAM6_BROADWELL_CORE, 0x04, 0x28 },
3090 + { INTEL_FAM6_BROADWELL_GT3E, 0x01, 0x1b },
3091 +@@ -136,8 +135,6 @@ static const struct sku_microcode spectre_bad_microcodes[] = {
3092 + { INTEL_FAM6_HASWELL_X, 0x02, 0x3b },
3093 + { INTEL_FAM6_HASWELL_X, 0x04, 0x10 },
3094 + { INTEL_FAM6_IVYBRIDGE_X, 0x04, 0x42a },
3095 +- /* Updated in the 20180108 release; blacklist until we know otherwise */
3096 +- { INTEL_FAM6_ATOM_GEMINI_LAKE, 0x01, 0x22 },
3097 + /* Observed in the wild */
3098 + { INTEL_FAM6_SANDYBRIDGE_X, 0x06, 0x61b },
3099 + { INTEL_FAM6_SANDYBRIDGE_X, 0x07, 0x712 },
3100 +@@ -149,7 +146,7 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
3101 +
3102 + for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
3103 + if (c->x86_model == spectre_bad_microcodes[i].model &&
3104 +- c->x86_mask == spectre_bad_microcodes[i].stepping)
3105 ++ c->x86_stepping == spectre_bad_microcodes[i].stepping)
3106 + return (c->microcode <= spectre_bad_microcodes[i].microcode);
3107 + }
3108 + return false;
3109 +@@ -196,7 +193,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
3110 + * need the microcode to have already been loaded... so if it is
3111 + * not, recommend a BIOS update and disable large pages.
3112 + */
3113 +- if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_mask <= 2 &&
3114 ++ if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_stepping <= 2 &&
3115 + c->microcode < 0x20e) {
3116 + pr_warn("Atom PSE erratum detected, BIOS microcode update recommended\n");
3117 + clear_cpu_cap(c, X86_FEATURE_PSE);
3118 +@@ -212,7 +209,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
3119 +
3120 + /* CPUID workaround for 0F33/0F34 CPU */
3121 + if (c->x86 == 0xF && c->x86_model == 0x3
3122 +- && (c->x86_mask == 0x3 || c->x86_mask == 0x4))
3123 ++ && (c->x86_stepping == 0x3 || c->x86_stepping == 0x4))
3124 + c->x86_phys_bits = 36;
3125 +
3126 + /*
3127 +@@ -253,21 +250,6 @@ static void early_init_intel(struct cpuinfo_x86 *c)
3128 + if (c->x86 == 6 && c->x86_model < 15)
3129 + clear_cpu_cap(c, X86_FEATURE_PAT);
3130 +
3131 +-#ifdef CONFIG_KMEMCHECK
3132 +- /*
3133 +- * P4s have a "fast strings" feature which causes single-
3134 +- * stepping REP instructions to only generate a #DB on
3135 +- * cache-line boundaries.
3136 +- *
3137 +- * Ingo Molnar reported a Pentium D (model 6) and a Xeon
3138 +- * (model 2) with the same problem.
3139 +- */
3140 +- if (c->x86 == 15)
3141 +- if (msr_clear_bit(MSR_IA32_MISC_ENABLE,
3142 +- MSR_IA32_MISC_ENABLE_FAST_STRING_BIT) > 0)
3143 +- pr_info("kmemcheck: Disabling fast string operations\n");
3144 +-#endif
3145 +-
3146 + /*
3147 + * If fast string is not enabled in IA32_MISC_ENABLE for any reason,
3148 + * clear the fast string and enhanced fast string CPU capabilities.
3149 +@@ -325,7 +307,7 @@ int ppro_with_ram_bug(void)
3150 + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
3151 + boot_cpu_data.x86 == 6 &&
3152 + boot_cpu_data.x86_model == 1 &&
3153 +- boot_cpu_data.x86_mask < 8) {
3154 ++ boot_cpu_data.x86_stepping < 8) {
3155 + pr_info("Pentium Pro with Errata#50 detected. Taking evasive action.\n");
3156 + return 1;
3157 + }
3158 +@@ -342,7 +324,7 @@ static void intel_smp_check(struct cpuinfo_x86 *c)
3159 + * Mask B, Pentium, but not Pentium MMX
3160 + */
3161 + if (c->x86 == 5 &&
3162 +- c->x86_mask >= 1 && c->x86_mask <= 4 &&
3163 ++ c->x86_stepping >= 1 && c->x86_stepping <= 4 &&
3164 + c->x86_model <= 3) {
3165 + /*
3166 + * Remember we have B step Pentia with bugs
3167 +@@ -385,7 +367,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
3168 + * SEP CPUID bug: Pentium Pro reports SEP but doesn't have it until
3169 + * model 3 mask 3
3170 + */
3171 +- if ((c->x86<<8 | c->x86_model<<4 | c->x86_mask) < 0x633)
3172 ++ if ((c->x86<<8 | c->x86_model<<4 | c->x86_stepping) < 0x633)
3173 + clear_cpu_cap(c, X86_FEATURE_SEP);
3174 +
3175 + /*
3176 +@@ -403,7 +385,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
3177 + * P4 Xeon erratum 037 workaround.
3178 + * Hardware prefetcher may cause stale data to be loaded into the cache.
3179 + */
3180 +- if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_mask == 1)) {
3181 ++ if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_stepping == 1)) {
3182 + if (msr_set_bit(MSR_IA32_MISC_ENABLE,
3183 + MSR_IA32_MISC_ENABLE_PREFETCH_DISABLE_BIT) > 0) {
3184 + pr_info("CPU: C0 stepping P4 Xeon detected.\n");
3185 +@@ -418,7 +400,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
3186 + * Specification Update").
3187 + */
3188 + if (boot_cpu_has(X86_FEATURE_APIC) && (c->x86<<8 | c->x86_model<<4) == 0x520 &&
3189 +- (c->x86_mask < 0x6 || c->x86_mask == 0xb))
3190 ++ (c->x86_stepping < 0x6 || c->x86_stepping == 0xb))
3191 + set_cpu_bug(c, X86_BUG_11AP);
3192 +
3193 +
3194 +@@ -665,7 +647,7 @@ static void init_intel(struct cpuinfo_x86 *c)
3195 + case 6:
3196 + if (l2 == 128)
3197 + p = "Celeron (Mendocino)";
3198 +- else if (c->x86_mask == 0 || c->x86_mask == 5)
3199 ++ else if (c->x86_stepping == 0 || c->x86_stepping == 5)
3200 + p = "Celeron-A";
3201 + break;
3202 +
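Note on the intel.c hunk above: the spectre_bad_microcodes[] update lowers the Kaby Lake cutoffs from 0x84 to 0x80 and drops entries that later microcode releases cleared, and each entry records the newest known-bad revision for an exact model/stepping pair. A distilled, standalone version of the bad_spectre_microcode() matching shown in the hunk (the helper name ucode_is_bad() and its parameter list are illustrative):

    #include <stdbool.h>

    struct sku_microcode { unsigned model, stepping, microcode; };

    /* "Bad" means: same model and stepping, and running at or below the
     * newest known-bad microcode revision recorded in the table. */
    static bool ucode_is_bad(const struct sku_microcode *tbl, unsigned n,
                             unsigned model, unsigned stepping, unsigned rev)
    {
            unsigned i;

            for (i = 0; i < n; i++)
                    if (tbl[i].model == model && tbl[i].stepping == stepping)
                            return rev <= tbl[i].microcode;
            return false;
    }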
3203 +diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
3204 +index 99442370de40..18dd8f22e353 100644
3205 +--- a/arch/x86/kernel/cpu/intel_rdt.c
3206 ++++ b/arch/x86/kernel/cpu/intel_rdt.c
3207 +@@ -771,7 +771,7 @@ static __init void rdt_quirks(void)
3208 + cache_alloc_hsw_probe();
3209 + break;
3210 + case INTEL_FAM6_SKYLAKE_X:
3211 +- if (boot_cpu_data.x86_mask <= 4)
3212 ++ if (boot_cpu_data.x86_stepping <= 4)
3213 + set_rdt_options("!cmt,!mbmtotal,!mbmlocal,!l3cat");
3214 + }
3215 + }
3216 +diff --git a/arch/x86/kernel/cpu/mcheck/mce-internal.h b/arch/x86/kernel/cpu/mcheck/mce-internal.h
3217 +index aa0d5df9dc60..e956eb267061 100644
3218 +--- a/arch/x86/kernel/cpu/mcheck/mce-internal.h
3219 ++++ b/arch/x86/kernel/cpu/mcheck/mce-internal.h
3220 +@@ -115,4 +115,19 @@ static inline void mce_unregister_injector_chain(struct notifier_block *nb) { }
3221 +
3222 + extern struct mca_config mca_cfg;
3223 +
3224 ++#ifndef CONFIG_X86_64
3225 ++/*
3226 ++ * On 32-bit systems it would be difficult to safely unmap a poison page
3227 ++ * from the kernel 1:1 map because there are no non-canonical addresses that
3228 ++ * we can use to refer to the address without risking a speculative access.
3229 ++ * However, this isn't much of an issue because:
3230 ++ * 1) Few unmappable pages are in the 1:1 map. Most are in HIGHMEM which
3231 ++ * are only mapped into the kernel as needed
3232 ++ * 2) Few people would run a 32-bit kernel on a machine that supports
3233 ++ * recoverable errors because they have too much memory to boot 32-bit.
3234 ++ */
3235 ++static inline void mce_unmap_kpfn(unsigned long pfn) {}
3236 ++#define mce_unmap_kpfn mce_unmap_kpfn
3237 ++#endif
3238 ++
3239 + #endif /* __X86_MCE_INTERNAL_H__ */
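Note on the mce-internal.h hunk above: the 32-bit stub relies on a small preprocessor idiom, defining a macro to its own name purely as a marker that an override exists. mce.c (next hunk) then tests that marker to decide whether to declare and build its real implementation. Annotated, the two halves of the idiom as this patch adds them:

    /* Header (32-bit only): provide a no-op stub and mark it as provided.
     * The self-referential #define changes nothing at expansion time. */
    #ifndef CONFIG_X86_64
    static inline void mce_unmap_kpfn(unsigned long pfn) {}
    #define mce_unmap_kpfn mce_unmap_kpfn
    #endif

    /* mce.c: fall back to the real implementation only when no stub was
     * seen, i.e. the marker macro is undefined (64-bit builds). */
    #ifndef mce_unmap_kpfn
    static void mce_unmap_kpfn(unsigned long pfn);
    #endif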
3240 +diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
3241 +index a9e898b71208..73237aa271ea 100644
3242 +--- a/arch/x86/kernel/cpu/mcheck/mce.c
3243 ++++ b/arch/x86/kernel/cpu/mcheck/mce.c
3244 +@@ -106,6 +106,10 @@ static struct irq_work mce_irq_work;
3245 +
3246 + static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
3247 +
3248 ++#ifndef mce_unmap_kpfn
3249 ++static void mce_unmap_kpfn(unsigned long pfn);
3250 ++#endif
3251 ++
3252 + /*
3253 + * CPU/chipset specific EDAC code can register a notifier call here to print
3254 + * MCE errors in a human-readable form.
3255 +@@ -582,7 +586,8 @@ static int srao_decode_notifier(struct notifier_block *nb, unsigned long val,
3256 +
3257 + if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
3258 + pfn = mce->addr >> PAGE_SHIFT;
3259 +- memory_failure(pfn, MCE_VECTOR, 0);
3260 ++ if (memory_failure(pfn, MCE_VECTOR, 0))
3261 ++ mce_unmap_kpfn(pfn);
3262 + }
3263 +
3264 + return NOTIFY_OK;
3265 +@@ -1049,12 +1054,13 @@ static int do_memory_failure(struct mce *m)
3266 + ret = memory_failure(m->addr >> PAGE_SHIFT, MCE_VECTOR, flags);
3267 + if (ret)
3268 + pr_err("Memory error not recovered");
3269 ++ else
3270 ++ mce_unmap_kpfn(m->addr >> PAGE_SHIFT);
3271 + return ret;
3272 + }
3273 +
3274 +-#if defined(arch_unmap_kpfn) && defined(CONFIG_MEMORY_FAILURE)
3275 +-
3276 +-void arch_unmap_kpfn(unsigned long pfn)
3277 ++#ifndef mce_unmap_kpfn
3278 ++static void mce_unmap_kpfn(unsigned long pfn)
3279 + {
3280 + unsigned long decoy_addr;
3281 +
3282 +@@ -1065,7 +1071,7 @@ void arch_unmap_kpfn(unsigned long pfn)
3283 + * We would like to just call:
3284 + * set_memory_np((unsigned long)pfn_to_kaddr(pfn), 1);
3285 + * but doing that would radically increase the odds of a
3286 +- * speculative access to the posion page because we'd have
3287 ++ * speculative access to the poison page because we'd have
3288 + * the virtual address of the kernel 1:1 mapping sitting
3289 + * around in registers.
3290 + * Instead we get tricky. We create a non-canonical address
3291 +@@ -1090,7 +1096,6 @@ void arch_unmap_kpfn(unsigned long pfn)
3292 +
3293 + if (set_memory_np(decoy_addr, 1))
3294 + pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
3295 +-
3296 + }
3297 + #endif
3298 +
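Note on the mce.c hunk above: the context lines elide how the decoy address is actually built. In the upstream change this is based on, the decoy flips bit 63 of the direct-map offset so the resulting alias is non-canonical on current hardware while still naming the same pfn. A sketch under those assumptions; the PAGE_OFFSET value below is the classic 4-level, non-KASLR direct-map base and is assumed, not taken from this patch:

    #define PAGE_SHIFT  12
    #define PAGE_OFFSET 0xffff880000000000UL  /* assumed 4.14-era base, no KASLR */

    /* Build a non-canonical alias of the poison page: bit 63 of the
     * direct-map base is flipped, so no register ever holds the real
     * canonical 1:1 address of the page being unmapped. */
    unsigned long mce_decoy_addr(unsigned long pfn)
    {
            return (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ (1UL << 63));
    }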
3299 +diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
3300 +index f7c55b0e753a..a15db2b4e0d6 100644
3301 +--- a/arch/x86/kernel/cpu/microcode/intel.c
3302 ++++ b/arch/x86/kernel/cpu/microcode/intel.c
3303 +@@ -921,7 +921,7 @@ static bool is_blacklisted(unsigned int cpu)
3304 + */
3305 + if (c->x86 == 6 &&
3306 + c->x86_model == INTEL_FAM6_BROADWELL_X &&
3307 +- c->x86_mask == 0x01 &&
3308 ++ c->x86_stepping == 0x01 &&
3309 + llc_size_per_core > 2621440 &&
3310 + c->microcode < 0x0b000021) {
3311 + pr_err_once("Erratum BDF90: late loading with revision < 0x0b000021 (0x%x) disabled.\n", c->microcode);
3312 +@@ -944,7 +944,7 @@ static enum ucode_state request_microcode_fw(int cpu, struct device *device,
3313 + return UCODE_NFOUND;
3314 +
3315 + sprintf(name, "intel-ucode/%02x-%02x-%02x",
3316 +- c->x86, c->x86_model, c->x86_mask);
3317 ++ c->x86, c->x86_model, c->x86_stepping);
3318 +
3319 + if (request_firmware_direct(&firmware, name, device)) {
3320 + pr_debug("data file %s load failed\n", name);
3321 +@@ -982,7 +982,7 @@ static struct microcode_ops microcode_intel_ops = {
3322 +
3323 + static int __init calc_llc_size_per_core(struct cpuinfo_x86 *c)
3324 + {
3325 +- u64 llc_size = c->x86_cache_size * 1024;
3326 ++ u64 llc_size = c->x86_cache_size * 1024ULL;
3327 +
3328 + do_div(llc_size, c->x86_max_cores);
3329 +
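Note on the microcode/intel.c hunk above: the ULL suffix matters because x86_cache_size is a 32-bit integer (see the common.c and proc.c hunks in this patch), and `c->x86_cache_size * 1024` is evaluated in 32-bit arithmetic before being widened for the u64 assignment, so a large value wraps first. A standalone illustration with a deliberately oversized, hypothetical cache value:

    #include <stdio.h>

    int main(void)
    {
            unsigned int kb = 0x00800000;         /* hypothetical 8 GiB cache, in KB */
            unsigned long long a = kb * 1024;     /* 32-bit multiply wraps to 0 first */
            unsigned long long b = kb * 1024ULL;  /* 64-bit multiply: 0x200000000 */

            printf("%llx %llx\n", a, b);          /* prints: 0 200000000 */
            return 0;
    }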
3330 +diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
3331 +index fdc55215d44d..e12ee86906c6 100644
3332 +--- a/arch/x86/kernel/cpu/mtrr/generic.c
3333 ++++ b/arch/x86/kernel/cpu/mtrr/generic.c
3334 +@@ -859,7 +859,7 @@ int generic_validate_add_page(unsigned long base, unsigned long size,
3335 + */
3336 + if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
3337 + boot_cpu_data.x86_model == 1 &&
3338 +- boot_cpu_data.x86_mask <= 7) {
3339 ++ boot_cpu_data.x86_stepping <= 7) {
3340 + if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
3341 + pr_warn("mtrr: base(0x%lx000) is not 4 MiB aligned\n", base);
3342 + return -EINVAL;
3343 +diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
3344 +index 40d5a8a75212..7468de429087 100644
3345 +--- a/arch/x86/kernel/cpu/mtrr/main.c
3346 ++++ b/arch/x86/kernel/cpu/mtrr/main.c
3347 +@@ -711,8 +711,8 @@ void __init mtrr_bp_init(void)
3348 + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
3349 + boot_cpu_data.x86 == 0xF &&
3350 + boot_cpu_data.x86_model == 0x3 &&
3351 +- (boot_cpu_data.x86_mask == 0x3 ||
3352 +- boot_cpu_data.x86_mask == 0x4))
3353 ++ (boot_cpu_data.x86_stepping == 0x3 ||
3354 ++ boot_cpu_data.x86_stepping == 0x4))
3355 + phys_addr = 36;
3356 +
3357 + size_or_mask = SIZE_OR_MASK_BITS(phys_addr);
3358 +diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c
3359 +index e7ecedafa1c8..2c8522a39ed5 100644
3360 +--- a/arch/x86/kernel/cpu/proc.c
3361 ++++ b/arch/x86/kernel/cpu/proc.c
3362 +@@ -72,8 +72,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
3363 + c->x86_model,
3364 + c->x86_model_id[0] ? c->x86_model_id : "unknown");
3365 +
3366 +- if (c->x86_mask || c->cpuid_level >= 0)
3367 +- seq_printf(m, "stepping\t: %d\n", c->x86_mask);
3368 ++ if (c->x86_stepping || c->cpuid_level >= 0)
3369 ++ seq_printf(m, "stepping\t: %d\n", c->x86_stepping);
3370 + else
3371 + seq_puts(m, "stepping\t: unknown\n");
3372 + if (c->microcode)
3373 +@@ -91,8 +91,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
3374 + }
3375 +
3376 + /* Cache size */
3377 +- if (c->x86_cache_size >= 0)
3378 +- seq_printf(m, "cache size\t: %d KB\n", c->x86_cache_size);
3379 ++ if (c->x86_cache_size)
3380 ++ seq_printf(m, "cache size\t: %u KB\n", c->x86_cache_size);
3381 +
3382 + show_cpuinfo_core(m, c, cpu);
3383 + show_cpuinfo_misc(m, c);
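Note on the proc.c hunk above: it follows from the x86_cache_size type change. The field used to be a signed int with -1 meaning "unknown"; identify_cpu() now initializes it to 0 (see the common.c hunk earlier in this patch), and with an unsigned field the old `>= 0` test would be vacuously true. A minimal sketch of the pitfall (variable name illustrative):

    unsigned int cache_size = 0;    /* new "unknown" sentinel */

    if (cache_size >= 0)            /* always true for unsigned: the old test
                                     * could never skip the printout */
            ;
    if (cache_size)                 /* the replacement: print only when known */
            ;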
3384 +diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
3385 +index 1e82f787c160..c87560e1e3ef 100644
3386 +--- a/arch/x86/kernel/early-quirks.c
3387 ++++ b/arch/x86/kernel/early-quirks.c
3388 +@@ -527,6 +527,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
3389 + INTEL_SKL_IDS(&gen9_early_ops),
3390 + INTEL_BXT_IDS(&gen9_early_ops),
3391 + INTEL_KBL_IDS(&gen9_early_ops),
3392 ++ INTEL_CFL_IDS(&gen9_early_ops),
3393 + INTEL_GLK_IDS(&gen9_early_ops),
3394 + INTEL_CNL_IDS(&gen9_early_ops),
3395 + };
3396 +diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
3397 +index 9c4e7ba6870c..cbded50ee601 100644
3398 +--- a/arch/x86/kernel/espfix_64.c
3399 ++++ b/arch/x86/kernel/espfix_64.c
3400 +@@ -57,7 +57,7 @@
3401 + # error "Need more virtual address space for the ESPFIX hack"
3402 + #endif
3403 +
3404 +-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
3405 ++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
3406 +
3407 + /* This contains the *bottom* address of the espfix stack */
3408 + DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);
3409 +diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
3410 +index c29020907886..b59e4fb40fd9 100644
3411 +--- a/arch/x86/kernel/head_32.S
3412 ++++ b/arch/x86/kernel/head_32.S
3413 +@@ -37,7 +37,7 @@
3414 + #define X86 new_cpu_data+CPUINFO_x86
3415 + #define X86_VENDOR new_cpu_data+CPUINFO_x86_vendor
3416 + #define X86_MODEL new_cpu_data+CPUINFO_x86_model
3417 +-#define X86_MASK new_cpu_data+CPUINFO_x86_mask
3418 ++#define X86_STEPPING new_cpu_data+CPUINFO_x86_stepping
3419 + #define X86_HARD_MATH new_cpu_data+CPUINFO_hard_math
3420 + #define X86_CPUID new_cpu_data+CPUINFO_cpuid_level
3421 + #define X86_CAPABILITY new_cpu_data+CPUINFO_x86_capability
3422 +@@ -332,7 +332,7 @@ ENTRY(startup_32_smp)
3423 + shrb $4,%al
3424 + movb %al,X86_MODEL
3425 + andb $0x0f,%cl # mask mask revision
3426 +- movb %cl,X86_MASK
3427 ++ movb %cl,X86_STEPPING
3428 + movl %edx,X86_CAPABILITY
3429 +
3430 + .Lis486:
3431 +diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
3432 +index 3a4b12809ab5..bc6bc6689e68 100644
3433 +--- a/arch/x86/kernel/mpparse.c
3434 ++++ b/arch/x86/kernel/mpparse.c
3435 +@@ -407,7 +407,7 @@ static inline void __init construct_default_ISA_mptable(int mpc_default_type)
3436 + processor.apicver = mpc_default_type > 4 ? 0x10 : 0x01;
3437 + processor.cpuflag = CPU_ENABLED;
3438 + processor.cpufeature = (boot_cpu_data.x86 << 8) |
3439 +- (boot_cpu_data.x86_model << 4) | boot_cpu_data.x86_mask;
3440 ++ (boot_cpu_data.x86_model << 4) | boot_cpu_data.x86_stepping;
3441 + processor.featureflag = boot_cpu_data.x86_capability[CPUID_1_EDX];
3442 + processor.reserved[0] = 0;
3443 + processor.reserved[1] = 0;
3444 +diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
3445 +index 19a3e8f961c7..e1df9ef5d78c 100644
3446 +--- a/arch/x86/kernel/paravirt.c
3447 ++++ b/arch/x86/kernel/paravirt.c
3448 +@@ -190,9 +190,9 @@ static void native_flush_tlb_global(void)
3449 + __native_flush_tlb_global();
3450 + }
3451 +
3452 +-static void native_flush_tlb_single(unsigned long addr)
3453 ++static void native_flush_tlb_one_user(unsigned long addr)
3454 + {
3455 +- __native_flush_tlb_single(addr);
3456 ++ __native_flush_tlb_one_user(addr);
3457 + }
3458 +
3459 + struct static_key paravirt_steal_enabled;
3460 +@@ -391,7 +391,7 @@ struct pv_mmu_ops pv_mmu_ops __ro_after_init = {
3461 +
3462 + .flush_tlb_user = native_flush_tlb,
3463 + .flush_tlb_kernel = native_flush_tlb_global,
3464 +- .flush_tlb_single = native_flush_tlb_single,
3465 ++ .flush_tlb_one_user = native_flush_tlb_one_user,
3466 + .flush_tlb_others = native_flush_tlb_others,
3467 +
3468 + .pgd_alloc = __paravirt_pgd_alloc,
3469 +diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
3470 +index 307d3bac5f04..11eda21eb697 100644
3471 +--- a/arch/x86/kernel/relocate_kernel_64.S
3472 ++++ b/arch/x86/kernel/relocate_kernel_64.S
3473 +@@ -68,6 +68,9 @@ relocate_kernel:
3474 + movq %cr4, %rax
3475 + movq %rax, CR4(%r11)
3476 +
3477 ++ /* Save CR4. Required to enable the right paging mode later. */
3478 ++ movq %rax, %r13
3479 ++
3480 + /* zero out flags, and disable interrupts */
3481 + pushq $0
3482 + popfq
3483 +@@ -126,8 +129,13 @@ identity_mapped:
3484 + /*
3485 + * Set cr4 to a known state:
3486 + * - physical address extension enabled
3487 ++ * - 5-level paging, if it was enabled before
3488 + */
3489 + movl $X86_CR4_PAE, %eax
3490 ++ testq $X86_CR4_LA57, %r13
3491 ++ jz 1f
3492 ++ orl $X86_CR4_LA57, %eax
3493 ++1:
3494 + movq %rax, %cr4
3495 +
3496 + jmp 1f
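Note on the relocate_kernel_64.S hunk above: the entry-time CR4 is stashed in %r13 so that when CR4 is later reset to a "known state", 5-level paging (LA57) stays enabled if it was on; clearing LA57 while the active page tables assume five levels would fault during kexec. A C rendering of the bit logic, as a standalone sketch (the CR4 bit positions below are per the SDM, and rebuild_cr4() is an illustrative name):

    #define X86_CR4_PAE  (1UL << 5)
    #define X86_CR4_LA57 (1UL << 12)

    unsigned long rebuild_cr4(unsigned long saved_cr4)
    {
            unsigned long cr4 = X86_CR4_PAE;        /* baseline: PAE only */

            if (saved_cr4 & X86_CR4_LA57)           /* 5-level paging was on */
                    cr4 |= X86_CR4_LA57;            /* so keep it on */
            return cr4;
    }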
3497 +diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
3498 +index b33e860d32fe..a66428dc92ae 100644
3499 +--- a/arch/x86/kernel/traps.c
3500 ++++ b/arch/x86/kernel/traps.c
3501 +@@ -42,7 +42,6 @@
3502 + #include <linux/edac.h>
3503 + #endif
3504 +
3505 +-#include <asm/kmemcheck.h>
3506 + #include <asm/stacktrace.h>
3507 + #include <asm/processor.h>
3508 + #include <asm/debugreg.h>
3509 +@@ -181,7 +180,7 @@ int fixup_bug(struct pt_regs *regs, int trapnr)
3510 + break;
3511 +
3512 + case BUG_TRAP_TYPE_WARN:
3513 +- regs->ip += LEN_UD0;
3514 ++ regs->ip += LEN_UD2;
3515 + return 1;
3516 + }
3517 +
3518 +@@ -764,10 +763,6 @@ dotraplinkage void do_debug(struct pt_regs *regs, long error_code)
3519 + if (!dr6 && user_mode(regs))
3520 + user_icebp = 1;
3521 +
3522 +- /* Catch kmemcheck conditions! */
3523 +- if ((dr6 & DR_STEP) && kmemcheck_trap(regs))
3524 +- goto exit;
3525 +-
3526 + /* Store the virtualized DR6 value */
3527 + tsk->thread.debugreg6 = dr6;
3528 +
3529 +diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
3530 +index beb7f8795bc1..ca000fc644bc 100644
3531 +--- a/arch/x86/kvm/mmu.c
3532 ++++ b/arch/x86/kvm/mmu.c
3533 +@@ -5063,7 +5063,7 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
3534 + typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
3535 +
3536 + /* The caller should hold mmu-lock before calling this function. */
3537 +-static bool
3538 ++static __always_inline bool
3539 + slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
3540 + slot_level_handler fn, int start_level, int end_level,
3541 + gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb)
3542 +@@ -5093,7 +5093,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
3543 + return flush;
3544 + }
3545 +
3546 +-static bool
3547 ++static __always_inline bool
3548 + slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
3549 + slot_level_handler fn, int start_level, int end_level,
3550 + bool lock_flush_tlb)
3551 +@@ -5104,7 +5104,7 @@ slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
3552 + lock_flush_tlb);
3553 + }
3554 +
3555 +-static bool
3556 ++static __always_inline bool
3557 + slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
3558 + slot_level_handler fn, bool lock_flush_tlb)
3559 + {
3560 +@@ -5112,7 +5112,7 @@ slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
3561 + PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
3562 + }
3563 +
3564 +-static bool
3565 ++static __always_inline bool
3566 + slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
3567 + slot_level_handler fn, bool lock_flush_tlb)
3568 + {
3569 +@@ -5120,7 +5120,7 @@ slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
3570 + PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
3571 + }
3572 +
3573 +-static bool
3574 ++static __always_inline bool
3575 + slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot,
3576 + slot_level_handler fn, bool lock_flush_tlb)
3577 + {
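Note on the kvm/mmu.c hunk above: each slot_handle_*() walker takes a slot_level_handler function pointer, and with retpolines every indirect call goes through a thunk. Forcing the walkers inline exposes `fn` as a compile-time constant at each call site, so the compiler can devirtualize the call. A standalone sketch of the effect (the names walk_slots()/clear_slot() are illustrative; only the __always_inline-plus-callback pattern comes from the hunk):

    #include <stdbool.h>

    /* The kernel's __always_inline boils down to this attribute. */
    #define __always_inline inline __attribute__((always_inline))

    typedef bool (*slot_handler)(int slot);

    static bool clear_slot(int slot) { return slot != 0; }

    /* Once inlined into walk_all_slots(), 'fn' is the constant clear_slot,
     * so the indirect (retpoline-thunked) call becomes a direct one. */
    static __always_inline bool walk_slots(slot_handler fn, int start, int end)
    {
            bool flush = false;
            int i;

            for (i = start; i <= end; i++)
                    flush |= fn(i);
            return flush;
    }

    bool walk_all_slots(void) { return walk_slots(clear_slot, 0, 7); }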
3578 +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
3579 +index 0ea909ca45c2..dd35c6c50516 100644
3580 +--- a/arch/x86/kvm/vmx.c
3581 ++++ b/arch/x86/kvm/vmx.c
3582 +@@ -10127,7 +10127,8 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
3583 + if (cpu_has_vmx_msr_bitmap() &&
3584 + nested_cpu_has(vmcs12, CPU_BASED_USE_MSR_BITMAPS) &&
3585 + nested_vmx_merge_msr_bitmap(vcpu, vmcs12))
3586 +- ;
3587 ++ vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
3588 ++ CPU_BASED_USE_MSR_BITMAPS);
3589 + else
3590 + vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
3591 + CPU_BASED_USE_MSR_BITMAPS);
3592 +@@ -10216,8 +10217,8 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
3593 + * updated to reflect this when L1 (or its L2s) actually write to
3594 + * the MSR.
3595 + */
3596 +- bool pred_cmd = msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
3597 +- bool spec_ctrl = msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL);
3598 ++ bool pred_cmd = !msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
3599 ++ bool spec_ctrl = !msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL);
3600 +
3601 + if (!nested_cpu_has_virt_x2apic_mode(vmcs12) &&
3602 + !pred_cmd && !spec_ctrl)
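Note on the kvm/vmx.c hunk above: it fixes two inverted pieces of the nested MSR-bitmap logic. First, a successful nested_vmx_merge_msr_bitmap() now actually sets CPU_BASED_USE_MSR_BITMAPS instead of doing nothing. Second, msr_write_intercepted_l01() returns true when L0 intercepts the MSR, but the merge is needed precisely when the MSR is passed through, hence the added negations. The fixed lines, annotated:

    /* intercepted == true  -> writes trap to L0; nothing to merge.
     * intercepted == false -> the MSR is passed through to L1, so the
     *                         merged L0/L2 bitmap must reflect that. */
    bool pred_cmd  = !msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
    bool spec_ctrl = !msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL);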
3603 +diff --git a/arch/x86/lib/cpu.c b/arch/x86/lib/cpu.c
3604 +index d6f848d1211d..2dd1fe13a37b 100644
3605 +--- a/arch/x86/lib/cpu.c
3606 ++++ b/arch/x86/lib/cpu.c
3607 +@@ -18,7 +18,7 @@ unsigned int x86_model(unsigned int sig)
3608 + {
3609 + unsigned int fam, model;
3610 +
3611 +- fam = x86_family(sig);
3612 ++ fam = x86_family(sig);
3613 +
3614 + model = (sig >> 4) & 0xf;
3615 +
3616 +diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
3617 +index 52906808e277..27e9e90a8d35 100644
3618 +--- a/arch/x86/mm/Makefile
3619 ++++ b/arch/x86/mm/Makefile
3620 +@@ -29,8 +29,6 @@ obj-$(CONFIG_X86_PTDUMP) += debug_pagetables.o
3621 +
3622 + obj-$(CONFIG_HIGHMEM) += highmem_32.o
3623 +
3624 +-obj-$(CONFIG_KMEMCHECK) += kmemcheck/
3625 +-
3626 + KASAN_SANITIZE_kasan_init_$(BITS).o := n
3627 + obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o
3628 +
3629 +diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
3630 +index b264b590eeec..9150fe2c9b26 100644
3631 +--- a/arch/x86/mm/fault.c
3632 ++++ b/arch/x86/mm/fault.c
3633 +@@ -20,7 +20,6 @@
3634 + #include <asm/cpufeature.h> /* boot_cpu_has, ... */
3635 + #include <asm/traps.h> /* dotraplinkage, ... */
3636 + #include <asm/pgalloc.h> /* pgd_*(), ... */
3637 +-#include <asm/kmemcheck.h> /* kmemcheck_*(), ... */
3638 + #include <asm/fixmap.h> /* VSYSCALL_ADDR */
3639 + #include <asm/vsyscall.h> /* emulate_vsyscall */
3640 + #include <asm/vm86.h> /* struct vm86 */
3641 +@@ -1257,8 +1256,6 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
3642 + * Detect and handle instructions that would cause a page fault for
3643 + * both a tracked kernel page and a userspace page.
3644 + */
3645 +- if (kmemcheck_active(regs))
3646 +- kmemcheck_hide(regs);
3647 + prefetchw(&mm->mmap_sem);
3648 +
3649 + if (unlikely(kmmio_fault(regs, address)))
3650 +@@ -1281,9 +1278,6 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
3651 + if (!(error_code & (X86_PF_RSVD | X86_PF_USER | X86_PF_PROT))) {
3652 + if (vmalloc_fault(address) >= 0)
3653 + return;
3654 +-
3655 +- if (kmemcheck_fault(regs, address, error_code))
3656 +- return;
3657 + }
3658 +
3659 + /* Can handle a stale RO->RW TLB: */
3660 +diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
3661 +index 6b462a472a7b..82f5252c723a 100644
3662 +--- a/arch/x86/mm/init.c
3663 ++++ b/arch/x86/mm/init.c
3664 +@@ -93,8 +93,7 @@ __ref void *alloc_low_pages(unsigned int num)
3665 + unsigned int order;
3666 +
3667 + order = get_order((unsigned long)num << PAGE_SHIFT);
3668 +- return (void *)__get_free_pages(GFP_ATOMIC | __GFP_NOTRACK |
3669 +- __GFP_ZERO, order);
3670 ++ return (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
3671 + }
3672 +
3673 + if ((pgt_buf_end + num) > pgt_buf_top || !can_use_brk_pgt) {
3674 +@@ -171,12 +170,11 @@ static void enable_global_pages(void)
3675 + static void __init probe_page_size_mask(void)
3676 + {
3677 + /*
3678 +- * For CONFIG_KMEMCHECK or pagealloc debugging, identity mapping will
3679 +- * use small pages.
3680 ++ * For pagealloc debugging, identity mapping will use small pages.
3681 + * This will simplify cpa(), which otherwise needs to support splitting
3682 + * large pages into small in interrupt context, etc.
3683 + */
3684 +- if (boot_cpu_has(X86_FEATURE_PSE) && !debug_pagealloc_enabled() && !IS_ENABLED(CONFIG_KMEMCHECK))
3685 ++ if (boot_cpu_has(X86_FEATURE_PSE) && !debug_pagealloc_enabled())
3686 + page_size_mask |= 1 << PG_LEVEL_2M;
3687 + else
3688 + direct_gbpages = 0;
3689 +diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
3690 +index adcea90a2046..fe85d1204db8 100644
3691 +--- a/arch/x86/mm/init_64.c
3692 ++++ b/arch/x86/mm/init_64.c
3693 +@@ -184,7 +184,7 @@ static __ref void *spp_getpage(void)
3694 + void *ptr;
3695 +
3696 + if (after_bootmem)
3697 +- ptr = (void *) get_zeroed_page(GFP_ATOMIC | __GFP_NOTRACK);
3698 ++ ptr = (void *) get_zeroed_page(GFP_ATOMIC);
3699 + else
3700 + ptr = alloc_bootmem_pages(PAGE_SIZE);
3701 +
3702 +@@ -256,7 +256,7 @@ static void __set_pte_vaddr(pud_t *pud, unsigned long vaddr, pte_t new_pte)
3703 + * It's enough to flush this one mapping.
3704 + * (PGE mappings get flushed as well)
3705 + */
3706 +- __flush_tlb_one(vaddr);
3707 ++ __flush_tlb_one_kernel(vaddr);
3708 + }
3709 +
3710 + void set_pte_vaddr_p4d(p4d_t *p4d_page, unsigned long vaddr, pte_t new_pte)
3711 +diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
3712 +index 34f0e1847dd6..bb120e59c597 100644
3713 +--- a/arch/x86/mm/ioremap.c
3714 ++++ b/arch/x86/mm/ioremap.c
3715 +@@ -749,5 +749,5 @@ void __init __early_set_fixmap(enum fixed_addresses idx,
3716 + set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags));
3717 + else
3718 + pte_clear(&init_mm, addr, pte);
3719 +- __flush_tlb_one(addr);
3720 ++ __flush_tlb_one_kernel(addr);
3721 + }
3722 +diff --git a/arch/x86/mm/kmemcheck/Makefile b/arch/x86/mm/kmemcheck/Makefile
3723 +deleted file mode 100644
3724 +index 520b3bce4095..000000000000
3725 +--- a/arch/x86/mm/kmemcheck/Makefile
3726 ++++ /dev/null
3727 +@@ -1 +0,0 @@
3728 +-obj-y := error.o kmemcheck.o opcode.o pte.o selftest.o shadow.o
3729 +diff --git a/arch/x86/mm/kmemcheck/error.c b/arch/x86/mm/kmemcheck/error.c
3730 +deleted file mode 100644
3731 +index 872ec4159a68..000000000000
3732 +--- a/arch/x86/mm/kmemcheck/error.c
3733 ++++ /dev/null
3734 +@@ -1,228 +0,0 @@
3735 +-// SPDX-License-Identifier: GPL-2.0
3736 +-#include <linux/interrupt.h>
3737 +-#include <linux/kdebug.h>
3738 +-#include <linux/kmemcheck.h>
3739 +-#include <linux/kernel.h>
3740 +-#include <linux/types.h>
3741 +-#include <linux/ptrace.h>
3742 +-#include <linux/stacktrace.h>
3743 +-#include <linux/string.h>
3744 +-
3745 +-#include "error.h"
3746 +-#include "shadow.h"
3747 +-
3748 +-enum kmemcheck_error_type {
3749 +- KMEMCHECK_ERROR_INVALID_ACCESS,
3750 +- KMEMCHECK_ERROR_BUG,
3751 +-};
3752 +-
3753 +-#define SHADOW_COPY_SIZE (1 << CONFIG_KMEMCHECK_SHADOW_COPY_SHIFT)
3754 +-
3755 +-struct kmemcheck_error {
3756 +- enum kmemcheck_error_type type;
3757 +-
3758 +- union {
3759 +- /* KMEMCHECK_ERROR_INVALID_ACCESS */
3760 +- struct {
3761 +- /* Kind of access that caused the error */
3762 +- enum kmemcheck_shadow state;
3763 +- /* Address and size of the erroneous read */
3764 +- unsigned long address;
3765 +- unsigned int size;
3766 +- };
3767 +- };
3768 +-
3769 +- struct pt_regs regs;
3770 +- struct stack_trace trace;
3771 +- unsigned long trace_entries[32];
3772 +-
3773 +- /* We compress it to a char. */
3774 +- unsigned char shadow_copy[SHADOW_COPY_SIZE];
3775 +- unsigned char memory_copy[SHADOW_COPY_SIZE];
3776 +-};
3777 +-
3778 +-/*
3779 +- * Create a ring queue of errors to output. We can't call printk() directly
3780 +- * from the kmemcheck traps, since this may call the console drivers and
3781 +- * result in a recursive fault.
3782 +- */
3783 +-static struct kmemcheck_error error_fifo[CONFIG_KMEMCHECK_QUEUE_SIZE];
3784 +-static unsigned int error_count;
3785 +-static unsigned int error_rd;
3786 +-static unsigned int error_wr;
3787 +-static unsigned int error_missed_count;
3788 +-
3789 +-static struct kmemcheck_error *error_next_wr(void)
3790 +-{
3791 +- struct kmemcheck_error *e;
3792 +-
3793 +- if (error_count == ARRAY_SIZE(error_fifo)) {
3794 +- ++error_missed_count;
3795 +- return NULL;
3796 +- }
3797 +-
3798 +- e = &error_fifo[error_wr];
3799 +- if (++error_wr == ARRAY_SIZE(error_fifo))
3800 +- error_wr = 0;
3801 +- ++error_count;
3802 +- return e;
3803 +-}
3804 +-
3805 +-static struct kmemcheck_error *error_next_rd(void)
3806 +-{
3807 +- struct kmemcheck_error *e;
3808 +-
3809 +- if (error_count == 0)
3810 +- return NULL;
3811 +-
3812 +- e = &error_fifo[error_rd];
3813 +- if (++error_rd == ARRAY_SIZE(error_fifo))
3814 +- error_rd = 0;
3815 +- --error_count;
3816 +- return e;
3817 +-}
3818 +-
3819 +-void kmemcheck_error_recall(void)
3820 +-{
3821 +- static const char *desc[] = {
3822 +- [KMEMCHECK_SHADOW_UNALLOCATED] = "unallocated",
3823 +- [KMEMCHECK_SHADOW_UNINITIALIZED] = "uninitialized",
3824 +- [KMEMCHECK_SHADOW_INITIALIZED] = "initialized",
3825 +- [KMEMCHECK_SHADOW_FREED] = "freed",
3826 +- };
3827 +-
3828 +- static const char short_desc[] = {
3829 +- [KMEMCHECK_SHADOW_UNALLOCATED] = 'a',
3830 +- [KMEMCHECK_SHADOW_UNINITIALIZED] = 'u',
3831 +- [KMEMCHECK_SHADOW_INITIALIZED] = 'i',
3832 +- [KMEMCHECK_SHADOW_FREED] = 'f',
3833 +- };
3834 +-
3835 +- struct kmemcheck_error *e;
3836 +- unsigned int i;
3837 +-
3838 +- e = error_next_rd();
3839 +- if (!e)
3840 +- return;
3841 +-
3842 +- switch (e->type) {
3843 +- case KMEMCHECK_ERROR_INVALID_ACCESS:
3844 +- printk(KERN_WARNING "WARNING: kmemcheck: Caught %d-bit read from %s memory (%p)\n",
3845 +- 8 * e->size, e->state < ARRAY_SIZE(desc) ?
3846 +- desc[e->state] : "(invalid shadow state)",
3847 +- (void *) e->address);
3848 +-
3849 +- printk(KERN_WARNING);
3850 +- for (i = 0; i < SHADOW_COPY_SIZE; ++i)
3851 +- printk(KERN_CONT "%02x", e->memory_copy[i]);
3852 +- printk(KERN_CONT "\n");
3853 +-
3854 +- printk(KERN_WARNING);
3855 +- for (i = 0; i < SHADOW_COPY_SIZE; ++i) {
3856 +- if (e->shadow_copy[i] < ARRAY_SIZE(short_desc))
3857 +- printk(KERN_CONT " %c", short_desc[e->shadow_copy[i]]);
3858 +- else
3859 +- printk(KERN_CONT " ?");
3860 +- }
3861 +- printk(KERN_CONT "\n");
3862 +- printk(KERN_WARNING "%*c\n", 2 + 2
3863 +- * (int) (e->address & (SHADOW_COPY_SIZE - 1)), '^');
3864 +- break;
3865 +- case KMEMCHECK_ERROR_BUG:
3866 +- printk(KERN_EMERG "ERROR: kmemcheck: Fatal error\n");
3867 +- break;
3868 +- }
3869 +-
3870 +- __show_regs(&e->regs, 1);
3871 +- print_stack_trace(&e->trace, 0);
3872 +-}
3873 +-
3874 +-static void do_wakeup(unsigned long data)
3875 +-{
3876 +- while (error_count > 0)
3877 +- kmemcheck_error_recall();
3878 +-
3879 +- if (error_missed_count > 0) {
3880 +- printk(KERN_WARNING "kmemcheck: Lost %d error reports because "
3881 +- "the queue was too small\n", error_missed_count);
3882 +- error_missed_count = 0;
3883 +- }
3884 +-}
3885 +-
3886 +-static DECLARE_TASKLET(kmemcheck_tasklet, &do_wakeup, 0);
3887 +-
3888 +-/*
3889 +- * Save the context of an error report.
3890 +- */
3891 +-void kmemcheck_error_save(enum kmemcheck_shadow state,
3892 +- unsigned long address, unsigned int size, struct pt_regs *regs)
3893 +-{
3894 +- static unsigned long prev_ip;
3895 +-
3896 +- struct kmemcheck_error *e;
3897 +- void *shadow_copy;
3898 +- void *memory_copy;
3899 +-
3900 +- /* Don't report several adjacent errors from the same EIP. */
3901 +- if (regs->ip == prev_ip)
3902 +- return;
3903 +- prev_ip = regs->ip;
3904 +-
3905 +- e = error_next_wr();
3906 +- if (!e)
3907 +- return;
3908 +-
3909 +- e->type = KMEMCHECK_ERROR_INVALID_ACCESS;
3910 +-
3911 +- e->state = state;
3912 +- e->address = address;
3913 +- e->size = size;
3914 +-
3915 +- /* Save regs */
3916 +- memcpy(&e->regs, regs, sizeof(*regs));
3917 +-
3918 +- /* Save stack trace */
3919 +- e->trace.nr_entries = 0;
3920 +- e->trace.entries = e->trace_entries;
3921 +- e->trace.max_entries = ARRAY_SIZE(e->trace_entries);
3922 +- e->trace.skip = 0;
3923 +- save_stack_trace_regs(regs, &e->trace);
3924 +-
3925 +- /* Round address down to nearest 16 bytes */
3926 +- shadow_copy = kmemcheck_shadow_lookup(address
3927 +- & ~(SHADOW_COPY_SIZE - 1));
3928 +- BUG_ON(!shadow_copy);
3929 +-
3930 +- memcpy(e->shadow_copy, shadow_copy, SHADOW_COPY_SIZE);
3931 +-
3932 +- kmemcheck_show_addr(address);
3933 +- memory_copy = (void *) (address & ~(SHADOW_COPY_SIZE - 1));
3934 +- memcpy(e->memory_copy, memory_copy, SHADOW_COPY_SIZE);
3935 +- kmemcheck_hide_addr(address);
3936 +-
3937 +- tasklet_hi_schedule_first(&kmemcheck_tasklet);
3938 +-}
3939 +-
3940 +-/*
3941 +- * Save the context of a kmemcheck bug.
3942 +- */
3943 +-void kmemcheck_error_save_bug(struct pt_regs *regs)
3944 +-{
3945 +- struct kmemcheck_error *e;
3946 +-
3947 +- e = error_next_wr();
3948 +- if (!e)
3949 +- return;
3950 +-
3951 +- e->type = KMEMCHECK_ERROR_BUG;
3952 +-
3953 +- memcpy(&e->regs, regs, sizeof(*regs));
3954 +-
3955 +- e->trace.nr_entries = 0;
3956 +- e->trace.entries = e->trace_entries;
3957 +- e->trace.max_entries = ARRAY_SIZE(e->trace_entries);
3958 +- e->trace.skip = 1;
3959 +- save_stack_trace(&e->trace);
3960 +-
3961 +- tasklet_hi_schedule_first(&kmemcheck_tasklet);
3962 +-}
3963 +diff --git a/arch/x86/mm/kmemcheck/error.h b/arch/x86/mm/kmemcheck/error.h
3964 +deleted file mode 100644
3965 +index 39f80d7a874d..000000000000
3966 +--- a/arch/x86/mm/kmemcheck/error.h
3967 ++++ /dev/null
3968 +@@ -1,16 +0,0 @@
3969 +-/* SPDX-License-Identifier: GPL-2.0 */
3970 +-#ifndef ARCH__X86__MM__KMEMCHECK__ERROR_H
3971 +-#define ARCH__X86__MM__KMEMCHECK__ERROR_H
3972 +-
3973 +-#include <linux/ptrace.h>
3974 +-
3975 +-#include "shadow.h"
3976 +-
3977 +-void kmemcheck_error_save(enum kmemcheck_shadow state,
3978 +- unsigned long address, unsigned int size, struct pt_regs *regs);
3979 +-
3980 +-void kmemcheck_error_save_bug(struct pt_regs *regs);
3981 +-
3982 +-void kmemcheck_error_recall(void);
3983 +-
3984 +-#endif
3985 +diff --git a/arch/x86/mm/kmemcheck/kmemcheck.c b/arch/x86/mm/kmemcheck/kmemcheck.c
3986 +deleted file mode 100644
3987 +index 4515bae36bbe..000000000000
3988 +--- a/arch/x86/mm/kmemcheck/kmemcheck.c
3989 ++++ /dev/null
3990 +@@ -1,658 +0,0 @@
3991 +-/**
3992 +- * kmemcheck - a heavyweight memory checker for the linux kernel
3993 +- * Copyright (C) 2007, 2008 Vegard Nossum <vegardno@×××××××.no>
3994 +- * (With a lot of help from Ingo Molnar and Pekka Enberg.)
3995 +- *
3996 +- * This program is free software; you can redistribute it and/or modify
3997 +- * it under the terms of the GNU General Public License (version 2) as
3998 +- * published by the Free Software Foundation.
3999 +- */
4000 +-
4001 +-#include <linux/init.h>
4002 +-#include <linux/interrupt.h>
4003 +-#include <linux/kallsyms.h>
4004 +-#include <linux/kernel.h>
4005 +-#include <linux/kmemcheck.h>
4006 +-#include <linux/mm.h>
4007 +-#include <linux/page-flags.h>
4008 +-#include <linux/percpu.h>
4009 +-#include <linux/ptrace.h>
4010 +-#include <linux/string.h>
4011 +-#include <linux/types.h>
4012 +-
4013 +-#include <asm/cacheflush.h>
4014 +-#include <asm/kmemcheck.h>
4015 +-#include <asm/pgtable.h>
4016 +-#include <asm/tlbflush.h>
4017 +-
4018 +-#include "error.h"
4019 +-#include "opcode.h"
4020 +-#include "pte.h"
4021 +-#include "selftest.h"
4022 +-#include "shadow.h"
4023 +-
4024 +-
4025 +-#ifdef CONFIG_KMEMCHECK_DISABLED_BY_DEFAULT
4026 +-# define KMEMCHECK_ENABLED 0
4027 +-#endif
4028 +-
4029 +-#ifdef CONFIG_KMEMCHECK_ENABLED_BY_DEFAULT
4030 +-# define KMEMCHECK_ENABLED 1
4031 +-#endif
4032 +-
4033 +-#ifdef CONFIG_KMEMCHECK_ONESHOT_BY_DEFAULT
4034 +-# define KMEMCHECK_ENABLED 2
4035 +-#endif
4036 +-
4037 +-int kmemcheck_enabled = KMEMCHECK_ENABLED;
4038 +-
4039 +-int __init kmemcheck_init(void)
4040 +-{
4041 +-#ifdef CONFIG_SMP
4042 +- /*
4043 +- * Limit SMP to use a single CPU. We rely on the fact that this code
4044 +- * runs before SMP is set up.
4045 +- */
4046 +- if (setup_max_cpus > 1) {
4047 +- printk(KERN_INFO
4048 +- "kmemcheck: Limiting number of CPUs to 1.\n");
4049 +- setup_max_cpus = 1;
4050 +- }
4051 +-#endif
4052 +-
4053 +- if (!kmemcheck_selftest()) {
4054 +- printk(KERN_INFO "kmemcheck: self-tests failed; disabling\n");
4055 +- kmemcheck_enabled = 0;
4056 +- return -EINVAL;
4057 +- }
4058 +-
4059 +- printk(KERN_INFO "kmemcheck: Initialized\n");
4060 +- return 0;
4061 +-}
4062 +-
4063 +-early_initcall(kmemcheck_init);
4064 +-
4065 +-/*
4066 +- * We need to parse the kmemcheck= option before any memory is allocated.
4067 +- */
4068 +-static int __init param_kmemcheck(char *str)
4069 +-{
4070 +- int val;
4071 +- int ret;
4072 +-
4073 +- if (!str)
4074 +- return -EINVAL;
4075 +-
4076 +- ret = kstrtoint(str, 0, &val);
4077 +- if (ret)
4078 +- return ret;
4079 +- kmemcheck_enabled = val;
4080 +- return 0;
4081 +-}
4082 +-
4083 +-early_param("kmemcheck", param_kmemcheck);
4084 +-
4085 +-int kmemcheck_show_addr(unsigned long address)
4086 +-{
4087 +- pte_t *pte;
4088 +-
4089 +- pte = kmemcheck_pte_lookup(address);
4090 +- if (!pte)
4091 +- return 0;
4092 +-
4093 +- set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
4094 +- __flush_tlb_one(address);
4095 +- return 1;
4096 +-}
4097 +-
4098 +-int kmemcheck_hide_addr(unsigned long address)
4099 +-{
4100 +- pte_t *pte;
4101 +-
4102 +- pte = kmemcheck_pte_lookup(address);
4103 +- if (!pte)
4104 +- return 0;
4105 +-
4106 +- set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
4107 +- __flush_tlb_one(address);
4108 +- return 1;
4109 +-}
4110 +-
4111 +-struct kmemcheck_context {
4112 +- bool busy;
4113 +- int balance;
4114 +-
4115 +- /*
4116 +- * There can be at most two memory operands to an instruction, but
4117 +- * each address can cross a page boundary -- so we may need up to
4118 +- * four addresses that must be hidden/revealed for each fault.
4119 +- */
4120 +- unsigned long addr[4];
4121 +- unsigned long n_addrs;
4122 +- unsigned long flags;
4123 +-
4124 +- /* Data size of the instruction that caused a fault. */
4125 +- unsigned int size;
4126 +-};
4127 +-
4128 +-static DEFINE_PER_CPU(struct kmemcheck_context, kmemcheck_context);
4129 +-
4130 +-bool kmemcheck_active(struct pt_regs *regs)
4131 +-{
4132 +- struct kmemcheck_context *data = this_cpu_ptr(&kmemcheck_context);
4133 +-
4134 +- return data->balance > 0;
4135 +-}
4136 +-
4137 +-/* Save an address that needs to be shown/hidden */
4138 +-static void kmemcheck_save_addr(unsigned long addr)
4139 +-{
4140 +- struct kmemcheck_context *data = this_cpu_ptr(&kmemcheck_context);
4141 +-
4142 +- BUG_ON(data->n_addrs >= ARRAY_SIZE(data->addr));
4143 +- data->addr[data->n_addrs++] = addr;
4144 +-}
4145 +-
4146 +-static unsigned int kmemcheck_show_all(void)
4147 +-{
4148 +- struct kmemcheck_context *data = this_cpu_ptr(&kmemcheck_context);
4149 +- unsigned int i;
4150 +- unsigned int n;
4151 +-
4152 +- n = 0;
4153 +- for (i = 0; i < data->n_addrs; ++i)
4154 +- n += kmemcheck_show_addr(data->addr[i]);
4155 +-
4156 +- return n;
4157 +-}
4158 +-
4159 +-static unsigned int kmemcheck_hide_all(void)
4160 +-{
4161 +- struct kmemcheck_context *data = this_cpu_ptr(&kmemcheck_context);
4162 +- unsigned int i;
4163 +- unsigned int n;
4164 +-
4165 +- n = 0;
4166 +- for (i = 0; i < data->n_addrs; ++i)
4167 +- n += kmemcheck_hide_addr(data->addr[i]);
4168 +-
4169 +- return n;
4170 +-}
4171 +-
4172 +-/*
4173 +- * Called from the #PF handler.
4174 +- */
4175 +-void kmemcheck_show(struct pt_regs *regs)
4176 +-{
4177 +- struct kmemcheck_context *data = this_cpu_ptr(&kmemcheck_context);
4178 +-
4179 +- BUG_ON(!irqs_disabled());
4180 +-
4181 +- if (unlikely(data->balance != 0)) {
4182 +- kmemcheck_show_all();
4183 +- kmemcheck_error_save_bug(regs);
4184 +- data->balance = 0;
4185 +- return;
4186 +- }
4187 +-
4188 +- /*
4189 +- * None of the addresses actually belonged to kmemcheck. Note that
4190 +- * this is not an error.
4191 +- */
4192 +- if (kmemcheck_show_all() == 0)
4193 +- return;
4194 +-
4195 +- ++data->balance;
4196 +-
4197 +- /*
4198 +- * The IF needs to be cleared as well, so that the faulting
4199 +- * instruction can run "uninterrupted". Otherwise, we might take
4200 +- * an interrupt and start executing that before we've had a chance
4201 +- * to hide the page again.
4202 +- *
4203 +- * NOTE: In the rare case of multiple faults, we must not override
4204 +- * the original flags:
4205 +- */
4206 +- if (!(regs->flags & X86_EFLAGS_TF))
4207 +- data->flags = regs->flags;
4208 +-
4209 +- regs->flags |= X86_EFLAGS_TF;
4210 +- regs->flags &= ~X86_EFLAGS_IF;
4211 +-}
4212 +-
4213 +-/*
4214 +- * Called from the #DB handler.
4215 +- */
4216 +-void kmemcheck_hide(struct pt_regs *regs)
4217 +-{
4218 +- struct kmemcheck_context *data = this_cpu_ptr(&kmemcheck_context);
4219 +- int n;
4220 +-
4221 +- BUG_ON(!irqs_disabled());
4222 +-
4223 +- if (unlikely(data->balance != 1)) {
4224 +- kmemcheck_show_all();
4225 +- kmemcheck_error_save_bug(regs);
4226 +- data->n_addrs = 0;
4227 +- data->balance = 0;
4228 +-
4229 +- if (!(data->flags & X86_EFLAGS_TF))
4230 +- regs->flags &= ~X86_EFLAGS_TF;
4231 +- if (data->flags & X86_EFLAGS_IF)
4232 +- regs->flags |= X86_EFLAGS_IF;
4233 +- return;
4234 +- }
4235 +-
4236 +- if (kmemcheck_enabled)
4237 +- n = kmemcheck_hide_all();
4238 +- else
4239 +- n = kmemcheck_show_all();
4240 +-
4241 +- if (n == 0)
4242 +- return;
4243 +-
4244 +- --data->balance;
4245 +-
4246 +- data->n_addrs = 0;
4247 +-
4248 +- if (!(data->flags & X86_EFLAGS_TF))
4249 +- regs->flags &= ~X86_EFLAGS_TF;
4250 +- if (data->flags & X86_EFLAGS_IF)
4251 +- regs->flags |= X86_EFLAGS_IF;
4252 +-}
4253 +-
4254 +-void kmemcheck_show_pages(struct page *p, unsigned int n)
4255 +-{
4256 +- unsigned int i;
4257 +-
4258 +- for (i = 0; i < n; ++i) {
4259 +- unsigned long address;
4260 +- pte_t *pte;
4261 +- unsigned int level;
4262 +-
4263 +- address = (unsigned long) page_address(&p[i]);
4264 +- pte = lookup_address(address, &level);
4265 +- BUG_ON(!pte);
4266 +- BUG_ON(level != PG_LEVEL_4K);
4267 +-
4268 +- set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
4269 +- set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_HIDDEN));
4270 +- __flush_tlb_one(address);
4271 +- }
4272 +-}
4273 +-
4274 +-bool kmemcheck_page_is_tracked(struct page *p)
4275 +-{
4276 +- /* This will also check the "hidden" flag of the PTE. */
4277 +- return kmemcheck_pte_lookup((unsigned long) page_address(p));
4278 +-}
4279 +-
4280 +-void kmemcheck_hide_pages(struct page *p, unsigned int n)
4281 +-{
4282 +- unsigned int i;
4283 +-
4284 +- for (i = 0; i < n; ++i) {
4285 +- unsigned long address;
4286 +- pte_t *pte;
4287 +- unsigned int level;
4288 +-
4289 +- address = (unsigned long) page_address(&p[i]);
4290 +- pte = lookup_address(address, &level);
4291 +- BUG_ON(!pte);
4292 +- BUG_ON(level != PG_LEVEL_4K);
4293 +-
4294 +- set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
4295 +- set_pte(pte, __pte(pte_val(*pte) | _PAGE_HIDDEN));
4296 +- __flush_tlb_one(address);
4297 +- }
4298 +-}
4299 +-
4300 +-/* Access may NOT cross page boundary */
4301 +-static void kmemcheck_read_strict(struct pt_regs *regs,
4302 +- unsigned long addr, unsigned int size)
4303 +-{
4304 +- void *shadow;
4305 +- enum kmemcheck_shadow status;
4306 +-
4307 +- shadow = kmemcheck_shadow_lookup(addr);
4308 +- if (!shadow)
4309 +- return;
4310 +-
4311 +- kmemcheck_save_addr(addr);
4312 +- status = kmemcheck_shadow_test(shadow, size);
4313 +- if (status == KMEMCHECK_SHADOW_INITIALIZED)
4314 +- return;
4315 +-
4316 +- if (kmemcheck_enabled)
4317 +- kmemcheck_error_save(status, addr, size, regs);
4318 +-
4319 +- if (kmemcheck_enabled == 2)
4320 +- kmemcheck_enabled = 0;
4321 +-
4322 +- /* Don't warn about it again. */
4323 +- kmemcheck_shadow_set(shadow, size);
4324 +-}
4325 +-
4326 +-bool kmemcheck_is_obj_initialized(unsigned long addr, size_t size)
4327 +-{
4328 +- enum kmemcheck_shadow status;
4329 +- void *shadow;
4330 +-
4331 +- shadow = kmemcheck_shadow_lookup(addr);
4332 +- if (!shadow)
4333 +- return true;
4334 +-
4335 +- status = kmemcheck_shadow_test_all(shadow, size);
4336 +-
4337 +- return status == KMEMCHECK_SHADOW_INITIALIZED;
4338 +-}
4339 +-
4340 +-/* Access may cross page boundary */
4341 +-static void kmemcheck_read(struct pt_regs *regs,
4342 +- unsigned long addr, unsigned int size)
4343 +-{
4344 +- unsigned long page = addr & PAGE_MASK;
4345 +- unsigned long next_addr = addr + size - 1;
4346 +- unsigned long next_page = next_addr & PAGE_MASK;
4347 +-
4348 +- if (likely(page == next_page)) {
4349 +- kmemcheck_read_strict(regs, addr, size);
4350 +- return;
4351 +- }
4352 +-
4353 +- /*
4354 +- * What we do is basically to split the access across the
4355 +- * two pages and handle each part separately. Yes, this means
4356 +- * that we may now see reads that are 3 + 5 bytes, for
4357 +- * example (and if both are uninitialized, there will be two
4358 +- * reports), but it makes the code a lot simpler.
4359 +- */
4360 +- kmemcheck_read_strict(regs, addr, next_page - addr);
4361 +- kmemcheck_read_strict(regs, next_page, next_addr - next_page);
4362 +-}
4363 +-
4364 +-static void kmemcheck_write_strict(struct pt_regs *regs,
4365 +- unsigned long addr, unsigned int size)
4366 +-{
4367 +- void *shadow;
4368 +-
4369 +- shadow = kmemcheck_shadow_lookup(addr);
4370 +- if (!shadow)
4371 +- return;
4372 +-
4373 +- kmemcheck_save_addr(addr);
4374 +- kmemcheck_shadow_set(shadow, size);
4375 +-}
4376 +-
4377 +-static void kmemcheck_write(struct pt_regs *regs,
4378 +- unsigned long addr, unsigned int size)
4379 +-{
4380 +- unsigned long page = addr & PAGE_MASK;
4381 +- unsigned long next_addr = addr + size - 1;
4382 +- unsigned long next_page = next_addr & PAGE_MASK;
4383 +-
4384 +- if (likely(page == next_page)) {
4385 +- kmemcheck_write_strict(regs, addr, size);
4386 +- return;
4387 +- }
4388 +-
4389 +- /* See comment in kmemcheck_read(). */
4390 +- kmemcheck_write_strict(regs, addr, next_page - addr);
4391 +- kmemcheck_write_strict(regs, next_page, next_addr - next_page);
4392 +-}
4393 +-
4394 +-/*
4395 +- * Copying is hard. We have two addresses, each of which may be split across
4396 +- * a page (and each page will have different shadow addresses).
4397 +- */
4398 +-static void kmemcheck_copy(struct pt_regs *regs,
4399 +- unsigned long src_addr, unsigned long dst_addr, unsigned int size)
4400 +-{
4401 +- uint8_t shadow[8];
4402 +- enum kmemcheck_shadow status;
4403 +-
4404 +- unsigned long page;
4405 +- unsigned long next_addr;
4406 +- unsigned long next_page;
4407 +-
4408 +- uint8_t *x;
4409 +- unsigned int i;
4410 +- unsigned int n;
4411 +-
4412 +- BUG_ON(size > sizeof(shadow));
4413 +-
4414 +- page = src_addr & PAGE_MASK;
4415 +- next_addr = src_addr + size - 1;
4416 +- next_page = next_addr & PAGE_MASK;
4417 +-
4418 +- if (likely(page == next_page)) {
4419 +- /* Same page */
4420 +- x = kmemcheck_shadow_lookup(src_addr);
4421 +- if (x) {
4422 +- kmemcheck_save_addr(src_addr);
4423 +- for (i = 0; i < size; ++i)
4424 +- shadow[i] = x[i];
4425 +- } else {
4426 +- for (i = 0; i < size; ++i)
4427 +- shadow[i] = KMEMCHECK_SHADOW_INITIALIZED;
4428 +- }
4429 +- } else {
4430 +- n = next_page - src_addr;
4431 +- BUG_ON(n > sizeof(shadow));
4432 +-
4433 +- /* First page */
4434 +- x = kmemcheck_shadow_lookup(src_addr);
4435 +- if (x) {
4436 +- kmemcheck_save_addr(src_addr);
4437 +- for (i = 0; i < n; ++i)
4438 +- shadow[i] = x[i];
4439 +- } else {
4440 +- /* Not tracked */
4441 +- for (i = 0; i < n; ++i)
4442 +- shadow[i] = KMEMCHECK_SHADOW_INITIALIZED;
4443 +- }
4444 +-
4445 +- /* Second page */
4446 +- x = kmemcheck_shadow_lookup(next_page);
4447 +- if (x) {
4448 +- kmemcheck_save_addr(next_page);
4449 +- for (i = n; i < size; ++i)
4450 +- shadow[i] = x[i - n];
4451 +- } else {
4452 +- /* Not tracked */
4453 +- for (i = n; i < size; ++i)
4454 +- shadow[i] = KMEMCHECK_SHADOW_INITIALIZED;
4455 +- }
4456 +- }
4457 +-
4458 +- page = dst_addr & PAGE_MASK;
4459 +- next_addr = dst_addr + size - 1;
4460 +- next_page = next_addr & PAGE_MASK;
4461 +-
4462 +- if (likely(page == next_page)) {
4463 +- /* Same page */
4464 +- x = kmemcheck_shadow_lookup(dst_addr);
4465 +- if (x) {
4466 +- kmemcheck_save_addr(dst_addr);
4467 +- for (i = 0; i < size; ++i) {
4468 +- x[i] = shadow[i];
4469 +- shadow[i] = KMEMCHECK_SHADOW_INITIALIZED;
4470 +- }
4471 +- }
4472 +- } else {
4473 +- n = next_page - dst_addr;
4474 +- BUG_ON(n > sizeof(shadow));
4475 +-
4476 +- /* First page */
4477 +- x = kmemcheck_shadow_lookup(dst_addr);
4478 +- if (x) {
4479 +- kmemcheck_save_addr(dst_addr);
4480 +- for (i = 0; i < n; ++i) {
4481 +- x[i] = shadow[i];
4482 +- shadow[i] = KMEMCHECK_SHADOW_INITIALIZED;
4483 +- }
4484 +- }
4485 +-
4486 +- /* Second page */
4487 +- x = kmemcheck_shadow_lookup(next_page);
4488 +- if (x) {
4489 +- kmemcheck_save_addr(next_page);
4490 +- for (i = n; i < size; ++i) {
4491 +- x[i - n] = shadow[i];
4492 +- shadow[i] = KMEMCHECK_SHADOW_INITIALIZED;
4493 +- }
4494 +- }
4495 +- }
4496 +-
4497 +- status = kmemcheck_shadow_test(shadow, size);
4498 +- if (status == KMEMCHECK_SHADOW_INITIALIZED)
4499 +- return;
4500 +-
4501 +- if (kmemcheck_enabled)
4502 +- kmemcheck_error_save(status, src_addr, size, regs);
4503 +-
4504 +- if (kmemcheck_enabled == 2)
4505 +- kmemcheck_enabled = 0;
4506 +-}
4507 +-
4508 +-enum kmemcheck_method {
4509 +- KMEMCHECK_READ,
4510 +- KMEMCHECK_WRITE,
4511 +-};
4512 +-
4513 +-static void kmemcheck_access(struct pt_regs *regs,
4514 +- unsigned long fallback_address, enum kmemcheck_method fallback_method)
4515 +-{
4516 +- const uint8_t *insn;
4517 +- const uint8_t *insn_primary;
4518 +- unsigned int size;
4519 +-
4520 +- struct kmemcheck_context *data = this_cpu_ptr(&kmemcheck_context);
4521 +-
4522 +- /* Recursive fault -- ouch. */
4523 +- if (data->busy) {
4524 +- kmemcheck_show_addr(fallback_address);
4525 +- kmemcheck_error_save_bug(regs);
4526 +- return;
4527 +- }
4528 +-
4529 +- data->busy = true;
4530 +-
4531 +- insn = (const uint8_t *) regs->ip;
4532 +- insn_primary = kmemcheck_opcode_get_primary(insn);
4533 +-
4534 +- kmemcheck_opcode_decode(insn, &size);
4535 +-
4536 +- switch (insn_primary[0]) {
4537 +-#ifdef CONFIG_KMEMCHECK_BITOPS_OK
4538 +- /* AND, OR, XOR */
4539 +- /*
4540 +- * Unfortunately, these instructions have to be excluded from
4541 +- * our regular checking since they access only some (and not
4542 +- * all) bits. This clears out "bogus" bitfield-access warnings.
4543 +- */
4544 +- case 0x80:
4545 +- case 0x81:
4546 +- case 0x82:
4547 +- case 0x83:
4548 +- switch ((insn_primary[1] >> 3) & 7) {
4549 +- /* OR */
4550 +- case 1:
4551 +- /* AND */
4552 +- case 4:
4553 +- /* XOR */
4554 +- case 6:
4555 +- kmemcheck_write(regs, fallback_address, size);
4556 +- goto out;
4557 +-
4558 +- /* ADD */
4559 +- case 0:
4560 +- /* ADC */
4561 +- case 2:
4562 +- /* SBB */
4563 +- case 3:
4564 +- /* SUB */
4565 +- case 5:
4566 +- /* CMP */
4567 +- case 7:
4568 +- break;
4569 +- }
4570 +- break;
4571 +-#endif
4572 +-
4573 +- /* MOVS, MOVSB, MOVSW, MOVSD */
4574 +- case 0xa4:
4575 +- case 0xa5:
4576 +- /*
4577 +- * These instructions are special because they take two
4578 +- * addresses, but we only get one page fault.
4579 +- */
4580 +- kmemcheck_copy(regs, regs->si, regs->di, size);
4581 +- goto out;
4582 +-
4583 +- /* CMPS, CMPSB, CMPSW, CMPSD */
4584 +- case 0xa6:
4585 +- case 0xa7:
4586 +- kmemcheck_read(regs, regs->si, size);
4587 +- kmemcheck_read(regs, regs->di, size);
4588 +- goto out;
4589 +- }
4590 +-
4591 +- /*
4592 +- * If the opcode isn't special in any way, we use the data from the
4593 +- * page fault handler to determine the address and type of memory
4594 +- * access.
4595 +- */
4596 +- switch (fallback_method) {
4597 +- case KMEMCHECK_READ:
4598 +- kmemcheck_read(regs, fallback_address, size);
4599 +- goto out;
4600 +- case KMEMCHECK_WRITE:
4601 +- kmemcheck_write(regs, fallback_address, size);
4602 +- goto out;
4603 +- }
4604 +-
4605 +-out:
4606 +- data->busy = false;
4607 +-}
4608 +-
4609 +-bool kmemcheck_fault(struct pt_regs *regs, unsigned long address,
4610 +- unsigned long error_code)
4611 +-{
4612 +- pte_t *pte;
4613 +-
4614 +- /*
4615 +- * XXX: Is it safe to assume that memory accesses from virtual 86
4616 +- * mode or non-kernel code segments will _never_ access kernel
4617 +- * memory (e.g. tracked pages)? For now, we need this to avoid
4618 +- * invoking kmemcheck for PnP BIOS calls.
4619 +- */
4620 +- if (regs->flags & X86_VM_MASK)
4621 +- return false;
4622 +- if (regs->cs != __KERNEL_CS)
4623 +- return false;
4624 +-
4625 +- pte = kmemcheck_pte_lookup(address);
4626 +- if (!pte)
4627 +- return false;
4628 +-
4629 +- WARN_ON_ONCE(in_nmi());
4630 +-
4631 +- if (error_code & 2)
4632 +- kmemcheck_access(regs, address, KMEMCHECK_WRITE);
4633 +- else
4634 +- kmemcheck_access(regs, address, KMEMCHECK_READ);
4635 +-
4636 +- kmemcheck_show(regs);
4637 +- return true;
4638 +-}
4639 +-
4640 +-bool kmemcheck_trap(struct pt_regs *regs)
4641 +-{
4642 +- if (!kmemcheck_active(regs))
4643 +- return false;
4644 +-
4645 +- /* We're done. */
4646 +- kmemcheck_hide(regs);
4647 +- return true;
4648 +-}
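
The kmemcheck_read()/kmemcheck_write() pair removed above splits any access that straddles a page boundary into two per-page chunks, since each tracked page has its own shadow mapping. A minimal userspace sketch of the split, assuming 4 KiB pages (function names and output are illustrative only, not kernel API):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define PAGE_MASK (~(PAGE_SIZE - 1))

    /* Split an access of 'size' bytes at 'addr' on a page boundary,
     * mirroring the shape of the removed kmemcheck_read(). */
    static void split_access(unsigned long addr, unsigned long size)
    {
        unsigned long page = addr & PAGE_MASK;
        unsigned long last = addr + size - 1;     /* inclusive last byte */
        unsigned long last_page = last & PAGE_MASK;

        if (page == last_page) {
            printf("one chunk: %#lx, %lu bytes\n", addr, size);
            return;
        }
        /* First chunk ends at the page boundary, second chunk starts on
         * the next page; 'last' is inclusive, hence the +1. */
        printf("chunk 1: %#lx, %lu bytes\n", addr, last_page - addr);
        printf("chunk 2: %#lx, %lu bytes\n", last_page, last - last_page + 1);
    }

    int main(void)
    {
        split_access(0x1ffe, 4);   /* two bytes on each side of the boundary */
        return 0;
    }

As the deleted comment notes, an 8-byte straddling read may thus surface as, say, a 3 + 5 byte pair, and if both halves are uninitialized it produces two separate reports.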
4649 +diff --git a/arch/x86/mm/kmemcheck/opcode.c b/arch/x86/mm/kmemcheck/opcode.c
4650 +deleted file mode 100644
4651 +index df8109ddf7fe..000000000000
4652 +--- a/arch/x86/mm/kmemcheck/opcode.c
4653 ++++ /dev/null
4654 +@@ -1,107 +0,0 @@
4655 +-// SPDX-License-Identifier: GPL-2.0
4656 +-#include <linux/types.h>
4657 +-
4658 +-#include "opcode.h"
4659 +-
4660 +-static bool opcode_is_prefix(uint8_t b)
4661 +-{
4662 +- return
4663 +- /* Group 1 */
4664 +- b == 0xf0 || b == 0xf2 || b == 0xf3
4665 +- /* Group 2 */
4666 +- || b == 0x2e || b == 0x36 || b == 0x3e || b == 0x26
4667 +- || b == 0x64 || b == 0x65
4668 +- /* Group 3 */
4669 +- || b == 0x66
4670 +- /* Group 4 */
4671 +- || b == 0x67;
4672 +-}
4673 +-
4674 +-#ifdef CONFIG_X86_64
4675 +-static bool opcode_is_rex_prefix(uint8_t b)
4676 +-{
4677 +- return (b & 0xf0) == 0x40;
4678 +-}
4679 +-#else
4680 +-static bool opcode_is_rex_prefix(uint8_t b)
4681 +-{
4682 +- return false;
4683 +-}
4684 +-#endif
4685 +-
4686 +-#define REX_W (1 << 3)
4687 +-
4688 +-/*
4689 +- * This is a VERY crude opcode decoder. We only need to find the size of the
4690 +- * load/store that caused our #PF and this should work for all the opcodes
4691 +- * that we care about. Moreover, the ones who invented this instruction set
4692 +- * should be shot.
4693 +- */
4694 +-void kmemcheck_opcode_decode(const uint8_t *op, unsigned int *size)
4695 +-{
4696 +- /* Default operand size */
4697 +- int operand_size_override = 4;
4698 +-
4699 +- /* prefixes */
4700 +- for (; opcode_is_prefix(*op); ++op) {
4701 +- if (*op == 0x66)
4702 +- operand_size_override = 2;
4703 +- }
4704 +-
4705 +- /* REX prefix */
4706 +- if (opcode_is_rex_prefix(*op)) {
4707 +- uint8_t rex = *op;
4708 +-
4709 +- ++op;
4710 +- if (rex & REX_W) {
4711 +- switch (*op) {
4712 +- case 0x63:
4713 +- *size = 4;
4714 +- return;
4715 +- case 0x0f:
4716 +- ++op;
4717 +-
4718 +- switch (*op) {
4719 +- case 0xb6:
4720 +- case 0xbe:
4721 +- *size = 1;
4722 +- return;
4723 +- case 0xb7:
4724 +- case 0xbf:
4725 +- *size = 2;
4726 +- return;
4727 +- }
4728 +-
4729 +- break;
4730 +- }
4731 +-
4732 +- *size = 8;
4733 +- return;
4734 +- }
4735 +- }
4736 +-
4737 +- /* escape opcode */
4738 +- if (*op == 0x0f) {
4739 +- ++op;
4740 +-
4741 +- /*
4742 +- * This is move with zero-extend and sign-extend, respectively;
4743 +- * we don't have to think about 0xb6/0xbe, because this is
4744 +- * already handled in the conditional below.
4745 +- */
4746 +- if (*op == 0xb7 || *op == 0xbf)
4747 +- operand_size_override = 2;
4748 +- }
4749 +-
4750 +- *size = (*op & 1) ? operand_size_override : 1;
4751 +-}
4752 +-
4753 +-const uint8_t *kmemcheck_opcode_get_primary(const uint8_t *op)
4754 +-{
4755 +- /* skip prefixes */
4756 +- while (opcode_is_prefix(*op))
4757 +- ++op;
4758 +- if (opcode_is_rex_prefix(*op))
4759 +- ++op;
4760 +- return op;
4761 +-}
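
The decoder deleted above only needs the operand size of the faulting load or store, so it skips legacy prefixes, lets 0x66 shrink the default 32-bit operand to 16 bits, lets REX.W widen it to 64, and finally uses bit 0 of the primary opcode to pick byte versus full-size. A compilable sketch of that core rule, leaving out the movzx/movsx special cases the real code also handles (illustrative only):

    #include <stdint.h>

    /* Return the operand size in bytes for a simple x86 load/store
     * encoding; a stripped-down model of kmemcheck_opcode_decode(). */
    static unsigned int operand_size(const uint8_t *op)
    {
        unsigned int size = 4;                 /* default operand size */

        /* Skip legacy prefixes; 0x66 selects the 16-bit operand size. */
        for (;; ++op) {
            if (*op == 0x66) { size = 2; continue; }
            if (*op == 0xf0 || *op == 0xf2 || *op == 0xf3 ||   /* group 1 */
                *op == 0x2e || *op == 0x36 || *op == 0x3e ||   /* group 2 */
                *op == 0x26 || *op == 0x64 || *op == 0x65 ||
                *op == 0x67)                                   /* group 4 */
                continue;
            break;
        }

        /* A REX prefix with the W bit set promotes the operand to 64 bits. */
        if ((*op & 0xf0) == 0x40) {
            if (*op & 0x08)
                return 8;
            ++op;
        }

        /* Bit 0 of the primary opcode: 0 = byte op, 1 = full-size op. */
        return (*op & 1) ? size : 1;
    }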
4762 +diff --git a/arch/x86/mm/kmemcheck/opcode.h b/arch/x86/mm/kmemcheck/opcode.h
4763 +deleted file mode 100644
4764 +index 51a1ce94c24a..000000000000
4765 +--- a/arch/x86/mm/kmemcheck/opcode.h
4766 ++++ /dev/null
4767 +@@ -1,10 +0,0 @@
4768 +-/* SPDX-License-Identifier: GPL-2.0 */
4769 +-#ifndef ARCH__X86__MM__KMEMCHECK__OPCODE_H
4770 +-#define ARCH__X86__MM__KMEMCHECK__OPCODE_H
4771 +-
4772 +-#include <linux/types.h>
4773 +-
4774 +-void kmemcheck_opcode_decode(const uint8_t *op, unsigned int *size);
4775 +-const uint8_t *kmemcheck_opcode_get_primary(const uint8_t *op);
4776 +-
4777 +-#endif
4778 +diff --git a/arch/x86/mm/kmemcheck/pte.c b/arch/x86/mm/kmemcheck/pte.c
4779 +deleted file mode 100644
4780 +index 8a03be90272a..000000000000
4781 +--- a/arch/x86/mm/kmemcheck/pte.c
4782 ++++ /dev/null
4783 +@@ -1,23 +0,0 @@
4784 +-// SPDX-License-Identifier: GPL-2.0
4785 +-#include <linux/mm.h>
4786 +-
4787 +-#include <asm/pgtable.h>
4788 +-
4789 +-#include "pte.h"
4790 +-
4791 +-pte_t *kmemcheck_pte_lookup(unsigned long address)
4792 +-{
4793 +- pte_t *pte;
4794 +- unsigned int level;
4795 +-
4796 +- pte = lookup_address(address, &level);
4797 +- if (!pte)
4798 +- return NULL;
4799 +- if (level != PG_LEVEL_4K)
4800 +- return NULL;
4801 +- if (!pte_hidden(*pte))
4802 +- return NULL;
4803 +-
4804 +- return pte;
4805 +-}
4806 +-
4807 +diff --git a/arch/x86/mm/kmemcheck/pte.h b/arch/x86/mm/kmemcheck/pte.h
4808 +deleted file mode 100644
4809 +index b595612382c2..000000000000
4810 +--- a/arch/x86/mm/kmemcheck/pte.h
4811 ++++ /dev/null
4812 +@@ -1,11 +0,0 @@
4813 +-/* SPDX-License-Identifier: GPL-2.0 */
4814 +-#ifndef ARCH__X86__MM__KMEMCHECK__PTE_H
4815 +-#define ARCH__X86__MM__KMEMCHECK__PTE_H
4816 +-
4817 +-#include <linux/mm.h>
4818 +-
4819 +-#include <asm/pgtable.h>
4820 +-
4821 +-pte_t *kmemcheck_pte_lookup(unsigned long address);
4822 +-
4823 +-#endif
4824 +diff --git a/arch/x86/mm/kmemcheck/selftest.c b/arch/x86/mm/kmemcheck/selftest.c
4825 +deleted file mode 100644
4826 +index 7ce0be1f99eb..000000000000
4827 +--- a/arch/x86/mm/kmemcheck/selftest.c
4828 ++++ /dev/null
4829 +@@ -1,71 +0,0 @@
4830 +-// SPDX-License-Identifier: GPL-2.0
4831 +-#include <linux/bug.h>
4832 +-#include <linux/kernel.h>
4833 +-
4834 +-#include "opcode.h"
4835 +-#include "selftest.h"
4836 +-
4837 +-struct selftest_opcode {
4838 +- unsigned int expected_size;
4839 +- const uint8_t *insn;
4840 +- const char *desc;
4841 +-};
4842 +-
4843 +-static const struct selftest_opcode selftest_opcodes[] = {
4844 +- /* REP MOVS */
4845 +- {1, "\xf3\xa4", "rep movsb <mem8>, <mem8>"},
4846 +- {4, "\xf3\xa5", "rep movsl <mem32>, <mem32>"},
4847 +-
4848 +- /* MOVZX / MOVZXD */
4849 +- {1, "\x66\x0f\xb6\x51\xf8", "movzwq <mem8>, <reg16>"},
4850 +- {1, "\x0f\xb6\x51\xf8", "movzwq <mem8>, <reg32>"},
4851 +-
4852 +- /* MOVSX / MOVSXD */
4853 +- {1, "\x66\x0f\xbe\x51\xf8", "movswq <mem8>, <reg16>"},
4854 +- {1, "\x0f\xbe\x51\xf8", "movswq <mem8>, <reg32>"},
4855 +-
4856 +-#ifdef CONFIG_X86_64
4857 +- /* MOVZX / MOVZXD */
4858 +- {1, "\x49\x0f\xb6\x51\xf8", "movzbq <mem8>, <reg64>"},
4859 +- {2, "\x49\x0f\xb7\x51\xf8", "movzbq <mem16>, <reg64>"},
4860 +-
4861 +- /* MOVSX / MOVSXD */
4862 +- {1, "\x49\x0f\xbe\x51\xf8", "movsbq <mem8>, <reg64>"},
4863 +- {2, "\x49\x0f\xbf\x51\xf8", "movsbq <mem16>, <reg64>"},
4864 +- {4, "\x49\x63\x51\xf8", "movslq <mem32>, <reg64>"},
4865 +-#endif
4866 +-};
4867 +-
4868 +-static bool selftest_opcode_one(const struct selftest_opcode *op)
4869 +-{
4870 +- unsigned size;
4871 +-
4872 +- kmemcheck_opcode_decode(op->insn, &size);
4873 +-
4874 +- if (size == op->expected_size)
4875 +- return true;
4876 +-
4877 +- printk(KERN_WARNING "kmemcheck: opcode %s: expected size %d, got %d\n",
4878 +- op->desc, op->expected_size, size);
4879 +- return false;
4880 +-}
4881 +-
4882 +-static bool selftest_opcodes_all(void)
4883 +-{
4884 +- bool pass = true;
4885 +- unsigned int i;
4886 +-
4887 +- for (i = 0; i < ARRAY_SIZE(selftest_opcodes); ++i)
4888 +- pass = pass && selftest_opcode_one(&selftest_opcodes[i]);
4889 +-
4890 +- return pass;
4891 +-}
4892 +-
4893 +-bool kmemcheck_selftest(void)
4894 +-{
4895 +- bool pass = true;
4896 +-
4897 +- pass = pass && selftest_opcodes_all();
4898 +-
4899 +- return pass;
4900 +-}
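
The deleted selftest is a plain table-driven test: each entry pairs an encoded instruction with the operand size the decoder must report. Note that `pass = pass && selftest_opcode_one(...)` short-circuits once a test fails, so later entries are skipped; a self-contained variant that checks every entry regardless looks like this (decode() is a stand-in for the real kmemcheck_opcode_decode()):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct testcase {
        unsigned int expected_size;
        const uint8_t *insn;
        const char *desc;
    };

    /* Stand-in decoder: bit 0 of the opcode selects byte vs. dword. */
    static unsigned int decode(const uint8_t *insn)
    {
        return (insn[0] & 1) ? 4 : 1;
    }

    static bool run_all(const struct testcase *tc, unsigned int n)
    {
        bool pass = true;

        for (unsigned int i = 0; i < n; ++i) {
            unsigned int got = decode(tc[i].insn);
            if (got != tc[i].expected_size) {
                fprintf(stderr, "%s: expected %u, got %u\n",
                        tc[i].desc, tc[i].expected_size, got);
                pass = false;   /* keep going; report every mismatch */
            }
        }
        return pass;
    }

    static const uint8_t mov_r32[] = { 0x8b, 0x51, 0xf8 };
    static const struct testcase cases[] = {
        { 4, mov_r32, "mov <mem32>, <reg32>" },
    };

    int main(void)
    {
        return run_all(cases, 1) ? 0 : 1;
    }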
4901 +diff --git a/arch/x86/mm/kmemcheck/selftest.h b/arch/x86/mm/kmemcheck/selftest.h
4902 +deleted file mode 100644
4903 +index 8d759aae453d..000000000000
4904 +--- a/arch/x86/mm/kmemcheck/selftest.h
4905 ++++ /dev/null
4906 +@@ -1,7 +0,0 @@
4907 +-/* SPDX-License-Identifier: GPL-2.0 */
4908 +-#ifndef ARCH_X86_MM_KMEMCHECK_SELFTEST_H
4909 +-#define ARCH_X86_MM_KMEMCHECK_SELFTEST_H
4910 +-
4911 +-bool kmemcheck_selftest(void);
4912 +-
4913 +-#endif
4914 +diff --git a/arch/x86/mm/kmemcheck/shadow.c b/arch/x86/mm/kmemcheck/shadow.c
4915 +deleted file mode 100644
4916 +index c2638a7d2c10..000000000000
4917 +--- a/arch/x86/mm/kmemcheck/shadow.c
4918 ++++ /dev/null
4919 +@@ -1,173 +0,0 @@
4920 +-#include <linux/kmemcheck.h>
4921 +-#include <linux/export.h>
4922 +-#include <linux/mm.h>
4923 +-
4924 +-#include <asm/page.h>
4925 +-#include <asm/pgtable.h>
4926 +-
4927 +-#include "pte.h"
4928 +-#include "shadow.h"
4929 +-
4930 +-/*
4931 +- * Return the shadow address for the given address. Returns NULL if the
4932 +- * address is not tracked.
4933 +- *
4934 +- * We need to be extremely careful not to follow any invalid pointers,
4935 +- * because this function can be called for *any* possible address.
4936 +- */
4937 +-void *kmemcheck_shadow_lookup(unsigned long address)
4938 +-{
4939 +- pte_t *pte;
4940 +- struct page *page;
4941 +-
4942 +- if (!virt_addr_valid(address))
4943 +- return NULL;
4944 +-
4945 +- pte = kmemcheck_pte_lookup(address);
4946 +- if (!pte)
4947 +- return NULL;
4948 +-
4949 +- page = virt_to_page(address);
4950 +- if (!page->shadow)
4951 +- return NULL;
4952 +- return page->shadow + (address & (PAGE_SIZE - 1));
4953 +-}
4954 +-
4955 +-static void mark_shadow(void *address, unsigned int n,
4956 +- enum kmemcheck_shadow status)
4957 +-{
4958 +- unsigned long addr = (unsigned long) address;
4959 +- unsigned long last_addr = addr + n - 1;
4960 +- unsigned long page = addr & PAGE_MASK;
4961 +- unsigned long last_page = last_addr & PAGE_MASK;
4962 +- unsigned int first_n;
4963 +- void *shadow;
4964 +-
4965 +- /* If the memory range crosses a page boundary, stop there. */
4966 +- if (page == last_page)
4967 +- first_n = n;
4968 +- else
4969 +- first_n = page + PAGE_SIZE - addr;
4970 +-
4971 +- shadow = kmemcheck_shadow_lookup(addr);
4972 +- if (shadow)
4973 +- memset(shadow, status, first_n);
4974 +-
4975 +- addr += first_n;
4976 +- n -= first_n;
4977 +-
4978 +- /* Do full-page memset()s. */
4979 +- while (n >= PAGE_SIZE) {
4980 +- shadow = kmemcheck_shadow_lookup(addr);
4981 +- if (shadow)
4982 +- memset(shadow, status, PAGE_SIZE);
4983 +-
4984 +- addr += PAGE_SIZE;
4985 +- n -= PAGE_SIZE;
4986 +- }
4987 +-
4988 +- /* Do the remaining page, if any. */
4989 +- if (n > 0) {
4990 +- shadow = kmemcheck_shadow_lookup(addr);
4991 +- if (shadow)
4992 +- memset(shadow, status, n);
4993 +- }
4994 +-}
4995 +-
4996 +-void kmemcheck_mark_unallocated(void *address, unsigned int n)
4997 +-{
4998 +- mark_shadow(address, n, KMEMCHECK_SHADOW_UNALLOCATED);
4999 +-}
5000 +-
5001 +-void kmemcheck_mark_uninitialized(void *address, unsigned int n)
5002 +-{
5003 +- mark_shadow(address, n, KMEMCHECK_SHADOW_UNINITIALIZED);
5004 +-}
5005 +-
5006 +-/*
5007 +- * Fill the shadow memory of the given address such that the memory at that
5008 +- * address is marked as being initialized.
5009 +- */
5010 +-void kmemcheck_mark_initialized(void *address, unsigned int n)
5011 +-{
5012 +- mark_shadow(address, n, KMEMCHECK_SHADOW_INITIALIZED);
5013 +-}
5014 +-EXPORT_SYMBOL_GPL(kmemcheck_mark_initialized);
5015 +-
5016 +-void kmemcheck_mark_freed(void *address, unsigned int n)
5017 +-{
5018 +- mark_shadow(address, n, KMEMCHECK_SHADOW_FREED);
5019 +-}
5020 +-
5021 +-void kmemcheck_mark_unallocated_pages(struct page *p, unsigned int n)
5022 +-{
5023 +- unsigned int i;
5024 +-
5025 +- for (i = 0; i < n; ++i)
5026 +- kmemcheck_mark_unallocated(page_address(&p[i]), PAGE_SIZE);
5027 +-}
5028 +-
5029 +-void kmemcheck_mark_uninitialized_pages(struct page *p, unsigned int n)
5030 +-{
5031 +- unsigned int i;
5032 +-
5033 +- for (i = 0; i < n; ++i)
5034 +- kmemcheck_mark_uninitialized(page_address(&p[i]), PAGE_SIZE);
5035 +-}
5036 +-
5037 +-void kmemcheck_mark_initialized_pages(struct page *p, unsigned int n)
5038 +-{
5039 +- unsigned int i;
5040 +-
5041 +- for (i = 0; i < n; ++i)
5042 +- kmemcheck_mark_initialized(page_address(&p[i]), PAGE_SIZE);
5043 +-}
5044 +-
5045 +-enum kmemcheck_shadow kmemcheck_shadow_test(void *shadow, unsigned int size)
5046 +-{
5047 +-#ifdef CONFIG_KMEMCHECK_PARTIAL_OK
5048 +- uint8_t *x;
5049 +- unsigned int i;
5050 +-
5051 +- x = shadow;
5052 +-
5053 +- /*
5054 +- * Make sure _some_ bytes are initialized. Gcc frequently generates
5055 +- * code to access neighboring bytes.
5056 +- */
5057 +- for (i = 0; i < size; ++i) {
5058 +- if (x[i] == KMEMCHECK_SHADOW_INITIALIZED)
5059 +- return x[i];
5060 +- }
5061 +-
5062 +- return x[0];
5063 +-#else
5064 +- return kmemcheck_shadow_test_all(shadow, size);
5065 +-#endif
5066 +-}
5067 +-
5068 +-enum kmemcheck_shadow kmemcheck_shadow_test_all(void *shadow, unsigned int size)
5069 +-{
5070 +- uint8_t *x;
5071 +- unsigned int i;
5072 +-
5073 +- x = shadow;
5074 +-
5075 +- /* All bytes must be initialized. */
5076 +- for (i = 0; i < size; ++i) {
5077 +- if (x[i] != KMEMCHECK_SHADOW_INITIALIZED)
5078 +- return x[i];
5079 +- }
5080 +-
5081 +- return x[0];
5082 +-}
5083 +-
5084 +-void kmemcheck_shadow_set(void *shadow, unsigned int size)
5085 +-{
5086 +- uint8_t *x;
5087 +- unsigned int i;
5088 +-
5089 +- x = shadow;
5090 +- for (i = 0; i < size; ++i)
5091 +- x[i] = KMEMCHECK_SHADOW_INITIALIZED;
5092 +-}
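
mark_shadow() above handles an arbitrary range the way most page-granular fills do: a partial head page, zero or more whole pages, then a partial tail, looking up each page's shadow separately because untracked pages must simply be skipped. A self-contained sketch of the same chunking (lookup() is a stub for kmemcheck_shadow_lookup()):

    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096UL

    /* Stub: the real kmemcheck_shadow_lookup() walks the page tables
     * and returns NULL for untracked pages. */
    static void *lookup(unsigned long addr)
    {
        (void)addr;
        return NULL;
    }

    static void mark_range(unsigned long addr, size_t n, int status)
    {
        size_t head = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
        void *shadow;

        if (head > n)                    /* range fits in one page */
            head = n;

        shadow = lookup(addr);           /* partial head page */
        if (shadow)
            memset(shadow, status, head);
        addr += head;
        n -= head;

        while (n >= PAGE_SIZE) {         /* whole pages */
            shadow = lookup(addr);
            if (shadow)
                memset(shadow, status, PAGE_SIZE);
            addr += PAGE_SIZE;
            n -= PAGE_SIZE;
        }

        if (n) {                         /* partial tail page */
            shadow = lookup(addr);
            if (shadow)
                memset(shadow, status, n);
        }
    }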
5093 +diff --git a/arch/x86/mm/kmemcheck/shadow.h b/arch/x86/mm/kmemcheck/shadow.h
5094 +deleted file mode 100644
5095 +index 49768dc18664..000000000000
5096 +--- a/arch/x86/mm/kmemcheck/shadow.h
5097 ++++ /dev/null
5098 +@@ -1,19 +0,0 @@
5099 +-/* SPDX-License-Identifier: GPL-2.0 */
5100 +-#ifndef ARCH__X86__MM__KMEMCHECK__SHADOW_H
5101 +-#define ARCH__X86__MM__KMEMCHECK__SHADOW_H
5102 +-
5103 +-enum kmemcheck_shadow {
5104 +- KMEMCHECK_SHADOW_UNALLOCATED,
5105 +- KMEMCHECK_SHADOW_UNINITIALIZED,
5106 +- KMEMCHECK_SHADOW_INITIALIZED,
5107 +- KMEMCHECK_SHADOW_FREED,
5108 +-};
5109 +-
5110 +-void *kmemcheck_shadow_lookup(unsigned long address);
5111 +-
5112 +-enum kmemcheck_shadow kmemcheck_shadow_test(void *shadow, unsigned int size);
5113 +-enum kmemcheck_shadow kmemcheck_shadow_test_all(void *shadow,
5114 +- unsigned int size);
5115 +-void kmemcheck_shadow_set(void *shadow, unsigned int size);
5116 +-
5117 +-#endif
5118 +diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
5119 +index c21c2ed04612..aa44c3aa4cd5 100644
5120 +--- a/arch/x86/mm/kmmio.c
5121 ++++ b/arch/x86/mm/kmmio.c
5122 +@@ -168,7 +168,7 @@ static int clear_page_presence(struct kmmio_fault_page *f, bool clear)
5123 + return -1;
5124 + }
5125 +
5126 +- __flush_tlb_one(f->addr);
5127 ++ __flush_tlb_one_kernel(f->addr);
5128 + return 0;
5129 + }
5130 +
5131 +diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
5132 +index dfb7d657cf43..3ed9a08885c5 100644
5133 +--- a/arch/x86/mm/pageattr.c
5134 ++++ b/arch/x86/mm/pageattr.c
5135 +@@ -753,7 +753,7 @@ static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
5136 +
5137 + if (!debug_pagealloc_enabled())
5138 + spin_unlock(&cpa_lock);
5139 +- base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
5140 ++ base = alloc_pages(GFP_KERNEL, 0);
5141 + if (!debug_pagealloc_enabled())
5142 + spin_lock(&cpa_lock);
5143 + if (!base)
5144 +@@ -904,7 +904,7 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
5145 +
5146 + static int alloc_pte_page(pmd_t *pmd)
5147 + {
5148 +- pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
5149 ++ pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
5150 + if (!pte)
5151 + return -1;
5152 +
5153 +@@ -914,7 +914,7 @@ static int alloc_pte_page(pmd_t *pmd)
5154 +
5155 + static int alloc_pmd_page(pud_t *pud)
5156 + {
5157 +- pmd_t *pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
5158 ++ pmd_t *pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
5159 + if (!pmd)
5160 + return -1;
5161 +
5162 +@@ -1120,7 +1120,7 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
5163 + pgd_entry = cpa->pgd + pgd_index(addr);
5164 +
5165 + if (pgd_none(*pgd_entry)) {
5166 +- p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
5167 ++ p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL);
5168 + if (!p4d)
5169 + return -1;
5170 +
5171 +@@ -1132,7 +1132,7 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
5172 + */
5173 + p4d = p4d_offset(pgd_entry, addr);
5174 + if (p4d_none(*p4d)) {
5175 +- pud = (pud_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
5176 ++ pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
5177 + if (!pud)
5178 + return -1;
5179 +
5180 +diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
5181 +index 9b7bcbd33cc2..004abf9ebf12 100644
5182 +--- a/arch/x86/mm/pgtable.c
5183 ++++ b/arch/x86/mm/pgtable.c
5184 +@@ -7,7 +7,7 @@
5185 + #include <asm/fixmap.h>
5186 + #include <asm/mtrr.h>
5187 +
5188 +-#define PGALLOC_GFP (GFP_KERNEL_ACCOUNT | __GFP_NOTRACK | __GFP_ZERO)
5189 ++#define PGALLOC_GFP (GFP_KERNEL_ACCOUNT | __GFP_ZERO)
5190 +
5191 + #ifdef CONFIG_HIGHPTE
5192 + #define PGALLOC_USER_GFP __GFP_HIGHMEM
5193 +diff --git a/arch/x86/mm/pgtable_32.c b/arch/x86/mm/pgtable_32.c
5194 +index c3c5274410a9..9bb7f0ab9fe6 100644
5195 +--- a/arch/x86/mm/pgtable_32.c
5196 ++++ b/arch/x86/mm/pgtable_32.c
5197 +@@ -63,7 +63,7 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pteval)
5198 + * It's enough to flush this one mapping.
5199 + * (PGE mappings get flushed as well)
5200 + */
5201 +- __flush_tlb_one(vaddr);
5202 ++ __flush_tlb_one_kernel(vaddr);
5203 + }
5204 +
5205 + unsigned long __FIXADDR_TOP = 0xfffff000;
5206 +diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
5207 +index 012d02624848..0c936435ea93 100644
5208 +--- a/arch/x86/mm/tlb.c
5209 ++++ b/arch/x86/mm/tlb.c
5210 +@@ -492,7 +492,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
5211 + * flush that changes context.tlb_gen from 2 to 3. If they get
5212 + * processed on this CPU in reverse order, we'll see
5213 + * local_tlb_gen == 1, mm_tlb_gen == 3, and end != TLB_FLUSH_ALL.
5214 +- * If we were to use __flush_tlb_single() and set local_tlb_gen to
5215 ++ * If we were to use __flush_tlb_one_user() and set local_tlb_gen to
5216 + 3, we'd break the invariant: we'd update local_tlb_gen above
5217 + * 1 without the full flush that's needed for tlb_gen 2.
5218 + *
5219 +@@ -513,7 +513,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
5220 +
5221 + addr = f->start;
5222 + while (addr < f->end) {
5223 +- __flush_tlb_single(addr);
5224 ++ __flush_tlb_one_user(addr);
5225 + addr += PAGE_SIZE;
5226 + }
5227 + if (local)
5228 +@@ -660,7 +660,7 @@ static void do_kernel_range_flush(void *info)
5229 +
5230 + /* flush the range one page at a time with 'invlpg' */
5231 + for (addr = f->start; addr < f->end; addr += PAGE_SIZE)
5232 +- __flush_tlb_one(addr);
5233 ++ __flush_tlb_one_kernel(addr);
5234 + }
5235 +
5236 + void flush_tlb_kernel_range(unsigned long start, unsigned long end)
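
The renames in this and the surrounding hunks are part of a tree-wide change that splits the old __flush_tlb_single()/__flush_tlb_one() pair into __flush_tlb_one_user() and __flush_tlb_one_kernel(), making explicit whether a user-space or a kernel mapping is being flushed. The loop shape itself is unchanged; schematically (flush_one_kernel_page() is a hypothetical stand-in, not kernel API):

    #define PAGE_SIZE 4096UL

    static void flush_one_kernel_page(unsigned long addr)
    {
        (void)addr;   /* stub; the real helper issues an INVLPG */
    }

    /* Flush a kernel virtual range one page at a time, as in
     * do_kernel_range_flush() above. */
    static void flush_kernel_range(unsigned long start, unsigned long end)
    {
        unsigned long addr;

        for (addr = start; addr < end; addr += PAGE_SIZE)
            flush_one_kernel_page(addr);
    }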
5237 +diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
5238 +index 61975b6bcb1a..ad5d9538f0f9 100644
5239 +--- a/arch/x86/platform/efi/efi_64.c
5240 ++++ b/arch/x86/platform/efi/efi_64.c
5241 +@@ -211,7 +211,7 @@ int __init efi_alloc_page_tables(void)
5242 + if (efi_enabled(EFI_OLD_MEMMAP))
5243 + return 0;
5244 +
5245 +- gfp_mask = GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO;
5246 ++ gfp_mask = GFP_KERNEL | __GFP_ZERO;
5247 + efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, PGD_ALLOCATION_ORDER);
5248 + if (!efi_pgd)
5249 + return -ENOMEM;
5250 +diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
5251 +index 8538a6723171..7d5d53f36a7a 100644
5252 +--- a/arch/x86/platform/uv/tlb_uv.c
5253 ++++ b/arch/x86/platform/uv/tlb_uv.c
5254 +@@ -299,7 +299,7 @@ static void bau_process_message(struct msg_desc *mdp, struct bau_control *bcp,
5255 + local_flush_tlb();
5256 + stat->d_alltlb++;
5257 + } else {
5258 +- __flush_tlb_single(msg->address);
5259 ++ __flush_tlb_one_user(msg->address);
5260 + stat->d_onetlb++;
5261 + }
5262 + stat->d_requestee++;
5263 +diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
5264 +index a0e2b8c6e5c7..d0b943a6b117 100644
5265 +--- a/arch/x86/xen/mmu_pv.c
5266 ++++ b/arch/x86/xen/mmu_pv.c
5267 +@@ -1300,12 +1300,12 @@ static void xen_flush_tlb(void)
5268 + preempt_enable();
5269 + }
5270 +
5271 +-static void xen_flush_tlb_single(unsigned long addr)
5272 ++static void xen_flush_tlb_one_user(unsigned long addr)
5273 + {
5274 + struct mmuext_op *op;
5275 + struct multicall_space mcs;
5276 +
5277 +- trace_xen_mmu_flush_tlb_single(addr);
5278 ++ trace_xen_mmu_flush_tlb_one_user(addr);
5279 +
5280 + preempt_disable();
5281 +
5282 +@@ -2360,7 +2360,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
5283 +
5284 + .flush_tlb_user = xen_flush_tlb,
5285 + .flush_tlb_kernel = xen_flush_tlb,
5286 +- .flush_tlb_single = xen_flush_tlb_single,
5287 ++ .flush_tlb_one_user = xen_flush_tlb_one_user,
5288 + .flush_tlb_others = xen_flush_tlb_others,
5289 +
5290 + .pgd_alloc = xen_pgd_alloc,
5291 +diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
5292 +index 6083ba462f35..15812e553b95 100644
5293 +--- a/arch/x86/xen/p2m.c
5294 ++++ b/arch/x86/xen/p2m.c
5295 +@@ -694,6 +694,9 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
5296 + int i, ret = 0;
5297 + pte_t *pte;
5298 +
5299 ++ if (xen_feature(XENFEAT_auto_translated_physmap))
5300 ++ return 0;
5301 ++
5302 + if (kmap_ops) {
5303 + ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
5304 + kmap_ops, count);
5305 +@@ -736,6 +739,9 @@ int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
5306 + {
5307 + int i, ret = 0;
5308 +
5309 ++ if (xen_feature(XENFEAT_auto_translated_physmap))
5310 ++ return 0;
5311 ++
5312 + for (i = 0; i < count; i++) {
5313 + unsigned long mfn = __pfn_to_mfn(page_to_pfn(pages[i]));
5314 + unsigned long pfn = page_to_pfn(pages[i]);
5315 +diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
5316 +index 497cc55a0c16..96f26e026783 100644
5317 +--- a/arch/x86/xen/xen-head.S
5318 ++++ b/arch/x86/xen/xen-head.S
5319 +@@ -9,7 +9,9 @@
5320 +
5321 + #include <asm/boot.h>
5322 + #include <asm/asm.h>
5323 ++#include <asm/msr.h>
5324 + #include <asm/page_types.h>
5325 ++#include <asm/percpu.h>
5326 + #include <asm/unwind_hints.h>
5327 +
5328 + #include <xen/interface/elfnote.h>
5329 +@@ -35,6 +37,20 @@ ENTRY(startup_xen)
5330 + mov %_ASM_SI, xen_start_info
5331 + mov $init_thread_union+THREAD_SIZE, %_ASM_SP
5332 +
5333 ++#ifdef CONFIG_X86_64
5334 ++ /* Set up %gs.
5335 ++ *
5336 ++ * The base of %gs always points to the bottom of the irqstack
5337 ++ * union. If the stack protector canary is enabled, it is
5338 ++ * located at %gs:40. Note that, on SMP, the boot cpu uses the
5339 ++ * init data section until the per-cpu areas are set up.
5340 ++ */
5341 ++ movl $MSR_GS_BASE,%ecx
5342 ++ movq $INIT_PER_CPU_VAR(irq_stack_union),%rax
5343 ++ cdq
5344 ++ wrmsr
5345 ++#endif
5346 ++
5347 + jmp xen_start_kernel
5348 + END(startup_xen)
5349 + __FINIT
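
The assembly added to startup_xen loads MSR_GS_BASE with the address of the boot CPU's irq_stack_union, so the stack-protector canary at %gs:40 is usable before the first C function runs. In C-flavoured form (x86-64 only; WRMSR is privileged, so this is purely illustrative):

    #include <stdint.h>

    /* Write a 64-bit value to a model-specific register; the value is
     * split across EDX:EAX as the WRMSR instruction requires. */
    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile("wrmsr"
                     : /* no outputs */
                     : "c"(msr), "a"((uint32_t)val),
                       "d"((uint32_t)(val >> 32)));
    }

    #define MSR_GS_BASE 0xc0000101u   /* x86-64 GS base address MSR */

    /* What the added startup code does, schematically:
     *     wrmsr(MSR_GS_BASE, (uint64_t)&irq_stack_union);
     * The real assembly loads the symbol address into %rax and uses
     * 'cdq' to set up %edx before the 'wrmsr'. */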
5350 +diff --git a/block/blk-wbt.c b/block/blk-wbt.c
5351 +index e59d59c11ebb..5c105514bca7 100644
5352 +--- a/block/blk-wbt.c
5353 ++++ b/block/blk-wbt.c
5354 +@@ -698,7 +698,15 @@ u64 wbt_default_latency_nsec(struct request_queue *q)
5355 +
5356 + static int wbt_data_dir(const struct request *rq)
5357 + {
5358 +- return rq_data_dir(rq);
5359 ++ const int op = req_op(rq);
5360 ++
5361 ++ if (op == REQ_OP_READ)
5362 ++ return READ;
5363 ++ else if (op == REQ_OP_WRITE || op == REQ_OP_FLUSH)
5364 ++ return WRITE;
5365 ++
5366 ++ /* don't account */
5367 ++ return -1;
5368 + }
5369 +
5370 + int wbt_init(struct request_queue *q)
5371 +diff --git a/crypto/xor.c b/crypto/xor.c
5372 +index 263af9fb45ea..bce9fe7af40a 100644
5373 +--- a/crypto/xor.c
5374 ++++ b/crypto/xor.c
5375 +@@ -122,12 +122,7 @@ calibrate_xor_blocks(void)
5376 + goto out;
5377 + }
5378 +
5379 +- /*
5380 +- * Note: Since the memory is not actually used for _anything_ but to
5381 +- * test the XOR speed, we don't really want kmemcheck to warn about
5382 +- * reading uninitialized bytes here.
5383 +- */
5384 +- b1 = (void *) __get_free_pages(GFP_KERNEL | __GFP_NOTRACK, 2);
5385 ++ b1 = (void *) __get_free_pages(GFP_KERNEL, 2);
5386 + if (!b1) {
5387 + printk(KERN_WARNING "xor: Yikes! No memory available.\n");
5388 + return -ENOMEM;
5389 +diff --git a/drivers/base/core.c b/drivers/base/core.c
5390 +index 12ebd055724c..c8501cdb95f4 100644
5391 +--- a/drivers/base/core.c
5392 ++++ b/drivers/base/core.c
5393 +@@ -313,6 +313,9 @@ static void __device_link_del(struct device_link *link)
5394 + dev_info(link->consumer, "Dropping the link to %s\n",
5395 + dev_name(link->supplier));
5396 +
5397 ++ if (link->flags & DL_FLAG_PM_RUNTIME)
5398 ++ pm_runtime_drop_link(link->consumer);
5399 ++
5400 + list_del(&link->s_node);
5401 + list_del(&link->c_node);
5402 + device_link_free(link);
5403 +diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
5404 +index 609227211295..fe4fd8aee19f 100644
5405 +--- a/drivers/block/rbd.c
5406 ++++ b/drivers/block/rbd.c
5407 +@@ -124,11 +124,13 @@ static int atomic_dec_return_safe(atomic_t *v)
5408 + #define RBD_FEATURE_STRIPINGV2 (1ULL<<1)
5409 + #define RBD_FEATURE_EXCLUSIVE_LOCK (1ULL<<2)
5410 + #define RBD_FEATURE_DATA_POOL (1ULL<<7)
5411 ++#define RBD_FEATURE_OPERATIONS (1ULL<<8)
5412 +
5413 + #define RBD_FEATURES_ALL (RBD_FEATURE_LAYERING | \
5414 + RBD_FEATURE_STRIPINGV2 | \
5415 + RBD_FEATURE_EXCLUSIVE_LOCK | \
5416 +- RBD_FEATURE_DATA_POOL)
5417 ++ RBD_FEATURE_DATA_POOL | \
5418 ++ RBD_FEATURE_OPERATIONS)
5419 +
5420 + /* Features supported by this (client software) implementation. */
5421 +
5422 +diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig
5423 +index 98a60db8e5d1..b33c8d6eb8c7 100644
5424 +--- a/drivers/bluetooth/Kconfig
5425 ++++ b/drivers/bluetooth/Kconfig
5426 +@@ -66,6 +66,7 @@ config BT_HCIBTSDIO
5427 +
5428 + config BT_HCIUART
5429 + tristate "HCI UART driver"
5430 ++ depends on SERIAL_DEV_BUS || !SERIAL_DEV_BUS
5431 + depends on TTY
5432 + help
5433 + Bluetooth HCI UART driver.
5434 +@@ -80,7 +81,6 @@ config BT_HCIUART
5435 + config BT_HCIUART_SERDEV
5436 + bool
5437 + depends on SERIAL_DEV_BUS && BT_HCIUART
5438 +- depends on SERIAL_DEV_BUS=y || SERIAL_DEV_BUS=BT_HCIUART
5439 + default y
5440 +
5441 + config BT_HCIUART_H4
5442 +diff --git a/drivers/char/hw_random/via-rng.c b/drivers/char/hw_random/via-rng.c
5443 +index d1f5bb534e0e..6e9df558325b 100644
5444 +--- a/drivers/char/hw_random/via-rng.c
5445 ++++ b/drivers/char/hw_random/via-rng.c
5446 +@@ -162,7 +162,7 @@ static int via_rng_init(struct hwrng *rng)
5447 + /* Enable secondary noise source on CPUs where it is present. */
5448 +
5449 + /* Nehemiah stepping 8 and higher */
5450 +- if ((c->x86_model == 9) && (c->x86_mask > 7))
5451 ++ if ((c->x86_model == 9) && (c->x86_stepping > 7))
5452 + lo |= VIA_NOISESRC2;
5453 +
5454 + /* Esther */
5455 +diff --git a/drivers/char/random.c b/drivers/char/random.c
5456 +index 8ad92707e45f..ea0115cf5fc0 100644
5457 +--- a/drivers/char/random.c
5458 ++++ b/drivers/char/random.c
5459 +@@ -259,7 +259,6 @@
5460 + #include <linux/cryptohash.h>
5461 + #include <linux/fips.h>
5462 + #include <linux/ptrace.h>
5463 +-#include <linux/kmemcheck.h>
5464 + #include <linux/workqueue.h>
5465 + #include <linux/irq.h>
5466 + #include <linux/syscalls.h>
5467 +diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
5468 +index 3a2ca0f79daf..d0c34df0529c 100644
5469 +--- a/drivers/cpufreq/acpi-cpufreq.c
5470 ++++ b/drivers/cpufreq/acpi-cpufreq.c
5471 +@@ -629,7 +629,7 @@ static int acpi_cpufreq_blacklist(struct cpuinfo_x86 *c)
5472 + if (c->x86_vendor == X86_VENDOR_INTEL) {
5473 + if ((c->x86 == 15) &&
5474 + (c->x86_model == 6) &&
5475 +- (c->x86_mask == 8)) {
5476 ++ (c->x86_stepping == 8)) {
5477 + pr_info("Intel(R) Xeon(R) 7100 Errata AL30, processors may lock up on frequency changes: disabling acpi-cpufreq\n");
5478 + return -ENODEV;
5479 + }
5480 +diff --git a/drivers/cpufreq/longhaul.c b/drivers/cpufreq/longhaul.c
5481 +index c46a12df40dd..d5e27bc7585a 100644
5482 +--- a/drivers/cpufreq/longhaul.c
5483 ++++ b/drivers/cpufreq/longhaul.c
5484 +@@ -775,7 +775,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
5485 + break;
5486 +
5487 + case 7:
5488 +- switch (c->x86_mask) {
5489 ++ switch (c->x86_stepping) {
5490 + case 0:
5491 + longhaul_version = TYPE_LONGHAUL_V1;
5492 + cpu_model = CPU_SAMUEL2;
5493 +@@ -787,7 +787,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
5494 + break;
5495 + case 1 ... 15:
5496 + longhaul_version = TYPE_LONGHAUL_V2;
5497 +- if (c->x86_mask < 8) {
5498 ++ if (c->x86_stepping < 8) {
5499 + cpu_model = CPU_SAMUEL2;
5500 + cpuname = "C3 'Samuel 2' [C5B]";
5501 + } else {
5502 +@@ -814,7 +814,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
5503 + numscales = 32;
5504 + memcpy(mults, nehemiah_mults, sizeof(nehemiah_mults));
5505 + memcpy(eblcr, nehemiah_eblcr, sizeof(nehemiah_eblcr));
5506 +- switch (c->x86_mask) {
5507 ++ switch (c->x86_stepping) {
5508 + case 0 ... 1:
5509 + cpu_model = CPU_NEHEMIAH;
5510 + cpuname = "C3 'Nehemiah A' [C5XLOE]";
5511 +diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c
5512 +index fd77812313f3..a25741b1281b 100644
5513 +--- a/drivers/cpufreq/p4-clockmod.c
5514 ++++ b/drivers/cpufreq/p4-clockmod.c
5515 +@@ -168,7 +168,7 @@ static int cpufreq_p4_cpu_init(struct cpufreq_policy *policy)
5516 + #endif
5517 +
5518 + /* Errata workaround */
5519 +- cpuid = (c->x86 << 8) | (c->x86_model << 4) | c->x86_mask;
5520 ++ cpuid = (c->x86 << 8) | (c->x86_model << 4) | c->x86_stepping;
5521 + switch (cpuid) {
5522 + case 0x0f07:
5523 + case 0x0f0a:
5524 +diff --git a/drivers/cpufreq/powernow-k7.c b/drivers/cpufreq/powernow-k7.c
5525 +index 80ac313e6c59..302e9ce793a0 100644
5526 +--- a/drivers/cpufreq/powernow-k7.c
5527 ++++ b/drivers/cpufreq/powernow-k7.c
5528 +@@ -131,7 +131,7 @@ static int check_powernow(void)
5529 + return 0;
5530 + }
5531 +
5532 +- if ((c->x86_model == 6) && (c->x86_mask == 0)) {
5533 ++ if ((c->x86_model == 6) && (c->x86_stepping == 0)) {
5534 + pr_info("K7 660[A0] core detected, enabling errata workarounds\n");
5535 + have_a0 = 1;
5536 + }
5537 +diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
5538 +index 3ff5160451b4..7e1e5bbcf430 100644
5539 +--- a/drivers/cpufreq/powernv-cpufreq.c
5540 ++++ b/drivers/cpufreq/powernv-cpufreq.c
5541 +@@ -287,9 +287,9 @@ static int init_powernv_pstates(void)
5542 +
5543 + if (id == pstate_max)
5544 + powernv_pstate_info.max = i;
5545 +- else if (id == pstate_nominal)
5546 ++ if (id == pstate_nominal)
5547 + powernv_pstate_info.nominal = i;
5548 +- else if (id == pstate_min)
5549 ++ if (id == pstate_min)
5550 + powernv_pstate_info.min = i;
5551 +
5552 + if (powernv_pstate_info.wof_enabled && id == pstate_turbo) {
5553 +diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c
5554 +index 41bc5397f4bb..4fa5adf16c70 100644
5555 +--- a/drivers/cpufreq/speedstep-centrino.c
5556 ++++ b/drivers/cpufreq/speedstep-centrino.c
5557 +@@ -37,7 +37,7 @@ struct cpu_id
5558 + {
5559 + __u8 x86; /* CPU family */
5560 + __u8 x86_model; /* model */
5561 +- __u8 x86_mask; /* stepping */
5562 ++ __u8 x86_stepping; /* stepping */
5563 + };
5564 +
5565 + enum {
5566 +@@ -277,7 +277,7 @@ static int centrino_verify_cpu_id(const struct cpuinfo_x86 *c,
5567 + {
5568 + if ((c->x86 == x->x86) &&
5569 + (c->x86_model == x->x86_model) &&
5570 +- (c->x86_mask == x->x86_mask))
5571 ++ (c->x86_stepping == x->x86_stepping))
5572 + return 1;
5573 + return 0;
5574 + }
5575 +diff --git a/drivers/cpufreq/speedstep-lib.c b/drivers/cpufreq/speedstep-lib.c
5576 +index ccab452a4ef5..dd7bb00991f4 100644
5577 +--- a/drivers/cpufreq/speedstep-lib.c
5578 ++++ b/drivers/cpufreq/speedstep-lib.c
5579 +@@ -272,9 +272,9 @@ unsigned int speedstep_detect_processor(void)
5580 + ebx = cpuid_ebx(0x00000001);
5581 + ebx &= 0x000000FF;
5582 +
5583 +- pr_debug("ebx value is %x, x86_mask is %x\n", ebx, c->x86_mask);
5584 ++ pr_debug("ebx value is %x, x86_stepping is %x\n", ebx, c->x86_stepping);
5585 +
5586 +- switch (c->x86_mask) {
5587 ++ switch (c->x86_stepping) {
5588 + case 4:
5589 + /*
5590 + * B-stepping [M-P4-M]
5591 +@@ -361,7 +361,7 @@ unsigned int speedstep_detect_processor(void)
5592 + msr_lo, msr_hi);
5593 + if ((msr_hi & (1<<18)) &&
5594 + (relaxed_check ? 1 : (msr_hi & (3<<24)))) {
5595 +- if (c->x86_mask == 0x01) {
5596 ++ if (c->x86_stepping == 0x01) {
5597 + pr_debug("early PIII version\n");
5598 + return SPEEDSTEP_CPU_PIII_C_EARLY;
5599 + } else
5600 +diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c
5601 +index b3869748cc6b..c939f18f70cc 100644
5602 +--- a/drivers/crypto/padlock-aes.c
5603 ++++ b/drivers/crypto/padlock-aes.c
5604 +@@ -512,7 +512,7 @@ static int __init padlock_init(void)
5605 +
5606 + printk(KERN_NOTICE PFX "Using VIA PadLock ACE for AES algorithm.\n");
5607 +
5608 +- if (c->x86 == 6 && c->x86_model == 15 && c->x86_mask == 2) {
5609 ++ if (c->x86 == 6 && c->x86_model == 15 && c->x86_stepping == 2) {
5610 + ecb_fetch_blocks = MAX_ECB_FETCH_BLOCKS;
5611 + cbc_fetch_blocks = MAX_CBC_FETCH_BLOCKS;
5612 + printk(KERN_NOTICE PFX "VIA Nano stepping 2 detected: enabling workaround.\n");
5613 +diff --git a/drivers/crypto/sunxi-ss/sun4i-ss-prng.c b/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
5614 +index 0d01d1624252..63d636424161 100644
5615 +--- a/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
5616 ++++ b/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
5617 +@@ -28,7 +28,7 @@ int sun4i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
5618 + algt = container_of(alg, struct sun4i_ss_alg_template, alg.rng);
5619 + ss = algt->ss;
5620 +
5621 +- spin_lock(&ss->slock);
5622 ++ spin_lock_bh(&ss->slock);
5623 +
5624 + writel(mode, ss->base + SS_CTL);
5625 +
5626 +@@ -51,6 +51,6 @@ int sun4i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
5627 + }
5628 +
5629 + writel(0, ss->base + SS_CTL);
5630 +- spin_unlock(&ss->slock);
5631 +- return dlen;
5632 ++ spin_unlock_bh(&ss->slock);
5633 ++ return 0;
5634 + }
5635 +diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
5636 +index a1c4ee818614..202476fbbc4c 100644
5637 +--- a/drivers/devfreq/devfreq.c
5638 ++++ b/drivers/devfreq/devfreq.c
5639 +@@ -676,7 +676,7 @@ struct devfreq *devm_devfreq_add_device(struct device *dev,
5640 + devfreq = devfreq_add_device(dev, profile, governor_name, data);
5641 + if (IS_ERR(devfreq)) {
5642 + devres_free(ptr);
5643 +- return ERR_PTR(-ENOMEM);
5644 ++ return devfreq;
5645 + }
5646 +
5647 + *ptr = devfreq;
5648 +diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
5649 +index b44d9d7db347..012fa3d1f407 100644
5650 +--- a/drivers/dma-buf/reservation.c
5651 ++++ b/drivers/dma-buf/reservation.c
5652 +@@ -455,13 +455,15 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
5653 + unsigned long timeout)
5654 + {
5655 + struct dma_fence *fence;
5656 +- unsigned seq, shared_count, i = 0;
5657 ++ unsigned seq, shared_count;
5658 + long ret = timeout ? timeout : 1;
5659 ++ int i;
5660 +
5661 + retry:
5662 + shared_count = 0;
5663 + seq = read_seqcount_begin(&obj->seq);
5664 + rcu_read_lock();
5665 ++ i = -1;
5666 +
5667 + fence = rcu_dereference(obj->fence_excl);
5668 + if (fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
5669 +@@ -477,14 +479,14 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
5670 + fence = NULL;
5671 + }
5672 +
5673 +- if (!fence && wait_all) {
5674 ++ if (wait_all) {
5675 + struct reservation_object_list *fobj =
5676 + rcu_dereference(obj->fence);
5677 +
5678 + if (fobj)
5679 + shared_count = fobj->shared_count;
5680 +
5681 +- for (i = 0; i < shared_count; ++i) {
5682 ++ for (i = 0; !fence && i < shared_count; ++i) {
5683 + struct dma_fence *lfence = rcu_dereference(fobj->shared[i]);
5684 +
5685 + if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
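
The reservation.c fix matters because this function sits in a seqcount retry loop: if a writer races, control jumps back to the retry label and the whole read section reruns, so every piece of per-pass state (here the fence index i) must be reinitialized on each pass or a retry inherits stale values. The general read-side pattern, modelled in userspace with C11 atomics:

    #include <stdatomic.h>

    static _Atomic unsigned int seq;   /* odd while a writer is active */
    static _Atomic int shared_value;

    /* Seqcount read side: rerun the whole section, re-deriving all
     * local state each pass, until a stable even sequence is seen. */
    static int read_stable(void)
    {
        unsigned int s0, s1;
        int v;

        do {
            s0 = atomic_load_explicit(&seq, memory_order_acquire);
            v  = atomic_load_explicit(&shared_value, memory_order_relaxed);
            atomic_thread_fence(memory_order_acquire);
            s1 = atomic_load_explicit(&seq, memory_order_relaxed);
        } while ((s0 & 1) || s0 != s1);

        return v;
    }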
5686 +diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
5687 +index ac2f30295efe..59ce32e405ac 100644
5688 +--- a/drivers/edac/amd64_edac.c
5689 ++++ b/drivers/edac/amd64_edac.c
5690 +@@ -3147,7 +3147,7 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
5691 + struct amd64_family_type *fam_type = NULL;
5692 +
5693 + pvt->ext_model = boot_cpu_data.x86_model >> 4;
5694 +- pvt->stepping = boot_cpu_data.x86_mask;
5695 ++ pvt->stepping = boot_cpu_data.x86_stepping;
5696 + pvt->model = boot_cpu_data.x86_model;
5697 + pvt->fam = boot_cpu_data.x86;
5698 +
5699 +diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h b/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h
5700 +index 262c8ded87c0..dafc9c4b1e6f 100644
5701 +--- a/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h
5702 ++++ b/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h
5703 +@@ -40,7 +40,7 @@ struct smu_table_entry {
5704 + uint32_t table_addr_high;
5705 + uint32_t table_addr_low;
5706 + uint8_t *table;
5707 +- uint32_t handle;
5708 ++ unsigned long handle;
5709 + };
5710 +
5711 + struct smu_table_array {
5712 +diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
5713 +index 6f3849ec0c1d..e9f1e6fe7b94 100644
5714 +--- a/drivers/gpu/drm/ast/ast_mode.c
5715 ++++ b/drivers/gpu/drm/ast/ast_mode.c
5716 +@@ -644,6 +644,7 @@ static void ast_crtc_commit(struct drm_crtc *crtc)
5717 + {
5718 + struct ast_private *ast = crtc->dev->dev_private;
5719 + ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT, 0x1, 0xdf, 0);
5720 ++ ast_crtc_load_lut(crtc);
5721 + }
5722 +
5723 +
5724 +diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
5725 +index 18d9da53282b..3f818412765c 100644
5726 +--- a/drivers/gpu/drm/i915/i915_drv.h
5727 ++++ b/drivers/gpu/drm/i915/i915_drv.h
5728 +@@ -842,6 +842,7 @@ struct intel_device_info {
5729 + u8 gen;
5730 + u16 gen_mask;
5731 + enum intel_platform platform;
5732 ++ u8 gt; /* GT number, 0 if undefined */
5733 + u8 ring_mask; /* Rings supported by the HW */
5734 + u8 num_rings;
5735 + #define DEFINE_FLAG(name) u8 name:1
5736 +diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
5737 +index 09d97e0990b7..2985f1e418ad 100644
5738 +--- a/drivers/gpu/drm/i915/i915_pci.c
5739 ++++ b/drivers/gpu/drm/i915/i915_pci.c
5740 +@@ -224,15 +224,34 @@ static const struct intel_device_info intel_ironlake_m_info = {
5741 + GEN_DEFAULT_PIPEOFFSETS, \
5742 + CURSOR_OFFSETS
5743 +
5744 +-static const struct intel_device_info intel_sandybridge_d_info = {
5745 +- GEN6_FEATURES,
5746 +- .platform = INTEL_SANDYBRIDGE,
5747 ++#define SNB_D_PLATFORM \
5748 ++ GEN6_FEATURES, \
5749 ++ .platform = INTEL_SANDYBRIDGE
5750 ++
5751 ++static const struct intel_device_info intel_sandybridge_d_gt1_info = {
5752 ++ SNB_D_PLATFORM,
5753 ++ .gt = 1,
5754 + };
5755 +
5756 +-static const struct intel_device_info intel_sandybridge_m_info = {
5757 +- GEN6_FEATURES,
5758 +- .platform = INTEL_SANDYBRIDGE,
5759 +- .is_mobile = 1,
5760 ++static const struct intel_device_info intel_sandybridge_d_gt2_info = {
5761 ++ SNB_D_PLATFORM,
5762 ++ .gt = 2,
5763 ++};
5764 ++
5765 ++#define SNB_M_PLATFORM \
5766 ++ GEN6_FEATURES, \
5767 ++ .platform = INTEL_SANDYBRIDGE, \
5768 ++ .is_mobile = 1
5769 ++
5770 ++
5771 ++static const struct intel_device_info intel_sandybridge_m_gt1_info = {
5772 ++ SNB_M_PLATFORM,
5773 ++ .gt = 1,
5774 ++};
5775 ++
5776 ++static const struct intel_device_info intel_sandybridge_m_gt2_info = {
5777 ++ SNB_M_PLATFORM,
5778 ++ .gt = 2,
5779 + };
5780 +
5781 + #define GEN7_FEATURES \
5782 +@@ -249,22 +268,41 @@ static const struct intel_device_info intel_sandybridge_m_info = {
5783 + GEN_DEFAULT_PIPEOFFSETS, \
5784 + IVB_CURSOR_OFFSETS
5785 +
5786 +-static const struct intel_device_info intel_ivybridge_d_info = {
5787 +- GEN7_FEATURES,
5788 +- .platform = INTEL_IVYBRIDGE,
5789 +- .has_l3_dpf = 1,
5790 ++#define IVB_D_PLATFORM \
5791 ++ GEN7_FEATURES, \
5792 ++ .platform = INTEL_IVYBRIDGE, \
5793 ++ .has_l3_dpf = 1
5794 ++
5795 ++static const struct intel_device_info intel_ivybridge_d_gt1_info = {
5796 ++ IVB_D_PLATFORM,
5797 ++ .gt = 1,
5798 + };
5799 +
5800 +-static const struct intel_device_info intel_ivybridge_m_info = {
5801 +- GEN7_FEATURES,
5802 +- .platform = INTEL_IVYBRIDGE,
5803 +- .is_mobile = 1,
5804 +- .has_l3_dpf = 1,
5805 ++static const struct intel_device_info intel_ivybridge_d_gt2_info = {
5806 ++ IVB_D_PLATFORM,
5807 ++ .gt = 2,
5808 ++};
5809 ++
5810 ++#define IVB_M_PLATFORM \
5811 ++ GEN7_FEATURES, \
5812 ++ .platform = INTEL_IVYBRIDGE, \
5813 ++ .is_mobile = 1, \
5814 ++ .has_l3_dpf = 1
5815 ++
5816 ++static const struct intel_device_info intel_ivybridge_m_gt1_info = {
5817 ++ IVB_M_PLATFORM,
5818 ++ .gt = 1,
5819 ++};
5820 ++
5821 ++static const struct intel_device_info intel_ivybridge_m_gt2_info = {
5822 ++ IVB_M_PLATFORM,
5823 ++ .gt = 2,
5824 + };
5825 +
5826 + static const struct intel_device_info intel_ivybridge_q_info = {
5827 + GEN7_FEATURES,
5828 + .platform = INTEL_IVYBRIDGE,
5829 ++ .gt = 2,
5830 + .num_pipes = 0, /* legal, last one wins */
5831 + .has_l3_dpf = 1,
5832 + };
5833 +@@ -299,10 +337,24 @@ static const struct intel_device_info intel_valleyview_info = {
5834 + .has_rc6p = 0 /* RC6p removed by HSW */, \
5835 + .has_runtime_pm = 1
5836 +
5837 +-static const struct intel_device_info intel_haswell_info = {
5838 +- HSW_FEATURES,
5839 +- .platform = INTEL_HASWELL,
5840 +- .has_l3_dpf = 1,
5841 ++#define HSW_PLATFORM \
5842 ++ HSW_FEATURES, \
5843 ++ .platform = INTEL_HASWELL, \
5844 ++ .has_l3_dpf = 1
5845 ++
5846 ++static const struct intel_device_info intel_haswell_gt1_info = {
5847 ++ HSW_PLATFORM,
5848 ++ .gt = 1,
5849 ++};
5850 ++
5851 ++static const struct intel_device_info intel_haswell_gt2_info = {
5852 ++ HSW_PLATFORM,
5853 ++ .gt = 2,
5854 ++};
5855 ++
5856 ++static const struct intel_device_info intel_haswell_gt3_info = {
5857 ++ HSW_PLATFORM,
5858 ++ .gt = 3,
5859 + };
5860 +
5861 + #define BDW_FEATURES \
5862 +@@ -318,12 +370,27 @@ static const struct intel_device_info intel_haswell_info = {
5863 + .gen = 8, \
5864 + .platform = INTEL_BROADWELL
5865 +
5866 +-static const struct intel_device_info intel_broadwell_info = {
5867 ++static const struct intel_device_info intel_broadwell_gt1_info = {
5868 ++ BDW_PLATFORM,
5869 ++ .gt = 1,
5870 ++};
5871 ++
5872 ++static const struct intel_device_info intel_broadwell_gt2_info = {
5873 + BDW_PLATFORM,
5874 ++ .gt = 2,
5875 ++};
5876 ++
5877 ++static const struct intel_device_info intel_broadwell_rsvd_info = {
5878 ++ BDW_PLATFORM,
5879 ++ .gt = 3,
5880 ++ /* According to the device ID these devices are GT3; they were
5881 ++ * previously treated as non-GT3, so keep it that way.
5882 ++ */
5883 + };
5884 +
5885 + static const struct intel_device_info intel_broadwell_gt3_info = {
5886 + BDW_PLATFORM,
5887 ++ .gt = 3,
5888 + .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
5889 + };
5890 +
5891 +@@ -358,13 +425,29 @@ static const struct intel_device_info intel_cherryview_info = {
5892 + .has_guc = 1, \
5893 + .ddb_size = 896
5894 +
5895 +-static const struct intel_device_info intel_skylake_info = {
5896 ++static const struct intel_device_info intel_skylake_gt1_info = {
5897 + SKL_PLATFORM,
5898 ++ .gt = 1,
5899 + };
5900 +
5901 +-static const struct intel_device_info intel_skylake_gt3_info = {
5902 ++static const struct intel_device_info intel_skylake_gt2_info = {
5903 + SKL_PLATFORM,
5904 +- .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
5905 ++ .gt = 2,
5906 ++};
5907 ++
5908 ++#define SKL_GT3_PLUS_PLATFORM \
5909 ++ SKL_PLATFORM, \
5910 ++ .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING
5911 ++
5912 ++
5913 ++static const struct intel_device_info intel_skylake_gt3_info = {
5914 ++ SKL_GT3_PLUS_PLATFORM,
5915 ++ .gt = 3,
5916 ++};
5917 ++
5918 ++static const struct intel_device_info intel_skylake_gt4_info = {
5919 ++ SKL_GT3_PLUS_PLATFORM,
5920 ++ .gt = 4,
5921 + };
5922 +
5923 + #define GEN9_LP_FEATURES \
5924 +@@ -416,12 +499,19 @@ static const struct intel_device_info intel_geminilake_info = {
5925 + .has_guc = 1, \
5926 + .ddb_size = 896
5927 +
5928 +-static const struct intel_device_info intel_kabylake_info = {
5929 ++static const struct intel_device_info intel_kabylake_gt1_info = {
5930 + KBL_PLATFORM,
5931 ++ .gt = 1,
5932 ++};
5933 ++
5934 ++static const struct intel_device_info intel_kabylake_gt2_info = {
5935 ++ KBL_PLATFORM,
5936 ++ .gt = 2,
5937 + };
5938 +
5939 + static const struct intel_device_info intel_kabylake_gt3_info = {
5940 + KBL_PLATFORM,
5941 ++ .gt = 3,
5942 + .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
5943 + };
5944 +
5945 +@@ -434,20 +524,28 @@ static const struct intel_device_info intel_kabylake_gt3_info = {
5946 + .has_guc = 1, \
5947 + .ddb_size = 896
5948 +
5949 +-static const struct intel_device_info intel_coffeelake_info = {
5950 ++static const struct intel_device_info intel_coffeelake_gt1_info = {
5951 ++ CFL_PLATFORM,
5952 ++ .gt = 1,
5953 ++};
5954 ++
5955 ++static const struct intel_device_info intel_coffeelake_gt2_info = {
5956 + CFL_PLATFORM,
5957 ++ .gt = 2,
5958 + };
5959 +
5960 + static const struct intel_device_info intel_coffeelake_gt3_info = {
5961 + CFL_PLATFORM,
5962 ++ .gt = 3,
5963 + .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
5964 + };
5965 +
5966 +-static const struct intel_device_info intel_cannonlake_info = {
5967 ++static const struct intel_device_info intel_cannonlake_gt2_info = {
5968 + BDW_FEATURES,
5969 + .is_alpha_support = 1,
5970 + .platform = INTEL_CANNONLAKE,
5971 + .gen = 10,
5972 ++ .gt = 2,
5973 + .ddb_size = 1024,
5974 + .has_csr = 1,
5975 + .color = { .degamma_lut_size = 0, .gamma_lut_size = 1024 }
5976 +@@ -476,31 +574,40 @@ static const struct pci_device_id pciidlist[] = {
5977 + INTEL_PINEVIEW_IDS(&intel_pineview_info),
5978 + INTEL_IRONLAKE_D_IDS(&intel_ironlake_d_info),
5979 + INTEL_IRONLAKE_M_IDS(&intel_ironlake_m_info),
5980 +- INTEL_SNB_D_IDS(&intel_sandybridge_d_info),
5981 +- INTEL_SNB_M_IDS(&intel_sandybridge_m_info),
5982 ++ INTEL_SNB_D_GT1_IDS(&intel_sandybridge_d_gt1_info),
5983 ++ INTEL_SNB_D_GT2_IDS(&intel_sandybridge_d_gt2_info),
5984 ++ INTEL_SNB_M_GT1_IDS(&intel_sandybridge_m_gt1_info),
5985 ++ INTEL_SNB_M_GT2_IDS(&intel_sandybridge_m_gt2_info),
5986 + INTEL_IVB_Q_IDS(&intel_ivybridge_q_info), /* must be first IVB */
5987 +- INTEL_IVB_M_IDS(&intel_ivybridge_m_info),
5988 +- INTEL_IVB_D_IDS(&intel_ivybridge_d_info),
5989 +- INTEL_HSW_IDS(&intel_haswell_info),
5990 ++ INTEL_IVB_M_GT1_IDS(&intel_ivybridge_m_gt1_info),
5991 ++ INTEL_IVB_M_GT2_IDS(&intel_ivybridge_m_gt2_info),
5992 ++ INTEL_IVB_D_GT1_IDS(&intel_ivybridge_d_gt1_info),
5993 ++ INTEL_IVB_D_GT2_IDS(&intel_ivybridge_d_gt2_info),
5994 ++ INTEL_HSW_GT1_IDS(&intel_haswell_gt1_info),
5995 ++ INTEL_HSW_GT2_IDS(&intel_haswell_gt2_info),
5996 ++ INTEL_HSW_GT3_IDS(&intel_haswell_gt3_info),
5997 + INTEL_VLV_IDS(&intel_valleyview_info),
5998 +- INTEL_BDW_GT12_IDS(&intel_broadwell_info),
5999 ++ INTEL_BDW_GT1_IDS(&intel_broadwell_gt1_info),
6000 ++ INTEL_BDW_GT2_IDS(&intel_broadwell_gt2_info),
6001 + INTEL_BDW_GT3_IDS(&intel_broadwell_gt3_info),
6002 +- INTEL_BDW_RSVD_IDS(&intel_broadwell_info),
6003 ++ INTEL_BDW_RSVD_IDS(&intel_broadwell_rsvd_info),
6004 + INTEL_CHV_IDS(&intel_cherryview_info),
6005 +- INTEL_SKL_GT1_IDS(&intel_skylake_info),
6006 +- INTEL_SKL_GT2_IDS(&intel_skylake_info),
6007 ++ INTEL_SKL_GT1_IDS(&intel_skylake_gt1_info),
6008 ++ INTEL_SKL_GT2_IDS(&intel_skylake_gt2_info),
6009 + INTEL_SKL_GT3_IDS(&intel_skylake_gt3_info),
6010 +- INTEL_SKL_GT4_IDS(&intel_skylake_gt3_info),
6011 ++ INTEL_SKL_GT4_IDS(&intel_skylake_gt4_info),
6012 + INTEL_BXT_IDS(&intel_broxton_info),
6013 + INTEL_GLK_IDS(&intel_geminilake_info),
6014 +- INTEL_KBL_GT1_IDS(&intel_kabylake_info),
6015 +- INTEL_KBL_GT2_IDS(&intel_kabylake_info),
6016 ++ INTEL_KBL_GT1_IDS(&intel_kabylake_gt1_info),
6017 ++ INTEL_KBL_GT2_IDS(&intel_kabylake_gt2_info),
6018 + INTEL_KBL_GT3_IDS(&intel_kabylake_gt3_info),
6019 + INTEL_KBL_GT4_IDS(&intel_kabylake_gt3_info),
6020 +- INTEL_CFL_S_IDS(&intel_coffeelake_info),
6021 +- INTEL_CFL_H_IDS(&intel_coffeelake_info),
6022 +- INTEL_CFL_U_IDS(&intel_coffeelake_gt3_info),
6023 +- INTEL_CNL_IDS(&intel_cannonlake_info),
6024 ++ INTEL_CFL_S_GT1_IDS(&intel_coffeelake_gt1_info),
6025 ++ INTEL_CFL_S_GT2_IDS(&intel_coffeelake_gt2_info),
6026 ++ INTEL_CFL_H_GT2_IDS(&intel_coffeelake_gt2_info),
6027 ++ INTEL_CFL_U_GT3_IDS(&intel_coffeelake_gt3_info),
6028 ++ INTEL_CNL_U_GT2_IDS(&intel_cannonlake_gt2_info),
6029 ++ INTEL_CNL_Y_GT2_IDS(&intel_cannonlake_gt2_info),
6030 + {0, 0, 0}
6031 + };
6032 + MODULE_DEVICE_TABLE(pci, pciidlist);
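
The long i915_pci.c hunk is mostly mechanical: the shared feature sets stay in macros, and each GT variant becomes its own const struct that expands the macro and then adds .gt, so the PCI ID table can bind every device ID to an exact GT count. The composition trick, reduced to essentials:

    /* Designated-initializer composition via macros, as in the i915
     * hunk above (fields cut down to the bare minimum). */
    struct device_info {
        int gen;
        int is_mobile;
        int gt;
    };

    #define GEN6_FEATURES  .gen = 6

    #define SNB_M_PLATFORM \
        GEN6_FEATURES,     \
        .is_mobile = 1

    static const struct device_info snb_m_gt1 = { SNB_M_PLATFORM, .gt = 1 };
    static const struct device_info snb_m_gt2 = { SNB_M_PLATFORM, .gt = 2 };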
6033 +diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
6034 +index 74fc9362ecf9..3eb920851141 100644
6035 +--- a/drivers/gpu/drm/qxl/qxl_cmd.c
6036 ++++ b/drivers/gpu/drm/qxl/qxl_cmd.c
6037 +@@ -388,7 +388,11 @@ void qxl_io_create_primary(struct qxl_device *qdev,
6038 + create->width = bo->surf.width;
6039 + create->height = bo->surf.height;
6040 + create->stride = bo->surf.stride;
6041 +- create->mem = qxl_bo_physical_address(qdev, bo, offset);
6042 ++ if (bo->shadow) {
6043 ++ create->mem = qxl_bo_physical_address(qdev, bo->shadow, offset);
6044 ++ } else {
6045 ++ create->mem = qxl_bo_physical_address(qdev, bo, offset);
6046 ++ }
6047 +
6048 + QXL_INFO(qdev, "%s: mem = %llx, from %p\n", __func__, create->mem,
6049 + bo->kptr);
6050 +diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
6051 +index afbf50d0c08f..9a9214ae0fb5 100644
6052 +--- a/drivers/gpu/drm/qxl/qxl_display.c
6053 ++++ b/drivers/gpu/drm/qxl/qxl_display.c
6054 +@@ -289,6 +289,7 @@ static void qxl_crtc_destroy(struct drm_crtc *crtc)
6055 + {
6056 + struct qxl_crtc *qxl_crtc = to_qxl_crtc(crtc);
6057 +
6058 ++ qxl_bo_unref(&qxl_crtc->cursor_bo);
6059 + drm_crtc_cleanup(crtc);
6060 + kfree(qxl_crtc);
6061 + }
6062 +@@ -305,7 +306,9 @@ static const struct drm_crtc_funcs qxl_crtc_funcs = {
6063 + void qxl_user_framebuffer_destroy(struct drm_framebuffer *fb)
6064 + {
6065 + struct qxl_framebuffer *qxl_fb = to_qxl_framebuffer(fb);
6066 ++ struct qxl_bo *bo = gem_to_qxl_bo(qxl_fb->obj);
6067 +
6068 ++ WARN_ON(bo->shadow);
6069 + drm_gem_object_unreference_unlocked(qxl_fb->obj);
6070 + drm_framebuffer_cleanup(fb);
6071 + kfree(qxl_fb);
6072 +@@ -493,6 +496,53 @@ static int qxl_primary_atomic_check(struct drm_plane *plane,
6073 + return 0;
6074 + }
6075 +
6076 ++static int qxl_primary_apply_cursor(struct drm_plane *plane)
6077 ++{
6078 ++ struct drm_device *dev = plane->dev;
6079 ++ struct qxl_device *qdev = dev->dev_private;
6080 ++ struct drm_framebuffer *fb = plane->state->fb;
6081 ++ struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc);
6082 ++ struct qxl_cursor_cmd *cmd;
6083 ++ struct qxl_release *release;
6084 ++ int ret = 0;
6085 ++
6086 ++ if (!qcrtc->cursor_bo)
6087 ++ return 0;
6088 ++
6089 ++ ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
6090 ++ QXL_RELEASE_CURSOR_CMD,
6091 ++ &release, NULL);
6092 ++ if (ret)
6093 ++ return ret;
6094 ++
6095 ++ ret = qxl_release_list_add(release, qcrtc->cursor_bo);
6096 ++ if (ret)
6097 ++ goto out_free_release;
6098 ++
6099 ++ ret = qxl_release_reserve_list(release, false);
6100 ++ if (ret)
6101 ++ goto out_free_release;
6102 ++
6103 ++ cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
6104 ++ cmd->type = QXL_CURSOR_SET;
6105 ++ cmd->u.set.position.x = plane->state->crtc_x + fb->hot_x;
6106 ++ cmd->u.set.position.y = plane->state->crtc_y + fb->hot_y;
6107 ++
6108 ++ cmd->u.set.shape = qxl_bo_physical_address(qdev, qcrtc->cursor_bo, 0);
6109 ++
6110 ++ cmd->u.set.visible = 1;
6111 ++ qxl_release_unmap(qdev, release, &cmd->release_info);
6112 ++
6113 ++ qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
6114 ++ qxl_release_fence_buffer_objects(release);
6115 ++
6116 ++ return ret;
6117 ++
6118 ++out_free_release:
6119 ++ qxl_release_free(qdev, release);
6120 ++ return ret;
6121 ++}
6122 ++
6123 + static void qxl_primary_atomic_update(struct drm_plane *plane,
6124 + struct drm_plane_state *old_state)
6125 + {
6126 +@@ -508,6 +558,8 @@ static void qxl_primary_atomic_update(struct drm_plane *plane,
6127 + .x2 = qfb->base.width,
6128 + .y2 = qfb->base.height
6129 + };
6130 ++ int ret;
6131 ++ bool same_shadow = false;
6132 +
6133 + if (old_state->fb) {
6134 + qfb_old = to_qxl_framebuffer(old_state->fb);
6135 +@@ -519,15 +571,28 @@ static void qxl_primary_atomic_update(struct drm_plane *plane,
6136 + if (bo == bo_old)
6137 + return;
6138 +
6139 ++ if (bo_old && bo_old->shadow && bo->shadow &&
6140 ++ bo_old->shadow == bo->shadow) {
6141 ++ same_shadow = true;
6142 ++ }
6143 ++
6144 + if (bo_old && bo_old->is_primary) {
6145 +- qxl_io_destroy_primary(qdev);
6146 ++ if (!same_shadow)
6147 ++ qxl_io_destroy_primary(qdev);
6148 + bo_old->is_primary = false;
6149 ++
6150 ++ ret = qxl_primary_apply_cursor(plane);
6151 ++ if (ret)
6152 ++ DRM_ERROR(
6153 ++ "could not set cursor after creating primary");
6154 + }
6155 +
6156 + if (!bo->is_primary) {
6157 +- qxl_io_create_primary(qdev, 0, bo);
6158 ++ if (!same_shadow)
6159 ++ qxl_io_create_primary(qdev, 0, bo);
6160 + bo->is_primary = true;
6161 + }
6162 ++
6163 + qxl_draw_dirty_fb(qdev, qfb, bo, 0, 0, &norect, 1, 1);
6164 + }
6165 +
6166 +@@ -560,11 +625,12 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
6167 + struct drm_device *dev = plane->dev;
6168 + struct qxl_device *qdev = dev->dev_private;
6169 + struct drm_framebuffer *fb = plane->state->fb;
6170 ++ struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc);
6171 + struct qxl_release *release;
6172 + struct qxl_cursor_cmd *cmd;
6173 + struct qxl_cursor *cursor;
6174 + struct drm_gem_object *obj;
6175 +- struct qxl_bo *cursor_bo, *user_bo = NULL;
6176 ++ struct qxl_bo *cursor_bo = NULL, *user_bo = NULL;
6177 + int ret;
6178 + void *user_ptr;
6179 + int size = 64*64*4;
6180 +@@ -617,6 +683,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
6181 + cmd->u.set.shape = qxl_bo_physical_address(qdev,
6182 + cursor_bo, 0);
6183 + cmd->type = QXL_CURSOR_SET;
6184 ++
6185 ++ qxl_bo_unref(&qcrtc->cursor_bo);
6186 ++ qcrtc->cursor_bo = cursor_bo;
6187 ++ cursor_bo = NULL;
6188 + } else {
6189 +
6190 + ret = qxl_release_reserve_list(release, true);
6191 +@@ -634,6 +704,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
6192 + qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
6193 + qxl_release_fence_buffer_objects(release);
6194 +
6195 ++ qxl_bo_unref(&cursor_bo);
6196 ++
6197 + return;
6198 +
6199 + out_backoff:
6200 +@@ -679,8 +751,9 @@ static void qxl_cursor_atomic_disable(struct drm_plane *plane,
6201 + static int qxl_plane_prepare_fb(struct drm_plane *plane,
6202 + struct drm_plane_state *new_state)
6203 + {
6204 ++ struct qxl_device *qdev = plane->dev->dev_private;
6205 + struct drm_gem_object *obj;
6206 +- struct qxl_bo *user_bo;
6207 ++ struct qxl_bo *user_bo, *old_bo = NULL;
6208 + int ret;
6209 +
6210 + if (!new_state->fb)
6211 +@@ -689,6 +762,32 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
6212 + obj = to_qxl_framebuffer(new_state->fb)->obj;
6213 + user_bo = gem_to_qxl_bo(obj);
6214 +
6215 ++ if (plane->type == DRM_PLANE_TYPE_PRIMARY &&
6216 ++ user_bo->is_dumb && !user_bo->shadow) {
6217 ++ if (plane->state->fb) {
6218 ++ obj = to_qxl_framebuffer(plane->state->fb)->obj;
6219 ++ old_bo = gem_to_qxl_bo(obj);
6220 ++ }
6221 ++ if (old_bo && old_bo->shadow &&
6222 ++ user_bo->gem_base.size == old_bo->gem_base.size &&
6223 ++ plane->state->crtc == new_state->crtc &&
6224 ++ plane->state->crtc_w == new_state->crtc_w &&
6225 ++ plane->state->crtc_h == new_state->crtc_h &&
6226 ++ plane->state->src_x == new_state->src_x &&
6227 ++ plane->state->src_y == new_state->src_y &&
6228 ++ plane->state->src_w == new_state->src_w &&
6229 ++ plane->state->src_h == new_state->src_h &&
6230 ++ plane->state->rotation == new_state->rotation &&
6231 ++ plane->state->zpos == new_state->zpos) {
6232 ++ drm_gem_object_get(&old_bo->shadow->gem_base);
6233 ++ user_bo->shadow = old_bo->shadow;
6234 ++ } else {
6235 ++ qxl_bo_create(qdev, user_bo->gem_base.size,
6236 ++ true, true, QXL_GEM_DOMAIN_VRAM, NULL,
6237 ++ &user_bo->shadow);
6238 ++ }
6239 ++ }
6240 ++
6241 + ret = qxl_bo_pin(user_bo, QXL_GEM_DOMAIN_CPU, NULL);
6242 + if (ret)
6243 + return ret;
6244 +@@ -713,6 +812,11 @@ static void qxl_plane_cleanup_fb(struct drm_plane *plane,
6245 + obj = to_qxl_framebuffer(old_state->fb)->obj;
6246 + user_bo = gem_to_qxl_bo(obj);
6247 + qxl_bo_unpin(user_bo);
6248 ++
6249 ++ if (user_bo->shadow && !user_bo->is_primary) {
6250 ++ drm_gem_object_put_unlocked(&user_bo->shadow->gem_base);
6251 ++ user_bo->shadow = NULL;
6252 ++ }
6253 + }
6254 +
6255 + static const uint32_t qxl_cursor_plane_formats[] = {
6256 +diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
6257 +index 3397a1907336..c0a927efa653 100644
6258 +--- a/drivers/gpu/drm/qxl/qxl_drv.h
6259 ++++ b/drivers/gpu/drm/qxl/qxl_drv.h
6260 +@@ -113,6 +113,8 @@ struct qxl_bo {
6261 + /* Constant after initialization */
6262 + struct drm_gem_object gem_base;
6263 + bool is_primary; /* is this now a primary surface */
6264 ++ bool is_dumb;
6265 ++ struct qxl_bo *shadow;
6266 + bool hw_surf_alloc;
6267 + struct qxl_surface surf;
6268 + uint32_t surface_id;
6269 +@@ -133,6 +135,8 @@ struct qxl_bo_list {
6270 + struct qxl_crtc {
6271 + struct drm_crtc base;
6272 + int index;
6273 ++
6274 ++ struct qxl_bo *cursor_bo;
6275 + };
6276 +
6277 + struct qxl_output {
6278 +diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
6279 +index 5e65d5d2d937..11085ab01374 100644
6280 +--- a/drivers/gpu/drm/qxl/qxl_dumb.c
6281 ++++ b/drivers/gpu/drm/qxl/qxl_dumb.c
6282 +@@ -63,6 +63,7 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
6283 + &handle);
6284 + if (r)
6285 + return r;
6286 ++ qobj->is_dumb = true;
6287 + args->pitch = pitch;
6288 + args->handle = handle;
6289 + return 0;
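
The qxl hunks above give dumb buffers used as primary framebuffers a shadow bo: a page-flip between two dumb buffers sharing one shadow can skip the destroy/create cycle for the primary surface, and the CRTC now holds a reference to the active cursor bo so the cursor can be re-applied when the primary is recreated. A minimal standalone sketch of that flip decision — the types and helpers are invented stand-ins for the qxl structures, not driver code:

#include <stdbool.h>
#include <stdio.h>

struct bo {
    struct bo *shadow;
    bool is_primary;
};

/* Invented stand-ins for qxl_io_destroy_primary()/qxl_io_create_primary(). */
static void destroy_primary(void)
{
    puts("destroy primary surface");
}

static void create_primary(struct bo *bo)
{
    (void)bo;
    puts("create primary surface");
}

/* Flip the scanout from old_bo to bo; when both dumb buffers are backed
 * by the same shadow bo, the primary surface is left alone. */
static void flip(struct bo *old_bo, struct bo *bo)
{
    bool same_shadow = old_bo && old_bo->shadow && bo->shadow &&
                       old_bo->shadow == bo->shadow;

    if (old_bo && old_bo->is_primary) {
        if (!same_shadow)
            destroy_primary();
        old_bo->is_primary = false;
    }
    if (!bo->is_primary) {
        if (!same_shadow)
            create_primary(bo);
        bo->is_primary = true;
    }
}

int main(void)
{
    struct bo shadow = { 0 };
    struct bo a = { .shadow = &shadow, .is_primary = true };
    struct bo b = { .shadow = &shadow };

    flip(&a, &b); /* shared shadow: prints nothing, the surface survives */
    return 0;
}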
6290 +diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
6291 +index d34d1cf33895..95f4db70dd22 100644
6292 +--- a/drivers/gpu/drm/radeon/radeon_uvd.c
6293 ++++ b/drivers/gpu/drm/radeon/radeon_uvd.c
6294 +@@ -995,7 +995,7 @@ int radeon_uvd_calc_upll_dividers(struct radeon_device *rdev,
6295 + /* calc dclk divider with current vco freq */
6296 + dclk_div = radeon_uvd_calc_upll_post_div(vco_freq, dclk,
6297 + pd_min, pd_even);
6298 +- if (vclk_div > pd_max)
6299 ++ if (dclk_div > pd_max)
6300 + break; /* vco is too big, it has to stop */
6301 +
6302 + /* calc score with current vco freq */
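
The radeon_uvd.c change is a one-identifier fix worth spelling out: the loop computes a dclk post-divider but then bounds-checked vclk_div a second time, so an out-of-range dclk divider was never caught. A standalone sketch of the corrected loop shape — the divider math is a stub, not the real UPLL formula:

#include <stdio.h>

#define PD_MAX 16

/* Stub divider search: smallest divider bringing vco_freq to <= clk. */
static unsigned post_div(unsigned vco_freq, unsigned clk)
{
    unsigned div = 1;

    while (vco_freq / div > clk)
        div++;
    return div;
}

int main(void)
{
    unsigned vclk = 54000, dclk = 40000; /* kHz, illustrative targets */

    for (unsigned vco = 100000; vco <= 300000; vco += 10000) {
        unsigned vclk_div = post_div(vco, vclk);
        unsigned dclk_div = post_div(vco, dclk);

        if (vclk_div > PD_MAX)
            break; /* vco too big for the vclk divider range */
        if (dclk_div > PD_MAX) /* the fixed check: was vclk_div again */
            break;
        printf("vco=%u vclk_div=%u dclk_div=%u\n", vco, vclk_div, dclk_div);
    }
    return 0;
}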
6303 +diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
6304 +index ee3e74266a13..97a0a639dad9 100644
6305 +--- a/drivers/gpu/drm/radeon/si_dpm.c
6306 ++++ b/drivers/gpu/drm/radeon/si_dpm.c
6307 +@@ -2984,6 +2984,11 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
6308 + (rdev->pdev->device == 0x6667)) {
6309 + max_sclk = 75000;
6310 + }
6311 ++ if ((rdev->pdev->revision == 0xC3) ||
6312 ++ (rdev->pdev->device == 0x6665)) {
6313 ++ max_sclk = 60000;
6314 ++ max_mclk = 80000;
6315 ++ }
6316 + } else if (rdev->family == CHIP_OLAND) {
6317 + if ((rdev->pdev->revision == 0xC7) ||
6318 + (rdev->pdev->revision == 0x80) ||
6319 +diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
6320 +index c088703777e2..68eed684dff5 100644
6321 +--- a/drivers/gpu/drm/ttm/ttm_bo.c
6322 ++++ b/drivers/gpu/drm/ttm/ttm_bo.c
6323 +@@ -175,7 +175,8 @@ void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
6324 + list_add_tail(&bo->lru, &man->lru[bo->priority]);
6325 + kref_get(&bo->list_kref);
6326 +
6327 +- if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) {
6328 ++ if (bo->ttm && !(bo->ttm->page_flags &
6329 ++ (TTM_PAGE_FLAG_SG | TTM_PAGE_FLAG_SWAPPED))) {
6330 + list_add_tail(&bo->swap,
6331 + &bo->glob->swap_lru[bo->priority]);
6332 + kref_get(&bo->list_kref);
6333 +diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
6334 +index c8ebb757e36b..b17d0d38f290 100644
6335 +--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
6336 ++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
6337 +@@ -299,7 +299,7 @@ static void ttm_bo_vm_close(struct vm_area_struct *vma)
6338 +
6339 + static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
6340 + unsigned long offset,
6341 +- void *buf, int len, int write)
6342 ++ uint8_t *buf, int len, int write)
6343 + {
6344 + unsigned long page = offset >> PAGE_SHIFT;
6345 + unsigned long bytes_left = len;
6346 +@@ -328,6 +328,7 @@ static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
6347 + ttm_bo_kunmap(&map);
6348 +
6349 + page++;
6350 ++ buf += bytes;
6351 + bytes_left -= bytes;
6352 + offset = 0;
6353 + } while (bytes_left);
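
The ttm_bo_vm_access_kmap() fix is the classic chunked-copy bug: the loop advanced the page index and the remaining byte count but never the destination pointer, so each page of a multi-page access landed on top of the first chunk (retyping the parameter as uint8_t * also makes the pointer arithmetic well-defined). A runnable distillation with a toy 4-byte page:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE 4 /* tiny "page" so the bug would be visible */

static const char src[] = "abcdefghij";

static void access_kmap(uint8_t *buf, size_t offset, size_t len)
{
    size_t page = offset / PAGE;
    size_t bytes_left = len;

    offset %= PAGE;
    do {
        size_t bytes = bytes_left < PAGE - offset ? bytes_left
                                                  : PAGE - offset;

        memcpy(buf, src + page * PAGE + offset, bytes);
        page++;
        buf += bytes;       /* the fix: advance the destination too */
        bytes_left -= bytes;
        offset = 0;
    } while (bytes_left);
}

int main(void)
{
    uint8_t out[11] = { 0 };

    access_kmap(out, 0, 10);
    /* "abcdefghij"; without the fix only the last chunk survives at
     * the start of the buffer. */
    printf("%s\n", (char *)out);
    return 0;
}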
6354 +diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
6355 +index c13a4fd86b3c..a42744c7665b 100644
6356 +--- a/drivers/hwmon/coretemp.c
6357 ++++ b/drivers/hwmon/coretemp.c
6358 +@@ -268,13 +268,13 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
6359 + for (i = 0; i < ARRAY_SIZE(tjmax_model_table); i++) {
6360 + const struct tjmax_model *tm = &tjmax_model_table[i];
6361 + if (c->x86_model == tm->model &&
6362 +- (tm->mask == ANY || c->x86_mask == tm->mask))
6363 ++ (tm->mask == ANY || c->x86_stepping == tm->mask))
6364 + return tm->tjmax;
6365 + }
6366 +
6367 + /* Early chips have no MSR for TjMax */
6368 +
6369 +- if (c->x86_model == 0xf && c->x86_mask < 4)
6370 ++ if (c->x86_model == 0xf && c->x86_stepping < 4)
6371 + usemsr_ee = 0;
6372 +
6373 + if (c->x86_model > 0xe && usemsr_ee) {
6374 +@@ -425,7 +425,7 @@ static int chk_ucode_version(unsigned int cpu)
6375 + * Readings might stop update when processor visited too deep sleep,
6376 + * fixed for stepping D0 (6EC).
6377 + */
6378 +- if (c->x86_model == 0xe && c->x86_mask < 0xc && c->microcode < 0x39) {
6379 ++ if (c->x86_model == 0xe && c->x86_stepping < 0xc && c->microcode < 0x39) {
6380 + pr_err("Errata AE18 not fixed, update BIOS or microcode of the CPU!\n");
6381 + return -ENODEV;
6382 + }
6383 +diff --git a/drivers/hwmon/hwmon-vid.c b/drivers/hwmon/hwmon-vid.c
6384 +index ef91b8a67549..84e91286fc4f 100644
6385 +--- a/drivers/hwmon/hwmon-vid.c
6386 ++++ b/drivers/hwmon/hwmon-vid.c
6387 +@@ -293,7 +293,7 @@ u8 vid_which_vrm(void)
6388 + if (c->x86 < 6) /* Any CPU with family lower than 6 */
6389 + return 0; /* doesn't have VID */
6390 +
6391 +- vrm_ret = find_vrm(c->x86, c->x86_model, c->x86_mask, c->x86_vendor);
6392 ++ vrm_ret = find_vrm(c->x86, c->x86_model, c->x86_stepping, c->x86_vendor);
6393 + if (vrm_ret == 134)
6394 + vrm_ret = get_via_model_d_vrm();
6395 + if (vrm_ret == 0)
6396 +diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
6397 +index ce3b91f22e30..5c740996aa62 100644
6398 +--- a/drivers/hwmon/k10temp.c
6399 ++++ b/drivers/hwmon/k10temp.c
6400 +@@ -179,7 +179,7 @@ static bool has_erratum_319(struct pci_dev *pdev)
6401 + * and AM3 formats, but that's the best we can do.
6402 + */
6403 + return boot_cpu_data.x86_model < 4 ||
6404 +- (boot_cpu_data.x86_model == 4 && boot_cpu_data.x86_mask <= 2);
6405 ++ (boot_cpu_data.x86_model == 4 && boot_cpu_data.x86_stepping <= 2);
6406 + }
6407 +
6408 + static int k10temp_probe(struct pci_dev *pdev,
6409 +diff --git a/drivers/hwmon/k8temp.c b/drivers/hwmon/k8temp.c
6410 +index 5a632bcf869b..e59f9113fb93 100644
6411 +--- a/drivers/hwmon/k8temp.c
6412 ++++ b/drivers/hwmon/k8temp.c
6413 +@@ -187,7 +187,7 @@ static int k8temp_probe(struct pci_dev *pdev,
6414 + return -ENOMEM;
6415 +
6416 + model = boot_cpu_data.x86_model;
6417 +- stepping = boot_cpu_data.x86_mask;
6418 ++ stepping = boot_cpu_data.x86_stepping;
6419 +
6420 + /* feature available since SH-C0, exclude older revisions */
6421 + if ((model == 4 && stepping == 0) ||
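
The hwmon hunks above (like the video_gx.c hunk further down) are mechanical fallout from the upstream rename of cpuinfo_x86.x86_mask to x86_stepping; the value itself is still the CPUID stepping ID. For reference, a small userspace program reads the same field straight from CPUID leaf 1 using the GCC/Clang builtin (extended family/model bits omitted for brevity):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    unsigned stepping = eax & 0xf;        /* bits 3:0  */
    unsigned model    = (eax >> 4) & 0xf; /* bits 7:4  */
    unsigned family   = (eax >> 8) & 0xf; /* bits 11:8 */

    printf("family 0x%x model 0x%x stepping 0x%x\n",
           family, model, stepping);
    return 0;
}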
6422 +diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
6423 +index 84fc32a2c8b3..ebfdb5503701 100644
6424 +--- a/drivers/infiniband/core/device.c
6425 ++++ b/drivers/infiniband/core/device.c
6426 +@@ -446,7 +446,6 @@ int ib_register_device(struct ib_device *device,
6427 + struct ib_udata uhw = {.outlen = 0, .inlen = 0};
6428 + struct device *parent = device->dev.parent;
6429 +
6430 +- WARN_ON_ONCE(!parent);
6431 + WARN_ON_ONCE(device->dma_device);
6432 + if (device->dev.dma_ops) {
6433 + /*
6434 +@@ -455,16 +454,25 @@ int ib_register_device(struct ib_device *device,
6435 + * into device->dev.
6436 + */
6437 + device->dma_device = &device->dev;
6438 +- if (!device->dev.dma_mask)
6439 +- device->dev.dma_mask = parent->dma_mask;
6440 +- if (!device->dev.coherent_dma_mask)
6441 +- device->dev.coherent_dma_mask =
6442 +- parent->coherent_dma_mask;
6443 ++ if (!device->dev.dma_mask) {
6444 ++ if (parent)
6445 ++ device->dev.dma_mask = parent->dma_mask;
6446 ++ else
6447 ++ WARN_ON_ONCE(true);
6448 ++ }
6449 ++ if (!device->dev.coherent_dma_mask) {
6450 ++ if (parent)
6451 ++ device->dev.coherent_dma_mask =
6452 ++ parent->coherent_dma_mask;
6453 ++ else
6454 ++ WARN_ON_ONCE(true);
6455 ++ }
6456 + } else {
6457 + /*
6458 + * The caller did not provide custom DMA operations. Use the
6459 + * DMA mapping operations of the parent device.
6460 + */
6461 ++ WARN_ON_ONCE(!parent);
6462 + device->dma_device = parent;
6463 + }
6464 +
6465 +diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
6466 +index abc5ab581f82..0a1e96c25ca3 100644
6467 +--- a/drivers/infiniband/core/sysfs.c
6468 ++++ b/drivers/infiniband/core/sysfs.c
6469 +@@ -1262,7 +1262,6 @@ int ib_device_register_sysfs(struct ib_device *device,
6470 + int ret;
6471 + int i;
6472 +
6473 +- WARN_ON_ONCE(!device->dev.parent);
6474 + ret = dev_set_name(class_dev, "%s", device->name);
6475 + if (ret)
6476 + return ret;
6477 +diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
6478 +index 603acaf91828..6511cb21f6e2 100644
6479 +--- a/drivers/infiniband/core/user_mad.c
6480 ++++ b/drivers/infiniband/core/user_mad.c
6481 +@@ -500,7 +500,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
6482 + }
6483 +
6484 + memset(&ah_attr, 0, sizeof ah_attr);
6485 +- ah_attr.type = rdma_ah_find_type(file->port->ib_dev,
6486 ++ ah_attr.type = rdma_ah_find_type(agent->device,
6487 + file->port->port_num);
6488 + rdma_ah_set_dlid(&ah_attr, be16_to_cpu(packet->mad.hdr.lid));
6489 + rdma_ah_set_sl(&ah_attr, packet->mad.hdr.sl);
6490 +diff --git a/drivers/infiniband/core/uverbs_std_types.c b/drivers/infiniband/core/uverbs_std_types.c
6491 +index 0a98579700ec..5f9321eda1b7 100644
6492 +--- a/drivers/infiniband/core/uverbs_std_types.c
6493 ++++ b/drivers/infiniband/core/uverbs_std_types.c
6494 +@@ -315,7 +315,7 @@ static int uverbs_create_cq_handler(struct ib_device *ib_dev,
6495 + cq->uobject = &obj->uobject;
6496 + cq->comp_handler = ib_uverbs_comp_handler;
6497 + cq->event_handler = ib_uverbs_cq_event_handler;
6498 +- cq->cq_context = &ev_file->ev_queue;
6499 ++ cq->cq_context = ev_file ? &ev_file->ev_queue : NULL;
6500 + obj->uobject.object = cq;
6501 + obj->uobject.user_handle = user_handle;
6502 + atomic_set(&cq->usecnt, 0);
6503 +diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
6504 +index c636842c5be0..8c681a36e6c7 100644
6505 +--- a/drivers/infiniband/hw/mlx4/main.c
6506 ++++ b/drivers/infiniband/hw/mlx4/main.c
6507 +@@ -2972,9 +2972,8 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
6508 + kfree(ibdev->ib_uc_qpns_bitmap);
6509 +
6510 + err_steer_qp_release:
6511 +- if (ibdev->steering_support == MLX4_STEERING_MODE_DEVICE_MANAGED)
6512 +- mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
6513 +- ibdev->steer_qpn_count);
6514 ++ mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
6515 ++ ibdev->steer_qpn_count);
6516 + err_counter:
6517 + for (i = 0; i < ibdev->num_ports; ++i)
6518 + mlx4_ib_delete_counters_table(ibdev, &ibdev->counters_table[i]);
6519 +@@ -3079,11 +3078,9 @@ static void mlx4_ib_remove(struct mlx4_dev *dev, void *ibdev_ptr)
6520 + ibdev->iboe.nb.notifier_call = NULL;
6521 + }
6522 +
6523 +- if (ibdev->steering_support == MLX4_STEERING_MODE_DEVICE_MANAGED) {
6524 +- mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
6525 +- ibdev->steer_qpn_count);
6526 +- kfree(ibdev->ib_uc_qpns_bitmap);
6527 +- }
6528 ++ mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
6529 ++ ibdev->steer_qpn_count);
6530 ++ kfree(ibdev->ib_uc_qpns_bitmap);
6531 +
6532 + iounmap(ibdev->uar_map);
6533 + for (p = 0; p < ibdev->num_ports; ++p)
6534 +diff --git a/drivers/infiniband/hw/qib/qib_rc.c b/drivers/infiniband/hw/qib/qib_rc.c
6535 +index e9a91736b12d..d80b61a71eb8 100644
6536 +--- a/drivers/infiniband/hw/qib/qib_rc.c
6537 ++++ b/drivers/infiniband/hw/qib/qib_rc.c
6538 +@@ -434,13 +434,13 @@ int qib_make_rc_req(struct rvt_qp *qp, unsigned long *flags)
6539 + qp->s_state = OP(COMPARE_SWAP);
6540 + put_ib_ateth_swap(wqe->atomic_wr.swap,
6541 + &ohdr->u.atomic_eth);
6542 +- put_ib_ateth_swap(wqe->atomic_wr.compare_add,
6543 +- &ohdr->u.atomic_eth);
6544 ++ put_ib_ateth_compare(wqe->atomic_wr.compare_add,
6545 ++ &ohdr->u.atomic_eth);
6546 + } else {
6547 + qp->s_state = OP(FETCH_ADD);
6548 + put_ib_ateth_swap(wqe->atomic_wr.compare_add,
6549 + &ohdr->u.atomic_eth);
6550 +- put_ib_ateth_swap(0, &ohdr->u.atomic_eth);
6551 ++ put_ib_ateth_compare(0, &ohdr->u.atomic_eth);
6552 + }
6553 + put_ib_ateth_vaddr(wqe->atomic_wr.remote_addr,
6554 + &ohdr->u.atomic_eth);
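
The qib_rc.c fix: both halves of the atomic-eth header were written through put_ib_ateth_swap(), so the compare value clobbered the swap value and the compare field was left stale; the second store has to go through put_ib_ateth_compare(). The bug in miniature, with an invented two-field struct standing in for the wire header:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the atomic-eth header. */
struct atomic_eth {
    uint64_t swap_dt;
    uint64_t compare_dt;
};

static void put_swap(uint64_t v, struct atomic_eth *e)    { e->swap_dt = v; }
static void put_compare(uint64_t v, struct atomic_eth *e) { e->compare_dt = v; }

int main(void)
{
    struct atomic_eth eth = { 0 };

    /* COMPARE_SWAP: each value goes to its own field. */
    put_swap(0x1111, &eth);
    put_compare(0x2222, &eth); /* the buggy code called put_swap() here */

    printf("swap=%#lx compare=%#lx\n",
           (unsigned long)eth.swap_dt, (unsigned long)eth.compare_dt);
    return 0;
}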
6555 +diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
6556 +index 77b3ed0df936..7f945f65d8cd 100644
6557 +--- a/drivers/infiniband/sw/rxe/rxe_loc.h
6558 ++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
6559 +@@ -237,7 +237,6 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
6560 +
6561 + void rxe_release(struct kref *kref);
6562 +
6563 +-void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify);
6564 + int rxe_completer(void *arg);
6565 + int rxe_requester(void *arg);
6566 + int rxe_responder(void *arg);
6567 +diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
6568 +index 00bda9380a2e..aeea994b04c4 100644
6569 +--- a/drivers/infiniband/sw/rxe/rxe_qp.c
6570 ++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
6571 +@@ -824,9 +824,9 @@ void rxe_qp_destroy(struct rxe_qp *qp)
6572 + }
6573 +
6574 + /* called when the last reference to the qp is dropped */
6575 +-void rxe_qp_cleanup(struct rxe_pool_entry *arg)
6576 ++static void rxe_qp_do_cleanup(struct work_struct *work)
6577 + {
6578 +- struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem);
6579 ++ struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
6580 +
6581 + rxe_drop_all_mcast_groups(qp);
6582 +
6583 +@@ -859,3 +859,11 @@ void rxe_qp_cleanup(struct rxe_pool_entry *arg)
6584 + kernel_sock_shutdown(qp->sk, SHUT_RDWR);
6585 + sock_release(qp->sk);
6586 + }
6587 ++
6588 ++/* called when the last reference to the qp is dropped */
6589 ++void rxe_qp_cleanup(struct rxe_pool_entry *arg)
6590 ++{
6591 ++ struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem);
6592 ++
6593 ++ execute_in_process_context(rxe_qp_do_cleanup, &qp->cleanup_work);
6594 ++}
6595 +diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
6596 +index d84222f9d5d2..44b838ec9420 100644
6597 +--- a/drivers/infiniband/sw/rxe/rxe_req.c
6598 ++++ b/drivers/infiniband/sw/rxe/rxe_req.c
6599 +@@ -594,15 +594,8 @@ int rxe_requester(void *arg)
6600 + rxe_add_ref(qp);
6601 +
6602 + next_wqe:
6603 +- if (unlikely(!qp->valid)) {
6604 +- rxe_drain_req_pkts(qp, true);
6605 ++ if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
6606 + goto exit;
6607 +- }
6608 +-
6609 +- if (unlikely(qp->req.state == QP_STATE_ERROR)) {
6610 +- rxe_drain_req_pkts(qp, true);
6611 +- goto exit;
6612 +- }
6613 +
6614 + if (unlikely(qp->req.state == QP_STATE_RESET)) {
6615 + qp->req.wqe_index = consumer_index(qp->sq.queue);
6616 +diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
6617 +index 4240866a5331..01f926fd9029 100644
6618 +--- a/drivers/infiniband/sw/rxe/rxe_resp.c
6619 ++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
6620 +@@ -1210,7 +1210,7 @@ static enum resp_states do_class_d1e_error(struct rxe_qp *qp)
6621 + }
6622 + }
6623 +
6624 +-void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify)
6625 ++static void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify)
6626 + {
6627 + struct sk_buff *skb;
6628 +
6629 +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
6630 +index 0b362f49a10a..afbf701dc9a7 100644
6631 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
6632 ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
6633 +@@ -813,6 +813,8 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, struct ib_send_wr *wr,
6634 + (queue_count(qp->sq.queue) > 1);
6635 +
6636 + rxe_run_task(&qp->req.task, must_sched);
6637 ++ if (unlikely(qp->req.state == QP_STATE_ERROR))
6638 ++ rxe_run_task(&qp->comp.task, 1);
6639 +
6640 + return err;
6641 + }
6642 +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
6643 +index 0c2dbe45c729..1019f5e7dbdd 100644
6644 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
6645 ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
6646 +@@ -35,6 +35,7 @@
6647 + #define RXE_VERBS_H
6648 +
6649 + #include <linux/interrupt.h>
6650 ++#include <linux/workqueue.h>
6651 + #include <rdma/rdma_user_rxe.h>
6652 + #include "rxe_pool.h"
6653 + #include "rxe_task.h"
6654 +@@ -281,6 +282,8 @@ struct rxe_qp {
6655 + struct timer_list rnr_nak_timer;
6656 +
6657 + spinlock_t state_lock; /* guard requester and completer */
6658 ++
6659 ++ struct execute_work cleanup_work;
6660 + };
6661 +
6662 + enum rxe_mem_state {
6663 +diff --git a/drivers/md/dm.c b/drivers/md/dm.c
6664 +index 804419635cc7..1dfc855ac708 100644
6665 +--- a/drivers/md/dm.c
6666 ++++ b/drivers/md/dm.c
6667 +@@ -815,7 +815,8 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
6668 + queue_io(md, bio);
6669 + } else {
6670 + /* done with normal IO or empty flush */
6671 +- bio->bi_status = io_error;
6672 ++ if (io_error)
6673 ++ bio->bi_status = io_error;
6674 + bio_endio(bio);
6675 + }
6676 + }
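
The dm.c guard keeps a completion that reports success (io_error == 0, i.e. BLK_STS_OK) from overwriting an error another clone already recorded in bio->bi_status: only a non-zero status may be propagated. Distilled to a few lines:

#include <stdio.h>

typedef unsigned char blk_status_t; /* 0 == BLK_STS_OK */

struct bio { blk_status_t bi_status; };

static void end_io(struct bio *bio, blk_status_t io_error)
{
    if (io_error)            /* the fix: never clobber with "OK" */
        bio->bi_status = io_error;
    /* bio_endio(bio); */
}

int main(void)
{
    struct bio bio = { .bi_status = 10 /* an earlier error */ };

    end_io(&bio, 0);         /* this clone completed cleanly */
    printf("bi_status=%u\n", bio.bi_status); /* 10: error preserved */
    return 0;
}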
6677 +diff --git a/drivers/media/tuners/r820t.c b/drivers/media/tuners/r820t.c
6678 +index ba80376a3b86..d097eb04a0e9 100644
6679 +--- a/drivers/media/tuners/r820t.c
6680 ++++ b/drivers/media/tuners/r820t.c
6681 +@@ -396,9 +396,11 @@ static int r820t_write(struct r820t_priv *priv, u8 reg, const u8 *val,
6682 + return 0;
6683 + }
6684 +
6685 +-static int r820t_write_reg(struct r820t_priv *priv, u8 reg, u8 val)
6686 ++static inline int r820t_write_reg(struct r820t_priv *priv, u8 reg, u8 val)
6687 + {
6688 +- return r820t_write(priv, reg, &val, 1);
6689 ++ u8 tmp = val; /* work around GCC PR81715 with asan-stack=1 */
6690 ++
6691 ++ return r820t_write(priv, reg, &tmp, 1);
6692 + }
6693 +
6694 + static int r820t_read_cache_reg(struct r820t_priv *priv, int reg)
6695 +@@ -411,17 +413,18 @@ static int r820t_read_cache_reg(struct r820t_priv *priv, int reg)
6696 + return -EINVAL;
6697 + }
6698 +
6699 +-static int r820t_write_reg_mask(struct r820t_priv *priv, u8 reg, u8 val,
6700 ++static inline int r820t_write_reg_mask(struct r820t_priv *priv, u8 reg, u8 val,
6701 + u8 bit_mask)
6702 + {
6703 ++ u8 tmp = val;
6704 + int rc = r820t_read_cache_reg(priv, reg);
6705 +
6706 + if (rc < 0)
6707 + return rc;
6708 +
6709 +- val = (rc & ~bit_mask) | (val & bit_mask);
6710 ++ tmp = (rc & ~bit_mask) | (tmp & bit_mask);
6711 +
6712 +- return r820t_write(priv, reg, &val, 1);
6713 ++ return r820t_write(priv, reg, &tmp, 1);
6714 + }
6715 +
6716 + static int r820t_read(struct r820t_priv *priv, u8 reg, u8 *val, int len)
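
The r820t.c hunk is a build workaround rather than a logic change: with asan-stack=1, GCC bug PR81715 mishandles taking the address of a by-value u8 parameter, so the driver copies the parameter into a local and passes the local's address instead. The pattern in isolation, with invented helper names:

#include <stdint.h>
#include <stdio.h>

static int write_regs(uint8_t reg, const uint8_t *val, int len)
{
    printf("reg %#x <- %#x (%d byte)\n", reg, *val, len);
    return 0;
}

/* Take the address of a local copy, not of the parameter itself,
 * to sidestep GCC PR81715 under asan-stack=1. */
static inline int write_reg(uint8_t reg, uint8_t val)
{
    uint8_t tmp = val;

    return write_regs(reg, &tmp, 1);
}

int main(void)
{
    return write_reg(0x05, 0xa0);
}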
6717 +diff --git a/drivers/misc/c2port/core.c b/drivers/misc/c2port/core.c
6718 +index 1922cb8f6b88..1c5b7aec13d4 100644
6719 +--- a/drivers/misc/c2port/core.c
6720 ++++ b/drivers/misc/c2port/core.c
6721 +@@ -15,7 +15,6 @@
6722 + #include <linux/errno.h>
6723 + #include <linux/err.h>
6724 + #include <linux/kernel.h>
6725 +-#include <linux/kmemcheck.h>
6726 + #include <linux/ctype.h>
6727 + #include <linux/delay.h>
6728 + #include <linux/idr.h>
6729 +@@ -904,7 +903,6 @@ struct c2port_device *c2port_device_register(char *name,
6730 + return ERR_PTR(-EINVAL);
6731 +
6732 + c2dev = kmalloc(sizeof(struct c2port_device), GFP_KERNEL);
6733 +- kmemcheck_annotate_bitfield(c2dev, flags);
6734 + if (unlikely(!c2dev))
6735 + return ERR_PTR(-ENOMEM);
6736 +
6737 +diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c
6738 +index 229dc18f0581..768972af8b85 100644
6739 +--- a/drivers/mmc/host/bcm2835.c
6740 ++++ b/drivers/mmc/host/bcm2835.c
6741 +@@ -1265,7 +1265,8 @@ static int bcm2835_add_host(struct bcm2835_host *host)
6742 + char pio_limit_string[20];
6743 + int ret;
6744 +
6745 +- mmc->f_max = host->max_clk;
6746 ++ if (!mmc->f_max || mmc->f_max > host->max_clk)
6747 ++ mmc->f_max = host->max_clk;
6748 + mmc->f_min = host->max_clk / SDCDIV_MAX_CDIV;
6749 +
6750 + mmc->max_busy_timeout = ~0 / (mmc->f_max / 1000);
6751 +diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
6752 +index 85745ef179e2..08a55c2e96e1 100644
6753 +--- a/drivers/mmc/host/meson-gx-mmc.c
6754 ++++ b/drivers/mmc/host/meson-gx-mmc.c
6755 +@@ -716,22 +716,6 @@ static int meson_mmc_clk_phase_tuning(struct mmc_host *mmc, u32 opcode,
6756 + static int meson_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
6757 + {
6758 + struct meson_host *host = mmc_priv(mmc);
6759 +- int ret;
6760 +-
6761 +- /*
6762 +- * If this is the initial tuning, try to get a sane Rx starting
6763 +- * phase before doing the actual tuning.
6764 +- */
6765 +- if (!mmc->doing_retune) {
6766 +- ret = meson_mmc_clk_phase_tuning(mmc, opcode, host->rx_clk);
6767 +-
6768 +- if (ret)
6769 +- return ret;
6770 +- }
6771 +-
6772 +- ret = meson_mmc_clk_phase_tuning(mmc, opcode, host->tx_clk);
6773 +- if (ret)
6774 +- return ret;
6775 +
6776 + return meson_mmc_clk_phase_tuning(mmc, opcode, host->rx_clk);
6777 + }
6778 +@@ -762,9 +746,8 @@ static void meson_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
6779 + if (!IS_ERR(mmc->supply.vmmc))
6780 + mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd);
6781 +
6782 +- /* Reset phases */
6783 ++ /* Reset rx phase */
6784 + clk_set_phase(host->rx_clk, 0);
6785 +- clk_set_phase(host->tx_clk, 270);
6786 +
6787 + break;
6788 +
6789 +diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
6790 +index d96a057a7db8..4ffa6b173a21 100644
6791 +--- a/drivers/mmc/host/sdhci-of-esdhc.c
6792 ++++ b/drivers/mmc/host/sdhci-of-esdhc.c
6793 +@@ -458,6 +458,33 @@ static unsigned int esdhc_of_get_min_clock(struct sdhci_host *host)
6794 + return clock / 256 / 16;
6795 + }
6796 +
6797 ++static void esdhc_clock_enable(struct sdhci_host *host, bool enable)
6798 ++{
6799 ++ u32 val;
6800 ++ ktime_t timeout;
6801 ++
6802 ++ val = sdhci_readl(host, ESDHC_SYSTEM_CONTROL);
6803 ++
6804 ++ if (enable)
6805 ++ val |= ESDHC_CLOCK_SDCLKEN;
6806 ++ else
6807 ++ val &= ~ESDHC_CLOCK_SDCLKEN;
6808 ++
6809 ++ sdhci_writel(host, val, ESDHC_SYSTEM_CONTROL);
6810 ++
6811 ++ /* Wait max 20 ms */
6812 ++ timeout = ktime_add_ms(ktime_get(), 20);
6813 ++ val = ESDHC_CLOCK_STABLE;
6814 ++ while (!(sdhci_readl(host, ESDHC_PRSSTAT) & val)) {
6815 ++ if (ktime_after(ktime_get(), timeout)) {
6816 ++ pr_err("%s: Internal clock never stabilised.\n",
6817 ++ mmc_hostname(host->mmc));
6818 ++ break;
6819 ++ }
6820 ++ udelay(10);
6821 ++ }
6822 ++}
6823 ++
6824 + static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock)
6825 + {
6826 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
6827 +@@ -469,8 +496,10 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock)
6828 +
6829 + host->mmc->actual_clock = 0;
6830 +
6831 +- if (clock == 0)
6832 ++ if (clock == 0) {
6833 ++ esdhc_clock_enable(host, false);
6834 + return;
6835 ++ }
6836 +
6837 + /* Workaround to start pre_div at 2 for VNN < VENDOR_V_23 */
6838 + if (esdhc->vendor_ver < VENDOR_V_23)
6839 +@@ -558,39 +587,20 @@ static void esdhc_pltfm_set_bus_width(struct sdhci_host *host, int width)
6840 + sdhci_writel(host, ctrl, ESDHC_PROCTL);
6841 + }
6842 +
6843 +-static void esdhc_clock_enable(struct sdhci_host *host, bool enable)
6844 ++static void esdhc_reset(struct sdhci_host *host, u8 mask)
6845 + {
6846 + u32 val;
6847 +- ktime_t timeout;
6848 +-
6849 +- val = sdhci_readl(host, ESDHC_SYSTEM_CONTROL);
6850 +
6851 +- if (enable)
6852 +- val |= ESDHC_CLOCK_SDCLKEN;
6853 +- else
6854 +- val &= ~ESDHC_CLOCK_SDCLKEN;
6855 +-
6856 +- sdhci_writel(host, val, ESDHC_SYSTEM_CONTROL);
6857 +-
6858 +- /* Wait max 20 ms */
6859 +- timeout = ktime_add_ms(ktime_get(), 20);
6860 +- val = ESDHC_CLOCK_STABLE;
6861 +- while (!(sdhci_readl(host, ESDHC_PRSSTAT) & val)) {
6862 +- if (ktime_after(ktime_get(), timeout)) {
6863 +- pr_err("%s: Internal clock never stabilised.\n",
6864 +- mmc_hostname(host->mmc));
6865 +- break;
6866 +- }
6867 +- udelay(10);
6868 +- }
6869 +-}
6870 +-
6871 +-static void esdhc_reset(struct sdhci_host *host, u8 mask)
6872 +-{
6873 + sdhci_reset(host, mask);
6874 +
6875 + sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
6876 + sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
6877 ++
6878 ++ if (mask & SDHCI_RESET_ALL) {
6879 ++ val = sdhci_readl(host, ESDHC_TBCTL);
6880 ++ val &= ~ESDHC_TB_EN;
6881 ++ sdhci_writel(host, val, ESDHC_TBCTL);
6882 ++ }
6883 + }
6884 +
6885 + /* The SCFG, Supplemental Configuration Unit, provides SoC specific
6886 +diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
6887 +index 6152e83ff935..90cc1977b792 100644
6888 +--- a/drivers/mmc/host/sdhci.c
6889 ++++ b/drivers/mmc/host/sdhci.c
6890 +@@ -21,6 +21,7 @@
6891 + #include <linux/dma-mapping.h>
6892 + #include <linux/slab.h>
6893 + #include <linux/scatterlist.h>
6894 ++#include <linux/sizes.h>
6895 + #include <linux/swiotlb.h>
6896 + #include <linux/regulator/consumer.h>
6897 + #include <linux/pm_runtime.h>
6898 +@@ -502,8 +503,35 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host,
6899 + if (data->host_cookie == COOKIE_PRE_MAPPED)
6900 + return data->sg_count;
6901 +
6902 +- sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
6903 +- mmc_get_dma_dir(data));
6904 ++ /* Bounce write requests to the bounce buffer */
6905 ++ if (host->bounce_buffer) {
6906 ++ unsigned int length = data->blksz * data->blocks;
6907 ++
6908 ++ if (length > host->bounce_buffer_size) {
6909 ++ pr_err("%s: asked for transfer of %u bytes exceeds bounce buffer %u bytes\n",
6910 ++ mmc_hostname(host->mmc), length,
6911 ++ host->bounce_buffer_size);
6912 ++ return -EIO;
6913 ++ }
6914 ++ if (mmc_get_dma_dir(data) == DMA_TO_DEVICE) {
6915 ++ /* Copy the data to the bounce buffer */
6916 ++ sg_copy_to_buffer(data->sg, data->sg_len,
6917 ++ host->bounce_buffer,
6918 ++ length);
6919 ++ }
6920 ++ /* Switch ownership to the DMA */
6921 ++ dma_sync_single_for_device(host->mmc->parent,
6922 ++ host->bounce_addr,
6923 ++ host->bounce_buffer_size,
6924 ++ mmc_get_dma_dir(data));
6925 ++ /* Just a dummy value */
6926 ++ sg_count = 1;
6927 ++ } else {
6928 ++ /* Just access the data directly from memory */
6929 ++ sg_count = dma_map_sg(mmc_dev(host->mmc),
6930 ++ data->sg, data->sg_len,
6931 ++ mmc_get_dma_dir(data));
6932 ++ }
6933 +
6934 + if (sg_count == 0)
6935 + return -ENOSPC;
6936 +@@ -673,6 +701,14 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
6937 + }
6938 + }
6939 +
6940 ++static u32 sdhci_sdma_address(struct sdhci_host *host)
6941 ++{
6942 ++ if (host->bounce_buffer)
6943 ++ return host->bounce_addr;
6944 ++ else
6945 ++ return sg_dma_address(host->data->sg);
6946 ++}
6947 ++
6948 + static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd)
6949 + {
6950 + u8 count;
6951 +@@ -858,8 +894,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
6952 + SDHCI_ADMA_ADDRESS_HI);
6953 + } else {
6954 + WARN_ON(sg_cnt != 1);
6955 +- sdhci_writel(host, sg_dma_address(data->sg),
6956 +- SDHCI_DMA_ADDRESS);
6957 ++ sdhci_writel(host, sdhci_sdma_address(host),
6958 ++ SDHCI_DMA_ADDRESS);
6959 + }
6960 + }
6961 +
6962 +@@ -2248,7 +2284,12 @@ static void sdhci_pre_req(struct mmc_host *mmc, struct mmc_request *mrq)
6963 +
6964 + mrq->data->host_cookie = COOKIE_UNMAPPED;
6965 +
6966 +- if (host->flags & SDHCI_REQ_USE_DMA)
6967 ++ /*
6968 ++ * No pre-mapping in the pre hook if we're using the bounce buffer,
6969 ++ * for that we would need two bounce buffers since one buffer is
6970 ++ * in flight when this is getting called.
6971 ++ */
6972 ++ if (host->flags & SDHCI_REQ_USE_DMA && !host->bounce_buffer)
6973 + sdhci_pre_dma_transfer(host, mrq->data, COOKIE_PRE_MAPPED);
6974 + }
6975 +
6976 +@@ -2352,8 +2393,45 @@ static bool sdhci_request_done(struct sdhci_host *host)
6977 + struct mmc_data *data = mrq->data;
6978 +
6979 + if (data && data->host_cookie == COOKIE_MAPPED) {
6980 +- dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
6981 +- mmc_get_dma_dir(data));
6982 ++ if (host->bounce_buffer) {
6983 ++ /*
6984 ++ * On reads, copy the bounced data into the
6985 ++ * sglist
6986 ++ */
6987 ++ if (mmc_get_dma_dir(data) == DMA_FROM_DEVICE) {
6988 ++ unsigned int length = data->bytes_xfered;
6989 ++
6990 ++ if (length > host->bounce_buffer_size) {
6991 ++ pr_err("%s: bounce buffer is %u bytes but DMA claims to have transferred %u bytes\n",
6992 ++ mmc_hostname(host->mmc),
6993 ++ host->bounce_buffer_size,
6994 ++ data->bytes_xfered);
6995 ++ /* Cap it down and continue */
6996 ++ length = host->bounce_buffer_size;
6997 ++ }
6998 ++ dma_sync_single_for_cpu(
6999 ++ host->mmc->parent,
7000 ++ host->bounce_addr,
7001 ++ host->bounce_buffer_size,
7002 ++ DMA_FROM_DEVICE);
7003 ++ sg_copy_from_buffer(data->sg,
7004 ++ data->sg_len,
7005 ++ host->bounce_buffer,
7006 ++ length);
7007 ++ } else {
7008 ++ /* No copying, just switch ownership */
7009 ++ dma_sync_single_for_cpu(
7010 ++ host->mmc->parent,
7011 ++ host->bounce_addr,
7012 ++ host->bounce_buffer_size,
7013 ++ mmc_get_dma_dir(data));
7014 ++ }
7015 ++ } else {
7016 ++ /* Unmap the raw data */
7017 ++ dma_unmap_sg(mmc_dev(host->mmc), data->sg,
7018 ++ data->sg_len,
7019 ++ mmc_get_dma_dir(data));
7020 ++ }
7021 + data->host_cookie = COOKIE_UNMAPPED;
7022 + }
7023 + }
7024 +@@ -2636,7 +2714,8 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
7025 + */
7026 + if (intmask & SDHCI_INT_DMA_END) {
7027 + u32 dmastart, dmanow;
7028 +- dmastart = sg_dma_address(host->data->sg);
7029 ++
7030 ++ dmastart = sdhci_sdma_address(host);
7031 + dmanow = dmastart + host->data->bytes_xfered;
7032 + /*
7033 + * Force update to the next DMA block boundary.
7034 +@@ -3217,6 +3296,68 @@ void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps, u32 *caps1)
7035 + }
7036 + EXPORT_SYMBOL_GPL(__sdhci_read_caps);
7037 +
7038 ++static int sdhci_allocate_bounce_buffer(struct sdhci_host *host)
7039 ++{
7040 ++ struct mmc_host *mmc = host->mmc;
7041 ++ unsigned int max_blocks;
7042 ++ unsigned int bounce_size;
7043 ++ int ret;
7044 ++
7045 ++ /*
7046 ++ * Cap the bounce buffer at 64KB. Using a bigger bounce buffer
7047 ++ * has diminishing returns, this is probably because SD/MMC
7048 ++ * cards are usually optimized to handle this size of requests.
7049 ++ */
7050 ++ bounce_size = SZ_64K;
7051 ++ /*
7052 ++ * Adjust downwards to maximum request size if this is less
7053 ++ * than our segment size, else hammer down the maximum
7054 ++ * request size to the maximum buffer size.
7055 ++ */
7056 ++ if (mmc->max_req_size < bounce_size)
7057 ++ bounce_size = mmc->max_req_size;
7058 ++ max_blocks = bounce_size / 512;
7059 ++
7060 ++ /*
7061 ++ * When we just support one segment, we can get significant
7062 ++ * speedups by the help of a bounce buffer to group scattered
7063 ++ * reads/writes together.
7064 ++ */
7065 ++ host->bounce_buffer = devm_kmalloc(mmc->parent,
7066 ++ bounce_size,
7067 ++ GFP_KERNEL);
7068 ++ if (!host->bounce_buffer) {
7069 ++ pr_err("%s: failed to allocate %u bytes for bounce buffer, falling back to single segments\n",
7070 ++ mmc_hostname(mmc),
7071 ++ bounce_size);
7072 ++ /*
7073 ++ * Exiting with zero here makes sure we proceed with
7074 ++ * mmc->max_segs == 1.
7075 ++ */
7076 ++ return 0;
7077 ++ }
7078 ++
7079 ++ host->bounce_addr = dma_map_single(mmc->parent,
7080 ++ host->bounce_buffer,
7081 ++ bounce_size,
7082 ++ DMA_BIDIRECTIONAL);
7083 ++ ret = dma_mapping_error(mmc->parent, host->bounce_addr);
7084 ++ if (ret)
7085 ++ /* Again fall back to max_segs == 1 */
7086 ++ return 0;
7087 ++ host->bounce_buffer_size = bounce_size;
7088 ++
7089 ++ /* Lie about this since we're bouncing */
7090 ++ mmc->max_segs = max_blocks;
7091 ++ mmc->max_seg_size = bounce_size;
7092 ++ mmc->max_req_size = bounce_size;
7093 ++
7094 ++ pr_info("%s bounce up to %u segments into one, max segment size %u bytes\n",
7095 ++ mmc_hostname(mmc), max_blocks, bounce_size);
7096 ++
7097 ++ return 0;
7098 ++}
7099 ++
7100 + int sdhci_setup_host(struct sdhci_host *host)
7101 + {
7102 + struct mmc_host *mmc;
7103 +@@ -3713,6 +3854,13 @@ int sdhci_setup_host(struct sdhci_host *host)
7104 + */
7105 + mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;
7106 +
7107 ++ if (mmc->max_segs == 1) {
7108 ++ /* This may alter mmc->*_blk_* parameters */
7109 ++ ret = sdhci_allocate_bounce_buffer(host);
7110 ++ if (ret)
7111 ++ return ret;
7112 ++ }
7113 ++
7114 + return 0;
7115 +
7116 + unreg:
7117 +diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
7118 +index 54bc444c317f..1d7d61e25dbf 100644
7119 +--- a/drivers/mmc/host/sdhci.h
7120 ++++ b/drivers/mmc/host/sdhci.h
7121 +@@ -440,6 +440,9 @@ struct sdhci_host {
7122 +
7123 + int irq; /* Device IRQ */
7124 + void __iomem *ioaddr; /* Mapped address */
7125 ++ char *bounce_buffer; /* For packing SDMA reads/writes */
7126 ++ dma_addr_t bounce_addr;
7127 ++ unsigned int bounce_buffer_size;
7128 +
7129 + const struct sdhci_ops *ops; /* Low level hw interface */
7130 +
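
The sdhci.c/sdhci.h changes above are the largest feature in this release: hosts limited to max_segs == 1 get a single bounce buffer, capped at 64 KiB or max_req_size (whichever is smaller) and mapped bidirectionally once at setup; writes are packed into it before the DMA starts and reads are copied back out on completion, so the mmc core can again advertise many segments. A userspace analog of the two data paths — memcpy stands in for sg_copy_{to,from}_buffer() and for the dma_sync_single_*() ownership handoffs:

#include <stdio.h>
#include <string.h>

#define BOUNCE_SIZE 64

static char bounce[BOUNCE_SIZE]; /* mapped once, DMA_BIDIRECTIONAL */

/* Write path: pack scattered chunks into the bounce buffer so the
 * controller DMAs from one contiguous address. */
static int bounce_write(const char **sg, const size_t *lens, int n)
{
    size_t off = 0;

    for (int i = 0; i < n; i++) {
        if (off + lens[i] > BOUNCE_SIZE)
            return -1; /* request exceeds the bounce buffer */
        memcpy(bounce + off, sg[i], lens[i]);
        off += lens[i];
    }
    /* dma_sync_single_for_device(...); kick DMA at bounce_addr */
    return 0;
}

/* Read path: after the DMA completes, unpack back into the sglist. */
static void bounce_read(char *dst, size_t len)
{
    /* dma_sync_single_for_cpu(...); */
    memcpy(dst, bounce, len < BOUNCE_SIZE ? len : BOUNCE_SIZE);
}

int main(void)
{
    const char *sg[] = { "hello ", "bounce" };
    const size_t lens[] = { 6, 7 };
    char out[16] = { 0 };

    bounce_write(sg, lens, 2);
    bounce_read(out, 13);
    printf("%s\n", out);
    return 0;
}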
7131 +diff --git a/drivers/mtd/nand/vf610_nfc.c b/drivers/mtd/nand/vf610_nfc.c
7132 +index 8037d4b48a05..e2583a539b41 100644
7133 +--- a/drivers/mtd/nand/vf610_nfc.c
7134 ++++ b/drivers/mtd/nand/vf610_nfc.c
7135 +@@ -752,10 +752,8 @@ static int vf610_nfc_probe(struct platform_device *pdev)
7136 + if (mtd->oobsize > 64)
7137 + mtd->oobsize = 64;
7138 +
7139 +- /*
7140 +- * mtd->ecclayout is not specified here because we're using the
7141 +- * default large page ECC layout defined in NAND core.
7142 +- */
7143 ++ /* Use default large page ECC layout defined in NAND core */
7144 ++ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
7145 + if (chip->ecc.strength == 32) {
7146 + nfc->ecc_mode = ECC_60_BYTE;
7147 + chip->ecc.bytes = 60;
7148 +diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
7149 +index 1dd3a1264a53..06f3fe429d82 100644
7150 +--- a/drivers/net/ethernet/marvell/mvpp2.c
7151 ++++ b/drivers/net/ethernet/marvell/mvpp2.c
7152 +@@ -6888,6 +6888,7 @@ static void mvpp2_set_rx_mode(struct net_device *dev)
7153 + int id = port->id;
7154 + bool allmulti = dev->flags & IFF_ALLMULTI;
7155 +
7156 ++retry:
7157 + mvpp2_prs_mac_promisc_set(priv, id, dev->flags & IFF_PROMISC);
7158 + mvpp2_prs_mac_multi_set(priv, id, MVPP2_PE_MAC_MC_ALL, allmulti);
7159 + mvpp2_prs_mac_multi_set(priv, id, MVPP2_PE_MAC_MC_IP6, allmulti);
7160 +@@ -6895,9 +6896,13 @@ static void mvpp2_set_rx_mode(struct net_device *dev)
7161 + /* Remove all port->id's mcast entries */
7162 + mvpp2_prs_mcast_del_all(priv, id);
7163 +
7164 +- if (allmulti && !netdev_mc_empty(dev)) {
7165 +- netdev_for_each_mc_addr(ha, dev)
7166 +- mvpp2_prs_mac_da_accept(priv, id, ha->addr, true);
7167 ++ if (!allmulti) {
7168 ++ netdev_for_each_mc_addr(ha, dev) {
7169 ++ if (mvpp2_prs_mac_da_accept(priv, id, ha->addr, true)) {
7170 ++ allmulti = true;
7171 ++ goto retry;
7172 ++ }
7173 ++ }
7174 + }
7175 + }
7176 +
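
The mvpp2.c rework inverts the old (and ineffective) condition: each multicast address is now programmed individually, and only when the parser runs out of filter entries does the driver flip to accept-all-multicast and reprogram everything via the retry label. The fallback skeleton, shrunk to a toy filter table:

#include <stdbool.h>
#include <stdio.h>

#define MAX_MC_FILTERS 2 /* tiny on purpose, to force the fallback */

static int filters_used;

static int accept_mc_addr(int addr)
{
    if (filters_used >= MAX_MC_FILTERS)
        return -1; /* parser table full */
    filters_used++;
    printf("filter for mc addr %d\n", addr);
    return 0;
}

static void set_rx_mode(const int *mc, int n, bool allmulti)
{
retry:
    printf("multi-set: allmulti=%d\n", allmulti);
    filters_used = 0; /* "remove all mcast entries" */

    if (!allmulti) {
        for (int i = 0; i < n; i++) {
            if (accept_mc_addr(mc[i])) {
                allmulti = true; /* out of room: accept everything */
                goto retry;
            }
        }
    }
}

int main(void)
{
    int mc[] = { 1, 2, 3 };

    set_rx_mode(mc, 3, false);
    return 0;
}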
7177 +diff --git a/drivers/net/ethernet/mellanox/mlx4/qp.c b/drivers/net/ethernet/mellanox/mlx4/qp.c
7178 +index 728a2fb1f5c0..22a3bfe1ed8f 100644
7179 +--- a/drivers/net/ethernet/mellanox/mlx4/qp.c
7180 ++++ b/drivers/net/ethernet/mellanox/mlx4/qp.c
7181 +@@ -287,6 +287,9 @@ void mlx4_qp_release_range(struct mlx4_dev *dev, int base_qpn, int cnt)
7182 + u64 in_param = 0;
7183 + int err;
7184 +
7185 ++ if (!cnt)
7186 ++ return;
7187 ++
7188 + if (mlx4_is_mfunc(dev)) {
7189 + set_param_l(&in_param, base_qpn);
7190 + set_param_h(&in_param, cnt);
7191 +diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
7192 +index cd314946452c..9511f5fe62f4 100644
7193 +--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
7194 ++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
7195 +@@ -2781,7 +2781,10 @@ static void mwifiex_pcie_card_reset_work(struct mwifiex_adapter *adapter)
7196 + {
7197 + struct pcie_service_card *card = adapter->card;
7198 +
7199 +- pci_reset_function(card->dev);
7200 ++ /* We can't afford to wait here; remove() might be waiting on us. If we
7201 ++ * can't grab the device lock, maybe we'll get another chance later.
7202 ++ */
7203 ++ pci_try_reset_function(card->dev);
7204 + }
7205 +
7206 + static void mwifiex_pcie_work(struct work_struct *work)
7207 +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
7208 +index 9ac1511de7ba..b82e5b363c05 100644
7209 +--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
7210 ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
7211 +@@ -1122,7 +1122,7 @@ static u8 _rtl8821ae_dbi_read(struct rtl_priv *rtlpriv, u16 addr)
7212 + }
7213 + if (0 == tmp) {
7214 + read_addr = REG_DBI_RDATA + addr % 4;
7215 +- ret = rtl_read_word(rtlpriv, read_addr);
7216 ++ ret = rtl_read_byte(rtlpriv, read_addr);
7217 + }
7218 + return ret;
7219 + }
7220 +@@ -1164,7 +1164,8 @@ static void _rtl8821ae_enable_aspm_back_door(struct ieee80211_hw *hw)
7221 + }
7222 +
7223 + tmp = _rtl8821ae_dbi_read(rtlpriv, 0x70f);
7224 +- _rtl8821ae_dbi_write(rtlpriv, 0x70f, tmp | BIT(7));
7225 ++ _rtl8821ae_dbi_write(rtlpriv, 0x70f, tmp | BIT(7) |
7226 ++ ASPM_L1_LATENCY << 3);
7227 +
7228 + tmp = _rtl8821ae_dbi_read(rtlpriv, 0x719);
7229 + _rtl8821ae_dbi_write(rtlpriv, 0x719, tmp | BIT(3) | BIT(4));
7230 +diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
7231 +index 1ab1024330fb..25c4e3e55921 100644
7232 +--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
7233 ++++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
7234 +@@ -99,6 +99,7 @@
7235 + #define RTL_USB_MAX_RX_COUNT 100
7236 + #define QBSS_LOAD_SIZE 5
7237 + #define MAX_WMMELE_LENGTH 64
7238 ++#define ASPM_L1_LATENCY 7
7239 +
7240 + #define TOTAL_CAM_ENTRY 32
7241 +
7242 +diff --git a/drivers/pci/dwc/pci-keystone.c b/drivers/pci/dwc/pci-keystone.c
7243 +index 5bee3af47588..39405598b22d 100644
7244 +--- a/drivers/pci/dwc/pci-keystone.c
7245 ++++ b/drivers/pci/dwc/pci-keystone.c
7246 +@@ -178,7 +178,7 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
7247 + }
7248 +
7249 + /* interrupt controller is in a child node */
7250 +- *np_temp = of_find_node_by_name(np_pcie, controller);
7251 ++ *np_temp = of_get_child_by_name(np_pcie, controller);
7252 + if (!(*np_temp)) {
7253 + dev_err(dev, "Node for %s is absent\n", controller);
7254 + return -EINVAL;
7255 +@@ -187,6 +187,7 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
7256 + temp = of_irq_count(*np_temp);
7257 + if (!temp) {
7258 + dev_err(dev, "No IRQ entries in %s\n", controller);
7259 ++ of_node_put(*np_temp);
7260 + return -EINVAL;
7261 + }
7262 +
7263 +@@ -204,6 +205,8 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
7264 + break;
7265 + }
7266 +
7267 ++ of_node_put(*np_temp);
7268 ++
7269 + if (temp) {
7270 + *num_irqs = temp;
7271 + return 0;
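
Two fixes hide in the pci-keystone.c hunk: of_find_node_by_name() searches the whole tree starting from the given node rather than just its children, so a same-named node elsewhere could match, while of_get_child_by_name() looks at direct children only; and since the returned child carries an elevated refcount, every exit path now pairs the lookup with of_node_put(). A sketch of that pattern as it would appear in a driver — it only compiles inside a kernel tree, and count_child_irqs() is an invented function:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_irq.h>

static int count_child_irqs(struct device_node *parent, const char *name)
{
    struct device_node *child;
    int n;

    child = of_get_child_by_name(parent, name); /* direct children only */
    if (!child)
        return -EINVAL;

    n = of_irq_count(child);
    of_node_put(child); /* drop the reference taken by the lookup */

    return n ? n : -EINVAL;
}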
7272 +diff --git a/drivers/pci/host/pcie-iproc-platform.c b/drivers/pci/host/pcie-iproc-platform.c
7273 +index a5073a921a04..32228d41f746 100644
7274 +--- a/drivers/pci/host/pcie-iproc-platform.c
7275 ++++ b/drivers/pci/host/pcie-iproc-platform.c
7276 +@@ -92,6 +92,13 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
7277 + pcie->need_ob_cfg = true;
7278 + }
7279 +
7280 ++ /*
7281 ++ * DT nodes are not used by all platforms that use the iProc PCIe
7282 ++ * core driver. For platforms that require explicit inbound mapping
7283 ++ * configuration, "dma-ranges" would have been present in DT
7284 ++ */
7285 ++ pcie->need_ib_cfg = of_property_read_bool(np, "dma-ranges");
7286 ++
7287 + /* PHY use is optional */
7288 + pcie->phy = devm_phy_get(dev, "pcie-phy");
7289 + if (IS_ERR(pcie->phy)) {
7290 +diff --git a/drivers/pci/host/pcie-iproc.c b/drivers/pci/host/pcie-iproc.c
7291 +index 3a8b9d20ee57..c0ecc9f35667 100644
7292 +--- a/drivers/pci/host/pcie-iproc.c
7293 ++++ b/drivers/pci/host/pcie-iproc.c
7294 +@@ -1396,9 +1396,11 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res)
7295 + }
7296 + }
7297 +
7298 +- ret = iproc_pcie_map_dma_ranges(pcie);
7299 +- if (ret && ret != -ENOENT)
7300 +- goto err_power_off_phy;
7301 ++ if (pcie->need_ib_cfg) {
7302 ++ ret = iproc_pcie_map_dma_ranges(pcie);
7303 ++ if (ret && ret != -ENOENT)
7304 ++ goto err_power_off_phy;
7305 ++ }
7306 +
7307 + #ifdef CONFIG_ARM
7308 + pcie->sysdata.private_data = pcie;
7309 +diff --git a/drivers/pci/host/pcie-iproc.h b/drivers/pci/host/pcie-iproc.h
7310 +index a6b55cec9a66..4ac6282f2bfd 100644
7311 +--- a/drivers/pci/host/pcie-iproc.h
7312 ++++ b/drivers/pci/host/pcie-iproc.h
7313 +@@ -74,6 +74,7 @@ struct iproc_msi;
7314 + * @ob: outbound mapping related parameters
7315 + * @ob_map: outbound mapping related parameters specific to the controller
7316 + *
7317 ++ * @need_ib_cfg: indicates SW needs to configure the inbound mapping window
7318 + * @ib: inbound mapping related parameters
7319 + * @ib_map: outbound mapping region related parameters
7320 + *
7321 +@@ -101,6 +102,7 @@ struct iproc_pcie {
7322 + struct iproc_pcie_ob ob;
7323 + const struct iproc_pcie_ob_map *ob_map;
7324 +
7325 ++ bool need_ib_cfg;
7326 + struct iproc_pcie_ib ib;
7327 + const struct iproc_pcie_ib_map *ib_map;
7328 +
7329 +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
7330 +index f66f9375177c..4c3feb96f391 100644
7331 +--- a/drivers/pci/quirks.c
7332 ++++ b/drivers/pci/quirks.c
7333 +@@ -1636,8 +1636,8 @@ static void quirk_pcie_mch(struct pci_dev *pdev)
7334 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch);
7335 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch);
7336 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch);
7337 +-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, quirk_pcie_mch);
7338 +
7339 ++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, PCI_CLASS_BRIDGE_PCI, 8, quirk_pcie_mch);
7340 +
7341 + /*
7342 + * It's possible for the MSI to get corrupted if shpc and acpi
7343 +diff --git a/drivers/platform/x86/apple-gmux.c b/drivers/platform/x86/apple-gmux.c
7344 +index 623d322447a2..7c4eb86c851e 100644
7345 +--- a/drivers/platform/x86/apple-gmux.c
7346 ++++ b/drivers/platform/x86/apple-gmux.c
7347 +@@ -24,7 +24,6 @@
7348 + #include <linux/delay.h>
7349 + #include <linux/pci.h>
7350 + #include <linux/vga_switcheroo.h>
7351 +-#include <linux/vgaarb.h>
7352 + #include <acpi/video.h>
7353 + #include <asm/io.h>
7354 +
7355 +@@ -54,7 +53,6 @@ struct apple_gmux_data {
7356 + bool indexed;
7357 + struct mutex index_lock;
7358 +
7359 +- struct pci_dev *pdev;
7360 + struct backlight_device *bdev;
7361 +
7362 + /* switcheroo data */
7363 +@@ -599,23 +597,6 @@ static int gmux_resume(struct device *dev)
7364 + return 0;
7365 + }
7366 +
7367 +-static struct pci_dev *gmux_get_io_pdev(void)
7368 +-{
7369 +- struct pci_dev *pdev = NULL;
7370 +-
7371 +- while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev))) {
7372 +- u16 cmd;
7373 +-
7374 +- pci_read_config_word(pdev, PCI_COMMAND, &cmd);
7375 +- if (!(cmd & PCI_COMMAND_IO))
7376 +- continue;
7377 +-
7378 +- return pdev;
7379 +- }
7380 +-
7381 +- return NULL;
7382 +-}
7383 +-
7384 + static int is_thunderbolt(struct device *dev, void *data)
7385 + {
7386 + return to_pci_dev(dev)->is_thunderbolt;
7387 +@@ -631,7 +612,6 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
7388 + int ret = -ENXIO;
7389 + acpi_status status;
7390 + unsigned long long gpe;
7391 +- struct pci_dev *pdev = NULL;
7392 +
7393 + if (apple_gmux_data)
7394 + return -EBUSY;
7395 +@@ -682,7 +662,7 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
7396 + ver_minor = (version >> 16) & 0xff;
7397 + ver_release = (version >> 8) & 0xff;
7398 + } else {
7399 +- pr_info("gmux device not present or IO disabled\n");
7400 ++ pr_info("gmux device not present\n");
7401 + ret = -ENODEV;
7402 + goto err_release;
7403 + }
7404 +@@ -690,23 +670,6 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
7405 + pr_info("Found gmux version %d.%d.%d [%s]\n", ver_major, ver_minor,
7406 + ver_release, (gmux_data->indexed ? "indexed" : "classic"));
7407 +
7408 +- /*
7409 +- * Apple systems with gmux are EFI based and normally don't use
7410 +- * VGA. In addition changing IO+MEM ownership between IGP and dGPU
7411 +- * disables IO/MEM used for backlight control on some systems.
7412 +- * Lock IO+MEM to GPU with active IO to prevent switch.
7413 +- */
7414 +- pdev = gmux_get_io_pdev();
7415 +- if (pdev && vga_tryget(pdev,
7416 +- VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM)) {
7417 +- pr_err("IO+MEM vgaarb-locking for PCI:%s failed\n",
7418 +- pci_name(pdev));
7419 +- ret = -EBUSY;
7420 +- goto err_release;
7421 +- } else if (pdev)
7422 +- pr_info("locked IO for PCI:%s\n", pci_name(pdev));
7423 +- gmux_data->pdev = pdev;
7424 +-
7425 + memset(&props, 0, sizeof(props));
7426 + props.type = BACKLIGHT_PLATFORM;
7427 + props.max_brightness = gmux_read32(gmux_data, GMUX_PORT_MAX_BRIGHTNESS);
7428 +@@ -822,10 +785,6 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
7429 + err_notify:
7430 + backlight_device_unregister(bdev);
7431 + err_release:
7432 +- if (gmux_data->pdev)
7433 +- vga_put(gmux_data->pdev,
7434 +- VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM);
7435 +- pci_dev_put(pdev);
7436 + release_region(gmux_data->iostart, gmux_data->iolen);
7437 + err_free:
7438 + kfree(gmux_data);
7439 +@@ -845,11 +804,6 @@ static void gmux_remove(struct pnp_dev *pnp)
7440 + &gmux_notify_handler);
7441 + }
7442 +
7443 +- if (gmux_data->pdev) {
7444 +- vga_put(gmux_data->pdev,
7445 +- VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM);
7446 +- pci_dev_put(gmux_data->pdev);
7447 +- }
7448 + backlight_device_unregister(gmux_data->bdev);
7449 +
7450 + release_region(gmux_data->iostart, gmux_data->iolen);
7451 +diff --git a/drivers/rtc/rtc-opal.c b/drivers/rtc/rtc-opal.c
7452 +index e2a946c0e667..304e891e35fc 100644
7453 +--- a/drivers/rtc/rtc-opal.c
7454 ++++ b/drivers/rtc/rtc-opal.c
7455 +@@ -58,6 +58,7 @@ static void tm_to_opal(struct rtc_time *tm, u32 *y_m_d, u64 *h_m_s_ms)
7456 + static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
7457 + {
7458 + long rc = OPAL_BUSY;
7459 ++ int retries = 10;
7460 + u32 y_m_d;
7461 + u64 h_m_s_ms;
7462 + __be32 __y_m_d;
7463 +@@ -67,8 +68,11 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
7464 + rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);
7465 + if (rc == OPAL_BUSY_EVENT)
7466 + opal_poll_events(NULL);
7467 +- else
7468 ++ else if (retries-- && (rc == OPAL_HARDWARE
7469 ++ || rc == OPAL_INTERNAL_ERROR))
7470 + msleep(10);
7471 ++ else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)
7472 ++ break;
7473 + }
7474 +
7475 + if (rc != OPAL_SUCCESS)
7476 +@@ -84,6 +88,7 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
7477 + static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm)
7478 + {
7479 + long rc = OPAL_BUSY;
7480 ++ int retries = 10;
7481 + u32 y_m_d = 0;
7482 + u64 h_m_s_ms = 0;
7483 +
7484 +@@ -92,8 +97,11 @@ static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm)
7485 + rc = opal_rtc_write(y_m_d, h_m_s_ms);
7486 + if (rc == OPAL_BUSY_EVENT)
7487 + opal_poll_events(NULL);
7488 +- else
7489 ++ else if (retries-- && (rc == OPAL_HARDWARE
7490 ++ || rc == OPAL_INTERNAL_ERROR))
7491 + msleep(10);
7492 ++ else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)
7493 ++ break;
7494 + }
7495 +
7496 + return rc == OPAL_SUCCESS ? 0 : -EIO;
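
The rtc-opal.c loops used to treat every non-busy return as "sleep and retry", which can spin indefinitely against a persistently failing service; the patch retries OPAL_HARDWARE and OPAL_INTERNAL_ERROR at most ten times and gives up immediately on any other error. A standalone sketch of that intent, restructured for clarity rather than copied line-for-line, with an invented flaky firmware call:

#include <stdio.h>

enum {
    OP_SUCCESS, OP_BUSY, OP_HARDWARE, OP_INTERNAL_ERROR, OP_PARAMETER
};

/* Invented firmware call: fails transiently twice, then succeeds. */
static int firmware_call(void)
{
    static int calls;

    return ++calls < 3 ? OP_HARDWARE : OP_SUCCESS;
}

int main(void)
{
    int retries = 10;
    int rc;

    for (;;) {
        rc = firmware_call();
        if (rc == OP_BUSY)
            continue;   /* busy: keep polling, as before */
        if ((rc == OP_HARDWARE || rc == OP_INTERNAL_ERROR) && retries--) {
            /* msleep(10); */
            continue;   /* transient error: bounded retry */
        }
        break;          /* success, hard error, or retries exhausted */
    }

    printf("rc=%d, %d retries unused\n", rc, retries);
    return rc == OP_SUCCESS ? 0 : 1;
}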
7497 +diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
7498 +index f796bd61f3f0..40406c162d0d 100644
7499 +--- a/drivers/scsi/scsi_sysfs.c
7500 ++++ b/drivers/scsi/scsi_sysfs.c
7501 +@@ -1383,7 +1383,10 @@ static void __scsi_remove_target(struct scsi_target *starget)
7502 + * check.
7503 + */
7504 + if (sdev->channel != starget->channel ||
7505 +- sdev->id != starget->id ||
7506 ++ sdev->id != starget->id)
7507 ++ continue;
7508 ++ if (sdev->sdev_state == SDEV_DEL ||
7509 ++ sdev->sdev_state == SDEV_CANCEL ||
7510 + !get_device(&sdev->sdev_gendev))
7511 + continue;
7512 + spin_unlock_irqrestore(shost->host_lock, flags);
7513 +diff --git a/drivers/scsi/smartpqi/Makefile b/drivers/scsi/smartpqi/Makefile
7514 +index 0f42a225a664..e6b779930230 100644
7515 +--- a/drivers/scsi/smartpqi/Makefile
7516 ++++ b/drivers/scsi/smartpqi/Makefile
7517 +@@ -1,3 +1,3 @@
7518 + ccflags-y += -I.
7519 +-obj-m += smartpqi.o
7520 ++obj-$(CONFIG_SCSI_SMARTPQI) += smartpqi.o
7521 + smartpqi-objs := smartpqi_init.o smartpqi_sis.o smartpqi_sas_transport.o
7522 +diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
7523 +index f9bc8ec6fb6b..9518ffd8b8ba 100644
7524 +--- a/drivers/target/iscsi/iscsi_target_auth.c
7525 ++++ b/drivers/target/iscsi/iscsi_target_auth.c
7526 +@@ -421,7 +421,8 @@ static int chap_server_compute_md5(
7527 + auth_ret = 0;
7528 + out:
7529 + kzfree(desc);
7530 +- crypto_free_shash(tfm);
7531 ++ if (tfm)
7532 ++ crypto_free_shash(tfm);
7533 + kfree(challenge);
7534 + kfree(challenge_binhex);
7535 + return auth_ret;
7536 +diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
7537 +index 7a6751fecd32..87248a2512e5 100644
7538 +--- a/drivers/target/iscsi/iscsi_target_nego.c
7539 ++++ b/drivers/target/iscsi/iscsi_target_nego.c
7540 +@@ -432,6 +432,9 @@ static void iscsi_target_sk_data_ready(struct sock *sk)
7541 + if (test_and_set_bit(LOGIN_FLAGS_READ_ACTIVE, &conn->login_flags)) {
7542 + write_unlock_bh(&sk->sk_callback_lock);
7543 + pr_debug("Got LOGIN_FLAGS_READ_ACTIVE=1, conn: %p >>>>\n", conn);
7544 ++ if (iscsi_target_sk_data_ready == conn->orig_data_ready)
7545 ++ return;
7546 ++ conn->orig_data_ready(sk);
7547 + return;
7548 + }
7549 +
7550 +diff --git a/drivers/usb/Kconfig b/drivers/usb/Kconfig
7551 +index 939a63bca82f..72eb3e41e3b6 100644
7552 +--- a/drivers/usb/Kconfig
7553 ++++ b/drivers/usb/Kconfig
7554 +@@ -19,6 +19,14 @@ config USB_EHCI_BIG_ENDIAN_MMIO
7555 + config USB_EHCI_BIG_ENDIAN_DESC
7556 + bool
7557 +
7558 ++config USB_UHCI_BIG_ENDIAN_MMIO
7559 ++ bool
7560 ++ default y if SPARC_LEON
7561 ++
7562 ++config USB_UHCI_BIG_ENDIAN_DESC
7563 ++ bool
7564 ++ default y if SPARC_LEON
7565 ++
7566 + menuconfig USB_SUPPORT
7567 + bool "USB support"
7568 + depends on HAS_IOMEM
7569 +diff --git a/drivers/usb/host/Kconfig b/drivers/usb/host/Kconfig
7570 +index fa5692dec832..92b19721b595 100644
7571 +--- a/drivers/usb/host/Kconfig
7572 ++++ b/drivers/usb/host/Kconfig
7573 +@@ -637,14 +637,6 @@ config USB_UHCI_ASPEED
7574 + bool
7575 + default y if ARCH_ASPEED
7576 +
7577 +-config USB_UHCI_BIG_ENDIAN_MMIO
7578 +- bool
7579 +- default y if SPARC_LEON
7580 +-
7581 +-config USB_UHCI_BIG_ENDIAN_DESC
7582 +- bool
7583 +- default y if SPARC_LEON
7584 +-
7585 + config USB_FHCI_HCD
7586 + tristate "Freescale QE USB Host Controller support"
7587 + depends on OF_GPIO && QE_GPIO && QUICC_ENGINE
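Moving USB_UHCI_BIG_ENDIAN_MMIO and USB_UHCI_BIG_ENDIAN_DESC from
drivers/usb/host/Kconfig up into drivers/usb/Kconfig, ahead of the
USB_SUPPORT menuconfig, apparently ensures the symbols stay defined
(defaulting to y on SPARC_LEON) even in configurations where the
host-controller Kconfig is never reached. As a hypothetical illustration
of how such a bool surfaces in C code (the accessor name is invented, and
the byte swap assumes a little-endian CPU):

    #include <stdint.h>

    /* When the Kconfig bool is =y the build defines
     * CONFIG_USB_UHCI_BIG_ENDIAN_MMIO, and the driver picks the
     * matching register accessor at compile time. */
    static inline uint16_t uhci_read16(const volatile uint16_t *addr)
    {
    #ifdef CONFIG_USB_UHCI_BIG_ENDIAN_MMIO
            return __builtin_bswap16(*addr);  /* big-endian register file */
    #else
            return *addr;                     /* little-endian register file */
    #endif
    }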
7588 +diff --git a/drivers/video/console/dummycon.c b/drivers/video/console/dummycon.c
7589 +index 9269d5685239..b90ef96e43d6 100644
7590 +--- a/drivers/video/console/dummycon.c
7591 ++++ b/drivers/video/console/dummycon.c
7592 +@@ -67,7 +67,6 @@ const struct consw dummy_con = {
7593 + .con_switch = DUMMY,
7594 + .con_blank = DUMMY,
7595 + .con_font_set = DUMMY,
7596 +- .con_font_get = DUMMY,
7597 + .con_font_default = DUMMY,
7598 + .con_font_copy = DUMMY,
7599 + };
7600 +diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c
7601 +index e06358da4b99..3dee267d7c75 100644
7602 +--- a/drivers/video/fbdev/atmel_lcdfb.c
7603 ++++ b/drivers/video/fbdev/atmel_lcdfb.c
7604 +@@ -1119,7 +1119,7 @@ static int atmel_lcdfb_of_init(struct atmel_lcdfb_info *sinfo)
7605 + goto put_display_node;
7606 + }
7607 +
7608 +- timings_np = of_find_node_by_name(display_np, "display-timings");
7609 ++ timings_np = of_get_child_by_name(display_np, "display-timings");
7610 + if (!timings_np) {
7611 + dev_err(dev, "failed to find display-timings node\n");
7612 + ret = -ENODEV;
7613 +@@ -1140,6 +1140,12 @@ static int atmel_lcdfb_of_init(struct atmel_lcdfb_info *sinfo)
7614 + fb_add_videomode(&fb_vm, &info->modelist);
7615 + }
7616 +
7617 ++ /*
7618 ++ * FIXME: Make sure we are not referencing any fields in display_np
7619 ++ * and timings_np and drop our references to them before returning to
7620 ++ * avoid leaking the nodes on probe deferral and driver unbind.
7621 ++ */
7622 ++
7623 + return 0;
7624 +
7625 + put_timings_node:
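The atmel_lcdfb change swaps of_find_node_by_name(), which searches the
whole tree starting after the given node and can therefore return an
unrelated "display-timings" node, for of_get_child_by_name(), which only
inspects direct children and returns them with a reference held. A sketch
of the corrected lookup pattern using the real OF helpers (error handling
trimmed to the essentials):

    #include <linux/errno.h>
    #include <linux/of.h>

    /* Sketch only: look up a direct child named "display-timings" and
     * release the reference afterwards. of_get_child_by_name() never
     * walks outside @parent, unlike of_find_node_by_name(). */
    static int parse_timings(struct device_node *parent)
    {
            struct device_node *timings;

            timings = of_get_child_by_name(parent, "display-timings");
            if (!timings)
                    return -ENODEV;
            /* ... read properties of @timings here ... */
            of_node_put(timings);     /* drop the reference the lookup took */
            return 0;
    }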
7626 +diff --git a/drivers/video/fbdev/geode/video_gx.c b/drivers/video/fbdev/geode/video_gx.c
7627 +index 6082f653c68a..67773e8bbb95 100644
7628 +--- a/drivers/video/fbdev/geode/video_gx.c
7629 ++++ b/drivers/video/fbdev/geode/video_gx.c
7630 +@@ -127,7 +127,7 @@ void gx_set_dclk_frequency(struct fb_info *info)
7631 + int timeout = 1000;
7632 +
7633 + /* Rev. 1 Geode GXs use a 14 MHz reference clock instead of 48 MHz. */
7634 +- if (cpu_data(0).x86_mask == 1) {
7635 ++ if (cpu_data(0).x86_stepping == 1) {
7636 + pll_table = gx_pll_table_14MHz;
7637 + pll_table_len = ARRAY_SIZE(gx_pll_table_14MHz);
7638 + } else {
7639 +diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
7640 +index 149c5e7efc89..092981171df1 100644
7641 +--- a/drivers/xen/xenbus/xenbus.h
7642 ++++ b/drivers/xen/xenbus/xenbus.h
7643 +@@ -76,6 +76,7 @@ struct xb_req_data {
7644 + struct list_head list;
7645 + wait_queue_head_t wq;
7646 + struct xsd_sockmsg msg;
7647 ++ uint32_t caller_req_id;
7648 + enum xsd_sockmsg_type type;
7649 + char *body;
7650 + const struct kvec *vec;
7651 +diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
7652 +index 5b081a01779d..d239fc3c5e3d 100644
7653 +--- a/drivers/xen/xenbus/xenbus_comms.c
7654 ++++ b/drivers/xen/xenbus/xenbus_comms.c
7655 +@@ -309,6 +309,7 @@ static int process_msg(void)
7656 + goto out;
7657 +
7658 + if (req->state == xb_req_state_wait_reply) {
7659 ++ req->msg.req_id = req->caller_req_id;
7660 + req->msg.type = state.msg.type;
7661 + req->msg.len = state.msg.len;
7662 + req->body = state.body;
7663 +diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
7664 +index 3e59590c7254..3f3b29398ab8 100644
7665 +--- a/drivers/xen/xenbus/xenbus_xs.c
7666 ++++ b/drivers/xen/xenbus/xenbus_xs.c
7667 +@@ -227,6 +227,8 @@ static void xs_send(struct xb_req_data *req, struct xsd_sockmsg *msg)
7668 + req->state = xb_req_state_queued;
7669 + init_waitqueue_head(&req->wq);
7670 +
7671 ++ /* Save the caller req_id and restore it later in the reply */
7672 ++ req->caller_req_id = req->msg.req_id;
7673 + req->msg.req_id = xs_request_enter(req);
7674 +
7675 + mutex_lock(&xb_write_mutex);
7676 +@@ -310,6 +312,7 @@ static void *xs_talkv(struct xenbus_transaction t,
7677 + req->num_vecs = num_vecs;
7678 + req->cb = xs_wake_up;
7679 +
7680 ++ msg.req_id = 0;
7681 + msg.tx_id = t.id;
7682 + msg.type = type;
7683 + msg.len = 0;
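Taken together, the three xenbus hunks stash the caller's req_id before
xs_send() overwrites it with an internally allocated matching id, and
restore it in process_msg() when the reply comes back, so a caller such as
xs_talkv() (which now explicitly sends req_id 0) sees its own id again. A
runnable userspace model of the save/restore round trip (names simplified):

    #include <stdint.h>
    #include <stdio.h>

    struct sockmsg { uint32_t req_id; };

    struct req {
            struct sockmsg msg;
            uint32_t caller_req_id;   /* id exactly as the caller set it */
    };

    static uint32_t next_internal_id = 100;

    /* On send: remember the caller's id, replace it with an internal
     * one used to match the reply against this request. */
    static void xs_send(struct req *r)
    {
            r->caller_req_id = r->msg.req_id;
            r->msg.req_id = next_internal_id++;
    }

    /* On reply: restore the caller's id before handing the message back. */
    static void process_reply(struct req *r)
    {
            r->msg.req_id = r->caller_req_id;
    }

    int main(void)
    {
            struct req r = { .msg = { .req_id = 0 } };

            xs_send(&r);          /* wire id becomes 100 */
            process_reply(&r);    /* caller sees id 0 again */
            printf("req_id seen by caller: %u\n", r.msg.req_id);
            return 0;
    }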
7684 +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
7685 +index 5eaedff28a32..1ae61f82e54b 100644
7686 +--- a/fs/btrfs/inode.c
7687 ++++ b/fs/btrfs/inode.c
7688 +@@ -1330,8 +1330,11 @@ static noinline int run_delalloc_nocow(struct inode *inode,
7689 + leaf = path->nodes[0];
7690 + if (path->slots[0] >= btrfs_header_nritems(leaf)) {
7691 + ret = btrfs_next_leaf(root, path);
7692 +- if (ret < 0)
7693 ++ if (ret < 0) {
7694 ++ if (cow_start != (u64)-1)
7695 ++ cur_offset = cow_start;
7696 + goto error;
7697 ++ }
7698 + if (ret > 0)
7699 + break;
7700 + leaf = path->nodes[0];
7701 +@@ -3368,6 +3371,11 @@ int btrfs_orphan_add(struct btrfs_trans_handle *trans,
7702 + ret = btrfs_orphan_reserve_metadata(trans, inode);
7703 + ASSERT(!ret);
7704 + if (ret) {
7705 ++ /*
7706 ++ * dec doesn't need spin_lock as ->orphan_block_rsv
7707 ++ * would be released only if ->orphan_inodes is
7708 ++ * zero.
7709 ++ */
7710 + atomic_dec(&root->orphan_inodes);
7711 + clear_bit(BTRFS_INODE_ORPHAN_META_RESERVED,
7712 + &inode->runtime_flags);
7713 +@@ -3382,12 +3390,17 @@ int btrfs_orphan_add(struct btrfs_trans_handle *trans,
7714 + if (insert >= 1) {
7715 + ret = btrfs_insert_orphan_item(trans, root, btrfs_ino(inode));
7716 + if (ret) {
7717 +- atomic_dec(&root->orphan_inodes);
7718 + if (reserve) {
7719 + clear_bit(BTRFS_INODE_ORPHAN_META_RESERVED,
7720 + &inode->runtime_flags);
7721 + btrfs_orphan_release_metadata(inode);
7722 + }
7723 ++ /*
7724 ++ * btrfs_orphan_commit_root may race with us and set
7726 ++ * ->orphan_block_rsv to zero; to avoid that,
7726 ++ * decrease ->orphan_inodes after everything is done.
7727 ++ */
7728 ++ atomic_dec(&root->orphan_inodes);
7729 + if (ret != -EEXIST) {
7730 + clear_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
7731 + &inode->runtime_flags);
7732 +@@ -3419,28 +3432,26 @@ static int btrfs_orphan_del(struct btrfs_trans_handle *trans,
7733 + {
7734 + struct btrfs_root *root = inode->root;
7735 + int delete_item = 0;
7736 +- int release_rsv = 0;
7737 + int ret = 0;
7738 +
7739 +- spin_lock(&root->orphan_lock);
7740 + if (test_and_clear_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
7741 + &inode->runtime_flags))
7742 + delete_item = 1;
7743 +
7744 ++ if (delete_item && trans)
7745 ++ ret = btrfs_del_orphan_item(trans, root, btrfs_ino(inode));
7746 ++
7747 + if (test_and_clear_bit(BTRFS_INODE_ORPHAN_META_RESERVED,
7748 + &inode->runtime_flags))
7749 +- release_rsv = 1;
7750 +- spin_unlock(&root->orphan_lock);
7751 ++ btrfs_orphan_release_metadata(inode);
7752 +
7753 +- if (delete_item) {
7754 ++ /*
7755 ++ * btrfs_orphan_commit_root may race with us and set ->orphan_block_rsv
7756 ++ * to zero; to avoid that, decrease ->orphan_inodes after
7757 ++ * everything is done.
7758 ++ */
7759 ++ if (delete_item)
7760 + atomic_dec(&root->orphan_inodes);
7761 +- if (trans)
7762 +- ret = btrfs_del_orphan_item(trans, root,
7763 +- btrfs_ino(inode));
7764 +- }
7765 +-
7766 +- if (release_rsv)
7767 +- btrfs_orphan_release_metadata(inode);
7768 +
7769 + return ret;
7770 + }
7771 +@@ -5315,7 +5326,7 @@ void btrfs_evict_inode(struct inode *inode)
7772 + trace_btrfs_inode_evict(inode);
7773 +
7774 + if (!root) {
7775 +- kmem_cache_free(btrfs_inode_cachep, BTRFS_I(inode));
7776 ++ clear_inode(inode);
7777 + return;
7778 + }
7779 +
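The recurring theme in the btrfs_orphan_add()/btrfs_orphan_del() changes
is ordering: per the patch's own comments, btrfs_orphan_commit_root() may
free ->orphan_block_rsv once ->orphan_inodes drops to zero, so the counter
must only be decremented after every other use of the reservation is
finished. A userspace model of that dec-last discipline (the teardown is
folded into the last user here for brevity; in the kernel it happens in a
separate path):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct root {
            atomic_int orphan_inodes;
            void *orphan_block_rsv;   /* freed once orphan_inodes hits 0 */
    };

    static void release_metadata(struct root *r)
    {
            (void)r->orphan_block_rsv;  /* last use of the reservation */
    }

    static void orphan_cleanup(struct root *r)
    {
            release_metadata(r);        /* every use of the rsv first ... */
            /* ... only then drop our count; whoever sees the counter
             * reach zero may tear the reservation down. */
            if (atomic_fetch_sub(&r->orphan_inodes, 1) == 1) {
                    free(r->orphan_block_rsv);
                    r->orphan_block_rsv = NULL;
            }
    }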
7780 +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
7781 +index d3002842d7f6..b6dfe7af7a1f 100644
7782 +--- a/fs/btrfs/tree-log.c
7783 ++++ b/fs/btrfs/tree-log.c
7784 +@@ -28,6 +28,7 @@
7785 + #include "hash.h"
7786 + #include "compression.h"
7787 + #include "qgroup.h"
7788 ++#include "inode-map.h"
7789 +
7790 + /* magic values for the inode_only field in btrfs_log_inode:
7791 + *
7792 +@@ -2494,6 +2495,9 @@ static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
7793 + clean_tree_block(fs_info, next);
7794 + btrfs_wait_tree_block_writeback(next);
7795 + btrfs_tree_unlock(next);
7796 ++ } else {
7797 ++ if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags))
7798 ++ clear_extent_buffer_dirty(next);
7799 + }
7800 +
7801 + WARN_ON(root_owner !=
7802 +@@ -2574,6 +2578,9 @@ static noinline int walk_up_log_tree(struct btrfs_trans_handle *trans,
7803 + clean_tree_block(fs_info, next);
7804 + btrfs_wait_tree_block_writeback(next);
7805 + btrfs_tree_unlock(next);
7806 ++ } else {
7807 ++ if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags))
7808 ++ clear_extent_buffer_dirty(next);
7809 + }
7810 +
7811 + WARN_ON(root_owner != BTRFS_TREE_LOG_OBJECTID);
7812 +@@ -2652,6 +2659,9 @@ static int walk_log_tree(struct btrfs_trans_handle *trans,
7813 + clean_tree_block(fs_info, next);
7814 + btrfs_wait_tree_block_writeback(next);
7815 + btrfs_tree_unlock(next);
7816 ++ } else {
7817 ++ if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags))
7818 ++ clear_extent_buffer_dirty(next);
7819 + }
7820 +
7821 + WARN_ON(log->root_key.objectid !=
7822 +@@ -3038,13 +3048,14 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
7823 +
7824 + while (1) {
7825 + ret = find_first_extent_bit(&log->dirty_log_pages,
7826 +- 0, &start, &end, EXTENT_DIRTY | EXTENT_NEW,
7827 ++ 0, &start, &end,
7828 ++ EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT,
7829 + NULL);
7830 + if (ret)
7831 + break;
7832 +
7833 + clear_extent_bits(&log->dirty_log_pages, start, end,
7834 +- EXTENT_DIRTY | EXTENT_NEW);
7835 ++ EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT);
7836 + }
7837 +
7838 + /*
7839 +@@ -5705,6 +5716,23 @@ int btrfs_recover_log_trees(struct btrfs_root *log_root_tree)
7840 + path);
7841 + }
7842 +
7843 ++ if (!ret && wc.stage == LOG_WALK_REPLAY_ALL) {
7844 ++ struct btrfs_root *root = wc.replay_dest;
7845 ++
7846 ++ btrfs_release_path(path);
7847 ++
7848 ++ /*
7849 ++ * We have just replayed everything, and the highest
7850 ++ * objectid of fs roots probably has changed in case
7851 ++ * some inode_items got replayed.
7852 ++ *
7853 ++ * root->objectid_mutex is not acquired as log replay
7854 ++ * could only happen during mount.
7855 ++ */
7856 ++ ret = btrfs_find_highest_objectid(root,
7857 ++ &root->highest_objectid);
7858 ++ }
7859 ++
7860 + key.offset = found_key.offset - 1;
7861 + wc.replay_dest->log_root = NULL;
7862 + free_extent_buffer(log->node);
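The walk_down/walk_up/walk_log_tree additions use test_and_clear_bit() so
that exactly one path clears the EXTENT_BUFFER_DIRTY state of an unwritten
block, however the walk reaches it. A runnable model of that
claim-the-flag idiom using C11 atomics:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define BUFFER_DIRTY (1u << 0)

    /* Atomically claim a flag so the associated clean-up runs exactly
     * once, no matter how many paths reach it. */
    static bool test_and_clear_flag(atomic_uint *flags, unsigned int bit)
    {
            return atomic_fetch_and(flags, ~bit) & bit;
    }

    int main(void)
    {
            atomic_uint bflags = BUFFER_DIRTY;

            if (test_and_clear_flag(&bflags, BUFFER_DIRTY))
                    printf("first caller clears the dirty state\n");
            if (test_and_clear_flag(&bflags, BUFFER_DIRTY))
                    printf("never reached: flag already clear\n");
            return 0;
    }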
7863 +diff --git a/fs/dcache.c b/fs/dcache.c
7864 +index 34c852af215c..b8d999a5768b 100644
7865 +--- a/fs/dcache.c
7866 ++++ b/fs/dcache.c
7867 +@@ -2705,8 +2705,6 @@ static void swap_names(struct dentry *dentry, struct dentry *target)
7868 + */
7869 + unsigned int i;
7870 + BUILD_BUG_ON(!IS_ALIGNED(DNAME_INLINE_LEN, sizeof(long)));
7871 +- kmemcheck_mark_initialized(dentry->d_iname, DNAME_INLINE_LEN);
7872 +- kmemcheck_mark_initialized(target->d_iname, DNAME_INLINE_LEN);
7873 + for (i = 0; i < DNAME_INLINE_LEN / sizeof(long); i++) {
7874 + swap(((long *) &dentry->d_iname)[i],
7875 + ((long *) &target->d_iname)[i]);
7876 +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
7877 +index ea2ccc524bd9..0b9f3f284799 100644
7878 +--- a/fs/ext4/inode.c
7879 ++++ b/fs/ext4/inode.c
7880 +@@ -3724,10 +3724,18 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
7881 + /* Credits for sb + inode write */
7882 + handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
7883 + if (IS_ERR(handle)) {
7884 +- /* This is really bad luck. We've written the data
7885 +- * but cannot extend i_size. Bail out and pretend
7886 +- * the write failed... */
7887 +- ret = PTR_ERR(handle);
7888 ++ /*
7889 ++ * We wrote the data but cannot extend
7890 ++ * i_size. Bail out. In async io case, we do
7891 ++ * not return error here because we have
7892 ++ * already submitted the corresponding
7893 ++ * bio. Returning error here makes the caller
7894 ++ * think that this IO is done and failed
7895 ++ * resulting in race with bio's completion
7896 ++ * handler.
7897 ++ */
7898 ++ if (!ret)
7899 ++ ret = PTR_ERR(handle);
7900 + if (inode->i_nlink)
7901 + ext4_orphan_del(NULL, inode);
7902 +
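The ext4_direct_IO_write() fix is a keep-the-first-status rule: once the
bio for an async write has been submitted, a later failure to start the
journal handle must not overwrite ret, or the caller would treat an
in-flight I/O as completed and failed. A compact model of the rule
(journal_err stands in for PTR_ERR(handle), and ret == 0 models the
"nothing submitted yet" case):

    static int finish_write(int ret, int journal_err)
    {
            if (journal_err) {
                    if (!ret)         /* report only if nothing is in flight */
                            ret = journal_err;
                    /* otherwise leave ret alone; the bio will complete
                     * and report on its own */
            }
            return ret;
    }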
7903 +diff --git a/fs/ext4/super.c b/fs/ext4/super.c
7904 +index f29351c66610..16d247f056e2 100644
7905 +--- a/fs/ext4/super.c
7906 ++++ b/fs/ext4/super.c
7907 +@@ -742,6 +742,7 @@ __acquires(bitlock)
7908 + }
7909 +
7910 + ext4_unlock_group(sb, grp);
7911 ++ ext4_commit_super(sb, 1);
7912 + ext4_handle_error(sb);
7913 + /*
7914 + * We only get here in the ERRORS_RO case; relocking the group
7915 +diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
7916 +index 8b08044b3120..c0681814c379 100644
7917 +--- a/fs/jbd2/transaction.c
7918 ++++ b/fs/jbd2/transaction.c
7919 +@@ -495,8 +495,10 @@ void jbd2_journal_free_reserved(handle_t *handle)
7920 + EXPORT_SYMBOL(jbd2_journal_free_reserved);
7921 +
7922 + /**
7923 +- * int jbd2_journal_start_reserved(handle_t *handle) - start reserved handle
7924 ++ * int jbd2_journal_start_reserved() - start reserved handle
7925 + * @handle: handle to start
7926 ++ * @type: for handle statistics
7927 ++ * @line_no: for handle statistics
7928 + *
7929 + * Start handle that has been previously reserved with jbd2_journal_reserve().
7930 + * This attaches @handle to the running transaction (or creates one if there's
7931 +@@ -626,6 +628,7 @@ int jbd2_journal_extend(handle_t *handle, int nblocks)
7932 + * int jbd2_journal_restart() - restart a handle .
7933 + * @handle: handle to restart
7934 + * @nblocks: nr credits requested
7935 ++ * @gfp_mask: memory allocation flags (for start_this_handle)
7936 + *
7937 + * Restart a handle for a multi-transaction filesystem
7938 + * operation.
7939 +diff --git a/fs/mbcache.c b/fs/mbcache.c
7940 +index d818fd236787..49c5b25bfa8c 100644
7941 +--- a/fs/mbcache.c
7942 ++++ b/fs/mbcache.c
7943 +@@ -94,6 +94,7 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
7944 + entry->e_key = key;
7945 + entry->e_value = value;
7946 + entry->e_reusable = reusable;
7947 ++ entry->e_referenced = 0;
7948 + head = mb_cache_entry_head(cache, key);
7949 + hlist_bl_lock(head);
7950 + hlist_bl_for_each_entry(dup, dup_node, head, e_hash_list) {
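The mbcache fix is a classic uninitialized-field bug: entries come from an
allocator that does not zero memory, so every field has to be assigned
explicitly, and e_referenced was missed. A userspace sketch of the
constructor with the missing assignment in place (struct layout is
simplified):

    #include <stdlib.h>

    struct cache_entry {
            unsigned int key;
            unsigned long value;
            int reusable;
            int referenced;     /* was left uninitialized before the fix */
    };

    /* Allocators like kmem_cache_alloc() do not zero memory unless
     * asked to, so a constructor must assign every field. */
    static struct cache_entry *entry_create(unsigned int key,
                                            unsigned long value, int reusable)
    {
            struct cache_entry *e = malloc(sizeof(*e));

            if (!e)
                    return NULL;
            e->key = key;
            e->value = value;
            e->reusable = reusable;
            e->referenced = 0;  /* the missing initialization */
            return e;
    }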
7951 +diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
7952 +index 4689940a953c..5193218f5889 100644
7953 +--- a/fs/ocfs2/dlmglue.c
7954 ++++ b/fs/ocfs2/dlmglue.c
7955 +@@ -2486,6 +2486,15 @@ int ocfs2_inode_lock_with_page(struct inode *inode,
7956 + ret = ocfs2_inode_lock_full(inode, ret_bh, ex, OCFS2_LOCK_NONBLOCK);
7957 + if (ret == -EAGAIN) {
7958 + unlock_page(page);
7959 ++ /*
7960 ++ * If we can't get inode lock immediately, we should not return
7961 ++ * directly here, since this will lead to a softlockup problem.
7962 ++ * The method is to get a blocking lock and immediately unlock
7963 ++ * before returning; this avoids wasting CPU on repeated
7964 ++ * retries and improves fairness in acquiring the lock.
7965 ++ */
7966 ++ if (ocfs2_inode_lock(inode, ret_bh, ex) == 0)
7967 ++ ocfs2_inode_unlock(inode, ex);
7968 + ret = AOP_TRUNCATED_PAGE;
7969 + }
7970 +
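The ocfs2 change implements the throttle its comment describes: when the
non-blocking attempt fails, take the lock in blocking mode and release it
immediately before returning AOP_TRUNCATED_PAGE, so the retry loop waits
for the current holder instead of spinning. A runnable pthread model of
the same idea (pthread mutexes are not strictly fair, so this only
approximates the fairness point):

    #include <errno.h>
    #include <pthread.h>

    static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;

    static int try_with_backoff(void)
    {
            if (pthread_mutex_trylock(&inode_lock) != 0) {
                    pthread_mutex_lock(&inode_lock);    /* wait our turn ... */
                    pthread_mutex_unlock(&inode_lock);  /* ... release at once */
                    return -EAGAIN;                     /* caller retries */
            }
            /* ... work under the lock ... */
            pthread_mutex_unlock(&inode_lock);
            return 0;
    }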
7971 +diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
7972 +index 321511ed8c42..d60900b615f9 100644
7973 +--- a/fs/overlayfs/inode.c
7974 ++++ b/fs/overlayfs/inode.c
7975 +@@ -579,6 +579,16 @@ static int ovl_inode_set(struct inode *inode, void *data)
7976 + static bool ovl_verify_inode(struct inode *inode, struct dentry *lowerdentry,
7977 + struct dentry *upperdentry)
7978 + {
7979 ++ if (S_ISDIR(inode->i_mode)) {
7980 ++ /* Real lower dir moved to upper layer under us? */
7981 ++ if (!lowerdentry && ovl_inode_lower(inode))
7982 ++ return false;
7983 ++
7984 ++ /* Lookup of an uncovered redirect origin? */
7985 ++ if (!upperdentry && ovl_inode_upper(inode))
7986 ++ return false;
7987 ++ }
7988 ++
7989 + /*
7990 + * Allow non-NULL lower inode in ovl_inode even if lowerdentry is NULL.
7991 + * This happens when finding a copied up overlay inode for a renamed
7992 +@@ -606,6 +616,8 @@ struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry,
7993 + struct inode *inode;
7994 + /* Already indexed or could be indexed on copy up? */
7995 + bool indexed = (index || (ovl_indexdir(dentry->d_sb) && !upperdentry));
7996 ++ struct dentry *origin = indexed ? lowerdentry : NULL;
7997 ++ bool is_dir;
7998 +
7999 + if (WARN_ON(upperdentry && indexed && !lowerdentry))
8000 + return ERR_PTR(-EIO);
8001 +@@ -614,15 +626,19 @@ struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry,
8002 + realinode = d_inode(lowerdentry);
8003 +
8004 + /*
8005 +- * Copy up origin (lower) may exist for non-indexed upper, but we must
8006 +- * not use lower as hash key in that case.
8007 +- * Hash inodes that are or could be indexed by origin inode and
8008 +- * non-indexed upper inodes that could be hard linked by upper inode.
8009 ++ * Copy up origin (lower) may exist for non-indexed non-dir upper, but
8010 ++ * we must not use lower as hash key in that case.
8011 ++ * Hash non-dir that is or could be indexed by origin inode.
8012 ++ * Hash dir that is or could be merged by origin inode.
8013 ++ * Hash pure upper and non-indexed non-dir by upper inode.
8014 + */
8015 +- if (!S_ISDIR(realinode->i_mode) && (upperdentry || indexed)) {
8016 +- struct inode *key = d_inode(indexed ? lowerdentry :
8017 +- upperdentry);
8018 +- unsigned int nlink;
8019 ++ is_dir = S_ISDIR(realinode->i_mode);
8020 ++ if (is_dir)
8021 ++ origin = lowerdentry;
8022 ++
8023 ++ if (upperdentry || origin) {
8024 ++ struct inode *key = d_inode(origin ?: upperdentry);
8025 ++ unsigned int nlink = is_dir ? 1 : realinode->i_nlink;
8026 +
8027 + inode = iget5_locked(dentry->d_sb, (unsigned long) key,
8028 + ovl_inode_test, ovl_inode_set, key);
8029 +@@ -643,8 +659,9 @@ struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry,
8030 + goto out;
8031 + }
8032 +
8033 +- nlink = ovl_get_nlink(lowerdentry, upperdentry,
8034 +- realinode->i_nlink);
8035 ++ /* Recalculate nlink for non-dir due to indexing */
8036 ++ if (!is_dir)
8037 ++ nlink = ovl_get_nlink(lowerdentry, upperdentry, nlink);
8038 + set_nlink(inode, nlink);
8039 + } else {
8040 + inode = new_inode(dentry->d_sb);
8041 +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
8042 +index f5738e96a052..b8f8d666e8d4 100644
8043 +--- a/fs/overlayfs/super.c
8044 ++++ b/fs/overlayfs/super.c
8045 +@@ -200,6 +200,7 @@ static void ovl_destroy_inode(struct inode *inode)
8046 + struct ovl_inode *oi = OVL_I(inode);
8047 +
8048 + dput(oi->__upperdentry);
8049 ++ iput(oi->lower);
8050 + kfree(oi->redirect);
8051 + ovl_dir_cache_free(inode);
8052 + mutex_destroy(&oi->lock);
8053 +diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
8054 +index b9b239fa5cfd..f60ce2e04df0 100644
8055 +--- a/fs/overlayfs/util.c
8056 ++++ b/fs/overlayfs/util.c
8057 +@@ -253,7 +253,7 @@ void ovl_inode_init(struct inode *inode, struct dentry *upperdentry,
8058 + if (upperdentry)
8059 + OVL_I(inode)->__upperdentry = upperdentry;
8060 + if (lowerdentry)
8061 +- OVL_I(inode)->lower = d_inode(lowerdentry);
8062 ++ OVL_I(inode)->lower = igrab(d_inode(lowerdentry));
8063 +
8064 + ovl_copyattr(d_inode(upperdentry ?: lowerdentry), inode);
8065 + }
8066 +@@ -269,7 +269,7 @@ void ovl_inode_update(struct inode *inode, struct dentry *upperdentry)
8067 + */
8068 + smp_wmb();
8069 + OVL_I(inode)->__upperdentry = upperdentry;
8070 +- if (!S_ISDIR(upperinode->i_mode) && inode_unhashed(inode)) {
8071 ++ if (inode_unhashed(inode)) {
8072 + inode->i_private = upperinode;
8073 + __insert_inode_hash(inode, (unsigned long) upperinode);
8074 + }
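Across the three overlayfs hunks, the lower inode stored in ovl_inode is
now pinned with igrab() in ovl_inode_init() and released with iput() in
ovl_destroy_inode(), so it cannot be evicted while the overlay inode still
points at it. A runnable model of that reference pairing (the refcounting
is reduced to a plain counter):

    #include <stdio.h>

    struct inode { int i_count; };

    static struct inode *igrab_model(struct inode *i) { i->i_count++; return i; }
    static void iput_model(struct inode *i)           { i->i_count--; }

    struct ovl_inode { struct inode *lower; };

    static void ovl_init(struct ovl_inode *oi, struct inode *lower)
    {
            oi->lower = igrab_model(lower);   /* pin while we point at it */
    }

    static void ovl_destroy(struct ovl_inode *oi)
    {
            iput_model(oi->lower);            /* matching drop on teardown */
    }

    int main(void)
    {
            struct inode lower = { .i_count = 1 };
            struct ovl_inode oi;

            ovl_init(&oi, &lower);
            ovl_destroy(&oi);
            printf("lower refcount back to %d\n", lower.i_count);
            return 0;
    }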
8075 +diff --git a/fs/seq_file.c b/fs/seq_file.c
8076 +index 4be761c1a03d..eea09f6d8830 100644
8077 +--- a/fs/seq_file.c
8078 ++++ b/fs/seq_file.c
8079 +@@ -181,8 +181,11 @@ ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos)
8080 + * if request is to read from zero offset, reset iterator to first
8081 + * record as it might have been already advanced by previous requests
8082 + */
8083 +- if (*ppos == 0)
8084 ++ if (*ppos == 0) {
8085 + m->index = 0;
8086 ++ m->version = 0;
8087 ++ m->count = 0;
8088 ++ }
8089 +
8090 + /* Don't assume *ppos is where we left it */
8091 + if (unlikely(*ppos != m->read_pos)) {
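The seq_read() fix makes a rewind to offset 0 reset the whole reader
state: clearing m->index alone left stale buffered output in m->count and
a stale m->version for the iterator. A sketch of the complete reset
(field meanings paraphrased from seq_file):

    #include <stddef.h>

    struct seq_file_model {
            size_t index;     /* iterator position */
            size_t count;     /* buffered, not-yet-copied output bytes */
            int version;      /* consistency stamp handed to the iterator */
    };

    /* A rewind to offset 0 must restore the whole reader state, not
     * just the iterator index. */
    static void rewind_to_start(struct seq_file_model *m)
    {
            m->index = 0;
            m->version = 0;
            m->count = 0;
    }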
8092 +diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h
8093 +index 34c8f5600ce0..c65e4489006d 100644
8094 +--- a/include/drm/i915_pciids.h
8095 ++++ b/include/drm/i915_pciids.h
8096 +@@ -118,92 +118,125 @@
8097 + #define INTEL_IRONLAKE_M_IDS(info) \
8098 + INTEL_VGA_DEVICE(0x0046, info)
8099 +
8100 +-#define INTEL_SNB_D_IDS(info) \
8101 ++#define INTEL_SNB_D_GT1_IDS(info) \
8102 + INTEL_VGA_DEVICE(0x0102, info), \
8103 +- INTEL_VGA_DEVICE(0x0112, info), \
8104 +- INTEL_VGA_DEVICE(0x0122, info), \
8105 + INTEL_VGA_DEVICE(0x010A, info)
8106 +
8107 +-#define INTEL_SNB_M_IDS(info) \
8108 +- INTEL_VGA_DEVICE(0x0106, info), \
8109 ++#define INTEL_SNB_D_GT2_IDS(info) \
8110 ++ INTEL_VGA_DEVICE(0x0112, info), \
8111 ++ INTEL_VGA_DEVICE(0x0122, info)
8112 ++
8113 ++#define INTEL_SNB_D_IDS(info) \
8114 ++ INTEL_SNB_D_GT1_IDS(info), \
8115 ++ INTEL_SNB_D_GT2_IDS(info)
8116 ++
8117 ++#define INTEL_SNB_M_GT1_IDS(info) \
8118 ++ INTEL_VGA_DEVICE(0x0106, info)
8119 ++
8120 ++#define INTEL_SNB_M_GT2_IDS(info) \
8121 + INTEL_VGA_DEVICE(0x0116, info), \
8122 + INTEL_VGA_DEVICE(0x0126, info)
8123 +
8124 ++#define INTEL_SNB_M_IDS(info) \
8125 ++ INTEL_SNB_M_GT1_IDS(info), \
8126 ++ INTEL_SNB_M_GT2_IDS(info)
8127 ++
8128 ++#define INTEL_IVB_M_GT1_IDS(info) \
8129 ++ INTEL_VGA_DEVICE(0x0156, info) /* GT1 mobile */
8130 ++
8131 ++#define INTEL_IVB_M_GT2_IDS(info) \
8132 ++ INTEL_VGA_DEVICE(0x0166, info) /* GT2 mobile */
8133 ++
8134 + #define INTEL_IVB_M_IDS(info) \
8135 +- INTEL_VGA_DEVICE(0x0156, info), /* GT1 mobile */ \
8136 +- INTEL_VGA_DEVICE(0x0166, info) /* GT2 mobile */
8137 ++ INTEL_IVB_M_GT1_IDS(info), \
8138 ++ INTEL_IVB_M_GT2_IDS(info)
8139 +
8140 +-#define INTEL_IVB_D_IDS(info) \
8141 ++#define INTEL_IVB_D_GT1_IDS(info) \
8142 + INTEL_VGA_DEVICE(0x0152, info), /* GT1 desktop */ \
8143 ++ INTEL_VGA_DEVICE(0x015a, info) /* GT1 server */
8144 ++
8145 ++#define INTEL_IVB_D_GT2_IDS(info) \
8146 + INTEL_VGA_DEVICE(0x0162, info), /* GT2 desktop */ \
8147 +- INTEL_VGA_DEVICE(0x015a, info), /* GT1 server */ \
8148 + INTEL_VGA_DEVICE(0x016a, info) /* GT2 server */
8149 +
8150 ++#define INTEL_IVB_D_IDS(info) \
8151 ++ INTEL_IVB_D_GT1_IDS(info), \
8152 ++ INTEL_IVB_D_GT2_IDS(info)
8153 ++
8154 + #define INTEL_IVB_Q_IDS(info) \
8155 + INTEL_QUANTA_VGA_DEVICE(info) /* Quanta transcode */
8156 +
8157 +-#define INTEL_HSW_IDS(info) \
8158 ++#define INTEL_HSW_GT1_IDS(info) \
8159 + INTEL_VGA_DEVICE(0x0402, info), /* GT1 desktop */ \
8160 +- INTEL_VGA_DEVICE(0x0412, info), /* GT2 desktop */ \
8161 +- INTEL_VGA_DEVICE(0x0422, info), /* GT3 desktop */ \
8162 + INTEL_VGA_DEVICE(0x040a, info), /* GT1 server */ \
8163 +- INTEL_VGA_DEVICE(0x041a, info), /* GT2 server */ \
8164 +- INTEL_VGA_DEVICE(0x042a, info), /* GT3 server */ \
8165 + INTEL_VGA_DEVICE(0x040B, info), /* GT1 reserved */ \
8166 +- INTEL_VGA_DEVICE(0x041B, info), /* GT2 reserved */ \
8167 +- INTEL_VGA_DEVICE(0x042B, info), /* GT3 reserved */ \
8168 + INTEL_VGA_DEVICE(0x040E, info), /* GT1 reserved */ \
8169 +- INTEL_VGA_DEVICE(0x041E, info), /* GT2 reserved */ \
8170 +- INTEL_VGA_DEVICE(0x042E, info), /* GT3 reserved */ \
8171 + INTEL_VGA_DEVICE(0x0C02, info), /* SDV GT1 desktop */ \
8172 +- INTEL_VGA_DEVICE(0x0C12, info), /* SDV GT2 desktop */ \
8173 +- INTEL_VGA_DEVICE(0x0C22, info), /* SDV GT3 desktop */ \
8174 + INTEL_VGA_DEVICE(0x0C0A, info), /* SDV GT1 server */ \
8175 +- INTEL_VGA_DEVICE(0x0C1A, info), /* SDV GT2 server */ \
8176 +- INTEL_VGA_DEVICE(0x0C2A, info), /* SDV GT3 server */ \
8177 + INTEL_VGA_DEVICE(0x0C0B, info), /* SDV GT1 reserved */ \
8178 +- INTEL_VGA_DEVICE(0x0C1B, info), /* SDV GT2 reserved */ \
8179 +- INTEL_VGA_DEVICE(0x0C2B, info), /* SDV GT3 reserved */ \
8180 + INTEL_VGA_DEVICE(0x0C0E, info), /* SDV GT1 reserved */ \
8181 +- INTEL_VGA_DEVICE(0x0C1E, info), /* SDV GT2 reserved */ \
8182 +- INTEL_VGA_DEVICE(0x0C2E, info), /* SDV GT3 reserved */ \
8183 + INTEL_VGA_DEVICE(0x0A02, info), /* ULT GT1 desktop */ \
8184 +- INTEL_VGA_DEVICE(0x0A12, info), /* ULT GT2 desktop */ \
8185 +- INTEL_VGA_DEVICE(0x0A22, info), /* ULT GT3 desktop */ \
8186 + INTEL_VGA_DEVICE(0x0A0A, info), /* ULT GT1 server */ \
8187 +- INTEL_VGA_DEVICE(0x0A1A, info), /* ULT GT2 server */ \
8188 +- INTEL_VGA_DEVICE(0x0A2A, info), /* ULT GT3 server */ \
8189 + INTEL_VGA_DEVICE(0x0A0B, info), /* ULT GT1 reserved */ \
8190 +- INTEL_VGA_DEVICE(0x0A1B, info), /* ULT GT2 reserved */ \
8191 +- INTEL_VGA_DEVICE(0x0A2B, info), /* ULT GT3 reserved */ \
8192 + INTEL_VGA_DEVICE(0x0D02, info), /* CRW GT1 desktop */ \
8193 +- INTEL_VGA_DEVICE(0x0D12, info), /* CRW GT2 desktop */ \
8194 +- INTEL_VGA_DEVICE(0x0D22, info), /* CRW GT3 desktop */ \
8195 + INTEL_VGA_DEVICE(0x0D0A, info), /* CRW GT1 server */ \
8196 +- INTEL_VGA_DEVICE(0x0D1A, info), /* CRW GT2 server */ \
8197 +- INTEL_VGA_DEVICE(0x0D2A, info), /* CRW GT3 server */ \
8198 + INTEL_VGA_DEVICE(0x0D0B, info), /* CRW GT1 reserved */ \
8199 +- INTEL_VGA_DEVICE(0x0D1B, info), /* CRW GT2 reserved */ \
8200 +- INTEL_VGA_DEVICE(0x0D2B, info), /* CRW GT3 reserved */ \
8201 + INTEL_VGA_DEVICE(0x0D0E, info), /* CRW GT1 reserved */ \
8202 +- INTEL_VGA_DEVICE(0x0D1E, info), /* CRW GT2 reserved */ \
8203 +- INTEL_VGA_DEVICE(0x0D2E, info), /* CRW GT3 reserved */ \
8204 + INTEL_VGA_DEVICE(0x0406, info), /* GT1 mobile */ \
8205 ++ INTEL_VGA_DEVICE(0x0C06, info), /* SDV GT1 mobile */ \
8206 ++ INTEL_VGA_DEVICE(0x0A06, info), /* ULT GT1 mobile */ \
8207 ++ INTEL_VGA_DEVICE(0x0A0E, info), /* ULX GT1 mobile */ \
8208 ++ INTEL_VGA_DEVICE(0x0D06, info) /* CRW GT1 mobile */
8209 ++
8210 ++#define INTEL_HSW_GT2_IDS(info) \
8211 ++ INTEL_VGA_DEVICE(0x0412, info), /* GT2 desktop */ \
8212 ++ INTEL_VGA_DEVICE(0x041a, info), /* GT2 server */ \
8213 ++ INTEL_VGA_DEVICE(0x041B, info), /* GT2 reserved */ \
8214 ++ INTEL_VGA_DEVICE(0x041E, info), /* GT2 reserved */ \
8215 ++ INTEL_VGA_DEVICE(0x0C12, info), /* SDV GT2 desktop */ \
8216 ++ INTEL_VGA_DEVICE(0x0C1A, info), /* SDV GT2 server */ \
8217 ++ INTEL_VGA_DEVICE(0x0C1B, info), /* SDV GT2 reserved */ \
8218 ++ INTEL_VGA_DEVICE(0x0C1E, info), /* SDV GT2 reserved */ \
8219 ++ INTEL_VGA_DEVICE(0x0A12, info), /* ULT GT2 desktop */ \
8220 ++ INTEL_VGA_DEVICE(0x0A1A, info), /* ULT GT2 server */ \
8221 ++ INTEL_VGA_DEVICE(0x0A1B, info), /* ULT GT2 reserved */ \
8222 ++ INTEL_VGA_DEVICE(0x0D12, info), /* CRW GT2 desktop */ \
8223 ++ INTEL_VGA_DEVICE(0x0D1A, info), /* CRW GT2 server */ \
8224 ++ INTEL_VGA_DEVICE(0x0D1B, info), /* CRW GT2 reserved */ \
8225 ++ INTEL_VGA_DEVICE(0x0D1E, info), /* CRW GT2 reserved */ \
8226 + INTEL_VGA_DEVICE(0x0416, info), /* GT2 mobile */ \
8227 + INTEL_VGA_DEVICE(0x0426, info), /* GT2 mobile */ \
8228 +- INTEL_VGA_DEVICE(0x0C06, info), /* SDV GT1 mobile */ \
8229 + INTEL_VGA_DEVICE(0x0C16, info), /* SDV GT2 mobile */ \
8230 +- INTEL_VGA_DEVICE(0x0C26, info), /* SDV GT3 mobile */ \
8231 +- INTEL_VGA_DEVICE(0x0A06, info), /* ULT GT1 mobile */ \
8232 + INTEL_VGA_DEVICE(0x0A16, info), /* ULT GT2 mobile */ \
8233 +- INTEL_VGA_DEVICE(0x0A26, info), /* ULT GT3 mobile */ \
8234 +- INTEL_VGA_DEVICE(0x0A0E, info), /* ULX GT1 mobile */ \
8235 + INTEL_VGA_DEVICE(0x0A1E, info), /* ULX GT2 mobile */ \
8236 ++ INTEL_VGA_DEVICE(0x0D16, info) /* CRW GT2 mobile */
8237 ++
8238 ++#define INTEL_HSW_GT3_IDS(info) \
8239 ++ INTEL_VGA_DEVICE(0x0422, info), /* GT3 desktop */ \
8240 ++ INTEL_VGA_DEVICE(0x042a, info), /* GT3 server */ \
8241 ++ INTEL_VGA_DEVICE(0x042B, info), /* GT3 reserved */ \
8242 ++ INTEL_VGA_DEVICE(0x042E, info), /* GT3 reserved */ \
8243 ++ INTEL_VGA_DEVICE(0x0C22, info), /* SDV GT3 desktop */ \
8244 ++ INTEL_VGA_DEVICE(0x0C2A, info), /* SDV GT3 server */ \
8245 ++ INTEL_VGA_DEVICE(0x0C2B, info), /* SDV GT3 reserved */ \
8246 ++ INTEL_VGA_DEVICE(0x0C2E, info), /* SDV GT3 reserved */ \
8247 ++ INTEL_VGA_DEVICE(0x0A22, info), /* ULT GT3 desktop */ \
8248 ++ INTEL_VGA_DEVICE(0x0A2A, info), /* ULT GT3 server */ \
8249 ++ INTEL_VGA_DEVICE(0x0A2B, info), /* ULT GT3 reserved */ \
8250 ++ INTEL_VGA_DEVICE(0x0D22, info), /* CRW GT3 desktop */ \
8251 ++ INTEL_VGA_DEVICE(0x0D2A, info), /* CRW GT3 server */ \
8252 ++ INTEL_VGA_DEVICE(0x0D2B, info), /* CRW GT3 reserved */ \
8253 ++ INTEL_VGA_DEVICE(0x0D2E, info), /* CRW GT3 reserved */ \
8254 ++ INTEL_VGA_DEVICE(0x0C26, info), /* SDV GT3 mobile */ \
8255 ++ INTEL_VGA_DEVICE(0x0A26, info), /* ULT GT3 mobile */ \
8256 + INTEL_VGA_DEVICE(0x0A2E, info), /* ULT GT3 reserved */ \
8257 +- INTEL_VGA_DEVICE(0x0D06, info), /* CRW GT1 mobile */ \
8258 +- INTEL_VGA_DEVICE(0x0D16, info), /* CRW GT2 mobile */ \
8259 + INTEL_VGA_DEVICE(0x0D26, info) /* CRW GT3 mobile */
8260 +
8261 ++#define INTEL_HSW_IDS(info) \
8262 ++ INTEL_HSW_GT1_IDS(info), \
8263 ++ INTEL_HSW_GT2_IDS(info), \
8264 ++ INTEL_HSW_GT3_IDS(info)
8265 ++
8266 + #define INTEL_VLV_IDS(info) \
8267 + INTEL_VGA_DEVICE(0x0f30, info), \
8268 + INTEL_VGA_DEVICE(0x0f31, info), \
8269 +@@ -212,17 +245,19 @@
8270 + INTEL_VGA_DEVICE(0x0157, info), \
8271 + INTEL_VGA_DEVICE(0x0155, info)
8272 +
8273 +-#define INTEL_BDW_GT12_IDS(info) \
8274 ++#define INTEL_BDW_GT1_IDS(info) \
8275 + INTEL_VGA_DEVICE(0x1602, info), /* GT1 ULT */ \
8276 + INTEL_VGA_DEVICE(0x1606, info), /* GT1 ULT */ \
8277 + INTEL_VGA_DEVICE(0x160B, info), /* GT1 Iris */ \
8278 + INTEL_VGA_DEVICE(0x160E, info), /* GT1 ULX */ \
8279 +- INTEL_VGA_DEVICE(0x1612, info), /* GT2 Halo */ \
8280 ++ INTEL_VGA_DEVICE(0x160A, info), /* GT1 Server */ \
8281 ++ INTEL_VGA_DEVICE(0x160D, info) /* GT1 Workstation */
8282 ++
8283 ++#define INTEL_BDW_GT2_IDS(info) \
8284 ++ INTEL_VGA_DEVICE(0x1612, info), /* GT2 Halo */ \
8285 + INTEL_VGA_DEVICE(0x1616, info), /* GT2 ULT */ \
8286 + INTEL_VGA_DEVICE(0x161B, info), /* GT2 ULT */ \
8287 +- INTEL_VGA_DEVICE(0x161E, info), /* GT2 ULX */ \
8288 +- INTEL_VGA_DEVICE(0x160A, info), /* GT1 Server */ \
8289 +- INTEL_VGA_DEVICE(0x160D, info), /* GT1 Workstation */ \
8290 ++ INTEL_VGA_DEVICE(0x161E, info), /* GT2 ULX */ \
8291 + INTEL_VGA_DEVICE(0x161A, info), /* GT2 Server */ \
8292 + INTEL_VGA_DEVICE(0x161D, info) /* GT2 Workstation */
8293 +
8294 +@@ -243,7 +278,8 @@
8295 + INTEL_VGA_DEVICE(0x163D, info) /* Workstation */
8296 +
8297 + #define INTEL_BDW_IDS(info) \
8298 +- INTEL_BDW_GT12_IDS(info), \
8299 ++ INTEL_BDW_GT1_IDS(info), \
8300 ++ INTEL_BDW_GT2_IDS(info), \
8301 + INTEL_BDW_GT3_IDS(info), \
8302 + INTEL_BDW_RSVD_IDS(info)
8303 +
8304 +@@ -303,7 +339,6 @@
8305 + #define INTEL_KBL_GT1_IDS(info) \
8306 + INTEL_VGA_DEVICE(0x5913, info), /* ULT GT1.5 */ \
8307 + INTEL_VGA_DEVICE(0x5915, info), /* ULX GT1.5 */ \
8308 +- INTEL_VGA_DEVICE(0x5917, info), /* DT GT1.5 */ \
8309 + INTEL_VGA_DEVICE(0x5906, info), /* ULT GT1 */ \
8310 + INTEL_VGA_DEVICE(0x590E, info), /* ULX GT1 */ \
8311 + INTEL_VGA_DEVICE(0x5902, info), /* DT GT1 */ \
8312 +@@ -313,6 +348,7 @@
8313 +
8314 + #define INTEL_KBL_GT2_IDS(info) \
8315 + INTEL_VGA_DEVICE(0x5916, info), /* ULT GT2 */ \
8316 ++ INTEL_VGA_DEVICE(0x5917, info), /* Mobile GT2 */ \
8317 + INTEL_VGA_DEVICE(0x5921, info), /* ULT GT2F */ \
8318 + INTEL_VGA_DEVICE(0x591E, info), /* ULX GT2 */ \
8319 + INTEL_VGA_DEVICE(0x5912, info), /* DT GT2 */ \
8320 +@@ -335,25 +371,33 @@
8321 + INTEL_KBL_GT4_IDS(info)
8322 +
8323 + /* CFL S */
8324 +-#define INTEL_CFL_S_IDS(info) \
8325 ++#define INTEL_CFL_S_GT1_IDS(info) \
8326 + INTEL_VGA_DEVICE(0x3E90, info), /* SRV GT1 */ \
8327 +- INTEL_VGA_DEVICE(0x3E93, info), /* SRV GT1 */ \
8328 ++ INTEL_VGA_DEVICE(0x3E93, info) /* SRV GT1 */
8329 ++
8330 ++#define INTEL_CFL_S_GT2_IDS(info) \
8331 + INTEL_VGA_DEVICE(0x3E91, info), /* SRV GT2 */ \
8332 + INTEL_VGA_DEVICE(0x3E92, info), /* SRV GT2 */ \
8333 + INTEL_VGA_DEVICE(0x3E96, info) /* SRV GT2 */
8334 +
8335 + /* CFL H */
8336 +-#define INTEL_CFL_H_IDS(info) \
8337 ++#define INTEL_CFL_H_GT2_IDS(info) \
8338 + INTEL_VGA_DEVICE(0x3E9B, info), /* Halo GT2 */ \
8339 + INTEL_VGA_DEVICE(0x3E94, info) /* Halo GT2 */
8340 +
8341 + /* CFL U */
8342 +-#define INTEL_CFL_U_IDS(info) \
8343 ++#define INTEL_CFL_U_GT3_IDS(info) \
8344 + INTEL_VGA_DEVICE(0x3EA6, info), /* ULT GT3 */ \
8345 + INTEL_VGA_DEVICE(0x3EA7, info), /* ULT GT3 */ \
8346 + INTEL_VGA_DEVICE(0x3EA8, info), /* ULT GT3 */ \
8347 + INTEL_VGA_DEVICE(0x3EA5, info) /* ULT GT3 */
8348 +
8349 ++#define INTEL_CFL_IDS(info) \
8350 ++ INTEL_CFL_S_GT1_IDS(info), \
8351 ++ INTEL_CFL_S_GT2_IDS(info), \
8352 ++ INTEL_CFL_H_GT2_IDS(info), \
8353 ++ INTEL_CFL_U_GT3_IDS(info)
8354 ++
8355 + /* CNL U 2+2 */
8356 + #define INTEL_CNL_U_GT2_IDS(info) \
8357 + INTEL_VGA_DEVICE(0x5A52, info), \
8358 +diff --git a/include/linux/c2port.h b/include/linux/c2port.h
8359 +index 4efabcb51347..f2736348ca26 100644
8360 +--- a/include/linux/c2port.h
8361 ++++ b/include/linux/c2port.h
8362 +@@ -9,8 +9,6 @@
8363 + * the Free Software Foundation
8364 + */
8365 +
8366 +-#include <linux/kmemcheck.h>
8367 +-
8368 + #define C2PORT_NAME_LEN 32
8369 +
8370 + struct device;
8371 +@@ -22,10 +20,8 @@ struct device;
8372 + /* Main struct */
8373 + struct c2port_ops;
8374 + struct c2port_device {
8375 +- kmemcheck_bitfield_begin(flags);
8376 + unsigned int access:1;
8377 + unsigned int flash_access:1;
8378 +- kmemcheck_bitfield_end(flags);
8379 +
8380 + int id;
8381 + char name[C2PORT_NAME_LEN];
8382 +diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
8383 +index 2272ded07496..bf09213895f7 100644
8384 +--- a/include/linux/compiler-gcc.h
8385 ++++ b/include/linux/compiler-gcc.h
8386 +@@ -167,8 +167,6 @@
8387 +
8388 + #if GCC_VERSION >= 40100
8389 + # define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
8390 +-
8391 +-#define __nostackprotector __attribute__((__optimize__("no-stack-protector")))
8392 + #endif
8393 +
8394 + #if GCC_VERSION >= 40300
8395 +@@ -196,6 +194,11 @@
8396 + #endif /* __CHECKER__ */
8397 + #endif /* GCC_VERSION >= 40300 */
8398 +
8399 ++#if GCC_VERSION >= 40400
8400 ++#define __optimize(level) __attribute__((__optimize__(level)))
8401 ++#define __nostackprotector __optimize("no-stack-protector")
8402 ++#endif /* GCC_VERSION >= 40400 */
8403 ++
8404 + #if GCC_VERSION >= 40500
8405 +
8406 + #ifndef __CHECKER__
8407 +diff --git a/include/linux/compiler.h b/include/linux/compiler.h
8408 +index fab5dc250c61..e8c9cd18bb05 100644
8409 +--- a/include/linux/compiler.h
8410 ++++ b/include/linux/compiler.h
8411 +@@ -266,6 +266,10 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
8412 +
8413 + #endif /* __ASSEMBLY__ */
8414 +
8415 ++#ifndef __optimize
8416 ++# define __optimize(level)
8417 ++#endif
8418 ++
8419 + /* Compile time object size, -1 for unknown */
8420 + #ifndef __compiletime_object_size
8421 + # define __compiletime_object_size(obj) -1
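The compiler-gcc.h/compiler.h pair defines __optimize(level) for GCC >=
4.4, rebuilds __nostackprotector on top of it, and gives every other
compiler an empty fallback so annotated code still parses. A runnable
standalone version of the same attribute-with-fallback idiom:

    /* Define the attribute where the compiler supports it, and an
     * empty macro elsewhere, so annotated functions compile everywhere. */
    #if defined(__GNUC__) && (__GNUC__ > 4 || \
                              (__GNUC__ == 4 && __GNUC_MINOR__ >= 4))
    # define __optimize(level) __attribute__((__optimize__(level)))
    #else
    # define __optimize(level)
    #endif

    static int __optimize("O2") hot_path(int x)
    {
            return x * 2;
    }

    int main(void)
    {
            return hot_path(21) == 42 ? 0 : 1;
    }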
8422 +diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
8423 +index 8f7788d23b57..a6989e02d0a0 100644
8424 +--- a/include/linux/cpuidle.h
8425 ++++ b/include/linux/cpuidle.h
8426 +@@ -225,7 +225,7 @@ static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev,
8427 + }
8428 + #endif
8429 +
8430 +-#ifdef CONFIG_ARCH_HAS_CPU_RELAX
8431 ++#if defined(CONFIG_CPU_IDLE) && defined(CONFIG_ARCH_HAS_CPU_RELAX)
8432 + void cpuidle_poll_state_init(struct cpuidle_driver *drv);
8433 + #else
8434 + static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
8435 +diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
8436 +index 46930f82a988..7bf3b99e6fbb 100644
8437 +--- a/include/linux/dma-mapping.h
8438 ++++ b/include/linux/dma-mapping.h
8439 +@@ -9,7 +9,6 @@
8440 + #include <linux/dma-debug.h>
8441 + #include <linux/dma-direction.h>
8442 + #include <linux/scatterlist.h>
8443 +-#include <linux/kmemcheck.h>
8444 + #include <linux/bug.h>
8445 + #include <linux/mem_encrypt.h>
8446 +
8447 +@@ -230,7 +229,6 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
8448 + const struct dma_map_ops *ops = get_dma_ops(dev);
8449 + dma_addr_t addr;
8450 +
8451 +- kmemcheck_mark_initialized(ptr, size);
8452 + BUG_ON(!valid_dma_direction(dir));
8453 + addr = ops->map_page(dev, virt_to_page(ptr),
8454 + offset_in_page(ptr), size,
8455 +@@ -263,11 +261,8 @@ static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
8456 + unsigned long attrs)
8457 + {
8458 + const struct dma_map_ops *ops = get_dma_ops(dev);
8459 +- int i, ents;
8460 +- struct scatterlist *s;
8461 ++ int ents;
8462 +
8463 +- for_each_sg(sg, s, nents, i)
8464 +- kmemcheck_mark_initialized(sg_virt(s), s->length);
8465 + BUG_ON(!valid_dma_direction(dir));
8466 + ents = ops->map_sg(dev, sg, nents, dir, attrs);
8467 + BUG_ON(ents < 0);
8468 +@@ -297,7 +292,6 @@ static inline dma_addr_t dma_map_page_attrs(struct device *dev,
8469 + const struct dma_map_ops *ops = get_dma_ops(dev);
8470 + dma_addr_t addr;
8471 +
8472 +- kmemcheck_mark_initialized(page_address(page) + offset, size);
8473 + BUG_ON(!valid_dma_direction(dir));
8474 + addr = ops->map_page(dev, page, offset, size, dir, attrs);
8475 + debug_dma_map_page(dev, page, offset, size, dir, addr, false);
8476 +diff --git a/include/linux/filter.h b/include/linux/filter.h
8477 +index 48ec57e70f9f..42197b16dd78 100644
8478 +--- a/include/linux/filter.h
8479 ++++ b/include/linux/filter.h
8480 +@@ -454,13 +454,11 @@ struct bpf_binary_header {
8481 +
8482 + struct bpf_prog {
8483 + u16 pages; /* Number of allocated pages */
8484 +- kmemcheck_bitfield_begin(meta);
8485 + u16 jited:1, /* Is our filter JIT'ed? */
8486 + locked:1, /* Program image locked? */
8487 + gpl_compatible:1, /* Is filter GPL compatible? */
8488 + cb_access:1, /* Is control block accessed? */
8489 + dst_needed:1; /* Do we need dst entry? */
8490 +- kmemcheck_bitfield_end(meta);
8491 + enum bpf_prog_type type; /* Type of BPF program */
8492 + u32 len; /* Number of filter blocks */
8493 + u32 jited_len; /* Size of jited insns in bytes */
8494 +diff --git a/include/linux/gfp.h b/include/linux/gfp.h
8495 +index 710143741eb5..b041f94678de 100644
8496 +--- a/include/linux/gfp.h
8497 ++++ b/include/linux/gfp.h
8498 +@@ -37,7 +37,6 @@ struct vm_area_struct;
8499 + #define ___GFP_THISNODE 0x40000u
8500 + #define ___GFP_ATOMIC 0x80000u
8501 + #define ___GFP_ACCOUNT 0x100000u
8502 +-#define ___GFP_NOTRACK 0x200000u
8503 + #define ___GFP_DIRECT_RECLAIM 0x400000u
8504 + #define ___GFP_WRITE 0x800000u
8505 + #define ___GFP_KSWAPD_RECLAIM 0x1000000u
8506 +@@ -201,19 +200,11 @@ struct vm_area_struct;
8507 + * __GFP_COMP address compound page metadata.
8508 + *
8509 + * __GFP_ZERO returns a zeroed page on success.
8510 +- *
8511 +- * __GFP_NOTRACK avoids tracking with kmemcheck.
8512 +- *
8513 +- * __GFP_NOTRACK_FALSE_POSITIVE is an alias of __GFP_NOTRACK. It's a means of
8514 +- * distinguishing in the source between false positives and allocations that
8515 +- * cannot be supported (e.g. page tables).
8516 + */
8517 + #define __GFP_COLD ((__force gfp_t)___GFP_COLD)
8518 + #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
8519 + #define __GFP_COMP ((__force gfp_t)___GFP_COMP)
8520 + #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
8521 +-#define __GFP_NOTRACK ((__force gfp_t)___GFP_NOTRACK)
8522 +-#define __GFP_NOTRACK_FALSE_POSITIVE (__GFP_NOTRACK)
8523 +
8524 + /* Disable lockdep for GFP context tracking */
8525 + #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
8526 +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
8527 +index baeb872283d9..69c238210325 100644
8528 +--- a/include/linux/interrupt.h
8529 ++++ b/include/linux/interrupt.h
8530 +@@ -594,21 +594,6 @@ static inline void tasklet_hi_schedule(struct tasklet_struct *t)
8531 + __tasklet_hi_schedule(t);
8532 + }
8533 +
8534 +-extern void __tasklet_hi_schedule_first(struct tasklet_struct *t);
8535 +-
8536 +-/*
8537 +- * This version avoids touching any other tasklets. Needed for kmemcheck
8538 +- * in order not to take any page faults while enqueueing this tasklet;
8539 +- * consider VERY carefully whether you really need this or
8540 +- * tasklet_hi_schedule()...
8541 +- */
8542 +-static inline void tasklet_hi_schedule_first(struct tasklet_struct *t)
8543 +-{
8544 +- if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
8545 +- __tasklet_hi_schedule_first(t);
8546 +-}
8547 +-
8548 +-
8549 + static inline void tasklet_disable_nosync(struct tasklet_struct *t)
8550 + {
8551 + atomic_inc(&t->count);
8552 +diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
8553 +index 606b6bce3a5b..29290bfb94a8 100644
8554 +--- a/include/linux/jbd2.h
8555 ++++ b/include/linux/jbd2.h
8556 +@@ -418,26 +418,41 @@ static inline void jbd_unlock_bh_journal_head(struct buffer_head *bh)
8557 + #define JI_WAIT_DATA (1 << __JI_WAIT_DATA)
8558 +
8559 + /**
8560 +- * struct jbd_inode is the structure linking inodes in ordered mode
8561 +- * present in a transaction so that we can sync them during commit.
8562 ++ * struct jbd_inode - The jbd_inode type is the structure linking inodes in
8563 ++ * ordered mode present in a transaction so that we can sync them during commit.
8564 + */
8565 + struct jbd2_inode {
8566 +- /* Which transaction does this inode belong to? Either the running
8567 +- * transaction or the committing one. [j_list_lock] */
8568 ++ /**
8569 ++ * @i_transaction:
8570 ++ *
8571 ++ * Which transaction does this inode belong to? Either the running
8572 ++ * transaction or the committing one. [j_list_lock]
8573 ++ */
8574 + transaction_t *i_transaction;
8575 +
8576 +- /* Pointer to the running transaction modifying inode's data in case
8577 +- * there is already a committing transaction touching it. [j_list_lock] */
8578 ++ /**
8579 ++ * @i_next_transaction:
8580 ++ *
8581 ++ * Pointer to the running transaction modifying inode's data in case
8582 ++ * there is already a committing transaction touching it. [j_list_lock]
8583 ++ */
8584 + transaction_t *i_next_transaction;
8585 +
8586 +- /* List of inodes in the i_transaction [j_list_lock] */
8587 ++ /**
8588 ++ * @i_list: List of inodes in the i_transaction [j_list_lock]
8589 ++ */
8590 + struct list_head i_list;
8591 +
8592 +- /* VFS inode this inode belongs to [constant during the lifetime
8593 +- * of the structure] */
8594 ++ /**
8595 ++ * @i_vfs_inode:
8596 ++ *
8597 ++ * VFS inode this inode belongs to [constant for lifetime of structure]
8598 ++ */
8599 + struct inode *i_vfs_inode;
8600 +
8601 +- /* Flags of inode [j_list_lock] */
8602 ++ /**
8603 ++ * @i_flags: Flags of inode [j_list_lock]
8604 ++ */
8605 + unsigned long i_flags;
8606 + };
8607 +
8608 +@@ -447,12 +462,20 @@ struct jbd2_revoke_table_s;
8609 + * struct handle_s - The handle_s type is the concrete type associated with
8610 + * handle_t.
8611 + * @h_transaction: Which compound transaction is this update a part of?
8612 ++ * @h_journal: Which journal handle belongs to - used iff h_reserved set.
8613 ++ * @h_rsv_handle: Handle reserved for finishing the logical operation.
8614 + * @h_buffer_credits: Number of remaining buffers we are allowed to dirty.
8615 +- * @h_ref: Reference count on this handle
8616 +- * @h_err: Field for caller's use to track errors through large fs operations
8617 +- * @h_sync: flag for sync-on-close
8618 +- * @h_jdata: flag to force data journaling
8619 +- * @h_aborted: flag indicating fatal error on handle
8620 ++ * @h_ref: Reference count on this handle.
8621 ++ * @h_err: Field for caller's use to track errors through large fs operations.
8622 ++ * @h_sync: Flag for sync-on-close.
8623 ++ * @h_jdata: Flag to force data journaling.
8624 ++ * @h_reserved: Flag for handle for reserved credits.
8625 ++ * @h_aborted: Flag indicating fatal error on handle.
8626 ++ * @h_type: For handle statistics.
8627 ++ * @h_line_no: For handle statistics.
8628 ++ * @h_start_jiffies: Handle Start time.
8629 ++ * @h_requested_credits: Holds @h_buffer_credits after handle is started.
8630 ++ * @saved_alloc_context: Saved context while transaction is open.
8631 + **/
8632 +
8633 + /* Docbook can't yet cope with the bit fields, but will leave the documentation
8634 +@@ -462,32 +485,23 @@ struct jbd2_revoke_table_s;
8635 + struct jbd2_journal_handle
8636 + {
8637 + union {
8638 +- /* Which compound transaction is this update a part of? */
8639 + transaction_t *h_transaction;
8640 + /* Which journal handle belongs to - used iff h_reserved set */
8641 + journal_t *h_journal;
8642 + };
8643 +
8644 +- /* Handle reserved for finishing the logical operation */
8645 + handle_t *h_rsv_handle;
8646 +-
8647 +- /* Number of remaining buffers we are allowed to dirty: */
8648 + int h_buffer_credits;
8649 +-
8650 +- /* Reference count on this handle */
8651 + int h_ref;
8652 +-
8653 +- /* Field for caller's use to track errors through large fs */
8654 +- /* operations */
8655 + int h_err;
8656 +
8657 + /* Flags [no locking] */
8658 +- unsigned int h_sync: 1; /* sync-on-close */
8659 +- unsigned int h_jdata: 1; /* force data journaling */
8660 +- unsigned int h_reserved: 1; /* handle with reserved credits */
8661 +- unsigned int h_aborted: 1; /* fatal error on handle */
8662 +- unsigned int h_type: 8; /* for handle statistics */
8663 +- unsigned int h_line_no: 16; /* for handle statistics */
8664 ++ unsigned int h_sync: 1;
8665 ++ unsigned int h_jdata: 1;
8666 ++ unsigned int h_reserved: 1;
8667 ++ unsigned int h_aborted: 1;
8668 ++ unsigned int h_type: 8;
8669 ++ unsigned int h_line_no: 16;
8670 +
8671 + unsigned long h_start_jiffies;
8672 + unsigned int h_requested_credits;
8673 +@@ -729,228 +743,253 @@ jbd2_time_diff(unsigned long start, unsigned long end)
8674 + /**
8675 + * struct journal_s - The journal_s type is the concrete type associated with
8676 + * journal_t.
8677 +- * @j_flags: General journaling state flags
8678 +- * @j_errno: Is there an outstanding uncleared error on the journal (from a
8679 +- * prior abort)?
8680 +- * @j_sb_buffer: First part of superblock buffer
8681 +- * @j_superblock: Second part of superblock buffer
8682 +- * @j_format_version: Version of the superblock format
8683 +- * @j_state_lock: Protect the various scalars in the journal
8684 +- * @j_barrier_count: Number of processes waiting to create a barrier lock
8685 +- * @j_barrier: The barrier lock itself
8686 +- * @j_running_transaction: The current running transaction..
8687 +- * @j_committing_transaction: the transaction we are pushing to disk
8688 +- * @j_checkpoint_transactions: a linked circular list of all transactions
8689 +- * waiting for checkpointing
8690 +- * @j_wait_transaction_locked: Wait queue for waiting for a locked transaction
8691 +- * to start committing, or for a barrier lock to be released
8692 +- * @j_wait_done_commit: Wait queue for waiting for commit to complete
8693 +- * @j_wait_commit: Wait queue to trigger commit
8694 +- * @j_wait_updates: Wait queue to wait for updates to complete
8695 +- * @j_wait_reserved: Wait queue to wait for reserved buffer credits to drop
8696 +- * @j_checkpoint_mutex: Mutex for locking against concurrent checkpoints
8697 +- * @j_head: Journal head - identifies the first unused block in the journal
8698 +- * @j_tail: Journal tail - identifies the oldest still-used block in the
8699 +- * journal.
8700 +- * @j_free: Journal free - how many free blocks are there in the journal?
8701 +- * @j_first: The block number of the first usable block
8702 +- * @j_last: The block number one beyond the last usable block
8703 +- * @j_dev: Device where we store the journal
8704 +- * @j_blocksize: blocksize for the location where we store the journal.
8705 +- * @j_blk_offset: starting block offset for into the device where we store the
8706 +- * journal
8707 +- * @j_fs_dev: Device which holds the client fs. For internal journal this will
8708 +- * be equal to j_dev
8709 +- * @j_reserved_credits: Number of buffers reserved from the running transaction
8710 +- * @j_maxlen: Total maximum capacity of the journal region on disk.
8711 +- * @j_list_lock: Protects the buffer lists and internal buffer state.
8712 +- * @j_inode: Optional inode where we store the journal. If present, all journal
8713 +- * block numbers are mapped into this inode via bmap().
8714 +- * @j_tail_sequence: Sequence number of the oldest transaction in the log
8715 +- * @j_transaction_sequence: Sequence number of the next transaction to grant
8716 +- * @j_commit_sequence: Sequence number of the most recently committed
8717 +- * transaction
8718 +- * @j_commit_request: Sequence number of the most recent transaction wanting
8719 +- * commit
8720 +- * @j_uuid: Uuid of client object.
8721 +- * @j_task: Pointer to the current commit thread for this journal
8722 +- * @j_max_transaction_buffers: Maximum number of metadata buffers to allow in a
8723 +- * single compound commit transaction
8724 +- * @j_commit_interval: What is the maximum transaction lifetime before we begin
8725 +- * a commit?
8726 +- * @j_commit_timer: The timer used to wakeup the commit thread
8727 +- * @j_revoke_lock: Protect the revoke table
8728 +- * @j_revoke: The revoke table - maintains the list of revoked blocks in the
8729 +- * current transaction.
8730 +- * @j_revoke_table: alternate revoke tables for j_revoke
8731 +- * @j_wbuf: array of buffer_heads for jbd2_journal_commit_transaction
8732 +- * @j_wbufsize: maximum number of buffer_heads allowed in j_wbuf, the
8733 +- * number that will fit in j_blocksize
8734 +- * @j_last_sync_writer: most recent pid which did a synchronous write
8735 +- * @j_history_lock: Protect the transactions statistics history
8736 +- * @j_proc_entry: procfs entry for the jbd statistics directory
8737 +- * @j_stats: Overall statistics
8738 +- * @j_private: An opaque pointer to fs-private information.
8739 +- * @j_trans_commit_map: Lockdep entity to track transaction commit dependencies
8740 + */
8741 +-
8742 + struct journal_s
8743 + {
8744 +- /* General journaling state flags [j_state_lock] */
8745 ++ /**
8746 ++ * @j_flags: General journaling state flags [j_state_lock]
8747 ++ */
8748 + unsigned long j_flags;
8749 +
8750 +- /*
8751 ++ /**
8752 ++ * @j_errno:
8753 ++ *
8754 + * Is there an outstanding uncleared error on the journal (from a prior
8755 + * abort)? [j_state_lock]
8756 + */
8757 + int j_errno;
8758 +
8759 +- /* The superblock buffer */
8760 ++ /**
8761 ++ * @j_sb_buffer: The first part of the superblock buffer.
8762 ++ */
8763 + struct buffer_head *j_sb_buffer;
8764 ++
8765 ++ /**
8766 ++ * @j_superblock: The second part of the superblock buffer.
8767 ++ */
8768 + journal_superblock_t *j_superblock;
8769 +
8770 +- /* Version of the superblock format */
8771 ++ /**
8772 ++ * @j_format_version: Version of the superblock format.
8773 ++ */
8774 + int j_format_version;
8775 +
8776 +- /*
8777 +- * Protect the various scalars in the journal
8778 ++ /**
8779 ++ * @j_state_lock: Protect the various scalars in the journal.
8780 + */
8781 + rwlock_t j_state_lock;
8782 +
8783 +- /*
8784 ++ /**
8785 ++ * @j_barrier_count:
8786 ++ *
8787 + * Number of processes waiting to create a barrier lock [j_state_lock]
8788 + */
8789 + int j_barrier_count;
8790 +
8791 +- /* The barrier lock itself */
8792 ++ /**
8793 ++ * @j_barrier: The barrier lock itself.
8794 ++ */
8795 + struct mutex j_barrier;
8796 +
8797 +- /*
8798 ++ /**
8799 ++ * @j_running_transaction:
8800 ++ *
8801 + * Transactions: The current running transaction...
8802 + * [j_state_lock] [caller holding open handle]
8803 + */
8804 + transaction_t *j_running_transaction;
8805 +
8806 +- /*
8807 ++ /**
8808 ++ * @j_committing_transaction:
8809 ++ *
8810 + * the transaction we are pushing to disk
8811 + * [j_state_lock] [caller holding open handle]
8812 + */
8813 + transaction_t *j_committing_transaction;
8814 +
8815 +- /*
8816 ++ /**
8817 ++ * @j_checkpoint_transactions:
8818 ++ *
8819 + * ... and a linked circular list of all transactions waiting for
8820 + * checkpointing. [j_list_lock]
8821 + */
8822 + transaction_t *j_checkpoint_transactions;
8823 +
8824 +- /*
8825 ++ /**
8826 ++ * @j_wait_transaction_locked:
8827 ++ *
8828 + * Wait queue for waiting for a locked transaction to start committing,
8829 +- * or for a barrier lock to be released
8830 ++ * or for a barrier lock to be released.
8831 + */
8832 + wait_queue_head_t j_wait_transaction_locked;
8833 +
8834 +- /* Wait queue for waiting for commit to complete */
8835 ++ /**
8836 ++ * @j_wait_done_commit: Wait queue for waiting for commit to complete.
8837 ++ */
8838 + wait_queue_head_t j_wait_done_commit;
8839 +
8840 +- /* Wait queue to trigger commit */
8841 ++ /**
8842 ++ * @j_wait_commit: Wait queue to trigger commit.
8843 ++ */
8844 + wait_queue_head_t j_wait_commit;
8845 +
8846 +- /* Wait queue to wait for updates to complete */
8847 ++ /**
8848 ++ * @j_wait_updates: Wait queue to wait for updates to complete.
8849 ++ */
8850 + wait_queue_head_t j_wait_updates;
8851 +
8852 +- /* Wait queue to wait for reserved buffer credits to drop */
8853 ++ /**
8854 ++ * @j_wait_reserved:
8855 ++ *
8856 ++ * Wait queue to wait for reserved buffer credits to drop.
8857 ++ */
8858 + wait_queue_head_t j_wait_reserved;
8859 +
8860 +- /* Semaphore for locking against concurrent checkpoints */
8861 ++ /**
8862 ++ * @j_checkpoint_mutex:
8863 ++ *
8864 ++ * Semaphore for locking against concurrent checkpoints.
8865 ++ */
8866 + struct mutex j_checkpoint_mutex;
8867 +
8868 +- /*
8869 ++ /**
8870 ++ * @j_chkpt_bhs:
8871 ++ *
8872 + * List of buffer heads used by the checkpoint routine. This
8873 + * was moved from jbd2_log_do_checkpoint() to reduce stack
8874 + * usage. Access to this array is controlled by the
8875 +- * j_checkpoint_mutex. [j_checkpoint_mutex]
8876 ++ * @j_checkpoint_mutex. [j_checkpoint_mutex]
8877 + */
8878 + struct buffer_head *j_chkpt_bhs[JBD2_NR_BATCH];
8879 +-
8880 +- /*
8881 ++
8882 ++ /**
8883 ++ * @j_head:
8884 ++ *
8885 + * Journal head: identifies the first unused block in the journal.
8886 + * [j_state_lock]
8887 + */
8888 + unsigned long j_head;
8889 +
8890 +- /*
8891 ++ /**
8892 ++ * @j_tail:
8893 ++ *
8894 + * Journal tail: identifies the oldest still-used block in the journal.
8895 + * [j_state_lock]
8896 + */
8897 + unsigned long j_tail;
8898 +
8899 +- /*
8900 ++ /**
8901 ++ * @j_free:
8902 ++ *
8903 + * Journal free: how many free blocks are there in the journal?
8904 + * [j_state_lock]
8905 + */
8906 + unsigned long j_free;
8907 +
8908 +- /*
8909 +- * Journal start and end: the block numbers of the first usable block
8910 +- * and one beyond the last usable block in the journal. [j_state_lock]
8911 ++ /**
8912 ++ * @j_first:
8913 ++ *
8914 ++ * The block number of the first usable block in the journal
8915 ++ * [j_state_lock].
8916 + */
8917 + unsigned long j_first;
8918 ++
8919 ++ /**
8920 ++ * @j_last:
8921 ++ *
8922 ++ * The block number one beyond the last usable block in the journal
8923 ++ * [j_state_lock].
8924 ++ */
8925 + unsigned long j_last;
8926 +
8927 +- /*
8928 +- * Device, blocksize and starting block offset for the location where we
8929 +- * store the journal.
8930 ++ /**
8931 ++ * @j_dev: Device where we store the journal.
8932 + */
8933 + struct block_device *j_dev;
8934 ++
8935 ++ /**
8936 ++ * @j_blocksize: Block size for the location where we store the journal.
8937 ++ */
8938 + int j_blocksize;
8939 ++
8940 ++ /**
8941 ++ * @j_blk_offset:
8942 ++ *
8943 ++ * Starting block offset into the device where we store the journal.
8944 ++ */
8945 + unsigned long long j_blk_offset;
8946 ++
8947 ++ /**
8948 ++ * @j_devname: Journal device name.
8949 ++ */
8950 + char j_devname[BDEVNAME_SIZE+24];
8951 +
8952 +- /*
8953 ++ /**
8954 ++ * @j_fs_dev:
8955 ++ *
8956 + * Device which holds the client fs. For internal journal this will be
8957 + * equal to j_dev.
8958 + */
8959 + struct block_device *j_fs_dev;
8960 +
8961 +- /* Total maximum capacity of the journal region on disk. */
8962 ++ /**
8963 ++ * @j_maxlen: Total maximum capacity of the journal region on disk.
8964 ++ */
8965 + unsigned int j_maxlen;
8966 +
8967 +- /* Number of buffers reserved from the running transaction */
8968 ++ /**
8969 ++ * @j_reserved_credits:
8970 ++ *
8971 ++ * Number of buffers reserved from the running transaction.
8972 ++ */
8973 + atomic_t j_reserved_credits;
8974 +
8975 +- /*
8976 +- * Protects the buffer lists and internal buffer state.
8977 ++ /**
8978 ++ * @j_list_lock: Protects the buffer lists and internal buffer state.
8979 + */
8980 + spinlock_t j_list_lock;
8981 +
8982 +- /* Optional inode where we store the journal. If present, all */
8983 +- /* journal block numbers are mapped into this inode via */
8984 +- /* bmap(). */
8985 ++ /**
8986 ++ * @j_inode:
8987 ++ *
8988 ++ * Optional inode where we store the journal. If present, all
8989 ++ * journal block numbers are mapped into this inode via bmap().
8990 ++ */
8991 + struct inode *j_inode;
8992 +
8993 +- /*
8994 ++ /**
8995 ++ * @j_tail_sequence:
8996 ++ *
8997 + * Sequence number of the oldest transaction in the log [j_state_lock]
8998 + */
8999 + tid_t j_tail_sequence;
9000 +
9001 +- /*
9002 ++ /**
9003 ++ * @j_transaction_sequence:
9004 ++ *
9005 + * Sequence number of the next transaction to grant [j_state_lock]
9006 + */
9007 + tid_t j_transaction_sequence;
9008 +
9009 +- /*
9010 ++ /**
9011 ++ * @j_commit_sequence:
9012 ++ *
9013 + * Sequence number of the most recently committed transaction
9014 + * [j_state_lock].
9015 + */
9016 + tid_t j_commit_sequence;
9017 +
9018 +- /*
9019 ++ /**
9020 ++ * @j_commit_request:
9021 ++ *
9022 + * Sequence number of the most recent transaction wanting commit
9023 + * [j_state_lock]
9024 + */
9025 + tid_t j_commit_request;
9026 +
9027 +- /*
9028 ++ /**
9029 ++ * @j_uuid:
9030 ++ *
9031 + * Journal uuid: identifies the object (filesystem, LVM volume etc)
9032 + * backed by this journal. This will eventually be replaced by an array
9033 + * of uuids, allowing us to index multiple devices within a single
9034 +@@ -958,85 +997,151 @@ struct journal_s
9035 + */
9036 + __u8 j_uuid[16];
9037 +
9038 +- /* Pointer to the current commit thread for this journal */
9039 ++ /**
9040 ++ * @j_task: Pointer to the current commit thread for this journal.
9041 ++ */
9042 + struct task_struct *j_task;
9043 +
9044 +- /*
9045 ++ /**
9046 ++ * @j_max_transaction_buffers:
9047 ++ *
9048 + * Maximum number of metadata buffers to allow in a single compound
9049 +- * commit transaction
9050 ++ * commit transaction.
9051 + */
9052 + int j_max_transaction_buffers;
9053 +
9054 +- /*
9055 ++ /**
9056 ++ * @j_commit_interval:
9057 ++ *
9058 + * What is the maximum transaction lifetime before we begin a commit?
9059 + */
9060 + unsigned long j_commit_interval;
9061 +
9062 +- /* The timer used to wakeup the commit thread: */
9063 ++ /**
9064 ++ * @j_commit_timer: The timer used to wakeup the commit thread.
9065 ++ */
9066 + struct timer_list j_commit_timer;
9067 +
9068 +- /*
9069 +- * The revoke table: maintains the list of revoked blocks in the
9070 +- * current transaction. [j_revoke_lock]
9071 ++ /**
9072 ++ * @j_revoke_lock: Protect the revoke table.
9073 + */
9074 + spinlock_t j_revoke_lock;
9075 ++
9076 ++ /**
9077 ++ * @j_revoke:
9078 ++ *
9079 ++ * The revoke table - maintains the list of revoked blocks in the
9080 ++ * current transaction.
9081 ++ */
9082 + struct jbd2_revoke_table_s *j_revoke;
9083 ++
9084 ++ /**
9085 ++ * @j_revoke_table: Alternate revoke tables for j_revoke.
9086 ++ */
9087 + struct jbd2_revoke_table_s *j_revoke_table[2];
9088 +
9089 +- /*
9090 +- * array of bhs for jbd2_journal_commit_transaction
9091 ++ /**
9092 ++ * @j_wbuf: Array of bhs for jbd2_journal_commit_transaction.
9093 + */
9094 + struct buffer_head **j_wbuf;
9095 ++
9096 ++ /**
9097 ++ * @j_wbufsize:
9098 ++ *
9099 ++ * Size of @j_wbuf array.
9100 ++ */
9101 + int j_wbufsize;
9102 +
9103 +- /*
9104 +- * this is the pid of hte last person to run a synchronous operation
9105 +- * through the journal
9106 ++ /**
9107 ++ * @j_last_sync_writer:
9108 ++ *
9109 ++ * The pid of the last person to run a synchronous operation
9110 ++ * through the journal.
9111 + */
9112 + pid_t j_last_sync_writer;
9113 +
9114 +- /*
9115 +- * the average amount of time in nanoseconds it takes to commit a
9116 ++ /**
9117 ++ * @j_average_commit_time:
9118 ++ *
9119 ++ * The average amount of time in nanoseconds it takes to commit a
9120 + * transaction to disk. [j_state_lock]
9121 + */
9122 + u64 j_average_commit_time;
9123 +
9124 +- /*
9125 +- * minimum and maximum times that we should wait for
9126 +- * additional filesystem operations to get batched into a
9127 +- * synchronous handle in microseconds
9128 ++ /**
9129 ++ * @j_min_batch_time:
9130 ++ *
9131 ++ * Minimum time that we should wait for additional filesystem operations
9132 ++ * to get batched into a synchronous handle in microseconds.
9133 + */
9134 + u32 j_min_batch_time;
9135 ++
9136 ++ /**
9137 ++ * @j_max_batch_time:
9138 ++ *
9139 ++ * Maximum time that we should wait for additional filesystem operations
9140 ++ * to get batched into a synchronous handle in microseconds.
9141 ++ */
9142 + u32 j_max_batch_time;
9143 +
9144 +- /* This function is called when a transaction is closed */
9145 ++ /**
9146 ++ * @j_commit_callback:
9147 ++ *
9148 ++ * This function is called when a transaction is closed.
9149 ++ */
9150 + void (*j_commit_callback)(journal_t *,
9151 + transaction_t *);
9152 +
9153 + /*
9154 + * Journal statistics
9155 + */
9156 ++
9157 ++ /**
9158 ++ * @j_history_lock: Protect the transactions statistics history.
9159 ++ */
9160 + spinlock_t j_history_lock;
9161 ++
9162 ++ /**
9163 ++ * @j_proc_entry: procfs entry for the jbd statistics directory.
9164 ++ */
9165 + struct proc_dir_entry *j_proc_entry;
9166 ++
9167 ++ /**
9168 ++ * @j_stats: Overall statistics.
9169 ++ */
9170 + struct transaction_stats_s j_stats;
9171 +
9172 +- /* Failed journal commit ID */
9173 ++ /**
9174 ++ * @j_failed_commit: Failed journal commit ID.
9175 ++ */
9176 + unsigned int j_failed_commit;
9177 +
9178 +- /*
9179 ++ /**
9180 ++ * @j_private:
9181 ++ *
9182 + * An opaque pointer to fs-private information. ext3 puts its
9183 +- * superblock pointer here
9184 ++ * superblock pointer here.
9185 + */
9186 + void *j_private;
9187 +
9188 +- /* Reference to checksum algorithm driver via cryptoapi */
9189 ++ /**
9190 ++ * @j_chksum_driver:
9191 ++ *
9192 ++ * Reference to checksum algorithm driver via cryptoapi.
9193 ++ */
9194 + struct crypto_shash *j_chksum_driver;
9195 +
9196 +- /* Precomputed journal UUID checksum for seeding other checksums */
9197 ++ /**
9198 ++ * @j_csum_seed:
9199 ++ *
9200 ++ * Precomputed journal UUID checksum for seeding other checksums.
9201 ++ */
9202 + __u32 j_csum_seed;
9203 +
9204 + #ifdef CONFIG_DEBUG_LOCK_ALLOC
9205 +- /*
9206 ++ /**
9207 ++ * @j_trans_commit_map:
9208 ++ *
9209 + * Lockdep entity to track transaction commit dependencies. Handles
9210 + * hold this "lock" for read, when we wait for commit, we acquire the
9211 + * "lock" for writing. This matches the properties of jbd2 journalling
9212 +diff --git a/include/linux/kmemcheck.h b/include/linux/kmemcheck.h
9213 +deleted file mode 100644
9214 +index 7b1d7bead7d9..000000000000
9215 +--- a/include/linux/kmemcheck.h
9216 ++++ /dev/null
9217 +@@ -1,172 +0,0 @@
9218 +-/* SPDX-License-Identifier: GPL-2.0 */
9219 +-#ifndef LINUX_KMEMCHECK_H
9220 +-#define LINUX_KMEMCHECK_H
9221 +-
9222 +-#include <linux/mm_types.h>
9223 +-#include <linux/types.h>
9224 +-
9225 +-#ifdef CONFIG_KMEMCHECK
9226 +-extern int kmemcheck_enabled;
9227 +-
9228 +-/* The slab-related functions. */
9229 +-void kmemcheck_alloc_shadow(struct page *page, int order, gfp_t flags, int node);
9230 +-void kmemcheck_free_shadow(struct page *page, int order);
9231 +-void kmemcheck_slab_alloc(struct kmem_cache *s, gfp_t gfpflags, void *object,
9232 +- size_t size);
9233 +-void kmemcheck_slab_free(struct kmem_cache *s, void *object, size_t size);
9234 +-
9235 +-void kmemcheck_pagealloc_alloc(struct page *p, unsigned int order,
9236 +- gfp_t gfpflags);
9237 +-
9238 +-void kmemcheck_show_pages(struct page *p, unsigned int n);
9239 +-void kmemcheck_hide_pages(struct page *p, unsigned int n);
9240 +-
9241 +-bool kmemcheck_page_is_tracked(struct page *p);
9242 +-
9243 +-void kmemcheck_mark_unallocated(void *address, unsigned int n);
9244 +-void kmemcheck_mark_uninitialized(void *address, unsigned int n);
9245 +-void kmemcheck_mark_initialized(void *address, unsigned int n);
9246 +-void kmemcheck_mark_freed(void *address, unsigned int n);
9247 +-
9248 +-void kmemcheck_mark_unallocated_pages(struct page *p, unsigned int n);
9249 +-void kmemcheck_mark_uninitialized_pages(struct page *p, unsigned int n);
9250 +-void kmemcheck_mark_initialized_pages(struct page *p, unsigned int n);
9251 +-
9252 +-int kmemcheck_show_addr(unsigned long address);
9253 +-int kmemcheck_hide_addr(unsigned long address);
9254 +-
9255 +-bool kmemcheck_is_obj_initialized(unsigned long addr, size_t size);
9256 +-
9257 +-/*
9258 +- * Bitfield annotations
9259 +- *
9260 +- * How to use: If you have a struct using bitfields, for example
9261 +- *
9262 +- * struct a {
9263 +- * int x:8, y:8;
9264 +- * };
9265 +- *
9266 +- * then this should be rewritten as
9267 +- *
9268 +- * struct a {
9269 +- * kmemcheck_bitfield_begin(flags);
9270 +- * int x:8, y:8;
9271 +- * kmemcheck_bitfield_end(flags);
9272 +- * };
9273 +- *
9274 +- * Now the "flags_begin" and "flags_end" members may be used to refer to the
9275 +- * beginning and end, respectively, of the bitfield (and things like
9276 +- * &x.flags_begin is allowed). As soon as the struct is allocated, the bit-
9277 +- * fields should be annotated:
9278 +- *
9279 +- * struct a *a = kmalloc(sizeof(struct a), GFP_KERNEL);
9280 +- * kmemcheck_annotate_bitfield(a, flags);
9281 +- */
9282 +-#define kmemcheck_bitfield_begin(name) \
9283 +- int name##_begin[0];
9284 +-
9285 +-#define kmemcheck_bitfield_end(name) \
9286 +- int name##_end[0];
9287 +-
9288 +-#define kmemcheck_annotate_bitfield(ptr, name) \
9289 +- do { \
9290 +- int _n; \
9291 +- \
9292 +- if (!ptr) \
9293 +- break; \
9294 +- \
9295 +- _n = (long) &((ptr)->name##_end) \
9296 +- - (long) &((ptr)->name##_begin); \
9297 +- BUILD_BUG_ON(_n < 0); \
9298 +- \
9299 +- kmemcheck_mark_initialized(&((ptr)->name##_begin), _n); \
9300 +- } while (0)
9301 +-
9302 +-#define kmemcheck_annotate_variable(var) \
9303 +- do { \
9304 +- kmemcheck_mark_initialized(&(var), sizeof(var)); \
9305 +- } while (0) \
9306 +-
9307 +-#else
9308 +-#define kmemcheck_enabled 0
9309 +-
9310 +-static inline void
9311 +-kmemcheck_alloc_shadow(struct page *page, int order, gfp_t flags, int node)
9312 +-{
9313 +-}
9314 +-
9315 +-static inline void
9316 +-kmemcheck_free_shadow(struct page *page, int order)
9317 +-{
9318 +-}
9319 +-
9320 +-static inline void
9321 +-kmemcheck_slab_alloc(struct kmem_cache *s, gfp_t gfpflags, void *object,
9322 +- size_t size)
9323 +-{
9324 +-}
9325 +-
9326 +-static inline void kmemcheck_slab_free(struct kmem_cache *s, void *object,
9327 +- size_t size)
9328 +-{
9329 +-}
9330 +-
9331 +-static inline void kmemcheck_pagealloc_alloc(struct page *p,
9332 +- unsigned int order, gfp_t gfpflags)
9333 +-{
9334 +-}
9335 +-
9336 +-static inline bool kmemcheck_page_is_tracked(struct page *p)
9337 +-{
9338 +- return false;
9339 +-}
9340 +-
9341 +-static inline void kmemcheck_mark_unallocated(void *address, unsigned int n)
9342 +-{
9343 +-}
9344 +-
9345 +-static inline void kmemcheck_mark_uninitialized(void *address, unsigned int n)
9346 +-{
9347 +-}
9348 +-
9349 +-static inline void kmemcheck_mark_initialized(void *address, unsigned int n)
9350 +-{
9351 +-}
9352 +-
9353 +-static inline void kmemcheck_mark_freed(void *address, unsigned int n)
9354 +-{
9355 +-}
9356 +-
9357 +-static inline void kmemcheck_mark_unallocated_pages(struct page *p,
9358 +- unsigned int n)
9359 +-{
9360 +-}
9361 +-
9362 +-static inline void kmemcheck_mark_uninitialized_pages(struct page *p,
9363 +- unsigned int n)
9364 +-{
9365 +-}
9366 +-
9367 +-static inline void kmemcheck_mark_initialized_pages(struct page *p,
9368 +- unsigned int n)
9369 +-{
9370 +-}
9371 +-
9372 +-static inline bool kmemcheck_is_obj_initialized(unsigned long addr, size_t size)
9373 +-{
9374 +- return true;
9375 +-}
9376 +-
9377 +-#define kmemcheck_bitfield_begin(name)
9378 +-#define kmemcheck_bitfield_end(name)
9379 +-#define kmemcheck_annotate_bitfield(ptr, name) \
9380 +- do { \
9381 +- } while (0)
9382 +-
9383 +-#define kmemcheck_annotate_variable(var) \
9384 +- do { \
9385 +- } while (0)
9386 +-
9387 +-#endif /* CONFIG_KMEMCHECK */
9388 +-
9389 +-#endif /* LINUX_KMEMCHECK_H */
9390 +diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
9391 +index a13525daf09b..ae15864c8708 100644
9392 +--- a/include/linux/mlx5/driver.h
9393 ++++ b/include/linux/mlx5/driver.h
9394 +@@ -1201,7 +1201,7 @@ mlx5_get_vector_affinity(struct mlx5_core_dev *dev, int vector)
9395 + int eqn;
9396 + int err;
9397 +
9398 +- err = mlx5_vector2eqn(dev, vector, &eqn, &irq);
9399 ++ err = mlx5_vector2eqn(dev, MLX5_EQ_VEC_COMP_BASE + vector, &eqn, &irq);
9400 + if (err)
9401 + return NULL;
9402 +
9403 +diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
9404 +index c30b32e3c862..10191c28fc04 100644
9405 +--- a/include/linux/mm_inline.h
9406 ++++ b/include/linux/mm_inline.h
9407 +@@ -127,10 +127,4 @@ static __always_inline enum lru_list page_lru(struct page *page)
9408 +
9409 + #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
9410 +
9411 +-#ifdef arch_unmap_kpfn
9412 +-extern void arch_unmap_kpfn(unsigned long pfn);
9413 +-#else
9414 +-static __always_inline void arch_unmap_kpfn(unsigned long pfn) { }
9415 +-#endif
9416 +-
9417 + #endif
9418 +diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
9419 +index c85f11dafd56..9f0bb908e2b5 100644
9420 +--- a/include/linux/mm_types.h
9421 ++++ b/include/linux/mm_types.h
9422 +@@ -207,14 +207,6 @@ struct page {
9423 + not kmapped, ie. highmem) */
9424 + #endif /* WANT_PAGE_VIRTUAL */
9425 +
9426 +-#ifdef CONFIG_KMEMCHECK
9427 +- /*
9428 +- * kmemcheck wants to track the status of each byte in a page; this
9429 +- * is a pointer to such a status block. NULL if not tracked.
9430 +- */
9431 +- void *shadow;
9432 +-#endif
9433 +-
9434 + #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
9435 + int _last_cpupid;
9436 + #endif
9437 +diff --git a/include/linux/net.h b/include/linux/net.h
9438 +index d97d80d7fdf8..caeb159abda5 100644
9439 +--- a/include/linux/net.h
9440 ++++ b/include/linux/net.h
9441 +@@ -22,7 +22,6 @@
9442 + #include <linux/random.h>
9443 + #include <linux/wait.h>
9444 + #include <linux/fcntl.h> /* For O_CLOEXEC and O_NONBLOCK */
9445 +-#include <linux/kmemcheck.h>
9446 + #include <linux/rcupdate.h>
9447 + #include <linux/once.h>
9448 + #include <linux/fs.h>
9449 +@@ -111,9 +110,7 @@ struct socket_wq {
9450 + struct socket {
9451 + socket_state state;
9452 +
9453 +- kmemcheck_bitfield_begin(type);
9454 + short type;
9455 +- kmemcheck_bitfield_end(type);
9456 +
9457 + unsigned long flags;
9458 +
9459 +diff --git a/include/linux/nospec.h b/include/linux/nospec.h
9460 +index b99bced39ac2..fbc98e2c8228 100644
9461 +--- a/include/linux/nospec.h
9462 ++++ b/include/linux/nospec.h
9463 +@@ -19,20 +19,6 @@
9464 + static inline unsigned long array_index_mask_nospec(unsigned long index,
9465 + unsigned long size)
9466 + {
9467 +- /*
9468 +- * Warn developers about inappropriate array_index_nospec() usage.
9469 +- *
9470 +- * Even if the CPU speculates past the WARN_ONCE branch, the
9471 +- * sign bit of @index is taken into account when generating the
9472 +- * mask.
9473 +- *
9474 +- * This warning is compiled out when the compiler can infer that
9475 +- * @index and @size are less than LONG_MAX.
9476 +- */
9477 +- if (WARN_ONCE(index > LONG_MAX || size > LONG_MAX,
9478 +- "array_index_nospec() limited to range of [0, LONG_MAX]\n"))
9479 +- return 0;
9480 +-
9481 + /*
9482 + * Always calculate and emit the mask even if the compiler
9483 + * thinks the mask is not needed. The compiler does not take
9484 +@@ -43,6 +29,26 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
9485 + }
9486 + #endif
9487 +
9488 ++/*
9489 ++ * Warn developers about inappropriate array_index_nospec() usage.
9490 ++ *
9491 ++ * Even if the CPU speculates past the WARN_ONCE branch, the
9492 ++ * sign bit of @index is taken into account when generating the
9493 ++ * mask.
9494 ++ *
9495 ++ * This warning is compiled out when the compiler can infer that
9496 ++ * @index and @size are less than LONG_MAX.
9497 ++ */
9498 ++#define array_index_mask_nospec_check(index, size) \
9499 ++({ \
9500 ++ if (WARN_ONCE(index > LONG_MAX || size > LONG_MAX, \
9501 ++ "array_index_nospec() limited to range of [0, LONG_MAX]\n")) \
9502 ++ _mask = 0; \
9503 ++ else \
9504 ++ _mask = array_index_mask_nospec(index, size); \
9505 ++ _mask; \
9506 ++})
9507 ++
9508 + /*
9509 + * array_index_nospec - sanitize an array index after a bounds check
9510 + *
9511 +@@ -61,7 +67,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
9512 + ({ \
9513 + typeof(index) _i = (index); \
9514 + typeof(size) _s = (size); \
9515 +- unsigned long _mask = array_index_mask_nospec(_i, _s); \
9516 ++ unsigned long _mask = array_index_mask_nospec_check(_i, _s); \
9517 + \
9518 + BUILD_BUG_ON(sizeof(_i) > sizeof(long)); \
9519 + BUILD_BUG_ON(sizeof(_s) > sizeof(long)); \
9520 +diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
9521 +index fa6ace66fea5..289e4d54e3e0 100644
9522 +--- a/include/linux/ring_buffer.h
9523 ++++ b/include/linux/ring_buffer.h
9524 +@@ -2,7 +2,6 @@
9525 + #ifndef _LINUX_RING_BUFFER_H
9526 + #define _LINUX_RING_BUFFER_H
9527 +
9528 +-#include <linux/kmemcheck.h>
9529 + #include <linux/mm.h>
9530 + #include <linux/seq_file.h>
9531 + #include <linux/poll.h>
9532 +@@ -14,9 +13,7 @@ struct ring_buffer_iter;
9533 + * Don't refer to this struct directly, use functions below.
9534 + */
9535 + struct ring_buffer_event {
9536 +- kmemcheck_bitfield_begin(bitfield);
9537 + u32 type_len:5, time_delta:27;
9538 +- kmemcheck_bitfield_end(bitfield);
9539 +
9540 + u32 array[];
9541 + };
9542 +diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
9543 +index 051e0939ec19..be45224b01d7 100644
9544 +--- a/include/linux/skbuff.h
9545 ++++ b/include/linux/skbuff.h
9546 +@@ -15,7 +15,6 @@
9547 + #define _LINUX_SKBUFF_H
9548 +
9549 + #include <linux/kernel.h>
9550 +-#include <linux/kmemcheck.h>
9551 + #include <linux/compiler.h>
9552 + #include <linux/time.h>
9553 + #include <linux/bug.h>
9554 +@@ -706,7 +705,6 @@ struct sk_buff {
9555 + /* Following fields are _not_ copied in __copy_skb_header()
9556 + * Note that queue_mapping is here mostly to fill a hole.
9557 + */
9558 +- kmemcheck_bitfield_begin(flags1);
9559 + __u16 queue_mapping;
9560 +
9561 + /* if you move cloned around you also must adapt those constants */
9562 +@@ -725,7 +723,6 @@ struct sk_buff {
9563 + head_frag:1,
9564 + xmit_more:1,
9565 + __unused:1; /* one bit hole */
9566 +- kmemcheck_bitfield_end(flags1);
9567 +
9568 + /* fields enclosed in headers_start/headers_end are copied
9569 + * using a single memcpy() in __copy_skb_header()
9570 +diff --git a/include/linux/slab.h b/include/linux/slab.h
9571 +index af5aa65c7c18..ae5ed6492d54 100644
9572 +--- a/include/linux/slab.h
9573 ++++ b/include/linux/slab.h
9574 +@@ -78,12 +78,6 @@
9575 +
9576 + #define SLAB_NOLEAKTRACE 0x00800000UL /* Avoid kmemleak tracing */
9577 +
9578 +-/* Don't track use of uninitialized memory */
9579 +-#ifdef CONFIG_KMEMCHECK
9580 +-# define SLAB_NOTRACK 0x01000000UL
9581 +-#else
9582 +-# define SLAB_NOTRACK 0x00000000UL
9583 +-#endif
9584 + #ifdef CONFIG_FAILSLAB
9585 + # define SLAB_FAILSLAB 0x02000000UL /* Fault injection mark */
9586 + #else
9587 +diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
9588 +index 4bcdf00c110f..34f053a150a9 100644
9589 +--- a/include/linux/thread_info.h
9590 ++++ b/include/linux/thread_info.h
9591 +@@ -44,10 +44,9 @@ enum {
9592 + #endif
9593 +
9594 + #if IS_ENABLED(CONFIG_DEBUG_STACK_USAGE) || IS_ENABLED(CONFIG_DEBUG_KMEMLEAK)
9595 +-# define THREADINFO_GFP (GFP_KERNEL_ACCOUNT | __GFP_NOTRACK | \
9596 +- __GFP_ZERO)
9597 ++# define THREADINFO_GFP (GFP_KERNEL_ACCOUNT | __GFP_ZERO)
9598 + #else
9599 +-# define THREADINFO_GFP (GFP_KERNEL_ACCOUNT | __GFP_NOTRACK)
9600 ++# define THREADINFO_GFP (GFP_KERNEL_ACCOUNT)
9601 + #endif
9602 +
9603 + /*
9604 +diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
9605 +index db8162dd8c0b..8e51b4a69088 100644
9606 +--- a/include/net/inet_sock.h
9607 ++++ b/include/net/inet_sock.h
9608 +@@ -17,7 +17,6 @@
9609 + #define _INET_SOCK_H
9610 +
9611 + #include <linux/bitops.h>
9612 +-#include <linux/kmemcheck.h>
9613 + #include <linux/string.h>
9614 + #include <linux/types.h>
9615 + #include <linux/jhash.h>
9616 +@@ -84,7 +83,6 @@ struct inet_request_sock {
9617 + #define ireq_state req.__req_common.skc_state
9618 + #define ireq_family req.__req_common.skc_family
9619 +
9620 +- kmemcheck_bitfield_begin(flags);
9621 + u16 snd_wscale : 4,
9622 + rcv_wscale : 4,
9623 + tstamp_ok : 1,
9624 +@@ -93,7 +91,6 @@ struct inet_request_sock {
9625 + ecn_ok : 1,
9626 + acked : 1,
9627 + no_srccheck: 1;
9628 +- kmemcheck_bitfield_end(flags);
9629 + u32 ir_mark;
9630 + union {
9631 + struct ip_options_rcu __rcu *ireq_opt;
9632 +diff --git a/include/net/inet_timewait_sock.h b/include/net/inet_timewait_sock.h
9633 +index 6a75d67a30fd..1356fa6a7566 100644
9634 +--- a/include/net/inet_timewait_sock.h
9635 ++++ b/include/net/inet_timewait_sock.h
9636 +@@ -15,8 +15,6 @@
9637 + #ifndef _INET_TIMEWAIT_SOCK_
9638 + #define _INET_TIMEWAIT_SOCK_
9639 +
9640 +-
9641 +-#include <linux/kmemcheck.h>
9642 + #include <linux/list.h>
9643 + #include <linux/timer.h>
9644 + #include <linux/types.h>
9645 +@@ -69,14 +67,12 @@ struct inet_timewait_sock {
9646 + /* Socket demultiplex comparisons on incoming packets. */
9647 + /* these three are in inet_sock */
9648 + __be16 tw_sport;
9649 +- kmemcheck_bitfield_begin(flags);
9650 + /* And these are ours. */
9651 + unsigned int tw_kill : 1,
9652 + tw_transparent : 1,
9653 + tw_flowlabel : 20,
9654 + tw_pad : 2, /* 2 bits hole */
9655 + tw_tos : 8;
9656 +- kmemcheck_bitfield_end(flags);
9657 + struct timer_list tw_timer;
9658 + struct inet_bind_bucket *tw_tb;
9659 + };
9660 +diff --git a/include/net/sock.h b/include/net/sock.h
9661 +index 006580155a87..9bd5d68076d9 100644
9662 +--- a/include/net/sock.h
9663 ++++ b/include/net/sock.h
9664 +@@ -436,7 +436,6 @@ struct sock {
9665 + #define SK_FL_TYPE_MASK 0xffff0000
9666 + #endif
9667 +
9668 +- kmemcheck_bitfield_begin(flags);
9669 + unsigned int sk_padding : 1,
9670 + sk_kern_sock : 1,
9671 + sk_no_check_tx : 1,
9672 +@@ -445,8 +444,6 @@ struct sock {
9673 + sk_protocol : 8,
9674 + sk_type : 16;
9675 + #define SK_PROTOCOL_MAX U8_MAX
9676 +- kmemcheck_bitfield_end(flags);
9677 +-
9678 + u16 sk_gso_max_segs;
9679 + unsigned long sk_lingertime;
9680 + struct proto *sk_prot_creator;
9681 +diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
9682 +index e8608b2dc844..6533aa64f009 100644
9683 +--- a/include/rdma/ib_verbs.h
9684 ++++ b/include/rdma/ib_verbs.h
9685 +@@ -971,9 +971,9 @@ struct ib_wc {
9686 + u32 invalidate_rkey;
9687 + } ex;
9688 + u32 src_qp;
9689 ++ u32 slid;
9690 + int wc_flags;
9691 + u16 pkey_index;
9692 +- u32 slid;
9693 + u8 sl;
9694 + u8 dlid_path_bits;
9695 + u8 port_num; /* valid only for DR SMPs on switches */
9696 +diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
9697 +index 648cbf603736..72162f3a03fa 100644
9698 +--- a/include/trace/events/mmflags.h
9699 ++++ b/include/trace/events/mmflags.h
9700 +@@ -46,7 +46,6 @@
9701 + {(unsigned long)__GFP_RECLAIMABLE, "__GFP_RECLAIMABLE"}, \
9702 + {(unsigned long)__GFP_MOVABLE, "__GFP_MOVABLE"}, \
9703 + {(unsigned long)__GFP_ACCOUNT, "__GFP_ACCOUNT"}, \
9704 +- {(unsigned long)__GFP_NOTRACK, "__GFP_NOTRACK"}, \
9705 + {(unsigned long)__GFP_WRITE, "__GFP_WRITE"}, \
9706 + {(unsigned long)__GFP_RECLAIM, "__GFP_RECLAIM"}, \
9707 + {(unsigned long)__GFP_DIRECT_RECLAIM, "__GFP_DIRECT_RECLAIM"},\
9708 +diff --git a/include/trace/events/xen.h b/include/trace/events/xen.h
9709 +index a7c8b452aab9..d791863b62fc 100644
9710 +--- a/include/trace/events/xen.h
9711 ++++ b/include/trace/events/xen.h
9712 +@@ -365,7 +365,7 @@ TRACE_EVENT(xen_mmu_flush_tlb,
9713 + TP_printk("%s", "")
9714 + );
9715 +
9716 +-TRACE_EVENT(xen_mmu_flush_tlb_single,
9717 ++TRACE_EVENT(xen_mmu_flush_tlb_one_user,
9718 + TP_PROTO(unsigned long addr),
9719 + TP_ARGS(addr),
9720 + TP_STRUCT__entry(
9721 +diff --git a/init/do_mounts.c b/init/do_mounts.c
9722 +index f6d4dd764a52..7cf4f6dafd5f 100644
9723 +--- a/init/do_mounts.c
9724 ++++ b/init/do_mounts.c
9725 +@@ -380,8 +380,7 @@ static int __init do_mount_root(char *name, char *fs, int flags, void *data)
9726 +
9727 + void __init mount_block_root(char *name, int flags)
9728 + {
9729 +- struct page *page = alloc_page(GFP_KERNEL |
9730 +- __GFP_NOTRACK_FALSE_POSITIVE);
9731 ++ struct page *page = alloc_page(GFP_KERNEL);
9732 + char *fs_names = page_address(page);
9733 + char *p;
9734 + #ifdef CONFIG_BLOCK
9735 +diff --git a/init/main.c b/init/main.c
9736 +index b32ec72cdf3d..2d355a61dfc5 100644
9737 +--- a/init/main.c
9738 ++++ b/init/main.c
9739 +@@ -69,7 +69,6 @@
9740 + #include <linux/kgdb.h>
9741 + #include <linux/ftrace.h>
9742 + #include <linux/async.h>
9743 +-#include <linux/kmemcheck.h>
9744 + #include <linux/sfi.h>
9745 + #include <linux/shmem_fs.h>
9746 + #include <linux/slab.h>
9747 +diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
9748 +index 2246115365d9..d203a5d6b726 100644
9749 +--- a/kernel/bpf/core.c
9750 ++++ b/kernel/bpf/core.c
9751 +@@ -85,8 +85,6 @@ struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
9752 + if (fp == NULL)
9753 + return NULL;
9754 +
9755 +- kmemcheck_annotate_bitfield(fp, meta);
9756 +-
9757 + aux = kzalloc(sizeof(*aux), GFP_KERNEL | gfp_extra_flags);
9758 + if (aux == NULL) {
9759 + vfree(fp);
9760 +@@ -127,8 +125,6 @@ struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
9761 + if (fp == NULL) {
9762 + __bpf_prog_uncharge(fp_old->aux->user, delta);
9763 + } else {
9764 +- kmemcheck_annotate_bitfield(fp, meta);
9765 +-
9766 + memcpy(fp, fp_old, fp_old->pages * PAGE_SIZE);
9767 + fp->pages = pages;
9768 + fp->aux->prog = fp;
9769 +@@ -662,8 +658,6 @@ static struct bpf_prog *bpf_prog_clone_create(struct bpf_prog *fp_other,
9770 +
9771 + fp = __vmalloc(fp_other->pages * PAGE_SIZE, gfp_flags, PAGE_KERNEL);
9772 + if (fp != NULL) {
9773 +- kmemcheck_annotate_bitfield(fp, meta);
9774 +-
9775 + /* aux->prog still points to the fp_other one, so
9776 + * when promoting the clone to the real program,
9777 + * this still needs to be adapted.
9778 +diff --git a/kernel/fork.c b/kernel/fork.c
9779 +index 500ce64517d9..98c91bd341b4 100644
9780 +--- a/kernel/fork.c
9781 ++++ b/kernel/fork.c
9782 +@@ -469,7 +469,7 @@ void __init fork_init(void)
9783 + /* create a slab on which task_structs can be allocated */
9784 + task_struct_cachep = kmem_cache_create("task_struct",
9785 + arch_task_struct_size, align,
9786 +- SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT, NULL);
9787 ++ SLAB_PANIC|SLAB_ACCOUNT, NULL);
9788 + #endif
9789 +
9790 + /* do the arch specific task caches init */
9791 +@@ -2208,18 +2208,18 @@ void __init proc_caches_init(void)
9792 + sighand_cachep = kmem_cache_create("sighand_cache",
9793 + sizeof(struct sighand_struct), 0,
9794 + SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
9795 +- SLAB_NOTRACK|SLAB_ACCOUNT, sighand_ctor);
9796 ++ SLAB_ACCOUNT, sighand_ctor);
9797 + signal_cachep = kmem_cache_create("signal_cache",
9798 + sizeof(struct signal_struct), 0,
9799 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
9800 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
9801 + NULL);
9802 + files_cachep = kmem_cache_create("files_cache",
9803 + sizeof(struct files_struct), 0,
9804 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
9805 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
9806 + NULL);
9807 + fs_cachep = kmem_cache_create("fs_cache",
9808 + sizeof(struct fs_struct), 0,
9809 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
9810 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
9811 + NULL);
9812 + /*
9813 + * FIXME! The "sizeof(struct mm_struct)" currently includes the
9814 +@@ -2230,7 +2230,7 @@ void __init proc_caches_init(void)
9815 + */
9816 + mm_cachep = kmem_cache_create("mm_struct",
9817 + sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
9818 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
9819 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
9820 + NULL);
9821 + vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
9822 + mmap_init();
9823 +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
9824 +index e36e652d996f..4d362d3e4571 100644
9825 +--- a/kernel/locking/lockdep.c
9826 ++++ b/kernel/locking/lockdep.c
9827 +@@ -47,7 +47,6 @@
9828 + #include <linux/stringify.h>
9829 + #include <linux/bitops.h>
9830 + #include <linux/gfp.h>
9831 +-#include <linux/kmemcheck.h>
9832 + #include <linux/random.h>
9833 + #include <linux/jhash.h>
9834 +
9835 +@@ -3225,8 +3224,6 @@ static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
9836 + {
9837 + int i;
9838 +
9839 +- kmemcheck_mark_initialized(lock, sizeof(*lock));
9840 +-
9841 + for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
9842 + lock->class_cache[i] = NULL;
9843 +
9844 +diff --git a/kernel/memremap.c b/kernel/memremap.c
9845 +index 403ab9cdb949..4712ce646e04 100644
9846 +--- a/kernel/memremap.c
9847 ++++ b/kernel/memremap.c
9848 +@@ -301,7 +301,8 @@ static void devm_memremap_pages_release(struct device *dev, void *data)
9849 +
9850 + /* pages are dead and unused, undo the arch mapping */
9851 + align_start = res->start & ~(SECTION_SIZE - 1);
9852 +- align_size = ALIGN(resource_size(res), SECTION_SIZE);
9853 ++ align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
9854 ++ - align_start;
9855 +
9856 + mem_hotplug_begin();
9857 + arch_remove_memory(align_start, align_size);
9858 +diff --git a/kernel/signal.c b/kernel/signal.c
9859 +index 1facff1dbbae..6895f6bb98a7 100644
9860 +--- a/kernel/signal.c
9861 ++++ b/kernel/signal.c
9862 +@@ -1038,8 +1038,7 @@ static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
9863 + else
9864 + override_rlimit = 0;
9865 +
9866 +- q = __sigqueue_alloc(sig, t, GFP_ATOMIC | __GFP_NOTRACK_FALSE_POSITIVE,
9867 +- override_rlimit);
9868 ++ q = __sigqueue_alloc(sig, t, GFP_ATOMIC, override_rlimit);
9869 + if (q) {
9870 + list_add_tail(&q->list, &pending->list);
9871 + switch ((unsigned long) info) {
9872 +diff --git a/kernel/softirq.c b/kernel/softirq.c
9873 +index 4e09821f9d9e..e89c3b0cff6d 100644
9874 +--- a/kernel/softirq.c
9875 ++++ b/kernel/softirq.c
9876 +@@ -486,16 +486,6 @@ void __tasklet_hi_schedule(struct tasklet_struct *t)
9877 + }
9878 + EXPORT_SYMBOL(__tasklet_hi_schedule);
9879 +
9880 +-void __tasklet_hi_schedule_first(struct tasklet_struct *t)
9881 +-{
9882 +- BUG_ON(!irqs_disabled());
9883 +-
9884 +- t->next = __this_cpu_read(tasklet_hi_vec.head);
9885 +- __this_cpu_write(tasklet_hi_vec.head, t);
9886 +- __raise_softirq_irqoff(HI_SOFTIRQ);
9887 +-}
9888 +-EXPORT_SYMBOL(__tasklet_hi_schedule_first);
9889 +-
9890 + static __latent_entropy void tasklet_action(struct softirq_action *a)
9891 + {
9892 + struct tasklet_struct *list;
9893 +diff --git a/kernel/sysctl.c b/kernel/sysctl.c
9894 +index 56aca862c4f5..069550540a39 100644
9895 +--- a/kernel/sysctl.c
9896 ++++ b/kernel/sysctl.c
9897 +@@ -30,7 +30,6 @@
9898 + #include <linux/proc_fs.h>
9899 + #include <linux/security.h>
9900 + #include <linux/ctype.h>
9901 +-#include <linux/kmemcheck.h>
9902 + #include <linux/kmemleak.h>
9903 + #include <linux/fs.h>
9904 + #include <linux/init.h>
9905 +@@ -1173,15 +1172,6 @@ static struct ctl_table kern_table[] = {
9906 + .extra1 = &zero,
9907 + .extra2 = &one_thousand,
9908 + },
9909 +-#endif
9910 +-#ifdef CONFIG_KMEMCHECK
9911 +- {
9912 +- .procname = "kmemcheck",
9913 +- .data = &kmemcheck_enabled,
9914 +- .maxlen = sizeof(int),
9915 +- .mode = 0644,
9916 +- .proc_handler = proc_dointvec,
9917 +- },
9918 + #endif
9919 + {
9920 + .procname = "panic_on_warn",
9921 +diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
9922 +index 434c840e2d82..4ad6f6ca18c1 100644
9923 +--- a/kernel/trace/Kconfig
9924 ++++ b/kernel/trace/Kconfig
9925 +@@ -343,7 +343,7 @@ config PROFILE_ANNOTATED_BRANCHES
9926 + on if you need to profile the system's use of these macros.
9927 +
9928 + config PROFILE_ALL_BRANCHES
9929 +- bool "Profile all if conditionals"
9930 ++ bool "Profile all if conditionals" if !FORTIFY_SOURCE
9931 + select TRACE_BRANCH_PROFILING
9932 + help
9933 + This tracer profiles all branch conditions. Every if ()
9934 +diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
9935 +index 0476a9372014..39c221454186 100644
9936 +--- a/kernel/trace/ring_buffer.c
9937 ++++ b/kernel/trace/ring_buffer.c
9938 +@@ -13,7 +13,6 @@
9939 + #include <linux/uaccess.h>
9940 + #include <linux/hardirq.h>
9941 + #include <linux/kthread.h> /* for self test */
9942 +-#include <linux/kmemcheck.h>
9943 + #include <linux/module.h>
9944 + #include <linux/percpu.h>
9945 + #include <linux/mutex.h>
9946 +@@ -2059,7 +2058,6 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
9947 + }
9948 +
9949 + event = __rb_page_index(tail_page, tail);
9950 +- kmemcheck_annotate_bitfield(event, bitfield);
9951 +
9952 + /* account for padding bytes */
9953 + local_add(BUF_PAGE_SIZE - tail, &cpu_buffer->entries_bytes);
9954 +@@ -2690,7 +2688,6 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
9955 + /* We reserved something on the buffer */
9956 +
9957 + event = __rb_page_index(tail_page, tail);
9958 +- kmemcheck_annotate_bitfield(event, bitfield);
9959 + rb_update_event(cpu_buffer, event, info);
9960 +
9961 + local_inc(&tail_page->entries);
9962 +diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
9963 +index 61e7f0678d33..a764aec3c9a1 100644
9964 +--- a/kernel/trace/trace_events_filter.c
9965 ++++ b/kernel/trace/trace_events_filter.c
9966 +@@ -400,7 +400,6 @@ enum regex_type filter_parse_regex(char *buff, int len, char **search, int *not)
9967 + for (i = 0; i < len; i++) {
9968 + if (buff[i] == '*') {
9969 + if (!i) {
9970 +- *search = buff + 1;
9971 + type = MATCH_END_ONLY;
9972 + } else if (i == len - 1) {
9973 + if (type == MATCH_END_ONLY)
9974 +@@ -410,14 +409,14 @@ enum regex_type filter_parse_regex(char *buff, int len, char **search, int *not)
9975 + buff[i] = 0;
9976 + break;
9977 + } else { /* pattern continues, use full glob */
9978 +- type = MATCH_GLOB;
9979 +- break;
9980 ++ return MATCH_GLOB;
9981 + }
9982 + } else if (strchr("[?\\", buff[i])) {
9983 +- type = MATCH_GLOB;
9984 +- break;
9985 ++ return MATCH_GLOB;
9986 + }
9987 + }
9988 ++ if (buff[0] == '*')
9989 ++ *search = buff + 1;
9990 +
9991 + return type;
9992 + }
9993 +diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
9994 +index 00cb02daeddd..62d0e25c054c 100644
9995 +--- a/lib/Kconfig.debug
9996 ++++ b/lib/Kconfig.debug
9997 +@@ -504,7 +504,7 @@ config DEBUG_OBJECTS_ENABLE_DEFAULT
9998 +
9999 + config DEBUG_SLAB
10000 + bool "Debug slab memory allocations"
10001 +- depends on DEBUG_KERNEL && SLAB && !KMEMCHECK
10002 ++ depends on DEBUG_KERNEL && SLAB
10003 + help
10004 + Say Y here to have the kernel do limited verification on memory
10005 + allocation as well as poisoning memory on free to catch use of freed
10006 +@@ -516,7 +516,7 @@ config DEBUG_SLAB_LEAK
10007 +
10008 + config SLUB_DEBUG_ON
10009 + bool "SLUB debugging on by default"
10010 +- depends on SLUB && SLUB_DEBUG && !KMEMCHECK
10011 ++ depends on SLUB && SLUB_DEBUG
10012 + default n
10013 + help
10014 + Boot with debugging on by default. SLUB boots by default with
10015 +@@ -730,8 +730,6 @@ config DEBUG_STACKOVERFLOW
10016 +
10017 + If in doubt, say "N".
10018 +
10019 +-source "lib/Kconfig.kmemcheck"
10020 +-
10021 + source "lib/Kconfig.kasan"
10022 +
10023 + endmenu # "Memory Debugging"
10024 +diff --git a/lib/Kconfig.kmemcheck b/lib/Kconfig.kmemcheck
10025 +deleted file mode 100644
10026 +index 846e039a86b4..000000000000
10027 +--- a/lib/Kconfig.kmemcheck
10028 ++++ /dev/null
10029 +@@ -1,94 +0,0 @@
10030 +-config HAVE_ARCH_KMEMCHECK
10031 +- bool
10032 +-
10033 +-if HAVE_ARCH_KMEMCHECK
10034 +-
10035 +-menuconfig KMEMCHECK
10036 +- bool "kmemcheck: trap use of uninitialized memory"
10037 +- depends on DEBUG_KERNEL
10038 +- depends on !X86_USE_3DNOW
10039 +- depends on SLUB || SLAB
10040 +- depends on !CC_OPTIMIZE_FOR_SIZE
10041 +- depends on !FUNCTION_TRACER
10042 +- select FRAME_POINTER
10043 +- select STACKTRACE
10044 +- default n
10045 +- help
10046 +- This option enables tracing of dynamically allocated kernel memory
10047 +- to see if memory is used before it has been given an initial value.
10048 +- Be aware that this requires half of your memory for bookkeeping and
10049 +- will insert extra code at *every* read and write to tracked memory
10050 +- thus slow down the kernel code (but user code is unaffected).
10051 +-
10052 +- The kernel may be started with kmemcheck=0 or kmemcheck=1 to disable
10053 +- or enable kmemcheck at boot-time. If the kernel is started with
10054 +- kmemcheck=0, the large memory and CPU overhead is not incurred.
10055 +-
10056 +-choice
10057 +- prompt "kmemcheck: default mode at boot"
10058 +- depends on KMEMCHECK
10059 +- default KMEMCHECK_ONESHOT_BY_DEFAULT
10060 +- help
10061 +- This option controls the default behaviour of kmemcheck when the
10062 +- kernel boots and no kmemcheck= parameter is given.
10063 +-
10064 +-config KMEMCHECK_DISABLED_BY_DEFAULT
10065 +- bool "disabled"
10066 +- depends on KMEMCHECK
10067 +-
10068 +-config KMEMCHECK_ENABLED_BY_DEFAULT
10069 +- bool "enabled"
10070 +- depends on KMEMCHECK
10071 +-
10072 +-config KMEMCHECK_ONESHOT_BY_DEFAULT
10073 +- bool "one-shot"
10074 +- depends on KMEMCHECK
10075 +- help
10076 +- In one-shot mode, only the first error detected is reported before
10077 +- kmemcheck is disabled.
10078 +-
10079 +-endchoice
10080 +-
10081 +-config KMEMCHECK_QUEUE_SIZE
10082 +- int "kmemcheck: error queue size"
10083 +- depends on KMEMCHECK
10084 +- default 64
10085 +- help
10086 +- Select the maximum number of errors to store in the queue. Since
10087 +- errors can occur virtually anywhere and in any context, we need a
10088 +- temporary storage area which is guarantueed not to generate any
10089 +- other faults. The queue will be emptied as soon as a tasklet may
10090 +- be scheduled. If the queue is full, new error reports will be
10091 +- lost.
10092 +-
10093 +-config KMEMCHECK_SHADOW_COPY_SHIFT
10094 +- int "kmemcheck: shadow copy size (5 => 32 bytes, 6 => 64 bytes)"
10095 +- depends on KMEMCHECK
10096 +- range 2 8
10097 +- default 5
10098 +- help
10099 +- Select the number of shadow bytes to save along with each entry of
10100 +- the queue. These bytes indicate what parts of an allocation are
10101 +- initialized, uninitialized, etc. and will be displayed when an
10102 +- error is detected to help the debugging of a particular problem.
10103 +-
10104 +-config KMEMCHECK_PARTIAL_OK
10105 +- bool "kmemcheck: allow partially uninitialized memory"
10106 +- depends on KMEMCHECK
10107 +- default y
10108 +- help
10109 +- This option works around certain GCC optimizations that produce
10110 +- 32-bit reads from 16-bit variables where the upper 16 bits are
10111 +- thrown away afterwards. This may of course also hide some real
10112 +- bugs.
10113 +-
10114 +-config KMEMCHECK_BITOPS_OK
10115 +- bool "kmemcheck: allow bit-field manipulation"
10116 +- depends on KMEMCHECK
10117 +- default n
10118 +- help
10119 +- This option silences warnings that would be generated for bit-field
10120 +- accesses where not all the bits are initialized at the same time.
10121 +- This may also hide some real bugs.
10122 +-
10123 +-endif
10124 +diff --git a/lib/swiotlb.c b/lib/swiotlb.c
10125 +index 8c6c83ef57a4..20df2fd9b150 100644
10126 +--- a/lib/swiotlb.c
10127 ++++ b/lib/swiotlb.c
10128 +@@ -585,7 +585,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
10129 +
10130 + not_found:
10131 + spin_unlock_irqrestore(&io_tlb_lock, flags);
10132 +- if (printk_ratelimit())
10133 ++ if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
10134 + dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes)\n", size);
10135 + return SWIOTLB_MAP_ERROR;
10136 + found:
10137 +@@ -712,6 +712,7 @@ void *
10138 + swiotlb_alloc_coherent(struct device *hwdev, size_t size,
10139 + dma_addr_t *dma_handle, gfp_t flags)
10140 + {
10141 ++ bool warn = !(flags & __GFP_NOWARN);
10142 + dma_addr_t dev_addr;
10143 + void *ret;
10144 + int order = get_order(size);
10145 +@@ -737,8 +738,8 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
10146 + * GFP_DMA memory; fall back on map_single(), which
10147 + * will grab memory from the lowest available address range.
10148 + */
10149 +- phys_addr_t paddr = map_single(hwdev, 0, size,
10150 +- DMA_FROM_DEVICE, 0);
10151 ++ phys_addr_t paddr = map_single(hwdev, 0, size, DMA_FROM_DEVICE,
10152 ++ warn ? 0 : DMA_ATTR_NO_WARN);
10153 + if (paddr == SWIOTLB_MAP_ERROR)
10154 + goto err_warn;
10155 +
10156 +@@ -768,9 +769,11 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
10157 + return ret;
10158 +
10159 + err_warn:
10160 +- pr_warn("swiotlb: coherent allocation failed for device %s size=%zu\n",
10161 +- dev_name(hwdev), size);
10162 +- dump_stack();
10163 ++ if (warn && printk_ratelimit()) {
10164 ++ pr_warn("swiotlb: coherent allocation failed for device %s size=%zu\n",
10165 ++ dev_name(hwdev), size);
10166 ++ dump_stack();
10167 ++ }
10168 +
10169 + return NULL;
10170 + }
10171 +diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
10172 +index 5b0adf1435de..e5e606ee5f71 100644
10173 +--- a/mm/Kconfig.debug
10174 ++++ b/mm/Kconfig.debug
10175 +@@ -11,7 +11,6 @@ config DEBUG_PAGEALLOC
10176 + bool "Debug page memory allocations"
10177 + depends on DEBUG_KERNEL
10178 + depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC
10179 +- depends on !KMEMCHECK
10180 + select PAGE_EXTENSION
10181 + select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
10182 + ---help---
10183 +diff --git a/mm/Makefile b/mm/Makefile
10184 +index 4659b93cba43..e7ebd176fb93 100644
10185 +--- a/mm/Makefile
10186 ++++ b/mm/Makefile
10187 +@@ -17,7 +17,6 @@ KCOV_INSTRUMENT_slub.o := n
10188 + KCOV_INSTRUMENT_page_alloc.o := n
10189 + KCOV_INSTRUMENT_debug-pagealloc.o := n
10190 + KCOV_INSTRUMENT_kmemleak.o := n
10191 +-KCOV_INSTRUMENT_kmemcheck.o := n
10192 + KCOV_INSTRUMENT_memcontrol.o := n
10193 + KCOV_INSTRUMENT_mmzone.o := n
10194 + KCOV_INSTRUMENT_vmstat.o := n
10195 +@@ -70,7 +69,6 @@ obj-$(CONFIG_KSM) += ksm.o
10196 + obj-$(CONFIG_PAGE_POISONING) += page_poison.o
10197 + obj-$(CONFIG_SLAB) += slab.o
10198 + obj-$(CONFIG_SLUB) += slub.o
10199 +-obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
10200 + obj-$(CONFIG_KASAN) += kasan/
10201 + obj-$(CONFIG_FAILSLAB) += failslab.o
10202 + obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
10203 +diff --git a/mm/kmemcheck.c b/mm/kmemcheck.c
10204 +deleted file mode 100644
10205 +index 800d64b854ea..000000000000
10206 +--- a/mm/kmemcheck.c
10207 ++++ /dev/null
10208 +@@ -1,126 +0,0 @@
10209 +-// SPDX-License-Identifier: GPL-2.0
10210 +-#include <linux/gfp.h>
10211 +-#include <linux/mm_types.h>
10212 +-#include <linux/mm.h>
10213 +-#include <linux/slab.h>
10214 +-#include "slab.h"
10215 +-#include <linux/kmemcheck.h>
10216 +-
10217 +-void kmemcheck_alloc_shadow(struct page *page, int order, gfp_t flags, int node)
10218 +-{
10219 +- struct page *shadow;
10220 +- int pages;
10221 +- int i;
10222 +-
10223 +- pages = 1 << order;
10224 +-
10225 +- /*
10226 +- * With kmemcheck enabled, we need to allocate a memory area for the
10227 +- * shadow bits as well.
10228 +- */
10229 +- shadow = alloc_pages_node(node, flags | __GFP_NOTRACK, order);
10230 +- if (!shadow) {
10231 +- if (printk_ratelimit())
10232 +- pr_err("kmemcheck: failed to allocate shadow bitmap\n");
10233 +- return;
10234 +- }
10235 +-
10236 +- for(i = 0; i < pages; ++i)
10237 +- page[i].shadow = page_address(&shadow[i]);
10238 +-
10239 +- /*
10240 +- * Mark it as non-present for the MMU so that our accesses to
10241 +- * this memory will trigger a page fault and let us analyze
10242 +- * the memory accesses.
10243 +- */
10244 +- kmemcheck_hide_pages(page, pages);
10245 +-}
10246 +-
10247 +-void kmemcheck_free_shadow(struct page *page, int order)
10248 +-{
10249 +- struct page *shadow;
10250 +- int pages;
10251 +- int i;
10252 +-
10253 +- if (!kmemcheck_page_is_tracked(page))
10254 +- return;
10255 +-
10256 +- pages = 1 << order;
10257 +-
10258 +- kmemcheck_show_pages(page, pages);
10259 +-
10260 +- shadow = virt_to_page(page[0].shadow);
10261 +-
10262 +- for(i = 0; i < pages; ++i)
10263 +- page[i].shadow = NULL;
10264 +-
10265 +- __free_pages(shadow, order);
10266 +-}
10267 +-
10268 +-void kmemcheck_slab_alloc(struct kmem_cache *s, gfp_t gfpflags, void *object,
10269 +- size_t size)
10270 +-{
10271 +- if (unlikely(!object)) /* Skip object if allocation failed */
10272 +- return;
10273 +-
10274 +- /*
10275 +- * Has already been memset(), which initializes the shadow for us
10276 +- * as well.
10277 +- */
10278 +- if (gfpflags & __GFP_ZERO)
10279 +- return;
10280 +-
10281 +- /* No need to initialize the shadow of a non-tracked slab. */
10282 +- if (s->flags & SLAB_NOTRACK)
10283 +- return;
10284 +-
10285 +- if (!kmemcheck_enabled || gfpflags & __GFP_NOTRACK) {
10286 +- /*
10287 +- * Allow notracked objects to be allocated from
10288 +- * tracked caches. Note however that these objects
10289 +- * will still get page faults on access, they just
10290 +- * won't ever be flagged as uninitialized. If page
10291 +- * faults are not acceptable, the slab cache itself
10292 +- * should be marked NOTRACK.
10293 +- */
10294 +- kmemcheck_mark_initialized(object, size);
10295 +- } else if (!s->ctor) {
10296 +- /*
10297 +- * New objects should be marked uninitialized before
10298 +- * they're returned to the called.
10299 +- */
10300 +- kmemcheck_mark_uninitialized(object, size);
10301 +- }
10302 +-}
10303 +-
10304 +-void kmemcheck_slab_free(struct kmem_cache *s, void *object, size_t size)
10305 +-{
10306 +- /* TODO: RCU freeing is unsupported for now; hide false positives. */
10307 +- if (!s->ctor && !(s->flags & SLAB_TYPESAFE_BY_RCU))
10308 +- kmemcheck_mark_freed(object, size);
10309 +-}
10310 +-
10311 +-void kmemcheck_pagealloc_alloc(struct page *page, unsigned int order,
10312 +- gfp_t gfpflags)
10313 +-{
10314 +- int pages;
10315 +-
10316 +- if (gfpflags & (__GFP_HIGHMEM | __GFP_NOTRACK))
10317 +- return;
10318 +-
10319 +- pages = 1 << order;
10320 +-
10321 +- /*
10322 +- * NOTE: We choose to track GFP_ZERO pages too; in fact, they
10323 +- * can become uninitialized by copying uninitialized memory
10324 +- * into them.
10325 +- */
10326 +-
10327 +- /* XXX: Can use zone->node for node? */
10328 +- kmemcheck_alloc_shadow(page, order, gfpflags, -1);
10329 +-
10330 +- if (gfpflags & __GFP_ZERO)
10331 +- kmemcheck_mark_initialized_pages(page, pages);
10332 +- else
10333 +- kmemcheck_mark_uninitialized_pages(page, pages);
10334 +-}
10335 +diff --git a/mm/kmemleak.c b/mm/kmemleak.c
10336 +index a1ba553816eb..bd1374f402cd 100644
10337 +--- a/mm/kmemleak.c
10338 ++++ b/mm/kmemleak.c
10339 +@@ -110,7 +110,6 @@
10340 + #include <linux/atomic.h>
10341 +
10342 + #include <linux/kasan.h>
10343 +-#include <linux/kmemcheck.h>
10344 + #include <linux/kmemleak.h>
10345 + #include <linux/memory_hotplug.h>
10346 +
10347 +@@ -1238,9 +1237,6 @@ static bool update_checksum(struct kmemleak_object *object)
10348 + {
10349 + u32 old_csum = object->checksum;
10350 +
10351 +- if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
10352 +- return false;
10353 +-
10354 + kasan_disable_current();
10355 + object->checksum = crc32(0, (void *)object->pointer, object->size);
10356 + kasan_enable_current();
10357 +@@ -1314,11 +1310,6 @@ static void scan_block(void *_start, void *_end,
10358 + if (scan_should_stop())
10359 + break;
10360 +
10361 +- /* don't scan uninitialized memory */
10362 +- if (!kmemcheck_is_obj_initialized((unsigned long)ptr,
10363 +- BYTES_PER_POINTER))
10364 +- continue;
10365 +-
10366 + kasan_disable_current();
10367 + pointer = *ptr;
10368 + kasan_enable_current();
10369 +diff --git a/mm/memory-failure.c b/mm/memory-failure.c
10370 +index 88366626c0b7..1cd3b3569af8 100644
10371 +--- a/mm/memory-failure.c
10372 ++++ b/mm/memory-failure.c
10373 +@@ -1146,8 +1146,6 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
10374 + return 0;
10375 + }
10376 +
10377 +- arch_unmap_kpfn(pfn);
10378 +-
10379 + orig_head = hpage = compound_head(p);
10380 + num_poisoned_pages_inc();
10381 +
10382 +diff --git a/mm/memory.c b/mm/memory.c
10383 +index a728bed16c20..fc7779165dcf 100644
10384 +--- a/mm/memory.c
10385 ++++ b/mm/memory.c
10386 +@@ -81,7 +81,7 @@
10387 +
10388 + #include "internal.h"
10389 +
10390 +-#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
10391 ++#if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
10392 + #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
10393 + #endif
10394 +
10395 +diff --git a/mm/page_alloc.c b/mm/page_alloc.c
10396 +index 2de080003693..6627caeeaf82 100644
10397 +--- a/mm/page_alloc.c
10398 ++++ b/mm/page_alloc.c
10399 +@@ -24,7 +24,6 @@
10400 + #include <linux/memblock.h>
10401 + #include <linux/compiler.h>
10402 + #include <linux/kernel.h>
10403 +-#include <linux/kmemcheck.h>
10404 + #include <linux/kasan.h>
10405 + #include <linux/module.h>
10406 + #include <linux/suspend.h>
10407 +@@ -1022,7 +1021,6 @@ static __always_inline bool free_pages_prepare(struct page *page,
10408 + VM_BUG_ON_PAGE(PageTail(page), page);
10409 +
10410 + trace_mm_page_free(page, order);
10411 +- kmemcheck_free_shadow(page, order);
10412 +
10413 + /*
10414 + * Check tail pages before head page information is cleared to
10415 +@@ -2674,15 +2672,6 @@ void split_page(struct page *page, unsigned int order)
10416 + VM_BUG_ON_PAGE(PageCompound(page), page);
10417 + VM_BUG_ON_PAGE(!page_count(page), page);
10418 +
10419 +-#ifdef CONFIG_KMEMCHECK
10420 +- /*
10421 +- * Split shadow pages too, because free(page[0]) would
10422 +- * otherwise free the whole shadow.
10423 +- */
10424 +- if (kmemcheck_page_is_tracked(page))
10425 +- split_page(virt_to_page(page[0].shadow), order);
10426 +-#endif
10427 +-
10428 + for (i = 1; i < (1 << order); i++)
10429 + set_page_refcounted(page + i);
10430 + split_page_owner(page, order);
10431 +@@ -4228,9 +4217,6 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
10432 + page = NULL;
10433 + }
10434 +
10435 +- if (kmemcheck_enabled && page)
10436 +- kmemcheck_pagealloc_alloc(page, order, gfp_mask);
10437 +-
10438 + trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);
10439 +
10440 + return page;
10441 +diff --git a/mm/slab.c b/mm/slab.c
10442 +index b7095884fd93..966839a1ac2c 100644
10443 +--- a/mm/slab.c
10444 ++++ b/mm/slab.c
10445 +@@ -114,7 +114,6 @@
10446 + #include <linux/rtmutex.h>
10447 + #include <linux/reciprocal_div.h>
10448 + #include <linux/debugobjects.h>
10449 +-#include <linux/kmemcheck.h>
10450 + #include <linux/memory.h>
10451 + #include <linux/prefetch.h>
10452 + #include <linux/sched/task_stack.h>
10453 +@@ -1413,7 +1412,7 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
10454 + if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
10455 + flags |= __GFP_RECLAIMABLE;
10456 +
10457 +- page = __alloc_pages_node(nodeid, flags | __GFP_NOTRACK, cachep->gfporder);
10458 ++ page = __alloc_pages_node(nodeid, flags, cachep->gfporder);
10459 + if (!page) {
10460 + slab_out_of_memory(cachep, flags, nodeid);
10461 + return NULL;
10462 +@@ -1435,15 +1434,6 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
10463 + if (sk_memalloc_socks() && page_is_pfmemalloc(page))
10464 + SetPageSlabPfmemalloc(page);
10465 +
10466 +- if (kmemcheck_enabled && !(cachep->flags & SLAB_NOTRACK)) {
10467 +- kmemcheck_alloc_shadow(page, cachep->gfporder, flags, nodeid);
10468 +-
10469 +- if (cachep->ctor)
10470 +- kmemcheck_mark_uninitialized_pages(page, nr_pages);
10471 +- else
10472 +- kmemcheck_mark_unallocated_pages(page, nr_pages);
10473 +- }
10474 +-
10475 + return page;
10476 + }
10477 +
10478 +@@ -1455,8 +1445,6 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
10479 + int order = cachep->gfporder;
10480 + unsigned long nr_freed = (1 << order);
10481 +
10482 +- kmemcheck_free_shadow(page, order);
10483 +-
10484 + if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
10485 + mod_lruvec_page_state(page, NR_SLAB_RECLAIMABLE, -nr_freed);
10486 + else
10487 +@@ -3516,8 +3504,6 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
10488 + kmemleak_free_recursive(objp, cachep->flags);
10489 + objp = cache_free_debugcheck(cachep, objp, caller);
10490 +
10491 +- kmemcheck_slab_free(cachep, objp, cachep->object_size);
10492 +-
10493 + /*
10494 + * Skip calling cache_free_alien() when the platform is not numa.
10495 + * This will avoid cache misses that happen while accessing slabp (which
10496 +diff --git a/mm/slab.h b/mm/slab.h
10497 +index 86d7c7d860f9..485d9fbb8802 100644
10498 +--- a/mm/slab.h
10499 ++++ b/mm/slab.h
10500 +@@ -40,7 +40,6 @@ struct kmem_cache {
10501 +
10502 + #include <linux/memcontrol.h>
10503 + #include <linux/fault-inject.h>
10504 +-#include <linux/kmemcheck.h>
10505 + #include <linux/kasan.h>
10506 + #include <linux/kmemleak.h>
10507 + #include <linux/random.h>
10508 +@@ -142,10 +141,10 @@ static inline unsigned long kmem_cache_flags(unsigned long object_size,
10509 + #if defined(CONFIG_SLAB)
10510 + #define SLAB_CACHE_FLAGS (SLAB_MEM_SPREAD | SLAB_NOLEAKTRACE | \
10511 + SLAB_RECLAIM_ACCOUNT | SLAB_TEMPORARY | \
10512 +- SLAB_NOTRACK | SLAB_ACCOUNT)
10513 ++ SLAB_ACCOUNT)
10514 + #elif defined(CONFIG_SLUB)
10515 + #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
10516 +- SLAB_TEMPORARY | SLAB_NOTRACK | SLAB_ACCOUNT)
10517 ++ SLAB_TEMPORARY | SLAB_ACCOUNT)
10518 + #else
10519 + #define SLAB_CACHE_FLAGS (0)
10520 + #endif
10521 +@@ -164,7 +163,6 @@ static inline unsigned long kmem_cache_flags(unsigned long object_size,
10522 + SLAB_NOLEAKTRACE | \
10523 + SLAB_RECLAIM_ACCOUNT | \
10524 + SLAB_TEMPORARY | \
10525 +- SLAB_NOTRACK | \
10526 + SLAB_ACCOUNT)
10527 +
10528 + int __kmem_cache_shutdown(struct kmem_cache *);
10529 +@@ -439,7 +437,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
10530 + for (i = 0; i < size; i++) {
10531 + void *object = p[i];
10532 +
10533 +- kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
10534 + kmemleak_alloc_recursive(object, s->object_size, 1,
10535 + s->flags, flags);
10536 + kasan_slab_alloc(s, object, flags);
10537 +diff --git a/mm/slab_common.c b/mm/slab_common.c
10538 +index 0d7fe71ff5e4..65212caa1f2a 100644
10539 +--- a/mm/slab_common.c
10540 ++++ b/mm/slab_common.c
10541 +@@ -44,7 +44,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
10542 + SLAB_FAILSLAB | SLAB_KASAN)
10543 +
10544 + #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
10545 +- SLAB_NOTRACK | SLAB_ACCOUNT)
10546 ++ SLAB_ACCOUNT)
10547 +
10548 + /*
10549 + * Merge control. If this is set then no merging of slab caches will occur.
10550 +diff --git a/mm/slub.c b/mm/slub.c
10551 +index 8e1c027a30f4..41c01690d116 100644
10552 +--- a/mm/slub.c
10553 ++++ b/mm/slub.c
10554 +@@ -22,7 +22,6 @@
10555 + #include <linux/notifier.h>
10556 + #include <linux/seq_file.h>
10557 + #include <linux/kasan.h>
10558 +-#include <linux/kmemcheck.h>
10559 + #include <linux/cpu.h>
10560 + #include <linux/cpuset.h>
10561 + #include <linux/mempolicy.h>
10562 +@@ -1370,12 +1369,11 @@ static inline void *slab_free_hook(struct kmem_cache *s, void *x)
10563 + * So in order to make the debug calls that expect irqs to be
10564 + * disabled we need to disable interrupts temporarily.
10565 + */
10566 +-#if defined(CONFIG_KMEMCHECK) || defined(CONFIG_LOCKDEP)
10567 ++#ifdef CONFIG_LOCKDEP
10568 + {
10569 + unsigned long flags;
10570 +
10571 + local_irq_save(flags);
10572 +- kmemcheck_slab_free(s, x, s->object_size);
10573 + debug_check_no_locks_freed(x, s->object_size);
10574 + local_irq_restore(flags);
10575 + }
10576 +@@ -1399,8 +1397,7 @@ static inline void slab_free_freelist_hook(struct kmem_cache *s,
10577 + * Compiler cannot detect this function can be removed if slab_free_hook()
10578 + * evaluates to nothing. Thus, catch all relevant config debug options here.
10579 + */
10580 +-#if defined(CONFIG_KMEMCHECK) || \
10581 +- defined(CONFIG_LOCKDEP) || \
10582 ++#if defined(CONFIG_LOCKDEP) || \
10583 + defined(CONFIG_DEBUG_KMEMLEAK) || \
10584 + defined(CONFIG_DEBUG_OBJECTS_FREE) || \
10585 + defined(CONFIG_KASAN)
10586 +@@ -1436,8 +1433,6 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
10587 + struct page *page;
10588 + int order = oo_order(oo);
10589 +
10590 +- flags |= __GFP_NOTRACK;
10591 +-
10592 + if (node == NUMA_NO_NODE)
10593 + page = alloc_pages(flags, order);
10594 + else
10595 +@@ -1596,22 +1591,6 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
10596 + stat(s, ORDER_FALLBACK);
10597 + }
10598 +
10599 +- if (kmemcheck_enabled &&
10600 +- !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
10601 +- int pages = 1 << oo_order(oo);
10602 +-
10603 +- kmemcheck_alloc_shadow(page, oo_order(oo), alloc_gfp, node);
10604 +-
10605 +- /*
10606 +- * Objects from caches that have a constructor don't get
10607 +- * cleared when they're allocated, so we need to do it here.
10608 +- */
10609 +- if (s->ctor)
10610 +- kmemcheck_mark_uninitialized_pages(page, pages);
10611 +- else
10612 +- kmemcheck_mark_unallocated_pages(page, pages);
10613 +- }
10614 +-
10615 + page->objects = oo_objects(oo);
10616 +
10617 + order = compound_order(page);
10618 +@@ -1687,8 +1666,6 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
10619 + check_object(s, page, p, SLUB_RED_INACTIVE);
10620 + }
10621 +
10622 +- kmemcheck_free_shadow(page, compound_order(page));
10623 +-
10624 + mod_lruvec_page_state(page,
10625 + (s->flags & SLAB_RECLAIM_ACCOUNT) ?
10626 + NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE,
10627 +@@ -3792,7 +3769,7 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
10628 + struct page *page;
10629 + void *ptr = NULL;
10630 +
10631 +- flags |= __GFP_COMP | __GFP_NOTRACK;
10632 ++ flags |= __GFP_COMP;
10633 + page = alloc_pages_node(node, flags, get_order(size));
10634 + if (page)
10635 + ptr = page_address(page);
10636 +@@ -5655,8 +5632,6 @@ static char *create_unique_id(struct kmem_cache *s)
10637 + *p++ = 'a';
10638 + if (s->flags & SLAB_CONSISTENCY_CHECKS)
10639 + *p++ = 'F';
10640 +- if (!(s->flags & SLAB_NOTRACK))
10641 +- *p++ = 't';
10642 + if (s->flags & SLAB_ACCOUNT)
10643 + *p++ = 'A';
10644 + if (p != name + 1)
10645 +diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
10646 +index f3a4efcf1456..3aa5a93ad107 100644
10647 +--- a/net/9p/trans_virtio.c
10648 ++++ b/net/9p/trans_virtio.c
10649 +@@ -160,7 +160,8 @@ static void req_done(struct virtqueue *vq)
10650 + spin_unlock_irqrestore(&chan->lock, flags);
10651 + /* Wakeup if anyone waiting for VirtIO ring space. */
10652 + wake_up(chan->vc_wq);
10653 +- p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
10654 ++ if (len)
10655 ++ p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
10656 + }
10657 + }
10658 +
10659 +diff --git a/net/core/skbuff.c b/net/core/skbuff.c
10660 +index 15fa5baa8fae..cc811add68c6 100644
10661 +--- a/net/core/skbuff.c
10662 ++++ b/net/core/skbuff.c
10663 +@@ -41,7 +41,6 @@
10664 + #include <linux/module.h>
10665 + #include <linux/types.h>
10666 + #include <linux/kernel.h>
10667 +-#include <linux/kmemcheck.h>
10668 + #include <linux/mm.h>
10669 + #include <linux/interrupt.h>
10670 + #include <linux/in.h>
10671 +@@ -234,14 +233,12 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
10672 + shinfo = skb_shinfo(skb);
10673 + memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
10674 + atomic_set(&shinfo->dataref, 1);
10675 +- kmemcheck_annotate_variable(shinfo->destructor_arg);
10676 +
10677 + if (flags & SKB_ALLOC_FCLONE) {
10678 + struct sk_buff_fclones *fclones;
10679 +
10680 + fclones = container_of(skb, struct sk_buff_fclones, skb1);
10681 +
10682 +- kmemcheck_annotate_bitfield(&fclones->skb2, flags1);
10683 + skb->fclone = SKB_FCLONE_ORIG;
10684 + refcount_set(&fclones->fclone_ref, 1);
10685 +
10686 +@@ -301,7 +298,6 @@ struct sk_buff *__build_skb(void *data, unsigned int frag_size)
10687 + shinfo = skb_shinfo(skb);
10688 + memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
10689 + atomic_set(&shinfo->dataref, 1);
10690 +- kmemcheck_annotate_variable(shinfo->destructor_arg);
10691 +
10692 + return skb;
10693 + }
10694 +@@ -1284,7 +1280,6 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
10695 + if (!n)
10696 + return NULL;
10697 +
10698 +- kmemcheck_annotate_bitfield(n, flags1);
10699 + n->fclone = SKB_FCLONE_UNAVAILABLE;
10700 + }
10701 +
10702 +diff --git a/net/core/sock.c b/net/core/sock.c
10703 +index beb1e299fed3..ec6eb546b228 100644
10704 +--- a/net/core/sock.c
10705 ++++ b/net/core/sock.c
10706 +@@ -1469,8 +1469,6 @@ static struct sock *sk_prot_alloc(struct proto *prot, gfp_t priority,
10707 + sk = kmalloc(prot->obj_size, priority);
10708 +
10709 + if (sk != NULL) {
10710 +- kmemcheck_annotate_bitfield(sk, flags);
10711 +-
10712 + if (security_sk_alloc(sk, family, priority))
10713 + goto out_free;
10714 +
10715 +diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
10716 +index 5b039159e67a..d451b9f19b59 100644
10717 +--- a/net/ipv4/inet_timewait_sock.c
10718 ++++ b/net/ipv4/inet_timewait_sock.c
10719 +@@ -9,7 +9,6 @@
10720 + */
10721 +
10722 + #include <linux/kernel.h>
10723 +-#include <linux/kmemcheck.h>
10724 + #include <linux/slab.h>
10725 + #include <linux/module.h>
10726 + #include <net/inet_hashtables.h>
10727 +@@ -167,8 +166,6 @@ struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk,
10728 + if (tw) {
10729 + const struct inet_sock *inet = inet_sk(sk);
10730 +
10731 +- kmemcheck_annotate_bitfield(tw, flags);
10732 +-
10733 + tw->tw_dr = dr;
10734 + /* Give us an identity. */
10735 + tw->tw_daddr = inet->inet_daddr;
10736 +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
10737 +index ff48ac654e5a..d9d215e27b8a 100644
10738 +--- a/net/ipv4/tcp_input.c
10739 ++++ b/net/ipv4/tcp_input.c
10740 +@@ -6204,7 +6204,6 @@ struct request_sock *inet_reqsk_alloc(const struct request_sock_ops *ops,
10741 + if (req) {
10742 + struct inet_request_sock *ireq = inet_rsk(req);
10743 +
10744 +- kmemcheck_annotate_bitfield(ireq, flags);
10745 + ireq->ireq_opt = NULL;
10746 + #if IS_ENABLED(CONFIG_IPV6)
10747 + ireq->pktopts = NULL;
10748 +diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
10749 +index c5b9ce41d66f..aee385eb72e7 100644
10750 +--- a/net/mpls/af_mpls.c
10751 ++++ b/net/mpls/af_mpls.c
10752 +@@ -8,6 +8,7 @@
10753 + #include <linux/ipv6.h>
10754 + #include <linux/mpls.h>
10755 + #include <linux/netconf.h>
10756 ++#include <linux/nospec.h>
10757 + #include <linux/vmalloc.h>
10758 + #include <linux/percpu.h>
10759 + #include <net/ip.h>
10760 +@@ -904,24 +905,27 @@ static int mpls_nh_build_multi(struct mpls_route_config *cfg,
10761 + return err;
10762 + }
10763 +
10764 +-static bool mpls_label_ok(struct net *net, unsigned int index,
10765 ++static bool mpls_label_ok(struct net *net, unsigned int *index,
10766 + struct netlink_ext_ack *extack)
10767 + {
10768 ++ bool is_ok = true;
10769 ++
10770 + /* Reserved labels may not be set */
10771 +- if (index < MPLS_LABEL_FIRST_UNRESERVED) {
10772 ++ if (*index < MPLS_LABEL_FIRST_UNRESERVED) {
10773 + NL_SET_ERR_MSG(extack,
10774 + "Invalid label - must be MPLS_LABEL_FIRST_UNRESERVED or higher");
10775 +- return false;
10776 ++ is_ok = false;
10777 + }
10778 +
10779 + /* The full 20 bit range may not be supported. */
10780 +- if (index >= net->mpls.platform_labels) {
10781 ++ if (is_ok && *index >= net->mpls.platform_labels) {
10782 + NL_SET_ERR_MSG(extack,
10783 + "Label >= configured maximum in platform_labels");
10784 +- return false;
10785 ++ is_ok = false;
10786 + }
10787 +
10788 +- return true;
10789 ++ *index = array_index_nospec(*index, net->mpls.platform_labels);
10790 ++ return is_ok;
10791 + }
10792 +
10793 + static int mpls_route_add(struct mpls_route_config *cfg,
10794 +@@ -944,7 +948,7 @@ static int mpls_route_add(struct mpls_route_config *cfg,
10795 + index = find_free_label(net);
10796 + }
10797 +
10798 +- if (!mpls_label_ok(net, index, extack))
10799 ++ if (!mpls_label_ok(net, &index, extack))
10800 + goto errout;
10801 +
10802 + /* Append makes no sense with mpls */
10803 +@@ -1021,7 +1025,7 @@ static int mpls_route_del(struct mpls_route_config *cfg,
10804 +
10805 + index = cfg->rc_label;
10806 +
10807 +- if (!mpls_label_ok(net, index, extack))
10808 ++ if (!mpls_label_ok(net, &index, extack))
10809 + goto errout;
10810 +
10811 + mpls_route_update(net, index, NULL, &cfg->rc_nlinfo);
10812 +@@ -1779,7 +1783,7 @@ static int rtm_to_route_config(struct sk_buff *skb,
10813 + goto errout;
10814 +
10815 + if (!mpls_label_ok(cfg->rc_nlinfo.nl_net,
10816 +- cfg->rc_label, extack))
10817 ++ &cfg->rc_label, extack))
10818 + goto errout;
10819 + break;
10820 + }
10821 +@@ -2106,7 +2110,7 @@ static int mpls_getroute(struct sk_buff *in_skb, struct nlmsghdr *in_nlh,
10822 + goto errout;
10823 + }
10824 +
10825 +- if (!mpls_label_ok(net, in_label, extack)) {
10826 ++ if (!mpls_label_ok(net, &in_label, extack)) {
10827 + err = -EINVAL;
10828 + goto errout;
10829 + }
10830 +diff --git a/net/socket.c b/net/socket.c
10831 +index d894c7c5fa54..43d2f17f5eea 100644
10832 +--- a/net/socket.c
10833 ++++ b/net/socket.c
10834 +@@ -568,7 +568,6 @@ struct socket *sock_alloc(void)
10835 +
10836 + sock = SOCKET_I(inode);
10837 +
10838 +- kmemcheck_annotate_bitfield(sock, type);
10839 + inode->i_ino = get_next_ino();
10840 + inode->i_mode = S_IFSOCK | S_IRWXUGO;
10841 + inode->i_uid = current_fsuid();
10842 +diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
10843 +index f1889f4d4803..491ae9fc561f 100644
10844 +--- a/net/sunrpc/xprtrdma/rpc_rdma.c
10845 ++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
10846 +@@ -142,7 +142,7 @@ static bool rpcrdma_args_inline(struct rpcrdma_xprt *r_xprt,
10847 + if (xdr->page_len) {
10848 + remaining = xdr->page_len;
10849 + offset = offset_in_page(xdr->page_base);
10850 +- count = 0;
10851 ++ count = RPCRDMA_MIN_SEND_SGES;
10852 + while (remaining) {
10853 + remaining -= min_t(unsigned int,
10854 + PAGE_SIZE - offset, remaining);
10855 +diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
10856 +index 11a1fbf7e59e..9e8e1de19b2e 100644
10857 +--- a/net/sunrpc/xprtrdma/verbs.c
10858 ++++ b/net/sunrpc/xprtrdma/verbs.c
10859 +@@ -523,7 +523,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
10860 + pr_warn("rpcrdma: HCA provides only %d send SGEs\n", max_sge);
10861 + return -ENOMEM;
10862 + }
10863 +- ia->ri_max_send_sges = max_sge - RPCRDMA_MIN_SEND_SGES;
10864 ++ ia->ri_max_send_sges = max_sge;
10865 +
10866 + if (ia->ri_device->attrs.max_qp_wr <= RPCRDMA_BACKWARD_WRS) {
10867 + dprintk("RPC: %s: insufficient wqe's available\n",
10868 +@@ -1331,6 +1331,9 @@ __rpcrdma_dma_map_regbuf(struct rpcrdma_ia *ia, struct rpcrdma_regbuf *rb)
10869 + static void
10870 + rpcrdma_dma_unmap_regbuf(struct rpcrdma_regbuf *rb)
10871 + {
10872 ++ if (!rb)
10873 ++ return;
10874 ++
10875 + if (!rpcrdma_regbuf_is_mapped(rb))
10876 + return;
10877 +
10878 +@@ -1346,9 +1349,6 @@ rpcrdma_dma_unmap_regbuf(struct rpcrdma_regbuf *rb)
10879 + void
10880 + rpcrdma_free_regbuf(struct rpcrdma_regbuf *rb)
10881 + {
10882 +- if (!rb)
10883 +- return;
10884 +-
10885 + rpcrdma_dma_unmap_regbuf(rb);
10886 + kfree(rb);
10887 + }
10888 +diff --git a/scripts/kernel-doc b/scripts/kernel-doc
10889 +index 9d3eafea58f0..8323ff9dec71 100755
10890 +--- a/scripts/kernel-doc
10891 ++++ b/scripts/kernel-doc
10892 +@@ -2182,8 +2182,6 @@ sub dump_struct($$) {
10893 + # strip comments:
10894 + $members =~ s/\/\*.*?\*\///gos;
10895 + $nested =~ s/\/\*.*?\*\///gos;
10896 +- # strip kmemcheck_bitfield_{begin,end}.*;
10897 +- $members =~ s/kmemcheck_bitfield_.*?;//gos;
10898 + # strip attributes
10899 + $members =~ s/__attribute__\s*\(\([a-z,_\*\s\(\)]*\)\)//i;
10900 + $members =~ s/__aligned\s*\([^;]*\)//gos;
10901 +diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
10902 +index ac30fc1ab98b..dea11d1babf5 100644
10903 +--- a/sound/core/seq/seq_clientmgr.c
10904 ++++ b/sound/core/seq/seq_clientmgr.c
10905 +@@ -999,7 +999,7 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
10906 + {
10907 + struct snd_seq_client *client = file->private_data;
10908 + int written = 0, len;
10909 +- int err = -EINVAL;
10910 ++ int err;
10911 + struct snd_seq_event event;
10912 +
10913 + if (!(snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT))
10914 +@@ -1014,11 +1014,15 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
10915 +
10916 + /* allocate the pool now if the pool is not allocated yet */
10917 + if (client->pool->size > 0 && !snd_seq_write_pool_allocated(client)) {
10918 +- if (snd_seq_pool_init(client->pool) < 0)
10919 ++ mutex_lock(&client->ioctl_mutex);
10920 ++ err = snd_seq_pool_init(client->pool);
10921 ++ mutex_unlock(&client->ioctl_mutex);
10922 ++ if (err < 0)
10923 + return -ENOMEM;
10924 + }
10925 +
10926 + /* only process whole events */
10927 ++ err = -EINVAL;
10928 + while (count >= sizeof(struct snd_seq_event)) {
10929 + /* Read in the event header from the user */
10930 + len = sizeof(event);
10931 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
10932 +index b2d039537d5e..b7acffdf16a4 100644
10933 +--- a/sound/pci/hda/patch_realtek.c
10934 ++++ b/sound/pci/hda/patch_realtek.c
10935 +@@ -3355,6 +3355,19 @@ static void alc269_fixup_pincfg_no_hp_to_lineout(struct hda_codec *codec,
10936 + spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
10937 + }
10938 +
10939 ++static void alc269_fixup_pincfg_U7x7_headset_mic(struct hda_codec *codec,
10940 ++ const struct hda_fixup *fix,
10941 ++ int action)
10942 ++{
10943 ++ unsigned int cfg_headphone = snd_hda_codec_get_pincfg(codec, 0x21);
10944 ++ unsigned int cfg_headset_mic = snd_hda_codec_get_pincfg(codec, 0x19);
10945 ++
10946 ++ if (cfg_headphone && cfg_headset_mic == 0x411111f0)
10947 ++ snd_hda_codec_set_pincfg(codec, 0x19,
10948 ++ (cfg_headphone & ~AC_DEFCFG_DEVICE) |
10949 ++ (AC_JACK_MIC_IN << AC_DEFCFG_DEVICE_SHIFT));
10950 ++}
10951 ++
10952 + static void alc269_fixup_hweq(struct hda_codec *codec,
10953 + const struct hda_fixup *fix, int action)
10954 + {
10955 +@@ -4827,6 +4840,28 @@ static void alc_fixup_tpt440_dock(struct hda_codec *codec,
10956 + }
10957 + }
10958 +
10959 ++static void alc_fixup_tpt470_dock(struct hda_codec *codec,
10960 ++ const struct hda_fixup *fix, int action)
10961 ++{
10962 ++ static const struct hda_pintbl pincfgs[] = {
10963 ++ { 0x17, 0x21211010 }, /* dock headphone */
10964 ++ { 0x19, 0x21a11010 }, /* dock mic */
10965 ++ { }
10966 ++ };
10967 ++ struct alc_spec *spec = codec->spec;
10968 ++
10969 ++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
10970 ++ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
10971 ++ /* Enable DOCK device */
10972 ++ snd_hda_codec_write(codec, 0x17, 0,
10973 ++ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
10974 ++ /* Enable DOCK device */
10975 ++ snd_hda_codec_write(codec, 0x19, 0,
10976 ++ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
10977 ++ snd_hda_apply_pincfgs(codec, pincfgs);
10978 ++ }
10979 ++}
10980 ++
10981 + static void alc_shutup_dell_xps13(struct hda_codec *codec)
10982 + {
10983 + struct alc_spec *spec = codec->spec;
10984 +@@ -5206,6 +5241,7 @@ enum {
10985 + ALC269_FIXUP_LIFEBOOK_EXTMIC,
10986 + ALC269_FIXUP_LIFEBOOK_HP_PIN,
10987 + ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT,
10988 ++ ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC,
10989 + ALC269_FIXUP_AMIC,
10990 + ALC269_FIXUP_DMIC,
10991 + ALC269VB_FIXUP_AMIC,
10992 +@@ -5301,6 +5337,7 @@ enum {
10993 + ALC700_FIXUP_INTEL_REFERENCE,
10994 + ALC274_FIXUP_DELL_BIND_DACS,
10995 + ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
10996 ++ ALC298_FIXUP_TPT470_DOCK,
10997 + };
10998 +
10999 + static const struct hda_fixup alc269_fixups[] = {
11000 +@@ -5411,6 +5448,10 @@ static const struct hda_fixup alc269_fixups[] = {
11001 + .type = HDA_FIXUP_FUNC,
11002 + .v.func = alc269_fixup_pincfg_no_hp_to_lineout,
11003 + },
11004 ++ [ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC] = {
11005 ++ .type = HDA_FIXUP_FUNC,
11006 ++ .v.func = alc269_fixup_pincfg_U7x7_headset_mic,
11007 ++ },
11008 + [ALC269_FIXUP_AMIC] = {
11009 + .type = HDA_FIXUP_PINS,
11010 + .v.pins = (const struct hda_pintbl[]) {
11011 +@@ -6126,6 +6167,12 @@ static const struct hda_fixup alc269_fixups[] = {
11012 + .chained = true,
11013 + .chain_id = ALC274_FIXUP_DELL_BIND_DACS
11014 + },
11015 ++ [ALC298_FIXUP_TPT470_DOCK] = {
11016 ++ .type = HDA_FIXUP_FUNC,
11017 ++ .v.func = alc_fixup_tpt470_dock,
11018 ++ .chained = true,
11019 ++ .chain_id = ALC293_FIXUP_LENOVO_SPK_NOISE
11020 ++ },
11021 + };
11022 +
11023 + static const struct snd_pci_quirk alc269_fixup_tbl[] = {
11024 +@@ -6176,6 +6223,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
11025 + SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
11026 + SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
11027 + SND_PCI_QUIRK(0x1028, 0x082a, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
11028 ++ SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
11029 ++ SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
11030 + SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
11031 + SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
11032 + SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
11033 +@@ -6277,6 +6326,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
11034 + SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
11035 + SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
11036 + SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
11037 ++ SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
11038 + SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
11039 + SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
11040 + SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
11041 +@@ -6305,8 +6355,16 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
11042 + SND_PCI_QUIRK(0x17aa, 0x2218, "Thinkpad X1 Carbon 2nd", ALC292_FIXUP_TPT440_DOCK),
11043 + SND_PCI_QUIRK(0x17aa, 0x2223, "ThinkPad T550", ALC292_FIXUP_TPT440_DOCK),
11044 + SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
11045 ++ SND_PCI_QUIRK(0x17aa, 0x222d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11046 ++ SND_PCI_QUIRK(0x17aa, 0x222e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11047 + SND_PCI_QUIRK(0x17aa, 0x2231, "Thinkpad T560", ALC292_FIXUP_TPT460),
11048 + SND_PCI_QUIRK(0x17aa, 0x2233, "Thinkpad", ALC292_FIXUP_TPT460),
11049 ++ SND_PCI_QUIRK(0x17aa, 0x2245, "Thinkpad T470", ALC298_FIXUP_TPT470_DOCK),
11050 ++ SND_PCI_QUIRK(0x17aa, 0x2246, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11051 ++ SND_PCI_QUIRK(0x17aa, 0x2247, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11052 ++ SND_PCI_QUIRK(0x17aa, 0x224b, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11053 ++ SND_PCI_QUIRK(0x17aa, 0x224c, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11054 ++ SND_PCI_QUIRK(0x17aa, 0x224d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11055 + SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
11056 + SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
11057 + SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
11058 +@@ -6327,7 +6385,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
11059 + SND_PCI_QUIRK(0x17aa, 0x5050, "Thinkpad T560p", ALC292_FIXUP_TPT460),
11060 + SND_PCI_QUIRK(0x17aa, 0x5051, "Thinkpad L460", ALC292_FIXUP_TPT460),
11061 + SND_PCI_QUIRK(0x17aa, 0x5053, "Thinkpad T460", ALC292_FIXUP_TPT460),
11062 ++ SND_PCI_QUIRK(0x17aa, 0x505d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11063 ++ SND_PCI_QUIRK(0x17aa, 0x505f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11064 ++ SND_PCI_QUIRK(0x17aa, 0x5062, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11065 + SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
11066 ++ SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11067 ++ SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
11068 + SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
11069 + SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
11070 + SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
11071 +@@ -6584,6 +6647,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
11072 + {0x12, 0xb7a60130},
11073 + {0x14, 0x90170110},
11074 + {0x21, 0x02211020}),
11075 ++ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
11076 ++ {0x12, 0x90a60130},
11077 ++ {0x14, 0x90170110},
11078 ++ {0x14, 0x01011020},
11079 ++ {0x21, 0x0221101f}),
11080 + SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
11081 + ALC256_STANDARD_PINS),
11082 + SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC,
11083 +@@ -6653,6 +6721,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
11084 + {0x12, 0x90a60120},
11085 + {0x14, 0x90170110},
11086 + {0x21, 0x0321101f}),
11087 ++ SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
11088 ++ {0x12, 0xb7a60130},
11089 ++ {0x14, 0x90170110},
11090 ++ {0x21, 0x04211020}),
11091 + SND_HDA_PIN_QUIRK(0x10ec0290, 0x103c, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1,
11092 + ALC290_STANDARD_PINS,
11093 + {0x15, 0x04211040},
11094 +diff --git a/sound/soc/intel/common/sst-match-acpi.c b/sound/soc/intel/common/sst-match-acpi.c
11095 +index 56d26f36a3cb..b4a929562218 100644
11096 +--- a/sound/soc/intel/common/sst-match-acpi.c
11097 ++++ b/sound/soc/intel/common/sst-match-acpi.c
11098 +@@ -83,11 +83,9 @@ struct sst_acpi_mach *sst_acpi_find_machine(struct sst_acpi_mach *machines)
11099 +
11100 + for (mach = machines; mach->id[0]; mach++) {
11101 + if (sst_acpi_check_hid(mach->id) == true) {
11102 +- if (mach->machine_quirk == NULL)
11103 +- return mach;
11104 +-
11105 +- if (mach->machine_quirk(mach) != NULL)
11106 +- return mach;
11107 ++ if (mach->machine_quirk)
11108 ++ mach = mach->machine_quirk(mach);
11109 ++ return mach;
11110 + }
11111 + }
11112 + return NULL;
11113 +diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
11114 +index 75bce127d768..89efec891e68 100644
11115 +--- a/sound/usb/mixer.c
11116 ++++ b/sound/usb/mixer.c
11117 +@@ -347,17 +347,20 @@ static int get_ctl_value_v2(struct usb_mixer_elem_info *cval, int request,
11118 + int validx, int *value_ret)
11119 + {
11120 + struct snd_usb_audio *chip = cval->head.mixer->chip;
11121 +- unsigned char buf[4 + 3 * sizeof(__u32)]; /* enough space for one range */
11122 ++ /* enough space for one range */
11123 ++ unsigned char buf[sizeof(__u16) + 3 * sizeof(__u32)];
11124 + unsigned char *val;
11125 +- int idx = 0, ret, size;
11126 ++ int idx = 0, ret, val_size, size;
11127 + __u8 bRequest;
11128 +
11129 ++ val_size = uac2_ctl_value_size(cval->val_type);
11130 ++
11131 + if (request == UAC_GET_CUR) {
11132 + bRequest = UAC2_CS_CUR;
11133 +- size = uac2_ctl_value_size(cval->val_type);
11134 ++ size = val_size;
11135 + } else {
11136 + bRequest = UAC2_CS_RANGE;
11137 +- size = sizeof(buf);
11138 ++ size = sizeof(__u16) + 3 * val_size;
11139 + }
11140 +
11141 + memset(buf, 0, sizeof(buf));
11142 +@@ -390,16 +393,17 @@ static int get_ctl_value_v2(struct usb_mixer_elem_info *cval, int request,
11143 + val = buf + sizeof(__u16);
11144 + break;
11145 + case UAC_GET_MAX:
11146 +- val = buf + sizeof(__u16) * 2;
11147 ++ val = buf + sizeof(__u16) + val_size;
11148 + break;
11149 + case UAC_GET_RES:
11150 +- val = buf + sizeof(__u16) * 3;
11151 ++ val = buf + sizeof(__u16) + val_size * 2;
11152 + break;
11153 + default:
11154 + return -EINVAL;
11155 + }
11156 +
11157 +- *value_ret = convert_signed_value(cval, snd_usb_combine_bytes(val, sizeof(__u16)));
11158 ++ *value_ret = convert_signed_value(cval,
11159 ++ snd_usb_combine_bytes(val, val_size));
11160 +
11161 + return 0;
11162 + }
11163 +diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
11164 +index b9c9a19f9588..3cbfae6604f9 100644
11165 +--- a/sound/usb/pcm.c
11166 ++++ b/sound/usb/pcm.c
11167 +@@ -352,6 +352,15 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs,
11168 + ep = 0x86;
11169 + iface = usb_ifnum_to_if(dev, 2);
11170 +
11171 ++ if (!iface || iface->num_altsetting == 0)
11172 ++ return -EINVAL;
11173 ++
11174 ++ alts = &iface->altsetting[1];
11175 ++ goto add_sync_ep;
11176 ++ case USB_ID(0x1397, 0x0002):
11177 ++ ep = 0x81;
11178 ++ iface = usb_ifnum_to_if(dev, 1);
11179 ++
11180 + if (!iface || iface->num_altsetting == 0)
11181 + return -EINVAL;
11182 +
11183 +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
11184 +index 8d7db7cd4f88..ed56cd307059 100644
11185 +--- a/sound/usb/quirks.c
11186 ++++ b/sound/usb/quirks.c
11187 +@@ -1369,8 +1369,11 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
11188 + return SNDRV_PCM_FMTBIT_DSD_U32_BE;
11189 + break;
11190 +
11191 +- /* Amanero Combo384 USB interface with native DSD support */
11192 +- case USB_ID(0x16d0, 0x071a):
11193 ++ /* Amanero Combo384 USB based DACs with native DSD support */
11194 ++ case USB_ID(0x16d0, 0x071a): /* Amanero - Combo384 */
11195 ++ case USB_ID(0x2ab6, 0x0004): /* T+A DAC8DSD-V2.0, MP1000E-V2.0, MP2000R-V2.0, MP2500R-V2.0, MP3100HV-V2.0 */
11196 ++ case USB_ID(0x2ab6, 0x0005): /* T+A USB HD Audio 1 */
11197 ++ case USB_ID(0x2ab6, 0x0006): /* T+A USB HD Audio 2 */
11198 + if (fp->altsetting == 2) {
11199 + switch (le16_to_cpu(chip->dev->descriptor.bcdDevice)) {
11200 + case 0x199:
11201 +diff --git a/tools/include/linux/kmemcheck.h b/tools/include/linux/kmemcheck.h
11202 +deleted file mode 100644
11203 +index 2bccd2c7b897..000000000000
11204 +--- a/tools/include/linux/kmemcheck.h
11205 ++++ /dev/null
11206 +@@ -1,9 +0,0 @@
11207 +-/* SPDX-License-Identifier: GPL-2.0 */
11208 +-#ifndef _LIBLOCKDEP_LINUX_KMEMCHECK_H_
11209 +-#define _LIBLOCKDEP_LINUX_KMEMCHECK_H_
11210 +-
11211 +-static inline void kmemcheck_mark_initialized(void *address, unsigned int n)
11212 +-{
11213 +-}
11214 +-
11215 +-#endif
11216 +diff --git a/tools/objtool/check.c b/tools/objtool/check.c
11217 +index 2e458eb45586..c7fb5c2392ee 100644
11218 +--- a/tools/objtool/check.c
11219 ++++ b/tools/objtool/check.c
11220 +@@ -1935,13 +1935,19 @@ static bool ignore_unreachable_insn(struct instruction *insn)
11221 + if (is_kasan_insn(insn) || is_ubsan_insn(insn))
11222 + return true;
11223 +
11224 +- if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->jump_dest) {
11225 +- insn = insn->jump_dest;
11226 +- continue;
11227 ++ if (insn->type == INSN_JUMP_UNCONDITIONAL) {
11228 ++ if (insn->jump_dest &&
11229 ++ insn->jump_dest->func == insn->func) {
11230 ++ insn = insn->jump_dest;
11231 ++ continue;
11232 ++ }
11233 ++
11234 ++ break;
11235 + }
11236 +
11237 + if (insn->offset + insn->len >= insn->func->offset + insn->func->len)
11238 + break;
11239 ++
11240 + insn = list_next_entry(insn, list);
11241 + }
11242 +
11243 +diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
11244 +index 35d4b9c9a9e8..9e693ce4b73b 100644
11245 +--- a/tools/perf/builtin-kmem.c
11246 ++++ b/tools/perf/builtin-kmem.c
11247 +@@ -655,7 +655,6 @@ static const struct {
11248 + { "__GFP_RECLAIMABLE", "RC" },
11249 + { "__GFP_MOVABLE", "M" },
11250 + { "__GFP_ACCOUNT", "AC" },
11251 +- { "__GFP_NOTRACK", "NT" },
11252 + { "__GFP_WRITE", "WR" },
11253 + { "__GFP_RECLAIM", "R" },
11254 + { "__GFP_DIRECT_RECLAIM", "DR" },
11255 +diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
11256 +index 24dbf634e2dd..0b457e8e0f0c 100644
11257 +--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
11258 ++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
11259 +@@ -1717,7 +1717,7 @@ void tracer_ptrace(struct __test_metadata *_metadata, pid_t tracee,
11260 +
11261 + if (nr == __NR_getpid)
11262 + change_syscall(_metadata, tracee, __NR_getppid);
11263 +- if (nr == __NR_open)
11264 ++ if (nr == __NR_openat)
11265 + change_syscall(_metadata, tracee, -1);
11266 + }
11267 +
11268 +@@ -1792,7 +1792,7 @@ TEST_F(TRACE_syscall, ptrace_syscall_dropped)
11269 + true);
11270 +
11271 + /* Tracer should skip the open syscall, resulting in EPERM. */
11272 +- EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_open));
11273 ++ EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_openat));
11274 + }
11275 +
11276 + TEST_F(TRACE_syscall, syscall_allowed)
11277 +diff --git a/tools/testing/selftests/vm/compaction_test.c b/tools/testing/selftests/vm/compaction_test.c
11278 +index a65b016d4c13..1097f04e4d80 100644
11279 +--- a/tools/testing/selftests/vm/compaction_test.c
11280 ++++ b/tools/testing/selftests/vm/compaction_test.c
11281 +@@ -137,6 +137,8 @@ int check_compaction(unsigned long mem_free, unsigned int hugepage_size)
11282 + printf("No of huge pages allocated = %d\n",
11283 + (atoi(nr_hugepages)));
11284 +
11285 ++ lseek(fd, 0, SEEK_SET);
11286 ++
11287 + if (write(fd, initial_nr_hugepages, strlen(initial_nr_hugepages))
11288 + != strlen(initial_nr_hugepages)) {
11289 + perror("Failed to write value to /proc/sys/vm/nr_hugepages\n");
11290 +diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
11291 +index 91fbfa8fdc15..aa6e2d7f6a1f 100644
11292 +--- a/tools/testing/selftests/x86/Makefile
11293 ++++ b/tools/testing/selftests/x86/Makefile
11294 +@@ -5,16 +5,26 @@ include ../lib.mk
11295 +
11296 + .PHONY: all all_32 all_64 warn_32bit_failure clean
11297 +
11298 +-TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt ptrace_syscall test_mremap_vdso \
11299 +- check_initial_reg_state sigreturn ldt_gdt iopl mpx-mini-test ioperm \
11300 ++UNAME_M := $(shell uname -m)
11301 ++CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
11302 ++CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
11303 ++
11304 ++TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
11305 ++ check_initial_reg_state sigreturn iopl mpx-mini-test ioperm \
11306 + protection_keys test_vdso test_vsyscall
11307 + TARGETS_C_32BIT_ONLY := entry_from_vm86 syscall_arg_fault test_syscall_vdso unwind_vdso \
11308 + test_FCMOV test_FCOMI test_FISTTP \
11309 + vdso_restorer
11310 + TARGETS_C_64BIT_ONLY := fsgsbase sysret_rip
11311 ++# Some selftests require 32bit support enabled also on 64bit systems
11312 ++TARGETS_C_32BIT_NEEDED := ldt_gdt ptrace_syscall
11313 +
11314 +-TARGETS_C_32BIT_ALL := $(TARGETS_C_BOTHBITS) $(TARGETS_C_32BIT_ONLY)
11315 ++TARGETS_C_32BIT_ALL := $(TARGETS_C_BOTHBITS) $(TARGETS_C_32BIT_ONLY) $(TARGETS_C_32BIT_NEEDED)
11316 + TARGETS_C_64BIT_ALL := $(TARGETS_C_BOTHBITS) $(TARGETS_C_64BIT_ONLY)
11317 ++ifeq ($(CAN_BUILD_I386)$(CAN_BUILD_X86_64),11)
11318 ++TARGETS_C_64BIT_ALL += $(TARGETS_C_32BIT_NEEDED)
11319 ++endif
11320 ++
11321 + BINARIES_32 := $(TARGETS_C_32BIT_ALL:%=%_32)
11322 + BINARIES_64 := $(TARGETS_C_64BIT_ALL:%=%_64)
11323 +
11324 +@@ -23,18 +33,16 @@ BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64))
11325 +
11326 + CFLAGS := -O2 -g -std=gnu99 -pthread -Wall -no-pie
11327 +
11328 +-UNAME_M := $(shell uname -m)
11329 +-CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
11330 +-CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
11331 +-
11332 + ifeq ($(CAN_BUILD_I386),1)
11333 + all: all_32
11334 + TEST_PROGS += $(BINARIES_32)
11335 ++EXTRA_CFLAGS += -DCAN_BUILD_32
11336 + endif
11337 +
11338 + ifeq ($(CAN_BUILD_X86_64),1)
11339 + all: all_64
11340 + TEST_PROGS += $(BINARIES_64)
11341 ++EXTRA_CFLAGS += -DCAN_BUILD_64
11342 + endif
11343 +
11344 + all_32: $(BINARIES_32)
11345 +diff --git a/tools/testing/selftests/x86/mpx-mini-test.c b/tools/testing/selftests/x86/mpx-mini-test.c
11346 +index ec0f6b45ce8b..9c0325e1ea68 100644
11347 +--- a/tools/testing/selftests/x86/mpx-mini-test.c
11348 ++++ b/tools/testing/selftests/x86/mpx-mini-test.c
11349 +@@ -315,11 +315,39 @@ static inline void *__si_bounds_upper(siginfo_t *si)
11350 + return si->si_upper;
11351 + }
11352 + #else
11353 ++
11354 ++/*
11355 ++ * This deals with old version of _sigfault in some distros:
11356 ++ *
11357 ++
11358 ++old _sigfault:
11359 ++ struct {
11360 ++ void *si_addr;
11361 ++ } _sigfault;
11362 ++
11363 ++new _sigfault:
11364 ++ struct {
11365 ++ void __user *_addr;
11366 ++ int _trapno;
11367 ++ short _addr_lsb;
11368 ++ union {
11369 ++ struct {
11370 ++ void __user *_lower;
11371 ++ void __user *_upper;
11372 ++ } _addr_bnd;
11373 ++ __u32 _pkey;
11374 ++ };
11375 ++ } _sigfault;
11376 ++ *
11377 ++ */
11378 ++
11379 + static inline void **__si_bounds_hack(siginfo_t *si)
11380 + {
11381 + void *sigfault = &si->_sifields._sigfault;
11382 + void *end_sigfault = sigfault + sizeof(si->_sifields._sigfault);
11383 +- void **__si_lower = end_sigfault;
11384 ++ int *trapno = (int*)end_sigfault;
11385 ++ /* skip _trapno and _addr_lsb */
11386 ++ void **__si_lower = (void**)(trapno + 2);
11387 +
11388 + return __si_lower;
11389 + }
11390 +@@ -331,7 +359,7 @@ static inline void *__si_bounds_lower(siginfo_t *si)
11391 +
11392 + static inline void *__si_bounds_upper(siginfo_t *si)
11393 + {
11394 +- return (*__si_bounds_hack(si)) + sizeof(void *);
11395 ++ return *(__si_bounds_hack(si) + 1);
11396 + }
11397 + #endif
11398 +
11399 +diff --git a/tools/testing/selftests/x86/protection_keys.c b/tools/testing/selftests/x86/protection_keys.c
11400 +index 7a1cc0e56d2d..6cbb83b47150 100644
11401 +--- a/tools/testing/selftests/x86/protection_keys.c
11402 ++++ b/tools/testing/selftests/x86/protection_keys.c
11403 +@@ -393,34 +393,6 @@ pid_t fork_lazy_child(void)
11404 + return forkret;
11405 + }
11406 +
11407 +-void davecmp(void *_a, void *_b, int len)
11408 +-{
11409 +- int i;
11410 +- unsigned long *a = _a;
11411 +- unsigned long *b = _b;
11412 +-
11413 +- for (i = 0; i < len / sizeof(*a); i++) {
11414 +- if (a[i] == b[i])
11415 +- continue;
11416 +-
11417 +- dprintf3("[%3d]: a: %016lx b: %016lx\n", i, a[i], b[i]);
11418 +- }
11419 +-}
11420 +-
11421 +-void dumpit(char *f)
11422 +-{
11423 +- int fd = open(f, O_RDONLY);
11424 +- char buf[100];
11425 +- int nr_read;
11426 +-
11427 +- dprintf2("maps fd: %d\n", fd);
11428 +- do {
11429 +- nr_read = read(fd, &buf[0], sizeof(buf));
11430 +- write(1, buf, nr_read);
11431 +- } while (nr_read > 0);
11432 +- close(fd);
11433 +-}
11434 +-
11435 + #define PKEY_DISABLE_ACCESS 0x1
11436 + #define PKEY_DISABLE_WRITE 0x2
11437 +
11438 +diff --git a/tools/testing/selftests/x86/single_step_syscall.c b/tools/testing/selftests/x86/single_step_syscall.c
11439 +index a48da95c18fd..ddfdd635de16 100644
11440 +--- a/tools/testing/selftests/x86/single_step_syscall.c
11441 ++++ b/tools/testing/selftests/x86/single_step_syscall.c
11442 +@@ -119,7 +119,9 @@ static void check_result(void)
11443 +
11444 + int main()
11445 + {
11446 ++#ifdef CAN_BUILD_32
11447 + int tmp;
11448 ++#endif
11449 +
11450 + sethandler(SIGTRAP, sigtrap, 0);
11451 +
11452 +@@ -139,12 +141,13 @@ int main()
11453 + : : "c" (post_nop) : "r11");
11454 + check_result();
11455 + #endif
11456 +-
11457 ++#ifdef CAN_BUILD_32
11458 + printf("[RUN]\tSet TF and check int80\n");
11459 + set_eflags(get_eflags() | X86_EFLAGS_TF);
11460 + asm volatile ("int $0x80" : "=a" (tmp) : "a" (SYS_getpid)
11461 + : INT80_CLOBBERS);
11462 + check_result();
11463 ++#endif
11464 +
11465 + /*
11466 + * This test is particularly interesting if fast syscalls use
11467 +diff --git a/tools/testing/selftests/x86/test_mremap_vdso.c b/tools/testing/selftests/x86/test_mremap_vdso.c
11468 +index bf0d687c7db7..64f11c8d9b76 100644
11469 +--- a/tools/testing/selftests/x86/test_mremap_vdso.c
11470 ++++ b/tools/testing/selftests/x86/test_mremap_vdso.c
11471 +@@ -90,8 +90,12 @@ int main(int argc, char **argv, char **envp)
11472 + vdso_size += PAGE_SIZE;
11473 + }
11474 +
11475 ++#ifdef __i386__
11476 + /* Glibc is likely to explode now - exit with raw syscall */
11477 + asm volatile ("int $0x80" : : "a" (__NR_exit), "b" (!!ret));
11478 ++#else /* __x86_64__ */
11479 ++ syscall(SYS_exit, ret);
11480 ++#endif
11481 + } else {
11482 + int status;
11483 +
11484 +diff --git a/tools/testing/selftests/x86/test_vdso.c b/tools/testing/selftests/x86/test_vdso.c
11485 +index 29973cde06d3..235259011704 100644
11486 +--- a/tools/testing/selftests/x86/test_vdso.c
11487 ++++ b/tools/testing/selftests/x86/test_vdso.c
11488 +@@ -26,20 +26,59 @@
11489 + # endif
11490 + #endif
11491 +
11492 ++/* max length of lines in /proc/self/maps - anything longer is skipped here */
11493 ++#define MAPS_LINE_LEN 128
11494 ++
11495 + int nerrs = 0;
11496 +
11497 ++typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
11498 ++
11499 ++getcpu_t vgetcpu;
11500 ++getcpu_t vdso_getcpu;
11501 ++
11502 ++static void *vsyscall_getcpu(void)
11503 ++{
11504 + #ifdef __x86_64__
11505 +-# define VSYS(x) (x)
11506 ++ FILE *maps;
11507 ++ char line[MAPS_LINE_LEN];
11508 ++ bool found = false;
11509 ++
11510 ++ maps = fopen("/proc/self/maps", "r");
11511 ++ if (!maps) /* might still be present, but ignore it here, as we test vDSO not vsyscall */
11512 ++ return NULL;
11513 ++
11514 ++ while (fgets(line, MAPS_LINE_LEN, maps)) {
11515 ++ char r, x;
11516 ++ void *start, *end;
11517 ++ char name[MAPS_LINE_LEN];
11518 ++
11519 ++ /* sscanf() is safe here as strlen(name) >= strlen(line) */
11520 ++ if (sscanf(line, "%p-%p %c-%cp %*x %*x:%*x %*u %s",
11521 ++ &start, &end, &r, &x, name) != 5)
11522 ++ continue;
11523 ++
11524 ++ if (strcmp(name, "[vsyscall]"))
11525 ++ continue;
11526 ++
11527 ++ /* assume entries are OK, as we test vDSO here not vsyscall */
11528 ++ found = true;
11529 ++ break;
11530 ++ }
11531 ++
11532 ++ fclose(maps);
11533 ++
11534 ++ if (!found) {
11535 ++ printf("Warning: failed to find vsyscall getcpu\n");
11536 ++ return NULL;
11537 ++ }
11538 ++ return (void *) (0xffffffffff600800);
11539 + #else
11540 +-# define VSYS(x) 0
11541 ++ return NULL;
11542 + #endif
11543 ++}
11544 +
11545 +-typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
11546 +-
11547 +-const getcpu_t vgetcpu = (getcpu_t)VSYS(0xffffffffff600800);
11548 +-getcpu_t vdso_getcpu;
11549 +
11550 +-void fill_function_pointers()
11551 ++static void fill_function_pointers()
11552 + {
11553 + void *vdso = dlopen("linux-vdso.so.1",
11554 + RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
11555 +@@ -54,6 +93,8 @@ void fill_function_pointers()
11556 + vdso_getcpu = (getcpu_t)dlsym(vdso, "__vdso_getcpu");
11557 + if (!vdso_getcpu)
11558 + printf("Warning: failed to find getcpu in vDSO\n");
11559 ++
11560 ++ vgetcpu = (getcpu_t) vsyscall_getcpu();
11561 + }
11562 +
11563 + static long sys_getcpu(unsigned * cpu, unsigned * node,
11564 +diff --git a/tools/testing/selftests/x86/test_vsyscall.c b/tools/testing/selftests/x86/test_vsyscall.c
11565 +index 6e0bd52ad53d..003b6c55b10e 100644
11566 +--- a/tools/testing/selftests/x86/test_vsyscall.c
11567 ++++ b/tools/testing/selftests/x86/test_vsyscall.c
11568 +@@ -33,6 +33,9 @@
11569 + # endif
11570 + #endif
11571 +
11572 ++/* max length of lines in /proc/self/maps - anything longer is skipped here */
11573 ++#define MAPS_LINE_LEN 128
11574 ++
11575 + static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *),
11576 + int flags)
11577 + {
11578 +@@ -98,7 +101,7 @@ static int init_vsys(void)
11579 + #ifdef __x86_64__
11580 + int nerrs = 0;
11581 + FILE *maps;
11582 +- char line[128];
11583 ++ char line[MAPS_LINE_LEN];
11584 + bool found = false;
11585 +
11586 + maps = fopen("/proc/self/maps", "r");
11587 +@@ -108,10 +111,12 @@ static int init_vsys(void)
11588 + return 0;
11589 + }
11590 +
11591 +- while (fgets(line, sizeof(line), maps)) {
11592 ++ while (fgets(line, MAPS_LINE_LEN, maps)) {
11593 + char r, x;
11594 + void *start, *end;
11595 +- char name[128];
11596 ++ char name[MAPS_LINE_LEN];
11597 ++
11598 ++ /* sscanf() is safe here as strlen(name) >= strlen(line) */
11599 + if (sscanf(line, "%p-%p %c-%cp %*x %*x:%*x %*u %s",
11600 + &start, &end, &r, &x, name) != 5)
11601 + continue;