Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
Date: Thu, 22 Feb 2018 23:23:45
Message-Id: 1519341805.d603d53d71f6a32e75ad20ea841b50b6b9c15bf7.mpagano@gentoo
1 commit: d603d53d71f6a32e75ad20ea841b50b6b9c15bf7
2 Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
3 AuthorDate: Thu Feb 22 23:23:25 2018 +0000
4 Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
5 CommitDate: Thu Feb 22 23:23:25 2018 +0000
6 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d603d53d
7
8 Linux patch 4.14.21
9
10 0000_README | 4 +
11 1020_linux-4.14.21.patch | 11566 +++++++++++++++++++++++++++++++++++++++++++++
12 2 files changed, 11570 insertions(+)
13
14 diff --git a/0000_README b/0000_README
15 index 7fd6d67..f9abc2d 100644
16 --- a/0000_README
17 +++ b/0000_README
18 @@ -123,6 +123,10 @@ Patch: 1019_linux-4.14.20.patch
19 From: http://www.kernel.org
20 Desc: Linux 4.14.20
21
22 +Patch: 1020_linux-4.14.21.patch
23 +From: http://www.kernel.org
24 +Desc: Linux 4.14.21
25 +
26 Patch: 1500_XATTR_USER_PREFIX.patch
27 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
28 Desc: Support for namespace user.pax.* on tmpfs.
29
30 diff --git a/1020_linux-4.14.21.patch b/1020_linux-4.14.21.patch
31 new file mode 100644
32 index 0000000..f26afa9
33 --- /dev/null
34 +++ b/1020_linux-4.14.21.patch
35 @@ -0,0 +1,11566 @@
36 +diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
37 +index c76afdcafbef..fb385af482ff 100644
38 +--- a/Documentation/admin-guide/kernel-parameters.txt
39 ++++ b/Documentation/admin-guide/kernel-parameters.txt
40 +@@ -1841,13 +1841,6 @@
41 + Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
42 + the default is off.
43 +
44 +- kmemcheck= [X86] Boot-time kmemcheck enable/disable/one-shot mode
45 +- Valid arguments: 0, 1, 2
46 +- kmemcheck=0 (disabled)
47 +- kmemcheck=1 (enabled)
48 +- kmemcheck=2 (one-shot mode)
49 +- Default: 2 (one-shot mode)
50 +-
51 + kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
52 + Default is 0 (don't ignore, but inject #GP)
53 +
54 +diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
55 +index a81787cd47d7..e313925fb0fa 100644
56 +--- a/Documentation/dev-tools/index.rst
57 ++++ b/Documentation/dev-tools/index.rst
58 +@@ -21,7 +21,6 @@ whole; patches welcome!
59 + kasan
60 + ubsan
61 + kmemleak
62 +- kmemcheck
63 + gdb-kernel-debugging
64 + kgdb
65 + kselftest
66 +diff --git a/Documentation/dev-tools/kmemcheck.rst b/Documentation/dev-tools/kmemcheck.rst
67 +deleted file mode 100644
68 +index 7f3d1985de74..000000000000
69 +--- a/Documentation/dev-tools/kmemcheck.rst
70 ++++ /dev/null
71 +@@ -1,733 +0,0 @@
72 +-Getting started with kmemcheck
73 +-==============================
74 +-
75 +-Vegard Nossum <vegardno@×××××××.no>
76 +-
77 +-
78 +-Introduction
79 +-------------
80 +-
81 +-kmemcheck is a debugging feature for the Linux Kernel. More specifically, it
82 +-is a dynamic checker that detects and warns about some uses of uninitialized
83 +-memory.
84 +-
85 +-Userspace programmers might be familiar with Valgrind's memcheck. The main
86 +-difference between memcheck and kmemcheck is that memcheck works for userspace
87 +-programs only, and kmemcheck works for the kernel only. The implementations
88 +-are of course vastly different. Because of this, kmemcheck is not as accurate
89 +-as memcheck, but it turns out to be good enough in practice to discover real
90 +-programmer errors that the compiler is not able to find through static
91 +-analysis.
92 +-
93 +-Enabling kmemcheck on a kernel will probably slow it down to the extent that
94 +-the machine will not be usable for normal workloads such as e.g. an
95 +-interactive desktop. kmemcheck will also cause the kernel to use about twice
96 +-as much memory as normal. For this reason, kmemcheck is strictly a debugging
97 +-feature.
98 +-
99 +-
100 +-Downloading
101 +------------
102 +-
103 +-As of version 2.6.31-rc1, kmemcheck is included in the mainline kernel.
104 +-
105 +-
106 +-Configuring and compiling
107 +--------------------------
108 +-
109 +-kmemcheck only works for the x86 (both 32- and 64-bit) platform. A number of
110 +-configuration variables must have specific settings in order for the kmemcheck
111 +-menu to even appear in "menuconfig". These are:
112 +-
113 +-- ``CONFIG_CC_OPTIMIZE_FOR_SIZE=n``
114 +- This option is located under "General setup" / "Optimize for size".
115 +-
116 +- Without this, gcc will use certain optimizations that usually lead to
117 +- false positive warnings from kmemcheck. An example of this is a 16-bit
118 +- field in a struct, where gcc may load 32 bits, then discard the upper
119 +- 16 bits. kmemcheck sees only the 32-bit load, and may trigger a
120 +- warning for the upper 16 bits (if they're uninitialized).
121 +-
122 +-- ``CONFIG_SLAB=y`` or ``CONFIG_SLUB=y``
123 +- This option is located under "General setup" / "Choose SLAB
124 +- allocator".
125 +-
126 +-- ``CONFIG_FUNCTION_TRACER=n``
127 +- This option is located under "Kernel hacking" / "Tracers" / "Kernel
128 +- Function Tracer"
129 +-
130 +- When function tracing is compiled in, gcc emits a call to another
131 +- function at the beginning of every function. This means that when the
132 +- page fault handler is called, the ftrace framework will be called
133 +- before kmemcheck has had a chance to handle the fault. If ftrace then
134 +- modifies memory that was tracked by kmemcheck, the result is an
135 +- endless recursive page fault.
136 +-
137 +-- ``CONFIG_DEBUG_PAGEALLOC=n``
138 +- This option is located under "Kernel hacking" / "Memory Debugging"
139 +- / "Debug page memory allocations".
140 +-
141 +-In addition, I highly recommend turning on ``CONFIG_DEBUG_INFO=y``. This is also
142 +-located under "Kernel hacking". With this, you will be able to get line number
143 +-information from the kmemcheck warnings, which is extremely valuable in
144 +-debugging a problem. This option is not mandatory, however, because it slows
145 +-down the compilation process and produces a much bigger kernel image.
146 +-
147 +-Now the kmemcheck menu should be visible (under "Kernel hacking" / "Memory
148 +-Debugging" / "kmemcheck: trap use of uninitialized memory"). Here follows
149 +-a description of the kmemcheck configuration variables:
150 +-
151 +-- ``CONFIG_KMEMCHECK``
152 +- This must be enabled in order to use kmemcheck at all...
153 +-
154 +-- ``CONFIG_KMEMCHECK_``[``DISABLED`` | ``ENABLED`` | ``ONESHOT``]``_BY_DEFAULT``
155 +- This option controls the status of kmemcheck at boot-time. "Enabled"
156 +- will enable kmemcheck right from the start, "disabled" will boot the
157 +- kernel as normal (but with the kmemcheck code compiled in, so it can
158 +- be enabled at run-time after the kernel has booted), and "one-shot" is
159 +- a special mode which will turn kmemcheck off automatically after
160 +- detecting the first use of uninitialized memory.
161 +-
162 +- If you are using kmemcheck to actively debug a problem, then you
163 +- probably want to choose "enabled" here.
164 +-
165 +- The one-shot mode is mostly useful in automated test setups because it
166 +- can prevent floods of warnings and increase the chances of the machine
167 +- surviving in case something is really wrong. In other cases, the one-
168 +- shot mode could actually be counter-productive because it would turn
169 +- itself off at the very first error -- in the case of a false positive
170 +- too -- and this would come in the way of debugging the specific
171 +- problem you were interested in.
172 +-
173 +- If you would like to use your kernel as normal, but with a chance to
174 +- enable kmemcheck in case of some problem, it might be a good idea to
175 +- choose "disabled" here. When kmemcheck is disabled, most of the run-
176 +- time overhead is not incurred, and the kernel will be almost as fast
177 +- as normal.
178 +-
179 +-- ``CONFIG_KMEMCHECK_QUEUE_SIZE``
180 +- Select the maximum number of error reports to store in an internal
181 +- (fixed-size) buffer. Since errors can occur virtually anywhere and in
182 +- any context, we need a temporary storage area which is guaranteed not
183 +- to generate any other page faults when accessed. The queue will be
184 +- emptied as soon as a tasklet may be scheduled. If the queue is full,
185 +- new error reports will be lost.
186 +-
187 +- The default value of 64 is probably fine. If some code produces more
188 +- than 64 errors within an irqs-off section, then the code is likely to
189 +- produce many, many more, too, and these additional reports seldom give
190 +- any more information (the first report is usually the most valuable
191 +- anyway).
192 +-
193 +- This number might have to be adjusted if you are not using serial
194 +- console or similar to capture the kernel log. If you are using the
195 +- "dmesg" command to save the log, then getting a lot of kmemcheck
196 +- warnings might overflow the kernel log itself, and the earlier reports
197 +- will get lost in that way instead. Try setting this to 10 or so on
198 +- such a setup.
199 +-
200 +-- ``CONFIG_KMEMCHECK_SHADOW_COPY_SHIFT``
201 +- Select the number of shadow bytes to save along with each entry of the
202 +- error-report queue. These bytes indicate what parts of an allocation
203 +- are initialized, uninitialized, etc. and will be displayed when an
204 +- error is detected to help the debugging of a particular problem.
205 +-
206 +- The number entered here is actually the logarithm of the number of
207 +- bytes that will be saved. So if you pick for example 5 here, kmemcheck
208 +- will save 2^5 = 32 bytes.
209 +-
210 +- The default value should be fine for debugging most problems. It also
211 +- fits nicely within 80 columns.
212 +-
213 +-- ``CONFIG_KMEMCHECK_PARTIAL_OK``
214 +- This option (when enabled) works around certain GCC optimizations that
215 +- produce 32-bit reads from 16-bit variables where the upper 16 bits are
216 +- thrown away afterwards.
217 +-
218 +- The default value (enabled) is recommended. This may of course hide
219 +- some real errors, but disabling it would probably produce a lot of
220 +- false positives.
221 +-
222 +-- ``CONFIG_KMEMCHECK_BITOPS_OK``
223 +- This option silences warnings that would be generated for bit-field
224 +- accesses where not all the bits are initialized at the same time. This
225 +- may also hide some real bugs.
226 +-
227 +- This option is probably obsolete, or it should be replaced with
228 +- the kmemcheck-/bitfield-annotations for the code in question. The
229 +- default value is therefore fine.
230 +-
231 +-Now compile the kernel as usual.
232 +-
233 +-
234 +-How to use
235 +-----------
236 +-
237 +-Booting
238 +-~~~~~~~
239 +-
240 +-First some information about the command-line options. There is only one
241 +-option specific to kmemcheck, and this is called "kmemcheck". It can be used
242 +-to override the default mode as chosen by the ``CONFIG_KMEMCHECK_*_BY_DEFAULT``
243 +-option. Its possible settings are:
244 +-
245 +-- ``kmemcheck=0`` (disabled)
246 +-- ``kmemcheck=1`` (enabled)
247 +-- ``kmemcheck=2`` (one-shot mode)
248 +-
249 +-If SLUB debugging has been enabled in the kernel, it may take precedence over
250 +-kmemcheck in such a way that the slab caches which are under SLUB debugging
251 +-will not be tracked by kmemcheck. In order to ensure that this doesn't happen
252 +-(even though it shouldn't by default), use SLUB's boot option ``slub_debug``,
253 +-like this: ``slub_debug=-``
254 +-
255 +-In fact, this option may also be used for fine-grained control over SLUB vs.
256 +-kmemcheck. For example, if the command line includes
257 +-``kmemcheck=1 slub_debug=,dentry``, then SLUB debugging will be used only
258 +-for the "dentry" slab cache, and with kmemcheck tracking all the other
259 +-caches. This is advanced usage, however, and is not generally recommended.
260 +-
261 +-
262 +-Run-time enable/disable
263 +-~~~~~~~~~~~~~~~~~~~~~~~
264 +-
265 +-When the kernel has booted, it is possible to enable or disable kmemcheck at
266 +-run-time. WARNING: This feature is still experimental and may cause false
267 +-positive warnings to appear. Therefore, try not to use this. If you find that
268 +-it doesn't work properly (e.g. you see an unreasonable amount of warnings), I
269 +-will be happy to take bug reports.
270 +-
271 +-Use the file ``/proc/sys/kernel/kmemcheck`` for this purpose, e.g.::
272 +-
273 +- $ echo 0 > /proc/sys/kernel/kmemcheck # disables kmemcheck
274 +-
275 +-The numbers are the same as for the ``kmemcheck=`` command-line option.
276 +-
277 +-
278 +-Debugging
279 +-~~~~~~~~~
280 +-
281 +-A typical report will look something like this::
282 +-
283 +- WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88003e4a2024)
284 +- 80000000000000000000000000000000000000000088ffff0000000000000000
285 +- i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
286 +- ^
287 +-
288 +- Pid: 1856, comm: ntpdate Not tainted 2.6.29-rc5 #264 945P-A
289 +- RIP: 0010:[<ffffffff8104ede8>] [<ffffffff8104ede8>] __dequeue_signal+0xc8/0x190
290 +- RSP: 0018:ffff88003cdf7d98 EFLAGS: 00210002
291 +- RAX: 0000000000000030 RBX: ffff88003d4ea968 RCX: 0000000000000009
292 +- RDX: ffff88003e5d6018 RSI: ffff88003e5d6024 RDI: ffff88003cdf7e84
293 +- RBP: ffff88003cdf7db8 R08: ffff88003e5d6000 R09: 0000000000000000
294 +- R10: 0000000000000080 R11: 0000000000000000 R12: 000000000000000e
295 +- R13: ffff88003cdf7e78 R14: ffff88003d530710 R15: ffff88003d5a98c8
296 +- FS: 0000000000000000(0000) GS:ffff880001982000(0063) knlGS:00000
297 +- CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
298 +- CR2: ffff88003f806ea0 CR3: 000000003c036000 CR4: 00000000000006a0
299 +- DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
300 +- DR3: 0000000000000000 DR6: 00000000ffff4ff0 DR7: 0000000000000400
301 +- [<ffffffff8104f04e>] dequeue_signal+0x8e/0x170
302 +- [<ffffffff81050bd8>] get_signal_to_deliver+0x98/0x390
303 +- [<ffffffff8100b87d>] do_notify_resume+0xad/0x7d0
304 +- [<ffffffff8100c7b5>] int_signal+0x12/0x17
305 +- [<ffffffffffffffff>] 0xffffffffffffffff
306 +-
307 +-The single most valuable information in this report is the RIP (or EIP on 32-
308 +-bit) value. This will help us pinpoint exactly which instruction that caused
309 +-the warning.
310 +-
311 +-If your kernel was compiled with ``CONFIG_DEBUG_INFO=y``, then all we have to do
312 +-is give this address to the addr2line program, like this::
313 +-
314 +- $ addr2line -e vmlinux -i ffffffff8104ede8
315 +- arch/x86/include/asm/string_64.h:12
316 +- include/asm-generic/siginfo.h:287
317 +- kernel/signal.c:380
318 +- kernel/signal.c:410
319 +-
320 +-The "``-e vmlinux``" tells addr2line which file to look in. **IMPORTANT:**
321 +-This must be the vmlinux of the kernel that produced the warning in the
322 +-first place! If not, the line number information will almost certainly be
323 +-wrong.
324 +-
325 +-The "``-i``" tells addr2line to also print the line numbers of inlined
326 +-functions. In this case, the flag was very important, because otherwise,
327 +-it would only have printed the first line, which is just a call to
328 +-``memcpy()``, which could be called from a thousand places in the kernel, and
329 +-is therefore not very useful. These inlined functions would not show up in
330 +-the stack trace above, simply because the kernel doesn't load the extra
331 +-debugging information. This technique can of course be used with ordinary
332 +-kernel oopses as well.
333 +-
334 +-In this case, it's the caller of ``memcpy()`` that is interesting, and it can be
335 +-found in ``include/asm-generic/siginfo.h``, line 287::
336 +-
337 +- 281 static inline void copy_siginfo(struct siginfo *to, struct siginfo *from)
338 +- 282 {
339 +- 283 if (from->si_code < 0)
340 +- 284 memcpy(to, from, sizeof(*to));
341 +- 285 else
342 +- 286 /* _sigchld is currently the largest know union member */
343 +- 287 memcpy(to, from, __ARCH_SI_PREAMBLE_SIZE + sizeof(from->_sifields._sigchld));
344 +- 288 }
345 +-
346 +-Since this was a read (kmemcheck usually warns about reads only, though it can
347 +-warn about writes to unallocated or freed memory as well), it was probably the
348 +-"from" argument which contained some uninitialized bytes. Following the chain
349 +-of calls, we move upwards to see where "from" was allocated or initialized,
350 +-``kernel/signal.c``, line 380::
351 +-
352 +- 359 static void collect_signal(int sig, struct sigpending *list, siginfo_t *info)
353 +- 360 {
354 +- ...
355 +- 367 list_for_each_entry(q, &list->list, list) {
356 +- 368 if (q->info.si_signo == sig) {
357 +- 369 if (first)
358 +- 370 goto still_pending;
359 +- 371 first = q;
360 +- ...
361 +- 377 if (first) {
362 +- 378 still_pending:
363 +- 379 list_del_init(&first->list);
364 +- 380 copy_siginfo(info, &first->info);
365 +- 381 __sigqueue_free(first);
366 +- ...
367 +- 392 }
368 +- 393 }
369 +-
370 +-Here, it is ``&first->info`` that is being passed on to ``copy_siginfo()``. The
371 +-variable ``first`` was found on a list -- passed in as the second argument to
372 +-``collect_signal()``. We continue our journey through the stack, to figure out
373 +-where the item on "list" was allocated or initialized. We move to line 410::
374 +-
375 +- 395 static int __dequeue_signal(struct sigpending *pending, sigset_t *mask,
376 +- 396 siginfo_t *info)
377 +- 397 {
378 +- ...
379 +- 410 collect_signal(sig, pending, info);
380 +- ...
381 +- 414 }
382 +-
383 +-Now we need to follow the ``pending`` pointer, since that is being passed on to
384 +-``collect_signal()`` as ``list``. At this point, we've run out of lines from the
385 +-"addr2line" output. Not to worry, we just paste the next addresses from the
386 +-kmemcheck stack dump, i.e.::
387 +-
388 +- [<ffffffff8104f04e>] dequeue_signal+0x8e/0x170
389 +- [<ffffffff81050bd8>] get_signal_to_deliver+0x98/0x390
390 +- [<ffffffff8100b87d>] do_notify_resume+0xad/0x7d0
391 +- [<ffffffff8100c7b5>] int_signal+0x12/0x17
392 +-
393 +- $ addr2line -e vmlinux -i ffffffff8104f04e ffffffff81050bd8 \
394 +- ffffffff8100b87d ffffffff8100c7b5
395 +- kernel/signal.c:446
396 +- kernel/signal.c:1806
397 +- arch/x86/kernel/signal.c:805
398 +- arch/x86/kernel/signal.c:871
399 +- arch/x86/kernel/entry_64.S:694
400 +-
401 +-Remember that since these addresses were found on the stack and not as the
402 +-RIP value, they actually point to the _next_ instruction (they are return
403 +-addresses). This becomes obvious when we look at the code for line 446::
404 +-
405 +- 422 int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info)
406 +- 423 {
407 +- ...
408 +- 431 signr = __dequeue_signal(&tsk->signal->shared_pending,
409 +- 432 mask, info);
410 +- 433 /*
411 +- 434 * itimer signal ?
412 +- 435 *
413 +- 436 * itimers are process shared and we restart periodic
414 +- 437 * itimers in the signal delivery path to prevent DoS
415 +- 438 * attacks in the high resolution timer case. This is
416 +- 439 * compliant with the old way of self restarting
417 +- 440 * itimers, as the SIGALRM is a legacy signal and only
418 +- 441 * queued once. Changing the restart behaviour to
419 +- 442 * restart the timer in the signal dequeue path is
420 +- 443 * reducing the timer noise on heavy loaded !highres
421 +- 444 * systems too.
422 +- 445 */
423 +- 446 if (unlikely(signr == SIGALRM)) {
424 +- ...
425 +- 489 }
426 +-
427 +-So instead of looking at 446, we should be looking at 431, which is the line
428 +-that executes just before 446. Here we see that what we are looking for is
429 +-``&tsk->signal->shared_pending``.
430 +-
431 +-Our next task is now to figure out which function that puts items on this
432 +-``shared_pending`` list. A crude, but efficient tool, is ``git grep``::
433 +-
434 +- $ git grep -n 'shared_pending' kernel/
435 +- ...
436 +- kernel/signal.c:828: pending = group ? &t->signal->shared_pending : &t->pending;
437 +- kernel/signal.c:1339: pending = group ? &t->signal->shared_pending : &t->pending;
438 +- ...
439 +-
440 +-There were more results, but none of them were related to list operations,
441 +-and these were the only assignments. We inspect the line numbers more closely
442 +-and find that this is indeed where items are being added to the list::
443 +-
444 +- 816 static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
445 +- 817 int group)
446 +- 818 {
447 +- ...
448 +- 828 pending = group ? &t->signal->shared_pending : &t->pending;
449 +- ...
450 +- 851 q = __sigqueue_alloc(t, GFP_ATOMIC, (sig < SIGRTMIN &&
451 +- 852 (is_si_special(info) ||
452 +- 853 info->si_code >= 0)));
453 +- 854 if (q) {
454 +- 855 list_add_tail(&q->list, &pending->list);
455 +- ...
456 +- 890 }
457 +-
458 +-and::
459 +-
460 +- 1309 int send_sigqueue(struct sigqueue *q, struct task_struct *t, int group)
461 +- 1310 {
462 +- ....
463 +- 1339 pending = group ? &t->signal->shared_pending : &t->pending;
464 +- 1340 list_add_tail(&q->list, &pending->list);
465 +- ....
466 +- 1347 }
467 +-
468 +-In the first case, the list element we are looking for, ``q``, is being
469 +-returned from the function ``__sigqueue_alloc()``, which looks like an
470 +-allocation function. Let's take a look at it::
471 +-
472 +- 187 static struct sigqueue *__sigqueue_alloc(struct task_struct *t, gfp_t flags,
473 +- 188 int override_rlimit)
474 +- 189 {
475 +- 190 struct sigqueue *q = NULL;
476 +- 191 struct user_struct *user;
477 +- 192
478 +- 193 /*
479 +- 194 * We won't get problems with the target's UID changing under us
480 +- 195 * because changing it requires RCU be used, and if t != current, the
481 +- 196 * caller must be holding the RCU readlock (by way of a spinlock) and
482 +- 197 * we use RCU protection here
483 +- 198 */
484 +- 199 user = get_uid(__task_cred(t)->user);
485 +- 200 atomic_inc(&user->sigpending);
486 +- 201 if (override_rlimit ||
487 +- 202 atomic_read(&user->sigpending) <=
488 +- 203 t->signal->rlim[RLIMIT_SIGPENDING].rlim_cur)
489 +- 204 q = kmem_cache_alloc(sigqueue_cachep, flags);
490 +- 205 if (unlikely(q == NULL)) {
491 +- 206 atomic_dec(&user->sigpending);
492 +- 207 free_uid(user);
493 +- 208 } else {
494 +- 209 INIT_LIST_HEAD(&q->list);
495 +- 210 q->flags = 0;
496 +- 211 q->user = user;
497 +- 212 }
498 +- 213
499 +- 214 return q;
500 +- 215 }
501 +-
502 +-We see that this function initializes ``q->list``, ``q->flags``, and
503 +-``q->user``. It seems that now is the time to look at the definition of
504 +-``struct sigqueue``, e.g.::
505 +-
506 +- 14 struct sigqueue {
507 +- 15 struct list_head list;
508 +- 16 int flags;
509 +- 17 siginfo_t info;
510 +- 18 struct user_struct *user;
511 +- 19 };
512 +-
513 +-And, you might remember, it was a ``memcpy()`` on ``&first->info`` that
514 +-caused the warning, so this makes perfect sense. It also seems reasonable
515 +-to assume that it is the caller of ``__sigqueue_alloc()`` that has the
516 +-responsibility of filling out (initializing) this member.
517 +-
518 +-But just which fields of the struct were uninitialized? Let's look at
519 +-kmemcheck's report again::
520 +-
521 +- WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88003e4a2024)
522 +- 80000000000000000000000000000000000000000088ffff0000000000000000
523 +- i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
524 +- ^
525 +-
526 +-These first two lines are the memory dump of the memory object itself, and
527 +-the shadow bytemap, respectively. The memory object itself is in this case
528 +-``&first->info``. Just beware that the start of this dump is NOT the start
529 +-of the object itself! The position of the caret (^) corresponds with the
530 +-address of the read (ffff88003e4a2024).
531 +-
532 +-The shadow bytemap dump legend is as follows:
533 +-
534 +-- i: initialized
535 +-- u: uninitialized
536 +-- a: unallocated (memory has been allocated by the slab layer, but has not
537 +- yet been handed off to anybody)
538 +-- f: freed (memory has been allocated by the slab layer, but has been freed
539 +- by the previous owner)
540 +-
541 +-In order to figure out where (relative to the start of the object) the
542 +-uninitialized memory was located, we have to look at the disassembly. For
543 +-that, we'll need the RIP address again::
544 +-
545 +- RIP: 0010:[<ffffffff8104ede8>] [<ffffffff8104ede8>] __dequeue_signal+0xc8/0x190
546 +-
547 +- $ objdump -d --no-show-raw-insn vmlinux | grep -C 8 ffffffff8104ede8:
548 +- ffffffff8104edc8: mov %r8,0x8(%r8)
549 +- ffffffff8104edcc: test %r10d,%r10d
550 +- ffffffff8104edcf: js ffffffff8104ee88 <__dequeue_signal+0x168>
551 +- ffffffff8104edd5: mov %rax,%rdx
552 +- ffffffff8104edd8: mov $0xc,%ecx
553 +- ffffffff8104eddd: mov %r13,%rdi
554 +- ffffffff8104ede0: mov $0x30,%eax
555 +- ffffffff8104ede5: mov %rdx,%rsi
556 +- ffffffff8104ede8: rep movsl %ds:(%rsi),%es:(%rdi)
557 +- ffffffff8104edea: test $0x2,%al
558 +- ffffffff8104edec: je ffffffff8104edf0 <__dequeue_signal+0xd0>
559 +- ffffffff8104edee: movsw %ds:(%rsi),%es:(%rdi)
560 +- ffffffff8104edf0: test $0x1,%al
561 +- ffffffff8104edf2: je ffffffff8104edf5 <__dequeue_signal+0xd5>
562 +- ffffffff8104edf4: movsb %ds:(%rsi),%es:(%rdi)
563 +- ffffffff8104edf5: mov %r8,%rdi
564 +- ffffffff8104edf8: callq ffffffff8104de60 <__sigqueue_free>
565 +-
566 +-As expected, it's the "``rep movsl``" instruction from the ``memcpy()``
567 +-that causes the warning. We know about ``REP MOVSL`` that it uses the register
568 +-``RCX`` to count the number of remaining iterations. By taking a look at the
569 +-register dump again (from the kmemcheck report), we can figure out how many
570 +-bytes were left to copy::
571 +-
572 +- RAX: 0000000000000030 RBX: ffff88003d4ea968 RCX: 0000000000000009
573 +-
574 +-By looking at the disassembly, we also see that ``%ecx`` is being loaded
575 +-with the value ``$0xc`` just before (ffffffff8104edd8), so we are very
576 +-lucky. Keep in mind that this is the number of iterations, not bytes. And
577 +-since this is a "long" operation, we need to multiply by 4 to get the
578 +-number of bytes. So this means that the uninitialized value was encountered
579 +-at 4 * (0xc - 0x9) = 12 bytes from the start of the object.
580 +-
581 +-We can now try to figure out which field of the "``struct siginfo``" that
582 +-was not initialized. This is the beginning of the struct::
583 +-
584 +- 40 typedef struct siginfo {
585 +- 41 int si_signo;
586 +- 42 int si_errno;
587 +- 43 int si_code;
588 +- 44
589 +- 45 union {
590 +- ..
591 +- 92 } _sifields;
592 +- 93 } siginfo_t;
593 +-
594 +-On 64-bit, the int is 4 bytes long, so it must the union member that has
595 +-not been initialized. We can verify this using gdb::
596 +-
597 +- $ gdb vmlinux
598 +- ...
599 +- (gdb) p &((struct siginfo *) 0)->_sifields
600 +- $1 = (union {...} *) 0x10
601 +-
602 +-Actually, it seems that the union member is located at offset 0x10 -- which
603 +-means that gcc has inserted 4 bytes of padding between the members ``si_code``
604 +-and ``_sifields``. We can now get a fuller picture of the memory dump::
605 +-
606 +- _----------------------------=> si_code
607 +- / _--------------------=> (padding)
608 +- | / _------------=> _sifields(._kill._pid)
609 +- | | / _----=> _sifields(._kill._uid)
610 +- | | | /
611 +- -------|-------|-------|-------|
612 +- 80000000000000000000000000000000000000000088ffff0000000000000000
613 +- i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
614 +-
615 +-This allows us to realize another important fact: ``si_code`` contains the
616 +-value 0x80. Remember that x86 is little endian, so the first 4 bytes
617 +-"80000000" are really the number 0x00000080. With a bit of research, we
618 +-find that this is actually the constant ``SI_KERNEL`` defined in
619 +-``include/asm-generic/siginfo.h``::
620 +-
621 +- 144 #define SI_KERNEL 0x80 /* sent by the kernel from somewhere */
622 +-
623 +-This macro is used in exactly one place in the x86 kernel: In ``send_signal()``
624 +-in ``kernel/signal.c``::
625 +-
626 +- 816 static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
627 +- 817 int group)
628 +- 818 {
629 +- ...
630 +- 828 pending = group ? &t->signal->shared_pending : &t->pending;
631 +- ...
632 +- 851 q = __sigqueue_alloc(t, GFP_ATOMIC, (sig < SIGRTMIN &&
633 +- 852 (is_si_special(info) ||
634 +- 853 info->si_code >= 0)));
635 +- 854 if (q) {
636 +- 855 list_add_tail(&q->list, &pending->list);
637 +- 856 switch ((unsigned long) info) {
638 +- ...
639 +- 865 case (unsigned long) SEND_SIG_PRIV:
640 +- 866 q->info.si_signo = sig;
641 +- 867 q->info.si_errno = 0;
642 +- 868 q->info.si_code = SI_KERNEL;
643 +- 869 q->info.si_pid = 0;
644 +- 870 q->info.si_uid = 0;
645 +- 871 break;
646 +- ...
647 +- 890 }
648 +-
649 +-Not only does this match with the ``.si_code`` member, it also matches the place
650 +-we found earlier when looking for where siginfo_t objects are enqueued on the
651 +-``shared_pending`` list.
652 +-
653 +-So to sum up: It seems that it is the padding introduced by the compiler
654 +-between two struct fields that is uninitialized, and this gets reported when
655 +-we do a ``memcpy()`` on the struct. This means that we have identified a false
656 +-positive warning.
657 +-
658 +-Normally, kmemcheck will not report uninitialized accesses in ``memcpy()`` calls
659 +-when both the source and destination addresses are tracked. (Instead, we copy
660 +-the shadow bytemap as well). In this case, the destination address clearly
661 +-was not tracked. We can dig a little deeper into the stack trace from above::
662 +-
663 +- arch/x86/kernel/signal.c:805
664 +- arch/x86/kernel/signal.c:871
665 +- arch/x86/kernel/entry_64.S:694
666 +-
667 +-And we clearly see that the destination siginfo object is located on the
668 +-stack::
669 +-
670 +- 782 static void do_signal(struct pt_regs *regs)
671 +- 783 {
672 +- 784 struct k_sigaction ka;
673 +- 785 siginfo_t info;
674 +- ...
675 +- 804 signr = get_signal_to_deliver(&info, &ka, regs, NULL);
676 +- ...
677 +- 854 }
678 +-
679 +-And this ``&info`` is what eventually gets passed to ``copy_siginfo()`` as the
680 +-destination argument.
681 +-
682 +-Now, even though we didn't find an actual error here, the example is still a
683 +-good one, because it shows how one would go about to find out what the report
684 +-was all about.
685 +-
686 +-
687 +-Annotating false positives
688 +-~~~~~~~~~~~~~~~~~~~~~~~~~~
689 +-
690 +-There are a few different ways to make annotations in the source code that
691 +-will keep kmemcheck from checking and reporting certain allocations. Here
692 +-they are:
693 +-
694 +-- ``__GFP_NOTRACK_FALSE_POSITIVE``
695 +- This flag can be passed to ``kmalloc()`` or ``kmem_cache_alloc()``
696 +- (therefore also to other functions that end up calling one of
697 +- these) to indicate that the allocation should not be tracked
698 +- because it would lead to a false positive report. This is a "big
699 +- hammer" way of silencing kmemcheck; after all, even if the false
700 +- positive pertains to particular field in a struct, for example, we
701 +- will now lose the ability to find (real) errors in other parts of
702 +- the same struct.
703 +-
704 +- Example::
705 +-
706 +- /* No warnings will ever trigger on accessing any part of x */
707 +- x = kmalloc(sizeof *x, GFP_KERNEL | __GFP_NOTRACK_FALSE_POSITIVE);
708 +-
709 +-- ``kmemcheck_bitfield_begin(name)``/``kmemcheck_bitfield_end(name)`` and
710 +- ``kmemcheck_annotate_bitfield(ptr, name)``
711 +- The first two of these three macros can be used inside struct
712 +- definitions to signal, respectively, the beginning and end of a
713 +- bitfield. Additionally, this will assign the bitfield a name, which
714 +- is given as an argument to the macros.
715 +-
716 +- Having used these markers, one can later use
717 +- kmemcheck_annotate_bitfield() at the point of allocation, to indicate
718 +- which parts of the allocation is part of a bitfield.
719 +-
720 +- Example::
721 +-
722 +- struct foo {
723 +- int x;
724 +-
725 +- kmemcheck_bitfield_begin(flags);
726 +- int flag_a:1;
727 +- int flag_b:1;
728 +- kmemcheck_bitfield_end(flags);
729 +-
730 +- int y;
731 +- };
732 +-
733 +- struct foo *x = kmalloc(sizeof *x);
734 +-
735 +- /* No warnings will trigger on accessing the bitfield of x */
736 +- kmemcheck_annotate_bitfield(x, flags);
737 +-
738 +- Note that ``kmemcheck_annotate_bitfield()`` can be used even before the
739 +- return value of ``kmalloc()`` is checked -- in other words, passing NULL
740 +- as the first argument is legal (and will do nothing).
741 +-
742 +-
743 +-Reporting errors
744 +-----------------
745 +-
746 +-As we have seen, kmemcheck will produce false positive reports. Therefore, it
747 +-is not very wise to blindly post kmemcheck warnings to mailing lists and
748 +-maintainers. Instead, I encourage maintainers and developers to find errors
749 +-in their own code. If you get a warning, you can try to work around it, try
750 +-to figure out if it's a real error or not, or simply ignore it. Most
751 +-developers know their own code and will quickly and efficiently determine the
752 +-root cause of a kmemcheck report. This is therefore also the most efficient
753 +-way to work with kmemcheck.
754 +-
755 +-That said, we (the kmemcheck maintainers) will always be on the lookout for
756 +-false positives that we can annotate and silence. So whatever you find,
757 +-please drop us a note privately! Kernel configs and steps to reproduce (if
758 +-available) are of course a great help too.
759 +-
760 +-Happy hacking!
761 +-
762 +-
763 +-Technical description
764 +----------------------
765 +-
766 +-kmemcheck works by marking memory pages non-present. This means that whenever
767 +-somebody attempts to access the page, a page fault is generated. The page
768 +-fault handler notices that the page was in fact only hidden, and so it calls
769 +-on the kmemcheck code to make further investigations.
770 +-
771 +-When the investigations are completed, kmemcheck "shows" the page by marking
772 +-it present (as it would be under normal circumstances). This way, the
773 +-interrupted code can continue as usual.
774 +-
775 +-But after the instruction has been executed, we should hide the page again, so
776 +-that we can catch the next access too! Now kmemcheck makes use of a debugging
777 +-feature of the processor, namely single-stepping. When the processor has
778 +-finished the one instruction that generated the memory access, a debug
779 +-exception is raised. From here, we simply hide the page again and continue
780 +-execution, this time with the single-stepping feature turned off.
781 +-
782 +-kmemcheck requires some assistance from the memory allocator in order to work.
783 +-The memory allocator needs to
784 +-
785 +- 1. Tell kmemcheck about newly allocated pages and pages that are about to
786 +- be freed. This allows kmemcheck to set up and tear down the shadow memory
787 +- for the pages in question. The shadow memory stores the status of each
788 +- byte in the allocation proper, e.g. whether it is initialized or
789 +- uninitialized.
790 +-
791 +- 2. Tell kmemcheck which parts of memory should be marked uninitialized.
792 +- There are actually a few more states, such as "not yet allocated" and
793 +- "recently freed".
794 +-
795 +-If a slab cache is set up using the SLAB_NOTRACK flag, it will never return
796 +-memory that can take page faults because of kmemcheck.
797 +-
798 +-If a slab cache is NOT set up using the SLAB_NOTRACK flag, callers can still
799 +-request memory with the __GFP_NOTRACK or __GFP_NOTRACK_FALSE_POSITIVE flags.
800 +-This does not prevent the page faults from occurring, however, but marks the
801 +-object in question as being initialized so that no warnings will ever be
802 +-produced for this object.
803 +-
804 +-Currently, the SLAB and SLUB allocators are supported by kmemcheck.
805 +diff --git a/Documentation/devicetree/bindings/dma/snps-dma.txt b/Documentation/devicetree/bindings/dma/snps-dma.txt
806 +index a122723907ac..99acc712f83a 100644
807 +--- a/Documentation/devicetree/bindings/dma/snps-dma.txt
808 ++++ b/Documentation/devicetree/bindings/dma/snps-dma.txt
809 +@@ -64,6 +64,6 @@ Example:
810 + reg = <0xe0000000 0x1000>;
811 + interrupts = <0 35 0x4>;
812 + dmas = <&dmahost 12 0 1>,
813 +- <&dmahost 13 0 1 0>;
814 ++ <&dmahost 13 1 0>;
815 + dma-names = "rx", "rx";
816 + };
817 +diff --git a/Documentation/filesystems/ext4.txt b/Documentation/filesystems/ext4.txt
818 +index 5a8f7f4d2bca..7449893dc039 100644
819 +--- a/Documentation/filesystems/ext4.txt
820 ++++ b/Documentation/filesystems/ext4.txt
821 +@@ -233,7 +233,7 @@ data_err=ignore(*) Just print an error message if an error occurs
822 + data_err=abort Abort the journal if an error occurs in a file
823 + data buffer in ordered mode.
824 +
825 +-grpid Give objects the same group ID as their creator.
826 ++grpid New objects have the group ID of their parent.
827 + bsdgroups
828 +
829 + nogrpid (*) New objects have the group ID of their creator.
830 +diff --git a/MAINTAINERS b/MAINTAINERS
831 +index 2811a211632c..76ea063d8083 100644
832 +--- a/MAINTAINERS
833 ++++ b/MAINTAINERS
834 +@@ -7670,16 +7670,6 @@ F: include/linux/kdb.h
835 + F: include/linux/kgdb.h
836 + F: kernel/debug/
837 +
838 +-KMEMCHECK
839 +-M: Vegard Nossum <vegardno@×××××××.no>
840 +-M: Pekka Enberg <penberg@××××××.org>
841 +-S: Maintained
842 +-F: Documentation/dev-tools/kmemcheck.rst
843 +-F: arch/x86/include/asm/kmemcheck.h
844 +-F: arch/x86/mm/kmemcheck/
845 +-F: include/linux/kmemcheck.h
846 +-F: mm/kmemcheck.c
847 +-
848 + KMEMLEAK
849 + M: Catalin Marinas <catalin.marinas@×××.com>
850 + S: Maintained
851 +diff --git a/Makefile b/Makefile
852 +index 33176140f133..68d70485b088 100644
853 +--- a/Makefile
854 ++++ b/Makefile
855 +@@ -1,7 +1,7 @@
856 + # SPDX-License-Identifier: GPL-2.0
857 + VERSION = 4
858 + PATCHLEVEL = 14
859 +-SUBLEVEL = 20
860 ++SUBLEVEL = 21
861 + EXTRAVERSION =
862 + NAME = Petit Gorille
863 +
864 +diff --git a/arch/arm/boot/dts/arm-realview-eb-mp.dtsi b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
865 +index 7b8d90b7aeea..29b636fce23f 100644
866 +--- a/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
867 ++++ b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
868 +@@ -150,11 +150,6 @@
869 + interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
870 + };
871 +
872 +-&charlcd {
873 +- interrupt-parent = <&intc>;
874 +- interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
875 +-};
876 +-
877 + &serial0 {
878 + interrupt-parent = <&intc>;
879 + interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
880 +diff --git a/arch/arm/boot/dts/exynos5410.dtsi b/arch/arm/boot/dts/exynos5410.dtsi
881 +index 7eab4bc07cec..7628bbb02324 100644
882 +--- a/arch/arm/boot/dts/exynos5410.dtsi
883 ++++ b/arch/arm/boot/dts/exynos5410.dtsi
884 +@@ -333,7 +333,6 @@
885 + &rtc {
886 + clocks = <&clock CLK_RTC>;
887 + clock-names = "rtc";
888 +- interrupt-parent = <&pmu_system_controller>;
889 + status = "disabled";
890 + };
891 +
892 +diff --git a/arch/arm/boot/dts/lpc3250-ea3250.dts b/arch/arm/boot/dts/lpc3250-ea3250.dts
893 +index 52b3ed10283a..e2bc731079be 100644
894 +--- a/arch/arm/boot/dts/lpc3250-ea3250.dts
895 ++++ b/arch/arm/boot/dts/lpc3250-ea3250.dts
896 +@@ -156,8 +156,8 @@
897 + uda1380: uda1380@18 {
898 + compatible = "nxp,uda1380";
899 + reg = <0x18>;
900 +- power-gpio = <&gpio 0x59 0>;
901 +- reset-gpio = <&gpio 0x51 0>;
902 ++ power-gpio = <&gpio 3 10 0>;
903 ++ reset-gpio = <&gpio 3 2 0>;
904 + dac-clk = "wspll";
905 + };
906 +
907 +diff --git a/arch/arm/boot/dts/lpc3250-phy3250.dts b/arch/arm/boot/dts/lpc3250-phy3250.dts
908 +index fd95e2b10357..b7bd3a110a8d 100644
909 +--- a/arch/arm/boot/dts/lpc3250-phy3250.dts
910 ++++ b/arch/arm/boot/dts/lpc3250-phy3250.dts
911 +@@ -81,8 +81,8 @@
912 + uda1380: uda1380@18 {
913 + compatible = "nxp,uda1380";
914 + reg = <0x18>;
915 +- power-gpio = <&gpio 0x59 0>;
916 +- reset-gpio = <&gpio 0x51 0>;
917 ++ power-gpio = <&gpio 3 10 0>;
918 ++ reset-gpio = <&gpio 3 2 0>;
919 + dac-clk = "wspll";
920 + };
921 +
922 +diff --git a/arch/arm/boot/dts/mt2701.dtsi b/arch/arm/boot/dts/mt2701.dtsi
923 +index afe12e5b51f9..f936000f0699 100644
924 +--- a/arch/arm/boot/dts/mt2701.dtsi
925 ++++ b/arch/arm/boot/dts/mt2701.dtsi
926 +@@ -593,6 +593,7 @@
927 + compatible = "mediatek,mt2701-hifsys", "syscon";
928 + reg = <0 0x1a000000 0 0x1000>;
929 + #clock-cells = <1>;
930 ++ #reset-cells = <1>;
931 + };
932 +
933 + usb0: usb@1a1c0000 {
934 +@@ -677,6 +678,7 @@
935 + compatible = "mediatek,mt2701-ethsys", "syscon";
936 + reg = <0 0x1b000000 0 0x1000>;
937 + #clock-cells = <1>;
938 ++ #reset-cells = <1>;
939 + };
940 +
941 + eth: ethernet@1b100000 {
942 +diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
943 +index ec8a07415cb3..36983a7d7cfd 100644
944 +--- a/arch/arm/boot/dts/mt7623.dtsi
945 ++++ b/arch/arm/boot/dts/mt7623.dtsi
946 +@@ -753,6 +753,7 @@
947 + "syscon";
948 + reg = <0 0x1b000000 0 0x1000>;
949 + #clock-cells = <1>;
950 ++ #reset-cells = <1>;
951 + };
952 +
953 + eth: ethernet@1b100000 {
954 +diff --git a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
955 +index 688a86378cee..7bf5aa2237c9 100644
956 +--- a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
957 ++++ b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
958 +@@ -204,7 +204,7 @@
959 + bus-width = <4>;
960 + max-frequency = <50000000>;
961 + cap-sd-highspeed;
962 +- cd-gpios = <&pio 261 0>;
963 ++ cd-gpios = <&pio 261 GPIO_ACTIVE_LOW>;
964 + vmmc-supply = <&mt6323_vmch_reg>;
965 + vqmmc-supply = <&mt6323_vio18_reg>;
966 + };
967 +diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
968 +index 726c5d0dbd5b..b290a5abb901 100644
969 +--- a/arch/arm/boot/dts/s5pv210.dtsi
970 ++++ b/arch/arm/boot/dts/s5pv210.dtsi
971 +@@ -463,6 +463,7 @@
972 + compatible = "samsung,exynos4210-ohci";
973 + reg = <0xec300000 0x100>;
974 + interrupts = <23>;
975 ++ interrupt-parent = <&vic1>;
976 + clocks = <&clocks CLK_USB_HOST>;
977 + clock-names = "usbhost";
978 + #address-cells = <1>;
979 +diff --git a/arch/arm/boot/dts/spear1310-evb.dts b/arch/arm/boot/dts/spear1310-evb.dts
980 +index 84101e4eebbf..0f5f379323a8 100644
981 +--- a/arch/arm/boot/dts/spear1310-evb.dts
982 ++++ b/arch/arm/boot/dts/spear1310-evb.dts
983 +@@ -349,7 +349,7 @@
984 + spi0: spi@e0100000 {
985 + status = "okay";
986 + num-cs = <3>;
987 +- cs-gpios = <&gpio1 7 0>, <&spics 0>, <&spics 1>;
988 ++ cs-gpios = <&gpio1 7 0>, <&spics 0 0>, <&spics 1 0>;
989 +
990 + stmpe610@0 {
991 + compatible = "st,stmpe610";
992 +diff --git a/arch/arm/boot/dts/spear1340.dtsi b/arch/arm/boot/dts/spear1340.dtsi
993 +index 5f347054527d..d4dbc4098653 100644
994 +--- a/arch/arm/boot/dts/spear1340.dtsi
995 ++++ b/arch/arm/boot/dts/spear1340.dtsi
996 +@@ -142,8 +142,8 @@
997 + reg = <0xb4100000 0x1000>;
998 + interrupts = <0 105 0x4>;
999 + status = "disabled";
1000 +- dmas = <&dwdma0 0x600 0 0 1>, /* 0xC << 11 */
1001 +- <&dwdma0 0x680 0 1 0>; /* 0xD << 7 */
1002 ++ dmas = <&dwdma0 12 0 1>,
1003 ++ <&dwdma0 13 1 0>;
1004 + dma-names = "tx", "rx";
1005 + };
1006 +
1007 +diff --git a/arch/arm/boot/dts/spear13xx.dtsi b/arch/arm/boot/dts/spear13xx.dtsi
1008 +index 17ea0abcdbd7..086b4b333249 100644
1009 +--- a/arch/arm/boot/dts/spear13xx.dtsi
1010 ++++ b/arch/arm/boot/dts/spear13xx.dtsi
1011 +@@ -100,7 +100,7 @@
1012 + reg = <0xb2800000 0x1000>;
1013 + interrupts = <0 29 0x4>;
1014 + status = "disabled";
1015 +- dmas = <&dwdma0 0 0 0 0>;
1016 ++ dmas = <&dwdma0 0 0 0>;
1017 + dma-names = "data";
1018 + };
1019 +
1020 +@@ -290,8 +290,8 @@
1021 + #size-cells = <0>;
1022 + interrupts = <0 31 0x4>;
1023 + status = "disabled";
1024 +- dmas = <&dwdma0 0x2000 0 0 0>, /* 0x4 << 11 */
1025 +- <&dwdma0 0x0280 0 0 0>; /* 0x5 << 7 */
1026 ++ dmas = <&dwdma0 4 0 0>,
1027 ++ <&dwdma0 5 0 0>;
1028 + dma-names = "tx", "rx";
1029 + };
1030 +
1031 +diff --git a/arch/arm/boot/dts/spear600.dtsi b/arch/arm/boot/dts/spear600.dtsi
1032 +index 6b32d20acc9f..00166eb9be86 100644
1033 +--- a/arch/arm/boot/dts/spear600.dtsi
1034 ++++ b/arch/arm/boot/dts/spear600.dtsi
1035 +@@ -194,6 +194,7 @@
1036 + rtc: rtc@fc900000 {
1037 + compatible = "st,spear600-rtc";
1038 + reg = <0xfc900000 0x1000>;
1039 ++ interrupt-parent = <&vic0>;
1040 + interrupts = <10>;
1041 + status = "disabled";
1042 + };
1043 +diff --git a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
1044 +index 68aab50a73ab..733678b75b88 100644
1045 +--- a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
1046 ++++ b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
1047 +@@ -750,6 +750,7 @@
1048 + reg = <0x10120000 0x1000>;
1049 + interrupt-names = "combined";
1050 + interrupts = <14>;
1051 ++ interrupt-parent = <&vica>;
1052 + clocks = <&clcdclk>, <&hclkclcd>;
1053 + clock-names = "clcdclk", "apb_pclk";
1054 + status = "disabled";
1055 +diff --git a/arch/arm/boot/dts/stih407.dtsi b/arch/arm/boot/dts/stih407.dtsi
1056 +index fa149837df14..11fdecd9312e 100644
1057 +--- a/arch/arm/boot/dts/stih407.dtsi
1058 ++++ b/arch/arm/boot/dts/stih407.dtsi
1059 +@@ -8,6 +8,7 @@
1060 + */
1061 + #include "stih407-clock.dtsi"
1062 + #include "stih407-family.dtsi"
1063 ++#include <dt-bindings/gpio/gpio.h>
1064 + / {
1065 + soc {
1066 + sti-display-subsystem {
1067 +@@ -122,7 +123,7 @@
1068 + <&clk_s_d2_quadfs 0>,
1069 + <&clk_s_d2_quadfs 1>;
1070 +
1071 +- hdmi,hpd-gpio = <&pio5 3>;
1072 ++ hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>;
1073 + reset-names = "hdmi";
1074 + resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
1075 + ddc = <&hdmiddc>;
1076 +diff --git a/arch/arm/boot/dts/stih410.dtsi b/arch/arm/boot/dts/stih410.dtsi
1077 +index 21fe72b183d8..96eed0dc08b8 100644
1078 +--- a/arch/arm/boot/dts/stih410.dtsi
1079 ++++ b/arch/arm/boot/dts/stih410.dtsi
1080 +@@ -9,6 +9,7 @@
1081 + #include "stih410-clock.dtsi"
1082 + #include "stih407-family.dtsi"
1083 + #include "stih410-pinctrl.dtsi"
1084 ++#include <dt-bindings/gpio/gpio.h>
1085 + / {
1086 + aliases {
1087 + bdisp0 = &bdisp0;
1088 +@@ -213,7 +214,7 @@
1089 + <&clk_s_d2_quadfs 0>,
1090 + <&clk_s_d2_quadfs 1>;
1091 +
1092 +- hdmi,hpd-gpio = <&pio5 3>;
1093 ++ hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>;
1094 + reset-names = "hdmi";
1095 + resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
1096 + ddc = <&hdmiddc>;
1097 +diff --git a/arch/arm/include/asm/dma-iommu.h b/arch/arm/include/asm/dma-iommu.h
1098 +index 0722ec6be692..6821f1249300 100644
1099 +--- a/arch/arm/include/asm/dma-iommu.h
1100 ++++ b/arch/arm/include/asm/dma-iommu.h
1101 +@@ -7,7 +7,6 @@
1102 + #include <linux/mm_types.h>
1103 + #include <linux/scatterlist.h>
1104 + #include <linux/dma-debug.h>
1105 +-#include <linux/kmemcheck.h>
1106 + #include <linux/kref.h>
1107 +
1108 + #define ARM_MAPPING_ERROR (~(dma_addr_t)0x0)
1109 +diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
1110 +index b2902a5cd780..2d7344f0e208 100644
1111 +--- a/arch/arm/include/asm/pgalloc.h
1112 ++++ b/arch/arm/include/asm/pgalloc.h
1113 +@@ -57,7 +57,7 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
1114 + extern pgd_t *pgd_alloc(struct mm_struct *mm);
1115 + extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
1116 +
1117 +-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
1118 ++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
1119 +
1120 + static inline void clean_pte_table(pte_t *pte)
1121 + {
1122 +diff --git a/arch/arm/mach-pxa/tosa-bt.c b/arch/arm/mach-pxa/tosa-bt.c
1123 +index 107f37210fb9..83606087edc7 100644
1124 +--- a/arch/arm/mach-pxa/tosa-bt.c
1125 ++++ b/arch/arm/mach-pxa/tosa-bt.c
1126 +@@ -132,3 +132,7 @@ static struct platform_driver tosa_bt_driver = {
1127 + },
1128 + };
1129 + module_platform_driver(tosa_bt_driver);
1130 ++
1131 ++MODULE_LICENSE("GPL");
1132 ++MODULE_AUTHOR("Dmitry Baryshkov");
1133 ++MODULE_DESCRIPTION("Bluetooth built-in chip control");
1134 +diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
1135 +index dc3817593e14..61da6e65900b 100644
1136 +--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
1137 ++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
1138 +@@ -901,6 +901,7 @@
1139 + "dsi_phy_regulator";
1140 +
1141 + #clock-cells = <1>;
1142 ++ #phy-cells = <0>;
1143 +
1144 + clocks = <&gcc GCC_MDSS_AHB_CLK>;
1145 + clock-names = "iface_clk";
1146 +@@ -1430,8 +1431,8 @@
1147 + #address-cells = <1>;
1148 + #size-cells = <0>;
1149 +
1150 +- qcom,ipc-1 = <&apcs 0 13>;
1151 +- qcom,ipc-6 = <&apcs 0 19>;
1152 ++ qcom,ipc-1 = <&apcs 8 13>;
1153 ++ qcom,ipc-3 = <&apcs 8 19>;
1154 +
1155 + apps_smsm: apps@0 {
1156 + reg = <0>;
1157 +diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
1158 +index d25f4f137c2a..5ca6a573a701 100644
1159 +--- a/arch/arm64/include/asm/pgalloc.h
1160 ++++ b/arch/arm64/include/asm/pgalloc.h
1161 +@@ -26,7 +26,7 @@
1162 +
1163 + #define check_pgt_cache() do { } while (0)
1164 +
1165 +-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
1166 ++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
1167 + #define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))
1168 +
1169 + #if CONFIG_PGTABLE_LEVELS > 2
1170 +diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
1171 +index 07823595b7f0..52f15cd896e1 100644
1172 +--- a/arch/arm64/kernel/cpu_errata.c
1173 ++++ b/arch/arm64/kernel/cpu_errata.c
1174 +@@ -406,6 +406,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
1175 + .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
1176 + MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
1177 + },
1178 ++ {
1179 ++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
1180 ++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
1181 ++ .enable = qcom_enable_link_stack_sanitization,
1182 ++ },
1183 ++ {
1184 ++ .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
1185 ++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
1186 ++ },
1187 + {
1188 + .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
1189 + MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
1190 +diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
1191 +index 79364d3455c0..e08ae6b6b63e 100644
1192 +--- a/arch/arm64/kvm/hyp/switch.c
1193 ++++ b/arch/arm64/kvm/hyp/switch.c
1194 +@@ -371,8 +371,10 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
1195 + u32 midr = read_cpuid_id();
1196 +
1197 + /* Apply BTAC predictors mitigation to all Falkor chips */
1198 +- if ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)
1199 ++ if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
1200 ++ ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)) {
1201 + __qcom_hyp_sanitize_btac_predictors();
1202 ++ }
1203 + }
1204 +
1205 + fp_enabled = __fpsimd_enabled();
1206 +diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
1207 +index 27058f3fd132..329a1c43365e 100644
1208 +--- a/arch/arm64/mm/proc.S
1209 ++++ b/arch/arm64/mm/proc.S
1210 +@@ -190,7 +190,8 @@ ENDPROC(idmap_cpu_replace_ttbr1)
1211 + dc cvac, cur_\()\type\()p // Ensure any existing dirty
1212 + dmb sy // lines are written back before
1213 + ldr \type, [cur_\()\type\()p] // loading the entry
1214 +- tbz \type, #0, next_\()\type // Skip invalid entries
1215 ++ tbz \type, #0, skip_\()\type // Skip invalid and
1216 ++ tbnz \type, #11, skip_\()\type // non-global entries
1217 + .endm
1218 +
1219 + .macro __idmap_kpti_put_pgtable_ent_ng, type
1220 +@@ -249,8 +250,9 @@ ENTRY(idmap_kpti_install_ng_mappings)
1221 + add end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
1222 + do_pgd: __idmap_kpti_get_pgtable_ent pgd
1223 + tbnz pgd, #1, walk_puds
1224 +- __idmap_kpti_put_pgtable_ent_ng pgd
1225 + next_pgd:
1226 ++ __idmap_kpti_put_pgtable_ent_ng pgd
1227 ++skip_pgd:
1228 + add cur_pgdp, cur_pgdp, #8
1229 + cmp cur_pgdp, end_pgdp
1230 + b.ne do_pgd
1231 +@@ -278,8 +280,9 @@ walk_puds:
1232 + add end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
1233 + do_pud: __idmap_kpti_get_pgtable_ent pud
1234 + tbnz pud, #1, walk_pmds
1235 +- __idmap_kpti_put_pgtable_ent_ng pud
1236 + next_pud:
1237 ++ __idmap_kpti_put_pgtable_ent_ng pud
1238 ++skip_pud:
1239 + add cur_pudp, cur_pudp, 8
1240 + cmp cur_pudp, end_pudp
1241 + b.ne do_pud
1242 +@@ -298,8 +301,9 @@ walk_pmds:
1243 + add end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
1244 + do_pmd: __idmap_kpti_get_pgtable_ent pmd
1245 + tbnz pmd, #1, walk_ptes
1246 +- __idmap_kpti_put_pgtable_ent_ng pmd
1247 + next_pmd:
1248 ++ __idmap_kpti_put_pgtable_ent_ng pmd
1249 ++skip_pmd:
1250 + add cur_pmdp, cur_pmdp, #8
1251 + cmp cur_pmdp, end_pmdp
1252 + b.ne do_pmd
1253 +@@ -317,7 +321,7 @@ walk_ptes:
1254 + add end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
1255 + do_pte: __idmap_kpti_get_pgtable_ent pte
1256 + __idmap_kpti_put_pgtable_ent_ng pte
1257 +-next_pte:
1258 ++skip_pte:
1259 + add cur_ptep, cur_ptep, #8
1260 + cmp cur_ptep, end_ptep
1261 + b.ne do_pte
1262 +diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
1263 +index c3d798b44030..c82457b0e733 100644
1264 +--- a/arch/mips/Kconfig
1265 ++++ b/arch/mips/Kconfig
1266 +@@ -119,12 +119,12 @@ config MIPS_GENERIC
1267 + select SYS_SUPPORTS_MULTITHREADING
1268 + select SYS_SUPPORTS_RELOCATABLE
1269 + select SYS_SUPPORTS_SMARTMIPS
1270 +- select USB_EHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
1271 +- select USB_EHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
1272 +- select USB_OHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
1273 +- select USB_OHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
1274 +- select USB_UHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
1275 +- select USB_UHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
1276 ++ select USB_EHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
1277 ++ select USB_EHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
1278 ++ select USB_OHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
1279 ++ select USB_OHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
1280 ++ select USB_UHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
1281 ++ select USB_UHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
1282 + select USE_OF
1283 + help
1284 + Select this to build a kernel which aims to support multiple boards,
1285 +diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
1286 +index fe3939726765..795caa763da3 100644
1287 +--- a/arch/mips/kernel/setup.c
1288 ++++ b/arch/mips/kernel/setup.c
1289 +@@ -374,6 +374,7 @@ static void __init bootmem_init(void)
1290 + unsigned long reserved_end;
1291 + unsigned long mapstart = ~0UL;
1292 + unsigned long bootmap_size;
1293 ++ phys_addr_t ramstart = (phys_addr_t)ULLONG_MAX;
1294 + bool bootmap_valid = false;
1295 + int i;
1296 +
1297 +@@ -394,7 +395,8 @@ static void __init bootmem_init(void)
1298 + max_low_pfn = 0;
1299 +
1300 + /*
1301 +- * Find the highest page frame number we have available.
1302 ++ * Find the highest page frame number we have available
1303 ++ * and the lowest used RAM address
1304 + */
1305 + for (i = 0; i < boot_mem_map.nr_map; i++) {
1306 + unsigned long start, end;
1307 +@@ -406,6 +408,8 @@ static void __init bootmem_init(void)
1308 + end = PFN_DOWN(boot_mem_map.map[i].addr
1309 + + boot_mem_map.map[i].size);
1310 +
1311 ++ ramstart = min(ramstart, boot_mem_map.map[i].addr);
1312 ++
1313 + #ifndef CONFIG_HIGHMEM
1314 + /*
1315 + * Skip highmem here so we get an accurate max_low_pfn if low
1316 +@@ -435,6 +439,13 @@ static void __init bootmem_init(void)
1317 + mapstart = max(reserved_end, start);
1318 + }
1319 +
1320 ++ /*
1321 ++ * Reserve any memory between the start of RAM and PHYS_OFFSET
1322 ++ */
1323 ++ if (ramstart > PHYS_OFFSET)
1324 ++ add_memory_region(PHYS_OFFSET, ramstart - PHYS_OFFSET,
1325 ++ BOOT_MEM_RESERVED);
1326 ++
1327 + if (min_low_pfn >= max_low_pfn)
1328 + panic("Incorrect memory mapping !!!");
1329 + if (min_low_pfn > ARCH_PFN_OFFSET) {
1330 +@@ -663,9 +674,6 @@ static int __init early_parse_mem(char *p)
1331 +
1332 + add_memory_region(start, size, BOOT_MEM_RAM);
1333 +
1334 +- if (start && start > PHYS_OFFSET)
1335 +- add_memory_region(PHYS_OFFSET, start - PHYS_OFFSET,
1336 +- BOOT_MEM_RESERVED);
1337 + return 0;
1338 + }
1339 + early_param("mem", early_parse_mem);
1340 +diff --git a/arch/openrisc/include/asm/dma-mapping.h b/arch/openrisc/include/asm/dma-mapping.h
1341 +index f41bd3cb76d9..e212a1f0b6d2 100644
1342 +--- a/arch/openrisc/include/asm/dma-mapping.h
1343 ++++ b/arch/openrisc/include/asm/dma-mapping.h
1344 +@@ -23,7 +23,6 @@
1345 + */
1346 +
1347 + #include <linux/dma-debug.h>
1348 +-#include <linux/kmemcheck.h>
1349 + #include <linux/dma-mapping.h>
1350 +
1351 + extern const struct dma_map_ops or1k_dma_map_ops;
1352 +diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
1353 +index a14203c005f1..e11f03007b57 100644
1354 +--- a/arch/powerpc/include/asm/pgalloc.h
1355 ++++ b/arch/powerpc/include/asm/pgalloc.h
1356 +@@ -18,7 +18,7 @@ static inline gfp_t pgtable_gfp_flags(struct mm_struct *mm, gfp_t gfp)
1357 + }
1358 + #endif /* MODULE */
1359 +
1360 +-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
1361 ++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
1362 +
1363 + #ifdef CONFIG_PPC_BOOK3S
1364 + #include <asm/book3s/pgalloc.h>
1365 +diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
1366 +index 023ff9f17501..d5f2ee882f74 100644
1367 +--- a/arch/powerpc/include/asm/topology.h
1368 ++++ b/arch/powerpc/include/asm/topology.h
1369 +@@ -44,6 +44,11 @@ extern int sysfs_add_device_to_node(struct device *dev, int nid);
1370 + extern void sysfs_remove_device_from_node(struct device *dev, int nid);
1371 + extern int numa_update_cpu_topology(bool cpus_locked);
1372 +
1373 ++static inline void update_numa_cpu_lookup_table(unsigned int cpu, int node)
1374 ++{
1375 ++ numa_cpu_lookup_table[cpu] = node;
1376 ++}
1377 ++
1378 + static inline int early_cpu_to_node(int cpu)
1379 + {
1380 + int nid;
1381 +diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
1382 +index e9f72abc52b7..e91b40aa5417 100644
1383 +--- a/arch/powerpc/kernel/exceptions-64s.S
1384 ++++ b/arch/powerpc/kernel/exceptions-64s.S
1385 +@@ -1617,7 +1617,7 @@ USE_TEXT_SECTION()
1386 + .balign IFETCH_ALIGN_BYTES
1387 + do_hash_page:
1388 + #ifdef CONFIG_PPC_STD_MMU_64
1389 +- lis r0,DSISR_BAD_FAULT_64S@h
1390 ++ lis r0,(DSISR_BAD_FAULT_64S|DSISR_DABRMATCH)@h
1391 + ori r0,r0,DSISR_BAD_FAULT_64S@l
1392 + and. r0,r4,r0 /* weird error? */
1393 + bne- handle_page_fault /* if not, try to insert a HPTE */
1394 +diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
1395 +index 8c54166491e7..29b2fed93289 100644
1396 +--- a/arch/powerpc/kernel/head_32.S
1397 ++++ b/arch/powerpc/kernel/head_32.S
1398 +@@ -388,7 +388,7 @@ DataAccess:
1399 + EXCEPTION_PROLOG
1400 + mfspr r10,SPRN_DSISR
1401 + stw r10,_DSISR(r11)
1402 +- andis. r0,r10,DSISR_BAD_FAULT_32S@h
1403 ++ andis. r0,r10,(DSISR_BAD_FAULT_32S|DSISR_DABRMATCH)@h
1404 + bne 1f /* if not, try to put a PTE */
1405 + mfspr r4,SPRN_DAR /* into the hash table */
1406 + rlwinm r3,r10,32-15,21,21 /* DSISR_STORE -> _PAGE_RW */
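Both hunks widen the "weird error" mask with DSISR_DABRMATCH, so a data-breakpoint match branches to handle_page_fault instead of attempting a hash-table insert. A rough C rendering of the test the assembly performs (a sketch, not code from this patch; insert_hpte is a hypothetical stand-in for the hash-insert path):

	if (dsisr & (DSISR_BAD_FAULT_64S | DSISR_DABRMATCH))
		handle_page_fault();	/* data breakpoints land here too */
	else
		insert_hpte();		/* hypothetical: try to insert a HPTE */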
1407 +diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
1408 +index a51df9ef529d..a81279249bfb 100644
1409 +--- a/arch/powerpc/mm/numa.c
1410 ++++ b/arch/powerpc/mm/numa.c
1411 +@@ -142,11 +142,6 @@ static void reset_numa_cpu_lookup_table(void)
1412 + numa_cpu_lookup_table[cpu] = -1;
1413 + }
1414 +
1415 +-static void update_numa_cpu_lookup_table(unsigned int cpu, int node)
1416 +-{
1417 +- numa_cpu_lookup_table[cpu] = node;
1418 +-}
1419 +-
1420 + static void map_cpu_to_node(int cpu, int node)
1421 + {
1422 + update_numa_cpu_lookup_table(cpu, node);
1423 +diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
1424 +index cfbbee941a76..17ae5c15a9e0 100644
1425 +--- a/arch/powerpc/mm/pgtable-radix.c
1426 ++++ b/arch/powerpc/mm/pgtable-radix.c
1427 +@@ -17,6 +17,7 @@
1428 + #include <linux/of_fdt.h>
1429 + #include <linux/mm.h>
1430 + #include <linux/string_helpers.h>
1431 ++#include <linux/stop_machine.h>
1432 +
1433 + #include <asm/pgtable.h>
1434 + #include <asm/pgalloc.h>
1435 +@@ -671,6 +672,30 @@ static void free_pmd_table(pmd_t *pmd_start, pud_t *pud)
1436 + pud_clear(pud);
1437 + }
1438 +
1439 ++struct change_mapping_params {
1440 ++ pte_t *pte;
1441 ++ unsigned long start;
1442 ++ unsigned long end;
1443 ++ unsigned long aligned_start;
1444 ++ unsigned long aligned_end;
1445 ++};
1446 ++
1447 ++static int stop_machine_change_mapping(void *data)
1448 ++{
1449 ++ struct change_mapping_params *params =
1450 ++ (struct change_mapping_params *)data;
1451 ++
1452 ++ if (!data)
1453 ++ return -1;
1454 ++
1455 ++ spin_unlock(&init_mm.page_table_lock);
1456 ++ pte_clear(&init_mm, params->aligned_start, params->pte);
1457 ++ create_physical_mapping(params->aligned_start, params->start);
1458 ++ create_physical_mapping(params->end, params->aligned_end);
1459 ++ spin_lock(&init_mm.page_table_lock);
1460 ++ return 0;
1461 ++}
1462 ++
1463 + static void remove_pte_table(pte_t *pte_start, unsigned long addr,
1464 + unsigned long end)
1465 + {
1466 +@@ -699,6 +724,52 @@ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
1467 + }
1468 + }
1469 +
1470 ++/*
1471 ++ * Helper to clear the pte and potentially split the mapping.
1472 ++ */
1473 ++static void split_kernel_mapping(unsigned long addr, unsigned long end,
1474 ++ unsigned long size, pte_t *pte)
1475 ++{
1476 ++ unsigned long mask = ~(size - 1);
1477 ++ unsigned long aligned_start = addr & mask;
1478 ++ unsigned long aligned_end = addr + size;
1479 ++ struct change_mapping_params params;
1480 ++ bool split_region = false;
1481 ++
1482 ++ if ((end - addr) < size) {
1483 ++ /*
1484 ++ * We are about to clear the PTE, but have not yet flushed
1485 ++ * the mapping, so we must remap and flush. If the effects
1486 ++ * are visible outside the processor, or if we are running
1487 ++ * in code close to the mapping we cleared, we are in
1488 ++ * trouble.
1489 ++ */
1490 ++ if (overlaps_kernel_text(aligned_start, addr) ||
1491 ++ overlaps_kernel_text(end, aligned_end)) {
1492 ++ /*
1493 ++ * Hack, just return, don't pte_clear
1494 ++ */
1495 ++ WARN_ONCE(1, "Linear mapping %lx->%lx overlaps kernel "
1496 ++ "text, not splitting\n", addr, end);
1497 ++ return;
1498 ++ }
1499 ++ split_region = true;
1500 ++ }
1501 ++
1502 ++ if (split_region) {
1503 ++ params.pte = pte;
1504 ++ params.start = addr;
1505 ++ params.end = end;
1506 ++ params.aligned_start = addr & ~(size - 1);
1507 ++ params.aligned_end = min_t(unsigned long, aligned_end,
1508 ++ (unsigned long)__va(memblock_end_of_DRAM()));
1509 ++ stop_machine(stop_machine_change_mapping, &params, NULL);
1510 ++ return;
1511 ++ }
1512 ++
1513 ++ pte_clear(&init_mm, addr, pte);
1514 ++}
1515 ++
1516 + static void remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
1517 + unsigned long end)
1518 + {
1519 +@@ -714,13 +785,7 @@ static void remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
1520 + continue;
1521 +
1522 + if (pmd_huge(*pmd)) {
1523 +- if (!IS_ALIGNED(addr, PMD_SIZE) ||
1524 +- !IS_ALIGNED(next, PMD_SIZE)) {
1525 +- WARN_ONCE(1, "%s: unaligned range\n", __func__);
1526 +- continue;
1527 +- }
1528 +-
1529 +- pte_clear(&init_mm, addr, (pte_t *)pmd);
1530 ++ split_kernel_mapping(addr, end, PMD_SIZE, (pte_t *)pmd);
1531 + continue;
1532 + }
1533 +
1534 +@@ -745,13 +810,7 @@ static void remove_pud_table(pud_t *pud_start, unsigned long addr,
1535 + continue;
1536 +
1537 + if (pud_huge(*pud)) {
1538 +- if (!IS_ALIGNED(addr, PUD_SIZE) ||
1539 +- !IS_ALIGNED(next, PUD_SIZE)) {
1540 +- WARN_ONCE(1, "%s: unaligned range\n", __func__);
1541 +- continue;
1542 +- }
1543 +-
1544 +- pte_clear(&init_mm, addr, (pte_t *)pud);
1545 ++ split_kernel_mapping(addr, end, PUD_SIZE, (pte_t *)pud);
1546 + continue;
1547 + }
1548 +
1549 +@@ -777,13 +836,7 @@ static void remove_pagetable(unsigned long start, unsigned long end)
1550 + continue;
1551 +
1552 + if (pgd_huge(*pgd)) {
1553 +- if (!IS_ALIGNED(addr, PGDIR_SIZE) ||
1554 +- !IS_ALIGNED(next, PGDIR_SIZE)) {
1555 +- WARN_ONCE(1, "%s: unaligned range\n", __func__);
1556 +- continue;
1557 +- }
1558 +-
1559 +- pte_clear(&init_mm, addr, (pte_t *)pgd);
1560 ++ split_kernel_mapping(addr, end, PGDIR_SIZE, (pte_t *)pgd);
1561 + continue;
1562 + }
1563 +
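Instead of warning and skipping when a hot-remove range is not aligned to the huge mapping (the old behaviour deleted in these hunks), split_kernel_mapping() clears the whole huge PTE and recreates the surviving pieces, and it does so under stop_machine() so no other CPU can run through the mapping mid-split. Condensed from the hunks above:

	params.pte = pte;
	params.start = addr;
	params.end = end;
	params.aligned_start = addr & ~(size - 1);
	params.aligned_end = min_t(unsigned long, aligned_end,
				   (unsigned long)__va(memblock_end_of_DRAM()));

	/* the clear + remap runs with every other CPU held off */
	stop_machine(stop_machine_change_mapping, &params, NULL);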
1564 +diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
1565 +index ac0717a90ca6..12f95b1f7d07 100644
1566 +--- a/arch/powerpc/mm/pgtable_64.c
1567 ++++ b/arch/powerpc/mm/pgtable_64.c
1568 +@@ -483,6 +483,8 @@ void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
1569 + if (old & PATB_HR) {
1570 + asm volatile(PPC_TLBIE_5(%0,%1,2,0,1) : :
1571 + "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
1572 ++ asm volatile(PPC_TLBIE_5(%0,%1,2,1,1) : :
1573 ++ "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
1574 + trace_tlbie(lpid, 0, TLBIEL_INVAL_SET_LPID, lpid, 2, 0, 1);
1575 + } else {
1576 + asm volatile(PPC_TLBIE_5(%0,%1,2,0,0) : :
1577 +diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
1578 +index d304028641a2..4b295cfd5f7e 100644
1579 +--- a/arch/powerpc/mm/tlb-radix.c
1580 ++++ b/arch/powerpc/mm/tlb-radix.c
1581 +@@ -453,14 +453,12 @@ void radix__flush_tlb_all(void)
1582 + */
1583 + asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
1584 + : : "r"(rb), "i"(r), "i"(1), "i"(ric), "r"(rs) : "memory");
1585 +- trace_tlbie(0, 0, rb, rs, ric, prs, r);
1586 + /*
1587 + * now flush host entires by passing PRS = 0 and LPID == 0
1588 + */
1589 + asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
1590 + : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(0) : "memory");
1591 + asm volatile("eieio; tlbsync; ptesync": : :"memory");
1592 +- trace_tlbie(0, 0, rb, 0, ric, prs, r);
1593 + }
1594 +
1595 + void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
1596 +diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
1597 +index fadb95efbb9e..b1ac8ac38434 100644
1598 +--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
1599 ++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
1600 +@@ -36,6 +36,7 @@
1601 + #include <asm/xics.h>
1602 + #include <asm/xive.h>
1603 + #include <asm/plpar_wrappers.h>
1604 ++#include <asm/topology.h>
1605 +
1606 + #include "pseries.h"
1607 + #include "offline_states.h"
1608 +@@ -331,6 +332,7 @@ static void pseries_remove_processor(struct device_node *np)
1609 + BUG_ON(cpu_online(cpu));
1610 + set_cpu_present(cpu, false);
1611 + set_hard_smp_processor_id(cpu, -1);
1612 ++ update_numa_cpu_lookup_table(cpu, -1);
1613 + break;
1614 + }
1615 + if (cpu >= nr_cpu_ids)
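Paired with the topology.h hunk earlier, this makes CPU hot-remove invalidate the cached NUMA node, presumably so a later hot-add re-derives the mapping rather than reusing a stale value:

	/* on removal: forget which node this CPU was on */
	update_numa_cpu_lookup_table(cpu, -1);	/* numa_cpu_lookup_table[cpu] = -1 */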
1616 +diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
1617 +index d9c4c9366049..091f1d0d0af1 100644
1618 +--- a/arch/powerpc/sysdev/xive/spapr.c
1619 ++++ b/arch/powerpc/sysdev/xive/spapr.c
1620 +@@ -356,7 +356,8 @@ static int xive_spapr_configure_queue(u32 target, struct xive_q *q, u8 prio,
1621 +
1622 + rc = plpar_int_get_queue_info(0, target, prio, &esn_page, &esn_size);
1623 + if (rc) {
1624 +- pr_err("Error %lld getting queue info prio %d\n", rc, prio);
1625 ++ pr_err("Error %lld getting queue info CPU %d prio %d\n", rc,
1626 ++ target, prio);
1627 + rc = -EIO;
1628 + goto fail;
1629 + }
1630 +@@ -370,7 +371,8 @@ static int xive_spapr_configure_queue(u32 target, struct xive_q *q, u8 prio,
1631 + /* Configure and enable the queue in HW */
1632 + rc = plpar_int_set_queue_config(flags, target, prio, qpage_phys, order);
1633 + if (rc) {
1634 +- pr_err("Error %lld setting queue for prio %d\n", rc, prio);
1635 ++ pr_err("Error %lld setting queue for CPU %d prio %d\n", rc,
1636 ++ target, prio);
1637 + rc = -EIO;
1638 + } else {
1639 + q->qpage = qpage;
1640 +@@ -389,8 +391,8 @@ static int xive_spapr_setup_queue(unsigned int cpu, struct xive_cpu *xc,
1641 + if (IS_ERR(qpage))
1642 + return PTR_ERR(qpage);
1643 +
1644 +- return xive_spapr_configure_queue(cpu, q, prio, qpage,
1645 +- xive_queue_shift);
1646 ++ return xive_spapr_configure_queue(get_hard_smp_processor_id(cpu),
1647 ++ q, prio, qpage, xive_queue_shift);
1648 + }
1649 +
1650 + static void xive_spapr_cleanup_queue(unsigned int cpu, struct xive_cpu *xc,
1651 +@@ -399,10 +401,12 @@ static void xive_spapr_cleanup_queue(unsigned int cpu, struct xive_cpu *xc,
1652 + struct xive_q *q = &xc->queue[prio];
1653 + unsigned int alloc_order;
1654 + long rc;
1655 ++ int hw_cpu = get_hard_smp_processor_id(cpu);
1656 +
1657 +- rc = plpar_int_set_queue_config(0, cpu, prio, 0, 0);
1658 ++ rc = plpar_int_set_queue_config(0, hw_cpu, prio, 0, 0);
1659 + if (rc)
1660 +- pr_err("Error %ld setting queue for prio %d\n", rc, prio);
1661 ++ pr_err("Error %ld setting queue for CPU %d prio %d\n", rc,
1662 ++ hw_cpu, prio);
1663 +
1664 + alloc_order = xive_alloc_order(xive_queue_shift);
1665 + free_pages((unsigned long)q->qpage, alloc_order);
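The common thread in these xive hunks: the plpar_int_*_queue_config() hypervisor calls take a hardware thread id, while the callers held a Linux logical CPU number. The corrected call pattern, using names from the hunks above:

	int hw_cpu = get_hard_smp_processor_id(cpu);	/* logical -> hardware id */

	rc = plpar_int_set_queue_config(0, hw_cpu, prio, 0, 0);
	if (rc)
		pr_err("Error %ld setting queue for CPU %d prio %d\n",
		       rc, hw_cpu, prio);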
1666 +diff --git a/arch/s390/kernel/compat_linux.c b/arch/s390/kernel/compat_linux.c
1667 +index 59eea9c65d3e..79b7a3438d54 100644
1668 +--- a/arch/s390/kernel/compat_linux.c
1669 ++++ b/arch/s390/kernel/compat_linux.c
1670 +@@ -110,7 +110,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setregid16, u16, rgid, u16, egid)
1671 +
1672 + COMPAT_SYSCALL_DEFINE1(s390_setgid16, u16, gid)
1673 + {
1674 +- return sys_setgid((gid_t)gid);
1675 ++ return sys_setgid(low2highgid(gid));
1676 + }
1677 +
1678 + COMPAT_SYSCALL_DEFINE2(s390_setreuid16, u16, ruid, u16, euid)
1679 +@@ -120,7 +120,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setreuid16, u16, ruid, u16, euid)
1680 +
1681 + COMPAT_SYSCALL_DEFINE1(s390_setuid16, u16, uid)
1682 + {
1683 +- return sys_setuid((uid_t)uid);
1684 ++ return sys_setuid(low2highuid(uid));
1685 + }
1686 +
1687 + COMPAT_SYSCALL_DEFINE3(s390_setresuid16, u16, ruid, u16, euid, u16, suid)
1688 +@@ -173,12 +173,12 @@ COMPAT_SYSCALL_DEFINE3(s390_getresgid16, u16 __user *, rgidp,
1689 +
1690 + COMPAT_SYSCALL_DEFINE1(s390_setfsuid16, u16, uid)
1691 + {
1692 +- return sys_setfsuid((uid_t)uid);
1693 ++ return sys_setfsuid(low2highuid(uid));
1694 + }
1695 +
1696 + COMPAT_SYSCALL_DEFINE1(s390_setfsgid16, u16, gid)
1697 + {
1698 +- return sys_setfsgid((gid_t)gid);
1699 ++ return sys_setfsgid(low2highgid(gid));
1700 + }
1701 +
1702 + static int groups16_to_user(u16 __user *grouplist, struct group_info *group_info)
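The point of replacing the plain casts with low2highuid()/low2highgid() is the -1 sentinel: setfsuid(-1), for example, means "leave it unchanged", but (uid_t)(u16)-1 widens to 0x0000ffff rather than -1. The existing include/linux/highuid.h macros handle this; approximately:

/* approximate definitions, for reference */
#define low2highuid(uid) ((uid) == (u16)-1 ? (uid_t)-1 : (uid_t)(uid))
#define low2highgid(gid) ((gid) == (u16)-1 ? (gid_t)-1 : (gid_t)(gid))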
1703 +diff --git a/arch/sh/kernel/dwarf.c b/arch/sh/kernel/dwarf.c
1704 +index e1d751ae2498..1a2526676a87 100644
1705 +--- a/arch/sh/kernel/dwarf.c
1706 ++++ b/arch/sh/kernel/dwarf.c
1707 +@@ -1172,11 +1172,11 @@ static int __init dwarf_unwinder_init(void)
1708 +
1709 + dwarf_frame_cachep = kmem_cache_create("dwarf_frames",
1710 + sizeof(struct dwarf_frame), 0,
1711 +- SLAB_PANIC | SLAB_HWCACHE_ALIGN | SLAB_NOTRACK, NULL);
1712 ++ SLAB_PANIC | SLAB_HWCACHE_ALIGN, NULL);
1713 +
1714 + dwarf_reg_cachep = kmem_cache_create("dwarf_regs",
1715 + sizeof(struct dwarf_reg), 0,
1716 +- SLAB_PANIC | SLAB_HWCACHE_ALIGN | SLAB_NOTRACK, NULL);
1717 ++ SLAB_PANIC | SLAB_HWCACHE_ALIGN, NULL);
1718 +
1719 + dwarf_frame_pool = mempool_create_slab_pool(DWARF_FRAME_MIN_REQ,
1720 + dwarf_frame_cachep);
1721 +diff --git a/arch/sh/kernel/process.c b/arch/sh/kernel/process.c
1722 +index b2d9963d5978..68b1a67533ce 100644
1723 +--- a/arch/sh/kernel/process.c
1724 ++++ b/arch/sh/kernel/process.c
1725 +@@ -59,7 +59,7 @@ void arch_task_cache_init(void)
1726 +
1727 + task_xstate_cachep = kmem_cache_create("task_xstate", xstate_size,
1728 + __alignof__(union thread_xstate),
1729 +- SLAB_PANIC | SLAB_NOTRACK, NULL);
1730 ++ SLAB_PANIC, NULL);
1731 + }
1732 +
1733 + #ifdef CONFIG_SH_FPU_EMU
1734 +diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
1735 +index a0cc1be767c8..984e9d65ea0d 100644
1736 +--- a/arch/sparc/mm/init_64.c
1737 ++++ b/arch/sparc/mm/init_64.c
1738 +@@ -2934,7 +2934,7 @@ void __flush_tlb_all(void)
1739 + pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
1740 + unsigned long address)
1741 + {
1742 +- struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
1743 ++ struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
1744 + pte_t *pte = NULL;
1745 +
1746 + if (page)
1747 +@@ -2946,7 +2946,7 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
1748 + pgtable_t pte_alloc_one(struct mm_struct *mm,
1749 + unsigned long address)
1750 + {
1751 +- struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
1752 ++ struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
1753 + if (!page)
1754 + return NULL;
1755 + if (!pgtable_page_ctor(page)) {
1756 +diff --git a/arch/unicore32/include/asm/pgalloc.h b/arch/unicore32/include/asm/pgalloc.h
1757 +index 26775793c204..f0fdb268f8f2 100644
1758 +--- a/arch/unicore32/include/asm/pgalloc.h
1759 ++++ b/arch/unicore32/include/asm/pgalloc.h
1760 +@@ -28,7 +28,7 @@ extern void free_pgd_slow(struct mm_struct *mm, pgd_t *pgd);
1761 + #define pgd_alloc(mm) get_pgd_slow(mm)
1762 + #define pgd_free(mm, pgd) free_pgd_slow(mm, pgd)
1763 +
1764 +-#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
1765 ++#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
1766 +
1767 + /*
1768 + * Allocate one PTE table.
1769 +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
1770 +index 17de6acc0eab..559b37bf5a2e 100644
1771 +--- a/arch/x86/Kconfig
1772 ++++ b/arch/x86/Kconfig
1773 +@@ -111,7 +111,6 @@ config X86
1774 + select HAVE_ARCH_JUMP_LABEL
1775 + select HAVE_ARCH_KASAN if X86_64
1776 + select HAVE_ARCH_KGDB
1777 +- select HAVE_ARCH_KMEMCHECK
1778 + select HAVE_ARCH_MMAP_RND_BITS if MMU
1779 + select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
1780 + select HAVE_ARCH_COMPAT_MMAP_BASES if MMU && COMPAT
1781 +@@ -1443,7 +1442,7 @@ config ARCH_DMA_ADDR_T_64BIT
1782 +
1783 + config X86_DIRECT_GBPAGES
1784 + def_bool y
1785 +- depends on X86_64 && !DEBUG_PAGEALLOC && !KMEMCHECK
1786 ++ depends on X86_64 && !DEBUG_PAGEALLOC
1787 + ---help---
1788 + Certain kernel features effectively disable kernel
1789 + linear 1 GB mappings (even if the CPU otherwise
1790 +diff --git a/arch/x86/Makefile b/arch/x86/Makefile
1791 +index 504b1a4535ac..fad55160dcb9 100644
1792 +--- a/arch/x86/Makefile
1793 ++++ b/arch/x86/Makefile
1794 +@@ -158,11 +158,6 @@ ifdef CONFIG_X86_X32
1795 + endif
1796 + export CONFIG_X86_X32_ABI
1797 +
1798 +-# Don't unroll struct assignments with kmemcheck enabled
1799 +-ifeq ($(CONFIG_KMEMCHECK),y)
1800 +- KBUILD_CFLAGS += $(call cc-option,-fno-builtin-memcpy)
1801 +-endif
1802 +-
1803 + #
1804 + # If the function graph tracer is used with mcount instead of fentry,
1805 + # '-maccumulate-outgoing-args' is needed to prevent a GCC bug
1806 +diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
1807 +index 3f48f695d5e6..dce7092ab24a 100644
1808 +--- a/arch/x86/entry/calling.h
1809 ++++ b/arch/x86/entry/calling.h
1810 +@@ -97,80 +97,69 @@ For 32-bit we have the following conventions - kernel is built with
1811 +
1812 + #define SIZEOF_PTREGS 21*8
1813 +
1814 +- .macro ALLOC_PT_GPREGS_ON_STACK
1815 +- addq $-(15*8), %rsp
1816 +- .endm
1817 ++.macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax
1818 ++ /*
1819 ++ * Push registers and sanitize registers of values that a
1820 ++ * speculation attack might otherwise want to exploit. The
1821 ++ * lower registers are likely clobbered well before they
1822 ++ * could be put to use in a speculative execution gadget.
1823 ++ * Interleave XOR with PUSH for better uop scheduling:
1824 ++ */
1825 ++ pushq %rdi /* pt_regs->di */
1826 ++ pushq %rsi /* pt_regs->si */
1827 ++ pushq \rdx /* pt_regs->dx */
1828 ++ pushq %rcx /* pt_regs->cx */
1829 ++ pushq \rax /* pt_regs->ax */
1830 ++ pushq %r8 /* pt_regs->r8 */
1831 ++ xorq %r8, %r8 /* nospec r8 */
1832 ++ pushq %r9 /* pt_regs->r9 */
1833 ++ xorq %r9, %r9 /* nospec r9 */
1834 ++ pushq %r10 /* pt_regs->r10 */
1835 ++ xorq %r10, %r10 /* nospec r10 */
1836 ++ pushq %r11 /* pt_regs->r11 */
1837 ++ xorq %r11, %r11 /* nospec r11 */
1838 ++ pushq %rbx /* pt_regs->rbx */
1839 ++ xorl %ebx, %ebx /* nospec rbx */
1840 ++ pushq %rbp /* pt_regs->rbp */
1841 ++ xorl %ebp, %ebp /* nospec rbp */
1842 ++ pushq %r12 /* pt_regs->r12 */
1843 ++ xorq %r12, %r12 /* nospec r12 */
1844 ++ pushq %r13 /* pt_regs->r13 */
1845 ++ xorq %r13, %r13 /* nospec r13 */
1846 ++ pushq %r14 /* pt_regs->r14 */
1847 ++ xorq %r14, %r14 /* nospec r14 */
1848 ++ pushq %r15 /* pt_regs->r15 */
1849 ++ xorq %r15, %r15 /* nospec r15 */
1850 ++ UNWIND_HINT_REGS
1851 ++.endm
1852 +
1853 +- .macro SAVE_C_REGS_HELPER offset=0 rax=1 rcx=1 r8910=1 r11=1
1854 +- .if \r11
1855 +- movq %r11, 6*8+\offset(%rsp)
1856 +- .endif
1857 +- .if \r8910
1858 +- movq %r10, 7*8+\offset(%rsp)
1859 +- movq %r9, 8*8+\offset(%rsp)
1860 +- movq %r8, 9*8+\offset(%rsp)
1861 +- .endif
1862 +- .if \rax
1863 +- movq %rax, 10*8+\offset(%rsp)
1864 +- .endif
1865 +- .if \rcx
1866 +- movq %rcx, 11*8+\offset(%rsp)
1867 +- .endif
1868 +- movq %rdx, 12*8+\offset(%rsp)
1869 +- movq %rsi, 13*8+\offset(%rsp)
1870 +- movq %rdi, 14*8+\offset(%rsp)
1871 +- UNWIND_HINT_REGS offset=\offset extra=0
1872 +- .endm
1873 +- .macro SAVE_C_REGS offset=0
1874 +- SAVE_C_REGS_HELPER \offset, 1, 1, 1, 1
1875 +- .endm
1876 +- .macro SAVE_C_REGS_EXCEPT_RAX_RCX offset=0
1877 +- SAVE_C_REGS_HELPER \offset, 0, 0, 1, 1
1878 +- .endm
1879 +- .macro SAVE_C_REGS_EXCEPT_R891011
1880 +- SAVE_C_REGS_HELPER 0, 1, 1, 0, 0
1881 +- .endm
1882 +- .macro SAVE_C_REGS_EXCEPT_RCX_R891011
1883 +- SAVE_C_REGS_HELPER 0, 1, 0, 0, 0
1884 +- .endm
1885 +- .macro SAVE_C_REGS_EXCEPT_RAX_RCX_R11
1886 +- SAVE_C_REGS_HELPER 0, 0, 0, 1, 0
1887 +- .endm
1888 +-
1889 +- .macro SAVE_EXTRA_REGS offset=0
1890 +- movq %r15, 0*8+\offset(%rsp)
1891 +- movq %r14, 1*8+\offset(%rsp)
1892 +- movq %r13, 2*8+\offset(%rsp)
1893 +- movq %r12, 3*8+\offset(%rsp)
1894 +- movq %rbp, 4*8+\offset(%rsp)
1895 +- movq %rbx, 5*8+\offset(%rsp)
1896 +- UNWIND_HINT_REGS offset=\offset
1897 +- .endm
1898 +-
1899 +- .macro POP_EXTRA_REGS
1900 ++.macro POP_REGS pop_rdi=1 skip_r11rcx=0
1901 + popq %r15
1902 + popq %r14
1903 + popq %r13
1904 + popq %r12
1905 + popq %rbp
1906 + popq %rbx
1907 +- .endm
1908 +-
1909 +- .macro POP_C_REGS
1910 ++ .if \skip_r11rcx
1911 ++ popq %rsi
1912 ++ .else
1913 + popq %r11
1914 ++ .endif
1915 + popq %r10
1916 + popq %r9
1917 + popq %r8
1918 + popq %rax
1919 ++ .if \skip_r11rcx
1920 ++ popq %rsi
1921 ++ .else
1922 + popq %rcx
1923 ++ .endif
1924 + popq %rdx
1925 + popq %rsi
1926 ++ .if \pop_rdi
1927 + popq %rdi
1928 +- .endm
1929 +-
1930 +- .macro icebp
1931 +- .byte 0xf1
1932 +- .endm
1933 ++ .endif
1934 ++.endm
1935 +
1936 + /*
1937 + * This is a sneaky trick to help the unwinder find pt_regs on the stack. The
1938 +@@ -178,7 +167,7 @@ For 32-bit we have the following conventions - kernel is built with
1939 + * is just setting the LSB, which makes it an invalid stack address and is also
1940 + * a signal to the unwinder that it's a pt_regs pointer in disguise.
1941 + *
1942 +- * NOTE: This macro must be used *after* SAVE_EXTRA_REGS because it corrupts
1943 ++ * NOTE: This macro must be used *after* PUSH_AND_CLEAR_REGS because it corrupts
1944 + * the original rbp.
1945 + */
1946 + .macro ENCODE_FRAME_POINTER ptregs_offset=0
1947 +diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
1948 +index 16e2d72e79a0..68a2d76e4f8f 100644
1949 +--- a/arch/x86/entry/entry_64.S
1950 ++++ b/arch/x86/entry/entry_64.S
1951 +@@ -209,7 +209,7 @@ ENTRY(entry_SYSCALL_64)
1952 +
1953 + swapgs
1954 + /*
1955 +- * This path is not taken when PAGE_TABLE_ISOLATION is disabled so it
1956 ++ * This path is only taken when PAGE_TABLE_ISOLATION is disabled so it
1957 + * is not required to switch CR3.
1958 + */
1959 + movq %rsp, PER_CPU_VAR(rsp_scratch)
1960 +@@ -223,22 +223,8 @@ ENTRY(entry_SYSCALL_64)
1961 + pushq %rcx /* pt_regs->ip */
1962 + GLOBAL(entry_SYSCALL_64_after_hwframe)
1963 + pushq %rax /* pt_regs->orig_ax */
1964 +- pushq %rdi /* pt_regs->di */
1965 +- pushq %rsi /* pt_regs->si */
1966 +- pushq %rdx /* pt_regs->dx */
1967 +- pushq %rcx /* pt_regs->cx */
1968 +- pushq $-ENOSYS /* pt_regs->ax */
1969 +- pushq %r8 /* pt_regs->r8 */
1970 +- pushq %r9 /* pt_regs->r9 */
1971 +- pushq %r10 /* pt_regs->r10 */
1972 +- pushq %r11 /* pt_regs->r11 */
1973 +- pushq %rbx /* pt_regs->rbx */
1974 +- pushq %rbp /* pt_regs->rbp */
1975 +- pushq %r12 /* pt_regs->r12 */
1976 +- pushq %r13 /* pt_regs->r13 */
1977 +- pushq %r14 /* pt_regs->r14 */
1978 +- pushq %r15 /* pt_regs->r15 */
1979 +- UNWIND_HINT_REGS
1980 ++
1981 ++ PUSH_AND_CLEAR_REGS rax=$-ENOSYS
1982 +
1983 + TRACE_IRQS_OFF
1984 +
1985 +@@ -317,15 +303,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
1986 + syscall_return_via_sysret:
1987 + /* rcx and r11 are already restored (see code above) */
1988 + UNWIND_HINT_EMPTY
1989 +- POP_EXTRA_REGS
1990 +- popq %rsi /* skip r11 */
1991 +- popq %r10
1992 +- popq %r9
1993 +- popq %r8
1994 +- popq %rax
1995 +- popq %rsi /* skip rcx */
1996 +- popq %rdx
1997 +- popq %rsi
1998 ++ POP_REGS pop_rdi=0 skip_r11rcx=1
1999 +
2000 + /*
2001 + * Now all regs are restored except RSP and RDI.
2002 +@@ -555,9 +533,7 @@ END(irq_entries_start)
2003 + call switch_to_thread_stack
2004 + 1:
2005 +
2006 +- ALLOC_PT_GPREGS_ON_STACK
2007 +- SAVE_C_REGS
2008 +- SAVE_EXTRA_REGS
2009 ++ PUSH_AND_CLEAR_REGS
2010 + ENCODE_FRAME_POINTER
2011 +
2012 + testb $3, CS(%rsp)
2013 +@@ -618,15 +594,7 @@ GLOBAL(swapgs_restore_regs_and_return_to_usermode)
2014 + ud2
2015 + 1:
2016 + #endif
2017 +- POP_EXTRA_REGS
2018 +- popq %r11
2019 +- popq %r10
2020 +- popq %r9
2021 +- popq %r8
2022 +- popq %rax
2023 +- popq %rcx
2024 +- popq %rdx
2025 +- popq %rsi
2026 ++ POP_REGS pop_rdi=0
2027 +
2028 + /*
2029 + * The stack is now user RDI, orig_ax, RIP, CS, EFLAGS, RSP, SS.
2030 +@@ -684,8 +652,7 @@ GLOBAL(restore_regs_and_return_to_kernel)
2031 + ud2
2032 + 1:
2033 + #endif
2034 +- POP_EXTRA_REGS
2035 +- POP_C_REGS
2036 ++ POP_REGS
2037 + addq $8, %rsp /* skip regs->orig_ax */
2038 + INTERRUPT_RETURN
2039 +
2040 +@@ -900,7 +867,9 @@ ENTRY(\sym)
2041 + pushq $-1 /* ORIG_RAX: no syscall to restart */
2042 + .endif
2043 +
2044 +- ALLOC_PT_GPREGS_ON_STACK
2045 ++ /* Save all registers in pt_regs */
2046 ++ PUSH_AND_CLEAR_REGS
2047 ++ ENCODE_FRAME_POINTER
2048 +
2049 + .if \paranoid < 2
2050 + testb $3, CS(%rsp) /* If coming from userspace, switch stacks */
2051 +@@ -1111,9 +1080,7 @@ ENTRY(xen_failsafe_callback)
2052 + addq $0x30, %rsp
2053 + UNWIND_HINT_IRET_REGS
2054 + pushq $-1 /* orig_ax = -1 => not a system call */
2055 +- ALLOC_PT_GPREGS_ON_STACK
2056 +- SAVE_C_REGS
2057 +- SAVE_EXTRA_REGS
2058 ++ PUSH_AND_CLEAR_REGS
2059 + ENCODE_FRAME_POINTER
2060 + jmp error_exit
2061 + END(xen_failsafe_callback)
2062 +@@ -1150,16 +1117,13 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
2063 + #endif
2064 +
2065 + /*
2066 +- * Save all registers in pt_regs, and switch gs if needed.
2067 ++ * Switch gs if needed.
2068 + * Use slow, but surefire "are we in kernel?" check.
2069 + * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
2070 + */
2071 + ENTRY(paranoid_entry)
2072 + UNWIND_HINT_FUNC
2073 + cld
2074 +- SAVE_C_REGS 8
2075 +- SAVE_EXTRA_REGS 8
2076 +- ENCODE_FRAME_POINTER 8
2077 + movl $1, %ebx
2078 + movl $MSR_GS_BASE, %ecx
2079 + rdmsr
2080 +@@ -1198,21 +1162,18 @@ ENTRY(paranoid_exit)
2081 + jmp .Lparanoid_exit_restore
2082 + .Lparanoid_exit_no_swapgs:
2083 + TRACE_IRQS_IRETQ_DEBUG
2084 ++ RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
2085 + .Lparanoid_exit_restore:
2086 + jmp restore_regs_and_return_to_kernel
2087 + END(paranoid_exit)
2088 +
2089 + /*
2090 +- * Save all registers in pt_regs, and switch gs if needed.
2091 ++ * Switch gs if needed.
2092 + * Return: EBX=0: came from user mode; EBX=1: otherwise
2093 + */
2094 + ENTRY(error_entry)
2095 +- UNWIND_HINT_FUNC
2096 ++ UNWIND_HINT_REGS offset=8
2097 + cld
2098 +- SAVE_C_REGS 8
2099 +- SAVE_EXTRA_REGS 8
2100 +- ENCODE_FRAME_POINTER 8
2101 +- xorl %ebx, %ebx
2102 + testb $3, CS+8(%rsp)
2103 + jz .Lerror_kernelspace
2104 +
2105 +@@ -1393,22 +1354,7 @@ ENTRY(nmi)
2106 + pushq 1*8(%rdx) /* pt_regs->rip */
2107 + UNWIND_HINT_IRET_REGS
2108 + pushq $-1 /* pt_regs->orig_ax */
2109 +- pushq %rdi /* pt_regs->di */
2110 +- pushq %rsi /* pt_regs->si */
2111 +- pushq (%rdx) /* pt_regs->dx */
2112 +- pushq %rcx /* pt_regs->cx */
2113 +- pushq %rax /* pt_regs->ax */
2114 +- pushq %r8 /* pt_regs->r8 */
2115 +- pushq %r9 /* pt_regs->r9 */
2116 +- pushq %r10 /* pt_regs->r10 */
2117 +- pushq %r11 /* pt_regs->r11 */
2118 +- pushq %rbx /* pt_regs->rbx */
2119 +- pushq %rbp /* pt_regs->rbp */
2120 +- pushq %r12 /* pt_regs->r12 */
2121 +- pushq %r13 /* pt_regs->r13 */
2122 +- pushq %r14 /* pt_regs->r14 */
2123 +- pushq %r15 /* pt_regs->r15 */
2124 +- UNWIND_HINT_REGS
2125 ++ PUSH_AND_CLEAR_REGS rdx=(%rdx)
2126 + ENCODE_FRAME_POINTER
2127 +
2128 + /*
2129 +@@ -1618,7 +1564,8 @@ end_repeat_nmi:
2130 + * frame to point back to repeat_nmi.
2131 + */
2132 + pushq $-1 /* ORIG_RAX: no syscall to restart */
2133 +- ALLOC_PT_GPREGS_ON_STACK
2134 ++ PUSH_AND_CLEAR_REGS
2135 ++ ENCODE_FRAME_POINTER
2136 +
2137 + /*
2138 + * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
2139 +@@ -1642,8 +1589,7 @@ end_repeat_nmi:
2140 + nmi_swapgs:
2141 + SWAPGS_UNSAFE_STACK
2142 + nmi_restore:
2143 +- POP_EXTRA_REGS
2144 +- POP_C_REGS
2145 ++ POP_REGS
2146 +
2147 + /*
2148 + * Skip orig_ax and the "outermost" frame to point RSP at the "iret"
2149 +diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
2150 +index 98d5358e4041..fd65e016e413 100644
2151 +--- a/arch/x86/entry/entry_64_compat.S
2152 ++++ b/arch/x86/entry/entry_64_compat.S
2153 +@@ -85,15 +85,25 @@ ENTRY(entry_SYSENTER_compat)
2154 + pushq %rcx /* pt_regs->cx */
2155 + pushq $-ENOSYS /* pt_regs->ax */
2156 + pushq $0 /* pt_regs->r8 = 0 */
2157 ++ xorq %r8, %r8 /* nospec r8 */
2158 + pushq $0 /* pt_regs->r9 = 0 */
2159 ++ xorq %r9, %r9 /* nospec r9 */
2160 + pushq $0 /* pt_regs->r10 = 0 */
2161 ++ xorq %r10, %r10 /* nospec r10 */
2162 + pushq $0 /* pt_regs->r11 = 0 */
2163 ++ xorq %r11, %r11 /* nospec r11 */
2164 + pushq %rbx /* pt_regs->rbx */
2165 ++ xorl %ebx, %ebx /* nospec rbx */
2166 + pushq %rbp /* pt_regs->rbp (will be overwritten) */
2167 ++ xorl %ebp, %ebp /* nospec rbp */
2168 + pushq $0 /* pt_regs->r12 = 0 */
2169 ++ xorq %r12, %r12 /* nospec r12 */
2170 + pushq $0 /* pt_regs->r13 = 0 */
2171 ++ xorq %r13, %r13 /* nospec r13 */
2172 + pushq $0 /* pt_regs->r14 = 0 */
2173 ++ xorq %r14, %r14 /* nospec r14 */
2174 + pushq $0 /* pt_regs->r15 = 0 */
2175 ++ xorq %r15, %r15 /* nospec r15 */
2176 + cld
2177 +
2178 + /*
2179 +@@ -214,15 +224,25 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
2180 + pushq %rbp /* pt_regs->cx (stashed in bp) */
2181 + pushq $-ENOSYS /* pt_regs->ax */
2182 + pushq $0 /* pt_regs->r8 = 0 */
2183 ++ xorq %r8, %r8 /* nospec r8 */
2184 + pushq $0 /* pt_regs->r9 = 0 */
2185 ++ xorq %r9, %r9 /* nospec r9 */
2186 + pushq $0 /* pt_regs->r10 = 0 */
2187 ++ xorq %r10, %r10 /* nospec r10 */
2188 + pushq $0 /* pt_regs->r11 = 0 */
2189 ++ xorq %r11, %r11 /* nospec r11 */
2190 + pushq %rbx /* pt_regs->rbx */
2191 ++ xorl %ebx, %ebx /* nospec rbx */
2192 + pushq %rbp /* pt_regs->rbp (will be overwritten) */
2193 ++ xorl %ebp, %ebp /* nospec rbp */
2194 + pushq $0 /* pt_regs->r12 = 0 */
2195 ++ xorq %r12, %r12 /* nospec r12 */
2196 + pushq $0 /* pt_regs->r13 = 0 */
2197 ++ xorq %r13, %r13 /* nospec r13 */
2198 + pushq $0 /* pt_regs->r14 = 0 */
2199 ++ xorq %r14, %r14 /* nospec r14 */
2200 + pushq $0 /* pt_regs->r15 = 0 */
2201 ++ xorq %r15, %r15 /* nospec r15 */
2202 +
2203 + /*
2204 + * User mode is traced as though IRQs are on, and SYSENTER
2205 +@@ -338,15 +358,25 @@ ENTRY(entry_INT80_compat)
2206 + pushq %rcx /* pt_regs->cx */
2207 + pushq $-ENOSYS /* pt_regs->ax */
2208 + pushq $0 /* pt_regs->r8 = 0 */
2209 ++ xorq %r8, %r8 /* nospec r8 */
2210 + pushq $0 /* pt_regs->r9 = 0 */
2211 ++ xorq %r9, %r9 /* nospec r9 */
2212 + pushq $0 /* pt_regs->r10 = 0 */
2213 ++ xorq %r10, %r10 /* nospec r10 */
2214 + pushq $0 /* pt_regs->r11 = 0 */
2215 ++ xorq %r11, %r11 /* nospec r11 */
2216 + pushq %rbx /* pt_regs->rbx */
2217 ++ xorl %ebx, %ebx /* nospec rbx */
2218 + pushq %rbp /* pt_regs->rbp */
2219 ++ xorl %ebp, %ebp /* nospec rbp */
2220 + pushq %r12 /* pt_regs->r12 */
2221 ++ xorq %r12, %r12 /* nospec r12 */
2222 + pushq %r13 /* pt_regs->r13 */
2223 ++ xorq %r13, %r13 /* nospec r13 */
2224 + pushq %r14 /* pt_regs->r14 */
2225 ++ xorq %r14, %r14 /* nospec r14 */
2226 + pushq %r15 /* pt_regs->r15 */
2227 ++ xorq %r15, %r15 /* nospec r15 */
2228 + cld
2229 +
2230 + /*
2231 +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
2232 +index 09c26a4f139c..1c2558430cf0 100644
2233 +--- a/arch/x86/events/intel/core.c
2234 ++++ b/arch/x86/events/intel/core.c
2235 +@@ -3559,7 +3559,7 @@ static int intel_snb_pebs_broken(int cpu)
2236 + break;
2237 +
2238 + case INTEL_FAM6_SANDYBRIDGE_X:
2239 +- switch (cpu_data(cpu).x86_mask) {
2240 ++ switch (cpu_data(cpu).x86_stepping) {
2241 + case 6: rev = 0x618; break;
2242 + case 7: rev = 0x70c; break;
2243 + }
2244 +diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
2245 +index ae64d0b69729..cf372b90557e 100644
2246 +--- a/arch/x86/events/intel/lbr.c
2247 ++++ b/arch/x86/events/intel/lbr.c
2248 +@@ -1186,7 +1186,7 @@ void __init intel_pmu_lbr_init_atom(void)
2249 + * on PMU interrupt
2250 + */
2251 + if (boot_cpu_data.x86_model == 28
2252 +- && boot_cpu_data.x86_mask < 10) {
2253 ++ && boot_cpu_data.x86_stepping < 10) {
2254 + pr_cont("LBR disabled due to erratum");
2255 + return;
2256 + }
2257 +diff --git a/arch/x86/events/intel/p6.c b/arch/x86/events/intel/p6.c
2258 +index a5604c352930..408879b0c0d4 100644
2259 +--- a/arch/x86/events/intel/p6.c
2260 ++++ b/arch/x86/events/intel/p6.c
2261 +@@ -234,7 +234,7 @@ static __initconst const struct x86_pmu p6_pmu = {
2262 +
2263 + static __init void p6_pmu_rdpmc_quirk(void)
2264 + {
2265 +- if (boot_cpu_data.x86_mask < 9) {
2266 ++ if (boot_cpu_data.x86_stepping < 9) {
2267 + /*
2268 + * PPro erratum 26; fixed in stepping 9 and above.
2269 + */
2270 +diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
2271 +index 8d0ec9df1cbe..f077401869ee 100644
2272 +--- a/arch/x86/include/asm/acpi.h
2273 ++++ b/arch/x86/include/asm/acpi.h
2274 +@@ -94,7 +94,7 @@ static inline unsigned int acpi_processor_cstate_check(unsigned int max_cstate)
2275 + if (boot_cpu_data.x86 == 0x0F &&
2276 + boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
2277 + boot_cpu_data.x86_model <= 0x05 &&
2278 +- boot_cpu_data.x86_mask < 0x0A)
2279 ++ boot_cpu_data.x86_stepping < 0x0A)
2280 + return 1;
2281 + else if (boot_cpu_has(X86_BUG_AMD_APIC_C1E))
2282 + return 1;
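These x86_mask -> x86_stepping hunks are part of a tree-wide rename; the field holds the CPUID stepping, which the old name obscured. For reference, the value comes from CPUID leaf 1 (sketch using the <asm/processor.h> cpuid_eax() helper):

	unsigned int eax = cpuid_eax(1);
	unsigned int stepping = eax & 0xf;	/* CPUID(1).EAX bits 3:0 */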
2283 +diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
2284 +index 1e7c955b6303..4db77731e130 100644
2285 +--- a/arch/x86/include/asm/barrier.h
2286 ++++ b/arch/x86/include/asm/barrier.h
2287 +@@ -40,7 +40,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
2288 +
2289 + asm ("cmp %1,%2; sbb %0,%0;"
2290 + :"=r" (mask)
2291 +- :"r"(size),"r" (index)
2292 ++ :"g"(size),"r" (index)
2293 + :"cc");
2294 + return mask;
2295 + }
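Relaxing the size constraint from "r" to "g" lets the compiler feed cmp an immediate or memory operand instead of burning a register. The mask itself stays branchless: cmp sets CF exactly when index < size, and sbb of a register with itself then produces all-ones or all-zeroes. A caller-side sketch (not from this patch):

	/* clamp a user-controlled index before the dependent load */
	unsigned long mask = array_index_mask_nospec(idx, size);

	idx &= mask;		/* idx becomes 0 when idx >= size */
	val = array[idx];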
2296 +diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
2297 +index 34d99af43994..6804d6642767 100644
2298 +--- a/arch/x86/include/asm/bug.h
2299 ++++ b/arch/x86/include/asm/bug.h
2300 +@@ -5,23 +5,20 @@
2301 + #include <linux/stringify.h>
2302 +
2303 + /*
2304 +- * Since some emulators terminate on UD2, we cannot use it for WARN.
2305 +- * Since various instruction decoders disagree on the length of UD1,
2306 +- * we cannot use it either. So use UD0 for WARN.
2307 ++ * Even though some emulators terminate on UD2, we use it for WARN().
2308 + *
2309 +- * (binutils knows about "ud1" but {en,de}codes it as 2 bytes, whereas
2310 +- * our kernel decoder thinks it takes a ModRM byte, which seems consistent
2311 +- * with various things like the Intel SDM instruction encoding rules)
2312 ++ * Various instruction decoders/specs disagree on the encoding of
2313 ++ * UD0/UD1, so we avoid those.
2314 + */
2315 +
2316 +-#define ASM_UD0 ".byte 0x0f, 0xff"
2317 ++#define ASM_UD0 ".byte 0x0f, 0xff" /* + ModRM (for Intel) */
2318 + #define ASM_UD1 ".byte 0x0f, 0xb9" /* + ModRM */
2319 + #define ASM_UD2 ".byte 0x0f, 0x0b"
2320 +
2321 + #define INSN_UD0 0xff0f
2322 + #define INSN_UD2 0x0b0f
2323 +
2324 +-#define LEN_UD0 2
2325 ++#define LEN_UD2 2
2326 +
2327 + #ifdef CONFIG_GENERIC_BUG
2328 +
2329 +@@ -77,7 +74,11 @@ do { \
2330 + unreachable(); \
2331 + } while (0)
2332 +
2333 +-#define __WARN_FLAGS(flags) _BUG_FLAGS(ASM_UD0, BUGFLAG_WARNING|(flags))
2334 ++#define __WARN_FLAGS(flags) \
2335 ++do { \
2336 ++ _BUG_FLAGS(ASM_UD2, BUGFLAG_WARNING|(flags)); \
2337 ++ annotate_reachable(); \
2338 ++} while (0)
2339 +
2340 + #include <asm-generic/bug.h>
2341 +
2342 +diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
2343 +index 836ca1178a6a..69f16f0729d0 100644
2344 +--- a/arch/x86/include/asm/dma-mapping.h
2345 ++++ b/arch/x86/include/asm/dma-mapping.h
2346 +@@ -7,7 +7,6 @@
2347 + * Documentation/DMA-API.txt for documentation.
2348 + */
2349 +
2350 +-#include <linux/kmemcheck.h>
2351 + #include <linux/scatterlist.h>
2352 + #include <linux/dma-debug.h>
2353 + #include <asm/io.h>
2354 +diff --git a/arch/x86/include/asm/kmemcheck.h b/arch/x86/include/asm/kmemcheck.h
2355 +deleted file mode 100644
2356 +index 945a0337fbcf..000000000000
2357 +--- a/arch/x86/include/asm/kmemcheck.h
2358 ++++ /dev/null
2359 +@@ -1,43 +0,0 @@
2360 +-/* SPDX-License-Identifier: GPL-2.0 */
2361 +-#ifndef ASM_X86_KMEMCHECK_H
2362 +-#define ASM_X86_KMEMCHECK_H
2363 +-
2364 +-#include <linux/types.h>
2365 +-#include <asm/ptrace.h>
2366 +-
2367 +-#ifdef CONFIG_KMEMCHECK
2368 +-bool kmemcheck_active(struct pt_regs *regs);
2369 +-
2370 +-void kmemcheck_show(struct pt_regs *regs);
2371 +-void kmemcheck_hide(struct pt_regs *regs);
2372 +-
2373 +-bool kmemcheck_fault(struct pt_regs *regs,
2374 +- unsigned long address, unsigned long error_code);
2375 +-bool kmemcheck_trap(struct pt_regs *regs);
2376 +-#else
2377 +-static inline bool kmemcheck_active(struct pt_regs *regs)
2378 +-{
2379 +- return false;
2380 +-}
2381 +-
2382 +-static inline void kmemcheck_show(struct pt_regs *regs)
2383 +-{
2384 +-}
2385 +-
2386 +-static inline void kmemcheck_hide(struct pt_regs *regs)
2387 +-{
2388 +-}
2389 +-
2390 +-static inline bool kmemcheck_fault(struct pt_regs *regs,
2391 +- unsigned long address, unsigned long error_code)
2392 +-{
2393 +- return false;
2394 +-}
2395 +-
2396 +-static inline bool kmemcheck_trap(struct pt_regs *regs)
2397 +-{
2398 +- return false;
2399 +-}
2400 +-#endif /* CONFIG_KMEMCHECK */
2401 +-
2402 +-#endif
2403 +diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
2404 +index 4d57894635f2..76b058533e47 100644
2405 +--- a/arch/x86/include/asm/nospec-branch.h
2406 ++++ b/arch/x86/include/asm/nospec-branch.h
2407 +@@ -6,6 +6,7 @@
2408 + #include <asm/alternative.h>
2409 + #include <asm/alternative-asm.h>
2410 + #include <asm/cpufeatures.h>
2411 ++#include <asm/msr-index.h>
2412 +
2413 + #ifdef __ASSEMBLY__
2414 +
2415 +@@ -164,10 +165,15 @@ static inline void vmexit_fill_RSB(void)
2416 +
2417 + static inline void indirect_branch_prediction_barrier(void)
2418 + {
2419 +- alternative_input("",
2420 +- "call __ibp_barrier",
2421 +- X86_FEATURE_USE_IBPB,
2422 +- ASM_NO_INPUT_CLOBBER("eax", "ecx", "edx", "memory"));
2423 ++ asm volatile(ALTERNATIVE("",
2424 ++ "movl %[msr], %%ecx\n\t"
2425 ++ "movl %[val], %%eax\n\t"
2426 ++ "movl $0, %%edx\n\t"
2427 ++ "wrmsr",
2428 ++ X86_FEATURE_USE_IBPB)
2429 ++ : : [msr] "i" (MSR_IA32_PRED_CMD),
2430 ++ [val] "i" (PRED_CMD_IBPB)
2431 ++ : "eax", "ecx", "edx", "memory");
2432 + }
2433 +
2434 + #endif /* __ASSEMBLY__ */
2435 +diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
2436 +index 4baa6bceb232..d652a3808065 100644
2437 +--- a/arch/x86/include/asm/page_64.h
2438 ++++ b/arch/x86/include/asm/page_64.h
2439 +@@ -52,10 +52,6 @@ static inline void clear_page(void *page)
2440 +
2441 + void copy_page(void *to, void *from);
2442 +
2443 +-#ifdef CONFIG_X86_MCE
2444 +-#define arch_unmap_kpfn arch_unmap_kpfn
2445 +-#endif
2446 +-
2447 + #endif /* !__ASSEMBLY__ */
2448 +
2449 + #ifdef CONFIG_X86_VSYSCALL_EMULATION
2450 +diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
2451 +index 892df375b615..554841fab717 100644
2452 +--- a/arch/x86/include/asm/paravirt.h
2453 ++++ b/arch/x86/include/asm/paravirt.h
2454 +@@ -297,9 +297,9 @@ static inline void __flush_tlb_global(void)
2455 + {
2456 + PVOP_VCALL0(pv_mmu_ops.flush_tlb_kernel);
2457 + }
2458 +-static inline void __flush_tlb_single(unsigned long addr)
2459 ++static inline void __flush_tlb_one_user(unsigned long addr)
2460 + {
2461 +- PVOP_VCALL1(pv_mmu_ops.flush_tlb_single, addr);
2462 ++ PVOP_VCALL1(pv_mmu_ops.flush_tlb_one_user, addr);
2463 + }
2464 +
2465 + static inline void flush_tlb_others(const struct cpumask *cpumask,
2466 +diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
2467 +index 6ec54d01972d..f624f1f10316 100644
2468 +--- a/arch/x86/include/asm/paravirt_types.h
2469 ++++ b/arch/x86/include/asm/paravirt_types.h
2470 +@@ -217,7 +217,7 @@ struct pv_mmu_ops {
2471 + /* TLB operations */
2472 + void (*flush_tlb_user)(void);
2473 + void (*flush_tlb_kernel)(void);
2474 +- void (*flush_tlb_single)(unsigned long addr);
2475 ++ void (*flush_tlb_one_user)(unsigned long addr);
2476 + void (*flush_tlb_others)(const struct cpumask *cpus,
2477 + const struct flush_tlb_info *info);
2478 +
2479 +diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
2480 +index 211368922cad..8b8f1f14a0bf 100644
2481 +--- a/arch/x86/include/asm/pgtable.h
2482 ++++ b/arch/x86/include/asm/pgtable.h
2483 +@@ -668,11 +668,6 @@ static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
2484 + return false;
2485 + }
2486 +
2487 +-static inline int pte_hidden(pte_t pte)
2488 +-{
2489 +- return pte_flags(pte) & _PAGE_HIDDEN;
2490 +-}
2491 +-
2492 + static inline int pmd_present(pmd_t pmd)
2493 + {
2494 + /*
2495 +diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
2496 +index e67c0620aec2..e55466760ff8 100644
2497 +--- a/arch/x86/include/asm/pgtable_32.h
2498 ++++ b/arch/x86/include/asm/pgtable_32.h
2499 +@@ -61,7 +61,7 @@ void paging_init(void);
2500 + #define kpte_clear_flush(ptep, vaddr) \
2501 + do { \
2502 + pte_clear(&init_mm, (vaddr), (ptep)); \
2503 +- __flush_tlb_one((vaddr)); \
2504 ++ __flush_tlb_one_kernel((vaddr)); \
2505 + } while (0)
2506 +
2507 + #endif /* !__ASSEMBLY__ */
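The rename splits the old single-page flush by intent: __flush_tlb_one_user() for user-space addresses, __flush_tlb_one_kernel() for kernel ones, as in the kpte_clear_flush() hunk above. A sketch of how a caller would pick between them (flush_one_addr is a hypothetical wrapper, not from this patch):

static inline void flush_one_addr(unsigned long addr, bool user)
{
	if (user)
		__flush_tlb_one_user(addr);	/* may also affect the PTI user copy */
	else
		__flush_tlb_one_kernel(addr);	/* kernel mapping only */
}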
2508 +diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
2509 +index 9e9b05fc4860..3696398a9475 100644
2510 +--- a/arch/x86/include/asm/pgtable_types.h
2511 ++++ b/arch/x86/include/asm/pgtable_types.h
2512 +@@ -32,7 +32,6 @@
2513 +
2514 + #define _PAGE_BIT_SPECIAL _PAGE_BIT_SOFTW1
2515 + #define _PAGE_BIT_CPA_TEST _PAGE_BIT_SOFTW1
2516 +-#define _PAGE_BIT_HIDDEN _PAGE_BIT_SOFTW3 /* hidden by kmemcheck */
2517 + #define _PAGE_BIT_SOFT_DIRTY _PAGE_BIT_SOFTW3 /* software dirty tracking */
2518 + #define _PAGE_BIT_DEVMAP _PAGE_BIT_SOFTW4
2519 +
2520 +@@ -79,18 +78,6 @@
2521 + #define _PAGE_KNL_ERRATUM_MASK 0
2522 + #endif
2523 +
2524 +-#ifdef CONFIG_KMEMCHECK
2525 +-#define _PAGE_HIDDEN (_AT(pteval_t, 1) << _PAGE_BIT_HIDDEN)
2526 +-#else
2527 +-#define _PAGE_HIDDEN (_AT(pteval_t, 0))
2528 +-#endif
2529 +-
2530 +-/*
2531 +- * The same hidden bit is used by kmemcheck, but since kmemcheck
2532 +- * works on kernel pages while soft-dirty engine on user space,
2533 +- * they do not conflict with each other.
2534 +- */
2535 +-
2536 + #ifdef CONFIG_MEM_SOFT_DIRTY
2537 + #define _PAGE_SOFT_DIRTY (_AT(pteval_t, 1) << _PAGE_BIT_SOFT_DIRTY)
2538 + #else
2539 +diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
2540 +index c57c6e77c29f..15fc074bd628 100644
2541 +--- a/arch/x86/include/asm/processor.h
2542 ++++ b/arch/x86/include/asm/processor.h