
From: Alice Ferrazzi <alicef@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
Date: Fri, 05 Jan 2018 15:41:11
Message-Id: 1515166853.8348d8b68abd9b60bb2e42c5e7d8ec894d786ba3.alicef@gentoo
commit: 8348d8b68abd9b60bb2e42c5e7d8ec894d786ba3
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 5 15:40:53 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Jan 5 15:40:53 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8348d8b6

Removed upstreamed patch: x86 page table isolation fixes

 0000_README | 3 -
 1700_x86-page-table-isolation-fixes.patch | 453 ------------------------------
 2 files changed, 456 deletions(-)

diff --git a/0000_README b/0000_README
index a10ea98..e98e570 100644
--- a/0000_README
+++ b/0000_README
@@ -99,9 +99,6 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
 From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc: Enable link security restrictions by default.
 
-Patch: 1700_x86-page-table-isolation-fixes.patch
-From: https://github.com/torvalds/linux/commit/00a5ae218d57741088068799b810416ac249a9ce
-Desc: x86 page table isolation fixes cumulative patch.
 
 Patch: 2100_bcache-data-corruption-fix-for-bi-partno.patch
 From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62530ed8b1d07a45dec94d46e521c0c6c2d476e6

diff --git a/1700_x86-page-table-isolation-fixes.patch b/1700_x86-page-table-isolation-fixes.patch
deleted file mode 100644
index 6fcbf41..0000000
--- a/1700_x86-page-table-isolation-fixes.patch
+++ /dev/null
@@ -1,453 +0,0 @@
-From 87faa0d9b43b4755ff6963a22d1fd1bee1aa3b39 Mon Sep 17 00:00:00 2001
-From: Thomas Gleixner <tglx@××××××××××.de>
-Date: Wed, 3 Jan 2018 15:18:44 +0100
-Subject: [PATCH 1/7] x86/pti: Enable PTI by default
-
-This really wants to be enabled by default. Users who know what they are
-doing can disable it either in the config or on the kernel command line.
-
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: stable@×××××××××××.org
----
- security/Kconfig | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/security/Kconfig b/security/Kconfig
-index a623d13bf2884..3d4debd0257e2 100644
---- a/security/Kconfig
-+++ b/security/Kconfig
-@@ -56,6 +56,7 @@ config SECURITY_NETWORK
-
- config PAGE_TABLE_ISOLATION
- bool "Remove the kernel mapping in user mode"
-+ default y
- depends on X86_64 && !UML
- help
- This feature reduces the number of hardware side channels by
-
-From 694d99d40972f12e59a3696effee8a376b79d7c8 Mon Sep 17 00:00:00 2001
-From: Tom Lendacky <thomas.lendacky@×××.com>
-Date: Tue, 26 Dec 2017 23:43:54 -0600
-Subject: [PATCH 2/7] x86/cpu, x86/pti: Do not enable PTI on AMD processors
-
-AMD processors are not subject to the types of attacks that the kernel
-page table isolation feature protects against. The AMD microarchitecture
-does not allow memory references, including speculative references, that
-access higher privileged data when running in a lesser privileged mode
-when that access would result in a page fault.
-
-Disable page table isolation by default on AMD processors by not setting
-the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI
-is set.
-
-Signed-off-by: Tom Lendacky <thomas.lendacky@×××.com>
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Reviewed-by: Borislav Petkov <bp@××××.de>
-Cc: Dave Hansen <dave.hansen@×××××××××××.com>
-Cc: Andy Lutomirski <luto@××××××.org>
-Cc: stable@×××××××××××.org
-Link: https://lkml.kernel.org/r/20171227054354.20369.94587.stgit@×××××××××××××××××××××.net
----
- arch/x86/kernel/cpu/common.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
-index f2a94dfb434e9..b1be494ab4e8b 100644
---- a/arch/x86/kernel/cpu/common.c
-+++ b/arch/x86/kernel/cpu/common.c
-@@ -899,8 +899,8 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
-
- setup_force_cpu_cap(X86_FEATURE_ALWAYS);
-
-- /* Assume for now that ALL x86 CPUs are insecure */
-- setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
-+ if (c->x86_vendor != X86_VENDOR_AMD)
-+ setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
-
- fpu__init_system(c);
-
-
-From 52994c256df36fda9a715697431cba9daecb6b11 Mon Sep 17 00:00:00 2001
-From: Thomas Gleixner <tglx@××××××××××.de>
-Date: Wed, 3 Jan 2018 15:57:59 +0100
-Subject: [PATCH 3/7] x86/pti: Make sure the user/kernel PTEs match
-
-Meelis reported that his K8 Athlon64 emits MCE warnings when PTI is
-enabled:
-
-[Hardware Error]: Error Addr: 0x0000ffff81e000e0
-[Hardware Error]: MC1 Error: L1 TLB multimatch.
-[Hardware Error]: cache level: L1, tx: INSN
-
-The address is in the entry area, which is mapped into kernel _AND_ user
-space. That's special because we switch CR3 while we are executing
-there.
-
-User mapping:
-0xffffffff81e00000-0xffffffff82000000 2M ro PSE GLB x pmd
-
-Kernel mapping:
-0xffffffff81000000-0xffffffff82000000 16M ro PSE x pmd
-
-So the K8 is complaining that the TLB entries differ. They differ in the
-GLB bit.
-
-Drop the GLB bit when installing the user shared mapping.
-
-Fixes: 6dc72c3cbca0 ("x86/mm/pti: Share entry text PMD")
-Reported-by: Meelis Roos <mroos@×××××.ee>
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Tested-by: Meelis Roos <mroos@×××××.ee>
-Cc: Borislav Petkov <bp@××××××.de>
-Cc: Tom Lendacky <thomas.lendacky@×××.com>
-Cc: stable@×××××××××××.org
-Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031407180.1957@nanos
----
- arch/x86/mm/pti.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
-index bce8aea656062..2da28ba975082 100644
---- a/arch/x86/mm/pti.c
-+++ b/arch/x86/mm/pti.c
-@@ -367,7 +367,8 @@ static void __init pti_setup_espfix64(void)
- static void __init pti_clone_entry_text(void)
- {
- pti_clone_pmds((unsigned long) __entry_text_start,
-- (unsigned long) __irqentry_text_end, _PAGE_RW);
-+ (unsigned long) __irqentry_text_end,
-+ _PAGE_RW | _PAGE_GLOBAL);
- }
-
- /*
-
-From a9cdbe72c4e8bf3b38781c317a79326e2e1a230d Mon Sep 17 00:00:00 2001
-From: Josh Poimboeuf <jpoimboe@××××××.com>
-Date: Sun, 31 Dec 2017 10:18:06 -0600
-Subject: [PATCH 4/7] x86/dumpstack: Fix partial register dumps
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-The show_regs_safe() logic is wrong. When there's an iret stack frame,
-it prints the entire pt_regs -- most of which is random stack data --
-instead of just the five registers at the end.
-
-show_regs_safe() is also poorly named: the on_stack() checks aren't for
-safety. Rename the function to show_regs_if_on_stack() and add a
-comment to explain why the checks are needed.
-
-These issues were introduced with the "partial register dump" feature of
-the following commit:
-
- b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully")
-
-That patch had gone through a few iterations of development, and the
-above issues were artifacts from a previous iteration of the patch where
-'regs' pointed directly to the iret frame rather than to the (partially
-empty) pt_regs.
-
-Tested-by: Alexander Tsoy <alexander@××××.me>
-Signed-off-by: Josh Poimboeuf <jpoimboe@××××××.com>
-Cc: Andy Lutomirski <luto@××××××.org>
-Cc: Linus Torvalds <torvalds@××××××××××××××××.org>
-Cc: Peter Zijlstra <peterz@×××××××××.org>
-Cc: Thomas Gleixner <tglx@××××××××××.de>
-Cc: Toralf Förster <toralf.foerster@×××.de>
-Cc: stable@×××××××××××.org
-Fixes: b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully")
-Link: http://lkml.kernel.org/r/5b05b8b344f59db2d3d50dbdeba92d60f2304c54.1514736742.git.jpoimboe@××××××.com
-Signed-off-by: Ingo Molnar <mingo@××××××.org>
----
- arch/x86/include/asm/unwind.h | 17 +++++++++++++----
- arch/x86/kernel/dumpstack.c | 28 ++++++++++++++++++++--------
- arch/x86/kernel/stacktrace.c | 2 +-
- 3 files changed, 34 insertions(+), 13 deletions(-)
-
-diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
-index c1688c2d0a128..1f86e1b0a5cdc 100644
---- a/arch/x86/include/asm/unwind.h
-+++ b/arch/x86/include/asm/unwind.h
-@@ -56,18 +56,27 @@ void unwind_start(struct unwind_state *state, struct task_struct *task,
-
- #if defined(CONFIG_UNWINDER_ORC) || defined(CONFIG_UNWINDER_FRAME_POINTER)
- /*
-- * WARNING: The entire pt_regs may not be safe to dereference. In some cases,
-- * only the iret frame registers are accessible. Use with caution!
-+ * If 'partial' returns true, only the iret frame registers are valid.
- */
--static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state)
-+static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state,
-+ bool *partial)
- {
- if (unwind_done(state))
- return NULL;
-
-+ if (partial) {
-+#ifdef CONFIG_UNWINDER_ORC
-+ *partial = !state->full_regs;
-+#else
-+ *partial = false;
-+#endif
-+ }
-+
- return state->regs;
- }
- #else
--static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state)
-+static inline struct pt_regs *unwind_get_entry_regs(struct unwind_state *state,
-+ bool *partial)
- {
- return NULL;
- }
-diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
-index 5fa110699ed27..d0bb176a7261a 100644
---- a/arch/x86/kernel/dumpstack.c
-+++ b/arch/x86/kernel/dumpstack.c
-@@ -76,12 +76,23 @@ void show_iret_regs(struct pt_regs *regs)
- regs->sp, regs->flags);
- }
-
--static void show_regs_safe(struct stack_info *info, struct pt_regs *regs)
-+static void show_regs_if_on_stack(struct stack_info *info, struct pt_regs *regs,
-+ bool partial)
- {
-- if (on_stack(info, regs, sizeof(*regs)))
-+ /*
-+ * These on_stack() checks aren't strictly necessary: the unwind code
-+ * has already validated the 'regs' pointer. The checks are done for
-+ * ordering reasons: if the registers are on the next stack, we don't
-+ * want to print them out yet. Otherwise they'll be shown as part of
-+ * the wrong stack. Later, when show_trace_log_lvl() switches to the
-+ * next stack, this function will be called again with the same regs so
-+ * they can be printed in the right context.
-+ */
-+ if (!partial && on_stack(info, regs, sizeof(*regs))) {
- __show_regs(regs, 0);
-- else if (on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
-- IRET_FRAME_SIZE)) {
-+
-+ } else if (partial && on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
-+ IRET_FRAME_SIZE)) {
- /*
- * When an interrupt or exception occurs in entry code, the
- * full pt_regs might not have been saved yet. In that case
-@@ -98,6 +109,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
- struct stack_info stack_info = {0};
- unsigned long visit_mask = 0;
- int graph_idx = 0;
-+ bool partial;
-
- printk("%sCall Trace:\n", log_lvl);
-
-@@ -140,7 +152,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
- printk("%s <%s>\n", log_lvl, stack_name);
-
- if (regs)
-- show_regs_safe(&stack_info, regs);
-+ show_regs_if_on_stack(&stack_info, regs, partial);
-
- /*
- * Scan the stack, printing any text addresses we find. At the
-@@ -164,7 +176,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
-
- /*
- * Don't print regs->ip again if it was already printed
-- * by show_regs_safe() below.
-+ * by show_regs_if_on_stack().
- */
- if (regs && stack == &regs->ip)
- goto next;
-@@ -199,9 +211,9 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
- unwind_next_frame(&state);
-
- /* if the frame has entry regs, print them */
-- regs = unwind_get_entry_regs(&state);
-+ regs = unwind_get_entry_regs(&state, &partial);
- if (regs)
-- show_regs_safe(&stack_info, regs);
-+ show_regs_if_on_stack(&stack_info, regs, partial);
- }
-
- if (stack_name)
-diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
-index 8dabd7bf16730..60244bfaf88f6 100644
---- a/arch/x86/kernel/stacktrace.c
-+++ b/arch/x86/kernel/stacktrace.c
-@@ -98,7 +98,7 @@ static int __save_stack_trace_reliable(struct stack_trace *trace,
- for (unwind_start(&state, task, NULL, NULL); !unwind_done(&state);
- unwind_next_frame(&state)) {
-
-- regs = unwind_get_entry_regs(&state);
-+ regs = unwind_get_entry_regs(&state, NULL);
- if (regs) {
- /*
- * Kernel mode registers on the stack indicate an
-
-From 3ffdeb1a02be3086f1411a15c5b9c481fa28e21f Mon Sep 17 00:00:00 2001
-From: Josh Poimboeuf <jpoimboe@××××××.com>
-Date: Sun, 31 Dec 2017 10:18:07 -0600
-Subject: [PATCH 5/7] x86/dumpstack: Print registers for first stack frame
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-In the stack dump code, if the frame after the starting pt_regs is also
-a regs frame, the registers don't get printed. Fix that.
-
-Reported-by: Andy Lutomirski <luto@××××××××××.net>
-Tested-by: Alexander Tsoy <alexander@××××.me>
-Signed-off-by: Josh Poimboeuf <jpoimboe@××××××.com>
-Cc: Andy Lutomirski <luto@××××××.org>
-Cc: Linus Torvalds <torvalds@××××××××××××××××.org>
-Cc: Peter Zijlstra <peterz@×××××××××.org>
-Cc: Thomas Gleixner <tglx@××××××××××.de>
-Cc: Toralf Förster <toralf.foerster@×××.de>
-Cc: stable@×××××××××××.org
-Fixes: 3b3fa11bc700 ("x86/dumpstack: Print any pt_regs found on the stack")
-Link: http://lkml.kernel.org/r/396f84491d2f0ef64eda4217a2165f5712f6a115.1514736742.git.jpoimboe@××××××.com
-Signed-off-by: Ingo Molnar <mingo@××××××.org>
----
- arch/x86/kernel/dumpstack.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
-index d0bb176a7261a..afbecff161d16 100644
---- a/arch/x86/kernel/dumpstack.c
-+++ b/arch/x86/kernel/dumpstack.c
-@@ -115,6 +115,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
-
- unwind_start(&state, task, regs, stack);
- stack = stack ? : get_stack_pointer(task, regs);
-+ regs = unwind_get_entry_regs(&state, &partial);
-
- /*
- * Iterate through the stacks, starting with the current stack pointer.
-@@ -132,7 +133,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
- * - hardirq stack
- * - entry stack
- */
-- for (regs = NULL; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
-+ for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
- const char *stack_name;
-
- if (get_stack_info(stack, task, &stack_info, &visit_mask)) {
-
-From d7732ba55c4b6a2da339bb12589c515830cfac2c Mon Sep 17 00:00:00 2001
-From: Thomas Gleixner <tglx@××××××××××.de>
-Date: Wed, 3 Jan 2018 19:52:04 +0100
-Subject: [PATCH 6/7] x86/pti: Switch to kernel CR3 at early in
- entry_SYSCALL_compat()
-
-The preparation for PTI which added CR3 switching to the entry code
-misplaced the CR3 switch in entry_SYSCALL_compat().
-
-With PTI enabled the entry code tries to access a per cpu variable after
-switching to kernel GS. This fails because that variable is not mapped to
-user space. This results in a double fault and in the worst case a kernel
-crash.
-
-Move the switch ahead of the access and clobber RSP which has been saved
-already.
-
-Fixes: 8a09317b895f ("x86/mm/pti: Prepare the x86/entry assembly code for entry/exit CR3 switching")
-Reported-by: Lars Wendler <wendler.lars@×××.de>
-Reported-by: Laura Abbott <labbott@××××××.com>
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: Borislav Petkov <bp@××××××.de>
-Cc: Andy Lutomirski <luto@××××××.org>,
-Cc: Dave Hansen <dave.hansen@×××××××××××.com>,
-Cc: Peter Zijlstra <peterz@×××××××××.org>,
-Cc: Greg KH <gregkh@×××××××××××××××.org>,
-Cc: Boris Ostrovsky <boris.ostrovsky@××××××.com>,
-Cc: Juergen Gross <jgross@××××.com>
-Cc: stable@×××××××××××.org
-Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031949200.1957@nanos
----
- arch/x86/entry/entry_64_compat.S | 13 ++++++-------
- 1 file changed, 6 insertions(+), 7 deletions(-)
-
-diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
-index 40f17009ec20c..98d5358e4041a 100644
---- a/arch/x86/entry/entry_64_compat.S
-+++ b/arch/x86/entry/entry_64_compat.S
-@@ -190,8 +190,13 @@ ENTRY(entry_SYSCALL_compat)
- /* Interrupts are off on entry. */
- swapgs
-
-- /* Stash user ESP and switch to the kernel stack. */
-+ /* Stash user ESP */
- movl %esp, %r8d
-+
-+ /* Use %rsp as scratch reg. User ESP is stashed in r8 */
-+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
-+
-+ /* Switch to the kernel stack */
- movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
- /* Construct struct pt_regs on stack */
-@@ -219,12 +224,6 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
- pushq $0 /* pt_regs->r14 = 0 */
- pushq $0 /* pt_regs->r15 = 0 */
-
-- /*
-- * We just saved %rdi so it is safe to clobber. It is not
-- * preserved during the C calls inside TRACE_IRQS_OFF anyway.
-- */
-- SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
--
- /*
- * User mode is traced as though IRQs are on, and SYSENTER
- * turned them off.
-
-From 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb Mon Sep 17 00:00:00 2001
-From: Nick Desaulniers <ndesaulniers@××××××.com>
-Date: Wed, 3 Jan 2018 12:39:52 -0800
-Subject: [PATCH 7/7] x86/process: Define cpu_tss_rw in same section as
- declaration
-
-cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED
-but then defined with DEFINE_PER_CPU_SHARED_ALIGNED
-leading to section mismatch warnings.
-
-Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because
-it's mapped to the cpu entry area and must be page aligned.
-
-[ tglx: Massaged changelog a bit ]
-
-Fixes: 1a935bc3d4ea ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
-Suggested-by: Thomas Gleixner <tglx@××××××××××.de>
-Signed-off-by: Nick Desaulniers <ndesaulniers@××××××.com>
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: thomas.lendacky@×××.com
-Cc: Borislav Petkov <bpetkov@××××.de>
-Cc: tklauser@×××××××.ch
-Cc: minipli@××××××××××.com
-Cc: me@××××××××.com
-Cc: namit@××××××.com
-Cc: luto@××××××.org
-Cc: jpoimboe@××××××.com
-Cc: tj@××××××.org
-Cc: cl@×××××.com
-Cc: bp@××××.de
-Cc: thgarnie@××××××.com
-Cc: kirill.shutemov@×××××××××××.com
-Cc: stable@×××××××××××.org
-Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@××××××.com
----
- arch/x86/kernel/process.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
-index 5174159784093..3cb2486c47e48 100644
---- a/arch/x86/kernel/process.c
-+++ b/arch/x86/kernel/process.c
-@@ -47,7 +47,7 @@
- * section. Since TSS's are completely CPU-local, we want them
- * on exact cacheline boundaries, to eliminate cacheline ping-pong.
- */
---__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss_rw) = {
-+__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
- .x86_tss = {
- /*
- * .sp0 is only used when entering ring 0 from a lower