Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:3.4 commit in: /
Date: Mon, 02 Feb 2015 23:32:52
Message-Id: 1422919279.a1d2f4883e1b6a2e2ca1849e6c9e1efd51c287d0.mpagano@gentoo
1 commit: a1d2f4883e1b6a2e2ca1849e6c9e1efd51c287d0
2 Author: Mike Pagano <mpagano@gentoo.org>
3 AuthorDate: Mon Feb 2 23:21:19 2015 +0000
4 Commit: Mike Pagano <mpagano@gentoo.org>
5 CommitDate: Mon Feb 2 23:21:19 2015 +0000
6 URL: http://sources.gentoo.org/gitweb/?p=proj/linux-patches.git;a=commit;h=a1d2f488
7
8 Linux patch 3.4.106
9
10 ---
11 0000_README | 4 +
12 1105_linux-3.4.106.patch | 6748 ++++++++++++++++++++++++++++++++++++++++++++++
13 2 files changed, 6752 insertions(+)
14
15 diff --git a/0000_README b/0000_README
16 index 5ff1506..ba104e4 100644
17 --- a/0000_README
18 +++ b/0000_README
19 @@ -459,6 +459,10 @@ Patch: 1104_linux-3.4.105.patch
20 From: http://www.kernel.org
21 Desc: Linux 3.4.105
22
23 +Patch: 1105_linux-3.4.106.patch
24 +From: http://www.kernel.org
25 +Desc: Linux 3.4.106
26 +
27 Patch: 1500_XATTR_USER_PREFIX.patch
28 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
29 Desc: Support for namespace user.pax.* on tmpfs.
30
31 diff --git a/1105_linux-3.4.106.patch b/1105_linux-3.4.106.patch
32 new file mode 100644
33 index 0000000..61b5c3c
34 --- /dev/null
35 +++ b/1105_linux-3.4.106.patch
36 @@ -0,0 +1,6748 @@
37 +diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt
38 +new file mode 100644
39 +index 000000000000..ea45dd3901e3
40 +--- /dev/null
41 ++++ b/Documentation/lzo.txt
42 +@@ -0,0 +1,164 @@
43 ++
44 ++LZO stream format as understood by Linux's LZO decompressor
45 ++===========================================================
46 ++
47 ++Introduction
48 ++
49 ++ This is not a specification. No specification seems to be publicly available
50 ++ for the LZO stream format. This document describes what input format the LZO
51 ++ decompressor as implemented in the Linux kernel understands. The file subject
52 ++ of this analysis is lib/lzo/lzo1x_decompress_safe.c. No analysis was made of
53 ++ the compressor nor of any other implementations, though it seems likely that
54 ++ the format matches the standard one. The purpose of this document is to
55 ++ better understand what the code does in order to propose more efficient fixes
56 ++ for future bug reports.
57 ++
58 ++Description
59 ++
60 ++ The stream is composed of a series of instructions, operands, and data. The
61 ++ instructions consist of a few bits representing an opcode, and bits forming
62 ++ the operands for the instruction, whose size and position depend on the
63 ++ opcode and on the number of literals copied by the previous instruction. The
64 ++ operands are used to indicate :
65 ++
66 ++ - a distance when copying data from the dictionary (past output buffer)
67 ++ - a length (number of bytes to copy from dictionary)
68 ++ - the number of literals to copy, which is retained in variable "state"
69 ++ as a piece of information for next instructions.
70 ++
71 ++ Optionally depending on the opcode and operands, extra data may follow. These
72 ++ extra data can be a complement for the operand (eg: a length or a distance
73 ++ encoded on larger values), or a literal to be copied to the output buffer.
74 ++
75 ++ The first byte of the block follows a different encoding from other bytes; it
76 ++ seems to be optimized for literal use only, since there is no dictionary yet
77 ++ prior to that byte.
78 ++
79 ++ Lengths are always encoded on a variable size starting with a small number
80 ++ of bits in the operand. If the number of bits isn't enough to represent the
81 ++ length, up to 255 may be added in increments by consuming more bytes with a
82 ++ rate of at most 255 per extra byte (thus the compression ratio cannot exceed
83 ++ around 255:1). The variable length encoding using #bits is always the same :
84 ++
85 ++ length = byte & ((1 << #bits) - 1)
86 ++ if (!length) {
87 ++ length = ((1 << #bits) - 1)
88 ++ length += 255*(number of zero bytes)
89 ++ length += first-non-zero-byte
90 ++ }
91 ++ length += constant (generally 2 or 3)
92 ++
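
   For illustration, the length decoding above translates into the following C
   sketch (an editorial aside, not part of the patch; get_byte() is a
   hypothetical helper that returns the next input byte):

      extern unsigned int get_byte(void);  /* hypothetical stream read */

      /* Decode a variable-length length field: `byte` is the opcode byte,
       * `bits` the number of length bits it carries, `constant` the final
       * offset (generally 2 or 3, as noted above). */
      static unsigned int decode_length(unsigned int byte, unsigned int bits,
                                        unsigned int constant)
      {
              unsigned int length = byte & ((1u << bits) - 1);

              if (!length) {
                      length = (1u << bits) - 1;
                      while ((byte = get_byte()) == 0)
                              length += 255;   /* each zero byte adds 255 */
                      length += byte;          /* first non-zero byte */
              }
              return length + constant;
      }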
93 ++ For references to the dictionary, distances are relative to the output
94 ++ pointer. Distances are encoded using very few bits belonging to certain
95 ++ ranges, resulting in multiple copy instructions using different encodings.
96 ++ Certain encodings involve one extra byte, others involve two extra bytes
97 ++ forming a little-endian 16-bit quantity (marked LE16 below).
98 ++
99 ++ After any instruction except the large literal copy, 0, 1, 2 or 3 literals
100 ++ are copied before starting the next instruction. The number of literals that
101 ++ were copied may change the meaning and behaviour of the next instruction. In
102 ++ practice, only one instruction needs to know whether 0, less than 4, or more
103 ++ literals were copied. This is the information stored in the <state> variable
104 ++ in this implementation. This number of immediate literals to be copied is
105 ++ generally encoded in the last two bits of the instruction but may also be
106 ++ taken from the last two bits of an extra operand (eg: distance).
107 ++
108 ++ End of stream is declared when a block copy of distance 0 is seen. Only one
109 ++ instruction may encode this distance (0001HLLL); it takes one LE16 operand
110 ++ for the distance, thus requiring 3 bytes.
111 ++
112 ++ IMPORTANT NOTE : in the code some length checks are missing because certain
113 ++ instructions are called under the assumption that a certain number of bytes
114 ++ follow because it has already been guaranteed before parsing the instructions.
115 ++ They just have to "refill" this credit if they consume extra bytes. This is
116 ++ an implementation design choice independent of the algorithm or encoding.
117 ++
118 ++Byte sequences
119 ++
120 ++ First byte encoding :
121 ++
122 ++ 0..17 : follow regular instruction encoding, see below. It is worth
123 ++ noting that codes 16 and 17 will represent a block copy from
124 ++ the dictionary, which is empty, and that they will always be
125 ++ invalid at this place.
126 ++
127 ++ 18..21 : copy 0..3 literals
128 ++ state = (byte - 17) = 0..3 [ copy <state> literals ]
129 ++ skip byte
130 ++
131 ++ 22..255 : copy literal string
132 ++ length = (byte - 17) = 4..238
133 ++ state = 4 [ don't copy extra literals ]
134 ++ skip byte
135 ++
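
   Reduced to C, the three first-byte cases look like this (a sketch, not part
   of the patch; copy_literals() and get_byte() are hypothetical):

      unsigned int byte = get_byte();

      if (byte >= 22) {                  /* 22..255: literal run */
              copy_literals(byte - 17);  /* 4..238 literals */
              state = 4;                 /* no extra literals afterwards */
      } else if (byte >= 18) {           /* 18..21 */
              state = byte - 17;         /* 0..3 */
              copy_literals(state);
      } else {
              /* 0..17: decode as a regular instruction (see below);
               * codes 16 and 17 would copy from the still-empty
               * dictionary and are therefore invalid here. */
      }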
136 ++ Instruction encoding :
137 ++
138 ++ 0 0 0 0 X X X X (0..15)
139 ++ Depends on the number of literals copied by the last instruction.
140 ++ If the last instruction did not copy any literal (state == 0), this
141 ++ encoding will be a copy of 4 or more literals, and must be interpreted
142 ++ like this :
143 ++
144 ++ 0 0 0 0 L L L L (0..15) : copy long literal string
145 ++ length = 3 + (L ?: 15 + (zero_bytes * 255) + non_zero_byte)
146 ++ state = 4 (no extra literals are copied)
147 ++
148 ++ If the last instruction copied between 1 and 3 literals (encoded in
149 ++ the instruction's opcode or distance), the instruction is a copy of a
150 ++ 2-byte block from the dictionary within a 1kB distance. It is worth
151 ++ noting that this instruction provides little savings since it uses 2
152 ++ bytes to encode a copy of 2 other bytes but it encodes the number of
153 ++ following literals for free. It must be interpreted like this :
154 ++
155 ++ 0 0 0 0 D D S S (0..15) : copy 2 bytes from <= 1kB distance
156 ++ length = 2
157 ++ state = S (copy S literals after this block)
158 ++ Always followed by exactly one byte : H H H H H H H H
159 ++ distance = (H << 2) + D + 1
160 ++
161 ++ If the last instruction copied 4 or more literals (as detected by
162 ++ state == 4), the instruction becomes a copy of a 3-byte block from the
163 ++ dictionary from a 2..3kB distance, and must be interpreted like this :
164 ++
165 ++ 0 0 0 0 D D S S (0..15) : copy 3 bytes from 2..3 kB distance
166 ++ length = 3
167 ++ state = S (copy S literals after this block)
168 ++ Always followed by exactly one byte : H H H H H H H H
169 ++ distance = (H << 2) + D + 2049
170 ++
171 ++ 0 0 0 1 H L L L (16..31)
172 ++ Copy of a block within 16..48kB distance (preferably less than 10B)
173 ++ length = 2 + (L ?: 7 + (zero_bytes * 255) + non_zero_byte)
174 ++ Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S
175 ++ distance = 16384 + (H << 14) + D
176 ++ state = S (copy S literals after this block)
177 ++ End of stream is reached if distance == 16384
178 ++
179 ++ 0 0 1 L L L L L (32..63)
180 ++ Copy of small block within 16kB distance (preferably less than 34B)
181 ++ length = 2 + (L ?: 31 + (zero_bytes * 255) + non_zero_byte)
182 ++ Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S
183 ++ distance = D + 1
184 ++ state = S (copy S literals after this block)
185 ++
186 ++ 0 1 L D D D S S (64..127)
187 ++ Copy 3-4 bytes from block within 2kB distance
188 ++ state = S (copy S literals after this block)
189 ++ length = 3 + L
190 ++ Always followed by exactly one byte : H H H H H H H H
191 ++ distance = (H << 3) + D + 1
192 ++
193 ++ 1 L L D D D S S (128..255)
194 ++ Copy 5-8 bytes from block within 2kB distance
195 ++ state = S (copy S literals after this block)
196 ++ length = 5 + L
197 ++ Always followed by exactly one byte : H H H H H H H H
198 ++ distance = (H << 3) + D + 1
199 ++
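
   Taken together, the ranges above amount to a simple dispatch on the opcode
   byte; condensed into C (a sketch, not part of the patch, with hypothetical
   handler names):

      void lzo_dispatch(unsigned int op, unsigned int state)
      {
              if (op < 16) {                     /* 0000XXXX */
                      if (state == 0)
                              copy_long_literal_run(op);  /* 0000LLLL */
                      else if (state == 4)
                              copy3_from_2_3kb(op);       /* 0000DDSS + H */
                      else                                /* 1..3 literals */
                              copy2_within_1kb(op);       /* 0000DDSS + H */
              } else if (op < 32) {              /* 0001HLLL + LE16 */
                      copy_within_16_48kb(op);   /* distance 16384 ends stream */
              } else if (op < 64) {              /* 001LLLLL + LE16 */
                      copy_within_16kb(op);
              } else if (op < 128) {             /* 01LDDDSS + one byte */
                      copy_3_4_within_2kb(op);
              } else {                           /* 1LLDDDSS + one byte */
                      copy_5_8_within_2kb(op);
              }
      }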
200 ++Authors
201 ++
202 ++ This document was written by Willy Tarreau <w@×××.eu> on 2014/07/19 during an
203 ++ analysis of the decompression code available in Linux 3.16-rc5. The code is
204 ++ tricky; it is possible that this document contains mistakes or that a few
205 ++ corner cases were overlooked. In any case, please report any doubt, fix, or
206 ++ proposed updates to the author(s) so that the document can be updated.
207 +diff --git a/Makefile b/Makefile
208 +index cf2c8a82ca3e..649f1462ebf8 100644
209 +--- a/Makefile
210 ++++ b/Makefile
211 +@@ -1,6 +1,6 @@
212 + VERSION = 3
213 + PATCHLEVEL = 4
214 +-SUBLEVEL = 105
215 ++SUBLEVEL = 106
216 + EXTRAVERSION =
217 + NAME = Saber-toothed Squirrel
218 +
219 +diff --git a/arch/m68k/mm/hwtest.c b/arch/m68k/mm/hwtest.c
220 +index 2c7dde3c6430..2a5259fd23eb 100644
221 +--- a/arch/m68k/mm/hwtest.c
222 ++++ b/arch/m68k/mm/hwtest.c
223 +@@ -28,9 +28,11 @@
224 + int hwreg_present( volatile void *regp )
225 + {
226 + int ret = 0;
227 ++ unsigned long flags;
228 + long save_sp, save_vbr;
229 + long tmp_vectors[3];
230 +
231 ++ local_irq_save(flags);
232 + __asm__ __volatile__
233 + ( "movec %/vbr,%2\n\t"
234 + "movel #Lberr1,%4@(8)\n\t"
235 +@@ -46,6 +48,7 @@ int hwreg_present( volatile void *regp )
236 + : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr)
237 + : "a" (regp), "a" (tmp_vectors)
238 + );
239 ++ local_irq_restore(flags);
240 +
241 + return( ret );
242 + }
243 +@@ -58,9 +61,11 @@ EXPORT_SYMBOL(hwreg_present);
244 + int hwreg_write( volatile void *regp, unsigned short val )
245 + {
246 + int ret;
247 ++ unsigned long flags;
248 + long save_sp, save_vbr;
249 + long tmp_vectors[3];
250 +
251 ++ local_irq_save(flags);
252 + __asm__ __volatile__
253 + ( "movec %/vbr,%2\n\t"
254 + "movel #Lberr2,%4@(8)\n\t"
255 +@@ -78,6 +83,7 @@ int hwreg_write( volatile void *regp, unsigned short val )
256 + : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr)
257 + : "a" (regp), "a" (tmp_vectors), "g" (val)
258 + );
259 ++ local_irq_restore(flags);
260 +
261 + return( ret );
262 + }
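
The two hunks above matter because the probe routines temporarily point VBR at
the three-entry tmp_vectors table; an interrupt taken in that window would be
dispatched through invalid vectors. The added calls are the standard masking
pattern (a sketch, not from the patch):

   unsigned long flags;

   local_irq_save(flags);      /* mask interrupts, remembering prior state */
   /* ...window in which VBR points at tmp_vectors... */
   local_irq_restore(flags);   /* re-enable only if previously enabled */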
263 +diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
264 +index 0bc485b3cd60..6d64efe77026 100644
265 +--- a/arch/mips/mm/tlbex.c
266 ++++ b/arch/mips/mm/tlbex.c
267 +@@ -1041,6 +1041,7 @@ static void __cpuinit build_update_entries(u32 **p, unsigned int tmp,
268 + struct mips_huge_tlb_info {
269 + int huge_pte;
270 + int restore_scratch;
271 ++ bool need_reload_pte;
272 + };
273 +
274 + static struct mips_huge_tlb_info __cpuinit
275 +@@ -1055,6 +1056,7 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
276 +
277 + rv.huge_pte = scratch;
278 + rv.restore_scratch = 0;
279 ++ rv.need_reload_pte = false;
280 +
281 + if (check_for_high_segbits) {
282 + UASM_i_MFC0(p, tmp, C0_BADVADDR);
283 +@@ -1247,6 +1249,7 @@ static void __cpuinit build_r4000_tlb_refill_handler(void)
284 + } else {
285 + htlb_info.huge_pte = K0;
286 + htlb_info.restore_scratch = 0;
287 ++ htlb_info.need_reload_pte = true;
288 + vmalloc_mode = refill_noscratch;
289 + /*
290 + * create the plain linear handler
291 +@@ -1283,6 +1286,8 @@ static void __cpuinit build_r4000_tlb_refill_handler(void)
292 + }
293 + #ifdef CONFIG_HUGETLB_PAGE
294 + uasm_l_tlb_huge_update(&l, p);
295 ++ if (htlb_info.need_reload_pte)
296 ++ UASM_i_LW(&p, htlb_info.huge_pte, 0, K1);
297 + build_huge_update_entries(&p, htlb_info.huge_pte, K1);
298 + build_huge_tlb_write_entry(&p, &l, &r, K0, tlb_random,
299 + htlb_info.restore_scratch);
300 +diff --git a/arch/mips/oprofile/backtrace.c b/arch/mips/oprofile/backtrace.c
301 +index 6854ed5097d2..83a1dfd8f0e3 100644
302 +--- a/arch/mips/oprofile/backtrace.c
303 ++++ b/arch/mips/oprofile/backtrace.c
304 +@@ -92,7 +92,7 @@ static inline int unwind_user_frame(struct stackframe *old_frame,
305 + /* This marks the end of the previous function,
306 + which means we overran. */
307 + break;
308 +- stack_size = (unsigned) stack_adjustment;
309 ++ stack_size = (unsigned long) stack_adjustment;
310 + } else if (is_ra_save_ins(&ip)) {
311 + int ra_slot = ip.i_format.simmediate;
312 + if (ra_slot < 0)
313 +diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
314 +index e500969bea0c..c3fc39ee57a5 100644
315 +--- a/arch/powerpc/kernel/entry_64.S
316 ++++ b/arch/powerpc/kernel/entry_64.S
317 +@@ -813,7 +813,13 @@ user_work:
318 + b .ret_from_except_lite
319 +
320 + 1: bl .save_nvgprs
321 ++ /*
322 ++ * Use a non volatile GPR to save and restore our thread_info flags
323 ++ * across the call to restore_interrupts.
324 ++ */
325 ++ mr r30,r4
326 + bl .restore_interrupts
327 ++ mr r4,r30
328 + addi r3,r1,STACK_FRAME_OVERHEAD
329 + bl .do_notify_resume
330 + b .ret_from_except
331 +diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
332 +index 10e13b331d38..df69bcb13c79 100644
333 +--- a/arch/s390/kvm/interrupt.c
334 ++++ b/arch/s390/kvm/interrupt.c
335 +@@ -43,6 +43,7 @@ static int __interrupt_is_deliverable(struct kvm_vcpu *vcpu,
336 + return 0;
337 + if (vcpu->arch.sie_block->gcr[0] & 0x2000ul)
338 + return 1;
339 ++ return 0;
340 + case KVM_S390_INT_EMERGENCY:
341 + if (psw_extint_disabled(vcpu))
342 + return 0;
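
The one added `return 0;` closes an unintended switch fall-through: a service
interrupt that was not deliverable would otherwise be re-evaluated under the
KVM_S390_INT_EMERGENCY rules below it. The bug class, reduced to a
self-contained C sketch (hypothetical names):

   static int deliverable(int type, int masked, unsigned long gcr0)
   {
           switch (type) {
           case 1:                         /* service interrupt */
                   if (masked)
                           return 0;
                   if (gcr0 & 0x2000ul)
                           return 1;
                   return 0;               /* the fix: without this line,
                                            * control falls into case 2 */
           case 2:                         /* emergency interrupt */
                   return !masked;
           default:
                   return 0;
           }
   }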
343 +diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
344 +index e95822d683f4..fa9c8c7bc500 100644
345 +--- a/arch/x86/include/asm/desc.h
346 ++++ b/arch/x86/include/asm/desc.h
347 +@@ -250,7 +250,8 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
348 + gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i];
349 + }
350 +
351 +-#define _LDT_empty(info) \
352 ++/* This intentionally ignores lm, since 32-bit apps don't have that field. */
353 ++#define LDT_empty(info) \
354 + ((info)->base_addr == 0 && \
355 + (info)->limit == 0 && \
356 + (info)->contents == 0 && \
357 +@@ -260,11 +261,18 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
358 + (info)->seg_not_present == 1 && \
359 + (info)->useable == 0)
360 +
361 +-#ifdef CONFIG_X86_64
362 +-#define LDT_empty(info) (_LDT_empty(info) && ((info)->lm == 0))
363 +-#else
364 +-#define LDT_empty(info) (_LDT_empty(info))
365 +-#endif
366 ++/* Lots of programs expect an all-zero user_desc to mean "no segment at all". */
367 ++static inline bool LDT_zero(const struct user_desc *info)
368 ++{
369 ++ return (info->base_addr == 0 &&
370 ++ info->limit == 0 &&
371 ++ info->contents == 0 &&
372 ++ info->read_exec_only == 0 &&
373 ++ info->seg_32bit == 0 &&
374 ++ info->limit_in_pages == 0 &&
375 ++ info->seg_not_present == 0 &&
376 ++ info->useable == 0);
377 ++}
378 +
379 + static inline void clear_LDT(void)
380 + {
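
The two "no segment" encodings distinguished above differ only in the
read_exec_only and seg_not_present bits. In userspace terms (an illustrative
sketch, not from the patch):

   #include <asm/ldt.h>            /* struct user_desc, x86 only */

   struct user_desc empty = {      /* matches LDT_empty(): the documented form */
           .read_exec_only  = 1,
           .seg_not_present = 1,   /* every other field zero */
   };

   struct user_desc zero;          /* all zeros: matches LDT_zero(), the form
                                    * many programs actually pass */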
381 +diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
382 +index 5939f44fe0c0..06ec1fe26d98 100644
383 +--- a/arch/x86/include/asm/elf.h
384 ++++ b/arch/x86/include/asm/elf.h
385 +@@ -155,8 +155,9 @@ do { \
386 + #define elf_check_arch(x) \
387 + ((x)->e_machine == EM_X86_64)
388 +
389 +-#define compat_elf_check_arch(x) \
390 +- (elf_check_arch_ia32(x) || (x)->e_machine == EM_X86_64)
391 ++#define compat_elf_check_arch(x) \
392 ++ (elf_check_arch_ia32(x) || \
393 ++ (IS_ENABLED(CONFIG_X86_X32_ABI) && (x)->e_machine == EM_X86_64))
394 +
395 + #if __USER32_DS != __USER_DS
396 + # error "The following code assumes __USER32_DS == __USER_DS"
397 +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
398 +index 944471f4d142..4f787579b329 100644
399 +--- a/arch/x86/include/asm/kvm_host.h
400 ++++ b/arch/x86/include/asm/kvm_host.h
401 +@@ -453,6 +453,7 @@ struct kvm_vcpu_arch {
402 + u64 mmio_gva;
403 + unsigned access;
404 + gfn_t mmio_gfn;
405 ++ u64 mmio_gen;
406 +
407 + struct kvm_pmu pmu;
408 +
409 +@@ -881,6 +882,20 @@ static inline void kvm_inject_gp(struct kvm_vcpu *vcpu, u32 error_code)
410 + kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
411 + }
412 +
413 ++static inline u64 get_canonical(u64 la)
414 ++{
415 ++ return ((int64_t)la << 16) >> 16;
416 ++}
417 ++
418 ++static inline bool is_noncanonical_address(u64 la)
419 ++{
420 ++#ifdef CONFIG_X86_64
421 ++ return get_canonical(la) != la;
422 ++#else
423 ++ return false;
424 ++#endif
425 ++}
426 ++
427 + #define TSS_IOPB_BASE_OFFSET 0x66
428 + #define TSS_BASE_SIZE 0x68
429 + #define TSS_IOPB_SIZE (65536 / 8)
430 +@@ -939,7 +954,7 @@ int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
431 + int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
432 +
433 + void kvm_define_shared_msr(unsigned index, u32 msr);
434 +-void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
435 ++int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
436 +
437 + bool kvm_is_linear_rip(struct kvm_vcpu *vcpu, unsigned long linear_rip);
438 +
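
get_canonical() works because a canonical x86-64 address has bits 63:48 equal
to bit 47; shifting left by 16 and arithmetic-shifting back sign-extends bit 47
over the top bits. A standalone check (an editorial aside, not part of the
patch):

   #include <assert.h>
   #include <stdint.h>

   /* Same operation as get_canonical() above, but shifting while the
    * value is still unsigned to stay within ISO C. */
   static uint64_t canon(uint64_t la)
   {
           return (uint64_t)((int64_t)(la << 16) >> 16);
   }

   int main(void)
   {
           assert(canon(0x00007fffffffffffULL) == 0x00007fffffffffffULL);
           assert(canon(0x0000800000000000ULL) == 0xffff800000000000ULL);
           /* the second address changes, so it is noncanonical */
           return 0;
   }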
439 +diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
440 +index ade619ff9e2a..88dae6b3d7d5 100644
441 +--- a/arch/x86/include/asm/page_32_types.h
442 ++++ b/arch/x86/include/asm/page_32_types.h
443 +@@ -18,7 +18,6 @@
444 + #define THREAD_ORDER 1
445 + #define THREAD_SIZE (PAGE_SIZE << THREAD_ORDER)
446 +
447 +-#define STACKFAULT_STACK 0
448 + #define DOUBLEFAULT_STACK 1
449 + #define NMI_STACK 0
450 + #define DEBUG_STACK 0
451 +diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
452 +index 7639dbf5d223..a9e9937b9a62 100644
453 +--- a/arch/x86/include/asm/page_64_types.h
454 ++++ b/arch/x86/include/asm/page_64_types.h
455 +@@ -14,12 +14,11 @@
456 + #define IRQ_STACK_ORDER 2
457 + #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
458 +
459 +-#define STACKFAULT_STACK 1
460 +-#define DOUBLEFAULT_STACK 2
461 +-#define NMI_STACK 3
462 +-#define DEBUG_STACK 4
463 +-#define MCE_STACK 5
464 +-#define N_EXCEPTION_STACKS 5 /* hw limit: 7 */
465 ++#define DOUBLEFAULT_STACK 1
466 ++#define NMI_STACK 2
467 ++#define DEBUG_STACK 3
468 ++#define MCE_STACK 4
469 ++#define N_EXCEPTION_STACKS 4 /* hw limit: 7 */
470 +
471 + #define PUD_PAGE_SIZE (_AC(1, UL) << PUD_SHIFT)
472 + #define PUD_PAGE_MASK (~(PUD_PAGE_SIZE-1))
473 +diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
474 +index 31f180c21ce9..504d1cf9def8 100644
475 +--- a/arch/x86/include/asm/vmx.h
476 ++++ b/arch/x86/include/asm/vmx.h
477 +@@ -279,6 +279,8 @@ enum vmcs_field {
478 + #define EXIT_REASON_APIC_ACCESS 44
479 + #define EXIT_REASON_EPT_VIOLATION 48
480 + #define EXIT_REASON_EPT_MISCONFIG 49
481 ++#define EXIT_REASON_INVEPT 50
482 ++#define EXIT_REASON_INVVPID 53
483 + #define EXIT_REASON_WBINVD 54
484 + #define EXIT_REASON_XSETBV 55
485 +
486 +diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
487 +index edc24480469f..cb5b54e796eb 100644
488 +--- a/arch/x86/kernel/apic/apic.c
489 ++++ b/arch/x86/kernel/apic/apic.c
490 +@@ -1229,7 +1229,7 @@ void __cpuinit setup_local_APIC(void)
491 + unsigned int value, queued;
492 + int i, j, acked = 0;
493 + unsigned long long tsc = 0, ntsc;
494 +- long long max_loops = cpu_khz;
495 ++ long long max_loops = cpu_khz ? cpu_khz : 1000000;
496 +
497 + if (cpu_has_tsc)
498 + rdtscll(tsc);
499 +@@ -1325,7 +1325,7 @@ void __cpuinit setup_local_APIC(void)
500 + acked);
501 + break;
502 + }
503 +- if (cpu_has_tsc) {
504 ++ if (cpu_has_tsc && cpu_khz) {
505 + rdtscll(ntsc);
506 + max_loops = (cpu_khz << 10) - (ntsc - tsc);
507 + } else
508 +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
509 +index cf79302198a6..114db0fee86c 100644
510 +--- a/arch/x86/kernel/cpu/common.c
511 ++++ b/arch/x86/kernel/cpu/common.c
512 +@@ -142,6 +142,8 @@ EXPORT_PER_CPU_SYMBOL_GPL(gdt_page);
513 +
514 + static int __init x86_xsave_setup(char *s)
515 + {
516 ++ if (strlen(s))
517 ++ return 0;
518 + setup_clear_cpu_cap(X86_FEATURE_XSAVE);
519 + setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT);
520 + return 1;
521 +diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
522 +index 3e6ff6cbf42a..e7a64dd602d9 100644
523 +--- a/arch/x86/kernel/cpu/intel.c
524 ++++ b/arch/x86/kernel/cpu/intel.c
525 +@@ -143,6 +143,21 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
526 + setup_clear_cpu_cap(X86_FEATURE_ERMS);
527 + }
528 + }
529 ++
530 ++ /*
531 ++ * Intel Quark Core DevMan_001.pdf section 6.4.11
532 ++ * "The operating system also is required to invalidate (i.e., flush)
533 ++ * the TLB when any changes are made to any of the page table entries.
534 ++ * The operating system must reload CR3 to cause the TLB to be flushed"
535 ++ *
536 ++ * As a result cpu_has_pge() in arch/x86/include/asm/tlbflush.h should
537 ++ * be false so that __flush_tlb_all() causes CR3 instead of CR4.PGE
538 ++ * to be modified
539 ++ */
540 ++ if (c->x86 == 5 && c->x86_model == 9) {
541 ++ pr_info("Disabling PGE capability bit\n");
542 ++ setup_clear_cpu_cap(X86_FEATURE_PGE);
543 ++ }
544 + }
545 +
546 + #ifdef CONFIG_X86_32
547 +diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
548 +index 17107bd6e1f0..e8206060a0a8 100644
549 +--- a/arch/x86/kernel/dumpstack_64.c
550 ++++ b/arch/x86/kernel/dumpstack_64.c
551 +@@ -24,7 +24,6 @@ static char x86_stack_ids[][8] = {
552 + [ DEBUG_STACK-1 ] = "#DB",
553 + [ NMI_STACK-1 ] = "NMI",
554 + [ DOUBLEFAULT_STACK-1 ] = "#DF",
555 +- [ STACKFAULT_STACK-1 ] = "#SS",
556 + [ MCE_STACK-1 ] = "#MC",
557 + #if DEBUG_STKSZ > EXCEPTION_STKSZ
558 + [ N_EXCEPTION_STACKS ...
559 +diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
560 +index 42b055e24691..45f9c70f1246 100644
561 +--- a/arch/x86/kernel/entry_64.S
562 ++++ b/arch/x86/kernel/entry_64.S
563 +@@ -912,13 +912,16 @@ ENTRY(native_iret)
564 + jnz native_irq_return_ldt
565 + #endif
566 +
567 ++.global native_irq_return_iret
568 + native_irq_return_iret:
569 ++ /*
570 ++ * This may fault. Non-paranoid faults on return to userspace are
571 ++ * handled by fixup_bad_iret. These include #SS, #GP, and #NP.
572 ++ * Double-faults due to espfix64 are handled in do_double_fault.
573 ++ * Other faults here are fatal.
574 ++ */
575 + iretq
576 +
577 +- .section __ex_table,"a"
578 +- .quad native_irq_return_iret, bad_iret
579 +- .previous
580 +-
581 + #ifdef CONFIG_X86_ESPFIX64
582 + native_irq_return_ldt:
583 + pushq_cfi %rax
584 +@@ -945,25 +948,6 @@ native_irq_return_ldt:
585 + jmp native_irq_return_iret
586 + #endif
587 +
588 +- .section .fixup,"ax"
589 +-bad_iret:
590 +- /*
591 +- * The iret traps when the %cs or %ss being restored is bogus.
592 +- * We've lost the original trap vector and error code.
593 +- * #GPF is the most likely one to get for an invalid selector.
594 +- * So pretend we completed the iret and took the #GPF in user mode.
595 +- *
596 +- * We are now running with the kernel GS after exception recovery.
597 +- * But error_entry expects us to have user GS to match the user %cs,
598 +- * so swap back.
599 +- */
600 +- pushq $0
601 +-
602 +- SWAPGS
603 +- jmp general_protection
604 +-
605 +- .previous
606 +-
607 + /* edi: workmask, edx: work */
608 + retint_careful:
609 + CFI_RESTORE_STATE
610 +@@ -1011,37 +995,6 @@ ENTRY(retint_kernel)
611 + CFI_ENDPROC
612 + END(common_interrupt)
613 +
614 +- /*
615 +- * If IRET takes a fault on the espfix stack, then we
616 +- * end up promoting it to a doublefault. In that case,
617 +- * modify the stack to make it look like we just entered
618 +- * the #GP handler from user space, similar to bad_iret.
619 +- */
620 +-#ifdef CONFIG_X86_ESPFIX64
621 +- ALIGN
622 +-__do_double_fault:
623 +- XCPT_FRAME 1 RDI+8
624 +- movq RSP(%rdi),%rax /* Trap on the espfix stack? */
625 +- sarq $PGDIR_SHIFT,%rax
626 +- cmpl $ESPFIX_PGD_ENTRY,%eax
627 +- jne do_double_fault /* No, just deliver the fault */
628 +- cmpl $__KERNEL_CS,CS(%rdi)
629 +- jne do_double_fault
630 +- movq RIP(%rdi),%rax
631 +- cmpq $native_irq_return_iret,%rax
632 +- jne do_double_fault /* This shouldn't happen... */
633 +- movq PER_CPU_VAR(kernel_stack),%rax
634 +- subq $(6*8-KERNEL_STACK_OFFSET),%rax /* Reset to original stack */
635 +- movq %rax,RSP(%rdi)
636 +- movq $0,(%rax) /* Missing (lost) #GP error code */
637 +- movq $general_protection,RIP(%rdi)
638 +- retq
639 +- CFI_ENDPROC
640 +-END(__do_double_fault)
641 +-#else
642 +-# define __do_double_fault do_double_fault
643 +-#endif
644 +-
645 + /*
646 + * End of kprobes section
647 + */
648 +@@ -1217,7 +1170,7 @@ zeroentry overflow do_overflow
649 + zeroentry bounds do_bounds
650 + zeroentry invalid_op do_invalid_op
651 + zeroentry device_not_available do_device_not_available
652 +-paranoiderrorentry double_fault __do_double_fault
653 ++paranoiderrorentry double_fault do_double_fault
654 + zeroentry coprocessor_segment_overrun do_coprocessor_segment_overrun
655 + errorentry invalid_TSS do_invalid_TSS
656 + errorentry segment_not_present do_segment_not_present
657 +@@ -1431,7 +1384,7 @@ apicinterrupt XEN_HVM_EVTCHN_CALLBACK \
658 +
659 + paranoidzeroentry_ist debug do_debug DEBUG_STACK
660 + paranoidzeroentry_ist int3 do_int3 DEBUG_STACK
661 +-paranoiderrorentry stack_segment do_stack_segment
662 ++errorentry stack_segment do_stack_segment
663 + #ifdef CONFIG_XEN
664 + zeroentry xen_debug do_debug
665 + zeroentry xen_int3 do_int3
666 +@@ -1541,16 +1494,15 @@ error_sti:
667 +
668 + /*
669 + * There are two places in the kernel that can potentially fault with
670 +- * usergs. Handle them here. The exception handlers after iret run with
671 +- * kernel gs again, so don't set the user space flag. B stepping K8s
672 +- * sometimes report an truncated RIP for IRET exceptions returning to
673 +- * compat mode. Check for these here too.
674 ++ * usergs. Handle them here. B stepping K8s sometimes report a
675 ++ * truncated RIP for IRET exceptions returning to compat mode. Check
676 ++ * for these here too.
677 + */
678 + error_kernelspace:
679 + incl %ebx
680 + leaq native_irq_return_iret(%rip),%rcx
681 + cmpq %rcx,RIP+8(%rsp)
682 +- je error_swapgs
683 ++ je error_bad_iret
684 + movl %ecx,%eax /* zero extend */
685 + cmpq %rax,RIP+8(%rsp)
686 + je bstep_iret
687 +@@ -1561,7 +1513,15 @@ error_kernelspace:
688 + bstep_iret:
689 + /* Fix truncated RIP */
690 + movq %rcx,RIP+8(%rsp)
691 +- jmp error_swapgs
692 ++ /* fall through */
693 ++
694 ++error_bad_iret:
695 ++ SWAPGS
696 ++ mov %rsp,%rdi
697 ++ call fixup_bad_iret
698 ++ mov %rax,%rsp
699 ++ decl %ebx /* Return to usergs */
700 ++ jmp error_sti
701 + CFI_ENDPROC
702 + END(error_entry)
703 +
704 +diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
705 +index e554e5ad2fe8..226f28413f76 100644
706 +--- a/arch/x86/kernel/kvm.c
707 ++++ b/arch/x86/kernel/kvm.c
708 +@@ -258,7 +258,14 @@ do_async_page_fault(struct pt_regs *regs, unsigned long error_code)
709 + static void __init paravirt_ops_setup(void)
710 + {
711 + pv_info.name = "KVM";
712 +- pv_info.paravirt_enabled = 1;
713 ++
714 ++ /*
715 ++ * KVM isn't paravirt in the sense of paravirt_enabled. A KVM
716 ++ * guest kernel works like a bare metal kernel with additional
717 ++ * features, and paravirt_enabled is about features that are
718 ++ * missing.
719 ++ */
720 ++ pv_info.paravirt_enabled = 0;
721 +
722 + if (kvm_para_has_feature(KVM_FEATURE_NOP_IO_DELAY))
723 + pv_cpu_ops.io_delay = kvm_io_delay;
724 +diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
725 +index f8492da65bfc..5e3f91bb6ec3 100644
726 +--- a/arch/x86/kernel/kvmclock.c
727 ++++ b/arch/x86/kernel/kvmclock.c
728 +@@ -212,7 +212,6 @@ void __init kvmclock_init(void)
729 + #endif
730 + kvm_get_preset_lpj();
731 + clocksource_register_hz(&kvm_clock, NSEC_PER_SEC);
732 +- pv_info.paravirt_enabled = 1;
733 + pv_info.name = "KVM";
734 +
735 + if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE_STABLE_BIT))
736 +diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
737 +index 9d9d2f9e77a5..9d25a6eef1e1 100644
738 +--- a/arch/x86/kernel/tls.c
739 ++++ b/arch/x86/kernel/tls.c
740 +@@ -27,6 +27,42 @@ static int get_free_idx(void)
741 + return -ESRCH;
742 + }
743 +
744 ++static bool tls_desc_okay(const struct user_desc *info)
745 ++{
746 ++ /*
747 ++ * For historical reasons (i.e. no one ever documented how any
748 ++ * of the segmentation APIs work), user programs can and do
749 ++ * assume that a struct user_desc that's all zeros except for
750 ++ * entry_number means "no segment at all". This never actually
751 ++ * worked. In fact, up to Linux 3.19, a struct user_desc like
752 ++ * this would create a 16-bit read-write segment with base and
753 ++ * limit both equal to zero.
754 ++ *
755 ++ * That was close enough to "no segment at all" until we
756 ++ * hardened this function to disallow 16-bit TLS segments. Fix
757 ++ * it up by interpreting these zeroed segments the way that they
758 ++ * were almost certainly intended to be interpreted.
759 ++ *
760 ++ * The correct way to ask for "no segment at all" is to specify
761 ++ * a user_desc that satisfies LDT_empty. To keep everything
762 ++ * working, we accept both.
763 ++ *
764 ++ * Note that there's a similar kludge in modify_ldt -- look at
765 ++ * the distinction between modes 1 and 0x11.
766 ++ */
767 ++ if (LDT_empty(info) || LDT_zero(info))
768 ++ return true;
769 ++
770 ++ /*
771 ++ * espfix is required for 16-bit data segments, but espfix
772 ++ * only works for LDT segments.
773 ++ */
774 ++ if (!info->seg_32bit)
775 ++ return false;
776 ++
777 ++ return true;
778 ++}
779 ++
780 + static void set_tls_desc(struct task_struct *p, int idx,
781 + const struct user_desc *info, int n)
782 + {
783 +@@ -40,7 +76,7 @@ static void set_tls_desc(struct task_struct *p, int idx,
784 + cpu = get_cpu();
785 +
786 + while (n-- > 0) {
787 +- if (LDT_empty(info))
788 ++ if (LDT_empty(info) || LDT_zero(info))
789 + desc->a = desc->b = 0;
790 + else
791 + fill_ldt(desc, info);
792 +@@ -66,6 +102,9 @@ int do_set_thread_area(struct task_struct *p, int idx,
793 + if (copy_from_user(&info, u_info, sizeof(info)))
794 + return -EFAULT;
795 +
796 ++ if (!tls_desc_okay(&info))
797 ++ return -EINVAL;
798 ++
799 + if (idx == -1)
800 + idx = info.entry_number;
801 +
802 +@@ -196,6 +235,7 @@ int regset_tls_set(struct task_struct *target, const struct user_regset *regset,
803 + {
804 + struct user_desc infobuf[GDT_ENTRY_TLS_ENTRIES];
805 + const struct user_desc *info;
806 ++ int i;
807 +
808 + if (pos >= GDT_ENTRY_TLS_ENTRIES * sizeof(struct user_desc) ||
809 + (pos % sizeof(struct user_desc)) != 0 ||
810 +@@ -209,6 +249,10 @@ int regset_tls_set(struct task_struct *target, const struct user_regset *regset,
811 + else
812 + info = infobuf;
813 +
814 ++ for (i = 0; i < count / sizeof(struct user_desc); i++)
815 ++ if (!tls_desc_okay(info + i))
816 ++ return -EINVAL;
817 ++
818 + set_tls_desc(target,
819 + GDT_ENTRY_TLS_MIN + (pos / sizeof(struct user_desc)),
820 + info, count / sizeof(struct user_desc));
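
For context, the zeroed descriptor that tls_desc_okay() now tolerates
typically originates in userspace code along these lines (a hedged sketch;
glibc normally wraps the raw call):

   #include <asm/ldt.h>            /* struct user_desc */
   #include <string.h>
   #include <sys/syscall.h>
   #include <unistd.h>

   /* Clear one GDT TLS slot by passing a user_desc that is all zeros
    * except for entry_number -- the idiom the comment above describes. */
   static int clear_tls_slot(int entry)
   {
           struct user_desc d;

           memset(&d, 0, sizeof(d));
           d.entry_number = entry;
           return syscall(SYS_set_thread_area, &d);
   }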
821 +diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
822 +index ff9281f16029..9bfe95fda57c 100644
823 +--- a/arch/x86/kernel/traps.c
824 ++++ b/arch/x86/kernel/traps.c
825 +@@ -213,29 +213,41 @@ DO_ERROR(X86_TRAP_OLD_MF, SIGFPE, "coprocessor segment overrun",
826 + coprocessor_segment_overrun)
827 + DO_ERROR(X86_TRAP_TS, SIGSEGV, "invalid TSS", invalid_TSS)
828 + DO_ERROR(X86_TRAP_NP, SIGBUS, "segment not present", segment_not_present)
829 +-#ifdef CONFIG_X86_32
830 + DO_ERROR(X86_TRAP_SS, SIGBUS, "stack segment", stack_segment)
831 +-#endif
832 + DO_ERROR_INFO(X86_TRAP_AC, SIGBUS, "alignment check", alignment_check,
833 + BUS_ADRALN, 0)
834 +
835 + #ifdef CONFIG_X86_64
836 + /* Runs on IST stack */
837 +-dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code)
838 +-{
839 +- if (notify_die(DIE_TRAP, "stack segment", regs, error_code,
840 +- X86_TRAP_SS, SIGBUS) == NOTIFY_STOP)
841 +- return;
842 +- preempt_conditional_sti(regs);
843 +- do_trap(X86_TRAP_SS, SIGBUS, "stack segment", regs, error_code, NULL);
844 +- preempt_conditional_cli(regs);
845 +-}
846 +-
847 + dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
848 + {
849 + static const char str[] = "double fault";
850 + struct task_struct *tsk = current;
851 +
852 ++#ifdef CONFIG_X86_ESPFIX64
853 ++ extern unsigned char native_irq_return_iret[];
854 ++
855 ++ /*
856 ++ * If IRET takes a non-IST fault on the espfix64 stack, then we
857 ++ * end up promoting it to a doublefault. In that case, modify
858 ++ * the stack to make it look like we just entered the #GP
859 ++ * handler from user space, similar to bad_iret.
860 ++ */
861 ++ if (((long)regs->sp >> PGDIR_SHIFT) == ESPFIX_PGD_ENTRY &&
862 ++ regs->cs == __KERNEL_CS &&
863 ++ regs->ip == (unsigned long)native_irq_return_iret)
864 ++ {
865 ++ struct pt_regs *normal_regs = task_pt_regs(current);
866 ++
867 ++ /* Fake a #GP(0) from userspace. */
868 ++ memmove(&normal_regs->ip, (void *)regs->sp, 5*8);
869 ++ normal_regs->orig_ax = 0; /* Missing (lost) #GP error code */
870 ++ regs->ip = (unsigned long)general_protection;
871 ++ regs->sp = (unsigned long)&normal_regs->orig_ax;
872 ++ return;
873 ++ }
874 ++#endif
875 ++
876 + /* Return not checked because double check cannot be ignored */
877 + notify_die(DIE_TRAP, str, regs, error_code, X86_TRAP_DF, SIGSEGV);
878 +
879 +@@ -332,7 +344,7 @@ dotraplinkage void __kprobes do_int3(struct pt_regs *regs, long error_code)
880 + * for scheduling or signal handling. The actual stack switch is done in
881 + * entry.S
882 + */
883 +-asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
884 ++asmlinkage notrace __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
885 + {
886 + struct pt_regs *regs = eregs;
887 + /* Did already sync */
888 +@@ -351,6 +363,35 @@ asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
889 + *regs = *eregs;
890 + return regs;
891 + }
892 ++
893 ++struct bad_iret_stack {
894 ++ void *error_entry_ret;
895 ++ struct pt_regs regs;
896 ++};
897 ++
898 ++asmlinkage notrace __kprobes
899 ++struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
900 ++{
901 ++ /*
902 ++ * This is called from entry_64.S early in handling a fault
903 ++ * caused by a bad iret to user mode. To handle the fault
904 ++ * correctly, we want to move our stack frame to task_pt_regs
905 ++ * and we want to pretend that the exception came from the
906 ++ * iret target.
907 ++ */
908 ++ struct bad_iret_stack *new_stack =
909 ++ container_of(task_pt_regs(current),
910 ++ struct bad_iret_stack, regs);
911 ++
912 ++ /* Copy the IRET target to the new stack. */
913 ++ memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8);
914 ++
915 ++ /* Copy the remainder of the stack from the current stack. */
916 ++ memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip));
917 ++
918 ++ BUG_ON(!user_mode_vm(&new_stack->regs));
919 ++ return new_stack;
920 ++}
921 + #endif
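
The 5*8 bytes that both memmove() calls above shuffle are the hardware iret
frame, i.e. the top of struct pt_regs; as a sketch:

   /* Layout of the five 8-byte slots moved above (x86-64 iret frame). */
   struct iret_frame {
           unsigned long ip;       /* RIP    */
           unsigned long cs;       /* CS     */
           unsigned long flags;    /* RFLAGS */
           unsigned long sp;       /* RSP    */
           unsigned long ss;       /* SS     */
   };                              /* sizeof == 5*8 */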
922 +
923 + /*
924 +@@ -694,7 +735,7 @@ void __init trap_init(void)
925 + set_intr_gate(X86_TRAP_OLD_MF, &coprocessor_segment_overrun);
926 + set_intr_gate(X86_TRAP_TS, &invalid_TSS);
927 + set_intr_gate(X86_TRAP_NP, &segment_not_present);
928 +- set_intr_gate_ist(X86_TRAP_SS, &stack_segment, STACKFAULT_STACK);
929 ++ set_intr_gate(X86_TRAP_SS, stack_segment);
930 + set_intr_gate(X86_TRAP_GP, &general_protection);
931 + set_intr_gate(X86_TRAP_SPURIOUS, &spurious_interrupt_bug);
932 + set_intr_gate(X86_TRAP_MF, &coprocessor_error);
933 +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
934 +index fc0a147e3727..8652aa408ae0 100644
935 +--- a/arch/x86/kernel/tsc.c
936 ++++ b/arch/x86/kernel/tsc.c
937 +@@ -959,14 +959,17 @@ void __init tsc_init(void)
938 +
939 + x86_init.timers.tsc_pre_init();
940 +
941 +- if (!cpu_has_tsc)
942 ++ if (!cpu_has_tsc) {
943 ++ setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
944 + return;
945 ++ }
946 +
947 + tsc_khz = x86_platform.calibrate_tsc();
948 + cpu_khz = tsc_khz;
949 +
950 + if (!tsc_khz) {
951 + mark_tsc_unstable("could not calculate TSC khz");
952 ++ setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
953 + return;
954 + }
955 +
956 +diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
957 +index 83756223f8aa..91e8680ec239 100644
958 +--- a/arch/x86/kvm/emulate.c
959 ++++ b/arch/x86/kvm/emulate.c
960 +@@ -459,11 +459,6 @@ register_address_increment(struct x86_emulate_ctxt *ctxt, unsigned long *reg, in
961 + *reg = (*reg & ~ad_mask(ctxt)) | ((*reg + inc) & ad_mask(ctxt));
962 + }
963 +
964 +-static inline void jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
965 +-{
966 +- register_address_increment(ctxt, &ctxt->_eip, rel);
967 +-}
968 +-
969 + static u32 desc_limit_scaled(struct desc_struct *desc)
970 + {
971 + u32 limit = get_desc_limit(desc);
972 +@@ -537,6 +532,40 @@ static int emulate_nm(struct x86_emulate_ctxt *ctxt)
973 + return emulate_exception(ctxt, NM_VECTOR, 0, false);
974 + }
975 +
976 ++static inline int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
977 ++ int cs_l)
978 ++{
979 ++ switch (ctxt->op_bytes) {
980 ++ case 2:
981 ++ ctxt->_eip = (u16)dst;
982 ++ break;
983 ++ case 4:
984 ++ ctxt->_eip = (u32)dst;
985 ++ break;
986 ++#ifdef CONFIG_X86_64
987 ++ case 8:
988 ++ if ((cs_l && is_noncanonical_address(dst)) ||
989 ++ (!cs_l && (dst >> 32) != 0))
990 ++ return emulate_gp(ctxt, 0);
991 ++ ctxt->_eip = dst;
992 ++ break;
993 ++#endif
994 ++ default:
995 ++ WARN(1, "unsupported eip assignment size\n");
996 ++ }
997 ++ return X86EMUL_CONTINUE;
998 ++}
999 ++
1000 ++static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
1001 ++{
1002 ++ return assign_eip_far(ctxt, dst, ctxt->mode == X86EMUL_MODE_PROT64);
1003 ++}
1004 ++
1005 ++static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
1006 ++{
1007 ++ return assign_eip_near(ctxt, ctxt->_eip + rel);
1008 ++}
1009 ++
1010 + static u16 get_segment_selector(struct x86_emulate_ctxt *ctxt, unsigned seg)
1011 + {
1012 + u16 selector;
1013 +@@ -1224,11 +1253,13 @@ static int write_segment_descriptor(struct x86_emulate_ctxt *ctxt,
1014 + }
1015 +
1016 + /* Does not support long mode */
1017 +-static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
1018 +- u16 selector, int seg)
1019 ++static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
1020 ++ u16 selector, int seg, u8 cpl,
1021 ++ bool in_task_switch,
1022 ++ struct desc_struct *desc)
1023 + {
1024 + struct desc_struct seg_desc;
1025 +- u8 dpl, rpl, cpl;
1026 ++ u8 dpl, rpl;
1027 + unsigned err_vec = GP_VECTOR;
1028 + u32 err_code = 0;
1029 + bool null_selector = !(selector & ~0x3); /* 0000-0003 are null */
1030 +@@ -1279,7 +1310,6 @@ static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
1031 +
1032 + rpl = selector & 3;
1033 + dpl = seg_desc.dpl;
1034 +- cpl = ctxt->ops->cpl(ctxt);
1035 +
1036 + switch (seg) {
1037 + case VCPU_SREG_SS:
1038 +@@ -1336,12 +1366,21 @@ static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
1039 + }
1040 + load:
1041 + ctxt->ops->set_segment(ctxt, selector, &seg_desc, 0, seg);
1042 ++ if (desc)
1043 ++ *desc = seg_desc;
1044 + return X86EMUL_CONTINUE;
1045 + exception:
1046 + emulate_exception(ctxt, err_vec, err_code, true);
1047 + return X86EMUL_PROPAGATE_FAULT;
1048 + }
1049 +
1050 ++static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
1051 ++ u16 selector, int seg)
1052 ++{
1053 ++ u8 cpl = ctxt->ops->cpl(ctxt);
1054 ++ return __load_segment_descriptor(ctxt, selector, seg, cpl, false, NULL);
1055 ++}
1056 ++
1057 + static void write_register_operand(struct operand *op)
1058 + {
1059 + /* The 4-byte case *is* correct: in 64-bit mode we zero-extend. */
1060 +@@ -1681,17 +1720,31 @@ static int em_iret(struct x86_emulate_ctxt *ctxt)
1061 + static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
1062 + {
1063 + int rc;
1064 +- unsigned short sel;
1065 ++ unsigned short sel, old_sel;
1066 ++ struct desc_struct old_desc, new_desc;
1067 ++ const struct x86_emulate_ops *ops = ctxt->ops;
1068 ++ u8 cpl = ctxt->ops->cpl(ctxt);
1069 ++
1070 ++ /* Assignment of RIP may only fail in 64-bit mode */
1071 ++ if (ctxt->mode == X86EMUL_MODE_PROT64)
1072 ++ ops->get_segment(ctxt, &old_sel, &old_desc, NULL,
1073 ++ VCPU_SREG_CS);
1074 +
1075 + memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2);
1076 +
1077 +- rc = load_segment_descriptor(ctxt, sel, VCPU_SREG_CS);
1078 ++ rc = __load_segment_descriptor(ctxt, sel, VCPU_SREG_CS, cpl, false,
1079 ++ &new_desc);
1080 + if (rc != X86EMUL_CONTINUE)
1081 + return rc;
1082 +
1083 +- ctxt->_eip = 0;
1084 +- memcpy(&ctxt->_eip, ctxt->src.valptr, ctxt->op_bytes);
1085 +- return X86EMUL_CONTINUE;
1086 ++ rc = assign_eip_far(ctxt, ctxt->src.val, new_desc.l);
1087 ++ if (rc != X86EMUL_CONTINUE) {
1088 ++ WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64);
1089 ++ /* assigning eip failed; restore the old cs */
1090 ++ ops->set_segment(ctxt, old_sel, &old_desc, 0, VCPU_SREG_CS);
1091 ++ return rc;
1092 ++ }
1093 ++ return rc;
1094 + }
1095 +
1096 + static int em_grp2(struct x86_emulate_ctxt *ctxt)
1097 +@@ -1785,13 +1838,15 @@ static int em_grp45(struct x86_emulate_ctxt *ctxt)
1098 + case 2: /* call near abs */ {
1099 + long int old_eip;
1100 + old_eip = ctxt->_eip;
1101 +- ctxt->_eip = ctxt->src.val;
1102 ++ rc = assign_eip_near(ctxt, ctxt->src.val);
1103 ++ if (rc != X86EMUL_CONTINUE)
1104 ++ break;
1105 + ctxt->src.val = old_eip;
1106 + rc = em_push(ctxt);
1107 + break;
1108 + }
1109 + case 4: /* jmp abs */
1110 +- ctxt->_eip = ctxt->src.val;
1111 ++ rc = assign_eip_near(ctxt, ctxt->src.val);
1112 + break;
1113 + case 5: /* jmp far */
1114 + rc = em_jmp_far(ctxt);
1115 +@@ -1823,26 +1878,43 @@ static int em_cmpxchg8b(struct x86_emulate_ctxt *ctxt)
1116 +
1117 + static int em_ret(struct x86_emulate_ctxt *ctxt)
1118 + {
1119 +- ctxt->dst.type = OP_REG;
1120 +- ctxt->dst.addr.reg = &ctxt->_eip;
1121 +- ctxt->dst.bytes = ctxt->op_bytes;
1122 +- return em_pop(ctxt);
1123 ++ int rc;
1124 ++ unsigned long eip;
1125 ++
1126 ++ rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
1127 ++ if (rc != X86EMUL_CONTINUE)
1128 ++ return rc;
1129 ++
1130 ++ return assign_eip_near(ctxt, eip);
1131 + }
1132 +
1133 + static int em_ret_far(struct x86_emulate_ctxt *ctxt)
1134 + {
1135 + int rc;
1136 +- unsigned long cs;
1137 ++ unsigned long eip, cs;
1138 ++ u16 old_cs;
1139 ++ struct desc_struct old_desc, new_desc;
1140 ++ const struct x86_emulate_ops *ops = ctxt->ops;
1141 ++
1142 ++ if (ctxt->mode == X86EMUL_MODE_PROT64)
1143 ++ ops->get_segment(ctxt, &old_cs, &old_desc, NULL,
1144 ++ VCPU_SREG_CS);
1145 +
1146 +- rc = emulate_pop(ctxt, &ctxt->_eip, ctxt->op_bytes);
1147 ++ rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
1148 + if (rc != X86EMUL_CONTINUE)
1149 + return rc;
1150 +- if (ctxt->op_bytes == 4)
1151 +- ctxt->_eip = (u32)ctxt->_eip;
1152 + rc = emulate_pop(ctxt, &cs, ctxt->op_bytes);
1153 + if (rc != X86EMUL_CONTINUE)
1154 + return rc;
1155 +- rc = load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS);
1156 ++ rc = __load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS, 0, false,
1157 ++ &new_desc);
1158 ++ if (rc != X86EMUL_CONTINUE)
1159 ++ return rc;
1160 ++ rc = assign_eip_far(ctxt, eip, new_desc.l);
1161 ++ if (rc != X86EMUL_CONTINUE) {
1162 ++ WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64);
1163 ++ ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS);
1164 ++ }
1165 + return rc;
1166 + }
1167 +
1168 +@@ -2091,7 +2163,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
1169 + {
1170 + struct x86_emulate_ops *ops = ctxt->ops;
1171 + struct desc_struct cs, ss;
1172 +- u64 msr_data;
1173 ++ u64 msr_data, rcx, rdx;
1174 + int usermode;
1175 + u16 cs_sel = 0, ss_sel = 0;
1176 +
1177 +@@ -2107,6 +2179,9 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
1178 + else
1179 + usermode = X86EMUL_MODE_PROT32;
1180 +
1181 ++ rcx = ctxt->regs[VCPU_REGS_RCX];
1182 ++ rdx = ctxt->regs[VCPU_REGS_RDX];
1183 ++
1184 + cs.dpl = 3;
1185 + ss.dpl = 3;
1186 + ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data);
1187 +@@ -2124,6 +2199,9 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
1188 + ss_sel = cs_sel + 8;
1189 + cs.d = 0;
1190 + cs.l = 1;
1191 ++ if (is_noncanonical_address(rcx) ||
1192 ++ is_noncanonical_address(rdx))
1193 ++ return emulate_gp(ctxt, 0);
1194 + break;
1195 + }
1196 + cs_sel |= SELECTOR_RPL_MASK;
1197 +@@ -2132,8 +2210,8 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
1198 + ops->set_segment(ctxt, cs_sel, &cs, 0, VCPU_SREG_CS);
1199 + ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
1200 +
1201 +- ctxt->_eip = ctxt->regs[VCPU_REGS_RDX];
1202 +- ctxt->regs[VCPU_REGS_RSP] = ctxt->regs[VCPU_REGS_RCX];
1203 ++ ctxt->_eip = rdx;
1204 ++ ctxt->regs[VCPU_REGS_RSP] = rcx;
1205 +
1206 + return X86EMUL_CONTINUE;
1207 + }
1208 +@@ -2222,6 +2300,7 @@ static int load_state_from_tss16(struct x86_emulate_ctxt *ctxt,
1209 + struct tss_segment_16 *tss)
1210 + {
1211 + int ret;
1212 ++ u8 cpl;
1213 +
1214 + ctxt->_eip = tss->ip;
1215 + ctxt->eflags = tss->flag | 2;
1216 +@@ -2244,23 +2323,30 @@ static int load_state_from_tss16(struct x86_emulate_ctxt *ctxt,
1217 + set_segment_selector(ctxt, tss->ss, VCPU_SREG_SS);
1218 + set_segment_selector(ctxt, tss->ds, VCPU_SREG_DS);
1219 +
1220 ++ cpl = tss->cs & 3;
1221 ++
1222 + /*
1223 + * Now load segment descriptors. If a fault happens at this stage
1224 + * it is handled in the context of the new task
1225 + */
1226 +- ret = load_segment_descriptor(ctxt, tss->ldt, VCPU_SREG_LDTR);
1227 ++ ret = __load_segment_descriptor(ctxt, tss->ldt, VCPU_SREG_LDTR, cpl,
1228 ++ true, NULL);
1229 + if (ret != X86EMUL_CONTINUE)
1230 + return ret;
1231 +- ret = load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES);
1232 ++ ret = __load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES, cpl,
1233 ++ true, NULL);
1234 + if (ret != X86EMUL_CONTINUE)
1235 + return ret;
1236 +- ret = load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS);
1237 ++ ret = __load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS, cpl,
1238 ++ true, NULL);
1239 + if (ret != X86EMUL_CONTINUE)
1240 + return ret;
1241 +- ret = load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS);
1242 ++ ret = __load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS, cpl,
1243 ++ true, NULL);
1244 + if (ret != X86EMUL_CONTINUE)
1245 + return ret;
1246 +- ret = load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS);
1247 ++ ret = __load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS, cpl,
1248 ++ true, NULL);
1249 + if (ret != X86EMUL_CONTINUE)
1250 + return ret;
1251 +
1252 +@@ -2339,6 +2425,7 @@ static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
1253 + struct tss_segment_32 *tss)
1254 + {
1255 + int ret;
1256 ++ u8 cpl;
1257 +
1258 + if (ctxt->ops->set_cr(ctxt, 3, tss->cr3))
1259 + return emulate_gp(ctxt, 0);
1260 +@@ -2357,7 +2444,8 @@ static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
1261 +
1262 + /*
1263 + * SDM says that segment selectors are loaded before segment
1264 +- * descriptors
1265 ++ * descriptors. This is important because CPL checks will
1266 ++ * use CS.RPL.
1267 + */
1268 + set_segment_selector(ctxt, tss->ldt_selector, VCPU_SREG_LDTR);
1269 + set_segment_selector(ctxt, tss->es, VCPU_SREG_ES);
1270 +@@ -2371,43 +2459,45 @@ static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
1271 + * If we're switching between Protected Mode and VM86, we need to make
1272 + * sure to update the mode before loading the segment descriptors so
1273 + * that the selectors are interpreted correctly.
1274 +- *
1275 +- * Need to get rflags to the vcpu struct immediately because it
1276 +- * influences the CPL which is checked at least when loading the segment
1277 +- * descriptors and when pushing an error code to the new kernel stack.
1278 +- *
1279 +- * TODO Introduce a separate ctxt->ops->set_cpl callback
1280 + */
1281 +- if (ctxt->eflags & X86_EFLAGS_VM)
1282 ++ if (ctxt->eflags & X86_EFLAGS_VM) {
1283 + ctxt->mode = X86EMUL_MODE_VM86;
1284 +- else
1285 ++ cpl = 3;
1286 ++ } else {
1287 + ctxt->mode = X86EMUL_MODE_PROT32;
1288 +-
1289 +- ctxt->ops->set_rflags(ctxt, ctxt->eflags);
1290 ++ cpl = tss->cs & 3;
1291 ++ }
1292 +
1293 + /*
1294 + * Now load segment descriptors. If a fault happens at this stage
1295 + * it is handled in the context of the new task
1296 + */
1297 +- ret = load_segment_descriptor(ctxt, tss->ldt_selector, VCPU_SREG_LDTR);
1298 ++ ret = __load_segment_descriptor(ctxt, tss->ldt_selector, VCPU_SREG_LDTR,
1299 ++ cpl, true, NULL);
1300 + if (ret != X86EMUL_CONTINUE)
1301 + return ret;
1302 +- ret = load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES);
1303 ++ ret = __load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES, cpl,
1304 ++ true, NULL);
1305 + if (ret != X86EMUL_CONTINUE)
1306 + return ret;
1307 +- ret = load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS);
1308 ++ ret = __load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS, cpl,
1309 ++ true, NULL);
1310 + if (ret != X86EMUL_CONTINUE)
1311 + return ret;
1312 +- ret = load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS);
1313 ++ ret = __load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS, cpl,
1314 ++ true, NULL);
1315 + if (ret != X86EMUL_CONTINUE)
1316 + return ret;
1317 +- ret = load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS);
1318 ++ ret = __load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS, cpl,
1319 ++ true, NULL);
1320 + if (ret != X86EMUL_CONTINUE)
1321 + return ret;
1322 +- ret = load_segment_descriptor(ctxt, tss->fs, VCPU_SREG_FS);
1323 ++ ret = __load_segment_descriptor(ctxt, tss->fs, VCPU_SREG_FS, cpl,
1324 ++ true, NULL);
1325 + if (ret != X86EMUL_CONTINUE)
1326 + return ret;
1327 +- ret = load_segment_descriptor(ctxt, tss->gs, VCPU_SREG_GS);
1328 ++ ret = __load_segment_descriptor(ctxt, tss->gs, VCPU_SREG_GS, cpl,
1329 ++ true, NULL);
1330 + if (ret != X86EMUL_CONTINUE)
1331 + return ret;
1332 +
1333 +@@ -2629,10 +2719,13 @@ static int em_das(struct x86_emulate_ctxt *ctxt)
1334 +
1335 + static int em_call(struct x86_emulate_ctxt *ctxt)
1336 + {
1337 ++ int rc;
1338 + long rel = ctxt->src.val;
1339 +
1340 + ctxt->src.val = (unsigned long)ctxt->_eip;
1341 +- jmp_rel(ctxt, rel);
1342 ++ rc = jmp_rel(ctxt, rel);
1343 ++ if (rc != X86EMUL_CONTINUE)
1344 ++ return rc;
1345 + return em_push(ctxt);
1346 + }
1347 +
1348 +@@ -2641,34 +2734,50 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt)
1349 + u16 sel, old_cs;
1350 + ulong old_eip;
1351 + int rc;
1352 ++ struct desc_struct old_desc, new_desc;
1353 ++ const struct x86_emulate_ops *ops = ctxt->ops;
1354 ++ int cpl = ctxt->ops->cpl(ctxt);
1355 +
1356 +- old_cs = get_segment_selector(ctxt, VCPU_SREG_CS);
1357 + old_eip = ctxt->_eip;
1358 ++ ops->get_segment(ctxt, &old_cs, &old_desc, NULL, VCPU_SREG_CS);
1359 +
1360 + memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2);
1361 +- if (load_segment_descriptor(ctxt, sel, VCPU_SREG_CS))
1362 ++ rc = __load_segment_descriptor(ctxt, sel, VCPU_SREG_CS, cpl, false,
1363 ++ &new_desc);
1364 ++ if (rc != X86EMUL_CONTINUE)
1365 + return X86EMUL_CONTINUE;
1366 +
1367 +- ctxt->_eip = 0;
1368 +- memcpy(&ctxt->_eip, ctxt->src.valptr, ctxt->op_bytes);
1369 ++ rc = assign_eip_far(ctxt, ctxt->src.val, new_desc.l);
1370 ++ if (rc != X86EMUL_CONTINUE)
1371 ++ goto fail;
1372 +
1373 + ctxt->src.val = old_cs;
1374 + rc = em_push(ctxt);
1375 + if (rc != X86EMUL_CONTINUE)
1376 +- return rc;
1377 ++ goto fail;
1378 +
1379 + ctxt->src.val = old_eip;
1380 +- return em_push(ctxt);
1381 ++ rc = em_push(ctxt);
1382 ++ /* If we failed, we tainted the memory, but the very least we should
1383 ++ restore cs */
1384 ++ if (rc != X86EMUL_CONTINUE)
1385 ++ goto fail;
1386 ++ return rc;
1387 ++fail:
1388 ++ ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS);
1389 ++ return rc;
1390 ++
1391 + }
1392 +
1393 + static int em_ret_near_imm(struct x86_emulate_ctxt *ctxt)
1394 + {
1395 + int rc;
1396 ++ unsigned long eip;
1397 +
1398 +- ctxt->dst.type = OP_REG;
1399 +- ctxt->dst.addr.reg = &ctxt->_eip;
1400 +- ctxt->dst.bytes = ctxt->op_bytes;
1401 +- rc = emulate_pop(ctxt, &ctxt->dst.val, ctxt->op_bytes);
1402 ++ rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
1403 ++ if (rc != X86EMUL_CONTINUE)
1404 ++ return rc;
1405 ++ rc = assign_eip_near(ctxt, eip);
1406 + if (rc != X86EMUL_CONTINUE)
1407 + return rc;
1408 + register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], ctxt->src.val);
1409 +@@ -2977,20 +3086,24 @@ static int em_lmsw(struct x86_emulate_ctxt *ctxt)
1410 +
1411 + static int em_loop(struct x86_emulate_ctxt *ctxt)
1412 + {
1413 ++ int rc = X86EMUL_CONTINUE;
1414 ++
1415 + register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RCX], -1);
1416 + if ((address_mask(ctxt, ctxt->regs[VCPU_REGS_RCX]) != 0) &&
1417 + (ctxt->b == 0xe2 || test_cc(ctxt->b ^ 0x5, ctxt->eflags)))
1418 +- jmp_rel(ctxt, ctxt->src.val);
1419 ++ rc = jmp_rel(ctxt, ctxt->src.val);
1420 +
1421 +- return X86EMUL_CONTINUE;
1422 ++ return rc;
1423 + }
1424 +
1425 + static int em_jcxz(struct x86_emulate_ctxt *ctxt)
1426 + {
1427 ++ int rc = X86EMUL_CONTINUE;
1428 ++
1429 + if (address_mask(ctxt, ctxt->regs[VCPU_REGS_RCX]) == 0)
1430 +- jmp_rel(ctxt, ctxt->src.val);
1431 ++ rc = jmp_rel(ctxt, ctxt->src.val);
1432 +
1433 +- return X86EMUL_CONTINUE;
1434 ++ return rc;
1435 + }
1436 +
1437 + static int em_in(struct x86_emulate_ctxt *ctxt)
1438 +@@ -4168,7 +4281,7 @@ special_insn:
1439 + break;
1440 + case 0x70 ... 0x7f: /* jcc (short) */
1441 + if (test_cc(ctxt->b, ctxt->eflags))
1442 +- jmp_rel(ctxt, ctxt->src.val);
1443 ++ rc = jmp_rel(ctxt, ctxt->src.val);
1444 + break;
1445 + case 0x8d: /* lea r16/r32, m */
1446 + ctxt->dst.val = ctxt->src.addr.mem.ea;
1447 +@@ -4207,7 +4320,7 @@ special_insn:
1448 + break;
1449 + case 0xe9: /* jmp rel */
1450 + case 0xeb: /* jmp rel short */
1451 +- jmp_rel(ctxt, ctxt->src.val);
1452 ++ rc = jmp_rel(ctxt, ctxt->src.val);
1453 + ctxt->dst.type = OP_NONE; /* Disable writeback. */
1454 + break;
1455 + case 0xf4: /* hlt */
1456 +@@ -4310,7 +4423,7 @@ twobyte_insn:
1457 + break;
1458 + case 0x80 ... 0x8f: /* jnz rel, etc*/
1459 + if (test_cc(ctxt->b, ctxt->eflags))
1460 +- jmp_rel(ctxt, ctxt->src.val);
1461 ++ rc = jmp_rel(ctxt, ctxt->src.val);
1462 + break;
1463 + case 0x90 ... 0x9f: /* setcc r/m8 */
1464 + ctxt->dst.val = test_cc(ctxt->b, ctxt->eflags);
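
The recurring change throughout this file is that jmp_rel() and the new
assign_eip_near()/assign_eip_far() helpers can now fail with #GP on a
noncanonical or out-of-range target, so callers must propagate the return
code. The shape of the conversion, reduced to a sketch:

   /* Before: the target was written into _eip unchecked. */
   ctxt->_eip = ctxt->src.val;

   /* After: the assignment is validated and any fault propagated. */
   rc = assign_eip_near(ctxt, ctxt->src.val);
   if (rc != X86EMUL_CONTINUE)
           return rc;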
1465 +diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
1466 +index d68f99df690c..db336f9f2c8c 100644
1467 +--- a/arch/x86/kvm/i8254.c
1468 ++++ b/arch/x86/kvm/i8254.c
1469 +@@ -263,8 +263,10 @@ void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu)
1470 + return;
1471 +
1472 + timer = &pit->pit_state.pit_timer.timer;
1473 ++ mutex_lock(&pit->pit_state.lock);
1474 + if (hrtimer_cancel(timer))
1475 + hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
1476 ++ mutex_unlock(&pit->pit_state.lock);
1477 + }
1478 +
1479 + static void destroy_pit_timer(struct kvm_pit *pit)
1480 +diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
1481 +index fd6dec6ffa47..84f4bca0ca2c 100644
1482 +--- a/arch/x86/kvm/mmu.c
1483 ++++ b/arch/x86/kvm/mmu.c
1484 +@@ -2842,7 +2842,7 @@ static void mmu_sync_roots(struct kvm_vcpu *vcpu)
1485 + if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
1486 + return;
1487 +
1488 +- vcpu_clear_mmio_info(vcpu, ~0ul);
1489 ++ vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY);
1490 + kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);
1491 + if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) {
1492 + hpa_t root = vcpu->arch.mmu.root_hpa;
1493 +diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
1494 +index b567285efceb..86c74c0cd876 100644
1495 +--- a/arch/x86/kvm/svm.c
1496 ++++ b/arch/x86/kvm/svm.c
1497 +@@ -3212,7 +3212,7 @@ static int wrmsr_interception(struct vcpu_svm *svm)
1498 +
1499 +
1500 + svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
1501 +- if (svm_set_msr(&svm->vcpu, ecx, data)) {
1502 ++ if (kvm_set_msr(&svm->vcpu, ecx, data)) {
1503 + trace_kvm_msr_write_ex(ecx, data);
1504 + kvm_inject_gp(&svm->vcpu, 0);
1505 + } else {
1506 +@@ -3494,9 +3494,9 @@ static int handle_exit(struct kvm_vcpu *vcpu)
1507 +
1508 + if (exit_code >= ARRAY_SIZE(svm_exit_handlers)
1509 + || !svm_exit_handlers[exit_code]) {
1510 +- kvm_run->exit_reason = KVM_EXIT_UNKNOWN;
1511 +- kvm_run->hw.hardware_exit_reason = exit_code;
1512 +- return 0;
1513 ++ WARN_ONCE(1, "vmx: unexpected exit reason 0x%x\n", exit_code);
1514 ++ kvm_queue_exception(vcpu, UD_VECTOR);
1515 ++ return 1;
1516 + }
1517 +
1518 + return svm_exit_handlers[exit_code](svm);
1519 +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
1520 +index 617b00b4857b..2eb4e5af8816 100644
1521 +--- a/arch/x86/kvm/vmx.c
1522 ++++ b/arch/x86/kvm/vmx.c
1523 +@@ -388,6 +388,7 @@ struct vcpu_vmx {
1524 + u16 fs_sel, gs_sel, ldt_sel;
1525 + int gs_ldt_reload_needed;
1526 + int fs_reload_needed;
1527 ++ unsigned long vmcs_host_cr4; /* May not match real cr4 */
1528 + } host_state;
1529 + struct {
1530 + int vm86_active;
1531 +@@ -2209,12 +2210,15 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
1532 + break;
1533 + msr = find_msr_entry(vmx, msr_index);
1534 + if (msr) {
1535 ++ u64 old_msr_data = msr->data;
1536 + msr->data = data;
1537 + if (msr - vmx->guest_msrs < vmx->save_nmsrs) {
1538 + preempt_disable();
1539 +- kvm_set_shared_msr(msr->index, msr->data,
1540 +- msr->mask);
1541 ++ ret = kvm_set_shared_msr(msr->index, msr->data,
1542 ++ msr->mask);
1543 + preempt_enable();
1544 ++ if (ret)
1545 ++ msr->data = old_msr_data;
1546 + }
1547 + break;
1548 + }
1549 +@@ -3622,16 +3626,21 @@ static void vmx_disable_intercept_for_msr(u32 msr, bool longmode_only)
1550 + * Note that host-state that does change is set elsewhere. E.g., host-state
1551 + * that is set differently for each CPU is set in vmx_vcpu_load(), not here.
1552 + */
1553 +-static void vmx_set_constant_host_state(void)
1554 ++static void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
1555 + {
1556 + u32 low32, high32;
1557 + unsigned long tmpl;
1558 + struct desc_ptr dt;
1559 ++ unsigned long cr4;
1560 +
1561 + vmcs_writel(HOST_CR0, read_cr0() | X86_CR0_TS); /* 22.2.3 */
1562 +- vmcs_writel(HOST_CR4, read_cr4()); /* 22.2.3, 22.2.5 */
1563 + vmcs_writel(HOST_CR3, read_cr3()); /* 22.2.3 FIXME: shadow tables */
1564 +
1565 ++ /* Save the most likely value for this task's CR4 in the VMCS. */
1566 ++ cr4 = read_cr4();
1567 ++ vmcs_writel(HOST_CR4, cr4); /* 22.2.3, 22.2.5 */
1568 ++ vmx->host_state.vmcs_host_cr4 = cr4;
1569 ++
1570 + vmcs_write16(HOST_CS_SELECTOR, __KERNEL_CS); /* 22.2.4 */
1571 + vmcs_write16(HOST_DS_SELECTOR, __KERNEL_DS); /* 22.2.4 */
1572 + vmcs_write16(HOST_ES_SELECTOR, __KERNEL_DS); /* 22.2.4 */
1573 +@@ -3753,7 +3762,7 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
1574 +
1575 + vmcs_write16(HOST_FS_SELECTOR, 0); /* 22.2.4 */
1576 + vmcs_write16(HOST_GS_SELECTOR, 0); /* 22.2.4 */
1577 +- vmx_set_constant_host_state();
1578 ++ vmx_set_constant_host_state(vmx);
1579 + #ifdef CONFIG_X86_64
1580 + rdmsrl(MSR_FS_BASE, a);
1581 + vmcs_writel(HOST_FS_BASE, a); /* 22.2.4 */
1582 +@@ -4539,7 +4548,7 @@ static int handle_wrmsr(struct kvm_vcpu *vcpu)
1583 + u64 data = (vcpu->arch.regs[VCPU_REGS_RAX] & -1u)
1584 + | ((u64)(vcpu->arch.regs[VCPU_REGS_RDX] & -1u) << 32);
1585 +
1586 +- if (vmx_set_msr(vcpu, ecx, data) != 0) {
1587 ++ if (kvm_set_msr(vcpu, ecx, data) != 0) {
1588 + trace_kvm_msr_write_ex(ecx, data);
1589 + kvm_inject_gp(vcpu, 0);
1590 + return 1;
1591 +@@ -5557,6 +5566,18 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
1592 + return 1;
1593 + }
1594 +
1595 ++static int handle_invept(struct kvm_vcpu *vcpu)
1596 ++{
1597 ++ kvm_queue_exception(vcpu, UD_VECTOR);
1598 ++ return 1;
1599 ++}
1600 ++
1601 ++static int handle_invvpid(struct kvm_vcpu *vcpu)
1602 ++{
1603 ++ kvm_queue_exception(vcpu, UD_VECTOR);
1604 ++ return 1;
1605 ++}
1606 ++
1607 + /*
1608 + * The exit handlers return 1 if the exit was handled fully and guest execution
1609 + * may resume. Otherwise they set the kvm_run parameter to indicate what needs
1610 +@@ -5599,6 +5620,8 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
1611 + [EXIT_REASON_PAUSE_INSTRUCTION] = handle_pause,
1612 + [EXIT_REASON_MWAIT_INSTRUCTION] = handle_invalid_op,
1613 + [EXIT_REASON_MONITOR_INSTRUCTION] = handle_invalid_op,
1614 ++ [EXIT_REASON_INVEPT] = handle_invept,
1615 ++ [EXIT_REASON_INVVPID] = handle_invvpid,
1616 + };
1617 +
1618 + static const int kvm_vmx_max_exit_handlers =
1619 +@@ -5783,6 +5806,7 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
1620 + case EXIT_REASON_VMPTRST: case EXIT_REASON_VMREAD:
1621 + case EXIT_REASON_VMRESUME: case EXIT_REASON_VMWRITE:
1622 + case EXIT_REASON_VMOFF: case EXIT_REASON_VMON:
1623 ++ case EXIT_REASON_INVEPT: case EXIT_REASON_INVVPID:
1624 + /*
1625 + * VMX instructions trap unconditionally. This allows L1 to
1626 + * emulate them for its L2 guest, i.e., allows 3-level nesting!
1627 +@@ -5912,10 +5936,10 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
1628 + && kvm_vmx_exit_handlers[exit_reason])
1629 + return kvm_vmx_exit_handlers[exit_reason](vcpu);
1630 + else {
1631 +- vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
1632 +- vcpu->run->hw.hardware_exit_reason = exit_reason;
1633 ++ WARN_ONCE(1, "vmx: unexpected exit reason 0x%x\n", exit_reason);
1634 ++ kvm_queue_exception(vcpu, UD_VECTOR);
1635 ++ return 1;
1636 + }
1637 +- return 0;
1638 + }
1639 +
1640 + static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
1641 +@@ -6101,6 +6125,7 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
1642 + static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
1643 + {
1644 + struct vcpu_vmx *vmx = to_vmx(vcpu);
1645 ++ unsigned long cr4;
1646 +
1647 + if (is_guest_mode(vcpu) && !vmx->nested.nested_run_pending) {
1648 + struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
1649 +@@ -6131,6 +6156,12 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
1650 + if (test_bit(VCPU_REGS_RIP, (unsigned long *)&vcpu->arch.regs_dirty))
1651 + vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]);
1652 +
1653 ++ cr4 = read_cr4();
1654 ++ if (unlikely(cr4 != vmx->host_state.vmcs_host_cr4)) {
1655 ++ vmcs_writel(HOST_CR4, cr4);
1656 ++ vmx->host_state.vmcs_host_cr4 = cr4;
1657 ++ }
1658 ++
1659 + /* When single-stepping over STI and MOV SS, we must clear the
1660 + * corresponding interruptibility bits in the guest state. Otherwise
1661 + * vmentry fails as it then expects bit 14 (BS) in pending debug
1662 +@@ -6589,7 +6620,7 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
1663 + * Other fields are different per CPU, and will be set later when
1664 + * vmx_vcpu_load() is called, and when vmx_save_host_state() is called.
1665 + */
1666 +- vmx_set_constant_host_state();
1667 ++ vmx_set_constant_host_state(vmx);
1668 +
1669 + /*
1670 + * HOST_RSP is normally set correctly in vmx_vcpu_run() just before
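The two vmx.c changes above cooperate: vmx_set_constant_host_state() caches the CR4 value it writes into the VMCS, and vmx_vcpu_run() rewrites HOST_CR4 only when the live register has drifted from that cache, keeping the vmwrite off the hot path. Host CR4 genuinely can change between entries; a hypothetical userspace trigger (sketch, assuming PR_SET_TSC toggles CR4.TSD as it does on mainline x86):

    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
            /* Flips CR4.TSD for this task; without the fix above, the
             * next VM exit reloads the stale CR4 captured at VMCS setup
             * time, silently undoing this change on the host. */
            return prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0);
    }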
1671 +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
1672 +index 4b1be290f6e3..318a2454366f 100644
1673 +--- a/arch/x86/kvm/x86.c
1674 ++++ b/arch/x86/kvm/x86.c
1675 +@@ -220,19 +220,24 @@ static void kvm_shared_msr_cpu_online(void)
1676 + shared_msr_update(i, shared_msrs_global.msrs[i]);
1677 + }
1678 +
1679 +-void kvm_set_shared_msr(unsigned slot, u64 value, u64 mask)
1680 ++int kvm_set_shared_msr(unsigned slot, u64 value, u64 mask)
1681 + {
1682 + struct kvm_shared_msrs *smsr = &__get_cpu_var(shared_msrs);
1683 ++ int err;
1684 +
1685 + if (((value ^ smsr->values[slot].curr) & mask) == 0)
1686 +- return;
1687 ++ return 0;
1688 + smsr->values[slot].curr = value;
1689 +- wrmsrl(shared_msrs_global.msrs[slot], value);
1690 ++ err = checking_wrmsrl(shared_msrs_global.msrs[slot], value);
1691 ++ if (err)
1692 ++ return 1;
1693 ++
1694 + if (!smsr->registered) {
1695 + smsr->urn.on_user_return = kvm_on_user_return;
1696 + user_return_notifier_register(&smsr->urn);
1697 + smsr->registered = true;
1698 + }
1699 ++ return 0;
1700 + }
1701 + EXPORT_SYMBOL_GPL(kvm_set_shared_msr);
1702 +
1703 +@@ -858,7 +863,6 @@ void kvm_enable_efer_bits(u64 mask)
1704 + }
1705 + EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
1706 +
1707 +-
1708 + /*
1709 + * Writes msr value into the appropriate "register".
1710 + * Returns 0 on success, non-0 otherwise.
1711 +@@ -866,8 +870,34 @@ EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
1712 + */
1713 + int kvm_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
1714 + {
1715 ++ switch (msr_index) {
1716 ++ case MSR_FS_BASE:
1717 ++ case MSR_GS_BASE:
1718 ++ case MSR_KERNEL_GS_BASE:
1719 ++ case MSR_CSTAR:
1720 ++ case MSR_LSTAR:
1721 ++ if (is_noncanonical_address(data))
1722 ++ return 1;
1723 ++ break;
1724 ++ case MSR_IA32_SYSENTER_EIP:
1725 ++ case MSR_IA32_SYSENTER_ESP:
1726 ++ /*
1727 ++ * IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if
1728 ++ * a non-canonical address is written on Intel but not on
1729 ++ * AMD (which ignores the top 32-bits, because it does
1730 ++ * not implement 64-bit SYSENTER).
1731 ++ *
1732 ++ * 64-bit code should hence be able to write a non-canonical
1733 ++ * value on AMD. Making the address canonical ensures that
1734 ++ * vmentry does not fail on Intel after writing a non-canonical
1735 ++ * value, and that something deterministic happens if the guest
1736 ++ * invokes 64-bit SYSENTER.
1737 ++ */
1738 ++ data = get_canonical(data);
1739 ++ }
1740 + return kvm_x86_ops->set_msr(vcpu, msr_index, data);
1741 + }
1742 ++EXPORT_SYMBOL_GPL(kvm_set_msr);
1743 +
1744 + /*
1745 + * Adapt set_msr() to msr_io()'s calling convention
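The canonical-address helpers used in this hunk, get_canonical() and is_noncanonical_address(), are added elsewhere in the patch. As an assumption for context, their likely definitions simply sign-extend bit 47 on 64-bit hosts:

    static inline u64 get_canonical(u64 la)
    {
            /* replicate bit 47 into bits 48..63 */
            return ((int64_t)la << 16) >> 16;
    }

    static inline bool is_noncanonical_address(u64 la)
    {
    #ifdef CONFIG_X86_64
            return get_canonical(la) != la;
    #else
            return false;
    #endif
    }

For example, 0x0000800000000000 (bit 47 set, bits 48-63 clear) is non-canonical, so a guest wrmsr of that value to MSR_LSTAR now injects #GP instead of leaving a later vmentry to fail.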
1746 +diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
1747 +index cb80c293cdd8..1ce5611fdb09 100644
1748 +--- a/arch/x86/kvm/x86.h
1749 ++++ b/arch/x86/kvm/x86.h
1750 +@@ -78,15 +78,23 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
1751 + vcpu->arch.mmio_gva = gva & PAGE_MASK;
1752 + vcpu->arch.access = access;
1753 + vcpu->arch.mmio_gfn = gfn;
1754 ++ vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
1755 ++}
1756 ++
1757 ++static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
1758 ++{
1759 ++ return vcpu->arch.mmio_gen == kvm_memslots(vcpu->kvm)->generation;
1760 + }
1761 +
1762 + /*
1763 +- * Clear the mmio cache info for the given gva,
1764 +- * specially, if gva is ~0ul, we clear all mmio cache info.
1765 ++ * Clear the mmio cache info for the given gva. If gva is MMIO_GVA_ANY, we
1766 ++ * clear all mmio cache info.
1767 + */
1768 ++#define MMIO_GVA_ANY (~(gva_t)0)
1769 ++
1770 + static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
1771 + {
1772 +- if (gva != (~0ul) && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
1773 ++ if (gva != MMIO_GVA_ANY && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
1774 + return;
1775 +
1776 + vcpu->arch.mmio_gva = 0;
1777 +@@ -94,7 +102,8 @@ static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
1778 +
1779 + static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva)
1780 + {
1781 +- if (vcpu->arch.mmio_gva && vcpu->arch.mmio_gva == (gva & PAGE_MASK))
1782 ++ if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gva &&
1783 ++ vcpu->arch.mmio_gva == (gva & PAGE_MASK))
1784 + return true;
1785 +
1786 + return false;
1787 +@@ -102,7 +111,8 @@ static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva)
1788 +
1789 + static inline bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
1790 + {
1791 +- if (vcpu->arch.mmio_gfn && vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT)
1792 ++ if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gfn &&
1793 ++ vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT)
1794 + return true;
1795 +
1796 + return false;
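The new mmio_gen field makes the cached gva/gfn translation self-invalidating: a hit is honored only while the memslot generation it was recorded under is still current. The contract assumed here, roughly as it appears in KVM's memslot update path:

    /* somewhere in the memslot update path, roughly: */
    slots->generation++;    /* every recorded mmio_gen now mismatches */

so a stale MMIO fast-path entry cannot survive a memory-map change even though nothing flushes the cache explicitly.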
1797 +diff --git a/arch/xtensa/include/asm/unistd.h b/arch/xtensa/include/asm/unistd.h
1798 +index 798ee6d285a1..7ab1f52f1fdd 100644
1799 +--- a/arch/xtensa/include/asm/unistd.h
1800 ++++ b/arch/xtensa/include/asm/unistd.h
1801 +@@ -394,7 +394,8 @@ __SYSCALL(174, sys_chroot, 1)
1802 + #define __NR_pivot_root 175
1803 + __SYSCALL(175, sys_pivot_root, 2)
1804 + #define __NR_umount 176
1805 +-__SYSCALL(176, sys_umount, 2)
1806 ++__SYSCALL(176, sys_oldumount, 1)
1807 ++#define __ARCH_WANT_SYS_OLDUMOUNT
1808 + #define __NR_swapoff 177
1809 + __SYSCALL(177, sys_swapoff, 1)
1810 + #define __NR_sync 178
1811 +diff --git a/block/blk-settings.c b/block/blk-settings.c
1812 +index b74cc58bc038..14f1d3083cae 100644
1813 +--- a/block/blk-settings.c
1814 ++++ b/block/blk-settings.c
1815 +@@ -538,7 +538,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
1816 + bottom = max(b->physical_block_size, b->io_min) + alignment;
1817 +
1818 + /* Verify that top and bottom intervals line up */
1819 +- if (max(top, bottom) & (min(top, bottom) - 1)) {
1820 ++ if (max(top, bottom) % min(top, bottom)) {
1821 + t->misaligned = 1;
1822 + ret = -1;
1823 + }
1824 +@@ -579,7 +579,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
1825 +
1826 + /* Find lowest common alignment_offset */
1827 + t->alignment_offset = lcm(t->alignment_offset, alignment)
1828 +- & (max(t->physical_block_size, t->io_min) - 1);
1829 ++ % max(t->physical_block_size, t->io_min);
1830 +
1831 + /* Verify that new alignment_offset is on a logical block boundary */
1832 + if (t->alignment_offset & (t->logical_block_size - 1)) {
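Both blk-settings.c hunks swap a bitmask test for a modulo because the mask trick only detects misalignment when the smaller interval is a power of two. A toy illustration with hypothetical values (plain userspace C, not kernel code):

    #include <stdio.h>

    int main(void)
    {
            /* e.g. an io_min of three 512-byte sectors */
            unsigned int top = 4096, bottom = 1536;

            printf("%u\n", top & (bottom - 1)); /* 0    -> "aligned"  */
            printf("%u\n", top % bottom);       /* 1024 -> misaligned */
            return 0;
    }

For power-of-two values the two expressions agree, which is why the stale check went unnoticed on most hardware.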
1833 +diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
1834 +index 9a87daa6f4fb..f1c00c9aec1a 100644
1835 +--- a/block/scsi_ioctl.c
1836 ++++ b/block/scsi_ioctl.c
1837 +@@ -505,7 +505,7 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
1838 +
1839 + if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_WAIT)) {
1840 + err = DRIVER_ERROR << 24;
1841 +- goto out;
1842 ++ goto error;
1843 + }
1844 +
1845 + memset(sense, 0, sizeof(sense));
1846 +@@ -515,7 +515,6 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
1847 +
1848 + blk_execute_rq(q, disk, rq, 0);
1849 +
1850 +-out:
1851 + err = rq->errors & 0xff; /* only 8 bit SCSI status */
1852 + if (err) {
1853 + if (rq->sense_len && rq->sense) {
1854 +diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
1855 +index d366a75e6705..ca9a287b5864 100644
1856 +--- a/drivers/ata/ahci.c
1857 ++++ b/drivers/ata/ahci.c
1858 +@@ -313,6 +313,11 @@ static const struct pci_device_id ahci_pci_tbl[] = {
1859 + { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */
1860 + { PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
1861 + { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */
1862 ++ { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H AHCI */
1863 ++ { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H RAID */
1864 ++ { PCI_VDEVICE(INTEL, 0xa105), board_ahci }, /* Sunrise Point-H RAID */
1865 ++ { PCI_VDEVICE(INTEL, 0xa107), board_ahci }, /* Sunrise Point-H RAID */
1866 ++ { PCI_VDEVICE(INTEL, 0xa10f), board_ahci }, /* Sunrise Point-H RAID */
1867 +
1868 + /* JMicron 360/1/3/5/6, match class to avoid IDE function */
1869 + { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
1870 +diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
1871 +index d8af325a6bda..3723e5ec2b4d 100644
1872 +--- a/drivers/ata/libata-sff.c
1873 ++++ b/drivers/ata/libata-sff.c
1874 +@@ -2008,13 +2008,15 @@ static int ata_bus_softreset(struct ata_port *ap, unsigned int devmask,
1875 +
1876 + DPRINTK("ata%u: bus reset via SRST\n", ap->print_id);
1877 +
1878 +- /* software reset. causes dev0 to be selected */
1879 +- iowrite8(ap->ctl, ioaddr->ctl_addr);
1880 +- udelay(20); /* FIXME: flush */
1881 +- iowrite8(ap->ctl | ATA_SRST, ioaddr->ctl_addr);
1882 +- udelay(20); /* FIXME: flush */
1883 +- iowrite8(ap->ctl, ioaddr->ctl_addr);
1884 +- ap->last_ctl = ap->ctl;
1885 ++ if (ap->ioaddr.ctl_addr) {
1886 ++ /* software reset. causes dev0 to be selected */
1887 ++ iowrite8(ap->ctl, ioaddr->ctl_addr);
1888 ++ udelay(20); /* FIXME: flush */
1889 ++ iowrite8(ap->ctl | ATA_SRST, ioaddr->ctl_addr);
1890 ++ udelay(20); /* FIXME: flush */
1891 ++ iowrite8(ap->ctl, ioaddr->ctl_addr);
1892 ++ ap->last_ctl = ap->ctl;
1893 ++ }
1894 +
1895 + /* wait the port to become ready */
1896 + return ata_sff_wait_after_reset(&ap->link, devmask, deadline);
1897 +@@ -2215,10 +2217,6 @@ void ata_sff_error_handler(struct ata_port *ap)
1898 +
1899 + spin_unlock_irqrestore(ap->lock, flags);
1900 +
1901 +- /* ignore ata_sff_softreset if ctl isn't accessible */
1902 +- if (softreset == ata_sff_softreset && !ap->ioaddr.ctl_addr)
1903 +- softreset = NULL;
1904 +-
1905 + /* ignore built-in hardresets if SCR access is not available */
1906 + if ((hardreset == sata_std_hardreset ||
1907 + hardreset == sata_sff_hardreset) && !sata_scr_valid(&ap->link))
1908 +diff --git a/drivers/ata/pata_serverworks.c b/drivers/ata/pata_serverworks.c
1909 +index 71eaf385e970..5929dde07c91 100644
1910 +--- a/drivers/ata/pata_serverworks.c
1911 ++++ b/drivers/ata/pata_serverworks.c
1912 +@@ -252,12 +252,18 @@ static void serverworks_set_dmamode(struct ata_port *ap, struct ata_device *adev
1913 + pci_write_config_byte(pdev, 0x54, ultra_cfg);
1914 + }
1915 +
1916 +-static struct scsi_host_template serverworks_sht = {
1917 ++static struct scsi_host_template serverworks_osb4_sht = {
1918 ++ ATA_BMDMA_SHT(DRV_NAME),
1919 ++ .sg_tablesize = LIBATA_DUMB_MAX_PRD,
1920 ++};
1921 ++
1922 ++static struct scsi_host_template serverworks_csb_sht = {
1923 + ATA_BMDMA_SHT(DRV_NAME),
1924 + };
1925 +
1926 + static struct ata_port_operations serverworks_osb4_port_ops = {
1927 + .inherits = &ata_bmdma_port_ops,
1928 ++ .qc_prep = ata_bmdma_dumb_qc_prep,
1929 + .cable_detect = serverworks_cable_detect,
1930 + .mode_filter = serverworks_osb4_filter,
1931 + .set_piomode = serverworks_set_piomode,
1932 +@@ -266,6 +272,7 @@ static struct ata_port_operations serverworks_osb4_port_ops = {
1933 +
1934 + static struct ata_port_operations serverworks_csb_port_ops = {
1935 + .inherits = &serverworks_osb4_port_ops,
1936 ++ .qc_prep = ata_bmdma_qc_prep,
1937 + .mode_filter = serverworks_csb_filter,
1938 + };
1939 +
1940 +@@ -405,6 +412,7 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id
1941 + }
1942 + };
1943 + const struct ata_port_info *ppi[] = { &info[id->driver_data], NULL };
1944 ++ struct scsi_host_template *sht = &serverworks_csb_sht;
1945 + int rc;
1946 +
1947 + rc = pcim_enable_device(pdev);
1948 +@@ -418,6 +426,7 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id
1949 + /* Select non UDMA capable OSB4 if we can't do fixups */
1950 + if (rc < 0)
1951 + ppi[0] = &info[1];
1952 ++ sht = &serverworks_osb4_sht;
1953 + }
1954 + /* setup CSB5/CSB6 : South Bridge and IDE option RAID */
1955 + else if ((pdev->device == PCI_DEVICE_ID_SERVERWORKS_CSB5IDE) ||
1956 +@@ -434,7 +443,7 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id
1957 + ppi[1] = &ata_dummy_port_info;
1958 + }
1959 +
1960 +- return ata_pci_bmdma_init_one(pdev, ppi, &serverworks_sht, NULL, 0);
1961 ++ return ata_pci_bmdma_init_one(pdev, ppi, sht, NULL, 0);
1962 + }
1963 +
1964 + #ifdef CONFIG_PM
1965 +diff --git a/drivers/base/core.c b/drivers/base/core.c
1966 +index e28ce9898af4..32e86d6f141c 100644
1967 +--- a/drivers/base/core.c
1968 ++++ b/drivers/base/core.c
1969 +@@ -718,12 +718,12 @@ class_dir_create_and_add(struct class *class, struct kobject *parent_kobj)
1970 + return &dir->kobj;
1971 + }
1972 +
1973 ++static DEFINE_MUTEX(gdp_mutex);
1974 +
1975 + static struct kobject *get_device_parent(struct device *dev,
1976 + struct device *parent)
1977 + {
1978 + if (dev->class) {
1979 +- static DEFINE_MUTEX(gdp_mutex);
1980 + struct kobject *kobj = NULL;
1981 + struct kobject *parent_kobj;
1982 + struct kobject *k;
1983 +@@ -787,7 +787,9 @@ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
1984 + glue_dir->kset != &dev->class->p->glue_dirs)
1985 + return;
1986 +
1987 ++ mutex_lock(&gdp_mutex);
1988 + kobject_put(glue_dir);
1989 ++ mutex_unlock(&gdp_mutex);
1990 + }
1991 +
1992 + static void cleanup_device_parent(struct device *dev)
1993 +diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
1994 +index 5401814c874d..b7a4fe586f8a 100644
1995 +--- a/drivers/base/firmware_class.c
1996 ++++ b/drivers/base/firmware_class.c
1997 +@@ -588,6 +588,9 @@ request_firmware(const struct firmware **firmware_p, const char *name,
1998 + struct firmware_priv *fw_priv;
1999 + int ret;
2000 +
2001 ++ if (!name || name[0] == '\0')
2002 ++ return -EINVAL;
2003 ++
2004 + fw_priv = _request_firmware_prepare(firmware_p, name, device, true,
2005 + false);
2006 + if (IS_ERR_OR_NULL(fw_priv))
2007 +diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
2008 +index 8ab1eab90be7..1db12895110a 100644
2009 +--- a/drivers/base/regmap/regmap-debugfs.c
2010 ++++ b/drivers/base/regmap/regmap-debugfs.c
2011 +@@ -244,7 +244,12 @@ static const struct file_operations regmap_access_fops = {
2012 +
2013 + void regmap_debugfs_init(struct regmap *map)
2014 + {
2015 +- map->debugfs = debugfs_create_dir(dev_name(map->dev),
2016 ++ const char *devname = "dummy";
2017 ++
2018 ++ if (map->dev)
2019 ++ devname = dev_name(map->dev);
2020 ++
2021 ++ map->debugfs = debugfs_create_dir(devname,
2022 + regmap_debugfs_root);
2023 + if (!map->debugfs) {
2024 + dev_warn(map->dev, "Failed to create debugfs directory\n");
2025 +diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
2026 +index e5545427b46b..8e81f85b1ba0 100644
2027 +--- a/drivers/base/regmap/regmap.c
2028 ++++ b/drivers/base/regmap/regmap.c
2029 +@@ -600,6 +600,11 @@ int regmap_bulk_write(struct regmap *map, unsigned int reg, const void *val,
2030 + if (val_bytes == 1) {
2031 + wval = (void *)val;
2032 + } else {
2033 ++ if (!val_count) {
2034 ++ ret = -EINVAL;
2035 ++ goto out;
2036 ++ }
2037 ++
2038 + wval = kmemdup(val, val_count * val_bytes, GFP_KERNEL);
2039 + if (!wval) {
2040 + ret = -ENOMEM;
2041 +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
2042 +index 3cc242535012..155a61841e2b 100644
2043 +--- a/drivers/bluetooth/btusb.c
2044 ++++ b/drivers/bluetooth/btusb.c
2045 +@@ -304,6 +304,9 @@ static void btusb_intr_complete(struct urb *urb)
2046 + BT_ERR("%s corrupted event packet", hdev->name);
2047 + hdev->stat.err_rx++;
2048 + }
2049 ++ } else if (urb->status == -ENOENT) {
2050 ++ /* Avoid a failed suspend when usb_kill_urb() cancels the URB */
2051 ++ return;
2052 + }
2053 +
2054 + if (!test_bit(BTUSB_INTR_RUNNING, &data->flags))
2055 +@@ -392,6 +395,9 @@ static void btusb_bulk_complete(struct urb *urb)
2056 + BT_ERR("%s corrupted ACL packet", hdev->name);
2057 + hdev->stat.err_rx++;
2058 + }
2059 ++ } else if (urb->status == -ENOENT) {
2060 ++ /* Avoid a failed suspend when usb_kill_urb() cancels the URB */
2061 ++ return;
2062 + }
2063 +
2064 + if (!test_bit(BTUSB_BULK_RUNNING, &data->flags))
2065 +@@ -486,6 +492,9 @@ static void btusb_isoc_complete(struct urb *urb)
2066 + hdev->stat.err_rx++;
2067 + }
2068 + }
2069 ++ } else if (urb->status == -ENOENT) {
2070 ++ /* Avoid a failed suspend when usb_kill_urb() cancels the URB */
2071 ++ return;
2072 + }
2073 +
2074 + if (!test_bit(BTUSB_ISOC_RUNNING, &data->flags))
2075 +diff --git a/drivers/char/random.c b/drivers/char/random.c
2076 +index 1052fc4cae66..85172faa1569 100644
2077 +--- a/drivers/char/random.c
2078 ++++ b/drivers/char/random.c
2079 +@@ -932,8 +932,8 @@ static void extract_buf(struct entropy_store *r, __u8 *out)
2080 + * pool while mixing, and hash one final time.
2081 + */
2082 + sha_transform(hash.w, extract, workspace);
2083 +- memset(extract, 0, sizeof(extract));
2084 +- memset(workspace, 0, sizeof(workspace));
2085 ++ memzero_explicit(extract, sizeof(extract));
2086 ++ memzero_explicit(workspace, sizeof(workspace));
2087 +
2088 + /*
2089 + * In case the hash function has some recognizable output
2090 +@@ -956,7 +956,7 @@ static void extract_buf(struct entropy_store *r, __u8 *out)
2091 + }
2092 +
2093 + memcpy(out, &hash, EXTRACT_SIZE);
2094 +- memset(&hash, 0, sizeof(hash));
2095 ++ memzero_explicit(&hash, sizeof(hash));
2096 + }
2097 +
2098 + static ssize_t extract_entropy(struct entropy_store *r, void *buf,
2099 +@@ -989,7 +989,7 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
2100 + }
2101 +
2102 + /* Wipe data just returned from memory */
2103 +- memset(tmp, 0, sizeof(tmp));
2104 ++ memzero_explicit(tmp, sizeof(tmp));
2105 +
2106 + return ret;
2107 + }
2108 +@@ -1027,7 +1027,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
2109 + }
2110 +
2111 + /* Wipe data just returned from memory */
2112 +- memset(tmp, 0, sizeof(tmp));
2113 ++ memzero_explicit(tmp, sizeof(tmp));
2114 +
2115 + return ret;
2116 + }
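memzero_explicit() exists because a plain memset() of a local buffer that is about to go out of scope is a dead store the compiler may legally delete, leaving hash state and key material in memory. Its likely definition (added elsewhere in this patch; the exact barrier flavor has varied across kernel versions):

    void memzero_explicit(void *s, size_t count)
    {
            memset(s, 0, count);
            barrier();      /* compiler barrier: the stores must survive */
    }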
2117 +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
2118 +index 7f2f149ae40f..cf864ef8d181 100644
2119 +--- a/drivers/cpufreq/cpufreq.c
2120 ++++ b/drivers/cpufreq/cpufreq.c
2121 +@@ -371,7 +371,18 @@ show_one(cpuinfo_max_freq, cpuinfo.max_freq);
2122 + show_one(cpuinfo_transition_latency, cpuinfo.transition_latency);
2123 + show_one(scaling_min_freq, min);
2124 + show_one(scaling_max_freq, max);
2125 +-show_one(scaling_cur_freq, cur);
2126 ++
2127 ++static ssize_t show_scaling_cur_freq(
2128 ++ struct cpufreq_policy *policy, char *buf)
2129 ++{
2130 ++ ssize_t ret;
2131 ++
2132 ++ if (cpufreq_driver && cpufreq_driver->setpolicy && cpufreq_driver->get)
2133 ++ ret = sprintf(buf, "%u\n", cpufreq_driver->get(policy->cpu));
2134 ++ else
2135 ++ ret = sprintf(buf, "%u\n", policy->cur);
2136 ++ return ret;
2137 ++}
2138 +
2139 + static int __cpufreq_set_policy(struct cpufreq_policy *data,
2140 + struct cpufreq_policy *policy);
2141 +@@ -818,11 +829,11 @@ static int cpufreq_add_dev_interface(unsigned int cpu,
2142 + if (ret)
2143 + goto err_out_kobj_put;
2144 + }
2145 +- if (cpufreq_driver->target) {
2146 +- ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
2147 +- if (ret)
2148 +- goto err_out_kobj_put;
2149 +- }
2150 ++
2151 ++ ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
2152 ++ if (ret)
2153 ++ goto err_out_kobj_put;
2154 ++
2155 + if (cpufreq_driver->bios_limit) {
2156 + ret = sysfs_create_file(&policy->kobj, &bios_limit.attr);
2157 + if (ret)
2158 +diff --git a/drivers/edac/mpc85xx_edac.c b/drivers/edac/mpc85xx_edac.c
2159 +index 73464a62adf7..0f0bf1a2ae1a 100644
2160 +--- a/drivers/edac/mpc85xx_edac.c
2161 ++++ b/drivers/edac/mpc85xx_edac.c
2162 +@@ -577,7 +577,8 @@ static int __devinit mpc85xx_l2_err_probe(struct platform_device *op)
2163 + if (edac_op_state == EDAC_OPSTATE_INT) {
2164 + pdata->irq = irq_of_parse_and_map(op->dev.of_node, 0);
2165 + res = devm_request_irq(&op->dev, pdata->irq,
2166 +- mpc85xx_l2_isr, IRQF_DISABLED,
2167 ++ mpc85xx_l2_isr,
2168 ++ IRQF_DISABLED | IRQF_SHARED,
2169 + "[EDAC] L2 err", edac_dev);
2170 + if (res < 0) {
2171 + printk(KERN_ERR
2172 +diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
2173 +index b558810b2da0..b449572cb800 100644
2174 +--- a/drivers/firewire/core-cdev.c
2175 ++++ b/drivers/firewire/core-cdev.c
2176 +@@ -1619,8 +1619,7 @@ static int dispatch_ioctl(struct client *client,
2177 + _IOC_SIZE(cmd) > sizeof(buffer))
2178 + return -ENOTTY;
2179 +
2180 +- if (_IOC_DIR(cmd) == _IOC_READ)
2181 +- memset(&buffer, 0, _IOC_SIZE(cmd));
2182 ++ memset(&buffer, 0, sizeof(buffer));
2183 +
2184 + if (_IOC_DIR(cmd) & _IOC_WRITE)
2185 + if (copy_from_user(&buffer, arg, _IOC_SIZE(cmd)))
2186 +diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
2187 +index df62c393f2f5..01434ef9e00f 100644
2188 +--- a/drivers/gpu/drm/radeon/evergreen.c
2189 ++++ b/drivers/gpu/drm/radeon/evergreen.c
2190 +@@ -1176,6 +1176,7 @@ void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *sav
2191 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
2192 + tmp |= EVERGREEN_CRTC_BLANK_DATA_EN;
2193 + WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
2194 ++ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
2195 + }
2196 + } else {
2197 + tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
2198 +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
2199 +index 00fb5aa2bf77..7ca1d472d7cb 100644
2200 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
2201 ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
2202 +@@ -1915,6 +1915,14 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector,
2203 + DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC)
2204 + };
2205 + int i;
2206 ++ u32 assumed_bpp = 2;
2207 ++
2208 ++ /*
2209 ++ * If using screen objects, then assume 32-bpp because that's what the
2210 ++ * SVGA device is assuming
2211 ++ */
2212 ++ if (dev_priv->sou_priv)
2213 ++ assumed_bpp = 4;
2214 +
2215 + /* Add preferred mode */
2216 + {
2217 +@@ -1925,8 +1933,9 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector,
2218 + mode->vdisplay = du->pref_height;
2219 + vmw_guess_mode_timing(mode);
2220 +
2221 +- if (vmw_kms_validate_mode_vram(dev_priv, mode->hdisplay * 2,
2222 +- mode->vdisplay)) {
2223 ++ if (vmw_kms_validate_mode_vram(dev_priv,
2224 ++ mode->hdisplay * assumed_bpp,
2225 ++ mode->vdisplay)) {
2226 + drm_mode_probed_add(connector, mode);
2227 + } else {
2228 + drm_mode_destroy(dev, mode);
2229 +@@ -1948,7 +1957,8 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector,
2230 + bmode->vdisplay > max_height)
2231 + continue;
2232 +
2233 +- if (!vmw_kms_validate_mode_vram(dev_priv, bmode->hdisplay * 2,
2234 ++ if (!vmw_kms_validate_mode_vram(dev_priv,
2235 ++ bmode->hdisplay * assumed_bpp,
2236 + bmode->vdisplay))
2237 + continue;
2238 +
2239 +diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
2240 +index f4c3d28cd1fc..3c8b2c473b81 100644
2241 +--- a/drivers/hv/channel.c
2242 ++++ b/drivers/hv/channel.c
2243 +@@ -207,8 +207,10 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
2244 + ret = vmbus_post_msg(open_msg,
2245 + sizeof(struct vmbus_channel_open_channel));
2246 +
2247 +- if (ret != 0)
2248 ++ if (ret != 0) {
2249 ++ err = ret;
2250 + goto error1;
2251 ++ }
2252 +
2253 + t = wait_for_completion_timeout(&open_info->waitevent, 5*HZ);
2254 + if (t == 0) {
2255 +@@ -400,7 +402,6 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
2256 + u32 next_gpadl_handle;
2257 + unsigned long flags;
2258 + int ret = 0;
2259 +- int t;
2260 +
2261 + next_gpadl_handle = atomic_read(&vmbus_connection.next_gpadl_handle);
2262 + atomic_inc(&vmbus_connection.next_gpadl_handle);
2263 +@@ -447,9 +448,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
2264 +
2265 + }
2266 + }
2267 +- t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
2268 +- BUG_ON(t == 0);
2269 +-
2270 ++ wait_for_completion(&msginfo->waitevent);
2271 +
2272 + /* At this point, we received the gpadl created msg */
2273 + *gpadl_handle = gpadlmsg->gpadl;
2274 +@@ -472,7 +471,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
2275 + struct vmbus_channel_gpadl_teardown *msg;
2276 + struct vmbus_channel_msginfo *info;
2277 + unsigned long flags;
2278 +- int ret, t;
2279 ++ int ret;
2280 +
2281 + info = kmalloc(sizeof(*info) +
2282 + sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL);
2283 +@@ -494,11 +493,12 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
2284 + ret = vmbus_post_msg(msg,
2285 + sizeof(struct vmbus_channel_gpadl_teardown));
2286 +
2287 +- BUG_ON(ret != 0);
2288 +- t = wait_for_completion_timeout(&info->waitevent, 5*HZ);
2289 +- BUG_ON(t == 0);
2290 ++ if (ret)
2291 ++ goto post_msg_err;
2292 ++
2293 ++ wait_for_completion(&info->waitevent);
2294 +
2295 +- /* Received a torndown response */
2296 ++post_msg_err:
2297 + spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
2298 + list_del(&info->msglistentry);
2299 + spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
2300 +@@ -531,11 +531,28 @@ void vmbus_close(struct vmbus_channel *channel)
2301 +
2302 + ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_close_channel));
2303 +
2304 +- BUG_ON(ret != 0);
2305 ++ if (ret) {
2306 ++ pr_err("Close failed: close post msg return is %d\n", ret);
2307 ++ /*
2308 ++ * If we failed to post the close msg,
2309 ++ * it is perhaps better to leak memory.
2310 ++ */
2311 ++ return;
2312 ++ }
2313 ++
2314 + /* Tear down the gpadl for the channel's ring buffer */
2315 +- if (channel->ringbuffer_gpadlhandle)
2316 +- vmbus_teardown_gpadl(channel,
2317 +- channel->ringbuffer_gpadlhandle);
2318 ++ if (channel->ringbuffer_gpadlhandle) {
2319 ++ ret = vmbus_teardown_gpadl(channel,
2320 ++ channel->ringbuffer_gpadlhandle);
2321 ++ if (ret) {
2322 ++ pr_err("Close failed: teardown gpadl return %d\n", ret);
2323 ++ /*
2324 ++ * If we failed to teardown gpadl,
2325 ++ * it is perhaps better to leak memory.
2326 ++ */
2327 ++ return;
2328 ++ }
2329 ++ }
2330 +
2331 + /* Cleanup the ring buffers for this channel */
2332 + hv_ringbuffer_cleanup(&channel->outbound);
2333 +@@ -543,8 +560,6 @@ void vmbus_close(struct vmbus_channel *channel)
2334 +
2335 + free_pages((unsigned long)channel->ringbuffer_pages,
2336 + get_order(channel->ringbuffer_pagecount * PAGE_SIZE));
2337 +-
2338 +-
2339 + }
2340 + EXPORT_SYMBOL_GPL(vmbus_close);
2341 +
2342 +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
2343 +index 650c9f0b6642..2d52a1b15b35 100644
2344 +--- a/drivers/hv/connection.c
2345 ++++ b/drivers/hv/connection.c
2346 +@@ -294,10 +294,21 @@ int vmbus_post_msg(void *buffer, size_t buflen)
2347 + * insufficient resources. Retry the operation a couple of
2348 + * times before giving up.
2349 + */
2350 +- while (retries < 3) {
2351 +- ret = hv_post_message(conn_id, 1, buffer, buflen);
2352 +- if (ret != HV_STATUS_INSUFFICIENT_BUFFERS)
2353 ++ while (retries < 10) {
2354 ++ ret = hv_post_message(conn_id, 1, buffer, buflen);
2355 ++
2356 ++ switch (ret) {
2357 ++ case HV_STATUS_INSUFFICIENT_BUFFERS:
2358 ++ ret = -ENOMEM;
2359 ++ case -ENOMEM:
2360 ++ break;
2361 ++ case HV_STATUS_SUCCESS:
2362 + return ret;
2363 ++ default:
2364 ++ pr_err("hv_post_msg() failed; error code:%d\n", ret);
2365 ++ return -EINVAL;
2366 ++ }
2367 ++
2368 + retries++;
2369 + msleep(100);
2370 + }
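The switch in the hunk above relies on an intentional fallthrough that is easy to misread as a missing break. Restated with explicit comments:

    switch (ret) {
    case HV_STATUS_INSUFFICIENT_BUFFERS:
            ret = -ENOMEM;
            /* fallthrough */
    case -ENOMEM:
            break;          /* transient: msleep(100) and retry */
    case HV_STATUS_SUCCESS:
            return ret;     /* done */
    default:
            return -EINVAL; /* unexpected status: give up */
    }

Only the two out-of-memory flavors reach the retry loop; everything else returns immediately.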
2371 +diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
2372 +index ff0b71a7e8e1..6bd8e81cfd9d 100644
2373 +--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
2374 ++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
2375 +@@ -2146,6 +2146,7 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
2376 + if (!qp_init)
2377 + goto out;
2378 +
2379 ++retry:
2380 + ch->cq = ib_create_cq(sdev->device, srpt_completion, NULL, ch,
2381 + ch->rq_size + srp_sq_size, 0);
2382 + if (IS_ERR(ch->cq)) {
2383 +@@ -2169,6 +2170,13 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
2384 + ch->qp = ib_create_qp(sdev->pd, qp_init);
2385 + if (IS_ERR(ch->qp)) {
2386 + ret = PTR_ERR(ch->qp);
2387 ++ if (ret == -ENOMEM) {
2388 ++ srp_sq_size /= 2;
2389 ++ if (srp_sq_size >= MIN_SRPT_SQ_SIZE) {
2390 ++ ib_destroy_cq(ch->cq);
2391 ++ goto retry;
2392 ++ }
2393 ++ }
2394 + printk(KERN_ERR "failed to create_qp ret= %d\n", ret);
2395 + goto err_destroy_cq;
2396 + }
2397 +diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
2398 +index 4c6a72d3d48c..9854a1ff5ab5 100644
2399 +--- a/drivers/input/mouse/alps.c
2400 ++++ b/drivers/input/mouse/alps.c
2401 +@@ -787,7 +787,13 @@ static psmouse_ret_t alps_process_byte(struct psmouse *psmouse)
2402 + struct alps_data *priv = psmouse->private;
2403 + const struct alps_model_info *model = priv->i;
2404 +
2405 +- if ((psmouse->packet[0] & 0xc8) == 0x08) { /* PS/2 packet */
2406 ++ /*
2407 ++ * Check if we are dealing with a bare PS/2 packet, presumably from
2408 ++ * a device connected to the external PS/2 port. Because bare PS/2
2409 ++ * protocol does not have enough constant bits to self-synchronize
2410 ++ * properly we only do this if the device is fully synchronized.
2411 ++ */
2412 ++ if (!psmouse->out_of_sync_cnt && (psmouse->packet[0] & 0xc8) == 0x08) {
2413 + if (psmouse->pktcnt == 3) {
2414 + alps_report_bare_ps2_packet(psmouse, psmouse->packet,
2415 + true);
2416 +@@ -1619,6 +1625,9 @@ int alps_init(struct psmouse *psmouse)
2417 + /* We are having trouble resyncing ALPS touchpads so disable it for now */
2418 + psmouse->resync_time = 0;
2419 +
2420 ++ /* Allow 2 invalid packets without resetting device */
2421 ++ psmouse->resetafter = psmouse->pktsize * 2;
2422 ++
2423 + return 0;
2424 +
2425 + init_fail:
2426 +diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
2427 +index 32b1363f7ace..97e5f6f797b4 100644
2428 +--- a/drivers/input/mouse/synaptics.c
2429 ++++ b/drivers/input/mouse/synaptics.c
2430 +@@ -506,6 +506,8 @@ static void synaptics_parse_agm(const unsigned char buf[],
2431 + priv->agm_pending = true;
2432 + }
2433 +
2434 ++static bool is_forcepad;
2435 ++
2436 + static int synaptics_parse_hw_state(const unsigned char buf[],
2437 + struct synaptics_data *priv,
2438 + struct synaptics_hw_state *hw)
2439 +@@ -535,7 +537,7 @@ static int synaptics_parse_hw_state(const unsigned char buf[],
2440 + hw->left = (buf[0] & 0x01) ? 1 : 0;
2441 + hw->right = (buf[0] & 0x02) ? 1 : 0;
2442 +
2443 +- if (SYN_CAP_FORCEPAD(priv->ext_cap_0c)) {
2444 ++ if (is_forcepad) {
2445 + /*
2446 + * ForcePads, like Clickpads, use middle button
2447 + * bits to report primary button clicks.
2448 +@@ -1512,6 +1514,18 @@ static const struct dmi_system_id min_max_dmi_table[] __initconst = {
2449 + { }
2450 + };
2451 +
2452 ++static const struct dmi_system_id forcepad_dmi_table[] __initconst = {
2453 ++#if defined(CONFIG_DMI) && defined(CONFIG_X86)
2454 ++ {
2455 ++ .matches = {
2456 ++ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
2457 ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook Folio 1040 G1"),
2458 ++ },
2459 ++ },
2460 ++#endif
2461 ++ { }
2462 ++};
2463 ++
2464 + void __init synaptics_module_init(void)
2465 + {
2466 + const struct dmi_system_id *min_max_dmi;
2467 +@@ -1522,6 +1536,12 @@ void __init synaptics_module_init(void)
2468 + min_max_dmi = dmi_first_match(min_max_dmi_table);
2469 + if (min_max_dmi)
2470 + quirk_min_max = min_max_dmi->driver_data;
2471 ++
2472 ++ /*
2473 ++ * Unfortunately ForcePad capability is not exported over PS/2,
2474 ++ * so we have to resort to checking DMI.
2475 ++ */
2476 ++ is_forcepad = dmi_check_system(forcepad_dmi_table);
2477 + }
2478 +
2479 + static int __synaptics_init(struct psmouse *psmouse, bool absolute_mode)
2480 +diff --git a/drivers/input/mouse/synaptics.h b/drivers/input/mouse/synaptics.h
2481 +index ac1b77354cac..20d861b4e326 100644
2482 +--- a/drivers/input/mouse/synaptics.h
2483 ++++ b/drivers/input/mouse/synaptics.h
2484 +@@ -76,12 +76,9 @@
2485 + * for noise.
2486 + * 2 0x08 image sensor image sensor tracks 5 fingers, but only
2487 + * reports 2.
2488 ++ * 2 0x01 uniform clickpad whole clickpad moves instead of being
2489 ++ * hinged at the top.
2490 + * 2 0x20 report min query 0x0f gives min coord reported
2491 +- * 2 0x80 forcepad forcepad is a variant of clickpad that
2492 +- * does not have physical buttons but rather
2493 +- * uses pressure above certain threshold to
2494 +- * report primary clicks. Forcepads also have
2495 +- * clickpad bit set.
2496 + */
2497 + #define SYN_CAP_CLICKPAD(ex0c) ((ex0c) & 0x100000) /* 1-button ClickPad */
2498 + #define SYN_CAP_CLICKPAD2BTN(ex0c) ((ex0c) & 0x000100) /* 2-button ClickPad */
2499 +@@ -90,7 +87,6 @@
2500 + #define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000)
2501 + #define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400)
2502 + #define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800)
2503 +-#define SYN_CAP_FORCEPAD(ex0c) ((ex0c) & 0x008000)
2504 +
2505 + /* synaptics modes query bits */
2506 + #define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7))
2507 +diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
2508 +index 1291673bd57e..ce715b1bee46 100644
2509 +--- a/drivers/input/serio/i8042-x86ia64io.h
2510 ++++ b/drivers/input/serio/i8042-x86ia64io.h
2511 +@@ -101,6 +101,12 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = {
2512 + },
2513 + {
2514 + .matches = {
2515 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
2516 ++ DMI_MATCH(DMI_PRODUCT_NAME, "X750LN"),
2517 ++ },
2518 ++ },
2519 ++ {
2520 ++ .matches = {
2521 + DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
2522 + DMI_MATCH(DMI_PRODUCT_NAME , "ProLiant"),
2523 + DMI_MATCH(DMI_PRODUCT_VERSION, "8500"),
2524 +@@ -609,6 +615,22 @@ static const struct dmi_system_id __initconst i8042_dmi_notimeout_table[] = {
2525 + },
2526 + },
2527 + {
2528 ++ /* Fujitsu A544 laptop */
2529 ++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1111138 */
2530 ++ .matches = {
2531 ++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
2532 ++ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK A544"),
2533 ++ },
2534 ++ },
2535 ++ {
2536 ++ /* Fujitsu AH544 laptop */
2537 ++ /* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
2538 ++ .matches = {
2539 ++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
2540 ++ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"),
2541 ++ },
2542 ++ },
2543 ++ {
2544 + /* Fujitsu U574 laptop */
2545 + /* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
2546 + .matches = {
2547 +diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
2548 +index 6f99500790b3..535b09cd3cd8 100644
2549 +--- a/drivers/md/dm-bufio.c
2550 ++++ b/drivers/md/dm-bufio.c
2551 +@@ -467,6 +467,7 @@ static void __relink_lru(struct dm_buffer *b, int dirty)
2552 + b->list_mode = dirty;
2553 + list_del(&b->lru_list);
2554 + list_add(&b->lru_list, &c->lru[dirty]);
2555 ++ b->last_accessed = jiffies;
2556 + }
2557 +
2558 + /*----------------------------------------------------------------
2559 +@@ -1378,9 +1379,9 @@ static void drop_buffers(struct dm_bufio_client *c)
2560 +
2561 + /*
2562 + * Test if the buffer is unused and too old, and commit it.
2563 +- * At if noio is set, we must not do any I/O because we hold
2564 +- * dm_bufio_clients_lock and we would risk deadlock if the I/O gets rerouted to
2565 +- * different bufio client.
2566 ++ * And if GFP_NOFS is used, we must not do any I/O because we hold
2567 ++ * dm_bufio_clients_lock and we would risk deadlock if the I/O gets
2568 ++ * rerouted to different bufio client.
2569 + */
2570 + static int __cleanup_old_buffer(struct dm_buffer *b, gfp_t gfp,
2571 + unsigned long max_jiffies)
2572 +@@ -1388,7 +1389,7 @@ static int __cleanup_old_buffer(struct dm_buffer *b, gfp_t gfp,
2573 + if (jiffies - b->last_accessed < max_jiffies)
2574 + return 1;
2575 +
2576 +- if (!(gfp & __GFP_IO)) {
2577 ++ if (!(gfp & __GFP_FS)) {
2578 + if (test_bit(B_READING, &b->state) ||
2579 + test_bit(B_WRITING, &b->state) ||
2580 + test_bit(B_DIRTY, &b->state))
2581 +@@ -1427,7 +1428,7 @@ static int shrink(struct shrinker *shrinker, struct shrink_control *sc)
2582 + unsigned long r;
2583 + unsigned long nr_to_scan = sc->nr_to_scan;
2584 +
2585 +- if (sc->gfp_mask & __GFP_IO)
2586 ++ if (sc->gfp_mask & __GFP_FS)
2587 + dm_bufio_lock(c);
2588 + else if (!dm_bufio_trylock(c))
2589 + return !nr_to_scan ? 0 : -1;
2590 +diff --git a/drivers/md/dm-log-userspace-transfer.c b/drivers/md/dm-log-userspace-transfer.c
2591 +index 08d9a207259a..c69d0b787746 100644
2592 +--- a/drivers/md/dm-log-userspace-transfer.c
2593 ++++ b/drivers/md/dm-log-userspace-transfer.c
2594 +@@ -272,7 +272,7 @@ int dm_ulog_tfr_init(void)
2595 +
2596 + r = cn_add_callback(&ulog_cn_id, "dmlogusr", cn_ulog_callback);
2597 + if (r) {
2598 +- cn_del_callback(&ulog_cn_id);
2599 ++ kfree(prealloced_cn_msg);
2600 + return r;
2601 + }
2602 +
2603 +diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
2604 +index ead5ca99a749..5dea02cff622 100644
2605 +--- a/drivers/md/dm-raid.c
2606 ++++ b/drivers/md/dm-raid.c
2607 +@@ -592,8 +592,7 @@ struct dm_raid_superblock {
2608 + __le32 layout;
2609 + __le32 stripe_sectors;
2610 +
2611 +- __u8 pad[452]; /* Round struct to 512 bytes. */
2612 +- /* Always set to 0 when writing. */
2613 ++ /* Remainder of a logical block is zero-filled when writing (see super_sync()). */
2614 + } __packed;
2615 +
2616 + static int read_disk_sb(struct md_rdev *rdev, int size)
2617 +@@ -628,7 +627,7 @@ static void super_sync(struct mddev *mddev, struct md_rdev *rdev)
2618 + if ((r->raid_disk >= 0) && test_bit(Faulty, &r->flags))
2619 + failed_devices |= (1ULL << r->raid_disk);
2620 +
2621 +- memset(sb, 0, sizeof(*sb));
2622 ++ memset(sb + 1, 0, rdev->sb_size - sizeof(*sb));
2623 +
2624 + sb->magic = cpu_to_le32(DM_RAID_MAGIC);
2625 + sb->features = cpu_to_le32(0); /* No features yet */
2626 +@@ -663,7 +662,11 @@ static int super_load(struct md_rdev *rdev, struct md_rdev *refdev)
2627 + uint64_t events_sb, events_refsb;
2628 +
2629 + rdev->sb_start = 0;
2630 +- rdev->sb_size = sizeof(*sb);
2631 ++ rdev->sb_size = bdev_logical_block_size(rdev->meta_bdev);
2632 ++ if (rdev->sb_size < sizeof(*sb) || rdev->sb_size > PAGE_SIZE) {
2633 ++ DMERR("superblock size of a logical block is no longer valid");
2634 ++ return -EINVAL;
2635 ++ }
2636 +
2637 + ret = read_disk_sb(rdev, rdev->sb_size);
2638 + if (ret)
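Two details in the dm-raid hunks deserve unpacking. The superblock I/O size now follows bdev_logical_block_size() so writes succeed on devices with 4096-byte logical blocks, where a 512-byte write would be rejected. And memset(sb + 1, ...) leans on struct pointer arithmetic: sb + 1 addresses the first byte past the struct, so only the tail of the block is cleared. A self-contained illustration:

    #include <stdio.h>

    struct sb_like { char fields[60]; };    /* stand-in for the real struct */

    int main(void)
    {
            struct sb_like buf, *sb = &buf;

            /* (char *)(sb + 1) - (char *)sb == sizeof(*sb), so
             * memset(sb + 1, 0, block_size - sizeof(*sb)) zeroes
             * everything after the live fields and nothing else. */
            printf("%td\n", (char *)(sb + 1) - (char *)sb);
            return 0;
    }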
2639 +diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
2640 +index c2cdefa1651e..d4f7f9537db2 100644
2641 +--- a/drivers/net/can/dev.c
2642 ++++ b/drivers/net/can/dev.c
2643 +@@ -359,7 +359,7 @@ void can_free_echo_skb(struct net_device *dev, unsigned int idx)
2644 + BUG_ON(idx >= priv->echo_skb_max);
2645 +
2646 + if (priv->echo_skb[idx]) {
2647 +- kfree_skb(priv->echo_skb[idx]);
2648 ++ dev_kfree_skb_any(priv->echo_skb[idx]);
2649 + priv->echo_skb[idx] = NULL;
2650 + }
2651 + }
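kfree_skb() must not run in hard-IRQ context, and can_free_echo_skb() is reachable from CAN drivers' interrupt handlers, hence dev_kfree_skb_any(). Its behavior in kernels of this era is, to the best of my reading, equivalent to:

    void dev_kfree_skb_any(struct sk_buff *skb)
    {
            if (in_irq() || irqs_disabled())
                    dev_kfree_skb_irq(skb); /* defer the free to softirq time */
            else
                    dev_kfree_skb(skb);     /* process context: free right away */
    }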
2652 +diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
2653 +index 09b1da5bc512..6c8423411dd8 100644
2654 +--- a/drivers/net/can/usb/esd_usb2.c
2655 ++++ b/drivers/net/can/usb/esd_usb2.c
2656 +@@ -1094,6 +1094,7 @@ static void esd_usb2_disconnect(struct usb_interface *intf)
2657 + }
2658 + }
2659 + unlink_all_urbs(dev);
2660 ++ kfree(dev);
2661 + }
2662 + }
2663 +
2664 +diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
2665 +index f5b9de48bb82..c19f9447b200 100644
2666 +--- a/drivers/net/macvtap.c
2667 ++++ b/drivers/net/macvtap.c
2668 +@@ -17,6 +17,7 @@
2669 + #include <linux/idr.h>
2670 + #include <linux/fs.h>
2671 +
2672 ++#include <net/ipv6.h>
2673 + #include <net/net_namespace.h>
2674 + #include <net/rtnetlink.h>
2675 + #include <net/sock.h>
2676 +@@ -578,6 +579,8 @@ static int macvtap_skb_from_vnet_hdr(struct sk_buff *skb,
2677 + break;
2678 + case VIRTIO_NET_HDR_GSO_UDP:
2679 + gso_type = SKB_GSO_UDP;
2680 ++ if (skb->protocol == htons(ETH_P_IPV6))
2681 ++ ipv6_proxy_select_ident(skb);
2682 + break;
2683 + default:
2684 + return -EINVAL;
2685 +@@ -634,6 +637,8 @@ static int macvtap_skb_to_vnet_hdr(const struct sk_buff *skb,
2686 + if (skb->ip_summed == CHECKSUM_PARTIAL) {
2687 + vnet_hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
2688 + vnet_hdr->csum_start = skb_checksum_start_offset(skb);
2689 ++ if (vlan_tx_tag_present(skb))
2690 ++ vnet_hdr->csum_start += VLAN_HLEN;
2691 + vnet_hdr->csum_offset = skb->csum_offset;
2692 + } else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
2693 + vnet_hdr->flags = VIRTIO_NET_HDR_F_DATA_VALID;
2694 +diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
2695 +index 21d7151fb0ab..1207bb19ba58 100644
2696 +--- a/drivers/net/ppp/ppp_generic.c
2697 ++++ b/drivers/net/ppp/ppp_generic.c
2698 +@@ -588,7 +588,7 @@ static long ppp_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
2699 + if (file == ppp->owner)
2700 + ppp_shutdown_interface(ppp);
2701 + }
2702 +- if (atomic_long_read(&file->f_count) <= 2) {
2703 ++ if (atomic_long_read(&file->f_count) < 2) {
2704 + ppp_release(NULL, file);
2705 + err = 0;
2706 + } else
2707 +diff --git a/drivers/net/tun.c b/drivers/net/tun.c
2708 +index 5b1a1b51fdb0..84b95c9b15f6 100644
2709 +--- a/drivers/net/tun.c
2710 ++++ b/drivers/net/tun.c
2711 +@@ -64,6 +64,7 @@
2712 + #include <linux/nsproxy.h>
2713 + #include <linux/virtio_net.h>
2714 + #include <linux/rcupdate.h>
2715 ++#include <net/ipv6.h>
2716 + #include <net/net_namespace.h>
2717 + #include <net/netns/generic.h>
2718 + #include <net/rtnetlink.h>
2719 +@@ -696,6 +697,8 @@ static ssize_t tun_get_user(struct tun_struct *tun,
2720 + break;
2721 + }
2722 +
2723 ++ skb_reset_network_header(skb);
2724 ++
2725 + if (gso.gso_type != VIRTIO_NET_HDR_GSO_NONE) {
2726 + pr_debug("GSO!\n");
2727 + switch (gso.gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
2728 +@@ -707,6 +710,8 @@ static ssize_t tun_get_user(struct tun_struct *tun,
2729 + break;
2730 + case VIRTIO_NET_HDR_GSO_UDP:
2731 + skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
2732 ++ if (skb->protocol == htons(ETH_P_IPV6))
2733 ++ ipv6_proxy_select_ident(skb);
2734 + break;
2735 + default:
2736 + tun->dev->stats.rx_frame_errors++;
2737 +diff --git a/drivers/net/wireless/rt2x00/rt2800.h b/drivers/net/wireless/rt2x00/rt2800.h
2738 +index 063bfa8b91f4..9105493445bd 100644
2739 +--- a/drivers/net/wireless/rt2x00/rt2800.h
2740 ++++ b/drivers/net/wireless/rt2x00/rt2800.h
2741 +@@ -1751,7 +1751,7 @@ struct mac_iveiv_entry {
2742 + * 2 - drop tx power by 12dBm,
2743 + * 3 - increase tx power by 6dBm
2744 + */
2745 +-#define BBP1_TX_POWER_CTRL FIELD8(0x07)
2746 ++#define BBP1_TX_POWER_CTRL FIELD8(0x03)
2747 + #define BBP1_TX_ANTENNA FIELD8(0x18)
2748 +
2749 + /*
2750 +diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c
2751 +index 664e93d2a682..49baf0cfe304 100644
2752 +--- a/drivers/net/wireless/rt2x00/rt2800usb.c
2753 ++++ b/drivers/net/wireless/rt2x00/rt2800usb.c
2754 +@@ -1081,6 +1081,7 @@ static struct usb_device_id rt2800usb_device_table[] = {
2755 + /* Ovislink */
2756 + { USB_DEVICE(0x1b75, 0x3071) },
2757 + { USB_DEVICE(0x1b75, 0x3072) },
2758 ++ { USB_DEVICE(0x1b75, 0xa200) },
2759 + /* Para */
2760 + { USB_DEVICE(0x20b8, 0x8888) },
2761 + /* Pegatron */
2762 +diff --git a/drivers/net/wireless/rt2x00/rt2x00queue.c b/drivers/net/wireless/rt2x00/rt2x00queue.c
2763 +index 4d792a242c5e..c5bdbe94aec8 100644
2764 +--- a/drivers/net/wireless/rt2x00/rt2x00queue.c
2765 ++++ b/drivers/net/wireless/rt2x00/rt2x00queue.c
2766 +@@ -148,55 +148,29 @@ void rt2x00queue_align_frame(struct sk_buff *skb)
2767 + skb_trim(skb, frame_length);
2768 + }
2769 +
2770 +-void rt2x00queue_insert_l2pad(struct sk_buff *skb, unsigned int header_length)
2771 ++/*
2772 ++ * H/W needs L2 padding between the header and the payload if header size
2773 ++ * is not 4 bytes aligned.
2774 ++ */
2775 ++void rt2x00queue_insert_l2pad(struct sk_buff *skb, unsigned int hdr_len)
2776 + {
2777 +- unsigned int payload_length = skb->len - header_length;
2778 +- unsigned int header_align = ALIGN_SIZE(skb, 0);
2779 +- unsigned int payload_align = ALIGN_SIZE(skb, header_length);
2780 +- unsigned int l2pad = payload_length ? L2PAD_SIZE(header_length) : 0;
2781 ++ unsigned int l2pad = (skb->len > hdr_len) ? L2PAD_SIZE(hdr_len) : 0;
2782 +
2783 +- /*
2784 +- * Adjust the header alignment if the payload needs to be moved more
2785 +- * than the header.
2786 +- */
2787 +- if (payload_align > header_align)
2788 +- header_align += 4;
2789 +-
2790 +- /* There is nothing to do if no alignment is needed */
2791 +- if (!header_align)
2792 ++ if (!l2pad)
2793 + return;
2794 +
2795 +- /* Reserve the amount of space needed in front of the frame */
2796 +- skb_push(skb, header_align);
2797 +-
2798 +- /*
2799 +- * Move the header.
2800 +- */
2801 +- memmove(skb->data, skb->data + header_align, header_length);
2802 +-
2803 +- /* Move the payload, if present and if required */
2804 +- if (payload_length && payload_align)
2805 +- memmove(skb->data + header_length + l2pad,
2806 +- skb->data + header_length + l2pad + payload_align,
2807 +- payload_length);
2808 +-
2809 +- /* Trim the skb to the correct size */
2810 +- skb_trim(skb, header_length + l2pad + payload_length);
2811 ++ skb_push(skb, l2pad);
2812 ++ memmove(skb->data, skb->data + l2pad, hdr_len);
2813 + }
2814 +
2815 +-void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int header_length)
2816 ++void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int hdr_len)
2817 + {
2818 +- /*
2819 +- * L2 padding is only present if the skb contains more than just the
2820 +- * IEEE 802.11 header.
2821 +- */
2822 +- unsigned int l2pad = (skb->len > header_length) ?
2823 +- L2PAD_SIZE(header_length) : 0;
2824 ++ unsigned int l2pad = (skb->len > hdr_len) ? L2PAD_SIZE(hdr_len) : 0;
2825 +
2826 + if (!l2pad)
2827 + return;
2828 +
2829 +- memmove(skb->data + l2pad, skb->data, header_length);
2830 ++ memmove(skb->data + l2pad, skb->data, hdr_len);
2831 + skb_pull(skb, l2pad);
2832 + }
2833 +
2834 +diff --git a/drivers/of/address.c b/drivers/of/address.c
2835 +index 66d96f14c274..c059ce1dd338 100644
2836 +--- a/drivers/of/address.c
2837 ++++ b/drivers/of/address.c
2838 +@@ -328,6 +328,21 @@ static struct of_bus *of_match_bus(struct device_node *np)
2839 + return NULL;
2840 + }
2841 +
2842 ++static int of_empty_ranges_quirk(void)
2843 ++{
2844 ++ if (IS_ENABLED(CONFIG_PPC)) {
2845 ++ /* To save cycles, we cache the result */
2846 ++ static int quirk_state = -1;
2847 ++
2848 ++ if (quirk_state < 0)
2849 ++ quirk_state =
2850 ++ of_machine_is_compatible("Power Macintosh") ||
2851 ++ of_machine_is_compatible("MacRISC");
2852 ++ return quirk_state;
2853 ++ }
2854 ++ return false;
2855 ++}
2856 ++
2857 + static int of_translate_one(struct device_node *parent, struct of_bus *bus,
2858 + struct of_bus *pbus, u32 *addr,
2859 + int na, int ns, int pna, const char *rprop)
2860 +@@ -353,12 +368,10 @@ static int of_translate_one(struct device_node *parent, struct of_bus *bus,
2861 + * This code is only enabled on powerpc. --gcl
2862 + */
2863 + ranges = of_get_property(parent, rprop, &rlen);
2864 +-#if !defined(CONFIG_PPC)
2865 +- if (ranges == NULL) {
2866 ++ if (ranges == NULL && !of_empty_ranges_quirk()) {
2867 + pr_err("OF: no ranges; cannot translate\n");
2868 + return 1;
2869 + }
2870 +-#endif /* !defined(CONFIG_PPC) */
2871 + if (ranges == NULL || rlen == 0) {
2872 + offset = of_read_number(addr, na);
2873 + memset(addr, 0, pna * 4);
2874 +diff --git a/drivers/of/base.c b/drivers/of/base.c
2875 +index 1c207f23b114..d439d0611559 100644
2876 +--- a/drivers/of/base.c
2877 ++++ b/drivers/of/base.c
2878 +@@ -716,52 +716,6 @@ int of_property_read_string(struct device_node *np, const char *propname,
2879 + EXPORT_SYMBOL_GPL(of_property_read_string);
2880 +
2881 + /**
2882 +- * of_property_read_string_index - Find and read a string from a multiple
2883 +- * strings property.
2884 +- * @np: device node from which the property value is to be read.
2885 +- * @propname: name of the property to be searched.
2886 +- * @index: index of the string in the list of strings
2887 +- * @out_string: pointer to null terminated return string, modified only if
2888 +- * return value is 0.
2889 +- *
2890 +- * Search for a property in a device tree node and retrieve a null
2891 +- * terminated string value (pointer to data, not a copy) in the list of strings
2892 +- * contained in that property.
2893 +- * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if
2894 +- * property does not have a value, and -EILSEQ if the string is not
2895 +- * null-terminated within the length of the property data.
2896 +- *
2897 +- * The out_string pointer is modified only if a valid string can be decoded.
2898 +- */
2899 +-int of_property_read_string_index(struct device_node *np, const char *propname,
2900 +- int index, const char **output)
2901 +-{
2902 +- struct property *prop = of_find_property(np, propname, NULL);
2903 +- int i = 0;
2904 +- size_t l = 0, total = 0;
2905 +- const char *p;
2906 +-
2907 +- if (!prop)
2908 +- return -EINVAL;
2909 +- if (!prop->value)
2910 +- return -ENODATA;
2911 +- if (strnlen(prop->value, prop->length) >= prop->length)
2912 +- return -EILSEQ;
2913 +-
2914 +- p = prop->value;
2915 +-
2916 +- for (i = 0; total < prop->length; total += l, p += l) {
2917 +- l = strlen(p) + 1;
2918 +- if (i++ == index) {
2919 +- *output = p;
2920 +- return 0;
2921 +- }
2922 +- }
2923 +- return -ENODATA;
2924 +-}
2925 +-EXPORT_SYMBOL_GPL(of_property_read_string_index);
2926 +-
2927 +-/**
2928 + * of_property_match_string() - Find string in a list and return index
2929 + * @np: pointer to node containing string list property
2930 + * @propname: string list property name
2931 +@@ -787,7 +741,7 @@ int of_property_match_string(struct device_node *np, const char *propname,
2932 + end = p + prop->length;
2933 +
2934 + for (i = 0; p < end; i++, p += l) {
2935 +- l = strlen(p) + 1;
2936 ++ l = strnlen(p, end - p) + 1;
2937 + if (p + l > end)
2938 + return -EILSEQ;
2939 + pr_debug("comparing %s with %s\n", string, p);
2940 +@@ -799,39 +753,41 @@ int of_property_match_string(struct device_node *np, const char *propname,
2941 + EXPORT_SYMBOL_GPL(of_property_match_string);
2942 +
2943 + /**
2944 +- * of_property_count_strings - Find and return the number of strings from a
2945 +- * multiple strings property.
2946 ++ * of_property_read_string_util() - Utility helper for parsing string properties
2947 + * @np: device node from which the property value is to be read.
2948 + * @propname: name of the property to be searched.
2949 ++ * @out_strs: output array of string pointers.
2950 ++ * @sz: number of array elements to read.
2951 ++ * @skip: Number of strings to skip over at beginning of list.
2952 + *
2953 +- * Search for a property in a device tree node and retrieve the number of null
2954 +- * terminated string contain in it. Returns the number of strings on
2955 +- * success, -EINVAL if the property does not exist, -ENODATA if property
2956 +- * does not have a value, and -EILSEQ if the string is not null-terminated
2957 +- * within the length of the property data.
2958 ++ * Don't call this function directly. It is a utility helper for the
2959 ++ * of_property_read_string*() family of functions.
2960 + */
2961 +-int of_property_count_strings(struct device_node *np, const char *propname)
2962 ++int of_property_read_string_helper(struct device_node *np, const char *propname,
2963 ++ const char **out_strs, size_t sz, int skip)
2964 + {
2965 + struct property *prop = of_find_property(np, propname, NULL);
2966 +- int i = 0;
2967 +- size_t l = 0, total = 0;
2968 +- const char *p;
2969 ++ int l = 0, i = 0;
2970 ++ const char *p, *end;
2971 +
2972 + if (!prop)
2973 + return -EINVAL;
2974 + if (!prop->value)
2975 + return -ENODATA;
2976 +- if (strnlen(prop->value, prop->length) >= prop->length)
2977 +- return -EILSEQ;
2978 +-
2979 + p = prop->value;
2980 ++ end = p + prop->length;
2981 +
2982 +- for (i = 0; total < prop->length; total += l, p += l, i++)
2983 +- l = strlen(p) + 1;
2984 +-
2985 +- return i;
2986 ++ for (i = 0; p < end && (!out_strs || i < skip + sz); i++, p += l) {
2987 ++ l = strnlen(p, end - p) + 1;
2988 ++ if (p + l > end)
2989 ++ return -EILSEQ;
2990 ++ if (out_strs && i >= skip)
2991 ++ *out_strs++ = p;
2992 ++ }
2993 ++ i -= skip;
2994 ++ return i <= 0 ? -ENODATA : i;
2995 + }
2996 +-EXPORT_SYMBOL_GPL(of_property_count_strings);
2997 ++EXPORT_SYMBOL_GPL(of_property_read_string_helper);
2998 +
2999 + /**
3000 + * of_parse_phandle - Resolve a phandle property to a device_node pointer
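Removing of_property_read_string_index() and of_property_count_strings() from base.c does not drop them from the API: in the matching header change (include/linux/of.h, not part of this excerpt) they become thin static inline wrappers over the new helper, roughly:

    static inline int of_property_count_strings(struct device_node *np,
                                                const char *propname)
    {
            /* out_strs == NULL: just walk, validate and count */
            return of_property_read_string_helper(np, propname, NULL, 0, 0);
    }

    static inline int of_property_read_string_index(struct device_node *np,
                                                    const char *propname,
                                                    int index,
                                                    const char **output)
    {
            /* skip 'index' strings, then read exactly one */
            int rc = of_property_read_string_helper(np, propname,
                                                    output, 1, index);
            return rc < 0 ? rc : 0;
    }

Centralising the walk in a single strnlen()-based loop is what fixes the out-of-bounds reads on unterminated string lists.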
3001 +diff --git a/drivers/of/selftest.c b/drivers/of/selftest.c
3002 +index f24ffd7088d2..5a0771cc8987 100644
3003 +--- a/drivers/of/selftest.c
3004 ++++ b/drivers/of/selftest.c
3005 +@@ -120,8 +120,9 @@ static void __init of_selftest_parse_phandle_with_args(void)
3006 + pr_info("end - %s\n", passed_all ? "PASS" : "FAIL");
3007 + }
3008 +
3009 +-static void __init of_selftest_property_match_string(void)
3010 ++static void __init of_selftest_property_string(void)
3011 + {
3012 ++ const char *strings[4];
3013 + struct device_node *np;
3014 + int rc;
3015 +
3016 +@@ -139,13 +140,66 @@ static void __init of_selftest_property_match_string(void)
3017 + rc = of_property_match_string(np, "phandle-list-names", "third");
3018 + selftest(rc == 2, "third expected:0 got:%i\n", rc);
3019 + rc = of_property_match_string(np, "phandle-list-names", "fourth");
3020 +- selftest(rc == -ENODATA, "unmatched string; rc=%i", rc);
3021 ++ selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
3022 + rc = of_property_match_string(np, "missing-property", "blah");
3023 +- selftest(rc == -EINVAL, "missing property; rc=%i", rc);
3024 ++ selftest(rc == -EINVAL, "missing property; rc=%i\n", rc);
3025 + rc = of_property_match_string(np, "empty-property", "blah");
3026 +- selftest(rc == -ENODATA, "empty property; rc=%i", rc);
3027 ++ selftest(rc == -ENODATA, "empty property; rc=%i\n", rc);
3028 + rc = of_property_match_string(np, "unterminated-string", "blah");
3029 +- selftest(rc == -EILSEQ, "unterminated string; rc=%i", rc);
3030 ++ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
3031 ++
3032 ++ /* of_property_count_strings() tests */
3033 ++ rc = of_property_count_strings(np, "string-property");
3034 ++ selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
3035 ++ rc = of_property_count_strings(np, "phandle-list-names");
3036 ++ selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
3037 ++ rc = of_property_count_strings(np, "unterminated-string");
3038 ++ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
3039 ++ rc = of_property_count_strings(np, "unterminated-string-list");
3040 ++ selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
3041 ++
3042 ++ /* of_property_read_string_index() tests */
3043 ++ rc = of_property_read_string_index(np, "string-property", 0, strings);
3044 ++ selftest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
3045 ++ strings[0] = NULL;
3046 ++ rc = of_property_read_string_index(np, "string-property", 1, strings);
3047 ++ selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
3048 ++ rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
3049 ++ selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
3050 ++ rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
3051 ++ selftest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
3052 ++ rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
3053 ++ selftest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
3054 ++ strings[0] = NULL;
3055 ++ rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
3056 ++ selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
3057 ++ strings[0] = NULL;
3058 ++ rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
3059 ++ selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
3060 ++ rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
3061 ++ selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
3062 ++ strings[0] = NULL;
3063 ++ rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
3064 ++ selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
3065 ++ strings[1] = NULL;
3066 ++
3067 ++ /* of_property_read_string_array() tests */
3068 ++ rc = of_property_read_string_array(np, "string-property", strings, 4);
3069 ++ selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
3070 ++ rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
3071 ++ selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
3072 ++ rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
3073 ++ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
3074 ++ /* -- An incorrectly formed string should cause a failure */
3075 ++ rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
3076 ++ selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
3077 ++ /* -- parsing the correctly formed strings should still work: */
3078 ++ strings[2] = NULL;
3079 ++ rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
3080 ++ selftest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
3081 ++ strings[1] = NULL;
3082 ++ rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
3083 ++ selftest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
3084 + }
3085 +
3086 + static int __init of_selftest(void)
3087 +@@ -161,7 +215,7 @@ static int __init of_selftest(void)
3088 +
3089 + pr_info("start of selftest - you will see error messages\n");
3090 + of_selftest_parse_phandle_with_args();
3091 +- of_selftest_property_match_string();
3092 ++ of_selftest_property_string();
3093 + pr_info("end of selftest - %s\n", selftest_passed ? "PASS" : "FAIL");
3094 + return 0;
3095 + }
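The new selftests pin down the helper's return convention: asking for fewer strings than the property holds is not an error, the walk simply stops at the caller's bound. For the test node's phandle-list-names = "first", "second", "third":

    const char *out[2];
    int n;

    n = of_property_read_string_array(np, "phandle-list-names", out, 2);
    /* n == 2, out[0] == "first", out[1] == "second";
     * "third" is left unread and out[] is never overrun */

A malformed string only yields -EILSEQ once the walk actually reaches it, which is why reading the first two entries of "unterminated-string-list" still succeeds in the tests above.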
3096 +diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
3097 +index 9e39df969560..75dc402f0347 100644
3098 +--- a/drivers/pci/hotplug/pciehp_core.c
3099 ++++ b/drivers/pci/hotplug/pciehp_core.c
3100 +@@ -237,6 +237,13 @@ static int pciehp_probe(struct pcie_device *dev)
3101 + else if (pciehp_acpi_slot_detection_check(dev->port))
3102 + goto err_out_none;
3103 +
3104 ++ if (!dev->port->subordinate) {
3105 ++ /* Can happen if we run out of bus numbers during probe */
3106 ++ dev_err(&dev->device,
3107 ++ "Hotplug bridge without secondary bus, ignoring\n");
3108 ++ goto err_out_none;
3109 ++ }
3110 ++
3111 + ctrl = pcie_init(dev);
3112 + if (!ctrl) {
3113 + dev_err(&dev->device, "Controller initialization failed\n");
3114 +diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
3115 +index a55e248618cd..985ada79191e 100644
3116 +--- a/drivers/pci/pci-sysfs.c
3117 ++++ b/drivers/pci/pci-sysfs.c
3118 +@@ -173,7 +173,7 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
3119 + {
3120 + struct pci_dev *pci_dev = to_pci_dev(dev);
3121 +
3122 +- return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02x\n",
3123 ++ return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n",
3124 + pci_dev->vendor, pci_dev->device,
3125 + pci_dev->subsystem_vendor, pci_dev->subsystem_device,
3126 + (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8),
3127 +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
3128 +index 61bc33ed1116..e587d0035a74 100644
3129 +--- a/drivers/pci/quirks.c
3130 ++++ b/drivers/pci/quirks.c
3131 +@@ -28,6 +28,7 @@
3132 + #include <linux/ioport.h>
3133 + #include <linux/sched.h>
3134 + #include <linux/ktime.h>
3135 ++#include <linux/mm.h>
3136 + #include <asm/dma.h> /* isa_dma_bridge_buggy */
3137 + #include "pci.h"
3138 +
3139 +@@ -291,6 +292,25 @@ static void __devinit quirk_citrine(struct pci_dev *dev)
3140 + }
3141 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine);
3142 +
3143 ++/* On IBM Crocodile ipr SAS adapters, expand BAR to system page size */
3144 ++static void quirk_extend_bar_to_page(struct pci_dev *dev)
3145 ++{
3146 ++ int i;
3147 ++
3148 ++ for (i = 0; i < PCI_STD_RESOURCE_END; i++) {
3149 ++ struct resource *r = &dev->resource[i];
3150 ++
3151 ++ if (r->flags & IORESOURCE_MEM && resource_size(r) < PAGE_SIZE) {
3152 ++ r->end = PAGE_SIZE - 1;
3153 ++ r->start = 0;
3154 ++ r->flags |= IORESOURCE_UNSET;
3155 ++ dev_info(&dev->dev, "expanded BAR %d to page size: %pR\n",
3156 ++ i, r);
3157 ++ }
3158 ++ }
3159 ++}
3160 ++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, 0x034a, quirk_extend_bar_to_page);
3161 ++
3162 + /*
3163 + * S3 868 and 968 chips report region size equal to 32M, but they decode 64M.
3164 + * If it's needed, re-allocate the region.
3165 +diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
3166 +index c1a3fd8e1243..4d047316e831 100644
3167 +--- a/drivers/platform/x86/acer-wmi.c
3168 ++++ b/drivers/platform/x86/acer-wmi.c
3169 +@@ -523,6 +523,17 @@ static const struct dmi_system_id video_vendor_dmi_table[] = {
3170 + DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 4750"),
3171 + },
3172 + },
3173 ++ {
3174 ++ /*
3175 ++ * Note no video_set_backlight_video_vendor, we must use the
3176 ++ * acer interface, as there is no native backlight interface.
3177 ++ */
3178 ++ .ident = "Acer KAV80",
3179 ++ .matches = {
3180 ++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
3181 ++ DMI_MATCH(DMI_PRODUCT_NAME, "KAV80"),
3182 ++ },
3183 ++ },
3184 + {}
3185 + };
3186 +
3187 +diff --git a/drivers/platform/x86/samsung-laptop.c b/drivers/platform/x86/samsung-laptop.c
3188 +index de9f432cf22d..28c1bdb2e59b 100644
3189 +--- a/drivers/platform/x86/samsung-laptop.c
3190 ++++ b/drivers/platform/x86/samsung-laptop.c
3191 +@@ -1517,6 +1517,16 @@ static struct dmi_system_id __initdata samsung_dmi_table[] = {
3192 + },
3193 + .driver_data = &samsung_broken_acpi_video,
3194 + },
3195 ++ {
3196 ++ .callback = samsung_dmi_matched,
3197 ++ .ident = "NC210",
3198 ++ .matches = {
3199 ++ DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
3200 ++ DMI_MATCH(DMI_PRODUCT_NAME, "NC210/NC110"),
3201 ++ DMI_MATCH(DMI_BOARD_NAME, "NC210/NC110"),
3202 ++ },
3203 ++ .driver_data = &samsung_broken_acpi_video,
3204 ++ },
3205 + { },
3206 + };
3207 + MODULE_DEVICE_TABLE(dmi, samsung_dmi_table);
3208 +diff --git a/drivers/power/charger-manager.c b/drivers/power/charger-manager.c
3209 +index 4c449b26de46..102267fc713d 100644
3210 +--- a/drivers/power/charger-manager.c
3211 ++++ b/drivers/power/charger-manager.c
3212 +@@ -808,6 +808,11 @@ static int charger_manager_probe(struct platform_device *pdev)
3213 + goto err_no_charger_stat;
3214 + }
3215 +
3216 ++ if (!desc->psy_fuel_gauge) {
3217 ++ dev_err(&pdev->dev, "No fuel gauge power supply defined\n");
3218 ++ return -EINVAL;
3219 ++ }
3220 ++
3221 + /* Counting index only */
3222 + while (desc->psy_charger_stat[i])
3223 + i++;
3224 +diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
3225 +index dcc39b612780..185971c2b41e 100644
3226 +--- a/drivers/scsi/scsi_error.c
3227 ++++ b/drivers/scsi/scsi_error.c
3228 +@@ -1679,8 +1679,10 @@ static void scsi_restart_operations(struct Scsi_Host *shost)
3229 + * is no point trying to lock the door of an off-line device.
3230 + */
3231 + shost_for_each_device(sdev, shost) {
3232 +- if (scsi_device_online(sdev) && sdev->locked)
3233 ++ if (scsi_device_online(sdev) && sdev->was_reset && sdev->locked) {
3234 + scsi_eh_lock_door(sdev);
3235 ++ sdev->was_reset = 0;
3236 ++ }
3237 + }
3238 +
3239 + /*
3240 +diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c
3241 +index b9f0192758d6..efc494a65b43 100644
3242 +--- a/drivers/spi/spi-dw-mid.c
3243 ++++ b/drivers/spi/spi-dw-mid.c
3244 +@@ -89,7 +89,10 @@ err_exit:
3245 +
3246 + static void mid_spi_dma_exit(struct dw_spi *dws)
3247 + {
3248 ++ dmaengine_terminate_all(dws->txchan);
3249 + dma_release_channel(dws->txchan);
3250 ++
3251 ++ dmaengine_terminate_all(dws->rxchan);
3252 + dma_release_channel(dws->rxchan);
3253 + }
3254 +
3255 +@@ -136,7 +139,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
3256 + txconf.dst_addr = dws->dma_addr;
3257 + txconf.dst_maxburst = LNW_DMA_MSIZE_16;
3258 + txconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
3259 +- txconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
3260 ++ txconf.dst_addr_width = dws->dma_width;
3261 + txconf.device_fc = false;
3262 +
3263 + txchan->device->device_control(txchan, DMA_SLAVE_CONFIG,
3264 +@@ -159,7 +162,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
3265 + rxconf.src_addr = dws->dma_addr;
3266 + rxconf.src_maxburst = LNW_DMA_MSIZE_16;
3267 + rxconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
3268 +- rxconf.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
3269 ++ rxconf.src_addr_width = dws->dma_width;
3270 + rxconf.device_fc = false;
3271 +
3272 + rxchan->device->device_control(rxchan, DMA_SLAVE_CONFIG,
3273 +diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c
3274 +index 469eb28e8328..e3b845ae93c6 100644
3275 +--- a/drivers/spi/spi-pl022.c
3276 ++++ b/drivers/spi/spi-pl022.c
3277 +@@ -1061,7 +1061,7 @@ err_rxdesc:
3278 + pl022->sgt_tx.nents, DMA_TO_DEVICE);
3279 + err_tx_sgmap:
3280 + dma_unmap_sg(rxchan->device->dev, pl022->sgt_rx.sgl,
3281 +- pl022->sgt_tx.nents, DMA_FROM_DEVICE);
3282 ++ pl022->sgt_rx.nents, DMA_FROM_DEVICE);
3283 + err_rx_sgmap:
3284 + sg_free_table(&pl022->sgt_tx);
3285 + err_alloc_tx_sg:
3286 +diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c
3287 +index cd82b56d58af..2db80b1fda82 100644
3288 +--- a/drivers/staging/iio/impedance-analyzer/ad5933.c
3289 ++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c
3290 +@@ -109,15 +109,44 @@ static struct ad5933_platform_data ad5933_default_pdata = {
3291 + };
3292 +
3293 + static struct iio_chan_spec ad5933_channels[] = {
3294 +- IIO_CHAN(IIO_TEMP, 0, 1, 1, NULL, 0, 0, 0,
3295 +- 0, AD5933_REG_TEMP_DATA, IIO_ST('s', 14, 16, 0), 0),
3296 +- /* Ring Channels */
3297 +- IIO_CHAN(IIO_VOLTAGE, 0, 1, 0, "real_raw", 0, 0,
3298 +- IIO_CHAN_INFO_SCALE_SEPARATE_BIT,
3299 +- AD5933_REG_REAL_DATA, 0, IIO_ST('s', 16, 16, 0), 0),
3300 +- IIO_CHAN(IIO_VOLTAGE, 0, 1, 0, "imag_raw", 0, 0,
3301 +- IIO_CHAN_INFO_SCALE_SEPARATE_BIT,
3302 +- AD5933_REG_IMAG_DATA, 1, IIO_ST('s', 16, 16, 0), 0),
3303 ++ {
3304 ++ .type = IIO_TEMP,
3305 ++ .indexed = 1,
3306 ++ .processed_val = 1,
3307 ++ .channel = 0,
3308 ++ .address = AD5933_REG_TEMP_DATA,
3309 ++ .scan_type = {
3310 ++ .sign = 's',
3311 ++ .realbits = 14,
3312 ++ .storagebits = 16,
3313 ++ },
3314 ++ }, { /* Ring Channels */
3315 ++ .type = IIO_VOLTAGE,
3316 ++ .indexed = 1,
3317 ++ .channel = 0,
3318 ++ .extend_name = "real",
3319 ++ .info_mask = IIO_CHAN_INFO_SCALE_SEPARATE_BIT,
3320 ++ .address = AD5933_REG_REAL_DATA,
3321 ++ .scan_index = 0,
3322 ++ .scan_type = {
3323 ++ .sign = 's',
3324 ++ .realbits = 16,
3325 ++ .storagebits = 16,
3326 ++ },
3327 ++ }, {
3328 ++ .type = IIO_VOLTAGE,
3329 ++ .indexed = 1,
3330 ++ .channel = 0,
3331 ++ .extend_name = "imag",
3332 ++ .info_mask = IIO_CHAN_INFO_SCALE_SEPARATE_BIT,
3333 ++ .address = AD5933_REG_IMAG_DATA,
3334 ++ .scan_index = 1,
3335 ++ .scan_type = {
3336 ++ .sign = 's',
3337 ++ .realbits = 16,
3338 ++ .storagebits = 16,
3339 ++ },
3340 ++ },
3341 + };
3342 +
3343 + static int ad5933_i2c_write(struct i2c_client *client,
3344 +diff --git a/drivers/staging/iio/meter/ade7758_ring.c b/drivers/staging/iio/meter/ade7758_ring.c
3345 +index c45b23bb1229..629a6ed2c6ed 100644
3346 +--- a/drivers/staging/iio/meter/ade7758_ring.c
3347 ++++ b/drivers/staging/iio/meter/ade7758_ring.c
3348 +@@ -96,7 +96,7 @@ static int ade7758_ring_preenable(struct iio_dev *indio_dev)
3349 + size_t d_size;
3350 + unsigned channel;
3351 +
3352 +- if (!bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength))
3353 ++ if (bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength))
3354 + return -EINVAL;
3355 +
3356 + channel = find_first_bit(indio_dev->active_scan_mask,
3357 +diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
3358 +index 34df0b2a630e..b4b308ef6cf5 100644
3359 +--- a/drivers/target/target_core_transport.c
3360 ++++ b/drivers/target/target_core_transport.c
3361 +@@ -3284,8 +3284,7 @@ static void transport_complete_qf(struct se_cmd *cmd)
3362 +
3363 + if (cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) {
3364 + ret = cmd->se_tfo->queue_status(cmd);
3365 +- if (ret)
3366 +- goto out;
3367 ++ goto out;
3368 + }
3369 +
3370 + switch (cmd->data_direction) {
3371 +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
3372 +index d53f39668044..6f8f985e7805 100644
3373 +--- a/drivers/tty/serial/8250/8250_pci.c
3374 ++++ b/drivers/tty/serial/8250/8250_pci.c
3375 +@@ -1164,6 +1164,7 @@ pci_xr17c154_setup(struct serial_private *priv,
3376 + #define PCI_DEVICE_ID_PLX_CRONYX_OMEGA 0xc001
3377 + #define PCI_DEVICE_ID_INTEL_PATSBURG_KT 0x1d3d
3378 + #define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a
3379 ++#define PCI_DEVICE_ID_INTEL_QRK_UART 0x0936
3380 +
3381 + /* Unknown vendors/cards - this should not be in linux/pci_ids.h */
3382 + #define PCI_SUBDEVICE_ID_UNKNOWN_0x1584 0x1584
3383 +@@ -1686,6 +1687,13 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
3384 + .init = pci_eg20t_init,
3385 + .setup = pci_default_setup,
3386 + },
3387 ++ {
3388 ++ .vendor = PCI_VENDOR_ID_INTEL,
3389 ++ .device = PCI_DEVICE_ID_INTEL_QRK_UART,
3390 ++ .subvendor = PCI_ANY_ID,
3391 ++ .subdevice = PCI_ANY_ID,
3392 ++ .setup = pci_default_setup,
3393 ++ },
3394 + /*
3395 + * Cronyx Omega PCI (PLX-chip based)
3396 + */
3397 +@@ -1894,6 +1902,7 @@ enum pci_board_num_t {
3398 + pbn_ADDIDATA_PCIe_4_3906250,
3399 + pbn_ADDIDATA_PCIe_8_3906250,
3400 + pbn_ce4100_1_115200,
3401 ++ pbn_qrk,
3402 + pbn_omegapci,
3403 + pbn_NETMOS9900_2s_115200,
3404 + pbn_brcm_trumanage,
3405 +@@ -2592,6 +2601,12 @@ static struct pciserial_board pci_boards[] __devinitdata = {
3406 + .base_baud = 921600,
3407 + .reg_shift = 2,
3408 + },
3409 ++ [pbn_qrk] = {
3410 ++ .flags = FL_BASE0,
3411 ++ .num_ports = 1,
3412 ++ .base_baud = 2764800,
3413 ++ .reg_shift = 2,
3414 ++ },
3415 + [pbn_omegapci] = {
3416 + .flags = FL_BASE0,
3417 + .num_ports = 8,
3418 +@@ -4164,6 +4179,12 @@ static struct pci_device_id serial_pci_tbl[] = {
3419 + pbn_ce4100_1_115200 },
3420 +
3421 + /*
3422 ++ * Intel Quark x1000
3423 ++ */
3424 ++ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_QRK_UART,
3425 ++ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
3426 ++ pbn_qrk },
3427 ++ /*
3428 + * Cronyx Omega PCI
3429 + */
3430 + { PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_CRONYX_OMEGA,
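The new pbn_qrk board entry encodes the Quark X1000's UART clocking: by 8250 convention base_baud is the input clock divided by 16, so 2764800 implies a 44.2368 MHz uartclk:

    #define QRK_UARTCLK   44236800                /* 2764800 * 16 */
    #define QRK_BASE_BAUD (QRK_UARTCLK / 16)      /* = 2764800, as in pbn_qrk */

reg_shift = 2 reflects the registers being spaced on 32-bit boundaries in BAR0 (FL_BASE0).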
3431 +diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
3432 +index 4185cc5332ab..82aac2920e19 100644
3433 +--- a/drivers/tty/serial/serial_core.c
3434 ++++ b/drivers/tty/serial/serial_core.c
3435 +@@ -355,7 +355,7 @@ uart_get_baud_rate(struct uart_port *port, struct ktermios *termios,
3436 + * The spd_hi, spd_vhi, spd_shi, spd_warp kludge...
3437 + * Die! Die! Die!
3438 + */
3439 +- if (baud == 38400)
3440 ++ if (try == 0 && baud == 38400)
3441 + baud = altbaud;
3442 +
3443 + /*
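uart_get_baud_rate() runs its lookup in a two-pass loop, and the fix confines the setserial spd_hi/spd_vhi-style 38400 aliasing to the first pass; the retry pass can then fall back to a sane rate instead of re-applying an altbaud the port cannot do. The shape of the surrounding code, sketched:

    for (try = 0; try < 2; try++) {
            baud = tty_termios_baud_rate(termios);

            /* honour the spd_* aliasing only on the first pass */
            if (try == 0 && baud == 38400)
                    baud = altbaud;

            /* ... clamp to [min, max] and return if acceptable ... */
    }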
3444 +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
3445 +index b28d6356a142..a07eb4c068a0 100644
3446 +--- a/drivers/tty/tty_io.c
3447 ++++ b/drivers/tty/tty_io.c
3448 +@@ -1633,6 +1633,8 @@ int tty_release(struct inode *inode, struct file *filp)
3449 + int devpts;
3450 + int idx;
3451 + char buf[64];
3452 ++ long timeout = 0;
3453 ++ int once = 1;
3454 +
3455 + if (tty_paranoia_check(tty, inode, __func__))
3456 + return 0;
3457 +@@ -1713,11 +1715,18 @@ int tty_release(struct inode *inode, struct file *filp)
3458 + if (!do_sleep)
3459 + break;
3460 +
3461 +- printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
3462 ++ if (once) {
3463 ++ once = 0;
3464 ++ printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
3465 + __func__, tty_name(tty, buf));
3466 ++ }
3467 + tty_unlock();
3468 + mutex_unlock(&tty_mutex);
3469 +- schedule();
3470 ++ schedule_timeout_killable(timeout);
3471 ++ if (timeout < 120 * HZ)
3472 ++ timeout = 2 * timeout + 1;
3473 ++ else
3474 ++ timeout = MAX_SCHEDULE_TIMEOUT;
3475 + }
3476 +
3477 + /*
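The tty_release() change turns a tight reschedule loop into a killable exponential backoff: starting from 0 and applying timeout = 2 * timeout + 1 gives waits of 0, 1, 3, 7, ... jiffies (2^n - 1 after n passes) until the 120*HZ cap, after which the task simply sleeps until woken; the warning is also reduced to a single print. The pattern, distilled (queues_active() is a hypothetical stand-in for the driver's wait condition):

    long timeout = 0;

    while (queues_active()) {
            schedule_timeout_killable(timeout);
            if (timeout < 120 * HZ)
                    timeout = 2 * timeout + 1;
            else
                    timeout = MAX_SCHEDULE_TIMEOUT;
    }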
3478 +diff --git a/drivers/tty/vt/consolemap.c b/drivers/tty/vt/consolemap.c
3479 +index 8308fc7cdc26..87025d01aaec 100644
3480 +--- a/drivers/tty/vt/consolemap.c
3481 ++++ b/drivers/tty/vt/consolemap.c
3482 +@@ -518,6 +518,10 @@ int con_set_unimap(struct vc_data *vc, ushort ct, struct unipair __user *list)
3483 +
3484 + /* Save original vc_unipagdir_loc in case we allocate a new one */
3485 + p = (struct uni_pagedir *)*vc->vc_uni_pagedir_loc;
3486 ++
3487 ++ if (!p)
3488 ++ return -EINVAL;
3489 ++
3490 + if (p->readonly) return -EIO;
3491 +
3492 + if (!ct) return 0;
3493 +diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
3494 +index 2f2540ff21f6..8f4a628d3382 100644
3495 +--- a/drivers/usb/class/cdc-acm.c
3496 ++++ b/drivers/usb/class/cdc-acm.c
3497 +@@ -910,11 +910,12 @@ static void acm_tty_set_termios(struct tty_struct *tty,
3498 + /* FIXME: Needs to clear unsupported bits in the termios */
3499 + acm->clocal = ((termios->c_cflag & CLOCAL) != 0);
3500 +
3501 +- if (!newline.dwDTERate) {
3502 ++ if (C_BAUD(tty) == B0) {
3503 + newline.dwDTERate = acm->line.dwDTERate;
3504 + newctrl &= ~ACM_CTRL_DTR;
3505 +- } else
3506 ++ } else if (termios_old && (termios_old->c_cflag & CBAUD) == B0) {
3507 + newctrl |= ACM_CTRL_DTR;
3508 ++ }
3509 +
3510 + if (newctrl != acm->ctrlout)
3511 + acm_set_control(acm, acm->ctrlout = newctrl);
3512 +@@ -1601,6 +1602,7 @@ static const struct usb_device_id acm_ids[] = {
3513 + { USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */
3514 + .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
3515 + },
3516 ++ { USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */
3517 + { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */
3518 + },
3519 + /* Motorola H24 HSPA module: */
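The cdc-acm termios fix follows the tty convention that B0 means "hang up": on B0 the driver keeps the previously requested line rate but drops DTR, and DTR is re-raised only on a transition out of B0 rather than, as before, on every termios change that wasn't B0. In outline:

    if (C_BAUD(tty) == B0) {
            /* hang up: reuse acm->line.dwDTERate, clear ACM_CTRL_DTR */
    } else if (termios_old && (termios_old->c_cflag & CBAUD) == B0) {
            /* leaving B0: set ACM_CTRL_DTR again */
    }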
3520 +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
3521 +index e2cc8df3d87b..6baeada782eb 100644
3522 +--- a/drivers/usb/core/hcd.c
3523 ++++ b/drivers/usb/core/hcd.c
3524 +@@ -1882,6 +1882,8 @@ int usb_alloc_streams(struct usb_interface *interface,
3525 + return -EINVAL;
3526 + if (dev->speed != USB_SPEED_SUPER)
3527 + return -EINVAL;
3528 ++ if (dev->state < USB_STATE_CONFIGURED)
3529 ++ return -ENODEV;
3530 +
3531 + /* Streams only apply to bulk endpoints. */
3532 + for (i = 0; i < num_eps; i++)
3533 +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
3534 +index 62a9e44bfef6..93f2538b16cc 100644
3535 +--- a/drivers/usb/core/hub.c
3536 ++++ b/drivers/usb/core/hub.c
3537 +@@ -1638,8 +1638,10 @@ void usb_set_device_state(struct usb_device *udev,
3538 + || new_state == USB_STATE_SUSPENDED)
3539 + ; /* No change to wakeup settings */
3540 + else if (new_state == USB_STATE_CONFIGURED)
3541 +- wakeup = udev->actconfig->desc.bmAttributes
3542 +- & USB_CONFIG_ATT_WAKEUP;
3543 ++ wakeup = (udev->quirks &
3544 ++ USB_QUIRK_IGNORE_REMOTE_WAKEUP) ? 0 :
3545 ++ udev->actconfig->desc.bmAttributes &
3546 ++ USB_CONFIG_ATT_WAKEUP;
3547 + else
3548 + wakeup = 0;
3549 + }
3550 +@@ -3359,6 +3361,9 @@ check_highspeed (struct usb_hub *hub, struct usb_device *udev, int port1)
3551 + struct usb_qualifier_descriptor *qual;
3552 + int status;
3553 +
3554 ++ if (udev->quirks & USB_QUIRK_DEVICE_QUALIFIER)
3555 ++ return;
3556 ++
3557 + qual = kmalloc (sizeof *qual, GFP_KERNEL);
3558 + if (qual == NULL)
3559 + return;
3560 +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
3561 +index bcde6f65b1c6..980a9d8c6504 100644
3562 +--- a/drivers/usb/core/quirks.c
3563 ++++ b/drivers/usb/core/quirks.c
3564 +@@ -88,6 +88,12 @@ static const struct usb_device_id usb_quirk_list[] = {
3565 + { USB_DEVICE(0x04e8, 0x6601), .driver_info =
3566 + USB_QUIRK_CONFIG_INTF_STRINGS },
3567 +
3568 ++ { USB_DEVICE(0x04f3, 0x009b), .driver_info =
3569 ++ USB_QUIRK_DEVICE_QUALIFIER },
3570 ++
3571 ++ { USB_DEVICE(0x04f3, 0x016f), .driver_info =
3572 ++ USB_QUIRK_DEVICE_QUALIFIER },
3573 ++
3574 + /* Roland SC-8820 */
3575 + { USB_DEVICE(0x0582, 0x0007), .driver_info = USB_QUIRK_RESET_RESUME },
3576 +
3577 +@@ -158,6 +164,10 @@ static const struct usb_device_id usb_interface_quirk_list[] = {
3578 + { USB_VENDOR_AND_INTERFACE_INFO(0x046d, USB_CLASS_VIDEO, 1, 0),
3579 + .driver_info = USB_QUIRK_RESET_RESUME },
3580 +
3581 ++ /* ASUS Base Station(T100) */
3582 ++ { USB_DEVICE(0x0b05, 0x17e0), .driver_info =
3583 ++ USB_QUIRK_IGNORE_REMOTE_WAKEUP },
3584 ++
3585 + { } /* terminating entry must be last */
3586 + };
3587 +
3588 +diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
3589 +index 5bf2bc00821b..8a7a8ee176fa 100644
3590 +--- a/drivers/usb/dwc3/ep0.c
3591 ++++ b/drivers/usb/dwc3/ep0.c
3592 +@@ -209,7 +209,7 @@ static void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
3593 + struct dwc3_ep *dep = dwc->eps[0];
3594 +
3595 + /* stall is always issued on EP0 */
3596 +- __dwc3_gadget_ep_set_halt(dep, 1);
3597 ++ __dwc3_gadget_ep_set_halt(dep, 1, false);
3598 + dep->flags = DWC3_EP_ENABLED;
3599 + dwc->delayed_status = false;
3600 +
3601 +@@ -382,7 +382,7 @@ static int dwc3_ep0_handle_feature(struct dwc3 *dwc,
3602 + return -EINVAL;
3603 + if (set == 0 && (dep->flags & DWC3_EP_WEDGE))
3604 + break;
3605 +- ret = __dwc3_gadget_ep_set_halt(dep, set);
3606 ++ ret = __dwc3_gadget_ep_set_halt(dep, set, true);
3607 + if (ret)
3608 + return -EINVAL;
3609 + break;
3610 +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
3611 +index 895497d42270..1acb3a419539 100644
3612 +--- a/drivers/usb/dwc3/gadget.c
3613 ++++ b/drivers/usb/dwc3/gadget.c
3614 +@@ -485,12 +485,11 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep,
3615 + if (!usb_endpoint_xfer_isoc(desc))
3616 + return 0;
3617 +
3618 +- memset(&trb_link, 0, sizeof(trb_link));
3619 +-
3620 + /* Link TRB for ISOC. The HWO bit is never reset */
3621 + trb_st_hw = &dep->trb_pool[0];
3622 +
3623 + trb_link = &dep->trb_pool[DWC3_TRB_NUM - 1];
3624 ++ memset(trb_link, 0, sizeof(*trb_link));
3625 +
3626 + trb_link->bpl = lower_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw));
3627 + trb_link->bph = upper_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw));
3628 +@@ -533,7 +532,7 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
3629 +
3630 + /* make sure HW endpoint isn't stalled */
3631 + if (dep->flags & DWC3_EP_STALL)
3632 +- __dwc3_gadget_ep_set_halt(dep, 0);
3633 ++ __dwc3_gadget_ep_set_halt(dep, 0, false);
3634 +
3635 + reg = dwc3_readl(dwc->regs, DWC3_DALEPENA);
3636 + reg &= ~DWC3_DALEPENA_EP(dep->number);
3637 +@@ -1078,7 +1077,7 @@ out0:
3638 + return ret;
3639 + }
3640 +
3641 +-int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value)
3642 ++int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
3643 + {
3644 + struct dwc3_gadget_ep_cmd_params params;
3645 + struct dwc3 *dwc = dep->dwc;
3646 +@@ -1087,6 +1086,14 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value)
3647 + memset(&params, 0x00, sizeof(params));
3648 +
3649 + if (value) {
3650 ++ if (!protocol && ((dep->direction && dep->flags & DWC3_EP_BUSY) ||
3651 ++ (!list_empty(&dep->req_queued) ||
3652 ++ !list_empty(&dep->request_list)))) {
3653 ++ dev_dbg(dwc->dev, "%s: pending request, cannot halt\n",
3654 ++ dep->name);
3655 ++ return -EAGAIN;
3656 ++ }
3657 ++
3658 + if (dep->number == 0 || dep->number == 1) {
3659 + /*
3660 + * Whenever EP0 is stalled, we will restart
3661 +@@ -1135,7 +1142,7 @@ static int dwc3_gadget_ep_set_halt(struct usb_ep *ep, int value)
3662 + goto out;
3663 + }
3664 +
3665 +- ret = __dwc3_gadget_ep_set_halt(dep, value);
3666 ++ ret = __dwc3_gadget_ep_set_halt(dep, value, false);
3667 + out:
3668 + spin_unlock_irqrestore(&dwc->lock, flags);
3669 +
3670 +diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
3671 +index a8600084348c..6f498fc4f568 100644
3672 +--- a/drivers/usb/dwc3/gadget.h
3673 ++++ b/drivers/usb/dwc3/gadget.h
3674 +@@ -108,7 +108,7 @@ void dwc3_ep0_interrupt(struct dwc3 *dwc,
3675 + void dwc3_ep0_out_start(struct dwc3 *dwc);
3676 + int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
3677 + gfp_t gfp_flags);
3678 +-int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value);
3679 ++int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol);
3680 + int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
3681 + unsigned cmd, struct dwc3_gadget_ep_cmd_params *params);
3682 +
3683 +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
3684 +index 8882d654b0d1..c8835d591b37 100644
3685 +--- a/drivers/usb/host/xhci-pci.c
3686 ++++ b/drivers/usb/host/xhci-pci.c
3687 +@@ -118,20 +118,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
3688 + xhci->quirks |= XHCI_SPURIOUS_REBOOT;
3689 + xhci->quirks |= XHCI_AVOID_BEI;
3690 + }
3691 +- if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
3692 +- (pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI ||
3693 +- pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) {
3694 +- /* Workaround for occasional spurious wakeups from S5 (or
3695 +- * any other sleep) on Haswell machines with LPT and LPT-LP
3696 +- * with the new Intel BIOS
3697 +- */
3698 +- /* Limit the quirk to only known vendors, as this triggers
3699 +- * yet another BIOS bug on some other machines
3700 +- * https://bugzilla.kernel.org/show_bug.cgi?id=66171
3701 +- */
3702 +- if (pdev->subsystem_vendor == PCI_VENDOR_ID_HP)
3703 +- xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
3704 +- }
3705 + if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
3706 + pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
3707 + xhci->quirks |= XHCI_RESET_ON_RESUME;
3708 +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
3709 +index ac339570a805..19074db60896 100644
3710 +--- a/drivers/usb/serial/cp210x.c
3711 ++++ b/drivers/usb/serial/cp210x.c
3712 +@@ -128,6 +128,7 @@ static const struct usb_device_id id_table[] = {
3713 + { USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */
3714 + { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
3715 + { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
3716 ++ { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
3717 + { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
3718 + { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
3719 + { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
3720 +@@ -160,7 +161,9 @@ static const struct usb_device_id id_table[] = {
3721 + { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
3722 + { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
3723 + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
3724 ++ { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */
3725 + { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
3726 ++ { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */
3727 + { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */
3728 + { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
3729 + { USB_DEVICE(0x1FB9, 0x0100) }, /* Lake Shore Model 121 Current Source */
3730 +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
3731 +index 8425e9e9e127..a89433bd5314 100644
3732 +--- a/drivers/usb/serial/ftdi_sio.c
3733 ++++ b/drivers/usb/serial/ftdi_sio.c
3734 +@@ -156,6 +156,7 @@ static struct ftdi_sio_quirk ftdi_8u2232c_quirk = {
3735 + * /sys/bus/usb/ftdi_sio/new_id, then send patch/report!
3736 + */
3737 + static struct usb_device_id id_table_combined [] = {
3738 ++ { USB_DEVICE(FTDI_VID, FTDI_BRICK_PID) },
3739 + { USB_DEVICE(FTDI_VID, FTDI_ZEITCONTROL_TAGTRACE_MIFARE_PID) },
3740 + { USB_DEVICE(FTDI_VID, FTDI_CTI_MINI_PID) },
3741 + { USB_DEVICE(FTDI_VID, FTDI_CTI_NANO_PID) },
3742 +@@ -685,6 +686,10 @@ static struct usb_device_id id_table_combined [] = {
3743 + { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_5_PID) },
3744 + { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_6_PID) },
3745 + { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_7_PID) },
3746 ++ { USB_DEVICE(XSENS_VID, XSENS_AWINDA_DONGLE_PID) },
3747 ++ { USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) },
3748 ++ { USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) },
3749 ++ { USB_DEVICE(XSENS_VID, XSENS_MTW_PID) },
3750 + { USB_DEVICE(FTDI_VID, FTDI_OMNI1509) },
3751 + { USB_DEVICE(MOBILITY_VID, MOBILITY_USB_SERIAL_PID) },
3752 + { USB_DEVICE(FTDI_VID, FTDI_ACTIVE_ROBOTS_PID) },
3753 +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
3754 +index 7628b91017ba..64ee791687d9 100644
3755 +--- a/drivers/usb/serial/ftdi_sio_ids.h
3756 ++++ b/drivers/usb/serial/ftdi_sio_ids.h
3757 +@@ -30,6 +30,12 @@
3758 +
3759 + /*** third-party PIDs (using FTDI_VID) ***/
3760 +
3761 ++/*
3762 ++ * Certain versions of the official Windows FTDI driver reprogrammed
3763 ++ * counterfeit FTDI devices to PID 0. Support these devices anyway.
3764 ++ */
3765 ++#define FTDI_BRICK_PID 0x0000
3766 ++
3767 + #define FTDI_LUMEL_PD12_PID 0x6002
3768 +
3769 + /*
3770 +@@ -142,12 +148,19 @@
3771 + /*
3772 + * Xsens Technologies BV products (http://www.xsens.com).
3773 + */
3774 +-#define XSENS_CONVERTER_0_PID 0xD388
3775 +-#define XSENS_CONVERTER_1_PID 0xD389
3776 ++#define XSENS_VID 0x2639
3777 ++#define XSENS_AWINDA_STATION_PID 0x0101
3778 ++#define XSENS_AWINDA_DONGLE_PID 0x0102
3779 ++#define XSENS_MTW_PID 0x0200 /* Xsens MTw */
3780 ++#define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */
3781 ++
3782 ++/* Xsens devices using FTDI VID */
3783 ++#define XSENS_CONVERTER_0_PID 0xD388 /* Xsens USB converter */
3784 ++#define XSENS_CONVERTER_1_PID 0xD389 /* Xsens Wireless Receiver */
3785 + #define XSENS_CONVERTER_2_PID 0xD38A
3786 +-#define XSENS_CONVERTER_3_PID 0xD38B
3787 +-#define XSENS_CONVERTER_4_PID 0xD38C
3788 +-#define XSENS_CONVERTER_5_PID 0xD38D
3789 ++#define XSENS_CONVERTER_3_PID 0xD38B /* Xsens USB-serial converter */
3790 ++#define XSENS_CONVERTER_4_PID 0xD38C /* Xsens Wireless Receiver */
3791 ++#define XSENS_CONVERTER_5_PID 0xD38D /* Xsens Awinda Station */
3792 + #define XSENS_CONVERTER_6_PID 0xD38E
3793 + #define XSENS_CONVERTER_7_PID 0xD38F
3794 +
3795 +diff --git a/drivers/usb/serial/kobil_sct.c b/drivers/usb/serial/kobil_sct.c
3796 +index 4a9a75eb9b95..c3a53acda67a 100644
3797 +--- a/drivers/usb/serial/kobil_sct.c
3798 ++++ b/drivers/usb/serial/kobil_sct.c
3799 +@@ -447,7 +447,7 @@ static int kobil_write(struct tty_struct *tty, struct usb_serial_port *port,
3800 + );
3801 +
3802 + priv->cur_pos = priv->cur_pos + length;
3803 +- result = usb_submit_urb(port->write_urb, GFP_NOIO);
3804 ++ result = usb_submit_urb(port->write_urb, GFP_ATOMIC);
3805 + dbg("%s - port %d Send write URB returns: %i",
3806 + __func__, port->number, result);
3807 + todo = priv->filled - priv->cur_pos;
3808 +@@ -463,7 +463,7 @@ static int kobil_write(struct tty_struct *tty, struct usb_serial_port *port,
3809 + if (priv->device_type == KOBIL_ADAPTER_B_PRODUCT_ID ||
3810 + priv->device_type == KOBIL_ADAPTER_K_PRODUCT_ID) {
3811 + result = usb_submit_urb(port->interrupt_in_urb,
3812 +- GFP_NOIO);
3813 ++ GFP_ATOMIC);
3814 + dbg("%s - port %d Send read URB returns: %i",
3815 + __func__, port->number, result);
3816 + }
3817 +diff --git a/drivers/usb/serial/opticon.c b/drivers/usb/serial/opticon.c
3818 +index 1f850065d159..58b7cecd682f 100644
3819 +--- a/drivers/usb/serial/opticon.c
3820 ++++ b/drivers/usb/serial/opticon.c
3821 +@@ -293,7 +293,7 @@ static int opticon_write(struct tty_struct *tty, struct usb_serial_port *port,
3822 +
3823 + /* The conncected devices do not have a bulk write endpoint,
3824 + * to transmit data to de barcode device the control endpoint is used */
3825 +- dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_NOIO);
3826 ++ dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC);
3827 + if (!dr) {
3828 + dev_err(&port->dev, "out of memory\n");
3829 + count = -ENOMEM;
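Both the kobil_sct and opticon changes address the same class of bug: the tty write path can be entered from atomic context, so allocations made there must not sleep. GFP_NOIO only forbids recursing into I/O for reclaim and may still block; GFP_ATOMIC never sleeps:

    /* process context, may sleep (but won't start I/O to reclaim): */
    dr = kmalloc(sizeof(*dr), GFP_NOIO);

    /* any context, never sleeps, may fail under pressure instead: */
    dr = kmalloc(sizeof(*dr), GFP_ATOMIC);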
3830 +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
3831 +index 703ebe7eaa93..d8232df2c211 100644
3832 +--- a/drivers/usb/serial/option.c
3833 ++++ b/drivers/usb/serial/option.c
3834 +@@ -269,6 +269,7 @@ static void option_instat_callback(struct urb *urb);
3835 + #define TELIT_PRODUCT_DE910_DUAL 0x1010
3836 + #define TELIT_PRODUCT_UE910_V2 0x1012
3837 + #define TELIT_PRODUCT_LE920 0x1200
3838 ++#define TELIT_PRODUCT_LE910 0x1201
3839 +
3840 + /* ZTE PRODUCTS */
3841 + #define ZTE_VENDOR_ID 0x19d2
3842 +@@ -362,6 +363,7 @@ static void option_instat_callback(struct urb *urb);
3843 +
3844 + /* Haier products */
3845 + #define HAIER_VENDOR_ID 0x201e
3846 ++#define HAIER_PRODUCT_CE81B 0x10f8
3847 + #define HAIER_PRODUCT_CE100 0x2009
3848 +
3849 + /* Cinterion (formerly Siemens) products */
3850 +@@ -589,6 +591,11 @@ static const struct option_blacklist_info zte_1255_blacklist = {
3851 + .reserved = BIT(3) | BIT(4),
3852 + };
3853 +
3854 ++static const struct option_blacklist_info telit_le910_blacklist = {
3855 ++ .sendsetup = BIT(0),
3856 ++ .reserved = BIT(1) | BIT(2),
3857 ++};
3858 ++
3859 + static const struct option_blacklist_info telit_le920_blacklist = {
3860 + .sendsetup = BIT(0),
3861 + .reserved = BIT(1) | BIT(5),
3862 +@@ -1138,6 +1145,8 @@ static const struct usb_device_id option_ids[] = {
3863 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
3864 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
3865 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
3866 ++ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
3867 ++ .driver_info = (kernel_ulong_t)&telit_le910_blacklist },
3868 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
3869 + .driver_info = (kernel_ulong_t)&telit_le920_blacklist },
3870 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
3871 +@@ -1614,6 +1623,7 @@ static const struct usb_device_id option_ids[] = {
3872 + { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) },
3873 + { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) },
3874 + { USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) },
3875 ++ { USB_DEVICE_AND_INTERFACE_INFO(HAIER_VENDOR_ID, HAIER_PRODUCT_CE81B, 0xff, 0xff, 0xff) },
3876 + /* Pirelli */
3877 + { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_1)},
3878 + { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_2)},
3879 +diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c
3880 +index c70109e5d60b..d8d26f4f14dd 100644
3881 +--- a/drivers/usb/storage/transport.c
3882 ++++ b/drivers/usb/storage/transport.c
3883 +@@ -1120,6 +1120,31 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
3884 + */
3885 + if (result == USB_STOR_XFER_LONG)
3886 + fake_sense = 1;
3887 ++
3888 ++ /*
3889 ++ * Sometimes a device will mistakenly skip the data phase
3890 ++ * and go directly to the status phase without sending a
3891 ++ * zero-length packet. If we get a 13-byte response here,
3892 ++ * check whether it really is a CSW.
3893 ++ */
3894 ++ if (result == USB_STOR_XFER_SHORT &&
3895 ++ srb->sc_data_direction == DMA_FROM_DEVICE &&
3896 ++ transfer_length - scsi_get_resid(srb) ==
3897 ++ US_BULK_CS_WRAP_LEN) {
3898 ++ struct scatterlist *sg = NULL;
3899 ++ unsigned int offset = 0;
3900 ++
3901 ++ if (usb_stor_access_xfer_buf((unsigned char *) bcs,
3902 ++ US_BULK_CS_WRAP_LEN, srb, &sg,
3903 ++ &offset, FROM_XFER_BUF) ==
3904 ++ US_BULK_CS_WRAP_LEN &&
3905 ++ bcs->Signature ==
3906 ++ cpu_to_le32(US_BULK_CS_SIGN)) {
3907 ++ US_DEBUGP("Device skipped data phase\n");
3908 ++ scsi_set_resid(srb, transfer_length);
3909 ++ goto skipped_data_phase;
3910 ++ }
3911 ++ }
3912 + }
3913 +
3914 + /* See flow chart on pg 15 of the Bulk Only Transport spec for
3915 +@@ -1155,6 +1180,7 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us)
3916 + if (result != USB_STOR_XFER_GOOD)
3917 + return USB_STOR_TRANSPORT_ERROR;
3918 +
3919 ++ skipped_data_phase:
3920 + /* check bulk status */
3921 + residue = le32_to_cpu(bcs->Residue);
3922 + US_DEBUGP("Bulk Status S 0x%x T 0x%x R %u Stat 0x%x\n",
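The 13-byte test works because a Bulk-Only Transport command status wrapper has a fixed size, US_BULK_CS_WRAP_LEN. The structure, as the kernel defines it:

    struct bulk_cs_wrap {
            __le32 Signature;       /* 'USBS' (US_BULK_CS_SIGN)      */
            __u32  Tag;             /* echoes the CBW tag            */
            __le32 Residue;         /* bytes not transferred         */
            __u8   Status;          /* 0 pass, 1 fail, 2 phase error */
    } __attribute__((packed));      /* 4 + 4 + 4 + 1 = 13 bytes      */

When a buggy device skips the data phase, the "data" that arrives is really the CSW: exactly 13 bytes starting with the USBS signature. The new code detects that, forces the residue to the full transfer length, and jumps straight to status handling instead of failing the command.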
3923 +diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c
3924 +index 28b1a834906b..6cbb2069531d 100644
3925 +--- a/drivers/video/console/bitblit.c
3926 ++++ b/drivers/video/console/bitblit.c
3927 +@@ -205,7 +205,6 @@ static void bit_putcs(struct vc_data *vc, struct fb_info *info,
3928 + static void bit_clear_margins(struct vc_data *vc, struct fb_info *info,
3929 + int bottom_only)
3930 + {
3931 +- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
3932 + unsigned int cw = vc->vc_font.width;
3933 + unsigned int ch = vc->vc_font.height;
3934 + unsigned int rw = info->var.xres - (vc->vc_cols*cw);
3935 +@@ -214,7 +213,7 @@ static void bit_clear_margins(struct vc_data *vc, struct fb_info *info,
3936 + unsigned int bs = info->var.yres - bh;
3937 + struct fb_fillrect region;
3938 +
3939 +- region.color = attr_bgcol_ec(bgshift, vc, info);
3940 ++ region.color = 0;
3941 + region.rop = ROP_COPY;
3942 +
3943 + if (rw && !bottom_only) {
3944 +diff --git a/drivers/video/console/fbcon_ccw.c b/drivers/video/console/fbcon_ccw.c
3945 +index 41b32ae23dac..5a3cbf6dff4d 100644
3946 +--- a/drivers/video/console/fbcon_ccw.c
3947 ++++ b/drivers/video/console/fbcon_ccw.c
3948 +@@ -197,9 +197,8 @@ static void ccw_clear_margins(struct vc_data *vc, struct fb_info *info,
3949 + unsigned int bh = info->var.xres - (vc->vc_rows*ch);
3950 + unsigned int bs = vc->vc_rows*ch;
3951 + struct fb_fillrect region;
3952 +- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
3953 +
3954 +- region.color = attr_bgcol_ec(bgshift,vc,info);
3955 ++ region.color = 0;
3956 + region.rop = ROP_COPY;
3957 +
3958 + if (rw && !bottom_only) {
3959 +diff --git a/drivers/video/console/fbcon_cw.c b/drivers/video/console/fbcon_cw.c
3960 +index 6a737827beb1..7d3fd9bda66c 100644
3961 +--- a/drivers/video/console/fbcon_cw.c
3962 ++++ b/drivers/video/console/fbcon_cw.c
3963 +@@ -181,9 +181,8 @@ static void cw_clear_margins(struct vc_data *vc, struct fb_info *info,
3964 + unsigned int bh = info->var.xres - (vc->vc_rows*ch);
3965 + unsigned int rs = info->var.yres - rw;
3966 + struct fb_fillrect region;
3967 +- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
3968 +
3969 +- region.color = attr_bgcol_ec(bgshift,vc,info);
3970 ++ region.color = 0;
3971 + region.rop = ROP_COPY;
3972 +
3973 + if (rw && !bottom_only) {
3974 +diff --git a/drivers/video/console/fbcon_ud.c b/drivers/video/console/fbcon_ud.c
3975 +index ff0872c0498b..19e3714abfe8 100644
3976 +--- a/drivers/video/console/fbcon_ud.c
3977 ++++ b/drivers/video/console/fbcon_ud.c
3978 +@@ -227,9 +227,8 @@ static void ud_clear_margins(struct vc_data *vc, struct fb_info *info,
3979 + unsigned int rw = info->var.xres - (vc->vc_cols*cw);
3980 + unsigned int bh = info->var.yres - (vc->vc_rows*ch);
3981 + struct fb_fillrect region;
3982 +- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
3983 +
3984 +- region.color = attr_bgcol_ec(bgshift,vc,info);
3985 ++ region.color = 0;
3986 + region.rop = ROP_COPY;
3987 +
3988 + if (rw && !bottom_only) {
3989 +diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c
3990 +index 2e03d416b9af..a41f264dc23d 100644
3991 +--- a/drivers/virtio/virtio_pci.c
3992 ++++ b/drivers/virtio/virtio_pci.c
3993 +@@ -745,6 +745,7 @@ static int virtio_pci_restore(struct device *dev)
3994 + struct pci_dev *pci_dev = to_pci_dev(dev);
3995 + struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);
3996 + struct virtio_driver *drv;
3997 ++ unsigned status = 0;
3998 + int ret;
3999 +
4000 + drv = container_of(vp_dev->vdev.dev.driver,
4001 +@@ -755,14 +756,40 @@ static int virtio_pci_restore(struct device *dev)
4002 + return ret;
4003 +
4004 + pci_set_master(pci_dev);
4005 ++ /* We always start by resetting the device, in case a previous
4006 ++ * driver messed it up. */
4007 ++ vp_reset(&vp_dev->vdev);
4008 ++
4009 ++ /* Acknowledge that we've seen the device. */
4010 ++ status |= VIRTIO_CONFIG_S_ACKNOWLEDGE;
4011 ++ vp_set_status(&vp_dev->vdev, status);
4012 ++
4013 ++ /* Maybe driver failed before freeze.
4014 ++ * Restore the failed status, for debugging. */
4015 ++ status |= vp_dev->saved_status & VIRTIO_CONFIG_S_FAILED;
4016 ++ vp_set_status(&vp_dev->vdev, status);
4017 ++
4018 ++ if (!drv)
4019 ++ return 0;
4020 ++
4021 ++ /* We have a driver! */
4022 ++ status |= VIRTIO_CONFIG_S_DRIVER;
4023 ++ vp_set_status(&vp_dev->vdev, status);
4024 ++
4025 + vp_finalize_features(&vp_dev->vdev);
4026 +
4027 +- if (drv && drv->restore)
4028 ++ if (drv->restore) {
4029 + ret = drv->restore(&vp_dev->vdev);
4030 ++ if (ret) {
4031 ++ status |= VIRTIO_CONFIG_S_FAILED;
4032 ++ vp_set_status(&vp_dev->vdev, status);
4033 ++ return ret;
4034 ++ }
4035 ++ }
4036 +
4037 + /* Finally, tell the device we're all set */
4038 +- if (!ret)
4039 +- vp_set_status(&vp_dev->vdev, vp_dev->saved_status);
4040 ++ status |= VIRTIO_CONFIG_S_DRIVER_OK;
4041 ++ vp_set_status(&vp_dev->vdev, status);
4042 +
4043 + return ret;
4044 + }
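virtio requires a strict status handshake (reset, then ACKNOWLEDGE, then DRIVER, then feature negotiation, then DRIVER_OK), and the old restore path skipped straight to writing the saved status byte. The rewritten path replays the sequence; distilled:

    vp_reset(&vp_dev->vdev);                  /* 0: clean slate           */
    status |= VIRTIO_CONFIG_S_ACKNOWLEDGE;    /* guest noticed the device */
    status |= VIRTIO_CONFIG_S_DRIVER;         /* guest has a driver bound */
    vp_finalize_features(&vp_dev->vdev);      /* renegotiate features     */
    /* drv->restore() rebuilds the virtqueues here */
    status |= VIRTIO_CONFIG_S_DRIVER_OK;      /* device may now be used   */

with each step written out via vp_set_status(), and VIRTIO_CONFIG_S_FAILED recorded instead if restore() errors out.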
4045 +diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
4046 +index 5d158d320233..6eab2dd16e94 100644
4047 +--- a/fs/btrfs/file-item.c
4048 ++++ b/fs/btrfs/file-item.c
4049 +@@ -393,7 +393,7 @@ int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end,
4050 + ret = 0;
4051 + fail:
4052 + while (ret < 0 && !list_empty(&tmplist)) {
4053 +- sums = list_entry(&tmplist, struct btrfs_ordered_sum, list);
4054 ++ sums = list_entry(tmplist.next, struct btrfs_ordered_sum, list);
4055 + list_del(&sums->list);
4056 + kfree(sums);
4057 + }
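A classic list_entry() bug: passing &tmplist hands container_of() the list head itself, so the computed btrfs_ordered_sum pointer points into the surrounding stack frame and the kfree() corrupts memory. The first real element hangs off the head's next pointer:

    LIST_HEAD(tmplist);

    /* wrong: container_of() applied to the head itself */
    sums = list_entry(&tmplist, struct btrfs_ordered_sum, list);

    /* right: first entry on the list */
    sums = list_entry(tmplist.next, struct btrfs_ordered_sum, list);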
4058 +diff --git a/fs/buffer.c b/fs/buffer.c
4059 +index f235e1834e39..ed2dc709883a 100644
4060 +--- a/fs/buffer.c
4061 ++++ b/fs/buffer.c
4062 +@@ -1982,6 +1982,7 @@ int generic_write_end(struct file *file, struct address_space *mapping,
4063 + struct page *page, void *fsdata)
4064 + {
4065 + struct inode *inode = mapping->host;
4066 ++ loff_t old_size = inode->i_size;
4067 + int i_size_changed = 0;
4068 +
4069 + copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
4070 +@@ -2001,6 +2002,8 @@ int generic_write_end(struct file *file, struct address_space *mapping,
4071 + unlock_page(page);
4072 + page_cache_release(page);
4073 +
4074 ++ if (old_size < pos)
4075 ++ pagecache_isize_extended(inode, old_size, pos);
4076 + /*
4077 + * Don't mark the inode dirty under page lock. First, it unnecessarily
4078 + * makes the holding time of page lock longer. Second, it forces lock
4079 +@@ -2221,6 +2224,11 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping,
4080 + err = 0;
4081 +
4082 + balance_dirty_pages_ratelimited(mapping);
4083 ++
4084 ++ if (unlikely(fatal_signal_pending(current))) {
4085 ++ err = -EINTR;
4086 ++ goto out;
4087 ++ }
4088 + }
4089 +
4090 + /* page covers the boundary, find the boundary offset */
4091 +diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
4092 +index 11030b2fd3b4..b5b9b4086143 100644
4093 +--- a/fs/ecryptfs/inode.c
4094 ++++ b/fs/ecryptfs/inode.c
4095 +@@ -1093,7 +1093,7 @@ ecryptfs_setxattr(struct dentry *dentry, const char *name, const void *value,
4096 + }
4097 +
4098 + rc = vfs_setxattr(lower_dentry, name, value, size, flags);
4099 +- if (!rc)
4100 ++ if (!rc && dentry->d_inode)
4101 + fsstack_copy_attr_all(dentry->d_inode, lower_dentry->d_inode);
4102 + out:
4103 + return rc;
4104 +diff --git a/fs/ext3/super.c b/fs/ext3/super.c
4105 +index ef4c812c7a63..564f9429b3b1 100644
4106 +--- a/fs/ext3/super.c
4107 ++++ b/fs/ext3/super.c
4108 +@@ -1292,13 +1292,6 @@ set_qf_format:
4109 + "not specified.");
4110 + return 0;
4111 + }
4112 +- } else {
4113 +- if (sbi->s_jquota_fmt) {
4114 +- ext3_msg(sb, KERN_ERR, "error: journaled quota format "
4115 +- "specified with no journaling "
4116 +- "enabled.");
4117 +- return 0;
4118 +- }
4119 + }
4120 + #endif
4121 + return 1;
4122 +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
4123 +index 521ba9d18ce6..b9cdb6df8d2b 100644
4124 +--- a/fs/ext4/ext4.h
4125 ++++ b/fs/ext4/ext4.h
4126 +@@ -1891,6 +1891,7 @@ int ext4_get_block(struct inode *inode, sector_t iblock,
4127 + struct buffer_head *bh_result, int create);
4128 +
4129 + extern struct inode *ext4_iget(struct super_block *, unsigned long);
4130 ++extern struct inode *ext4_iget_normal(struct super_block *, unsigned long);
4131 + extern int ext4_write_inode(struct inode *, struct writeback_control *);
4132 + extern int ext4_setattr(struct dentry *, struct iattr *);
4133 + extern int ext4_getattr(struct vfsmount *mnt, struct dentry *dentry,
4134 +diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
4135 +index 75c4f36bced8..97ca4b6fb2a9 100644
4136 +--- a/fs/ext4/ialloc.c
4137 ++++ b/fs/ext4/ialloc.c
4138 +@@ -725,6 +725,10 @@ got:
4139 + struct buffer_head *block_bitmap_bh;
4140 +
4141 + block_bitmap_bh = ext4_read_block_bitmap(sb, group);
4142 ++ if (!block_bitmap_bh) {
4143 ++ err = -EIO;
4144 ++ goto out;
4145 ++ }
4146 + BUFFER_TRACE(block_bitmap_bh, "get block bitmap access");
4147 + err = ext4_journal_get_write_access(handle, block_bitmap_bh);
4148 + if (err) {
4149 +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
4150 +index 5b6dcba304b1..9e9db425c613 100644
4151 +--- a/fs/ext4/inode.c
4152 ++++ b/fs/ext4/inode.c
4153 +@@ -157,16 +157,14 @@ void ext4_evict_inode(struct inode *inode)
4154 + goto no_delete;
4155 + }
4156 +
4157 +- if (!is_bad_inode(inode))
4158 +- dquot_initialize(inode);
4159 ++ if (is_bad_inode(inode))
4160 ++ goto no_delete;
4161 ++ dquot_initialize(inode);
4162 +
4163 + if (ext4_should_order_data(inode))
4164 + ext4_begin_ordered_truncate(inode, 0);
4165 + truncate_inode_pages(&inode->i_data, 0);
4166 +
4167 +- if (is_bad_inode(inode))
4168 +- goto no_delete;
4169 +-
4170 + handle = ext4_journal_start(inode, ext4_blocks_for_truncate(inode)+3);
4171 + if (IS_ERR(handle)) {
4172 + ext4_std_error(inode->i_sb, PTR_ERR(handle));
4173 +@@ -2410,6 +2408,20 @@ static int ext4_nonda_switch(struct super_block *sb)
4174 + return 0;
4175 + }
4176 +
4177 ++/* We always reserve for an inode update; the superblock could be there too */
4178 ++static int ext4_da_write_credits(struct inode *inode, loff_t pos, unsigned len)
4179 ++{
4180 ++ if (likely(EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb,
4181 ++ EXT4_FEATURE_RO_COMPAT_LARGE_FILE)))
4182 ++ return 1;
4183 ++
4184 ++ if (pos + len <= 0x7fffffffULL)
4185 ++ return 1;
4186 ++
4187 ++ /* We might need to update the superblock to set LARGE_FILE */
4188 ++ return 2;
4189 ++}
4190 ++
4191 + static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
4192 + loff_t pos, unsigned len, unsigned flags,
4193 + struct page **pagep, void **fsdata)
4194 +@@ -2436,7 +2448,8 @@ retry:
4195 + * to journalling the i_disksize update if writes to the end
4196 + * of file which has an already mapped buffer.
4197 + */
4198 +- handle = ext4_journal_start(inode, 1);
4199 ++ handle = ext4_journal_start(inode,
4200 ++ ext4_da_write_credits(inode, pos, len));
4201 + if (IS_ERR(handle)) {
4202 + ret = PTR_ERR(handle);
4203 + goto out;
4204 +@@ -3840,6 +3853,13 @@ bad_inode:
4205 + return ERR_PTR(ret);
4206 + }
4207 +
4208 ++struct inode *ext4_iget_normal(struct super_block *sb, unsigned long ino)
4209 ++{
4210 ++ if (ino < EXT4_FIRST_INO(sb) && ino != EXT4_ROOT_INO)
4211 ++ return ERR_PTR(-EIO);
4212 ++ return ext4_iget(sb, ino);
4213 ++}
4214 ++
4215 + static int ext4_inode_blocks_set(handle_t *handle,
4216 + struct ext4_inode *raw_inode,
4217 + struct ext4_inode_info *ei)
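ext4_iget_normal() closes a corruption hole: inode numbers below EXT4_FIRST_INO(sb) name reserved metadata inodes (bad-blocks, quota, resize, journal), and nothing that parses on-disk directory entries or NFS file handles should ever be allowed to iget() one of them. Callers that take untrusted numbers switch over; fs-internal users keep ext4_iget():

    /* number came from a dirent or an NFS handle: validate it */
    inode = ext4_iget_normal(dir->i_sb, ino);   /* -EIO for reserved inos */

    /* trusted, fs-internal number: */
    inode = ext4_iget(sb, EXT4_JOURNAL_INO);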
4218 +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
4219 +index 665e55ca208c..dc5852301da7 100644
4220 +--- a/fs/ext4/namei.c
4221 ++++ b/fs/ext4/namei.c
4222 +@@ -1051,7 +1051,7 @@ static struct dentry *ext4_lookup(struct inode *dir, struct dentry *dentry, stru
4223 + dentry->d_name.name);
4224 + return ERR_PTR(-EIO);
4225 + }
4226 +- inode = ext4_iget(dir->i_sb, ino);
4227 ++ inode = ext4_iget_normal(dir->i_sb, ino);
4228 + if (inode == ERR_PTR(-ESTALE)) {
4229 + EXT4_ERROR_INODE(dir,
4230 + "deleted inode referenced: %u",
4231 +@@ -1087,7 +1087,7 @@ struct dentry *ext4_get_parent(struct dentry *child)
4232 + return ERR_PTR(-EIO);
4233 + }
4234 +
4235 +- return d_obtain_alias(ext4_iget(child->d_inode->i_sb, ino));
4236 ++ return d_obtain_alias(ext4_iget_normal(child->d_inode->i_sb, ino));
4237 + }
4238 +
4239 + #define S_SHIFT 12
4240 +@@ -1421,31 +1421,38 @@ static int make_indexed_dir(handle_t *handle, struct dentry *dentry,
4241 + hinfo.hash_version += EXT4_SB(dir->i_sb)->s_hash_unsigned;
4242 + hinfo.seed = EXT4_SB(dir->i_sb)->s_hash_seed;
4243 + ext4fs_dirhash(name, namelen, &hinfo);
4244 ++ memset(frames, 0, sizeof(frames));
4245 + frame = frames;
4246 + frame->entries = entries;
4247 + frame->at = entries;
4248 + frame->bh = bh;
4249 + bh = bh2;
4250 +
4251 +- ext4_handle_dirty_metadata(handle, dir, frame->bh);
4252 +- ext4_handle_dirty_metadata(handle, dir, bh);
4253 ++ retval = ext4_handle_dirty_metadata(handle, dir, frame->bh);
4254 ++ if (retval)
4255 ++ goto out_frames;
4256 ++ retval = ext4_handle_dirty_metadata(handle, dir, bh);
4257 ++ if (retval)
4258 ++ goto out_frames;
4259 +
4260 + de = do_split(handle,dir, &bh, frame, &hinfo, &retval);
4261 + if (!de) {
4262 +- /*
4263 +- * Even if the block split failed, we have to properly write
4264 +- * out all the changes we did so far. Otherwise we can end up
4265 +- * with corrupted filesystem.
4266 +- */
4267 +- ext4_mark_inode_dirty(handle, dir);
4268 +- dx_release(frames);
4269 +- return retval;
4270 ++ goto out_frames;
4271 + }
4272 + dx_release(frames);
4273 +
4274 + retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
4275 + brelse(bh);
4276 + return retval;
4277 ++out_frames:
4278 ++ /*
4279 ++ * Even if the block split failed, we have to properly write
4280 ++ * out all the changes we did so far. Otherwise we can end up
4281 ++ * with corrupted filesystem.
4282 ++ */
4283 ++ ext4_mark_inode_dirty(handle, dir);
4284 ++ dx_release(frames);
4285 ++ return retval;
4286 + }
4287 +
4288 + /*
4289 +@@ -1992,7 +1999,7 @@ int ext4_orphan_add(handle_t *handle, struct inode *inode)
4290 + struct ext4_iloc iloc;
4291 + int err = 0, rc;
4292 +
4293 +- if (!ext4_handle_valid(handle))
4294 ++ if (!ext4_handle_valid(handle) || is_bad_inode(inode))
4295 + return 0;
4296 +
4297 + mutex_lock(&EXT4_SB(sb)->s_orphan_lock);
4298 +diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
4299 +index a43e43c835d1..cfd321104250 100644
4300 +--- a/fs/ext4/resize.c
4301 ++++ b/fs/ext4/resize.c
4302 +@@ -991,7 +991,7 @@ static void update_backups(struct super_block *sb,
4303 + (err = ext4_journal_restart(handle, EXT4_MAX_TRANS_DATA)))
4304 + break;
4305 +
4306 +- bh = sb_getblk(sb, group * bpg + blk_off);
4307 ++ bh = sb_getblk(sb, ((ext4_fsblk_t)group) * bpg + blk_off);
4308 + if (!bh) {
4309 + err = -ENOMEM;
4310 + break;
4311 +diff --git a/fs/ext4/super.c b/fs/ext4/super.c
4312 +index f0e4e46867f7..92ea560efcc7 100644
4313 +--- a/fs/ext4/super.c
4314 ++++ b/fs/ext4/super.c
4315 +@@ -1041,7 +1041,7 @@ static struct inode *ext4_nfs_get_inode(struct super_block *sb,
4316 + * Currently we don't know the generation for parent directory, so
4317 + * a generation of 0 means "accept any"
4318 + */
4319 +- inode = ext4_iget(sb, ino);
4320 ++ inode = ext4_iget_normal(sb, ino);
4321 + if (IS_ERR(inode))
4322 + return ERR_CAST(inode);
4323 + if (generation && inode->i_generation != generation) {
4324 +@@ -1642,13 +1642,6 @@ static int parse_options(char *options, struct super_block *sb,
4325 + "not specified");
4326 + return 0;
4327 + }
4328 +- } else {
4329 +- if (sbi->s_jquota_fmt) {
4330 +- ext4_msg(sb, KERN_ERR, "journaled quota format "
4331 +- "specified with no journaling "
4332 +- "enabled");
4333 +- return 0;
4334 +- }
4335 + }
4336 + #endif
4337 + if (test_opt(sb, DIOREAD_NOLOCK)) {
4338 +diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
4339 +index 5743e9db8027..96455e6988fe 100644
4340 +--- a/fs/ext4/xattr.c
4341 ++++ b/fs/ext4/xattr.c
4342 +@@ -144,14 +144,28 @@ ext4_listxattr(struct dentry *dentry, char *buffer, size_t size)
4343 + }
4344 +
4345 + static int
4346 +-ext4_xattr_check_names(struct ext4_xattr_entry *entry, void *end)
4347 ++ext4_xattr_check_names(struct ext4_xattr_entry *entry, void *end,
4348 ++ void *value_start)
4349 + {
4350 +- while (!IS_LAST_ENTRY(entry)) {
4351 +- struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(entry);
4352 ++ struct ext4_xattr_entry *e = entry;
4353 ++
4354 ++ while (!IS_LAST_ENTRY(e)) {
4355 ++ struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(e);
4356 + if ((void *)next >= end)
4357 + return -EIO;
4358 +- entry = next;
4359 ++ e = next;
4360 + }
4361 ++
4362 ++ while (!IS_LAST_ENTRY(entry)) {
4363 ++ if (entry->e_value_size != 0 &&
4364 ++ (value_start + le16_to_cpu(entry->e_value_offs) <
4365 ++ (void *)e + sizeof(__u32) ||
4366 ++ value_start + le16_to_cpu(entry->e_value_offs) +
4367 ++ le32_to_cpu(entry->e_value_size) > end))
4368 ++ return -EIO;
4369 ++ entry = EXT4_XATTR_NEXT(entry);
4370 ++ }
4371 ++
4372 + return 0;
4373 + }
4374 +
4375 +@@ -161,7 +175,8 @@ ext4_xattr_check_block(struct buffer_head *bh)
4376 + if (BHDR(bh)->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC) ||
4377 + BHDR(bh)->h_blocks != cpu_to_le32(1))
4378 + return -EIO;
4379 +- return ext4_xattr_check_names(BFIRST(bh), bh->b_data + bh->b_size);
4380 ++ return ext4_xattr_check_names(BFIRST(bh), bh->b_data + bh->b_size,
4381 ++ bh->b_data);
4382 + }
4383 +
4384 + static inline int
4385 +@@ -274,7 +289,7 @@ ext4_xattr_ibody_get(struct inode *inode, int name_index, const char *name,
4386 + header = IHDR(inode, raw_inode);
4387 + entry = IFIRST(header);
4388 + end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
4389 +- error = ext4_xattr_check_names(entry, end);
4390 ++ error = ext4_xattr_check_names(entry, end, entry);
4391 + if (error)
4392 + goto cleanup;
4393 + error = ext4_xattr_find_entry(&entry, name_index, name,
4394 +@@ -402,7 +417,7 @@ ext4_xattr_ibody_list(struct dentry *dentry, char *buffer, size_t buffer_size)
4395 + raw_inode = ext4_raw_inode(&iloc);
4396 + header = IHDR(inode, raw_inode);
4397 + end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
4398 +- error = ext4_xattr_check_names(IFIRST(header), end);
4399 ++ error = ext4_xattr_check_names(IFIRST(header), end, IFIRST(header));
4400 + if (error)
4401 + goto cleanup;
4402 + error = ext4_xattr_list_entries(dentry, IFIRST(header),
4403 +@@ -914,7 +929,8 @@ ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i,
4404 + is->s.here = is->s.first;
4405 + is->s.end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
4406 + if (ext4_test_inode_state(inode, EXT4_STATE_XATTR)) {
4407 +- error = ext4_xattr_check_names(IFIRST(header), is->s.end);
4408 ++ error = ext4_xattr_check_names(IFIRST(header), is->s.end,
4409 ++ IFIRST(header));
4410 + if (error)
4411 + return error;
4412 + /* Find the named attribute. */
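
The checks added above close an out-of-bounds read: nothing previously
verified that an entry's value offset and size stayed inside the xattr
block. A minimal userspace sketch of that bounds test follows (the
struct and names are hypothetical; the kernel version additionally
keeps the value region clear of the entry table itself):

    #include <stddef.h>
    #include <stdint.h>

    struct xentry {                 /* stand-in for ext4_xattr_entry */
            uint16_t value_offs;    /* offset of value from value_start */
            uint32_t value_size;    /* length of the value in bytes */
    };

    static int xentry_in_bounds(const struct xentry *e,
                                const char *value_start, const char *end)
    {
            const char *v = value_start + e->value_offs;

            if (e->value_size == 0)
                    return 1;       /* no value, nothing to range-check */
            if (v >= end || (size_t)(end - v) < e->value_size)
                    return 0;       /* value escapes the block */
            return 1;
    }

    int main(void)
    {
            char block[64] = { 0 };
            struct xentry e = { .value_offs = 60, .value_size = 8 };

            /* 60 + 8 > 64, so this must be rejected (returns 0) */
            return xentry_in_bounds(&e, block, block + sizeof(block));
    }
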
4413 +diff --git a/fs/ioprio.c b/fs/ioprio.c
4414 +index 0f1b9515213b..0dd6a2a7ae82 100644
4415 +--- a/fs/ioprio.c
4416 ++++ b/fs/ioprio.c
4417 +@@ -153,14 +153,16 @@ out:
4418 +
4419 + int ioprio_best(unsigned short aprio, unsigned short bprio)
4420 + {
4421 +- unsigned short aclass = IOPRIO_PRIO_CLASS(aprio);
4422 +- unsigned short bclass = IOPRIO_PRIO_CLASS(bprio);
4423 ++ unsigned short aclass;
4424 ++ unsigned short bclass;
4425 +
4426 +- if (aclass == IOPRIO_CLASS_NONE)
4427 +- aclass = IOPRIO_CLASS_BE;
4428 +- if (bclass == IOPRIO_CLASS_NONE)
4429 +- bclass = IOPRIO_CLASS_BE;
4430 ++ if (!ioprio_valid(aprio))
4431 ++ aprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
4432 ++ if (!ioprio_valid(bprio))
4433 ++ bprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
4434 +
4435 ++ aclass = IOPRIO_PRIO_CLASS(aprio);
4436 ++ bclass = IOPRIO_PRIO_CLASS(bprio);
4437 + if (aclass == bclass)
4438 + return min(aprio, bprio);
4439 + if (aclass > bclass)
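
The ioprio fix normalizes before classifying: previously an invalid
priority yielded a garbage class that could still win the comparison.
A standalone demo of the corrected ordering (constants are redefined
locally to mirror the kernel's ioprio.h, so nothing here is the real
header):

    #include <stdio.h>

    #define CLASS_SHIFT     13
    #define CLASS_NONE      0
    #define CLASS_RT        1
    #define CLASS_BE        2
    #define NORM            4

    #define PRIO_VALUE(cl, d)       (((cl) << CLASS_SHIFT) | (d))
    #define PRIO_CLASS(p)           ((p) >> CLASS_SHIFT)
    #define prio_valid(p)           (PRIO_CLASS(p) != CLASS_NONE)

    static unsigned short best(unsigned short a, unsigned short b)
    {
            if (!prio_valid(a))     /* normalize first, as the patch does */
                    a = PRIO_VALUE(CLASS_BE, NORM);
            if (!prio_valid(b))
                    b = PRIO_VALUE(CLASS_BE, NORM);
            if (PRIO_CLASS(a) == PRIO_CLASS(b))
                    return a < b ? a : b;
            /* lower class number means higher priority */
            return PRIO_CLASS(a) > PRIO_CLASS(b) ? b : a;
    }

    int main(void)
    {
            /* RT must beat an unset priority: prints 0x2000, not 0 */
            printf("%#x\n", (unsigned)best(PRIO_VALUE(CLASS_RT, 0),
                                           PRIO_VALUE(CLASS_NONE, 0)));
            return 0;
    }
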
4440 +diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
4441 +index 606a8dd8818c..0a68e0b22839 100644
4442 +--- a/fs/lockd/mon.c
4443 ++++ b/fs/lockd/mon.c
4444 +@@ -114,6 +114,12 @@ static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res,
4445 +
4446 + msg.rpc_proc = &clnt->cl_procinfo[proc];
4447 + status = rpc_call_sync(clnt, &msg, 0);
4448 ++ if (status == -ECONNREFUSED) {
4449 ++ dprintk("lockd: NSM upcall RPC failed, status=%d, forcing rebind\n",
4450 ++ status);
4451 ++ rpc_force_rebind(clnt);
4452 ++ status = rpc_call_sync(clnt, &msg, RPC_TASK_SOFTCONN);
4453 ++ }
4454 + if (status < 0)
4455 + dprintk("lockd: NSM upcall RPC failed, status=%d\n",
4456 + status);
4457 +diff --git a/fs/namespace.c b/fs/namespace.c
4458 +index f0f2e067c5df..f7be8d9c1cd6 100644
4459 +--- a/fs/namespace.c
4460 ++++ b/fs/namespace.c
4461 +@@ -2508,6 +2508,9 @@ SYSCALL_DEFINE2(pivot_root, const char __user *, new_root,
4462 + /* make sure we can reach put_old from new_root */
4463 + if (!is_path_reachable(real_mount(old.mnt), old.dentry, &new))
4464 + goto out4;
4465 ++ /* make certain new is below the root */
4466 ++ if (!is_path_reachable(new_mnt, new.dentry, &root))
4467 ++ goto out4;
4468 + br_write_lock(vfsmount_lock);
4469 + detach_mnt(new_mnt, &parent_path);
4470 + detach_mnt(root_mnt, &root_parent);
4471 +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
4472 +index 9bb4e5c541b0..a6d59054e8b3 100644
4473 +--- a/fs/nfs/inode.c
4474 ++++ b/fs/nfs/inode.c
4475 +@@ -512,7 +512,7 @@ int nfs_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat)
4476 + {
4477 + struct inode *inode = dentry->d_inode;
4478 + int need_atime = NFS_I(inode)->cache_validity & NFS_INO_INVALID_ATIME;
4479 +- int err;
4480 ++ int err = 0;
4481 +
4482 + /* Flush out writes to the server in order to update c/mtime. */
4483 + if (S_ISREG(inode->i_mode)) {
4484 +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
4485 +index 527a4fc12546..3d344ab0bdb3 100644
4486 +--- a/fs/nfs/nfs4proc.c
4487 ++++ b/fs/nfs/nfs4proc.c
4488 +@@ -1740,6 +1740,28 @@ static int nfs4_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *sta
4489 + return ret;
4490 + }
4491 +
4492 ++static void nfs_finish_clear_delegation_stateid(struct nfs4_state *state)
4493 ++{
4494 ++ nfs_remove_bad_delegation(state->inode);
4495 ++ write_seqlock(&state->seqlock);
4496 ++ nfs4_stateid_copy(&state->stateid, &state->open_stateid);
4497 ++ write_sequnlock(&state->seqlock);
4498 ++ clear_bit(NFS_DELEGATED_STATE, &state->flags);
4499 ++}
4500 ++
4501 ++static void nfs40_clear_delegation_stateid(struct nfs4_state *state)
4502 ++{
4503 ++ if (rcu_access_pointer(NFS_I(state->inode)->delegation) != NULL)
4504 ++ nfs_finish_clear_delegation_stateid(state);
4505 ++}
4506 ++
4507 ++static int nfs40_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *state)
4508 ++{
4509 ++ /* NFSv4.0 doesn't allow for delegation recovery on open expire */
4510 ++ nfs40_clear_delegation_stateid(state);
4511 ++ return nfs4_open_expired(sp, state);
4512 ++}
4513 ++
4514 + #if defined(CONFIG_NFS_V4_1)
4515 + static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *stateid, unsigned int flags)
4516 + {
4517 +@@ -5796,7 +5818,7 @@ static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cr
4518 + int ret = 0;
4519 +
4520 + if ((renew_flags & NFS4_RENEW_TIMEOUT) == 0)
4521 +- return 0;
4522 ++ return -EAGAIN;
4523 + task = _nfs41_proc_sequence(clp, cred, &nfs41_sequence_ops);
4524 + if (IS_ERR(task))
4525 + ret = PTR_ERR(task);
4526 +@@ -6547,7 +6569,7 @@ static const struct nfs4_state_recovery_ops nfs41_reboot_recovery_ops = {
4527 + static const struct nfs4_state_recovery_ops nfs40_nograce_recovery_ops = {
4528 + .owner_flag_bit = NFS_OWNER_RECLAIM_NOGRACE,
4529 + .state_flag_bit = NFS_STATE_RECLAIM_NOGRACE,
4530 +- .recover_open = nfs4_open_expired,
4531 ++ .recover_open = nfs40_open_expired,
4532 + .recover_lock = nfs4_lock_expired,
4533 + .establish_clid = nfs4_init_clientid,
4534 + .get_clid_cred = nfs4_get_setclientid_cred,
4535 +diff --git a/fs/nfs/nfs4renewd.c b/fs/nfs/nfs4renewd.c
4536 +index dc484c0eae7f..78071cf90079 100644
4537 +--- a/fs/nfs/nfs4renewd.c
4538 ++++ b/fs/nfs/nfs4renewd.c
4539 +@@ -88,10 +88,18 @@ nfs4_renew_state(struct work_struct *work)
4540 + }
4541 + nfs_expire_all_delegations(clp);
4542 + } else {
4543 ++ int ret;
4544 ++
4545 + /* Queue an asynchronous RENEW. */
4546 +- ops->sched_state_renewal(clp, cred, renew_flags);
4547 ++ ret = ops->sched_state_renewal(clp, cred, renew_flags);
4548 + put_rpccred(cred);
4549 +- goto out_exp;
4550 ++ switch (ret) {
4551 ++ default:
4552 ++ goto out_exp;
4553 ++ case -EAGAIN:
4554 ++ case -ENOMEM:
4555 ++ break;
4556 ++ }
4557 + }
4558 + } else {
4559 + dprintk("%s: failed to call renewd. Reason: lease not expired \n",
4560 +diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
4561 +index 461816beff13..c4600b59744a 100644
4562 +--- a/fs/nfs/nfs4state.c
4563 ++++ b/fs/nfs/nfs4state.c
4564 +@@ -1515,7 +1515,8 @@ restart:
4565 + if (status < 0) {
4566 + set_bit(ops->owner_flag_bit, &sp->so_flags);
4567 + nfs4_put_state_owner(sp);
4568 +- return nfs4_recovery_handle_error(clp, status);
4569 ++ status = nfs4_recovery_handle_error(clp, status);
4570 ++ return (status != 0) ? status : -EAGAIN;
4571 + }
4572 +
4573 + nfs4_put_state_owner(sp);
4574 +@@ -1524,7 +1525,7 @@ restart:
4575 + spin_unlock(&clp->cl_lock);
4576 + }
4577 + rcu_read_unlock();
4578 +- return status;
4579 ++ return 0;
4580 + }
4581 +
4582 + static int nfs4_check_lease(struct nfs_client *clp)
4583 +@@ -1796,23 +1797,18 @@ static void nfs4_state_manager(struct nfs_client *clp)
4584 + if (test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state)) {
4585 + status = nfs4_do_reclaim(clp,
4586 + clp->cl_mvops->reboot_recovery_ops);
4587 +- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
4588 +- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state))
4589 +- continue;
4590 +- nfs4_state_end_reclaim_reboot(clp);
4591 +- if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state))
4592 ++ if (status == -EAGAIN)
4593 + continue;
4594 + if (status < 0)
4595 + goto out_error;
4596 ++ nfs4_state_end_reclaim_reboot(clp);
4597 + }
4598 +
4599 + /* Now recover expired state... */
4600 + if (test_and_clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state)) {
4601 + status = nfs4_do_reclaim(clp,
4602 + clp->cl_mvops->nograce_recovery_ops);
4603 +- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
4604 +- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) ||
4605 +- test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state))
4606 ++ if (status == -EAGAIN)
4607 + continue;
4608 + if (status < 0)
4609 + goto out_error;
4610 +diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
4611 +index 22beaff3544a..b2ce878080be 100644
4612 +--- a/fs/nfsd/nfs4proc.c
4613 ++++ b/fs/nfsd/nfs4proc.c
4614 +@@ -1132,7 +1132,8 @@ static bool need_wrongsec_check(struct svc_rqst *rqstp)
4615 + */
4616 + if (argp->opcnt == resp->opcnt)
4617 + return false;
4618 +-
4619 ++ if (next->opnum == OP_ILLEGAL)
4620 ++ return false;
4621 + nextd = OPDESC(next);
4622 + /*
4623 + * Rest of 2.6.3.1.1: certain operations will return WRONGSEC
4624 +diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
4625 +index 48bc91d60fce..97d91f046265 100644
4626 +--- a/fs/notify/fanotify/fanotify_user.c
4627 ++++ b/fs/notify/fanotify/fanotify_user.c
4628 +@@ -67,7 +67,7 @@ static int create_fd(struct fsnotify_group *group, struct fsnotify_event *event)
4629 +
4630 + pr_debug("%s: group=%p event=%p\n", __func__, group, event);
4631 +
4632 +- client_fd = get_unused_fd();
4633 ++ client_fd = get_unused_fd_flags(group->fanotify_data.f_flags);
4634 + if (client_fd < 0)
4635 + return client_fd;
4636 +
4637 +diff --git a/fs/super.c b/fs/super.c
4638 +index 3c520a5ed715..d0154e52c76b 100644
4639 +--- a/fs/super.c
4640 ++++ b/fs/super.c
4641 +@@ -69,6 +69,8 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
4642 +
4643 + total_objects = sb->s_nr_dentry_unused +
4644 + sb->s_nr_inodes_unused + fs_objects + 1;
4645 ++ if (!total_objects)
4646 ++ total_objects = 1;
4647 +
4648 + if (sc->nr_to_scan) {
4649 + int dentries;
4650 +diff --git a/fs/ubifs/commit.c b/fs/ubifs/commit.c
4651 +index fb3b5c813a30..b2ca12fd593b 100644
4652 +--- a/fs/ubifs/commit.c
4653 ++++ b/fs/ubifs/commit.c
4654 +@@ -166,15 +166,10 @@ static int do_commit(struct ubifs_info *c)
4655 + err = ubifs_orphan_end_commit(c);
4656 + if (err)
4657 + goto out;
4658 +- old_ltail_lnum = c->ltail_lnum;
4659 +- err = ubifs_log_end_commit(c, new_ltail_lnum);
4660 +- if (err)
4661 +- goto out;
4662 + err = dbg_check_old_index(c, &zroot);
4663 + if (err)
4664 + goto out;
4665 +
4666 +- mutex_lock(&c->mst_mutex);
4667 + c->mst_node->cmt_no = cpu_to_le64(c->cmt_no);
4668 + c->mst_node->log_lnum = cpu_to_le32(new_ltail_lnum);
4669 + c->mst_node->root_lnum = cpu_to_le32(zroot.lnum);
4670 +@@ -203,8 +198,9 @@ static int do_commit(struct ubifs_info *c)
4671 + c->mst_node->flags |= cpu_to_le32(UBIFS_MST_NO_ORPHS);
4672 + else
4673 + c->mst_node->flags &= ~cpu_to_le32(UBIFS_MST_NO_ORPHS);
4674 +- err = ubifs_write_master(c);
4675 +- mutex_unlock(&c->mst_mutex);
4676 ++
4677 ++ old_ltail_lnum = c->ltail_lnum;
4678 ++ err = ubifs_log_end_commit(c, new_ltail_lnum);
4679 + if (err)
4680 + goto out;
4681 +
4682 +diff --git a/fs/ubifs/log.c b/fs/ubifs/log.c
4683 +index f9fd068d1ae0..843beda25767 100644
4684 +--- a/fs/ubifs/log.c
4685 ++++ b/fs/ubifs/log.c
4686 +@@ -110,10 +110,14 @@ static inline long long empty_log_bytes(const struct ubifs_info *c)
4687 + h = (long long)c->lhead_lnum * c->leb_size + c->lhead_offs;
4688 + t = (long long)c->ltail_lnum * c->leb_size;
4689 +
4690 +- if (h >= t)
4691 ++ if (h > t)
4692 + return c->log_bytes - h + t;
4693 +- else
4694 ++ else if (h != t)
4695 + return t - h;
4696 ++ else if (c->lhead_lnum != c->ltail_lnum)
4697 ++ return 0;
4698 ++ else
4699 ++ return c->log_bytes;
4700 + }
4701 +
4702 + /**
4703 +@@ -453,9 +457,9 @@ out:
4704 + * @ltail_lnum: new log tail LEB number
4705 + *
4706 + * This function is called on when the commit operation was finished. It
4707 +- * moves log tail to new position and unmaps LEBs which contain obsolete data.
4708 +- * Returns zero in case of success and a negative error code in case of
4709 +- * failure.
4710 ++ * moves log tail to new position and updates the master node so that it stores
4711 ++ * the new log tail LEB number. Returns zero in case of success and a negative
4712 ++ * error code in case of failure.
4713 + */
4714 + int ubifs_log_end_commit(struct ubifs_info *c, int ltail_lnum)
4715 + {
4716 +@@ -483,7 +487,12 @@ int ubifs_log_end_commit(struct ubifs_info *c, int ltail_lnum)
4717 + spin_unlock(&c->buds_lock);
4718 +
4719 + err = dbg_check_bud_bytes(c);
4720 ++ if (err)
4721 ++ goto out;
4722 +
4723 ++ err = ubifs_write_master(c);
4724 ++
4725 ++out:
4726 + mutex_unlock(&c->log_mutex);
4727 + return err;
4728 + }
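
empty_log_bytes() runs into the classic circular-buffer ambiguity:
head == tail can mean completely empty or completely full. The patch
disambiguates with the LEB numbers; here is the decision table as a
standalone function (a sketch, with head/tail passed in directly):

    #include <stdio.h>

    static long long empty_bytes(long long h, long long t,
                                 int lhead_lnum, int ltail_lnum,
                                 long long log_bytes)
    {
            if (h > t)
                    return log_bytes - h + t;  /* head wrapped past tail */
            else if (h != t)
                    return t - h;              /* in-order case */
            else if (lhead_lnum != ltail_lnum)
                    return 0;                  /* same offset, different
                                                  LEBs: log is full */
            else
                    return log_bytes;          /* truly empty */
    }

    int main(void)
    {
            printf("%lld\n", empty_bytes(0, 0, 3, 3, 4096));  /* 4096 */
            printf("%lld\n", empty_bytes(0, 0, 3, 7, 4096));  /* 0 */
            return 0;
    }
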
4729 +diff --git a/fs/ubifs/master.c b/fs/ubifs/master.c
4730 +index 278c2382e8c2..bb9f48107815 100644
4731 +--- a/fs/ubifs/master.c
4732 ++++ b/fs/ubifs/master.c
4733 +@@ -352,10 +352,9 @@ int ubifs_read_master(struct ubifs_info *c)
4734 + * ubifs_write_master - write master node.
4735 + * @c: UBIFS file-system description object
4736 + *
4737 +- * This function writes the master node. The caller has to take the
4738 +- * @c->mst_mutex lock before calling this function. Returns zero in case of
4739 +- * success and a negative error code in case of failure. The master node is
4740 +- * written twice to enable recovery.
4741 ++ * This function writes the master node. Returns zero in case of success and a
4742 ++ * negative error code in case of failure. The master node is written twice to
4743 ++ * enable recovery.
4744 + */
4745 + int ubifs_write_master(struct ubifs_info *c)
4746 + {
4747 +diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
4748 +index d867bd97bc60..129bb488ce75 100644
4749 +--- a/fs/ubifs/super.c
4750 ++++ b/fs/ubifs/super.c
4751 +@@ -1984,7 +1984,6 @@ static struct ubifs_info *alloc_ubifs_info(struct ubi_volume_desc *ubi)
4752 + mutex_init(&c->lp_mutex);
4753 + mutex_init(&c->tnc_mutex);
4754 + mutex_init(&c->log_mutex);
4755 +- mutex_init(&c->mst_mutex);
4756 + mutex_init(&c->umount_mutex);
4757 + mutex_init(&c->bu_mutex);
4758 + mutex_init(&c->write_reserve_mutex);
4759 +diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
4760 +index 3f962617e29b..cd62067aea85 100644
4761 +--- a/fs/ubifs/ubifs.h
4762 ++++ b/fs/ubifs/ubifs.h
4763 +@@ -1041,7 +1041,6 @@ struct ubifs_debug_info;
4764 + *
4765 + * @mst_node: master node
4766 + * @mst_offs: offset of valid master node
4767 +- * @mst_mutex: protects the master node area, @mst_node, and @mst_offs
4768 + *
4769 + * @max_bu_buf_len: maximum bulk-read buffer length
4770 + * @bu_mutex: protects the pre-allocated bulk-read buffer and @c->bu
4771 +@@ -1281,7 +1280,6 @@ struct ubifs_info {
4772 +
4773 + struct ubifs_mst_node *mst_node;
4774 + int mst_offs;
4775 +- struct mutex mst_mutex;
4776 +
4777 + int max_bu_buf_len;
4778 + struct mutex bu_mutex;
4779 +diff --git a/include/drm/drm_pciids.h b/include/drm/drm_pciids.h
4780 +index 757f98066d6b..53baa0d7c34f 100644
4781 +--- a/include/drm/drm_pciids.h
4782 ++++ b/include/drm/drm_pciids.h
4783 +@@ -56,7 +56,6 @@
4784 + {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \
4785 + {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \
4786 + {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \
4787 +- {0x1002, 0x4C6E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|RADEON_IS_MOBILITY}, \
4788 + {0x1002, 0x4E44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
4789 + {0x1002, 0x4E45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
4790 + {0x1002, 0x4E46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
4791 +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
4792 +index 4d4ac24a263e..01b7047b60df 100644
4793 +--- a/include/linux/blkdev.h
4794 ++++ b/include/linux/blkdev.h
4795 +@@ -1069,10 +1069,9 @@ static inline int queue_alignment_offset(struct request_queue *q)
4796 + static inline int queue_limit_alignment_offset(struct queue_limits *lim, sector_t sector)
4797 + {
4798 + unsigned int granularity = max(lim->physical_block_size, lim->io_min);
4799 +- unsigned int alignment = (sector << 9) & (granularity - 1);
4800 ++ unsigned int alignment = sector_div(sector, granularity >> 9) << 9;
4801 +
4802 +- return (granularity + lim->alignment_offset - alignment)
4803 +- & (granularity - 1);
4804 ++ return (granularity + lim->alignment_offset - alignment) % granularity;
4805 + }
4806 +
4807 + static inline int bdev_alignment_offset(struct block_device *bdev)
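
The old expression assumed the granularity is a power of two; masking
with granularity - 1 miscomputes the offset for, say, RAID stripes of
three chunks. A userspace sketch of the corrected arithmetic, with a
plain % standing in for the kernel's 64-bit-safe sector_div():

    #include <stdio.h>

    static unsigned int alignment_offset(unsigned long long sector,
                                         unsigned int granularity,
                                         unsigned int align_off)
    {
            /* sector offset within one granule, converted back to bytes */
            unsigned int alignment =
                    (unsigned int)(sector % (granularity >> 9)) << 9;

            return (granularity + align_off - alignment) % granularity;
    }

    int main(void)
    {
            /* 192 KiB granularity is not a power of two in sectors */
            printf("%u\n", alignment_offset(700, 192 * 1024, 0));
            return 0;
    }
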
4808 +diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
4809 +index 7970e31c8c0e..ea9cffe22ec4 100644
4810 +--- a/include/linux/compiler-gcc.h
4811 ++++ b/include/linux/compiler-gcc.h
4812 +@@ -37,6 +37,9 @@
4813 + __asm__ ("" : "=r"(__ptr) : "0"(ptr)); \
4814 + (typeof(ptr)) (__ptr + (off)); })
4815 +
4816 ++/* Make the optimizer believe the variable can be manipulated arbitrarily. */
4817 ++#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
4818 ++
4819 + #ifdef __CHECKER__
4820 + #define __must_be_array(arr) 0
4821 + #else
4822 +diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h
4823 +new file mode 100644
4824 +index 000000000000..cdd1cc202d51
4825 +--- /dev/null
4826 ++++ b/include/linux/compiler-gcc5.h
4827 +@@ -0,0 +1,66 @@
4828 ++#ifndef __LINUX_COMPILER_H
4829 ++#error "Please don't include <linux/compiler-gcc5.h> directly, include <linux/compiler.h> instead."
4830 ++#endif
4831 ++
4832 ++#define __used __attribute__((__used__))
4833 ++#define __must_check __attribute__((warn_unused_result))
4834 ++#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)
4835 ++
4836 ++/* Mark functions as cold. gcc will assume any path leading to a call
4837 ++ to them will be unlikely. This means a lot of manual unlikely()s
4838 ++ are unnecessary now for any paths leading to the usual suspects
4839 ++ like BUG(), printk(), panic() etc. [but let's keep them for now for
4840 ++ older compilers]
4841 ++
4842 ++ Early snapshots of gcc 4.3 don't support this and we can't detect this
4843 ++ in the preprocessor, but we can live with this because they're unreleased.
4844 ++ Maketime probing would be overkill here.
4845 ++
4846 ++ gcc also has a __attribute__((__hot__)) to move hot functions into
4847 ++ a special section, but I don't see any sense in this right now in
4848 ++ the kernel context */
4849 ++#define __cold __attribute__((__cold__))
4850 ++
4851 ++#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
4852 ++
4853 ++#ifndef __CHECKER__
4854 ++# define __compiletime_warning(message) __attribute__((warning(message)))
4855 ++# define __compiletime_error(message) __attribute__((error(message)))
4856 ++#endif /* __CHECKER__ */
4857 ++
4858 ++/*
4859 ++ * Mark a position in code as unreachable. This can be used to
4860 ++ * suppress control flow warnings after asm blocks that transfer
4861 ++ * control elsewhere.
4862 ++ *
4863 ++ * Early snapshots of gcc 4.5 don't support this and we can't detect
4864 ++ * this in the preprocessor, but we can live with this because they're
4865 ++ * unreleased. Really, we need to have autoconf for the kernel.
4866 ++ */
4867 ++#define unreachable() __builtin_unreachable()
4868 ++
4869 ++/* Mark a function definition as prohibited from being cloned. */
4870 ++#define __noclone __attribute__((__noclone__))
4871 ++
4872 ++/*
4873 ++ * Tell the optimizer that something else uses this function or variable.
4874 ++ */
4875 ++#define __visible __attribute__((externally_visible))
4876 ++
4877 ++/*
4878 ++ * GCC 'asm goto' miscompiles certain code sequences:
4879 ++ *
4880 ++ * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
4881 ++ *
4882 ++ * Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
4883 ++ * Fixed in GCC 4.8.2 and later versions.
4884 ++ *
4885 ++ * (asm goto is automatically volatile - the naming reflects this.)
4886 ++ */
4887 ++#define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
4888 ++
4889 ++#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
4890 ++#define __HAVE_BUILTIN_BSWAP32__
4891 ++#define __HAVE_BUILTIN_BSWAP64__
4892 ++#define __HAVE_BUILTIN_BSWAP16__
4893 ++#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
4894 +diff --git a/include/linux/compiler-intel.h b/include/linux/compiler-intel.h
4895 +index cba9593c4047..1a97cac7dcb2 100644
4896 +--- a/include/linux/compiler-intel.h
4897 ++++ b/include/linux/compiler-intel.h
4898 +@@ -15,6 +15,7 @@
4899 + */
4900 + #undef barrier
4901 + #undef RELOC_HIDE
4902 ++#undef OPTIMIZER_HIDE_VAR
4903 +
4904 + #define barrier() __memory_barrier()
4905 +
4906 +@@ -23,6 +24,12 @@
4907 + __ptr = (unsigned long) (ptr); \
4908 + (typeof(ptr)) (__ptr + (off)); })
4909 +
4910 ++/* This should act as an optimization barrier on var.
4911 ++ * Given that this compiler does not have inline assembly, a compiler barrier
4912 ++ * is the best we can do.
4913 ++ */
4914 ++#define OPTIMIZER_HIDE_VAR(var) barrier()
4915 ++
4916 + /* Intel ECC compiler doesn't support __builtin_types_compatible_p() */
4917 + #define __must_be_array(a) 0
4918 +
4919 +diff --git a/include/linux/init_task.h b/include/linux/init_task.h
4920 +index e7bafa432aa3..b11e298e989f 100644
4921 +--- a/include/linux/init_task.h
4922 ++++ b/include/linux/init_task.h
4923 +@@ -38,6 +38,7 @@ extern struct fs_struct init_fs;
4924 +
4925 + #define INIT_SIGNALS(sig) { \
4926 + .nr_threads = 1, \
4927 ++ .thread_head = LIST_HEAD_INIT(init_task.thread_node), \
4928 + .wait_chldexit = __WAIT_QUEUE_HEAD_INITIALIZER(sig.wait_chldexit),\
4929 + .shared_pending = { \
4930 + .list = LIST_HEAD_INIT(sig.shared_pending.list), \
4931 +@@ -202,6 +203,7 @@ extern struct task_group root_task_group;
4932 + [PIDTYPE_SID] = INIT_PID_LINK(PIDTYPE_SID), \
4933 + }, \
4934 + .thread_group = LIST_HEAD_INIT(tsk.thread_group), \
4935 ++ .thread_node = LIST_HEAD_INIT(init_signals.thread_head), \
4936 + INIT_IDS \
4937 + INIT_PERF_EVENTS(tsk) \
4938 + INIT_TRACE_IRQFLAGS \
4939 +diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
4940 +index 6b394f0b5148..eeb307985715 100644
4941 +--- a/include/linux/khugepaged.h
4942 ++++ b/include/linux/khugepaged.h
4943 +@@ -6,7 +6,8 @@
4944 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE
4945 + extern int __khugepaged_enter(struct mm_struct *mm);
4946 + extern void __khugepaged_exit(struct mm_struct *mm);
4947 +-extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma);
4948 ++extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
4949 ++ unsigned long vm_flags);
4950 +
4951 + #define khugepaged_enabled() \
4952 + (transparent_hugepage_flags & \
4953 +@@ -35,13 +36,13 @@ static inline void khugepaged_exit(struct mm_struct *mm)
4954 + __khugepaged_exit(mm);
4955 + }
4956 +
4957 +-static inline int khugepaged_enter(struct vm_area_struct *vma)
4958 ++static inline int khugepaged_enter(struct vm_area_struct *vma,
4959 ++ unsigned long vm_flags)
4960 + {
4961 + if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
4962 + if ((khugepaged_always() ||
4963 +- (khugepaged_req_madv() &&
4964 +- vma->vm_flags & VM_HUGEPAGE)) &&
4965 +- !(vma->vm_flags & VM_NOHUGEPAGE))
4966 ++ (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
4967 ++ !(vm_flags & VM_NOHUGEPAGE))
4968 + if (__khugepaged_enter(vma->vm_mm))
4969 + return -ENOMEM;
4970 + return 0;
4971 +@@ -54,11 +55,13 @@ static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
4972 + static inline void khugepaged_exit(struct mm_struct *mm)
4973 + {
4974 + }
4975 +-static inline int khugepaged_enter(struct vm_area_struct *vma)
4976 ++static inline int khugepaged_enter(struct vm_area_struct *vma,
4977 ++ unsigned long vm_flags)
4978 + {
4979 + return 0;
4980 + }
4981 +-static inline int khugepaged_enter_vma_merge(struct vm_area_struct *vma)
4982 ++static inline int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
4983 ++ unsigned long vm_flags)
4984 + {
4985 + return 0;
4986 + }
4987 +diff --git a/include/linux/mm.h b/include/linux/mm.h
4988 +index dbca4b21b7d3..656b4e968991 100644
4989 +--- a/include/linux/mm.h
4990 ++++ b/include/linux/mm.h
4991 +@@ -953,6 +953,7 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
4992 +
4993 + extern void truncate_pagecache(struct inode *inode, loff_t old, loff_t new);
4994 + extern void truncate_setsize(struct inode *inode, loff_t newsize);
4995 ++void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
4996 + extern int vmtruncate(struct inode *inode, loff_t offset);
4997 + extern int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end);
4998 + void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
4999 +diff --git a/include/linux/of.h b/include/linux/of.h
5000 +index fa7fb1d97458..ac58796df055 100644
5001 +--- a/include/linux/of.h
5002 ++++ b/include/linux/of.h
5003 +@@ -212,14 +212,12 @@ extern int of_property_read_u64(const struct device_node *np,
5004 + extern int of_property_read_string(struct device_node *np,
5005 + const char *propname,
5006 + const char **out_string);
5007 +-extern int of_property_read_string_index(struct device_node *np,
5008 +- const char *propname,
5009 +- int index, const char **output);
5010 + extern int of_property_match_string(struct device_node *np,
5011 + const char *propname,
5012 + const char *string);
5013 +-extern int of_property_count_strings(struct device_node *np,
5014 +- const char *propname);
5015 ++extern int of_property_read_string_helper(struct device_node *np,
5016 ++ const char *propname,
5017 ++ const char **out_strs, size_t sz, int index);
5018 + extern int of_device_is_compatible(const struct device_node *device,
5019 + const char *);
5020 + extern int of_device_is_available(const struct device_node *device);
5021 +@@ -304,15 +302,9 @@ static inline int of_property_read_string(struct device_node *np,
5022 + return -ENOSYS;
5023 + }
5024 +
5025 +-static inline int of_property_read_string_index(struct device_node *np,
5026 +- const char *propname, int index,
5027 +- const char **out_string)
5028 +-{
5029 +- return -ENOSYS;
5030 +-}
5031 +-
5032 +-static inline int of_property_count_strings(struct device_node *np,
5033 +- const char *propname)
5034 ++static inline int of_property_read_string_helper(struct device_node *np,
5035 ++ const char *propname,
5036 ++ const char **out_strs, size_t sz, int index)
5037 + {
5038 + return -ENOSYS;
5039 + }
5040 +@@ -352,6 +344,70 @@ static inline int of_machine_is_compatible(const char *compat)
5041 + #endif /* CONFIG_OF */
5042 +
5043 + /**
5044 ++ * of_property_read_string_array() - Read an array of strings from a multiple
5045 ++ * strings property.
5046 ++ * @np: device node from which the property value is to be read.
5047 ++ * @propname: name of the property to be searched.
5048 ++ * @out_strs: output array of string pointers.
5049 ++ * @sz: number of array elements to read.
5050 ++ *
5051 ++ * Search for a property in a device tree node and retrieve a list of
5052 ++ * terminated string values (pointer to data, not a copy) in that property.
5053 ++ *
5054 ++ * If @out_strs is NULL, the number of strings in the property is returned.
5055 ++ */
5056 ++static inline int of_property_read_string_array(struct device_node *np,
5057 ++ const char *propname, const char **out_strs,
5058 ++ size_t sz)
5059 ++{
5060 ++ return of_property_read_string_helper(np, propname, out_strs, sz, 0);
5061 ++}
5062 ++
5063 ++/**
5064 ++ * of_property_count_strings() - Find and return the number of strings from a
5065 ++ * multiple strings property.
5066 ++ * @np: device node from which the property value is to be read.
5067 ++ * @propname: name of the property to be searched.
5068 ++ *
5069 ++ * Search for a property in a device tree node and retrieve the number of null
5070 ++ * terminated string contain in it. Returns the number of strings on
5071 ++ * success, -EINVAL if the property does not exist, -ENODATA if property
5072 ++ * does not have a value, and -EILSEQ if the string is not null-terminated
5073 ++ * within the length of the property data.
5074 ++ */
5075 ++static inline int of_property_count_strings(struct device_node *np,
5076 ++ const char *propname)
5077 ++{
5078 ++ return of_property_read_string_helper(np, propname, NULL, 0, 0);
5079 ++}
5080 ++
5081 ++/**
5082 ++ * of_property_read_string_index() - Find and read a string from a multiple
5083 ++ * strings property.
5084 ++ * @np: device node from which the property value is to be read.
5085 ++ * @propname: name of the property to be searched.
5086 ++ * @index: index of the string in the list of strings
5087 ++ * @out_string: pointer to null terminated return string, modified only if
5088 ++ * return value is 0.
5089 ++ *
5090 ++ * Search for a property in a device tree node and retrieve a null
5091 ++ * terminated string value (pointer to data, not a copy) in the list of strings
5092 ++ * contained in that property.
5093 ++ * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if
5094 ++ * property does not have a value, and -EILSEQ if the string is not
5095 ++ * null-terminated within the length of the property data.
5096 ++ *
5097 ++ * The out_string pointer is modified only if a valid string can be decoded.
5098 ++ */
5099 ++static inline int of_property_read_string_index(struct device_node *np,
5100 ++ const char *propname,
5101 ++ int index, const char **output)
5102 ++{
5103 ++ int rc = of_property_read_string_helper(np, propname, output, 1, index);
5104 ++ return rc < 0 ? rc : 0;
5105 ++}
5106 ++
5107 ++/**
5108 + * of_property_read_bool - Findfrom a property
5109 + * @np: device node from which the property value is to be read.
5110 + * @propname: name of the property to be searched.
5111 +diff --git a/include/linux/oom.h b/include/linux/oom.h
5112 +index 3d7647536b03..d6ed7b05e31c 100644
5113 +--- a/include/linux/oom.h
5114 ++++ b/include/linux/oom.h
5115 +@@ -45,6 +45,10 @@ extern int test_set_oom_score_adj(int new_val);
5116 +
5117 + extern unsigned int oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
5118 + const nodemask_t *nodemask, unsigned long totalpages);
5119 ++
5120 ++extern int oom_kills_count(void);
5121 ++extern void note_oom_kill(void);
5122 ++
5123 + extern int try_set_zonelist_oom(struct zonelist *zonelist, gfp_t gfp_flags);
5124 + extern void clear_zonelist_oom(struct zonelist *zonelist, gfp_t gfp_flags);
5125 +
5126 +diff --git a/include/linux/sched.h b/include/linux/sched.h
5127 +index 56d8233c5de7..d529bd9e6680 100644
5128 +--- a/include/linux/sched.h
5129 ++++ b/include/linux/sched.h
5130 +@@ -534,6 +534,7 @@ struct signal_struct {
5131 + atomic_t sigcnt;
5132 + atomic_t live;
5133 + int nr_threads;
5134 ++ struct list_head thread_head;
5135 +
5136 + wait_queue_head_t wait_chldexit; /* for wait4() */
5137 +
5138 +@@ -1394,6 +1395,7 @@ struct task_struct {
5139 + /* PID/PID hash table linkage. */
5140 + struct pid_link pids[PIDTYPE_MAX];
5141 + struct list_head thread_group;
5142 ++ struct list_head thread_node;
5143 +
5144 + struct completion *vfork_done; /* for vfork() */
5145 + int __user *set_child_tid; /* CLONE_CHILD_SETTID */
5146 +@@ -2397,6 +2399,16 @@ extern bool current_is_single_threaded(void);
5147 + #define while_each_thread(g, t) \
5148 + while ((t = next_thread(t)) != g)
5149 +
5150 ++#define __for_each_thread(signal, t) \
5151 ++ list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node)
5152 ++
5153 ++#define for_each_thread(p, t) \
5154 ++ __for_each_thread((p)->signal, t)
5155 ++
5156 ++/* Careful: this is a double loop, 'break' won't work as expected. */
5157 ++#define for_each_process_thread(p, t) \
5158 ++ for_each_process(p) for_each_thread(p, t)
5159 ++
5160 + static inline int get_nr_threads(struct task_struct *tsk)
5161 + {
5162 + return tsk->signal->nr_threads;
5163 +diff --git a/include/linux/string.h b/include/linux/string.h
5164 +index e033564f10ba..3d9feb70dc20 100644
5165 +--- a/include/linux/string.h
5166 ++++ b/include/linux/string.h
5167 +@@ -133,7 +133,7 @@ int bprintf(u32 *bin_buf, size_t size, const char *fmt, ...) __printf(3, 4);
5168 + #endif
5169 +
5170 + extern ssize_t memory_read_from_buffer(void *to, size_t count, loff_t *ppos,
5171 +- const void *from, size_t available);
5172 ++ const void *from, size_t available);
5173 +
5174 + /**
5175 + * strstarts - does @str start with @prefix?
5176 +@@ -144,5 +144,7 @@ static inline bool strstarts(const char *str, const char *prefix)
5177 + {
5178 + return strncmp(str, prefix, strlen(prefix)) == 0;
5179 + }
5180 ++
5181 ++void memzero_explicit(void *s, size_t count);
5182 + #endif
5183 + #endif /* _LINUX_STRING_H_ */
5184 +diff --git a/include/linux/usb/quirks.h b/include/linux/usb/quirks.h
5185 +index 3e93de7ecbc3..d0d2af09dcc6 100644
5186 +--- a/include/linux/usb/quirks.h
5187 ++++ b/include/linux/usb/quirks.h
5188 +@@ -30,4 +30,10 @@
5189 + descriptor */
5190 + #define USB_QUIRK_DELAY_INIT 0x00000040
5191 +
5192 ++/* device generates spurious wakeup, ignore remote wakeup capability */
5193 ++#define USB_QUIRK_IGNORE_REMOTE_WAKEUP 0x00000200
5194 ++
5195 ++/* device can't handle device_qualifier descriptor requests */
5196 ++#define USB_QUIRK_DEVICE_QUALIFIER 0x00000100
5197 ++
5198 + #endif /* __LINUX_USB_QUIRKS_H */
5199 +diff --git a/include/net/ipv6.h b/include/net/ipv6.h
5200 +index 117eaa578d0d..8898a191929a 100644
5201 +--- a/include/net/ipv6.h
5202 ++++ b/include/net/ipv6.h
5203 +@@ -485,6 +485,7 @@ static inline int ipv6_addr_diff(const struct in6_addr *a1, const struct in6_add
5204 + }
5205 +
5206 + extern void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt);
5207 ++void ipv6_proxy_select_ident(struct sk_buff *skb);
5208 +
5209 + /*
5210 + * Prototypes exported by ipv6
5211 +diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
5212 +index 0caf1f8de0fb..8a142844318f 100644
5213 +--- a/kernel/audit_tree.c
5214 ++++ b/kernel/audit_tree.c
5215 +@@ -154,6 +154,7 @@ static struct audit_chunk *alloc_chunk(int count)
5216 + chunk->owners[i].index = i;
5217 + }
5218 + fsnotify_init_mark(&chunk->mark, audit_tree_destroy_watch);
5219 ++ chunk->mark.mask = FS_IN_IGNORED;
5220 + return chunk;
5221 + }
5222 +
5223 +diff --git a/kernel/exit.c b/kernel/exit.c
5224 +index 3eb4dcfc658a..38980ea0b2d6 100644
5225 +--- a/kernel/exit.c
5226 ++++ b/kernel/exit.c
5227 +@@ -74,6 +74,7 @@ static void __unhash_process(struct task_struct *p, bool group_dead)
5228 + __this_cpu_dec(process_counts);
5229 + }
5230 + list_del_rcu(&p->thread_group);
5231 ++ list_del_rcu(&p->thread_node);
5232 + }
5233 +
5234 + /*
5235 +diff --git a/kernel/fork.c b/kernel/fork.c
5236 +index 878dcb2eec55..be2495f0eed7 100644
5237 +--- a/kernel/fork.c
5238 ++++ b/kernel/fork.c
5239 +@@ -1026,6 +1026,11 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
5240 + sig->nr_threads = 1;
5241 + atomic_set(&sig->live, 1);
5242 + atomic_set(&sig->sigcnt, 1);
5243 ++
5244 ++ /* list_add(thread_node, thread_head) without INIT_LIST_HEAD() */
5245 ++ sig->thread_head = (struct list_head)LIST_HEAD_INIT(tsk->thread_node);
5246 ++ tsk->thread_node = (struct list_head)LIST_HEAD_INIT(sig->thread_head);
5247 ++
5248 + init_waitqueue_head(&sig->wait_chldexit);
5249 + if (clone_flags & CLONE_NEWPID)
5250 + sig->flags |= SIGNAL_UNKILLABLE;
5251 +@@ -1412,14 +1417,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
5252 + goto bad_fork_free_pid;
5253 + }
5254 +
5255 +- if (clone_flags & CLONE_THREAD) {
5256 +- current->signal->nr_threads++;
5257 +- atomic_inc(&current->signal->live);
5258 +- atomic_inc(&current->signal->sigcnt);
5259 +- p->group_leader = current->group_leader;
5260 +- list_add_tail_rcu(&p->thread_group, &p->group_leader->thread_group);
5261 +- }
5262 +-
5263 + if (likely(p->pid)) {
5264 + ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace);
5265 +
5266 +@@ -1434,6 +1431,15 @@ static struct task_struct *copy_process(unsigned long clone_flags,
5267 + list_add_tail(&p->sibling, &p->real_parent->children);
5268 + list_add_tail_rcu(&p->tasks, &init_task.tasks);
5269 + __this_cpu_inc(process_counts);
5270 ++ } else {
5271 ++ current->signal->nr_threads++;
5272 ++ atomic_inc(&current->signal->live);
5273 ++ atomic_inc(&current->signal->sigcnt);
5274 ++ p->group_leader = current->group_leader;
5275 ++ list_add_tail_rcu(&p->thread_group,
5276 ++ &p->group_leader->thread_group);
5277 ++ list_add_tail_rcu(&p->thread_node,
5278 ++ &p->signal->thread_head);
5279 + }
5280 + attach_pid(p, PIDTYPE_PID, pid);
5281 + nr_threads++;
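
The two compound-literal assignments in copy_signal() splice a brand
new two-node circular list in one step, which is why no INIT_LIST_HEAD()
call is needed, exactly as the in-code comment says. A toy program
showing that the shape comes out right:

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    int main(void)
    {
            struct list_head head, node;

            /* mirrors: sig->thread_head = LIST_HEAD_INIT(tsk->thread_node)
             *          tsk->thread_node = LIST_HEAD_INIT(sig->thread_head) */
            head = (struct list_head){ &node, &node };
            node = (struct list_head){ &head, &head };

            /* prints 1: a valid two-element circular list */
            printf("%d\n", head.next == &node && node.next == &head &&
                           head.prev == &node && node.prev == &head);
            return 0;
    }
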
5282 +diff --git a/kernel/freezer.c b/kernel/freezer.c
5283 +index 11f82a4d4eae..2f8ecd994d47 100644
5284 +--- a/kernel/freezer.c
5285 ++++ b/kernel/freezer.c
5286 +@@ -36,6 +36,9 @@ bool freezing_slow_path(struct task_struct *p)
5287 + if (p->flags & PF_NOFREEZE)
5288 + return false;
5289 +
5290 ++ if (test_thread_flag(TIF_MEMDIE))
5291 ++ return false;
5292 ++
5293 + if (pm_nosig_freezing || cgroup_freezing(p))
5294 + return true;
5295 +
5296 +diff --git a/kernel/futex.c b/kernel/futex.c
5297 +index 41dfb1866f95..6b320c2ad6fa 100644
5298 +--- a/kernel/futex.c
5299 ++++ b/kernel/futex.c
5300 +@@ -212,6 +212,8 @@ static void drop_futex_key_refs(union futex_key *key)
5301 + case FUT_OFF_MMSHARED:
5302 + mmdrop(key->private.mm);
5303 + break;
5304 ++ default:
5305 ++ smp_mb(); /* explicit MB (B) */
5306 + }
5307 + }
5308 +
5309 +@@ -484,8 +486,14 @@ static struct futex_pi_state * alloc_pi_state(void)
5310 + return pi_state;
5311 + }
5312 +
5313 ++/*
5314 ++ * Must be called with the hb lock held.
5315 ++ */
5316 + static void free_pi_state(struct futex_pi_state *pi_state)
5317 + {
5318 ++ if (!pi_state)
5319 ++ return;
5320 ++
5321 + if (!atomic_dec_and_test(&pi_state->refcount))
5322 + return;
5323 +
5324 +@@ -1399,15 +1407,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
5325 + }
5326 +
5327 + retry:
5328 +- if (pi_state != NULL) {
5329 +- /*
5330 +- * We will have to lookup the pi_state again, so free this one
5331 +- * to keep the accounting correct.
5332 +- */
5333 +- free_pi_state(pi_state);
5334 +- pi_state = NULL;
5335 +- }
5336 +-
5337 + ret = get_futex_key(uaddr1, flags & FLAGS_SHARED, &key1, VERIFY_READ);
5338 + if (unlikely(ret != 0))
5339 + goto out;
5340 +@@ -1495,6 +1494,8 @@ retry_private:
5341 + case 0:
5342 + break;
5343 + case -EFAULT:
5344 ++ free_pi_state(pi_state);
5345 ++ pi_state = NULL;
5346 + double_unlock_hb(hb1, hb2);
5347 + put_futex_key(&key2);
5348 + put_futex_key(&key1);
5349 +@@ -1504,6 +1505,8 @@ retry_private:
5350 + goto out;
5351 + case -EAGAIN:
5352 + /* The owner was exiting, try again. */
5353 ++ free_pi_state(pi_state);
5354 ++ pi_state = NULL;
5355 + double_unlock_hb(hb1, hb2);
5356 + put_futex_key(&key2);
5357 + put_futex_key(&key1);
5358 +@@ -1580,6 +1583,7 @@ retry_private:
5359 + }
5360 +
5361 + out_unlock:
5362 ++ free_pi_state(pi_state);
5363 + double_unlock_hb(hb1, hb2);
5364 +
5365 + /*
5366 +@@ -1596,8 +1600,6 @@ out_put_keys:
5367 + out_put_key1:
5368 + put_futex_key(&key1);
5369 + out:
5370 +- if (pi_state != NULL)
5371 +- free_pi_state(pi_state);
5372 + return ret ? ret : task_count;
5373 + }
5374 +
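
The requeue rework swaps "free the pi_state only at the final label"
for "free it on every exit path", which is what makes free_pi_state()
need to tolerate NULL. A reduced, single-threaded illustration of that
convention (userspace allocation, hypothetical refcounting):

    #include <stdlib.h>

    struct pi_state { int refcount; };

    static void free_pi_state(struct pi_state *ps)
    {
            if (!ps)
                    return;         /* every exit path may pass NULL */
            if (--ps->refcount == 0)
                    free(ps);
    }

    int main(void)
    {
            struct pi_state *ps = NULL;

            free_pi_state(ps);      /* error path before allocation: safe */
            ps = malloc(sizeof(*ps));
            if (!ps)
                    return 1;
            ps->refcount = 1;
            free_pi_state(ps);      /* normal path: actually frees */
            return 0;
    }
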
5375 +diff --git a/kernel/posix-timers.c b/kernel/posix-timers.c
5376 +index e885be1e8a11..02824a5c2693 100644
5377 +--- a/kernel/posix-timers.c
5378 ++++ b/kernel/posix-timers.c
5379 +@@ -589,6 +589,7 @@ SYSCALL_DEFINE3(timer_create, const clockid_t, which_clock,
5380 + goto out;
5381 + }
5382 + } else {
5383 ++ memset(&event.sigev_value, 0, sizeof(event.sigev_value));
5384 + event.sigev_notify = SIGEV_SIGNAL;
5385 + event.sigev_signo = SIGALRM;
5386 + event.sigev_value.sival_int = new_timer->it_id;
5387 +diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
5388 +index 52a18173c845..dd875cbe0d1a 100644
5389 +--- a/kernel/power/hibernate.c
5390 ++++ b/kernel/power/hibernate.c
5391 +@@ -486,8 +486,14 @@ int hibernation_restore(int platform_mode)
5392 + error = dpm_suspend_start(PMSG_QUIESCE);
5393 + if (!error) {
5394 + error = resume_target_kernel(platform_mode);
5395 +- dpm_resume_end(PMSG_RECOVER);
5396 ++ /*
5397 ++ * The above should either succeed and jump to the new kernel,
5398 ++ * or return with an error. Otherwise things are just
5399 ++ * undefined, so let's be paranoid.
5400 ++ */
5401 ++ BUG_ON(!error);
5402 + }
5403 ++ dpm_resume_end(PMSG_RECOVER);
5404 + pm_restore_gfp_mask();
5405 + ftrace_start();
5406 + resume_console();
5407 +diff --git a/kernel/power/process.c b/kernel/power/process.c
5408 +index f27d0c8cd9e8..286ff570f3b7 100644
5409 +--- a/kernel/power/process.c
5410 ++++ b/kernel/power/process.c
5411 +@@ -114,6 +114,28 @@ static int try_to_freeze_tasks(bool user_only)
5412 + return todo ? -EBUSY : 0;
5413 + }
5414 +
5415 ++/*
5416 ++ * Returns true if all freezable tasks (except for current) are frozen already
5417 ++ */
5418 ++static bool check_frozen_processes(void)
5419 ++{
5420 ++ struct task_struct *g, *p;
5421 ++ bool ret = true;
5422 ++
5423 ++ read_lock(&tasklist_lock);
5424 ++ for_each_process_thread(g, p) {
5425 ++ if (p != current && !freezer_should_skip(p) &&
5426 ++ !frozen(p)) {
5427 ++ ret = false;
5428 ++ goto done;
5429 ++ }
5430 ++ }
5431 ++done:
5432 ++ read_unlock(&tasklist_lock);
5433 ++
5434 ++ return ret;
5435 ++}
5436 ++
5437 + /**
5438 + * freeze_processes - Signal user space processes to enter the refrigerator.
5439 + *
5440 +@@ -122,6 +144,7 @@ static int try_to_freeze_tasks(bool user_only)
5441 + int freeze_processes(void)
5442 + {
5443 + int error;
5444 ++ int oom_kills_saved;
5445 +
5446 + error = __usermodehelper_disable(UMH_FREEZING);
5447 + if (error)
5448 +@@ -132,12 +155,27 @@ int freeze_processes(void)
5449 +
5450 + printk("Freezing user space processes ... ");
5451 + pm_freezing = true;
5452 ++ oom_kills_saved = oom_kills_count();
5453 + error = try_to_freeze_tasks(true);
5454 + if (!error) {
5455 +- printk("done.");
5456 + __usermodehelper_set_disable_depth(UMH_DISABLED);
5457 + oom_killer_disable();
5458 ++
5459 ++ /*
5460 ++ * There might have been an OOM kill while we were
5461 ++ * freezing tasks and the killed task might be still
5462 ++ * on the way out so we have to double check for race.
5463 ++ */
5464 ++ if (oom_kills_count() != oom_kills_saved &&
5465 ++ !check_frozen_processes()) {
5466 ++ __usermodehelper_set_disable_depth(UMH_ENABLED);
5467 ++ printk("OOM in progress.");
5468 ++ error = -EBUSY;
5469 ++ goto done;
5470 ++ }
5471 ++ printk("done.");
5472 + }
5473 ++done:
5474 + printk("\n");
5475 + BUG_ON(in_atomic());
5476 +
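
The freeze path now snapshots the OOM-kill counter before freezing and,
if it moved, re-verifies that everything really froze instead of
trusting try_to_freeze_tasks() alone. The shape of that race check,
reduced to standalone C11 (every function name below is a stand-in):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int oom_kills;    /* bumped by note_oom_kill() */

    static bool freeze_all(void)    { return true; }  /* stand-ins */
    static bool all_frozen(void)    { return true; }

    static int freeze_with_oom_check(void)
    {
            int saved = atomic_load(&oom_kills);

            if (!freeze_all())
                    return -1;
            /* a kill may have raced with freezing: double-check */
            if (atomic_load(&oom_kills) != saved && !all_frozen())
                    return -1;      /* caller thaws and retries */
            return 0;
    }

    int main(void)
    {
            return freeze_with_oom_check();
    }
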
5477 +diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
5478 +index c9ce09addacd..4d0a209ecfda 100644
5479 +--- a/kernel/trace/trace_syscalls.c
5480 ++++ b/kernel/trace/trace_syscalls.c
5481 +@@ -309,7 +309,7 @@ void ftrace_syscall_enter(void *ignore, struct pt_regs *regs, long id)
5482 + int pc;
5483 +
5484 + syscall_nr = syscall_get_nr(current, regs);
5485 +- if (syscall_nr < 0)
5486 ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
5487 + return;
5488 + if (!test_bit(syscall_nr, enabled_enter_syscalls))
5489 + return;
5490 +@@ -349,7 +349,7 @@ void ftrace_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
5491 + int pc;
5492 +
5493 + syscall_nr = syscall_get_nr(current, regs);
5494 +- if (syscall_nr < 0)
5495 ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
5496 + return;
5497 + if (!test_bit(syscall_nr, enabled_exit_syscalls))
5498 + return;
5499 +@@ -519,6 +519,8 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
5500 + int size;
5501 +
5502 + syscall_nr = syscall_get_nr(current, regs);
5503 ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
5504 ++ return;
5505 + if (!test_bit(syscall_nr, enabled_perf_enter_syscalls))
5506 + return;
5507 +
5508 +@@ -593,6 +595,8 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
5509 + int size;
5510 +
5511 + syscall_nr = syscall_get_nr(current, regs);
5512 ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
5513 ++ return;
5514 + if (!test_bit(syscall_nr, enabled_perf_exit_syscalls))
5515 + return;
5516 +
5517 +diff --git a/lib/bitmap.c b/lib/bitmap.c
5518 +index b5a8b6ad2454..6ccf2120b406 100644
5519 +--- a/lib/bitmap.c
5520 ++++ b/lib/bitmap.c
5521 +@@ -131,7 +131,9 @@ void __bitmap_shift_right(unsigned long *dst,
5522 + lower = src[off + k];
5523 + if (left && off + k == lim - 1)
5524 + lower &= mask;
5525 +- dst[k] = upper << (BITS_PER_LONG - rem) | lower >> rem;
5526 ++ dst[k] = lower >> rem;
5527 ++ if (rem)
5528 ++ dst[k] |= upper << (BITS_PER_LONG - rem);
5529 + if (left && k == lim - 1)
5530 + dst[k] &= mask;
5531 + }
5532 +@@ -172,7 +174,9 @@ void __bitmap_shift_left(unsigned long *dst,
5533 + upper = src[k];
5534 + if (left && k == lim - 1)
5535 + upper &= (1UL << left) - 1;
5536 +- dst[k + off] = lower >> (BITS_PER_LONG - rem) | upper << rem;
5537 ++ dst[k + off] = upper << rem;
5538 ++ if (rem)
5539 ++ dst[k + off] |= lower >> (BITS_PER_LONG - rem);
5540 + if (left && k + off == lim - 1)
5541 + dst[k + off] &= (1UL << left) - 1;
5542 + }
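
The bitmap change fixes undefined behaviour rather than wrong math:
when rem == 0 the old code shifted by BITS_PER_LONG, which C leaves
undefined (x86 masks the count to 0, so the neighbouring word was OR'd
in unshifted). A compilable illustration of the guarded form:

    #include <stdio.h>
    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static unsigned long shift_right_word(unsigned long upper,
                                          unsigned long lower,
                                          unsigned int rem)
    {
            unsigned long dst = lower >> rem;

            if (rem)        /* never shift by the full word width */
                    dst |= upper << (BITS_PER_LONG - rem);
            return dst;
    }

    int main(void)
    {
            printf("%#lx\n", shift_right_word(0xff, 0xff00, 8));
            printf("%#lx\n", shift_right_word(0xff, 0xff00, 0)); /* safe */
            return 0;
    }
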
5543 +diff --git a/lib/lzo/lzo1x_decompress_safe.c b/lib/lzo/lzo1x_decompress_safe.c
5544 +index 8563081e8da3..a1c387f6afba 100644
5545 +--- a/lib/lzo/lzo1x_decompress_safe.c
5546 ++++ b/lib/lzo/lzo1x_decompress_safe.c
5547 +@@ -19,31 +19,21 @@
5548 + #include <linux/lzo.h>
5549 + #include "lzodefs.h"
5550 +
5551 +-#define HAVE_IP(t, x) \
5552 +- (((size_t)(ip_end - ip) >= (size_t)(t + x)) && \
5553 +- (((t + x) >= t) && ((t + x) >= x)))
5554 ++#define HAVE_IP(x) ((size_t)(ip_end - ip) >= (size_t)(x))
5555 ++#define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x))
5556 ++#define NEED_IP(x) if (!HAVE_IP(x)) goto input_overrun
5557 ++#define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun
5558 ++#define TEST_LB(m_pos) if ((m_pos) < out) goto lookbehind_overrun
5559 +
5560 +-#define HAVE_OP(t, x) \
5561 +- (((size_t)(op_end - op) >= (size_t)(t + x)) && \
5562 +- (((t + x) >= t) && ((t + x) >= x)))
5563 +-
5564 +-#define NEED_IP(t, x) \
5565 +- do { \
5566 +- if (!HAVE_IP(t, x)) \
5567 +- goto input_overrun; \
5568 +- } while (0)
5569 +-
5570 +-#define NEED_OP(t, x) \
5571 +- do { \
5572 +- if (!HAVE_OP(t, x)) \
5573 +- goto output_overrun; \
5574 +- } while (0)
5575 +-
5576 +-#define TEST_LB(m_pos) \
5577 +- do { \
5578 +- if ((m_pos) < out) \
5579 +- goto lookbehind_overrun; \
5580 +- } while (0)
5581 ++/* This MAX_255_COUNT is the maximum number of times we can add 255 to a base
5582 ++ * count without overflowing an integer. The multiply will overflow when
5583 ++ * multiplying 255 by more than MAXINT/255. The sum will overflow earlier
5584 ++ * depending on the base count. Since the base count is taken from a u8
5585 ++ * and a few bits, it is safe to assume that it will always be lower than
5586 ++ * or equal to 2*255, thus we can always prevent any overflow by accepting
5587 ++ * two less 255 steps. See Documentation/lzo.txt for more information.
5588 ++ */
5589 ++#define MAX_255_COUNT ((((size_t)~0) / 255) - 2)
5590 +
5591 + int lzo1x_decompress_safe(const unsigned char *in, size_t in_len,
5592 + unsigned char *out, size_t *out_len)
5593 +@@ -75,17 +65,24 @@ int lzo1x_decompress_safe(const unsigned char *in, size_t in_len,
5594 + if (t < 16) {
5595 + if (likely(state == 0)) {
5596 + if (unlikely(t == 0)) {
5597 ++ size_t offset;
5598 ++ const unsigned char *ip_last = ip;
5599 ++
5600 + while (unlikely(*ip == 0)) {
5601 +- t += 255;
5602 + ip++;
5603 +- NEED_IP(1, 0);
5604 ++ NEED_IP(1);
5605 + }
5606 +- t += 15 + *ip++;
5607 ++ offset = ip - ip_last;
5608 ++ if (unlikely(offset > MAX_255_COUNT))
5609 ++ return LZO_E_ERROR;
5610 ++
5611 ++ offset = (offset << 8) - offset;
5612 ++ t += offset + 15 + *ip++;
5613 + }
5614 + t += 3;
5615 + copy_literal_run:
5616 + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
5617 +- if (likely(HAVE_IP(t, 15) && HAVE_OP(t, 15))) {
5618 ++ if (likely(HAVE_IP(t + 15) && HAVE_OP(t + 15))) {
5619 + const unsigned char *ie = ip + t;
5620 + unsigned char *oe = op + t;
5621 + do {
5622 +@@ -101,8 +98,8 @@ copy_literal_run:
5623 + } else
5624 + #endif
5625 + {
5626 +- NEED_OP(t, 0);
5627 +- NEED_IP(t, 3);
5628 ++ NEED_OP(t);
5629 ++ NEED_IP(t + 3);
5630 + do {
5631 + *op++ = *ip++;
5632 + } while (--t > 0);
5633 +@@ -115,7 +112,7 @@ copy_literal_run:
5634 + m_pos -= t >> 2;
5635 + m_pos -= *ip++ << 2;
5636 + TEST_LB(m_pos);
5637 +- NEED_OP(2, 0);
5638 ++ NEED_OP(2);
5639 + op[0] = m_pos[0];
5640 + op[1] = m_pos[1];
5641 + op += 2;
5642 +@@ -136,13 +133,20 @@ copy_literal_run:
5643 + } else if (t >= 32) {
5644 + t = (t & 31) + (3 - 1);
5645 + if (unlikely(t == 2)) {
5646 ++ size_t offset;
5647 ++ const unsigned char *ip_last = ip;
5648 ++
5649 + while (unlikely(*ip == 0)) {
5650 +- t += 255;
5651 + ip++;
5652 +- NEED_IP(1, 0);
5653 ++ NEED_IP(1);
5654 + }
5655 +- t += 31 + *ip++;
5656 +- NEED_IP(2, 0);
5657 ++ offset = ip - ip_last;
5658 ++ if (unlikely(offset > MAX_255_COUNT))
5659 ++ return LZO_E_ERROR;
5660 ++
5661 ++ offset = (offset << 8) - offset;
5662 ++ t += offset + 31 + *ip++;
5663 ++ NEED_IP(2);
5664 + }
5665 + m_pos = op - 1;
5666 + next = get_unaligned_le16(ip);
5667 +@@ -154,13 +158,20 @@ copy_literal_run:
5668 + m_pos -= (t & 8) << 11;
5669 + t = (t & 7) + (3 - 1);
5670 + if (unlikely(t == 2)) {
5671 ++ size_t offset;
5672 ++ const unsigned char *ip_last = ip;
5673 ++
5674 + while (unlikely(*ip == 0)) {
5675 +- t += 255;
5676 + ip++;
5677 +- NEED_IP(1, 0);
5678 ++ NEED_IP(1);
5679 + }
5680 +- t += 7 + *ip++;
5681 +- NEED_IP(2, 0);
5682 ++ offset = ip - ip_last;
5683 ++ if (unlikely(offset > MAX_255_COUNT))
5684 ++ return LZO_E_ERROR;
5685 ++
5686 ++ offset = (offset << 8) - offset;
5687 ++ t += offset + 7 + *ip++;
5688 ++ NEED_IP(2);
5689 + }
5690 + next = get_unaligned_le16(ip);
5691 + ip += 2;
5692 +@@ -174,7 +185,7 @@ copy_literal_run:
5693 + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
5694 + if (op - m_pos >= 8) {
5695 + unsigned char *oe = op + t;
5696 +- if (likely(HAVE_OP(t, 15))) {
5697 ++ if (likely(HAVE_OP(t + 15))) {
5698 + do {
5699 + COPY8(op, m_pos);
5700 + op += 8;
5701 +@@ -184,7 +195,7 @@ copy_literal_run:
5702 + m_pos += 8;
5703 + } while (op < oe);
5704 + op = oe;
5705 +- if (HAVE_IP(6, 0)) {
5706 ++ if (HAVE_IP(6)) {
5707 + state = next;
5708 + COPY4(op, ip);
5709 + op += next;
5710 +@@ -192,7 +203,7 @@ copy_literal_run:
5711 + continue;
5712 + }
5713 + } else {
5714 +- NEED_OP(t, 0);
5715 ++ NEED_OP(t);
5716 + do {
5717 + *op++ = *m_pos++;
5718 + } while (op < oe);
5719 +@@ -201,7 +212,7 @@ copy_literal_run:
5720 + #endif
5721 + {
5722 + unsigned char *oe = op + t;
5723 +- NEED_OP(t, 0);
5724 ++ NEED_OP(t);
5725 + op[0] = m_pos[0];
5726 + op[1] = m_pos[1];
5727 + op += 2;
5728 +@@ -214,15 +225,15 @@ match_next:
5729 + state = next;
5730 + t = next;
5731 + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
5732 +- if (likely(HAVE_IP(6, 0) && HAVE_OP(4, 0))) {
5733 ++ if (likely(HAVE_IP(6) && HAVE_OP(4))) {
5734 + COPY4(op, ip);
5735 + op += t;
5736 + ip += t;
5737 + } else
5738 + #endif
5739 + {
5740 +- NEED_IP(t, 3);
5741 +- NEED_OP(t, 0);
5742 ++ NEED_IP(t + 3);
5743 ++ NEED_OP(t);
5744 + while (t > 0) {
5745 + *op++ = *ip++;
5746 + t--;
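
Each of the three hunks above replaces "t += 255 per zero byte" with
counting the zero run first and rejecting it against MAX_255_COUNT
before expanding, so a crafted stream of zeroes can no longer overflow
the length counter. A standalone sketch of that parse (decode_run()
and its base argument are hypothetical; the real code inlines this):

    #include <stdio.h>
    #include <stddef.h>

    #define MAX_255_COUNT ((((size_t)~0) / 255) - 2)

    /* returns the decoded extra length, or (size_t)-1 on error */
    static size_t decode_run(const unsigned char **pip,
                             const unsigned char *ip_end, size_t base)
    {
            const unsigned char *ip = *pip, *ip_last = ip;
            size_t offset;

            while (ip < ip_end && *ip == 0)     /* count the zero run */
                    ip++;
            if (ip == ip_end)
                    return (size_t)-1;          /* input overrun */
            offset = (size_t)(ip - ip_last);
            if (offset > MAX_255_COUNT)
                    return (size_t)-1;          /* sum would overflow */
            offset = (offset << 8) - offset;    /* offset * 255 */
            *pip = ip + 1;
            return offset + base + ip[0];
    }

    int main(void)
    {
            const unsigned char buf[] = { 0, 0, 5 };
            const unsigned char *ip = buf;

            /* two zero bytes then 5: 2*255 + 15 + 5 = 530 */
            printf("%zu\n", decode_run(&ip, buf + sizeof(buf), 15));
            return 0;
    }
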
5747 +diff --git a/lib/string.c b/lib/string.c
5748 +index e5878de4f101..43d0781daf47 100644
5749 +--- a/lib/string.c
5750 ++++ b/lib/string.c
5751 +@@ -586,6 +586,22 @@ void *memset(void *s, int c, size_t count)
5752 + EXPORT_SYMBOL(memset);
5753 + #endif
5754 +
5755 ++/**
5756 ++ * memzero_explicit - Fill a region of memory (e.g. sensitive
5757 ++ * keying data) with 0s.
5758 ++ * @s: Pointer to the start of the area.
5759 ++ * @count: The size of the area.
5760 ++ *
5761 ++ * memzero_explicit() doesn't need an arch-specific version as
5762 ++ * it just invokes the one of memset() implicitly.
5763 ++ */
5764 ++void memzero_explicit(void *s, size_t count)
5765 ++{
5766 ++ memset(s, 0, count);
5767 ++ OPTIMIZER_HIDE_VAR(s);
5768 ++}
5769 ++EXPORT_SYMBOL(memzero_explicit);
5770 ++
5771 + #ifndef __HAVE_ARCH_MEMCPY
5772 + /**
5773 + * memcpy - Copy one area of memory to another
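
memzero_explicit() pairs memset() with the OPTIMIZER_HIDE_VAR() barrier
added earlier in this patch, so wiping key material survives dead-store
elimination. A GCC/Clang-only sketch of the same trick (assumes GNU
inline-asm syntax, just as the compiler-gcc.h hunk does):

    #include <string.h>

    static void wipe(void *s, size_t count)
    {
            memset(s, 0, count);
            /* the asm claims to read and rewrite s, so the compiler
             * must assume the buffer is still live and keep the memset */
            __asm__ __volatile__("" : "=r"(s) : "0"(s));
    }

    int main(void)
    {
            char key[32] = "secret";

            wipe(key, sizeof(key));
            return key[0];          /* 0 after the wipe */
    }
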
5774 +diff --git a/mm/huge_memory.c b/mm/huge_memory.c
5775 +index 3da5c0bff3b0..8978c1bf91e4 100644
5776 +--- a/mm/huge_memory.c
5777 ++++ b/mm/huge_memory.c
5778 +@@ -711,7 +711,7 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
5779 + if (haddr >= vma->vm_start && haddr + HPAGE_PMD_SIZE <= vma->vm_end) {
5780 + if (unlikely(anon_vma_prepare(vma)))
5781 + return VM_FAULT_OOM;
5782 +- if (unlikely(khugepaged_enter(vma)))
5783 ++ if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
5784 + return VM_FAULT_OOM;
5785 + page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
5786 + vma, haddr, numa_node_id(), 0);
5787 +@@ -1505,7 +1505,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
5788 + * register it here without waiting a page fault that
5789 + * may not happen any time soon.
5790 + */
5791 +- if (unlikely(khugepaged_enter_vma_merge(vma)))
5792 ++ if (unlikely(khugepaged_enter_vma_merge(vma, *vm_flags)))
5793 + return -ENOMEM;
5794 + break;
5795 + case MADV_NOHUGEPAGE:
5796 +@@ -1637,7 +1637,8 @@ int __khugepaged_enter(struct mm_struct *mm)
5797 + return 0;
5798 + }
5799 +
5800 +-int khugepaged_enter_vma_merge(struct vm_area_struct *vma)
5801 ++int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
5802 ++ unsigned long vm_flags)
5803 + {
5804 + unsigned long hstart, hend;
5805 + if (!vma->anon_vma)
5806 +@@ -1653,11 +1654,11 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma)
5807 + * If is_pfn_mapping() is true, is_linear_pfn_mapping() must be
5808 + * true too; verify it here.
5809 + */
5810 +- VM_BUG_ON(is_linear_pfn_mapping(vma) || vma->vm_flags & VM_NO_THP);
5811 ++ VM_BUG_ON(is_linear_pfn_mapping(vma) || vm_flags & VM_NO_THP);
5812 + hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
5813 + hend = vma->vm_end & HPAGE_PMD_MASK;
5814 + if (hstart < hend)
5815 +- return khugepaged_enter(vma);
5816 ++ return khugepaged_enter(vma, vm_flags);
5817 + return 0;
5818 + }
5819 +
5820 +diff --git a/mm/memory.c b/mm/memory.c
5821 +index ffd74f370e8d..c34e60a950aa 100644
5822 +--- a/mm/memory.c
5823 ++++ b/mm/memory.c
5824 +@@ -1164,8 +1164,10 @@ again:
5825 + if (unlikely(page_mapcount(page) < 0))
5826 + print_bad_pte(vma, addr, ptent, page);
5827 + force_flush = !__tlb_remove_page(tlb, page);
5828 +- if (force_flush)
5829 ++ if (force_flush) {
5830 ++ addr += PAGE_SIZE;
5831 + break;
5832 ++ }
5833 + continue;
5834 + }
5835 + /*
5836 +diff --git a/mm/mmap.c b/mm/mmap.c
5837 +index 69367e43e52e..94fdbe8f3b99 100644
5838 +--- a/mm/mmap.c
5839 ++++ b/mm/mmap.c
5840 +@@ -826,7 +826,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
5841 + end, prev->vm_pgoff, NULL);
5842 + if (err)
5843 + return NULL;
5844 +- khugepaged_enter_vma_merge(prev);
5845 ++ khugepaged_enter_vma_merge(prev, vm_flags);
5846 + return prev;
5847 + }
5848 +
5849 +@@ -845,7 +845,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
5850 + next->vm_pgoff - pglen, NULL);
5851 + if (err)
5852 + return NULL;
5853 +- khugepaged_enter_vma_merge(area);
5854 ++ khugepaged_enter_vma_merge(area, vm_flags);
5855 + return area;
5856 + }
5857 +
5858 +@@ -1820,7 +1820,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
5859 + }
5860 + }
5861 + vma_unlock_anon_vma(vma);
5862 +- khugepaged_enter_vma_merge(vma);
5863 ++ khugepaged_enter_vma_merge(vma, vma->vm_flags);
5864 + return error;
5865 + }
5866 + #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
5867 +@@ -1871,7 +1871,7 @@ int expand_downwards(struct vm_area_struct *vma,
5868 + }
5869 + }
5870 + vma_unlock_anon_vma(vma);
5871 +- khugepaged_enter_vma_merge(vma);
5872 ++ khugepaged_enter_vma_merge(vma, vma->vm_flags);
5873 + return error;
5874 + }
5875 +
5876 +diff --git a/mm/oom_kill.c b/mm/oom_kill.c
5877 +index 597ecac5731a..cb1f046faa68 100644
5878 +--- a/mm/oom_kill.c
5879 ++++ b/mm/oom_kill.c
5880 +@@ -435,6 +435,23 @@ static void dump_header(struct task_struct *p, gfp_t gfp_mask, int order,
5881 + dump_tasks(memcg, nodemask);
5882 + }
5883 +
5884 ++/*
5885 ++ * Number of OOM killer invocations (including memcg OOM killer).
5886 ++ * Primarily used by PM freezer to check for potential races with
5887 ++ * OOM killed frozen task.
5888 ++ */
5889 ++static atomic_t oom_kills = ATOMIC_INIT(0);
5890 ++
5891 ++int oom_kills_count(void)
5892 ++{
5893 ++ return atomic_read(&oom_kills);
5894 ++}
5895 ++
5896 ++void note_oom_kill(void)
5897 ++{
5898 ++ atomic_inc(&oom_kills);
5899 ++}
5900 ++
5901 + #define K(x) ((x) << (PAGE_SHIFT-10))
5902 + static void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
5903 + unsigned int points, unsigned long totalpages,
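
To see how the counter added above is meant to be consumed (the comment
points at freeze_processes): the freezer can snapshot it before freezing
tasks and treat any increment as a possible race with an OOM-killed frozen
task. A self-contained model of that pattern — everything except the two
counter helpers is a stand-in, not kernel code:

    #include <stdio.h>

    static int oom_kills;
    static int oom_kills_count(void) { return oom_kills; }
    static void note_oom_kill(void)  { oom_kills++; }

    static int freeze_processes_model(void)
    {
            int saved = oom_kills_count();

            /* ... freeze all userspace tasks here ... */
            note_oom_kill();                /* simulate a racing OOM kill */

            if (oom_kills_count() != saved)
                    return -1;              /* bail out and thaw: we raced */
            return 0;
    }

    int main(void)
    {
            printf("freeze %s\n", freeze_processes_model() ? "raced" : "ok");
            return 0;
    }
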
5904 +diff --git a/mm/page_alloc.c b/mm/page_alloc.c
5905 +index ff0b0997b953..2891a9059f8a 100644
5906 +--- a/mm/page_alloc.c
5907 ++++ b/mm/page_alloc.c
5908 +@@ -1982,6 +1982,14 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
5909 + }
5910 +
5911 + /*
5912 ++ * PM-freezer should be notified that there might be an OOM killer on
5913 ++ * its way to kill and wake somebody up. This is too early and we might
5914 ++ * end up not killing anything but false positives are acceptable.
5915 ++ * See freeze_processes.
5916 ++ */
5917 ++ note_oom_kill();
5918 ++
5919 ++ /*
5920 + * Go through the zonelist yet one more time, keep very high watermark
5921 + * here, this is only to catch a parallel oom killing, we must fail if
5922 + * we're still under heavy pressure.
5923 +diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
5924 +index 1ccbd714059c..b7693fdc6bb5 100644
5925 +--- a/mm/page_cgroup.c
5926 ++++ b/mm/page_cgroup.c
5927 +@@ -170,6 +170,7 @@ static void free_page_cgroup(void *addr)
5928 + sizeof(struct page_cgroup) * PAGES_PER_SECTION;
5929 +
5930 + BUG_ON(PageReserved(page));
5931 ++ kmemleak_free(addr);
5932 + free_pages_exact(addr, table_size);
5933 + }
5934 + }
5935 +diff --git a/mm/percpu.c b/mm/percpu.c
5936 +index 5f6042b61ca8..13b2eefabfdd 100644
5937 +--- a/mm/percpu.c
5938 ++++ b/mm/percpu.c
5939 +@@ -1907,8 +1907,6 @@ void __init setup_per_cpu_areas(void)
5940 +
5941 + if (pcpu_setup_first_chunk(ai, fc) < 0)
5942 + panic("Failed to initialize percpu areas.");
5943 +-
5944 +- pcpu_free_alloc_info(ai);
5945 + }
5946 +
5947 + #endif /* CONFIG_SMP */
5948 +diff --git a/mm/truncate.c b/mm/truncate.c
5949 +index f38055cb8af6..57625f7ed8e1 100644
5950 +--- a/mm/truncate.c
5951 ++++ b/mm/truncate.c
5952 +@@ -20,6 +20,7 @@
5953 + #include <linux/buffer_head.h> /* grr. try_to_release_page,
5954 + do_invalidatepage */
5955 + #include <linux/cleancache.h>
5956 ++#include <linux/rmap.h>
5957 + #include "internal.h"
5958 +
5959 +
5960 +@@ -571,16 +572,70 @@ EXPORT_SYMBOL(truncate_pagecache);
5961 + */
5962 + void truncate_setsize(struct inode *inode, loff_t newsize)
5963 + {
5964 +- loff_t oldsize;
5965 +-
5966 +- oldsize = inode->i_size;
5967 ++ loff_t oldsize = inode->i_size;
5968 + i_size_write(inode, newsize);
5969 +
5970 ++ if (newsize > oldsize)
5971 ++ pagecache_isize_extended(inode, oldsize, newsize);
5972 + truncate_pagecache(inode, oldsize, newsize);
5973 + }
5974 + EXPORT_SYMBOL(truncate_setsize);
5975 +
5976 + /**
5977 ++ * pagecache_isize_extended - update pagecache after extension of i_size
5978 ++ * @inode: inode for which i_size was extended
5979 ++ * @from: original inode size
5980 ++ * @to: new inode size
5981 ++ *
5982 ++ * Handle extension of inode size either caused by extending truncate or by
5983 ++ * write starting after current i_size. We mark the page straddling current
5984 ++ * i_size RO so that page_mkwrite() is called on the nearest write access to
5985 ++ * the page. This way the filesystem can be sure that page_mkwrite() is called
5986 ++ * on the page before the user writes to the page via mmap after the i_size
5987 ++ * has been changed.
5988 ++ *
5989 ++ * The function must be called after i_size is updated so that a page fault
5990 ++ * coming after we unlock the page will already see the new i_size.
5991 ++ * The function must be called while we still hold i_mutex - this not only
5992 ++ * makes sure i_size is stable but also that userspace cannot observe the new
5993 ++ * i_size value before we are prepared to store mmap writes at the new size.
5994 ++ */
5995 ++void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to)
5996 ++{
5997 ++ int bsize = 1 << inode->i_blkbits;
5998 ++ loff_t rounded_from;
5999 ++ struct page *page;
6000 ++ pgoff_t index;
6001 ++
6002 ++ WARN_ON(to > inode->i_size);
6003 ++
6004 ++ if (from >= to || bsize == PAGE_CACHE_SIZE)
6005 ++ return;
6006 ++ /* Page straddling @from will not have any hole block created? */
6007 ++ rounded_from = round_up(from, bsize);
6008 ++ if (to <= rounded_from || !(rounded_from & (PAGE_CACHE_SIZE - 1)))
6009 ++ return;
6010 ++
6011 ++ index = from >> PAGE_CACHE_SHIFT;
6012 ++ page = find_lock_page(inode->i_mapping, index);
6013 ++ /* Page not cached? Nothing to do */
6014 ++ if (!page)
6015 ++ return;
6016 ++ /*
6017 ++ * See clear_page_dirty_for_io() for details why set_page_dirty()
6018 ++ * is needed.
6019 ++ */
6020 ++ if (page_mkclean(page))
6021 ++ set_page_dirty(page);
6022 ++ unlock_page(page);
6023 ++ page_cache_release(page);
6024 ++}
6025 ++EXPORT_SYMBOL(pagecache_isize_extended);
6026 ++
6027 ++/**
6028 ++ * truncate_pagecache_range - unmap and remove pagecache that is hole-punched
6029 ++ * @inode: inode
6030 ++ * @lstart: offset of beginning of hole
6031 + * vmtruncate - unmap mappings "freed" by truncate() syscall
6032 + * @inode: inode of the file used
6033 + * @newsize: file offset to start truncating
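
The core of pagecache_isize_extended() above is the early-return arithmetic
deciding whether the page straddling the old i_size can contain freshly
created hole blocks at all. The same predicate as a standalone program, with
PAGE_CACHE_SIZE fixed at 4096 for illustration:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define round_up(x, y) ((((x) + (y) - 1) / (y)) * (y))

    static int needs_mkclean(unsigned long from, unsigned long to,
                             unsigned long bsize)
    {
            unsigned long rounded_from;

            if (from >= to || bsize == PAGE_SIZE)
                    return 0;       /* nothing extended, or no sub-page blocks */
            rounded_from = round_up(from, bsize);
            if (to <= rounded_from || !(rounded_from & (PAGE_SIZE - 1)))
                    return 0;       /* no hole block in the straddling page */
            return 1;
    }

    int main(void)
    {
            /* 1k blocks, i_size grown 1536 -> 3000: the page straddling
             * 1536 gains unallocated blocks, so it must be cleaned. */
            printf("%d\n", needs_mkclean(1536, 3000, 1024));   /* prints 1 */
            /* Old size already page-aligned: no page straddles it. */
            printf("%d\n", needs_mkclean(4096, 8192, 1024));   /* prints 0 */
            return 0;
    }
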
6034 +diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
6035 +index 605156f13899..61e2494dc188 100644
6036 +--- a/net/bluetooth/smp.c
6037 ++++ b/net/bluetooth/smp.c
6038 +@@ -325,8 +325,11 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,
6039 + }
6040 +
6041 + /* Not Just Works/Confirm results in MITM Authentication */
6042 +- if (method != JUST_CFM)
6043 ++ if (method != JUST_CFM) {
6044 + set_bit(SMP_FLAG_MITM_AUTH, &smp->smp_flags);
6045 ++ if (hcon->pending_sec_level < BT_SECURITY_HIGH)
6046 ++ hcon->pending_sec_level = BT_SECURITY_HIGH;
6047 ++ }
6048 +
6049 + /* If both devices have Keyboard-Display I/O, the master
6050 + * Confirms and the slave Enters the passkey.
6051 +diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c
6052 +index 9da7fdd3cd8a..3d1be9911b8e 100644
6053 +--- a/net/ceph/crypto.c
6054 ++++ b/net/ceph/crypto.c
6055 +@@ -89,11 +89,82 @@ static struct crypto_blkcipher *ceph_crypto_alloc_cipher(void)
6056 +
6057 + static const u8 *aes_iv = (u8 *)CEPH_AES_IV;
6058 +
6059 ++/*
6060 ++ * Should be used for buffers allocated with ceph_kvmalloc().
6061 ++ * Currently these are encrypt out-buffer (ceph_buffer) and decrypt
6062 ++ * in-buffer (msg front).
6063 ++ *
6064 ++ * Dispose of @sgt with teardown_sgtable().
6065 ++ *
6066 ++ * @prealloc_sg is to avoid memory allocation inside sg_alloc_table()
6067 ++ * in cases where a single sg is sufficient. For simplicity, no
6068 ++ * attempt is made to reduce the number of sgs by squeezing
6069 ++ * physically contiguous pages together.
6070 ++ */
6071 ++static int setup_sgtable(struct sg_table *sgt, struct scatterlist *prealloc_sg,
6072 ++ const void *buf, unsigned int buf_len)
6073 ++{
6074 ++ struct scatterlist *sg;
6075 ++ const bool is_vmalloc = is_vmalloc_addr(buf);
6076 ++ unsigned int off = offset_in_page(buf);
6077 ++ unsigned int chunk_cnt = 1;
6078 ++ unsigned int chunk_len = PAGE_ALIGN(off + buf_len);
6079 ++ int i;
6080 ++ int ret;
6081 ++
6082 ++ if (buf_len == 0) {
6083 ++ memset(sgt, 0, sizeof(*sgt));
6084 ++ return -EINVAL;
6085 ++ }
6086 ++
6087 ++ if (is_vmalloc) {
6088 ++ chunk_cnt = chunk_len >> PAGE_SHIFT;
6089 ++ chunk_len = PAGE_SIZE;
6090 ++ }
6091 ++
6092 ++ if (chunk_cnt > 1) {
6093 ++ ret = sg_alloc_table(sgt, chunk_cnt, GFP_NOFS);
6094 ++ if (ret)
6095 ++ return ret;
6096 ++ } else {
6097 ++ WARN_ON(chunk_cnt != 1);
6098 ++ sg_init_table(prealloc_sg, 1);
6099 ++ sgt->sgl = prealloc_sg;
6100 ++ sgt->nents = sgt->orig_nents = 1;
6101 ++ }
6102 ++
6103 ++ for_each_sg(sgt->sgl, sg, sgt->orig_nents, i) {
6104 ++ struct page *page;
6105 ++ unsigned int len = min(chunk_len - off, buf_len);
6106 ++
6107 ++ if (is_vmalloc)
6108 ++ page = vmalloc_to_page(buf);
6109 ++ else
6110 ++ page = virt_to_page(buf);
6111 ++
6112 ++ sg_set_page(sg, page, len, off);
6113 ++
6114 ++ off = 0;
6115 ++ buf += len;
6116 ++ buf_len -= len;
6117 ++ }
6118 ++ WARN_ON(buf_len != 0);
6119 ++
6120 ++ return 0;
6121 ++}
6122 ++
6123 ++static void teardown_sgtable(struct sg_table *sgt)
6124 ++{
6125 ++ if (sgt->orig_nents > 1)
6126 ++ sg_free_table(sgt);
6127 ++}
6128 ++
6129 + static int ceph_aes_encrypt(const void *key, int key_len,
6130 + void *dst, size_t *dst_len,
6131 + const void *src, size_t src_len)
6132 + {
6133 +- struct scatterlist sg_in[2], sg_out[1];
6134 ++ struct scatterlist sg_in[2], prealloc_sg;
6135 ++ struct sg_table sg_out;
6136 + struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
6137 + struct blkcipher_desc desc = { .tfm = tfm, .flags = 0 };
6138 + int ret;
6139 +@@ -109,16 +180,18 @@ static int ceph_aes_encrypt(const void *key, int key_len,
6140 +
6141 + *dst_len = src_len + zero_padding;
6142 +
6143 +- crypto_blkcipher_setkey((void *)tfm, key, key_len);
6144 + sg_init_table(sg_in, 2);
6145 + sg_set_buf(&sg_in[0], src, src_len);
6146 + sg_set_buf(&sg_in[1], pad, zero_padding);
6147 +- sg_init_table(sg_out, 1);
6148 +- sg_set_buf(sg_out, dst, *dst_len);
6149 ++ ret = setup_sgtable(&sg_out, &prealloc_sg, dst, *dst_len);
6150 ++ if (ret)
6151 ++ goto out_tfm;
6152 ++
6153 ++ crypto_blkcipher_setkey((void *)tfm, key, key_len);
6154 + iv = crypto_blkcipher_crt(tfm)->iv;
6155 + ivsize = crypto_blkcipher_ivsize(tfm);
6156 +-
6157 + memcpy(iv, aes_iv, ivsize);
6158 ++
6159 + /*
6160 + print_hex_dump(KERN_ERR, "enc key: ", DUMP_PREFIX_NONE, 16, 1,
6161 + key, key_len, 1);
6162 +@@ -127,16 +200,22 @@ static int ceph_aes_encrypt(const void *key, int key_len,
6163 + print_hex_dump(KERN_ERR, "enc pad: ", DUMP_PREFIX_NONE, 16, 1,
6164 + pad, zero_padding, 1);
6165 + */
6166 +- ret = crypto_blkcipher_encrypt(&desc, sg_out, sg_in,
6167 ++ ret = crypto_blkcipher_encrypt(&desc, sg_out.sgl, sg_in,
6168 + src_len + zero_padding);
6169 +- crypto_free_blkcipher(tfm);
6170 +- if (ret < 0)
6171 ++ if (ret < 0) {
6172 + pr_err("ceph_aes_crypt failed %d\n", ret);
6173 ++ goto out_sg;
6174 ++ }
6175 + /*
6176 + print_hex_dump(KERN_ERR, "enc out: ", DUMP_PREFIX_NONE, 16, 1,
6177 + dst, *dst_len, 1);
6178 + */
6179 +- return 0;
6180 ++
6181 ++out_sg:
6182 ++ teardown_sgtable(&sg_out);
6183 ++out_tfm:
6184 ++ crypto_free_blkcipher(tfm);
6185 ++ return ret;
6186 + }
6187 +
6188 + static int ceph_aes_encrypt2(const void *key, int key_len, void *dst,
6189 +@@ -144,7 +223,8 @@ static int ceph_aes_encrypt2(const void *key, int key_len, void *dst,
6190 + const void *src1, size_t src1_len,
6191 + const void *src2, size_t src2_len)
6192 + {
6193 +- struct scatterlist sg_in[3], sg_out[1];
6194 ++ struct scatterlist sg_in[3], prealloc_sg;
6195 ++ struct sg_table sg_out;
6196 + struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
6197 + struct blkcipher_desc desc = { .tfm = tfm, .flags = 0 };
6198 + int ret;
6199 +@@ -160,17 +240,19 @@ static int ceph_aes_encrypt2(const void *key, int key_len, void *dst,
6200 +
6201 + *dst_len = src1_len + src2_len + zero_padding;
6202 +
6203 +- crypto_blkcipher_setkey((void *)tfm, key, key_len);
6204 + sg_init_table(sg_in, 3);
6205 + sg_set_buf(&sg_in[0], src1, src1_len);
6206 + sg_set_buf(&sg_in[1], src2, src2_len);
6207 + sg_set_buf(&sg_in[2], pad, zero_padding);
6208 +- sg_init_table(sg_out, 1);
6209 +- sg_set_buf(sg_out, dst, *dst_len);
6210 ++ ret = setup_sgtable(&sg_out, &prealloc_sg, dst, *dst_len);
6211 ++ if (ret)
6212 ++ goto out_tfm;
6213 ++
6214 ++ crypto_blkcipher_setkey((void *)tfm, key, key_len);
6215 + iv = crypto_blkcipher_crt(tfm)->iv;
6216 + ivsize = crypto_blkcipher_ivsize(tfm);
6217 +-
6218 + memcpy(iv, aes_iv, ivsize);
6219 ++
6220 + /*
6221 + print_hex_dump(KERN_ERR, "enc key: ", DUMP_PREFIX_NONE, 16, 1,
6222 + key, key_len, 1);
6223 +@@ -181,23 +263,30 @@ static int ceph_aes_encrypt2(const void *key, int key_len, void *dst,
6224 + print_hex_dump(KERN_ERR, "enc pad: ", DUMP_PREFIX_NONE, 16, 1,
6225 + pad, zero_padding, 1);
6226 + */
6227 +- ret = crypto_blkcipher_encrypt(&desc, sg_out, sg_in,
6228 ++ ret = crypto_blkcipher_encrypt(&desc, sg_out.sgl, sg_in,
6229 + src1_len + src2_len + zero_padding);
6230 +- crypto_free_blkcipher(tfm);
6231 +- if (ret < 0)
6232 ++ if (ret < 0) {
6233 + pr_err("ceph_aes_crypt2 failed %d\n", ret);
6234 ++ goto out_sg;
6235 ++ }
6236 + /*
6237 + print_hex_dump(KERN_ERR, "enc out: ", DUMP_PREFIX_NONE, 16, 1,
6238 + dst, *dst_len, 1);
6239 + */
6240 +- return 0;
6241 ++
6242 ++out_sg:
6243 ++ teardown_sgtable(&sg_out);
6244 ++out_tfm:
6245 ++ crypto_free_blkcipher(tfm);
6246 ++ return ret;
6247 + }
6248 +
6249 + static int ceph_aes_decrypt(const void *key, int key_len,
6250 + void *dst, size_t *dst_len,
6251 + const void *src, size_t src_len)
6252 + {
6253 +- struct scatterlist sg_in[1], sg_out[2];
6254 ++ struct sg_table sg_in;
6255 ++ struct scatterlist sg_out[2], prealloc_sg;
6256 + struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
6257 + struct blkcipher_desc desc = { .tfm = tfm };
6258 + char pad[16];
6259 +@@ -209,16 +298,16 @@ static int ceph_aes_decrypt(const void *key, int key_len,
6260 + if (IS_ERR(tfm))
6261 + return PTR_ERR(tfm);
6262 +
6263 +- crypto_blkcipher_setkey((void *)tfm, key, key_len);
6264 +- sg_init_table(sg_in, 1);
6265 + sg_init_table(sg_out, 2);
6266 +- sg_set_buf(sg_in, src, src_len);
6267 + sg_set_buf(&sg_out[0], dst, *dst_len);
6268 + sg_set_buf(&sg_out[1], pad, sizeof(pad));
6269 ++ ret = setup_sgtable(&sg_in, &prealloc_sg, src, src_len);
6270 ++ if (ret)
6271 ++ goto out_tfm;
6272 +
6273 ++ crypto_blkcipher_setkey((void *)tfm, key, key_len);
6274 + iv = crypto_blkcipher_crt(tfm)->iv;
6275 + ivsize = crypto_blkcipher_ivsize(tfm);
6276 +-
6277 + memcpy(iv, aes_iv, ivsize);
6278 +
6279 + /*
6280 +@@ -227,12 +316,10 @@ static int ceph_aes_decrypt(const void *key, int key_len,
6281 + print_hex_dump(KERN_ERR, "dec in: ", DUMP_PREFIX_NONE, 16, 1,
6282 + src, src_len, 1);
6283 + */
6284 +-
6285 +- ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in, src_len);
6286 +- crypto_free_blkcipher(tfm);
6287 ++ ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in.sgl, src_len);
6288 + if (ret < 0) {
6289 + pr_err("ceph_aes_decrypt failed %d\n", ret);
6290 +- return ret;
6291 ++ goto out_sg;
6292 + }
6293 +
6294 + if (src_len <= *dst_len)
6295 +@@ -250,7 +337,12 @@ static int ceph_aes_decrypt(const void *key, int key_len,
6296 + print_hex_dump(KERN_ERR, "dec out: ", DUMP_PREFIX_NONE, 16, 1,
6297 + dst, *dst_len, 1);
6298 + */
6299 +- return 0;
6300 ++
6301 ++out_sg:
6302 ++ teardown_sgtable(&sg_in);
6303 ++out_tfm:
6304 ++ crypto_free_blkcipher(tfm);
6305 ++ return ret;
6306 + }
6307 +
6308 + static int ceph_aes_decrypt2(const void *key, int key_len,
6309 +@@ -258,7 +350,8 @@ static int ceph_aes_decrypt2(const void *key, int key_len,
6310 + void *dst2, size_t *dst2_len,
6311 + const void *src, size_t src_len)
6312 + {
6313 +- struct scatterlist sg_in[1], sg_out[3];
6314 ++ struct sg_table sg_in;
6315 ++ struct scatterlist sg_out[3], prealloc_sg;
6316 + struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
6317 + struct blkcipher_desc desc = { .tfm = tfm };
6318 + char pad[16];
6319 +@@ -270,17 +363,17 @@ static int ceph_aes_decrypt2(const void *key, int key_len,
6320 + if (IS_ERR(tfm))
6321 + return PTR_ERR(tfm);
6322 +
6323 +- sg_init_table(sg_in, 1);
6324 +- sg_set_buf(sg_in, src, src_len);
6325 + sg_init_table(sg_out, 3);
6326 + sg_set_buf(&sg_out[0], dst1, *dst1_len);
6327 + sg_set_buf(&sg_out[1], dst2, *dst2_len);
6328 + sg_set_buf(&sg_out[2], pad, sizeof(pad));
6329 ++ ret = setup_sgtable(&sg_in, &prealloc_sg, src, src_len);
6330 ++ if (ret)
6331 ++ goto out_tfm;
6332 +
6333 + crypto_blkcipher_setkey((void *)tfm, key, key_len);
6334 + iv = crypto_blkcipher_crt(tfm)->iv;
6335 + ivsize = crypto_blkcipher_ivsize(tfm);
6336 +-
6337 + memcpy(iv, aes_iv, ivsize);
6338 +
6339 + /*
6340 +@@ -289,12 +382,10 @@ static int ceph_aes_decrypt2(const void *key, int key_len,
6341 + print_hex_dump(KERN_ERR, "dec in: ", DUMP_PREFIX_NONE, 16, 1,
6342 + src, src_len, 1);
6343 + */
6344 +-
6345 +- ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in, src_len);
6346 +- crypto_free_blkcipher(tfm);
6347 ++ ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in.sgl, src_len);
6348 + if (ret < 0) {
6349 + pr_err("ceph_aes_decrypt failed %d\n", ret);
6350 +- return ret;
6351 ++ goto out_sg;
6352 + }
6353 +
6354 + if (src_len <= *dst1_len)
6355 +@@ -324,7 +415,11 @@ static int ceph_aes_decrypt2(const void *key, int key_len,
6356 + dst2, *dst2_len, 1);
6357 + */
6358 +
6359 +- return 0;
6360 ++out_sg:
6361 ++ teardown_sgtable(&sg_in);
6362 ++out_tfm:
6363 ++ crypto_free_blkcipher(tfm);
6364 ++ return ret;
6365 + }
6366 +
6367 +
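
One detail of setup_sgtable() above worth spelling out: for a vmalloc'ed
buffer the pages are only virtually contiguous, so the table needs one
scatterlist entry per page touched, including the partial pages at either
end. The sizing arithmetic in isolation:

    #include <stdio.h>

    #define PAGE_SIZE     4096UL
    #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

    /* Number of sg entries setup_sgtable() allocates for a vmalloc
     * buffer starting `off` bytes into its first page. */
    static unsigned long vmalloc_sg_count(unsigned long off, unsigned long len)
    {
            return PAGE_ALIGN(off + len) / PAGE_SIZE;
    }

    int main(void)
    {
            /* 8100 bytes starting 100 bytes into a page touch three pages. */
            printf("%lu\n", vmalloc_sg_count(100, 8100));   /* prints 3 */
            return 0;
    }
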
6368 +diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
6369 +index 7827436ae843..330be870c1ef 100644
6370 +--- a/net/ipv4/ip_output.c
6371 ++++ b/net/ipv4/ip_output.c
6372 +@@ -1346,10 +1346,10 @@ struct sk_buff *__ip_make_skb(struct sock *sk,
6373 + iph->ihl = 5;
6374 + iph->tos = inet->tos;
6375 + iph->frag_off = df;
6376 +- ip_select_ident(skb, sk);
6377 + iph->ttl = ttl;
6378 + iph->protocol = sk->sk_protocol;
6379 + ip_copy_addrs(iph, fl4);
6380 ++ ip_select_ident(skb, sk);
6381 +
6382 + if (opt) {
6383 + iph->ihl += opt->optlen>>2;
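
The one-line move above matters because, with the less-predictable-ID
hardening already present in this stable series (note ip_idents_reserve()
used elsewhere in this patch), ip_select_ident() derives the IPv4 ID from the
header's source and destination addresses; called before ip_copy_addrs() it
would hash whatever the header happened to contain. A toy version of the
ordering hazard (pick_ident() is a stand-in hash, not the kernel's):

    #include <stdio.h>
    #include <stdint.h>

    struct iphdr_model { uint32_t saddr, daddr; uint16_t id; };

    static uint16_t pick_ident(const struct iphdr_model *h)
    {
            return (uint16_t)((h->saddr * 2654435761u) ^ h->daddr);
    }

    int main(void)
    {
            struct iphdr_model h = { 0, 0, 0 };

            h.id = pick_ident(&h);              /* buggy order: hashes zeroes */
            printf("before addrs: id=%u\n", h.id);

            h.saddr = 0x0a000001; h.daddr = 0x0a000002;
            h.id = pick_ident(&h);              /* fixed order */
            printf("after addrs:  id=%u\n", h.id);
            return 0;
    }
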
6384 +diff --git a/net/ipv6/Makefile b/net/ipv6/Makefile
6385 +index 686934acfac1..4b20d5606f6d 100644
6386 +--- a/net/ipv6/Makefile
6387 ++++ b/net/ipv6/Makefile
6388 +@@ -37,6 +37,6 @@ obj-$(CONFIG_NETFILTER) += netfilter/
6389 + obj-$(CONFIG_IPV6_SIT) += sit.o
6390 + obj-$(CONFIG_IPV6_TUNNEL) += ip6_tunnel.o
6391 +
6392 +-obj-y += addrconf_core.o exthdrs_core.o
6393 ++obj-y += addrconf_core.o exthdrs_core.o output_core.o
6394 +
6395 + obj-$(subst m,y,$(CONFIG_IPV6)) += inet6_hashtables.o
6396 +diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
6397 +new file mode 100644
6398 +index 000000000000..a6126c62a9be
6399 +--- /dev/null
6400 ++++ b/net/ipv6/output_core.c
6401 +@@ -0,0 +1,38 @@
6402 ++#include <linux/export.h>
6403 ++#include <linux/skbuff.h>
6404 ++#include <net/ip.h>
6405 ++#include <net/ipv6.h>
6406 ++
6407 ++/* This function exists only for tap drivers that must support broken
6408 ++ * clients requesting UFO without specifying an IPv6 fragment ID.
6409 ++ *
6410 ++ * This is similar to ipv6_select_ident() but we use an independent hash
6411 ++ * seed to limit information leakage.
6412 ++ */
6413 ++void ipv6_proxy_select_ident(struct sk_buff *skb)
6414 ++{
6415 ++ static u32 ip6_proxy_idents_hashrnd __read_mostly;
6416 ++ static bool hashrnd_initialized = false;
6417 ++ struct in6_addr buf[2];
6418 ++ struct in6_addr *addrs;
6419 ++ u32 hash, id;
6420 ++
6421 ++ addrs = skb_header_pointer(skb,
6422 ++ skb_network_offset(skb) +
6423 ++ offsetof(struct ipv6hdr, saddr),
6424 ++ sizeof(buf), buf);
6425 ++ if (!addrs)
6426 ++ return;
6427 ++
6428 ++ if (unlikely(!hashrnd_initialized)) {
6429 ++ hashrnd_initialized = true;
6430 ++ get_random_bytes(&ip6_proxy_idents_hashrnd,
6431 ++ sizeof(ip6_proxy_idents_hashrnd));
6432 ++ }
6433 ++ hash = __ipv6_addr_jhash(&addrs[1], ip6_proxy_idents_hashrnd);
6434 ++ hash = __ipv6_addr_jhash(&addrs[0], hash);
6435 ++
6436 ++ id = ip_idents_reserve(hash, 1);
6437 ++ skb_shinfo(skb)->ip6_frag_id = htonl(id);
6438 ++}
6439 ++EXPORT_SYMBOL_GPL(ipv6_proxy_select_ident);
6440 +diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
6441 +index 95a04f02f30b..9f32756a302a 100644
6442 +--- a/net/mac80211/iface.c
6443 ++++ b/net/mac80211/iface.c
6444 +@@ -395,10 +395,12 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata,
6445 + u32 hw_reconf_flags = 0;
6446 + int i;
6447 + enum nl80211_channel_type orig_ct;
6448 ++ bool cancel_scan;
6449 +
6450 + clear_bit(SDATA_STATE_RUNNING, &sdata->state);
6451 +
6452 +- if (local->scan_sdata == sdata)
6453 ++ cancel_scan = local->scan_sdata == sdata;
6454 ++ if (cancel_scan)
6455 + ieee80211_scan_cancel(local);
6456 +
6457 + /*
6458 +@@ -562,6 +564,9 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata,
6459 +
6460 + ieee80211_recalc_ps(local, -1);
6461 +
6462 ++ if (cancel_scan)
6463 ++ flush_delayed_work(&local->scan_work);
6464 ++
6465 + if (local->open_count == 0) {
6466 + if (local->ops->napi_poll)
6467 + napi_disable(&local->napi);
6468 +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
6469 +index f5ed86388555..32929b07269f 100644
6470 +--- a/net/mac80211/rx.c
6471 ++++ b/net/mac80211/rx.c
6472 +@@ -1486,11 +1486,14 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
6473 + sc = le16_to_cpu(hdr->seq_ctrl);
6474 + frag = sc & IEEE80211_SCTL_FRAG;
6475 +
6476 +- if (likely((!ieee80211_has_morefrags(fc) && frag == 0) ||
6477 +- is_multicast_ether_addr(hdr->addr1))) {
6478 +- /* not fragmented */
6479 ++ if (likely(!ieee80211_has_morefrags(fc) && frag == 0))
6480 ++ goto out;
6481 ++
6482 ++ if (is_multicast_ether_addr(hdr->addr1)) {
6483 ++ rx->local->dot11MulticastReceivedFrameCount++;
6484 + goto out;
6485 + }
6486 ++
6487 + I802_DEBUG_INC(rx->local->rx_handlers_fragments);
6488 +
6489 + if (skb_linearize(rx->skb))
6490 +@@ -1583,10 +1586,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
6491 + out:
6492 + if (rx->sta)
6493 + rx->sta->rx_packets++;
6494 +- if (is_multicast_ether_addr(hdr->addr1))
6495 +- rx->local->dot11MulticastReceivedFrameCount++;
6496 +- else
6497 +- ieee80211_led_rx(rx->local);
6498 ++ ieee80211_led_rx(rx->local);
6499 + return RX_CONTINUE;
6500 + }
6501 +
6502 +diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
6503 +index c487715698ab..d96b7f6a3b44 100644
6504 +--- a/security/integrity/evm/evm_main.c
6505 ++++ b/security/integrity/evm/evm_main.c
6506 +@@ -282,9 +282,12 @@ int evm_inode_setxattr(struct dentry *dentry, const char *xattr_name,
6507 + {
6508 + const struct evm_ima_xattr_data *xattr_data = xattr_value;
6509 +
6510 +- if ((strcmp(xattr_name, XATTR_NAME_EVM) == 0)
6511 +- && (xattr_data->type == EVM_XATTR_HMAC))
6512 +- return -EPERM;
6513 ++ if (strcmp(xattr_name, XATTR_NAME_EVM) == 0) {
6514 ++ if (!xattr_value_len)
6515 ++ return -EINVAL;
6516 ++ if (xattr_data->type != EVM_IMA_XATTR_DIGSIG)
6517 ++ return -EPERM;
6518 ++ }
6519 + return evm_protect_xattr(dentry, xattr_name, xattr_value,
6520 + xattr_value_len);
6521 + }
6522 +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
6523 +index 639e5c4028ff..cbae6d392087 100644
6524 +--- a/security/selinux/hooks.c
6525 ++++ b/security/selinux/hooks.c
6526 +@@ -436,6 +436,7 @@ next_inode:
6527 + list_entry(sbsec->isec_head.next,
6528 + struct inode_security_struct, list);
6529 + struct inode *inode = isec->inode;
6530 ++ list_del_init(&isec->list);
6531 + spin_unlock(&sbsec->isec_lock);
6532 + inode = igrab(inode);
6533 + if (inode) {
6534 +@@ -444,7 +445,6 @@ next_inode:
6535 + iput(inode);
6536 + }
6537 + spin_lock(&sbsec->isec_lock);
6538 +- list_del_init(&isec->list);
6539 + goto next_inode;
6540 + }
6541 + spin_unlock(&sbsec->isec_lock);
6542 +diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c
6543 +index 91cdf9435fec..4dbb66ef435d 100644
6544 +--- a/sound/core/pcm_compat.c
6545 ++++ b/sound/core/pcm_compat.c
6546 +@@ -204,6 +204,8 @@ static int snd_pcm_status_user_compat(struct snd_pcm_substream *substream,
6547 + if (err < 0)
6548 + return err;
6549 +
6550 ++ if (clear_user(src, sizeof(*src)))
6551 ++ return -EFAULT;
6552 + if (put_user(status.state, &src->state) ||
6553 + put_user(status.trigger_tstamp.tv_sec, &src->trigger_tstamp.tv_sec) ||
6554 + put_user(status.trigger_tstamp.tv_nsec, &src->trigger_tstamp.tv_nsec) ||
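
The clear_user() added above closes an information leak: the compat status
structure is populated field-by-field with put_user(), so padding and any
bytes the chain never writes would otherwise keep stale kernel data. A
userspace demonstration that field-wise stores leave padding untouched:

    #include <stdio.h>
    #include <string.h>

    struct status_model {
            char state;     /* typically followed by 3 padding bytes */
            int  value;
    };

    int main(void)
    {
            struct status_model s;
            unsigned char *p = (unsigned char *)&s;
            size_t i;

            memset(&s, 0xAA, sizeof(s));    /* stand-in for stale memory */
            s.state = 1;                    /* field-wise "put_user" writes */
            s.value = 42;

            for (i = 0; i < sizeof(s); i++)
                    printf("%02x ", p[i]);  /* padding still reads 0xaa */
            printf("\n");
            return 0;
    }
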
6555 +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
6556 +index d776291d09a0..3a907935fa09 100644
6557 +--- a/sound/core/pcm_native.c
6558 ++++ b/sound/core/pcm_native.c
6559 +@@ -3171,7 +3171,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = {
6560 +
6561 + #ifndef ARCH_HAS_DMA_MMAP_COHERENT
6562 + /* This should be defined / handled globally! */
6563 +-#ifdef CONFIG_ARM
6564 ++#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
6565 + #define ARCH_HAS_DMA_MMAP_COHERENT
6566 + #endif
6567 + #endif
6568 +diff --git a/sound/pci/emu10k1/emu10k1_callback.c b/sound/pci/emu10k1/emu10k1_callback.c
6569 +index a0afa5057488..f35284be7b02 100644
6570 +--- a/sound/pci/emu10k1/emu10k1_callback.c
6571 ++++ b/sound/pci/emu10k1/emu10k1_callback.c
6572 +@@ -85,6 +85,8 @@ snd_emu10k1_ops_setup(struct snd_emux *emux)
6573 + * get more voice for pcm
6574 + *
6575 + * terminate most inactive voice and give it as a pcm voice.
6576 ++ *
6577 ++ * voice_lock is already held.
6578 + */
6579 + int
6580 + snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
6581 +@@ -92,12 +94,10 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
6582 + struct snd_emux *emu;
6583 + struct snd_emux_voice *vp;
6584 + struct best_voice best[V_END];
6585 +- unsigned long flags;
6586 + int i;
6587 +
6588 + emu = hw->synth;
6589 +
6590 +- spin_lock_irqsave(&emu->voice_lock, flags);
6591 + lookup_voices(emu, hw, best, 1); /* no OFF voices */
6592 + for (i = 0; i < V_END; i++) {
6593 + if (best[i].voice >= 0) {
6594 +@@ -113,11 +113,9 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
6595 + vp->emu->num_voices--;
6596 + vp->ch = -1;
6597 + vp->state = SNDRV_EMUX_ST_OFF;
6598 +- spin_unlock_irqrestore(&emu->voice_lock, flags);
6599 + return ch;
6600 + }
6601 + }
6602 +- spin_unlock_irqrestore(&emu->voice_lock, flags);
6603 +
6604 + /* not found */
6605 + return -ENOMEM;
6606 +diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
6607 +index f0b8d8e38f71..c40b7ca7a143 100644
6608 +--- a/sound/soc/codecs/sgtl5000.c
6609 ++++ b/sound/soc/codecs/sgtl5000.c
6610 +@@ -1313,8 +1313,7 @@ static int sgtl5000_probe(struct snd_soc_codec *codec)
6611 +
6612 + /* enable small pop, introduce 400ms delay in turning off */
6613 + snd_soc_update_bits(codec, SGTL5000_CHIP_REF_CTRL,
6614 +- SGTL5000_SMALL_POP,
6615 +- SGTL5000_SMALL_POP);
6616 ++ SGTL5000_SMALL_POP, 1);
6617 +
6618 + /* disable short cut detector */
6619 + snd_soc_write(codec, SGTL5000_CHIP_SHORT_CTRL, 0);
6620 +diff --git a/sound/soc/codecs/sgtl5000.h b/sound/soc/codecs/sgtl5000.h
6621 +index d3a68bbfea00..0bd6e1cd8200 100644
6622 +--- a/sound/soc/codecs/sgtl5000.h
6623 ++++ b/sound/soc/codecs/sgtl5000.h
6624 +@@ -275,7 +275,7 @@
6625 + #define SGTL5000_BIAS_CTRL_MASK 0x000e
6626 + #define SGTL5000_BIAS_CTRL_SHIFT 1
6627 + #define SGTL5000_BIAS_CTRL_WIDTH 3
6628 +-#define SGTL5000_SMALL_POP 0x0001
6629 ++#define SGTL5000_SMALL_POP 0
6630 +
6631 + /*
6632 + * SGTL5000_CHIP_MIC_CTRL
6633 +diff --git a/sound/soc/sh/fsi.c b/sound/soc/sh/fsi.c
6634 +index 91b728774dba..eb0599f29768 100644
6635 +--- a/sound/soc/sh/fsi.c
6636 ++++ b/sound/soc/sh/fsi.c
6637 +@@ -1393,8 +1393,7 @@ static const struct snd_soc_dai_ops fsi_dai_ops = {
6638 + static struct snd_pcm_hardware fsi_pcm_hardware = {
6639 + .info = SNDRV_PCM_INFO_INTERLEAVED |
6640 + SNDRV_PCM_INFO_MMAP |
6641 +- SNDRV_PCM_INFO_MMAP_VALID |
6642 +- SNDRV_PCM_INFO_PAUSE,
6643 ++ SNDRV_PCM_INFO_MMAP_VALID,
6644 + .formats = FSI_FMTS,
6645 + .rates = FSI_RATES,
6646 + .rate_min = 8000,
6647 +diff --git a/sound/usb/card.c b/sound/usb/card.c
6648 +index 658ea1118a8e..43fca5231628 100644
6649 +--- a/sound/usb/card.c
6650 ++++ b/sound/usb/card.c
6651 +@@ -568,18 +568,19 @@ static void snd_usb_audio_disconnect(struct usb_device *dev,
6652 + {
6653 + struct snd_card *card;
6654 + struct list_head *p;
6655 ++ bool was_shutdown;
6656 +
6657 + if (chip == (void *)-1L)
6658 + return;
6659 +
6660 + card = chip->card;
6661 + down_write(&chip->shutdown_rwsem);
6662 ++ was_shutdown = chip->shutdown;
6663 + chip->shutdown = 1;
6664 + up_write(&chip->shutdown_rwsem);
6665 +
6666 + mutex_lock(&register_mutex);
6667 +- chip->num_interfaces--;
6668 +- if (chip->num_interfaces <= 0) {
6669 ++ if (!was_shutdown) {
6670 + snd_card_disconnect(card);
6671 + /* release the pcm resources */
6672 + list_for_each(p, &chip->pcm_list) {
6673 +@@ -593,6 +594,10 @@ static void snd_usb_audio_disconnect(struct usb_device *dev,
6674 + list_for_each(p, &chip->mixer_list) {
6675 + snd_usb_mixer_disconnect(p);
6676 + }
6677 ++ }
6678 ++
6679 ++ chip->num_interfaces--;
6680 ++ if (chip->num_interfaces <= 0) {
6681 + usb_chip[chip->index] = NULL;
6682 + mutex_unlock(&register_mutex);
6683 + snd_card_free_when_closed(card);
6684 +diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
6685 +index 915bc2cf73d9..5ef357983d92 100644
6686 +--- a/sound/usb/quirks-table.h
6687 ++++ b/sound/usb/quirks-table.h
6688 +@@ -301,6 +301,36 @@ YAMAHA_DEVICE(0x105d, NULL),
6689 + }
6690 + }
6691 + },
6692 ++{
6693 ++ USB_DEVICE(0x0499, 0x1509),
6694 ++ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
6695 ++ /* .vendor_name = "Yamaha", */
6696 ++ /* .product_name = "Steinberg UR22", */
6697 ++ .ifnum = QUIRK_ANY_INTERFACE,
6698 ++ .type = QUIRK_COMPOSITE,
6699 ++ .data = (const struct snd_usb_audio_quirk[]) {
6700 ++ {
6701 ++ .ifnum = 1,
6702 ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE
6703 ++ },
6704 ++ {
6705 ++ .ifnum = 2,
6706 ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE
6707 ++ },
6708 ++ {
6709 ++ .ifnum = 3,
6710 ++ .type = QUIRK_MIDI_YAMAHA
6711 ++ },
6712 ++ {
6713 ++ .ifnum = 4,
6714 ++ .type = QUIRK_IGNORE_INTERFACE
6715 ++ },
6716 ++ {
6717 ++ .ifnum = -1
6718 ++ }
6719 ++ }
6720 ++ }
6721 ++},
6722 + YAMAHA_DEVICE(0x2000, "DGP-7"),
6723 + YAMAHA_DEVICE(0x2001, "DGP-5"),
6724 + YAMAHA_DEVICE(0x2002, NULL),
6725 +diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
6726 +index defc9baa9a45..3225903ec91b 100644
6727 +--- a/virt/kvm/iommu.c
6728 ++++ b/virt/kvm/iommu.c
6729 +@@ -43,13 +43,13 @@ static void kvm_iommu_put_pages(struct kvm *kvm,
6730 + gfn_t base_gfn, unsigned long npages);
6731 +
6732 + static pfn_t kvm_pin_pages(struct kvm *kvm, struct kvm_memory_slot *slot,
6733 +- gfn_t gfn, unsigned long size)
6734 ++ gfn_t gfn, unsigned long npages)
6735 + {
6736 + gfn_t end_gfn;
6737 + pfn_t pfn;
6738 +
6739 + pfn = gfn_to_pfn_memslot(kvm, slot, gfn);
6740 +- end_gfn = gfn + (size >> PAGE_SHIFT);
6741 ++ end_gfn = gfn + npages;
6742 + gfn += 1;
6743 +
6744 + if (is_error_pfn(pfn))
6745 +@@ -117,7 +117,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
6746 + * Pin all pages we are about to map in memory. This is
6747 + * important because we unmap and unpin in 4kb steps later.
6748 + */
6749 +- pfn = kvm_pin_pages(kvm, slot, gfn, page_size);
6750 ++ pfn = kvm_pin_pages(kvm, slot, gfn, page_size >> PAGE_SHIFT);
6751 + if (is_error_pfn(pfn)) {
6752 + gfn += 1;
6753 + continue;
6754 +@@ -129,7 +129,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
6755 + if (r) {
6756 + printk(KERN_ERR "kvm_iommu_map_address:"
6757 + "iommu failed to map pfn=%llx\n", pfn);
6758 +- kvm_unpin_pages(kvm, pfn, page_size);
6759 ++ kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);
6760 + goto unmap_pages;
6761 + }
6762 +
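
The virt/kvm/iommu.c change above is a unit fix: kvm_unpin_pages() counts
pages, but the error path handed it page_size in bytes, un-pinning up to
PAGE_SIZE times more frames than were pinned; the pin helper is switched to
page counts so both sides agree. The magnitude of the mismatch for a 2 MiB
huge mapping:

    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
            unsigned long page_size = 2UL << 20;    /* 2 MiB mapping */

            printf("buggy unpin count: %lu frames\n", page_size);
            printf("fixed unpin count: %lu frames\n", page_size >> PAGE_SHIFT);
            return 0;
    }
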
6763 +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
6764 +index bc5ed1412382..f4732bd2816c 100644
6765 +--- a/virt/kvm/kvm_main.c
6766 ++++ b/virt/kvm/kvm_main.c
6767 +@@ -52,6 +52,7 @@
6768 +
6769 + #include <asm/processor.h>
6770 + #include <asm/io.h>
6771 ++#include <asm/ioctl.h>
6772 + #include <asm/uaccess.h>
6773 + #include <asm/pgtable.h>
6774 +
6775 +@@ -1744,6 +1745,9 @@ static long kvm_vcpu_ioctl(struct file *filp,
6776 + if (vcpu->kvm->mm != current->mm)
6777 + return -EIO;
6778 +
6779 ++ if (unlikely(_IOC_TYPE(ioctl) != KVMIO))
6780 ++ return -EINVAL;
6781 ++
6782 + #if defined(CONFIG_S390) || defined(CONFIG_PPC)
6783 + /*
6784 + * Special cases: vcpu ioctls that are asynchronous to vcpu execution,