Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:3.10 commit in: /
Date: Fri, 31 Oct 2014 11:21:41
Message-Id: 1414754326.fdca1444333f450d874582ba2ad6532254eb6d4d.mpagano@gentoo
1 commit: fdca1444333f450d874582ba2ad6532254eb6d4d
2 Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
3 AuthorDate: Fri Oct 31 11:18:46 2014 +0000
4 Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
5 CommitDate: Fri Oct 31 11:18:46 2014 +0000
6 URL: http://sources.gentoo.org/gitweb/?p=proj/linux-patches.git;a=commit;h=fdca1444
7
8 Linux patch 3.10.59
9
10 ---
11 0000_README | 4 +
12 1058_linux-3.10.59.patch | 1369 ++++++++++++++++++++++++++++++++++++++++++++++
13 2 files changed, 1373 insertions(+)
14
15 diff --git a/0000_README b/0000_README
16 index fb459eb..580573b 100644
17 --- a/0000_README
18 +++ b/0000_README
19 @@ -274,6 +274,10 @@ Patch: 1057_linux-3.10.58.patch
20 From: http://www.kernel.org
21 Desc: Linux 3.10.58
22
23 +Patch: 1058_linux-3.10.59.patch
24 +From: http://www.kernel.org
25 +Desc: Linux 3.10.59
26 +
27 Patch: 1500_XATTR_USER_PREFIX.patch
28 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
29 Desc: Support for namespace user.pax.* on tmpfs.
30
31 diff --git a/1058_linux-3.10.59.patch b/1058_linux-3.10.59.patch
32 new file mode 100644
33 index 0000000..8bddbfc
34 --- /dev/null
35 +++ b/1058_linux-3.10.59.patch
36 @@ -0,0 +1,1369 @@
37 +diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt
38 +new file mode 100644
39 +index 000000000000..ea45dd3901e3
40 +--- /dev/null
41 ++++ b/Documentation/lzo.txt
42 +@@ -0,0 +1,164 @@
43 ++
44 ++LZO stream format as understood by Linux's LZO decompressor
45 ++===========================================================
46 ++
47 ++Introduction
48 ++
49 ++ This is not a specification. No specification seems to be publicly available
50 ++ for the LZO stream format. This document describes what input format the LZO
51 ++ decompressor as implemented in the Linux kernel understands. The file
52 ++ under analysis is lib/lzo/lzo1x_decompress_safe.c. No analysis was made of
53 ++ the compressor or of any other implementation, though it seems likely that
54 ++ the format matches the standard one. The purpose of this document is to
55 ++ better understand what the code does in order to propose more efficient fixes
56 ++ for future bug reports.
57 ++
58 ++Description
59 ++
60 ++ The stream is composed of a series of instructions, operands, and data. The
61 ++ instructions consist of a few bits representing an opcode, and bits forming
62 ++ the operands for the instruction, whose size and position depend on the
63 ++ opcode and on the number of literals copied by the previous instruction. The
64 ++ operands are used to indicate :
65 ++
66 ++ - a distance when copying data from the dictionary (past output buffer)
67 ++ - a length (number of bytes to copy from dictionary)
68 ++ - the number of literals to copy, which is retained in variable "state"
69 ++ as a piece of information for next instructions.
70 ++
71 ++ Depending on the opcode and operands, extra data may optionally follow. This
72 ++ extra data can be a complement to the operand (eg: a length or a distance
73 ++ encoded on larger values), or literals to be copied to the output buffer.
74 ++
75 ++ The first byte of the block follows a different encoding from other bytes; it
76 ++ seems to be optimized for literal use only, since there is no dictionary yet
77 ++ prior to that byte.
78 ++
79 ++ Lengths are always encoded on a variable size starting with a small number
80 ++ of bits in the operand. If the number of bits isn't enough to represent the
81 ++ length, up to 255 may be added in increments by consuming more bytes with a
82 ++ rate of at most 255 per extra byte (thus the compression ratio cannot exceed
83 ++ around 255:1). The variable length encoding using #bits is always the same :
84 ++
85 ++ length = byte & ((1 << #bits) - 1)
86 ++ if (!length) {
87 ++ length = ((1 << #bits) - 1)
88 ++ length += 255*(number of zero bytes)
89 ++ length += first-non-zero-byte
90 ++ }
91 ++ length += constant (generally 2 or 3)
92 ++
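As an illustration only, the variable-length scheme above can be sketched in C. `lzo_read_length` is a hypothetical helper, not the kernel's actual code; `bits` is the number of length bits in the opcode and `constant` the opcode-specific bias (e.g. 5 and 2 for the 001LLLLL opcode described below):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the variable-length decode described above.
 * `opcode_byte` is the instruction byte, `bits` the number of length
 * bits it carries, `constant` the opcode-specific bias; `*in` points
 * just past the opcode byte and is advanced over any extension bytes. */
static size_t lzo_read_length(uint8_t opcode_byte, int bits, size_t constant,
                              const uint8_t **in)
{
    size_t length = opcode_byte & ((1u << bits) - 1);

    if (length == 0) {
        length = (1u << bits) - 1;
        /* each zero extension byte adds 255; first non-zero byte ends it */
        while (**in == 0) {
            length += 255;
            (*in)++;
        }
        length += *(*in)++;
    }
    return length + constant;
}
```

For the 001LLLLL opcode (bits = 5, constant = 2), a byte of 0x20 has L == 0 and forces the extension path: 31, plus 255 per zero byte consumed, plus the first non-zero byte, plus the constant.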
93 ++ For references to the dictionary, distances are relative to the output
94 ++ pointer. Distances are encoded using very few bits belonging to certain
95 ++ ranges, resulting in multiple copy instructions using different encodings.
96 ++ Certain encodings involve one extra byte, others involve two extra bytes
97 ++ forming a little-endian 16-bit quantity (marked LE16 below).
98 ++
99 ++ After any instruction except the large literal copy, 0, 1, 2 or 3 literals
100 ++ are copied before starting the next instruction. The number of literals that
101 ++ were copied may change the meaning and behaviour of the next instruction. In
102 ++ practice, only one instruction needs to know whether 0, less than 4, or more
103 ++ literals were copied. This is the information stored in the <state> variable
104 ++ in this implementation. This number of immediate literals to be copied is
105 ++ generally encoded in the last two bits of the instruction but may also be
106 ++ taken from the last two bits of an extra operand (eg: distance).
107 ++
108 ++ End of stream is declared when a block copy of distance 0 is seen. Only one
109 ++ instruction may encode this distance (0001HLLL), it takes one LE16 operand
110 ++ for the distance, thus requiring 3 bytes.
111 ++
112 ++ IMPORTANT NOTE : in the code some length checks are missing because certain
113 ++ instructions are called under the assumption that a certain number of bytes
114 ++ follow, since this has already been guaranteed before parsing the instructions.
115 ++ They just have to "refill" this credit if they consume extra bytes. This is
116 ++ an implementation design choice independent of the algorithm or encoding.
117 ++
118 ++Byte sequences
119 ++
120 ++ First byte encoding :
121 ++
122 ++ 0..17 : follow regular instruction encoding, see below. It is worth
123 ++ noting that codes 16 and 17 represent a block copy from the
124 ++ dictionary, which is still empty at this point, so they are
125 ++ always invalid here.
126 ++
127 ++ 18..21 : copy 0..3 literals
128 ++ state = (byte - 17) = 0..3 [ copy <state> literals ]
129 ++ skip byte
130 ++
131 ++ 22..255 : copy literal string
132 ++ length = (byte - 17) = 4..238
133 ++ state = 4 [ don't copy extra literals ]
134 ++ skip byte
135 ++
136 ++ Instruction encoding :
137 ++
138 ++ 0 0 0 0 X X X X (0..15)
139 ++ Depends on the number of literals copied by the last instruction.
140 ++ If the last instruction did not copy any literals (state == 0), this
141 ++ encoding is a copy of 4 or more literals, and must be interpreted
142 ++ like this :
143 ++
144 ++ 0 0 0 0 L L L L (0..15) : copy long literal string
145 ++ length = 3 + (L ?: 15 + (zero_bytes * 255) + non_zero_byte)
146 ++ state = 4 (no extra literals are copied)
147 ++
148 ++ If the last instruction copied between 1 and 3 literals (encoded in
149 ++ the instruction's opcode or distance), the instruction is a copy of a
150 ++ 2-byte block from the dictionary within a 1kB distance. It is worth
151 ++ noting that this instruction provides little savings since it uses 2
152 ++ bytes to encode a copy of 2 other bytes but it encodes the number of
153 ++ following literals for free. It must be interpreted like this :
154 ++
155 ++ 0 0 0 0 D D S S (0..15) : copy 2 bytes from <= 1kB distance
156 ++ length = 2
157 ++ state = S (copy S literals after this block)
158 ++ Always followed by exactly one byte : H H H H H H H H
159 ++ distance = (H << 2) + D + 1
160 ++
161 ++ If the last instruction copied 4 or more literals (as detected by
162 ++ state == 4), the instruction becomes a copy of a 3-byte block from the
163 ++ dictionary from a 2..3kB distance, and must be interpreted like this :
164 ++
165 ++ 0 0 0 0 D D S S (0..15) : copy 3 bytes from 2..3 kB distance
166 ++ length = 3
167 ++ state = S (copy S literals after this block)
168 ++ Always followed by exactly one byte : H H H H H H H H
169 ++ distance = (H << 2) + D + 2049
170 ++
171 ++ 0 0 0 1 H L L L (16..31)
172 ++ Copy of a block within 16..48kB distance (preferably less than 10B)
173 ++ length = 2 + (L ?: 7 + (zero_bytes * 255) + non_zero_byte)
174 ++ Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S
175 ++ distance = 16384 + (H << 14) + D
176 ++ state = S (copy S literals after this block)
177 ++ End of stream is reached if distance == 16384
178 ++
179 ++ 0 0 1 L L L L L (32..63)
180 ++ Copy of small block within 16kB distance (preferably less than 34B)
181 ++ length = 2 + (L ?: 31 + (zero_bytes * 255) + non_zero_byte)
182 ++ Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S
183 ++ distance = D + 1
184 ++ state = S (copy S literals after this block)
185 ++
186 ++ 0 1 L D D D S S (64..127)
187 ++ Copy 3-4 bytes from block within 2kB distance
188 ++ state = S (copy S literals after this block)
189 ++ length = 3 + L
190 ++ Always followed by exactly one byte : H H H H H H H H
191 ++ distance = (H << 3) + D + 1
192 ++
193 ++ 1 L L D D D S S (128..255)
194 ++ Copy 5-8 bytes from block within 2kB distance
195 ++ state = S (copy S literals after this block)
196 ++ length = 5 + L
197 ++ Always followed by exactly one byte : H H H H H H H H
198 ++ distance = (H << 3) + D + 1
199 ++
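The two single-extra-byte copy encodings above (64..127 and 128..255) can be decoded with a few shifts and masks. The sketch below is a hypothetical illustration of the bit layout only, not the kernel's decoder:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical decode of the 01LDDDSS and 1LLDDDSS opcodes described
 * above. `op` is the opcode byte, `h` the single following byte;
 * outputs the copy length, the dictionary distance and the number of
 * trailing literals (state). */
static void lzo_decode_short_copy(uint8_t op, uint8_t h,
                                  unsigned *length, unsigned *distance,
                                  unsigned *state)
{
    *state = op & 0x03;                   /* S S : trailing literals */
    if (op >= 128)                        /* 1 L L D D D S S         */
        *length = 5 + ((op >> 5) & 0x03); /* L L : length 5..8       */
    else                                  /* 0 1 L D D D S S         */
        *length = 3 + ((op >> 5) & 0x01); /* L : length 3..4         */
    *distance = ((unsigned)h << 3) + ((op >> 2) & 0x07) + 1;
}
```

For example, opcode 0x6F (binary 01101111) with a following byte of 0x10 yields length 4, distance 132 and 3 trailing literals.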
200 ++Authors
201 ++
202 ++ This document was written by Willy Tarreau <w@×××.eu> on 2014/07/19 during an
203 ++ analysis of the decompression code available in Linux 3.16-rc5. The code is
204 ++ tricky; it is possible that this document contains mistakes or that a few
205 ++ corner cases were overlooked. In any case, please report any doubt, fix, or
206 ++ proposed update to the author(s) so that the document can be updated.
207 +diff --git a/Makefile b/Makefile
208 +index c27454b8ca3e..7baf27f5cf0f 100644
209 +--- a/Makefile
210 ++++ b/Makefile
211 +@@ -1,6 +1,6 @@
212 + VERSION = 3
213 + PATCHLEVEL = 10
214 +-SUBLEVEL = 58
215 ++SUBLEVEL = 59
216 + EXTRAVERSION =
217 + NAME = TOSSUG Baby Fish
218 +
219 +diff --git a/arch/arm/mach-at91/clock.c b/arch/arm/mach-at91/clock.c
220 +index da841885d01c..64f9f1045539 100644
221 +--- a/arch/arm/mach-at91/clock.c
222 ++++ b/arch/arm/mach-at91/clock.c
223 +@@ -947,6 +947,7 @@ static int __init at91_clock_reset(void)
224 + }
225 +
226 + at91_pmc_write(AT91_PMC_SCDR, scdr);
227 ++ at91_pmc_write(AT91_PMC_PCDR, pcdr);
228 + if (cpu_is_sama5d3())
229 + at91_pmc_write(AT91_PMC_PCDR1, pcdr1);
230 +
231 +diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h
232 +index 899af807ef0f..c30a548cee56 100644
233 +--- a/arch/arm64/include/asm/compat.h
234 ++++ b/arch/arm64/include/asm/compat.h
235 +@@ -33,8 +33,8 @@ typedef s32 compat_ssize_t;
236 + typedef s32 compat_time_t;
237 + typedef s32 compat_clock_t;
238 + typedef s32 compat_pid_t;
239 +-typedef u32 __compat_uid_t;
240 +-typedef u32 __compat_gid_t;
241 ++typedef u16 __compat_uid_t;
242 ++typedef u16 __compat_gid_t;
243 + typedef u16 __compat_uid16_t;
244 + typedef u16 __compat_gid16_t;
245 + typedef u32 __compat_uid32_t;
246 +diff --git a/arch/m68k/mm/hwtest.c b/arch/m68k/mm/hwtest.c
247 +index 2c7dde3c6430..2a5259fd23eb 100644
248 +--- a/arch/m68k/mm/hwtest.c
249 ++++ b/arch/m68k/mm/hwtest.c
250 +@@ -28,9 +28,11 @@
251 + int hwreg_present( volatile void *regp )
252 + {
253 + int ret = 0;
254 ++ unsigned long flags;
255 + long save_sp, save_vbr;
256 + long tmp_vectors[3];
257 +
258 ++ local_irq_save(flags);
259 + __asm__ __volatile__
260 + ( "movec %/vbr,%2\n\t"
261 + "movel #Lberr1,%4@(8)\n\t"
262 +@@ -46,6 +48,7 @@ int hwreg_present( volatile void *regp )
263 + : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr)
264 + : "a" (regp), "a" (tmp_vectors)
265 + );
266 ++ local_irq_restore(flags);
267 +
268 + return( ret );
269 + }
270 +@@ -58,9 +61,11 @@ EXPORT_SYMBOL(hwreg_present);
271 + int hwreg_write( volatile void *regp, unsigned short val )
272 + {
273 + int ret;
274 ++ unsigned long flags;
275 + long save_sp, save_vbr;
276 + long tmp_vectors[3];
277 +
278 ++ local_irq_save(flags);
279 + __asm__ __volatile__
280 + ( "movec %/vbr,%2\n\t"
281 + "movel #Lberr2,%4@(8)\n\t"
282 +@@ -78,6 +83,7 @@ int hwreg_write( volatile void *regp, unsigned short val )
283 + : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr)
284 + : "a" (regp), "a" (tmp_vectors), "g" (val)
285 + );
286 ++ local_irq_restore(flags);
287 +
288 + return( ret );
289 + }
290 +diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
291 +index 5c948177529e..bc79ab00536f 100644
292 +--- a/arch/s390/kvm/interrupt.c
293 ++++ b/arch/s390/kvm/interrupt.c
294 +@@ -71,6 +71,7 @@ static int __interrupt_is_deliverable(struct kvm_vcpu *vcpu,
295 + return 0;
296 + if (vcpu->arch.sie_block->gcr[0] & 0x2000ul)
297 + return 1;
298 ++ return 0;
299 + case KVM_S390_INT_EMERGENCY:
300 + if (psw_extint_disabled(vcpu))
301 + return 0;
302 +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
303 +index f7f20f7fac3c..373058c9b75d 100644
304 +--- a/arch/x86/include/asm/kvm_host.h
305 ++++ b/arch/x86/include/asm/kvm_host.h
306 +@@ -463,6 +463,7 @@ struct kvm_vcpu_arch {
307 + u64 mmio_gva;
308 + unsigned access;
309 + gfn_t mmio_gfn;
310 ++ u64 mmio_gen;
311 +
312 + struct kvm_pmu pmu;
313 +
314 +diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
315 +index f187806dfc18..8533e69d2b89 100644
316 +--- a/arch/x86/kernel/cpu/intel.c
317 ++++ b/arch/x86/kernel/cpu/intel.c
318 +@@ -154,6 +154,21 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
319 + setup_clear_cpu_cap(X86_FEATURE_ERMS);
320 + }
321 + }
322 ++
323 ++ /*
324 ++ * Intel Quark Core DevMan_001.pdf section 6.4.11
325 ++ * "The operating system also is required to invalidate (i.e., flush)
326 ++ * the TLB when any changes are made to any of the page table entries.
327 ++ * The operating system must reload CR3 to cause the TLB to be flushed"
328 ++ *
329 ++ * As a result cpu_has_pge() in arch/x86/include/asm/tlbflush.h should
330 ++ * be false so that __flush_tlb_all() causes CR3 instead of CR4.PGE
331 ++ * to be modified
332 ++ */
333 ++ if (c->x86 == 5 && c->x86_model == 9) {
334 ++ pr_info("Disabling PGE capability bit\n");
335 ++ setup_clear_cpu_cap(X86_FEATURE_PGE);
336 ++ }
337 + }
338 +
339 + #ifdef CONFIG_X86_32
340 +diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
341 +index 711c649f80b7..e14b1f8667bb 100644
342 +--- a/arch/x86/kvm/mmu.c
343 ++++ b/arch/x86/kvm/mmu.c
344 +@@ -3072,7 +3072,7 @@ static void mmu_sync_roots(struct kvm_vcpu *vcpu)
345 + if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
346 + return;
347 +
348 +- vcpu_clear_mmio_info(vcpu, ~0ul);
349 ++ vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY);
350 + kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);
351 + if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) {
352 + hpa_t root = vcpu->arch.mmu.root_hpa;
353 +diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
354 +index 3186542f2fa3..7626d3efa064 100644
355 +--- a/arch/x86/kvm/x86.h
356 ++++ b/arch/x86/kvm/x86.h
357 +@@ -78,15 +78,23 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
358 + vcpu->arch.mmio_gva = gva & PAGE_MASK;
359 + vcpu->arch.access = access;
360 + vcpu->arch.mmio_gfn = gfn;
361 ++ vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
362 ++}
363 ++
364 ++static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
365 ++{
366 ++ return vcpu->arch.mmio_gen == kvm_memslots(vcpu->kvm)->generation;
367 + }
368 +
369 + /*
370 +- * Clear the mmio cache info for the given gva,
371 +- * specially, if gva is ~0ul, we clear all mmio cache info.
372 ++ * Clear the mmio cache info for the given gva. If gva is MMIO_GVA_ANY, we
373 ++ * clear all mmio cache info.
374 + */
375 ++#define MMIO_GVA_ANY (~(gva_t)0)
376 ++
377 + static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
378 + {
379 +- if (gva != (~0ul) && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
380 ++ if (gva != MMIO_GVA_ANY && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
381 + return;
382 +
383 + vcpu->arch.mmio_gva = 0;
384 +@@ -94,7 +102,8 @@ static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
385 +
386 + static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva)
387 + {
388 +- if (vcpu->arch.mmio_gva && vcpu->arch.mmio_gva == (gva & PAGE_MASK))
389 ++ if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gva &&
390 ++ vcpu->arch.mmio_gva == (gva & PAGE_MASK))
391 + return true;
392 +
393 + return false;
394 +@@ -102,7 +111,8 @@ static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva)
395 +
396 + static inline bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
397 + {
398 +- if (vcpu->arch.mmio_gfn && vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT)
399 ++ if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gfn &&
400 ++ vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT)
401 + return true;
402 +
403 + return false;
404 +diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
405 +index 01e21037d8fe..00a565676583 100644
406 +--- a/drivers/base/firmware_class.c
407 ++++ b/drivers/base/firmware_class.c
408 +@@ -1021,6 +1021,9 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
409 + if (!firmware_p)
410 + return -EINVAL;
411 +
412 ++ if (!name || name[0] == '\0')
413 ++ return -EINVAL;
414 ++
415 + ret = _request_firmware_prepare(&fw, name, device);
416 + if (ret <= 0) /* error or already assigned */
417 + goto out;
418 +diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
419 +index 975719bc3450..b41994fd8460 100644
420 +--- a/drivers/base/regmap/regmap-debugfs.c
421 ++++ b/drivers/base/regmap/regmap-debugfs.c
422 +@@ -460,16 +460,20 @@ void regmap_debugfs_init(struct regmap *map, const char *name)
423 + {
424 + struct rb_node *next;
425 + struct regmap_range_node *range_node;
426 ++ const char *devname = "dummy";
427 +
428 + INIT_LIST_HEAD(&map->debugfs_off_cache);
429 + mutex_init(&map->cache_lock);
430 +
431 ++ if (map->dev)
432 ++ devname = dev_name(map->dev);
433 ++
434 + if (name) {
435 + map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s",
436 +- dev_name(map->dev), name);
437 ++ devname, name);
438 + name = map->debugfs_name;
439 + } else {
440 +- name = dev_name(map->dev);
441 ++ name = devname;
442 + }
443 +
444 + map->debugfs = debugfs_create_dir(name, regmap_debugfs_root);
445 +diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
446 +index 4b5cf2e34e9a..6a66f0b7d3d4 100644
447 +--- a/drivers/base/regmap/regmap.c
448 ++++ b/drivers/base/regmap/regmap.c
449 +@@ -1177,7 +1177,7 @@ int _regmap_write(struct regmap *map, unsigned int reg,
450 + }
451 +
452 + #ifdef LOG_DEVICE
453 +- if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
454 ++ if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
455 + dev_info(map->dev, "%x <= %x\n", reg, val);
456 + #endif
457 +
458 +@@ -1437,7 +1437,7 @@ static int _regmap_read(struct regmap *map, unsigned int reg,
459 + ret = map->reg_read(context, reg, val);
460 + if (ret == 0) {
461 + #ifdef LOG_DEVICE
462 +- if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
463 ++ if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
464 + dev_info(map->dev, "%x => %x\n", reg, *val);
465 + #endif
466 +
467 +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
468 +index 45aa8e760124..61a8ec4e5f4d 100644
469 +--- a/drivers/bluetooth/btusb.c
470 ++++ b/drivers/bluetooth/btusb.c
471 +@@ -302,6 +302,9 @@ static void btusb_intr_complete(struct urb *urb)
472 + BT_ERR("%s corrupted event packet", hdev->name);
473 + hdev->stat.err_rx++;
474 + }
475 ++ } else if (urb->status == -ENOENT) {
476 ++ /* Avoid suspend failed when usb_kill_urb */
477 ++ return;
478 + }
479 +
480 + if (!test_bit(BTUSB_INTR_RUNNING, &data->flags))
481 +@@ -390,6 +393,9 @@ static void btusb_bulk_complete(struct urb *urb)
482 + BT_ERR("%s corrupted ACL packet", hdev->name);
483 + hdev->stat.err_rx++;
484 + }
485 ++ } else if (urb->status == -ENOENT) {
486 ++ /* Avoid suspend failed when usb_kill_urb */
487 ++ return;
488 + }
489 +
490 + if (!test_bit(BTUSB_BULK_RUNNING, &data->flags))
491 +@@ -484,6 +490,9 @@ static void btusb_isoc_complete(struct urb *urb)
492 + hdev->stat.err_rx++;
493 + }
494 + }
495 ++ } else if (urb->status == -ENOENT) {
496 ++ /* Avoid suspend failed when usb_kill_urb */
497 ++ return;
498 + }
499 +
500 + if (!test_bit(BTUSB_ISOC_RUNNING, &data->flags))
501 +diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
502 +index db0be2fb05fe..db35c542eb20 100644
503 +--- a/drivers/bluetooth/hci_h5.c
504 ++++ b/drivers/bluetooth/hci_h5.c
505 +@@ -237,7 +237,7 @@ static void h5_pkt_cull(struct h5 *h5)
506 + break;
507 +
508 + to_remove--;
509 +- seq = (seq - 1) % 8;
510 ++ seq = (seq - 1) & 0x07;
511 + }
512 +
513 + if (seq != h5->rx_ack)
514 +diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
515 +index 0b122f8c7005..92f34de7aee9 100644
516 +--- a/drivers/hv/channel.c
517 ++++ b/drivers/hv/channel.c
518 +@@ -199,8 +199,10 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
519 + ret = vmbus_post_msg(open_msg,
520 + sizeof(struct vmbus_channel_open_channel));
521 +
522 +- if (ret != 0)
523 ++ if (ret != 0) {
524 ++ err = ret;
525 + goto error1;
526 ++ }
527 +
528 + t = wait_for_completion_timeout(&open_info->waitevent, 5*HZ);
529 + if (t == 0) {
530 +@@ -392,7 +394,6 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
531 + u32 next_gpadl_handle;
532 + unsigned long flags;
533 + int ret = 0;
534 +- int t;
535 +
536 + next_gpadl_handle = atomic_read(&vmbus_connection.next_gpadl_handle);
537 + atomic_inc(&vmbus_connection.next_gpadl_handle);
538 +@@ -439,9 +440,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
539 +
540 + }
541 + }
542 +- t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
543 +- BUG_ON(t == 0);
544 +-
545 ++ wait_for_completion(&msginfo->waitevent);
546 +
547 + /* At this point, we received the gpadl created msg */
548 + *gpadl_handle = gpadlmsg->gpadl;
549 +@@ -464,7 +463,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
550 + struct vmbus_channel_gpadl_teardown *msg;
551 + struct vmbus_channel_msginfo *info;
552 + unsigned long flags;
553 +- int ret, t;
554 ++ int ret;
555 +
556 + info = kmalloc(sizeof(*info) +
557 + sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL);
558 +@@ -486,11 +485,12 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
559 + ret = vmbus_post_msg(msg,
560 + sizeof(struct vmbus_channel_gpadl_teardown));
561 +
562 +- BUG_ON(ret != 0);
563 +- t = wait_for_completion_timeout(&info->waitevent, 5*HZ);
564 +- BUG_ON(t == 0);
565 ++ if (ret)
566 ++ goto post_msg_err;
567 ++
568 ++ wait_for_completion(&info->waitevent);
569 +
570 +- /* Received a torndown response */
571 ++post_msg_err:
572 + spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
573 + list_del(&info->msglistentry);
574 + spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
575 +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
576 +index b9f5d295cbec..a3b555808768 100644
577 +--- a/drivers/hv/connection.c
578 ++++ b/drivers/hv/connection.c
579 +@@ -393,10 +393,21 @@ int vmbus_post_msg(void *buffer, size_t buflen)
580 + * insufficient resources. Retry the operation a couple of
581 + * times before giving up.
582 + */
583 +- while (retries < 3) {
584 +- ret = hv_post_message(conn_id, 1, buffer, buflen);
585 +- if (ret != HV_STATUS_INSUFFICIENT_BUFFERS)
586 ++ while (retries < 10) {
587 ++ ret = hv_post_message(conn_id, 1, buffer, buflen);
588 ++
589 ++ switch (ret) {
590 ++ case HV_STATUS_INSUFFICIENT_BUFFERS:
591 ++ ret = -ENOMEM;
592 ++ case -ENOMEM:
593 ++ break;
594 ++ case HV_STATUS_SUCCESS:
595 + return ret;
596 ++ default:
597 ++ pr_err("hv_post_msg() failed; error code:%d\n", ret);
598 ++ return -EINVAL;
599 ++ }
600 ++
601 + retries++;
602 + msleep(100);
603 + }
604 +diff --git a/drivers/message/fusion/mptspi.c b/drivers/message/fusion/mptspi.c
605 +index 5653e505f91f..424f51d1e2ce 100644
606 +--- a/drivers/message/fusion/mptspi.c
607 ++++ b/drivers/message/fusion/mptspi.c
608 +@@ -1422,6 +1422,11 @@ mptspi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
609 + goto out_mptspi_probe;
610 + }
611 +
612 ++ /* VMWare emulation doesn't properly implement WRITE_SAME
613 ++ */
614 ++ if (pdev->subsystem_vendor == 0x15AD)
615 ++ sh->no_write_same = 1;
616 ++
617 + spin_lock_irqsave(&ioc->FreeQlock, flags);
618 +
619 + /* Attach the SCSI Host to the IOC structure
620 +diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c
621 +index b53e5c3f403b..bb020ad3f76c 100644
622 +--- a/drivers/net/wireless/iwlwifi/pcie/drv.c
623 ++++ b/drivers/net/wireless/iwlwifi/pcie/drv.c
624 +@@ -269,6 +269,8 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
625 + {IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)},
626 + {IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)},
627 + {IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)},
628 ++ {IWL_PCI_DEVICE(0x08B1, 0x4C60, iwl7260_2ac_cfg)},
629 ++ {IWL_PCI_DEVICE(0x08B1, 0x4C70, iwl7260_2ac_cfg)},
630 + {IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)},
631 + {IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)},
632 + {IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)},
633 +@@ -306,6 +308,8 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
634 + {IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)},
635 + {IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)},
636 + {IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)},
637 ++ {IWL_PCI_DEVICE(0x08B1, 0xCC70, iwl7260_2ac_cfg)},
638 ++ {IWL_PCI_DEVICE(0x08B1, 0xCC60, iwl7260_2ac_cfg)},
639 + {IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)},
640 + {IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)},
641 + {IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)},
642 +diff --git a/drivers/net/wireless/rt2x00/rt2800.h b/drivers/net/wireless/rt2x00/rt2800.h
643 +index a7630d5ec892..a629313dd98a 100644
644 +--- a/drivers/net/wireless/rt2x00/rt2800.h
645 ++++ b/drivers/net/wireless/rt2x00/rt2800.h
646 +@@ -1920,7 +1920,7 @@ struct mac_iveiv_entry {
647 + * 2 - drop tx power by 12dBm,
648 + * 3 - increase tx power by 6dBm
649 + */
650 +-#define BBP1_TX_POWER_CTRL FIELD8(0x07)
651 ++#define BBP1_TX_POWER_CTRL FIELD8(0x03)
652 + #define BBP1_TX_ANTENNA FIELD8(0x18)
653 +
654 + /*
655 +diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
656 +index 5b4a9d9cd200..689f3c87ee5c 100644
657 +--- a/drivers/pci/pci-sysfs.c
658 ++++ b/drivers/pci/pci-sysfs.c
659 +@@ -175,7 +175,7 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
660 + {
661 + struct pci_dev *pci_dev = to_pci_dev(dev);
662 +
663 +- return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02x\n",
664 ++ return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n",
665 + pci_dev->vendor, pci_dev->device,
666 + pci_dev->subsystem_vendor, pci_dev->subsystem_device,
667 + (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8),
668 +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
669 +index 4510279e28dc..910339c0791f 100644
670 +--- a/drivers/pci/quirks.c
671 ++++ b/drivers/pci/quirks.c
672 +@@ -28,6 +28,7 @@
673 + #include <linux/ioport.h>
674 + #include <linux/sched.h>
675 + #include <linux/ktime.h>
676 ++#include <linux/mm.h>
677 + #include <asm/dma.h> /* isa_dma_bridge_buggy */
678 + #include "pci.h"
679 +
680 +@@ -291,6 +292,25 @@ static void quirk_citrine(struct pci_dev *dev)
681 + }
682 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine);
683 +
684 ++/* On IBM Crocodile ipr SAS adapters, expand BAR to system page size */
685 ++static void quirk_extend_bar_to_page(struct pci_dev *dev)
686 ++{
687 ++ int i;
688 ++
689 ++ for (i = 0; i < PCI_STD_RESOURCE_END; i++) {
690 ++ struct resource *r = &dev->resource[i];
691 ++
692 ++ if (r->flags & IORESOURCE_MEM && resource_size(r) < PAGE_SIZE) {
693 ++ r->end = PAGE_SIZE - 1;
694 ++ r->start = 0;
695 ++ r->flags |= IORESOURCE_UNSET;
696 ++ dev_info(&dev->dev, "expanded BAR %d to page size: %pR\n",
697 ++ i, r);
698 ++ }
699 ++ }
700 ++}
701 ++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, 0x034a, quirk_extend_bar_to_page);
702 ++
703 + /*
704 + * S3 868 and 968 chips report region size equal to 32M, but they decode 64M.
705 + * If it's needed, re-allocate the region.
706 +diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c
707 +index 245a9595a93a..ef0a78b0d730 100644
708 +--- a/drivers/scsi/be2iscsi/be_mgmt.c
709 ++++ b/drivers/scsi/be2iscsi/be_mgmt.c
710 +@@ -812,17 +812,20 @@ mgmt_static_ip_modify(struct beiscsi_hba *phba,
711 +
712 + if (ip_action == IP_ACTION_ADD) {
713 + memcpy(req->ip_params.ip_record.ip_addr.addr, ip_param->value,
714 +- ip_param->len);
715 ++ sizeof(req->ip_params.ip_record.ip_addr.addr));
716 +
717 + if (subnet_param)
718 + memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
719 +- subnet_param->value, subnet_param->len);
720 ++ subnet_param->value,
721 ++ sizeof(req->ip_params.ip_record.ip_addr.subnet_mask));
722 + } else {
723 + memcpy(req->ip_params.ip_record.ip_addr.addr,
724 +- if_info->ip_addr.addr, ip_param->len);
725 ++ if_info->ip_addr.addr,
726 ++ sizeof(req->ip_params.ip_record.ip_addr.addr));
727 +
728 + memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
729 +- if_info->ip_addr.subnet_mask, ip_param->len);
730 ++ if_info->ip_addr.subnet_mask,
731 ++ sizeof(req->ip_params.ip_record.ip_addr.subnet_mask));
732 + }
733 +
734 + rc = mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
735 +@@ -850,7 +853,7 @@ static int mgmt_modify_gateway(struct beiscsi_hba *phba, uint8_t *gt_addr,
736 + req->action = gtway_action;
737 + req->ip_addr.ip_type = BE2_IPV4;
738 +
739 +- memcpy(req->ip_addr.addr, gt_addr, param_len);
740 ++ memcpy(req->ip_addr.addr, gt_addr, sizeof(req->ip_addr.addr));
741 +
742 + return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
743 + }
744 +diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
745 +index f033b191a022..e6884940d107 100644
746 +--- a/drivers/scsi/qla2xxx/qla_target.c
747 ++++ b/drivers/scsi/qla2xxx/qla_target.c
748 +@@ -1514,12 +1514,10 @@ static inline void qlt_unmap_sg(struct scsi_qla_host *vha,
749 + static int qlt_check_reserve_free_req(struct scsi_qla_host *vha,
750 + uint32_t req_cnt)
751 + {
752 +- struct qla_hw_data *ha = vha->hw;
753 +- device_reg_t __iomem *reg = ha->iobase;
754 + uint32_t cnt;
755 +
756 + if (vha->req->cnt < (req_cnt + 2)) {
757 +- cnt = (uint16_t)RD_REG_DWORD(&reg->isp24.req_q_out);
758 ++ cnt = (uint16_t)RD_REG_DWORD(vha->req->req_q_out);
759 +
760 + ql_dbg(ql_dbg_tgt, vha, 0xe00a,
761 + "Request ring circled: cnt=%d, vha->->ring_index=%d, "
762 +diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c
763 +index b9f0192758d6..0791c92e8c50 100644
764 +--- a/drivers/spi/spi-dw-mid.c
765 ++++ b/drivers/spi/spi-dw-mid.c
766 +@@ -89,7 +89,13 @@ err_exit:
767 +
768 + static void mid_spi_dma_exit(struct dw_spi *dws)
769 + {
770 ++ if (!dws->dma_inited)
771 ++ return;
772 ++
773 ++ dmaengine_terminate_all(dws->txchan);
774 + dma_release_channel(dws->txchan);
775 ++
776 ++ dmaengine_terminate_all(dws->rxchan);
777 + dma_release_channel(dws->rxchan);
778 + }
779 +
780 +@@ -136,7 +142,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
781 + txconf.dst_addr = dws->dma_addr;
782 + txconf.dst_maxburst = LNW_DMA_MSIZE_16;
783 + txconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
784 +- txconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
785 ++ txconf.dst_addr_width = dws->dma_width;
786 + txconf.device_fc = false;
787 +
788 + txchan->device->device_control(txchan, DMA_SLAVE_CONFIG,
789 +@@ -159,7 +165,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
790 + rxconf.src_addr = dws->dma_addr;
791 + rxconf.src_maxburst = LNW_DMA_MSIZE_16;
792 + rxconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
793 +- rxconf.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
794 ++ rxconf.src_addr_width = dws->dma_width;
795 + rxconf.device_fc = false;
796 +
797 + rxchan->device->device_control(rxchan, DMA_SLAVE_CONFIG,
798 +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
799 +index 8fcd2424e7f9..187911fbabce 100644
800 +--- a/fs/btrfs/inode.c
801 ++++ b/fs/btrfs/inode.c
802 +@@ -3545,7 +3545,8 @@ noinline int btrfs_update_inode(struct btrfs_trans_handle *trans,
803 + * without delay
804 + */
805 + if (!btrfs_is_free_space_inode(inode)
806 +- && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID) {
807 ++ && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID
808 ++ && !root->fs_info->log_root_recovering) {
809 + btrfs_update_root_times(trans, root);
810 +
811 + ret = btrfs_delayed_update_inode(trans, root, inode);
812 +diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
813 +index b3896d5f233a..0e7f7765b3bb 100644
814 +--- a/fs/btrfs/relocation.c
815 ++++ b/fs/btrfs/relocation.c
816 +@@ -967,8 +967,11 @@ again:
817 + need_check = false;
818 + list_add_tail(&edge->list[UPPER],
819 + &list);
820 +- } else
821 ++ } else {
822 ++ if (upper->checked)
823 ++ need_check = true;
824 + INIT_LIST_HEAD(&edge->list[UPPER]);
825 ++ }
826 + } else {
827 + upper = rb_entry(rb_node, struct backref_node,
828 + rb_node);
829 +diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
830 +index 0544587d74f4..1f214689fa5e 100644
831 +--- a/fs/btrfs/transaction.c
832 ++++ b/fs/btrfs/transaction.c
833 +@@ -524,7 +524,6 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid)
834 + if (transid <= root->fs_info->last_trans_committed)
835 + goto out;
836 +
837 +- ret = -EINVAL;
838 + /* find specified transaction */
839 + spin_lock(&root->fs_info->trans_lock);
840 + list_for_each_entry(t, &root->fs_info->trans_list, list) {
841 +@@ -540,9 +539,16 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid)
842 + }
843 + }
844 + spin_unlock(&root->fs_info->trans_lock);
845 +- /* The specified transaction doesn't exist */
846 +- if (!cur_trans)
847 ++
848 ++ /*
849 ++ * The specified transaction doesn't exist, or we
850 ++ * raced with btrfs_commit_transaction
851 ++ */
852 ++ if (!cur_trans) {
853 ++ if (transid > root->fs_info->last_trans_committed)
854 ++ ret = -EINVAL;
855 + goto out;
856 ++ }
857 + } else {
858 + /* find newest transaction that is committing | committed */
859 + spin_lock(&root->fs_info->trans_lock);
860 +diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
861 +index 5eab400e2590..41baf8b5e0eb 100644
862 +--- a/fs/ecryptfs/inode.c
863 ++++ b/fs/ecryptfs/inode.c
864 +@@ -1051,7 +1051,7 @@ ecryptfs_setxattr(struct dentry *dentry, const char *name, const void *value,
865 + }
866 +
867 + rc = vfs_setxattr(lower_dentry, name, value, size, flags);
868 +- if (!rc)
869 ++ if (!rc && dentry->d_inode)
870 + fsstack_copy_attr_all(dentry->d_inode, lower_dentry->d_inode);
871 + out:
872 + return rc;
873 +diff --git a/fs/namespace.c b/fs/namespace.c
874 +index 00409add4d96..7f6a9348c589 100644
875 +--- a/fs/namespace.c
876 ++++ b/fs/namespace.c
877 +@@ -1274,6 +1274,8 @@ static int do_umount(struct mount *mnt, int flags)
878 + * Special case for "unmounting" root ...
879 + * we just try to remount it readonly.
880 + */
881 ++ if (!capable(CAP_SYS_ADMIN))
882 ++ return -EPERM;
883 + down_write(&sb->s_umount);
884 + if (!(sb->s_flags & MS_RDONLY))
885 + retval = do_remount_sb(sb, MS_RDONLY, NULL, 0);
886 +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
887 +index 3fc87b6f9def..69fc437be661 100644
888 +--- a/fs/nfs/nfs4proc.c
889 ++++ b/fs/nfs/nfs4proc.c
890 +@@ -6067,7 +6067,7 @@ static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cr
891 + int ret = 0;
892 +
893 + if ((renew_flags & NFS4_RENEW_TIMEOUT) == 0)
894 +- return 0;
895 ++ return -EAGAIN;
896 + task = _nfs41_proc_sequence(clp, cred, false);
897 + if (IS_ERR(task))
898 + ret = PTR_ERR(task);
899 +diff --git a/fs/nfs/nfs4renewd.c b/fs/nfs/nfs4renewd.c
900 +index 1720d32ffa54..e1ba58c3d1ad 100644
901 +--- a/fs/nfs/nfs4renewd.c
902 ++++ b/fs/nfs/nfs4renewd.c
903 +@@ -88,10 +88,18 @@ nfs4_renew_state(struct work_struct *work)
904 + }
905 + nfs_expire_all_delegations(clp);
906 + } else {
907 ++ int ret;
908 ++
909 + /* Queue an asynchronous RENEW. */
910 +- ops->sched_state_renewal(clp, cred, renew_flags);
911 ++ ret = ops->sched_state_renewal(clp, cred, renew_flags);
912 + put_rpccred(cred);
913 +- goto out_exp;
914 ++ switch (ret) {
915 ++ default:
916 ++ goto out_exp;
917 ++ case -EAGAIN:
918 ++ case -ENOMEM:
919 ++ break;
920 ++ }
921 + }
922 + } else {
923 + dprintk("%s: failed to call renewd. Reason: lease not expired \n",
924 +diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
925 +index 2c37442ed936..d482b86d0e0b 100644
926 +--- a/fs/nfs/nfs4state.c
927 ++++ b/fs/nfs/nfs4state.c
928 +@@ -1699,7 +1699,8 @@ restart:
929 + if (status < 0) {
930 + set_bit(ops->owner_flag_bit, &sp->so_flags);
931 + nfs4_put_state_owner(sp);
932 +- return nfs4_recovery_handle_error(clp, status);
933 ++ status = nfs4_recovery_handle_error(clp, status);
934 ++ return (status != 0) ? status : -EAGAIN;
935 + }
936 +
937 + nfs4_put_state_owner(sp);
938 +@@ -1708,7 +1709,7 @@ restart:
939 + spin_unlock(&clp->cl_lock);
940 + }
941 + rcu_read_unlock();
942 +- return status;
943 ++ return 0;
944 + }
945 +
946 + static int nfs4_check_lease(struct nfs_client *clp)
947 +@@ -1755,7 +1756,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
948 + break;
949 + case -NFS4ERR_STALE_CLIENTID:
950 + clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
951 +- nfs4_state_clear_reclaim_reboot(clp);
952 + nfs4_state_start_reclaim_reboot(clp);
953 + break;
954 + case -NFS4ERR_CLID_INUSE:
955 +@@ -2174,14 +2174,11 @@ static void nfs4_state_manager(struct nfs_client *clp)
956 + section = "reclaim reboot";
957 + status = nfs4_do_reclaim(clp,
958 + clp->cl_mvops->reboot_recovery_ops);
959 +- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
960 +- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state))
961 +- continue;
962 +- nfs4_state_end_reclaim_reboot(clp);
963 +- if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state))
964 ++ if (status == -EAGAIN)
965 + continue;
966 + if (status < 0)
967 + goto out_error;
968 ++ nfs4_state_end_reclaim_reboot(clp);
969 + }
970 +
971 + /* Now recover expired state... */
972 +@@ -2189,9 +2186,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
973 + section = "reclaim nograce";
974 + status = nfs4_do_reclaim(clp,
975 + clp->cl_mvops->nograce_recovery_ops);
976 +- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
977 +- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) ||
978 +- test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state))
979 ++ if (status == -EAGAIN)
980 + continue;
981 + if (status < 0)
982 + goto out_error;
983 +diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
984 +index f1680cdbd88b..9be6b4163406 100644
985 +--- a/fs/notify/fanotify/fanotify_user.c
986 ++++ b/fs/notify/fanotify/fanotify_user.c
987 +@@ -69,7 +69,7 @@ static int create_fd(struct fsnotify_group *group,
988 +
989 + pr_debug("%s: group=%p event=%p\n", __func__, group, event);
990 +
991 +- client_fd = get_unused_fd();
992 ++ client_fd = get_unused_fd_flags(group->fanotify_data.f_flags);
993 + if (client_fd < 0)
994 + return client_fd;
995 +
996 +diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h
997 +new file mode 100644
998 +index 000000000000..cdd1cc202d51
999 +--- /dev/null
1000 ++++ b/include/linux/compiler-gcc5.h
1001 +@@ -0,0 +1,66 @@
1002 ++#ifndef __LINUX_COMPILER_H
1003 ++#error "Please don't include <linux/compiler-gcc5.h> directly, include <linux/compiler.h> instead."
1004 ++#endif
1005 ++
1006 ++#define __used __attribute__((__used__))
1007 ++#define __must_check __attribute__((warn_unused_result))
1008 ++#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)
1009 ++
1010 ++/* Mark functions as cold. gcc will assume any path leading to a call
1011 ++ to them will be unlikely. This means a lot of manual unlikely()s
1012 ++ are unnecessary now for any paths leading to the usual suspects
1013 ++ like BUG(), printk(), panic() etc. [but let's keep them for now for
1014 ++ older compilers]
1015 ++
1016 ++ Early snapshots of gcc 4.3 don't support this and we can't detect this
1017 ++ in the preprocessor, but we can live with this because they're unreleased.
1018 ++ Maketime probing would be overkill here.
1019 ++
1020 ++ gcc also has a __attribute__((__hot__)) to move hot functions into
1021 ++ a special section, but I don't see any sense in this right now in
1022 ++ the kernel context */
1023 ++#define __cold __attribute__((__cold__))
1024 ++
1025 ++#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
1026 ++
1027 ++#ifndef __CHECKER__
1028 ++# define __compiletime_warning(message) __attribute__((warning(message)))
1029 ++# define __compiletime_error(message) __attribute__((error(message)))
1030 ++#endif /* __CHECKER__ */
1031 ++
1032 ++/*
1033 ++ * Mark a position in code as unreachable. This can be used to
1034 ++ * suppress control flow warnings after asm blocks that transfer
1035 ++ * control elsewhere.
1036 ++ *
1037 ++ * Early snapshots of gcc 4.5 don't support this and we can't detect
1038 ++ * this in the preprocessor, but we can live with this because they're
1039 ++ * unreleased. Really, we need to have autoconf for the kernel.
1040 ++ */
1041 ++#define unreachable() __builtin_unreachable()
1042 ++
1043 ++/* Mark a function definition as prohibited from being cloned. */
1044 ++#define __noclone __attribute__((__noclone__))
1045 ++
1046 ++/*
1047 ++ * Tell the optimizer that something else uses this function or variable.
1048 ++ */
1049 ++#define __visible __attribute__((externally_visible))
1050 ++
1051 ++/*
1052 ++ * GCC 'asm goto' miscompiles certain code sequences:
1053 ++ *
1054 ++ * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
1055 ++ *
1056 ++ * Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
1057 ++ * Fixed in GCC 4.8.2 and later versions.
1058 ++ *
1059 ++ * (asm goto is automatically volatile - the naming reflects this.)
1060 ++ */
1061 ++#define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
1062 ++
1063 ++#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
1064 ++#define __HAVE_BUILTIN_BSWAP32__
1065 ++#define __HAVE_BUILTIN_BSWAP64__
1066 ++#define __HAVE_BUILTIN_BSWAP16__
1067 ++#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
1068 +diff --git a/include/linux/sched.h b/include/linux/sched.h
1069 +index 8293545ac9b7..f87e9a8d364f 100644
1070 +--- a/include/linux/sched.h
1071 ++++ b/include/linux/sched.h
1072 +@@ -1670,11 +1670,13 @@ extern void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut,
1073 + #define tsk_used_math(p) ((p)->flags & PF_USED_MATH)
1074 + #define used_math() tsk_used_math(current)
1075 +
1076 +-/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags */
1077 ++/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags
1078 ++ * __GFP_FS is also cleared as it implies __GFP_IO.
1079 ++ */
1080 + static inline gfp_t memalloc_noio_flags(gfp_t flags)
1081 + {
1082 + if (unlikely(current->flags & PF_MEMALLOC_NOIO))
1083 +- flags &= ~__GFP_IO;
1084 ++ flags &= ~(__GFP_IO | __GFP_FS);
1085 + return flags;
1086 + }
1087 +
1088 +diff --git a/lib/lzo/lzo1x_decompress_safe.c b/lib/lzo/lzo1x_decompress_safe.c
1089 +index 8563081e8da3..a1c387f6afba 100644
1090 +--- a/lib/lzo/lzo1x_decompress_safe.c
1091 ++++ b/lib/lzo/lzo1x_decompress_safe.c
1092 +@@ -19,31 +19,21 @@
1093 + #include <linux/lzo.h>
1094 + #include "lzodefs.h"
1095 +
1096 +-#define HAVE_IP(t, x) \
1097 +- (((size_t)(ip_end - ip) >= (size_t)(t + x)) && \
1098 +- (((t + x) >= t) && ((t + x) >= x)))
1099 ++#define HAVE_IP(x) ((size_t)(ip_end - ip) >= (size_t)(x))
1100 ++#define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x))
1101 ++#define NEED_IP(x) if (!HAVE_IP(x)) goto input_overrun
1102 ++#define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun
1103 ++#define TEST_LB(m_pos) if ((m_pos) < out) goto lookbehind_overrun
1104 +
1105 +-#define HAVE_OP(t, x) \
1106 +- (((size_t)(op_end - op) >= (size_t)(t + x)) && \
1107 +- (((t + x) >= t) && ((t + x) >= x)))
1108 +-
1109 +-#define NEED_IP(t, x) \
1110 +- do { \
1111 +- if (!HAVE_IP(t, x)) \
1112 +- goto input_overrun; \
1113 +- } while (0)
1114 +-
1115 +-#define NEED_OP(t, x) \
1116 +- do { \
1117 +- if (!HAVE_OP(t, x)) \
1118 +- goto output_overrun; \
1119 +- } while (0)
1120 +-
1121 +-#define TEST_LB(m_pos) \
1122 +- do { \
1123 +- if ((m_pos) < out) \
1124 +- goto lookbehind_overrun; \
1125 +- } while (0)
1126 ++/* This MAX_255_COUNT is the maximum number of times we can add 255 to a base
1127 ++ * count without overflowing an integer. The multiply will overflow when
1128 ++ * multiplying 255 by more than MAXINT/255. The sum will overflow earlier
1129 ++ * depending on the base count. Since the base count is taken from a u8
1130 ++ * and a few bits, it is safe to assume that it will always be lower than
1131 ++ * or equal to 2*255, thus we can always prevent any overflow by accepting
1132 ++ * two less 255 steps. See Documentation/lzo.txt for more information.
1133 ++ */
1134 ++#define MAX_255_COUNT ((((size_t)~0) / 255) - 2)
1135 +
1136 + int lzo1x_decompress_safe(const unsigned char *in, size_t in_len,
1137 + unsigned char *out, size_t *out_len)
1138 +@@ -75,17 +65,24 @@ int lzo1x_decompress_safe(const unsigned char *in, size_t in_len,
1139 + if (t < 16) {
1140 + if (likely(state == 0)) {
1141 + if (unlikely(t == 0)) {
1142 ++ size_t offset;
1143 ++ const unsigned char *ip_last = ip;
1144 ++
1145 + while (unlikely(*ip == 0)) {
1146 +- t += 255;
1147 + ip++;
1148 +- NEED_IP(1, 0);
1149 ++ NEED_IP(1);
1150 + }
1151 +- t += 15 + *ip++;
1152 ++ offset = ip - ip_last;
1153 ++ if (unlikely(offset > MAX_255_COUNT))
1154 ++ return LZO_E_ERROR;
1155 ++
1156 ++ offset = (offset << 8) - offset;
1157 ++ t += offset + 15 + *ip++;
1158 + }
1159 + t += 3;
1160 + copy_literal_run:
1161 + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
1162 +- if (likely(HAVE_IP(t, 15) && HAVE_OP(t, 15))) {
1163 ++ if (likely(HAVE_IP(t + 15) && HAVE_OP(t + 15))) {
1164 + const unsigned char *ie = ip + t;
1165 + unsigned char *oe = op + t;
1166 + do {
1167 +@@ -101,8 +98,8 @@ copy_literal_run:
1168 + } else
1169 + #endif
1170 + {
1171 +- NEED_OP(t, 0);
1172 +- NEED_IP(t, 3);
1173 ++ NEED_OP(t);
1174 ++ NEED_IP(t + 3);
1175 + do {
1176 + *op++ = *ip++;
1177 + } while (--t > 0);
1178 +@@ -115,7 +112,7 @@ copy_literal_run:
1179 + m_pos -= t >> 2;
1180 + m_pos -= *ip++ << 2;
1181 + TEST_LB(m_pos);
1182 +- NEED_OP(2, 0);
1183 ++ NEED_OP(2);
1184 + op[0] = m_pos[0];
1185 + op[1] = m_pos[1];
1186 + op += 2;
1187 +@@ -136,13 +133,20 @@ copy_literal_run:
1188 + } else if (t >= 32) {
1189 + t = (t & 31) + (3 - 1);
1190 + if (unlikely(t == 2)) {
1191 ++ size_t offset;
1192 ++ const unsigned char *ip_last = ip;
1193 ++
1194 + while (unlikely(*ip == 0)) {
1195 +- t += 255;
1196 + ip++;
1197 +- NEED_IP(1, 0);
1198 ++ NEED_IP(1);
1199 + }
1200 +- t += 31 + *ip++;
1201 +- NEED_IP(2, 0);
1202 ++ offset = ip - ip_last;
1203 ++ if (unlikely(offset > MAX_255_COUNT))
1204 ++ return LZO_E_ERROR;
1205 ++
1206 ++ offset = (offset << 8) - offset;
1207 ++ t += offset + 31 + *ip++;
1208 ++ NEED_IP(2);
1209 + }
1210 + m_pos = op - 1;
1211 + next = get_unaligned_le16(ip);
1212 +@@ -154,13 +158,20 @@ copy_literal_run:
1213 + m_pos -= (t & 8) << 11;
1214 + t = (t & 7) + (3 - 1);
1215 + if (unlikely(t == 2)) {
1216 ++ size_t offset;
1217 ++ const unsigned char *ip_last = ip;
1218 ++
1219 + while (unlikely(*ip == 0)) {
1220 +- t += 255;
1221 + ip++;
1222 +- NEED_IP(1, 0);
1223 ++ NEED_IP(1);
1224 + }
1225 +- t += 7 + *ip++;
1226 +- NEED_IP(2, 0);
1227 ++ offset = ip - ip_last;
1228 ++ if (unlikely(offset > MAX_255_COUNT))
1229 ++ return LZO_E_ERROR;
1230 ++
1231 ++ offset = (offset << 8) - offset;
1232 ++ t += offset + 7 + *ip++;
1233 ++ NEED_IP(2);
1234 + }
1235 + next = get_unaligned_le16(ip);
1236 + ip += 2;
1237 +@@ -174,7 +185,7 @@ copy_literal_run:
1238 + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
1239 + if (op - m_pos >= 8) {
1240 + unsigned char *oe = op + t;
1241 +- if (likely(HAVE_OP(t, 15))) {
1242 ++ if (likely(HAVE_OP(t + 15))) {
1243 + do {
1244 + COPY8(op, m_pos);
1245 + op += 8;
1246 +@@ -184,7 +195,7 @@ copy_literal_run:
1247 + m_pos += 8;
1248 + } while (op < oe);
1249 + op = oe;
1250 +- if (HAVE_IP(6, 0)) {
1251 ++ if (HAVE_IP(6)) {
1252 + state = next;
1253 + COPY4(op, ip);
1254 + op += next;
1255 +@@ -192,7 +203,7 @@ copy_literal_run:
1256 + continue;
1257 + }
1258 + } else {
1259 +- NEED_OP(t, 0);
1260 ++ NEED_OP(t);
1261 + do {
1262 + *op++ = *m_pos++;
1263 + } while (op < oe);
1264 +@@ -201,7 +212,7 @@ copy_literal_run:
1265 + #endif
1266 + {
1267 + unsigned char *oe = op + t;
1268 +- NEED_OP(t, 0);
1269 ++ NEED_OP(t);
1270 + op[0] = m_pos[0];
1271 + op[1] = m_pos[1];
1272 + op += 2;
1273 +@@ -214,15 +225,15 @@ match_next:
1274 + state = next;
1275 + t = next;
1276 + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
1277 +- if (likely(HAVE_IP(6, 0) && HAVE_OP(4, 0))) {
1278 ++ if (likely(HAVE_IP(6) && HAVE_OP(4))) {
1279 + COPY4(op, ip);
1280 + op += t;
1281 + ip += t;
1282 + } else
1283 + #endif
1284 + {
1285 +- NEED_IP(t, 3);
1286 +- NEED_OP(t, 0);
1287 ++ NEED_IP(t + 3);
1288 ++ NEED_OP(t);
1289 + while (t > 0) {
1290 + *op++ = *ip++;
1291 + t--;
1292 +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
1293 +index f92818155958..175dca44c97e 100644
1294 +--- a/sound/core/pcm_native.c
1295 ++++ b/sound/core/pcm_native.c
1296 +@@ -3197,7 +3197,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = {
1297 +
1298 + #ifndef ARCH_HAS_DMA_MMAP_COHERENT
1299 + /* This should be defined / handled globally! */
1300 +-#ifdef CONFIG_ARM
1301 ++#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
1302 + #define ARCH_HAS_DMA_MMAP_COHERENT
1303 + #endif
1304 + #endif
1305 +diff --git a/sound/pci/emu10k1/emu10k1_callback.c b/sound/pci/emu10k1/emu10k1_callback.c
1306 +index cae36597aa71..0a34b5f1c475 100644
1307 +--- a/sound/pci/emu10k1/emu10k1_callback.c
1308 ++++ b/sound/pci/emu10k1/emu10k1_callback.c
1309 +@@ -85,6 +85,8 @@ snd_emu10k1_ops_setup(struct snd_emux *emux)
1310 + * get more voice for pcm
1311 + *
1312 + * terminate most inactive voice and give it as a pcm voice.
1313 ++ *
1314 ++ * voice_lock is already held.
1315 + */
1316 + int
1317 + snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
1318 +@@ -92,12 +94,10 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
1319 + struct snd_emux *emu;
1320 + struct snd_emux_voice *vp;
1321 + struct best_voice best[V_END];
1322 +- unsigned long flags;
1323 + int i;
1324 +
1325 + emu = hw->synth;
1326 +
1327 +- spin_lock_irqsave(&emu->voice_lock, flags);
1328 + lookup_voices(emu, hw, best, 1); /* no OFF voices */
1329 + for (i = 0; i < V_END; i++) {
1330 + if (best[i].voice >= 0) {
1331 +@@ -113,11 +113,9 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
1332 + vp->emu->num_voices--;
1333 + vp->ch = -1;
1334 + vp->state = SNDRV_EMUX_ST_OFF;
1335 +- spin_unlock_irqrestore(&emu->voice_lock, flags);
1336 + return ch;
1337 + }
1338 + }
1339 +- spin_unlock_irqrestore(&emu->voice_lock, flags);
1340 +
1341 + /* not found */
1342 + return -ENOMEM;
1343 +diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
1344 +index 8b75bcf136f6..d5bed1d25713 100644
1345 +--- a/sound/usb/quirks-table.h
1346 ++++ b/sound/usb/quirks-table.h
1347 +@@ -386,6 +386,36 @@ YAMAHA_DEVICE(0x105d, NULL),
1348 + }
1349 + },
1350 + {
1351 ++ USB_DEVICE(0x0499, 0x1509),
1352 ++ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
1353 ++ /* .vendor_name = "Yamaha", */
1354 ++ /* .product_name = "Steinberg UR22", */
1355 ++ .ifnum = QUIRK_ANY_INTERFACE,
1356 ++ .type = QUIRK_COMPOSITE,
1357 ++ .data = (const struct snd_usb_audio_quirk[]) {
1358 ++ {
1359 ++ .ifnum = 1,
1360 ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE
1361 ++ },
1362 ++ {
1363 ++ .ifnum = 2,
1364 ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE
1365 ++ },
1366 ++ {
1367 ++ .ifnum = 3,
1368 ++ .type = QUIRK_MIDI_YAMAHA
1369 ++ },
1370 ++ {
1371 ++ .ifnum = 4,
1372 ++ .type = QUIRK_IGNORE_INTERFACE
1373 ++ },
1374 ++ {
1375 ++ .ifnum = -1
1376 ++ }
1377 ++ }
1378 ++ }
1379 ++},
1380 ++{
1381 + USB_DEVICE(0x0499, 0x150a),
1382 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
1383 + /* .vendor_name = "Yamaha", */
1384 +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
1385 +index 8cf1cd2fadaa..a17f190be58e 100644
1386 +--- a/virt/kvm/kvm_main.c
1387 ++++ b/virt/kvm/kvm_main.c
1388 +@@ -52,6 +52,7 @@
1389 +
1390 + #include <asm/processor.h>
1391 + #include <asm/io.h>
1392 ++#include <asm/ioctl.h>
1393 + #include <asm/uaccess.h>
1394 + #include <asm/pgtable.h>
1395 +
1396 +@@ -1981,6 +1982,9 @@ static long kvm_vcpu_ioctl(struct file *filp,
1397 + if (vcpu->kvm->mm != current->mm)
1398 + return -EIO;
1399 +
1400 ++ if (unlikely(_IOC_TYPE(ioctl) != KVMIO))
1401 ++ return -EINVAL;
1402 ++
1403 + #if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_MIPS)
1404 + /*
1405 + * Special cases: vcpu ioctls that are asynchronous to vcpu execution,