From: "Anthony G. Basile" <blueness@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/hardened-patchset:master commit in: 3.2.22/, 3.4.4/
Date: Tue, 10 Jul 2012 01:36:23
Message-Id: 1341884139.3e13627b7d931efd34c2e12c4e96cf85effda337.blueness@gentoo
1 commit: 3e13627b7d931efd34c2e12c4e96cf85effda337
2 Author: Anthony G. Basile <blueness <AT> gentoo <DOT> org>
3 AuthorDate: Tue Jul 10 01:35:39 2012 +0000
4 Commit: Anthony G. Basile <blueness <AT> gentoo <DOT> org>
5 CommitDate: Tue Jul 10 01:35:39 2012 +0000
6 URL: http://git.overlays.gentoo.org/gitweb/?p=proj/hardened-patchset.git;a=commit;h=3e13627b
7
8 Add Linux patch for 3.2.21 -> 3.2.22
9
10 ---
11 3.2.22/0000_README | 4 +
12 3.2.22/1021_linux-3.2.22.patch | 1245 +++++++++++++++++++++++
13 3.2.22/4450_grsec-kconfig-default-gids.patch | 12 +-
14 3.2.22/4465_selinux-avc_audit-log-curr_ip.patch | 2 +-
15 3.4.4/4450_grsec-kconfig-default-gids.patch | 2 +-
16 5 files changed, 1257 insertions(+), 8 deletions(-)
17
18 diff --git a/3.2.22/0000_README b/3.2.22/0000_README
19 index ccfefdd..7a8a57c 100644
20 --- a/3.2.22/0000_README
21 +++ b/3.2.22/0000_README
22 @@ -2,6 +2,10 @@ README
23 -----------------------------------------------------------------------------
24 Individual Patch Descriptions:
25 -----------------------------------------------------------------------------
26 +Patch: 1021_linux-3.2.22.patch
27 +From: http://www.kernel.org
28 +Desc: Linux 3.2.22
29 +
30 Patch: 4420_grsecurity-2.9.1-3.2.22-201207080924.patch
31 From: http://www.grsecurity.net
32 Desc: hardened-sources base patch from upstream grsecurity
33
34 diff --git a/3.2.22/1021_linux-3.2.22.patch b/3.2.22/1021_linux-3.2.22.patch
35 new file mode 100644
36 index 0000000..e6ad93a
37 --- /dev/null
38 +++ b/3.2.22/1021_linux-3.2.22.patch
39 @@ -0,0 +1,1245 @@
40 +diff --git a/Documentation/stable_kernel_rules.txt b/Documentation/stable_kernel_rules.txt
41 +index 21fd05c..e1f856b 100644
42 +--- a/Documentation/stable_kernel_rules.txt
43 ++++ b/Documentation/stable_kernel_rules.txt
44 +@@ -12,6 +12,12 @@ Rules on what kind of patches are accepted, and which ones are not, into the
45 + marked CONFIG_BROKEN), an oops, a hang, data corruption, a real
46 + security issue, or some "oh, that's not good" issue. In short, something
47 + critical.
48 ++ - Serious issues as reported by a user of a distribution kernel may also
49 ++ be considered if they fix a notable performance or interactivity issue.
50 ++ As these fixes are not as obvious and have a higher risk of a subtle
51 ++ regression they should only be submitted by a distribution kernel
52 ++ maintainer and include an addendum linking to a bugzilla entry if it
53 ++ exists and additional information on the user-visible impact.
54 + - New device IDs and quirks are also accepted.
55 + - No "theoretical race condition" issues, unless an explanation of how the
56 + race can be exploited is also provided.
57 +diff --git a/Makefile b/Makefile
58 +index 7eb465e..9a7d921 100644
59 +--- a/Makefile
60 ++++ b/Makefile
61 +@@ -1,6 +1,6 @@
62 + VERSION = 3
63 + PATCHLEVEL = 2
64 +-SUBLEVEL = 21
65 ++SUBLEVEL = 22
66 + EXTRAVERSION =
67 + NAME = Saber-toothed Squirrel
68 +
69 +diff --git a/arch/arm/plat-samsung/include/plat/map-s3c.h b/arch/arm/plat-samsung/include/plat/map-s3c.h
70 +index 7d04875..c0c70a8 100644
71 +--- a/arch/arm/plat-samsung/include/plat/map-s3c.h
72 ++++ b/arch/arm/plat-samsung/include/plat/map-s3c.h
73 +@@ -22,7 +22,7 @@
74 + #define S3C24XX_VA_WATCHDOG S3C_VA_WATCHDOG
75 +
76 + #define S3C2412_VA_SSMC S3C_ADDR_CPU(0x00000000)
77 +-#define S3C2412_VA_EBI S3C_ADDR_CPU(0x00010000)
78 ++#define S3C2412_VA_EBI S3C_ADDR_CPU(0x00100000)
79 +
80 + #define S3C2410_PA_UART (0x50000000)
81 + #define S3C24XX_PA_UART S3C2410_PA_UART
82 +diff --git a/arch/arm/plat-samsung/include/plat/watchdog-reset.h b/arch/arm/plat-samsung/include/plat/watchdog-reset.h
83 +index 40dbb2b..11b19ea 100644
84 +--- a/arch/arm/plat-samsung/include/plat/watchdog-reset.h
85 ++++ b/arch/arm/plat-samsung/include/plat/watchdog-reset.h
86 +@@ -24,7 +24,7 @@ static inline void arch_wdt_reset(void)
87 +
88 + __raw_writel(0, S3C2410_WTCON); /* disable watchdog, to be safe */
89 +
90 +- if (s3c2410_wdtclk)
91 ++ if (!IS_ERR(s3c2410_wdtclk))
92 + clk_enable(s3c2410_wdtclk);
93 +
94 + /* put initial values into count and data */
95 +diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
96 +index f3444f7..0c3b775 100644
97 +--- a/arch/x86/include/asm/cpufeature.h
98 ++++ b/arch/x86/include/asm/cpufeature.h
99 +@@ -175,7 +175,7 @@
100 + #define X86_FEATURE_XSAVEOPT (7*32+ 4) /* Optimized Xsave */
101 + #define X86_FEATURE_PLN (7*32+ 5) /* Intel Power Limit Notification */
102 + #define X86_FEATURE_PTS (7*32+ 6) /* Intel Package Thermal Status */
103 +-#define X86_FEATURE_DTS (7*32+ 7) /* Digital Thermal Sensor */
104 ++#define X86_FEATURE_DTHERM (7*32+ 7) /* Digital Thermal Sensor */
105 +
106 + /* Virtualization flags: Linux defined, word 8 */
107 + #define X86_FEATURE_TPR_SHADOW (8*32+ 0) /* Intel TPR Shadow */
108 +diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
109 +index effff47..cb00ccc 100644
110 +--- a/arch/x86/include/asm/pgtable-3level.h
111 ++++ b/arch/x86/include/asm/pgtable-3level.h
112 +@@ -31,6 +31,60 @@ static inline void native_set_pte(pte_t *ptep, pte_t pte)
113 + ptep->pte_low = pte.pte_low;
114 + }
115 +
116 ++#define pmd_read_atomic pmd_read_atomic
117 ++/*
118 ++ * pte_offset_map_lock on 32bit PAE kernels was reading the pmd_t with
119 ++ * a "*pmdp" dereference done by gcc. Problem is, in certain places
120 ++ * where pte_offset_map_lock is called, concurrent page faults are
121 ++ * allowed, if the mmap_sem is hold for reading. An example is mincore
122 ++ * vs page faults vs MADV_DONTNEED. On the page fault side
123 ++ * pmd_populate rightfully does a set_64bit, but if we're reading the
124 ++ * pmd_t with a "*pmdp" on the mincore side, a SMP race can happen
125 ++ * because gcc will not read the 64bit of the pmd atomically. To fix
126 ++ * this all places running pmd_offset_map_lock() while holding the
127 ++ * mmap_sem in read mode, shall read the pmdp pointer using this
128 ++ * function to know if the pmd is null nor not, and in turn to know if
129 ++ * they can run pmd_offset_map_lock or pmd_trans_huge or other pmd
130 ++ * operations.
131 ++ *
132 ++ * Without THP if the mmap_sem is hold for reading, the pmd can only
133 ++ * transition from null to not null while pmd_read_atomic runs. So
134 ++ * we can always return atomic pmd values with this function.
135 ++ *
136 ++ * With THP if the mmap_sem is hold for reading, the pmd can become
137 ++ * trans_huge or none or point to a pte (and in turn become "stable")
138 ++ * at any time under pmd_read_atomic. We could read it really
139 ++ * atomically here with a atomic64_read for the THP enabled case (and
140 ++ * it would be a whole lot simpler), but to avoid using cmpxchg8b we
141 ++ * only return an atomic pmdval if the low part of the pmdval is later
142 ++ * found stable (i.e. pointing to a pte). And we're returning a none
143 ++ * pmdval if the low part of the pmd is none. In some cases the high
144 ++ * and low part of the pmdval returned may not be consistent if THP is
145 ++ * enabled (the low part may point to previously mapped hugepage,
146 ++ * while the high part may point to a more recently mapped hugepage),
147 ++ * but pmd_none_or_trans_huge_or_clear_bad() only needs the low part
148 ++ * of the pmd to be read atomically to decide if the pmd is unstable
149 ++ * or not, with the only exception of when the low part of the pmd is
150 ++ * zero in which case we return a none pmd.
151 ++ */
152 ++static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
153 ++{
154 ++ pmdval_t ret;
155 ++ u32 *tmp = (u32 *)pmdp;
156 ++
157 ++ ret = (pmdval_t) (*tmp);
158 ++ if (ret) {
159 ++ /*
160 ++ * If the low part is null, we must not read the high part
161 ++ * or we can end up with a partial pmd.
162 ++ */
163 ++ smp_rmb();
164 ++ ret |= ((pmdval_t)*(tmp + 1)) << 32;
165 ++ }
166 ++
167 ++ return (pmd_t) { ret };
168 ++}
169 ++
170 + static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
171 + {
172 + set_64bit((unsigned long long *)(ptep), native_pte_val(pte));
173 +diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
174 +index c7f64e6..ea6106c 100644
175 +--- a/arch/x86/kernel/cpu/scattered.c
176 ++++ b/arch/x86/kernel/cpu/scattered.c
177 +@@ -31,7 +31,7 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
178 + const struct cpuid_bit *cb;
179 +
180 + static const struct cpuid_bit __cpuinitconst cpuid_bits[] = {
181 +- { X86_FEATURE_DTS, CR_EAX, 0, 0x00000006, 0 },
182 ++ { X86_FEATURE_DTHERM, CR_EAX, 0, 0x00000006, 0 },
183 + { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006, 0 },
184 + { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006, 0 },
185 + { X86_FEATURE_PLN, CR_EAX, 4, 0x00000006, 0 },
186 +diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
187 +index a43fa1a..1502c502 100644
188 +--- a/drivers/acpi/acpi_pad.c
189 ++++ b/drivers/acpi/acpi_pad.c
190 +@@ -36,6 +36,7 @@
191 + #define ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME "Processor Aggregator"
192 + #define ACPI_PROCESSOR_AGGREGATOR_NOTIFY 0x80
193 + static DEFINE_MUTEX(isolated_cpus_lock);
194 ++static DEFINE_MUTEX(round_robin_lock);
195 +
196 + static unsigned long power_saving_mwait_eax;
197 +
198 +@@ -107,7 +108,7 @@ static void round_robin_cpu(unsigned int tsk_index)
199 + if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
200 + return;
201 +
202 +- mutex_lock(&isolated_cpus_lock);
203 ++ mutex_lock(&round_robin_lock);
204 + cpumask_clear(tmp);
205 + for_each_cpu(cpu, pad_busy_cpus)
206 + cpumask_or(tmp, tmp, topology_thread_cpumask(cpu));
207 +@@ -116,7 +117,7 @@ static void round_robin_cpu(unsigned int tsk_index)
208 + if (cpumask_empty(tmp))
209 + cpumask_andnot(tmp, cpu_online_mask, pad_busy_cpus);
210 + if (cpumask_empty(tmp)) {
211 +- mutex_unlock(&isolated_cpus_lock);
212 ++ mutex_unlock(&round_robin_lock);
213 + return;
214 + }
215 + for_each_cpu(cpu, tmp) {
216 +@@ -131,7 +132,7 @@ static void round_robin_cpu(unsigned int tsk_index)
217 + tsk_in_cpu[tsk_index] = preferred_cpu;
218 + cpumask_set_cpu(preferred_cpu, pad_busy_cpus);
219 + cpu_weight[preferred_cpu]++;
220 +- mutex_unlock(&isolated_cpus_lock);
221 ++ mutex_unlock(&round_robin_lock);
222 +
223 + set_cpus_allowed_ptr(current, cpumask_of(preferred_cpu));
224 + }
225 +diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
226 +index c3d2dfc..b96544a 100644
227 +--- a/drivers/base/power/main.c
228 ++++ b/drivers/base/power/main.c
229 +@@ -869,7 +869,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
230 + dpm_wait_for_children(dev, async);
231 +
232 + if (async_error)
233 +- return 0;
234 ++ goto Complete;
235 +
236 + pm_runtime_get_noresume(dev);
237 + if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
238 +@@ -878,7 +878,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
239 + if (pm_wakeup_pending()) {
240 + pm_runtime_put_sync(dev);
241 + async_error = -EBUSY;
242 +- return 0;
243 ++ goto Complete;
244 + }
245 +
246 + device_lock(dev);
247 +@@ -926,6 +926,8 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
248 + }
249 +
250 + device_unlock(dev);
251 ++
252 ++ Complete:
253 + complete_all(&dev->power.completion);
254 +
255 + if (error) {
256 +diff --git a/drivers/char/hw_random/atmel-rng.c b/drivers/char/hw_random/atmel-rng.c
257 +index 0477982..1b5675b 100644
258 +--- a/drivers/char/hw_random/atmel-rng.c
259 ++++ b/drivers/char/hw_random/atmel-rng.c
260 +@@ -34,7 +34,7 @@ static int atmel_trng_read(struct hwrng *rng, void *buf, size_t max,
261 + u32 *data = buf;
262 +
263 + /* data ready? */
264 +- if (readl(trng->base + TRNG_ODATA) & 1) {
265 ++ if (readl(trng->base + TRNG_ISR) & 1) {
266 + *data = readl(trng->base + TRNG_ODATA);
267 + /*
268 + ensure data ready is only set again AFTER the next data
269 +diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
270 +index 70ad892..b3ccefa 100644
271 +--- a/drivers/edac/i7core_edac.c
272 ++++ b/drivers/edac/i7core_edac.c
273 +@@ -1932,12 +1932,6 @@ static int i7core_mce_check_error(struct notifier_block *nb, unsigned long val,
274 + if (mce->bank != 8)
275 + return NOTIFY_DONE;
276 +
277 +-#ifdef CONFIG_SMP
278 +- /* Only handle if it is the right mc controller */
279 +- if (mce->socketid != pvt->i7core_dev->socket)
280 +- return NOTIFY_DONE;
281 +-#endif
282 +-
283 + smp_rmb();
284 + if ((pvt->mce_out + 1) % MCE_LOG_LEN == pvt->mce_in) {
285 + smp_wmb();
286 +@@ -2234,8 +2228,6 @@ static void i7core_unregister_mci(struct i7core_dev *i7core_dev)
287 + if (pvt->enable_scrub)
288 + disable_sdram_scrub_setting(mci);
289 +
290 +- atomic_notifier_chain_unregister(&x86_mce_decoder_chain, &i7_mce_dec);
291 +-
292 + /* Disable EDAC polling */
293 + i7core_pci_ctl_release(pvt);
294 +
295 +@@ -2336,8 +2328,6 @@ static int i7core_register_mci(struct i7core_dev *i7core_dev)
296 + /* DCLK for scrub rate setting */
297 + pvt->dclk_freq = get_dclk_freq();
298 +
299 +- atomic_notifier_chain_register(&x86_mce_decoder_chain, &i7_mce_dec);
300 +-
301 + return 0;
302 +
303 + fail0:
304 +@@ -2481,8 +2471,10 @@ static int __init i7core_init(void)
305 +
306 + pci_rc = pci_register_driver(&i7core_driver);
307 +
308 +- if (pci_rc >= 0)
309 ++ if (pci_rc >= 0) {
310 ++ atomic_notifier_chain_register(&x86_mce_decoder_chain, &i7_mce_dec);
311 + return 0;
312 ++ }
313 +
314 + i7core_printk(KERN_ERR, "Failed to register device with error %d.\n",
315 + pci_rc);
316 +@@ -2498,6 +2490,7 @@ static void __exit i7core_exit(void)
317 + {
318 + debugf2("MC: " __FILE__ ": %s()\n", __func__);
319 + pci_unregister_driver(&i7core_driver);
320 ++ atomic_notifier_chain_unregister(&x86_mce_decoder_chain, &i7_mce_dec);
321 + }
322 +
323 + module_init(i7core_init);
324 +diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
325 +index 7a402bf..18a1293 100644
326 +--- a/drivers/edac/sb_edac.c
327 ++++ b/drivers/edac/sb_edac.c
328 +@@ -1661,9 +1661,6 @@ static void sbridge_unregister_mci(struct sbridge_dev *sbridge_dev)
329 + debugf0("MC: " __FILE__ ": %s(): mci = %p, dev = %p\n",
330 + __func__, mci, &sbridge_dev->pdev[0]->dev);
331 +
332 +- atomic_notifier_chain_unregister(&x86_mce_decoder_chain,
333 +- &sbridge_mce_dec);
334 +-
335 + /* Remove MC sysfs nodes */
336 + edac_mc_del_mc(mci->dev);
337 +
338 +@@ -1731,8 +1728,6 @@ static int sbridge_register_mci(struct sbridge_dev *sbridge_dev)
339 + goto fail0;
340 + }
341 +
342 +- atomic_notifier_chain_register(&x86_mce_decoder_chain,
343 +- &sbridge_mce_dec);
344 + return 0;
345 +
346 + fail0:
347 +@@ -1861,8 +1856,10 @@ static int __init sbridge_init(void)
348 +
349 + pci_rc = pci_register_driver(&sbridge_driver);
350 +
351 +- if (pci_rc >= 0)
352 ++ if (pci_rc >= 0) {
353 ++ atomic_notifier_chain_register(&x86_mce_decoder_chain, &sbridge_mce_dec);
354 + return 0;
355 ++ }
356 +
357 + sbridge_printk(KERN_ERR, "Failed to register device with error %d.\n",
358 + pci_rc);
359 +@@ -1878,6 +1875,7 @@ static void __exit sbridge_exit(void)
360 + {
361 + debugf2("MC: " __FILE__ ": %s()\n", __func__);
362 + pci_unregister_driver(&sbridge_driver);
363 ++ atomic_notifier_chain_unregister(&x86_mce_decoder_chain, &sbridge_mce_dec);
364 + }
365 +
366 + module_init(sbridge_init);
367 +diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
368 +index 3e927ce..a1ee634 100644
369 +--- a/drivers/gpu/drm/drm_edid.c
370 ++++ b/drivers/gpu/drm/drm_edid.c
371 +@@ -585,7 +585,7 @@ static bool
372 + drm_monitor_supports_rb(struct edid *edid)
373 + {
374 + if (edid->revision >= 4) {
375 +- bool ret;
376 ++ bool ret = false;
377 + drm_for_each_detailed_block((u8 *)edid, is_rb, &ret);
378 + return ret;
379 + }
380 +diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
381 +index 3e7c478..3e2edc6 100644
382 +--- a/drivers/gpu/drm/i915/i915_gem.c
383 ++++ b/drivers/gpu/drm/i915/i915_gem.c
384 +@@ -3312,6 +3312,10 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
385 +
386 + if (ret == 0 && atomic_read(&dev_priv->mm.wedged))
387 + ret = -EIO;
388 ++ } else if (wait_for(i915_seqno_passed(ring->get_seqno(ring),
389 ++ seqno) ||
390 ++ atomic_read(&dev_priv->mm.wedged), 3000)) {
391 ++ ret = -EBUSY;
392 + }
393 + }
394 +
395 +diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
396 +index d3820c2..578ddfc 100644
397 +--- a/drivers/gpu/drm/i915/i915_irq.c
398 ++++ b/drivers/gpu/drm/i915/i915_irq.c
399 +@@ -424,6 +424,30 @@ static void gen6_pm_rps_work(struct work_struct *work)
400 + mutex_unlock(&dev_priv->dev->struct_mutex);
401 + }
402 +
403 ++static void gen6_queue_rps_work(struct drm_i915_private *dev_priv,
404 ++ u32 pm_iir)
405 ++{
406 ++ unsigned long flags;
407 ++
408 ++ /*
409 ++ * IIR bits should never already be set because IMR should
410 ++ * prevent an interrupt from being shown in IIR. The warning
411 ++ * displays a case where we've unsafely cleared
412 ++ * dev_priv->pm_iir. Although missing an interrupt of the same
413 ++ * type is not a problem, it displays a problem in the logic.
414 ++ *
415 ++ * The mask bit in IMR is cleared by rps_work.
416 ++ */
417 ++
418 ++ spin_lock_irqsave(&dev_priv->rps_lock, flags);
419 ++ dev_priv->pm_iir |= pm_iir;
420 ++ I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
421 ++ POSTING_READ(GEN6_PMIMR);
422 ++ spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
423 ++
424 ++ queue_work(dev_priv->wq, &dev_priv->rps_work);
425 ++}
426 ++
427 + static void pch_irq_handler(struct drm_device *dev, u32 pch_iir)
428 + {
429 + drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
430 +@@ -529,16 +553,8 @@ static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
431 + pch_irq_handler(dev, pch_iir);
432 + }
433 +
434 +- if (pm_iir & GEN6_PM_DEFERRED_EVENTS) {
435 +- unsigned long flags;
436 +- spin_lock_irqsave(&dev_priv->rps_lock, flags);
437 +- WARN(dev_priv->pm_iir & pm_iir, "Missed a PM interrupt\n");
438 +- dev_priv->pm_iir |= pm_iir;
439 +- I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
440 +- POSTING_READ(GEN6_PMIMR);
441 +- spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
442 +- queue_work(dev_priv->wq, &dev_priv->rps_work);
443 +- }
444 ++ if (pm_iir & GEN6_PM_DEFERRED_EVENTS)
445 ++ gen6_queue_rps_work(dev_priv, pm_iir);
446 +
447 + /* should clear PCH hotplug event before clear CPU irq */
448 + I915_WRITE(SDEIIR, pch_iir);
449 +@@ -634,25 +650,8 @@ static irqreturn_t ironlake_irq_handler(DRM_IRQ_ARGS)
450 + i915_handle_rps_change(dev);
451 + }
452 +
453 +- if (IS_GEN6(dev) && pm_iir & GEN6_PM_DEFERRED_EVENTS) {
454 +- /*
455 +- * IIR bits should never already be set because IMR should
456 +- * prevent an interrupt from being shown in IIR. The warning
457 +- * displays a case where we've unsafely cleared
458 +- * dev_priv->pm_iir. Although missing an interrupt of the same
459 +- * type is not a problem, it displays a problem in the logic.
460 +- *
461 +- * The mask bit in IMR is cleared by rps_work.
462 +- */
463 +- unsigned long flags;
464 +- spin_lock_irqsave(&dev_priv->rps_lock, flags);
465 +- WARN(dev_priv->pm_iir & pm_iir, "Missed a PM interrupt\n");
466 +- dev_priv->pm_iir |= pm_iir;
467 +- I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
468 +- POSTING_READ(GEN6_PMIMR);
469 +- spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
470 +- queue_work(dev_priv->wq, &dev_priv->rps_work);
471 +- }
472 ++ if (IS_GEN6(dev) && pm_iir & GEN6_PM_DEFERRED_EVENTS)
473 ++ gen6_queue_rps_work(dev_priv, pm_iir);
474 +
475 + /* should clear PCH hotplug event before clear CPU irq */
476 + I915_WRITE(SDEIIR, pch_iir);
477 +diff --git a/drivers/gpu/drm/i915/i915_suspend.c b/drivers/gpu/drm/i915/i915_suspend.c
478 +index a1eb83d..f38d196 100644
479 +--- a/drivers/gpu/drm/i915/i915_suspend.c
480 ++++ b/drivers/gpu/drm/i915/i915_suspend.c
481 +@@ -739,8 +739,11 @@ static void i915_restore_display(struct drm_device *dev)
482 + if (HAS_PCH_SPLIT(dev)) {
483 + I915_WRITE(BLC_PWM_PCH_CTL1, dev_priv->saveBLC_PWM_CTL);
484 + I915_WRITE(BLC_PWM_PCH_CTL2, dev_priv->saveBLC_PWM_CTL2);
485 +- I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->saveBLC_CPU_PWM_CTL);
486 ++ /* NOTE: BLC_PWM_CPU_CTL must be written after BLC_PWM_CPU_CTL2;
487 ++ * otherwise we get blank eDP screen after S3 on some machines
488 ++ */
489 + I915_WRITE(BLC_PWM_CPU_CTL2, dev_priv->saveBLC_CPU_PWM_CTL2);
490 ++ I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->saveBLC_CPU_PWM_CTL);
491 + I915_WRITE(PCH_PP_ON_DELAYS, dev_priv->savePP_ON_DELAYS);
492 + I915_WRITE(PCH_PP_OFF_DELAYS, dev_priv->savePP_OFF_DELAYS);
493 + I915_WRITE(PCH_PP_DIVISOR, dev_priv->savePP_DIVISOR);
494 +diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
495 +index 5c1cdb8..6aa7716 100644
496 +--- a/drivers/gpu/drm/i915/intel_display.c
497 ++++ b/drivers/gpu/drm/i915/intel_display.c
498 +@@ -2187,6 +2187,33 @@ intel_pipe_set_base_atomic(struct drm_crtc *crtc, struct drm_framebuffer *fb,
499 + }
500 +
501 + static int
502 ++intel_finish_fb(struct drm_framebuffer *old_fb)
503 ++{
504 ++ struct drm_i915_gem_object *obj = to_intel_framebuffer(old_fb)->obj;
505 ++ struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
506 ++ bool was_interruptible = dev_priv->mm.interruptible;
507 ++ int ret;
508 ++
509 ++ wait_event(dev_priv->pending_flip_queue,
510 ++ atomic_read(&dev_priv->mm.wedged) ||
511 ++ atomic_read(&obj->pending_flip) == 0);
512 ++
513 ++ /* Big Hammer, we also need to ensure that any pending
514 ++ * MI_WAIT_FOR_EVENT inside a user batch buffer on the
515 ++ * current scanout is retired before unpinning the old
516 ++ * framebuffer.
517 ++ *
518 ++ * This should only fail upon a hung GPU, in which case we
519 ++ * can safely continue.
520 ++ */
521 ++ dev_priv->mm.interruptible = false;
522 ++ ret = i915_gem_object_finish_gpu(obj);
523 ++ dev_priv->mm.interruptible = was_interruptible;
524 ++
525 ++ return ret;
526 ++}
527 ++
528 ++static int
529 + intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
530 + struct drm_framebuffer *old_fb)
531 + {
532 +@@ -2224,25 +2251,8 @@ intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
533 + return ret;
534 + }
535 +
536 +- if (old_fb) {
537 +- struct drm_i915_private *dev_priv = dev->dev_private;
538 +- struct drm_i915_gem_object *obj = to_intel_framebuffer(old_fb)->obj;
539 +-
540 +- wait_event(dev_priv->pending_flip_queue,
541 +- atomic_read(&dev_priv->mm.wedged) ||
542 +- atomic_read(&obj->pending_flip) == 0);
543 +-
544 +- /* Big Hammer, we also need to ensure that any pending
545 +- * MI_WAIT_FOR_EVENT inside a user batch buffer on the
546 +- * current scanout is retired before unpinning the old
547 +- * framebuffer.
548 +- *
549 +- * This should only fail upon a hung GPU, in which case we
550 +- * can safely continue.
551 +- */
552 +- ret = i915_gem_object_finish_gpu(obj);
553 +- (void) ret;
554 +- }
555 ++ if (old_fb)
556 ++ intel_finish_fb(old_fb);
557 +
558 + ret = intel_pipe_set_base_atomic(crtc, crtc->fb, x, y,
559 + LEAVE_ATOMIC_MODE_SET);
560 +@@ -3312,6 +3322,23 @@ static void intel_crtc_disable(struct drm_crtc *crtc)
561 + struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
562 + struct drm_device *dev = crtc->dev;
563 +
564 ++ /* Flush any pending WAITs before we disable the pipe. Note that
565 ++ * we need to drop the struct_mutex in order to acquire it again
566 ++ * during the lowlevel dpms routines around a couple of the
567 ++ * operations. It does not look trivial nor desirable to move
568 ++ * that locking higher. So instead we leave a window for the
569 ++ * submission of further commands on the fb before we can actually
570 ++ * disable it. This race with userspace exists anyway, and we can
571 ++ * only rely on the pipe being disabled by userspace after it
572 ++ * receives the hotplug notification and has flushed any pending
573 ++ * batches.
574 ++ */
575 ++ if (crtc->fb) {
576 ++ mutex_lock(&dev->struct_mutex);
577 ++ intel_finish_fb(crtc->fb);
578 ++ mutex_unlock(&dev->struct_mutex);
579 ++ }
580 ++
581 + crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF);
582 +
583 + if (crtc->fb) {
584 +diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
585 +index 933e66b..f6613dc 100644
586 +--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
587 ++++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
588 +@@ -306,7 +306,7 @@ static int init_ring_common(struct intel_ring_buffer *ring)
589 +
590 + I915_WRITE_CTL(ring,
591 + ((ring->size - PAGE_SIZE) & RING_NR_PAGES)
592 +- | RING_REPORT_64K | RING_VALID);
593 ++ | RING_VALID);
594 +
595 + /* If the head is still not zero, the ring is dead */
596 + if ((I915_READ_CTL(ring) & RING_VALID) == 0 ||
597 +@@ -1157,18 +1157,6 @@ int intel_wait_ring_buffer(struct intel_ring_buffer *ring, int n)
598 + struct drm_device *dev = ring->dev;
599 + struct drm_i915_private *dev_priv = dev->dev_private;
600 + unsigned long end;
601 +- u32 head;
602 +-
603 +- /* If the reported head position has wrapped or hasn't advanced,
604 +- * fallback to the slow and accurate path.
605 +- */
606 +- head = intel_read_status_page(ring, 4);
607 +- if (head > ring->head) {
608 +- ring->head = head;
609 +- ring->space = ring_space(ring);
610 +- if (ring->space >= n)
611 +- return 0;
612 +- }
613 +
614 + trace_i915_ring_wait_begin(ring);
615 + end = jiffies + 3 * HZ;
616 +diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
617 +index 3a4cc32..cc0801d 100644
618 +--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
619 ++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
620 +@@ -499,7 +499,7 @@ int nouveau_fbcon_init(struct drm_device *dev)
621 + nfbdev->helper.funcs = &nouveau_fbcon_helper_funcs;
622 +
623 + ret = drm_fb_helper_init(dev, &nfbdev->helper,
624 +- nv_two_heads(dev) ? 2 : 1, 4);
625 ++ dev->mode_config.num_crtc, 4);
626 + if (ret) {
627 + kfree(nfbdev);
628 + return ret;
629 +diff --git a/drivers/hwmon/applesmc.c b/drivers/hwmon/applesmc.c
630 +index 4c07436..d99aa84 100644
631 +--- a/drivers/hwmon/applesmc.c
632 ++++ b/drivers/hwmon/applesmc.c
633 +@@ -215,7 +215,7 @@ static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
634 + int i;
635 +
636 + if (send_command(cmd) || send_argument(key)) {
637 +- pr_warn("%s: read arg fail\n", key);
638 ++ pr_warn("%.4s: read arg fail\n", key);
639 + return -EIO;
640 + }
641 +
642 +@@ -223,7 +223,7 @@ static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
643 +
644 + for (i = 0; i < len; i++) {
645 + if (__wait_status(0x05)) {
646 +- pr_warn("%s: read data fail\n", key);
647 ++ pr_warn("%.4s: read data fail\n", key);
648 + return -EIO;
649 + }
650 + buffer[i] = inb(APPLESMC_DATA_PORT);
651 +diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
652 +index 427468f..0790c98 100644
653 +--- a/drivers/hwmon/coretemp.c
654 ++++ b/drivers/hwmon/coretemp.c
655 +@@ -660,7 +660,7 @@ static void __cpuinit get_core_online(unsigned int cpu)
656 + * sensors. We check this bit only, all the early CPUs
657 + * without thermal sensors will be filtered out.
658 + */
659 +- if (!cpu_has(c, X86_FEATURE_DTS))
660 ++ if (!cpu_has(c, X86_FEATURE_DTHERM))
661 + return;
662 +
663 + if (!pdev) {
664 +diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
665 +index da2f021..532a902 100644
666 +--- a/drivers/md/dm-thin.c
667 ++++ b/drivers/md/dm-thin.c
668 +@@ -288,8 +288,10 @@ static void __cell_release(struct cell *cell, struct bio_list *inmates)
669 +
670 + hlist_del(&cell->list);
671 +
672 +- bio_list_add(inmates, cell->holder);
673 +- bio_list_merge(inmates, &cell->bios);
674 ++ if (inmates) {
675 ++ bio_list_add(inmates, cell->holder);
676 ++ bio_list_merge(inmates, &cell->bios);
677 ++ }
678 +
679 + mempool_free(cell, prison->cell_pool);
680 + }
681 +@@ -312,9 +314,10 @@ static void cell_release(struct cell *cell, struct bio_list *bios)
682 + */
683 + static void __cell_release_singleton(struct cell *cell, struct bio *bio)
684 + {
685 +- hlist_del(&cell->list);
686 + BUG_ON(cell->holder != bio);
687 + BUG_ON(!bio_list_empty(&cell->bios));
688 ++
689 ++ __cell_release(cell, NULL);
690 + }
691 +
692 + static void cell_release_singleton(struct cell *cell, struct bio *bio)
693 +diff --git a/drivers/media/dvb/siano/smsusb.c b/drivers/media/dvb/siano/smsusb.c
694 +index b7d1e3e..fb68805 100644
695 +--- a/drivers/media/dvb/siano/smsusb.c
696 ++++ b/drivers/media/dvb/siano/smsusb.c
697 +@@ -544,6 +544,8 @@ static const struct usb_device_id smsusb_id_table[] __devinitconst = {
698 + .driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
699 + { USB_DEVICE(0x2040, 0xc0a0),
700 + .driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
701 ++ { USB_DEVICE(0x2040, 0xf5a0),
702 ++ .driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
703 + { } /* Terminating entry */
704 + };
705 +
706 +diff --git a/drivers/media/video/gspca/gspca.c b/drivers/media/video/gspca/gspca.c
707 +index 2ca10df..981501f 100644
708 +--- a/drivers/media/video/gspca/gspca.c
709 ++++ b/drivers/media/video/gspca/gspca.c
710 +@@ -1697,7 +1697,7 @@ static int vidioc_streamoff(struct file *file, void *priv,
711 + enum v4l2_buf_type buf_type)
712 + {
713 + struct gspca_dev *gspca_dev = priv;
714 +- int ret;
715 ++ int i, ret;
716 +
717 + if (buf_type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
718 + return -EINVAL;
719 +@@ -1728,6 +1728,8 @@ static int vidioc_streamoff(struct file *file, void *priv,
720 + wake_up_interruptible(&gspca_dev->wq);
721 +
722 + /* empty the transfer queues */
723 ++ for (i = 0; i < gspca_dev->nframes; i++)
724 ++ gspca_dev->frame[i].v4l2_buf.flags &= ~BUF_ALL_FLAGS;
725 + atomic_set(&gspca_dev->fr_q, 0);
726 + atomic_set(&gspca_dev->fr_i, 0);
727 + gspca_dev->fr_o = 0;
728 +diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c
729 +index 8dc84d6..86cd532 100644
730 +--- a/drivers/net/can/c_can/c_can.c
731 ++++ b/drivers/net/can/c_can/c_can.c
732 +@@ -590,8 +590,8 @@ static void c_can_chip_config(struct net_device *dev)
733 + priv->write_reg(priv, &priv->regs->control,
734 + CONTROL_ENABLE_AR);
735 +
736 +- if (priv->can.ctrlmode & (CAN_CTRLMODE_LISTENONLY &
737 +- CAN_CTRLMODE_LOOPBACK)) {
738 ++ if ((priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) &&
739 ++ (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)) {
740 + /* loopback + silent mode : useful for hot self-test */
741 + priv->write_reg(priv, &priv->regs->control, CONTROL_EIE |
742 + CONTROL_SIE | CONTROL_IE | CONTROL_TEST);
743 +diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
744 +index e023379..e59d006 100644
745 +--- a/drivers/net/can/flexcan.c
746 ++++ b/drivers/net/can/flexcan.c
747 +@@ -933,12 +933,12 @@ static int __devinit flexcan_probe(struct platform_device *pdev)
748 + u32 clock_freq = 0;
749 +
750 + if (pdev->dev.of_node) {
751 +- const u32 *clock_freq_p;
752 ++ const __be32 *clock_freq_p;
753 +
754 + clock_freq_p = of_get_property(pdev->dev.of_node,
755 + "clock-frequency", NULL);
756 + if (clock_freq_p)
757 +- clock_freq = *clock_freq_p;
758 ++ clock_freq = be32_to_cpup(clock_freq_p);
759 + }
760 +
761 + if (!clock_freq) {
762 +diff --git a/drivers/net/ethernet/intel/e1000e/82571.c b/drivers/net/ethernet/intel/e1000e/82571.c
763 +index a3e65fd..e556fc3 100644
764 +--- a/drivers/net/ethernet/intel/e1000e/82571.c
765 ++++ b/drivers/net/ethernet/intel/e1000e/82571.c
766 +@@ -2080,8 +2080,9 @@ const struct e1000_info e1000_82574_info = {
767 + | FLAG_HAS_SMART_POWER_DOWN
768 + | FLAG_HAS_AMT
769 + | FLAG_HAS_CTRLEXT_ON_LOAD,
770 +- .flags2 = FLAG2_CHECK_PHY_HANG
771 ++ .flags2 = FLAG2_CHECK_PHY_HANG
772 + | FLAG2_DISABLE_ASPM_L0S
773 ++ | FLAG2_DISABLE_ASPM_L1
774 + | FLAG2_NO_DISABLE_RX,
775 + .pba = 32,
776 + .max_hw_frame_size = DEFAULT_JUMBO,
777 +diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
778 +index 4e933d1..64d3f98 100644
779 +--- a/drivers/net/ethernet/intel/e1000e/netdev.c
780 ++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
781 +@@ -5132,14 +5132,6 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
782 + return -EINVAL;
783 + }
784 +
785 +- /* 82573 Errata 17 */
786 +- if (((adapter->hw.mac.type == e1000_82573) ||
787 +- (adapter->hw.mac.type == e1000_82574)) &&
788 +- (max_frame > ETH_FRAME_LEN + ETH_FCS_LEN)) {
789 +- adapter->flags2 |= FLAG2_DISABLE_ASPM_L1;
790 +- e1000e_disable_aspm(adapter->pdev, PCIE_LINK_STATE_L1);
791 +- }
792 +-
793 + while (test_and_set_bit(__E1000_RESETTING, &adapter->state))
794 + usleep_range(1000, 2000);
795 + /* e1000e_down -> e1000e_reset dependent on max_frame_size & mtu */
796 +diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
797 +index 8b0c2ca..6973620 100644
798 +--- a/drivers/net/wireless/ath/ath9k/hw.c
799 ++++ b/drivers/net/wireless/ath/ath9k/hw.c
800 +@@ -718,13 +718,25 @@ static void ath9k_hw_init_qos(struct ath_hw *ah)
801 +
802 + u32 ar9003_get_pll_sqsum_dvc(struct ath_hw *ah)
803 + {
804 ++ struct ath_common *common = ath9k_hw_common(ah);
805 ++ int i = 0;
806 ++
807 + REG_CLR_BIT(ah, PLL3, PLL3_DO_MEAS_MASK);
808 + udelay(100);
809 + REG_SET_BIT(ah, PLL3, PLL3_DO_MEAS_MASK);
810 +
811 +- while ((REG_READ(ah, PLL4) & PLL4_MEAS_DONE) == 0)
812 ++ while ((REG_READ(ah, PLL4) & PLL4_MEAS_DONE) == 0) {
813 ++
814 + udelay(100);
815 +
816 ++ if (WARN_ON_ONCE(i >= 100)) {
817 ++ ath_err(common, "PLL4 meaurement not done\n");
818 ++ break;
819 ++ }
820 ++
821 ++ i++;
822 ++ }
823 ++
824 + return (REG_READ(ah, PLL3) & SQSUM_DVC_MASK) >> 3;
825 + }
826 + EXPORT_SYMBOL(ar9003_get_pll_sqsum_dvc);
827 +diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
828 +index f76a814..95437fc 100644
829 +--- a/drivers/net/wireless/ath/ath9k/main.c
830 ++++ b/drivers/net/wireless/ath/ath9k/main.c
831 +@@ -1042,6 +1042,15 @@ void ath_hw_pll_work(struct work_struct *work)
832 + hw_pll_work.work);
833 + u32 pll_sqsum;
834 +
835 ++ /*
836 ++ * ensure that the PLL WAR is executed only
837 ++ * after the STA is associated (or) if the
838 ++ * beaconing had started in interfaces that
839 ++ * uses beacons.
840 ++ */
841 ++ if (!(sc->sc_flags & SC_OP_BEACONS))
842 ++ return;
843 ++
844 + if (AR_SREV_9485(sc->sc_ah)) {
845 +
846 + ath9k_ps_wakeup(sc);
847 +@@ -1486,15 +1495,6 @@ static int ath9k_add_interface(struct ieee80211_hw *hw,
848 + }
849 + }
850 +
851 +- if ((ah->opmode == NL80211_IFTYPE_ADHOC) ||
852 +- ((vif->type == NL80211_IFTYPE_ADHOC) &&
853 +- sc->nvifs > 0)) {
854 +- ath_err(common, "Cannot create ADHOC interface when other"
855 +- " interfaces already exist.\n");
856 +- ret = -EINVAL;
857 +- goto out;
858 +- }
859 +-
860 + ath_dbg(common, ATH_DBG_CONFIG,
861 + "Attach a VIF of type: %d\n", vif->type);
862 +
863 +diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
864 +index 76fd277..c59c592 100644
865 +--- a/drivers/net/wireless/ath/ath9k/xmit.c
866 ++++ b/drivers/net/wireless/ath/ath9k/xmit.c
867 +@@ -936,13 +936,13 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
868 + }
869 +
870 + /* legacy rates */
871 ++ rate = &sc->sbands[tx_info->band].bitrates[rates[i].idx];
872 + if ((tx_info->band == IEEE80211_BAND_2GHZ) &&
873 + !(rate->flags & IEEE80211_RATE_ERP_G))
874 + phy = WLAN_RC_PHY_CCK;
875 + else
876 + phy = WLAN_RC_PHY_OFDM;
877 +
878 +- rate = &sc->sbands[tx_info->band].bitrates[rates[i].idx];
879 + info->rates[i].Rate = rate->hw_value;
880 + if (rate->hw_value_short) {
881 + if (rates[i].flags & IEEE80211_TX_RC_USE_SHORT_PREAMBLE)
882 +diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
883 +index 5815cf5..4661a64 100644
884 +--- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
885 ++++ b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
886 +@@ -1777,6 +1777,7 @@ static ssize_t iwl_dbgfs_rx_queue_read(struct file *file,
887 + return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
888 + }
889 +
890 ++#ifdef CONFIG_IWLWIFI_DEBUG
891 + static ssize_t iwl_dbgfs_log_event_read(struct file *file,
892 + char __user *user_buf,
893 + size_t count, loff_t *ppos)
894 +@@ -1814,6 +1815,7 @@ static ssize_t iwl_dbgfs_log_event_write(struct file *file,
895 +
896 + return count;
897 + }
898 ++#endif
899 +
900 + static ssize_t iwl_dbgfs_interrupt_read(struct file *file,
901 + char __user *user_buf,
902 +@@ -1941,7 +1943,9 @@ static ssize_t iwl_dbgfs_fh_reg_read(struct file *file,
903 + return ret;
904 + }
905 +
906 ++#ifdef CONFIG_IWLWIFI_DEBUG
907 + DEBUGFS_READ_WRITE_FILE_OPS(log_event);
908 ++#endif
909 + DEBUGFS_READ_WRITE_FILE_OPS(interrupt);
910 + DEBUGFS_READ_FILE_OPS(fh_reg);
911 + DEBUGFS_READ_FILE_OPS(rx_queue);
912 +@@ -1957,7 +1961,9 @@ static int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans,
913 + {
914 + DEBUGFS_ADD_FILE(rx_queue, dir, S_IRUSR);
915 + DEBUGFS_ADD_FILE(tx_queue, dir, S_IRUSR);
916 ++#ifdef CONFIG_IWLWIFI_DEBUG
917 + DEBUGFS_ADD_FILE(log_event, dir, S_IWUSR | S_IRUSR);
918 ++#endif
919 + DEBUGFS_ADD_FILE(interrupt, dir, S_IWUSR | S_IRUSR);
920 + DEBUGFS_ADD_FILE(csr, dir, S_IWUSR);
921 + DEBUGFS_ADD_FILE(fh_reg, dir, S_IRUSR);
922 +diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
923 +index 226faab..fc35308 100644
924 +--- a/drivers/net/xen-netfront.c
925 ++++ b/drivers/net/xen-netfront.c
926 +@@ -1922,14 +1922,14 @@ static int __devexit xennet_remove(struct xenbus_device *dev)
927 +
928 + dev_dbg(&dev->dev, "%s\n", dev->nodename);
929 +
930 +- unregister_netdev(info->netdev);
931 +-
932 + xennet_disconnect_backend(info);
933 +
934 +- del_timer_sync(&info->rx_refill_timer);
935 +-
936 + xennet_sysfs_delif(info->netdev);
937 +
938 ++ unregister_netdev(info->netdev);
939 ++
940 ++ del_timer_sync(&info->rx_refill_timer);
941 ++
942 + free_percpu(info->stats);
943 +
944 + free_netdev(info->netdev);
945 +diff --git a/drivers/oprofile/oprofile_perf.c b/drivers/oprofile/oprofile_perf.c
946 +index da14432..efc4b7f 100644
947 +--- a/drivers/oprofile/oprofile_perf.c
948 ++++ b/drivers/oprofile/oprofile_perf.c
949 +@@ -25,7 +25,7 @@ static int oprofile_perf_enabled;
950 + static DEFINE_MUTEX(oprofile_perf_mutex);
951 +
952 + static struct op_counter_config *counter_config;
953 +-static struct perf_event **perf_events[nr_cpumask_bits];
954 ++static struct perf_event **perf_events[NR_CPUS];
955 + static int num_counters;
956 +
957 + /*
958 +diff --git a/drivers/staging/iio/adc/ad7606_core.c b/drivers/staging/iio/adc/ad7606_core.c
959 +index 54423ab..2ee187f 100644
960 +--- a/drivers/staging/iio/adc/ad7606_core.c
961 ++++ b/drivers/staging/iio/adc/ad7606_core.c
962 +@@ -241,6 +241,7 @@ static const struct attribute_group ad7606_attribute_group = {
963 + .indexed = 1, \
964 + .channel = num, \
965 + .address = num, \
966 ++ .info_mask = (1 << IIO_CHAN_INFO_SCALE_SHARED), \
967 + .scan_index = num, \
968 + .scan_type = IIO_ST('s', 16, 16, 0), \
969 + }
970 +diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
971 +index ec41d38..f4b738f 100644
972 +--- a/drivers/staging/rtl8712/usb_intf.c
973 ++++ b/drivers/staging/rtl8712/usb_intf.c
974 +@@ -102,6 +102,8 @@ static struct usb_device_id rtl871x_usb_id_tbl[] = {
975 + /* - */
976 + {USB_DEVICE(0x20F4, 0x646B)},
977 + {USB_DEVICE(0x083A, 0xC512)},
978 ++ {USB_DEVICE(0x25D4, 0x4CA1)},
979 ++ {USB_DEVICE(0x25D4, 0x4CAB)},
980 +
981 + /* RTL8191SU */
982 + /* Realtek */
983 +diff --git a/drivers/staging/rts_pstor/rtsx_transport.c b/drivers/staging/rts_pstor/rtsx_transport.c
984 +index 4e3d2c1..9b2e5c9 100644
985 +--- a/drivers/staging/rts_pstor/rtsx_transport.c
986 ++++ b/drivers/staging/rts_pstor/rtsx_transport.c
987 +@@ -335,6 +335,7 @@ static int rtsx_transfer_sglist_adma_partial(struct rtsx_chip *chip, u8 card,
988 + int sg_cnt, i, resid;
989 + int err = 0;
990 + long timeleft;
991 ++ struct scatterlist *sg_ptr;
992 + u32 val = TRIG_DMA;
993 +
994 + if ((sg == NULL) || (num_sg <= 0) || !offset || !index)
995 +@@ -371,7 +372,7 @@ static int rtsx_transfer_sglist_adma_partial(struct rtsx_chip *chip, u8 card,
996 + sg_cnt = dma_map_sg(&(rtsx->pci->dev), sg, num_sg, dma_dir);
997 +
998 + resid = size;
999 +-
1000 ++ sg_ptr = sg;
1001 + chip->sgi = 0;
1002 + /* Usually the next entry will be @sg@ + 1, but if this sg element
1003 + * is part of a chained scatterlist, it could jump to the start of
1004 +@@ -379,14 +380,14 @@ static int rtsx_transfer_sglist_adma_partial(struct rtsx_chip *chip, u8 card,
1005 + * the proper sg
1006 + */
1007 + for (i = 0; i < *index; i++)
1008 +- sg = sg_next(sg);
1009 ++ sg_ptr = sg_next(sg_ptr);
1010 + for (i = *index; i < sg_cnt; i++) {
1011 + dma_addr_t addr;
1012 + unsigned int len;
1013 + u8 option;
1014 +
1015 +- addr = sg_dma_address(sg);
1016 +- len = sg_dma_len(sg);
1017 ++ addr = sg_dma_address(sg_ptr);
1018 ++ len = sg_dma_len(sg_ptr);
1019 +
1020 + RTSX_DEBUGP("DMA addr: 0x%x, Len: 0x%x\n",
1021 + (unsigned int)addr, len);
1022 +@@ -415,7 +416,7 @@ static int rtsx_transfer_sglist_adma_partial(struct rtsx_chip *chip, u8 card,
1023 + if (!resid)
1024 + break;
1025 +
1026 +- sg = sg_next(sg);
1027 ++ sg_ptr = sg_next(sg_ptr);
1028 + }
1029 +
1030 + RTSX_DEBUGP("SG table count = %d\n", chip->sgi);
1031 +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
1032 +index aa0c43f..35e6b5f 100644
1033 +--- a/drivers/usb/serial/cp210x.c
1034 ++++ b/drivers/usb/serial/cp210x.c
1035 +@@ -93,6 +93,7 @@ static const struct usb_device_id id_table[] = {
1036 + { USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */
1037 + { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */
1038 + { USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */
1039 ++ { USB_DEVICE(0x10C4, 0x815F) }, /* Timewave HamLinkUSB */
1040 + { USB_DEVICE(0x10C4, 0x818B) }, /* AVIT Research USB to TTL */
1041 + { USB_DEVICE(0x10C4, 0x819F) }, /* MJS USB Toslink Switcher */
1042 + { USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */
1043 +@@ -134,7 +135,13 @@ static const struct usb_device_id id_table[] = {
1044 + { USB_DEVICE(0x10CE, 0xEA6A) }, /* Silicon Labs MobiData GPRS USB Modem 100EU */
1045 + { USB_DEVICE(0x13AD, 0x9999) }, /* Baltech card reader */
1046 + { USB_DEVICE(0x1555, 0x0004) }, /* Owen AC4 USB-RS485 Converter */
1047 ++ { USB_DEVICE(0x166A, 0x0201) }, /* Clipsal 5500PACA C-Bus Pascal Automation Controller */
1048 ++ { USB_DEVICE(0x166A, 0x0301) }, /* Clipsal 5800PC C-Bus Wireless PC Interface */
1049 + { USB_DEVICE(0x166A, 0x0303) }, /* Clipsal 5500PCU C-Bus USB interface */
1050 ++ { USB_DEVICE(0x166A, 0x0304) }, /* Clipsal 5000CT2 C-Bus Black and White Touchscreen */
1051 ++ { USB_DEVICE(0x166A, 0x0305) }, /* Clipsal C-5000CT2 C-Bus Spectrum Colour Touchscreen */
1052 ++ { USB_DEVICE(0x166A, 0x0401) }, /* Clipsal L51xx C-Bus Architectural Dimmer */
1053 ++ { USB_DEVICE(0x166A, 0x0101) }, /* Clipsal 5560884 C-Bus Multi-room Audio Matrix Switcher */
1054 + { USB_DEVICE(0x16D6, 0x0001) }, /* Jablotron serial interface */
1055 + { USB_DEVICE(0x16DC, 0x0010) }, /* W-IE-NE-R Plein & Baus GmbH PL512 Power Supply */
1056 + { USB_DEVICE(0x16DC, 0x0011) }, /* W-IE-NE-R Plein & Baus GmbH RCM Remote Control for MARATON Power Supply */
1057 +@@ -146,7 +153,11 @@ static const struct usb_device_id id_table[] = {
1058 + { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
1059 + { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
1060 + { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
1061 ++ { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */
1062 ++ { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
1063 + { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
1064 ++ { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
1065 ++ { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
1066 + { USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */
1067 + { } /* Terminating Entry */
1068 + };
1069 +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
1070 +index 61d6c31..21a4734 100644
1071 +--- a/drivers/usb/serial/option.c
1072 ++++ b/drivers/usb/serial/option.c
1073 +@@ -235,6 +235,7 @@ static void option_instat_callback(struct urb *urb);
1074 + #define NOVATELWIRELESS_PRODUCT_G1 0xA001
1075 + #define NOVATELWIRELESS_PRODUCT_G1_M 0xA002
1076 + #define NOVATELWIRELESS_PRODUCT_G2 0xA010
1077 ++#define NOVATELWIRELESS_PRODUCT_MC551 0xB001
1078 +
1079 + /* AMOI PRODUCTS */
1080 + #define AMOI_VENDOR_ID 0x1614
1081 +@@ -496,6 +497,10 @@ static void option_instat_callback(struct urb *urb);
1082 + /* MediaTek products */
1083 + #define MEDIATEK_VENDOR_ID 0x0e8d
1084 +
1085 ++/* Cellient products */
1086 ++#define CELLIENT_VENDOR_ID 0x2692
1087 ++#define CELLIENT_PRODUCT_MEN200 0x9005
1088 ++
1089 + /* some devices interfaces need special handling due to a number of reasons */
1090 + enum option_blacklist_reason {
1091 + OPTION_BLACKLIST_NONE = 0,
1092 +@@ -730,6 +735,8 @@ static const struct usb_device_id option_ids[] = {
1093 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1) },
1094 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1_M) },
1095 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G2) },
1096 ++ /* Novatel Ovation MC551 a.k.a. Verizon USB551L */
1097 ++ { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) },
1098 +
1099 + { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) },
1100 + { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) },
1101 +@@ -1227,6 +1234,7 @@ static const struct usb_device_id option_ids[] = {
1102 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a1, 0xff, 0x02, 0x01) },
1103 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x00, 0x00) },
1104 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x02, 0x01) }, /* MediaTek MT6276M modem & app port */
1105 ++ { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
1106 + { } /* Terminating entry */
1107 + };
1108 + MODULE_DEVICE_TABLE(usb, option_ids);
1109 +diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
1110 +index 08a07a2..57ceaf3 100644
1111 +--- a/fs/nilfs2/gcinode.c
1112 ++++ b/fs/nilfs2/gcinode.c
1113 +@@ -191,6 +191,8 @@ void nilfs_remove_all_gcinodes(struct the_nilfs *nilfs)
1114 + while (!list_empty(head)) {
1115 + ii = list_first_entry(head, struct nilfs_inode_info, i_dirty);
1116 + list_del_init(&ii->i_dirty);
1117 ++ truncate_inode_pages(&ii->vfs_inode.i_data, 0);
1118 ++ nilfs_btnode_cache_clear(&ii->i_btnode_cache);
1119 + iput(&ii->vfs_inode);
1120 + }
1121 + }
1122 +diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
1123 +index bb24ab6..6f24e67 100644
1124 +--- a/fs/nilfs2/segment.c
1125 ++++ b/fs/nilfs2/segment.c
1126 +@@ -2309,6 +2309,8 @@ nilfs_remove_written_gcinodes(struct the_nilfs *nilfs, struct list_head *head)
1127 + if (!test_bit(NILFS_I_UPDATED, &ii->i_state))
1128 + continue;
1129 + list_del_init(&ii->i_dirty);
1130 ++ truncate_inode_pages(&ii->vfs_inode.i_data, 0);
1131 ++ nilfs_btnode_cache_clear(&ii->i_btnode_cache);
1132 + iput(&ii->vfs_inode);
1133 + }
1134 + }
1135 +diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
1136 +index a03c098..bc00876 100644
1137 +--- a/include/asm-generic/pgtable.h
1138 ++++ b/include/asm-generic/pgtable.h
1139 +@@ -445,6 +445,18 @@ static inline int pmd_write(pmd_t pmd)
1140 + #endif /* __HAVE_ARCH_PMD_WRITE */
1141 + #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
1142 +
1143 ++#ifndef pmd_read_atomic
1144 ++static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
1145 ++{
1146 ++ /*
1147 ++ * Depend on compiler for an atomic pmd read. NOTE: this is
1148 ++ * only going to work, if the pmdval_t isn't larger than
1149 ++ * an unsigned long.
1150 ++ */
1151 ++ return *pmdp;
1152 ++}
1153 ++#endif
1154 ++
1155 + /*
1156 + * This function is meant to be used by sites walking pagetables with
1157 + * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
1158 +@@ -458,14 +470,30 @@ static inline int pmd_write(pmd_t pmd)
1159 + * undefined so behaving like if the pmd was none is safe (because it
1160 + * can return none anyway). The compiler level barrier() is critically
1161 + * important to compute the two checks atomically on the same pmdval.
1162 ++ *
1163 ++ * For 32bit kernels with a 64bit large pmd_t this automatically takes
1164 ++ * care of reading the pmd atomically to avoid SMP race conditions
1165 ++ * against pmd_populate() when the mmap_sem is hold for reading by the
1166 ++ * caller (a special atomic read not done by "gcc" as in the generic
1167 ++ * version above, is also needed when THP is disabled because the page
1168 ++ * fault can populate the pmd from under us).
1169 + */
1170 + static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
1171 + {
1172 +- /* depend on compiler for an atomic pmd read */
1173 +- pmd_t pmdval = *pmd;
1174 ++ pmd_t pmdval = pmd_read_atomic(pmd);
1175 + /*
1176 + * The barrier will stabilize the pmdval in a register or on
1177 + * the stack so that it will stop changing under the code.
1178 ++ *
1179 ++ * When CONFIG_TRANSPARENT_HUGEPAGE=y on x86 32bit PAE,
1180 ++ * pmd_read_atomic is allowed to return a not atomic pmdval
1181 ++ * (for example pointing to an hugepage that has never been
1182 ++ * mapped in the pmd). The below checks will only care about
1183 ++ * the low part of the pmd with 32bit PAE x86 anyway, with the
1184 ++ * exception of pmd_none(). So the important thing is that if
1185 ++ * the low part of the pmd is found null, the high part will
1186 ++ * be also null or the pmd_none() check below would be
1187 ++ * confused.
1188 + */
1189 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE
1190 + barrier();
1191 +diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c
1192 +index f961cc5..da587ad 100644
1193 +--- a/net/batman-adv/routing.c
1194 ++++ b/net/batman-adv/routing.c
1195 +@@ -619,6 +619,8 @@ int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if)
1196 + /* packet needs to be linearized to access the TT changes */
1197 + if (skb_linearize(skb) < 0)
1198 + goto out;
1199 ++ /* skb_linearize() possibly changed skb->data */
1200 ++ tt_query = (struct tt_query_packet *)skb->data;
1201 +
1202 + if (is_my_mac(tt_query->dst))
1203 + handle_tt_response(bat_priv, tt_query);
1204 +diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
1205 +index 5f09a57..088af45 100644
1206 +--- a/net/batman-adv/translation-table.c
1207 ++++ b/net/batman-adv/translation-table.c
1208 +@@ -1816,10 +1816,10 @@ bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst)
1209 + {
1210 + struct tt_local_entry *tt_local_entry = NULL;
1211 + struct tt_global_entry *tt_global_entry = NULL;
1212 +- bool ret = true;
1213 ++ bool ret = false;
1214 +
1215 + if (!atomic_read(&bat_priv->ap_isolation))
1216 +- return false;
1217 ++ goto out;
1218 +
1219 + tt_local_entry = tt_local_hash_find(bat_priv, dst);
1220 + if (!tt_local_entry)
1221 +@@ -1829,10 +1829,10 @@ bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst)
1222 + if (!tt_global_entry)
1223 + goto out;
1224 +
1225 +- if (_is_ap_isolated(tt_local_entry, tt_global_entry))
1226 ++ if (!_is_ap_isolated(tt_local_entry, tt_global_entry))
1227 + goto out;
1228 +
1229 +- ret = false;
1230 ++ ret = true;
1231 +
1232 + out:
1233 + if (tt_global_entry)
1234 +diff --git a/net/wireless/reg.c b/net/wireless/reg.c
1235 +index c1c99dd..d57d05b 100644
1236 +--- a/net/wireless/reg.c
1237 ++++ b/net/wireless/reg.c
1238 +@@ -1369,7 +1369,7 @@ static void reg_set_request_processed(void)
1239 + spin_unlock(&reg_requests_lock);
1240 +
1241 + if (last_request->initiator == NL80211_REGDOM_SET_BY_USER)
1242 +- cancel_delayed_work_sync(&reg_timeout);
1243 ++ cancel_delayed_work(&reg_timeout);
1244 +
1245 + if (need_more_processing)
1246 + schedule_work(&reg_work);
1247 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
1248 +index 0005bde..5f096a5 100644
1249 +--- a/sound/pci/hda/patch_realtek.c
1250 ++++ b/sound/pci/hda/patch_realtek.c
1251 +@@ -5988,6 +5988,7 @@ static const struct hda_codec_preset snd_hda_preset_realtek[] = {
1252 + { .id = 0x10ec0272, .name = "ALC272", .patch = patch_alc662 },
1253 + { .id = 0x10ec0275, .name = "ALC275", .patch = patch_alc269 },
1254 + { .id = 0x10ec0276, .name = "ALC276", .patch = patch_alc269 },
1255 ++ { .id = 0x10ec0280, .name = "ALC280", .patch = patch_alc269 },
1256 + { .id = 0x10ec0861, .rev = 0x100340, .name = "ALC660",
1257 + .patch = patch_alc861 },
1258 + { .id = 0x10ec0660, .name = "ALC660-VD", .patch = patch_alc861vd },
1259 +diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
1260 +index 11224ed..323d4d9 100644
1261 +--- a/tools/hv/hv_kvp_daemon.c
1262 ++++ b/tools/hv/hv_kvp_daemon.c
1263 +@@ -384,14 +384,18 @@ int main(void)
1264 + pfd.fd = fd;
1265 +
1266 + while (1) {
1267 ++ struct sockaddr *addr_p = (struct sockaddr *) &addr;
1268 ++ socklen_t addr_l = sizeof(addr);
1269 + pfd.events = POLLIN;
1270 + pfd.revents = 0;
1271 + poll(&pfd, 1, -1);
1272 +
1273 +- len = recv(fd, kvp_recv_buffer, sizeof(kvp_recv_buffer), 0);
1274 ++ len = recvfrom(fd, kvp_recv_buffer, sizeof(kvp_recv_buffer), 0,
1275 ++ addr_p, &addr_l);
1276 +
1277 +- if (len < 0) {
1278 +- syslog(LOG_ERR, "recv failed; error:%d", len);
1279 ++ if (len < 0 || addr.nl_pid) {
1280 ++ syslog(LOG_ERR, "recvfrom failed; pid:%u error:%d %s",
1281 ++ addr.nl_pid, errno, strerror(errno));
1282 + close(fd);
1283 + return -1;
1284 + }
1285
1286 diff --git a/3.2.22/4450_grsec-kconfig-default-gids.patch b/3.2.22/4450_grsec-kconfig-default-gids.patch
1287 index 123f877..545e82e 100644
1288 --- a/3.2.22/4450_grsec-kconfig-default-gids.patch
1289 +++ b/3.2.22/4450_grsec-kconfig-default-gids.patch
1290 @@ -21,7 +21,7 @@ diff -Naur a/grsecurity/Kconfig b/grsecurity/Kconfig
1291
1292 config GRKERNSEC_PROC_ADD
1293 bool "Additional restrictions"
1294 -@@ -671,7 +671,7 @@
1295 +@@ -520,7 +520,7 @@
1296 config GRKERNSEC_AUDIT_GID
1297 int "GID for auditing"
1298 depends on GRKERNSEC_AUDIT_GROUP
1299 @@ -30,7 +30,7 @@ diff -Naur a/grsecurity/Kconfig b/grsecurity/Kconfig
1300
1301 config GRKERNSEC_EXECLOG
1302 bool "Exec logging"
1303 -@@ -875,7 +875,7 @@
1304 +@@ -735,7 +735,7 @@
1305 config GRKERNSEC_TPE_GID
1306 int "GID for untrusted users"
1307 depends on GRKERNSEC_TPE && !GRKERNSEC_TPE_INVERT
1308 @@ -39,7 +39,7 @@ diff -Naur a/grsecurity/Kconfig b/grsecurity/Kconfig
1309 help
1310 Setting this GID determines what group TPE restrictions will be
1311 *enabled* for. If the sysctl option is enabled, a sysctl option
1312 -@@ -884,7 +884,7 @@
1313 +@@ -744,7 +744,7 @@
1314 config GRKERNSEC_TPE_GID
1315 int "GID for trusted users"
1316 depends on GRKERNSEC_TPE && GRKERNSEC_TPE_INVERT
1317 @@ -48,7 +48,7 @@ diff -Naur a/grsecurity/Kconfig b/grsecurity/Kconfig
1318 help
1319 Setting this GID determines what group TPE restrictions will be
1320 *disabled* for. If the sysctl option is enabled, a sysctl option
1321 -@@ -957,7 +957,7 @@
1322 +@@ -819,7 +819,7 @@
1323 config GRKERNSEC_SOCKET_ALL_GID
1324 int "GID to deny all sockets for"
1325 depends on GRKERNSEC_SOCKET_ALL
1326 @@ -57,7 +57,7 @@ diff -Naur a/grsecurity/Kconfig b/grsecurity/Kconfig
1327 help
1328 Here you can choose the GID to disable socket access for. Remember to
1329 add the users you want socket access disabled for to the GID
1330 -@@ -978,7 +978,7 @@
1331 +@@ -840,7 +840,7 @@
1332 config GRKERNSEC_SOCKET_CLIENT_GID
1333 int "GID to deny client sockets for"
1334 depends on GRKERNSEC_SOCKET_CLIENT
1335 @@ -66,7 +66,7 @@ diff -Naur a/grsecurity/Kconfig b/grsecurity/Kconfig
1336 help
1337 Here you can choose the GID to disable client socket access for.
1338 Remember to add the users you want client socket access disabled for to
1339 -@@ -996,7 +996,7 @@
1340 +@@ -858,7 +858,7 @@
1341 config GRKERNSEC_SOCKET_SERVER_GID
1342 int "GID to deny server sockets for"
1343 depends on GRKERNSEC_SOCKET_SERVER
1344
1345 diff --git a/3.2.22/4465_selinux-avc_audit-log-curr_ip.patch b/3.2.22/4465_selinux-avc_audit-log-curr_ip.patch
1346 index 5a9d80c..48acad7 100644
1347 --- a/3.2.22/4465_selinux-avc_audit-log-curr_ip.patch
1348 +++ b/3.2.22/4465_selinux-avc_audit-log-curr_ip.patch
1349 @@ -28,7 +28,7 @@ Signed-off-by: Lorenzo Hernandez Garcia-Hierro <lorenzo@×××.org>
1350 diff -Naur a/grsecurity/Kconfig b/grsecurity/Kconfig
1351 --- a/grsecurity/Kconfig 2011-04-17 19:25:54.000000000 -0400
1352 +++ b/grsecurity/Kconfig 2011-04-17 19:32:53.000000000 -0400
1353 -@@ -1309,6 +1309,27 @@
1354 +@@ -917,6 +917,27 @@
1355 menu "Logging Options"
1356 depends on GRKERNSEC
1357
1358
1359 diff --git a/3.4.4/4450_grsec-kconfig-default-gids.patch b/3.4.4/4450_grsec-kconfig-default-gids.patch
1360 index bb24abe..6d092db 100644
1361 --- a/3.4.4/4450_grsec-kconfig-default-gids.patch
1362 +++ b/3.4.4/4450_grsec-kconfig-default-gids.patch
1363 @@ -73,7 +73,7 @@ diff -Nuar a/grsecurity/Kconfig b/Kconfig
1364 diff -Nuar a/security/Kconfig b/security/Kconfig
1365 --- a/security/Kconfig 2012-07-01 12:51:41.000000000 -0400
1366 +++ b/security/Kconfig 2012-07-01 13:00:23.000000000 -0400
1367 -@@ -167,7 +167,7 @@
1368 +@@ -186,7 +186,7 @@
1369
1370 config GRKERNSEC_PROC_GID
1371 int "GID exempted from /proc restrictions"