Gentoo Archives: gentoo-commits

From: "Anthony G. Basile" <blueness@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/hardened-patchset:master commit in: 3.10.2/, 3.2.48/
Date: Sat, 27 Jul 2013 13:34:29
Message-Id: 1374932203.096e518fe808ba5a9aefd9615f63279ac9d5bdbf.blueness@gentoo
commit: 096e518fe808ba5a9aefd9615f63279ac9d5bdbf
Author: Anthony G. Basile <blueness <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 27 13:36:43 2013 +0000
Commit: Anthony G. Basile <blueness <AT> gentoo <DOT> org>
CommitDate: Sat Jul 27 13:36:43 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/hardened-patchset.git;a=commit;h=096e518f

Grsec/PaX: 2.9.1-{3.2.48,3.10.3}-201307261327

---
3.10.2/0000_README | 2 +-
...420_grsecurity-2.9.1-3.10.3-201307261236.patch} | 572 +++++++++++++++------
3.2.48/0000_README | 2 +-
...420_grsecurity-2.9.1-3.2.48-201307261327.patch} | 444 ++++++++++++----
4 files changed, 771 insertions(+), 249 deletions(-)

diff --git a/3.10.2/0000_README b/3.10.2/0000_README
index e834ed0..a26d38c 100644
--- a/3.10.2/0000_README
+++ b/3.10.2/0000_README
@@ -2,7 +2,7 @@ README
-----------------------------------------------------------------------------
Individual Patch Descriptions:
-----------------------------------------------------------------------------
-Patch: 4420_grsecurity-2.9.1-3.10.2-201307212247.patch
+Patch: 4420_grsecurity-2.9.1-3.10.3-201307261236.patch
From: http://www.grsecurity.net
Desc: hardened-sources base patch from upstream grsecurity


diff --git a/3.10.2/4420_grsecurity-2.9.1-3.10.2-201307212247.patch b/3.10.2/4420_grsecurity-2.9.1-3.10.3-201307261236.patch
similarity index 99%
rename from 3.10.2/4420_grsecurity-2.9.1-3.10.2-201307212247.patch
rename to 3.10.2/4420_grsecurity-2.9.1-3.10.3-201307261236.patch
index 0a1f292..194d82d 100644
--- a/3.10.2/4420_grsecurity-2.9.1-3.10.2-201307212247.patch
+++ b/3.10.2/4420_grsecurity-2.9.1-3.10.3-201307261236.patch
@@ -229,7 +229,7 @@ index b89a739..79768fb 100644
+zconf.lex.c
zoffset.h
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
-index 2fe6e76..3dd8184 100644
+index 2fe6e76..df58221 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -976,6 +976,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
@@ -243,7 +243,7 @@ index 2fe6e76..3dd8184 100644
hashdist= [KNL,NUMA] Large hashes allocated during boot
are distributed across NUMA nodes. Defaults on
for 64-bit NUMA, off otherwise.
-@@ -2195,6 +2199,18 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+@@ -2195,6 +2199,22 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
the specified number of seconds. This is to be used if
your oopses keep scrolling off the screen.

@@ -252,6 +252,10 @@ index 2fe6e76..3dd8184 100644
+ expand down segment used by UDEREF on X86-32 or the frequent
+ page table updates on X86-64.
+
++ pax_sanitize_slab=
++ 0/1 to disable/enable slab object sanitization (enabled by
++ default).
++
+ pax_softmode= 0/1 to disable/enable PaX softmode on boot already.
+
+ pax_extra_latent_entropy
@@ -263,7 +267,7 @@ index 2fe6e76..3dd8184 100644

pcd. [PARIDE]
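
The pax_sanitize_slab= entry documented above is a plain 0/1 boot-time toggle. As a hedged sketch of how such a parameter is typically wired up — the names mirror the handler this patch adds later in mm/slab_common.c, and __read_mostly stands in for the PaX-specific __read_only attribute used there:

#include <linux/init.h>
#include <linux/kernel.h>

static bool pax_sanitize_slab __read_mostly = true;	/* enabled by default */

/* Parse "pax_sanitize_slab=0" / "pax_sanitize_slab=1" from the command line. */
static int __init pax_sanitize_slab_setup(char *str)
{
	pax_sanitize_slab = !!simple_strtol(str, NULL, 0);
	printk("%sabled PaX slab sanitization\n",
	       pax_sanitize_slab ? "En" : "Dis");
	return 1;
}
__setup("pax_sanitize_slab=", pax_sanitize_slab_setup);
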
diff --git a/Makefile b/Makefile
-index 4336730..cb79194 100644
+index b548552..6e18246 100644
--- a/Makefile
+++ b/Makefile
@@ -241,8 +241,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
@@ -6428,7 +6432,7 @@ index 4aad413..85d86bf 100644
#define _PAGE_NO_CACHE 0x020 /* I: cache inhibit */
#define _PAGE_WRITETHRU 0x040 /* W: cache write-through */
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
-index 4a9e408..724aa59 100644
+index 362142b..8b22c1b 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -234,6 +234,7 @@
@@ -6681,10 +6685,10 @@ index 645170a..6cf0271 100644
ld r4,_DAR(r1)
bl .bad_page_fault
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
-index 40e4a17..5a84b37 100644
+index 4e00d22..b26abcc 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
-@@ -1362,10 +1362,10 @@ handle_page_fault:
+@@ -1356,10 +1356,10 @@ handle_page_fault:
11: ld r4,_DAR(r1)
ld r5,_DSISR(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
@@ -6826,10 +6830,10 @@ index 076d124..6cb2cbf 100644
- return ret;
-}
diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
-index 98c2fc1..b73a4ca 100644
+index 64f7bd5..8dd550f 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
-@@ -1781,6 +1781,10 @@ long arch_ptrace(struct task_struct *child, long request,
+@@ -1783,6 +1783,10 @@ long arch_ptrace(struct task_struct *child, long request,
return ret;
}

@@ -6840,7 +6844,7 @@ index 98c2fc1..b73a4ca 100644
/*
* We must return the syscall number to actually look up in the table.
* This can be -1L to skip running any syscall at all.
-@@ -1793,6 +1797,11 @@ long do_syscall_trace_enter(struct pt_regs *regs)
+@@ -1795,6 +1799,11 @@ long do_syscall_trace_enter(struct pt_regs *regs)

secure_computing_strict(regs->gpr[0]);

@@ -6852,7 +6856,7 @@ index 98c2fc1..b73a4ca 100644
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs))
/*
-@@ -1827,6 +1836,11 @@ void do_syscall_trace_leave(struct pt_regs *regs)
+@@ -1829,6 +1838,11 @@ void do_syscall_trace_leave(struct pt_regs *regs)
{
int step;

@@ -6865,10 +6869,10 @@ index 98c2fc1..b73a4ca 100644

if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
-index 201385c..0f01828 100644
+index 0f83122..c0aca6a 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
-@@ -976,7 +976,7 @@ int handle_rt_signal32(unsigned long sig, struct k_sigaction *ka,
+@@ -987,7 +987,7 @@ int handle_rt_signal32(unsigned long sig, struct k_sigaction *ka,
/* Save user registers on the stack */
frame = &rt_sf->uc.uc_mcontext;
addr = frame;
@@ -6878,10 +6882,10 @@ index 201385c..0f01828 100644
tramp = current->mm->context.vdso_base + vdso32_rt_sigtramp;
} else {
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
-index 3459473..2d40783 100644
+index 887e99d..310bc11 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
-@@ -749,7 +749,7 @@ int handle_rt_signal64(int signr, struct k_sigaction *ka, siginfo_t *info,
+@@ -751,7 +751,7 @@ int handle_rt_signal64(int signr, struct k_sigaction *ka, siginfo_t *info,
#endif

/* Set up to return from userspace. */
@@ -6904,7 +6908,7 @@ index e68a845..8b140e6 100644
};

diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
-index c0e5caf..68e8305 100644
+index e4f205a..8bfffb8 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -143,6 +143,8 @@ static unsigned __kprobes long oops_begin(struct pt_regs *regs)
@@ -7143,7 +7147,7 @@ index e779642..e5bb889 100644
};

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
-index 88c0425..717feb8 100644
+index 2859a1f..74f9a6e 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -919,7 +919,7 @@ static void __init *careful_zallocation(int nid, unsigned long size,
@@ -34534,10 +34538,10 @@ index edc089e..bc7c0bc 100644
pr_debug("CPU%u - ACPI performance management activated.\n", cpu);
for (i = 0; i < perf->state_count; i++)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
-index 2d53f47..eb3803e 100644
+index 178fe7a..5ee8501 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
-@@ -1851,7 +1851,7 @@ static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb,
+@@ -1853,7 +1853,7 @@ static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb,
return NOTIFY_OK;
}

@@ -34546,7 +34550,7 @@ index 2d53f47..eb3803e 100644
.notifier_call = cpufreq_cpu_callback,
};

-@@ -1883,8 +1883,11 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+@@ -1885,8 +1885,11 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)

pr_debug("trying to register driver %s\n", driver_data->name);

@@ -34561,10 +34565,10 @@ index 2d53f47..eb3803e 100644
write_lock_irqsave(&cpufreq_driver_lock, flags);
if (cpufreq_driver) {
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
-index dc9b72e..11c0302 100644
+index 5af40ad..ddf907b 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
-@@ -238,7 +238,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
+@@ -235,7 +235,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
struct dbs_data *dbs_data;
struct od_cpu_dbs_info_s *od_dbs_info = NULL;
struct cs_cpu_dbs_info_s *cs_dbs_info = NULL;
@@ -34573,7 +34577,7 @@ index dc9b72e..11c0302 100644
struct od_dbs_tuners *od_tuners = NULL;
struct cs_dbs_tuners *cs_tuners = NULL;
struct cpu_dbs_common_info *cpu_cdbs;
-@@ -301,7 +301,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
+@@ -298,7 +298,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,

if ((cdata->governor == GOV_CONSERVATIVE) &&
(!policy->governor->initialized)) {
@@ -34582,7 +34586,7 @@ index dc9b72e..11c0302 100644

cpufreq_register_notifier(cs_ops->notifier_block,
CPUFREQ_TRANSITION_NOTIFIER);
-@@ -318,7 +318,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
+@@ -315,7 +315,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,

if ((dbs_data->cdata->governor == GOV_CONSERVATIVE) &&
(policy->governor->initialized == 1)) {
@@ -34630,10 +34634,10 @@ index 93eb5cb..f8ab572 100644
}
EXPORT_SYMBOL_GPL(od_unregister_powersave_bias_handler);
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
-index 591b6fb..2a01183 100644
+index bfd6273..e39dd63 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
-@@ -367,7 +367,7 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
+@@ -365,7 +365,7 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
}

/* priority=1 so this will get called before cpufreq_remove_dev */
@@ -35680,10 +35684,10 @@ index 3c59584..500f2e9 100644

return ret;
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
-index 0aa2ef0..77c03d0 100644
+index e5e32869..1678f36 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
-@@ -679,7 +679,7 @@ static irqreturn_t valleyview_irq_handler(int irq, void *arg)
+@@ -670,7 +670,7 @@ static irqreturn_t valleyview_irq_handler(int irq, void *arg)
int pipe;
u32 pipe_stats[I915_MAX_PIPES];

@@ -35692,7 +35696,7 @@ index 0aa2ef0..77c03d0 100644

while (true) {
iir = I915_READ(VLV_IIR);
-@@ -844,7 +844,7 @@ static irqreturn_t ivybridge_irq_handler(int irq, void *arg)
+@@ -835,7 +835,7 @@ static irqreturn_t ivybridge_irq_handler(int irq, void *arg)
irqreturn_t ret = IRQ_NONE;
int i;

@@ -35701,7 +35705,7 @@ index 0aa2ef0..77c03d0 100644

/* disable master interrupt before clearing iir */
de_ier = I915_READ(DEIER);
-@@ -934,7 +934,7 @@ static irqreturn_t ironlake_irq_handler(int irq, void *arg)
+@@ -925,7 +925,7 @@ static irqreturn_t ironlake_irq_handler(int irq, void *arg)
int ret = IRQ_NONE;
u32 de_iir, gt_iir, de_ier, pm_iir, sde_ier;

@@ -35710,7 +35714,7 @@ index 0aa2ef0..77c03d0 100644

/* disable master interrupt before clearing iir */
de_ier = I915_READ(DEIER);
-@@ -2098,7 +2098,7 @@ static void ironlake_irq_preinstall(struct drm_device *dev)
+@@ -2089,7 +2089,7 @@ static void ironlake_irq_preinstall(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;

@@ -35719,7 +35723,7 @@ index 0aa2ef0..77c03d0 100644

I915_WRITE(HWSTAM, 0xeffe);

-@@ -2133,7 +2133,7 @@ static void valleyview_irq_preinstall(struct drm_device *dev)
+@@ -2124,7 +2124,7 @@ static void valleyview_irq_preinstall(struct drm_device *dev)
drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
int pipe;

@@ -35728,7 +35732,7 @@ index 0aa2ef0..77c03d0 100644

/* VLV magic */
I915_WRITE(VLV_IMR, 0);
-@@ -2420,7 +2420,7 @@ static void i8xx_irq_preinstall(struct drm_device * dev)
+@@ -2411,7 +2411,7 @@ static void i8xx_irq_preinstall(struct drm_device * dev)
drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
int pipe;

@@ -35737,7 +35741,7 @@ index 0aa2ef0..77c03d0 100644

for_each_pipe(pipe)
I915_WRITE(PIPESTAT(pipe), 0);
-@@ -2499,7 +2499,7 @@ static irqreturn_t i8xx_irq_handler(int irq, void *arg)
+@@ -2490,7 +2490,7 @@ static irqreturn_t i8xx_irq_handler(int irq, void *arg)
I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;

@@ -35746,7 +35750,7 @@ index 0aa2ef0..77c03d0 100644

iir = I915_READ16(IIR);
if (iir == 0)
-@@ -2574,7 +2574,7 @@ static void i915_irq_preinstall(struct drm_device * dev)
+@@ -2565,7 +2565,7 @@ static void i915_irq_preinstall(struct drm_device * dev)
drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
int pipe;

@@ -35755,7 +35759,7 @@ index 0aa2ef0..77c03d0 100644

if (I915_HAS_HOTPLUG(dev)) {
I915_WRITE(PORT_HOTPLUG_EN, 0);
-@@ -2673,7 +2673,7 @@ static irqreturn_t i915_irq_handler(int irq, void *arg)
+@@ -2664,7 +2664,7 @@ static irqreturn_t i915_irq_handler(int irq, void *arg)
I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;
int pipe, ret = IRQ_NONE;

@@ -35764,7 +35768,7 @@ index 0aa2ef0..77c03d0 100644

iir = I915_READ(IIR);
do {
-@@ -2800,7 +2800,7 @@ static void i965_irq_preinstall(struct drm_device * dev)
+@@ -2791,7 +2791,7 @@ static void i965_irq_preinstall(struct drm_device * dev)
drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
int pipe;

@@ -35773,7 +35777,7 @@ index 0aa2ef0..77c03d0 100644

I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
-@@ -2907,7 +2907,7 @@ static irqreturn_t i965_irq_handler(int irq, void *arg)
+@@ -2898,7 +2898,7 @@ static irqreturn_t i965_irq_handler(int irq, void *arg)
I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT;

@@ -38755,7 +38759,7 @@ index 6e17f81..140f717 100644
"md/raid1:%s: read error corrected "
"(%d sectors at %llu on %s)\n",
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
-index 6ddae25..514caa9 100644
+index d61eb7e..adfd00a 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1940,7 +1940,7 @@ static void end_sync_read(struct bio *bio, int error)
@@ -38767,7 +38771,7 @@ index 6ddae25..514caa9 100644
&conf->mirrors[d].rdev->corrected_errors);

/* for reconstruct, we always reschedule after a read.
-@@ -2286,7 +2286,7 @@ static void check_decay_read_errors(struct mddev *mddev, struct md_rdev *rdev)
+@@ -2292,7 +2292,7 @@ static void check_decay_read_errors(struct mddev *mddev, struct md_rdev *rdev)
{
struct timespec cur_time_mon;
unsigned long hours_since_last;
@@ -38776,7 +38780,7 @@ index 6ddae25..514caa9 100644

ktime_get_ts(&cur_time_mon);

-@@ -2308,9 +2308,9 @@ static void check_decay_read_errors(struct mddev *mddev, struct md_rdev *rdev)
+@@ -2314,9 +2314,9 @@ static void check_decay_read_errors(struct mddev *mddev, struct md_rdev *rdev)
* overflowing the shift of read_errors by hours_since_last.
*/
if (hours_since_last >= 8 * sizeof(read_errors))
@@ -38788,7 +38792,7 @@ index 6ddae25..514caa9 100644
}

static int r10_sync_page_io(struct md_rdev *rdev, sector_t sector,
-@@ -2364,8 +2364,8 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
+@@ -2370,8 +2370,8 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
return;

check_decay_read_errors(mddev, rdev);
@@ -38799,7 +38803,7 @@ index 6ddae25..514caa9 100644
char b[BDEVNAME_SIZE];
bdevname(rdev->bdev, b);

-@@ -2373,7 +2373,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
+@@ -2379,7 +2379,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
"md/raid10:%s: %s: Raid device exceeded "
"read_error threshold [cur %d:max %d]\n",
mdname(mddev), b,
@@ -38808,7 +38812,7 @@ index 6ddae25..514caa9 100644
printk(KERN_NOTICE
"md/raid10:%s: %s: Failing raid device\n",
mdname(mddev), b);
-@@ -2528,7 +2528,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
+@@ -2534,7 +2534,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
sect +
choose_data_offset(r10_bio, rdev)),
bdevname(rdev->bdev, b));
@@ -42839,7 +42843,7 @@ index 4d231c1..2892c37 100644
ddb_entry->default_relogin_timeout =
(def_timeout > LOGIN_TOV) && (def_timeout < LOGIN_TOV * 10) ?
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
-index 2c0d0ec..4e8681a 100644
+index 3b1ea34..1583a72 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -661,7 +661,7 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
@@ -43005,10 +43009,10 @@ index f379c7f..e8fc69c 100644

transport_setup_device(&rport->dev);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
-index 6f6a1b4..80704a9 100644
+index 1b1125e..31a2019 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
-@@ -2918,7 +2918,7 @@ static int sd_probe(struct device *dev)
+@@ -2936,7 +2936,7 @@ static int sd_probe(struct device *dev)
sdkp->disk = gd;
sdkp->index = index;
atomic_set(&sdkp->openers, 0);
@@ -49659,6 +49663,19 @@ index f0857e0..e7023c5 100644
__btrfs_std_error(root->fs_info, function, line, errno, NULL);
}
/*
+diff --git a/fs/buffer.c b/fs/buffer.c
+index d2a4d1b..df798ca 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -3367,7 +3367,7 @@ void __init buffer_init(void)
+ bh_cachep = kmem_cache_create("buffer_head",
+ sizeof(struct buffer_head), 0,
+ (SLAB_RECLAIM_ACCOUNT|SLAB_PANIC|
+- SLAB_MEM_SPREAD),
++ SLAB_MEM_SPREAD|SLAB_NO_SANITIZE),
+ NULL);
+
+ /*
diff --git a/fs/cachefiles/bind.c b/fs/cachefiles/bind.c
index 622f469..e8d2d55 100644
--- a/fs/cachefiles/bind.c
@@ -50650,15 +50667,16 @@ index dafafba..10b3b27 100644
EXPORT_SYMBOL(dump_write);

diff --git a/fs/dcache.c b/fs/dcache.c
-index f09b908..4dd10d8 100644
+index f09b908..04b9690 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
-@@ -3086,7 +3086,7 @@ void __init vfs_caches_init(unsigned long mempages)
+@@ -3086,7 +3086,8 @@ void __init vfs_caches_init(unsigned long mempages)
mempages -= reserve;

names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
- SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
-+ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_USERCOPY, NULL);
++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_USERCOPY|
++ SLAB_NO_SANITIZE, NULL);

dcache_init();
inode_init();
@@ -50714,7 +50732,7 @@ index e4141f2..d8263e8 100644
i += packet_length_size;
if (copy_to_user(&buf[i], msg_ctx->msg, msg_ctx->msg_size))
diff --git a/fs/exec.c b/fs/exec.c
-index ffd7a81..e38107f 100644
+index ffd7a81..f0afae1 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -55,8 +55,20 @@
@@ -50992,7 +51010,7 @@ index ffd7a81..e38107f 100644
+
+#ifdef CONFIG_X86
+ if (!ret) {
-+ size = mmap_min_addr + ((mm->delta_mmap ^ mm->delta_stack) & (0xFFUL << PAGE_SHIFT));
++ size = PAGE_SIZE + mmap_min_addr + ((mm->delta_mmap ^ mm->delta_stack) & (0xFFUL << PAGE_SHIFT));
+ ret = 0 != mmap_region(NULL, 0, PAGE_ALIGN(size), vm_flags, 0);
+ }
+#endif
@@ -56832,6 +56850,63 @@ index 04ce1ac..a13dd1e 100644

generic_fillattr(inode, stat);
return 0;
+diff --git a/fs/super.c b/fs/super.c
+index 7465d43..68307c0 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -336,19 +336,19 @@ EXPORT_SYMBOL(deactivate_super);
+ * and want to turn it into a full-blown active reference. grab_super()
+ * is called with sb_lock held and drops it. Returns 1 in case of
+ * success, 0 if we had failed (superblock contents was already dead or
+- * dying when grab_super() had been called).
++ * dying when grab_super() had been called). Note that this is only
++ * called for superblocks not in rundown mode (== ones still on ->fs_supers
++ * of their type), so increment of ->s_count is OK here.
+ */
+ static int grab_super(struct super_block *s) __releases(sb_lock)
+ {
+- if (atomic_inc_not_zero(&s->s_active)) {
+- spin_unlock(&sb_lock);
+- return 1;
+- }
+- /* it's going away */
+ s->s_count++;
+ spin_unlock(&sb_lock);
+- /* wait for it to die */
+ down_write(&s->s_umount);
++ if ((s->s_flags & MS_BORN) && atomic_inc_not_zero(&s->s_active)) {
++ put_super(s);
++ return 1;
++ }
+ up_write(&s->s_umount);
+ put_super(s);
+ return 0;
+@@ -463,11 +463,6 @@ retry:
+ destroy_super(s);
+ s = NULL;
+ }
+- down_write(&old->s_umount);
+- if (unlikely(!(old->s_flags & MS_BORN))) {
+- deactivate_locked_super(old);
+- goto retry;
+- }
+ return old;
+ }
+ }
+@@ -660,10 +655,10 @@ restart:
+ if (hlist_unhashed(&sb->s_instances))
+ continue;
+ if (sb->s_bdev == bdev) {
+- if (grab_super(sb)) /* drops sb_lock */
+- return sb;
+- else
++ if (!grab_super(sb))
+ goto restart;
++ up_write(&sb->s_umount);
++ return sb;
+ }
+ }
+ spin_unlock(&sb_lock);
diff --git a/fs/sysfs/bin.c b/fs/sysfs/bin.c
index 15c68f9..36a8b3e 100644
--- a/fs/sysfs/bin.c
@@ -71974,10 +72049,10 @@ index dec1748..112c1f9 100644

static inline void nf_reset_trace(struct sk_buff *skb)
diff --git a/include/linux/slab.h b/include/linux/slab.h
-index 0c62175..9ece3d8 100644
+index 0c62175..f016ac1 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
-@@ -12,13 +12,20 @@
+@@ -12,15 +12,29 @@
#include <linux/gfp.h>
#include <linux/types.h>
#include <linux/workqueue.h>
@@ -71998,8 +72073,17 @@ index 0c62175..9ece3d8 100644
+
#define SLAB_RED_ZONE 0x00000400UL /* DEBUG: Red zone objs in a cache */
#define SLAB_POISON 0x00000800UL /* DEBUG: Poison objects */
++
++#ifdef CONFIG_PAX_MEMORY_SANITIZE
++#define SLAB_NO_SANITIZE 0x00001000UL /* PaX: Do not sanitize objs on free */
++#else
++#define SLAB_NO_SANITIZE 0x00000000UL
++#endif
++
#define SLAB_HWCACHE_ALIGN 0x00002000UL /* Align objs on cache lines */
#define SLAB_CACHE_DMA 0x00004000UL /* Use GFP_DMA memory */
#define SLAB_STORE_USER 0x00010000UL /* DEBUG: Store the last owner for bug hunting */
+@@ -89,10 +103,13 @@
* ZERO_SIZE_PTR can be passed to kfree though in the same way that NULL can.
* Both make kfree a no-op.
*/
@@ -72016,7 +72100,7 @@ index 0c62175..9ece3d8 100644


struct mem_cgroup;
-@@ -132,6 +142,8 @@ void * __must_check krealloc(const void *, size_t, gfp_t);
+@@ -132,6 +149,8 @@ void * __must_check krealloc(const void *, size_t, gfp_t);
void kfree(const void *);
void kzfree(const void *);
size_t ksize(const void *);
@@ -72025,7 +72109,7 @@ index 0c62175..9ece3d8 100644

/*
* Some archs want to perform DMA into kmalloc caches and need a guaranteed
-@@ -164,7 +176,7 @@ struct kmem_cache {
+@@ -164,7 +183,7 @@ struct kmem_cache {
unsigned int align; /* Alignment as calculated */
unsigned long flags; /* Active flags on the slab */
const char *name; /* Slab name for sysfs */
@@ -72034,7 +72118,7 @@ index 0c62175..9ece3d8 100644
void (*ctor)(void *); /* Called on object slot creation */
struct list_head list; /* List of all slab caches on the system */
};
-@@ -226,6 +238,10 @@ extern struct kmem_cache *kmalloc_caches[KMALLOC_SHIFT_HIGH + 1];
+@@ -226,6 +245,10 @@ extern struct kmem_cache *kmalloc_caches[KMALLOC_SHIFT_HIGH + 1];
extern struct kmem_cache *kmalloc_dma_caches[KMALLOC_SHIFT_HIGH + 1];
#endif

@@ -72045,7 +72129,7 @@ index 0c62175..9ece3d8 100644
/*
* Figure out which kmalloc slab an allocation of a certain size
* belongs to.
-@@ -234,7 +250,7 @@ extern struct kmem_cache *kmalloc_dma_caches[KMALLOC_SHIFT_HIGH + 1];
+@@ -234,7 +257,7 @@ extern struct kmem_cache *kmalloc_dma_caches[KMALLOC_SHIFT_HIGH + 1];
* 2 = 120 .. 192 bytes
* n = 2^(n-1) .. 2^n -1
*/
@@ -72054,7 +72138,7 @@ index 0c62175..9ece3d8 100644
{
if (!size)
return 0;
-@@ -406,6 +422,7 @@ void print_slabinfo_header(struct seq_file *m);
+@@ -406,6 +429,7 @@ void print_slabinfo_header(struct seq_file *m);
* for general use, and so are not documented here. For a full list of
* potential flags, always refer to linux/gfp.h.
*/
@@ -72062,7 +72146,7 @@ index 0c62175..9ece3d8 100644
static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
{
if (size != 0 && n > SIZE_MAX / size)
-@@ -465,7 +482,7 @@ static inline void *kmem_cache_alloc_node(struct kmem_cache *cachep,
+@@ -465,7 +489,7 @@ static inline void *kmem_cache_alloc_node(struct kmem_cache *cachep,
#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB) || \
(defined(CONFIG_SLAB) && defined(CONFIG_TRACING)) || \
(defined(CONFIG_SLOB) && defined(CONFIG_TRACING))
@@ -72071,7 +72155,7 @@ index 0c62175..9ece3d8 100644
#define kmalloc_track_caller(size, flags) \
__kmalloc_track_caller(size, flags, _RET_IP_)
#else
-@@ -485,7 +502,7 @@ extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long);
+@@ -485,7 +509,7 @@ extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long);
#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB) || \
(defined(CONFIG_SLAB) && defined(CONFIG_TRACING)) || \
(defined(CONFIG_SLOB) && defined(CONFIG_TRACING))
+
__kmalloc_node_track_caller(size, flags, node, \
_RET_IP_)
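
The SLAB_NO_SANITIZE flag introduced in the slab.h hunks above is the per-cache opt-out from sanitize-on-free; the other hunks in this patch apply it to hot or non-sensitive caches (names_cache, buffer_head, vm_area_struct, anon_vma). As an illustrative sketch only — the cache name and object type below are invented — any subsystem would opt out the same way:

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>

struct example_obj {
	unsigned long cookie;	/* nothing secret stored here */
};

static struct kmem_cache *example_cachep;

static int __init example_cache_init(void)
{
	/* SLAB_NO_SANITIZE: skip the wipe-on-free for this hot cache. */
	example_cachep = kmem_cache_create("example_cache",
			sizeof(struct example_obj), 0,
			SLAB_HWCACHE_ALIGN | SLAB_NO_SANITIZE, NULL);
	return example_cachep ? 0 : -ENOMEM;
}
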
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
-index cd40158..d9dc02c 100644
+index cd40158..4e2f7af 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -50,7 +50,7 @@ struct kmem_cache {
@@ -72093,7 +72177,7 @@ index cd40158..d9dc02c 100644
int object_size;
int align;

-@@ -66,10 +66,10 @@ struct kmem_cache {
+@@ -66,10 +66,14 @@ struct kmem_cache {
unsigned long node_allocs;
unsigned long node_frees;
unsigned long node_overflow;
@@ -72105,10 +72189,10 @@ index cd40158..d9dc02c 100644
+ atomic_unchecked_t allocmiss;
+ atomic_unchecked_t freehit;
+ atomic_unchecked_t freemiss;
++#ifdef CONFIG_PAX_MEMORY_SANITIZE
++ atomic_unchecked_t sanitized;
++ atomic_unchecked_t not_sanitized;
++#endif

/*
* If debugging is enabled, then the allocator can add additional
-@@ -103,7 +103,7 @@ struct kmem_cache {
+@@ -103,7 +107,7 @@ struct kmem_cache {
};

void *kmem_cache_alloc(struct kmem_cache *, gfp_t);
@@ -72117,7 +72205,7 @@ index cd40158..d9dc02c 100644

#ifdef CONFIG_TRACING
extern void *kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t);
-@@ -136,6 +136,13 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
+@@ -136,6 +140,13 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
cachep = kmalloc_dma_caches[i];
else
#endif
@@ -72131,7 +72219,7 @@ index cd40158..d9dc02c 100644
cachep = kmalloc_caches[i];

ret = kmem_cache_alloc_trace(cachep, flags, size);
-@@ -146,7 +153,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
+@@ -146,7 +157,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
}

#ifdef CONFIG_NUMA
@@ -72140,7 +72228,7 @@ index cd40158..d9dc02c 100644
extern void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);

#ifdef CONFIG_TRACING
-@@ -185,6 +192,13 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
+@@ -185,6 +196,13 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
cachep = kmalloc_dma_caches[i];
else
#endif
@@ -75395,7 +75483,7 @@ index 00eb8f7..d7e3244 100644
#ifdef CONFIG_MODULE_UNLOAD
{
diff --git a/kernel/events/core.c b/kernel/events/core.c
-index b391907..a0e2372 100644
+index e76e495..cbfe63a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -156,8 +156,15 @@ static struct srcu_struct pmus_srcu;
@@ -75424,7 +75512,7 @@ index b391907..a0e2372 100644

static void cpu_ctx_sched_out(struct perf_cpu_context *cpuctx,
enum event_type_t event_type);
-@@ -2725,7 +2732,7 @@ static void __perf_event_read(void *info)
+@@ -2747,7 +2754,7 @@ static void __perf_event_read(void *info)

static inline u64 perf_event_count(struct perf_event *event)
{
@@ -75433,7 +75521,7 @@ index b391907..a0e2372 100644
}

static u64 perf_event_read(struct perf_event *event)
-@@ -3071,9 +3078,9 @@ u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running)
+@@ -3093,9 +3100,9 @@ u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running)
mutex_lock(&event->child_mutex);
total += perf_event_read(event);
*enabled += event->total_time_enabled +
@@ -75445,7 +75533,7 @@ index b391907..a0e2372 100644

list_for_each_entry(child, &event->child_list, child_list) {
total += perf_event_read(child);
-@@ -3459,10 +3466,10 @@ void perf_event_update_userpage(struct perf_event *event)
+@@ -3481,10 +3488,10 @@ void perf_event_update_userpage(struct perf_event *event)
userpg->offset -= local64_read(&event->hw.prev_count);

userpg->time_enabled = enabled +
@@ -75458,7 +75546,7 @@ index b391907..a0e2372 100644

arch_perf_update_userpage(userpg, now);

-@@ -4012,7 +4019,7 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+@@ -4034,7 +4041,7 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,

/* Data. */
sp = perf_user_stack_pointer(regs);
@@ -75467,7 +75555,7 @@ index b391907..a0e2372 100644
dyn_size = dump_size - rem;

perf_output_skip(handle, rem);
-@@ -4100,11 +4107,11 @@ static void perf_output_read_one(struct perf_output_handle *handle,
+@@ -4122,11 +4129,11 @@ static void perf_output_read_one(struct perf_output_handle *handle,
values[n++] = perf_event_count(event);
if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) {
values[n++] = enabled +
@@ -75481,7 +75569,7 @@ index b391907..a0e2372 100644
}
if (read_format & PERF_FORMAT_ID)
values[n++] = primary_event_id(event);
-@@ -4813,12 +4820,12 @@ static void perf_event_mmap_event(struct perf_mmap_event *mmap_event)
+@@ -4835,12 +4842,12 @@ static void perf_event_mmap_event(struct perf_mmap_event *mmap_event)
* need to add enough zero bytes after the string to handle
* the 64bit alignment we do later.
*/
@@ -75496,7 +75584,7 @@ index b391907..a0e2372 100644
if (IS_ERR(name)) {
name = strncpy(tmp, "//toolong", sizeof(tmp));
goto got_name;
-@@ -6240,7 +6247,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+@@ -6262,7 +6269,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
event->parent = parent_event;

event->ns = get_pid_ns(task_active_pid_ns(current));
@@ -75505,7 +75593,7 @@ index b391907..a0e2372 100644

event->state = PERF_EVENT_STATE_INACTIVE;

-@@ -6550,6 +6557,11 @@ SYSCALL_DEFINE5(perf_event_open,
+@@ -6572,6 +6579,11 @@ SYSCALL_DEFINE5(perf_event_open,
if (flags & ~PERF_FLAG_ALL)
return -EINVAL;

@@ -75517,7 +75605,7 @@ index b391907..a0e2372 100644
err = perf_copy_attr(attr_uptr, &attr);
if (err)
return err;
-@@ -6882,10 +6894,10 @@ static void sync_child_event(struct perf_event *child_event,
+@@ -6904,10 +6916,10 @@ static void sync_child_event(struct perf_event *child_event,
/*
* Add back the child's count to the parent's count:
*/
@@ -75630,7 +75718,7 @@ index 7bb73f9..d7978ed 100644
{
struct signal_struct *sig = current->signal;
diff --git a/kernel/fork.c b/kernel/fork.c
-index 987b28a..4e03c05 100644
+index 987b28a..e0102b2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -319,7 +319,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig)
@@ -75949,6 +76037,15 @@ index 987b28a..4e03c05 100644
if (clone_flags & CLONE_VFORK) {
p->vfork_done = &vfork;
init_completion(&vfork);
+@@ -1723,7 +1802,7 @@ void __init proc_caches_init(void)
+ mm_cachep = kmem_cache_create("mm_struct",
+ sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
+ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
+- vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC);
++ vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC | SLAB_NO_SANITIZE);
+ mmap_init();
+ nsproxy_cache_init();
+ }
@@ -1763,7 +1842,7 @@ static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp)
return 0;

@@ -77794,7 +77891,7 @@ index 98088e0..aaf95c0 100644

if (pm_wakeup_pending()) {
diff --git a/kernel/printk.c b/kernel/printk.c
-index 8212c1a..eb61021 100644
+index d37d45c..ab918b3 100644
--- a/kernel/printk.c
+++ b/kernel/printk.c
@@ -390,6 +390,11 @@ static int check_syslog_permissions(int type, bool from_file)
@@ -78971,7 +79068,7 @@ index c61a614..d7f3d7e 100644
int this_cpu = smp_processor_id();
struct rq *this_rq = cpu_rq(this_cpu);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
-index ce39224..0e09343 100644
+index ce39224d..0e09343 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1009,7 +1009,7 @@ struct sched_class {
@@ -79731,19 +79828,6 @@ index f11d83b..d016d91 100644
.clock_getres = alarm_clock_getres,
.clock_get = alarm_clock_get,
.timer_create = alarm_timer_create,
-diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
-index 20d6fba..09e103a 100644
---- a/kernel/time/tick-broadcast.c
-+++ b/kernel/time/tick-broadcast.c
-@@ -147,7 +147,7 @@ int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
- * then clear the broadcast bit.
- */
- if (!(dev->features & CLOCK_EVT_FEAT_C3STOP)) {
-- int cpu = smp_processor_id();
-+ cpu = smp_processor_id();
- cpumask_clear_cpu(cpu, tick_broadcast_mask);
- tick_broadcast_clear_oneshot(cpu);
- } else {
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index baeeb5c..c22704a 100644
--- a/kernel/time/timekeeping.c
@@ -80287,10 +80371,10 @@ index e444ff8..438b8f4 100644
*data_page = bpage;

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
-index e71a8be..948710a 100644
+index 0b936d8..306a7eb 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
-@@ -3201,7 +3201,7 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
+@@ -3302,7 +3302,7 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
return 0;
}

@@ -80300,10 +80384,10 @@ index e71a8be..948710a 100644
/* do nothing if flag is already set */
if (!!(trace_flags & mask) == !!enabled)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
-index 20572ed..fe55cf3 100644
+index 51b4448..7be601f 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
-@@ -1030,7 +1030,7 @@ extern const char *__stop___trace_bprintk_fmt[];
+@@ -1035,7 +1035,7 @@ extern const char *__stop___trace_bprintk_fmt[];
void trace_printk_init_buffers(void);
void trace_printk_start_comm(void);
int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
@@ -80313,10 +80397,10 @@ index 20572ed..fe55cf3 100644
/*
* Normal trace_printk() and friends allocates special buffers
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
-index 27963e2..5a6936f 100644
+index 6dfd48b..a6d88d0 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
-@@ -1637,10 +1637,6 @@ static LIST_HEAD(ftrace_module_file_list);
+@@ -1731,10 +1731,6 @@ static LIST_HEAD(ftrace_module_file_list);
struct ftrace_module_file_ops {
struct list_head list;
struct module *mod;
@@ -80327,7 +80411,7 @@ index 27963e2..5a6936f 100644
};

static struct ftrace_module_file_ops *
-@@ -1681,17 +1677,12 @@ trace_create_file_ops(struct module *mod)
+@@ -1775,17 +1771,12 @@ trace_create_file_ops(struct module *mod)

file_ops->mod = mod;

@@ -80351,7 +80435,7 @@ index 27963e2..5a6936f 100644

list_add(&file_ops->list, &ftrace_module_file_list);

-@@ -1782,8 +1773,8 @@ __trace_add_new_mod_event(struct ftrace_event_call *call,
+@@ -1878,8 +1869,8 @@ __trace_add_new_mod_event(struct ftrace_event_call *call,
struct ftrace_module_file_ops *file_ops)
{
return __trace_add_new_event(call, tr,
@@ -84412,7 +84496,7 @@ index fd26d04..0cea1b0 100644
if (!mm || IS_ERR(mm)) {
rc = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH;
diff --git a/mm/rmap.c b/mm/rmap.c
-index 6280da8..ecce194 100644
+index 6280da8..b5c090e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -163,6 +163,10 @@ int anon_vma_prepare(struct vm_area_struct *vma)
@@ -84501,6 +84585,19 @@ index 6280da8..ecce194 100644
{
struct anon_vma_chain *avc;
struct anon_vma *anon_vma;
+@@ -373,8 +407,10 @@ static void anon_vma_ctor(void *data)
+ void __init anon_vma_init(void)
+ {
+ anon_vma_cachep = kmem_cache_create("anon_vma", sizeof(struct anon_vma),
+- 0, SLAB_DESTROY_BY_RCU|SLAB_PANIC, anon_vma_ctor);
+- anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
++ 0, SLAB_DESTROY_BY_RCU|SLAB_PANIC|SLAB_NO_SANITIZE,
++ anon_vma_ctor);
++ anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain,
++ SLAB_PANIC|SLAB_NO_SANITIZE);
+ }
+
+ /*
diff --git a/mm/shmem.c b/mm/shmem.c
index 5e6a842..b41916e 100644
--- a/mm/shmem.c
@@ -84562,10 +84659,10 @@ index 5e6a842..b41916e 100644
return -ENOMEM;

diff --git a/mm/slab.c b/mm/slab.c
-index bd88411..8371a16 100644
+index bd88411..2d46fd6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
-@@ -366,10 +366,10 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
+@@ -366,10 +366,12 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
if ((x)->max_freeable < i) \
(x)->max_freeable = i; \
} while (0)
+#define STATS_INC_ALLOCHIT(x) atomic_inc_unchecked(&(x)->allochit)
+#define STATS_INC_ALLOCMISS(x) atomic_inc_unchecked(&(x)->allocmiss)
+#define STATS_INC_FREEHIT(x) atomic_inc_unchecked(&(x)->freehit)
+#define STATS_INC_FREEMISS(x) atomic_inc_unchecked(&(x)->freemiss)
++#define STATS_INC_SANITIZED(x) atomic_inc_unchecked(&(x)->sanitized)
++#define STATS_INC_NOT_SANITIZED(x) atomic_inc_unchecked(&(x)->not_sanitized)
#else
#define STATS_INC_ACTIVE(x) do { } while (0)
#define STATS_DEC_ACTIVE(x) do { } while (0)
+@@ -386,6 +388,8 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
+ #define STATS_INC_ALLOCMISS(x) do { } while (0)
+ #define STATS_INC_FREEHIT(x) do { } while (0)
+ #define STATS_INC_FREEMISS(x) do { } while (0)
++#define STATS_INC_SANITIZED(x) do { } while (0)
++#define STATS_INC_NOT_SANITIZED(x) do { } while (0)
+ #endif
+
+ #if DEBUG
+@@ -477,7 +481,7 @@ static inline void *index_to_obj(struct kmem_cache *cache, struct slab *slab,
* reciprocal_divide(offset, cache->reciprocal_buffer_size)
*/
static inline unsigned int obj_to_index(const struct kmem_cache *cache,
@@ -84589,7 +84697,7 @@ index bd88411..8371a16 100644
{
u32 offset = (obj - slab->s_mem);
return reciprocal_divide(offset, cache->reciprocal_buffer_size);
-@@ -1384,7 +1384,7 @@ static int __cpuinit cpuup_callback(struct notifier_block *nfb,
+@@ -1384,7 +1388,7 @@ static int __cpuinit cpuup_callback(struct notifier_block *nfb,
return notifier_from_errno(err);
}

@@ -84598,7 +84706,7 @@ index bd88411..8371a16 100644
&cpuup_callback, NULL, 0
};

-@@ -1565,12 +1565,12 @@ void __init kmem_cache_init(void)
+@@ -1565,12 +1569,12 @@ void __init kmem_cache_init(void)
*/

kmalloc_caches[INDEX_AC] = create_kmalloc_cache("kmalloc-ac",
@@ -84613,7 +84721,29 @@ index bd88411..8371a16 100644

slab_early_init = 0;

+@@ -3583,6 +3587,21 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp,
+ struct array_cache *ac = cpu_cache_get(cachep);
+
+ check_irq_off();
++
++#ifdef CONFIG_PAX_MEMORY_SANITIZE
++ if (pax_sanitize_slab) {
++ if (!(cachep->flags & (SLAB_POISON | SLAB_NO_SANITIZE))) {
++ memset(objp, PAX_MEMORY_SANITIZE_VALUE, cachep->object_size);
++
++ if (cachep->ctor)
++ cachep->ctor(objp);
++
++ STATS_INC_SANITIZED(cachep);
++ } else
++ STATS_INC_NOT_SANITIZED(cachep);
++ }
++#endif
++
+ kmemleak_free_recursive(objp, cachep->flags);
+ objp = cache_free_debugcheck(cachep, objp, caller);
+
+@@ -3800,6 +3819,7 @@ void kfree(const void *objp)

if (unlikely(ZERO_OR_NULL_PTR(objp)))
return;
+
local_irq_save(flags);
kfree_debugcheck(objp);
c = virt_to_cache(objp);
-@@ -4241,10 +4242,10 @@ void slabinfo_show_stats(struct seq_file *m, struct kmem_cache *cachep)
+@@ -4241,14 +4261,22 @@ void slabinfo_show_stats(struct seq_file *m, struct kmem_cache *cachep)
}
/* cpu stats */
{
@@ -84636,7 +84766,19 @@ index bd88411..8371a16 100644

seq_printf(m, " : cpustat %6lu %6lu %6lu %6lu",
allochit, allocmiss, freehit, freemiss);
+ }
++#ifdef CONFIG_PAX_MEMORY_SANITIZE
++ {
++ unsigned long sanitized = atomic_read_unchecked(&cachep->sanitized);
++ unsigned long not_sanitized = atomic_read_unchecked(&cachep->not_sanitized);
++
++ seq_printf(m, " : pax %6lu %6lu", sanitized, not_sanitized);
++ }
++#endif
+ #endif
+ }
+
+@@ -4476,13 +4504,71 @@ static const struct file_operations proc_slabstats_operations = {
static int __init slab_proc_init(void)
{
#ifdef CONFIG_DEBUG_SLAB_LEAK
@@ -84710,19 +84852,36 @@ index bd88411..8371a16 100644
* ksize - get the actual amount of memory allocated for a given object
* @objp: Pointer to the object
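
The __cache_free() hunk above is the core of the feature on the SLAB allocator: unless the cache is poisoned for debugging or carries SLAB_NO_SANITIZE, every object is overwritten with PAX_MEMORY_SANITIZE_VALUE before it returns to the freelist, and the constructor is re-run so the slot stays in a valid initial state. Stripped of the kernel plumbing, the idea reduces to a wipe-on-free, sketched here as standalone user-space C rather than kernel code:

#include <stdlib.h>
#include <string.h>

#define SANITIZE_VALUE 0xfe	/* the x86-64 poison byte chosen in the mm/slab.h hunk below */

/* Wipe an object as it is freed so its stale contents cannot leak. */
static void sanitized_free(void *obj, size_t size)
{
	memset(obj, SANITIZE_VALUE, size);
	free(obj);
}
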
diff --git a/mm/slab.h b/mm/slab.h
-index f96b49e..5634e90 100644
+index f96b49e..db1d204 100644
--- a/mm/slab.h
+++ b/mm/slab.h
+@@ -32,6 +32,15 @@ extern struct list_head slab_caches;
+ /* The slab cache that manages slab cache information */
+ extern struct kmem_cache *kmem_cache;
+
++#ifdef CONFIG_PAX_MEMORY_SANITIZE
++#ifdef CONFIG_X86_64
++#define PAX_MEMORY_SANITIZE_VALUE '\xfe'
++#else
++#define PAX_MEMORY_SANITIZE_VALUE '\xff'
++#endif
++extern bool pax_sanitize_slab;
++#endif
++
+ unsigned long calculate_alignment(unsigned long flags,
+ unsigned long align, unsigned long size);
+
+@@ -67,7 +76,8 @@ __kmem_cache_alias(struct mem_cgroup *memcg, const char *name, size_t size,

/* Legal flag mask for kmem_cache_create(), for various configurations */
#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
- SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS )
++ SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS | \
++ SLAB_USERCOPY | SLAB_NO_SANITIZE)

#if defined(CONFIG_DEBUG_SLAB)
#define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
-@@ -229,6 +229,9 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
+@@ -229,6 +239,9 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
return s;

page = virt_to_head_page(x);
@@ -84733,10 +84892,10 @@ index f96b49e..5634e90 100644
if (slab_equal_or_root(cachep, s))
return cachep;
diff --git a/mm/slab_common.c b/mm/slab_common.c
-index 2d41450..e22088e 100644
+index 2d41450..4efe6ee 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
-@@ -22,7 +22,7 @@
+@@ -22,11 +22,22 @@

#include "slab.h"

@@ -84745,7 +84904,7 @@ index 2d41450..e22088e 100644
LIST_HEAD(slab_caches);
DEFINE_MUTEX(slab_mutex);
struct kmem_cache *kmem_cache;

++#ifdef CONFIG_PAX_MEMORY_SANITIZE
++bool pax_sanitize_slab __read_only = true;
++static int __init pax_sanitize_slab_setup(char *str)
++{
++ pax_sanitize_slab = !!simple_strtol(str, NULL, 0);
++ printk("%sabled PaX slab sanitization\n", pax_sanitize_slab ? "En" : "Dis");
++ return 1;
++}
++__setup("pax_sanitize_slab=", pax_sanitize_slab_setup);
++#endif
++
+ #ifdef CONFIG_DEBUG_VM
+ static int kmem_cache_sanity_check(struct mem_cgroup *memcg, const char *name,
+ size_t size)
+@@ -209,7 +220,7 @@ kmem_cache_create_memcg(struct mem_cgroup *memcg, const char *name, size_t size,

err = __kmem_cache_create(s, flags);
if (!err) {
@@ -84754,7 +84928,7 @@ index 2d41450..e22088e 100644
list_add(&s->list, &slab_caches);
memcg_cache_list_add(memcg, s);
} else {
-@@ -255,8 +255,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
+@@ -255,8 +266,7 @@ void kmem_cache_destroy(struct kmem_cache *s)

get_online_cpus();
mutex_lock(&slab_mutex);
@@ -84764,7 +84938,7 @@ index 2d41450..e22088e 100644
list_del(&s->list);

if (!__kmem_cache_shutdown(s)) {
-@@ -302,7 +301,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
+@@ -302,7 +312,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
panic("Creation of kmalloc slab %s size=%zu failed. Reason %d\n",
name, size, err);

@@ -84773,7 +84947,7 @@ index 2d41450..e22088e 100644
}

struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
-@@ -315,7 +314,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
+@@ -315,7 +325,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,

create_boot_cache(s, name, size, flags);
list_add(&s->list, &slab_caches);
@@ -84782,7 +84956,7 @@ index 2d41450..e22088e 100644
return s;
}

-@@ -327,6 +326,11 @@ struct kmem_cache *kmalloc_dma_caches[KMALLOC_SHIFT_HIGH + 1];
+@@ -327,6 +337,11 @@ struct kmem_cache *kmalloc_dma_caches[KMALLOC_SHIFT_HIGH + 1];
EXPORT_SYMBOL(kmalloc_dma_caches);
#endif

@@ -84794,7 +84968,7 @@ index 2d41450..e22088e 100644
/*
* Conversion table for small slabs sizes / 8 to the index in the
* kmalloc array. This is necessary for slabs < 192 since we have non power
-@@ -391,6 +395,13 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
+@@ -391,6 +406,13 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
return kmalloc_dma_caches[index];

#endif
@@ -84808,7 +84982,7 @@ index 2d41450..e22088e 100644
return kmalloc_caches[index];
}

-@@ -447,7 +458,7 @@ void __init create_kmalloc_caches(unsigned long flags)
+@@ -447,7 +469,7 @@ void __init create_kmalloc_caches(unsigned long flags)
for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
if (!kmalloc_caches[i]) {
kmalloc_caches[i] = create_kmalloc_cache(NULL,
@@ -84817,7 +84991,7 @@ index 2d41450..e22088e 100644
}

/*
-@@ -456,10 +467,10 @@ void __init create_kmalloc_caches(unsigned long flags)
+@@ -456,10 +478,10 @@ void __init create_kmalloc_caches(unsigned long flags)
* earlier power of two caches
*/
if (KMALLOC_MIN_SIZE <= 32 && !kmalloc_caches[1] && i == 6)
@@ -84830,7 +85004,7 @@ index 2d41450..e22088e 100644
}

/* Kmalloc array is now usable */
-@@ -492,6 +503,23 @@ void __init create_kmalloc_caches(unsigned long flags)
+@@ -492,6 +514,23 @@ void __init create_kmalloc_caches(unsigned long flags)
}
}
#endif
@@ -84854,8 +85028,18 @@ index 2d41450..e22088e 100644
}
#endif /* !CONFIG_SLOB */

+@@ -516,6 +555,9 @@ void print_slabinfo_header(struct seq_file *m)
+ seq_puts(m, " : globalstat <listallocs> <maxobjs> <grown> <reaped> "
+ "<error> <maxfreeable> <nodeallocs> <remotefrees> <alienoverflow>");
+ seq_puts(m, " : cpustat <allochit> <allocmiss> <freehit> <freemiss>");
++#ifdef CONFIG_PAX_MEMORY_SANITIZE
++ seq_puts(m, " : pax <sanitized> <not_sanitized>");
++#endif
+ #endif
+ seq_putc(m, '\n');
+ }
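
With the global toggle, its __setup() handler and the new slabinfo columns above in place, the per-free decision reduces to one predicate. The following is a summary sketch, not a function the patch itself defines; the SLAB hunk above also skips SLAB_POISON caches, while the SLUB hunk below tests only SLAB_NO_SANITIZE:

extern bool pax_sanitize_slab;	/* set by the pax_sanitize_slab= handler above */

static inline bool want_sanitize(unsigned long cache_flags)
{
	/* Wipe on free only when globally enabled and the cache has not opted out. */
	return pax_sanitize_slab &&
	       !(cache_flags & (SLAB_POISON | SLAB_NO_SANITIZE));
}
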
1207 + seq_puts(m, " : cpustat <allochit> <allocmiss> <freehit> <freemiss>");
1208 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1209 ++ seq_puts(m, " : pax <sanitized> <not_sanitized>");
1210 ++#endif
1211 + #endif
1212 + seq_putc(m, '\n');
1213 + }
1214 diff --git a/mm/slob.c b/mm/slob.c
1215 -index eeed4a0..c414c12 100644
1216 +index eeed4a0..bb0e9ab 100644
1217 --- a/mm/slob.c
1218 +++ b/mm/slob.c
1219 @@ -157,7 +157,7 @@ static void set_slob(slob_t *s, slobidx_t size, slob_t *next)
1220 @@ -84936,7 +85120,7 @@ index eeed4a0..c414c12 100644
1221 INIT_LIST_HEAD(&sp->list);
1222 set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
1223 set_slob_page_free(sp, slob_list);
1224 -@@ -359,9 +363,7 @@ static void slob_free(void *block, int size)
1225 +@@ -359,12 +363,15 @@ static void slob_free(void *block, int size)
1226 if (slob_page_free(sp))
1227 clear_slob_page_free(sp);
1228 spin_unlock_irqrestore(&slob_lock, flags);
1229 @@ -84947,7 +85131,15 @@ index eeed4a0..c414c12 100644
1230 return;
1231 }
1232
1233 -@@ -424,11 +426,10 @@ out:
1234 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1235 ++ if (pax_sanitize_slab)
1236 ++ memset(block, PAX_MEMORY_SANITIZE_VALUE, size);
1237 ++#endif
1238 ++
1239 + if (!slob_page_free(sp)) {
1240 + /* This slob page is about to become partially free. Easy! */
1241 + sp->units = units;
1242 +@@ -424,11 +431,10 @@ out:
1243 */
1244
1245 static __always_inline void *
1246 @@ -84962,7 +85154,7 @@ index eeed4a0..c414c12 100644
1247
1248 gfp &= gfp_allowed_mask;
1249
1250 -@@ -442,23 +443,41 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
1251 +@@ -442,23 +448,41 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
1252
1253 if (!m)
1254 return NULL;
1255 @@ -85007,7 +85199,7 @@ index eeed4a0..c414c12 100644
1256 return ret;
1257 }
1258
1259 -@@ -493,34 +512,112 @@ void kfree(const void *block)
1260 +@@ -493,34 +517,112 @@ void kfree(const void *block)
1261 return;
1262 kmemleak_free(block);
1263
1264 @@ -85129,7 +85321,7 @@ index eeed4a0..c414c12 100644
1265 }
1266 EXPORT_SYMBOL(ksize);
1267
1268 -@@ -536,23 +633,33 @@ int __kmem_cache_create(struct kmem_cache *c, unsigned long flags)
1269 +@@ -536,23 +638,33 @@ int __kmem_cache_create(struct kmem_cache *c, unsigned long flags)
1270
1271 void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
1272 {
1273 @@ -85165,7 +85357,7 @@ index eeed4a0..c414c12 100644
1274
1275 if (c->ctor)
1276 c->ctor(b);
1277 -@@ -564,10 +671,14 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
1278 +@@ -564,10 +676,14 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
1279
1280 static void __kmem_cache_free(void *b, int size)
1281 {
1282 @@ -85182,7 +85374,7 @@ index eeed4a0..c414c12 100644
1283 }
1284
1285 static void kmem_rcu_free(struct rcu_head *head)
1286 -@@ -580,17 +691,31 @@ static void kmem_rcu_free(struct rcu_head *head)
1287 +@@ -580,17 +696,31 @@ static void kmem_rcu_free(struct rcu_head *head)
1288
1289 void kmem_cache_free(struct kmem_cache *c, void *b)
1290 {
1291 @@ -85218,7 +85410,7 @@ index eeed4a0..c414c12 100644
1292 EXPORT_SYMBOL(kmem_cache_free);
1293
1294 diff --git a/mm/slub.c b/mm/slub.c
1295 -index 57707f0..c28619b 100644
1296 +index 57707f0..7857bd3 100644
1297 --- a/mm/slub.c
1298 +++ b/mm/slub.c
1299 @@ -198,7 +198,7 @@ struct track {
1300 @@ -85239,7 +85431,22 @@ index 57707f0..c28619b 100644
1301 s, (void *)t->addr, jiffies - t->when, t->cpu, t->pid);
1302 #ifdef CONFIG_STACKTRACE
1303 {
1304 -@@ -2661,7 +2661,7 @@ static int slub_min_objects;
1305 +@@ -2594,6 +2594,14 @@ static __always_inline void slab_free(struct kmem_cache *s,
1306 +
1307 + slab_free_hook(s, x);
1308 +
1309 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1310 ++ if (pax_sanitize_slab && !(s->flags & SLAB_NO_SANITIZE)) {
1311 ++ memset(x, PAX_MEMORY_SANITIZE_VALUE, s->object_size);
1312 ++ if (s->ctor)
1313 ++ s->ctor(x);
1314 ++ }
1315 ++#endif
1316 ++
1317 + redo:
1318 + /*
1319 + * Determine the currently cpus per cpu slab.
1320 +@@ -2661,7 +2669,7 @@ static int slub_min_objects;
1321 * Merge control. If this is set then no merging of slab caches will occur.
1322 * (Could be removed. This was introduced to pacify the merge skeptics.)
1323 */
1324 @@ -85248,7 +85455,17 @@ index 57707f0..c28619b 100644
1325
1326 /*
1327 * Calculate the order of allocation given an slab object size.
1328 -@@ -3283,6 +3283,59 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
1329 +@@ -2938,6 +2946,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
1330 + s->inuse = size;
1331 +
1332 + if (((flags & (SLAB_DESTROY_BY_RCU | SLAB_POISON)) ||
1333 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1334 ++ (pax_sanitize_slab && !(flags & SLAB_NO_SANITIZE)) ||
1335 ++#endif
1336 + s->ctor)) {
1337 + /*
1338 + * Relocate free pointer after the object if it is not
1339 +@@ -3283,6 +3294,59 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
1340 EXPORT_SYMBOL(__kmalloc_node);
1341 #endif
1342
1343 @@ -85308,7 +85525,7 @@ index 57707f0..c28619b 100644
1344 size_t ksize(const void *object)
1345 {
1346 struct page *page;
1347 -@@ -3347,6 +3400,7 @@ void kfree(const void *x)
1348 +@@ -3347,6 +3411,7 @@ void kfree(const void *x)
1349 if (unlikely(ZERO_OR_NULL_PTR(x)))
1350 return;
1351
1352 @@ -85316,7 +85533,7 @@ index 57707f0..c28619b 100644
1353 page = virt_to_head_page(x);
1354 if (unlikely(!PageSlab(page))) {
1355 BUG_ON(!PageCompound(page));
1356 -@@ -3652,7 +3706,7 @@ static int slab_unmergeable(struct kmem_cache *s)
1357 +@@ -3652,7 +3717,7 @@ static int slab_unmergeable(struct kmem_cache *s)
1358 /*
1359 * We may have set a slab to be unmergeable during bootstrap.
1360 */
1361 @@ -85325,7 +85542,7 @@ index 57707f0..c28619b 100644
1362 return 1;
1363
1364 return 0;
1365 -@@ -3710,7 +3764,7 @@ __kmem_cache_alias(struct mem_cgroup *memcg, const char *name, size_t size,
1366 +@@ -3710,7 +3775,7 @@ __kmem_cache_alias(struct mem_cgroup *memcg, const char *name, size_t size,
1367
1368 s = find_mergeable(memcg, size, align, flags, name, ctor);
1369 if (s) {
1370 @@ -85334,7 +85551,7 @@ index 57707f0..c28619b 100644
1371 /*
1372 * Adjust the object sizes so that we clear
1373 * the complete object on kzalloc.
1374 -@@ -3719,7 +3773,7 @@ __kmem_cache_alias(struct mem_cgroup *memcg, const char *name, size_t size,
1375 +@@ -3719,7 +3784,7 @@ __kmem_cache_alias(struct mem_cgroup *memcg, const char *name, size_t size,
1376 s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
1377
1378 if (sysfs_slab_alias(s, name)) {
1379 @@ -85343,7 +85560,7 @@ index 57707f0..c28619b 100644
1380 s = NULL;
1381 }
1382 }
1383 -@@ -3781,7 +3835,7 @@ static int __cpuinit slab_cpuup_callback(struct notifier_block *nfb,
1384 +@@ -3781,7 +3846,7 @@ static int __cpuinit slab_cpuup_callback(struct notifier_block *nfb,
1385 return NOTIFY_OK;
1386 }
1387
1388 @@ -85352,7 +85569,7 @@ index 57707f0..c28619b 100644
1389 .notifier_call = slab_cpuup_callback
1390 };
1391
1392 -@@ -3839,7 +3893,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
1393 +@@ -3839,7 +3904,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
1394 }
1395 #endif
1396
1397 @@ -85361,7 +85578,7 @@ index 57707f0..c28619b 100644
1398 static int count_inuse(struct page *page)
1399 {
1400 return page->inuse;
1401 -@@ -4226,12 +4280,12 @@ static void resiliency_test(void)
1402 +@@ -4226,12 +4291,12 @@ static void resiliency_test(void)
1403 validate_slab_cache(kmalloc_caches[9]);
1404 }
1405 #else
1406 @@ -85376,7 +85593,7 @@ index 57707f0..c28619b 100644
1407 enum slab_stat_type {
1408 SL_ALL, /* All slabs */
1409 SL_PARTIAL, /* Only partially allocated slabs */
1410 -@@ -4475,7 +4529,7 @@ SLAB_ATTR_RO(ctor);
1411 +@@ -4475,7 +4540,7 @@ SLAB_ATTR_RO(ctor);
1412
1413 static ssize_t aliases_show(struct kmem_cache *s, char *buf)
1414 {
1415 @@ -85385,7 +85602,7 @@ index 57707f0..c28619b 100644
1416 }
1417 SLAB_ATTR_RO(aliases);
1418
1419 -@@ -4563,6 +4617,14 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
1420 +@@ -4563,6 +4628,14 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
1421 SLAB_ATTR_RO(cache_dma);
1422 #endif
1423
1424 @@ -85400,7 +85617,7 @@ index 57707f0..c28619b 100644
1425 static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
1426 {
1427 return sprintf(buf, "%d\n", !!(s->flags & SLAB_DESTROY_BY_RCU));
1428 -@@ -4897,6 +4959,9 @@ static struct attribute *slab_attrs[] = {
1429 +@@ -4897,6 +4970,9 @@ static struct attribute *slab_attrs[] = {
1430 #ifdef CONFIG_ZONE_DMA
1431 &cache_dma_attr.attr,
1432 #endif
1433 @@ -85410,7 +85627,7 @@ index 57707f0..c28619b 100644
1434 #ifdef CONFIG_NUMA
1435 &remote_node_defrag_ratio_attr.attr,
1436 #endif
1437 -@@ -5128,6 +5193,7 @@ static char *create_unique_id(struct kmem_cache *s)
1438 +@@ -5128,6 +5204,7 @@ static char *create_unique_id(struct kmem_cache *s)
1439 return name;
1440 }
1441
1442 @@ -85418,7 +85635,7 @@ index 57707f0..c28619b 100644
1443 static int sysfs_slab_add(struct kmem_cache *s)
1444 {
1445 int err;
1446 -@@ -5151,7 +5217,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
1447 +@@ -5151,7 +5228,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
1448 }
1449
1450 s->kobj.kset = slab_kset;
1451 @@ -85427,7 +85644,7 @@ index 57707f0..c28619b 100644
1452 if (err) {
1453 kobject_put(&s->kobj);
1454 return err;
1455 -@@ -5185,6 +5251,7 @@ static void sysfs_slab_remove(struct kmem_cache *s)
1456 +@@ -5185,6 +5262,7 @@ static void sysfs_slab_remove(struct kmem_cache *s)
1457 kobject_del(&s->kobj);
1458 kobject_put(&s->kobj);
1459 }
1460 @@ -85435,7 +85652,7 @@ index 57707f0..c28619b 100644
1461
1462 /*
1463 * Need to buffer aliases during bootup until sysfs becomes
1464 -@@ -5198,6 +5265,7 @@ struct saved_alias {
1465 +@@ -5198,6 +5276,7 @@ struct saved_alias {
1466
1467 static struct saved_alias *alias_list;
1468
1469 @@ -85443,7 +85660,7 @@ index 57707f0..c28619b 100644
1470 static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
1471 {
1472 struct saved_alias *al;
1473 -@@ -5220,6 +5288,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
1474 +@@ -5220,6 +5299,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
1475 alias_list = al;
1476 return 0;
1477 }
1478 @@ -86924,6 +87141,28 @@ index 03795d0..eaf7368 100644
1479 i++, cmfptr++)
1480 {
1481 struct socket *sock;
1482 +diff --git a/net/core/skbuff.c b/net/core/skbuff.c
1483 +index 1c1738c..4cab7f0 100644
1484 +--- a/net/core/skbuff.c
1485 ++++ b/net/core/skbuff.c
1486 +@@ -3087,13 +3087,15 @@ void __init skb_init(void)
1487 + skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
1488 + sizeof(struct sk_buff),
1489 + 0,
1490 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC,
1491 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|
1492 ++ SLAB_NO_SANITIZE,
1493 + NULL);
1494 + skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
1495 + (2*sizeof(struct sk_buff)) +
1496 + sizeof(atomic_t),
1497 + 0,
1498 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC,
1499 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|
1500 ++ SLAB_NO_SANITIZE,
1501 + NULL);
1502 + }
1503 +
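The skbuff hunk above shows the opt-out side of the feature: network buffers are allocated and freed at very high rates, so their caches pass SLAB_NO_SANITIZE at creation time rather than pay for a wipe on every free. As a minimal sketch of how any other hot-path cache could opt out on a tree carrying this patch (the flag compiles away to 0 when CONFIG_PAX_MEMORY_SANITIZE is off, per the slab.h hunk further down; "demo_cache" and this module are hypothetical):

    #include <linux/module.h>
    #include <linux/slab.h>

    static struct kmem_cache *demo_cachep;

    static int __init demo_init(void)
    {
            /* Freed objects keep their contents; only opt out caches
             * that never hold sensitive data. */
            demo_cachep = kmem_cache_create("demo_cache", 128, 0,
                            SLAB_HWCACHE_ALIGN | SLAB_NO_SANITIZE, NULL);
            return demo_cachep ? 0 : -ENOMEM;
    }

    static void __exit demo_exit(void)
    {
            kmem_cache_destroy(demo_cachep);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");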
1504 diff --git a/net/core/sock.c b/net/core/sock.c
1505 index d6d024c..6ea7ab4 100644
1506 --- a/net/core/sock.c
1507 @@ -89209,7 +89448,7 @@ index 9ca8e32..48e4a9b 100644
1508 /* number of interfaces with corresponding FIF_ flags */
1509 int fif_fcsfail, fif_plcpfail, fif_control, fif_other_bss, fif_pspoll,
1510 diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
1511 -index 98d20c0..586675b 100644
1512 +index 514e90f..56f22bf 100644
1513 --- a/net/mac80211/iface.c
1514 +++ b/net/mac80211/iface.c
1515 @@ -502,7 +502,7 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
1516 @@ -92152,10 +92391,10 @@ index f5eb43d..1814de8 100644
1517 shdr = (Elf_Shdr *)((char *)ehdr + _r(&ehdr->e_shoff));
1518 shstrtab_sec = shdr + r2(&ehdr->e_shstrndx);
1519 diff --git a/security/Kconfig b/security/Kconfig
1520 -index e9c6ac7..66bf8e9 100644
1521 +index e9c6ac7..0d298ea 100644
1522 --- a/security/Kconfig
1523 +++ b/security/Kconfig
1524 -@@ -4,6 +4,945 @@
1525 +@@ -4,6 +4,956 @@
1526
1527 menu "Security options"
1528
1529 @@ -92893,21 +93132,32 @@ index e9c6ac7..66bf8e9 100644
1530 + default y if (GRKERNSEC_CONFIG_AUTO && GRKERNSEC_CONFIG_PRIORITY_SECURITY)
1531 + depends on !HIBERNATION
1532 + help
1533 -+ By saying Y here the kernel will erase memory pages as soon as they
1534 -+ are freed. This in turn reduces the lifetime of data stored in the
1535 -+ pages, making it less likely that sensitive information such as
1536 -+ passwords, cryptographic secrets, etc stay in memory for too long.
1537 ++ By saying Y here the kernel will erase memory pages and slab objects
1538 ++ as soon as they are freed. This in turn reduces the lifetime of data
1539 ++ stored in them, making it less likely that sensitive information such
1540 ++ as passwords, cryptographic secrets, etc stay in memory for too long.
1541 +
1542 + This is especially useful for programs whose runtime is short; long
1543 + lived processes and the kernel itself benefit from this as long as
1544 -+ they operate on whole memory pages and ensure timely freeing of pages
1545 -+ that may hold sensitive information.
1546 ++ they ensure timely freeing of memory that may hold sensitive
1547 ++ information.
1548 ++
1549 ++ A nice side effect of the sanitization of slab objects is the
1550 ++ reduction of possible info leaks caused by padding bytes within
1551 ++ leaky structures. Use-after-free bugs for structures containing
1552 ++ pointers can also be detected, since dereferencing the sanitized
1553 ++ pointer will generate an access violation.
1554 +
1555 + The tradeoff is performance impact: on a single CPU system, kernel
1556 + compilation sees a 3% slowdown; other systems and workloads may vary,
1557 + and you are advised to test this feature on your expected workload
1558 + before deploying it.
1559 +
1560 ++ To reduce the performance penalty, slab object sanitization can be
1561 ++ disabled with the kernel command line parameter "pax_sanitize_slab=0";
1562 ++ pages are then still sanitized, albeit the effectiveness of this
1563 ++ feature is reduced at the same time.
1564 ++
1565 + Note that this feature does not protect data stored in live pages,
1566 + e.g., process memory swapped to disk may stay there for a long time.
1567 +
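The poison values the patch defines (0xfe on X86_64, 0xff elsewhere; see the include/linux/slab.h hunk in the 3.2.48 patch below) are chosen so that a pointer field read back from a sanitized object faults on dereference: 0xfefefefefefefefe is a non-canonical x86-64 address, and 0xffffffff sits at the very top of the 32-bit address space, normally kernel-reserved. A small userspace model of the effect (illustrative only; the real detection happens inside the kernel allocators):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct obj { struct obj *next; char data[56]; };

    int main(void)
    {
            struct obj *o = malloc(sizeof(*o));
            if (!o)
                    return 1;
            o->next = o;
            memset(o, 0xfe, sizeof(*o));    /* models sanitize-on-free */
            /* o->next is now 0xfefe...fe; in the kernel, following such
             * a stale pointer raises an access violation instead of
             * silently reusing freed data. */
            printf("poisoned next = %p\n", (void *)o->next);
            free(o);
            return 0;
    }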
1568 @@ -93101,7 +93351,7 @@ index e9c6ac7..66bf8e9 100644
1569 source security/keys/Kconfig
1570
1571 config SECURITY_DMESG_RESTRICT
1572 -@@ -103,7 +1042,7 @@ config INTEL_TXT
1573 +@@ -103,7 +1053,7 @@ config INTEL_TXT
1574 config LSM_MMAP_MIN_ADDR
1575 int "Low address space for LSM to protect from user allocation"
1576 depends on SECURITY && SECURITY_SELINUX
1577
1578 diff --git a/3.2.48/0000_README b/3.2.48/0000_README
1579 index 5e1d7bc..5e3379d 100644
1580 --- a/3.2.48/0000_README
1581 +++ b/3.2.48/0000_README
1582 @@ -110,7 +110,7 @@ Patch: 1047_linux-3.2.48.patch
1583 From: http://www.kernel.org
1584 Desc: Linux 3.2.48
1585
1586 -Patch: 4420_grsecurity-2.9.1-3.2.48-201307212241.patch
1587 +Patch: 4420_grsecurity-2.9.1-3.2.48-201307261327.patch
1588 From: http://www.grsecurity.net
1589 Desc: hardened-sources base patch from upstream grsecurity
1590
1591
1592 diff --git a/3.2.48/4420_grsecurity-2.9.1-3.2.48-201307212241.patch b/3.2.48/4420_grsecurity-2.9.1-3.2.48-201307261327.patch
1593 similarity index 99%
1594 rename from 3.2.48/4420_grsecurity-2.9.1-3.2.48-201307212241.patch
1595 rename to 3.2.48/4420_grsecurity-2.9.1-3.2.48-201307261327.patch
1596 index d9a4f00..df50f4e 100644
1597 --- a/3.2.48/4420_grsecurity-2.9.1-3.2.48-201307212241.patch
1598 +++ b/3.2.48/4420_grsecurity-2.9.1-3.2.48-201307261327.patch
1599 @@ -200,7 +200,7 @@ index dfa6fc6..be27ac3 100644
1600 +zconf.lex.c
1601 zoffset.h
1602 diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
1603 -index 2ba8272..187c634 100644
1604 +index 2ba8272..e2a9806 100644
1605 --- a/Documentation/kernel-parameters.txt
1606 +++ b/Documentation/kernel-parameters.txt
1607 @@ -859,6 +859,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
1608 @@ -213,7 +213,7 @@ index 2ba8272..187c634 100644
1609 hashdist= [KNL,NUMA] Large hashes allocated during boot
1610 are distributed across NUMA nodes. Defaults on
1611 for 64-bit NUMA, off otherwise.
1612 -@@ -1960,6 +1963,18 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
1613 +@@ -1960,6 +1963,22 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
1614 the specified number of seconds. This is to be used if
1615 your oopses keep scrolling off the screen.
1616
1617 @@ -222,6 +222,10 @@ index 2ba8272..187c634 100644
1618 + expand down segment used by UDEREF on X86-32 or the frequent
1619 + page table updates on X86-64.
1620 +
1621 ++ pax_sanitize_slab=
1622 ++ 0/1 to disable/enable slab object sanitization (enabled by
1623 ++ default).
1624 ++
1625 + pax_softmode= 0/1 to disable/enable PaX softmode on boot already.
1626 +
1627 + pax_extra_latent_entropy
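Given the documentation above, whether a running system booted with sanitization disabled can be checked from userspace by scanning /proc/cmdline; a minimal sketch (naive substring match, no handling of quoted arguments):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char buf[4096];
            FILE *f = fopen("/proc/cmdline", "r");

            if (!f || !fgets(buf, sizeof(buf), f)) {
                    if (f)
                            fclose(f);
                    return 1;
            }
            puts(strstr(buf, "pax_sanitize_slab=0")
                 ? "slab sanitization disabled at boot"
                 : "slab sanitization at its default (enabled)");
            fclose(f);
            return 0;
    }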
1628 @@ -6741,7 +6745,7 @@ index 42b282f..408977c 100644
1629 addr = vmm->vm_end;
1630 if (flags & MAP_SHARED)
1631 diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
1632 -index 5e4252b..05942dd 100644
1633 +index 5e4252b..379f84f 100644
1634 --- a/arch/sparc/kernel/sys_sparc_64.c
1635 +++ b/arch/sparc/kernel/sys_sparc_64.c
1636 @@ -119,12 +119,13 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
1637 @@ -6884,17 +6888,31 @@ index 5e4252b..05942dd 100644
1638
1639 bottomup:
1640 /*
1641 -@@ -365,6 +368,10 @@ static unsigned long mmap_rnd(void)
1642 +@@ -361,10 +364,14 @@ unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr, u
1643 + EXPORT_SYMBOL(get_fb_unmapped_area);
1644 +
1645 + /* Essentially the same as PowerPC. */
1646 +-static unsigned long mmap_rnd(void)
1647 ++static unsigned long mmap_rnd(struct mm_struct *mm)
1648 {
1649 unsigned long rnd = 0UL;
1650
1651 +#ifdef CONFIG_PAX_RANDMMAP
1652 -+ if (!(current->mm->pax_flags & MF_PAX_RANDMMAP))
1653 ++ if (!(mm->pax_flags & MF_PAX_RANDMMAP))
1654 +#endif
1655 +
1656 if (current->flags & PF_RANDOMIZE) {
1657 unsigned long val = get_random_int();
1658 if (test_thread_flag(TIF_32BIT))
1659 +@@ -377,7 +384,7 @@ static unsigned long mmap_rnd(void)
1660 +
1661 + void arch_pick_mmap_layout(struct mm_struct *mm)
1662 + {
1663 +- unsigned long random_factor = mmap_rnd();
1664 ++ unsigned long random_factor = mmap_rnd(mm);
1665 + unsigned long gap;
1666 +
1667 + /*
1668 @@ -390,6 +397,12 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
1669 gap == RLIM_INFINITY ||
1670 sysctl_legacy_va_layout) {
1671 @@ -49268,6 +49286,19 @@ index 200f63b..490b833 100644
1672
1673 /*
1674 * used by btrfsctl to scan devices when no FS is mounted
1675 +diff --git a/fs/buffer.c b/fs/buffer.c
1676 +index 19a4f0b..6638f5c 100644
1677 +--- a/fs/buffer.c
1678 ++++ b/fs/buffer.c
1679 +@@ -3314,7 +3314,7 @@ void __init buffer_init(void)
1680 + bh_cachep = kmem_cache_create("buffer_head",
1681 + sizeof(struct buffer_head), 0,
1682 + (SLAB_RECLAIM_ACCOUNT|SLAB_PANIC|
1683 +- SLAB_MEM_SPREAD),
1684 ++ SLAB_MEM_SPREAD|SLAB_NO_SANITIZE),
1685 + NULL);
1686 +
1687 + /*
1688 diff --git a/fs/cachefiles/bind.c b/fs/cachefiles/bind.c
1689 index 622f469..e8d2d55 100644
1690 --- a/fs/cachefiles/bind.c
1691 @@ -50072,7 +50103,7 @@ index 739fb59..5385976 100644
1692 static int __init init_cramfs_fs(void)
1693 {
1694 diff --git a/fs/dcache.c b/fs/dcache.c
1695 -index d322929..ff57049 100644
1696 +index d322929..9f4b816 100644
1697 --- a/fs/dcache.c
1698 +++ b/fs/dcache.c
1699 @@ -103,11 +103,11 @@ static unsigned int d_hash_shift __read_mostly;
1700 @@ -50091,12 +50122,13 @@ index d322929..ff57049 100644
1701 return dentry_hashtable + (hash & D_HASHMASK);
1702 }
1703
1704 -@@ -3057,7 +3057,7 @@ void __init vfs_caches_init(unsigned long mempages)
1705 +@@ -3057,7 +3057,8 @@ void __init vfs_caches_init(unsigned long mempages)
1706 mempages -= reserve;
1707
1708 names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
1709 - SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
1710 -+ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_USERCOPY, NULL);
1711 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_USERCOPY|
1712 ++ SLAB_NO_SANITIZE, NULL);
1713
1714 dcache_init();
1715 inode_init();
1716 @@ -50229,7 +50261,7 @@ index 451b9b8..12e5a03 100644
1717
1718 out_free_fd:
1719 diff --git a/fs/exec.c b/fs/exec.c
1720 -index 312e297..4df82cf 100644
1721 +index 312e297..25c839c 100644
1722 --- a/fs/exec.c
1723 +++ b/fs/exec.c
1724 @@ -55,12 +55,35 @@
1725 @@ -50526,7 +50558,7 @@ index 312e297..4df82cf 100644
1726 +
1727 +#ifdef CONFIG_X86
1728 + if (!ret) {
1729 -+ size = mmap_min_addr + ((mm->delta_mmap ^ mm->delta_stack) & (0xFFUL << PAGE_SHIFT));
1730 ++ size = PAGE_SIZE + mmap_min_addr + ((mm->delta_mmap ^ mm->delta_stack) & (0xFFUL << PAGE_SHIFT));
1731 + ret = 0 != mmap_region(NULL, 0, PAGE_ALIGN(size), flags, vm_flags, 0);
1732 + }
1733 +#endif
1734 @@ -57222,6 +57254,63 @@ index 7b21801..ee8fe9b 100644
1735
1736 generic_fillattr(inode, stat);
1737 return 0;
1738 +diff --git a/fs/super.c b/fs/super.c
1739 +index 2a698f6..056eff7 100644
1740 +--- a/fs/super.c
1741 ++++ b/fs/super.c
1742 +@@ -295,19 +295,19 @@ EXPORT_SYMBOL(deactivate_super);
1743 + * and want to turn it into a full-blown active reference. grab_super()
1744 + * is called with sb_lock held and drops it. Returns 1 in case of
1745 + * success, 0 if we had failed (superblock contents was already dead or
1746 +- * dying when grab_super() had been called).
1747 ++ * dying when grab_super() had been called). Note that this is only
1748 ++ * called for superblocks not in rundown mode (== ones still on ->fs_supers
1749 ++ * of their type), so increment of ->s_count is OK here.
1750 + */
1751 + static int grab_super(struct super_block *s) __releases(sb_lock)
1752 + {
1753 +- if (atomic_inc_not_zero(&s->s_active)) {
1754 +- spin_unlock(&sb_lock);
1755 +- return 1;
1756 +- }
1757 +- /* it's going away */
1758 + s->s_count++;
1759 + spin_unlock(&sb_lock);
1760 +- /* wait for it to die */
1761 + down_write(&s->s_umount);
1762 ++ if ((s->s_flags & MS_BORN) && atomic_inc_not_zero(&s->s_active)) {
1763 ++ put_super(s);
1764 ++ return 1;
1765 ++ }
1766 + up_write(&s->s_umount);
1767 + put_super(s);
1768 + return 0;
1769 +@@ -436,11 +436,6 @@ retry:
1770 + destroy_super(s);
1771 + s = NULL;
1772 + }
1773 +- down_write(&old->s_umount);
1774 +- if (unlikely(!(old->s_flags & MS_BORN))) {
1775 +- deactivate_locked_super(old);
1776 +- goto retry;
1777 +- }
1778 + return old;
1779 + }
1780 + }
1781 +@@ -650,10 +645,10 @@ restart:
1782 + if (list_empty(&sb->s_instances))
1783 + continue;
1784 + if (sb->s_bdev == bdev) {
1785 +- if (grab_super(sb)) /* drops sb_lock */
1786 +- return sb;
1787 +- else
1788 ++ if (!grab_super(sb))
1789 + goto restart;
1790 ++ up_write(&sb->s_umount);
1791 ++ return sb;
1792 + }
1793 + }
1794 + spin_unlock(&sb_lock);
1795 diff --git a/fs/sysfs/bin.c b/fs/sysfs/bin.c
1796 index a475983..3aab767 100644
1797 --- a/fs/sysfs/bin.c
1798 @@ -72846,10 +72935,10 @@ index efe50af..0d0b145 100644
1799
1800 static inline void nf_reset_trace(struct sk_buff *skb)
1801 diff --git a/include/linux/slab.h b/include/linux/slab.h
1802 -index 573c809..c643b82 100644
1803 +index 573c809..d82a501 100644
1804 --- a/include/linux/slab.h
1805 +++ b/include/linux/slab.h
1806 -@@ -11,12 +11,20 @@
1807 +@@ -11,14 +11,29 @@
1808
1809 #include <linux/gfp.h>
1810 #include <linux/types.h>
1811 @@ -72869,8 +72958,17 @@ index 573c809..c643b82 100644
1812 +
1813 #define SLAB_RED_ZONE 0x00000400UL /* DEBUG: Red zone objs in a cache */
1814 #define SLAB_POISON 0x00000800UL /* DEBUG: Poison objects */
1815 ++
1816 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1817 ++#define SLAB_NO_SANITIZE 0x00001000UL /* PaX: Do not sanitize objs on free */
1818 ++#else
1819 ++#define SLAB_NO_SANITIZE 0x00000000UL
1820 ++#endif
1821 ++
1822 #define SLAB_HWCACHE_ALIGN 0x00002000UL /* Align objs on cache lines */
1823 -@@ -87,10 +95,13 @@
1824 + #define SLAB_CACHE_DMA 0x00004000UL /* Use GFP_DMA memory */
1825 + #define SLAB_STORE_USER 0x00010000UL /* DEBUG: Store the last owner for bug hunting */
1826 +@@ -87,10 +102,22 @@
1827 * ZERO_SIZE_PTR can be passed to kfree though in the same way that NULL can.
1828 * Both make kfree a no-op.
1829 */
1830 @@ -72884,10 +72982,19 @@ index 573c809..c643b82 100644
1831 -#define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
1832 - (unsigned long)ZERO_SIZE_PTR)
1833 +#define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) - 1 >= (unsigned long)ZERO_SIZE_PTR - 1)
1834 ++
1835 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1836 ++#ifdef CONFIG_X86_64
1837 ++#define PAX_MEMORY_SANITIZE_VALUE '\xfe'
1838 ++#else
1839 ++#define PAX_MEMORY_SANITIZE_VALUE '\xff'
1840 ++#endif
1841 ++extern bool pax_sanitize_slab;
1842 ++#endif
1843
1844 /*
1845 * struct kmem_cache related prototypes
1846 -@@ -161,6 +172,8 @@ void * __must_check krealloc(const void *, size_t, gfp_t);
1847 +@@ -161,6 +188,8 @@ void * __must_check krealloc(const void *, size_t, gfp_t);
1848 void kfree(const void *);
1849 void kzfree(const void *);
1850 size_t ksize(const void *);
1851 @@ -72896,7 +73003,7 @@ index 573c809..c643b82 100644
1852
1853 /*
1854 * Allocator specific definitions. These are mainly used to establish optimized
1855 -@@ -242,7 +255,7 @@ size_t ksize(const void *);
1856 +@@ -242,7 +271,7 @@ size_t ksize(const void *);
1857 */
1858 static inline void *kcalloc(size_t n, size_t size, gfp_t flags)
1859 {
1860 @@ -72905,7 +73012,7 @@ index 573c809..c643b82 100644
1861 return NULL;
1862 return __kmalloc(n * size, flags | __GFP_ZERO);
1863 }
1864 -@@ -287,7 +300,7 @@ static inline void *kmem_cache_alloc_node(struct kmem_cache *cachep,
1865 +@@ -287,7 +316,7 @@ static inline void *kmem_cache_alloc_node(struct kmem_cache *cachep,
1866 */
1867 #if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB) || \
1868 (defined(CONFIG_SLAB) && defined(CONFIG_TRACING))
1869 @@ -72914,7 +73021,7 @@ index 573c809..c643b82 100644
1870 #define kmalloc_track_caller(size, flags) \
1871 __kmalloc_track_caller(size, flags, _RET_IP_)
1872 #else
1873 -@@ -306,7 +319,7 @@ extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long);
1874 +@@ -306,7 +335,7 @@ extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long);
1875 */
1876 #if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB) || \
1877 (defined(CONFIG_SLAB) && defined(CONFIG_TRACING))
1878 @@ -72924,10 +73031,10 @@ index 573c809..c643b82 100644
1879 __kmalloc_node_track_caller(size, flags, node, \
1880 _RET_IP_)
1881 diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
1882 -index d00e0ba..ce1f90b 100644
1883 +index d00e0ba..a443aff 100644
1884 --- a/include/linux/slab_def.h
1885 +++ b/include/linux/slab_def.h
1886 -@@ -68,10 +68,10 @@ struct kmem_cache {
1887 +@@ -68,10 +68,14 @@ struct kmem_cache {
1888 unsigned long node_allocs;
1889 unsigned long node_frees;
1890 unsigned long node_overflow;
1891 @@ -72939,10 +73046,14 @@ index d00e0ba..ce1f90b 100644
1892 + atomic_unchecked_t allocmiss;
1893 + atomic_unchecked_t freehit;
1894 + atomic_unchecked_t freemiss;
1895 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1896 ++ atomic_unchecked_t sanitized;
1897 ++ atomic_unchecked_t not_sanitized;
1898 ++#endif
1899
1900 /*
1901 * If debugging is enabled, then the allocator can add additional
1902 -@@ -105,11 +105,16 @@ struct cache_sizes {
1903 +@@ -105,11 +109,16 @@ struct cache_sizes {
1904 #ifdef CONFIG_ZONE_DMA
1905 struct kmem_cache *cs_dmacachep;
1906 #endif
1907 @@ -72960,7 +73071,7 @@ index d00e0ba..ce1f90b 100644
1908
1909 #ifdef CONFIG_TRACING
1910 extern void *kmem_cache_alloc_trace(size_t size,
1911 -@@ -152,6 +157,13 @@ found:
1912 +@@ -152,6 +161,13 @@ found:
1913 cachep = malloc_sizes[i].cs_dmacachep;
1914 else
1915 #endif
1916 @@ -72974,7 +73085,7 @@ index d00e0ba..ce1f90b 100644
1917 cachep = malloc_sizes[i].cs_cachep;
1918
1919 ret = kmem_cache_alloc_trace(size, cachep, flags);
1920 -@@ -162,7 +174,7 @@ found:
1921 +@@ -162,7 +178,7 @@ found:
1922 }
1923
1924 #ifdef CONFIG_NUMA
1925 @@ -72983,7 +73094,7 @@ index d00e0ba..ce1f90b 100644
1926 extern void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);
1927
1928 #ifdef CONFIG_TRACING
1929 -@@ -181,6 +193,7 @@ kmem_cache_alloc_node_trace(size_t size,
1930 +@@ -181,6 +197,7 @@ kmem_cache_alloc_node_trace(size_t size,
1931 }
1932 #endif
1933
1934 @@ -72991,7 +73102,7 @@ index d00e0ba..ce1f90b 100644
1935 static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
1936 {
1937 struct kmem_cache *cachep;
1938 -@@ -205,6 +218,13 @@ found:
1939 +@@ -205,6 +222,13 @@ found:
1940 cachep = malloc_sizes[i].cs_dmacachep;
1941 else
1942 #endif
1943 @@ -76248,7 +76359,7 @@ index 234e152..0ae0243 100644
1944 {
1945 struct signal_struct *sig = current->signal;
1946 diff --git a/kernel/fork.c b/kernel/fork.c
1947 -index ce0c182..64aeae3 100644
1948 +index ce0c182..16fd1e0 100644
1949 --- a/kernel/fork.c
1950 +++ b/kernel/fork.c
1951 @@ -270,19 +270,24 @@ static struct task_struct *dup_task_struct(struct task_struct *orig)
1952 @@ -76583,6 +76694,15 @@ index ce0c182..64aeae3 100644
1953 if (clone_flags & CLONE_VFORK) {
1954 p->vfork_done = &vfork;
1955 init_completion(&vfork);
1956 +@@ -1591,7 +1670,7 @@ void __init proc_caches_init(void)
1957 + mm_cachep = kmem_cache_create("mm_struct",
1958 + sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
1959 + SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
1960 +- vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC);
1961 ++ vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC | SLAB_NO_SANITIZE);
1962 + mmap_init();
1963 + nsproxy_cache_init();
1964 + }
1965 @@ -1630,7 +1709,7 @@ static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp)
1966 return 0;
1967
1968 @@ -83209,6 +83329,28 @@ index 4f4f53b..02d443a 100644
1969 if (!(flags & MCL_CURRENT) || (current->mm->total_vm <= lock_limit) ||
1970 capable(CAP_IPC_LOCK))
1971 ret = do_mlockall(flags);
1972 +diff --git a/mm/mm_init.c b/mm/mm_init.c
1973 +index 1ffd97a..240aa20 100644
1974 +--- a/mm/mm_init.c
1975 ++++ b/mm/mm_init.c
1976 +@@ -11,6 +11,17 @@
1977 + #include <linux/export.h>
1978 + #include "internal.h"
1979 +
1980 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
1981 ++bool pax_sanitize_slab __read_only = true;
1982 ++static int __init pax_sanitize_slab_setup(char *str)
1983 ++{
1984 ++ pax_sanitize_slab = !!simple_strtol(str, NULL, 0);
1985 ++ printk("%sabled PaX slab sanitization\n", pax_sanitize_slab ? "En" : "Dis");
1986 ++ return 1;
1987 ++}
1988 ++__setup("pax_sanitize_slab=", pax_sanitize_slab_setup);
1989 ++#endif
1990 ++
1991 + #ifdef CONFIG_DEBUG_MEMORY_INIT
1992 + int mminit_loglevel;
1993 +
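The __setup() handler above accepts anything simple_strtol() can parse, so "pax_sanitize_slab=0" disables the feature and any non-zero value enables it; note that unparseable input parses as 0 and therefore also disables. A userspace model of that parsing, assuming the same strtol semantics:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Mirrors simple_strtol(str, NULL, 0): base auto-detected,
     * non-numeric input yields 0, i.e. "disabled". */
    static bool parse_sanitize_arg(const char *str)
    {
            return strtol(str, NULL, 0) != 0;
    }

    int main(int argc, char **argv)
    {
            const char *arg = argc > 1 ? argv[1] : "1";

            printf("pax_sanitize_slab=%s -> %s\n", arg,
                   parse_sanitize_arg(arg) ? "enabled" : "disabled");
            return 0;
    }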
1994 diff --git a/mm/mmap.c b/mm/mmap.c
1995 index dff37a6..0e57094 100644
1996 --- a/mm/mmap.c
1997 @@ -85321,7 +85463,7 @@ index 70e814a..38e1f43 100644
1998 rc = process_vm_rw_single_vec(
1999 (unsigned long)rvec[i].iov_base, rvec[i].iov_len,
2000 diff --git a/mm/rmap.c b/mm/rmap.c
2001 -index 8685697..b490361 100644
2002 +index 8685697..e047d10 100644
2003 --- a/mm/rmap.c
2004 +++ b/mm/rmap.c
2005 @@ -153,6 +153,10 @@ int anon_vma_prepare(struct vm_area_struct *vma)
2006 @@ -85413,6 +85555,19 @@ index 8685697..b490361 100644
2007 {
2008 struct anon_vma_chain *avc;
2009 struct anon_vma *anon_vma;
2010 +@@ -381,8 +418,10 @@ static void anon_vma_ctor(void *data)
2011 + void __init anon_vma_init(void)
2012 + {
2013 + anon_vma_cachep = kmem_cache_create("anon_vma", sizeof(struct anon_vma),
2014 +- 0, SLAB_DESTROY_BY_RCU|SLAB_PANIC, anon_vma_ctor);
2015 +- anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
2016 ++ 0, SLAB_DESTROY_BY_RCU|SLAB_PANIC|SLAB_NO_SANITIZE,
2017 ++ anon_vma_ctor);
2018 ++ anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain,
2019 ++ SLAB_PANIC|SLAB_NO_SANITIZE);
2020 + }
2021 +
2022 + /*
2023 diff --git a/mm/shmem.c b/mm/shmem.c
2024 index a78acf0..a31df98 100644
2025 --- a/mm/shmem.c
2026 @@ -85474,7 +85629,7 @@ index a78acf0..a31df98 100644
2027 return -ENOMEM;
2028
2029 diff --git a/mm/slab.c b/mm/slab.c
2030 -index 4c3b671..020b6bb 100644
2031 +index 4c3b671..884702c 100644
2032 --- a/mm/slab.c
2033 +++ b/mm/slab.c
2034 @@ -151,7 +151,7 @@
2035 @@ -85482,19 +85637,21 @@ index 4c3b671..020b6bb 100644
2036 /* Legal flag mask for kmem_cache_create(). */
2037 #if DEBUG
2038 -# define CREATE_MASK (SLAB_RED_ZONE | \
2039 -+# define CREATE_MASK (SLAB_USERCOPY | SLAB_RED_ZONE | \
2040 ++# define CREATE_MASK (SLAB_USERCOPY | SLAB_NO_SANITIZE | SLAB_RED_ZONE | \
2041 SLAB_POISON | SLAB_HWCACHE_ALIGN | \
2042 SLAB_CACHE_DMA | \
2043 SLAB_STORE_USER | \
2044 -@@ -159,7 +159,7 @@
2045 +@@ -159,8 +159,8 @@
2046 SLAB_DESTROY_BY_RCU | SLAB_MEM_SPREAD | \
2047 SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE | SLAB_NOTRACK)
2048 #else
2049 -# define CREATE_MASK (SLAB_HWCACHE_ALIGN | \
2050 -+# define CREATE_MASK (SLAB_USERCOPY | SLAB_HWCACHE_ALIGN | \
2051 - SLAB_CACHE_DMA | \
2052 +- SLAB_CACHE_DMA | \
2053 ++# define CREATE_MASK (SLAB_USERCOPY | SLAB_NO_SANITIZE | \
2054 ++ SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
2055 SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | \
2056 SLAB_DESTROY_BY_RCU | SLAB_MEM_SPREAD | \
2057 + SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE | SLAB_NOTRACK)
2058 @@ -288,7 +288,7 @@ struct kmem_list3 {
2059 * Need this for bootstrapping a per node allocator.
2060 */
2061 @@ -85504,7 +85661,7 @@ index 4c3b671..020b6bb 100644
2062 #define CACHE_CACHE 0
2063 #define SIZE_AC MAX_NUMNODES
2064 #define SIZE_L3 (2 * MAX_NUMNODES)
2065 -@@ -389,10 +389,10 @@ static void kmem_list3_init(struct kmem_list3 *parent)
2066 +@@ -389,10 +389,12 @@ static void kmem_list3_init(struct kmem_list3 *parent)
2067 if ((x)->max_freeable < i) \
2068 (x)->max_freeable = i; \
2069 } while (0)
2070 @@ -85516,10 +85673,21 @@ index 4c3b671..020b6bb 100644
2071 +#define STATS_INC_ALLOCMISS(x) atomic_inc_unchecked(&(x)->allocmiss)
2072 +#define STATS_INC_FREEHIT(x) atomic_inc_unchecked(&(x)->freehit)
2073 +#define STATS_INC_FREEMISS(x) atomic_inc_unchecked(&(x)->freemiss)
2074 ++#define STATS_INC_SANITIZED(x) atomic_inc_unchecked(&(x)->sanitized)
2075 ++#define STATS_INC_NOT_SANITIZED(x) atomic_inc_unchecked(&(x)->not_sanitized)
2076 #else
2077 #define STATS_INC_ACTIVE(x) do { } while (0)
2078 #define STATS_DEC_ACTIVE(x) do { } while (0)
2079 -@@ -538,7 +538,7 @@ static inline void *index_to_obj(struct kmem_cache *cache, struct slab *slab,
2080 +@@ -409,6 +411,8 @@ static void kmem_list3_init(struct kmem_list3 *parent)
2081 + #define STATS_INC_ALLOCMISS(x) do { } while (0)
2082 + #define STATS_INC_FREEHIT(x) do { } while (0)
2083 + #define STATS_INC_FREEMISS(x) do { } while (0)
2084 ++#define STATS_INC_SANITIZED(x) do { } while (0)
2085 ++#define STATS_INC_NOT_SANITIZED(x) do { } while (0)
2086 + #endif
2087 +
2088 + #if DEBUG
2089 +@@ -538,7 +542,7 @@ static inline void *index_to_obj(struct kmem_cache *cache, struct slab *slab,
2090 * reciprocal_divide(offset, cache->reciprocal_buffer_size)
2091 */
2092 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
2093 @@ -85528,7 +85696,7 @@ index 4c3b671..020b6bb 100644
2094 {
2095 u32 offset = (obj - slab->s_mem);
2096 return reciprocal_divide(offset, cache->reciprocal_buffer_size);
2097 -@@ -559,12 +559,13 @@ EXPORT_SYMBOL(malloc_sizes);
2098 +@@ -559,12 +563,13 @@ EXPORT_SYMBOL(malloc_sizes);
2099 struct cache_names {
2100 char *name;
2101 char *name_dma;
2102 @@ -85544,7 +85712,7 @@ index 4c3b671..020b6bb 100644
2103 #undef CACHE
2104 };
2105
2106 -@@ -752,6 +753,12 @@ static inline struct kmem_cache *__find_general_cachep(size_t size,
2107 +@@ -752,6 +757,12 @@ static inline struct kmem_cache *__find_general_cachep(size_t size,
2108 if (unlikely(gfpflags & GFP_DMA))
2109 return csizep->cs_dmacachep;
2110 #endif
2111 @@ -85557,7 +85725,7 @@ index 4c3b671..020b6bb 100644
2112 return csizep->cs_cachep;
2113 }
2114
2115 -@@ -1370,7 +1377,7 @@ static int __cpuinit cpuup_callback(struct notifier_block *nfb,
2116 +@@ -1370,7 +1381,7 @@ static int __cpuinit cpuup_callback(struct notifier_block *nfb,
2117 return notifier_from_errno(err);
2118 }
2119
2120 @@ -85566,7 +85734,7 @@ index 4c3b671..020b6bb 100644
2121 &cpuup_callback, NULL, 0
2122 };
2123
2124 -@@ -1572,7 +1579,7 @@ void __init kmem_cache_init(void)
2125 +@@ -1572,7 +1583,7 @@ void __init kmem_cache_init(void)
2126 sizes[INDEX_AC].cs_cachep = kmem_cache_create(names[INDEX_AC].name,
2127 sizes[INDEX_AC].cs_size,
2128 ARCH_KMALLOC_MINALIGN,
2129 @@ -85575,7 +85743,7 @@ index 4c3b671..020b6bb 100644
2130 NULL);
2131
2132 if (INDEX_AC != INDEX_L3) {
2133 -@@ -1580,7 +1587,7 @@ void __init kmem_cache_init(void)
2134 +@@ -1580,7 +1591,7 @@ void __init kmem_cache_init(void)
2135 kmem_cache_create(names[INDEX_L3].name,
2136 sizes[INDEX_L3].cs_size,
2137 ARCH_KMALLOC_MINALIGN,
2138 @@ -85584,7 +85752,7 @@ index 4c3b671..020b6bb 100644
2139 NULL);
2140 }
2141
2142 -@@ -1598,7 +1605,7 @@ void __init kmem_cache_init(void)
2143 +@@ -1598,7 +1609,7 @@ void __init kmem_cache_init(void)
2144 sizes->cs_cachep = kmem_cache_create(names->name,
2145 sizes->cs_size,
2146 ARCH_KMALLOC_MINALIGN,
2147 @@ -85593,7 +85761,7 @@ index 4c3b671..020b6bb 100644
2148 NULL);
2149 }
2150 #ifdef CONFIG_ZONE_DMA
2151 -@@ -1610,6 +1617,16 @@ void __init kmem_cache_init(void)
2152 +@@ -1610,6 +1621,16 @@ void __init kmem_cache_init(void)
2153 SLAB_PANIC,
2154 NULL);
2155 #endif
2156 @@ -85610,7 +85778,29 @@ index 4c3b671..020b6bb 100644
2157 sizes++;
2158 names++;
2159 }
2160 -@@ -3879,6 +3896,7 @@ void kfree(const void *objp)
2161 +@@ -3662,6 +3683,21 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp,
2162 + struct array_cache *ac = cpu_cache_get(cachep);
2163 +
2164 + check_irq_off();
2165 ++
2166 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
2167 ++ if (pax_sanitize_slab) {
2168 ++ if (!(cachep->flags & (SLAB_POISON | SLAB_NO_SANITIZE))) {
2169 ++ memset(objp, PAX_MEMORY_SANITIZE_VALUE, obj_size(cachep));
2170 ++
2171 ++ if (cachep->ctor)
2172 ++ cachep->ctor(objp);
2173 ++
2174 ++ STATS_INC_SANITIZED(cachep);
2175 ++ } else
2176 ++ STATS_INC_NOT_SANITIZED(cachep);
2177 ++ }
2178 ++#endif
2179 ++
2180 + kmemleak_free_recursive(objp, cachep->flags);
2181 + objp = cache_free_debugcheck(cachep, objp, caller);
2182 +
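Note the policy difference on the SLAB side above: sanitization is skipped not only for SLAB_NO_SANITIZE caches but also when SLAB_POISON debugging is active, since debug poisoning already overwrites freed objects; either way one of the two per-cache counters is bumped. The test, pulled out as a standalone predicate (flag names from the patch, shown only as a sketch):

    /* Decision made in __cache_free() above, as a lone predicate. */
    static inline bool slab_wants_sanitize(unsigned long cache_flags)
    {
            return !(cache_flags & (SLAB_POISON | SLAB_NO_SANITIZE));
    }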
2183 +@@ -3879,6 +3915,7 @@ void kfree(const void *objp)
2184
2185 if (unlikely(ZERO_OR_NULL_PTR(objp)))
2186 return;
2187 @@ -85618,7 +85808,17 @@ index 4c3b671..020b6bb 100644
2188 local_irq_save(flags);
2189 kfree_debugcheck(objp);
2190 c = virt_to_cache(objp);
2191 -@@ -4325,10 +4343,10 @@ static int s_show(struct seq_file *m, void *p)
2192 +@@ -4216,6 +4253,9 @@ static void print_slabinfo_header(struct seq_file *m)
2193 + seq_puts(m, " : globalstat <listallocs> <maxobjs> <grown> <reaped> "
2194 + "<error> <maxfreeable> <nodeallocs> <remotefrees> <alienoverflow>");
2195 + seq_puts(m, " : cpustat <allochit> <allocmiss> <freehit> <freemiss>");
2196 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
2197 ++ seq_puts(m, " : pax <sanitized> <not_sanitized>");
2198 ++#endif
2199 + #endif
2200 + seq_putc(m, '\n');
2201 + }
2202 +@@ -4325,14 +4365,22 @@ static int s_show(struct seq_file *m, void *p)
2203 }
2204 /* cpu stats */
2205 {
2206 @@ -85633,7 +85833,19 @@ index 4c3b671..020b6bb 100644
2207
2208 seq_printf(m, " : cpustat %6lu %6lu %6lu %6lu",
2209 allochit, allocmiss, freehit, freemiss);
2210 -@@ -4587,13 +4605,71 @@ static int __init slab_proc_init(void)
2211 + }
2212 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
2213 ++ {
2214 ++ unsigned long sanitized = atomic_read_unchecked(&cachep->sanitized);
2215 ++ unsigned long not_sanitized = atomic_read_unchecked(&cachep->not_sanitized);
2216 ++
2217 ++ seq_printf(m, " : pax %6lu %6lu", sanitized, not_sanitized);
2218 ++ }
2219 ++#endif
2220 + #endif
2221 + seq_putc(m, '\n');
2222 + return 0;
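With CONFIG_DEBUG_SLAB statistics enabled, the two hunks above append a " : pax <sanitized> <not_sanitized>" pair to both the /proc/slabinfo header and each per-cache line. A sketch of a userspace reader for those columns (assumes exactly the format added above; lines without the marker, and the header whose placeholders fail to parse as numbers, are skipped):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[1024];
            FILE *f = fopen("/proc/slabinfo", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f)) {
                    char name[64];
                    unsigned long done, skipped;
                    const char *pax = strstr(line, " : pax ");

                    if (!pax || sscanf(line, "%63s", name) != 1)
                            continue;
                    if (sscanf(pax, " : pax %lu %lu", &done, &skipped) == 2)
                            printf("%-24s sanitized=%lu skipped=%lu\n",
                                   name, done, skipped);
            }
            fclose(f);
            return 0;
    }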
2223 +@@ -4587,13 +4635,71 @@ static int __init slab_proc_init(void)
2224 {
2225 proc_create("slabinfo",S_IWUSR|S_IRUSR,NULL,&proc_slabinfo_operations);
2226 #ifdef CONFIG_DEBUG_SLAB_LEAK
2227 @@ -85707,7 +85919,7 @@ index 4c3b671..020b6bb 100644
2228 * ksize - get the actual amount of memory allocated for a given object
2229 * @objp: Pointer to the object
2230 diff --git a/mm/slob.c b/mm/slob.c
2231 -index 8105be4..e1af823 100644
2232 +index 8105be4..8c1ce34 100644
2233 --- a/mm/slob.c
2234 +++ b/mm/slob.c
2235 @@ -29,7 +29,7 @@
2236 @@ -85804,7 +86016,19 @@ index 8105be4..e1af823 100644
2237 INIT_LIST_HEAD(&sp->list);
2238 set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
2239 set_slob_page_free(sp, slob_list);
2240 -@@ -476,10 +477,9 @@ out:
2241 +@@ -418,6 +419,11 @@ static void slob_free(void *block, int size)
2242 + return;
2243 + }
2244 +
2245 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
2246 ++ if (pax_sanitize_slab)
2247 ++ memset(block, PAX_MEMORY_SANITIZE_VALUE, size);
2248 ++#endif
2249 ++
2250 + if (!slob_page_free(sp)) {
2251 + /* This slob page is about to become partially free. Easy! */
2252 + sp->units = units;
2253 +@@ -476,10 +482,9 @@ out:
2254 * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
2255 */
2256
2257 @@ -85817,7 +86041,7 @@ index 8105be4..e1af823 100644
2258 void *ret;
2259
2260 gfp &= gfp_allowed_mask;
2261 -@@ -494,7 +494,10 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
2262 +@@ -494,7 +499,10 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
2263
2264 if (!m)
2265 return NULL;
2266 @@ -85829,7 +86053,7 @@ index 8105be4..e1af823 100644
2267 ret = (void *)m + align;
2268
2269 trace_kmalloc_node(_RET_IP_, ret,
2270 -@@ -506,16 +509,25 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
2271 +@@ -506,16 +514,25 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
2272 gfp |= __GFP_COMP;
2273 ret = slob_new_pages(gfp, order, node);
2274 if (ret) {
2275 @@ -85859,7 +86083,7 @@ index 8105be4..e1af823 100644
2276 return ret;
2277 }
2278 EXPORT_SYMBOL(__kmalloc_node);
2279 -@@ -530,16 +542,92 @@ void kfree(const void *block)
2280 +@@ -530,16 +547,92 @@ void kfree(const void *block)
2281 return;
2282 kmemleak_free(block);
2283
2284 @@ -85955,7 +86179,7 @@ index 8105be4..e1af823 100644
2285 /* can't use ksize for kmem_cache_alloc memory, only kmalloc */
2286 size_t ksize(const void *block)
2287 {
2288 -@@ -552,10 +640,10 @@ size_t ksize(const void *block)
2289 +@@ -552,10 +645,10 @@ size_t ksize(const void *block)
2290 sp = slob_page(block);
2291 if (is_slob_page(sp)) {
2292 int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
2293 @@ -85969,7 +86193,7 @@ index 8105be4..e1af823 100644
2294 }
2295 EXPORT_SYMBOL(ksize);
2296
2297 -@@ -571,8 +659,13 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
2298 +@@ -571,8 +664,13 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
2299 {
2300 struct kmem_cache *c;
2301
2302 @@ -85983,7 +86207,7 @@ index 8105be4..e1af823 100644
2303
2304 if (c) {
2305 c->name = name;
2306 -@@ -614,17 +707,25 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
2307 +@@ -614,17 +712,25 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
2308
2309 lockdep_trace_alloc(flags);
2310
2311 @@ -86009,7 +86233,7 @@ index 8105be4..e1af823 100644
2312
2313 if (c->ctor)
2314 c->ctor(b);
2315 -@@ -636,10 +737,16 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
2316 +@@ -636,10 +742,16 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
2317
2318 static void __kmem_cache_free(void *b, int size)
2319 {
2320 @@ -86028,7 +86252,7 @@ index 8105be4..e1af823 100644
2321 }
2322
2323 static void kmem_rcu_free(struct rcu_head *head)
2324 -@@ -652,17 +759,31 @@ static void kmem_rcu_free(struct rcu_head *head)
2325 +@@ -652,17 +764,31 @@ static void kmem_rcu_free(struct rcu_head *head)
2326
2327 void kmem_cache_free(struct kmem_cache *c, void *b)
2328 {
2329 @@ -86064,7 +86288,7 @@ index 8105be4..e1af823 100644
2330 EXPORT_SYMBOL(kmem_cache_free);
2331
2332 diff --git a/mm/slub.c b/mm/slub.c
2333 -index 5710788..12ea6c9 100644
2334 +index 5710788..3d095c0 100644
2335 --- a/mm/slub.c
2336 +++ b/mm/slub.c
2337 @@ -186,7 +186,7 @@ static enum {
2338 @@ -86094,7 +86318,22 @@ index 5710788..12ea6c9 100644
2339 s, (void *)t->addr, jiffies - t->when, t->cpu, t->pid);
2340 #ifdef CONFIG_STACKTRACE
2341 {
2342 -@@ -2572,6 +2572,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
2343 +@@ -2537,6 +2537,14 @@ static __always_inline void slab_free(struct kmem_cache *s,
2344 +
2345 + slab_free_hook(s, x);
2346 +
2347 ++#ifdef CONFIG_PAX_MEMORY_SANITIZE
2348 ++ if (pax_sanitize_slab && !(s->flags & SLAB_NO_SANITIZE)) {
2349 ++ memset(x, PAX_MEMORY_SANITIZE_VALUE, s->objsize);
2350 ++ if (s->ctor)
2351 ++ s->ctor(x);
2352 ++ }
2353 ++#endif
2354 ++
2355 + redo:
2356 + * Determine the current cpu's per cpu slab.
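This is the same slab_free() hook as in the 3.10.3 half of the commit, backported: it wipes s->objsize here where the 3.10 hunk used s->object_size, that struct kmem_cache field having been renamed upstream (around v3.6, if memory serves). Code carrying this hook across both trees could paper over the rename with a small compat shim (a hypothetical helper, shown only to flag the difference):

    #include <linux/version.h>

    /* Hypothetical accessor for the renamed kmem_cache size field. */
    #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 6, 0)
    # define slab_obj_size(s)       ((s)->objsize)
    #else
    # define slab_obj_size(s)       ((s)->object_size)
    #endif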
2357 + * Determine the currently cpus per cpu slab.
2358 +@@ -2572,6 +2580,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
2359
2360 page = virt_to_head_page(x);
2361
2362 @@ -86103,7 +86342,7 @@ index 5710788..12ea6c9 100644
2363 slab_free(s, page, x, _RET_IP_);
2364
2365 trace_kmem_cache_free(_RET_IP_, x);
2366 -@@ -2605,7 +2607,7 @@ static int slub_min_objects;
2367 +@@ -2605,7 +2615,7 @@ static int slub_min_objects;
2368 * Merge control. If this is set then no merging of slab caches will occur.
2369 * (Could be removed. This was introduced to pacify the merge skeptics.)
2370 */
2371 @@ -86112,7 +86351,7 @@ index 5710788..12ea6c9 100644
2372
2373 /*
2374 * Calculate the order of allocation given a slab object size.
2375 -@@ -3055,7 +3057,7 @@ static int kmem_cache_open(struct kmem_cache *s,
2376 +@@ -3055,7 +3065,7 @@ static int kmem_cache_open(struct kmem_cache *s,
2377 else
2378 s->cpu_partial = 30;
2379
2380 @@ -86121,7 +86360,7 @@ index 5710788..12ea6c9 100644
2381 #ifdef CONFIG_NUMA
2382 s->remote_node_defrag_ratio = 1000;
2383 #endif
2384 -@@ -3159,8 +3161,7 @@ static inline int kmem_cache_close(struct kmem_cache *s)
2385 +@@ -3159,8 +3169,7 @@ static inline int kmem_cache_close(struct kmem_cache *s)
2386 void kmem_cache_destroy(struct kmem_cache *s)
2387 {
2388 down_write(&slub_lock);
2389 @@ -86131,7 +86370,7 @@ index 5710788..12ea6c9 100644
2390 list_del(&s->list);
2391 up_write(&slub_lock);
2392 if (kmem_cache_close(s)) {
2393 -@@ -3189,6 +3190,10 @@ static struct kmem_cache *kmem_cache;
2394 +@@ -3189,6 +3198,10 @@ static struct kmem_cache *kmem_cache;
2395 static struct kmem_cache *kmalloc_dma_caches[SLUB_PAGE_SHIFT];
2396 #endif
2397
2398 @@ -86142,7 +86381,7 @@ index 5710788..12ea6c9 100644
2399 static int __init setup_slub_min_order(char *str)
2400 {
2401 get_option(&str, &slub_min_order);
2402 -@@ -3303,6 +3308,13 @@ static struct kmem_cache *get_slab(size_t size, gfp_t flags)
2403 +@@ -3303,6 +3316,13 @@ static struct kmem_cache *get_slab(size_t size, gfp_t flags)
2404 return kmalloc_dma_caches[index];
2405
2406 #endif
2407 @@ -86156,7 +86395,7 @@ index 5710788..12ea6c9 100644
2408 return kmalloc_caches[index];
2409 }
2410
2411 -@@ -3371,6 +3383,59 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
2412 +@@ -3371,6 +3391,59 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
2413 EXPORT_SYMBOL(__kmalloc_node);
2414 #endif
2415
2416 @@ -86216,7 +86455,7 @@ index 5710788..12ea6c9 100644
2417 size_t ksize(const void *object)
2418 {
2419 struct page *page;
2420 -@@ -3435,6 +3500,7 @@ void kfree(const void *x)
2421 +@@ -3435,6 +3508,7 @@ void kfree(const void *x)
2422 if (unlikely(ZERO_OR_NULL_PTR(x)))
2423 return;
2424
2425 @@ -86224,7 +86463,7 @@ index 5710788..12ea6c9 100644
2426 page = virt_to_head_page(x);
2427 if (unlikely(!PageSlab(page))) {
2428 BUG_ON(!PageCompound(page));
2429 -@@ -3645,7 +3711,7 @@ static void __init kmem_cache_bootstrap_fixup(struct kmem_cache *s)
2430 +@@ -3645,7 +3719,7 @@ static void __init kmem_cache_bootstrap_fixup(struct kmem_cache *s)
2431 int node;
2432
2433 list_add(&s->list, &slab_caches);
2434 @@ -86233,7 +86472,7 @@ index 5710788..12ea6c9 100644
2435
2436 for_each_node_state(node, N_NORMAL_MEMORY) {
2437 struct kmem_cache_node *n = get_node(s, node);
2438 -@@ -3762,17 +3828,17 @@ void __init kmem_cache_init(void)
2439 +@@ -3762,17 +3836,17 @@ void __init kmem_cache_init(void)
2440
2441 /* Caches that are not of the two-to-the-power-of size */
2442 if (KMALLOC_MIN_SIZE <= 32) {
2443 @@ -86254,7 +86493,7 @@ index 5710788..12ea6c9 100644
2444 caches++;
2445 }
2446
2447 -@@ -3814,6 +3880,22 @@ void __init kmem_cache_init(void)
2448 +@@ -3814,6 +3888,22 @@ void __init kmem_cache_init(void)
2449 }
2450 }
2451 #endif
2452 @@ -86277,7 +86516,7 @@ index 5710788..12ea6c9 100644
2453 printk(KERN_INFO
2454 "SLUB: Genslabs=%d, HWalign=%d, Order=%d-%d, MinObjects=%d,"
2455 " CPUs=%d, Nodes=%d\n",
2456 -@@ -3840,7 +3922,7 @@ static int slab_unmergeable(struct kmem_cache *s)
2457 +@@ -3840,7 +3930,7 @@ static int slab_unmergeable(struct kmem_cache *s)
2458 /*
2459 * We may have set a slab to be unmergeable during bootstrap.
2460 */
2461 @@ -86286,7 +86525,7 @@ index 5710788..12ea6c9 100644
2462 return 1;
2463
2464 return 0;
2465 -@@ -3899,7 +3981,7 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
2466 +@@ -3899,7 +3989,7 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
2467 down_write(&slub_lock);
2468 s = find_mergeable(size, align, flags, name, ctor);
2469 if (s) {
2470 @@ -86295,7 +86534,7 @@ index 5710788..12ea6c9 100644
2471 /*
2472 * Adjust the object sizes so that we clear
2473 * the complete object on kzalloc.
2474 -@@ -3908,7 +3990,7 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
2475 +@@ -3908,7 +3998,7 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
2476 s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
2477
2478 if (sysfs_slab_alias(s, name)) {
2479 @@ -86304,7 +86543,7 @@ index 5710788..12ea6c9 100644
2480 goto err;
2481 }
2482 up_write(&slub_lock);
2483 -@@ -3979,7 +4061,7 @@ static int __cpuinit slab_cpuup_callback(struct notifier_block *nfb,
2484 +@@ -3979,7 +4069,7 @@ static int __cpuinit slab_cpuup_callback(struct notifier_block *nfb,
2485 return NOTIFY_OK;
2486 }
2487
2488 @@ -86313,7 +86552,7 @@ index 5710788..12ea6c9 100644
2489 .notifier_call = slab_cpuup_callback
2490 };
2491
2492 -@@ -4037,7 +4119,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
2493 +@@ -4037,7 +4127,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
2494 }
2495 #endif
2496
2497 @@ -86322,7 +86561,7 @@ index 5710788..12ea6c9 100644
2498 static int count_inuse(struct page *page)
2499 {
2500 return page->inuse;
2501 -@@ -4424,12 +4506,12 @@ static void resiliency_test(void)
2502 +@@ -4424,12 +4514,12 @@ static void resiliency_test(void)
2503 validate_slab_cache(kmalloc_caches[9]);
2504 }
2505 #else
2506 @@ -86337,7 +86576,7 @@ index 5710788..12ea6c9 100644
2507 enum slab_stat_type {
2508 SL_ALL, /* All slabs */
2509 SL_PARTIAL, /* Only partially allocated slabs */
2510 -@@ -4670,7 +4752,7 @@ SLAB_ATTR_RO(ctor);
2511 +@@ -4670,7 +4760,7 @@ SLAB_ATTR_RO(ctor);
2512
2513 static ssize_t aliases_show(struct kmem_cache *s, char *buf)
2514 {
2515 @@ -86346,7 +86585,7 @@ index 5710788..12ea6c9 100644
2516 }
2517 SLAB_ATTR_RO(aliases);
2518
2519 -@@ -5237,6 +5319,7 @@ static char *create_unique_id(struct kmem_cache *s)
2520 +@@ -5237,6 +5327,7 @@ static char *create_unique_id(struct kmem_cache *s)
2521 return name;
2522 }
2523
2524 @@ -86354,7 +86593,7 @@ index 5710788..12ea6c9 100644
2525 static int sysfs_slab_add(struct kmem_cache *s)
2526 {
2527 int err;
2528 -@@ -5265,7 +5348,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
2529 +@@ -5265,7 +5356,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
2530 }
2531
2532 s->kobj.kset = slab_kset;
2533 @@ -86363,7 +86602,7 @@ index 5710788..12ea6c9 100644
2534 if (err) {
2535 kobject_put(&s->kobj);
2536 return err;
2537 -@@ -5299,6 +5382,7 @@ static void sysfs_slab_remove(struct kmem_cache *s)
2538 +@@ -5299,6 +5390,7 @@ static void sysfs_slab_remove(struct kmem_cache *s)
2539 kobject_del(&s->kobj);
2540 kobject_put(&s->kobj);
2541 }
2542 @@ -86371,7 +86610,7 @@ index 5710788..12ea6c9 100644
2543
2544 /*
2545 * Need to buffer aliases during bootup until sysfs becomes
2546 -@@ -5312,6 +5396,7 @@ struct saved_alias {
2547 +@@ -5312,6 +5404,7 @@ struct saved_alias {
2548
2549 static struct saved_alias *alias_list;
2550
2551 @@ -86379,7 +86618,7 @@ index 5710788..12ea6c9 100644
2552 static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
2553 {
2554 struct saved_alias *al;
2555 -@@ -5334,6 +5419,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
2556 +@@ -5334,6 +5427,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
2557 alias_list = al;
2558 return 0;
2559 }
2560 @@ -88115,6 +88354,28 @@ index 925991a..209a505 100644
2561
2562 #ifdef CONFIG_INET
2563 static u32 seq_scale(u32 seq)
2564 +diff --git a/net/core/skbuff.c b/net/core/skbuff.c
2565 +index af9c3c6..76914a3 100644
2566 +--- a/net/core/skbuff.c
2567 ++++ b/net/core/skbuff.c
2568 +@@ -2902,13 +2902,15 @@ void __init skb_init(void)
2569 + skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
2570 + sizeof(struct sk_buff),
2571 + 0,
2572 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC,
2573 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|
2574 ++ SLAB_NO_SANITIZE,
2575 + NULL);
2576 + skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
2577 + (2*sizeof(struct sk_buff)) +
2578 + sizeof(atomic_t),
2579 + 0,
2580 +- SLAB_HWCACHE_ALIGN|SLAB_PANIC,
2581 ++ SLAB_HWCACHE_ALIGN|SLAB_PANIC|
2582 ++ SLAB_NO_SANITIZE,
2583 + NULL);
2584 + }
2585 +
2586 diff --git a/net/core/sock.c b/net/core/sock.c
2587 index 8a2c2dd..3ba3cf1 100644
2588 --- a/net/core/sock.c
2589 @@ -94052,10 +94313,10 @@ index 38f6617..e70b72b 100755
2590
2591 exuberant()
2592 diff --git a/security/Kconfig b/security/Kconfig
2593 -index 51bd5a0..999fbad 100644
2594 +index 51bd5a0..2ae77cf 100644
2595 --- a/security/Kconfig
2596 +++ b/security/Kconfig
2597 -@@ -4,6 +4,945 @@
2598 +@@ -4,6 +4,956 @@
2599
2600 menu "Security options"
2601
2602 @@ -94792,21 +95053,32 @@ index 51bd5a0..999fbad 100644
2603 + default y if (GRKERNSEC_CONFIG_AUTO && GRKERNSEC_CONFIG_PRIORITY_SECURITY)
2604 + depends on !HIBERNATION
2605 + help
2606 -+ By saying Y here the kernel will erase memory pages as soon as they
2607 -+ are freed. This in turn reduces the lifetime of data stored in the
2608 -+ pages, making it less likely that sensitive information such as
2609 -+ passwords, cryptographic secrets, etc stay in memory for too long.
2610 ++ By saying Y here the kernel will erase memory pages and slab objects
2611 ++ as soon as they are freed. This in turn reduces the lifetime of data
2612 ++ stored in them, making it less likely that sensitive information such
2613 ++ as passwords, cryptographic secrets, etc stay in memory for too long.
2614 +
2615 + This is especially useful for programs whose runtime is short; long
2616 + lived processes and the kernel itself benefit from this as long as
2617 -+ they operate on whole memory pages and ensure timely freeing of pages
2618 -+ that may hold sensitive information.
2619 ++ they ensure timely freeing of memory that may hold sensitive
2620 ++ information.
2621 ++
2622 ++ A nice side effect of the sanitization of slab objects is the
2623 ++ reduction of possible info leaks caused by padding bytes within
2624 ++ leaky structures. Use-after-free bugs for structures containing
2625 ++ pointers can also be detected, since dereferencing the sanitized
2626 ++ pointer will generate an access violation.
2627 +
2628 + The tradeoff is performance impact: on a single CPU system, kernel
2629 + compilation sees a 3% slowdown; other systems and workloads may vary,
2630 + and you are advised to test this feature on your expected workload
2631 + before deploying it.
2632 +
2633 ++ To reduce the performance penalty, slab object sanitization can be
2634 ++ disabled with the kernel command line parameter "pax_sanitize_slab=0";
2635 ++ pages are then still sanitized, albeit the effectiveness of this
2636 ++ feature is reduced at the same time.
2637 ++
2638 + Note that this feature does not protect data stored in live pages,
2639 + e.g., process memory swapped to disk may stay there for a long time.
2640 +
2641 @@ -95001,7 +95273,7 @@ index 51bd5a0..999fbad 100644
2642 config KEYS
2643 bool "Enable access key retention support"
2644 help
2645 -@@ -169,7 +1108,7 @@ config INTEL_TXT
2646 +@@ -169,7 +1119,7 @@ config INTEL_TXT
2647 config LSM_MMAP_MIN_ADDR
2648 int "Low address space for LSM to protect from user allocation"
2649 depends on SECURITY && SECURITY_SELINUX